--- abstract: | We consider a Kirchhoff-type diffusion problem driven by the magnetic fractional Laplace operator. The main result in this paper establishes that infinite time blow-up cannot occur for the problem. The proof is based on the potential well method, in relation to the energy and Nehari functionals. address: - School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China - Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão, 1010, São Paulo - SP, CEP 05508-090, Brazil - Faculty of Applied Mathematics, AGH University of Science and Technology, 30-059 Kraków, Poland - Brno University of Technology, Faculty of Electrical Engineering and Communication, Technická 3058/10, Brno 61600, Czech Republic - Department of Mathematics, University of Craiova, Street A.I. Cuza 13, 200585 Craiova, Romania - Simion Stoilow Institute of Mathematics of the Romanian Academy, Calea Griviţei 21, 010702 Bucharest, Romania author: - Jiabin Zuo - Juliana Honda Lopes - Vicentiu D. Rădulescu title: Long-time behavior for the Kirchhoff diffusion problem with magnetic fractional Laplace operator --- Diffusion problem, Kirchhoff function, magnetic fractional Laplacian, Nehari functional, potential function. 35R11, 35J20, 35J60 # Introduction Let $\Omega\subset\mathbb{R}^n$ ($n>2s$) be a bounded domain with smooth boundary. In this paper we study the following Kirchhoff-type diffusion problem $$\label{problem1} \begin{cases} u_t+M\left(\|u\|_{X_{0,A}}^2\right)(-\Delta)_A^su= f(|u|)u,&\mbox{ in }\Omega\times(0,T),\\ u(x,t)=0,&\mbox{ in }(\mathbb{R}^n\setminus\Omega)\times(0,T),\\ u(x,0)=u_0(x),&\mbox{ in }\Omega. 
\end{cases}$$ Given $s \in (0,1)$ and $A\in L^{\infty}_{loc}(\mathbb{R}^n)$, we define the magnetic fractional Laplace operator $(-\Delta)_A^s$ by $$\label{operator} (-\Delta)_A^su(x,t)=2\lim_{\varepsilon\rightarrow0^+} \int_{\mathbb{R}^n\setminus B(x,\varepsilon)}\frac{u(x,t)-e^{i(x-y) \cdot A(\frac{x+y}{2})}u(y,t)}{|x-y|^{n+2s}}\ dy,$$ for $x\in\mathbb{R}^n$ and $u\in C_0^{\infty}(\mathbb{R}^n,\mathbb{C})$. This differential operator is weighted by a Kirchhoff-type function $M:[0,\infty)\rightarrow[0,\infty)$ (see [@Kirchhoff1883]), satisfying $(M1)$-$(M2)$ below. When $A \equiv 0$ in [\[operator\]](#operator){reference-type="eqref" reference="operator"}, we recover the usual fractional Laplacian, denoted by $(-\Delta)^s$. This operator has been studied in the context of problems in quantum mechanics and of the motion of chains or arrays of particles connected by elastic springs, as well as in the context of unusual diffusion processes in turbulent fluids and of mass transport in fractured media. We refer to [@Applebaum] (Lévy processes), [@Caffarelli] (nonlocal diffusions, drifts and games), and [@Avron; @ji1; @ji2; @Li; @wen; @Radulescu] for other classes of nonlocal operators. In all the aforementioned works, the authors deal with Schrödinger operators with magnetic fields. For instance, the authors of [@Li] establish the existence of nontrivial solutions to a parametric fractional Schrödinger equation in the case of critical or supercritical nonlinearity. Next, the operator $(-\Delta)_A^s$ (see [\[operator\]](#operator){reference-type="eqref" reference="operator"}) was introduced in [@Avenia] as a fractional counterpart of the magnetic Laplacian $(\nabla-iA)^2$, where $A:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is an $L_{loc}^{\infty}$ vector potential. Zuo & Lopes [@Zuo] established the existence of weak solutions to problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"}. 
Their strategy is based on the potential well method, by which they obtain both global-in-time solutions and solutions that blow up in finite time. Here, we show that the global-in-time solutions to [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} cannot blow up in infinite time. We also mention [@Zhou] (non-local parabolic equation in a bounded convex domain), [@Avenia] (for the equation $(-\Delta)_A^su+u=|u|^{p-2}u$ posed in $\mathbb{R}^3$), [@LWL] (fractional Choquard equation), and [@Toscano] (system of Kirchhoff type equations). # Mathematical background and hypotheses {#Sec2} The right framework for the analysis of equation [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} is the function space $X_{0,A}$ (hence $H_A^s(\Omega)$) defined as follows. For an open and bounded set $\Omega\subset\mathbb{R}^n$ ($n>2s$), let $|\Omega|$ be the measure of the set $\Omega$. By $L^p(\Omega,\mathbb{C})$ we mean the Lebesgue space of complex valued functions with norm $\|\cdot\|_{L^p(\Omega)}$ and inner product $\langle\cdot,\cdot\rangle$. For $p=2$, $s\in(0,1)$ and $A\in L^{\infty}_{loc}(\mathbb{R}^n)$, we consider the magnetic Gagliardo semi-norm defined by $$[u]_{H^s_A(\Omega)}^2:=\int\int_{\Omega\times\Omega}\frac{|u(x,t)-e^{i(x-y)\cdot A\left(\frac{x+y}{2}\right)}u(y,t)|^2}{|x-y|^{n+2s}}\ dx\ dy.$$ Hence we consider the space $H_A^s(\Omega)$ of functions $u\in L^2(\Omega,\mathbb{C})$ with $[u]_{H^s_A(\Omega)}<\infty$ and furnished with the norm $\|u\|_{H^s_A(\Omega)}:=(\|u\|^2_{L^2(\Omega)}+[u]^2_{H^s_A(\Omega)})^{\frac{1}{2}}.$ Referring to [@Fiscella; @Vecchi], we define $X_{0,A}:=\{u\in H_A^s(\mathbb{R}^n): u=0 \mbox{ a.e. 
in }\mathbb{R}^n\setminus\Omega\},$ with the real scalar product (see [@Avenia]) given as $$\label{product} \begin{aligned} \langle u,v\rangle_{X_{0,A}}:=\mathcal{R}\int\int_{\mathbb{R}^{2n}}\frac{\left(u(x,t)-e^{i(x-y)\cdot A\left(\frac{x+y}{2}\right)}u(y,t)\right)\overline{\left(v(x,t)-e^{i(x-y)\cdot A\left(\frac{x+y}{2}\right)}v(y,t)\right)}}{|x-y|^{n+2s}}\ dx \ dy, \end{aligned}$$ where, for every $z\in\mathbb{C}$, by $\mathcal{R}z$ we mean the real part of $z$ and by $\overline{z}$ its complex conjugate. This scalar product induces the following norm $$\|u\|_{X_{0,A}}:=\left(\int\int_{\mathbb{R}^{2n}}\frac{|u(x,t)-e^{i(x-y)\cdot A\left(\frac{x+y}{2}\right)}u(y,t)|^2}{|x-y|^{n+2s}}\ dx \ dy\right)^{\frac{1}{2}}.$$ We will use $\xrightarrow{w}$ and $\to$ to denote weak and strong convergence, respectively. Our hypotheses on problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} are the following: - (F) $f\in C^1([0,\infty))$, and we can find $C>0$ and $\gamma\geq p$, for $p\in(2,2_s^{\ast})$ with $2_s^{\ast}=\frac{2n}{n-2s}$, such that $$\label{f} \begin{array}{ll} |F(u)|\leq Cu^p,\quad f(u)u^2\leq pCu^p, \mbox{ for } u\geq0,\\ 0<\gamma F(u)\leq f(u)u^2,\quad u^2(uf'(u)-(p-2)f(u))\geq 0 \mbox{ for }u>0, \end{array}$$ where $F(u):=\int_0^{u}f(\tau)\tau\ d\tau$. Further, the Kirchhoff function $M:[0,\infty)\rightarrow[0,\infty)$ satisfies the following two conditions, denoted (M1) and (M2): 1. it is a continuous function and there exist constants $m_0>0$ and $\theta\in(1,\frac{2_s^{\ast}}{2})$ such that $M(u)\geq m_0u^{\theta-1}$ for all $u\in[0,\infty)$; 2. there exists a constant $\mu\in(1,\frac{2_s^{\ast}}{2})$ such that $\mu\mathscr{M}(u)\geq M(u)u$ for all $u\in[0,\infty)$, where $\mathscr{M}(u):=\int_0^uM(\tau)\ d\tau.$ Here, $\theta$ and $\mu$ satisfy the condition: (P) $2\max\{\theta,\mu\}<p<2_s^{\ast}$. 
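For concreteness, the model data $f(u)=u^{p-2}$ and $M(u)=m_0u^{\theta-1}$ satisfy hypotheses (F), (M1), (M2) and (P) for suitable parameters. The following sketch checks these inequalities numerically on a grid; all parameter values ($n$, $s$, $p$, $\theta$, $\mu$, $m_0$, $C$, $\gamma$) are illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper):
# n = 3, s = 0.75, so 2_s^* = 2n/(n - 2s) = 4.
n, s = 3, 0.75
two_s_star = 2 * n / (n - 2 * s)
p = 3.0                        # p in (2, 2_s^*)
theta = mu = 1.4               # theta, mu in (1, 2_s^*/2); 2*max(theta, mu) < p gives (P)
m0, C, gamma = 1.0, 1.0, 3.0   # gamma >= p

f = lambda u: u ** (p - 2)              # model nonlinearity
fp = lambda u: (p - 2) * u ** (p - 3)   # f'
F = lambda u: u ** p / p                # F(u) = int_0^u f(t) t dt
M = lambda u: m0 * u ** (theta - 1)     # model Kirchhoff function ((M1) with equality)
MM = lambda u: m0 * u ** theta / theta  # script-M(u) = int_0^u M(t) dt

assert 2 * max(theta, mu) < p < two_s_star           # condition (P)

u = np.linspace(0.01, 10.0, 1000)
assert np.all(np.abs(F(u)) <= C * u ** p + 1e-12)    # |F(u)| <= C u^p
assert np.all(f(u) * u ** 2 <= p * C * u ** p)       # f(u) u^2 <= p C u^p
assert np.all((0 < gamma * F(u)) & (gamma * F(u) <= f(u) * u ** 2 + 1e-12))
assert np.all(u ** 2 * (u * fp(u) - (p - 2) * f(u)) >= -1e-9)
assert np.all(mu * MM(u) >= M(u) * u - 1e-12)        # condition (M2)
print("model data satisfy (F), (M1), (M2) and (P)")
```

For this model both inequalities in (M2) and the last condition in (F) in fact hold with equality, which shows the hypotheses are sharp for power-type data.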
We consider the $C^1$-functional related to problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} and defined by $$\label{J} J(u):=\frac{1}{2}\mathscr{M}(\|u\|_{X_{0,A}}^2)-\int_{\Omega}F(|u|)\ dx.$$ We have $\langle J'(u),\phi\rangle= M(\|u\|_{X_{0,A}}^2)\langle u,\phi\rangle_{X_{0,A}}-\mathcal{R}\int_{\Omega}f(|u|)u\overline{\phi}\ dx,$ for any $\phi\in X_{0,A}$, and we introduce the Nehari functional for [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} given by $I(u):=\langle J'(u),u\rangle.$ Hence, we can consider the non-negative value $d$ (i.e., the mountain pass level) given as $d= \inf_{u \in X_{0,A}\setminus \{0\} \,:\, I(u)=0} J(u).$ According to Sattinger [@Sattinger] and Payne & Sattinger [@Payne], we know that if the initial energy $J(u_0)$ is less than the mountain pass level $d$, then the solution to problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} exists globally if it begins in the stable set $\mathcal{W}=\{u\in X_{0,A} :I(u)>0, \, J(u)<d \} \cup \{0\},$ and fails to exist globally if it starts from the unstable set $\mathcal{U}=\{u\in X_{0,A} :I(u)<0, \, J(u)<d \}.$ The functionals $I(\cdot)$ and $J(\cdot)$ are energy functionals of the stationary problem $$\begin{aligned} \label{problem2} \left\{ \begin{array}{ll} M(\|u\|_{X_{0,A}}^2)(-\Delta)_A^su= f(|u|)u,&\mbox{ in }\Omega,\\ u(x)=0,&\mbox{ in }\mathbb{R}^n\setminus\Omega, \end{array} \right.\end{aligned}$$ and we recall that $u\in X_{0,A}(\Omega)$ is a solution to [\[problem2\]](#problem2){reference-type="eqref" reference="problem2"} (i.e., a stationary solution to problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"}) if $\langle J'(u),\phi\rangle=0$, namely $M(\|u\|_{X_{0,A}}^2)\langle u,\phi\rangle_{X_{0,A}}-\mathcal{R}\int_{\Omega}f(|u|)u\overline{\phi}\ dx=0,$ for all $\phi\in X_{0,A}(\Omega)$, see also [@Zuo Definition 1]. 
So, $u\in X_{0,A}(\Omega)$ solves [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} on $(0,T)$ for $T>0$ if $u\in L^{\infty}(0,\infty;L^2(\Omega))\cap L^{\infty}(0,\infty;X_{0,A}) \cap C(0,\infty;L^2(\Omega))$ with $u_t\in L^2(0,\infty;L^2(\Omega))$ and $$\begin{aligned} & \mathcal{R}\int_0^T\int_{\Omega}u_t\overline{\phi}\ dx\, dt+\int_0^T M(\|u\|_{X_{0,A}}^2)\langle u,\phi\rangle_{X_{0,A}}\, dt\\ &-\mathcal{R}\int_0^T\int_{\Omega}f(|u|)u\overline{\phi}\ dx\, dt=0 \mbox{ for all $ \phi\in X_{0,A}(\Omega).$}\end{aligned}$$ **Remark 1**. *We note that [@Zuo] established the existence of a weak solution $u\in X_{0,A}(\Omega)$ to problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"}, based on hypotheses (F) and (M1) (see [@Zuo Theorem 2.1] for the precise requirements). Imposing also (M2), they obtained that weak solutions blow up in finite time, provided that the initial energy is negative (see [@Zuo Theorem 2.2]). Further, problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} has a weak solution for all $T>0$ (namely, a global solution) such that $u\in L^{\infty}(0,\infty;L^2(\Omega))\cap L^{\infty}(0,\infty;X_{0,A}) \cap C(0,\infty;L^2(\Omega))$ with $u_t\in L^2(0,\infty;L^2(\Omega))$, provided that $u_0 \in \mathcal{W}$ (see [@Zuo Theorem 2.3]).* # Main result Let $u=u(t)$ be a solution of [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} with initial data $u_0\in X_{0,A}$; by $T=T(u_0)$ we denote the maximal existence time of $u=u(t)$, given as - $T=\infty$ if $u(t)\in X_{0,A}$ for $t \in [0,\infty)$; - $T=t_{\max}\, (>0)$ if $u(t)\in X_{0,A}$ for $t \in [0,t_{\max})$ and $u(t_{\max})\not\in X_{0,A}$. **Theorem 1**. *Assume that hypotheses (F), (M1), (M2) and (P) hold and $p\in(2,2_s^{\ast})$ $(n>2s)$. Let $u=u(t)$ be a solution of [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} with $u_0\in X_{0,A}$ such that $J(u_0)\leq d$ and $I(u_0)>0$. 
If the maximal existence time is $T=\infty$, then there exists an increasing sequence $\{t_k\}_{k=1}^{\infty}$ with $t_k\rightarrow\infty$ as $k\rightarrow\infty$, such that $u(t_k)$ converges to a stationary solution $v\in X_{0,A}(\Omega)$ of problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"}, that is, $u(t_k)\rightarrow v$ as $k\rightarrow\infty$.* *Proof.* We show that global-in-time solutions to [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} cannot blow up in infinite time, arguing in three steps. **Step 1: Existence of an increasing sequence $\{t_k\}$.** Let $u=u(t)$ be a solution of problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} with $u_0\in X_{0,A}$ and maximal existence time $T=\infty$. Now, [@Zuo Theorem 2.3] gives us $u\in L^{\infty}(0,\infty;L^2(\Omega))\cap L^{\infty}(0,\infty;X_{0,A}) \cap C(0,\infty;L^2(\Omega))$, $u_t\in L^2(0,\infty;L^2(\Omega)).$ Multiplying equation [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} by $\phi\in X_{0,A}$ and integrating over $\Omega$, we get $$\begin{aligned} \label{eq1} (u'(t),\phi) =-M(\|u\|_{X_{0,A}}^2)\langle u(t),\phi\rangle_{X_{0,A}} +\mathcal{R}\int_{\Omega}f(|u(t)|)u(t)\overline{\phi}\ dx.\end{aligned}$$ By [@Zuo Lemma 3.3], $J(u(t))$ is non-increasing with respect to $t$, hence $$\begin{aligned} \label{eq2} 0\leq J(u(t))\leq J(u_0),\end{aligned}$$ where we use a contradiction argument to establish the first inequality in [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}. 
Indeed, assume that there exists a time $t_0$ such that $J(u(t_0))<0$; then by [\[J\]](#J){reference-type="eqref" reference="J"} we deduce that $0>J(u(t_0))=\frac{1}{2}\mathscr{M}(\|u(t_0)\|_{X_{0,A}}^2) -\int_{\Omega}F(|u(t_0)|)\ dx.$ It follows that $$\begin{aligned} \label{eq3} \frac{1}{2}\mathscr{M}(\|u(t_0)\|_{X_{0,A}}^2)<\int_{\Omega}F(|u(t_0)|)\ dx.\end{aligned}$$ Using the Nehari functional $I(\cdot)$, together with assumptions [\[f\]](#f){reference-type="eqref" reference="f"}, (M2), (P) and inequality [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"}, we conclude that $$\begin{aligned} I(u(t_0))=&M(\|u(t_0)\|_{X_{0,A}}^2)\|u(t_0)\|_{X_{0,A}}^2- \mathcal{R}\int_{\Omega}f(|u(t_0)|)|u(t_0)|^2\ dx\\ \leq&\mu\mathscr{M}(\|u(t_0)\|_{X_{0,A}}^2)-\gamma\int_{\Omega} F(|u(t_0)|)\ dx <\mu \mathscr{M}(\|u(t_0)\|_{X_{0,A}}^2)-\frac{\gamma}{2} \mathscr{M}(\|u(t_0)\|_{X_{0,A}}^2)\\ \leq&(\mu-\frac{p}{2})\mathscr{M}(\|u(t_0)\|_{X_{0,A}}^2)<0 \quad \mbox{(recall that $\gamma \geq p$)}.\end{aligned}$$ So, at time $t=t_0$ we have $J(u(t_0))<0$ and $I(u(t_0))<0$, hence $u(t_0)$ lies in the so-called unstable set $\mathcal{U}=\{u\in X_{0,A} :I(u)<0, \, J(u)<d \mbox{ ($d>0$})\}$. By [@Zuo Theorem 2.4], $u(t)$ blows up in finite time, which contradicts $T=\infty$. Since $J(u(t))$ is non-increasing with respect to $t$, by [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} we can find $c$ with $0\leq c\leq J(u_0)$ such that $J(u(t))\rightarrow c$ as $t\rightarrow\infty$. 
Passing to the limit as $t\rightarrow\infty$ in $\int_0^t\|u'(s)\|_{L^2(\Omega)}^2\ ds+J(u)=J(u_0),$ we get $c=J(u_0)-\int_0^{\infty}\|u'(s)\|_{L^2(\Omega)}^2\ ds.$ It follows that we can find an increasing sequence $\{t_k\}_{k=1}^{\infty}$ with $t_k\rightarrow\infty$ as $k\rightarrow\infty$ satisfying $$\begin{aligned} \label{eq6} \lim_{k\rightarrow\infty}\|u'(t_k)\|_{L^2(\Omega)}=0.\end{aligned}$$ **Step 2: Convergence of $\{u(t_k)\}$ to a function $v\in X_{0,A}$.** Equation [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} leads to the following $$\langle J'(u(t)),\phi\rangle =M(\|u(t)\|_{X_{0,A}}^2)\langle u(t),\phi\rangle_{X_{0,A}}- \mathcal{R}\int_{\Omega}f(|u(t)|)u(t)\overline{\phi}\ dx =(-u'(t),\phi) \mbox{ for all $\phi\in X_{0,A}$.}$$ Combining the Cauchy-Schwarz inequality, the definition of the first eigenvalue $\lambda_1$ of $(-\Delta)_A^s$ (see [@Vecchi Proposition 3.3]) and the limit in [\[eq6\]](#eq6){reference-type="eqref" reference="eq6"}, we conclude $$\begin{aligned} \label{zero} &\|J'(u(t_k))\|_{X_{0,A}'}=\sup_{\stackrel{\phi\in X_{0,A}} {\|\phi\|_{X_{0,A}}=1}}\langle J'(u(t_k)),\phi \rangle = \sup_{\stackrel{\phi\in X_{0,A}}{\|\phi\|_{X_{0,A}}=1}}(-u'(t_k),\phi)\nonumber\\ \leq& \sup_{\stackrel{\phi\in X_{0,A}}{\|\phi\|_{X_{0,A}}=1}}\|u'(t_k)\|_{L^2(\Omega)} \|\phi\|_{L^2(\Omega)}\nonumber \leq\|u'(t_k)\|_{L^2(\Omega)}\sup_{\phi\in X_{0,A}\setminus\{0\}}\left(\frac{\|\phi\|_{L^2(\Omega)}}{\|\phi\|_{X_{0,A}}}\right) \\ \leq &\|u'(t_k)\|_{L^2(\Omega)}\left(\displaystyle\inf_{\phi\in X_{0,A}\setminus\{0\}}\left(\frac{\|\phi\|_{X_{0,A}}}{\|\phi\|_{L^2(\Omega)}}\right)\right)^{-1} \leq \frac{1}{\sqrt{\lambda_1}}\|u'(t_k)\|_{L^2(\Omega)}\rightarrow0\end{aligned}$$ as $k\rightarrow\infty$. 
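The chain of inequalities in [\[zero\]](#zero){reference-type="eqref" reference="zero"} relies on the elementary duality $\sup_{\phi\neq0}\|\phi\|_{L^2}/\|\phi\|_{X_{0,A}}=\big(\inf_{\phi\neq0}\|\phi\|_{X_{0,A}}/\|\phi\|_{L^2}\big)^{-1}=\lambda_1^{-1/2}$. A finite-dimensional analogue of this Rayleigh-quotient identity, with a random symmetric positive definite matrix standing in for the quadratic form $\|\cdot\|_{X_{0,A}}^2$ (purely illustrative), can be checked as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)   # SPD matrix: stand-in for the form ||phi||_{X_{0,A}}^2

evals, evecs = np.linalg.eigh(A)
lam1, v1 = evals[0], evecs[:, 0]   # smallest eigenvalue: analogue of lambda_1

# The supremum of |phi|_2 / sqrt(phi^T A phi) equals lambda_1^{-1/2},
# attained at the first eigenvector (eigh returns unit eigenvectors).
sup_attained = np.linalg.norm(v1) / np.sqrt(v1 @ A @ v1)
assert np.isclose(sup_attained, 1 / np.sqrt(lam1))

# ...and the bound |phi|_2 <= lambda_1^{-1/2} sqrt(phi^T A phi) holds for all phi.
phis = rng.standard_normal((1000, 6))
ratios = np.linalg.norm(phis, axis=1) / np.sqrt(np.einsum('ij,jk,ik->i', phis, A, phis))
assert np.all(ratios <= 1 / np.sqrt(lam1) + 1e-9)
```

The same mechanism, with $\lambda_1$ the first eigenvalue of $(-\Delta)_A^s$, is what turns the $L^2$ bound on $u'(t_k)$ into a dual-norm bound on $J'(u(t_k))$.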
As usual, by $X_{0,A}'$ we denote the dual space of $X_{0,A}$, hence we can find $c_1>0$, independent of the index $k$, such that $$\begin{aligned} \label{eq7} \|J'(u(t_k))\|_{X_{0,A}'}\leq c_1,\ k=1,2,\cdots.\end{aligned}$$ For the Nehari energy functional $I(\cdot)$, the bound from above in [\[eq7\]](#eq7){reference-type="eqref" reference="eq7"} and the Young inequality lead to $$\label{eq8} |I(u(t_k))|\leq|\langle J'(u(t_k)),u(t_k)\rangle| \leq \|J'(u(t_k))\|_{X_{0,A}'}\|u(t_k)\|_{X_{0,A}}\nonumber \leq c_1\|u(t_k)\|_{X_{0,A}} \leq c_2+\frac{(p-2\mu)m_0}{4p\mu}\|u(t_k)\|_{X_{0,A}}^{2\theta}.$$ Using $J(\cdot)$ and $I(\cdot)$, together with [\[eq8\]](#eq8){reference-type="eqref" reference="eq8"} and (M1), we obtain $$\begin{aligned} &J(u(t_k))=\frac{1}{2}\mathscr{M}\left(\|u(t_k)\|_{X_{0,A}}^2\right) -\int_{\Omega}F(|u(t_k)|)\ dx\\ \geq&\frac{1}{2\mu}M(\|u(t_k)\|_{X_{0,A}}^2)\|u(t_k)\|_{X_{0,A}}^2 -\frac{1}{\gamma}\int_{\Omega}f(|u(t_k)|)|u(t_k)|^2\ dx\\ =&\frac{1}{2\mu}M(\|u(t_k)\|_{X_{0,A}}^2)\|u(t_k)\|_{X_{0,A}}^2 +\frac{1}{p}I(u(t_k))-\frac{1}{p}M(\|u(t_k)\|_{X_{0,A}}^2) \|u(t_k)\|_{X_{0,A}}^2\\ \geq&\frac{(p-2\mu)}{2p\mu}M(\|u(t_k)\|_{X_{0,A}}^2)\|u(t_k)\|_{X_{0,A}}^2 -c_2-\frac{(p-2\mu)}{4p\mu}m_0\|u(t_k)\|_{X_{0,A}}^{2\theta}\\ \geq&\frac{(p-2\mu)}{2p\mu}m_0\|u(t_k)\|_{X_{0,A}}^{2\theta} -\frac{(p-2\mu)}{4p\mu}m_0\|u(t_k)\|_{X_{0,A}}^{2\theta}-c_2.\end{aligned}$$ This inequality, taking into account the bound from above in [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}, gives us $J(u_0)+c_2\geq\frac{(p-2\mu)}{4p\mu}m_0\|u(t_k)\|_{X_{0,A}}^{2\theta},$ which implies that $$\begin{aligned} \label{eq9} \|u(t_k)\|_{X_{0,A}}\leq\Big[\frac{(J(u_0)+c_2)4p\mu}{(p-2\mu)m_0}\Big]^{\frac{1}{2\theta}}, \quad k=1,2,\cdots.\end{aligned}$$ From [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"} and the fact that the embedding $X_{0,A}\hookrightarrow L^q(\Omega,\mathbb{C})$ is compact for all $q\in[1,2_s^{\ast})$ (see [@Fiscella Lemma 2.2]), then there exist 
an increasing subsequence, still denoted by $\{t_k\}_{k=1}^{\infty}$, and a function $v\in X_{0,A}$ such that $u_k:=u(t_k)$ satisfies the following convergences $$\begin{aligned} &u_k\xrightarrow{w} v\mbox{ in }X_{0,A} \mbox{ as }k\rightarrow\infty,\label{eq10}\\ &u_k\rightarrow v \mbox{ in }L^q(\Omega,\mathbb{C})\mbox{ for all } q\in[1,2_s^{\ast}) \mbox{ as }k\rightarrow\infty.\label{eq11}\end{aligned}$$ By [\[eq11\]](#eq11){reference-type="eqref" reference="eq11"}, there exist a subsequence, still denoted by $\{u_k\}_{k=1}^{\infty}$, and a function $w\in L^q(\Omega,\mathbb{C})$ for all $q\in[1,2_s^{\ast})$, such that $$\begin{aligned} &u_k(x)\rightarrow v(x)\mbox{ a.e. in }\Omega \mbox{ as }k\rightarrow\infty,\label{eq12}\\ &\mbox{for all }k, |u_k(x)|\leq w(x)\mbox{ a.e. in }\Omega.\label{eq13}\end{aligned}$$ **Step 3: $v$ is a solution to the stationary problem [\[problem2\]](#problem2){reference-type="eqref" reference="problem2"}.** Let us prove that for all $\phi\in X_{0,A}$ we have $$\begin{aligned} \label{eq14} \mathcal{R}\int_{\Omega}f(|u_k|)u_k\overline{\phi}\ dx\to\mathcal{R}\int_{\Omega}f(|v|)v\overline{\phi}\ dx \quad \mbox{as $k\rightarrow\infty$.}\end{aligned}$$ By [\[eq12\]](#eq12){reference-type="eqref" reference="eq12"} and $f\in C^1([0,\infty))$, we get $f(|u_k|)u_k\overline{\phi}\rightarrow f(|v|)v\overline{\phi}$ for a.e. $x\in\Omega$ as $k\rightarrow\infty$. Using [\[f\]](#f){reference-type="eqref" reference="f"} and [\[eq13\]](#eq13){reference-type="eqref" reference="eq13"} we get $|f(|u_k|)u_k\overline{\phi}|\leq pC|w|^{p-2}|w||\phi| =pC|w|^{p-1}|\phi|,$ for a.e. $x\in\Omega$. The Hölder inequality implies $$\int_{\Omega}pC|w|^{p-1}|\phi|\ dx\leq c\|\phi\|_{L^p}\left(\int_{\Omega} |w|^{(p-1)\frac{p}{p-1}}\ dx\right)^{\frac{p-1}{p}}= c\|\phi\|_{L^p}\|w\|_{L^p}^{p-1}\leq c,$$ thanks to $w\in L^q(\Omega,\mathbb{C})$ for all $q\in[1,2_s^{\ast})$ and $\phi\in X_{0,A}\hookrightarrow L^p(\Omega,\mathbb{C})$ for $p\in(2,2_s^{\ast})$. 
We conclude $|w|^{p-1}|\phi|\in L^1(\Omega).$ The Lebesgue dominated convergence theorem gives us [\[eq14\]](#eq14){reference-type="eqref" reference="eq14"}. We now show that $$\begin{aligned} \label{eq17.1} \langle J'(u_k),\phi\rangle=M(\|u_k\|_{X_{0,A}}^2) \langle u_k,\phi\rangle-\mathcal{R}\int_{\Omega}f(|u_k|)u_k\overline{\phi}\ dx,\end{aligned}$$ converges to $0=M(\|v\|_{X_{0,A}}^2) \langle v,\phi\rangle-\mathcal{R}\int_{\Omega}f(|v|)v\overline{\phi}\ dx$ as $k\rightarrow\infty$, and that $u_k\rightarrow v$ in $X_{0,A}$. To this end, we prove that $M(\|u_k\|_{X_{0,A}}^2)\rightarrow M(\|v\|_{X_{0,A}}^2)$ and that $u_k\rightarrow v$ strongly in $X_{0,A}$ (see [@Zhang]). Since $M$ is continuous, [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"} implies $M(\|u_k\|_{X_{0,A}}^2)\leq c$ for all $k\in\mathbb{N}$ and some $c>0$, hence $\{M(\|u_k\|_{X_{0,A}}^2)\}_{k\in \mathbb{N}}$ is bounded in $\mathbb{R}$. Now, there is a subsequence, still denoted by $\{M(\|u_k\|_{X_{0,A}}^2)\}_{k\in \mathbb{N}}$, converging to $\overline{M}$, so $\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2)\langle v,\phi\rangle_{X_{0,A}} =\overline{M}\langle v,\phi\rangle_{X_{0,A}}$ and $$\lim_{k\rightarrow\infty}\int\int_{\mathbb{R}^{2n}}\left[ M(\|u_k\|_{X_{0,A}}^2)-\overline{M}\right]^2 \frac{|v(x)-e^{i(x-y)\cdot A\left(\frac{x+y}{2}\right)}v(y)|^2}{|x-y|^{n+2s}}\ dx\ dy=0,$$ that is, $M(\|u_k\|_{X_{0,A}}^2)v\rightarrow \overline{M}v\mbox{ in }X_{0,A}.$ This, together with [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}, implies $\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2) \langle u_k,v\rangle_{X_{0,A}}=\overline{M}\langle v,v\rangle_{X_{0,A}}.$ By [\[zero\]](#zero){reference-type="eqref" reference="zero"}, [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} and [\[eq14\]](#eq14){reference-type="eqref" reference="eq14"}, we pass to the limit as $k\rightarrow\infty$ in [\[eq17.1\]](#eq17.1){reference-type="eqref" reference="eq17.1"} to get $0=\overline{M}\langle 
v,\phi\rangle_{X_{0,A}}-\mathcal{R}\int_{\Omega}f(|v|)v\overline{\phi}\ dx$ for all $\phi\in X_{0,A}$. For $\phi=v$, we get $\overline{M}\langle v,v\rangle_{X_{0,A}}=\int_{\Omega}f(|v|)|v|^2\ dx.$ Arguing as for [\[eq14\]](#eq14){reference-type="eqref" reference="eq14"}, we deduce $\lim_{k\rightarrow\infty}\int_{\Omega}f(|u_k|)|u_k|^2\ dx=\int_{\Omega} f(|v|)|v|^2\ dx.$ So, [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} and [\[zero\]](#zero){reference-type="eqref" reference="zero"} lead to $|\langle J'(u_k),u_k\rangle|\leq\|J'(u_k)\|_{X_{0,A}'}\|u_k\|_{X_{0,A}}\\ \leq \|J'(u_k)\|_{X_{0,A}'}\left[\frac{(J(u_0)+c_2)4p\mu}{(p-2\mu)m_0} \right]^{\frac{1}{2\theta}} \to 0$ as $k\rightarrow\infty$. We first deduce that $$\begin{aligned} &\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2)\langle u_k,u_k \rangle_{X_{0,A}}=\lim_{k\rightarrow\infty}(\langle J'(u_k),u_k\rangle+\int_{\Omega}f(|u_k|)|u_k|^2\ dx)\\ &=\int_{\Omega}f(|v|)|v|^2\ dx=\lim_{k\rightarrow\infty}M( \|u_k\|_{X_{0,A}}^2)\langle u_k,v\rangle_{X_{0,A}},\end{aligned}$$ then $\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2)( \langle u_k,u_k\rangle_{X_{0,A}}-\langle u_k,v\rangle_{X_{0,A}})=0,$ and so $$\begin{aligned} 0=&\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2) \langle u_k,u_k-v\rangle_{X_{0,A}}\\ =&\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2)\left[ \|u_k-v\|_{X_{0,A}}^2+\langle v,u_k-v\rangle_{X_{0,A}}\right]\\ =&\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2)\|u_k -v\|_{X_{0,A}}^2,\end{aligned}$$ where $\langle v,u_k-v\rangle_{X_{0,A}}\rightarrow0$ by the weak convergence in [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}. Since $M(\sigma)\geq m_0\sigma^{\theta-1}$ for all $\sigma\geq0$, we get $\lim_{k\rightarrow\infty}\|u_k-v\|_{X_{0,A}}=0.$ So, $u_k\rightarrow v$ in $X_{0,A}$, which implies that $\|u_k\|_{X_{0,A}}^2\rightarrow\|v\|_{X_{0,A}}^2$ as $k\rightarrow\infty$. Using the continuity of $M$, we get $\lim_{k\rightarrow\infty}M(\|u_k\|_{X_{0,A}}^2)= M(\|v\|_{X_{0,A}}^2),$ which allows us to conclude that $\overline{M}=M(\|v\|_{X_{0,A}}^2)$. ◻ D. 
Applebaum, Lévy processes - From probability to finance and quantum groups, Notices Amer. Math. Soc. 51 (2004) 1336-1347. J.E. Avron, I.W. Herbst, B. Simon, Schrödinger operators with magnetic fields, Commun. Math. Phys. 79 (1981) 529--572. L. Caffarelli, Non-local diffusions, drifts and games, in Nonlinear Partial Differential Equations, Abel Symposia, Vol. 7 (Springer, 2012), pp. 37--52. P. d'Avenia, M. Squassina, Ground states for fractional magnetic operators, ESAIM Control Optim. Calc. Var. 24 (2018) 1--24. A. Fiscella, A. Pinamonti, E. Vecchi, Multiplicity results for magnetic fractional problems, J. Differential Equations 263 (2017) 4617--4633. A. Fiscella, E. Vecchi, Bifurcation and multiplicity results for critical magnetic fractional problems, Electron. J. Diff. Equ. 153 (2018) 1--18. G. Kirchhoff, Mechanik, Teubner, Leipzig, 1883. C. Ji, V.D. Rădulescu, Concentration phenomena for magnetic Kirchhoff equations with critical growth, Discrete Contin. Dyn. Syst. 41 (2021), no. 12, 5551-5577. C. Ji, V.D. Rădulescu, Multi-bump solutions for the nonlinear magnetic Choquard equation with deepening potential well, J. Differential Equations 306 (2022), 251-279. Q. Li, K. Teng, W. Wang, J. Zhang, Existence of nontrivial solutions for fractional Schrödinger equations with electromagnetic fields and critical or supercritical nonlinearity, Bound. Value Probl. 2020:112 (2020) 1--10. Q. Li, W. Wang, M. Liu, Normalized solutions for the fractional Choquard equations with Sobolev critical and double mass supercritical growth, Lett. Math. Phys. 113:49 (2023) 1--9. X. Mingqi, V.D. Rădulescu, B. Zhang, Nonlocal Kirchhoff diffusion problems: local existence and blow-up of solutions, Nonlinearity 31 (2018) 3228--3250. L.E. Payne, D.H. Sattinger, Saddle points and instability of nonlinear hyperbolic equations, Israel J. Math. 22 (1975), 273--303. D.H. Sattinger, Stability of nonlinear hyperbolic equations, Arch. Ration. Mech. Anal. 28 (1968) 226--244. E. 
Toscano, C. Vetro, D. Wardowski, Systems of Kirchhoff type equations with gradient dependence in the reaction term via subsolution-supersolution method, Discrete Contin. Dyn. Syst. Ser. S (2023), doi:10.3934/dcdss.2023070 L. Wen, V.D. Rădulescu, Groundstates for magnetic Choquard equations with critical exponential growth, Appl. Math. Lett. 132 (2022), Paper No. 108153, 8 pp. M. Xiang, V.D. Rădulescu, B. Zhang, A critical fractional Choquard-Kirchhoff problem with magnetic field, Commun. Contemp. Math. 21:1850004 (2019). J. Zhou, Lifespan, asymptotic behavior and ground-state solutions to a nonlocal parabolic equation, Z. Angew. Math. Phys. (2020) 71:28. J. Zuo, J.H. Lopes, The Kirchhoff-type diffusion problem driven by a magnetic fractional Laplace operator, J. Math. Phys. 63:061505 (2022) 1--14.
--- abstract: | We consider the three-dimensional Dirac operator coupled with a combination of electrostatic and Lorentz scalar $\delta$-shell interactions. We approximate this operator by Dirac operators with general local interactions $V$. Without any smallness hypothesis on the potential $V$, the approximating family converges in the strong resolvent sense to the Dirac Hamiltonian coupled with a $\delta$-shell potential supported on $\Sigma$, a bounded smooth surface. However, the coupling constant depends nonlinearly on the potential $V.$ address: | ^1^Institut de Mathématiques de Bordeaux, UMR 5251, Université de Bordeaux 33405 Talence Cedex, FRANCE\ and Departamento de Matemáticas, Universidad del País Vasco, Barrio Sarriena s/n 48940 Leioa, SPAIN. author: - Mahdi Zreik ^1^ title: | On the approximation of the $\delta$-shell interaction\ for the 3-D Dirac operator. --- # Introduction Dirac Hamiltonians of the type $D_m + V$, where $V$ is a suitable perturbation, are used in many problems where the implications of special relativity play an important role. This is the case, for example, in the description of elementary particles such as quarks, or in the analysis of graphene, which is used in research for batteries, water filters, or photovoltaic cells. For these problems, mathematical investigations are still in their infancy. The present work studies the three-dimensional Dirac operator with a singular interaction on a closed surface $\Sigma$.\ \ Mathematically, the Hamiltonian we are interested in can be formulated as follows $$\begin{aligned} \label{DEL} \mathbb{D}_{\eta,\tau}=D_m+B_{\eta,\tau}\delta_{\Sigma}= D_m + \big (\eta \,\mathbb{I}_{4} + \tau \beta\big)\delta_{\Sigma},\end{aligned}$$ where $B_{\eta,\tau}$ is the combination of the *electrostatic* and *Lorentz scalar* potentials of strengths $\eta$ and $\tau$, respectively. 
Physically, the Hamiltonian $\mathbb{D}_{\eta,\tau}$ is used as an idealized model for Dirac operators with strongly localized electric and massive potentials near the interface $\Sigma$ (*e.g.*, an annulus), *i.e.*, it replaces a Hamiltonian of the form $$\begin{aligned} \label{DEB} \mathbb{H}_{\tilde{\eta},\tilde{\tau}}=D_m + B_{\tilde{\eta},\tilde{\tau}}= D_m + \big (\tilde{\eta} \,\mathbb{I}_{4} + \tilde{\tau} \beta\big)\mathfrak{P}_{\Sigma},\end{aligned}$$ where $\mathfrak{P}_{\Sigma}$ is a regular potential localized in a thin layer containing the interface $\Sigma$.\ \ The operators $\mathbb{D}_{\eta,\tau }$ have been studied in detail recently. The first dedicated work on the spectral study of the Hamiltonian $\mathbb{D}_{\eta,\tau}$ is [@DES], in which the authors treated the case where the surface is a sphere, assuming $\eta^{2} - \tau^{2}= -4.$ This is known as the *confinement case* and in physics means the stability of a particle (*e.g.*, an electron) in its initial region during time evolution, *i.e.*, if at time $t=0$ the particle is confined to a region $\Omega\subset\mathbb{R}^{3}$, then it cannot cross the surface $\partial\Omega$ to reach the region $\mathbb{R}^{3}\setminus\overline{\Omega}$ for any $t > 0$. Mathematically, this means that the Dirac operator under consideration decouples into a direct sum of two Dirac operators acting on $\Omega$ and $\mathbb{R}^{3}\setminus\overline{\Omega}$, respectively, with appropriate boundary conditions. After this work, spectral studies of Schrödinger operators coupled with $\delta$-shell interactions flourished, while spectral studies of Dirac operators with $\delta$-shell interactions remained dormant.\ \ In 2014, the spectral study of Dirac operators with $\delta$-shell interactions was revived in [@AMV1], where the authors developed a new technique to characterize the self-adjointness of the free Dirac operator coupled with a $\delta$-shell potential. 
In a special case, they treated pure electrostatic $\delta$-shell interactions (*i.e.*, $\tau = 0$) supported on the boundary of a bounded regular domain and proved that the perturbed operator is self-adjoint. The same authors continued the spectral study of the electrostatic case, for example, the existence of a point spectrum and related problems; see [@AMV2] and [@AMV3].\ \ The approximation of the Dirac operator $\mathbb{D}_{\eta,\tau}$ by Dirac operators with regular potentials with shrinking support (*i.e.*, of the form [\[DEB\]](#DEB){reference-type="eqref" reference="DEB"}) provides a justification of the considered idealized model. In the one-dimensional framework, the analysis is carried out in [@PS], where Šeba showed that norm resolvent convergence holds. Subsequently, Hughes and Tušek proved strong resolvent convergence and norm resolvent convergence for Dirac operators with general point interactions in [@RJH1; @RJH2] and [@Tus], respectively. In 2D, [@CLMT] considered the approximation of Dirac operators with electrostatic, Lorentz scalar, and anomalous magnetic $\delta$-shell potentials on closed and bounded curves. Furthermore, in [@BHT] the authors examined the same question as in [@CLMT], but on a straight line. More precisely, taking parameters $(\tilde{\eta},\tilde{\tau})\in{\mathbb R}^2$ in [\[DEB\]](#DEB){reference-type="eqref" reference="DEB"} and a potential $\mathfrak{P}^{\varepsilon}_{\Sigma}$ converging to $\delta_{\Sigma}$ as $\varepsilon$ tends to $0$ (in the sense of distributions), the operator $D_m + \big (\tilde{\eta} \,\mathbb{I}_{4} + \tilde{\tau} \beta\big)\mathfrak{P}^{\varepsilon}_{\Sigma}$ converges to the Dirac operator $\mathbb{D}_{\eta,\tau}$ with different coupling constants $(\eta,\tau)\in{\mathbb R}^2$ which depend nonlinearly on the potential $\mathfrak{P}^{\varepsilon}_{\Sigma}$.\ \ In the three-dimensional case, the situation seems to be even more complex, as recently shown in [@AF]. 
There, the authors were also able to show convergence in the norm resolvent sense in the non-confining case; however, a smallness assumption on the potential $\mathfrak{P}^{\varepsilon}_{\Sigma}$ was required to achieve this result. Unfortunately, this assumption prevents one from obtaining an approximation of the operator $\mathbb{D}_{\eta,\tau}$ for the parameters $\eta$ and $\tau$ that are most relevant from the physical or mathematical point of view. Motivated by this, the authors of the recent paper [@BHS] studied and confirmed the approximation property for two- and three-dimensional Dirac operators with $\delta$-shell potentials in the norm resolvent sense; there, too, however, no results could be obtained without a smallness assumption on the potential $\mathfrak{P}^{\varepsilon}_{\Sigma}$. Finally, we note that in the two- and three-dimensional setting a renormalization of the interaction strength was observed in [@CLMT; @AF; @BHS].\ \ Our main goal in this work is to develop new techniques that allow us to establish the approximation in the strong resolvent sense in the non-critical and non-confinement cases (*i.e.*, when $\eta^2 - \tau^2 \neq \pm 4$), without the smallness assumption of [@AF], and to obtain results on how the initial parameters should be chosen so that the mathematical models reflect the physical reality in the correct way.\ Let $m>0$ and recall the free Dirac operator $D_m$ on ${\mathbb R}^3$, defined by $D_m:=- i \alpha \cdot\nabla + m\beta$, where $$\begin{aligned} \alpha_k&=\begin{pmatrix} 0 & \sigma_k\\ \sigma_k & 0 \end{pmatrix}\quad \text{for } k=1,2,3, \quad \beta=\begin{pmatrix} \mathbb{I}_{2} & 0\\ 0 & -\mathbb{I}_{2} \end{pmatrix},\quad \mathbb{I}_{2} :=\begin{pmatrix} 1& 0 \\ 0 & 1 \end{pmatrix}, \\ &\text{and }\, \sigma_1=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\quad \sigma_2= \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix} , \quad \sigma_3=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix},
\end{aligned}$$ are the Dirac and Pauli matrices, which satisfy the anticommutation relations $$\begin{aligned} \label{PMD} \lbrace \alpha_j,\alpha_k\rbrace = 2\delta_{jk}\mathbb{I}_4,\quad \lbrace \alpha_j,\beta\rbrace = 0,\quad \text{and} \quad \beta^{2} = \mathbb{I}_4,\quad j,k\in \lbrace 1,2,3\rbrace, \end{aligned}$$ where $\lbrace \cdot,\cdot\rbrace$ is the anticommutator bracket. We use the notation $\alpha \cdot x=\sum_{j=1}^{3}\alpha_j x_j$ for $x=(x_1,x_2,x_3)\in{\mathbb R}^3$. We recall that $(D_m, \mathrm{dom}(D_m))$ is self-adjoint (see, *e.g.*, [@Tha Subsection 1.4]), and that $$\begin{aligned} \mathrm{Sp}(D_m) =\mathrm{Sp}_{\mathrm{ess}}(D_m)=(-\infty,-m]\cup [m,+\infty). \end{aligned}$$ Finally, we introduce the Dirac operator coupled with a combination of electrostatic and Lorentz scalar $\delta$-shell interactions of strengths $\eta$ and $\tau$, respectively, which we denote by $D_{\eta,\tau}$ in what follows. Throughout this paper, for $\Omega\subset\mathbb{R}^3$ a bounded smooth domain with boundary $\Sigma:=\partial\Omega$, we refer to $H^{1}(\Omega,\mathbb{C}^{4}):=H^{1}(\Omega)^{4}$ as the first-order Sobolev space $$\begin{aligned} \mathit{H}^1(\Omega)^4=\{ \varphi\in\mathit{L}^2(\Omega)^4: \text{ there exists } \tilde{\varphi}\in\mathit{H}^1({\mathbb R}^{3})^4 \text{ such that } \tilde{\varphi}|_{\Omega} =\varphi\}.\end{aligned}$$ We denote by $H^{1/2}(\Sigma,\mathbb{C}^{4}):=H^{1/2}(\Sigma)^{4}$ the Sobolev space of order $1/2$ along the boundary $\Sigma$, and by $t_{\Sigma}: H^{1}(\Omega)^{4}\rightarrow H^{1/2}(\Sigma)^{4}$ the classical trace operator. **Definition 1**. *Let $\Omega$ be a bounded domain in $\mathbb{R}^3$ with boundary $\Sigma=\partial\Omega$, and let $(\eta, \tau)\in \mathbb{R}^{2}$.
Then $D_{\eta,\tau} =D_m + B_{\eta,\tau}\delta_{\Sigma} := D_m + (\eta \mathbb{I}_{4} + \tau\beta)\delta_{\Sigma}$ is the operator acting in $L^{2}({\mathbb R}^3)^{4}$ defined as follows:* *$$\begin{aligned} &D_{\eta,\tau}f= D_m f_{+} \oplus D_m f_-, \text{ for all }f\in \mathrm{dom}(D_{\eta,\tau}):=\lbrace f=f_{+} \oplus f_{-} \in H^{1}(\Omega)^{4} \oplus H^{1}(\mathbb{R}^{3}\setminus\overline{\Omega})^{4}: \\&\text{ the transmission condition (T.C) below holds in } H^{1/2}({\Sigma})^{4}\rbrace.\end{aligned}$$ $$\begin{aligned} \label{DOMDL} \text{Transmission condition}: i\alpha\cdot\nu(t_{\Sigma}f_{+} - t_{\Sigma}f_-) + \dfrac{1}{2} (\eta \,\mathbb{I}_{4} + \tau\beta) (t_{\Sigma}f_+ + t_{\Sigma}f_-)=0,\end{aligned}$$* where $\nu$ is the outward pointing normal to $\Omega$.◻\ \ Recall that for $\eta^2 - \tau^2 \neq 0,4,$ the Dirac operator $(D_{\eta,\tau}, \mathrm{dom}(D_{\eta,\tau}))$ is self-adjoint and satisfies the following assertions (see, e.g., [@BEHL2 Theorem 3.4, 4.1]) - $\mathrm{Sp_{ess}}(D_{\eta,\tau})=(-\infty,-m]\cup[m,+\infty).$ - $\mathrm{Sp_{dis}}(D_{\eta,\tau})$ is finite.\ **Organization of the paper.** The present paper is structured as follows. We start with Section [2](#MMR){reference-type="ref" reference="MMR"}, where we define the model to be studied in our paper by introducing the family $\lbrace\mathscr{E}_{\eta,\tau,\varepsilon}\rbrace_{\varepsilon},$ the family of Dirac operators approximating $D_{\eta,\tau}$. We also state our main result, Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}. Moreover, in this section we present some geometric aspects characterizing the surface $\Sigma$, as well as some spectral properties of the Dirac operator coupled with the $\delta$-shell interaction, collected in Lemma [Lemma 1](#lemma3.1){reference-type="ref" reference="lemma3.1"}.
Section [3](#prooftheorem3.1){reference-type="ref" reference="prooftheorem3.1"} is devoted to the proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}, in which the Dirac operator with $\delta$-shell interaction is approximated, in the strong resolvent sense, by sequences of Dirac operators with regular potentials at the appropriate scale. # Model and Main results {#MMR} For a smooth bounded domain $\Omega\subset \mathbb{R}^{3}$, we consider an interaction supported on the boundary $\Sigma:=\partial\Omega$ of $\Omega$. The surface $\Sigma$ divides the Euclidean space into the disjoint union $\mathbb{R}^{3}=\Omega_+ \cup\Sigma\cup\Omega_-$, where $\Omega_+:=\Omega$ is a bounded domain and $\Omega_-=\mathbb{R}^{3}\setminus\overline{\Omega_+}.$ We denote by $\nu$ and $\mathrm{dS}$ the unit outward pointing normal to $\Omega$ and the surface measure on $\Sigma$, respectively. We also denote by $f_{\pm}:=f\downharpoonright \Omega_{\pm}$ the restriction of $f$ to $\Omega_{\pm},$ for every $\mathbb{C}^{4}$-valued function $f$ defined on $\mathbb{R}^{3}.$ Then, we define the distribution $\delta_{\Sigma}f$ by $$\begin{aligned} \langle \delta_{\Sigma}f,\, g\rangle := \frac{1}{2} \int_{\Sigma} (t_{\Sigma} f_+ + t_{\Sigma}f_-)\, g \, \mathrm{dS}, \quad \text{ for any test function}\quad g\in C^{\infty}_{0} (\mathbb{R}^{3})^{4}, \end{aligned}$$ where $t_{\Sigma}f_{\pm}$ denotes the classical trace of $f_{\pm}$, with $t_{\Sigma}$ the trace operator introduced before Definition [Definition 1](#DiracelectroLorentz){reference-type="ref" reference="DiracelectroLorentz"}.
Now, we explicitly construct regular symmetric potentials $V_{\eta,\tau,\varepsilon} \in L^{\infty}(\mathbb{R}^{3}; \mathbb{C}^{4\times4})$ supported on a tubular $\varepsilon$-neighbourhood of $\Sigma$ and such that $$\begin{aligned} V_{\eta,\tau,\varepsilon} \xrightarrow[\varepsilon\to 0]{} (\eta \,\mathbb{I}_{4} + \tau \beta)\delta_{\Sigma} \quad \text{in the sense of distributions.}\end{aligned}$$ To explicitly describe the approximate potentials $V_{\eta,\tau,\varepsilon}$, we introduce some additional notation. For $\gamma>0,$ we define the tubular neighborhood $\Sigma_{\gamma}:= \lbrace x\in \mathbb{R}^{3}: \, \text{dist}(x,\Sigma)< \gamma \rbrace$ of $\Sigma$ with width $\gamma$. For $\gamma>0$ small enough, $\Sigma_{\gamma}$ is parametrized as $$\begin{aligned} \label{neighborhood} \Sigma_{\gamma}= \lbrace x_{\Sigma} + p\nu(x_{\Sigma}),\, x_{\Sigma}\in \Sigma\quad \text{and} \quad p\in (-\gamma,\gamma)\rbrace.\end{aligned}$$ For $0<\varepsilon<\gamma,$ let $h_{\varepsilon}(p):= \dfrac{1}{\varepsilon}h\left(\dfrac{p}{\varepsilon}\right)$, for all $p\in\mathbb{R}$, where the function $h$ satisfies $$\begin{aligned} h\in L^{\infty}(\mathbb{R},\mathbb{R}),\quad \text{supp} \,h \subset (-1,1) \text{ and }\int_{-1}^{1}h(t)\,\text{d}t =1.\end{aligned}$$ Thus, we have: $$\begin{aligned} \label{prophe} %&h\in L^{\infty}(\mathbb{R},\mathbb{R}),\\ \text{supp} \,h_{\varepsilon} \subset (-\varepsilon,\varepsilon),\quad \int_{-\varepsilon}^{\varepsilon}h_{\varepsilon}(t)\,\text{d}t =1, \text{ and }\lim_{\varepsilon\rightarrow 0} h_{\varepsilon} = \delta_{0}\quad\text{in the sense of the distributions},\end{aligned}$$ where $\delta_0$ is the Dirac $\delta$-function supported at the origin.
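These scaling properties of $h_{\varepsilon}$ can be illustrated numerically. The snippet below is a sanity check, not part of the construction: the concrete choice $h=\tfrac12\,\mathbf{1}_{(-1,1)}$ and the test function $\cos$ are illustrative assumptions (the text only requires $h\in L^{\infty}$, $\operatorname{supp}h\subset(-1,1)$, $\int h=1$). Riemann sums confirm that the mass of $h_{\varepsilon}$ stays equal to $1$ while its action on a smooth test function tends to the value at the origin.

```python
import numpy as np

# Assumed concrete mollifier (an illustrative choice, not fixed by the text):
# h = (1/2) * indicator of (-1, 1), which is in L^inf with integral 1.
def h(t):
    return 0.5 * (np.abs(t) < 1)

def h_eps(p, eps):
    # Rescaled profile h_eps(p) = (1/eps) h(p/eps), supported in (-eps, eps).
    return h(p / eps) / eps

def check(eps, n=400001):
    # Riemann sums for the mass of h_eps and for its action on a test function.
    p, dp = np.linspace(-1.0, 1.0, n, retstep=True)
    phi = np.cos(p)                            # smooth test function, phi(0) = 1
    mass = np.sum(h_eps(p, eps)) * dp          # stays equal to 1 for every eps
    action = np.sum(h_eps(p, eps) * phi) * dp  # tends to phi(0) as eps -> 0
    return mass, action

for eps in (0.5, 0.1, 0.02):
    mass, action = check(eps)
    print(f"eps={eps}: mass={mass:.5f}, <h_eps, phi>={action:.5f}")
```

As $\varepsilon$ decreases, the printed action values approach $\varphi(0)=1$, mirroring $h_{\varepsilon}\to\delta_{0}$ in the sense of distributions.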
Finally, for any $\varepsilon\in (0,\gamma)$, we define the symmetric approximate potentials $V_{\eta,\tau,\varepsilon}\in L^{\infty}(\mathbb{R}^{3},\mathbb{C}^{{4}\times{4}})$, as follows: $$\label{potential} V_{\eta,\tau,\varepsilon}(x):= \left\{ \begin{aligned} &B_{\eta,\tau}h_{\varepsilon}(p),\quad &\text{if}& \quad x=x_{\Sigma} + p\nu(x_{\Sigma})\in\Sigma_{\gamma},\\ &0,\quad &\text{if}& \quad x\in \mathbb{R}^{3}\setminus \Sigma_{\gamma}. \end{aligned} \right.$$ It is easy to see that $\lim_{\varepsilon \rightarrow 0} V_{\eta,\tau,\varepsilon} = B_{\eta,\tau}\delta_{\Sigma}$, in $\mathcal{D}^{'}(\mathbb{R}^{3})^{4}.$ For $0<\varepsilon<\gamma$, we define the family of Dirac operators $\lbrace\mathscr{E}_{\eta,\tau,\varepsilon}\rbrace_{\varepsilon}$ as follows: $$\label{familleE} \begin{aligned} &\mathrm{dom} (\mathscr{E}_{\eta,\tau,\varepsilon}):=\mathrm{dom} (D_m)= H^{1}(\mathbb{R}^{3})^{4}, \\ & \mathscr{E}_{\eta,\tau,\varepsilon} \psi = D_{m} \psi + V_{\eta,\tau,\varepsilon}\psi,\quad\text{for all } \psi\in\mathrm{dom}(\mathscr{E}_{\eta,\tau,\varepsilon}). \end{aligned}$$ The main purpose of the present paper is to study the strong resolvent limit of $\mathscr{E}_{\eta,\tau,\varepsilon}$ as $\varepsilon\rightarrow 0.$ To do this, we introduce some notation and geometric aspects which we will use in the rest of the paper. ## Notations and geometric aspects {#geo} Let $\Sigma$ be parametrized by the family $\lbrace \phi_{j}, U_{j},V_{j}\rbrace_{j\in J}$ with $J$ a finite set, $U_j\subset {\mathbb R}^{2},\,V_j \subset {\mathbb R}^{3},\,\Sigma\subset \bigcup_{j\in J}V_j$ and $\phi_j(U_{j})=V_j\cap\Sigma\subset\Sigma\subset\mathbb{R}^{3}$ for all $j\in J.$ We set $s=\phi_{j}^{-1}(x_{\Sigma})$ for any $x_{\Sigma}\in\Sigma.$ **Definition 2** (Weingarten map).
*For $x_{\Sigma}=\phi_j (s)\in\Sigma\cap V_j$ with $s\in U_j,$ one defines the Weingarten map (arising from the second fundamental form) as the following linear operator $$\begin{aligned} \begin{array}{rcl} W_{x_{\Sigma}}:=W(x_{\Sigma}):T_{x_{\Sigma}} &\to & T_{x_{\Sigma}}\\ \partial_i \phi_j (s) &\mapsto & W(x_{\Sigma})[\partial_i \phi_j] (s):=-\partial_i \nu (\phi_j (s)), \end{array}\end{aligned}$$ where $T_{x_{\Sigma}}$ denotes the tangent space of $\Sigma$ at $x_{\Sigma}$ and $\lbrace \partial_i \phi_j (s)\rbrace_{i=1,2}$ is a basis of $T_{x_{\Sigma}}$.* **Proposition 1**. *[@JAT Chapter 9 (Theorem 2), 12 (Theorem 2)].[\[WP\]]{#WP label="WP"} Let ${\Sigma}$ be an $n$-surface in $\mathbb{R}^{n+1}$, oriented by the unit normal vector field $\nu$, and let $x\in {\Sigma}$. Then, the Weingarten map satisfies the following properties:* - *It is symmetric with respect to the inner product induced by the first fundamental form.* - *It is self-adjoint; that is, $W_{x}(v)\cdot w=v \cdot W_{x}(w)$ for all $v,\,w\in T_{x}.$* - *The eigenvalues $k_{1}(x),\dots,k_{n}(x)$ of the Weingarten map $W_{x}$ are called the principal curvatures of $\Sigma$ at $x$. Moreover, $k_{1}(x),\dots,k_{n}(x)$ are uniformly bounded on $\Sigma$.* - *The quadratic form associated with the Weingarten map at a point $x$ is called the second fundamental form of $\Sigma$ at $x$.* The following theorem is the main result of this paper. **Theorem 1**. *Let $(\eta,\tau)\in\mathbb{R}^{2}$, and set $d=\eta^2 - \tau^2$.
Let $(\hat{\eta},\hat{\tau})\in\mathbb{R}^2$ be defined as follows: $$\label{parametres} \begin{aligned} &\bullet \text{if } d<0, \text{ then } (\hat{\eta},\hat{\tau})=\dfrac{\mathrm{tanh}(\sqrt{-d}/2)}{(\sqrt{-d}/2)}(\eta,\tau),\\ &\bullet \text{if } d=0, \text{ then } (\hat{\eta},\hat{\tau})=(\eta,\tau),\\ &\bullet \text{if } d>0 \text{ such that } d\neq (2k+1)^{2}\pi^2, \,k\in\mathbb{N}\cup\lbrace 0 \rbrace, \text{ then } (\hat{\eta},\hat{\tau})=\dfrac{\mathrm{tan}(\sqrt{d}/2)}{(\sqrt{d}/2)}(\eta,\tau).\, %\text{such that } d = (2k_{0})^{2}\pi^{2} \text{ for some } k_{0}\in\mathbb{N}\cup\lbrace 0 \rbrace.\\ \end{aligned}$$ Now, let $\mathscr{E}_{\eta,\tau,\varepsilon}$ be defined as in [\[familleE\]](#familleE){reference-type="eqref" reference="familleE"} and $D_{\hat{\eta},\hat{\tau}}$ as in Definition [Definition 1](#DiracelectroLorentz){reference-type="ref" reference="DiracelectroLorentz"}. Then, $$\begin{aligned} \label{relation} \mathscr{E}_{\eta,\tau,\varepsilon} \xrightarrow[\varepsilon\rightarrow 0]{} D_{\hat{\eta},\hat{\tau}} \quad \text{in the strong resolvent sense.}\end{aligned}$$* **Remark 1**. 
*We mention that in this work we obtain approximations by the regular operators $\mathscr{E}_{\eta,\tau,\varepsilon}$, in the strong resolvent sense, of the Dirac operator with $\delta$-shell potential in the non-critical case (i.e., when $d \neq 4$) and the non-confining case (i.e., when $d\neq -4$), everywhere on $\Sigma.$ This is precisely what is established in the proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}.* ### Tubular neighborhood of $\Sigma$ {#tubular} Recall that for $\Omega\subset\mathbb{R}^{3}$ a bounded domain with smooth boundary $\Sigma$ parametrized by $\phi=\lbrace \phi_{j}\rbrace_{j\in J},$ we set $\nu_{\phi}=\nu\circ\phi$, where $\nu: \Sigma\longrightarrow \mathbb{R}^{3}$ is the unit normal vector field pointing outwards of $\Omega$; this field is independent of the particular choice of the positively oriented parametrization $\phi$.\ \ For $\gamma>0,$ the set $\Sigma_{\gamma}$ defined in [\[neighborhood\]](#neighborhood){reference-type="eqref" reference="neighborhood"} is a tubular neighborhood of $\Sigma$ of width $\gamma$. We define the diffeomorphism $\Phi_{\phi}$ by: $$\begin{aligned} \Phi_{\phi}: U_{x_{\Sigma}}\times (-\gamma,\gamma) &\longrightarrow \mathbb{R}^{3} \\ (s,p) &\longmapsto \Phi_{\phi}(s,p) = \phi(s) + p\nu(\phi(s)). \end{aligned}$$ For $\gamma$ small enough, $\Phi_{\phi}$ is a smooth parametrization of $\Sigma_{\gamma}$.
Moreover, the matrix of the differential $\mathrm{d}\Phi_{\phi}$ of $\Phi_{\phi}$ in the canonical basis of $\mathbb{R}^{3}$ is $$\begin{aligned} \label{gradfi} \mathrm{d}\Phi_{\phi}(s,p)= \Big( \partial_1 \phi(s) + p\,\mathrm{d}\nu(\partial_1 \phi)(s)\quad \partial_2 \phi(s) + p\,\mathrm{d}\nu(\partial_2 \phi)(s)\quad \nu_{\phi}(s)\Big).\end{aligned}$$ Thus, the differential on $U_{x_{\Sigma}}$ and the differential on $(-\gamma,\gamma)$ of $\Phi_{\phi}$ are respectively given by $$\begin{aligned} &\mathrm{d}_{s}\Phi_{\phi}(s,p)= \partial_{i}\phi_j(s) - pW(x_{\Sigma})\partial_{i}\phi_j(s) \quad \text{ for } i=1,2 \text{ and } x_{\Sigma}\in\Sigma,\\ &\mathrm{d}_{p}\Phi_{\phi}(s,p)= \nu_{\phi}(s), \end{aligned}$$ where $\partial_{i}\phi$, $\nu_{\phi}$ should be understood as column vectors, and $W(x_{\Sigma})$ is the Weingarten map defined as in Definition [Definition 2](#Weingarten){reference-type="ref" reference="Weingarten"}. Next, we define $$\label{PP} \begin{aligned} & \mathscr{P}_{\phi}:= \Big(\Phi_{\phi}^{-1} \Big)_{1} : \Sigma_{\gamma} \longrightarrow U_{x_{\Sigma}}\subset\mathbb{R}^{2};\quad \mathscr{P}_{\phi}\big( \phi(s) + p\nu(\phi(s)) \big) = s\in\mathbb{R}^{2},\quad x_{\Sigma}=\phi(s),\\ & \mathscr{P}_{\perp}:= \Big(\Phi_{\phi}^{-1} \Big)_{2} : \Sigma_{\gamma} \longrightarrow (-\gamma,\gamma);\quad \mathscr{P}_{\perp}\big( \phi(s) + p\nu(\phi(s)) \big) = p. 
\end{aligned}$$ Using the inverse function theorem together with [\[gradfi\]](#gradfi){reference-type="eqref" reference="gradfi"}, we obtain, for $x=\phi(s) + p\nu(\phi(s))\in\Sigma_{\gamma}$, the following differentials $$\label{gradPP} \begin{aligned} \nabla \mathscr{P}_{\phi}(x)=\big ( 1-pW(s)\big)^{-1}t_{\phi}(s)\quad\text{and} \quad \nabla \mathscr{P}_{\perp}(x)=\nu_{\phi}(s), \end{aligned}$$ with $t_{\phi}(s)=\partial_{i}\phi(s),$ $i=1,2.$ ## Preparations for proof Before presenting the tools for the proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}, let us state some properties satisfied by the operator $D_{\eta,\tau}.$ **Lemma 1**. *Let $(\eta,\tau)\in\mathbb{R}^{2},$ and let $D_{\eta,\tau}$ be as in Definition [Definition 1](#DiracelectroLorentz){reference-type="ref" reference="DiracelectroLorentz"}. Then, the following hold:* - *If $\eta^{2} - \tau^{2} \neq -4$, then there exists an invertible matrix $R_{\eta,\tau}$ such that a function $f = f_+ \oplus f_- \in H^1(\Omega_+)^4 \oplus H^1(\Omega_-)^4$ belongs to $\mathrm{dom}(D_{\eta,\tau})$ if and only if $t_{\Sigma} f_{+} = R_{\eta,\tau} t_{\Sigma}f_{-},$ with $R_{\eta,\tau}$ given by $$\begin{aligned} \label{R} R_{\eta,\tau}:= \Big(\mathbb{I}_4 - \dfrac{i\alpha\cdot\nu}{2}(\eta \,\mathbb{I}_{4} + \tau\beta)\Big)^{-1}\Big (\mathbb{I}_4 + \dfrac{i\alpha\cdot\nu}{2}(\eta \,\mathbb{I}_{4} + \tau\beta)\Big). \end{aligned}$$* - *If $\eta^{2} - \tau^{2} = -4,$ then a function $f = f_+ \oplus f_- \in H^1(\Omega_+)^4 \oplus H^1(\Omega_-)^4$ belongs to $\mathrm{dom}(D_{\eta,\tau})$ if and only if $$\begin{aligned} \Big ( \,\mathbb{I}_{4} - \dfrac{i\alpha\cdot\nu}{2}(\eta \,\mathbb{I}_{4} + \beta\tau)\Big) t_{\Sigma} f_{+} = 0 \quad \text{ and } \quad \Big ( \,\mathbb{I}_{4} + \dfrac{i\alpha\cdot\nu}{2}(\eta \,\mathbb{I}_{4} + \beta\tau)\Big) t_{\Sigma} f_{-} = 0.
\end{aligned}$$* **Proof.** We use the transmission condition introduced in [\[DOMDL\]](#DOMDL){reference-type="eqref" reference="DOMDL"}. For assertion (i): for all $f=f_+ \oplus f_- \in \mathrm{dom}(D_{\eta,\tau}),$ we have $$\begin{aligned} \Big(i\alpha\cdot\nu +\dfrac{1}{2}(\eta \mathbb{I}_4 + \tau \beta)\Big)t_{\Sigma}f_+ = \Big(i\alpha\cdot\nu - \dfrac{1}{2}(\eta \mathbb{I}_4 + \tau \beta)\Big)t_{\Sigma}f_-.\end{aligned}$$ Thanks to the properties in [\[PMD\]](#PMD){reference-type="eqref" reference="PMD"} and the fact that $(i\alpha\cdot\nu)^{-1} = -i\alpha\cdot\nu$, we get that $$\begin{aligned} \label{relation} \Big( \,\mathbb{I}_{4} - M\Big) t_{\Sigma} f_{+} = \Big( \,\mathbb{I}_{4} + M\Big) t_{\Sigma} f_{-},\end{aligned}$$ where $M$ is the $4\times4$ matrix $$\begin{aligned} M=\dfrac{i\alpha\cdot\nu}{2}(\eta \,\mathbb{I}_{4} + \beta\tau),\end{aligned}$$ from which [\[R\]](#R){reference-type="eqref" reference="R"} follows once the invertibility of $\mathbb{I}_4 - M$ is established.\ Indeed, since $d:=\eta^{2} - \tau^{2} \neq -4$ and $M^{2} = -\dfrac{d}{4} \mathbb{I}_4$, we have $(\mathbb{I}_4 - M )(\mathbb{I}_4 + M) = \dfrac{4+d}{4}\mathbb{I}_4$; hence $(\mathbb{I}_4 - M)$ is invertible and $(\mathbb{I}_4 - M)^{-1} = \dfrac{4}{4+d}(\mathbb{I}_4 + M)$.
Consequently, using [\[relation\]](#relation){reference-type="eqref" reference="relation"} we obtain that $t_{\Sigma}f_+ = R_{\eta,\tau}t_{\Sigma}f_-$, where $R_{\eta,\tau}$ has the following explicit form $$\begin{aligned} \label{ExplR} R_{\eta,\tau} = \dfrac{4}{4+d}\Bigg( \dfrac{4-d}{4}\mathbb{I}_4 + i\alpha\cdot\nu (\eta \mathbb{I}_4 + \tau\beta)\Bigg).\end{aligned}$$ For assertion (ii), multiplying [\[relation\]](#relation){reference-type="eqref" reference="relation"} by $(\mathbb{I}_4 + M)$ and $(\mathbb{I}_4 - M)$, respectively, and using that $(\mathbb{I}_4 + M)(\mathbb{I}_4 - M)=0$ when $d=-4$, we get $$\begin{aligned} (\mathbb{I}_4 + M)^{2} t_{\Sigma}f_-= 0 \quad \text{and} \quad (\mathbb{I}_4 - M)^{2} t_{\Sigma}f_+ = 0.\end{aligned}$$ This completes the proof of Lemma [Lemma 1](#lemma3.1){reference-type="ref" reference="lemma3.1"}.◻ # Proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"} {#prooftheorem3.1} Let $\lbrace\mathscr{E}_{\eta,\tau,\varepsilon}\rbrace_{\varepsilon \in (0,\gamma)}$ and $D_{\hat{\eta},\hat{\tau}}$ be as defined in [\[familleE\]](#familleE){reference-type="eqref" reference="familleE"} and Definition [Definition 1](#DiracelectroLorentz){reference-type="ref" reference="DiracelectroLorentz"}, respectively. Since the potentials $V_{\eta,\tau,\varepsilon}$ are bounded and symmetric, the Kato-Rellich theorem implies that the operators $\mathscr{E}_{\eta,\tau,\varepsilon}$ are self-adjoint in $L^{2}(\mathbb{R}^{3})^{4}.$ Moreover, we know that $D_{\hat{\eta},\hat{\tau}}$ is self-adjoint and $\mathrm{dom}(D_{\hat{\eta},\hat{\tau}})\subset H^{1}(\mathbb{R}^{3}\setminus \Sigma)^{4}.$ Since the approximating operators and the limit operator are all self-adjoint, it has been shown in [@RS Theorem VIII.26] that $\lbrace\mathscr{E}_{\eta,\tau,\varepsilon}\rbrace_{\varepsilon\in (0,\gamma)}$ converges in the strong resolvent sense to $D_{\hat{\eta},\hat{\tau}}$ as $\varepsilon\rightarrow 0$ if and only if it converges in the strong graph limit sense.
The latter means that, for all $\psi \in \mathrm{dom}(D_{\hat{\eta},\hat{\tau}})$, there exists a family of vectors $\lbrace \psi_{\varepsilon}\rbrace_{\varepsilon\in (0,\gamma)} \subset \mathrm{dom} (\mathscr{E}_{\eta,\tau,\varepsilon})$ such that $$\begin{aligned} \label{SGL} \mathrm{(a)} \,\lim_{\varepsilon\rightarrow 0} \psi_{\varepsilon} = \psi \quad\text{ and } \quad \mathrm{(b)}\,\lim_{\varepsilon\rightarrow 0} \mathscr{E}_{\eta,\tau,\varepsilon}\psi_{\varepsilon} = D_{\hat{\eta},\hat{\tau}}\psi \quad \text{in } L^{2}(\mathbb{R}^{3})^{4}.\end{aligned}$$ Let $\psi\equiv \psi_+ \oplus \psi_- \in \mathrm{dom}(D_{\hat{\eta},\hat{\tau}}).$ From [\[parametres\]](#parametres){reference-type="eqref" reference="parametres"}, we have that $$\begin{aligned} \hat{d} = \hat{\eta}^{2} - \hat{\tau}^{2} = -4\mathrm{tanh}^{2}(\sqrt{-d}/2),\quad\text{if}\quad d<0,\\ \hat{d} = \hat{\eta}^{2} - \hat{\tau}^{2} = 4\mathrm{tan}^{2}(\sqrt{d}/2),\quad\text{if}\quad d>0,\\ \hat{d} = \hat{\eta}^{2} - \hat{\tau}^{2} = 0,\quad\text{if}\quad d=0.\end{aligned}$$ In all cases, we have that $\hat{d} > -4$ (in particular $\hat{d}\neq -4)$. Then, by Lemma [Lemma 1](#lemma3.1){reference-type="ref" reference="lemma3.1"} (i), $$\begin{aligned} t_{\Sigma} \psi_+ = R_{\hat{\eta}, \hat{\tau}} t_{\Sigma} \psi_-,\end{aligned}$$ where $R_{\hat{\eta}, \hat{\tau}}$ is given in [\[ExplR\]](#ExplR){reference-type="eqref" reference="ExplR"}.\ \ Using Definition [Definition 1](#DiracelectroLorentz){reference-type="ref" reference="DiracelectroLorentz"}, we get that $t_{\Sigma}\psi_{\pm}\in H^{1/2}(\Sigma)^{4}.$\ \ $\bullet\,\,$ ***Show that*** $$\begin{aligned} \label{REXP} e^{i\alpha\cdot\nu B_{\eta,\tau}} = R_{\hat{\eta}, \hat{\tau}}.
\end{aligned}$$\ Recall the definitions of the family $\mathscr{E}_{\eta,\tau,\varepsilon}$ and of $V_{\eta,\tau,\varepsilon}$ given in [\[familleE\]](#familleE){reference-type="eqref" reference="familleE"} and [\[potential\]](#potential){reference-type="eqref" reference="potential"}, respectively. We have that $$\begin{aligned} (i\alpha\cdot\nu B_{\eta,\tau})^{2}= (i\alpha\cdot\nu (\eta \mathbb{I}_4 + \tau\beta))^{2}=-(\eta^{2} - \tau^{2})\,\mathbb{I}_4=:D^{2}\,\mathbb{I}_4,\quad \text{with } D= \sqrt{-(\eta^{2} - \tau^{2})}=\sqrt{-d}.\end{aligned}$$ Using this equality, we can write: $e^{i\alpha\cdot\nu B_{\eta,\tau}} = e^{-D} \Pi_- + e^{D}\Pi_+$, where $\pm D$ are the eigenvalues of $i\alpha\cdot\nu B_{\eta,\tau}$ and the corresponding eigenprojections $\Pi_{\pm}$ are given by: $$\begin{aligned} \Pi_{\pm}:= \dfrac{1}{2}\Bigg( \mathbb{I}_4 \pm \dfrac{i\alpha\cdot\nu B_{\eta,\tau}}{D}\Bigg).\end{aligned}$$ Therefore, $$\begin{aligned} e^{(i\alpha\cdot\nu B_{\eta,\tau})}& =\Bigg(\dfrac{ e^{D} + e^{-D}}{2} \Bigg)\mathbb{I}_4 + \dfrac{i\alpha\cdot\nu B_{\eta,\tau}}{D}\Bigg(\dfrac{e^{D} - e^{-D}}{2}\Bigg)\\&= \mathrm{cosh}(D) \mathbb{I}_4 + \dfrac{\mathrm{sinh}(D)}{D}(i\alpha\cdot\nu (\eta \mathbb{I}_4 + \tau\beta)).\end{aligned}$$ Now, the idea is to show [\[REXP\]](#REXP){reference-type="eqref" reference="REXP"}, *i.e.*, it remains to show $$\begin{aligned} \label{ERE} \dfrac{4}{4+\hat{d}}\Bigg( \dfrac{4-\hat{d}}{4} \mathbb{I}_4 + i\alpha\cdot\nu(\hat{\eta} \mathbb{I}_4 + \hat{\tau}\beta)\Bigg)-\mathrm{cosh}(D) \mathbb{I}_4 - \dfrac{\mathrm{sinh(D)}}{D}(i\alpha\cdot\nu (\eta \mathbb{I}_4 + \tau \beta))=0. \end{aligned}$$ To this end, set $\mathfrak{U}=\dfrac{4-\hat{d}}{4+\hat{d}}-\mathrm{cosh(D)}$ and $\mathfrak{V} = \dfrac{4}{4+\hat{d}} - \dfrac{\mathrm{sinh(D)}}{D}$. Applying [\[ERE\]](#ERE){reference-type="eqref" reference="ERE"} to the unit vector $e_1 =(1\,\,0\,\,0\,\,0)^{t},$ we find that necessarily $\mathfrak{U} = \mathfrak{V} = 0$.
Hence, [\[ERE\]](#ERE){reference-type="eqref" reference="ERE"} holds if and only if $$\begin{aligned} \quad \mathrm{cosh}(D) = \dfrac{4-\hat{d}}{4+\hat{d}} \quad\text{and} \quad \dfrac{\mathrm{sinh}(D)}{D}(\eta,\tau) = \dfrac{4}{4+\hat{d}}(\hat{\eta},\hat{\tau}).\end{aligned}$$ Consequently, we have $R_{\hat{\eta}, \hat{\tau}}= e^{i\alpha\cdot\nu B_{\eta,\tau}}$.\ \ Moreover, dividing the second identity by $1+\mathrm{cosh}(D)=\dfrac{8}{4+\hat{d}}$, we get that $$\begin{aligned} (\hat{\eta},\hat{\tau})= \dfrac{\mathrm{sinh}(D)}{1+\mathrm{cosh}(D)}\dfrac{1}{D/2}(\eta,\tau).\end{aligned}$$ Applying the elementary identity $\mathrm{tanh(\dfrac{\theta}{2})} = \dfrac{\mathrm{sinh}(\theta)}{1+\mathrm{cosh}(\theta)}$, valid for all $\theta\in\mathbb{C}\setminus\lbrace i(2k+1)\pi,\, k\in\mathbb{Z}\rbrace$, we conclude that $$\begin{aligned} \dfrac{\mathrm{tanh}(\sqrt{-d}/2)}{\sqrt{-d}/2}(\eta,\tau) = (\hat{\eta},\hat{\tau}),\quad \text{if } d<0.\end{aligned}$$ For $d>0$ we have $D=\sqrt{-d}=i\sqrt{d}$, so applying the elementary identity $-i\mathrm{tanh}(i\theta) = \mathrm{tan} (\theta)$, valid for all $\theta\in \mathbb{C}\setminus \lbrace \pi(k+\dfrac{1}{2}),\, k\in \mathbb{Z}\rbrace$, we get that $$\begin{aligned} \dfrac{\mathrm{tanh}(\sqrt{-d}/2)}{\sqrt{-d}/2}= \dfrac{\mathrm{tan}(\sqrt{d}/2)}{\sqrt{d}/2}. %\quad \text{for } d\neq (2k +1)^{2}\pi^{2},\,\,k\in\mathbb{N}\cup\lbrace 0 \rbrace.\end{aligned}$$ Hence, we obtain that $\dfrac{\mathrm{tan}(\sqrt{d}/2)}{\sqrt{d}/2} (\eta,\tau) =(\hat{\eta},\hat{\tau})$ if $d>0$ with $d\neq (2k+1)^{2}\pi^2$.
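The identity $e^{i\alpha\cdot\nu B_{\eta,\tau}} = R_{\hat{\eta},\hat{\tau}}$ with the renormalized parameters above lends itself to a numerical sanity check (an aside, not part of the proof). Everything concrete below is an illustrative assumption: the sample normal $\nu=(0,0,1)$, the sample strengths $(\eta,\tau)=(1,2)$ (so $d=-3<0$), and a plain Taylor series for the matrix exponential. The matrix conventions are those of [\[PMD\]](#PMD){reference-type="eqref" reference="PMD"}.

```python
import numpy as np

I4 = np.eye(4, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
# Pauli matrices and the resulting Dirac matrices alpha_k, beta
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
alpha = [np.block([[Z2, s], [s, Z2]]) for s in (s1, s2, s3)]
beta = np.block([[np.eye(2), Z2], [Z2, -np.eye(2)]]).astype(complex)

def expm(X, terms=60):
    # Plain Taylor series; adequate for these small bounded matrices.
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for k in range(1, terms):
        term = term @ X / k
        out += term
    return out

def R(eta, tau, an):
    # R_{eta,tau} = (I - M)^{-1}(I + M), M = (i/2)(alpha.nu)(eta I + tau beta)
    M = 0.5j * an @ (eta * I4 + tau * beta)
    return np.linalg.inv(I4 - M) @ (I4 + M)

nu = np.array([0.0, 0.0, 1.0])                 # sample unit normal (assumption)
an = sum(nu[k] * alpha[k] for k in range(3))   # alpha . nu

eta, tau = 1.0, 2.0                            # sample strengths: d = -3 < 0
d = eta**2 - tau**2
scale = np.tanh(np.sqrt(-d) / 2) / (np.sqrt(-d) / 2)
eta_h, tau_h = scale * eta, scale * tau        # renormalized parameters (hat)

lhs = expm(1j * an @ (eta * I4 + tau * beta))  # exp(i alpha.nu B_{eta,tau})
rhs = R(eta_h, tau_h, an)                      # R_{hat eta, hat tau}
print(np.max(np.abs(lhs - rhs)))               # tiny: the identity holds
```

The printed deviation is at machine-precision level, matching the algebraic derivation; replacing $(\eta,\tau)$ by other values with $d<0$ gives the same agreement.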
Consequently, the equality $e^{i\alpha\cdot\nu B_{\eta,\tau}} = R_{\hat{\eta}, \hat{\tau}}$ holds precisely when the parameters satisfy: $$\label{base} \begin{aligned} &\bullet \,\,\dfrac{\mathrm{tanh}(\sqrt{-d}/2)}{\sqrt{-d}/2}(\eta,\tau) = (\hat{\eta},\hat{\tau}),&\quad \text{if }& d<0,\\ & \bullet\,\, \dfrac{\mathrm{tan}(\sqrt{d}/2)}{\sqrt{d}/2} (\eta,\tau) =(\hat{\eta},\hat{\tau}),& \quad \text{if }& d>0,\\ & \bullet \,\, (\eta,\tau) = (\hat{\eta},\hat{\tau}),& \quad \text{if }& d=0. \end{aligned}$$ Moreover, the fact that $\int_{-\varepsilon}^{\varepsilon}h_{\varepsilon}(t)dt =1$ (see [\[prophe\]](#prophe){reference-type="eqref" reference="prophe"}), together with the statement [\[REXP\]](#REXP){reference-type="eqref" reference="REXP"}, makes it possible to write $$\begin{aligned} \label{1} \mathrm{exp}\Bigg[\big( -i\int_{-\varepsilon}^{0} h_{\varepsilon}(t) \,\mathrm{d}t\big)(\alpha\cdot\nu B_{\eta,\tau})\Bigg] t_{\Sigma}\psi_{+} = \mathrm{exp}\Bigg[\big( i\int^{\varepsilon}_{0} h_{\varepsilon}(t) \,\mathrm{d}t\big)(\alpha\cdot\nu B_{\eta,\tau})\Bigg]t_{\Sigma}\psi_{-}.\end{aligned}$$ $\bullet$ ***Construction of the family $\boldsymbol{\lbrace\psi_{\varepsilon}\rbrace_{\varepsilon\in(0,\gamma)}}$.*** For all $0<\varepsilon<\gamma$, we define the function $H_{\varepsilon}: \mathbb{R}\setminus\lbrace 0 \rbrace \rightarrow \mathbb{R}$ by $$\label{familleH} H_{\varepsilon}(p):= \left\{ \begin{aligned} &\int^{\varepsilon}_{p} h_{\varepsilon}(t) \,\mathrm{d}t,\quad &\text{if}& \quad 0<p<\varepsilon,\\ & -\int_{-\varepsilon}^{p} h_{\varepsilon}(t) \,\mathrm{d}t,\quad &\text{if}& \quad -\varepsilon<p<0,\\ &0,\quad &\text{if}& \quad |p|\geq \varepsilon. \end{aligned} \right.$$ Clearly, $H_{\varepsilon}\in L^{\infty}(\mathbb{R})$ and is supported in $(-\varepsilon,\varepsilon)$.
Since $|| H_{\varepsilon}||_{L^{\infty}}\leq ||h||_{L^{1}}$, the family $\lbrace H_{\varepsilon}\rbrace_{\varepsilon}$ is bounded uniformly in $\varepsilon.$ For all $\varepsilon\in(0,\gamma)$, the restrictions of $H_{\varepsilon}$ to $\mathbb{R}_{\pm}$ are uniformly continuous, so they admit finite limits at $p=0$, and they are differentiable a.e. with bounded derivative, since $h_{\varepsilon}\in L^{\infty}(\mathbb{R},\mathbb{R})$. Using these functions, we define the matrix-valued functions $\mathbb{U}_{\varepsilon}: \mathbb{R}^{3}\setminus \Sigma \rightarrow \mathbb{C}^{4\times4}$ by $$\label{familleU} \mathbb{U}_{\varepsilon}(x):= \left\{ \begin{aligned} &e^{(i\alpha\cdot\nu) B_{\eta,\tau} H_{\varepsilon} (\mathscr{P}_{\perp}(x))},\quad &\text{if}& \quad x\in \Sigma_{\varepsilon}\setminus\Sigma,\\ & \,\mathbb{I}_{4},\quad &\text{if}& \quad x\in\mathbb{R}^{3}\setminus\Sigma_{\varepsilon}, \end{aligned}\quad \in L^{\infty}(\mathbb{R}^{3}, \mathbb{C}^{4\times4}), \right.$$ where the mapping $\mathscr{P}_{\perp}$ is defined as in [\[PP\]](#PP){reference-type="eqref" reference="PP"}. As the functions $\mathbb{U}_{\varepsilon}$ are bounded, uniformly in $\varepsilon$, and uniformly continuous in $\Omega_{\pm}$, with a jump discontinuity across $\Sigma$, for all $x_{\Sigma}\in \Sigma$ and $y_{\pm}\in \Omega_{\pm}$ we get $$\label{UEP} \begin{aligned} \mathbb{U}_{\varepsilon}(x_{\Sigma}^{-})&:= \lim_{y_{-}\rightarrow x_{\Sigma}}\mathbb{U}_{\varepsilon}(y_-)= \mathrm{exp} \Big[i \Big(\int_{0}^{\varepsilon} h_{\varepsilon}(t) \,\mathrm{d}t \Big) (\alpha\cdot\nu (x_{\Sigma}))B_{\eta,\tau}\Big],\\ \mathbb{U}_{\varepsilon}(x_{\Sigma}^{+})&:= \lim_{y_{+}\rightarrow x_{\Sigma}}\mathbb{U}_{\varepsilon}(y_+)= \mathrm{exp} \Big[ -i\Big(\int_{-\varepsilon}^{0} h_{\varepsilon}(t) \,\mathrm{d}t \Big) (\alpha\cdot\nu (x_{\Sigma}))B_{\eta,\tau}\Big].
\end{aligned}$$ Thus, we construct $\psi_{\varepsilon}$ by $\psi_{\varepsilon} = \psi_{\varepsilon,+} \oplus \psi_{\varepsilon,-} := \mathbb{U}_{\varepsilon} \psi\in L^{2} (\mathbb{R}^{3})^{4}.$\ \ Since the $\mathbb{U}_{\varepsilon}$ are bounded, uniformly in $\varepsilon$, using the construction of $\psi_{\varepsilon}$ we get that $\psi_{\varepsilon} - \psi= (\mathbb{U}_{\varepsilon} -\mathbb{I}_4 )\psi$. Then, by the dominated convergence theorem and the fact that $\mathrm{supp}\, (\mathbb{U}_{\varepsilon} -\mathbb{I}_4 )\subset \overline{\Sigma_{\varepsilon}}$ with $| \Sigma_{\varepsilon}|\rightarrow 0$ as $\varepsilon \rightarrow 0$, it is easy to show that $$\begin{aligned} \label{a} \psi_{\varepsilon}\xrightarrow[\varepsilon\to 0]{}\psi\quad\text{in } L^{2}(\mathbb{R}^{3})^{4}. \end{aligned}$$ This proves assertion (a).\ \ $\bullet$ ***Show that** $\boldsymbol{\psi_{\varepsilon} \in \mathrm{dom}(\mathscr{E}_{\eta,\tau,\varepsilon})= H^{1}(\mathbb{R}^{3})^{4}}$.* This means that we must show that, for all $0<\varepsilon<\gamma$, $$\begin{aligned} \mathrm{(i)} \,\psi_{\varepsilon,\pm}\in H^{1}(\Omega_{\pm})^{4} \quad \text{and} \quad \mathrm{(ii)}\,t_{\Sigma}\psi_{\varepsilon,+} = t_{\Sigma}\psi_{\varepsilon,-} \in H^{1/2}(\Sigma)^{4}.\end{aligned}$$ Let us show point (i).
By construction of $\psi_{\varepsilon}$, we have $\psi_{\varepsilon} \in L^{2}(\mathbb{R}^{3})^{4}.$ It remains to check that the derivatives $\partial_{j}\mathbb{U}_{\varepsilon}$, $j=1,2,3,$ are bounded. To do so, recall the parametrization $\phi: U \rightarrow \Sigma\subset\mathbb{R}^{3}$ of $\Sigma$ defined at the beginning of Subsection [\[geo\]](#geo){reference-type="eqref" reference="geo"}, and let $A$ be the $4\times 4$ matrix-valued function $A(s):=i\alpha\cdot\nu(\phi(s)) B_{\eta,\tau},$ for $s=(s_1,s_2)\in U\subset\mathbb{R}^{2}.$ Thus, the matrix functions $\mathbb{U}_{\varepsilon}$ in [\[familleU\]](#familleU){reference-type="eqref" reference="familleU"} can be written $$\label{familleU1} \mathbb{U}_{\varepsilon}(x)= \left\{ \begin{aligned} &e^{A(\mathscr{P}_{\phi}(x)) H_{\varepsilon} (\mathscr{P}_{\perp}(x))},\quad &\text{if}& \quad x\in \Sigma_{\varepsilon}\setminus\Sigma,\\ & \,\mathbb{I}_{4},\quad &\text{if}& \quad x\in\mathbb{R}^{3}\setminus\Sigma_{\varepsilon}, \end{aligned}\quad \in L^{\infty}(\mathbb{R}^{3}, \mathbb{C}^{4\times4}), \right.$$ where $\mathscr{P}_{\phi}$ is defined as in [\[PP\]](#PP){reference-type="eqref" reference="PP"}.\ \ For $j=1,2,3,$ $\mathrm{supp}\,\partial_{j}\mathbb{U}_{\varepsilon}\subset \Sigma_{\varepsilon}$. Furthermore, it was shown in [@RMW Eq.(4.1)] that for all $x\in \Sigma_{\varepsilon}\setminus\Sigma$, $\partial_{j}\mathbb{U}_{\varepsilon}$ can be written as follows $$\label{djU} \begin{aligned} \partial_{j}\mathbb{U}_{\varepsilon}(x)=\int_{0}^{1} \Bigg[\mathrm{exp} \Big( z A(\mathscr{P}_{\phi}(x)) H_{\varepsilon} (\mathscr{P}_{\perp}(x)) \Big)\partial_{j}\Big(A(\mathscr{P}_{\phi}(x)) H_{\varepsilon} (\mathscr{P}_{\perp}(x))\Big)\times \\ \mathrm{exp} \Big( (1-z) A(\mathscr{P}_{\phi}(x)) H_{\varepsilon} (\mathscr{P}_{\perp}(x)) \Big)\Bigg]\mathrm{d}z.
\end{aligned}$$ Let $x=\phi(s) + p \nu(\phi(s))\in \Sigma_{\gamma},$ and recall the definition of the mappings $\mathscr{P}_{\phi}(x)$ and $\mathscr{P}_{\perp}(x)$ introduced in [\[PP\]](#PP){reference-type="eqref" reference="PP"}. Based on the quantities [\[gradPP\]](#gradPP){reference-type="eqref" reference="gradPP"} (with $s=\mathscr{P}_{\phi}(x)$ and $p=\mathscr{P}_{\perp}(x)$), we get that $$\begin{aligned} \label{djU2} \partial_{j}\Big(A(\mathscr{P}_{\phi}(x)) H_{\varepsilon} (\mathscr{P}_{\perp}(x)) \Big) = \partial_{s}A(s)(1-p W(s))^{-1} (t_{\phi}(s))_{j}H_{\varepsilon}(p) - A(s) h_{\varepsilon}(p)(\nu_{\phi}(s))_{j}.\end{aligned}$$ Therefore, $\partial_{j}\mathbb{U}_{\varepsilon}$ has the following form $$\label{djU3} \begin{aligned} \partial_{j}\mathbb{U}_{\varepsilon}(x)&= - A(s) h_{\varepsilon}(p)(\nu_{\phi}(s))_{j}\mathbb{U}_{\varepsilon}(x)\\& + \int_{0}^{1} e^{z A(s) H_{\varepsilon} (p)}\big[\partial_{s}A(s)(1-p W(s))^{-1} (t_{\phi}(s))_{j}H_{\varepsilon}(p)\big]e^{(1-z) A(s) H_{\varepsilon} (p)}\,\mathrm{d}z. \end{aligned}$$ Denote by $\mathbb{E}_{\varepsilon,j}$ the second term on the right-hand side of equality [\[djU3\]](#djU3){reference-type="eqref" reference="djU3"}, *i.e.*, $$\begin{aligned} \label{EE} \mathbb{E}_{\varepsilon,j} = \int_{0}^{1} e^{z A(s) H_{\varepsilon} (p)}\big[\partial_{s}A(s)(1-p W(s))^{-1} (t_{\phi}(s))_{j}H_{\varepsilon}(p)\big]e^{(1-z) A(s) H_{\varepsilon} (p)}\,\mathrm{d}z.\end{aligned}$$ Then, thanks to the third property in Proposition [\[WP\]](#WP){reference-type="ref" reference="WP"}, satisfied by the Weingarten map, the matrix-valued functions $\mathbb{E}_{\varepsilon,j}$ are bounded, uniformly for $0<\varepsilon<\gamma$, and $\mathrm{supp}\,\mathbb{E}_{\varepsilon,j}\subset \Sigma_{\varepsilon}$. Moreover, we have $\mathbb{U}_\varepsilon, \partial_j\mathbb{U}_\varepsilon \in L^{\infty}(\Omega_{\pm}, \mathbb{C}^{4\times4})$.
Hence, for all $\psi_{\pm}\in H^{1}(\Omega_{\pm})^{4}$ we have that $\psi_{\varepsilon, \pm}= \mathbb{U}_{\varepsilon}\psi_\pm\in H^{1}(\Omega_{\pm})^{4}$, and statement (i) is proved.\ \ Now, we show point (ii). As $\psi_{\varepsilon, \pm} \in H^{1}(\Omega_\pm)^{4}$, we get that $t_{\Sigma}\psi_{\varepsilon,\pm}\in H^{1/2}(\Sigma)^{4}.$ On the other hand, it has been shown in [@EG Chapter 4 (p.133)] that, for a.e. $x_{\Sigma}\in\Sigma$, $$\begin{aligned} t_{\Sigma}\psi_{\varepsilon,\pm} (x_{\Sigma}) = \lim_{r\rightarrow 0 }\dfrac{1}{| B(x_{\Sigma},r)|}\int_{\Omega_{\pm}\cap B(x_{\Sigma},r)}\psi_{\varepsilon}(y)\,\mathrm{d}y= \lim_{r\rightarrow 0 }\dfrac{1}{| B(x_{\Sigma},r)|}\int_{\Omega_{\pm}\cap B(x_{\Sigma},r)}\mathbb{U}_{\varepsilon}(y)\psi(y)\,\mathrm{d}y,\end{aligned}$$ and so, similarly, $$\begin{aligned} \mathbb{U}_{\varepsilon}(x_{\Sigma}^{\pm})t_{\Sigma}\psi_{\pm} (x_{\Sigma}) = \lim_{r\rightarrow 0 }\dfrac{1}{| B(x_{\Sigma},r)|}\int_{\Omega_{\pm}\cap B(x_{\Sigma},r)}\mathbb{U}_{\varepsilon}(x_{\Sigma}^{\pm})\psi(y )\,\mathrm{d}y.\end{aligned}$$ As $\mathbb{U}_{\varepsilon}$ is continuous in $\overline{\Omega_{\pm}}$, we get $t_{\Sigma}\psi_{\varepsilon,\pm} (x_{\Sigma})=\mathbb{U}_{\varepsilon}(x_{\Sigma}^{\pm})t_{\Sigma}\psi_{\pm} (x_{\Sigma}).$ Consequently, [\[1\]](#1){reference-type="eqref" reference="1"} together with [\[UEP\]](#UEP){reference-type="eqref" reference="UEP"} gives us that $t_{\Sigma} \psi_{\varepsilon,+} = t_{\Sigma} \psi_{\varepsilon,-} \in H^{1/2}(\Sigma)^{4}$. With this, (ii) holds and $\psi_{\varepsilon}\in \mathrm{dom}(\mathscr{E}_{\eta,\tau,\varepsilon})$.\ \ To complete the proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}, it remains to show property (b), mentioned in [\[SGL\]](#SGL){reference-type="eqref" reference="SGL"}.
Since $(\mathscr{E}_{\eta,\tau,\varepsilon}\psi_{\varepsilon} - D_{\hat{\eta}, \hat{\tau}}\psi)$ belongs to $L^{2}(\mathbb{R}^{3})^{4}$, it suffices to prove the following: $$\begin{aligned} \label{EDO} \mathscr{E}_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} - D_{\hat{\eta}, \hat{\tau}}\psi_{\pm} \xrightarrow[\varepsilon\to 0]{}0\quad \text{in } L^{2}(\Omega_{\pm})^{4}.\end{aligned}$$ To do this, let $\psi\equiv\psi_+\oplus\psi_- \in \mathrm{dom} (D_{\hat{\eta}, \hat{\tau}})$ and $\psi_{\varepsilon}\equiv\psi_{\varepsilon,+}\oplus\psi_{\varepsilon,-}\in\mathrm{dom} (\mathscr{E}_{\eta,\tau,\varepsilon})$. We have $$\label{diff} \begin{aligned} \mathscr{E}_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} - D_{\hat{\eta}, \hat{\tau}}\psi_{\pm} &= -i\alpha\cdot\nabla\psi_{\varepsilon,\pm}+ m\beta\psi_{\varepsilon,\pm} + V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} + i\alpha\cdot\nabla\psi_\pm - m\beta\psi_\pm \\& = -i\alpha\cdot\nabla(\mathbb{U}_{\varepsilon}\psi_\pm) + i\alpha\cdot\nabla\psi_\pm + m\beta (\mathbb{U}_{\varepsilon} - \mathbb{I}_4)\psi_\pm + V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm}\\& = -i\sum_{j=1}^{3}\alpha_j\big[(\partial_{j}\mathbb{U}_{\varepsilon})\psi_\pm + (\mathbb{U}_{\varepsilon} - \mathbb{I}_4)\partial_{j}\psi_\pm\big] + m\beta (\mathbb{U}_{\varepsilon} - \mathbb{I}_4)\psi_\pm + V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm}. 
\end{aligned}$$ Using the form of $\partial_{j}\mathbb{U}_{\varepsilon}$ given in [\[djU3\]](#djU3){reference-type="eqref" reference="djU3"}, the quantity $-i\sum_{j=1}^{3}\alpha_j(\partial_{j}\mathbb{U}_{\varepsilon})\psi_\pm$ yields $$\begin{aligned} -i\sum_{j=1}^{3}\alpha_j(\partial_{j}\mathbb{U}_{\varepsilon})\psi_\pm &= -i\sum_{j=1}^{3} \alpha_{j} \big[ -i\alpha\cdot\nu V_{\eta,\tau,\varepsilon}\nu_{j} \mathbb{U}_{\varepsilon}\psi_\pm + \mathbb{E}_{\varepsilon,j}\psi_\pm\big] \\& = -(\alpha\cdot\nu)^{2} V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} - i\sum_{j=1}^{3}\alpha_j\mathbb{E}_{\varepsilon,j}\psi_\pm\\& = -V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} + \mathbb{R}_{\varepsilon}\psi_\pm,\end{aligned}$$ where $\mathbb{E}_{\varepsilon,j}$ is given in [\[EE\]](#EE){reference-type="eqref" reference="EE"} and $\mathbb{R}_{\varepsilon}= - i\sum_{j=1}^{3}\alpha_j\mathbb{E}_{\varepsilon,j},$ a matrix-valued function in $L^{\infty}(\mathbb{R}^{3}, \mathbb{C}^{4\times4})$, satisfies the same properties as $\mathbb{E}_{\varepsilon,j}$ given in [\[EE\]](#EE){reference-type="eqref" reference="EE"}, for $\varepsilon\in(0,\gamma).$ Thus, [\[diff\]](#diff){reference-type="eqref" reference="diff"} becomes $$\begin{aligned} \mathscr{E}_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} - D_{\hat{\eta}, \hat{\tau}}\psi_\pm= -i\sum_{j=1}^{3}\alpha_j\big[(\mathbb{U}_{\varepsilon} - \mathbb{I}_4)\partial_{j}\psi_\pm\big] + m\beta (\mathbb{U}_{\varepsilon} - \mathbb{I}_4)\psi_\pm + \mathbb{R}_{\varepsilon}\psi_\pm.\end{aligned}$$ Moreover, $\psi_{\pm}\in H^{1}(\Omega_{\pm})^{4}$, the functions $\mathbb{U}_{\varepsilon} - \mathbb{I}_4$ and $\mathbb{R}_{\varepsilon}$ are bounded, uniformly in $\varepsilon\in(0,\gamma)$, and supported in $\Sigma_{\varepsilon},$ and $|\Sigma_{\varepsilon}|$ tends to 0 as $\varepsilon\rightarrow 0$.
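The cancellation of the potential term in the computation above rests on a standard algebraic identity for the Dirac matrices, which we recall for completeness (only the anticommutation relations and $|\nu|=1$ are used): $$\alpha_j\alpha_k+\alpha_k\alpha_j = 2\delta_{jk}\,\mathbb{I}_4 \quad\Longrightarrow\quad (\alpha\cdot\nu)^{2} = \sum_{j,k=1}^{3}\nu_j\nu_k\,\alpha_j\alpha_k = \frac{1}{2}\sum_{j,k=1}^{3}\nu_j\nu_k\,(\alpha_j\alpha_k+\alpha_k\alpha_j) = |\nu|^{2}\,\mathbb{I}_4 = \mathbb{I}_4,$$ which is exactly the step turning $-(\alpha\cdot\nu)^{2} V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm}$ into $-V_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm}$.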
By the dominated convergence theorem, we conclude that $$\begin{aligned} \label{relfin} \mathscr{E}_{\eta,\tau,\varepsilon}\psi_{\varepsilon,\pm} - D_{\hat{\eta}, \hat{\tau}}\psi_\pm \xrightarrow[\varepsilon\rightarrow 0 ]{} 0,\quad \text{holds in } L^{2}(\Omega_\pm)^{4},\end{aligned}$$ and this proves assertion [\[EDO\]](#EDO){reference-type="eqref" reference="EDO"}.\ \ Thus, the two conditions (a) and (b) of convergence in the strong graph limit sense, mentioned in [\[SGL\]](#SGL){reference-type="eqref" reference="SGL"}, are proved (see [\[a\]](#a){reference-type="eqref" reference="a"} and [\[relfin\]](#relfin){reference-type="eqref" reference="relfin"}). Note also that the latter remains stable with respect to bounded symmetric perturbations (in our case the mass term $m\beta(\mathbb{U}_{\varepsilon} - \mathbb{I}_4 )$ with $m>0$, so we may assume $m=0$). Hence, the family $\lbrace\mathscr{E}_{\varepsilon}\rbrace_{\varepsilon\in (0,\gamma)}$ converges in the strong resolvent sense to $D_{\hat{\eta}, \hat{\tau}}$ as $\varepsilon\rightarrow 0.$ The proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"} is complete. 0◻ # Acknowledgment I wish to express my gratitude to my thesis advisor Luis VEGA for suggesting the problem, for many stimulating conversations, for his patient advice, and for his enthusiastic encouragement. I would also like to thank my supervisor Vincent BRUNEAU for his reading, comments, and helpful criticism. N. Arrizabalaga, A. Mas, and L. Vega, *Shell interactions for Dirac operators*. J. Math. Pures Appl. (9), 102(4):617--639, 2014. N. Arrizabalaga, A. Mas, and L. Vega, *Shell interactions for Dirac operators: on the point spectrum and the confinement*. SIAM J. Math. Anal., 47(2):1044--1069, 2015. N. Arrizabalaga, A. Mas, and L. Vega, *An isoperimetric-type inequality for electrostatic shell interactions for Dirac operators*. Commun. Math. Phys., 344(2):483--505, 2016. J. Behrndt, P. Exner, M.
Holzmann, and V. Lotoreichik, *On Dirac operators in ${\mathbb R}^3$ with electrostatic and Lorentz scalar $\delta$-shell interactions*. Quantum Stud. Math. Found. 6: 295--314, 2019. J. Behrndt, M. Holzmann and M. Tǔsek, *Two-dimensional Dirac operators with general $\delta$-shell interactions supported on a straight line*. J. Phys. A 56: 045201 (29 pages), 2023. J. Behrndt, M. Holzmann and C. Stelzer, *Approximation of Dirac operators with $\delta$-shell potentials in the norm resolvent sense*, <https://doi.org/10.48550/arXiv.2308.13344>. B. Cassano, V. Lotoreichik, A. Mas and M. Tǔsek, *General $\delta$-Shell Interactions for the two-dimensional Dirac Operator: Self-adjointness and Approximation*. Rev. Mat. Iberoam. 39 (2023), no. 4, pp. 1443--1492. J. Dittrich, P. Exner and P. Šeba, *Dirac operators with a spherically symmetric $\delta$-shell interaction*. J. Math. Phys. 30 (1989), 2875--2882. R.J. Hughes, *Relativistic point interactions: approximation by smooth potentials*. Rep. Math. Phys. 39(3): 425--432, 1997. R.J. Hughes, *Finite-rank perturbations of the Dirac operator*. J. Math. Anal. Appl. 238: 67--81, 1999. L. C. Evans and R. F. Gariepy, *Measure theory and fine properties of functions*. CRC Press, 2015. A. Mas and F. Pizzichillo, *Klein's Paradox and the Relativistic $\delta$-shell Interaction in $\mathbb{R}^{3}$*, Analysis & PDE, 11 (2018), pp. 705--744. M. Reed and B. Simon, *Methods of modern mathematical physics. Vol. 1: Functional analysis*, Academic Press, New York, 1980. P. Šeba, *Klein's paradox and the relativistic point interaction*, Lett. Math. Phys., 18 (1989), pp. 77--86. B. Thaller, *The Dirac equation*, Texts and Monographs in Physics, Springer-Verlag, Berlin, 1992. J. A. Thorpe, *Elementary Topics in Differential Geometry*, Undergraduate Texts in Mathematics, Springer-Verlag New York Inc., 1979. M. Tušek, *Approximation of one-dimensional relativistic point interactions by regular potentials revised*, Lett. Math. Phys.
110: 2585--2601, 2020. R. M. Wilcox, *Exponential operators and parameter differentiation in quantum physics*, J. Math. Phys., 8 (1967), pp. 962--982.
--- abstract: | In this paper we consider the proximal alternating direction method of multipliers (ADMM) in Hilbert spaces from two different aspects. We first consider the application of the proximal ADMM to solve well-posed linearly constrained two-block separable convex minimization problems in Hilbert spaces and obtain new and improved non-ergodic convergence rate results, including linear and sublinear rates under certain regularity conditions. We next consider proximal ADMM as a regularization method for solving linear ill-posed inverse problems in Hilbert spaces. When the data is corrupted by additive noise, we establish, under a benchmark source condition, a convergence rate result in terms of the noise level when the number of iterations is properly chosen. address: Mathematical Sciences Institute, Australian National University, Canberra, ACT 2601, Australia author: - Qinian Jin date: March 3, 2023 title: On convergence rates of proximal alternating direction method of multipliers --- # **Introduction** The alternating direction method of multipliers (ADMM) was introduced and developed in the 1970s by Glowinski, Marrocco [@GM1975] and Gabay, Mercier [@GM1976] for the numerical solution of partial differential equations. Due to its decomposability and superior flexibility, ADMM and its variants have gained renewed interest in recent years and have been widely used for solving large-scale optimization problems that arise in signal/image processing, statistics, machine learning, inverse problems and other fields, see [@BPCPE2011; @GBO2010; @JJLW2016]. Because of their popularity, many works have been devoted to the analysis of ADMM and its variants, see [@BPCPE2011; @DY2016; @EP1992; @G1983; @HY2012; @LM1979; @ZBO2011] for instance.
This paper is devoted to deriving convergence rates of ADMM in two settings: its application to well-posed convex optimization problems and its use as a regularization method for solving linear ill-posed inverse problems. In the first part of this paper we consider ADMM for solving linearly constrained two-block separable convex minimization problems. Let $\mathcal X$, $\mathcal Y$ and $\mathcal Z$ be real Hilbert spaces with possibly infinite dimensions. We consider the convex minimization problem of the form $$\begin{aligned} \label{prob} \begin{split} \mbox{minimize } & \quad H(x, y) := f(x) + g(y) \\ \mbox{subject to } & \quad A x + By =c, \end{split}\end{aligned}$$ where $c \in \mathcal Z$, $A: \mathcal X\to \mathcal Z$ and $B: \mathcal Y\to \mathcal Z$ are bounded linear operators, and $f: \mathcal X\to (-\infty, \infty]$ and $g: \mathcal Y\to (-\infty, \infty]$ are proper, lower semi-continuous, convex functions. The classical ADMM solves ([\[prob\]](#prob){reference-type="ref" reference="prob"}) approximately by constructing an iterative sequence via alternately minimizing the augmented Lagrangian function $${\mathscr L}_\rho (x, y, \lambda) := f(x) + g(y) + \langle\lambda, A x + B y -c \rangle+ \frac{\rho}{2} \|A x + B y - c\|^2$$ with respect to the primal variables $x$ and $y$ and then updating the dual variable $\lambda$; more precisely, starting from an initial guess $y^0\in \mathcal Y$ and $\lambda^0\in \mathcal Z$, an iterative sequence $\{(x^k, y^k, \lambda^k)\}$ is defined by $$\begin{aligned} \label{admm} \begin{split} x^{k+1} &= \arg\min_{x\in \mathcal X} \left\{f(x) + \langle\lambda^k, A x\rangle+ \frac{\rho}{2} \|A x+B y^k -c\|^2 \right\},\\ y^{k+1} &= \arg\min_{y\in \mathcal Y} \left\{g(y) + \langle\lambda^k, B y\rangle+ \frac{\rho}{2} \|A x^{k+1}+By -c\|^2 \right\},\\ \lambda^{k+1} &= \lambda^k + \rho (A x^{k+1}+B y^{k+1}-c), \end{split}\end{aligned}$$ where $\rho>0$ is a given penalty parameter.
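As a concrete illustration of the iteration ([\[admm\]](#admm){reference-type="ref" reference="admm"}), consider the toy splitting $f(x)=\frac12\|x-d\|^2$, $g(y)=\mu\|y\|_1$ with $A=I$, $B=-I$, $c=0$ (so the constraint is $x=y$). Both subproblems then have closed-form solutions, and the limit is the soft-thresholding of $d$. The following sketch (the data $d$, $\mu$, $\rho$ are made up for the demonstration, not taken from the paper) runs the three updates componentwise:

```python
# Classical ADMM (admm) on: min (1/2)||x-d||^2 + mu*||y||_1  s.t.  x = y.
# Both subproblems are solvable in closed form; the limit is soft(d, mu).

def soft(v, t):
    # proximal mapping of t*||.||_1, applied componentwise
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

d, mu, rho = [2.0, 0.05, -1.0], 0.1, 1.0
x, y, lam = [0.0] * 3, [0.0] * 3, [0.0] * 3
for _ in range(200):
    # x-subproblem: (x - d) + lam + rho*(x - y) = 0
    x = [(di - li + rho * yi) / (1.0 + rho) for di, li, yi in zip(d, lam, y)]
    # y-subproblem: shrinkage of x + lam/rho with threshold mu/rho
    y = soft([xi + li / rho for xi, li in zip(x, lam)], mu / rho)
    # dual update
    lam = [li + rho * (xi - yi) for li, xi, yi in zip(lam, x, y)]

print([round(xi, 6) for xi in x])   # close to soft(d, mu) = [1.9, 0.0, -0.9]
```

After a few hundred iterations the primal pair $(x,y)$ agrees with the known minimizer to machine precision; the example is only meant to make the three-step structure of ([\[admm\]](#admm){reference-type="ref" reference="admm"}) tangible.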
The implementation of ([\[admm\]](#admm){reference-type="ref" reference="admm"}) requires determining $x^{k+1}$ and $y^{k+1}$ by solving two convex minimization problems at each iteration. Although $f$ and $g$ may have special structures so that their proximal mappings are easy to determine, solving the minimization problems in ([\[admm\]](#admm){reference-type="ref" reference="admm"}) is in general highly nontrivial due to the appearance of the terms $\|A x\|^2$ and $\|B y\|^2$. In order to avoid this implementation issue, one may consider adding certain proximal terms to the $x$-subproblem and $y$-subproblem in ([\[admm\]](#admm){reference-type="ref" reference="admm"}) to remove the terms $\|A x\|^2$ and $\|B y\|^2$. For any bounded linear positive semi-definite self-adjoint operator $D$ on a real Hilbert space $\mathcal H$, we will use the notation $$\|u\|_D^2 : = \langle u, D u\rangle, \quad \forall u \in \mathcal H.$$ By taking two bounded linear positive semi-definite self-adjoint operators $P: \mathcal X\to \mathcal X$ and $Q: \mathcal Y\to \mathcal Y$, we may add the terms $\frac{1}{2} \|x-x^k\|_P^2$ and $\frac{1}{2} \|y-y^k\|_Q^2$ to the $x$- and $y$-subproblems in ([\[admm\]](#admm){reference-type="ref" reference="admm"}) respectively to obtain the following proximal alternating direction method of multipliers ([@B2017; @DY2016b; @HY2012; @HY2015; @JJLW2017; @ZBO2011]) $$\begin{aligned} \label{alg1} \begin{split} x^{k+1} &= \arg\min_{x\in \mathcal X} \left\{ f(x) + \langle\lambda^k, A x\rangle+ \frac{\rho}{2} \left\|A x + B y^k -c \right\|^2 + \frac{1}{2} \|x-x^k\|_P^2\right\},\\ y^{k+1} &= \arg\min_{y\in \mathcal Y} \left\{g(y) + \langle\lambda^k, B y\rangle+ \frac{\rho}{2} \left\|A x^{k+1}+By-c \right\|^2 + \frac{1}{2} \|y-y^k\|_Q^2\right\},\\[1ex] \lambda^{k+1} &= \lambda^k + \rho (A x^{k+1} + B y^{k+1} -c).
\end{split}\end{aligned}$$ The advantage of ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) over ([\[admm\]](#admm){reference-type="ref" reference="admm"}) is that, with wise choices of $P$ and $Q$, it is possible to remove the terms $\|A x\|^2$ and $\|B y\|^2$ and thus make the determination of $x^{k+1}$ and $y^{k+1}$ much easier. In recent years, various convergence rate results have been established for ADMM and its variants in either the ergodic or the non-ergodic sense. In [@HY2012; @LMZ2015] the ergodic convergence rate $$\begin{aligned} \label{ergodic} |H(\bar x^k, \bar y^k)-H_*| = O\left(\frac{1}{k}\right) \quad \mbox{and} \quad \|A \bar x^k+B \bar y^k -c\|= O \left(\frac{1}{k}\right)\end{aligned}$$ has been derived in terms of the objective error and the constraint error, where $H_*$ denotes the minimum value of ([\[prob\]](#prob){reference-type="ref" reference="prob"}), $k$ denotes the number of iterations, and $$\bar x^k := \frac{1}{k} \sum_{j=1}^k x^j \qquad \mbox{and} \qquad \bar y^k := \frac{1}{k} \sum_{j=1}^k y^j$$ denote the ergodic iterates of $\{x^k\}$ and $\{y^k\}$ respectively; see also [@B2017 Theorem 15.4]. A criticism of ergodic results is that they may fail to capture the features of the sought solution of the underlying problem, because the ergodic iterate tends to average out the expected property and thus destroys the features of the solution. This is particularly undesirable in sparsity optimization and low-rank learning. In contrast, the non-ergodic iterate tends to share structural properties with the solution of the underlying problem. Therefore, the use of non-ergodic iterates is more favorable in practice.
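To make the "wise choice of $P$" concrete, the following sketch runs ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with $P=\tau_1 I-\rho A^*A$ (with $\tau_1\ge\rho\|A\|^2$, so $P\succeq 0$) and $Q=0$ on the quadratic toy problem $f(x)=\frac12\|x\|^2$, $g(y)=\frac12\|y-b\|^2$, constraint $Ax-y=0$: with this $P$ the quadratic coupling $\|Ax\|^2$ cancels and the $x$-step becomes an explicit formula, no linear system in $A^*A$ being solved. The data $A$, $b$, $\rho$, $\tau_1$ below are made up; the fixed point is the Tikhonov-type solution $x^*=(I+A^*A)^{-1}A^*b$.

```python
# Proximal ("linearized") ADMM (alg1) with P = tau1*I - rho*A^T A, Q = 0, on
# min (1/2)||x||^2 + (1/2)||y-b||^2  s.t.  A x - y = 0  (so B = -I, c = 0).
# Here x* = (I + A^T A)^{-1} A^T b = [0.6, 0.8] for the data below.

A = [[1.0, 1.0], [0.0, 1.0]]
b = [2.0, 1.0]
rho, tau1 = 1.0, 3.0            # tau1 >= rho*||A||^2 (~2.62 here), so P >= 0

def mv(M, v):                    # 2x2 matrix-vector product
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
x, y, lam = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
for _ in range(20000):
    # x-step: prox of f/tau1 at  x^k - (1/tau1) A^T (lam + rho*(A x^k - y^k))
    r = [li + rho*(axi - yi) for li, axi, yi in zip(lam, mv(A, x), y)]
    v = [xi - wi/tau1 for xi, wi in zip(x, mv(At, r))]
    x = [tau1*vi/(1.0 + tau1) for vi in v]
    # y-step (Q = 0): (y - b) - lam + rho*(y - A x^{k+1}) = 0
    Ax = mv(A, x)
    y = [(bi + li + rho*axi)/(1.0 + rho) for bi, li, axi in zip(b, lam, Ax)]
    # dual update
    lam = [li + rho*(axi - yi) for li, axi, yi in zip(lam, Ax, y)]

print([round(xi, 4) for xi in x])   # approaches [0.6, 0.8]
```

At a fixed point one checks $Ax=y$, $\lambda=y-b$ and $x=-A^*\lambda$, i.e. $(I+A^*A)x=A^*b$, so the sketch indeed converges to the constrained minimizer.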
In [@HY2015] a non-ergodic convergence rate has been derived for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with $Q=0$ and the result reads as $$\begin{aligned} \label{ner0} \|x^{k+1} - x^k\|_P^2 + \|B(y^{k+1}-y^k)\|^2 + \|\lambda^{k+1}-\lambda^k\|^2 = o \left(\frac{1}{k}\right).\end{aligned}$$ By exploiting the connection with the Douglas-Rachford splitting algorithm, the non-ergodic convergence rate $$\begin{aligned} \label{ner1} |H(x^k, y^k)-H_*| = o\left(\frac{1}{\sqrt{k}}\right) \quad \mbox{and} \quad \|A x^k+B y^k -c\|= o \left(\frac{1}{\sqrt{k}}\right)\end{aligned}$$ in terms of the objective error and the constraint error has been established in [@DY2016] for the ADMM ([\[admm\]](#admm){reference-type="ref" reference="admm"}), and an example has been provided to demonstrate that the estimates in ([\[ner1\]](#ner1){reference-type="ref" reference="ner1"}) are sharp. However, the derivation of ([\[ner1\]](#ner1){reference-type="ref" reference="ner1"}) in [@DY2016] relies on some unnatural technical conditions involving the convex conjugates of $f$ and $g$, see Remark [Remark 1](#Rk2.1){reference-type="ref" reference="Rk2.1"}. Note that the estimate ([\[ner0\]](#ner0){reference-type="ref" reference="ner0"}) implies the second estimate in ([\[ner1\]](#ner1){reference-type="ref" reference="ner1"}); however, it does not directly imply the first estimate in ([\[ner1\]](#ner1){reference-type="ref" reference="ner1"}). In Section [2](#sect2){reference-type="ref" reference="sect2"} we will show, by a simpler argument, that an estimate similar to ([\[ner0\]](#ner0){reference-type="ref" reference="ner0"}) can be established for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with an arbitrary positive semi-definite $Q$.
Based on this result and some additional properties of the method, we will further show that the non-ergodic rate ([\[ner1\]](#ner1){reference-type="ref" reference="ner1"}) holds for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with arbitrary positive semi-definite $P$ and $Q$. Our result does not require any technical conditions as assumed in [@DY2016]. In order to obtain faster convergence rates for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}), certain regularity conditions should be imposed. In the finite-dimensional situation, a number of linear convergence results have been established. In [@DY2016b] some linear convergence results of the proximal ADMM have been provided under a number of scenarios involving the strong convexity of $f$ and/or $g$, the Lipschitz continuity of $\nabla f$ and/or $\nabla g$, together with further full row/column rank assumptions on $A$ and/or $B$. Under a bounded metric subregularity condition, in particular under the assumption that both $f$ and $g$ are convex piecewise linear-quadratic functions, a global linear convergence rate has been established in [@YH2016] for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with $$\begin{aligned} \label{yhc} P:=\tau_1 I - \rho A^*A \succ 0 \quad \mbox{and} \quad Q:= \tau_2 I - \rho B^*B\succ 0,\end{aligned}$$ where $A^*$ and $B^*$ denote the adjoints of $A$ and $B$ respectively; the condition ([\[yhc\]](#yhc){reference-type="ref" reference="yhc"}) plays an essential role in the convergence analysis in [@YH2016]. We will derive faster convergence rates for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) in the general Hilbert space setting. To this end, we need first to consider the weak convergence of $\{(x^k, y^k, \lambda^k)\}$ and demonstrate that every weak cluster point of this sequence is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}).
This may not be an issue in finite dimensions. However, it is nontrivial in infinite-dimensional spaces because extra care is required in dealing with weak convergence. In [@BS2017] the weak convergence of the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) has been considered by transforming the method into a proximal point method, and the result there requires restrictive conditions, see [@BS2017 Lemma 3.4 and Theorem 3.1]. These restrictive conditions were later weakened in [@Sun2019] by using machinery from maximal monotone operator theory. We will explore the structure of the proximal ADMM and show by an elementary argument that every weak cluster point of $\{(x^k, y^k, \lambda^k)\}$ is indeed a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}) without any additional conditions. We will then consider the linear convergence of the proximal ADMM under a bounded metric subregularity condition and obtain the linear convergence for any positive semi-definite $P$ and $Q$; in particular, we obtain the linear convergence of $|H(x^k, y^k) - H_*|$ and $\|A x^k + B y^k - c\|$. We also consider deriving convergence rates under a bounded Hölder metric subregularity condition, which is weaker than bounded metric subregularity. This weaker condition holds if both $f$ and $g$ are semi-algebraic functions, and thus a wider range of applications can be covered. We show that, under a bounded Hölder metric subregularity condition, among other things the convergence rates in ([\[ner1\]](#ner1){reference-type="ref" reference="ner1"}) can be improved to $$\|A x^k + B y^k - c\| = O(k^{-\beta}) \quad \mbox{ and } \quad |H(x^k, y^k) - H_*| = O(k^{-\beta})$$ for some number $\beta >1/2$; the value of $\beta$ depends on the properties of $f$ and $g$.
To further weaken the bounded (Hölder) metric subregularity assumption, we introduce an iteration-based error bound condition which is an extension of the one in [@LYZZ2018] to the general proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). It is interesting to observe that this error bound condition holds under any one of the scenarios proposed in [@DY2016b]. Hence, we provide a unified analysis for deriving convergence rates under the bounded (Hölder) metric subregularity or the scenarios in [@DY2016b]. Furthermore, we extend the scenarios in [@DY2016b] to the general Hilbert space setting and demonstrate that some conditions can be weakened and the convergence result can be strengthened; see Theorem [Theorem 11](#thm2.11){reference-type="ref" reference="thm2.11"}. In the second part of this paper, we consider using ADMM as a regularization method to solve linear ill-posed inverse problems in Hilbert spaces. Linear inverse problems have a wide range of applications, including medical imaging, geophysics, astronomy, signal processing, and more ([@EHN1996; @G1984; @N2001]). We consider linear inverse problems of the form $$\begin{aligned} \label{ip1.1} Ax = b, \quad x\in \mathcal C, \end{aligned}$$ where $A : \mathcal X\to \mathcal H$ is a compact linear operator between two Hilbert spaces $\mathcal X$ and $\mathcal H$, $\mathcal C$ is a closed convex set in $\mathcal X$, and $b \in \mbox{Ran}(A)$, the range of $A$. In order to find a solution of ([\[ip1.1\]](#ip1.1){reference-type="ref" reference="ip1.1"}) with desired properties, a priori available information on the sought solution should be incorporated into the problem. Assume that, under a suitable linear transform $L$ from $\mathcal X$ to another Hilbert space $\mathcal Y$ with domain $\mbox{dom}(L)$, the feature of the sought solution can be captured by a proper convex penalty function $f : \mathcal Y\to (-\infty, \infty]$.
One may consider instead of ([\[ip1.1\]](#ip1.1){reference-type="ref" reference="ip1.1"}) the constrained optimization problem $$\begin{aligned} \label{ip1.2} \min\{f (Lx): Ax = b, \ x\in \mathcal C, \ x\in \mbox{dom}(L)\}. \end{aligned}$$ A challenging issue related to the numerical resolution of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) is its ill-posedness in the sense that the solution of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) does not depend continuously on the data, and thus a small perturbation of the data can lead to a large deviation in the solutions. In practical applications, the exact data $b$ is usually unavailable; instead only a noisy data $b^\delta$ is at hand with $$\|b^\delta- b\| \le \delta$$ for some small noise level $\delta>0$. To overcome the ill-posedness, regularization methods should be introduced to produce reasonable approximate solutions; one may refer to [@BO2004; @EHN1996; @Jin2022; @RS2006] for various regularization methods. The common use of ADMM to solve ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) with noisy data $b^\delta$ first considers the variational regularization $$\begin{aligned} \label{vr.admm} \min_{x\in \mathcal C} \left\{\frac{1}{2} \|A x - b^\delta\|^2 + \alpha f(Lx)\right\},\end{aligned}$$ then uses the splitting technique to rewrite ([\[vr.admm\]](#vr.admm){reference-type="ref" reference="vr.admm"}) into the form ([\[prob\]](#prob){reference-type="ref" reference="prob"}), and finally applies the ADMM procedure to produce approximate solutions. The parameter $\alpha>0$ is the so-called regularization parameter, which should be adjusted carefully to achieve reasonably good performance; consequently one has to run ADMM to solve ([\[vr.admm\]](#vr.admm){reference-type="ref" reference="vr.admm"}) for many different values of $\alpha$, which can be time-consuming.
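For concreteness, one natural identification in the splitting step (a sketch, with the labels $F$ and $G$ introduced here for illustration and the domain condition on $L$ suppressed) writes ([\[vr.admm\]](#vr.admm){reference-type="ref" reference="vr.admm"}) in the form ([\[prob\]](#prob){reference-type="ref" reference="prob"}) via the substitution $y = Lx$: $$\min_{x,y}\ \Big\{\underbrace{\tfrac12\|Ax-b^\delta\|^2+\iota_{\mathcal C}(x)}_{=:F(x)} + \underbrace{\alpha f(y)}_{=:G(y)}\Big\} \quad \mbox{subject to } L x - y = 0,$$ so that, in the notation of ([\[prob\]](#prob){reference-type="ref" reference="prob"}), the constraint operators are $L$ and $-I$ with $c=0$, and the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) applies directly.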
In [@JJLW2016; @JJLW2017] the ADMM has been considered to solve ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) directly to reduce the computational load. Note that ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) can be written as $$\begin{aligned} \left\{\begin{array}{lll} \min f(y) + \iota_{\mathcal C}(x)\\ \mbox{subject to } A z = b, \ L z - y = 0, \ z - x = 0, \ z \in \mbox{dom}(L), \end{array}\right.\end{aligned}$$ where $\iota_{\mathcal C}$ denotes the indicator function of $\mathcal C$. With the noisy data $b^\delta$ we introduce the augmented Lagrangian function $$\begin{aligned} {\mathscr L}_{\rho_1, \rho_2, \rho_3}(z, y, x, \lambda, \mu, \nu) &:= f(y) + \iota_{\mathcal C}(x) + \langle\lambda, A z - b^\delta\rangle+ \langle\mu, L z - y\rangle+ \langle\nu, z - x \rangle\\ & \quad \, + \frac{\rho_1}{2} \|A z - b^\delta\|^2 + \frac{\rho_2}{2} \|L z - y\|^2 + \frac{\rho_3}{2} \|z -x\|^2,\end{aligned}$$ where $\rho_1$, $\rho_2$ and $\rho_3$ are preassigned positive numbers. The proximal ADMM proposed in [@JJLW2017] for solving ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) then takes the form $$\begin{aligned} \label{PADMM2} \begin{split} & z^{k+1} = \arg\min_{z\in \mbox{dom}(L)} \left\{{\mathscr L}_{\rho_1, \rho_2, \rho_3}(z, y^k, x^k, \lambda^k, \mu^k, \nu^k) + \frac{1}{2} \|z-z^k\|_Q^2\right\},\\ & y^{k+1} = \arg\min_{y\in \mathcal Y} \left\{{\mathscr L}_{\rho_1, \rho_2, \rho_3}(z^{k+1}, y, x^k, \lambda^k, \mu^k, \nu^k) \right\},\\ & x^{k+1} = \arg\min_{x\in \mathcal X} \left\{{\mathscr L}_{\rho_1, \rho_2, \rho_3}(z^{k+1}, y^{k+1}, x, \lambda^k, \mu^k, \nu^k)\right\}, \\ & \lambda^{k+1} = \lambda^k + \rho_1 (A z^{k+1} - b^\delta), \\ & \mu^{k+1} = \mu^k + \rho_2 (L z^{k+1} - y^{k+1}), \\ & \nu^{k+1} = \nu^k + \rho_3 (z^{k+1} - x^{k+1}), \end{split}\end{aligned}$$ where $Q$ is a bounded linear positive semi-definite self-adjoint operator. 
The method ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) is not a 3-block ADMM. Note that the variables $y$ and $x$ are not coupled in ${\mathscr L}_{\rho_1, \rho_2, \rho_3}(z, y, x, \lambda, \mu, \nu)$. Thus, $y^{k+1}$ and $x^{k+1}$ can be updated simultaneously, i.e. $$\begin{aligned} (y^{k+1}, x^{k+1}) = \arg\min_{y\in \mathcal Y, x \in \mathcal X} \left\{{\mathscr L}_{\rho_1, \rho_2, \rho_3}(z^{k+1}, y, x, \lambda^k, \mu^k, \nu^k) \right\}. \end{aligned}$$ This demonstrates that ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) is a 2-block proximal ADMM. It should be highlighted that none of the well-established convergence results on proximal ADMM for well-posed optimization problems is directly applicable to ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}). Note that ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) uses the noisy data $b^\delta$. If the convergence theory for well-posed optimization problems were applicable, one would obtain a solution of the perturbed problem $$\begin{aligned} \label{perturb} \min\left\{f(Lx): A x = b^\delta, \ x \in \mathcal C, \ x \in \mbox{dom}(L)\right\}\end{aligned}$$ of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}). Because $A$ is compact, it is very likely that $b^\delta\not\in \mbox{Ran}(A)$ and thus ([\[perturb\]](#perturb){reference-type="ref" reference="perturb"}) makes no sense as the feasible set is empty. Even if $b^\delta\in \mbox{Ran}(A)$ and ([\[perturb\]](#perturb){reference-type="ref" reference="perturb"}) has a solution, this solution could be far away from the solution of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) because of the ill-posedness. Therefore, if ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) is used to solve ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}), a better result cannot be expected even if a larger number of iterations is performed.
In contrast, like all other iterative regularization methods, when ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) is used to solve ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}), it shows the semi-convergence property, i.e., the iterate becomes close to the sought solution at the beginning; however, after a critical number of iterations, the iterate moves far away from the sought solution as the iteration proceeds. Thus, properly terminating the iteration is important to produce acceptable approximate solutions. One may hope to determine a stopping index $k_\delta$, depending on $\delta$ and/or $b^\delta$, such that $\|x^{k_\delta}-x^\dag\|$ is as small as possible and $\|x^{k_\delta}-x^\dag\|\to 0$ as $\delta\to 0$, where $x^\dag$ denotes the solution of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}). This has been done in our previous work [@JJLW2016; @JJLW2017], in which early stopping rules have been proposed for the method ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) to render it a regularization method, and numerical results have been reported to demonstrate the nice performance. However, the work in [@JJLW2016; @JJLW2017] does not provide convergence rates, i.e., estimates of $\|x^{k_\delta} - x^\dag\|$ in terms of $\delta$. Deriving convergence rates for iterative regularization methods involving general convex regularization terms is a challenging question, and only a limited number of results are available. In order to derive a convergence rate of a regularization method for ill-posed problems, a certain source condition should be imposed on the sought solution.
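The semi-convergence phenomenon and the role of early stopping described above can be illustrated on a small ill-conditioned system. The sketch below deliberately uses a plain Landweber iteration $x^{k+1}=x^k+\omega A^*(b^\delta-Ax^k)$ rather than ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) itself, together with an a posteriori stop by the discrepancy principle $\|Ax^k-b^\delta\|\le\tau\delta$; all numbers (singular values, noise) are made up for the demonstration.

```python
# Semi-convergence of Landweber iteration on an ill-conditioned diagonal
# system, with early stopping by the discrepancy principle (illustration
# only; NOT the paper's method (PADMM2)).
import math

sigma = [1.0, 0.5, 0.05]          # singular values: the last one is "small"
x_true = [1.0, 1.0, 1.0]
b = [s * xt for s, xt in zip(sigma, x_true)]
noise = [0.01, -0.01, 0.05]       # fixed additive noise
b_delta = [bi + ei for bi, ei in zip(b, noise)]
delta = math.sqrt(sum(e * e for e in noise))   # noise level

w, tau = 0.5, 1.1
x = [0.0, 0.0, 0.0]
errors, k_stop = [], None
for k in range(1, 2001):
    x = [xi + w * s * (bd - s * xi) for xi, s, bd in zip(x, sigma, b_delta)]
    res = math.sqrt(sum((bd - s * xi) ** 2
                        for s, xi, bd in zip(sigma, x, b_delta)))
    errors.append(math.sqrt(sum((xi - xt) ** 2
                                for xi, xt in zip(x, x_true))))
    if k_stop is None and res <= tau * delta:
        k_stop = k                # discrepancy principle fires here

err_stop, err_final, err_min = errors[k_stop - 1], errors[-1], min(errors)
# The error first decreases, then increases again; the discrepancy stop
# catches an iterate much closer to x_true than the late iterates are.
print(k_stop, round(err_min, 3), round(err_stop, 3), round(err_final, 3))
```

Running the loop to the end, the component associated with the small singular value is driven toward its heavily amplified noisy value, so the late iterates are worse than the stopped one; this is precisely the behavior that makes an early stopping rule necessary.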
In Section [3](#sect3){reference-type="ref" reference="sect3"}, under a benchmark source condition on the sought solution, we will provide a partial answer to this question by establishing a convergence rate result for ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) when the iteration is terminated by an *a priori* stopping rule. We conclude this section with some notation and terminology. Let $\mathcal V$ be a real Hilbert space. We use $\langle\cdot, \cdot\rangle$ and $\|\cdot\|$ to denote its inner product and the induced norm. We also use "$\to$" and "$\rightharpoonup$" to denote strong convergence and weak convergence, respectively. For a function $\varphi: \mathcal V\to (-\infty, \infty]$, its domain is defined as $\mbox{dom}(\varphi) := \{x\in \mathcal V: \varphi(x) <\infty\}$. If $\mbox{dom}(\varphi)\ne \emptyset$, then $\varphi$ is called proper. For a proper convex function $\varphi: \mathcal V\to (-\infty, \infty]$, its modulus of convexity, denoted by $\sigma_\varphi$, is defined to be the largest number $c$ such that $$\varphi(t x + (1-t) y) + c t(1-t) \|x-y\|^2 \le t \varphi(x) + (1-t) \varphi(y)$$ for all $x, y \in \mbox{dom}(\varphi)$ and $0\le t\le 1$. We always have $\sigma_\varphi\ge 0$; if $\sigma_\varphi>0$, then $\varphi$ is called strongly convex. For a proper convex function $\varphi: \mathcal V\to (-\infty, \infty]$, we use $\partial\varphi$ to denote its subdifferential, i.e. $$\partial\varphi(x) :=\{\xi \in \mathcal V: \varphi(y) \ge \varphi(x) + \langle\xi, y-x\rangle \mbox{ for all } y \in \mathcal V\}, \quad x \in \mathcal V.$$ Let $\mbox{dom}(\partial\varphi) :=\{x\in \mathcal V: \partial\varphi(x) \ne \emptyset\}$. It is easy to see that $$\varphi(y) - \varphi(x) - \langle\xi, y-x\rangle\ge \sigma_\varphi \|y-x\|^2$$ for all $y\in \mathcal V$, $x\in \mbox{dom}(\partial\varphi)$ and $\xi \in \partial\varphi(x)$, which in particular implies the monotonicity of $\partial\varphi$, i.e.
$$\langle\xi - \eta, x - y\rangle\ge 2 \sigma_\varphi \|x-y\|^2$$ for all $x, y \in \mbox{dom}(\partial\varphi)$, $\xi \in \partial\varphi(x)$ and $\eta \in \partial\varphi(y)$. # **Proximal ADMM for convex optimization problems** {#sect2} In this section we will consider the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) for solving the linearly constrained convex minimization problem ([\[prob\]](#prob){reference-type="ref" reference="prob"}). For the convergence analysis, we will make the following standard assumptions. **Assumption 1**. $\mathcal X$, $\mathcal Y$ and $\mathcal Z$ are real Hilbert spaces, $A: \mathcal X\to \mathcal Z$ and $B: \mathcal Y\to \mathcal Z$ are bounded linear operators, $P: \mathcal X\to \mathcal X$ and $Q: \mathcal Y\to \mathcal Y$ are bounded linear positive semi-definite self-adjoint operators, and $f: \mathcal X\to (-\infty, \infty]$ and $g: \mathcal Y\to (-\infty, \infty]$ are proper, lower semi-continuous, convex functions. **Assumption 2**. The problem ([\[prob\]](#prob){reference-type="ref" reference="prob"}) has a Karush-Kuhn-Tucker (KKT) point, i.e. there exists $(\bar x, \bar y, \bar \lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z$ such that $$-A^* \bar \lambda\in \partial f(\bar x), \quad -B^* \bar \lambda\in \partial g (\bar y), \quad A \bar x + B \bar y =c.$$ It should be mentioned that, to guarantee that the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) is well defined, certain additional conditions need to be imposed to ensure that the $x$- and $y$-subproblems have minimizers. Since the well-definedness can easily be verified in concrete applications, we will not state these conditions explicitly, in order to keep the presentation succinct.
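To fix ideas, the following minimal sketch instantiates this setting with hypothetical scalar data: $\mathcal X=\mathcal Y=\mathcal Z=\mathbb{R}$, $f(x)=\frac{1}{2}(x-a)^2$, $g(y)=\frac{1}{2}(y-b)^2$, $A=B=I$, and $P=pI$, $Q=qI$. Each subproblem of the proximal ADMM is then solved in closed form from its first-order optimality condition; the data $a=3$, $b=1$, $c=2$ and the parameters $\rho, p, q$ are illustrative choices.

```python
# Minimal sketch of the proximal ADMM on a scalar toy problem
# (hypothetical data):  min (1/2)(x-a)^2 + (1/2)(y-b)^2  s.t.  x + y = c,
# with A = B = I on R and scalar proximal terms P = p*I, Q = q*I.

def proximal_admm(a, b, c, rho=1.0, p=0.5, q=0.5, num_iters=300):
    x = y = lam = 0.0
    for _ in range(num_iters):
        # x-subproblem: (x - a) + lam + rho*(x + y^k - c) + p*(x - x^k) = 0
        x = (a - lam - rho * (y - c) + p * x) / (1.0 + rho + p)
        # y-subproblem: (y - b) + lam + rho*(x^{k+1} + y - c) + q*(y - y^k) = 0
        y = (b - lam - rho * (x - c) + q * y) / (1.0 + rho + q)
        # multiplier update: lam^{k+1} = lam^k + rho*(x^{k+1} + y^{k+1} - c)
        lam += rho * (x + y - c)
    return x, y, lam

x, y, lam = proximal_admm(a=3.0, b=1.0, c=2.0)
assert abs(x + y - 2.0) < 1e-8          # feasibility: A x + B y = c
assert abs(lam - (3.0 - x)) < 1e-8      # -A* lam in partial f(x), i.e. lam = a - x
assert abs(lam - (1.0 - y)) < 1e-8      # -B* lam in partial g(y), i.e. lam = b - y
```

For this instance the unique KKT point is $(\bar x, \bar y, \bar \lambda) = (2, 0, 1)$, and the iterates converge to it.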
By the convexity of $f$ and $g$, it is easy to see that, for any KKT point $(\bar x, \bar y, \bar \lambda)$ of ([\[prob\]](#prob){reference-type="ref" reference="prob"}), there hold $$\begin{aligned} 0 &\le f(x) - f(\bar x) + \langle\bar \lambda, A(x-\bar x)\rangle, \quad \forall x \in \mathcal X,\\ 0 &\le g(y) - g(\bar y) + \langle\bar \lambda, B(y-\bar y)\rangle, \quad \forall y \in \mathcal Y.\end{aligned}$$ Adding these two inequalities and using $A \bar x+ B\bar y-c =0$, it follows that $$\begin{aligned} \label{9.18.11} 0 \le H(x, y) - H(\bar x, \bar y) + \langle\bar \lambda, A x + B y-c \rangle, \quad \forall (x, y) \in \mathcal X\times \mathcal Y. \end{aligned}$$ This in particular implies that $(\bar x, \bar y)$ is a solution of ([\[prob\]](#prob){reference-type="ref" reference="prob"}) and thus $H_*:= H(\bar x, \bar y)$ is the minimum value of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Based on Assumptions [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} we will analyze the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). For ease of exposition, we set $\widehat Q := \rho B^* B + Q$ and define $$\begin{aligned} %\label{9.20.-1} G u:= (P x, \widehat Q y, \lambda/\rho), \quad \forall u:= (x, y, \lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z\end{aligned}$$ which is a bounded linear positive semi-definite self-adjoint operator on $\mathcal X\times \mathcal Y\times \mathcal Z$.
Then, for any $u:=(x, y,\lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z$ we have $$\|u\|_G^2:=\langle u, G u\rangle= \|x\|_P^2 + \|y\|_{\widehat Q}^2 + \frac{1}{\rho} \|\lambda\|^2.$$ For the sequence $\{u^k:=(x^k, y^k, \lambda^k)\}$ defined by the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}), we use the notation $$\begin{aligned} %\label{1.9.1} \Delta x^k := x^k-x^{k-1}, \ \ \Delta y^k := y^k-y^{k-1}, \ \ \Delta\lambda^k := \lambda^k-\lambda^{k-1}, \ \ \Delta u^k := u^k - u^{k-1}.\end{aligned}$$ We start from the first order optimality conditions on $x^{k+1}$ and $y^{k+1}$ which by definition can be stated as $$\begin{aligned} \label{fg} \begin{split} -A^* \lambda^k - \rho A^* (A x^{k+1} + B y^k -c) - P (x^{k+1}-x^k) & \in \partial f(x^{k+1}), \\ -B^* \lambda^k - \rho B^* (A x^{k+1} + B y^{k+1} -c) - Q(y^{k+1}-y^k) & \in \partial g(y^{k+1}). \end{split}\end{aligned}$$ By using $\lambda^{k+1}=\lambda^k + \rho (A x^{k+1} + B y^{k+1} -c)$ we may rewrite ([\[fg\]](#fg){reference-type="ref" reference="fg"}) as $$\begin{aligned} \label{11.11.1} \begin{split} -A^* (\lambda^{k+1} - \rho B \Delta y^{k+1}) -P \Delta x^{k+1} &\in \partial f(x^{k+1}), \\ -B^* \lambda^{k+1} - Q \Delta y^{k+1} &\in \partial g(y^{k+1}) \end{split}\end{aligned}$$ which will be frequently used in the following analysis. We first prove the following important result which is inspired by [@HY2012 Lemma 3.1] and [@B2017 Theorem 15.4]. **Proposition 1**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} hold. 
Then for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) there holds $$\begin{aligned} & \sigma_f \|x^{k+1} - x\|^2 + \sigma_g \|y^{k+1} - y\|^2 \\ & \le H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\lambda^{k+1} - \rho B \Delta y^{k+1}, A x + B y -c \rangle\\ & \quad \, - \langle\lambda, A x^{k+1} + B y^{k+1}-c\rangle+ \frac{1}{2} \left(\|u^k-u\|_G^2 - \|u^{k+1}-u\|_G^2\right) \\ & \quad \, -\frac{1}{2\rho} \|\Delta \lambda^{k+1} - \rho B \Delta y^{k+1}\|^2 -\frac{1}{2} \|\Delta x^{k+1}\|_P^2- \frac{1}{2} \|\Delta y^{k+1}\|_Q^2\end{aligned}$$ for all $u := (x, y, \lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z$, where $\sigma_f$ and $\sigma_g$ denote the modulus of convexity of $f$ and $g$ respectively.* *Proof.* Let $\tilde \lambda^{k+1} := \lambda^{k+1} - \rho B \Delta y^{k+1}$. By using ([\[11.11.1\]](#11.11.1){reference-type="ref" reference="11.11.1"}) and the convexity of $f$ and $g$ we have for any $(x, y, \lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z$ that $$\begin{aligned} & \sigma_f \|x^{k+1} - x\|^2 + \sigma_g \|y^{k+1} - y\|^2 \\ & \le f(x) - f(x^{k+1}) + \langle\lambda^{k+1} -\rho B \Delta y^{k+1}, A (x-x^{k+1})\rangle + \langle P\Delta x^{k+1}, x -x^{k+1}\rangle\\ & \quad \, + g(y) - g(y^{k+1}) + \langle\lambda^{k+1}, B(y-y^{k+1})\rangle + \langle Q \Delta y^{k+1}, y-y^{k+1}\rangle\displaybreak[0]\\ & = H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\tilde \lambda^{k+1}, A(x-x^{k+1}) + B(y - y^{k+1})\rangle\\ & \quad \, + \langle P\Delta x^{k+1}, x -x^{k+1}\rangle+ \langle\widehat Q \Delta y^{k+1}, y-y^{k+1}\rangle\displaybreak[0]\\ & = H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\tilde \lambda^{k+1}, A x + B y -c\rangle\\ & \quad \, - \langle\lambda, A x^{k+1} + B y^{k+1} -c \rangle + \langle\lambda- \tilde \lambda^{k+1}, A x^{k+1} + B y^{k+1} - c\rangle\\ & \quad \, + \langle P\Delta x^{k+1}, x -x^{k+1}\rangle+ \langle\widehat Q \Delta y^{k+1}, y-y^{k+1}\rangle.\end{aligned}$$ Since $\rho(A x^{k+1} + B y^{k+1} - c) = 
\Delta \lambda^{k+1}$ we then obtain $$\begin{aligned} & \sigma_f \|x^{k+1} - x\|^2 + \sigma_g \|y^{k+1} - y\|^2 \\ & \le H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\tilde \lambda^{k+1}, A x + By -c\rangle - \langle\lambda, A x^{k+1} + B y^{k+1} -c \rangle\\ & \quad \, + \frac{1}{\rho} \langle\lambda- \lambda^{k+1}, \Delta \lambda^{k+1}\rangle + \frac{1}{\rho}\langle\lambda^{k+1} - \tilde \lambda^{k+1}, \Delta \lambda^{k+1} \rangle\\ & \quad \, + \langle P\Delta x^{k+1}, x -x^{k+1}\rangle + \langle\widehat Q \Delta y^{k+1}, y-y^{k+1}\rangle.\end{aligned}$$ By using the polarization identity and the definition of $G$, it follows that $$\begin{aligned} & \sigma_f \|x^{k+1} - x\|^2 + \sigma_g \|y^{k+1} - y\|^2 \\ & \le H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\tilde \lambda^{k+1}, A x + B y -c\rangle - \langle\lambda, A x^{k+1} + B y^{k+1} -c \rangle\displaybreak[0]\\ & \quad \, + \frac{1}{2\rho} \left(\|\lambda^k-\lambda\|^2 - \|\lambda^{k+1} -\lambda\|^2 - \|\Delta \lambda^{k+1}\|^2 \right) \displaybreak[0]\\ & \quad \, - \frac{1}{2\rho} \left(\|\lambda^k - \tilde \lambda^{k+1}\|^2 - \|\lambda^{k+1} - \tilde \lambda^{k+1}\|^2 - \|\Delta \lambda^{k+1}\|^2\right) \displaybreak[0]\\ & \quad \, + \frac{1}{2} \left(\|x^k-x\|_P^2 -\|x^{k+1}-x\|_P^2 - \|\Delta x^{k+1}\|_P^2 \right) \displaybreak[0]\\ & \quad \, + \frac{1}{2} \left(\|y^k-y\|_{\widehat Q}^2 - \|y^{k+1}-y\|_{\widehat Q}^2 -\|\Delta y^{k+1}\|_{\widehat Q}^2 \right) \displaybreak[0]\\ & = H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\tilde \lambda^{k+1}, A x + B y -c\rangle - \langle\lambda, A x^{k+1} + B y^{k+1} -c \rangle\\ & \quad \, + \frac{1}{2} \left(\|u^k-u\|_G^2 - \|u^{k+1} - u\|_G^2 \right) -\frac{1}{2\rho} \left(\|\lambda^k - \tilde \lambda^{k+1}\|^2 - \|\lambda^{k+1} - \tilde \lambda^{k+1}\|^2\right) \\ & \quad \, - \frac{1}{2} \|\Delta x^{k+1}\|_P^2 - \frac{1}{2} \|\Delta y^{k+1}\|_{\widehat Q}^2. 
\end{aligned}$$ Using the definition of $\tilde \lambda^{k+1}$ gives $$\begin{aligned} \lambda^k - \tilde \lambda^{k+1} = - \Delta \lambda^{k+1} + \rho B \Delta y^{k+1}, \quad \lambda^{k+1} -\tilde \lambda^{k+1} = \rho B \Delta y^{k+1}.\end{aligned}$$ Therefore $$\begin{aligned} & \sigma_f \|x^{k+1} - x\|^2 + \sigma_g \|y^{k+1} - y\|^2 \\ & \le H(x, y) - H(x^{k+1}, y^{k+1}) + \langle\tilde \lambda^{k+1}, A x + B y -c\rangle - \langle\lambda, A x^{k+1} + B y^{k+1} -c \rangle\\ & \quad \, + \frac{1}{2} \left(\|u^k-u\|_G^2 - \|u^{k+1} - u\|_G^2 \right) -\frac{1}{2\rho} \|\Delta \lambda^{k+1} - \rho B \Delta y^{k+1}\|^2 \\ & \quad \, - \frac{1}{2} \|\Delta x^{k+1}\|_P^2 - \frac{1}{2} \|\Delta y^{k+1}\|_{\widehat Q}^2 + \frac{\rho}{2} \|B \Delta y^{k+1}\|^2. \end{aligned}$$ Since $\rho \|B \Delta y^{k+1}\|^2 - \|\Delta y^{k+1}\|_{\widehat Q}^2 = - \|\Delta y^{k+1}\|_Q^2$, we thus complete the proof. ◻ **Corollary 2**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold and let $\bar u :=(\bar x, \bar y, \bar \lambda)$ be any KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Then for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) there holds $$\begin{aligned} \label{admm.111} \sigma_f \|x^{k+1} - \bar x\|^2 + \sigma_g \|y^{k+1} - \bar y\|^2 & \le H_* - H(x^{k+1}, y^{k+1}) - \langle\bar \lambda, A x^{k+1} + B y^{k+1} -c \rangle\nonumber \\ & \quad \, + \frac{1}{2} \left(\|u^k-\bar u\|_G^2 - \|u^{k+1} - \bar u\|_G^2 \right)\end{aligned}$$ for all $k\ge 0$. Moreover, the sequence $\{\|u^k-\bar u\|_G^2\}$ is monotonically decreasing.* *Proof.* By taking $u = \bar u$ in Proposition [Proposition 1](#prop9.18){reference-type="ref" reference="prop9.18"} and using $A \bar x + B \bar y - c =0$ we immediately obtain ([\[admm.111\]](#admm.111){reference-type="ref" reference="admm.111"}). 
According to ([\[9.18.11\]](#9.18.11){reference-type="ref" reference="9.18.11"}) we have $$H(x^{k+1}, y^{k+1}) - H_* + \langle\bar \lambda, A x^{k+1} + B y^{k+1} -c \rangle\ge 0.$$ Thus, from ([\[admm.111\]](#admm.111){reference-type="ref" reference="admm.111"}) we can obtain $$\begin{aligned} \label{eq:cor.1} \sigma_f \|x^{k+1} - \bar x\|^2 + \sigma_g \|y^{k+1} - \bar y\|^2 \le \frac{1}{2} \left(\|u^k-\bar u\|_G^2 - \|u^{k+1} - \bar u\|_G^2 \right)\end{aligned}$$ which implies the monotonicity of the sequence $\{\|u^k - \bar u\|_G^2\}$. ◻ We next show that $\|\Delta u^k\|_G^2 = o(1/k)$ as $k\to \infty$. This result for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with $Q =0$ has been established in [@HY2015] based on a variational inequality approach. We will establish this result for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with general bounded linear positive semi-definite self-adjoint operators $P$ and $Q$ by a simpler argument. **Lemma 3**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} hold. 
For the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}), the sequence $\{\|\Delta u^k\|_G^2\}$ is monotonically decreasing.* *Proof.* By using ([\[11.11.1\]](#11.11.1){reference-type="ref" reference="11.11.1"}) and the monotonicity of $\partial f$ and $\partial g$, we can obtain $$\begin{aligned} 0 \le & \left\langle-A^*(\Delta \lambda^{k+1} - \rho B \Delta y^{k+1} + \rho B \Delta y^k) - P \Delta x^{k+1} + P \Delta x^k, \Delta x^{k+1}\right\rangle\\ & + \left\langle- B^* \Delta \lambda^{k+1}-Q \Delta y^{k+1} + Q\Delta y^k, \Delta y^{k+1} \right\rangle\\ = & - \langle\Delta \lambda^{k+1}, A \Delta x^{k+1} + B \Delta y^{k+1} \rangle+ \rho \langle B (\Delta y^{k+1} -\Delta y^k), A \Delta x^{k+1}\rangle\\ & - \langle P (\Delta x^{k+1} - \Delta x^k), \Delta x^{k+1}\rangle - \langle Q(\Delta y^{k+1} - \Delta y^k), \Delta y^{k+1}\rangle.\end{aligned}$$ Note that $$\begin{aligned} A \Delta x^{k+1} + B \Delta y^{k+1} = \frac{1}{\rho} (\Delta \lambda^{k+1} -\Delta \lambda^k).\end{aligned}$$ We therefore have $$\begin{aligned} 0 & \le - \frac{1}{\rho} \langle\Delta \lambda^{k+1}, \Delta \lambda^{k+1} -\Delta \lambda^k\rangle - \rho \langle B (\Delta y^{k+1} -\Delta y^k), B \Delta y^{k+1}\rangle\\ & \quad \, + \langle B(\Delta y^{k+1} - \Delta y^k), \Delta \lambda^{k+1} - \Delta \lambda^k\rangle\\ & \quad \, - \langle P (\Delta x^{k+1} - \Delta x^k), \Delta x^{k+1}\rangle- \langle Q(\Delta y^{k+1} - \Delta y^k), \Delta y^{k+1}\rangle.\end{aligned}$$ By the polarization identity we then have $$\begin{aligned} 0 & \le \frac{1}{2\rho} \left(\|\Delta \lambda^k\|^2 - \|\Delta \lambda^{k+1}\|^2 - \|\Delta\lambda^k -\Delta \lambda^{k+1}\|^2 \right) \\ & \quad \, + \frac{\rho}{2} \left(\|B \Delta y^k\|^2 - \|B \Delta y^{k+1}\|^2 - \|B(\Delta y^k-\Delta y^{k+1})\|^2 \right)\\ & \quad \, + \frac{1}{2} \left(\|\Delta x^k \|_P^2 - \|\Delta x^{k+1}\|_P^2 - \|\Delta x^k-\Delta x^{k+1}\|_P^2\right)\\ & \quad \, + \frac{1}{2} \left(\|\Delta y^k\|_Q^2 - \|\Delta 
y^{k+1}\|_Q^2 - \|\Delta y^k-\Delta y^{k+1}\|_Q^2 \right)\\ & \quad \, + \langle B(\Delta y^{k+1} - \Delta y^k), \Delta \lambda^{k+1} - \Delta \lambda^k\rangle.\end{aligned}$$ With the help of the definition of $G$, we obtain $$\begin{aligned} 0 & \le \frac{1}{2} \left(\|\Delta u^k\|_G^2 - \|\Delta u^{k+1}\|_G^2\right) - \frac{1}{2} \|\Delta x^k-\Delta x^{k+1}\|_P^2 - \frac{1}{2} \|\Delta y^k-\Delta y^{k+1}\|_Q^2 \\ & \quad \, - \frac{\rho}{2} \left\|B(\Delta y^{k+1} -\Delta y^k) - \frac{1}{\rho} (\Delta \lambda^{k+1} -\Delta \lambda^k) \right\|^2\end{aligned}$$ which completes the proof. ◻ **Lemma 4**. *Let Assumptions [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold and let $\bar u:=(\bar x, \bar y, \bar \lambda)$ be any KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). For the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) there holds $$\begin{aligned} %\label{9.18.9} & \|\Delta u^{k+1}\|_G^2 \le \left(\|u^k-\bar u\|_G^2 + \|\Delta y^k\|_Q^2\right) -\left(\|u^{k+1}-\bar u\|_G^2 + \|\Delta y^{k+1}\|_Q^2\right)\end{aligned}$$ for all $k \ge 1$.* *Proof.* We will use ([\[11.11.1\]](#11.11.1){reference-type="ref" reference="11.11.1"}) together with $-A^* \bar \lambda\in \partial f(\bar x)$ and $-B^* \bar \lambda\in \partial g(\bar y)$.
By using the monotonicity of $\partial f$ and $\partial g$ we have $$\begin{aligned} 0 & \le \left\langle-A^* (\lambda^{k+1} -\bar \lambda- \rho B \Delta y^{k+1}) - P \Delta x^{k+1}, x^{k+1}-\bar x\right\rangle\\ & \quad \, + \left\langle- B^* (\lambda^{k+1} - \bar \lambda) -Q \Delta y^{k+1}, y^{k+1} - \bar y\right\rangle\\ & = \langle\bar \lambda-\lambda^{k+1}, A x^{k+1} + B y^{k+1} -c \rangle + \rho \langle B \Delta y^{k+1}, A (x^{k+1}-\bar x)\rangle\\ & \quad \, - \langle P \Delta x^{k+1}, x^{k+1}-\bar x\rangle- \langle Q \Delta y^{k+1}, y^{k+1} - \bar y\rangle.\end{aligned}$$ By virtue of $\rho(A x^{k+1} + B y^{k+1} - c) = \Delta \lambda^{k+1}$ we further have $$\begin{aligned} 0 & \le \frac{1}{\rho} \langle\bar \lambda-\lambda^{k+1}, \Delta \lambda^{k+1} \rangle - \rho \langle B \Delta y^{k+1}, B(y^{k+1} - \bar y) \rangle + \langle B \Delta y^{k+1}, \Delta \lambda^{k+1}\rangle\\ & \quad \, - \langle P \Delta x^{k+1}, x^{k+1}-\bar x\rangle- \langle Q \Delta y^{k+1}, y^{k+1} - \bar y\rangle.\end{aligned}$$ By using the second equation in ([\[11.11.1\]](#11.11.1){reference-type="ref" reference="11.11.1"}) and the monotonicity of $\partial g$ we have $$\begin{aligned} 0 & \le \left\langle-B^* \Delta \lambda^{k+1} - Q \Delta y^{k+1} + Q \Delta y^k, \Delta y^{k+1}\right\rangle\\ & = - \langle\Delta \lambda^{k+1}, B \Delta y^{k+1}\rangle- \langle Q(\Delta y^{k+1} -\Delta y^k), \Delta y^{k+1}\rangle\end{aligned}$$ which shows that $$\langle\Delta \lambda^{k+1}, B \Delta y^{k+1}\rangle\le - \langle Q(\Delta y^{k+1} -\Delta y^k), \Delta y^{k+1}\rangle.$$ Therefore $$\begin{aligned} 0\le & \frac{1}{\rho} \langle\bar \lambda-\lambda^{k+1}, \Delta \lambda^{k+1} \rangle - \langle\widehat Q \Delta y^{k+1}, y^{k+1} - \bar y\rangle- \langle P \Delta x^{k+1}, x^{k+1}-\bar x\rangle\\ & - \langle Q(\Delta y^{k+1} -\Delta y^k), \Delta y^{k+1}\rangle.\end{aligned}$$ By using the polarization identity we then obtain $$\begin{aligned} 0\le & \frac{1}{2\rho}
\left(\|\lambda^k-\bar \lambda\|^2 -\|\lambda^{k+1} -\bar \lambda\|^2 -\|\Delta \lambda^{k+1}\|^2 \right) \\ & + \frac{1}{2} \left( \|y^k-\bar y\|_{\widehat Q}^2 - \|y^{k+1}-\bar y\|_{\widehat Q}^2 -\|\Delta y^{k+1}\|_{\widehat Q}^2\right)\\ & + \frac{1}{2} \left(\|x^k-\bar x\|_P^2 - \|x^{k+1}-\bar x\|_P^2 - \|\Delta x^{k+1}\|_P^2 \right) \\ & + \frac{1}{2} \left(\|\Delta y^k\|_Q^2 - \|\Delta y^{k+1}\|_Q^2 - \|\Delta y^{k+1} -\Delta y^k\|_Q^2 \right).\end{aligned}$$ Recalling the definition of $G$ we then complete the proof. ◻ **Proposition 5**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold. Then for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) there holds $\|\Delta u^k\|_G^2 = o(1/k)$ as $k\to \infty$.* *Proof.* Let $\bar u$ be a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). From Lemma [Lemma 4](#prop9.20){reference-type="ref" reference="prop9.20"} it follows that $$\begin{aligned} \label{9.23.1} \sum_{j=1}^k \|\Delta u^{j+1}\|_G^2 & \le \sum_{j=1}^k \left(\left(\|u^j-\bar u\|_G^2 + \|\Delta y^j\|_Q^2\right) - \left(\|u^{j+1}-\bar u\|_G^2+\|\Delta y^{j+1}\|_Q^2\right)\right) \nonumber\\ & \le \|u^1-\bar u\|_G^2 +\|\Delta y^1 \|_Q^2 %& \le \|u^0-\bar u\|_G^2 + \|y^1 - y^0\|_Q^2\end{aligned}$$ for all $k \ge 1$. By Lemma [Lemma 3](#lem:mono){reference-type="ref" reference="lem:mono"}, $\{\|\Delta u^{j+1}\|_G^2\}$ is monotonically decreasing. Thus $$\begin{aligned} \label{9.23.2} \left(\frac{k}{2} +1\right)\|\Delta u^{k+1}\|_G^2 \le \sum_{j=[k/2]}^k \|\Delta u^{j+1}\|_G^2,\end{aligned}$$ where $[k/2]$ denotes the largest integer $\le k/2$.
Since ([\[9.23.1\]](#9.23.1){reference-type="ref" reference="9.23.1"}) shows that $$\sum_{j=1}^\infty \|\Delta u^{j+1}\|_G^2 <\infty,$$ the right hand side of ([\[9.23.2\]](#9.23.2){reference-type="ref" reference="9.23.2"}) must converge to $0$ as $k\to \infty$. Thus $(k+1) \|\Delta u^{k+1}\|_G^2 =o(1)$ and hence $\|\Delta u^{k}\|_G^2 =o(1/k)$ as $k\to \infty$. ◻ As a byproduct of Proposition [Proposition 5](#lem3){reference-type="ref" reference="lem3"} and Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"}, we can prove the following non-ergodic convergence rate result for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) in terms of the objective error and the constraint error. **Theorem 6**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold. Consider the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) for solving ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Then $$\begin{aligned} \label{ner} |H(x^k, y^k)-H_*| = o\left(\frac{1}{\sqrt{k}}\right) \quad \mbox{and} \quad \|A x^k+B y^k -c\|= o \left(\frac{1}{\sqrt{k}}\right)\end{aligned}$$ as $k \to \infty$.* *Proof.* Since $$\begin{aligned} \label{eq:sbl.2} \rho (A x^{k} + B y^{k}-c) = \Delta \lambda^{k} \quad \mbox{and} \quad \|\Delta \lambda^{k}\|^2 \le \rho \|\Delta u^{k}\|_G^2\end{aligned}$$ we may use Proposition [Proposition 5](#lem3){reference-type="ref" reference="lem3"} to obtain the estimate $\|A x^{k}+B y^{k} -c\| = o (1/\sqrt{k})$ as $k\rightarrow \infty$. In the following we will focus on deriving the estimate of $|H(x^k, y^k) - H_*|$. Let $\bar u := (\bar x, \bar y, \bar \lambda)$ be a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). 
By using ([\[admm.111\]](#admm.111){reference-type="ref" reference="admm.111"}) we have $$\begin{aligned} \label{10.30.1} H(x^{k}, y^{k}) - H_* & \le - \langle\bar \lambda, A x^{k}+By^{k}-c\rangle + \frac{1}{2} \left(\|u^{k-1}-\bar u\|_G^2 - \|u^{k}-\bar u\|_G^2\right) \nonumber \\ & = -\frac{1}{\rho} \langle\bar \lambda, \Delta \lambda^{k}\rangle-\langle u^{k-1}-\bar u, G \Delta u^{k}\rangle - \frac{1}{2} \|\Delta u^{k}\|_G^2 \nonumber \\ & \le \frac{\|\bar \lambda\|}{\rho} \|\Delta \lambda^{k}\| + \|u^{k-1}-\bar u\|_G \|\Delta u^{k}\|_G.\end{aligned}$$ By virtue of the monotonicity of $\{\|u^k-\bar u\|_G^2\}$ given in Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"} we then obtain $$\begin{aligned} H(x^{k}, y^{k}) - H_* & \le \frac{\|\bar \lambda\|}{\rho} \|\Delta \lambda^{k}\| + \|u^0-\bar u\|_G \|\Delta u^{k}\|_G \nonumber \\ & \le \left(\|u^0-\bar u\|_G + \frac{\|\bar \lambda\|}{\sqrt{\rho}}\right) \|\Delta u^{k}\|_G.\end{aligned}$$ On the other hand, by using ([\[9.18.11\]](#9.18.11){reference-type="ref" reference="9.18.11"}) we have $$\begin{aligned} H(x^{k}, y^{k}) - H_* & \ge -\langle\bar \lambda, A x^{k} + B y^{k} -c\rangle= -\frac{1}{\rho} \langle\bar \lambda, \Delta \lambda^{k}\rangle\\ & \ge -\frac{\|\bar \lambda\|}{\rho} \|\Delta \lambda^{k}\| \ge - \frac{\|\bar \lambda\|}{\sqrt{\rho}} \|\Delta u^{k}\|_G.\end{aligned}$$ Therefore $$\begin{aligned} \label{eq:sbl} \left|H(x^{k}, y^{k}) - H_*\right| \le \left(\|u^0-\bar u\|_G + \frac{\|\bar \lambda\|}{\sqrt{\rho}}\right) \|\Delta u^{k}\|_G.\end{aligned}$$ Now we can use Proposition [Proposition 5](#lem3){reference-type="ref" reference="lem3"} to conclude the proof. ◻ *Remark 1*. 
By exploiting the connection between the Douglas-Rachford splitting algorithm and the classical ADMM ([\[admm\]](#admm){reference-type="ref" reference="admm"}), the non-ergodic convergence rate ([\[ner\]](#ner){reference-type="ref" reference="ner"}) has been established in [@DY2016] for the classical ADMM ([\[admm\]](#admm){reference-type="ref" reference="admm"}) under the conditions that $$\begin{aligned} \label{DR1} \mbox{zero}(\partial d_f + \partial d_g) \ne \emptyset\end{aligned}$$ and $$\begin{aligned} \label{DR2} \partial d_f = A^*\circ \partial f^* \circ A, \qquad \partial d_g = B^*\circ \partial g^* \circ B - c,\end{aligned}$$ where $d_f (\lambda) := f^*(A^* \lambda)$ and $d_g(\lambda) := g^*(B^*\lambda)-\langle\lambda, c\rangle$ with $f^*$ and $g^*$ denoting the convex conjugates of $f$ and $g$ respectively. The conditions ([\[DR1\]](#DR1){reference-type="ref" reference="DR1"}) and ([\[DR2\]](#DR2){reference-type="ref" reference="DR2"}) seem strong and unnatural because they are imposed on the convex conjugates $f^*$ and $g^*$ instead of $f$ and $g$ themselves. In Theorem [Theorem 6](#thm1){reference-type="ref" reference="thm1"} we establish the non-ergodic convergence rate ([\[ner\]](#ner){reference-type="ref" reference="ner"}) for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with any positive semi-definite $P$ and $Q$, without requiring the conditions ([\[DR1\]](#DR1){reference-type="ref" reference="DR1"}) and ([\[DR2\]](#DR2){reference-type="ref" reference="DR2"}); therefore our result extends and improves the one in [@DY2016]. Next we will consider establishing faster convergence rates under suitable regularity conditions. As a basis, we first prove the following result, which shows that any weak cluster point of $\{u^k\}$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}).
This result can be easily established for ADMM in finite-dimensional spaces; however, it is nontrivial for the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) in infinite-dimensional Hilbert spaces due to the required treatment of weak convergence. Proposition [Proposition 1](#prop9.18){reference-type="ref" reference="prop9.18"} plays a crucial role in our proof. **Theorem 7**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold. Consider the sequence $\{u^k:=(x^k, y^k, \lambda^k)\}$ generated by the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). Assume $\{u^k\}$ is bounded and let $u^\dag :=(x^\dag, y^\dag, \lambda^\dag)$ be a weak cluster point of $\{u^k\}$. Then $u^\dag$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Moreover, for any weak cluster point $u^*$ of $\{u^k\}$ there holds $\|u^*-u^\dag\|_G =0.$* *Proof.* We first show that $u^\dag$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). According to Proposition [Proposition 5](#lem3){reference-type="ref" reference="lem3"} we have $\|\Delta u^k\|_G^2 \to 0$ which means $$\begin{aligned} \label{9.18.12} \Delta \lambda^k \to 0, \quad P \Delta x^k \to 0, \quad B \Delta y^k \to 0, \quad Q\Delta y^k\to 0\end{aligned}$$ as $k\to \infty$. According to Theorem [Theorem 6](#thm1){reference-type="ref" reference="thm1"} we also have $$\begin{aligned} \label{admm-lc.1} A x^k + B y^k -c \to 0 \quad \mbox{and} \quad H(x^k, y^k) \to H_* \quad \mbox{as } k \to \infty.\end{aligned}$$ Since $u^\dag$ is a weak cluster point of the sequence $\{u^k\}$, there exists a subsequence $\{u^{k_j}:= (x^{k_j}, y^{k_j},\lambda^{k_j})\}$ of $\{u^k\}$ such that $u^{k_j} \rightharpoonup u^\dag$ as $j \to \infty$.
By using the first equation in ([\[admm-lc.1\]](#admm-lc.1){reference-type="ref" reference="admm-lc.1"}) we immediately obtain $$\begin{aligned} \label{admm-lc.43} A x^\dag + B y^\dag -c =0.\end{aligned}$$ By using Proposition [Proposition 1](#prop9.18){reference-type="ref" reference="prop9.18"} with $k = k_j -1$ we have for any $u := (x, y, \lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z$ that $$\begin{aligned} \label{admm-lc.41} 0 &\le H(x, y) - H(x^{k_j}, y^{k_j}) + \langle\lambda^{k_j} - \rho B \Delta y^{k_j}, A x + B y - c\rangle\nonumber \\ & \quad - \langle\lambda, A x^{k_j} + B y^{k_j} -c\rangle + \frac{1}{2} \left(\|u^{k_j-1} - u\|_G^2 - \|u^{k_j} - u\|_G^2\right). \end{aligned}$$ According to Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"}, $\{\|u^k\|_G\}$ is bounded. Thus we may use Proposition [Proposition 5](#lem3){reference-type="ref" reference="lem3"} to conclude $$\left|\|u^{k_j-1} - u\|_G^2 - \|u^{k_j} - u\|_G^2\right| \le \left(\|u^{k_j-1} - u\|_G + \|u^{k_j} - u\|_G\right) \|\Delta u^{k_j}\|_G \to 0$$ as $j \to \infty$. Therefore, by taking $j \to \infty$ in ([\[admm-lc.41\]](#admm-lc.41){reference-type="ref" reference="admm-lc.41"}) and using ([\[9.18.12\]](#9.18.12){reference-type="ref" reference="9.18.12"}), ([\[admm-lc.1\]](#admm-lc.1){reference-type="ref" reference="admm-lc.1"}) and $\lambda^{k_j} \rightharpoonup \lambda^\dag$ we can obtain $$\begin{aligned} \label{admm-lc.44} 0 \le H(x, y) - H_* + \langle\lambda^\dag, A x + B y -c\rangle\end{aligned}$$ for all $(x, y) \in \mathcal X\times \mathcal Y$. Since $f$ and $g$ are convex and lower semi-continuous, they are also weakly lower semi-continuous (see [@ET1976 Chapter 1, Corollary 2.2]). 
Thus, by using $x^{k_j} \rightharpoonup x^\dag$ and $y^{k_j} \rightharpoonup y^\dag$ we obtain $$\begin{aligned} H(x^\dag, y^\dag) & = f(x^\dag) + g(y^\dag) \le \liminf_{j\to \infty} f(x^{k_j}) + \liminf_{j\to \infty} g(y^{k_j}) \\ & \le \liminf_{j\to \infty} \left(f(x^{k_j}) + g(y^{k_j}) \right) \\ & = \liminf_{j \to \infty} H(x^{k_j}, y^{k_j}) = H_*.\end{aligned}$$ Since $(x^\dag, y^\dag)$ satisfies ([\[admm-lc.43\]](#admm-lc.43){reference-type="ref" reference="admm-lc.43"}), we also have $H(x^\dag, y^\dag) \ge H_*$. Therefore $H(x^\dag, y^\dag) = H_*$ and then it follows from ([\[admm-lc.44\]](#admm-lc.44){reference-type="ref" reference="admm-lc.44"}) and ([\[admm-lc.43\]](#admm-lc.43){reference-type="ref" reference="admm-lc.43"}) that $$0 \le H(x, y) - H(x^\dag, y^\dag) + \langle\lambda^\dag, A(x - x^\dag) + B (y - y^\dag)\rangle$$ for all $(x, y) \in \mathcal X\times \mathcal Y$. Using the definition of $H$ we can immediately see that $-A^* \lambda^\dag \in \partial f(x^\dag)$ and $-B^* \lambda^\dag \in \partial g(y^\dag)$. Therefore $u^\dag$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Let $u^*$ be another weak cluster point of $\{u^k\}$. Then there exists a subsequence $\{u^{l_j}\}$ of $\{u^k\}$ such that $u^{l_j} \rightharpoonup u^*$ as $j \to \infty$. Note the identity $$\begin{aligned} \label{admm-conv} 2\langle u^k, G(u^* - u^\dag)\rangle= \|u^k - u^\dag\|_G^2 - \|u^k - u^*\|_G^2 - \|u^\dag\|_G^2 + \|u^*\|_G^2.\end{aligned}$$ Since both $u^*$ and $u^\dag$ are KKT points of ([\[prob\]](#prob){reference-type="ref" reference="prob"}) as shown above, it follows from Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"} that both $\{\|u^k - u^\dag\|_G^2\}$ and $\{\|u^k - u^*\|_G^2\}$ are monotonically decreasing and thus converge as $k \to \infty$.
By taking $k = k_j$ and $k=l_j$ in ([\[admm-conv\]](#admm-conv){reference-type="ref" reference="admm-conv"}) respectively and letting $j \to \infty$ we can see that, in both cases, the right hand side tends to the same limit. Therefore $$\begin{aligned} \langle u^*, G(u^*-u^\dag)\rangle& = \lim_{j\to \infty} \langle u^{l_j}, G(u^*-u^\dag)\rangle\\ & = \lim_{j\to \infty} \langle u^{k_j}, G(u^*-u^\dag)\rangle\\ & = \langle u^\dag, G(u^*-u^\dag)\rangle\end{aligned}$$ which implies $\|u^*-u^\dag\|_G^2 =0$. ◻ *Remark 2*. Theorem [Theorem 7](#thm2:ADMM){reference-type="ref" reference="thm2:ADMM"} requires $\{u^k\}$ to be bounded. According to Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"}, $\{\|u^k\|_G^2\}$ is bounded which implies the boundedness of $\{\lambda^k\}$. In the following we will provide sufficient conditions to guarantee the boundedness of $\{(x^k, y^k)\}$. 1. From ([\[eq:cor.1\]](#eq:cor.1){reference-type="ref" reference="eq:cor.1"}) it follows that $\{\sigma_f \|x^k\|^2 + \sigma_g \|y^k\|^2 + \|u^k\|_G^2\}$ is bounded. By the definition of $G$, this in particular implies the boundedness of $\{\lambda^k\}$ and $\{B y^k\}$. Consequently, it follows from $\Delta \lambda^k = \rho(A x^k + B y^k - c)$ that $\{A x^k\}$ is bounded. Putting the above together we can conclude that both $\{(\sigma_f I + P + A^*A) x^k\}$ and $\{(\sigma_g I + Q + B^* B) y^k\}$ are bounded. Therefore, if both the bounded linear self-adjoint operators $$\sigma_f I + P + A^*A \quad \mbox{and} \quad \sigma_g I + Q + B^*B$$ are coercive, we can conclude the boundedness of $\{x^k\}$ and $\{y^k\}$. Here a linear operator $L: \mathcal V\to \mathcal H$ between two Hilbert spaces $\mathcal V$ and $\mathcal H$ is called coercive if $\|Lv\|\to \infty$ whenever $\|v\|\to \infty$. It is easy to see that $L$ is coercive if and only if there is a constant $c>0$ such that $c\|v\|\le \|Lv\|$ for all $v\in \mathcal V$. 2.
If there exist $\beta>H_*$ and $\sigma>0$ such that the set $$\{(x, y) \in \mathcal X\times \mathcal Y: H(x, y)\le \beta \mbox{ and } \|A x + B y -c\| \le \sigma\}$$ is bounded, then $\{(x^k,y^k)\}$ is bounded. In fact, since $H(x^k, y^k)\to H_*$ and $A x^k + B y^k - c\to 0$ as shown in Theorem [Theorem 6](#thm1){reference-type="ref" reference="thm1"}, the sequence $\{(x^k, y^k)\}$ is contained in the above set except for finitely many terms. Thus $\{(x^k, y^k)\}$ is bounded. *Remark 3*. It is interesting to investigate under what conditions $\{u^k\}$ has a unique weak cluster point. According to Theorem [Theorem 7](#thm2:ADMM){reference-type="ref" reference="thm2:ADMM"}, for any two weak cluster points $u^* := (x^*, y^*, \lambda^*)$ and $u^\dag:= (x^\dag, y^\dag, \lambda^\dag)$ of $\{u^k\}$ there hold $$\begin{aligned} & \|u^* - u^\dag\|_G^2 =0, \quad Ax^* + By^* = c, \quad -A^* \lambda^* \in \partial f(x^*), \quad -B^* \lambda^* \in \partial g(y^*), \\ & A x^\dag + B y^\dag = c, \quad -A^* \lambda^\dag \in \partial f(x^\dag), \quad - B^* \lambda^\dag \in \partial g(y^\dag).\end{aligned}$$ By using the definition of $G$ and the monotonicity of $\partial f$ and $\partial g$ we can deduce that $$\begin{aligned} %\label{admm.55} & \lambda^* = \lambda^\dag, \quad P(x^*-x^\dag) =0, \quad Q(y^*-y^\dag) = 0, \quad B(y^* - y^\dag) = 0, \\ & A(x^* - x^\dag) = 0, \quad \sigma_f \|x^* - x^\dag\|^2 =0, \quad \sigma_g \|y^* - y^\dag\|^2=0.\end{aligned}$$ Consequently $$(\sigma_f I + P + A^*A) (x^*-x^\dag) =0 \quad \mbox{ and } \quad (\sigma_g I + Q + B^* B) (y^* - y^\dag) = 0.$$ Therefore, if both $\sigma_f I + P + A^* A$ and $\sigma_g I + Q + B^* B$ are injective, then $x^* = x^\dag$ and $y^* = y^\dag$ and hence $\{u^k\}$ has a unique weak cluster point, say $u^\dag$; consequently $u^k \rightharpoonup u^\dag$ as $k \to \infty$. *Remark 4*. 
In [@Sun2019] the proximal ADMM (with relaxation) has been considered under the condition that $$\begin{aligned} \label{3.5} P + \rho A^*A + \partial f \mbox{ and } Q + \rho B^*B + \partial g \mbox{ are strongly maximal monotone},\end{aligned}$$ which requires that both $(P + \rho A^*A + \partial f)^{-1}$ and $(Q + \rho B^*B + \partial g)^{-1}$ exist as single-valued mappings and are Lipschitz continuous. It has been shown that the iterative sequence converges weakly to a KKT point which is its unique weak cluster point. The argument in [@Sun2019] used the facts that the KKT mapping $F(u)$, defined in ([\[eq:F\]](#eq:F){reference-type="ref" reference="eq:F"}) below, is maximal monotone and that maximal monotone operators are closed in the weak-strong topology ([@AS2009b; @BC2011]). Our argument is essentially based on Proposition 2.1; it is elementary and does not rely on any machinery from maximal monotone operator theory. Based on Theorem [Theorem 7](#thm2:ADMM){reference-type="ref" reference="thm2:ADMM"}, we now turn to deriving convergence rates of the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) under certain regularity conditions. To this end, we introduce the multifunction $F: \mathcal X\times \mathcal Y\times \mathcal Z\rightrightarrows \mathcal X\times \mathcal Y\times \mathcal Z$ defined by $$\begin{aligned} \label{eq:F} F(u):= \left(\begin{array}{ccc} \partial f(x) + A^* \lambda\\ \partial g(y) + B^* \lambda\\ A x + By -c \end{array}\right), \quad \forall u = (x, y, \lambda)\in \mathcal X\times \mathcal Y\times \mathcal Z.\end{aligned}$$ Then $\bar u$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}) if and only if $0 \in F(\bar u)$ or, equivalently, $\bar u\in F^{-1}(0)$, where $F^{-1}$ denotes the inverse multifunction of $F$. We will achieve our goal under certain bounded (Hölder) metric subregularity conditions of $F$. We need the following calculus lemma. **Lemma 8**. 
*Let $\{\Delta_k\}$ be a sequence of nonnegative numbers satisfying $$\begin{aligned} \label{5.8.6} \Delta_k^\theta \le C (\Delta_{k-1}-\Delta_k)\end{aligned}$$ for all $k\ge 1$, where $C>0$ and $\theta>1$ are constants. Then there is a constant $\tilde C>0$ such that $$\Delta_k \le \tilde C (1+ k)^{-\frac{1}{\theta-1}}$$ for all $k\ge 0$.* *Proof.* Please refer to the proof of [@AB2009 Theorem 2]. ◻ **Theorem 9**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold. Consider the sequence $\{u^k:=(x^k, y^k, \lambda^k)\}$ generated by the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). Assume $\{u^k\}$ is bounded and let $u^\dag :=(x^\dag, y^\dag, \lambda^\dag)$ be a weak cluster point of $\{u^k\}$. Let $R$ be a number such that $\|u^k - u^\dag\| \le R$ for all $k$ and assume that there exist $\kappa>0$ and $\alpha\in (0, 1]$ such that $$\begin{aligned} \label{HMS} d(u, F^{-1}(0)) \le \kappa [d(0, F(u))]^\alpha, \quad \forall u \in B_R(u^\dag).\end{aligned}$$* 1. *If $\alpha= 1$, then there exists a constant $0<q<1$ such that $$\begin{aligned} \label{eq:lc} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 \le q^2 \left( \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2\right)\end{aligned}$$ for all $k \ge 0$ and consequently there exist $C>0$ and $0<q<1$ such that $$\begin{aligned} \label{eq:lc.1} \begin{split} \|u^k - u^\dag\|_G, \, \|\Delta u^k\|_G & \le C q^k, \\ \|A x^k + B y^k - c\| & \le C q^k, \\ |H(x^k, y^k) - H_*| & \le C q^k \end{split}\end{aligned}$$ for all $k \ge 0$.* 2. 
*If $\alpha\in (0, 1)$ then there is a constant $C$ such that $$\begin{aligned} \label{eq:HCR} \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 \le C (k+1)^{-\frac{\alpha}{1-\alpha}}\end{aligned}$$ and consequently $$\begin{aligned} \label{eq:HCR.1} \begin{split} \|u^k - u^\dag\|_G, \, \|\Delta u^k \|_G & \le C (k+1)^{-\frac{1}{2(1-\alpha)}}, \\ \|A x^k + B y^k - c\| & \le C (k+1)^{-\frac{1}{2(1-\alpha)}}, \\ |H(x^k, y^k) - H_*| & \le C (k+1)^{- \frac{1}{2(1-\alpha)}} \end{split}\end{aligned}$$ for all $k \ge 0$.* *Proof.* According to Theorem [Theorem 7](#thm2:ADMM){reference-type="ref" reference="thm2:ADMM"}, $u^\dag$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Therefore we may use Lemma [Lemma 4](#prop9.20){reference-type="ref" reference="prop9.20"} with $\bar u = u^\dag$ to obtain $$\begin{aligned} \label{ADMM-LC.2} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - \|\Delta u^{k+1}\|_G^2 \nonumber \\ & = \|u^k- u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - \eta \|\Delta u^{k+1}\|_G^2 \nonumber \\ & \quad \, - (1-\eta) \|\Delta u^{k+1}\|_G^2,\end{aligned}$$ where $\eta\in (0, 1)$ is any number. 
According to ([\[11.11.1\]](#11.11.1){reference-type="ref" reference="11.11.1"}), $$\begin{pmatrix} \rho A^* B \Delta y^{k+1} - P \Delta x^{k+1}\\ - Q \Delta y^{k+1} \\ A x^{k+1} + B y^{k+1} - c \end{pmatrix} \in F(u^{k+1}).$$ Thus, by using $\Delta \lambda^{k+1} = \rho (A x^{k+1} + B y^{k+1}-c)$ we can obtain $$\begin{aligned} \label{ADMM-LC.3} d^2(0, F(u^{k+1})) & \le \|\rho A^* B \Delta y^{k+1} - P \Delta x^{k+1}\|^2 + \| -Q \Delta y^{k+1}\|^2 \nonumber \\ & \quad \, + \|A x^{k+1} + B y^{k+1}-c\|^2 \nonumber \\ & \le 2 \|P \Delta x^{k+1}\|^2 + 2 \rho^2 \|A\|^2 \|B \Delta y^{k+1}\|^2 \nonumber \\ & \quad \, + \|Q \Delta y^{k+1}\|^2 + \frac{1}{\rho^2} \|\Delta \lambda^{k+1}\|^2 \nonumber \\ & \le \gamma \|\Delta u^{k+1}\|_G^2,\end{aligned}$$ where $$\gamma:= \max\left\{2 \|P\|, 2 \rho \|A\|^2, \|Q\|, \frac{1}{\rho}\right\}.$$ Combining this with ([\[ADMM-LC.2\]](#ADMM-LC.2){reference-type="ref" reference="ADMM-LC.2"}) gives $$\begin{aligned} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1} \|_Q^2 & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - \eta \|\Delta u^{k+1}\|_G^2 \\ & \quad \, - \frac{1-\eta}{\gamma} d^2(0, F(u^{k+1})). \end{aligned}$$ Since $\|u^k - u^\dag\|\le R$ for all $k$ and $F$ satisfies ([\[HMS\]](#HMS){reference-type="ref" reference="HMS"}), one can see that $$d(u^{k+1}, F^{-1}(0)) \le \kappa [d(0, F(u^{k+1}))]^\alpha, \quad \forall k\ge 0.$$ Consequently $$\begin{aligned} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - \eta \|\Delta u^{k+1}\|_G^2 \\ & \quad \, - \frac{1-\eta}{\gamma \kappa^{2/\alpha}} [d(u^{k+1}, F^{-1}(0))]^{2/\alpha}. \end{aligned}$$ For any $u =(x, y, \lambda) \in \mathcal X\times \mathcal Y\times \mathcal Z$ let $$d_G(u, F^{-1}(0)):= \inf_{\bar u\in F^{-1}(0)} \|u-\bar u\|_G$$ which measures the "distance\" from $u$ to $F^{-1}(0)$ under the semi-norm $\|\cdot\|_G$. 
It is easy to see that $$d_G^2(u, F^{-1}(0)) \le \|G\| d^2(u, F^{-1}(0)),$$ where $\|G\|$ denotes the norm of the operator $G$. Then we have $$\begin{aligned} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - \eta \|\Delta u^{k+1}\|_G^2 \\ & \quad \, - \frac{1-\eta}{\gamma (\kappa^2 \|G\|)^{1/\alpha}} [d_G(u^{k+1}, F^{-1}(0))]^{2/\alpha}. \end{aligned}$$ Now let $\bar u\in F^{-1}(0)$ be any point. Then $$\begin{aligned} \|u^{k+1} - u^\dag\|_G \le \|u^{k+1} - \bar u\|_G + \|u^\dag - \bar u\|_G.\end{aligned}$$ Since $u^\dag$ is a weak cluster point of $\{u^k\}$, there is a subsequence $\{u^{k_j}\}$ of $\{u^k\}$ such that $u^{k_j} \rightharpoonup u^\dag$. Thus $$\begin{aligned} \|u^\dag - \bar u\|_G^2 = \lim_{j \to \infty} \langle u^{k_j} - \bar u, G(u^\dag - \bar u)\rangle \le \liminf_{j\to \infty} \|u^{k_j} - \bar u\|_G \|u^\dag - \bar u\|_G\end{aligned}$$ which implies $\|u^\dag - \bar u\|_G \le \liminf_{j\to \infty} \|u^{k_j} - \bar u\|_G$. From Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"} we know that $\{\|u^k - \bar u\|_G^2\}$ is monotonically decreasing. Thus $$\begin{aligned} \|u^{k+1} - u^\dag\|_G \le \|u^{k+1} - \bar u\|_G + \liminf_{j\to \infty} \|u^{k_j} - \bar u\|_G \le 2 \|u^{k+1} - \bar u\|_G.\end{aligned}$$ Since $\bar u\in F^{-1}(0)$ is arbitrary, we thus have $$\|u^{k+1}- u^\dag\|_G \le 2 d_G(u^{k+1}, F^{-1}(0)).$$ Therefore $$\begin{aligned} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - \eta \|\Delta u^{k+1}\|_G^2 \\ & \quad \, - \frac{1-\eta}{\gamma (4\kappa^2 \|G\|)^{1/\alpha}} \|u^{k+1} - u^\dag\|_G^{2/\alpha}. 
\end{aligned}$$ By using the fact $\|\Delta u^k\|_G\to 0$ established in Proposition [Proposition 5](#lem3){reference-type="ref" reference="lem3"}, we can find a constant $C>0$ such that $$\|\Delta u^{k+1}\|_G^2 \ge C \|\Delta u^{k+1}\|_G^{2/\alpha}.$$ Note that $\|\Delta u^{k+1}\|_G^2 \ge \|\Delta y^{k+1}\|_Q^2$. Thus $$\begin{aligned} \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - C \eta \|\Delta y^{k+1}\|_Q^{2/\alpha} \\ & \quad \, - \frac{1-\eta}{\gamma (4\kappa^2 \|G\|)^{1/\alpha}} \|u^{k+1} - u^\dag\|_G^{2/\alpha}. \end{aligned}$$ Choose $\eta$ such that $$\eta = \frac{1}{1 + C \gamma (4 \kappa^2 \|G\|)^{1/\alpha}}.$$ Then $$\begin{aligned} & \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 \\ & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - C \eta \left(\|\Delta y^{k+1}\|_Q^{2/\alpha} + \|u^{k+1} - u^\dag\|_G^{2/\alpha}\right). \end{aligned}$$ Using the inequality $(a+b)^p \le 2^{p-1}(a^p + b^p)$ for $a,b\ge 0$ and $p\ge 1$, we then obtain $$\begin{aligned} \label{eq:lc.3} & \|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2 \nonumber\\ & \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2 - 2^{1-1/\alpha} C \eta \left(\|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2\right)^{1/\alpha}. \end{aligned}$$ \(i\) If $\alpha=1$, then we obtain the linear convergence $$(1 + C\eta) \left(\|u^{k+1} - u^\dag\|_G^2 + \|\Delta y^{k+1}\|_Q^2\right) \le \|u^k - u^\dag\|_G^2 + \|\Delta y^k\|_Q^2$$ which is ([\[eq:lc\]](#eq:lc){reference-type="ref" reference="eq:lc"}) with $q^2 = 1/(1 + C \eta)$. By using Lemma [Lemma 4](#prop9.20){reference-type="ref" reference="prop9.20"} and ([\[eq:lc\]](#eq:lc){reference-type="ref" reference="eq:lc"}) we immediately obtain the first estimate in ([\[eq:lc.1\]](#eq:lc.1){reference-type="ref" reference="eq:lc.1"}). 
By using ([\[eq:sbl.2\]](#eq:sbl.2){reference-type="ref" reference="eq:sbl.2"}) and ([\[eq:sbl\]](#eq:sbl){reference-type="ref" reference="eq:sbl"}) we then obtain the last two estimates in ([\[eq:lc.1\]](#eq:lc.1){reference-type="ref" reference="eq:lc.1"}). \(ii\) If $\alpha\in (0, 1)$, we may use ([\[eq:lc.3\]](#eq:lc.3){reference-type="ref" reference="eq:lc.3"}) and Lemma [Lemma 8](#lem5){reference-type="ref" reference="lem5"} to obtain ([\[eq:HCR\]](#eq:HCR){reference-type="ref" reference="eq:HCR"}). To derive the first estimate in ([\[eq:HCR.1\]](#eq:HCR.1){reference-type="ref" reference="eq:HCR.1"}), we may use Lemma [Lemma 4](#prop9.20){reference-type="ref" reference="prop9.20"} to obtain $$\sum_{j=l}^k \|\Delta u^j\|_G^2 \le \|u^l - u^\dag\|_G^2 + \|\Delta y^l\|_Q^2$$ for all integers $1 \le l < k$. By using the monotonicity of $\{\|\Delta u^j\|_G^2\}$ shown in Lemma [Lemma 3](#lem:mono){reference-type="ref" reference="lem:mono"} and the estimate ([\[eq:HCR\]](#eq:HCR){reference-type="ref" reference="eq:HCR"}) we have $$(k-l+1) \|\Delta u^k\|_G^2 \le C (l+1)^{-\frac{\alpha}{1-\alpha}}.$$ Taking $l = [k/2]$, the largest integer $\le k/2$, gives $$\|\Delta u^k\|_G^2 \le C (k + 1)^{-\frac{\alpha}{1-\alpha}-1} = C (k+1)^{-\frac{1}{1-\alpha}}$$ with a possibly different generic constant $C$. This shows the first estimate in ([\[eq:HCR.1\]](#eq:HCR.1){reference-type="ref" reference="eq:HCR.1"}). Based on this, we can use ([\[eq:sbl.2\]](#eq:sbl.2){reference-type="ref" reference="eq:sbl.2"}) and ([\[eq:sbl\]](#eq:sbl){reference-type="ref" reference="eq:sbl"}) to obtain the last two estimates in ([\[eq:HCR.1\]](#eq:HCR.1){reference-type="ref" reference="eq:HCR.1"}). The proof is therefore complete. ◻ *Remark 5*. Let us give some comments on the condition ([\[HMS\]](#HMS){reference-type="ref" reference="HMS"}). 
In finite dimensional Euclidean spaces, it has been proved in [@R1981] that for every polyhedral multifunction $\Psi: {\mathbb R}^m \rightrightarrows {\mathbb R}^n$ there is a constant $\kappa>0$ such that for any $y \in {\mathbb R}^n$ there is a number $\varepsilon>0$ such that $$d(x, \Psi^{-1}(y)) \le \kappa d(y, \Psi(x)), \quad \forall x \mbox{ satisfying } d(y, \Psi(x))<\varepsilon.$$ This result in particular implies the bounded metric subregularity of $\Psi$, i.e. for any $r>0$ and any $y \in {\mathbb R}^n$ there is a number $C>0$ such that $$d(x, \Psi^{-1}(y)) \le C d(y, \Psi(x)), \quad \forall x \in B_r(0).$$ Therefore, if $\partial f$ and $\partial g$ are polyhedral multifunctions, then the multifunction $F$ defined by ([\[eq:F\]](#eq:F){reference-type="ref" reference="eq:F"}) is also polyhedral and thus ([\[HMS\]](#HMS){reference-type="ref" reference="HMS"}) with $\alpha=1$ holds. The bounded metric subregularity of polyhedral multifunctions in arbitrary Banach spaces has been established in [@ZN2014]. On the other hand, if $\mathcal X$, $\mathcal Y$ and $\mathcal Z$ are finite dimensional Euclidean spaces, and if $f$ and $g$ are semi-algebraic convex functions, then the multifunction $F$ satisfies ([\[HMS\]](#HMS){reference-type="ref" reference="HMS"}) for some $\alpha\in (0, 1]$. Indeed, the semi-algebraicity of $f$ and $g$ implies that their subdifferentials $\partial f$ and $\partial g$ are semi-algebraic multifunctions with closed graph; consequently $F$ is semi-algebraic with closed graph. According to [@LP2022 Proposition 3.1], $F$ is bounded Hölder metrically subregular at any point $(\bar u, \bar\xi)$ on its graph, i.e. for any $r>0$ there exist $\kappa>0$ and $\alpha\in (0,1]$ such that $$d(u, F^{-1}(\bar \xi)) \le \kappa [d(\bar \xi, F(u))]^\alpha, \quad \forall u \in B_r(\bar u)$$ which in particular implies ([\[HMS\]](#HMS){reference-type="ref" reference="HMS"}). 
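As a finite-dimensional illustration of the metric subregularity discussed above (a sketch only; the matrix $A$ and the map $\Psi(x)=Ax$ are our own toy choices, not objects from this paper), for a linear, hence polyhedral, map the constant $\kappa$ can be taken as the reciprocal of the smallest nonzero singular value of $A$:

```python
import numpy as np

# Metric subregularity of the linear (hence polyhedral) map Psi(x) = A x:
#   d(x, Psi^{-1}(y)) <= kappa * d(y, Psi(x))  with  kappa = 1/sigma_min^+(A),
# where sigma_min^+(A) denotes the smallest nonzero singular value of A.
rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))               # full row rank almost surely
s = np.linalg.svd(A, compute_uv=False)
kappa = 1.0 / min(v for v in s if v > 1e-10)

x = rng.standard_normal(5)
y = A @ rng.standard_normal(5)                # y in range(A): Psi^{-1}(y) nonempty
x_proj = x - np.linalg.pinv(A) @ (A @ x - y)  # projection of x onto {z : A z = y}
assert np.allclose(A @ x_proj, y)
assert np.linalg.norm(x - x_proj) <= kappa * np.linalg.norm(A @ x - y) + 1e-10
```

Here $d(x,\Psi^{-1}(y)) = \|x - x_{\mathrm{proj}}\|$ because the solution set is affine, so the inequality checked is exactly the subregularity estimate in this linear special case.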
By inspecting the proof of Theorem [Theorem 9](#thm4:ADMM){reference-type="ref" reference="thm4:ADMM"}, it is easy to see that the same convergence rate results can be derived with the condition ([\[HMS\]](#HMS){reference-type="ref" reference="HMS"}) replaced by the weaker condition: there exist $\kappa>0$ and $\alpha\in (0,1]$ such that $$\begin{aligned} \label{IBEBC} d_G(u^k, F^{-1}(0)) \le \kappa \left\|\Delta u^k\right\|_G^\alpha, \quad \forall k \ge 1. \end{aligned}$$ Therefore we have the following result. **Theorem 10**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold. Consider the sequence $\{u^k:=(x^k, y^k, \lambda^k)\}$ generated by the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). Assume $\{u^k\}$ is bounded. If there exist $\kappa>0$ and $\alpha\in (0, 1]$ such that ([\[IBEBC\]](#IBEBC){reference-type="ref" reference="IBEBC"}) holds, then, for any weak cluster point $u^\dag$ of $\{u^k\}$, the same convergence rate results in Theorem [Theorem 9](#thm4:ADMM){reference-type="ref" reference="thm4:ADMM"} hold.* *Remark 6*. Note that the condition ([\[IBEBC\]](#IBEBC){reference-type="ref" reference="IBEBC"}) is based on the iterative sequence itself. Therefore, it is possible to check the condition by exploring not only the properties of the multifunction $F$ but also the structure of the algorithm. The condition ([\[IBEBC\]](#IBEBC){reference-type="ref" reference="IBEBC"}) with $\alpha= 1$ has been introduced in [@LYZZ2018] as an iteration-based error bound condition to study the linear convergence of the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) with $Q =0$ in finite dimensions. *Remark 7*. The condition ([\[IBEBC\]](#IBEBC){reference-type="ref" reference="IBEBC"}) is strongly motivated by the proof of Theorem [Theorem 9](#thm4:ADMM){reference-type="ref" reference="thm4:ADMM"}. 
We would like to provide here an alternative motivation. Consider the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). We can show that if $\|\Delta u^k\|_G =0$ then $u^k$ must be a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Indeed, $\|\Delta u^k\|_G^2=0$ implies $P\Delta x^k =0$, ${\widehat Q} \Delta y^k=0$ and $\Delta \lambda^k=0$. Since ${\widehat Q} = Q +\rho B^* B$ with $Q$ positive semi-definite and $\Delta \lambda^k = \rho (A x^k + B y^k-c)$, we also have $B \Delta y^k=0$, $Q\Delta y^k=0$ and $A x^k+B y^k-c=0$. Thus, it follows from ([\[11.11.1\]](#11.11.1){reference-type="ref" reference="11.11.1"}) that $$-A^* \lambda^k \in \partial f(x^k), \quad -B^* \lambda^k \in \partial g(y^k), \quad A x^k + B y^k =c$$ which shows that $u^k=(x^k, y^k, \lambda^k)$ is a KKT point, i.e., $u^k \in F^{-1}(0)$. Therefore, it is natural to ask: if $\|\Delta u^k\|_G$ is small, can we guarantee $d_G(u^k, F^{-1}(0))$ to be small as well? This motivates us to propose a condition like $$d_G(u^k, F^{-1}(0)) \le \varphi(\|\Delta u^k\|_G), \quad \forall k \ge 1$$ for some function $\varphi: [0, \infty) \to [0, \infty)$ with $\varphi(0) =0$. The condition ([\[IBEBC\]](#IBEBC){reference-type="ref" reference="IBEBC"}) corresponds to $\varphi(s) = \kappa s^\alpha$ for some $\kappa>0$ and $\alpha\in (0, 1]$. In finite dimensional Euclidean spaces some linear convergence results on the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}) have been established in [@DY2016b] under various scenarios involving strong convexity of $f$ and/or $g$, Lipschitz continuity of $\nabla f$ and/or $\nabla g$, together with further conditions on $A$ and/or $B$, see [@DY2016b Theorem 3.1 and Table 1]. 
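To make the fixed-point observation above concrete, the following toy sketch (our own minimal instance, not the general setting of the paper: $\mathcal X=\mathcal Y=\mathcal Z=\mathbb{R}^n$, $A=B=I$, $P=Q=0$, and quadratic $f$, $g$ with closed-form subproblems) runs the ADMM iteration and checks that the iterates approach the unique KKT point:

```python
import numpy as np

# Toy instance (illustration only): f(x) = 0.5||x-a||^2, g(y) = 0.5||y-b||^2,
# constraint x + y = c, so A = B = I and P = Q = 0.  The unique KKT point is
#   x* = (a-b+c)/2,  y* = (b-a+c)/2,  lambda* = (a+b-c)/2.
rng = np.random.default_rng(1)
n, rho = 4, 1.0
a, b, c = rng.standard_normal((3, n))
x, y, lam = np.zeros(n), np.zeros(n), np.zeros(n)

for k in range(100):
    # closed-form minimizers of the two augmented Lagrangian subproblems
    x = (a - lam + rho * (c - y)) / (1 + rho)
    y = (b - lam + rho * (c - x)) / (1 + rho)
    lam = lam + rho * (x + y - c)

x_star, y_star, lam_star = (a - b + c) / 2, (b - a + c) / 2, (a + b - c) / 2
assert np.linalg.norm(x - x_star) < 1e-8
assert np.linalg.norm(y - y_star) < 1e-8
assert np.linalg.norm(lam - lam_star) < 1e-8
```

Since both objectives here are strongly convex and the constraint operators are coercive, this instance falls under the linear convergence scenarios of [@DY2016b]; numerically the error indeed decays geometrically.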
In the following theorem we will show that ([\[IBEBC\]](#IBEBC){reference-type="ref" reference="IBEBC"}) with $\alpha= 1$ holds under any one of these scenarios and thus the linear convergence in [@DY2016b Theorem 3.1 and Theorem 3.4] can be established by using Theorem [Theorem 10](#thm5:ADMM){reference-type="ref" reference="thm5:ADMM"}. Therefore, the linear convergence results based on the bounded metric subregularity of $F$ or the scenarios in [@DY2016b] can be treated in a unified manner. In fact, our next theorem improves the results in [@DY2016b] by establishing the linear convergence of $\{u^k\}$ and $\{H(x^k, y^k)\}$ and relaxing the Lipschitz continuity of the gradient(s) to local Lipschitz continuity. Furthermore, our result is established in general Hilbert spaces. To formulate the scenarios from [@DY2016b] in this general setting, we need to replace the full row/column rank of matrices by the coercivity of linear operators. We also need the linear operator $M: \mathcal X\times \mathcal Y\to \mathcal Z$ defined by $$M(x,y):=Ax+By, \quad \forall (x,y)\in \mathcal X\times \mathcal Y$$ which is constructed from $A$ and $B$. It is easy to see that the adjoint of $M$ is $M^* z = (A^* z, B^* z)$ for any $z \in \mathcal Z$. **Theorem 11**. *Let Assumption [Assumption 1](#Ass1){reference-type="ref" reference="Ass1"} and Assumption [Assumption 2](#Ass2){reference-type="ref" reference="Ass2"} hold. Let $\{u^k\}$ be the sequence generated by the proximal ADMM ([\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). Then $\{u^k\}$ is bounded and there exists a constant $C>0$ such that $$\begin{aligned} \label{ADMM.30} d_G(u^k, F^{-1}(0)) \le C \|\Delta u^k\|_G\end{aligned}$$ for all $k\ge 1$, provided any one of the following conditions holds:* 1. *$\sigma_g>0$, $A$ and $B^*$ are coercive, $g$ is differentiable and its gradient is Lipschitz continuous over bounded sets;* 2. 
*$\sigma_f>0$, $\sigma_g>0$, $B^*$ is coercive, $g$ is differentiable and its gradient is Lipschitz continuous over bounded sets;* 3. *$\lambda^0=0$, $\sigma_f>0$, $\sigma_g>0$, $M^*$ restricted on $\mathcal N(M^*)^\perp$ is coercive, both $f$ and $g$ are differentiable and their gradients are Lipschitz continuous over bounded sets;* 4. *$\lambda^0=0$, $\sigma_g>0$, $A$ is coercive, $M^*$ restricted on $\mathcal N(M^*)^\perp$ is coercive, both $f$ and $g$ are differentiable and their gradients are Lipschitz continuous over bounded sets;* *where $\mathcal N(M^*)$ denotes the null space of $M^*$. Consequently, there exist $C>0$ and $0< q<1$ such that $$\|u^k - u^\dag\| \le C q^k \quad \mbox{ and } \quad |H(x^k, y^k) - H_*| \le C q^{k}$$ for all $k \ge 0$, where $u^\dag:=(x^\dag, y^\dag, \lambda^\dag)$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}).* *Proof.* We will only consider the scenario (i) since the proofs for other scenarios are similar. In the following we will use $C$ to denote a generic constant which may change from line to line but is independent of $k$. We first show the boundedness of $\{u^k\}$. According to Corollary [Corollary 2](#cor2){reference-type="ref" reference="cor2"}, $\{\|u^k\|_G^2\}$ is bounded which implies the boundedness of $\{\lambda^k\}$. Since $\sigma_g>0$, it follows from ([\[eq:cor.1\]](#eq:cor.1){reference-type="ref" reference="eq:cor.1"}) that $\{y^k\}$ is bounded. Consequently, it follows from $\Delta \lambda^k = \rho(A x^k + B y^k -c)$ that $\{A x^k\}$ is bounded. Since $A$ is coercive, $\{x^k\}$ must be bounded. Next we show ([\[ADMM.30\]](#ADMM.30){reference-type="ref" reference="ADMM.30"}). Let $u^\dag :=(x^\dag, y^\dag, \lambda^\dag)$ be a weak cluster point of $\{u^k\}$ whose existence is guaranteed by the boundedness of $\{u^k\}$. 
According to Theorem [Theorem 7](#thm2:ADMM){reference-type="ref" reference="thm2:ADMM"}, $u^\dag$ is a KKT point of ([\[prob\]](#prob){reference-type="ref" reference="prob"}). Let $(\xi, \eta, \tau) \in F(u^k)$ be any element. Then $$\xi - A^* \lambda^k \in \partial f(x^k), \quad \eta - B^* \lambda^k \in \partial g(y^k), \quad \tau = A x^k + B y^k - c.$$ By using the monotonicity of $\partial f$ and $\partial g$ we have $$\begin{aligned} \label{ADMM.31} & \sigma_f \|x^k - x^\dag\|^2 + \sigma_g \|y^k - y^\dag\|^2 \nonumber \\ & \le \langle\xi - A^* \lambda^k + A^* \lambda^\dag, x^k - x^\dag\rangle+ \langle\eta - B^* \lambda^k + B^* \lambda^\dag, y^k - y^\dag\rangle\nonumber \\ & = \langle\xi, x^k - x^\dag\rangle+ \langle\eta, y^k - y^\dag\rangle+ \langle\lambda^\dag - \lambda^k, A(x^k-x^\dag) + B(y^k - y^\dag)\rangle\nonumber \\ & = \langle\xi, x^k - x^\dag\rangle+ \langle\eta, y^k - y^\dag\rangle+ \langle\lambda^\dag - \lambda^k, \tau\rangle.\end{aligned}$$ Since $\sigma_g>0$, it follows from ([\[ADMM.31\]](#ADMM.31){reference-type="ref" reference="ADMM.31"}) and the Cauchy-Schwarz inequality that $$\begin{aligned} \label{ADMM.32} \|y^k - y^\dag\|^2 \le C \left(\|\eta\|^2 + \|\xi\| \|x^k - x^\dag\| + \|\tau\| \|\lambda^k - \lambda^\dag\|\right).\end{aligned}$$ Note that $A (x^k - x^\dag) = - B(y^k - y^\dag) + \frac{1}{\rho} \Delta\lambda^k$. Since $A$ is coercive, we have $$\begin{aligned} \label{ADMM.34} \|x^k - x^\dag\|^2 \le C \|A(x^k - x^\dag)\|^2 \le C\left(\|y^k - y^\dag\|^2 + \|\Delta \lambda^k\|^2\right). \end{aligned}$$ By the differentiability of $g$ we have $-B^*\lambda^\dag = \nabla g(y^\dag)$ and $-B^* \lambda^k - Q \Delta y^k = \nabla g(y^k)$. 
Since $B^*$ is coercive and $\nabla g$ is Lipschitz continuous over bounded sets, we thus obtain $$\begin{aligned} \label{ADMM.33} \|\lambda^k - \lambda^\dag\|^2 & \le C \|B^*(\lambda^k - \lambda^\dag)\|^2 = \| Q\Delta y^k + \nabla g(y^k) - \nabla g(y^\dag)\|^2 \nonumber \\ & \le C \left(\|\Delta y^k\|_Q^2 + \|y^k - y^\dag\|^2\right).\end{aligned}$$ Adding ([\[ADMM.34\]](#ADMM.34){reference-type="ref" reference="ADMM.34"}) and ([\[ADMM.33\]](#ADMM.33){reference-type="ref" reference="ADMM.33"}) and then using ([\[ADMM.32\]](#ADMM.32){reference-type="ref" reference="ADMM.32"}), it follows $$\begin{aligned} \|x^k - x^\dag\|^2 + \|\lambda^k - \lambda^\dag\|^2 %& \le C \left(\|y^k - y^\dag\|^2 + \|\Delta u^k\|_G^2\right) \\ & \le C \left(\|\eta\|^2 + \|\Delta u^k\|_G^2 + \|\xi\| \|x^k - x^\dag\| + \|\tau\| \|\lambda^k - \lambda^\dag\|\right) \end{aligned}$$ which together with the Cauchy-Schwarz inequality then implies $$\begin{aligned} \label{ADMM.35} \|x^k - x^\dag\|^2 + \|\lambda^k - \lambda^\dag\|^2 \le C\left(\|\xi\|^2 + \|\eta\|^2 +\|\tau\|^2 + \|\Delta u^k\|_G^2\right).\end{aligned}$$ Combining ([\[ADMM.32\]](#ADMM.32){reference-type="ref" reference="ADMM.32"}) and ([\[ADMM.35\]](#ADMM.35){reference-type="ref" reference="ADMM.35"}) we can obtain $$\begin{aligned} \|x^k - x^\dag\|^2 + \|y^k - y^\dag\|^2 + \|\lambda^k - \lambda^\dag\|^2 \le C \left(\|\xi\|^2 + \|\eta\|^2 + \|\tau\|^2 + \|\Delta u^k\|_G^2\right). 
\end{aligned}$$ Since $(\xi, \eta, \tau) \in F(u^k)$ is arbitrary, we therefore have $$\begin{aligned} \|u^k - u^\dag\|^2 \le C \left([d(0, F(u^k))]^2 + \|\Delta u^k\|_G^2\right).\end{aligned}$$ With the help of ([\[ADMM-LC.3\]](#ADMM-LC.3){reference-type="ref" reference="ADMM-LC.3"}), we then obtain $$\begin{aligned} \label{admm.71} \|u^k - u^\dag\|^2 \le C \|\Delta u^k\|_G^2.\end{aligned}$$ Thus $$\begin{aligned} d_G^2(u^k, F^{-1}(0)) \le C [d(u^k, F^{-1}(0))]^2 \le C\|u^k - u^\dag\|^2 \le C \|\Delta u^k\|_G^2\end{aligned}$$ which shows ([\[ADMM.30\]](#ADMM.30){reference-type="ref" reference="ADMM.30"}). Because $\{u^k\}$ is bounded and ([\[ADMM.30\]](#ADMM.30){reference-type="ref" reference="ADMM.30"}) holds, we may use Theorem [Theorem 10](#thm5:ADMM){reference-type="ref" reference="thm5:ADMM"} to conclude the existence of a constant $q \in (0, 1)$ such that $$\|\Delta u^k\|_G \le C q^k \quad \mbox{and} \quad |H(x^k, y^k) - H_*|\le C q^{k}.$$ Finally we may use ([\[admm.71\]](#admm.71){reference-type="ref" reference="admm.71"}) to obtain $\|u^k-u^\dag\| \le C q^k$. ◻ *Remark 8*. If $\mathcal Z$ is finite-dimensional, the coercivity of $M^*$ restricted on $\mathcal N(M^*)^\perp$ required in the scenarios (iii) and (iv) holds automatically. Indeed, suppose it fails. Then there exists a sequence $\{z^k\}\subset \mathcal N(M^*)^\perp\setminus\{0\}$ such that $$\|z^k\| \ge k \|M^* z^k\|, \quad k = 1, 2, \cdots.$$ By rescaling we may assume $\|z^k\|=1$ for all $k$. Since $\mathcal Z$ is finite-dimensional, by taking a subsequence if necessary, we may assume $z^k \to z$ for some $z \in \mathcal Z$. Clearly $z\in \mathcal N(M^*)^\perp$ and $\|z\|=1$. Since $\|M^* z^k\|\le 1/k$ for all $k$, we have $\|M^* z\| = \lim_{k\to \infty} \|M^* z^k\|=0$ which means $z \in \mathcal N(M^*)$. Thus $z \in \mathcal N(M^*) \cap \mathcal N(M^*)^\perp = \{0\}$ which is a contradiction. 
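In finite dimensions the coercivity constants appearing in the scenarios above can be computed from singular values; the following sketch (arbitrary matrices, illustration only, not objects from the paper) checks both the characterization $c\|v\|\le\|Lv\|$ of coercivity and the coercivity of $M^*$ restricted on $\mathcal N(M^*)^\perp=\mathrm{range}(M)$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Coercivity of a matrix operator L: the best constant c in c||v|| <= ||L v||
# is the smallest singular value of L.
L = rng.standard_normal((5, 3))            # full column rank: coercive
c = np.linalg.svd(L, compute_uv=False).min()
v = rng.standard_normal(3)
assert np.linalg.norm(L @ v) >= c * np.linalg.norm(v) - 1e-12

# M(x, y) = A x + B y, so M* z = (A* z, B* z).  On N(M*)^perp = range(M),
# M* is coercive with constant the smallest nonzero singular value of M.
A = rng.standard_normal((3, 5))
B = rng.standard_normal((3, 2))
M = np.hstack([A, B])
c_star = min(s for s in np.linalg.svd(M, compute_uv=False) if s > 1e-10)
z = M @ rng.standard_normal(7)             # z in range(M) = N(M*)^perp
assert np.linalg.norm(M.T @ z) >= c_star * np.linalg.norm(z) - 1e-8
```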
# **Proximal ADMM for linear inverse problems** {#sect3} In this section we consider the method ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) as a regularization method for solving ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) and establish a convergence rate result under a benchmark source condition on the sought solution. Throughout this section we make the following assumptions on the operators $Q$, $L$, $A$, the constraint set $\mathcal C$ and the function $f$: **Assumption 3**. 1. $A: \mathcal X\to \mathcal H$ is a bounded linear operator, $Q: \mathcal X\to \mathcal X$ is a bounded linear positive semi-definite self-adjoint operator, and $\mathcal C\subset \mathcal X$ is a closed convex subset. 2. $L$ is a densely defined, closed, linear operator from $\mathcal X$ to $\mathcal Y$ with domain $\emph{dom}(L)$. 3. There is a constant $c_0>0$ such that $$\|A x\|^2 + \|L x\|^2 \ge c_0 \|x\|^2, \qquad \forall x\in \emph{dom}(L).$$ 4. $f: \mathcal{Y}\to (-\infty, \infty]$ is proper, lower semi-continuous, and strongly convex. These assumptions are standard in the literature on regularization methods and have been used in [@JJLW2016; @JJLW2017]. Based on (ii), we can define the adjoint $L^*$ of $L$, which is also closed and densely defined; moreover, $z\in \mbox{dom}(L^*)$ if and only if there exists $w \in \mathcal X$ such that $\langle w, x\rangle=\langle z, L x\rangle$ for all $x\in \mbox{dom}(L)$, in which case $L^* z = w$. Under Assumption [Assumption 3](#ass:ADMM1){reference-type="ref" reference="ass:ADMM1"}, it has been shown in [@JJLW2016; @JJLW2017] that the proximal ADMM ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) is well-defined and if the exact data $b$ is consistent in the sense that there exists $\hat x\in \mathcal X$ such that $$\hat x \in \mbox{dom}(L) \cap \mathcal C, \quad L \hat x \in \mbox{dom}(f) \quad \mbox{ and } \quad A \hat x = b,$$ then the problem ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) has a unique solution, denoted by $x^\dag$. 
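Before proceeding, here is a finite-dimensional sketch of condition (iii) of Assumption 3 (a hypothetical discretization of our own, not operators from the paper: $A$ samples the first entry and $L$ is a forward-difference matrix). The null space of $L$ consists of constant vectors, which $A$ does not annihilate, so $A^*A + L^*L$ is positive definite and $c_0$ can be taken as its smallest eigenvalue:

```python
import numpy as np

# Hypothetical discretization: A observes x_1 only; L is forward differences.
n = 6
A = np.zeros((1, n))
A[0, 0] = 1.0
L = -np.eye(n - 1, n) + np.eye(n - 1, n, 1)   # (L x)_i = x_{i+1} - x_i

# ||A x||^2 + ||L x||^2 = <x, (A^T A + L^T L) x>  >=  c0 ||x||^2
c0 = np.linalg.eigvalsh(A.T @ A + L.T @ L).min()
assert c0 > 0
```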
Furthermore, there holds the following monotonicity result, see [@JJLW2017 Lemma 2.3]; alternatively, it can also be derived from Lemma [Lemma 3](#lem:mono){reference-type="ref" reference="lem:mono"}. **Lemma 12**. *Let $\{z^k, y^k, x^k, \lambda^k, \mu^k, \nu^k\}$ be defined by the proximal ADMM ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) with noisy data and let $$\begin{aligned} \label{eq:25} E_k & := \frac{1}{2\rho_1} \|\Delta \lambda^k\|^2 + \frac{1}{2\rho_2} \|\Delta \mu^k\|^2 + \frac{1}{2\rho_3} \|\Delta \nu^k\|^2 \nonumber \\ & \quad \ + \frac{\rho_2}{2} \|\Delta y^k\|^2 + \frac{\rho_3}{2} \|\Delta x^k\|^2 + \frac{1}{2} \|\Delta z^k\|_Q^2.\end{aligned}$$ Then $\{E_k\}$ is monotonically decreasing with respect to $k$.* In the following we will always assume the exact data $b$ is consistent. We will derive a convergence rate of $x^k$ to the unique solution $x^\dag$ of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) under the source condition $$\begin{aligned} \label{ip.2} \exists \mu^\dag \in \partial f(L x^\dag) \cap \mbox{dom}(L^*) \mbox{ and } \nu^\dag \in \partial\iota_\mathcal C(x^\dag) \mbox{ such that } L^* \mu^\dag + \nu^\dag \in \mbox{Ran}(A^*).\end{aligned}$$ Note that when $L = I$ and $\mathcal C= \mathcal X$, ([\[ip.2\]](#ip.2){reference-type="ref" reference="ip.2"}) becomes the benchmark source condition $$\partial f(x^\dag) \cap \mbox{Ran}(A^*) \ne \emptyset$$ which has been widely used to derive convergence rates for regularization methods, see [@BO2004; @FS2010; @Jin2022; @RS2006] for instance. We have the following convergence rate result. **Theorem 13**. *Let Assumption [Assumption 3](#ass:ADMM1){reference-type="ref" reference="ass:ADMM1"} hold, let the exact data $b$ be consistent, and let the sequence $\{z^k, y^k, x^k, \lambda^k, \mu^k, \nu^k\}$ be defined by the proximal ADMM ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) with noisy data $b^\delta$ satisfying $\|b^\delta- b\| \le \delta$. 
Assume the unique solution $x^\dag$ of ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) satisfies the source condition ([\[ip.2\]](#ip.2){reference-type="ref" reference="ip.2"}). Then for the integer $k_\delta$ chosen such that $k_\delta\sim \delta^{-1}$ there hold $$\|x^{k_\delta} - x^\dag\| = O(\delta^{1/4}), \quad \|y^{k_\delta} - L x^\dag\| = O(\delta^{1/4}) \quad \mbox{and} \quad \|z^{k_\delta} - x^\dag\| = O(\delta^{1/4})$$ as $\delta\to 0$.* In order to prove this result, let us start from the formulation of the algorithm ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) to derive some useful estimates. For simplicity of exposition, we set $$\begin{aligned} &\Delta x^{k+1}:= x^{k+1} - x^k, \quad \Delta y^{k+1}:= y^{k+1} - y^k, \quad \Delta z^{k+1}:= z^{k+1} - z^k, \\ &\Delta \lambda^{k+1}:= \lambda^{k+1} - \lambda^k, \quad \Delta \mu^{k+1}:= \mu^{k+1} - \mu^k, \quad \Delta \nu^{k+1}:= \nu^{k+1} - \nu^k.\end{aligned}$$ According to the definition of $z^{k+1}$, $y^{k+1}$ and $x^{k+1}$ in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}), we have the optimality conditions $$\begin{aligned} & 0 = A^* \lambda^k + \nu^k + \rho_1 A^* (A z^{k+1} - b^\delta) + L^* (\mu^k + \rho_2(L z^{k+1} -y^k)) \nonumber \\ & \quad \quad + \rho_3 (z^{k+1} - x^k) + Q(z^{k+1} - z^k), \label{ip.3}\\ & 0 \in \partial f(y^{k+1}) - \mu^k - \rho_2(L z^{k+1} - y^{k+1}), \label{ip.4}\\ & 0 \in \partial\iota_{\mathcal C}(x^{k+1}) -\nu^k - \rho_3 (z^{k+1} - x^{k+1}). \label{ip.5}\end{aligned}$$ By using the last two equations in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}), we have from ([\[ip.4\]](#ip.4){reference-type="ref" reference="ip.4"}) and ([\[ip.5\]](#ip.5){reference-type="ref" reference="ip.5"}) that $$\begin{aligned} \mu^{k+1} \in \partial f(y^{k+1}) \quad \mbox{and} \quad \nu^{k+1} \in \partial\iota_{\mathcal C}(x^{k+1}). \label{ip.7}\end{aligned}$$ Let $y^\dag := L x^\dag$.
From the strong convexity of $f$, the convexity of $\iota_{\mathcal C}$, and ([\[ip.7\]](#ip.7){reference-type="ref" reference="ip.7"}) it follows that $$\begin{aligned} \label{ip.8} \sigma_f \|y^{k+1} - y^\dag\|^2 & \le f(y^\dag) - f(y^{k+1}) - \langle\mu^{k+1}, y^\dag - y^{k+1}\rangle\nonumber \\ & \quad \, + \langle\nu^{k+1}, x^{k+1} - x^\dag\rangle, \end{aligned}$$ where $\sigma_f$ denotes the modulus of convexity of $f$; we have $\sigma_f>0$ as $f$ is strongly convex. By taking the inner product of ([\[ip.3\]](#ip.3){reference-type="ref" reference="ip.3"}) with $z^{k+1} - x^\dag$ we have $$\begin{aligned} 0 & = \langle\lambda^k + \rho_1 (A z^{k+1} - b^\delta), A (z^{k+1} - x^\dag)\rangle\\ & \quad \, + \langle\mu^k + \rho_2(L z^{k+1} - y^k), L(z^{k+1} - x^\dag)\rangle\nonumber \\ & \quad \, + \langle\nu^k + \rho_3 (z^{k+1} - x^k), z^{k+1} - x^\dag\rangle\\ & \quad \, + \langle Q(z^{k+1} - z^k), z^{k+1} - x^\dag\rangle.\end{aligned}$$ Therefore we may use the definition of $\lambda^{k+1}, \mu^{k+1}, \nu^{k+1}$ in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) and the fact $A x^\dag = b$ to further obtain $$\begin{aligned} \label{ip.9} 0 & = \langle\lambda^{k+1}, A z^{k+1} - b\rangle+ \langle\mu^{k+1} + \rho_2 \Delta y^{k+1}, L z^{k+1} - y^\dag\rangle\nonumber \\ & \quad \, + \langle\nu^{k+1} + \rho_3 \Delta x^{k+1}, z^{k+1} - x^\dag\rangle\nonumber \\ & \quad \, + \langle Q\Delta z^{k+1}, z^{k+1} - x^\dag\rangle.
\end{aligned}$$ Subtracting ([\[ip.9\]](#ip.9){reference-type="ref" reference="ip.9"}) from ([\[ip.8\]](#ip.8){reference-type="ref" reference="ip.8"}) gives $$\begin{aligned} \sigma_f \|y^{k+1} - y^\dag\|^2 & \le f(y^\dag) - f(y^{k+1}) - \langle\lambda^{k+1}, A z^{k+1} - b\rangle+ \langle\mu^{k+1}, y^{k+1} - L z^{k+1}\rangle\\ & \quad \, - \rho_2 \langle\Delta y^{k+1}, L z^{k+1} - y^\dag\rangle+ \langle\nu^{k+1}, x^{k+1} - z^{k+1}\rangle\\ & \quad \, -\rho_3 \langle\Delta x^{k+1}, z^{k+1} - x^\dag\rangle- \langle Q\Delta z^{k+1}, z^{k+1} - x^\dag\rangle.\end{aligned}$$ Note that under the source condition ([\[ip.2\]](#ip.2){reference-type="ref" reference="ip.2"}), there exist $\mu^\dag$, $\nu^\dag$ and $\lambda^\dag$ such that $$\begin{aligned} \label{ip.9.5} \mu^\dag \in \partial f(y^\dag), \quad \nu^\dag \in \partial\iota_\mathcal C(x^\dag) \quad \mbox{ and } \quad L^* \mu^\dag + \nu^\dag + A^* \lambda^\dag = 0. \end{aligned}$$ Thus, it follows from the above equation and the last two equations in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) that $$\begin{aligned} & \sigma_f \|y^{k+1} - y^\dag\|^2 \\ & \le f(y^\dag) - f(y^{k+1}) - \langle\lambda^\dag, A z^{k+1} - b\rangle - \langle\mu^\dag, L z^{k+1} - y^{k+1}\rangle- \langle\nu^\dag, z^{k+1} - x^{k+1}\rangle\\ & \quad \ - \langle\lambda^{k+1} - \lambda^\dag, A z^{k+1} - b^\delta+ b^\delta- b\rangle\\ & \quad \ -\frac{1}{\rho_2} \langle\mu^{k+1} - \mu^\dag, \Delta\mu^{k+1} \rangle - \rho_2 \langle\Delta y^{k+1}, L z^{k+1} - y^\dag\rangle\\ & \quad \ - \frac{1}{\rho_3} \langle\nu^{k+1} - \nu^\dag, \Delta\nu^{k+1}\rangle - \rho_3 \langle\Delta x^{k+1}, z^{k+1} - x^\dag\rangle\\ & \quad \, - \langle Q\Delta z^{k+1}, z^{k+1} - x^\dag\rangle.
\end{aligned}$$ By using ([\[ip.9.5\]](#ip.9.5){reference-type="ref" reference="ip.9.5"}), $b = A x^\dag$ and the convexity of $f$, we can see that $$\begin{aligned} & f(y^\dag) - f(y^{k+1}) - \langle\lambda^\dag, A z^{k+1} - b\rangle - \langle\mu^\dag, L z^{k+1} - y^{k+1}\rangle- \langle\nu^\dag, z^{k+1} - x^{k+1}\rangle\\ & = f(y^\dag) - f(y^{k+1}) + \langle\lambda^\dag, b\rangle+ \langle\mu^\dag, y^{k+1}\rangle + \langle\nu^\dag, x^{k+1}\rangle\\ & = f(y^\dag) - f(y^{k+1}) + \langle A^* \lambda^\dag, x^\dag\rangle+ \langle\mu^\dag, y^{k+1}\rangle + \langle\nu^\dag, x^{k+1}\rangle\\ & = f(y^\dag) - f(y^{k+1}) - \langle L^* \mu^\dag, x^\dag\rangle+ \langle\mu^\dag, y^{k+1}\rangle + \langle\nu^\dag, x^{k+1} - x^\dag \rangle\\ & = f(y^\dag) - f(y^{k+1}) + \langle\mu^\dag, y^{k+1} - y^\dag\rangle + \langle\nu^\dag, x^{k+1} - x^\dag \rangle\le 0.\end{aligned}$$ Consequently, by using the fourth equation in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}), we have $$\begin{aligned} \sigma_f \|y^{k+1} - y^\dag\|^2 & \le - \langle\lambda^{k+1} - \lambda^\dag, b^\delta- b\rangle - \frac{1}{\rho_1} \langle\lambda^{k+1} - \lambda^\dag, \Delta \lambda^{k+1}\rangle\\ & \quad \ - \frac{1}{\rho_2} \langle\mu^{k+1} - \mu^\dag, \Delta\mu^{k+1}\rangle - \frac{1}{\rho_3} \langle\nu^{k+1} - \nu^\dag, \Delta\nu^{k+1}\rangle\\ & \quad \ - \rho_2 \langle\Delta y^{k+1}, y^{k+1}-y^\dag + L z^{k+1} - y^{k+1}\rangle\\ & \quad \ - \rho_3 \langle\Delta x^{k+1}, x^{k+1} - x^\dag + z^{k+1} - x^{k+1}\rangle\\ & \quad \ - \langle Q\Delta z^{k+1}, z^{k+1} - x^\dag\rangle. 
\end{aligned}$$ By using the polarization identity and the last two equations in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) we further have $$\begin{aligned} \sigma_f \|y^{k+1} - y^\dag\|^2 & \le - \langle\lambda^{k+1} - \lambda^\dag, b^\delta- b\rangle\\ & \quad \ + \frac{1}{2\rho_1} \left(\|\lambda^k -\lambda^\dag\|^2 - \|\lambda^{k+1} - \lambda^\dag\|^2 - \|\Delta \lambda^{k+1}\|^2\right) \\ & \quad \ + \frac{1}{2\rho_2} \left(\|\mu^k - \mu^\dag\|^2 - \|\mu^{k+1} - \mu^\dag\|^2 - \|\Delta\mu^{k+1}\|^2\right) \\ & \quad \ + \frac{1}{2\rho_3} \left(\|\nu^k - \nu^\dag\|^2 - \|\nu^{k+1} - \nu^\dag\|^2 - \|\Delta \nu^{k+1}\|^2 \right) \\ & \quad \ + \frac{1}{2} \left(\|z^k - x^\dag\|_Q^2 - \|z^{k+1} - x^\dag\|_Q^2 - \|\Delta z^{k+1}\|_Q^2\right) \\ & \quad \ + \frac{\rho_2}{2} \left(\|y^k - y^\dag\|^2 - \|y^{k+1} - y^\dag\|^2 - \|\Delta y^{k+1}\|^2\right) \\ & \quad \ + \frac{\rho_3}{2} \left(\|x^k - x^\dag\|^2 - \|x^{k+1} - x^\dag\|^2 - \|\Delta x^{k+1}\|^2\right) \\ & \quad \ - \langle\Delta y^{k+1}, \Delta \mu^{k+1} \rangle - \langle\Delta x^{k+1}, \Delta \nu^{k+1} \rangle.\end{aligned}$$ Let $$\begin{aligned} \Phi_k & := \frac{1}{2\rho_1} \|\lambda^k -\lambda^\dag\|^2 + \frac{1}{2\rho_2} \|\mu^k - \mu^\dag\|^2 + \frac{1}{2\rho_3} \|\nu^k - \nu^\dag\|^2 \\ & \quad \ + \frac{1}{2} \|z^k - x^\dag\|_Q^2 + \frac{\rho_2}{2} \|y^k - y^\dag\|^2 + \frac{\rho_3}{2} \|x^k - x^\dag\|^2. \end{aligned}$$ Then $$\begin{aligned} \label{ip.10} \sigma_f \|y^{k+1} - y^\dag\|^2 & \le \Phi_k - \Phi_{k+1} - \langle\lambda^{k+1} - \lambda^\dag, b^\delta- b\rangle- E_{k+1} \nonumber \\ & \quad \ - \langle\Delta y^{k+1}, \Delta \mu^{k+1}\rangle - \langle\Delta x^{k+1}, \Delta \nu^{k+1}\rangle,\end{aligned}$$ where $E_k$ is defined by ([\[eq:25\]](#eq:25){reference-type="ref" reference="eq:25"}). **Lemma 14**.
*For all $k = 0, 1, \cdots$ there hold $$\begin{aligned} \label{ip.11} \sigma_f \|y^{k+1} - y^\dag\|^2 \le \Phi_k - \Phi_{k+1} - \langle\lambda^{k+1} - \lambda^\dag, b^\delta- b\rangle- E_{k+1},\end{aligned}$$ $$\begin{aligned} \label{ip.12} E_{k+1} \le \Phi_k - \Phi_{k+1} + \sqrt{2\rho_1 \Phi_{k+1}} \delta\end{aligned}$$ and $$\begin{aligned} \label{ip.13} \Phi_{k+1} \le \Phi_0 + \left(\sum_{j=1}^{k+1} \sqrt{2\rho_1 \Phi_j}\right) \delta. \end{aligned}$$* *Proof.* By using ([\[ip.7\]](#ip.7){reference-type="ref" reference="ip.7"}) and the monotonicity of the subdifferentials $\partial f$ and $\partial\iota_{\mathcal C}$ we have $$0 \le \sigma_f \|\Delta y^{k+1}\|^2 \le \langle\Delta \mu^{k+1}, \Delta y^{k+1}\rangle + \langle\Delta \nu^{k+1}, \Delta x^{k+1}\rangle.$$ This together with ([\[ip.10\]](#ip.10){reference-type="ref" reference="ip.10"}) implies ([\[ip.11\]](#ip.11){reference-type="ref" reference="ip.11"}). From ([\[ip.11\]](#ip.11){reference-type="ref" reference="ip.11"}) it follows immediately that $$\begin{aligned} E_{k+1} &\le \Phi_k - \Phi_{k+1} - \langle\lambda^{k+1} - \lambda^\dag, b^\delta- b\rangle\nonumber\\ & \le \Phi_k - \Phi_{k+1} + \|\lambda^{k+1} - \lambda^\dag\| \delta\nonumber \\ & \le \Phi_k - \Phi_{k+1} + \sqrt{2\rho_1 \Phi_{k+1}} \delta\end{aligned}$$ which shows ([\[ip.12\]](#ip.12){reference-type="ref" reference="ip.12"}). By the non-negativity of $E_{k+1}$ we then obtain from ([\[ip.12\]](#ip.12){reference-type="ref" reference="ip.12"}) that $$\Phi_{k+1} \le \Phi_k + \sqrt{2 \rho_1 \Phi_{k+1}} \delta, \quad \forall k \ge 0$$ which clearly implies ([\[ip.13\]](#ip.13){reference-type="ref" reference="ip.13"}). ◻ In order to derive the estimate on $\Phi_k$ from ([\[ip.13\]](#ip.13){reference-type="ref" reference="ip.13"}), we need the following elementary result. **Lemma 15**. 
*Let $\{a_k\}$ and $\{b_k\}$ be two sequences of nonnegative numbers such that $$a_k^2 \le b_k^2 + c \sum_{j=1}^{k} a_j, \quad k=0, 1, \cdots,$$ where $c \ge 0$ is a constant. If $\{b_k\}$ is non-decreasing, then $$a_k \le b_k + c k, \quad k=0, 1, \cdots.$$* *Proof.* We show the result by induction on $k$. The result is trivial for $k =0$ since the given condition with $k=0$ gives $a_0 \le b_0$. Assume that the result is valid for all $0\le k \le l$ for some $l\ge 0$. We show it is also true for $k = l+1$. If $a_{l+1} \le \max\{a_0, \cdots, a_l\}$, then $a_{l+1} \le a_j$ for some $0\le j\le l$. Thus, by the induction hypothesis and the monotonicity of $\{b_k\}$ we have $$a_{l+1} \le a_j \le b_j + c j \le b_{l+1} + c (l+1).$$ If $a_{l+1} > \max\{a_0, \cdots, a_l\}$, then $$\begin{aligned} a_{l+1}^2 \le b_{l+1}^2 + c \sum_{j=1}^{l+1} a_j \le b_{l+1}^2 + c(l+1) a_{l+1}\end{aligned}$$ which implies that $$\begin{aligned} \left(a_{l+1} - \frac{1}{2} c (l+1)\right)^2 & = a_{l+1}^2 - c (l+1) a_{l+1} + \frac{1}{4} c^2 (l+1)^2\\ & \le b_{l+1}^2 + \frac{1}{4} c^2 (l+1)^2 \\ & \le \left(b_{l+1} + \frac{1}{2} c (l+1)\right)^2.\end{aligned}$$ Taking square roots shows $a_{l+1} \le b_{l+1} + c (l+1)$ again. ◻ **Lemma 16**. *There hold $$\begin{aligned} \label{ip.14} \Phi_k^{1/2} \le \Phi_0^{1/2} + \sqrt{2 \rho_1} k \delta, \quad \forall k \ge 0 \end{aligned}$$ and $$\begin{aligned} \label{ip.15} E_k \le \frac{2\Phi_0}{k} + \frac{5}{2} \rho_1 k \delta^2, \quad \forall k \ge 1.\end{aligned}$$* *Proof.* Based on ([\[ip.13\]](#ip.13){reference-type="ref" reference="ip.13"}), we may use Lemma [Lemma 15](#lem4){reference-type="ref" reference="lem4"} with $a_k = \Phi_k^{1/2}$, $b_k = \Phi_0^{1/2}$ and $c = (2\rho_1)^{1/2} \delta$ to obtain ([\[ip.14\]](#ip.14){reference-type="ref" reference="ip.14"}) directly.
Next, by using the monotonicity of $\{E_k\}$, ([\[ip.12\]](#ip.12){reference-type="ref" reference="ip.12"}) and ([\[ip.14\]](#ip.14){reference-type="ref" reference="ip.14"}) we have $$\begin{aligned} k E_k &\le \sum_{j=1}^k E_j \le \sum_{j=1}^k \left(\Phi_{j-1} - \Phi_j + \sqrt{2\rho_1 \Phi_j}\delta\right) \\ & \le \Phi_0 - \Phi_k + \sum_{j=1}^k \sqrt{2\rho_1 \Phi_j} \delta\\ & \le \Phi_0 + \sum_{j=1}^k \sqrt{2\rho_1} \left(\sqrt{\Phi_0} + \sqrt{2\rho_1} j \delta\right) \delta\\ & = \Phi_0 + \sqrt{2\rho_1 \Phi_0} k \delta+ \rho_1 k(k+1) \delta^2 \\ & \le 2 \Phi_0 + \frac{5}{2} \rho_1 k^2 \delta^2\end{aligned}$$ which shows ([\[ip.15\]](#ip.15){reference-type="ref" reference="ip.15"}). ◻ Now we are ready to complete the proof of Theorem [Theorem 13](#thm:ip.admm){reference-type="ref" reference="thm:ip.admm"}. *Proof of Theorem [Theorem 13](#thm:ip.admm){reference-type="ref" reference="thm:ip.admm"}.* Let $k_\delta$ be an integer such that $k_\delta\sim \delta^{-1}$. From ([\[ip.14\]](#ip.14){reference-type="ref" reference="ip.14"}) and ([\[ip.15\]](#ip.15){reference-type="ref" reference="ip.15"}) in Lemma [Lemma 16](#lem11){reference-type="ref" reference="lem11"} it follows that $$\begin{aligned} \label{ip.16} E_{k_\delta} \le C_0 \delta\quad \mbox{and} \quad \Phi_k \le C_1 \mbox{ for all } k\le k_\delta,\end{aligned}$$ where $C_0$ and $C_1$ are constants independent of $k$ and $\delta$. In order to use ([\[ip.11\]](#ip.11){reference-type="ref" reference="ip.11"}) in Lemma [Lemma 14](#lem12){reference-type="ref" reference="lem12"} to estimate $\|y^{k_\delta} - y^\dag\|$, we first consider $\Phi_k - \Phi_{k+1}$ for all $k\ge 0$. 
By using the definition of $\Phi_k$ and the inequality $\|u\|^2 - \|v\|^2 \le (\|u\| + \|v\|) \|u - v\|$, we have for $k \ge 0$ that $$\begin{aligned} \Phi_{k} - \Phi_{k+1} & \le \frac{1}{2\rho_1} \left(\|\lambda^{k} - \lambda^\dag\| + \|\lambda^{k+1} - \lambda^\dag\|\right) \|\Delta \lambda^{k+1}\| \\ & \quad \ + \frac{1}{2\rho_2} \left(\|\mu^{k} - \mu^\dag\| + \|\mu^{k+1} - \mu^\dag\|\right)\|\Delta \mu^{k+1}\| \\ & \quad \ + \frac{1}{2\rho_3} \left(\|\nu^{k} - \nu^\dag\| + \|\nu^{k+1} - \nu^\dag\|\right) \|\Delta \nu^{k+1}\| \\ & \quad \ + \frac{1}{2}\left( \|z^k - x^\dag\|_Q + \|z^{k+1}-x^\dag\|_Q\right)\|\Delta z^{k+1}\|_Q \\ & \quad \ + \frac{\rho_2}{2} \left(\|y^k - y^\dag\| + \|y^{k+1} - y^\dag\|\right) \|\Delta y^{k+1}\| \\ & \quad \ + \frac{\rho_3}{2} \left(\|x^k - x^\dag\| + \|x^{k+1} - x^\dag\|\right) \|\Delta x^{k+1}\|.\end{aligned}$$ By virtue of the Cauchy-Schwarz inequality and the inequality $(a+b)^2 \le 2(a^2 + b^2)$ for any numbers $a, b \in {\mathbb R}$ we can further obtain $$\begin{aligned} \Phi_k - \Phi_{k+1} \le \sqrt{2(\Phi_k + \Phi_{k+1}) E_{k+1}}, \quad \forall k\ge 0. \end{aligned}$$ This together with ([\[ip.16\]](#ip.16){reference-type="ref" reference="ip.16"}) in particular implies $$\Phi_{k_\delta-1} - \Phi_{k_\delta} \le \sqrt{4 C_0 C_1 \delta}.$$ Therefore, it follows from ([\[ip.11\]](#ip.11){reference-type="ref" reference="ip.11"}) that $$\begin{aligned} \sigma_f \|y^{k_\delta}-y^\dag\|^2 & \le \Phi_{k_\delta-1} - \Phi_{k_\delta} + \|\lambda^{k_\delta}-\lambda^\dag\| \delta\\ & \le \sqrt{4 C_0C_1 \delta} + \sqrt{2\rho_1 \Phi_{k_\delta}} \delta\\ & \le \sqrt{4 C_0 C_1 \delta} + \sqrt{2\rho_1 C_1} \delta.\end{aligned}$$ Thus $$\begin{aligned} \|y^{k_\delta} - y^\dag\|^2 \le C_2 \delta^{1/2},\end{aligned}$$ where $C_2$ is a constant independent of $\delta$ and $k$. 
By using the estimate $E_{k_\delta} \le C_0 \delta$ in ([\[ip.16\]](#ip.16){reference-type="ref" reference="ip.16"}), the definition of $E_{k_\delta}$, and the last three equations in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}), we can see that $$\begin{aligned} & \|A z^{k_\delta}- b^\delta\|^2 = \frac{1}{\rho_1^2} \|\Delta \lambda^{k_\delta}\|^2 \le \frac{2}{\rho_1} E_{k_\delta} \le \frac{2 C_0}{\rho_1} \delta,\\ & \|L z^{k_\delta} - y^{k_\delta}\|^2 = \frac{1}{\rho_2^2} \|\Delta \mu^{k_\delta}\|^2 \le \frac{2}{\rho_2} E_{k_\delta} \le \frac{2C_0}{\rho_2} \delta,\\ & \|z^{k_\delta} - x^{k_\delta}\|^2 = \frac{1}{\rho_3^2} \|\Delta \nu^{k_\delta}\|^2 \le \frac{2}{\rho_3} E_{k_\delta} \le \frac{2 C_0}{\rho_3} \delta. \end{aligned}$$ Therefore $$\begin{aligned} &\|L(z^{k_\delta}-x^\dag)\|^2 \le 2\left(\|L z^{k_\delta} - y^{k_\delta}\|^2 + \|y^{k_\delta} - y^\dag\|^2\right) \le \frac{4 C_0}{\rho_2} \delta+ 2 C_2 \delta^{1/2}, \\ & \|A(z^{k_\delta}- x^\dag)\|^2 \le 2\left(\|A z^{k_\delta} - b^\delta\|^2 + \|b^\delta- b\|^2\right) \le 2 \left(\frac{2 C_0}{\rho_1} + 1\right) \delta.\end{aligned}$$ By virtue of (iii) in Assumption [Assumption 3](#ass:ADMM1){reference-type="ref" reference="ass:ADMM1"} on $A$ and $L$ we thus obtain $$\begin{aligned} c_0 \|z^{k_\delta}-x^\dag\|^2 & \le \|A(z^{k_\delta} - x^\dag)\|^2 + \|L(z^{k_\delta}- x^\dag)\|^2 \\ & \le 2 \left(\frac{2 C_0}{\rho_1} + \frac{2 C_0}{\rho_2} + 1\right) \delta+ 2 C_2 \delta^{1/2}. \end{aligned}$$ This means there is a constant $C_3$ independent of $\delta$ and $k$ such that $$\begin{aligned} \|z^{k_\delta} - x^\dag\|^2 \le C_3 \delta^{1/2}. \end{aligned}$$ Finally we obtain $$\|x^{k_\delta} - x^\dag\|^2 \le 2\left(\|x^{k_\delta}- z^{k_\delta}\|^2 + \|z^{k_\delta} - x^\dag\|^2\right) \le \frac{4 C_0}{\rho_3} \delta+ 2 C_3 \delta^{1/2}.$$ The proof is thus complete. ◻ *Remark 9*.
Under the source condition ([\[ip.2\]](#ip.2){reference-type="ref" reference="ip.2"}), we have obtained in Theorem [Theorem 13](#thm:ip.admm){reference-type="ref" reference="thm:ip.admm"} the convergence rate $O(\delta^{1/4})$ for the proximal ADMM ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}). This rate is not order optimal. It is not yet clear if the order optimal rate $O(\delta^{1/2})$ can be achieved. *Remark 10*. When using the proximal ADMM to solve ([\[ip1.2\]](#ip1.2){reference-type="ref" reference="ip1.2"}) with $L = I$, i.e. $$\begin{aligned} \label{ip} \min\left\{f(x): A x = b \mbox{ and } x \in \mathcal C\right\}, \end{aligned}$$ it is not necessary to introduce the $y$-variable as is done in ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) and thus ([\[PADMM2\]](#PADMM2){reference-type="ref" reference="PADMM2"}) can be simplified to the scheme $$\begin{aligned} \label{PADMM2.5} \begin{split} & z^{k+1} = \arg\min_{z\in \mathcal X} \left\{{\mathscr L}_{\rho_1, \rho_2}(z, x^k, \lambda^k, \nu^k) + \frac{1}{2} \|z-z^k\|_Q^2\right\},\\ & x^{k+1} = \arg\min_{x\in \mathcal X} \left\{{\mathscr L}_{\rho_1, \rho_2}(z^{k+1}, x, \lambda^k, \nu^k)\right\}, \\ & \lambda^{k+1} = \lambda^k + \rho_1 (A z^{k+1} - b^\delta), \\ & \nu^{k+1} = \nu^k + \rho_2 (z^{k+1} - x^{k+1}), \end{split}\end{aligned}$$ where $${\mathscr L}_{\rho_1, \rho_2}(z, x, \lambda, \nu) := f(z) + \iota_\mathcal C(x) + \langle\lambda, A z- b^\delta\rangle+ \langle\nu, z-x\rangle + \frac{\rho_1}{2}\|A z- b^\delta\|^2 + \frac{\rho_2}{2} \|z-x\|^2.$$ The source condition ([\[ip.2\]](#ip.2){reference-type="ref" reference="ip.2"}) reduces to the form $$\begin{aligned} \label{sc} \exists \mu^\dag \in \partial f(x^\dag) \mbox{ and } \nu^\dag \in \partial\iota_\mathcal C(x^\dag) \mbox{ such that } \mu^\dag + \nu^\dag \in \mbox{Ran}(A^*).\end{aligned}$$ If the unique solution $x^\dag$ of ([\[ip\]](#ip){reference-type="ref" reference="ip"}) satisfies the source
condition ([\[sc\]](#sc){reference-type="ref" reference="sc"}), one may follow the proof of Theorem [Theorem 13](#thm:ip.admm){reference-type="ref" reference="thm:ip.admm"} with minor modifications to deduce for the method ([\[PADMM2.5\]](#PADMM2.5){reference-type="ref" reference="PADMM2.5"}) that $$\|x^{k_\delta} - x^\dag\| = O(\delta^{1/4}) \quad \mbox{ and } \quad \|z^{k_\delta} - x^\dag\| = O(\delta^{1/4})$$ whenever the integer $k_\delta$ is chosen such that $k_\delta\sim \delta^{-1}$. We conclude this section by presenting a numerical result to illustrate the semi-convergence property of the proximal ADMM and the convergence rate. We consider finding a solution of ([\[ip1.1\]](#ip1.1){reference-type="ref" reference="ip1.1"}) with minimal norm. This is equivalent to solving ([\[ip\]](#ip){reference-type="ref" reference="ip"}) with $f(x) = \frac{1}{2} \|x\|^2$. Given noisy data $b^\delta$ satisfying $\|b^\delta- b\| \le \delta$, the corresponding proximal ADMM ([\[PADMM2.5\]](#PADMM2.5){reference-type="ref" reference="PADMM2.5"}) takes the form $$\begin{aligned} \label{PADMM2.6} \begin{split} z^{k+1} &= \left((1 + \rho_2) I + Q + \rho_1 A^*A\right)^{-1} \left(\rho_1 A^* b^\delta+ \rho_2 x^k + Q z^k - A^* \lambda^k - \nu^k\right), \\ x^{k+1} &= P_\mathcal C\left(z^{k+1} + \nu^k/\rho_2\right), \\ \lambda^{k+1} & = \lambda^k + \rho_1(A z^{k+1} - b^\delta), \\ \nu^{k+1} & = \nu^k + \rho_2(z^{k+1} - x^{k+1}), \end{split}\end{aligned}$$ where $P_\mathcal C$ denotes the orthogonal projection of $\mathcal X$ onto $\mathcal C$. The source condition ([\[sc\]](#sc){reference-type="ref" reference="sc"}) now takes the form $$\begin{aligned} \label{sc.2} \exists \nu^\dag \in \partial\iota_\mathcal C(x^\dag) \mbox{ such that } x^\dag + \nu^\dag \in \mbox{Ran}(A^*)\end{aligned}$$ which is equivalent to the projected source condition $x^\dag \in P_\mathcal C(\mbox{Ran}(A^*))$. **Example 1**.
In our numerical simulation we consider the first kind integral equation $$\begin{aligned} \label{IE} (A x)(t) := \int_0^1 \kappa(s, t) x(s) ds = b(t), \quad t \in [0,1]\end{aligned}$$ on $L^2[0,1]$, where the kernel $\kappa$ is continuous on $[0,1]\times [0,1]$. Such equations arise naturally in many linear ill-posed inverse problems, see [@EHN1996; @G1984]. Clearly $A$ is a compact linear operator from $L^2[0,1]$ to itself. We will use $$\kappa(s, t) = d \left(d^2 + (s-t)^2\right)^{-3/2}$$ with $d = 0.1$. The corresponding equation is a 1-D model problem in gravity surveying. Assume the equation ([\[IE\]](#IE){reference-type="ref" reference="IE"}) has a nonnegative solution. We will employ the method ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) to determine the unique nonnegative solution of ([\[IE\]](#IE){reference-type="ref" reference="IE"}) with minimal norm in case the data is corrupted by noise. Here $\mathcal C:=\{x \in L^2[0,1]: x\ge 0 \mbox{ a.e.}\}$ and thus $P_\mathcal C(x) = \max\{x, 0\}$. 
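Concretely, the discretized operator and the projection $P_\mathcal C$ can be assembled in a few lines. The following Python sketch is ours, not the author's code; the function names and the uniform trapezoidal grid are illustrative assumptions consistent with the discretization described in this section.

```python
import numpy as np

def gravity_matrix(n=600, d=0.1):
    """Trapezoidal-rule discretization of (Ax)(t) = int_0^1 kappa(s,t) x(s) ds
    with kappa(s,t) = d*(d^2 + (s-t)^2)**(-3/2) on a uniform grid of n points."""
    s = np.linspace(0.0, 1.0, n)
    h = s[1] - s[0]
    # K[i, j] = kappa(s_j, t_i); the kernel depends only on |s - t|
    K = d * (d**2 + (s[:, None] - s[None, :])**2) ** (-1.5)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0        # trapezoidal quadrature weights
    return K * w[None, :], s

def proj_C(x):
    """Orthogonal projection onto C = {x >= 0}: P_C(x) = max(x, 0)."""
    return np.maximum(x, 0.0)
```

Here `A[i, j]` approximates $\kappa(s_j, t_i)\,w_j$, so that `A @ x` is the quadrature approximation of $(Ax)(t_i)$.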
![(a) plots the true solution $x^\dag$; (b) and (c) plot the relative errors versus the number of iterations for the method ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) using noisy data with noise levels $\delta= 10^{-2}$ and $10^{-4}$, respectively](True_solution.png "fig:"){#fig1 width="31%"} ![(a) plots the true solution $x^\dag$; (b) and (c) plot the relative errors versus the number of iterations for the method ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) using noisy data with noise levels $\delta= 10^{-2}$ and $10^{-4}$, respectively](semi_convergence_001.png "fig:"){#fig1 width="31%"} ![(a) plots the true solution $x^\dag$; (b) and (c) plot the relative errors versus the number of iterations for the method ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) using noisy data with noise levels $\delta= 10^{-2}$ and $10^{-4}$, respectively](semi_convergence_00001.png "fig:"){#fig1 width="31%"} In order to investigate the convergence rate of the method, we generate our data as follows. First take $\omega^\dag \in L^2[0,1]$, set $x^\dag := \max\{A^* \omega^\dag, 0\}$ and define $b := A x^\dag$. Thus $x^\dag$ is a nonnegative solution of $A x = b$ satisfying $x^\dag = P_\mathcal C(A^* \omega^\dag)$, i.e. the source condition ([\[sc.2\]](#sc.2){reference-type="ref" reference="sc.2"}) holds. We use $\omega^\dag = t^3(0.9-t)(t-0.35)$; the corresponding $x^\dag$ is plotted in Figure [3](#fig1){reference-type="ref" reference="fig1"} (a). We then pick a random element $\xi$ with $\|\xi\|_{L^2[0,1]} = 1$ and generate the noisy data $b^\delta$ by $b^\delta:= b + \delta\xi$. Clearly $\|b^\delta- b\|_{L^2[0,1]} = \delta$.
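The data-generation procedure just described can be sketched as follows. `make_data` is a hypothetical helper, not the author's code: the fixed random seed is our choice for reproducibility, and the discrete Euclidean norm stands in for the $L^2[0,1]$ norm.

```python
import numpy as np

def make_data(A, s, delta, seed=0):
    """Build x_dag = P_C(A^T omega_dag) with omega_dag(t) = t^3 (0.9-t)(t-0.35),
    the exact data b = A x_dag, and noisy data b_delta = b + delta*xi, ||xi|| = 1."""
    rng = np.random.default_rng(seed)
    omega = s**3 * (0.9 - s) * (s - 0.35)
    x_dag = np.maximum(A.T @ omega, 0.0)   # projected source condition holds
    b = A @ x_dag
    xi = rng.standard_normal(b.size)
    xi /= np.linalg.norm(xi)               # normalize so that ||b_delta - b|| = delta
    return x_dag, b, b + delta * xi
```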
| $\delta$ | $\texttt{err}_{\min}$ | $\texttt{iter}_{\min}$ | $\texttt{err}_{\min}/\delta^{1/2}$ | $\texttt{err}_{\min}/\delta^{1/4}$ |
|----------|-----------------------|------------------------|------------------------------------|------------------------------------|
| 1e-1 | 4.9307e-2 | 1 | 0.155922 | 0.087681 |
| 1e-2 | 1.3255e-2 | 2 | 0.132553 | 0.041917 |
| 1e-3 | 5.2985e-3 | 19 | 0.167552 | 0.029796 |
| 1e-4 | 2.1196e-3 | 501 | 0.211957 | 0.021196 |
| 1e-5 | 7.2638e-4 | 2512 | 0.229702 | 0.012917 |
| 1e-6 | 2.7450e-4 | 31447 | 0.274496 | 0.008680 |
| 1e-7 | 7.4693e-5 | 329542 | 0.236199 | 0.004200 |

: Numerical results for the method ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) using noisy data with diverse noise levels, where $\texttt{err}_{\min}$ and $\texttt{iter}_{\min}$ denote respectively the smallest relative error and the required number of iterations.

For numerical implementation, we discretize the equation by the trapezoidal rule based on partitioning $[0, 1]$ into $N-1$ subintervals of equal length with $N = 600$. We then execute the method ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) with $Q =0$, $\rho_1 = 10$, $\rho_2=1$ and the initial guess $x^0 = \lambda^0 = \nu^0 = 0$ using the noisy data $b^\delta$ for several distinct values of $\delta$. In Figure [3](#fig1){reference-type="ref" reference="fig1"} (b) and (c) we plot the relative error $\|x^k- x^\dag\|_{L^2}/\|x^\dag\|_{L^2}$ versus $k$, the number of iterations, for $\delta= 10^{-2}$ and $\delta= 10^{-4}$ respectively. These plots demonstrate that the proximal ADMM always exhibits the semi-convergence phenomenon when used to solve ill-posed problems, no matter how small the noise level is. Therefore, properly terminating the iteration is important to produce useful approximate solutions. This has been done in [@JJLW2016; @JJLW2017]. In Table [1](#table1){reference-type="ref" reference="table1"} we report further numerical results.
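For illustration, the iteration ([\[PADMM2.6\]](#PADMM2.6){reference-type="ref" reference="PADMM2.6"}) with $Q = 0$, $f(x) = \frac{1}{2}\|x\|^2$ and $\mathcal C = \{x \ge 0\}$ can be sketched as below. This is our minimal Python rendering, not the author's implementation; the $z$-update is a linear solve, the $x$-update a projection, and the multiplier updates are explicit.

```python
import numpy as np

def padmm(A, b_delta, rho1=10.0, rho2=1.0, iters=500):
    """Proximal ADMM (PADMM2.6) with Q = 0 for f(x) = ||x||^2/2 and C = {x >= 0}."""
    m, n = A.shape
    M = (1.0 + rho2) * np.eye(n) + rho1 * (A.T @ A)   # (1+rho2) I + rho1 A^* A
    x = np.zeros(n)
    nu = np.zeros(n)
    lam = np.zeros(m)
    for _ in range(iters):
        z = np.linalg.solve(M, rho1 * (A.T @ b_delta) + rho2 * x - A.T @ lam - nu)
        x = np.maximum(z + nu / rho2, 0.0)            # x = P_C(z + nu/rho2)
        lam += rho1 * (A @ z - b_delta)
        nu += rho2 * (z - x)
    return x
```

In practice the iteration must be stopped early because of semi-convergence; this sketch simply runs a fixed number of steps.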
For the noisy data $b^\delta$ with each noise level $\delta= 10^{-i}$, $i = 1, \cdots, 7$, we execute the method and determine the smallest relative error, denoted by $\texttt{err}_{\min}$, and the required number of iterations, denoted by $\texttt{iter}_{\min}$. The ratios $\texttt{err}_{\min}/\delta^{1/2}$ and $\texttt{err}_{\min}/\delta^{1/4}$ are then calculated. Since $x^\dag$ satisfies the source condition ([\[sc.2\]](#sc.2){reference-type="ref" reference="sc.2"}), our theoretical result predicts the convergence rate $O(\delta^{1/4})$. However, Table [1](#table1){reference-type="ref" reference="table1"} illustrates that the value of $\texttt{err}_{\min}/\delta^{1/2}$ does not change much while the value of $\texttt{err}_{\min}/\delta^{1/4}$ tends to decrease to 0 as $\delta\to 0$. This strongly suggests that the proximal ADMM admits the order optimal convergence rate $O(\delta^{1/2})$ if the source condition ([\[sc.2\]](#sc.2){reference-type="ref" reference="sc.2"}) holds. How to derive this order optimal rate, however, remains open.

# References

H. Attouch and J. Bolte, *On the convergence of the proximal algorithm for nonsmooth functions involving analytic features*, Math. Program., Ser. B, 116 (2009), 5--16. H. Attouch and M. Soueycatt, *Augmented Lagrangian and proximal alternating direction methods of multipliers in Hilbert spaces. Applications to games, PDE's and control*, Pac. J. Optim., 5 (2009), no. 1, 17--37. H. H. Bauschke and P. L. Combettes, *Convex Analysis and Monotone Operator Theory in Hilbert Spaces*, Springer, New York, 2011. A. Beck, *First-Order Methods in Optimization*, MOS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics, 2017. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, *Distributed optimization and statistical learning via the alternating direction method of multipliers*, Foundations and Trends in Machine Learning, 3(1): 1--122, 2011. K. Bredies and H.
Sun, *A proximal point analysis of the preconditioned alternating direction method of multipliers*, J. Optim. Theory Appl., 173 (2017), no. 3, 878--907. M. Burger and S. Osher, *Convergence rates of convex variational regularization*, Inverse Problems 20 (2004), no. 5, 1411--1421. D. Davis and W. Yin, *Convergence rate analysis of several splitting schemes*, Splitting methods in communication, imaging, science, and engineering, 115--163, Sci. Comput., Springer, Cham, 2016. W. Deng and W. Yin, *On the global and linear convergence of the generalized alternating direction method of multipliers*, J. Sci. Comput., 66 (2016), 889--916. J. Eckstein and D. P. Bertsekas, *On the douglas-rachford splitting method and the proximal point algorithm for maximal monotone operators*, Math. Program., 55(1-3): 293--318, 1992. I. Ekeland and R. Temam, *Convex Analysis and Variational Problems*, Amsterdam: North Holland, 1976. H. W. Engl, M. Hanke and A. Neubauer, *Regularization of Inverse Problems*, Dordrecht, Kluwer, 1996. K. Frick and O. Scherzer, *Regularization of ill-posed linear equations by the non-stationary augmented Lagrangian method*, J. Integral Equ. Appl., 22 (2010), no. 2, 217--257. D. Gabay, *Applications of the method of multipliers to variational inequalities*, Studies in Mathematics and its Applications, 15, 299--331, 1983. D. Gabay and B. Mercier, *A dual algorithm for the solution of nonlinear variational problems via finite element approximation*, Comput. Math. Appl., 2 (1976), 17--40. R. Glowinski and A. Marrocco, *Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problémes de Dirichlet non linéaires*, ESAIM: Math. Model. Numer. Anal., 9(R2):41--76, 1975. T. Goldstein, X. Bresson and S. Osher, *Geometric applications of the split Bregman method: segmentation and surface reconstruction*, J. Sci. Comput., 45 (2010), no. 1, 272--293. C. W. 
Groetsch, *The theory of Tikhonov regularization for Fredholm equations of the first kind*, Research Notes in Mathematics, 105. Pitman (Advanced Publishing Program), Boston, MA, 1984. B. He and X. Yuan, *On the $O(1/n)$ convergence rate of Douglas--Rachford alternating direction method*, SIAM J. Numer. Anal., 50 (2012), 700--709. B. He and X. Yuan, *On non-ergodic convergence rate of Douglas--Rachford alternating direction method of multipliers*, Numer. Math., 130 (2015), 567--577. Y. Jiao, Q. Jin, X. Lu and W. Wang, *Alternating direction method of multipliers for linear inverse problems*, SIAM J. Numer. Anal., 54 (2016), no. 4, 2114--2137. Y. Jiao, Q. Jin, X. Lu and W. Wang, *Preconditioned alternating direction method of multipliers for solving inverse problems with constraints*, Inverse Problems, 33 (2017), 025004 (34pp). Q. Jin, *Convergence rates of a dual gradient method for constrained linear ill-posed problems*, Numer. Math., 151 (2022), no. 4, 841--871. J. H. Lee and T. S. Pham, *Openness, Hölder metric regularity, and Hölder continuity properties of semialgebraic set-valued maps*, SIAM J. Optim., 32 (2022), no. 1, pp. 56--74. T. Lin, S. Ma and S. Zhang, *On the Sublinear Convergence Rate of Multi-Block ADMM*, J. Oper. Res. Soc. China, 3 (2015), no. 3, 251--274. P. L. Lions and B. Mercier, *Splitting algorithms for the sum of two nonlinear operators*, SIAM J. Numer. Anal., 16(6), 964--979, 1979. Y. Liu, X. Yuan, S. Zeng and J. Zhang, *Partial error bound conditions and the linear convergence rate of the alternating direction method of multipliers*, SIAM J. Numer. Anal., 56 (2018), no. 4, 2095--2123. F. Natterer, *The Mathematics of Computerized Tomography*, SIAM, Philadelphia, 2001. E. Resmerita and O. Scherzer, *Error estimates for non-quadratic regularization and the relation to enhancement*, Inverse Problems, 22 (2006), no. 3, 801--814. S. M. Robinson, *Some continuity properties of polyhedral multifunctions*, Math. Program. Study, 14 (1981), pp.
206--214. H. Sun, *Analysis of Fully Preconditioned Alternating Direction Method of Multipliers with Relaxation in Hilbert Spaces*, J. Optim. Theory Appl., 183 (2019), no. 1, 199--229. W. H. Yang and D. R. Han, *Linear convergence of alternating direction method of multipliers for a class of convex optimization problems*, SIAM J. Numer. Anal., 54 (2016), no. 2, 625--640. X. Zhang, M. Burger and S. Osher, *A unified primal--dual algorithm framework based on Bregman iteration*, J. Sci. Comput., 46 (2011), 20--46. X. Y. Zheng and K. F. Ng, *Metric subregularity of piecewise linear multifunctions and applications to piecewise linear multiobjective optimization*, SIAM J. Optim., 24 (2014), no. 1, 154--174.
arxiv_math
{ "id": "2310.06211", "title": "On convergence rates of proximal alternating direction method of\n multipliers", "authors": "Qinian Jin", "categories": "math.OC cs.NA math.NA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper, we construct an example of temperature patch solutions for the two-dimensional, incompressible Boussinesq system with kinematic viscosity such that both the curvature and perimeter grow to infinity over time. The presented example consists of two disjoint, simply connected patches. The rates of growth for both curvature and perimeter in this example are at least algebraic. author: - Jaemin Park bibliography: - references.bib title: Growth of curvature and perimeter of temperature patches in the 2D Boussinesq equations --- [^1] # Introduction and the main results In this paper, we investigate the long-time behavior of the two-dimensional incompressible Boussinesq equations in the absence of thermal diffusivity: $$\label{Boussinesq} \begin{aligned} \rho_t + u\cdot \nabla \rho & = 0,\\ u_t + u\cdot \nabla u &= -\nabla p - \rho e_2+ \nu\Delta u, \quad t>0,\ x\in \mathbb{R}^2,\\ \nabla \cdot u&=0,\\ (\rho(t,x),u(t,x))|_{t=0}&=(\rho_0(x),u_0(x)), \end{aligned}$$ where $e_2=(0,1)^T$ and $\nu>0$ is the kinematic viscosity coefficient. The system [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} describes the evolution of the temperature distribution $\rho$ of a viscous, heat-conducting fluid moving in an external gravitational force field, assuming that the Boussinesq approximation is valid [@blandford2008applications; @gill1982atmosphere; @majda2003introduction; @pedlosky1987geophysical]. The primary goal of this paper is to provide an example of a patch-type temperature distribution whose curvature and perimeter grow as time approaches infinity. ## Overview of long-time behavior in the Boussinesq equations Before presenting the precise statement of the main theorem, let us provide a brief review of the relevant literature. 
### Global well-posedness In the presence of kinematic viscosity as in [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"}, Hou--Li [@HouLi2005] and Chae [@Chae2006] obtained global-in-time regularity results in $(\rho,u)\in H^{m}\times H^{m-1}$ and $(\rho,u)\in H^m\times H^{m}$, respectively, with $m\ge 3$. In essence, when the initial data are sufficiently smooth and decay rapidly, these results ensure the existence of a global-in-time strong solution. For cases with rough initial data, Abidi--Hmidi [@MR2290277] and Hmidi--Keraani [@MR2305876] proved the existence of global weak solutions in $(\rho,u)\in B^0_{2,1}\times \left(L^2\cap B^{-1}_{\infty,1}\right)$ and $(\rho,u)\in L^2\times \left(L^2\cap H^s\right)$, respectively, where $s\in[0,2)$ and $B^s_{p,q}$ denotes the Besov space. ### Long-time behavior of classical solutions In the study of the long-time behavior of solutions, stability analysis not only provides qualitative information about the long-term behavior by itself, but also plays a crucial role in proving various solution features, as exploited in [@choi2021growth; @drivas2023twisting]. Denoting $$\rho_s^\alpha:=\alpha y,\quad u_s^\beta:=(\beta y, 0)^T,\quad \alpha,\beta\in \mathbb{R},$$ one can easily see that any pair $(\rho_s^\alpha, u_s^\beta)$ is a steady (time-independent) solution of the system [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"}. In [@MR4451473], the authors established stability under perturbations in a Gevrey class near the Couette flow ($(\rho_s^\alpha,u_s^1)$ with $\alpha\le 0$), when considering the Boussinesq system in the spatial domain $\mathbb{T}\times \mathbb{R}$. Near the hydrostatic equilibria $(\rho_s^\alpha,0)$, under perturbations in a Sobolev space, Doering--Wu--Zhao--Zheng [@MR3815212] established stability (when $\alpha<0$) and instability (when $\alpha>0$) for the Boussinesq system in a general Lipschitz domain. 
Tao--Wu--Zhao--Zheng [@tao2020stability] also conducted a stability analysis with relaxed assumptions on the initial data in a spatially periodic domain. Regarding the long-time behavior of solutions without smallness assumptions on the initial data, several quantitative results are available in the literature. Since the density $\rho$ is transported by an incompressible flow, one cannot expect any growth or decay of $\rVert \rho(t)\rVert_{L^p}$. However, the creation of small scales by the flow might induce the growth of finer norms of $\rho$ (or of the vorticity $\omega:=\nabla \times u$) over time. Indeed, considering [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} in a bounded domain, Ju [@Ju2017] showed $\rVert \rho \rVert_{H^1}\lesssim e^{ct^2}$, which was further improved to an exponential bound $e^{ct}$ in $\mathbb{T}^2$ by Kukavica--Wang [@KW2020]. Subsequently, Kukavica--Massatt--Ziane [@kukavica2021asymptotic] achieved a slightly better upper bound $\rVert \rho \rVert_{H^2}\lesssim C_\epsilon e^{\epsilon t}$ for any small $\epsilon>0$. In addition to these upper bounds on growth rates, an interesting lower bound was obtained by Brandolese--Schonbek [@BS2012], proving that in $\mathbb{R}^2$ the kinetic energy $\rVert u\rVert_{L^2}$ must grow faster than $c(1+t)^{1/4}$ as $t\to\infty$, provided that the initial density $\rho_0$ does not have a zero average. Recently, Kiselev--Park--Yao [@kiselev2022small] showed that for a large class of initial data, the Sobolev norms $\rVert \rho \rVert_{H^m}$, for $m\ge 1$, must grow at least at some algebraic rate in $\mathbb{T}^2$ and $\mathbb{R}^2$. Besides these quantitative analyses of norm growth, one might expect some asymptotic behavior due to the damping effect induced by viscosity. 
In this direction, Kukavica--Massatt--Ziane [@kukavica2021asymptotic] and Aydin--Kukavica--Ziane [@aydin2023asymptotic] showed that $\rVert \nabla u \rVert_{L^2}$ and $\rVert \nu\Delta u - \mathbb{P}(\rho e_2)\rVert_{L^2}$, where $\mathbb{P}$ denotes the Leray projection, converge to $0$ as $t\to \infty$ in a bounded domain. ### Temperature patch problem An interesting class of solutions to a transport equation, $$\begin{aligned} \label{transport1} \rho_t + u\cdot \nabla \rho = 0, \end{aligned}$$ consists of the so-called *patch solutions*. These are weak solutions composed of characteristic functions. For instance, if $\rho_0=1_D$ for some bounded domain $D$, the solution remains a characteristic function $\rho(t)=1_{D_t}$ for some time-dependent domain $D_t$, provided that the velocity field $u$ is suitably regular. More generally, in this paper we refer to a solution $\rho$ as a patch solution if it can be expressed as a linear combination of characteristic functions defined on some bounded domains. As transport phenomena are prevalent in fluid dynamics, the long-time behavior of patch solutions has been a subject of active study in various fluid models, particularly concerning the behavior of the patch boundary. In certain two-dimensional models, it has been observed that the curvature of the patch boundary may grow to infinity ([@kiselev2019global] for the 2D Euler equations), that the perimeter may also grow to infinity ([@choi2021growth; @elgindi2020singular; @drivas2023twisting] for the 2D Euler equations) as $t\to\infty$, or even that singularities can develop in finite time ([@kiselev2016finite; @gancedo2021local] for the generalized surface quasi-geostrophic equations). In the context of the 2D Boussinesq equations [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"}, the temperature distribution is also transported by the velocity field, making it natural to explore patch solutions $\rho$. 
The global existence of patch solutions has been rigorously proved by Danchin--Zhang [@danchin2017global] and Gancedo--García-Juárez [@gancedo2017global]. More precisely, [@gancedo2017global Theorem 3.1] states that if $\rho_0=1_{D_0}$ for a simply connected domain $D_0\subset\mathbb{R}^2$ with $\partial D_0\in W^{2,\infty}$, and $u_0\in C^\infty_c(\mathbb{R}^2)$ is divergence-free, then - there is a unique global solution $(\rho(t),u(t))$ such that $u\in L^1((0,T);W^{2,\infty}(\mathbb{R}^2))$ and $\rho(t)=1_{D_t}$, where $D_t=X_t(D_0)$ with $X_t$ the flow map associated with the velocity field $u$. More precisely, $X_t$ is the unique map determined by $$\begin{aligned} \label{flow1} \frac{d{X}_t}{dt}(x)= u(t,X_t(x)),\quad X_0(x)=x, \text{ for all $x\in \mathbb{R}^2$.} \end{aligned}$$ - $\partial D_t\in L_{loc}^\infty(\mathbb{R}^+ ; W^{2,\infty})$, which ensures that the curvature cannot grow to infinity in a finite time. Now, let us consider initial data $(\rho_0,u_0)$ such that the initial temperature distribution consists of multiple patches with smooth boundaries. The existence of a global weak solution is guaranteed. Indeed, for a set $D_0$ as described in [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"}, it is well-known that $1_{D_0}\in B^{\alpha}_{2,2}$ for any $\alpha<\frac{1}{2}$ (e.g., [@sickel1999pointwise Proposition 3.6]), thus $\rho_0 \in B^{\alpha}_{2,2}$. In this case, classical embedding theorems in Besov spaces (e.g., [@triebel2010theory Subsection 2.7]) yield that $\rho_0\in B^0_{2,1} \cap B^0_{p,\infty}$ for $p\in (2,4)$. Combining this with $u_0\in C_c^\infty(\mathbb{R}^2)$, the well-posedness theorem [@MR2305876 Theorem 1.2] ensures that there exists a unique weak solution $(\rho,u)$ to [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} in the class $$\begin{aligned} \label{global_weak} (\rho,u)\in C(\mathbb{R}^+; B^0_{2,1} \cap B^0_{p,\infty})\times C(\mathbb{R}^+; H^2). 
\end{aligned}$$ However, technically speaking, the global existence results for patch solutions in [@gancedo2017global] are not directly applicable to the initial data as above, since $\rho_0$ consists of two disjoint patches instead of a single patch. More precisely, it is not trivial to see whether the temperature distribution $\rho$ in [\[global_weak\]](#global_weak){reference-type="eqref" reference="global_weak"} qualifies as a patch solution, and if it does, whether boundary regularity can persist: a rough velocity $u\in C(\mathbb{R}^+; H^2)$ does not guarantee enough regularity of the flow map $x\mapsto X_t(x)$ to ensure any regularity of the boundary of the set $X_t(D_0)$. Since our primary focus lies elsewhere, and considering that the main ideas from [@gancedo2017global] can be readily applied to the proof, we will only state a theorem concerning the global existence of patch solutions involving multiple patches. The detailed proof is left to the interested reader. **Theorem 1**. *[@gancedo2017global Theorem 3.1][\[wellposed\]]{#wellposed label="wellposed"} Let $N\in\mathbb{N}$. For $i=1,\ldots,N$, let us pick real numbers $a_i\in \mathbb{R}$ and simply connected bounded domains $D_i\subset \mathbb{R}^2$ such that the closures $\overline{D}_i$ are pairwise disjoint and $\partial D_i \in W^{2,\infty}$. Let us consider initial data $(\rho_0,u_0)$ such that $\rho_0=\sum_{i=1}^N a_i1_{D_i}$ and $u_0\in H^2(\mathbb{R}^2)$ is divergence-free. Then there exists a unique weak solution $(\rho,u)$ to [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} such that $u\in C(\mathbb{R}^+ ; H^2)\cap L^1_{loc}(\mathbb{R}^+; W^{2,\infty}(\mathbb{R}^2))$. In addition, for almost every $t\ge 0$, $$\rho(t) =\sum_{i=1}^N a_i 1_{D_{i,t}},\quad D_{i,t} = X_{t}(D_i),$$ where $X_t$ is the flow map generated by the velocity $u$. 
Lastly, we have persistence of the curvature: $\partial D_{i,t}\in L^\infty_{loc}(\mathbb{R}^+; W^{2,\infty})$ for $i=1,\ldots,N$.* ## Main results The goal of this paper is to construct an example of initial data with a patch-type temperature distribution such that under the dynamics [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"}, the temperature patch exhibits growth of both the curvature and the perimeter. To this end, we will consider initial data $(\rho_0,u_0)$ satisfying the following assumptions: 1. [\[assumption1\]]{#assumption1 label="assumption1"} $\rho_0(x)=1_{D_0}(x) - 1_{D_0^*}(x)$ for a simply connected domain $D_0$ such that $\overline{D_0}\subset \mathbb{R}\times \mathbb{R}^+$, $|D_0|=1$ and $\partial D_0 \in C^\infty$, and $$D_{0}^*=\left\{ x\in \mathbb{R}^2: (x_1,-x_2)\in D_0\right\}.$$ See Figure [1](#fig1){reference-type="ref" reference="fig1"} for an illustration. 2. [\[assumption2\]]{#assumption2 label="assumption2"} $u_0\in C_c^\infty(\mathbb{R}^2)$ and $u_0$ is divergence-free. Moreover, denoting $u_0=(u_{01},u_{02})$, we assume that $u_{02}$ is odd in $x_2$ and $u_{01}$ is even in $x_2$. ![Illustration of the initial patch $\rho_0$ ](Untitled.pdf){#fig1} When considering $(\rho_0,u_0)$ satisfying [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}, the uniqueness part of Theorem [\[wellposed\]](#wellposed){reference-type="ref" reference="wellposed"} ensures that the solution $\rho(t)$ obtained there preserves the $x_2$-odd symmetry: $\rho(t,x_1,-x_2) = -\rho(t,x_1,x_2)$. In other words, $\rho(t)$ takes the form $$\rho(t) = 1_{D_t} - 1_{D_t^*},\text{ where $D_t=X_t(D_0)$ and $D_{t}^*=\left\{ x\in \mathbb{R}^2: (x_1,-x_2)\in D_t\right\}.$}$$ Now, we are ready to state the paper's main theorem: **Theorem 2**. 
*Let $(\rho_0,u_0)$ satisfy [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}. Then, the global patch-type solution $\rho(t)=1_{D_t}-1_{D_t^*}$, where $D_t^*$ denotes the $x_2$-symmetric copy of $D_t$, satisfies the following:* 1. *[\[theorem1\]]{#theorem1 label="theorem1"} Infinite growth of curvature: We have $$\limsup_{t\to \infty}t^{-\frac{1}{6}}|\kappa(t)| = \infty,$$ where $\kappa(t)$ is the maximum curvature of $\partial D_t$.* 2. *[\[theorem2\]]{#theorem2 label="theorem2"} Infinite growth of perimeter: Denoting by $L_t$ the distance between a far-left and a far-right point on $\partial D_t$, we have $$\limsup_{t\to \infty} t^{-\frac{1}{6}} L_t=\infty.$$ Since $D_t$ is simply connected, this yields infinite growth of the perimeter.* # Preliminaries In this section, we collect several well-known conservation properties and some useful uniform estimates for the solutions. Due to the incompressibility of the flow, the conservation of the $L^p$ norms of the density follows immediately: $$\begin{aligned} \label{lp_conservation} \rVert \rho(t)\rVert_{L^p}=\rVert \rho_0\rVert_{L^p},\text{ for all $p\in [1,\infty]$.}\end{aligned}$$ Another well-known conserved quantity is the total energy of the system. 
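Before turning to the energy, let us record the one-line computation behind [\[lp_conservation\]](#lp_conservation){reference-type="eqref" reference="lp_conservation"} for smooth solutions and $p<\infty$: using the transport equation, integration by parts and $\nabla\cdot u=0$, $$\frac{d}{dt}\int_{\mathbb{R}^2}|\rho|^p dx = -\int_{\mathbb{R}^2}u\cdot \nabla |\rho|^p dx = \int_{\mathbb{R}^2}|\rho|^p \,\nabla\cdot u\, dx = 0.$$ The case $p=\infty$ follows since $\rho$ is constant along the flow trajectories.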
We define the potential energy $E_P(t)$ and the kinetic energy $E_K(t)$ as follows: $$\begin{aligned} E_P(t):=\int_{\mathbb{R}^2}\rho(t,x)x_2 dx,\quad E_K(t):=\frac{1}{2}\int_{\mathbb{R}^2} |u(t,x)|^2dx.\end{aligned}$$ Then it follows straightforwardly from [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} that $$\begin{aligned} \frac{d}{dt}E_P(t) &= \int_{\mathbb{R}^2}\rho_t x_2 dx = \int_{\mathbb{R}^2}-u\cdot \nabla \rho x_2 dx = \int_{\mathbb{R}^2}\rho u_2 dx,\label{Epderiv}\\ \frac{d}{dt}E_K(t) &=\int_{\mathbb{R}^2}u\cdot u_t dx = \int_{\mathbb{R}^2}-\rho e_2\cdot u + \nu\Delta u\cdot udx = \int_{\mathbb{R}^2}-\rho u_2 dx - \nu\int_{\mathbb{R}^2}|\nabla u |^2dx.\label{Ekderiv}\end{aligned}$$ By summing the above two identities and integrating in time, we obtain $$\begin{aligned} \label{energy_conservation} E_P(0)+E_K(0) = E_{P}(t)+E_K(t) + \nu\int_0^t \rVert \nabla u(t)\rVert_{L^2}^2dt =:E_T(t)+\nu\int_0^t \rVert \nabla u(t)\rVert_{L^2}^2dt.\end{aligned}$$ Under the assumptions [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}, both $E_P(t)$ and $E_K(t)$ are always nonnegative. Thus the above energy equality gives us a uniform bound for the vorticity $\omega:=\nabla\times u$, $$\begin{aligned} \label{bound2} \int_0^t \rVert \omega(t)\rVert_{L^2}^2dt \le \int_0^t \rVert \nabla u(t)\rVert_{L^2}^2dt\le C(\rho_0,u_0)(1+{\nu^{-1}}) \text{ for all $t>0$}.\end{aligned}$$ # Uniform in time estimates In this section, we derive another uniform-in-time estimate which is simple but will play a crucial role in proving the main theorem. Roughly speaking, the estimate in [\[bound2\]](#bound2){reference-type="eqref" reference="bound2"} tells us that the time-averaged vorticity dissipates eventually; for instance, $\frac{1}{T}\int_T^{2T}\rVert \omega(t)\rVert_{L^2}^2dt\to 0$ as $T\to \infty$. 
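Indeed, this averaged decay is immediate from [\[bound2\]](#bound2){reference-type="eqref" reference="bound2"}: $$\frac{1}{T}\int_T^{2T}\rVert \omega(t)\rVert_{L^2}^2dt\le \frac{1}{T}\int_0^{2T}\rVert \omega(t)\rVert_{L^2}^2dt\le \frac{C(\rho_0,u_0)(1+\nu^{-1})}{T}\longrightarrow 0, \text{ as $T\to\infty$.}$$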
Now, let us consider the vorticity equation, $$\begin{aligned} \label{vorticity_eq} \omega_t +u\cdot\nabla \omega = -\partial_1\rho +\nu\Delta\omega,\end{aligned}$$ which can be easily derived by taking the curl of the second equation of [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"}. If $\omega$ is sufficiently small in a suitable sense, the quadratic term $u\cdot\nabla \omega$ is dominated (in a weak sense) by the linear terms in [\[vorticity_eq\]](#vorticity_eq){reference-type="eqref" reference="vorticity_eq"}. Consequently, in view of the dissipation of vorticity, we may anticipate that the quantity $-\partial_1\rho +\nu\Delta \omega$ converges to zero. Establishing such asymptotic behavior in a rigorous sense may be nontrivial. However, in the next lemma, we will derive a uniform estimate which exhibits the convergence to zero of a time average of $-\partial_1\rho +\nu\Delta \omega$. **Lemma 3**. *Let $(\rho, u)$ be a solution to [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} with initial data $(\rho_0,u_0)$ satisfying [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}. Then, $$\begin{aligned} \label{unifo1} \int_0^t \rVert \partial_1 \Delta^{-1}\rho - \nu\omega\rVert_{\dot{H}^1}^2dt \le (1+\nu^{-1})C(\rho_0,u_0), \text{ for all $t>0$.}\end{aligned}$$* **Remark 4**. *When considering [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} in a bounded domain, in [@kukavica2021asymptotic; @aydin2023asymptotic], it was shown that for general initial data, $\rVert \nu\Delta u - \mathbb{P}(\rho e_2)\rVert_{L^2}$ converges to $0$ as $t\to\infty$, where $\mathbb{P}$ is the Leray projection, which is equivalent to $\rVert\nu \omega -\partial_1\Delta^{-1} \rho\rVert_{\dot{H}^1}\to 0$. 
When considering the Boussinesq system in an unbounded domain, obtaining such a result for general initial data becomes nontrivial. The challenge arises from the fact that while the proof provided in [@kukavica2021asymptotic; @aydin2023asymptotic] relies on a uniform bound of the kinetic energy, the kinetic energy in an unbounded domain may not be uniformly bounded in time in general. Although the total energy is inferred to be bounded from [\[energy_conservation\]](#energy_conservation){reference-type="eqref" reference="energy_conservation"}, it remains possible for the kinetic energy to increase indefinitely throughout the evolution without a lower bound of the potential energy. In our case, this issue is overcome by the assumptions [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}, which ensure a uniform lower bound of the potential energy.* *Proof.* The proof relies on the second derivative of the potential energy. To begin, we recall the expression of $E''_P(t)$ from [@kiselev2022small]: **Lemma 5**. *[@kiselev2022small Lemma 2.1, Lemma 2.3][\[second_derivative\]]{#second_derivative label="second_derivative"} Let $(\rho, u)$ be a solution to [\[Boussinesq\]](#Boussinesq){reference-type="eqref" reference="Boussinesq"} with initial data $(\rho_0,u_0)$ satisfying [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}. 
Then the potential energy $E_P(t)$ satisfies $$\label{2nd_der} E_P''(t) = A(t) + B(t) -\int |\nabla \partial_1\Delta^{-1}\rho(t)|^2dx \quad\text{ for all }t\geq 0,$$ where $$\label{def_A} A(t):=\sum_{i,j=1}^2\int_{\Omega}((-\Delta)^{-1}\partial_2\rho) \partial_i u_j\partial_{j}u_i \,dx,~~\text{and}~~ B(t):= \nu \int_\Omega \rho \Delta u_2 dx.$$ Furthermore, $A(t)$ satisfies $$\begin{aligned} \label{Atbound} \int_0^t |A(t)|dt \le C(\rho_0)\int_0^t \rVert \nabla u(t)\rVert_{L^2}^2 dt,\text{ for all $t>0$.}\end{aligned}$$* Towards the proof of [\[unifo1\]](#unifo1){reference-type="eqref" reference="unifo1"}, we rewrite $B(t)$, using the Biot-Savart law ($u_2=\partial_1\Delta^{-1}\omega$), as $$B(t)=\int \rho \Delta (\partial_1 \Delta^{-1}\omega)dx = -\int \partial_1 \rho \omega dx.$$ Therefore [\[2nd_der\]](#2nd_der){reference-type="eqref" reference="2nd_der"} can also be rewritten as $$\begin{aligned} \label{eppdef} E_P''(t) = A(t) - \nu\int \partial_1 \rho \omega dx - \int |\nabla \partial_1\Delta^{-1}\rho|^2dx .\end{aligned}$$ From the vorticity equation [\[vorticity_eq\]](#vorticity_eq){reference-type="eqref" reference="vorticity_eq"}, we compute $$\begin{aligned} \label{vorticity_energy} \nu\frac{d}{dt}\left(\frac{1}{2}\rVert \omega(t)\rVert_{L^2}^2\right) &= \nu\int \omega \omega_t dx = \nu \int \omega (-\partial_1 \rho + \nu\Delta \omega)dx = -\int \nu \omega\partial_1\rho dx - \int \nu^2 |\nabla \omega|^2dx.\end{aligned}$$ Combining this with [\[eppdef\]](#eppdef){reference-type="eqref" reference="eppdef"}, we obtain $$\begin{aligned} \frac{d}{dt}\left(E'_P(t) + \frac{\nu}2 \rVert \omega(t)\rVert_{L^2}^2\right) &= A(t) - 2\int \nu \omega \partial_1\rho dx - \int |\nabla \partial_1\Delta^{-1}\rho|^2dx - \int \nu^2 |\nabla \omega|^2dx\\ & = A(t) + 2\int \nabla(\nu\omega)\cdot \nabla \partial_1\Delta^{-1}\rho dx - \int |\nabla \partial_1\Delta^{-1}\rho|^2dx - \int \nu^2 |\nabla \omega|^2dx\\ & = A(t) - \int |\nabla(\partial_1\Delta^{-1}\rho - \nu 
\omega)|^2dx.\end{aligned}$$ Thus integrating this over time, we obtain $$\begin{aligned} \label{uniform_estimates} E'_P(t) + \frac{\nu}2 \rVert \omega(t)\rVert_{L^2}^2 + \int_0^t \rVert \partial_1 \Delta^{-1}\rho - \nu\omega\rVert_{\dot{H}^1}^2dt = E_P'(0)+\frac{\nu}2\rVert \omega_0\rVert_{L^2}^2 + \int_{0}^t A(t)dt,\text{ for all $t>0$.}\end{aligned}$$ Finally, moving $E'_P(t)$ from the left-hand side to the right-hand side, we obtain $$\int_0^t \rVert \partial_1 \Delta^{-1}\rho - \nu\omega\rVert_{\dot{H}^1}^2dt \le C(\rho_0,u_0) + \int_0^t A(t)dt - E_P'(t) \le (1+\nu^{-1})C(\rho_0,u_0) - E_P'(t),$$ where the last inequality follows from [\[bound2\]](#bound2){reference-type="eqref" reference="bound2"} and [\[Atbound\]](#Atbound){reference-type="eqref" reference="Atbound"}. Recalling $E_P'(t)=\int_{\mathbb{R}^2}\rho u_2dx$ from [\[Epderiv\]](#Epderiv){reference-type="eqref" reference="Epderiv"}, and using the Cauchy--Schwarz inequality, we have $|E_P'(t)|\le \rVert \rho(t)\rVert_{L^2}\rVert u(t)\rVert_{L^2}\le C(\rho_0,u_0)$. This gives the desired estimate [\[unifo1\]](#unifo1){reference-type="eqref" reference="unifo1"}. ◻ **Remark 6**. 
*In the case $\Omega=\mathbb{T}^2$ (or in suitable bounded domains), the Sobolev inequality allows us to derive a more concise estimate: $$\begin{aligned} \label{slightlyneater} \int_0^t\rVert \partial_1 \rho\rVert_{\dot{H}^{-2}}^2dt \le C(\rho_0,u_0)(1+\nu^{-1}).\end{aligned}$$ Indeed, the triangle inequality and the Sobolev inequality give $$\rVert \partial_1\Delta^{-1}\rho \rVert_{L^2(\mathbb{T}^2)}^2 \le 2\rVert \nu\omega \rVert_{L^2(\mathbb{T}^2)}^2 + 2\rVert \nu\omega - \partial_1\Delta^{-1}\rho\rVert_{L^2(\mathbb{T}^2)}^2 \le 2\rVert \nu\omega \rVert_{L^2(\mathbb{T}^2)}^2 + C\rVert \nu\omega - \partial_1\Delta^{-1}\rho\rVert_{\dot{H}^1(\mathbb{T}^2)}^2.$$ Thus, integrating over time and combining it with [\[unifo1\]](#unifo1){reference-type="eqref" reference="unifo1"} and [\[bound2\]](#bound2){reference-type="eqref" reference="bound2"}, we obtain [\[slightlyneater\]](#slightlyneater){reference-type="eqref" reference="slightlyneater"}.* # Lemmas for curvature and perimeter In this section, we study relations between the curvature/perimeter of a patch and the (negative) Sobolev norms. Throughout the section, $D$ is always assumed to be a simply connected bounded domain such that $\overline{D}\subset \mathbb{R}\times \mathbb{R}^+$, $|D|=1$ and $\partial D\in C^\infty$. Also, we will denote by $B_r(x)\subset \mathbb{R}^2$ the disk of radius $r>0$ centered at $x\in \mathbb{R}^2$. Throughout, $C>0$ will denote a universal constant that does not depend on any of the variables, and it might vary from line to line. Let us recall the Pestov--Ionin theorem, which asserts that every simple closed curve with curvature at most one encloses a unit disk. In other words, it holds that $$\begin{aligned} \label{pIlem} \sup\left\{r: B_r(x)\subset D,\text{ for some $x\in \mathbb{R}^2$}\right\}\ge \frac{1}{\max_{x\in\partial D}|\kappa(x)|},\end{aligned}$$ where $\kappa(x)$ is the signed curvature at $x\in \partial D$. 
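As a quick numerical illustration of the Pestov--Ionin bound (not part of the argument), both sides are explicit for ellipses: for $\{x_1^2/a^2+x_2^2/b^2\le 1\}$ with $a\ge b$, the largest inscribed disk has radius $b$, while the maximal boundary curvature is $a/b^2$, so a disk of radius $1/\max|\kappa|=b^2/a\le b$ indeed fits. A short stdlib-only sketch:

```python
import math

def max_curvature_ellipse(a, b, n=200000):
    """Numerically estimate max |curvature| of the ellipse t -> (a cos t, b sin t)."""
    # curvature of an ellipse: kappa(t) = a*b / (a^2 sin^2 t + b^2 cos^2 t)^(3/2)
    return max(
        a * b / (a**2 * math.sin(t)**2 + b**2 * math.cos(t)**2) ** 1.5
        for t in (2.0 * math.pi * k / n for k in range(n))
    )

for a, b in [(1.0, 1.0), (2.0, 1.0), (5.0, 0.5)]:
    kappa_max = max_curvature_ellipse(a, b)
    inscribed_radius = b  # the largest disk inside the ellipse (a >= b) has radius b
    assert abs(kappa_max - a / b**2) < 1e-9             # analytic maximum, attained at t = 0
    assert inscribed_radius + 1e-12 >= 1.0 / kappa_max  # a disk of radius 1/max|kappa| fits
```

Equality holds exactly when $a=b$, i.e., for disks.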
In the next lemma, we explore how the radius of a maximal disk within $D$ can be constrained in terms of the negative Sobolev norms of $1_D$. **Lemma 7**. *Suppose $D$ contains a disk with radius $r>0$. Then there exists a universal constant $C>0$ such that for any $\Omega\in C^\infty_c(\mathbb{R}^2)$, $$\begin{aligned} \label{lemma_cur} r^3\le C\left( \rVert \partial_1\Delta^{-1}(1_D-1_{D^*}) - \Omega \rVert_{\dot{H}^1(\mathbb{R}^2)} + \rVert \Omega\rVert_{L^2(\mathbb{R}^2)}\right),\end{aligned}$$ where $D^*:=\left\{ (x_1,x_2)\in \mathbb{R}\times \mathbb{R}^-: (x_1,-x_2)\in D\right\}$.* **Remark 8**. *Since $1_D-1_{D^*}\in L^2(\mathbb{R}^2)$ and it has zero average, we have $\partial_1\Delta^{-1}(1_D-1_{D^*})\in L^2(\mathbb{R}^2)$ [@Majda-Bertozzi:vorticity-incompressible-flow Proposition 3.3]. Therefore, if we simply take $\Omega:=\partial_1\Delta^{-1}(1_D-1_{D^*})$, then Lemma [Lemma 7](#curvature_lem){reference-type="ref" reference="curvature_lem"} tells us that the radius $r$ of a maximal disk contained in $D$ must satisfy $$\begin{aligned} r^3\le C\rVert \partial_1\Delta^{-1}(1_D-1_{D^*})\rVert_{L^2(\mathbb{R}^2)}.\end{aligned}$$ In our proof of the main theorem, we do not have any smallness of $\rVert\partial_1 \Delta^{-1}\rho(t)\rVert_{L^2(\mathbb{R}^2)}$; therefore, we will make use of a carefully chosen $\Omega$ in the application of the lemma.* *Proof.* Let $B_{0}$ be a disk contained in $D$. Without loss of generality, we assume that the center of $B_0$ lies on the $x_2$-axis, that is, $B_0=B_r((0,b))$ for some $b>0$. Clearly, we must have $r<1$, due to the assumption $|D|=1$. Next, let us consider a sequence of horizontally translated disks $B_n:=\left\{ (x_1,x_2)\in \mathbb{R}^2: (x_1-2rn,x_2)\in B_r((0,b))\right\}$ for $n\in\mathbb{N}$ and choose $N^*:=\inf\left\{ n \in \mathbb{N}: |D\cap B_n|\le \frac{r^2}{16}\right\}.$ We claim that $$\begin{aligned} \label{nstar} |D\cap B_{N^*}|\le \frac{r^2}{16}\text{ and }N^*\le \frac{32}{r^2}. 
\end{aligned}$$ The first statement is clear by the definition of $N^*$. To see the upper bound of $N^*$, let us suppose, to the contrary, that $N^* > \frac{32}{r^2}$ and denote $n^*:=\lfloor \frac{32}{r^2} \rfloor$, where $\lfloor a\rfloor$ denotes the largest integer not exceeding $a$. Since $N^*>\frac{32}{r^2}\ge n^*$, we have $|D\cap B_n|> \frac{r^2}{16}$ for all $n=1,\ldots, n^*$. Moreover, since $r<1$, we have $n^*> \frac{16}{r^2}$. However, since the disks $B_n$ are pairwise disjoint, we must have $$1=|D| \ge \sum_{n=1}^{n^*}|D\cap B_n| \ge n^*\frac{r^2}{16}>1,$$ which is a contradiction. Towards the proof of the lemma, we define functions $x_1\mapsto g(x_1)$ and $x_2\mapsto h(x_2)$ by $$\begin{aligned} g(x_1):= \begin{cases} 0, & \text{ if $x_1\le 0$},\\ \frac{1-\cos(\pi x_1/r)}{2} & \text{ if $x_1\in (0,r]$},\\ 1 & \text{ if $x_1\in (r,2rN^*]$},\\ \frac{1+\cos(\pi (x_1-2rN^*)/r)}{2} & \text{ if $x_1\in (2rN^*,2rN^*+r]$},\\ 0 & \text{ if $x_1 > 2rN^*+r$}, \end{cases} \end{aligned}$$ and $$\begin{aligned} h(x_2):= \begin{cases} \frac{1+\cos(\frac{\pi (x_2-b)}r)}{2} & \text{ if $x_2\in (b-r,b+r)$},\\ 0 & \text{ otherwise}. \end{cases} \end{aligned}$$ We then define $f=f(x_1,x_2)$ by $$\begin{aligned} \begin{cases} f(x_1,x_2):=g(x_1)h(x_2), & \text{ if $x_2\ge 0$,}\\ f(x_1,x_2):=-f(x_1,-x_2),& \text{ if $x_2<0$}, \end{cases} \end{aligned}$$ so that $f$ is odd in $x_2$. From the properties of $g$ and $h$, it is clear that the support of $f$ is contained in the vertical strip bounded by $\left\{x_1=0\right\}\cup\left\{ x_1=2rN^*+r\right\}$ whose width is $2rN^* + r\le \frac{C}r$ (see [\[nstar\]](#nstar){reference-type="eqref" reference="nstar"}). At the same time, the support of $f$ in $\mathbb{R}\times \mathbb{R}^+$ lies in the horizontal strip whose width is less than $2r$. Consequently, we have $$\begin{aligned} \label{support_f} |\text{supp}(f)|, \ |\text{supp}(\nabla f)|, \ |\text{supp}(\Delta f)|\le C. 
\end{aligned}$$ Now, denoting $\mu=1_D-1_{D^*}$, we see that for any $\Omega\in C^\infty_c(\mathbb{R}^2)$, $$\begin{aligned} \label{l2dual} {\int_{\mathbb{R}^2}\mu(x)\partial_1 f(x)dx}&={\int_{\mathbb{R}^2}\mu(x)\Delta^{-1}\Delta\partial_1 f(x)dx}=-{\int_{\mathbb{R}^2}\partial_1\Delta^{-1}\mu(x)\Delta f(x)dx}\nonumber\\ &= -\int_{\mathbb{R}^2} \left(\partial_1\Delta^{-1}\mu(x)-\Omega(x)\right)\Delta f(x)dx -\int_{\mathbb{R}^2} \Omega(x)\Delta f(x)dx\nonumber\\ &\le \rVert \partial_1\Delta^{-1}\mu - \Omega\rVert_{\dot{H}^1}\rVert \nabla f\rVert_{L^2} +\rVert \Omega\rVert_{L^2}\rVert \Delta f\rVert_{L^2}, \end{aligned}$$ where we used integration by parts and the Cauchy--Schwarz inequality to obtain the last inequality. We will estimate the left- and right-hand sides of the inequality [\[l2dual\]](#l2dual){reference-type="eqref" reference="l2dual"} separately. To get a lower bound for the left-hand side, we notice that $\mu$ and $\partial_1 f$ are both odd in $x_2$, therefore, $$\begin{aligned} \label{numerator1} \int_{\mathbb{R}^2}\mu(x)\partial_1 f(x)dx &= 2\int_{\mathbb{R}\times \mathbb{R}^+}\mu(x)\partial_{1}f(x)dx = 2\int_{D}\partial_1 f(x)dx \nonumber\\ &=2 \int_{D\cap B_0}\partial_1f(x)dx +2 \int_{D\cap B_{N^*}}\partial_1f(x)dx, \end{aligned}$$ where the last equality follows from the fact that, by the definition of $f$ (especially the definition of $g$), $\partial_1f(x_1,x_2)=g'(x_1)h(x_2)=0$ if $x\in (B_0\cup B_{N^*})^c$. 
Using $B_0\subset D$ and the definition of $g,h$, we can estimate the first integral as $$\begin{aligned} \int_{D\cap B_0}\partial_1 f(x)dx &= \int_{B_0} g'(x_1)h(x_2)dx \ge \int_{B_0,\ \left\{\frac{r}{4}<x_1<\frac{3r}{4}\right\}} g'(x_1)h(x_2)dx\\ & \ge \int_{\frac{r}4}^{\frac{3r}{4}}\int_{b-\frac{\sqrt{7}}4r}^{b+\frac{\sqrt{7}}4r} g'(x_1)h(x_2)dx_2 dx_1\\ & = \int_{\frac{r}4}^{\frac{3r}{4}}\int_{-\frac{\sqrt{7}}4r}^{\frac{\sqrt{7}}4r} \frac{\pi}{2r}\sin\left(\frac{\pi x_1}r\right) \frac{(1+\cos(\pi x_2/r))}2 dx_2dx_1\\ &=\frac{\pi r}{2}\int_{1/4}^{3/4}\int_{0}^{\frac{\sqrt{7}}{4}}\sin (\pi x_1) (1+\cos(\pi x_2)) dx_2dx_1\\ & = \frac{\pi r}{2} \int_{1/4}^{3/4}\sin(\pi x_1)dx_1\int_0^{\frac{\sqrt{7}}{4}}(1+\cos(\pi x_2))dx_2\\ &\ge \frac{\pi r}{2}\cdot\frac{\sqrt{2}}{\pi}\cdot \frac{\sqrt{7}}{4} = \frac{\sqrt{14}}{8}r,\end{aligned}$$ where we used $\int_{1/4}^{3/4}\sin(\pi x_1)dx_1 = \frac{\sqrt{2}}{\pi}$ and $\int_0^{\frac{\sqrt{7}}{4}}(1+\cos(\pi x_2))dx_2\ge \frac{\sqrt{7}}{4}$. By using $\rVert\partial_1 f\rVert_{L^\infty}\le \frac{\pi}{2r}$, the second integral can be estimated as $$\left|\int_{D\cap B_{N^*}}\partial_1f(x)dx\right|\le |D\cap B_{N^*}|\frac{\pi}{2r}\le r^2/16\cdot \frac{\pi}{2r} =\frac{\pi r}{32},$$ where the second inequality follows from [\[nstar\]](#nstar){reference-type="eqref" reference="nstar"}. Thus, in [\[numerator1\]](#numerator1){reference-type="eqref" reference="numerator1"}, we see that $$\begin{aligned} \label{numerator_estimate} \int_{\mathbb{R}^2}\mu(x)\partial_1 f(x)dx \ge 2\left(\frac{\sqrt{14}}{8}r - \frac{\pi r}{32}\right)\ge Cr.\end{aligned}$$ Let us now estimate the right-hand side of [\[l2dual\]](#l2dual){reference-type="eqref" reference="l2dual"}. Again, using the properties of $g,h$, we have that $\rVert\Delta f\rVert_{L^\infty}\le \rVert g''\rVert_{L^\infty} + \rVert h''\rVert_{L^\infty}\le \frac{C}{r^2}$ and $\rVert \nabla f \rVert_{L^\infty}\le \rVert g'\rVert_{L^\infty} + \rVert h'\rVert_{L^\infty}\le \frac{C}{r}$. 
Combining this with [\[support_f\]](#support_f){reference-type="eqref" reference="support_f"}, we get $$\begin{aligned} \label{delta_f} \rVert \Delta f \rVert_{L^2}\le \frac{C}{r^2},\text{ and }\rVert \nabla f\rVert_{L^2}\le \frac{C}{r}\le \frac{C}{r^2},\end{aligned}$$ where the last inequality follows from $r<1$. Plugging this and [\[numerator_estimate\]](#numerator_estimate){reference-type="eqref" reference="numerator_estimate"} into [\[l2dual\]](#l2dual){reference-type="eqref" reference="l2dual"}, we obtain the desired estimate [\[lemma_cur\]](#lemma_cur){reference-type="eqref" reference="lemma_cur"}. ◻ Next, we establish a lemma that will be used to estimate the perimeter of the domain $D$. **Lemma 9**. *Let $L>0$ be the distance between a far-left and a far-right point on $\partial D$, and let $A:=\int_{\mathbb{R}^2}1_{D}(x)x_2dx$. Then, there exists a universal constant $C>0$ such that for any $\Omega\in C^\infty_c(\mathbb{R}^2)$, $$\begin{aligned} \label{perimeter_c1} 1\le C(A+1)(1+L^3)\left( \rVert \partial_1\Delta^{-1}(1_D-1_{D^*})- \Omega\rVert_{\dot{H}^1}+\rVert \Omega\rVert_{L^2}\right),\end{aligned}$$ where $D^*:=\left\{ (x_1,x_2)\in \mathbb{R}\times \mathbb{R}^-: (x_1,-x_2)\in D\right\}$.* **Remark 10**. *As explained in Remark [Remark 8](#remark_zeromean){reference-type="ref" reference="remark_zeromean"}, if we simply choose $\Omega=\partial_1\Delta^{-1}(1_D-1_{D^*})$ in [\[perimeter_c1\]](#perimeter_c1){reference-type="eqref" reference="perimeter_c1"}, then we get $$1\le C(A+1)(1+L^3)\rVert \partial_1\Delta^{-1}(1_D-1_{D^*})\rVert_{L^{2}}.$$ This inequality tells us that, as long as $\int_{\mathbb{R}^2}1_{D}(x)x_2dx$ stays bounded, the perimeter of $\partial D$ grows to infinity as $\rVert \partial_1(1_D-1_{D^*})\rVert_{\dot{H}^{-2}}$ goes to zero. 
In our proof of the main theorem, we will use the slightly finer estimate stated in the lemma, due to the lack of smallness of $\rVert \partial_1\rho(t)\rVert_{\dot{H}^{-2}}$.* *Proof.* Without loss of generality, let us assume that $\inf\left\{ x_1: (x_1,x_2)\in D\right\} =0$ so that a far-right point of $\partial D$ can be denoted by $(L,x_2^r)$ for some $x_2^r>0$. Using that $|D|=1$, we see $$\begin{aligned} \int_{\mathbb{R}^2,\ \left\{x_{2}>4A\right\}}1_Ddx &\le \frac{1}{4A} \int_{\mathbb{R}^2,\ \left\{x_{2}>4A\right\}}1_D(x)x_2dx\le \frac{1}{4},\label{est12}\\ \int_{\mathbb{R}^2,\ \left\{x_{2}<\frac{1}{4L}\right\}}1_Ddx &=\int_{-\infty}^{\infty}\int_{0}^{\frac{1}{4L}}1_D(x)dx_2dx_1\le L\cdot\frac{1}{4L}= \frac{1}{4}\label{est13},\end{aligned}$$ where in [\[est13\]](#est13){reference-type="eqref" reference="est13"} we used that $D\subset \left\{0\le x_1\le L\right\}$. This implies $$\begin{aligned} \label{AandL} 4A> \frac{1}{4L}.\end{aligned}$$ Indeed, if it were not true, we would have $$1 = |D| = \int_{\mathbb{R}^2}1_Ddx \le \int_{\mathbb{R}^2,\ \left\{x_{2}>4A\right\}} 1_D dx + \int_{\mathbb{R}^2,\ \left\{x_{2}<\frac{1}{4L}\right\}}1_Ddx \le \frac{1}{2},$$ which is a contradiction. Moreover, the above estimates [\[est12\]](#est12){reference-type="eqref" reference="est12"} and [\[est13\]](#est13){reference-type="eqref" reference="est13"} imply that at least half of $D$ is contained in the horizontal strip bounded by $\left\{x_2= 4A\right\}$ and $\left\{ x_2=\frac{1}{4L}\right\}$. 
Therefore, taking away $\left\{0\le x_1\le \frac{1}{32A},\ 0\le x_2\le 4A\right\}\cup\left\{L-\frac{1}{32A}\le x_1\le L,\ 0\le x_2\le 4A\right\}$ from $D$, whose total measure is at most $\frac{1}{4}$, we have that $$\begin{aligned} \label{confined_D} \left|\left\{ x\in D: \frac{1}{32A} \le x_1 \le L-\frac{1}{32A},\quad \frac{1}{4L}\le x_2\le4A\right\} \right| \ge \frac{1}{4}.\end{aligned}$$ Towards the proof of the lemma, we choose nonnegative smooth functions $g(x_1)$ and $h(x_2)$ satisfying $$\begin{aligned} \begin{cases} \text{$g(x_1)=0$ if $x_1\le 0$},\\ \text{$0\le g'(x_1)\le 1$ for $x_1\in (0,L)$ and $g'(x_1)=1$ for $x_1\in (\frac{1}{32A},L-\frac{1}{32A})$},\\ \text{$g(x_1)$ is symmetric about the axis $\left\{x_1=L\right\}$, that is, $g(x_1)=g(2L-x_1)$ for $x_1\ge L$},\\ \text{$0\le g(x_1)\le L$, $|g'(x_1)|\le 1$ and $|g''(x_1)|\le 32A$ for all $x_1\in\mathbb{R}$,} \end{cases}\end{aligned}$$ and $$\begin{aligned} \begin{cases} \text{$h(x_2)= 0$ for $x_2\le 0$ or $x_2\ge 4A+\frac{1}{4L}$},\\ \text{$h(x_2) = 1$ for $x_2\in (\frac{1}{4L},4A)$},\\ \text{$0\le h(x_2)\le 1$, $|h'(x_2)|\le 4L$ and $|h''(x_2)|\le 32L^2$ for all $x_2\in\mathbb{R}$.} \end{cases}\end{aligned}$$ A construction of $g,h$ satisfying above properties is straightforward. For such $g,h$, we define $$\begin{aligned} f(x_1,x_2):=g(x_1)h(x_2) \text{ for $x_2\ge 0$ and } f(x_1,x_2)=-f(x_1,-x_2), \text{ for $x_2< 0$.}\end{aligned}$$ Clearly, $|\text{supp}(f)|\le C(AL+1)$, thus the above properties of $g,h$ give us that $$\begin{aligned} \label{support_f2} |\text{supp}(\nabla f)|\le C(AL+1)\le CAL,\end{aligned}$$ where the last inequality is due to [\[AandL\]](#AandL){reference-type="eqref" reference="AandL"}. 
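The construction of $g$ and $h$ asserted above can be made concrete. The sketch below (illustrative, with hypothetical parameter values $A=1$, $L=2$) builds piecewise-linear surrogates for $g'$ and $h$ — these are only Lipschitz, and a standard mollification of the ramps would upgrade them to the smooth functions of the lemma — and verifies the stated pointwise bounds on a grid:

```python
# Piecewise-linear surrogates for g' and h; the ramp slopes already match the
# stated bounds |g''| <= 32A and |h'| <= 4L. A, L are hypothetical values.
A, L = 1.0, 2.0
w = 1.0 / (32.0 * A)                   # ramp width of g' (slope 1/w = 32A)
d = 1.0 / (4.0 * L)                    # ramp width of h  (slope 1/d = 4L)

def g_prime(x):
    if x <= 0.0 or x >= 2.0 * L:
        return 0.0
    if x > L:                          # symmetry g(x) = g(2L - x)
        return -g_prime(2.0 * L - x)
    return min(1.0, x / w, (L - x) / w)

def h_(x):
    if x <= 0.0 or x >= 4.0 * A + d:
        return 0.0
    if x < d:
        return x / d
    if x <= 4.0 * A:
        return 1.0
    return (4.0 * A + d - x) / d

xs = [i * (2.0 * L) / 2000 for i in range(2001)]
assert all(abs(g_prime(x)) <= 1.0 for x in xs)
assert g_prime(L / 2.0) == 1.0         # plateau g' = 1 on (1/(32A), L - 1/(32A))
assert h_((d + 4.0 * A) / 2.0) == 1.0  # plateau h = 1 on (1/(4L), 4A)

# g via a cumulative trapezoid rule: 0 <= g <= L, and g vanishes again at 2L.
g_vals, acc = [0.0], 0.0
for i in range(1, len(xs)):
    acc += 0.5 * (g_prime(xs[i - 1]) + g_prime(xs[i])) * (xs[i] - xs[i - 1])
    g_vals.append(acc)
assert 0.0 <= max(g_vals) <= L
assert abs(g_vals[-1]) < 1e-9          # g(2L) = g(0) = 0 by the symmetry of g'
```

The odd reflection of $g'$ about $x_1=L$ encodes the symmetry condition $g(x_1)=g(2L-x_1)$, and the bound $0\le g\le L$ follows since $0\le g'\le 1$ on an interval of length $L$.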
Furthermore, noticing that $g(x_1)$ is linear for $x_1\in (\frac{1}{32A},L-\frac{1}{32A})$, it is not difficult to see that $$\begin{aligned} \label{support_f3} |\text{supp}(\Delta f)|\le C.\end{aligned}$$ Next, denoting $\mu:=1_D - 1_{D^*}$ and following the same computations in [\[l2dual\]](#l2dual){reference-type="eqref" reference="l2dual"}, we get $$\begin{aligned} \label{fract} {\int_{\mathbb{R}^2} \mu(x)\partial_1f(x)dx}\le \rVert \partial_1\Delta^{-1}\mu - \Omega\rVert_{\dot{H}^1}\rVert \nabla f\rVert_{L^2} +\rVert \Omega\rVert_{L^2}\rVert \Delta f\rVert_{L^2}, \text{ for any $\Omega\in C^\infty_c(\mathbb{R}^2)$}. \end{aligned}$$ Using $x_2$-odd symmetry of $\mu$ and $f$, we see that $\int_{\mathbb{R}^2}\mu(x)\partial_1 f(x)dx = 2\int_{D}g'(x_1)h(x_2)dx$, while the properties of $g,h$ give us that $$\begin{aligned} \int_{D}g'(x_1)h(x_2)dx\ge \int_{\frac{1}{32A}}^{L-\frac{1}{32A}}\int_{\frac{1}{4L}}^{4A}1_D(x_1,x_2)dx_2dx_1\ge \frac{1}{4},\end{aligned}$$ where the last inequality follows from [\[confined_D\]](#confined_D){reference-type="eqref" reference="confined_D"}. 
Thus, we have $$\begin{aligned} \label{nue} \int_{\mathbb{R}^2}\mu(x)\partial_1 f(x)dx \ge \frac{1}{2}.\end{aligned}$$ On the other hand, we have that $$\rVert \Delta f \rVert_{L^\infty}\le \rVert g''\rVert_{L^\infty} + \rVert h''\rVert_{L^\infty}\rVert g\rVert_{L^\infty} \le 32A + 32L^3,\quad \rVert \nabla f\rVert_{L^\infty}\le \rVert g'\rVert_{L^\infty}+\rVert h'\rVert_{L^\infty}\rVert g\rVert_{L^\infty}\le 1+4L^2.$$ Combining this with [\[support_f2\]](#support_f2){reference-type="eqref" reference="support_f2"} and [\[support_f3\]](#support_f3){reference-type="eqref" reference="support_f3"}, we get $$\rVert \Delta f\rVert_{L^2}\le C(A+L^3)\le C(A+1)(1+L^3),\quad \rVert \nabla f \rVert_{L^2} \le C(AL^3 +AL )\le C(A+1)(1+L^3).$$ Plugging this and [\[nue\]](#nue){reference-type="eqref" reference="nue"} into [\[fract\]](#fract){reference-type="eqref" reference="fract"}, we obtain the desired estimate [\[perimeter_c1\]](#perimeter_c1){reference-type="eqref" reference="perimeter_c1"}. ◻ # Proof of the main theorem In this section, we prove the main theorem of the paper. Let $\rho(t)=1_{D_t}-1_{D_t^*}$ be the global solution with the initial data $(\rho_0,u_0)$ satisfying the assumptions [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}. In the rest of the proof, $C$ denotes some positive constant that depends on only $(\rho_0,u_0,\nu)$ and might vary from line to line. 
From [\[bound2\]](#bound2){reference-type="eqref" reference="bound2"} and Lemma [Lemma 3](#uniform_vs){reference-type="ref" reference="uniform_vs"}, we have that for any $n\in\mathbb{N}$, we can find $T_n>0$ such that $T_n\to \infty$ and $$\int_{T_n}^{2T_n} \rVert \omega(t)\rVert_{L^2}^2 + \rVert \partial_1 \Delta^{-1}\rho(t) - \nu\omega(t)\rVert_{\dot{H}^1}^2 dt \le \frac{1}{n}.$$ Therefore, there exists $t_n\in [T_n,2T_n]$ such that $$\begin{aligned} \label{t_nomega} \rVert \omega(t_n)\rVert_{L^2} + \rVert \partial_1 \Delta^{-1}\rho(t_n) - \nu\omega(t_n)\rVert_{\dot{H}^1} \le \frac{1}{\sqrt{n T_n}}.\end{aligned}$$ We prove the growth of curvature [\[theorem1\]](#theorem1){reference-type="ref" reference="theorem1"} first. *[**Proof of Theorem [Theorem 2](#curvature){reference-type="ref" reference="curvature"}, part (a)**]{.upright}.* Let $B_r(x^*)$ be the largest disk contained in $D_{t_n}$, centered at $x^* = (x_1^*, x_2^*)$ with radius $r$. Then the Pestov--Ionin theorem (see [\[pIlem\]](#pIlem){reference-type="eqref" reference="pIlem"}) tells us that $$\begin{aligned} \label{radicurv} r\ge \frac{1}{\max_{x\in\partial D_{t_n}}|\kappa(t_n)|},\end{aligned}$$ where $\kappa(t_n)$ is the signed curvature of $\partial D_{t_n}$. Applying Lemma [Lemma 7](#curvature_lem){reference-type="ref" reference="curvature_lem"} with $D=D_{t_n}$ and $\Omega=\omega(t_n)$, we obtain $$r^3\le {C}\left( \rVert \partial_1\Delta^{-1}\rho(t_n) - \omega(t_n)\rVert_{\dot{H}^1} +\rVert \omega(t_n)\rVert_{L^2}\right)\le \frac{C}{\sqrt{nT_n}},$$ where the last inequality follows from [\[t_nomega\]](#t_nomega){reference-type="eqref" reference="t_nomega"}. Combining this with [\[radicurv\]](#radicurv){reference-type="eqref" reference="radicurv"} yields that $$\max_{x\in\partial D_{t_n}}|\kappa(t_n)| \ge C(nT_n)^{\frac{1}{6}} \ge C(nt_n)^{\frac{1}{6}},$$ where we used $t_n\in [T_n,2T_n]$ for the last inequality. 
Since $t_n\to \infty$ as $n\to \infty$, we obtain the desired infinite growth of the curvature. ◻ *[**Proof of Theorem [Theorem 2](#curvature){reference-type="ref" reference="curvature"}, part (b)**]{.upright}.* Let $L_n>0$ be the distance between a far-left and a far-right point on $\partial D_{t_n}$. From the energy conservation [\[energy_conservation\]](#energy_conservation){reference-type="eqref" reference="energy_conservation"}, we see that $E_P(t)\le E_T(0)<C$ for all $t>0$. Therefore, applying Lemma [Lemma 9](#periLem){reference-type="ref" reference="periLem"} with $D=D_{t_n}$, $\Omega=\omega(t_n)$ and $A=\frac{1}{2}E_P(t_n)\le C$, we obtain $$1\le C(L_n^3+1)\left( \rVert \partial_1\Delta^{-1}\rho(t_n)-\omega(t_n)\rVert_{\dot{H}^1} + \rVert \omega(t_n)\rVert_{L^2}\right)\le \frac{C}{\sqrt{nT_n}}(L_n^3+1),$$ where the last inequality follows from [\[t_nomega\]](#t_nomega){reference-type="eqref" reference="t_nomega"}. Since $T_n\to \infty$ as $n\to \infty$, the above inequalities give us that $L_n \ge C (nT_n)^{\frac{1}{6}}\ge C(nt_n)^{\frac{1}{6}}$ for all $n\in\mathbb{N}$, where $t_n\le 2T_n$ was used to justify the second inequality. Since $D_{t_n}$ is simply connected, this proves the desired infinite growth of perimeter. ◻ ## Acknowledgements {#acknowledgements .unnumbered} The author was partially supported by the SNF grant 212573-FLUTURA and the Ambizione fellowship project PZ00P2-216083. The author also extends gratitude to Eduardo García-Juárez and Yao Yao for their valuable discussions and insightful suggestions during the course of this research. [^1]: Department of Mathematics and Computer Science, University of Basel, Spiegelgasse 1, 4051 Basel, Switzerland jaemin.park\@unibas.ch
arXiv:2309.11232, "Growth of curvature and perimeter of temperature patches in the 2D Boussinesq equations" by Jaemin Park (math.AP, CC BY 4.0).
--- abstract: | This paper introduces harmonic control Lyapunov barrier functions (harmonic CLBF) that aid in constrained control problems such as reach-avoid problems. Harmonic CLBFs exploit the maximum principle that harmonic functions satisfy to encode the properties of control Lyapunov barrier functions (CLBFs). As a result, they can be initiated at the start of an experiment rather than trained based on sample trajectories. The control inputs are selected to maximize the inner product of the system dynamics with the steepest descent direction of the harmonic CLBF. Numerical results are presented with four different systems under different reach-avoid environments. Harmonic CLBFs show a significantly low risk of entering unsafe regions and a high probability of entering the goal region. author: - "Amartya Mukherjee$^{1}$, Ruikun Zhou$^{1}$, and Jun Liu$^{1}$ [^1]" bibliography: - root.bib title: " **Harmonic Control Lyapunov Barrier Functions for Constrained Optimal Control with Reach-Avoid Specifications** " --- # Introduction For safety-critical systems, ensuring stability and safety is of paramount importance. One approach to address this issue for such systems is to model it as a constrained optimal control problem. The objective of constrained optimal control is to design a control system that optimizes a notion of performance while satisfying a set of constraints. These constraints are usually designed to prevent catastrophic events, ensure safety, and take into account multiple objectives with bounds. Furthermore, with experiments conducted in real life, resetting the environment if the agent goes outside a domain can be expensive [@nguyen2023reset]. One of the most common scenarios for safety-critical systems is the reach-avoid problem, which can be regarded as a subset of constrained optimal control problems where a trajectory aims to reach a goal state while avoiding a set of unsafe states. 
In recent years, there has been a growing interest in reach-avoid problems among the control theory and reinforcement learning community. Such methods involve learning control Lyapunov functions to certify reachability and control barrier functions that certify avoidability [@gurriet2018towards; @choi2020reinforcement]. In reinforcement learning, agents learn optimal policies through exploration of the environment with unknown dynamics [@sutton2018reinforcement]. Constraints are imposed using penalty methods [@heess2017emergence] and Lagrangian methods [@ray2019benchmarking]. In these methods, two value functions are computed to estimate cumulative rewards and cumulative costs. The problem with using two certificates or two value functions is that the control policy faces conflicts over whether to satisfy reachability or avoidability at a given point in time. To resolve these conflicts, reachability and safety have been combined into a single certificate called the control Lyapunov barrier function (CLBF). It is shown in [@MENG2022110478] that theoretically there exists a single Lyapunov barrier function for such reach-avoid type specifications with stability and safety guarantees, provided that such specifications are robustly satisfiable. Moreover, [@dawson2021safe] introduces robust CLBFs and derives a QP-based controller, in which the CLBF is computed using neural networks by encoding the constraints in their loss function. In a similar manner but with reinforcement learning, the Lyapunov barrier-based actor-critic method is proposed in [@du2023reinforcement] to find a controller that satisfies both certificates with a CLBF that is obtained by solving the Lagrangian relaxation of the constrained optimization problem. Unfortunately, neither method guarantees that its value function $V$ satisfies all the CLBF constraints, as evidenced by their experiments. 
For example, in 2D quadrotor control problems, the contour plots of the derived CLBF do not completely cover the unsafe regions. Our work addresses this issue by enforcing the CLBF constraints as boundary conditions for harmonic functions. Numerical results demonstrate that this technique indeed improves safety rates, compared with the results reported in [@dawson2021safe]. Harmonic functions are solutions to the Laplace equation, which is a second-order linear elliptic partial differential equation (PDE). They satisfy the maximum principle on a compact set, thus making it easier to impose CLBF constraints. They do not have any critical points other than saddle points in the interior of their domain, which makes it easy to derive optimal control strategies. Harmonic functions have been used in the control literature to derive potential fields in past works. In the work of [@kim1992real], harmonic potential functions with panel methods were used to build potential fields over a task space. The work of [@akishita1990lapace] used harmonic functions with complex variables and conformal mappings to achieve moving obstacle avoidance. Furthermore, [@connolly1990path] combines harmonic functions using superposition to derive potential fields. In this paper, we will focus on reach-avoid problems, where an agent must reach a goal region while avoiding unsafe regions. For example, the unsafe regions could be regions on a floor that have holes or pillars. We achieve this by introducing harmonic control Lyapunov barrier functions (harmonic CLBFs) that use Laplace's equation to encode the properties of control Lyapunov barrier functions (CLBFs). Furthermore, Laplace's equation can be solved using numerical methods such as finite element methods (FEMs), so the CLBF need not be trained with neural networks on sample trajectories as done in prior work, thus significantly reducing the computational cost of reach-avoid experiments. 
In all, the main contributions of this paper are threefold. Firstly, we propose a novel method to unify harmonic functions with CLBFs to provide both stability and safety guarantees. Secondly, under this framework, we show that optimal controllers for reach-avoid tasks can be derived using gradient-based methods directly with no need for searching such controllers by training a neural network or solving optimal control problems. Lastly, we conduct a set of numerical experiments that show a significantly low risk of trajectories entering unsafe regions with the proposed approach. # Preliminaries Throughout this work, we consider a nonlinear control system of the form $$\dot{x} = f(x,u), \quad x(0)=x_{0}, \label{eq:dyn}$$ where $x \in \mathcal{S}\subset \mathbb{R}^n$ is the state of the system, and $u \in U \subset \mathbb{R}^{m}$ is the control input. Let $\mathcal{S}$ be a compact subset of $\mathbb{R}^n$ denoting the space of all admissible states in a control problem. Let $\mathcal{S}_{goal}\subset\mathcal{S}$ be an open set denoting the space of states that the controller aims to reach, and let $\mathcal{S}_{unsafe}$ be a compact subset of $\mathcal{S}$ denoting the space of states the controller aims to avoid. Define $\mathcal{S}_{safe}=\mathcal{S}\backslash\left(\overline{\mathcal{S}_{goal}}\cup \mathcal{S}_{unsafe}\right)$. ## Control Lyapunov Barrier Functions **Definition 1** (Control Lyapunov Barrier Function [@dawson2021safe; @du2023reinforcement]). *A function $V:\mathcal{S}\to\mathbb{R}$ is a control Lyapunov barrier function (CLBF) if, for some constants $c,\lambda>0$:* 1. *$V(s)=0$ for all $s\in \mathcal{S}_{goal}$;* 2. *$V(s)>0$ for all $s\in \mathcal{S}\backslash \mathcal{S}_{goal}$;* 3. *$V(s)\geq c$ for all $s\in \mathcal{S}_{unsafe}$;* 4. *$V(s)<c$ for all $s\in \mathcal{S}\backslash\mathcal{S}_{unsafe}$;* 5. 
*There exists a controller $\pi:\,S_{safe} \rightarrow U$ such that $\langle \nabla V(s),f(s,\pi(s))\rangle+\lambda V(s)\leq 0$ for all $s\in S_{safe}$.* *[\[def:clbf\]]{#def:clbf label="def:clbf"}* ## Harmonic Functions **Definition 2** (Harmonic Function [@gilbarg1983elliptic]). *A harmonic function $u$ is a $C^2$-function that satisfies the Laplace equation $\nabla^2u=\nabla\cdot\nabla u=0$.* In the following theorems, we introduce properties of harmonic functions that are of importance in the fields of elliptic PDEs, differential geometry, and complex analysis. **Theorem 1** (Mean Value Theorem [@protter1983maximum]). *If $u$ is harmonic on a domain $\mathcal{S}$, then $u$ satisfies the mean value property in $\mathcal{S}$. Furthermore, if $x\in\mathcal{S}$ and $r>0$ are such that $\overline{B_r(x)}\subset\mathcal{S}$, where $B_r(x)$ is the ball of radius $r$ centered at $x$, then $$u(x)=\frac{\int_{B_r(x)}u(y)dy}{\int_{B_r(x)}dy}.$$* This theorem is essential for proving Theorems [Theorem 2](#thm:strong_max){reference-type="ref" reference="thm:strong_max"} and [Theorem 3](#thm:weak_max){reference-type="ref" reference="thm:weak_max"}, which provide bounds on harmonic functions. **Theorem 2** (Strong Maximum Principle [@gilbarg1983elliptic]). *If $u$ is harmonic on a domain $\mathcal{S}$ and $u$ attains its maximum in $\mathcal{S}\backslash\partial\mathcal{S}$, then $u$ is constant.* **Theorem 3** (Weak Maximum Principle [@gilbarg1983elliptic]). *If $u\in C^2(\mathcal{S})\cap C^1(\partial\mathcal{S})$ is harmonic in a bounded domain $\mathcal{S}$, then $$\max_{\overline{\mathcal{S}}}u=\max_{\partial\mathcal{S}}u.$$* These maximum principles can thus be exploited to derive a CLBF that attains its maximum in unsafe regions and its minimum in goal regions. 
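As a concrete illustration of the mean value property in Theorem 1 (not part of the development), the harmonic function $u(x,y)=x^2-y^2$ can be averaged numerically over a disk; a simple midpoint rule in polar coordinates reproduces the center value:

```python
import math

def disk_average(u, cx, cy, r, n_rho=200, n_theta=200):
    """Midpoint-rule average of u over the disk of radius r centered at (cx, cy)."""
    total = 0.0
    for i in range(n_rho):
        rho = (i + 0.5) * r / n_rho
        for j in range(n_theta):
            th = (j + 0.5) * 2.0 * math.pi / n_theta
            total += u(cx + rho * math.cos(th), cy + rho * math.sin(th)) * rho
    total *= (r / n_rho) * (2.0 * math.pi / n_theta)   # area element rho drho dtheta
    return total / (math.pi * r * r)

u = lambda x, y: x * x - y * y                 # harmonic: u_xx + u_yy = 2 - 2 = 0
avg = disk_average(u, 0.7, -0.3, 0.5)
assert abs(avg - u(0.7, -0.3)) < 1e-6          # ball average equals the center value
```

The angular sums annihilate the $\cos\theta$, $\sin\theta$, and $\cos 2\theta$ terms exactly, which is why the agreement is essentially at machine precision here.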
# Harmonic CLBF In this work, we explore the intersection between CLBFs and harmonic functions. By exploiting the maximum principle, we can derive a function $V:S\to\mathbb{R}$ that satisfies properties (1)--(4) of a CLBF by imposing boundary conditions on the CLBF. **Definition 3** (Harmonic CLBF). *A function $V\in C^2(\mathcal{S})\cap C^1(\partial \mathcal{S})$ is a harmonic CLBF if it satisfies the following conditions:* 1. *$\nabla^2V(s)=0$ for all $s\in \mathcal{S}_{safe}$;* 2. *$V(s)=0$ for all $s\in\overline{\mathcal{S}_{goal}}$;* 3. *$V(s)=c$ for all $s\in\partial\mathcal{S}\cup \mathcal{S}_{unsafe}$.* *In other words, $V(s)$ is a solution to the boundary value problem given above.* **Theorem 4**. *All harmonic CLBFs $V$ satisfy properties (1)--(4) in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"}.* *Proof.* Properties (1) and (3) in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"} are trivially satisfied as they are set as boundary conditions from properties (2) and (3) in Definition [Definition 3](#def:harmonicclbf){reference-type="ref" reference="def:harmonicclbf"}. If $V(s)\leq 0$ for some $s\in\mathcal{S}\backslash\mathcal{S}_{goal}$ ($V(s)\geq c$ for some $s\in\mathcal{S}\backslash\mathcal{S}_{unsafe}$), then $V(s)$ has its minimum (maximum) in the interior of $\mathcal{S}_{safe}$. By Theorem [Theorem 2](#thm:strong_max){reference-type="ref" reference="thm:strong_max"}, $V$ is then constant in $\mathcal{S}_{safe}$, which contradicts the non-constant boundary conditions imposed on it. Thus, harmonic CLBFs satisfy properties (2) and (4) in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"}. ◻ **Proposition 1**. *Every reach-avoid problem admits a unique harmonic CLBF.* *Proof.* Consider an arbitrary reach-avoid problem characterized by $S,S_{goal},S_{unsafe}$. 
Existence follows from the classical solvability of the Dirichlet problem for Laplace's equation (for sufficiently regular $S_{safe}$). For uniqueness, suppose that the reach-avoid problem has two harmonic CLBFs, $V_1(s)$ and $V_2(s)$. Define the function $U(s)=V_1(s)-V_2(s)$. The Laplacian of $U(s)$ is $\nabla^2U=\nabla^2V_1-\nabla^2V_2=0$ on $S_{safe}$. As a result, $U$ is a function satisfying $\nabla^2 U=0$ on $S_{safe}$, $U=0$ on $\overline{S_{goal}}$, and $U=0$ on $\partial\mathcal{S}\cup \mathcal{S}_{unsafe}$. By Theorem [Theorem 3](#thm:weak_max){reference-type="ref" reference="thm:weak_max"}, $U$ has its maximum and minimum of $0$ on the boundary, so $U=0$ on $S_{safe}$. Thus, the harmonic CLBF is unique. ◻ We now introduce a sufficient condition for verifying property (5) of the CLBF conditions in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"}. **Theorem 5**. *Let $V$ be a harmonic CLBF, and consider the deterministic nonlinear system [\[eq:dyn\]](#eq:dyn){reference-type="eqref" reference="eq:dyn"}. If there exists a positive constant $\lambda$ such that $$\label{eq:lambda} \lambda \leq -\sup_{x\in\mathcal{S}_{safe}}\inf_{u\in U}\frac{\langle f(x,u),\nabla V(x)\rangle}{V(x)},$$ then property (5) in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"} holds.* *Proof.* Define an optimal controller $\pi^*(x)$ as $$\pi^*(x) = \arg\inf_{u\in U}\langle f(x,u),\nabla V(x)\rangle.$$ Since $V(x)>0$ for all $x\in\mathcal{S}_{safe}$, it follows that $$\begin{aligned} \frac{\langle f(x,\pi^*(x)),\nabla V(x)\rangle}{V(x)} &\le \inf_{u\in U}\frac{\langle f(x,u),\nabla V(x)\rangle}{V(x)} \\ & \le \sup_{x\in\mathcal{S}_{safe}}\inf_{u\in U}\frac{\langle f(x,u),\nabla V(x)\rangle}{V(x)} \\ &\le -\lambda,\end{aligned}$$ where the last inequality is precisely ([\[eq:lambda\]](#eq:lambda){reference-type="ref" reference="eq:lambda"}). Multiplying through by $V(x)>0$ gives $\langle f(x,\pi^*(x)),\nabla V(x)\rangle+\lambda V(x)\leq 0$ for all $x\in\mathcal{S}_{safe}$, which verifies property (5) in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"}. 
◻ Furthermore, a sufficient condition for a system to avoid $\mathcal{S}_{unsafe}$ is: $$\label{eq:sup_inf} \sup_{x\in\mathcal{S}_{safe}}\inf_{u\in U}\langle f(x,u),\nabla V(x)\rangle\leq 0$$ Indeed, property (4) in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"} gives $V(x)<c$ for all $x\in\mathcal{S}_{safe}$, and under [\[eq:sup_inf\]](#eq:sup_inf){reference-type="eqref" reference="eq:sup_inf"} we have $\frac{d}{dt}V(x(t))\leq 0$ along trajectories. Hence, for all $t\in[0,\infty)$, $x(t)$ will never approach a point where $V(x)=c$; since $\mathcal{S}_{unsafe}$ is a closed set on which $V\geq c$, the trajectory will never approach a point in $\mathcal{S}_{unsafe}$. ## Avoiding the saddle points As shown in [@yanushauskas1969zeros], all interior critical points of $V(x)$ are saddle points. Random noise is added in [@ge2015escaping] to parameters while performing gradient descent to guarantee that the algorithm converges to a local minimum. The authors of [@lee2016gradient] show that gradient descent with random parameter initialization asymptotically avoids saddle points. In this paper, we therefore compare deterministic control inputs with stochastic control inputs to see which method shows better convergence to $\mathcal{S}_{goal}$. At every point $x\in \mathcal{S}_{safe}$, the control input is selected as $u=\text{argmin}_{u'\in U}\langle f(x,u'),\nabla V(x)\rangle+z$, where $z$ is a small noise term sampled from a normal distribution $N(0,\sigma^2)$ to help escape saddle points. ## Conditions on the noise In order to preserve the properties of the CLBF, we introduce bounds on $z$, derived via Taylor series expansion, so that the trajectory avoids unsafe regions when the first-order terms dominate. Let $\Delta t$ be the time step size used for numerical simulation. 
Consider the first-order expansion of $x(t+\Delta t)$ as $$x(t+\Delta t)=x+f(x,u)\Delta t+O(\Delta t^2).$$ By adding some noise $z$ to the control input to avoid saddle points, we have $$x(t+\Delta t)=x+f(x,u+z)\Delta t+O(\Delta t^2).$$ Applying the first-order expansion to $f(x,u+z)$ then yields $$x(t+\Delta t)=x+f(x,u)\Delta t+\nabla_uf(x,u)^Tz\Delta t+O(\Delta t^2).$$ Also, we consider the first-order expansion of $V(x(t+\Delta t))$: $$\begin{aligned} V(x(t+\Delta t))=&V(x)+\langle\nabla V,f(x,u)+\nabla_uf(x,u)^Tz\rangle\Delta t\nonumber\\&+O(\Delta t^2). \end{aligned}$$ To retain the safety guarantee, the condition $V(x(t+\Delta t))<c$ needs to be satisfied, that is, $$\langle\nabla V(x),f(x,u)+\nabla_uf(x,u)^Tz\rangle<\frac{c-V(x)}{\Delta t}.$$ Expanding and simplifying this expression gives the following bounds: $$\label{eq:noise_bounds} \langle\nabla_uf(x,u)\nabla V(x),z\rangle<\frac{c-V(x)}{\Delta t}-\langle\nabla V(x), f(x,u)\rangle.$$ Using the Cauchy-Schwarz inequality, a sufficient way to bound the noise $z$ is $$||\nabla_uf(x,u)\nabla V(x)||_2||z||_2<\frac{c-V(x)}{\Delta t}-\langle\nabla V(x), f(x,u)\rangle.$$ If $f$ is a control-affine function $f(x,u)=f_1(x)+f_2(x)u$, then we can bound $z$ as: $$\label{eq:z_upper_bound} ||z||_2<\frac{\frac{c-V(x)}{\Delta t}-\langle\nabla V(x), f(x,u)\rangle}{||f_2(x)||_2||\nabla V(x)||_2}.$$ Since Theorem [Theorem 5](#thm:lambda){reference-type="ref" reference="thm:lambda"} requires that under the optimal controller $\pi^*(x)$, $\langle\nabla V(x), f(x,\pi^*(x))\rangle\leq 0$, this shows that the bound imposed on $z$ is positive. **Remark 1**. *Numerical solutions in section [4](#sec:results){reference-type="ref" reference="sec:results"} show that in harmonic CLBFs, the distinction between safe and unsafe regions is unclear. 
To make the distinction more transparent, we replace the CLBF given by Laplace's equation with a solution of Poisson's equation $\nabla^2V=-6$ in this paper, meaning $V$ is a superharmonic function. However, this method also poses a risk that $V$ has local minima in the interior of its domain [@axler2013harmonic]. This is undesirable as local minima are harder to escape compared to saddle points. We will compare harmonic CLBFs with superharmonic CLBFs using numerical results.* # Numerical Results {#sec:results} In this section, we will explore two different reach-avoid environments, one with four small unsafe regions and the other with two big unsafe regions, for three different systems: Roomba, DiffDrive, and CarRobot [@ModernRobotics]. In addition, we also perform a reach-avoid task for a 2D quadrotor [@dawson2021safe]. The dynamics of each of these systems are provided in Appendix I along with control inputs that minimize the expression in Equation [\[eq:sup_inf\]](#eq:sup_inf){reference-type="eqref" reference="eq:sup_inf"} for any $x\in \mathcal{S}_{safe}$. We use $c=1$ in the CLBF defined in Definition [\[def:clbf\]](#def:clbf){reference-type="ref" reference="def:clbf"} for all numerical results. For each environment and system, we compute the harmonic ($\nabla^2V=0$) and superharmonic ($\nabla^2V=-6$) CLBF numerically using finite element methods. For both of the CLBFs, we compute the trajectories of the system with 1,000 different randomly initialized initial conditions and count the number of time steps (of size $\Delta t=0.1$) it takes for the system to reach $\mathcal{S}_{goal}$. 
We test this setting with both deterministic ($\sigma=0$) and stochastic ($\sigma=0.1$, clipped by the upper bound in Equation [\[eq:z_upper_bound\]](#eq:z_upper_bound){reference-type="eqref" reference="eq:z_upper_bound"}) controllers and report the mean time taken to reach the goal ($\mu_T$), the standard deviation of the time taken to reach the goal ($\sigma_T$), the number of times the system ends in an unsafe region (not included in $\mu_T,\sigma_T$), and the number of times the system does not reach $\mathcal{S}_{goal}$ after 1,000 time steps (not included in $\mu_T,\sigma_T$). ## Problem I In this problem, we explore an environment that contains a goal region near the origin and four unsafe regions in the interior of the domain. The results with three car-like dynamical systems, Roomba, DiffDrive, and CarRobot, are reported. $$\begin{aligned} S&=[-1,1]\times[-1,1]\\ \mathcal{S}_{goal}&=[-0.1,0.1]\times[-0.1,0.1]\\ \mathcal{S}_{unsafe}&=\partial S\cup C_1\cup C_2\cup C_3\cup C_4,\end{aligned}$$ with the subdomains of the unsafe region given by $$\begin{aligned} % \partial S&=[-1,1]\times\{-1,1\}\cup\{-1,1\}\times[-1,1]\\ C_1&=[-0.5,-0.3]\times[-0.5,-0.3]\\ C_2&=[-0.5,-0.3]\times[0.3,0.5]\\ C_3&=[0.3,0.5]\times[-0.5,-0.3]\\ C_4&=[0.3,0.5]\times[0.3,0.5].\end{aligned}$$ Initial conditions are sampled as: $$\begin{aligned} x(0),y(0)&\sim U[-0.9,-0.6]\cup[0.6,0.9]\\ \theta(0)&\sim U[0,2\pi].\end{aligned}$$ We derive a harmonic CLBF using finite element methods implemented in DOLFIN [@dolfin]. We used piecewise linear trial functions and a triangular mesh. The domain is divided into 5,000 triangular elements of equal area. This numerical solution is plotted in 2D view in Fig. [1](#fig:1_0_2d){reference-type="ref" reference="fig:1_0_2d"}. 
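For readers without an FEM stack, the harmonic CLBF for Problem I can also be approximated by a simplified finite-difference sketch (an illustration, not the paper's DOLFIN pipeline; the grid size and Gauss–Seidel sweep count are assumptions). The clamped values encode the boundary conditions of Definition 3 with $c=1$, and the free interior values then obey the discrete maximum principle:

```python
n = 33                                        # uniform grid over S = [-1, 1]^2
xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]

def in_goal(x, y):                            # S_goal = [-0.1, 0.1]^2
    return abs(x) <= 0.1 and abs(y) <= 0.1

def in_unsafe(x, y):                          # boundary of S plus C_1, ..., C_4
    if abs(x) >= 1.0 or abs(y) >= 1.0:
        return True
    return 0.3 <= abs(x) <= 0.5 and 0.3 <= abs(y) <= 0.5

V = [[0.5] * n for _ in range(n)]
fixed = [[None] * n for _ in range(n)]
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        if in_goal(x, y):
            fixed[i][j] = 0.0                 # V = 0 on the closure of S_goal
        elif in_unsafe(x, y):
            fixed[i][j] = 1.0                 # V = c = 1 on the boundary and S_unsafe
        if fixed[i][j] is not None:
            V[i][j] = fixed[i][j]

for _ in range(800):                          # Gauss-Seidel sweeps for Laplace's equation
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if fixed[i][j] is None:
                V[i][j] = 0.25 * (V[i-1][j] + V[i+1][j] + V[i][j-1] + V[i][j+1])

# Discrete maximum principle: values on the free (safe) cells stay strictly in (0, c).
free_vals = [V[i][j] for i in range(n) for j in range(n) if fixed[i][j] is None]
assert all(0.0 < v < 1.0 for v in free_vals)
```

The five-point average preserves the interval $(0,1)$ on every free cell, mirroring the continuum argument of Theorem 4.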
![Problem I with $\nabla^2V=0$](1_0_2d.eps){#fig:1_0_2d width="\\linewidth"} ![Problem I with $\nabla^2V=-6$](1_6_2d.eps){#fig:1_6_2d width="\\linewidth"} ![Problem II with $\nabla^2V=0$](2_0_2d.eps){#fig:2_0_2d width="\\linewidth"} ![Problem II with $\nabla^2V=-6$](2_6_2d.eps){#fig:2_6_2d width="\\linewidth"} Although, in theory, the solution takes values strictly less than 1.0 outside the unsafe region, the distinction is highly unclear in the numerical solution. To clarify the distinction, we replace the Laplace equation with the Poisson equation $\nabla^2V=-6$. We plotted the solution in 2D view in Fig. [2](#fig:1_6_2d){reference-type="ref" reference="fig:1_6_2d"}. This solution shows a better distinction between safe and unsafe regions, but it shows more points where $V(x)$ has a local optimum. Local optima are undesirable as $\langle f(x,u),\nabla V(x)\rangle=0$ at those points, so $V(x)$ is unlikely to show a significant decrease. System $\nabla^2V$ $\sigma$ $\mu_T$ $\sigma_T$ unsafe no reach --------------- ------------- ---------- ------------- ------------- -------- ---------- Roomba 0 0 97.278 65.607 0 105 Roomba -6 0 58.224 50.388 0 459 **Roomba** **0** **0.1** **114.706** **81.527** **0** **0** Roomba -6 0.1 139.606 180.247 0 185 DiffDrive 0 0 301.403 216.495 0 95 DiffDrive -6 0 127.045 115.348 0 492 **DiffDrive** **0** **0.1** **239.759** **220.689** **0** **2** DiffDrive -6 0.1 158.837 167.421 0 5 CarRobot 0 0 452.125 323.942 0 193 **CarRobot** **-6** **0** **277.398** **253.148** **0** **47** CarRobot 0 0.1 21.200 69.889 0 930 CarRobot -6 0.1 26.887 88.174 0 920 : Results for Problem I. Each system was run with 1,000 different randomly initialized initial conditions under harmonic and superharmonic CLBFs with deterministic and stochastic controllers. The numerical results are listed in Table [1](#tab:CLBF1){reference-type="ref" reference="tab:CLBF1"}. 
Roomba achieves its best results under a harmonic CLBF with a stochastic control policy, with all trajectories converging to $\mathcal{S}_{goal}$. DiffDrive achieves its best results under a harmonic CLBF with a stochastic control policy ($99.8\%$ safety rate), with comparable performance under a superharmonic CLBF with a stochastic control policy ($99.5\%$ safety rate). CarRobot achieves its best results under a superharmonic CLBF with a deterministic policy ($95.3\%$ safety rate). ## Problem II In this problem, we explore an environment that contains a goal region near the origin and two unsafe regions in the interior of the domain. As in Problem I, the three models are tested on this reach-avoid task. $$\begin{aligned} S&=[-1,1]\times[-1,1]\\ \mathcal{S}_{goal}&=[-0.1,0.1]\times[-0.1,0.1]\\ \mathcal{S}_{unsafe}&=\partial S\cup C_5\cup C_6,\end{aligned}$$ with the subdomains of the unsafe region given by $$\begin{aligned} C_5&=[-0.5,-0.3]\times[-0.5,0.5]\\ C_6&=[0.3,0.5]\times[-0.5,0.5].\end{aligned}$$ Initial conditions are sampled as: $$\begin{aligned} x(0)&\sim U[-0.9,-0.6]\cup[0.6,0.9]\\ y(0)&\sim U[-0.3,0.3]\\ \theta(0)&\sim U[0,2\pi].\end{aligned}$$ The numerical solution to the harmonic CLBF for this problem is plotted in 2D view in Fig. [3](#fig:2_0_2d){reference-type="ref" reference="fig:2_0_2d"}. To make the distinction between safe and unsafe regions clearer, we plot the solution to the Poisson equation $\nabla^2V=-6$ in 2D view in Fig. [4](#fig:2_6_2d){reference-type="ref" reference="fig:2_6_2d"}.
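Since the two $x(0)$ intervals have equal length, drawing initial conditions from the union amounts to picking an interval with probability proportional to its length and sampling uniformly inside it; a sketch of the Problem II sampler (function names are ours):

```python
import math
import random

def sample_union(a, b, c, d, rng=random):
    """Uniform sample from [a,b] U [c,d]: pick an interval with
    probability proportional to its length, then a point inside it."""
    la, lc = b - a, d - c
    if rng.random() < la / (la + lc):
        return rng.uniform(a, b)
    return rng.uniform(c, d)

def sample_initial_condition(rng=random):
    # Problem II: x from the union, y and theta uniform on their ranges.
    x = sample_union(-0.9, -0.6, 0.6, 0.9, rng)
    y = rng.uniform(-0.3, 0.3)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return x, y, theta
```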
System $\nabla^2V$ $\sigma$ $\mu_T$ $\sigma_T$ unsafe no reach --------------- ------------- ---------- ------------- ------------- -------- ---------- Roomba 0 0 85.600 54.603 12 0 **Roomba** **-6** **0** **202.568** **199.603** **0** **91** Roomba 0 0.1 86.846 55.789 9 78 Roomba -6 0.1 48.033 21.363 0 880 **DiffDrive** **0** **0** **258.502** **168.726** **0** **55** DiffDrive -6 0 96.077 28.211 0 730 DiffDrive 0 0.1 265.348 164.137 7 9 DiffDrive -6 0.1 209.452 171.338 0 118 **CarRobot** **0** **0** **380.014** **230.751** **30** **124** CarRobot -6 0 415.246 257.179 0 570 CarRobot 0 0.1 264.435 205.930 45 942 CarRobot -6 0.1 342.060 262.481 0 987 : Results for Problem II. Each system was run with 1,000 different randomly sampled initial conditions under harmonic and superharmonic CLBFs with deterministic and stochastic controllers. The numerical results are reported in Table [2](#tab:CLBF2){reference-type="ref" reference="tab:CLBF2"}. Roomba achieves its best results under a superharmonic CLBF with a deterministic control policy ($90.9\%$ safety rate). DiffDrive achieves its best results under a harmonic CLBF with a deterministic control policy ($94.5\%$ safety rate). CarRobot achieves its best results under a harmonic CLBF with a deterministic policy ($84.6\%$ safety rate), although 30 of its trajectories converge to $\mathcal{S}_{unsafe}$ and 124 do not converge to $\mathcal{S}_{goal}$. Furthermore, this table shows that deterministic control policies achieve better results than stochastic control policies and have significantly more trajectories that converge to $\mathcal{S}_{goal}$. Tables [1](#tab:CLBF1){reference-type="ref" reference="tab:CLBF1"} and [2](#tab:CLBF2){reference-type="ref" reference="tab:CLBF2"} show that deterministic policies under superharmonic CLBFs always avoid $\mathcal{S}_{unsafe}$, as expected, since the Poisson equation makes the distinction between safe and unsafe regions more evident.
However, this comes with the drawback of a higher risk that the trajectory converges to a local minimum and never reaches $\mathcal{S}_{goal}$. Deterministic policies generally outperform stochastic policies, as the noise added to the control inputs risks driving the trajectory away from the goal or towards an unsafe region. Despite using control inputs that solve the minimization problem in Equation [\[eq:sup_inf\]](#eq:sup_inf){reference-type="eqref" reference="eq:sup_inf"}, in many episodes the trajectory does not converge to $\mathcal{S}_{goal}$. This is likely due to numerical errors associated with estimating $\nabla V$ or with solving the system dynamics. ## 2D Quadrotor In this problem, we control a quadrotor to land at a safe ground-level region while avoiding two obstacles. The dynamics of the quadrotor are given in Appendix [5.4](#app:quad2d){reference-type="ref" reference="app:quad2d"}. We run the same problem as in [@du2023reinforcement] and a horizontally flipped version of the problem in [@dawson2021safe]. The safe and unsafe subsets are given as follows. $$\begin{aligned} S&=[-1,2]\times [0,2]\\ \mathcal{S}_{goal}&=[-1,0.5]\times[0.25,0.75]\\ \mathcal{S}_{unsafe}&=\partial S\cup C_7\cup C_8\cup C_9,\end{aligned}$$ with the subdomains of the unsafe region given by $$\begin{aligned} C_7&=[-1,0]\times[1.25,2]\\ C_8&=[-1,2]\times [0,0.25]\\ C_9&=[0.5,1]\times [0,1].\end{aligned}$$ Initial conditions are sampled as: $$\begin{aligned} x(0)&\sim U[1.4,1.6]\\ z(0)&\sim U[0.3,1.5]\\ x'(0),z'(0),\theta(0),\theta'(0)&\sim U[-0.1,0.1].\end{aligned}$$ The numerical solution to the harmonic CLBF for this problem is plotted in Fig. [5](#fig:quad2d){reference-type="ref" reference="fig:quad2d"}.
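The stochastic controller referred to throughout perturbs the deterministic inputs with Gaussian noise and clips the result back to an admissible range; the sketch below uses a simple box clip as a stand-in for the paper's upper-bound clipping rule, which is not reproduced here.

```python
import random

def stochastic_policy(u_det, sigma=0.1, lo=-1.0, hi=1.0, rng=random):
    """Add Gaussian exploration noise (std sigma) to each deterministic
    control input and clip back to the admissible box [lo, hi]."""
    return tuple(min(hi, max(lo, u + rng.gauss(0.0, sigma))) for u in u_det)
```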
![Contour plot of the harmonic CLBF for the 2D Quadrotor environment](quad2d_1_0.eps){#fig:quad2d width="0.8\\linewidth"} In this case, we use a neural network deterministic controller $\pi_\kappa(\textbf{x})$ parameterized by $\kappa$, where $\textbf{x}=(x,\dot x,z,\dot z,\theta,\dot\theta)$, and we train the controller using model-predictive control (MPC). After numerically solving the ODE $\dot{\textbf{x}}=f(\textbf{x},\pi_\kappa(\textbf{x}))$ for a horizon of 128 time steps with time step size $\Delta t=0.01$ using RK4, we compute the loss as the cosine similarity of $f(\textbf{x}_t,\pi_\kappa(\textbf{x}_t))$ and $\nabla V(\textbf{x}_t)$ over all time steps. We repeat this for 100,000 different initial conditions. $\mu_T$ $\sigma_T$ unsafe no reach safety rate --------- ------------ -------- ---------- ------------- 70.350 16.114 43 96 86.1% : Results for 2D Quadrotor After training, we test the controller on the environment for 1,000 different initial conditions. The numerical results are reported in Table [3](#tab:quad2d){reference-type="ref" reference="tab:quad2d"}. The safety rate of the controller is $86.1\%$, which is higher than the safety rates for rCLBF-QP ($83\%$) and Robust-MPC ($53\%$) reported in [@dawson2021safe]. To compare with [@du2023reinforcement], we have computed the trajectories of the quadrotor for the initial conditions $\textbf{x}_1=[1.5,0,0.35,0,0,0]$ and $\textbf{x}_2=[1.5,0,1.2,0,0,0]$. In Fig. [6](#fig:quad2d_traj){reference-type="ref" reference="fig:quad2d_traj"}, we overlay the trajectories on a superharmonic CLBF plot with $\nabla^2V=-6$, computed to make the distinction between safe and unsafe regions clearer. The trajectories in the plot show that the quadrotor can effectively avoid unsafe regions and enter the goal region.
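The rollout-and-loss computation described above can be sketched independently of any neural-network library; here `policy` and `grad_V` are placeholders for the trained controller $\pi_\kappa$ and the CLBF gradient, and the sketch only evaluates the loss (training would additionally backpropagate through it).

```python
import math

def rk4_step(f, x, u, dt):
    """One classical Runge-Kutta step for x' = f(x, u), u held fixed."""
    k1 = f(x, u)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)], u)
    return [xi + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def cosine_similarity(a, b):
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return sum(ai * bi for ai, bi in zip(a, b)) / (na * nb)

def rollout_loss(f, policy, grad_V, x0, steps=128, dt=0.01):
    """Mean cosine similarity of f(x, pi(x)) and grad V(x) along an RK4
    rollout; training drives this toward -1 (steepest descent of V)."""
    x, total = list(x0), 0.0
    for _ in range(steps):
        u = policy(x)
        total += cosine_similarity(f(x, u), grad_V(x))
        x = rk4_step(f, x, u, dt)
    return total / steps
```

The toy dynamics in the usage below (an integrator steered toward the origin) are an assumption chosen so the ideal loss value is known, not the quadrotor model.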
![Plot of trajectories on the 2D Quadrotor environment with $z(0)=0.35$ (blue) and $z(0)=1.2$ (red).](quad2d_2_sol.eps){#fig:quad2d_traj width="0.8\\linewidth"} # Conclusions In this paper, we introduced harmonic CLBFs, which exploit the maximum principle satisfied by harmonic functions to encode the properties of CLBFs. This paper is the first to unify harmonic functions with CLBFs for an application to control theory. We select control inputs that maximize the inner product of the system dynamics with the steepest descent direction of the harmonic CLBF. This approach has been applied to reach-avoid problems and demonstrated a low risk of entering unsafe regions while converging to the goal region. # Appendix I: Dynamics of each system {#appendix-i-dynamics-of-each-system .unnumbered} ## Roomba The dynamics of the Roomba are as follows: $$\begin{aligned} \nonumber \dot{x}=v\cos\theta, \quad \dot{y}=v\sin\theta, \quad \dot\theta=\omega,\end{aligned}$$ where $v\in[-1,1]$ and $\omega\in[-1,1]$ are the control inputs. The minimization problem corresponding to Equation [\[eq:sup_inf\]](#eq:sup_inf){reference-type="eqref" reference="eq:sup_inf"} is: $$\nonumber \inf_{v\in[-1,1],\omega\in[-1,1]}(v\cos\theta,v\sin\theta)(V_x(x,y),V_y(x,y))^T.$$ The infimum is reached by choosing $v$ to make the inner product as negative as possible, while $\omega$ is set to the direction that maximizes the magnitude of the inner product, which yields $$\begin{aligned} v&=-\text{sign}(V_x(x,y)\cos\theta+V_y(x,y)\sin\theta), \nonumber\\ \omega&=\text{sign}(-V_x(x,y)\cos\theta+V_y(x,y)\sin\theta). \nonumber\end{aligned}$$ These control input pairs are used in computing the results in Tables [1](#tab:CLBF1){reference-type="ref" reference="tab:CLBF1"} and [2](#tab:CLBF2){reference-type="ref" reference="tab:CLBF2"}.
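The closed-form Roomba inputs above translate directly into code; the gradient values $V_x$, $V_y$ would come from the numerical CLBF (e.g., by finite differences on the grid solution).

```python
import math

def sign(a):
    """Sign function with sign(0) = 0."""
    return (a > 0) - (a < 0)

def roomba_control(theta, Vx, Vy):
    """Bang-bang control inputs from the closed-form expressions above,
    given the heading theta and the CLBF gradient (Vx, Vy)."""
    v = -sign(Vx * math.cos(theta) + Vy * math.sin(theta))
    w = sign(-Vx * math.cos(theta) + Vy * math.sin(theta))
    return v, w
```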
## DiffDrive The dynamics of the diff-drive robot are as follows: $$\begin{aligned} \nonumber \dot x&=(u_L+u_R)\frac{r}{2}\cos\theta, \dot y=(u_L+u_R)\frac{r}{2}\sin\theta,\nonumber\\ \dot\theta&=(u_R-u_L)\frac{r}{2d}, \nonumber\end{aligned}$$ where $u_L\in[-1,1]$ and $u_R\in[-1,1]$ are the control inputs subject to the constraint $|u_L|+|u_R|\leq 1$. We set $r=0.1$ and $d=0.1$. The minimization problem corresponding to Equation [\[eq:sup_inf\]](#eq:sup_inf){reference-type="eqref" reference="eq:sup_inf"} is: $$\begin{aligned} \nonumber \inf_{|u_L|+|u_R|\leq 1}&((u_L+u_R)\frac{r}{2}\cos\theta,(u_L+u_R)\frac{r}{2}\sin\theta)\nabla V(x,y).\end{aligned}$$ The infimum is reached by choosing $u_L+u_R$ to make the inner product as negative as possible, while $u_R-u_L$ is set to the direction that maximizes the magnitude of the inner product, which yields $$\begin{aligned} u_R=[&\text{sign}(-V_x(x,y)\cos\theta+V_y(x,y)\sin\theta)\nonumber\\&-\text{sign}(V_x(x,y)\cos\theta+V_y(x,y)\sin\theta)]/2, \nonumber\\ u_L=[&-\text{sign}(-V_x(x,y)\cos\theta+V_y(x,y)\sin\theta)\nonumber\\&-\text{sign}(V_x(x,y)\cos\theta+V_y(x,y)\sin\theta)]/2. \nonumber\end{aligned}$$ ## CarRobot The dynamics of the car-like robot are as follows: $$\begin{aligned} \nonumber \dot{x}=v\cos\theta, \quad \dot{y}=v\sin\theta, \quad \dot\theta=v\tan(\psi)/l, \quad \dot\psi=w,\end{aligned}$$ where $v\in[-1,1]$ and $w\in[-1,1]$ are the control inputs subject to the constraint $|v|\leq |w|$. We set $l=0.1$, and $\psi(0)$ is initialized to $0$. The minimization problem corresponding to Equation [\[eq:sup_inf\]](#eq:sup_inf){reference-type="eqref" reference="eq:sup_inf"} is: $$\nonumber \inf_{|v|\leq|w|\leq 1}(v\cos\theta,v\sin\theta)(V_x(x,y),V_y(x,y))^T.$$ The infimum is reached by choosing $v$ to make the inner product as negative as possible, while $w$ is set to the direction that maximizes the magnitude of the inner product.
$$\begin{aligned} v&=-\text{sign}(V_x(x,y)\cos\theta+V_y(x,y)\sin\theta), \nonumber\\ w&=v\text{sign}(-V_x(x,y)\cos\theta+V_y(x,y)\sin\theta)-\text{sign}(\psi). \nonumber\end{aligned}$$ ## 2D Quadrotor {#app:quad2d} We use the implementation of the 2D quadrotor environment by [@9849119]. The dynamics of the quadrotor are as follows: $$\begin{aligned} \ddot{x}&=\sin\theta (T_1+T_2)/m \nonumber\\ \ddot{z}&=\cos\theta (T_1+T_2)/m-g \nonumber\\ \ddot{\theta}&=(T_2-T_1)d/I_{yy}, \nonumber\end{aligned}$$ where $T_1,T_2\in [g/2-4,g/2+4]$ are the control inputs, and $m=0.033$, $I_{yy}=1.436\times 10^{-5}$, $g=9.81$, and $d=0.00397/\sqrt{2}$ are constants. # ACKNOWLEDGMENT {#acknowledgment .unnumbered} The authors would like to thank Professor Yash Pant from the Department of Electrical and Computer Engineering, University of Waterloo, for providing us with feedback on this paper. [^1]: $^{1}$Amartya Mukherjee, Ruikun Zhou, and Jun Liu are with the Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1 (email: `a29mukhe, ruikun.zhou, j.liu@uwaterloo.ca`).
--- author: - "Matteo Dalla Riva[^1] , Paolo Luzzini[^2] , Riccardo Molinarolo[^3] , Paolo Musolino[^4]" date: September 14, 2023 title: Multi-parameter perturbations for the space periodic heat equation --- **Abstract:** This paper is divided into three parts. The first part focuses on periodic layer heat potentials, demonstrating their smooth dependence on regular perturbations of the support of integration. In the second part, we present an application of the results from the first part. Specifically, we consider a transmission problem for the heat equation in a periodic two-phase composite material and we show that the solution depends smoothly on the shape of the transmission interface, boundary data, and conductivity parameters. Finally, in the last part of the paper, we fix all parameters except for the contrast parameter and outline a strategy to deduce an explicit expansion of the solution using a Neumann-type series. **Keywords:** heat equation, domain perturbation, layer potentials, transmission problem, shape sensitivity analysis, periodic domain, special nonlinear operators, Neumann series. **2020 Mathematics Subject Classification:** 35K20; 31B10; 35B10; 47H30; 45A05. # Introduction Understanding how the properties of an object depend on its shape is a crucial aspect of many real-world problems, especially when seeking to achieve the optimal configuration for maximizing some sort of efficiency. In mathematical jargon, the quest for optimal shapes is commonly known as "shape optimization," and it has garnered considerable attention in the mathematical literature. The interested reader can find ample references and results in the monographs by Henrot and Pierre [@HePi05], Novotny and Sokołowski [@NoSo13], and Sokołowski and Zolésio [@SoZo92].
From a mathematical standpoint, addressing such questions often involves studying how solutions to specific boundary value problems, as well as related quantities, are affected by perturbations of the domain of definition and other problem parameters. This leads us to analyze the mappings that connect a set of perturbation parameters to the solution of a boundary value problem. To undertake this, having access to the toolbox of differential calculus is advantageous. Consequently, understanding the regularity properties of these maps becomes crucial. In other words, it is important to determine whether these maps are continuous, differentiable, or enjoy higher regularity properties, such as smoothness and analyticity. These properties reveal different aspects of the perturbation and can be used in different ways: Continuity implies that small variations of the perturbation parameters correspond to small changes in the solution. Differentiability allows for characterizing the stationary points as critical points. These critical points are important in optimization problems as they represent potential optimal configurations. Smoothness and analyticity are stronger properties. With smoothness we can approximate the solution with its Taylor expansion in the perturbation parameter with any degree of accuracy, while with analyticity we can represent the solution as a convergent power series. Now, a common method for studying boundary value problems is potential theory, which employs integral operators to transform the original problem into a system of boundary integral equations. Eventually, this method allows us to obtain the solution as a sum of layer potentials. As a result, an approach to understanding the perturbation sensitivity of a solution to a boundary value problem is by studying how the layer potentials and the integral operators depend upon such perturbations. Many authors have explored this approach for elliptic equations. 
For example, Potthast [@Po94; @Po96a; @Po96b] proved that layer potentials for the Helmholtz equation are Fréchet differentiable functions of the support of integration. Similar results have been obtained for a variety of equations, including the Stokes system of fluid dynamics and the Lamé equations of elasticity. Notable references in this context include Charalambopoulos [@Ch95], Costabel and Le Louër [@CoLe12a; @CoLe12b; @Le12], Haddar and Kress [@HaKr04], Hettlich [@He95], Kirsch [@Ki93], and Kress and Päivärinta [@KrPa99]. However, we observe that very few results establish regularity beyond differentiability. An exception is the series of works by Lanza de Cristoforis and his collaborators, dedicated to proving that layer potentials and integral operators depend analytically on domain perturbations. Here we mention Lanza de Cristoforis and Rossi [@LaRo04] and [@DaLuMu22] for the layer potentials for the Laplace equation, Lanza de Cristoforis and Rossi [@LaRo08] for the Helmholtz equation, [@DaLa10] for general second order equations, and [@LaMu11] for the periodic case. Moreover, in [@DaLu23] we have obtained a smoothness result for the heat layer potentials, which, in the first part of the present paper, we will extend to the space-periodic heat layer potentials. The method developed by Lanza de Cristoforis and collaborators was called the "functional analytic approach" (cf. [@DaLaMu21]). It was used for both regular and singular perturbations, where a perturbation is classified as regular if it does not cause any loss of regularity in the domain, and as singular if it does. Another approach to dealing with regular domain perturbations has recently appeared in the literature, relying on complex analysis techniques and aiming to prove the "shape holomorphy" of layer potential operators and integral operators.
For applications of this approach, we refer the reader to Jerez-Hanckes, Schwab, and Zech [@JeScZe17], which deals with the electromagnetic wave scattering problem, Cohen, Schwab, and Zech [@CoScZe18], about the stationary Navier-Stokes equations, Henríquez and Schwab [@HeSc21], on the Calderón projector for the Laplacian in $\mathbb{R}^2$, Dölz and Henríquez [@DoHe23], on parametric shape holomorphy, and Pinto, Henríquez, and Jerez-Hanckes [@PiHeJe23], on the shape holomorphy of boundary integral operators on multiple open arcs. Apart from [@DaLu23], all the above-cited literature concerns elliptic equations. Notably, corresponding results for parabolic problems are scarcer. To the best of our knowledge, the only exceptions are Chapko, Kress and Yoon [@ChKrYo98; @ChKrYo99] and Hettlich and Rundell [@HeRu01] for the Fréchet differentiability upon the domain of the solution of the heat equation, with application to some inverse problems in heat conduction, and the already cited [@DaLu23] for the infinite order smoothness of the layer heat potentials upon the support of integration. In this paper, we adopt Lanza de Cristoforis' functional analytic approach to obtain higher order regularity results for the space-periodic version of layer heat potentials upon the support of integration. In particular, in the first part of the paper we investigate the space-periodic layer potentials for the heat equation and demonstrate that they depend smoothly on a pair $(\phi,\mu)$, where $\phi$ is a function that characterizes the shape of the domain and $\mu$ is the (pull-back of the) density function. To achieve this, we build upon similar findings for the nonperiodic heat layer potentials established in [@DaLu23].
To the best of our knowledge, this is the first paper to show such a result for space-periodic heat layer potentials, previous papers dealing with periodic layer potentials being dedicated to the case of elliptic operators (see, e.g., Feppon and Ammari [@FeAm22], Pukhtaievych [@Pu18], and [@LuMu20]). In the subsequent sections, we showcase how the results obtained in the first part can be utilized to examine the shape sensitivity of solutions to boundary value problems. As an illustrative application, we consider an ideal transmission problem for the heat equation in a space-periodic two-phase composite material. We show that the solution depends smoothly on the shape of the transmission interface, as well as on the boundary data and the conductivity parameters. Lastly, in the final part of the paper, we revisit the space-periodic transmission problem studied in the previous section. However, this time, we fix all parameters except for the contrast parameter. Then we outline a strategy to deduce an explicit expansion of the solution using a Neumann-type series. The paper is organized as follows: Section [2](#s:prel){reference-type="ref" reference="s:prel"} introduces some notation and preliminaries. In Section [3](#s:class){reference-type="ref" reference="s:class"}, we review certain results from [@DaLu23] concerning nonperiodic layer potentials. In Section [4](#s:period){reference-type="ref" reference="s:period"}, we derive analogous results for the space-periodic layer potentials. Section [5](#s:trans){reference-type="ref" reference="s:trans"} investigates the perturbation sensitivity of solutions to an ideal transmission problem in a space-periodic domain. Finally, in Section [6](#s:neu){reference-type="ref" reference="s:neu"}, we consider the scenario where all parameters are fixed, except for the contrast parameter. 
# Preliminaries {#s:prel} From this point onward, we fix a value for $n$ from the set ${\mathbb{N}}\setminus\{0,1\}$, where ${\mathbb{N}}$ denotes the set of natural numbers, including zero. Additionally, we define a periodicity cell as follows: $$Q := \prod_{j=1}^n ]0,q_{jj}[,$$ where $q_{jj}>0$ for all $j \in \{1,\ldots,n\}$. We denote by $q$ the diagonal matrix $$q := \begin{pmatrix} q_{11} & 0 & \cdots & 0 \\ 0 & q_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & q_{nn} \end{pmatrix},$$ and by $|Q|_n = \prod_{j=1}^n q_{jj}$ the measure of the periodicity cell $Q$. Clearly $$q\mathbb{Z}^n= \{qz:z\in \mathbb{Z}^n\}$$ is the set of vertices of a periodic subdivision of $\mathbb{R}^n$ corresponding to the fundamental cell $Q$. A set $A\subseteq \mathbb{R}^n$ is said to be $q$-periodic if $A +qz = A$ for all $z \in \mathbb{Z}^n$. If $A$ is a $q$-periodic set, a function $f:A \to \mathbb{R}$ is said to be $q$-periodic if $f(\cdot +qz) = f(\cdot)$ for all $z \in \mathbb{Z}^n$. If $\Omega$ is a subset of $\mathbb{R}^n$, then $\overline\Omega$, $\partial\Omega$, and $\nu_\Omega$ denote the closure, boundary, and, where defined, outward normal to $\Omega$, respectively. If $\overline\Omega \subseteq Q$, then we set $$\mathbb{S}[\Omega]:= \bigcup_{z\in \mathbb{Z}^n}(qz +\Omega)=q\mathbb{Z}^n +\Omega, \qquad \mathbb{S}[\Omega]^- := \mathbb{R}^n \setminus \overline{\mathbb{S}[\Omega]}.$$ We observe that both $\mathbb{S}[\Omega]$ and $\mathbb{S}[\Omega]^-$ are $q$-periodic domains. We will consider the heat equation $$\partial_t u - \Delta u = 0$$ in domains that are space-periodic, and our approach will rely on the space-periodic potential theory for the heat equation. Specifically, we will exploit space-periodic layer potentials obtained by replacing the classical fundamental solution of the heat equation with a periodic counterpart.
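In computations on $q$-periodic domains, points of $\mathbb{R}^n$ are typically reduced to a representative in the cell $Q$, and $q$-periodicity of a function can be checked at sample points; a small sketch (the function names are ours, and $q$ is passed as the tuple of diagonal entries $q_{11},\ldots,q_{nn}$):

```python
def wrap_to_cell(x, q):
    """Map a point of R^n to its representative in the periodicity cell
    Q = prod_j (0, q_jj); q holds the diagonal entries q_11, ..., q_nn."""
    return tuple(xi % qi for xi, qi in zip(x, q))

def is_q_periodic(f, x, q, z, tol=1e-9):
    """Check f(x + q z) = f(x) at the sample point x, for z in Z^n."""
    shifted = tuple(xi + qi * zi for xi, qi, zi in zip(x, q, z))
    return abs(f(shifted) - f(x)) <= tol
```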
As is well known, a fundamental solution of the heat equation is defined as follows: $$\label{phin} S_{n}(t,x):= \left\{ \begin{array}{ll} \frac{1}{(4\pi t)^{\frac{n}{2}} }e^{-\frac{|x|^{2}}{4t}}&{\mathrm{if}}\ (t,x)\in (0,+\infty)\times{\mathbb{R}}^{n}\,, \\ 0 &{\mathrm{if}}\ (t,x)\in ((-\infty,0]\times{\mathbb{R}}^{n})\setminus\{(0,0)\}. \end{array} \right.$$ Then a $q$-periodic fundamental solution $S_{q,n}:(\mathbb{R}\times {\mathbb{R}}^{n})\setminus (\{0\} \times q \mathbb{Z}^n)\to\mathbb{R}$ for the heat equation is defined by taking $$\label{phinq} S_{q,n}(t,x):= \left\{ \begin{array}{ll} \sum_{z\in \mathbb{Z}^n}\frac{1}{(4\pi t)^{\frac{n}{2}} }e^{-\frac{|x+qz|^{2}}{4t}}&{\mathrm{if}}\ (t,x)\in (0,+\infty)\times{\mathbb{R}}^{n}\,, \\ 0 &{\mathrm{if}}\ (t,x)\in \left((-\infty,0]\times{\mathbb{R}}^{n}\right)\setminus (\{0\} \times q \mathbb{Z}^n) \end{array} \right.$$ (see Pinsky [@Pi02 Ch. 4.2] for the case $n=1$ and Bernstein, Ebert, and Kraußhar [@BeEbSo11] for $n \geq 2$; see also [@Lu18]). We will use the functional framework of Schauder classes. For the classical definitions of sets and functions belonging to class $C^{j,\alpha}$, with $\alpha\in(0,1)$ and $j\in\{0,1\}$, we refer to Gilbarg and Trudinger [@GiTr83]. For the definition of time-dependent functions in the parabolic Schauder class $C^{\frac{j+\alpha}{2};j+\alpha}$ on $[0,T]\times \overline\Omega$ or $[0,T]\times \partial\Omega$ we refer to Ladyženskaja, Solonnikov, and Ural'ceva [@LaSoUr68]. In essence, a function of class $C^{\frac{j+\alpha}{2};j+\alpha}$ is $\left(\frac{j+\alpha}{2}\right)$-Hölder continuous in the time variable, and $(j,\alpha)$-Schauder regular in the space variable. We also denote by $C_0^{\frac{j+\alpha}{2};j+\alpha}$ the parabolic Schauder class of functions that vanish at time $t=0$, and by $C_{0,q}^{\frac{j+\alpha}{2};j+\alpha}$ the subspace of $C_0^{\frac{j+\alpha}{2};j+\alpha}$ consisting of functions that are also $q$-periodic.
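Numerically, the periodic fundamental solution $S_{q,n}$ in [\[phinq\]](#phinq){reference-type="eqref" reference="phinq"} is evaluated by truncating the lattice sum; the Gaussian decay of each term makes the truncation error negligible once $N\min_j q_{jj}$ is large compared with $\sqrt{t}$. A sketch for arbitrary $n$ (the truncation radius `N` is a tunable choice of ours):

```python
import itertools
import math

def S_qn(t, x, q, N=3):
    """Truncated lattice sum for the q-periodic heat kernel S_{q,n},
    with n = len(x) and q the tuple of diagonal entries q_11..q_nn;
    terms with |z_j| > N are dropped (Gaussian decay justifies this)."""
    if t <= 0.0:
        return 0.0
    n = len(x)
    c = 1.0 / (4.0 * math.pi * t) ** (n / 2.0)
    total = 0.0
    for z in itertools.product(range(-N, N + 1), repeat=n):
        r2 = sum((xi + qi * zi) ** 2 for xi, qi, zi in zip(x, q, z))
        total += c * math.exp(-r2 / (4.0 * t))
    return total
```

Shifting the argument by one period changes only the outermost, exponentially small shell of the truncated sum, which is how the periodicity can be verified in practice.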
The definition of parabolic Schauder classes can be extended to products of intervals and manifolds by using local charts. In the present paper we consider all the functional spaces to be made of real valued functions. We will adopt the following notation: If $D$ is a subset of $\mathbb{R}^n$, $T >0$ and $h$ is a map from $D$ to $\mathbb{R}^n$, we denote by $h^T$ the map from $[0,T] \times D$ to $[0,T] \times \mathbb{R}^n$ defined by $$h^T(t,x) := (t, h(x)) \qquad \forall (t,x) \in [0,T] \times D.$$ Let $\alpha\in(0,1)$ and assume that $$\label{cond Omega} \begin{split} &\Omega\text{ is a bounded connected open subset of } \mathbb{R}^n\ \text{of class } C^{1,\alpha}\\ & \text {and has connected exterior } \Omega^-:=\mathbb{R}^n\setminus\overline{\Omega}\,. \end{split}$$ We take $\Omega$ to be the reference shape, and to formalize domain perturbations, we consider specific classes of diffeomorphisms defined on the boundary $\partial \Omega$. Precisely, we denote by $\mathcal{A}^{1,\alpha}_{\partial \Omega}$ the set of functions of class $C^{1,\alpha}(\partial\Omega, \mathbb{R}^{n})$ that are injective together with their differential at all points of $\partial\Omega$. According to Lanza de Cristoforis and Rossi [@LaRo08 Lem. 2.2, p. 197] and [@LaRo04 Lem. 2.5, p. 143], $\mathcal{A}^{1,\alpha}_{\partial \Omega}$ is an open subset of $C^{1,\alpha}(\partial\Omega, \mathbb{R}^{n})$. For $\phi \in \mathcal{A}^{1,\alpha}_{\partial \Omega}$, the Jordan-Leray separation theorem ensures that $\mathbb{R}^{n}\setminus \phi(\partial \Omega)$ has exactly two open connected components (see, e.g., Deimling [@De85 Thm. 5.2, p. 26] and [@DaLaMu21 §A.4]). We denote the bounded connected component of $\mathbb{R}^{n}\setminus \phi(\partial \Omega)$ by $\mathbb{I}[\phi]$ and the unbounded one by $\mathbb{E}[\phi]$. Moreover, we will use $\nu_\phi$ to denote the outer unit normal to $\mathbb{I}[\phi]$. 
Then we set $${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} := \left\{\phi \in\mathcal{A}_{\partial \Omega}^{1,\alpha} : \phi(\partial\Omega) \subseteq Q\right\},$$ and for brevity, we use the notation $$\mathbb{S}[\phi]:= \mathbb{S}[\mathbb{I}[\phi]], \qquad \mathbb{S}[\phi]^-:= \mathbb{S}[\mathbb{I}[\phi]]^-$$ for all $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$. Both $\mathbb{S}[\phi]$ and $\mathbb{S}[\phi]^-$ are $q$-periodic domains depending on the diffeomorphism $\phi$ (see Figure [1](#fig:lphi){reference-type="ref" reference="fig:lphi"}). Therefore, we can perturb the shape of $\mathbb{S}[\phi]$ and $\mathbb{S}[\phi]^-$ by changing the function $\phi$. ![*The sets $\mathbb{S}[\phi]^-$, $\mathbb{S}[\phi]$, and $\phi(\partial\Omega)$ in case $n=2$.*](Fig1.pdf){#fig:lphi width="4.2in"} We will consider integral operators supported on $\phi(\partial \Omega)$. To analyze their dependence on $\phi$, we will perform a change of variables. For this purpose, we rely on the following technical lemma, which shows that the map related to the change of variables in the area element and the pullback $\nu_\phi\circ\phi$ of the outer normal field depend analytically on $\phi$. A proof of this lemma can be found in Lanza de Cristoforis and Rossi [@LaRo04 p. 166] and Lanza de Cristoforis [@La07 Prop. 1]. **Lemma 1**. *Let $\alpha\in \mathopen (0,1)$ and $\Omega$ be a bounded open subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$ with connected exterior. 
Then the following statements hold.* - *For each $\phi \in \mathcal{A}^{1,\alpha}_{\partial \Omega}$, there exists a unique $\tilde \sigma_n[\phi] \in C^{1,\alpha}(\partial\Omega)$ such that $\tilde \sigma_n[\phi] > 0$ and $$\int_{\phi(\partial\Omega)}w(s)\,d\sigma_s= \int_{\partial\Omega}w \circ \phi(y)\tilde\sigma_n[\phi](y)\,d\sigma_y, \qquad \forall w \in L^1(\phi(\partial\Omega)).$$ Moreover, the map $\tilde \sigma_n[\cdot]$ is real analytic from $\mathcal{A}_{\partial \Omega}^{1,\alpha}$ to $C^{0,\alpha}(\partial\Omega)$.* - *The map from $\mathcal{A}_{\partial \Omega}^{1,\alpha}$ to $C^{0,\alpha}(\partial\Omega, \mathbb{R}^{n})$ which takes $\phi$ to $\nu_{\phi} \circ \phi$ is real analytic.* # Domain perturbations of classical layer potentials {#s:class} Our first goal is to demonstrate that space-periodic layer potentials for the heat equation depend smoothly on perturbations of the support of integration. As previously mentioned in the introduction, similar results have already been established in [@DaLu23] for the non-periodic layer potentials. We intend to leverage those existing results and extend them to the periodic case. Therefore, we begin by reviewing the findings of [@DaLu23], which concern layer heat potentials supported on $[0,T] \times \phi(\partial\Omega)$ for some $T>0$ and $\phi\in \mathcal{A}_{\partial \Omega}^{1,\alpha}$, as well as integral operators acting between Schauder spaces on $[0,T] \times \phi(\partial\Omega)$. However, to treat $\phi$ as a variable and state smoothness results for $\phi$-dependent functions, we need to work in a $\phi$-independent functional setting. We will then pull back the layer potentials to the fixed domain $[0,T]\times \partial \Omega$ and, simultaneously, push forward the density functions from $[0,T]\times \partial \Omega$ to $[0,T]\times \phi(\partial\Omega)$.
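The change-of-variables identity in Lemma 1(i) can be checked numerically in the plane: for $n=2$ and a parametrization $\gamma$ of $\partial\Omega$, the factor $\tilde\sigma_n[\phi]$ reduces to the speed $|(\phi\circ\gamma)'(s)|$ of the pushed-forward curve. The quadrature sketch below is ours and only illustrative.

```python
import math

def boundary_integral(w, phi_gamma, m=2000):
    """Approximate the integral of w over the curve phi(boundary) by
    pulling back to the parameter s in [0, 2*pi): d(sigma) becomes
    |(phi o gamma)'(s)| ds, the role played by sigma_n[phi] in Lemma 1.
    The speed is estimated by central differences; the periodic
    trapezoid rule (= left Riemann sum here) handles the s-integral."""
    h = 2.0 * math.pi / m
    total = 0.0
    for k in range(m):
        s = k * h
        p = phi_gamma(s)
        pp = phi_gamma(s + 1e-6)
        pm = phi_gamma(s - 1e-6)
        speed = math.hypot((pp[0] - pm[0]) / 2e-6, (pp[1] - pm[1]) / 2e-6)
        total += w(p) * speed * h
    return total
```

For a circle of radius 2 this recovers the perimeter $4\pi$, and on the unit circle the moment $\int x_1^2\,d\sigma = \pi$.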
To be precise, we consider the operators that take $\mu \in C_0^{\frac{\alpha}{2};\alpha}([0,T]\times \partial\Omega)$ to $$\begin{aligned} &V[\phi,\mu](t,\xi) := \int_0^t\int_{\phi(\partial\Omega)} S_n(t-\tau,\phi(\xi)-y) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau,\\ & V_l[\phi,\mu](t,\xi) := \int_0^t\int_{\phi(\partial\Omega)} \partial_{x_l}S_n(t-\tau,\phi(\xi)-y) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau \quad \forall l \in\{1,\ldots,n\},\\ &W^*[\phi,\mu](t,\xi) :=\int_0^t\int_{\phi(\partial\Omega)}D_xS_n(t-\tau,\phi(\xi)-y)\cdot \nu_{\phi}(\xi) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau,\end{aligned}$$ for all $(t,\xi) \in [0,T] \times \partial\Omega$. Additionally, for $\psi \in C_0^{\frac{1+\alpha}{2};1+\alpha}([0,T]\times \partial\Omega)$ we define $$\begin{aligned} &W[\phi,\psi](t,\xi) := -\int_0^t\int_{\phi(\partial\Omega)}D_xS_n(t-\tau,\phi(\xi)-y)\cdot \nu_{\phi}(y) \psi \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau,\end{aligned}$$ for all $(t,\xi) \in [0,T] \times \partial\Omega$. In the expressions above, $\partial_{x_l}S_n$ and $D_xS_n$ denote the $x_l$-derivative and the gradient of $S_n$ with respect to the spatial variables, respectively. The functions $V[\phi,\mu]$, $V_l[\phi,\mu]$, $W^*[\phi,\mu]$, and $W[\phi,\psi]$ are the $\phi$-pullbacks of the single-layer potential, its $x_l$-derivative, normal derivative, and the double-layer potential, respectively. They are defined on the boundary $[0,T] \times \phi(\partial\Omega)$ and have densities given by $\mu \circ (\phi^T)^{(-1)}$ and $\psi \circ (\phi^T)^{(-1)}$. In [@DaLu23 Thm. 6.3], it has been proven that the operators $V[\phi,\cdot]$, $V_l[\phi,\cdot]$, $W^*[\phi,\cdot]$, and $W[\phi,\cdot]$ depend smoothly on the shape parameter $\phi$. Specifically, we have the following result: **Theorem 2**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. 
Then, the maps that take $\phi \in \mathcal{A}^{1,\alpha}_{\partial\Omega}$ to the following operators are all of class $C^{\infty}$:* - *$V[\phi,\cdot] \in \mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega) \right)$,* - *$V_l[\phi,\cdot] \in \mathcal{L} \left(C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega) \right)$ for all $l \in \{1,\ldots,n\}$,* - *$W^*[\phi,\cdot] \in \mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$,* - *$W[\phi,\cdot] \in \mathcal{L} \left( C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega) \right)$.* Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"} extends to the parabolic setting results that were previously known for layer potentials associated with elliptic equations. For example, Lanza de Cristoforis and Rossi [@LaRo04; @LaRo08] established such results for the Laplace and Helmholtz equations, and [@DaLa10] for general second-order elliptic equations. However, extending these results to the parabolic setting is not a trivial task. The main difficulty lies in the interaction between the time and space variables. Applying the strategy used in [@LaRo04] to the parabolic case only yields a regularity result for $C^2$ perturbations of the domain, falling short of the desired $C^{1,\alpha}$ setting. Another difference between the elliptic and parabolic cases is that in the elliptic scenario the layer potentials exhibit analytic dependence on the shape parameter $\phi$, while Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"} only guarantees that they are infinitely differentiable maps.
The reason for this lack of analyticity lies in the regularity of the fundamental solution $S_n$, which is $C^\infty$ but not real analytic over the entire space $\mathbb{R}^{1+n}\setminus\{(0,0)\}$ due to its non-analytic behavior at $t=0$. In contrast, the fundamental solution of the Laplace equation, as well as other constant coefficient elliptic operators, is analytic in $\mathbb{R}^n\setminus\{0\}$. As we shall see, such a difference implies a distinct behavior of the solutions to boundary value problems: analytic dependence on $\phi$ for the elliptic case *vs* $C^\infty$ dependence for the parabolic case. # Space-periodic layer heat potentials {#s:period} We now shift our focus to space-periodic layer heat potentials, where we replace the classical fundamental solution $S_n$ of the heat equation with its periodization $S_{q,n}$ (see [\[phinq\]](#phinq){reference-type="eqref" reference="phinq"}). We will start by introducing the definition of periodic layer potentials. Next, we will review some properties established in [@Lu18]. Finally, we will utilize Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"} to derive the corresponding regularity results for the $\phi$-pullback of periodic layer potentials. Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$ such that $\overline{\Omega} \subseteq Q$. 
For a density $\mu \in L^\infty\big([0,T] \times \partial\Omega\big)$, the $q$-periodic in space layer heat potentials are defined as $$\begin{aligned} v_q [\mu](t,x) := \int_{0}^{t} \int_{\partial \Omega} S_{q,n}(t-\tau,x-y) \mu(\tau, y)\,d\sigma_y d\tau \qquad \forall\,(t,x) \in [0, T] \times \mathbb{R}^n,\end{aligned}$$ and $$w_q[\mu](t,x) := -\int_{0}^t\int_{\partial\Omega} D_x S_{q,n}(t-\tau,x-y)\cdot \nu_\Omega(y) \mu(\tau,y)\,d\sigma_yd\tau \qquad \forall\,(t,x) \in [0,T]\times \mathbb{R}^n.$$ The functions $v_q[\mu]$ and $w_q[\mu]$ are called the $q$-periodic single- and double-layer heat potentials, respectively. Moreover, we set $$\begin{aligned} w^*_{q}[\mu](t,x) := \int_{0}^t\int_{\partial\Omega} D_x S_{q,n}(t-\tau,x-y)\cdot \nu_\Omega(x)\mu(\tau,y)\,d\sigma_yd\tau \qquad \forall\,(t,x) \in [0,T] \times \partial\Omega.\end{aligned}$$ The map $w^*_{q}[\mu]$ is related to the normal derivative of the $q$-periodic in space single-layer potential (see Theorem [Theorem 3](#thmsl){reference-type="ref" reference="thmsl"}). Periodic layer heat potentials enjoy properties similar to those of their standard counterparts. We collect them in the following two theorems. The proofs can be found in [@Lu18 Thms. 2, 3]. **Theorem 3**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$ such that $\overline{\Omega} \subseteq Q$. Then the following statements hold.* - *Let $\mu \in L^\infty([0,T] \times \partial\Omega)$. Then $v_q[\mu]$ is continuous, $q$-periodic in space and $v_q[\mu] \in C^\infty\big((0,T) \times (\mathbb{R}^n \setminus \partial\mathbb{S}[\Omega])\big)$. Moreover, $v_q[\mu]$ solves the heat equation in $(0,T]\times (\mathbb{R}^n \setminus \partial\mathbb{S}[\Omega])$.* - *Let $v_q^+[\mu]$ and $v_q^-[\mu]$ denote the restrictions of $v_q[\mu]$ to $[0,T] \times \overline{\mathbb{S}[\Omega]}$ and to $[0,T]\times \overline{\mathbb{S}[\Omega]^-}$, respectively.
The map from $C_0^{\frac{\alpha}{2}; \alpha}([0,T] \times \partial\Omega)$ to $C_{0,q}^{\frac{1+\alpha}{2}; 1+\alpha}\big([0,T] \times \overline{\mathbb{S}[\Omega]}\big)$ that takes $\mu$ to $v_q^+[\mu]$ is linear and continuous. Likewise, the map from $C_0^{\frac{\alpha}{2}; \alpha}([0,T] \times \partial\Omega)$ to $C_{0,q}^{\frac{1+\alpha}{2}; 1+\alpha}\big([0,T] \times \overline{\mathbb{S}[\Omega]^-}\big)$ that associates $\mu$ with $v_q^-[\mu]$ is also linear and continuous.* - *Let $\mu \in C_0^{\frac{\alpha}{2}; \alpha}([0,T] \times \partial\Omega)$ and $l \in \{1,\ldots,n\}$. Then the following jump relations hold: $$\begin{aligned} \frac{\partial}{\partial \nu_\Omega}v_q^\pm[\mu](t,x) = &\pm \frac{1}{2}\mu(t,x) +w_{q}^*[\mu](t,x),\\ \nonumber \partial_{x_l}v_q^\pm[\mu](t,x) = &\pm \frac{1}{2}\mu(t,x)\left(\nu_{\Omega}(x)\right)_l + \int_{0}^t \int_{\partial\Omega} \partial_{x_l} S_{q,n}(t-\tau,x-y)\mu(\tau,y)\, d\sigma_yd\tau, \nonumber\end{aligned}$$ for all $(t,x) \in [0,T] \times \partial\Omega$.* **Theorem 4**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$ such that $\overline{\Omega} \subseteq Q$. Then the following statements hold.* - *Let $\mu \in L^\infty([0,T] \times \partial\Omega)$. Then $w_q[\mu]$ is $q$-periodic in space, $w_q[\mu] \in C^\infty\big((0,T) \times (\mathbb{R}^n \setminus \partial\mathbb{S}[\Omega])\big)$, and $w_q[\mu]$ solves the heat equation in $(0,T] \times(\mathbb{R}^n \setminus \partial\mathbb{S}[\Omega])$.* - *Let $\mu \in C_0^{\frac{1+\alpha}{2}; 1+\alpha}([0,T] \times \partial\Omega)$.
Then the restriction $w_q[\mu]_{|[0,T] \times\mathbb{S}[\Omega]}$ can be extended uniquely to an element $w_q^+[\mu] \in C_{0,q}^{\frac{1+\alpha}{2};1+\alpha}\big([0,T]\times \overline{\mathbb{S}[\Omega]}\big)$ and the restriction $w_q[\mu]_{|[0,T] \times \mathbb{S}[\Omega]^-}$ can be extended uniquely to an element $w_q^-[\mu] \in C_{0,q}^{\frac{1+\alpha}{2};1+\alpha}\big([0,T] \times\overline{\mathbb{S}[\Omega]^-}\big)$. Moreover the following jump formulas hold: $$\begin{aligned} &w_q^\pm[\mu](t,x) = \mp \frac{1}{2} \mu(t,x) + w_q[\mu](t,x)\,, \\ &\frac{\partial}{\partial \nu_\Omega}w_q^+[\mu](t,x) - \frac{\partial}{\partial \nu_\Omega}w_q^-[\mu](t,x) = 0,\end{aligned}$$ for all $(t,x) \in [0,T] \times \partial\Omega$.* - *The map from $C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T] \times \partial\Omega)$ to $C_{0,q}^{\frac{1+\alpha}{2};1+\alpha}\big([0,T]\times \overline{\mathbb{S}[\Omega]}\big)$ that takes $\mu$ to the function $w_q^+[\mu]$ is linear and continuous. Likewise, the map from $C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T] \times \partial\Omega)$ to $C_{0,q}^{\frac{1+\alpha}{2};1+\alpha}\big([0,T]\times \overline{\mathbb{S}[\Omega]^-}\big)$ which takes $\mu$ to the function $w_q^-[\mu]$ is also linear and continuous.* The main idea in the proof of Theorems [Theorem 3](#thmsl){reference-type="ref" reference="thmsl"} and [Theorem 4](#thmdl){reference-type="ref" reference="thmdl"} revolves around representing periodic layer potentials as the sum of their non-periodic counterparts and a remainder, which is an integral operator with a nonsingular kernel. This is feasible because the map $$\label{def:R} R_{q,n}(t,x) := S_{q,n}(t,x) - S_n(t,x), \qquad\forall \,(t,x) \in ({\mathbb{R}}\times {\mathbb{R}}^{n})\setminus (\{0\} \times q \mathbb{Z}^n)$$ can be extended by continuity to $({\mathbb{R}}\times {\mathbb{R}}^{n})\setminus (\{0\} \times q (\mathbb{Z}^n \setminus\{0\}))$. 
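To see informally where this extension comes from, suppose (as a sketch, up to the precise normalization chosen in [\[phinq\]](#phinq){reference-type="eqref" reference="phinq"}) that $S_{q,n}$ is the standard periodization of $S_n$. Then, off the singular set, $$R_{q,n}(t,x) = S_{q,n}(t,x) - S_n(t,x) = \sum_{z \in \mathbb{Z}^n \setminus \{0\}} S_n(t,x+qz).$$ If $x \notin q(\mathbb{Z}^n \setminus \{0\})$, then $x+qz \neq 0$ for every $z \in \mathbb{Z}^n \setminus \{0\}$, and each summand $S_n(\cdot,x+qz)$ extends smoothly through $t=0$ (with value $0$ for $t \leq 0$), since all derivatives of the Gaussian kernel vanish as $t \to 0^+$ away from its spatial singularity; the Gaussian decay in $z$ then allows term-by-term differentiation of the series.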
Keeping the notation $R_{q,n}$ for this extension, we have that $$R_{q,n} \in C^\infty (({\mathbb{R}}\times {\mathbb{R}}^{n})\setminus (\{0\} \times q (\mathbb{Z}^n \setminus\{0\}))).$$ In other words, $R_{q,n}$ is smooth in a neighborhood of the origin $(0,0)$. A proof of this assertion can be found in [@Lu18 Thm. 1]. The same idea can be used to recover the periodic counterpart of Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"}. We first need to introduce the pull-back of the boundary integral operators associated with $q$-periodic layer heat potentials. Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$ such that both $\Omega$ and $\Omega^-$ are connected. Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$. For $\mu \in C_0^{\frac{\alpha}{2};\alpha}([0,T]\times \partial\Omega)$, we consider the operators $$\begin{aligned} &V_q[\phi,\mu](t,\xi) := \int_0^t\int_{\phi(\partial\Omega)} S_{q,n}(t-\tau,\phi(\xi)-y) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau \\ & V_{q,l}[\phi,\mu](t,\xi) := \int_0^t\int_{\phi(\partial\Omega)} \partial_{x_l}S_{q,n}(t-\tau,\phi(\xi)-y) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau \quad \forall l \in\{1,\ldots,n\}\\ &W^*_{q}[\phi,\mu](t,\xi) :=\int_0^t\int_{\phi(\partial\Omega)}D_xS_{q,n}(t-\tau,\phi(\xi)-y)\cdot \nu_{\phi}(\xi) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau,\end{aligned}$$ for all $(t,\xi) \in [0,T] \times \partial\Omega$. Also, for $\psi \in C_0^{\frac{1+\alpha}{2};1+\alpha}([0,T]\times \partial\Omega)$ we set $$\begin{aligned} &W_q[\phi,\psi](t,\xi) := -\int_0^t\int_{\phi(\partial\Omega)}D_xS_{q,n}(t-\tau,\phi(\xi)-y)\cdot \nu_{\phi}(y) \psi \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau,\end{aligned}$$ for all $(t,\xi) \in [0,T] \times \partial\Omega$. 
Similarly to the non-periodic scenario, the function $V_q[\phi,\mu]$ is the $\phi$-pullback of the $q$-periodic single-layer potential restricted to the boundary $[0,T] \times \phi(\partial\Omega)$, while $V_{q,l}[\phi,\mu]$ and $W^*_{q}[\phi,\mu]$ are related to its $x_l$-derivative and its normal derivative, respectively. The function $W_q[\phi,\psi]$ is instead related to the boundary behavior of the $q$-periodic double-layer potential. We are now ready to present the main result of this section, concerning the smoothness of the mappings that associate $\phi$ with $V_q[\phi,\cdot]$, $V_{q,l}[\phi,\cdot]$, $W^*_{q}[\phi,\cdot]$, and $W_{q}[\phi,\cdot]$. **Theorem 5**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. Then the maps that take $\phi \in \mathcal{A}^{1,\alpha}_{\partial\Omega,Q}$ to the following operators are all of class $C^\infty$:* - *$V_q[\phi,\cdot] \in \mathcal{L} \left(C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega)\right)$,* - *$V_{q,l}[\phi,\cdot] \in \mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$ for all $l \in \{1,\ldots,n\}$,* - *$W^*_{q}[\phi,\cdot] \in \mathcal{L}\left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$,* - *$W_q[\phi,\cdot] \in \mathcal{L} \left(C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega)\right)$.* *Proof.* We confine ourselves to proving the theorem for the map $\phi\mapsto V_q[\phi,\cdot]$ in point (i). The proof for the operators in (ii), (iii), and (iv) can be carried out by a straightforward adaptation of the argument presented below.
In these cases, we will use statements (ii), (iii), and (iv) of Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"}, analogously to how we will use statement (i) of the same Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"} in the forthcoming argument. As shown in [@Lu18 Thm. 1], the map $R_{q,n}$ defined in [\[def:R\]](#def:R){reference-type="eqref" reference="def:R"} is of class $C^\infty$ in the set $({\mathbb{R}}\times {\mathbb{R}}^{n})\setminus (\{0\} \times q (\mathbb{Z}^n \setminus\{0\}))$. In particular, $R_{q,n}$ is smooth in a neighborhood of $(0,0)\in \mathbb{R}\times \mathbb{R}^n$. Let $(\phi, \mu) \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$. Clearly, definition [\[def:R\]](#def:R){reference-type="eqref" reference="def:R"} implies that $$\begin{aligned} \label{thm:main1} V_q[\phi,\mu](t,\xi) = V[\phi,\mu](t,\xi) + \int_0^t\int_{\phi(\partial\Omega)}R_{q,n}(t-\tau,\phi(\xi)-y) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_yd\tau\end{aligned}$$ for all $(t,\xi) \in [0,T] \times \partial \Omega$. By Theorem [Theorem 2](#thm:clp){reference-type="ref" reference="thm:clp"} (i), the map that takes $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ to $$V[\phi,\cdot] \in \mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega) \right)$$ is of class $C^\infty$. We now consider the second term on the right-hand side of [\[thm:main1\]](#thm:main1){reference-type="eqref" reference="thm:main1"}. By Lemma [Lemma 1](#rajacon){reference-type="ref" reference="rajacon"} we have $$\begin{split} \int_0^t\int_{\phi(\partial\Omega)}&R_{q,n}(t-\tau,\phi(\xi)-y) \mu \circ (\phi^T)^{(-1)} (\tau,y) \,d\sigma_y d\tau \\ &= \int_0^t\int_{\partial\Omega}R_{q,n}(t-\tau,\phi(\xi)-\phi(\eta)) \mu (\tau,\eta) \tilde \sigma_n[\phi](\eta)\,d\sigma_\eta d\tau.
\end{split}$$ We note that $$\phi(\xi)-\phi(\eta) \notin q\mathbb{Z}^n \setminus \{0\}\qquad \forall\,(\xi,\eta) \in \partial \Omega \times \partial \Omega.$$ Indeed, if there were $(\xi,\eta) \in \partial \Omega \times \partial \Omega$ with $\phi(\xi)-\phi(\eta) \in q\mathbb{Z}^n\setminus\{0\}$, then we would have $\phi(\xi) \in \phi(\partial \Omega) + q\mathbb{Z}^n\setminus\{0\}$, which is impossible, since $\phi(\partial\Omega)$ is contained in the fundamental cell $Q$, while $Q \cap (Q+qz) = \emptyset$ for all $z \in \mathbb{Z}^n \setminus \{0\}$. Then, by Lemma [Lemma 1](#rajacon){reference-type="ref" reference="rajacon"} and by the results of [@DaLu23 Lemma A.2, Lemma A.3] on non-autonomous composition operators and on time-dependent integral operators with non-singular kernels, we deduce that the map from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$ to $C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega)$ that takes $(\phi,\mu)$ to the function $$K[\phi,\mu](t,\xi) := \int_0^t\int_{\partial\Omega}R_{q,n}(t-\tau,\phi(\xi)-\phi(\eta)) \mu (\tau,\eta) \tilde \sigma_n[\phi](\eta)\,d\sigma_\eta d\tau \qquad \forall (t,\xi) \in [0,T] \times \partial\Omega,$$ is of class $C^\infty$. It remains to show that $\phi\mapsto K[\phi,\cdot]$ is $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ to the operator space $$\mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega) \right)\,.$$ Given that $K[\phi,\mu]$ is linear and continuous with respect to the variable $\mu$, we have $$\label{thm:main2} K[\phi,\cdot]= d_{\mu} K[\phi,\mu] \qquad \forall (\phi,\mu) \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega),$$ where the term on the right-hand side is the partial Fréchet differential of $(\phi,\mu)\mapsto K[\phi,\mu]$ with respect to $\mu$, evaluated at the point $(\phi,\mu)$.
Because $(\phi,\mu)\mapsto K[\phi,\mu]$ is a map of class $C^\infty$, the map that takes $(\phi,\mu)$ to $d_{\mu} K[\phi,\mu]$ is also of class $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$ to the operator space $\mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega) \right)$. Hence, the map $(\phi,\mu) \mapsto K[\phi,\cdot]$ is of class $C^\infty$ by [\[thm:main2\]](#thm:main2){reference-type="eqref" reference="thm:main2"}, and, since it does not depend on $\mu$, we conclude that $\phi\mapsto K[\phi,\cdot]$ is $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ to the operator space $\mathcal{L} \left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega) \right)$. This proves the theorem for the map $\phi\mapsto V_q[\phi,\cdot]$ in point (i). ◻ It is worth recalling that a result similar to Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} was proven previously in [@LaMu11] for periodic layer potentials corresponding to a general class of second-order elliptic equations. Later, these findings were used to study the effect of perturbations on physical quantities relevant to materials science and fluid mechanics. For instance, references such as [@DaLuMuPu22; @LuMu20; @LuMuPu19] deal with the effective conductivity of periodic composites and the longitudinal flow of fluids through periodic structures. # An ideal transmission problem {#s:trans} The theorem presented in the preceding section, Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}, serves as a toolkit to analyze the solution to boundary value problems for the heat equation in spatially periodic domains.
The primary goal of using this theorem is to demonstrate the smooth dependence of such solutions on shape perturbations. As emphasized in the introduction, the feasibility of employing Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} for this purpose relies on the applicability of boundary integral operators and layer potentials to derive solutions for boundary value problems. As an illustrative application, we consider a periodic ideal transmission problem. We will demonstrate that its solution depends smoothly on the shape of the transmission interface, the boundary data, and the conductivity parameters. We now introduce this problem precisely. Consider $\alpha \in (0,1)$, $T>0$, and a bounded open subset $\Omega$ of $\mathbb{R}^n$ of class $C^{1,\alpha}$ such that both $\Omega$ and its exterior $\Omega^-$ are connected. Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$. We fix $\lambda^+,\lambda^- > 0$ and choose $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$ and $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$.
With this setup, we proceed to consider the following ideal transmission problem: $$\begin{aligned} \label{periodicidtran} \left\{ \begin{array}{ll} \partial_t u^+ - \Delta u^+ = 0 &\mbox{in } (0, T]\times \mathbb{S}[\phi], \\ \partial_t u^- - \Delta u^- = 0 &\mbox{in } (0, T]\times \mathbb{S}[\phi]^-, \\ u^+(t,x+qz) = u^+(t,x) &\forall\,(t,x) \in [0,T] \times \overline{ \mathbb{S}[\phi]},\, \forall\, z\in \mathbb{Z}^n, \\ u^-(t,x+qz) = u^-(t,x) &\forall\,(t,x) \in [0,T] \times \overline{\mathbb{S}[\phi]^-}, \, \forall\,z\in \mathbb{Z}^n, \\ u^+-u^-= f \circ (\phi^T)^{(-1)}&\mbox{on } [0, T]\times \phi(\partial \Omega),\\ \lambda^-\frac{\partial} {\partial \nu_{\phi}} u^- - \lambda^+\frac{\partial} {\partial \nu_{\phi}} u^+ = g\circ (\phi^T)^{(-1)} &\mbox{on } [0, T]\times \phi(\partial \Omega),\\ u^+(0,\cdot) = 0 &\mbox{in } \overline{ \mathbb{S}[\phi]},\\ u^-(0,\cdot) = 0 &\mbox{in } \overline{ \mathbb{S}[\phi]^-}. \end{array} \right.\end{aligned}$$ From the physics viewpoint, Problem [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} models the heat diffusion within a periodic composite material consisting of two distinct phases: $\mathbb{S}[\phi]$ and $\mathbb{S}[\phi]^-$, where the former represents a periodic arrangement of inclusions with thermal conductivity $\lambda^+$ and the latter is a matrix material possessing a thermal conductivity of $\lambda^-$. At the transmission interface, we impose a non-homogeneous ideal contact condition (or perfect contact condition): when $f$ and $g$ vanish, this condition dictates that both the temperature field and the heat flux are continuous across the interface. In [@LuMu18 Thm. 4] it is proved that the solution $(u^+,u^-)$ of [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} exists, is unique, and belongs to a suitable product of Schauder spaces.
Moreover, this solution can be expressed as a sum of periodic single-layer heat potentials, and the densities of these potentials are solutions to a particular system of boundary integral equations. To be precise, the following result holds: **Theorem 6**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$. Let $\lambda^+,\lambda^- > 0$ and $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$, $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$. Then problem [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} has a unique solution $$(u^+,u^-) \in C^{\frac{1+\alpha}{2};1+\alpha}_{0,q}([0,T]\times\overline{\mathbb{S}[\phi]}) \times C^{\frac{1+\alpha}{2};1+\alpha}_{0,q}([0,T]\times\overline{\mathbb{S}[\phi]^-}).$$ Moreover, $$\begin{aligned} u^+ = v^+_q[\mu^+], \qquad u^-=v^-_q[\mu^-],\end{aligned}$$ where $(\mu^+,\mu^-) \in C^{\frac{\alpha}{2};\alpha}_{0}([0,T] \times \phi(\partial\Omega)) \times C^{\frac{\alpha}{2};\alpha}_{0}([0,T] \times \phi(\partial\Omega))$ is the unique solution of the system of integral equations $$\label{sys1} \begin{cases} v_q^+[\mu^+]_{|[0,T]\times\phi(\partial\Omega)} - v_q^-[\mu^-]_{|[0,T]\times\phi(\partial\Omega)} =f \circ (\phi^T)^{(-1)},\\ \lambda^-\left(-\frac{1}{2}\mu^- + w^*_{q}[\mu^-]\right) - \lambda^+\left(\frac{1}{2}\mu^+ + w^*_{q}[\mu^+]\right)=g \circ (\phi^T)^{(-1)}. \end{cases}$$* Keeping in mind Theorem [Theorem 6](#thm:uniqsol){reference-type="ref" reference="thm:uniqsol"}, we will use the notation $$(u^+[\phi,\lambda^+,\lambda^-,f,g], u^-[\phi,\lambda^+,\lambda^-,f,g])$$ to denote the unique solution of problem [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"}. 
Moreover, thanks to Theorem [Theorem 6](#thm:uniqsol){reference-type="ref" reference="thm:uniqsol"}, we have a representation of the unique solution of the transmission problem as a pair of single-layer potentials with densities that solve the system of boundary integral equations in [\[sys1\]](#sys1){reference-type="eqref" reference="sys1"}. Then, to understand how the solution depends upon variations of $\phi$, $\lambda^+$, $\lambda^-$, $f$, and $g$, we plan to first understand how the densities depend on such parameters. To maintain consistency within the functional spaces, we have to perform a $\phi$-pullback of the integral equations in [\[sys1\]](#sys1){reference-type="eqref" reference="sys1"}. This transformation results in a system of $\phi$-dependent integral equations defined on the fixed domain $[0,T] \times \partial \Omega$. This is achieved through a change of variables applied to [\[sys1\]](#sys1){reference-type="eqref" reference="sys1"}, leading to the following proposition: **Proposition 7**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$. Let $\lambda^+,\lambda^- > 0$ and $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$, $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$. 
Then the unique solution $$(u^+[\phi,\lambda^+,\lambda^-,f,g], u^-[\phi,\lambda^+,\lambda^-,f,g]) \in C^{\frac{1+\alpha}{2};1+\alpha}_{0,q}([0,T]\times\overline{\mathbb{S}[\phi]}) \times C^{\frac{1+\alpha}{2};1+\alpha}_{0,q}([0,T]\times\overline{\mathbb{S}[\phi]^-})$$ of problem [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} can be written as $$u^+[\phi,\lambda^+,\lambda^-,f,g] = v_q^+[\rho^+ \circ (\phi^T)^{(-1)}] \qquad u^-[\phi,\lambda^+,\lambda^-,f,g]=v^-_q[\rho^- \circ (\phi^T)^{(-1)}],$$ where $(\rho^+,\rho^-) \in C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega) \times C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$ is the unique solution of the system of integral equations $$\label{sys2} \begin{cases} V_q[\phi,\rho^+] - V_q[\phi,\rho^-] =f , \\ \lambda^-\left(-\frac{1}{2}\rho^- + W_{q}^*[\phi,\rho^-]\right) - \lambda^+\left(\frac{1}{2}\rho^+ + W_{q}^*[\phi,\rho^+]\right)=g . \end{cases}$$* Our next step is to understand the dependence of the solution $(\rho^+,\rho^-)$ of [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"} upon $(\phi,\lambda^+,\lambda^-,f,g)$. To achieve this, we first observe that system [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"} can be equivalently reformulated as a single integral equation. In fact, by the linearity of the single-layer potential $V_q[\phi,\cdot]$, we can rewrite the first equation in [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"} as $$\label{eq:vrhof} V_q[\phi,\rho^+ -\rho^-] = f.$$ Then, by leveraging the invertibility of the single-layer potential (cf. [@LuMu18 Thm. 2]) and using equality [\[eq:vrhof\]](#eq:vrhof){reference-type="eqref" reference="eq:vrhof"}, we can express either $\rho^+$ or $\rho^-$ in terms of the other. Substituting this expression into the second equation of [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"}, we arrive at the following proposition: **Proposition 8**. 
*Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. Take $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$. Assume $\lambda^+,\lambda^- > 0$ and take $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$ and $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$. Define the contrast parameter $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-]$ by $$\label{eq def lambda} \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] := \frac{\lambda^- - \lambda^+}{\lambda^- + \lambda^+}\,.$$ If $(\rho^+,\rho^-) \in C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega) \times C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$ is the unique solution of the system of integral equations [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"}, then $\rho^-$ is the unique solution in $C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$ of the integral equation $$\label{int eq rho-} \rho^- - 2\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W_{q}^*[\phi,\rho^-] = -\frac{2}{\lambda^- + \lambda^+} \left( \lambda^+ \left( \frac{1}{2}I + W_{q}^*[\phi,\cdot]\right)\left( V_q[\phi,\cdot]^{(-1)}(f)\right) + g \right)$$ and $\rho^+$ is given by $$\label{eq rho+} \rho^+ = \rho^- + V_q[\phi,\cdot]^{(-1)}(f).$$* *Proof.* As already noted, equation [\[eq rho+\]](#eq rho+){reference-type="eqref" reference="eq rho+"} follows by the first equation of [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"} and by the linearity and invertibility of the operator $V_q[\phi,\cdot]$ from $C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$ to $C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega)$ (cf. [@LuMu18 Thm. 2]). 
Then, substituting [\[eq rho+\]](#eq rho+){reference-type="eqref" reference="eq rho+"} into the second equation in [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"} and using the linearity of $W_{q}^*[\phi,\cdot]$, we obtain $$\begin{aligned} \lambda^-\left(-\frac{1}{2}\rho^- + W_{q}^*[\phi,\rho^-]\right) &- \lambda^+ \left( \frac{1}{2}\rho^- + \frac{1}{2} V_q[\phi,\cdot]^{(-1)}(f) \right) \\ &- \lambda^+ \left( W_{q}^*[\phi,\rho^-] + W_{q}^*\left[\phi,V_q[\phi,\cdot]^{(-1)}(f)\right]\right) = g,\end{aligned}$$ which, after a rearrangement, yields $$\begin{aligned} (\lambda^- + \lambda^+) \left(- \frac{1}{2}\rho^-\right) &+ (\lambda^- - \lambda^+)W_{q}^*[\phi,\rho^-] \\ &= \lambda^+ \left(\frac{1}{2}I + W_{q}^*[\phi,\cdot]\right)\left( V_q[\phi,\cdot]^{(-1)}(f)\right) + g.\end{aligned}$$ Multiplying both sides of the above equation by $-\frac{2}{\lambda^- + \lambda^+}$, we obtain [\[int eq rho-\]](#int eq rho-){reference-type="eqref" reference="int eq rho-"}, which has a unique solution by [@LuMu18 Lem. 2]; indeed, since $\lambda^+,\lambda^->0$, the definition of $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-]$ in [\[eq def lambda\]](#eq def lambda){reference-type="eqref" reference="eq def lambda"} implies that $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] \in (-1,1)$. ◻ In the proof of Proposition [Proposition 8](#prop int eq){reference-type="ref" reference="prop int eq"}, we utilized the invertibility of the operator $I - 2\gamma W_{q}^*[\phi,\cdot]$ for $\gamma\in (-1,1)$, a fact established in [@LuMu18 Lem. 2]. Even for $\gamma=1$, this operator remains invertible, as follows from [@Lu18 Lem. 6]. In the subsequent lemma, we demonstrate the invertibility of this operator for $\gamma=-1$ as well, thereby establishing its invertibility for all $\gamma\in[-1,1]$. **Lemma 9**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ and $\gamma \in [-1,1]$.
Then the operator from $C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$ into itself that maps $\rho$ to the function $\rho -2\gamma W_{q}^*[\phi,\rho]$ is a linear homeomorphism.* *Proof.* As previously noted, the assertion for $\gamma\in (-1,1)$ and $\gamma=1$ follows by [@LuMu18 Lem. 2] and [@Lu18 Lem. 6], respectively (note that for $\gamma \in (-1,1)$, there exist $\gamma^+,\gamma^->0$ such that $\gamma = (\gamma^- - \gamma^+)/(\gamma^- + \gamma^+)$). Thus, the task at hand is to demonstrate the statement for $\gamma=-1$. Due to the compactness of $W^*_q[\phi,\cdot]$ (cf. [@LuMu18 Thm. 1]), the operator $I -2\gamma W_{q}^*[\phi,\cdot]$ is a Fredholm operator of index zero. Consequently, to demonstrate that it is a linear homeomorphism, it suffices to prove its injectivity. So, let $\rho \in C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$ be such that $$\rho + 2W_{q}^*[\phi,\rho] = 0 \quad \text{on } [0,T] \times \partial\Omega.$$ By Theorem [Theorem 3](#thmsl){reference-type="ref" reference="thmsl"}, the single-layer potential $v_q^+[\rho \circ (\phi^T)^{(-1)}]$ belongs to $C_{0,q}^{\frac{1+\alpha}{2}; 1+\alpha}\big([0,T] \times \overline{\mathbb{S}[\phi]}\big)$ and is a solution of the following $q$-periodic homogeneous interior Neumann problem (note that, by the jump relations of Theorem [Theorem 3](#thmsl){reference-type="ref" reference="thmsl"}, the $\phi$-pullback of its interior normal derivative equals $\frac{1}{2}\rho + W_{q}^*[\phi,\rho] = \frac{1}{2}\big(\rho + 2W_{q}^*[\phi,\rho]\big) = 0$): $$\label{q periodic neumann problem: u} \begin{cases} \partial_t u - \Delta u = 0 &\mbox{in } (0, T]\times \mathbb{S}[\phi], \\ u(t,x+qz) = u(t,x) &\forall\,(t,x) \in [0,T] \times \overline{ \mathbb{S}[\phi]},\, \forall\,z\in \mathbb{Z}^n, \\ \frac{\partial} {\partial \nu_{\phi}} u = 0 &\mbox{on } [0, T]\times \phi(\partial \Omega),\\ u(0,\cdot) = 0 &\mbox{in } \overline{ \mathbb{S}[\phi]}\, . \end{cases}$$ We proceed to prove that $u=0$ is the only solution of problem [\[q periodic neumann problem: u\]](#q periodic neumann problem: u){reference-type="eqref" reference="q periodic neumann problem: u"} by a standard energy argument.
It will follow that $v_q^+[\rho \circ (\phi^T)^{(-1)}]=0$ and, by the invertibility of the restriction to $[0,T]\times \phi(\partial \Omega)$ of the single-layer potential (cf. [@LuMu18 Thm. 2]), we will conclude that $\rho \circ (\phi^T)^{(-1)}=0$, and thus that $\rho=0$. So, let $u \in C_{0,q}^{\frac{1+\alpha}{2};1+\alpha}([0,T]\times\overline{\mathbb{S}[\phi]})$ be a solution of [\[q periodic neumann problem: u\]](#q periodic neumann problem: u){reference-type="eqref" reference="q periodic neumann problem: u"}. Let $$e(t) := \int_{\Omega} (u(t,y))^2 \,dy \qquad \forall t \in [0,T].$$ Given that $u$ is uniformly continuous on $[0,T]\times\overline{\mathbb{S}[\phi]}$, we can see that $t\mapsto e(t)$ is continuous on $[0,T]$. Furthermore, we can demonstrate that $e$ belongs to $C^1([0,T])$. A detailed proof is provided in [@Lu18 Lem. 5 and Prop. 2], and it is based on classical differentiation theorems for integrals depending on a parameter, along with a specific approximation of the support of integration (see Verchota [@Ve84 Thm. 1.12, p. 581]). Following the argument in the same reference ([@Lu18 Lem. 5 and Prop. 2]), we can also verify that $$\frac{d}{dt} e(t) = -2 \int_{\Omega} |Du(t,y)|^2 \,dy + 2 \int_{\partial\Omega} u(t,y) \frac{\partial} {\partial \nu_\Omega} u(t,y) \,d\sigma_y = -2 \int_{\Omega} |Du(t,y)|^2 \,dy \quad \forall t \in (0,T),$$ where the integral on $\partial \Omega$ vanishes thanks to the boundary condition in [\[q periodic neumann problem: u\]](#q periodic neumann problem: u){reference-type="eqref" reference="q periodic neumann problem: u"}. Hence $\frac{d}{dt} e \leq 0$ in $(0,T)$. Since $e \geq 0$ and $e(0)=0$, we conclude that $e(t) = 0$ for all $t \in [0,T]$. Accordingly, $u = 0$ on $[0, T] \times \overline{\Omega}$, and the $q$-periodicity of $u$ implies $u=0$ on $[0,T]\times\overline{\mathbb{S}[\phi]}$. 
Hence $$v_q^+[\rho \circ (\phi^T)^{(-1)}]=0\qquad \mbox{in } [0, T]\times \overline{\mathbb{S}[\phi]}\,,$$ a fact that, as explained above, concludes the proof of the statement. ◻ Taking inspiration from Proposition [Proposition 8](#prop int eq){reference-type="ref" reference="prop int eq"} and Lemma [Lemma 9](#lem: intervibility of K_gamma){reference-type="ref" reference="lem: intervibility of K_gamma"}, we define the map $$\Lambda : {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times (0,+\infty)^2 \times C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega) \times C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega) \to C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$$ given by $$\begin{aligned} \Lambda[\phi, \lambda^+,\lambda^-, f,g] :=& \left(I -2 \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W_{q}^*[\phi,\cdot]\right)^{(-1)} \\ & \left(-\frac{2}{\lambda^- + \lambda^+} \left( \lambda^+ \left( \frac{1}{2}I + W_{q}^*[\phi,\cdot]\right)\left( V_q[\phi,\cdot]^{(-1)}(f)\right) + g \right) \right)\, , \end{aligned}$$ with $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-]$ as in [\[eq def lambda\]](#eq def lambda){reference-type="eqref" reference="eq def lambda"}. Then the solution $\rho^-[\phi,\lambda^+,\lambda^-,f,g]$ to the integral equation in [\[int eq rho-\]](#int eq rho-){reference-type="eqref" reference="int eq rho-"} is given by $$\label{sys3} \rho^-[\phi,\lambda^+,\lambda^-,f,g] = \Lambda[\phi, \lambda^+,\lambda^-, f,g],$$ and if we take $$\label{sys4} \rho^+[\phi,\lambda^+,\lambda^-,f,g] = \rho^-[\phi,\lambda^+,\lambda^-,f,g] + V_q[\phi,\cdot]^{(-1)}(f),$$ we see, by Proposition [Proposition 8](#prop int eq){reference-type="ref" reference="prop int eq"}, that the pair $$\left(\rho^+[\phi,\lambda^+,\lambda^-,f,g],\rho^-[\phi,\lambda^+,\lambda^-,f,g]\right)$$ is the unique solution of [\[sys2\]](#sys2){reference-type="eqref" reference="sys2"}. 
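The solution scheme above can be checked directly in a finite-dimensional toy model. In the following Python sketch, the matrices `V` and `W` are hypothetical stand-ins for the operators $V_q[\phi,\cdot]$ and $W_{q}^*[\phi,\cdot]$ (they share only the algebra of the system, none of the analytic structure), the vectors `f` and `g` are stand-ins for the data, and the first equation of the system is taken in the form $V_q[\phi,\rho^+]-V_q[\phi,\rho^-]=f$, as suggested by [\[eq rho+\]](#eq rho+){reference-type="eqref" reference="eq rho+"}. The sketch assembles $\rho^-$ and $\rho^+$ as in [\[sys3\]](#sys3){reference-type="eqref" reference="sys3"} and [\[sys4\]](#sys4){reference-type="eqref" reference="sys4"} and verifies that the resulting pair solves the system.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
I = np.eye(m)

# Hypothetical matrix stand-ins for the layer-potential operators:
# V plays the role of V_q[phi, .], W that of W*_q[phi, .].
V = I + 0.1 * rng.standard_normal((m, m))  # invertible "single layer"
W = 0.1 * rng.standard_normal((m, m))      # small-norm "adjoint double layer"

f = rng.standard_normal(m)                 # stand-in for the datum f
g = rng.standard_normal(m)                 # stand-in for the datum g

lam_p, lam_m = 2.0, 3.0                    # lambda^+, lambda^-
lam_c = (lam_m - lam_p) / (lam_m + lam_p)  # contrast parameter, lies in (-1, 1)

# rho^-_0: the right-hand side obtained after the rearrangement
rho0 = -2.0 / (lam_m + lam_p) * (
    lam_p * (0.5 * I + W) @ np.linalg.solve(V, f) + g
)
# rho^-: unique solution of (I - 2 lam_c W) rho^- = rho^-_0
rho_minus = np.linalg.solve(I - 2.0 * lam_c * W, rho0)
# rho^+ = rho^- + V^{-1} f
rho_plus = rho_minus + np.linalg.solve(V, f)

# The pair (rho^+, rho^-) solves the finite-dimensional analogue of the system:
assert np.allclose(V @ (rho_plus - rho_minus), f)
residual = (lam_m * (-0.5 * rho_minus + W @ rho_minus)
            - lam_p * (0.5 * rho_plus + W @ rho_plus))
assert np.allclose(residual, g)
```

Solvability of the equation for `rho_minus` here plays the role of the invertibility of $I-2\gamma W_{q}^*[\phi,\cdot]$ for $\gamma\in(-1,1)$ used in the proof of Proposition [Proposition 8](#prop int eq){reference-type="ref" reference="prop int eq"}: in the toy model $\|2\lambda_\mathrm{\bf c}W\|<1$ for the chosen scales.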
Our next objective is to establish a regularity result for the map that takes $(\phi,\lambda^+,\lambda^-,\!f,\!g)$ to $\left(\rho^+[\phi,\lambda^+,\lambda^-,\!f,\!g],\rho^-[\phi,\lambda^+,\lambda^-,\!f,\!g]\right)$, which stems from the smooth dependence of layer potentials on perturbations of the support of integration, established in Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}, coupled with the analyticity of the inversion map in Banach algebras. Subsequently, the regularity of the mapping $(\phi,\lambda^+,\lambda^-,\!f,\!g)\mapsto \left(\rho^+[\phi,\lambda^+,\lambda^-,\!f,\!g],\rho^-[\phi,\lambda^+,\lambda^-,\!f,\!g]\right)$ will translate into a regularity result for the mapping that relates $(\phi,\lambda^+,\lambda^-,\!f,\!g)$ with the solution of [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"}. **Proposition 10**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}.
Then the map $$(\phi,\lambda^+,\lambda^-,\!f,\!g)\mapsto \left(\rho^+[\phi,\lambda^+,\lambda^-,\!f,\!g],\rho^-[\phi,\lambda^+,\lambda^-,\!f,\!g]\right)$$ is of class $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times (0,+\infty)^2 \times C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega) \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial \Omega)$ to $C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)\times C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$.* *Proof.* By Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}, the map that takes $\phi$ to $V_q[\phi,\cdot]$ is of class $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ to $\mathcal{L} \left(C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega)\right)$, and the map that takes $(\phi, \gamma)$ to $I -2 \gamma W_{q}^*[\phi,\cdot]$ is of class $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}\times (-1,1)$ to $\mathcal{L}\left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$. Since the map from $(0,+\infty)^2$ to $(-1,1)$ that takes $(\lambda^+,\lambda^-)$ to $\frac{\lambda^- - \lambda^+}{\lambda^- + \lambda^+}$ is also of class $C^\infty$, we deduce that the map from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}\times(0,+\infty)^2$ to $\mathcal{L}\left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$ that takes a triple $(\phi,\lambda^+,\lambda^-)$ to $$I -2 \frac{\lambda^- - \lambda^+}{\lambda^- + \lambda^+} W_{q}^*[\phi,\cdot]$$ is of class $C^\infty$. Now, the map that takes a linear invertible operator to its inverse is real analytic (cf. Hille and Phillips [@HiPh57 Thms. 4.3.2 and 4.3.4]), and therefore of class $C^\infty$. So, by the invertibility of the periodic single layer of [@LuMu18 Thm. 
2] and by Lemma [Lemma 9](#lem: intervibility of K_gamma){reference-type="ref" reference="lem: intervibility of K_gamma"} we deduce that the map from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ to $\mathcal{L} \left(C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$ that takes $\phi$ to $V_q[\phi,\cdot]^{(-1)}$ and the map from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times (0,+\infty)^2$ to $\mathcal{L} \left(C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$ that takes $(\phi,\lambda^+,\lambda^-)$ to $\left(I -2 \frac{\lambda^- - \lambda^+}{\lambda^- + \lambda^+} W_{q}^*[\phi,\cdot]\right)^{(-1)}$, are both of class $C^\infty$. Given the bilinearity and continuity of the evaluation map $(L,v)\mapsto L[v]$, which acts from $$\mathcal{L} \left(C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)\times C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial\Omega)$$ to $C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$, as well as from $$\mathcal{L}\left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)\times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$$ to $C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$, we can deduce that the mapping $(\phi,f)\mapsto V_q[\phi,\cdot]^{(-1)}(f)$ is of class $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$ to $C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$ and, similarly, the map $$(\phi,\lambda^+,f)\mapsto\lambda^+ \left( \frac{1}{2}I + W_{q}^*[\phi,\cdot]\right)\left( V_q[\phi,\cdot]^{(-1)}(f)\right)$$ is of class $C^\infty$ from ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}\times (0,+\infty) \times 
C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$ to $C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)$. By once again relying on the bilinearity and continuity of the evaluation map, we ultimately deduce that the map taking $(\phi,\lambda^+,\lambda^-,f,g)$ to $$\left(I -2 \frac{\lambda^- - \lambda^+}{\lambda^- + \lambda^+} W_{q}^*[\phi,\cdot]\right)^{(-1)} \left(-\frac{2}{\lambda^- + \lambda^+} \left( \lambda^+ \left( \frac{1}{2}I + W_{q}^*[\phi,\cdot]\right)\left( V_q[\phi,\cdot]^{(-1)}(f)\right) + g \right) \right)$$ is of class $C^\infty$, where the domain is ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times (0,+\infty)^2\times C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega) \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial \Omega)$, and the codomain is $C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)$. Hence, the smoothness of the map $(\phi,\lambda^+,\lambda^-,f,g)\mapsto \rho^-[\phi,\lambda^+,\lambda^-,f,g]$ follows directly from [\[sys3\]](#sys3){reference-type="eqref" reference="sys3"} and the definition of $\Lambda$. The smoothness of $(\phi,\lambda^+,\lambda^-,f,g)\mapsto \rho^+[\phi,\lambda^+,\lambda^-,f,g]$ is a consequence of [\[sys4\]](#sys4){reference-type="eqref" reference="sys4"}. ◻ Proposition [Proposition 7](#periodiccp){reference-type="ref" reference="periodiccp"} provides a representation formula for the solution of problem [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} in terms of periodic single-layer potentials, while Proposition [Proposition 10](#prop:aninteq){reference-type="ref" reference="prop:aninteq"} demonstrates that the corresponding densities exhibit smooth dependence on the shape, boundary data, and conductivity parameters.
Specifically, we have the expressions $$\label{repfu1} \begin{split} u^+[\phi,\lambda^+,\lambda^-,f,g]&(t,x) \\ = \int_0^{t} \int_{\partial\Omega}&S_{q,n}(t-\tau, x - \phi(y)) \rho^+[\phi,\lambda^+,\lambda^-,f,g](\tau,y) \tilde \sigma_n[\phi](y)\, d\sigma_yd\tau, \end{split}$$ for all $(t,x) \in [0,T] \times \mathbb{S}[\phi]$, and $$\label{repfu2} \begin{split} u^-[\phi,\lambda^+,\lambda^-,f,g]&(t,x) \\ = \int_0^{t} \int_{\partial\Omega}&S_{q,n}(t-\tau, x - \phi(y)) \rho^-[\phi,\lambda^+,\lambda^-,f,g](\tau,y) \tilde \sigma_n[\phi](y)\, d\sigma_yd\tau, \end{split}$$ for all $(t,x) \in [0,T] \times \mathbb{S}[\phi]^-$, where $\rho^+[\phi,\lambda^+,\lambda^-,f,g]$ and $\rho^-[\phi,\lambda^+,\lambda^-,f,g]$ are maps of class $C^\infty$ with respect to the variables $(\phi,\lambda^+,\lambda^-,f,g)$. We are now ready to state the main result of this section, on the smooth dependence of the solution of [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} on $(\phi,\lambda^+,\lambda^-,f,g)$. **Theorem 11**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}. Let $\Omega^i$ and $\Omega^e$ be two bounded open subsets of $\mathbb{R}^n$.
Let ${\mathcal{B}}_{\partial\Omega,Q}^{1,\alpha}$ be the open subset of ${\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$ consisting of those diffeomorphisms $\phi$ such that $$\overline{\Omega^i} \subseteq \mathbb{S}[\phi], \quad \overline{\Omega^e} \subseteq \mathbb{S}[\phi]^-.$$ Then, the map $$(\phi,\lambda^+,\lambda^-,f,g) \mapsto \left(u^+[\phi,\lambda^+,\lambda^-,f,g]_{|[0,T]\times\overline{\Omega^i}},u^-[\phi,\lambda^+,\lambda^-,f,g]_{|[0,T]\times\overline{\Omega^e}}\right)$$ is of class $C^\infty$ from ${\mathcal{B}}_{\partial\Omega,Q}^{1,\alpha} \times (0,+\infty)^2 \times C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega) \times C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial \Omega)$ to $C_0^{\frac{1+\alpha}{2},1+\alpha}([0,T]\times \overline{\Omega^i}) \times C_0^{\frac{1+\alpha}{2},1+\alpha}([0,T]\times \overline{\Omega^e})$.* *Proof.* Without loss of generality we can assume that $\Omega^i$ and $\Omega^e$ are of class $C^{1,\alpha}$. The maps that associate a diffeomorphism $\phi$ with the functions $$\overline{\Omega^i} \times \partial \Omega \ni (x,y) \mapsto x -\phi(y)\in \mathbb{R}^n$$ and $$\overline{\Omega^e} \times \partial \Omega \ni (x,y) \mapsto x -\phi(y)\in \mathbb{R}^n$$ are both affine and continuous (and thus, smooth), from ${\mathcal{B}}_{\partial\Omega,Q}^{1,\alpha}$ to $C^{1,\alpha}(\overline{\Omega^i} \times \partial \Omega, \mathbb{R}^n \setminus q\mathbb{Z}^n)$ and $C^{1,\alpha}(\overline{\Omega^e} \times \partial \Omega, \mathbb{R}^n \setminus q\mathbb{Z}^n)$, respectively. By arguing as in the proof of [@DaLu23 Lem. A.1 and Lem. 
A.3] regarding the regularity of superposition operators, we deduce that the maps that take $\phi$ to the functions $$S_{q,n}(t, x-\phi(y))\qquad\forall\,(t,x,y)\in[0,T]\times\overline{\Omega^i} \times \partial \Omega$$ and $$S_{q,n}(t, x-\phi(y))\qquad\forall\,(t,x,y)\in[0,T]\times\overline{\Omega^e} \times \partial \Omega$$ are of class $C^{\infty}$ from ${\mathcal{B}}_{\partial\Omega,Q}^{1,\alpha}$ to $C_0^{\frac{1+\alpha}{2};1+\alpha}([0,T]\times (\overline{\Omega^i} \times \partial \Omega))$ and to $C_0^{\frac{1+\alpha}{2};1+\alpha}([0,T]\times (\overline{\Omega^e} \times \partial \Omega))$, respectively. Indeed, we note that the results of [@DaLu23 Lem. A.1 and Lem. A.3] remain valid in the case of a manifold with a boundary. Then, the statement follows by the representation formulas [\[repfu1\]](#repfu1){reference-type="eqref" reference="repfu1"}, [\[repfu2\]](#repfu2){reference-type="eqref" reference="repfu2"} for $u^\pm[\phi,\lambda^+,\lambda^-,f,g]$, by Proposition [Proposition 10](#prop:aninteq){reference-type="ref" reference="prop:aninteq"} on the smoothness of $\rho^\pm[\phi,\lambda^+,\lambda^-,f,g]$, by Lemma [Lemma 1](#rajacon){reference-type="ref" reference="rajacon"} on the analyticity of $\tilde \sigma_n[\phi]$, and by the regularity result on integral operators with non-singular kernels of [@DaLu23 Lem. A.2], which continues to apply even in the case of a manifold with a boundary.
◻ # An expansion result by Neumann-type series {#s:neu} If we consider fixed values of $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$, $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$, and $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$, a combination of Proposition [Proposition 8](#prop int eq){reference-type="ref" reference="prop int eq"} and a modified version of Proposition [Proposition 10](#prop:aninteq){reference-type="ref" reference="prop:aninteq"} allows us to establish that the solution to problem [\[periodicidtran\]](#periodicidtran){reference-type="eqref" reference="periodicidtran"} exhibits analytic dependence on the contrast parameter $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-]$. Consequently, we can express the densities as convergent power series. Alternatively, this result can be achieved more directly by employing the Neumann series Theorem. To be more precise, we can demonstrate that locally, around a fixed pair of parameters $(\lambda^+_0,\lambda^-_0) \in (0,+\infty)^2$, the densities can be expressed as a Neumann-type series. The terms of this series involve the difference of the contrast parameters $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-]$ and $\lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]$, as well as iterated compositions of the operator $$\left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot] \right)^{(-1)} \circ W^\ast_q[\phi,\cdot].$$ Naturally, once we establish this result for the densities, by utilizing the representation formula of the solution in terms of space-periodic layer potentials, we can deduce a similar result for the solution. The detailed calculation is left to the zealous reader. We will use the following notation: Given a Banach space $X$ and a bounded linear map $T:X \to X$, we define $$T^j := \underbrace{ T \circ \dots \circ T}_{j-\text{times}} \quad \text{for every } j \in \mathbb{N},$$ with the convention that $T^0= I$.
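Before stating the theorem, the mechanism just described can be illustrated numerically in finite dimensions. In the Python sketch below, the matrix `W` is a hypothetical stand-in for the operator $W^*_q[\phi,\cdot]$, and the scalars `lam_c0`, `lam_c` stand for the base and the perturbed contrast parameters; the inverse of $I-2\lambda_\mathrm{\bf c}W$ is recovered from a truncated series whose $j$-th term is $(\lambda_\mathrm{\bf c}-\lambda_{\mathrm{\bf c},0})^j\, 2^j\big((I-2\lambda_{\mathrm{\bf c},0}W)^{-1}W\big)^j$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
I = np.eye(m)

# Hypothetical matrix stand-in for the compact operator W*_q[phi, .].
W = 0.1 * rng.standard_normal((m, m))

lam_c0 = 0.30   # base contrast parameter
lam_c = 0.35    # perturbed contrast parameter, close to lam_c0

A0_inv = np.linalg.inv(I - 2.0 * lam_c0 * W)  # (I - 2 lam_c0 W)^{-1}
B = A0_inv @ W                                # building block of the series

# Convergence radius: reciprocal of twice the norm of (I - 2 lam_c0 W)^{-1} W
eps = 1.0 / (2.0 * np.linalg.norm(B, 2))
assert abs(lam_c - lam_c0) < eps

# Truncated series: sum_j (lam_c - lam_c0)^j K_j with K_j = 2^j B^j
S = np.zeros((m, m))
Kj = I.copy()                                 # K_0 = I
for j in range(60):
    S += (lam_c - lam_c0) ** j * Kj
    Kj = 2.0 * Kj @ B                         # K_{j+1} = 2 K_j B

# The series composed with (I - 2 lam_c0 W)^{-1} reproduces (I - 2 lam_c W)^{-1}
direct = np.linalg.inv(I - 2.0 * lam_c * W)
assert np.allclose(direct, S @ A0_inv)
```

The choice of `eps` mirrors the convergence radius used in the proof of Theorem 12 below, and the final identity is the matrix version of the factorization of the inverse of $I - 2 \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W^*_q[\phi,\cdot]$ through the base operator.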
In the theorem below, we fix $\phi\in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$, $\lambda^+_0,\lambda^-_0 > 0$, $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$, and $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$ and we show a representation formula for $\rho^-[\phi,\lambda^+,\lambda^-,f,g]$ as a convergent power series depending on the difference of the contrast parameters $\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-]$ and $\lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]$. For the sake of exposition, for every $j \in {\mathbb{N}}$, we define the map $$\mathcal{K}_j: {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha} \times (0,+\infty)^2 \to \mathcal{L}(C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega),C_0^{\frac{\alpha}{2};\alpha}([0,T] \times \partial\Omega)),$$ given by $$\label{eq K_j} \mathcal{K}_j[\phi,\lambda^+_0,\lambda^-_0] := 2^j \left(\left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot] \right)^{(-1)} W^\ast_q[\phi,\cdot] \right)^j.$$ Then the following holds. **Theorem 12**. *Let $\alpha\in(0,1)$ and $T>0$. Let $\Omega$ be as in [\[cond Omega\]](#cond Omega){reference-type="eqref" reference="cond Omega"}.
Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$, $\lambda^+_0,\lambda^-_0 > 0$, $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$, and $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$ be fixed.* *Then, there exists a constant $\varepsilon \in (0,+\infty)$ such that the following holds: For every $(\lambda^+,\lambda^-) \in (0,+\infty)^2$ such that $$\label{thm condition epsilon} |\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]| < \varepsilon,$$ with $\lambda_\mathrm{\bf c}[\cdot,\cdot]$ defined by [\[eq def lambda\]](#eq def lambda){reference-type="eqref" reference="eq def lambda"}, we have $$\label{thm eq rho- series} \begin{split} \rho^-[\phi,\lambda^+,\lambda^-,f,g] =& \left( \sum_{j=0}^{+\infty} (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0])^j \mathcal{K}_j[\phi,\lambda^+_0,\lambda^-_0] \right) \\ & \circ \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot] \right)^{(-1)} (\rho^-_0[\phi,\lambda^+,\lambda^-,f,g]), \end{split}$$ where the series $$\sum_{j=0}^{+\infty} \zeta^j \mathcal{K}_j[\phi,\lambda^+_0,\lambda^-_0]$$ converges normally in $\mathcal{L}(C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega))$ for $|\zeta|<\varepsilon$ and where $$\label{eq rho-0} \rho^-_0[\phi,\lambda^+,\lambda^-,f,g] := -\frac{2}{\lambda^- + \lambda^+} \left( \lambda^+ \left( \frac{1}{2}I + W_{q}^*[\phi,\cdot]\right)\left( V_q[\phi,\cdot]^{(-1)}(f)\right) + g \right).$$* *Proof.* Let $\phi \in {\mathcal{A}}_{\partial\Omega,Q}^{1,\alpha}$, $\lambda^+_0,\lambda^-_0 > 0$, $f \in C^{\frac{1+\alpha}{2};1+\alpha}_0([0,T]\times\partial \Omega)$, and $g \in C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$.
We first notice that, by the definition of $\rho^-_0$ in [\[eq rho-0\]](#eq rho-0){reference-type="eqref" reference="eq rho-0"}, we can rewrite [\[int eq rho-\]](#int eq rho-){reference-type="eqref" reference="int eq rho-"} as $$\label{eq I -2lambda W* rho- = rho-0} \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W^*_q[\phi,\cdot] \right) \rho^-[\phi,\lambda^+,\lambda^-,f,g] = \rho^-_0[\phi,\lambda^+,\lambda^-,f,g] \, ,$$ for every $(\lambda^+,\lambda^-) \in (0,+\infty)^2$. We now consider the operator on the left-hand side of [\[eq I -2lambda W\* rho- = rho-0\]](#eq I -2lambda W* rho- = rho-0){reference-type="eqref" reference="eq I -2lambda W* rho- = rho-0"}, which is $I - 2 \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W^*_q[\phi,\cdot]: C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega) \to C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$. By adding and subtracting the term $2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]$ and factoring out the operator $I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]$, we can rewrite this operator as follows: $$\label{identity for I-2lambda_c W^*} \begin{split} I - 2& \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W^*_q[\phi,\cdot] \\ =& I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot] - 2 (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]) W^*_q[\phi,\cdot] \\ =&\left( I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right) \\ &\circ \left(I - 2 (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]) \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right)^{(-1)} W^*_q[\phi,\cdot]\right). 
\end{split}$$ In particular, by [\[identity for I-2lambda_c W\^\*\]](#identity for I-2lambda_c W^*){reference-type="eqref" reference="identity for I-2lambda_c W^*"}, we deduce that $$\label{identity for (I-2lambda_c W^*)^(-1)} \begin{split} &\left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] W^*_q[\phi,\cdot]\right)^{(-1)} \\ & \qquad=\left(I - 2 (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]) \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right)^{(-1)} W^*_q[\phi,\cdot]\right)^{(-1)} \\ &\qquad\circ \left( I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right)^{(-1)}. \end{split}$$ Then, if we choose $\varepsilon>0$ small enough, for example $$\label{eq varepsilon} \varepsilon := \frac{1}{2 \left\| \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right)^{(-1)} W^*_q[\phi,\cdot]\right\|_{\mathcal{L}\left( C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)}},$$ we have that, for every $(\lambda^+,\lambda^-) \in (0,+\infty)^2$ such that [\[thm condition epsilon\]](#thm condition epsilon){reference-type="eqref" reference="thm condition epsilon"} holds, the inverse of the operator $$I - 2 (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]) \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right)^{(-1)} W^*_q[\phi,\cdot]$$ from $C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega)$ into itself can be written as a normally convergent Neumann series in $\mathcal{L}\left(C^{\frac{\alpha}{2};\alpha}_0([0,T]\times \partial\Omega), C^{\frac{\alpha}{2};\alpha}_0([0,T]\times\partial\Omega)\right)$. 
In fact, by [\[thm condition epsilon\]](#thm condition epsilon){reference-type="eqref" reference="thm condition epsilon"} and [\[eq varepsilon\]](#eq varepsilon){reference-type="eqref" reference="eq varepsilon"}, and by the Neumann series Theorem (see, e.g., Davies [@Dav07 Prob. 1.2.8] or Rudin [@Rud91 Thm. 10.7]), we have that $$\label{eq neumann series operator} \begin{split} \left(I - 2 (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0]) \left(I - 2 \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0] W^*_q[\phi,\cdot]\right)^{(-1)} W^*_q[\phi,\cdot]\right)^{(-1)} \\ = \sum_{j=0}^{+\infty} (\lambda_\mathrm{\bf c}[\lambda^+,\lambda^-] - \lambda_\mathrm{\bf c}[\lambda^+_0,\lambda^-_0])^j \mathcal{K}_j[\phi,\lambda^+_0,\lambda^-_0]\, , \end{split}$$ where for each $j \in \mathbb{N}$ the operator $\mathcal{K}_j[\cdot,\cdot,\cdot]$ is defined by [\[eq K_j\]](#eq K_j){reference-type="eqref" reference="eq K_j"}. Finally, [\[eq I -2lambda W\* rho- = rho-0\]](#eq I -2lambda W* rho- = rho-0){reference-type="eqref" reference="eq I -2lambda W* rho- = rho-0"}, [\[identity for (I-2lambda_c W\^\*)\^(-1)\]](#identity for (I-2lambda_c W^*)^(-1)){reference-type="eqref" reference="identity for (I-2lambda_c W^*)^(-1)"} and [\[eq neumann series operator\]](#eq neumann series operator){reference-type="eqref" reference="eq neumann series operator"} yield the validity of [\[thm eq rho- series\]](#thm eq rho- series){reference-type="eqref" reference="thm eq rho- series"}. ◻ ## Acknowledgment {#acknowledgment .unnumbered} The authors are members of the "Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni" (GNAMPA) of the "Istituto Nazionale di Alta Matematica" (INdAM). M.D.R., P.L. and P.M. acknowledge the support of the "INdAM GNAMPA Project" codice CUP_E53C22001930001 "Operatori differenziali e integrali in geometria spettrale". P.M. and R.M.
also acknowledge the support of the SPIN Project "DOMain perturbation problems and INteractions Of scales - DOMINO" of the Ca' Foscari University of Venice. P.M. also acknowledges the support from EU through the H2020-MSCA-RISE-2020 project EffectFact, Grant agreement ID: 101008140. Part of this work was done while R.M. was visiting C3M - Centre for Computational Continuum Mechanics (Slovenia). R.M. wishes to thank C3M for the kind hospitality. S. Bernstein, S. Ebert, and R. Sören Kraußhar, On the diffusion equation and diffusion wavelets on flat cylinders and the $n$-torus, *Math. Methods Appl. Sci.* **34** (2011), no. 4, 428--441. R. Chapko, R. Kress, and J. R. Yoon, On the numerical solution of an inverse boundary value problem for the heat equation, *Inverse Problems* **14** (1998), 853--867. R. Chapko, R. Kress, and J. R. Yoon, An inverse boundary value problem for the heat equation: the Neumann condition, *Inverse Problems* **15** (1999), 1033--1046. A. Charalambopoulos, On the Fréchet differentiability of boundary integral operators in the inverse elastic scattering problem, *Inverse Problems* **11** (1995), 1137--1161. A. Cohen, C. Schwab, and J. Zech, Shape holomorphy of the stationary Navier-Stokes equations, *SIAM J. Math. Anal.* **50** (2018), no. 2, 1720--1752. M. Costabel and F. Le Louër, Shape derivatives of boundary integral operators in electromagnetic scattering. Part I: Shape differentiability of pseudo-homogeneous boundary integral operators, *Integral Equations Oper. Theory* **72** (2012), 509--535. M. Costabel and F. Le Louër, Shape derivatives of boundary integral operators in electromagnetic scattering. Part II: Application to scattering by a homogeneous dielectric obstacle, *Integral Equations Oper. Theory* **73** (2012), 17--48. M. Dalla Riva and M. Lanza de Cristoforis, A perturbation result for the layer potentials of general second order differential operators with constant coefficients, *J. Appl. Funct. Anal.* **5** (2010), no.
1, 10--30. M. Dalla Riva, M. Lanza de Cristoforis, and P. Musolino, *Singularly Perturbed Boundary Value Problems: A Functional Analytic Approach*, Springer Nature, Cham, 2021. M. Dalla Riva and P. Luzzini, Regularity of layer heat potentials upon perturbation of the space support in the optimal Hölder setting. *Differ. Integral Equ.* **36** (2023), no. 11--12, 971--1003. M. Dalla Riva, P. Luzzini, and P. Musolino, Shape analyticity and singular perturbations for layer potential operators, *ESAIM Math. Model. Numer. Anal.* **56** (2022), no. 6, 1889--1910. M. Dalla Riva, P. Luzzini, P. Musolino, and R. Pukhtaievych, Dependence of effective properties upon regular perturbations, In I. Andrianov, S. Gluzman, V. Mityushev, Editors, *Mechanics and Physics of Structured Media. Asymptotic and Integral Equations Methods of Leonid Filshtinsky*. Academic Press, Elsevier, London, 2022, pp. 271--301. E.B. Davies, *Linear operators and their spectra*, Vol. **106**, Cambridge University Press, 2007. K. Deimling, *Nonlinear Functional Analysis*. Springer-Verlag, Berlin, 1985. J. Dölz and F. Henríquez, Parametric Shape Holomorphy of Boundary Integral Operators with Applications, Preprint, (2023), arXiv:2305.19853. F. Feppon and H. Ammari, High order topological asymptotics: reconciling layer potentials and compound asymptotic expansions, *Multiscale Model. Simul.* **20** (2022), no. 3, 957--1001. D. Gilbarg and N.S. Trudinger, *Elliptic partial differential equations of second order.* Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. H. Haddar and R. Kress, On the Fréchet derivative for obstacle scattering with an impedance boundary condition, *SIAM J. Appl. Math.* **65** (2004), no. 1, 194--208. F. Henríquez and C. Schwab, Shape holomorphy of the Calderón projector for the Laplacian in $\mathbb{R}^2$, *Integral Equations Operator Theory* **93** (2021), no. 4, Paper No. 43, 40 pp. A. Henrot and M. Pierre, *Variation et optimisation de formes*. Vol.
--- abstract: | We introduce the concept of a Dedekind superring and study some of its properties. We also define invertible supermodules, fractional superideals, the Picard group of a superring, and the ideal class group of a superring.\ [Key words:]{.smallcaps} Dedekind superring, invertible supermodule, fractional superideal, integrally closed superring. address: Instituto de Matemáticas, FCEyN, Universidad de Antioquia, 50010 Medellín, Colombia author: - Pedro Rizzo - Joel Torres del Valle - Alexander Torres-Gomez title: Dedekind superrings --- # Introduction The concept of superring, and especially superalgebra, has been extensively studied by the physics and mathematics community. Superalgebras extend beyond the familiar boundaries of commutative algebras, introducing a new layer of complexity by incorporating both commuting and anticommuting elements. With their roots in theoretical physics, superalgebras have transcended their origins, permeating various branches of mathematics and physics. More specifically, a superalgebra is an algebra over a field or commutative ring that is equipped with a $\mathds{Z}_2$-grading. Similarly, a superring is a $\mathds{Z}_2$-graded ring with a supercommutative product on homogeneous elements. The study of superrings is an endeavor that draws from a spectrum of mathematical disciplines, including abstract algebra, category theory, and representation theory. The concepts of ideals and supermodules, or modules over superrings, assume new dimensions in the realm of superrings, offering insights into the behavior of anticommuting elements and their interactions. An important aspect of superrings is that they form a relatively tractable class of noncommutative rings, since the multiplication is better handled on homogeneous elements.
As is well known, noncommutative rings exhibit greater complexity than commutative rings, leading to a more limited understanding of their structure, properties, and behavior. Extensive efforts have been devoted to extending results from commutative ring theory to the noncommutative realm. Thus, several of the definitions and properties in the theory of superrings could be used as a guiding principle to build the corresponding notions in noncommutative rings. The notion of Krull dimension in commutative algebra provides a rigorous definition of the dimension of algebraic varieties and schemes. In search of a rigorous algebraic definition of the dimension of superschemes, Masuoka and Zubkov [@MASUOKA2020106245] introduced the concept of Krull superdimension of a superring and used it to define regularity for a superring. This led naturally to the definition of a smooth superscheme. In the case of a regular superring with Krull superdimension $1\mid d$, it is natural to ask whether it is possible to introduce a notion of Dedekind superring. Dedekind rings play a central role in algebraic number theory and the theory of nonsingular algebraic curves. A class of important examples of Dedekind domains arises in the study of curves using algebraic geometry methods. The coordinate ring of a nonsingular algebraic curve is a Dedekind ring, meaning that the ring of all regular functions on the curve is a Dedekind ring. Motivated by these generalizations and interested in developing the theory of abstract nonsingular algebraic supercurves, we define and study Dedekind superrings. We discuss properties of Dedekind superrings that are analogous to those of Dedekind rings. For example, we prove that every nonzero ideal of a Dedekind superring has a unique prime factorization. In summary, this article explores the concept of Dedekind superrings, a new type of algebraic structure that extends the classical notion of Dedekind rings.
Indeed, we generalize some properties of Dedekind rings to the broader framework of superrings. To achieve this, we discuss some fundamental ideas related to invertible supermodules and integral elements, all within the context of superrings. The paper is organized as follows. Section [2](#Sec:Basic:Def){reference-type="ref" reference="Sec:Basic:Def"} summarizes the general notions of superrings and supermodules that will be used throughout this paper. In particular, it covers prime ideals associated with a supermodule and the Krull superdimension of a superring. While many of the results in this section have simple proofs, similar to those of their counterparts in commutative algebra, we have included them here because they are not widely known in the literature on supercommutative algebra. Section [3](#Sec:Invertibibility){reference-type="ref" reference="Sec:Invertibibility"} introduces the concepts of invertible supermodule and fractional superideal, studies their basic properties, and constructs the Picard and Cartier groups of a Noetherian superring. Section [4](#Sec:Projective:Supermodules){reference-type="ref" reference="Sec:Projective:Supermodules"} studies projective supermodules and compares them to invertible supermodules. This topic is covered in more detail in the work of Morye et al. [@morye2022notes], so our treatment here is brief and only includes what is necessary for this paper. Section [5](#Sec:Dedekind){reference-type="ref" reference="Sec:Dedekind"} defines Dedekind superrings. First, we provide a definition for an integrally closed superring, motivated by [@atterton_1972]. Then, we prove Theorem 5.10, which states that every nonzero ideal of a Dedekind superring admits a unique prime factorization. In particular, we relate our concept of Dedekind superring to the definition of regularity from [@MASUOKA2020106245]. Section [6](#comments){reference-type="ref" reference="comments"} discusses possible new directions for research and raises some open questions.
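The unique factorization announced above can already be seen in the simplest classical Dedekind ring, $\mathds{Z}$, where factoring the ideal $(n)$ into prime ideals $(p)^{e}$ amounts to factoring the integer $n$. A minimal Python sketch of this purely even baseline (the function name `factor_ideal` is ours and purely illustrative, not notation from this paper):

```python
# Toy illustration of the classical Dedekind property in R = Z:
# every nonzero ideal (n), n >= 2, factors uniquely as a product of
# prime ideals (p)^e, mirroring the prime factorization of n.
# Pure trial division; `factor_ideal` is an illustrative name.

def factor_ideal(n):
    """Return {p: e} such that (n) = prod of (p)^e in Z, for n >= 2."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:  # leftover cofactor is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

assert factor_ideal(360) == {2: 3, 3: 2, 5: 1}  # (360) = (2)^3 (3)^2 (5)
```

Theorem 5.10 below asserts the analogous factorization for nonzero ideals of a Dedekind superring.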
# Preliminaries {#Sec:Basic:Def} In this section we introduce some basic notation and conventions, which will be used throughout the paper. Here, $\mathds{Z}_2$ represents the group of integers modulo 2. For any integer $i\in \mathds{Z}$, we denote by $\overline{i}$ its class in $\mathds{Z}_2$. Any ring $R$ in this paper is assumed to be unitary and non-trivial (i.e., $1\neq 0$). We also assume that $2$ is a non-zerodivisor in $R$. ## Superrings **Definition 1**. A $\mathds{Z}_2$-graded unitary ring $R=R_{\overline{0}}\oplus R_{\overline{1}}$ is called a *superring*. Let $R$ be a superring. We define the *parity* of $x\in R$ as $$|x|:=\begin{cases} 0& \text{ if }x\in R_{\overline{0}},\\ 1& \text{ if }x\in R_{\overline{1}}. \end{cases}$$ The set $h(R):=R_{\overline{0}}\cup R_{\overline{1}}$ is called the set of *homogeneous elements of $R$* and each $x\in h(R)$ is said to be *homogeneous*. A homogeneous nonzero element $x$ is called *even* if $x\in R_{\overline{0}}$ and *odd* if $x\in R_{\overline{1}}$. Any superring $R$ in this work will be *supercommutative*, i.e., $$\label{Supercommutativity} xy=(-1)^{|x||y|}yx,\quad \text{ for all }\quad x, y\in h(R).$$ Note that from [\[Supercommutativity\]](#Supercommutativity){reference-type="eqref" reference="Supercommutativity"}, it immediately follows that $R_{\overline{0}}$ is a commutative ring. **Definition 2**. Let $R$ and $R'$ be superrings. A *morphism* $\phi:R\longrightarrow R'$ is a ring homomorphism *preserving the parity*, that is, such that $\phi(R_i)\subseteq R'_i$ for all $i\in\mathds{Z}_2$. A morphism $\phi:R\longrightarrow R'$ is an *isomorphism* if it is bijective. In this case, we say that $R$ and $R'$ are *isomorphic superrings*, which we denote by $R\cong R'$. **Definition 3**. Let $R$ be a superring. 1. A *superideal* of $R$ is an ideal $\mathfrak{a}$ of $R$ which admits the decomposition $\mathfrak{a}=(\mathfrak{a}\cap R_{\overline{0}})\oplus(\mathfrak{a}\cap R_{\overline{1}})$. 2.
An ideal $\mathfrak{p}$ of $R$ is *prime* (resp. *maximal*) if $R/\mathfrak{p}$ is an integral domain (resp. a field). **Remark 4**. Let $R$ be a superring. 1. If $\mathfrak{a}$ is a superideal of $R$ we denote $\mathfrak{a}_i:=\mathfrak{a}\cap R_i$, for each $i\in\mathds{Z}_2$. 2. Any superideal $\mathfrak{a}$ of $R$ is a two-sided ideal. 3. If $\mathfrak{a}$ is a superideal of $R$, it is easy to show that the quotient ring $R/\mathfrak{a}=(R/\mathfrak{a})_{\overline{0}}\oplus(R/\mathfrak{a})_{\overline{1}}$ becomes a superring with $\mathds{Z}_2$-grading given by $(R_{\overline{0}}/\mathfrak{a}_{\overline{0}})\oplus(R_{\overline{1}}/\mathfrak{a}_{\overline{1}})$. 4. The definition of prime ideal is equivalent to requiring that if $x, y$ are elements (resp. homogeneous elements) in $R$ whose product is in $\mathfrak{p}$, then one of them is in $\mathfrak{p}$ (see [@westrathesis Lemma 4.1.2]). In other words, any prime ideal is completely prime. 5. Any prime ideal of $R$ is a superideal of the form $\mathfrak{p}=\mathfrak{p}_{\overline{0}}\oplus R_{\overline{1}}$, where $\mathfrak{p}_{\overline{0}}$ is prime in $R_{\overline{0}}$. Similarly, any maximal ideal of $R$ is a superideal of the form $\mathfrak{m}=\mathfrak{m}_{\overline{0}}\oplus R_{\overline{1}}$, where $\mathfrak{m}_{\overline{0}}$ is a maximal ideal of $R_{\overline{0}}$ (see [@westrathesis Lemma 4.1.9]). **Definition 5**. Let $R$ be a superring. - The ideal $\mathfrak{j}_R=R\cdot R_{\overline{1}}=R_{\overline{1}}^2\oplus R_{\overline{1}}$ is called the *canonical superideal of $R$*. - The *superreduced* of $R$ is the commutative ring $\overline{R}=R/\mathfrak{j}_R\cong R_{\overline{0}}/R_{\overline{1}}^2$. - $R$ is called a *superdomain* if $\overline{R}$ is a domain or, equivalently, $\mathfrak{j}_R$ is a prime ideal. - $R$ is called a *superfield* if $\overline{R}$ is a field or, equivalently, $\mathfrak{j}_R$ is a maximal ideal. **Definition 6**.
A superring $R$ is called *Noetherian* if $R$ satisfies one (and hence all) of the following equivalent conditions (for a proof see [@westrathesis Proposition 3.3.1]): - $R$ has the ascending chain condition on superideals. - Any superideal $\mathfrak{a}$ of $R$ is *finitely generated*, that is, there exist finitely many homogeneous elements $a_1,\ldots, a_n$ in $\mathfrak{a}$ such that $\mathfrak{a}=Ra_1+\cdots+Ra_n$. - Any nonempty collection of superideals of $R$ has a maximal element. An important characterization of Noetherianity is the following (see [@MASUOKA2020106245 Lemma 1.4]). **Proposition 7**. *Let $R$ be a superring. The following conditions are equivalent.* - *$R$ is Noetherian.* - *$R_{\overline{0}}$ is a Noetherian ring and $R_{\overline{1}}$ is a finitely generated $R_{\overline{0}}-$module. $\square$* In analogy with the commutative construction, the localization of a superring is as follows. Let $R$ be a superring and $S\subseteq R_{\overline{0}}$ a multiplicative set. *The localization $S^{-1}R$ of $R$ at $S$* is defined as the superring $$S^{-1}R:=(S^{-1}R_{\overline{0}})\oplus(S^{-1}R_{\overline{1}}),$$ where $R_{\overline{1}}$ is regarded as an $R_{\overline{0}}-$module. As usual, an element in $S^{-1}R$ is denoted by $x/y$ or $y^{-1}x$, where $x\in R$ and $y\in S$. If $S:=R-\mathfrak{p}$, where $\mathfrak{p}$ is a prime ideal, $S^{-1}R$ is often written as $R_\mathfrak{p}$. If $S$ is the set of non-zerodivisors of $R$, $S^{-1}R$ is usually denoted by $K(R)$ and is called *the total superring of fractions of $R$*. Let $\lambda_R^S: R\longrightarrow S^{-1}R,\displaystyle \,x\longmapsto\frac{x}{1}$ be the *localization morphism*. It is easy to check that the pair $(S^{-1}R, \lambda_R^S)$ satisfies the universal property of localization. **Proposition 8**. *Let $R$ be a superring and $S, T$ multiplicative sets in $R_{\overline{0}}$.* 1. *If $U:=\lambda_R^S(T)$, then $U^{-1}(S^{-1}R) \cong (TS)^{-1}R$.* 2.
*If $\mathfrak{p}$ is a prime ideal of $R$ with $\mathfrak{p}\cap S=\varnothing$, then $(S^{-1}R)_{S^{-1}\mathfrak{p}} \cong R_\mathfrak{p}$.* *Proof.* - As in the commutative case (see [@bourbaki1998commutative Proposition 2.3.7]), using the universal property of the localization, we obtain a (unique) isomorphism $\phi:(TS)^{-1}R\longrightarrow U^{-1}(S^{-1}R)$ such that the diagram $$\xymatrix{ R\ar[rr]^{\lambda_R^S}\ar[dd]_{\lambda_R^{TS}}&&S^{-1}R\ar[dd]^{\lambda_{S^{-1}R}^{U}}\\&&\\(TS)^{-1}R\ar@{..>}[rr]_{\exists\,\text{!}\,\phi}&&U^{-1}(S^{-1}R) }$$ commutes. - This follows from i) by taking $T=R-\mathfrak{p}$.  ◻ **Definition 9**. A superring $R$ is said to be *local* if it has a unique maximal ideal. The following proposition is proved in [@westrathesis Proposition 5.1.9]. **Proposition 10**. *Let $R$ be a superring and $\mathfrak{p}$ a prime ideal of $R$. Then, $(R_\mathfrak{p}, R_\mathfrak{p}\mathfrak{p})$ is a local superring. $\square$* ## Supermodules Let $\underline{\mathrm{Hom}}_R(M,N)$ be the set of all homomorphisms (of $R$-modules) from $M$ to $N$. This set is an $R-$supermodule with components $$\underline{\mathrm{Hom}}_R(M,N)_{i}:=\left\{\psi\in \underline{\mathrm{Hom}}_R(M,N)\mid\,\psi(M_{j})\subseteq N_{i+j},\,j\in\mathds{Z}_2\right\}\text{ for all }i\in\mathds{Z}_2,$$ where the left action of $R$ is given by $(a\psi)(m)=a\psi(m)$ for all $a\in h(R)$ and $m\in h(M)$. It is easy to see that $\underline{\mathrm{Hom}}_R(M,N)_{\overline{0}}=\text{Hom}_R(M,N)$. For a further discussion on this see [@morye2022notes Lemma 2.9] and [@westrathesis Chapter 6]. **Definition 11**. An $R-$supermodule $M$ is *finitely generated* if, as an $R-$module, it is generated by finitely many homogeneous elements. We denote by $\text{smod}_R$ the full subcategory of the category $\text{sMod}_R$ of $R-$supermodules whose objects are the finitely generated supermodules. An important endofunctor $\Pi:\text{sMod}_R\longrightarrow \text{sMod}_R$ (resp.
$\Pi:\text{smod}_R\longrightarrow \text{smod}_R$) is the *parity swapping functor*, defined on objects as the functor that takes $M$ and sends it to $\Pi M=(\Pi M)_{\overline{0}}\oplus(\Pi M)_{\overline{1}}$, where $(\Pi M)_{\overline{0}}=M_{\overline{1}}$ and $(\Pi M)_{\overline{1}}=M_{\overline{0}}$. On morphisms, the functor takes $\psi:M\longrightarrow N$ and sends it to $\Pi \psi:\Pi M\longrightarrow\Pi N$, which is defined by $\psi$ itself. For more details see [@carmeli2011mathematical p. 2] and [@westrathesis Subsection 6.1.2]. **Definition 12**. A morphism $\psi:M\longrightarrow N$ is called an *isomorphism* if it is bijective. We say that $M$ and $N$ are *isomorphic* if there exists an isomorphism between them, and we denote this by $M\simeq N$. If $M$ and $N$ are $R-$supermodules, it can be proved that $M\oplus N$ and $M\otimes_R N$ are $R-$supermodules. Likewise, if $M$ and $N$ are $R-$subsupermodules of $P$, then $M+N$ and $M\cap N$ are $R-$subsupermodules of $P$. Moreover, if $N\subseteq M$ is an $R-$subsupermodule of $M$, then $M/N$ is an $R-$supermodule as well. For example, we define the *superreduced* of $M$ as the $R-$supermodule $\overline{M}=M/\mathfrak{j}_R M$, where $\mathfrak{j}_R M$ is an example of an $R-$subsupermodule of $M$ of the type $\mathfrak{a}M$, that is, the *product* of $M$ and the superideal $\mathfrak{a}$ of $R$. **Proposition 13**. *If $M, N$ and $P$ are $R-$supermodules, then* - *$M\simeq M\otimes_R R$ and $M\simeq R\otimes_R M$.* - *$M\otimes_R(N\otimes_R P)\simeq(M\otimes_R N)\otimes_ R P$.* - *$M\otimes_R N\simeq N\otimes_R M$.* - *$M\otimes_R(N\oplus P)\simeq (M\otimes_R N)\oplus (M\otimes_R P)$.* - *$\overline{M}\simeq \overline{R}\otimes_R M$.* - *$\underline{\mathrm{Hom}}_R(R,M)\simeq M$.* *Proof.* See [@bartocci2012geometry Proposition 1.4] and [@westrathesis Section 3.2].
◻ It is possible to show that the category $\text{sMod}_R$ is a strict monoidal (abelian) category whose tensor product is the usual tensor product $\otimes_R$ of supermodules and whose unit is given by $R$. It is worth mentioning that the functor $-\otimes_R M$ is the left-adjoint of the functor $\underline{\mathrm{Hom}}_R(M,-)$ (see [@etingof Chapter 1] and [@westrathesis Chapter 6] for the details). **Definition 14**. Let $N, M$ and $P$ be $R-$supermodules. The short exact sequence $$\label{eq:ses} \xymatrix{0\ar[r]&N\ar[r]_{\phi}&M\ar@/_1pc/[l]_{\theta}\ar[r]^{\psi}&P\ar[r]\ar@/^1pc/[l]^{\xi} &0}$$ is *split* if there exists a morphism $\xi:P\longrightarrow M$ such that $\psi\circ\xi=\mathrm{id}_P$. As in the purely even case, this is easily seen to be equivalent to having a morphism $\theta:M\longrightarrow N$ such that $\phi\circ \theta=\mathrm{id}_N$. **Proposition 15**. *If we have a split exact sequence as in [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"}, then $M \simeq N\oplus P$ and vice versa.* *Proof.* If we have a morphism $\xi:P\longrightarrow M$ such that $\psi\circ\xi=\mathrm{id}_P$, we construct an isomorphism $\zeta: N\oplus P\longrightarrow M$, $(n, h)\longmapsto\phi(n)+\xi(h)$. If we have a morphism $\theta:M\longrightarrow N$ such that $\phi\circ\theta=\mathrm{id}_N$, we can define an isomorphism $\gamma:M\longrightarrow N\oplus P$, $m\longmapsto(\theta(m), \psi(m))$. Finally, if there is an isomorphism $\alpha:M\longrightarrow N\oplus P$, we can use it to set up a splitting morphism $\theta:M\longrightarrow N$ that sends $m\in M$ to the first component of $\alpha(m)$. ◻ Let $M$ be an $R-$supermodule and $S\subseteq R_{\overline{0}}$ a multiplicative set. The *localization of $M$ at $S$* is defined to be the $S^{-1}R-$supermodule $$S^{-1}M=(S^{-1}M_{\overline{0}})\oplus(S^{-1}M_{\overline{1}}).$$ Each element in $S^{-1}M$ is denoted by $x/s$ or $s^{-1}x$, where $x\in M$ and $s\in S$.
In the case $S=R-\mathfrak{p}$, where $\mathfrak{p}$ is a prime ideal of $R$, $S^{-1}M$ is denoted by $M_\mathfrak{p}$. Let $\lambda_M^S:M\longrightarrow S^{-1}M, m\longmapsto\displaystyle\frac{m}{1}$, be the *localization morphism*. It can be easily proved that the pair $(S^{-1}M, \lambda_M^S)$ satisfies the universal property of localization. **Proposition 16**. *Let $R$ be a superring and $S$ a multiplicative set in $R_{\overline{0}}$. Consider $\mathfrak{a}$ and $\mathfrak{b}$ superideals of $R$ and $R-$supermodules $M$, $N$ and $P$.* 1. *$S^{-1}(M\otimes_R N) \simeq S^{-1}M\otimes_{S^{-1}R}S^{-1}N\simeq M\otimes_R S^{-1}N \simeq S^{-1}M\otimes_R N$.* 2. *If $M$ and $N$ are considered as $R-$subsupermodules of $P$, then* *$$S^{-1}(M+N)\simeq (S^{-1}M)+(S^{-1}N)\quad\text{and}\quad S^{-1}(M\cap N)\simeq (S^{-1}M)\cap(S^{-1}N).$$* 3. *If $N\subseteq M$ is an $R-$subsupermodule of $M$, then $S^{-1}(M/N)\simeq S^{-1}M/S^{-1}N$.* 4. *$S^{-1}(\mathfrak{a}\cdot\mathfrak{b})=S^{-1}\mathfrak{a}\cdot S^{-1}\mathfrak{b}$.* *Proof.* 1. It follows from the universal property of localization that $S^{-1}M\simeq S^{-1}R\otimes_R M$. Then, by Proposition [Proposition 13](#prop:basicpros){reference-type="ref" reference="prop:basicpros"}, we have $$\begin{aligned} S^{-1}(M\otimes_R N)&\simeq S^{-1}R\otimes_R(M\otimes_R N)\\ &\simeq (S^{-1}R\otimes_R M)\otimes_R N\\ &\simeq S^{-1}M\otimes_R N. \end{aligned}$$ In the same way, we obtain the other isomorphisms. 2. Follows from the very definitions. 3. It is easy to show (similarly to the even case) that the localization functor $S^{-1}(\cdot):R\text{-sMod}\longrightarrow S^{-1}R\text{-sMod}$ is exact (see [@westrathesis Proposition 5.1.17]). Applying this functor to the short exact sequence of $R$-supermodules $$0\longrightarrow N\longrightarrow M\longrightarrow M/N\longrightarrow0,$$ the statement of the proposition holds. 4. Each element in $S^{-1}\mathfrak{a}$ has the form $\frac{a}{s}$, with $a\in \mathfrak{a}$ and $s\in S$.
Now, if $\frac{a}{s}\frac{b}{t}\in S^{-1}\mathfrak{a}\cdot S^{-1}\mathfrak{b}$, where $a\in \mathfrak{a}$ and $b\in \mathfrak{b}$, we have that $\frac{ab}{st}\in S^{-1}(\mathfrak{a}\cdot \mathfrak{b})$. Conversely, if $\frac{ab}{s}\in S^{-1}(\mathfrak{a}\cdot \mathfrak{b})$, with $a\in \mathfrak{a}$ and $b\in \mathfrak{b}$, then we obtain that $\frac{ab}{s}=\frac{a'}{s}\frac{b}{t}\in S^{-1}\mathfrak{a}\cdot S^{-1}\mathfrak{b}$, where $a'=at\in \mathfrak{a}$. Finally, using the additivity of the localization map, the claim follows.  ◻ **Remark 17**. The localization functor $S^{-1}(\cdot):R\text{-sMod}\longrightarrow S^{-1}R\text{-sMod}$, beyond being an exact functor, is also a strict monoidal functor by Proposition [Proposition 16](#P:2.2:B){reference-type="ref" reference="P:2.2:B"} i). More details can be found in [@etingof Section 2.4]. In the next proposition we consider a special case of the "Local-Global Principle". **Proposition 18**. *Let $\varphi:M\rightarrow N$ be a morphism. The following statements are equivalent.* 1. *$\varphi$ is an isomorphism.* 2. *$\varphi_{\mathfrak{p}}:M_\mathfrak{p}\rightarrow N_\mathfrak{p}$ is an isomorphism of $R_\mathfrak{p}-$supermodules for every prime ideal $\mathfrak{p}$ of $R$.* 3. *$\varphi_{\mathfrak{m}}:M_\mathfrak{m}\rightarrow N_\mathfrak{m}$ is an isomorphism of $R_\mathfrak{m}-$supermodules for every maximal ideal $\mathfrak{m}$ of $R$.* *Proof.* The proof is adapted from the arguments of [@atiyah Proposition 3.9] to the "super" context. ◻ ## Supervector spaces and superalgebras **Definition 19**. Let $\Bbbk$ be a field. A *supervector space over* $\Bbbk$ is a $(\Bbbk\oplus0)$-supermodule. If $V=V_{\overline{0}}\oplus V_{\overline{1}}$ is a supervector space over $\Bbbk$, we define the *superdimension* $\mathrm{sdim}_\Bbbk(V)$ of $V$ as the pair $$\mathrm{sdim}_\Bbbk(V) = r \mid s \quad \text{ where } \quad r:= \dim_\Bbbk(V_{\overline{0}})\text{ and }s:= \dim_\Bbbk(V_{\overline{1}}).$$ **Proposition 20**.
*Let $(R, \mathfrak{m})$ be a local Noetherian superring. Then, $\mathrm{sdim}_{R/\mathfrak{m}}(\mathfrak{m}/\mathfrak{m}^2)$ is finite.* *Proof.* Since $R$ is Noetherian, by Proposition [Proposition 7](#P:2.5){reference-type="ref" reference="P:2.5"}, $R_{\overline{0}}$ is a Noetherian ring and $R_{\overline{1}}$ is a finitely generated $R_{\overline{0}}-$module. Then, $\mathfrak{m}=\mathfrak{m}_{\overline{0}}\oplus R_{\overline{1}}$ is finitely generated as an $R_{\overline{0}}-$module. Now, we claim that $\mathfrak{m}/\mathfrak{m}^2$ is finitely generated as an $R/\mathfrak{m}-$module. Indeed, first observe that $$\mathfrak{m}/\mathfrak{m}^2=\left(\mathfrak{m}/\mathfrak{m}^2\right)_{\overline{0}}\oplus\left(\mathfrak{m}/\mathfrak{m}^2\right)_{\overline{1}}=\mathfrak{m}_{\overline{0}}/\mathfrak{m}_{\overline{0}}^2\oplus R_{\overline{1}}/\mathfrak{m}_{\overline{0}}R_{\overline{1}},$$ where each summand admits an $R/\mathfrak{m}=R_{\overline{0}}/\mathfrak{m}_{\overline{0}}-$module structure. Hence, the claim is a consequence of Nakayama's Lemma. The proposition follows from this claim. ◻ **Definition 21**. Let $\Bbbk$ be a field and $R$ a superring. If $R$ is also a $\Bbbk$-supervector space with the same grading, we call $R$ a $\Bbbk$-*superalgebra*. **Example 22**. Let $\Bbbk$ be a field and consider $$R=\Bbbk[X_1, \ldots, X_s \mid \theta_1, \ldots, \theta_d]:=\Bbbk\langle Z_1, \ldots, Z_s, Y_1, \ldots, Y_d\rangle/(Y_iY_j+Y_jY_i, Z_iZ_j-Z_jZ_i, Z_iY_j-Y_jZ_i).$$ This $\Bbbk$-superalgebra is called the *polynomial superalgebra with coefficients in* $\Bbbk$, with *even indeterminates* $X_1, \ldots, X_s$ and *odd indeterminates* $\theta_1, \ldots, \theta_d$. Here, $\theta_i$ corresponds to the image of $Y_i$ and $X_j$ corresponds to the image of $Z_j$ in the quotient $\Bbbk-$algebra above, with $i=1, \ldots, d$ and $j=1, \ldots, s$.
The superreduced of $R$ corresponds to $\overline{R}\cong\Bbbk[X_1, \ldots, X_s]$ and it is easy to check that any element of $R$ can be written in the form $$f=f_{i_0}(X_1, \ldots, X_s)+\sum_{J\,:\,\text{even}}f_{i_1\cdots i_J}(X_1, \ldots, X_s)\theta_{i_1}\cdots\theta_{i_J}+\sum_{J\,:\,\text{odd}}f_{i_1\cdots i_J}(X_1, \ldots, X_s)\theta_{i_1}\cdots\theta_{i_J},$$ where $f_{i_0}, f_{i_1\cdots i_J}\in\Bbbk[X_1, \ldots, X_s]$ for all $J$. $\square$ A generalization of the previous example is given below. **Example 23**. Let $R$ be a commutative ring and $M$ an $R-$module whose elements are labeled as odd. Consider the tensor superalgebra $$\begin{aligned} T_R(M)&:=\bigoplus_{n\geq0}M^{\otimes n}. \end{aligned}$$ For $n\geq0$, let $N_n$ be the subsupermodule of $M^{\otimes n}$ generated by the elements $$v_1\otimes v_2-(-1)^{|v_1||v_2|}v_2\otimes v_1,\quad\text{ where }\quad v_1\in h(M^{\otimes j}),\ v_2\in h(M^{\otimes k}),\ j+k=n.$$ Let $$N:=\bigoplus_{n\geq0}N_n \quad\text{ and }\quad \bigwedge_R(M):=T_R(M)/N.$$ If $M$ has a free basis $\theta_1, \ldots, \theta_d$, we define $$R[\theta_1, \ldots, \theta_d]:=\bigwedge_R (M).$$ The above construction corresponds to the *polynomial $R-$superalgebra with odd indeterminates* $\theta_1, \ldots, \theta_d$. Now, as a generalization of Example [Example 22](#ex:ksuperalg){reference-type="ref" reference="ex:ksuperalg"}, given any $\Bbbk$-superalgebra $R$, we define the so-called *polynomial $R-$superalgebra in even indeterminates* $X_1, \ldots, X_s$ and *odd indeterminates $\theta_1, \ldots, \theta_d$* as $$R[X_1, \ldots, X_s\mid\theta_1, \ldots, \theta_d]:=R\otimes_\Bbbk \Bbbk[X_1, \ldots, X_s][\theta_1, \ldots, \theta_d].$$ $\square$ **Remark 24**. Polynomial superalgebras are the prototypical example of an important subclass of superrings, which are called *split superrings*.
More precisely, a superring $R$ is called *split* if the short exact sequence $$0\longrightarrow \mathfrak{j}_R\longrightarrow R\longrightarrow \overline{R}\longrightarrow0$$ is a split exact sequence or, by Proposition [Proposition 15](#prop:split){reference-type="ref" reference="prop:split"}, $R\cong\overline{R}\oplus\mathfrak{j}_R$ (see [@bruzzo2023notes p. 7] and [@westrathesis Section 3.5]). $\square$ ## Associated primes **Definition 25**. Let $R$ be a superring and $M$ an $R-$supermodule. A prime ideal $\mathfrak{p}$ of $R$ is *associated to $M$* if $\mathfrak{p}$ is the annihilator of some nonzero homogeneous element $m$ of $M$, i.e., $\mathfrak{p}=\mathrm{Ann}(m):=\{a\in R:am=0\}$. The set of all associated primes to $M$ is denoted by $\mathfrak{ass}(M)$. **Proposition 26**. *Let $R$ be a superring.* 1. *If $R$ is Noetherian and $M$ is a nonzero $R-$supermodule, then $\mathfrak{ass}(M)$ is non-empty.* 2. *Given a short exact sequence of $R-$supermodules $$\xymatrix{0\ar[r]&M'\ar[r]^{\varphi}&M\ar[r]^{\psi}&M''\ar[r]&0,}$$* *we obtain the inclusions $\mathfrak{ass}(M')\subseteq\mathfrak{ass}(M)$ and $\mathfrak{ass}(M)\subseteq\mathfrak{ass}(M')\cup\mathfrak{ass}(M'')$.* 3. *If $M=M'\oplus M''$ is the direct sum of $R-$supermodules, then $\mathfrak{ass}(M)=\mathfrak{ass}(M')\cup\mathfrak{ass}(M'')$.* *Proof.* 1. See [@westrathesis Lemma 6.4.2]. 2. If $\mathfrak{p}\in\mathfrak{ass}(M')$, there exists a nonzero homogeneous element $m'\in M'$ such that $\mathfrak{p}$ is its annihilator. Clearly, the element $\varphi(m')$ is a nonzero homogeneous element in $M$ such that $\mathfrak{p}=\mathrm{Ann}(\varphi(m'))$. Thus, $\mathfrak{p}\in\mathfrak{ass}(M)$ which proves the inclusion $\mathfrak{ass}(M')\subseteq\mathfrak{ass}(M)$. Now, let $m\in M$ be a nonzero homogeneous element with annihilator $\mathfrak{p}$. We need to analyze two cases: $m\in\varphi(M')$ or $m\notin\varphi(M')$. 
For the former case, there exists some $m'\in M'$, such that $\varphi(m')=m$, and $\mathfrak{p}=\mathrm{Ann}(m')$, since $\varphi$ is an injective map. Hence, $\mathfrak{p}\in\mathfrak{ass}(M')$. For the latter case, from the equality $\varphi(M')=\mathrm{Ker}(\psi)$, by exactness of the sequence, we have that $\mathfrak{p}=\mathrm{Ann}(\psi(m))$, where $\psi(m)$ is a nonzero homogeneous element of $M''$. Thus, $\mathfrak{p}\in\mathfrak{ass}(M'')$ and $\mathfrak{ass}(M)\subseteq\mathfrak{ass}(M')\cup\mathfrak{ass}(M'')$. 3. This follows from the proof of ii) applied to the split exact sequence (see Proposition [Proposition 15](#prop:split){reference-type="ref" reference="prop:split"}): $$\xymatrix{0\ar[r]&M'\ar[r]^-{\iota}&M'\oplus M''\ar[r]^-{\pi}&M''\ar[r]\ar@/^1pc/[l]^{\iota} &0}.$$  ◻ **Definition 27**. Let $R$ be a superring and $M$ an $R-$supermodule. An element $a\in R-\{0\}$ is called a *zerodivisor on* $M$ if there exists some nonzero homogeneous element $m\in M$ such that $am=0$. We denote by $D(M)$ the set consisting of $0$ and the set of zerodivisors on $M$. As in the commutative case, if $R$ is a Noetherian superring, we have a natural relation between the set of zerodivisors and associated primes. **Corollary 28**. *Let $R$ be a Noetherian superring and let $M$ be a nonzero $R-$supermodule. Then* *$$\displaystyle D(M)=\displaystyle\bigcup_{\mathfrak{p}\in\mathfrak{ass}(M)}\mathfrak{p}.$$* *Proof.* Let $a\in R$ be a nonzero element which is a zerodivisor on $M$. Consider $N:=RM_0$ to be the $R$-subsupermodule of $M$ generated by $M_0=\{m\in h(M)\mid\,am=0\}$. Since $N\neq0$, by Proposition [Proposition 26](#Prop:1.7.2){reference-type="ref" reference="Prop:1.7.2"} i), there is some $\mathfrak{p}\in \mathfrak{ass}(N)$ with $a\in\mathfrak{p}$, *a fortiori*, $\mathfrak{p}\in \mathfrak{ass}(M)$ by Proposition [Proposition 26](#Prop:1.7.2){reference-type="ref" reference="Prop:1.7.2"} ii). The other inclusion is clear, so we have the equality. ◻ **Proposition 29**.
*Let $R$ be a Noetherian superring and let $M$ be a finitely generated $R-$supermodule. There exists a "filtration" by $R-$subsupermodules of $M$, i.e.,* *$$0=M_0\subset M_1\subset\cdots\subset M_d=M$$* *such that each quotient supermodule $M_i/M_{i-1}$ is isomorphic to $R/\mathfrak{p}_i$ or to $\Pi(R/\mathfrak{p}_i)$ for some prime ideal $\mathfrak{p}_i$ of $R$, with $1\leq i\leq d$. In particular, $\mathfrak{ass}(M)\subseteq\{\mathfrak{p}_1, \ldots, \mathfrak{p}_d\}$ and $\mathfrak{ass}(M)$ is a finite set.* *Proof.* The existence of such a filtration is a consequence of [@westrathesis Theorem 6.4.3]. To prove the inclusion $\mathfrak{ass}(M)\subseteq\{\mathfrak{p}_1, \ldots, \mathfrak{p}_d\}$ we proceed by induction on $d$. If $d=1$ and $\mathfrak{p}\in\mathfrak{ass}(M)$, then for some nonzero homogeneous $m\in M$, $\mathfrak{p}$ is the annihilator of $m$. Since, by hypothesis, there exists a prime ideal $\mathfrak{p}_1$ with either $M \simeq R/\mathfrak{p}_1$ or $M\simeq\Pi(R/\mathfrak{p}_1)$, we have that $m$ is mapped by this isomorphism to a nonzero homogeneous element. In any case, we obtain that $\mathfrak{p}\subseteq\mathfrak{p}_1$. If $\mathfrak{p}=\mathfrak{p}_1$ we are done. Now, suppose by contradiction that there is some homogeneous nonzero $f\in \mathfrak{p}_1-\mathfrak{p}$. It is easy to check that $\mathfrak{p}=\mathrm{Ann}(fm)$ and, by the inclusion $\mathfrak{p}\subseteq\mathfrak{p}_1$, we conclude $fm\in M_0=0$, i.e., $f\in \mathrm{Ann}(m)=\mathfrak{p}$, which contradicts the choice of $f$. In conclusion, $\mathfrak{p}=\mathfrak{p}_1$. Assume by induction that the conclusion holds for $d$ and suppose that $M$ has a filtration by $d+1$ subsupermodules. Let $m\in M$ be homogeneous whose annihilator is a prime ideal $\mathfrak{p}$. If $m\in M_{d-1}$, the result follows from the inductive hypothesis.
If $m\not\in M_{d-1}$, it must be mapped to a nonzero homogeneous element by either $M/M_{d-1} \simeq R/\mathfrak{p}_d$ or $M/M_{d-1} \simeq \Pi(R/\mathfrak{p}_d)$. In any case, we obtain $\mathfrak{p}\subseteq\mathfrak{p}_d$. If $\mathfrak{p}\neq\mathfrak{p}_d$, then we proceed as in the case $d=1$ to obtain a contradiction. Thus, $\mathfrak{p}=\mathfrak{p}_d$. ◻ **Proposition 30** (Prime avoidance). *Let $\mathfrak{a}, \mathfrak{a}_1, \ldots, \mathfrak{a}_d$ be superideals of a superring $R$, where at most two of the $\mathfrak{a}_i$ are not prime. If $$\mathfrak{a}\subseteq\bigcup_{1\leq i\leq d} \mathfrak{a}_i,$$ then $\mathfrak{a}$ is contained in one of the $\mathfrak{a}_i$.* *Proof.* Recall that any prime ideal is completely prime (see Remark [Remark 4](#rmk:prime-ideal){reference-type="ref" reference="rmk:prime-ideal"} iv)), so the proof is the same as in the commutative case (see [@eisenbud Lemma 3.3]). ◻ **Corollary 31**. *Let $R$ be a Noetherian superring, $M$ a nonzero $R-$supermodule and $\mathfrak{a}$ a superideal of $R$. Then $\mathfrak{a}$ contains a non-zerodivisor on $M$ or it annihilates a nonzero element of $M$.* *Proof.* Let $\mathfrak{a}$ be a superideal consisting of zerodivisors on $M$. Then, by Corollary [Corollary 28](#Cor:2.8){reference-type="ref" reference="Cor:2.8"}, $\mathfrak{a}$ is contained in the union of the associated primes of $M$. Thus, by Proposition [Proposition 30](#Prop:PrimeAvoidance){reference-type="ref" reference="Prop:PrimeAvoidance"}, $\mathfrak{a}$ must be contained in some associated prime, that is, in $\mathrm{Ann}(m)$ for some nonzero homogeneous $m\in M$. ◻ **Proposition 32**. *Let $R$ be a Noetherian superring. Then $K(R)$ has finitely many maximal ideals, and such ideals are the localizations of the maximal associated primes of $R$.* *Proof.* By Proposition [Proposition 29](#P:2.11){reference-type="ref" reference="P:2.11"}, it suffices to prove that $\mathfrak{ass}(R)=\mathfrak{ass}(K(R))$. By the universal property of localization, we can identify $R\subseteq K(R)$.
Thus, by Proposition [Proposition 26](#Prop:1.7.2){reference-type="ref" reference="Prop:1.7.2"}, we see that $\mathfrak{ass}(R)\subseteq\mathfrak{ass}(K(R))$. Conversely, let $a/s\in K(R)$ be an element whose annihilator is a prime ideal $\mathfrak{p}$. Clearly, the annihilator of $a$ in $R$ is also $\mathfrak{p}$, so $\mathfrak{ass}(R)\supseteq\mathfrak{ass}(K(R))$, which completes the proof. ◻ ## Noetherian and Finitely Presented supermodules **Definition 33**. Let $R$ be a superring and $M$ an $R-$supermodule. We call $M$ *Noetherian* if it satisfies one (and hence all) of the following equivalent conditions (for a proof see [@westrathesis Proposition 3.3.1]): - $M$ satisfies the ascending chain condition on its $R-$subsupermodules. - Each $R-$subsupermodule $N$ of $M$ is finitely generated. - Any non-empty collection of $R-$subsupermodules of $M$ has a maximal element. As in the commutative case, the notions of Noetherian superring and Noetherian supermodule are very closely related, as the proposition below shows. **Proposition 34**. *Let $R$ be a Noetherian superring and let $M$ be a finitely generated $R-$supermodule. Then, $M$ is a Noetherian $R-$supermodule.* *Proof.* See [@westrathesis Proposition 3.3.8]. ◻ **Definition 35**. Let $R$ be a superring and $M$ a supermodule over $R$. - We say that $M$ is a *free $R$-supermodule* if there are disjoint sets $I$ and $J$ such that $$M\simeq R^{\oplus I\mid J}:=\displaystyle\left(\bigoplus_{i\in I}R\right)\oplus\left(\bigoplus_{j\in J}\Pi R\right).$$ We say that $M$ has *finite rank* $n$ if $n=|I\cup J|$ is finite. We also say that $M$ is *finite free*. - We call $M$ a *flat $R-$supermodule* if the functor $M\otimes_R -$ is left-exact (equivalently, exact, since this functor is always right-exact). - $M$ is called *finitely presented* if there exists a short exact sequence $$0\longrightarrow K\longrightarrow F \longrightarrow M\longrightarrow 0$$ where $F$ is a finite free $R-$supermodule and $K$, the kernel of the morphism $F\longrightarrow M$, is finitely generated.
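To make Definition 35 concrete, the following is a minimal worked example of our own (it is not taken from the cited references), over the Grassmann algebra $R=\Bbbk[\theta]$ with a single odd generator $\theta$:

```latex
% A minimal sketch (our own example): R = k[theta], theta odd, theta^2 = 0,
% so R_0 = k and R_1 = k*theta. The quotient M = R/(theta) is finitely
% presented, as witnessed by the short exact sequence
\[
0 \longrightarrow (\theta) \longrightarrow R \longrightarrow R/(\theta) \longrightarrow 0,
\]
% where R is finite free of rank 1 and the kernel (theta) = k*theta is
% generated by the single element theta. Moreover, since Ann(theta) = (theta)
% and theta is odd, the assignment 1 + (theta) |--> theta gives an isomorphism
% of R-supermodules Pi(R/(theta)) ~= (theta).
```

Here $\Bbbk[\theta]$ is Noetherian, so this is consistent with the fact that, over a Noetherian superring, finitely presented and finitely generated supermodules coincide.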
Observe that, if $R$ is a Noetherian superring then, by Proposition [Proposition 34](#Prop:Noetherians){reference-type="ref" reference="Prop:Noetherians"}, $M$ is finitely presented if and only if $M$ is finitely generated. For the next important result, we need to fix some notation: given a morphism of superrings $\varphi: R\rightarrow R'$, we can endow $R'$ with an $R-$supermodule structure induced by $\varphi$. Indeed, the left action of $R$ is given by $a\cdot x=\varphi(a)x$, for any homogeneous elements $a\in R$ and $x\in R'$. Now, if $M$ is an $R-$supermodule, then $M_{R'}:=R'\otimes_R M$ admits a natural structure of $R'-$supermodule, where on $R'$ we assume the $R-$supermodule structure induced by $\varphi$. Indeed, the left $R'$-action is defined by $a'\cdot(a\otimes m) =a'a\otimes m$, where $m\in h(M)$ and $a, a'\in h(R')$. **Proposition 36** (Devissage Theorem). *Let $R$ and $R'$ be superrings and let $M, N$ be $R-$supermodules. For any morphism $\varphi:R\longrightarrow R'$, we have a morphism of $R'-$supermodules* *$$\begin{aligned} \psi:\underline{\mathrm{Hom}}_R(M, N)\otimes_R R'&\longrightarrow&\underline{\mathrm{Hom}}_{R'}(M\otimes_R R', N\otimes_R R')\\ f\otimes_R r&\longmapsto&f_*,\end{aligned}$$* *where $f_*$ is the morphism of $R'-$supermodules* *$$\begin{aligned} f_*:M\otimes_R R'&\longrightarrow&N\otimes_R R'\\ m\otimes_R x&\longmapsto&(-1)^{|m||r|}f(m)\otimes_R rx. \end{aligned}$$* *Moreover, if $R'$ is flat and $M$ is finitely presented, then the map $\psi$ is an isomorphism of $R'-$supermodules.* *Proof.* See [@westrathesis Theorem 6.5.9]. ◻ **Remark 37**. An important application of Proposition [Proposition 36](#Westra:Theorem:6.5.9){reference-type="ref" reference="Westra:Theorem:6.5.9"} concerns localizations.
For any finitely presented $R-$supermodules $M$ and $N$ (using the fact that $S^{-1}R$ is a flat $R-$supermodule and the exactness of the localization functor) we get the isomorphism $$S^{-1}R\otimes_R \underline{\mathrm{Hom}}_R(M, N)\stackrel{\simeq}{\longrightarrow}{\underline{\mathrm{Hom}}_{S^{-1}R}(S^{-1}R \otimes_R M,\, S^{-1}R\otimes_R N)}.$$ In particular, if $R$ is Noetherian and $\mathfrak{p}$ is a prime ideal of $R$, we obtain $$\underline{\mathrm{Hom}}_R(M, N)_\mathfrak{p}\stackrel{\simeq}{\longrightarrow}{\underline{\mathrm{Hom}}_{R_\mathfrak{p}}(M_\mathfrak{p},\, N_\mathfrak{p})}.$$ $\square$ **Proposition 38**. *Let $(R, \mathfrak{m})$ be a local Noetherian superring and let $M$ and $N$ be finitely generated $R$-supermodules. If we have an isomorphism $\phi:M\longrightarrow N$ and a morphism $\xi:M\longrightarrow N$ with $\xi(M)\subseteq \mathfrak{m}N$, then $\phi+\xi:M\longrightarrow N$ is an isomorphism.* *Proof.* Since $\phi$ is surjective (because it is an isomorphism), for any $y\in N$ we can find $x\in M$ such that $\phi(x)=y$. Then, $y=\phi(x)=(\phi+\xi)(x)-\xi(x)\in(\phi+\xi)(M)+\mathfrak{m}N$, so $N=(\phi+\xi)(M)+\mathfrak{m}N$. Since $N$ is finitely generated, so is the quotient $N/(\phi+\xi)(M)$. Using the super Nakayama's Lemma (see [@bartocci2012geometry Proposition 1.1, (2)]) on the finitely generated $R$-supermodule $N/(\phi+\xi)(M)$ we obtain $(\phi+\xi)(M)=N$, that is, $\phi+\xi$ is surjective. Consequently, $\phi^{-1}\circ(\phi+\xi):M\longrightarrow M$ is a surjective endomorphism, so it remains to prove a "Super" Vasconcelos' Theorem; that is, if $\psi:M\longrightarrow M$ is a surjective morphism of finitely generated $R$-supermodules, then $\psi$ is an isomorphism. Granting this, $\phi^{-1}\circ(\phi+\xi)$ is an isomorphism, and hence so is $\phi+\xi$. To prove the claim, consider $R'=R[X]$. Note that $M$ is an $R'$-supermodule, where $R$ acts as before and the even variable $X$ acts by $X\cdot m=\psi(m)$. Since $\psi$ is surjective, we find that $XM=M$, so $\mathfrak{a}M=M$, where $\mathfrak{a}$ is the superideal $XR'$.
Because $M$ is finitely generated as an $R'$-supermodule, there exists a homogeneous element $f\in\mathfrak{a}$ such that $(1+f)M=0$ (see [@westrathesis Corollary 6.4.8]). Let $Y\in R'$ be such that $XY=f$. Thus, we obtain that $(1+XY)M=0$. Now, for every $u\in\mathrm{Ker}(\psi)$, we get $0=(1+XY)u=u+Y\psi(u)=u$, that is, $\mathrm{Ker}(\psi)=0$. Therefore $\psi$ is an isomorphism. ◻ The following proposition is the "superization" of [@altman2013term Proposition 13.11] (cf. [@eisenbud Exercise 4.13]). **Proposition 39**. *Let $R$ be a Noetherian superring with finitely many maximal ideals $\mathfrak{m}_1,\ldots,\mathfrak{m}_d$, and let $M$ and $N$ be finitely generated $R-$supermodules. If $M_{\mathfrak{m}_i} \simeq N_{\mathfrak{m}_i}$, for each $i=1,\ldots, d$, then $M \simeq N$.* *Proof.* Suppose that, for each $i=1,\ldots, d$, there exists an isomorphism $\varphi_{i}:M_{\mathfrak{m}_i}\longrightarrow N_{\mathfrak{m}_i}$. Then, each $\varphi_{i}$ can be identified with an even element in $\underline{\mathrm{Hom}}_{R_{\mathfrak{m}_i}}(M_{\mathfrak{m}_i}, N_{\mathfrak{m}_i})$. This $R-$supermodule coincides with $\underline{\mathrm{Hom}}_{R}(M, N)_{\mathfrak{m}_i}$, by Remark [Remark 37](#rem:devissage){reference-type="ref" reference="rem:devissage"}. Therefore we can choose $\psi_i:M\longrightarrow N$, with the same parity as $\varphi_i$ and such that $(\psi_i)_{\mathfrak{m}_i}=\varphi_i$, for each $i=1,\ldots, d$. Since the maximal ideals are pairwise distinct, using the contrapositive of Proposition [Proposition 30](#Prop:PrimeAvoidance){reference-type="ref" reference="Prop:PrimeAvoidance"}, there exists an even element $a_i\in(\bigcap_{j\neq i}\mathfrak{m}_j)-\mathfrak{m}_i$, for each $i=1,\ldots, d$. Hence, for each $i=1,\ldots, d$, we set $b_i:=a_i$, an even element with the property $b_i\in \mathfrak{m}_j-\mathfrak{m}_i$, for all $j\neq i$.
Now, we consider the morphism $$% \begingroup \setlength\arraycolsep{0pt} \psi=\sum_{j} b_j\psi_j\colon\begin{array}[t]{c >{{}}c<{{}} c} M & \longrightarrow& N \\ m & \longmapsto& \sum_{j} b_j\psi_j(m). \end{array}% \endgroup$$ Using Propositions [Proposition 18](#prop:localglobal){reference-type="ref" reference="prop:localglobal"} and [Proposition 38](#lem:nakaapp){reference-type="ref" reference="lem:nakaapp"} with $\phi:=a_i\varphi_i$ and $\xi:=\psi_{\mathfrak{m}_i}-a_i\varphi_i$ (for each $i=1,\ldots,d$), we find that every $\psi_{\mathfrak{m}_i}$ is an isomorphism, and hence that $\psi$ is an isomorphism. ◻ ## Krull superdimension and regularity {#Subsec:Krull:SuperDim} In this subsection we introduce the superdimension theory of superrings recently developed by Masuoka and Zubkov in [@MASUOKA2020106245] (see also the work of Zubkov and P. S. Kolesnikov [@ZubkovKolesnikov]). We denote by $\mathrm{Kdim}(-)$ the usual Krull dimension in the commutative context. If $R$ is a commutative ring and $M$ is an $R-$module, we denote by $\text{dim}(M)$ *the dimension of $M$ as $R-$module*, defined by $\text{dim}(M)=\mathrm{Kdim}(R/\text{Ann}_R(M))$. **Definition 40**. Let $R$ be a Noetherian superring such that $\mathrm{Kdim}(R_{\overline{0}})<\infty$ and let $a_1, \ldots, a_s\in R_{\overline{1}}$. For any $I\subseteq[s]$, with $|I|=n$, let $a^I:=a_{i_1}\cdots a_{i_{n}}$, where $I=\{i_1, \ldots, i_{n}\}$ and the product is taken in $R$. Let $\mathrm{Ann}_{R_{\overline{0}}}(a^I)$ be the ideal of $R_{\overline{0}}$ consisting of the elements of $R_{\overline{0}}$ that annihilate $a^I$. - If $\mathrm{Kdim}(R_{\overline{0}})=\mathrm{Kdim}(R_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(a^I))$, we say that the $a_{i_j}$ form a *system of odd parameters of length $n$*. - The greatest integer $m$ such that there exists a system of odd parameters of length $m$ is called the *odd Krull superdimension* of $R$ and is denoted by $m:=\mathrm{Ksdim}_{\overline{1}}(R)$.
- The *even Krull superdimension* of $R$ is the usual Krull dimension of $R_{\overline{0}}$ and is denoted by $\mathrm{Ksdim}_{\overline{0}}(R)$. - The *Krull superdimension* of $R$ is the pair $\mathrm{Ksdim}_{\overline{0}}(R)\mid\mathrm{Ksdim}_{\overline{1}}(R)$. **Proposition 41**. *Let $R$ be a Noetherian superring with $\mathrm{Kdim}(R_{\overline{0}})<\infty$. Let $a_1, \ldots, a_d$ be a set of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$. The following conditions are equivalent.* - *There is a system of odd parameters of $R$ of length $l\geq1$.* - *For some $1\leq i_1<i_2<\cdots<i_l\leq d$ the elements $a_{i_1}, \ldots, a_{i_l}$ form a system of odd parameters.* - *$\dim(R_{\overline{1}}^l)=\mathrm{Kdim}(R_{\overline{0}})$.* *In particular,* *$$\displaystyle \mathrm{Ksdim}_{\overline{1}}(R)=\max\{l\in\mathds{Z}^+ \mid \dim(R_{\overline{1}}^l)=\mathrm{Kdim}(R_{\overline{0}})\}.$$* *Proof.* See [@MASUOKA2020106245 Proposition 4.1]. ◻ Let $(R, \mathfrak{m})$ be a local Noetherian superring with residue field $\Bbbk=R/\mathfrak{m}$. Then, $\mathrm{Kdim}(R_{\overline{0}})$ is finite and any system of odd parameters has length at most $\dim_{\Bbbk}((\mathfrak{m}/\mathfrak{m}^2)_{\overline{1}})$ (see [@MASUOKA2020106245 Lemma 5.1]). Thus, we obtain $$\mathrm{Ksdim}_{\overline{0}}(R)\leq\dim_\Bbbk((\mathfrak{m}/\mathfrak{m}^2)_{\overline{0}})\quad\text{ and }\quad\mathrm{Ksdim}_{\overline{1}}(R)\leq\dim_\Bbbk((\mathfrak{m}/\mathfrak{m}^2)_{\overline{1}}).$$ **Definition 42**. Let $(R, \mathfrak{m})$ be a local Noetherian superring, with $\Bbbk=R/\mathfrak{m}$ its residue field. The superring $R$ is called *regular* if $$\mathrm{Ksdim}(R)=\mathrm{sdim}_\Bbbk(\mathfrak{m}/\mathfrak{m}^2).$$ **Proposition 43**. *Let $(R, \mathfrak{m})$ be a local superring.
The following statements are equivalent.* - *$R$ is regular.* - *$\overline{R}$ is regular and, for some/any minimal system $z_1, \ldots, z_s$ of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$, $$\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})=R_{\overline{1}}^2.$$* *Proof.* See [@MASUOKA2020106245 Proposition 5.2]. ◻ **Definition 44**. Let $R$ be a Noetherian superring. We say that $R$ is *regular* if it satisfies one of the following equivalent conditions (see [@MASUOKA2020106245 Lemma 5.4 and Subsection 5.2]). - For every prime ideal $\mathfrak{p}$ in $R$, the local superring $R_\mathfrak{p}$ is regular. - For every maximal ideal $\mathfrak{m}$ in $R$, the local superring $R_\mathfrak{m}$ is regular. The condition of regularity for non-local superrings is characterized as follows. **Proposition 45**. *Let $R$ be a superring. The following conditions are equivalent.* - *$R$ is regular.* - *The ring $\overline{R}$ is regular, the $\overline{R}$-module $\mathfrak{j}_R/\mathfrak{j}_R^2$ is projective, and the graded $\overline{R}$-superalgebra map $$\displaystyle\lambda_R:\bigwedge_{\overline{R}}(\mathfrak{j}_R/\mathfrak{j}_R^2)\longrightarrow\bigoplus_{n\geq0}\mathfrak{j}_R^n/\mathfrak{j}_R^{n+1}$$ (induced by the embedding $\displaystyle\mathfrak{j}_R/\mathfrak{j}_R^2\hookrightarrow\bigoplus_{n\geq0}\mathfrak{j}_R^n/\mathfrak{j}_R^{n+1}$) is an isomorphism.* *Proof.* See [@MASUOKA2020106245 Proposition 5.8]. ◻ **Remark 46**. A geometric perspective of the above proposition is in [@bruzzo2023notes Proposition A.17]. $\square$ # Invertibility {#Sec:Invertibibility} In this section we explore the concepts of invertible supermodules and fractional superideals. This section is divided into two subsections. In the first one we study the fundamental properties of invertible supermodules and fractional superideals.
In the second one, we construct the Picard and Cartier groups of a superring as an application of the theory developed in the first subsection. ## Invertible supermodules and Fractional superideals **Definition 47**. Let $R$ be a superring and $M$ an $R-$supermodule. We say that $M$ is *invertible* if: 1. $M$ is finitely generated, and 2. for every prime ideal $\mathfrak{p}$ of $R$, $M_\mathfrak{p}\simeq R_\mathfrak{p}$. **Definition 48**. Let $R$ be a superdomain and let $K(R)$ be the total superring of fractions of $R$. 1. An $R-$subsupermodule $M$ of $K(R)$ is called a *fractional superideal of* $R$ if there exists a homogeneous element $d\in R\setminus\mathfrak{j}_R$ such that $d M\subseteq R$. 2. If $M$ is an $R-$subsupermodule of $K(R)$, we denote by $M^{-1}$ the $R-$supermodule generated by the homogeneous elements $d\in K(R)$ such that $dM\subseteq R$. **Remark 49**. Observe that if $M$ and $N$ are $R-$subsupermodules of $K(R)$, then it makes sense to define *the product of $M$ and $N$* as the $R-$subsupermodule of $K(R)$ given by $$MN:=\left\{\sum_i m_in_i\mid\,m_i\in M,\,n_i\in N\right\},$$ where the sums are finite. $\square$ For superideals $\mathfrak{a}$ and $\mathfrak{b}$ of $R$, with $\mathfrak{b}\neq\mathfrak{j}_R$, we denote by $(\mathfrak{a}:\mathfrak{b})$ the $R-$supermodule generated by the homogeneous elements $z\in K(R)$ such that $z\mathfrak{b}\subseteq\mathfrak{a}$, and we denote by $R^{\ast}$ the set of units of $R$. The following result is the superization of [@altman2013term Proposition 25.3]. **Proposition 50**. *Let $R$ be a superdomain and $K(R)=S^{-1}R$ where $S=R-\mathfrak{j}_R$. Consider the following conditions for an $R-$subsupermodule $M$ of $K(R)$:* 1. *There exist superideals $\mathfrak{a}$ and $\mathfrak{b}$ of $R$, with $\mathfrak{b}\neq\mathfrak{j}_R$, such that $M=(\mathfrak{a}:\mathfrak{b})$.* 2. *For some homogeneous $z\in K(R)^*$ we have $z M\subseteq R$.* 3. *For some homogeneous $z\in R-\mathfrak{j}_R$ we have $zM\subseteq R$.* 4.
*$M$ is finitely generated.* *Then i), ii), iii) are equivalent and iv) implies i), ii), iii). Moreover, if $R$ is Noetherian, then i)-iv) are equivalent; conversely, if i)-iv) are equivalent for every $R-$subsupermodule $M$ of $K(R)$, then $R$ is Noetherian.* *Proof.* - i) $\Rightarrow$ ii): Consider $z\in \mathfrak{b}-\mathfrak{j}_R$ a homogeneous element, *a fortiori*, $z\in K(R)^{\ast}$. Since $M=(\mathfrak{a}:\mathfrak{b})$, it follows that for all $m\in M$, $zm\in\mathfrak{a}\subseteq R$. Thus, $z M\subseteq R$, which proves ii). - ii) $\Leftrightarrow$ iii): This is an immediate consequence of the equivalence: $x/y\in K(R)^{\ast}$ if and only if $x\in R-\mathfrak{j}_R$. - iii) $\Rightarrow$ i): Let $z\in R-\mathfrak{j}_R$ be homogeneous such that $zM\subseteq R$. Then, $zM$ and $zR$ are superideals of $R$. Now, if $x\in M$ is an arbitrary homogeneous element, we obtain $xz\in zM$. Hence, $xzR\subseteq zM$ and $x\in(zM:zR)$, which proves that $M\subseteq (zM:zR)$. Conversely, let $x\in(zM:zR)$ be a homogeneous element. Then $xzR\subseteq zM$ and we have $xz\in zM$. That is, $z^{-1}xz\in M$, *a fortiori*, $x\in M$. - iv) $\Rightarrow$ iii): Suppose that $M$ is a finitely generated $R-$subsupermodule of $K(R)$. Then, there exist finitely many homogeneous $a_1, \ldots, a_d\in M$ generating $M$ as an $R-$module. Let $z_i\in R-\mathfrak{j}_R$ and $y_i\in R$ be homogeneous elements such that $a_i=z_i^{-1}\cdot y_i$, where $|a_i|=|y_i|$ (for each $i=1, 2, \ldots, d$). Taking $z:=z_1\cdots z_d$, it follows that $z\notin\mathfrak{j}_R$, because $\mathfrak{j}_R$ is a prime ideal. Thus, we obtain that $z M\subseteq R$, which proves iii). Now, suppose that $R$ is Noetherian, and let us prove that iii) $\Rightarrow$ iv). Let $z$ be a homogeneous element satisfying iii). Then, $zM$ is a superideal of $R$. Since $R$ is Noetherian, $zM=a_1R+\cdots+a_dR$, where $a_1, \ldots, a_d\in h(zM)$. Thus, $a_1/z, \ldots, a_d/z$ generate $M$ as an $R-$subsupermodule of $K(R)$. Finally, let us show that $R$ is Noetherian assuming that i) $\Leftrightarrow$ ii) $\Leftrightarrow$ iii) $\Leftrightarrow$ iv).
Let $\mathfrak{a}\subseteq R$ be a superideal. Then, $\mathfrak{a}$ can be identified with its image in $K(R)$ (by the localization map) as an $R-$subsupermodule of $K(R)$. Clearly, with $z=1$ in iii), it follows that $z\mathfrak{a}=\mathfrak{a}\subseteq R$. Since iii) implies iv), $\mathfrak{a}$ is finitely generated, which completes the proof. ◻ **Remark 51**. Let $R$ be a superdomain and $M$ an $R-$subsupermodule of $K(R)$. - An immediate consequence of Proposition [Proposition 50](#P:3.2:ALTMAN){reference-type="ref" reference="P:3.2:ALTMAN"} is: if $M$ is finitely generated, then $M$ is isomorphic to a superideal of $R$. Hence, we can view $M$ as a fractional superideal of $R$. - If $M$ is nonzero, then $M^{-1}$ is a fractional superideal of $R$. Indeed, since $M\neq0$, there exist a nonzero $y\in h(R)$ and $x\in R-\mathfrak{j}_R$ such that $x^{-1}\cdot y\in M$. Then, $y=x\cdot x^{-1}y\in M\cap R$. It is easy to verify that $M^{-1}M\subseteq R$, so $yM^{-1}\subseteq R$. In other words, $M^{-1}$ is a fractional superideal of $R$.$\square$ Let $R$ be a Noetherian superring and $M$ an invertible $R-$supermodule. The $R$-supermodule $M^{\vee}=\underline{\mathrm{Hom}}_R(M, R)$ is called *the dual supermodule of $M$*. In this case, we can construct a natural "evaluation" morphism $$\begin{aligned} \mu:M^{\vee}\otimes_R M&\longrightarrow&R\\ \varphi\otimes a&\longmapsto&\varphi(a).\end{aligned}$$ The following four propositions are the superization of [@eisenbud Theorem 11.6], and their proofs follow closely the arguments given there. **Proposition 52**. *Let $R$ be a Noetherian superring and $M$ an $R-$supermodule. Then, $M$ is invertible if and only if $\mu$ is an isomorphism.* *Proof.* The "only if" part: Suppose that $M$ is invertible and $\mathfrak{p}$ is a prime ideal of $R$.
By Propositions [Proposition 13](#prop:basicpros){reference-type="ref" reference="prop:basicpros"}, [Proposition 16](#P:2.2:B){reference-type="ref" reference="P:2.2:B"}, [Proposition 36](#Westra:Theorem:6.5.9){reference-type="ref" reference="Westra:Theorem:6.5.9"} and Remark [Remark 37](#rem:devissage){reference-type="ref" reference="rem:devissage"}, we have $$\begin{aligned} (M^{\vee}\otimes_R M)_\mathfrak{p}&\simeq\underline{\mathrm{Hom}}_R(M, R)_\mathfrak{p}\otimes_{R_\mathfrak{p}}M_\mathfrak{p}\\ &\simeq\underline{\mathrm{Hom}}_{R_\mathfrak{p}}(M_\mathfrak{p}, R_\mathfrak{p})\otimes_{R_\mathfrak{p}}M_\mathfrak{p}\\ &\simeq\underline{\mathrm{Hom}}_{R_\mathfrak{p}}(R_\mathfrak{p}, R_\mathfrak{p})\otimes_{R_\mathfrak{p}}M_\mathfrak{p}\\ &\simeq R_\mathfrak{p}\otimes_{R_\mathfrak{p}}M_\mathfrak{p}\\ &\simeq R_\mathfrak{p}\otimes_{R_\mathfrak{p}}R_\mathfrak{p}\\ &\simeq R_\mathfrak{p}.\end{aligned}$$ Thus, $\mu_\mathfrak{p}$ is an isomorphism for any prime ideal $\mathfrak{p}$ of $R$. By Proposition [Proposition 18](#prop:localglobal){reference-type="ref" reference="prop:localglobal"}, $\mu$ is an isomorphism. The "if" part: Suppose that $\mu$ is an isomorphism and let $\mathfrak{p}$ be a prime ideal of $R$. We first show that $M_\mathfrak{p}\simeq R_\mathfrak{p}$. Since $\mu$ is surjective, there exists a homogeneous element $\sum_{i=1}^s\varphi_i\otimes a_i\in M^{\vee}\otimes_R M$ such that $\mu(\sum_{i=1}^s\varphi_i\otimes a_i)=1$. Thus, $1=\sum_{i=1}^s \varphi_i(a_i)$, where $\varphi_i$ and $a_i$ have the same parity for all $i$. This implies that, for some $i$, $\varphi_i(a_i)\not\in\mathfrak{p}$, and hence $\varphi_i(a_i)$ is invertible in $R_\mathfrak{p}$. Hence, if $v:=\varphi_i(a_i)^{-1}\in R_\mathfrak{p}$, it follows that the element $a:=va_i\in M_\mathfrak{p}$ is mapped to $1\in R_\mathfrak{p}$ by $(\varphi_i)_\mathfrak{p}$. According to the parity of $\varphi_i$ and $a_i$, we have two cases: either both are even or both are odd.
[Case 1:]{.ul} $\varphi_i$ and $a_i$ are both even. We define the morphisms $$% \begingroup \setlength\arraycolsep{0pt} (\varphi_i)_\mathfrak{p}\colon\begin{array}[t]{c >{{}}c<{{}} c} M_\mathfrak{p} & \longrightarrow& R_\mathfrak{p} \\ a & \longmapsto& 1 \end{array}% \endgroup\quad\text{ and }\quad% \begingroup \setlength\arraycolsep{0pt} \psi\colon\begin{array}[t]{c >{{}}c<{{}} c} R_\mathfrak{p} & \longrightarrow& M_\mathfrak{p} \\ 1 & \longmapsto& a \end{array}% \endgroup.$$ Since $(\varphi_i)_\mathfrak{p}\circ \psi=\mathrm{id}_{R_\mathfrak{p}}$, by Proposition [Proposition 15](#prop:split){reference-type="ref" reference="prop:split"}, $M_\mathfrak{p}\simeq \text{Im}(\psi)\oplus\mathrm{Ker}((\varphi_i)_\mathfrak{p})=R_\mathfrak{p}a\oplus\mathrm{Ker}((\varphi_i)_\mathfrak{p})$, where $R_\mathfrak{p}a \simeq R_\mathfrak{p}$. Similarly, we define the morphisms $$% \begingroup \setlength\arraycolsep{0pt} \text{ev}_a\colon\begin{array}[t]{c >{{}}c<{{}} c} M_\mathfrak{p}^{\vee} & \longrightarrow& R_\mathfrak{p} \\ \xi & \longmapsto& \xi(a) \end{array}% \endgroup \quad\text{ and }\quad% \begingroup \setlength\arraycolsep{0pt} \widetilde{\psi}\colon\begin{array}[t]{c >{{}}c<{{}} c} R_\mathfrak{p} & \longrightarrow& M_\mathfrak{p}^{\vee} \\ 1 & \longmapsto& (\varphi_i)_\mathfrak{p} \end{array}% \endgroup.$$ In the same way as before, $(\text{ev}_a)\circ\widetilde{\psi}=\mathrm{id}_{R_\mathfrak{p}}$ and $M_\mathfrak{p}^{\vee} \simeq\text{Im}(\widetilde{\psi})\oplus\mathrm{Ker}(\text{ev}_a)= R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\oplus\mathrm{Ker}(\text{ev}_a)$, with $R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\simeq R_\mathfrak{p}$. 
Thus $$\begin{aligned} M_\mathfrak{p}^{\vee}\otimes M_\mathfrak{p}& \simeq(R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\oplus\mathrm{Ker}(\text{ev}_a))\otimes(R_\mathfrak{p}a\oplus\mathrm{Ker}((\varphi_i)_\mathfrak{p}))\\ & \simeq (R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\otimes R_\mathfrak{p}a)\oplus(R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\otimes\mathrm{Ker}((\varphi_i)_\mathfrak{p}))\oplus(\mathrm{Ker}(\text{ev}_a)\otimes R_\mathfrak{p}a)\oplus(\mathrm{Ker}(\text{ev}_a)\otimes\mathrm{Ker}((\varphi_i)_\mathfrak{p})).\end{aligned}$$ Hence, $\mu_\mathfrak{p}(R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\otimes\mathrm{Ker}((\varphi_i)_\mathfrak{p}))=(\varphi_i)_\mathfrak{p}(\mathrm{Ker}((\varphi_i)_\mathfrak{p}))=0$ and, since $\mu_\mathfrak{p}$ is injective, $R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\otimes\mathrm{Ker}((\varphi_i)_\mathfrak{p})=0$. Since $R_\mathfrak{p}(\varphi_i)_\mathfrak{p}\simeq R_\mathfrak{p}$, we have that $\mathrm{Ker}((\varphi_i)_\mathfrak{p})=0$ and, consequently, $M_\mathfrak{p}\simeq R_\mathfrak{p}a \simeq R_\mathfrak{p}$ for every prime ideal $\mathfrak{p}$ of $R$. Moreover, $(\varphi_i)_\mathfrak{p}$ is a monomorphism, and it is also surjective, because $1\in\mathrm{Im}((\varphi_i)_\mathfrak{p})$. In other words, $(\varphi_i)_\mathfrak{p}$ is an isomorphism, for every prime ideal $\mathfrak{p}$ of $R$. By Proposition [Proposition 18](#prop:localglobal){reference-type="ref" reference="prop:localglobal"}, $\varphi_i:M\rightarrow R$ is an isomorphism. [Case 2:]{.ul} $\varphi_i$ and $a_i$ are both odd. In this case, we proceed exactly as in the first one, but now considering $(\Pi(\varphi_i))_\mathfrak{p}$ and $\Pi(\text{ev}_a)$, instead of $(\varphi_i)_\mathfrak{p}$ and $\text{ev}_a$, respectively, to show that $\Pi(\varphi_i)$ is an isomorphism. Thus, $\varphi_i=\Pi(\Pi(\varphi_i))$ is an isomorphism. It remains to prove that $M$ is finitely generated. But we have shown that $\varphi_i$ identifies $M$ with $R$, so $M$ is generated by a single homogeneous element; in particular, $M$ is finitely generated. ◻ **Proposition 53**. *Let $R$ be a Noetherian superdomain.* 1.
*Any invertible $R-$supermodule is isomorphic to a fractional superideal of $R$.* 2. *Any invertible fractional superideal contains a non-zerodivisor of $R$.* *Proof.* - Suppose that $M$ is an invertible $R-$supermodule. By Proposition [Proposition 32](#prop1){reference-type="ref" reference="prop1"}, $K(R)$ has a finite number of maximal ideals, that are the localizations of some maximal associated primes of $R$. Since $R$ is a superdomain, we have that $S=R-\mathfrak{j}_R$ corresponds to the set of non-zerodivisors of $R$. Hence, if $\mathfrak{p}$ is an associated prime of $R$, then $\mathfrak{p}$ is formed by zerodivisors, *a fortiori*, $\mathfrak{p}\cap S=\varnothing$. Using Proposition [Proposition 8](#P:2.2:A){reference-type="ref" reference="P:2.2:A"} ii), it follows that $S^{-1}R=K(R)$, $S^{-1}\mathfrak{p}=\mathfrak{p}K(R)$ and $K(R)_{\mathfrak{p}K(R)} \simeq R_\mathfrak{p}$. Since $M$ is invertible, we have $K(R)_{\mathfrak{p}K(R)}\simeq R_\mathfrak{p}\simeq M_\mathfrak{p}\simeq M\otimes_R R_\mathfrak{p}\simeq M\otimes_R K(R)_{\mathfrak{p}K(R)}$. Again by Proposition [Proposition 8](#P:2.2:A){reference-type="ref" reference="P:2.2:A"} ii), we find $(M\otimes_R K(R))_{\mathfrak{p}K(R)} \simeq M\otimes_R K(R)_{\mathfrak{p}K(R)} \simeq K(R)_{\mathfrak{p}K(R)}$, which implies that $M\otimes K(R) \simeq K(R)$, by Proposition [Proposition 39](#Prop2){reference-type="ref" reference="Prop2"}. Now, we claim that the localization map $\lambda_M^S:M\longrightarrow K(R)\otimes M \simeq S^{-1}M$ is injective. Indeed, for each prime $\mathfrak{p}$, $(\lambda_M^S)_\mathfrak{p}:M_\mathfrak{p}\simeq R_\mathfrak{p}\longrightarrow K(R)\otimes_R R_\mathfrak{p}\simeq S^{-1}R_\mathfrak{p}$ is injective, and consequently, by Proposition [Proposition 18](#prop:localglobal){reference-type="ref" reference="prop:localglobal"}, $\lambda_M^S$ is injective. 
Thus, from the embedding $M\hookrightarrow K(R)\otimes M$ and the isomorphism $K(R)\otimes M \simeq K(R)$, we can identify $M$ isomorphically with an $R-$subsupermodule $N$ of $K(R)$. In particular, $N$ is finitely generated and, since $R$ is Noetherian (by Proposition [Proposition 50](#P:3.2:ALTMAN){reference-type="ref" reference="P:3.2:ALTMAN"} iii)), there exists $d\in R-\mathfrak{j}_R=S$ such that $dN\subseteq R$, i.e., $N$ is a fractional superideal of $R$. - We will show the contrapositive. Suppose that $M\subseteq K(R)$ is a fractional superideal such that $M\cap R$ consists of zerodivisors. There is a homogeneous non-zerodivisor $u\in R$ such that $uM\subseteq R$. By Corollary [Corollary 31](#Cor3){reference-type="ref" reference="Cor3"}, there exists a nonzero homogeneous element $b\in R$ annihilated by $uM$. Hence, it follows that $M$ is annihilated by $ub$. Now, by Corollary [Corollary 28](#Cor:2.8){reference-type="ref" reference="Cor:2.8"}, there exists a prime $\mathfrak{p}$ which contains the annihilator of $ub$; consequently, the image of $ub$ in $R_\mathfrak{p}$ is nonzero, while $ub\,M_\mathfrak{p}=0$. Therefore, we conclude that $M_\mathfrak{p}\not \simeq R_\mathfrak{p}$, and it follows that $M$ is not invertible.  ◻ **Proposition 54**. *Let $R$ be a Noetherian superdomain. If $M$ and $N$ are invertible $R-$subsupermodules of $K(R)$, then the natural morphism* *$$\begin{aligned} \phi:M^{-1}N&\longrightarrow&\underline{\mathrm{Hom}}_R(M, N)\\ t&\longmapsto&% \begingroup \setlength\arraycolsep{0pt} \phi_t\colon\begin{array}[t]{c >{{}}c<{{}} c} M & \longrightarrow& N \\ m & \longmapsto& tm \end{array}% \endgroup\end{aligned}$$* *is an isomorphism.* *Proof.* By Proposition [Proposition 53](#P:Eisenbud:11.6-b){reference-type="ref" reference="P:Eisenbud:11.6-b"} ii), there is a non-zerodivisor $v$ of $R$ belonging to $M$. Consider a nonzero homogeneous element $t\in M^{-1}N$; then $vt\neq 0$ (otherwise $v$ would be a zerodivisor).
Hence, $t$ induces a nonzero element in $\underline{\mathrm{Hom}}_R(M, N)$, and the morphism $\phi:M^{-1}N\longrightarrow\underline{\mathrm{Hom}}_R(M, N)$ is injective. It remains to show that $\phi$ is surjective. For this, consider any $\varphi\in\underline{\mathrm{Hom}}_R(M, N)$ and let $w\in N$ be such that $\varphi(v)=w$. We claim that $\varphi=\phi_{w/v}$ (i.e., $\varphi$ is the image of $w/v$ under $\phi$). It is clear that $\varphi(v)=\phi_{w/v}(v)$, so the proof of the proposition follows from the next claim. [Claim.]{.ul} If two morphisms $\xi$ and $\psi$ from $M$ to $K(R)$ are such that $\xi(v)=\psi(v)$, then $\xi=\psi$. *Proof of the claim.* It suffices to show that $\xi_\mathfrak{p}=\psi_\mathfrak{p}$ for every prime ideal $\mathfrak{p}$ of $R$. Note that since $v$ is a non-zerodivisor, the annihilator of $v$ must be zero, so the image of $v$ in $M_\mathfrak{p}\simeq R_\mathfrak{p}$ is a non-zerodivisor. In this case, $\xi_\mathfrak{p}(v)=\psi_\mathfrak{p}(v)$. Thus, $v\xi_\mathfrak{p}(1)=\xi_\mathfrak{p}(v)=\psi_\mathfrak{p}(v)=v\psi_\mathfrak{p}(1)$, that is, $\xi_\mathfrak{p}(1)=\psi_\mathfrak{p}(1)$, so $\xi_\mathfrak{p}=\psi_\mathfrak{p}$. ◻ **Corollary 55**. *Let $R$ be a Noetherian superdomain. If $M$ is an invertible $R-$subsupermodule of $K(R)$, then $M^\vee\simeq M^{-1}$.* *Proof.* It is easy to show that $M^{-1}R=M^{-1}$. Since $M$ is invertible, $M^\vee\otimes_R M\simeq R$, by Proposition [Proposition 52](#prop:Eisenbud11.6-a){reference-type="ref" reference="prop:Eisenbud11.6-a"}, and we find that $M^\vee \simeq M^{-1}R=M^{-1}$, by Proposition [Proposition 54](#P:Eisenbud:11.6-c){reference-type="ref" reference="P:Eisenbud:11.6-c"}. ◻ **Proposition 56**. *Let $R$ be a Noetherian superdomain and $M$ an $R-$subsupermodule of $K(R)$. Then, $M^{-1}M=R$ if and only if $M$ is invertible.* *Proof.* Suppose first that $M\subseteq K(R)$ is an $R-$supermodule with $M^{-1}M=R$; we show that $M$ is invertible.
First, let us show that the conclusion holds when $R$ is local with maximal ideal $\mathfrak{m}$. Since $M^{-1}M=R$, we can find finitely many $a_i$ in $h(M^{-1})$ and $x_i$ in $h(M)$ such that $1=a_1x_1+\cdots+a_rx_r$. If $a_ix_i\in\mathfrak{m}$ for all $i$, then $1\in\mathfrak{m}$, which is impossible. Hence, there exists a homogeneous element $a_i\in M^{-1}$ such that $a_iM\not\subseteq\mathfrak{m}$. Consequently, $a_iM=R$, which implies that $a_i$ is a non-zerodivisor of $R$. We may assume that $a_i$ is even. Then, multiplication by $a_i$ defines an isomorphism between $M$ and $R$, which proves the claim. In the general case, if $R$ is not local, we apply the previous reasoning to $R_{\mathfrak{p}}$, with $\mathfrak{p}$ running through the set of prime ideals of $R$, and conclude that $M_\mathfrak{p}\simeq R_\mathfrak{p}$ for every $\mathfrak{p}$, that is, $M$ is invertible. Conversely, suppose that $M$ is invertible. Note that $M^{-1}M$ is a superideal of $R$. We must show that $1\in M^{-1}M$. Since $M^\vee\otimes_R M\simeq R$, we may consider $\varphi_i\in M^\vee$ and $a_i\in M$ ($i=1, \ldots, n$) such that $1=\varphi_1(a_1)+\cdots+\varphi_n(a_n)$. If $\psi_i$ is the element of $M^{-1}$ corresponding to $\varphi_i$, through the isomorphism $M^{-1}\simeq M^\vee$, then $1=\psi_1a_1+\cdots+\psi_na_n$. ◻ **Proposition 57**. *The following statements are equivalent:* - *$M$ is an invertible $R-$supermodule.* - *For any prime ideal $\mathfrak{p}$ of $R$, $M_\mathfrak{p}$ is an invertible $R_{\mathfrak{p}}-$supermodule.* - *For any maximal ideal $\mathfrak{m}$ of $R$, $M_\mathfrak{m}$ is an invertible $R_\mathfrak{m}-$supermodule.* *Proof.* - If $M$ is invertible, by Proposition [Proposition 52](#prop:Eisenbud11.6-a){reference-type="ref" reference="prop:Eisenbud11.6-a"}, $\underline{\mathrm{Hom}}_R(M, R)\otimes_R M \simeq R$.
Then, for every prime $\mathfrak{p}$ of $R$, using Proposition [Proposition 36](#Westra:Theorem:6.5.9){reference-type="ref" reference="Westra:Theorem:6.5.9"} (see Remark [Remark 37](#rem:devissage){reference-type="ref" reference="rem:devissage"}), we find $$\begin{aligned} R_\mathfrak{p}&\simeq \underline{\mathrm{Hom}}_R(M, R)_\mathfrak{p}\otimes_{R_\mathfrak{p}} M_\mathfrak{p}\\ &\simeq\underline{\mathrm{Hom}}_{R_\mathfrak{p}}(M_\mathfrak{p}, R_\mathfrak{p})\otimes_{R_\mathfrak{p}} M_\mathfrak{p}\\ &= M^{\vee}_\mathfrak{p}\otimes_{R_\mathfrak{p}}M_\mathfrak{p}.\\ \end{aligned}$$ Thus, $M_\mathfrak{p}^{\vee}\otimes_{R_\mathfrak{p}}M_\mathfrak{p}\simeq R_\mathfrak{p}$, so by Proposition [Proposition 52](#prop:Eisenbud11.6-a){reference-type="ref" reference="prop:Eisenbud11.6-a"}, $M_\mathfrak{p}$ is invertible. - Obvious. - Suppose that $M_\mathfrak{m}$ is invertible for every maximal ideal $\mathfrak{m}$ of $R$ and consider $N:=MM^{-1}$. Notice that $N$ is a superideal of $R$ such that $N_\mathfrak{m}=M_\mathfrak{m}M^{-1}_\mathfrak{m}=R_\mathfrak{m}$, for any maximal ideal $\mathfrak{m}$. In particular, we have $N\not\subseteq\mathfrak{m}$ and, consequently, $N=R$. Thus, by Proposition [Proposition 56](#P:Eisenbud:11.6-d){reference-type="ref" reference="P:Eisenbud:11.6-d"}, $M$ is an invertible $R-$supermodule.  ◻ This completes our discussion of the properties of invertible supermodules that are necessary for the present paper. Nonetheless, we present some considerations regarding the Picard group of a superring in the next subsection. ## Picard group and Cartier divisors {#Sub:sec:3.2} Let $M$ and $N$ be invertible $R-$supermodules, and denote by $[M]$ the isomorphism class of $M$. We define the *even Picard supergroup $\mathrm{s}\mathbb{P}\mathrm{ic}(R)$ of $R$* to be the group of isomorphism classes of invertible $R-$supermodules under tensor product, that is, $[M]\cdot[N]:=[M\otimes_R N]$.
Then, the binary operation "$\cdot$" endows $\mathrm{s}\mathbb{P}\mathrm{ic}(R)$ with an Abelian group structure, where the identity is the class of $R$ and the inverse of the class of $M$ is the class of $M^{\vee}$. Similarly, the set $\mathrm{s}\mathbb{C}\mathrm{art}(R)$ of invertible $R-$subsupermodules of $K(R)$ can be endowed with a group structure, in which the product and the inverse were defined in Remark [Remark 49](#rem:prodK){reference-type="ref" reference="rem:prodK"} and Proposition [Proposition 56](#P:Eisenbud:11.6-d){reference-type="ref" reference="P:Eisenbud:11.6-d"}, respectively. This group $\mathrm{s}\mathbb{C}\mathrm{art}(R)$ is called the *Cartier group of* $R$. Let $R$ be a Noetherian superdomain, and let $u\in R_{\overline{0}}$ be a non-zerodivisor. Then $uR \simeq R$, which implies that $uR$ is invertible. Hence, an even homogeneous element $u\in K(R)^*$ corresponds uniquely to the invertible supermodule $uR$, called a *principal divisor*. Now, consider the map $\psi:\mathrm{s}\mathbb{C}\mathrm{art}(R)\longrightarrow\mathrm{s}\mathbb{P}\mathrm{ic}(R)$ that assigns to an invertible $R$-subsupermodule of $K(R)$ its isomorphism class. This map is a group homomorphism, which is surjective by Proposition [Proposition 53](#P:Eisenbud:11.6-b){reference-type="ref" reference="P:Eisenbud:11.6-b"}. Moreover, if we restrict $\psi$ to the purely even homogeneous elements, the kernel is isomorphic to $K(R)_{\overline{0}}^*$. We call the quotient group $\mathrm{s}\mathbb{C}\mathrm{art}(R)/K(R)_{\overline{0}}^*$ the *ideal class group of $R$*.
Thus, the ideal class group is canonically isomorphic to the Picard group: $$\mathrm{s}\mathbb{P}\mathrm{ic}(R)\simeq\mathrm{s}\mathbb{C}\mathrm{art}(R)/K(R)_{\overline{0}}^*.$$ Note that, as a consequence of the reasoning used in the proof of Proposition [Proposition 56](#P:Eisenbud:11.6-d){reference-type="ref" reference="P:Eisenbud:11.6-d"}, if $R$ is a local superring, then any invertible fractional superideal of $R$ is isomorphic to $R$. Hence, in this case both $\mathrm{s}\mathbb{C}\mathrm{art}(R)$ and $\mathrm{s}\mathbb{P}\mathrm{ic}(R)$ are trivial. Let $R$ and $R'$ be Noetherian superrings, $\varphi:R\longrightarrow R'$ a morphism and $M$ an invertible $R-$supermodule. Recall that, by "extension of scalars", $M\otimes_R R'$ admits an $R'$-supermodule structure and is a finitely generated $R'$-supermodule. Now, let $\mathfrak{q}$ be any prime ideal of $R'$; then (by Proposition [Proposition 16](#P:2.2:B){reference-type="ref" reference="P:2.2:B"}) we obtain the isomorphisms: $$\begin{aligned} (M\otimes_R R')_{\mathfrak{q}}&\simeq M\otimes_R R'_{\mathfrak{q}}\\ &\simeq M\otimes_R(R_{\varphi^{-1}(\mathfrak{q})}\otimes_{R_{\varphi^{-1}(\mathfrak{q})}}R'_{\mathfrak{q}})\\ &\simeq(M\otimes_R R_{\varphi^{-1}(\mathfrak{q})})\otimes_{R_{\varphi^{-1}(\mathfrak{q})}}R'_{\mathfrak{q}}\\ &\simeq M_{\varphi^{-1}(\mathfrak{q})}\otimes_{R_{\varphi^{-1}(\mathfrak{q})}}R'_{\mathfrak{q}}\\ &\simeq R_{\varphi^{-1}(\mathfrak{q})}\otimes_{R_{\varphi^{-1}(\mathfrak{q})}}R'_{\mathfrak{q}}\\ &\simeq R'_\mathfrak{q}.\end{aligned}$$ That is, $M\otimes_R R'$ is an invertible $R'$-supermodule. In particular, the map $\varphi_{\ast}:\mathrm{s}\mathbb{P}\mathrm{ic}(R)\longrightarrow\mathrm{s}\mathbb{P}\mathrm{ic}(R')$ which sends the class of $M$ to the class of $M\otimes_R R'$ is a well-defined group homomorphism.
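In the purely even case (that is, $R_{\overline{1}}=0$, so that $\mathfrak{j}_R=0$), a superring is an ordinary commutative ring and the groups above reduce to the classical Cartier and Picard groups. The following illustration of a nontrivial class group is classical commutative algebra, not part of the super setting of this paper:

```latex
% Purely even example: the Dedekind domain R = \mathbb{Z}[\sqrt{-5}].
% The ideal \mathfrak{p} = (2, 1+\sqrt{-5}) is invertible but not principal,
% while its square is principal:
\[
  \mathfrak{p}^{2}
    = \bigl(4,\; 2(1+\sqrt{-5}),\; (1+\sqrt{-5})^{2}\bigr)
    = (2).
\]
% Hence the class of \mathfrak{p} has order two in the quotient of Cartier
% divisors by principal divisors, and in fact
\[
  \mathrm{Pic}\bigl(\mathbb{Z}[\sqrt{-5}]\bigr) \simeq \mathbb{Z}/2\mathbb{Z}.
\]
```

Since $\mathfrak{j}_R=0$ here, $\mathrm{s}\mathbb{P}\mathrm{ic}$ and $\mathrm{s}\mathbb{C}\mathrm{art}$ coincide with the usual Picard and Cartier groups.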
If we denote by $\mathsf{sRngs}$ the category of Noetherian superrings and by $\mathsf{aGrps}$ the category of Abelian groups, then we have a covariant functor $$\begin{aligned} \mathrm{s}\mathbb{P}\mathrm{ic}:\mathsf{sRngs}& \longrightarrow& {\mathsf{aGrps}} \\ \nonumber R &\longmapsto& \mathrm{s}\mathbb{P}\mathrm{ic}(R) \\ \nonumber \varphi:R\longrightarrow R' & \longmapsto& \varphi_{\ast}:\mathrm{s}\mathbb{P}\mathrm{ic}(R)\longrightarrow\mathrm{s}\mathbb{P}\mathrm{ic}(R') \nonumber\end{aligned}$$ Indeed, for any Noetherian superring $R$, $\mathrm{id}_{R\ast}=\mathrm{id}_{\mathrm{s}\mathbb{P}\mathrm{ic}(R)}$. Moreover, if $R, R', R''$ are Noetherian superrings and $\varphi:R\longrightarrow R'$ and $\psi:R'\longrightarrow R''$ are morphisms, then $(\psi\circ\varphi)_{\ast}=\psi_{\ast}\circ\varphi_{\ast}$. **Proposition 58**. *Let $R$ be a Noetherian split superring. Then, the projection $\pi_R:R\longrightarrow\overline{R}$ induces a surjective group homomorphism $\pi_{R\ast}:\mathrm{s}\mathbb{P}\mathrm{ic}(R)\longrightarrow\mathrm{s}\mathbb{P}\mathrm{ic}(\overline{R})$ and $\mathrm{s}\mathbb{P}\mathrm{ic}(R)$ splits as $\mathrm{s}\mathbb{P}\mathrm{ic}(R) \simeq\mathrm{s}\mathbb{P}\mathrm{ic}(\overline{R})\oplus\mathrm{Ker}(\pi_{R\ast})$.* *Proof.* Since $R$ is a split superring, there exists a splitting morphism $\sigma:\overline{R}\longrightarrow R$ such that $\pi_R\circ\sigma=\mathrm{id}_{\overline{R}}$ (see Remark [Remark 24](#rem:splitring){reference-type="ref" reference="rem:splitring"}). Applying the $\mathrm{s}\mathbb{P}\mathrm{ic}$ functor to $\pi_R\circ\sigma=\mathrm{id}_{\overline{R}}$, we obtain $\pi_{R\ast}\circ \sigma_{\ast}=\mathrm{id}_{\mathrm{s}\mathbb{P}\mathrm{ic}(\overline{R})}$, that is, $\pi_{R\ast}:\mathrm{s}\mathbb{P}\mathrm{ic}(R)\longrightarrow\mathrm{s}\mathbb{P}\mathrm{ic}(\overline{R})$ is a surjective group homomorphism.
Thus, the isomorphism $\mathrm{s}\mathbb{P}\mathrm{ic}(R) \simeq\mathrm{s}\mathbb{P}\mathrm{ic}(\overline{R})\oplus\mathrm{Ker}(\pi_{R\ast})$ follows from the split exact sequence of $\mathds{Z}$-modules: $$\xymatrix{0\ar[r]&\mathrm{Ker}(\pi_{R \ast})\ar[r]^{\iota}&\mathrm{s}\mathbb{P}\mathrm{ic}(R)\ar[r]^{\pi_{R\ast}}&\mathrm{s}\mathbb{P}\mathrm{ic}(\overline{R})\ar[r]\ar@/^1pc/[l]^{\sigma_{\ast}}&0.}$$ ◻ # Projective supermodules {#Sec:Projective:Supermodules} In this section we define projective supermodules and study some of their properties. We also exhibit the relation between projective and invertible supermodules. Since this concept is studied more extensively in [@morye2022notes] and [@westrathesis Chapter 6], our treatment is limited to the aspects strictly necessary for this paper. **Definition 59**. An $R-$supermodule $M$ is *projective* if the functor $\underline{\mathrm{Hom}}_R(M, \cdot)$ is (right) exact. **Proposition 60**. *Let $R$ be a superring and $M$ an $R-$supermodule. The following conditions are equivalent.* 1. *$M$ is projective.* 2. *For any morphism $M\longrightarrow H$ and any surjective morphism $N\longrightarrow H$, there is a morphism $M\longrightarrow N$ such that the following diagram commutes: $$\xymatrix{&M\ar[d]\ar@{.>}[ld]&\\N\ar[r]&H\ar[r]&0.}$$* 3. *Every exact sequence $0\longrightarrow H\longrightarrow N\longrightarrow M\longrightarrow 0$ splits.* 4. *There is a free $R-$supermodule $F$ such that $F\simeq M\oplus Q$, for some $R-$supermodule $Q$.* *Proof.* See [@morye2022notes Theorem 3.1] and [@westrathesis Lemma 6.2.2]. ◻ The following proposition is the superization of the *Dual Basis Theorem* (see [@narkiewicz2004elementary Proposition 1.35]). **Proposition 61**. *Let $R$ be a superring and $M$ an $R-$supermodule.
Then $M$ is projective if and only if there exist collections $\{a_i\mid i\in I\}\subseteq h(M)$ and (homogeneous) morphisms $\{\phi_i\mid i\in I\}\subseteq\underline{\mathrm{Hom}}_R(M, R)$ such that every $m\in M$ has the form $m=\sum_{i\in I}\phi_{i}(m)a_i$, where only finitely many $\phi_i(m)$ are nonzero.* *Proof.* The "only if" part: Suppose that $M$ is projective and let $F$ and $Q$ be as in Proposition [Proposition 60](#Westra:Lemma:6.2.2){reference-type="ref" reference="Westra:Lemma:6.2.2"} iv). Let $\{x_i\mid i\in I\}$ be a free basis of $F$ formed by homogeneous elements. Let $m\in M$ be arbitrary, and consider the inclusion $\iota_M:M\longrightarrow F$. Then, we may interpret $m$ as an element of $F$ of the form $$m=\sum_{i\in I}b_i x_i$$ where $b_i\in R$ and only finitely many of them are nonzero. Now, if we send this $m$ to $M$ by the projection morphism $\pi_M:F\longrightarrow M$, then $m$ has the form $$m=\sum_{i\in I}b_i\pi_M(x_i).$$ Writing $a_i:=\pi_M(x_i)$ and $\phi_i(m):=b_i$, we conclude that $m$ has the desired form. The "if" part: Let $a_i$ and $\phi_i$ with $i\in I$ be given such that any $m\in M$ has the form $m=\sum_{i\in I}\phi_{i}(m)a_i$, where only finitely many of the $\phi_i(m)$ are nonzero. Let $F$ be a free $R-$supermodule with a homogeneous basis of free generators $\{x_i\mid i\in I\}$. Consider the morphisms $\xi:F\longrightarrow M$ sending $x_i$ to $a_i$ (resp. $\Pi a_i$) if $|x_i|=|a_i|$ (resp. $|x_i|\neq|a_i|$) and $\theta:M\longrightarrow F$ sending $a_i$ to $x_i$ (resp. $\Pi x_i$) if $|a_i|=|x_i|$ (resp. $|a_i|\neq|x_i|$).
Therefore, we have a split exact sequence $$\xymatrix{0\ar[r]&\mathrm{Ker}(\xi)\ar[r]&F\ar[r]^{\xi}&M\ar[r]\ar@/^1pc/[l]^{\psi}&0}$$ where $\psi$ is the map $$\sum_{i\in I}\phi_i(m)a_i\longmapsto\sum_{i\in I}\phi_i(m)\theta(a_i).$$ Hence, $F\simeq M\oplus\mathrm{Ker}(\xi)$, so $M$ is projective, by Proposition [Proposition 60](#Westra:Lemma:6.2.2){reference-type="ref" reference="Westra:Lemma:6.2.2"} iv). ◻ **Corollary 62**. *Let $R$ be a superring such that every nonzero superideal $\mathfrak{a}$ of $R$, other than $\mathfrak{j}_R$, is invertible. Then every nonzero superideal of $R$, except $\mathfrak{j}_R$, is a projective $R-$supermodule.* *Proof.* Let $\mathfrak{a}\neq\mathfrak{j}_R$ be a nonzero superideal of $R$. Since $\mathfrak{a}$ is invertible, $\mathfrak{a}\mathfrak{a}^{-1}=R$, so we can find finitely many homogeneous $a_1, \ldots, a_d\in\mathfrak{a}$ and $x_1, \ldots, x_d\in\mathfrak{a}^{-1}$ such that $1=a_1x_1+\cdots+a_dx_d$. If we consider the morphism $\phi_i:\mathfrak{a}\longrightarrow R, a\longmapsto ax_i$, the conditions of Proposition [Proposition 61](#DualBasisTheorem){reference-type="ref" reference="DualBasisTheorem"} are satisfied (with $\{a_i\mid i\leq d\}$ and $\{\phi_i\mid i\leq d\}$), and therefore $\mathfrak{a}$ is projective. ◻ **Corollary 63**. *Let $R$ be a superdomain. Then any nonzero projective fractional superideal, not equal to $\mathfrak{j}_{K(R)}$, is invertible.* *Proof.* Let $M$ be a nonzero and projective fractional superideal of $R$, other than $\mathfrak{j}_{K(R)}$. Consider $\phi_i:M\longrightarrow R$ and $a_i$ with $i\in I$, as in Proposition [Proposition 61](#DualBasisTheorem){reference-type="ref" reference="DualBasisTheorem"}. For $b\in M-\mathfrak{j}_{K(R)}$, we define $d_i:=b^{-1}\phi_i(b)$, which satisfies $a\phi_i(b)=b\phi_i(a)$, for any $a\in M$. Since $b\not\in\mathfrak{j}_{K(R)}$, we have that $\phi_i(a)=d_i a$, for all $a\in M$, which implies that $d_i M\subseteq R$.
Thus, every $a\in M$ can be written as $a=\sum_{i\in I}d_ia_ia$, where all but finitely many of the summands are zero. Finally, taking $a\in M-\mathfrak{j}_{K(R)}$, which is invertible in $K(R)$, we obtain $1=\sum_{i\in I}d_ia_i\in M^{-1}M$, that is, $M^{-1}M=R$, which completes the proof. ◻ # Dedekind superrings {#Sec:Dedekind} In this section we define and study the properties of Dedekind superdomains. ## Integral elements in superrings {#Subsec:Integrally} Recall that, in this paper, any superring $R$ is supercommutative with unit. Moreover, for any pair of superrings $R, R'$, the inclusion $R\subseteq R'$ means that both have the same identity and the $\mathds{Z}_2$-gradings are compatible, i.e., $R_i\subseteq R'_i$ for all $i\in\mathds{Z}_2$. Motivated by the definition of integral elements for unitary non-commutative rings in [@atterton_1972], we define integral elements in superrings. Although inspired by [@atterton_1972], our definition differs from it in some respects. Indeed, since all our rings are $\mathds{Z}_2$-graded, we want the integral closure of a superring to be a $\mathds{Z}_2$-graded ring. Therefore, an element $x$ will be integral provided that its homogeneous components are integral (note that such an $x$ satisfies the definition of [@atterton_1972]). Before stating the definition, we fix some notation. Let $R$ and $R'$ be superrings such that $R\subseteq R'$. An $R$-supermodule of the form $M=Rb_1+\cdots+ Rb_n$, where $b_1,\ldots,b_n\in R'_{\overline{0}}$, is called *even finitely generated*. Observe that $M$ is finitely generated over $R$. Furthermore, if $M=Rb_1+\cdots+ Rb_n$ and $N=Rc_1+\cdots+ Rc_m$ are even finitely generated $R-$supermodules, then their product $MN=\sum_{i,j}Rb_ic_j$ is an even finitely generated $R-$supermodule. **Definition 64**. Let $R$ and $R'$ be superrings such that $R\subseteq R'$. A homogeneous element $b\in R'$ is called *integral over* $R$ if there exists an even finitely generated $R$-supermodule, $M=Rb_1+\cdots+ Rb_n$, such that $b M\subseteq M$.
In general, $b\in R'$ is called *integral over* $R$ if its homogeneous components are integral over $R$. The set of elements in $R'$ that are integral over $R$, denoted by ${\bf\mathrm{cl}}_{R'}(R)$, is called the *integral closure of $R$ in $R'$*. The superring $R$ is said to be *integrally closed in* $R'$ if $R={\bf\mathrm{cl}}_{R'}(R)$. A superdomain $R$ is called *integrally closed* (or *normal*) if it is integrally closed in $K(R)$. In this latter case, we write ${\bf\mathrm{cl}}(R)$ instead of ${\bf\mathrm{cl}}_{K(R)}(R)$. **Theorem 65**. *Let $R$ and $R'$ be superrings with $R\subseteq R'$. Then ${\bf\mathrm{cl}}_{R'}(R)$ is a superring containing $R$.* *Proof.* Suppose first that $x, y\in {\bf\mathrm{cl}}_{R'}(R)$ are homogeneous elements. Then, for some $s,d\in\mathbb{N}$, there exist $x_1, \ldots, x_s$, $y_1, \ldots, y_d\in R'{_{\overline{0}}}$ such that the even finitely generated $R$-supermodules $M=Rx_1+\cdots+R x_s$ and $N=Ry_1+\cdots+Ry_d$ satisfy $xM\subseteq M$ and $yN\subseteq N$. In this case, $MN$ is an even finitely generated $R$-supermodule that satisfies $xyMN\subseteq MN$ and $(x+y)MN\subseteq MN$, as one can easily verify. Thus, $xy$ and $x+y$ lie in ${\bf\mathrm{cl}}_{R'}(R)$. In the general case, let $x$ and $y$ be non-homogeneous in ${\bf\mathrm{cl}}_{R'}(R)$. Then, by Definition [Definition 64](#Definition:IntegralElements){reference-type="ref" reference="Definition:IntegralElements"}, $x_{\overline{0}}, x_{\overline{1}}, y_{\overline{0}}, y_{\overline{1}}\in{\bf\mathrm{cl}}_{R'}(R)$, and by the first part of the proof, $x_{\overline{0}}+y_{\overline{0}}, x_{\overline{1}}+y_{\overline{1}}, x_{\overline{0}}y_{\overline{0}}+x_{\overline{1}}y_{\overline{1}}, x_{\overline{0}}y_{\overline{1}}+x_{\overline{1}}y_{\overline{0}}\in{\bf\mathrm{cl}}_{R'}(R)$, so $x+y$ and $xy\in{\bf\mathrm{cl}}_{R'}(R)$. Clearly, $1\in {\bf\mathrm{cl}}_{R'}(R)$ and therefore ${\bf\mathrm{cl}}_{R'}(R)$ is a unitary ring.
Let ${\bf\mathrm{cl}}_{R'}(R){_{\overline{0}}}={\bf\mathrm{cl}}_{R'}(R)\cap R_{\overline{0}}'$ and ${\bf\mathrm{cl}}_{R'}(R){_{\overline{1}}}={\bf\mathrm{cl}}_{R'}(R)\cap R_{\overline{1}}'$. Thus, the decomposition ${\bf\mathrm{cl}}_{R'}(R)={\bf\mathrm{cl}}_{R'}(R){_{\overline{0}}}\oplus{\bf\mathrm{cl}}_{R'}(R){_{\overline{1}}}$ endows ${\bf\mathrm{cl}}_{R'}(R)$ with a $\mathds{Z}_2$-grading compatible with that of $R'$. Finally, to show that $R\subseteq {\bf\mathrm{cl}}_{R'}(R)$, it suffices to note that if $x\in R$ is a homogeneous element, then the even finitely generated $R$-supermodule $M:=R1$ is such that $xM\subseteq M$. Thus, ${\bf\mathrm{cl}}_{R'}(R)$ is a superring containing $R$. ◻ **Theorem 66**. *Let $R\subseteq R' \subseteq R''$ be superrings. If any element of $R''$ is integral over $R'$ and any element of $R'$ is integral over $R$, then any element of $R''$ is integral over $R$. In particular, ${\bf\mathrm{cl}}_{R'}(R)$ is integrally closed in $R'$.* *Proof.* The proof follows the same arguments as in [@atterton_1972 Theorem 2]. ◻ Let $R$ and $R'$ be commutative rings with $R\subseteq R'$. In commutative algebra, an element $x\in R'$ is *integral over* $R$ if $x$ is a root of a monic polynomial with coefficients in $R$ (see [@atiyah p. 59]). However, this characterization is not available for non-commutative rings. A weaker (and useful) version is provided by the following proposition. **Proposition 67**. *Let $R\subseteq R'$ be superrings. If $b\in R'$ is integral over $R$ and $b$ commutes with every element of $R$, then $b$ satisfies the equation $$\label{eq:AttP2} b^n+a_1b^{n-1}+\cdots+a_{n-1}b+a_n=0, \quad \text{where} \quad a_1, \ldots, a_n\in R.$$ Conversely, if $b\in R'{_{\overline{0}}}$ satisfies an equation of the form [\[eq:AttP2\]](#eq:AttP2){reference-type="eqref" reference="eq:AttP2"}, then $b\in {\bf\mathrm{cl}}_{R'}(R)$.* *Proof.* The proof is a very close adaptation of the proof of Property 1 in [@atterton_1972 Section 1]. ◻ **Remark 68**.
Let $R$ be a regular local superring and suppose that $z^{[s]}=0$ for some minimal set of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$. Then, by Proposition [Proposition 43](#masu-zubprop5.2){reference-type="ref" reference="masu-zubprop5.2"}, $R_{\overline{1}}^2=\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})=R_{\overline{0}}$ and we find that $1$ must be a zerodivisor, which is clearly a contradiction. Thus, $z^{[s]}\neq0$ is a necessary condition for regularity. This condition is discussed further below, where its role in the definition of Dedekind superrings becomes apparent. **Definition 69**. Let $R$ be a Noetherian superdomain that is not a superfield. Then $R$ is called a *Dedekind superring* or *Dedekind superdomain* if it satisfies the following conditions: 1. $R$ is integrally closed. 2. Any prime ideal of $R$, other than $\mathfrak{j}_R$, is maximal. 3. For any minimal set $z_1, \ldots, z_s$ of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$, $z^{[s]}\neq0$. ## Prime factorization In this subsection we show that in a Noetherian integrally closed superdomain $R$ that is not a superfield, and such that every prime ideal of $R$, not equal to $\mathfrak{j}_R$, is maximal, every nonzero proper superideal admits an essentially unique factorization into a product of prime superideals. In particular, the property holds for Dedekind superdomains. The results presented in this subsection are inspired by the material on Dedekind domains in Chapter 1, Section 1 of [@narkiewicz2004elementary]. **Lemma 70**. *Let $R$ be a Noetherian integrally closed superdomain that is not a superfield. If $x\in K(R)$ is an even element with $x\not\in R$, then any nonzero prime ideal $\mathfrak{p}$ of $R$, other than $\mathfrak{j}_R$, satisfies $x\,\mathfrak{p}\not\subseteq\mathfrak{p}$.* *Proof.* Suppose, for a contradiction, that $x\,\mathfrak{p}\subseteq\mathfrak{p}$.
Since $\mathfrak{p}\neq\mathfrak{j}_R$, we can take some $a\in\mathfrak{p}-\mathfrak{j}_R=\mathfrak{p}_{\overline{0}}- R_{\overline{1}}^2$. By hypothesis, $x\mathfrak{p}\subseteq\mathfrak{p}$, hence $x^ia\in\mathfrak{p}\subseteq R$ for any integer $i>0$. Then, defining $\mathfrak{a}_i:=(a, x^1a, x^2a, \ldots, x^ia)_R$, for each integer $i>0$, we obtain an ascending chain of superideals of $R$, given by $\mathfrak{a}_1\subseteq\mathfrak{a}_2\subseteq\cdots\subseteq \mathfrak{a}_i\subseteq\mathfrak{a}_{i+1}\subseteq\cdots$. Since $R$ is Noetherian, there exists $d$ such that $\mathfrak{a}_{i+d}=\mathfrak{a}_d$ for every $i$. This implies that $x^{d+1}a=b_0a+b_1x^1a+b_2x^2a+\cdots+b_dx^da$, where $b_0, \ldots, b_d\in R$. Because $a\not\in\mathfrak{j}_R$, the element $a$ is invertible in $K(R)$, and we get the equation $x^{d+1}=b_0+b_1x^1+b_2x^2+\cdots+b_dx^d$. But then, by Proposition [Proposition 67](#Att:P2){reference-type="ref" reference="Att:P2"}, $x$ is integral over $R$; since $x\notin R$, this contradicts the hypothesis that $R$ is integrally closed. ◻ **Lemma 71**. *Let $R$ be a Noetherian superring and $\mathfrak{a}$ a nonzero proper superideal of $R$ that is not prime. Then, there are finitely many prime ideals $\mathfrak{p}_1, \ldots, \mathfrak{p}_d$ of $R$ such that $\mathfrak{p}_1\cdot\mathfrak{p}_2\cdots\mathfrak{p}_d\subseteq\mathfrak{a}\subseteq\mathfrak{p}_1\cap\mathfrak{p}_2\cap\cdots\cap\mathfrak{p}_d$.* *Proof.* Let $\mathcal{S}=\{\mathfrak{q}_i\mid i\in I\}$ be the collection of proper nonzero superideals $\mathfrak{q}$ of $R$ that are not prime and for which there is no finite collection of prime ideals $\mathfrak{p}_1, \ldots, \mathfrak{p}_d$ of $R$ with $\mathfrak{p}_1\cdot\mathfrak{p}_2\cdot\cdots\cdot\mathfrak{p}_d\subseteq\mathfrak{q}\subseteq\mathfrak{p}_1\cap\mathfrak{p}_2\cap\cdots\cap\mathfrak{p}_d$. We prove that $\mathcal{S}=\varnothing$. Assume, for a contradiction, that $\mathcal{S}$ is nonempty. Because $R$ is Noetherian, $\mathcal{S}$ has a maximal element, say $\mathfrak{a}$.
Since $\mathfrak{a}$ is not a prime ideal, there exist homogeneous elements $c, d\in R$, neither of them in $\mathfrak{a}$, whose product lies in $\mathfrak{a}$. Consider the superideals $\mathfrak{c}=\mathfrak{a}+cR$ and $\mathfrak{d}=\mathfrak{a}+dR$. Observe that $\mathfrak{a}\subsetneq\mathfrak{c}$, $\mathfrak{a}\subsetneq\mathfrak{d}$ and $\mathfrak{c}\mathfrak{d}\subseteq\mathfrak{a}\subseteq\mathfrak{c}\cap\mathfrak{d}$. On one hand, if $\mathfrak{c}$ is not proper, i.e., $\mathfrak{c}=R$, then $\mathfrak{d}=\mathfrak{c}\mathfrak{d}\subseteq\mathfrak{a}\subseteq\mathfrak{c}\cap\mathfrak{d}=\mathfrak{d}$, i.e., $\mathfrak{a}=\mathfrak{d}$, which is not possible. A similar reasoning applies to prove that $\mathfrak{d}$ is a proper superideal. On the other hand, if $\mathfrak{c}=0$, then $\mathfrak{a}+cR=0$; in particular, $c=c\cdot1\in cR=0$, so $c=0\in\mathfrak{a}$, which contradicts the choice of $c$. Hence $\mathfrak{c}\neq0$, and analogously we prove that $\mathfrak{d}\neq0$. Therefore, there exist finitely many prime ideals $\mathfrak{p}_1, \ldots, \mathfrak{p}_s, \mathfrak{q}_1, \ldots, \mathfrak{q}_t$ such that $\mathfrak{p}_1\cdot\mathfrak{p}_2\cdots\mathfrak{p}_s\subseteq\mathfrak{c}\subseteq\mathfrak{p}_1\cap\mathfrak{p}_2\cap\cdots\cap\mathfrak{p}_s$ and $\mathfrak{q}_1\cdot\mathfrak{q}_2\cdots\mathfrak{q}_t\subseteq\mathfrak{d}\subseteq\mathfrak{q}_1\cap\mathfrak{q}_2\cap\cdots\cap\mathfrak{q}_t$. Thus, $\mathfrak{p}_1\cdot\mathfrak{p}_2\cdots\mathfrak{p}_s\cdot\mathfrak{q}_1\cdot\mathfrak{q}_2\cdot\cdots\cdot\mathfrak{q}_t\subseteq\mathfrak{a}\subseteq\mathfrak{p}_1\cap\mathfrak{p}_2\cap\cdots\cap\mathfrak{p}_s\cap \mathfrak{q}_1\cap\mathfrak{q}_2\cap\cdots\cap\mathfrak{q}_t$, contradicting the maximality of $\mathfrak{a}$. We conclude that $\mathcal{S}=\varnothing$, which completes the proof of the lemma. ◻ **Lemma 72**. *Let $R$ be a Noetherian integrally closed superdomain that is not a superfield, such that every prime ideal of $R$, other than $\mathfrak{j}_R$, is maximal.
If $\mathfrak{p}\neq\mathfrak{j}_R$ is a nonzero prime ideal of $R$, then $\mathfrak{p}$ is invertible, i.e., $\mathfrak{p}^{-1}\mathfrak{p}=R$.* *Proof.* For $a\in\mathfrak{p}-\mathfrak{j}_R=\mathfrak{p}_{\overline{0}}-R_{\overline{1}}^2$, by Lemma [Lemma 71](#lem:facprime){reference-type="ref" reference="lem:facprime"}, there exists a minimal set of prime ideals of $R$, $\mathfrak{p}_1, \ldots, \mathfrak{p}_s$, such that $\mathfrak{p}_1\cdots\mathfrak{p}_s\subseteq aR\subseteq\mathfrak{p}_1\cap\cdots\cap \mathfrak{p}_s$. In particular, $\mathfrak{p}_1\cdots\mathfrak{p}_s\subseteq\mathfrak{p}$, because $aR\subseteq\mathfrak{p}$. By [@westrathesis Theorem 4.1.3], it follows that $\mathfrak{p}_i\subseteq \mathfrak{p}$ for some $i$, and thus $\mathfrak{p}=\mathfrak{p}_i$, since any prime ideal is maximal. We may assume, without loss of generality, that $\mathfrak{p}=\mathfrak{p}_1$. By the minimality of $s$, the product $\mathfrak{p}_2\cdots\mathfrak{p}_s$ cannot be contained in $aR$. Hence, there exists a homogeneous element $b\in\mathfrak{p}_2\cdots\mathfrak{p}_s$ with $b\not\in aR$. Now, if we define $x:=a^{-1}b$, then for any $y\in\mathfrak{p}$, we find that $by\in\mathfrak{p}_1\cdots\mathfrak{p}_s$ and, consequently, $by\in aR$. Therefore, $xy=a^{-1}by\in R$, i.e., $x\,\mathfrak{p}\subseteq R$. Observe that $x\in\mathfrak{p}^{-1}$ and $x\not\in R$. Indeed, if $x\in R$, then $b=ax\in aR$, which is impossible. Since $\mathfrak{p}^{-1}\mathfrak{p}$ is clearly a superideal of $R$ containing $\mathfrak{p}$, by the maximality of $\mathfrak{p}$, it follows that there are only two possible cases: $\mathfrak{p}^{-1}\mathfrak{p}=R$ or $\mathfrak{p}^{-1}\mathfrak{p}=\mathfrak{p}$. If the latter case occurs, then $x\,\mathfrak{p}\subseteq\mathfrak{p}^{-1}\mathfrak{p}\subseteq\mathfrak{p}$, which contradicts Lemma [Lemma 70](#xpnotinp){reference-type="ref" reference="xpnotinp"}. Thus, $\mathfrak{p}$ is an invertible superideal of $R$.
◻ Now we prove the main theorem of this subsection. **Theorem 73**. *Let $R$ be a Noetherian superdomain that is not a superfield. If $R$ is integrally closed and every prime superideal in $R$, other than $\mathfrak{j}_R$, is maximal, then any nonzero proper superideal $\mathfrak{a}$ of $R$ has a factorization $\mathfrak{a}=\mathfrak{p}_1\cdots\mathfrak{p}_s$ into prime ideals, called "the factors of $\mathfrak{a}$". Moreover, if $\mathfrak{a}$ admits two factorizations $\mathfrak{a}=\mathfrak{p}_1\cdots\mathfrak{p}_s$ and $\mathfrak{a}=\mathfrak{q}_1\cdots\mathfrak{q}_r$, where $\mathfrak{j}_R$ is not a factor, then necessarily $s=r$ and $\{\mathfrak{p}_i\}$ is a permutation of $\{\mathfrak{q}_j\}$.* *Proof.* Assume, for a contradiction, that $\mathfrak{a}$ is a nonzero proper superideal of $R$ such that $\mathfrak{a}\neq\mathfrak{p}_1\cdots\mathfrak{p}_s$ for any collection of prime ideals $\mathfrak{p}_i$, and choose $\mathfrak{a}$ so that the number $s$ of prime ideals provided by Lemma [Lemma 71](#lem:facprime){reference-type="ref" reference="lem:facprime"} is minimal. Thus, we can consider a minimal set of prime ideals of $R$, $\mathfrak{p}_1, \ldots, \mathfrak{p}_s$, such that $\mathfrak{p}_1\cdot\cdots\cdot\mathfrak{p}_s\subseteq \mathfrak{a}\subseteq \mathfrak{p}_1\cap\cdots\cap\mathfrak{p}_s$. If $s=1$, then $\mathfrak{p}_1\subseteq\mathfrak{a}\subseteq\mathfrak{p}_1$, so $\mathfrak{a}=\mathfrak{p}_1$ is prime, contradicting the choice of $\mathfrak{a}$. Now, suppose that $s\geq 2$ and let $\mathfrak{p}$ be a prime ideal containing $\mathfrak{a}$ (e.g., a maximal ideal containing $\mathfrak{a}$). Thus, $\mathfrak{p}$ contains $\mathfrak{p}_1\cdots\mathfrak{p}_s$, so it must contain one of the $\mathfrak{p}_i$, say $\mathfrak{p}_1\subseteq\mathfrak{p}$. Again, by [@westrathesis Theorem 4.1.3] and the fact that any prime superideal is maximal, we have $\mathfrak{p}=\mathfrak{p}_1$. We can assume (without loss of generality) that $\mathfrak{p}\neq\mathfrak{j}_R$ and so, by Lemma [Lemma 72](#lem:Dedinv){reference-type="ref" reference="lem:Dedinv"}, $\mathfrak{p}$ is invertible.
Then $\mathfrak{p}_2\cdots\mathfrak{p}_s\subseteq\mathfrak{p}^{-1}\mathfrak{a}\subseteq\mathfrak{p}^{-1}\mathfrak{p}=R$, which implies that $\mathfrak{p}^{-1}\mathfrak{a}$ is a superideal of $R$. This superideal contains a product of $s-1$ prime ideals, but $\mathfrak{a}$ was chosen so that $s$ is minimal; consequently, $\mathfrak{p}^{-1}\mathfrak{a}$ must be a product of prime ideals, say $\mathfrak{p}^{-1}\mathfrak{a}=\mathfrak{q}_1\cdots\mathfrak{q}_r$, with the $\mathfrak{q}_i$ prime. Therefore, $\mathfrak{a}=\mathfrak{p}\cdot\mathfrak{q}_1\cdots\mathfrak{q}_r$, which contradicts the choice of $\mathfrak{a}$. This completes the proof of the existence of the prime factorization. For the uniqueness, suppose that $\mathfrak{a}=\mathfrak{p}_1\cdots\mathfrak{p}_s=\mathfrak{q}_1\cdots\mathfrak{q}_r$ are two such factorizations. Without loss of generality, we can assume that $s\leq r$, and we take $\mathfrak{a}$ with $s$ minimal. Now, $\mathfrak{q}_1\cdots\mathfrak{q}_r\subseteq\mathfrak{p}_1$ and, since $\mathfrak{p}_1$ is prime, one of the $\mathfrak{q}_i$ must be contained in it. Reordering the $\mathfrak{q}_i$ (if necessary) we can assume that $\mathfrak{q}_1\subseteq\mathfrak{p}_1$, so $\mathfrak{q}_1=\mathfrak{p}_1$, because any prime is maximal. Thus $\mathfrak{p}_2\cdots\mathfrak{p}_s=\mathfrak{p}_1^{-1}\mathfrak{a}=\mathfrak{q}_2\cdots\mathfrak{q}_r$. But then the superideal $\mathfrak{p}_2\cdots\mathfrak{p}_s$ has two decompositions into prime ideals of length less than $s$, contradicting the minimality of $s$. Thus, $r=s$ and $\{\mathfrak{p}_1, \ldots, \mathfrak{p}_s\}=\{\mathfrak{q}_1, \ldots, \mathfrak{q}_r\}$, which completes the proof of the theorem. ◻ **Remark 74**.
In the situation of Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"}: - If $\mathfrak{p}$ is any prime ideal other than $\mathfrak{j}_R$, then there exists no superideal $\mathfrak{q}\neq \mathfrak{p}$ such that $\mathfrak{p}=\mathfrak{j}_R\mathfrak{q}$. Indeed, if such a $\mathfrak{q}$ existed, we would have $\mathfrak{p}\subseteq\mathfrak{j}_R$ and, consequently, $\mathfrak{p}=\mathfrak{j}_R$, which is impossible. - Suppose that a superideal $\mathfrak{a}$ has a prime factorization of the form $\mathfrak{a}=\mathfrak{p}_1\cdots\mathfrak{p}_s\cdot\mathfrak{j}_R^n$, with $n\geq1$, and each $\mathfrak{p}_i\neq\mathfrak{j}_R$. If $\mathfrak{a}=\mathfrak{q}_1\cdots\mathfrak{q}_r$ is another prime factorization of $\mathfrak{a}$, then $\mathfrak{j}_R^n=\mathfrak{p}_1^{-1}\cdots\mathfrak{p}_s^{-1}\mathfrak{q}_1\cdots\mathfrak{q}_r$ is a superideal of $R$ contained in $\mathfrak{j}_R$. In particular, we get $\mathfrak{q}_1\cdots\mathfrak{q}_r\subseteq\mathfrak{j}_R$ and, consequently, $\mathfrak{q}_i\subseteq \mathfrak{j}_R$ and hence $\mathfrak{q}_i=\mathfrak{j}_R$, for some $i=1, 2, \ldots, r$. Hence, if $\mathfrak{j}_R$ appears in one factorization of a superideal, it must appear in every other factorization. In this paper we are not interested in superideals $\mathfrak{a}$ where $\mathfrak{j}_R$ appears as a factor, so they were excluded from the statement of Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"}. One of the reasons is that we focus on invertible superideals; recall that a superideal is invertible if and only if $\mathfrak{j}_R$ is not a factor of it. - For any $d\geq\mathrm{Ksdim}_{\overline{1}}(R)+1$, we obtain $\mathfrak{j}_R^d=0$. Hence, even though the zero ideal has a prime factorization, this factorization is not unique. We denote by $\,\mathfrak{spec}(R)$ the set of all prime superideals of $R$ and $\,\mathfrak{spec}^{\ast}(R):=\,\mathfrak{spec}(R)-\{\mathfrak{j}_R\}$.
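Before stating the corollary, we note that in the purely even case ($\mathfrak{j}_R=0$) Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"} recovers the classical unique factorization of ideals in Dedekind domains. The following illustration is standard algebraic number theory, not part of the super setting:

```latex
% In R = \mathbb{Z}[\sqrt{-5}] the element 6 factors in two essentially
% different ways into irreducible elements,
%   6 = 2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}),
% yet the ideal (6) has a unique factorization into prime ideals:
\[
  (6) = \mathfrak{p}^{2}\,\mathfrak{q}\,\overline{\mathfrak{q}},
  \qquad
  \mathfrak{p} = (2,\,1+\sqrt{-5}),\quad
  \mathfrak{q} = (3,\,1+\sqrt{-5}),\quad
  \overline{\mathfrak{q}} = (3,\,1-\sqrt{-5}).
\]
```

Here $\mathfrak{p}^2=(2)$ and $\mathfrak{q}\overline{\mathfrak{q}}=(3)$, so the ideal-level factorization reconciles the two element-level ones.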
**Corollary 75**. *In the situation of Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"}, we have the following:* 1. *Any nonzero superideal $\mathfrak{a}$ of a Dedekind superring $R$, not having $\mathfrak{j}_R$ as a factor, has a unique prime factorization* *$$\mathfrak{a}=\prod_{\mathfrak{p}\in\,\mathfrak{spec}^*(R)}\mathfrak{p}^{v_\mathfrak{p}(\mathfrak{a})}$$ where the $v_\mathfrak{p}(\mathfrak{a})$ are non-negative integers and only finitely many of them are nonzero.* 2. *If $M$ is any fractional superideal of $R$, not having $\mathfrak{j}_R$ as a factor, and $d\in R-\mathfrak{j}_R$ is such that $dM\subseteq R$, then $M$ has a unique prime factorization* *$$M=\prod_{\mathfrak{p}\in\,\mathfrak{spec}^*(R)}\mathfrak{p}^{v_\mathfrak{p}(M)}$$* *where the $v_\mathfrak{p}(M)$ are integers and only finitely many of them are nonzero.* 3. *Any nonzero fractional superideal of $R$, not having $\mathfrak{j}_R$ as a factor, is invertible.* 4. *Any nonzero fractional superideal of $R$, not having $\mathfrak{j}_R$ as a factor, is projective.* 5. *Any nonzero fractional superideal of $R$, not having $\mathfrak{j}_R$ as a factor, is flat.* 6. *The set of Cartier superdivisors $\mathrm{s}\mathbb{C}\mathrm{art}(R)$ is generated by $\,\mathfrak{spec}^*(R)$.* *Proof.* - This follows immediately from Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"}. - Let $M=d^{-1}\mathfrak{a}$, with $\mathfrak{a}=dM$. By i) we have $$\mathfrak{a}=\prod_{\mathfrak{p}\in\,\mathfrak{spec}^*(R)}\mathfrak{p}^{v_\mathfrak{p}(\mathfrak{a})}\quad \text{and} \quad dR=\prod_{\mathfrak{p}\in\,\mathfrak{spec}^*(R)}\mathfrak{p}^{v_\mathfrak{p}(dR)}$$ where the $v_\mathfrak{p}(\mathfrak{a})$ and the $v_\mathfrak{p}(dR)$ are non-negative integers and only finitely many of them are nonzero.
Hence $$\begin{aligned} M&=\prod_{\mathfrak{p}\in\,\mathfrak{spec}^*(R)}\mathfrak{p}^{v_\mathfrak{p}(\mathfrak{a})-v_\mathfrak{p}(dR)}\\ &=\prod_{\mathfrak{p}\in\,\mathfrak{spec}^*(R)}\mathfrak{p}^{v_\mathfrak{p}(M)},\quad \text{ where }v_\mathfrak{p}(M):=v_\mathfrak{p}(\mathfrak{a})-v_\mathfrak{p}(dR), \forall\mathfrak{p}\in\,\mathfrak{spec}^*(R) \end{aligned}$$ and the $v_\mathfrak{p}(M)$ are integers and only finitely many of them are nonzero. The uniqueness is proved in a similar way to that of Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"}. - This follows from ii) and Lemma [Lemma 72](#lem:Dedinv){reference-type="ref" reference="lem:Dedinv"}. - From iii), any fractional superideal of $R$, not having $\mathfrak{j}_R$ as a factor, is invertible. Using the same arguments as in the proof of Corollary [Corollary 62](#P:5.4){reference-type="ref" reference="P:5.4"}, we prove the assertion. - Any projective supermodule is flat (see [@westrathesis Proposition 6.2.5]). Thus, the conclusion follows from iv). - It is not hard to see that $\mathrm{s}\mathbb{C}\mathrm{art}(R)$ is generated by the invertible superideals of $R$ (using a similar argument to [@eisenbud Corollary 11.7 b]). Hence, the conclusion follows from i).  ◻ ## Regular superrings with Krull super-dimension $1\mid d$ {#Subsec:Dedekind:Is:Regular} In this subsection, we explore the relation between the notions of Dedekind superrings and regular superrings. **Theorem 76**. *Let $R$ be a Dedekind superring. Then $R$ is regular and $\mathrm{Ksdim}_{\overline{0}}(R)=1$. In particular, $\overline{R}$ is a Dedekind ring.* *Proof.* Suppose first that $R$ is a local superring, with maximal ideal $\mathfrak{m}$. Since $R$ is a Dedekind superring, $R$ is Noetherian and $\overline{R}$ is Noetherian as well (see [@westrathesis Corollary 3.3.4]). Clearly, $\overline{R}$ is also a local domain whose maximal ideal is $\overline{\mathfrak{m}}=\mathfrak{m}/\mathfrak{j}_R$.
We claim that $\overline{\mathfrak{m}}$ is invertible. To see this, note that for any prime ideal $\overline{\mathfrak{p}}$ of $\overline{R}$, $$\begin{aligned} \overline{\mathfrak{m}}_{\overline{\mathfrak{p}}}&\simeq(\overline{R}\otimes_R\mathfrak{m})_{\overline{\mathfrak{p}}}\\ &\simeq\overline{R}_{\overline{\mathfrak{p}}}\otimes_{R_\mathfrak{p}}\mathfrak{m}_\mathfrak{p}\\ &\simeq\overline{R}_{\overline{\mathfrak{p}}}\otimes_{R_\mathfrak{p}}R_\mathfrak{p}\\ &\simeq\overline{R}_{\overline{\mathfrak{p}}},\end{aligned}$$ where we have used $\overline{\mathfrak{m}}\simeq\overline{R}\otimes_R\mathfrak{m}$. Now we prove that $\overline{R}$ is a discrete valuation ring (DVR). We only need to show that any nonzero ideal of $\overline{R}$ is a power of $\overline{\mathfrak{m}}$ (see the proof of Proposition 9.7 in [@atiyah]). Suppose, for a contradiction, that the assertion is not true and consider the collection $\mathcal{S}$ of nonzero ideals of $\overline{R}$ that are not a power of $\overline{\mathfrak{m}}$, ordered by inclusion. By our hypothesis, this collection is nonempty and, since $\overline{R}$ is Noetherian, $\mathcal{S}$ has a maximal element, say $\mathfrak{a}$. Clearly, $\overline{\mathfrak{m}}\not\in\mathcal{S}$, and thus $\mathfrak{a}\subsetneq\overline{\mathfrak{m}}$, because $\mathfrak{a}\neq\overline{\mathfrak{m}}$. Note that $\mathfrak{a}\subseteq \overline{\mathfrak{m}}^{-1}\mathfrak{a}\subsetneq\overline{\mathfrak{m}}^{-1}\overline{\mathfrak{m}}=\overline{R}$, since $\overline{\mathfrak{m}}$ is invertible. Thus, $\overline{\mathfrak{m}}^{-1}\mathfrak{a}$ is a proper ideal of $\overline{R}$. If $\overline{\mathfrak{m}}^{-1}\mathfrak{a}=\mathfrak{a}$, then $\mathfrak{a}=\overline{\mathfrak{m}}\mathfrak{a}$, so $\mathfrak{a}=0$ by (the usual) Nakayama's Lemma. Hence, $\overline{\mathfrak{m}}^{-1}\mathfrak{a}\supsetneq\mathfrak{a}$.
Therefore, the maximality of $\mathfrak{a}$ implies that $\overline{\mathfrak{m}}^{-1}\mathfrak{a}$ is a power of $\overline{\mathfrak{m}}$, say $\overline{\mathfrak{m}}^{-1}\mathfrak{a}=\overline{\mathfrak{m}}\,^{r}$. We conclude that $\mathfrak{a}=\overline{\mathfrak{m}}\,^{r+1}$, i.e., $\mathfrak{a}$ is a power of $\overline{\mathfrak{m}}$, a contradiction. In particular, because $\overline{R}$ is a DVR, $1=\mathrm{Kdim}(\overline{R})=\mathrm{Kdim}(R_{\overline{0}})=\mathrm{Ksdim}_{\overline{0}}(R)$ and $$1=\mathrm{Kdim}(\overline{R})=\dim_{\overline{R}/\overline{\mathfrak{m}}}(\overline{\mathfrak{m}}/\overline{\mathfrak{m}}^2)=\dim_{R/\mathfrak{m}}((\mathfrak{m}/\mathfrak{m}^2)_{\overline{0}}).$$ It remains to prove that $\mathrm{Ksdim}_{\overline{1}}(R)=\dim_{R/\mathfrak{m}}((\mathfrak{m}/\mathfrak{m}^2)_{\overline{1}})$. Let $d\geq0$ be the odd Krull super-dimension of $R$. Note that if $d=0$, then $R\simeq\overline{R}$ and there is nothing to prove. Thus, suppose that $d\geq 1$ and consider any minimal system $z_1, \ldots, z_s$ of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$ such that $z^{[s]}\neq0$. There are two cases to be considered, namely: [Case 1:]{.ul} $\mathrm{Kdim}(R_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}))=\mathrm{Kdim}(R_{\overline{0}})$. In this case, $z_1, \ldots, z_s$ is a system of odd parameters and it follows, from the discussion in Section  [2.6](#Subsec:Krull:SuperDim){reference-type="ref" reference="Subsec:Krull:SuperDim"}, that $\mathrm{Ksdim}_{\overline{1}}(R)=\dim_{R/\mathfrak{m}}((\mathfrak{m}/\mathfrak{m}^2)_{\overline{1}})=d$. Hence, $R$ is regular. [Case 2:]{.ul} $\mathrm{Kdim}(R_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}))<\mathrm{Kdim}(R_{\overline{0}})=1$.
In this case, $\mathrm{Kdim}(R_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}))=0$ so the only prime ideal in $R_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})$ is the maximal one, namely $\mathfrak{m}_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})$. Hence, the quotient is Artinian, and we obtain that $(\mathfrak{m}_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}))^n=0$, for some positive integer $n$. Thus, $$\begin{aligned} 0&=(\mathfrak{m}_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}))^n\\ &=\left(\mathfrak{m}_{\overline{0}}^n+\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})\right)/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})\\ &\simeq\mathfrak{m}_{\overline{0}}^n/\left(\mathfrak{m}_{\overline{0}}^n\cap \mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})\right).\end{aligned}$$ It follows that $$\begin{aligned} \mathfrak{m}_{\overline{0}}^n&\subseteq\mathfrak{m}_{\overline{0}}^n\cap \mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})\\ &\subseteq\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}).\end{aligned}$$ The above inclusion implies that the $n$-th power of any element of $\mathfrak{m}_{\overline{0}}$ lies in $\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})$. By Proposition [Proposition 53](#P:Eisenbud:11.6-b){reference-type="ref" reference="P:Eisenbud:11.6-b"}, there exists a non-zerodivisor of $R$, say $m\in\mathfrak{m}$. We can assume that $m\in\mathfrak{m}_{\overline{0}}$. Then, we obtain that $m^n\in\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]})$, i.e., $m^n z^{[s]}=0$. Since $m$ is not a zerodivisor, $m^{n-1}z^{[s]}=0$ and, iterating this argument, we obtain $z^{[s]}=0$, a contradiction. This contradiction shows that $\mathrm{Kdim}(R_{\overline{0}}/\mathrm{Ann}_{R_{\overline{0}}}(z^{[s]}))=\mathrm{Kdim}(R_{\overline{0}})=1$, which means that $R$ is regular. For the general case, when $R$ is not necessarily local, we only need to ensure that $\mathrm{Ksdim}_{\overline{0}}(R)=1$.
In this case, $\overline{R}$ is a Dedekind ring, because every localization of it is a DVR. Thus, its Krull dimension is one. Since $R$ is a superdomain, the even Krull dimension of $R$ is just the Krull dimension of $\overline{R}$, which is equal to one. Therefore, we conclude that $R$ is a regular superdomain with $\mathrm{Ksdim}_{\overline{0}}(R)=1$. This completes the proof of the theorem. ◻ **Theorem 77**. *Let $R$ be a Noetherian superdomain which is not a superfield. Suppose that if $z_1, \ldots, z_s$ is a minimal system of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$, then $z^{[s]}\neq0$. If every prime ideal of $R$, other than $\mathfrak{j}_R$, is invertible, then $R$ is a Dedekind superdomain.* *Proof.* Assume, for a contradiction, that there is some prime ideal $\mathfrak{p}$ of $R$, other than $\mathfrak{j}_R$, which is not maximal. Let $\mathfrak{m}$ be a maximal ideal properly containing $\mathfrak{p}$. Since $\mathfrak{m}$ is invertible, $\mathfrak{q}:=\mathfrak{m}^{-1}\mathfrak{p}$ is a superideal of $R$ such that $\mathfrak{m}\mathfrak{q}=\mathfrak{p}$. There are two cases. In the first one, $\mathfrak{q}=\mathfrak{p}$; this implies that $\mathfrak{m}=R$, since $\mathfrak{p}$ is invertible, which is not possible. Hence $\mathfrak{p}\subsetneq\mathfrak{q}$, and we can take $x\in\mathfrak{m}-\mathfrak{p}$ and $y\in\mathfrak{q}-\mathfrak{p}$. Thus $xy\in\mathfrak{p}=\mathfrak{m}\mathfrak{q}$, contradicting that $\mathfrak{p}$ is prime. Therefore $\mathfrak{p}$ must be maximal. In particular, the even Krull superdimension of $R$ is $1$. It remains to show that $R$ is integrally closed. Assume, for a moment, that any nonzero fractional superideal of $R$, other than $\mathfrak{j}_R$ and $\mathfrak{j}_{K(R)}$, is invertible. Now, consider $x\in K(R)$ to be an integral element over $R$. Let us prove that $x\in R$.
By Definition [Definition 64](#Definition:IntegralElements){reference-type="ref" reference="Definition:IntegralElements"}, there is some unitary $R$-supermodule $M$ generated by even elements $a_1, \ldots, a_d\in K(R)$ such that $Mx\subseteq M$. Since $M$ is finitely generated, it is a fractional superideal of $R$ (by Proposition [Proposition 50](#P:3.2:ALTMAN){reference-type="ref" reference="P:3.2:ALTMAN"}), so it is invertible. Then, $M^{-1}Mx\subseteq M^{-1}M=R$ and this implies that $x\in R$. Therefore, $R$ is integrally closed. Finally, under the hypothesis that any prime ideal of $R$ is invertible, and following the same arguments as in the proof of Theorem [Theorem 73](#Prime:Factors){reference-type="ref" reference="Prime:Factors"}, we obtain that any nonzero proper superideal of $R$ has a prime factorization. Therefore, any nonzero fractional superideal of $R$, not having $\mathfrak{j}_R$ as a factor, is invertible, as can be shown following the same reasoning as in the proof of Corollary [Corollary 75](#Cor:4:11){reference-type="ref" reference="Cor:4:11"} ii) and iii). This completes the proof (see Definition [Definition 69](#def:dedekind){reference-type="ref" reference="def:dedekind"}). ◻ **Remark 78**. Note that the assumption on prime ideals being invertible in Theorem [Theorem 77](#Final:Th){reference-type="ref" reference="Final:Th"} can be replaced by: any nonzero prime ideal of $R$, other than $\mathfrak{j}_R$, is projective (see Corollary [Corollary 63](#Prop:4.23){reference-type="ref" reference="Prop:4.23"}). Furthermore, the hypothesis that if $z_1, \ldots, z_s$ is a minimal system of generators of the $R_{\overline{0}}$-module $R_{\overline{1}}$, then $z^{[s]}\neq0$ can be replaced by $R$ being regular (see Remark [Remark 68](#Remark:5.5){reference-type="ref" reference="Remark:5.5"}). $\square$ **Remark 79**.
Let $R$ be a split superring such that $\overline{\mathfrak{p}}$ is an invertible $\overline{R}$-module, where $\mathfrak{p}$ is a prime ideal of $R$. It is not hard to prove that $\mathfrak{p}=\overline{\mathfrak{p}}\oplus\mathfrak{j}_R$, hence $\mathfrak{p}$ is an invertible $R$-supermodule. If $R$ is a split regular superdomain with even Krull superdimension 1, then $\overline{R}$ is a Dedekind domain. By the previous reasoning and Theorem [Theorem 77](#Final:Th){reference-type="ref" reference="Final:Th"}, $R$ is a Dedekind superring. Examples of such rings appear frequently. For example, let $R$ be a Noetherian superalgebra over a perfect field $\Bbbk$. If $R$ is regular, then (by [@masuoka2017solvability Theorem A.2]), $R$ is split. If $\Bbbk$ is not necessarily perfect, then this can be guaranteed for smooth superalgebras, using the same theorem cited above. $\square$ **Remark 80**. Let $(X, \mathcal{O}_X)$ be a nonsingular superscheme of finite type over a perfect field $\Bbbk$ with superdimension $1\mid d$, where $d$ is a non-negative integer. If $U\subseteq X$ is an affine open subsuperscheme of $X$, then $\mathcal{O}_X(U)$ is regular with Krull superdimension $1\mid d$. Furthermore, according to [@masuoka2017solvability Theorem A.2], it is also split. Therefore $\mathcal{O}_X(U)$ is a Dedekind superdomain. $\square$ **Definition 81**. Let $R$ be a local Dedekind superdomain. In this case we say that $R$ is a *discrete valuation superring*. Recall that the notions of invertibility, regularity and superdimension are local properties. Hence, the following proposition is an immediate consequence of Definition [Definition 81](#sDVR){reference-type="ref" reference="sDVR"} and Theorem [Theorem 76](#Theorem:4.16){reference-type="ref" reference="Theorem:4.16"}. **Proposition 82**. *$R$ is a Dedekind superring if and only if for all $\mathfrak{p}\in\,\mathfrak{spec}(R)$, $R_\mathfrak{p}$ is a discrete valuation superring.
$\square$* # Final Comments {#comments} In this section we discuss three possible future directions for our work, as well as some issues we are currently considering. We also mention some open problems in the field. ## Valuation superrings In this paper, we have defined discrete valuation superrings. It is natural to ask "what is an appropriate definition of valuation on superrings?" We can then try to recover the concept of discrete valuation superring from such a definition, and study the algebraic and geometric properties that it implies. For example, we could define an abstract nonsingular algebraic curve, or, in other words, an abstract Riemann super-surface over a SUSY field (see [@MASUOKASUSY]). We will present our results soon [@RTT4]. While preparing this paper, we have developed a theory of valuations for superrings, where we construct analogous versions of the well-known results for valuations on commutative rings (e.g., approximation theorems). Remarkably, in the supercommutative context, all valuations turn out to take values in abelian groups, unlike what usually happens in the noncommutative world. This topic is transversal to the present paper and, in order to keep the paper within reasonable length, we have decided to dedicate a second paper [@RTT3] to it. We expect to publish our findings soon. ## Supermodules over a Dedekind superring We are interested in the problem of classifying finitely generated modules over Dedekind superrings. Consulting books such as [@narkiewicz2004elementary], we expect that much of the classical theory can be extended to Dedekind superrings without too much difficulty, following the ideas used in this paper. However, we do not know of a theorem describing the structure of a supermodule over a Dedekind superring analogous to the purely even case. It would also be interesting to develop further a theory of superideals of Dedekind superrings. We plan to attack these problems in the near future.
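As a baseline for this classification problem, recall how the purely even case works over $\mathbb{Z}$: a finitely generated module given by an integer presentation matrix decomposes into cyclic summands read off from the Smith normal form. A minimal sketch of that even baseline (the presentation matrix is an arbitrary illustrative choice):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Over the Dedekind ring Z, the cokernel of an integer matrix is a direct
# sum of cyclic modules Z/(d_i), where the d_i are the diagonal entries
# (invariant factors) of the Smith normal form of the matrix.
presentation = Matrix([[2, 4], [6, 8]])
snf = smith_normal_form(presentation, domain=ZZ)
invariant_factors = sorted(abs(snf[i, i]) for i in range(snf.rows))
assert invariant_factors == [2, 4]   # cokernel decomposes as Z/2 + Z/4
```

A structure theorem for supermodules over a Dedekind superring would have to account, in addition, for the odd part and the ideal $\mathfrak{j}_R$, which have no counterpart in this computation.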
## Dedekind superschemes A *Dedekind superscheme* would be a locally ringed space admitting a covering by spectra of Dedekind superrings. Following the ideas in this paper and [@bruzzo2023notes], we have enough background to develop this notion. This is one of our current research projects, and we will report our findings in the near future. # Acknowledgements {#acknowledgements .unnumbered} ## Funding {#funding .unnumbered} The first and second authors were partially supported by CODI (Universidad de Antioquia, UdeA) by project numbers 2020-33713 and 2022-52654. # References {#references .unnumbered}
A. Altman and S. Kleiman. *A Term of Commutative Algebra*. Worldwide Center of Mathematics, 2013.
M. F. Atiyah and I. G. MacDonald. *Introduction to Commutative Algebra*. Addison-Wesley-Longman, 1969.
T. W. Atterton. Definitions of integral elements and quotient rings over non-commutative rings with identity. , 13(4):433--446, 1972.
C. Bartocci, U. Bruzzo, and D. Hernández. , volume 71. Springer Science & Business Media, 2012.
N. Bourbaki. , volume 1. Springer Science & Business Media, 1998.
U. Bruzzo, D. Hernández, and A. Polishchuk. Notes on fundamental algebraic supergeometry. Hilbert and Picard superschemes. , 415:108890, 2023.
C. Carmeli, L. Caston, and R. Fioresi. *Mathematical Foundations of Supersymmetry*, volume 15. European Mathematical Society, 2011.
D. Eisenbud. *Commutative Algebra: with a View Toward Algebraic Geometry*. Springer, New York, 1995.
P. Etingof, Sh. Gelaki, D. Nikshych, and V. Ostrik. *Tensor Categories*, volume 205. American Mathematical Soc., 2016.
A. Masuoka. Supersymmetric Picard-Vessiot theory, I: Basic theory.
A. Masuoka and A. N. Zubkov. Solvability and nilpotency for algebraic supergroups. , 221(2):339--365, 2017.
A. Masuoka and A. N. Zubkov. On the notion of Krull super-dimension. , 224(5):106245, 2020.
A. S. Morye, A. S. Phukon, and V. Devichandrika. Notes on super projective modules. , 2022.
W. Narkiewicz. *Elementary and Analytic Theory of Algebraic Numbers*. Springer, 2004.
P. Rizzo, J. Torres, and A. Torres-Gomez. The abstract Riemann super-surface over a SUSY field.
P. Rizzo, J. Torres, and A. Torres-Gomez. Valuations on a superring.
D. B. Westra. *Superrings and Supergroups*. PhD Thesis, Universität Wien, Wien, 2009.
A. N. Zubkov and P. S. Kolesnikov. On dimension theory of supermodules, super-rings, and superschemes. , 50(12):5387--5409, 2022.
--- abstract: | The atom-bond-connectivity (ABC) index is one of the well-investigated degree-based topological indices. The atom-bond sum-connectivity (ABS) index is a modified version of the ABC index, which was introduced recently. The primary goal of the present paper is to investigate the difference between the aforementioned two indices, namely $ABS-ABC$. It is shown that the difference $ABS-ABC$ is nonnegative for all graphs of minimum degree at least $2$, with equality only for cycle graphs, and positive for all line graphs of those graphs of order at least $5$ that are different from the path and cycle graphs. By means of computer search, the difference $ABS-ABC$ is also calculated for all trees of order at most $15$. author: - | Akbar Ali$^{a,}$[^1], Ivan Gutman$^{b}$, Izudin Redžepović$^{c}$,\ Jaya Percival Mazorodze$^{d}$, Abeer M. Albalahi$^a$,\ Amjad E. Hamza$^a$ title: On the Difference of Atom-Bond Sum-Connectivity and Atom-Bond-Connectivity Indices --- # Introduction In this paper we consider finite simple graphs (i.e., graphs without directed, weighted, and multiple edges, and without self-loops). Let $G$ be such a graph. In order to avoid trivialities, it will be assumed that $G$ is connected. Its vertex set is $\mathbf V(G)$ and its edge set is $\mathbf E(G)$. The order and size of $G$ are $|\mathbf V(G)|=n$ and $|\mathbf E(G)|=m$, respectively. By an $n$-vertex graph, we mean a graph of order $n$. The degree $d_u = d_u(G)$ of the vertex $u\in \mathbf V(G)$ is the number of vertices adjacent to $u$. The edge connecting the vertices $u$ and $v$ will be denoted by $uv$. A vertex with degree one is known as a pendent vertex. For graph-theoretical terminology and notation used without being defined, we refer the readers to the books [@ga; @gb; @gc]. In the early years of mathematical chemistry, Milan Randić invented a topological index [@g1] that eventually became one of the most successfully applied graph-based molecular structure descriptors [@g2; @g3; @g4].
It is nowadays called "*connectivity index*" or "*Randić index*" and is defined as $$R = R(G) = \sum_{uv \in \mathbf E(G)} \frac{1}{\sqrt{d_u\,d_v}}\,.$$ Much later, Zhou and Trinajstić [@g5] proposed to consider the variant of the connectivity index, in which multiplication is replaced by summation, named "*sum-connectivity index*", defined as $$SC = SC(G) = \sum_{uv \in \mathbf E(G)} \frac{1}{\sqrt{d_u+d_v}}\,.$$ The same authors examined the relations between $R$ and $SC$ [@g6]. In 1998, Estrada et al. [@g7] conceived another modification of the connectivity index, called "*atom-bond-connectivity index*", defined as $$ABC = ABC(G) = \sum_{uv \in \mathbf E(G)} \sqrt{\frac{d_u+d_v-2}{d_u\,d_v}}\,.$$ This molecular descriptor differs from the original connectivity index by the expression $d_u+d_v-2$, which is just the degree of the edge $uv$ (= number of edges incident to $uv$). Soon it was established that the $ABC$ index has valuable applicative properties [@g8]. Its mathematical features were also much investigated, see the recent papers [@Dimitrov-2021; @Ghanbari-22; @Husin-22], the review [@g9], and the references cited therein. Especially intriguing is the fact that the apparently simple problem of finding the connected $n$-vertex graph(s) with minimum $ABC$ index remained unsolved for about a decade [@Hosseini-22]. Quite recently, the sum-connectivity analogue of the $ABC$ index was put forward, defined as $$ABS = ABS(G) = \sum_{uv \in \mathbf E(G)} \sqrt{\frac{d_u+d_v-2}{d_u+d_v}}$$ and named "*atom-bond sum-connectivity index*" [@g10]. Until now, only a limited number of properties of the $ABS$ index were determined. In [@g10], the authors determined graphs having the minimum/maximum values of the $ABS$ index among all (i) general graphs and (ii) (molecular) trees, with a fixed order; parallel results for the case of unicyclic graphs were obtained in the paper [@ABS-EJM], where chemical applications of the $ABS$ index were also reported.
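All four indices above are elementary to compute from an edge list. The following sketch (pure Python; the cycle $C_5$ and the star $K_{1,3}$ are arbitrary illustrative inputs) also exhibits the fact, used repeatedly in this paper, that $ABC$ and $ABS$ coincide on cycles, where every degree equals $2$:

```python
from math import sqrt, isclose
from collections import Counter

def degrees(edges):
    """Vertex degrees of a simple graph given as a list of edges (u, v)."""
    d = Counter()
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d

def abc_index(edges):
    d = degrees(edges)
    return sum(sqrt((d[u] + d[v] - 2) / (d[u] * d[v])) for u, v in edges)

def abs_index(edges):
    d = degrees(edges)
    return sum(sqrt((d[u] + d[v] - 2) / (d[u] + d[v])) for u, v in edges)

# Cycle C5: all degrees equal 2, so every edge contributes 1/sqrt(2)
# to both indices, whence ABC(C5) = ABS(C5).
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert isclose(abc_index(c5), abs_index(c5))

# Star K_{1,3}: a small tree on which ABC exceeds ABS.
star = [(0, 1), (0, 2), (0, 3)]
assert abc_index(star) > abs_index(star)
```

The same routines, run over all trees of a given order, reproduce the kind of exhaustive comparison reported in Section 2.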
(The general $ABS$ index corresponding to the general $ABC$ index [@Furtula-GABC; @Das-GABC; @Abreu-Blaya] was also proposed in [@ABS-EJM]; besides, see [@new-01; @new-02].) Alraqad et al. [@Alraqad-arXiv] addressed the problem of finding graphs attaining the minimum $ABS$ index over the class of all trees having given order or/and a fixed number of pendent vertices. Additional detail about the known mathematical properties of the $ABS$ index can be found in the recent papers [@new-03; @new-04; @new-05; @new-06]. As is well known, if a graph $G$ has components $G_1$ and $G_2$, then $ABC(G)=ABC(G_1)+ABC(G_2)$ and $ABS(G)=ABS(G_1)+ABS(G_2)$. As a consequence of this, denoting by $P_2$ the graph of order 2 and size 1, the following holds.\ (a) If $G$ is any graph, and $G^+$ is a graph whose components are $G$, an arbitrary number of isolated vertices, and an arbitrary number of $P_2$-graphs, then $ABC(G)=ABC(G^+)$ and $ABS(G)=ABS(G^+)$.\ (b) If $G^{++}$ is a graph whose components are $G$, an arbitrary number of isolated vertices, an arbitrary number of $P_2$-graphs, and an arbitrary number of cycles of arbitrary size, then $$ABC(G)-ABS(G) = ABC(G^{++})-ABS(G^{++}).$$ In order to avoid these trivialities, in what follows we consider only connected graphs. An obvious question is how the two closely related molecular descriptors $ABC$ and $ABS$ are related. In this paper, we provide some answers to this question. More precisely, we prove that the difference $ABS-ABC$ is nonnegative for all graphs of minimum degree at least $2$ (with equality only for cycles), and positive for all line graphs of those graphs of order at least $5$ that are different from the path and cycle graphs. We also calculate the difference $ABS-ABC$ for all trees of order at most $15$ by utilizing computer software. # Main Results {#sec-2} We start this section with a simple but notable result: if the minimum degree of a graph $G$ is at least $2$, then the $ABS$ index of $G$ cannot be less than the $ABC$ index of $G$. **Proposition 1**.
*Let $G$ be a connected non-trivial graph of order $n$, without pendent vertices. Then $$ABC(G) \leq ABS(G).$$ Equality holds if and only if $G \cong C_n$, where $C_n$ is the $n$-vertex cycle.* *Proof.* For every edge $uv\in \mathbf E(G)$, note that $d_u\,d_v \geq d_u+d_v$, with equality if and only if $d_u=d_v=2$, because $\min\{d_u,d_v\}\ge2$. Hence every summand of $ABC(G)$ is at most the corresponding summand of $ABS(G)$, and equality holds for all edges if and only if $G$ is $2$-regular, i.e., $G \cong C_n$. ◻ If the order of a graph $G$ is one or two, then the equality $ABC(G)=ABS(G)=0$ holds in a trivial manner. **Proposition 2**. *Let $G$ be a connected graph possessing a vertex $x$ of degree $2$. Construct the graph $G^\star$ by inserting a new vertex $y$ on an edge incident to $x$. Evidently, the degree of $y$ is also $2$. Then $$\label{ksi} ABC(G)-ABS(G) = ABC(G^\star)-ABS(G^\star)\,.$$* *Proof.* Bearing in mind the way in which the graph $G^\star$ was constructed, we see that $$ABC(G^\star) = ABC(G) + \sqrt{\frac{d_x+d_y-2}{d_x\,d_y}} = ABC(G) + \frac{1}{\sqrt{2}}$$ and $$ABS(G^\star) = ABS(G) + \sqrt{\frac{d_x+d_y-2}{d_x+d_y}} = ABS(G) + \frac{1}{\sqrt{2}}\,.$$ ◻ Proposition [Proposition 2](#rem2){reference-type="ref" reference="rem2"} implies that if there is a graph $G$ of order $n$, possessing a vertex of degree 2, for which $ABC(G)-ABS(G)=\Theta$, then for any $p \geq 1$ there exist graphs of order $n+p$ with the same $\Theta$-value. The situation with graphs possessing pendent vertices is much less simple. In what follows we present our results pertaining to trees. By means of computer search we established the following. **Observation 3**. *(a) All trees of order $n$, $3 \leq n \leq 10$, have the property $ABC>ABS$.\ (b) The smallest tree for which $ABC<ABS$ is depicted in Fig. [1](#Fig4){reference-type="ref" reference="Fig4"}. For $n=11$, this is the unique tree satisfying $ABC<ABS$.\ (c) For $n=12,13,14$, and $15$, there exist, respectively, $6, 31, 134$, and $564$ distinct $n$-vertex trees for which $ABC<ABS$.\ (d) The tree depicted in Fig. [1](#Fig4){reference-type="ref" reference="Fig4"} possesses vertices of degree $2$.
Therefore, from Proposition $\ref{rem2}$ it follows that there exist $n$-vertex trees with the property $ABS>ABC$ for any $n \geq 11$.* ![The smallest tree for which $ABC<ABS$.](GRA-Figure1.pdf){#Fig4 width="25%"} **Observation 4**. *No tree of order $n$, $3 \leq n \leq 15$, has the property $ABC=ABS$. However, there is a family of four $15$-vertex trees, shown in Fig. [2](#Fig5){reference-type="ref" reference="Fig5"}, whose $ABC$- and $ABS$-values are remarkably close. For each of these trees: $ABC\approx10.184232$ and $ABS\approx10.184135$.* ![A family of trees with nearly equal $ABC$- and $ABS$-values.](GRA-Figure2.pdf){#Fig5 width="70%"} Next, we show that the inequality $ABS > ABC$ is satisfied by a reasonably large class of graphs, namely by the line graphs. If $G$ is the line graph of a connected $n$-vertex graph $K$ such that $2\le n \le 4$, then from the discussion made in the previous part of this section one can directly obtain the classes of graphs satisfying (i) $ABS(G)>ABC(G)$, (ii) $ABS(G)<ABC(G)$, (iii) $ABS(G)=ABC(G)$. Consequently, we assume that $n\ge 5$. **Theorem 5**. *If $G$ is the line graph of a connected $n$-vertex graph $K$ such that $n \geq 5$ and that $K \not\in \{ P_n, C_n\}$, then $ABS(G)>ABC(G)$.* In order to prove Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}, we need some preparations. A decomposition of a graph $G$ is a class $\mathcal{S}_G$ of edge-disjoint subgraphs of $G$ such that $\cup _{S\in \mathcal{S}_G} \mathbf E(S) = \mathbf E(G)$. By a clique in a graph $G$, we mean a maximal complete subgraph of $G$. A branching vertex in a graph is a vertex of degree at least $3$. By a pendent edge of a graph, we mean an edge one of whose end-vertices is pendent and the other non-pendent. For $r\geq 2$, a path $u_1\cdots u_r$ in a graph is said to be pendent if $\min\{d_{u_1},d_{u_r}\}=1$, $\max\{d_{u_1},d_{u_r}\} \geq 3$, and $d_{u_i}=2$ for $2\le i \le r-1$.
If $P:u_1\cdots u_r$ is a pendent path in a graph with $d_{u_r}\ge3$, we say that $P$ is attached with the vertex $u_r$. Two pendent paths of a graph are said to be adjacent if they have a common (branching) vertex. A triangle of a graph $G$ is said to be odd if there is a vertex of $G$ adjacent to an odd number of its vertices. For the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"} we need the following well-known result: **Lemma 6**. *[@Harary] A graph $G$ is the line graph of a graph if and only if the star graph of order $4$ is not an induced subgraph of $G$, and if two odd triangles have a common edge then the subgraph induced by their vertices is the complete graph of order $4$.* We can now start with the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}. *Proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}.* Since $K \not \cong P_n$, the graph $G$ has at least one cycle. If $G$ is one of the two graphs $H_1,H_2,$ depicted in Fig. [3](#Fig-1){reference-type="ref" reference="Fig-1"}, then one can directly verify that $ABS >ABC$ holds. In what follows, we assume that $G\not\in \{H_1,H_2\}$. ![The graphs $H_1$ and $H_2$ mentioned in the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}.](GRA-Figure3.pdf){#Fig-1 width="70%"} Consider the difference $$ABS(G)-ABC(G) =\sum_{uv\in \mathbf E(G)}\left( \sqrt{\frac{d_u+d_v-2}{d_u+d_v}}-\sqrt{\frac{d_u+d_v-2}{d_u\,d_v}}\, \right)$$ and define a function $f$ of two variables $x$ and $y$ as $$f(x,y) = \sqrt{\frac{x+y-2}{x+y}}-\sqrt{\frac{x+y-2}{xy}}$$ where $y\ge x \ge 1$ and $y\ge2$. Note that the function $f$ is strictly increasing (in both $x$ and $y$). Also, if $x$ and $y$ are integers satisfying the inequalities $y\ge x \ge 1$ and $y\ge2$, then the inequality $f(x,y)<0$ holds if and only if $x=1$. Thus, $$-0.129757 \approx \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{2}} = f(1,2) \le f(1,y) < 0$$ for every $y \ge 2$. 
Also, $$f(x,y) \ge f(2,3) = \sqrt{\frac{3}{5}} - \frac{1}{\sqrt{2}}\approx 0.0674899> f(2,2)=0$$ for $y\ge x \ge 2$ and $y \ge 3$. Furthermore, we have $f(1,2)+f(2,y) >0$ for every $y\ge5$. Thus, if either $G$ has no pendent paths or every pendent path of $G$ has length at least $2$ and is attached with a vertex of degree at least $5$, then $ABS(G)-ABC(G)>0$. In the remaining proof, we assume that $G\not\in \{H_1,H_2\}$ and that $G$ either has at least one pendent path of length $1$ or it has at least one pendent path of length at least $2$, which is attached with a vertex of degree $3$ or $4$. Let $H'$ be the graph depicted in Fig. [4](#Fig-2){reference-type="ref" reference="Fig-2"}, i.e., $H'$ is obtained from two disjoint graphs $H_1$ and $H$ by identifying their vertices $z$ and $z'$. ![The graphs $H_1$, $H$, and $H'$ mentioned in the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}.](GRA-Figure4.pdf){#Fig-2 width="50%"} **Fact 1.** *If $G \cong H'$, then the sum of the contributions of the edges of $H_1$ in $G$ to the difference $ABS(G)-ABC(G)$ is positive.*\ It is a well-known fact that the line graph $G$ can be decomposed into cliques, such that every edge of $G$ lies on exactly one clique and every non-pendent vertex of $G$ lies on exactly two cliques. Also, by Lemma [Lemma 6](#lem-1){reference-type="ref" reference="lem-1"}, $G$ contains no pair of adjacent pendent paths/edges and hence the number of pendent edges/paths of $G$ is at most $\left\lfloor \,|\mathbf E(G)|/2 \right\rfloor$.
Bearing this in mind, we decompose $G$ into connected subgraphs $G_1,\ldots,G_k$ in such a way that every $G_i$ contains at most one pendent path of $G$ and satisfies the following: \(a\) : if $G_i$ contains a pendent path of $G$ of length $1$ such that the branching vertex (in $G$) of the considered path has a neighbor of degree $2$ in $G$, then $G_i$ is induced by the vertices of the mentioned path and the vertices adjacent to the branching vertex (in $G$) of the mentioned path (for an example, see $G_4$ in Fig. [5](#Fig-3){reference-type="ref" reference="Fig-3"}(b)); \(b\) : if $G_i$ has a pendent path of length at least $2$ in $G$ or if $G_i$ contains a pendent path of $G$ of length $1$ such that the branching vertex (in $G$) of the considered path has no neighbor of degree $2$ in $G$, then $G_i$ consists of the mentioned path together with exactly one additional edge incident with the branching vertex (in $G$) of the mentioned path (for an example, see Fig. [5](#Fig-3){reference-type="ref" reference="Fig-3"}). ![(a) A tree $T$, its line graph $L(T)$, and a decomposition of $L(T)$ into three connected subgraphs $G_1,G_2,G_3$. (b) A tree $T$, its line graph $L(T)$, and a decomposition of $L(T)$ into four connected subgraphs $G_1,G_2,G_3,G_4$.](GRA-Figure5.pdf){#Fig-3 width="90%"} In order to complete the proof, it is enough to show that, for every $i$, the sum of contributions of all edges of $G_i$ (in $G$) to the difference $ABS(G)-ABC(G)$ is positive. If a subgraph $G_i$ of $G$ contains no pendent vertex of $G$, then the sum of contributions of all edges of $G_i$ (in $G$) to the difference $ABS(G)-ABC(G)$ is certainly positive.\ **Case 1:** a subgraph $G_i$ contains a pendent path of $G$ of length $1$, such that the branching vertex (in $G$) of the considered path has a neighbor of degree $2$ in $G$. Let $P:v_1v_2$ be the pendent path of $G$ contained in $G_i$, where $d_{v_1}(G)=1$ and $d_{v_2}(G)\ge3$.
Note that every neighbor of $v_2$ different from $v_1$ in $G$ has degree at least $d_{v_2}(G)-1$ in $G$ (by Lemma [Lemma 6](#lem-1){reference-type="ref" reference="lem-1"}). Thus, $d_{v_2}(G)=3$ in the case under consideration. Recall that $G\not\in \{H_1,H_2\}$ (see Fig. [3](#Fig-1){reference-type="ref" reference="Fig-1"}). Consequently, $G_i \cong H_1$ and hence, by Fact 1, the sum of contributions of all edges of $G_i$ to the difference $ABS(G)-ABC(G)$ is positive. **Case 2:** a subgraph $G_i$ has a pendent path of $G$ of length $1$, such that the branching vertex (in $G$) of the considered path has no neighbor of degree $2$ in $G$. Note that $G_i$ is a path of length $2$ in this case. Let $G_i:v_1v_2v'$, where $v_1v_2$ is a pendent path of $G$, $d_{v_1}(G)=1$, and $d_{v_2}(G)\ge 3$. If $d_{v_2}(G)\ge4$, then the sum of contributions of all edges of $G_i$ to the difference $ABS(G)-ABC(G)$ is positive because $d_{v'}(G)\ge d_{v_2}(G)-1$ (by Lemma [Lemma 6](#lem-1){reference-type="ref" reference="lem-1"}) and $f(1,y)+f(y-1,y)>0$ for every $y\ge4$. Next, assume that $d_{v_2}(G)=3$. Since $d_{v'}(G)\ge3$ in the considered case, the sum of contributions of all edges of $G_i$ to the difference $ABS(G)-ABC(G)$ is again positive because $f(1,3) + f(3,y) > 0$ for all $y\ge 3$. **Case 3:** a subgraph $G_i$ has a pendent path of length at least $2$ in $G$. Note that $G_i$ is itself a path. Let $G_i:v_1v_2\cdots v_rv'$, where $v_1v_2\cdots v_r$ ($r\ge3$) is a pendent path of $G$, $d_{v_1}(G)=1$, and $d_{v_r}(G)\in \{ 3,4\}$, because $G$ has no pendent path of length at least $2$ attached with a vertex of degree at least $5$ (see the paragraph right before the definition of $H'$, preceding Fact 1). **Subcase 3.1:** $d_{v_r}(G)=3$. The vertex $v'$ has degree at least $2$ (in $G$) and $f(1,2)+f(2,3)+f(3,y)\ge f(1,2)+f(2,3)+f(2,3)>0$ for $y\ge2$. Thus, the sum of contributions of all edges of $G_i$ (in $G$) to the difference $ABS(G)-ABC(G)$ is positive.
**Subcase 3.2:** $d_{v_r}(G)=4$. In this case, the vertex $v'$ has degree at least $3$ (in $G$) and $f(1,2)+f(2,4)+f(4,y)\geq f(1,2)+f(2,4)+f(3,4)>0$ for $y\geq 3$. Thus, the sum of contributions of all edges of $G_i$ (in $G$) to the difference $ABS(G)-ABC(G)$ is again positive. This completes the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}. ◻ **Theorem 7**. *Let $G$ be a connected graph of size $m$. If the number of pendent vertices of $G$ is at most $\lfloor m/2 \rfloor$ and the number of vertices of degree $2$ in $G$ is zero, then $$ABS(G)>ABC(G).$$* *Proof.* Consider the function $f$ defined in the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}. Here, $$-0.10939 \approx \frac{1}{\sqrt{2}} - \sqrt{\frac{2}{3}} = f(1,3) \le f(1,y) < 0$$ for every $y \ge 3$. Also, $$f(x,y) \ge f(3,3) = \sqrt{\frac{2}{3}} - \frac{2}{3} \approx 0.14983$$ for $y\ge x \ge 3$. Let $p$ denote the number of pendent vertices of $G$. Then, $m-p\ge p$. Now, by keeping in mind these observations, we have $$\begin{aligned} ABS(G)-ABC(G) &=\sum_{uv\in \mathbf E(G);\,d_u=1}\left( \sqrt{\frac{d_u+d_v-2}{d_u+d_v}}-\sqrt{\frac{d_u+d_v-2}{d_u\,d_v}}\, \right)\\[2mm] &+ \sum_{\substack{uv\in \mathbf E(G);\\ \min\{d_u,d_v\}\ge3}}\left( \sqrt{\frac{d_u+d_v-2}{d_u+d_v}}-\sqrt{\frac{d_u+d_v-2}{d_u\,d_v}}\, \right)\\[2mm] &\ge \sum_{\substack{uv\in \mathbf E(G);\\ d_u=1}} \left(\frac{1}{\sqrt{2}} - \sqrt{\frac{2}{3}} \right)+ \sum_{\substack{uv\in \mathbf E(G);\\ \min\{d_u,d_v\}\ge3}} \left(\sqrt{\frac{2}{3}} - \frac{2}{3}\right)\\[2mm] &= p \left(\frac{1}{\sqrt{2}} - \sqrt{\frac{2}{3}} \right)+ (m-p) \left(\sqrt{\frac{2}{3}} - \frac{2}{3}\right)\\[2mm] &\ge p \left(\frac{1}{\sqrt{2}} - \sqrt{\frac{2}{3}} \right)+ p \left(\sqrt{\frac{2}{3}} - \frac{2}{3}\right)\\ &>0.\end{aligned}$$ ◻ **Theorem 8**. 
*Let $G$ be a connected graph of size $m$ such that if $v\in V(G)$ is a vertex of degree $2$ then $v$ has no neighbor of any of the degrees $2$, $3$, $4$. If the number of pendent vertices of $G$ is at most $\lfloor m/2 \rfloor$, then $$ABS(G)>ABC(G).$$* *Proof.* Consider the function $f$ defined in the proof of Theorem [Theorem 5](#thm-1){reference-type="ref" reference="thm-1"}. Recall that $$-0.129757 \approx \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{2}} = f(1,2) \le f(1,y) < 0$$ for every $y \ge 2$. Also, $f(x,y) \ge f(2,5) = \sqrt{\frac{5}{7}} - \frac{1}{\sqrt{2}} \approx 0.138047$ for $y\ge x \ge 2$ with $y\ge 5$ and $f(x,y) \ge f(3,3)>f(2,5)$ for $y\ge x\ge3$. Let $P$ denote the set of pendent edges of $G$. Then, $|\mathbf E(G)\setminus P | \ge |P|$. Now, by keeping in mind the above observations, we have $$\begin{aligned} ABS(G)-ABC(G) &=\sum_{uv\in \mathbf E(G)\setminus P}\left( \sqrt{\frac{d_u+d_v-2}{d_u+d_v}}-\sqrt{\frac{d_u+d_v-2}{d_u\,d_v}}\, \right)\\[2mm] &+ \sum_{uv\in P}\left( \sqrt{\frac{d_u+d_v-2}{d_u+d_v}}-\sqrt{\frac{d_u+d_v-2}{d_u\,d_v}}\, \right)\\[2mm] &\ge \sum_{uv\in \mathbf E(G)\setminus P} \left(\sqrt{\frac{5}{7}} - \frac{1}{\sqrt{2}} \right)+ \sum_{uv\in P} \left( \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{2}} \right)\\[2mm] &= |\mathbf E(G)\setminus P| \left(\sqrt{\frac{5}{7}} - \frac{1}{\sqrt{2}} \right)+ |P| \left( \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{2}} \right)\\[2mm] &\ge | P| \left(\sqrt{\frac{5}{7}} - \frac{1}{\sqrt{2}} \right)+ |P| \left( \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{2}} \right)\\ &>0.\end{aligned}$$ ◻ # Conclusion and Some Open Problems In this paper, we considered the difference between atom-bond-connectivity ($ABC$) and atom-bond sum-connectivity ($ABS$) indices. In the case of graphs without pendent vertices, finding the sign of this difference is trivially easy (see Proposition [Proposition 1](#rem1){reference-type="ref" reference="rem1"}). 
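Both indices are straightforward to compute from an edge list, which is also how a computer search over small graphs can be organized; a minimal sketch (the helper name `abs_abc` and the sample graphs are ours). When every vertex has degree $2$, each edge contributes $\sqrt{1/2}$ to both indices and the two coincide, while a star with pendent vertices gives $ABC>ABS$:

```python
from math import sqrt
from collections import Counter

def abs_abc(edges):
    """Return (ABS, ABC) for the simple graph given by its edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    abs_idx = sum(sqrt((deg[u] + deg[v] - 2) / (deg[u] + deg[v])) for u, v in edges)
    abc_idx = sum(sqrt((deg[u] + deg[v] - 2) / (deg[u] * deg[v])) for u, v in edges)
    return abs_idx, abc_idx

# Cycle C_5: all degrees equal 2, so every edge contributes sqrt(1/2)
# to both indices and ABS = ABC exactly.
abs_c5, abc_c5 = abs_abc([(i, (i + 1) % 5) for i in range(5)])
print(abc_c5 - abs_c5)  # 0.0

# Star K_{1,3}: each edge joins degrees 1 and 3, so
# ABC - ABS = 3*(sqrt(2/3) - sqrt(1/2)) > 0.
abs_star, abc_star = abs_abc([(0, 1), (0, 2), (0, 3)])
print(abc_star - abs_star)  # ≈ 0.3282
```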
On the other hand, in the case of graphs possessing pendent vertices, especially for trees, this difference becomes considerably more intricate, and the complete solution of the problem awaits additional studies. Denote the difference $ABC-ABS$ by $\Theta$. By means of computer search we found that for trees with $n \leq 15$ vertices (except in the trivial cases $n=1,2$), $\Theta=0$ never happens. It would be of some interest to extend this finding to higher values of $n$, or to discover a tree (or a graph with minimum degree $1$) for which $\Theta=0$. Let $T_n$ be the number of trees of order $n$, and $t_n$ the number of trees of order $n$ for which $\Theta<0$. We know that $t_n/T_n > 0$ for $n \geq 11$. The value of $\lim_{n \to \infty} t_n/T_n$ is an open problem; in particular, it is not known whether this limit equals zero or unity. ***Acknowledgment*:** This research has been funded by the Scientific Research Deanship, University of Ha'il, Saudi Arabia, through project number RG-23 019. A. M. Albalahi, Z. Du, A. Ali, On the general atom-bond sum-connectivity index, *AIMS Math.* **8** (2023) 23771--23785. A. M. Albalahi, E. Milovanović, A. Ali, General atom-bond sum-connectivity index of graphs, *Mathematics* **11** (2023) \#2494. A. Ali, K. C. Das, D. Dimitrov, B. Furtula, Atom-bond connectivity index of graphs: a review over extremal results and bounds, *Discrete Math. Lett.* **5** (2021) 68--93. A. Ali, B. Furtula, I. Redžepović, I. Gutman, Atom-bond sum-connectivity index, *J. Math. Chem.* **60** (2022) 2081--2093. A. Ali, I. Gutman, I. Redžepović, Atom-bond sum-connectivity index of unicyclic graphs and some applications, *Electron. J. Math.* **5** (2023) 1--7. R. Abreu-Blaya, R. Reyes, J. M. Rodríguez, J. M. Sigarreta, Inequalities on the generalized atom bond connectivity index, *J. Math. Chem.* **59** (2021) 775--791. T. A. Alraqad, I. Ž. Milovanović, H. Saber, A. Ali, J. P.
Mazorodze, Minimum atom-bond sum-connectivity index of trees with a fixed order and/or number of pendent vertices, arXiv:2211.05218 \[math.CO\]. J. A. Bondy, U. S. R. Murty, *Graph Theory*, Springer, London, 2008. G. Chartrand, L. Lesniak, P. Zhang, *Graphs & Digraphs*, CRC Press, Boca Raton, 2016. K. C. Das, J. M. Rodríguez, J. M. Sigarreta, On the generalized ABC index of graphs, *MATCH Commun. Math. Comput. Chem.* **87** (2022) 147--169. D. Dimitrov, Z. Du, A solution of the conjecture about big vertices of minimal-ABC trees, *Appl. Math. Comput.* **397** (2021) \#125818. E. Estrada, L. Torres, L. Rodríguez, I. Gutman, An atom-bond connectivity index: modelling the enthalpy of formation of alkanes, *Indian J. Chem. Sec. A* **37** (1998) 849--855. B. Furtula, A. Graovac, D. Vukicević, Augmented Zagreb index, *J. Math. Chem.* **48** (2010) 370--380. N. Ghanbari, On the Graovac-Ghorbani and atom-bond connectivity indices of graphs from primary subgraphs, *Iranian J. Math. Chem.* **13** (2022) 45--72. K. J. Gowtham, I. Gutman, On the difference between atom-bond sum-connectivity and sum-connectivity indices, *Bull. Cl. Sci. Math. Nat. Sci. Math.* **47** (2022) 55--65. I. Gutman, J. Tošović, S. Radenković, S. Marković, On atom--bond connectivity index and its chemical applicability, *Indian J. Chem. A* **51** (2012) 690--694. F. Harary, *Graph Theory*, Addison-Wesley, Reading, 1969. S. A. Hosseini, B. Mohar, M. B. Ahmadi, The evolution of the structure of ABC-minimal trees, *J. Combin. Theory Ser. B* **152** (2022) 415--452. Y. Hu, F. Wang, On the maximum atom-bond sum-connectivity index of trees, *MATCH Commun. Math. Comput. Chem.*, in press. M. N. Husin, S. Zafar, R. U. Gobithaasan, Investigation of atom-bond connectivity indices of line graphs using subdivision approach, *Math. Prob. Engin.* **2022** (2022) \#6219155. L. B. Kier, L. H. Hall, *Molecular Connectivity in Chemistry and Drug Research*, Academic Press, New York, 1976. L. B. Kier, L. H.
Hall, *Molecular Connectivity in Structure--Activity Analysis*, Wiley, New York, 1986. P. Nithya, S. Elumalai, S. Balachandran, S. Mondal, Smallest ABS index of unicyclic graphs with given girth, *J. Appl. Math. Comput.*, available online at <https://doi.org/10.1007/s12190-023-01898-0>. S. Noureen, A. Ali, Maximum atom-bond sum-connectivity index of $n$-order trees with fixed number of leaves, *Discrete Math. Lett.* **12** (2023) 26--28. M. Randić, On characterization of molecular branching, *J. Am. Chem. Soc.* **97** (1975) 6609--6615. M. Randić, M. Novič, D. Plavšić, *Solved and Unsolved Problems in Structural Chemistry*, CRC Press, Boca Raton, 2016. S. Wagner, H. Wang, *Introduction to Chemical Graph Theory*, CRC Press, Boca Raton, 2018. B. Zhou, N. Trinajstić, On a novel connectivity index, *J. Math. Chem.* **46** (2009) 1252--1270. B. Zhou, N. Trinajstić, Relations between the product- and sum-connectivity indices, *Croat. Chem. Acta* **83** (2012) 363--365. [^1]: Corresponding author
--- abstract: | The paper introduces an adaptive version of the stabilized Trace Finite Element Method (TraceFEM) designed to solve low-regularity elliptic problems on level-set surfaces using a shape-regular bulk mesh in the embedding space. Two stabilization variants, gradient-jump face and normal-gradient volume, are considered for continuous trace spaces of the first and second degrees, based on the polynomial families $Q_1$ and $Q_2$. We propose a practical error indicator that estimates the 'jumps' of finite element solution derivatives across background mesh faces and avoids integration of any quantities along implicitly defined curvilinear edges of the discrete surface elements. For the $Q_1$ family of piecewise trilinear polynomials on bulk cells, the solve-estimate-mark-refine strategy, combined with the suggested error indicator, achieves optimal convergence rates typical of two-dimensional problems. We also provide a posteriori error estimates, establishing the reliability of the error indicator for the $Q_1$ and $Q_2$ elements and for two types of stabilization. In numerical experiments, we assess the reliability and efficiency of the error indicator. While both stabilizations are found to deliver comparable performance, the lowest-degree finite element space appears to be the more robust choice for the adaptive TraceFEM framework. author: - "Timo Heister[^1]" - "Maxim A. Olshanskii[^2]" - Vladimir Yushutin bibliography: - citations.bib title: An adaptive stabilized trace finite element method for surface PDEs --- ***Keywords:*** Surface, PDE, finite elements, traces, unfitted grid, adaptivity, level set, stabilization # Introduction The Trace or Cut Finite Element Method is one of the approaches used to approximate surface Partial Differential Equations (PDEs) [@olshanskii2017trace; @burman2015stabilized].
It falls into the category of geometrically unfitted methods because the domain of a variational problem, a two-dimensional surface denoted as $\Gamma$, is embedded within a three-dimensional triangulated domain $\Omega$ that is a subset of $\mathbb{R}^3$, such as a sufficiently large cube. Identifying the active mesh, denoted as $\Omega_h\subset\Omega$, and performing local refinement or any other mesh cell updating procedure is straightforward due to the geometrical simplicity. We refer to Figure [\[fig:snapshots\]](#fig:snapshots){reference-type="ref" reference="fig:snapshots"} for a visual representation. Furthermore, handling data structures on the octree mesh $\Omega_h$ can be implemented efficiently and is available in many finite element libraries. This flexibility is one of the advantages of the Trace Finite Element Method (TraceFEM). However, it comes with the cost of constructing quadratures on the intersections of $\Gamma$ with cells from $\Omega_h$. The size and shape of these intersections vary uncontrollably between cells, leading to the necessity for a stabilization term, similar to $s_h$ in equation [\[abstract\]](#abstract){reference-type="eqref" reference="abstract"}, in any TraceFEM discretization of surface problems. Several variants of such terms are available in the literature [@olshanskii2017trace; @larson2020stabilization], but in this context, we will only consider the 'gradient-jump' face stabilization and the 'normal-gradient' volume stabilization. These methods have been successfully used and proven to be practical and robust. Adaptive strategies within the context of stabilized TraceFEM are not yet well-understood. Previous discussions on adaptivity in the TraceFEM setting can be found in the literature [@DemlowOlsh; @Chernyshenko2015]. In [@DemlowOlsh], there is no stabilization, and an inferior (as seen in the comparison in [@olshanskii2017trace]) 'full-gradient' stabilization is considered in [@Chernyshenko2015]. 
Additionally, both papers only considered piece-wise linear finite element spaces, with [@DemlowOlsh] assuming tetrahedral meshes and [@Chernyshenko2015] using octree meshes. We extend the adaptive methodology introduced in [@DemlowOlsh] by studying the first and second-order stabilized TraceFEM on octree meshes. Another novel aspect is the consideration of two stabilizations, namely $s_h^{JF}$ (as defined in [\[s_jf\]](#s_jf){reference-type="eqref" reference="s_jf"}) and $s_h^{NV}$ (as defined in [\[s_nv\]](#s_nv){reference-type="eqref" reference="s_nv"}), in the context of adaptive TraceFEM. Many mathematical models involving surface PDEs necessitate the use of adaptive numerical methods. For instance, the dynamics of liquid crystal films can give rise to the formation of defects [@gharbi2013microparticles; @hu2016disclination; @koning2016spherical; @nestler2022active]. Mathematically, a defect in a liquid crystal film corresponds to low regularity solutions of the governing PDEs on surfaces. From a numerical modeling perspective, this entails the need for adaptive refinement and coarsening as the defect forms and evolves along the film. The evolution of defects is driven by variations in the energy of the liquid crystal and the mass flow, which are governed by the surface Navier--Stokes equation [@nestler2022active]. The necessity of addressing these coupled phenomena numerically serves as motivation for the development of adaptive surface FEMs with both first and second-order polynomial accuracy. In this paper, our focus is on adaptive strategies for the stabilized TraceFEM applied to the Laplace--Beltrami equation, which serves as a prototypical elliptic problem on a surface $\Gamma$ [@Dziuk88]. An overview of the motivation and the main results follow. At this point, we will omit certain technical details regarding the geometrical consistency of the adaptive method. 
To begin, consider an abstract variational problem on a surface $\Gamma$: given $f\in H^{-1}(\Gamma)$, we seek to find $u\in H^1(\Gamma)$ such that $a(u,v)=\left<f,v\right>$ for all $v \in H^1(\Gamma)$. We assume that the bilinear form $a$ is symmetric and coercive. In the TraceFEM, the discrete space $V_h$ is defined on a graded, regular bulk mesh $\Omega_h$, and we solve the following linear system: $$\begin{aligned} \label{abstract} a(u_h, v_h)+s_h(u_h,v_h)=\left<f,v_h\right>, \quad \forall v_h\in V_h\end{aligned}$$ Here, a stabilization form $s_h$ ensures the algebraic stability of the resulting linear algebraic system. Residual and jump indicators can be derived [@bernardi2000adaptive] from the integration by parts in $a(u_h, v_h)$, as done in [@DemlowOlsh] for an unstabilized TraceFEM. However, in our case, the stabilization $s_h$ is incorporated into an a posteriori estimate. We would like to highlight two important aspects of the adaptivity methodology for the method [\[abstract\]](#abstract){reference-type="eqref" reference="abstract"}: - The jump indicator requires the construction of non-standard one-dimensional quadratures to handle curved intersections of the surface with faces of bulk cells. The associated implementation burden represents a practical inconvenience of the adaptive TraceFEM approach introduced in [@DemlowOlsh]. - We have observed that the ratio $s_h(u_h,u_h)/a(u_h,u_h)$, where both forms are restricted to a single bulk element, often exhibits significant growth, even for uniformly refined meshes. Consequently, the inclusion of the stabilization term $s_h$ in an error indicator has the potential to compromise its efficiency. To address the first aspect, we propose an alternative error indicator designed for adaptively refined, graded, octree tessellations of the bulk domain $\Omega$, denoted as $\Omega_h$. 
This novel indicator is reliable and straightforward to compute, as it eliminates the need for integration over the curved intersections of an implicitly defined surface with two-dimensional faces of the bulk cells. Instead, the indicator incorporates a jump term that only requires the use of a standard 2D quadrature for the faces of the bulk mesh cells. Moreover, for the TraceFEM stabilized with the gradient-jump face stabilization, this term is already an integral part of the method. As for the second aspect, it is worth noting that the efficiency analysis of TraceFEM indicators remains an open question to the best of our knowledge. To explore this further, we undertake a comprehensive numerical investigation to assess the efficiency of the new indicator. In the case of stabilized TraceFEM with $Q_1$ finite elements, the indicator is found to be efficient. However, in the $Q_2$ case, efficiency gradually diminishes, although the convergence rates for the adaptive gradient-jump stabilized TraceFEM still appear to remain optimal. The remainder of this paper is organized as follows: Section [2](#s_FEM){reference-type="ref" reference="s_FEM"} introduces the stabilized adaptive TraceFEM along with a new computationally practical indicator. In Section [3](#s:analysis){reference-type="ref" reference="s:analysis"}, we provide a proof of the reliability estimate for the indicator. In Section [4](#s:num){reference-type="ref" reference="s:num"}, the adaptive method is tested numerically for low-regularity solutions to the Laplace--Beltrami equation on the unit sphere. We assess both the reliability and efficiency of the method, considering $Q_1$ and $Q_2$ conforming finite elements defined on octree meshes. Furthermore, we perform experiments using the adaptive TraceFEM with two different stabilizations. # The adaptive trace finite element method {#s_FEM} We are interested in the geometrically unfitted finite element method known as the TraceFEM [@ORG09]. 
The method considered in this section is an extension of the TraceFEM and stabilization techniques introduced in [@Alg1; @burman2018cut; @grande2018analysis] to hexahedral bulk octree meshes. After formulating the method for our model problem, the Laplace--Beltrami equation, we introduce error indicators and an adaptive discretization. ## Model problem {#s_setup} Let $\Omega$ be an open domain in $\mathbb{R}^3$ and let $\Gamma\subset \Omega$ be a smooth connected compact and closed hyper-surface embedded in $\mathbb R^3$. For a sufficiently smooth function $g:\Omega\rightarrow \mathbb{R}$ the tangential derivative on $\Gamma$ is defined by $$\nabla_{\Gamma} g=\nabla g-(\nabla g\cdot \mathbf n)\mathbf n,\label{e:2.1}$$ where $\mathbf n$ denotes the unit normal to $\Gamma$. Denote by $\mbox{\rm div}_\Gamma=\mbox{tr}(\nabla_\Gamma)$ the surface divergence operator and by $\Delta_{\Gamma}=\nabla_\Gamma\cdot\nabla_\Gamma$ the Laplace--Beltrami operator on $\Gamma$. The Laplace--Beltrami equation is a model example of an elliptic PDE posed on the surface $\Gamma$. The equation reads as follows: find $u:\Gamma \rightarrow \mathbb{R}$ satisfying $$\label{LB} -\Delta_{\Gamma} u +u =f\quad\text{on}~~\Gamma.$$ The zero-order term is added to avoid the non-essential technical details of handling the one-dimensional kernel consisting of all constant functions on $\Gamma$. The problem is well-posed in the sense of the weak formulation: Given $f\in H^{-1}(\Gamma)$, find $u\in H^1(\Gamma)$ satisfying $$\label{LBw} \int_{\Gamma}\big( \nabla_{\Gamma} u\cdot\nabla_{\Gamma}v+uv\big)\,ds =\int_\Gamma f\,v\,ds\, \quad\forall\,v\in H^1(\Gamma).$$ If $f\in L^2(\Gamma)$, then the unique solution satisfies $u\in H^2(\Gamma)$ and $\|u\|_{H^2(\Gamma)}\le c\|f\|_{L^2(\Gamma)}$ with a constant $c$ independent of $f$; see [@Aubin]. ## Discretization We assume an octree cubic mesh $\mathcal{T}_h$ covering the bulk domain $\Omega$.
In addition, we assume that the mesh is *gradually* refined, i.e., the sizes of two active (finest level) neighboring cubes differ at most by a factor of 2. Such octree grids are also known as balanced. The method also applies to unbalanced octrees, but our analysis and experiments use balanced grids. The set of all active (finest level) faces is denoted by $\partial\mathcal{T}_h$. The mesh is not aligned with the surface $\Gamma$, which can cut through the cubes with no further restrictions. By $\Gamma_h$ we denote a given approximation of $\Gamma$ such that $\Gamma_h$ is a $C^{0,1}$ piecewise smooth surface without boundary and $\Gamma_h$ is formed by smooth segments: $$\label{defgammah} \Gamma_h=\bigcup\limits_{T\in\mathcal{F}_h} \overline{T},$$ where $\mathcal{F}_h=\{T\subset\Gamma_h\,:\, T=\Gamma_h\cap S, ~\text{for}\,S\in \mathcal T_h\}$. For a given $T\in\mathcal F_h$ denote by $S_T$ a cube $S_T\in\mathcal{T}_h$ such that $T\subset S_T$ (if $T$ lies on a side shared by two cubes, either of these two cubes can be chosen as $S_T$). In practice, we construct $\Gamma_h$ as follows. Assume $\phi$ is a signed distance or a general level set function for $\Gamma$. We define $\Gamma_h$ as the zero level set of $\phi_h$, a piecewise polynomial interpolant to $\phi$ on $\mathcal{T}_h$: $${\Gamma}_h:=\{\mathbf x\in\Omega\,:\, \phi_h(\mathbf x)=0 \}.$$ For geometric consistency, the polynomial degree of $\phi_h$ is the same as the degree of the piecewise polynomial functions we use to define trial and test spaces in a finite element formulation. In some applications, $\phi_h$ is recovered from a solution of a discrete indicator function equation (e.g. in the level set or the volume of fluid methods), without any direct knowledge of $\Gamma$. Assumptions on how well ${\Gamma}_h$ should approximate $\Gamma$ will be given later. The unit (outward pointing) normal vector $\mathbf n_{h}$ is defined almost everywhere on $\Gamma_h$.
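In an implementation, collecting the cells that may intersect the zero level set reduces to sign checks of $\phi_h$ at cell vertices. A minimal sketch for the unit sphere on a uniform grid follows (a vertex-sign test, which can in principle miss grazing intersections; all names and grid parameters are ours):

```python
from math import sqrt

def phi(x, y, z):
    # level-set function of the unit sphere, a stand-in for phi_h
    return sqrt(x * x + y * y + z * z) - 1.0

def is_cut(corners):
    # flag a cell whose vertex values of phi change sign
    vals = [phi(*c) for c in corners]
    return min(vals) <= 0.0 <= max(vals)

N, lo, hi = 8, -1.5, 1.5          # uniform N^3 grid on a bounding box
h = (hi - lo) / N
cut_cells = [
    (i, j, k)
    for i in range(N) for j in range(N) for k in range(N)
    if is_cut([(lo + (i + a) * h, lo + (j + b) * h, lo + (c + k) * h)
               for a in (0, 1) for b in (0, 1) for c in (0, 1)])
]
print(len(cut_cells))             # number of flagged cells (the active-mesh analogue)
```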
We also define $\mathbf P_h(\mathbf x):=\mathbf{I}-\mathbf n_{h}(\mathbf x)\mathbf n_{h}(\mathbf x)^T$ for $\mathbf x\in \Gamma_h,~\mathbf x$ not on an edge. The tangential derivative along $\Gamma_h$ is given by $\nabla_{\Gamma_h} g=\mathbf P_h\nabla g$ for sufficiently smooth $g$ defined in a neighborhood of $\Gamma_h$. Consider a subdomain $\omega_h$ of $\Omega$ consisting only of those end-level cubic cells that contain $\Gamma_h$: $$\label{defomeg} \omega_h= \bigcup_{S\in{\mathcal T}_h^\Gamma} S,\quad\text{with}~{\mathcal T}_h^\Gamma=\{S\in\mathcal T_h\,:\, S=S_T~\text{for}~ T \in \mathcal{F}_h \}.$$ The piecewise constant function $h_S:\omega_h\rightarrow \mathbb R$ denotes the bulk cubic cell size. Denote by $\Sigma_h$ the set of all end-level internal faces of ${\mathcal T}_h^\Gamma$, i.e. square faces between intersected cells from ${\mathcal T}_h^\Gamma$, $$\label{defsigma} \Sigma_h= \{F\in\partial\mathcal{T}_h\,: F\in \textrm{int}(\omega_h) \}.$$ The piecewise constant function $h_F:\Sigma_h\rightarrow \mathbb R$ denotes the face size. Since the mesh is gradually refined, $h_F=\min(h_{S_F^+},h_{S_F^-})$, where $S_F^+,S_F^-\in {\mathcal T}_h^\Gamma$ are the two bulk cells which share the end-level face $F\in \Sigma_h$. We are also interested in the set of all faces which are intersected by $\Gamma_h$, $$\label{defsigmaG} \Sigma^\Gamma_h= \{F\in\Sigma_h\,: F \cap \Gamma_h\neq \emptyset \, \}.$$ Intersected faces are necessarily internal, so that $\Sigma^\Gamma_h \subset \Sigma_h$, but the opposite inclusion does not hold. For each cell $S$, let $M_S$ be the affine mapping from the reference unit cube. Then the finite element space of order $k$ is defined as: $$\begin{aligned} \label{traceVhk} V_h^k:=\{v\in C(\omega_h)\ |\ v|_{S}\circ M_S\in Q_k\,, \forall\ S \in{\mathcal T}_h^\Gamma\},\end{aligned}$$ where $Q_k$ is the space of tensor-product (Lagrangian) polynomials of degree $k$ in each variable.
In the case $k=1$, $V_h=V_h^1$ is the space of piecewise trilinear functions corresponding to the family $$\begin{aligned} \label{q1_family} Q_1=\mbox{span}\{1,x_1,x_2,x_3,x_1x_2,x_1x_3,x_2x_3,x_1x_2x_3\}. \end{aligned}$$ Note that we consider $H^1$-conforming (i.e., continuous) finite elements. In this paper we restrict ourselves to $k=1,2$. Let $f^e$ be an extension of $f$ from $\Gamma$ to $\Gamma_h$. The finite element formulation reads: Find $u_h\in V_h^k$ such that $$\label{FEM} \int_{\Gamma_h} \big( \nabla_{\Gamma_h} u_h\cdot\nabla_{\Gamma_h} v_h +u_hv_h\big)\,ds +s_h( u_h, v_h) = \int_{\Gamma_h}f^e v_h\,ds\qquad\forall\,v_h\in V_h^k.$$ Here $s_h$ is a stabilization term defined later. The purpose of the stabilization term is to enhance the robustness of the formulation with respect to the position of $\Gamma_h$ in the background mesh $\mathcal T_h$. In the context of the TraceFEM, the idea of stabilization was first introduced in [@Alg1]. ## TraceFEM stabilizations {#sec:tracefem_stab} We are interested in the two commonly used variants of the stabilization terms $s_h$ in [\[FEM\]](#FEM){reference-type="eqref" reference="FEM"}. In both cases, the stabilizing term can be assembled elementwise over all end-level cubes intersected by $\Gamma_h$: $$\begin{aligned} %\label{s_nv} s_h( u_h, v_h)=\sum_{S\in{\mathcal T}_h^\Gamma} s_S^\ast( u_h, v_h).\end{aligned}$$ 1. **Gradient-jump face stabilization** is the method introduced in [@Alg1] following the cutFEM approach developed for volumetric problems. In the context of the TraceFEM, this stabilization is often used with quasi-uniform bulk meshes, stationary surfaces, and lowest order elements; see e.g. [@hansbo2015characteristic; @burman2016cut; @burman2018cut; @grande2018analysis].
In this variant, local stabilizing terms are computed over the cube's faces that belong to the active skeleton $\eqref{defsigma}$, $$\begin{aligned} \label{s_jf} s_S^{JF}( u_h, v_h)= \sum_{F\in\partial S\cap\Sigma_h}\int_{F} \sigma_F \,\llbracket\nabla u_h\rrbracket\cdot \llbracket\nabla v_h\rrbracket\end{aligned}$$ where $\sigma_F$ is an $O(1)$ stabilization parameter, and $\llbracket\nabla u_h\rrbracket = (\nabla u_h)|_{S^+} - (\nabla u_h)|_{S^-}$, $F=S^-\cap S^+$, is a "jump" of the gradient across the face. Note that for continuous FE, stabilization [\[s_jf\]](#s_jf){reference-type="eqref" reference="s_jf"} is equivalent to penalizing the jumps of normal derivatives across faces. A higher-order version of $s_h^{JF}$ was suggested in [@zahedi2017space] and analyzed for quasi-uniform meshes in [@larson2020stabilization]. For $Q_2$ elements it reads: $$\begin{gathered} \label{s_jf2} s_S^{JF2}( u_h, v_h)= s_S^{JF}( u_h, v_h) + \int_{\Gamma_h\cap S} \sigma_\Gamma(\mathbf n_h\cdot \nabla{}u_h)( \mathbf n_h\cdot \nabla{}v_h) \\ +\sum_{F\in\partial S\cap\Sigma_h}\int_{F} \tilde{\sigma}_F{h^2_{F}}\,(\mathbf n_F \cdot \llbracket\nabla^2 u_h\rrbracket \mathbf n_F) (\mathbf n_F\cdot \llbracket\nabla^2 v_h\rrbracket \mathbf n_F)\\ +\int_{\Gamma_h\cap S} \tilde{\sigma}_\Gamma{h^2_{S}}\, (\mathbf n_h\cdot (\nabla{}^2u_h) \mathbf n_h)( \mathbf n_h\cdot (\nabla{}^2v_h)\mathbf n_h),\end{gathered}$$ where ${\sigma}_\Gamma$, $\tilde{\sigma}_F$, and $\tilde{\sigma}_\Gamma$ are $O(1)$ tuning parameters. The bilinear form $s_h^{JF2}$ stabilizes the trace finite element space $V_h^2$ in the case of the $Q_2$ polynomial family, as shown in [@larson2020stabilization]. In that paper, a more general stabilization $h^{\gamma}s_h^{JF2}$, $0\leq\gamma\leq2$, was considered and the sensitivity of the method to all stabilization parameters was explored. In our numerical results for the $Q_2$ family, we choose $\sigma_F=\tilde{\sigma}_F=\tilde{\sigma}_\Gamma=\sigma_\Gamma$.
We see that the gradient-jump stabilization gets quite complicated for higher order elements. Below we consider a normal-gradient volume stabilization, which is universal with respect to the FE degree. 2. **Normal-gradient volume stabilization** was introduced in [@burman2018cut; @grande2018analysis] and it penalizes the variation of the FE solution in the normal direction to the surface. This property was found particularly useful for applying TraceFEM to problems posed on evolving surfaces [@lehrenfeld2018stabilized] and so it is commonly used in this context [@yushutin2020numerical; @olshanskii2021finite; @olshanskii2022tangential; @olshanskii2023eulerian]. In what follows, $\mathbf n_h=\nabla \phi_h/|\nabla \phi_h|$ denotes an extension of the normal field on $\Gamma_h$ to a neighborhood of $\Gamma_h$ that contains $\omega_h$. The stabilization reads: $$\begin{aligned} \label{s_nv} s_S^{NV}( u_h, v_h)= \int_{S} \rho_S (\mathbf n_h\cdot\nabla u_h) (\mathbf n_h\cdot\nabla v_h),\end{aligned}$$ where $\rho_S$ is the stabilization parameter, constant in each cell such that $$\rho_S\simeq h_S^{-1}\quad\text{for}~S\in \mathcal T_h^\Gamma.$$ ## Error indicators One of the goals of this paper is to construct a new TraceFEM estimator which does not involve complicated and expensive computations on the edges $\Gamma_h\cap F$, $F\in{}\Sigma_h^\Gamma$. These edges are available only implicitly as intersections of $\Gamma_h$ with bulk faces. Moreover, one needs to construct an immersed edge quadrature on each intersected face from $\Sigma_h^\Gamma$, which is a significant computational burden. Again, note that some of the faces from $\Sigma_h^\Gamma$ are subfaces of bulk cells, which complicates the accumulation of flux jumps even further.
To this end, we define the *bulk jump* indicator: $$\begin{aligned} \label{indicator_F} \eta_F(S_T)=\|\llbracket \nabla u_h \rrbracket\|_{L^2(\partial S_T\cap \omega_h)}\,,\quad S_T\in {\mathcal T}_h^\Gamma.\end{aligned}$$ Note that the indicator [\[indicator_F\]](#indicator_F){reference-type="eqref" reference="indicator_F"} assesses the variation of the solution gradient across internal, square faces shared by the cubic cells in ${\mathcal T}_h^\Gamma$ rather than across the implicit edges $\Gamma_h\cap F$, $F\in{}\Sigma_h^\Gamma$, as done in [@DD07; @DemlowOlsh; @Chernyshenko2015]. The former is more straightforward to compute. Also note that [\[indicator_F\]](#indicator_F){reference-type="eqref" reference="indicator_F"} is accumulated over all faces from [\[defsigma\]](#defsigma){reference-type="eqref" reference="defsigma"} rather than just the intersected faces from [\[defsigmaG\]](#defsigmaG){reference-type="eqref" reference="defsigmaG"}. We will also need the *surface residual* indicator, $$\begin{aligned} \label{indicator_R} \eta_R(T)=h_{S_T}\|f_h+\Delta_{\Gamma_h} u_h-u_h\|_{L^2(T)}\end{aligned}$$ which was already used in [@DD07; @DemlowOlsh; @Chernyshenko2015]. The computation of [\[indicator_R\]](#indicator_R){reference-type="eqref" reference="indicator_R"} requires integration over surface cuts $T=\Gamma_h\cap S_T$, $S_T\in {\mathcal T}_h^\Gamma$, which is a standard procedure in the implementation of TraceFEM [\[FEM\]](#FEM){reference-type="eqref" reference="FEM"}. Thus, for the purpose of local mesh adaptation we use the following error indicator: $$\label{total_indicator} \eta(S_T):=(\alpha_r\eta_R(T)^2+\alpha_e\eta_F(S_T)^2+\alpha_s s_{S_T}^\ast(u_h,u_h))^{\frac12},$$ with some parameters $\alpha_r,\alpha_e, \alpha_s\ge 0$. **Remark 1**. *Note that for the gradient-jump face stabilization, the solution's jumps over faces (i.e.
the $\eta_F(S_T)^2$ quantity) are included in the $s_{S_T}^\ast(u_h,u_h)$ term, and so the face indicator is redundant; we therefore let $\alpha_e=0$ in the cases of $Q_1$ and $Q_2$. Otherwise, in our numerical experiments with the normal-gradient volume stabilization, we choose $\alpha_r=\alpha_e=\alpha_s=1$.* In this paper we do not consider any indicator of the geometric error resulting from the approximation of $\Gamma$ and other geometric quantities. These errors are assumed to be of higher order with respect to $h_S$. Results of experiments in Section [4](#s:num){reference-type="ref" reference="s:num"} show that the adaptive trace FE method based on $\eta(T)$ attains optimal convergence in the $H^1$ and $L^2$ norms. # Reliability {#s:analysis} In this section we prove an a posteriori error estimate that implies the reliability of the error indicator [\[total_indicator\]](#total_indicator){reference-type="eqref" reference="total_indicator"}. We start with several preliminaries. ## Preliminaries For the surface $\Gamma$, we consider its neighborhood: $$\mathcal O(\Gamma):=\{\mathbf{x}\in \mathbb{R}^3\ |\ \mathrm{dist}(\mathbf{x},\Gamma)< \tilde{c}\},\label{e:3.1}$$ with a suitable $\tilde{c}$ depending on $\Gamma$ such that $\omega_h\subset \mathcal O(\Gamma)\subset\Omega$ and the normal projection $\mathbf p: \mathcal O(\Gamma)\rightarrow\Gamma$, $$\mathbf p(\mathbf x)=\mathbf x-d(\mathbf x)\mathbf n(\mathbf x)$$ is well-defined. Hereafter $d\in C^2(\mathcal O(\Gamma))$ denotes the signed distance function such that $d<0$ in the interior of $\Gamma$ and $d>0$ in the exterior, and $\mathbf{n}(\mathbf{x}):=\nabla d(\mathbf{x})$ for all $\mathbf{x}\in \mathcal O(\Gamma)$. Hence, $\mathbf{n}$ is the normal vector on $\Gamma$ and $|\mathbf{n}(\mathbf x)|=1$ for all $\mathbf x\in \mathcal O(\Gamma)$.
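For a concrete example of the normal projection, take $\Gamma$ to be the unit sphere, for which $d$, $\mathbf n$ and $\mathbf p$ are explicit; a minimal sketch (illustrative only, not part of the method):

```python
import numpy as np

def d(x):
    # signed distance to the unit sphere: negative inside, positive outside
    return np.linalg.norm(x) - 1.0

def n(x):
    # n = grad d is the unit radial direction
    return x / np.linalg.norm(x)

def p(x):
    # normal projection p(x) = x - d(x) n(x) onto Gamma
    return x - d(x) * n(x)

x = np.array([0.3, -0.4, 1.2])   # |x| = 1.3, so d(x) = 0.3
print(np.linalg.norm(p(x)))      # 1.0: the projection lands on Gamma
```

Any point of $\mathcal O(\Gamma)$ off the medial axis is mapped onto $\Gamma$ along the normal direction, which is the map used below to lift functions and errors between $\Gamma_h$ and $\Gamma$.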
The Hessian of $d$ is denoted by $$\mathbf{H}(\mathbf x):= \nabla^2 d(\mathbf x)\in \mathbb{R}^{3\times 3},\quad \mathbf x\in \mathcal O(\Gamma).$$ The eigenvalues of $\mathbf{H}(\mathbf x)$ are the principal curvatures $\kappa_1(\mathbf x)$, $\kappa_2(\mathbf x)$, and $0$. We assume the following estimates on how well $\Gamma_h$ approximates $\Gamma$: $$\begin{aligned} &\mathrm{ess\ sup}_{\mathbf x\in \Gamma_h}|d(\mathbf x)| \le c_1 h^{k+1}, \label{e:3.9}\\ &\mathrm{ess\ sup}_{\mathbf x\in \Gamma_h}|\mathbf n(\mathbf x)-\mathbf n_{h}(\mathbf x)| \le c_2 h^k,\label{e:3.10}\end{aligned}$$ with constants $c_1$, $c_2$ independent of $h$, where $k\in\{1,2\}$ is the FE degree. The assumption is reasonable if $\Gamma$ is defined as the zero level of a (locally) smooth level set function $\phi$ and $\Gamma_h$ is the zero level of a $\phi_h\in V_h$, where $\phi_h$ interpolates $\phi$ and it holds $$\|\phi-\phi_h\|_{L^\infty(\mathcal O(\Gamma))}+h\|\nabla(\phi-\phi_h)\|_{L^\infty(\mathcal O(\Gamma))}\lesssim h^{k+1}.$$ Here and in the remainder, $A\lesssim B$ means $A\leq c\, B$ for some positive constant $c$ independent of the number of refinement levels and the position of $\Gamma_h$ in the background mesh. For $\mathbf x\in\Gamma_h$, define $\mu_h(\Gamma)(\mathbf x) = (1-d(\mathbf x)\kappa_1(\mathbf x))(1-d(\mathbf x)\kappa_2(\mathbf x))\mathbf n^T(\mathbf x)\mathbf n_h(\mathbf x)$. The surface measures $\mathrm{d}\mathbf{s}$ and $\mathrm{d}\mathbf{s}_{h}$ on $\Gamma$ and $\Gamma_h$, respectively, are related by $$\mu_h(\Gamma)(\mathbf x)\mathrm{d}\mathbf{s}_h(\mathbf x)=\mathrm{d}\mathbf{s}(\mathbf p(\mathbf x)),\quad \mathbf x\in\Gamma_h. \label{e:3.16}$$ The solution of the Laplace--Beltrami problem and its data are defined on $\Gamma$, while the finite element method is defined on $\Gamma_h$. Hence, we need a suitable extension of a function from $\Gamma$ to its neighborhood.
For a function $v$ on $\Gamma$ we define $$v^e(\mathbf x):= v(\mathbf p(\mathbf x)) \quad \hbox{for all } \mathbf x\in \mathcal O(\Gamma).$$ The following formulas for this extended function are well-known (cf. section 2.3 in [@DD07]): $$\begin{aligned} \nabla u^e(\mathbf{x}) &= (\mathbf{I}-d(\mathbf{x})\mathbf{H})\nabla_{\Gamma} u(\mathbf p(\mathbf x)) \quad \hbox{ in } \mathcal O(\Gamma),\label{grad1}\\ \nabla_{\Gamma_h} u^e(\mathbf x) &= \mathbf P_h(\mathbf x)(\mathbf{I}-d(\mathbf{x})\mathbf{H})\nabla_{\Gamma} u(\mathbf p(\mathbf x)) \quad \hbox{ a.e. on } \Gamma_h,\label{grad2}\end{aligned}$$ with $\mathbf{H}=\mathbf{H}(\mathbf{x})$. For $\mathbf x\in \Gamma_h$ also define $\tilde{\mathbf P}_{h}(\mathbf x)= \mathbf{I}-\mathbf n_h(\mathbf x)\mathbf n(\mathbf x)^T/(\mathbf n_h(\mathbf x)\cdot\mathbf n(\mathbf x))$. One can represent the surface gradient of $u\in H^1(\Gamma)$ in terms of $\nabla_{\Gamma_h}u^e$ as follows $$\label{hhl} \nabla_{\Gamma}u(\mathbf p(\mathbf x))=(\mathbf{I}-d(\mathbf x)\mathbf{H}(\mathbf x))^{-1} \tilde{\mathbf P}_{h}(\mathbf x) \nabla_{\Gamma_h}u^e(\mathbf x)~~ \hbox{ a.e. }\mathbf x\in \Gamma_h.$$ Due to [\[e:3.16\]](#e:3.16){reference-type="eqref" reference="e:3.16"} and [\[hhl\]](#hhl){reference-type="eqref" reference="hhl"}, one gets $$\label{aux13} \int_{\Gamma}\nabla_{\Gamma}u\nabla_{\Gamma}v\, \mathrm{d}\mathbf{s}=\int_{\Gamma_h} \mathbf{A}_h\nabla_{\Gamma_h}u^e\nabla_{\Gamma_h}v^e \, \mathrm{d}\mathbf{s}_h \quad \hbox{for all } v\in H^1(\Gamma),$$ with $\mathbf{A}_h(\mathbf x)=\mu_h(\mathbf x) \tilde{\mathbf P}^T_h(\mathbf x)(\mathbf{I}-d(\mathbf x)\mathbf{H}(\mathbf x))^{-2}\tilde{\mathbf P}_h(\mathbf x)$. For sufficiently smooth $u$ and $|\mu| \leq 2$, it holds (cf. 
Lemma 3 in [@Dziuk88]): $$|D^{\mu} u^e(\mathbf x)|\lesssim \left(\sum_{|\mu|=2}|D_{\Gamma}^{\mu} u(\mathbf p(\mathbf x))| + |\nabla_{\Gamma} u(\mathbf p(\mathbf x))|\right)\quad \hbox{in } \mathcal O(\Gamma). \label{e:3.13}$$ We need the following uniform trace inequalities. For any end-level cell $S\subset\omega_h$ and its face $F\subset S$ it holds $$\begin{aligned} \label{eq_trace} \|v\|_{L^2(S\cap \Gamma_h)}^2\lesssim h^{-1}_S\|v\|_{L^2(S)}^2+h_S\|\nabla v\|_{L^2(S)}^2\quad \forall~v\in H^1(S).\\ \label{eq_trace2} \|v\|_{L^2(F\cap \Gamma_h)}^2\lesssim h^{-1}_F\|v\|_{L^2(F)}^2+h_F\|\nabla v\|_{L^2(F)}^2\quad \forall~v\in H^1(F).\end{aligned}$$ Note that for graded octree meshes it holds $h_F\simeq h_S$. The proof of [\[eq_trace\]](#eq_trace){reference-type="eqref" reference="eq_trace"} follows by subdividing any cubic cell into a finite number of regular tetrahedra and further applying Lemma 4.2 from [@Hansbo2003] on each of these tetrahedra. A similar procedure is applied to prove [\[eq_trace2\]](#eq_trace2){reference-type="eqref" reference="eq_trace2"}. We will use the following notation $$a_h(u,v):= \int_{\Gamma_h} (\nabla_{\Gamma_h} u\cdot\nabla_{\Gamma_h} v +uv)\,d\mathbf s_h.$$ ## A posteriori estimate {#s_adapt} In this section, we deduce an a posteriori error estimate for the TraceFEM  [\[FEM\]](#FEM){reference-type="eqref" reference="FEM"}. For the sake of analysis we make the following *assumptions*: (i) The octree mesh is gradually refined; (ii) For any $\mathbf s\in \Gamma$ denote by $K(\mathbf s)$ the number of end-level cubic cells from $\omega_h$ intersected by the line $\ell(\mathbf s)=\{\mathbf x\in\mathcal{O}(\Gamma):\,\mathbf p(\mathbf x)=\mathbf s\}$. We assume $K(\mathbf s)\le K$ with a constant $K$ independent of $\mathbf s$ and the number of refinement levels. Consider the surface finite element error $e_h=u^e-u_h$ in $\omega_h$.
By $e_h^l$ we denote the lift of the error function on $\mathcal O(\Gamma)$, $e_h^l(\mathbf x)=u(\mathbf p(\mathbf x))-u_h(\mathbf s)$ with $\mathbf s\in\Gamma_h$ such that $\mathbf p(\mathbf s)=\mathbf p(\mathbf x)$. Note that $e_h^l$ is constant in normal directions to $\Gamma$, i.e. $e_h^l=(e_h^l|_\Gamma)^e$. Further we prove an a posteriori bound for the augmented $H^1$-norm of $e_h^l$ on $\Gamma$, i.e. for $$\label{3.1} |\!|\!| e_h |\!|\!|^2= a( e_h^l, e_h^l)+s_h(e_h, e_h),\quad\text{with}~a( u,v)=\int_\Gamma(\nabla_\Gamma u\cdot \nabla_\Gamma v + uv )\,ds.$$ Using straightforward calculations and [\[aux13\]](#aux13){reference-type="eqref" reference="aux13"} one checks the following identities for any $\psi_h\in V_h$ $$\label{s4_e1} \begin{split} |\!|\!| e_h |\!|\!|^2&= \int_\Gamma f e_h^l\,d\mathbf s-a(u_h^l, e_h^l)+s_h(e_h, e_h)\\ &= \int_{\Gamma_h} f^e e_h\mu_h\,d\mathbf s_h-\int_{\Gamma_h} f_h \psi_h\,d\mathbf s_h+a_h(u_h, \psi_h)+s_h(u_h, \psi_h)-a(u_h^l, e_h^l)+s_h(e_h, e_h)\\ &=\int_{\Gamma_h} (f^e\mu_h-f_h) e_h\,d\mathbf s_h +\int_{\Gamma_h} f_h(e_h-\psi_h)\,d\mathbf s_h+a_h(u_h, \psi_h- e_h) +s_h(u_h, \psi_h-e_h) \\&\qquad -\int_{\Gamma_h}(\mathbf{A}_h-\mathbf P_h)\nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} e_h\, \mathrm{d}\mathbf{s}_h.
\end{split}$$ Element-wise integration by parts for the third term on the right hand side of [\[s4_e1\]](#s4_e1){reference-type="eqref" reference="s4_e1"} gives $$\label{s4_e2} a_h(u_h, \psi_h-e_h)= \int_{\Gamma_h} (\Delta_{\Gamma_h} u_h-u_h)( e_h-\psi_h) \hspace{2pt}{\rm d}\mathbf s_h -\frac12\sum_{T\in \mathcal F_h}\int_{\partial T} \llbracket {\nabla}_{\Gamma_h}u_h \rrbracket ( e_h-\psi_h) \hspace{2pt}{\rm d}\mathbf r.$$ The Cauchy inequality gives $$s_h(u_h, \psi_h-e_h)\le \left(\sum_{S\in {\mathcal T}_h^\Gamma} s_S^\ast(u_h,u_h)\right)^{\frac12} \left(\sum_{S\in {\mathcal T}_h^\Gamma} s_S^\ast(\psi_h-e_h,\psi_h-e_h) \right)^{\frac12}.$$ Substituting [\[s4_e2\]](#s4_e2){reference-type="eqref" reference="s4_e2"} into [\[s4_e1\]](#s4_e1){reference-type="eqref" reference="s4_e1"} and applying the Cauchy inequality elementwise over $\mathcal F_h$ to estimate integrals, we get $$\label{s4_e3} \begin{split} |\!|\!| e_h |\!|\!|^2&\lesssim \sum_{T\in \mathcal F_h} \left(\|f^e\mu_h-f_h\|_{L^2(T)}+ \|\mathbf{A}_h-\mathbf P_h\|_{L^\infty(T)}\|\nabla_{\Gamma_h} u_h\|_{L^2(T)}\right)\| e_h\|_{H^1(\Gamma_h)} \\&+ \left(\sum_{T\in \mathcal F_h}\eta_R(T)^2\right)^{\frac12} \left(\sum_{T\in \mathcal F_h}h^{-2}_{S_T}\| e_h-\psi_h\|_{L^2(T)}^2 \right)^{\frac12}\\ &+ \left(\sum_{T\in \mathcal F_h} h_{S_T}\|\llbracket {\nabla}_{\Gamma_h}u_h \rrbracket\|^2_{\partial T} \right)^{\frac12} \left(\sum_{T\in \mathcal F_h} h_{S_T}^{-1}\| e_h-\psi_h\|_{L^2(\partial T)}^2 \right)^{\frac12} \\ &+ \left(\sum_{S\in {\mathcal T}_h^\Gamma} s_S^\ast(u_h,u_h)\right)^{\frac12} \left(\sum_{S\in {\mathcal T}_h^\Gamma} s_S^\ast(\psi_h-e_h,\psi_h-e_h) \right)^{\frac12}. \end{split}$$ To proceed further we need several results, which we split into a few lemmas. **Lemma 1**. *For all $T\in\mathcal{F}_h$ it holds $$\label{eq:l1_new} h_{S_T}\|\llbracket {\nabla}_{\Gamma_h}u_h\rrbracket\|^2_{L^2(\partial T)} \lesssim \eta_F(S_T)^2.$$* *Proof.* Recall that the face-based indicator $\eta_F(S_T)$ for a cell $S_T$ includes all internal faces $F\in \partial{}S_T \cap \Sigma_h$ rather than only faces from $\partial{}S_T\cap \Sigma_h^\Gamma$. Also note that $\llbracket {\nabla}_{\Gamma_h}u_h \rrbracket=\llbracket\mathbf P_h\nabla u_h\rrbracket$ is a rational function of a finite degree on each face of $S_T$. Application of the uniform trace estimate [\[eq_trace2\]](#eq_trace2){reference-type="eqref" reference="eq_trace2"} followed by the FE inverse estimate on each face $F\subset \partial{}S_T\cap \Sigma_h^\Gamma$ gives the assertion. ◻ **Lemma 2**. *The following bound holds for both stabilizations and FE degrees: $$\label{eq:s_bound} s_S^\ast(\psi_h-e_h,\psi_h-e_h)\lesssim s_S^\ast(e_h,e_h) + h^{-1}_S\|\nabla \psi_h\|^2_{L^2(\omega(S))}.$$* *Proof.* We first apply the triangle inequality to show $$\label{eq:639} s_S^\ast(\psi_h-e_h,\psi_h-e_h)\le 2(s_S^\ast(e_h,e_h) + s_S^\ast(\psi_h,\psi_h)).$$ We need to estimate the second term on the right-hand side.
For the gradient-jump stabilization and $k=2$ we have $$\begin{gathered} \label{eq:643} s_S^{JF2}(\psi_h,\psi_h)= \sigma_\Gamma\|\mathbf n_h\cdot \nabla{}\psi_h\|^2_{L^2(\Gamma_h\cap S)}+\tilde{\sigma}_\Gamma{h^2_{S}}\|\mathbf n_h\cdot (\nabla{}^2\psi_h) \mathbf n_h\|^2_{L^2(\Gamma_h\cap S)} \\ +\sum_{F\in\partial S\cap\Sigma_h}\left( \sigma_F \,\|\llbracket\nabla \psi_h\rrbracket\|^2_{L^2(F)} + \tilde{\sigma}_F{h^2_{F}}\|\mathbf n_F \cdot \llbracket\nabla^2 \psi_h\rrbracket \mathbf n_F\|^2_{L^2(F)} \right).\end{gathered}$$ To estimate the first two terms on the right-hand side of [\[eq:643\]](#eq:643){reference-type="eqref" reference="eq:643"}, we apply the trace estimate [\[eq_trace\]](#eq_trace){reference-type="eqref" reference="eq_trace"}: $$\label{eq:648} \begin{split} &\|\mathbf n_h\cdot \nabla{}\psi_h\|^2_{L^2(\Gamma_h\cap S)} \le \|\nabla{}\psi_h\|^2_{L^2(\Gamma_h\cap S)} \lesssim h_{S}^{-1} \|\nabla\psi_h\|^2_{L^2(S)} + h_{S} \|\nabla^2\psi_h\|^2_{L^2(S)} \lesssim h_{S}^{-1} \|\nabla\psi_h\|^2_{L^2(S)} \\ {h^2_{S}}&\|\mathbf n_h\cdot (\nabla{}^2\psi_h) \mathbf n_h\|^2_{L^2(\Gamma_h\cap S)} \le {h^2_{S}}\|\nabla{}^2\psi_h\|^2_{L^2(\Gamma_h\cap S)} \lesssim {h_{S}}\|\nabla{}^2\psi_h\|^2_{L^2(S)} \lesssim h_{S}^{-1}\|\nabla\psi_h\|^2_{L^2(S)}. \end{split}$$ By $\omega(S)$ we denote the union of cubic cells from $\omega_h$ sharing faces with $S$.
To estimate the third and fourth terms on the right-hand side of [\[eq:643\]](#eq:643){reference-type="eqref" reference="eq:643"}, we apply the finite element trace and inverse inequalities: $$\label{eq:658} \begin{split} \sum_{F\in\partial S\cap\Sigma_h}\sigma_F \,\|\llbracket \nabla \psi_h\rrbracket\|^2_{L^2(F)} &\lesssim \|\llbracket \nabla\psi_h\rrbracket\|^2_{L^2(\partial S\cap\Sigma_h)} \lesssim h_{S}^{-1} \|\nabla\psi_h\|^2_{L^2(\omega(S))}\\ \sum_{F\in\partial S\cap\Sigma_h} {h^2_{F}}\|\mathbf n_F \cdot \llbracket\nabla^2 \psi_h\rrbracket \mathbf n_F\|^2_{L^2(F)}& \lesssim h^2_{S}\|\llbracket\nabla^2 \psi_h\rrbracket\|^2_{L^2(\partial S\cap\Sigma_h)} \lesssim h_{S}\|\nabla^2 \psi_h\|^2_{L^2(\omega(S))}\\ &\lesssim h_{S}^{-1} \|\nabla\psi_h\|^2_{L^2(\omega(S))}. \end{split}$$ The combination of [\[eq:643\]](#eq:643){reference-type="eqref" reference="eq:643"}--[\[eq:658\]](#eq:658){reference-type="eqref" reference="eq:658"} gives $$\label{eq:670} s_S^{JF2}(\psi_h,\psi_h)\lesssim h_{S}^{-1} \|\nabla\psi_h\|^2_{L^2(\omega(S))}.$$ Of course, the same bound [\[eq:670\]](#eq:670){reference-type="eqref" reference="eq:670"} holds also for $k=1$. For the normal-volume stabilization we have $$\label{eq:671} s_S^{NV}(\psi_h,\psi_h)= \rho_S\|\mathbf n_h\cdot \nabla{}\psi_h\|^2_{L^2(S)}\lesssim h^{-1}_S\|\mathbf n_h\cdot \nabla{}\psi_h\|^2_{L^2(S)},$$ where we used that $\rho_S$ is an $O(h^{-1}_S)$ parameter.
Substituting [\[eq:670\]](#eq:670){reference-type="eqref" reference="eq:670"} and [\[eq:671\]](#eq:671){reference-type="eqref" reference="eq:671"} into [\[eq:639\]](#eq:639){reference-type="eqref" reference="eq:639"} proves the lemma.\  ◻ Due to the geometric approximation properties [\[e:3.9\]](#e:3.9){reference-type="eqref" reference="e:3.9"}, [\[e:3.10\]](#e:3.10){reference-type="eqref" reference="e:3.10"} and the "lifting" identities [\[e:3.16\]](#e:3.16){reference-type="eqref" reference="e:3.16"} and [\[grad2\]](#grad2){reference-type="eqref" reference="grad2"} we have $$\label{aux25} \| e_h\|_{H^1(\Gamma_h)} \lesssim \| e_h^l\|_{H^1(\Gamma)}.$$ **Lemma 3**. *There exists $\psi_h\in V_h$ such that $$\label{A2} \sum_{T\in \mathcal F_h}\left[h^{-2}_{S_T}\| e_h-\psi_h\|_{L^2(T)}^2 +h_{S_T}^{-1}\| e_h-\psi_h\|_{L^2(\partial T)}^2 +s_S^\ast(\psi_h-e_h,\psi_h-e_h) \right]\lesssim |\!|\!| e_h |\!|\!|^2.$$* *Proof.* To handle the edge term on the left-hand side of [\[A2\]](#A2){reference-type="eqref" reference="A2"}, we need some further constructions: For a curved edge $e\subset\partial T$ denote by $F_e\subset \partial S_T$ the face of $S_T$ such that $e\subset F_e$. Denote by $\omega(e)\subset\mathcal T_h$ the set of all cubic cells touching $F_e$. Let $\tilde\phi_h$ be the natural polynomial extension of the level-set function $\phi_h|_{S_T}$ and $\widetilde{\Gamma}_h(e)=\{\mathbf x\in\omega(e)\,:\,\tilde\phi_h(\mathbf x)=0\}$ be a smooth approximation of $\Gamma$ locally in $\omega(e)$. Note that due to the graded refinement assumption there is a $h_{S_T}/2$ neighborhood of $e$ in $\widetilde{\Gamma}_h(e)$.
Then for $\rho\in H^1(\widetilde{\Gamma}_h(e))$ it holds $$\label{aux590} \|\rho\|_{L^2(e)}^2 \lesssim h^{-1}_{S_T}\|\rho\|_{L^2(\widetilde{\Gamma}_h(e))}^2 +h_{S_T}\|\nabla_{\widetilde{\Gamma}_h(e)}\rho\|_{L^2(\widetilde{\Gamma}_h(e))}^2.$$ The estimate [\[aux590\]](#aux590){reference-type="eqref" reference="aux590"} follows from a standard flattening argument and applying a trace inequality as in [\[eq_trace2\]](#eq_trace2){reference-type="eqref" reference="eq_trace2"}. We apply the bulk trace inequality [\[eq_trace\]](#eq_trace){reference-type="eqref" reference="eq_trace"} and the edge trace inequality [\[aux590\]](#aux590){reference-type="eqref" reference="aux590"} to estimate $$\label{A2_1} \begin{split} h^{-2}_{S_T}\| e_h-\psi_h\|_{L^2(T)}^2&+\sum_{e\in\partial T}h_{S_T}^{-1}\| e_h-\psi_h\|_{L^2(e)}^2\\ &\lesssim h^{-3}_{S_T}\| e_h^l-\psi_h\|_{L^2(S_T)}^2+h^{-1}_{S_T}\|\nabla( e_h^l-\psi_h)\|_{L^2(S_T)}^2\\ &\qquad+\sum_{e\in\partial T}\left(h_{S_T}^{-2}\| e_h^l-\psi_h\|_{L^2(\widetilde{\Gamma}_h(e))}^2+\|\nabla_{\widetilde{\Gamma}_h}( e_h^l-\psi_h)\|_{L^2(\widetilde{\Gamma}_h(e))}^2\right)\\ &\lesssim h^{-3}_{S_T}\| e_h^l-\psi_h\|_{L^2(\omega(e))}^2+h^{-1}_{S_T}\|\nabla( e_h^l-\psi_h)\|_{L^2(\omega(e))}^2 \\ &\qquad +\sum_{e\in\partial T}\left(\|\nabla_{\widetilde{\Gamma}_h} e_h^l\|_{L^2(\widetilde{\Gamma}_h(e))}^2+\|\nabla\psi_h\|_{L^2(\widetilde{\Gamma}_h(e))}^2\right)\\ &\lesssim h^{-3}_{S_T}\| e_h^l-\psi_h\|_{L^2(\omega(e))}^2+h^{-1}_{S_T}\|\nabla( e_h^l-\psi_h)\|_{L^2(\omega(e))}^2\\ &\qquad+\sum_{e\in\partial T}\left(\|\nabla_\Gamma e_h^l\|_{L^2(\mathbf p(\widetilde{\Gamma}_h(e)))}^2+h^{-1}_{S_T}\|\nabla\psi_h\|_{L^2(\omega(e))}^2\right), \end{split}$$ where we used an estimate $$\label{aux599-2} \|\nabla_{\widetilde{\Gamma}_h} e_h^l\|_{L^2(\widetilde{\Gamma}_h(e))}\lesssim \|\nabla_{\Gamma}e_h^l\|_{L^2(\mathbf p(\widetilde{\Gamma}_h(e)))},$$ which holds due to [\[e:3.16\]](#e:3.16){reference-type="eqref" reference="e:3.16"}, [\[grad2\]](#grad2){reference-type="eqref"
reference="grad2"} and the fact that [\[e:3.9\]](#e:3.9){reference-type="eqref" reference="e:3.9"}, [\[e:3.10\]](#e:3.10){reference-type="eqref" reference="e:3.10"} also hold for the locally extended $\Gamma_h$ with possibly different $O(1)$ constants $c_1$, $c_2$. Also note that for any lifted function $u^l\in L^2(\omega_h)$ $$\label{aux599} \|u^l\|_{L^2(S)}^2 \lesssim h_{S_T}\|u^l\|_{L^2(\mathbf p(S))}^2.$$ Thanks to our assumption (i) there is a Scott-Zhang type interpolant $\psi_h\in V_h$ of $e_h^l\in H^1(\Omega)$ [@heuveline2007h] such that $$\label{A2_2} h^{-1}_{S}\| e_h^l-\psi_h\|_{L^2(S)}+\|\nabla\psi_h\|_{L^2(S)}\lesssim \| e_h^l\|_{H^1(\omega(S))}\quad\forall~S\in\Omega_h,$$ where $\omega(S)$ is defined as follows: Let $\tilde\omega(S)$ consist of $S$ and of all end-level cubic cells touching $S$, then $\omega(S)$ is a patch of cells defined as the union of $\tilde\omega(S)$ and of all end-level cubic cells touching $\tilde\omega(S)$. We assume $\tilde{c}$ in [\[e:3.1\]](#e:3.1){reference-type="eqref" reference="e:3.1"} to be sufficiently large and $h$ sufficiently small that $\omega(S)\subset \mathcal O(\Gamma)$ for all $S\in\Omega_h$. Applying in [\[A2_1\]](#A2_1){reference-type="eqref" reference="A2_1"} the estimates from [\[A2_2\]](#A2_2){reference-type="eqref" reference="A2_2"}, [\[aux599\]](#aux599){reference-type="eqref" reference="aux599"} and the result from Lemma [Lemma 1](#L3){reference-type="ref" reference="L3"} yields $$\label{A2_3} \begin{split} \sum_{T\in \mathcal F_h}\left[h^{-2}_{S_T}\| e_h-\psi_h\|_{L^2(T)}^2\right. & \left. 
+h_{S_T}^{-1}\| e_h-\psi_h\|_{L^2(\partial T)}^2 + s_{S_T}^\ast(\psi_h-e_h,\psi_h-e_h) \right]\\ &\lesssim \sum_{T\in \mathcal F_h}\left( h_{S_T}^{-1} \| e_h^l\|_{H^1(\omega(S_T))}^2+\|\nabla_\Gamma e_h^l\|_{L^2(\mathbf p(\omega(S_T)))}^2+ h^{-1}_S\|\nabla \psi_h\|^2_{L^2(\omega(S_T))}\right)\\ &\lesssim \sum_{T\in \mathcal F_h}\left( h_{S_T}^{-1} \| e_h^l\|_{H^1(\omega(S_T))}^2+\|\nabla_\Gamma e_h^l\|_{L^2(\mathbf p(\omega(S_T)))}^2\right)\\ &\lesssim \sum_{S\in \omega_h} \| e_h^l\|_{H^1(\mathbf p(\omega(S_T)))}^2. \end{split}$$ In the last inequality we also used the fact that for the graded octree mesh $\mbox{diam}(\omega(S_T))\simeq h_{S_T}$. Due to assumption (i) any cell $S_T$ may belong to a uniformly bounded number of patches. Thanks to this and assumption (ii), any $\mathbf x\in\Gamma$ may belong to the projections of patches whose total number is also uniformly bounded. This establishes the bound $$\label{A2_4} \sum_{T\in \mathcal F_h} \| e_h^l\|_{H^1(\mathbf p(\omega(S_T)))}^2\lesssim \| e_h^l\|_{H^1(\Gamma)}^2.$$ Using [\[A2_1\]](#A2_1){reference-type="eqref" reference="A2_1"}--[\[A2_4\]](#A2_4){reference-type="eqref" reference="A2_4"} proves the lemma. ◻ Combining [\[s4_e3\]](#s4_e3){reference-type="eqref" reference="s4_e3"}, [\[eq:l1_new\]](#eq:l1_new){reference-type="eqref" reference="eq:l1_new"}, [\[aux25\]](#aux25){reference-type="eqref" reference="aux25"} and [\[A2\]](#A2){reference-type="eqref" reference="A2"} gives the following *a posteriori* error estimate: $$\label{apost} |\!|\!| e_h |\!|\!| \lesssim \sum_{T\in \mathcal F_h} \left(\|f^e\mu_h-f_h\|_{L^2(T)}+ \|\mathbf{A}_h-\mathbf P_h\|_{L^\infty(T)}\|\nabla_{\Gamma_h} u_h\|_{L^2(T)}\right) +\left(\sum_{T\in \mathcal F_h}\big(\eta_R(T)^2+\eta_F(S_T)^2\big)+\sum_{S\in {\mathcal T}_h^\Gamma} s_S^\ast(u_h,u_h)\right)^{\frac12}.$$ Assume that local grid refinement leads to better local surface reconstruction, i.e. [\[e:3.9\]](#e:3.9){reference-type="eqref" reference="e:3.9"} and [\[e:3.10\]](#e:3.10){reference-type="eqref" reference="e:3.10"} can be formulated locally; then it holds $\|f^e\mu_h-f_h\|_{L^2(T)}+\|\mathbf{A}_h-\mathbf P_h\|_{L^\infty(T)}=O(h^{k+1})$.
In this case, the first term on the right-hand side of [\[apost\]](#apost){reference-type="eqref" reference="apost"} is of higher order for both $Q_1$ ($k=1$) and $Q_2$ ($k=2$) elements. # Numerical examples {#s:num} This section presents a numerical study of an adaptive version of the stabilized TraceFEM [\[FEM\]](#FEM){reference-type="eqref" reference="FEM"}, which relies on the novel indicator [\[total_indicator\]](#total_indicator){reference-type="eqref" reference="total_indicator"}. First, we provide details of the adaptive algorithm, including the surface approximation, in Section [4.2](#sec:adaptiveTrace){reference-type="ref" reference="sec:adaptiveTrace"}. Next, we confirm the a posteriori estimates for the families $Q_1$ and $Q_2$. Moreover, we address the efficiency of the indicator using a manufactured solution. We test both the gradient-jump and normal-gradient volume stabilizations. However, we omit the bulk jump indicator $\eta_F(S_T)$ [\[indicator_F\]](#indicator_F){reference-type="eqref" reference="indicator_F"} in the proposed indicator [\[numerical_indicator\]](#numerical_indicator){reference-type="eqref" reference="numerical_indicator"} if the TraceFEM scheme [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} is stabilized by including the $s_h^{JF}$ or $s_h^{JF2}$ forms; see Remark [Remark 1](#rem:omit_bulkjump){reference-type="ref" reference="rem:omit_bulkjump"}. ## A low-regularity test case This section discusses the model problem [\[LBw\]](#LBw){reference-type="eqref" reference="LBw"}, the solution of which is not regular enough to provide optimal rates of convergence if uniform refinement is employed. We consider the unit sphere $\Gamma$ and a family of solutions $u=u_\lambda \in H^{1+\lambda}(\Gamma)$, $0\leq\lambda\leq 1$, such that $$-\Delta_{\Gamma} u + u=f,\label{eq:lambdaLB}$$ with the forcing $f=f_\lambda\in H^{\lambda-1}(\Gamma)$. Consequently, by choosing different values of $\lambda$, we may obtain exact solutions of the desired regularity.
An example [@Chernyshenko2015] of such a family is given in spherical polar coordinates $(\phi, \theta)$, $\theta\in [0,\pi]$, $\phi\in(-\pi,\pi]$, by $$\label{polar} u= \sin^\lambda\theta\sin\phi,\qquad f=(1+\lambda^2+\lambda)\sin^\lambda\theta\sin\phi+(1-\lambda^2)\sin^{\lambda-2}\theta\sin\phi.$$ Clearly, $u$ and $f$ have singularities at the north, $\theta=0$ or $(x,y,z)=(0,0,1)$, and the south, $\theta=\pi$ or $(x,y,z)=(0,0,-1)$, poles (see Figure [\[fig:snapshots\]](#fig:snapshots){reference-type="ref" reference="fig:snapshots"}), while being harmonic in the azimuthal direction $\phi$ for each fixed $\theta\neq 0,\pi$. Before the iterative adaptive procedure starts, one constructs a sufficiently fine mesh of $\Omega=[-2,2]^3$ so that the initial surface approximation $\Gamma_h$ is well-defined. To this end, the level-set function $d(x,y,z)=x^2+y^2+z^2-1$ is chosen for the description of the unit sphere $\Gamma$. The edges of the cube $\Omega$ are divided into eight equal segments of length $h=0.5$, see Figure [\[fig:snapshots\]](#fig:snapshots){reference-type="ref" reference="fig:snapshots"}, cycle$=0$. These cells constitute the initial mesh $\mathcal{T}_h$. ## Adaptive stabilized TraceFEM {#sec:adaptiveTrace} In this section we present the adaptive algorithm tested in the numerical experiments. The adaptive procedure is a sequence of cycles, each consisting of the three steps below. *Step 1* (`APPROXIMATE GEOMETRY`). To guarantee continuity of the surface approximation, we first resolve all hanging nodes in $\mathcal{T}_h$ by adding a sufficient number of linear constraints. The interpolant $\phi^k_h$ of order $k$ of the level-set function $d$ on the mesh $\mathcal{T}_h$ identifies the active domain $\omega_h$ consisting of the intersected cells $\mathcal{T}_h^\Gamma$. Geometrical information such as the normal vector $\mathbf n_h$ and the surface quadratures representing $\Gamma_h$ is derived from the discrete level-set function $\phi_h^k$. *Step 2* (`SOLVE`).
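The pair [\[polar\]](#polar){reference-type="eqref" reference="polar"} can be checked directly against [\[eq:lambdaLB\]](#eq:lambdaLB){reference-type="eqref" reference="eq:lambdaLB"} using the spherical form of the Laplace--Beltrami operator; a finite-difference sketch (the step size and sample point are arbitrary choices of ours):

```python
import math

lam = 0.7  # regularity parameter lambda

def u(theta, phi):
    return math.sin(theta) ** lam * math.sin(phi)

def f(theta, phi):
    s = math.sin(theta)
    return ((1 + lam + lam ** 2) * s ** lam
            + (1 - lam ** 2) * s ** (lam - 2)) * math.sin(phi)

def laplace_beltrami(theta, phi, h=1e-4):
    # On the unit sphere: (1/sin t) d/dt(sin t du/dt) + (1/sin^2 t) d^2u/dphi^2,
    # approximated here by central finite differences.
    du_dt = lambda t: (u(t + h, phi) - u(t - h, phi)) / (2 * h)
    g = lambda t: math.sin(t) * du_dt(t)
    term1 = (g(theta + h) - g(theta - h)) / (2 * h) / math.sin(theta)
    term2 = (u(theta, phi + h) - 2 * u(theta, phi)
             + u(theta, phi - h)) / h ** 2 / math.sin(theta) ** 2
    return term1 + term2

theta, phi = 1.0, 0.5
residual = -laplace_beltrami(theta, phi) + u(theta, phi) - f(theta, phi)
print(abs(residual))  # small: u solves -Delta_Gamma u + u = f away from the poles
```

The residual vanishes up to discretization error at any point away from $\theta=0,\pi$, confirming that the singular behavior of the pair is confined to the poles.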
The finite element space $V_h^{k}$ consists of continuous piece-wise $Q_1$ or $Q_2$ functions defined on $\mathcal{T}_h^\Gamma$. We solve the following linear system: find $u_h\in V_h^{k}$ such that $$\begin{aligned} \label{linear_system} \int_{\Gamma_h} \nabla_{\Gamma_h} u_h\cdot\nabla_{\Gamma_h}v_h+ \int_{\Gamma_h}u_h v_h+{s_h( u_h, v_h)} = \left(f^e, v_h\right)_{\Gamma_h} \,,\qquad\, \forall v_h \in V_h^{k} \end{aligned}$$ where the term $s_h$ represents one of the stabilizations from Section [2.3](#sec:tracefem_stab){reference-type="ref" reference="sec:tracefem_stab"}. *Step 3* (`ESTIMATE&MARK&REFINE`). Fix a threshold $0<\theta<1$. Using the discrete solution $u_h$, we compute the indicator $\eta(S_T)$, $$\begin{aligned} \label{numerical_indicator} \eta^2(S_T)=\|\llbracket \nabla u_h \rrbracket\|^2_{L^2(\partial S_T\cap \omega_h)}+h^2_{S_T}\|f^e+\Delta_{\Gamma_h} u_h-u_h\|^2_{L^2(T)} + s_{S_T}^\ast(u_h,u_h)\end{aligned}$$ on each intersected cell $S_T\in \mathcal{T}_h^\Gamma$. Next, we determine the set $\mathcal{T}_h^\theta\subset \mathcal{T}_h^\Gamma$ of smallest cardinality such that $$\begin{aligned} \label{fraction} \sum_{S_T \in \mathcal{T}_h^\theta} \eta^2(S_T) > \theta \sum_{S_T \in \mathcal{T}_h^\Gamma} \eta^2(S_T) \end{aligned}$$ and, finally, refine the cells in $\mathcal{T}_h^\theta$ uniformly. This completes the first cycle. At the beginning of the next cycle the new mesh $\mathcal{T}_h$ of the domain $\Omega$, refined near $\Gamma$, is available and we proceed to Step 1. ## Unfitted quadratures and other implementation details The adaptive stabilized TraceFEM scheme of Section [4.2](#sec:adaptiveTrace){reference-type="ref" reference="sec:adaptiveTrace"} was implemented in the Finite Element library **deal.II** [@dealII95; @dealii2019design]. Since the method is not standard, we start with discussing some implementation details.
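The bulk criterion [\[fraction\]](#fraction){reference-type="eqref" reference="fraction"} amounts to a greedy selection of cells by decreasing $\eta^2$; a minimal sketch (the dictionary-based interface is our own simplification):

```python
def mark_cells(eta2, theta):
    """Return the smallest-cardinality set of cells whose accumulated squared
    indicators exceed the fraction theta of the total (Doerfler-type marking).
    eta2: dict mapping cell id -> eta(S_T)^2; theta: threshold in (0, 1)."""
    total = sum(eta2.values())
    marked, acc = set(), 0.0
    # adding the largest indicators first yields the smallest such set
    for cell, e2 in sorted(eta2.items(), key=lambda kv: kv[1], reverse=True):
        marked.add(cell)
        acc += e2
        if acc > theta * total:
            break
    return marked

eta2 = {"S1": 4.0, "S2": 1.0, "S3": 0.25}
print(mark_cells(eta2, 0.7))  # {'S1'}: 4.0 > 0.7 * 5.25
print(mark_cells(eta2, 0.9))  # S1 alone no longer suffices; S2 is added
```

Sorting makes the greedy sweep produce the set of smallest cardinality, since any other set satisfying [\[fraction\]](#fraction){reference-type="eqref" reference="fraction"} can be replaced cell-by-cell with larger indicators without increasing its size.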
- The degrees of freedom of the level-set function exist across the entire mesh domain, whereas the degrees of freedom of the solution are confined to the colored, active domain of intersected cells. In principle, the discrete level-set approximation could have a different order or even an independent mesh from that of the solution. However, for the sake of convenience, we utilized the same triangulation for both the solution and the level-set in our implementation. - Given that the mesh contains hanging nodes, ensuring the continuity of the FE spaces defined on it is necessary for an $H^1$-conforming method. This continuity requirement extends to both the discrete level-set and the discrete solution. To achieve this, we express the continuity condition for each hanging node as a linear combination involving local degrees of freedom, which is subsequently incorporated into the linear system. We apply a similar post-processing technique to the discrete level-set function, defined by a point-wise Lagrange interpolant, to eliminate any gaps in the discrete surface $\Gamma_h$. - The implementation of [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} requires the integration of polynomial functions over the intersections of the implicit surface $\Gamma_h$ with end cells from $\mathcal T_h^\Gamma$. This procedure is non-standard, and our implementation relies on the dimension-reduction approach detailed in [@saye2015high]. Notably, this algorithm is purpose-built for quadrilaterals and can accommodate higher-order approximations of $\Gamma_h$. - Implementation of the stabilization forms $s_h^{NV}$ and $s_h^{JF}$ requires standard, e.g. Gauss--Lobatto, quadratures on a three-dimensional cube $S_T$ and on a two-dimensional square $F$, respectively.
- Computation of the indicator [\[total_indicator\]](#total_indicator){reference-type="eqref" reference="total_indicator"} involves the same numerical integration procedures as used for [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"}. - Although the forcing term $f^e$ is not an $L^2(\Gamma_h)$ function, the integral on the right-hand side of [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} remains well-defined, provided that none of the surface quadrature nodes intersect the north or south poles when projected onto $\Gamma$. - In the course of adaptive refinement some of the inactive cells and some active cells not from $\mathcal T_h^\theta$ are refined so that the mesh remains graded. ## Uniform refinement The first example serves to motivate the adaptivity and to test our implementation of TraceFEM for the $V_h^1$ and $V_h^2$ ambient spaces. We choose the exact solutions [\[polar\]](#polar){reference-type="eqref" reference="polar"}, $u_\lambda\in H^{1+\lambda}(\Gamma)$ with $\lambda=1.0$, $\lambda=0.7$ and $\lambda=0.4$, and solve the discrete problems [\[FEM\]](#FEM){reference-type="eqref" reference="FEM"} with $k=1$, $s_h( u, v)=s_h^{NV}( u, v)$, and stabilization parameter $\rho_S=10h_S^{-1}$. The active domain $\omega_h$ is refined uniformly and the obtained solutions $u_h\in V_h^1$ are compared with the normal extension $u^e$ of the exact solution $u\in H^{1+\lambda}(\Gamma)$. We evaluate the following surface error norms, $$\begin{aligned} \label{error_norms} \|u_h-u^e\|_{L^2(\Gamma_h)}\,,\qquad \|\nabla_{\Gamma_h}{u_h}-(\nabla_{\Gamma}{}u)^e\|_{L^2(\Gamma_h)}\, \end{aligned}$$ and the results are presented in Figure [\[fig:uniform\]](#fig:uniform){reference-type="ref" reference="fig:uniform"}.
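Given the surface quadrature from Step 1, the norms [\[error_norms\]](#error_norms){reference-type="eqref" reference="error_norms"} reduce to weighted sums over all cuts $T=\Gamma_h\cap S_T$; a schematic helper (the list-based interface and names are ours):

```python
import math

def surface_l2_norm(quad_w, diff_vals):
    """|| v ||_{L^2(Gamma_h)} ~ sqrt(sum_q w_q v(x_q)^2), with surface-quadrature
    weights and point values of v = u_h - u^e accumulated over all cuts."""
    return math.sqrt(sum(w * v * v for w, v in zip(quad_w, diff_vals)))

# constant error 2 over a quadrature of total surface area 0.25 + 0.75 = 1
print(surface_l2_norm([0.25, 0.75], [2.0, 2.0]))  # 2.0
```

The gradient error norm is computed the same way, with $v$ replaced by the pointwise Euclidean norm of $\nabla_{\Gamma_h}u_h-(\nabla_\Gamma u)^e$.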
Optimal rates are observed for $\lambda=1.0$, which corresponds to $u\in H^2(\Gamma)$, but, as $\lambda$ decreases, the rates deteriorate in accordance with the regularity $u_\lambda\in H^{1+\lambda}$ of the problem. Asymptotically, the rate $h^{\lambda}$ is attained in the energy norm, as would be expected for fitted FEMs. We conducted the same uniform refinement test using the gradient-jump face stabilization $s_h^{JF}$, and the results closely resemble those shown in Figure [\[fig:uniform\]](#fig:uniform){reference-type="ref" reference="fig:uniform"}. Therefore, we have opted not to include an additional plot. Next, we repeated the test for the $Q_2$ family with $k=2$ in $V_h^k$, employing the stabilizations $s_h^{NV}$ and $s_h^{JF2}$. When $\lambda=1$, the convergence rates are optimal. In cases of low regularity where $\lambda<1$, the rate of convergence approximates $h^{1+\lambda}$ in the energy norm corresponding to a finite element space of second degree.

## Efficiency indexes {#sec:efficiency}

In the numerical experiments we consider different notions of efficiency. As usual, local efficiency indexes are computed for active cells $S_T \in \mathcal{T}_h^\Gamma$. These indexes gauge how close the actual error $e_h=\nabla_{\Gamma_h}{u_h}-(\nabla_{\Gamma}u)^e$ is to the error indicator $\eta$ on the cell. Accumulated over all cells, a reliable indicator estimates the error from above. The indicator is said to be efficient if the ratio of the indicator to the error, i.e. the efficiency index, is bounded from above independently of the discretization level. We will consider three efficiency indexes which differ in the patch of neighboring cells contributing to the local error $e_h$ for the cell $S_T$.
To compute the indexes, one maximizes the following ratios over all cuts $T=S_T\cap\Gamma_h$, $$\begin{aligned} \label{def::efficiency} I_1=\max\limits_{T}\,\frac{\eta(S_T)}{ \|\xi_h\|_{\Gamma_h\cap\omega_{S_T}}}\,,\qquad I_2=\max\limits_{T}\,\frac{\eta(S_T)}{ \|\xi_h\|_{\Gamma_h\cap\omega_T }}\,,\qquad I_3=\max\limits_{T}\,\frac{\eta_R(T)}{ \|\xi_h\|_{\Gamma_h\cap S_T} }. \end{aligned}$$ Here $\xi_h=\nabla_{\Gamma_h}{u_h}-(\nabla_{\Gamma}u)^e$ is the energy error, $\omega_{S_T}$ is the patch of all active cells from $\omega_h$ which share at least a vertex with the cell $S_T$, and $\omega_T$ is the patch of all active cells from $\omega_h$ which share with the cell $S_T$ a face intersected by $\Gamma_h$. Clearly, the efficiency index $I_3$ accumulates the error over a single cell $S_T$ only, and it is the sharpest way to characterize the indicator. The notion of efficiency given by $I_3$ is too stringent, as it is known that the corresponding index blows up numerically even for a fitted FEM. At the same time, the theory of a fitted adaptive FEM guarantees that the indicator is efficient if the error is accumulated over a patch of neighbors. This fact suggests that the indexes $I_1$ and $I_2$ are reasonable extensions of a similar notion to the unfitted finite element method. The distinction between $I_1$ and $I_2$ lies in their dependence on the bulk mesh and the surface: in the former, the patch is based on the connectivity of the bulk cells $S_T$, while in the latter, it relies on the connectivity of the intersected cuts $T$. **Remark 2**. *Note that the error part in [\[def::efficiency\]](#def::efficiency){reference-type="eqref" reference="def::efficiency"} does not include the stabilization $s_h$ because we are interested in the surface error for a solution to a surface PDE. This is in contrast to the indicator $\eta(S_T)$ and to the natural discrete norm of [\[FEM\]](#FEM){reference-type="eqref" reference="FEM"}, which include the stabilization $s_h$.
One may question whether adding the stabilization $s_h(u_h-u^e,u_h-u^e)$ to the denominator of the indicators [\[def::efficiency\]](#def::efficiency){reference-type="eqref" reference="def::efficiency"} can lead to a notion of efficiency which is more suitable for TraceFEM. As we found in our numerical experiments, such an alteration does not change the main conclusions. For these reasons, we present the numerical results using the efficiency indexes as defined in [\[def::efficiency\]](#def::efficiency){reference-type="eqref" reference="def::efficiency"}.*

## Efficiency and Reliability for the $Q_1$ elements {#sec:Q1}

In this experiment, we assess the reliability and the efficiency of the indicator [\[numerical_indicator\]](#numerical_indicator){reference-type="eqref" reference="numerical_indicator"} using the $Q_1$ family of polynomials [\[q1_family\]](#q1_family){reference-type="eqref" reference="q1_family"}. To this end, we choose a low-regularity solution [\[polar\]](#polar){reference-type="eqref" reference="polar"}, $u\in H^{1+\lambda}(\Gamma)$ with $\lambda=0.4$, of the Laplace--Beltrami problem [\[LBw\]](#LBw){reference-type="eqref" reference="LBw"} posed on the unit sphere. We run the adaptive TraceFEM stabilized by $s_h=s_h^{JF}$ with $\sigma_F=10$ and by $s_h=s_h^{NV}$ with $\rho_S=10h_S^{-1}$, and evaluate the surface errors [\[error_norms\]](#error_norms){reference-type="eqref" reference="error_norms"}. The numerical results, presented in the top panel of Figure [\[fig:main_Q1\]](#fig:main_Q1){reference-type="ref" reference="fig:main_Q1"}, confirm the a posteriori analysis conducted in Section [3](#s:analysis){reference-type="ref" reference="s:analysis"}: optimal rates are observed with both stabilizations, $s_h^{NV}$ and $s_h^{JF}$.
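In implementation terms, each index in [\[def::efficiency\]](#def::efficiency){reference-type="eqref" reference="def::efficiency"} is a maximum over cut cells of the cell indicator divided by the error accumulated over the associated patch. A minimal sketch with hypothetical per-cell data (the containers `eta`, `err2` and `patches` are purely illustrative, not part of our implementation):

```python
import math

def efficiency_index(eta, err2, patches):
    """Max over cut cells T of eta(S_T) / ||error||_{patch(T)}, where
    err2[c] is the squared error contribution of cell c and
    patches[T] lists the cells forming the patch associated with T."""
    return max(eta[T] / math.sqrt(sum(err2[c] for c in patches[T]))
               for T in eta)

# Toy data: three intersected cells with hypothetical indicator values,
# squared error contributions and neighbour patches.
eta = {0: 0.30, 1: 0.20, 2: 0.25}
err2 = {0: 0.04, 1: 0.01, 2: 0.02}
patches_I1 = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}  # vertex neighbours
patches_I3 = {0: [0], 1: [1], 2: [2]}              # single cell only

print(efficiency_index(eta, err2, patches_I1))
print(efficiency_index(eta, err2, patches_I3))     # sharper, hence larger
```

The single-cell variant always dominates the patch-based ones, reflecting why the notion behind $I_3$ is the most stringent.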
Furthermore, in the plots of the bottom panel in Figure [\[fig:main_Q1\]](#fig:main_Q1){reference-type="ref" reference="fig:main_Q1"}, we evaluate the efficiency indexes [\[def::efficiency\]](#def::efficiency){reference-type="eqref" reference="def::efficiency"} corresponding to several notions of efficiency discussed in Section [4.5](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}. The indexes $I_1$ and $I_2$ suggest the efficiency of the indicators for the $Q_1$ adaptive TraceFEM.

## Efficiency and Reliability for the $Q_2$ elements {#sec:Q2}

We repeated the experiment for the $Q_2$ TraceFEM, employing the discrete space $V_h^2$ for both the solution $u_h$ and the surface approximation $\Gamma_h$, following the same adaptive algorithm outlined in Section [4.2](#sec:adaptiveTrace){reference-type="ref" reference="sec:adaptiveTrace"}. In this case, for the gradient-jump face stabilization, the $s_h^{JF}$ form was replaced by the $s_h^{JF2}$ form with $\sigma_F=10$. As shown in the top panel of Figure [\[fig:main_Q2\]](#fig:main_Q2){reference-type="ref" reference="fig:main_Q2"}, the $Q_2$ TraceFEM with gradient-jump face stabilization exhibits optimal convergence rates, while the $Q_2$ TraceFEM with normal-gradient volume stabilization shows suboptimal rates. Unlike the $Q_1$ scenario, the efficiency indexes in the $Q_2$ case exhibit linear growth with the number of degrees of freedom, as depicted in the bottom panel of Figure [\[fig:main_Q2\]](#fig:main_Q2){reference-type="ref" reference="fig:main_Q2"}.

### Effect of the stabilization parameter in $s_h^{JF2}$ {#sec:omitting}

It was observed in [@larson2020stabilization] that the performance of the stabilization $s_h^{JF2}$ defined in [\[s_jf2\]](#s_jf2){reference-type="eqref" reference="s_jf2"} is sensitive to the choice of the stabilization parameters.
We would like to demonstrate how different values of $\sigma_F$ affect the adaptive TraceFEM with the indicator [\[numerical_indicator\]](#numerical_indicator){reference-type="eqref" reference="numerical_indicator"}. We did not observe improvements in efficiency by tuning the parameter $\sigma_F$ in Figure [\[fig:main_Q2\]](#fig:main_Q2){reference-type="ref" reference="fig:main_Q2"}, where we used $\sigma_F=10$. To illustrate this point, we present the results of the adaptive TraceFEM for two extreme values of the stabilization parameter, $\sigma_F=0.1$ and $\sigma_F=1000$, in Figure [\[fig:Q2_stab_param\]](#fig:Q2_stab_param){reference-type="ref" reference="fig:Q2_stab_param"}. Similar to Figure [\[fig:main_Q2\]](#fig:main_Q2){reference-type="ref" reference="fig:main_Q2"}, the convergence rates are nearly optimal for both extreme values. However, when $\sigma_F=1000$, achieving the same level of accuracy requires more degrees of freedom than in the case $\sigma_F=0.1$. This behavior of the errors is consistent with what is typically observed during uniform refinement. In the adaptive setting, the indicator $\eta$ includes the stabilization, and when $\sigma_F=1000$, the estimator focuses on reducing the contribution of the stabilization $s_h(u_h,u_h)$ to the error functional $|\!|\!| e_h |\!|\!|$, as illustrated in the right panels of Figure [\[fig:Q2_stab_param\]](#fig:Q2_stab_param){reference-type="ref" reference="fig:Q2_stab_param"}.

# Conclusions {#s_concl}

In this paper, we explore the application of adaptive stabilized TraceFEM for the first time. We focus on solving an elliptic problem on a fixed surface using the two lowest-order continuous finite element spaces, based on $Q_1$ and $Q_2$ elements. For each family, we investigate both the gradient-jump face and the normal-gradient volume stabilizations. Our analysis demonstrates that the error indicator in the proposed adaptive TraceFEM is reliable, and our numerical tests confirm the theoretical findings.
Specifically, for $Q_1$ elements, a reasonable choice for low-regularity solutions, we establish a robust and practical adaptive stabilized TraceFEM scheme. However, when using $Q_2$ elements, we observe suboptimal convergence rates if the normal-gradient stabilization is employed. In this case, the efficiency indexes grow proportionally with the number of active degrees of freedom. Another significant contribution of this paper relates to the practical implementation of the proposed indicator. Rather than computing gradient jumps along one-dimensional curvilinear edges between surface patches, which can be computationally intensive due to the implicit surface description in TraceFEM, we evaluate gradient jumps on two-dimensional faces between bulk cells. This approach simplifies the implementation of the indicator. In conclusion, we recommend caution when using the $Q_2$ element in adaptive stabilized TraceFEM schemes, while the $Q_1$ element provides a highly robust adaptive method.

## Acknowledgments {#acknowledgments .unnumbered}

The author T.H. was partially supported by the National Science Foundation Awards DMS-2028346, OAC-2015848, EAR-1925575, and by the Computational Infrastructure in Geodynamics initiative (CIG), through the NSF under Awards EAR-0949446, EAR-1550901, EAR-2149126 via the University of California -- Davis. The author M.O. was partially supported by the National Science Foundation under Award DMS-2309197. The author V.Y. was partially supported by the National Science Foundation Awards OAC-2015848 and EAR-1925575. Clemson University is acknowledged for the generous allotment of compute time on the Palmetto cluster.

[^1]: School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634-0975, USA, heister\@clemson.edu, vyushut\@clemson.edu.

[^2]: Department of Mathematics, University of Houston, Houston, Texas 77204-3008, USA, maolshanskiy\@uh.edu, www.math.uh.edu/~molshan
--- abstract: | In this paper we construct a two-parameter version of spectral density functions and Novikov-Shubin invariants on fibre bundles. The aim of this approach is to gain a better understanding of how the near-zero spectrum of the Hodge Laplace operators on the fibre and the base of a fibre bundle contribute separately to the near-zero spectrum of the Laplace operators of the total space. We show that this two-parameter generalisation of the classical spectral density function still satisfies several invariance properties. As an example, we compute it explicitly for the three-dimensional Heisenberg group. author: - "Tim Höpfner[^1]" bibliography: - bibliography.bib title: | Two-Parameter Novikov-Shubin Invariants\ for Fibre Bundles --- # Introduction Given a product space $M=M_1\times M_2$, we have a good understanding of the near-zero spectrum of the Hodge Laplace operators on $M$ in terms of the near-zero spectra on $M_1$ and $M_2$. Indeed, in this case we get results based on the Künneth formula, see e.g., the book of W. Lück [@LckL2 §2.1.3]. Similar approaches for non-trivial fibre bundles, for example using the Serre spectral sequence, do not seem to work out as nicely. To understand this problem better in the case of fibre bundles, we define two-parameter versions $\mathcal{G}_k\colon \mathbb{R}_+\times \mathbb{R}_+\to [0,\infty]$ of spectral density functions and the Novikov-Shubin invariants. The aim of this generalisation of the classical spectral density functions $\mathcal{F}_k$ of the total space is to detect the individual contributions from the base and from the fibre and to obtain finer invariants for (non-trivial) bundles. We prove several invariance properties of these generalisations. First, we show that for a fixed connection the numbers are invariant under change of compatible metrics: **Theorem 1**. 
* Let $G\curvearrowright(M\to B,\nabla,g)$ be a fibre bundle with fixed connection $\nabla$ and compatible free proper cocompact group action by a group $G$. Then the dilatational equivalence class of the spectral density function underlying the two-parameter Novikov-Shubin numbers $$\mathcal{G}_k(M\to B, \nabla) = \mathcal{G}_k(M\to B, \nabla, g)$$ does not depend on the choice of $G$-invariant $\nabla$-compatible Riemannian metric $g$. *

Then, we prove that it is further invariant under certain compatible fibre homotopy equivalences:

**Theorem 3** ([\[Thm_2NSI_fibre_homot_invar\]](#Thm_2NSI_fibre_homot_invar){reference-type="ref" reference="Thm_2NSI_fibre_homot_invar"}). * If there is a $G$-equivariant fibre homotopy equivalence between suitable bundles $M\to B$ and $M'\to B$ such that $\nabla = f^*\nabla'$, then their spectral density functions are dilatationally equivalent, $$\mathcal{G}_k(M'\to B, \nabla') \sim \mathcal{G}_k(M\to B, f^*\nabla').$$ *

Lastly, we show that the two-parameter Novikov-Shubin numbers are invariant under change of connection as long as the fibre is shrunk at least as fast as the base:

**Theorem 5** ([\[Thm_2NSI_half_metric_indep\]](#Thm_2NSI_half_metric_indep){reference-type="ref" reference="Thm_2NSI_half_metric_indep"}). * Let $G$ be a group and $M\to B$ be equipped with two pairs of connection and compatible Riemannian metric such that $G\curvearrowright(M\to B,\nabla,g)$ and $G\curvearrowright(M\to B,\nabla',g')$ are Riemannian fibre bundles with connection and compatible free proper cocompact $G$-action. Then the two-parameter spectral density functions restricted to the subspace $\{\nu\leq\mu\}$ are dilatationally equivalent, $$\mathcal{G}_k(M,\nabla,g)|_{\{\nu\leq\mu\}}\sim \mathcal{G}_k(M,\nabla',g')|_{\{\nu\leq\mu\}}.$$ *

Then, as an example, we compute these numbers explicitly for the three-dimensional Heisenberg group. This fits into the recent study of fibre bundles by means of invariants, such as work based on J.-M. Bismut and J. Cheeger's study [@BismutCheeger] of higher torsion invariants on fibre bundles using adiabatic limits, J.-M.
Bismut's study [@Bismut] of an Atiyah-Singer theorem for families of Dirac operators, and characteristic classes of fibre bundles such as the Morita-Miller-Mumford classes named after D. Mumford [@Mumford], E. Y. Miller [@Miller] and S. Morita [@Morita]. This new definition is different from, but similar in spirit to, the study of adiabatic limits of fibre bundles.

# Two-Parameter Novikov-Shubin Numbers

## Scaling of Fibre Bundles

Let $(M,g)$ be a non-compact Riemannian manifold with a cocompact free proper group action $G\curvearrowright M$ acting by isometries. The spectral density function $\mathcal{F}(d)$ of the exterior derivative $d$ is defined in terms of the upper Laplacian $\Delta^k_\mathrm{up}= d^*d\curvearrowright L^2\Omega^k(M,g)$ using the von Neumann trace of the group von Neumann algebra $\mathcal{N}G$ of $G$ by $$\mathcal{F}_k(M,g)(\lambda) = \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,\lambda^2]}(\Delta^k_\mathrm{up}(M,g)) = \dim_{\mathcal{N}G} \mathop{\mathrm{im}}\chi_{[0,\lambda^2]}(\Delta^k_\mathrm{up}(M,g)).$$ An interesting observation is that it can instead be defined by rescaling the manifold and looking at a fixed interval of the spectrum. For $g_\lambda= \lambda^2 g$ the Laplace operators for the different metrics satisfy $\Delta^k_\mathrm{up}(M, g_\lambda) = \lambda^{-2} \Delta^k_\mathrm{up}(M,g)$. Therefore, we obtain $$\mathcal{F}_k(M,g)(\lambda) = \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,\lambda^2]}(\Delta^k_\mathrm{up}(M,g)) = \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,1]}(\Delta_\mathrm{up}^k(M, g_\lambda)) = \mathcal{F}_k(M, g_\lambda)(1).$$ If $M$ is the total space of a fibre bundle and the fibre bundle structure is compatible with the $G$-action, then we can scale $M$ at different speeds in the fibre and base directions. This way we can define a refined version of the Novikov-Shubin invariants.
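The rescaling identity can be illustrated in a toy finite-dimensional model, where the von Neumann trace of a spectral projection is replaced by a plain eigenvalue count (a sketch only; the actual operators act on infinite-dimensional $L^2$-spaces, and the eigenvalue list below is hypothetical):

```python
# Toy model: the "spectrum" of Delta_up is a finite list of eigenvalues;
# the von Neumann trace of a spectral projection becomes an eigenvalue count.
eigs = [0.0, 0.003, 0.02, 0.5, 1.7, 4.0]

def F(lam, spectrum=eigs):
    """Spectral density: trace of chi_[0, lam^2](Delta_up)."""
    return sum(1 for e in spectrum if e <= lam ** 2)

def F_rescaled(lam, spectrum=eigs):
    """Count eigenvalues of lam^{-2} Delta_up, i.e. of Delta_up(M, g_lam),
    lying in the fixed interval [0, 1]."""
    return sum(1 for e in spectrum if e / lam ** 2 <= 1)

# F_k(M, g)(lam) = F_k(M, g_lam)(1) in this finite model:
assert all(F(lam) == F_rescaled(lam) for lam in (0.05, 0.2, 1.0, 2.5))
```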
More precisely, let $M \xrightarrow{\pi} B$ be a fibre bundle with fibres $\{F_b = \pi^{-1}(b)\}_{b\in B}$, where $(M,g)$ is a Riemannian manifold with Riemannian metric $g$. (Without loss of generality, we can assume the base, the fibres and thus also the total space to be connected.) At every point $x\in M$ with $\pi(x)=b$, we have the subspace $$T_x F_b = \ker D_x\pi \subset T_x M,$$ giving rise to the vertical subbundle $VM\subset TM$ of the tangent bundle by $$VM = T_\bullet F_\bullet = \ker(\pi_\ast) \subset TM.$$ Choosing a connection $\nabla$ compatible with $g$ on the fibre bundle is equivalent to specifying an orthogonal complement $HM$ of $VM$ in $TM$, so that the tangent space $TM$ decomposes as $$TM \cong_\nabla VM \perp_g HM.$$ The bundle $HM$ is called the horizontal subbundle of $TM$. The Riemannian metric decomposes fibrewise into a vertical and a horizontal contribution, $$g_x = g_{x,V} + g_{x,H},$$ where $g_{x,V}$ is supported in $V_xM\otimes V_xM$ and $g_{x,H}$ is supported in $H_xM\otimes H_xM$. In the following, we denote the situation described here by the triple $(M\to B, \nabla, g)$ and call such a triple a Riemannian fibre bundle with connection.

**Definition 1**. We call a cocompact free proper group action $G\curvearrowright M$ by a (discrete) group $G$ compatible with this structure, and write $G\curvearrowright(M\to B, \nabla, g)$, if the Riemannian metric $g$ is $G$-invariant and there is a group action $G'\curvearrowright B$ together with a surjective group homomorphism $\varphi\colon G\twoheadrightarrow G'$ such that the projection $M\xrightarrow{\pi} B$ is $\varphi$-equivariant[^2].

**Example 3**. The typical example of such a Riemannian fibre bundle with connection and compatible group action is obtained by starting with a compact fibre bundle $F\to M\to B$ where $F$, $M$ and $B$ are connected. The universal covering $\widetilde{M}$ of $M$ can be considered as a fibre bundle $\widetilde{M}\to\widetilde{B}$ over the universal covering of the base $B$ with some fibres $F'_\bullet$ (in general, these are not the universal coverings of the fibres $F_\bullet$). On the universal coverings, we have the action of $\pi_1(M)$ on $\widetilde{M}$ and the action of $\pi_1(B)$ on $\widetilde{B}$, compare the following diagram: $$\begin{tikzcd}[ampersand replacement=\&, row sep=10, column sep=20] \& \pi_1(M) \ar[r, "\varphi"] \ar[d, phantom, "\curvearrowright" description] \& \pi_1(B) \ar[d, phantom, "\curvearrowright" description]\\ F' \ar[r] \& \widetilde{M} \ar[r]\ar[ddd]\& \widetilde{B} \ar[ddd] \\ \\ \\ F \ar[r] \& M \ar[r] \& B \end{tikzcd}$$ Here, the long exact sequence of homotopy groups for the fibre bundle $F\to M\to B$, $$\cdots \to \pi_2(B) \to \pi_1(F) \to \pi_1(M) \overset{\varphi}{\twoheadrightarrow} \pi_1(B) \to 0,$$ yields a group homomorphism $\varphi\colon \pi_1(M)\to \pi_1(B)$ that is surjective since $\pi_0(F)$ is trivial. The elements in the kernel of $\varphi$ are in the image of $\pi_1(F)\to \pi_1(M)$ and act fibrewise on each fibre $F'_b$ for $b\in \widetilde{B}$ and the projection $\widetilde{M}\to \widetilde{B}$ is $\varphi$-equivariant.

**Definition 5**. Let $(M\to B, \nabla, g)$ be a Riemannian fibre bundle with connection.
For smooth positive functions $s_H,s_V \in \mathcal{C}^\infty(M,\mathbb{R}_+)$ we define the Riemannian metric $g^{s_H,s_V}$ on $M$ by $$x\mapsto g^{s_H,s_V}_x = s_H(x)^2 g_{x,H} + s_V(x)^2 g_{x,V}.$$ In particular, if $s_H\equiv \overline\mu > 0$ and $s_V\equiv \overline\nu > 0$ are constant functions, this defines $$g^{\overline\mu,\overline\nu}=g^{s_H,s_V} = \overline\mu^2 g_H + \overline\nu^2 g_V.$$

We use this structure to define a refined version of the spectral density function depending on two parameters in place of the classical parameter $\lambda$.

**Definition 7**. Let $G\curvearrowright(M\to B,\nabla,g)$ be a Riemannian fibre bundle with connection and compatible $G$-action. Then, using the previous definition, we define the two-parameter spectral density function $\mathcal{G}_k(M\to B, \nabla, g)\colon \mathbb{R}_+\times\mathbb{R}_+ \to [0,\infty]$ by $$\begin{aligned} \mathcal{G}_k(M\to B, \nabla, g)(\overline\mu,\overline\nu) &= \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,1]}(\Delta^k_\mathrm{up}(M, g^{\overline\mu,\overline\nu})) = \mathcal{F}_k(M,g^{\overline\mu,\overline\nu})(1).\end{aligned}$$ We call two such functions $\mathcal{G}, \mathcal{G}'\colon \mathbb{R}_+\times \mathbb{R}_+\to [0,\infty]$ dilatationally equivalent if there exists a constant $C>0$ such that for all $\overline\mu,\overline\nu\in \mathbb{R}_+$, $$\mathcal{G}(C^{-1}\overline\mu,C^{-1}\overline\nu) \leq \mathcal{G}'(\overline\mu,\overline\nu) \leq \mathcal{G}(C\overline\mu, C\overline\nu).$$ In this case we write $\mathcal{G}\sim \mathcal{G}'$.
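For a toy product model this definition becomes concrete: scaling the fibre metric by $\overline\nu^2$ and the base metric by $\overline\mu^2$ scales the corresponding eigenvalue contributions by $\overline\nu^{-2}$ and $\overline\mu^{-2}$, and the trace becomes an eigenvalue count. A sketch with hypothetical eigenvalue lists (not an actual von Neumann trace):

```python
import itertools

# Hypothetical eigenvalues of the Laplacian on the fibre and on the base;
# on (F, nu^2 g_F) x (B, mu^2 g_B) the eigenvalues on functions are
# a / nu^2 + b / mu^2 for fibre eigenvalue a and base eigenvalue b.
fibre_eigs = [0.0, 0.01, 1.0]
base_eigs = [0.0, 0.04, 2.0]

def G(mu, nu):
    """Two-parameter spectral density (toy): count eigenvalues <= 1."""
    return sum(1 for a, b in itertools.product(fibre_eigs, base_eigs)
               if a / nu ** 2 + b / mu ** 2 <= 1)

def F(lam):
    """Classical spectral density of the product: count eigenvalues <= lam^2."""
    return sum(1 for a, b in itertools.product(fibre_eigs, base_eigs)
               if a + b <= lam ** 2)

# On the diagonal mu = nu = lam the classical function is recovered:
assert all(G(lam, lam) == F(lam) for lam in (0.1, 0.3, 1.0, 2.0))
print(G(0.5, 0.2), G(0.2, 0.5))  # base and fibre scaled at different speeds
```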
The fact that we chose the value one for the upper end of the interval is not of importance here, in the sense that the dilatational equivalence class of $\mathcal{G}$ does not depend on the upper end.

**Lemma 9**. * The dilatational equivalence class is independent of the right end chosen for the interval, that is, for all $\lambda_0>0$, $$\mathcal{G}_k(M\to B, \nabla, g)(\overline\mu,\overline\nu) \sim \left( (\overline\mu,\overline\nu) \mapsto \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,\lambda_0]}(\Delta^k_\mathrm{up}(M, g^{\overline\mu,\overline\nu}))\right).$$ *

*Proof.* This follows directly with constant $C=\sqrt{\lambda_0}$ since $$\begin{aligned} \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,\lambda_0]}(\Delta^k_\mathrm{up}(M, g^{\overline\mu,\overline\nu})) &= \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,1]}(\lambda_0^{-1} \Delta^k_\mathrm{up}(M, g^{\overline\mu,\overline\nu})) \\ &= \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,1]}(\Delta^k_\mathrm{up}(M, \lambda_0 g^{\overline\mu,\overline\nu})) \\ &= \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,1]}(\Delta^k_\mathrm{up}(M, g^{\sqrt{\lambda_0}\cdot \overline\mu, \sqrt{\lambda_0} \cdot \overline\nu})) \\ &= \mathcal{G}_k(M\to B, \nabla, g)(\sqrt{\lambda_0}\cdot\overline\mu,\sqrt{\lambda_0}\cdot\overline\nu). \qedhere\end{aligned}$$ ◻

Instead of having two truly independent parameters $\overline{\mu}$ and $\overline{\nu}$, we would like to consider the two parameters as different speeds of scaling the manifold.
Therefore, we replace these two parameters with two functions of the same variable $\lambda$, governing how fast the fibre and the base, respectively, are scaled as $\lambda\searrow 0$.

**Definition 11**. [\[D_2ParamNSI\]]{#D_2ParamNSI label="D_2ParamNSI"} Let $G\curvearrowright(M\to B,\nabla,g)$ be a Riemannian fibre bundle with connection and compatible $G$-action. Let $\mu,\nu\colon \mathbb{R}_{\geq 0}\to \mathbb{R}_{\geq 0}$ be monotonically increasing continuous functions with $\mu(0)=0=\nu(0)$. Denoting $\mathcal{G}_k = \mathcal{G}_k(M\to B, \nabla ,g)$ we define the two-parameter Novikov-Shubin numbers by $$\begin{aligned} \alpha_k(M\to B,\nabla,g)(\mu,\nu) &= \alpha\big(\ \lambda\mapsto \mathcal{G}_k(\mu(\lambda),\nu(\lambda))\ \big) \\ &=\liminf_{\lambda\searrow 0}\frac{\log(\mathcal{G}_k(\mu(\lambda),\nu(\lambda)) - b^{(2)}(d_{k+1}))}{\log(\lambda)}. \end{aligned}$$

Recall here that $b^{(2)}(d_{k+1})$ measures the size of the kernel of $d_{k+1}$ and is metric invariant. Thus, we extend the definition of $\mathcal{G}_k$ formally by $\mathcal{G}_k(\mu(0),\nu(0))= \mathcal{G}_k(0,0)=b^{(2)}(d_{k+1})$.

**Remark 13**. This two-parameter function generalises the usual spectral density function. Indeed, if[^4] $\mu=\nu=\lambda$, then $g^{\lambda,\lambda} = \lambda^2 g = g_\lambda$ independently of the connection $\nabla$ chosen. Hence, $$\mathcal{G}_k(M\to B, \nabla, g)(\lambda,\lambda) = \mathop{\mathrm{tr}}_{\mathcal{N}G} \chi_{[0,1]}(\Delta^k_\mathrm{up}(M, g_\lambda)) = \mathcal{F}_k(M,g)(\lambda)$$ is the classical spectral density function of $(M,g)$ and therefore $$\alpha_k(M\to B, \nabla, g)(\lambda,\lambda) = \alpha_k(M)$$ recovers the Novikov-Shubin invariants.

**Example 15**. In the simplest case of a product manifold $(M,g) = (F,g_F)\times (B,g_B)$ with the canonical connection $TM \cong_\nabla TF\perp TB$, for $\mu,\nu>0$ we have $$\begin{aligned} \mathcal{G}_k(F\times B, \nabla, g)(\mu,\nu) &= \mathcal{F}_k((F,\nu^2 g_F)\times (B,\mu^2 g_B))(1).\end{aligned}$$ By [@LckL2 Cor. 2.44], it is therefore dilatationally equivalent to $$\begin{aligned} \mathcal{G}_k(F\times B, \nabla, g)(\mu,\nu) &\sim \sum_{p+q=k} \mathcal{F}_p((F, \nu^2 g_F))(1)\cdot \mathcal{F}_q((B,\mu^2 g_B))(1) \\ &= \sum_{p+q=k} \mathcal{F}_p(F)(\nu) \cdot \mathcal{F}_q(B)(\mu).\end{aligned}$$ If $\mu = \lambda^r$ and $\nu = \lambda^s$, we can consider a limit as $\lambda\searrow 0$ in the spirit of the Novikov-Shubin invariants. We assume that all $L^2$-Betti numbers in this example vanish[^6]. Following the computation in W. Lück's book [@LckL2 Thm.
2.55 (3)], $$\begin{aligned} \alpha_k(F \times B, \nabla, g)(\mu,\nu) &= \liminf_{\lambda\searrow 0} \frac{\log(\mathcal{G}_k(F\times B, \nabla,g)(\lambda^r,\lambda^s))}{\log(\lambda)} \\ &= \liminf_{\lambda\searrow 0} \frac{\log(\mathcal{F}_k((F,\lambda^{2s} g_F)\times (B,\lambda^{2r} g_B))(1))}{\log(\lambda)} \\ &= \min_{0\leq p\leq k}\left\{ \begin{matrix} \alpha\left(\mathcal{F}_p(F)(\lambda^s) \cdot \mathcal{F}_{k-p}(B)(\lambda^r)\right),\\ \alpha\left(\mathcal{F}_{p+1}(F)(\lambda^s) \cdot \mathcal{F}_{k-p}(B)(\lambda^r)\right) \end{matrix} \right\} \\ &= \min_{0\leq p\leq k}\left\{ \begin{matrix} \alpha\left(\mathcal{F}_p(F)(\lambda^s)\right) + \alpha\left(\mathcal{F}_{k-p}(B)(\lambda^r)\right),\\ \alpha\left(\mathcal{F}_{p+1}(F)(\lambda^s)\right) + \alpha\left(\mathcal{F}_{k-p}(B)(\lambda^r)\right) \end{matrix} \right\} \\ &= \min_{0\leq p\leq k}\left\{ \begin{matrix} s\cdot\alpha_p(F) + r\cdot\alpha_{k-p}(B),\\ s\cdot\alpha_{p+1}(F) + r\cdot\alpha_{k-p}(B) \end{matrix} \right\}.\end{aligned}$$ In this case, we see that the contributions from the base and the fibre are scaled according to the chosen functions $\mu(\lambda)=\lambda^r$ and $\nu(\lambda)=\lambda^s$ as $\lambda\searrow 0$.

## Near Cohomological Approach

When studying the classical spectral density functions and Novikov-Shubin invariants, a useful approach in many cases is via the notion of near cohomology, introduced by M. Gromov and M. A. Shubin in [@GroShu]. We show that a similar approach can still be used in the two-parameter case.
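Returning to the product example above, the closed min-formula can be evaluated mechanically. The following minimal Python sketch is purely illustrative (the function name and the list encoding of the invariants are our assumptions, not notation from the text): given the Novikov-Shubin numbers of the factors and the scaling exponents $r,s$, it returns $\alpha_k(F\times B,\nabla,g)(\lambda^r,\lambda^s)$, assuming all $L^2$-Betti numbers vanish.

```python
def alpha_product(alpha_F, alpha_B, k, r, s):
    """Two-parameter Novikov-Shubin number of F x B in degree k, scaling
    the base by mu = lambda^r and the fibre by nu = lambda^s, via
    min_{0<=p<=k} { s*alpha_p(F) + r*alpha_{k-p}(B),
                    s*alpha_{p+1}(F) + r*alpha_{k-p}(B) }.
    alpha_F[p] encodes alpha_p(F); alpha_F must reach index k+1."""
    return min(
        m
        for p in range(k + 1)
        for m in (
            s * alpha_F[p] + r * alpha_B[k - p],
            s * alpha_F[p + 1] + r * alpha_B[k - p],
        )
    )
```

For $r=s=1$ this reduces to the classical product formula for the Novikov-Shubin invariants of a product, as in [@LckL2 Thm. 2.55 (3)].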
Decomposing the tangent bundle $TM\xrightarrow{\pi} M$ as $TM \cong VM\oplus HM$ into a vertical and a horizontal subbundle gives us a diagram $$\begin{tikzcd}[ampersand replacement = \&] TF_\bullet \ar[d] \ar[r] \& T_\bullet F_\bullet \ar[dr] \& \begin{matrix} VM \oplus HM \\ \cong \\ TM \end{matrix} \ar[d] \ar[l, "\cong", no head, rounded corners, to path ={[pos=.15]([yshift=.45cm]\tikztostart.west) -| ([yshift=-3pt]\tikztotarget.north)\tikztonodes -- ([yshift=-3pt]\tikztotarget.north)}] \ar[r, "\cong"', no head, rounded corners, to path ={[pos=.15]([yshift=.45cm]\tikztostart.east) -| ([yshift=-3pt]\tikztotarget.north)\tikztonodes -- ([yshift=-3pt]\tikztotarget.north)}] \arrow[rr, "d\pi"',rounded corners, to path={[pos=0.25] (\tikztostart.north) -| ([yshift=.5cm]\tikztostart.north) -| (\tikztotarget.north)\tikztonodes -- (\tikztotarget.north)} ] \& \pi^*TB \ar[dl] \ar[r, "d\pi"] \& TB \ar[d] \\ F_\bullet \ar[rr] \& \& M \ar[rr, "\pi"] \& \& B. \end{tikzcd}$$ Given any vector field $X\in \Gamma(TM)$, we can decompose $$X = Y + Z, \qquad \text{with} \qquad Y\in \Gamma(\pi^*TB), \quad Z\in \Gamma(T_\bullet F_\bullet)$$ into a horizontal component $Y$ and a vertical component $Z$. We call a vector field $Y\in \Gamma(\pi^*TB)$ basic, if there exists a vector field $\overline{Y}\in \Gamma(TB)$ such that $Y$ is $\pi$-related to $\overline{Y}$, that is, the following diagram commutes (compare, for example, Besse [@Bes Ch. 9]): $$\begin{tikzcd}[ampersand replacement = \&] TM \ar[r, "d\pi"] \& TB \\ M \ar[u, "Y"] \ar[r, "\pi"'] \& B \ar[u, "\overline{Y}"'] \end{tikzcd}$$ We call $Y$ the lift of $\overline{Y}$. For every $U\in \Gamma(TB)$ there exists a unique such lift $\widetilde U\in \Gamma(\pi^* TB)$. We denote by $\Gamma_b(HM) \subset \Gamma(\pi^*TB)$ the set of basic vector fields. 
Then $\Gamma_b(HM)$ spans $\Gamma(\pi^*TB)$ as a $\mathcal{C}^\infty(M)$-module, so every horizontal vector field $Y\in \Gamma(HM)\cong \Gamma(\pi^*(TB))$ can be written as $$Y = \sum_{i\in I} f_i\cdot \widetilde{U_i}$$ for smooth functions $f_i \in \mathcal{C}^\infty(M)$ and $U_i\in \Gamma(TB)$. **Lemma 17**. *Let $Z,Z'\in \Gamma(VM)$ be vertical vector fields and $Y\in \Gamma_b(HM)$ a basic horizontal vector field. Then* 1. *$[Z, Z'] \in \Gamma(VM)$,* 2. *$[Y, Z] \in \Gamma(VM)$.* *Proof.* Recall that $VM = \ker(d\pi)$, hence $Z\sim_\pi 0$ and $Z'\sim_\pi 0$, where $0\in \Gamma(TB)$ denotes the zero section. By definition, $Y\sim_\pi \overline{Y}$ for some $\overline{Y}\in \Gamma(TB)$. Therefore, $$\begin{aligned} d\pi [Z, Z'] &= \widetilde{[0,0]_B} = 0, \qquad d\pi [Y,Z] = \widetilde{[\overline{Y}, 0]_B} = 0,\end{aligned}$$ and the claim follows. ◻ The de Rham complex $\Omega^\bullet(M)$ can be decomposed using the fibre bundle structure. **Theorem 19**. * [\[T_splittingOmegaFMB\]]{#T_splittingOmegaFMB label="T_splittingOmegaFMB"} Let $F_\bullet\to M\to B$ be a fibre bundle. Then there is an isomorphism $$\Omega^k(M) \xrightarrow[\ \cong\ ]{\ \Phi\ } \bigoplus_{p+q=k} \Omega^p(B, \{\Omega^q(F_b)\}_{b\in B}),$$ identifying forms on $M$ and forms on $B$ with values in the system of forms on the fibres $\{F_b\}_{b\in B}$.[^8] * *Proof.* Using that $TM\cong VM\oplus HM$, we decompose $X\in \Gamma(TM)$ as $X = Y+Z$ with $Y\in \Gamma(HM)$ and $Z\in \Gamma(VM)$. Given $U_1,\dots, U_p\in \Gamma(TB)$ with basic lifts $\widetilde{U_1},\dots, \widetilde{U_p}\in \Gamma_b(HM)$ and $Z_{p+1},\dots, Z_k\in T_\bullet F_\bullet \cong VM$, for a $k$-form $\omega\in \Omega^k(M)$ we define $$\Phi(\omega) = \sum_{p+q=k} (\Phi(\omega))_{p,q}$$ where the $(p,q)$-summand $(\Phi(\omega))_{p,q} \in \Omega^p(B,\{\Omega^q(F_\bullet)\})$ is given by $$(\Phi(\omega))_{p,q}(U_1,\dots,U_p)(Z_{p+1},\dots,Z_k) = \omega\left(\widetilde{U_1},\dots,\widetilde{U_p},Z_{p+1},\dots, Z_k \right).$$ Decomposing $X_\bullet\in \Gamma(TM)$ as $X_\bullet=Y_\bullet+Z_\bullet$ with $Y_\bullet\in \Gamma(HM)$ and $Z_\bullet\in \Gamma(VM)$ as before, we construct the inverse $$\Psi\colon \bigoplus_{p+q=k} \Omega^p(B,\{\Omega^q(F_\bullet)\})\to\Omega^k(M)$$ to this map, starting with $\alpha\in\Omega^p(B,\{\Omega^q(F_\bullet)\})$ by $$\begin{aligned} \Psi(\alpha)(X_1,\dots, X_k) &= \frac{1}{p!q!}\sum_{\sigma\in \mathcal{S}_k}\mathrm{sgn}(\sigma) \left(\pi^*\alpha\right)( Y_{\sigma(1)},\dots, Y_{\sigma(p)})( Z_{\sigma(p+1)},\dots, Z_{\sigma(k)}),\end{aligned}$$ where $\mathcal{S}_k$ is the set of permutations of the first $k$ integers, $\{1,\dots,k\}$, and $\mathrm{sgn}$ the sign of the permutation. This is then extended linearly to the direct sum. We check that $\Psi$ and $\Phi$ are indeed inverses to each other.
With the notation above, for a summand $\alpha\in\Omega^p(B,\{\Omega^q(F_\bullet)\})$, $$\begin{aligned} \Phi\Psi(\alpha)&(U_1,\dots, U_p)(Z_{p+1},\dots, Z_k) \\ &= \Psi(\alpha)\left(\widetilde{U_1},\dots, \widetilde{U_p},Z_{p+1},\dots, Z_k\right) \\ &= \frac{1}{p!q!}\sum_{\sigma\in S_k}\mathrm{sgn}(\sigma) \left(\pi^*\alpha\right)\left(\widetilde{U_{\sigma(1)}},\dots, \widetilde{U_{\sigma(p)}}\right)( Z_{\sigma(p+1)},\dots, Z_{\sigma(k)}) \\ &= \left(\pi^*\alpha\right)\left(\widetilde{U_1}, \dots, \widetilde{U_p}\right)(Z_{p+1},\dots,Z_k) \\ &= \alpha(U_1,\dots, U_p)(Z_{p+1},\dots,Z_k),\end{aligned}$$ where in the third equality the horizontal components of the arguments vanish for indices $l>p$ and the vertical components vanish for $l\leq p$, so that after reordering the arguments, each summand appears $p!q!$ times with $+$ sign. In the other direction, writing $X_\bullet = Y_\bullet+Z_\bullet\in \Gamma(HM)\oplus \Gamma(VM) \cong \Gamma(TM)$ as before, $$\begin{aligned} \Psi\Phi(\omega)&(X_1,\dots, X_k) \\ &= \sum_{p+q=k}\frac{1}{p!q!}\sum_{\sigma\in S_k}\mathrm{sgn}(\sigma) (\pi^*\Phi(\omega))(Y_{\sigma(1)},\dots, Y_{\sigma(p)})(Z_{\sigma(p+1)},\dots, Z_{\sigma(k)}),\end{aligned}$$ where pointwise for $x\in M$ with $b=\pi(x)$, $$\begin{aligned} (\pi^*\Phi(\omega))_x&(Y_{\sigma(1)}(x),\dots, Y_{\sigma(p)}(x)) (Z_{\sigma(p+1)}(x),\dots, Z_{\sigma(k)}(x)) \\ &= \Phi(\omega)_b(Y_{\sigma(1)}(x),\dots, Y_{\sigma(p)}(x)) (Z_{\sigma(p+1)}(x),\dots, Z_{\sigma(k)}(x)) \\ &= \omega_x(A_{\sigma(1)}(x),\dots, A_{\sigma(p)}(x), Z_{\sigma(p+1)}(x),\dots, Z_{\sigma(k)}(x)) \\ &= \omega_x({Y_{\sigma(1)}(x)},\dots, {Y_{\sigma(p)}(x)}, Z_{\sigma(p+1)}(x),\dots, Z_{\sigma(k)}(x))\end{aligned}$$ where $A_i$ is some basic horizontal vector field with $A_i(x) = Y_i(x)$.
Therefore, $$\begin{aligned} \Psi\Phi(\omega)(X_1,\dots, X_k) &= \sum_{p+q=k}\frac{1}{p!q!}\sum_{\sigma\in S_k}\mathrm{sgn}(\sigma) \omega(Y_{\sigma(1)},\dots, Y_{\sigma(p)},Z_{\sigma(p+1)},\dots, Z_{\sigma(k)}) \\ &= \sum_{(\Xi_1,\dots,\Xi_k) \in \{Y_1,Z_1\}\times\cdots\times \{Y_k,Z_k\}} \omega(\Xi_1, \dots, \Xi_k) \\ &= \omega(X_1,\dots, X_k),\end{aligned}$$ where we use in the second equality that $\omega$ is antisymmetric and that after reordering each summand appears $p!q!$ times, where $p$ is the number of $Y_\bullet$s and $q$ the number of $Z_\bullet$s chosen. The last equality then follows by multilinearity of $\omega$. ◻ We can now look at the differential $d\colon \Omega^k(M)\to \Omega^{k+1}(M)$ under this decomposition. **Lemma 21**. * Under the decomposition $\Phi$ of $\Omega^\bullet(M)$, the de Rham differential splits into three summands, $d \cong d^{0,1} + d^{1,0} + d^{2,-1},$ where $$d^{i,1-i}\colon \Omega^p(B,\{\Omega^q(F_\bullet)\}) \to \Omega^{p+i}(B,\{\Omega^{q+1-i}(F_\bullet)\}).$$ * *Proof.* By Cartan's formula, for $\omega\in \Omega^k(M)$ and $X_0,\dots, X_{k}\in \Gamma(TM)$, the de Rham differential of $\omega$ evaluated on the $X_\bullet$s is given by $$\begin{aligned} d(\omega)(X_0,\dots, X_{k}) &= \sum_{i=0}^k (-1)^i X_i ( \omega( X_0,\overset{\widehat X_i}{\dots}, X_k) ) \\ & + \sum_{0\leq i< j\leq k} (-1)^{i+j} \omega([X_i, X_j],X_0,\overset{\widehat X_i, \widehat X_j}{\dots}, X_k).\end{aligned}$$ We denote by $[X_i, X_j]_H$ and $[X_i, X_j]_V$ the projections of $[X_i, X_j]$ to $\Gamma(HM)$ and $\Gamma(VM)$, respectively.
Given $\alpha\in \Omega^p(B, \{\Omega^q F_\bullet\})$, we compute $\Phi d\Psi \alpha$ by looking at the $(r,s)$-component $(\Phi d \Psi \alpha)_{r,s}\in \Omega^r(B, \{\Omega^s F_\bullet\})$ (with $r+s=k+1$). For this, let $U_1,\dots, U_r\in \Gamma(TB)$ and $Z_{r+1},\dots, Z_{k+1}\in \Gamma(T_\bullet F_\bullet)$, then $$\begin{aligned} (\Phi d \Psi \alpha)_{r,s}&(U_1,\dots, U_r)(Z_{r+1},\dots, Z_{k+1}) \\ &=(d\Psi\alpha)\left(\widetilde{U_1},\dots,\widetilde{U_r},Z_{r+1},\dots, Z_{k+1}\right) \\ &= \sum_{1\leq i\leq r} (-1)^{i+1} \widetilde{U_i}\left(\Psi\alpha(\widetilde{U_1},\overset{\widehat{{U_i}}}{\dots},\widetilde{U_r},Z_{r+1},\dots,Z_{k+1})\right) \\ &\quad + \sum_{r+1\leq i\leq k+1} (-1)^{i+1} Z_i\left(\Psi\alpha(\widetilde{U_1},\dots,\widetilde{U_r},Z_{r+1},\overset{\widehat{Z_i}}{\dots},Z_{k+1})\right) \\ &\quad + \sum_{1\leq i<j\leq r} (-1)^{i+j+1} \Psi\alpha\left([\widetilde{U_i},\widetilde{U_j}],\widetilde{U_1},\overset{\widehat{{U_i}},\widehat{U_j}}{\dots},\widetilde{U_r}, Z_{r+1},\dots,Z_{k+1}\right) \\ &\quad + \sum_{1\leq i \leq r < j\leq k+1} (-1)^{i+j+1} \Psi\alpha\left([\widetilde{U_i},Z_j],\widetilde{U_1},\overset{\widehat{U_i}}{\dots},\widetilde{U_r}, Z_{r+1},\overset{\widehat{Z_j}}{\dots},Z_{k+1}\right) \\ &\quad + \sum_{r+1\leq i<j\leq k+1} (-1)^{i+j+1} \Psi\alpha\left([Z_i,Z_j],\widetilde{U_1},\dots,\widetilde{U_r}, Z_{r+1},\overset{\widehat{Z_i},\widehat{Z_j}}{\dots},Z_{k+1}\right).\end{aligned}$$ By definition, $\Psi(\alpha)\neq 0$ only if $p$ of the arguments have non-zero components in $\Gamma(HM)$ and $q$ of the arguments have non-zero components in $\Gamma(VM)$. Recall that $[Z,Z'],[\widetilde{U},Z]\in \Gamma(VM)$ for all $Z,Z'\in \Gamma(VM)$ and $U\in \Gamma(TB)$. Therefore, $\Phi d\Psi \alpha$ decomposes into the following three summands. 1. The first summand keeps the base-degree fixed and increases the fibre-degree by one.
It is given for $\alpha\in \Omega^p(B, \{\Omega^q F_\bullet\})$ by $$\begin{aligned} (\Phi d \Psi \alpha)_{p,q+1}&(U_1,\dots, U_p)(Z_{p+1}, \dots, Z_{k+1})\\ &= \sum_{p+1\leq i\leq k+1} (-1)^{i+1} Z_i(\Psi\alpha(\widetilde{U_1},\dots,\widetilde{U_p},Z_{p+1},\overset{\widehat{Z_i}}{\dots},Z_{k+1})) \\ &\quad + \sum_{p+1\leq i<j\leq k+1} (-1)^{i+j+1-p} \Psi\alpha(\widetilde{U_1},\dots,\widetilde{U_p}, [Z_i,Z_j], Z_{p+1},\overset{\widehat{Z_i},\widehat{Z_j}}{\dots},Z_{k+1}) \\ &= \sum_{p+1\leq i\leq k+1} (-1)^{i+1} Z_i(\alpha(U_1,\dots, U_p))(Z_{p+1},\overset{\widehat{Z_i}}{\dots}, Z_{k+1}) \\ &\quad + \sum_{p+1\leq i<j \leq k+1} (-1)^{i+j+1-p} \alpha(U_1,\dots, U_p)([Z_i, Z_j], Z_{p+1}, \overset{\widehat{Z_i},\widehat{Z_j}}{\dots}, Z_{k+1}).\end{aligned}$$ 2. The second summand increases the base-degree by one and keeps the fibre-degree fixed. It is given by $$\begin{aligned} (\Phi d \Psi \alpha)_{p+1,q}&(U_1,\dots, U_{p+1})(Z_{p+2}, \dots, Z_{k+1})\\ &= \sum_{1\leq i\leq p+1} (-1)^{i+1} \widetilde{U_i}(\Psi\alpha(\widetilde{U_1},\overset{\widehat{U_i}}{\dots},\widetilde{U_{p+1}},Z_{p+2},\dots,Z_{k+1})) \\ &\quad + \sum_{1\leq i<j\leq p+1} (-1)^{i+j+1} \Psi\alpha([\widetilde{U_i},\widetilde{U_j}]_H,\widetilde{U_1},\overset{\widehat{U_i},\widehat{U_j}}{\dots},\widetilde{U_{p+1}}, Z_{p+2},\dots,Z_{k+1}) \\ &\quad + \sum_{1\leq i \leq p+1 < j\leq k+1} (-1)^{i+j+1-p} \Psi\alpha(\widetilde{U_1},\overset{\widehat{U_i}}{\dots}, \widetilde{U_{p+1}}, [\widetilde{U_i},Z_j], Z_{p+2},\overset{\widehat{Z_j}}{\dots},Z_{k+1}) \\ &= \sum_{1\leq i\leq p+1} (-1)^{i+1} {\widetilde U_i}(\alpha({U_1},\overset{\widehat{U_i}}{\dots},{U_{p+1}})(Z_{p+2},\dots,Z_{k+1})) \\ &\quad + \sum_{1\leq i<j\leq p+1} (-1)^{i+j+1} \alpha([{U_i},{U_j}],{U_1},\overset{\widehat{U_i},\widehat{U_j}}{\dots},{U_{p+1}})( Z_{p+2},\dots,Z_{k+1}) \\ &\quad + \sum_{1\leq i \leq p+1 < j\leq k+1} (-1)^{i+j+1-p} \alpha({U_1},\overset{\widehat{U_i}}{\dots}, {U_{p+1}})( [\widetilde{U_i},Z_j], 
Z_{p+2},\overset{\widehat{Z_j}}{\dots},Z_{k+1}).\end{aligned}$$ 3. The third summand increases the base-degree by two and decreases the fibre-degree by one. It is given by $$\begin{aligned} (\Phi d \Psi \alpha)_{p+2,q-1}&(U_1,\dots, U_{p+2})(Z_{p+3}, \dots, Z_{k+1})\\ &= \sum_{1\leq i<j\leq p+2} (-1)^{i+j+1-p} \Psi\alpha(\widetilde{U_1},\overset{\widehat{U_i},\widehat{U_j}}{\dots}, \widetilde{U_{p+2}}, [\widetilde{U_i},\widetilde{U_j}]_V, Z_{p+3},\dots,Z_{k+1}) \\ &= \sum_{1\leq i<j\leq p+2} (-1)^{i+j+1-p} \alpha({U_1},\overset{\widehat{U_i},\widehat{U_j}}{\dots}, {U_{p+2}})([\widetilde{U_i},\widetilde{U_j}]_V, Z_{p+3},\dots,Z_{k+1}).\end{aligned}$$ The claim follows with the maps defined for $\alpha\in \Omega^p(B,\{\Omega^q(F_\bullet)\})$ by $$\begin{aligned} d^{0,1}(\alpha) &= (\Phi d\Psi\alpha)_{p,q+1}, \\ d^{1,0}(\alpha) &= (\Phi d\Psi\alpha)_{p+1,q}, \\ d^{2,-1}(\alpha) &= (\Phi d\Psi\alpha)_{p+2,q-1}.\end{aligned}$$ and $d=d^{0,1} + d^{1,0} + d^{2,-1}$ extended linearly to $\bigoplus_{p+q=k}\Omega^p(B,\{\Omega^q(F_\bullet)\})$. ◻ Denote $E_0^{p,q} = \Omega^{p}(B, \{\Omega^q(F_\bullet)\})$, then we can visualise this decomposition as a $\mathbb{Z}^2$-graded complex.[^10] An excerpt of this is pictured below, with the maps $d^{i,1-i}$ only drawn at $E_0^{p,q}$ and as dashed arrows at their images. As usual, the parts appearing in $\Omega^k(M)$ align along the antidiagonal $p+q=k$ in the diagram. 
$$\fbox{ \begin{tikzcd}[ampersand replacement = \&] E_0^{p,q+2} \& E_0^{p+1,q+2} \& E_0^{p+2,q+2} \& E_0^{p+3,q+2} \& E_0^{p+4,q+2} \\ E_0^{p,q+1} \ar[u, dashed] \ar[r, dashed] \ar[rrd, dashed] \& E_0^{p+1,q+1} \& E_0^{p+2,q+1} \& E_0^{p+3,q+1} \& E_0^{p+4,q+1} \\ E_0^{p,q} \ar[u, "d^{0,1}" description] \ar[r, "d^{1,0}" description] \ar[rrd, "d^{2,-1}" description] \& E_0^{p+1,q} \ar[u, dashed] \ar[r, dashed] \ar[rrd, dashed] \& E_0^{p+2,q} \& E_0^{p+3,q} \& E_0^{p+4,q} \\ E_0^{p,q-1} \& E_0^{p+1,q-1} \& E_0^{p+2,q-1} \ar[u, dashed] \ar[r, dashed] \ar[rrd, dashed] \& E_0^{p+3,q-1} \& E_0^{p+4,q-1} \\ E_0^{p,q-2} \& E_0^{p+1,q-2} \& E_0^{p+2,q-2} \& E_0^{p+3,q-2} \& E_0^{p+4,q-2} \end{tikzcd} }$$ Since $d = d^{0,1}+d^{1,0}+d^{2,-1}$ is a differential, that is, $d^2 = 0$, we obtain immediately that $$\begin{aligned} {2} &0 = (d^{0,1})^2, &&0 = d^{0,1} d^{1,0} + d^{1,0} d^{0,1}, \\ &0 = d^{0,1}d^{2,-1} + (d^{1,0})^2 + d^{2,-1}d^{0,1}, \qquad \qquad &&0 = d^{1,0}d^{2,-1} + d^{2,-1}d^{1,0}, \\ &0 = (d^{2,-1})^2. &&\end{aligned}$$ Note that $d^{1,0}$ is not a differential in general. 
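The five relations can be read off mechanically from bidegree bookkeeping. In the illustrative Python sketch below (the symbol names `d01`, `d10`, `d2m1` are ours), expanding $d\circ d$ and grouping the nine compositions by their total bidegree reproduces exactly the five groups displayed above; $d^2=0$ then forces each group to vanish separately.

```python
from collections import defaultdict
from itertools import product

# Bidegrees (base shift, fibre shift) of the three summands of d.
summands = {"d01": (0, 1), "d10": (1, 0), "d2m1": (2, -1)}

# Expand (d01 + d10 + d2m1) composed with itself and group the nine
# compositions by total bidegree; each group must vanish on its own.
groups = defaultdict(list)
for (a, da), (b, db) in product(summands.items(), repeat=2):
    groups[(da[0] + db[0], da[1] + db[1])].append(f"{a} {b}")

for bidegree in sorted(groups):
    print(bidegree, " + ".join(groups[bidegree]), "= 0")
```

The group in bidegree $(2,0)$, for instance, collects $d^{0,1}d^{2,-1}$, $(d^{1,0})^2$, and $d^{2,-1}d^{0,1}$, matching the third displayed relation.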
Leaving out the terms that cancel due to the usual alternating sign[^11], a direct computation shows that for $\alpha\in E_0^{p,q}$, $U_1,\dots,U_{p+2}\in \Gamma(TB)$ and $Z_{p+3},\dots, Z_{k+2}\in \Gamma(T_\bullet F_\bullet)$: $$\begin{aligned} (d^{1,0})^2(\alpha)&(U_1,\dots, U_{p+2})(Z_{p+3},\dots, Z_{k+2}) \\ &= \sum_{1\leq i < j\leq p+2} (-1)^{i+j} (\widetilde U_j \widetilde U_i - \widetilde U_i \widetilde U_j)(\alpha(U_1,\overset{\widehat U_i, \widehat U_j}{\dots}, U_{p+2})(Z_{p+3},\dots, Z_{k+2})) \\ &\quad + \sum_{1\leq i < j\leq p+2} (-1)^{i+j} \widetilde{[U_i,U_j]_B} (\alpha(U_1,\overset{\widehat U_i, \widehat U_j}{\dots}, U_{p+2})(Z_{p+3},\dots, Z_{k+2})) \\ &\quad + \sum_{1\leq i< j\leq p+2 <l\leq k+2} (-1)^{i+j+l+p}\alpha(U_1,\overset{\widehat U_i, \widehat U_j}{\dots}, U_{p+2})([\widetilde U_i, [\widetilde U_j, Z_l]], Z_{p+3},\dots, Z_{k+2}).\end{aligned}$$ For the first two terms we have $$[\widetilde U_i, \widetilde U_j] - \widetilde{[U_i, U_j]_B}= [\widetilde U_i, \widetilde U_j]_V$$ and since, by the Jacobi identity, $[\widetilde U_i, [\widetilde U_j, Z_l]] - [\widetilde U_j, [\widetilde U_i, Z_l]] = -[[\widetilde U_i, \widetilde U_j], Z_l]$, this precisely cancels the terms that survive in $d^{0,1}d^{2,-1} + d^{2,-1}d^{0,1}$: $$\begin{aligned} (d^{0,1}&d^{2,-1} + d^{2,-1}d^{0,1})(\alpha)(U_1,\dots, U_{p+2})(Z_{p+3},\dots, Z_{k+2}) \\ &= \sum_{1\leq i < j\leq p+2} (-1)^{i+j} [\widetilde U_i, \widetilde U_j]_V (\alpha(U_1,\overset{\widehat U_i, \widehat U_j}{\dots}, U_{p+2})(Z_{p+3},\dots, Z_{k+2})) \\ &\quad + \sum_{1\leq i< j\leq p+2 <l\leq k+2} (-1)^{i+j+l+p+1}\alpha(U_1,\overset{\widehat U_i, \widehat U_j}{\dots}, U_{p+2})([[\widetilde U_i, \widetilde U_j], Z_l], Z_{p+3},\dots, Z_{k+2}).\end{aligned}$$ The inner product on $\Omega^k(M)$ (coming from the Riemannian metric) induces inner products on the decomposition.
For $\alpha,\alpha'\in \Omega^p(B, \{\Omega^q(F_\bullet)\})$, $$\langle \alpha,\alpha' \rangle_{g, (p,q)} = \langle \Psi\alpha, \Psi\alpha'\rangle_{(M,g)} = \int_{(M,g)} \Psi\alpha\wedge\ast\Psi\alpha',$$ while the different direct summands are mutually orthogonal since the decomposition of $TM$ into $VM$ and $HM$ is orthogonal. This implies that, for $\omega\in \Omega^k(M)$ with $\Phi(\omega)=\alpha = \sum_{p+q=k}\alpha_{p,q} \in \bigoplus_{p+q=k} E_0^{p,q}$, $$\|\omega\|_g^2 = \|\alpha\|^2_g = \sum_{p+q=k}\|\alpha_{p,q}\|^2_g = \sum_{p+q=k} \langle \alpha_{p,q}, \alpha_{p,q} \rangle_{g,(p,q)} .$$ When changing the metric from $g$ to $g^{\mu,\nu}$ on $M$, the length of a vertical tangent vector $v\in V_xM$ changes by a factor $\nu$ as $$\|v\|_{g^{\mu,\nu}}^2 = \nu^2 g_x^V(v,v) = (\nu \|v\|_{g})^2$$ and on horizontal tangent vectors $h\in H_xM$ by $\|h\|_{g^{\mu,\nu}}^2 = (\mu \|h\|_{g})^2$. Denote by $\omega_g$ the volume form on $(M,g)$ and by $\omega_{g^{\mu,\nu}}$ the volume form on $(M, g^{\mu,\nu})$. By the observation above, $$\omega_g = \mu^{-\dim(B)} \nu^{-\dim(F)}\cdot \omega_{g^{\mu,\nu}}.$$ The Hodge $*$-operators $\ast_g, \ast_{g^{\mu,\nu}}$ map $\Omega^k(M)\to \Omega^{n-k}(M)$ and preserve the decomposition as $$\ast\colon \Omega^{p}(B, \{\Omega^{q}(F_\bullet)\}) \to \Omega^{\dim(B)-p}(B, \{\Omega^{\dim(F)-q}(F_\bullet)\}).$$ Since on $\Omega^{p}(B, \{\Omega^{q}(F_\bullet)\})$, $$\begin{aligned} \langle \alpha, \beta\rangle_{g^{\mu,\nu}} &= \int_{(M,g^{\mu,\nu})} \alpha\wedge \ast_{g^{\mu,\nu}} \beta = \mu^{-\dim(B)} \nu^{-\dim(F)} \int_{(M,g)} \alpha\wedge \ast_{g^{\mu,\nu}} \beta \\ &= \mu^{-\dim(B)} \nu^{-\dim(F)} \mu^{\dim(B)-2p} \nu^{\dim(F)-2q}\cdot \int_{(M,g)} \alpha\wedge \ast_{g} \beta \\ &=\mu^{-2p} \nu^{-2q} \langle \alpha, \beta\rangle_{g},\end{aligned}$$ the scalar product changes by a factor $\mu^{-2p} \nu^{-2q}$.
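The exponent bookkeeping in the last computation can be sanity-checked numerically. The sketch below (the helper name is ours) multiplies the volume-form factor $\mu^{-\dim B}\nu^{-\dim F}$ by the Hodge-star factor $\mu^{\dim B-2p}\nu^{\dim F-2q}$ and confirms, in exact rational arithmetic, that the product is $\mu^{-2p}\nu^{-2q}$ independently of $\dim B$ and $\dim F$.

```python
from fractions import Fraction

def inner_product_factor(mu, nu, dim_B, dim_F, p, q):
    """Factor relating <.,.>_{g^{mu,nu}} to <.,.>_g on (p,q)-forms:
    the volume-form change times the Hodge-star change."""
    return mu ** (-dim_B) * nu ** (-dim_F) * mu ** (dim_B - 2 * p) * nu ** (dim_F - 2 * q)

mu, nu = Fraction(1, 2), Fraction(3)
for dim_B, dim_F in [(2, 3), (4, 7)]:
    for p in range(dim_B + 1):
        for q in range(dim_F + 1):
            # The dimensions cancel; only the bidegree (p, q) survives.
            assert inner_product_factor(mu, nu, dim_B, dim_F, p, q) == mu ** (-2 * p) * nu ** (-2 * q)
print("factor is mu^(-2p) * nu^(-2q) in all tested cases")
```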
This allows us to define the two-parameter Novikov-Shubin numbers via the near cohomology cones of the decomposed complex. Since the near cohomology cone satisfies $$\begin{aligned} &C_{\lambda_0}^k(M,g^{\mu,\nu}) = \left\{ \omega\in \Omega^k(M) \cap \ker(d)^\perp \:\middle|\: \|d\omega\|_{g^{\mu,\nu}} \leq \lambda_0 \|\omega\|_{g^{\mu,\nu}} \right\} \\ & \cong \left\{ \! \alpha \! \in \!\! \left(\bigoplus_{p+q=k} E_0^{p,q}\right) \! \cap \Phi\!\left(\ker(d)^\perp\right) \:\middle|\: \sum_{r+s=k+1} \!\!\!\! \mu^{-r}\nu^{-s}\|(d\alpha)_{r,s}\|_g \leq \lambda_0 \!\!\! \sum_{p+q=k} \!\!\! \mu^{-p}\nu^{-q} \|\alpha_{p,q}\|_g\right\}\!,\end{aligned}$$ we can define $\mathcal{G}_k(M\to B, \nabla, g)$ in terms of this near cohomology cone with $\lambda_0=1$ as follows. **Corollary 23**. * In the notation as above, $$\mathcal{G}_k(M\to B, \nabla, g)(\mu,\nu) = \sup_{L }\dim_{\mathcal{N}G} L,$$ where the supremum runs over all closed linear subspaces $L$ of $C_1^k(M, g^{\mu,\nu})$. * *Proof.* This follows immediately since $\mathcal{G}_k(M\to B, \nabla, g)(\mu,\nu) = \mathcal{F}_k(M, g^{\mu,\nu})(1)$. ◻

# Invariance Properties

In this section we show multiple invariance properties of the two-parameter Novikov-Shubin numbers. We show that for a fibre bundle $M\to B$ and a fixed connection $\nabla$, the dilatational equivalence class of the underlying spectral density functions is independent of the $\nabla$-compatible Riemannian metric $g$ on $M$. Then we show that the spectral density functions are dilatationally equivalent for two bundles $M\to B$ and $M'\to B$ if there exists a certain type of $\nabla$-compatible fibre homotopy equivalence.
We also show that the dilatational equivalence class of the spectral density functions is independent of the connection $\nabla$ if we restrict them to the parameter subspace $\{\nu\leq \mu\}$, where the fibre is scaled at least as fast as the base. In particular, the two-parameter Novikov-Shubin numbers are invariant under these operations.

## Metric Invariance for Fixed Connection

From the definition in terms of near cohomology cones, we can derive that the dilatational equivalence class of $\mathcal{G}_k(M\to B, \nabla, g)$ for a fixed connection $\nabla$ does not depend on the metric $g$. **Theorem 25**. * [\[Thm_2NSI_Metric_fixed_conn_invar\]]{#Thm_2NSI_Metric_fixed_conn_invar label="Thm_2NSI_Metric_fixed_conn_invar"} Let $G\curvearrowright(M\to B,\nabla,g)$ be a fibre bundle with fixed connection $\nabla$ and compatible cocompact free proper group action by a group $G$. Then for $0\leq k\leq \dim(M)$ the dilatational equivalence class of $$\mathcal{G}_k(M\to B, \nabla) = \mathcal{G}_k(M\to B, \nabla, g)$$ does not depend on the choice of $G$-invariant $\nabla$-compatible Riemannian metric $g$. * *Proof.* On a compact manifold $\overline M$, any two Riemannian metrics $\overline g,\overline{g}'$ are quasi-equivalent, that is, there exists $K\geq 1$ such that $K^{-1} \overline{g} \leq \overline{g}' \leq K \overline{g}$. By $G$-invariance of the Riemannian metrics and cocompactness of the action $G\curvearrowright M$, this is true for any two choices of $G$-invariant Riemannian metrics $g,g'$ on $M$. Restricting to the subbundles $V^*M$ and $H^*M$ of $T^*M$, this inequality also holds for the vertical and horizontal parts individually. After rescaling, it follows that there is $K>0$ such that for all $\mu,\nu>0$, $$K^{-1} g^{\mu,\nu} \leq (g')^{\mu,\nu} \leq K g^{\mu,\nu}.$$ If $\omega\in C_\lambda^k(M,(g')^{\mu,\nu})$, then $$\begin{aligned} K^{-2(k+1)} \|d\omega\|^2_{g^{\mu,\nu}} &= \|d\omega\|^2_{K^{-1} g^{\mu,\nu}} \leq \|d\omega\|^2_{g'^{\mu,\nu}} \\ &\leq \lambda^2 \|\omega\|_{g'^{\mu,\nu}}^2 \leq \lambda^2 \|\omega\|^2_{K g^{\mu,\nu}} = K^{2k} \lambda^2 \|\omega\|^2_{g^{\mu,\nu}}.\end{aligned}$$ This implies that $\omega\in C^k_{K^{2k+1} \lambda}(M, g^{\mu,\nu})= C^k_{\lambda}(M, Kg^{\mu,\nu}).$ We can repeat this argument starting with $C_\lambda^k(M,g^{\mu,\nu})$ to obtain an inclusion in the other direction, so that in total $$C^k_{\lambda}(M, K^{-1} g^{\mu,\nu}) \subset C^k_{\lambda}(M, g'^{\mu,\nu}) \subset C^k_{\lambda}(M, K g^{\mu,\nu}).$$ Taking suprema over the $\mathcal{N}G$-dimensions of closed linear subspaces, with $K g^{\mu,\nu} = g^{K^{\nicefrac{1}{2}} \mu, K^{\nicefrac{1}{2}} \nu}$, $$\begin{aligned} \mathcal{G}_k(M\to B, \nabla, g)({K^{-\nicefrac{1}{2}}}\mu,{K^{-\nicefrac{1}{2}}}\nu) &\leq \mathcal{G}_k(M\to B, \nabla, g')(\mu,\nu) \\&\leq \mathcal{G}_k(M\to B, \nabla, g)(K^{\nicefrac{1}{2}} \mu,K^{\nicefrac{1}{2}}\nu) \end{aligned}$$ and hence the spectral density functions are dilatationally equivalent, $$\mathcal{G}_k(M\to B, \nabla, g)\sim \mathcal{G}_k(M\to B, \nabla, g').
\qedhere$$ ◻

## Fibre Homotopy Invariance

Next, we want to study the behaviour of the two-parameter Novikov-Shubin numbers under fibre homotopy equivalences. Such a homotopy equivalence $f$, say between $M\to B$ and $M'\to B$, should respect the decompositions $TM \cong_\nabla HM \oplus VM$ and $TM' \cong_{\nabla'} HM' \oplus VM'$ coming from the connections, in the sense that $f^*\nabla' = \nabla$. This leads us to the following definition of geometric fibre homotopy equivalences. **Definition 27**. Let $F_\bullet\to M\xrightarrow{\pi} B$ and $F'_\bullet\to M'\xrightarrow{\pi'} B$ be two fibre bundles over $B$ equipped with connections $$TM\cong_\nabla VM\oplus HM, \qquad TM' \cong_{\nabla'} VM' \oplus HM'.$$ A fibre homotopy equivalence is a homotopy equivalence $f\colon M\to M'$ such that $f$ is a fibre map over the identity $\mathop{\mathrm{id}}_B$ of $B$, that is, the diagram $$\begin{tikzcd}[ampersand replacement=\&, row sep = 30, column sep = 30] M \ar[r, "\pi"] \ar[d, "f"] \& B \ar[d, equal] \\ M' \ar[r, "\pi'"] \& B, \end{tikzcd}$$ commutes, and so are a homotopy inverse $g$ of $f$ and the homotopy $\Phi\colon M\times [0,1] \to M$ between $gf$ and $\mathop{\mathrm{id}}_M$ at every time $t\in [0,1]$. We call such a fibre homotopy equivalence $f\colon M\to M'$ geometric if it satisfies $f^*\nabla' = \nabla$. The property of being geometric implies that the pullback $f^*$ commutes not only with the de Rham differential $d$ itself but also with each of the individual summands we identified earlier. **Lemma 29**. * If $f\colon M\to M'$ is a geometric fibre homotopy equivalence, then $f^*$ commutes with the differential $d$ and each of its three summands $d = d^{0,1}+d^{1,0}+d^{2,-1}$.
* *Proof.* Since $f$ is geometric, the fibre homotopy equivalence $f$ restricts fibrewise to homotopy equivalences $$f|_{F_b}\colon F_b\xrightarrow{\ \simeq\ } F'_b$$ and the push-forward $f_*\colon TM \to TM'$ restricts to maps $$f_*\colon HM\to HM' \quad \text{and} \quad f_*\colon VM\to VM'.$$ Therefore, the induced chain homotopy equivalence $f^*\colon \Omega^\bullet M'\to \Omega^\bullet M$ restricts under the direct sum decompositions to maps on each $(p,q)$-summand, that is, $$f^*_{p,q}\colon \Omega^p(B,\{\Omega^q(F'_\bullet)\}) \to \Omega^p(B,\{\Omega^q(F_\bullet)\})$$ given on $\alpha_{p,q}\in \Omega^p(B,\{\Omega^q(F'_\bullet)\})$ with $p+q=k$ by $$\begin{aligned} (f^*_{p,q}\alpha)_{p,q}&(U_1,\dots, U_p)(Z_{p+1}, \dots, Z_{k}) \\ &= (f|_{F_\bullet})^*\left(\alpha_{p,q}(U_1,\dots, U_p)\right)(Z_{p+1}, \dots, Z_{k}) \\ &= \alpha_{p,q}(U_1,\dots, U_p)(df(Z_{p+1}), \dots, df(Z_{k})).\end{aligned}$$ Recall that the differential $d$ on $\Omega^p(B,\{\Omega^q(F_\bullet)\})$ splits into the following three summands: $$\begin{aligned} (d^{0,1} \alpha)_{p,q+1}&(U_1,\dots, U_p)(Z_{p+1}, \dots, Z_{k+1})\\ &= \sum_{p+1\leq i\leq k+1} (-1)^{i+1} Z_i(\alpha(U_1,\dots, U_p))(Z_{p+1},\overset{\widehat{Z_i}}{\dots}, Z_{k+1}) \\ &\quad + \sum_{p+1\leq i<j \leq k+1} (-1)^{i+j+1-p} \alpha(U_1,\dots, U_p)([Z_i, Z_j], Z_{p+1}, \overset{\widehat{Z_i},\widehat{Z_j}}{\dots}, Z_{k+1}), \\ (d^{1,0} \alpha)_{p+1,q}&(U_1,\dots, U_{p+1})(Z_{p+2}, \dots, Z_{k+1})\\ &= \sum_{1\leq i\leq p+1} (-1)^{i+1} {\widetilde U_i}(\alpha({U_1},\overset{\widehat{U_i}}{\dots},{U_{p+1}})(Z_{p+2},\dots,Z_{k+1})) \\ &\quad + \sum_{1\leq i<j\leq p+1} (-1)^{i+j+1} \alpha([{U_i},{U_j}],{U_1},\overset{\widehat{U_i},\widehat{U_j}}{\dots},{U_{p+1}})( Z_{p+2},\dots,Z_{k+1}) \\ &\quad + \sum_{1\leq i \leq p+1 < j\leq k+1} (-1)^{i+j+1-p} \alpha({U_1},\overset{\widehat{U_i}}{\dots}, {U_{p+1}})( [\widetilde{U_i},Z_j], Z_{p+2},\overset{\widehat{Z_j}}{\dots},Z_{k+1}) \\ (d^{2,-1} \alpha)_{p+2,q-1}&(U_1,\dots, U_{p+2})(Z_{p+3},
\dots, Z_{k+1})\\ &= \sum_{1\leq i<j\leq p+2} (-1)^{i+j+1-p} \alpha({U_1},\overset{\widehat{U_i},\widehat{U_j}}{\dots}, {U_{p+2}})([\widetilde{U_i},\widetilde{U_j}]_V, Z_{p+3},\dots,Z_{k+1}).\end{aligned}$$ Here, $f^*$ commutes with $d^{0,1}$ as we can see directly from the formulae or from the fact that $$(d^{0,1} \alpha)(U_1,\dots,U_p) = d_{F_\bullet}(\alpha(U_1,\dots,U_p))$$ acts as the fibre differential and therefore commutes with the pullback of $f$. From the formulae we see further that $d^{1,0}$ commutes with $f^*$: since $f$ is geometric and preserves base points, the horizontal lifts are $f$-related, $df\circ\widetilde{U} = \widetilde{U}'\circ f$, and hence $$df([\widetilde{U}, Z]) = [df(\widetilde{U}), df(Z)] = [\widetilde U', df(Z)],$$ where $\widetilde{U}$ is the horizontal lift of $U$ to $TM$ and $\widetilde{U}'$ the horizontal lift to $TM'$. Lastly, $d^{2,-1}$ commutes with $f^*$ since $$\begin{aligned} df\left([\widetilde{U_1},\widetilde{U_2} ]_V\right) &= df\left([\widetilde{U_1},\widetilde{U_2} ] - [\widetilde{U_1},\widetilde{U_2} ]_H \right) = [df(\widetilde{U_1}),df(\widetilde{U_2}) ] - df(\widetilde{[{U_1},{U_2} ]}) \\ &= [\widetilde{U_1}', \widetilde{U_2}'] - \widetilde{[U_1,U_2]}' = [\widetilde{U_1}', \widetilde{U_2}'] - [\widetilde{U_1}', \widetilde{U_2}']_H = [\widetilde U_1', \widetilde U_2']_V. \qedhere\end{aligned}$$ ◻

**Lemma 31**. * Let $G\curvearrowright(M\to B, \nabla, g)$ and $G\curvearrowright(M'\to B, \nabla', g')$ be two Riemannian fibre bundles with connection over the same base $B$ and with compatible $G$-action. Let $f\colon M\to M'$ be a $G$-equivariant geometric fibre homotopy equivalence. If $f^*$ and a geometric fibre homotopy inverse $g^*$ of $f^*$ are bounded as operators between $L^2\Omega^\bullet M'$ and $L^2\Omega^\bullet M$, then the two-parameter spectral density functions are dilatationally equivalent, that is, for $0\leq k\leq \dim(M)$, $$\mathcal{G}_k(M'\to B, \nabla') \sim \mathcal{G}_k(M\to B, f^*\nabla').$$ *

*Proof.* By assumption, the induced map $f^*$ is a bounded chain homotopy equivalence $L^2\Omega^\bullet M'\to L^2\Omega^\bullet M$ of Hilbert chain complexes, with bounded inverse $g^*$. Since $$\mathcal{G}_k(M'\to B, \nabla',g')(\mu,\nu) = \mathcal{F}_k(M',g'^{\mu,\nu})(1)$$ and in the same way $$\mathcal{G}_k(M\to B, f^*\nabla', g)(\mu,\nu) = \mathcal{F}_k(M,g^{\mu,\nu})(1)$$ for some[^12] $G$-invariant Riemannian metrics compatible with the connections, the statement follows from a proposition of M. Gromov and M. A. Shubin [@GroShu2 Prop.
4.1]: There exists $C(\mu,\nu)$ depending only on $\|f^*\|_{(M',g'^{\mu,\nu})\to (M,g^{\mu,\nu})}$ and $\|g^*\|_{(M',g'^{\mu,\nu})\to (M,g^{\mu,\nu})}$ with $$\begin{aligned} \mathcal{G}_k(M\to B, f^*\nabla', g)&(C(\mu,\nu)^{-1}\mu, C(\mu,\nu)^{-1}\nu) \\ &= \mathcal{F}_k(M,C(\mu,\nu)^{-1} g^{\mu,\nu})(1) \\ &= \mathcal{F}_k(M,g^{\mu,\nu})(C(\mu,\nu)^{-1}) \\ &\leq \mathcal{F}_k(M',g'^{\mu,\nu})(1) \\ &=\mathcal{G}_k(M'\to B, \nabla', g')(\mu, \nu) \\ &\leq \mathcal{F}_k(M,g^{\mu,\nu})(C(\mu,\nu)) \\ &= \mathcal{G}_k(M\to B, f^*\nabla', g)(C(\mu,\nu)\mu, C(\mu,\nu)\nu).\end{aligned}$$ Since for $f^*\colon \Omega^p(B,\{\Omega^q F'_\bullet\})\to \Omega^p(B,\{\Omega^q F_\bullet\})$ (and in the same way for $g^*$), $$\begin{aligned} \|f^*\|_{(M',g'^{\mu,\nu})\to (M,g^{\mu,\nu})} &= \sup_{0\neq\omega\in \Omega^p(B,\{\Omega^q F'_\bullet\}) } \frac{\|f^*\omega\|_{g^{\mu,\nu}}}{\|\omega\|_{g'^{\mu,\nu}}} \\ &= \sup_{0\neq\omega\in \Omega^p(B,\{\Omega^q F'_\bullet\}) } \frac{\mu^{-p} \nu^{-q} \cdot \|f^*\omega\|_{g}}{\mu^{-p} \nu^{-q} \cdot \|\omega\|_{g'}} \\ &= \sup_{0\neq\omega\in \Omega^p(B,\{\Omega^q F'_\bullet\}) } \frac{\|f^*\omega\|_{g}}{\|\omega\|_{g'}} = \|f^*\|_{(M',g')\to (M,g)},\end{aligned}$$ the norms of $f^*$ and $g^*$ are independent of $\mu,\nu$ and hence so is $C=C(\mu,\nu)$. Therefore, the claim follows from the inequalities above. ◻

Following the idea behind M. Gromov and M. A. Shubin's approach in [@GroShu] further, we can drop the restrictive requirement that $f^*$ and $g^*$ are bounded and obtain the desired first invariance theorem.

**Theorem 33**. * [\[Thm_2NSI_fibre_homot_invar\]]{#Thm_2NSI_fibre_homot_invar label="Thm_2NSI_fibre_homot_invar"} In the notation above, if there is a $G$-equivariant geometric fibre homotopy equivalence $f\colon M\to M'$ between $M\to B$ and $M'\to B$, then for $0\leq k\leq \dim(M)$, $$\mathcal{G}_k(M'\to B, \nabla') \sim \mathcal{G}_k(M\to B, f^*\nabla').$$ *

*Proof.* In the spirit of [@GroShu2 Thm 5.2], we show that for any geometric fibre homotopy equivalence $f\colon M\to M'$, we can construct a homotopy equivalence between the corresponding Hilbert chain complexes (which in particular is bounded). The main step here is to construct a submersive fibre homotopy equivalence $\widetilde f\colon M\times D^N\to M'$ from the thickened fibre bundle $F_\bullet\times D^N\to M\times D^N \to B$ to $F'_\bullet\to M'\to B$, where $D^N$ is a disk in $\mathbb{R}^N$. We consider the vertical bundle $VM'\to M'$ and its pullback $f^*VM'\to M$ along $f$. By the smooth Serre-Swan theorem[^13] there exists $N\in \mathbb{N}$ and an epimorphism $p_1$ of bundles over $M$, $$\begin{tikzcd}[ampersand replacement = \&] M\times \mathbb{R}^N\ar[r,"p_1"] \ar[d] \& f^*VM' \ar[d] \\ M \ar[r, equal] \& M.
\end{tikzcd}$$ This gives us the following commutative diagram, where $p_1$ is the epimorphism from above and $p_2$ is the canonical map of the pullback: $$\begin{tikzcd}[ampersand replacement = \&] M\times \mathbb{R}^N \ar[rr, twoheadrightarrow, "p_1"]\ar[dr] \& \& f^*VM' \ar[dl]\ar[rr, twoheadrightarrow, "p_2"] \& \& VM' \ar[d, "\pi"] \\ \& M \ar[rrr, "f"]\ar[d] \& \& \& M' \ar[d] \\ \& B \ar[rrr, equal] \& \& \& B \end{tikzcd}$$ After fixing any $\nabla'$-compatible Riemannian metric $g'$ and the corresponding fibrewise geodesic flows on $M'$, on each fibre $F'_b=\pi'^{-1}(b)$ of the bundle $M'\to B$ the exponential maps $\exp_b \colon V_bM'\to F_b'$ are defined and they glue to a map $$\exp_V \colon VM'\to F'_\bullet.$$ For each $b\in B$, there is $\varepsilon(b)>0$ such that the exponential map restricts to a diffeomorphism from $D_{\varepsilon(b)}^{V_bM'} = \left\lbrace v\in V_bM' \:\middle|\: g'_{V,b}(v,v)< \varepsilon(b)^2 \right\rbrace$ onto its image. This radius $\varepsilon(b)$ can be chosen to depend continuously on $b$, to be invariant under the cocompact action $G'\curvearrowright B$, and such that $$\varepsilon = \inf_{b\in B}\left\{ \varepsilon(b) \right\} = \min_{[b]\in {G'}\backslash{B}} \left\{\varepsilon([b])\right\}$$ exists and satisfies $\varepsilon>0$. Since $g'_{V,b}$ depends smoothly on $b\in B$, the set $$U = \bigcup_{b\in B} D_\varepsilon^{V_bM'}$$ defines a neighbourhood of the zero section $0\in \Gamma(VM')$. In particular, the map $\exp_V$ restricts to a diffeomorphism from $U$ onto its image in $M'$, $$\exp_V \colon U \xrightarrow{\ \cong\ } \exp_V(U).$$ Further, for each $b\in B$ we can find $\delta(b)>0$ depending continuously on $b$ such that the subset $\{b\}\times D_{\delta(b)}^N$ of the fibre over $b$ of $M\times \mathbb{R}^N\to M$ maps into $D_\varepsilon^{V_bM'}$ via the composition $p_2\circ p_1$.
Since $f$ preserves base points, this can be chosen invariantly under the cocompact action $G'\curvearrowright B$ and we can define $\delta = \min_{[b]\in G'\backslash B} \left\{\delta([b])\right\} > 0$. The image of $M\times D_\delta^N\subset M\times \mathbb{R}^N$ under $p_2\circ p_1$ is contained in $U$. Hence, the composition $$\widetilde{f}=\exp_V\circ \, p_2\circ p_1$$ defines a submersion from $M\times D_\delta^N$ into $M'$ (as a map over $\mathop{\mathrm{id}}_B$): $$\begin{tikzcd}[ampersand replacement=\&, row sep = 10] M\times \mathbb{R}^N \ar[d, phantom, "\cup" description] \& \& VM' \ar[d, phantom, "\cup" description] \& \\ M\times D_\delta^N \ar[rrrr, "\widetilde f" description, rounded corners, to path ={[pos=0.25] (\tikztostart.south) -| ([yshift=-0.5cm]\tikztostart.south) -| (\tikztotarget)\tikztonodes -- (\tikztotarget.south)}] \ar[r, twoheadrightarrow, "p_1"] \& f^*VM' \ar[r, twoheadrightarrow, "p_2"] \& U \ar[r, "\exp_V", "\cong"'] \& \exp_V(U) \ar[r, phantom, "\subset" description] \& M' \end{tikzcd}$$ Denote by $\iota\colon M\cong M\times \{0\} \hookrightarrow M\times D^N_\delta$ the inclusion as the zero section. Then the following diagram commutes: $$\begin{tikzcd}[ampersand replacement = \&] M \ar[r, "\iota"]\ar[d]\ar[rr, "f" description, rounded corners, to path ={[pos=0.25] (\tikztostart.north) -| ([yshift=.5cm]\tikztostart.north) -| (\tikztotarget.north)\tikztonodes -- (\tikztotarget)}] \& M\times D_\delta^N \ar[r, "\widetilde f"]\ar[d] \& M'\ar[d] \\ B \ar[r, equal] \& B \ar[r, equal] \& B \end{tikzcd}$$ Note that all maps are bundle maps over the identity $\mathop{\mathrm{id}}_B$. The cochain homotopy equivalences $L^2 \Omega^k(M) \simeq L^2 \Omega^k(M\times D_\delta^N)$ respect the direct sum decompositions.[^14] Following [@GroShu2 Thm. 5.2] further, the cochain homotopy equivalence $\widetilde f^*$ between $L^2 \Omega^\bullet M'$ and $L^2 \Omega^\bullet (M\times D^N_\delta)$ induced by the submersion $\widetilde f$ is bounded.
Since $f$ is a bundle map over $\mathop{\mathrm{id}}_B$, we even obtain bounded homotopy equivalences on each summand of the direct sum decomposition, $\Omega^p(B, \{\Omega^q F'_\bullet\})\to \Omega^p(B, \{\Omega^q F_\bullet\})$. The claim now follows from the previous lemma. ◻

## (Partial) Metric Invariance

We have seen that the two-parameter Novikov-Shubin numbers behave well if the connection is fixed. If we allow the connection to vary, we still obtain invariance properties if we scale the fibre at least as fast as the base, that is, on the parameter region $\{\nu \leq \mu\}$.

**Theorem 35**. * [\[Thm_2NSI_half_metric_indep\]]{#Thm_2NSI_half_metric_indep label="Thm_2NSI_half_metric_indep"} Let $G$ be a group and $M\to B$ be equipped with two pairs of connection and Riemannian metric such that $G\curvearrowright(M\to B,\nabla,g)$ and $G\curvearrowright(M\to B,\nabla',g')$ are Riemannian fibre bundles with connection and compatible $G$-action. Then for all $0\leq k\leq \dim(M)$ the two-parameter spectral density functions restricted to the subspace $\{\nu\leq\mu\}$ are dilatationally equivalent, $$\mathcal{G}_k(M,\nabla,g)|_{\{\nu\leq\mu\}}\sim \mathcal{G}_k(M,\nabla',g')|_{\{\nu\leq\mu\}}.$$ *

*Proof.* Consider the decompositions $$VM\oplus HM \quad \cong_\nabla \quad TM \quad \cong_{\nabla'} \quad VM \oplus H'M,$$ where the vertical bundle $VM = \ker (d\pi)$ is independent of the connection. The identity $$\mathop{\mathrm{id}}\colon (M,g)\to (M,g')$$ induces a map $d\mathop{\mathrm{id}}\colon TM\to TM$ decomposing into maps $d\mathop{\mathrm{id}}\colon VM\to VM$ and $d\mathop{\mathrm{id}}\colon HM\to VM\oplus H'M$, so vertical tangent vectors remain vertical, but horizontal tangent vectors can obtain a vertical component. This is captured in the following diagram. $$\begin{tikzcd}[ampersand replacement = \&, row sep = 30, column sep = 40] HM \ar[d, phantom, "\oplus" description] \ar[r, "d\mathop{\mathrm{id}}"' {yshift=-5pt, xshift = 0pt}] \ar[dr] \& H'M \ar[d, phantom, "\oplus" description] \\ VM \ar[d, phantom, "\cong_\nabla" description] \ar[r, "d\mathop{\mathrm{id}}"] \& VM \ar[d, phantom, "\cong_{\nabla'}" description] \\ TM \ar[d] \ar[r, "d\mathop{\mathrm{id}}"] \& TM \ar[d] \\ (M,g) \ar[r, "\mathop{\mathrm{id}}"] \& (M,g') \end{tikzcd}$$ For a form of pure degree $(p,q)$ with respect to the direct sum decomposition coming from the connection $\nabla'$, $$\omega_{p,q} \in \Omega^p(B,\{\Omega^q(F_\bullet)\}) \subset_{\nabla'} \Omega^k (M,g'),$$ its pullback $\mathop{\mathrm{id}}^* \omega_{p,q} \in \Omega^k (M,g)$ decomposes under the direct sum decomposition coming from the connection $\nabla$ as a sum, $$\mathop{\mathrm{id}}^* \omega_{p,q} = \sum_{r+s=k} \alpha_{r,s},$$ with $\alpha_{r,s}\in \Omega^r(B,\{\Omega^s(F_\bullet)\}) \subset_{\nabla} \Omega^k (M,g)$.
Since $$\mathop{\mathrm{id}}^* \omega(X_1,\dots, X_k) = \omega(d\mathop{\mathrm{id}}(X_1),\dots, d\mathop{\mathrm{id}}(X_k)),$$ in the $\nabla$-decomposition the $(r,s)$-summand $\alpha_{r,s}$ vanishes if ${r<p}$ or equivalently ${s>q}$. Hence $$\mathop{\mathrm{id}}^* \omega_{p,q} = \sum_{\substack{r+s=k \\ r\geq p\ \wedge\ s\leq q}} \alpha_{r,s}.$$ Therefore, $\|\omega_{p,q}\|_{g'^{\mu,\nu}} = \mu^{-p} \nu^{-q}\|\omega_{p,q}\|_{g'}$ and, since the summands $\Omega^r(B,\{\Omega^s(F_\bullet)\})$ are pairwise orthogonal, $$\begin{aligned} \|\mathop{\mathrm{id}}^* \omega_{p,q}\|^2_{g^{\mu,\nu}} &= \sum_{\substack{r+s=k \\ r\geq p\ \wedge\ s\leq q}} \|\alpha_{r,s}\|^2_{g^{\mu,\nu}} = \sum_{\substack{r+s=k \\ r\geq p\ \wedge\ s\leq q}} \mu^{-2r} \nu^{-2s} \|\alpha_{r,s}\|^2_{g} \\ &\!\!\overset{\nu\leq\mu} \leq \sum_{\substack{r+s=k \\ r\geq p\ \wedge\ s\leq q}} \mu^{-2p} \nu^{-2q} \|\alpha_{r,s}\|^2_{g} = \mu^{-2p} \nu^{-2q} \|\mathop{\mathrm{id}}^*\omega_{p,q}\|^2_g,\end{aligned}$$ where the middle inequality uses that $\mu^{-r}\nu^{-s} = \mu^{-p}\nu^{-q}\left(\nicefrac{\nu}{\mu}\right)^{q-s} \leq \mu^{-p}\nu^{-q}$ for $r\geq p$, $s\leq q$ and $r+s=p+q$. Consequently, $$\begin{aligned} \|\mathop{\mathrm{id}}^*|_{\Omega^p(B,\{\Omega^q(F_\bullet)\})}\|_{(M,g'^{\mu,\nu})\to (M,g^{\mu,\nu})} &= \sup_{\omega_{p,q}\in {\Omega^p(B,\{\Omega^q(F_\bullet)\})}} \frac{\|\mathop{\mathrm{id}}^*\omega_{p,q}\|_{g^{\mu,\nu}}}{\|\omega_{p,q}\|_{g'^{\mu,\nu}}} \\ &\leq \sup_{\omega_{p,q}\in {\Omega^p(B,\{\Omega^q(F_\bullet)\})}} \frac{\mu^{-p} \nu^{-q}\cdot \|\mathop{\mathrm{id}}^*\omega_{p,q}\|_{g}}{\mu^{-p} \nu^{-q}\cdot \|\omega_{p,q}\|_{g'}} \\ &= \|\mathop{\mathrm{id}}^*|_{\Omega^p(B,\{\Omega^q(F_\bullet)\})}\|_{(M,g')\to (M,g)}.\end{aligned}$$ Since the decomposition into the $\Omega^p(B,\{\Omega^q(F_\bullet)\})$ is orthogonal, it follows that $$\|\mathop{\mathrm{id}}^*\|_{(M,g'^{\mu,\nu})\to (M,g^{\mu,\nu})} \leq \|\mathop{\mathrm{id}}^*\|_{(M,g')\to (M,g)}.$$ The same argument holds if we consider the identity map as a map in the other direction, that is $\mathop{\mathrm{id}}\colon (M,g')\to (M,g)$.
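As an aside, the elementary scaling step above, $\mu^{-r}\nu^{-s}\leq \mu^{-p}\nu^{-q}$ whenever $r\geq p$, $r+s=p+q$ and $0<\nu\leq\mu$, can be spot-checked numerically. The following sketch is independent of the argument; the test ranges are arbitrary choices, not taken from the text.

```python
# Spot-check of the scaling inequality mu^{-r} nu^{-s} <= mu^{-p} nu^{-q}
# for r >= p, r + s = p + q and 0 < nu <= mu (arbitrary test ranges).
import random

random.seed(0)
violations = 0
for _ in range(10_000):
    mu = random.uniform(0.1, 10.0)
    nu = random.uniform(0.05, mu)      # enforce 0 < nu <= mu
    p = random.randint(0, 3)
    q = random.randint(0, 3)
    k = p + q
    for r in range(p, k + 1):          # r >= p forces s = k - r <= q
        s = k - r
        if mu ** (-r) * nu ** (-s) > mu ** (-p) * nu ** (-q) * (1 + 1e-9):
            violations += 1
assert violations == 0
```

The inequality holds exactly because the ratio of the two sides is $(\nicefrac{\nu}{\mu})^{q-s}\leq 1$; the check merely guards against sign or index slips.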
Let $$K=\max\left\{\|\mathop{\mathrm{id}}^*\|_{(M,g')\to (M,g)},\, \|\mathop{\mathrm{id}}^*\|_{(M,g)\to (M,g')}\right\}.$$ For any $\omega\in C^k(M,g'^{\mu,\nu})(1)$ with $\nu\leq\mu$ it follows, therefore, that $$\begin{aligned} \| d\mathop{\mathrm{id}}^*\omega \|_{g^{\mu,\nu}} &= \| \mathop{\mathrm{id}}^* d\omega\|_{g^{\mu,\nu}} \leq K \|d\omega\|_{g'^{\mu,\nu}} \leq K \|\omega\|_{g'^{\mu,\nu}} = K \|\mathop{\mathrm{id}}^* \omega\|_{g'^{\mu,\nu}} \leq K^2 \|\omega\|_{g^{\mu,\nu}} \end{aligned}$$ and similarly in the other direction. These inequalities imply that $$C^k(M,g^{\mu,\nu})(K^{-2}) \subset C^k(M,g'^{\mu,\nu})(1) \subset C^k(M,g^{\mu,\nu})(K^2).$$ Hence the spectral density functions are dilatationally equivalent and the claim follows. ◻

# Example: The Heisenberg Group

Let us consider the three-dimensional Heisenberg group $\mathbb H^3$ with its associated Lie algebra $\mathfrak{h}^3 = \langle X,Y,Z \:|\: [X,Y]=Z\rangle$ as a fibre bundle with fibre $\mathbb{R}$ corresponding to the central $Z$-direction and base $\mathbb{R}^2$ corresponding to the $X$- and $Y$-directions. A basis of left-invariant vector fields is given by the vector fields $$\begin{aligned} \vartheta_X &= \partial_X - \frac{1}{2} y \partial_Z, \qquad \vartheta_Y = \partial_Y + \frac{1}{2} x \partial_Z, \qquad \vartheta_Z = \partial_Z,\end{aligned}$$ where $x$ and $y$ denote coordinates in the base $\mathbb{R}^2 = \langle X,Y\rangle$. Requiring that $\vartheta_X, \vartheta_Y$ and $\vartheta_Z$ are orthonormal yields the standard metric $g$, with $VM = \langle \vartheta_Z\rangle$ and $HM = \langle \vartheta_X,\vartheta_Y\rangle$. We also fix a connection $\nabla$. The scaled metric $g^{\overline\mu,\overline\nu}$ is the metric for which $${\overline\mu}^{-1}\cdot\vartheta_X, \quad {\overline\mu}^{-1}\cdot\vartheta_Y \quad \text{ and } \quad {\overline\nu}^{-1}\cdot\vartheta_Z$$ form an orthonormal basis of $\mathfrak{h}^3$. Refining a computation of J. Lott [@Lott Prop.
52], we obtain the following values for the two-parameter Novikov-Shubin numbers. **Theorem 37**. * [\[T_result2ParamH3\]]{#T_result2ParamH3 label="T_result2ParamH3"} On $\mathfrak{h}^3$, by direct computation we obtain $$\begin{aligned} \alpha_0(\mathfrak{h}^3)(\lambda,\lambda^{1+\zeta}) &= 4+2\zeta \quad \text{for } -\nicefrac{1}{2}\leq \zeta, \\ \alpha_1(\mathfrak{h}^3)(\lambda,\lambda^{1+\zeta}) &= 2-2\zeta \quad \text{for } -\nicefrac{1}{2}< \zeta < 1,\end{aligned}$$ and, by Hodge duality, also $\alpha_2(\mathfrak{h}^3)(\lambda,\lambda^{1+\zeta}) = 4+2\zeta$ for $-\nicefrac{1}{2}\leq \zeta$. Compare also Figure [\[F_2PNSIForH3\]](#F_2PNSIForH3){reference-type="ref" reference="F_2PNSIForH3"}. * *Proof.* It was shown by J. Lott [@Lott Prop.
52] that in this setting of $\mathbb H^3$ with metric $g^{1,c}$, the heat kernel on functions is given by $$\begin{aligned} e^{-t\Delta_0}(0,0) = \frac{1}{4\pi^2}\frac{1}{ct^2} \int_{0}^\infty e^{-\frac{u^2}{c^2t}} \sinh(u)^{-1} u \,\mathrm{d}u.\end{aligned}$$ Classically, if $c$ is constant and we let $t\to \infty$, the Gaussian factor $e^{-\frac{u^2}{c^2t}}$ converges pointwise to the constant function $1$, and therefore, by monotone convergence, $$\lim_{t\to\infty} \int_{0}^\infty e^{-\frac{u^2}{c^2t}} \sinh(u)^{-1} u \,\mathrm{d}u = \int_0^\infty \sinh(u)^{-1} u \,\mathrm{d}u = \frac{\pi^2}{4}.$$ Hence, $e^{-t\Delta_0}(0,0)$ is in $\Theta (t^{-2})$ as $t\to \infty$ and, following M. Gromov and M. A. Shubin's work in [@GroShu2], we can therefore conclude that $\alpha_0(\mathbb H^3)=4$. If we let $c$ depend on $t$, the same argument remains true as long as $c(t)^2 t\to \infty$ as $t\to \infty$, showing that then $$e^{-t\Delta_0}(0,0)\in \Theta (c(t)^{-1}t^{-2}).$$ Therefore, with $c = t^\zeta$ and $\zeta > -1/2$, $$\alpha_0(\mathfrak{h}^3)(\lambda, \lambda^{1+\zeta}) = \alpha\left(\lambda\mapsto \mathcal{G}_{0}(\lambda, \lambda^{1+\zeta})\right) = 4+2\zeta.$$ Indeed, since for $\zeta = -\nicefrac{1}{2}$ the integral is a positive constant, $$0 < \int_{0}^\infty e^{-{u^2}} \sinh(u)^{-1} u \,\mathrm{d}u < \infty,$$ the argument holds also for $\zeta = -\nicefrac{1}{2}$; however, the integral converges to zero for $\zeta < -\nicefrac{1}{2}$, so that its asymptotic behaviour needs to be taken into account. The summand $2\zeta$ tells us that the scaling of the $Z$-direction contributes quadratically to the spectral density. This fits with the computation of $\alpha_0(\mathbb H^3)=N(\mathbb H^3)$ via the growth rate $N(\mathbb H^3)$ (using the result of N. Th.
Varopoulos [@Varop]) since by the Bass-Guivarc'h formula, $$\begin{aligned} N(\mathbb H^3) &= \mathop{\mathrm{rk}}(\langle X,Y\rangle) + 2\cdot \mathop{\mathrm{rk}}(\langle Z\rangle) = 2 + 2 = 4,\end{aligned}$$ so we also see a quadratic contribution from the central $Z$-direction in this picture. On $1$-forms, J. Lott computes the heat operator as $$\begin{aligned} e^{-t\Delta_1}(0,0) &= \frac{1}{2\pi^2}\frac{1}{c} \left[ I_1^+ + I_1^- + I_2 + I_3 \right], \end{aligned}$$ where the summands $I_\bullet$ are the following integral expressions: $$\begin{aligned} I_1^\pm &= \int_0^\infty \sum\limits_{m=1}^\infty e^{-t\left[ (2m+1)k+ \frac{k^2}{c^2}+\frac{c^2}{2} \pm c\sqrt{(2m+1)k+\frac{k^2}{c^2}+\frac{c^2}{4}}\right]} k\,\mathrm{d}k, \\ I_2 &= \int_0^\infty e^{-\frac{k^2}{c^2}t} k\,\mathrm{d}k, \\ I_3 &= \int_0^\infty e^{-\left(2k+\frac{k^2}{c^2}+c^2\right)t} k\,\mathrm{d}k.\end{aligned}$$ J. Lott estimates these integrals in the case where $c$ is constant in order to compute the Novikov-Shubin invariant $\alpha_1(\mathbb H^3)=2$. We will now adapt these computations to the case where $c=c(t)$ is a function of $t$, in particular a power $c(t)=t^\zeta$.

**Lemma 39**. *The integrals $I_2$ and $I_3$ evaluate to $$\begin{aligned} I_2 &= \frac{1}{2}\frac{c^2}{t}, \\ I_3 &= \frac{1}{2}\frac{c^2}{t}e^{-c^2 t} - \frac{\sqrt{\pi}}{2}\frac{c^3}{\sqrt{t}} \cdot \mathrm{erfc}\left(c\sqrt{t}\right),\end{aligned}$$ where $\mathrm{erfc}$ denotes the complementary Gauss error function. *

*Proof.* The integral $I_2$ can be directly evaluated by substituting $u = k^2$ as $$\begin{aligned} I_2 &= \int_0^\infty e^{-\frac{t}{c^2}\cdot k^2} k \,\mathrm{d}k = \frac{1}{2} \int_0^\infty e^{\frac{-t}{c^2} u} \,\mathrm{d}u = \frac{1}{2}\frac{c^2}{t}.\end{aligned}$$ Substituting $u=(\nicefrac{k}{c}+c)^2t$ and $v=(\nicefrac{k}{c}+c)\sqrt{t}$, we can compute $$\begin{aligned} I_3 &= \int_0^\infty e^{-\left(2k+\frac{k^2}{c^2}+c^2\right)t} k \,\mathrm{d}k \\ &= \int_0^\infty e^{-(\frac{k}{c}+c)^2t} k \,\mathrm{d}k \\ &= \frac{c^2}{2t}\int_0^\infty e^{-(\frac{k}{c}+c)^2t} \left(\frac{2t}{c^2}k + 2t\right)\,\mathrm{d}k - c^2 \int_0^\infty e^{-(\frac{k}{c}+c)^2t} \,\mathrm{d}k \\ &= \frac{c^2}{2t}\int_{c^2t}^\infty e^{-u} \,\mathrm{d}u - \frac{c^3}{\sqrt{t}} \int_{c\sqrt{t}}^\infty e^{-{v^2}} \,\mathrm{d}v \\ &= \frac{c^2}{2t}e^{-c^2 t} - \frac{\sqrt\pi\, c^3}{2\sqrt{t}} \cdot \mathrm{erfc}\left(c\sqrt{t}\right). \qedhere\end{aligned}$$ ◻

**Lemma 41**. * By substitution, $$I_1^\pm = c^4 \int_0^\infty \left(v\mp \frac{1}{2}\right) e^{-tc^2v^2} \sum_{m=1}^\infty \left[ 1 - \left(\sqrt{1+\frac{(v\mp \frac{1}{2})^2 - \frac{1}{4}}{(m+\frac{1}{2})^2}}\right)^{-1} \right] \,\mathrm{d}v.$$ *

*Proof.* Following J.
Lott's computations, we substitute in the same way $$\begin{aligned} u_\pm &= \sqrt{(2m+1)k + \frac{k^2}{c^2} + \frac{c^2}{4}} \pm \frac{c}{2} \\ u_\pm^2 &= (2m+1)k + \frac{k^2}{c^2} + \frac{c^2}{2} \pm c\sqrt{(2m+1)k+\frac{k^2}{c^2} + \frac{c^2}{4}} \\ k_\pm &= c\sqrt{u_\pm^2 \mp u_\pm c + c^2(m+\nicefrac{1}{2})^2} - \left(m+\frac{1}{2}\right)c^2 \\ \frac{\,\mathrm{d}k_\pm}{\,\mathrm{d}u_\pm} &= \frac{c(u_\pm \mp\nicefrac{c}{2})}{\sqrt{u_\pm^2 \mp u_\pm c + c^2(m+\nicefrac{1}{2})^2}}.\end{aligned}$$ Omitting the index $\pm$ in notation[^15], we use this with $v=\nicefrac{u}{c}$ to obtain $$\begin{aligned} I_1^\pm &= \int_0^\infty \sum\limits_{m=1}^\infty e^{-t\left[ (2m+1)k+ \frac{k^2}{c^2}+\frac{c^2}{2} \pm c\sqrt{(2m+1)k+\frac{k^2}{c^2}+\frac{c^2}{4}}\right]} k \,\mathrm{d}k \\ &=c^2 \int_0^\infty \left(u\mp \frac{c}{2}\right) e^{-tu^2} \sum_{m=1}^\infty \frac{\sqrt{u^2 \mp cu + c^2(m+\frac{1}{2})^2} - c(m+\frac{1}{2})}{\sqrt{u^2 \mp cu + c^2(m+\frac{1}{2})^2}} \,\mathrm{d}u \\ &= c^3\int_0^\infty \left(\frac{u}{c}\mp \frac{1}{2}\right) e^{-t\frac{u^2}{c^2}c^2} \sum_{m=1}^\infty \frac{\sqrt{\frac{u^2}{c^2} \mp \frac{u}{c} + (m+\frac{1}{2})^2} - (m+\frac{1}{2})}{\sqrt{\frac{u^2}{c^2} \mp \frac{u}{c} + (m+\frac{1}{2})^2}} \,\mathrm{d}u \\ &= c^4 \int_0^\infty \left(v\mp \frac{1}{2}\right) e^{-tc^2v^2} \sum_{m=1}^\infty \frac{\sqrt{v^2 \mp v + (m+\frac{1}{2})^2} - (m+\frac{1}{2})}{\sqrt{v^2 \mp v + (m+\frac{1}{2})^2}} \,\mathrm{d}v \\ &= c^4 \int_0^\infty \left(v\mp \frac{1}{2}\right) e^{-tc^2v^2} \sum_{m=1}^\infty \left[ 1 - \left(\sqrt{1+\frac{(v\mp \frac{1}{2})^2 - \frac{1}{4}}{(m+\frac{1}{2})^2}}\right)^{-1} \right] \,\mathrm{d}v. \qedhere\end{aligned}$$ ◻

**Lemma 43**. * We can estimate $I_1^-$ by $$\begin{aligned} \frac{1}{5}\left( \frac{\sqrt \pi}{4} \frac{c}{\sqrt{t^3}} + \frac{1}{4}\frac{c^2}{t}\right) \leq I_1^- \leq \frac{\sqrt \pi}{4} \frac{c}{\sqrt{t^3}} + \frac{1}{4}\frac{c^2}{t}.\end{aligned}$$ *

*Proof.* Consider the function $f\colon \mathbb{R}_{\geq 0}\to \mathbb{R}$ describing the summands, $$f(x) = 1 - \left(\sqrt{1+\frac{(v+ \frac{1}{2})^2 - \frac{1}{4}}{(x+\frac{1}{2})^2}}\right)^{-1}.$$ This function is positive and monotonically decreasing, with $f(0) = 1 - (2v+ 1)^{-1}$ and $\lim_{x\to\infty} f(x) = 0$. We can therefore bound the sum over the $f(n)$ by integrals, $$\begin{aligned} \int_1^\infty f(x) \,\mathrm{d}x \leq \sum_{n=1}^\infty f(n) \leq \int_0^\infty f(x) \,\mathrm{d}x.\end{aligned}$$ To compute these integrals, let $w = (v+\frac{1}{2})^2 - \frac{1}{4}$, then $$\begin{aligned} F(x) = \int f(x) \,\mathrm{d}x &= \int 1 - \left(\sqrt{1+\frac{w}{(x+\frac{1}{2})^2}}\right)^{-1} \,\mathrm{d}x \\ &= x - \left(x+\frac{1}{2}\right)\sqrt{1 + \frac{w}{(x+\frac{1}{2})^2}} + \mathrm{const}\end{aligned}$$ and we can compute the values $$\begin{aligned} F(0) &= -\left(v+\nicefrac{1}{2}\right) + \mathrm{const} ,\qquad F(1) = 1-\sqrt{(v+\nicefrac{1}{2})^2 + 2} + \mathrm{const}\end{aligned}$$ as well as $\lim_{x\to\infty} F(x) = -\nicefrac{1}{2}+ \mathrm{const}$. Hence, we get bounds on the sum by $$\begin{aligned} \sqrt{\left(v+\frac{1}{2}\right)^2 + 2}-\frac{3}{2} \leq \sum_{m=1}^\infty f(m) \leq v.\end{aligned}$$ For the lower bound, observe that $g\colon \mathbb{R}_{\geq 0}\to \mathbb{R}$, $v\mapsto \sqrt{\left(v+\nicefrac{1}{2}\right)^2 + 2}-\nicefrac{3}{2}$ satisfies $g(0) =0$, $$g'(v) = \frac{v+\frac{1}{2}}{\sqrt{\left(v+\frac{1}{2}\right)^2+2}}, \quad g''(v) = \frac{2}{((v+\frac{1}{2})^2+2)^{3/2}},$$ so $g''>0$, meaning that $g'$ is strictly monotonically increasing and attains its minimum at $g'(0) = \nicefrac{1}{3}$. This implies $g(v) \geq \nicefrac{v}{3} \geq \nicefrac{v}{5}$. Hence we get the bounds $$\begin{aligned} \frac{v}{5} \leq \sum_{m=1}^\infty f(m) \leq v.\end{aligned}$$ Using these bounds, we get bounds on $I_1^-$ by evaluating $$\begin{aligned} c^4 \int_0^\infty \left(v+\frac{1}{2}\right) e^{-tc^2v^2}v \,\mathrm{d}v &= c^4 \int_0^\infty v^2 e^{-tc^2v^2} \,\mathrm{d}v + \frac{c^4}{2} \int_0^\infty v e^{-tc^2v^2} \,\mathrm{d}v.\end{aligned}$$ By partial integration and with $\kappa = c v \sqrt t$, the first summand is given by $$\begin{aligned} c^4\int_0^\infty v^2 e^{-tc^2v^2} \,\mathrm{d}v &= c^2 \left[-\frac{ve^{-tc^2v^2}}{2t}\right]_{v=0}^\infty + \frac{c^2}{2t} \int_0^\infty e^{-tc^2v^2} \,\mathrm{d}v \\ &= 0 + \frac{c}{2\sqrt{t^3}} \int_0^\infty e^{-\kappa^2} \,\mathrm{d}\kappa = \frac{\sqrt \pi c}{4\sqrt{t^3}}\end{aligned}$$ and with $\xi = tc^2v^2$ the second summand is $$\begin{aligned} \frac{c^4}{2}\int_0^\infty v e^{-tc^2v^2} \,\mathrm{d}v &= \frac{c^2}{4t}\int_0^\infty e^{-\xi} \,\mathrm{d}\xi = \frac{c^2}{4t}.\end{aligned}$$ Therefore, $$\begin{aligned} \frac{1}{5}\left( \frac{\sqrt \pi c}{4\sqrt{t^3}} + \frac{c^2}{4t}\right) &\leq I_1^- \leq \frac{\sqrt \pi c}{4\sqrt{t^3}} + \frac{c^2}{4t}. \qedhere\end{aligned}$$ ◻

**Lemma 45**.
*Let $I_4$ be the part of $I_1^+$ starting at $1$, that is $$I_4 = c^4 \int_1^\infty \left(v- \frac{1}{2}\right) e^{-tc^2v^2} \sum_{m=1}^\infty \left[ 1 - \left(\sqrt{1+\frac{(v- \frac{1}{2})^2 - \frac{1}{4}}{(m+\frac{1}{2})^2}}\right)^{-1} \right] \,\mathrm{d}v.$$ Then $$\begin{aligned} &\frac{1}{5}\left[-\frac{c^2}{4t}e^{-tc^2}+ \frac{\sqrt\pi}{4}\left(\frac{c}{\sqrt{t^3}} + \frac{c^3}{\sqrt t}\right) \mathrm{erfc}(c\sqrt{t})\right] \\ &\quad \leq I_4 \leq \left[-\frac{c^2}{4t}e^{-tc^2}+ \frac{\sqrt\pi}{4}\left(\frac{c}{\sqrt{t^3}} + \frac{c^3}{\sqrt t}\right) \mathrm{erfc}(c\sqrt{t})\right]. \end{aligned}$$ *

*Proof.* As for $I_1^-$, in the case of $I_1^+$ we consider $$f(x) = 1 - \left(\sqrt{1+\frac{(v - \frac{1}{2})^2 - \frac{1}{4}}{(x+\frac{1}{2})^2}}\right)^{-1}.$$ If $v>1$, the function $f$ is again monotonically decreasing and, arguing as in the previous proof, we can estimate $$\frac{v-1}{5} \leq \sqrt{\left(v-\frac{1}{2}\right)^2 + 2} - \frac{3}{2} \leq \sum_{m=1}^\infty f(m) \leq v-1.$$ Therefore, we can bound the $(v>1)$-part $I_4$ of $I_1^+$ by evaluating $$\begin{aligned} \widetilde{I_4} &= c^4 \int_1^\infty \left(v-\frac{1}{2}\right) e^{-tc^2v^2}(v-1) \,\mathrm{d}v \\ &= c^4 \int_1^\infty \left( v^2-\frac{3}{2}v+\frac{1}{2} \right)e^{-tc^2v^2} \,\mathrm{d}v \\ &= c^4 \int_1^\infty v^2 e^{-tc^2v^2} \,\mathrm{d}v - \left(\frac{3}{2}c^4\right) \int_1^\infty v e^{-tc^2v^2} \,\mathrm{d}v +\frac{c^4}{2} \int_1^\infty e^{-tc^2v^2} \,\mathrm{d}v \\ &= c^2 \left[-\frac{ve^{-tc^2v^2}}{2t}\right]_{v=1}^\infty + \frac{c}{2\sqrt{t^3}} \int_{c\sqrt{t}}^\infty e^{-\kappa^2} \,\mathrm{d}\kappa - \left(\frac{3c^2}{4t}\right) \int_{tc^2}^\infty e^{-\xi} \,\mathrm{d}\xi + \frac{c^3}{2\sqrt{t}} \int_{c\sqrt{t}}^\infty e^{-\kappa^2} \,\mathrm{d}\kappa \\ &= \frac{c^2 e^{-tc^2}}{2t} + \frac{\sqrt \pi c}{4\sqrt{t^3}} \mathrm{erfc}(c\sqrt{t}) - \left(\frac{3c^2}{4t}\right)e^{-tc^2} + \frac{\sqrt{\pi} c^3}{4\sqrt{t}}\mathrm{erfc}(c\sqrt{t}) \\ &= -\frac{c^2}{4t}e^{-tc^2}+ \frac{\sqrt\pi}{4}\left(\frac{c}{\sqrt{t^3}} + \frac{c^3}{\sqrt{t}}\right) \mathrm{erfc}(c\sqrt{t}) \end{aligned}$$ with $\nicefrac{1}{5}\cdot \widetilde{I_4} \leq I_4 \leq \widetilde{I_4}$. ◻

It remains to estimate $$I_5 = c^4 \int_0^1 \left(v- \frac{1}{2}\right) e^{-tc^2v^2} \sum_{m=1}^\infty \left[ 1 - \left(\sqrt{1+\frac{(v- \frac{1}{2})^2 - \frac{1}{4}}{(m+\frac{1}{2})^2}}\right)^{-1} \right] \,\mathrm{d}v.$$ **Lemma 47**.
* There is some constant $-\infty<-K<0$ such that $$-K \left(\frac{1}{2}\frac{c^2}{t}e^{-tc^2} - \frac{\sqrt{\pi}}{4}\frac{c^3}{\sqrt{t}}\mathrm{erfc}(c\sqrt{t})\right) \leq I_5 \leq 0.$$ *

*Proof.* Note that the summands are non-positive and $$\begin{aligned} \left[ 1 - \left(\sqrt{1-\frac{1}{4(m+\frac{1}{2})^2}}\right)^{-1} \right] \leq \left[ 1 - \left(\sqrt{1+\frac{(v- \frac{1}{2})^2 - \frac{1}{4}}{(m+\frac{1}{2})^2}}\right)^{-1} \right] \leq 0\end{aligned}$$ so that $$\begin{aligned} \sum_{m=1}^\infty \left[ 1 - \left(\sqrt{1-\frac{1}{4(m+\frac{1}{2})^2}}\right)^{-1} \right] \cdot c^4 \int_0^1 \left(v- \frac{1}{2}\right) e^{-tc^2v^2} \,\mathrm{d}v \leq I_5 \leq 0.\end{aligned}$$ The sum converges to some constant $-\infty < -K < 0$ while $$\begin{aligned} c^4 \int_0^1 \left(v- \frac{1}{2}\right) e^{-tc^2v^2} \,\mathrm{d}v &= \frac{c^2}{2t} \int_0^{tc^2} e^{-\xi} \,\mathrm{d}\xi - \frac{c^3}{2\sqrt{t}}\int_0^{c\sqrt{t}}e^{-\kappa^2} \,\mathrm{d}\kappa \\ &= \frac{c^2}{2t} - \frac{c^2}{2t}e^{-tc^2} - \frac{\sqrt{\pi} c^3}{4\sqrt{t}} + \frac{\sqrt{\pi} c^3}{4\sqrt{t}}\mathrm{erfc}(c\sqrt{t}). \qedhere\end{aligned}$$ ◻

**Corollary 49**. * If $c = t^\zeta$ for $\zeta > -\nicefrac{1}{2}$, then $$e^{-t\Delta_1}(0,0) \sim \frac{c}{t} = \frac{1}{t^{1-\zeta}}$$ as $t\to\infty$. In particular, for $-\nicefrac{1}{2}< \zeta < 1$, $$\begin{aligned} \alpha\left(\lambda \mapsto \mathcal{G}_{1} (\lambda, \lambda^{1+\zeta})\right) = 2-2\zeta.\end{aligned}$$ *

*Proof.* The assumption $\zeta>-\nicefrac{1}{2}$ implies $c^2t\xrightarrow{t\to\infty}\infty$, so both $e^{-tc^2}$ and $\mathrm{erfc}(c\sqrt{t})$ decay exponentially. By the previous computations, $$e^{-t\Delta_1}(0,0) \sim \frac{1}{c} \left[I_1^+ + I_1^- + I_2 + I_3\right] \sim \frac{c}{t} + \frac{1}{t^{\nicefrac{3}{2}}}$$ as $t\to\infty$. The assumption $\zeta > -\nicefrac{1}{2}$ implies $t^{-\nicefrac{3}{2}}\in \mathcal{O}(\nicefrac{c}{t})$. In particular, since $\nicefrac{c}{t} = t^{\zeta-1}$, this decays to zero as $t\to\infty$ for $\zeta < 1$. ◻

This concludes the computation of the asymptotics for $\alpha_\bullet(\mathfrak{h}^3)(\lambda, \lambda^{1+\zeta})$. ◻

[^1]: This work is part of the author's doctoral thesis at the University of Göttingen.

[^2]: For all $\gamma\in G$ and $x\in M$ we have $\pi(\gamma x) = \varphi(\gamma)\pi(x)$. In particular, $\ker(\varphi)$ acts on each fibre $F'_b$.

[^4]: By abuse of notation, $\lambda$ denotes the function $\mathop{\mathrm{id}}\colon \lambda\mapsto \lambda$ or, more generally, $\lambda^c$ the function $\lambda\mapsto \lambda^c$.

[^6]: This is not necessary but reduces the length of notation for this example considerably. One can proceed just as in the cited source by W. Lück even if the $L^2$-Betti numbers do not vanish.
[^8]: *The right-hand side is understood in the sense of A. Fomenko and D. Fuchs [@FomenkoFuchs Lec. 22.2].*

[^10]: This is not a double complex in general as there is the diagonal $d^{2,-1}$-map. If we can choose a flat connection on $M$, then $d^{2,-1}$ vanishes and this is a true double complex. In terms of objects, this may be viewed as the zeroth page of the Serre spectral sequence of $F_\bullet\to M\to B$.

[^11]: Coming from leaving out two arguments in two different orders.

[^12]: By Theorem [\[Thm_2NSI_Metric_fixed_conn_invar\]](#Thm_2NSI_Metric_fixed_conn_invar){reference-type="ref" reference="Thm_2NSI_Metric_fixed_conn_invar"}, the dilatational equivalence classes of these spectral density functions are independent of this choice.

[^13]: The original Serre-Swan theorem [@Swan Lem. 5] holds for compact topological manifolds. It has since been shown that in the smooth case it holds also for non-compact manifolds, see for example J. Nestruev's book [@Nestruev Sec. 12.33] or Section 11.33 in the first edition. Here, we want the fibre bundle to be compatible with the action of $G'$ on $B$, so that we may use the Serre-Swan theorem over the compact quotient $f^*V({\left.\raisebox{-.15em}{$G$}\middle\backslash\raisebox{.05em}{$M'$}\right.})\to {\left.\raisebox{-.15em}{$G$}\middle\backslash\raisebox{.05em}{$M$}\right.}$ and lift the bundle ${\left.\raisebox{-.15em}{$G$}\middle\backslash\raisebox{.05em}{$M$}\right.} \times \mathbb{R}^N \to {\left.\raisebox{-.15em}{$G$}\middle\backslash\raisebox{.05em}{$M$}\right.}$ to a bundle $M\times \mathbb{R}^N\to M$ compatible with the group action. This is possible since $f$ is $G$-equivariant.

[^14]: These homotopy equivalences are explicitly constructed in [@GroShu2 Lem.
5.1]: Let $I=[0,1]$ and $p\colon M\times I \to M$ be the natural projection and let $i_t\colon M\to M\times I$ for $t\in I$ be that map $x\mapsto (x,t)$. Then $p^*\colon L^2\Omega^k M \to L^2 \Omega^k (M\times I)$ is a homotopy equivalence with inverse $J\colon L^2\Omega(M\times I) \to L^2\Omega^k M$, $J\omega = \int_0^1 i_t^*\omega dt$. Using this and the fact that $I^N\simeq D_\delta^N$ by Lipschitz maps gives the needed homotopy equivalences. Here, we consider $M\times I$ as a bundle $M\times I\to B$ with fibres $F_b\times I$ over $b\in B$. [^15]: For $I_1^+$, the index $+$ is to be used and for $I_1^-$ the index $-$ is to be used.
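The closed form obtained for $\widetilde{I_4}$ in the proof of Lemma 46 can be sanity-checked numerically against direct quadrature; a minimal Python sketch (the cutoff `upper`, the grid size `n` and the test parameters are illustrative choices, not taken from the text):

```python
import math

def i4_closed(c, t):
    """Closed form from Lemma 46:
    -c^2/(4t) e^{-t c^2} + (sqrt(pi)/4)(c/t^{3/2} + c^3/sqrt(t)) erfc(c sqrt(t))."""
    return (-c**2 / (4.0 * t) * math.exp(-t * c**2)
            + math.sqrt(math.pi) / 4.0
            * (c / t**1.5 + c**3 / math.sqrt(t))
            * math.erfc(c * math.sqrt(t)))

def i4_numeric(c, t, upper=12.0, n=200_000):
    """Trapezoidal quadrature of c^4 * int_1^upper (v - 1/2)(v - 1) e^{-t c^2 v^2} dv.
    The tail beyond `upper` is negligible once t*c^2 is of order 1."""
    h = (upper - 1.0) / n
    f = lambda v: (v - 0.5) * (v - 1.0) * math.exp(-t * c**2 * v**2)
    s = 0.5 * (f(1.0) + f(upper)) + sum(f(1.0 + k * h) for k in range(1, n))
    return c**4 * h * s
```

For moderate values of $tc^2$ the two evaluations agree to high accuracy.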
{ "id": "2310.00969", "title": "Two-Parameter Novikov-Shubin Invariants for Fibre Bundles", "authors": "Tim H\\\"opfner", "categories": "math.AT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We develop a variational principle for mean dimension with potential of $\mathbb{R}^d$-actions. We prove that mean dimension with potential is bounded from above by the supremum of the sum of rate distortion dimension and a potential term. A basic strategy of the proof is the same as the case of $\mathbb{Z}$-actions. However measure theoretic details are more involved because $\mathbb{R}^d$ is a continuous group. We also establish several basic properties of metric mean dimension with potential and mean Hausdorff dimension with potential for $\mathbb{R}^d$-actions. address: Department of Mathematics, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan author: - Masaki Tsukamoto title: "Variational principle for mean dimension with potential of $\\mathbb{R}^d$-actions: I" --- [^1] # Introduction {#section: introduction} ## Background: the case of $\mathbb{Z}$-actions {#subsection: background} The purpose of this paper is to develop a theory of variational principle for mean dimension with potential of $\mathbb{R}^d$-actions. First we review the theory already established in the case of $\mathbb{Z}$-actions. Mean dimension is a topological invariant of dynamical systems introduced by Gromov [@Gromov] at the end of the last century. It is the number of parameters *per unit time* for describing given dynamical systems. Mean dimension has several applications to topological dynamics, most notably in the embedding problem of dynamical systems [@Lindenstrauss--Weiss; @Lindenstrauss; @Gutman--Tsukamoto; @embedding; @Gutman--Qiao--Tsukamoto]. Lindenstrauss and the author [@Lindenstrauss--Tsukamoto; @IEEE; @Lindenstrauss--Tsukamoto; @double; @Tsukamoto; @potential] began to develop the variational principle in mean dimension theory. Let $\mathcal{X}$ be a compact metrizable space, and let $T\colon \mathcal{X}\to \mathcal{X}$ be a homeomorphism of $\mathcal{X}$. 
The classical variational principle [@Goodwyn; @Dinaburg; @Goodman] states that the topological entropy $h_{\mathrm{top}}(T)$ is equal to the supremum of the Kolmogorov--Sinai entropy $h_\mu(T)$ over all invariant probability measures $\mu$: $$\label{eq: variational principle for entropy} h_{\mathrm{top}}(T) = \sup_{\mu\in \mathscr{M}^T(\mathcal{X})} h_{\mu}(T),$$ where $\mathscr{M}^T(\mathcal{X})$ denotes the set of all $T$-invariant Borel probability measures on $\mathcal{X}$. Ruelle [@Ruelle] and then Walters [@Walters] generalized ([\[eq: variational principle for entropy\]](#eq: variational principle for entropy){reference-type="ref" reference="eq: variational principle for entropy"}) to pressure: Let $\varphi \colon \mathcal{X} \to \mathbb{R}$ be a continuous function, and we denote by $P_T(\varphi)$ the topological pressure of $(\mathcal{X}, T, \varphi)$. Then $$\label{eq: variational principle for pressure} P_T(\varphi) = \sup_{\mu\in \mathscr{M}^T(\mathcal{X})}\left(h_\mu(T) + \int_{\mathcal{X}} \varphi \, d\mu\right).$$ In the classical variational principles ([\[eq: variational principle for entropy\]](#eq: variational principle for entropy){reference-type="ref" reference="eq: variational principle for entropy"}) and ([\[eq: variational principle for pressure\]](#eq: variational principle for pressure){reference-type="ref" reference="eq: variational principle for pressure"}), the quantities $h_{\mathrm{top}}(T)$ and $P_T(\varphi)$ in the left-hand sides are topological invariants of dynamical systems. The Kolmogorov--Sinai entropy in the right-hand side is an information theoretic quantity.
Therefore ([\[eq: variational principle for entropy\]](#eq: variational principle for entropy){reference-type="ref" reference="eq: variational principle for entropy"}) and ([\[eq: variational principle for pressure\]](#eq: variational principle for pressure){reference-type="ref" reference="eq: variational principle for pressure"}) connect topological dynamics to information theory. Lindenstrauss and the author tried to find an analogous structure in mean dimension theory. (See also the paper of Gutman--Śpiewak [@Gutman--Spiewak] for a connection between mean dimension and information theory.) In the papers [@Lindenstrauss--Tsukamoto; @IEEE; @Lindenstrauss--Tsukamoto; @double] they found that *rate distortion theory* provides a fruitful framework for the problem. This is a branch of information theory studying *lossy* data compression methods under a distortion constraint. Let $T:\mathcal{X}\to \mathcal{X}$ be a homeomorphism on a compact metrizable space $\mathcal{X}$ as above. We denote the mean dimension of $(\mathcal{X}, T)$ by $\mathrm{mdim}(\mathcal{X}, T)$. We would like to connect it to some information theoretic quantity. We define $\mathscr{D}(\mathcal{X})$ as the set of all metrics (distance functions) on $\mathcal{X}$ compatible with the given topology. Let $\mathbf{d}\in \mathscr{D}(\mathcal{X})$ and $\mu\in \mathscr{M}^T(\mathcal{X})$. We randomly choose a point $x\in \mathcal{X}$ according to the distribution $\mu$ and consider the orbit $\{T^n x\}_{n\in \mathbb{Z}}$. For $\varepsilon>0$, we define the *rate distortion function* $R(\mathbf{d}, \mu, \varepsilon)$ as the minimum number of bits per unit time for describing $\{T^n x\}_{n\in \mathbb{Z}}$ with average distortion bounded by $\varepsilon$ with respect to $\mathbf{d}$. See §[2.3](#subsection: rate distortion theory){reference-type="ref" reference="subsection: rate distortion theory"} for the precise definition of $R(\mathbf{d},\mu,\varepsilon)$ in the case of $\mathbb{R}^d$-actions.
We define the **upper and lower rate distortion dimensions** by $$\overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \limsup_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)},\quad \underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \liminf_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)}.$$ Rate distortion dimension was first introduced by Kawabata--Dembo [@Kawabata--Dembo]. Lindenstrauss and the author [@Lindenstrauss--Tsukamoto; @double Corollary 3.13] proved that $$\label{eq: variational principle for mean dimension} \mathrm{mdim}(\mathcal{X}, T) \leq \sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu)$$ for any metric $\mathbf{d}$ on $\mathcal{X}$ compatible with the given topology. Moreover they proved that if $(\mathcal{X}, T)$ is a free minimal dynamical system then [@Lindenstrauss--Tsukamoto; @double Theorem 1.1] $$\label{eq: double variational principle for mean dimension} \begin{split} \mathrm{mdim}(\mathcal{X}, T) &= \min_{\mathbf{d}\in \mathscr{D}(\mathcal{X})}\left(\sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu)\right) \\ &= \min_{\mathbf{d}\in \mathscr{D}(\mathcal{X})}\left(\sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu)\right). \end{split}$$ They called this "double variational principle" because it involves a minimax problem with respect to the *two* variables $\mathbf{d}$ and $\mu$. We conjecture that ([\[eq: double variational principle for mean dimension\]](#eq: double variational principle for mean dimension){reference-type="ref" reference="eq: double variational principle for mean dimension"}) holds for all dynamical systems without any additional assumption.
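To see what the definition measures: if a rate distortion function happens to grow like $R(\mathbf{d},\mu,\varepsilon) = D\log(1/\varepsilon) + C$, then both rate distortion dimensions equal $D$. A small numerical sketch (the linear model for $R$ and the constants $D$, $C$ are purely illustrative assumptions, not a claim about any particular system):

```python
import math

def rdim_ratios(rate, eps_list):
    """Evaluate R(eps) / log2(1/eps) along a sequence eps -> 0; the limsup/liminf
    of these ratios are the upper/lower rate distortion dimensions."""
    return [rate(e) / math.log2(1.0 / e) for e in eps_list]

# Toy rate function R(eps) = D*log2(1/eps) + C; both dimensions then equal D.
D, C = 1.5, 7.0
rate = lambda e: D * math.log2(1.0 / e) + C
ratios = rdim_ratios(rate, [2.0 ** (-k) for k in (10, 20, 40, 80)])
```

The ratios decrease monotonically towards $D$ as $\varepsilon\to 0$; the additive constant $C$ is washed out by the $\log(1/\varepsilon)$ normalization.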
The author [@Tsukamoto; @potential] generalized ([\[eq: variational principle for mean dimension\]](#eq: variational principle for mean dimension){reference-type="ref" reference="eq: variational principle for mean dimension"}) and ([\[eq: double variational principle for mean dimension\]](#eq: double variational principle for mean dimension){reference-type="ref" reference="eq: double variational principle for mean dimension"}) to *mean dimension with potential*, which is a mean dimension analogue of topological pressure. Let $\varphi\colon \mathcal{X}\to \mathbb{R}$ be a continuous function. The paper [@Tsukamoto; @potential] introduced mean dimension with potential (denoted by $\mathrm{mdim}(\mathcal{X}, T, \varphi)$) and proved that [@Tsukamoto; @potential Corollary 1.7] $$\label{eq: variational principle for mean dimension with potential} \mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \left(\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right).$$ Moreover, if $(\mathcal{X}, T)$ is a free minimal dynamical system then [@Tsukamoto; @potential Theorem 1.1] $$\label{eq: double variational principle for mean dimension with potential} \begin{split} \mathrm{mdim}(\mathcal{X}, T, \varphi) &= \min_{\mathbf{d}\in \mathscr{D}(\mathcal{X})} \left\{\sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \left(\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right)\right\} \\ &= \min_{\mathbf{d}\in \mathscr{D}(\mathcal{X})} \left\{\sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \left(\overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right)\right\}. \end{split}$$ We also conjecture that this holds for all dynamical systems. 
The main purpose of this paper is to generalize the above ([\[eq: variational principle for mean dimension with potential\]](#eq: variational principle for mean dimension with potential){reference-type="ref" reference="eq: variational principle for mean dimension with potential"}) to $\mathbb{R}^d$-actions. We think that we can also generalize the *double variational principle* ([\[eq: double variational principle for mean dimension with potential\]](#eq: double variational principle for mean dimension with potential){reference-type="ref" reference="eq: double variational principle for mean dimension with potential"}) to free minimal $\mathbb{R}^d$-actions. However, it requires technically heavy work. We postpone it to Part II of this series of papers. In this paper we concentrate on the inequality ([\[eq: variational principle for mean dimension with potential\]](#eq: variational principle for mean dimension with potential){reference-type="ref" reference="eq: variational principle for mean dimension with potential"}). The motivation to generalize ([\[eq: variational principle for mean dimension with potential\]](#eq: variational principle for mean dimension with potential){reference-type="ref" reference="eq: variational principle for mean dimension with potential"}) and ([\[eq: double variational principle for mean dimension with potential\]](#eq: double variational principle for mean dimension with potential){reference-type="ref" reference="eq: double variational principle for mean dimension with potential"}) to $\mathbb{R}^d$-actions comes from the fact that many natural examples of mean dimension theory are rooted in geometric analysis [@Gromov; @Matsuo--Tsukamoto; @Brody; @curves; @Tsukamoto; @Brody; @curves]. In geometric analysis we usually consider actions of groups more complicated than $\mathbb{Z}$. Perhaps $\mathbb{R}^d$-actions are the most basic case.
We plan to apply the results of this paper to geometric examples of [@Gromov; @Matsuo--Tsukamoto; @Brody; @curves; @Tsukamoto; @Brody; @curves] in a future paper. Since $\mathbb{R}^d$ is a continuous group, several new technical difficulties appear. In particular, measure theoretic details are more complicated in the case of $\mathbb{R}^d$-actions than in the case of $\mathbb{Z}$-actions. A main task of this paper is to establish such details. We would like to mention the paper of Huo--Yuan [@Huo--Yuan]. They develop the variational principle for mean dimension of $\mathbb{Z}^d$-actions. In §[4](#section: mean dimension of Z^d-actions){reference-type="ref" reference="section: mean dimension of Z^d-actions"} and §[5](#section: proof of mdim is bounded by mdimh){reference-type="ref" reference="section: proof of mdim is bounded by mdimh"} we also touch on the case of $\mathbb{Z}^d$-actions. Some results in these sections were already studied in [@Huo--Yuan]. ## Mean dimension with potential of $\mathbb{R}^d$-actions {#subsection: mean dimension with potential of R^d-actions} In this subsection we introduce mean dimension with potential for $\mathbb{R}^d$-actions. Let $P$ be a finite simplicial complex. (Here "finite" means that the number of faces is finite. In this paper we do not consider infinite simplicial complexes. Simplicial complexes are always finite.) For a point $a \in P$ we define $\dim_a P$ as the maximum of $\dim \Delta$ where $\Delta$ runs over all simplices of $P$ containing $a$. We call $\dim_a P$ the **local dimension** of $P$ at $a$. See Figure [1](#figure: local dimension){reference-type="ref" reference="figure: local dimension"}. (This is the same as [@Tsukamoto; @potential Fig. 1].) ![Here $P$ has four vertexes (denoted by dots), four $1$-dimensional simplexes and one $2$-dimensional simplex. The points $b$ and $d$ are vertexes of $P$ whereas $a$ and $c$ are not.
We have $\dim_a P = \dim_b P =2$ and $\dim_c P = \dim_d P =1$.](local_dimension.eps){#figure: local dimension width="3.0in"} Let $(\mathcal{X}, \mathbf{d})$ be a compact metric space. Let $\mathcal{Y}$ be a topological space and $f\colon \mathcal{X}\to \mathcal{Y}$ a continuous map. For a positive number $\varepsilon$ we call $f$ an **$\varepsilon$-embedding** if we have $\mathrm{Diam}f^{-1}(y) < \varepsilon$ for all $y\in \mathcal{Y}$. Let $\varphi: \mathcal{X}\to \mathbb{R}$ be a continuous function. We define the **$\varepsilon$-width dimension with potential** by $$\label{eq: widim with potential} \begin{split} & \mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi) \\ & = \inf\left\{\max_{x\in \mathcal{X}} \left(\dim_{f(x)} P + \varphi(x)\right) \middle| \parbox{3in}{\centering $P$ is a finite simplicial complex and $f:\mathcal{X}\to P$ is an $\varepsilon$-embedding}\right\}. \end{split}$$ Let $d$ be a natural number. We consider that $\mathbb{R}^d$ is equipped with the Euclidean topology and standard additive group structure. We denote the standard Lebesgue measure on $\mathbb{R}^d$ by $\mathbf{m}$. Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ compatible with the topology, and let $\varphi\colon \mathcal{X}\to \mathbb{R}$ be a continuous function. For a bounded Borel subset $A \subset \mathbb{R}^d$ we define a new metric $\mathbf{d}_A$ and a new function $\varphi_A \colon \mathcal{X}\to \mathbb{R}$ by $$\mathbf{d}_A(x, y) = \sup_{u\in A} \mathbf{d}(T^u x, T^u y), \quad \varphi_A(x) = \int_{A} \varphi(T^u x)\, d\mathbf{m}(u).$$ If $\varphi(x) \geq 0$ for all $x\in \mathcal{X}$ then we have: 1. 
**Subadditivity:** For bounded Borel subsets $A, B \subset \mathbb{R}^d$ $$\mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{A \cup B}, \varphi_{A \cup B}\right) \leq \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{A}, \varphi_{A}\right) + \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{B}, \varphi_{B}\right).$$ 2. **Monotonicity:** If $A \subset B$ then $$0\leq \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{A}, \varphi_{A}\right) \leq \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{B}, \varphi_{B}\right).$$ 3. **Invariance:** For $a \in \mathbb{R}^d$ and a bounded Borel subset $A \subset \mathbb{R}^d$ $$\mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{a+A}, \varphi_{a+A}\right) = \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_{A}, \varphi_{A}\right),$$ where $a+A = \{a+u\mid u\in A \}$. Notice that we need to assume the nonnegativity of $\varphi$ for the properties (1) and (2). For a positive number $L$ we denote $\mathbf{d}_{[0,L)^d}$ and $\varphi_{[0, L)^d}$ by $\mathbf{d}_L$ and $\varphi_L$ respectively for simplicity. We define the **mean dimension with potential** of $\left(\mathcal{X}, T, \varphi\right)$ by $$\label{eq: definition of mean dimension with potential} \mathrm{mdim}\left(\mathcal{X}, T, \varphi\right) = \lim_{\varepsilon \to 0} \left(\lim_{L\to \infty} \frac{\mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_L, \varphi_L)}{L^d}\right).$$ This is a topological invariant, namely its value is independent of the choice of the metric $\mathbf{d}$. Notice that we do not assume the nonnegativity of $\varphi$ in the definition ([\[eq: definition of mean dimension with potential\]](#eq: definition of mean dimension with potential){reference-type="ref" reference="eq: definition of mean dimension with potential"}). 
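For intuition, the averaged metric $\mathbf{d}_L$ and potential $\varphi_L$ can be approximated by sampling for a concrete $\mathbb{R}$-action; a sketch for the rotation flow $T^u x = x + \alpha u$ on the circle $\mathbb{R}/\mathbb{Z}$ (this toy system, the speed $\alpha$ and the grid sizes are illustrative assumptions, not from the paper):

```python
import math

ALPHA = math.sqrt(2.0)  # speed of the toy rotation flow (illustrative choice)

def d(x, y):
    """Arc-length metric on the circle R/Z."""
    r = abs(x - y) % 1.0
    return min(r, 1.0 - r)

def flow(u, x):
    """T^u x = x + ALPHA*u (mod 1): a continuous R-action on the circle."""
    return (x + ALPHA * u) % 1.0

def d_L(x, y, L, n=10_000):
    """d_L(x, y) = sup_{0 <= u < L} d(T^u x, T^u y), approximated on a grid."""
    return max(d(flow(k * L / n, x), flow(k * L / n, y)) for k in range(n))

def phi_L(phi, x, L, n=100_000):
    """phi_L(x) = int_0^L phi(T^u x) du, approximated by a Riemann sum."""
    h = L / n
    return h * sum(phi(flow(k * h, x)) for k in range(n))

phi = lambda x: math.sin(2.0 * math.pi * x) ** 2  # potential with mean 1/2
```

Since every $T^u$ is an isometry for this flow, $\mathbf{d}_L = \mathbf{d}$, while $\varphi_L(x)/L$ tends to the spatial mean $1/2$ of $\varphi$ as $L\to\infty$.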
We need to check that the limits in the definition ([\[eq: definition of mean dimension with potential\]](#eq: definition of mean dimension with potential){reference-type="ref" reference="eq: definition of mean dimension with potential"}) exist. The limit with respect to $\varepsilon$ exists because $\mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_L, \varphi_L)$ is monotone in $\varepsilon$. We prove the existence of the limit with respect to $L$ in the next lemma. **Lemma 1**. *The limit $\displaystyle \lim_{L\to \infty} \mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_L, \varphi_L)/L^d$ exists in the definition ([\[eq: definition of mean dimension with potential\]](#eq: definition of mean dimension with potential){reference-type="ref" reference="eq: definition of mean dimension with potential"}).* *Proof.* Let $c$ be the minimum of $\varphi(x)$ over $x\in \mathcal{X}$ and set $\psi(x) = \varphi(x) -c$. Then $\psi$ is a nonnegative function with $$\mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_A, \psi_A\right) = \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_A, \varphi_A\right) - c\, \mathbf{m}(A).$$ Set $h(A) = \mathrm{Widim}_\varepsilon\left(\mathcal{X}, \mathbf{d}_A, \psi_A\right)$. It is enough to prove that the limit $\displaystyle \lim_{L\to \infty} h\left([0,L)^d\right)/L^d$ exists. For $0<L<R$, let $n= \lfloor R/L \rfloor$ be the integer part of $R/L$. We have $$[0, R)^d \subset \bigcup_{u\in \mathbb{Z}^d\cap [0, n]^d} \left(Lu+ [0, L)^d\right).$$ Since $\psi$ is nonnegative, $h(A)$ satisfies the subadditivity, monotonicity and invariance. 
Hence $$h\left([0,R)^d\right) \leq (n+1)^d\cdot h\left([0,L)^d\right).$$ Dividing this by $R^d$ and letting $R\to \infty$, we get $$\limsup_{R\to \infty} \frac{h\left([0,R)^d\right)}{R^d} \leq \frac{h\left([0,L)^d\right)}{L^d}.$$ Then letting $L\to \infty$ we get $$\limsup_{R\to \infty} \frac{h\left([0,R)^d\right)}{R^d} \leq \liminf_{L\to \infty} \frac{h\left([0,L)^d\right)}{L^d}.$$ Therefore the limit $\displaystyle \lim_{L\to \infty} h\left([0,L)^d\right)/L^d$ exists. ◻ **Remark 2**. By the Ornstein--Weiss quasi-tiling argument ([@Ornstein--Weiss], [@Gromov §1.3.1]) we can also prove that for any Følner sequence $A_1, A_2, A_3, \dots$ of $\mathbb{R}^d$ the limit $$\lim_{n\to \infty} \frac{\mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_{A_n}, \varphi_{A_n})}{\mathbf{m}(A_n)}$$ exists and that its value is independent of the choice of a Følner sequence. In particular, we can define the mean dimension with potential by $$\mathrm{mdim}\left(\mathcal{X}, T, \varphi\right) = \lim_{\varepsilon \to 0} \left(\lim_{R \to \infty} \frac{\mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_{B_R}, \varphi_{B_R})}{\mathbf{m}(B_R)}\right),$$ where $B_R = \{u\in \mathbb{R}^d\mid |u| \leq R\}$. ## Main result {#subsection: main result} Let $\mathcal{X}$ be a compact metrizable space. Recall that we have denoted by $\mathscr{D}(\mathcal{X})$ the set of metrics $\mathbf{d}$ on $\mathcal{X}$ compatible with the given topology. Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action. A Borel probability measure $\mu$ on $\mathcal{X}$ is said to be **$T$-invariant** if $\mu(T^{-u} A) = \mu(A)$ for all $u\in \mathbb{R}^d$ and all Borel subsets $A\subset \mathcal{X}$. We define $\mathscr{M}^T(\mathcal{X})$ as the set of all $T$-invariant Borel probability measures $\mu$ on $\mathcal{X}$. Take a metric $\mathbf{d}\in \mathscr{D}(\mathcal{X})$ and a measure $\mu\in \mathscr{M}^T(\mathcal{X})$. 
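The proof of Lemma 1 uses subadditivity, monotonicity and invariance only through the covering estimate $h\left([0,R)^d\right) \leq (\lfloor R/L\rfloor + 1)^d\, h\left([0,L)^d\right)$. A small numeric sketch with a toy set function (the choice $h(L) = \lceil L\rceil^d$ is an illustrative stand-in for $\mathrm{Widim}_\varepsilon$, not from the paper) shows both the covering inequality and the convergence of $h(L)/L^d$:

```python
import math

d = 2  # dimension of the acting group R^d in the toy computation

# Toy monotone subadditive quantity playing the role of Widim_eps:
h = lambda L: math.ceil(L) ** d

def covering_bound(R, L):
    """Right-hand side of the covering estimate h([0,R)^d) <= (floor(R/L)+1)^d * h([0,L)^d)."""
    n = math.floor(R / L)
    return (n + 1) ** d * h(L)

# h(L)/L^d converges (here to 1), as the lemma predicts for such set functions.
ratios = [h(L) / L**d for L in (10.5, 100.5, 1000.5)]
```

The point of the lemma is exactly this mechanism: the covering bound forces $\limsup_R h(R)/R^d \leq h(L)/L^d$ for every $L$, hence the limit exists.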
We randomly choose a point $x\in \mathcal{X}$ according to the distribution $\mu$ and consider the orbit $\{T^u x\}_{u\in \mathbb{R}^d}$. For a positive number $\varepsilon$ we define the rate distortion function $R(\mathbf{d}, \mu, \varepsilon)$ as the minimum number of bits per unit volume for describing $\{T^u x\}_{u\in \mathbb{R}^d}$ with average distortion bounded by $\varepsilon$ with respect to $\mathbf{d}$. The precise definition of $R(\mathbf{d},\mu,\varepsilon)$ is given in §[2.3](#subsection: rate distortion theory){reference-type="ref" reference="subsection: rate distortion theory"}. We define the **upper and lower rate distortion dimensions** by $$\overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \limsup_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)},\quad \underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \liminf_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)}.$$ The following is the main result of this paper. **Theorem 3** (Main theorem). *Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. Let $\varphi\colon \mathcal{X}\to \mathbb{R}$ be a continuous function. Then for any metric $\mathbf{d}\in \mathscr{D}(\mathcal{X})$ $$\mathrm{mdim}\left(\mathcal{X}, T, \varphi\right) \leq \sup_{\mu\in \mathscr{M}^T(\mathcal{X})}\left(\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right).$$* We propose a conjecture: **Conjecture 4**.
*In the setting of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"} we have $$\begin{split} \mathrm{mdim}(\mathcal{X}, T, \varphi) & = \min_{\mathbf{d}\in \mathscr{D}(\mathcal{X})} \left\{\sup_{\mu\in \mathscr{M}^T(\mathcal{X})}\left(\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right)\right\} \\ & = \min_{\mathbf{d}\in \mathscr{D}(\mathcal{X})} \left\{\sup_{\mu\in \mathscr{M}^T(\mathcal{X})}\left(\overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right)\right\}. \end{split}$$* We expect that this conjecture can be proved when $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ is a free minimal action. The proof will be rather lengthy and technically heavy. We postpone it to Part II of this series of papers. Along the way to proving Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}, we will introduce *mean Hausdorff dimension with potential* and *metric mean dimension with potential* for $\mathbb{R}^d$-actions and establish their basic properties. In particular we prove: 1. Mean Hausdorff dimension with potential bounds $\mathrm{mdim}\left(\mathcal{X}, T, \varphi\right)$ from above (Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"}). 2. We can construct invariant probability measures which capture the complexity of dynamics expressed by mean Hausdorff dimension with potential (*Dynamical Frostman's lemma*; Theorem [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"}). 3.
Metric mean dimension with potential bounds $\displaystyle \overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu$ from above (Proposition [Proposition 27](#proposition: rate distortion dimension and metric mean dimension){reference-type="ref" reference="proposition: rate distortion dimension and metric mean dimension"}). 4. Metric mean dimension with potential can be calculated by using only "local" information (Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"}). The results (1) and (2) will be used in the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. The results (3) and (4) are not used in the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. We plan to use (3) in Part II of this series of papers. The result (4) may be useful when we study geometric examples of [@Gromov; @Matsuo--Tsukamoto; @Brody; @curves; @Tsukamoto; @Brody; @curves] in the future. ## Organization of the paper In §[2](#section: mutual information and rate distortion theory){reference-type="ref" reference="section: mutual information and rate distortion theory"} we prepare basic definitions and results on mutual information and rate distortion theory. In §[3](#section: metric mean dimension with potential and mean Hausdorff dimension with potential){reference-type="ref" reference="section: metric mean dimension with potential and mean Hausdorff dimension with potential"} we introduce mean Hausdorff dimension with potential and metric mean dimension with potential for $\mathbb{R}^d$-actions.
We also state their fundamental properties in §[3](#section: metric mean dimension with potential and mean Hausdorff dimension with potential){reference-type="ref" reference="section: metric mean dimension with potential and mean Hausdorff dimension with potential"}. The proofs will be given in §[5](#section: proof of mdim is bounded by mdimh){reference-type="ref" reference="section: proof of mdim is bounded by mdimh"} and §[6](#section: proof of dynamical Frostman lemma){reference-type="ref" reference="section: proof of dynamical Frostman lemma"}. Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"} (Main Theorem) follows from the properties of mean Hausdorff dimension with potential stated in §[3](#section: metric mean dimension with potential and mean Hausdorff dimension with potential){reference-type="ref" reference="section: metric mean dimension with potential and mean Hausdorff dimension with potential"}. In §[4](#section: mean dimension of Z^d-actions){reference-type="ref" reference="section: mean dimension of Z^d-actions"} we prepare some basic results on mean dimension theory of $\mathbb{Z}^d$-actions. They will be used in §[5](#section: proof of mdim is bounded by mdimh){reference-type="ref" reference="section: proof of mdim is bounded by mdimh"}. In §[5](#section: proof of mdim is bounded by mdimh){reference-type="ref" reference="section: proof of mdim is bounded by mdimh"} we prove that $\mathrm{mdim}(\mathcal{X}, T, \varphi)$ is bounded from above by mean Hausdorff dimension with potential. In §[6](#section: proof of dynamical Frostman lemma){reference-type="ref" reference="section: proof of dynamical Frostman lemma"} we prove dynamical Frostman's lemma. In §[7](#section: local nature of metric mean dimension with potential){reference-type="ref" reference="section: local nature of metric mean dimension with potential"} we prove that metric mean dimension with potential can be calculated by using certain local information. 
§[7](#section: local nature of metric mean dimension with potential){reference-type="ref" reference="section: local nature of metric mean dimension with potential"} is independent of the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. # Mutual information and rate distortion theory {#section: mutual information and rate distortion theory} We prepare the basics of rate distortion theory in this section. Throughout this paper $\log x$ denotes the logarithm to base two. The natural logarithm is denoted by $\ln x$: $$\log x = \log_2 x, \quad \ln x = \log_e x.$$ This section is rather long. This is partly because we have to be careful about measure-theoretic details. Hopefully this section will become a useful reference in a future study of mean dimension of $\mathbb{R}^d$-actions. On a first reading, readers may skip the whole of Subsection [2.1](#subsection: measure theoretic preparations){reference-type="ref" reference="subsection: measure theoretic preparations"} and most of Subsection [2.2](#subsection: mutual information){reference-type="ref" reference="subsection: mutual information"}. The only crucial parts of this section are the definition of mutual information in §[2.2](#subsection: mutual information){reference-type="ref" reference="subsection: mutual information"} and the definition of the rate distortion function in §[2.3](#subsection: rate distortion theory){reference-type="ref" reference="subsection: rate distortion theory"}. The rest of this section consists of technical details. ## Measure theoretic preparations {#subsection: measure theoretic preparations} We need to prepare some basic results on measure theory. A **measurable space** is a pair $(\mathcal{X}, \mathcal{A})$ of a set $\mathcal{X}$ and its $\sigma$-algebra $\mathcal{A}$. 
Two measurable spaces $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ are said to be **isomorphic** if there exists a bijection $f\colon \mathcal{X}\to \mathcal{Y}$ such that both $f$ and $f^{-1}$ are measurable (i.e. $f(\mathcal{A}) = \mathcal{B}$). For a topological space $\mathcal{X}$, its **Borel $\sigma$-algebra** $\mathcal{B}_\mathcal{X}$ is the minimum $\sigma$-algebra containing all open subsets of $\mathcal{X}$. A **Polish space** is a topological space $\mathcal{X}$ admitting a metric $\mathbf{d}$ for which $(\mathcal{X}, \mathbf{d})$ is a complete separable metric space. A measurable space $(\mathcal{X}, \mathcal{A})$ is said to be a **standard Borel space** if there exists a Polish space $\mathcal{Y}$ for which $(\mathcal{X}, \mathcal{A})$ is isomorphic to $(\mathcal{Y}, \mathcal{B}_\mathcal{Y})$ as measurable spaces. It is known that any two uncountable standard Borel spaces are isomorphic to each other (the Borel isomorphism theorem [@Srivastava Theorem 3.3.13]). Therefore every standard Borel space is isomorphic to one of the following measurable spaces: - A finite set $A$ with its discrete $\sigma$-algebra $2^A := \{\text{subsets of $A$}\}$. - The set of natural numbers $\mathbb{N}$ with its discrete $\sigma$-algebra $2^{\mathbb{N}} := \{\text{subsets of $\mathbb{N}$}\}$. - The Cantor set $\mathcal{C} = \{0,1\}^{\mathbb{N}}$ with its Borel $\sigma$-algebra $\mathcal{B}_{\mathcal{C}}$. (Here $\{0,1\}$ is endowed with the discrete topology and the topology of $\mathcal{C}$ is the product topology.) The importance of standard Borel spaces is that we can prove the existence of a *regular conditional distribution* under the assumption of "standard Borel". Let $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ be measurable spaces. 
A **transition probability** on $\mathcal{X}\times \mathcal{Y}$ is a map $\nu:\mathcal{X}\times \mathcal{B}\to [0,1]$ such that - for every $x\in \mathcal{X}$, the map $\mathcal{B}\ni B\mapsto \nu(x, B)\in [0,1]$ is a probability measure on $(\mathcal{Y}, \mathcal{B})$, - for every $B \in \mathcal{B}$, the map $\mathcal{X}\ni x \mapsto \nu(x, B)\in [0,1]$ is measurable. We often denote $\nu(x, B)$ by $\nu(B|x)$. For two measurable spaces $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ we denote their product by $(\mathcal{X}\times \mathcal{Y}, \mathcal{A}\otimes \mathcal{B})$ where $\mathcal{A}\otimes \mathcal{B}$ is the minimum $\sigma$-algebra containing all the rectangles $A\times B$ $(A\in \mathcal{A}, B\in \mathcal{B})$. For any $E\in \mathcal{A}\otimes \mathcal{B}$, it is known that the **section** $E_x:= \{y\in \mathcal{Y}\mid (x, y)\in E\}$ belongs to $\mathcal{B}$ for every $x\in \mathcal{X}$. (This fact is a part of the Fubini theorem. It can be easily proved by using Dynkin's $\pi$-$\lambda$ theorem [@Durrett p.402 Theorem A.1.4].) Moreover, if $(\mathcal{Y}, \mathcal{B})$ is a standard Borel space, then for any transition probability $\nu$ on $\mathcal{X}\times \mathcal{Y}$ and any $E\in \mathcal{A}\otimes \mathcal{B}$ the map $\mathcal{X}\ni x\mapsto \nu(E_x|x) \in [0,1]$ is measurable [@Srivastava Proposition 3.4.24]. A **probability space** is a triplet $(\Omega, \mathcal{F}, \mathbb{P})$ where $(\Omega, \mathcal{F})$ is a measurable space and $\mathbb{P}$ is a probability measure defined on it. Let $X\colon \Omega \to \mathcal{X}$ be a measurable map from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ to a Borel space $(\mathcal{X}, \mathcal{A})$. We denote the push-forward measure $X_*\mathbb{P}$ by $\mathrm{Law} X$ and call it the **law of $X$** or the **distribution of $X$**. (Here $X_*\mathbb{P}(A) = \mathbb{P}\left(X\in A\right) = \mathbb{P}\left(X^{-1}(A)\right)$ for $A\in \mathcal{A}$.) 
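For readers who prefer a concrete picture, the finite-set case of these definitions can be spelled out in a few lines of Python. This is only an illustration added here, with hypothetical sets and numbers not taken from the text: on finite sets a transition probability is a row-stochastic matrix, and the integral of $\nu(E_x|x)$ against $\mu$ defining the joint law becomes a finite sum.

```python
# Hypothetical finite illustration: X = {0, 1}, Y = {0, 1, 2}.
# A transition probability nu on X x Y is, in this finite setting,
# a row-stochastic matrix: row x is the probability measure nu(.|x) on Y.
mu = [0.4, 0.6]                 # Law of X: a probability measure on X
nu = [[0.5, 0.3, 0.2],          # nu(.|0)
      [0.1, 0.1, 0.8]]          # nu(.|1)

# Each nu(.|x) is a probability measure on Y (the first condition);
# measurability of x -> nu(B|x) is automatic on a finite X (the second).
assert all(abs(sum(row) - 1.0) < 1e-12 for row in nu)

def joint_prob(E):
    """P((X, Y) in E) = sum_x mu(x) * nu(E_x | x), where E is a set of
    pairs (x, y) and E_x = {y : (x, y) in E} is its section at x."""
    return sum(mu[x] * sum(nu[x][y] for y in range(3) if (x, y) in E)
               for x in range(2))

# The whole space has probability one.
assert abs(joint_prob({(x, y) for x in range(2) for y in range(3)}) - 1.0) < 1e-12
```

The section $E_x$ appearing in the code is exactly the section discussed above; summing $\mu(x)\,\nu(E_x|x)$ over $x$ is the finite analogue of the integral formula.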
The next theorem is a fundamental result. It guarantees the existence of regular conditional probability. For the proof, see [@Ikeda--Watanabe p.15 Theorem 3.3 and its Corollary] or Gray [@Gray_probability p. 182 Corollary 6.2]. **Theorem 5** (Existence of regular conditional distribution). *Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ standard Borel spaces. Let $X:\Omega\to \mathcal{X}$ and $Y:\Omega \to \mathcal{Y}$ be measurable maps, and set $\mu := \mathrm{Law} X$. Then there exists a transition probability $\nu$ on $\mathcal{X}\times \mathcal{Y}$ such that for any $E\in \mathcal{A}\otimes \mathcal{B}$ we have $$\mathbb{P}\left((X,Y)\in E\right) = \int_\mathcal{X} \nu(E_x|x)\, d\mu(x).$$ If a transition probability $\nu^\prime$ on $\mathcal{X}\times \mathcal{Y}$ satisfies the same property then there exists a $\mu$-null set $N\in \mathcal{A}$ such that $\nu(B|x) = \nu^\prime(B|x)$ for all $x\in \mathcal{X}\setminus N$ and $B\in \mathcal{B}$.* The transition probability $\nu(\cdot|x)$ in this theorem is called the **regular conditional distribution of $Y$ given $X=x$**. We sometimes denote $\nu(B|x)$ by $\mathbb{P}(Y\in B|X=x)$ for $x\in \mathcal{X}$ and $B\in \mathcal{B}$. If $\mathcal{X}$ and $\mathcal{Y}$ are finite sets, then this coincides with the elementary notion of conditional probability: $$\mathbb{P}(Y\in B|X=x) = \frac{\mathbb{P}(X=x, Y\in B)}{\mathbb{P}(X=x)}, \quad \left(\text{if }\mathbb{P}(X=x)\neq 0\right).$$ In this case we usually denote $\nu\left(\{y\}|x\right)$ by $\nu(y|x)$ $(x\in \mathcal{X}, y\in \mathcal{Y})$ and call it a **conditional probability mass function**[^2]. By using the notion of regular conditional distribution, we can introduce the definition of *conditional independence* of random variables. 
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{X},\mathcal{A}), (\mathcal{Y},\mathcal{B}), (\mathcal{Z},\mathcal{C})$ standard Borel spaces. Let $X\colon \Omega\to \mathcal{X}$, $Y\colon \Omega\to \mathcal{Y}$ and $Z\colon \Omega\to \mathcal{Z}$ be measurable maps. We say that $X$ and $Y$ are **conditionally independent given $Z$** if we have $$\label{eq: conditional independence} \mathbb{P}\left((X, Y)\in A\times B| Z=z\right) = \mathbb{P}\left(X\in A|Z=z\right) \cdot \mathbb{P}\left(Y\in B|Z=z\right)$$ for $Z_*\mathbb{P}$-a.e. $z\in \mathcal{Z}$ and all $A\in \mathcal{A}$ and $B\in \mathcal{B}$. Here $Z_*\mathbb{P}$ is the push-forward measure of $\mathbb{P}$ by $Z$. The left-hand side of ([\[eq: conditional independence\]](#eq: conditional independence){reference-type="ref" reference="eq: conditional independence"}) is the regular conditional distribution of $(X, Y)\colon \Omega \to \mathcal{X}\times \mathcal{Y}$ given $Z=z$. The right-hand side is the product of the conditional distribution of $X$ given $Z=z$ and the conditional distribution of $Y$ given $Z=z$. At the end of this subsection we explain the log-sum inequality. This will be used in the next subsection. **Lemma 6** (Log-sum inequality). *Let $(\mathcal{X}, \mathcal{A})$ be a measurable space. Let $\mu$ be a measure on it with $0< \mu(\mathcal{X}) < \infty$. Let $f$ and $g$ be nonnegative measurable functions defined on $\mathcal{X}$. Suppose that $g$ is $\mu$-integrable and $g(x) >0$ for $\mu$-a.e. $x\in \mathcal{X}$. Then $$\left(\int_\mathcal{X} f(x) \, d\mu(x) \right) \log \frac{\int_\mathcal{X} f(x) \, d\mu(x)}{\int_\mathcal{X} g(x) \, d\mu(x)} \leq \int_\mathcal{X} f(x) \log \frac{f(x)}{g(x)}\, d\mu(x).$$ In particular, if the left-hand side is infinite then the right-hand side is also infinite.* Here we assume $0\log \frac{0}{a} = 0$ for all $a>0$. *Proof.* Set $\phi(t) = t\log t$ for $t\geq 0$. 
Since $\phi^{\prime \prime}(t) = \log e/t>0$ for $t>0$, $\phi$ is a convex function. We define a probability measure $w$ on $\mathcal{X}$ by $$w(A) = \frac{\int_A g \, d\mu}{\int_{\mathcal{X}} g\, d\mu} \quad (A\in \mathcal{A}).$$ The Radon--Nikodym derivative of $w$ with respect to $\mu$ is given by $$\frac{dw}{d\mu} = \frac{g}{\int_{\mathcal{X}} g\, d\mu}.$$ By Jensen's inequality $$\label{eq: Jensen inequality} \phi\left(\int_{\mathcal{X}} \frac{f}{g}\, dw\right) \leq \int_{\mathcal{X}} \phi\left(\frac{f}{g}\right) \, dw.$$ Here, if the left-hand side is infinite, then the right-hand side is also infinite. We have $$\phi\left(\int_{\mathcal{X}} \frac{f}{g}\, dw\right) = \phi\left(\frac{\int_{\mathcal{X}}f\, d\mu}{\int_{\mathcal{X}}g\, d\mu}\right) = \frac{\int_{\mathcal{X}}f\, d\mu}{\int_{\mathcal{X}}g\, d\mu} \log \frac{\int_{\mathcal{X}}f\, d\mu}{\int_{\mathcal{X}}g\, d\mu}.$$ The right-hand side of ([\[eq: Jensen inequality\]](#eq: Jensen inequality){reference-type="ref" reference="eq: Jensen inequality"}) is $$\int_{\mathcal{X}} \phi\left(\frac{f}{g}\right) \, dw = \frac{1}{\int_{\mathcal{X}} g\, d\mu} \int_{\mathcal{X}} g \phi\left(\frac{f}{g}\right)\, d\mu = \frac{1}{\int_{\mathcal{X}} g\, d\mu} \int_{\mathcal{X}} f \log \frac{f}{g}\, d\mu.$$ Therefore ([\[eq: Jensen inequality\]](#eq: Jensen inequality){reference-type="ref" reference="eq: Jensen inequality"}) yields $$\left(\int_{\mathcal{X}} f\, d\mu\right) \log \frac{\int_{\mathcal{X}} f\, d\mu}{\int_{\mathcal{X}}g\, d\mu} \leq \int_{\mathcal{X}} f \log \frac{f}{g}\, d\mu.$$ ◻ The following is the finitary version of the log-sum inequality: **Corollary 7**. *Let $a_1, \dots, a_n$ be nonnegative numbers and $b_1, \dots, b_n$ positive numbers. 
Then $$\left(\sum_{i=1}^n a_i\right) \log \frac{\sum_{i=1}^n a_i}{\sum_{i=1}^n b_i} \leq \sum_{i=1}^n a_i \log \frac{a_i}{b_i}.$$* *Proof.* Apply Lemma [Lemma 6](#lemma: log-sum inequality){reference-type="ref" reference="lemma: log-sum inequality"} to the finite set $\mathcal{X}= \{1,2,\dots, n\}$ with the discrete $\sigma$-algebra and the counting measure. ◻ ## Mutual information {#subsection: mutual information} Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. We assume that all random variables in this subsection are defined on $(\Omega, \mathcal{F}, \mathbb{P})$. In this paper a finite set is always assumed to be endowed with the discrete topology and the discrete $\sigma$-algebra (i.e. the set of all subsets). The purpose of this subsection is to define and study mutual information. A basic reference for mutual information is the book of Cover--Thomas [@Cover--Thomas]. A mathematically sophisticated presentation is given in the book of Gray [@Gray_entropy]. First we define the Shannon entropy. Let $(\mathcal{X}, \mathcal{A})$ be a finite set with the discrete $\sigma$-algebra, and let $X\colon \Omega \to \mathcal{X}$ be a measurable map. We define the **Shannon entropy of $X$** by $$H(X) = -\sum_{x\in \mathcal{X}} \mathbb{P}(X=x) \log \mathbb{P}(X=x).$$ Here we assume $0\log 0 = 0$ as usual. Next we define the mutual information. Let $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ be two measurable spaces and let $X\colon \Omega \to \mathcal{X}$ and $Y\colon \Omega \to \mathcal{Y}$ be measurable maps. We want to define the mutual information $I(X;Y)$. Intuitively $I(X;Y)$ measures the amount of information shared by the random variables $X$ and $Y$. - **Case I:** Suppose $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ are finite sets with the discrete $\sigma$-algebras. 
Then we define $$\label{eq: mutual information} I(X;Y) = H(X) + H(Y) - H(X, Y),$$ where $H(X, Y)$ is the Shannon entropy of the measurable map $(X, Y): \Omega \to \mathcal{X}\times \mathcal{Y}$. Since $H(X, Y) \leq H(X) + H(Y)$, the mutual information $I(X;Y)$ is always nonnegative. The explicit formula is given by $$I(X;Y) = \sum_{x\in \mathcal{X}, y\in \mathcal{Y}} \mathbb{P}(X=x, Y=y) \log \frac{\mathbb{P}(X=x, Y=y)}{\mathbb{P}(X=x) \mathbb{P}(Y=y)}.$$ Here we assume $0\log \frac{0}{a}=0$ for any $a\geq 0$. The mutual information $I(X;Y)$ satisfies the following natural monotonicity[^3]: Let $\mathcal{X}^\prime$ and $\mathcal{Y}^\prime$ be finite sets (endowed with the discrete $\sigma$-algebras), and let $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$ be any maps. Then it follows from the log-sum inequality (Corollary [Corollary 7](#cor: log-sum inequality){reference-type="ref" reference="cor: log-sum inequality"}) that $$\label{eq: data-processing inequality} I\left(f(X); g(Y)\right) \leq I(X;Y).$$ - **Case II:** Here we define $I(X;Y)$ for general random variables $X$ and $Y$. (Namely $\mathcal{X}$ and $\mathcal{Y}$ may be infinite sets.) Let $\mathcal{X}^\prime$ and $\mathcal{Y}^\prime$ be arbitrary finite sets, and let $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$ be any measurable maps. Then we can consider $I\left(f(X); g(Y)\right)$ by Case I. We define $I(X;Y)$ as the supremum of $I\left(f(X); g(Y)\right)$ over all finite sets $\mathcal{X}^\prime, \mathcal{Y}^\prime$ and measurable maps $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$. The mutual information $I(X;Y)$ is always nonnegative and symmetric: $I(X;Y) = I(Y;X)\geq 0$. 
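As a sanity check on the finite-set definition of Case I, the following Python snippet (an illustrative computation we add here; the joint distribution is an arbitrary hypothetical example, not from the text) evaluates $I(X;Y) = H(X) + H(Y) - H(X,Y)$ and verifies that it agrees with the explicit sum and is nonnegative.

```python
from math import log2

# Arbitrary hypothetical joint pmf of (X, Y) on {0,1} x {0,1}.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def H(pmf):
    """Shannon entropy (base 2) of a pmf given as a dict of probabilities,
    with the convention 0 log 0 = 0."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

# Marginal laws of X and Y.
pX = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
pY = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# Definition via entropies.
I = H(pX) + H(pY) - H(joint)

# The explicit sum gives the same value.
I_explicit = sum(p * log2(p / (pX[x] * pY[y]))
                 for (x, y), p in joint.items() if p > 0)
assert abs(I - I_explicit) < 1e-12

# Nonnegativity, since H(X, Y) <= H(X) + H(Y).
assert I >= 0
```

For this joint distribution $I(X;Y) \approx 0.125$ bits; replacing `joint` by a product pmf would give $I(X;Y) = 0$.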
If $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$ are measurable maps to some other measurable spaces $(\mathcal{X}^\prime, \mathcal{A}^\prime)$ and $(\mathcal{Y}^\prime, \mathcal{B}^\prime)$ (not necessarily finite sets) then we have $I\left(f(X); g(Y)\right) \leq I\left(X;Y\right)$. If $\mathcal{X}$ and $\mathcal{Y}$ are finite sets, then the definition of Case II is compatible with Case I by the monotonicity ([\[eq: data-processing inequality\]](#eq: data-processing inequality){reference-type="ref" reference="eq: data-processing inequality"}). If $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ are standard Borel spaces, then we can consider the regular conditional distribution of $Y$ given $X=x$. We denote $$\nu(B|x) = \mathbb{P}(Y\in B|X=x) \quad (x\in \mathcal{X}, B\in \mathcal{B}).$$ Let $\mu = X_*\mathbb{P}$ be the push-forward measure of $\mathbb{P}$ by $X$. The distribution of $(X, Y)$ is determined by $\mu$ and $\nu$. Hence the mutual information $I(X;Y)$ is also determined by $\mu$ and $\nu$. Therefore we sometimes denote $I(X;Y)$ by $I(\mu, \nu)$. The importance of this description comes from the fact that $I(\mu, \nu)$ is a concave function in $\mu$ and a convex function in $\nu$ (Proposition [Proposition 14](#proposition: concavity and convexity of I(mu, nu)){reference-type="ref" reference="proposition: concavity and convexity of I(mu, nu)"} below). In the rest of this subsection we prepare several basic properties of mutual information. They are rather technical. Readers may skip to the next subsection on a first reading. If $(\mathcal{X}, \mathcal{A})$ is a standard Borel space and *if $\mathcal{Y}$ is a finite set*, then we can express $I(X; Y)$ in another convenient way. 
For $x\in \mathcal{X}$ we set $$H(Y|X=x) = - \sum_{y\in \mathcal{Y}} \mathbb{P}\left(Y=y|X=x\right) \log \mathbb{P}\left(Y=y|X=x\right).$$ We define the **conditional entropy of $Y$ given $X$** by $$H(Y|X) = \int_{\mathcal{X}} H(Y|X=x) \, d\mu(x), \quad (\mu := X_*\mathbb{P}).$$ The next theorem is given in the book of Gray [@Gray_entropy p. 213, Lemma 7.20]. **Theorem 8**. *Let $X$ and $Y$ be random variables taking values in a standard Borel space $(\mathcal{X}, \mathcal{A})$ and a finite set $\mathcal{Y}$ respectively. Then we have $$I(X;Y) = H(Y) - H(Y|X).$$* When both $\mathcal{X}$ and $\mathcal{Y}$ are finite sets, this theorem is a very well-known result. The point of the theorem is that we do not need to assume that $\mathcal{X}$ is a finite set. The following is also a basic result. This is given in [@Gray_entropy p. 211, Lemma 7.18]. **Theorem 9**. *Let $(\mathcal{X}, \mathcal{A})$ and $\left(\mathcal{Y}, \mathcal{B}\right)$ be standard Borel spaces. Then there exist sequences of measurable maps $f_n\colon \mathcal{X}\to \mathcal{X}_n$ and $g_n\colon \mathcal{Y}\to \mathcal{Y}_n$ to some finite sets $\mathcal{X}_n$ and $\mathcal{Y}_n$ $(n\geq 1)$ for which the following statement holds: If $X$ and $Y$ are random variables taking values in $(\mathcal{X}, \mathcal{A})$ and $\left(\mathcal{Y}, \mathcal{B}\right)$ respectively, then $$I(X;Y) = \lim_{n\to \infty} I\left(f_n(X); g_n(Y)\right).$$* *Sketch of the proof.* Every standard Borel space is isomorphic to either a countable set or the Cantor set. The case of countable sets is easier. So we assume that both $\mathcal{X}$ and $\mathcal{Y}$ are the Cantor set $\{0,1\}^{\mathbb{N}}$. Let $f_n\colon \mathcal{X}\to \{0,1\}^n$ and $g_n\colon \mathcal{Y}\to \{0,1\}^n$ be the natural projections to the first $n$ coordinates. Then we can check that $f_n$ and $g_n$ satisfy the statement. ◻ **Lemma 10**. 
*Let $X_n$ and $Y_n$ $(n\geq 1)$ be sequences of random variables taking values in finite sets $\mathcal{X}$ and $\mathcal{Y}$ respectively. Suppose $(X_n, Y_n)$ converges to $(X, Y)$ in law. (Namely $\mathbb{P}\left(X_n = x, Y_n =y\right) \to \mathbb{P}(X=x, Y=y)$ as $n\to \infty$ for all $(x, y)\in \mathcal{X}\times \mathcal{Y}$.) Then $I(X_n;Y_n)$ converges to $I(X;Y)$ as $n\to \infty$.* *Proof.* This immediately follows from the definition ([\[eq: mutual information\]](#eq: mutual information){reference-type="ref" reference="eq: mutual information"}) in Case I above. ◻ **Lemma 11** (Subadditivity of mutual information). *Let $X, Y, Z$ be random variables taking values in standard Borel spaces $(\mathcal{X}, \mathcal{A}), (\mathcal{Y},\mathcal{B}), (\mathcal{Z}, \mathcal{C})$ respectively. Suppose that $X$ and $Y$ are conditionally independent given $Z$. Then $$I\left(X, Y;Z\right) \leq I(X; Z) + I(Y; Z),$$ where $I\left(X, Y; Z\right) = I\left((X, Y); Z\right)$ is the mutual information between the random variables $(X, Y)$ and $Z$.* *Proof.* Let $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$ be measurable maps to some finite sets $\mathcal{X}^\prime$ and $\mathcal{Y}^\prime$. Then by Theorem [Theorem 8](#theorem: mutual information and conditional entropy){reference-type="ref" reference="theorem: mutual information and conditional entropy"} $$I\left(f(X), g(Y); Z\right) = H\left(f(X), g(Y)\right) - H\left(f(X), g(Y)|Z\right).$$ We have [@Cover--Thomas Theorem 2.6.6] $$H\left(f(X), g(Y)\right) \leq H\left(f(X)\right) + H\left(g(Y)\right).$$ The random variables $f(X)$ and $g(Y)$ are conditionally independent given $Z$. 
Hence $$H\left(f(X), g(Y)|Z\right) = H\left(f(X)|Z\right) + H\left(g(Y)|Z\right).$$ Therefore $$\begin{split} I\left(f(X), g(Y); Z\right) & \leq \left\{H\left(f(X)\right) - H\left(f(X)|Z\right)\right\} + \left\{H\left(g(Y)\right) - H\left(g(Y)|Z\right)\right\} \\ & = I\left(f(X); Z\right) + I\left(g(Y); Z\right). \end{split}$$ We have $I\left(X, Y; Z\right) = \sup_{f, g} I\left(f(X), g(Y); Z\right)$ where $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$ run over all measurable maps to some finite sets. This follows from the fact that $\mathcal{A}\otimes \mathcal{B}$ is generated by rectangles $A\times B$ $(A\in \mathcal{A}, B\in \mathcal{B})$ [@Gray_entropy p.175 Lemma 7.3]. Therefore we get $$I\left(X, Y;Z\right) \leq I(X; Z) + I(Y; Z).$$ ◻ As we briefly mentioned above, the mutual information $I(\mu, \nu)$ is a concave function in a probability measure $\mu$ and a convex function in a transition probability $\nu$. Next we are going to establish this fact. We need some preparations. For a finite set $\mathcal{Y}$, a **probability mass function** $p$ on $\mathcal{Y}$ is a nonnegative function on $\mathcal{Y}$ satisfying $\sum_{y\in \mathcal{Y}} p(y) = 1$. For a probability mass function $p$ on $\mathcal{Y}$ we define $$H(p) = -\sum_{y\in \mathcal{Y}} p(y) \log p(y).$$ **Lemma 12** (Concavity of the Shannon entropy). *Let $\mathcal{Y}$ be a finite set and let $(\mathcal{Z}, \mathcal{C}, m)$ be a probability space. Suppose that we are given a probability mass function $p_z$ on $\mathcal{Y}$ for each $z\in \mathcal{Z}$ and that the map $\mathcal{Z}\ni z\mapsto p_z(y)\in [0,1]$ is measurable for each $y\in \mathcal{Y}$. 
We define a probability mass function $p$ on $\mathcal{Y}$ by $$p(y) = \int_{\mathcal{Z}} p_z(y) \, dm(z).$$ Then $$H(p) \geq \int_{\mathcal{Z}} H(p_z) \, dm(z).$$* *Proof.* From the log-sum inequality (Lemma [Lemma 6](#lemma: log-sum inequality){reference-type="ref" reference="lemma: log-sum inequality"}), $$- p(y) \log p(y) \geq - \int_{\mathcal{Z}} p_z(y) \log p_z(y)\, dm(z).$$ Summing this over $y\in \mathcal{Y}$, we get the statement. ◻ **Lemma 13**. *Let $\mathcal{X}$ and $\mathcal{Y}$ be finite sets and $(\mathcal{Z}, \mathcal{C}, m)$ a probability space. Let $\mu$ be a probability mass function on $\mathcal{X}$. Suppose that, for each $z\in \mathcal{Z}$, we are given a conditional probability mass function $\nu_z(y|x)$ in $x\in \mathcal{X}$ and $y\in \mathcal{Y}$ such that the map $\mathcal{Z}\ni z\mapsto \nu_z(y|x)\in [0,1]$ is measurable for each $(x, y)\in \mathcal{X}\times \mathcal{Y}$. We define $$\nu(y|x) = \int_{\mathcal{Z}} \nu_z(y|x)\, dm(z), \quad (x\in \mathcal{X}, y\in \mathcal{Y}).$$ Then $$I(\mu, \nu) \leq \int_{\mathcal{Z}} I(\mu, \nu_z)\, dm(z).$$* *Proof.* For $y\in \mathcal{Y}$ we set $$p_z(y) = \sum_{x\in \mathcal{X}} \mu(x) \nu_z(y|x), \quad p(y) = \sum_{x\in \mathcal{X}} \mu(x) \nu(y|x).$$ We have $$I(\mu, \nu) = \sum_{x, y} \mu(x) \nu(y|x) \log \frac{\mu(x) \nu(y|x)}{\mu(x) p(y)}, \quad I(\mu, \nu_z) = \sum_{x, y} \mu(x) \nu_z(y|x) \log \frac{\mu(x)\nu_z(y|x)}{\mu(x) p_z(y)}.$$ Here we assume $0\log \frac{a}{0} = 0$ for all $a\geq 0$. We estimate each summand of $I(\mu, \nu)$ and $I(\mu, \nu_z)$. We fix $(x, y)\in \mathcal{X}\times \mathcal{Y}$ with $\mu(x) p(y) >0$. We define a subset $\mathcal{Z}^\prime \subset \mathcal{Z}$ by $$\mathcal{Z}^\prime = \{z\mid p_z(y) >0\} \supset \{z\mid \nu_z(y|x)>0\}.$$ Since $\mu(x) p(y) >0$, we have $m\left(\mathcal{Z}^\prime\right) >0$. 
We have $$\mu(x) \nu(y|x) = \int_{\mathcal{Z}^\prime} \mu(x)\nu_z(y|x)\, dm(z), \quad \mu(x)p(y) = \int_{\mathcal{Z}^\prime} \mu(x) p_z(y) \, dm(z).$$ By the log-sum inequality (Lemma [Lemma 6](#lemma: log-sum inequality){reference-type="ref" reference="lemma: log-sum inequality"}) $$\begin{split} \mu(x) \nu(y|x) \log \frac{\mu(x) \nu(y|x)}{\mu(x) p(y)} & \leq \int_{\mathcal{Z}^\prime} \mu(x) \nu_z(y|x) \log \frac{\mu(x)\nu_z(y|x)}{\mu(x) p_z(y)} dm(z) \\ & = \int_{\mathcal{Z}} \mu(x) \nu_z(y|x) \log \frac{\mu(x)\nu_z(y|x)}{\mu(x) p_z(y)} dm(z). \end{split}$$ Taking sums over $(x, y)\in \mathcal{X}\times \mathcal{Y}$, we get the statement. ◻ **Proposition 14** ($I(\mu, \nu)$ is concave in $\mu$ and convex in $\nu$). *Let $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ be standard Borel spaces, and let $(\mathcal{Z}, \mathcal{C}, m)$ be a probability space.* 1. *Let $\nu$ be a transition probability on $\mathcal{X}\times \mathcal{Y}$. Suppose that we are given a probability measure $\mu_z$ on $\mathcal{X}$ for each $z\in \mathcal{Z}$ such that the map $\mathcal{Z}\ni z\mapsto \mu_z(A)\in [0,1]$ is measurable for every $A\in \mathcal{A}$. We define a probability measure $\mu$ on $(\mathcal{X}, \mathcal{A})$ by $$\mu(A) = \int_{\mathcal{Z}} \mu_z(A)\, dm(z), \quad (A\in \mathcal{A}).$$ Then we have $$\label{eq: concavity of mutual information} I(\mu, \nu) \geq \int_{\mathcal{Z}} I(\mu_z, \nu)\, dm(z).$$* 2. *Let $\mu$ be a probability measure on $\mathcal{X}$. Suppose that we are given a transition probability $\nu_z$ on $\mathcal{X}\times \mathcal{Y}$ for each $z\in \mathcal{Z}$ such that the map $\mathcal{X} \times \mathcal{Z} \ni (x, z) \mapsto \nu_z(B|x) \in [0,1]$ is measurable with respect to $\mathcal{A}\otimes \mathcal{C}$ for each $B \in \mathcal{B}$. 
We define a transition probability $\nu$ on $\mathcal{X}\times \mathcal{Y}$ by $$\nu(B|x) = \int_{\mathcal{Z}} \nu_z(B|x) dm(z), \quad (x\in \mathcal{X}, B\in \mathcal{B}).$$ Then we have $$I(\mu, \nu) \leq \int_{\mathcal{Z}} I(\mu, \nu_z) \, dm(z).$$* *Proof.* (1) By Theorem [Theorem 9](#theorem: approximation of mutual information){reference-type="ref" reference="theorem: approximation of mutual information"}, there exists a sequence of measurable maps $g_n\colon \mathcal{Y}\to \mathcal{Y}_n$ to finite sets $\mathcal{Y}_n$ such that $$I(\mu_z, \nu) = \lim_{n\to \infty} I\left(\mu_z, (g_n)_*\nu\right), \quad I(\mu, \nu) = \lim_{n\to \infty} I\left(\mu, (g_n)_*\nu\right).$$ Here $(g_n)_*\nu$ is a transition probability on $\mathcal{X}\times \mathcal{Y}_n$ defined by $$(g_n)_*\nu(B|x) = \nu\left((g_n)^{-1}B|x\right), \quad (B\subset \mathcal{Y}_n).$$ It is enough to prove that for each $n$ we have $$I\left(\mu, (g_n)_*\nu\right) \geq \int_{\mathcal{Z}} I\left(\mu_z, (g_n)_*\nu\right) \, dm(z).$$ If this is proved then we get the above ([\[eq: concavity of mutual information\]](#eq: concavity of mutual information){reference-type="ref" reference="eq: concavity of mutual information"}) by Fatou's lemma. Therefore we can assume that $\mathcal{Y}$ itself is a finite set from the beginning. We define probability mass functions $p(y)$ and $p_z(y)$ $(z\in \mathcal{Z})$ on $\mathcal{Y}$ by $$p(y) = \int_{\mathcal{X}} \nu(y|x)\, d\mu(x), \quad p_z(y) = \int_{\mathcal{X}} \nu(y|x)\, d\mu_z(x).$$ We have $p(y) = \int_{\mathcal{Z}} p_z(y)\, dm(z)$. Then by Theorem [Theorem 8](#theorem: mutual information and conditional entropy){reference-type="ref" reference="theorem: mutual information and conditional entropy"} $$I(\mu, \nu) = H(p) - \int_{\mathcal{X}} H\left(\nu(\cdot|x)\right)\, d\mu(x), \quad I\left(\mu_z, \nu\right) = H(p_z) - \int_{\mathcal{X}} H\left(\nu(\cdot|x)\right) \, d\mu_z(x).$$ Here $H\left(\nu(\cdot|x)\right) = -\sum_{y\in \mathcal{Y}} \nu(y|x)\log \nu(y|x)$. 
Notice that in particular this shows that $I\left(\mu_z, \nu\right)$ is a measurable function in the variable $z\in \mathcal{Z}$. By Lemma [Lemma 12](#lemma: entropy is concave){reference-type="ref" reference="lemma: entropy is concave"}, we have $H(p) \geq \int_{\mathcal{Z}} H(p_z)\, dm(z)$. We also have $$\int_{\mathcal{X}} H\left(\nu(\cdot|x)\right) d\mu(x) = \int_{\mathcal{Z}} \left(\int_{\mathcal{X}} H\left(\nu(\cdot|x)\right)\, d\mu_z(x)\right) dm(z).$$ Thus $$I(\mu, \nu) \geq \int_{\mathcal{Z}} I\left(\mu_z, \nu\right) dm(z).$$ \(2\) Let $f\colon \mathcal{X}\to \mathcal{X}^\prime$ and $g\colon \mathcal{Y}\to \mathcal{Y}^\prime$ be measurable maps to finite sets $\mathcal{X}^\prime$ and $\mathcal{Y}^\prime$. We define a probability mass function $\mu^\prime$ on $\mathcal{X}^\prime$ by $$\mu^\prime(x^\prime) = \mu\left(f^{-1}(x^\prime)\right) \quad (x^\prime\in \mathcal{X}^\prime).$$ We also define conditional probability mass functions $\nu^\prime$ and $\nu^\prime_z$ $(z\in \mathcal{Z})$ on $\mathcal{X}^\prime\times \mathcal{Y}^\prime$ by $$\nu^\prime(y^\prime|x^\prime) = \frac{\int_{f^{-1}(x^\prime)} \nu\left(g^{-1}(y^\prime)|x\right) d\mu(x)}{\mu\left(f^{-1}(x^\prime)\right)}, \quad \nu^\prime_z(y^\prime|x^\prime) = \frac{\int_{f^{-1}(x^\prime)} \nu_z\left(g^{-1}(y^\prime)|x\right) d\mu(x)}{\mu\left(f^{-1}(x^\prime)\right)}$$ where $x^\prime \in \mathcal{X}^\prime$ and $y^\prime\in \mathcal{Y}^\prime$. We have $$\nu^\prime(y^\prime|x^\prime) = \int_{\mathcal{Z}} \nu^\prime_z(y^\prime|x^\prime) dm(z).$$ Then by Lemma [Lemma 13](#lemma: mutual information is convex in nu finite case){reference-type="ref" reference="lemma: mutual information is convex in nu finite case"} $$I(\mu^\prime, \nu^\prime) \leq \int_{\mathcal{Z}} I(\mu^\prime, \nu^\prime_z)\, dm(z).$$ It follows from the definition of mutual information that we have $I(\mu^\prime, \nu^\prime_z) \leq I(\mu, \nu_z)$ for all $z\in \mathcal{Z}$. 
Hence $$I(\mu^\prime, \nu^\prime) \leq \int_{\mathcal{Z}} I(\mu, \nu_z) \, dm(z).$$ (It follows from Theorem [Theorem 9](#theorem: approximation of mutual information){reference-type="ref" reference="theorem: approximation of mutual information"} that $I(\mu, \nu_z)$ is measurable in $z\in \mathcal{Z}$.) Taking the supremum over $f$ and $g$, we get $$I(\mu, \nu) \leq \int_{\mathcal{Z}} I(\mu, \nu_z) \, dm(z).$$ ◻ Next we will establish a method to prove a lower bound on mutual information (Proposition [Proposition 17](#proposition: Kawabata--Dembo){reference-type="ref" reference="proposition: Kawabata--Dembo"} below). We need to use the following integral representation of $I(X;Y)$. This is given in [@Gray_entropy p. 176 Lemma 7.4, p. 206 Equation (7.31)]. **Theorem 15**. *Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ be measurable spaces. Let $X\colon \Omega\to \mathcal{X}$ and $Y\colon \Omega\to \mathcal{Y}$ be measurable maps with distributions $\mu = \mathrm{Law}(X) = X_*\mathbb{P}$ and $\nu = \mathrm{Law}(Y) = Y_*\mathbb{P}$ respectively. Let $p = \mathrm{Law}(X, Y) = (X, Y)_*\mathbb{P}$ be the distribution of $(X, Y)\colon \Omega \to \mathcal{X}\times \mathcal{Y}$. Suppose that the mutual information $I(X;Y)$ is finite. Then $p$ is absolutely continuous with respect to the product measure $\mu\otimes \nu$. Moreover, letting $f = dp/d(\mu\otimes \nu)$ be the Radon--Nikodym derivative, we have $$I(X;Y) = \int_{\mathcal{X}\times \mathcal{Y}} \log f\, dp = \int_{\mathcal{X}\times \mathcal{Y}}f\log f \, d(\mu\otimes \nu).$$* We learnt the next result from [@Kawabata--Dembo Lemma A.1]. This is a kind of duality of convex programming. **Proposition 16**. *Let $\varepsilon>0$ and $a\geq 0$ be real numbers. Let $(\mathcal{X}, \mathcal{A})$ and $(\mathcal{Y}, \mathcal{B})$ be measurable spaces and $\rho\colon \mathcal{X}\times \mathcal{Y}\to [0, +\infty)$ a measurable map. 
Let $\mu$ be a probability measure on $\mathcal{X}$. Suppose a measurable map $\lambda\colon \mathcal{X}\to [0, +\infty)$ satisfies $$\forall y\in \mathcal{Y}: \quad \int_{\mathcal{X}} \lambda(x) 2^{-a \rho(x, y)}\, d\mu(x) \leq 1.$$ If $X$ and $Y$ are random variables taking values in $\mathcal{X}$ and $\mathcal{Y}$ respectively and satisfying $\mathrm{Law}(X) = \mu$ and $\mathbb{E}\rho(X, Y) < \varepsilon$ then we have $$\label{eq: duality of convex programming} I(X;Y) \geq -a\varepsilon + \int_{\mathcal{X}} \log \lambda(x) \, d\mu(x).$$* *Proof.* Let $\nu = \mathrm{Law}(Y)$ and $p = \mathrm{Law}(X, Y)$ be the distributions of $Y$ and $(X, Y)$ respectively. If $I(X;Y)$ is infinite then the statement is trivial. So we assume $I(X;Y)<\infty$. Then by Theorem [Theorem 15](#theorem: integral representation of mutual information){reference-type="ref" reference="theorem: integral representation of mutual information"} the measure $p$ is absolutely continuous with respect to $\mu\otimes \nu$. Let $f = dp/d(\mu\otimes \nu)$ be the Radon--Nikodym derivative. We have $$I(X;Y) = \int_{\mathcal{X}\times \mathcal{Y}} \log f\, dp.$$ Set $g(x, y) = \lambda(x) 2^{-a\rho(x, y)}$. Since $-\varepsilon < -\mathbb{E}\rho(X, Y) = -\int_{\mathcal{X}\times \mathcal{Y}} \rho(x, y) dp(x, y)$, we have $$\begin{aligned} \int_{\mathcal{X}\times \mathcal{Y}} \log g(x, y) \, dp(x, y) & \geq -a\varepsilon + \int_{\mathcal{X}\times \mathcal{Y}} \log \lambda(x) \, dp(x, y) \\ & = -a\varepsilon + \int_{\mathcal{X}}\log \lambda(x) \, d\mu(x). \end{aligned}$$ Therefore $$I(X;Y) + a\varepsilon - \int_{\mathcal{X}}\log \lambda(x) \, d\mu(x) \geq \int_{\mathcal{X}\times \mathcal{Y}} (\log f - \log g) \, dp = \int_{\mathcal{X}\times \mathcal{Y}}f \log (f/g) \, d\mu(x) d\nu(y).$$ Since $\ln t \leq t-1$, we have $\ln (1/t) \geq 1-t$ and hence $f\ln(f/g) \geq f-g$. 
Then $$\begin{aligned} (\ln 2) \int_{\mathcal{X}\times \mathcal{Y}} f \log (f/g) \, d\mu(x) d\nu(y) & = \int_{\mathcal{X}\times \mathcal{Y}} f \ln (f/g) \, d\mu(x) d\nu(y) \\ & \geq \int_{\mathcal{X}\times \mathcal{Y}} \left(f(x,y)-g(x,y)\right) d\mu(x) d\nu(y) \\ & = 1 - \int_{\mathcal{Y}} \left(\int_{\mathcal{X}} g(x,y)\, d\mu(x)\right) d\nu(y) \geq 0.\end{aligned}$$ In the last inequality we have used the assumption $\int_{\mathcal{X}} g(x,y)\, d\mu(x) \leq 1$. ◻ The next proposition is a key result. We will use it for connecting geometric measure theory to rate distortion theory. This result is essentially due to Kawabata--Dembo [@Kawabata--Dembo Proposition 3.2]. Recall that, for a metric space $(\mathcal{X}, \mathbf{d})$, we use the notation $$\mathrm{Diam}E = \sup\{\mathbf{d}(x, y)\mid x, y\in E\} \quad (E\subset \mathcal{X}).$$ **Proposition 17** (Kawabata--Dembo estimate). *Let $\varepsilon$ and $\delta$ be positive numbers with $2\varepsilon \log (1/\varepsilon) \leq \delta$. Let $s$ be a nonnegative real number. Let $(\mathcal{X}, \mathbf{d})$ be a separable metric space with a Borel probability measure $\mu$ satisfying $$\label{eq: power law} \mu(E) \leq \left(\mathrm{Diam}E\right)^s \quad \text{for all Borel sets $E\subset \mathcal{X}$ with $\mathrm{Diam}E < \delta$}.$$ Let $X$ and $Y$ be random variables taking values in $\mathcal{X}$ and satisfying $\mathrm{Law} X = \mu$ and $\mathbb{E}\mathbf{d}(X, Y) < \varepsilon$. Then $$I(X;Y) \geq s\log (1/\varepsilon) -K(s+1).$$ Here $K$ is a universal positive constant independent of the given data (i.e. $\varepsilon, \delta, s, (\mathcal{X}, \mathbf{d}), \mu$).* *Proof.* The proof is almost identical with [@Lindenstrauss--Tsukamoto; @double Lemma 2.10]. But we repeat it for completeness. If $s=0$ then the statement is trivial. So we can assume $s>0$. We use Proposition [Proposition 16](#proposition: duality of convex programming){reference-type="ref" reference="proposition: duality of convex programming"}. 
Set $a = s/\varepsilon$. We estimate $\int_{\mathcal{X}}2^{-a d(x, y)}d\mu(x)$ for each $y\in \mathcal{X}$. By the Fubini theorem (see [@Mattila 1.15 Theorem]) $$\int_{\mathcal{X}}2^{-a d(x, y)}d\mu(x) = \int_0^1 \mu\{x\mid 2^{-a d(x,y)}\geq u\} \, du.$$ Changing the variable $u = 2^{-a v}$, we have $du = -a (\ln 2) 2^{-av} dv$ and hence $$\begin{aligned} \int_0^1 \mu\{x\mid 2^{-a d(x,y)}\geq u\} \, du & = \int_0^\infty \mu\{x\mid d(x, y) \leq v\} a(\ln 2) 2^{-av}\, dv \\ & = a\ln 2 \left(\int_0^{\delta/2} + \int_{\delta/2}^\infty\right) \mu\{x\mid d(x, y) \leq v\} 2^{-av} \, dv.\end{aligned}$$ By using ([\[eq: power law\]](#eq: power law){reference-type="ref" reference="eq: power law"}) $$\begin{aligned} a\ln 2 \int_0^{\delta/2} \mu\{x\mid d(x, y) \leq v\} 2^{-av} \, dv &\leq a\ln 2 \int_0^{\delta/2} (2v)^s 2^{-av}\, dv \\ & = \int_0^{\frac{a\delta\ln 2}{2}}\left(\frac{2t}{a\ln 2}\right)^s e^{-t}\, dt, \quad (t = a(\ln 2) v) \\ & \leq \left(\frac{2}{a\ln 2}\right)^s \int_0^\infty t^s e^{-t}\, dt \\ & = \left(\frac{2\varepsilon}{\ln 2}\right)^s s^{-s} \Gamma(s+1), \quad \left(a = \frac{s}{\varepsilon}\right). \end{aligned}$$ On the other hand $$\begin{aligned} a\ln 2 \int_{\delta/2}^\infty \mu\{x\mid d(x, y) \leq v\} 2^{-av} \, dv & \leq a \ln 2 \int_{\delta/2}^\infty 2^{-av} dv \\ & = 2^{-a\delta/2} \\ & = \left(2^{-\delta/(2\varepsilon)}\right)^s , \quad \left(a = \frac{s}{\varepsilon}\right).\end{aligned}$$ Since $\delta\geq 2\varepsilon\log(1/\varepsilon)$, we have $-\frac{\delta}{2\varepsilon}\leq \log \varepsilon$. Hence $2^{-\delta/(2\varepsilon)} \leq \varepsilon$.
Summing the above estimates, we get $$\int_{\mathcal{X}}2^{-a d(x, y)}d\mu(x) \leq \varepsilon^s\left\{1 + \left(\frac{2}{\ln 2}\right)^s s^{-s} \Gamma(s+1)\right\}.$$ Using the Stirling formula $s^{-s}\Gamma(s+1) \sim e^{-s}\sqrt{2\pi s}$, we can find a constant $c>1$ such that the term $\{\cdots\}$ is bounded by $c^{s+1}$ from above and hence $$\int_{\mathcal{X}}2^{-a d(x, y)}d\mu(x) \leq c^{s+1} \varepsilon^s.$$ We set $\lambda(x) = c^{-1-s} \varepsilon^{-s}$ for $x\in \mathcal{X}$. (This is a constant function.) Then for all $y\in \mathcal{X}$ $$\int_{\mathcal{X}}\lambda(x) 2^{-a d(x,y)}d\mu(x) \leq 1.$$ We apply Proposition [Proposition 16](#proposition: duality of convex programming){reference-type="ref" reference="proposition: duality of convex programming"} and get $$\begin{aligned} I(X;Y) & \geq -a\varepsilon + \int_{\mathcal{X}}\log \lambda \, d\mu \\ & = -s + \log \lambda , \quad \left(a = \frac{s}{\varepsilon}\right) \\ & = s\log(1/\varepsilon) - (1+\log c) s - \log c.\end{aligned}$$ Then the constant $K := 1+ \log c$ satisfies the statement. ◻ ## Rate distortion theory {#subsection: rate distortion theory} In this subsection we introduce a rate distortion function. The basics of rate distortion theory can be found in the book of Cover--Thomas [@Cover--Thomas Chapter 10]. The rate distortion theory for continuous-time stochastic processes is presented in the paper of Pursley--Gray [@Pursley--Gray]. Recall that we have denoted the Lebesgue measure on $\mathbb{R}^d$ by $\mathbf{m}$. For a measurable function $f(u)$ on $\mathbb{R}^d$ we usually denote its integral with respect to $\mathbf{m}$ by $$\int_{\mathbb{R}^d} f(u) du.$$ Let $(\mathcal{X}, \mathbf{d})$ be a compact metric space. Let $A$ be a Borel subset of $\mathbb{R}^d$ of finite measure $\mathbf{m}(A) < \infty$. We define $L^1(A, \mathcal{X})$ as the space of all measurable maps $f\colon A\to \mathcal{X}$. We identify two maps if they coincide $\mathbf{m}$-almost everywhere.
We define a metric on $L^1(A, \mathcal{X})$ by $$D(f, g) = \int_{A} \mathbf{d}\left(f(u), g(u)\right) du \quad \left(f, g \in L^1(A, \mathcal{X})\right).$$ We need to check the following technical fact. **Lemma 18**. *$\left(L^1(A, \mathcal{X}), D\right)$ is a complete separable metric space. Hence it is a standard Borel space with respect to the Borel $\sigma$-algebra.* *Proof.* First we need to understand what happens if we change the metric $\mathbf{d}$ on $\mathcal{X}$. Let $\mathbf{d}^\prime$ be another metric on $\mathcal{X}$ compatible with the given topology. We define a metric $D^\prime$ on $L^1(A, \mathcal{X})$ by $$D^\prime(f, g) = \int_{A} \mathbf{d}^\prime \left(f(u), g(u)\right) du.$$ Let $\varepsilon$ be a positive number. There exists $\delta>0$ such that $\mathbf{d}(x, y) < \delta \Longrightarrow \mathbf{d}^\prime(x, y) < \varepsilon$. Suppose $f, g\in L^1(A, \mathcal{X})$ satisfy $D(f, g) < \varepsilon \delta$. Then $$\mathbf{m}\{u\in A\mid \mathbf{d}\left(f(u), g(u)\right) \geq \delta\} \leq \frac{1}{\delta} \int_{A} \mathbf{d}\left(f(u), g(u)\right) du < \varepsilon.$$ We have $\mathbf{d}^\prime\left(f(u), g(u)\right) < \varepsilon$ on $\{u\in A\mid \mathbf{d}\left(f(u), g(u)\right) < \delta\}$. Hence $$D^\prime(f, g) < \varepsilon \left(\mathrm{Diam}(\mathcal{X}, \mathbf{d}^\prime) + \mathbf{m}(A)\right).$$ So the identity map $\mathrm{id}\colon \left(L^1(A, \mathcal{X}), D\right) \to \left(L^1(A, \mathcal{X}), D^\prime\right)$ is uniformly continuous. The same is true if we exchange $D$ and $D^\prime$. Therefore if $\left(L^1(A, \mathcal{X}), D^\prime\right)$ is complete and separable then so is $\left(L^1(A, \mathcal{X}), D\right)$. Every compact metric space topologically embeds into the Hilbert cube $[0,1]^{\mathbb{N}}$. 
We define a metric $\mathbf{d}^\prime$ on $[0,1]^{\mathbb{N}}$ by $$\mathbf{d}^\prime(x, y) = \sum_{n=1}^\infty 2^{-n} |x_n-y_n|.$$ Let $L^1\left(A, [0,1]^{\mathbb{N}}\right)$ be the space of measurable maps from $A$ to $[0,1]^{\mathbb{N}}$. We define a metric $D^\prime$ on $L^1\left(A, [0,1]^{\mathbb{N}}\right)$ as above. The space $L^1(A, \mathcal{X})$ is identified with a closed subspace of $L^1\left(A, [0,1]^{\mathbb{N}}\right)$. So it is enough to show that $\left(L^1\left(A, [0,1]^{\mathbb{N}}\right), D^\prime\right)$ is a complete separable metric space. This follows from the standard fact that $L^1(A, [0,1])$ is complete and separable with respect to the $L^1$-norm. ◻ In the following we always assume that $L^1(A, \mathcal{X})$ is endowed with the Borel $\sigma$-algebra (and hence it is a standard Borel space). Let $(\mathcal{X}, \mathbf{d})$ be a compact metric space, and let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$. Let $\mu$ be a $T$-invariant Borel probability measure on $\mathcal{X}$. Let $\varepsilon>0$ and let $A$ be a bounded Borel subset of $\mathbb{R}^d$ with $\mathbf{m}(A)>0$. We define $R(\varepsilon, A)$ as the infimum of the mutual information $I(X; Y)$ where $X$ and $Y$ are random variables defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ such that - $X$ takes values in $\mathcal{X}$ and its distribution is given by $\mu$, - $Y$ takes values in $L^1(A, \mathcal{X})$ and satisfies $$\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_A\mathbf{d}(T^u X, Y_u)\, du\right) < \varepsilon.$$ Here $Y_u = Y_u(\omega)$ ($\omega\in \Omega$) is the value of $Y(\omega)\in L^1(A, \mathcal{X})$ at $u\in A$. We set $R(\varepsilon,A) = 0$ if $\mathbf{m}(A) = 0$. **Remark 19**. In the above definition of $R(\varepsilon, A)$, we can assume that $Y$ takes only finitely many values. Indeed let $X$ and $Y$ be random variables satisfying the conditions in the definition of $R(\varepsilon, A)$. 
We take a positive number $\tau$ satisfying $$\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_A\mathbf{d}(T^u X, Y_u)\, du\right) < \varepsilon - 2\tau.$$ Since $L^1(A, \mathcal{X})$ is separable, it contains a dense countable subset $\{f_1, f_2, f_3, \dots\}$. We define a map $F\colon L^1(A, \mathcal{X})\to \{f_1, f_2, f_3, \dots\}$ by $F(f) = f_n$ where $n$ is the smallest natural number satisfying $D(f, f_n) < \tau\cdot \mathbf{m}(A)$. Set $Y^\prime = F(Y)$. Then we have $$\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_A\mathbf{d}(T^u X, Y^\prime_u)\, du\right) < \varepsilon - \tau.$$ Define $p_n = \mathbb{P}(Y^\prime = f_n)$. We choose $n_0$ such that $$\sum_{n>n_0} p_n \mathrm{Diam}(\mathcal{X}, \mathbf{d}) < \tau.$$ We define $G\colon \{f_1, f_2, f_3, \dots\}\to \{f_1, f_2, \dots, f_{n_0}\}$ by $$G(f) = \begin{cases} f & \text{if } f\in \{f_1, f_2, \dots, f_{n_0}\} \\ f_{n_0} & \text{otherwise} \end{cases}.$$ Set $Y^{\prime\prime} = G(Y^\prime)$. Then $Y^{\prime\prime}$ takes only finitely many values (i.e. $f_1, \dots, f_{n_0}$) and we have $$\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_A\mathbf{d}(T^u X, Y^{\prime\prime}_u)\, du\right) < \varepsilon,$$ $$I(X; Y^{\prime\prime}) \leq I(X; Y^\prime) \leq I(X; Y).$$ Therefore, when we consider the infimum in the definition of $R(\varepsilon, A)$, we only need to take into account such random variables $Y^{\prime\prime}$. For a bounded Borel subset $A\subset \mathbb{R}^d$ and $r>0$ we define $N_r(A)$ as the $r$-neighborhood of $A$ with respect to the $\ell^\infty$-norm, i.e. $N_r(A) = \{u +v\mid u\in A, v\in (-r, r)^d\}$. **Lemma 20**. *We have:* 1. *$R(\varepsilon, A) \leq \log \#\left(\mathcal{X}, \mathbf{d}_A,\varepsilon\right) \leq \mathbf{m}\left(N_{1/2}(A)\right) \log \#(\mathcal{X}, \mathbf{d}_{(-1, 1)^d},\varepsilon)$.* 2. *$R(\varepsilon, a+A) = R(\varepsilon, A)$ for any $a\in \mathbb{R}^d$.* 3.
*If $A\cap B = \emptyset$ then $R(\varepsilon, A\cup B) \leq R(\varepsilon, A) + R(\varepsilon,B)$.* *Proof.* (1) Let $\mathcal{X}= U_1\cup U_2\cup \dots \cup U_n$ be an open cover with $n = \#\left(\mathcal{X}, \mathbf{d}_A,\varepsilon\right)$ and $\mathrm{Diam}(U_k, \mathbf{d}_A) < \varepsilon$ for all $1\leq k \leq n$. Take a point $x_k\in U_k$ for each $k$ and define a map $f\colon \mathcal{X}\to \{x_1,\dots, x_n\}$ by $f(x) = x_k$ for $x\in U_k\setminus (U_1\cup \dots \cup U_{k-1})$. Let $X$ be a random variable taking values in $\mathcal{X}$ according to $\mu$. We set $Y_u = T^u f(X)$ for $u\in A$. Then $X$ and $Y$ satisfy the conditions of the definition of $R(\varepsilon, A)$. We have $$R(\varepsilon, A) \leq I(X;Y) \leq H(Y) \leq \log n = \log \#\left(\mathcal{X}, \mathbf{d}_A,\varepsilon\right).$$ We estimate $\log \#\left(\mathcal{X}, \mathbf{d}_A,\varepsilon\right)$. Let $\{u_1, \dots, u_a\}$ be a maximal $1$-separated subset of $A$ where "$1$-separated" means $\left\lVert u_i-u_j\right\rVert_\infty \geq 1$ for $i\neq j$. Then $A\subset \bigcup_{i=1}^a \left(u_i+(-1, 1)^d\right)$ and hence $$\log \#\left(\mathcal{X}, \mathbf{d}_A,\varepsilon\right) \leq a \log \#\left(\mathcal{X}, \mathbf{d}_{(-1, 1)^d},\varepsilon\right).$$ The sets $u_i+(-1/2,1/2)^d$ $(1\leq i\leq a)$ are mutually disjoint and contained in $N_{1/2}(A)$. Therefore $a\leq \mathbf{m}\left(N_{1/2}(A)\right)$. \(2\) Let $X$ and $Y$ be random variables satisfying the conditions of the definition of $R(\varepsilon, A)$ (i.e., $X$ is distributed according to $\mu$ and the average distance between $\{T^u X\}_{u\in A}$ and $Y$ is bounded by $\varepsilon$). We define new random variables $X^\prime$ and $Y^\prime$ by $$X^\prime = T^{-a} X, \quad Y^\prime_v = Y_{v-a} \quad (v\in a + A).$$ Since $\mu$ is $T$-invariant, we have $\mathrm{Law} X^\prime = \mathrm{Law} X = \mu$.
The random variable $Y^\prime$ takes values in $L^1(a+A, \mathcal{X})$ and $$\begin{aligned} \int_{a+A}\mathbf{d}(T^v X^\prime, Y^\prime_v) \, dv & = \int_{a+A}\mathbf{d}(T^{v-a}X, Y_{v-a}) \, dv \\ & = \int_A \mathbf{d}(T^u X, Y_u)\, du, \quad (u=v-a).\end{aligned}$$ We have $I(X^\prime; Y^\prime) = I(X;Y)$. Therefore $R(\varepsilon, a+A) = R(\varepsilon, A)$. \(3\) Let $X$ and $Y$ be random variables satisfying the conditions of the definition of $R(\varepsilon, A)$ as above, and let $X^\prime$ and $Y^\prime$ be random variables satisfying the conditions of the definition of $R(\varepsilon, B)$. We denote by $\mathbb{P}(Y\in E\mid X=x)$ $(E\subset L^1(A, \mathcal{X}))$ the regular conditional distribution of $Y$ given $X=x$. Similarly for $\mathbb{P}(Y^\prime\in F\mid X^\prime=x)$. We naturally identify $L^1(A\cup B, \mathcal{X})$ with $L^1(A, \mathcal{X})\times L^1(B, \mathcal{X})$. We define a transition probability $\nu$ on $\mathcal{X}\times L^1(A\cup B, \mathcal{X})$ by $$\nu(E\times F|x) = \mathbb{P}(Y\in E\mid X=x) \mathbb{P}(Y^\prime \in F\mid X^\prime=x),$$ for $E\times F\subset L^1(A, \mathcal{X})\times L^1(B, \mathcal{X}) = L^1(A\cup B, \mathcal{X})$ and $x\in \mathcal{X}$. We define a probability measure $Q$ on $\mathcal{X}\times L^1(A\cup B, \mathcal{X})$ by $$Q(G) = \int_{\mathcal{X}} \nu(G_x| x)\, d\mu(x), \quad (G\subset \mathcal{X}\times L^1(A\cup B, \mathcal{X})),$$ where $G_x = \{f\in L^1(A\cup B, \mathcal{X})\mid (x, f)\in G\}$. Let $(X^{\prime\prime}, Y^{\prime\prime})$ be the random variable taking values in $\mathcal{X}\times L^1(A\cup B, \mathcal{X})$ according to $Q$. 
Then $\mathrm{Law} X^{\prime\prime} = \mu$ and $$\begin{aligned} \mathbb{E}\left(\int_{A\cup B} \mathbf{d}(T^u X^{\prime\prime}, Y^{\prime\prime}_u)\, du\right) & = \mathbb{E}\left(\int_A \mathbf{d}(T^u X, Y_u)\, du\right) + \mathbb{E}\left(\int_B \mathbf{d}(T^u X^\prime, Y^\prime_u)\, du\right) \\ & < \varepsilon \, \mathbf{m}(A) + \varepsilon \, \mathbf{m}(B) = \varepsilon\, \mathbf{m}(A\cup B). \end{aligned}$$ The random variables $Y^{\prime\prime}|_A$ and $Y^{\prime\prime}|_B$ are conditionally independent given $X^{\prime\prime}$. Therefore by Lemma [Lemma 11](#lemma: subadditivity of mutual information){reference-type="ref" reference="lemma: subadditivity of mutual information"} $$I(X^{\prime\prime}; Y^{\prime\prime}) = I(X^{\prime\prime}; Y^{\prime\prime}|_A, Y^{\prime\prime}|_B) \leq I(X^{\prime\prime}; Y^{\prime\prime}|_A) + I(X^{\prime\prime}; Y^{\prime\prime}|_B) = I(X; Y) + I(X^\prime; Y^\prime).$$ The statement (3) follows from this. ◻ **Lemma 21**. *The limit of $\displaystyle \frac{R\left(\varepsilon, [0, L)^d\right)}{L^d}$ as $L\to \infty$ exists and is equal to the infimum of $\displaystyle \frac{R\left(\varepsilon, [0, L)^d\right)}{L^d}$ over $L>0$.* *Proof.* Let $0<\ell <L$. We divide $L$ by $\ell$ and let $L = q\ell + r$ where $q$ is a natural number and $0\leq r < \ell$. Set $$\Gamma = \{ (\ell n_1, \dots, \ell n_d)\mid n_i \in \mathbb{Z}, \, 0 \leq n_i < q \> (1\leq i \leq d)\}.$$ The cubes $u +[0, \ell)^d$ $(u\in \Gamma)$ are disjoint and contained in $[0, L)^d$.
Let $A$ be the complement: $$A = [0, L)^d\setminus \bigcup_{u\in \Gamma} \left(u+ [0, \ell)^d\right).$$ The volume of the $1/2$-neighborhood of $A$ is $O(L^{d-1})$: $$\mathbf{m}\left(N_{1/2}(A)\right) \leq d(r+1)(L+1)^{d-1}.$$ By Lemma [Lemma 20](#lemma: subadditivity of R(A, varepsilon)){reference-type="ref" reference="lemma: subadditivity of R(A, varepsilon)"} $$\begin{aligned} R\left(\varepsilon, [0,L)^d\right) & \leq \sum_{u\in \Gamma} R\left(\varepsilon, u+[0,\ell)^d\right) + R(\varepsilon, A) \\ & \leq q^d R\left(\varepsilon, [0,\ell)^d\right) + C (L+1)^{d-1}.\end{aligned}$$ By dividing this by $L^d$ and letting $L\to \infty$, we get $$\limsup_{L\to \infty} \frac{R\left(\varepsilon, [0, L)^d\right)}{L^d} \leq \frac{R\left(\varepsilon, [0,\ell)^d\right)}{\ell^d}.$$ Then $$\limsup_{L\to \infty} \frac{R\left(\varepsilon, [0, L)^d\right)}{L^d} \leq \inf_{\ell>0} \frac{R\left(\varepsilon, [0,\ell)^d\right)}{\ell^d} \leq \liminf_{\ell\to \infty} \frac{R\left(\varepsilon, [0,\ell)^d\right)}{\ell^d}.$$ Hence the limit exists and is equal to the infimum. ◻ Recall that $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ is a continuous action of $\mathbb{R}^d$ on a compact metric space $(\mathcal{X}, \mathbf{d})$ with an invariant probability measure $\mu$. For $\varepsilon>0$ we define the **rate distortion function** $R(\mathbf{d}, \mu, \varepsilon)$ by $$R(\mathbf{d}, \mu, \varepsilon) = \lim_{L\to \infty} \frac{R\left(\varepsilon, [0,L)^d\right)}{L^d} = \inf_{L>0} \frac{R\left(\varepsilon, [0,L)^d\right)}{L^d}.$$ We define the **upper/lower rate distortion dimensions** of $(\mathcal{X}, T, \mathbf{d}, \mu)$ by $$\overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \limsup_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)}, \quad \underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \liminf_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)}.$$ **Remark 22**.
A tiling argument similar to the proof of Lemma [Lemma 21](#lemma: existence of the limit in defining rate distortion function){reference-type="ref" reference="lemma: existence of the limit in defining rate distortion function"} shows that if $\Lambda_1, \Lambda_2, \Lambda_3, \dots$ is a sequence of rectangles in $\mathbb{R}^d$ such that the minimum side length of $\Lambda_n$ diverges to infinity, then we have $$R(\mathbf{d}, \mu, \varepsilon) = \lim_{n\to \infty} \frac{R(\varepsilon, \Lambda_n)}{\mathbf{m}(\Lambda_n)}.$$ With a bit more effort we can also prove that $$R(\mathbf{d}, \mu, \varepsilon) = \lim_{r\to \infty} \frac{R(\varepsilon, B_r)}{\mathbf{m}(B_r)}$$ where $B_r$ is the Euclidean $r$-ball of $\mathbb{R}^d$ centered at the origin. But we are not sure whether, for any Følner sequence $A_1, A_2, A_3,\dots$ of $\mathbb{R}^d$, the limit of $\frac{R(\varepsilon, A_n)}{\mathbf{m}(A_n)}$ exists or not. (Maybe not.) Probably we need to modify the definition of the rate distortion function when we study rate distortion theory for actions of general amenable groups. # Metric mean dimension with potential and mean Hausdorff dimension with potential {#section: metric mean dimension with potential and mean Hausdorff dimension with potential} The purpose of this section is to introduce *metric mean dimension with potential* and *mean Hausdorff dimension with potential* for $\mathbb{R}^d$-actions. These are dynamical versions of Minkowski dimension and Hausdorff dimension. Mean Hausdorff dimension with potential is a main ingredient of the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. Metric mean dimension with potential is not used in the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. But it is also an indispensable tool of mean dimension theory. Therefore we develop its basic theory. We plan to use it in Part II of this series of papers.
Let $(\mathcal{X}, \mathbf{d})$ be a compact metric space and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. Let $\varepsilon$ be a positive number. We define **covering number with potential** by $$\#\left(\mathcal{X}, \mathbf{d}, \varphi, \varepsilon\right) = \inf\left\{\sum_{i=1}^n (1/\varepsilon)^{\sup_{U_i}\varphi} \middle|\, \parbox{3in}{\centering $\mathcal{X} = U_1\cup \dots \cup U_n$ is an open cover with $\mathrm{Diam}\, U_i < \varepsilon$ for all $1\leq i\leq n$} \right\}.$$ Let $s$ be a real number larger than the maximum value of $\varphi$. We define $$\mathcal{H}^s_\varepsilon(\mathcal{X}, \mathbf{d},\varphi) = \inf\left\{\sum_{n=1}^\infty \left(\mathrm{Diam}E_n\right)^{s-\sup_{E_n} \varphi} \middle|\, \mathcal{X} = E_1\cup E_2\cup E_3\cup \dots \text{ with } \mathrm{Diam}E_n < \varepsilon\right\}.$$ Here we assume that the empty set has diameter zero. We define **Hausdorff dimension with potential at the scale $\varepsilon$** by $$\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon) = \inf\{s \mid \mathcal{H}^s_{\varepsilon}(\mathcal{X},\mathbf{d},\varphi) < 1, \, s >\max\varphi\}.$$ When the function $\varphi$ is identically zero ($\varphi \equiv 0$), we denote $\#\left(\mathcal{X}, \mathbf{d}, \varphi, \varepsilon\right)$, $\mathcal{H}^s_\varepsilon(\mathcal{X}, \mathbf{d},\varphi)$ and $\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon)$ by $\#\left(\mathcal{X}, \mathbf{d},\varepsilon\right)$, $\mathcal{H}^s_\varepsilon(\mathcal{X}, \mathbf{d})$ and $\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varepsilon)$ respectively: $$\begin{aligned} \#\left(\mathcal{X}, \mathbf{d},\varepsilon\right) & = \min \left\{n \middle|\, \parbox{3in}{\centering $\mathcal{X} = U_1\cup \dots \cup U_n$ is an open cover with $\mathrm{Diam}\, U_i < \varepsilon$ for all $1\leq i\leq n$} \right\}, \\ \mathcal{H}^s_\varepsilon(\mathcal{X}, \mathbf{d}) & = \inf\left\{\sum_{n=1}^\infty \left(\mathrm{Diam}E_n\right)^{s} \middle|\, \mathcal{X} = 
E_1\cup E_2\cup E_3\cup \dots \text{ with } \mathrm{Diam}E_n < \varepsilon\right\}, \\ \dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varepsilon) & = \inf\{s \mid \mathcal{H}^s_{\varepsilon}(\mathcal{X},\mathbf{d}) < 1, \, s > 0 \}. \end{aligned}$$ We assume that $\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varepsilon) = -\infty$ if $\mathcal{X}$ is the empty set. **Lemma 23**. *For $0<\varepsilon < 1$, we have $\displaystyle \dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon) \leq \frac{\log \#(\mathcal{X},\mathbf{d},\varphi,\varepsilon)}{\log(1/\varepsilon)}$.* *Proof.* Suppose $\displaystyle \frac{\log \#(\mathcal{X},\mathbf{d},\varphi,\varepsilon)}{\log(1/\varepsilon)} <s$. We have $s>\max \varphi$. There is an open cover $\mathcal{X} = U_1\cup\dots\cup U_n$ with $\mathrm{Diam}U_i < \varepsilon$ and $$\sum_{i=1}^n (1/\varepsilon)^{\sup_{U_i} \varphi} < (1/\varepsilon)^s.$$ Then $$\sum_{i=1}^n (\mathrm{Diam}U_i)^{s- \sup_{U_i} \varphi} \leq \sum_{i=1}^n \varepsilon^{s- \sup_{U_i} \varphi} < 1.$$ Hence $\mathcal{H}^s_{\varepsilon}(\mathcal{X},\mathbf{d},\varphi) < 1$ and we have $\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon) \leq s$. ◻ Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of the group $\mathbb{R}^d$. For a bounded Borel subset $A$ of $\mathbb{R}^d$, as in §[1.2](#subsection: mean dimension with potential of R^d-actions){reference-type="ref" reference="subsection: mean dimension with potential of R^d-actions"}, we define a metric $\mathbf{d}_A$ and a function $\varphi_A$ on $\mathcal{X}$ by $$\mathbf{d}_A(x, y) = \sup_{u\in A}\mathbf{d}(T^u x, T^u y), \quad \varphi_A(x) = \int_{A}\varphi(T^u x)\, du.$$ In particular, for a positive number $L$, we set $\mathbf{d}_L = \mathbf{d}_{[0,L)^d}$ and $\varphi_L = \varphi_{[0,L)^d}$.
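Before passing to the dynamical versions, the inequality of Lemma [Lemma 23](#lemma: Hausdorff dimension and covering number){reference-type="ref" reference="lemma: Hausdorff dimension and covering number"} can be checked by brute force on a toy example. The sketch below (hypothetical data, not taken from the paper) uses a four-point subset of $\mathbb{R}$ with potential $\varphi(x)=x$. For a finite metric space every singleton has diameter zero, so covering by singletons gives $\mathcal{H}^s_\varepsilon(\mathcal{X},\mathbf{d},\varphi)=0$ for every $s>\max\varphi$ and hence $\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon)=\max\varphi$; the covering number with potential becomes a minimum-weight set cover problem.

```python
import math
from itertools import combinations

# Toy data (hypothetical, for illustration only).
points = [0.0, 0.1, 0.5, 0.6]           # finite metric space X, a subset of R
phi = {x: x for x in points}             # potential function phi(x) = x
eps = 0.3

def diam(E):
    return max(abs(a - b) for a in E for b in E)

# All nonempty subsets of X with diameter < eps (candidate cover elements).
candidates = [E for r in range(1, len(points) + 1)
              for E in combinations(points, r) if diam(E) < eps]

def weight(E):
    # (1/eps)^{sup_E phi}: the contribution of E to the covering sum
    return (1 / eps) ** max(phi[x] for x in E)

# Covering number with potential: minimum total weight over covers of X.
best = float("inf")
for r in range(1, len(candidates) + 1):
    for family in combinations(candidates, r):
        if set().union(*family) == set(points):
            best = min(best, sum(weight(E) for E in family))

rhs = math.log2(best) / math.log2(1 / eps)  # log #(X, d, phi, eps) / log(1/eps)
lhs = max(phi.values())                     # = dim_H(X, d, phi, eps) for a finite space
print(lhs, rhs)
assert lhs <= rhs
```

Here the left-hand side is $0.6$ while the right-hand side is roughly $0.96$, so the inequality holds with room to spare; the slack reflects that on a finite space the Hausdorff side can discard the cardinality of the cover entirely.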
We define **upper/lower metric mean dimension with potential** by $$\label{eq: metric mean dimension with potential} \begin{split} \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) &= \limsup_{\varepsilon\to 0} \left(\lim_{L\to \infty} \frac{\log \#\left(\mathcal{X}, \mathbf{d}_L, \varphi_L,\varepsilon\right)}{L^d \log(1/\varepsilon)}\right), \\ \underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) &= \liminf_{\varepsilon\to 0} \left(\lim_{L\to \infty} \frac{\log \#\left(\mathcal{X}, \mathbf{d}_L, \varphi_L,\varepsilon\right)}{L^d \log(1/\varepsilon)}\right). \end{split}$$ Here the limit with respect to $L$ exists. The proof is similar to the proof of Lemma [Lemma 1](#lemma: existence of limit with respect to L){reference-type="ref" reference="lemma: existence of limit with respect to L"}. (The quantity $\log \#\left(\mathcal{X}, \mathbf{d}_A, \varphi_A, \varepsilon\right)$ satisfies the natural subadditivity, monotonicity and invariance if $\varphi(x)$ is a nonnegative function.) We define **upper/lower mean Hausdorff dimension with potential** by $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \lim_{\varepsilon\to 0} \left(\limsup_{L\to \infty} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_L, \varphi_L,\varepsilon\right)}{L^d}\right), \\ \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \lim_{\varepsilon\to 0} \left(\liminf_{L\to \infty} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_L, \varphi_L,\varepsilon\right)}{L^d}\right).\end{aligned}$$ **Remark 24**. 
We are not sure whether or not these definitions of the upper and lower mean Hausdorff dimensions with potential coincide with the following: $$\lim_{\varepsilon\to 0} \left(\limsup_{r\to \infty} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{B_r}, \varphi_{B_r},\varepsilon\right)}{\mathbf{m}(B_r)}\right), \quad \lim_{\varepsilon\to 0} \left(\liminf_{r\to \infty} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{B_r}, \varphi_{B_r},\varepsilon\right)}{\mathbf{m}(B_r)}\right).$$ (Maybe not in general.) Here $B_r$ is the Euclidean $r$-ball of $\mathbb{R}^d$ centered at the origin. The next lemma is a dynamical version of the fact that Hausdorff dimension is smaller than or equal to Minkowski dimension. **Lemma 25**. *$\overline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X},T,\mathbf{d},\varphi)$.* *Proof.* This follows from Lemma [Lemma 23](#lemma: Hausdorff dimension and covering number){reference-type="ref" reference="lemma: Hausdorff dimension and covering number"}. ◻ For the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"} we need another version of mean Hausdorff dimension. For a positive number $L$ we define a metric $\overline{\mathbf{d}}_L$ on $\mathcal{X}$ by $$\overline{\mathbf{d}}_L(x,y) = \frac{1}{L^d}\int_{[0,L)^d} d(T^u x, T^u y)\, du.$$ This is also compatible with the given topology of $\mathcal{X}$. 
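Since $\overline{\mathbf{d}}_L$ averages the same orbit distances that $\mathbf{d}_L$ takes the supremum of, we always have $\overline{\mathbf{d}}_L \leq \mathbf{d}_L$. A minimal numerical sketch of the comparison (using a hypothetical logistic flow on $[0,1]$ as the $\mathbb{R}$-action and Riemann sums in place of the integrals; none of this is from the paper):

```python
import math

# Logistic flow T^t x = x e^t / (1 - x + x e^t): a continuous R-action on [0, 1].
# (A hypothetical example chosen only for illustration.)
def T(t, x):
    return x * math.exp(t) / (1.0 - x + x * math.exp(t))

def d(x, y):                       # metric on X = [0, 1]
    return abs(x - y)

def d_sup(L, x, y, n=2000):        # d_L: sup of d(T^u x, T^u y) over u in [0, L)
    return max(d(T(i * L / n, x), T(i * L / n, y)) for i in range(n))

def d_bar(L, x, y, n=2000):        # \bar d_L: orbit average via a Riemann sum
    return sum(d(T(i * L / n, x), T(i * L / n, y)) for i in range(n)) / n

x, y, L = 0.2, 0.3, 5.0
print(d_bar(L, x, y), d_sup(L, x, y))
assert d_bar(L, x, y) <= d_sup(L, x, y)
```

For an isometric action the two metrics coincide, while in general the orbit average can be strictly smaller than the supremum; this gap is exactly why the $L^1$-version of mean Hausdorff dimension can only be smaller than the sup-metric version.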
We define **upper/lower $L^1$-mean Hausdorff dimension with potential** by $$\label{eq: L^1 mean Hausdorff dimension} \begin{split} \overline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \lim_{\varepsilon\to 0} \left(\limsup_{L\to \infty} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_L, \varphi_L,\varepsilon\right)}{L^d}\right), \\ \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \lim_{\varepsilon\to 0} \left(\liminf_{L\to \infty} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_L, \varphi_L,\varepsilon\right)}{L^d}\right). \end{split}$$ We have $\overline{\mathbf{d}}_L(x,y) \leq \mathbf{d}_L(x,y)$ and hence $$\overline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d}, \varphi) \leq \overline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi), \quad \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d}, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi).$$ It is well-known that topological dimension is smaller than or equal to Hausdorff dimension. The next result is its dynamical version. The proof will be given in §[5](#section: proof of mdim is bounded by mdimh){reference-type="ref" reference="section: proof of mdim is bounded by mdimh"}. **Theorem 26**. *$\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d}, \varphi)$.* Notice that this also implies $\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi)$. Metric mean dimension with potential is related to rate distortion dimension by the following result. **Proposition 27**. 
*For any $T$-invariant Borel probability measure $\mu$ on $\mathcal{X}$ we have $$\begin{aligned} \overline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu) +\int_{\mathcal{X}}\varphi\, d\mu & \leq \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X},T,\mathbf{d},\varphi), \\ \underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu) + \int_{\mathcal{X}}\varphi\, d\mu & \leq \underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X},T,\mathbf{d},\varphi). \end{aligned}$$* *Proof.* We will use the following well-known inequality. For the proof see [@Walters §9.3, Lemma 9.9]. **Lemma 28**. *Let $a_1, \dots, a_n$ be real numbers and $(p_1, \dots, p_n)$ a probability vector. For a positive number $\varepsilon$ we have $$\sum_{i=1}^n \left(-p_i\log p_i + p_i a_i \log(1/\varepsilon)\right) \leq \log \sum_{i=1}^n (1/\varepsilon)^{a_i}.$$* Let $L$ and $\varepsilon$ be positive numbers. Let $\mathcal{X}= U_1\cup\dots \cup U_n$ be an open cover with $\mathrm{Diam}(U_i, \mathbf{d}_L)<\varepsilon$. Take a point $x_i\in U_i$ for each $1\leq i\leq n$. Set $E_i = U_i\setminus (U_1\cup\dots \cup U_{i-1})$ and $p_i = \mu(E_i)$. We define $f\colon \mathcal{X}\to \{x_1, \dots, x_n\}$ by $f(E_i) =\{x_i\}$. Let $X$ be a random variable taking values in $\mathcal{X}$ according to $\mu$. Set $Y_u = T^u f(X)$ for $u\in [0,L)^d$. Then almost surely we have $\mathbf{d}(T^u X, Y_u) < \varepsilon$ for all $u\in [0,L)^d$. 
Therefore $$R\left(\varepsilon, [0,L)^d\right) \leq H(Y) = H\left(f(X)\right) = -\sum_{i=1}^n p_i \log p_i.$$ On the other hand $$\int_{\mathcal{X}} \varphi\, d\mu = \int_{\mathcal{X}} \frac{\varphi_L}{L^d}\, d\mu \leq \frac{1}{L^d} \sum_{i=1}^n p_i \sup_{U_i} \varphi_L.$$ By Lemma [Lemma 28](#lemma: free energy){reference-type="ref" reference="lemma: free energy"} $$\begin{aligned} R\left(\varepsilon, [0,L)^d\right) + L^d \log(1/\varepsilon) \int_{\mathcal{X}}\varphi \, d\mu & \leq \sum_{i=1}^n \left(-p_i\log p_i + p_i \sup_{U_i} \varphi_L \log (1/\varepsilon)\right) \\ & \leq \log \sum_{i=1}^n (1/\varepsilon)^{\sup_{U_i} \varphi_L}. \end{aligned}$$ Therefore $$\frac{R\left(\varepsilon, [0,L)^d\right)}{L^d \log(1/\varepsilon)} + \int_{\mathcal{X}}\varphi \, d\mu \leq \frac{\log\#\left(\mathcal{X}, \mathbf{d}_L,\varphi_L,\varepsilon\right)}{L^d\log(1/\varepsilon)}.$$ Letting $L\to \infty$ $$\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)} + \int_{\mathcal{X}}\varphi \, d\mu \leq \lim_{L\to \infty} \frac{\log\#\left(\mathcal{X}, \mathbf{d}_L,\varphi_L,\varepsilon\right)}{L^d\log(1/\varepsilon)}.$$ Letting $\varepsilon\to 0$ we get the statement. ◻ $L^1$-mean Hausdorff dimension with potential is related to rate distortion dimension by the next theorem. We call this result "dynamical Frostman's lemma" because the classical Frostman's lemma [@Mattila Sections 8.14-8.17] plays an essential role in its proof. Recall that we have denoted by $\mathscr{M}^T(\mathcal{X})$ the set of $T$-invariant Borel probability measures on $\mathcal{X}$. **Theorem 29** (Dynamical Frostman's lemma). *$$\overline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d}, \varphi) \leq \sup_{\mu\in \mathscr{M}^T(\mathcal{X})}\left(\underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu) + \int_{\mathcal{X}}\varphi\, d\mu\right).$$* The proof of this theorem is the most important step of the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}.
It will be given in §[6](#section: proof of dynamical Frostman lemma){reference-type="ref" reference="section: proof of dynamical Frostman lemma"}. By combining Theorems [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"} and [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"}, we get $$\mathrm{mdim}(\mathcal{X},T,\varphi) \leq \sup_{\mu\in \mathscr{M}^T(\mathcal{X})}\left(\underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu) + \int_{\mathcal{X}}\varphi\, d\mu\right).$$ This is the statement of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. Therefore the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"} is reduced to the proofs of Theorems [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"} and [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"}. **Conjecture 30**. *Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. For any continuous function $\varphi\colon \mathcal{X}\to \mathbb{R}$ there exists a metric $\mathbf{d}$ on $\mathcal{X}$ compatible with the given topology satisfying $$\label{eq: dynamical PS} \mathrm{mdim}(\mathcal{X},T,\varphi) = \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X},T,\mathbf{d},\varphi).$$* Suppose that this conjecture is true and let $\mathbf{d}$ be a metric satisfying ([\[eq: dynamical PS\]](#eq: dynamical PS){reference-type="ref" reference="eq: dynamical PS"}). 
Then by Proposition [Proposition 27](#proposition: rate distortion dimension and metric mean dimension){reference-type="ref" reference="proposition: rate distortion dimension and metric mean dimension"} $$\begin{split} \mathrm{mdim}(\mathcal{X}, T, \varphi) & = \sup_{\mu\in \mathscr{M}^T(\mathcal{X})} \left(\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right) \\ & = \sup_{\mu\in \mathscr{M}^T(\mathcal{X})} \left(\overline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_\mathcal{X} \varphi\, d\mu\right). \end{split}$$ Namely Conjecture [Conjecture 30](#conjecture: dynamical PS){reference-type="ref" reference="conjecture: dynamical PS"} implies Conjecture [Conjecture 4](#conjecture: double variational principle){reference-type="ref" reference="conjecture: double variational principle"} in §[1.3](#subsection: main result){reference-type="ref" reference="subsection: main result"}. Conjecture [Conjecture 30](#conjecture: dynamical PS){reference-type="ref" reference="conjecture: dynamical PS"} is wide open in general. We plan to prove it for free minimal $\mathbb{R}^d$-actions in Part II of this series of papers. At the end of this section we present a small technical result on $L^1$-mean Hausdorff dimension with potential. This will be used in §[4](#section: mean dimension of Z^d-actions){reference-type="ref" reference="section: mean dimension of Z^d-actions"}.
Namely we have $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T, \mathbf{d},\varphi) & = \lim_{\varepsilon\to 0}\left( \limsup_{\substack{N\in \mathbb{N} \\ N\to \infty}} \frac{\dim_{\mathrm{H}}\left(\mathcal{X}, \overline{\mathbf{d}}_N, \varphi_N, \varepsilon \right)}{N^d}\right) \\ \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi) & = \lim_{\varepsilon\to 0}\left( \liminf_{\substack{N\in \mathbb{N} \\ N\to \infty}} \frac{\dim_{\mathrm{H}}\left(\mathcal{X}, \overline{\mathbf{d}}_N, \varphi_N, \varepsilon \right)}{N^d}\right). \end{aligned}$$ Here the parameter $N$ runs over natural numbers. A similar result also holds for upper and lower mean Hausdorff dimensions with potential.* *Proof.* We prove the lower case. The upper case is similar. By adding a positive constant to $\varphi$, we can assume that $\varphi$ is a nonnegative function. Set $$\underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi)^\prime = \lim_{\varepsilon\to 0}\left( \liminf_{\substack{N\in \mathbb{N} \\ N\to \infty}} \frac{\dim_{\mathrm{H}}\left(\mathcal{X}, \overline{\mathbf{d}}_N, \varphi_N, \varepsilon \right)}{N^d}\right).$$ It is obvious that $\underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi)^\prime$. We prove the reverse inequality. Let $L$ be a positive real number. We assume that it is sufficiently large so that $$\left(\frac{L}{L-1}\right)^d < 2.$$ Let $N:=\lfloor L\rfloor$ be the largest natural number not greater than $L$. Then for any $x,y\in \mathcal{X}$ we have $\overline{\mathbf{d}}_N(x, y) \leq (L/N)^d\, \overline{\mathbf{d}}_L(x,y) \leq 2\, \overline{\mathbf{d}}_L(x,y)$, because $N\geq L-1$. We also have $\varphi_N(x) \leq \varphi_L(x)$. Let $0<\varepsilon<1/2$.
We prove that for any $s>\max \varphi_L$ $$\label{eq: comparison of approximate Hausdorff measures} \mathcal{H}^{s-\frac{s}{\log \varepsilon}}_\varepsilon \left(\mathcal{X}, \overline{\mathbf{d}}_N, \varphi_N\right) \leq \mathcal{H}^s_{\varepsilon/2}\left(\mathcal{X},\overline{\mathbf{d}}_L, \varphi_L\right).$$ Indeed let $E$ be a subset of $\mathcal{X}$ with $\mathrm{Diam}(E, \overline{\mathbf{d}}_L) < \varepsilon/2$. Then $$\mathrm{Diam}(E, \overline{\mathbf{d}}_N) \leq 2\, \mathrm{Diam}(E, \overline{\mathbf{d}}_L) < \varepsilon.$$ Moreover $$\begin{aligned} \mathrm{Diam}(E, \overline{\mathbf{d}}_N)^{s-\frac{s}{\log\varepsilon}-\sup_{E}\varphi_N} & \leq \mathrm{Diam}(E, \overline{\mathbf{d}}_N)^{s-\frac{s}{\log\varepsilon}-\sup_{E}\varphi_L} \quad \text{by $\varphi_N\leq \varphi_L$}\\ & \leq \left(2\, \mathrm{Diam}(E, \overline{\mathbf{d}}_L) \right)^{s-\frac{s}{\log\varepsilon}-\sup_{E}\varphi_L} \\ & \leq \varepsilon^{-\frac{s}{\log\varepsilon}} \cdot \left(2\, \mathrm{Diam}(E, \overline{\mathbf{d}}_L) \right)^{s-\sup_{E}\varphi_L} \\ & = 2^{-s} \left(2\, \mathrm{Diam}(E, \overline{\mathbf{d}}_L) \right)^{s-\sup_{E}\varphi_L} \\ & \leq \mathrm{Diam}(E, \overline{\mathbf{d}}_L)^{s-\sup_{E}\varphi_L} \quad \text{by $\varphi_L\geq 0$}.\end{aligned}$$ Therefore we have ([\[eq: comparison of approximate Hausdorff measures\]](#eq: comparison of approximate Hausdorff measures){reference-type="ref" reference="eq: comparison of approximate Hausdorff measures"}) and hence $$\dim_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_N,\varphi_N,\varepsilon) \leq \left(1-\frac{1}{\log\varepsilon}\right)\dim_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_L, \varphi_L, \frac{\varepsilon}{2}\right).$$ We divide this by $L^d$ and let $L\to \infty$ and $\varepsilon\to 0$. Then we get $\underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi)^\prime \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi)$. ◻ **Remark 32**. 
In the definitions ([\[eq: definition of mean dimension with potential\]](#eq: definition of mean dimension with potential){reference-type="ref" reference="eq: definition of mean dimension with potential"}) and ([\[eq: metric mean dimension with potential\]](#eq: metric mean dimension with potential){reference-type="ref" reference="eq: metric mean dimension with potential"}) of mean dimension with potential and (upper/lower) metric mean dimension with potential, the limits with respect to $L$ exist. Therefore we can also restrict the parameter $L$ to natural numbers when we take the limits. # Mean dimension of $\mathbb{Z}^d$-actions {#section: mean dimension of Z^d-actions} In this section we prepare some basic results on mean dimension theory of $\mathbb{Z}^d$-actions. We need it in the proof of Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"}. This is a rather technical and indirect approach. It is desirable to find a more direct proof of Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"}. However we have not found it so far[^4]. The paper of Huo--Yuan [@Huo--Yuan] studies the variational principle for mean dimension of $\mathbb{Z}^d$-actions. Proposition [Proposition 35](#proposition: L^1 mean Hausdorff dimension and mean Hausdorff dimension under tame growth){reference-type="ref" reference="proposition: L^1 mean Hausdorff dimension and mean Hausdorff dimension under tame growth"} and Theorem [Theorem 40](#theorem: mdim is bounded by mdimh for Z^d actions){reference-type="ref" reference="theorem: mdim is bounded by mdimh for Z^d actions"} below were already mentioned in their paper [@Huo--Yuan Lemma 2.12 and Lemma 2.15] in the case that the potential function is zero. 
## Definitions of various mean dimensions for $\mathbb{Z}^d$-actions {#subsection: definitions of various mean dimensions for Z^d-actions} For a natural number $N$ we set $$[N]^d = \{0,1,2,\dots, N-1\}^d.$$ Let $T\colon \mathbb{Z}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of the group $\mathbb{Z}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. For a natural number $N$ we define metrics $\mathbf{d}_N$ and $\overline{\mathbf{d}}_N$ and a function $\varphi_N$ on $\mathcal{X}$ by $$\mathbf{d}_N(x, y) = \max_{u\in [N]^d} \mathbf{d}(T^u x, T^u y), \quad \overline{\mathbf{d}}_N(x,y) = \frac{1}{N^d}\sum_{u\in [N]^d}\mathbf{d}(T^u x, T^u y),$$ $$\varphi_N(x) = \sum_{u\in [N]^d}\varphi(T^u x).$$ In the sequel we will sometimes consider $\mathbb{Z}^d$-actions and $\mathbb{R}^d$-actions simultaneously. In that case we use the notations $\mathbf{d}_N^{\mathbb{Z}}, \overline{\mathbf{d}}_N^{\mathbb{Z}}, \varphi^{\mathbb{Z}}_N$ for clarifying that these quantities are defined with respect to $\mathbb{Z}^d$-actions. (On the other hand, we will use the notations $\mathbf{d}_N^{\mathbb{R}}, \overline{\mathbf{d}}_N^{\mathbb{R}}, \varphi^{\mathbb{R}}_N$ when they are defined with respect to $\mathbb{R}^d$-actions.) We define mean dimension with potential by $$\mathrm{mdim}(\mathcal{X}, T, \varphi) = \lim_{\varepsilon\to 0} \left(\lim_{N\to \infty} \frac{\mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_N, \varphi_N)}{N^d}\right).$$ This is a topological invariant (i.e. independent of the choice of $\mathbf{d}$). 
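The quantities $\mathbf{d}_N$, $\overline{\mathbf{d}}_N$ and $\varphi_N$ are straightforward to evaluate on concrete examples. The following sketch (our own illustration for $d=1$; the circle homeomorphism and the test potential are arbitrary choices, not taken from the text) computes them along orbit segments and checks the pointwise inequality $\overline{\mathbf{d}}_N \leq \mathbf{d}_N$.

```python
import math

def T1(x):
    """One step of the circle homeomorphism T(x) = x + 0.1*sin(2*pi*x) (mod 1)."""
    return (x + 0.1 * math.sin(2 * math.pi * x)) % 1.0

def orbit(x, N):
    """The orbit segment [x, Tx, ..., T^(N-1) x]."""
    seg = []
    for _ in range(N):
        seg.append(x)
        x = T1(x)
    return seg

def d(x, y):
    """Arc-length metric on the circle R/Z."""
    t = abs(x - y) % 1.0
    return min(t, 1.0 - t)

def d_N(N, x, y):
    """d_N(x, y) = max over 0 <= u < N of d(T^u x, T^u y)."""
    return max(d(a, b) for a, b in zip(orbit(x, N), orbit(y, N)))

def d_bar_N(N, x, y):
    """Averaged (L^1) metric: (1/N) * sum over 0 <= u < N of d(T^u x, T^u y)."""
    return sum(d(a, b) for a, b in zip(orbit(x, N), orbit(y, N))) / N

def phi_N(N, phi, x):
    """Ergodic sum phi_N(x) = sum over 0 <= u < N of phi(T^u x)."""
    return sum(phi(p) for p in orbit(x, N))

def phi(t):
    """A test potential (arbitrary choice)."""
    return math.sin(2 * math.pi * t) ** 2

x, y, N = 0.12, 0.34, 40
assert d_bar_N(N, x, y) <= d_N(N, x, y)  # an average never exceeds a maximum
```

Since $\overline{\mathbf{d}}_N$ averages the same $N^d$ distances that $\mathbf{d}_N$ maximizes, the checked inequality holds for every pair of points; this is the inequality used below to compare the two mean Hausdorff dimensions.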
We define upper/lower mean Hausdorff dimension with potential by $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d},\varphi) &= \lim_{\varepsilon\to 0} \left(\limsup_{N\to \infty} \frac{\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d}_N, \varphi_N,\varepsilon)}{N^d}\right), \\ \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d},\varphi) &= \lim_{\varepsilon\to 0} \left(\liminf_{N\to \infty} \frac{\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d}_N, \varphi_N,\varepsilon)}{N^d}\right).\end{aligned}$$ We define upper/lower $L^1$-mean Hausdorff dimension with potential by $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T, \mathbf{d},\varphi) &= \lim_{\varepsilon\to 0} \left(\limsup_{N\to \infty} \frac{\dim_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_N, \varphi_N,\varepsilon)}{N^d}\right), \\ \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi) &= \lim_{\varepsilon\to 0} \left(\liminf_{N\to \infty} \frac{\dim_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_N, \varphi_N,\varepsilon)}{N^d}\right).\end{aligned}$$ Since $\overline{\mathbf{d}}_N(x, y) \leq \mathbf{d}_N(x,y)$, we have $$\overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T, \mathbf{d},\varphi) \leq \overline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d},\varphi), \quad \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T, \mathbf{d},\varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d},\varphi).$$ We can also consider upper/lower metric mean dimension with potential for $\mathbb{Z}^d$-actions. But we do not need them in this paper. ## Tame growth of covering numbers {#subsection: tame growth of covering numbers} The purpose of this subsection is to establish a convenient sufficient condition under which mean Hausdorff dimension with potential and $L^1$-mean Hausdorff dimension with potential coincide. The following is a key definition [@Lindenstrauss--Tsukamoto; @IEEE Condition 3]. 
**Definition 33**. A compact metric space $(\mathcal{X}, \mathbf{d})$ is said to have **tame growth of covering numbers** if for any positive number $\delta$ we have $$\lim_{\varepsilon\to 0} \varepsilon^\delta \log \#\left(\mathcal{X}, \mathbf{d}, \varepsilon\right) = 0.$$ Recall that $\#\left(\mathcal{X}, \mathbf{d}, \varepsilon\right)$ is the minimum number $n$ such that there is an open cover $\mathcal{X} = U_1\cup U_2\cup \dots \cup U_n$ with $\mathrm{Diam}U_i < \varepsilon$ for all $1\leq i \leq n$. Notice that this is purely a condition on metric geometry. It does not involve dynamics. For example, every compact subset of the Euclidean space $\mathbb{R}^n$ has the tame growth of covering numbers with respect to the Euclidean metric. The Hilbert cube $[0,1]^{\mathbb{N}}$ has the tame growth of covering numbers with respect to the metric $$\mathbf{d}\left((x_n)_{n\in \mathbb{N}}, (y_n)_{n\in \mathbb{N}}\right) = \sum_{n=1}^\infty 2^{-n}|x_n-y_n|.$$ The next lemma shows that every compact metrizable space admits a metric having the tame growth of covering numbers [@Lindenstrauss--Tsukamoto; @double Lemma 3.10]. **Lemma 34**. *For any compact metric space $(\mathcal{X},\mathbf{d})$ there exists a metric $\mathbf{d}^\prime$ on $\mathcal{X}$ compatible with the given topology satisfying the following two conditions.* - *For all $x, y\in \mathcal{X}$ we have $\mathbf{d}^\prime(x,y) \leq \mathbf{d}(x,y)$.* - *The space $(\mathcal{X}, \mathbf{d}^\prime)$ has the tame growth of covering numbers.* *Proof.* Take a countable dense subset $\{x_1, x_2, x_3, \dots\}$ of $\mathcal{X}$. We define a metric $\mathbf{d}^\prime$ by $$\mathbf{d}^\prime(x,y) = \sum_{n=1}^\infty 2^{-n} \left|\mathbf{d}(x, x_n) - \mathbf{d}(y, x_n)\right|.$$ It is easy to check that this satisfies the statement. ◻ **Proposition 35**. *Let $(\mathcal{X},\mathbf{d})$ be a compact metric space having the tame growth of covering numbers.
Let $T\colon \mathbb{Z}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of the group $\mathbb{Z}^d$. For any continuous function $\varphi\colon \mathcal{X}\to \mathbb{R}$ we have $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X},T,\mathbf{d},\varphi) = \overline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi), \\ \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi) = \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi). \end{aligned}$$* *Proof.* The case of $d=1$ was proved in [@Tsukamoto; @potential Lemma 4.3]. The following argument is its simple generalization. We prove the lower case. The upper case is similar. It is obvious that $\underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi)$. We prove the reverse inequality. By adding a positive constant to $\varphi$, we can assume that $\varphi$ is a nonnegative function. For a finite subset $A$ of $\mathbb{Z}^d$ we define a metric $\mathbf{d}_A$ on $\mathcal{X}$ by $$\mathbf{d}_A(x,y) = \max_{u\in A} \mathbf{d}(T^u x, T^u y).$$ Let $s$ be an arbitrary positive number with $\underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi) <s$. Let $0<\delta<1/2$ be arbitrary. For each positive number $\tau$ we take an open cover $\mathcal{X} = W^\tau_1\cup \dots \cup W^\tau_{M(\tau)}$ with $M(\tau) = \#\left(\mathcal{X},\mathbf{d},\tau\right)$ and $\mathrm{Diam}(W^\tau_i, \mathbf{d}) < \tau$ for all $1\leq i \leq M(\tau)$. From the condition of tame growth of covering numbers, we can find $0<\varepsilon_0<1$ satisfying $$\begin{aligned} & M(\tau)^{\tau^\delta} < 2 \quad \text{for all $0<\tau<\varepsilon_0$}, \label{eq: choice of varepsilon_0 covering number} \\ & 2^{2+(1+2\delta)s}\, \varepsilon_0^{s \delta(1-2\delta)} < 1. \label{eq: varepsilon_0 is small}\end{aligned}$$ Let $0<\varepsilon<\varepsilon_0$. 
Since $\underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi) < s$, we have $\dim_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_N, \varphi_N, \varepsilon) < sN^d$ for infinitely many $N$. Pick such an $N$. Then there is a covering $\mathcal{X} = \bigcup_{n=1}^\infty E_n$ with $\tau_n:= \mathrm{Diam}(E_n,\overline{\mathbf{d}}_N) < \varepsilon$ for all $n\geq 1$ and $$\label{eq: choice of tau_n} \sum_{n=1}^\infty \tau_n^{sN^d -\sup_{E_n} \varphi_N} < 1.$$ Set $L_n = \tau_n^{-\delta}$. Pick $x_n\in E_n$ for each $n$. Every $x\in E_n$ satisfies $\overline{\mathbf{d}}_N(x,x_n) \leq \tau_n$ and hence $$\left|\{u\in [N]^d\mid \mathbf{d}(T^u x, T^u x_n) \geq L_n \tau_n\}\right| \leq \frac{N^d}{L_n}.$$ Namely there is $A\subset [N]^d$ (depending on $x$) such that $|A| \leq N^d/L_n$ and $\mathbf{d}_{[N]^d\setminus A}(x, x_n) < L_n \tau_n$. Therefore $$E_n\subset \bigcup_{\substack{A\subset [N]^d \\ |A|\leq N^d/L_n}} B_{L_n\tau_n}(x_n, \mathbf{d}_{[N]^d\setminus A}).$$ Here $B_{L_n\tau_n}(x_n, \mathbf{d}_{[N]^d\setminus A})$ is the ball of radius $L_n \tau_n$ with respect to $\mathbf{d}_{[N]^d\setminus A}$ centered at $x_n$.
For $A = \{a_1, \dots, a_r\}\subset [N]^d$ and $1\leq i_1, \dots, i_r \leq M(\tau_n)$ we set $$W(A, \tau_n, i_1, \dots,i_r) = T^{-a_1}W_{i_1}^{\tau_n}\cap \dots \cap T^{-a_r}W_{i_r}^{\tau_n}.$$ We have $$\mathcal{X} = \bigcup_{1\leq i_1, \dots, i_r\leq M(\tau_n)} W(A, \tau_n, i_1, \dots, i_r), \quad (\text{here $A$ and $\tau_n$ are fixed}),$$ and hence $$B_{L_n\tau_n}(x_n, \mathbf{d}_{[N]^d\setminus A}) = \bigcup_{1\leq i_1, \dots, i_r\leq M(\tau_n)} B_{L_n\tau_n}(x_n, \mathbf{d}_{[N]^d\setminus A}) \cap W(A, \tau_n, i_1, \dots, i_r).$$ Then $$\mathcal{X} = \bigcup_{n=1}^\infty \bigcup_{\substack{A\subset [N]^d \\ r:= |A|\leq N^d/L_n}} \bigcup_{1\leq i_1, \dots, i_r\leq M(\tau_n)} E_n\cap B_{L_n\tau_n}(x_n, \mathbf{d}_{[N]^d\setminus A}) \cap W(A, \tau_n, i_1, \dots, i_r).$$ The diameter of $E_n\cap B_{L_n\tau_n}(x_n, \mathbf{d}_{[N]^d\setminus A}) \cap W(A, \tau_n, i_1, \dots, i_r)$ with respect to $\mathbf{d}_N$ is less than or equal to $2L_n \tau_n = 2\tau_n^{1-\delta} < 2\varepsilon^{1-\delta}$. Hence $$\mathcal{H}^{(1+2\delta)sN^d}_{2\varepsilon^{1-\delta}}\left(\mathcal{X},\mathbf{d}_N, \varphi_N\right) \leq \sum_{n=1}^\infty 2^{N^d}\, M(\tau_n)^{N^d/L_n}\left(2\tau_n^{1-\delta}\right)^{(1+2\delta)sN^d-\sup_{E_n}\varphi_N}.$$ Here the factor $2^{N^d}$ comes from the choice of $A\subset [N]^d$. 
By $L_n = \tau_n^{-\delta}$ and ([\[eq: choice of varepsilon_0 covering number\]](#eq: choice of varepsilon_0 covering number){reference-type="ref" reference="eq: choice of varepsilon_0 covering number"}) $$M(\tau_n)^{N^d/L_n} = \left(M(\tau_n)^{\tau_n^\delta}\right)^{N^d} < 2^{N^d}.$$ Since $\varphi$ is a nonnegative function, $$2^{(1+2\delta)sN^d-\sup_{E_n}\varphi_N} \leq 2^{(1+2\delta)sN^d}.$$ Hence $$\mathcal{H}^{(1+2\delta)sN^d}_{2\varepsilon^{1-\delta}}\left(\mathcal{X},\mathbf{d}_N, \varphi_N\right) \leq \sum_{n=1}^\infty \left(2^{2+(1+2\delta)s}\right)^{N^d} \left(\tau_n^{1-\delta}\right)^{(1+2\delta)sN^d-\sup_{E_n}\varphi_N}.$$ We have $$\begin{aligned} \left(\tau_n^{1-\delta}\right)^{(1+2\delta)sN^d-\sup_{E_n}\varphi_N} & = \tau_n^{-\delta\left\{(1+2\delta)sN^d-\sup_{E_n}\varphi_N\right\}}\cdot \tau_n^{(1+2\delta)sN^d-\sup_{E_n}\varphi_N} \\ &= \tau_n^{\delta\left\{(1-2\delta)sN^d+\sup_{E_n}\varphi_N\right\}}\cdot \tau_n^{sN^d-\sup_{E_n}\varphi_N}.\end{aligned}$$ Since $\varphi$ is nonnegative and $\tau_n < \varepsilon < \varepsilon_0< 1$ $$\tau_n^{\delta\left\{(1-2\delta)sN^d+\sup_{E_n}\varphi_N\right\}} \leq \tau_n^{\delta(1-2\delta)sN^d} < \varepsilon_0^{\delta(1-2\delta)sN^d}.$$ Therefore $$\begin{aligned} \mathcal{H}^{(1+2\delta)sN^d}_{2\varepsilon^{1-\delta}}\left(\mathcal{X},\mathbf{d}_N, \varphi_N\right) & \leq \sum_{n=1}^\infty \underbrace{\left(2^{2+(1+2\delta)s} \cdot \varepsilon_0^{\delta(1-2\delta)s} \right)^{N^d}}_{\text{$<1$ by (\ref{eq: varepsilon_0 is small})}} \, \tau_n^{sN^d-\sup_{E_n}\varphi_N} \\ &\leq \sum_{n=1}^\infty \tau_n^{sN^d-\sup_{E_n}\varphi_N} < 1 \quad \text{by (\ref{eq: choice of tau_n})}.\end{aligned}$$ Thus $$\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d}_N,\varphi_N,2\varepsilon^{1-\delta}) \leq (1+2\delta)sN^d.$$ This holds for infinitely many $N$.
Hence $$\liminf_{N\to \infty} \frac{\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d}_N,\varphi_N,2\varepsilon^{1-\delta})}{N^d} \leq (1+2\delta)s.$$ Letting $\varepsilon \to 0$ $$\underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi) \leq (1+2\delta)s.$$ Letting $\delta\to 0$ and $s\to \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi)$, we get $$\underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi).$$ ◻ ## $\mathbb{R}^d$-actions and $\mathbb{Z}^d$-actions {#subsection: R^d-actions and Z^d-actions} We naturally consider $\mathbb{Z}^d$ as a subgroup of $\mathbb{R}^d$. Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. We denote by $T|_{\mathbb{Z}^d}\colon \mathbb{Z}^d\times \mathcal{X}\to \mathcal{X}$ the restriction of $T$ to the subgroup $\mathbb{Z}^d$. In this subsection we study relations between various mean dimensions of $T$ and $T|_{\mathbb{Z}^d}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. **Lemma 36**. 
*We have: $$\begin{aligned} & \mathrm{mdim}(\mathcal{X}, T, \varphi) = \mathrm{mdim}(\mathcal{X}, T|_{\mathbb{Z}^d}, \varphi^{\mathbb{R}}_1), \\ & \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X},T,\mathbf{d},\varphi) = \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T|_{\mathbb{Z}^d}, \overline{\mathbf{d}}^{\mathbb{R}}_1, \varphi^{\mathbb{R}}_1), \\ & \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X},T,\mathbf{d},\varphi) = \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T|_{\mathbb{Z}^d}, \overline{\mathbf{d}}^{\mathbb{R}}_1, \varphi^{\mathbb{R}}_1).\end{aligned}$$ Here $\overline{\mathbf{d}}^{\mathbb{R}}_1$ and $\varphi^{\mathbb{R}}_1$ are a metric and a function on $\mathcal{X}$ defined by $$\overline{\mathbf{d}}^{\mathbb{R}}_1(x,y) = \int_{[0,1)^d} \mathbf{d}(T^u x, T^u y)\, du, \quad \varphi^{\mathbb{R}}_1(x) = \int_{[0,1)^d}\varphi(T^u x)\, du.$$ We also have a similar result for mean Hausdorff dimension with potential by replacing $\overline{\mathbf{d}}_1^{\mathbb{R}}(x,y)$ by $\mathbf{d}_1^{\mathbb{R}}(x,y) = \sup_{u\in [0, 1)^d}\mathbf{d}(T^u x, T^u y)$.* *Proof.* Set $\rho= \overline{\mathbf{d}}^{\mathbb{R}}_1$ and $\psi = \varphi^{\mathbb{R}}_1$. 
For a natural number $N$ we have $$\begin{aligned} \overline{\rho}^{\mathbb{Z}}_N(x, y) & = \frac{1}{N^d}\sum_{u\in [N]^d}\rho(T^u x, T^u y) = \frac{1}{N^d}\sum_{u\in [N]^d} \int_{v\in [0,1)^d}\mathbf{d}(T^{u+v}x, T^{u+v}y)\, dv \\ & = \frac{1}{N^d} \int_{[0,N)^d}\mathbf{d}(T^v x, T^v y) \, dv = \overline{\mathbf{d}}^{\mathbb{R}}_N(x,y).\end{aligned}$$ Similarly $$\psi^{\mathbb{Z}}_N(x) = \sum_{u\in [N]^d}\psi(T^u x) = \int_{[0,N)^d} \varphi(T^v x) \, dv = \varphi^{\mathbb{R}}_N(x).$$ By using Lemma [Lemma 31](#lemma: mean Hausdorff dimension integer parameter){reference-type="ref" reference="lemma: mean Hausdorff dimension integer parameter"} $$\begin{aligned} \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T,\mathbf{d}, \varphi) & = \lim_{\varepsilon\to 0} \left(\liminf_{\substack{N\in \mathbb{N} \\ N\to \infty}} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}^{\mathbb{R}}_N, \varphi^{\mathbb{R}}_N, \varepsilon\right)}{N^d}\right) \\ & = \lim_{\varepsilon\to 0} \left(\liminf_{\substack{N\in \mathbb{N} \\ N\to \infty}} \frac{\dim_{\mathrm{H}}\left(\mathcal{X},\overline{\rho}^{\mathbb{Z}}_N, \psi^{\mathbb{Z}}_N, \varepsilon\right)}{N^d}\right) \\ & = \underline{\mathrm{mdim}}_{\mathrm{H},L^1}(\mathcal{X}, T|_{\mathbb{Z}^d},\rho, \psi).\end{aligned}$$ We can prove the case of upper $L^1$-mean Hausdorff dimension with potential in the same way. The case of (topological) mean dimension with potential can also be proved similarly by using $\left(\mathbf{d}^{\mathbb{R}}_1\right)^{\mathbb{Z}}_N = \mathbf{d}_N^{\mathbb{R}}$. ◻ # Mean dimension is bounded by mean Hausdorff dimension: proof of Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"} {#section: proof of mdim is bounded by mdimh} In this section we prove Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"}.
## A variation of the definition of mean dimension with potential {#subsection: a variation of the definition of mean dimension with potential} This subsection is a simple generalization of [@Tsukamoto; @potential §3.2]. Here we introduce a variation of the definition of mean dimension with potential. Let $P$ be a finite simplicial complex and $a\in P$. We define **small local dimension** $\dim^\prime_a P$ as the minimum of $\dim \Delta$ where $\Delta$ is a simplex of $P$ containing $a$. See Figure [2](#figure: small local dimension){reference-type="ref" reference="figure: small local dimension"}. (This is the same as [@Tsukamoto; @potential Figure 2].) ![Here $P$ has four vertices (denoted by dots), four $1$-dimensional simplices and one $2$-dimensional simplex. The points $b$ and $d$ are vertices of $P$ whereas $a$ and $c$ are not. We have $\dim^\prime_a P =2$, $\dim^\prime_b P =0$, $\dim^\prime_c P =1$ and $\dim^\prime_d P =0$. Recall $\dim_a P = \dim_b P =2$ and $\dim_c P = \dim_d P =1$.](local_dimension.eps){#figure: small local dimension width="3.0in"} Let $(\mathcal{X},\mathbf{d})$ be a compact metric space and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. For $\varepsilon>0$ we set $$\begin{split} & \mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi) \\ & = \inf\left\{\sup_{x\in \mathcal{X}} \left(\dim^\prime_{f(x)} P + \varphi(x)\right) \middle| \parbox{3in}{\centering $P$ is a finite simplicial complex and $f:\mathcal{X}\to P$ is an $\varepsilon$-embedding}\right\}. \end{split}$$ We also set $$\mathrm{var}_\varepsilon(\varphi, \mathbf{d}) = \sup\{|\varphi(x)-\varphi(y)| \, |\, \mathbf{d}(x,y) < \varepsilon\}.$$ The following lemma is given in [@Tsukamoto; @potential Lemma 3.4]. **Lemma 37**.
*$$\mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi) \leq \mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi) \leq \mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi) + \mathrm{var}_\varepsilon(\varphi, \mathbf{d}).$$* The next lemma shows that we can use $\mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi)$ instead of $\mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}, \varphi)$ in the definition of mean dimension with potential. **Lemma 38**. *Let $T\colon \mathbb{Z}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{Z}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. Then $$\label{eq: definition of mean dimension by small local dimension} \mathrm{mdim}(\mathcal{X}, T, \varphi) = \lim_{\varepsilon\to 0} \left(\lim_{N\to \infty} \frac{\mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}_N, \varphi_N)}{N^d}\right).$$ Here the limits in the right-hand side exist as in §[1.2](#subsection: mean dimension with potential of R^d-actions){reference-type="ref" reference="subsection: mean dimension with potential of R^d-actions"}.* *Proof.* By Lemma [Lemma 37](#lemma: widim and small widim){reference-type="ref" reference="lemma: widim and small widim"}, for any natural number $N$, we have $$\mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}_N, \varphi_N) \leq \mathrm{Widim}_\varepsilon(\mathcal{X}, \mathbf{d}_N, \varphi_N) \leq \mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d}_N, \varphi_N) + \mathrm{var}_\varepsilon(\varphi_N, \mathbf{d}_N).$$ We have $$\mathrm{var}_\varepsilon(\varphi_N, \mathbf{d}_N) \leq N^d \mathrm{var}_\varepsilon(\varphi, \mathbf{d}).$$ Then $$\lim_{\varepsilon\to 0} \left(\limsup_{N\to \infty} \frac{\mathrm{var}_\varepsilon(\varphi_N, \mathbf{d}_N)}{N^d}\right) \leq \lim_{\varepsilon \to 0} \mathrm{var}_\varepsilon(\varphi, \mathbf{d}) = 
0.$$ Dividing the first chain of inequalities by $N^d$ and letting $N\to \infty$ and then $\varepsilon\to 0$ yields ([\[eq: definition of mean dimension by small local dimension\]](#eq: definition of mean dimension by small local dimension){reference-type="ref" reference="eq: definition of mean dimension by small local dimension"}). ◻ ## Case of $\mathbb{Z}^d$-actions {#subsection: case of Z^d-actions} In this subsection we prove that, for $\mathbb{Z}^d$-actions, mean dimension with potential is bounded from above by lower mean Hausdorff dimension with potential. A key ingredient of the proof is the following result on metric geometry. This was proved in [@Tsukamoto; @potential Lemma 3.8]. **Lemma 39**. *Let $(\mathcal{X}, \mathbf{d})$ be a compact metric space and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. Let $\varepsilon$ and $L$ be positive numbers, and let $s$ be a real number with $s>\max_{\mathcal{X}} \varphi$. Suppose there exists a map $f\colon \mathcal{X}\to [0,1]^N$ such that* - *$\left\lVert f(x)-f(y)\right\rVert_\infty \leq L \, \mathbf{d}(x, y)$ for all $x, y\in \mathcal{X}$,* - *if $\mathbf{d}(x, y) \geq \varepsilon$ then $\left\lVert f(x)-f(y)\right\rVert_\infty =1$.* *Here $\left\lVert\cdot\right\rVert_\infty$ is the $\ell^\infty$-norm. Moreover we assume $$4^N (L+1)^{1+s+\left\lVert\varphi\right\rVert_\infty} \mathcal{H}_1^s\left(\mathcal{X},\mathbf{d},\varphi\right) < 1,$$ where $\left\lVert\varphi\right\rVert_\infty = \max_{x\in \mathcal{X}} |\varphi(x)|$. Then we conclude that $$\mathrm{Widim}^\prime_\varepsilon(\mathcal{X}, \mathbf{d},\varphi) \leq s + 1.$$* The following theorem is the main result of this subsection. **Theorem 40**. *Let $T\colon \mathbb{Z}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{Z}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. Then $$\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi).$$* *Proof.* If $\underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi)$ is infinite then the statement is trivial. So we assume that it is finite.
Let $s$ be an arbitrary number larger than $\underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi)$. We prove that $\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq s$. Let $\varepsilon$ be a positive number. There is a Lipschitz map $f\colon \mathcal{X} \to [0,1]^M$ such that[^5] if $\mathbf{d}(x, y) \geq \varepsilon$ then $\left\lVert f(x)-f(y)\right\rVert_\infty =1$. Let $L$ be a Lipschitz constant of $f$. Namely we have $\left\lVert f(x)-f(y)\right\rVert_\infty \leq L \, \mathbf{d}(x,y)$. For each natural number $N$ we define $f_N\colon \mathcal{X}\to \left([0,1]^M\right)^{[N]^d}$ by $$f_N(x) = \left(f(T^u x)\right)_{u \in [N]^d}.$$ Then we have - $\left\lVert f_N(x)-f_N(y)\right\rVert_\infty \leq L \, \mathbf{d}_N(x, y)$ for all $x, y \in \mathcal{X}$, - if $\mathbf{d}_N(x, y) \geq \varepsilon$ then $\left\lVert f_N(x)-f_N(y)\right\rVert_\infty =1$. Let $\tau$ be an arbitrary positive number. We choose a positive number $\delta<1$ satisfying $$\label{eq: choice of delta in proof of mdim is bounded by mdimh} 4^M (L+1)^{1+s+\tau+\left\lVert\varphi\right\rVert_\infty} \, \delta^\tau < 1.$$ From $\underline{\mathrm{mdim}}_{\mathrm{H}}(\mathcal{X}, T, \mathbf{d}, \varphi) < s$, there is a sequence of natural numbers $N_1<N_2<N_3<\dots$ such that $$\dim_{\mathrm{H}}\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}, \delta\right) < sN_n^d \quad \text{for all $n\geq 1$}.$$ Then $\mathcal{H}^{sN_n^d}_\delta\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}\right) < 1$ and hence $$\mathcal{H}_\delta^{(s+\tau)N_n^d}\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}\right) \leq \delta^{\tau N_n^d} \mathcal{H}^{sN_n^d}_\delta\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}\right) < \delta^{\tau N_n^d}.$$ Therefore $$\begin{aligned} 4^{MN_n^d}(L+1)^{1+(s+\tau)N_n^d+\left\lVert\varphi_{N_n}\right\rVert_\infty} \, \mathcal{H}_1^{(s+\tau)N_n^d}\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}\right) & < \left\{4^M 
(L+1)^{1+s+\tau+\left\lVert\varphi\right\rVert_\infty} \, \delta^{\tau}\right\}^{N_n^d} \\ & < 1 \quad (\text{by (\ref{eq: choice of delta in proof of mdim is bounded by mdimh})}).\end{aligned}$$ Here we have used $\left\lVert\varphi_{N_n}\right\rVert_\infty \leq N_n^{d} \left\lVert\varphi\right\rVert_\infty$ in the first inequality. Now we can use Lemma [Lemma 39](#lemma: widim is bounded by using Hausdorff dimension){reference-type="ref" reference="lemma: widim is bounded by using Hausdorff dimension"} and conclude $$\mathrm{Widim}^\prime_{\varepsilon}\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}\right) \leq (s+\tau)N_n^d+1.$$ Therefore $$\lim_{N\to \infty} \frac{\mathrm{Widim}^\prime_{\varepsilon}\left(\mathcal{X}, \mathbf{d}_{N}, \varphi_{N}\right)}{N^d} = \lim_{n \to \infty} \frac{\mathrm{Widim}^\prime_{\varepsilon}\left(\mathcal{X}, \mathbf{d}_{N_n}, \varphi_{N_n}\right)}{N_n^d} \leq s+\tau.$$ Since $\tau>0$ is arbitrary, we have $$\lim_{N\to \infty} \frac{\mathrm{Widim}^\prime_{\varepsilon}\left(\mathcal{X}, \mathbf{d}_{N}, \varphi_{N}\right)}{N^d} \leq s.$$ Thus, letting $\varepsilon \to 0$, we have $\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq s$ by Lemma [Lemma 38](#lemma: mean dimension by small local dimension){reference-type="ref" reference="lemma: mean dimension by small local dimension"}. ◻ ## Proof of Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"} {#subsection: proof of Theorem mdim is bounded by mdimh} Now we can prove Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"}. We write the statement again. **Theorem 41** ($=$ Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"}). *Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. 
Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. Then we have $$\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right).$$* *Proof.* We use the notations in Lemma [Lemma 36](#lemma: mean dimension of R^d action and Z^d action){reference-type="ref" reference="lemma: mean dimension of R^d action and Z^d action"}. Namely, we denote by $T|_{\mathbb{Z}^d}\colon \mathbb{Z}^d\times \mathcal{X} \to \mathcal{X}$ the restriction of $T$ to the subgroup $\mathbb{Z}^d$. We define a metric $\overline{\mathbf{d}}_1^{\mathbb{R}}$ and a function $\varphi^{\mathbb{R}}_1$ on $\mathcal{X}$ by $$\overline{\mathbf{d}}_1^{\mathbb{R}}(x, y) = \int_{[0,1)^d}\mathbf{d}(T^u x, T^u y)\, du, \quad \varphi^{\mathbb{R}}_1(x) = \int_{[0,1)^d}\varphi(T^u x)\, du.$$ By Lemma [Lemma 36](#lemma: mean dimension of R^d action and Z^d action){reference-type="ref" reference="lemma: mean dimension of R^d action and Z^d action"} $$\begin{aligned} \mathrm{mdim}(\mathcal{X}, T, \varphi) & = \mathrm{mdim}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \varphi^{\mathbb{R}}_1\right) \\ \underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right) & = \underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \overline{\mathbf{d}}^{\mathbb{R}}_1, \varphi^{\mathbb{R}}_1\right).\end{aligned}$$ By Lemma [Lemma 34](#lemma: tame growth of covering numbers){reference-type="ref" reference="lemma: tame growth of covering numbers"} there exists a metric $\mathbf{d}^\prime$ on $\mathcal{X}$ such that $\mathbf{d}^\prime(x,y) \leq \overline{\mathbf{d}}^{\mathbb{R}}_1(x, y)$ for all $x, y\in \mathcal{X}$ and that $(\mathcal{X}, \mathbf{d}^\prime)$ has the tame growth of covering numbers. 
By $\mathbf{d}^\prime(x,y) \leq \overline{\mathbf{d}}^{\mathbb{R}}_1(x, y)$ we have $$\underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \mathbf{d}^\prime, \varphi^{\mathbb{R}}_1\right) \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \overline{\mathbf{d}}^{\mathbb{R}}_1, \varphi^{\mathbb{R}}_1\right).$$ Since $(\mathcal{X}, \mathbf{d}^\prime)$ has the tame growth of covering numbers, we can apply Proposition [Proposition 35](#proposition: L^1 mean Hausdorff dimension and mean Hausdorff dimension under tame growth){reference-type="ref" reference="proposition: L^1 mean Hausdorff dimension and mean Hausdorff dimension under tame growth"} to it and get $$\underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \mathbf{d}^\prime, \varphi^{\mathbb{R}}_1\right) = \underline{\mathrm{mdim}}_{\mathrm{H}}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \mathbf{d}^\prime, \varphi^{\mathbb{R}}_1\right).$$ By Theorem [Theorem 40](#theorem: mdim is bounded by mdimh for Z^d actions){reference-type="ref" reference="theorem: mdim is bounded by mdimh for Z^d actions"} $$\mathrm{mdim}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \varphi^{\mathbb{R}}_1\right) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}\left(\mathcal{X}, T|_{\mathbb{Z}^d}, \mathbf{d}^\prime, \varphi^{\mathbb{R}}_1\right).$$ Combining all the above inequalities, we conclude $$\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right).$$ ◻ **Remark 42**. 
Since $\underline{\mathrm{mdim}}_{\mathrm{H},L^1}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right)$, we also have $$\mathrm{mdim}(\mathcal{X}, T, \varphi) \leq \underline{\mathrm{mdim}}_{\mathrm{H}}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right).$$ We can directly prove this inequality (for $\mathbb{R}^d$-actions) without using $\mathbb{Z}^d$-actions. The proof is almost identical to the proof of Theorem [Theorem 40](#theorem: mdim is bounded by mdimh for Z^d actions){reference-type="ref" reference="theorem: mdim is bounded by mdimh for Z^d actions"}. However we have not found a direct proof of Theorem [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"} (the $L^1$ version) so far. # Mean Hausdorff dimension is bounded by rate distortion dimension: proof of Theorem [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"} {#section: proof of dynamical Frostman lemma} In this section we prove Theorem [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"} (dynamical Frostman's lemma). The proof is based on results on mutual information prepared in §[2.2](#subsection: mutual information){reference-type="ref" reference="subsection: mutual information"}. Another key ingredient is the following version of Frostman's lemma. This was proved in [@Lindenstrauss--Tsukamoto; @double Corollary 4.4]. **Lemma 43**.
*For any $0<c<1$ there exists $\delta_0 = \delta_0(c)\in (0,1)$ such that for any compact metric space $(\mathcal{X}, \mathbf{d})$ and any $0<\delta\leq \delta_0$ there exists a Borel probability measure $\nu$ on $\mathcal{X}$ satisfying $$\nu(E) \leq \left(\mathrm{Diam}\, E\right)^{c \cdot \dim_{\mathrm{H}}(\mathcal{X}, \mathbf{d}, \delta)} \quad \text{for all Borel sets $E\subset \mathcal{X}$ with } \mathrm{Diam}\, E <\frac{\delta}{6}.$$* We also need the following elementary lemma. This was proved in [@Lindenstrauss--Tsukamoto; @IEEE Appendix]. **Lemma 44**. *Let $A$ be a finite set and $\{\mu_n\}$ a sequence of probability measures on $A$. Suppose that $\mu_n$ converges to some probability measure $\mu$ in the weak$^*$ topology (i.e. $\mu_n(a) \to \mu(a)$ for every $a\in A$). Then there exists a sequence of probability measures $\pi_n$ on $A\times A$ such that* - *$\pi_n$ is a coupling between $\mu_n$ and $\mu$, i.e., the first and second marginals of $\pi_n$ are $\mu_n$ and $\mu$ respectively,* - *$\pi_n$ converges to $(\mathrm{id}\times \mathrm{id})_*\mu$ in the weak$^*$ topology, namely $$\pi_n(a, b) \to \begin{cases} \mu(a) & (\text{if $a=b$}) \\ 0 & (\text{if $a\neq b$}) \end{cases}.$$* We write the statement of Theorem [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"} again. **Theorem 45** ($=$ Theorem [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"}). *Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. 
Then we have $$\overline{\mathrm{mdim}}_{\mathrm{H}, L^1}\left(\mathcal{X}, T, \mathbf{d}, \varphi\right) \leq \sup_{\mu \in \mathscr{M}^T(\mathcal{X})} \left(\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_{\mathcal{X}} \varphi\, d\mu \right).$$ Here recall that $\mathscr{M}^T(\mathcal{X})$ is the set of all $T$-invariant Borel probability measures on $\mathcal{X}$.* *Proof.* Let $c$ and $s$ be arbitrary real numbers with $0<c<1$ and $s< \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T, \mathbf{d}, \varphi)$. We will construct $\mu\in \mathscr{M}^T(\mathcal{X})$ satisfying $$\label{eq: requirement on mu in dynamical Frostman} \underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_{\mathcal{X}} \varphi\, d\mu \geq c s - (1-c)\left\lVert\varphi\right\rVert_\infty.$$ If this is proved then we get the claim of the theorem by letting $c\to 1$ and $s\to \overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T, \mathbf{d}, \varphi)$. Take $\eta>0$ with $\overline{\mathrm{mdim}}_{\mathrm{H}, L^1}(\mathcal{X}, T, \mathbf{d}, \varphi) > s+2\eta$. Let $\delta_0 = \delta_0(c) \in (0, 1)$ be the constant introduced in Lemma [Lemma 43](#lemma: Frostman){reference-type="ref" reference="lemma: Frostman"}. There are $\delta\in (0, \delta_0)$ and a sequence of positive numbers $L_1<L_2<L_3<\dots \to \infty$ satisfying $$\dim_{\mathrm{H}}\left(\mathcal{X}, \overline{\mathbf{d}}_{L_n}, \varphi_{L_n}, \delta\right) > (s+2\eta) L_n^d$$ for all $n\geq 1$. For a real number $t$ we set $$\mathcal{X}_n(t) := \left(\frac{\varphi_{L_n}}{L_n^d}\right)^{-1}[t, t+\eta] = \left\{x\in \mathcal{X}\middle|\, t\leq \frac{\varphi_{L_n}(x)}{L_n^d} \leq t + \eta\right\}.$$ **Claim 46**. 
*We can choose $t\in [-\left\lVert\varphi\right\rVert_\infty, \left\lVert\varphi\right\rVert_\infty]$ such that for infinitely many $n$ we have $$\dim_{\mathrm{H}}\left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}, \delta\right) \geq (s-t)L_n^d.$$ Notice that, in particular, this inequality implies that $\mathcal{X}_n(t)$ is not empty because we assumed that $\dim_{\mathrm{H}}(\cdot)$ is $-\infty$ for the empty set.* *Proof.* We have[^6] $\mathcal{H}^{(s+2\eta)L_n^d}_\delta\left(\mathcal{X}, \overline{\mathbf{d}}_{L_n}, \varphi_{L_n}\right) \geq 1$. Set $m = \lceil 2\left\lVert\varphi\right\rVert_\infty/\eta\rceil$. We have $$\mathcal{X} = \bigcup_{\ell=0}^{m-1}\mathcal{X}_n\left(-\left\lVert\varphi\right\rVert_\infty + \ell \eta\right).$$ Then there exists $t\in \{-\left\lVert\varphi\right\rVert_\infty + \ell \eta\mid \ell = 0,1,\dots, m-1\}$ such that $$\mathcal{H}^{(s+2\eta)L_n^d}_\delta \left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}, \varphi_{L_n}\right) \geq \frac{1}{m} \quad \text{for infinitely many $n$}.$$ On the set $\mathcal{X}_n(t)$ we have $$(s+2\eta)L_n^d - \varphi_{L_n} \geq (s+2\eta)L_n^d - (t+\eta)L_n^d = (s-t+\eta)L_n^d.$$ Hence $$\begin{aligned} \mathcal{H}^{(s+2\eta)L_n^d}_\delta \left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}, \varphi_{L_n}\right) & \leq \mathcal{H}^{(s-t+\eta)L_n^d}_\delta \left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}\right) \\ & \leq \delta^{\eta L_n^d} \mathcal{H}^{(s-t)L_n^d}_\delta \left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}\right).\end{aligned}$$ Therefore for infinitely many $n$ we have $$\mathcal{H}^{(s-t)L_n^d}_\delta \left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}\right) \geq \frac{\delta^{-\eta L_n^d}}{m} \to \infty \quad (n\to \infty).$$ Thus $\dim_{\mathrm{H}}\left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}, \delta\right) \geq (s-t)L_n^d$ for infinitely many $n$. 
◻ We fix $t\in [-\left\lVert\varphi\right\rVert_\infty, \left\lVert\varphi\right\rVert_\infty]$ satisfying the statement of this claim. By choosing a subsequence (also denoted by $L_n$) we can assume that $$\dim_{\mathrm{H}}\left(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}, \delta\right) \geq (s-t)L_n^d \quad \text{for all $n$}.$$ By a version of Frostman's lemma (Lemma [Lemma 43](#lemma: Frostman){reference-type="ref" reference="lemma: Frostman"}), there is a Borel probability measure $\nu_n$ on $\mathcal{X}_n(t)$ such that $$\label{eq: choice of nu_n in the proof of dynamical Frostman} \nu_n(E) \leq \left(\mathrm{Diam}\left(E, \overline{\mathbf{d}}_{L_n}\right)\right)^{c(s-t)L_n^d} \quad \text{for all Borel sets $E\subset \mathcal{X}$ with } \mathrm{Diam}\left(E, \overline{\mathbf{d}}_{L_n}\right) < \frac{\delta}{6}.$$ We define a Borel probability measure $\mu_n$ on $\mathcal{X}$ by $$\mu_n = \frac{1}{L_n^d} \int_{[0, L_n)^d} T^u_*\nu_n \, du.$$ By choosing a subsequence (also denoted by $\mu_n$) we can assume that $\mu_n$ converges to $\mu\in \mathscr{M}^T(\mathcal{X})$ in the weak$^*$ topology. We have $$\int_{\mathcal{X}} \varphi\, d\mu_n = \int_{\mathcal{X}} \frac{\varphi_{L_n}}{L_n^d}\, d\nu_n = \int_{\mathcal{X}_n(t)} \frac{\varphi_{L_n}}{L_n^d}\, d\nu_n \geq t.$$ Here we have used that $\nu_n$ is supported in $\mathcal{X}_n(t)$ in the second equality and that $\varphi_{L_n}/L_n^d \geq t$ on $\mathcal{X}_n(t)$ in the last inequality. Since $\mu_n \rightharpoonup \mu$, we have $$\int_{\mathcal{X}} \varphi \, d\mu \geq t.$$ If $t\geq s$ then ([\[eq: requirement on mu in dynamical Frostman\]](#eq: requirement on mu in dynamical Frostman){reference-type="ref" reference="eq: requirement on mu in dynamical Frostman"}) trivially holds (recalling $0<c<1$): $$\underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu) + \int_{\mathcal{X}} \varphi \, d\mu \geq t \geq cs - (1-c)\left\lVert\varphi\right\rVert_\infty.$$ Therefore we assume $s> t$.
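The membership $\mu \in \mathscr{M}^T(\mathcal{X})$ of the weak$^*$ limit follows from a standard Krylov--Bogolyubov-type averaging estimate; a sketch, using $T^w_*\mu_n = L_n^{-d}\int_{w+[0,L_n)^d} T^u_*\nu_n\, du$ and writing $\mathbf{m}$ for the Lebesgue measure on $\mathbb{R}^d$: for every $w\in \mathbb{R}^d$ and every continuous $\psi\colon \mathcal{X}\to \mathbb{R}$,

```latex
\left| \int_{\mathcal{X}} \psi \, d\left(T^w_*\mu_n\right) - \int_{\mathcal{X}} \psi \, d\mu_n \right|
= \frac{1}{L_n^d}
  \left| \left( \int_{w+[0,L_n)^d} - \int_{[0,L_n)^d} \right)
  \int_{\mathcal{X}} \psi(T^u x)\, d\nu_n(x)\, du \right|
\leq \frac{\mathbf{m}\left( \left(w+[0,L_n)^d\right) \triangle\, [0,L_n)^d \right)}{L_n^d}\,
  \max_{\mathcal{X}} |\psi| .
```

For fixed $w$ the symmetric difference has Lebesgue measure $O(L_n^{d-1})$, so the right-hand side tends to $0$ as $n\to \infty$. Hence $\int_{\mathcal{X}} \psi\, d\left(T^w_*\mu\right) = \int_{\mathcal{X}} \psi\, d\mu$ for every continuous $\psi$, i.e. $\mu$ is $T$-invariant.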
We will prove that for sufficiently small $\varepsilon>0$ $$\label{eq: lower bound on rate distortion function in the proof of dynamical Frostman} R(\mathbf{d}, \mu, \varepsilon) \geq c(s-t) \log(1/\varepsilon) - Kc (s-t),$$ where $R(\mathbf{d}, \mu, \varepsilon)$ is the rate distortion function and $K$ is the universal positive constant introduced in Proposition [Proposition 17](#proposition: Kawabata--Dembo){reference-type="ref" reference="proposition: Kawabata--Dembo"}. Once this is proved, we have $$\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) = \liminf_{\varepsilon\to 0} \frac{R(\mathbf{d}, \mu, \varepsilon)}{\log(1/\varepsilon)} \geq c(s-t).$$ Then we get ([\[eq: requirement on mu in dynamical Frostman\]](#eq: requirement on mu in dynamical Frostman){reference-type="ref" reference="eq: requirement on mu in dynamical Frostman"}) by $$\underline{\mathrm{rdim}}(\mathcal{X}, T, \mathbf{d}, \mu) + \int_{\mathcal{X}} \varphi\, d\mu \geq c (s-t) + t = cs + (1-c) t \geq cs - (1-c) \left\lVert\varphi\right\rVert_\infty.$$ Here we have used $0<c<1$ and $t \geq -\left\lVert\varphi\right\rVert_\infty$. So the task is to prove ([\[eq: lower bound on rate distortion function in the proof of dynamical Frostman\]](#eq: lower bound on rate distortion function in the proof of dynamical Frostman){reference-type="ref" reference="eq: lower bound on rate distortion function in the proof of dynamical Frostman"}). Let $\varepsilon$ be a positive number with $2\varepsilon \log (1/\varepsilon) < \delta/6$. Let $M$ be a positive number, and let $X$ and $Y$ be random variables such that - $X$ takes values in $\mathcal{X}$ with $\mathrm{Law} X = \mu$, - $Y$ takes values in $L^1([0, M)^d, \mathcal{X})$ with $\mathbb{E}\left( \int_{[0, M)^d}\mathbf{d}(T^v X, Y_v)\, dv\right) < \varepsilon\, M^d$. 
We want to prove $$\frac{1}{M^d} I(X;Y) \geq c(s-t) \log(1/\varepsilon) - Kc(s-t).$$ If this is proved then we get ([\[eq: lower bound on rate distortion function in the proof of dynamical Frostman\]](#eq: lower bound on rate distortion function in the proof of dynamical Frostman){reference-type="ref" reference="eq: lower bound on rate distortion function in the proof of dynamical Frostman"}) and the proof is done. We can assume that $Y$ takes only finitely many values (Remark [Remark 19](#remark: we can assume that Y takes only finitely many values){reference-type="ref" reference="remark: we can assume that Y takes only finitely many values"}). We denote the set of values of $Y$ by $\mathcal{Y}$. This is a finite subset of $L^1([0, M)^d, \mathcal{X})$. Take a positive number $\tau$ satisfying $\mathbb{E}\left( \int_{[0, M)^d}\mathbf{d}(T^v X, Y_v)\, dv\right) < (\varepsilon - 3\tau)M^d$. We take a measurable partition $$\mathcal{X} = P_1 \cup P_2 \cup \dots \cup P_\alpha \quad (\text{disjoint union})$$ such that $\mathrm{Diam}(P_i, \overline{\mathbf{d}}_M) < \tau$ and $\mu(\partial P_i) = 0$ for all $1\leq i \leq \alpha$. We pick a point $x_i \in P_i$ for each $i$ and set $A = \{x_1, \dots, x_\alpha\}$. We define a map $\mathcal{P}\colon \mathcal{X}\to A$ by $\mathcal{P}(P_i) = \{x_i\}$. Then we have $$\label{eq: average distance between P(X) and Y} \mathbb{E}\left( \frac{1}{M^d} \int_{[0, M)^d}\mathbf{d}\left(T^v \mathcal{P}(X), Y_v \right) \, dv\right) < \varepsilon - 2\tau.$$ We consider the push-forward measures $\mathcal{P}_*\mu_n$ on $A$. They converge to $\mathcal{P}_*\mu$ as $n\to \infty$ in the weak$^*$ topology by $\mu(\partial P_i) = 0$. 
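The last convergence is an instance of the portmanteau theorem: since $\mu_n \rightharpoonup \mu$ and each $P_i$ is a $\mu$-continuity set ($\mu(\partial P_i) = 0$),

```latex
\mathcal{P}_*\mu_n(\{x_i\}) = \mu_n(P_i) \longrightarrow \mu(P_i) = \mathcal{P}_*\mu(\{x_i\})
\quad (n\to \infty, \; 1 \leq i \leq \alpha),
```

and on the finite set $A$ the pointwise convergence of probability mass functions is exactly weak$^*$ convergence.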
By Lemma [Lemma 44](#lemma: construction of coupling){reference-type="ref" reference="lemma: construction of coupling"}, we can construct random variables $X(n)$ coupled to $\mathcal{P}(X)$ such that $X(n)$ take values in $A$ with $\mathrm{Law} X(n) = \mathcal{P}_*\mu_n$ and $$\mathbb{P}\left(X(n) = x_i, \mathcal{P}(X) = x_j\right) \to \delta_{ij} \mathbb{P}\left(\mathcal{P}(X)= x_j \right) \quad (n\to \infty).$$ Then $\mathbb{E}\overline{\mathbf{d}}_M\left(X(n), \mathcal{P}(X)\right) \to 0$ as $n\to \infty$. We consider[^7] that $X(n)$ is coupled to $Y$ with the conditional distribution $$\mathbb{P}\left(X(n) = x_i, Y=y\middle|\, \mathcal{P}(X) = x_j \right) = \mathbb{P}\left(X(n) = x_i\middle|\, \mathcal{P}(X) = x_j \right) \cdot \mathbb{P}\left(Y=y\middle|\, \mathcal{P}(X) = x_j \right)$$ for $x_i, x_j\in A$ and $y\in \mathcal{Y}$. Namely $X(n)$ and $Y$ are conditionally independent given $\mathcal{P}(X)$. Then $$\begin{aligned} \mathbb{P}\left(X(n)= x_i, Y=y\right) & = \sum_{j=1}^\alpha \mathbb{P}\left(X(n) = x_i, \mathcal{P}(X) = x_j \right) \cdot \mathbb{P}\left(Y=y\middle|\, \mathcal{P}(X) = x_j \right) \\ & \to \mathbb{P}\left(\mathcal{P}(X) = x_i, Y=y\right) \quad (n\to \infty). \end{aligned}$$ By ([\[eq: average distance between P(X) and Y\]](#eq: average distance between P(X) and Y){reference-type="ref" reference="eq: average distance between P(X) and Y"}) $$\label{eq: average distance between X(n) and Y} \mathbb{E}\left(\frac{1}{M^d} \int_{[0, M)^d}\mathbf{d}\left(T^u X(n), Y_u\right) du \right) < \varepsilon - 2\tau \quad \text{for large $n$}.$$ Notice that $(X(n), Y)$ take values in a fixed finite set $A\times \mathcal{Y}$ and that their distributions converge to that of $\left(\mathcal{P}(X), Y\right)$. 
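The convergence $\mathbb{E}\,\overline{\mathbf{d}}_M\left(X(n), \mathcal{P}(X)\right) \to 0$ asserted above can be checked directly: both variables take values in the finite set $A$, and $\overline{\mathbf{d}}_M$ is bounded (by $\mathrm{Diam}(\mathcal{X}, \mathbf{d})$), so

```latex
\mathbb{E}\,\overline{\mathbf{d}}_M\left(X(n), \mathcal{P}(X)\right)
= \sum_{i \neq j} \overline{\mathbf{d}}_M(x_i, x_j)\,
  \mathbb{P}\left(X(n) = x_i, \; \mathcal{P}(X) = x_j\right)
\longrightarrow 0 \quad (n\to \infty),
```

because each off-diagonal probability $\mathbb{P}\left(X(n) = x_i, \mathcal{P}(X) = x_j\right)$ with $i \neq j$ tends to $0$ by the choice of the coupling.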
Hence by Lemma [Lemma 10](#lemma: convergence of mutual information){reference-type="ref" reference="lemma: convergence of mutual information"} $$I\left(X(n); Y\right) \to I\left(\mathcal{P}(X); Y\right) \quad (n\to \infty).$$ We want to estimate $I\left(X(n); Y\right)$ from below. Fix a point $x_0\in \mathcal{X}$. We will also denote by $x_0$ any constant function whose value is $x_0$. For $x\in A$ and $y\in L^1([0, M)^d, \mathcal{X})$ we define a conditional probability mass function by $$\rho_n(y|x) = \mathbb{P}\left(Y=y \middle|\, X(n) = x\right).$$ This is nonzero only for $y\in \mathcal{Y}$. (Here $\rho_n(\cdot|x)$ may be an arbitrary probability measure on $\mathcal{Y}$ if $\mathbb{P}\left(X(n) = x \right)=0$.) We define $\Lambda\subset \mathbb{R}^d$ by $$\Lambda = \left\{\left(M m_1, M m_2, \dots, M m_d\right) \middle|\, m_k\in \mathbb{Z}, \> 0 \leq m_k \leq \frac{L_n}{M}-2 \> (1\leq k \leq d) \right\}.$$ Let $v \in [0,M)^d$. We have $$\bigcup_{\lambda\in \Lambda} \left(v+\lambda+ [0, M)^d\right) \subset [0, L_n)^d.$$ Notice that the left-hand side is a disjoint union. Here $v+\lambda+[0, M)^d = \{v+\lambda+w \mid w\in [0,M)^d\}$. Set $$E_v = [0, L_n)^d \setminus \bigcup_{\lambda\in \Lambda} \left(v+\lambda+ [0, M)^d\right).$$ See Figure [3](#figure: squares){reference-type="ref" reference="figure: squares"}. ![The big square is $[0, L_n)^d$ and small squares are $v+\lambda + [0, M)^d$ ($\lambda \in \Lambda$). 
The shadowed region is $E_v$.](squares.eps){#figure: squares width="3.0in"} For $x\in \mathcal{X}$ and $f\in L^1\left([0, L_n)^d, \mathcal{X}\right)$ we define a conditional probability mass function $\sigma_{n, v}(f|x)$ by $$\sigma_{n, v}(f|x) = \delta_{x_0} \left(f|_{E_v}\right) \cdot \prod_{\lambda\in \Lambda} \rho_n\left(f|_{v+\lambda+[0,M)^d}\middle|\, \mathcal{P}(T^{v+\lambda}x)\right).$$ Here $f|_{E_v}$ is the restriction of $f$ to $E_v$ and $\delta_{x_0}$ is the delta probability measure concentrated at the constant function $x_0 \in L^1(E_v, \mathcal{X})$. We naturally consider $f|_{v+\lambda+[0,M)^d}$ (the restriction of $f$ to $v+\lambda+[0,M)^d$) as an element of $L^1\left([0, M)^d, \mathcal{X}\right)$. We define a transition probability $\sigma_n$ on $\mathcal{X}\times L^1([0, L_n)^d, \mathcal{X})$ by $$\sigma_n(B \mid x) = \frac{1}{M^d}\int_{[0, M)^d} \sigma_{n, v}(B \mid x)\, dv$$ for $x\in \mathcal{X}$ and Borel subsets $B \subset L^1([0, L_n)^d, \mathcal{X})$. Here $\sigma_{n, v}(B \mid x) = \sum_{f\in B} \sigma_{n, v}(f|x)$. (Notice that for each $x\in \mathcal{X}$ the function $\sigma_{n, v}(f|x)$ is nonzero only for finitely many $f$.) Take random variables $Z$ and $W$ such that $Z$ takes values in $\mathcal{X}$ with $\mathrm{Law} Z = \nu_n$ and that $W$ takes values in $L^1([0, L_n)^d, \mathcal{X})$ with $$\mathbb{P}\left(W \in B\middle|\, Z=x\right) = \sigma_n(B\mid x) \quad (x\in \mathcal{X}, B\subset L^1([0, L_n)^d, \mathcal{X})).$$ Notice that $Z$ and $W$ depend on $n$. Rigorously speaking, we should denote them by $Z^{(n)}$ and $W^{(n)}$. However we suppress their dependence on $n$ in the notations for simplicity. We estimate $\mathbb{E}\left(\frac{1}{L_n^d}\int_{[0, L_n)^d}\mathbf{d}(T^u Z, W_u)\, du\right)$ and $I(Z;W)$. **Claim 47**. 
*For all sufficiently large $n$ we have $$\mathbb{E}\left(\frac{1}{L_n^d}\int_{[0, L_n)^d}\mathbf{d}(T^u Z, W_u)\, du\right) < \varepsilon.$$* *Proof.* For each $v\in [0, M)^d$ we take a random variable $W(v)$ coupled to $Z$ such that $W(v)$ takes values in $L^1([0, L_n)^d, \mathcal{X})$ with $\mathbb{P}\left(W(v) = f\mid Z=x\right) = \sigma_{n, v}(f|x)$ for $x\in \mathcal{X}$ and $f\in L^1([0, L_n)^d, \mathcal{X})$. Then $$\mathbb{E}\int_{[0, L_n)^d} \mathbf{d}(T^u Z, W_u)\, du = \frac{1}{M^d} \int_{[0, M)^d} \mathbb{E}\left(\int_{[0, L_n)^d}\mathbf{d}(T^u Z, W(v)_u)\, du\right) \, dv.$$ For each $v\in [0, M)^d$ we have $$\begin{aligned} & \mathbb{E}\int_{[0, L_n)^d}\mathbf{d}(T^u Z, W(v)_u)\, du \\ & = \mathbb{E}\int_{E_v}\mathbf{d}(T^u Z, W(v)_u)\, du + \sum_{\lambda\in \Lambda} \mathbb{E}\int_{v+\lambda+[0, M)^d}\mathbf{d}(T^u Z, W(v)_u)\, du \\ & \leq C L_n^{d-1} + \sum_{\lambda\in \Lambda} \mathbb{E}\int_{v+\lambda+[0, M)^d}\mathbf{d}(T^u Z, W(v)_u)\, du, \end{aligned}$$ where $C$ is a positive constant independent of $v, L_n$. In the last inequality we have used $\mathbf{m}(E_v) \leq \mathrm{const}\cdot L_n^{d-1}$. Since $\overline{\mathbf{d}}_M(x, \mathcal{P}(x)) < \tau$ for every $x\in \mathcal{X}$, we have $$\begin{aligned} & \mathbb{E}\int_{v+\lambda+[0, M)^d}\mathbf{d}(T^u Z, W(v)_u)\, du \\ &\leq M^d \tau + \mathbb{E}\int_{[0, M)^d} \mathbf{d}\left(T^u \mathcal{P}(T^{v+\lambda}Z), W(v)_{v+\lambda+u}\right) du \\ & \leq M^d\tau + \sum_{f\in \mathcal{Y}} \int_{[0, M)^d}\left(\int_{\mathcal{X}}\mathbf{d}(T^u x, f_u)\rho_n(f|x) \, d\left(\mathcal{P}_*T^{v+\lambda}_*\nu_n(x)\right)\right)du.\end{aligned}$$ We sum up these estimates over $\lambda\in \Lambda$. 
Noting $M^d |\Lambda| \leq L_n^d$, we have $$\begin{aligned} & \mathbb{E} \int_{[0, L_n)^d}\mathbf{d}(T^u Z, W(v)_u)\, du \\ & \leq C L_n^{d-1} + \tau L_n^d + \sum_{\substack{\lambda\in \Lambda \\ f\in \mathcal{Y}}} \int_{[0, M)^d} \left(\int_{\mathcal{X}}\mathbf{d}(T^u x, f_u)\rho_n(f|x) d\left(\mathcal{P}_*T^{v+\lambda}_*\nu_n(x)\right)\right)du. \end{aligned}$$ We integrate this over $v\in [0, M)^d$. Note that[^8] $$\begin{aligned} \int_{[0, M)^d} \left(\sum_{\lambda \in \Lambda} \mathcal{P}_*T^{v+\lambda}_*\nu_n\right) dv &= \sum_{\lambda\in \Lambda} \int_{\lambda+[0,M)^d} \mathcal{P}_*T^{v}_*\nu_n\, dv \\ & \leq \int_{[0, L_n)^d}\mathcal{P}_*T^v_*\nu_n \, dv = L_n^d \mathcal{P}_*\mu_n.\end{aligned}$$ Hence we have $$\begin{aligned} & \frac{1}{M^d} \int_{[0, M)^d} \mathbb{E}\left(\int_{[0, L_n)^d}\mathbf{d}(T^u Z, W(v)_u)\, du\right) \, dv \\ & \leq CL_n^{d-1} + \tau L_n^d + \frac{L_n^d}{M^d} \sum_{f\in \mathcal{Y}} \int_{[0,M)^d}\left(\int_{\mathcal{X}} \mathbf{d}(T^u x, f_u)\rho_n(f|x) d\mathcal{P}_*\mu_n(x)\right) du \\ & = CL_n^{d-1} + \tau L_n^d + \frac{L_n^d}{M^d} \mathbb{E}\left(\int_{[0, M)^d}\mathbf{d}\left(T^u X(n), Y_u\right) du \right).\end{aligned}$$ In the last equality we have used $\mathrm{Law} X(n) = \mathcal{P}_*\mu_n$ and $\rho_n(f|x) = \mathbb{P}(Y=f\mid X(n) = x)$. Therefore $$\mathbb{E}\left(\frac{1}{L_n^d} \int_{[0, L_n)^d} \mathbf{d}(T^u Z, W_u)\, du \right) \leq \frac{C}{L_n} + \tau + \mathbb{E}\left(\frac{1}{M^d} \int_{[0, M)^d}\mathbf{d}\left(T^u X(n), Y_u\right) du \right).$$ The third term in the right-hand side is smaller than $\varepsilon - 2\tau$ for large $n$ by ([\[eq: average distance between X(n) and Y\]](#eq: average distance between X(n) and Y){reference-type="ref" reference="eq: average distance between X(n) and Y"}). Therefore we have $$\mathbb{E}\left(\frac{1}{L_n^d} \int_{[0, L_n)^d} \mathbf{d}(T^u Z, W_u)\, du \right) < \varepsilon$$ for all sufficiently large $n$. ◻ **Claim 48**. 
*$$\frac{1}{L_n^d}I(Z;W) \leq \frac{1}{M^d} I\left(X(n); Y\right).$$* *Proof.* We have $I(Z;W) = I(\nu_n, \sigma_n)$. Since $\sigma_n = (1/M^d) \int_{[0, M)^d} \sigma_{n, v} \, dv$, we apply to it Proposition [Proposition 14](#proposition: concavity and convexity of I(mu, nu)){reference-type="ref" reference="proposition: concavity and convexity of I(mu, nu)"} (2) (the convexity of mutual information in transition probability): $$I(\nu_n, \sigma_n) \leq \frac{1}{M^d} \int_{[0, M)^d} I(\nu_n, \sigma_{n, v})\, dv.$$ By Lemma [Lemma 11](#lemma: subadditivity of mutual information){reference-type="ref" reference="lemma: subadditivity of mutual information"} (subadditivity of mutual information)[^9] $$I(\nu_n, \sigma_{n, v}) \leq \sum_{\lambda\in \Lambda} I\left(\mathcal{P}_*T^{v+\lambda}_*\nu_n, \rho_n\right).$$ Hence $$\begin{aligned} I(\nu_n, \sigma_n) & \leq \frac{1}{M^d} \sum_{\lambda\in \Lambda} \int_{[0, M)^d} I\left(\mathcal{P}_*T^{\lambda+v}_*\nu_n, \rho_n\right) dv \\ & = \frac{1}{M^d}\int_{\cup_{\lambda\in \Lambda} \left(\lambda + [0, M)^d\right)}I\left(\mathcal{P}_*T^v_*\nu_n, \rho_n\right) dv \\ & \leq \frac{1}{M^d}\int_{[0, L_n)^d}I\left(\mathcal{P}_*T^v_*\nu_n, \rho_n\right) dv\end{aligned}$$ By Proposition [Proposition 14](#proposition: concavity and convexity of I(mu, nu)){reference-type="ref" reference="proposition: concavity and convexity of I(mu, nu)"} (1) (the concavity of mutual information in probability measure) $$\begin{aligned} \frac{1}{L_n^d} \int_{[0, L_n)^d}I\left(\mathcal{P}_*T^v_*\nu_n, \rho_n\right) dv & \leq I\left(\frac{1}{L_n^d}\int_{[0, L_n)^d}\mathcal{P}_*T^v_*\nu_n\, dv, \rho_n\right) \\ & = I\left(\mathcal{P}_*\mu_n, \rho_n\right) \\ & = I(X(n); Y).\end{aligned}$$ Therefore we conclude $$I(Z;W) = I(\nu_n, \sigma_n) \leq \frac{L_n^d}{M^d} I(X(n); Y).$$ ◻ We define a metric $D_n$ on $L^1\left([0,L_n)^d, \mathcal{X}\right)$ by $$D_n(f, g) = \frac{1}{L_n^d} \int_{[0, L_n)^d} \mathbf{d}\left(f(u), g(u)\right) du.$$ Then the map 
$$F_n\colon \left(\mathcal{X}, \overline{\mathbf{d}}_{L_n}\right) \ni x \mapsto \left(T^t x\right)_{t\in [0, L_n)^d} \in \left(L^1\left([0,L_n)^d, \mathcal{X}\right), D_n\right)$$ is an isometric embedding. Consider the push-forward measure ${F_n}_*\nu_n$ on $L^1\left([0,L_n)^d, \mathcal{X}\right)$. It follows from ([\[eq: choice of nu_n in the proof of dynamical Frostman\]](#eq: choice of nu_n in the proof of dynamical Frostman){reference-type="ref" reference="eq: choice of nu_n in the proof of dynamical Frostman"}) that $${F_n}_*\nu_n\left(E\right) \leq \left(\mathrm{Diam}(E, D_n)\right)^{c(s-t)L_n^d}$$ for all Borel subsets $E\subset L^1\left([0,L_n)^d, \mathcal{X}\right)$ with $\mathrm{Diam}(E, D_n) < \delta/6$. We have $\mathrm{Law} F_n(Z) = {F_n}_*\nu_n$, and by Claim [Claim 47](#claim: distance between Z and W){reference-type="ref" reference="claim: distance between Z and W"} $$\mathbb{E}\left( D_n(F_n(Z), W) \right) < \varepsilon \quad \text{for large $n$}.$$ Since $2\varepsilon \log (1/\varepsilon) < \delta/6$, we apply Proposition [Proposition 17](#proposition: Kawabata--Dembo){reference-type="ref" reference="proposition: Kawabata--Dembo"} (Kawabata--Dembo estimate) to $(F_n(Z), W)$ and get $$I(Z;W) = I\left(F_n(Z); W\right) \geq c(s-t)L_n^d \log (1/\varepsilon) - K \left(c(s-t)L_n^d+1\right)$$ for large $n$. Here $K$ is a universal positive constant. By Claim [Claim 48](#claim: mutual information between Z and W){reference-type="ref" reference="claim: mutual information between Z and W"} $$\frac{1}{M^d} I\left(X(n); Y\right) \geq \frac{1}{L_n^d} I\left(Z;W\right) \geq c(s-t) \log(1/\varepsilon) - K\left(c(s-t) + \frac{1}{L_n^d}\right)$$ for large $n$. 
Since $I\left(X(n); Y\right) \to I\left(\mathcal{P}(X); Y\right)$ as $n\to \infty$, we get $$\frac{1}{M^d}I\left(\mathcal{P}(X); Y\right) \geq c(s-t) \log (1/\varepsilon) - K c (s-t).$$ Then we have $$\frac{1}{M^d}I(X; Y) \geq \frac{1}{M^d}I\left(\mathcal{P}(X); Y\right) \geq c(s-t) \log (1/\varepsilon) - K c (s-t).$$ This is what we want to prove. ◻ Now we have proved Theorems [Theorem 26](#theorem: mdim is bounded by mdimh){reference-type="ref" reference="theorem: mdim is bounded by mdimh"} and [Theorem 29](#theorem: dynamical Frostman lemma){reference-type="ref" reference="theorem: dynamical Frostman lemma"}. These two theorems imply Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"} (Main theorem) as we already explained in §[3](#section: metric mean dimension with potential and mean Hausdorff dimension with potential){reference-type="ref" reference="section: metric mean dimension with potential and mean Hausdorff dimension with potential"}. Therefore we have completed the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"}. # Local nature of metric mean dimension with potential {#section: local nature of metric mean dimension with potential} This section is independent of the proof of Theorem [Theorem 3](#main theorem){reference-type="ref" reference="main theorem"} (Main Theorem). It can be read independently of Sections 2, 4, 5 and 6. Here we present a formula expressing metric mean dimension with potential by using a certain *local* quantity. We plan to use it in a future study of geometric examples of dynamical systems [@Gromov; @Tsukamoto; @Brody; @curves]. ## A formula of metric mean dimension with potential {#subsection: a formula of metric mean dimension with potential} Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$.
Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. For a subset $E\subset \mathcal{X}$ and $\varepsilon>0$ we set $$P_T(E, \mathbf{d}, \varphi, \varepsilon) = \liminf_{L\to \infty} \frac{\log \#\left(E, \mathbf{d}_L, \varphi_L, \varepsilon\right)}{L^d}.$$ Here recall that $$\#\left(E, \mathbf{d}_L, \varphi_L, \varepsilon\right) = \inf\left\{\sum_{i=1}^n (1/\varepsilon)^{\sup_{U_i}\varphi} \middle|\, \parbox{3in}{\centering $E \subset U_1\cup \dots \cup U_n$. Each $U_i$ is an open set of $\mathcal{X}$ with $\mathrm{Diam}(U_i, \mathbf{d}_L) < \varepsilon$.} \right\}.$$ Also recall that the upper and lower metric mean dimensions with potential are defined by $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \limsup_{\varepsilon\to 0} \frac{P_T(\mathcal{X}, \mathbf{d}, \varphi, \varepsilon)}{\log (1/\varepsilon)}, \\ \underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \liminf_{\varepsilon\to 0} \frac{P_T(\mathcal{X}, \mathbf{d}, \varphi, \varepsilon)}{\log (1/\varepsilon)}. \end{aligned}$$ For a (not necessarily bounded) subset $A$ of $\mathbb{R}^d$ we define a metric $\mathbf{d}_A$ on $\mathcal{X}$ by $$\mathbf{d}_A(x, y) = \sup_{a\in A} \mathbf{d}\left(T^a x, T^a y\right).$$ (If $A$ is unbounded, this metric is not compatible with the given topology of $\mathcal{X}$ in general.) For $x\in \mathcal{X}$ and $\delta>0$ we define $B_\delta(x, \mathbf{d}_{\mathbb{R}^d})$ as the closed $\delta$-ball with respect to $\mathbf{d}_{\mathbb{R}^d}$: $$B_\delta(x, \mathbf{d}_{\mathbb{R}^d}) = \{y\in \mathcal{X}\mid \mathbf{d}_{\mathbb{R}^d}(x, y) \leq \delta\} = \{y\in \mathcal{X}\mid \mathbf{d}(T^u x, T^u y) \leq \delta \> (\forall u\in \mathbb{R}^d)\}.$$ The following is the main result of this section. **Theorem 49**. 
*For any $\delta>0$ we have $$\begin{aligned} \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \limsup_{\varepsilon\to 0} \frac{\sup_{x\in \mathcal{X}} P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d}, \varphi, \varepsilon\right)}{\log (1/\varepsilon)}, \\ \underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) & = \liminf_{\varepsilon\to 0} \frac{\sup_{x\in \mathcal{X}} P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d}, \varphi, \varepsilon\right)}{\log (1/\varepsilon)}. \end{aligned}$$* Notice that $B_\delta(x, \mathbf{d}_{\mathbb{R}^d})$ is not a neighborhood of $x$ with respect to the original metric $\mathbf{d}$ in general. Nevertheless we can calculate the metric mean dimension with potential by gathering such information. In the case that $\varphi$ is identically zero, Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"} was proved in [@Tsukamoto; @local; @nature]. The proof of Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"} follows the argument of [@Tsukamoto; @local; @nature], which is in turn based on the paper of Bowen [@Bowen; @entropy; @expansive]. ## Tiling argument {#subsection: tiling argument} Here we prepare a technical lemma (Lemma [Lemma 50](#lemma: tiles){reference-type="ref" reference="lemma: tiles"} below). For $x = (x_1, \dots, x_d)\in \mathbb{R}^d$ we set $\left\lVert x\right\rVert_\infty = \max_{1\leq i \leq d} |x_i|$. A **cube** of $\mathbb{R}^d$ is a set $\Lambda$ of the form $$\Lambda = u + [0, L]^d = \{u+v\mid v\in [0, L]^d\},$$ where $u\in \mathbb{R}^d$ and $L>0$. We set $\ell(\Lambda) = L$. 
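As an orientation for this notation (an elementary computation recorded here, not taken from the original argument), it is worth comparing the Lebesgue measure $\mathbf{m}$ of a cube with that of its $r$-neighborhood in the $\ell^\infty$-norm. For $\Lambda = u+[0, L]^d$ and $r>0$, the set of points within $\ell^\infty$-distance $r$ of $\Lambda$ is the cube $u+[-r, L+r]^d$, hence $$\mathbf{m}(\Lambda) = L^d, \qquad (L+2r)^d - L^d = \int_L^{L+2r} d\, x^{d-1}\, dx \leq 2dr\,(L+2r)^{d-1}.$$ Thus, for fixed $r$, the $r$-neighborhood of a cube exceeds the cube itself in measure only by $O(L^{d-1})$ as $L = \ell(\Lambda) \to \infty$; boundary effects are negligible compared with the bulk. Estimates of exactly this flavor underlie the assumptions of the lemma below.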
For $r>0$ and $A\subset \mathbb{R}^d$ we define $$\partial (A, r) = \left\{x \in \mathbb{R}^d\mid \exists y\in A, \exists z\in \mathbb{R}^d\setminus A: \left\lVert x-y\right\rVert_\infty \leq r \text{ and } \left\lVert x-z\right\rVert_\infty \leq r \right\},$$ $$B_r(A) = A\cup \partial(A, r) = \{x\in \mathbb{R}^d\mid \exists y\in A: \left\lVert x-y\right\rVert_\infty \leq r\}.$$ For a finite set $\mathcal{C} = \{\Lambda_1, \dots, \Lambda_n\}$ of cubes of $\mathbb{R}^d$ we set $$\ell_{\min}(\mathcal{C}) = \min_{1\leq i \leq n} \ell(\Lambda_i), \quad \ell_{\max}(\mathcal{C}) = \max_{1\leq i \leq n} \ell(\Lambda_i).$$ The following lemma was proved in [@Tsukamoto; @local; @nature Proposition 3.4]. **Lemma 50**. *For any $\eta>0$ there exists a natural number $k_0 = k_0(\eta)>0$ for which the following statement holds. Let $A$ be a bounded Borel subset of $\mathbb{R}^d$. Let $\mathcal{C}_k$ $(1\leq k \leq k_0)$ be finite sets of cubes of $\mathbb{R}^d$ such that* 1. *$\ell_{\max}(\mathcal{C}_1) \geq 1$ and $\ell_{\min}(\mathcal{C}_{k+1}) \geq k_0\cdot \ell_{\max}(\mathcal{C}_k)$ for all $1\leq k \leq k_0-1$,* 2. *$\mathbf{m}\left(\partial (A, \ell_{\max}(\mathcal{C}_{k_0}))\right) < (\eta/3)\cdot \mathbf{m}(A)$,* 3. *$A \subset \bigcup_{\Lambda\in \mathcal{C}_k} \Lambda$ for every $1\leq k \leq k_0$.* *Then there is a disjoint subfamily $\mathcal{C}^\prime \subset \mathcal{C}_1\cup \dots \cup \mathcal{C}_{k_0}$ satisfying $$\bigcup_{\Lambda\in \mathcal{C}^\prime}\Lambda \subset A, \quad \mathbf{m}\left(B_1\left(A \setminus \bigcup_{\Lambda\in \mathcal{C}^\prime} \Lambda\right)\right) < \eta \cdot \mathbf{m}(A).$$ Here "disjoint" means that for any two distinct $\Lambda_1, \Lambda_2\in \mathcal{C}^\prime$ we have $\Lambda_1\cap \Lambda_2 = \emptyset$.* This is a rather technical statement. The assumption (1) means that some cube of $\mathcal{C}_1$ is not so small and that every cube in $\mathcal{C}_{k+1}$ is much larger than cubes in $\mathcal{C}_k$. 
The assumption (2) means that $A$ is much larger than all the given cubes. The assumption (3) means that each $\mathcal{C}_k$ covers $A$. The conclusion means that we can find a disjoint subfamily $\mathcal{C}^\prime$ which covers a substantial portion of $A$. ## The case that $\varphi$ is nonnegative {#subsection: the case that varphi is nonnegative} This subsection is also a preparation for the proof of Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"}. Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X} \to \mathbb{R}$ a continuous function. Throughout this subsection, we assume that $\varphi$ is a nonnegative function. Recall that for a bounded Borel subset $A\subset \mathbb{R}^d$ a new function $\varphi_A\colon \mathcal{X}\to \mathbb{R}$ is defined by $$\varphi_A(x) = \int_A \varphi(T^u x) \, du.$$ **Lemma 51**. *Let $0<\varepsilon <1$ and $E\subset \mathcal{X}$. Let $A, A_1, A_2, \dots, A_n$ be bounded Borel subsets of $\mathbb{R}^d$. If $A\subset A_1 \cup A_2 \cup \dots \cup A_n$ then $$\#\left(E, \mathbf{d}_A, \varphi_A, \varepsilon\right) \leq \prod_{k=1}^n \#\left(E, \mathbf{d}_{A_k}, \varphi_{A_k}, \varepsilon\right).$$* *Proof.* Suppose we are given an open cover $E\subset U_{k1}\cup \dots \cup U_{k m_k}$ with $\mathrm{Diam}(U_{k j}, \mathbf{d}_{A_k}) < \varepsilon$ for each $1\leq k \leq n$. Then $$E\subset \bigcup \left\{ U_{1 j_1}\cap U_{2 j_2} \cap \dots \cap U_{n j_n} \middle|\, 1\leq j_1 \leq m_1, 1 \leq j_2 \leq m_2, \dots, 1\leq j_n \leq m_n \right\}.$$ From $A\subset A_1\cup \dots \cup A_n$, the diameter of $U_{1 j_1}\cap U_{2 j_2} \cap \dots \cap U_{n j_n}$ is smaller than $\varepsilon$ with respect to the metric $\mathbf{d}_A$. 
Since $\varphi$ is nonnegative (here we use this assumption), we have $$\varphi_A \leq \varphi_{A_1} + \varphi_{A_2}+ \dots + \varphi_{A_n}$$ and hence $$\sup_{U_{1 j_1} \cap \dots \cap U_{n j_n}} \varphi_{A} \leq \sum_{k=1}^n \sup_{U_{k j_k}} \varphi_{A_k}.$$ Therefore we have $$\sum_{\substack{1\leq j_1 \leq m_1 \\ \vdots \\ 1\leq j_n \leq m_n}} \left(\frac{1}{\varepsilon}\right)^{\sup_{U_{1 j_1} \cap \dots \cap U_{n j_n}} \varphi_{A}} \leq \prod_{k=1}^n \left(\sum_{j=1}^{m_k} \left(\frac{1}{\varepsilon}\right)^{\sup_{U_{k j}} \varphi_{A_k}}\right).$$ This proves the claim of the lemma. ◻ **Lemma 52**. *For $0< \varepsilon < 1$ and a bounded Borel subset $A\subset \mathbb{R}^d$, we have $$\#\left(\mathcal{X}, \mathbf{d}_A, \varphi_A, \varepsilon\right) \leq \{\#\left(\mathcal{X}, \mathbf{d}_{[0, 1]^d}, \varphi_{[0, 1]^d}, \varepsilon\right)\}^{\mathbf{m}\left(B_1(A)\right)}.$$ Notice that $\#\left(\mathcal{X}, \mathbf{d}_{[0, 1]^d}, \varphi_{[0, 1]^d}, \varepsilon\right) \geq 1$ because $0<\varepsilon <1$ and $\varphi$ is nonnegative.* *Proof.* Let $\Omega$ be the set of $u\in \mathbb{Z}^d$ with $\left(u+[0, 1]^d\right)\cap A \neq \emptyset$. We have $$A \subset \bigcup_{u\in \Omega} \left(u+[0,1]^d\right) \subset B_1(A).$$ In particular the cardinality of $\Omega$ is bounded from above by $\mathbf{m}\left(B_1(A)\right)$. 
Then by Lemma [Lemma 51](#lemma: block coding){reference-type="ref" reference="lemma: block coding"} $$\begin{aligned} \#\left(\mathcal{X}, \mathbf{d}_A, \varphi_A, \varepsilon\right) & \leq \prod_{u\in \Omega} \#\left(\mathcal{X}, \mathbf{d}_{u+[0,1]^d}, \varphi_{u+[0,1]^d}, \varepsilon\right) \\ & = \prod_{u\in \Omega} \#\left(\mathcal{X}, \mathbf{d}_{[0,1]^d}, \varphi_{[0,1]^d}, \varepsilon\right) \\ & \leq \{\#\left(\mathcal{X}, \mathbf{d}_{[0, 1]^d}, \varphi_{[0, 1]^d}, \varepsilon\right)\}^{\mathbf{m}\left(B_1(A)\right)}. \end{aligned}$$ ◻ The following is the main result of this subsection. This is a modification of a classical result of Bowen [@Bowen; @entropy; @expansive Proposition 2.2]. Here recall that we have assumed that $\varphi$ is nonnegative. **Proposition 53**. *For positive numbers $\delta, \beta$ and $0<\varepsilon<1$ there is a positive number $D = D(\delta, \beta, \varepsilon)$ for which the following statement holds. Set $$a = \frac{\sup_{x\in \mathcal{X}} P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d}, \varphi, \varepsilon\right)}{\log (1/\varepsilon)}.$$ Then for all sufficiently large $L$ we have $$\sup_{x\in \mathcal{X}} \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right) \leq \left(\frac{1}{\varepsilon}\right)^{(a+\beta)L^d}.$$ Here $B_\delta(x, \mathbf{d}_{[-D, L+D]^d}) = \{y\in \mathcal{X}\mid \mathbf{d}_{[-D, L+D]^d}(x, y) \leq \delta\}$.* *Proof.* Choose a positive number $\eta$ satisfying $$\label{eq: choice of eta in the proof of local nature} \left(\#(\mathcal{X}, \mathbf{d}_{[0, 1]^d}, \varphi_{[0, 1]^d}, \varepsilon)\right)^\eta < \left(\frac{1}{\varepsilon}\right)^{\frac{\beta}{2}}.$$ Let $k_0 = k_0(\eta)$ be the natural number introduced in Lemma [Lemma 50](#lemma: tiles){reference-type="ref" reference="lemma: tiles"}. We will construct the following data inductively on $k =1, 2, \dots, k_0$. - A finite set $Y_k \subset \mathcal{X}$. 
- Positive numbers $L_k(y)$ and $M_k(y)$ for each $y\in Y_k$. - Open neighborhoods $V_k(y)$ and $U_k(y)$ of $y$ in $\mathcal{X}$ with $V_k(y) \subset U_k(y)$ for each $y\in Y_k$. We assume the following conditions. 1. $L_1(y) > 1$ for all $y\in Y_1$. 2. $L_k(y) > k_0 L_{k-1}(z)$ for all $y\in Y_k$, $z\in Y_{k-1}$ and $2\leq k \leq k_0$. 3. $\#\left(U_k(y), \mathbf{d}_{L_k(y)}, \varphi_{L_k(y)}, \varepsilon\right) < (1/\varepsilon)^{(a+\frac{\beta}{2})L_k(y)^d}$ for all $y\in Y_k$. 4. $B_\delta(v, \mathbf{d}_{[-M_k(y), M_k(y)]^d}) \subset U_k(y)$ for all $y\in Y_k$ and $v\in V_k(y)$. 5. $\mathcal{X} = \bigcup_{y\in Y_k} V_k(y)$ for every $1\leq k \leq k_0$. The construction of these data goes as follows. Suppose that the data of the $(k-1)$-th step (i.e. $Y_{k-1}, L_{k-1}(y), M_{k-1}(y), V_{k-1}(y), U_{k-1}(y)$) have been constructed. We consider the $k$-th step. (The case of $k=1$ is similar.) Take an arbitrary $y\in \mathcal{X}$. Since we have $$\frac{P_T\left(B_\delta(y, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d}, \varphi, \varepsilon\right)}{\log (1/\varepsilon)} \leq a < a + \frac{\beta}{2},$$ there is a positive number $L_k(y)$ larger than $k_0 \max_{z\in Y_{k-1}} L_{k-1}(z)$ (we assume $L_1(y)>1$ in the case of $k=1$) satisfying $$\frac{1}{L_k(y)^d} \log \#\left(B_\delta(y, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d}_{L_k(y)}, \varphi_{L_k(y)}, \varepsilon\right) < \left(a+\frac{\beta}{2}\right) \log (1/\varepsilon).$$ Then there is an open set $U_k(y) \supset B_\delta(y, \mathbf{d}_{\mathbb{R}^d})$ such that $$\frac{1}{L_k(y)^d} \log \#\left(U_k(y), \mathbf{d}_{L_k(y)}, \varphi_{L_k(y)}, \varepsilon\right) < \left(a+\frac{\beta}{2}\right) \log (1/\varepsilon).$$ Namely we have $$\#\left(U_k(y), \mathbf{d}_{L_k(y)}, \varphi_{L_k(y)}, \varepsilon\right) < \left(\frac{1}{\varepsilon}\right)^{(a+\frac{\beta}{2})L_k(y)^d}.$$ Since $B_\delta(y, \mathbf{d}_{\mathbb{R}^d}) = \bigcap_{M=1}^\infty B_{\delta}(y, \mathbf{d}_{[-M, M]^d}) \subset U_k(y)$, there is a positive number $M_k(y)$ with 
$B_\delta(y, \mathbf{d}_{[-M_k(y), M_k(y)]^d}) \subset U_k(y)$. There is an open neighborhood $V_k(y)$ of $y$ such that for every $v\in V_k(y)$ we have $B_\delta(v, \mathbf{d}_{[-M_k(y), M_k(y)]^d}) \subset U_k(y)$. Since $\mathcal{X}$ is compact we can find a finite set $Y_k\subset \mathcal{X}$ satisfying $\mathcal{X} = \bigcup_{y\in Y_k} V_k(y)$. The construction of the $k$-th step has been finished. Take $D>1$ with $D>\max\{M_k(y)\mid 1\leq k \leq k_0, y\in Y_k\}$. Let $L$ be a sufficiently large number so that the cube $A :=[0, L]^d$ satisfies $\mathbf{m}\left(\partial(A, \max_{y\in Y_{k_0}} L_{k_0}(y))\right) < \frac{\eta}{3} L^d$. We will show that $\sup_{x\in \mathcal{X}} \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right) \leq \left(\frac{1}{\varepsilon}\right)^{(a+\beta)L^d}$. Take an arbitrary point $x\in \mathcal{X}$. For each $1 \leq k \leq k_0$ and $t\in A\cap \mathbb{Z}^d$ we pick $y\in Y_k$ with $T^t x\in V_k(y)$. Set $\Lambda_{k, t} = t + [0, L_k(y)]^d$. It follows from the choice of $D$ that $$T^t\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d})\right) \subset B_\delta(T^t x, \mathbf{d}_{[-M_k(y), M_k(y)]^d}) \subset U_k(y).$$ Hence $$\#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_{\Lambda_{k, t}}, \varphi_{\Lambda_{k,t}}, \varepsilon\right) \leq \#\left(U_k(y), \mathbf{d}_{L_k(y)}, \varphi_{L_k(y)}, \varepsilon\right) < \left(\frac{1}{\varepsilon}\right)^{(a+\frac{\beta}{2})L_k(y)^d}.$$ Set $\mathcal{C}_k = \{\Lambda_{k, t}\mid t\in A\cap \mathbb{Z}^d\}$. This is a finite family of cubes covering $A= [0, L]^d$. Notice that $\mathcal{C}_k$ depends on the choice of $x$; we suppress its dependence on $x$ in the notation for simplicity. 
By Lemma [Lemma 50](#lemma: tiles){reference-type="ref" reference="lemma: tiles"} there is a disjoint subfamily $\mathcal{C}^\prime\subset \mathcal{C}_1\cup \dots \cup \mathcal{C}_{k_0}$ such that $$\bigcup_{\Lambda\in \mathcal{C}^\prime} \Lambda \subset A, \quad \mathbf{m}\left(B_1\left(A\setminus \bigcup_{\Lambda\in \mathcal{C}^\prime} \Lambda \right)\right) < \eta \, \mathbf{m}(A).$$ Set $$A^\prime = A\setminus \bigcup_{\Lambda\in \mathcal{C}^\prime} \Lambda.$$ For every $\Lambda\in \mathcal{C}^\prime$ we have $$\#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_{\Lambda}, \varphi_{\Lambda}, \varepsilon\right) < \left(\frac{1}{\varepsilon}\right)^{(a+\frac{\beta}{2})\mathbf{m}(\Lambda)}.$$ By Lemma [Lemma 52](#lemma: crude estimate of covering number){reference-type="ref" reference="lemma: crude estimate of covering number"} $$\begin{aligned} \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_{A^\prime}, \varphi_{A^\prime}, \varepsilon\right) & \leq \#\left(\mathcal{X}, \mathbf{d}_{A^\prime}, \varphi_{A^\prime}, \varepsilon\right) \\ & \leq \left\{\#\left(\mathcal{X}, \mathbf{d}_{[0, 1]^d}, \varphi_{[0, 1]^d}, \varepsilon\right)\right\}^{\mathbf{m}(B_1(A^\prime))} \\ & < \left\{\#\left(\mathcal{X}, \mathbf{d}_{[0, 1]^d}, \varphi_{[0,1]^d}, \varepsilon\right)\right\}^{\eta \, \mathbf{m}(A)} \\ & < \left(\frac{1}{\varepsilon}\right)^{\frac{\beta}{2} \mathbf{m}(A)}.\end{aligned}$$ In the last inequality we have used ([\[eq: choice of eta in the proof of local nature\]](#eq: choice of eta in the proof of local nature){reference-type="ref" reference="eq: choice of eta in the proof of local nature"}). 
From Lemma [Lemma 51](#lemma: block coding){reference-type="ref" reference="lemma: block coding"} $$\begin{aligned} & \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_A, \varphi_A, \varepsilon\right) \\ & \leq \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_{A^\prime}, \varphi_{A^\prime}, \varepsilon\right) \prod_{\Lambda\in \mathcal{C}^\prime} \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_{\Lambda}, \varphi_{\Lambda}, \varepsilon\right) \\ & < \left(\frac{1}{\varepsilon}\right)^{(a+\beta)\mathbf{m}(A)}.\end{aligned}$$ This holds for every point $x\in \mathcal{X}$. Thus we have proved the claim of the proposition. ◻ ## Proof of Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"} {#subsection: proof of the local formula} Here we prove Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"}. Let $T\colon \mathbb{R}^d\times \mathcal{X}\to \mathcal{X}$ be a continuous action of $\mathbb{R}^d$ on a compact metrizable space $\mathcal{X}$. Let $\mathbf{d}$ be a metric on $\mathcal{X}$ and $\varphi\colon \mathcal{X}\to \mathbb{R}$ a continuous function. We do not assume that $\varphi$ is nonnegative. The next proposition looks the same as Proposition [Proposition 53](#prop: modification of Bowen paper){reference-type="ref" reference="prop: modification of Bowen paper"}. The point is that we do not assume the nonnegativity of $\varphi$ here whereas we assumed it in Proposition [Proposition 53](#prop: modification of Bowen paper){reference-type="ref" reference="prop: modification of Bowen paper"}. **Proposition 54**. *Let $\delta, \beta, \varepsilon$ be positive numbers with $0<\varepsilon <1$. 
Set $$a = \sup_{x\in \mathcal{X}} \frac{P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d},\varphi, \varepsilon\right)}{\log (1/\varepsilon)}.$$ Then for all sufficiently large $L$ we have $$\sup_{x\in \mathcal{X}}\#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right) \leq \left(\frac{1}{\varepsilon}\right)^{(a+\beta)L^d}.$$ Here $D = D(\delta, \beta, \varepsilon)$ is the positive constant[^10] introduced in Proposition [Proposition 53](#prop: modification of Bowen paper){reference-type="ref" reference="prop: modification of Bowen paper"}.* *Proof.* Set $c = \min_{x \in \mathcal{X}}\varphi(x)$ and $\psi(x) = \varphi(x)-c$. We have $\psi(x) \geq 0$. For any positive number $L$ we have $$\psi_L(x) = \varphi_L(x) - cL^d.$$ For any subset $E\subset \mathcal{X}$ $$\#\left(E, \mathbf{d}_L, \psi_L, \varepsilon\right) = (1/\varepsilon)^{-c L^d} \#\left(E, \mathbf{d}_L, \varphi_L, \varepsilon\right).$$ Hence $$P_T(E, \mathbf{d}, \psi, \varepsilon) = P_T(E,\mathbf{d},\varphi, \varepsilon) - c\log (1/\varepsilon),$$ $$\sup_{x\in \mathcal{X}} \frac{P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d},\psi,\varepsilon\right)}{\log (1/\varepsilon)} = a-c.$$ Since $\psi$ is a nonnegative function, we apply Proposition [Proposition 53](#prop: modification of Bowen paper){reference-type="ref" reference="prop: modification of Bowen paper"} and get $$\sup_{x\in \mathcal{X}}\#\left(B_\delta(x, \mathbf{d}_{[-D,L+D]^d}), \mathbf{d}_L, \psi_L, \varepsilon\right) \leq \left(\frac{1}{\varepsilon}\right)^{(a-c+\beta)L^d}$$ for sufficiently large $L$. 
We have $$\#\left(B_\delta(x, \mathbf{d}_{[-D,L+D]^d}), \mathbf{d}_L, \psi_L, \varepsilon\right) = \left(\frac{1}{\varepsilon}\right)^{-c L^d} \#\left(B_\delta(x, \mathbf{d}_{[-D,L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right).$$ Therefore $$\sup_{x\in \mathcal{X}} \#\left(B_\delta(x, \mathbf{d}_{[-D,L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right) \leq \left(\frac{1}{\varepsilon}\right)^{(a+\beta)L^d}.$$ ◻ Now we prove Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"}. We write the statement again. **Theorem 55** ($=$ Theorem [Theorem 49](#theorem: local formula of metric mean dimension with potential){reference-type="ref" reference="theorem: local formula of metric mean dimension with potential"}). *For any positive number $\delta$ $$\label{eq: local formula of metric mean dimension with potential} \begin{split} \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) = \limsup_{\varepsilon\to 0} \frac{\sup_{x\in \mathcal{X}} P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d},\varphi,\varepsilon\right)}{\log (1/\varepsilon)}, \\ \underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, \mathbf{d}, \varphi) = \liminf_{\varepsilon\to 0} \frac{\sup_{x\in \mathcal{X}} P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d},\varphi,\varepsilon\right)}{\log (1/\varepsilon)}. \end{split}$$* *Proof.* It is obvious that the left-hand sides of ([\[eq: local formula of metric mean dimension with potential\]](#eq: local formula of metric mean dimension with potential){reference-type="ref" reference="eq: local formula of metric mean dimension with potential"}) are greater than or equal to the right-hand sides. So it is enough to prove the reverse inequalities. Let $\beta$ and $\varepsilon$ be arbitrary positive numbers with $0<\varepsilon<1$. 
Let $D= D(\delta, \beta, \varepsilon)$ be the positive constant introduced in Proposition [Proposition 53](#prop: modification of Bowen paper){reference-type="ref" reference="prop: modification of Bowen paper"}. Set $$a = \sup_{x\in \mathcal{X}} \frac{P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d},\varphi, \varepsilon\right)}{\log (1/\varepsilon)}.$$ For any positive number $L$ we can take points $x_1, \dots, x_M\in \mathcal{X}$ such that $$\mathcal{X} = \bigcup_{m=1}^M B_\delta(x_m, \mathbf{d}_{[-D, L+D]^d}),$$ $$M\leq \#\left(\mathcal{X}, \mathbf{d}_{[-D,L+D]^d}, \delta\right) = \#\left(\mathcal{X}, \mathbf{d}_{[0, L+2D]^d},\delta\right).$$ Then we have $$\begin{aligned} \#\left(\mathcal{X}, \mathbf{d}_L, \varphi_L, \varepsilon\right) & \leq \sum_{m=1}^M \#\left(B_\delta(x_m, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right) \\ & \leq M\, \sup_{x\in \mathcal{X}} \#\left(B_\delta(x, \mathbf{d}_{[-D, L+D]^d}), \mathbf{d}_L, \varphi_L, \varepsilon\right) \\ & \leq M \left(\frac{1}{\varepsilon}\right)^{(a+\beta)L^d}.\end{aligned}$$ The last inequality holds for all sufficiently large $L$ by Proposition [Proposition 54](#prop: modification of Bowen paper revisited){reference-type="ref" reference="prop: modification of Bowen paper revisited"}. 
Therefore $$\log \#\left(\mathcal{X}, \mathbf{d}_L, \varphi_L, \varepsilon\right) \leq \log \#\left(\mathcal{X}, \mathbf{d}_{L+2D}, \delta\right) + (a+\beta) L^d \log (1/\varepsilon).$$ Dividing this by $L^d$ and letting $L\to \infty$, we have $$\begin{aligned} P_T(\mathcal{X}, \mathbf{d}, \varphi, \varepsilon) & \leq \lim_{L\to \infty} \frac{\log \#\left(\mathcal{X}, \mathbf{d}_L, \delta\right)}{L^d} + (a+\beta) \log (1/\varepsilon) \\ & \leq \log \#\left(\mathcal{X}, \mathbf{d}_1, \delta\right) + (a+\beta) \log (1/\varepsilon).\end{aligned}$$ We can let $\beta\to 0$ and get $$\begin{aligned} P_T(\mathcal{X}, \mathbf{d},\varphi, \varepsilon) & \leq \log \#\left(\mathcal{X}, \mathbf{d}_1, \delta\right) + a \log (1/\varepsilon) \\ & = \log \#\left(\mathcal{X}, \mathbf{d}_1, \delta\right) + \sup_{x\in \mathcal{X}} P_T\left(B_\delta(x, \mathbf{d}_{\mathbb{R}^d}), \mathbf{d}, \varphi, \varepsilon\right). \end{aligned}$$ We divide this by $\log (1/\varepsilon)$ and let $\varepsilon \to 0$. Then we conclude that the left-hand sides of ([\[eq: local formula of metric mean dimension with potential\]](#eq: local formula of metric mean dimension with potential){reference-type="ref" reference="eq: local formula of metric mean dimension with potential"}) are less than or equal to the right-hand sides. ◻ R. Bowen, Entropy-expansive maps, Trans. Amer. Math. Soc. **164** (1972) 323-331. T. M. Cover, J. A. Thomas, Elements of information theory, second edition, Wiley, New York, 2006. E. I. Dinaburg, A correlation between topological entropy and metric entropy, Dokl. Akad. Nauk SSSR **190** (1970) 19-22. R. Durrett, Probability: theory and examples, fourth edition, Cambridge Series in Statistical and Probabilistic Mathematics, 31. Cambridge University Press, Cambridge, 2010. T. N. T. Goodman, Relating topological entropy and measure entropy, Bull. London Math. Soc. **3** (1971) 176-180. L. W. Goodwyn, Topological entropy bounds measure-theoretic entropy, Proc. Amer. Math. Soc. 
**23** (1969) 679-688. R. M. Gray, Probability, random processes, and ergodic properties, second edition, Springer, Dordrecht, 2009. R. M. Gray, Entropy and information theory, second edition, Springer, New York, 2011. M. Gromov, Topological invariants of dynamical systems and spaces of holomorphic maps: I, Math. Phys. Anal. Geom. **2** (1999) 323-415. Y. Gutman, E. Lindenstrauss, M. Tsukamoto, Mean dimension of $\mathbb{Z}^k$-actions, Geom. Funct. Anal. **26** (2016) 778-817. Y. Gutman, Y. Qiao, M. Tsukamoto, Application of signal analysis to the embedding problem of $\mathbb{Z}^k$-actions, Geom. Funct. Anal. **29** (2019) 1440-1502. Y. Gutman, A. Śpiewak, Metric mean dimension and analog compression, IEEE Trans. Inf. Theory, vol. 66, no. 11 (2020) 6977--6998. Y. Gutman, M. Tsukamoto, Embedding minimal dynamical systems into Hilbert cubes, Invent. Math. **221** (2020) 113-166. Q. Huo, R. Yuan, Double variational principle for mean dimension of $\mathbb{Z}^K$-actions, arXiv:2310.03194. N. Ikeda, S. Watanabe, Stochastic differential equations and diffusion processes, second edition, North-Holland Publishing Co., Amsterdam-New York; Kodansha, Ltd., Tokyo, 1989. T. Kawabata and A. Dembo, The rate distortion dimension of sets and measures, IEEE Trans. Inf. Theory, vol. 40, no. 5, pp. 1564-1572, Sep. 1994. E. Lindenstrauss, Mean dimension, small entropy factors and an embedding theorem, Inst. Hautes Études Sci. Publ. Math. vol. 89 pp. 227-262, 1999. E. Lindenstrauss, M. Tsukamoto, From rate distortion theory to metric mean dimension: variational principle, IEEE Trans. Inform. Theory **64** (2018) 3590-3609. E. Lindenstrauss, M. Tsukamoto, Double variational principle for mean dimension, Geom. Funct. Anal. **29** (2019) 1048--1109. E. Lindenstrauss, B. Weiss, Mean topological dimension, Israel J. Math. **115** (2000) 1-24. S. Matsuo, M. Tsukamoto, Brody curves and mean dimension, J. Amer. Math. Soc. **28** (2015) 159-182. P. 
Mattila, Geometry of sets and measures in Euclidean spaces, Fractals and rectifiability, Cambridge Studies in Advanced Mathematics, 44, Cambridge University Press, Cambridge, 1995. D. S. Ornstein, B. Weiss, Entropy and isomorphism theorems for actions of amenable groups, J. Analyse Math. **48** (1987) 1-141. M. B. Pursley, R. M. Gray, Source coding theorems for stationary, continuous-time stochastic processes, Ann. Probab. **5** (1977) 966-986. D. Ruelle, Statistical mechanics on a compact set with $Z^\nu$ action satisfying expansiveness and specification, Trans. Amer. Math. Soc. **185** (1973) 237-251. S. M. Srivastava, A course on Borel sets, Grad. Texts in Math., 180, Springer-Verlag, New York, 1998. M. Tsukamoto, Mean dimension of the dynamical systems of Brody curves, Invent. Math. **211** (2018) 935-968. M. Tsukamoto, Double variational principle for mean dimension with potential, Adv. Math. **361** (2020) 106935. M. Tsukamoto, Remark on the local nature of metric mean dimension, Kyushu J. Math. **76** (2022) 143-162. P. Walters, A variational principle for the pressure of continuous transformations, Amer. J. Math. **97** (1975) 937-971. [^1]: The author was supported by JSPS KAKENHI JP21K03227. [^2]: For convenience in the sequel, we define this notion more precisely. Let $\mathcal{X}$ and $\mathcal{Y}$ be finite sets. A map $\mathcal{X}\times \mathcal{Y}\ni (x, y) \mapsto \nu(y|x)\in [0,1]$ is called a conditional probability mass function if $\sum_{y\in \mathcal{Y}} \nu(y|x) = 1$ for every $x\in \mathcal{X}$. [^3]: This is a special case of the data-processing inequality [@Cover--Thomas Theorem 2.8.1]. [^4]: The difficulty lies in Proposition [Proposition 35](#proposition: L^1 mean Hausdorff dimension and mean Hausdorff dimension under tame growth){reference-type="ref" reference="proposition: L^1 mean Hausdorff dimension and mean Hausdorff dimension under tame growth"} below. 
It is unclear to the author how to formulate and prove an analogous result for $\mathbb{R}^d$-actions. [^5]: The construction of $f$ is as follows. Take a Lipschitz function $\psi\colon [0, \infty) \to [0,1]$ such that $\psi(t) = 1$ for $0\leq t \leq \varepsilon/4$ and $\psi(t) = 0$ for $t\geq \varepsilon/2$. Let $\{x_1, \dots, x_M\}$ be an $(\varepsilon/4)$-spanning set of $\mathcal{X}$. Then we set $f(x) = \left(\psi(d(x, x_1)), \psi(d(x, x_2)), \dots, \psi(d(x, x_M))\right)$. [^6]: The quantity $\mathcal{H}^{(s+2\eta)L_n^d}_\delta\left(\mathcal{X}, \overline{\mathbf{d}}_{L_n}, \varphi_{L_n}\right)$ is defined only when $(s+2\eta)L_n^d >\max \varphi_{L_n}$. Therefore the following argument is problematic if we have $(s+2\eta)L_n^d \leq \max \varphi_{L_n}$ for all but finitely many $n$. However, in this case, there is $t \in [-\left\lVert\varphi\right\rVert_\infty, \left\lVert\varphi\right\rVert_\infty]$ such that $t\geq s+\eta$ and $\mathcal{X}_n(t) \neq \emptyset$ for infinitely many $n$. Then we have $\dim_{\mathrm{H}}(\mathcal{X}_n(t), \overline{\mathbf{d}}_{L_n}, \delta) \geq 0 > c(s-t) L_n^d$ for infinitely many $n$ for this choice of $t$. [^7]: This sentence is not rigorous. Strictly speaking, we can construct random variables $X(n), X^\prime, Y^\prime$ defined on a common probability space such that $\mathrm{Law}\left(X^\prime, Y^\prime\right) = \mathrm{Law}\left(\mathcal{P}(X), Y\right)$, $$\mathbb{P}\left(X(n) = x_i, X^\prime = x_j\right) \to \delta_{ij} \mathbb{P}\left(X^\prime= x_j \right) \quad (n\to \infty),$$ and $$\mathbb{P}\left(X(n) = x_i, Y=y\middle|\, X^\prime = x_j \right) = \mathbb{P}\left(X(n) = x_i\middle|\, X^\prime = x_j \right) \cdot \mathbb{P}\left(Y=y\middle|\, X^\prime = x_j \right).$$ For simplicity we identify $X^\prime$ and $Y^\prime$ with $\mathcal{P}(X)$ and $Y$ respectively. 
[^8]: For two measures $\mathbf{m}_1$ and $\mathbf{m}_2$ on $\mathcal{X}$ we write $\mathbf{m}_1\leq \mathbf{m}_2$ if we have $\mathbf{m}_1(B) \leq \mathbf{m}_2(B)$ for all Borel subsets $B\subset \mathcal{X}$. [^9]: Let $W(v)$ be the random variable introduced in Claim [Claim 47](#claim: distance between Z and W){reference-type="ref" reference="claim: distance between Z and W"}. Then $I(\nu_n, \sigma_{n, v}) = I(Z;W(v))$. Consider the restrictions $W(v)|_{v+\lambda+[0, M)^d}$ $(\lambda\in \Lambda)$ and $W(v)|_{E_v}$. From the definition of the measure $\sigma_{n, v}$, they are conditionally independent given $Z$. By Lemma [Lemma 11](#lemma: subadditivity of mutual information){reference-type="ref" reference="lemma: subadditivity of mutual information"} $$I(Z;W) \leq I\left(Z;W(v)|_{E_v}\right) + \sum_{\lambda\in \Lambda} I\left(Z;W(v)|_{v+\lambda+[0, M)^d}\right).$$ $I\left(Z;W(v)|_{E_v}\right) =0$ because $W(v)|_{E_v}$ is constantly equal to $x_0$. We have $$I\left(Z;W(v)|_{v+\lambda+[0, M)^d}\right) = I\left(\mathcal{P}(T^{v+\lambda} Z); W(v)|_{v+\lambda+[0, M)^d}\right) = I\left(\mathcal{P}_*T^{v+\lambda}_*\nu_n, \rho_n\right).$$ [^10]: *Strictly speaking, the constant $D$ depends on not only $\delta, \beta, \varepsilon$ but also $(\mathcal{X},T, \mathbf{d}, \psi)$ where $\psi := \varphi - \min_{\mathcal{X}}\varphi$.*
**Rescaling method for blow-up solutions of nonlinear wave equations** Mondher Benjemaa^a^, Aida Jrajria^b^, Hatem Zaag^c^ ^a^Faculty of Sciences of Sfax, Sfax University, Tunisia. mondher.benjemaa\@fss.usf.tn\ ^b^Faculty of Sciences of Sfax, Sfax University, Tunisia. aida.jrajria\@gmail.com (Corresponding author)\ ^c^University Sorbonne Paris Nord, LAGA-CNRS, F-93420, Villetaneuse, France. Hatem.Zaag\@math.cnrs.fr\ **Abstract:** We develop a hybrid scheme based on a finite difference scheme and a rescaling technique to approximate the solution of a nonlinear wave equation. In order to numerically reproduce the blow-up phenomenon, we propose a scaling transformation rule, which is a variant of what was successfully used in the case of nonlinear parabolic equations. A careful study of the convergence of the proposed scheme is carried out, and several numerical examples are presented as illustrations.\ **Keywords:** Nonlinear wave equation, Numerical blow-up, Finite difference method, Rescaling method.\ **2010 MSC:** 26A33, 34A08, 34A12, 45D05, 65D25 # Introduction {#sec_intro} In this paper, we are concerned with the study of the numerical approximation of solutions of the nonlinear wave equation that achieve blow-up in finite time $$\begin{aligned} \label{equation1} \partial_{tt} u = \partial_{xx} u + F(u),\ x\in\,(0,1),\ t \in\,(0,\infty) \end{aligned}$$ with $F(u) = u^p, p>1$, subject to periodic boundary conditions $$\begin{aligned} \label{boundary condition} u(0,t) = u(1,t) ,\ t\geqslant 0 \end{aligned}$$ and the initial conditions $$\begin{aligned} \label{initial condition} u(x,0) = u_0(x),\ \partial_t u(x,0) = u_1(x),\ x\in\,(0,1)\end{aligned}$$ where $u(t): x\in (0,1)\longmapsto u(x,t)\in \mathbb{R}$ is the unknown function.\ The existence of solutions of the nonlinear wave equation [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"}-[\[initial condition\]](#initial condition){reference-type="eqref" reference="initial condition"} was
studied in [@Caffarelli], where the authors gave a full description of the blow-up set. In [@Glassey], Glassey proved that, under suitable assumptions on the initial data, the solution $u$ of [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} blows up in finite time in the following sense: there exists $T_\infty < \infty$, called the blow-up time, such that the solution $u$ exists on $[0, T_\infty)$ and $$\begin{aligned} || u(.,t)||_\infty \longrightarrow \infty\ \text{as}\ t\longrightarrow T_\infty.\end{aligned}$$ Caffarelli and Friedman [@Caffarelli] found that there exists a so-called blow-up curve $t = T(x)$ such that the solution $u(x,t)$ satisfies $|u(x,t)|<\infty$ if and only if $t< T(x)$. The blow-up time is therefore $\inf_x T(x)$. For more theoretical results, the reader can refer, e.g., to [@Azaiez; @MerleZaag2003; @MerleZaag2005; @MerleZaag2007; @MerleZaag2012]. In the numerical direction, the first work was done by Nakagawa in [@Nakagawa], using an adaptive time-stepping strategy to compute blow-up finite difference solutions and the numerical blow-up time for the 1D semilinear heat equation (see also [@abia; @AbiaEtAl; @Cho2007]). For the numerical approximation of blow-up solutions of hyperbolic equations, Cho applied Nakagawa's ideas to the nonlinear wave equation [@Cho2010]. Later on, his results were generalized in [@AzaiezEtAl; @Cho2018; @SasakiSaito].\ In this paper, we intend to develop the rescaling algorithm first proposed by Berger and Kohn [@Berg] in 1988 for parabolic equations that are invariant under a scaling transformation. This scaling property allows us to zoom in on the solution near the singularity while keeping the same equation.
The scaling transformation is given by $$\begin{aligned} \label{scal transformation} u_\lambda(\xi,\tau) = \lambda^{\frac{2}{p-1}} u(\lambda \xi,\lambda \tau), \quad \lambda>0.\end{aligned}$$ Clearly, if $u$ is a solution of [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} then $u_\lambda$ is also a solution of [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"}.\ This paper is organized as follows. In the next section, we present the finite difference scheme and the rescaling algorithm. Section [3](#sec Some properties of the discret scheme){reference-type="ref" reference="sec Some properties of the discret scheme"} is devoted to the proof of several results concerning the discrete solution. In Section [4](#sec Convergence of the scheme){reference-type="ref" reference="sec Convergence of the scheme"}, we prove the main results of this paper; namely, we establish that the numerical solution converges toward the exact solution. Finally, we give some illustrative examples in Section [5](#sec Numerical examples){reference-type="ref" reference="sec Numerical examples"}. # The numerical algorithm In this section, we derive the rescaling algorithm in combination with a finite difference scheme for the nonlinear wave equation [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"}. ## Finite difference approximation We use a second order approximation of both the temporal and the spatial derivative operators. Let $I$ be a positive integer and set $x_i = i\Delta x$ with $\Delta x = \frac{1}{I}$. For the time discretization, let $\Delta t>0$ be a time step, let $n\geqslant 0$ be an integer, and set $t^n = n\Delta t$.
The finite difference scheme of [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} is defined as follows: for all $n\geqslant 0$ and $1\leqslant i\leqslant I$, $$\begin{aligned} \label{scheme} &\frac{U_{i}^{n+1} - 2U_i^n + U_i^{n-1}}{\Delta t^2} = \frac{U_{i+1}^{n} -2U_{i}^{n} + U_{i-1}^{n}}{\Delta x^2} + F(U_{i}^{n}),\end{aligned}$$ where $U_{i}^{n}$ denotes the approximation for $u(x_i,t^n)$. We set the CFL number $\text{cfl} = \frac{\Delta t}{\Delta x} = 1$ and impose the following discrete initial and periodic boundary conditions $$\label{ini_boundary_cond} \left\{ \begin{array}{ll} U_{i}^{0} & \! \! = u_0(x_i), \\ U_{i}^{1} & \! \! = u_0(x_i) + \Delta t u_1(x_i) + \frac{\Delta t^2}{2\Delta x^2}(u_0(x_{i+1}) -2u_0(x_i)+u_0(x_{i-1})) + \frac{\Delta t^2}{2}(F(u_0(x_i))),\\ U_{0}^{n} & \! \! = U_{I}^{n}, \quad U_{I+1}^{n} = U_{1}^{n}. \end{array} \right.$$ **Notation 1**. *We write $U^n = (U_1^n,\cdots, U_I^n)^T$ and we set $$\begin{aligned} (U_i^n)_t = \frac{U_i^{n+1} - U_i^n}{\Delta t}, \quad (U_i^n)_{\overline{t}} = \frac{U_i^n - U_i^{n-1}}{\Delta t}\end{aligned}$$ $$\begin{aligned} (U_i^n)_{t\overline{t}} = \frac{U_i^{n+1} - 2U_i^n + U_i^{n-1}}{\Delta t^2}, \quad (U_i^n)_{x\overline{x}} = \frac{U_{i+1}^n - 2U_i^n + U_{i-1}^n}{\Delta x^2}.\end{aligned}$$ We define the norm $|| U||_\infty = \max_{1\leqslant i\leqslant I} |U_i|$ and we write $U\geqslant 0$ if $U_i \geqslant 0$ for all $1\leqslant i\leqslant I$.
Let $\lbrace (x_i,t^n,U_i^n)\ |\ 1\leqslant i\leqslant I, n\geqslant 0 \rbrace$ be a set of data points; we associate with it the function $\mathbf{U}$, a piecewise linear approximation in both space and time, such that for all $(x,t)\in (x_{i},x_{i+1})\times(t^n,t^{n+1})$, $$\begin{aligned} \label{linear approximation power} \mathbf{U}(x,t) &= \frac{1}{\Delta t\Delta x}\Big( U_i^n(x_{i+1}-x)(t^{n+1}-t) + U_{i+1}^n(x-x_i)(t^{n+1}-t)\nonumber\\ & + U_i^{n+1}(x_{i+1}-x)(t-t^n) + U_{i+1}^{n+1}(x-x_i)(t-t^n)\Big).\end{aligned}$$* ## The algorithm Now, we study the rescaling method for the system [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}-[\[ini_boundary_cond\]](#ini_boundary_cond){reference-type="eqref" reference="ini_boundary_cond"}. The transformation [\[scal transformation\]](#scal transformation){reference-type="eqref" reference="scal transformation"} is originally due to Berger and Kohn [@Berg] and was used successfully for parabolic blow-up problems, see [@Ngu17]. To set up the rescaling algorithm, let $J\in \mathds{N}^*$ and $I=J^2$. We consider the partition $[1,I] = \cup_{j=1}^{J}K_j$, with $K_j = [(j-1)J,jJ]$ and the numerical solution $U_{j}^{n} = {U^n}_{\vert K_j}$. Now, we introduce some notations: - $0<\lambda <1$: the scale factor, chosen such that $\lambda^{-1}$ is a small positive integer. - $M$: the maximum amplitude before rescaling. - $u^{(k)}(\xi_k,\tau_k)$ is the $k$-th rescaled solution defined in the space-time variables $(\xi_k,\tau_k)$. The initial index ($k=0$) corresponds to the real solution ($u^{(k=0)} = u, \xi_0 = x, \tau_0 = t$). - $U_i^{n,(k)}$: the approximation of $u^{(k)}(\xi_{k,i},\tau_{k}^n)$. The numerical solution [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} is updated until the first time step $n_0$ such that $||U_j^{n_0}||_\infty \geqslant M$ is reached.
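As an illustration of this marching-and-threshold step, the explicit update [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} with periodic boundary conditions can be sketched in Python as follows (a minimal sketch: the initial datum, the values of $I$, $p$, $M$ and the iteration cap are our own illustrative choices, not taken from the paper):

```python
import numpy as np

def step(U_prev, U_curr, dt, dx, p):
    """One step of the explicit scheme: centred second-order differences
    in time and space, periodic boundary handled via np.roll."""
    lap = (np.roll(U_curr, -1) - 2.0 * U_curr + np.roll(U_curr, 1)) / dx**2
    return 2.0 * U_curr - U_prev + dt**2 * (lap + U_curr**p)

I, p, M = 100, 3.0, 10.0           # illustrative values
dx = 1.0 / I
dt = dx                            # cfl = dt/dx = 1, as in the text
x = np.arange(1, I + 1) * dx
U0 = 5.0 * np.sin(np.pi * x) ** 2  # hypothetical initial datum u_0, with u_1 = 0
U1 = U0 + 0.5 * dt**2 * (
    (np.roll(U0, -1) - 2.0 * U0 + np.roll(U0, 1)) / dx**2 + U0**p)

# march until the sup-norm first reaches the rescaling threshold M
n, U_prev, U_curr = 1, U0, U1
while np.max(np.abs(U_curr)) < M and n < 10_000:
    U_prev, U_curr = U_curr, step(U_prev, U_curr, dt, dx, p)
    n += 1
```

When the loop exits at the threshold, `n` plays the role of the first time step $n_0$ with $||U^{n_0}||_\infty \geqslant M$.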
Then, using two time levels and a linear interpolation in time, we find a value $\tau_0^*$ satisfying $$\begin{aligned} \label{find tauStar} (n_0-1)\Delta t\leqslant\tau_0^*\leqslant n_0\Delta t \quad \text{and} \quad ||\mathbf{U}_j(.,\tau_0^*)||_\infty = M,\end{aligned}$$ as well as the rescaled interval $(x_{i_0^-},x_{i_0^+})$, with $i_0^-,i_0^+ \in K_j$. More precisely, we find the index $i$ where the solution reaches $M$, then we take $$\begin{aligned} \label{interval to be rescaled power} \left\{ \begin{array}{ll} i_0^+ = i\ \text{and} \ i_0^- = i - 1 &\text{if $U_j$ is increasing} \\ i_0^- = i \ \text{and} \ i_0^+ = i + 1 &\text{if $U_j$ is decreasing} \\ i_0^+ = i+1\ \text{and} \ i_0^- = i-1&\text{otherwise} \\ \end{array} \right.\end{aligned}$$ The first rescaled solution $u^{(1)}$ is related to $u$ by $$\begin{aligned} u^{(1)}(\xi_1,\tau_1) = \lambda^{\frac{2}{p-1}}u(\lambda\xi_1,\tau^{*}_0+\lambda\tau_1),\end{aligned}$$ which is also a solution of equation [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} for $\lambda^{-1}x_{i^-_0}<\xi_1<\lambda^{-1}x_{i^+_0}$ and $0<\tau_1<\frac{T-\tau^*_0}{\lambda}$ with initial conditions $$\begin{aligned} &u^{(1)}(\xi_1,0) = \lambda^{\frac{2}{p-1}}u(\lambda\xi_1,\tau^*_0)\\ &u_{\tau_1}^{(1)}(\xi_1,0 )= \lambda^{\frac{p+1}{p-1}}u_t(\lambda\xi_1,\tau_0^*),\end{aligned}$$ and the boundary conditions $$\begin{aligned} u^{(1)}(\lambda^{-1}x_{i_0^\pm},\tau_1) = \lambda^{\frac{2}{p-1}}u(x_{i_0^\pm},\tau^*_0 + \lambda\tau_1).\end{aligned}$$ The maximum value of $u^{(1)}$ at initial time $\tau_1 = 0$ is $$\begin{aligned} ||u^{(1)}(.,0) ||_\infty &= \lambda^{\frac{2}{p-1}}||u(.,\tau^*_0)||_\infty\\ &= \lambda^{\frac{2}{p-1}} M.\end{aligned}$$ Since $\lambda \in (0,1)$, we have $||u^{(1)}(.,0) ||_\infty < M$, i.e., the rescaled solution drops below the threshold criterion. This is the purpose of the rescaling method. Then, we apply the finite difference method to $u^{(1)}$.
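The two selection rules above, the crossing time [\[find tauStar\]](#find tauStar){reference-type="eqref" reference="find tauStar"} and the rescaled interval [\[interval to be rescaled power\]](#interval to be rescaled power){reference-type="eqref" reference="interval to be rescaled power"}, can be made explicit in a short sketch (function and variable names are ours, not from the paper):

```python
def crossing_time(t_prev, dt, m_prev, m_curr, M):
    """Linear interpolation in time between two levels whose sup-norms
    m_prev < M <= m_curr bracket the threshold M."""
    return t_prev + dt * (M - m_prev) / (m_curr - m_prev)

def rescaled_interval(U, i):
    """Select (i0_minus, i0_plus) around the index i where U reaches
    the threshold, following the three cases of the selection rule."""
    left = U[i - 1] if i > 0 else U[i]
    right = U[i + 1] if i < len(U) - 1 else U[i]
    if left < U[i] < right:        # U increasing through i
        return i - 1, i
    if left > U[i] > right:        # U decreasing through i
        return i, i + 1
    return i - 1, i + 1            # local extremum or flat profile

# sup-norm 0.8 at t = 0.9 and 1.4 at t = 1.0: the threshold M = 1.0
# is crossed at t = 0.9 + 0.1 * (1.0 - 0.8) / (1.4 - 0.8)
tau_star = crossing_time(0.9, 0.1, 0.8, 1.4, 1.0)
```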
Let $I_1^{\pm} = \lambda^{-1}i_0^{\pm}$ and let $U^{n,(1)}$ be the approximation of $u^{(1)}$ at time $\tau_1^n$. Then, the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} applied to $U^{n,(1)}$ reads: for all $n\geqslant 0$ and $I_1^-\leqslant i\leqslant I_1^+$ $$\begin{aligned} & (U_{i}^{n,(1)})_{t\overline{t}} = (U_{i}^{n,(1)})_{x\overline{x}} + F(U_{i}^{n,(1)}) \nonumber\\ & U_{I_1^-}^{n,(1)} = \psi^{n,(1)},\nonumber\\ & U_{I_1^+}^{n,(1)} = \Psi^{n,(1)},\nonumber\\ & U_{i}^{0,(1)} = \phi^{(1)}_i,\nonumber\\ &U_{i}^{1,(1)} = \Phi^{(1)}_i,\nonumber\\\end{aligned}$$ where $$\begin{aligned} \label{conditions for U1 power} &\psi^{n,(1)} = \lambda^{\frac{2}{p-1}}\mathbf{U}(x_{i_0^-},\tau^*_0 + \lambda n\Delta t),\nonumber\\ & \Psi^{n,(1)} = \lambda^{\frac{2}{p-1}}\mathbf{U}(x_{i_0^+},\tau^*_0 + \lambda n\Delta t),\nonumber\\ &\phi_i^{(1)} = \lambda^{\frac{2}{p-1}}\mathbf{U}(\lambda \xi_{1,i},\tau^{*}_0),\nonumber \\ &\Phi_i^{(1)} = \lambda^{\frac{2}{p-1}} \mathbf{U}(\lambda\xi_{1,i},\tau^{*}_0) + \lambda^{\frac{p+1}{p-1}}(\mathbf{U}(\lambda\xi_{1,i},\tau_0^* + \Delta t) - \mathbf{U}(\lambda\xi_{1,i},\tau_0^*))\nonumber\\ &+\lambda^{\frac{2p}{p-1}}\Big( \frac{\Delta t^2}{2\Delta x^2}(\mathbf{U}(\lambda\xi_{1,i+1},\tau_0^*) - 2\mathbf{U}(\lambda\xi_{1,i},\tau_0^*) + \mathbf{U}(\lambda\xi_{1,i-1},\tau_0^*))\nonumber\\ & +\frac{\Delta t^2}{2} F(\mathbf{U}(\lambda\xi_{1,i},\tau_0^*))\Big).\nonumber\\\end{aligned}$$ When $||U^{n_{1},(1)}||_\infty$ reaches the given threshold value $M$, we determine $\tau^*_1$ and two grid points $\xi_{1,i_{1}^+}, \xi_{1,i_{1}^-}$, where $i_1^-$ and $i_1^+ \in \lbrace I_1^-,\cdots, I_1^+\rbrace$, using [\[find tauStar\]](#find tauStar){reference-type="eqref" reference="find tauStar"} and [\[interval to be rescaled power\]](#interval to be rescaled power){reference-type="eqref" reference="interval to be rescaled power"} respectively.
In the interval where $U^{(1)}\geqslant M$ the solution is rescaled further, yielding $U^{(2)}$, and so forth. The $(k+1)^\text{th}$ rescaled solution $u^{(k+1)}$ is introduced when $\tau_k$ reaches a value $\tau^*_k$ satisfying $$\begin{aligned} (n_k-1)\Delta t\leqslant\tau^*_k\leqslant n_k\Delta t, \ n_k>0\ \text{ and} \ ||\mathbf{U}^{(k)}(., \tau^*_k)||_\infty = M.\end{aligned}$$ The interval $(\xi_{k,i_{k}^-},\xi_{k,i_{k}^+})$ to be rescaled satisfies [\[interval to be rescaled power\]](#interval to be rescaled power){reference-type="eqref" reference="interval to be rescaled power"} and the solution $u^{(k+1)}$ is related to $u^{(k)}$ by $$\begin{aligned} \label{the solution u(k) power} u^{(k+1)}(\xi_{k+1},\tau_{k+1}) = \lambda^{\frac{2}{p-1}}u^{(k)}(\lambda\xi_{k+1},\tau^*_k + \lambda\tau_{k+1}).\end{aligned}$$ Let $I_{k+1}^\pm = \lambda^{-1}i^\pm_k$; the approximation $U_i^{n,(k+1)}$ of $u^{(k+1)}(\xi_{k+1,i},\tau_{k+1}^n)$ uses the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} with the space step $\Delta x$ and the time step $\Delta t$, which reads $$\begin{aligned} \label{rescaling solution Uk power} &(U_{i}^{n,(k+1)})_{t\overline{t}} = (U_{i}^{n,(k+1)})_{x\overline{x}} + F(U_{i}^{n,(k+1)}) \nonumber\\ & U_{I_{k+1}^-}^{n,(k+1)} = \psi^{n,(k+1)},\nonumber\\ & U_{I_{k+1}^+}^{n,(k+1)} = \Psi^{n,(k+1)},\nonumber\\ & U_{i}^{0,(k+1)} = \phi^{(k+1)}_i,\nonumber\\ &U_{i}^{1,(k+1)} = \Phi^{(k+1)}_i,\nonumber\\\end{aligned}$$ for all $n\geqslant 0$ and $i \in\lbrace I_{k+1}^-,\cdots,I_{k+1}^+\rbrace$ where $$\begin{aligned} \label{conditions for Uk power} &\psi^{n,(k+1)} = \lambda^{\frac{2}{p-1}}\mathbf{U}^{(k)}(x_{i_k^-},\tau^*_k + \lambda n\Delta t),\nonumber\\ & \Psi^{n,(k+1)} = \lambda^{\frac{2}{p-1}}\mathbf{U}^{(k)}(x_{i_k^+},\tau^*_k + \lambda n\Delta t),\nonumber\\ &\phi_i^{(k+1)} = \lambda^{\frac{2}{p-1}}\mathbf{U}^{(k)}(\lambda \xi_{k+1,i},\tau^{*}_k),\nonumber \\ &\Phi_i^{(k+1)} = \lambda^{\frac{2}{p-1}}
\mathbf{U}^{(k)}(\lambda\xi_{k+1,i},\tau^{*}_k)+ \lambda^{\frac{p+1}{p-1}}(\mathbf{U}^{(k)}(\lambda\xi_{k+1,i},\tau^{*}_k + \Delta t) - \mathbf{U}^{(k)}(\lambda\xi_{k+1,i},\tau^{*}_k)) \nonumber\\ & \qquad \quad+\lambda^{\frac{2p}{p-1}}\Big( \frac{\Delta t^2}{2\Delta x^2}(\mathbf{U}^{(k)}(\lambda\xi_{k+1,i+1},\tau_k^*) - 2\mathbf{U}^{(k)}(\lambda\xi_{k+1,i},\tau_k^*) + \mathbf{U}^{(k)}(\lambda\xi_{k+1,i-1},\tau_k^*)) \nonumber \\ & \qquad \quad+\frac{\Delta t^2}{2} F(\mathbf{U}^{(k)}(\lambda\xi_{k+1,i},\tau_k^*))\Big).\end{aligned}$$ Previously rescaled solutions are stepped forward independently: $\mathbf{U}^{(k)}$ is stepped forward once every $\lambda^{-1}$ time steps of $\mathbf{U}^{(k+1)}$, $\mathbf{U}^{(k-1)}$ once every $\lambda^{-2}$ time steps of $\mathbf{U}^{(k+1)}$, etc. On the other hand, the values of $\mathbf{U}^{(k)}, \mathbf{U}^{(k-1)}$, etc., must be updated to agree with the calculation of $\mathbf{U}^{(k+1)}$. When a time step is reached such that $\Vert \mathbf{U}^{(k+1)} (., \tau_{k+1}) \Vert_\infty > M$, then it is time for another rescaling. Then, the numerical solution $\mathbf{U}_j(x,t)$ of the rescaling method is defined by: for all $1\leqslant j\leqslant J$ $$\begin{aligned} \label{numerical solution power} \mathbf{U}_j(x,t) = \left\{ \begin{array}{ll} & \mathbf{U}^{(0)}(x,t)\\ & \lambda^{\frac{-2}{p-1}}\mathbf{U}^{(1)}(\lambda^{-1}x,\lambda^{-1}(t-t_0))\\ &\vdots \\ & \lambda^{\frac{-2(k-1)}{p-1}}\mathbf{U}^{(k-1)}(\lambda^{-(k-1)}x,\lambda^{-(k-1)}(t-t_{k-2}))\\ & \lambda^{\frac{-2k}{p-1}}\mathbf{U}^{(k)}(\lambda^{-k}x,\lambda^{-k}(t-t_{k-1}))\\ \end{array} \right.\end{aligned}$$ where $t_k = \sum_{i=0}^{k} \lambda^i \tau_i^*$ and $\mathbf{U}^{(k)}$ is the linear interpolation defined in [\[linear approximation power\]](#linear approximation power){reference-type="eqref" reference="linear approximation power"} for $k\geqslant 1$. **Definition 2**. 
*We define the numerical blow-up time of $\mathbf{U}_j$ by $$\begin{aligned} \label{blow up time power} T^j = \lim_{k \longrightarrow \infty} \sum_{l=0}^{k} \lambda^l \tau_l^*.\end{aligned}$$ We say that $U_i^n$ blows up if $\lim_{n\longrightarrow \infty}||U^n||_\infty = \infty$.* Now, we focus on the convergence of the rescaling method. Let $V^{n} = (V_1^n,V_2^n,\cdots,V_I^n)^T$; then one may write [\[rescaling solution Uk power\]](#rescaling solution Uk power){reference-type="eqref" reference="rescaling solution Uk power"} as: for all $n\geqslant 0$, $i = 1,\cdots,I$ [\[discrt non zero dirichlet condition power\]]{#discrt non zero dirichlet condition power label="discrt non zero dirichlet condition power"} $$\begin{gathered} \begin{alignat}{1} & (V_{i}^{n})_{t\overline{t}} = (V_{i}^{n})_{x\overline{x}} + F(V_{i}^{n})\\ & V_{1}^{n} = \psi^n,\\ & V_{I}^{n} = \Psi^n,\\ & V_{i}^{0} = \phi_i,\\ & V_{i}^{1} = \Phi_i. \end{alignat}\end{gathered}$$ where $\psi^n, \Psi^n, \phi_i\ \text{and}\ \Phi_i$ are given by $\psi^{n,(k)}$, $\Psi^{n,(k)}$, $\phi_i^{(k)}$ and $\Phi_i^{(k)}$ in [\[conditions for Uk power\]](#conditions for Uk power){reference-type="eqref" reference="conditions for Uk power"}. We can see from [\[numerical solution power\]](#numerical solution power){reference-type="eqref" reference="numerical solution power"} that the numerical solution $\mathbf{U}_j$ is built from $\mathbf{U}^{(k)}$, which are the solutions of problem [\[rescaling solution Uk power\]](#rescaling solution Uk power){reference-type="eqref" reference="rescaling solution Uk power"}.
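The numerical blow-up time of Definition 2 is the limit of the partial sums $\sum_{l=0}^{k} \lambda^l \tau_l^*$, which can be accumulated as follows (a sketch; the constant rescaling times below are a hypothetical idealisation, not an output of the scheme):

```python
def numerical_blow_up_time(tau_stars, lam):
    """Partial sum  sum_l lam**l * tau_l  of the rescaling times;
    its limit as more rescalings are performed is the numerical
    blow-up time of Definition 2."""
    return sum(lam**l * tau for l, tau in enumerate(tau_stars))

# if every rescaled solution needed the same time tau to reach M,
# the series would be geometric with sum tau / (1 - lam)
lam, tau = 0.5, 0.3
partial = numerical_blow_up_time([tau] * 30, lam)
```

Since $\lambda\in(0,1)$, the series converges whenever the times $\tau_l^*$ stay bounded, which is why a finite numerical blow-up time can be reported.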
Since each $\mathbf{U}^{(k)}$ solves a problem of this form, we focus on the study of the following problem with non-periodic Dirichlet conditions: $$\begin{aligned} \label{contenu non zero condition power} \left\{ \begin{array}{ll} \partial_{tt}v = \partial_{xx} v + F(v),\ & x\in (a,b),\ t>0 \\ v(a,t) = f(t),\ & t\geqslant 0\\ v(b,t) = g(t),\ & t\geqslant 0\\ v(x,0) = v_0(x),\ & x\in (a,b)\\ \partial_t v(x,0) = v_1(x),\ & x\in (a,b), \end{array} \right.\end{aligned}$$ where $v(t) : x\in (a,b) \longmapsto v(x,t) \in \mathds{R}$. # Some properties of the discrete scheme {#sec Some properties of the discret scheme} In this section, we establish some lemmas on the discrete scheme [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"} which will be used later. The first lemma below shows a property of the discrete solution. **Lemma 3**. *Let $V^n = (V_{1}^{n},V_{2}^{n},\cdots,V_{I}^{n})$ be the solution of [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"}. Denote $Z_i^n = (V_i^n)_{x\overline{x}}$ and suppose that $Z_i^0\geqslant 0$ and $Z_i^1\geqslant\dfrac{1}{2}(Z_{i+1}^0 + Z_{i-1}^0)$. Then we have for all $n\geqslant 0$ $$\begin{aligned} \label{Positivity of Vxx} Z_i^n\geqslant 0.\end{aligned}$$* *Proof.* We proceed by induction on $n$. The assumptions give $Z_i^0\geqslant 0$ and $Z_i^1\geqslant\frac{1}{2}(Z_{i+1}^0 + Z_{i-1}^0)\geqslant 0$ for all $i$, so [\[Positivity of Vxx\]](#Positivity of Vxx){reference-type="eqref" reference="Positivity of Vxx"} holds for $n=0,1$. Suppose that [\[Positivity of Vxx\]](#Positivity of Vxx){reference-type="eqref" reference="Positivity of Vxx"} is valid for all $i$ and $1\leqslant k \leqslant n-1$.
Now, taking into account that $V_i^n$ is a solution of [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"}, we have $$\begin{aligned} (Z_i^k)_{x\overline{x}} &= ((V_i^k)_{x\overline{x}})_{x\overline{x}} = (V_i^k)_{t\overline{t}x\overline{x}} - (F(V_i^k))_{x\overline{x}}\end{aligned}$$ and $$\begin{aligned} (Z_i^k)_{t\overline{t}} &= (V_i^k)_{x\overline{x}t\overline{t}} = (V_i^k)_{t\overline{t}x\overline{x}}.\end{aligned}$$ Therefore, by means of Taylor expansions $$\begin{aligned} (Z_i^k)_{t\overline{t}} - (Z_i^k)_{x\overline{x}} &= (F(V_i^k))_{x\overline{x}}\\ & = F'(V_i^k) Z_i^k + F''(\zeta_i^k)\frac{(V_{i+1}^k-V_i^k)^2}{2\Delta x^2} + F''(\xi_i^k)\frac{(V_{i-1}^k-V_i^k)^2}{2\Delta x^2} \geqslant 0.\end{aligned}$$ Then $$\begin{aligned} Z_i^{k+1} - Z_{i+1}^k\geqslant Z_{i-1}^k - Z_i^{k-1} \quad \forall\ i \ \text{and}\ 1\leqslant k\leqslant n-1.\end{aligned}$$ It follows that $$\begin{aligned} Z_i^n &= \sum_{j=0}^{n-1}(Z_{i+j}^{n-j} - Z_{i+j+1}^{n-1-j}) + Z_{i+n}^0\\ &\geqslant\sum_{j=0}^{n-1} (Z^1_{i-n+1+2j} - Z^0_{i-n+2j+2}) + Z^0_{i+n}\\ &\geqslant\frac{1}{2}\sum_{j=0}^{n-1} (Z^0_{i-n+2j} - Z^0_{i-n+2j+2}) + Z^0_{i+n}\\ & = \frac{1}{2}(Z^0_{i+n} + Z^0_{i-n})\geqslant 0.\end{aligned}$$ $\Box$ In the next lemma, we show that the numerical solution of [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"} is not bounded. **Lemma 4**. *Under the same assumptions as in Lemma [Lemma 3](#positivity of V_t){reference-type="ref" reference="positivity of V_t"}, the numerical solution $V_i^n$ blows up, i.e.
$\lim_{n\longrightarrow\infty} V_i^n = \infty$ for all $i$.* *Proof.* Since $(V_i^n)_{x\overline{x}} \geqslant 0$, we have by [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"} $$\begin{aligned} (V_i^n)_{t\overline{t}} \geqslant F(V_i^n),\end{aligned}$$ implying $$\begin{aligned} (V_i^n)_t \geqslant(V_i^{n-1})_t + \Delta t F(V_i^n).\end{aligned}$$ An induction argument yields $$\begin{aligned} (V_i^n)_t^2 &\geqslant\left((V_i^{n-1})_t + \Delta t F(V_i^n)\right)^2\\ & \geqslant((V_i^{n-1})_t)^2 + 2 F(V_i^n) (V_i^n - V_i^{n-1})\\ & \geqslant((V_i^0)_t)^2 + 2\sum_{k=1}^{n} F(V_i^k)(V_i^k - V_i^{k-1})\\ & \geqslant((V_i^0)_t)^2 + 2\int_{V_i^0}^{V_i^n} z^p dz\\ & = ((V_i^0)_t)^2 + \frac{2}{p+1}((V_i^n)^{p+1} - (V_i^0)^{p+1})\\ & = \frac{2}{p+1} (V_i^n)^{p+1} + K_i ,\end{aligned}$$ with $K_i =((V_i^0)_t)^2 - \frac{2}{p+1} (V_i^0)^{p+1}$. Thus $$\begin{aligned} (V_i^n)_t \geqslant\sqrt{\frac{2}{p+1}(V_i^n)^{p+1} + K_i}.\end{aligned}$$ It follows that $$\begin{aligned} V_i^n &\geqslant V_i^{n-1} + \Delta t \sqrt{\frac{2}{p+1}(V_i^n)^{p+1} + K_i}\\ & \geqslant V_i^{n-1} + \Delta t \sqrt{\frac{2}{p+1}(V_i^0)^{p+1} + K_i}\\ &\geqslant V_i^{0} +n \Delta t \sqrt{\frac{2}{p+1}(V_i^0)^{p+1} + K_i}.\end{aligned}$$ This completes the proof. $\Box$ The following lemma is a discrete form of the maximum principle. **Lemma 5**. *Let $b^n=(b_1^n,b_2^n,\cdots,b_I^n)$ be a vector such that $b^n\geqslant 0$.
Let $\Theta^n = (\Theta_i^{n})_{1\leqslant i \leqslant I}$ satisfy $$\begin{aligned} \frac{\Theta_{i}^{n}-2\Theta_{i}^{n-1} + \Theta_{i}^{n-2}}{\Delta t^2} - \frac{\Theta_{i+1}^{n-1}-2\Theta_{i}^{n-1} + \Theta_{i-1}^{n-1}}{\Delta x^2} - b_i^{n-1}\Theta_{i}^{n-1} \geqslant 0,\quad & 2\leqslant i\leqslant I-1\\ \Theta_{1}^{n}\geqslant 0,\quad &n\geqslant 0,\\ \Theta_{I}^{n}\geqslant 0,\quad &n\geqslant 0,\\ \Theta_{i}^{0} \geqslant 0, \quad & 1\leqslant i\leqslant I,\\ \Theta_{i}^{1} \geqslant 0, \quad & 1\leqslant i\leqslant I\\ \Theta^1_{i} - \Theta^0_{i+1}\geqslant 0, \quad & 1\leqslant i\leqslant I-1. \end{aligned}$$ Then $\Theta^n \geqslant 0$ for all $n\geqslant 0$.* *Proof.* Arguing by contradiction, we assume that there exist $n^* \in \mathds{N}$ and an index $i^*$ with $\Theta_{i^*}^{n^*}< 0$, while $\Theta_i^n\geqslant 0$ for all $i$ and all $0\leqslant n< n^*$. We have $$\begin{aligned} \Theta_{i}^{n} \geqslant\frac{\Delta t^2}{\Delta x^2} (\Theta_{i+1}^{n-1} + \Theta_{i-1}^{n-1}) - \Theta_i^{n-2} + 2(1-\frac{\Delta t^2}{\Delta x^2})\Theta_i^{n-1} + \Delta t^2 b_i^{n-1}\Theta_{i}^{n-1}.\end{aligned}$$ Since $\Delta t = \Delta x$, then $$\begin{aligned} \label{Inegalite sur Theta} \Theta_i^n - \Theta_{i+1}^{n-1}\geqslant\Theta_{i-1}^{n-1} - \Theta_i^{n-2} + \Delta t^2 b_i^{n-1} \Theta_{i}^{n-1}.\end{aligned}$$ Let $W_i^n = \Theta_{i}^{n} - \Theta_{i+1}^{n-1}$; it follows from [\[Inegalite sur Theta\]](#Inegalite sur Theta){reference-type="eqref" reference="Inegalite sur Theta"} that $$\begin{aligned} W_{i}^{n} &\geqslant W_{i-1}^{n-1} + \Delta t^2 b_i^{n-1}\Theta_{i}^{n-1}\\ & \geqslant W_{i-n+1}^{1} + \Delta t^2 \sum_{l=1}^{n-1} b_{i+1-l}^{n-l}\Theta_{i+1-l}^{n-l}.\end{aligned}$$ Then $$\begin{aligned} \Theta_{i^*}^{n^*} &= \Theta_{i^*+1}^{n^*-1} + W_{i^*}^{n^*}\\ & = \Theta_{i^*+n^*}^0 + \sum_{k=0}^{n^*-1} W_{i^*+k}^{n^*-k}\\ & \geqslant\Theta_{i^*+n^*}^0 + \sum_{k=0}^{n^*-1}W_{i^*-n^*+1+2k}^1 + \Delta t^2\sum_{k=0}^{n^*-2}\ \sum_{l=1}^{n^*-k-1} b_{i^*+k+1-l}^{n^*-k-l}
\Theta_{i^*+k+1-l}^{n^*-k-l} \\ & \geqslant\Theta_{i^*+n^*}^0 + \sum_{k=0}^{n^*-1} (\Theta^1_{i^*-n^*+1+2k} - \Theta^0_{i^*-n^*+2+2k}) + \Delta t^2\sum_{k=0}^{n^*-2}\ \sum_{l=1}^{n^*-k-1} b_{i^*+k+1-l}^{n^*-k-l} \Theta_{i^*+k+1-l}^{n^*-k-l}\\ & \geqslant 0 \end{aligned}$$ which is a contradiction. $\Box$ # Convergence of the scheme {#sec Convergence of the scheme} We prove in the following the convergence of the scheme [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"}. The next result establishes that, on every fixed time interval $[0,T_0]$ with $T_0<T_\infty$ on which the solution $v$ of [\[contenu non zero condition power\]](#contenu non zero condition power){reference-type="eqref" reference="contenu non zero condition power"} is defined, the numerical solution of the problem [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"} approximates $v$ as $\Delta x\longrightarrow 0$. **Theorem 6**. *Let $V_{i}^{n}$ and $v$ be the solutions of [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"} and [\[contenu non zero condition power\]](#contenu non zero condition power){reference-type="eqref" reference="contenu non zero condition power"} respectively. Let $T_\infty$ denote the blow-up time of $v$ and let $T_0$ be an arbitrary number such that $0 < T_0 < T_\infty$.
Suppose that $v \in C^{4}([a,b]\times [0,T_0])$ and the initial data and boundary data of [\[discrt non zero dirichlet condition power\]](#discrt non zero dirichlet condition power){reference-type="eqref" reference="discrt non zero dirichlet condition power"} satisfy $$\begin{aligned} \epsilon_1 = \sup_{x\in[a,b]}|v_0(x)- \phi(x)| = o(\Delta x) \quad \text{as} \ \Delta x\longrightarrow 0,\end{aligned}$$ $$\begin{aligned} \epsilon_2 = \sup_{x\in[a,b]}|v_1(x)- \Phi(x)| = o(\Delta x) \quad \text{as} \ \Delta x\longrightarrow 0,\end{aligned}$$ $$\begin{aligned} \epsilon_3 = \sup_{t\in[0,T_0]}|f(t)- \psi(t)| = o(1) \quad \text{as} \ \Delta t\longrightarrow 0,\end{aligned}$$ $$\begin{aligned} \epsilon_4 = \sup_{t\in[0,T_0]}|g(t)- \Psi(t)| = o(1) \quad \text{as} \ \Delta t\longrightarrow 0,\end{aligned}$$ where $\phi$, $\Phi$, $\psi$ and $\Psi$ are the interpolations of $\phi_i$, $\Phi_i$, $\psi^n$ and $\Psi^n$ respectively, defined as in [\[linear approximation power\]](#linear approximation power){reference-type="eqref" reference="linear approximation power"}. Then, $$\begin{aligned} \label{convergence of solution power} \max_{0\leqslant n\leqslant N}||V_i^n - v(x_i,t^n)||_\infty = \mathcal{O}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + \Delta x^2) \ \text{as}\quad \Delta x\longrightarrow 0,\end{aligned}$$ where $N>0$ is such that $t^N \leqslant T_0$.* *Proof.* We denote by $N^*$ the greatest integer such that $N^*\leqslant N$ and, for all $0\leqslant n < N^*$, $$\begin{aligned} \label{e less 1 power} || V_i^n - v(x_i,t^n)||_\infty < 1. \end{aligned}$$ Let $e^n_i = V_i^n - v(x_i,t^n)$ denote the error at the node $(x_i,t^n)$.
By Taylor's expansion and [\[contenu non zero condition power\]](#contenu non zero condition power){reference-type="eqref" reference="contenu non zero condition power"}, we have for all $2\leqslant i\leqslant I-1$ and $0\leqslant n < N^*$ $$\frac{v(x_i,t^{n+1})-2v(x_i,t^n)+v(x_i,t^{n-1})}{\Delta t^2} = \partial_{tt}v(x_i,t^n) + \frac{\Delta t^2}{24}\lbrace{\partial_{tttt} v(x_i,\tilde{t^n}) + \partial_{tttt}v(x_i,\tilde{\tilde{t^n}})\rbrace},$$ where $\tilde{t^n}, \tilde{\tilde{t^n}} \in [t^{n-1},t^{n+1}]$ and $$\frac{v(x_{i+1},t^{n})-2v(x_i,t^n)+v(x_{i-1},t^{n})}{\Delta x^2} = \partial_{xx}v(x_i,t^n) + \frac{\Delta x^2}{24}\lbrace{\partial_{xxxx} v(\tilde{x_i},t^n) + \partial_{xxxx}v(\tilde{\tilde{x_i}},t^n)\rbrace},$$ where $\tilde{x_i}, \tilde{\tilde{x_i}} \in [x_{i-1},x_{i+1}]$. Using the mean value theorem, we obtain $$F(V_i^n) - F(v(x_i,t^n)) = F'(\delta_i^n) (V_i^n - v(x_i,t^n)),$$ where $\delta_i^n$ is an intermediate value between $V_i^n$ and $v(x_i,t^n)$. It follows $$(e_i^n)_{t\overline{t}} - (e_i^n)_{x\overline{x}} = F'(\delta_i^n) e_i^n + r_i^n,$$ with $$\begin{aligned} r_i^n = - \frac{\Delta t^2}{24}\lbrace{\partial_{tttt} v(x_i,\tilde{t^n}) + \partial_{tttt}v(x_i,\tilde{\tilde{t^n}})\rbrace} + \frac{\Delta x^2}{24} \lbrace{\partial_{xxxx} v(\tilde{x_i},t^n) + \partial_{xxxx}v(\tilde{\tilde{x_i}},t^n)\rbrace}.\end{aligned}$$ Let $C$ be a positive constant such that $$\begin{aligned} \frac{1}{12}\max_{(x,t)\in[a,b]\times[0,T_0]}\left(|\partial_{tttt} v(x,t)| +|\partial_{xxxx} v(x,t)|\right) \leqslant C. \end{aligned}$$ Since $\Delta t = \Delta x$, we obtain for all $2\leqslant i\leqslant I-1$ and $n\geqslant 0$ $$\begin{aligned} (e_i^n)_{t\overline{t}} - (e_i^n)_{x\overline{x}} \leqslant F'(\delta_i^n) e_i^n + C \Delta x^2.\end{aligned}$$ Now, consider the function $E(x,t)$ defined by $$E(x,t) = e^{K t + x}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2) ,$$ where $K$ is a positive constant which will be chosen adequately.
Using Taylor expansion, we get $$\begin{aligned} \label{Taylor on E power} &(E(x_i,t^n))_{t\overline{t}} - (E(x_i,t^n))_{x\overline{x}} - F'(\delta_i^n) E(x_i,t^n)\nonumber\\ & = \partial_{tt}E(x_i,t^n) - \partial_{xx} E(x_i,t^n) - F'(\delta_i^n) E(x_i,t^n) \nonumber\\ & + \frac{\Delta t^2}{24}\{\partial_{tttt}E(x_i,\bar{t^n}) + \partial_{tttt}E(x_i,\bar{\bar{t^n}}) \} - \frac{\Delta x^2}{24}\{\partial_{xxxx}E(\bar{x_i},t^n) + \partial_{xxxx}E(\bar{\bar{x_i}},t^n)\}\end{aligned}$$ where $\bar{t^n},\bar{\bar{t^n}} \in [t^{n-1},t^{n+1}]$ and $\bar{x_i}, \bar{\bar{x_i}} \in [x_{i-1},x_{i+1}]$. We have for all $x\in [a,b]$ and $t\in [0,T_0]$ $$\partial_{tt}E(x,t) - \partial_{xx} E(x,t) = (K^2-1)E(x,t)$$ and $$E(a,0)\leqslant E(x,t)\leqslant E(b,T_0),$$ yielding $$\begin{aligned} \partial_{tttt} E(x,t) = K^4 E(x,t) \geqslant K^4 E(a,0)\end{aligned}$$ and $$\begin{aligned} \partial_{xxxx} E(x,t) = E(x,t) \leqslant E(b,T_0).\end{aligned}$$ Then, [\[Taylor on E power\]](#Taylor on E power){reference-type="eqref" reference="Taylor on E power"} implies $$\begin{aligned} & (E(x_i,t^n))_{t\overline{t}} - (E(x_i,t^n))_{x\overline{x}} - F'(\delta_i^n) E(x_i,t^n)\\ & \geqslant(K^2 - 1 - F'(\delta_i^n)) E(x_i,t^n) + \frac{\Delta x^2}{12}\{ K^4 E(a,0) - E(b,T_0)\}.\end{aligned}$$ By taking $K$ large enough such that the right-hand side of the above inequality is larger than $C\Delta x^2$, we obtain $$\begin{aligned} (E(x_i,t^n))_{t\overline{t}} - (E(x_i,t^n))_{x\overline{x}} - F'(\delta_i^n) E(x_i,t^n) - C\Delta x^2 \geqslant 0.\end{aligned}$$ Therefore, from Lemma [Lemma 5](#the maximum principle){reference-type="ref" reference="the maximum principle"} with $b_i^n = F'(\delta_i^n)$ and $\Theta_i^n = E(x_i,t^n)- e_i^n$, we get for $1\leqslant i\leqslant I$ $$\begin{aligned} E(x_i,0) &\geqslant e_i^0\\ E(x_i,\Delta t) &\geqslant e_i^1\\ E(x_1,t^n) &\geqslant e_1^n\\ E(x_I,t^n) &\geqslant e_I^n,\end{aligned}$$ and $$\begin{aligned} E(x_i,\Delta t) - E(x_{i+1},0) &= e^{K \Delta t
+ x_i}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2) - e^{x_{i+1}}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2)\\ & = (e^{K \Delta t} e^{x_i} - e^{x_i + \Delta x})(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2)\\ & = e^{x_i}(e^{K \Delta t} - e^{ \Delta x})(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2)\\ & \geqslant e_i^1,\end{aligned}$$ yielding $$\begin{aligned} E(x_i,\Delta t) - e_i^1 \geqslant E(x_{i+1},0) - e_{i+1}^0,\end{aligned}$$ and hence $$\begin{aligned} V_i^n - v(x_i,t^n) \leqslant E(x_i,t^n). \end{aligned}$$ Using the same argument for $\epsilon_i^n =v(x_i,t^n) - V_i^n = - e_i^n$, we obtain $$\begin{aligned} (\epsilon_i^n)_{t\overline{t}} - (\epsilon_i^n)_{x\overline{x}} &\leqslant F(v(x_i,t^n)) - F(V_i^n) + C\Delta x^2\\ &\leqslant F'(\delta_i^n)\epsilon_i^n + C\Delta x^2.\end{aligned}$$ By Lemma [Lemma 5](#the maximum principle){reference-type="ref" reference="the maximum principle"} with $\Theta_i^n = E(x_i,t^n) - \epsilon_i^n$, we get $$\begin{aligned} v(x_i,t^n) - V_i^n \leqslant E(x_i,t^n),\end{aligned}$$ then $$\begin{aligned} |V_i^n - v(x_i,t^n)| &\leqslant E(x_i,t^n)\\ & \leqslant e^{K T_0 + b}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2). \end{aligned}$$ Hence, we obtain, for $n < N^*$ $$\begin{aligned} \label{inequality on e power} \max_{1\leqslant i\leqslant I}|V_i^n - v(x_i,t^n)| &\leqslant e^{K T_0 + b}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2).\end{aligned}$$ In order to prove [\[convergence of solution power\]](#convergence of solution power){reference-type="eqref" reference="convergence of solution power"}, we have to show that $N^* = N$. 
If this were not the case, then by [\[e less 1 power\]](#e less 1 power){reference-type="eqref" reference="e less 1 power"} and [\[inequality on e power\]](#inequality on e power){reference-type="eqref" reference="inequality on e power"} we would have $$\begin{aligned} 1\leqslant \max_{1\leqslant i\leqslant I}| V_i^{N^*} - v(x_i,t^{N^*})| \leqslant e^{K T_0 + b}(\epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4 + C \Delta x^2).\end{aligned}$$ The right-hand side of the above inequality tends to zero as $\Delta x$ tends to zero, which is a contradiction. This completes the proof. $\Box$ **Remark 7**. *From the relation between the numerical solution and $\mathbf{U}^{(k)}$ in [\[numerical solution power\]](#numerical solution power){reference-type="eqref" reference="numerical solution power"}, we conclude the convergence of the rescaling method from Theorem [Theorem 6](#theorem of convergence power){reference-type="ref" reference="theorem of convergence power"}.* # Numerical examples {#sec Numerical examples} In this section, we present some numerical examples. For all the examples, we set $\lambda = \frac{1}{2}$ and we choose the threshold value $M$ such that the maxima of the initial data of all rescaled solutions are equal, i.e. for all $k\geqslant 0$ we have $|| u_0 ||_\infty = \lambda^{\frac{2}{p-1}}|| u^{(k)}(\tau_k^*)||_\infty$. Since $||u^{(k)}(\tau_k^*)||_\infty = M$, this gives $M = \lambda^{\frac{-2}{p-1}}|| u_0||_\infty$. ## Example 1 {#example 1 .unnumbered} We consider the system [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} with an exact solution given by $$\begin{aligned} u(x,t) = \mu(T-t+dx)^{\frac{2}{1-p}}\end{aligned}$$ where $\mu = \Big(2(1-d^2)\frac{p+1}{(p-1)^2}\Big)^{\frac{1}{p-1}}$ and $d\in (0, 1)$ is an arbitrary parameter. The parameters used are $T = 0.5$ and $d = 0.1$. Figure [2](#comparison_exp1){reference-type="ref" reference="comparison_exp1"} shows a comparison between the exact solution and the numerical solution.
One can observe excellent agreement between the two solutions. In Table [2](#tab_norm){reference-type="ref" reference="tab_norm"}, we report the relative $L^2$ and $L^\infty$ errors. For this exact solution, the blow-up curve is $T_\infty(x) = T + dx$. Thus, one can approximate the blow-up curve $T_\infty(x)$ numerically by computing the numerical blow-up time $T^j$ [\[blow up time power\]](#blow up time power){reference-type="eqref" reference="blow up time power"} for all $1\leqslant j\leqslant J$. Figure [4](#blowup_curve_exp1){reference-type="ref" reference="blowup_curve_exp1"} shows a comparison between the exact blow-up curve and $T(x)$. In [@Merle; @and; @Zaag; @2003], the authors proved that the solution satisfies $$\begin{aligned} || u(.,t)||_{2} \sim (T-t)^{\frac{-2}{p-1}},\end{aligned}$$ hence $$\begin{aligned} \log(|| u(.,t)||_{2}) \sim \frac{-2}{p-1}\log(T-t).\end{aligned}$$ In order to compute the blow-up rate $\frac{2}{p-1}$ numerically, we plot $\log(|| U^n||_2)$ versus $\log(\frac{1}{T-t^n})$. Figure [6](#blowup_rate_exp1){reference-type="ref" reference="blowup_rate_exp1"} presents these slopes for $p = 2$ and $p = 3$.
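As a quick sanity check (our own illustration, not part of the experiments above), one can verify by finite differences that the exact profile of Example 1 satisfies a semilinear wave equation of the form $\partial_{tt}u-\partial_{xx}u=u^p$; the form of the equation, the sample point and the step $h$ are assumptions made for this sketch.

```python
def u_exact(x, t, p=2, T=0.5, d=0.1):
    """Exact blow-up profile of Example 1: u = mu * (T - t + d*x)^{2/(1-p)}."""
    mu = (2.0 * (1.0 - d**2) * (p + 1) / (p - 1) ** 2) ** (1.0 / (p - 1))
    return mu * (T - t + d * x) ** (2.0 / (1.0 - p))

def pde_residual(x, t, p=2, h=1e-4):
    """Central-difference residual of u_tt - u_xx - u^p at the point (x, t)."""
    u = u_exact
    u_tt = (u(x, t + h, p) - 2.0 * u(x, t, p) + u(x, t - h, p)) / h**2
    u_xx = (u(x + h, t, p) - 2.0 * u(x, t, p) + u(x - h, t, p)) / h**2
    return u_tt - u_xx - u(x, t, p) ** p

# Away from the blow-up curve t = T + d*x the residual is of order h^2.
print(abs(pde_residual(0.3, 0.2, p=2)))
print(abs(pde_residual(0.3, 0.2, p=3)))
```

The residuals are negligible compared with the size of $u^p$ at the sample point, consistent with $u$ being an exact solution.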
![[\[comparison_exp1\]]{#comparison_exp1 label="comparison_exp1"}Exact solution (red line) and numerical solution (blue circles) with p = 2 (left) and p = 3 (right).](exp1_Unum_p2.pdf "fig:"){#comparison_exp1 width="8cm" height="6cm"} ![[\[comparison_exp1\]]{#comparison_exp1 label="comparison_exp1"}Exact solution (red line) and numerical solution (blue circles) with p = 2 (left) and p = 3 (right).](exp1_Unum_p3.pdf "fig:"){#comparison_exp1 width="8cm" height="6cm"}

  $I$     $\frac{\|U_{num}-u_{exact}\|_2}{\|u_{exact}\|_2}$   $\frac{\|U_{num}-u_{exact}\|_\infty}{\|u_{exact}\|_\infty}$
  ------- --------------------------------------------------- -------------------------------------------------------------
  $2^6$   $9\times 10^{-2}$                                   $10\times 10^{-2}$
  $2^7$   $3\times 10^{-2}$                                   $3.2\times 10^{-2}$
  $2^8$   $8.2\times 10^{-3}$                                 $8.7\times 10^{-3}$
  $2^9$   $2.1\times 10^{-3}$                                 $2.2\times 10^{-3}$

  : [\[tab_norm\]]{#tab_norm label="tab_norm"} Relative errors of the numerical solution versus the exact solution for $p = 2$.

  $I$     $\frac{\|U_{num}-u_{exact}\|_2}{\|u_{exact}\|_2}$   $\frac{\|U_{num}-u_{exact}\|_\infty}{\|u_{exact}\|_\infty}$
  ------- --------------------------------------------------- -------------------------------------------------------------
  $2^6$   $7\times 10^{-2}$                                   $9\times 10^{-2}$
  $2^7$   $2.3\times 10^{-2}$                                 $3.3\times 10^{-2}$
  $2^8$   $6.6\times 10^{-3}$                                 $9.5\times 10^{-3}$
  $2^9$   $1.7\times 10^{-3}$                                 $2.5\times 10^{-3}$

  : [\[tab_norm\]]{#tab_norm label="tab_norm"} Relative errors of the numerical solution versus the exact solution for $p = 3$.
![[\[blowup_curve_exp1\]]{#blowup_curve_exp1 label="blowup_curve_exp1"}Comparison between the numerical blow-up time (blue circles) and $T_\infty$ (red line) for $p = 2$ (left) and $p = 3$ (right).](exp1_blowup_curve_p2.pdf "fig:"){#blowup_curve_exp1 width="8cm" height="6cm"} ![[\[blowup_curve_exp1\]]{#blowup_curve_exp1 label="blowup_curve_exp1"}Comparison between the numerical blow-up time (blue circles) and $T_\infty$ (red line) for $p = 2$ (left) and $p = 3$ (right).](exp1_blowup_curve_p3.pdf "fig:"){#blowup_curve_exp1 width="8cm" height="6cm"} ![[\[blowup_rate_exp1\]]{#blowup_rate_exp1 label="blowup_rate_exp1"}Blow-up rate for $p = 2$ (left) and $p = 3$ (right).](pente_p2.pdf "fig:"){#blowup_rate_exp1 width="8cm" height="6cm"} ![[\[blowup_rate_exp1\]]{#blowup_rate_exp1 label="blowup_rate_exp1"}Blow-up rate for $p = 2$ (left) and $p = 3$ (right).](pente_p3.pdf "fig:"){#blowup_rate_exp1 width="8cm" height="6cm"} ## Example 2 {#example 2 .unnumbered} We consider the system [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} with $p=2$ and the initial data $u_0(x) = 100(1-\cos(2\pi x))$ and $u_1(x) = 10\sin(2\pi x)$. We investigate the numerical blow-up curve by computing $T^j$ for all $1\leqslant j\leqslant J$. Figure [8](#numerical solution ex2 power){reference-type="ref" reference="numerical solution ex2 power"} shows the numerical solution and the numerical blow-up curve. It is shown in [@Berg] and [@Ngu17] that, for the nonlinear heat equation, the value $\tau_k^*$ tends to a constant as $k$ tends to infinity. We prove that this assertion also holds true in our case.
Notice that by [\[the solution u(k) power\]](#the solution u(k) power){reference-type="eqref" reference="the solution u(k) power"} $$u^{(k)}(\xi_k,\tau_k^{*}) = \lambda^{\frac{2}{p-1}}u^{(k-1)}(\lambda\xi_k,\tau_{k-1}^{*}+\lambda\tau^*_k) =\cdots = \lambda^{\frac{2k}{p-1}}u(\lambda^k\xi_k,t_k),$$ where $t_k = \tau^*_{0}+\lambda\tau^*_{1}+\cdots +\lambda^k\tau^*_k$. We recall that if $T$ denotes the blow-up time of $u$, then $$\begin{aligned} \label{profile} (T-t)^{\frac{2}{p-1}}||u(t)||_{\infty} \longrightarrow \mu\ \text{as}\ t\longrightarrow T,\ \text{with}\ \mu = \Big(2\frac{p+1}{(p-1)^2}\Big)^{\frac{1}{p-1}}.\end{aligned}$$ In particular, at time $t=t_k$, we have $$\begin{aligned} (T-t_k)^{\frac{2}{p-1}}||u(t_k)||_{\infty} &= (T-t_k)^{\frac{2}{p-1}}\lambda^{\frac{-2k}{p-1}}||u^{(k)}(\tau_k^*)||_{\infty}\\ &= (T-t_k)^{\frac{2}{p-1}}\lambda^{\frac{-2k}{p-1}} M,\end{aligned}$$ yielding $$\begin{aligned} T-t_k = \lambda^{k}M^{\frac{1-p}{2}} \mu^{\frac{p-1}{2}} (1+o(1)).\end{aligned}$$ Then, we obtain $$\begin{aligned} \tau_k^* &= \lambda^{-k}(t_k-t_{k-1})\nonumber\\ &= \lambda^{-k}((T-t_{k-1})-(T-t_k))\nonumber\\ &= M^{\frac{1-p}{2}}\mu^{\frac{p-1}{2}}(\lambda^{-1}-1) + o(1).\end{aligned}$$ Finally, $$\begin{aligned} \label{tau} \lim_{k\longrightarrow\infty}\tau^*_k = M^{\frac{1-p}{2}}\mu^{\frac{p-1}{2}}(\lambda^{-1}-1).\end{aligned}$$ The values of $\tau^*_k$ are tabulated in Table [3](#tab_tau){reference-type="ref" reference="tab_tau"} for various values of $k$. These experimental results show that $\tau^*_k$ tends to the constant given in [\[tau\]](#tau){reference-type="eqref" reference="tau"} as $k$ tends to infinity, in full agreement with our theoretical study.
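The limiting value in [\[tau\]](#tau){reference-type="eqref" reference="tau"} can be evaluated directly for Example 2. The short computation below is our own check; it uses $p=2$, $\lambda=1/2$ and $\|u_0\|_\infty = 200$, the maximum of $100(1-\cos(2\pi x))$, assuming the spatial domain contains a point with $\cos(2\pi x)=-1$.

```python
# Parameters of Example 2: p = 2, lambda = 1/2, u0(x) = 100*(1 - cos(2*pi*x)).
p, lam = 2, 0.5
u0_max = 200.0                            # ||u0||_inf, assuming a full period is covered
M = lam ** (-2.0 / (p - 1)) * u0_max      # threshold M = lambda^{-2/(p-1)} ||u0||_inf
mu = (2.0 * (p + 1) / (p - 1) ** 2) ** (1.0 / (p - 1))   # blow-up profile constant

# Predicted limit of tau_k^*: M^{(1-p)/2} * mu^{(p-1)/2} * (1/lambda - 1)
tau_limit = M ** ((1 - p) / 2.0) * mu ** ((p - 1) / 2.0) * (1.0 / lam - 1.0)
print(tau_limit)   # about 0.0866
```

The resulting value, about $0.0866$, is consistent with the tabulated values $\tau^*_k\approx 0.086$.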
  $k$    $I=100$    $I=200$    $I=300$    $I=400$
  ------ ---------- ---------- ---------- ----------
  $10$   $0.0840$   $0.0855$   $0.0858$   $0.0860$
  $20$   $0.0840$   $0.0855$   $0.0858$   $0.0860$
  $30$   $0.0840$   $0.0855$   $0.0858$   $0.0860$
  $40$   $0.0841$   $0.0855$   $0.0859$   $0.0862$

  : [\[tab_tau\]]{#tab_tau label="tab_tau"} Various values of $\tau^*_k$ with $p=2$.

![[\[numerical solution ex2 power\]]{#numerical solution ex2 power label="numerical solution ex2 power"}Left: The numerical solution of example 2. Right: $x$ vs $T(x)$](exp2_Unum.pdf "fig:"){#numerical solution ex2 power width="9cm" height="7cm"} ![[\[numerical solution ex2 power\]]{#numerical solution ex2 power label="numerical solution ex2 power"}Left: The numerical solution of example 2. Right: $x$ vs $T(x)$](exp2_blowup_curve.pdf "fig:"){#numerical solution ex2 power width="8cm" height="7cm"} ## Example 3 {#example 3 .unnumbered} In this example, we consider the system [\[equation1\]](#equation1){reference-type="eqref" reference="equation1"} with $$u_0(x) = 10(2-\cos(2\pi x)-\cos(4\pi x)),$$ $$u_1(x) = 0.$$ Figure [10](#numerical solution of exp3){reference-type="ref" reference="numerical solution of exp3"} shows the evolution of the numerical solution in the space-time plane for $p = 3$ and the numerical blow-up time $T(x)$. ![[\[numerical solution of exp3\]]{#numerical solution of exp3 label="numerical solution of exp3"}Left: The numerical solution of example 3. Right: $x$ vs $T(x)$.](exp3_Unum.pdf "fig:"){#numerical solution of exp3 width="9cm" height="7cm"} ![[\[numerical solution of exp3\]]{#numerical solution of exp3 label="numerical solution of exp3"}Left: The numerical solution of example 3. Right: $x$ vs $T(x)$.](exp3_blowup_curve.pdf "fig:"){#numerical solution of exp3 width="8cm" height="7cm"} # Conclusion In this paper, we derived a numerical scheme, based on a finite difference method combined with a rescaling method, for the approximation of blow-up solutions of the nonlinear wave equation.
We proved that, under suitable hypotheses, the numerical solution converges toward the exact solution of the problem. Finally, some numerical experiments were performed and confirm the theoretical study. We expect that all the presented results remain valid for a nonlinearity $F$ such that $F(u), F'(u),F''(u) \geqslant 0$ if $u\geqslant 0$. This will be the object of a future work. # Data Availability Statement {#data-availability-statement .unnumbered} Data sharing not applicable to this article as no data sets were generated or analyzed during the current study. # References {#references .unnumbered} Abia, L.M., López-Marcos, J.C. and Martínez, J. On the blow-up time convergence of semidiscretizations of reaction-diffusion equations. Appl. Numer. Math. **26**(4): 399-414, 1998. Abia, L.M., López-Marcos, J.C. and Martínez, J. The Euler method in the numerical integration of reaction-diffusion problems with blow-up, Appl. Numer. Math. **38**: 287-313, 2001. Antonini, C. and Merle, F. Optimal bounds on positive blow-up solutions for a semilinear wave equation. Int. Math. Res. Notices, **21**: 1141-1167, 2001. Azaiez, A., Benjemaa, M., Jrajria, A. and Zaag, H. Discontinuous Galerkin method for blow up solutions of nonlinear wave equations. Turk J Math, **47**(3): 1015-1038, 2023. Azaiez, A., Masmoudi, N. and Zaag, H. Blow-up rate for a semilinear wave equation with exponential nonlinearity in one space dimension, Math. Soc. Lect. Note Ser., **450**: 1-32, 2019. Dannawi, I., Kirane, M. and Fino, A. Z. Finite time blow-up for damped wave equations with space--time dependent potential and nonlinear memory, Nonlinear Differ. Equ. Appl., **25**, 2018. Berger, M. and Kohn, R.V. A rescaling algorithm for the numerical calculation of blowing-up solutions. Comm. Pure Appl. Math., **41**(6): 841-863, 1988. Caffarelli, L.A. and Friedman, A. Differentiability of the Blow-up Curve for one Dimensional Nonlinear Wave Equations. Arch. Rational Mech.
Anal., **91**(1): 83-98, 1985. Caffarelli, L.A. and Friedman, A. The blow-up boundary for nonlinear wave equations. Trans. Am. Math. Soc. **297**(1): 223-241, 1986. Cho, C.H., Hamada, S. and Okamoto, H. On the finite difference approximation for a parabolic blow-up problem, Japan J. Indust. Appl. Math. **24**: 105-134, 2007. Cho, C.H. A finite difference scheme for blow-up solutions of nonlinear wave equations, Numer. Math. Theor. Meth. Appl. **3**(4): 475-498, 2010. Cho, C.H. On the computation for blow-up solutions of the nonlinear wave equation, Numer. Math. **138**: 537-556, 2018. Evans, L.C. Partial Differential Equations, Graduate Studies in Math. **19**. American Mathematical Society, Providence, 1998. Glassey, R.T. Blow-up Theorems for nonlinear wave equations. Math. Z. **132**: 183-203, 1973. Glassey, R.T. Finite-time blow-up for solutions of nonlinear wave equations. Math. Z. **177**: 323-340, 1981. Levine, H.A. Instability and nonexistence of global solutions to nonlinear wave equations of the form $Pu_{tt} = -Au + F(u)$. Trans. Amer. Math. Soc., **192**: 1-21, 1974. Merle, F. and Zaag, H. Determination of the blow-up rate for the semilinear wave equation. Amer. J. Math., **125**(5): 1147-1164, 2003. Merle, F. and Zaag, H. On growth rate near the blowup surface for semilinear wave equations. Int. Math. Res. Not., **19**: 1127-1155, 2005. Merle, F. and Zaag, H. Existence and universality of the blow-up profile for the semilinear wave equation in one space dimension. J. Funct. Anal., **253**(1): 43-121, 2007. Merle, F. and Zaag, H. Existence and classification of characteristic points at blow-up for a semilinear wave equation in one space dimension. Amer. J. Math., **134**(3): 581-648, 2012. Nakagawa, T. Blowing up of a finite difference solution to $u_t = u_{xx}+u^2$. Appl. Math. Optim., **2**: 337-350, 1976. Nguyen, V.T. Numerical analysis of the rescaling method for parabolic problems with blow-up in finite time. Physica D, **339**: 49-65, 2017.
Saito, N. and Sasaki, T. Blow-up of finite-difference solutions to nonlinear wave equations. J. Math. Sci. Univ. Tokyo **23**(1): 349-380, 2016.
--- author: - "Xiangrui Pan [^1]" - "Cheng Zeng[^2]" - "Longyu Li [^3]" - "Gengji Li [^4]" bibliography: - Ref.bib title: The comparison of two Zagreb-Fermat eccentricity indices --- ABSTRACT In this paper, we focus on comparing the first and second Zagreb-Fermat eccentricity indices of graphs. We show that $$\frac{\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m\left( G \right)} \leq \frac{\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( G \right)}$$ holds for all acyclic and unicyclic graphs. Moreover, we show that the inequality may fail for graphs with at least two cycles. **Key words:** Fermat eccentricity; Zagreb-Fermat eccentricity indices; acyclic graphs; unicyclic graphs; multicyclic graphs # Introduction In the fields of mathematical chemistry and graph theory, various graph invariants have been developed to characterize the structural properties of chemical compounds and complex networks; see Refs. [@todeschini2008handbook; @latora2017complex; @trinajstic2018chemical; @brouwer2011spectra]. Suppose $G=(V(G),E(G))$ is a connected simple graph with vertex set $V(G)$ and edge set $E(G)$. Gutman and Trinajstić proposed two degree-based topological indices, namely, the first and second Zagreb indices [@gutman1972graph], which are defined by $$Z_1(G)=\sum_{u\in V(G)}\deg_G^2(u)$$ and $$Z_2(G)=\sum_{uv\in E(G)}\deg_G(u)\deg_G(v),$$ respectively. Here $\deg_G(u)$ denotes the degree of vertex $u$ in $G$. Later, Gutman, Ruščić, Trinajstić and Wilcox [@gutman1975graph] elaborated on the Zagreb indices. A survey of the most significant estimates of Zagreb indices is given in Ref. [@borovicanin2017bounds]. Todeschini and Consonni [@todeschini2008handbook; @consonni2009molecular] pointed out that the Zagreb indices and their variants have a wide range of applications in QSPR and QSAR models. We denote by $n=n(G)$ the number of vertices of $G$ and by $m=m(G)$ the number of its edges.
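As a concrete illustration of the two definitions above (a sketch added for clarity; the edge-list representation and the test graph $K_{1,3}$ are our own choices, not taken from the cited works), the following Python snippet computes $Z_1$ and $Z_2$:

```python
def zagreb_indices(edges):
    """First and second Zagreb indices of a simple graph given as an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    z1 = sum(d * d for d in deg.values())        # Z1: sum of squared degrees
    z2 = sum(deg[u] * deg[v] for u, v in edges)  # Z2: sum of degree products over edges
    return z1, z2

# Star K_{1,3}: center 0 joined to leaves 1, 2, 3.
z1, z2 = zagreb_indices([(0, 1), (0, 2), (0, 3)])
print(z1, z2)          # 12 9
print(z2 / 3, z1 / 4)  # the averages Z2/m and Z1/n
```

For $K_{1,3}$ the two averages coincide ($9/3 = 12/4 = 3$).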
Recently, a conjecture about the averages of the two Zagreb indices has been studied, namely, $$\frac{Z_2(G)}{m(G)}\geq\frac{Z_1(G)}{n(G)}.$$ Although this conjecture has been disproved in general, it has been shown to hold for all graphs with maximum degree at most four [@hansen2007comparing], for acyclic graphs [@vukicevic2008comparing] and for unicyclic graphs [@horoldagva2009comparing]. To better describe the structural characteristics of graphs and chemical compounds, variants of the Zagreb indices continue to be introduced, such as the improved Zagreb index, the regularized Zagreb index, etc. [@zhou2009novel; @gutman2013degree; @gutman2018beyond]. Another important invariant is the eccentricity $\varepsilon_2(u;G)$ of a vertex $u$ in $G$, namely the maximum distance from $u$ to the other vertices of $G$, which is defined by $\varepsilon_2(u;G):=\max\{d_G(u,v):v\in V(G)\}$. In analogy with the first and second Zagreb indices, the first and second Zagreb eccentricity indices $E_1(G)$ and $E_2(G)$ [@indices2010note; @ghorbani2012new], an important class of graph indices, were proposed by replacing the degrees with the eccentricities of the vertices. Here the first and second Zagreb eccentricity indices are defined by $$E_1(G)=\sum_{u\in V\left( G \right)}{\varepsilon _{2}^{2}\left( u \right)},\ E_2(G)=\sum_{uv\in E\left( G \right)}{\varepsilon _2\left( u \right) \varepsilon _2\left( v \right)},$$ respectively. The two Zagreb eccentricity indices have attracted considerable interest. Some general mathematical properties of $E_1(G)$ and $E_2(G)$ have been investigated in [@2011On; @Das2013Some]. Very recently, Zagreb eccentricity indices have been applied to fractals [@xu2023fractal] to reveal their structural properties. In [@indices2010note], Vukičević and Graovac compared $E_1(G)/n(G)$ with $E_2(G)/m(G)$. They showed that the inequality $$\frac{E_2(G)}{m(G)}\leq\frac{E_1(G)}{n(G)}$$ holds for all acyclic and unicyclic graphs and is not always valid for general graphs.
Based on the conclusions in [@indices2010note], Qi and Du [@2017On] determined the minimum and maximum Zagreb eccentricity indices of trees, while Qi, Zhou and Li [@2017Zagreb] presented the minimum and maximum Zagreb eccentricity indices of unicyclic graphs. Let $S$ be a nonempty subset of $V(G)$ with at least $2$ vertices. The Steiner distance $d_G(S)$ (or simply the distance of $S$) is the minimum size of a subtree of $G$ spanning $S$, that is, $$d_G(S)=\min\{m(T): T~ \text{is a subtree of}~ G ~\text{with} ~S\subseteq V(T)\}.$$ Naturally, the Steiner $k$-eccentricity $\varepsilon_k(u;G)$ of $u$ is defined by $$\varepsilon_k(u;G)=\max\{d_G(S): u\in S\subseteq V(G), |S|=k\geq2 \}.$$ For notational convenience, we sometimes use $\varepsilon _{k}\left( u \right)$ instead of $\varepsilon _{k}\left( u;G \right)$. Very recently, the Steiner $3$-eccentricity and its average were first investigated by Li, Yu, Klavžar et al. [@li2021average; @li2021average2]. For the Steiner $k$-eccentricity on trees, see [@li2021steiner]. If the cardinality of $S$ is $3$, say $S=\{u,v,w\}$, then the Steiner distance of $S$ is simply the minimum total distance from a vertex $\sigma\in V(G)$ to the three vertices of $S$, and we may write $$d_G(S)=\mathcal{F}_G(u,v,w)=\min_{\sigma\in V(G)}\{d(\sigma,u)+d(\sigma,v)+d(\sigma,w)\}.$$ This concept was first posed by Fermat in the Euclidean setting [@1999Geometric] and later extended to graphs. Hence the Steiner distance $\mathcal{F}(u,v,w)$ is also called the Fermat distance. Similarly, we call $\varepsilon_3(u;G)$ the Fermat eccentricity of $u$ in $G$. The Fermat distance, the Steiner distance and their variants have been widely applied in many fields, including operations research, VLSI design, and optical and wireless communication networks [@du2008steiner].
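The Fermat distance can be computed directly from its definition by three breadth-first searches, one from each of $u$, $v$, $w$. The following Python sketch is our own illustration (the adjacency-dictionary representation and the test graph are hypothetical choices, not code from the cited works):

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def fermat_distance(adj, u, v, w):
    """F_G(u,v,w) = min over sigma of d(sigma,u) + d(sigma,v) + d(sigma,w)."""
    du, dv, dw = bfs_dist(adj, u), bfs_dist(adj, v), bfs_dist(adj, w)
    return min(du[s] + dv[s] + dw[s] for s in adj)

# Star K_{1,3}: the Fermat vertex of the three leaves is the center,
# so F(1,2,3) = 1 + 1 + 1 = 3, the size of the Steiner tree of the leaves.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(fermat_distance(star, 1, 2, 3))   # 3
```

Maximizing $\mathcal{F}_G(u,v,w)$ over all pairs $\{v,w\}$ then yields the Fermat eccentricity $\varepsilon_3(u;G)$.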
We now introduce two modified Zagreb eccentricity indices based on the Fermat eccentricity, that is, the first Zagreb-Fermat eccentricity index $$F_1(G)=\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)},$$ and the second Zagreb-Fermat eccentricity index $$F_2(G)=\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}.$$ In this paper, we focus on the comparison of the first and second Zagreb-Fermat eccentricity indices and show that the inequality $$\label{eq:important} \frac{F_2(G)}{m\left( G \right) }\leq \frac{F_1(G)}{n\left( G \right)}$$ holds for all acyclic and unicyclic graphs. Based on two counterexamples given in Section [5](#sec4){reference-type="ref" reference="sec4"}, we observe that the inequality is not always valid for multicyclic graphs. # Notations and Preliminaries We first recall some notations and definitions. The degree of a vertex $u\in V(G)$, denoted by $\deg_G(u)$, is the number of vertices adjacent to $u$ in the graph $G$. We call a vertex a leaf if its degree is $1$. If a vertex has degree at least $2$, then it is an internal vertex. Let $P_n$, $C_n$, $K_{1,n-1}$ be the $n$-vertex path, cycle and star. Define the radius and diameter of $G$ by $\operatorname{rad}(G):=\min_{u\in V(G)}\varepsilon_{2}(u)$ and $\operatorname{diam}(G):=\max_{u\in V(G)}\varepsilon_{2}(u)$, respectively. A vertex that realizes $\operatorname{rad}(G)$ is called a central vertex. It is well known that a tree $T$ contains one or two central vertices. A path in $G$ of length $\operatorname{diam}(G)$ is called a diametrical path. Note that every graph contains at least one diametrical path. A path joining a vertex and one of its most distant vertices is called an eccentric path. We now focus on the Fermat eccentricity. A vertex $\sigma$ that minimizes the sum $d(\sigma,u)+d(\sigma,v)+d(\sigma,w)$ is called a Fermat vertex of the vertex set $\{u,v,w\}$.
Meanwhile, the minimal spanning tree of $\{u,v,w,\sigma\}$ is a Fermat tree of $\{u,v,w\}$. Note that the Fermat vertex and the Fermat tree of a given vertex triplet in a graph may not be unique. We call the vertices that realize $\varepsilon_3(u;G)$ the Fermat eccentric vertices of $u$ and the corresponding Fermat tree the Fermat eccentric $u$-tree. The following lemma shows that the difference of the Fermat eccentricities of two adjacent vertices cannot exceed $1$. **Lemma 1**. *For all $uv \in E\left(G\right)$, we have $$\left| \varepsilon _3\left( u \right) -\varepsilon _3\left( v \right) \right|\le 1.$$* *Proof.* Note that $u$, $v$ are adjacent. Hence for any vertices $x$, $y$ in $G$, $|\mathcal{F}(u,x,y)-\mathcal{F}(v,x,y)|\leq1$ holds, since the Fermat distance is defined via a minimum. Let $\omega_1$, $\omega_2$ be two vertices that realize $\varepsilon_3(u)$. It is obvious that $$|\varepsilon_3(u)-\mathcal{F}(v,\omega_1,\omega_2)|\leq1.$$ Similarly, supposing that $\varepsilon_3(v)=\mathcal{F}(v,\omega_3,\omega_4)$, we have $$|\varepsilon_3(v)-\mathcal{F}(u,\omega_3,\omega_4)|\leq1.$$ It follows that $$\varepsilon _3\left( u \right) -1\le \mathcal{F} \left( v,\omega _1,\omega _2 \right) \le \varepsilon _3\left( v \right)$$ and $$\varepsilon _3\left( v \right) -1\le \mathcal{F} \left( u,\omega _3,\omega _4 \right) \le \varepsilon _3\left( u \right).$$ We thus get the desired inequality $\left| \varepsilon _3\left( v \right) -\varepsilon _3\left( u \right) \right|\leq 1$. ◻ The following lemma is a corollary of Lemma 2.6 in [@li2021average] and will be used implicitly throughout the paper. **Lemma 2**. *Suppose $T$ is a tree and $u\in V(T)$. Then the Fermat eccentric $u$-tree contains a longest path starting from $u$. In other words, we can select a vertex farthest from $u$ as a Fermat eccentric vertex of $u$.* In the rest of the paper, we frequently operate on the diametrical path.
So for a given path $P_{d+1}=v_0v_1\cdots v_d$, we denote the path segment of $P_{d+1}$ by $P_{[p:q]}=v_p\cdots v_q$, where $0\leq p<q\leq d$. # Acyclic Graphs Two basic properties of eccentric paths are listed below: 1. If the tree has one central vertex, then each eccentric path passes through the central vertex; 2. If the tree has two central vertices, then each eccentric path passes through these two vertices. The above properties ensure that for a tree $T$, all central vertices must lie on a diametrical path. Let $P_{d+1}=v_0v_1\cdots v_d$ be a diametrical path of length $d$ which contains all central vertices of a given tree $T$. Now the tree $T$ can be depicted by Fig. [1](#Fig1){reference-type="ref" reference="Fig1"} when $T$ has only one central vertex $c$, and Fig. [2](#Fig2){reference-type="ref" reference="Fig2"} when $T$ has two central vertices $c_1$, $c_2$. Moreover, $P_{d+1}$ has one central vertex $c$ when $d$ is even, and two central vertices $c_1$ and $c_2$ when $d$ is odd. Let $T_{v_i}$ be the subtree with root vertex $v_i$, as shown in Figs. [1](#Fig1){reference-type="ref" reference="Fig1"} and [2](#Fig2){reference-type="ref" reference="Fig2"}, where $i=1,\ldots,d-1$. We denote by $\ell_i$ the maximal distance from $v_i$ to the leaves of $T_{v_i}$, where $i=1,\ldots,d-1$, that is, $\ell_i=\varepsilon_2(v_i;T_{v_i})$. We set $\ell=\max\{\ell_i:i = 1,\ldots,d-1\}$ and denote one subtree $T_{v_i}$ with $\ell_i=\ell$ by $T_\ell$. ![$G$ expanding at $P_{d+1}$ (One central vertex case).](Fig1.png "fig:"){#Fig1 width="70%"}\ ![$G$ expanding at $P_{d+1}$ (Two central vertices case).](Fig2.png "fig:"){#Fig2 width="70%"}\ The following lemma is easy to prove since the diametrical path $P_{d+1}$ is a longest path in the tree $T$. **Lemma 3**.
*For all $i=1,\ldots, d-1$, inequality $\ell_i \leq \min\{i,d-i\}$ holds.* Immediately from Lemma [Lemma 3](#lem1){reference-type="ref" reference="lem1"}, we obtain the following symmetry property of the diametrical path: **Lemma 4**. *For all $i=1,\ldots, d-1$, $\varepsilon_3\left ( v_i \right ) =\varepsilon_3\left ( v_{d-i} \right )$.* *Proof.* By symmetry, we only need to check that the equality holds for $i=0,\cdots,\lfloor\frac{d}{2}\rfloor$. We obtain that $$\begin{aligned} \varepsilon_3\left ( v_i \right ) &= d\left( v_i,v_d \right)+ \max_{k=i+1,\cdots,d-i-1}\left\{ \ell_k,i \right\} \\ &=d-i +\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_k,i \right\} \\ &=d\left( v_{d-i},v_0\right)+\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_k,i \right\} \\ &=\varepsilon_3\left ( v_{d-i} \right ). \end{aligned}$$ ◻ From Lemma [Lemma 1](#lem7){reference-type="ref" reference="lem7"}, we derive more precise properties along the diametrical path $P_{d+1}$ of a tree. **Lemma 5**. *For $i\in \left\{ 0,\ldots,\lfloor \frac{d}{2} \rfloor \right\}$, the Fermat eccentricity satisfies $$\varepsilon _3\left( v_{i+1} \right) \le \varepsilon _3\left( v_i \right) \le \varepsilon _3\left( v_{i+1} \right) +1.$$ For $i\in \left\{ \lfloor \frac{d}{2} \rfloor+1 ,\ldots,d-1 \right\}$, the Fermat eccentricity satisfies $$\varepsilon _3\left( v_i \right) \le \varepsilon _3\left( v_{i+1} \right) \le \varepsilon _3\left( v_i \right) +1.$$ In other words, among the vertices of $P_{d+1}$, the central vertices attain the minimum Fermat eccentricity.* *Proof.* For a given $v_i$, $i=0,\ldots,\left \lfloor \frac{d}{2} \right \rfloor$, we obtain that $$\begin{aligned} \varepsilon_3\left ( v_{i+1} \right ) &= d\left ( v_{i+1},v_d \right ) + \max_{k=i+2,\cdots,d-i-2}\left\{ \ell_k,i+1 \right\} \\ &=d\left ( v_i,v_d \right ) +\max_{k=i+2,\cdots,d-i-2}\left\{ \ell_k-1,i \right\} \\ &\le d\left ( v_i,v_d \right ) +\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_k,i \right\} = \varepsilon_3\left ( v_i \right ).\end{aligned}$$ By Lemma [Lemma
3](#lem1){reference-type="ref" reference="lem1"}, for $i=0,\ldots,\left \lfloor \frac{d}{2} \right \rfloor-1$, we deduce that $$\begin{aligned} &\ell_{i+1},\ell_{d-i-1} \le i+1,\end{aligned}$$ and hence, $$\begin{aligned} \varepsilon_3\left ( v_{i+1} \right ) +1&= d\left ( v_i,v_d \right ) + \max_{k=i+2,\cdots,d-i-2}\left\{ \ell_k,i+1 \right\} \\ &=d\left ( v_i,v_d \right ) +\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_k,i+1 \right\} \\ &\ge d\left ( v_i,v_d \right ) +\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_k,i \right\} = \varepsilon_3\left ( v_i \right ).\end{aligned}$$ By an analogous discussion, it is easy to prove the second assertion of Lemma [Lemma 5](#lem3){reference-type="ref" reference="lem3"}. The formulas in Lemma [Lemma 5](#lem3){reference-type="ref" reference="lem3"} show that the Fermat eccentricity of the vertices $v_i$, $i\in \left \{ 0,\ldots,d \right \}$, decreases from the two endpoints toward the center of the diametrical path $P_{d+1}$, so the central vertices attain the minimum Fermat eccentricity. ◻ **Lemma 6**. *(1) For all $uv\in E\left(P_{[\ell:d-\ell]}\right)$, we have $\varepsilon _3(u)=\varepsilon _3(v)$;* *(2) For all $uv\in E\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\right)$, we have $\left| \varepsilon _3(u)-\varepsilon _3(v) \right|=1$.* *Proof.* Suppose that $u,v\in \{v_0,\ldots,v_{\lfloor \frac{d}{2} \rfloor}\}$ and that $u$ is more distant from the center. We proceed by considering the following two cases: 1. $d\left ( v_0,u \right ) < \ell$. We have $$\begin{gathered} \varepsilon _3\left ( u \right ) = d\left ( u,v_d \right ) + \ell, \\ \varepsilon _3\left ( v \right ) = d\left ( v,v_d \right ) + \ell =\varepsilon_3\left( u \right) - 1. \end{gathered}$$ 2. $d\left ( v_0,u \right ) \ge \ell$. We have $$\begin{gathered} \varepsilon _3\left ( u \right ) = \varepsilon _3\left ( v \right )= d. \end{gathered}$$ For $u,v\in \left\{v_{\lfloor \frac{d}{2} \rfloor},\ldots,v_d\right\}$, an analogous conclusion holds.
In summary, when $uv\in E\left(P_{[\ell:d-\ell]}\right)$, we obtain $\varepsilon _3(u)=\varepsilon _3(v)$, and when $uv\in E\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\right)$, we obtain $\left| \varepsilon _3(u)-\varepsilon _3(v) \right|=1$. ◻ Lemma [Lemma 6](#lem5){reference-type="ref" reference="lem5"} divides the edges of $P_{d+1}$ into two parts and is the key lemma that will be used to prove Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}. The following lemma describes the behavior of the subtrees $T_{v_i}$, $i=1,2,\ldots, d-1$. **Lemma 7**. *For any vertex $u\in T_{v_i}$, $i=1,\ldots,d-1$, it holds that $$\label{eq:lemma4} \varepsilon_3\left( u \right) = d\left( u,v_i \right)+ \varepsilon_3\left( v_i \right).$$* *Proof.* By Lemma [Lemma 3](#lem1){reference-type="ref" reference="lem1"}, for any vertex $u \in T_{v_i}$, $i=1,\ldots,\lfloor \frac{d}{2} \rfloor$, it is obvious that one Fermat eccentric vertex of $u$ is $v_d$, while another one can be chosen as $v_0$ when $\ell_i=\ell$, and in the subtree $T_{\ell}$ when $\ell>\ell_i$. In other words, we have $$\begin{aligned} \varepsilon_3\left( u \right) = d\left( u,v_d \right)+\max_{t=i+1,\cdots,d-i-1}\left\{ \ell_t,i\right\}=d\left( u,v_i\right)+\varepsilon_3\left( v_i \right).\end{aligned}$$ Similarly, Eq. [\[eq:lemma4\]](#eq:lemma4){reference-type="eqref" reference="eq:lemma4"} holds for $i=\lceil \frac{d}{2} \rceil,\ldots,d-1$. ◻ **Theorem 1**. *For any acyclic graph $T$, we have $$\label{eq:thm1} \frac{\sum_{uv\in E\left( T\right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}\le \frac{\sum_{u\in V\left( T \right)}{\varepsilon _3^2\left( u \right)}}{n},$$ where equality holds if and only if $T\cong P_n$.* *Proof.* In this proof, we always assume that $u$ is farther from the center than $v$ for any given edge $uv\in E(T)$. Depending on whether the selected edge $uv$ lies on the diametrical path $P_{d+1}$, we distinguish the following two cases. 1.
Edge $uv\in E\left( T_{v_k}\setminus \left\{ v_k \right\} \right)$, $k = 1,2,\ldots,d-1$. By Lemma [Lemma 7](#lem4){reference-type="ref" reference="lem4"}, we have $\varepsilon _3\left( u \right) -\varepsilon _3\left( v \right) =1$ and thus $$\begin{aligned} &\quad\frac{\sum_{uv\in E\left( T_{v_k} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}-\frac{\sum_{u\in V\left( T_{v_k}\setminus \{v_k\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n} \\ &=\frac{n\sum_{uv\in E\left( T_{v_k} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\left( n-1 \right) \sum_{u\in V\left( T_{v_k}\setminus \{v_k\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)} \\ &=\frac{n\sum_{u\in V\left( T_{v_k}\setminus \{v_k\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -1 \right)}-\left( n-1 \right) \sum_{u\in V\left( T_{v_k}\setminus \{v_k\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)}\\ &=\frac{\sum_{u\in V\left( T_{v_k}\setminus \{v_k\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -n \right)}}{n\left( n-1 \right)}\le 0.\end{aligned}$$ Hence, $$\frac{\sum_{uv\in E\left( T_{v_k} \right)}{\varepsilon _3\left( u\right) \varepsilon _3\left( v \right)}}{m}\le \frac{\sum_{u\in V\left( T_{v_k}\setminus \{v_k\}\right)}{\varepsilon _3^2\left( u \right)}}{n}.$$ 2. Edge $uv\in E\left( P_{d+1} \right)$. 
By Lemma [Lemma 6](#lem5){reference-type="ref" reference="lem5"}, the difference can be rewritten as $$\begin{aligned} L&=\frac{\sum_{uv\in E\left( P_{d+1} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}-\frac{\sum_{u\in V\left( P_{d+1} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n} \\ &=\frac{n\sum_{uv\in E\left( P_{d+1} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\left( n-1 \right) \sum_{u\in V\left(P_{d+1} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)} \\ &=\frac{n\sum_{uv\in E\left( P_{[\ell:d-\ell]} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\left( n-1 \right) \sum_{u\in V\left( P_{[\ell:d-\ell]} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)} \\ &\quad+\frac{n\sum_{uv\in E\left(P_{[0:\ell]}\cup P_{[d-\ell:d]} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\left( n-1 \right) \sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)}.\end{aligned}$$ We consider the following two subcases. Notice that $\varepsilon_{3}(u)\leq n-1$ and $d+1\leq n$ always hold. 1. The tree $T$ has a unique central vertex $c$. 
We deduce that $$\label{eq.2(a)} \begin{split} L&=\frac{n\sum_{u\in V\left( P_{[\ell:d-\ell]}\right) \setminus \left\{ c \right\}}{\varepsilon _{3}^{2}\left( u \right)}-\left( n-1 \right) \sum_{u\in V\left( P_{[\ell:d-\ell]}\right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)} \\ &\quad +\frac{\splitdfrac{n\sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -1 \right)}}{-\left( n-1 \right) \sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\}\right)}{\varepsilon _{3}^{2}\left( u \right)}}}{n\left( n-1 \right)} \\ & =\frac{\splitdfrac{\sum_{u\in V\left( P_{[\ell:d-\ell]} \right)}{\varepsilon _{3}^{2}\left( u \right)}-n\varepsilon _{3}^{2}\left( c \right) +\sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{-n\sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _3\left( u \right)}}}{n\left( n-1 \right)} \\ & =\frac{(d-2\ell+1)d^2-nd^2+\sum_{u\in V\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -n \right)}}{n\left( n-1 \right)}\le 0. \end{split}$$ Hence, $$\frac{\sum_{uv\in E\left( P_{d+1} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}\le \frac{\sum_{u\in V\left( P_{d+1} \right)}{\varepsilon _3^2\left( u \right)}}{n}.$$ 2. The tree $T$ has two central vertices $c_1$, $c_2$. 
We see that $$\label{eq.2(b)} \begin{split} L&=\frac{n\sum_{u\in V\left( P_{[\ell:d-\ell]} \right) \setminus \left\{ c_1 \right\}}{\varepsilon _{3}^{2}\left( u \right)}-\left( n-1 \right) \sum_{u\in V\left( P_{[\ell:d-\ell]} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n\left( n-1 \right)} \\ &\quad+\frac{\splitdfrac{n\sum_{u\in V\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -1 \right)}}{-\left( n-1 \right) \sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}}{n\left( n-1 \right)} \\ &=\frac{\splitdfrac{\sum_{u\in V\left( P_{[\ell:d-\ell]} \right)}{\varepsilon _{3}^{2}\left( u \right)}-n\varepsilon _{3}^{2}\left( c_1 \right)+\sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)}}{-n\sum_{u\in V\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _3\left( u \right)}}}{n\left( n-1 \right)} \\ &=\frac{(d-2\ell+1)d^2-nd^2+\sum_{u\in V\left( P_{[0:\ell]}\cup P_{[d-\ell:d]}\setminus\left\{v_\ell,v_{d-\ell}\right\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -n \right)}}{n\left( n-1 \right)}\le 0. \end{split}$$ Hence, $$\frac{\sum_{uv\in E\left( P_{d+1}\right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}\le \frac{\sum_{u\in V\left( P_{d+1} \right)}{\varepsilon _3^2\left( u \right)}}{n}.$$ Combining the above two cases yields $$\frac{\sum_{uv\in E\left( T\right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}\le \frac{\sum_{u\in V\left( T \right)}{\varepsilon _3^2\left( u \right)}}{n}.$$ Suppose that $T\cong P_n$. We thus have $T_{v_k}=\{v_k\}$, $k=1,2,\ldots,d-1$. 
Consequently, $$\frac{\sum_{uv\in E\left( T \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{n-1}-\frac{\sum_{u\in V\left( T \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n}=\frac{\left( n-1 \right) d^2}{n-1}-\frac{nd^2}{n}=0.$$ On the other hand, if [\[eq:thm1\]](#eq:thm1){reference-type="eqref" reference="eq:thm1"} holds with equality, then [\[eq.2(a)\]](#eq.2(a)){reference-type="eqref" reference="eq.2(a)"} and [\[eq.2(b)\]](#eq.2(b)){reference-type="eqref" reference="eq.2(b)"} must hold with equality. This forces $(d-2\ell+1)d^2-nd^2=0$, i.e., $n=d-2\ell+1$. Since $n\geq d+1$, we conclude that $\ell=0$ and $n=d+1$, which implies that $T$ is a path. ◻ Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} suggests that $F_1(T)$ and $F_2(T)$ may share some extremal graphs. We make this precise in the following theorem. **Theorem 2**. *Suppose $T$ is an $n$-vertex tree, where $n\geq 3$. Then we have $$\label{eq:thm21} F_1(K_{1,n-1})\leq F_1(T)\leq F_1(P_n)$$ and $$\label{eq:thm22} F_2(K_{1,n-1})\leq F_2(T)\leq F_2(P_n).$$* *Proof.* We first check the upper bounds of [\[eq:thm21\]](#eq:thm21){reference-type="eqref" reference="eq:thm21"} and [\[eq:thm22\]](#eq:thm22){reference-type="eqref" reference="eq:thm22"}. If $T\cong P_n$, then $\varepsilon_3(u)$ attains its maximum value $n-1$ for all $u\in V(T)$, and thus the upper bounds of [\[eq:thm21\]](#eq:thm21){reference-type="eqref" reference="eq:thm21"} and [\[eq:thm22\]](#eq:thm22){reference-type="eqref" reference="eq:thm22"} follow. For the lower bound, we claim that the left sides of [\[eq:thm21\]](#eq:thm21){reference-type="eqref" reference="eq:thm21"} and [\[eq:thm22\]](#eq:thm22){reference-type="eqref" reference="eq:thm22"} hold if the diameter of $T$ is $2$; equivalently, the maximal degree of $T$ is $n-1$. Suppose that the diameter of $T$ is $d$ and $P_{d+1}=v_0v_1\cdots v_d$ is the diametrical path. Note that $v_0$ and $v_d$ are leaves. Then the vertices with maximal degree must be internal vertices. 
We choose a vertex $v_i$ of maximal degree on $P_{d+1}$ and transform $T$ into another tree $T'$ by deleting the edge $v_{d-1}v_d$ and attaching $v_d$ to $v_i$ by an edge. From the definition of diameter and Lemma [Lemma 7](#lem4){reference-type="ref" reference="lem4"}, we obtain $\varepsilon_{3}(u;T)\geq\varepsilon_{3}(u;T')$ for all vertices if we consider vertices and their transformations in pairs. We thus get $$F_1(T')-F_1(T)=\sum_{u\in V(T')}\varepsilon_{3}^2(u)-\sum_{u\in V(T)}\varepsilon_{3}^2(u)\leq 0$$ and $$F_2(T')-F_2(T)=\sum_{uv\in E(T')}\varepsilon_{3}(u)\varepsilon_{3}(v)-\sum_{uv\in E(T)}\varepsilon_{3}(u)\varepsilon_{3}(v)\leq 0.$$ We can repeat this transformation until we arrive at a tree with maximal degree $n-1$, that is, the star $K_{1,n-1}$, which is the only tree with diameter $2$. Recall that the transformation does not increase the two Fermat-Zagreb indices. Therefore, we have $F_i(K_{1,n-1})\leq F_i(T)$, $i=1,2$. ◻ # Unicyclic Graphs As shown in Fig. [3](#Fig3){reference-type="ref" reference="Fig3"}, a unicyclic graph $G$ consists of its unique cycle $C_g$ with $g$ vertices and the maximal subtrees $T_{x_i}$ rooted at the vertices $x_i$ of $C_g$, where $i=1,\ldots ,g$. ![Unicyclic graph $G$.](Fig3.png "fig:"){#Fig3 width="40%"}\ The following key lemma is from [@indices2010note]. **Lemma 8**. *Let $n\ge2$ and let $x_1, \ldots, x_n$ be positive integers such that $\left| x_i-x_{i+1}\right|\le1$ for each $i = 1,\ldots,n$, where $x_1=x_{n+1}$. Then $\sum_{i=1}^{n}{x_{i}^{2} }\ge \sum_{i=1}^{n}{x_ix_{i+1}}$.* **Lemma 9**. We are now in a position to prove the inequality for unicyclic graphs. **Theorem 3**. *When $G$ is a unicyclic graph, we have $$\frac{\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m}\le \frac{\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n}.$$* *Proof.* We prove this by estimating the difference between the two sides. 
Note that $n=m$ for unicyclic graphs. We thus have $$\begin{aligned} \frac{\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m} - \frac{\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n} =\frac{\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)}}{n}.\end{aligned}$$ It therefore suffices to prove that $$\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)}\le 0 .$$ Clearly, $$\begin{aligned} &\quad\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( G \right)}{\varepsilon _{3}^{2}\left( u \right)} \\ &=\left[ \sum_{uv\in E\left( C_g \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( C_g \right)}{\varepsilon _{3}^{2}\left( u \right)} \right]+\sum_{i=1}^g{\left[ \sum_{uv\in E\left( T_{x_i} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( T_{x_i}\setminus \left\{ x_i \right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \right]}.\end{aligned}$$ By Lemmas [Lemma 1](#lem7){reference-type="ref" reference="lem7"} and [Lemma 8](#lem6){reference-type="ref" reference="lem6"}, we get that $$\sum_{uv\in E\left( C_g \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( C_g \right)}{\varepsilon _{3}^{2}\left( u \right)}\le 0.$$ We now just need to prove $$\sum_{uv\in E\left( T_{x_i} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( T_{x_i}\setminus \left\{ x_i \right\} \right)}{\varepsilon _{3}^{2}\left( u \right)}\le 0$$ for any fixed $T_{x_i}$, $i=1,\ldots,g$. For a given $x_i$, $i=1,\ldots,g$, we write $x = x_i$ for convenience. 
In the remaining part of this proof, suppose that $u$ is more distant from $x$ than $v$ for all $uv\in E(G)$. Let us now investigate $T_x$ in two cases. **Case 1:** $x$ is on one of the diametrical paths of $T_x$, see Fig. [4](#Fig5){reference-type="ref" reference="Fig5"}. Suppose that $P_{d+1}=v_0v_1\cdots v_d$ is one diametrical path of $T_x$ which contains all central vertices of $T_x$. Set $c=v_t=v_{\lfloor \frac{d}{2} \rfloor}$ when $T_x$ has a unique central vertex $c$, and $c_1 =v_t=v_{\lfloor \frac{d}{2} \rfloor}$ when $T_x$ has two central vertices $c_1, c_2$. Without loss of generality, let $x=v_p$ lie on the left side of $c$, which means that $p\leq t$. We denote by $v_q$ the vertex symmetric to $v_p$ about the center, i.e., $q = d - p$. Let $T_{v_j}$ be the subtree of $T_x$ with root vertex $v_j$. We also write $\ell_j=\varepsilon_2(v_j; T_{v_j})$ and $\ell=\max\{\ell_j:j = 1,\ldots,d-1\}$. Let $\omega'$, $\omega''$ be the vertices that realize $\varepsilon_3(x;G\setminus T_x)$, and let $T_\ell$ be one subtree $T_{v_j}$ with $\ell_j=\ell$. Denote by $\bar{\omega}$ a vertex of $T_\ell$ farthest from its root. ![$T_x$ expanding at $P_{d+1}$.](Fig5.png "fig:"){#Fig5 width="70%"}\ 1. $uv \in E\left(T_{v_j}\right)$, $j = 1,\ldots,d-1$. Recall that $\ell_j\leq\min\{j,d-j\}$. For all $w\in V\left(T_{v_j}\right)$, we can select the Fermat eccentric vertices of $\varepsilon_{3}(w)$ belonging to the set $\{v_0,v_d,\omega', \omega'',\bar{\omega}\}$ when $\ell_j<\ell$, and $\{v_0,v_d,\omega',\omega''\}$ when $\ell_j=\ell$. In other words, $\varepsilon_3(w)=d(w,v_j)+\varepsilon_3(v_j)$ and hence $\varepsilon _3\left( u\right)-\varepsilon _3\left( v\right) = 1$. 
So, for any $uv \in E\left(T_{v_j}\right)$, $j=1,\ldots,d-1$, we observe that $$\begin{aligned} &\quad\sum_{uv\in E\left( T_{v_j} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( T_{v_j}\setminus\left\{v_j\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \\ &=\sum_{u\in V\left( T_{v_j}\setminus \left\{ v_j \right\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -1 \right)}-\sum_{u\in V\left( T_{v_j}\setminus\left\{v_j\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \\ &=-\sum_{u\in V\left( T_{v_j}\setminus \left\{ v_j \right\} \right)}{\varepsilon _3\left( u \right)}\le 0.\end{aligned}$$ 2. $uv \in E\left(P_{[0:p]} \cup P_{[q:d]} \right)$. For $w\in V\left(P_{[0:p]}\right)$, the farthest vertex from $w$ (also one Fermat eccentric vertex of $w$) belongs to $\{v_d,\omega', \omega''\}$. Similarly, for $w\in V\left(P_{[q:d]}\right)$, the farthest vertex from $w$ (also one Fermat eccentric vertex of $w$) belongs to $\{v_0,\omega', \omega''\}$. These two properties ensure that $0\leq \varepsilon _3\left( u\right)-\varepsilon _3\left( v\right)\leq 1$. Hence, $$\begin{aligned} &\quad\sum_{uv\in E\left( P_{[0:p]} \cup P_{[q:d]} \right)}{ \varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left(P_{[0:p]} \cup P_{[q:d]}\setminus\left\{v_p,v_q\right\} \right)}{ \varepsilon _{3}^{2}\left( u \right)}\\ &\leq\sum_{u\in V\left( P_{[0:p]} \cup P_{[q:d]}\setminus\left\{v_p,v_q\right\} \right)}{ \varepsilon _{3}^{2}\left( u \right)}-\sum_{u\in V\left( P_{[0:p]} \cup P_{[q:d]}\setminus\left\{v_p,v_q\right\} \right)}{ \varepsilon _{3}^{2}\left( u \right)}=0.\end{aligned}$$ 3. $uv \in E\left(P_{[p:q]}\right)$. 1. For any $uv \in E\left(P_{[t:q]}\right)$, the farthest Fermat eccentric vertex of $u\in V\left(P_{[p:q]}\right)$ lies in the set $\{v_0,\omega',\omega''\}$. This property leads to $0 \le \varepsilon _3\left(u\right) -\varepsilon _3\left( v \right) \le 1$. 
Hence, $$\begin{aligned} \sum_{uv\in E\left( P_{[t:q]} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( P_{[t:q]}\setminus\left\{v_t\right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \le 0.\end{aligned}$$ 2. For any $uv \in E\left(P_{[p:t]}\right)$, we have $-1\leq \varepsilon _3\left( u \right)- \varepsilon _3\left( v \right)\leq 1$ by Lemma [Lemma 1](#lem7){reference-type="ref" reference="lem7"}. We need to check what occurs when $\varepsilon _3\left( u \right)- \varepsilon _3\left( v \right)=-1$. Recall that $u$ is more distant from $x$ than $v$. Suppose that there exists an edge $v_kv_{k+1} \in E \left(P_{[p:t]} \right)$ such that $\varepsilon _3\left( v_{k}\right) - \varepsilon _3\left( v_{k+1} \right) = 1$. It is obvious that $\varepsilon _3\left( v_{k}\right)= \varepsilon _3\left( v_k; T_x \right)$ and $\varepsilon _3\left( v_{k+1}\right)= \varepsilon _3\left( v_{k+1}; T_x \right)$. Moreover, the Fermat eccentric vertices of $\varepsilon _3\left( v_{k}\right)$ and $\varepsilon _3\left( v_{k+1}\right)$ are both $v_d$ and $\bar{\omega}$. We thus have $\ell>\max\{k-p+\ell_p,k\}$ and then find that $$\varepsilon _3\left( v_{d-k}\right)-\varepsilon _3\left( v_{d-k-1}\right)=1.$$ Now we can consider $v_kv_{k+1}$ and $v_{d-k-1}v_{d-k}$ in pairs. It is clear that $$\begin{split} &\quad\varepsilon _3(v_k)\varepsilon _3(v_{k+1})-\varepsilon _3^2(v_{k+1})+\varepsilon _3(v_{d-k-1})\varepsilon _3(v_{d-k})-\varepsilon_3^2(v_{d-k})\\ &=\varepsilon _3(v_{k+1})-\varepsilon _3(v_{d-k})=-1<0. 
\end{split}$$ So we obtain the inequality $$\sum_{uv\in E\left(P_{[p:q]} \right)}{ \varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( P_{[p:q]} \setminus \left\{ x \right\} \right)}{ \varepsilon _{3}^{2}\left( u \right)} \le 0.$$ Combining all three subcases in Case 1 yields $$\begin{aligned} \quad\sum_{uv\in E\left( T_x \right)}{ \varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left(T_x \setminus \left\{ x \right\} \right)}{ \varepsilon _{3}^{2}\left( u \right)} \le 0.\end{aligned}$$ **Case 2:** $x$ is not on any diametrical path of $T_x$. Then we can display $T_x$ as in Fig. [\[Fig6\]](#Fig6){reference-type="ref" reference="Fig6"}. Set $P'=v_pv_{d+1}v_{d+2}\cdots v_{s+1}$, and then denote $T_{v_p} = \bigcup_{j=d+1}^{s+1}{T_{v_j}}\cup P'$. Notation introduced in Case 1 is carried over and not repeated. ![image](Fig6.png){width="70%"} 1. $uv\in E\left( T_x\setminus T_{v_p} \right)$. We claim that $$\begin{aligned} &\quad\sum_{uv\in E\left( T_x\setminus T_{v_p} \right)}{ \varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( T_x\setminus T_{v_p} \right)}{ \varepsilon _{3}^{2}\left( u \right)} \le 0.\end{aligned}$$ Similar to the discussion in Case 1, we have the following: - For $uv\in E\left(T_{v_j}\right)$, $j=1,2,\ldots,p-1,p+1,\ldots d-1$, we have $\varepsilon_{3}(u)-\varepsilon_{3}(v)=1$. - For $uv\in E\left(P_{[0:p]} \cup P_{[q:d]} \right)$, we have $0\leq\varepsilon_{3}(u)-\varepsilon_{3}(v)\leq1$. - For $uv\in E\left(P_{[p:q]}\right)$, we focus on the edges $uv$ such that $\varepsilon_{3}(u)-\varepsilon_{3}(v)=-1$. Suppose that there exists an edge $v_kv_{k+1}$ such that $\varepsilon _3\left( v_{k}\right) - \varepsilon _3\left( v_{k+1} \right) = 1$, $k\in \{p,\ldots,t\}$. 
We can still consider the edge $v_{d-k-1}v_{d-k}$ with $\varepsilon _3\left( v_{d-k}\right)-\varepsilon _3\left( v_{d-k-1}\right)=1$, and then get $$\sum_{uv\in E\left(P_{[p:q]} \right)}{ \varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( P_{[p:q]} \setminus \left\{ x \right\} \right)}{ \varepsilon _{3}^{2}\left( u \right)} \le 0.$$ In fact, the validity of the above three statements is clear for the following reasons: in the discussion of Case 1, our focus lies solely on determining the precise location of the Fermat eccentric vertices, without relying on the unicyclic structure of $G\backslash T_x$. This ensures the feasibility of our analysis in Case 2 when substituting $G\backslash T_x$ and $T_{v_p}$ in Figure [4](#Fig5){reference-type="ref" reference="Fig5"} with $T_{v_p}\cup G\backslash T_x$ and the single vertex $\{v_p\}$ in Figure [\[Fig6\]](#Fig6){reference-type="ref" reference="Fig6"}, respectively. So our claim holds. 2. We now investigate edges in $E\left(T_{v_j}\right)$, $j = d+1,\ldots ,s+1$. Recall that $x$ is not on any diametrical path. We have $\ell_p=\varepsilon_{2}(v_p;T_{v_p})< p$. Furthermore, for any $uv \in E\left(T_{v_j}\right)$, $j = d+1,\ldots ,s+1$, the Fermat eccentric vertices of $u$ and $v$ do not belong to $V\left(T_{v_p}\right)$. Note that the distance between $u$ and $x$ is greater than that between $v$ and $x$. 
Hence we have $$\varepsilon _3\left( u \right)=\varepsilon _3\left( v_j \right) +d\left( u,v_j \right)$$ and $$\varepsilon _3\left( v \right) =\varepsilon _3\left( v_j \right) +d\left( v,v_j \right) =\varepsilon _3\left( u \right) -1.$$ We derive that $$\begin{aligned} &\sum_{uv\in E\left( T_{v_j} \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( T_{v_j}\setminus \left\{ v_j \right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \\ &=\sum_{u\in V\left( T_{v_j}\setminus \left\{ v_j \right\} \right)}{\varepsilon _3\left( u \right) \left( \varepsilon _3\left( u \right) -1 \right)}-\sum_{u\in V\left( T_{v_j}\setminus \left\{ v_j \right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \\ &= - \sum_{u\in V\left( T_{v_j}\setminus \left\{ v_j \right\} \right)}{\varepsilon _3\left( u \right)}\le 0\end{aligned}$$ for $j = d+1,\ldots,s+1$. 3. For $uv \in E \left(P'\right)$, note that $-1\leq\varepsilon _3(u)-\varepsilon _3(v)\leq 1$ from Lemma [Lemma 1](#lem7){reference-type="ref" reference="lem7"}. Let us deal with the following two subcases: 1. For all $uv \in E \left(P'\right)$, $0 \le \varepsilon _3\left( u\right) -\varepsilon _3\left( v \right) \le 1$. It is obvious that $$\begin{aligned} &\sum_{uv\in E\left(P' \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( P'\setminus \left\{x \right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \le 0 .\end{aligned}$$ 2. There exists an edge $v_kv_{k+1} \in E\left(P'\right)$ such that $\varepsilon _3\left( v_k\right) -\varepsilon _3\left( v_{k+1} \right) =-1$ and $k$ is maximal. Recall that $v_{k+1}$ is more distant from $x$ than $v_k$ and $\ell_p< \ell$. So the Fermat eccentric vertices of $v_k$ and $v_{k+1}$ must be $\omega'$ and $\omega''$. This statement also holds for every edge $uv$ on the path $v_pv_{d+1}\cdots v_{k+1}$; that is, $\varepsilon _3\left(u\right) -\varepsilon _3\left( v \right) =-1$. 
For any vertex $u$ in $P_{[0:p]}$, it now becomes evident that the Fermat eccentric vertices of $u$ are also $\omega'$ and $\omega''$. We thus have $\varepsilon _3\left(u\right) -\varepsilon _3\left( v \right) =1$ for all $uv\in P_{[0:p]}$. Taking all the edges of $P_{[0:k-d]}$ and $P'$ into consideration, we have $$\begin{aligned} &\quad\sum_{uv\in E(P'\cup P_{[0:k-d]})}\varepsilon _3\left(u\right)\varepsilon _3\left( v \right)- \sum_{u\in V((P_{[0:k-d]}\backslash \{v_{k-d}\})\cup (P'\backslash\{v_p\}))}\varepsilon _3^2\left(u\right)\\&=\sum_{u\in V(P'\backslash\{v_p\})}\varepsilon _3\left(u\right)-\sum_{u\in V(P_{[0:k-d]}\backslash \{v_{k-d}\})}\varepsilon _3(u)\leq 0,\end{aligned}$$ where the last inequality holds due to $\varepsilon _3(v_j)\geq \varepsilon _3(v_{k+1-j})$ for all $j=0,1,2,\ldots, k-d$. Note that although edges with $\varepsilon_3(u)-\varepsilon_3(v)=-1$ may exist in both paths $P_{[p:t]}$ and $P'$, the vertex sets $V\left(P_{[p:q]}\right)$, $V\left(P'\right)$ and $V\left(P_{[0:k-d]}\right)$ we discussed are pairwise disjoint. Therefore, we conclude that $$\begin{aligned} \sum_{uv\in E\left(T_x \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}-\sum_{u\in V\left( T_x\setminus \left\{ x \right\} \right)}{\varepsilon _{3}^{2}\left( u \right)} \le 0.\end{aligned}$$ In summary, all cases have been verified, and inequality [\[eq:important\]](#eq:important){reference-type="eqref" reference="eq:important"} holds for unicyclic graphs. ◻ # Multicyclic graphs {#sec4} In this section, we give two counterexamples which show that inequality [\[eq:important\]](#eq:important){reference-type="eqref" reference="eq:important"} does not always apply to general graphs. ![image](Fig8.png){width="40%"} We give a bicyclic graph $\tilde{G}_{2,x}$ with $3x+6$ vertices, shown in Fig. [\[Fig7\]](#Fig7){reference-type="ref" reference="Fig7"}. 
One can easily compute that $$\frac{\sum\limits_{u\in V(\tilde{G}_{2,x})}\varepsilon_3^2(u)}{n(\tilde{G}_{2,x})}-\frac{\sum\limits_{uv\in E(\tilde{G}_{2,x})}\varepsilon_3(u)\varepsilon_3(v)}{m(\tilde{G}_{2,x})}=\frac{-\frac{1}{2}x^3+31x^2+173x+55}{(3x+6)(3x+7)}.$$ We check that $$\frac{\sum\limits_{u\in V(\tilde{G}_{2,64})}\varepsilon_3^2(u)}{n(\tilde{G}_{2,64})}-\frac{\sum\limits_{uv\in E(\tilde{G}_{2,64})}\varepsilon_3(u)\varepsilon_3(v)}{m(\tilde{G}_{2,64})}>0$$ and $$\frac{\sum\limits_{u\in V(\tilde{G}_{2,65})}\varepsilon_3^2(u)}{n(\tilde{G}_{2,65})}-\frac{\sum\limits_{uv\in E(\tilde{G}_{2,65})}\varepsilon_3(u)\varepsilon_3(v)}{m(\tilde{G}_{2,65})}<0.$$ We next consider a graph with multiple cycles, as shown in Fig. [\[Fig8\]](#Fig8){reference-type="ref" reference="Fig8"}. ![image](Fig7.png){width="40%"} A direct computation gives the following: $$\begin{aligned} &\frac{\sum\limits_{u\in V(G_{k,x})}\varepsilon_3^2(u)}{n(G_{k,x})}-\frac{\sum\limits_{uv\in E(G_{k,x})}\varepsilon_3(u)\varepsilon_3(v)}{m(G_{k,x})}=\\ & \quad\frac{\left(-\frac{1}{6}k^2-\frac{26k}{3}\right)x^3+(8k^2-54k)x^2+\left(\frac{121k^2}{6}-\frac{226k}{3}\right)x+12k^2-30k}{(kx+3k)(kx+2k+1)}.\end{aligned}$$ It is clear that $$\frac{\sum\limits_{u\in V(G_{k,x})}\varepsilon_3^2(u)}{n(G_{k,x})}-\frac{\sum\limits_{uv\in E(G_{k,x})}\varepsilon_3(u)\varepsilon_3(v)}{m(G_{k,x})}>0$$ when $x=0$, while $$\frac{\sum\limits_{u\in V(G_{k,x})}\varepsilon_3^2(u)}{n(G_{k,x})}-\frac{\sum\limits_{uv\in E(G_{k,x})}\varepsilon_3(u)\varepsilon_3(v)}{m(G_{k,x})}<0$$ for $x$ large enough. The above two counterexamples demonstrate that inequality [\[eq:important\]](#eq:important){reference-type="eqref" reference="eq:important"} and its reverse are not always valid for graphs with at least two cycles. 
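The tree cases above can be sanity-checked numerically by brute force. The sketch below assumes the Fermat eccentricity of a vertex is $\varepsilon_3(u)=\max_{v,w}\min_x\left[d(x,u)+d(x,v)+d(x,w)\right]$, computed from BFS distances; the helper names are ours, not from the paper. It confirms equality of the two sides for a path (Theorem 1) and strict inequality for a star (the other extremal tree in Theorem 2).

```python
from collections import deque
from itertools import combinations

def bfs(adj, s):
    """Distances from s in an unweighted graph given as an adjacency dict."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def eps3(adj):
    """Fermat eccentricity of every vertex, by brute force over all pairs
    and all candidate Fermat points."""
    V = list(adj)
    D = {v: bfs(adj, v) for v in V}
    fermat = lambda a, b, c: min(D[x][a] + D[x][b] + D[x][c] for x in V)
    return {u: max(fermat(u, v, w) for v, w in combinations(V, 2)) for u in V}

def both_sides(adj):
    """Return (edge average of eps3(u)*eps3(v), vertex average of eps3(u)^2)."""
    e = eps3(adj)
    edges = {frozenset((u, w)) for u in adj for w in adj[u]}
    lhs = sum(e[u] * e[w] for u, w in (tuple(ed) for ed in edges)) / len(edges)
    rhs = sum(t * t for t in e.values()) / len(adj)
    return lhs, rhs

# Path P_5: both sides equal (n-1)^2 = 16; star K_{1,4}: strict inequality.
path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
star5 = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
```

For the path, every vertex has $\varepsilon_3=4$, so both averages are $16$; for the star the leaves have $\varepsilon_3=3$ and the center $2$, giving edge average $6$ against vertex average $8$.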
# Conclusion In this paper, we have investigated the inequality $$\frac{\sum_{u\in V\left( G \right)}{\varepsilon _3^2\left( u \right)}}{n(G)}\geq\frac{\sum_{uv\in E\left( G \right)}{\varepsilon _3\left( u \right) \varepsilon _3\left( v \right)}}{m(G)}$$ and proved that it holds for all acyclic and unicyclic graphs. We have also shown that neither this inequality nor its reverse holds in general for graphs with at least two cycles. The properties and applications of Zagreb-Fermat indices, and more generally of Zagreb Steiner $k$-indices, deserve further study, as Zagreb-type indices play an important role in graph theory. # Acknowledgments {#acknowledgments .unnumbered} The research is partially supported by the Natural Science Foundation of China (No. 12301107) and the Natural Science Foundation of Shandong Province, China (No. ZR202209010046). [^1]: panr2358\@gmail.com [^2]: czeng\@sdtbu.edu.cn (Corresponding author) [^3]: 815253894lly\@gmail.com [^4]: lgengji\@gmail.com
--- abstract: | We show that, up to a natural equivalence relation, the only non-trivial, non-identity holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ between unordered configuration spaces, where $m\in\{3,4\}$, are the resolving quartic map $R\colon\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$, a map $\Psi_3\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{Conf}}_4\mathbb{C}$ constructed from the inflection points of elliptic curves in a family, and $\Psi_3\circ R$. This completes the classification of holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ for $m\leq n$, extending results of Lin, Chen and Salter, and partially resolves a conjecture of Farb. We also classify the holomorphic families of elliptic curves over $\mathop{\mathrm{Conf}}_n\mathbb{C}$. To do this we classify homomorphisms between braid groups with few strands and $\mathop{\mathrm{PSL}}_2\mathbb{Z}$, then apply powerful results from complex analysis and Teichmüller theory. Furthermore, we prove a conjecture of Castel about the equivalence classes of endomorphisms of the braid group with three strands. address: - "PH: Department of Mathematics, University of Chicago, 5734 S University Ave, Chicago, IL 60637, USA" - "JS: Department of Mathematics, University of Auckland, 38 Princes Street, Auckland 1010, New Zealand" author: - Peter Huxford and Jeroen Schillewaert bibliography: - references.bib title: Braid groups, elliptic curves, and resolving the quartic --- # Introduction For a topological space $X$, let $$\mathop{\mathrm{PConf}}_n X \coloneqq \{(x_1,\ldots,x_n)\in X^n : x_i\neq x_j \text{ for all } i\neq j \}$$ be the *ordered configuration space* of $n$ points in $X$. 
The symmetric group $S_n$ acts freely and properly discontinuously on $\mathop{\mathrm{PConf}}_nX$ by permuting coordinates, and the quotient $$\mathop{\mathrm{Conf}}_nX\coloneqq (\mathop{\mathrm{PConf}}_nX) / S_n$$ is the *(unordered) configuration space* of $n$ points in $X$, and can be thought of as the space of $n$-element subsets of $X$. If $X$ is a complex manifold, then so too are $\mathop{\mathrm{PConf}}_nX$ and $\mathop{\mathrm{Conf}}_nX$. In this paper we address some interesting special cases of the following fundamental question. **Question 1**. What are the holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$? An answer to this question can be thought of as a classification of the geometric constructions which take $n$ points in the complex plane, and produce $m$ points in the complex plane. Lin answered this question when $n\geq5$ and $m\leq n$ [@Lin04a], which was generalized by Chen and Salter to the cases with $n\geq5$ and $m\leq 2n$ [@CS23]. In order to state their results and our own, we first define a natural equivalence relation on the set of such maps, whose name we take from Chen and Salter [@CS23]. **Definition 1** (**Affine twist**). Let $f\colon Y\to Z$ be a holomorphic map between complex manifolds $Y$ and $Z$, and suppose the *affine group* $$\mathop{\mathrm{Aff}}\coloneqq\{z\mapsto az+b : a\in\mathbb{C}^*, b\in\mathbb{C}\} \cong \mathbb{C}\rtimes\mathbb{C}^*$$ has a holomorphic action on $Z$. If $A\colon Y\to\mathop{\mathrm{Aff}}$ is a holomorphic map, then the *affine twist* $f^A\colon Y\to Z$ of $f$ by $A$ is the holomorphic map given by $$f^A(y) \coloneqq A(y) \cdot f(y).$$ We say that $f$ and $f^A$ are *affine equivalent*. We let $\mathop{\mathrm{Aff}}$ act on $\mathop{\mathrm{Conf}}_n\mathbb{C}$ element-wise, and consider the holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ up to affine equivalence. 
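As a toy illustration of the element-wise $\mathop{\mathrm{Aff}}$-action (our own sketch, not from the paper): a point of $\mathop{\mathrm{Conf}}_n\mathbb{C}$ can be modeled as a frozenset of $n$ distinct complex numbers, and since $a\neq0$, the map $z\mapsto az+b$ is injective and so preserves configurations.

```python
def aff_act(a, b, conf):
    """Apply z -> a*z + b elementwise; a != 0, so distinctness is preserved."""
    assert a != 0
    return frozenset(a * z + b for z in conf)

conf = frozenset({0j, 1 + 0j, 2j})   # a point of Conf_3(C)
image = aff_act(2, 1j, conf)
```

Here the image is again a three-point configuration, namely $\{i,\,2+i,\,5i\}$.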
Chen and Salter define the following class of examples of holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ [@CS23 §3]. **Definition 2** (**Root map**). If $k,p\geq1$ and $\epsilon\in\{0,1\}$, then a *basic root map* is a map $r_{p,\epsilon}\colon\mathop{\mathrm{Conf}}_k\mathbb{C}^*\to\mathop{\mathrm{Conf}}_{kp+\epsilon}\mathbb{C}$ given by taking $p$th roots, and also including zero if $\epsilon=1$. A *root map* $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_{kp+\epsilon}\mathbb{C}$ is a composition of the form $$\mathop{\mathrm{Conf}}_n\mathbb{C}\to \mathop{\mathrm{Conf}}_k\mathbb{C}^* \xrightarrow{r_{p,\epsilon}} \mathop{\mathrm{Conf}}_{kp+\epsilon}\mathbb{C},$$ where the first map is an affine twist of a constant map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_k\mathbb{C}^*$. To make sense of the affine twist, we let $\mathop{\mathrm{Aff}}$ act on $\mathop{\mathrm{Conf}}_k\mathbb{C}^*$ element-wise with translations acting trivially. It is worth noting that constant maps are affine twists of root maps. In fact, Chen and Salter prove that affine twists of root maps are precisely the holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ for which the induced map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}/\mathop{\mathrm{Aff}}$ is constant [@CS23 §3.3]. Now we are in a position to more precisely state what is known about holomorphic maps between unordered configuration spaces. Lin's results [@Lin04b Theorems 1.4, 4.5, 4.8] together with Chen and Salter's generalizations [@CS23 Theorem 3.1] imply that if $n\geq5$ and $m\leq 2n$, then, up to affine equivalence, the only holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ are root maps, and the identity map when $n=m$. 
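A basic root map $r_{p,\epsilon}$ can be illustrated numerically as follows (a sketch with our own helper name; the floating-point roots are only approximate). Each nonzero point contributes its $p$ distinct $p$th roots, and distinct points have disjoint root sets, so a $k$-point configuration in $\mathbb{C}^*$ yields $kp+\epsilon$ distinct points.

```python
import cmath

def basic_root_map(points, p, eps=0):
    """Send a k-point configuration in C* to the set of all p-th roots
    of its points, together with 0 if eps == 1 (kp + eps points in total)."""
    out = set()
    for z in points:
        r, theta = abs(z), cmath.phase(z)
        for j in range(p):
            out.add(r ** (1 / p) * cmath.exp(1j * (theta + 2 * cmath.pi * j) / p))
    if eps == 1:
        out.add(0j)
    return out

roots = basic_root_map({1 + 0j, 4 + 0j}, p=2, eps=1)   # k=2, p=2, eps=1
```

Here `roots` consists of the five points $\{\pm1,\pm2,0\}$ (up to rounding), matching $kp+\epsilon=5$.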
It is an open question whether all holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ with $n\geq5$ are affine twists of either a root map or of the identity map. However, for $n=3,4$ there are many examples of holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ that are not of this form. The next map we describe is such an example, and arises from identifying a monic, square-free polynomial of degree $n$ over $\mathbb{C}$ with the set of its roots in $\mathop{\mathrm{Conf}}_n\mathbb{C}$. It has some interesting history: as an 18-year-old in 1540, Lodovico Ferrari discovered a method for solving a general quartic equation, by first solving a closely related cubic equation. This method gives rise to a holomorphic map often referred to as *resolving the quartic*. **Definition 3** (**Resolving the quartic**). Let $R\colon\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$ be the map that lifts to $\tilde{R}\colon\mathop{\mathrm{PConf}}_4\mathbb{C}\to\mathop{\mathrm{PConf}}_3\mathbb{C}$ given by $$\tilde{R}(x_1,x_2,x_3,x_4) \coloneqq (x_1x_4+x_2x_3,x_1x_3+x_2x_4,x_1x_2+x_3x_4).$$ It is clear that $\tilde{R}$ defines a map $\mathop{\mathrm{PConf}}_4\mathbb{C}\to\mathbb{C}^3$, but it is quite remarkable that its image is contained in $\mathop{\mathrm{PConf}}_3\mathbb{C}$. Indeed, the cross ratio of the three points $\tilde{R}(x_1,x_2,x_3,x_4)$ and $\infty$ is equal to the cross ratio of $x_1,x_2,x_3$, and $x_4$. Farb conjectured that, up to affine equivalence, any holomorphic map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ with $n\geq4$ and $m\geq3$ is either a root map, the identity, or a map that factors through the resolving quartic map $R\colon\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$ [@Far23 Conjecture 2.1].
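This cross-ratio identity, and hence the fact that $\tilde{R}$ lands in $\mathop{\mathrm{PConf}}_3\mathbb{C}$, can be checked numerically. In the sketch below (our own illustration), `cross_ratio` uses the convention $(a,b;c,d)\mapsto\frac{(a-c)(b-d)}{(b-c)(a-d)}$, which degenerates to $(y_1-y_3)/(y_2-y_3)$ when $d=\infty$:

```python
import itertools
import random

def resolvent(x1, x2, x3, x4):
    """The lift R~ of the resolving-the-quartic map."""
    return (x1 * x4 + x2 * x3, x1 * x3 + x2 * x4, x1 * x2 + x3 * x4)

def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

random.seed(0)
xs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
ys = resolvent(*xs)

# The three resolvent values are pairwise distinct: R~ lands in PConf_3(C).
assert all(abs(p - q) > 1e-9 for p, q in itertools.combinations(ys, 2))

# The cross-ratio of (y1, y2; y3, oo) equals the cross-ratio of (x1, x2; x3, x4).
assert abs((ys[0] - ys[2]) / (ys[1] - ys[2]) - cross_ratio(*xs)) < 1e-9

# Permuting the x_i merely permutes the y_i, so R~ descends to Conf_4(C).
key = lambda z: (round(z.real, 9), round(z.imag, 9))
assert sorted(map(key, resolvent(xs[1], xs[2], xs[0], xs[3]))) == sorted(map(key, ys))
```

The underlying algebraic identities, e.g. $y_1-y_3=(x_1-x_3)(x_4-x_2)$, show directly that distinct $x_i$ give distinct $y_i$.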
The aforementioned work of Lin [@Lin04a], and Chen and Salter [@CS23] confirms this is the case in the ranges $n\geq5$, $m\leq 2n$. Farb points out that this conjecture requires the assumption $n\geq4$. This is due to the existence of the following holomorphic maps. Consider the family of elliptic curves over $\mathop{\mathrm{Conf}}_3\mathbb{C}$ equipped with a planar embedding given by the equation $$\label{eq:elliptic-curves} y^2 = (x-x_1)(x-x_2)(x-x_3), \qquad \{x_1,x_2,x_3\}\in\mathop{\mathrm{Conf}}_3\mathbb{C}\tag{$\ast$}$$ For $k\geq2$, let *Jordan's totient function* $J_2(k)$ be the number of elementary $k$-torsion points on an elliptic curve, and let $m_k=J_2(k)/2$ if $k>2$, and $m_2=J_2(2)=3$. **Definition 4** (**Elliptic curve construction**). For $k\geq2$, let $\Psi_k\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{Conf}}_{m_k}\mathbb{C}$ be the holomorphic map given by $$\Psi_k(\{x_1,x_2,x_3\}) \coloneqq \left\{\begin{tabular}{l} $x$-coordinates of the elementary $k$-torsion \\[0.1em] points of $y^2=(x-x_1)(x-x_2)(x-x_3)$ \end{tabular}\right\}.$$ Although $\Psi_2\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$ is just the identity map, one can show $\Psi_3\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{Conf}}_4\mathbb{C}$ is a holomorphic map that does not lift to $\mathop{\mathrm{PConf}}_3\mathbb{C}\to\mathop{\mathrm{PConf}}_4\mathbb{C}$. We're now in a position to state our main theorem. We complete the classification of holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ for $m\leq n$. We also partially resolve Conjecture 2.1 of Farb [@Far23]. **Theorem 1**. 
*If $n\geq3$ and $m\in\{3,4\}$, then, up to affine equivalence, every holomorphic map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ is either* - *A root map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$* - *The identity map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_n\mathbb{C}$* - *$R\colon\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$* - *$\Psi_3\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{Conf}}_4\mathbb{C}$* - *$\Psi_3\circ R\colon\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_4\mathbb{C}$* It's worth noting that the composition $R\circ\Psi_3\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$ happens to be affine equivalent to a root map. Combined with the results of Lin [@Lin04a] and of Chen and Salter [@CS23], this completes the classification of holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ for $m\leq n$. Our second theorem concerns geometric constructions of elliptic curves. Let $\mathcal{M}_{g,n}$ be the moduli space of genus $g$ Riemann surfaces with $n$ unordered marked points. Then $\mathcal{M}_{1,1}\cong\mathop{\mathrm{SL}}_2\mathbb{Z}\backslash\mathbb{H}$ is the moduli space of elliptic curves. Let the *hyperelliptic map* $$H \colon \mathop{\mathrm{Conf}}_3\mathbb{C}\to \mathcal{M}_{1,1}$$ be the holomorphic map corresponding to the family of elliptic curves [\[eq:elliptic-curves\]](#eq:elliptic-curves){reference-type="eqref" reference="eq:elliptic-curves"}, forgetting the planar embedding. We think of $\mathcal{M}_{1,1}$ as a complex orbifold, so that holomorphic maps $Y\to\mathcal{M}_{1,1}$, where $Y$ is a complex manifold, bijectively correspond to isomorphism classes of families of elliptic curves over $Y$. We may forget the orbifold structure of $\mathcal{M}_{1,1}$ by using the $j$-invariant $j\colon\mathcal{M}_{1,1}\to\mathbb{C}$.
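Returning to the elliptic-curve construction, $\Psi_3$ can be computed explicitly: a point $P=(x,y)$ on $y^2=f(x)$ has order $3$ exactly when $x(2P)=x(P)$, which for a monic cubic $f$ with $x^2$-coefficient $a_2$ amounts to the vanishing of one standard form of the $3$-division polynomial, $\psi_3(x)=4f(x)(3x+a_2)-f'(x)^2$. A sketch using sympy; the sample configuration $\{0,1,3\}$ is our own choice:

```python
import itertools
import sympy as sp

x = sp.symbols('x')
x1, x2, x3 = 0, 1, 3                      # a sample point of Conf_3(C)
f = sp.expand((x - x1) * (x - x2) * (x - x3))
a2 = f.coeff(x, 2)

# Roots of psi_3 are the x-coordinates of the 8 points of order 3; they come in
# pairs (x, +-y), giving m_3 = J_2(3)/2 = 4 values, i.e. a point of Conf_4(C).
psi3 = sp.expand(4 * f * (3 * x + a2) - sp.diff(f, x) ** 2)
assert sp.degree(psi3, x) == 4

torsion_x = sp.Poly(psi3, x).nroots()
assert all(abs(complex(p) - complex(q)) > 1e-6
           for p, q in itertools.combinations(torsion_x, 2))
```

That $\Psi_2$ is the identity is visible here too: the $2$-torsion $x$-coordinates are just the roots $x_1,x_2,x_3$ of $f$.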
If $f,g\colon Y\to\mathcal{M}_{1,1}$ are holomorphic families of elliptic curves, then $j\circ f=j\circ g$ if and only if for all $y\in Y$ the fibers over $y$ in each family are isomorphic. Let $\Delta\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathbb{C}^*$ be the *discriminant* $$\Delta(\{x_1,\ldots,x_n\}) \coloneqq \prod_{i\neq j}(x_i-x_j).$$ We can affine twist by the discriminant by viewing $\mathbb{C}^*$ as a subgroup of $\mathop{\mathrm{Aff}}$. **Theorem 1**. *Let $f\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathcal{M}_{1,1}$ be a holomorphic family of elliptic curves over $\mathop{\mathrm{Conf}}_n\mathbb{C}$. Let $\mathop{\mathrm{id}}^{\Delta}$ be the affine twist of the identity on $\mathop{\mathrm{Conf}}_3\mathbb{C}$ by the discriminant $\Delta$. Then one of the following holds:* - *$j\circ f$ is constant,* - *$n=3$ and $f$ is $H$ or $H\circ\mathop{\mathrm{id}}^\Delta$,* - *$n=4$ and $f$ is $H\circ R$ or $H\circ\mathop{\mathrm{id}}^\Delta\circ R$.* Although $j\circ H=j\circ H\circ\mathop{\mathrm{id}}^{\Delta}$, the families $H$ and $H\circ\mathop{\mathrm{id}}^{\Delta}$ have different monodromy. A similar phenomenon occurs in a theorem of Chen and Salter. They prove that if $\mathcal{M}_g\coloneqq\mathcal{M}_{g,0}$, then the hyperelliptic family $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathcal{M}_{\lfloor(n-1)/2\rfloor}$ is the only non-trivial holomorphic family $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathcal{M}_g$ for $n\geq26$ and $g\leq n-2$ up to precomposing with $\mathop{\mathrm{id}}^\Delta\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_n\mathbb{C}$, see [@CS23 §3.5]. There is also a hyperelliptic family of genus one curves over $\mathop{\mathrm{Conf}}_4\mathbb{C}$, given by the equation $$y^2=(x-x_1)(x-x_2)(x-x_3)(x-x_4).$$ By taking the Jacobian of the underlying genus one curves in this family, we obtain a family $\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathcal{M}_{1,1}$ of elliptic curves. 
One can show that this is isomorphic to the family $H\circ R$. To prove we first classify the homomorphisms between the relevant fundamental groups. The *braid group on $n$ strands of $X$*, where $X$ is a topological space, is the fundamental group $$B_n(X) \coloneqq \pi_1(\mathop{\mathrm{Conf}}_nX).$$ The *braid group on $n$ strands* is $B_n\coloneqq B_n(\mathbb{C})$. Lin classified the homomorphisms $B_n\to B_m$ for $n\geq5$ and $m\leq n$ [@Lin04b]. These results were extended by Castel who handled the cases $n\geq6$, $m\leq n+1$ [@Cas16], and finally Chen, Kordek, and Margalit extended this to $n\geq5$, $m\leq 2n$ [@CKM23]. Chen, Kordek, and Margalit's classification is in terms of a natural equivalence relation on the set of homomorphisms $B_n\to B_m$ that we define in . This equivalence relation is roughly analogous to affine equivalence for holomorphic maps. They show that if $n\geq5$ and $m\leq 2n$, then there are at most five equivalence classes of homomorphisms $B_n\to B_m$, and they give explicit representatives of these equivalence classes [@CKM23 Theorem 1.1]. We prove the following theorem which stands in stark contrast to these results. The special case $m=n=3$ is a conjecture of Castel [@Cas09 Conjecture 14.2]. **Theorem 1**. *If $n\in\{3,4\}$ and $m\geq3$, then there are infinitely many equivalence classes of homomorphisms $B_n\to B_m$.* The ubiquity of homomorphisms in these cases reflects that previously established classification methods break down in the cases we consider, both in the group theoretic and holomorphic settings. Indeed, Chen and Salter's classification [@CS23 Theorem 3.1] uses techniques from Teichmüller theory and complex analysis to show that at most two of Chen, Kordek, and Margalit's five equivalence classes can be represented by holomorphic maps. Since both of these are in fact represented by holomorphic maps, they are able to apply a theorem of Imayoshi and Shiga [@IS88] to complete their classification. 
Our proofs require, in addition to the techniques of Chen and Salter, a well-known consequence of work of Bers [@Ber78], which concerns the monodromy of families of Riemann surfaces over a punctured disk, see . This fact imposes a constraint on homomorphisms $B_n\to B_m$ that can arise from a holomorphic map. When $n\geq5$ this constraint is automatically satisfied for all known homomorphisms with non-cyclic image. However, when $n\in\{3,4\}$ there are infinitely many inequivalent examples where it is not satisfied. To prove we classify the homomorphisms $B_n\to B_m$ for $m\in\{3,4\}$ that satisfy the constraint imposed by and the other constraints used by Chen and Salter [@CS23]. Our methods rely on special properties of the braid groups $B_3$ and $B_4$. To handle homomorphisms $B_n\to B_4$ we solve a delicate equation in the free group of rank 2 --- this is the sole purpose of . To prove , many of the considerations for also apply, since the orbifold fundamental group $\pi_1^{\mathop{\mathrm{orb}}}(\mathcal{M}_{1,1})\cong\mathop{\mathrm{SL}}_2\mathbb{Z}$ is a central quotient of $B_3$. ## Structure of the paper In we classify homomorphisms $B_3\to\mathop{\mathrm{PSL}}_2\mathbb{C}$, and use this to classify homomorphisms $B_n\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ and $B_n\to B_3$. We discuss Nielsen--Thurston theory in and use it in to prove , in particular proving a conjecture of Castel. In we use the Nielsen--Thurston classification theorem to classify the homomorphisms $B_3\to B_4$ and $B_4\to B_4$. In we prove a property of a particular automorphism of the free group of rank 2. We use many of our group theoretic results, together with results on holomorphic maps we discuss in , to prove . ## Acknowledgements We especially thank Benson Farb for suggesting the use of the Nielsen--Thurston classification theorem, pointing out some important applications of Teichmüller theory, and for extensive comments. 
We also thank Lei Chen and Nick Salter whose work inspired this paper. Furthermore, we thank all the previously mentioned people as well as Ishan Banerjee, Rose Elliott Smith, Trevor Hyde, Eduard Looijenga, Dan Margalit, Joshua Mundinger, Carlos A. Serván, Claire Voisin, and Nicholas Wawrykow for their helpful comments and discussions. # Homomorphisms $B_n\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ and $B_n\to B_3$ {#sec:homfromBn} We begin by establishing some relevant facts about the braid groups. Artin's presentation of $B_n$ is $$\begin{aligned} \label{eq:Bn-presentation} \begin{split} B_n = \langle \sigma_1,\ldots,\sigma_{n-1} \mid{}& \sigma_i\sigma_{i+1}\sigma_i = \sigma_{i+1}\sigma_i\sigma_{i+1}, \quad 1\leq i < n-1 \\ & \sigma_i\sigma_j=\sigma_j\sigma_i, \quad 1\leq i<j-1<n-1 \rangle. \end{split}\end{aligned}$$ The holomorphic map $R\colon\mathop{\mathrm{Conf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_3\mathbb{C}$ induces the homomorphism $R_*\colon B_4\twoheadrightarrow B_3$ given by $\sigma_1,\sigma_3\mapsto\sigma_1$ and $\sigma_2\mapsto\sigma_2$. From Artin's presentation it can be seen that $B_3=\langle\alpha,\beta\mid\alpha^3=\beta^2\rangle$, where $\alpha=\sigma_1\sigma_2$ and $\beta=\sigma_1\sigma_2\sigma_1$. Hence $Z(B_3)=\langle\alpha^3\rangle=\langle\beta^2\rangle$, and there is a short exact sequence $$1 \to Z(B_3) \to B_3 \to \mathop{\mathrm{PSL}}_2\mathbb{Z}\to 1.$$ Bearing these facts in mind, we can state the two main theorems to be shown in this section. **Theorem 1**. *If $\varphi\colon B_n\to \mathop{\mathrm{PSL}}_2\mathbb{Z}$ is a homomorphism, then the following hold.* 1. *If $n\geq5$, then $\varphi$ is cyclic.* 2. *If $n=4$, then $\varphi$ factors through $R_*\colon B_4\twoheadrightarrow B_3$.* 3. *If $n=3$ and $\varphi$ is non-cyclic, then $\varphi$ descends to an endomorphism of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$.* Since $\mathop{\mathrm{PSL}}_2\mathbb{Z}\cong\mathbb{Z}/2*\mathbb{Z}/3$, it is possible to classify its many endomorphisms, see . 
This gives a classification of homomorphisms $B_n\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$. We also classify the homomorphisms $B_3\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ which send $\sigma_1$ and $\sigma_2$ to parabolic elements of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$. This additional constraint will be relevant in where we classify holomorphic maps. We write $\begin{bsmallmatrix}a&b\\c&d\end{bsmallmatrix}$ for the image of $\begin{psmallmatrix}a&b\\c&d\end{psmallmatrix}\in\mathop{\mathrm{SL}}_2\mathbb{Z}$ in $\mathop{\mathrm{PSL}}_2\mathbb{Z}$. **Theorem 1**. *If $\varphi\colon B_3\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ is a non-cyclic homomorphism with $\varphi(\sigma_1)$ and $\varphi(\sigma_2)$ parabolic, then it is given by composing the quotient map $B_3\twoheadrightarrow B_3/Z(B_3)$, with some isomorphism $B_3/Z(B_3)\cong\mathop{\mathrm{PSL}}_2\mathbb{Z}$. In other words, up to automorphisms of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$ it is given by $$\varphi(\sigma_1) = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad \varphi(\sigma_2) = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}.$$* Note that the quotient map $B_3\twoheadrightarrow\mathop{\mathrm{PSL}}_2\mathbb{Z}$ lifts to a surjection $H_*\colon B_3\twoheadrightarrow\pi_1^{\mathop{\mathrm{orb}}}(\mathcal{M}_{1,1})\cong\mathop{\mathrm{SL}}_2\mathbb{Z}$ given by $$\sigma_1 \mapsto \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad \sigma_2 \mapsto \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}.$$ In we will address the implications of for homomorphisms $B_n\to B_3$. A homomorphism $\varphi\colon G\to H$ is *cyclic* if the image $\varphi(G)$ is a cyclic subgroup of $H$. We will use the following extensively. **Lemma 1** ([@For96 Corollary 2]). *Let $\varphi\colon B_n\to G$ be a homomorphism. Then either of the following conditions imply that $\varphi$ is cyclic.* 1. *$\varphi(\sigma_i)$ commutes with $\varphi(\sigma_{i+1})$ for some $1\leq i<n-1$.* 2. 
*$n\geq5$, and $\varphi(\sigma_i)=\varphi(\sigma_j)$ for some $1\leq i<j<n$.* ## $B_3\to\mathop{\mathrm{PSL}}_2\mathbb{C}$ and $B_3\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ {#sec:B3toPSL} **Lemma 1**. *If $\varphi\colon B_3\to\mathop{\mathrm{PSL}}_2\mathbb{C}$ is non-cyclic, then $Z(B_3)\leq\ker(\varphi)$. In particular $\varphi$ descends to a homomorphism $\mathop{\mathrm{PSL}}_2\mathbb{Z}\to\mathop{\mathrm{PSL}}_2\mathbb{C}$.* *Proof.* Write $a=\varphi(\sigma_1\sigma_2)$ and $b=\varphi(\sigma_1\sigma_2\sigma_1)$, so that $a^3=b^2$. It suffices to show that either $a$ and $b$ commute, or $a^3=b^2=I$. Non-identity powers of a given Möbius transformation have the same fixed point set in $\mathbb{CP}^1$. Hence, if $a^3=b^2$ is not the identity, then $a$ and $b$ both have the same fixed point set, and thus $a$ and $b$ commute by [@Bea83 Theorem 4.3.5(ii)]. ◻ **Remark 1**. Since $H^2(B_3;\mathbb{Z}/2)=0$ [@Fuk70], any homomorphism $B_3\to\mathop{\mathrm{PSL}}_2\mathbb{C}$ lifts to a representation $B_3\to\mathop{\mathrm{SL}}_2\mathbb{C}$. Formanek classified the two-dimensional irreducible representations of $B_3$ [@For96 Theorem 11]. From that classification one can easily check that $Z(B_3)$ has image contained in $\{\pm I\}$ under any irreducible representation $B_3\to\mathop{\mathrm{SL}}_2\mathbb{C}$. However also handles the reducible indecomposable representations $B_3\to\mathop{\mathrm{SL}}_2\mathbb{C}$. The following proposition classifies the endomorphisms of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$. **Proposition 1**. *Let $a$, $b$ denote the images of $\alpha=\sigma_1\sigma_2$, $\beta=\sigma_1\sigma_2\sigma_1$ respectively under the quotient map $B_3\twoheadrightarrow\mathop{\mathrm{PSL}}_2\mathbb{Z}$, so that $\mathop{\mathrm{PSL}}_2\mathbb{Z}=\langle a\rangle*\langle b\rangle$ and $a^3=b^2=1$. 
Then the non-cyclic endomorphisms of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$ are precisely those of the form $$a \mapsto ga^{\pm1}g^{-1}, \quad b \mapsto hbh^{-1},$$ for some group elements $g,h\in\mathop{\mathrm{PSL}}_2\mathbb{Z}$.* *Proof.* By the free product description of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$ all maps above extend to endomorphisms. Since the order of $a$ and the order of $b$ are coprime integers, Kurosh's theorem [@Kur34] implies that any endomorphism must carry $\langle a\rangle$ into a conjugate subgroup $g\langle a\rangle g^{-1}$, and also $\langle b\rangle$ to a conjugate subgroup $h\langle b\rangle h^{-1}$. Hence these are the only possibilities. ◻ We now describe the non-cyclic homomorphisms $B_3\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ where the images of the standard generators are parabolic. *Proof of .* Without loss of generality, we may assume that $\varphi(\sigma_1)=\begin{bsmallmatrix}1&x\\0&1\end{bsmallmatrix}$ for some integer $x>0$. Let $\eta\in\mathbb{P}^1(\mathbb{Q})=\mathbb{Q}\cup\{\infty\}$ be the unique fixed point of $\varphi(\sigma_2)$. Since $\varphi$ is non-cyclic, $\eta\neq\infty$. Let $T_\eta\coloneqq\begin{bsmallmatrix}1&\eta\\0&1\end{bsmallmatrix}$, and define $\psi\colon B_3\to\mathop{\mathrm{PSL}}_2\mathbb{Q}$ by $\psi(g)=T_\eta^{-1}\cdot\varphi(g)\cdot T_\eta$. Note that $\psi(\sigma_2)$ fixes $0\in\mathbb{P}^1(\mathbb{Q})$, hence its top right entry is zero. We can directly compute the remaining entries in terms of $\varphi(\sigma_2)=\begin{bsmallmatrix} a & \ast \\ c & d \end{bsmallmatrix}$ to be $$\psi(\sigma_2) = \begin{bmatrix} a - c\eta & 0 \\ c & d + c\eta \end{bmatrix}.$$ Since $\det(\psi(\sigma_2))=1$ and $a,d\in\mathbb{Z}$, the rational number $t\coloneqq c\eta$ satisfies $(a-t)(d+t)=1$, that is, $t^2-(a-d)t+(1-ad)=0$; as a rational root of a monic integer polynomial, $t$ must be an integer, so $c\eta\in\mathbb{Z}$. Hence $\psi(\sigma_2)=\begin{bsmallmatrix}1&0\\y&1\end{bsmallmatrix}$, where $y=\pm c\in\mathbb{Z}$ is non-zero.
The braid relation implies $$\begin{gathered} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ y & 1 \end{bmatrix} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ y & 1 \end{bmatrix} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ y & 1 \end{bmatrix} \\[0.5em] \implies xy + 1 = 0 \implies (x,y) = (1,-1). \end{gathered}$$ Hence $c=\pm1$, and thus $\eta\in\mathbb{Z}$. Thus $T_\eta\in\mathop{\mathrm{PSL}}_2\mathbb{Z}$, and $\psi\colon B_3\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ has the required form, completing the proof. ◻ ## $B_n\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ {#sec:BntoPSL} We first prove a useful lemma about $\mathop{\mathrm{PSL}}_2\mathbb{Z}$, by viewing it as a subgroup of $\mathop{\mathrm{Isom}}^+(\mathbb{H})\cong\mathop{\mathrm{PSL}}_2\mathbb{R}$. **Lemma 1**. *If two elements of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$ commute and are conjugate, then they must be either equal or inverses.* *Proof.* If both are hyperbolic they have the same translation axis and translation length. If they are both elliptic they have a common fixed point and the same unsigned angle of rotation. If they are both parabolic we may assume they both fix $\infty\in\mathbb{P}^1(\mathbb{Q})$. Hence both are powers of $\begin{bsmallmatrix} 1 & 1 \\ 0 &1 \end{bsmallmatrix}$, thus equal since they are conjugate in $\mathop{\mathrm{PSL}}_2\mathbb{Z}$. In all three cases they must be equal or inverses. ◻ *Proof of .* implies (iii). It follows from (ii) that (ii)$\implies$(i). Suppose then that $n=4$. By , $\varphi(\sigma_1)$ and $\varphi(\sigma_3)$ are either equal or inverses. By if $\varphi$ is non-cyclic, then $\varphi(\sigma_i)$ and $\varphi(\sigma_{i+1})$ are mapped to the same generator of the abelianization $\mathbb{Z}/6\mathbb{Z}$ of $\mathop{\mathrm{PSL}}_2\mathbb{Z}$, for $i=1,2$. This rules out the possibility that $\varphi(\sigma_1)$ and $\varphi(\sigma_3)$ are inverses. 
Hence $\varphi(\sigma_1)=\varphi(\sigma_3)$, which proves (ii). ◻ ## Lifts to $B_n\to B_3$ {#sec:BntoB3} Throughout this paper we will denote the centralizer of a group $H$ in a group $G$ by $C_G(H)$, and $C_G(g)\coloneqq C_G(\langle g\rangle)$ for $g\in G$. **Definition 5** (**Transvection**). Let $\varphi\colon G\to H$ be a homomorphism. If $t\colon G\to C_H(\varphi(G))$ is a cyclic homomorphism, then the *transvection of $\varphi$ by $t$* is the homomorphism $\varphi^t\colon G\to H$ defined by $$\varphi^t(g) = \varphi(g)t(g).$$ **Theorem 1**. *Let $\varphi\colon B_n\to B_3$ be a homomorphism.* 1. *If $n\geq5$, then $\varphi$ is cyclic.* 2. *If $n=4$, then $\varphi$ factors through $R_*$.* 3. *If $n=3$ and $\varphi$ is non-cyclic, then up to a transvection by a homomorphism $B_3\to Z(B_3)$ and post-composing by an automorphism of $B_3$, the map $\varphi$ has the form $$\alpha \mapsto g\alpha g^{-1}, \quad \beta \mapsto \beta,$$ for some $g\in B_3$.* *Proof.* If we post-compose the above homomorphisms with the quotient map $B_3\twoheadrightarrow B_3/Z(B_3)\cong\mathop{\mathrm{PSL}}_2\mathbb{Z}$, then we obtain all homomorphisms classified in . Furthermore, any two homomorphisms $B_n\to B_3$ that lift the same homomorphism $B_n\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ can only differ by a transvection $B_n\to Z(B_3)$. Hence this is a complete list of homomorphisms $B_n\to B_3$. ◻ Dyer and Grossman [@DG81] have shown that $\mathop{\mathrm{Aut}}(B_n)=\mathop{\mathrm{Inn}}(B_n)\rtimes\langle\iota\rangle$ for $n\geq2$, where $\iota\colon B_n\to B_n$ is the inversion automorphism given by $\sigma_i\mapsto\sigma_i^{-1}$ for $i=1,\ldots,n-1$, so this gives a complete classification of the homomorphisms $B_n\to B_3$. The case $n\geq5$ is a special case of a theorem of Lin [@Lin04b Theorem 3.1], and the case $n=4$ is already known by Chen, Kordek, and Margalit [@CKM23 Proposition 8.6]. The case $n=3$ is stated by Orevkov [@Ore22 Proposition 1.8] without proof.
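The matrix facts used in this section are easy to verify by direct computation. The sketch below (ours, using sympy) checks that the matrices of $H_*$ satisfy the braid relation with $\alpha^3=\beta^2=-I$, and redoes the symbolic computation showing that the braid relation for the parabolic matrices $\begin{bsmallmatrix}1&x\\0&1\end{bsmallmatrix}$ and $\begin{bsmallmatrix}1&0\\y&1\end{bsmallmatrix}$ forces $xy+1=0$:

```python
import sympy as sp

S1 = sp.Matrix([[1, 1], [0, 1]])   # image of sigma_1 under H_*
S2 = sp.Matrix([[1, 0], [-1, 1]])  # image of sigma_2 under H_*

# Braid relation in SL_2(Z).
assert S1 * S2 * S1 == S2 * S1 * S2

# alpha = sigma_1 sigma_2 and beta = sigma_1 sigma_2 sigma_1 map to matrices
# with alpha^3 = beta^2 = -I, so Z(B_3) has image {+-I} and dies in PSL_2(Z).
alpha, beta = S1 * S2, S1 * S2 * S1
assert alpha ** 3 == beta ** 2 == -sp.eye(2)

# Generic parabolic generators: both off-diagonal entries of UVU - VUV are
# multiples of x*y + 1, so the braid relation forces x*y + 1 = 0.
xv, yv = sp.symbols('x y')
U = sp.Matrix([[1, xv], [0, 1]])
V = sp.Matrix([[1, 0], [yv, 1]])
diff = (U * V * U - V * U * V).expand()
assert diff[0, 1] == sp.expand(xv * (xv * yv + 1))
assert diff[1, 0] == sp.expand(-yv * (xv * yv + 1))
assert diff.subs({xv: 1, yv: -1}) == sp.zeros(2, 2)
```

In $\mathop{\mathrm{PSL}}_2$ one should also rule out $UVU=-VUV$; comparing diagonal entries there again forces $xy=-1$ and then $x=0$, so only the case checked above survives.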
Our proof of Theorem [Theorem 1](#thm:BntoG){reference-type="ref" reference="thm:BntoG"} is independent of these results. # Nielsen--Thurston theory {#sec:Nielsen-Thurston} The purpose of this section is twofold: to recall well-known facts about the braid group from the Nielsen--Thurston point of view, while also establishing the notation and conventions we use. There is a canonical isomorphism between the braid group $B_n$ and the mapping class group $\mathop{\mathrm{Mod}}(\mathbb{D}_n)$ of $\mathbb{D}_n$. Here $\mathbb{D}_n$ is a disk including its boundary with $n$ unordered marked points in the interior. Recall that the mapping class group $\mathop{\mathrm{Mod}}(S)$ of a surface $S$ is the group of isotopy classes of orientation-preserving homeomorphisms of $S$ that preserve its marked points as a set, and fix each boundary component pointwise. The Nielsen--Thurston classification theorem states that every element of a mapping class group, hence also every braid, is either reducible, periodic, or pseudo-Anosov [@Thu88]. We will outline what this means for the braid group $B_n$. For more detail, see [@FM12 Chapters 9 and 13]. Our strategy for classifying homomorphisms $B_n\to B_4$ involves considering the possible Nielsen--Thurston types of the image of a generator of $Z(B_n)$ in $B_4$, and its possible canonical reduction systems. ## Canonical reduction systems A *simple closed curve* in $\mathbb{D}_n$ is an isotopy class of smooth maps $S^1\to\mathbb{D}_n$ without self-intersection that avoid the $n$ marked points. For convenience, we do not consider null-homotopic curves to count as simple closed curves. The braid group $B_n$ acts on the set of simple closed curves in $\mathbb{D}_n$. A *multicurve* is a set of disjoint simple closed curves. For our purposes, it will be convenient to allow multicurves to contain non-essential simple closed curves, i.e., curves that are isotopic to a marked point or boundary component.
We say a multicurve is *nontrivial* if it contains at least one essential component, and we say it is *essential* if all its components are essential. A *reduction system* of a braid $g\in B_n$ is an essential multicurve preserved by $g$. The *canonical reduction system* $\mathop{\mathrm{crs}}(g)$ of $g$ is the intersection of all maximal reduction systems of $g$. In the special case of the surface $\mathbb{D}_n$ there is one additional notion we may define: the *outer canonical reduction system* $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(g)$. We define this to be the outermost loops in the canonical reduction system, together with a small loop surrounding every marked point not already encircled by a component of the canonical reduction system. Note that $\mathop{\mathrm{crs}}(ghg^{-1})=g\cdot\mathop{\mathrm{crs}}(h)$ and $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(ghg^{-1})=g\cdot\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(h)$. In particular, commuting elements preserve each other's (outer) canonical reduction systems. The outer canonical reduction systems in $\mathbb{D}_n$, up to the action of $B_n$, are in bijection with the partitions of $n$: the partition associated to $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(g)$ is obtained by recording the number of marked points each component of $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(g)$ encircles.
The canonical reduction system of such a product is the set of all $\gamma_i$ that are not isotopic to a boundary component. If $\varphi\colon G\to B_m$ is a homomorphism, we say it is *reducible* if some essential multicurve is preserved by $\varphi(G)$. If $G=B_n$ and $z$ is a generator of $Z(B_n)$, then $\varphi$ is reducible if and only if $\varphi(z)$ is. For this reason we define $\mathop{\mathrm{crs}}(\varphi)\coloneqq\mathop{\mathrm{crs}}(\varphi(z))$ and $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(\varphi)\coloneqq\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(\varphi(z))$. We will need to work with the stabilizer $B_\Gamma$ in $B_n$ of an outer canonical reduction system $\Gamma\coloneqq\{\gamma_1,\ldots,\gamma_m\}$ in $\mathbb{D}_n$. If we collapse the disks bounded by each $\gamma_i$ to a point, then we obtain a disk $\mathbb{D}_m$ whose $i$th marked point can be labelled with the number $n_i$ of marked points originally encircled by $\gamma_i$. Given a labelling $L$ of the marked points in $\mathbb{D}_m$, let $B_L$ be the subgroup of $B_m$ preserving this labelling. Groups of the form $B_L$ are *mixed braid groups*. The labelling $L$ determines a partition of the $m$ marked points. If $k_1,\ldots,k_\ell$ are the sizes of the parts of this partition, then these integers determine the isomorphism type of $B_L$, and we write $B_L=B_{k_1,\ldots,k_\ell}$. If a labelling is obtained by the above construction collapsing the disks bounded by components of $\Gamma$, then denote the labelling by $[\Gamma]$. See for an example. For example, if $\Gamma\coloneqq\{\gamma_1,\ldots,\gamma_5\}$ is the outer canonical reduction system in , then $B_{[\Gamma]}\cong B_{1,2,1,1}<B_5$. **Lemma 1** ([@KM22 Lemma 3.1]). *The short exact sequence $$1\to (B_{n_1}\times\cdots\times B_{n_m})\to B_\Gamma\to B_{[\Gamma]}\to1$$ obtained by collapsing the disks bounded by $\gamma_i$ to a point, has a section. 
We obtain a semi-direct product $B_\Gamma \cong (B_{n_1}\times\cdots\times B_{n_m})\rtimes B_{[\Gamma]}$. The action of $B_{[\Gamma]}$ on $B_{n_1}\times\cdots\times B_{n_m}$ factors through the homomorphism $B_{[\Gamma]}\to S_m$ and is given by permuting the factors.* ## Periodic braids A braid in $B_n$ is *periodic* if it has finite order in $B_n/Z(B_n)$. Let $$\alpha_n\coloneqq\sigma_1\cdots\sigma_{n-1}, \quad \beta_n\coloneqq\alpha_n\sigma_1, \quad z_n\coloneqq\alpha_n^n=\beta_n^{n-1}.$$ It is a result of Chow that $Z(B_n)$ is infinite cyclic, generated by $z_n$ [@Cho48]. We will continue to write $\alpha=\alpha_3$ and $\beta=\beta_3$. The following proposition follows from work of Kerékjártó and Eilenberg. **Proposition 1** ([@Ker19; @Eil34]). *The periodic elements of $B_n$ are all conjugate to a power of either $\alpha_n$ or $\beta_n$.* González-Meneses and Wiest attribute the computation of centralizers of non-central periodic braids below to Bessis, Digne and Michel [@BDM02]. **Proposition 1**. *[@GW04 Theorems 3.2 and 3.4] [\[prop:periodic\]]{#prop:periodic label="prop:periodic"} * 1. *Let $d=\gcd(k,n)$, and suppose that $d<n$. The centralizer of $\alpha_n^k$ in $B_n$ is isomorphic to the mixed braid group $B_{d,1}$.* 2. *Let $d=\gcd(k,n-1)$, and suppose that $d<n-1$. The centralizer of $\beta_n^k$ in $B_n$ is isomorphic to $B_{d,1}$.* ## Pseudo-Anosov braids We will not define pseudo-Anosov braids here; we refer to [@FM12 Chapter 13] for a definition. By the Nielsen--Thurston classification theorem, they are precisely the non-periodic irreducible braids. The centralizers of pseudo-Anosov braids are well understood: **Proposition 1** ([@GW04 Proposition 4.1]).
*The centralizer in $B_n$ of a pseudo-Anosov braid is isomorphic to $\mathbb{Z}^2$.* By Lemma [Lemma 1](#lem:cyclic-homomorphism){reference-type="ref" reference="lem:cyclic-homomorphism"} this implies that if $\rho\colon B_n\to B_m$ is a homomorphism and $\rho(z_n)$ is pseudo-Anosov, then $\rho$ has cyclic image. # Equivalence classes of homomorphisms {#sec:equivalence-classes} Following [@CKM23], we define an equivalence relation on the set of homomorphisms $G\to H$ as follows: **Definition 6** (**Equivalence of homomorphisms**). Two homomorphisms $G\to H$ are *equivalent* if they are related by post-composition by an automorphism of $H$ and a sequence of transvections. The goal of this section is to prove , namely that there are infinitely many equivalence classes of homomorphisms $B_n\to B_m$ when $n\in\{3,4\}$ and $m\geq3$. We will also give an explicit list of the equivalence classes of homomorphisms $B_n\to B_3$. To do this, we first compute the centralizers of the non-cyclic homomorphisms from . ## Centralizers We will repeatedly use the following result of González-Meneses and Wiest. **Lemma 1** ([@GW04 Propositions 3.3 and 3.5]). * * 1. *The centralizer of $\alpha_n$ in $B_n$ is $\langle\alpha_n\rangle$.* 2. *The centralizer of $\beta_n$ in $B_n$ is $\langle\beta_n\rangle$.* **Lemma 1**. *If $\rho$ is a non-cyclic endomorphism of $B_3$, then the centralizer of $\rho(B_3)$ in $B_3$ is the center $Z(B_3)$.* *Proof.* By it suffices to show that $C_{B_3}(g\alpha g^{-1})\cap C_{B_3}(\beta)=Z(B_3)$ for all $g\in B_3$. The image of $g\alpha g^{-1}$ in $S_3$ is a 3-cycle, and the image of $\beta$ is a 2-cycle. Therefore, the intersection of $C_{B_3}(g\alpha g^{-1})=\langle g\alpha g^{-1}\rangle$ with $C_{B_3}(\beta)=\langle\beta\rangle$ is contained in, and hence equal to, $\langle (g\alpha g^{-1})^3 \rangle = \langle\beta^2 \rangle = Z(B_3)$.
◻ ## Reduction to endomorphisms of $B_3$ Next we show that Castel's conjecture, the $n=m=3$ case of , implies all other cases of . **Lemma 1**. *Inequivalent endomorphisms of $B_3$ remain inequivalent as homomorphisms after either one or both of the following operations* 1. *Post-composition with the standard inclusion $B_3\hookrightarrow B_m$* 2. *Pre-composition with $R_*\colon B_4\twoheadrightarrow B_3$.* *Proof.* Since $R_*$ is surjective, (ii) doesn't collapse equivalence classes of homomorphisms. So it suffices to show (i) doesn't collapse equivalence classes of endomorphisms of $B_3$. Suppose that $\varphi\colon B_3\to B_m$ is obtained from a non-cyclic endomorphism of $B_3$ by postcomposing with the standard inclusion $B_3\hookrightarrow B_m$. Let $t\colon B_3\to B_{1,m-3}$ be a cyclic homomorphism. View $B_3\times B_{1,m-3}$ as a subgroup of $B_m$, namely the stabilizer of the simple closed curve $\gamma$ surrounding three marked points, cf. . We claim $C_{B_m}(\varphi^t(B_3))\leq Z(B_3)\times B_{1,m-3}$. First note that $\varphi$ and $\varphi^t$ agree on $[B_3,B_3]$. By the classification of endomorphisms of $B_3$, $\varphi(B_3)\leq B_3$ surjects onto $S_3$, hence $\varphi^t([B_3,B_3])=\varphi([B_3,B_3])$ surjects onto $A_3$. If $g\in\varphi^t([B_3,B_3])$ has non-trivial image in $A_3$, then its canonical reduction system in $\mathbb{D}_m$ is $\{\gamma\}$. By a result of González-Meneses and Wiest [@GW04 Theorem 1.1], $C_{B_m}(g)=C_{B_3}(g)\times B_{1,m-3}$. In particular $C_{B_m}(\varphi^t(B_3))\leq B_3\times B_{1,m-3}$. Now implies $C_{B_m}(\varphi^t(B_3))\leq Z(B_3)\times B_{1,m-3}$. This implies by induction that repeated transvections of $\varphi$ can be realized by one transvection in $B_3$ together with one transvection by a cyclic homomorphism $t\colon B_3\to B_{1,m-3}\leq B_m$. Suppose that $\tau\in\mathop{\mathrm{Aut}}(B_3)$, $t\colon B_3\to B_{1,m-3}$ is a cyclic homomorphism, and that $\tau(\varphi^t(B_3))\leq B_3\leq B_m$.
Since $\varphi^t([B_3,B_3])=\varphi([B_3,B_3])\leq B_3$ surjects onto $A_3$, it follows that $\tau$ must preserve $\gamma$. Thus $\tau$ restricts to an automorphism of $B_3\leq B_m$. This implies $t$ is trivial. Hence, if two homomorphisms $\varphi_1,\varphi_2\colon B_3\to B_3\hookrightarrow B_m$ are equivalent as homomorphisms $B_3\to B_m$, then they are also equivalent as endomorphisms of $B_3$. ◻ ## Equivalence classes of endomorphisms of $B_3$ Let $\rho_g$ be the endomorphism of $B_3$ defined by $\alpha\mapsto g\alpha g^{-1}$, $\beta\mapsto\beta$. By , every endomorphism of $B_3$ is equivalent to some $\rho_g$; hence it suffices to show that the $\rho_g$ represent infinitely many equivalence classes. The next lemma shows that these equivalence classes are in bijective correspondence with certain double cosets in $B_3$. **Lemma 1**. *The endomorphisms $\rho_g$ and $\rho_{g'}$ are equivalent if and only if the double cosets $\langle\beta\rangle g\langle\alpha\rangle$ and $\langle\beta\rangle g'\langle\alpha\rangle$ are equal.* *Proof.* First note that $\rho_g$ itself only depends on the coset $g\langle\alpha\rangle$. Conjugating by $\beta$ shows that its equivalence class only depends on the double coset $\langle\beta\rangle g\langle\alpha\rangle$. Conversely, suppose that $\rho_g$ is equivalent to $\rho_{g'}$. By , there exists a homomorphism $t\colon B_3\to Z(B_3)$ and $\tau\in\mathop{\mathrm{Aut}}(B_3)$ such that $$\label{eq:equivalent} \rho_{g'} = \tau \circ \rho_g^t.$$ By restricting both sides of [\[eq:equivalent\]](#eq:equivalent){reference-type="eqref" reference="eq:equivalent"} to $Z(B_3)$, we can conclude that $t$ is trivial and $\tau\in\mathop{\mathrm{Inn}}(B_3)$. By evaluating both sides of [\[eq:equivalent\]](#eq:equivalent){reference-type="eqref" reference="eq:equivalent"} at $\beta$, we see that $\tau$ fixes $\beta$. Hence $\tau\colon g\mapsto hgh^{-1}$ for some $h\in C_{B_3}(\beta)=\langle\beta\rangle$.
By evaluating both sides of [\[eq:equivalent\]](#eq:equivalent){reference-type="eqref" reference="eq:equivalent"} at $\alpha$ we find that $(g')^{-1}hg\in C_{B_3}(\alpha)=\langle \alpha \rangle$. Hence $\langle\beta\rangle g\langle\alpha\rangle=\langle\beta\rangle g'\langle\alpha\rangle$. ◻ Everything is now in place to prove Theorem [Theorem 1](#thm:equivalence-classes){reference-type="ref" reference="thm:equivalence-classes"}. *Proof of Theorem [Theorem 1](#thm:equivalence-classes){reference-type="ref" reference="thm:equivalence-classes"}.* By it suffices to show that there are infinitely many double cosets $\langle\beta\rangle g\langle\alpha\rangle$. Recall that $B_3/Z(B_3)\cong\langle a\rangle*\langle b\rangle\cong\mathbb{Z}/3*\mathbb{Z}/2$, where $a$ and $b$ are the images of $\alpha$ and $\beta$ respectively. By considering the usual normal form for elements of a free product, the double cosets $\langle b\rangle (ab)^n\langle a\rangle=\{b^i(ab)^na^j : i,j\in\mathbb{Z}\}$ are distinct as $n$ ranges over the positive integers. Therefore, the double cosets $\langle\beta\rangle(\alpha\beta)^n\langle\alpha\rangle$ are also all distinct as $n$ ranges over the positive integers. ◻ # Classification of homomorphisms $B_3\to B_4$ and $B_4\to B_4$ {#sec:B3toB3andB4} A major goal of this section is to prove the following theorem. **Theorem 1**. *The non-cyclic homomorphisms $B_3\to B_4$ are, up to a transvection $B_3\to Z(B_4)$ and post-composing by an automorphism of $B_4$, one of the following.* 1. *An endomorphism of $B_3$, followed by the standard inclusion $B_3\hookrightarrow B_4$.* 2. *Given by $\alpha\mapsto g\beta_4g^{-1}$, $\beta\mapsto\alpha_4^2$ for some $g\in B_4$.* We provide a proof based on the Nielsen--Thurston classification theorem and canonical reduction systems. These methods were suggested to us by Farb. The theorem is also stated by Orevkov [@Ore22 Proposition 1.9]. Although a proof is not provided, the methods suggested there are the same as our own.
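As a sanity check on case (ii), note that the assignment $\alpha\mapsto g\beta_4g^{-1}$, $\beta\mapsto\alpha_4^2$ respects the presentation $B_3\cong\langle\alpha,\beta\mid\alpha^3=\beta^2\rangle$ precisely because $\beta_4^3=\alpha_4^4=z_4$ in $B_4$. The sketch below (our own illustration, not part of the argument above) verifies this identity mechanically through the Artin action of $B_4$ on the free group $F_4$, given by $\sigma_i\colon x_i\mapsto x_ix_{i+1}x_i^{-1}$, $x_{i+1}\mapsto x_i$; since this representation is faithful, equality of the induced automorphisms is equality in $B_4$. All function names (`braid`, `sigma`, `apply_auto`) are our own.

```python
# Verify beta_4^3 = alpha_4^4 (= z_4) and the centrality of z_4 in B_4 via the
# faithful Artin action of B_4 on F_4. Free-group words are tuples of nonzero
# ints: i stands for the i-th generator x_i, and -i for its inverse.

def inv(w):
    return tuple(-g for g in reversed(w))

def concat(*ws):
    out = []
    for w in ws:
        for g in w:
            if out and out[-1] == -g:
                out.pop()        # free reduction
            else:
                out.append(g)
    return tuple(out)

N = 4  # number of strands

def sigma(i, sign):
    """Artin automorphism of F_N induced by the braid generator sigma_i^(±1)."""
    a = {j: (j,) for j in range(1, N + 1)}
    if sign > 0:
        a[i], a[i + 1] = (i, i + 1, -i), (i,)
    else:
        a[i], a[i + 1] = (i + 1,), (-(i + 1), i, i + 1)
    return a

def apply_auto(a, w):
    return concat(*[a[g] if g > 0 else inv(a[-g]) for g in w])

def braid(word):
    """Automorphism of F_N attached to a braid word over the alphabet ±1, ±2, ±3."""
    auto = {j: (j,) for j in range(1, N + 1)}
    for t in word:
        g = sigma(abs(t), t)
        auto = {j: apply_auto(auto, g[j]) for j in auto}  # compose
    return auto

alpha4 = [1, 2, 3]     # sigma_1 sigma_2 sigma_3
beta4 = [1, 2, 3, 1]   # alpha_4 sigma_1

# sanity check: the braid relation holds in this representation
assert braid([1, 2, 1]) == braid([2, 1, 2])
# beta_4^3 = alpha_4^4 = z_4, and z_4 is central in B_4
assert braid(beta4 * 3) == braid(alpha4 * 4)
z4 = alpha4 * 4
assert all(braid(z4 + [i]) == braid([i] + z4) for i in (1, 2, 3))
print("beta_4^3 = alpha_4^4 is central in B_4")
```

The same machinery can be used to check any proposed identity among specific braid words, at the cost of word length growing under repeated substitution.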
We also prove the following theorem in , which classifies the homomorphisms $B_3\to B_4$ satisfying certain conditions. In , these will be shown to be necessary conditions for a homomorphism to be induced by a non-root holomorphic map. **Theorem 1**. *The homomorphism $\Psi_{3*}\colon B_3\to B_4$ given by $$\Psi_{3*}(\sigma_1) = \sigma_1\sigma_2, \qquad \Psi_{3*}(\sigma_2) = \sigma_3\sigma_2,$$ is, up to automorphisms of $B_4$ and transvections $B_3\to Z(B_4)$, the only non-cyclic, irreducible homomorphism $\varphi\colon B_3\to B_4$ such that $\varphi(\sigma_1)$ has some positive power that is a multitwist.* A complete proof of will rely on the solution to an equation in the free group of rank 2, which is carried out in . Chen, Kordek, and Margalit prove that every endomorphism of $B_4$ is either a transvection of the identity, or factors through the homomorphism $R_*\colon B_4\twoheadrightarrow B_3$ [@CKM23 Theorem 8.4]. Together with , this implies the following classification of endomorphisms of $B_4$. **Theorem 1**. *The non-cyclic endomorphisms of $B_4$ are, up to a transvection $B_4\to Z(B_4)$ and post-composing by an automorphism of $B_4$, one of the following three types.* 1. *The identity.* 2. *The resolving quartic homomorphism $R_*\colon B_4\twoheadrightarrow B_3$, followed by an endomorphism of $B_3$, followed by the standard inclusion $B_3\hookrightarrow B_4$.* 3. *The resolving quartic homomorphism $R_*\colon B_4\twoheadrightarrow B_3$, followed by a homomorphism $B_3\to B_4$ given by $\alpha\mapsto g\beta_4g^{-1}$, $\beta\mapsto\alpha_4^2$ for some $g\in B_4$.* Let $\varphi\colon B_3\to B_4$ be a non-cyclic homomorphism, and let $Z(B_3)=\langle z_3\rangle$. The theorem follows from two lemmas that we prove in . ## The reducible case {#sec:reducible} **Lemma 1**. *If $\varphi$ is reducible, then $\varphi$ belongs to case (i) of .* *Proof of .* Suppose that $\varphi$ is reducible.
Up to the action of $B_4$ there are three possible values of $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(\varphi)$. These are depicted in , and they correspond to the three non-trivial partitions of 4: $2+2$, $2+1+1$, and $3+1$. We consider these three cases in turn. ### The partition $2+2$   **Claim:** If $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(\varphi)$ is $\Gamma\coloneqq\{\gamma_1,\gamma_2\}$ as depicted in up to the action of $B_4$, then $\varphi$ is cyclic. *Proof.* By , $B_\Gamma\cong(B_2\times B_2)\rtimes B_2\cong\mathbb{Z}^2\rtimes\mathbb{Z}$, where $n\in\mathbb{Z}$ acts on $\mathbb{Z}^2$ by swapping coordinates if $n$ is odd. We present the group $\mathbb{Z}^2\rtimes\mathbb{Z}$ as $\langle x,y,z \mid [x,y]=1,\ zx=yz,\ zy=xz\rangle$, where every element of this group can be written uniquely in the form $x^ay^bz^c$. Write $$\varphi(\sigma_i) = x^{a_i}y^{b_i}z^{c_i}, \quad i=1,2.$$ Since $\langle x,y,z\rangle/\langle x,y\rangle\cong\langle z\rangle$ and since $\sigma_1$ and $\sigma_2$ are conjugate, we see that $c_1=c_2$. If this common value $c$ is even, then $\varphi$ is cyclic since $\langle x,y,z^2\rangle$ is abelian. So suppose that $c$ is odd. Since $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$, we find $$x^{2a_1+b_2}y^{2b_1+a_2}z^{3c} = x^{2a_2+b_1}y^{2b_2+a_1}z^{3c}.$$ Hence $a_1=a_2$ and $b_1=b_2$, and so again $\varphi$ is cyclic. ◻ ### The partition $2+1+1$   **Claim:** If $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(\varphi)$ is $\Gamma\coloneqq\{\gamma_1,\gamma_2,\gamma_3\}$ as depicted in up to the action of $B_4$, then $\varphi$ is cyclic. *Proof.* By , $$B_\Gamma\cong(B_2\times B_1\times B_1)\rtimes B_{1,2}\cong\mathbb{Z}\times B_{1,2}.$$ So it suffices to show that any homomorphism $B_3\to B_{1,2}$ is cyclic. By (iii), the image of any non-cyclic homomorphism $B_3\to B_3$ surjects onto $S_3$. Since $B_{1,2}<B_3$ does not surject onto $S_3$, it follows that $\varphi$ is cyclic.
◻ ### The partition $3+1$   **Claim:** If $\mathop{\mathrm{\mathop{\mathrm{crs}}^\circ}}(\varphi)$ is $\Gamma\coloneqq\{\gamma_1,\gamma_2\}$ as depicted in up to the action of $B_4$, then $\varphi$ belongs to case (i) of . *Proof.* By , $$B_\Gamma\cong(B_3\times B_1)\rtimes B_{1,1}\cong B_3\times\mathbb{Z}.$$ The $B_3$ factor can be identified with the standard copy of $B_3<B_4$. The $\mathbb{Z}$ factor produced by the section in is generated by $z_3z_4^{-1}$, where $z_3=(\sigma_1\sigma_2)^3$ generates the center of $B_3$ and $z_4=(\sigma_1\sigma_2\sigma_3)^4$ generates the center of $B_4$. Hence $B_\Gamma=B_3\times Z(B_4)$, and so $\varphi$ is a composition of an endomorphism of $B_3$, followed by the standard inclusion, up to transvections by a homomorphism $B_3\to Z(B_4)$ and automorphisms of $B_4$. ◻ In conclusion, if $\varphi\colon B_3\to B_4$ is reducible and non-cyclic, then it belongs to case (i) of , which completes the proof of . ◻ ## The periodic case {#sec:other-cases} **Lemma 1**. *If $\varphi(z_3)$ is periodic, then $\varphi$ belongs to case (ii) of .* *Proof.* We consider two cases. ### Periodic but not central   **Claim:** If $\varphi(z_3)$ is periodic but not central, then $\varphi$ is cyclic. *Proof.* By , the centralizer of $\varphi(z_3)$ is isomorphic to $B_{d,1}$ where $d$ is a proper divisor of either 3 or 4, i.e., $d=1$ or $d=2$. If $d=1$, then $B_{d,1}\cong\mathbb{Z}$ and so $\varphi$ is cyclic. If $d=2$, then $B_{d,1}=B_{2,1}$. By , the image of any non-cyclic endomorphism of $B_3$ surjects onto $S_3$. Thus any homomorphism $B_3\to B_{2,1}$ is cyclic. Therefore $\varphi$ is cyclic. ◻ ### Central   **Claim:** If $\varphi(z_3)$ is central, then $\varphi$ belongs to case (ii) of . *Proof.* Since $\varphi(Z(B_3))\leq Z(B_4)$, $\varphi$ descends to a homomorphism $$\psi\colon B_3/Z(B_3) \to B_4/Z(B_4).$$ Let $a,b$ denote the images of $\alpha,\beta$ in $B_3/Z(B_3)$.
If $\psi(a)$ or $\psi(b)$ is trivial, then $\psi$ is cyclic, and hence $\varphi$ is cyclic, a contradiction. Hence $\psi(a)$ has order 3 and $\psi(b)$ has order 2. By , possibly after replacing $\varphi$ with $\iota_4\circ\varphi$, we find that $\psi(a)$ and $\psi(b)$ are conjugate to the images of $\beta_4$ and $\alpha_4^2$ in $B_4/Z(B_4)$, respectively. By the relation $\alpha^3=\beta^2$ it follows that, up to a transvection $B_3\to Z(B_4)$ and an automorphism of $B_4$, $\varphi$ belongs to case (ii) of . ◻ This completes the proof of , and hence . ◻ ## Proof of Theorem [Theorem 1](#thm:b3-to-b4-pseudoperiodic){reference-type="ref" reference="thm:b3-to-b4-pseudoperiodic"} {#sec:pseudo-periodic} We now classify the irreducible homomorphisms $\varphi\colon B_3\to B_4$ such that $\varphi(\sigma_1)$ and $\varphi(\sigma_2)$ have some powers which are multitwists. This will be used in . The following lemma imposes a strong constraint on the image of $\sigma_1$. **Lemma 1**. *Let $\varphi\colon B_3\to B_4$ be an irreducible, non-cyclic homomorphism such that $\varphi(\sigma_1)$ has a power that is a multitwist. Then there exist $\tau\in\mathop{\mathrm{Aut}}(B_4)$ and $t\colon B_3\to Z(B_4)$ such that $\tau\circ\varphi^t$ maps $\sigma_1\mapsto (\sigma_1\sigma_2)^{6k+1}$ for some $k\in\mathbb{Z}$.* *Proof.* Since $\varphi$ is irreducible and non-cyclic, by we may assume without loss of generality that $\varphi$ is given by $\alpha\mapsto g\beta_4g^{-1}$, $\beta\mapsto\alpha_4^2$ for some $g\in B_4$. Thus $\varphi(\sigma_1)=g\beta_4^{-1}g^{-1}\alpha_4^2$. Suppose that $\varphi(\sigma_1)$ preserves no essential simple closed curve. Then $\varphi(\sigma_1)$ is periodic. However, the image of $\varphi(\sigma_1)$ has order 6 in the abelianization $\mathbb{Z}/12\mathbb{Z}$ of $B_4/Z(B_4)$, whereas periodic elements have orders dividing 3 or 4 in $B_4/Z(B_4)$. Therefore, $\varphi(\sigma_1)$ preserves some essential simple closed curve.
Since $\varphi(\sigma_1)$ induces a permutation of the marked points that is a 3-cycle in $S_4$, up to conjugation in $B_4$ the canonical reduction system of $\varphi(\sigma_1)$ consists of the single curve $\gamma\coloneqq\gamma_1$ in . Using the isomorphism $B_{\{\gamma\}}\cong B_3\times B_{1,1}=B_3\times Z(B_4)$ given by we see that, up to transvections $B_3\to Z(B_4)$, some power of $\varphi(\sigma_1)$ is conjugate to a power of $\sigma_1\sigma_2$, as these are the only periodic elements of $B_3$ having order 3 in $B_3/Z(B_3)$. Since $\varphi(\sigma_1)$ has order 6 in the abelianization $\mathbb{Z}/12$ of $B_4/Z(B_4)$, it follows that $\varphi(\sigma_1)$ is conjugate to $(\sigma_1\sigma_2)^{6k+1}$ for some $k\in\mathbb{Z}$. ◻ It remains to determine the possible values of $\varphi(\sigma_2)\in B_4$ under the hypothesis that $\varphi(\sigma_1)=(\sigma_1\sigma_2)^{6k+1}$. To aid us in this we invoke the following result of Gassner [@Gas61]. **Theorem 1**. *The kernel of $R_*\colon B_4\to B_3$ is freely generated by $$x\coloneqq\sigma_1^{-1}\sigma_3, \quad \text{ and } \quad y\coloneqq x^{-1}\sigma_2^{-1}x\sigma_2.$$ The action of the subgroup $B_3<B_4$ on $\ker(R_*)$ by conjugation is given by $$\sigma_1^{-1}x\sigma_1 = x, \quad \sigma_2^{-1}x\sigma_2 = xy, \quad \sigma_1^{-1}y\sigma_1 = yx^{-1}, \quad \sigma_2^{-1}y\sigma_2 = y.$$* *Proof.* Gassner shows that $\ker(R_*)$ is freely generated by $x^{-1}=\sigma_3^{-1}\sigma_1$ and $yx^{-1}=\sigma_2\sigma_1\sigma_3^{-1}\sigma_2^{-1}$ [@Gas61 Theorem 7], and gives an equivalent description of the action of $B_3$ on these generators. ◻ Since the standard inclusion $B_3\hookrightarrow B_4$ is a section of $R_*$, the short exact sequence $$1 \to F_2 \to B_4 \xrightarrow{R_*} B_3 \to 1$$ splits, where $F_2$ is the free group generated by $x$ and $y$. The theorem thus gives an explicit description of the semi-direct product decomposition of $B_4=F_2\rtimes B_3$.
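The four conjugation formulas above can be checked mechanically. The sketch below (our own illustration, not part of Gassner's proof) encodes braids in $B_4$ as automorphisms of the free group $F_4$ via the Artin action $\sigma_i\colon x_i\mapsto x_ix_{i+1}x_i^{-1}$, $x_{i+1}\mapsto x_i$; since this representation is faithful, equality of the computed automorphisms is equality in $B_4$. The function names (`braid`, `sigma`, `apply_auto`) are our own.

```python
# Verify Gassner's conjugation formulas in B_4 through the faithful Artin action
# of B_4 on F_4. Free-group words are tuples of nonzero ints: i stands for the
# i-th generator x_i, and -i for its inverse.

def inv(w):
    return tuple(-g for g in reversed(w))

def concat(*ws):
    out = []
    for w in ws:
        for g in w:
            if out and out[-1] == -g:
                out.pop()        # free reduction
            else:
                out.append(g)
    return tuple(out)

N = 4  # strands

def sigma(i, sign):
    """Artin automorphism of F_N induced by the braid generator sigma_i^(±1)."""
    a = {j: (j,) for j in range(1, N + 1)}
    if sign > 0:
        a[i], a[i + 1] = (i, i + 1, -i), (i,)
    else:
        a[i], a[i + 1] = (i + 1,), (-(i + 1), i, i + 1)
    return a

def apply_auto(a, w):
    return concat(*[a[g] if g > 0 else inv(a[-g]) for g in w])

def braid(word):
    """Automorphism of F_N attached to a braid word over the alphabet ±1, ±2, ±3."""
    auto = {j: (j,) for j in range(1, N + 1)}
    for t in word:
        g = sigma(abs(t), t)
        auto = {j: apply_auto(auto, g[j]) for j in auto}  # compose
    return auto

X = [-1, 3]                  # x = sigma_1^{-1} sigma_3
Y = [-3, 1, -2, -1, 3, 2]    # y = x^{-1} sigma_2^{-1} x sigma_2
X_inv = [-3, 1]              # x^{-1} = sigma_3^{-1} sigma_1

assert braid([1, 2, 1]) == braid([2, 1, 2])        # sanity: braid relation
assert braid([-1] + X + [1]) == braid(X)           # sigma_1^{-1} x sigma_1 = x
assert braid([-2] + X + [2]) == braid(X + Y)       # sigma_2^{-1} x sigma_2 = x y
assert braid([-1] + Y + [1]) == braid(Y + X_inv)   # sigma_1^{-1} y sigma_1 = y x^{-1}
assert braid([-2] + Y + [2]) == braid(Y)           # sigma_2^{-1} y sigma_2 = y
print("all four conjugation formulas hold in B_4")
```

The second formula holds by the very definition of $y$; the remaining three are genuine identities in $B_4$, which is why a faithful representation (rather than, say, the permutation quotient $S_4$) is needed for the check.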
Let $f\in\mathop{\mathrm{Aut}}(F_2)$ be defined by $w\mapsto(\sigma_1\sigma_2)^{-1}w(\sigma_1\sigma_2)$. By we have $$f(x) = xy, \qquad f(y) = x^{-1}.$$ In the proof of we will show that classifying the irreducible, non-cyclic homomorphisms $\varphi\colon B_3\to B_4$ with $\varphi(\sigma_1)$ having some power that is a multitwist is equivalent to solving the following delicate equation in $F_2$ involving the automorphism $f$: $$f^{6k+1}(w)f^{-6k-1}(w) = w, \quad w\in F_2.$$ Solving this equation is the purpose of , see . There we show that if there is a non-identity solution, then $k=0$ and $w=f^n(x)$ for some $n\in\mathbb{Z}$. *Proof of .* Let $\varphi\colon B_3\to B_4$ be a non-cyclic, irreducible homomorphism such that some power of $\varphi(\sigma_1)$ is a multitwist. We seek $\tau\in\mathop{\mathrm{Aut}}(B_4)$ and $t\colon B_3\to Z(B_4)$ such that $\Psi_{3*}=\tau\circ\varphi^t$. By , we can assume without loss of generality that $\varphi(\sigma_1)=(\sigma_1\sigma_2)^{6k+1}$ for some integer $k$. By , $R_*\circ\varphi$ is cyclic, so there exists $w\in F_2$ such that $\varphi(\sigma_2)=w(\sigma_1\sigma_2)^{6k+1}$. The braid relation $\varphi(\sigma_1\sigma_2\sigma_1)=\varphi(\sigma_2\sigma_1\sigma_2)$ is equivalent to $$f^{6k+1}(w)f^{-6k-1}(w) = w.$$ By , $k=0$ and $w=f^n(x)$ for some $n\in\mathbb{Z}$. Thus $$\varphi(\sigma_2) = f^n(x)(\sigma_1\sigma_2) = (\sigma_1\sigma_2)^{-n}(\sigma_1^{-1}\sigma_3)(\sigma_1\sigma_2)^{n+1} = (\sigma_1\sigma_2)^{-n}(\sigma_3\sigma_2)(\sigma_1\sigma_2)^n.$$ Hence we may take $t$ to be trivial, and $\tau\colon g\mapsto(\sigma_1\sigma_2)^ng(\sigma_1\sigma_2)^{-n}$. ◻ # Solution to the equation in the free group of rank 2 {#sec:f2} Let $x,y$ be a free basis of the free group $F_2$ of rank 2, and let $f\in\mathop{\mathrm{Aut}}(F_2)$ be the automorphism defined by $$f(x) \coloneqq xy, \qquad f(y) \coloneqq x^{-1}.$$ The goal of this section is to prove the following theorem. **Theorem 1**.
*If $w\in F_2$ is a non-identity word satisfying $$f^{6k+1}(w)f^{-6k-1}(w) = w,$$ then $k=0$ and $w=f^n(x)$ for some $n\in\mathbb{Z}$.* ## Basic properties of $f$ Note that the image of $f$ in $\mathop{\mathrm{GL}}_2\mathbb{Z}$ has order 6. In fact: $$f^6(w) = cwc^{-1}, \quad \text{where } c\coloneqq[x,y]=xyx^{-1}y^{-1}.$$ **Lemma 1**. *The set of words fixed by $f$ is the subgroup $\langle c\rangle$.* *Proof.* One can easily check that $f(c)=c$, hence $f$ fixes every element of $\langle c\rangle$. Conversely, if $w$ is fixed by $f$, then it is also fixed by $f^6$. Hence $w$ must commute with $c$. Since $c$ has cyclic centralizer and is not an $n$th power in $F_2$ for any $n\geq2$, it follows that $w\in\langle c\rangle$. ◻ If $w\in F_2$, then let $\ell(w)$ be the length of $w$ as a reduced word in the alphabet $\{x,x^{-1},y,y^{-1}\}$. **Definition 7** (**Start and end**). Suppose that $w=uv$ where $u,v\in F_2$ and there is no cancellation in the product $uv$, i.e., $\ell(w)=\ell(u)+\ell(v)$. Then we say that $w$ *starts with* $u$ and *ends with* $v$. We write $w=u\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$ to denote that $w$ starts with $u$, or $w=\,\rule[-0.1mm]{0.5cm}{0.15mm}\,v$ to denote that $w$ ends with $v$, or $w=u\,\rule[-0.1mm]{0.5cm}{0.15mm}\,v$ to denote that $w$ starts with $u$ and ends with $v$. For example, we may write $c=xy\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, or $c=\,\rule[-0.1mm]{0.5cm}{0.15mm}\,y^{-1}$. It is also true that $c=xyx^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,x^{-1}y^{-1}$, since in the expression $w=u\,\rule[-0.1mm]{0.5cm}{0.15mm}\,v$, the subwords $u$ and $v$ can overlap in the expression of $w$ as a reduced word. The following lemma gives a precise relationship between the start and end of $w$ and $f(w)$. **Lemma 1**. *Let $n\in\mathbb{Z}$.
Then $$\begin{aligned} {2} w &= y^nx\,\rule[-0.1mm]{0.5cm}{0.15mm}\,&\iff f(w) &= x^{-n+1}y\,\rule[-0.1mm]{0.5cm}{0.15mm}\,\\ w &= y^nx^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& \iff f(w) &= x^{-n}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,\\ w &= \,\rule[-0.1mm]{0.5cm}{0.15mm}\,xy^n & \iff f(w) &= \,\rule[-0.1mm]{0.5cm}{0.15mm}\,yx^{-n} \\ w &= \,\rule[-0.1mm]{0.5cm}{0.15mm}\,x^{-1}y^n & \iff f(w) &= \,\rule[-0.1mm]{0.5cm}{0.15mm}\,y^{-1}x^{-n-1}. \end{aligned}$$* *Proof.* Write $$w=y^{n_0}x^{m_1}y^{n_1}\cdots x^{m_r}y^{n_r},$$ where $r\geq1$ and the exponents are all nonzero except possibly $n_0$ and $n_r$. We will prove all four equivalences simultaneously by induction on $r$. We have $$f(y^{n_0}x^{m_1}y^{n_1}) = x^{-n_0}(xy)^{m_1}x^{-n_1} = \begin{cases} x^{-n_0+1}y\,\rule[-0.1mm]{0.5cm}{0.15mm}\,yx^{-n_1} & m_1>0 \\ x^{-n_0}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,y^{-1}x^{-n_1-1} & m_1<0 \end{cases}$$ which proves the lemma for $r=1$. Suppose now that $r>1$, and let $$w_1 \coloneqq y^{n_0}x^{m_1}y^{n_1}\cdots x^{m_{r-1}}y^{n_{r-1}}, \qquad w_2 \coloneqq x^{m_r}y^{n_r},$$ so that $w=w_1w_2$. By the inductive hypothesis $$f(w_1) = \begin{cases} \,\rule[-0.1mm]{0.5cm}{0.15mm}\,yx^{-n_{r-1}} & m_{r-1} > 0 \\ \,\rule[-0.1mm]{0.5cm}{0.15mm}\,y^{-1}x^{-n_{r-1}-1} & m_{r-1} < 0, \end{cases}$$ and $$f(w_2) = \begin{cases} xy\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_r > 0 \\ y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_r<0. \end{cases}$$ We claim that if there is cancellation in the product $f(w_1)f(w_2)$, then only one letter from each factor cancels: an $x^{-1}$ at the end of $f(w_1)$, and an $x$ at the start of $f(w_2)$. Note that $n_{r-1}\neq0$ since $r>1$. If $f(w_1)$ ends with a non-trivial power of $y$, then it must end with $y^{-1}$. However $f(w_2)$ does not start with $y$. Hence, if there is cancellation, then $f(w_1)$ must end with a non-trivial power of $x$. If $f(w_2)$ starts with a non-trivial power of $x$, then $m_r>0$, in which case $f(w_2)=xy\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$.
If there is cancellation, then $f(w_1)$ must end with $x^{-1}$. To cancel two letters from each factor would require $f(w_1)$ to end with $y^{-1}x^{-1}$, which is not possible since $n_{r-1}\neq0$. In particular, no powers of $y$ cancel in the product $f(w_1)f(w_2)$. Hence, if we apply the inductive hypothesis to the start of $w_1$ and the end of $w_2$, then we find that all four equivalences also hold for $w$. ◻ ## Proof that $k=0$ Suppose that $w\notin\langle c\rangle$. Let $L(w),R(w)$ be maximal in absolute value such that $w=c^{L(w)}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,c^{-R(w)}$, and let $m(w)\in F_2$ be the unique word satisfying $$w=c^{L(w)}m(w)c^{-R(w)}.$$ **Lemma 1**. *Suppose that $w\notin\langle c\rangle$. Then there is no cancellation in the product $c^{L(w)}m(w)c^{-R(w)}$, i.e., $\ell(w)=\ell(c^{L(w)})+\ell(m(w))+\ell(c^{-R(w)})$.* *Proof.* This is equivalent to showing that the $c^{L(w)}$ at the start of $w$ does not overlap with the $c^{-R(w)}$ at the end of $w$. Suppose for a contradiction that they do overlap. Then there exists $u\in F_2$ with $0<\ell(u)<\ell(c)$ such that $c^\epsilon=\,\rule[-0.1mm]{0.5cm}{0.15mm}\,u$ and $c^{\epsilon'}=u\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, where $\epsilon,\epsilon'\in\{\pm1\}$ depend on the signs of $L(w)$ and $R(w)$. If $\epsilon=-\epsilon'$, then $u=u^{-1}$, which is impossible. If $\epsilon=\epsilon'$, then $c^\epsilon=u\,\rule[-0.1mm]{0.5cm}{0.15mm}\,u$; however, the only non-trivial word $u$ satisfying this is $u=c^\epsilon$, which contradicts $\ell(u)<\ell(c)$. Hence there is no overlap. ◻ **Lemma 1**. *If $w_1,w_2,w_1w_2\notin\langle c\rangle$, then either $L(w_1w_2)=L(w_1)$, or $R(w_1w_2)=R(w_2)$.* *Proof.* Write $L_i=L(w_i)$, $R_i=R(w_i)$, $m_i=m(w_i)$, $L_{12}=L(w_1w_2)$, and $R_{12}=R(w_1w_2)$.
Then $$w_1w_2 = c^{L_1}m_1c^{L_2-R_1}m_2c^{-R_2}.$$ The only way for both $L_{12}\neq L_1$ and $R_{12}\neq R_2$ to hold is for the middle factor $c^{L_2-R_1}$ to be trivial, and for only three letters of $m_1$ and three letters of $m_2$ to remain in $m_1m_2$ after cancellation. But since $\ell(c)=4$, this leaves only enough letters to form at most $c^{\pm1}$, so at most one power of $c$ from one end can be altered. Hence, either $L_{12}=L_1$ or $R_{12}=R_2$. ◻ **Lemma 1**. *If $w\notin\langle c\rangle$ then $L(w)\leq L(f(w))$ and $R(w)\leq R(f(w))$.* *Proof.* It suffices to prove the first inequality for all $w\notin\langle c\rangle$, since $R(w)=L(w^{-1})$. If $L(w)>0$, then $$w = c^{L(w)-1}xyx^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,,$$ which by implies that $f(w)=c^{L(w)}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, so $L(w)\leq L(f(w))$. Similarly, if $L(f(w))<0$, then $$f(w) = c^{L(f(w))+1}yxy^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,,$$ which by implies that $w=c^{L(f(w))}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, so $L(w)\leq L(f(w))$. The only other possibility is $L(w)\leq0\leq L(f(w))$, and the inequality clearly holds here. ◻ **Lemma 1**. *If there is a non-identity word $w$ satisfying $w = f^{6k+1}(w)f^{-6k-1}(w)$, then $k=0$.* *Proof.* Note that the set of solutions to this equation is closed under the action of $\langle f^{6k+1}\rangle$. Let $w$ be a non-identity solution to $w=f^{6k+1}(w)f^{-6k-1}(w)$. The identity is the only solution that is fixed by $f$, so $w\notin\langle c\rangle$ by . Write $w_n=f^n(w)$, $L_n=L(w_n)$ and $R_n=R(w_n)$. Since $w_{n+6}=cw_nc^{-1}$, for all $n$ sufficiently large we have $L_{n+6}=L_n+1$ and $R_{n+6}=R_n+1$. Furthermore, the sequences $L_n$ and $R_n$ are non-decreasing by . Since $6k+1$ is coprime to 6, there exists $n$ such that the following hold: - $L_{n+1}=L_n$ - $L_{n+6k+1}=L_{n+1}+k$ - $R_{n-1}=R_n$ - $R_{n-6k-1}=R_{n-1}-k$. In particular $L_{n+6k+1}=L_n+k$ and $R_{n-6k-1}=R_n-k$.
By and the equation $w_n=w_{n+6k+1}w_{n-6k-1}$ we can conclude that either $L_n=L_n+k$ or $R_n=R_n-k$. In either case we must have $k=0$. ◻ ## The solutions when $k=0$ **Lemma 1**. *Suppose that both $w$ and $f(w)$ start with $u$, where $\ell(u)=4$. Then $u=c^{\pm1}$. Similarly if both $w$ and $f(w)$ end with $v$, where $\ell(v)=4$, then $v=c^{\pm1}$.* *Proof.* Note that the second statement follows from applying the first to $w^{-1}$. Write $$w=y^{n_0}x^{m_1}y^{n_1}\cdots x^{m_r}y^{n_r}$$ as before. Suppose that $w$ and $f(w)$ both start with the same length four word $u$. By , $w$ starts with either $yx$ or $x$. Hence $r>0$, $m_1>0$ and $n_0\in\{0,1\}$. In the arguments that follow, we will freely apply in similar ways to the previous application. For instance, $$f(y^{n_1}\cdots x^{m_r}y^{n_r}) = \begin{cases} x^{-n_1} & r=1 \\ x^{-n_1+1}y\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2 > 0 \\ x^{-n_1}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2 < 0. \end{cases}$$ Hence, $$f(w) = \begin{cases} x^{-n_0+1}y(xy)^{m_1-1}x^{-n_1} & r = 1 \\ x^{-n_0+1}y(xy)^{m_1-1}x^{-n_1+1}y \,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2 > 0 \\ x^{-n_0+1}y(xy)^{m_1-1}x^{-n_1}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2<0. \end{cases}$$ If $n_0=0$, then $f(w)=xy\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, hence $m_1=1$. If $n_0=1$, then $f(w)=y(xy)^{m_1-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, hence $m_1$ cannot be larger than 1. Thus, in all cases $m_1=1$. Hence, $$f(w) = \begin{cases} x^{-n_0+1}yx^{-n_1} & r = 1 \\ x^{-n_0+1}yx^{-n_1+1}y \,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2 > 0 \\ x^{-n_0+1}yx^{-n_1}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2<0. \end{cases}$$ Clearly $r=1$ leads to a contradiction. If $n_0=1$, then $$f(w) = \begin{cases} yx^{-n_1+1}y\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2 > 0 \\ yx^{-n_1}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_2 < 0. \end{cases}$$ Since $m_1=1$ (and $n_0=1$) we must have $n_1=-1$ and $m_2<0$. Thus, if $n_0=1$, then $w$ starts with $c^{-1}=yxy^{-1}x^{-1}$.
Now suppose $n_0=0$, and assume for a contradiction that $n_1\neq1$. Then $f(w)=xyx^{e}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,$, where $e\neq0$, contradicting the assumption that $w$ and $f(w)$ start with the same length four word. Hence $n_1=1$. Thus, if $n_0=0$, then $m_1=n_1=1$, and hence $m_2<0$. Note that $$f(y^{n_2}\cdots x^{m_r}y^{n_r}) = \begin{cases} x^{-n_2} & r = 2 \\ x^{-n_2+1}y\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_3 > 0 \\ x^{-n_2}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_3 < 0. \end{cases}$$ Hence, if $n_0=0$, then $$f(w) = \begin{cases} xyx^{-1}(y^{-1}x^{-1})^{-m_2-1}y^{-1}x^{-n_2-1} & r = 2 \\ xyx^{-1}(y^{-1}x^{-1})^{-m_2-1}y^{-1}x^{-n_2}y\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_3 > 0 \\ xyx^{-1}(y^{-1}x^{-1})^{-m_2-1}y^{-1}x^{-n_2-1}y^{-1}\,\rule[-0.1mm]{0.5cm}{0.15mm}\,& m_3<0, \end{cases}$$ and so $w$ starts with $c=xyx^{-1}y^{-1}$. ◻ In what follows, the *$f$-orbit of $w$* is the set $\{f^n(w) : n\in\mathbb{Z}\}$, i.e., the union of the forwards and backwards $f$-orbits of $w$. **Lemma 1**. *The $f$-orbit of every word in $F_2$ not in $\langle c\rangle$ contains a word $w$ such that none of $f^{-1}(w)$, $w$, and $f(w)$ ends with $c$ or $c^{-1}$.* *Proof.* Suppose $w\notin\langle c\rangle$. Since $f^6(w)=cwc^{-1}$, the word $f^n(w)$ ends with $c^{-1}$ for all sufficiently large $n$, and ends with $c$ for all sufficiently negative $n$. For a given $f$-orbit not in $\langle c\rangle$, choose $v$ in it such that $v$ ends in $c$ but $f(v)$ does not, and let $w\coloneqq f^2(v)$, so $f^{-1}(w)$ does not end in $c$. By , neither $w$ nor $f(w)$ ends with $c$. Also, by , $$v = \,\rule[-0.1mm]{0.5cm}{0.15mm}\,c \implies f(v) = f^{-1}(w) = \,\rule[-0.1mm]{0.5cm}{0.15mm}\,yx^{-1}y^{-1} \implies w = \,\rule[-0.1mm]{0.5cm}{0.15mm}\,y^{-1}.$$ Suppose for a contradiction that $f(w)=\,\rule[-0.1mm]{0.5cm}{0.15mm}\,c^{-1}$. Then in particular $f(w)=\,\rule[-0.1mm]{0.5cm}{0.15mm}\,y^{-1}x^{-1}$, so by $$w = \,\rule[-0.1mm]{0.5cm}{0.15mm}\,x^{-1},$$ which is a contradiction.
Hence, none of $f^{-1}(w)$, $w$ or $f(w)$ end in $c^{-1}$. ◻ **Lemma 1**. *If $w$ is a non-identity solution of $w=f(w)f^{-1}(w)$, then $w=f^n(x)$ for some $n\in\mathbb{Z}$.* *Proof.* First note the claim holds under the additional hypothesis that $\ell(w)\leq6$, which is easily verified by a computer. We can also ignore elements of $\langle c\rangle$, since the only solution contained in it is the identity. Suppose $w$ neither starts nor ends with either of $c$ or $c^{-1}$. Then by , in the product $w=f(w)f^{-1}(w)$ all but at most the first three letters of $f(w)$ and at most the last three letters of $f^{-1}(w)$ must be cancelled, hence $\ell(w)\leq6$, so $w$ is in the $f$-orbit of $x$. Assume then that all words in the $f$-orbit of $w\notin\langle c\rangle$ either start or end with one of $c$ or $c^{-1}$. By , we may assume without loss of generality that $f^{-1}(w)$, $w$ and $f(w)$ do not end with either of $c$ or $c^{-1}$, and hence all start with $c$ or $c^{-1}$. In particular, by all but at most three letters of $f^{-1}(w)$ are cancelled in the product $f(w)f^{-1}(w)$. Hence $\ell(f^{-1}(w))\leq6$, so $w$ is in the $f$-orbit of $x$. ◻ *Proof of .* If $w$ is a non-identity solution of $f^{6k+1}(w)f^{-6k-1}(w)=w$, then $k=0$ by , and $w=f^n(x)$ for some $n\in\mathbb{Z}$ by . ◻ # Classification of holomorphic maps {#sec:conf-spaces} The goal of this section is to classify the holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ for $m\in\{3,4\}$, and to classify the families of elliptic curves over $\mathop{\mathrm{Conf}}_n\mathbb{C}$. We denote by $\mathcal{M}_{g,n}$ the moduli space of Riemann surfaces of genus $g$ with $n$ unordered marked points.
We only consider this moduli space when $2g-2+n>0$, where it can be identified with the orbifold quotient $\mathop{\mathrm{Mod}}_{g,n}\backslash{\mathcal T_{g,n}}$, where $\mathop{\mathrm{Mod}}_{g,n}$ and ${\mathcal T_{g,n}}$ are the associated mapping class group and Teichmüller space. Note that $\mathop{\mathrm{Mod}}_{g,n}$ need not act faithfully on ${\mathcal T_{g,n}}$, e.g., $\mathcal{M}_{1,1}=\mathop{\mathrm{SL}}_2\mathbb{Z}\backslash\mathbb{H}$. Adopting this convention means that holomorphic maps $B\to\mathcal{M}_{g,n}$, in the orbifold sense, where $B$ is a complex manifold, correspond to families of genus $g$ curves having $n$ marked points. Following the techniques of Chen and Salter [@CS23], we first constrain the possible homomorphisms that can come from holomorphic maps, and then apply a theorem of Imayoshi and Shiga [@IS88] to see that each possible equivalence class is realized by a unique holomorphic map, up to affine twists. ## Constraining the homomorphisms The following is a well-known consequence of work of Bers [@Ber78], see, e.g., [@McM00 §3]. **Theorem 1**. *Let $f\colon\mathbb{D}^*\to\mathcal{M}_{g,n}$ be holomorphic, where $\mathbb{D}^*=\{z\in\mathbb{C}: 0 < |z| < 1\}$ is a punctured disk and $2g-2+n>0$. Then $f_*(k)$ is a multitwist for some $k\geq1$, where $f_*\colon\mathbb{Z}\to\mathop{\mathrm{Mod}}_{g,n}$ is the induced map on fundamental groups.* *Proof.* It is a theorem of Royden [@Roy71] (see [@EK74 §2.3]) that $\mathcal{M}_{g,n}$ is Kobayashi hyperbolic, with Kobayashi metric equal to the Teichmüller metric. Hence, the holomorphic map $f$ is distance non-increasing with respect to the hyperbolic metric on $\mathbb{D}^*$ and the Teichmüller metric on $\mathcal{M}_{g,n}$. Since a loop around the puncture in $\mathbb{D}^*$ can be made arbitrarily short in the hyperbolic metric, $f_*(k_0)$ has translation length zero for all $k_0\in\mathbb{Z}$.
Let $k_0\geq1$ be such that $f_*(k_0)$ preserves the connected components of the complement of its canonical reduction system. By assertion (A) in the proof of Bers' Theorem 7 [@Ber78], we can conclude that $f_*(k_0)$ is not pseudo-Anosov when restricted to any of these connected components. Thus, $f_*(k_0)$ is periodic on each component, hence $f_*(k)$ is a multitwist for some positive multiple $k$ of $k_0$. ◻ As a useful consequence we have the following. **Proposition 1**. *Let $f\colon\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathcal{M}_{g,n}$ be a holomorphic map, where $2g-2+n>0$. Then there exists some $k\neq0$ such that $f_*(\sigma_i^k)$ is a multitwist for $i=1,\ldots,m-1$.* *Proof.* The configuration space $\mathop{\mathrm{Conf}}_m\mathbb{C}$ is isomorphic to the space of monic degree $m$ square-free polynomials over $\mathbb{C}$. This identifies $\mathop{\mathrm{Conf}}_m\mathbb{C}$ with $\mathbb{C}^m-{\mathcal D}_m$, where ${\mathcal D}_m$ is the discriminant locus. The smooth points of ${\mathcal D}_m$ correspond to configurations with exactly one repeated point. Thus, a loop in the discriminant complement representing $\sigma_i$ is freely homotopic to a small loop $\gamma$ in the normal space of ${\mathcal D}_m$ at some smooth point. In fact, we can choose the same $\gamma$ for each $\sigma_i$, since the set of smooth points of ${\mathcal D}_m$ is connected. Applying to the punctured disk bounded by $\gamma$ gives the result. ◻ We will also need the following theorem, which is a special case of a theorem of Daskalopoulos and Wentworth. **Theorem 1** ([@DW07 Theorem 5.7]). *Let $f\colon B\to\mathcal{M}_{g,n}$ be a holomorphic map, where $B$ is a Riemann surface of finite type. If $f_*\colon\pi_1(B)\to\mathop{\mathrm{Mod}}_{g,n}$ is cyclic or reducible, then $f$ is constant.* Chen and Salter prove the following in [@CS23 §3.3]. **Lemma 1**.
*Let $f:\mathop{\mathrm{Conf}}_n\mathbb{C}\to \mathop{\mathrm{Conf}}_m\mathbb{C}$ be a holomorphic map such that the induced map $\bar{f}:\mathop{\mathrm{Conf}}_n\mathbb{C}\to \mathop{\mathrm{Conf}}_m\mathbb{C}/\mathrm{Aff}$ is constant. Then $f$ is an affine twist of a root map.* ## Uniqueness The following result is a generalization of a theorem of Imayoshi--Shiga [@IS88]. We mimic the proof of [@AAS18 Proposition 3.2] and invoke [@CS23 Theorem 2.5]. **Theorem 1**. *Let $M$ be a smooth, quasi-projective complex variety, let $\mathcal{M}_{g,n}'$ be a finite orbifold cover of $\mathcal{M}_{g,n}$, and let $f,g\colon M\to\mathcal{M}_{g,n}'$ be non-constant holomorphic or antiholomorphic maps. If $f$ and $g$ are homotopic as orbifold maps, then $f=g$.* *Proof.* Chen and Salter's result [@CS23 Theorem 2.5] handles the case where $M$ is a Riemann surface of finite type. For the general case, note that a generic curve in $M$ is a Riemann surface of finite type, so $f$ and $g$ agree on every generic curve in $M$. Since any point of $M$ lies on such a curve, $f=g$. ◻ Note that $\mathop{\mathrm{Conf}}_m\mathbb{C}$ is a smooth quasi-projective variety, so applies to it. Furthermore, since $\mathop{\mathrm{Conf}}_m\mathbb{C}$ is a $K(\pi,1)$ space, $f,g\colon Y\to\mathcal{M}_{g,n}'$ are homotopic as orbifold maps if and only if $f_*,g_*\colon\pi_1(Y)\to\pi_1(\mathcal{M}_{g,n}')$ are conjugate, for any covering space $Y$ of $\mathop{\mathrm{Conf}}_m\mathbb{C}$. The following consequence is proved by Chen and Salter in the special case where $m=n$ [@CS23 §3.1]. Note that there is a natural holomorphic map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathcal{M}_{0,n+1}$, given by associating to $\{x_1,\ldots,x_n\}\in\mathop{\mathrm{Conf}}_n\mathbb{C}$ the Riemann surface $\mathbb{CP}^1=\mathbb{C}\cup\{\infty\}$, with marked points $\{x_1,\ldots,x_n,\infty\}$. **Proposition 1**.
*Let $f,g\colon\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathop{\mathrm{Conf}}_n\mathbb{C}$ be holomorphic maps such that the induced maps $\bar{f},\bar{g}\colon\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathcal{M}_{0,n+1}$ are not constant. Suppose that $f_*,g_*\colon B_m\to B_n$ differ by at most an automorphism of $B_n$ and a transvection $B_m\to Z(B_n)$. Then $f$ is an affine twist of $g$.* *Proof.* Without loss of generality, we can apply an affine twist by a power of the discriminant $\Delta\colon\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathbb{C}^*$ so that $f_*,g_*$ differ only by an automorphism of $B_n$. Dyer and Grossman showed that $\mathop{\mathrm{Out}}(B_n)\cong\langle\iota\rangle$, where $\iota$ is given by $\sigma_i\mapsto\sigma_i^{-1}$ for $i=1,\ldots,n-1$ [@DG81]. The involution $\iota$ is induced by the coordinate-wise complex conjugation on $\mathop{\mathrm{Conf}}_n\mathbb{C}$. Thus, if $f_*=\tau\circ g_*$ for an outer automorphism $\tau$ of $B_n$, then by the induced maps $\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathcal{M}_{0,n+1}$ only differ by complex conjugation, which contradicts both $f,g$ being holomorphic. Hence $f_*,g_*$ are conjugate, and thus by they induce the same maps $\bar{f}=\bar{g}\colon\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathcal{M}_{0,n+1}$. Let $Y$ be the finite cover of $\mathop{\mathrm{Conf}}_m\mathbb{C}$ such that $f,g$ lift to maps $\tilde{f},\tilde{g}\colon Y\to\mathop{\mathrm{PConf}}_n\mathbb{C}$, and write $\tilde{f}=(f_1,\ldots,f_n)$ and $\tilde{g}=(g_1,\ldots,g_n)$, where $f_i,g_i\colon Y\to\mathbb{C}$ for $i=1,\ldots,n$. Note that $Y$ is also a quasi-projective variety by a result of Grauert and Remmert [@GR58]. Since $\bar{f}=\bar{g}$ and $f_*,g_*$ are conjugate, $\tilde{f}_*,\tilde{g}_*$ are also conjugate. implies that $\tilde{f},\tilde{g}$ induce identical maps $Y\to\mathop{\mathrm{PConf}}_n\mathbb{C}/\mathop{\mathrm{Aff}}$. 
Hence, there exists a function $\tilde{A}\colon Y\to\mathop{\mathrm{Aff}}$ such that $\tilde{f}=\tilde{g}^{\tilde{A}}$. If $\tilde{A}\cdot z=az+b$ for all $z\in\mathbb{C}$, where $a,b\colon Y\to\mathbb{C}$, then for all distinct $i,j\in\{1,\ldots,n\}$ we have $$a = \frac{g_i-g_j}{f_i-f_j}, \qquad b = \frac{f_ig_j-f_jg_i}{f_i-f_j}.$$ Hence $\tilde{A}$ is holomorphic. Furthermore, the above holds for all distinct indices $i,j$, and $\tilde{f},\tilde{g}$ are equivariant with respect to the group of deck transformations of the covering map $Y\to\mathop{\mathrm{Conf}}_m\mathbb{C}$, which is a subgroup of $S_n$. Hence $\tilde{A}$ descends to a holomorphic map $A\colon\mathop{\mathrm{Conf}}_m\mathbb{C}\to\mathop{\mathrm{Aff}}$ such that $f=g^A$. ◻ **Remark 1**. Grauert and Remmert's result is not needed by Chen and Salter, since in the case $m=n$ all such holomorphic maps lift to self maps of $\mathop{\mathrm{PConf}}_n\mathbb{C}$, which is clearly quasi-projective. However, we will apply with the map $\Psi_3$, which does not lift to a holomorphic map $\mathop{\mathrm{PConf}}_3\mathbb{C}\to\mathop{\mathrm{PConf}}_4\mathbb{C}$. Neither the argument of Grauert and Remmert, nor Grothendieck's vast generalization [@Gro71 XII, Théorème 5.1], gives explicit equations for an arbitrary finite cover of $\mathop{\mathrm{Conf}}_n\mathbb{C}$. However, since $\Psi_3$ is algebraic, we can write down explicit equations for the pullback $Y\to\mathop{\mathrm{Conf}}_3\mathbb{C}$ of the cover $\mathop{\mathrm{PConf}}_4\mathbb{C}\to\mathop{\mathrm{Conf}}_4\mathbb{C}$ along $\Psi_3$ that express $Y$ as a quasi-projective variety. ## Classifying the holomorphic maps We are finally ready to prove . *Proof of .* Let $f\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{Conf}}_m\mathbb{C}$ be a holomorphic map, where $n\geq3$ and $m\in\{3,4\}$.
If $f$ is not a root map or the identity, then by and the homomorphism $f_*\colon B_n\to B_m$ must be irreducible, non-cyclic, and $f_*(\sigma_i)$ must have a power that is a multitwist for all $i=1,\ldots,n-1$. If $m=3$, then for $f_*$ to be non-cyclic we must have $n\in\{3,4\}$ by . Since $R_*$ is surjective, $n\in\{3,4\}$ is also necessary if $m=4$. Suppose $n=3$. By and , $f_*$ is either the identity on $\mathop{\mathrm{Conf}}_3\mathbb{C}$ or $\Psi_{3*}$, up to a transvection $B_3\to Z(B_m)$ and an automorphism of $B_m$. Suppose $n=4$. By and the above, $f_*$ is either $R_*$, the identity on $\mathop{\mathrm{Conf}}_4\mathbb{C}$, or $\Psi_{3*}\circ R_*$, up to a transvection $B_4\to Z(B_m)$ and an automorphism of $B_m$. Since $f$ is not the identity, $f$ is an affine twist of $R$, $\Psi_3$, or $\Psi_3\circ R$ by . ◻ Next we classify the holomorphic maps $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$. There is a natural map $\mathop{\mathrm{SL}}_2\mathbb{Z}\backslash\mathbb{H}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ corresponding to the quotient $\mathop{\mathrm{SL}}_2\mathbb{Z}\twoheadrightarrow\mathop{\mathrm{PSL}}_2\mathbb{Z}$. Moreover, $\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ is the moduli space of quotients of elliptic curves by their hyperelliptic involution. This is the same data as a four-times marked sphere, where one of the four points is distinguished. Hence $\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ is a finite orbifold cover of $\mathcal{M}_{0,4}$. **Theorem 1**.
*Let $f\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ be a holomorphic map.* - *If $n\geq5$, then $f$ is constant.* - *If $n=4$, then $f=g\circ R$, where $g\colon\mathop{\mathrm{Conf}}_3\mathbb{C}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ is holomorphic.* - *If $n=3$, then either $f$ is constant, or it is the post-composition $\bar{H}$ of the hyperelliptic embedding $H$ with the natural map $\mathcal{M}_{1,1}\cong\mathop{\mathrm{SL}}_2\mathbb{Z}\backslash\mathbb{H}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$.* *Proof.* Suppose that $f$ is non-constant. By , $f_*\colon B_n\to\mathop{\mathrm{PSL}}_2\mathbb{Z}$ is non-cyclic, and hence by , $n<5$, and if $n=4$, then $f_*$ factors through $R_*$. By , the $f_*(\sigma_i)$ are not hyperbolic. Let $\iota\in\mathop{\mathrm{Aut}}(B_n)$ be the inversion automorphism, defined by $\sigma_i\mapsto\sigma_i^{-1}$ for $i=1,\ldots,n-1$. By , $n=3$ or $n=4$, and $f_*$ is $H_*$ up to precomposition by one or both of $R_*$ and $\iota$. Note that coordinate-wise complex conjugation is an antiholomorphic self map of $\mathop{\mathrm{Conf}}_n\mathbb{C}$ inducing $\iota$. Hence, by , either $f=\bar{H}$, $f=\bar{H}\circ R$, or $f$ is one of these maps precomposed with coordinate-wise complex conjugation. Since complex conjugation is antiholomorphic, the only holomorphic possibilities for $f$ are $f=\bar{H}$ when $n=3$, and $f=\bar{H}\circ R$ when $n=4$, as desired. ◻ *Proof of .* Let $f\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathcal{M}_{1,1}$ be a holomorphic family such that the induced map $\bar{f}\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ is non-constant.
Since the kernel of $\mathop{\mathrm{SL}}_2\mathbb{Z}\twoheadrightarrow\mathop{\mathrm{PSL}}_2\mathbb{Z}$ is $Z(\mathop{\mathrm{SL}}_2\mathbb{Z})\cong\mathbb{Z}/2$, and $H^1(B_n;\mathbb{Z}/2)\cong\mathbb{Z}/2$, each holomorphic map $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathop{\mathrm{PSL}}_2\mathbb{Z}\backslash\mathbb{H}$ lifts to at most two holomorphic families $\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathcal{M}_{1,1}$, whose induced homomorphisms $B_n\to\mathop{\mathrm{SL}}_2\mathbb{Z}$ differ by a transvection $B_n\to Z(\mathop{\mathrm{SL}}_2\mathbb{Z})$. For any holomorphic family, we can induce such a transvection by pre-composing with $\mathop{\mathrm{id}}^{\Delta}$, since $\Delta\colon\mathop{\mathrm{Conf}}_n\mathbb{C}\to\mathbb{C}^*$ induces the abelianization map $\Delta_*\colon B_n\to\mathbb{Z}$. Hence, if $\bar{f}$ is non-constant, then by we have $n\in\{3,4\}$ and $f=H$ up to pre-composition by one or both of $R$ and $\mathop{\mathrm{id}}^{\Delta}$. ◻
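The two group-theoretic ingredients of this lifting count are easy to check by hand. A minimal computational sketch (not part of the paper's argument; the matrix encoding and the choice of the standard generators $S$ and $T$ of $\mathop{\mathrm{SL}}_2\mathbb{Z}$ are ours), verifying that the kernel $\{\pm I\}$ of $\mathop{\mathrm{SL}}_2\mathbb{Z}\twoheadrightarrow\mathop{\mathrm{PSL}}_2\mathbb{Z}$ is central of order $2$:

```python
# Sanity-check sketch: the kernel of SL_2(Z) ->> PSL_2(Z) is {I, -I},
# a central subgroup of order 2.  S and T below are the standard
# generators of SL_2(Z), so commuting with both suffices for centrality.
def mul(A, B):
    """Product of 2x2 integer matrices given as tuples of row tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

I = ((1, 0), (0, 1))
negI = ((-1, 0), (0, -1))
S = ((0, -1), (1, 0))   # order-4 generator
T = ((1, 1), (0, 1))    # unipotent generator

assert mul(negI, negI) == I                                # -I has order 2
assert all(mul(negI, g) == mul(g, negI) for g in (S, T))   # -I is central
print("kernel {I, -I} is central of order 2")
```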
*arXiv:2309.12999 — "Braid groups, elliptic curves, and resolving the quartic" by Peter Huxford and Jeroen Schillewaert (math.GT, math.AG, math.GR).*
--- abstract: | We completely determine the $\tau$-tilting finiteness (2-term silting finiteness) of Borel-Schur algebras. To achieve this, we use two recently introduced techniques in silting theory: sign decomposition as introduced by Aoki, Higashitani, Iyama, Kase and Mizuno, and symmetry of silting quivers as investigated by Aihara and the author. address: Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, China. author: - Qi Wang title: On $\tau$-tilting finite Borel-Schur algebras --- [^1] # Introduction Representation type plays a fundamental role in the representation theory of finite-dimensional algebras. One can classify finite-dimensional algebras into three classes based on the indecomposable modules the algebra admits. Let $A$ be a finite-dimensional algebra over an algebraically closed field $K$. Then $A$ is said to be *representation-finite* if it admits only finitely many indecomposable modules up to isomorphism; otherwise, $A$ is said to be *representation-infinite*. We say that a representation-infinite algebra $A$ is *tame* if all but finitely many $d$-dimensional indecomposable $A$-modules can be organized in a one-parameter family, for each dimension $d$. A representation-infinite algebra $A$ is called *wild* if there exists a faithful exact $K$-linear functor from the module category of the free associative algebra $K\langle x,y\rangle$ to the module category of $A$. The famous Tame-Wild Dichotomy, established by Drozd [@Dr-tame-wild], asserts that the representation type of any finite-dimensional algebra over $K$ is exactly one of representation-finite, tame and wild. Similar to the concept of representation-finiteness, Demonet, Iyama and Jasso [@DIJ-tau-tilting-finite] introduced a modern notion called *$\tau$-tilting finiteness*, which is inspired by the $\tau$-tilting theory initially proposed by Adachi, Iyama and Reiten [@AIR]. Here, $\tau$ denotes the Auslander-Reiten translation. 
Over the past decade, $\tau$-tilting theory has arisen not only as a generalization of classical tilting theory but also as a field closely connected to various other objects in representation theory. These include torsion classes, wide subcategories, semibricks, two-term silting complexes, stability conditions, maximal green sequences, wall-and-chamber structures, cluster-tilted objects, fans and polytopes, and so on. We refer readers to some in-depth papers such as [@AHIKM], [@Asai], [@BST-max-green-sequence], [@DIRRT], [@IR-introduction], [@KT-wall-chamber], [@Treffinger-introduction], for more details. The study of $\tau$-tilting finiteness and new results in $\tau$-tilting theory may provide a deeper understanding of some classical problems, and then enhance the connection to the aforementioned objects. Moreover, the methodologies developed during the process may lead to other applications that fall within their own research directions. These are the underlying motivations for studying this modern notion. A finite-dimensional algebra over $K$ is said to be *$\tau$-tilting finite* if it has only a finite number of $\tau$-tilting modules. It is easy to observe that a representation-finite algebra must be $\tau$-tilting finite. Certain tame or wild algebras could also exhibit $\tau$-tilting finiteness, e.g., Brauer graph algebras [@AAC-Brauer-graph-alg] and preprojective algebras of Dynkin type [@Mizuno-preprojective-alg]. In fact, $\tau$-tilting finiteness captures a *brick-finiteness*, as discussed in [@DIJ-tau-tilting-finite], in both tame and wild algebras. 
Besides, $\tau$-tilting finiteness has been determined for various classes of algebras, including radical square zero algebras [@Ada-rad-square-0], cycle finite algebras [@MS-cycle-finite], biserial algebras [@Mousavand-biserial-alg], gentle algebras [@Plamondon-gentle-alg], some subclasses of two-point algebras [@W-two-point; @W-two-point-II], Hecke algebras of type $A$ [@ALS-Hecke-alg], Schur algebras [@W-schur-alg; @Aoki-Wang], simply connected algebras [@W-simply], some tensor product algebras [@MW-simply-connected], cluster-tilted algebras [@Z-tilted], and more. In this paper, we pay particular attention to the bijection (as demonstrated in Theorem [Theorem 6](#theo::iso-tilt-silt){reference-type="ref" reference="theo::iso-tilt-silt"}) between $\tau$-tilting theory and silting theory. We are aiming to study additional properties (see Section [3](#section-3){reference-type="ref" reference="section-3"}) of sign decomposition within the framework of silting theory. Sign decomposition was initially introduced by Aoki in [@Aoki-sign-decomposition] as a means to classify torsion classes in radical square zero algebras. Subsequently, this concept was extended and generalized by Aoki, Higashitani, Iyama, Kase and Mizuno in their work [@AHIKM], where it was applied to the study of fans and polytopes in $\tau$-tilting theory. We are also aiming to investigate the behavior of sign decomposition on specific subclasses of triangular algebras, i.e., algebras without oriented cycles. As a consequence, we present a complete classification of Borel-Schur algebras in terms of $\tau$-tilting finiteness, using sign decomposition alongside the bijection between silting theory and $\tau$-tilting theory. Indeed, the application of sign decomposition to certain Borel-Schur algebras yields interesting insights, for example, Proposition [Proposition 21](#prop::source-sink){reference-type="ref" reference="prop::source-sink"}. 
Borel-Schur algebras are certain subalgebras of Schur algebras and play an important role in the study of projective resolutions for Weyl modules associated with the general linear group, see [@SY-Borel-Schur; @Wo-Schur-alg; @Wo-Borel-Schur-alg]. Erdmann, Santana and Yudin [@ESY-AR-quiver; @ESY-rep-type-Borel-Schur-alg] have provided a complete classification for the representation type of Borel-Schur algebras. Meanwhile, Aoki and the author [@W-schur-alg; @Aoki-Wang] have completely determined the $\tau$-tilting finiteness of Schur algebras. Consequently, it is natural to determine the $\tau$-tilting finiteness of Borel-Schur algebras. Let $S^+(n,r)$ be the Borel-Schur algebra over an algebraically closed field $K$ of any characteristic $p\ge 0$. See Subsection [2.2](#subsection-Borel-Schur-alg){reference-type="ref" reference="subsection-Borel-Schur-alg"} for the definitions. **Theorem 1** ([@ESY-AR-quiver; @ESY-rep-type-Borel-Schur-alg]). *The Borel-Schur algebra $S^+(n,r)$ is* - *representation-finite if one of the following holds:* - *$n=2$ and $p=0$, or $p=2, r\le 3$ or $p=3, r\le 4$ or $p\ge 5, r\le p$;* - *$n\ge 3$ and $r=1$.* - *tame if $n=2, p=3, r=5$ or $n=3, r=2$.* *Otherwise, $S^+(n,r)$ is wild.* Representation-finite cases are naturally $\tau$-tilting finite, but the converse is not valid in the class of Borel-Schur algebras. Our main result below implies that a tame case ($S^+(2,5)$ over $p=3$) and two wild cases ($S^+(2,4)$ over $p=2$, $S^+(2,6)$ over $p=5$) occur as $\tau$-tilting finite algebras. These are the only cases that are representation-infinite but $\tau$-tilting finite. **Theorem 2** (Theorem [Theorem 35](#result-1){reference-type="ref" reference="result-1"} and Theorem [Theorem 40](#result-2){reference-type="ref" reference="result-2"}). *Let $S^+(n,r)$ be the Borel-Schur algebra.
Then, it is $\tau$-tilting finite if and only if one of the following holds:* - *$n=2$ and $p=0$, or $p=2, r\le 4$ or $p=3, r\le 5$ or $p=5, r\le 6$ or $p\ge 7, r\le p$;* - *$n\ge 3$ and $r=1$.* In order to prove the above result, we first reduce the problem on $S^+(n,r)$ to cases involving small $n$ and $r$ via idempotent truncation, and then check the $\tau$-tilting finiteness of these few cases via quotient methods and the sign decomposition in silting theory. See Subsection [2.3](#subsection2.3){reference-type="ref" reference="subsection2.3"} for several reduction methods. In particular, the symmetry of silting quivers, tame concealed algebras and simply connected algebras all play a crucial role in our proof. We divide the proof into Section [4](#section-4){reference-type="ref" reference="section-4"} and Section [5](#section-5){reference-type="ref" reference="section-5"}. An immediate observation is that a $\tau$-tilting finite Borel-Schur algebra is representation-finite if $p=0$ or $n=2, p\ge 7$ or $n\ge 3$. This observation potentially introduces new examples of algebras for which $\tau$-tilting finiteness is equivalent to representation-finiteness. # Preliminaries {#section-2} Any finite-dimensional algebra $A$ over an algebraically closed field $K$ can be presented as a bound quiver algebra $KQ/I$ with a finite (connected) quiver $Q$ and an admissible ideal $I$. Here, *admissible* stands for the condition $R_Q^m\subseteq I \subseteq R_Q^2$ for some integer $m\ge 2$, where $R_Q$ is the arrow ideal of $KQ$. In this paper, however, we do not always deal with admissible ideals. We will meet a quotient $KQ/J$ of $KQ$, in which $J$ satisfies $R_Q^m\subseteq J$ for some $m\ge 2$ and contains a relation $\alpha-\sum_{i=1}^{k}\lambda_i\omega_i$ for some arrow $\alpha\in Q$, paths $\omega_i\in KQ$ and coefficients $\lambda_i\in K$ such that $\alpha$ does not occur in any of the $\omega_i$. It is obvious that $J$ is not admissible.
If this is the case, the quotient $KQ/J$ is isomorphic to a bound quiver algebra $KQ'/J'$, where $Q'$ is obtained by removing $\alpha$ from $Q$ and $J'$ is obtained by removing the relation $\alpha-\sum_{i=1}^{k}\lambda_i\omega_i$ from $J$ as well as replacing all references to $\alpha$ by $\sum_{i=1}^{k}\lambda_i\omega_i$. For example, $$\vcenter{\xymatrix@C=0.7cm@R=0.7cm{ \circ \ar[d]\ar[r]\ar[dr]\ar@{.}[drr] \ar@{.}@/_0.3cm/[dr] & \circ \ar[r]\ar[dr] \ar@{.}@/^0.3cm/[dr]& \circ \ar[d] \\ \circ\ar[r]& \circ\ar[r]& \circ}} \quad \simeq \quad \vcenter{\xymatrix@C=0.8cm@R=0.7cm{ \circ \ar[d]\ar[r]\ar@{.}[drr] & \circ \ar[r]& \circ \ar[d] \\ \circ\ar[r]& \circ\ar[r]& \circ}}.$$ Here, a dotted line stands for a commutativity relation in an obvious way. Throughout, we will use such a graph to indicate the bound quiver algebra presented by the quiver with commutativity relations. ## $\tau$-tilting theory and silting theory We first review the fundamental definitions in $\tau$-tilting theory. Then, we explain some details of silting theory as well as its connection with $\tau$-tilting theory. We denote by $\operatorname{rad}A$ the Jacobson radical of $A$ and by $A^{\rm op}$ the opposite algebra of $A$. The category of finitely generated right $A$-modules is denoted by $\operatorname{\mathsf{mod}}A$ and the full subcategory of $\operatorname{\mathsf{mod}}A$ consisting of projective $A$-modules is denoted by $\operatorname{\mathsf{proj}}A$. **Definition 1** ([@AIR Definition 0.1]). Let $M\in \operatorname{\mathsf{mod}}A$ and $|M|$ be the number of isomorphism classes of indecomposable direct summands of $M$. 1. $M$ is called $\tau$-rigid if $\operatorname{Hom}_A(M,\tau M)=0$. 2. $M$ is called $\tau$-tilting if it is $\tau$-rigid and $\left | M \right |=\left | A \right |$. 3. $M$ is called support $\tau$-tilting if $M$ is a $\tau$-tilting $\left ( A/A e A\right )$-module for an idempotent $e$ of $A$. 
In this case, $(M,P)$ with $P:=eA$ is called a support $\tau$-tilting pair. We denote by $\mbox{\sf $\tau$-tilt}\hspace{.02in}A$ (respectively, $\mbox{\sf s$\tau$-tilt}\hspace{.02in}A$) the set of isomorphism classes of basic $\tau$-tilting (respectively, basic support $\tau$-tilting) $A$-modules. Obviously, $\mbox{\sf $\tau$-tilt}\hspace{.02in}A\subset \mbox{\sf s$\tau$-tilt}\hspace{.02in}A$. **Definition 2** ([@DIJ-tau-tilting-finite Definition 1.1]). An algebra $A$ is called $\tau$-tilting finite if $\mbox{\sf $\tau$-tilt}\hspace{.02in}A$ is a finite set. Otherwise, $A$ is said to be $\tau$-tilting infinite. It is shown by [@AIR Theorem 0.2] that any $\tau$-rigid $A$-module is a direct summand of some $\tau$-tilting $A$-module. Then, $A$ is $\tau$-tilting finite if and only if $\mbox{\sf s$\tau$-tilt}\hspace{.02in}A$ is finite, if and only if $A$ has only finitely many pairwise non-isomorphic indecomposable $\tau$-rigid modules. The last condition may yield the finiteness of bricks in $\operatorname{\mathsf{mod}}A$, see [@DIJ-tau-tilting-finite Theorem 4.2]. Let $\mathcal{K}_A:=\mathsf{K}^{\rm b}(\operatorname{\mathsf{proj}}A)$ be the homotopy category of bounded complexes of finitely generated projective $A$-modules. For any $T\in \mathcal{K}_A$, we denote by $\operatorname{\mathsf{thick}}T$ the smallest thick subcategory of $\mathcal{K}_A$ containing $T$. Let $\operatorname{\mathsf{add}}(T)$ be the full subcategory of $\mathcal{K}_A$ whose objects are direct summands of finite direct sums of copies of $T$. **Definition 3** ([@AI-silting Definition 2.1]). Let $T\in \mathcal{K}_A$. Then, 1. $T$ is called presilting (pretilting) if $\operatorname{Hom}_{\mathcal{K}_A}(T,T[i])=0$ for any $i>0$ ($i\neq 0$). 2. $T$ is called silting (tilting) if $T$ is presilting (pretilting) and $\operatorname{\mathsf{thick}}T=\mathcal{K}_A$. We recall the construction of silting mutations, following [@AI-silting Definition 2.30].
Let $T=X\oplus Y \in \mathcal{K}_A$ be a basic silting complex with a direct summand $X$ (not necessarily indecomposable). Take a minimal left $\operatorname{\mathsf{add}}(Y)$-approximation $\pi$ and a triangle $$X\overset{\pi}{\longrightarrow}Z\longrightarrow X'\longrightarrow X[1],$$ where $X'$ is the mapping cone of $\pi$. It is shown in [@AI-silting Theorem 2.31] that $\mu_X^-(T):=X'\oplus Y$ is again a basic silting complex in $\mathcal{K}_A$. We call $\mu_X^-(T)$ the left (silting) mutation of $T$ with respect to $X$. Dually, we can define the right (silting) mutation $\mu_X^+(T)$ of $T$ with respect to $X$. If, moreover, $X$ is indecomposable, then $\mu_X^{\pm}(T)$ is said to be *irreducible*. Let $\operatorname{\mathsf{silt}}A$ be the set of isomorphism classes of basic silting complexes in $\mathcal{K}_A$. For any $T,S \in \operatorname{\mathsf{silt}}A$, we define $T\geqslant S$ if $\operatorname{Hom}_{\mathcal{K}_A}(T,S[i])=0$ for any $i>0$. This actually gives a partial order on the set $\operatorname{\mathsf{silt}}A$. It is known from [@AI-silting Theorem 2.35] that $S$ is a left mutation of $T$ if and only if $T$ is a right mutation of $S$, if and only if $T>S$. We now restrict our attention to two-term silting complexes. A complex in $\mathcal{K}_A$ is said to be *two-term* if it is homotopy equivalent to a complex $T$ concentrated in degrees $0$ and $-1$, that is, $$T=(T^{-1}\overset{f}{\longrightarrow } T^0 ):=\xymatrix@C=0.7cm{\cdots\ar[r]&0\ar[r]&T^{-1}\ar[r]^{f}&T^0\ar[r]&0\ar[r]&\cdots}.$$ Let $\mathsf{2\mbox{-}silt}\hspace{.02in}A$ be the set of two-term complexes in $\operatorname{\mathsf{silt}}A$. Obviously, $\mathsf{2\mbox{-}silt}\hspace{.02in}A$ is again a poset under the partial order $\geqslant$ on $\operatorname{\mathsf{silt}}A$.
We then denote by $\mathcal{H}(\mathsf{2\mbox{-}silt}\hspace{.02in}A)$ the Hasse quiver of $\mathsf{2\mbox{-}silt}\hspace{.02in}A$, which is compatible with the left/right mutation of two-term silting complexes. **Proposition 4** ([@AI-silting Lemma 2.25, Theorem 2.27]). *Let $T=(T^{-1}\rightarrow T^0)\in \mathsf{2\mbox{-}silt}\hspace{.02in}A$. Then, we have $\operatorname{\mathsf{add}}A= \operatorname{\mathsf{add}}(T^0 \oplus T^{-1})$ and $\operatorname{\mathsf{add}}(T^0)\cap \operatorname{\mathsf{add}}(T^{-1})=\{0\}$.* Let $\{P_1, P_2,\ldots, P_n\}$ be a complete set of pairwise non-isomorphic indecomposable projective $A$-modules. We denote by $[P_1], [P_2], \ldots, [P_n]$ the isomorphism classes of indecomposable complexes concentrated in degree 0. Then, the classes $[P_1], [P_2], \ldots, [P_n]$ in $\mathcal{K}_A$ form a standard basis of the Grothendieck group $K_0(\mathcal{K}_A)$. If a two-term complex $T \in \mathcal{K}_A$ is written as $$\left ( \bigoplus_{i=1}^n P_i^{\oplus b_i}\longrightarrow \bigoplus_{i=1}^n P_i^{\oplus a_i} \right ),$$ then the class $[T]$ can be identified with an integer vector $$g(T) = (a_1-b_1, a_2-b_2, \ldots, a_n-b_n)\in \mathbb{Z}^n,$$ which is called the $g$-vector of $T$. The above Proposition [Proposition 4](#prop::two-term-silt-empty){reference-type="ref" reference="prop::two-term-silt-empty"} says that a two-term silting complex $T \in \mathsf{2\mbox{-}silt}\hspace{.02in}A$ must be of the form $$\label{equ::form-of-twosilt} \left ( \bigoplus_{i\in I} P_i^{\oplus a_i}\longrightarrow \bigoplus_{j\in J} P_j^{\oplus a_j} \right )$$ with $I\cap J=\emptyset$ and $I\cup J=\{1,2,\ldots, n\}$. Then, each entry of $g(T)$ for $T \in \mathsf{2\mbox{-}silt}\hspace{.02in}A$ is either positive or negative, and cannot be zero. Without confusion, we sometimes display $g$-vectors of two-term silting complexes as the so-called $g$-matrices, in which the rows are given by the $g$-vectors of the indecomposable direct summands, which are indecomposable two-term presilting complexes.
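To make the bookkeeping concrete, here is a small illustrative sketch (the function name `g_vector` is ours, not the paper's) that computes a $g$-vector directly from the multiplicities in the definition above:

```python
# Illustrative sketch: the g-vector of a two-term complex
# T = (⊕_i P_i^{b_i} -> ⊕_i P_i^{a_i}) is g(T) = (a_1 - b_1, ..., a_n - b_n).
def g_vector(a, b):
    """Entrywise difference of degree-0 and degree-(-1) multiplicities."""
    return tuple(ai - bi for ai, bi in zip(a, b))

# E.g., a two-term complex with degree-0 part P_1^{⊕4} and
# degree-(-1) part P_2 ⊕ P_3 ⊕ P_4:
print(g_vector(a=(4, 0, 0, 0), b=(0, 1, 1, 1)))  # (4, -1, -1, -1)
```

Since $g$-vectors are additive over direct sums, summing the rows of a $g$-matrix recovers $g(T)$ for the whole silting complex.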
The following is a critical statement. **Proposition 5** ([@AIR Theorem 5.5]). *Let $T$ be a two-term silting complex in $\mathsf{2\mbox{-}silt}\hspace{.02in}A$. Then, the map $T \mapsto g(T)$ is an injection.* We can now explain the connection between $\tau$-tilting theory and silting theory. This explains why we focus on two-term silting complexes. **Theorem 6** ([@AIR Theorem 3.2]). *There exists a poset isomorphism between $\mbox{\sf s$\tau$-tilt}\hspace{.02in}A$ and $\mathsf{2\mbox{-}silt}\hspace{.02in}A$. More precisely, the bijection is given by $$\xymatrix@C=0.8cm@R=0.2cm{M\ar@{|->}[r]&(P_1\oplus P\overset{\binom{f}{0}}{\longrightarrow} P_0)},$$ where $(M,P)$ is the support $\tau$-tilting pair corresponding to $M$ and $P_1\overset{f}{\longrightarrow }P_0\longrightarrow M\longrightarrow 0$ is the minimal projective presentation of $M$.* We give an example to illustrate the above settings. **Example 7**. Let $A$ be the algebra presented by $$\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 1 \ar[d]_\beta \ar[r]^\alpha \ar@{.}[dr]& 2 \ar[d]^\mu \\ 3\ar[r]_\nu & 4}}.$$ The indecomposable projective $A$-module $P_i$ at vertex $i$ can be displayed as $$P_1\simeq\begin{smallmatrix} & 1 & \\ 2 & & 3\\ & 4 & \end{smallmatrix},\quad P_2\simeq\begin{smallmatrix} 2\\ 4 \end{smallmatrix},\quad P_3\simeq\begin{smallmatrix} 3\\ 4 \end{smallmatrix},\quad P_4\simeq4.$$ A complete list of support $\tau$-tilting $A$-modules is given in [@W-simply Example A.3], and there are 46 support $\tau$-tilting $A$-modules. Choosing, e.g., $\substack{\\2\\}\substack{1\\\ \\ 4}\substack{\\3\\}\oplus \substack{1\\3}\oplus \substack{1\\2}\oplus \substack{\\ \\2}\substack{1\\\ \\}\substack{\\ \\3}\in \mbox{\sf s$\tau$-tilt}\hspace{.02in}A$, the aforementioned correspondences can be illustrated as $\substack{\\2\\}\substack{1\\\ \\ 4}\substack{\\3\\}\oplus \substack{1\\3}\oplus \substack{1\\2}\oplus \substack{\\ \\2}\substack{1\\\ \\}\substack{\\ \\3} \xrightarrow{\text{Th. \ref{theo::iso-tilt-silt}}} \left [\begin{smallmatrix} 0\longrightarrow P_1\\ \oplus \\ P_2\overset{\alpha}{\longrightarrow} P_1\\ \oplus \\ P_3\overset{\beta}{\longrightarrow} P_1\\ \oplus \\ P_4\overset{\alpha\mu}{\longrightarrow} P_1 \end{smallmatrix} \right ] \xrightarrow{\text{Prop. \ref{prop::g-vector-injection}}} \left(\begin{matrix} 1 & 0 & 0 &0\\ 1 & -1 & 0 &0\\ 1 & 0 & -1 &0\\ 1 & 0 & 0 &-1 \end{matrix} \right ) \xrightarrow{1:1} (4,-1,-1,-1)$.  \ We know from [@AIR] that checking the $\tau$-tilting finiteness of $A$ is equivalent to finding either a finite connected component of $\mathcal{H}(\mathsf{2\mbox{-}silt}\hspace{.02in}A)\simeq \mathcal{H}(\mbox{\sf s$\tau$-tilt}\hspace{.02in}A)$ or an infinite left mutation chain in $\mathcal{H}(\mathsf{2\mbox{-}silt}\hspace{.02in}A)\simeq \mathcal{H}(\mbox{\sf s$\tau$-tilt}\hspace{.02in}A)$. This is why we can solve certain small cases by direct calculation. **Proposition 8** ([@AIR Corollary 2.38]). *If the Hasse quiver $\mathcal{H}(\mathsf{2\mbox{-}silt}\hspace{.02in}A)$ contains a finite connected component $\mathcal{C}$, then $\mathcal{H}(\mathsf{2\mbox{-}silt}\hspace{.02in}A)=\mathcal{C}$.* In order to identify a finite connected component of $\mathcal{H}(\mathsf{2\mbox{-}silt}\hspace{.02in}A)$, we usually calculate the left mutations starting from $A$ since $A$ is the maximal element in $\mathsf{2\mbox{-}silt}\hspace{.02in}A$. Although such a left mutation is always silting, it may not always be two-term. Hence, it becomes crucial to determine when such a left mutation is out of $\mathsf{2\mbox{-}silt}\hspace{.02in}A$. This leads us to the following essential statement. **Proposition 9** ([@AIR Corollary 3.8]).
*If $S\in\mathcal{K}_A$ is a two-term presilting complex with $|S|=|A|-1$, then $S$ is a direct summand of exactly two basic two-term silting complexes.* ## Borel-Schur algebras {#subsection-Borel-Schur-alg} Let $n,r$ be two positive integers and $K$ an algebraically closed field of characteristic $p\ge 0$. We take an $n$-dimensional vector space $V$ over $K$ with a basis $\left \{ v_1,v_2,\ldots, v_n \right \}$ and denote by $V^{\otimes r}$ the $r$-fold tensor product $V\otimes_KV\otimes_K\cdots\otimes_KV$. Then, the tensor product $V^{\otimes r}$ admits a $K$-basis given by $$\left \{ v_{i_1}\otimes v_{i_2}\otimes \cdots\otimes v_{i_r}\mid 1\leqslant i_j\leqslant n \ \text{for all}\ 1\leqslant j\leqslant r \right \}.$$ Since the general linear group $\text{GL}_n$ over $K$ has a natural action on $V$, it acts on $V^{\otimes r}$ by $$(v_{i_1}\otimes v_{i_2}\otimes \ldots\otimes v_{i_r})\cdot g=gv_{i_1}\otimes gv_{i_2}\otimes \ldots\otimes gv_{i_r}$$ for any $g\in \text{GL}_n$. This action extends naturally to the group algebra $K\text{GL}_n$ of $\text{GL}_n$, and we obtain a homomorphism of algebras $$\rho: K\text{GL}_n \longrightarrow \operatorname{End}_K (V^{\otimes r}).$$ The image of $\rho$, i.e., $\rho(K\text{GL}_n)$, is called the *Schur algebra*. **Definition 10**. Let $B^+$ be the Borel subgroup of $\text{GL}_n$ consisting of all upper triangular matrices. We call the subalgebra $\rho(KB^+)$ of $\rho(K\text{GL}_n)$ the Borel-Schur algebra and denote it by $S_K^+(n,r)$, or simply by $S^+(n,r)$. The Borel-Schur algebra $S^+(n,r)$ enjoys many nice properties: for example, it is a basic algebra and has finite global dimension. There is also an explicit formula for the multiplication in $S^+(n,r)$, see [@Green-Borel-Schur-alg]. For our purposes, however, we only need the quiver presentation of $S^+(2,r)$. **Proposition 11** ([@ESY-AR-quiver Proposition 2.6 (a)]). *Let $p=0$.
Then $S^+(2,r)$ is isomorphic to the path algebra of the linear quiver $\xymatrix@C=0.7cm{0&1 \ar[l] & 2 \ar[l] & \cdots \ar[l] & r\ar[l]}$.* There are several different approaches to describing the quiver and relations of $S^+(2,r)$ in characteristic $p>0$, see [@Santana-quiver], [@Yudin-Borel-Schur-alg], [@ESY-AR-quiver], etc. We follow the construction in [@Liang-Borel-Schur-alg Chapter 2]. Every positive integer $s$ admits a unique $p$-adic decomposition $s=\sum_{i\geq 0}s_ip^i$ with $0\leq s_i \leq p-1$. Set $[s]_p:=\sum_{i\geq 0}s_i$. We define a quiver $\Delta_r$ whose vertex set is labelled by $\{0, 1,2, \ldots, r\}$, and whose arrows are given by $\alpha_{s,t}: s\rightarrow t$ if $0\le t<s\le r$ and $[s-t]_p=1$. Then, we define a two-sided ideal $\mathcal{I}$ of $K\Delta_r$ generated by $$\left \{\alpha_{s+p^{k+1}, s+(p-1)p^{k}}\alpha_{s+(p-1)p^{k}, s+(p-2)p^{k}}\cdots \alpha_{s+p^{k}, s} \mid s,k \ge 0 ,\ s+p^{k+1} \le r \right \}$$ and $$\left \{\alpha_{s+p^{k}+p^\ell, s+p^{k}}\alpha_{s+p^{k}, s}- \alpha_{s+p^{k}+p^\ell, s+p^{\ell}}\alpha_{s+p^{\ell}, s} \mid s,k\neq \ell \ge 0 , 0\le s+p^k+p^\ell\le r \right \}.$$ **Proposition 12** ([@Liang-Borel-Schur-alg Theorem 2.3.1]). *Let $p>0$. Then, $S^+(2,r)\simeq K\Delta_r/\mathcal{I}$.* ## Reduction on $\tau$-tilting finiteness {#subsection2.3} We need several methods to check the $\tau$-tilting finiteness of algebras. **Proposition 13** ([@DIRRT Theorem 5.12], [@DIJ-tau-tilting-finite Theorem 4.2]). *If $A$ is $\tau$-tilting finite, then* 1. *the quotient algebra $A/I$ is $\tau$-tilting finite for any two-sided ideal $I$ of $A$,* 2. *the idempotent truncation $eAe$ is $\tau$-tilting finite for any idempotent $e$ of $A$.* **Lemma 14**. *Let $m\le n$ and $s\le r$. If $S^+(m,s)$ is $\tau$-tilting infinite, then so is $S^+(n,r)$.* *Proof.* It is shown in [@ESY-rep-type-Borel-Schur-alg Proposition 3.2] that there exists an idempotent $e$ of $S^+(n,r)$ satisfying $S^+(m,s)=eS^+(n,r)e$.
Then, the statement follows from Proposition [Proposition 13](#prop::quotient-idempotent){reference-type="ref" reference="prop::quotient-idempotent"}. ◻ Let $A:=K\Delta$ be the path algebra of an acyclic quiver $\Delta$. Recall from [@HR-tilted] that an $A$-module $T$ is said to be *tilting* if $|T|=|A|$, $\text{Ext}_A^1(T,T)=0$ and the projective dimension of $T$ is at most one. If $T$ is a tilting $A$-module, we call the endomorphism algebra $B:=\operatorname{End}_A T$ a *tilted algebra* of type $\Delta$. If, moreover, $T$ is contained in a preprojective component of the Auslander-Reiten quiver of $A$, we call $B$ a *concealed algebra* of type $\Delta$. The tame concealed algebras are precisely the concealed algebras of type $\widetilde{\mathbb{A}}_n$, $\widetilde{\mathbb{D}}_n (n\geqslant 4)$, $\widetilde{\mathbb{E}}_6$, $\widetilde{\mathbb{E}}_7$ and $\widetilde{\mathbb{E}}_8$. The following result is well known; see, for example, [@Mousavand-biserial-alg Remark 2.9]. **Lemma 15**. *A tame concealed algebra is $\tau$-tilting infinite.* It is worth mentioning that tame concealed algebras have been completely classified by quivers and relations in [@HV-tame-concealed] (see also [@B-critical]). We also mention (see [@W-simply Theorem 1.1]) that a simply connected algebra is $\tau$-tilting finite if and only if it is representation-finite, if and only if it does not contain a concealed algebra of type $\widetilde{\mathbb{D}}_n (n\geqslant 4)$, $\widetilde{\mathbb{E}}_6$, $\widetilde{\mathbb{E}}_7$ or $\widetilde{\mathbb{E}}_8$ as an idempotent truncation. An algebra $A$ is called *sincere* if there is an indecomposable $A$-module $M$ such that all simple $A$-modules appear in $M$ as composition factors. A complete list of representation-finite sincere simply connected algebras is given in [@RT-sincere-simply], in terms of quivers and relations.
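The arrow rule of the quiver $\Delta_r$ above is concretely computable: $[s-t]_p=1$ holds precisely when $s-t$ is a power of $p$. The following minimal sketch (the function names are ours, purely for illustration, not taken from the cited references) enumerates the arrows of $\Delta_r$ for given $p$ and $r$.

```python
def digit_sum(s, p):
    """[s]_p: the sum of the base-p digits of the positive integer s."""
    total = 0
    while s > 0:
        total += s % p
        s //= p
    return total

def arrows(r, p):
    """Arrows s -> t of the quiver Delta_r, i.e. pairs with
    0 <= t < s <= r and [s - t]_p = 1 (equivalently, s - t is a power of p)."""
    return [(s, t) for s in range(1, r + 1) for t in range(s)
            if digit_sum(s - t, p) == 1]
```

For $p=2$ and $r=5$ this recovers the eleven arrows of the quiver of $S^+(2,5)$ used in Section [4](#section-4){reference-type="ref" reference="section-4"}: $\alpha_0,\ldots,\alpha_4$ (differences $1$), $\beta_0,\ldots,\beta_3$ (differences $2$) and $\gamma_0,\gamma_1$ (differences $4$).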
# Sign decomposition {#section-3} This technical tool was first introduced in [@Aoki-sign-decomposition] to classify torsion classes of radical square zero algebras, and was later generalized in [@AHIKM] to study fans and polytopes in $\tau$-tilting theory. Let $A=KQ/I$ be a bound quiver algebra whose vertex set is labeled by $[n]:=\{1,2,\ldots,n\}$. We define $$\mathsf{s}_n:=\{\epsilon=(\epsilon_1, \epsilon_2, \ldots,\epsilon_n) \colon [n]\to \{\pm1\}\}$$ to be the set of all maps from $[n]$ to $\{\pm1\}$. In particular, $\mathsf{s}_n$ carries an involution $\epsilon \mapsto -\epsilon$ defined by $(-\epsilon)_i=-\epsilon_i$. We sometimes write $\epsilon_i=+$ or $-$ instead of $\epsilon_i=+1$ or $-1$ without causing confusion. For $\epsilon\in \mathsf{s}_n$, let $\mathbb{Z}_{\epsilon}^n$ be the orthant of integer vectors whose entries have the same signs as the corresponding entries of $\epsilon$, i.e., $$\mathbb{Z}_{\epsilon}^n:=\left \{(x_1, x_2, \ldots, x_n) \in \mathbb{Z}^n \mid \epsilon_ix_i > 0,\ i\in [n]\right \}.$$ By comparing the signs of entries in each $g$-vector of two-term silting complexes, we may assign a subset of $\mathsf{2\mbox{-}silt}\hspace{.02in}A$ to each map $\epsilon \in \mathsf{s}_n$, that is, $$\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A := \left \{T \in \mathsf{2\mbox{-}silt}\hspace{.02in}A \mid g(T) \in \mathbb{Z}_{\epsilon}^n\right \}.$$ It is obvious from the definition of $\mathbb{Z}_{\epsilon}^n$ that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A \cap \mathsf{2\mbox{-}silt}\hspace{.02in}_{\epsilon'} A=\emptyset$ if $\epsilon\neq \epsilon'$. Moreover, since every entry of a $g$-vector is nonzero (see [\[equ::form-of-twosilt\]](#equ::form-of-twosilt){reference-type="eqref" reference="equ::form-of-twosilt"}), we have $$\mathsf{2\mbox{-}silt}\hspace{.02in}A =\bigsqcup_{\epsilon\in \mathsf{s}_n} \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A.$$ Suppose $[n]=I\cup J$ with $I=\{i\in [n]\mid \epsilon_i=-\}$ and $J=\{j\in [n]\mid \epsilon_j=+\}$. Let $P_i$ be the indecomposable projective $A$-module at vertex $i$. We define $$P_I:=\oplus_{i\in I} P_i \quad \text{and} \quad P_J:=\oplus_{j\in J} P_j.$$ **Proposition 16**.
*We have $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A=\{T\in \mathsf{2\mbox{-}silt}\hspace{.02in}A \mid \mu_{P_I}^-(A)\ge T\ge \mu_{P_J[1]}^+(A[1])\}$.* *Proof.* Let $g(T)=(g_1,g_2, \ldots, g_n)$ be the $g$-vector of $T\in \mathsf{2\mbox{-}silt}\hspace{.02in}A$. As mentioned in [\[equ::form-of-twosilt\]](#equ::form-of-twosilt){reference-type="eqref" reference="equ::form-of-twosilt"}, $g_i$ is either positive or negative. By the definition of silting mutation, we find that $\epsilon_i=-$ if and only if $\mu_{P_i}^-(A)\geq T$, for each $i\in[n]$. Then, the upper bound follows from the fact that $\epsilon_i=-$ for all $i\in I$. We may obtain the lower bound dually. ◻ It is worth noting that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ could be a finite set even if $\mathsf{2\mbox{-}silt}\hspace{.02in}A$ is infinite. Since we need to do direct calculations in some small cases and the calculation of left mutations in silting theory is very complicated, it is better to use the language of $\tau$-tilting theory. We denote by $\mathcal{F}:\mathsf{2\mbox{-}silt}\hspace{.02in}A \overset{\sim}{\longrightarrow} \mbox{\sf s$\tau$-tilt}\hspace{.02in}A$ the poset isomorphism as mentioned in Theorem [Theorem 6](#theo::iso-tilt-silt){reference-type="ref" reference="theo::iso-tilt-silt"}, and then, set $$\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A:=\{\mathcal{F}(T)\mid T\in \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A\}\cap \mbox{\sf s$\tau$-tilt}\hspace{.02in}A.$$ We may define $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$ similarly. Note that both $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$ and $\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A$ are again posets. **Proposition 17**. *Let $\epsilon\in \mathsf{s}_n$. 
If the Hasse quiver $\mathcal{H}(\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A)$ (or $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$) contains a finite connected component $\mathcal{C}$, then $\mathcal{C}$ exhausts all elements of $\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A$ (or $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$). In this case, the poset $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ is finite.* *Proof.* As we mentioned in the previous section, $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$ is finite if and only if $\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A$ is finite. Then, the statement follows from Proposition [Proposition 8](#prop::finite-connected-component){reference-type="ref" reference="prop::finite-connected-component"}. ◻ **Example 18**. Let $A$ be the path algebra of the bipartite quiver: $\xymatrix@C=0.8cm{1 \ar[r]^{\alpha_1} & 2 & 3 \ar[l]_{\alpha_2} \ar[r]^{\alpha_3} & 4}$. We take $I=\{2,4\}$ and $J=\{1,3\}$, i.e., $\epsilon=(+,-,+,-)$. Then, $$\mu_{P_I}^-(A)= \left [\begin{smallmatrix} \xymatrix@C=1.2cm{P_2\ar[r]^-{(\alpha_1, \alpha_2)^t}& P_1\oplus P_3}\\ \oplus \\ \xymatrix@C=1.2cm{P_4\ar[r]^-{\alpha_3}& P_3}\\ \oplus \\ \xymatrix@C=1.3cm{0\ar[r] & P_1\oplus P_3} \end{smallmatrix} \right ] \quad \text{and} \quad \mu_{P_J[1]}^+(A[1])= \left [\begin{smallmatrix} \xymatrix@C=1.2cm{P_2\oplus P_4\ar[r]^-{(\alpha_2, \alpha_3)}& P_3}\\ \oplus \\ \xymatrix@C=1.4cm{P_2\ar[r]^-{\alpha_1}& P_1}\\ \oplus \\ \xymatrix@C=1.3cm{P_2\oplus P_4\ar[r] &0} \end{smallmatrix} \right ].$$ The Hasse quiver $\mathcal{H}(\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A)$ has the following connected component:  , in which $\xymatrix{*+[o][F]{-}}$ and $\xymatrix{*+[F]{-}}$ indicate the elements in $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$ and $\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A$, respectively. 
(Note that $\mathcal{H}(\mbox{\sf s$\tau$-tilt}\hspace{.02in}_\epsilon A)$ contains both the maximal and minimal elements, but $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$ may only contain the maximal element.) The information on each vertex is given below. 1. $\mu_{P_I}^-(A) \mapsto$ $\substack{1\\2}\oplus \substack{1\\\ \\}\substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{3\\2}$ $\mapsto$ (( 1, 0, 0, 0 ), ( 1, -1, 1, 0 ), ( 0, 0, 1, 0 ), ( 0, 0, 1, -1 )), 2. $\substack{1\\2}\oplus \substack{1\\\ \\}\substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{3\\2}$ $\mapsto$ (( 1, 0, 0, 0 ), ( 1, -1, 1, 0 ), ( 1, -1, 1, -1 ), ( 0, 0, 1, -1 )), 3. $\substack{3\\4}\oplus \substack{1\\\ \\}\substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{3\\2}$ $\mapsto$ (( 0, -1, 1, 0 ), ( 1, -1, 1, 0 ), ( 0, 0, 1, 0 ), ( 0, 0, 1, -1 )), 4. $\substack{1\\2}\oplus \substack{1\\\ \\}\substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{1}$ $\mapsto$ (( 1, 0, 0, 0 ), ( 1, -1, 1, 0 ), ( 1, -1, 1, -1 ), ( 1, -1, 0, 0 )), 5. $(\substack{1\\2}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{3\\2}, \ \substack{4})$ $\mapsto$ (( 1, 0, 0, 0 ), ( 1, -1, 1, -1 ), ( 0, 0, 1, -1 ), ( 0, 0, 0, -1 )), 6. $\substack{3\\4}\oplus \substack{1\\\ \\}\substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{3\\2}$ $\mapsto$ (( 0, -1, 1, 0 ), ( 1, -1, 1, 0 ), ( 1, -1, 1, -1 ), ( 0, 0, 1, -1 )), 7. 
$\substack{3\\4}\oplus \substack{3}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{3\\2}$ $\mapsto$ (( 0, -1, 1, 0 ), ( 0, -1, 1, -1 ), ( 1, -1, 1, -1 ), ( 0, 0, 1, -1 )), 8. $\substack{3\\4}\oplus \substack{1\\\ \\}\substack{\\ \\2}\substack{3\\\ \\}\substack{\\ \\4}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{1}$ $\mapsto$ (( 0, -1, 1, 0 ), ( 1, -1, 1, 0 ), ( 1, -1, 1, -1 ), ( 1, -1, 0, 0 )), 9. $(\substack{3}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{3\\2}, \ \substack{4})$ $\mapsto$ (( 0, -1, 1, -1 ), ( 1, -1, 1, -1 ), ( 0, 0, 1, -1 ), ( 0, 0, 0, -1 )), 10. $\substack{3\\4}\oplus \substack{3}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{1}$ $\mapsto$ (( 0, -1, 1, 0 ), ( 0, -1, 1, -1 ), ( 1, -1, 1, -1 ), ( 1, -1, 0, 0 )), 11. $(\substack{1\\2}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{1}, \ \substack{4})$ $\mapsto$ (( 1, 0, 0, 0 ), ( 1, -1, 1, -1 ), ( 1, -1, 0, 0 ), ( 0, 0, 0, -1 )), 12. $(\substack{3}\oplus \substack{1\\ \\ }\substack{ \\\ \\ 2}\substack{3\\ \\ }\oplus \substack{1}, \ \substack{4})$ $\mapsto$ (( 0, -1, 1, -1 ), ( 1, -1, 1, -1 ), ( 1, -1, 0, 0 ), ( 0, 0, 0, -1 )), 13. $(\substack{3\\4}\oplus \substack{3}\oplus \substack{1}, \ \substack{2})$ $\mapsto$ (( 0, -1, 1, 0 ), ( 0, -1, 1, -1 ), ( 1, -1, 0, 0 ), ( 0, -1, 0, 0 )), 14. $\mu_{P_J[1]}^+(A[1])\mapsto (\substack{3}\oplus \substack{1}, \ \substack{2}\oplus \substack{4})$ $\mapsto$ (( 0, -1, 1, -1 ), ( 1, -1, 0, 0 ), ( 0, -1, 0, 0 ), ( 0, 0, 0, -1 )). We next are aiming to characterize the elements in $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ via a simpler algebra $A_\epsilon$. Let $e_i$ be the primitive idempotent of $A$ associated with the vertex $i$. 
For each $\epsilon\in \mathsf{s}_n$, set $$e_{\epsilon,+} := \textstyle\sum\limits_{\epsilon_i=+} e_i \quad \text{ and } \quad e _{\epsilon,-} := \textstyle\sum\limits_{\epsilon_i=-} e_i.$$ Then, let $J_{\epsilon,+}$ (resp., $J_{\epsilon,-}$) be the two-sided ideal of $e_{\epsilon,+}Ae_{\epsilon,+}$ (resp., $e_{\epsilon,-}Ae_{\epsilon,-}$) generated by all $x\in \operatorname{rad}(e_{\epsilon,+}Ae_{\epsilon,+})$ satisfying $xy=0$ (resp., $yx=0$) for any $y\in e_{\epsilon,+}Ae_{\epsilon,-}$. **Definition 19**. We define an upper triangular matrix algebra as follows. $$A_{\epsilon} := \begin{pmatrix} e_{\epsilon,+} A e_{\epsilon,+}/J_{\epsilon,+} & e_{\epsilon,+} A e_{\epsilon,-} \\ &\\ 0 & e_{\epsilon,-} A e_{\epsilon,-}/J_{\epsilon,-} \end{pmatrix}.$$ It is better to give an example to illustrate the structure of $A_\epsilon$. **Example 20**. Let $A$ be the path algebra of the linear quiver: $\xymatrix@C=0.7cm{1 \ar[r]^{\alpha_1} & 2 \ar[r]^{\alpha_2} & 3 \ar[r]^{\alpha_3} & 4}$. - If $\epsilon=(+,+,+,+)$, then $e_{\epsilon,+} A e_{\epsilon,-}=0$ and $\alpha_i\in J_{\epsilon,+}$. Thus, $A_\epsilon=K\oplus K \oplus K \oplus K$. - If $\epsilon=(+,-,+,+)$, then $e_{\epsilon,+} A e_{\epsilon,-}=\{\alpha_1\}$, $\alpha_1\alpha_2, \alpha_1\alpha_2\alpha_3, \alpha_3\in J_{\epsilon,+}$, and $\alpha_2, \alpha_2\alpha_3$ are removed. Thus, $A_\epsilon=K(\xymatrix@C=0.7cm{1 \ar[r]^{\alpha_1} & 2}) \oplus K \oplus K$. - If $\epsilon=(+,-,+,-)$, then $e_{\epsilon,+} A e_{\epsilon,-}=\{\alpha_1, \alpha_3, \alpha_1\alpha_2\alpha_3\}$, $e_{\epsilon,+} A e_{\epsilon,+}=\{ e_1,e_3, \alpha_1\alpha_2 \}$, $e_{\epsilon,-} A e_{\epsilon,-}=\{ e_2,e_4, \alpha_2\alpha_3 \}$, $J_{\epsilon,+}=J_{\epsilon,-}=0$ and $\alpha_2$ is removed. Thus, $A_\epsilon$ is exactly the algebra given in Example [Example 7](#example::square){reference-type="ref" reference="example::square"}. It turns out that $A_\epsilon$ shares the same vertex set $\{1,2, \ldots, n\}$ with $A$. 
Furthermore, **Proposition 21**. *Let $\epsilon=(\epsilon_1, \epsilon_2, \ldots, \epsilon_n) \in \mathsf{s}_n$ and $i$ a vertex in the quiver of $A$.* 1. *If $i$ is a source and $\epsilon_i=-$, then $i$ is an isolated vertex in the quiver of $A_\epsilon$.* 2. *If $i$ is a sink and $\epsilon_i=+$, then $i$ is an isolated vertex in the quiver of $A_\epsilon$.* *Proof.* This is obvious from the definition of $A_\epsilon$. ◻ The above Proposition [Proposition 21](#prop::source-sink){reference-type="ref" reference="prop::source-sink"} is useful when one deals with one-point extension algebras. Recall that the *one-point extension* $A[M]$ of $A$ by an $A$-module $M$ is defined as $$A[M]:=\begin{bmatrix} A & 0\\ M & K \end{bmatrix}.$$ The quiver of $A[M]$ then contains the quiver of $A$ as a full subquiver with exactly one additional extension vertex, which is always a source. Hence, $(A[M])_\epsilon=(K\oplus A)_\epsilon$ if the sign of the extension vertex is $-$. It is shown in [@ESY-rep-type-Borel-Schur-alg] that $S^+(2,r)$ is a one-point extension of $S^+(2,r-1)$. More precisely, if $P_r$ denotes the indecomposable projective module of $S^+(2,r)$ at vertex $r$, then $\operatorname{rad}P_r$ is an indecomposable module over $S^+(2,r-1)$ and $$S^+(2,r)=\begin{bmatrix} S^+(2,r-1) &0\\ \operatorname{rad}P_r&K \end{bmatrix}.$$ One finds that the vertex corresponding to $\operatorname{rad}P_r$ in the quiver of $S^+(2,r)$ is a source. We have the following reduction result for Borel-Schur algebras. **Proposition 22**. *Let $\epsilon=(\epsilon_0, \epsilon_1, \ldots, \epsilon_r)$. If $\epsilon_r=-$, then $S^+(2,r)_\epsilon=(S^+(2,r-1)\oplus K)_\epsilon$.* One of the most important properties of sign decomposition is stated as follows. **Proposition 23** ([@AHIKM Theorem 4.26]).
*For any $\epsilon \in \mathsf{s}_n$, there is an isomorphism $$\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A \overset{\sim}{\longrightarrow} \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon},$$ which preserves $g$-vectors of two-term silting complexes.* An immediate corollary of Proposition [Proposition 23](#prop::Aepsilon){reference-type="ref" reference="prop::Aepsilon"} is that $A$ is $\tau$-tilting finite if $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}$ is a finite set for all $\epsilon\in \mathsf{s}_n$. Even if $A$ has a complicated structure, the finiteness of $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}$ can be easily verified for most maps in $\mathsf{s}_n$, with only a few cases remaining. Moreover, these remaining cases may be resolved by the tilting mutation process. (This approach has been applied to the block algebras of Schur algebras; see [@Aoki-Wang].) Fix a vertex $i\in [n]$, and let $\Phi:=\{\epsilon\in \mathsf{s}_n \mid \epsilon_i=-\}$ and $-\Phi:=\{-\epsilon \mid \epsilon\in \Phi\}$. We define $$\mathsf{2\mbox{-}silt}\hspace{.02in}_\Phi A := \bigsqcup_{\epsilon\in \Phi} \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A.$$ **Proposition 24** ([@Aoki-Wang Proposition 2.20]). *Let $B:=\operatorname{End}\mu_{P_i}^-(A)$ be the endomorphism algebra of the left mutation $\mu_{P_i}^-(A)$ of $A$ with respect to the indecomposable projective module $P_i$. If $\mu_{P_i}^-(A)$ is tilting, then there is a poset isomorphism $$\label{equ::endo} \mathsf{2\mbox{-}silt}\hspace{.02in}_\Phi A \overset{\sim}{\longrightarrow} \mathsf{2\mbox{-}silt}\hspace{.02in}_{-\Phi} B.$$* **Remark 25**. The poset isomorphism between $\mathsf{2\mbox{-}silt}\hspace{.02in}_\Phi A$ and $\mathsf{2\mbox{-}silt}\hspace{.02in}_{-\Phi} B$ first appeared in [@AMN Lemma 4.6], in which the authors deal with Brauer tree algebras. Then, a similar proof for arbitrary algebras is given in [@Aoki-Wang Proposition 2.20].
See also [@AHIKM Theorem 4.28] for a proof in terms of $g$-fans. It is worth mentioning that the poset isomorphism in Proposition [Proposition 24](#prop::Aepsilon-tilting){reference-type="ref" reference="prop::Aepsilon-tilting"} does not simply reverse the signs of $g$-vectors; that is, in general $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A \not \simeq \mathsf{2\mbox{-}silt}\hspace{.02in}_{-\epsilon} B$ for $\epsilon\in \Phi$. However, we may describe the relation precisely as follows. **Proposition 26**. *Suppose $\mu_{P_i}^-(A)$ is tilting. Let $\mathcal{G}_i$ be the $g$-matrix of $\mu_{P_i}^-(A)$. For any $T\in \mathsf{2\mbox{-}silt}\hspace{.02in}_\Phi A$, the poset isomorphism in [\[equ::endo\]](#equ::endo){reference-type="eqref" reference="equ::endo"} is given by $g(T)\mapsto g(T)\cdot \mathcal{G}_i$.* *Proof.* It is known from Rickard [@Rickard-tilting-complex] that there is a triangle equivalence from $\mathsf{D}^{\rm b}(\operatorname{\mathsf{mod}}A)$ to $\mathsf{D}^{\rm b}(\operatorname{\mathsf{mod}}B)$ given by $\mu_{P_i}^-(A)\mapsto B$. Here, $\mathsf{D}^{\rm b}(\operatorname{\mathsf{mod}}A)$ is the bounded derived category of $\operatorname{\mathsf{mod}}A$. In this way, the indecomposable direct summand $S_i$ of $\mu_{P_i}^-(A)$ is mapped to the indecomposable projective module $Q_i$ of $B$. The triangle equivalence naturally induces an isomorphism $K_0(\mathcal{K}_{A})\rightarrow K_0(\mathcal{K}_B)$ of Grothendieck groups by $[S_i] \mapsto [Q_i]$. Then, the statement follows from Proposition [Proposition 5](#prop::g-vector-injection){reference-type="ref" reference="prop::g-vector-injection"}. ◻ We will provide an example in Section [5](#section-5){reference-type="ref" reference="section-5"} to explain how to apply Proposition [Proposition 26](#prop::sign-tilting){reference-type="ref" reference="prop::sign-tilting"}.
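In concrete computations, the map of Proposition [Proposition 26](#prop::sign-tilting){reference-type="ref" reference="prop::sign-tilting"} is plain integer arithmetic: a $g$-vector is treated as a row vector and multiplied on the right by $\mathcal{G}_i$. A minimal sketch (the matrix and vector below are hypothetical placeholders, not the $g$-data of any particular algebra):

```python
def right_multiply(g, G):
    """Right-multiply a g-(row-)vector g by a g-matrix G, given as a list of rows."""
    n = len(G)
    return [sum(g[k] * G[k][j] for k in range(n)) for j in range(n)]

# Hypothetical data: G plays the role of the g-matrix of mu_{P_i}^-(A),
# and g plays the role of g(T) for some T in 2-silt_Phi A.
G = [[-1, 1, 0],
     [ 0, 1, 0],
     [ 0, 0, 1]]
g = [-1, 2, -1]

# g-vector of the image of T under the poset isomorphism of Proposition 26
image = right_multiply(g, G)
```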
At the end of this section, we reformulate the anti-isomorphism between $\mathsf{2\mbox{-}silt}\hspace{.02in}A$ and $\mathsf{2\mbox{-}silt}\hspace{.02in}A^{\rm op}$ and the anti-automorphism of $\mathsf{2\mbox{-}silt}\hspace{.02in}A$ as follows. Let $(-)^\ast:=\operatorname{Hom}_A(-,A)$. For a given $T\in \mathsf{2\mbox{-}silt}\hspace{.02in}A$, we have $$T=\left ( 0\longrightarrow \bigoplus_{i\in I} P_i^{\oplus a_i}\overset{f}{\longrightarrow} \bigoplus_{j\in J} P_j^{\oplus a_j} \longrightarrow 0\right )$$ with $I\cap J=\emptyset$ and $I\cup J=[n]$ (see [\[equ::form-of-twosilt\]](#equ::form-of-twosilt){reference-type="eqref" reference="equ::form-of-twosilt"}). Then, $$T^\ast=\left ( 0\longrightarrow 0\longrightarrow \bigoplus_{j\in J} (P_j^\ast)^{\oplus a_j} \overset{f^\ast}{\longrightarrow} \bigoplus_{i\in I} (P_i^\ast)^{\oplus a_i}\right ).$$ If there exists an algebra isomorphism $\sigma: A^{\rm op}\rightarrow A$, then $\sigma$ induces a permutation of $\{1,2, \ldots, n\}$, still denoted by $\sigma$, via $\sigma(e_i^\ast)=e_{\sigma(i)}$. We then obtain $$\sigma(T^\ast)=\left ( 0\longrightarrow 0\longrightarrow \bigoplus_{j\in J} (P_{\sigma(j)})^{\oplus a_{\sigma(j)}} \overset{\sigma(f^\ast)}{\longrightarrow} \bigoplus_{i\in I} (P_{\sigma(i)})^{\oplus a_{\sigma(i)}}\right ),$$ which is again a silting complex in $\mathcal{K}_A$. Set $S_\sigma:=[1]\circ \sigma \circ (-)^\ast$. **Proposition 27** ([@AIR Theorem 2.14]). *For any $\epsilon\in \mathsf{s}_n$, the duality $(-)^\ast$ induces a bijection $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A \overset{1:1}{\longrightarrow} \mathsf{2\mbox{-}silt}\hspace{.02in}_{-\epsilon} A^{\rm op}$.* **Proposition 28** ([@Aihara-Wang Theorem 1.2]). *Let $\epsilon=(\epsilon_1, \epsilon_2, \ldots, \epsilon_n) \in \mathsf{s}_n$.
Then, the functor $S_\sigma$ gives a bijection $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A \overset{1:1}{\longrightarrow} \mathsf{2\mbox{-}silt}\hspace{.02in}_{-\sigma(\epsilon)} A$, where $\sigma(\epsilon)=(\epsilon_{\sigma(1)}, \epsilon_{\sigma(2)}, \ldots, \epsilon_{\sigma(n)})$.* **Corollary 29**. *Let $\epsilon=(\epsilon_1, \epsilon_2, \ldots, \epsilon_n) \in \mathsf{s}_n$. If there is an algebra isomorphism $\sigma: A^{\rm op}\rightarrow A$, then we have $A_\epsilon \simeq (A_{-\sigma(\epsilon)})^{\rm op}$.* # $\tau$-tilting infinite Borel-Schur algebras {#section-4} In this section, we show that most representation-infinite Borel-Schur algebras are $\tau$-tilting infinite. We will frequently use Proposition [Proposition 13](#prop::quotient-idempotent){reference-type="ref" reference="prop::quotient-idempotent"}, Lemma [Lemma 14](#lem::reduction){reference-type="ref" reference="lem::reduction"} and Lemma [Lemma 15](#lem::tame-concealed){reference-type="ref" reference="lem::tame-concealed"} without repeating the citations. **Proposition 30**. *Let $p\ge 0$ and $n\ge 3$. Then $S^+(n,r)$ is $\tau$-tilting infinite for any $r\ge 2$.* *Proof.* By Lemma [Lemma 14](#lem::reduction){reference-type="ref" reference="lem::reduction"}, it suffices to consider $S^+(3,2)$. By [@ESY-rep-type-Borel-Schur-alg Section 5], $S^+(3,2)$ contains a tame concealed algebra of type $\widetilde{\mathbb{A}}_3$, i.e., the path algebra of the quiver: $$\vcenter{\xymatrix@C=0.7cm@R=0.7cm{\circ \ar[d] \ar[r] &\circ \ar[d] \\ \circ\ar[r] & \circ}},$$ as an idempotent truncation. Then, $S^+(3,2)$ is $\tau$-tilting infinite. ◻ **Proposition 31**. *Let $p=2$. Then $S^+(2,r)$ is $\tau$-tilting infinite for any $r\ge 5$.* *Proof.* We show that $S^+(2,5)$ is $\tau$-tilting infinite.
By Proposition [Proposition 12](#prop::quiver-(2,r)){reference-type="ref" reference="prop::quiver-(2,r)"}, $S^+(2,5)$ over $p=2$ is isomorphic to $KQ/I$ with $$Q:\xymatrix@C=1cm{ 5\ar[r]^{\alpha_4}\ar@/_0.7cm/[rr]_{\beta_3}\ar@/^1.4cm/[rrrr]^{\gamma_1}& 4\ar[r]^{\alpha_3}\ar@/^0.7cm/[rr]^{\beta_2}\ar@/_1.4cm/[rrrr]_{\gamma_0}& 3\ar[r]^{\alpha_2}\ar@/_0.7cm/[rr]_{\beta_1}& 2\ar[r]^{\alpha_1}\ar@/^0.7cm/[rr]^{\beta_0}& 1\ar[r]^{\alpha_0}& 0}$$ and $$I: \left<\begin{matrix} \alpha_4\alpha_3, \alpha_3\alpha_2, \alpha_2\alpha_1, \alpha_1\alpha_0, \beta_3\beta_1, \beta_2\beta_0, \gamma_1\alpha_0-\alpha_4\gamma_0\\ \alpha_4\beta_2-\beta_3\alpha_2, \alpha_3\beta_1-\beta_2\alpha_1, \alpha_2\beta_0-\beta_1\alpha_0 \end{matrix}\right>.$$ It is obvious that $S^+(2,5)$ over $p=2$ contains a tame concealed algebra of type $\widetilde{\mathbb{A}}_5$, i.e., the path algebra of the bipartite quiver: $$\vcenter{\xymatrix@C=0.7cm@R=0.7cm{ 0&2 \ar[l]\ar[r] &1 \\ 4\ar[u]\ar[r] & 3&5\ar[u]\ar[l] }},$$ as a quotient algebra. Then, it is $\tau$-tilting infinite. ◻ **Proposition 32**. *Let $p=3$. 
Then $S^+(2,r)$ is $\tau$-tilting infinite for any $r\ge 6$.* *Proof.* By Proposition [Proposition 12](#prop::quiver-(2,r)){reference-type="ref" reference="prop::quiver-(2,r)"}, $S^+(2,6)$ over $p=3$ is isomorphic to $KQ/I$ with $$Q:\xymatrix@C=1cm{ 6\ar[r]^{\alpha_5}\ar@/^0.8cm/[rrr]^{\beta_3} & 5\ar[r]^{\alpha_4}\ar@/_0.8cm/[rrr]_{\beta_2}& 4\ar[r]^{\alpha_3}\ar@/^0.8cm/[rrr]^{\beta_1}& 3\ar[r]^{\alpha_2}\ar@/_0.8cm/[rrr]_{\beta_0}& 2\ar[r]^{\alpha_1}& 1\ar[r]^{\alpha_0}& 0}$$ and $$I: \left< \alpha_5\alpha_4\alpha_3, \alpha_4\alpha_3\alpha_2, \alpha_3\alpha_2\alpha_1, \alpha_2\alpha_1\alpha_0, \alpha_5\beta_2-\beta_3\alpha_2, \alpha_4\beta_1-\beta_2\alpha_1, \alpha_3\beta_0-\beta_1\alpha_0 \right>.$$ It turns out that $S^+(2,6)$ over $p=3$ contains a tame concealed algebra of type $\widetilde{\mathbb{D}}_6$, i.e., $$\xymatrix@C=0.4cm@R=0.2cm{ &4\ar[dl]\ar[dr]\ar@{.}[dd] &&6\ar[dr]\ar[dl]\ar@{.}[dd]&\\ 1 \ar[dr] &&3 \ar[dr]\ar[dl]&&5\ar[dl]\\ &0&&2&}$$ as a quotient algebra, and it is $\tau$-tilting infinite. ◻ **Proposition 33**. *Let $p=5$. 
Then $S^+(2,r)$ is $\tau$-tilting infinite for any $r\ge 7$.* *Proof.* By Proposition [Proposition 12](#prop::quiver-(2,r)){reference-type="ref" reference="prop::quiver-(2,r)"}, $S^+(2,7)$ over $p=5$ is isomorphic to $KQ/I$ with $$Q:\xymatrix@C=1cm{ 7\ar[r]^{\alpha_6}\ar@/_1cm/[rrrrr]_{\beta_2} & 6\ar[r]^{\alpha_5}\ar@/^1cm/[rrrrr]^{\beta_1}& 5\ar[r]^{\alpha_4}\ar@/_1cm/[rrrrr]_{\beta_0}& 4\ar[r]^{\alpha_3}& 3\ar[r]^{\alpha_2}& 2\ar[r]^{\alpha_1}& 1\ar[r]^{\alpha_0}& 0}$$ and $$I: \left< \alpha_6\alpha_5\alpha_4\alpha_3\alpha_2, \alpha_5\alpha_4\alpha_3\alpha_2\alpha_1, \alpha_4\alpha_3\alpha_2\alpha_1\alpha_0, \alpha_6\beta_1-\beta_2\alpha_1, \alpha_5\beta_0-\beta_1\alpha_0 \right>.$$ It turns out that $S^+(2,7)$ over $p=5$ contains a tame concealed algebra of type $\widetilde{\mathbb{E}}_7$, i.e., $$\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ & 7\ar[r]\ar[d]\ar@{.}[dr]& 6 \ar[d]\ar[r]\ar@{.}[dr]& 5\ar[r]\ar[d]& 4 \\ 3\ar[r]& 2\ar[r]& 1\ar[r]&0& }},$$ as a quotient algebra, and it is $\tau$-tilting infinite. ◻ **Proposition 34**. *Let $p\ge 7$. Then $S^+(2,r)$ is $\tau$-tilting infinite for any $r\ge p+1$.* *Proof.* It is shown in [@ESY-AR-quiver Theorem 6.6] that $S^+(2,p+1)$ over $p\ge 7$ contains a concealed algebra of type $\widetilde{\mathbb{E}}_7$, i.e., $$\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ && p+1\ar[d]\ar[r]\ar@{.}[dr]& p\ar[r]\ar[d]& p-1\ar[r]& p-2\\ 3\ar[r]& 2\ar[r]& 1\ar[r]&0 && }},$$ as an idempotent truncation, and it is $\tau$-tilting infinite. ◻ In summary, we have the following result. **Theorem 35**. *Let $p\ge 0$. Then, $S^+(n,r)$ is $\tau$-tilting infinite if* - *$n\ge 3$ and $r\ge 2$; or* - *$n=2$ and $p=2, r\ge 5$, or $p=3, r\ge 6$, or $p=5, r\ge 7$, or $p\ge 7, r\ge p+1$.* # $\tau$-tilting finite Borel-Schur algebras {#section-5} Representation-finite Borel-Schur algebras are naturally $\tau$-tilting finite.
By combining this fact with the $\tau$-tilting infinite cases established in the previous section, we are left with the remaining cases: $S^+(2,4)$ over $p=2$, $S^+(2,5)$ over $p=3$ and $S^+(2,6)$ over $p=5$. In this section, we show that all these cases are $\tau$-tilting finite. We introduce some technical methods in advance. - Let $w$ be a maximal path of $A=KQ/I$, i.e., a path $w$ such that $w\not\in I$ but $\alpha w, w\alpha\in I$ for all arrows $\alpha\in Q$. If $w=\alpha_1\alpha_2\cdots\alpha_k\in e_{\epsilon,+} A e_{\epsilon,-}$ and $\alpha_i$ is removed in $A_\epsilon$ for some $i$, then we should replace the minimal subpath containing $\alpha_i$ by a new arrow $\beta$ in $A_\epsilon$. If $w\in e_{\epsilon,+} A e_{\epsilon,+}$ (or $e_{\epsilon,-} A e_{\epsilon,-}$), then $w\in J_{\epsilon, +}$ (or $J_{\epsilon, -}$) by the definition. - Let $A$ be the path algebra of $\xymatrix@C=0.5cm{1 \ar[r] & 2 \ar[r] & 3 \ar[r] & 4}$ and $B$ the algebra presented by $\vcenter{\xymatrix@C=0.5cm@R=0.3cm{ 1 \ar[d] \ar[r] \ar@{.}[dr]& 3 \ar[d] \\ 2 \ar[r] & 4}}.$ Then, $A_{(+,-,+,-)}\simeq B_{(+,-,+,-)}$ as explained in Example [Example 20](#example::linear-quiver){reference-type="ref" reference="example::linear-quiver"}. If the quiver of $A$ or $B$ appears as a subquiver in the quiver of $S^+(n,r)$, then we may exchange $A_{(+,-,+,-)}$ and $B_{(+,-,+,-)}$ (in an appropriate way) to reduce the complexity. - Let $\sigma: A^{\rm op}\rightarrow A$ be an algebra isomorphism. We use $\epsilon \sim_\sigma \epsilon'$ to indicate $\epsilon'=-\sigma(\epsilon)$, and then, $A_\epsilon\simeq (A_{\epsilon'})^{\rm op}$ by Corollary [Corollary 29](#cor::epsilon-sigma){reference-type="ref" reference="cor::epsilon-sigma"}. - We use the *String Applet* to check the representation type of gentle algebras and special biserial algebras. **Proposition 36**. *Let $p=2$. 
Then $S^+(2,4)$ is $\tau$-tilting finite.* *Proof.* By Proposition [Proposition 12](#prop::quiver-(2,r)){reference-type="ref" reference="prop::quiver-(2,r)"}, $S^+(2,4)$ over $p=2$ is isomorphic to $A:=KQ/I$ with $$Q:\xymatrix@C=1cm{ 4\ar[r]^{\alpha_3}\ar@/^0.7cm/[rr]^{\beta_2}\ar@/_1.4cm/[rrrr]_{\gamma_0}& 3\ar[r]^{\alpha_2}\ar@/_0.7cm/[rr]_{\beta_1}& 2\ar[r]^{\alpha_1}\ar@/^0.7cm/[rr]^{\beta_0}& 1\ar[r]^{\alpha_0}& 0} \quad \text{and} \quad I: \left<\begin{matrix} \alpha_3\alpha_2, \alpha_2\alpha_1, \alpha_1\alpha_0, \beta_2\beta_0, \\ \alpha_3\beta_1-\beta_2\alpha_1, \alpha_2\beta_0-\beta_1\alpha_0 \end{matrix}\right>.$$ We check the $\tau$-tilting finiteness of $A_\epsilon$ for $\epsilon=(\epsilon_0, \epsilon_1, \ldots, \epsilon_4)\in \mathsf{s}_5$ case by case. If $\epsilon_0=+$ or $\epsilon_4=-$, we have $A_\epsilon\simeq (K\oplus S^+(2,3))_\epsilon$ by Proposition [Proposition 21](#prop::source-sink){reference-type="ref" reference="prop::source-sink"} and Proposition [Proposition 22](#prop::epsilon-(2,r)){reference-type="ref" reference="prop::epsilon-(2,r)"}. Then, $A_\epsilon$ is $\tau$-tilting finite since $S^+(2,3)$ over $p=2$ is representation-finite. Suppose $\epsilon_0=-$ and $\epsilon_4=+$. Since there is an algebra isomorphism $\sigma: A^{\rm op}\rightarrow A$ sending $e_i$ to $e_{4-i}$, it suffices to consider the following 4 cases by using Corollary [Corollary 29](#cor::epsilon-sigma){reference-type="ref" reference="cor::epsilon-sigma"}. $$(-,+,+,+,+) \sim_\sigma (-,-,-,-,+), \quad (-,+,+,-,+) \sim_\sigma (-,+,-,-,+),$$ $$(-,+,-,+,+) \sim_\sigma (-,-,+,-,+), \quad (-,-,-,+,+) \sim_\sigma (-,-,+,+,+).$$ Let $P_i$ be the indecomposable projective $A$-module at vertex $i$. Then, $\operatorname{Hom}_A(P_i, P_j)$ is given in the following table. 
$\operatorname{Hom}$   0       1            2            3                   4
---------------------- ------- ------------ ------------ ------------------- -------------------
0                      $e_0$   $\alpha_0$   $\beta_0$    $\alpha_2\beta_0$   $\gamma_0$
1                      $0$     $e_1$        $\alpha_1$   $\beta_1$           $\alpha_3\beta_1$
2                      $0$     $0$          $e_2$        $\alpha_2$          $\beta_2$
3                      $0$     $0$          $0$          $e_3$               $\alpha_3$
4                      $0$     $0$          $0$          $0$                 $e_4$

Some of the $A_\epsilon$'s can be found easily, as follows: $$\begin{matrix} \vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 3 \ar[d] \ar[r] \ar@{.}[dr] & 2\ar[d] & \\ 1\ar[r] & 0 & 4\ar[l]}} & \vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 1 & 2\ar[l] & \\ 3\ar[u] & 4 \ar[u] \ar[l]\ar[r] \ar@{.}[lu]& 0}} & \vcenter{\xymatrix@C=0.5cm@R=0.5cm{ &2\ar[d] & &\\ 1\ar[r] & 0& 4\ar[r]\ar[l]& 3}} & \vcenter{\xymatrix@C=0.5cm@R=0.5cm{ &2 & &\\ 3& 4\ar[u]\ar[l]\ar[r]& 0& 1\ar[l]}} \\ A_{(-,+,+,+,+)} & A_{(-,-,-,-,+)} & A_{(-,+,+,-,+)} & A_{(-,+,-,-,+)} \end{matrix}$$ Here, the first two are representation-finite sincere simply connected algebras and the last two are path algebras of type $\mathbb{D}_5$. The reader may use these to verify Corollary [Corollary 29](#cor::epsilon-sigma){reference-type="ref" reference="cor::epsilon-sigma"}. We give the details as follows.

- Set $\epsilon=(-,+,+,+,+)$. Then, $\alpha_1, \alpha_3, \beta_2, \alpha_3\beta_1\in J_{\epsilon,+}$ and all others survive.

- Set $\epsilon=(-,+,+,-,+)$. Then, $\alpha_1, \beta_2, \alpha_3\beta_1\in J_{\epsilon,+}$, $\alpha_2\beta_0\in J_{\epsilon,-}$, $\alpha_2, \beta_1$ are removed and all others survive.

- Set $\epsilon=(-,+,-,+,+)$. Then, $\alpha_3, \alpha_3\beta_1\in J_{\epsilon,+}$, $\alpha_1$ is removed and all others survive.
We obtain that $A_\epsilon=K\left ( \vcenter{\xymatrix@C=0.7cm@R=0.7cm{ 3\ar[r]^{\alpha_2} \ar[d]_{\beta_1} \ar@{.}[dr] & 2\ar[d]^{\beta_0} & \\ 1\ar[r]_{\alpha_0} & 0& 4\ar@/_0.4cm/[lu]_{\beta_2}\ar[l]^{\gamma_0} }} \right ) \bigg/ \left<\alpha_2\beta_0-\beta_1\alpha_0, \beta_2\beta_0\right>.$ We define $B:=K\left ( \vcenter{\xymatrix@C=0.8cm@R=0.4cm{ 3\ar[r]^{\alpha_2} & 2\ar[r]^{x} & 1\ar[r]^{\alpha_0} & 0\\ &&4\ar[lu]^{\beta_2}\ar[ur]_{\gamma_0} & }} \right ) \bigg/ \left<\beta_2x\right>,$ i.e., we replace a commutativity square subquiver in $A_\epsilon$ with a linear subquiver without relations in $B$. It is easy to check that $A_\epsilon \simeq B_\epsilon$. Since $B$ is a representation-finite gentle algebra, we deduce that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}\simeq \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}B_{\epsilon}$ is a finite set.

- Set $\epsilon=(-,-,-,+,+)$. This case is slightly more complicated because $A_\epsilon\simeq A$. We apply Proposition [Proposition 16](#prop::element-two-silt-epsilon){reference-type="ref" reference="prop::element-two-silt-epsilon"} and Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"} to this case. Set $P_I=P_0\oplus P_1\oplus P_2$, $\mu_{P_I}^-(A)= \left [\begin{smallmatrix} \xymatrix@C=1.3cm{P_1\ar[r]^-{\beta_1}& P_3}\\ \oplus \\ \xymatrix@C=1.5cm{P_0\ar[r]^-{(\alpha_2\beta_0,\gamma_0)^t}& P_3\oplus P_4}\\ \oplus \\ \xymatrix@C=1.4cm{P_2\ar[r]^-{(\alpha_2, \beta_2)^t}& P_3\oplus P_4}\\ \oplus \\ \xymatrix@C=1.6cm{0\ar[r]& P_3\oplus P_4} \end{smallmatrix} \right ] \in \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A.$ Then we have $g(\mu_{P_I}^-(A))=(-1,-1,-1,4,3)$, and $\mathcal{F}(\mu_{P_I}^-(A))$ is the corresponding direct sum of indecomposable modules.
All elements in $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$ can be obtained by left mutations starting from $\mathcal{F}(\mu_{P_I}^-(A))$ if $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$ admits a finite connected component. By direct calculation, we find that $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A=34$. A complete list of $g$-matrices of $\tau$-tilting modules in $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$ is given below. 1) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 1, 1 ), ( 0, 0, 0, 1, 0 ), ( 0, 0, 0, 0, 1 )), 2) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 1, 1 ), ( 0, 0, 0, 1, 0 ), ( -1, 0, -1, 2, 1 )), 3) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 1, 1 ), ( 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 1 )), 4) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( 0, 0, 0, 0, 1 )), 5) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, -1, 1, 1 )), 6) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, -1, 1, 1 )), 7) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, 0, 1, 1 )), 8) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( 0, 0, 0, 1, 0 )), 9) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, -1, 1, 1 )), 10) (( 0, -1, 0, 1, 0 ), ( 0, 0, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, -1, 1, 1 )), 11) (( 0, -1, 0, 1, 0 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 1 ), ( 0, 0, 0, 0, 1 )), 12) (( 0, -1, 0, 1, 0 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 1 ), ( -1, -1, 0, 0, 1 )), 13) (( 0, -1, 0, 1, 0 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, 0, -1, 1, 1 ), ( -1, -1, -1, 1, 1 )), 14) (( 0, -1, 0, 1, 0 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, -1, 0, 0, 1 ), ( -1, -1, -1, 1, 1 )), 15) (( 0, -1, 0, 1, 0 ), ( 0, -1, -1, 1, 1 
), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 1, 1 ), ( -1, -1, -1, 1, 1 )), 16) (( 0, -1, 0, 1, 0 ), ( 0, -1, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( 0, -1, -1, 1, 0 ), ( -1, -1, -1, 1, 1 )), 17) (( 0, -1, 0, 1, 0 ), ( -1, 0, 0, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( 0, 0, -1, 1, 0 ), ( 0, 0, 0, 1, 0 )), 18) (( 0, -1, 0, 1, 0 ), ( -1, 0, 0, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, 0, 1, 1 ), ( 0, 0, 0, 1, 0 )), 19) (( 0, -1, 0, 1, 0 ), ( -1, 0, 0, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, 0, 1, 1 ), ( -1, 0, 0, 0, 1 )), 20) (( 0, -1, 0, 1, 0 ), ( -1, 0, 0, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 1 )), 21) (( 0, -1, 0, 1, 0 ), ( -1, 0, 0, 1, 0 ), ( -1, 0, -1, 2, 1 ), ( -1, 0, -1, 1, 1 ), ( 0, 0, -1, 1, 0 )), 22) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 1 ), ( 0, 0, 0, 0, 1 )), 23) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 1 ), ( -1, 0, -1, 1, 1 )), 24) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 1 ), ( 0, 0, 0, 0, 1 )), 25) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 1 ), ( -1, -1, 0, 0, 1 )), 26) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, 0, -1, 1, 1 ), ( -1, -1, -1, 1, 1 )), 27) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 1, 1 ), ( -1, -1, -1, 1, 1 )), 28) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 1, 1 ), ( 0, 0, -1, 1, 1 )), 29) (( 0, 0, -1, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( 0, -1, -1, 1, 0 ), ( -1, -1, -1, 1, 1 )), 30) (( 0, 0, -1, 0, 1 ), ( -1, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, 0, -1, 1, 1 ), ( -1, 0, -1, 0, 1 )), 31) (( 0, 0, -1, 0, 1 ), ( -1, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 1 ), ( -1, -1, 0, 0, 1 )), 32) (( 0, 0, -1, 0, 1 ), ( -1, -1, -1, 1, 1 ), ( -1, 0, 0, 0, 1 ), ( 0, -1, -1, 1, 1 ), ( -1, -1, 0, 0, 1 )), 33) (( 0, 0, -1, 0, 1 ), ( -1, -1, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 0, 1 ), ( 
0, -1, -1, 1, 0 )), 34) (( 0, 0, -1, 0, 1 ), ( -1, -1, -1, 1, 1 ), ( 0, 0, -1, 1, 0 ), ( -1, 0, -1, 0, 1 ), ( -1, 0, -1, 1, 1 )). To verify the list, a useful fact from Proposition [Proposition 9](#prop::exactly-two){reference-type="ref" reference="prop::exactly-two"} is that an irreducible left mutation of a two-term silting complex $T$ changes exactly one row of the $g$-matrix $g(T)$. By comparing the rows of $g$-matrices, it is not difficult to find the finite connected component of $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$. However, we leave the details to the readers, as it is hard to visualize the Hasse quiver on a plane. In conclusion, $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}\simeq \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ is a finite set. We have proved that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}$ is finite for any $\epsilon\in \mathsf{s}_5$. ◻ **Proposition 37**. *Let $p=3$. Then $S^+(2,5)$ is $\tau$-tilting finite.* *Proof.* By Proposition [Proposition 12](#prop::quiver-(2,r)){reference-type="ref" reference="prop::quiver-(2,r)"}, $S^+(2,5)$ over $p=3$ is isomorphic to $A:=KQ/I$ with $$Q:\xymatrix@C=0.7cm{ 5\ar[r]^{\alpha_4}\ar@/_0.8cm/[rrr]_{\beta_2}& 4\ar[r]^{\alpha_3}\ar@/^0.8cm/[rrr]^{\beta_1}& 3\ar[r]^{\alpha_2}\ar@/_0.8cm/[rrr]_{\beta_0}& 2\ar[r]^{\alpha_1}& 1\ar[r]^{\alpha_0}& 0} \quad \text{and} \quad I: \left<\begin{matrix} \alpha_4\alpha_3\alpha_2, \alpha_3\alpha_2\alpha_1, \alpha_2\alpha_1\alpha_0,\\ \alpha_4\beta_1-\beta_2\alpha_1, \alpha_3\beta_0-\beta_1\alpha_0 \end{matrix} \right>.$$ Let $\epsilon=(\epsilon_0,\epsilon_1, \ldots, \epsilon_5)\in \mathsf{s}_6$. 
If $\epsilon_0=+$ or $\epsilon_5=-$, we have $A_\epsilon\simeq (K\oplus S^+(2,4))_\epsilon$ by Proposition [Proposition 21](#prop::source-sink){reference-type="ref" reference="prop::source-sink"} and Proposition [Proposition 22](#prop::epsilon-(2,r)){reference-type="ref" reference="prop::epsilon-(2,r)"}, and then $A_\epsilon$ is $\tau$-tilting finite since $S^+(2,4)$ over $p=3$ is representation-finite. Suppose $\epsilon_0=-$ and $\epsilon_5=+$. Since there is an algebra isomorphism $\sigma: A^{\rm op}\rightarrow A$ sending $e_i$ to $e_{5-i}$, it suffices to consider the following 10 cases by using Corollary [Corollary 29](#cor::epsilon-sigma){reference-type="ref" reference="cor::epsilon-sigma"}. $$(-,+,+,-,+,+) \sim_\sigma (-,-,+,-,-,+), \quad (-,-,-,-,-,+) \sim_\sigma (-,+,+,+,+,+),$$ $$(-,-,+,-,+,+), \quad (-,+,+,-,-,+), \quad (-,+,+,+,-,+) \sim_\sigma (-,+,-,-,-,+),$$ $$(-,+,-,+,-,+), \quad (-,-,+,+,-,+) \sim_\sigma (-,+,-,-,+,+), \quad (-,-,-,+,+,+),$$ $$(-,-,-,-,+,+) \sim_\sigma (-,-,+,+,+,+), \quad (-,-,-,+,-,+) \sim_\sigma (-,+,-,+,+,+).$$ Let $P_i$ be the indecomposable projective $A$-module at vertex $i$. Then, $\operatorname{Hom}_A(P_i, P_j)$ is given in the following table.

$\operatorname{Hom}$   0       1            2                    3                    4                    5
---------------------- ------- ------------ -------------------- -------------------- -------------------- ---------------------------
0                      $e_0$   $\alpha_0$   $\alpha_1\alpha_0$   $\beta_0$            $\alpha_3\beta_0$    $\alpha_4\alpha_3\beta_0$
1                      $0$     $e_1$        $\alpha_1$           $\alpha_2\alpha_1$   $\beta_1$            $\alpha_4\beta_1$
2                      $0$     $0$          $e_2$                $\alpha_2$           $\alpha_3\alpha_2$   $\beta_2$
3                      $0$     $0$          $0$                  $e_3$                $\alpha_3$           $\alpha_4\alpha_3$
4                      $0$     $0$          $0$                  $0$                  $e_4$                $\alpha_4$
5                      $0$     $0$          $0$                  $0$                  $0$                  $e_5$
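The reduction to 10 cases above (and to the 4 and 16 cases in the other two propositions of this section) is a purely combinatorial count: a sign vector $\epsilon$ with $\epsilon_0=-$ and $\epsilon_{n-1}=+$ is identified with $-\sigma(\epsilon)$, i.e., with its negated reversal. This can be double-checked by brute force; the following Python sketch (the encoding of $+$/$-$ as $\pm 1$ and all function names are ours, not from the text) recovers the case counts for $S^+(2,4)$, $S^+(2,5)$ and $S^+(2,6)$.

```python
from itertools import product

def sigma_pair(eps):
    """The partner -sigma(eps), where sigma reverses the vertex order
    (e_i -> e_{n-1-i}); this realizes the relation ~_sigma."""
    return tuple(-e for e in reversed(eps))

def case_count(n):
    """Number of ~_sigma classes of sign vectors (eps_0, ..., eps_{n-1})
    with eps_0 = - and eps_{n-1} = +  (encoded as -1 and +1)."""
    vectors = [(-1,) + mid + (1,) for mid in product((1, -1), repeat=n - 2)]
    # choose the lexicographically smaller of eps and -sigma(eps)
    # as a canonical representative of each class
    reps = {min(eps, sigma_pair(eps)) for eps in vectors}
    return len(reps)

# 4 cases for S^+(2,4), 10 for S^+(2,5), 16 for S^+(2,6)
print(case_count(5), case_count(6), case_count(7))  # prints: 4 10 16

# one of the pairings listed above: (-,-,+,+,-,+) ~_sigma (-,+,-,-,+,+)
print(sigma_pair((-1, -1, 1, 1, -1, 1)) == (-1, 1, -1, -1, 1, 1))  # prints: True
```

Note that for even length there are no fixed points of $\epsilon\mapsto-\sigma(\epsilon)$ (the middle entry would have to equal its own negative), while for odd length self-paired vectors such as $(-,+,+,-,-,+)$ occur, which is why the counts are not simply half the number of vectors.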
- If $\epsilon=(-,+,+,-,+,+)$, then $\alpha_3\alpha_2\in J_{\epsilon,+}$, $\alpha_2, \alpha_2\alpha_1$ are removed and all others survive; if $\epsilon=(-,-,-,-,-,+)$, then $\alpha_2, \alpha_3\alpha_2,\alpha_2\alpha_1\in J_{\epsilon,-}$; if $\epsilon=(-,-,+,-,+,+)$, then $\alpha_3\alpha_2\in J_{\epsilon,+}$, $\alpha_2\alpha_1\in J_{\epsilon,-}$, $\alpha_2$ is removed and all others survive. In all these cases, $A_\epsilon$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 5 \ar[d] \ar[r] \ar@{.}[dr] & 4 \ar[d] \ar[r] \ar@{.}[dr] & 3\ar[d]\\ 2 \ar[r] & 1 \ar[r] & 0}}$, which is a representation-finite sincere simply connected algebra.

- If $\epsilon=(-,+,+,-,-,+)$, then $\alpha_2,\beta_1, \alpha_3\alpha_2, \alpha_2\alpha_1$ are removed and all others survive, and $A_\epsilon$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 5 \ar[d]\ar[r]\ar[dr]\ar@{.}[drr] \ar@{.}@/_0.3cm/[dr] & 4 \ar[r]\ar[dr] \ar@{.}@/^0.3cm/[dr]& 3 \ar[d] \\ 2\ar[r]& 1\ar[r]& 0}}$; if $\epsilon=(-,+,+,+,-,+)$, then $\alpha_2, \alpha_2\alpha_1\in J_{\epsilon,+}$, $\beta_1, \alpha_3, \alpha_3\alpha_2$ are removed and all others survive, and $A_\epsilon$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 5 \ar[d]\ar[r]\ar@/^0.6cm/[rr]^{\ } \ar[dr]\ar@{.}[drr] \ar@{.}@/_0.3cm/[dr] \ar@{.}@/^0.8cm/[drr]& 4 \ar[dr] & 3 \ar[d] \\ 2\ar[r]& 1\ar[r]& 0}}$. In both cases, we have $A_\epsilon \simeq B_\epsilon$ and $B$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.1cm{ &4 \ar[r] &3\ar[dr] &\\ 5 \ar[ur] \ar[dr]\ar@{.}[rrr] &&&0 \\ &2 \ar[r] &1\ar[ur] &}}$, which is a representation-finite sincere simply connected algebra.

- If $\epsilon=(-,+,-,+,-,+)$, then $\alpha_2\alpha_1\in J_{\epsilon,+}$, $\alpha_3\alpha_2\in J_{\epsilon,-}$, $\alpha_1, \alpha_3, \beta_1$ are removed and all others survive.
In this case, $A_\epsilon=KQ/I$ is given by $Q:\vcenter{\xymatrix@C=1.5cm@R=0.2cm{ &1\ar[ddr]^-{\alpha_0}&\\ &2 \ar[dr]_-{w} & \\ 5\ar[ur]_-{\beta_2}\ar[uur]^-{y} \ar[dr]^-{z}\ar[rdd]_-{\alpha_4}&&0 \\ &3 \ar[ur]^-{\beta_0} \ar[uu]_-{\alpha_2} & \\ &4\ar[uur]_-{x}& }}$ $\quad$ and $\quad$ $I:\left< \begin{matrix} y\alpha_0-\beta_2w, \beta_2w-z\beta_0, \\ z\beta_0-\alpha_4x,z\alpha_2,\alpha_2w \end{matrix} \right>$. We replace two commutativity square subquivers, which gives $B:=KQ/I$ with $Q:\vcenter{\xymatrix@C=0.7cm@R=0.1cm{ &4 \ar[r]^-{s} &3\ar[dr]^-{\beta_0} \ar[ddl]_-{\alpha_2} &\\ 5 \ar[ur]^-{\alpha_4} \ar[dr]_-{\beta_2}&&&0 \\ &2 \ar[r]_-{t} &1\ar[ur]_-{\alpha_0} &}}$ $\quad$ and $\quad$ $I:\left< \alpha_4s\beta_0-\beta_2t\alpha_0, s\alpha_2, \alpha_2t \right>$. It is easy to check that $A_\epsilon\simeq B_\epsilon$. Since $B$ is a representation-finite special biserial algebra, we conclude that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_\epsilon$ is a finite set.

- If $\epsilon=(-,-,+,+,-,+)$, then $\alpha_3, \alpha_3\alpha_2$ are removed and all others survive. In this case, $A_\epsilon=KQ/I$ is presented by $Q:\vcenter{\xymatrix@C=1cm@R=0.5cm{ 5 \ar[d]_-{\alpha_4}\ar[r]^-{\beta_2}\ar@/^0.6cm/[rr]^{y} \ar@{.}[dr] & 2 \ar[d]^-{\alpha_1} & 3 \ar[d]^-{\beta_0} \ar[l]_-{\alpha_2} \\ 4\ar[r]_-{\beta_1}& 1\ar[r]_-{\alpha_0}& 0}}$ $\quad$ and $\quad$ $I:\left< \begin{matrix} y\alpha_2, \alpha_2\alpha_1\alpha_0, \beta_2\alpha_1-\alpha_4\beta_1,\\ y\beta_0-\alpha_4\beta_1\alpha_0 \end{matrix} \right>$. We replace the commutativity square subquiver consisting of $1,2,4,5$ with a linear subquiver, say, $B:=KQ/I$ with $Q:\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 5\ar[r]^-{\alpha_4}\ar[drr]_-{y}&4\ar[r]^-{z}&2\ar[r]^-{\alpha_1}&1\ar[r]^-{\alpha_0}&0\\ &&3\ar[u]^-{\alpha_2}\ar[urr]_-{\beta_0}&& }}$ $\quad$ and $\quad$ $I:\left< \begin{matrix} y\alpha_2, \alpha_2\alpha_1\alpha_0,\\ y\beta_0-\alpha_4z\alpha_1\alpha_0 \end{matrix} \right>$.
Then, $A_\epsilon\simeq B_\epsilon$. We apply Proposition [Proposition 16](#prop::element-two-silt-epsilon){reference-type="ref" reference="prop::element-two-silt-epsilon"} and Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"} to this case. Let $R_i$ be the indecomposable projective $B$-module at vertex $i$. Then, $\mu_{R_0\oplus R_1\oplus R_4}^-(B)= \left [\begin{smallmatrix} \xymatrix@C=1.5cm{R_0\ar[r]^-{(\alpha_1\alpha_0, \beta_0)^t}& R_2\oplus R_3}\\ \oplus \\ \xymatrix@C=1.5cm{R_1\ar[r]^-{\alpha_1}& R_2}\\ \oplus \\ \xymatrix@C=1.5cm{R_4\ar[r]^-{\alpha_4}& R_5}\\ \oplus \\ \xymatrix@C=0.7cm{0\ar[r]& R_2\oplus R_3\oplus R_5} \end{smallmatrix} \right ] \in \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}B$ and $\mathcal{F}(\mu_{R_0\oplus R_1\oplus R_4}^-(B))$ $\simeq$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ . By direct calculation of left mutations starting from $\mathcal{F}(\mu_{R_0\oplus R_1\oplus R_4}^-(B))$, we find that $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon B)$ admits a finite connected component:  , where the $g$-matrix of each vertex is listed below: 1) $(( 0, 0, 1, 0, 0, 0 ), ( 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 2) $(( 0, 0, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, 1, 0, 0 ), ( -1, 0, 1, 0, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 3) $(( 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, 1, 0, 0 ), ( -1, 0, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 4) $(( 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 1, 0, 1 ), ( -1, 0, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 5) $(( 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, 1, 0, 0 ), ( -1, 0, 1, 0, 0, 0 ), ( -1, 0, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 6) $(( 0, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 1, 0, 1 ), ( -1, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 1, 0, 1 ), ( 0, -1, 
1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 7) $(( 0, 0, 0, 1, 0, 0 ), ( 0, -1, 0, 1, 0, 1 ), ( -1, 0, 0, 1, 0, 0 ), ( 0, -1, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 8) $(( 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, 0, 0, 0 ), ( -1, 0, 0, 1, 0, 0 ), ( -1, 0, 0, 0, 0, 1 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 9) $(( 0, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 1, 0, 1 ), ( 0, -1, 0, 0, 0, 1 ), ( -1, -1, 0, 1, 0, 1 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 10) $(( 0, -1, 0, 1, 0, 1 ), ( -1, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 1, 0, 1 ), ( 0, -1, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 11) $(( 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 1 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 12) $(( 0, -1, 0, 1, 0, 1 ), ( 0, -1, 0, 0, 0, 1 ), ( -1, -1, 0, 1, 0, 1 ), ( 0, -1, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 13) $(( 0, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 0, 1 ), ( -1, -1, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 1 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 14) $(( -1, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 1, 0, 1 ), ( 0, -1, 0, 1, 0, 0 ), ( -1, -1, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$, 15) $(( 0, -1, 0, 0, 0, 1 ), ( -1, -1, 0, 1, 0, 1 ), ( 0, -1, 0, 1, 0, 0 ), ( -1, -1, 0, 1, 0, 0 ), ( 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, -1, 1 ))$. If the reader wishes to verify the above quiver, Proposition [Proposition 9](#prop::exactly-two){reference-type="ref" reference="prop::exactly-two"} serves as a useful tool, as mentioned in the proof of Proposition [Proposition 36](#prop::S(2,4)){reference-type="ref" reference="prop::S(2,4)"}. Another helpful observation is that the second and third direct summands of $\mathcal{F}(\mu_{R_0\oplus R_1\oplus R_4}^-(B))$ are immutable in $\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon B$. 
Consequently, we conclude that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}\simeq \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}B_\epsilon$ is a finite set.

- If $\epsilon=(-,-,-,+,+,+)$, then $A_\epsilon\simeq A$. In this case, we have $\mu_{P_0\oplus P_1\oplus P_2}^-(A)= \left [\begin{smallmatrix} \xymatrix@C=1.4cm{P_0\ar[r]^-{\beta_0}& P_3}\\ \oplus \\ \xymatrix@C=1.5cm{P_1\ar[r]^-{(\alpha_2\alpha_1,\beta_1)^t}& P_3\oplus P_4}\\ \oplus \\ \xymatrix@C=1.5cm{P_2\ar[r]^-{(\alpha_2,\beta_2)^t}& P_3\oplus P_5}\\ \oplus \\ \xymatrix@C=0.7cm{0\ar[r]& P_3\oplus P_4\oplus P_5} \end{smallmatrix} \right ] \in \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ and $\mathcal{F}(\mu_{P_0\oplus P_1\oplus P_2}^-(A))$ is the corresponding direct sum of indecomposable modules. As in the previous case, we find $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A=157$ using Proposition [Proposition 16](#prop::element-two-silt-epsilon){reference-type="ref" reference="prop::element-two-silt-epsilon"} and Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"}. There is nothing new here beyond further calculations of left mutations, so we omit the details. It turns out that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}$ is a finite set.

- Set $\epsilon=(-,-,-,-,+,+)$. We denote $P_I=P_0\oplus P_1\oplus P_2\oplus P_3$. We have $\mu_{P_I}^-(A)= \left [\begin{smallmatrix} \xymatrix@C=1.9cm{P_0\ar[r]^-{\alpha_3\beta_0}& P_4}\\ \oplus \\ \xymatrix@C=1.9cm{P_1\ar[r]^-{\beta_1}& P_4}\\ \oplus \\ \xymatrix@C=1.9cm{P_3\ar[r]^-{\alpha_3}& P_4}\\ \oplus \\ \xymatrix@C=1.9cm{P_2\ar[r]^-{(\alpha_3\alpha_2,\beta_2)^t}& P_4\oplus P_5}\\ \oplus \\ \xymatrix@C=2cm{0\ar[r]& P_4\oplus P_5} \end{smallmatrix} \right ] \in \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ and $\mathcal{F}(\mu_{P_I}^-(A))$ is the corresponding direct sum of indecomposable modules.
The Hasse quiver $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$ has the following connected component:  , where the $g$-matrix of each vertex is given below: 1) $(( 0, 0, 0, 0, 1, 0 ), ( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, -1, 0, 0, 1, 0 ), ( -1, 0, 0, 0, 1, 0 ))$, 2) $(( 0, 0, 0, 0, 1, 0 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( -1, 0, 0, 0, 1, 0 ))$, 3) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, -1, 0, 0, 1, 0 ), ( -1, 0, 0, 0, 1, 0 ), ( -1, 0, -1, 0, 1, 1 ))$, 4) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, -1, 0, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ))$, 5) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( -1, 0, 0, 0, 1, 0 ), ( -1, 0, -1, 0, 1, 1 ))$, 6) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( -1, 0, 0, 0, 1, 0 ), ( -1, 0, -1, 0, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 7) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( -1, 0, 0, 0, 1, 0 ), ( -1, 0, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 0 ))$, 8) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ))$, 9) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ))$, 10) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 11) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 12) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 1 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ))$, 13) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), 
( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 0 ))$, 14) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( 0, -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 15) $(( 0, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( 0, -1, 0, 0, 0, 1 ), ( 0, -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 16) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 1 ), ( -1, 0, -1, 0, 1, 0 ))$, 17) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, 0, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( 0, -1, -1, 0, 1, 0 ), ( -1, 0, -1, 0, 1, 0 ))$, 18) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, -1, -1, 0, 1, 1 ), ( 0, -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 19) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, -1, -1, 0, 1, 1 ), ( 0, -1, 0, 0, 0, 1 ), ( 0, -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 20) $(( 0, 0, 0, -1, 1, 0 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, 0, -1, 0, 1, 0 ), ( 0, -1, -1, 0, 1, 1 ), ( 0, -1, -1, 0, 1, 0 ), ( -1, 0, -1, 0, 1, 0 ))$, 21) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0 ), ( 0, 0, 0, -1, 0, 1 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ))$, 22) $(( 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 0, 1 ), ( 0, 0, -1, 0, 0, 1 ), ( 0, -1, 0, -1, 1, 1 ), ( -1, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 0, 1 ))$. We obtain $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A=22$ and thus, $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ is finite by Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"}. - Set $\epsilon=(-,-,-,+,-,+)$. We have $P_I=P_0\oplus P_1\oplus P_2\oplus P_4$. 
Then, $\mu_{P_I}^-(A)= \left [\begin{smallmatrix} \xymatrix@C=1.2cm{P_0\ar[r]^-{\beta_0}& P_3}\\ \oplus \\ \xymatrix@C=1.2cm{P_4\ar[r]^-{\alpha_4}& P_5}\\ \oplus \\ \xymatrix@C=1.7cm{P_1\ar[r]^-{(\alpha_2\alpha_1,\alpha_4\beta_1)^t}& P_3\oplus P_5}\\ \oplus \\ \xymatrix@C=1.7cm{P_2\ar[r]^-{(\alpha_2,\beta_2)^t}& P_3\oplus P_5}\\ \oplus \\ \xymatrix@C=1.8cm{0\ar[r]& P_3\oplus P_5} \end{smallmatrix} \right ] \in \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ and $\mathcal{F}(\mu_{P_I}^-(A))$ is the corresponding direct sum of indecomposable modules. We find $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A=60$ by calculations similar to those in the previous case. We have proved that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A\simeq \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_{\epsilon}$ is finite for any $\epsilon\in \mathsf{s}_6$. ◻

**Proposition 38**. *Let $p=5$. Then $S^+(2,6)$ is $\tau$-tilting finite.*

*Proof.* By Proposition [Proposition 12](#prop::quiver-(2,r)){reference-type="ref" reference="prop::quiver-(2,r)"}, $S^+(2,6)$ over $p=5$ is isomorphic to $A:=KQ/I$ with $$Q:\xymatrix@C=0.7cm{ 6\ar[r]^{\alpha_5}\ar@/^1cm/[rrrrr]^{\beta_1}& 5\ar[r]^{\alpha_4}\ar@/_1cm/[rrrrr]_{\beta_0}& 4\ar[r]^{\alpha_3}& 3\ar[r]^{\alpha_2}& 2\ar[r]^{\alpha_1}& 1\ar[r]^{\alpha_0}& 0}$$ and $$I: \left< \alpha_5\alpha_4\alpha_3\alpha_2\alpha_1, \alpha_4\alpha_3\alpha_2\alpha_1\alpha_0, \alpha_5\beta_0-\beta_1\alpha_0 \right>.$$ Let $\epsilon=(\epsilon_0,\epsilon_1, \ldots, \epsilon_6)\in \mathsf{s}_7$. If $\epsilon_0=+$ or $\epsilon_6=-$, we have $A_\epsilon\simeq (K\oplus S^+(2,5))_\epsilon$ by Proposition [Proposition 21](#prop::source-sink){reference-type="ref" reference="prop::source-sink"} and Proposition [Proposition 22](#prop::epsilon-(2,r)){reference-type="ref" reference="prop::epsilon-(2,r)"}, and then $A_\epsilon$ is $\tau$-tilting finite since $S^+(2,5)$ over $p=5$ is representation-finite. Suppose $\epsilon_0=-$ and $\epsilon_6=+$.
Since there is an algebra isomorphism $\sigma: A^{\rm op}\rightarrow A$ sending $e_i$ to $e_{6-i}$, it suffices to consider the following 16 cases by using Corollary [Corollary 29](#cor::epsilon-sigma){reference-type="ref" reference="cor::epsilon-sigma"}. Let $P_i$ be the indecomposable projective $A$-module at vertex $i$. Then, $\operatorname{Hom}_A(P_i, P_j)$ is given in the following table.

$\operatorname{Hom}$   0       1            2                    3                            4                                    5                                    6
---------------------- ------- ------------ -------------------- ---------------------------- ------------------------------------ ------------------------------------ ------------------------------------
0                      $e_0$   $\alpha_0$   $\alpha_1\alpha_0$   $\alpha_2\alpha_1\alpha_0$   $\alpha_3\alpha_2\alpha_1\alpha_0$   $\beta_0$                            $\alpha_5\beta_0$
1                      $0$     $e_1$        $\alpha_1$           $\alpha_2\alpha_1$           $\alpha_3\alpha_2\alpha_1$           $\alpha_4\alpha_3\alpha_2\alpha_1$   $\beta_1$
2                      $0$     $0$          $e_2$                $\alpha_2$                   $\alpha_3\alpha_2$                   $\alpha_4\alpha_3\alpha_2$           $\alpha_5\alpha_4\alpha_3\alpha_2$
3                      $0$     $0$          $0$                  $e_3$                        $\alpha_3$                           $\alpha_4\alpha_3$                   $\alpha_5\alpha_4\alpha_3$
4                      $0$     $0$          $0$                  $0$                          $e_4$                                $\alpha_4$                           $\alpha_5\alpha_4$
5                      $0$     $0$          $0$                  $0$                          $0$                                  $e_5$                                $\alpha_5$
6                      $0$     $0$          $0$                  $0$                          $0$                                  $0$                                  $e_6$
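As noted after the list in Proposition 36, Proposition 9 guarantees that an irreducible left mutation changes exactly one row of the $g$-matrix, which makes the lists of $g$-matrices in this section mechanically checkable. Below is a minimal Python sketch of such a check (our own illustration, not part of the original argument), applied to the first two $g$-matrices from the list for $\epsilon=(-,-,-,+,+)$ in Proposition 36; rows are compared as multisets, since the order in which the summands are listed is not canonical.

```python
from collections import Counter

def rows_changed(g1, g2):
    """Count the rows in which two g-matrices (tuples of row tuples)
    differ, comparing the rows as multisets."""
    return sum((Counter(g1) - Counter(g2)).values())

# g-matrices 1) and 2) from the list for epsilon = (-,-,-,+,+)
g1 = ((0, -1, 0, 1, 0), (0, 0, -1, 1, 1), (-1, 0, 0, 1, 1),
      (0, 0, 0, 1, 0), (0, 0, 0, 0, 1))
g2 = ((0, -1, 0, 1, 0), (0, 0, -1, 1, 1), (-1, 0, 0, 1, 1),
      (0, 0, 0, 1, 0), (-1, 0, -1, 2, 1))

# exactly one row differs, consistent with an irreducible left mutation
print(rows_changed(g1, g2))  # prints: 1
```

Running this pairwise over a candidate list of $g$-matrices identifies which pairs can be joined by an arrow of the Hasse quiver, which is how the finite connected components claimed in these proofs can be confirmed.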
- If $\epsilon=(-,+,+,+,+,+,+)$, then we have $\alpha_4$, $\alpha_4\alpha_3$, $\alpha_4\alpha_3\alpha_2$, $\alpha_4\alpha_3\alpha_2\alpha_1$, $\alpha_5\alpha_4$, $\alpha_5\alpha_4\alpha_3$, $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, and all others survive; if $\epsilon=(-,+,+,+,+,-,+)$, then $\alpha_5\alpha_4$, $\alpha_5\alpha_4\alpha_3$, $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, $\alpha_4$, $\alpha_4\alpha_3$, $\alpha_4\alpha_3\alpha_2$, $\alpha_4\alpha_3\alpha_2\alpha_1$ are removed and all others survive; if $\epsilon=(-,-,+,+,+,-,+)$, then $\alpha_5\alpha_4$, $\alpha_5\alpha_4\alpha_3$, $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, $\alpha_4\alpha_3\alpha_2\alpha_1\in J_{\epsilon,-}$, $\alpha_4$, $\alpha_4\alpha_3$, $\alpha_4\alpha_3\alpha_2$ are removed and all others survive. In all these cases, $A_\epsilon$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 5\ar[d] & 6\ar[d] \ar[l]\ar@{.}[dl]& \\ 0 & 1 \ar[l] & 2\ar[l]&3\ar[l]&4\ar[l] }}$, which is a representation-finite sincere simply connected algebra.

- If $\epsilon=(-,+,+,+,-,+,+)$, then $\alpha_4$, $\alpha_4\alpha_3$, $\alpha_4\alpha_3\alpha_2$, $\alpha_4\alpha_3\alpha_2\alpha_1$, $\alpha_5\alpha_4\alpha_3$, $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, $\alpha_3\alpha_2\alpha_1\alpha_0\in J_{\epsilon,-}$, $\alpha_3$, $\alpha_3\alpha_2$, $\alpha_3\alpha_2\alpha_1$ are removed and all others survive; if $\epsilon=(-,+,+,+,-,-,+)$, then $\alpha_5\alpha_4\alpha_3$, $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, $\alpha_3\alpha_2\alpha_1\alpha_0\in J_{\epsilon,-}$, $\alpha_3$, $\alpha_3\alpha_2$, $\alpha_3\alpha_2\alpha_1$, $\alpha_4\alpha_3$, $\alpha_4\alpha_3\alpha_2$, $\alpha_4\alpha_3\alpha_2\alpha_1$ are removed and all others survive.
In both cases, $A_\epsilon$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ && 6 \ar[d]\ar[r]\ar@{.}[dr]& 5\ar[r]\ar[d]& 4 \\ 3\ar[r]& 2\ar[r]& 1\ar[r]&0& }}$, which is a representation-finite sincere simply connected algebra.

- If $\epsilon=(-,+,+,-,-,+,+)$, then $\alpha_4\alpha_3\alpha_2$, $\alpha_4\alpha_3\alpha_2\alpha_1$, $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, $\alpha_2\alpha_1\alpha_0$, $\alpha_3\alpha_2\alpha_1\alpha_0\in J_{\epsilon,-}$, $\alpha_2$, $\alpha_2\alpha_1$, $\alpha_3\alpha_2$, $\alpha_3\alpha_2\alpha_1$ are removed and all others survive. In this case, $A_\epsilon$ is presented as $\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ & 6 \ar[d]\ar[r]\ar@{.}[dr]& 5\ar[r]\ar[d]& 4 \ar[r] &3\\ 2\ar[r]& 1\ar[r]&0& }}$, which is again representation-finite.

- If $\epsilon=(-,+,+,-,+,-,+)$, then $\alpha_5\alpha_4\alpha_3\alpha_2 \in J_{\epsilon,+}$, $\alpha_2$, $\alpha_4$, $\alpha_2\alpha_1$, $\alpha_4\alpha_3\alpha_2$, $\alpha_4\alpha_3\alpha_2\alpha_1$ are removed and all others survive. One may check that $A_\epsilon \simeq B_\epsilon$ and $B:=KQ/I$ is given by $Q:\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 6\ar[r]^-{\alpha_5}&5\ar[rr]^-{x}\ar[d]^-{\alpha_4}&&1\ar[r]^-{\alpha_0}&0\\ &4\ar[r]_-{\alpha_3}&3\ar[r]_-{\alpha_2}&2\ar[u]_-{\alpha_1}& }}$ $\quad$ and $\quad$ $I:\left< \alpha_4\alpha_3\alpha_2 \right>$.
By using Proposition [Proposition 16](#prop::element-two-silt-epsilon){reference-type="ref" reference="prop::element-two-silt-epsilon"} and Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"}, we find that $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon B)$ has the following connected component:  , where the $g$-matrices are given as follows: 1) (( 0, 1, 0, 0, 0, 0, 0 ), ( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 )), 2) (( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 1, 0, 0, 0, 1 )), 3) (( 0, 1, 0, 0, 0, 0, 0 ), ( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 0, 0, 1 )), 4) (( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), ( -1, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 )), 5) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 1, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 )), 6) (( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 1, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 0, 0, 1 )), 7) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), (-1, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 1 )), 8) (( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 0, 0, 1 ), ( -1, 0, 1, -1, 0, 0, 1 )), 9) (( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 
0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 10) (( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 11) (( 0, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), (-1, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( -1, 0, 1, -1, 0, 0, 1 )), 12) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )), 13) (( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), (-1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, 0, 0, 0, 0 )), 14) (( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), (0, 0, 0, -1, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, -1, 0, 0, 1 )), 15) (( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 1 ), (-1, 0, 1, 0, 0, 0, 0 ), (0, 0, 0, -1, 1, 0, 0 ), ( -1, 0, 1, -1, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 16) (( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), (-1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 ), ( -1, 0, 0, 0, 0, 0, 1 )), 17) (( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (0, 0, 0, -1, 0, 0, 1 ), (-1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 1, -1, 0, 0, 1 ), ( -1, 0, 0, -1, 0, 0, 1 )), 18) (( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 1, 0, 0, 0, 0 ), (0, 0, 0, -1, 1, 0, 0 ), (-1, 0, 1, -1, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, -1, 0, 0, 1 )). This implies that $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon B=18$. Hence, $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_\epsilon\simeq \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}B_\epsilon$ is finite. 
- If $\epsilon=(-,+,-,+,+,-,+)$, then $\alpha_1$, $\alpha_4$, $\alpha_4\alpha_3$, $\alpha_4\alpha_3\alpha_2\alpha_1$ are removed and all others survive. We find that $A_\epsilon \simeq B_\epsilon$ and $B:=KQ/I$ is given by $Q:\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 6\ar[r]^-{\alpha_5}&5\ar[rr]^-{x}\ar[d]^-{\alpha_4}&&1\ar[r]^-{\alpha_0}&0\\ &4\ar[r]_-{\alpha_3}&3\ar[r]_-{\alpha_2}&2\ar[u]_-{\alpha_1}& }}$ $\quad$ and $\quad$ $I:\left< \alpha_4\alpha_3\alpha_2\alpha_1 \right>$. We then find that $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon B)$ has the following structure, where the $g$-matrices are given as follows: 1) (( 0, 1, 0, 0, 0, 0, 0 ), ( 0, 0, 0, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 )), 2) (( 0, 1, 0, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 )), 3) (( 0, 0, 0, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 0, 1, 0, 0, 1 )), 4) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 1 )), 5) (( 0, 1, 0, 0, 0, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), ( -1, 1, 0, 0, 0, 0, 0 ), ( 0, 0, -1, 0, 0, 0, 1 )), 6) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 0, 1, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 )), 7) (( 0, 0, 0, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 0, 1, 0, 0, 1 ), ( -1, 0, 0, 1, 0, 0, 0 )), 8) (( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0,
1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 9) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 0, 1, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 1, 0, 0, 0 )), 10) (( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( 0, 0, -1, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 11) (( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 0, 1, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 12) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 1, 0, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 0 )), 13) (( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 0, 1, 0, 0, 1 ), (-1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 1, 0, 0, 0 )), 14) (( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (0, 0, -1, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 )), 15) (( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 0, 0, 1, 0, 1 ), (-1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 1, 0, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 0 )), 16) (( 0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, -1, 1, 0, 0, 0 ), (0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )), 17) (( 0, 0, -1, 1, 0, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (-1, 1, 0, 0, 0, 0, 0 ), (-1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )). 
Then, $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon B=17$ and $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A_\epsilon\simeq \mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}B_\epsilon$ is a finite set by Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"}.

- If $\epsilon=(-,-,-,-,+,-,+)$, we find that $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$ has the following structure, where the $g$-matrices are given as follows: 1) ((0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( 0, -1, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 )), 2) ((0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )), 3) ((0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 4) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), (-1, 0, 0, 0, 1, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )), 5) ((0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), ( 0, -1, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 6) ((0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), ( 0, -1, 0, 0, 1, -1, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )), 7) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), (-1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 0, 1, 0, 1 )), 8) ((0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, 0, -1, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 9)
((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, -1, 0, 0, 1, 0, 1 )), 10) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), (0, -1, 0, 0, 1, -1, 1 ), ( -1, 0, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 0, 1, 0, 1 )), 11) ((0, 0, 0, 0, 1, 0, 0 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, -1, 0, 0, 1, 0, 1 ), (0, -1, 0, 0, 1, 0, 0 ), ( 0, -1, 0, 0, 1, -1, 1 ), ( -1, 0, 0, 0, 1, 0, 0 )), 12) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, -1, 0, 0, 1, 0, 1 ), (0, -1, 0, 0, 1, 0, 0 ), (0, -1, 0, 0, 1, -1, 1 ), ( -1, 0, 0, 0, 1, 0, 0 ), ( -1, -1, 0, 0, 1, 0, 1 )), 13) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, -1, 0, 0, 1, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 1, -1, 1 ), ( -1, -1, 0, 0, 1, 0, 1 )), 14) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), (0, -1, 0, 0, 1, 0, 1 ), (0, -1, 0, 0, 1, 0, 0 ), (0, -1, 0, 0, 1, -1, 1 ), ( 0, -1, 0, 0, 0, 0, 1 ), ( -1, -1, 0, 0, 1, 0, 1 )), 15) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, 0, -1, 0, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 )), 16) ((0, 0, 0, 0, 0, 0, 1 ), ( 0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, 0, 0, -1, 0, 0, 1 ), (0, 0, -1, 0, 0, 0, 1 ), ( 0, -1, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 )), 17) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( 0, 0, 0, -1, 0, 0, 1 ), (0, 0, -1, 0, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), ( -1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 )), 18) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, -1, 0, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, 0, -1, 0, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), ( 0, -1, -1, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 )), 19) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( 0, 0, 0, -1, 0, 0, 1 ), (0, 0, -1, 0, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), ( 0, -1, -1, 
0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 )), 20) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), ( 0, 0, 0, -1, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), (-1, 0, 0, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 ), ( -1, 0, 0, -1, 0, 0, 1 )), 21) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, 0, 0, -1, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), (0, -1, -1, 0, 0, 0, 1 ), ( -1, 0, -1, 0, 0, 0, 1 ), ( -1, 0, 0, -1, 0, 0, 1 )), 22) ((0, 0, 0, -1, 1, 0, 0 ), ( 0, 0, 0, 0, 0, -1, 1 ), (0, 0, 0, -1, 0, 0, 1 ), (0, -1, 0, 0, 0, 0, 1 ), (0, -1, -1, 0, 0, 0, 1 ), ( 0, -1, 0, -1, 0, 0, 1 ), ( -1, 0, 0, -1, 0, 0, 1 )). It turns out that $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A=22$ and $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ is finite.

We address the remaining 7 cases in a manner similar to the approach outlined above. More precisely, we first find the left mutation $\mu_{P_I}^-(A)$ of $A$ corresponding to $\epsilon$, and then calculate the left mutation sequences starting from $\mathcal{F}(\mu_{P_I}^-(A))$ until we find a finite connected component in $\mathcal{H}(\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A)$. Aoki's GAP-QPA program enables us to delegate this heavy process to a computer. We have:

  $\epsilon$          $\#\mbox{\sf $\tau$-tilt}\hspace{.02in}_\epsilon A$
  ------------------- -----------------------------------------------------
  $(-,-,-,+,-,-,+)$   $51$
  $(-,-,-,+,+,-,+)$   $73$
  $(-,-,+,-,+,-,+)$   $102$
  $(-,-,-,-,+,+,+)$   $115$
  $(-,-,-,-,-,+,+)$   $142$
  $(-,-,-,+,-,+,+)$   $242$
  $(-,-,+,-,-,+,+)$   $1067$

By Proposition [Proposition 17](#prop::element-tau-tilt-epsilon){reference-type="ref" reference="prop::element-tau-tilt-epsilon"}, the poset $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ is finite for each of these 7 cases. We have proved that $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A$ is finite for any $\epsilon\in \mathsf{s}_7$. ◻

**Remark 39**.
We give an application of Proposition [Proposition 26](#prop::sign-tilting){reference-type="ref" reference="prop::sign-tilting"}. Let $A=S^+(2,6)$ over $p=5$. Then, we have $\mu_{P_3}^-(A)= \left [\begin{smallmatrix} \xymatrix@C=0.8cm{0\ar[r]& P_0\oplus P_1\oplus P_2}\\ \oplus \\ \xymatrix@C=1.5cm{P_3\ar[r]^-{\alpha_3}& P_4}\\ \oplus \\ \xymatrix@C=0.8cm{0\ar[r]& P_4\oplus P_5\oplus P_6} \end{smallmatrix} \right ]$ $\quad$ and $\quad$ $\mathcal{G}_3=\begin{pmatrix} 1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&-1&1&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1 \end{pmatrix}$. It is easy to check that $\mu_{P_3}^-(A)$ is a tilting complex. Set $B=\operatorname{End}\mu_{P_3}^-(A)$. Then, $B$ is isomorphic to $KQ/I$ with $Q:\vcenter{\xymatrix@C=0.7cm@R=0.5cm{ 6 \ar[d]_{\beta_1}\ar[r]^{\alpha_5}\ar@{.}[dr]& 5 \ar[d]^{\beta_0}\ar[r]^{\alpha_4} & 4 \ar[d]^{y} &3\ar[l]_{x} \\ 1\ar[r]_{\alpha_0}& 0 & 2\ar@/^0.6cm/[ll]^{\alpha_1} & }}$ $\quad$ and $\quad$ $I: \langle \alpha_5\beta_0-\beta_1\alpha_0, \alpha_5\alpha_4y, y\alpha_1\alpha_0\rangle$. We take, for example, $\epsilon=(-,+,+,-,-,+,+)$, and it is sent to $\epsilon'=(-,+,+,+,-,+,+)$ under the action of $\mathcal{G}_3$. We then obtain $\mathsf{2\mbox{-}silt}_{\epsilon}\hspace{.02in}A \simeq \mathsf{2\mbox{-}silt}\hspace{.02in}_{\epsilon'} B$. In summary, we have the following result.

**Theorem 40**. *A representation-infinite Borel-Schur algebra $S^+(n,r)$ is $\tau$-tilting finite if and only if $n=2$ and $p=2, r=4$, or $p=3, r=5$, or $p=5, r=6$.*

# Acknowledgements {#acknowledgements .unnumbered}

The author is grateful to Toshitaka Aoki for many useful discussions on silting theory. The author is partially supported by the National Key R$\&$D Program of China (Grant No. 2020YFA0713000) and China Postdoctoral Science Foundation (Grant No. 315251).

[^1]: 2020 *Mathematics Subject Classification.* 16G20, 16G60
(arXiv:2310.00358. Qi Wang, *On $\tau$-tilting finite Borel-Schur algebras*, math.RT.)
---
author:
- Adrien Deloro and Jules Tindzogho Ntsiri
bibliography:
- DTLie.bib
title: |
  Simple Lie rings of Morley rank $4$\
  (The Spanish Inquisition)
---

- Here, Chimera! Stop!
- No! Never!

**Abstract.** We prove the Lie ring equivalent of the Cherlin-Zilber conjecture: in characteristic $0$, for any rank; and in characteristic $\neq 2, 3$, for rank $\leq 4$. Both are open in the group case.

§ [1](#S:introduction){reference-type="ref" reference="S:introduction"}. Introduction --- § [2](#S:general){reference-type="ref" reference="S:general"}. Generalities --- § [3](#S:proofs){reference-type="ref" reference="S:proofs"}. Proofs --- § [4](#S:questions){reference-type="ref" reference="S:questions"}. Questions

# Introduction {#S:introduction}

For a first glance, our main result is here; an explanation will follow. Just consider Morley rank as a form of abstract dimension not related to any linear structure.

**Theorem 1**. *Let $\mathfrak{g}$ be a simple Lie ring of finite Morley rank.*

- *If the characteristic is $0$, then $\mathfrak{g}$ is a Lie algebra over a definable field (and one of Lie-Chevalley type).*

- *If the characteristic is $\neq 2, 3$, then $\mathfrak{g}$ does not have Morley rank $4$.*

The first ad should be considered folklore; as a matter of fact, it was announced without a proof in [@ZUncountably Theorem 2]. Later [@NNonassociative] did not explicitly prove nor state it, but he certainly knew it. This is classical model-theoretic algebra, and not the core of the present work. Like the unpublished [@Raleph], we shall focus on positive characteristic, where the topic is more algebraic and can be appreciated without knowing mathematical logic. Experts in modular Lie algebras should understand our results, our methods, and most of our final questions. The second ad is a partial analogue in model theory of [@SLie Theorem 2.2].
We emphasize that one starts with a Lie ring devoid of any linear structure or gradation, which highly complicates matters: paradoxically, Lie rings bear extremely little geometric information. Therefore our proof does not proceed through Cartan subalgebras, but through Borel subalgebras, with obvious influences from group-theoretic 'local analysis'. Finally, we mention that the equivalent for simple *groups* of finite Morley rank is notoriously open and challenging.

Returning to simple Lie rings of finite Morley rank, one conjectures that classifying them would be to the Block-Premet-Strade-Wilson theorem (see § [1.1](#s:abstractLie){reference-type="ref" reference="s:abstractLie"}), what the theory of groups of finite Morley rank is to the classification of the finite simple groups: an abstract, simpler sketch (see § [1.2](#s:modeltheory){reference-type="ref" reference="s:modeltheory"}). This deserves future work.

## Abstract Lie rings {#s:abstractLie}

#### Lie and Chevalley.

The name 'Lie' points at two distinct notions. First, a certain type of compatible higher structure on natural group-theoretic objects such as $(\mathrm{GL}_n(\mathbb{C}); \cdot)$. Second, a certain class of non-associative algebraic structures such as $(\mathfrak{gl}_n(\mathbb{C}); [\cdot, \cdot])$. Relations between the two sets of ideas were first discovered by Lie in the intended model of differential geometry, and then pushed to general grounds by Chevalley. The extent to which Lie theory is better understood in general terms, with no specific reference to the intended model of geometry, is however *taboo* in first expositions of the topic. Some mathematicians seem unaware of the failure of categoricity and of the legitimacy of non-Archimedean analysis, not to mention positive characteristic fields. The mathematical physicist will righteously object that 'Lie groups' in the physical sense of the term are non-functorial entities.
Through provocation we are merely stating the obvious: Lie theory and Chevalley theory are not fully identical, and the focus on higher-order structure is an obstacle to developing the latter.

#### Lie-Chevalley functors, Lie-Cartan functors.

Familiar objects such as $\mathfrak{sl}_2$ may be taken over various fields. More generally, Chevalley's celebrated basis theorem [@HLinear § 25 and notably § 25.4] brought into Lie algebras the idea of functoriality, with all its power and might. Let $\Phi$ be a simple root system, and fix a Chevalley basis of $\Phi$; this gives rise to the Lie ring $\mathfrak{G}_\Phi(\mathbb{Z})$. ($\mathfrak{G}$ is upper-case fraktur g.) Now for $R$ an associative ring with identity, let $\mathfrak{G}_\Phi(R) = \mathfrak{G}_\Phi(\mathbb{Z}) \otimes_\mathbb{Z}R$; this is functorial in $R$. The construction $\mathfrak{G}_\Phi$ could be called a (simple) Lie-Chevalley functor; our terminology may evolve in the future. It generates phrases such as 'Lie rings of Lie-Chevalley type', for images of Lie-Chevalley functors, in Chevalley's notation $A_n, \dots, G_2$. (Experts call 'Lie-Chevalley type' the *classical* algebras. This clashes with group-theoretic terminology, for which *classical* only means $A_n, B_n, C_n, D_n$, while $E_6, E_7, E_8, F_4, G_2$ are referred to as *exceptional*.) The Lie-Chevalley functors, familiar in algebraic geometry, contrast sharply with other constructions yielding simple Lie algebras, usually called 'of Cartan type': Witt $W(m; \underline{1})$, special $S(m; \underline{1})^{(1)}$, Hamiltonian $H(m; \underline{1})^{(2)}$, contact $K(m; \underline{1})^{(1)}$. We prefer 'of Lie-Cartan type' to prevent confusion with Cartan's maximal toral subalgebras. While infinite-dimensional in general, simple Lie algebras of Lie-Cartan type become finite-dimensional in positive characteristic.
Thus finite-dimensional, simple Lie algebras over algebraically closed fields of positive characteristic do not all fall into Chevalley's families.

#### Finite-dimensional, simple Lie algebras over algebraically closed fields.

A key fact is the Block-(Premet-Strade)-Wilson [@BWClassification; @PSClassification] positive answer to the Kostrikin-Shafarevich conjecture [@KSGraded], with the following consequence.

**Fact 1**. *Let $\mathfrak{g}$ be a finite-dimensional, simple Lie algebra over an algebraically closed field of characteristic $\geq 5$. Then:*

- *either $\mathfrak{g}$ is of Lie-Chevalley type,*

- *or $\mathfrak{g}$ is of Lie-Cartan type,*

- *or the characteristic is $5$ and $\mathfrak{g}$ is 'of Melikian type' [@MSimple].*

One recommends [@PSClassification] or [@SSimple1 Introduction]. No understanding of the proof nor of the result itself is needed here, as the only simple Lie algebra encountered in the present work is $\mathfrak{sl}_2$.

#### Abstract Lie rings.

A *Lie ring* is an abelian group equipped with a bi-additive map satisfying Jacobi's identity and called the *bracket*. Subrings and ideals are defined in these terms; so are nilpotence, solubility, and simplicity. A *Lie morphism* is a morphism of Lie rings, viz. an additive morphism preserving the bracket. There is a legitimate tradition of calling Lie rings 'Lie algebras over $\mathbb{Z}$'. The *characteristic* of a Lie ring is the exponent of the underlying additive group; 'characteristic $0$' means torsion-free divisible. Every simple Lie ring has characteristic $0$ or a prime. Every such ring is therefore a simple Lie algebra over either $\mathbb{Q}$ or $\mathbb{F}_p$; but the dimension is in general infinite. We reserve the phrase *Lie algebra* for the presence of an interesting field acting compatibly. (For us, interesting means 'infinite and definable', in the sense below.) Simple Lie rings are unclassifiable, unless mathematical logic itself brings some form of 'tameness'.
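Since $\mathfrak{sl}_2$ is the only simple Lie algebra encountered in this work, here is a minimal computational sketch (ours, not from the paper) of $\mathfrak{sl}_2(\mathbb{F}_5)$ as a *Lie ring* in the above sense: an abelian group $(\mathbb{F}_5)^3$ with a bi-additive bracket given by structure constants, and no field action assumed.

```python
# A minimal sketch (ours, not from the paper): sl_2(F_p) presented as a
# bare Lie *ring* -- the abelian group (F_5)^3 with a bi-additive bracket
# coming from the Chevalley-basis structure constants
#     [h,e] = 2e,   [h,f] = -2f,   [e,f] = h.
from itertools import product

P = 5  # a characteristic >= 5, as in Fact 1

# Ordered basis (e, h, f); SC[i][j] holds [b_i, b_j] as a coefficient triple.
SC = [
    [(0, 0, 0), (-2, 0, 0), (0, 1, 0)],   # [e,e]=0, [e,h]=-2e, [e,f]=h
    [(2, 0, 0), (0, 0, 0), (0, 0, -2)],   # [h,e]=2e, [h,h]=0,  [h,f]=-2f
    [(0, -1, 0), (0, 0, 2), (0, 0, 0)],   # [f,e]=-h, [f,h]=2f, [f,f]=0
]

def bracket(x, y):
    """Bi-additive extension of SC, with coefficients reduced mod P."""
    out = [0, 0, 0]
    for i, j, k in product(range(3), repeat=3):
        out[k] = (out[k] + x[i] * y[j] * SC[i][j][k]) % P
    return tuple(out)

def add(x, y):
    return tuple((a + b) % P for a, b in zip(x, y))

ZERO = (0, 0, 0)
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # e, h, f

# Jacobi's identity on basis triples; bi-additivity then gives it everywhere.
for x, y, z in product(basis, repeat=3):
    jac = add(add(bracket(x, bracket(y, z)),
                  bracket(y, bracket(z, x))),
              bracket(z, bracket(x, y)))
    assert jac == ZERO

# Brute force over all 125 elements: the centre is trivial, so this Lie
# ring is centreless (in particular, not nilpotent).
centre = [v for v in product(range(P), repeat=3)
          if all(bracket(v, b) == ZERO for b in basis)]
print(len(centre))  # -> 1, only the zero element
```

Note that no field multiplication on $(\mathbb{F}_5)^3$ is ever invoked: only addition and the bracket, matching the point of view of a Lie ring devoid of linear structure.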
## Lie rings in model theory {#s:modeltheory} The present work aims at showing that some simple Lie rings provided by model theory are *finite-dimensional* Lie algebras. We restrict our study to finite Morley rank. #### Morley rank (for non-logicians). When doing model-theoretic algebra, Morley rank is best construed as a dimension on definable sets; see for instance [@BNGroups chapters 2, 3, 4]. Alternative discussions are [@PStable Introduction] or [@ABCSimple § A.I.2]. Let $(\mathfrak{g}; +, [\cdot])$ be a Lie ring. A subset $X\subseteq \mathfrak{g}^n$ is *definable* if there is a first-order formula, viz. one using the algebraic operations, elements of $\mathfrak{g}$ as parameters, equations, negations, conjunctions, quantification on elements, and giving $X$ as a solution set. (This is the natural generalisation of *constructible* when losing the Chevalley-Tarski theorem that the constructible class is closed under projections.) For good measure and up to abusing terminology, allow quotients of definable sets by definable equivalence relations. Now $\mathfrak{g}$ has *finite Morley rank* if there is a dimension function assigning to each non-empty definable set an integer, and satisfying the following: - for definable $X$, $\dim X \geq n+1$ iff there are infinitely many, pairwise disjoint $Y_i \subseteq X$ with $\dim Y_i \geq n$; - for definable $f\colon X \to Y$ and integer $k$, the set $Y_k = \{y \in Y: \dim f^{-1}(\{y\}) = k\}$ is definable; - as above, if $Y = Y_k$, then $\dim X = \dim Y + k$; - as above, there is an integer $m$ such that $Y_0 = \{y \in Y: \operatorname{card} f^{-1}(\{y\}) \leq m\}$. The definition (actually the Borovik-Poizat axiomatisation of groups of finite Morley rank) was given for self-containedness. In practice, there is a dimension provided by mathematical logic, which obeys algebraically predictable rules. For example, 'finite non-empty' is equivalent to '$0$-dimensional'. 
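As a worked illustration of the additivity axiom (our example, not from the original text), take an algebraically closed field $\mathbb{K}$, a structure of Morley rank $1$, and the first projection $f\colon \mathbb{K}^2 \to \mathbb{K}$:

```latex
\dim f^{-1}(\{y\}) \;=\; \dim\bigl(\{y\}\times\mathbb{K}\bigr) \;=\; 1
  \qquad \text{for every } y \in \mathbb{K},
```

so $Y = Y_1$ in the notation above, and additivity yields $\dim \mathbb{K}^2 = \dim \mathbb{K} + 1 = 2$, as algebraically predicted.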
If $\mathfrak{g}$ is a $\mathbb{K}$-Lie algebra and $\mathbb{K}$ is definable and infinite, then $\dim \mathfrak{g}= \dim \mathbb{K}\cdot \operatorname{ldim}_\mathbb{K}(\mathfrak{g})$, where $\operatorname{ldim}_\mathbb{K}$ denotes the $\mathbb{K}$-linear dimension. If $\mathbb{K}$ is finite but $\mathfrak{g}$ is not, the formula makes no sense as $0 \cdot \infty$ is not defined. If $\mathbb{K}$ is not definable, the formula makes no sense at all since only definable sets bear a dimension.

#### Lie rings of finite Morley rank.

So far they have received insufficient attention. Nesin's seminal [@NNonassociative], from which the first author learnt a lot, and Rosengarten's thesis [@Raleph], are not widely cited. However, Baudisch's groups (non-algebraic, nilpotent groups of Morley rank $2$) are built from Lie rings [@BNew], so interactions between Lie rings and model theory are not to be despised. As a matter of fact, an ambitious model-theoretic study of nilpotent Lie algebras and the *Lazard(-Malcev) correspondence* with nilpotent groups is in preparation [@DMRSModel] under neo-stability assumptions. But we focus on model-theoretic algebra of finite Morley rank and the following.

**Conjecture 1** ('$\log \mathrm{CZ}$'; implicit in [@Raleph]). *Let $\mathfrak{g}$ be an infinite simple Lie ring of finite Morley rank. Suppose the characteristic is sufficiently large. Then $\mathfrak{g}$ is a simple Lie algebra over an algebraically closed field.*

**Remarks 1**. - Applying the Block-Premet-Strade-Wilson theorem, one would even know the isomorphism type of $\mathfrak{g}$. - It remains to explain 'sufficiently large'. The lazy version is: characteristic $0$, which we establish in Theorem [Theorem 1](#t:0){reference-type="ref" reference="t:0"}. The modest version is: $> f(d)$ where $d$ is the Morley rank, and $f$ would be determined while working on the conjecture. The optimistic version is: $> 3$.
There is an intermediate version in characteristic $> d$, where one would also prove that $\mathfrak{g}$ is of Lie-Chevalley type.

#### Relations to groups and to Lie algebras.

1. Relation to the Cherlin-Zilber conjecture. The conjecture is a clear analogue of the Cherlin-Zilber conjecture on infinite simple groups of finite Morley rank [@ABCSimple]. We do *not* believe there is a general method for obtaining a Lie ring and an adjoint action from an abstract group of finite Morley rank. Conversely, we do *not* believe there is a general method for retrieving a definable group acting on an abstract Lie ring of finite Morley rank. In short, we do *not* believe in a 'Lie-Chevalley correspondence' at this level of generality, so the two lines of thought should remain independent. We do not know whether adding a Zariski geometry [@HZZariski] would favorably change the landscape, but this is worth asking. See final questions in § [4](#S:questions){reference-type="ref" reference="S:questions"}.

2. Relation to the Block-Premet-Strade-Wilson theorem. Work on the Cherlin-Zilber conjecture is supposed to provide a simplified skeleton of the classification of the finite simple groups. Though the latter is settled and the former is not, short versions are always interesting to have. Likewise, we ask experts whether work on '$\log\mathrm{CZ}$' could give a blueprint of the Block-Premet-Strade-Wilson classification [@SSimple1; @SSimple2; @SSimple3].

#### Rosengarten's work.

The '$\log \mathrm{CZ}$' conjecture implicitly motivated Rosengarten's early and unpublished (and in our opinion, underrated) study [@Raleph] of Lie rings of Morley rank $\leq 3$.
Directed by Cherlin, Rosengarten proved the following in characteristic $\neq 2, 3$: - a connected Lie ring of Morley rank $1$ is abelian [@Raleph Corollary 4.1.1]; - a connected Lie ring of Morley rank $2$ is soluble; if non-nilpotent, it covers $\mathfrak{ga}_1(\mathbb{K})$ [@Raleph Theorem 4.2.1]; - a connected, non-soluble Lie ring of Morley rank $3$ covers $\mathfrak{sl}_2(\mathbb{K})$ [@Raleph Theorem 4.4.1]. These statements are discussed further in §§ [3.2](#s:1){reference-type="ref" reference="s:1"}--[3.4](#s:3){reference-type="ref" reference="s:3"}. #### Our results. We prove the following. **Theorem 2**. *Let $\mathfrak{g}$ be a simple Lie ring of finite Morley rank.* - *If the characteristic is $0$, then $\mathfrak{g}$ is a Lie algebra over a definable field (and one of Lie-Chevalley type).* - *If the characteristic is $\neq 2, 3$, then $\mathfrak{g}$ does not have Morley rank $4$.* Group theorists will deem these results merely inspirational insofar as they should have no impact on the Cherlin-Zilber conjecture, other than a renewed challenge. However, seen from the legitimate Lie-theoretic perspective, we do believe the connection with the Block-Premet-Strade-Wilson theorem deserves serious investigation in model-theoretic algebra. # Notation, facts, lemmas {#S:general} This section contains general notation and terminology (§ [2.1](#s:notation){reference-type="ref" reference="s:notation"}), some facts from the theory of groups of finite Morley rank (§ [2.2](#s:facts){reference-type="ref" reference="s:facts"}), and then a couple of lemmas on Lie rings of finite Morley rank (§ [2.3](#s:lemmas){reference-type="ref" reference="s:lemmas"}).
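The recurring hypothesis 'characteristic $\neq 2, 3$' in the statements above is genuinely needed: already for $\mathfrak{sl}_2$, characteristic $2$ misbehaves, since the scalar matrices become trace-zero and central. The following brute-force sanity check (the code and its function names are ours, for illustration only; it is not part of the paper's arguments) compares the centre of $\mathfrak{sl}_2(\mathbb{F}_p)$ for $p = 2$ and $p = 5$:

```python
# Compare the centre of the Lie ring sl_2(F_p) for p = 2 and p = 5:
# in characteristic 2 the scalar matrices are trace-zero and central,
# so sl_2(F_2) is not centreless, unlike sl_2(F_5).
from itertools import product

def bracket(x, y, p):
    """Lie bracket [x, y] = xy - yx of 2x2 matrices over F_p."""
    def mul(a, b):
        return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                           for j in range(2)) for i in range(2))
    xy, yx = mul(x, y), mul(y, x)
    return tuple(tuple((xy[i][j] - yx[i][j]) % p for j in range(2))
                 for i in range(2))

def sl2(p):
    """All trace-zero 2x2 matrices over F_p."""
    return [((a, b), (c, (-a) % p)) for a, b, c in product(range(p), repeat=3)]

def centre(p):
    """Brute-force centre of the Lie ring sl_2(F_p)."""
    g, zero = sl2(p), ((0, 0), (0, 0))
    return [x for x in g if all(bracket(x, y, p) == zero for y in g)]

print(len(centre(2)))  # 2: the zero matrix and the identity are both central
print(len(centre(5)))  # 1: only the zero matrix
```

In characteristic $3$ the centre of $\mathfrak{sl}_2$ is still trivial; the obstructions there are of a different nature.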
## Notation; Borel and Cartan subrings {#s:notation} - We occasionally combine bracket and product notations, as in Jacobi's identity: $$[a, bc] = [ab, c] + [b, ac].$$ - To distinguish between mere additive subgroups, subrings, and ideals, we reserve: $$\leq\text{ for subgroups}, \qquad \sqsubseteq\text{ for subrings}, \qquad \trianglelefteq\text{ for ideals}.$$ Throughout, $\oplus$ is used at the group-theoretic level. - If $\mathfrak{g}$ is a Lie ring and $x \in \mathfrak{g}$, we denote by $\operatorname{ad}_x$ the derivation $y \mapsto [x, y]$. - We use capital $B$ for 'bracket', not for 'Borel'. If $\mathfrak{g}$ is a Lie ring and $x \in \mathfrak{g}$, we let $B_x = [x, \mathfrak{g}] = \operatorname{im}\operatorname{ad}_x \leq \mathfrak{g}$, a mere *subgroup*. If $\mathfrak{g}$ is connected (see § [2.2](#s:facts){reference-type="ref" reference="s:facts"}), so is $B_x$. (Notation generalised below.) - For Borel subrings we favour $\mathfrak{b}\sqsubseteq \mathfrak{g}$. By definition, a Borel subring is one definable, connected, soluble, and maximal as such. - We use capital $C$ for 'centraliser', not for 'Cartan'. If $\mathfrak{g}$ is a Lie ring with connected components (see § [2.2](#s:facts){reference-type="ref" reference="s:facts"}) and $x \in \mathfrak{g}$, we let $C_x = C_\mathfrak{g}^\circ (x) = \{y \in \mathfrak{g}: [x, y] = 0\}^\circ\sqsubseteq \mathfrak{g}$, a *subring*. (Notation generalised below.) - For Cartan subrings we favour $\mathfrak{c}\sqsubseteq \mathfrak{g}$. By definition, a Cartan subring is one definable, connected, nilpotent, and quasi-self-normalising, viz. of finite index in its normaliser. - For $n$ an integer and $x \in \mathfrak{g}$ we let: $$B_x^n = \operatorname{im}\operatorname{ad}_x^n \leq \mathfrak{g}\quad\text{and}\quad C_x^n = \ker^\circ \operatorname{ad}_x^n \leq \mathfrak{g}.$$ Be careful that for $n > 1$, $C_x^n$ need not be a subring. 
If $\mathfrak{g}$ is connected, these are series of definable, connected sub*groups* (however see Lemma [Lemma 1](#l:Cn){reference-type="ref" reference="l:Cn"}); the latter is increasing while the former is decreasing. When one series stabilises, so does the other. - If $x \in \mathfrak{g}$ and $k$ is an integer (modulo the characteristic of $\mathfrak{g}$), we let $E_k(x) = \{y \in \mathfrak{g}: [x, y] = ky\}^\circ = \ker^\circ (\operatorname{ad}_x - k \operatorname{Id})$. It is a subgroup, and $[E_k(x), E_\ell(x)] \leq E_{k+\ell}(x)$. Often $x$ will remain implicit. For fixed $x$, the various $E_k$ are in direct sum, it being understood that indices are integers modulo the characteristic. ## Facts from the theory of groups of finite Morley rank {#s:facts} We need only basic results, used with no reference. **Fact 2**. *Work inside a group of finite Morley rank.* - *Every descending chain of definable subgroups is stationary [@BNGroups § 5.1].* - *Every definable subgroup $A$ has a connected component $A^\circ$, viz. a smallest definable subgroup of finite index; as a subgroup it is characteristic in $A$ [@BNGroups § 5.2]. In case $\mathfrak{g}$ is a Lie ring, then $\mathfrak{g}^\circ$ is an ideal of $\mathfrak{g}$ [@NNonassociative Lemma 1]. In particular, if $\mathfrak{g}$ is simple and infinite then it is connected.* - *The image of a definable, connected group under a definable group morphism remains definable and connected.* - *The sum of two definable, connected subgroups remains definable and connected; the sum of infinitely many such reduces to a finite sum.* - *(Strong) Cherlin-Macintyre-Shelah property: infinite definable skew-fields are algebraically closed fields [@BNGroups Theorem 8.10].* **Remark 1** (more on fields). Augmented fields abound in model theory, even in universes of finite Morley rank.
The existence of fields of finite Morley rank with non-minimal multiplicative group (at least in characteristic $0$, [@BHMWBoese]) complicates considerably the study of groups of finite Morley rank. They play no role here. However, fields of finite Morley rank with non-minimal additive groups (which exist only in positive characteristic, [@BMZRed]) make matters non-trivial. See Steps [Step 4](#p:3:nonilpotent:st:Bb){reference-type="ref" reference="p:3:nonilpotent:st:Bb"}, [Step 14](#p:4:nonilpotent:st:contradiction){reference-type="ref" reference="p:4:nonilpotent:st:contradiction"}. We turn to Lie rings of finite Morley rank. The following are straightforward adaptations of their group counterparts. More details can be found in [@Raleph § 3.3]. **Fact 3**. *Let $\mathfrak{g}$ be a Lie ring of finite Morley rank.* - *For $x \in \mathfrak{g}$, one has $\dim \mathfrak{g}= \dim B_x + \dim C_x$.* *In particular, if $\mathfrak{g}$ is connected, soluble and infinite, then all centralisers are infinite; otherwise $\mathfrak{g}= B_x \leq \mathfrak{g}'$, a contradiction. Without solubility there are some challenging open problems; see the questions in § [4](#S:questions){reference-type="ref" reference="S:questions"}.* - *If $\mathfrak{g}$ is connected, then the ideal $\mathfrak{g}' \trianglelefteq \mathfrak{g}$ generated by commutators is definable and connected.* *(This is actually trivial, as abelianity completely bypasses the Chevalley-Zilber 'indecomposable generation' lemma of [@BNGroups § 5.4]. We take it as an indication that all the strength of finite Morley rank may not be required. See the final questions in § [4](#S:questions){reference-type="ref" reference="S:questions"}.)* - *If $\mathfrak{g}$ is connected and the centre $Z(\mathfrak{g})$ is finite, then $\mathfrak{g}/Z(\mathfrak{g})$ is centreless [@BNGroups Lemma 6.1].* - *If $\mathfrak{g}$ is non-abelian and definably simple (viz. 
has no definable ideals other than $\{0\}$ and $\mathfrak{g}$), then $\mathfrak{g}$ is simple.* *(To our surprise, this is absent from Rosengarten's work and only implicit in Nesin's. However, the argument is present in [@NNonassociative top of p. 130].)* - *If $\mathfrak{g}$ is nilpotent and infinite, then $Z^\circ(\mathfrak{g})$ is infinite; every definable, infinite subring meets $Z^\circ(\mathfrak{g})$, and the normaliser condition holds [@BNGroups Lemmas 6.2 and 6.3].* - *Borel subrings are quasi-self-normalising. (Relies on Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"} below.)* We move to modules, linearisation principles, and their consequences. For $V$ an abelian group, we let $\operatorname{DefEnd}(V)$ be the Lie ring of *definable* endomorphisms. Be careful that $\operatorname{DefEnd}(V)$ itself need not be definable. The bracket is the usual one: $\llbracket f_1, f_2 \rrbracket = f_1\circ f_2 - f_2\circ f_1$. **Definition 1**. *A *definable Lie ring action* is a triple $(\mathfrak{g}, V, \cdot)$ where:* - *$\mathfrak{g}$ is a definable Lie ring,* - *$V$ is a definable, connected abelian group,* - *$\cdot\colon \mathfrak{g}\times V \to V$ is a definable map inducing a Lie morphism $\rho\colon \mathfrak{g}\to \operatorname{DefEnd}(V)$.* *When the context is clear we simply say *definable module*.* *A definable module is *faithful* if $\ker \rho = \{0\}$, and *$\mathfrak{g}$-irreducible* if $V$ has no definable, connected, infinite, proper $0 < W < V$ which is $\mathfrak{g}$-invariant. (Also called '$\mathfrak{g}$-minimal'.)* The following fact extends Zilber's original 'definabilisation' of the Schur covariance field. **Fact 4** (consequence of [@DZilber]). *Let $(\mathfrak{g}, V)$ be a definable, faithful, $\mathfrak{g}$-irreducible module in a finite-dimensional universe. Suppose that $\mathfrak{g}$ and $C_{\operatorname{DefEnd}(V)}(\mathfrak{g})$ are infinite.
Then there is a definable field $\mathbb{K}$ such that $V$ is a finite-dimensional $\mathbb{K}$-vector space and $\mathfrak{g}$ acts $\mathbb{K}$-linearly.* ## General lemmas {#s:lemmas} We first discuss iterated centralisers. **Lemma 1**. *Let $\mathfrak{g}$ be a Lie ring of finite Morley rank and $h \in \mathfrak{g}$. Let $n$ be an integer such that $C_h^n$ is abelian. Then $C_h^{n+1}$ is a subring.* *Proof.* Let $\delta = \operatorname{ad}_h$ and write $C_n = C_h^n = \ker^\circ \delta^n$. Let $x, y \in C_{n+1}$. Then, since $\delta^{n+1} x = \delta^{n+1} y = 0$, the Leibniz rule gives: $$\delta^{n+1}([x, y]) = \sum_{k = 0}^{n+1} \binom{n+1}{k} [\delta^k x, \delta^{n+1-k} y] = \sum_{k = 1}^n \binom{n+1}{k} [\delta^k x, \delta^{n+1 - k} y].$$ Now for $1 \leq k \leq n$, one has $\delta^k x \in C_{n+1 - k} \leq C_n$ and $\delta^{n+1 - k} y \in C_k \leq C_n$. By abelianity, all brackets vanish, so $[x, y] \in \ker \delta^{n+1}$. Since $[C_h^{n+1}, C_h^{n+1}]$ is connected, it is contained in $\ker^\circ \delta^{n+1} = C_h^{n+1}$. ◻ There is a form of converse. **Lemma 2** ([@Raleph Lemma 3.2.6]). *Let $\mathfrak{g}$ be a Lie ring with no additive $2$-torsion and $a \in \mathfrak{g}$ be such that $C_a^2 = \mathfrak{g}$. Then $B_a$ is an abelian subring of $C_a$, and actually an ideal of $C_a$.* *Proof.* By assumption, $B_a \leq C_a$. Now let $b_1, b_2 \in B_a$, say $b_i = [a, x_i]$. Then: $$\begin{aligned} [b_1, b_2] & = [b_1, [a, x_2]] = [b_1 a, x_2] + [a, b_1 x_2]\\ & = [a, b_1 x_2] = [a, [a, x_1] x_2]\\ & = [a, [ax_2, x_1] + [a, x_1 x_2]]\\ & = [a, [b_2, x_1]] + [a, \underbrace{[a, x_1 x_2]}_{\in B_a \leq C_a}]\\ & = [a b_2, x_1] + [b_2, ax_1]\\ & = [b_2, b_1].\end{aligned}$$ Since there is no $2$-torsion, we find $[b_1, b_2] = 0$, as wished. Last, if $b = [a, x] \in B_a$ and $c \in C_a$, then $[b, c] = [[a, x], c] = [ac, x] + [a, xc] = [a, xc] \in B_a$, meaning $B_a \trianglelefteq C_a$.
◻ The following plays a key role when removing nilpotent Borel subrings in Propositions [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"} and [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}. **Lemma 3** ([@Raleph Claim 4.3.2]). *Let $\mathfrak{g}$ be a Lie ring with no additive $2$-torsion and $\mathfrak{a}\sqsubseteq \mathfrak{g}$ be an abelian subring. For $n \geq 1$ and $x \in \mathfrak{g}$, let $A_n = [\mathfrak{a}, \dots, [\mathfrak{a}, x]{\scriptscriptstyle \dots}] \leq \mathfrak{g}$, where $\mathfrak{a}$ appears $n$ times. Then $[A_n, A_n] \leq [\mathfrak{a}, \mathfrak{g}]$.* *Proof.* Compute modulo $[\mathfrak{a}, \mathfrak{g}]$. For any $a \in \mathfrak{a}$ and $x, y \in \mathfrak{g}$ one has: $$0 \equiv [a, [x, y]] = [ax, y] + [x, ay],$$ whence $[ax, y] \equiv - [x, ay]$. Let $f$ be a product of $n$ $\mathfrak{a}$-derivations, say $f = \operatorname{ad}_{a_1}\circ \cdots\circ \operatorname{ad}_{a_n}$ with the $a_i$ all in $\mathfrak{a}$. By abelianity, the order does not matter. Then $[f(x), y] \equiv (-1)^n [x, f(y)]$. If $f'$ is another product, of say $n'$ $\mathfrak{a}$-derivations, then keeping abelianity in mind: $$\begin{aligned} [f(x), f'(x)] & \equiv (-1)^n [x, ff'(x)] \equiv (-1)^n [x, f'f(x)]\\ & \equiv (-1)^{n+n'} [f'(x), f(x)] \equiv (-1)^{n+n'+1} [f(x), f'(x)].\end{aligned}$$ Hence if $n + n'$ is even then $2 [f(x), f'(x)] \equiv 0$ and $[f(x), f'(x)] \equiv 0$. This is the case when $n' = n$, implying $[A_n, A_n] \leq [\mathfrak{a}, \mathfrak{g}]$. ◻ The following extremely useful principle is simply additivity of $\dim$. **Lemma 4** ('lifting the eigenspace'). *Let $\mathfrak{g}$ be a Lie ring of finite Morley rank and $X$ be a subquotient module, viz. $X = V/V'$ where $V' \leq V \leq \mathfrak{g}$ are $\mathfrak{g}$-submodules.
If for some $h \in \mathfrak{g}$ and some integer $k$ modulo the characteristic, $E_k^X(h) = \{x \in X: [h, x] = kx\}^\circ$ is non-trivial, then $E_k^\mathfrak{g}(h) \leq \mathfrak{g}$ is non-trivial.* *Proof.* Let $\varphi(x) = [h, x] - kx$, which stabilises both $V$ and $V'$. Let $\pi\colon V \to X$ be the quotient map and $\overline{\varphi}\colon X \to X$ be induced by $\varphi$. Notice $E_k^X = \ker^\circ \overline{\varphi}$. Let $W = (\pi^{-1}(\ker^\circ \overline{\varphi}))^\circ > V'$. Then $\varphi(W) \leq V'$, so by additivity of $\dim$, $\ker^\circ \varphi > 0$. ◻ We now turn to the action of a definable subring on the quotient group. **Lemma 5**. *Let $\mathfrak{g}$ be a simple Lie ring of finite Morley rank and $\mathfrak{h}\sqsubset \mathfrak{g}$ be a definable, connected, proper subring. Then $I = C_\mathfrak{h}^\circ(\mathfrak{g}/\mathfrak{h})$ is nilpotent.* *Proof.* Of course $I \trianglelefteq \mathfrak{h}$. Let $I^{[n]}$ be the descending nilpotence series, so that $I^{[n]} \trianglelefteq \mathfrak{h}$. We claim that for $n \geq 0$, one has $[I^{[n+1]}, \mathfrak{g}] \leq I^{[n]}$. Indeed when $n = 0$, one has: $$[I^{[1]}, \mathfrak{g}] = [I I, \mathfrak{g}] \leq [I, I\mathfrak{g}] \leq [I, \mathfrak{h}] \leq I,$$ and then by induction: $$[I^{[n+2]}, \mathfrak{g}] = [I I^{[n+1]}, \mathfrak{g}] \leq [I, I^{[n+1]}\mathfrak{g}] + [I^{[n+1]}, I\mathfrak{g}] \leq [I, I^{[n]}] + [I^{[n+1]}, \mathfrak{h}] \leq I^{[n+1]}.$$ By the descending chain condition, there is $n$ such that $I^{[n+1]} = I^{[n]}$. Then $[I^{[n]}, \mathfrak{g}] \leq I^{[n]}$ so $I^{[n]} \trianglelefteq \mathfrak{g}$. Since $I^{[n]} < \mathfrak{g}$, we get $I^{[n]} = 0$ by simplicity. ◻ To our own surprise, we could easily obtain a self-normalisation criterion. **Lemma 6**. *Let $\mathfrak{g}$ be a definable, connected Lie ring and $\mathfrak{h}\sqsubset \mathfrak{g}$ be a definable, connected subring of codimension $1$.
Then either $\mathfrak{h}\triangleleft \mathfrak{g}$ is an ideal, or $N_\mathfrak{g}(\mathfrak{h}) = \mathfrak{h}$.* *Proof.* Let $I = C_\mathfrak{g}(\mathfrak{g}/\mathfrak{h})$. If $I = \mathfrak{h}$ then $\mathfrak{h}$ is an ideal of $\mathfrak{g}$; so suppose not. Then $\mathfrak{h}/I$ acts non-trivially on $\mathfrak{g}/\mathfrak{h}$. Linearising in dimension $1$ (see § [3.2](#s:1){reference-type="ref" reference="s:1"}), the action is scalar; hence free. Let $h \in \mathfrak{h}\setminus I$. Let $n \in N_\mathfrak{g}(\mathfrak{h})$. Then $[h, n] = - [n, h] \in \mathfrak{h}$, so the non-trivial, scalar action of $h$ on $\mathfrak{g}/\mathfrak{h}$ sends the image of $n$ to $0$. By freeness, $n \in \mathfrak{h}$ already. ◻ Last is an important observation on definable derivations. **Lemma 7**. *An infinite field of finite Morley rank has no non-trivial definable derivation.* *Proof.* Recall that the Cherlin-Macintyre-Shelah property holds: $\mathbb{K}$, or any infinite definable subfield, is algebraically closed. Let $\delta \colon \mathbb{K}\to \mathbb{K}$ be a definable derivation, and $\mathbb{K}_0$ be the field of constants. Suppose the characteristic is not $2$. Let $x \in \mathbb{K}_0$, and let $y \in \mathbb{K}$ be such that $y^2 = x$. Then $0 = \delta(x) = 2 y \delta(y)$, so $y \in \mathbb{K}_0$. So $\mathbb{K}_0$ is closed under taking square roots: hence infinite. In characteristic $2$, it is however closed under taking cube roots: hence infinite as well. So in either case, $\mathbb{K}_0$ is infinite and definable: this proves $\mathbb{K}_0 = \mathbb{K}$, viz. $\delta = 0$. ◻ The same holds in $o$-minimal universes, but the general finite-dimensional case seems unclear. It does, however, appear to be a desirable property for a more systematic investigation of Lie rings in model theory.
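Several computations in this subsection rest on the iterated Leibniz rule $\delta^m([x, y]) = \sum_{k=0}^m \binom{m}{k} [\delta^k x, \delta^{m-k} y]$ for the derivation $\delta = \operatorname{ad}_h$. It can be checked numerically in any concrete Lie ring; the following brute-force sketch (ours, for illustration only; the modulus and the matrices $h, x, y$ are arbitrary choices) verifies it in $\mathfrak{gl}_2(\mathbb{F}_7)$:

```python
# Numerical check of the iterated Leibniz rule
#   delta^m [x, y] = sum_k C(m, k) [delta^k x, delta^{m-k} y],  delta = ad_h,
# in the Lie ring gl_2(F_7), for a few fixed matrices.
from math import comb

P = 7  # arbitrary prime modulus

def mat(f):
    """Build a 2x2 matrix over F_P from an entry function."""
    return tuple(tuple(f(i, j) % P for j in range(2)) for i in range(2))

def mul(a, b):
    return mat(lambda i, j: sum(a[i][k] * b[k][j] for k in range(2)))

def add(a, b):
    return mat(lambda i, j: a[i][j] + b[i][j])

def scal(c, a):
    return mat(lambda i, j: c * a[i][j])

def br(a, b):  # Lie bracket [a, b] = ab - ba
    return add(mul(a, b), scal(-1, mul(b, a)))

def ad_pow(h, x, k):  # delta^k x for delta = ad_h
    for _ in range(k):
        x = br(h, x)
    return x

h = ((1, 2), (3, 4))
x = ((0, 1), (5, 2))
y = ((3, 3), (1, 6))

for m in range(1, 6):
    lhs = ad_pow(h, br(x, y), m)
    rhs = ((0, 0), (0, 0))
    for k in range(m + 1):
        rhs = add(rhs, scal(comb(m, k), br(ad_pow(h, x, k), ad_pow(h, y, m - k))))
    assert lhs == rhs
print("Leibniz rule verified for m = 1..5")
```

Of course this proves nothing in general; it merely illustrates, in one concrete ring, the identity used for iterated centralisers.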
# The proofs {#S:proofs} § [3.1](#s:0){reference-type="ref" reference="s:0"} handles characteristic $0$; Theorem [Theorem 1](#t:0){reference-type="ref" reference="t:0"} will not surprise people familiar with Zilber's early work. Turning to positive characteristic, §§ [3.2](#s:1){reference-type="ref" reference="s:1"}, [3.3](#s:2){reference-type="ref" reference="s:2"}, [3.4](#s:3){reference-type="ref" reference="s:3"}, [3.5](#s:4){reference-type="ref" reference="s:4"} deal with increasing dimensions, and theorems are numbered accordingly. Since we do not reprove Rosengarten's rank $1$ analysis, we call it *Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"}*; Corollary [Corollary 1](#c:1){reference-type="ref" reference="c:1"} should be folklore. Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"} is already in [@Raleph], but the too partial (though important) Corollary [Corollary 2](#c:Lie2){reference-type="ref" reference="c:Lie2"} is new. Theorem [Theorem 3](#t:3){reference-type="ref" reference="t:3"} too is already in [@Raleph]; its simple Corollary [Corollary 3](#c:3){reference-type="ref" reference="c:3"} is not. Theorem [Theorem 4](#t:4){reference-type="ref" reference="t:4"} is definitely new. ## Characteristic 0 {#s:0} The following entirely settles matters in characteristic $0$. It is essentially Zilber's definability of the Schur covariance field and was known to Zilber [@ZUncountably Theorem 2]. We learnt about this paper from Belegradek, whom we quote (personal correspondence). > *This is a one-page paper containing only formulations of results and some hints of proofs. It was submitted in 1979 but published only in 1982. The rule of that journal was that they could publish a short paper containing no proofs in case if the author also submitted a manuscript with detailed proofs; the referee wrote a report based on the short paper and manuscript together. I don't know whether that manuscript is still available.
But I think it makes sense for people working on groups of finite Morley rank to rediscover the proofs of the paper, and to advertise the paper (which seems not well-known even though its author is the famous Zilber; maybe this is because that second rate Russian journal was not translated into English, and the paper was not reviewed in MR).* Although we found no further trace of the topic in the literature, the result must also have been known to Nesin [@NNonassociative] among others. Why Rosengarten does not discuss it is a mystery to us. **Theorem 1**. *Work in a theory of finite Morley rank.* 1. *[\[t:0:i:linearisation\]]{#t:0:i:linearisation label="t:0:i:linearisation"} Let $(\mathfrak{g}, V, \cdot)$ be a definable Lie ring action with $\mathfrak{g}$ infinite. Suppose it is irreducible and faithful. *Suppose further that $V$ does not have bounded exponent.* Then the configuration is linear: there is a definable field $\mathbb{K}$ such that $V$ is a $\mathbb{K}$-vector space and $\mathfrak{g}$ is a $\mathbb{K}$-Lie subalgebra of $\mathfrak{gl}_\mathbb{K}(V)$.* 2. *If in addition $\mathfrak{g}$ is a Lie algebra over some algebraically closed field $\mathbb{L}$, then $\mathbb{L}\simeq \mathbb{K}$ definably.* 3. *If $\mathfrak{g}$ is a simple Lie ring of finite Morley rank *and characteristic $0$*, then there is a definable field $\mathbb{K}$ such that $\mathfrak{g}$ is the ring of $\mathbb{K}$-points of one of the Lie-Chevalley functors $A_n, \dots, G_2$.* *Proof.* 1. The action of $\mathfrak{g}$ commutes with $\mathbb{Z}\operatorname{Id}_V$, so we may linearise (see § [2.2](#s:facts){reference-type="ref" reference="s:facts"}). There is a definable field $\mathbb{K}$ such that $V$ is a finite-dimensional vector space over $\mathbb{K}$ and $\mathfrak{g}\hookrightarrow \mathfrak{gl}_\mathbb{K}(V)$, a mere embedding of Lie rings. Of course $\operatorname{char} \mathbb{K}= 0$. It suffices to show that $\mathfrak{g}$ is a vector subspace of $\mathfrak{gl}_\mathbb{K}(V)$.
Indeed $N = \{\lambda \in \mathbb{K}: (\forall x \in \mathfrak{g})(\lambda \cdot x \in \mathfrak{g})\}$ is an infinite, definable subring of $\mathbb{K}$, hence equal to $\mathbb{K}$ by the Cherlin-Macintyre-Shelah property. 2. Let $\VDash$ denote interpretability. The above and definability of linear structures over a pure field imply $(\mathbb{K}; +, \cdot) \VDash (\mathfrak{g}, V; +, [\cdot, \cdot], \cdot)$. Now using the adjoint action of a Cartan subalgebra, one has $(\mathfrak{g}; +, [\cdot, \cdot]) \VDash (\mathbb{L}; +, \cdot)$. It follows $(\mathbb{K}; +, \cdot) \VDash (\mathbb{L}; +, \cdot)$ and one may conclude by Poizat's monosomy theorem [@PStable Theorem 4.15] that $\mathbb{K}\simeq \mathbb{L}$, even $\mathbb{K}$-definably. 3. Let $\mathfrak{g}$ act on $V = (\mathfrak{g}; +)$ by the adjoint representation: the module is faithful and irreducible. By [\[t:0:i:linearisation\]](#t:0:i:linearisation){reference-type="ref" reference="t:0:i:linearisation"}, $\mathfrak{g}$ is a Lie algebra over some algebraically closed field $\mathbb{K}$. It remains simple as such. Since $\mathbb{K}$ has characteristic $0$, the isomorphism type of $\mathfrak{g}$ is given by the classification into Chevalley families.  ◻ Nothing similar is possible in positive characteristic---and one should also expect objects of Lie-Cartan type. ## Dimension 1 {#s:1} We state Rosengarten's 'minimal' result (which we do not reprove), and then a number of consequences in Corollary [Corollary 1](#c:1){reference-type="ref" reference="c:1"}. **Fact 1** (Rosengarten; [@Raleph Theorem 4.1.1]). *Let $\mathfrak{g}$ be an infinite Lie ring of finite Morley rank and characteristic $\neq 2, 3$. Then $\mathfrak{g}$ contains an infinite, definable, connected, abelian subring.* This is the analogue of Reineke's theorem on 'definably minimal' groups [@RMinimale]. The existing proof is non-trivial and relies on results in the finite case. (Moreover, the problem is open in characteristic $3$.)
See the section on open questions, § [4](#S:questions){reference-type="ref" reference="S:questions"}. We prove a couple of consequences, some already mentioned in § [2](#S:general){reference-type="ref" reference="S:general"}. In general, they will be used without mention. **Corollary 1**. *Let $\mathfrak{g}\neq 0$ be a connected Lie ring of finite Morley rank and characteristic $\neq 2, 3$.* 1. *Borel subrings are infinite and quasi-self-normalising.* 2. *[\[c:1:i:defmin\]]{#c:1:i:defmin label="c:1:i:defmin"} Let $V$ be a definable, faithful module. Suppose that $V$ is definably minimal as a group (viz. $\{0\}$-irreducible). Then $\dim \mathfrak{g}\leq \dim V$.* 3. *Let $V$ be a faithful $1$-dimensional $\mathfrak{g}$-module. Then there is a field $\mathbb{K}$ with $V \simeq \mathbb{K}_+$ and $\mathfrak{g}\simeq \mathbb{K}\operatorname{Id}_V$. In particular, there is $h \in \mathfrak{g}$ acting like $\operatorname{Id}_V$.* We call the last item 'linearising in dimension $1$'. *Proof.* 1. By Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"}, there exist infinite definable abelian subrings, so Borel subrings are non-trivial. Let $\mathfrak{b}\sqsubseteq \mathfrak{g}$ be one such. If $N_\mathfrak{g}(\mathfrak{b})/\mathfrak{b}$ is infinite, then it contains an infinite definable abelian subring $\overline{\mathfrak{a}}$. Lifting and taking the connected component, the soluble subring $\pi^{-1}(\overline{\mathfrak{a}})^\circ > \mathfrak{b}$ contradicts the definition of $\mathfrak{b}$. 2. Let $V$ be a faithful $\mathfrak{g}$-module which is $\{0\}$-irreducible. Let $v \in V\setminus\{0\}$ and $\mathfrak{h}= C_\mathfrak{g}^\circ(v)$. Suppose that $\mathfrak{h}$ is infinite. By Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"}, it contains an infinite, definable, connected, abelian $\mathfrak{a}\sqsubseteq \mathfrak{h}$. Now $V$ remains faithful as an $\mathfrak{a}$-module, and is $\mathfrak{a}$-irreducible.
Linearising the abelian action (see end of § [2.2](#s:facts){reference-type="ref" reference="s:facts"}), $V$ is a vector space over a definable field $\mathbb{K}$, and the action of $\mathfrak{a}$ is by scalars, hence free. But $\mathfrak{a}$ acts trivially on $v \neq 0$: a contradiction. So $\mathfrak{h}$ is finite. In particular, $\dim (\mathfrak{g}\cdot v) = \dim \mathfrak{g}\leq \dim V$. 3. Let $V$ be a $1$-dimensional, faithful $\mathfrak{g}$-module. By [\[c:1:i:defmin\]](#c:1:i:defmin){reference-type="ref" reference="c:1:i:defmin"}, $\dim \mathfrak{g}= 1$. By Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"}, $\mathfrak{g}$ is abelian and we may linearise. Clearly $V \simeq \mathbb{K}_+$ and $\mathfrak{g}\simeq \mathbb{K}\operatorname{Id}_V$.  ◻ ## Dimension 2 {#s:2} We reprove Rosengarten's dimension $2$ analysis (Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}), and derive an important, though too partial, form of Lie's theorem *in dimension $2$* (Corollary [Corollary 2](#c:Lie2){reference-type="ref" reference="c:Lie2"}). The following is already in [@Raleph]; our proof is as short but more conceptual. **Theorem 2** (Rosengarten; [@Raleph Theorem 4.2.1]). *Let $\mathfrak{g}$ be a connected Lie ring of Morley rank $2$ and characteristic $\neq 2, 3$. Then:* 1. *$\mathfrak{g}$ is soluble;* 2. *if $\mathfrak{g}$ is non-nilpotent, then:* - *there is a Cartan subring $\mathfrak{c}\sqsubseteq \mathfrak{g}$ with $\mathfrak{g}= \mathfrak{c}\oplus \mathfrak{g}'$;* - *there is $h \in \mathfrak{c}$ with $(\operatorname{ad}_h)_{|\mathfrak{g}'} = \operatorname{Id}_{\mathfrak{g}'}$ (in particular, $Z(\mathfrak{g}) \leq \mathfrak{c}$);* - *$Z(\mathfrak{g})$ is finite and there is a definable field $\mathbb{K}$ with $\mathfrak{g}/Z(\mathfrak{g}) \simeq \mathfrak{ga}_1(\mathbb{K})$.* *Proof.* 1. Let $\mathfrak{h}\sqsubseteq \mathfrak{g}$ be an infinite, definable, connected subring of minimal $\dim$. 
By Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"}, $\mathfrak{h}$ is abelian, so we may assume it is proper; $\dim \mathfrak{h}= 1$. Consider the action of $\mathfrak{h}$ on $V = \mathfrak{g}/\mathfrak{h}$. If it is trivial, then $\mathfrak{h}\triangleleft \mathfrak{g}$ is an ideal, so $\mathfrak{g}$ is soluble and we are done. So we suppose that said action is non-trivial. Linearising in dimension $1$, there is $h \in \mathfrak{h}$ acting on $V$ as $\operatorname{Id}_V$. Lifting the eigenspace with Lemma [Lemma 4](#l:liftingEk){reference-type="ref" reference="l:liftingEk"}, $E_1(h) \neq 0$. Now $\mathfrak{h}$ is abelian so $\mathfrak{h}\leq E_0(h)$. Since $\dim \mathfrak{g}= 2$, we get $\mathfrak{g}= E_0 \oplus E_1$; finally $[E_1, E_1] \leq E_2 = 0$ so $E_1$ is an ideal of $\mathfrak{g}$ and we are done. 2. Suppose $\mathfrak{g}$ is non-nilpotent. Then $\mathfrak{g}'$ has dimension $1$, so it is abelian by Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"}. By non-nilpotence, the action of $\mathfrak{g}$ on $\mathfrak{g}'$ is non-trivial. Linearising in dimension $1$, there is $h \in \mathfrak{g}$ acting on $\mathfrak{g}'$ like $\operatorname{Id}_{\mathfrak{g}'}$; hence $E_1 \neq 0$. Since $\mathfrak{g}$ is soluble, one has $E_0 \neq 0$. So we find $\mathfrak{g}= E_0 \oplus E_1$ and $E_1 = \mathfrak{g}'$. Again by non-nilpotence, $E_1$ is the only $1$-dimensional, connected ideal of $\mathfrak{g}$. It follows that $\mathfrak{c}= E_0$ is abelian, and has finite index in its normaliser: it is a Cartan subring. Clearly $Z(\mathfrak{g}) \leq E_0$. Last, $Z(\mathfrak{g})$ is finite: otherwise $Z^\circ(\mathfrak{g}) = E_0$ by connectedness, whence $[E_0, E_1] = 0$ and $\mathfrak{g}$ would be abelian, a contradiction. So $\mathfrak{g}/Z(\mathfrak{g})$ is centreless. We now suppose $\mathfrak{g}$ centreless. The action of $E_0$ on $E_1$ is then faithful, so linearisation directly gives the required isomorphism.  ◻ **Remark 2**. A finite centre is unavoidable, as the following example shows.
Let: $$\mathfrak{g}= \left\{\begin{pmatrix} x & 0 & 0\\ 0 & \alpha(x) & y\\ 0 & 0 & 0\end{pmatrix}: (x, y) \in \mathbb{K}^2\right\},$$ for any additive $\alpha\colon \mathbb{K}_+ \to \mathbb{K}_+$, such as for instance $x \mapsto x^p - x$ (whose kernel is $\mathbb{F}_p$). Then $Z(\mathfrak{g})$ is isomorphic to $\ker \alpha$. The following is very unsatisfactory, and one should generalise it to $\dim V$ less than the characteristic, with no mention of $\dim \mathfrak{b}$. It plays an important role in the proof of Theorem [Theorem 4](#t:4){reference-type="ref" reference="t:4"}. **Corollary 2**. *Work in finite Morley rank. Let $\mathfrak{b}$ be a non-nilpotent, connected Lie ring of characteristic $\neq 2, 3$. Let $V$ be an irreducible $\mathfrak{b}$-module. Suppose $\dim \mathfrak{b}= \dim V = 2$. Then $\mathfrak{b}'$ centralises $V$.* *Proof.* By Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}, $\mathfrak{a}= \mathfrak{b}' > 0$; moreover there is a Cartan subring $\mathfrak{t}< \mathfrak{b}$ such that $\mathfrak{b}= \mathfrak{t}\oplus \mathfrak{a}$ and $\mathfrak{a}= [\mathfrak{t}, \mathfrak{a}]$. Let $\Delta = \operatorname{DefEnd}(V)$, which is an associative ring with induced Lie bracket $\llbracket f, g\rrbracket = fg - gf$. Let $\rho\colon \mathfrak{b}\to \Delta$, a Lie morphism. We mostly omit it from the notation, with exceptions for enhanced clarity. To prove that $\mathfrak{a}\leq \ker \rho$, we suppose otherwise. Notice that $C_V^\circ(\mathfrak{a})$ is $\mathfrak{b}$-invariant, so we may suppose $C_V^\circ(\mathfrak{a}) = 0$. **Step 1**. $V$ is not $\mathfrak{a}$-irreducible. *Proof.* Suppose it is. Linearising the abelian action, $\mathbb{K}= C_{\operatorname{DefEnd}(V)}(\rho(\mathfrak{a}))$ is a definable field containing $\rho(\mathfrak{a})$. We claim that $\mathfrak{t}$ acts by definable derivations of $\mathbb{K}$. Indeed, let $\lambda \in \mathbb{K}$ and $a \in \mathfrak{a}$. Also let $h \in \mathfrak{t}$ and $a' = [h, a]$.
Then: $$\begin{aligned} \llbracket \llbracket \rho(h), \lambda\rrbracket, \rho(a)\rrbracket & = (\rho(h) \lambda - \lambda \rho(h)) \rho(a) - \rho(a) (\rho(h) \lambda - \lambda \rho(h))\\ & = \rho(h) \rho(a) \lambda - \lambda \rho(h) \rho(a) - \rho(a) \rho(h) \lambda + \lambda \rho(a) \rho(h)\\ & = \llbracket\rho (h), \rho(a)\rrbracket \lambda - \lambda \llbracket \rho(h), \rho(a)\rrbracket\\ & = \rho([h, a]) \lambda - \lambda \rho([h,a])\\ & = \llbracket \rho(a'), \lambda\rrbracket\\ & = 0,\end{aligned}$$ proving that $\rho(h)$ does act on $\mathbb{K}$. The action is clearly that of a derivation. By Lemma [Lemma 7](#l:noderivations){reference-type="ref" reference="l:noderivations"}, $\rho(\mathfrak{t})$ must act trivially; in particular $\rho(\mathfrak{t})$ centralises $\rho(\mathfrak{a}) \leq \mathbb{K}$. So $\mathfrak{a}= [\mathfrak{t}, \mathfrak{a}] \leq \ker \rho$, as wanted. ◻ **Step 2**. $V$ is $\mathfrak{a}$-irreducible. *Proof.* Otherwise let $W < V$ be a $1$-dimensional $\mathfrak{a}$-submodule. If $\mathfrak{a}$ acts trivially on $W$, then $C_V^\circ(\mathfrak{a}) \neq 0$, a contradiction. So $\mathfrak{a}$ acts non-trivially on $W$. Linearising, $\mathbb{K}= C_{\operatorname{DefEnd}(W)}(\mathfrak{a})$ is a definable field containing $\rho(\mathfrak{a})$. We fix $h \in \mathfrak{t}$ ad-acting on $\mathfrak{a}$ like $\operatorname{Id}_\mathfrak{a}$, viz. $[h, a] = a$ for $a \in \mathfrak{a}$. We claim $hW \neq 0$ and $(W \cap hW)^\circ = 0$, so that $V = W + h W$. Suppose $w_1 = h w_2$ for some $w_1, w_2 \in W$. Then applying an arbitrary $a \in \mathfrak{a}$: $$a w_1 = a h w_2 = ha w_2 - a w_2,$$ so $h a w_2 = a(w_1 + w_2) \in W$. But if $w_2 \notin C_V(\mathfrak{a})$, then $\mathfrak{a}w_2 = W$, which proves $h W = W$. So $h$ acts on $W$, and as before on $\mathbb{K}$ as well.
Now it induces a derivation of $\mathbb{K}\geq \mathfrak{a}$, although it acts non-trivially on $\mathfrak{a}$; against Lemma [Lemma 7](#l:noderivations){reference-type="ref" reference="l:noderivations"}. So $w_2 \in C_V(\mathfrak{a})$, which is finite. So $w_1 = h w_2$ has finitely many solutions, which proves all claims. We derive a contradiction. Let $w \in W$. Then there are $w_1, w_2 \in W$ such that $h^2 w = w_1 + hw_2$. Let $a \in \mathfrak{a}$ act on $W$ like $\operatorname{Id}_W$ (this is the element going to $1_\mathbb{K}$). Remember $[h, a] = a$; in particular $a h^2 = h^2 a - 2 ha + a$. Applying to $w$ one finds: $$\begin{aligned} a h^2 w & = a w_1 + a h w_2 = w_1 + ha w_2 - a w_2 = w_1 - w_2 + h w_2\\ = (h^2 a - 2 ha + a)w & = h^2 w - 2 h w + w = w_1 + w + h (w_2 - 2w).\end{aligned}$$ This proves $hw \in W \cap hW$ which is finite, so by connectedness $hW = 0$, a contradiction. ◻ This completes the proof of Corollary [Corollary 2](#c:Lie2){reference-type="ref" reference="c:Lie2"}. ◻ ## Dimension 3 {#s:3} **Theorem 3** (Rosengarten; [@Raleph Theorem 4.4.1]). *Let $\mathfrak{g}$ be a simple Lie ring of Morley rank $3$ and characteristic $\neq 2, 3$. Then $\mathfrak{g}\simeq \mathfrak{sl}_2(\mathbb{K})$.* Our reasons to give a proof are the following: Rosengarten's own argument was never published; ours is sufficiently different from Rosengarten's, and more conceptual; parts of it make sense in the group setting; some key ideas reappear in the dimension $4$ argument of § [3.5](#s:4){reference-type="ref" reference="s:4"}. The proof first removes nilpotent Borel subrings (Proposition [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"}), then explicitly identifies $\mathfrak{g}$ (Proposition [Proposition 2](#p:3:identification){reference-type="ref" reference="p:3:identification"}). **Proposition 1**. *No Borel subring is nilpotent.* *Proof.* Let $\mathfrak{b}\sqsubseteq \mathfrak{g}$ be a nilpotent Borel subring.
Let $Z = Z^\circ(\mathfrak{b}) > 0$. First we prove that $\mathfrak{b}$ acts trivially on subquotients of rank $1$ (Step [Step 3](#p:3:nonilpotent:st:1modules){reference-type="ref" reference="p:3:nonilpotent:st:1modules"}), and has rank $1$ itself. Then we find a field $\mathbb{L}$ with rank $2$, such that $\mathfrak{b}$ embeds properly into $\mathbb{L}_+$ (Step [Step 4](#p:3:nonilpotent:st:Bb){reference-type="ref" reference="p:3:nonilpotent:st:Bb"}). Then we derive a contradiction (Step [Step 5](#p:3:nonilpotent:st:contradiction){reference-type="ref" reference="p:3:nonilpotent:st:contradiction"}). **Step 3**. Let $X$ be a $1$-dimensional subquotient $\mathfrak{b}$-module. Then $\mathfrak{b}$ centralises $X$. *Proof.* Suppose not. Linearising in dimension $1$, there is $h \in \mathfrak{b}$ acting on $X$ like $\operatorname{Id}_X$. Lifting the eigenspace, $E_1 = E_1(h) \leq \mathfrak{g}$ is non-trivial. It is proper since $E_0 \geq Z > 0$. For each integer $k \not\equiv 0$ one has $E_k\cap \mathfrak{b}= 0$. Indeed, if $a \in E_k\cap \mathfrak{b}$, then $a = \frac{1}{k} [h, a] \in \mathfrak{b}'$, and for every integer $n$ one has $a \in \mathfrak{b}^{[n]}$, the descending nilpotent series. Hence $a = 0$. We claim $\dim \mathfrak{b}= 1$. Otherwise $\dim \mathfrak{b}= 2$ and $\mathfrak{g}= \mathfrak{b}\oplus E_1$. But then $E_2 = 0$, so $E_1$ is an abelian subring. Now $Z \leq E_0$ normalises $E_1$. If the action is trivial, then $E_1 \leq C_\mathfrak{g}^\circ(Z) = \mathfrak{b}$, a contradiction. So the action of $Z$ on $E_1$ is non-trivial. Hence we may assume $h \in Z$; in that case, $E_0 = \mathfrak{b}$ normalises $E_1$, which is an ideal of $\mathfrak{g}$. This is a contradiction; hence $\dim \mathfrak{b}= 1$. It follows that $\mathfrak{b}$ is abelian and $E_0 = \mathfrak{b}$. If $E_2 \neq 0$ then $\mathfrak{g}= E_0 \oplus E_1 \oplus E_2$ and $E_2 \triangleleft \mathfrak{g}$; a contradiction. So $E_2 = 0$ and $E_1$ is an abelian subring. 
But then $E_0 \oplus E_1$ is a subring properly containing $E_0 = \mathfrak{b}$. By definition of a Borel subring, $\mathfrak{g}= E_0 \oplus E_1$, which contains $E_1$ as an ideal: a contradiction. ◻ It follows $\dim \mathfrak{b}= 1$. Otherwise $\dim \mathfrak{b}= 2$ and $\dim \mathfrak{g}/\mathfrak{b}= 1$. By Step [Step 3](#p:3:nonilpotent:st:1modules){reference-type="ref" reference="p:3:nonilpotent:st:1modules"}, $\mathfrak{b}$ centralises $\mathfrak{g}/\mathfrak{b}$, meaning $\mathfrak{b}\triangleleft \mathfrak{g}$, a contradiction. Hence $\dim \mathfrak{b}= 1$ and in particular $\mathfrak{b}$ is abelian. It follows that for $a \in \mathfrak{b}\setminus\{0\}$ one has $C_a = \mathfrak{b}$. Let $B_\mathfrak{b}= [\mathfrak{b}, \mathfrak{g}]$. **Step 4**. $\mathfrak{g}= \mathfrak{b}+ B_\mathfrak{b}$. There is a definable field $\mathbb{L}$ of dimension $2$ such that $B_\mathfrak{b}\simeq \mathbb{L}_+$ and $\mathfrak{b}\hookrightarrow \mathbb{L}\operatorname{Id}_{B_\mathfrak{b}}$. *Proof.* First, $\dim B_\mathfrak{b}= 2$. Indeed, let $a, a' \in \mathfrak{b}$ be non-zero. Clearly $\dim B_a = 2$; by abelianity, $B_a$ is $\mathfrak{b}$-invariant. By Step [Step 3](#p:3:nonilpotent:st:1modules){reference-type="ref" reference="p:3:nonilpotent:st:1modules"}, $\mathfrak{b}$ centralises $\mathfrak{g}/B_a$. In particular $B_{a'} \leq B_a$ and equality follows. So $B_\mathfrak{b}= B_a$ for any $a \in \mathfrak{b}\setminus\{0\}$, which proves $\dim B_\mathfrak{b}= 2$. Suppose $\mathfrak{b}\leq B_\mathfrak{b}$. Then for $a \in \mathfrak{b}\setminus\{0\}$ one has $C_a = \mathfrak{b}\leq B_\mathfrak{b}= B_a$. In particular $B_a^2 < B_a$, and $C_a^2 > C_a$. However, $C_a = \mathfrak{b}$ is abelian, so by Lemma [Lemma 1](#l:Cn){reference-type="ref" reference="l:Cn"}, $C_a^2$ is a subring. But $\dim C_a^2 \leq 2 \dim C_a = 2\dim \mathfrak{b}< \dim \mathfrak{g}$, so $C_a^2$ is a definable, connected, proper subring containing $C_a = \mathfrak{b}$.
Hence $C_a^2 = \mathfrak{b}= C_a$, a contradiction proving $\mathfrak{b}\not\leq B_\mathfrak{b}$. $B_\mathfrak{b}$ is $\mathfrak{b}$-irreducible. Otherwise there is a $1$-dimensional $\mathfrak{b}$-submodule $V_1 < B_\mathfrak{b}$. By Step [Step 3](#p:3:nonilpotent:st:1modules){reference-type="ref" reference="p:3:nonilpotent:st:1modules"}, $\mathfrak{b}$ centralises $V_1$, so $V_1 \leq C_\mathfrak{g}^\circ(\mathfrak{b}) = \mathfrak{b}$. Again by Step [Step 3](#p:3:nonilpotent:st:1modules){reference-type="ref" reference="p:3:nonilpotent:st:1modules"}, $\mathfrak{b}$ centralises $B_\mathfrak{b}/\mathfrak{b}$, so $B_\mathfrak{b}\leq N_\mathfrak{g}^\circ(\mathfrak{b}) = \mathfrak{b}$, a contradiction. The action of $\mathfrak{b}$ on $B_\mathfrak{b}$ is faithful. Otherwise there is $a \in \mathfrak{b}\setminus\{0\}$ with $B_\mathfrak{b}\leq C_a = \mathfrak{b}$, a contradiction again. Linearising the abelian action, there is a definable field $\mathbb{L}$ with $B_\mathfrak{b}\simeq \mathbb{L}_+$ and $\mathfrak{b}\hookrightarrow \mathbb{L}\operatorname{Id}_{B_\mathfrak{b}}$. ◻ **Step 5**. Contradiction. *Proof.* Fix any $a_0 \in \mathfrak{b}\setminus\{0\}$ and $x_0 \in B_\mathfrak{b}\setminus\{0\}$. Because the action of $\mathfrak{b}$ on $B_\mathfrak{b}$ is by scalars, $\dim [a_0, [\mathfrak{b}, x_0]] = \dim [\mathfrak{b}, x_0] = 1$ (the subgroups need not be equal). We claim that $\dim [\mathfrak{b}, [\mathfrak{b}, x_0]] = 1$. Let $A_2 = [\mathfrak{b}, [\mathfrak{b}, x_0]] \leq B_\mathfrak{b}$. By Lemma [Lemma 3](#l:Rosengarten){reference-type="ref" reference="l:Rosengarten"}, $[A_2, A_2] \leq B_\mathfrak{b}$. If $\dim A_2 = 2$ then dimensions match and $A_2 = B_\mathfrak{b}$. Now $[B_\mathfrak{b}, B_\mathfrak{b}] \leq B_\mathfrak{b}$, so $B_\mathfrak{b}$ is a subring. It also is $\mathfrak{b}$-invariant, but $\mathfrak{b}\not\leq B_\mathfrak{b}$ by Step [Step 4](#p:3:nonilpotent:st:Bb){reference-type="ref" reference="p:3:nonilpotent:st:Bb"}.
So $B_\mathfrak{b}\triangleleft \mathfrak{g}$, a contradiction. Hence $\dim A_2 = 1$. Thus $\dim [a_0, [\mathfrak{b}, x_0]] = \dim [\mathfrak{b}, [\mathfrak{b}, x_0]]$; the groups are equal. So for $a_1, a_2 \in \mathfrak{b}$, there is $a_3 \in \mathfrak{b}$ with $[a_0,[a_3, x_0]] = [a_1, [a_2, x_0]]$. Returning to the field $\mathbb{L}$, this means $a_0 a_3 = a_1 a_2$ as elements of $\mathbb{L}$. So the image of $\mathfrak{b}$ in $\mathbb{L}$ is closed under the map $(a_1, a_2) \mapsto \frac{a_1 a_2}{a_0}$. Let $R = \{\lambda \in \mathbb{L}: \lambda \mathfrak{b}\leq \mathfrak{b}\}$, a definable subring. We just proved $\frac{1}{a_0} \mathfrak{b}\leq R$, so $R$ is infinite. By the Cherlin-Macintyre-Shelah property, every infinite definable domain is an algebraically closed field. So $R = \mathbb{L}$ and $\mathfrak{b}\neq 0$ is an ideal of $\mathbb{L}$. Hence $\mathfrak{b}= \mathbb{L}$ but dimensions do not match. ◻ This eliminates nilpotent Borel subrings in a $3$-dimensional, simple Lie ring: Proposition [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"} is proved. (The argument will be extended to the $4$-dimensional case in Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}.) ◻ By Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}, the structure of Borel subrings of $\mathfrak{g}$ is well-understood. **Proposition 2**. *$\mathfrak{g}\simeq \mathfrak{sl}_2(\mathbb{K})$.* *Proof.* The proof starts by some local analysis, viz. the study of intersections of Borel subrings (Step [Step 6](#p:3:identification:st:local){reference-type="ref" reference="p:3:identification:st:local"}). 
Then we find the desired weight space decomposition (Step [Step 7](#p:3:identification:st:weights){reference-type="ref" reference="p:3:identification:st:weights"}), which allows final coordinatisation (Step [Step 8](#p:3:identification:st:coordinatisation){reference-type="ref" reference="p:3:identification:st:coordinatisation"}). **Step 6**. If $\mathfrak{b}_1 \neq \mathfrak{b}_2$ are distinct Borel subrings, then $\mathfrak{c}= (\mathfrak{b}_1\cap \mathfrak{b}_2)^\circ$ is a Cartan subring of both. *Proof.* Clearly $\dim \mathfrak{c}= 1$. By Proposition [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"}, $\mathfrak{b}_1$ and $\mathfrak{b}_2$ are non-nilpotent; it is enough to prove that $\mathfrak{c}$ is normal in neither. Suppose it is, say in $\mathfrak{b}_1$; then $\mathfrak{c}= \mathfrak{b}_1' \leq \mathfrak{b}_2$. Considering $N_\mathfrak{g}^\circ(\mathfrak{c}) < \mathfrak{g}$, the ring $\mathfrak{c}$ is not normal in $\mathfrak{b}_2$, so it is a Cartan subring of $\mathfrak{b}_2$. Let $\mathfrak{t}$ be a Cartan subring of $\mathfrak{b}_1$. Let $t \in \mathfrak{t}$ act on $\mathfrak{b}'_1 = \mathfrak{c}$ like the identity. Now let $h \in \mathfrak{c}$ act on $\mathfrak{b}_2'$ like the identity. Notice that $[\mathfrak{t}, h] = \mathfrak{b}'_1 = \mathfrak{c}$, and $[h, \mathfrak{b}'_2] = \mathfrak{b}'_2$, so $B_h \geq \mathfrak{c}+ \mathfrak{b}'_2 = \mathfrak{b}_2$. Equality follows since $C_h \geq \mathfrak{b}'_1 > 0$. Now for $f \in \mathfrak{b}'_2$ one has: $$[t, f] = [t, [h, f]] = [th, f] + [h, tf] = [h, f] + [h, tf].$$ The right-hand side is in $B_h = \mathfrak{b}_2$. Therefore $[t, f] \in \mathfrak{b}_2$, and the right-hand side is in $\mathfrak{b}_2'$. Therefore $[t, f] \in \mathfrak{b}_2'$, implying $[h, tf] = [t, f]$. So there remains $[h, f] = 0$, a contradiction. ◻ **Step 7**. Weight decomposition: there is $h_1 \in \mathfrak{g}$ such that $\mathfrak{g}= E_{-2} \oplus E_0 \oplus E_2$, and $[E_{-2}, E_2] = E_0$.
Moreover, for $e \in E_2\setminus\{0\}$ one has $C_\mathfrak{g}(e) = E_2$. *Proof.* Let $\mathfrak{b}\sqsubseteq \mathfrak{g}$ be a Borel subring. By Proposition [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"}, it is non-nilpotent. In particular, by Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}, there is a definable field $\mathbb{K}$ such that $\mathfrak{b}/Z(\mathfrak{b}) \simeq \mathfrak{ga}_1(\mathbb{K})$. Temporarily, let $h_1 \in \mathfrak{b}$ act on $\mathfrak{b}'$ like $\operatorname{Id}_{\mathfrak{b}'}$. (It will be rescaled at the end of the proof of the Step.) Of course $E_0 = E_0(h_1) = C_{h_1} > 0$ and $E_1 \geq \mathfrak{b}' > 0$. Let $e \in \mathfrak{b}'\setminus\{0\}$. We claim that $C_e = \mathfrak{b}'$ and $B_e = \mathfrak{b}$. Clearly $C_e \geq \mathfrak{b}'$. If equality does not hold, then $\dim C_e = 2$. Now $C_e$ is a Borel subring distinct from $\mathfrak{b}$ since $e \notin Z(\mathfrak{b})$. But $C_e \geq \mathfrak{b}'$ which is not Cartan in $\mathfrak{b}$, against Step [Step 6](#p:3:identification:st:local){reference-type="ref" reference="p:3:identification:st:local"}. So $C_e = \mathfrak{b}'$ and $\dim B_e = 2$. Now consider the action of $\mathfrak{b}$ on $\mathfrak{g}/\mathfrak{b}$. Linearising in dimension $1$, $\mathfrak{b}'$ acts trivially so $B_e \leq \mathfrak{b}$ and equality holds. We now claim that $E_{-1} = E_{-1}(h_1) \neq 0$. For suppose $E_{-1} = 0$. The additive morphism $\varphi(x) = [h_1, x] + x$ then has a finite kernel, and is surjective. Therefore: $$\operatorname{im}\operatorname{ad}_{h_1} \operatorname{ad}_e = \operatorname{im}\operatorname{ad}_e (1 + \operatorname{ad}_{h_1}) = \operatorname{im}\operatorname{ad}_e = B_e = \mathfrak{b}.$$ But $\operatorname{im}\operatorname{ad}_{h_1} \operatorname{ad}_e = [h_1, B_e] = [h_1, \mathfrak{b}] < \mathfrak{b}$, a contradiction proving $E_{-1} \neq 0$.
It follows $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1$, and $\mathfrak{c}= E_0$ normalises each term. Also, $\mathfrak{b}= E_0 \oplus E_1$ and $\mathfrak{b}' = E_1$. If $[E_{-1}, E_1] = 0$, then $E_{-1} \oplus E_1$ is a subring, hence a Borel subring. However it intersects $\mathfrak{b}$ in $E_1 = \mathfrak{b}'$, against Step [Step 6](#p:3:identification:st:local){reference-type="ref" reference="p:3:identification:st:local"}. So $[E_{-1}, E_1] = E_0$. It remains to prove that for any $e \in E_1 \setminus\{0\}$, one has $C_\mathfrak{g}(e) = E_1$ (notice the absence of a connected component). Since $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1$, it is enough to prove $C_{E_0}(e) = C_{E_{-1}}(e) = 0$. - Let $h \in E_0$ centralise $e$; then it centralises $E_0$ and $[E_0, e] = E_1$. But also, for $y \in E_{-1}$, one has $[e, y] \in E_0 \leq C_h$, so: $$[e, [h, y]] = [e h, y] + [h, e y] = 0.$$ Therefore $[h, E_{-1}] \leq C_e = E_1$. However $[h, E_{-1}] \leq E_{-1}$, proving $h \in Z(\mathfrak{g}) = 0$. - Now let $f \in E_{-1}$ centralise $e$. Then it normalises $C_e = E_1$ and $N_\mathfrak{g}^\circ(E_1) = E_0 + E_1$. In particular, $[f, h_1] \in E_0 + E_1$, but $[f, h_1] = - [h_1, f] = f \in E_{-1}$ so $f = 0$. This shows $C_\mathfrak{g}(e) = E_1$. We finally rescale and replace $h_1$ by $\frac{1}{2} h_1$. ◻ **Step 8**. Coordinatisation. *Proof.* Let $\mathfrak{c}= E_0$, $\mathfrak{u}_+ = E_2$, and $\mathfrak{u}_- = E_{-2}$. By Step [Step 7](#p:3:identification:st:weights){reference-type="ref" reference="p:3:identification:st:weights"}, there are $e_1 \in \mathfrak{u}_+$ and $f_1 \in \mathfrak{u}_-$ with $[e_1, f_1] = h_1$. We simultaneously coordinatise $\mathfrak{c}$ and $\mathfrak{u}_+$ as follows. Linearising in dimension $1$, there is a definable field $\mathbb{K}$ such that $\mathfrak{c}\simeq \mathbb{K}\operatorname{Id}$ acts by *doubled* scalars on $\mathfrak{u}_+ \simeq \mathbb{K}_+$, viz. $[h_\lambda, e_\mu] = 2 e_{\lambda\mu}$.
Notice that $h_1$ was defined consistently. Since the multiplicative unit is not fixed, we choose it to be $e_1$ and consistently write $\mathfrak{u}_+ = \{e_\lambda: \lambda \in \mathbb{K}\}$. Now we let $f_\lambda = -\frac12 [h_\lambda, f_1]$, so $\mathfrak{u}_- = \{f_\lambda: \lambda \in \mathbb{K}\}$. Notice that $f_1$ was defined consistently. We prove $\mathfrak{g}\simeq \mathfrak{sl}_2(\mathbb{K})$. It suffices to check a couple of identities. First, by definition, $[h_\lambda, e_\mu] = \frac12 [h_\lambda, [h_\mu, e_1]] = [h_\mu, e_\lambda]$; and $[h_\lambda, f_\mu] = [h_\mu, f_\lambda]$ likewise. Then $[e_\mu, f_1] = h_\mu$, because: $$[[e_\mu, f_1], e_1] = [e_\mu e_1, f_1] + [e_\mu, f_1 e_1] = [h_1, e_\mu] = 2 e_\mu = [h_\mu, e_1],$$ meaning that $[e_\mu, f_1] - h_\mu \in C_\mathfrak{c}(e_1) = 0$ by Step [Step 7](#p:3:identification:st:weights){reference-type="ref" reference="p:3:identification:st:weights"}. It follows $[e_\mu, f_\nu] = h_{\mu\nu}$. Indeed, $$-2 [e_\mu, f_\nu] = [e_\mu, [h_\nu, f_1]] = [e_\mu h_\nu, f_1] + [h_\nu, e_\mu f_1] = - 2 [e_{\mu\nu}, f_1] = - 2 h_{\mu\nu}.$$ Last, $[h_\mu, f_\nu] = - 2 f_{\mu\nu}$ since: $$[[h_\mu, f_\nu], e_1] = [h_\mu e_1, f_\nu] + [h_\mu, f_\nu e_1] = 2 [e_\mu, f_\nu] = 2 h_{\mu\nu} = - 2 [f_{\mu\nu}, e_1],$$ implying $[h_\mu, f_\nu] + 2 f_{\mu\nu}\in C_{\mathfrak{u}_-}(e_1) = 0$ by Step [Step 7](#p:3:identification:st:weights){reference-type="ref" reference="p:3:identification:st:weights"} again. The isomorphism $\mathfrak{g}\simeq \mathfrak{sl}_2(\mathbb{K})$ is now obvious. ◻ This completes the recognition and proves Theorem [Theorem 3](#t:3){reference-type="ref" reference="t:3"}. ◻ The extension problem is easy, although not considered by Rosengarten. **Corollary 3**. *Let $\mathfrak{g}$ be a connected, non-soluble Lie ring of Morley rank $3$ and characteristic $\neq 2, 3$. 
Then $\mathfrak{g}\simeq \mathfrak{sl}_2(\mathbb{K})$ for some definable field $\mathbb{K}$.* *Proof.* It is enough to show simplicity of $\mathfrak{g}$, which we shall derive from the isomorphism type of $\mathfrak{g}/Z(\mathfrak{g})$. Let $Z = Z(\mathfrak{g})$, a definable ideal of $\mathfrak{g}$, and $\overline{\mathfrak{g}} = \mathfrak{g}/Z$. We first prove that $\overline{\mathfrak{g}}$ is simple. It is enough to prove that it is definably simple (see § [2.2](#s:facts){reference-type="ref" reference="s:facts"}). Let $I \triangleleft \mathfrak{g}$ be any proper definable ideal; then $I^\circ \triangleleft\mathfrak{g}$ as well. If $I^\circ > 0$ then by Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"} both $I^\circ$ and $\mathfrak{g}/I^\circ$ are soluble, against the assumption. Therefore $I$ is finite. By connectedness, $[I, \mathfrak{g}] = 0$ so $I$ is central. This shows that the largest proper definable ideal of $\mathfrak{g}$ is $Z$. In particular, $\overline{\mathfrak{g}}$ is definably simple; being non-abelian, it is simple. We now conclude $Z = 0$. By Theorem [Theorem 3](#t:3){reference-type="ref" reference="t:3"}, $\overline{\mathfrak{g}} \simeq \mathfrak{sl}_2(\mathbb{K})$ for some definable field $\mathbb{K}$. So there is $\eta \in \overline{\mathfrak{g}}$ such that $\overline{\mathfrak{g}} = E_{-1}(\eta) \oplus E_0(\eta) \oplus E_1(\eta)$. Lift $\eta$ to $h \in \mathfrak{g}$; lifting eigenspaces, one has $\mathfrak{g}= E_{-1}(h) \oplus E_0(h) \oplus E_1(h)$, and each has dimension $1$. It now follows that $Z \leq E_0(h)$. Moreover, $[E_{-1}(h), E_1(h)] = E_0(h)$. Let $z \in Z$. Then there are $a_{-1} \in E_{-1}$ and $a_1 \in E_1$ with $z = [a_{-1}, a_1]$. Let $\alpha_i = (a_i \mod Z) \in E_i(\eta)$. Then $[\alpha_{-1}, \alpha_1] = 0$ in $\mathfrak{sl}_2(\mathbb{K})$, so at least one $\alpha_i$ is zero. Thus $a_i \in E_i \cap Z = 0$ and $z = 0$. This proves $Z = 0$ and $\mathfrak{g}\simeq \overline{\mathfrak{g}}$.
◻ ## Dimension 4 {#s:4} **Theorem 4**. *There is no simple Lie ring of Morley rank $4$ and characteristic $\neq 2, 3$.* #### The main idea is elementary: use Borel subrings as flintstones. We need at least two, and at least one of them should be large enough. It is unreasonable to have more upper-triangular matrices than lower-triangular ones, and this is the contradictory picture. For this we need two Borel subrings of distinct dimensions intersecting over a $1$-dimensional, 'diagonal' subring, creating friction between unbalanced dimensions. So we shall produce a $3$-dimensional Borel subring $\mathfrak{b}_1$, and another Borel subring $\mathfrak{b}_2$. With rudimentary proxies for 'a toral subring' and 'the nilpotent radical', $\mathfrak{b}_1$ and $\mathfrak{b}_2$ will intersect in a $1$-dimensional, toral subring $\mathfrak{c}$, and have disjoint nilpotent radicals $I_1 \neq I_2$; also we shall see $\dim I_1 = 2$. Then $[I_1, I_2]$ will be too large to fit into $\mathfrak{c}$. This creates centralisation phenomena between nilpotent radicals in a tight configuration, where nilpotence is not for sharing: a contradiction. Of course the above strategy has to be implemented without weight theory, since there is no linear structure to begin with. So we are left with basic means inspired by the group-theoretic analysis of $\mathrm{PGL}_2(\mathbb{K})$ in terms of Borel subgroups and unipotent radicals [@CJTame; @DGroupes; @DJInvolutive]. In short the proof does *not* proceed by picking a Cartan subring and studying the root decomposition. To do this one would need a number of tools that are as yet dubious; see final questions in § [4](#S:questions){reference-type="ref" reference="S:questions"}. #### The proof starts. Let $\mathfrak{g}$ be a simple Lie ring of Morley rank $4$. There are three main stages. We first prove minimal simplicity of $\mathfrak{g}$ in Proposition [Proposition 3](#p:4:minimalsimple){reference-type="ref" reference="p:4:minimalsimple"}.
We derive that $\mathfrak{g}$ cannot be a sum of integral weight spaces in Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"}; Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"} is an important consequence. Changing topic, we remove nilpotent Borel subrings in Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}; Proposition [Proposition 7](#p:4:3dimensionalBorel){reference-type="ref" reference="p:4:3dimensionalBorel"} then allows us to find a $3$-dimensional Borel subring, whose fine structure is described in Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"}. Proposition [Proposition 9](#p:4:contradiction){reference-type="ref" reference="p:4:contradiction"} gives the final contradiction. **Proposition 3**. *$\mathfrak{g}$ is minimal simple, viz. every definable, connected, proper subring is soluble.* *Proof.* Let $\mathfrak{h}\sqsubset \mathfrak{g}$ be a definable, connected, proper subring. By Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}, we may assume $\dim \mathfrak{h}= 3$. Let $I = C_\mathfrak{h}(\mathfrak{g}/\mathfrak{h})$, an ideal of $\mathfrak{h}$. Notice $I < \mathfrak{h}$ since otherwise $\mathfrak{h}\triangleleft \mathfrak{g}$, a contradiction. In particular $I^\circ$ is soluble by Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}. Linearising in dimension $1$ forces abelianity of $\mathfrak{h}/I$. Thus $\mathfrak{h}$ is soluble. ◻ Much later, Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} will return to the analysis of $3$-dimensional subrings. #### Lack of weights. We shall now forbid $\mathfrak{g}$ to be a sum of integral weight spaces. While mostly trivial, the proof needs attention as we also allow for characteristic $5$. There, dimension $4$ comes extremely close to 'Witt-type' root distribution and requires some care. 
Moreover, the final case of two Borel subrings intersecting over a $2$-dimensional Cartan subring (Step [Step 11](#p:4:weights:contradiction){reference-type="ref" reference="p:4:weights:contradiction"}) is a key configuration to eliminate. **Proposition 4**. *Let $h \neq 0$. Then $\mathfrak{g}$ is not a sum of eigenspaces, viz. $\sum_{i \in \mathbb{F}_p} E_i(h) < \mathfrak{g}$.* *Proof.* Unless otherwise indicated, *all eigenspaces in this proof are with respect to $h$.* Let $p$ be the characteristic; by assumption $p \neq 2, 3$, but $5$ is allowed. Suppose $\sum_{i \in \mathbb{F}_p} E_i = \mathfrak{g}$. Bear in mind that the sum is direct. **Step 9**. One has $h\in E_0$. Allowing trivial terms, we may suppose $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1 \oplus E_k$ for some $k \notin \{-1, 0, 1\}$. *Proof.* A priori, and up to allowing trivial terms, $\mathfrak{g}= E_i \oplus E_j \oplus E_k \oplus E_\ell$ for $i, j, k, \ell$ *distinct* integers modulo $p$. We claim $h \in E_0 \neq 0$ and $\mathfrak{g}= E_0 \oplus E_i \oplus E_j \oplus E_k$. Indeed, write $h = e_i + e_j + e_k + e_\ell$ in obvious notation; we may assume $e_\ell \neq 0$. Applying $\operatorname{ad}_h$, we find $ie_i + je_j + ke_k + \ell e_\ell = 0$. In particular, $\ell e_\ell = 0$ and therefore $\ell = 0$. Since the integers were distinct, $e_i = e_j = e_k = 0$ and $h = e_\ell \in E_\ell = E_0$. We may assume $E_i$, $E_j$, $E_k$ all non-trivial and proper. Properness holds since $h \in E_0$. Now if $E_i = 0$, then let $h' = \frac{1}{j} h$. Notice how $E_\ell(h') = E_{j\ell}(h)$, so $\mathfrak{g}= E_0(h') \oplus E_1(h') \oplus E_{j^{-1} k}(h')$. Up to considering $h'$ instead of $h$ (which we call *up to dividing*) we have the desired form. We may assume $i + j$, $i + k$, $j+k$ all non-zero. Indeed if $j = -i$, then up to dividing we have the desired form. We may assume $i + j \neq k$ and $i + k \neq j$. Indeed if $i + j - k = i + k - j = 0$, then $2i = 0$ and $i = 0$; a contradiction.
So at most one of the three equalities of type $i + j = k$ holds. We may assume $2i \in \{j, k\}$. Indeed since $i + j \neq 0$ and $i + j \neq k$, we have $i + j \notin \{0, i, j, k\}$, so $[E_i, E_j] \leq E_{i + j} = 0$. By our assumptions, $[E_i, E_k] = 0$ likewise. So $0 < E_i < \mathfrak{g}$ is normalised by $E_0 + E_j + E_k$. Since $E_i$ is not an ideal of $\mathfrak{g}$, we therefore have $[E_i, E_i] \neq 0$ and $2i \in \{0, i, j, k\}$. But $2i \notin \{0, i\}$ since $i \neq 0$. We have $2j \in \{i, k\}$ and $2k \in \{i, j\}$. For suppose $j + k = i$. With $2i = j$ this gives $i + k = 0$, a contradiction; the case $2i = k$ is similarly excluded. So $j + k \neq i$, and the previous paragraph also gives $2j \in \{i, k\}$ and $2k \in \{i, j\}$. Conclusion. We may assume $2i = j$; the other case is similar. If $2j = i$ then $4 = 1$, against $p \neq 3$. So $2j = k$. Similarly, $2k = i$. Therefore $8 = 1$ and $p = 7$. Up to dividing for readability, we now have $\mathfrak{g}= E_0 \oplus E_1 \oplus E_2 \oplus E_4$. But now $E_3 = E_5 = E_6 = 0$ and $E_8 = E_1$, so $E_1 \oplus E_2 \oplus E_4 \triangleleft \mathfrak{g}$, a contradiction. ◻ **Step 10**. We may assume $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1$. Moreover $E_{-1}$, $E_0$, $E_1$ are non-trivial and proper. *Proof.* By Step [Step 9](#p:4:weights:-101k){reference-type="ref" reference="p:4:weights:-101k"} we may assume $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1 \oplus E_k$ for some $k \notin \{-1, 0, 1\}$. We first show that we may assume $k = 2$. If $k = -2$, then up to considering $-h$, we have the desired form. Therefore suppose $k \neq \pm 2$, so that $k \notin \{0, \pm 1, \pm 2\}$. In particular $E_{-2} = E_2 = 0$, implying that $E_{-1}$ and $E_1$ are abelian subrings. Let $\mathfrak{h}= E_{-1} + E_0 + E_1$, a definable, connected subring of $\mathfrak{g}$. If $\mathfrak{h}= \mathfrak{g}$, then up to taking $k = 2$ as a dummy we have the desired form. So we may assume $\mathfrak{h}< \mathfrak{g}$.
By minimal simplicity (Proposition [Proposition 3](#p:4:minimalsimple){reference-type="ref" reference="p:4:minimalsimple"}), $\mathfrak{h}$ is soluble. By Step [Step 9](#p:4:weights:-101k){reference-type="ref" reference="p:4:weights:-101k"}, $h \in E_0 \leq \mathfrak{h}$, so $E_1 = [h, E_1] \leq \mathfrak{h}'$, and $E_{-1} \leq \mathfrak{h}'$ likewise. By solubility, $\mathfrak{h}' < \mathfrak{h}$, implying $[E_{-1}, E_1] < E_0$. If $\dim E_0 = 2$ then since $\mathfrak{h}< \mathfrak{g}$ is proper, at least one of $E_{-1}, E_1$ is trivial; hence $[E_{-1}, E_1] = 0$. If $\dim E_0 = 1$ then $[E_{-1}, E_1] = 0$. So $[E_{-1}, E_1] = 0$ in either case. Moreover $k \pm 1 \notin \{-1, 0, 1, k\}$, so $E_{-1}$ and $E_1$ are ideals of $\mathfrak{g}$. Since $E_0 \neq 0$ they are proper, hence trivial. There remains $\mathfrak{g}= E_0 \oplus E_k$ and $E_k$ is a proper ideal of $\mathfrak{g}$, hence trivial. Finally $\mathfrak{g}= E_0$, so $h \in Z(\mathfrak{g}) = 0$, a contradiction. So we may assume $k = 2$. Hence $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1 \oplus E_2$. We prove that $E_{-1}, E_0, E_1$ are non-trivial and proper. - We already know $0 < E_0 < \mathfrak{g}$. - If $E_{-1} = 0$ then (even when $p = 5$) $E_2$ is an ideal of $\mathfrak{g}$; hence trivial. Then $E_1$ is an ideal of $\mathfrak{g}$; hence trivial as well, and this is a contradiction. - If $E_1 = 0$ then $[E_{-1}, E_2] = 0$ so (even when $p = 5$) $E_{-1}$ is an ideal of $\mathfrak{g}$; hence trivial, a contradiction. It remains to show that $E_2 = 0$. Suppose not. Then $\mathfrak{g}= E_{-1} \oplus E_0 \oplus E_1 \oplus E_2$, and each has dimension $1$. We claim $[E_{-1}, E_1] = E_0$, $[E_1, E_1] = E_2$, and $[E_{-1}, E_2] = E_1$. - If $[E_{-1}, E_1] = 0$ then (even when $p = 5$) $E_{-1} + E_1 + E_2$ is an ideal of $\mathfrak{g}$, a contradiction.
- If $[E_1, E_1] = 0$, then $\mathfrak{h}= E_{-1} + E_0 + E_1$ is a proper subring, hence soluble by Proposition [Proposition 3](#p:4:minimalsimple){reference-type="ref" reference="p:4:minimalsimple"}. But $h \in E_0$ so $\mathfrak{h}' \geq E_{-1} + E_1 + [E_{-1}, E_1] = \mathfrak{h}$, a contradiction. - Suppose $[E_{-1}, E_2] = 0$. Write $h \in E_0 = [E_{-1}, E_1]$ as $h = [e_{-1}, e_1]$; let $e_2 \in E_2$. Then: $$2 e_2 = [h, e_2] = [[e_{-1}, e_1], e_2] = [e_{-1} e_2, e_1] + [e_{-1}, e_1 e_2] = 0,$$ a contradiction. (If the characteristic is not $5$ there is a shorter argument: $E_2$ is an ideal of $\mathfrak{g}$, a contradiction.) We claim that for any $x_0 \in E_0$ there is $y_2 \in E_2\setminus\{0\}$ such that $[x_0, y_2] = 0$. Let $e_{-1} \in E_{-1}$ be such that $[e_{-1}, E_1] = E_0$ and $[e_{-1}, E_2] = E_{1}$; there is one such since otherwise $C_{E_{-1}}(E_1) \cup C_{E_{-1}}(E_2) = E_{-1}$, a contradiction. Let $x_0 \in E_0$. Then there is $y_1 \in E_1$ with $[y_1, e_{-1}] = x_0$. Now there is $y_2 \in E_2$ with $[e_{-1}, y_2] = y_1$. On the other hand, $[E_1, E_2] = 0$. Altogether, one has: $$\begin{aligned} [x_0, y_2] & = [[y_1,e_{-1}],y_2]\\ & = [y_1 y_2, e_{-1}] + [y_1, e_{-1} y_2]\\ & = [0, e_{-1}] + [y_1, y_1]\\ & = 0.\end{aligned}$$ Conclusion. Since $h \in E_0$, the action of $E_0$ on $E_2$ is non-trivial. Linearising in dimension $1$, there is a finite ideal $F \triangleleft E_0$ such that $E_0/F$ acts by scalars, hence freely, on $E_2$. This contradicts the last claim. ◻ **Step 11**. Contradiction. *Proof.* By Step [Step 10](#p:4:weights:-101){reference-type="ref" reference="p:4:weights:-101"}, exactly one of $E_{-1}, E_0, E_1$ has dimension $2$; moreover $E_{\pm 1}$ are abelian. For the end of the argument, bear in mind that $E_{-1}$ does not normalise $E_1$ nor vice-versa, since the normalised one would be an ideal of $\mathfrak{g}$. There are essentially two cases. - Suppose $\dim E_1 = 2$. Let $a \in E_{-1}$.
Since $\operatorname{ad}_a$ takes $E_1$ to $E_0$, the subgroup $X_a = C_{E_1}^\circ(a)$ is infinite; since $E_1$ is abelian, $X_a$ is actually a subring. Notice that both $a$ and $h$ normalise $X_a$. Moreover $C_\mathfrak{g}^\circ(X_a) \geq E_1$. If $C_\mathfrak{g}^\circ(X_a) > E_1$, then it is a $3$-dimensional Borel subring. In that case, by Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}, $h \in C_\mathfrak{g}^\circ(X_a)$, a contradiction as the action of $h$ on $X_a$ is non-trivial. Hence $C_\mathfrak{g}^\circ(X_a) = E_1$ is normalised by $a$. This shows that $E_{-1}$ normalises $E_1$, a contradiction. This rules out $\dim E_{-1} = 2$ as well, by considering $-h$. - Suppose $\dim E_0 = 2$. (It is unclear whether $E_0$ is abelian.) Since $E_0$ contains $h$, it is trivial neither on $E_1$ nor on $E_{-1}$. Linearising in dimension $1$, both $\mathfrak{t}_1 = C_{E_0}^\circ(E_1)$ and $\mathfrak{t}_{-1} = C_{E_0}^\circ(E_{-1})$ are subrings of dimension $1$. In particular they are abelian. We show $\mathfrak{t}_1 \neq \mathfrak{t}_{-1}$. Otherwise denote it by $\mathfrak{t}_{\pm 1}$ and let $\mathfrak{h}= C_\mathfrak{g}^\circ(\mathfrak{t}_{\pm 1})$, a proper subring of $\mathfrak{g}$. By assumption, $\mathfrak{h}\geq E_{-1} + E_1 + \mathfrak{t}_{\pm 1}$; so equality holds. In particular $[E_{-1}, E_1] \leq (\mathfrak{h}\cap E_0)^\circ = \mathfrak{t}_{\pm 1}$. Now $h$ normalises $E_{-1}$, $E_1$, and their bracket $\mathfrak{t}_{\pm 1}$; so $h$ normalises $\mathfrak{h}$. By Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}, one gets $h \in \mathfrak{h}= E_{-1} \oplus E_0^\mathfrak{h}\oplus E_1$, and finally $h \in E_0^\mathfrak{h}= \mathfrak{t}_{\pm 1}$, a contradiction. Let $\mathfrak{c}= [E_{-1}, E_1] \neq 0$. Since $E_0$ normalises $\mathfrak{c}$, this is a subring. We prove $\mathfrak{t}_1 \not\leq \mathfrak{c}$; suppose inclusion holds. 
Then $\mathfrak{t}_1 \neq \mathfrak{t}_{-1}$ acts non-trivially on $E_{-1}$, so linearising in dimension $1$, there is $t_1 \in \mathfrak{t}_1$ acting on $E_{-1}$ as $1$; by definition it acts trivially on $E_1$. Then by assumption, $t_1 \in \mathfrak{c}= [E_{-1}, E_1]$. At most two elements $e_{-1}, e_{-1}' \in E_{-1}$ suffice to write $\mathfrak{c}= \operatorname{ad}_{e_{-1}}(E_1) + \operatorname{ad}_{e_{-1}'}(E_1)$. So there are $e_1, e_1' \in E_1$ with $t_1 = [e_{-1}, e_1] + [e_{-1}', e_1']$. But then: $$\begin{aligned} 0 & = [t_1, t_1] = [t_1, [e_{-1}, e_1]] + [t_1, [e_{-1}', e_1']]\\ & = [t_1 e_{-1}, e_1] + [e_{-1}, t_1 e_1] + [t_1 e_{-1}', e_1'] + [e_{-1}', t_1 e_1']\\ & = [e_{-1}, e_1] + [e_{-1}', e_1'] = t_1,\end{aligned}$$ a contradiction. Notice $\mathfrak{t}_{-1} \not\leq \mathfrak{c}$ for the same reasons. Conclusion. This implies $\dim \mathfrak{c}= 1$. Indeed, $\dim \mathfrak{c}= 2$ would contradict $\mathfrak{t}_1 \not\leq \mathfrak{c}$. Finally let $\mathfrak{h}= E_{-1} + \mathfrak{c}+ E_1$, a subring. If $\mathfrak{c}$ centralises $E_1$, then $\mathfrak{c}= \mathfrak{t}_1$, a contradiction. For the same reason, $\mathfrak{c}$ does not centralise $E_{-1}$. So $\mathfrak{h}' = \mathfrak{h}$, against Proposition [Proposition 3](#p:4:minimalsimple){reference-type="ref" reference="p:4:minimalsimple"}.  ◻ This proves Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"}. ◻ As a consequence we derive a severe control on sizes of weight spaces. **Proposition 5**. *Suppose $h \in \mathfrak{g}$ is such that $E_0(h) \neq 0$ and $E_1(h) \neq 0$. Then $\dim E_0(h) = \dim E_1(h) = 1$ and $E_2(h) = 0$.* *Proof.* *All eigenspaces in this proof are with respect to $h$.* Let $\mathfrak{b}= E_0 + E_1 + E_2$, a priori a definable, connected subgroup. By Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"}, $E_3 = E_4 = 0$, so $\mathfrak{b}$ is actually a proper subring. 
We must prove $\dim \mathfrak{b}= 2$ and suppose not; then $\mathfrak{b}$ is a $3$-dimensional Borel subring. By Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}, it is self-normalising in $\mathfrak{g}$. Therefore $h \in N_\mathfrak{g}(\mathfrak{b}) = \mathfrak{b}$. Since $h \in \mathfrak{b}$, for $i \in \{1, 2\}$ one has $E_i = [h, E_i] \leq \mathfrak{b}'$. It follows $\mathfrak{b}' = E_1 + E_2 + E_0^{\mathfrak{b}'}$, where each of the last two terms is allowed to be trivial. Then $[h, \mathfrak{b}'] \leq E_1 + E_2$. Consider the action of $\mathfrak{b}$ on $\mathfrak{g}/\mathfrak{b}$. Linearising in dimension $1$, $\mathfrak{b}'$ centralises $\mathfrak{g}/\mathfrak{b}$. So for $i \in \{1, 2\}$ and $e \in E_i$ one has $B_e \leq \mathfrak{b}$. By Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"} again, $E_{-i} = 0$, so $\operatorname{ad}_h + i$ is surjective and: $$[h, B_e] = \operatorname{im}\operatorname{ad}_h \operatorname{ad}_e = \operatorname{im}\operatorname{ad}_e (\operatorname{ad}_h + i) = \operatorname{im}\operatorname{ad}_e = B_e.$$ Since $B_e \leq \mathfrak{b}$ and $h \in \mathfrak{b}$ one has $B_e = [h, B_e] \leq \mathfrak{b}'$. Now $B_e = [h, B_e] \leq [h, \mathfrak{b}'] \leq E_1 + E_2$. Summing over $e \in E_i$ and $i \in \{1, 2\}$, for $a \in E_1 + E_2$ one has $B_a \leq E_1 + E_2$. Hence $E_1 + E_2 \triangleleft \mathfrak{g}$. But $E_0 \neq 0$ and $E_1 \neq 0$: a contradiction to simplicity. ◻ #### A first analysis of Borel subrings. We now change thread and study abstract Borel subrings. **Proposition 6** (see Proposition [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"}). *No Borel subring is nilpotent.* **Remark 3**. Some of the ideas in the following proof could be pushed further. One suspects that if $\mathfrak{g}$ is minimal simple, then no Borel subring has dimension $1$. 
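Several computations in this section, such as $[h, B_e] = \operatorname{im}\operatorname{ad}_h \operatorname{ad}_e$ in the proof of Proposition 5 above, rest on the weight-shift rule: for $e \in E_i(h)$, the Jacobi identity gives $\operatorname{ad}_h \operatorname{ad}_e = \operatorname{ad}_e (\operatorname{ad}_h + i)$. As a purely illustrative sanity check, not part of the argument (the Lie rings of this paper are abstract), the rule can be verified numerically in $\mathfrak{sl}_2(\mathbb{F}_7)$, where $[h, e] = 2e$, so that $e$ has weight $i = 2$:

```python
p = 7  # any prime p >= 5 works

# sl_2 over F_p, basis (e, h, f): [h,e] = 2e, [h,f] = -2f, [e,f] = h.
# A triple (a, b, c) stands for a*e + b*h + c*f.
def bracket(x, y):
    a1, b1, c1 = x
    a2, b2, c2 = y
    return ((2 * (b1 * a2 - a1 * b2)) % p,   # e-coefficient, from [h,e] = 2e
            (a1 * c2 - c1 * a2) % p,         # h-coefficient, from [e,f] = h
            (2 * (c1 * b2 - b1 * c2)) % p)   # f-coefficient, from [h,f] = -2f

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # e, h, f

def ad(x):
    # matrix of ad_x in the basis (e, h, f); column j is ad_x(basis[j])
    cols = [bracket(x, b) for b in basis]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % p for j in range(3)]
            for i in range(3)]

e, h = (1, 0, 0), (0, 1, 0)
shift = [[(ad(h)[i][j] + 2 * (i == j)) % p for j in range(3)] for i in range(3)]

# e lies in E_2(h): check ad_h ad_e = ad_e (ad_h + 2).
lhs = matmul(ad(h), ad(e))
rhs = matmul(ad(e), shift)
assert lhs == rhs
```

With $f$ in place of $e$ the weight is $-2$, and the same check runs with the shift $\operatorname{ad}_h - 2$.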
*Proof.* Towards a contradiction, let $\mathfrak{b}$ be one such. Let $Z = Z^\circ(\mathfrak{b}) > 0$. It is a possibility that $Z = \mathfrak{b}$. For $z \in Z\setminus\{0\}$, one has $\mathfrak{b}\sqsubseteq C_z$; by minimal simplicity, equality holds. **Step 12** (see Step [Step 3](#p:3:nonilpotent:st:1modules){reference-type="ref" reference="p:3:nonilpotent:st:1modules"}). Let $X$ be a $1$-dimensional subquotient $\mathfrak{b}$-module. Then $\mathfrak{b}$ centralises $X$. In particular, $\dim \mathfrak{b}\leq 2$. *Proof.* The former claim implies the latter. Indeed, if $\dim \mathfrak{b}= 3$ then $\mathfrak{b}$ acts trivially on $\mathfrak{g}/\mathfrak{b}$, so $[\mathfrak{b}, \mathfrak{g}] \leq \mathfrak{b}$ and $\mathfrak{b}$ is an ideal of $\mathfrak{g}$, a contradiction. So we deal with the former claim. We first prove that if $Y$ is a $1$-dimensional subquotient $Z$-module, then $Z$ centralises $Y$. (Both the conclusion *and assumption* are weaker.) Suppose not. Linearising in dimension $1$, there is $z \in Z$ acting on $Y$ like $\operatorname{Id}_Y$. *In the present paragraph, eigenspaces are with respect to $z$.* Lifting the eigenspace, $E_1 \leq \mathfrak{g}$ is non-trivial. Since $p \geq 5$, the multiplicative order of $2 \in \mathbb{F}_p^\times$ is at least $3$; on the other hand $E_0 > 0$. So there is $\ell \in \mathbb{F}_p^\times$ such that $E_\ell \neq 0$ but $E_{2\ell} = 0$. Then $E_0 + E_\ell$ is a subring properly containing $E_0 = C_z = \mathfrak{b}$. It follows $\mathfrak{g}= E_0 + E_\ell$ and $E_\ell \triangleleft \mathfrak{g}$, a contradiction. So the claim about $1$-dimensional $Z$-subquotient modules holds. (We lost $z$.) To prove our full claim, we may suppose that $\mathfrak{b}$ is non-abelian, implying $\dim \mathfrak{b}\geq 2$, and that $\mathfrak{b}$ does not act trivially on $X$. Linearising in dimension $1$, there is $h \in \mathfrak{b}$ acting on $X$ as the identity. 
*From now on, all eigenspaces are with respect to $h$.* Lifting the eigenspace, $E_1 \leq \mathfrak{g}$ is non-trivial. We aim for a contradiction. We claim that $\mathfrak{g}= \mathfrak{b}\oplus E_1$ where $\dim \mathfrak{b}= \dim E_1 = 2$, and $\mathfrak{g}/\mathfrak{b}\simeq E_1$ as $Z$-modules. (Not 'as $\mathfrak{b}$-modules', since $\mathfrak{b}$ need not normalise $E_1$.) First, $\mathfrak{b}\cap E_1 = 0$. This is because $\mathfrak{b}\cap E_1$ is contained in each term of the descending central series of $\mathfrak{b}$, which eventually vanishes by nilpotence. Second, $\dim E_1 \geq 2$. Otherwise $\dim E_1 = 1$ and the action of $Z$ on $E_1$ is trivial, so $E_1 \leq C_\mathfrak{g}^\circ(Z) = \mathfrak{b}$, a contradiction. In particular, $\dim \mathfrak{b}= \dim E_1 = 2$. Now $\mathfrak{g}= \mathfrak{b}\oplus E_1$ and $E_1 \simeq \mathfrak{g}/\mathfrak{b}$ as $Z$-modules. We show that $Z$ acts freely and irreducibly on $\mathfrak{g}/\mathfrak{b}$. If $E_1$ is not $Z$-irreducible, there is a $1$-dimensional $Z$-module $Y < E_1$. Now $Z$ acts trivially on $Y$ so $Y \leq C_\mathfrak{g}^\circ(Z) = \mathfrak{b}$ and $Y \leq E_1 \cap \mathfrak{b}$, a contradiction again. Hence $E_1$ is $Z$-irreducible. If some $z \in Z\setminus\{0\}$ acts trivially then $E_1 \leq C_z = \mathfrak{b}$, a contradiction. Linearising the abelian action, there is a field $\mathbb{K}$ with $E_1 \simeq \mathbb{K}_+$ and $Z \hookrightarrow \mathbb{K}\operatorname{Id}_{E_1}$. Therefore $Z$ acts freely on $\mathfrak{g}/\mathfrak{b}\simeq E_1$. We finish the proof by turning to the action of $\mathfrak{b}$ on $\mathfrak{g}/\mathfrak{b}$. Let $I = C_\mathfrak{b}(\mathfrak{g}/\mathfrak{b})$. Since $Z$ acts freely on $\mathfrak{g}/\mathfrak{b}$, one has $I \cap Z = 0$. In particular $\dim I < 2$. If $\dim I = 1$ then $\mathfrak{b}= I + Z$ is a sum of two abelian ideals, hence abelian itself: a contradiction. Therefore $\dim I = 0$.
Notice that $\mathfrak{g}/\mathfrak{b}$ is $\mathfrak{b}$-irreducible as it was $Z$-irreducible. Now $ZI/I$ is infinite, and can be used to linearise the action of $\mathfrak{b}/I$ (see § [2.2](#s:facts){reference-type="ref" reference="s:facts"}). Hence $\mathfrak{b}/I$ is a linear nilpotent Lie ring, and $\mathfrak{g}/\mathfrak{b}$ is $\mathfrak{b}$-irreducible. This forces the linear dimension to be $1$, and therefore $\mathfrak{b}/I$ is abelian. But then $\mathfrak{b}'$ is finite and connected: $\mathfrak{b}$ is abelian, a contradiction. ◻ Let $W = \mathfrak{g}/\mathfrak{b}$, a $\mathfrak{b}$-module, and $I = C_\mathfrak{b}(W)$, an ideal of $\mathfrak{b}$. The proof of Proposition [Proposition 1](#p:3:nonilpotent){reference-type="ref" reference="p:3:nonilpotent"} cannot be followed literally but some ideas will look familiar. **Step 13**. $\dim \mathfrak{b}= 1$; in particular, $\mathfrak{b}$ is abelian. *Proof.* Suppose $\dim \mathfrak{b}= 2$. We claim that $W$ is $\mathfrak{b}$-irreducible. Otherwise there is a $3$-dimensional $\mathfrak{b}$-module $V_3$ with $\mathfrak{b}< V_3 < \mathfrak{g}$. By Step [Step 12](#p:4:nonilpotent:st:1modules){reference-type="ref" reference="p:4:nonilpotent:st:1modules"}, $\mathfrak{b}$ acts trivially on $V_3/\mathfrak{b}$, so $V_3 \leq N_\mathfrak{g}^\circ(\mathfrak{b})$: a contradiction. Now we show $\dim I = 1$ and $I^\circ \leq Z$. The former implies the latter. Indeed if $\mathfrak{b}$ is abelian, we are done; while if $\mathfrak{b}$ is not, then its only $1$-dimensional, connected ideal is $Z$, so $I^\circ = Z$. So we focus on proving $\dim I = 1$. If $\dim I = 2$ then $\mathfrak{b}$ is an ideal of $\mathfrak{g}$: a contradiction. We suppose $\dim I = 0$ and give a contradiction. First, $\mathfrak{b}/I$ can be linearised thanks to $ZI/I > 0$. Since $\mathfrak{b}$ is nilpotent and $W$ is $\mathfrak{b}$-irreducible, this forces the linear dimension to be $1$ and $\mathfrak{b}$ to be abelian. 
Moreover there is a field structure with $\mathfrak{b}/I\simeq \mathbb{K}\operatorname{Id}_W$ and $W \simeq \mathbb{K}_+$. In particular there is $h \in \mathfrak{b}$ acting on $\mathfrak{g}/\mathfrak{b}$ as the identity. We lift the eigenspace; thus $E_1 > 0$, but also $E_0 = \mathfrak{b}$ by abelianity. We prove $\dim E_1 \geq 2$ as follows (covering properties are unclear to us; $\dim E_1^{\mathfrak{g}/\mathfrak{b}}(h) = 2$ does not seem to be a sufficient argument). Let $z \in Z\setminus \{0\}$; for $x \in \mathfrak{g}$ there is $b \in \mathfrak{b}$ such that $[h, x] = x + b$, so: $$[h, [z, x]] = [hz, x] + [z, hx] = [z, x + b] = [z, x].$$ Therefore $\operatorname{ad}_z\colon \mathfrak{g}\to E_1$. However $\ker^\circ \operatorname{ad}_z = C_z = \mathfrak{b}$, so $\dim E_1 \geq \dim \mathfrak{g}- \dim \mathfrak{b}= 2$, as wished. Equality follows as $\mathfrak{b}\cap E_1 = 0$ by abelianity. So $\mathfrak{g}= \mathfrak{b}\oplus E_1$ and $\mathfrak{b}= E_0$ normalises $E_1$, an ideal of $\mathfrak{g}$; this is a contradiction. The claim is proved. It follows $N_\mathfrak{g}(\mathfrak{b}) = \mathfrak{b}$. Indeed, $\mathfrak{b}/I$ is abelian so we may linearise the action of $\mathfrak{b}/I$ on $\mathfrak{g}/\mathfrak{b}$: it is by scalars, hence free. Now if $n \in N_\mathfrak{g}(\mathfrak{b})$ and $b \in \mathfrak{b}\setminus I$, then $[b, n] \in \mathfrak{b}$ implies $n \in \mathfrak{b}$. (This is the argument of Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}, which we cannot invoke as stated.) We derive a contradiction. Let $z \in I^\circ \leq Z$ be non-trivial. Since $C_z = \mathfrak{b}$, one has $\dim B_z = 2$. By definition of $I$, one also has $B_z \leq \mathfrak{b}$, so equality follows. Hence there is $x \in \mathfrak{g}$ with $z = [x, z]$. For arbitrary $b \in \mathfrak{b}$ one has: $$[[b, x], z] = [bz, x] + [b, xz] = [b, z] = 0,$$ so $[\mathfrak{b}, x] \leq C_z = \mathfrak{b}$. 
Hence $x \in N_\mathfrak{g}(\mathfrak{b}) = \mathfrak{b}$, and $z = [x, z] = 0$, a contradiction. ◻ **Step 14** (see Step [Step 5](#p:3:nonilpotent:st:contradiction){reference-type="ref" reference="p:3:nonilpotent:st:contradiction"}). Contradiction. *Proof.* Throughout, $h$ will stand for an arbitrary non-zero element of $\mathfrak{b}$. Let $B_\mathfrak{b}= [\mathfrak{b}, \mathfrak{g}]$, a $\mathfrak{b}$-module containing each $B_h$ for $h \in \mathfrak{b}\setminus\{0\}$. We first show that $B_h = B_\mathfrak{b}$ and $\dim B_h = 3$. Indeed, by abelianity, $B_h$ is $\mathfrak{b}$-invariant; now $C_h = \mathfrak{b}$, so $\dim (\mathfrak{g}/B_h) = 1$. By Step [Step 12](#p:4:nonilpotent:st:1modules){reference-type="ref" reference="p:4:nonilpotent:st:1modules"}, $\mathfrak{b}$ acts trivially on $\mathfrak{g}/B_h$. So $[\mathfrak{b}, \mathfrak{g}] \leq B_h$. Hence for any $h' \in \mathfrak{b}$ one has $B_{h'} \leq B_h$ and equality follows. Thus $B_h$ does not depend on $h \in \mathfrak{b}\setminus\{0\}$; it equals $B_\mathfrak{b}$. We claim that $C_h^2 = C_h = \mathfrak{b}$ and $B_h^2 = B_\mathfrak{b}$. Indeed $C_h = \mathfrak{b}$ is abelian, so by Lemma [Lemma 1](#l:Cn){reference-type="ref" reference="l:Cn"}, $C_h^2$ is a subring of $\mathfrak{g}$. But $\dim C_h^2 \leq 2 \dim C_h < \dim \mathfrak{g}$, so $C_h^2$ is a proper subring containing $C_h = \mathfrak{b}$. Hence $C_h^2 = \mathfrak{b}= C_h$ and therefore $B_h^2 = B_h = B_\mathfrak{b}$. We deduce that $\mathfrak{g}= \mathfrak{b}+ B_\mathfrak{b}$ and the action of $\mathfrak{b}$ on $B_\mathfrak{b}$ is faithful. Otherwise $\mathfrak{b}\leq B_\mathfrak{b}$; hence $C_h \leq B_h$, which forces $B_h^2 < B_h$, a contradiction. Moreover, if $h \in \mathfrak{b}$ centralises $B_\mathfrak{b}$, it also centralises $\mathfrak{b}+ B_\mathfrak{b}= \mathfrak{g}$ and is therefore zero. We now contend that $B_\mathfrak{b}$ is $\mathfrak{b}$-irreducible. Suppose it is not.
If there is a $1$-dimensional $\mathfrak{b}$-submodule $V_1 < B_\mathfrak{b}$, then by Step [Step 12](#p:4:nonilpotent:st:1modules){reference-type="ref" reference="p:4:nonilpotent:st:1modules"} one has $V_1 \leq C_\mathfrak{g}^\circ(\mathfrak{b}) = \mathfrak{b}$, so $\mathfrak{b}= V_1 \leq B_\mathfrak{b}$, a contradiction. If there is a $2$-dimensional $\mathfrak{b}$-submodule $V_2 < B_\mathfrak{b}$, then by Step [Step 12](#p:4:nonilpotent:st:1modules){reference-type="ref" reference="p:4:nonilpotent:st:1modules"} again, one has $[\mathfrak{b}, B_\mathfrak{b}] \leq V_2$, so $B_\mathfrak{b}^2 < B_\mathfrak{b}$, a contradiction. We may thus linearise the action of $\mathfrak{b}$ on $B_\mathfrak{b}$; there is a field $\mathbb{K}$ such that $B_\mathfrak{b}\simeq \mathbb{K}_+$ and $\mathfrak{b}$ acts by scalars on $B_\mathfrak{b}$. Let us fix some notation: there are $\varphi\colon B_\mathfrak{b}\simeq \mathbb{K}_+$ and $\chi\colon \mathfrak{b}\hookrightarrow \mathbb{K}_+$ such that, for $h \in \mathfrak{b}$ and $x \in B_\mathfrak{b}$: $$\varphi([h, x]) = \chi(h) \cdot \varphi(x).$$ Fix arbitrary $x \in B_\mathfrak{b}$ and let $A_n = [\mathfrak{b}, \dots, [\mathfrak{b}, x]{\scriptscriptstyle \dots}]$. Also let $A'_n = \varphi(A_n)$. These definable, connected subgroups of $\mathbb{K}_+$ need not form an ascending series but their dimensions do. Let $n$ be such that $\dim A'_n = \dim A'_{n+1}$. If $h \in \mathfrak{b}$ and $q \in A'_n$, say $q = \varphi(y)$ with $y \in A_n$, then: $$\chi(h) \cdot q = \varphi([h, y]) \in A'_{n+1}.$$ But dimensions match, so $\chi(h) A'_n = A'_{n+1}$ as subgroups of $\mathbb{K}_+$. So fix $h_0 \in \mathfrak{b}\setminus\{0\}$. For arbitrary $h_1 \in \mathfrak{b}$, one has $\frac{1}{\chi(h_0)} \chi(h_1) \in N_\mathbb{K}(A'_n) = \{\lambda \in \mathbb{K}: \lambda A'_n \leq A'_n\}$. The latter is therefore an infinite definable subring, implying $N_\mathbb{K}(A'_n) = \mathbb{K}$. 
Hence $A'_n$ is a non-trivial ideal of a field, and $A'_n = \mathbb{K}$. Going back through $\varphi$, we have $A_n = B_\mathfrak{b}$. But $[A_n, A_n] \leq B_\mathfrak{b}$ by Lemma [Lemma 3](#l:Rosengarten){reference-type="ref" reference="l:Rosengarten"}. So $B_\mathfrak{b}$ is a subring. As it is normalised by $\mathfrak{b}$ and $\mathfrak{g}= \mathfrak{b}+ B_\mathfrak{b}$, we have a final contradiction. ◻ This completes the proof of Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}. ◻ Using non-nilpotence of Borel subrings (Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}) and the impossibility of a global weight decomposition (Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"}), we derive the existence of a $3$-dimensional Borel subring. **Proposition 7**. *There is a $3$-dimensional Borel subring.* *Proof.* Suppose not. The proof has something in common with the weight decomposition of Proposition [Proposition 2](#p:3:identification){reference-type="ref" reference="p:3:identification"}, Step [Step 7](#p:3:identification:st:weights){reference-type="ref" reference="p:3:identification:st:weights"}. Let $\mathfrak{b}$ be a Borel subring; by Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}, $\dim \mathfrak{b}= 2$ and $\mathfrak{b}$ is non-nilpotent. Using Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"} we let $h \in \mathfrak{b}$ act on $\mathfrak{b}'$ as $\operatorname{Id}_{\mathfrak{b}'}$. Let $e \in \mathfrak{b}'\setminus\{0\}$. We claim that $C_e = \mathfrak{b}'$. Otherwise $\mathfrak{b}' \sqsubset C_e \sqsubset \mathfrak{g}$, so $C_e$ is a $2$-dimensional Borel subring; it is non-nilpotent by Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}.
Now $C_e \neq \mathfrak{b}= N_\mathfrak{g}^\circ(\mathfrak{b}')$ so $\mathfrak{b}' \neq C_e'$. By Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}, $\mathfrak{b}'$ is a Cartan subring of $C_e$. Moreover $h$ normalises $C_e$ since for $c \in C_e$ one has: $$[[h, c], e] = [he, c] + [h, ce] = [e, c] = 0.$$ Hence $h$ also normalises $C_e'$. Since $\mathfrak{b}'$ is a Cartan subring of the non-nilpotent, $2$-dimensional ring $C_e$, Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"} gives $\eta \in \mathfrak{b}'$ acting on $C_e'$ as $\operatorname{Id}_{C_e'}$. Then for any $a \in C_e'$ one has $[h, a] \in C_e'$ and therefore: $$[h, a] = [h, [\eta, a]] = [h \eta, a] + [\eta, ha] = [\eta, a] + [h, a] = a + [h, a].$$ So $a = 0$, a contradiction. We derive $B_e^2 = \mathfrak{b}$. Indeed $C_e = \mathfrak{b}'$ is abelian. By Lemma [Lemma 1](#l:Cn){reference-type="ref" reference="l:Cn"}, $C_e^2$ is a subring of $\mathfrak{g}$; since $\dim C_e^2 \leq 2 \dim C_e = 2$, it is proper. Now $\mathfrak{b}\sqsubseteq C_e^2$ so equality holds. Hence $\mathfrak{b}' = C_e \sqsubset C_e^2 = \mathfrak{b}$. In particular, $\dim B_e^2 = 2$. We turn to the action of $\mathfrak{b}$ on $\mathfrak{g}/\mathfrak{b}$. If $\mathfrak{g}/\mathfrak{b}$ is $\mathfrak{b}$-irreducible, then by Corollary [Corollary 2](#c:Lie2){reference-type="ref" reference="c:Lie2"}, $\mathfrak{b}'$ centralises $\mathfrak{g}/\mathfrak{b}$; in particular $B_e^2 \leq B_e \leq \mathfrak{b}$. If $\mathfrak{g}/\mathfrak{b}$ is $\mathfrak{b}$-reducible, we still get $B_e^2 \leq \mathfrak{b}$ by linearising in dimension $1$ *twice*. In either case, $B_e^2 = \mathfrak{b}$. We now consider eigenspaces with respect to $h$ and show $\mathfrak{g}= E_{-2} + E_{-1} + E_0 + E_1$. Let us turn to the adjoint action of $h$. First, $E_0 \neq 0$ since $h \in \mathfrak{b}$ which is soluble. By construction, $\mathfrak{b}' \leq E_1$. Suppose $E_{-1} = 0$. 
Then by surjectivity of $\operatorname{ad}_h +1$ one gets: $$B_e = \operatorname{im}\operatorname{ad}_e = \operatorname{im}\operatorname{ad}_e (\operatorname{ad}_h + 1) = \operatorname{im}\operatorname{ad}_h \operatorname{ad}_e = [h, B_e].$$ However $B_e \geq B_e^2 = \mathfrak{b}$ intersects $C_h$ by solubility, a contradiction. We apply just the same argument to prove $E_{-2} \neq 0$; otherwise: $$B_e^2 = \operatorname{im}\operatorname{ad}_e^2 (\operatorname{ad}_h + 2) = \operatorname{im}\operatorname{ad}_h \operatorname{ad}_e^2 = [h, B_e^2],$$ but $B_e^2 = \mathfrak{b}$, a contradiction. Thus $\mathfrak{g}= E_{-2} + E_{-1} + E_0 + E_1$, against Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"}. ◻ #### The analysis of $3$-dimensional Borel subrings. We now combine the two main threads (weights, and large Borel subrings) into the analysis of $3$-dimensional Borel subrings. **Proposition 8**. *Let $\mathfrak{b}\sqsubset \mathfrak{g}$ be a $3$-dimensional Borel subring. Then:* 1. *[\[p:4:b3:st:I\]]{#p:4:b3:st:I label="p:4:b3:st:I"} $I = C_\mathfrak{b}^\circ(\mathfrak{g}/\mathfrak{b}) \triangleleft \mathfrak{b}$ is a $2$-dimensional, nilpotent ideal of $\mathfrak{b}$ containing $\mathfrak{b}'$;* 2. *[\[p:4:b3:st:J\]]{#p:4:b3:st:J label="p:4:b3:st:J"} if $J \leq I$ is a $1$-dimensional subring, then $J \leq Z(I)$;* 3. *[\[p:4:b3:k-l\]]{#p:4:b3:k-l label="p:4:b3:k-l"} if $h \in \mathfrak{b}$ is such that $I$ is a sum of eigenspaces, then either $I \leq E_0$ or $I = E_k \oplus E_{-k}$ is a sum of $1$-dimensional subrings, with $k \neq 0$;* 4. *[\[p:4:b3:st:Z\]]{#p:4:b3:st:Z label="p:4:b3:st:Z"} $Z^\circ(\mathfrak{b}) = 0$;* 5. *[\[p:4:b3:st:buniqueonI\]]{#p:4:b3:st:buniqueonI label="p:4:b3:st:buniqueonI"} $\mathfrak{b}$ is the only Borel subring containing $I$;* 6. 
*[\[p:4:b3:st:Iabelian\]]{#p:4:b3:st:Iabelian label="p:4:b3:st:Iabelian"} $I$ is abelian; for $i \in I\setminus Z(\mathfrak{b})$ one has $C_i = I$.* Be however careful that until [\[p:4:b3:st:Iabelian\]](#p:4:b3:st:Iabelian){reference-type="ref" reference="p:4:b3:st:Iabelian"} is proved, a subgroup of $I$ need not be a subring. *Proof.* 1. By Lemma [Lemma 5](#l:preHrushovski){reference-type="ref" reference="l:preHrushovski"}, $I$ is nilpotent; it is proper in $\mathfrak{b}$ since $\mathfrak{b}$ is no ideal of $\mathfrak{g}$. Consider the action of $\mathfrak{b}$ on $\mathfrak{g}/\mathfrak{b}$. Linearising in dimension $1$, $\dim(\mathfrak{b}/I) \leq 1$ and equality holds. 2. By nilpotence, $Z^\circ(I) \neq 0$. If $J \not \leq Z^\circ(I)$ then $I = J + Z^\circ(I)$ and $I$ is abelian, a contradiction. 3. *All eigenspaces in this proof are with respect to $h$.* Suppose $I \not\leq E_0$, so there is $k \neq 0$ with $E_k^I \neq 0$. Since $E_0^\mathfrak{b}\neq 0$ by solubility, Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"} implies $\dim E_0 = \dim E_k = 1$. Therefore write $I = E_k \oplus E_\ell$; possibly $\ell = 0$, but even so $\dim E_\ell = 1$. We may assume $E_{-k} \neq 0$. Indeed, suppose $E_{-k} = 0$. Fix $e \in E_k = E_k^\mathfrak{b}\leq I$. Notice $B_e \leq \mathfrak{b}$. Now $E_{-k} = 0$ implies $[h, B_e] = \operatorname{im}\operatorname{ad}_h \operatorname{ad}_e = \operatorname{im}\operatorname{ad}_e (\operatorname{ad}_h + k) = \operatorname{im}\operatorname{ad}_e = B_e$. Therefore $B_e \leq \mathfrak{b}' \leq I$. If $\ell = 0$ then $B_e \leq [h, I] = E_k$, and summing over $e \in E_k$ we obtain $E_k \triangleleft \mathfrak{g}$, a contradiction. Hence $\ell \neq 0$. If $E_{-\ell} = 0$, then for $e \in E_\ell$ we also have $B_e \leq I$; summing over $e \in E_k \cup E_\ell$ we obtain $I \triangleleft \mathfrak{g}$, a contradiction. Hence $\ell\neq 0$ and $E_{-\ell} \neq 0$. Up to exchanging, we may assume $E_{-k} \neq 0$. 
Let $\mathfrak{h}= E_{-k} + E_0 + E_k$. By Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"}, each term has dimension $1$. By Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"}, $E_{\pm 2k} = 0$, so $\mathfrak{h}$ is a $3$-dimensional subring, therefore self-normalising by Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}. In particular, $h \in \mathfrak{h}$; hence $E_{-k} + E_k \leq \mathfrak{h}'$. But $\mathfrak{h}$ is soluble by Proposition [Proposition 3](#p:4:minimalsimple){reference-type="ref" reference="p:4:minimalsimple"}, so $[E_{-k}, E_k] = 0$. However $I \leq C_\mathfrak{g}^\circ(E_k)$ by [\[p:4:b3:st:J\]](#p:4:b3:st:J){reference-type="ref" reference="p:4:b3:st:J"}. If $I < C_\mathfrak{g}^\circ(E_k)$, then the latter is a $3$-dimensional subring, hence self-normalising; then $h \in C_\mathfrak{g}^\circ(E_k)$, a contradiction. So $I = C_\mathfrak{g}^\circ(E_k) \geq E_{-k}$, and the claim is proved. 4. Let $Z = Z^\circ(\mathfrak{b})$ and suppose $Z \neq 0$. Since $\mathfrak{b}$ is not nilpotent by Proposition [Proposition 6](#p:4:nonilpotent){reference-type="ref" reference="p:4:nonilpotent"}, $\dim Z = 1$ and $\mathfrak{b}/Z$ is non-nilpotent. In particular $Z \leq I$. By Theorem [Theorem 2](#t:2){reference-type="ref" reference="t:2"}, there is $\eta \in \mathfrak{b}/Z$ acting on $(\mathfrak{b}/Z)'$ like the identity. We lift to $h \in \mathfrak{b}$ with the same action, and now lift the eigenspace. *All eigenspaces in this proof are with respect to $h$.* Thus $E_1^\mathfrak{b}\neq 0$. By Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"}, one has $\dim E_0 = \dim E_1 = 1$, so $E_0 = E_0^\mathfrak{b}= Z \leq I$; of course $E_1 = E_1^\mathfrak{b}\leq I$ as well, contradicting [\[p:4:b3:k-l\]](#p:4:b3:k-l){reference-type="ref" reference="p:4:b3:k-l"}. 5.
Let $\mathfrak{b}_2 \neq \mathfrak{b}$ be another Borel subring containing $I$. Clearly $\dim \mathfrak{b}_2 = 3$, so our earlier analysis applies. Let $I_2 = C_{\mathfrak{b}_2}^\circ(\mathfrak{g}/\mathfrak{b}_2)$. Since $I_2 \triangleleft \mathfrak{b}_2$ one has $I_2 \neq I$. So $X = (I \cap I_2)^\circ$ has dimension exactly $1$. Now $X$ is a subring of both $I$ and $I_2$, so by [\[p:4:b3:st:J\]](#p:4:b3:st:J){reference-type="ref" reference="p:4:b3:st:J"} one has $C_\mathfrak{g}^\circ(X) \geq I + I_2$ and therefore $C_\mathfrak{g}^\circ(X)$ is a $3$-dimensional Borel subring with an infinite centre, against [\[p:4:b3:st:Z\]](#p:4:b3:st:Z){reference-type="ref" reference="p:4:b3:st:Z"}. 6. Suppose not. Then $J = I' \triangleleft \mathfrak{b}$ is a $1$-dimensional ideal of $\mathfrak{b}$. There is $h \in \mathfrak{b}$ with $J = E_1(h)$. Indeed, by [\[p:4:b3:st:Z\]](#p:4:b3:st:Z){reference-type="ref" reference="p:4:b3:st:Z"}, the action of $\mathfrak{b}$ on $J$ is non-trivial. Linearising in dimension $1$, there is $h \in \mathfrak{b}$ acting on $J$ like the identity. *All eigenspaces are now with respect to $h$.* We may assume $E_{-1} \neq 0$. If not then for $j \in J = E_1$ one has $[h, B_j] = B_j$ and quickly $B_j \leq I$. Since $J \leq Z(I)$ by [\[p:4:b3:st:J\]](#p:4:b3:st:J){reference-type="ref" reference="p:4:b3:st:J"}, one also has $C_j \geq I$. Now by [\[p:4:b3:st:buniqueonI\]](#p:4:b3:st:buniqueonI){reference-type="ref" reference="p:4:b3:st:buniqueonI"}, $\mathfrak{b}$ is the only Borel subring containing $I$ and $j \notin Z(\mathfrak{b})$ because of the action of $h$, so $C_j = I$. It follows $B_j = I = C_j$. By Lemma [Lemma 2](#l:divisors){reference-type="ref" reference="l:divisors"}, $B_j = I$ is abelian and we are done. Let $\mathfrak{h}= E_{-1} + E_0 + E_1$, which by Proposition [Proposition 4](#p:4:weights){reference-type="ref" reference="p:4:weights"} is a $3$-dimensional Borel subring. 
By Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}, $h \in \mathfrak{h}$. In particular $E_{-1} + E_1 \leq \mathfrak{h}' < \mathfrak{h}$, implying $[E_{-1}, E_1] = 0$. However $I \leq C_\mathfrak{g}^\circ(E_1)$ by [\[p:4:b3:st:J\]](#p:4:b3:st:J){reference-type="ref" reference="p:4:b3:st:J"}, and equality must hold since otherwise $h \in C_\mathfrak{g}^\circ(E_1)$ by Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}. So $E_{-1} \leq C_\mathfrak{g}^\circ(E_1) = I$ and $I = E_{-1} + E_1$, hence abelian. It remains to take $i \in I \setminus Z(\mathfrak{b})$ and prove $C_i = I$. One inclusion is clear. If $C_i > I$ then by [\[p:4:b3:st:buniqueonI\]](#p:4:b3:st:buniqueonI){reference-type="ref" reference="p:4:b3:st:buniqueonI"}, $C_i = \mathfrak{b}$ and $i \in Z(\mathfrak{b})$, a contradiction.  ◻ #### The final contradiction. **Proposition 9**. *The configuration is inconsistent.* *Proof.* Let $\mathfrak{b}\sqsubset \mathfrak{g}$ be a $3$-dimensional Borel subring, given by Proposition [Proposition 7](#p:4:3dimensionalBorel){reference-type="ref" reference="p:4:3dimensionalBorel"}. Since $\mathfrak{b}$ is no ideal of $\mathfrak{g}$, the action of $\mathfrak{b}$ on $\mathfrak{g}/\mathfrak{b}$ is non-trivial. Linearising in dimension $1$, there is $h \in \mathfrak{b}$ acting on $\mathfrak{g}/\mathfrak{b}$ like the identity. *All eigenspaces in this proof are with respect to $h$.* Lifting the eigenspace, $E_1 \neq 0$. By solubility, $E_0 \neq 0$. By Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"}, $\dim E_0 = \dim E_1 = 1$ and $E_2 = 0$, so $E_1$ is a $1$-dimensional, abelian subring. As in Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"}, which will be used heavily, we let $I = C_\mathfrak{b}^\circ(\mathfrak{g}/\mathfrak{b})$. **Step 15**. We may assume $h \in E_0$. 
*Proof.* Consider the action of $E_0$ on $\mathfrak{g}/\mathfrak{b}$. Suppose $E_0$ centralises $\mathfrak{g}/\mathfrak{b}$, viz. $E_0 \leq I$. By Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"}, one has $\dim E_0 = \dim E_1 = 1$. If $E_1 \leq \mathfrak{b}$, then $I = E_0 + E_1$ where neither is trivial, against Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} [\[p:4:b3:k-l\]](#p:4:b3:k-l){reference-type="ref" reference="p:4:b3:k-l"}. Hence $E_1 \not \leq \mathfrak{b}$ and $(E_1\cap \mathfrak{b})^\circ = 0$. However $E_0 \leq I$, so $[E_0, E_1] \leq (\mathfrak{b}\cap E_1)^\circ = 0$ and $E_0$ centralises $E_1$. Then $E_1 \leq C_\mathfrak{g}^\circ(E_0)$. The latter contains $I$; by Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} [\[p:4:b3:st:buniqueonI\]](#p:4:b3:st:buniqueonI){reference-type="ref" reference="p:4:b3:st:buniqueonI"}, it equals either $I$ or $\mathfrak{b}$. This is a contradiction in either case. Therefore $E_0 \not\leq I$, and $\mathfrak{b}= E_0 + I$. Write $h = e_0 + i$ in obvious notation. Then $e_0$ also acts on $\mathfrak{g}/\mathfrak{b}$ like the identity; hence $E_1(e_0) \neq 0$. Moreover $E_0 = E_0(h) \leq E_0(e_0)$ by abelianity. By Proposition [Proposition 5](#p:4:E0E1){reference-type="ref" reference="p:4:E0E1"}, one has $\dim E_0(e_0) = 1$, so $E_0(e_0) = E_0(h)$ contains $e_0$. Up to considering $e_0$ we are done. ◻ **Step 16**. $E_1 \not\leq \mathfrak{b}$. *Proof.* Suppose $E_1 \leq \mathfrak{b}$. Let $f = \operatorname{ad}_h - 1$, an additive endomorphism of $\mathfrak{g}$. Let $K_1 = \ker^\circ f = E_1$ and $K_2 = \ker^\circ(f^2) \geq K_1$. By definition of $h$, one has $\operatorname{im}f \leq \mathfrak{b}$. Since $\dim \ker f = 1$, one has $\operatorname{im}f = \mathfrak{b}$, viz. $f\colon \mathfrak{g}\twoheadrightarrow \mathfrak{b}$ is surjective. 
Since $K_1 = E_1 \leq \mathfrak{b}$, the restriction of $f\colon K_2 \to K_1$ remains surjective. Therefore $\dim K_2 = 2 \dim K_1 = 2$. We claim $K_2 = I$. Let $k_2 \in K_2$, $k_1 = f(k_2) \in K_1$, and $e \in E_1$. Then: $$[h, [e, k_2]] = [he, k_2] + [e, hk_2] = [e, k_2] + [e, k_2 + k_1] = 2[e, k_2] + [e, k_1].$$ However $e, k_1 \in E_1 = K_1$ while $E_2 = 0$, so $[e, k_1] = 0$. There remains $[e, k_2] \in E_2 = 0$, whence $K_2 \leq C_\mathfrak{g}^\circ(E_1)$. Now $E_1 \sqsubset I$ so $I \leq C_\mathfrak{g}^\circ(E_1)$. If the latter has dimension $3$, then it is a $3$-dimensional Borel subring normalised by $h$. By Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}, $h \in C_\mathfrak{g}^\circ(E_1)$, a contradiction. Therefore $C_\mathfrak{g}^\circ(E_1) = I = K_2$. In particular, $B_{k_2} \leq \mathfrak{b}$. In the notation above, also let $x \in \mathfrak{g}$. By definition of $h$, there is $b \in \mathfrak{b}$ with $[h, x] = x + b$. Therefore: $$[h, [k_2, x]] = [hk_2, x] + [k_2, hx] = [k_2 + k_1, x] + [k_2, x + b] = 2[k_2, x] + [k_1, x] + [k_2, b].$$ The left-hand side is in $\mathfrak{b}'$. So is $[k_2, b]$. If $E_{-1} = 0$ then $[h, B_{k_1}] = B_{k_1}$ so $B_{k_1} \leq \mathfrak{b}'$. In that case, $[k_2, x] \in \mathfrak{b}' \leq I$, and therefore $K_2 = I$ is an ideal of $\mathfrak{g}$, a contradiction. So $E_{-1} \neq 0$. Let $\mathfrak{h}= E_{-1} + E_0 + E_1$, which must be soluble. Hence $E_{-1} \leq C_\mathfrak{g}^\circ(E_1) = I$ and $\mathfrak{b}= E_0 + E_{-1} + E_1$. But then for $e_{-1} \in E_{-1} \leq I = K_2$, one has $f(e_{-1}) = - 2 e_{-1} \in E_1$, whence $e_{-1} = 0$, a contradiction. This proves $E_1 \not\leq \mathfrak{b}$. ◻ Let $\mathfrak{a}= E_1$; by Step [Step 16](#p:4:contradiction:st:anotleqb){reference-type="ref" reference="p:4:contradiction:st:anotleqb"} one has $\mathfrak{g}= \mathfrak{a}+ \mathfrak{b}$. Let $\mathfrak{b}_2 = \mathfrak{a}+ E_0$, a soluble subring.
By Step [Step 15](#p:4:contradiction:st:hinE0){reference-type="ref" reference="p:4:contradiction:st:hinE0"} we may assume $h \in E_0$; so one has $\mathfrak{b}'_2 = [E_0, \mathfrak{a}] = \mathfrak{a}$. Let $\mathfrak{b}_2 < V \leq \mathfrak{g}$ be a $\mathfrak{b}_2$-irreducible module; $V$ need not be a subring. Linearising in dimension $1$ or by Corollary [Corollary 2](#c:Lie2){reference-type="ref" reference="c:Lie2"}, $[\mathfrak{b}'_2, V] \leq \mathfrak{b}_2$. Let $Y = (I \cap V)^\circ$, an $E_0$-module. Since $I$ is abelian by Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} [\[p:4:b3:st:Iabelian\]](#p:4:b3:st:Iabelian){reference-type="ref" reference="p:4:b3:st:Iabelian"}, $Y$ is actually a subring. A priori, $0 < Y \leq I$. Notice $[\mathfrak{a}, Y] \leq (\mathfrak{b}\cap \mathfrak{b}_2)^\circ = E_0$. We also claim $[E_0, Y] = Y$ since otherwise $h$ centralises $Y$, so $Y \leq E_0$ and equality holds, forcing $h \in I$, a contradiction. If $\dim Y = 2$ then $Y = I$. Then for $a \in \mathfrak{a}$, the map $\operatorname{ad}_a$ takes $I$ to $E_0$. But by Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} [\[p:4:b3:st:Z\]](#p:4:b3:st:Z){reference-type="ref" reference="p:4:b3:st:Z"} one has $\dim Z(\mathfrak{b}) = 0$, so there is $i \in I\setminus Z(\mathfrak{b})$ with $[a, i] = 0$. By Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} [\[p:4:b3:st:Iabelian\]](#p:4:b3:st:Iabelian){reference-type="ref" reference="p:4:b3:st:Iabelian"}, $a$ normalises $C_i = I$ and therefore $a$ normalises $N_\mathfrak{g}^\circ(I) = \mathfrak{b}$. Hence $a \in \mathfrak{b}$ by Lemma [Lemma 6](#l:selfnormalisation){reference-type="ref" reference="l:selfnormalisation"}. This proves $\mathfrak{a}\leq \mathfrak{b}$, against Step [Step 16](#p:4:contradiction:st:anotleqb){reference-type="ref" reference="p:4:contradiction:st:anotleqb"}. Therefore $\dim Y = 1$.
Let $\mathfrak{h}= \mathfrak{a}+ E_0 + Y$, a $3$-dimensional subring. By Proposition [Proposition 3](#p:4:minimalsimple){reference-type="ref" reference="p:4:minimalsimple"}, $\mathfrak{h}$ is soluble. Since $[E_0, Y] = Y$ and $[E_0, \mathfrak{a}] = \mathfrak{a}$, solubility implies $[\mathfrak{a}, Y] = 0$. So $\mathfrak{a}\leq C_\mathfrak{g}^\circ(Y) = I \leq \mathfrak{b}$ by Proposition [Proposition 8](#p:4:b3){reference-type="ref" reference="p:4:b3"} [\[p:4:b3:st:buniqueonI\]](#p:4:b3:st:buniqueonI){reference-type="ref" reference="p:4:b3:st:buniqueonI"}, a final contradiction. ◻ It would be interesting to push this line further and try to classify minimal simple (or better: $N_\circ^\circ$-)Lie rings of finite Morley rank. Care will be needed since there is no clear indication what unipotence theory will become in this context, and since the Witt algebra is $N_\circ^\circ$. # Appendix: questions {#S:questions} We list a number of questions, some already in [@DRegard]. ## Extending the result {#extending-the-result .unnumbered} **Question 1**. *What happens to our theorem in characteristic $3$?* This is likely to be a question for pure algebraists, and a challenging one. Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"} is open in characteristic $3$. ## Abstract Lie rings and their representations {#abstract-lie-rings-and-their-representations .unnumbered} **Question 2**. *State and prove an analogue of the 'Borel-Tits' theorem [@BTHomomorphismes], viz. describe automorphisms of Lie rings of Lie-Chevalley type.* One could start with, or content oneself with, algebraically closed fields (see the elegant model-theoretic proof for groups in [@PMessieurs]), and even ask the same in Lie-Cartan type. **Question 3**. *Devise identification methods à la Curtis-Phan-Tits [@TCurtis; @GDevelopments] for Lie rings of Lie-Chevalley type.* **Question 4**. *Study Lie ring representations of Lie rings of Lie-Chevalley type $\mathfrak{G}_\Phi(\mathbb{K})$.
(Started in [@DSymmetric1; @DLocally]. Also see Question [Question 6](#q:Sasha){reference-type="ref" reference="q:Sasha"}.)* One could also ask about Lie rings of Lie-Chevalley type (or even Lie-Cartan type) up to elementary equivalence, à la Malcev [@MElementary; @BElementary]. Such questions usually rely on retrieving the base field, which should be easier here than in groups. For more on elementary properties of Chevalley *groups*, see [@BMElementary]. ## Lie modules of finite Morley rank {#lie-modules-of-finite-morley-rank .unnumbered} In the proof of Theorem [Theorem 4](#t:4){reference-type="ref" reference="t:4"} (more specifically, when analysing $3$-dimensional Borel subrings), results like relevant analogues of Maschke's Theorem [@TStructure Proposition 2.13] or Lie's Theorem (in characteristic larger than Morley rank, generalising Corollary [Corollary 2](#c:Lie2){reference-type="ref" reference="c:Lie2"}) would come in handy. **Question 5**. *Develop basic tools for Lie modules of finite Morley rank.* **Question 6**. *Let $\mathbb{K}$ be an algebraically closed field of characteristic $p > 0$ and $\mathfrak{g}$ be a finite-dimensional, simple $\mathbb{K}$-Lie algebra. Assume that $\mathfrak{g}$ acts definably and irreducibly on an elementary abelian $p$-group $V$ of finite Morley rank. Prove that:* 1. *[\[q:Sasha:i:linear\]]{#q:Sasha:i:linear label="q:Sasha:i:linear"} $V$ has the structure of a finite-dimensional $\mathbb{K}$-vector space compatible with the action of $\mathfrak{g}$ \[see [@BFinite Theorem 3]\];* 2. *[\[q:Sasha:i:algebra\]]{#q:Sasha:i:algebra label="q:Sasha:i:algebra"} $\mathfrak{g}$ is a $\mathbb{K}$-Lie subalgebra of $\mathfrak{gl}_\mathbb{K}(V)$. \[More dubious.\]* The original result was for *connected algebraic groups*, which correspond more to Lie rings of Lie-Chevalley type.
And for that reason, we are more confident in [\[q:Sasha:i:linear\]](#q:Sasha:i:linear){reference-type="ref" reference="q:Sasha:i:linear"} than in [\[q:Sasha:i:algebra\]](#q:Sasha:i:algebra){reference-type="ref" reference="q:Sasha:i:algebra"}. **Question 7**. *Work in a theory of finite Morley rank. Prove a 'Steinberg tensor product theorem' [@SRepresentations] for definable $\mathfrak{G}_\Phi(\mathbb{K})$-modules. \[See [@BFinite Theorem 3].\]* A more abstract direction makes no assumptions on $\mathfrak{g}$. **Question 8**. *Classify faithful, irreducible $\mathfrak{g}$-modules of Morley rank $2$. \[See [@DActions].\]* ## Abstract Lie rings of finite Morley rank {#abstract-lie-rings-of-finite-morley-rank .unnumbered} There is of course the question of field interpretation in a non-abelian nilpotent Lie ring, where unreasonable hopes are shattered by [@BNew]. Returning to the Reineke phenomenon, some topics are quite unclear. **Question 9**. - *Is there a more model-theoretic proof of Fact [Fact 1](#f:1){reference-type="ref" reference="f:1"} (viz. one *not* using Block-Premet-Strade-Wilson)?* - *Let $\mathfrak{g}> 0$ be a connected Lie ring of finite Morley rank and $x \in \mathfrak{g}$. Is $C_x$ infinite? \[See [@BBCInvolutions Proposition 1.1].\]* - *When is $x \in C_x$? \[One should be careful with such questions, the Baudisch algebras could be counter-examples.\]* At the soluble level, we have reasonable expectations. **Question 10**. *Develop a theory of soluble Lie rings of finite Morley rank: Fitting theory and Cartan subring theory. \[See [@FAnormaux].\]* Of course we do not aim for 'conjugacy' (if at all meaningful since there is no group around; fails anyway in Witt's algebra)---but existence and self-normalisation. Notice that existence could be hoped without the solubility assumption \[see [@FJExistence]\], though we currently see no way to attack this. Then of course, what matters is the simple case. 
Let $\mathbb{K}$ be an algebraically closed field of characteristic $p$. Then Witt's algebra $W_\mathbb{K}(1; \underline{1})$ is a $p$-dimensional, simple Lie algebra over $\mathbb{K}$, hence definable in the pure field. Moreover, it has a subalgebra of codimension $1$. Mind the assumption on the characteristic in the following. **Question 11**. *Let $\mathfrak{g}$ be a simple Lie ring of finite Morley rank $d$ and characteristic $p > d$. Suppose $\mathfrak{g}$ has a definable subring of corank $1$. Then $\mathfrak{g}\simeq \mathfrak{sl}_2(\mathbb{K})$. \[See [@HAlmost].\]* Last but not least in this direction is, of course, the question of finding a strategy for the '$\log \mathrm{CZ}$' conjecture. For $\mathrm{CZ}$ itself, the positive solution in so-called 'even type' [@ABCSimple] was a form of the classification of the finite simple groups ([cfsg]{.smallcaps}) seen from the *reducing* lens of model-theoretic arguments for the tame infinite. At the high end of the classification, viz. in high Lie rank, are 'generic identification results'. **Question 12**. *Devise generic identification methods for simple Lie rings of finite Morley rank. \[See Question [Question 3](#q:CPT){reference-type="ref" reference="q:CPT"}, see [@BBGeneric].\]* As said in the introduction, we are curious whether model theory may provide a reduced sketch of the Block-Premet-Strade-Wilson theorem, like it did for the [cfsg]{.smallcaps}.
Reasoning by loose analogy, fields with non-minimal $\mathbb{G}_m$ have not provided counter-examples to the Cherlin-Zilber conjecture, at least not yet.) **Question 13**. *Determine in which model-theoretic setting the present paper took place.* Second, by Lie-Chevalley correspondence, we mean the ability to attach to some groups a Lie ring. We do not believe in one for abstract groups of finite Morley rank, as categorical nature allows no infinitesimal methods. (This contrasts sharply with $o$-minimal nature, where infinitesimals are provided by elementary extensions; in the tame *ordered* case, model-theoretic functoriality proves geometrically sufficient.) One could add a stronger model-theoretic assumption with geometric flavour. **Question 14**. *Is there a Lie-Chevalley correspondence for groups definable *in Zariski geometries* [@HZZariski]?* To our understanding, the Cherlin-Zilber conjecture is still open for groups definable in Zariski geometries. A positive answer to Question [Question 14](#q:LieChevalleyZariski){reference-type="ref" reference="q:LieChevalleyZariski"} would certainly give a natural proof. ## Acknowledgements {#acknowledgements .unnumbered} We thank Gregory Cherlin for useful comments. #### Very special thanks: the CIRM. The collaboration started virtually during a lockdown; but for serious research one must meet in person. This was made possible thanks to the [cirm-aims]{.smallcaps} residence programme (no longer restricted to host [aims]{.smallcaps} countries), allowing the second author to visit France for one week and the first author to visit Gabon for two, all in the Summer of 2023. Both stays proved highly productive and pleasant. Our very special thanks to Olivia Barbarroux, Pascal Hubert and Carolin Pfaff.
--- abstract: | Given a positive integer $n$, consider a random permutation $\tau$ of the set $\{1,2,\ldots, n\}$. In $\tau$, we look for sequences of consecutive integers that appear in adjacent positions: a maximal such sequence is called a block. Each block in $\tau$ is merged, and after all the merges, the elements of this new set are relabeled from $1$ to the current number of elements. We continue to randomly permute and merge this new set until only one integer is left. In this paper, we investigate the asymptotic behavior of $X_n$, the number of permutations needed for this process to end. In particular, we find an explicit asymptotic expression for each of $\mathbf{E}[X_n]$ and $\mathbf{Var}[X_n]$ as well as for every higher central moment, and show that $X_n$ satisfies a central limit theorem. address: - Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, B3H 4R2, Canada. - Zu Chongzhi Center for Mathematics and Computational Sciences, Duke Kunshan University, Kunshan, Jiangsu Province, 215316, China. - Zu Chongzhi Center for Mathematics and Computational Sciences, Duke Kunshan University, Kunshan, Jiangsu Province, 215316, China. author: - Shane Chern - Lin Jiu$^1$ - Italo Simonelli title: A central limit theorem for a card shuffling problem --- [^1] # Introduction Given a positive integer $n$, consider a random permutation $\tau$ of the set $[n]:=\{1,2,\ldots, n\}$. In $\tau$, we look for sequences of consecutive integers that appear in adjacent positions, and a maximal such sequence is called a *block*. Each block in $\tau$ is merged into its first integer, and after all the merges the elements of this new set are relabeled from $1$ to the current number of elements. For example, if $n=10$, and $\tau = (1,7,5,6,8, 10, 9, 2, 3, 4)$, then the blocks are $(1)$, $(7)$, $(5,6)$, $(8)$, $(10)$, $(9)$, and $(2, 3, 4)$.
The merging gives $(1, 7, 5, 8, 10, 9, 2)$ and the relabeling will result in $(1,4,3,5,7,6,2)$, a permutation of the set $[7]= \{1, 2, 3, 4, 5, 6, 7\}$. We randomly permute this new set, and continue this merging and permuting until only one integer is left. The quantity of interest is $X_n$, the number of permutations needed for the process to end. This problem was introduced by Rao et al. [@Rao] to model the number of times that catalysts must be added to $n$ molecules to bond into a single lump. The molecules have a given hierarchical order, which leads to the above mathematical formulation of the process. These authors proved that, for the *mean value* of $X_n$, i.e., $\mu_n:=\mathbf{E}[X_n]$, the inequality $n \leq \mathbf{E}[X_n] \leq n + \sqrt{n}$ holds. Hence, $$\begin{aligned} \label{asymp-mean} \mathbf{E}[X_n]\sim n.\end{aligned}$$ However, they did not provide any explicit expression for $\mathbf{E}[X_n]$ or any estimate for $\mathbf{Var}[X_n]$. Instead, they simulated the distribution of $X_n$ for $n=2,\ldots, 100$, and their calculations hinted that the asymptotic distribution of $X_n$ would tend to be normal. In this paper, we settle this question in the affirmative, as follows. **Theorem 1**. *Let $Z$ denote a standard normal random variable, i.e., $Z \sim \mathcal{N}(0,1)$. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:main} \frac{X_n -n}{\sqrt{n}} \stackrel{w}{\longrightarrow} Z.\end{aligned}$$* The proof of the above central limit theorem follows directly from the asymptotic estimate of each central moment of $X_n$. To present these asymptotic relations, we begin with some notation.
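Before introducing the notation, the block-finding and merge-and-relabel step in the worked example above can be checked computationally. The sketch below is ours and purely illustrative; the helper name `merge_and_relabel` is hypothetical and does not come from [@Rao].

```python
def merge_and_relabel(perm):
    """One step of the process: merge each maximal block of consecutive
    integers in adjacent positions into its first integer, then relabel
    the surviving integers by their rank."""
    kept = [perm[0]]           # the first integer of each block is kept
    last = perm[0]
    for v in perm[1:]:
        if v == last + 1:      # v extends the current block and is merged away
            last = v
        else:                  # v starts a new block
            kept.append(v)
            last = v
    rank = {v: i + 1 for i, v in enumerate(sorted(kept))}
    return [rank[v] for v in kept]

# The worked example with n = 10:
print(merge_and_relabel([1, 7, 5, 6, 8, 10, 9, 2, 3, 4]))  # [1, 4, 3, 5, 7, 6, 2]
```

Applied to $\tau = (1,7,5,6,8,10,9,2,3,4)$, the sketch indeed returns $(1,4,3,5,7,6,2)$, a permutation of $[7]$.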
First, the conventional *Bachmann--Landau symbols* are to be used throughout this paper: - If there exists a constant $C$ such that $|f(n)|\le C g(n)$, we adopt the *big-$O$ notation* and write $f(n)=O\big(g(n)\big)$; - If $\lim_{n\rightarrow+\infty} f(n)/g(n)=0$, we adopt the *small-$o$ notation* and write $f(n)=o\big(g(n)\big)$; - If $\lim_{n\rightarrow+\infty} f(n)/g(n)=1$, we adopt the *asymptotic equivalence symbol* and write $f(n)\sim g(n)$. Also, the *double factorials* are defined by $$\begin{aligned} n!!:=n\cdot (n-2) \cdot (n-4) \cdots,\end{aligned}$$ where the product terminates at $2$ if $n$ is even and at $1$ if $n$ is odd. Now our main result can be stated as follows. **Theorem 2**. *For every $m\ge 2$, we have, as $n\to +\infty$, $$\begin{aligned} \label{eq:mom-asymp} \mathbf{E}\big[(X_n-\mu_n)^m\big] = \begin{cases} (2M-1)!!\cdot n^M + O(n^{M-1}\log n), & \text{if $m=2M$},\\ \tfrac{2}{3}M(2M+1)!!\cdot n^M + O(n^{M-1}\log n), & \text{if $m=2M+1$}, \end{cases} \end{aligned}$$ wherein the asymptotic relation depends only on $m$.* *Proof of Theorem [Theorem 1](#th:main){reference-type="ref" reference="th:main"} from Theorem [Theorem 2](#th:c-mom){reference-type="ref" reference="th:c-mom"}.* Directly, [\[eq:mom-asymp\]](#eq:mom-asymp){reference-type="eqref" reference="eq:mom-asymp"} indicates that $$\begin{aligned} \label{eq:Var-asymp} \mathbf{Var}[X_n] = \mathbf{E}\big[(X_n-\mu_n)^2\big] &\sim n. \end{aligned}$$ Therefore, for $Z_n := (X_n - \mu_n)/\sqrt{\mathbf{Var}[X_n]}$, we have $\mathbf{E}[Z_n]=0$, $\mathbf{Var}[Z_n]=1$, and more importantly, as $n\to +\infty$, $$\begin{aligned} \mathbf{E}\big[Z_n^m\big] = \frac{\mathbf{E}\big[(X_n-\mu_n)^m\big]}{\mathbf{Var}[X_n]^{m/2}} \to \begin{cases} (m-1)!!, & \text{if $m$ is even},\\ 0, & \text{if $m$ is odd}, \end{cases} \end{aligned}$$ matching the moments of the standard normal distribution $Z$. By recourse to *Chebyshev's method of moments* [@Bil1995 p. 390, Theorem 30.2], the weak convergence $Z_n \Rightarrow Z$ is clear.
Lastly, the asymptotic relations [\[asymp-mean\]](#asymp-mean){reference-type="eqref" reference="asymp-mean"} and [\[eq:Var-asymp\]](#eq:Var-asymp){reference-type="eqref" reference="eq:Var-asymp"} assert the validity of Theorem [Theorem 1](#th:main){reference-type="ref" reference="th:main"}. ◻ As now the main task is to prove Theorem [Theorem 2](#th:c-mom){reference-type="ref" reference="th:c-mom"}, we shall organize the paper in the following way. In Section [2](#sec:A(n,k)){reference-type="ref" reference="sec:A(n,k)"}, we review the shuffling model in [@Rao] and derive a handful of properties of its associated probability sequence. Those properties will be utilized in Section [3](#sec:asymp-generic){reference-type="ref" reference="sec:asymp-generic"} for the asymptotic analysis in regard to a family of generic recurrences. In Section [4](#sec:mean-value){reference-type="ref" reference="sec:mean-value"}, a more precise asymptotic estimate of $\mu_n$ is obtained; and in particular, Subsection [4.2](#sec:Bell){reference-type="ref" reference="sec:Bell"} is devoted to a surprising connection between the mean values and the Bell numbers. Finally, in Section [5](#sec:var-3moment){reference-type="ref" reference="sec:var-3moment"}, we elaborate on Theorem [Theorem 2](#th:c-mom){reference-type="ref" reference="th:c-mom"} with a delicate estimate of the error terms, and prove this strengthening, as stated in Theorem [Theorem 29](#th:c-mom-new){reference-type="ref" reference="th:c-mom-new"}, by an inductive argument. ## Convention {#convention .unnumbered} - Throughout the rest of the paper, all limits and asymptotic relations are to be interpreted with respect to $n$, as $n\to +\infty$. - In this work we may occasionally encounter terms of the form $0^0$ (e.g., the summand $(n-k)^\ell$ in [\[eq:A-(n-k)u\]](#eq:A-(n-k)u){reference-type="eqref" reference="eq:A-(n-k)u"} when $k=n$ and $\ell=0$). In such scenarios, we simply treat $0^0$ as $1$. 
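Before entering the formal analysis, we remark that the process is easy to simulate. The following sketch is ours and purely illustrative (the helper names `merge_and_relabel` and `sample_X` are hypothetical, not from [@Rao]); its empirical mean is consistent with the bounds $n \le \mathbf{E}[X_n] \le n+\sqrt{n}$ recalled in the introduction.

```python
import random

def merge_and_relabel(perm):
    # Merge each maximal block of adjacent consecutive integers into its
    # first element, then relabel the survivors by their rank.
    kept, last = [perm[0]], perm[0]
    for v in perm[1:]:
        if v == last + 1:
            last = v
        else:
            kept.append(v)
            last = v
    rank = {v: i + 1 for i, v in enumerate(sorted(kept))}
    return [rank[v] for v in kept]

def sample_X(n, rng):
    """One realisation of X_n: number of shuffles until a single integer
    remains (with the convention X_1 = 0)."""
    perm, steps = list(range(1, n + 1)), 0
    while len(perm) > 1:
        rng.shuffle(perm)
        steps += 1
        perm = merge_and_relabel(perm)
    return steps

rng = random.Random(2024)
n, trials = 30, 2000
mean = sum(sample_X(n, rng) for _ in range(trials)) / trials
print(mean)  # close to n, inside the band [n, n + sqrt(n)] up to sampling error
```

A random permutation of $[m]$ has about one adjacency of consecutive integers on average, so each shuffle removes roughly one element; this is the heuristic behind $\mathbf{E}[X_n]\sim n$, and the simulation reflects it.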
# The shuffling model and probability sequence {#sec:A(n,k)} ## The shuffling model To formulate the model in this work, we recall that, denoting by $Y_n$ the number of blocks in a random permutation of $[n]$, it is well-known (cf. [@Rao p. 615, eq. (1)]) that for $k=1, 2, \ldots, n$, the following probability evaluation holds: $$\mathbf{P}(Y_n = k) = \frac{A(n,k)}{n!},$$ where the sequence $A(n,k)$ is defined for $1\leq k \leq n$ by $$A(n,k) := \binom{n-1}{k-1} A(k-1),$$ with the sequence $A(n)$ being given by $$\begin{aligned} A(n) := \frac{(n+2)!}{n+1}\sum_{i=0}^{n+2}\frac{(-1)^i}{i!}.\end{aligned}$$ Hence, summing $\mathbf{P}(Y_n = k)$ over $k$ gives $$\label{sum=1} \sum_{k=1}^n \frac{A(n,k)}{n!} =1.$$ Meanwhile, for $1 \leq k \leq n$, let $\widetilde{X}_k$ be independent random variables such that $\widetilde{X}_k \stackrel{d}{=}X_k$. That is, $\widetilde{X}_k$ is the number of permutations needed for the shuffling process to end with the initial state being a permutation of the set $[k]$. A crucial relation shown by Rao et al. is [@Rao p. 616, eq. (4)]: $$\begin{aligned} X_n = \begin{cases} 1, & \text{with probability $\mathbf{P}(Y_n = 1)$},\\ 1+\widetilde{X}_k, & \text{with probability $\mathbf{P}(Y_n = k)$ for $2\le k\le n$}. \end{cases}\end{aligned}$$ With this, it was shown in [@Rao] that the mean value of $X_n$, that is, $\mu_n=\mathbf{E}[X_n]$, satisfies the recurrence relation: $$\begin{aligned} \mu_n & = \mathbf{E}\big[\mathbf{E}[X_n\,|\,Y_n]\big]\\ &= \mathbf{P}(Y_n=1) + \sum_{k=2}^n \mathbf{P}(Y_n=k) \mathbf{E}\big[1+ \widetilde{X}_k\big]\\ & = \frac{A(n,1)}{n!}+\sum_{k=2}^{n} \frac{A(n,k)}{n!} (1+\mu_k)\\ & = 1 + \sum_{k=2}^{n} \frac{A(n,k)}{n!} \mu_k,\end{aligned}$$ where we have applied [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"}. Now to facilitate our analysis, we adopt the convention that $X_{1}=0$, a definite constant, so that the mean value $\mu_{1}=\mathbf{E}[X_{1}]=0$, as there is no more shuffling needed.
Then the above recurrence for $\mu_n$ becomes $$\label{rec-1} \mu_n = 1 + \sum_{k=1}^{n} \frac{A(n,k)}{n!} \mu_k,$$ with initial values $\mu_1=0$ and $\mu_2=2$. In general, if we assume that $p(x)$ is an arbitrary polynomial in $x$, then $$\begin{aligned} \label{eq:E-poly} \mathbf{E}\big[p(X_n)\big] = \sum_{k=1}^n \mathbf{P}(Y_n=k)\mathbf{E}\big[p(1+X_k)\big] = \sum_{k=1}^n \frac{A(n,k)}{n!}\cdot \mathbf{E}\big[p(1+X_k)\big].\end{aligned}$$ This relation will serve as a key in our subsequent evaluation of the central moments of $X_n$. Here, we offer a historical remark on the sequence $A(n)$, which is registered as sequence [A000255]{.sans-serif} in the On-Line Encyclopedia of Integer Sequences [@OEIS]. This sequence was first introduced by Euler [@Eul1782 p. 235, Sect. 152] in his study of Latin squares, and Euler interpreted that this sequence enumerates "fixed-point-free permutations beginning with $2$." For modern studies of the numbers $A(n)$ and $A(n,k)$, see, for example, Kreweras [@Kreweras] and Myers [@Myers]. ## Properties of the probability sequence In this subsection, we take a deeper look at the double sequence $A(n,k)$. To facilitate our investigation of $A(n,k)$, an explicit formula for its generating function becomes much needed. We start by noting that the following exponential generating function of the sequence $A(n)$ was already derived in [@Kreweras]. However, for the sake of completeness, we sketch a proof. **Lemma 3**. *We have $$\label{eq:A-egf} \sum_{n=0}^{\infty} \frac{A(n)}{n!}x^n = \frac{e^{-x}}{(1-x)^2}.$$* *Proof.* It is known from [@Cha2002 p. 178, eq. (5.9)] that $A(n)$ satisfies the recurrence relation: $$\begin{aligned} \label{eq:A-rec} A(n) = n A(n-1)+(n-1) A(n-2), \end{aligned}$$ with initial values $A(0) = A(1) =1$. Let $$\begin{aligned} F(x):= \sum_{n=0}^\infty A(n)\frac{x^n}{n!}. 
\end{aligned}$$ Now we multiply by $\frac{x^n}{n!}$ on both sides of [\[eq:A-rec\]](#eq:A-rec){reference-type="eqref" reference="eq:A-rec"} and then sum over $n\ge 2$. Thus, $$\begin{aligned} \sum_{n=2}^\infty A(n) \frac{x^n}{n!} = \sum_{n=2}^\infty A(n-1) \frac{x^n}{(n-1)!} + \sum_{n=2}^\infty A(n-2) \frac{x^n}{(n-2)!}\cdot\frac{1}{n}, \end{aligned}$$ namely, $$\begin{aligned} F(x)-\big(1+x\big) = x \big(F(x)-1\big) + \sum_{n=2}^\infty A(n-2) \frac{x^n}{(n-2)!}\cdot \frac{1}{n}. \end{aligned}$$ Differentiating both sides with respect to $x$ then gives $$\begin{aligned} F'(x) - 1 = F(x)-1 + xF'(x) + xF(x). \end{aligned}$$ Solving the above ODE under the initial condition $F(0)=1$, we arrive at the claimed relation. ◻ Now we are ready to provide a bivariate generating function identity for $A(n,k)$. **Theorem 4**. *We have $$\begin{aligned} \label{eq:A-bivariate-gf} \sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{A(n,k)}{n!}z^k\right) nx^n = \frac{xze^{x(1-z)}}{(1-xz)^2}. \end{aligned}$$* *Proof.* We make use of [\[eq:A-egf\]](#eq:A-egf){reference-type="eqref" reference="eq:A-egf"} to get $$\begin{aligned} \frac{xze^{x(1-z)}}{(1-xz)^2} &= e^x\cdot \frac{xze^{-xz}}{(1-xz)^2}\\ &=\left(\sum_{j=0}^\infty \frac{x^j}{j!}\right) \left(\sum_{\ell=0}^\infty \frac{A(\ell)}{\ell!} x^{\ell+1}z^{\ell+1}\right)\\ &= \sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{1}{(n-k)!} \frac{A(k-1)}{(k-1)!} z^k\right) x^n\\ &= \sum_{n=1}^\infty \left(\sum_{k=1}^n \binom{n-1}{k-1} \frac{A(k-1)}{n!} z^k\right) nx^n, \end{aligned}$$ wherein the third equality follows by performing the Cauchy product with respect to $x$. Finally, the claimed relation holds by recalling that $A(n,k) = \binom{n-1}{k-1}A(k-1)$. ◻ In [\[eq:A-bivariate-gf\]](#eq:A-bivariate-gf){reference-type="eqref" reference="eq:A-bivariate-gf"}, by expanding $e^{x(1-z)}$ and $xz/(1-xz)^2$ as a series in $x$, respectively, the following relation is clear. **Corollary 5**.
*For $n\ge 1$, $$\begin{aligned} \label{eq:A-z-gf} \sum_{k=1}^n \frac{A(n,k)}{n!}z^k = \sum_{m=0}^{n-1}\frac{n-m}{n\cdot m!}z^{n-m}(1-z)^m. \end{aligned}$$* **Remark 6**. Taking $z=1$ in [\[eq:A-z-gf\]](#eq:A-z-gf){reference-type="eqref" reference="eq:A-z-gf"} gives an alternative proof of [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"}. We may further deduce from [\[eq:A-z-gf\]](#eq:A-z-gf){reference-type="eqref" reference="eq:A-z-gf"} the following evaluation: **Theorem 7**. *Let $\ell$ be a nonnegative integer. Then for $n\ge \ell$, $$\begin{aligned} \label{eq:A-(n-k)u} \sum_{k=1}^n \frac{A(n,k)}{n!} (n-k)^\ell = B_\ell - (B_{\ell+1}-B_\ell)\cdot \frac{1}{n}, \end{aligned}$$ where $B_\ell$ is the $\ell$-th Bell number defined by the exponential generating function $$\begin{aligned} \sum_{\ell= 0}^\infty B_\ell \frac{x^\ell}{\ell!} := e^{e^x-1}. \end{aligned}$$* *Proof.* The $\ell=0$ case is exactly [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"}. Throughout, we assume that $\ell\ge 1$. In light of [\[eq:A-z-gf\]](#eq:A-z-gf){reference-type="eqref" reference="eq:A-z-gf"}, it is plain that $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}z^{n-k} = \sum_{m=0}^{n-1}\frac{n-m}{n\cdot m!}(z-1)^m. \end{aligned}$$ Thus, $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!} (n-k)^\ell = \sum_{m=0}^{n-1}\frac{n-m}{n\cdot m!} \left[\left(z\frac{d}{dz}\right)^\ell (z-1)^m\right]_{z=1}, \end{aligned}$$ where $(z\frac{d}{dz})^\ell$ means applying $\ell$ times the operator $z\frac{d}{dz}$. Note that when $m>\ell$, we always have a factor $(z-1)$ in $(z\frac{d}{dz})^\ell (z-1)^m$, thereby implying that this quantity vanishes when taking $z=1$. 
Hence, $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!} (n-k)^\ell &= \sum_{m=0}^{\min\{n-1,\ell\}}\frac{n-m}{n\cdot m!} \left[\left(z\frac{d}{dz}\right)^\ell (z-1)^m\right]_{z=1}\\ &= \sum_{m=0}^{\min\{n-1,\ell\}}\frac{n-m}{n\cdot m!} \left[\left(z\frac{d}{dz}\right)^\ell \sum_{i=0}^m \binom{m}{i}(-1)^{m-i}z^i\right]_{z=1}\\ &= \sum_{m=0}^{\min\{n-1,\ell\}}\frac{n-m}{n\cdot m!} \sum_{i=0}^m \binom{m}{i}(-1)^{m-i} i^\ell\\ &= \sum_{m=0}^{\min\{n-1,\ell\}}\frac{n-m}{n} {\ell \brace m}, \end{aligned}$$ where ${\ell\brace m}$ are the *Stirling numbers of the second kind*. Therefore, for $n\ge \ell$, $$\begin{aligned} \label{eq:A-(n-k)u-Stirling} \sum_{k=1}^n \frac{A(n,k)}{n!} (n-k)^\ell = \sum_{m=0}^{\ell}\frac{n-m}{n} {\ell\brace m}. \end{aligned}$$ Now it is known from *Dobiński's formula* [@Com1974 p. 210, eq. (4a)], with ${\ell\brace 0} = 0$ for positive integers $\ell$ in mind, that $$\begin{aligned} \label{eq:Dob} \sum_{m=0}^{\ell} {\ell\brace m} = B_\ell. \end{aligned}$$ Meanwhile, we recall the following recurrence for the Stirling numbers of the second kind [@Wil2006 p. 20, eq. (1.34)]: $$\begin{aligned} m{\ell\brace m} = {\ell+1\brace m}-{\ell\brace m-1}, \end{aligned}$$ with the convention that ${\ell\brace m} = 0$ whenever $m<0$. Hence, by also recalling that ${\ell+1\brace \ell+1} ={\ell\brace \ell} = 1$, we have $$\begin{aligned} \label{eq:Dob*m} \sum_{m=0}^{\ell} m{\ell\brace m} = \sum_{m=0}^{\ell+1} {\ell+1\brace m} - \sum_{m=0}^{\ell} {\ell\brace m} = B_{\ell+1}-B_{\ell}, \end{aligned}$$ where Dobiński's formula is applied again. Finally, the desired relation [\[eq:A-(n-k)u\]](#eq:A-(n-k)u){reference-type="eqref" reference="eq:A-(n-k)u"} follows by substituting [\[eq:Dob\]](#eq:Dob){reference-type="eqref" reference="eq:Dob"} and [\[eq:Dob\*m\]](#eq:Dob*m){reference-type="eqref" reference="eq:Dob*m"} into [\[eq:A-(n-k)u-Stirling\]](#eq:A-(n-k)u-Stirling){reference-type="eqref" reference="eq:A-(n-k)u-Stirling"}. 
◻ The identity [\[eq:A-(n-k)u\]](#eq:A-(n-k)u){reference-type="eqref" reference="eq:A-(n-k)u"} gives us the following estimate: **Corollary 8**. *Let $\ell$ be a nonnegative integer and let $s$ be an arbitrary real number. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:Q-s-bound} \sum_{k=1}^n \frac{A(n,k)}{n!}(n-k)^\ell\, k^s = O(n^{s}). \end{aligned}$$ Here the asymptotic relation depends only on $\ell$ and $s$.* In what follows, an important recipe that will be repeatedly utilized is the *Abel summation formula* [@Ten2015 p. 3, Theorem 0.1]: **Lemma 9** (Abel summation formula). *Let $\{u_n\}_{n\ge 1}$ and $\{v_n\}_{n\ge 1}$ be sequences of complex numbers. Then for any $N\ge 1$, $$\begin{aligned} \sum_{n=1}^N u_n v_n = U(N) v_{N+1} + \sum_{n=1}^N U(n) \big(v_n-v_{n+1}\big), \end{aligned}$$ where $U(n):=\sum_{k=1}^n u_k$.* Now we are ready to prove Corollary [Corollary 8](#coro:Q-s-bound){reference-type="ref" reference="coro:Q-s-bound"}. *Proof of Corollary [Corollary 8](#coro:Q-s-bound){reference-type="ref" reference="coro:Q-s-bound"}.* Note that the summand with $k=n$ in [\[eq:Q-s-bound\]](#eq:Q-s-bound){reference-type="eqref" reference="eq:Q-s-bound"} is $0$ when $\ell>0$, and $O(n^{s})$ when $\ell=0$ since $0\le \frac{A(n,n)}{n!}\le 1$ in view of [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"}. Hence, it is sufficient to show that $$\begin{aligned} \label{eq:Q-s-bound-new} \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\, k^s = O(n^{s}). \end{aligned}$$ The cases where $s\ge 0$ are simpler as we directly have $$\begin{aligned} \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\, k^s \le n^s\cdot \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell = O(n^s), \end{aligned}$$ where [\[eq:A-(n-k)u\]](#eq:A-(n-k)u){reference-type="eqref" reference="eq:A-(n-k)u"} has been applied to derive that the summation in the middle part of the above is $O(1)$. In what follows, we always assume that $s<0$. We begin with the cases where $s$ is an integer. 
Define the following partial sums with $1\le t\le n$: $$\begin{aligned} Q_n(t)=Q_n^{(\ell,s)}(t):= \sum_{k=1}^t \frac{A(n,k)}{n!}(n-k)^{\ell-s}. \end{aligned}$$ Recalling that [\[eq:A-(n-k)u\]](#eq:A-(n-k)u){reference-type="eqref" reference="eq:A-(n-k)u"} guarantees that $Q_n(n)$ converges as $n$ goes to infinity, we may find a constant $K$, depending on $\ell$ and $s$, such that for every $n\ge 1$, $$\begin{aligned} 0\le Q_n(1)\le Q_n(2)\le \cdots \le Q_n(n) \le K. \end{aligned}$$ Now we apply the Abel summation formula to obtain $$\begin{aligned} &\quad\; \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\, k^s\\ &= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^{\ell-s}\cdot k^s (n-k)^s\\ &= Q_n(n-1)\cdot (n-1)^s + \sum_{t=1}^{n-2} Q_n(t)\cdot \big(t^s (n-t)^s-(t+1)^s (n-t-1)^s\big). \end{aligned}$$ Note that the sequence $\{t (n-t)\}_{1\le t\le (n-1)}$ is unimodal. That is, if we let $T=\left\lfloor\frac{n}{2}\right\rfloor$ where the *floor function* $\lfloor x\rfloor$ denotes the greatest integer not exceeding $x$, then $$\begin{aligned} 1\cdot (n-1)\le \cdots \le T\cdot (n-T)\ge \cdots \ge (n-1)\cdot \big(n-(n-1)\big). \end{aligned}$$ We then reformulate the above summation as $$\begin{aligned} \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\cdot k^s &= Q_n(n-1)\cdot (n-1)^s\\ &\quad + \sum_{t=1}^{T-1} Q_n(t)\cdot \big(t^s (n-t)^s-(t+1)^s (n-t-1)^s\big)\\ &\quad+ \sum_{t=T}^{n-2} Q_n(t)\cdot \big(t^s (n-t)^s-(t+1)^s (n-t-1)^s\big). \end{aligned}$$ First, since $Q_n(n-1)\le K$, we have the estimate $$\begin{aligned} Q_n(n-1)\cdot (n-1)^s = O(n^{s}). \end{aligned}$$ Next, bearing in mind that $s<0$, we have $$\begin{aligned} t^s (n-t)^s-(t+1)^s (n-t-1)^s \begin{cases} \ge 0, & \text{if $1\le t\le T-1$},\\ \le 0, & \text{if $T\le t\le n-2$}.
\end{cases} \end{aligned}$$ Thus, $$\begin{aligned} 0\le \sum_{t=1}^{T-1} Q_n(t)\cdot \big(t^s (n-t)^s-(t+1)^s (n-t-1)^s\big)\le K\cdot \big((n-1)^s - T^s (n-T)^s\big), \end{aligned}$$ and $$\begin{aligned} 0\ge \sum_{t=T}^{n-2} Q_n(t)\cdot \big(t^s (n-t)^s-(t+1)^s (n-t-1)^s\big)\ge K\cdot \big(T^s (n-T)^s - (n-1)^s\big), \end{aligned}$$ while it is plain that $$\begin{aligned} K\cdot \big((n-1)^s - T^s (n-T)^s\big) = O(n^{s}). \end{aligned}$$ Hence, [\[eq:Q-s-bound-new\]](#eq:Q-s-bound-new){reference-type="eqref" reference="eq:Q-s-bound-new"} is valid for integral $s<0$. If $s<0$ is not an integer, then we write $s_0=\lfloor s\rfloor$ so that $s-s_0\ge 0$. It follows that $$\begin{aligned} \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\, k^s &= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\, k^{s_0}\cdot k^{s-s_0}\\ &\le n^{s-s_0} \cdot \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}(n-k)^\ell\, k^{s_0}. \end{aligned}$$ We have shown that the last summation is $O(n^{s_0})$. The required estimate immediately follows. ◻ From now on, for any given $n\ge 1$, we define the partial sums $S_n(t)$ with $1\le t\le n$: $$\begin{aligned} S_n(t):= \sum_{k=1}^t \frac{A(n,k)}{n!}.\end{aligned}$$ In light of [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"}, we particularly have $$\label{eq:Snn=1} S_n(n) =1.$$ Meanwhile, for $S_n(n-1)$, we have an asymptotic estimate as follows: **Lemma 10**. *As $n\to +\infty$, $$\begin{aligned} \label{eq:Sn(n-1)-asymp} S_n(n-1)\sim 1-e^{-1}. \end{aligned}$$* *Proof.* We invoke the following standard result [@Cha2002 p. 178, Remark 5.4 and p. 184, Exercise 5.7.1]: $$\begin{aligned} \label{eq:A(n)-exact} A(n) = \left\lfloor \frac{(n+2)\cdot n!}{e}+\frac{1}{2}\right\rfloor. \end{aligned}$$ Recalling from [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"} that $S_n(n-1) = 1 - \frac{A(n-1)}{n!}$, we immediately arrive at the required relation. ◻ **Remark 11**. 
By performing a more delicate computation on the exact formula [\[eq:A(n)-exact\]](#eq:A(n)-exact){reference-type="eqref" reference="eq:A(n)-exact"} for $A(n)$, we further have explicit bounds: *For $n\ge 200$*, $$\begin{aligned} \label{eq:Sn(n-1)-bound} 0.63 < S_n(n-1)< 0.64. \end{aligned}$$ We end this section by deriving some useful identities for $S_n(t)$. **Lemma 12**. *For $n\ge 1$, $$\begin{aligned} \label{eq:Sn-gf} \sum_{t=1}^n S_n(t)z^t = z^n + \sum_{m=1}^{n-1}\frac{n-m}{n\cdot m!}z^{n-m}(1-z)^{m-1}. \end{aligned}$$* *Proof.* We have, by interchanging the order of summations, that $$\begin{aligned} \sum_{t=1}^n S_n(t)z^t = \sum_{t=1}^n \sum_{k=1}^t \frac{A(n,k)}{n!}z^t = \sum_{k=1}^n \frac{A(n,k)}{n!} \sum_{t=k}^n z^t = \sum_{k=1}^n \frac{A(n,k)}{n!} \frac{z^k-z^{n+1}}{1-z}. \end{aligned}$$ Invoking Corollary [Corollary 5](#coro:A-z-gf){reference-type="ref" reference="coro:A-z-gf"} gives the desired result. ◻ We apply $\ell$ times the operator $z\frac{d}{dz}$ to [\[eq:Sn-gf\]](#eq:Sn-gf){reference-type="eqref" reference="eq:Sn-gf"} and then take $z=1$ to obtain the following estimate: **Corollary 13**. *Let $\ell$ be a nonnegative integer. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:sum-S-1} \sum_{t=1}^n S_n(t)\cdot t^\ell = 2n^{\ell} +O(n^{\ell-1}), \end{aligned}$$ where the asymptotic relation depends only on $\ell$.* Meanwhile, by applying the operator $[\int z^{-1}\bullet dz]$ zero times, once and twice to [\[eq:Sn-gf\]](#eq:Sn-gf){reference-type="eqref" reference="eq:Sn-gf"} and then letting $z=1$, respectively, we obtain the following relations: **Corollary 14**. 
*For $n\ge 1$, $$\begin{aligned} \sum_{t=1}^n S_n(t) &= 2-\frac{1}{n},\label{eq:sum-S-0}\\ \sum_{t=1}^n S_n(t)\cdot \frac{1}{t} &= \frac{1}{n} + \sum_{m=1}^{n-1}\frac{(n-m)!}{m\cdot n!},\label{eq:sum-S--1}\\ \sum_{t=1}^n S_n(t)\cdot \frac{1}{t^2} &= \frac{1}{n^2} + \sum_{m=1}^{n-1}\frac{(n-m-1)!}{n\cdot n!} + \sum_{m=1}^{n-1}\frac{(n-m)!}{m\cdot n!}\big(\mathcal{H}_n - \mathcal{H}_{n-m}\big),\label{eq:sum-S--2} \end{aligned}$$ where $\mathcal{H}_n=\sum_{k=1}^n \frac{1}{k}$ is the $n$-th harmonic number.* # Asymptotics for a generic recurrence {#sec:asymp-generic} Recall that the recurrence [\[rec-1\]](#rec-1){reference-type="eqref" reference="rec-1"} for the mean values $\mu_n$ can be reformulated as $$\begin{aligned} \label{eq:mu-original} \left(1-\frac{A(n,n)}{n!}\right)\mu_n = 1+\sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\mu_k.\end{aligned}$$ However, as hinted by [\[eq:E-poly\]](#eq:E-poly){reference-type="eqref" reference="eq:E-poly"}, the recurrence relations [\[eq:mth-M\]](#eq:mth-M){reference-type="eqref" reference="eq:mth-M"} for the central moments of $X_n$ have a more complicated form in which the $1$ on the right-hand side of [\[eq:mu-original\]](#eq:mu-original){reference-type="eqref" reference="eq:mu-original"} is replaced with much more sophisticated terms. This motivates the investigation of the asymptotics for sequences satisfying a generic recurrence. **Definition 15**. Let $\{\lambda_n\}_{n\ge 1}$ be a complex sequence such that $\lambda_n\sim Mn^L$ as $n\to +\infty$, where $L$ is a fixed nonnegative integer, and $M$ is a fixed complex number, which, in addition, is nonzero when $L$ is nonzero. Write $$\begin{aligned} \delta_n:=\lambda_n-Mn^L. 
\end{aligned}$$ We define a complex sequence $\{\xi_n\}_{n\ge 1}$ with given initial values $\xi_1,\ldots,\xi_{n_0}$ for a certain $n_0\ge 2$ by the recurrence $$\begin{aligned} \label{eq:rec:xi} \left(1-\frac{A(n,n)}{n!}\right)\xi_n = \lambda_n+\sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\xi_k, \end{aligned}$$ for every $n>n_0$. We are interested in the growth rate of $\xi_n$. **Theorem 16**. *As $n\to+\infty$, $$\begin{aligned} \label{eq:xi-asymp} \xi_n\sim \frac{M}{L+1}\,n^{L+1}, \end{aligned}$$ where the asymptotic relation depends only on $L$ and $M$. More precisely, letting $$\begin{aligned} \label{eq:eta} \eta_n:=\xi_n-\frac{M}{L+1}\,n^{L+1}, \end{aligned}$$ there exists a positive constant $C$, depending only on $L$ and $M$, such that for all $n\ge 1$, $$\begin{aligned} \label{eq:eta-bound} \big|\eta_n\big| < C \sum_{j=1}^n \left(\big|\delta_j\big|+j^{L-1}\right). \end{aligned}$$* *Proof.* It is clear that [\[eq:eta-bound\]](#eq:eta-bound){reference-type="eqref" reference="eq:eta-bound"} implies [\[eq:xi-asymp\]](#eq:xi-asymp){reference-type="eqref" reference="eq:xi-asymp"}. Throughout, we assume that $n$ is such that $n>n_0$. In [\[eq:rec:xi\]](#eq:rec:xi){reference-type="eqref" reference="eq:rec:xi"}, we first apply [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"} to rewrite the multiplier on the left-hand side as $$\begin{aligned} 1-\frac{A(n,n)}{n!} = S_n(n-1), \end{aligned}$$ and then replace all $\xi_k$ on the right-hand side according to [\[eq:eta\]](#eq:eta){reference-type="eqref" reference="eq:eta"}. Thus, $$\begin{aligned} \label{eq:xi-Sigma} S_n(n-1)\cdot \xi_n = \lambda_n+ \Sigma_1+\Sigma_2, \end{aligned}$$ where $$\begin{aligned} \Sigma_1 &:= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\frac{M}{L+1}\,k^{L+1},\\ \Sigma_2 &:= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\eta_k. 
\end{aligned}$$ Now we evaluate $\Sigma_1$ by means of the Abel summation formula: $$\begin{aligned} \Sigma_1 &= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\frac{M}{L+1}\,k^{L+1}\\ &= S_n(n-1) \cdot \frac{M}{L+1}\,n^{L+1} + \sum_{k=1}^{n-1} S_n(k)\cdot \frac{M}{L+1} \big(k^{L+1}-(k+1)^{L+1}\big)\\ &= S_n(n-1) \cdot \frac{M}{L+1}\,n^{L+1} - \sum_{k=1}^{n-1} S_n(k)\cdot \big(M k^L +O(k^{L-1})\big), \end{aligned}$$ where the big-$O$ term further vanishes in the case where $L=0$. Recalling that $S_n(n)=1$ from [\[eq:Snn=1\]](#eq:Snn=1){reference-type="eqref" reference="eq:Snn=1"}, we then apply [\[eq:sum-S-1\]](#eq:sum-S-1){reference-type="eqref" reference="eq:sum-S-1"} and obtain that $$\begin{aligned} \sum_{k=1}^{n-1} S_n(k)\cdot \big(M k^L +O(k^{L-1})\big) = Mn^L + O(n^{L-1}). \end{aligned}$$ Thus, $$\begin{aligned} \label{eq:Sigma1} \Sigma_1 = S_n(n-1) \cdot \frac{M}{L+1}\,n^{L+1} - M n^L + O(n^{L-1}). \end{aligned}$$ Substituting [\[eq:Sigma1\]](#eq:Sigma1){reference-type="eqref" reference="eq:Sigma1"} into [\[eq:xi-Sigma\]](#eq:xi-Sigma){reference-type="eqref" reference="eq:xi-Sigma"} implies that $$\begin{aligned} S_n(n-1)\cdot \xi_n = \lambda_n+S_n(n-1) \cdot \frac{M}{L+1}\,n^{L+1} - Mn^L + \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\eta_k + O(n^{L-1}), \end{aligned}$$ which further gives us $$\begin{aligned} \label{eq:e-new} S_n(n-1)\cdot \eta_{n} = \delta_n + \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\eta_k + O(n^{L-1}), \end{aligned}$$ where we have used the facts that $\xi_n-\frac{M}{L+1}\,n^{L+1} = \eta_n$ and that $\lambda_n-Mn^L = \delta_n$. Now we find a constant $C$ such that 1. $C\ge 2$; 2. the inequality [\[eq:eta-bound\]](#eq:eta-bound){reference-type="eqref" reference="eq:eta-bound"} holds for every $n$ with $1\le n\le n_0$; 3. 
the big-$O$ term in [\[eq:e-new\]](#eq:e-new){reference-type="eqref" reference="eq:e-new"} is bounded from above by $\frac{1}{2}C n^{L-1}$, that is, $$\begin{aligned} \label{eq:xi-simple-error-O} \big|O(n^{L-1})\big| < \frac{1}{2}C n^{L-1}, \end{aligned}$$ for every $n\ge 1$ (this is admissible since the big-$O$ term depends only on $L$ and $M$). We now prove [\[eq:eta-bound\]](#eq:eta-bound){reference-type="eqref" reference="eq:eta-bound"} by induction: fix a certain $n>n_0$ and assume that for each $k$ with $1\le k\le n-1$, $$\begin{aligned} \big|\eta_k\big| < C \sum_{j=1}^{k} \left(\big|\delta_j\big|+j^{L-1}\right)\le C \sum_{j=1}^{n-1} \left(\big|\delta_j\big|+j^{L-1}\right). \end{aligned}$$ Thus, $$\begin{aligned} \left|\sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\eta_k\right| < C \sum_{j=1}^{n-1} \left(\big|\delta_j\big|+j^{L-1}\right)\cdot \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} = C \sum_{j=1}^{n-1} \left(\big|\delta_j\big|+j^{L-1}\right)\cdot S_n(n-1). \end{aligned}$$ It follows from [\[eq:e-new\]](#eq:e-new){reference-type="eqref" reference="eq:e-new"}, with [\[eq:xi-simple-error-O\]](#eq:xi-simple-error-O){reference-type="eqref" reference="eq:xi-simple-error-O"} recalled, that $$\begin{aligned} \big|\eta_n\big| < \left(\big|\delta_n\big| + \frac{1}{2}Cn^{L-1}\right) \frac{1}{S_n(n-1)} + C \sum_{j=1}^{n-1} \left(\big|\delta_j\big|+j^{L-1}\right). \end{aligned}$$ Note also that, in view of [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"}, $$\begin{aligned} \frac{1}{S_n(n-1)} = \frac{1}{1- \frac{A(n,n)}{n!}} \le \frac{1}{1-\frac{A(3,3)}{3!}} = 2, \end{aligned}$$ where we have used the assumption that $n>n_0\ge 2$. Then, it is immediately clear that [\[eq:eta-bound\]](#eq:eta-bound){reference-type="eqref" reference="eq:eta-bound"} holds for $n$ since we have assumed that $C\ge 2$. ◻ We may further elaborate on the error term $\eta_n$ when $\delta_n$ is suitably bounded. **Theorem 17**. 
*With the notation in Theorem [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"}, we further have* 1. *In the case where $L=0$, if we further require that $\delta_n = O(n^{-1})$ as $n\to +\infty$, then $$\begin{aligned} \label{eq:eta-diff-bound} \big|\eta_n-\eta_{n-1}\big| = O(n^{-1}). \end{aligned}$$* 2. *In the case where $L\ge 1$, if we further require that $\delta_n = O(n^{L-1}\log n)$ as $n\to +\infty$, then $$\begin{aligned} \label{eq:eta-diff-bound-2} \big|\eta_n-\eta_{n-1}\big| = O(n^{L-1}\log n). \end{aligned}$$* *The above asymptotic relations depend only on $L$ and $M$.* We first need to apply the Abel summation formula to further reformulate $\Sigma_2$ in [\[eq:xi-Sigma\]](#eq:xi-Sigma){reference-type="eqref" reference="eq:xi-Sigma"}: $$\begin{aligned} \label{eq:Sigma2} \Sigma_2 &= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!}\eta_k\notag\\ &= S_n(n-1) \cdot \eta_n + \sum_{k=1}^{n-1} S_n(k)\cdot \big(\eta_k-\eta_{k+1}\big)\notag\\ &= S_n(n-1) \cdot \eta_{n-1} + \sum_{k=1}^{n-2} S_n(k)\cdot \big(\eta_k-\eta_{k+1}\big).\end{aligned}$$ Substituting [\[eq:Sigma2\]](#eq:Sigma2){reference-type="eqref" reference="eq:Sigma2"} into [\[eq:e-new\]](#eq:e-new){reference-type="eqref" reference="eq:e-new"}, we have $$\begin{aligned} \label{eq:e-diff-new} S_n(n-1)\cdot \big(\eta_{n}-\eta_{n-1}\big) = \delta_n + \sum_{k=1}^{n-2} S_n(k)\cdot \big(\eta_k-\eta_{k+1}\big) + O(n^{L-1}).\end{aligned}$$ Now we prove the two cases separately. *Proof of Part (i).* We choose a constant $c$, depending only on $M$, to be such that 1. 
the quantity $\delta_n + O(n^{-1})$ in [\[eq:e-diff-new\]](#eq:e-diff-new){reference-type="eqref" reference="eq:e-diff-new"} is bounded above by $\frac{0.23c}{n}$, that is, $$\begin{aligned} \label{eq:delta-Part-i} \big|\delta_n + O(n^{-1})\big| < \frac{0.23c}{n}, \end{aligned}$$ for every $n\ge 1$ (this is admissible since the big-$O$ term in [\[eq:e-diff-new\]](#eq:e-diff-new){reference-type="eqref" reference="eq:e-diff-new"} depends only on $M$ while we have also assumed that $\delta_n = O(n^{-1})$; here note that $L=0$); 2. the inequality $$\begin{aligned} \label{eq:eta-diff-bound-new} \big|\eta_n-\eta_{n-1}\big| < \frac{c}{n} \end{aligned}$$ holds for every $n$ with $2\le n\le \max\{200,n_0\}$. Inductively, we assume that [\[eq:eta-diff-bound-new\]](#eq:eta-diff-bound-new){reference-type="eqref" reference="eq:eta-diff-bound-new"} holds for each of $2,\ldots,n-1$ with a certain $n>\max\{200,n_0\}$, and we prove [\[eq:eta-diff-bound-new\]](#eq:eta-diff-bound-new){reference-type="eqref" reference="eq:eta-diff-bound-new"} for $n$. Note that this inductive hypothesis implies that $$\begin{aligned} \left|\sum_{k=1}^{n-2} S_n(k)\cdot \big(\eta_k-\eta_{k+1}\big)\right| < \sum_{k=1}^{n-2} S_n(k)\cdot \frac{c}{k+1}< \sum_{k=1}^{n-2} S_n(k)\cdot \frac{c}{k}. \end{aligned}$$ Meanwhile, it is known from [\[eq:sum-S\--1\]](#eq:sum-S--1){reference-type="eqref" reference="eq:sum-S--1"} that $$\begin{aligned} \sum_{k=1}^{n} S_n(k)\cdot \frac{1}{k} \sim \frac{2}{n}. \end{aligned}$$ Performing some routine computations, we explicitly have, with recourse to [\[eq:Snn=1\]](#eq:Snn=1){reference-type="eqref" reference="eq:Snn=1"} and [\[eq:Sn(n-1)-asymp\]](#eq:Sn(n-1)-asymp){reference-type="eqref" reference="eq:Sn(n-1)-asymp"}, that whenever $n>\max\{200,n_0\}$, $$\begin{aligned} \sum_{k=1}^{n-2} S_n(k)\cdot \frac{1}{k} < \frac{0.4}{n}. 
\end{aligned}$$ Since $S_n(n-1)> 0.63$ by [\[eq:Sn(n-1)-bound\]](#eq:Sn(n-1)-bound){reference-type="eqref" reference="eq:Sn(n-1)-bound"}, we finally deduce from [\[eq:e-diff-new\]](#eq:e-diff-new){reference-type="eqref" reference="eq:e-diff-new"}, with [\[eq:delta-Part-i\]](#eq:delta-Part-i){reference-type="eqref" reference="eq:delta-Part-i"} in mind, that $$\begin{aligned} \big|\eta_{n}-\eta_{n-1}\big| < \left(\frac{0.23c}{n} + \frac{0.4c}{n}\right) \frac{1}{S_n(n-1)} <\frac{0.63c}{n}\cdot \frac{1}{0.63} =\frac{c}{n}. \end{aligned}$$ Now we have proved that [\[eq:eta-diff-bound-new\]](#eq:eta-diff-bound-new){reference-type="eqref" reference="eq:eta-diff-bound-new"} holds for every $n\ge 2$, and it is immediately clear that [\[eq:eta-diff-bound\]](#eq:eta-diff-bound){reference-type="eqref" reference="eq:eta-diff-bound"} is valid. ◻ *Proof of Part (ii).* We choose a constant $c$, depending only on $L$ and $M$, to be such that 1. the quantity $\delta_n + O(n^{L-1})$ in [\[eq:e-diff-new\]](#eq:e-diff-new){reference-type="eqref" reference="eq:e-diff-new"} is bounded above by $0.26c\cdot n^{L-1}\log n$, that is, $$\begin{aligned} \label{eq:delta-Part-ii} \big|\delta_n + O(n^{L-1})\big| < 0.26c\cdot n^{L-1}\log n, \end{aligned}$$ for every $n\ge 1$ (this is admissible since the big-$O$ term in [\[eq:e-diff-new\]](#eq:e-diff-new){reference-type="eqref" reference="eq:e-diff-new"} depends only on $M$ while we have also assumed that $\delta_n = O(n^{L-1}\log n)$); 2. the inequality $$\begin{aligned} \label{eq:eta-diff-bound-2-new} \big|\eta_n-\eta_{n-1}\big| < c\cdot n^{L-1}\log n \end{aligned}$$ holds for every $n$ with $2\le n\le \max\{200,n_0\}$. 
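Both parts of the proof lean on the explicit numerical bound from [\[eq:Sn(n-1)-bound\]](#eq:Sn(n-1)-bound){reference-type="eqref" reference="eq:Sn(n-1)-bound"}. As a sanity check, that bound can be reproduced in exact rational arithmetic from the formula [\[eq:A(n)-exact\]](#eq:A(n)-exact){reference-type="eqref" reference="eq:A(n)-exact"} together with the identity $S_n(n-1)=1-A(n-1)/n!$ used in the proof of Lemma 10; a minimal Python sketch (the function names and the truncation depth for $e^{-1}$ are our own heuristic choices):

```python
# Exact-rational check of the bound 0.63 < S_n(n-1) < 0.64 for a few n >= 200,
# using A(n) = floor((n+2) n!/e + 1/2) and S_n(n-1) = 1 - A(n-1)/n!.
from fractions import Fraction
from math import factorial, floor

def arrangements(n):
    """A(n) via the exact formula; 1/e is truncated far enough that the
    rounding below is unaffected (heuristic truncation depth)."""
    inv_e = sum(Fraction((-1) ** k, factorial(k)) for k in range(n + 25))
    return floor((n + 2) * factorial(n) * inv_e + Fraction(1, 2))

def S_top(n):
    """S_n(n-1) = 1 - A(n-1)/n! as an exact rational number."""
    return 1 - Fraction(arrangements(n - 1), factorial(n))

for n in (200, 500, 1000):
    assert Fraction(63, 100) < S_top(n) < Fraction(64, 100)

# The values also approach the limit 1 - 1/e = 0.632... from Lemma 10.
print(float(S_top(1000)))
```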
Inductively, we assume that [\[eq:eta-diff-bound-2-new\]](#eq:eta-diff-bound-2-new){reference-type="eqref" reference="eq:eta-diff-bound-2-new"} holds for each of $2,\ldots,n-1$ with a certain $n>\max\{200,n_0\}$, and we prove [\[eq:eta-diff-bound-2-new\]](#eq:eta-diff-bound-2-new){reference-type="eqref" reference="eq:eta-diff-bound-2-new"} for $n$. Note that this inductive hypothesis implies that $$\begin{aligned} \left|\sum_{k=1}^{n-2} S_n(k)\cdot \big(\eta_k-\eta_{k+1}\big)\right| &< \sum_{k=1}^{n-2} S_n(k)\cdot c\cdot (k+1)^{L-1}\log(k+1)\\ & \le c\cdot n^{L-1}\log n \cdot \sum_{k=1}^{n-2} S_n(k). \end{aligned}$$ Here we have used the fact that $n^{L-1}\log n$ is non-decreasing with respect to $n$ when $L\ge 1$. Meanwhile, it is known from [\[eq:sum-S-0\]](#eq:sum-S-0){reference-type="eqref" reference="eq:sum-S-0"} that $$\begin{aligned} \sum_{k=1}^{n} S_n(k) < 2. \end{aligned}$$ Recalling that $S_n(n)=1$ from [\[eq:Snn=1\]](#eq:Snn=1){reference-type="eqref" reference="eq:Snn=1"} and that $S_n(n-1)> 0.63$ from [\[eq:Sn(n-1)-bound\]](#eq:Sn(n-1)-bound){reference-type="eqref" reference="eq:Sn(n-1)-bound"}, we have $$\begin{aligned} \sum_{k=1}^{n-2} S_n(k) < 0.37. \end{aligned}$$ Finally, noting the bound given in [\[eq:delta-Part-ii\]](#eq:delta-Part-ii){reference-type="eqref" reference="eq:delta-Part-ii"}, it follows from [\[eq:e-diff-new\]](#eq:e-diff-new){reference-type="eqref" reference="eq:e-diff-new"} that $$\begin{aligned} \big|\eta_{n}-\eta_{n-1}\big| &< \big(0.26c+0.37c\big)\cdot n^{L-1}\log n \cdot \frac{1}{S_n(n-1)}\\ &< 0.63c \cdot n^{L-1}\log n \cdot \frac{1}{0.63} \\ &= c\cdot n^{L-1}\log n. \end{aligned}$$ Now we have proved that [\[eq:eta-diff-bound-2-new\]](#eq:eta-diff-bound-2-new){reference-type="eqref" reference="eq:eta-diff-bound-2-new"} holds for every $n\ge 2$, and it is immediately clear that [\[eq:eta-diff-bound-2\]](#eq:eta-diff-bound-2){reference-type="eqref" reference="eq:eta-diff-bound-2"} is valid. 
◻ # Mean value {#sec:mean-value} In this section, we analyze the behavior of the mean value $\mu_n$ of $X_n$ as $n\rightarrow+\infty$. ## Explicit formula for the mean value Recall that it was shown by Rao et al. [@Rao] that as $n\to+\infty$, $$\begin{aligned} \mu_n\sim n.\end{aligned}$$ Alternatively, this asymptotic relation is a direct consequence of Theorem [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"} by choosing $\lambda_n = 1$ for all $n\ge 1$, and then taking $\xi_n=\mu_n$. Now our objective is to elaborate on this asymptotic formula. **Theorem 18**. *We have $$\begin{aligned} \label{eq:mu-new-formula} \mu_n = n + \mathcal{H}_{n-1} + \varepsilon_n, \end{aligned}$$ where the limit $\lim_{n\to+\infty} \varepsilon_n$ exists. In particular, for $n\ge 2$, $$\begin{aligned} \label{eq:epsilon-diff} 0 < \varepsilon_n-\varepsilon_{n+1} < \frac{1}{n^2}. \end{aligned}$$* We proceed with an argument analogous to the one used in the proof of Theorem [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"} but with more delicacy. *Proof.* We begin by rewriting the recurrence [\[eq:mu-original\]](#eq:mu-original){reference-type="eqref" reference="eq:mu-original"} as $$\begin{aligned} \label{eq:mu-Sigma} S_n(n-1)\cdot \mu_n = 1+ \Sigma'_1+\Sigma'_2+\Sigma'_3, \end{aligned}$$ where $$\begin{aligned} \Sigma'_1 &:= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} k,\\ \Sigma'_2 &:= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} \mathcal{H}_{k-1},\\ \Sigma'_3 &:= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} \varepsilon_k. \end{aligned}$$ Now we evaluate each sum separately by the Abel summation formula. First, it is clear by [\[eq:sum-S-0\]](#eq:sum-S-0){reference-type="eqref" reference="eq:sum-S-0"} that $\Sigma'_1$ equals $$\begin{aligned} \label{eq:Sigma1-new} \Sigma'_1 = \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} k = S_n(n-1) \cdot n - \sum_{k=1}^{n-1} S_n(k) =S_n(n-1) \cdot n - 1 + \frac{1}{n}. 
\end{aligned}$$ For $\Sigma'_2$, we have, with [\[eq:sum-S\--1\]](#eq:sum-S--1){reference-type="eqref" reference="eq:sum-S--1"} utilized, that $$\begin{aligned} \label{eq:Sigma2-new} \Sigma'_2 &= \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} \mathcal{H}_{k-1}\notag\\ &= S_n(n-1) \cdot \mathcal{H}_{n-1} - \sum_{k=1}^{n-1} S_n(k)\cdot \frac{1}{k}\notag\\ &= S_n(n-1) \cdot \mathcal{H}_{n-1} - \sum_{m=1}^{n-1}\frac{(n-m)!}{m\cdot n!}. \end{aligned}$$ Finally, for $\Sigma'_3$, we find that $$\begin{aligned} \label{eq:Sigma3-new} \Sigma'_3 = \sum_{k=1}^{n-1} \frac{A(n,k)}{n!} \varepsilon_k= S_n(n-1) \cdot \varepsilon_{n-1} + \sum_{k=1}^{n-2} S_n(k)\cdot \big(\varepsilon_k-\varepsilon_{k+1}\big). \end{aligned}$$ Substituting [\[eq:Sigma1-new\]](#eq:Sigma1-new){reference-type="eqref" reference="eq:Sigma1-new"}, [\[eq:Sigma2-new\]](#eq:Sigma2-new){reference-type="eqref" reference="eq:Sigma2-new"} and [\[eq:Sigma3-new\]](#eq:Sigma3-new){reference-type="eqref" reference="eq:Sigma3-new"} into [\[eq:mu-Sigma\]](#eq:mu-Sigma){reference-type="eqref" reference="eq:mu-Sigma"}, we have $$\begin{aligned} \label{eq:epsilon-diff-level-2} S_n(n-1)\cdot \big(\varepsilon_{n-1}-\varepsilon_n\big) = \sum_{m=2}^{n-1}\frac{(n-m)!}{m\cdot n!} - \sum_{k=1}^{n-2} S_n(k)\cdot \big(\varepsilon_k-\varepsilon_{k+1}\big). \end{aligned}$$ It is plain that the following asymptotic relations hold: **(a)** $\displaystyle S_n(n-1)\sim 1-e^{-1}$; **(b)** $\displaystyle \sum_{m=2}^{n-1}\frac{(n-m)!}{m\cdot n!}\sim \frac{1}{2n^2}$; **(c)** $\displaystyle \sum_{t=1}^n S_n(t)\cdot \frac{1}{t^2}\sim \frac{2}{n^2}$. Here **(a)** is [\[eq:Sn(n-1)-asymp\]](#eq:Sn(n-1)-asymp){reference-type="eqref" reference="eq:Sn(n-1)-asymp"}. For **(b)**, we note that the left-hand side is dominated by the summand with $m=2$. For relation **(c)**, we shall recall [\[eq:sum-S\--2\]](#eq:sum-S--2){reference-type="eqref" reference="eq:sum-S--2"}. More precisely, we have explicit bounds: *For $n\ge 200$*, **(a')** $\displaystyle 0.63 < S_n(n-1)< 0.64$; **(b')** 
$\displaystyle \frac{0.49}{n^2} < \sum_{m=2}^{n-1}\frac{(n-m)!}{m\cdot n!}< \frac{0.51}{n^2}$. Here **(a')** was already given in [\[eq:Sn(n-1)-bound\]](#eq:Sn(n-1)-bound){reference-type="eqref" reference="eq:Sn(n-1)-bound"}, and **(b')** can be derived by performing some additional computations for the error terms. Meanwhile, we numerically verify that [\[eq:epsilon-diff\]](#eq:epsilon-diff){reference-type="eqref" reference="eq:epsilon-diff"} is valid whenever $2\le n\le 200$. So throughout we inductively assume [\[eq:epsilon-diff\]](#eq:epsilon-diff){reference-type="eqref" reference="eq:epsilon-diff"} for $2,\ldots, n-1$ with a certain $n> 200$, and we shall prove [\[eq:epsilon-diff\]](#eq:epsilon-diff){reference-type="eqref" reference="eq:epsilon-diff"} for $n$. Noting that $\varepsilon_1-\varepsilon_2 = (-1)-(-1)=0$, we have, with our inductive assumption in mind, that $$\begin{aligned} 0< \sum_{k=1}^{n-2} S_n(k) \big(\varepsilon_k-\varepsilon_{k+1}\big)< \sum_{k=1}^{n-2} S_n(k)\cdot \frac{1}{k^2}. \end{aligned}$$ Recall that $S_n(n)=1$ and that $S_n(n-1)\sim 1-e^{-1}$. Consulting the above asymptotic relation **(c)**, we carry out some extra computations and find that **(c')** $\displaystyle 0< \sum_{k=1}^{n-2} S_n(k) \big(\varepsilon_k-\varepsilon_{k+1}\big)<\frac{0.47}{n^2}$. Substituting the above inequalities **(b')** and **(c')** into [\[eq:epsilon-diff-level-2\]](#eq:epsilon-diff-level-2){reference-type="eqref" reference="eq:epsilon-diff-level-2"} gives $$\begin{aligned} \frac{0.02}{n^2} < S_n(n-1)\cdot \big(\varepsilon_{n-1}-\varepsilon_n\big) < \frac{0.51}{n^2}. \end{aligned}$$ Further applying the inequality **(a')** confirms the required bounds: $$\begin{aligned} 0 < \varepsilon_{n-1}-\varepsilon_n < \frac{1}{(n-1)^2}. \end{aligned}$$ Finally, since the sequence $\{\varepsilon_n\}_{n\ge 2}$ is strictly decreasing but bounded from below by $\varepsilon_1-\zeta(2) = -1 -\zeta(2)$, the *monotone convergence theorem* asserts that it has a limit. 
◻ ## Connection with Bell numbers {#sec:Bell} Our explicit formula for $\mu_n$ allows us to build the following connection between the Bell numbers $B_\ell$ and a certain family of weighted moments of $\mu_n-\mu_k$. **Theorem 19**. *Let $\ell_1$ and $\ell_2$ be nonnegative integers. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:mu-moments-limit} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1}\big(\mu_n-\mu_k\big)^{\ell_2} = B_{\ell_1+\ell_2} + O(n^{-1}), \end{aligned}$$ where the big-$O$ term depends only on $\ell_1$ and $\ell_2$.* For this purpose, we first need to bound $\mu_n-\mu_k$. **Lemma 20**. *Let $n\ge 1$. For any $k$ such that $1\le k\le n$, $$\begin{aligned} \label{eq:mun-muk} n-k \le \mu_n-\mu_k \le (n-k)\big(1+\tfrac{1}{k}\big). \end{aligned}$$* *Proof.* Recall from [\[eq:mu-new-formula\]](#eq:mu-new-formula){reference-type="eqref" reference="eq:mu-new-formula"}, $$\begin{aligned} \mu_n-\mu_k = (n-k) + \sum_{j=k}^{n-1}\frac{1}{j} - \sum_{j=k}^{n-1}\big(\varepsilon_j-\varepsilon_{j+1}\big). \end{aligned}$$ It will be sufficient to bound the two summations on the right-hand side of the above. In view of [\[eq:epsilon-diff\]](#eq:epsilon-diff){reference-type="eqref" reference="eq:epsilon-diff"}, it is clear that $$\begin{aligned} 0\le \sum_{j=k}^{n-1}\big(\varepsilon_j-\varepsilon_{j+1}\big)\le \sum_{j=k}^{n-1}\frac{1}{j^2}. \end{aligned}$$ Thus, $$\begin{aligned} \sum_{j=k}^{n-1}\frac{1}{j} - \sum_{j=k}^{n-1}\big(\varepsilon_j-\varepsilon_{j+1}\big) \ge \sum_{j=k}^{n-1}\frac{1}{j} - \sum_{j=k}^{n-1}\frac{1}{j^2}\ge 0. \end{aligned}$$ Meanwhile, $$\begin{aligned} \sum_{j=k}^{n-1}\frac{1}{j} - \sum_{j=k}^{n-1}\big(\varepsilon_j-\varepsilon_{j+1}\big) \le \sum_{j=k}^{n-1}\frac{1}{j} \le \frac{n-k}{k}. \end{aligned}$$ The claimed inequalities [\[eq:mun-muk\]](#eq:mun-muk){reference-type="eqref" reference="eq:mun-muk"} then follow. 
◻ Now we complete the proof of Theorem [Theorem 19](#th:mu-moments-limit){reference-type="ref" reference="th:mu-moments-limit"}. *Proof of Theorem [Theorem 19](#th:mu-moments-limit){reference-type="ref" reference="th:mu-moments-limit"}.* By virtue of Lemma [Lemma 20](#le:mun-muk){reference-type="ref" reference="le:mun-muk"}, we know that for $n\ge 1$, $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1}\big(\mu_n-\mu_k\big)^{\ell_2}\ge \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1+\ell_2} \end{aligned}$$ and $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1}\big(\mu_n-\mu_k\big)^{\ell_2}\le \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1+\ell_2}\big(1+\tfrac{1}{k}\big)^{\ell_2}. \end{aligned}$$ Recalling from [\[eq:A-(n-k)u\]](#eq:A-(n-k)u){reference-type="eqref" reference="eq:A-(n-k)u"}, $$\begin{aligned} \label{eq:limit-1} \sum_{k=1}^n \frac{A(n,k)}{n!}(n-k)^{\ell_1+\ell_2} = B_{\ell_1+\ell_2} + O(n^{-1}), \end{aligned}$$ it is sufficient to show that we also have $$\begin{aligned} \label{eq:limit-1+1/k} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1+\ell_2}\big(1+\tfrac{1}{k}\big)^{\ell_2} = B_{\ell_1+\ell_2} + O(n^{-1}). \end{aligned}$$ To see this, we expand $\big(1+\frac{1}{k}\big)^{\ell_2}$ as $1+\sum_{s=1}^{\ell_2} \binom{\ell_2}{s}\frac{1}{k^s}$, and apply [\[eq:Q-s-bound\]](#eq:Q-s-bound){reference-type="eqref" reference="eq:Q-s-bound"} to get $$\begin{aligned} \sum_{s=1}^{\ell_2} \binom{\ell_2}{s}\sum_{k=1}^n \frac{A(n,k)}{n!}(n-k)^{\ell_1+\ell_2} \frac{1}{k^s} = O(n^{-1}). \end{aligned}$$ Thus, $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1+\ell_2}\big(1+\tfrac{1}{k}\big)^{\ell_2} = \sum_{k=1}^n \frac{A(n,k)}{n!}(n-k)^{\ell_1+\ell_2} + O(n^{-1}), \end{aligned}$$ which gives [\[eq:limit-1+1/k\]](#eq:limit-1+1/k){reference-type="eqref" reference="eq:limit-1+1/k"} by recalling [\[eq:limit-1\]](#eq:limit-1){reference-type="eqref" reference="eq:limit-1"}. ◻ **Example 21**. 
We have $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(\mu_n-\mu_k\big) &= 1 + O(n^{-1}),\label{eq:mu-1st-moments-limit}\\ \sum_{k=1}^n \frac{A(n,k)}{n!}\big(\mu_n-\mu_k\big)^2 &= 2 + O(n^{-1}),\\ \sum_{k=1}^n \frac{A(n,k)}{n!}\big(\mu_n-\mu_k\big)^3 &= 5 + O(n^{-1}). \end{aligned}$$ **Remark 22**. We may further elaborate on [\[eq:mu-1st-moments-limit\]](#eq:mu-1st-moments-limit){reference-type="eqref" reference="eq:mu-1st-moments-limit"}. That is, by virtue of [\[eq:mu-original\]](#eq:mu-original){reference-type="eqref" reference="eq:mu-original"}, it is immediately clear that for $n\geq 2$, $$\begin{aligned} \label{eq:mu-1st-moments-limit-new} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(\mu_n-\mu_k\big) = 1. \end{aligned}$$ The argument for Theorem [Theorem 19](#th:mu-moments-limit){reference-type="ref" reference="th:mu-moments-limit"} also works, mutatis mutandis, for the following estimate by a direct application of [\[eq:Q-s-bound\]](#eq:Q-s-bound){reference-type="eqref" reference="eq:Q-s-bound"}. **Theorem 23**. *Let $\ell_1$ and $\ell_2$ be nonnegative integers. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:mu-moments-limit-k-1} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)^{\ell_1}\big(\mu_n-\mu_k\big)^{\ell_2}\frac{1}{k} = O(n^{-1}). \end{aligned}$$ Here the big-$O$ term depends only on $\ell_1$ and $\ell_2$.* Finally, we derive the following result from Theorem [Theorem 19](#th:mu-moments-limit){reference-type="ref" reference="th:mu-moments-limit"}. **Corollary 24**. *Let $\ell_1$ and $\ell_2$ be nonnegative integers. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:mu-moments-limit-new} \sum_{k=1}^n \frac{A(n,k)}{n!}k^{\ell_1}\big(\mu_n-\mu_k\big)^{\ell_2} = B_{\ell_2} n^{\ell_1} + O(n^{\ell_1-1}). \end{aligned}$$ Here the big-$O$ term depends only on $\ell_1$ and $\ell_2$.* *Proof.* When $\ell_1=0$, the result is exactly a specialization of Theorem [Theorem 19](#th:mu-moments-limit){reference-type="ref" reference="th:mu-moments-limit"}. 
In what follows, we assume that $\ell_1\ge 1$ and expand $k^{\ell_1}$ as $$\begin{aligned} \label{eq:expansion-k} k^{\ell_1} = \big(n-(n-k)\big)^{\ell_1} = n^{\ell_1} + \sum_{s=1}^{\ell_1} \binom{\ell_1}{s} (-1)^s \big(n-k\big)^s n^{\ell_1-s}. \end{aligned}$$ In light of Theorem [Theorem 19](#th:mu-moments-limit){reference-type="ref" reference="th:mu-moments-limit"}, $$\begin{aligned} n^{\ell_1}\cdot \sum_{k=1}^n \frac{A(n,k)}{n!}\big(\mu_n-\mu_k\big)^{\ell_2} = B_{\ell_2} n^{\ell_1} + O(n^{\ell_1-1}). \end{aligned}$$ Also, for each $s$ with $1\le s\le \ell_1$, $$\begin{aligned} n^{\ell_1-s}\cdot \sum_{k=1}^n \frac{A(n,k)}{n!}\binom{\ell_1}{s} (-1)^s \big(n-k\big)^s\big(\mu_n-\mu_k\big)^{\ell_2} = O(n^{\ell_1-s}). \end{aligned}$$ The desired result then follows. ◻ ## Other estimates Herein we establish a few more estimates for later use. All asymptotic relations in this subsection depend only on $L$ and $M$. **Lemma 25**. *Let $L\ge 0$ and $M\ge 1$ be integers. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:mu-moments-limit-LM} \sum_{k=1}^n \frac{A(n,k)}{n!}k^{L}\big(1+\mu_k-\mu_n\big)^{M} = \left(\sum_{m=0}^M \binom{M}{m}(-1)^m B_m\right) n^{L} + O(n^{L-1}). \end{aligned}$$* *Proof.* We make use of the expansion $$\begin{aligned} \label{eq:expansion-mu-k-n} \big(1+\mu_k-\mu_n\big)^{M} = \sum_{m=0}^M \binom{M}{m}(-1)^m \big(\mu_n-\mu_k\big)^{m}, \end{aligned}$$ and then apply [\[eq:mu-moments-limit-new\]](#eq:mu-moments-limit-new){reference-type="eqref" reference="eq:mu-moments-limit-new"}. ◻ **Example 26**. Recall that $$\begin{aligned} B_0=1,\qquad B_1=1,\qquad B_2=2,\qquad B_3=5. 
\end{aligned}$$ The following special cases will be used in the sequel: $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}k^{L}\big(1+\mu_k-\mu_n\big)^{2} &= n^{L} + O(n^{L-1}),\label{eq:mu-moments-limit-L2}\\ \sum_{k=1}^n \frac{A(n,k)}{n!}k^{L}\big(1+\mu_k-\mu_n\big)^{3} &= - n^{L} + O(n^{L-1}).\label{eq:mu-moments-limit-L3} \end{aligned}$$ A closer look at the $M = 1$ case of [\[eq:mu-moments-limit-LM\]](#eq:mu-moments-limit-LM){reference-type="eqref" reference="eq:mu-moments-limit-LM"} reveals that $$\begin{aligned} \sum_{m=0}^1 \binom{1}{m}(-1)^m B_m = B_0 - B_1 = 1 - 1 = 0,\end{aligned}$$ thereby indicating that the main term in [\[eq:mu-moments-limit-LM\]](#eq:mu-moments-limit-LM){reference-type="eqref" reference="eq:mu-moments-limit-LM"} vanishes. Hence, we elaborate further. **Lemma 27**. *Let $L\ge 0$ be an integer. Then as $n\to +\infty$, $$\begin{aligned} \label{eq:mu-moments-limit-L1} \sum_{k=1}^n \frac{A(n,k)}{n!}k^{L}\big(1+\mu_k-\mu_n\big) = L n^{L-1} + O(n^{L-2}). \end{aligned}$$* *Proof.* For the case when $L=0$, we note from [\[sum=1\]](#sum=1){reference-type="eqref" reference="sum=1"} that $$\sum_{k=1}^n \frac{A(n,k)}{n!} =1,$$ and from [\[eq:mu-1st-moments-limit-new\]](#eq:mu-1st-moments-limit-new){reference-type="eqref" reference="eq:mu-1st-moments-limit-new"} that $$\sum_{k=1}^n \frac{A(n,k)}{n!}\big(\mu_n-\mu_k\big) = 1.$$ Thus, $$\begin{aligned} \label{eq:mu-moments-limit-01} \sum_{k=1}^n \frac{A(n,k)}{n!}\big(1+\mu_k-\mu_n\big) = 0. \end{aligned}$$ Now assume that $L\ge 1$. We again use [\[eq:expansion-k\]](#eq:expansion-k){reference-type="eqref" reference="eq:expansion-k"} but with one more term taken out: $$\begin{aligned} \label{eq:expansion-k-new} k^L = n^L - L n^{L-1} \big(n-k\big) + \sum_{\ell=2}^{L} \binom{L}{\ell} (-1)^\ell \big(n-k\big)^\ell n^{L-\ell}. 
\end{aligned}$$ As pointed out earlier, the contribution from the $n^L$ term vanishes in [\[eq:mu-moments-limit-L1\]](#eq:mu-moments-limit-L1){reference-type="eqref" reference="eq:mu-moments-limit-L1"}. Thus, $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}k^{L}\big(1+\mu_k-\mu_n\big) &= - L n^{L-1}\sum_{k=1}^n \frac{A(n,k)}{n!}\big(n-k\big)\big(1+\mu_k-\mu_n\big) +O(n^{L-2})\\ &= - L n^{L-1} \big(B_1-B_2+O(n^{-1})\big) +O(n^{L-2})\\ &= L n^{L-1} + O(n^{L-2}), \end{aligned}$$ as claimed. ◻ Finally, we have the following result. **Lemma 28**. *Let $M\ge 1$ be an integer. For any sequence $f(n)=O(n^L \log n)$ with $L\ge 0$ an integer, we have, as $n\to +\infty$, $$\begin{aligned} \label{eq:mu-moments-limit-LM-log} \sum_{k=1}^n \frac{A(n,k)}{n!}f(k) \big(1+\mu_k-\mu_n\big)^{M} = O(n^{L}\log n). \end{aligned}$$* *Proof.* It is clear that $$\begin{aligned} \sum_{k=1}^n \frac{A(n,k)}{n!}f(k) \big(1+\mu_k-\mu_n\big)^{M} = O\left(n^{L}\log n \sum_{m=0}^M \binom{M}{m} \sum_{k=1}^n \frac{A(n,k)}{n!} \big(\mu_n-\mu_k\big)^{m}\right). \end{aligned}$$ Noting that the inner summation on the right-hand side is $O(1)$ by [\[eq:mu-moments-limit\]](#eq:mu-moments-limit){reference-type="eqref" reference="eq:mu-moments-limit"}, we arrive at the desired relation. ◻ # Proof of Theorem [Theorem 2](#th:c-mom){reference-type="ref" reference="th:c-mom"} {#sec:var-3moment} Now, we are ready to prove Theorem [Theorem 2](#th:c-mom){reference-type="ref" reference="th:c-mom"}. To do so, we require the following strengthened statement. **Theorem 29**. *For every $m\ge 2$, Theorem [Theorem 2](#th:c-mom){reference-type="ref" reference="th:c-mom"} is valid.
Furthermore, if we define $$\begin{aligned} \varepsilon_n^{(m)} := \mathbf{E}\big[(X_n-\mu_n)^m\big]- \begin{cases} (2M-1)!!\cdot n^M, & \text{if $m=2M$},\\ \tfrac{2}{3}M(2M+1)!!\cdot n^M, & \text{if $m=2M+1$}, \end{cases} \end{aligned}$$ then $$\begin{aligned} \label{eq:mom-error-diff} \big|\varepsilon^{(m)}_n-\varepsilon^{(m)}_{n-1}\big| = \begin{cases} O(n^{-1}), & \text{if $m=2$ or $3$},\\ O(n^{\lfloor\frac{m}{2}\rfloor-2}\log n), & \text{if $m\ge 4$}. \end{cases} \end{aligned}$$* *Outline of the Proof.* We begin with [\[eq:E-poly\]](#eq:E-poly){reference-type="eqref" reference="eq:E-poly"} and find that $$\begin{aligned} \mathbf{E}\left[(X_n-\mu_n)^m\right] & =\sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\left[\big(1+X_{k}-\mu_{n}\big)^{m}\right]\\ &=\sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\left[\big((X_{k}-\mu_{k})+(1+\mu_{k}-\mu_{n})\big)^{m}\right].\end{aligned}$$ Hence, $$\begin{aligned} \label{eq:mth-M} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^m\big] = \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^m\big] + \sum_{\ell=1}^m \binom{m}{\ell} I_\ell^{(m)},\end{aligned}$$ where for each $\ell$ with $1\le \ell\le m$, $$\begin{aligned} I_\ell^{(m)} := \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{m-\ell}\big] \big(1+\mu_k-\mu_n\big)^{\ell}.\end{aligned}$$ In what follows, we first work on the variance and the third central moment in Subsections [5.1](#sec:var){reference-type="ref" reference="sec:var"} and [5.2](#sec:3rd){reference-type="ref" reference="sec:3rd"}, respectively. Using them as initial cases, we then evaluate the higher central moments by an inductive argument in Subsection [5.3](#sec:higher){reference-type="ref" reference="sec:higher"}. In the process, we shall treat even- and odd-order moments separately.
◻ ## Asymptotic relation for the variance {#sec:var} For the variance $\mathbf{Var}[X_n]=\mathbf{E}\big[(X_n-\mu_n)^2\big]$, we write out [\[eq:mth-M\]](#eq:mth-M){reference-type="eqref" reference="eq:mth-M"} explicitly: $$\begin{aligned} \label{eq:Var-rec} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{Var}[X_n] = \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{Var}[X_k] + I_1^{(2)} + I_2^{(2)}, \end{aligned}$$ where $$\begin{aligned} I_1^{(2)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[X_k-\mu_k\big] \big(1+\mu_k-\mu_n\big),\\ I_2^{(2)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \big(1+\mu_{k}-\mu_{n}\big)^2.\end{aligned}$$ In light of [\[eq:mu-moments-limit-L2\]](#eq:mu-moments-limit-L2){reference-type="eqref" reference="eq:mu-moments-limit-L2"}, $$\begin{aligned} \label{eq:I2-2} I_2^{(2)} = 1 + O(n^{-1}).\end{aligned}$$ Also, using the fact that $$\begin{aligned} \label{eq:E-diff} \mathbf{E}\left[X_{k}-\mu_{k}\right] = \mathbf{E}\left[X_{k}\right]-\mu_{k} = \mu_k-\mu_k = 0,\end{aligned}$$ we have $$\begin{aligned} \label{eq:I2-1} I_1^{(2)} = 0.\end{aligned}$$ Consequently, we insert [\[eq:I2-2\]](#eq:I2-2){reference-type="eqref" reference="eq:I2-2"} and [\[eq:I2-1\]](#eq:I2-1){reference-type="eqref" reference="eq:I2-1"} into [\[eq:Var-rec\]](#eq:Var-rec){reference-type="eqref" reference="eq:Var-rec"}, and obtain $$\begin{aligned} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{Var}[X_n] = \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{Var}[X_k] + 1 + O(n^{-1}).\end{aligned}$$ Applying Theorems [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"} and [Theorem 17](#th:xi-asymp-error){reference-type="ref" reference="th:xi-asymp-error"} yields the desired result.
## Asymptotic relation for the third central moment {#sec:3rd} The third central moment $\mathbf{E}\big[(X_n-\mu_n)^3\big]$ satisfies: $$\begin{aligned} \label{eq:3rdM} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^3\big] = \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^3\big] + 3I_1^{(3)} + 3I_2^{(3)} + I_3^{(3)},\end{aligned}$$ where $$\begin{aligned} I_1^{(3)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{2}\big] \big(1+\mu_k-\mu_n\big),\\ I_2^{(3)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[X_k-\mu_k\big] \big(1+\mu_{k}-\mu_{n}\big)^2,\\ I_3^{(3)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \big(1+\mu_{k}-\mu_{n}\big)^3.\end{aligned}$$ First, by [\[eq:mu-moments-limit-L3\]](#eq:mu-moments-limit-L3){reference-type="eqref" reference="eq:mu-moments-limit-L3"}, $$\begin{aligned} \label{eq:I3-3} I_3^{(3)} = -1 + O(n^{-1}).\end{aligned}$$ Also, [\[eq:E-diff\]](#eq:E-diff){reference-type="eqref" reference="eq:E-diff"} tells us that $$\begin{aligned} \label{eq:I3-2} I_2^{(3)} = 0.\end{aligned}$$ Now it remains to evaluate $I_1^{(3)}$. 
Recalling Theorem [Theorem 29](#th:c-mom-new){reference-type="ref" reference="th:c-mom-new"} for $m=2$, we split $I_1^{(3)}$ as $$\begin{aligned} I_1^{(3)} = I_{1,1}^{(3)} + I_{1,2}^{(3)},\end{aligned}$$ where $$\begin{aligned} I_{1,1}^{(3)} &:= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot k \big(1+\mu_k-\mu_n\big),\\ I_{1,2}^{(3)} &:= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \varepsilon^{(2)}_k \big(1+\mu_k-\mu_n\big).\end{aligned}$$ In view of [\[eq:mu-moments-limit-L1\]](#eq:mu-moments-limit-L1){reference-type="eqref" reference="eq:mu-moments-limit-L1"}, it is clear that $$\begin{aligned} \label{eq:I3-11} I_{1,1}^{(3)} = 1 + O(n^{-1}).\end{aligned}$$ In what follows, our objective is to show $$\label{eq:I3-12} I_{1,2}^{(3)} = O(n^{-1}).$$ To begin with, we define, for $1\le t\le n$, $$\begin{aligned} \label{eq:R-def} R_n(t):= \sum_{k=1}^t \frac{A(n,k)}{n!}\big(1+\mu_k-\mu_n\big).\end{aligned}$$ Clearly, $R_n(t)\le 0$ whenever $1\le t\le n-1$ since $\mu_k-\mu_n\le -1$ for every $k$ with $1\le k\le t\le n-1$ according to Lemma [Lemma 20](#le:mun-muk){reference-type="ref" reference="le:mun-muk"}. In the meantime, it was shown in [\[eq:mu-moments-limit-01\]](#eq:mu-moments-limit-01){reference-type="eqref" reference="eq:mu-moments-limit-01"} that $$\begin{aligned} R_n(n)=0.\end{aligned}$$ In light of the Abel summation formula, $$\begin{aligned} I_{1,2}^{(3)} = R_n(n) \varepsilon^{(2)}_{n+1} + \sum_{k=1}^{n} R_n(k) \big(\varepsilon^{(2)}_{k}-\varepsilon^{(2)}_{k+1}\big)= \sum_{k=1}^{n} R_n(k) \big(\varepsilon^{(2)}_{k}-\varepsilon^{(2)}_{k+1}\big).\end{aligned}$$ By [\[eq:mom-error-diff\]](#eq:mom-error-diff){reference-type="eqref" reference="eq:mom-error-diff"}, there exists a constant $c^{(2)}$ such that $$\begin{aligned} \big|\varepsilon^{(2)}_{k}-\varepsilon^{(2)}_{k+1}\big|\le \frac{c^{(2)}}{k}\end{aligned}$$ for every $k$ with $1\le k\le n$.
Thus, $$\begin{aligned} \big|I_{1,2}^{(3)}\big|\le -\sum_{k=1}^{n} R_n(k) \big|\varepsilon^{(2)}_{k}-\varepsilon^{(2)}_{k+1}\big|\le -\sum_{k=1}^{n} R_n(k)\cdot \frac{c^{(2)}}{k}.\end{aligned}$$ Let $$\begin{aligned} \widehat{I}_{1,2}^{(3)}:= \sum_{k=1}^{n} R_n(k)\cdot \frac{1}{k}.\end{aligned}$$ To prove [\[eq:I3-12\]](#eq:I3-12){reference-type="eqref" reference="eq:I3-12"}, it will be sufficient to show that $$\label{eq:I3-22-new} \widehat{I}_{1,2}^{(3)} = O(n^{-1}).$$ Utilizing the fact that $R_n(n)=0$ and recalling the definition [\[eq:R-def\]](#eq:R-def){reference-type="eqref" reference="eq:R-def"}, we have $$\begin{aligned} \widehat{I}_{1,2}^{(3)} = \sum_{k=1}^{n-1} \frac{1}{k} \sum_{j=1}^k \frac{A(n,j)}{n!}\big(1+\mu_j-\mu_n\big) = \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\big(1+\mu_j-\mu_n\big)\cdot \sum_{k=j}^{n-1} \frac{1}{k}.\end{aligned}$$ Thus, noting the plain inequality $$\begin{aligned} \sum_{k=j}^{n-1} \frac{1}{k}\le \frac{n-j}{j},\end{aligned}$$ we have $$\begin{aligned} \big|\widehat{I}_{1,2}^{(3)}\big| &\le \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\cdot \sum_{k=j}^{n-1} \frac{1}{k} + \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\big(\mu_n-\mu_j\big) \cdot \sum_{k=j}^{n-1} \frac{1}{k}\\ &\le \sum_{j=1}^{n-1} \frac{A(n,j)}{n!} \frac{n-j}{j} + \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\big(\mu_n-\mu_j\big)\frac{n-j}{j}\\ &= O(n^{-1}),\end{aligned}$$ as claimed in [\[eq:I3-22-new\]](#eq:I3-22-new){reference-type="eqref" reference="eq:I3-22-new"}. Here we have applied [\[eq:mu-moments-limit-k-1\]](#eq:mu-moments-limit-k-1){reference-type="eqref" reference="eq:mu-moments-limit-k-1"} to each summation in the second line for the final asymptotic relation. 
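The summation-by-parts step used above is the standard Abel identity $\sum_{k=1}^n w_k\varepsilon_k = R(n)\varepsilon_{n+1}+\sum_{k=1}^n R(k)\big(\varepsilon_k-\varepsilon_{k+1}\big)$ with $R(t)=\sum_{k\le t}w_k$. As a quick sanity check, here is a minimal Python sketch of that identity; the sequences are arbitrary placeholders, not the actual weights $A(n,k)/n!$:

```python
import itertools
import math
import random

def abel_lhs(w, eps):
    # direct sum:  sum_k w_k * eps_k
    return sum(wk * ek for wk, ek in zip(w, eps))

def abel_rhs(w, eps, eps_next):
    # Abel form:  R(n)*eps_{n+1} + sum_k R(k)*(eps_k - eps_{k+1}),
    # where R(t) is the partial sum of w up to index t
    R = list(itertools.accumulate(w))
    full = eps + [eps_next]
    n = len(w)
    return R[-1] * eps_next + sum(R[k] * (full[k] - full[k + 1]) for k in range(n))

random.seed(1)
w = [random.uniform(-1.0, 1.0) for _ in range(50)]
eps = [random.uniform(-1.0, 1.0) for _ in range(50)]
eps_next = random.uniform(-1.0, 1.0)
assert math.isclose(abel_lhs(w, eps), abel_rhs(w, eps, eps_next), abs_tol=1e-9)
```

In the text, $R_n(n)=0$ kills the boundary term, leaving only the telescoped sum.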
Hence, combining [\[eq:I3-11\]](#eq:I3-11){reference-type="eqref" reference="eq:I3-11"} and [\[eq:I3-12\]](#eq:I3-12){reference-type="eqref" reference="eq:I3-12"} gives us $$\begin{aligned} \label{eq:I3-1} I_{1}^{(3)} = 1 + O(n^{-1}).\end{aligned}$$ By substituting [\[eq:I3-3\]](#eq:I3-3){reference-type="eqref" reference="eq:I3-3"}, [\[eq:I3-2\]](#eq:I3-2){reference-type="eqref" reference="eq:I3-2"} and [\[eq:I3-1\]](#eq:I3-1){reference-type="eqref" reference="eq:I3-1"} into [\[eq:3rdM\]](#eq:3rdM){reference-type="eqref" reference="eq:3rdM"}, we have $$\begin{aligned} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^3\big] = \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^3\big] + 2 + O(n^{-1}).\end{aligned}$$ The required result follows from Theorems [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"} and [Theorem 17](#th:xi-asymp-error){reference-type="ref" reference="th:xi-asymp-error"}. ## Asymptotic relations for higher central moments {#sec:higher} For higher central moments in [\[eq:mom-asymp\]](#eq:mom-asymp){reference-type="eqref" reference="eq:mom-asymp"}, we shall use mathematical induction by first assuming the validity for $2,3,\ldots, 2M-1$, and then proving the cases of $2M$ and $2M+1$ sequentially. 
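Both inductive steps below close via elementary double-factorial identities (they appear at the ends of the even- and odd-order cases, respectively). As a sanity check, the following Python sketch verifies them in exact rational arithmetic for small $M$:

```python
from fractions import Fraction
from math import comb

def dfact(n):
    """Double factorial n!!, with the convention (-1)!! = 0!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

for M in range(2, 12):
    # even case:  (1/M) * C(2M,2) * (2M-3)!!  =  (2M-1)!!
    assert Fraction(comb(2 * M, 2) * dfact(2 * M - 3), M) == dfact(2 * M - 1)
    # odd case:   (1/M) * [ C(2M+1,1) M (2M-1)!!
    #                       + C(2M+1,2) (2/3)(M-1) (2M-1)!!
    #                       - C(2M+1,3) (2M-3)!! ]  =  (2/3) M (2M+1)!!
    S = (comb(2 * M + 1, 1) * M * dfact(2 * M - 1)
         + comb(2 * M + 1, 2) * Fraction(2, 3) * (M - 1) * dfact(2 * M - 1)
         - comb(2 * M + 1, 3) * dfact(2 * M - 3))
    assert S / M == Fraction(2, 3) * M * dfact(2 * M + 1)
```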
### Even-order central moments Recall from [\[eq:mth-M\]](#eq:mth-M){reference-type="eqref" reference="eq:mth-M"} that $$\begin{aligned} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^{2M}\big] = \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^{2M}\big] + \sum_{\ell=1}^{2M} \binom{2M}{\ell} I_\ell^{(2M)},\end{aligned}$$ where $$\begin{aligned} I_1^{(2M)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{2M-1}\big] \big(1+\mu_k-\mu_n\big),\\ I_2^{(2M)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{2M-2}\big] \big(1+\mu_k-\mu_n\big)^{2},\\ &\ \ \vdots \\ I_{2M-1}^{(2M)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[X_k-\mu_k\big] \big(1+\mu_{k}-\mu_{n}\big)^{2M-1},\\ I_{2M}^{(2M)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \big(1+\mu_{k}-\mu_{n}\big)^{2M}.\end{aligned}$$ As before, $$\begin{aligned} I_{2M}^{(2M)} = O(1),\end{aligned}$$ and $$\begin{aligned} I_{2M-1}^{(2M)} = 0.\end{aligned}$$ Also, for every $L$ with $1\le L\le M-1$, we know from the inductive hypothesis that $$\begin{aligned} \mathbf{E}\big[(X_n-\mu_n)^{2L}\big] &= (2L-1)!!\cdot n^L + O(n^{L-1}\log n),\\ \mathbf{E}\big[(X_n-\mu_n)^{2L+1}\big] &= \tfrac{2}{3}L(2L+1)!!\cdot n^L + O(n^{L-1}\log n).\end{aligned}$$ In light of [\[eq:mu-moments-limit-LM\]](#eq:mu-moments-limit-LM){reference-type="eqref" reference="eq:mu-moments-limit-LM"}, [\[eq:mu-moments-limit-L1\]](#eq:mu-moments-limit-L1){reference-type="eqref" reference="eq:mu-moments-limit-L1"} and [\[eq:mu-moments-limit-LM-log\]](#eq:mu-moments-limit-LM-log){reference-type="eqref" reference="eq:mu-moments-limit-LM-log"}, we derive that for $2\le L'\le M-1$, $$\begin{aligned} I_{2L'-1}^{(2M)} &= O(n^{M-L'}),\\ I_{2L'}^{(2M)} &= O(n^{M-L'}).\end{aligned}$$ Also, $$\begin{aligned} I_{1}^{(2M)} &= O(n^{M-2}\log n),\\ I_{2}^{(2M)} &= (2M-3)!!\cdot n^{M-1} + O(n^{M-2}\log n).\end{aligned}$$ Therefore, $$\begin{aligned}
\left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^{2M}\big] &= \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^{2M}\big]\\ &\quad + \tbinom{2M}{2}(2M-3)!!\cdot n^{M-1} + O(n^{M-2}\log n).\end{aligned}$$ Noting that $$\begin{aligned} \tfrac{1}{(M-1)+1}\cdot \tbinom{2M}{2}(2M-3)!! = (2M-1)!!,\end{aligned}$$ we conclude the case of $2M$ in Theorem [Theorem 29](#th:c-mom-new){reference-type="ref" reference="th:c-mom-new"} by virtue of Theorems [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"} and [Theorem 17](#th:xi-asymp-error){reference-type="ref" reference="th:xi-asymp-error"}. ### Odd-order central moments By virtue of [\[eq:mth-M\]](#eq:mth-M){reference-type="eqref" reference="eq:mth-M"}, $$\begin{aligned} \left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^{2M+1}\big] &= \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^{2M+1}\big]\\ &\quad + \sum_{\ell=1}^{2M+1} \binom{2M+1}{\ell} I_\ell^{(2M+1)},\end{aligned}$$ where $$\begin{aligned} I_1^{(2M+1)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{2M}\big] \big(1+\mu_k-\mu_n\big),\\ I_2^{(2M+1)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{2M-1}\big] \big(1+\mu_k-\mu_n\big)^{2},\\ I_3^{(2M+1)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[(X_k-\mu_k)^{2M-2}\big] \big(1+\mu_k-\mu_n\big)^{3},\\ &\ \ \vdots \\ I_{2M}^{(2M+1)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \mathbf{E}\big[X_k-\mu_k\big] \big(1+\mu_{k}-\mu_{n}\big)^{2M},\\ I_{2M+1}^{(2M+1)} &= \sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \big(1+\mu_{k}-\mu_{n}\big)^{2M+1}.\end{aligned}$$ In the same vein, $$\begin{aligned} I_{2M+1}^{(2M+1)} = O(1),\end{aligned}$$ and $$\begin{aligned} I_{2M}^{(2M+1)} = 0.\end{aligned}$$ Also, for $2\le L'\le M-1$, $$\begin{aligned} I_{2L'}^{(2M+1)} &= O(n^{M-L'}),\\ I_{2L'+1}^{(2M+1)} &= O(n^{M-L'}).\end{aligned}$$ Furthermore, $$\begin{aligned} I_{2}^{(2M+1)} &= \tfrac{2}{3}(M-1)(2M-1)!!\cdot n^{M-1} + O(n^{M-2}\log
n),\\ I_{3}^{(2M+1)} &= -(2M-3)!!\cdot n^{M-1} + O(n^{M-2}\log n).\end{aligned}$$ It remains to estimate $I_{1}^{(2M+1)}$. Recalling from the inductive hypothesis that $$\begin{aligned} \mathbf{E}\big[(X_k-\mu_k)^{2M}\big] = (2M-1)!!\cdot k^{M} + \varepsilon^{(2M)}_k,\end{aligned}$$ we rewrite $I_{1}^{(2M+1)}$ as $$\begin{aligned} \label{eq:I-2m+1-1-def} I_{1}^{(2M+1)} = (2M-1)!!\cdot I_{1,1}^{(2M+1)} + I_{1,2}^{(2M+1)}\end{aligned}$$ where $$\begin{aligned} I_{1,1}^{(2M+1)}&:=\sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot k^{M} \big(1+\mu_k-\mu_n\big),\\ I_{1,2}^{(2M+1)}&:=\sum_{k=1}^{n}\frac{A(n,k)}{n!}\cdot \varepsilon^{(2M)}_k \big(1+\mu_k-\mu_n\big).\end{aligned}$$ Now we first derive from [\[eq:mu-moments-limit-L1\]](#eq:mu-moments-limit-L1){reference-type="eqref" reference="eq:mu-moments-limit-L1"} that $$\begin{aligned} \label{eq:I2M+1-11} I_{1,1}^{(2M+1)} = M\cdot n^{M-1} + O(n^{M-2}).\end{aligned}$$ Next, we claim that $$\label{eq:I2M+1-12} I_{1,2}^{(2M+1)} = O(n^{M-2}\log n).$$ Recalling the partial sums defined in [\[eq:R-def\]](#eq:R-def){reference-type="eqref" reference="eq:R-def"}, we apply the Abel summation formula and obtain $$\begin{aligned} I_{1,2}^{(2M+1)} = \sum_{k=1}^{n} R_n(k) \big(\varepsilon^{(2M)}_{k}-\varepsilon^{(2M)}_{k+1}\big).\end{aligned}$$ By [\[eq:mom-error-diff\]](#eq:mom-error-diff){reference-type="eqref" reference="eq:mom-error-diff"}, there exists a constant $c^{(2M)}$ such that $$\begin{aligned} \big|\varepsilon^{(2M)}_{k}-\varepsilon^{(2M)}_{k+1}\big|\le c^{(2M)}k^{M-2}\log k\end{aligned}$$ for every $k$ with $1\le k\le n$.
Thus, $$\begin{aligned} \big|I_{1,2}^{(2M+1)}\big|\le -\sum_{k=1}^{n} R_n(k) \big|\varepsilon^{(2M)}_{k}-\varepsilon^{(2M)}_{k+1}\big|\le -\sum_{k=1}^{n} R_n(k)\cdot c^{(2M)}k^{M-2}\log k.\end{aligned}$$ Let $$\begin{aligned} \widehat{I}_{1,2}^{(2M+1)}:= \sum_{k=1}^{n} R_n(k)\cdot k^{M-2}\log k.\end{aligned}$$ We shall prove that $$\label{eq:I2M+1-12-new} \widehat{I}_{1,2}^{(2M+1)} = O(n^{M-2}\log n),$$ thereby yielding [\[eq:I2M+1-12\]](#eq:I2M+1-12){reference-type="eqref" reference="eq:I2M+1-12"}. Since $R_n(n)=0$, we recall the definition [\[eq:R-def\]](#eq:R-def){reference-type="eqref" reference="eq:R-def"} and get $$\begin{aligned} \widehat{I}_{1,2}^{(2M+1)} &= \sum_{k=1}^{n-1} k^{M-2}\log k \sum_{j=1}^k \frac{A(n,j)}{n!}\big(1+\mu_j-\mu_n\big)\\ &= \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\cdot \sum_{k=j}^{n-1} k^{M-2}\log k - \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\big(\mu_n-\mu_j\big)\cdot \sum_{k=j}^{n-1} k^{M-2}\log k.\end{aligned}$$ Noting that $$\begin{aligned} \sum_{k=j}^{n-1} k^{M-2}\log k\le n^{M-2}\log n\cdot \big(n-j\big),\end{aligned}$$ we have, by virtue of [\[eq:mu-moments-limit\]](#eq:mu-moments-limit){reference-type="eqref" reference="eq:mu-moments-limit"}, $$\begin{aligned} \big|\widehat{I}_{1,2}^{(2M+1)}\big| &\le n^{M-2}\log n\cdot \left(\sum_{j=1}^{n-1} \frac{A(n,j)}{n!} \big(n-j\big) + \sum_{j=1}^{n-1} \frac{A(n,j)}{n!}\big(n-j\big)\big(\mu_n-\mu_j\big)\right)\\ &= O(n^{M-2}\log n).\end{aligned}$$ Therefore, [\[eq:I2M+1-12-new\]](#eq:I2M+1-12-new){reference-type="eqref" reference="eq:I2M+1-12-new"} holds, and so does [\[eq:I2M+1-12\]](#eq:I2M+1-12){reference-type="eqref" reference="eq:I2M+1-12"}.
Inserting [\[eq:I2M+1-11\]](#eq:I2M+1-11){reference-type="eqref" reference="eq:I2M+1-11"} and [\[eq:I2M+1-12\]](#eq:I2M+1-12){reference-type="eqref" reference="eq:I2M+1-12"} into [\[eq:I-2m+1-1-def\]](#eq:I-2m+1-1-def){reference-type="eqref" reference="eq:I-2m+1-1-def"} gives $$\begin{aligned} I_{1}^{(2M+1)} = M(2M-1)!!\cdot n^{M-1} + O(n^{M-2}\log n).\end{aligned}$$ In conclusion, $$\begin{aligned} &\left(1-\frac{A(n,n)}{n!}\right)\mathbf{E}\big[(X_n-\mu_n)^{2M+1}\big]\\ &= \sum_{k=1}^{n-1}\frac{A(n,k)}{n!}\,\mathbf{E}\big[(X_k-\mu_k)^{2M+1}\big]\\ &\quad + \left(\tbinom{2M+1}{1}M(2M-1)!!+\tbinom{2M+1}{2}\tfrac{2}{3}(M-1)(2M-1)!!-\tbinom{2M+1}{3}(2M-3)!!\right)\cdot n^{M-1}\\ &\quad + O(n^{M-2}\log n).\end{aligned}$$ Noting that $\tfrac{2}{3}M(2M+1)!!$ equals $$\begin{aligned} \tfrac{1}{(M-1)+1}\cdot \left(\tbinom{2M+1}{1} M(2M-1)!!+\tbinom{2M+1}{2}\tfrac{2}{3}(M-1)(2M-1)!!-\tbinom{2M+1}{3}(2M-3)!!\right),\end{aligned}$$ we arrive at the case of $2M+1$ in Theorem [Theorem 29](#th:c-mom-new){reference-type="ref" reference="th:c-mom-new"} after applying Theorems [Theorem 16](#th:xi-asymp){reference-type="ref" reference="th:xi-asymp"} and [Theorem 17](#th:xi-asymp-error){reference-type="ref" reference="th:xi-asymp-error"}. # Acknowledgment {#acknowledgment .unnumbered} The second and last authors would like to thank their colleague, Dr. Xingshi Cai from Duke Kunshan University, for discussions at the early stage of this project in their Discrete Math Seminar. [^1]: corresponding author
--- abstract: | In this paper, we focus on an asynchronous distributed optimization problem. In our problem, each node is endowed with a convex local cost function, and is able to communicate with its neighbors over a directed communication network. Furthermore, we assume that the communication channels between nodes have limited bandwidth, and each node suffers from processing delays. We present a distributed algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. In our proposed algorithm, nodes exchange quantized valued messages and operate in an asynchronous fashion. More specifically, during every iteration of our algorithm each node (i) solves a local convex optimization problem (for the one of its primal variables), and (ii) utilizes a finite-time quantized averaging algorithm to obtain the value of the second primal variable (since the cost function for the second primal variable is not decomposable). We show that our algorithm converges to the optimal solution at a rate of $O(1/k)$ (where $k$ is the number of time steps) for the case where the local cost function of every node is convex and not-necessarily differentiable. Finally, we demonstrate the operational advantages of our algorithm against other algorithms from the literature. author: - "Apostolos I. Rikos, Wei Jiang, Themistoklis Charalambous, and Karl H. Johansson [^1] [^2] [^3] [^4] [^5]" bibliography: - bibliografia_consensus.bib title: | **Asynchronous Distributed Optimization via ADMM\ with Efficient Communication** --- # Introduction {#sec:intro} The problem of distributed optimization has received extensive attention in recent years. Due to the rise of large-scale machine learning [@2020:Nedich], control [@SEYBOTH:2013], and other data-driven applications [@2018:Stich_Jaggi], there is a growing need to solve optimization problems that involve massive amounts of data. 
Solving these problems in a centralized fashion is often infeasible, since it is difficult or impossible to store and process such large amounts of data on a single node. Distributed optimization instead spreads the data across multiple nodes. Each node performs computations on its locally stored data and collaborates with the others to solve the optimization problem collectively. In this way, a global objective function composed of the nodes' local objective functions is optimized through network-wide coordination, which reduces the computational and storage requirements of individual nodes. However, frequent communication with neighboring nodes is necessary to update the optimization variables, and this can become a bottleneck as the number of nodes or the volume of data grows. To address this issue, recent work has focused on developing optimization algorithms with efficient communication. Such algorithms enhance scalability and operational efficiency, while mitigating issues such as network congestion, latency, and bandwidth limitations. **Existing Literature.** Most works in the literature assume that nodes can process and exchange real values. This may result in significant communication overhead, especially for algorithms requiring frequent and complex communication (see, e.g., [@2017:Makhdoumi_Ozdaglar; @2009:Nedic_Optim; @2021:Tiancheng_Uribe; @2016:Ling_Yongmei; @2021:Wei_Themis; @2021:Bastianello_Todescato; @2022:Khatana_Salapaka]). In practical applications, nodes must exchange quantized messages in order to efficiently utilize network resources such as energy and processing power. For this reason, recent research has focused on communication-efficient algorithms (e.g., [@2013:Xavier_Markus; @2016:Shengyu_Biao; @2016:Ling_Yongmei; @2017:Tsai_Chang; @2021:Tiancheng_Uribe; @2022:Yaohua_Ling; @rikos2023distributed]), but these often assume perfectly synchronized nodes or bidirectional communication, which limits their applicability.
Addressing communication overhead remains a key challenge, necessitating the development of communication-efficient algorithms that can operate over directed networks asynchronously. Therefore, continued research in this area is crucial to overcoming this bottleneck and enhancing the performance of distributed optimization methods. **Main Contributions.** Existing algorithms in the literature often assume that nodes can exchange precise values of their optimization variables and operate synchronously. However, transmitting exact values (often irrational numbers) necessitates an infinite number of bits and becomes infeasible. Moreover, synchronizing nodes within a distributed network involves costly protocols that are time-consuming to execute. In this paper, we present a distributed optimization algorithm which aims to address these challenges. More specifically, we make the following contributions.\ **A.** We present a distributed optimization algorithm that leverages the advantages of the ADMM optimization strategy and operates over a directed communication graph. Our algorithm allows nodes to operate in an asynchronous fashion, and enables efficient communication as nodes communicate with quantized messages; see Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}.\ **B.** We prove that our algorithm converges to the optimal solution at a rate of $O(1/k)$ even for non-differentiable and convex local cost functions (as is the case for similar algorithms with real-valued states). This rate is corroborated by our simulations, in which our algorithm exhibits performance comparable with that of real-valued communication algorithms while guaranteeing efficient (quantized) communication among nodes; see Section [6](#sec:results){reference-type="ref" reference="sec:results"}.
Furthermore, we show that the optimal solution is calculated within an error bound that depends on the quantization level; see Theorem [Theorem 1](#converge_Alg1){reference-type="ref" reference="converge_Alg1"}. # NOTATION AND PRELIMINARIES {#sec:preliminaries} **Notation.** The sets of real, rational, integer and natural numbers are denoted by $\mathds {R}, \mathds {Q}, \mathds {Z}$ and $\mathds {N}$, respectively. The symbol $\mathds {Z}_{\geq 0}$ ($\mathds {Z}_{> 0}$) denotes the set of nonnegative (positive) integer numbers. The symbol $\mathds {R}_{\geq 0}$ ($\mathds {R}_{> 0}$) denotes the set of nonnegative (positive) real numbers. The symbol $\mathds {R}^n_{\geq 0}$ denotes the nonnegative orthant of the $n$-dimensional real space $\mathds {R}^n$. Matrices are denoted with capital letters (e.g., $A$), and vectors with small letters (e.g., $x$). The transposes of matrix $A$ and vector $x$ are denoted by $A^\top$, $x^\top$, respectively. For any real number $a \in \mathds {R}$, the floor $\lfloor a \rfloor$ denotes the greatest integer less than or equal to $a$ while the ceiling $\lceil a \rceil$ denotes the least integer greater than or equal to $a$. For any matrix $A \in \mathds {R}^{n \times n}$, $a_{ij}$ denotes the entry in row $i$ and column $j$. By $\mathds {1}$ and $\mathds {I}$ we denote the all-ones vector and the identity matrix of appropriate dimensions, respectively. By $\| \cdot \|$, we denote the Euclidean norm of a vector. **Graph Theory.** The communication network is captured by a directed graph (digraph) defined as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. This digraph consists of $n$ ($n \geq 2$) agents communicating only with their immediate neighbors, and is static (i.e., it does not change over time).
In $\mathcal{G}$, the set of nodes is denoted as $\mathcal{V} = \{ v_1, v_2, ..., v_n \}$, and the set of edges as $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V} \cup \{ (v_i, v_i) \ | \ v_i \in \mathcal{V} \}$ (note that each agent has also a virtual self-edge). The cardinalities of the sets of nodes and edges are denoted by $| \mathcal{V} | = n$ and $| \mathcal{E} | = m$, respectively. A directed edge from node $v_i$ to node $v_l$ is denoted by $(v_l, v_i) \in \mathcal{E}$, and captures the fact that node $v_l$ can receive information from node $v_i$ at time step $k$ (but not the other way around). The subset of nodes that can directly transmit information to node $v_i$ is called the set of in-neighbors of $v_i$ and is represented by $\mathcal{N}_i^- = \{ v_j \in \mathcal{V} \; | \; (v_i, v_j)\in \mathcal{E}\}$. The subset of nodes that can directly receive information from node $v_i$ is called the set of out-neighbors of $v_i$ and is represented by $\mathcal{N}_i^+ = \{ v_l \in \mathcal{V} \; | \; (v_l, v_i)\in \mathcal{E}\}$. The *in-degree* and *out-degree* of $v_i$ are denoted by $\mathcal{D}_i^- = | \mathcal{N}_i^- |$, $\mathcal{D}_i^+ = | \mathcal{N}_i^+ |$, respectively. The diameter $D$ of a digraph is the longest shortest path between any two nodes $v_l, v_i \in \mathcal{V}$. A directed *path* from $v_i$ to $v_l$ of length $t$ exists if we can find a sequence of agents $i \equiv l_0,l_1, \dots, l_t \equiv l$ such that $(l_{\tau+1},l_{\tau}) \in \mathcal{E}$ for $\tau = 0, 1, \dots , t-1$. A digraph is *strongly connected* if there exists a directed path from every node $v_i$ to every node $v_l$, for every $v_i, v_l \in \mathcal{V}$.
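The digraph notions above can be sketched in a few lines of Python. The toy digraph below (node labels and edges are illustrative only) stores edges as (receiver, sender) pairs, matching the convention that $(v_l, v_i) \in \mathcal{E}$ means $v_l$ receives from $v_i$:

```python
# Toy 4-node digraph; an edge (l, i) means node i can send to node l.
edges = {(2, 1), (3, 2), (4, 3), (1, 4), (3, 1)}
nodes = {1, 2, 3, 4}

def in_neighbors(i):
    # nodes v_j from which v_i can directly receive
    return {j for (l, j) in edges if l == i}

def out_neighbors(i):
    # nodes v_l that can directly receive from v_i
    return {l for (l, j) in edges if j == i}

def strongly_connected():
    # the digraph is strongly connected iff every node reaches all nodes
    def reachable(src):
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for l in out_neighbors(u):
                if l not in seen:
                    seen.add(l)
                    stack.append(l)
        return seen
    return all(reachable(i) == nodes for i in nodes)

assert in_neighbors(3) == {1, 2}
assert out_neighbors(1) == {2, 3}
assert strongly_connected()   # cycle 1 -> 2 -> 3 -> 4 -> 1 makes it so
```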
**ADMM Algorithm.** The standard ADMM algorithm [@2014:Wei_Wotao] is designed to solve the following problem: $$\begin{aligned} \label{admm_prob_min} \min_{x \in \mathds {R}^p, z \in \mathds {R}^m}~ & f(x) + g(z), \\ \text{s.t.}~ & A x + B z = c, \nonumber\end{aligned}$$ where $A \in \mathds {R}^{q \times p}$, $B \in \mathds {R}^{q \times m}$ and $c \in \mathds {R}^{q}$ (for $q, p, m \in \mathds {N}$). In order to solve [\[admm_prob_min\]](#admm_prob_min){reference-type="eqref" reference="admm_prob_min"}, the augmented Lagrangian is: $$\begin{aligned} \label{admm_Lagrangian} L_{\rho} (x, z, \lambda) = f(x) + g(z) & + \lambda^\top (A x + B z - c) \nonumber \\ & + \frac{\rho}{2} \| A x + B z - c \|^2, \end{aligned}$$ where $\lambda \in \mathds {R}^{q}$ is the Lagrange multiplier, and $\rho \in \mathds {R}_{>0}$ is the penalty parameter. The primal variables $x$, $z$ and the Lagrange multiplier $\lambda$ are initialized as $[ x, z, \lambda ]^\top = [ x^{[0]}, z^{[0]}, \lambda^{[0]} ]^\top$. Then, during every ADMM time step, $x$, $z$ and $\lambda$ are updated as: $$\begin{aligned} x^{[k+1]} = & \mathop{\mathrm{arg\,min}}_{x \in \mathds {R}^p} L_{\rho} (x, z^{[k]}, \lambda^{[k]}), \label{admm_iterations_x} \\ z^{[k+1]} = & \mathop{\mathrm{arg\,min}}_{z \in \mathds {R}^m} L_{\rho} (x^{[k+1]}, z, \lambda^{[k]}), \label{admm_iterations_z} \\ \lambda^{[k+1]} = & \lambda^{[k]} + \rho (A x^{[k+1]} + B z^{[k+1]} - c) , \label{admm_iterations_lambda} \end{aligned}$$ where $\rho$ in [\[admm_iterations_lambda\]](#admm_iterations_lambda){reference-type="eqref" reference="admm_iterations_lambda"} is the penalty parameter from [\[admm_Lagrangian\]](#admm_Lagrangian){reference-type="eqref" reference="admm_Lagrangian"}. **Asymmetric Quantizers.** Quantization is a strategy that lessens the number of bits needed to represent information. This reduces the required communication bandwidth and increases power and computation efficiency.
Quantization is mainly used to describe communication constraints and imperfect information exchanges between nodes [@2019:Wei_Johansson]. The three main types of quantizers are (i) asymmetric, (ii) uniform, and (iii) logarithmic. In this paper, we rely on asymmetric quantizers to reduce the required communication bandwidth. Note that the results of this paper are transferable to other quantizer types (e.g., logarithmic or uniform). Asymmetric quantizers are defined as $$\label{asy_quant_defn} q_{\Delta}^a(\xi) = \Bigl \lfloor \frac{\xi}{\Delta} \Bigr \rfloor,$$ where $\Delta \in \mathds {Q}$ is the quantization level, $\xi \in \mathds {R}$ is the value to be quantized, and $q_{\Delta}^a(\xi) \in \mathds {Q}$ is the quantized version of $\xi$ with quantization level $\Delta$ (note that the superscript "$a$" indicates that the quantizer is asymmetric). # Problem Formulation {#sec:probForm} **Problem Statement.** Let us consider a distributed network modeled as a digraph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $n = | \mathcal{V} |$ nodes. In our network $\mathcal{G}$, we assume that the communication channels among nodes have limited bandwidth. Each node $v_i$ is endowed with a scalar local cost function $f_i(x): \mathds {R}^p \mapsto \mathds {R}$ only known to node $v_i$. In this paper we aim to develop a distributed algorithm which allows nodes to cooperatively solve the following optimization problem $$\begin{aligned} \label{admm_prob_min_probForm} \min_{x \in \mathds {R}^p}~ & \sum_{i=1}^n f_i(x) , \end{aligned}$$ where $x \in \mathds {R}^p$ is the global optimization variable (or common decision variable). We will solve [\[admm_prob_min_probForm\]](#admm_prob_min_probForm){reference-type="eqref" reference="admm_prob_min_probForm"} via the distributed ADMM strategy. Furthermore, in our solution we guarantee efficient communication between nodes (due to communication channels of limited bandwidth in the network). 
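As a side illustration of the asymmetric quantizer introduced above, the following sketch (illustrative values of our own choosing; not part of the algorithms themselves) shows that the one-sided quantization error always lies in $[0, \Delta)$:

```python
def quantize_asym(xi, delta):
    # q_Delta^a(xi) = floor(xi / delta): the integer index that is transmitted
    return int(xi // delta)

# a receiver recovers a representable value by scaling back with delta;
# the asymmetric quantization error always lies in [0, delta)
delta = 0.25
for xi in (1.0, 1.1, 1.24, -0.3):
    recovered = quantize_asym(xi, delta) * delta
    assert 0.0 <= xi - recovered < delta
```

Note that, unlike a symmetric (round-to-nearest) quantizer, the error here is one-sided, which is why the later analysis bounds the per-state error by $\Delta$ rather than $\Delta/2$.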
**Modification of the Optimization Problem.** In order to solve [\[admm_prob_min_probForm\]](#admm_prob_min_probForm){reference-type="eqref" reference="admm_prob_min_probForm"} via the ADMM and guarantee efficient communication between nodes, we introduce (i) the variable $x_i$ for every node $v_i$, (ii) the constraint $\| x_i - x_j \| \le \epsilon$ for every $v_i, v_j \in \mathcal{V}$ (where $\epsilon \in \mathds {R}$ is a predefined error tolerance), and (iii) the constraint that nodes communicate with quantized values. The second constraint allows an asynchronous implementation of the distributed ADMM strategy, and the third guarantees efficient communication between nodes. Considering the aforementioned constraints (i), (ii) and (iii), [\[admm_prob_min_probForm\]](#admm_prob_min_probForm){reference-type="eqref" reference="admm_prob_min_probForm"} becomes: $$\begin{aligned} \min_{x_1, \ldots, x_n}~ & \sum_{i=1}^n f_i(x_i) , \label{admm_prob_min_probForm_1} \\ \text{s.t.}~ & \| x_i - x_j \| \le \epsilon, \forall v_i, v_j \in \mathcal{V}, \label{admm_prob_min_probForm_2} \\ & \text{nodes communicate with quantized values.} \label{constr_quant} \end{aligned}$$ Let us now define a closed nonempty convex set $\mathcal{C}$ as $$\label{setC} \mathcal{C} = \left\{\begin{bmatrix} x_1^{\mbox{\tiny T}}& x_2^{\mbox{\tiny T}}& \ldots & x_n^{\mbox{\tiny T}}\end{bmatrix}^{\mbox{\tiny T}}\in \mathds {R}^{np} \, :\, \| x_i - x_j\| \le \epsilon, \ \forall v_i, v_j \in \mathcal{V} \right\} .$$ Furthermore, denote $X \coloneqq \begin{bmatrix} x_1^{\mbox{\tiny T}}& x_2^{\mbox{\tiny T}}& \ldots & x_n^{\mbox{\tiny T}}\end{bmatrix}^{\mbox{\tiny T}}$ and its copy variable $z \in \mathds {R}^{np}$. 
This means that $\eqref{admm_prob_min_probForm_2}$ and [\[setC\]](#setC){reference-type="eqref" reference="setC"} become $$\label{problem_reformulated2} X = z, \quad z \in \mathcal{C}.$$ Now let us define the indicator function $g(z)$ of the set $\mathcal{C}$ as $$\label{indicator_function_g} g(z) = \left\{ \begin{array}{l} \begin{aligned} &0,\quad \text{if} \, z \in \mathcal{C},\\ &\infty, \, \ \text{otherwise}. \end{aligned} \end{array} \right.$$ Incorporating [\[problem_reformulated2\]](#problem_reformulated2){reference-type="eqref" reference="problem_reformulated2"} and [\[indicator_function_g\]](#indicator_function_g){reference-type="eqref" reference="indicator_function_g"} into [\[admm_prob_min_probForm_1\]](#admm_prob_min_probForm_1){reference-type="eqref" reference="admm_prob_min_probForm_1"}, we have that [\[admm_prob_min_probForm_1\]](#admm_prob_min_probForm_1){reference-type="eqref" reference="admm_prob_min_probForm_1"} becomes $$\label{objective_function_no_quant} \begin{aligned} \min_{x_1, \ldots, x_n, z} \, & \left\{ \sum_{i=1}^{n} f_i(x_i) + g(z) \right\}, \\ \text{s.t.} \, & X - z = 0, \\ & \text{nodes communicate with quantized values.} \end{aligned}$$ As a result, in this paper, we aim to design a distributed algorithm that solves [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"} via the distributed ADMM strategy. # Preliminaries on Distributed Coordination {#sec:prelim_distrCoord} We now present a definition of asynchrony (borrowed from [@2021:Wei_Themis]) that defines the operation of nodes in the network. Furthermore, we present a distributed coordination algorithm that operates with quantized values and is necessary for our subsequent development. ## Definition of Asynchronous Operation {#Prel_asynch_oper} During their optimization operation, nodes aim to coordinate in an asynchronous fashion. 
Specifically, let us assume that the iterations for the optimization operation start at time step $t(0)\in \mathds {R}_{+}$. Furthermore, we assume that one (or more) nodes transmit values to their out-neighbors at a set of time instances $\mathcal{T}=\{t(1), t(2),t(3),\ldots\}$. During the nodes' asynchronous operation, a message that is received at time step $t(\eta_1)$ from node $v_i$, is processed at time step $t(\eta_2)$ where $\eta_2 > \eta_1$. This means that the message received at time step $t(\eta_1)$ suffers from a processing delay of $t(\eta_2)-t(\eta_1)$ time steps. An example of how processing delays affect transmissions is shown in Fig. [1](#fig:async){reference-type="ref" reference="fig:async"} (borrowed from [@2021:Wei_Themis]). ![Example of how processing and transmission delays affect the operation of nodes $v_1$, $v_2$, $v_3$. Blue dots indicate the iterations and blue arrows indicate the transmissions. Transmissions occur at time steps $t_i(\eta)$, and $t_i(\eta+1) - t_i(\eta)$ is the processing delay, where $i \in \{1, 2, 3\}$, $\eta \in \mathds {Z}_{\geq 0}$. The time difference from the blue dot to the blue arrow is the transmission delay [@2021:Wei_Themis].](Figures/async_new.pdf){#fig:async width="0.9\\columnwidth"} Note here that the nodes' states at time step $t(\eta)$ are indexed by $\eta$. This means that the state of node $v_i$ at time step $t(\eta)$ is denoted as $x_i^{\eta} \in \mathds {R}^p$. We now present the following assumption which is necessary for the asynchronous operation of every node. **Assumption 1**. *The number of time steps required for a node $v_i$ to process the information received from its in-neighbors is upper bounded by $\mathcal{B} \in \mathds {N}$. 
Furthermore, the actual time (in seconds) required for a node $v_i$ to process the information received from its in-neighbors is upper bounded by $T \in \mathds {R}_{\geq 0}$.* Assumption [Assumption 1](#assump_time_index_bound){reference-type="ref" reference="assump_time_index_bound"} states that there exists a finite number of steps $\mathcal{B}$ before which all nodes have updated their states and proceed to perform transmissions to their neighboring nodes. The upper bound $\mathcal{B}$ is translated to an upper bound of $T$ in actual time (in seconds). This is mainly because it is not possible for nodes to count the number of time steps elapsed in the network (and understand when $\mathcal{B}$ time steps have passed). The value $T$, however, can be tracked by each node individually. ## Asynchronous $\max$/$\min$ - Consensus {#Prel_asy_maxmincons} In asynchronous max/min consensus (see [@2013:Giannini]), the update rule for every node $v_i \in \mathcal{V}$ is: $$\begin{aligned} \label{asynch_max_operation_eq} x_i^{[k + \theta_i^{[k]}]} = \max_{v_{j}\in \mathcal{N}_i^{-} \cup \{v_{i}\}}\{ x_j^{[k + \theta_{ij}^{[k]}]} \}, \end{aligned}$$ where $\theta_i^{[k]}$ is the update instance of node $v_i$, $x_j^{[k + \theta_{ij}^{[k]}]}$ are the states of the in-neighbors $v_{j}\in \mathcal{N}_i^{-} \cup \{v_{i}\}$ during the time instant of $v_i$'s update, and $\theta_{ij}^{[k]}$ are the asynchronous state updates of the in-neighbors of node $v_i$ that occur between two consecutive updates of node $v_i$'s state. The asynchronous max/min consensus in [\[asynch_max_operation_eq\]](#asynch_max_operation_eq){reference-type="eqref" reference="asynch_max_operation_eq"} converges to the maximum value among all nodes in a finite number of steps $s' \leq D \mathcal{B}$ (see [@2013:Giannini]), where $D$ is the diameter of the network, and $\mathcal{B}$ is the upper bound on the number of time steps required for a node $v_j$ to process the information received from its in-neighbors. 
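To make the max-consensus iteration concrete, here is a minimal synchronous sketch (a simplification of the asynchronous rule above, with no processing delays and all nodes updating simultaneously); on a digraph it reaches the network-wide maximum in at most $D$ iterations:

```python
def max_consensus(x0, in_neighbors, num_iters):
    # synchronous max-consensus: every node keeps the maximum over
    # its own state and the states of its in-neighbors
    x = list(x0)
    for _ in range(num_iters):
        x = [max([x[i]] + [x[j] for j in in_neighbors[i]])
             for i in range(len(x))]
    return x

# directed ring 0 -> 1 -> 2 -> 3 -> 0 has diameter D = 3;
# after D iterations every node holds the global maximum
in_nbrs = {0: [3], 1: [0], 2: [1], 3: [2]}
print(max_consensus([5.0, 1.0, 3.0, 2.0], in_nbrs, 3))  # -> [5.0, 5.0, 5.0, 5.0]
```

Min-consensus is obtained by replacing `max` with `min`; with the processing-delay bound $\mathcal{B}$, the asynchronous version needs at most $D \mathcal{B}$ steps, as stated above.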
# Distributed Asynchronous Optimization via ADMM with Efficient Communication {#sec:distr_ADMM_quant} In this section we present a distributed algorithm which solves problem [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}. Before presenting the operation of the proposed algorithm, we analyze the ADMM operation over the problem [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}. In [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}, let us denote $F(X) \coloneqq \sum_{i=1}^{n} f_i(x_i)$. This means that the Lagrangian function is equal to $$\label{Lagrangian} L(X,z,\lambda) = F(X) + g(z) + \lambda^{\mbox{\tiny T}}(X - z),$$ where $\lambda \in \mathds {R}^{np}$ is the Lagrange multiplier. We now make the following assumptions to solve the problem [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}. **Assumption 2**. *Every cost function $f_i:\mathds {R}^{p} \rightarrow \mathds {R}$ is closed, proper and convex.* **Assumption 3**. *The Lagrangian $L(X,z,\lambda)$ has a saddle point. This means that there exists $(X^{*},z^{*},\lambda^{*})$, for which $$\label{saddle_point} L(X^{*},z^{*}, \lambda)\le L (X^{*},z^{*},\lambda^{*})\le L(X,z,\lambda^{*}),$$ for all $X \in \mathds {R}^{np}$, $z \in \mathds {R}^{np}$, and $\lambda \in \mathds {R}^{np}$.* Assumption [Assumption 2](#assup_convex){reference-type="ref" reference="assup_convex"} means that the local cost function $f_i$ of every node $v_i$ can be non-differentiable (see [@boyd2011distributed]). 
Furthermore, Assumptions [Assumption 2](#assup_convex){reference-type="ref" reference="assup_convex"} and [Assumption 3](#assup_saddel_point){reference-type="ref" reference="assup_saddel_point"} mean that $L(X,z,\lambda^{*})$ is convex in $(X,z)$ and $(X^{*},z^{*})$ is a solution to problem [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"} (see [@boyd2011distributed; @wei2012distributed]). Note that this is also based on the definition of $g(z)$ in [\[indicator_function_g\]](#indicator_function_g){reference-type="eqref" reference="indicator_function_g"}. Note here that our results extend naturally to strongly convex cost functions, since strong convexity implies convexity. Let us now focus on the Lagrangian of the problem in [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}. At time step $k$, the augmented Lagrangian of [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"} is $$\begin{aligned} \label{augmented_Lagrangian2} L_{\rho}&(X^{[k]},z^{[k]},\lambda^{[k]}) \\ =& \sum_{i=1}^{n} f_i(x_i^{[k]}) + g(z^{[k]}) + {\lambda^{[k]}}^{\mbox{\tiny T}}(X^{[k]} - z^{[k]} ) \nonumber \\ +& \frac{\rho}{2}\|X^{[k]} - z^{[k]} \|^{2} \nonumber \\ =& \sum_{i=1}^{n}\left( f_i(x_i^{[k]}) + {\lambda_i^{[k]}}^{\mbox{\tiny T}}(x_i^{[k]} - z_i^{[k]}) + \frac{\rho}{2}\|x_i^{[k]} - z_i^{[k]} \|^{2} \right) \nonumber \\ +& g(z^{[k]}) , \nonumber \end{aligned}$$ where $z_i\in \mathds {R}^{p}$ is the $i^{th}$ element of vector $z$. In [\[admm_iterations_x\]](#admm_iterations_x){reference-type="eqref" reference="admm_iterations_x"}--[\[admm_iterations_lambda\]](#admm_iterations_lambda){reference-type="eqref" reference="admm_iterations_lambda"} we ignore terms that are independent of the optimization variables such as $x_i, z$ for node $v_i$. 
This means that [\[admm_iterations_x\]](#admm_iterations_x){reference-type="eqref" reference="admm_iterations_x"}--[\[admm_iterations_lambda\]](#admm_iterations_lambda){reference-type="eqref" reference="admm_iterations_lambda"} become: $$\begin{aligned} x_i^{[k+1]} =& \operatorname*{argmin}_{x_i} f_i(x_i) + {\lambda_i^{[k]}}^{\mbox{\tiny T}}x_i + \frac{\rho}{2}\|x_i - z_i^{[k]} \|^{2}, \label{dadmm_x}\\ z^{[k+1]} =& \operatorname*{argmin}_z g(z) + {\lambda^{[k]}}^{\mbox{\tiny T}}(X^{[k+1]} - z ) + \frac{\rho}{2}\|X^{[k+1]} - z \|^{2}\nonumber \\ =& \operatorname*{argmin}_z g(z) +\frac{\rho}{2}\|X^{[k+1]} - z + \frac{1}{\rho}\lambda^{[k]} \|^{2}, \label{dadmm_z}\\ \lambda_i^{[k+1]} =& \lambda_i^{[k]} + \rho (x_i^{[k+1]} - z_i^{[k+1]}) \label{dadmm_lamda}. \end{aligned}$$ Note that for [\[dadmm_z\]](#dadmm_z){reference-type="eqref" reference="dadmm_z"} we use the identity $2a^{\mbox{\tiny T}}b + \|b\|^2 = \|a+b\|^2-\|a\|^2$ for $a = \lambda^{[k]}/\rho$ and $b = X^{[k+1]} - z$. Equations [\[dadmm_x\]](#dadmm_x){reference-type="eqref" reference="dadmm_x"}, [\[dadmm_lamda\]](#dadmm_lamda){reference-type="eqref" reference="dadmm_lamda"} can be executed independently by node $v_i$ in a parallel fashion. Specifically, node $v_i$ can solve [\[dadmm_x\]](#dadmm_x){reference-type="eqref" reference="dadmm_x"} for $x_i^{[k+1]}$ by a classical method (e.g., a proximity operator [@boyd2011distributed Section 4]), and implement [\[dadmm_lamda\]](#dadmm_lamda){reference-type="eqref" reference="dadmm_lamda"} trivially for $\lambda_i^{[k+1]}$. In [\[indicator_function_g\]](#indicator_function_g){reference-type="eqref" reference="indicator_function_g"}, $g(z)$ is the indicator function of the closed nonempty convex set $\mathcal{C}$. This means that [\[dadmm_z\]](#dadmm_z){reference-type="eqref" reference="dadmm_z"} becomes $$\begin{aligned} z^{[k+1]} = \Pi_{\mathcal{C}}(X^{[k+1]}+ \lambda^{[k]}/\rho) , \end{aligned}$$ where $\Pi_{\mathcal{C}}$ is the projection (in the Euclidean norm) onto $\mathcal{C}$. 
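To see the three updates working together, consider the following centralized sketch (our own illustrative example, not the distributed algorithm itself): scalar quadratic costs $f_i(x) = \frac{a_i}{2}x^2 + b_i x$, for which the $x$-update has a closed form, and the projection onto $\mathcal{C}$ taken in the limit $\epsilon \to 0$, i.e., exact averaging:

```python
def admm_consensus(a, b, rho=1.0, iters=500):
    # consensus ADMM for min sum_i (a_i/2) x^2 + b_i x  (scalar states)
    n = len(a)
    x, z, lam = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # x-update: closed form of argmin f_i(x) + lam_i x + (rho/2)(x - z_i)^2
        x = [(rho * z[i] - b[i] - lam[i]) / (a[i] + rho) for i in range(n)]
        # z-update: projection onto consensus = averaging of x_i + lam_i / rho
        avg = sum(x[i] + lam[i] / rho for i in range(n)) / n
        z = [avg] * n
        # dual update
        lam = [lam[i] + rho * (x[i] - z[i]) for i in range(n)]
    return z[0]

a, b = [1.0, 2.0, 3.0], [1.0, -4.0, 0.0]
x_star = -sum(b) / sum(a)  # centralized optimum, here 0.5
assert abs(admm_consensus(a, b) - x_star) < 1e-6
```

The distributed algorithm presented later replaces the exact average in the $z$-update with a finite-time quantized approximation of it.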
It is important to note here that the elements of $z$ (i.e., $z_1, z_2, \ldots, z_n$) should belong to the set $\mathcal{C}$ in finite time. This is due to the definition of $g(z)$ in [\[indicator_function_g\]](#indicator_function_g){reference-type="eqref" reference="indicator_function_g"}. Specifically, if the elements of $z$ do not belong in $\mathcal{C}$ then $g(z) = \infty$ (thus [\[dadmm_z\]](#dadmm_z){reference-type="eqref" reference="dadmm_z"} cannot be executed). Therefore, we need to adjust the values of the elements of $z$ so that they belong in the set $\mathcal{C}$ in finite time (i.e., we need to set the elements of $z$ such that $\| z_i - z_j\| \le \epsilon, \forall v_i, v_j \in \mathcal{V}$). Note that if $z_i - z_j = 0, \forall v_i, v_j \in \mathcal{V}$, then every node $v_i \in \mathcal{V}$ has reached consensus. Specifically, we can have that in finite time the state $z_i$ becomes $$\begin{aligned} \label{consensus_z} z_i = \frac{1}{n} \sum_{l=1}^{n}z_l^{[0]} , \forall v_i \in \mathcal{V} ,\end{aligned}$$ where $z_l^{[0]} = x_l^{[k+1]} + \lambda_l^{[k]}/\rho$. Furthermore, $\| z_i - z_j\| \le \epsilon, \forall v_i, v_j\in\mathcal{V}$ holds whenever $$\begin{aligned} \label{consensus_z2} z_i \in [ \frac{1}{n} \sum_{l=1}^{n}z_l^{[0]} - \frac{\epsilon}{2} , \frac{1}{n} \sum_{l=1}^{n}z_l^{[0]} + \frac{\epsilon}{2} ], \forall v_i \in \mathcal{V} ,\end{aligned}$$ where $z_l^{[0]} = x_l^{[k+1]} + \lambda_l^{[k]}/\rho$. This means that for every node $v_i$, $z_i$ must enter a ball centered at $\frac{1}{n} \sum_{l=1}^{n} (x_l^{[k+1]} + \lambda_l^{[k]}/\rho)$ with radius $\epsilon / 2$. Finally, from [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}, we have that each node in the network needs to communicate with its neighbors in an efficient manner. 
For this reason, we aim to allow each node $v_i$ to coordinate with its neighboring nodes by exchanging quantized values in order to fulfill [\[consensus_z2\]](#consensus_z2){reference-type="eqref" reference="consensus_z2"}. ## Distributed Optimization Algorithm {#alg_ADMM_quant} We now present our distributed optimization algorithm. The algorithm is detailed below as Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} and allows each node in the network to solve the problem presented in [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"}. The operation of the proposed algorithm consists of two parts. During these parts, each node $v_i$ (i) calculates $x_i^{[k+1]}$, $z_i^{[k+1]}$, $\lambda_i^{[k+1]}$ according to [\[dadmm_x\]](#dadmm_x){reference-type="eqref" reference="dadmm_x"}--[\[dadmm_lamda\]](#dadmm_lamda){reference-type="eqref" reference="dadmm_lamda"} (see Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}), and (ii) coordinates with other nodes in a communication-efficient manner in order to calculate a $z_i^{[k+1]}$ that belongs to $\mathcal{C}$ in [\[setC\]](#setC){reference-type="eqref" reference="setC"} (see Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}). Note that Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} is a finite time coordination algorithm with quantized communication and is executed as a step of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}. Note that during Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, nodes operate in an asynchronous fashion. Synchronous operation would require synchronization among nodes or the existence of a global clock so that all nodes agree on their update time. 
In our setting, asynchronous operation arises when each node (i) calculates $x_i^{[k+1]}$, $z_i^{[k+1]}$, $\lambda_i^{[k+1]}$ according to [\[dadmm_x\]](#dadmm_x){reference-type="eqref" reference="dadmm_x"}--[\[dadmm_lamda\]](#dadmm_lamda){reference-type="eqref" reference="dadmm_lamda"} in Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, and (ii) calculates a $z_i^{[k+1]}$ that belongs to $\mathcal{C}$ in [\[setC\]](#setC){reference-type="eqref" reference="setC"} in Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}. This can be achieved by making the internal clocks of all nodes have similar pacing, which allows them to execute the optimization step at roughly the same time [@lamport_time_2019]. Furthermore, making the internal clocks of all nodes have similar pacing does not mean that we have to synchronize the clocks of the nodes (or their time-zones). Note that this is a common procedure in most modern computers, as the clock pacing specification is defined within the Advanced Configuration and Power Interface (ACPI) specification [@AdvancedConfigurationandPowerInterfa]. We now make the following assumption which is important for the operation of our algorithm. **Assumption 4**. *The diameter $D$ (or an upper bound of it) is known to every node $v_i$ in the network.* Assumption [Assumption 4](#digr_diam){reference-type="ref" reference="digr_diam"} is necessary so that each node $v_i$ is able to determine, in a distributed manner, whether the calculation of a $z_i$ belonging to $\mathcal{C}$ in [\[setC\]](#setC){reference-type="eqref" reference="setC"} has been completed. We now present the details of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}. **Input:** Strongly connected $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, parameter $\rho$, diameter $D$, error tolerance $\epsilon \in \mathds {Q}$, upper bound on processing delays $\mathcal{B}$. 
Assumptions [Assumption 1](#assump_time_index_bound){reference-type="ref" reference="assump_time_index_bound"}, [Assumption 2](#assup_convex){reference-type="ref" reference="assup_convex"}, [Assumption 3](#assup_saddel_point){reference-type="ref" reference="assup_saddel_point"}, [Assumption 4](#digr_diam){reference-type="ref" reference="digr_diam"} hold. $k_{\text{max}}$ (ADMM maximum number of iterations).\ **Initialization:** Each node $v_i \in \mathcal{V}$ sets randomly $x_i^{[0]}, z_i^{[0]}, \lambda_i^{[0]}$, and sets $\Delta = \epsilon / 3$.\ **Iteration:** For $k=0,1,2,\dots {, k_{\text{max}}}$, each node $v_i \in \mathcal{V}$ does the following: $\bullet$ Calculate $x_i^{[k+1]}$ via [\[dadmm_x\]](#dadmm_x){reference-type="eqref" reference="dadmm_x"}; Calculate $z_i^{[k+1]}$ = Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}($x_i^{[k+1]} + \lambda_i^{[k]}/\rho, D, \Delta, \mathcal{B}$); Calculate $\lambda_i^{[k+1]}$ via [\[dadmm_lamda\]](#dadmm_lamda){reference-type="eqref" reference="dadmm_lamda"}. **Output:** Each node $v_i \in \mathcal{V}$ calculates $x_i^*$ which solves problem [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"} in Section [3](#sec:probForm){reference-type="ref" reference="sec:probForm"}. [\[alg1\]]{#alg1 label="alg1"} **Input:** $x_i^{[k+1]} + \lambda_i^{[k]}/\rho, D, \Delta, \mathcal{B}$.\ **Initialization:** Each node $v_i \in \mathcal{V}$ does the following: $\bullet$ Assigns probability $b_{li}$ to each out-neighbor $v_l \in \mathcal{N}^+_i \cup \{v_i\}$, as follows $$\begin{aligned} b_{li} = \left\{ \begin{array}{ll} \frac{1}{1 + \mathcal{D}_i^+}, & \mbox{if $l = i$ or $v_{l} \in \mathcal{N}_i^+$,} \\ 0, & \mbox{if $l \neq i$ and $v_{l} \notin \mathcal{N}_i^+$;}\end{array} \right. 
\end{aligned}$$ $\text{flag}_i = 0$, $\xi_i = 2$, $y_i = 2 \ q_{\Delta}^a(x_i^{[k+1]} + \lambda_i^{[k]}/\rho)$ (see [\[asy_quant_defn\]](#asy_quant_defn){reference-type="eqref" reference="asy_quant_defn"}); **Iteration:** For $\eta = 1,2,\dots$, each node $v_i \in \mathcal{V}$, does: $\bullet$ **if** $\eta \mod (D \mathcal{B}) = 1$ **then** sets $M_i = \lceil y_i / \xi_i \rceil$, $m_i = \lfloor y_i / \xi_i \rfloor$; broadcasts $M_i$, $m_i$ to every $v_{l} \in \mathcal{N}_i^+$; receives $M_j$, $m_j$ from every $v_{j} \in \mathcal{N}_i^-$; sets $M_i = \max_{v_{j} \in \mathcal{N}_i^-\cup \{ v_i \}} M_j$,\ $m_i = \min_{v_{j} \in \mathcal{N}_i^-\cup \{ v_i \}} m_j$; sets $d_i = \xi_i$; **while** $d_{i} > 1$ **do** $\bullet$ $c_i^{[\eta]} = \lfloor y_{i} \ / \ \xi_{i} \rfloor$; sets $y_{i} = y_{i} - c_i^{[\eta]}$, $\xi_{i} = \xi_{i} - 1$, and $d_i = d_i - 1$; transmits $c_i^{[\eta]}$ to randomly chosen out-neighbor $v_l \in \mathcal{N}^+_i \cup \{v_i\}$ according to $b_{li}$; receives $c_j^{[\eta]}$ from $v_j \in \mathcal{N}_i^-$ and sets $$\begin{aligned} y_i & = y_i + \sum_{j=1}^{n} \sum_{r=0}^{\mathcal{B}} w^{[r]}_{\eta-r,ij} \ c^{[\eta-r]}_{j} \ , \\ \xi_i & = \xi_i + \sum_{j=1}^{n} \sum_{r=0}^{\mathcal{B}} w^{[r]}_{\eta-r,ij} \ ,\end{aligned}$$ where $w^{[r]}_{\eta-r,ij} = 1$ when the processing time of node $v_i$ is equal to $r$ at time step $\eta-r$, so that node $v_i$ receives $c^{[\eta-r]}_{j}$ (together with a counter increment of $1$) from $v_j$ at time step $\eta$ (otherwise $w^{[r]}_{\eta-r,ij} = 0$ and $v_i$ receives no message at time step $\eta$ from $v_j$); **if** $\eta \mod (D \mathcal{B}) = 0$ **and** $M_i - m_i \leq 1$ **then** sets $z_i^{[k+1]} = m_i \Delta$ and stops operation. **Output:** $z_i^{[k+1]}$. [\[alg2\]]{#alg2 label="alg2"} Note here that Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} shares similarities with the algorithm in [@2021:Wei_Themis]. 
However, Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} is adjusted to guarantee efficient communication between nodes by allowing them to exchange quantized messages (a characteristic that is not present in [@2021:Wei_Themis]). More specifically, the intuition of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} is the following. Each node $v_i$ aims to solve the problem presented in [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"} via the ADMM strategy. Initially, each node chooses a suitable quantization level $\Delta$ so that the constraints of [\[objective_function_no_quant\]](#objective_function_no_quant){reference-type="eqref" reference="objective_function_no_quant"} are fulfilled (as explained later in Remark [Remark 1](#epsilon_choice){reference-type="ref" reference="epsilon_choice"}). During each time step $k$, each node $v_i$ calculates $x_i^{[k+1]}$ via [\[dadmm_x\]](#dadmm_x){reference-type="eqref" reference="dadmm_x"}. Then, each node $v_i$ executes Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} in order to calculate a $z_i^{[k+1]}$ which belongs to the set $\mathcal{C}$ (i.e., $z_i^{[k+1]} \in \mathcal{C}, \forall v_i \in \mathcal{V}$). Finally, each node $v_i$ uses $x_i^{[k+1]}$ and the result of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} in order to calculate $\lambda_i^{[k+1]}$ via [\[dadmm_lamda\]](#dadmm_lamda){reference-type="eqref" reference="dadmm_lamda"}. Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} is a distributed coordination algorithm which allows each node to calculate the quantized average of the nodes' initial states. The main characteristic of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} is that it combines (i) (asymmetric) quantization, (ii) quantized averaging, and (iii) a stopping strategy. 
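These three ingredients can be pictured with a much-simplified, synchronous Python sketch (our own illustration: processing delays are omitted, the max-/min-consensus stopping test is evaluated centrally for brevity, and the graph and values are assumed for the example):

```python
import math
import random

def quantized_average(values, delta, out_neighbors, max_iter=5000, seed=7):
    n = len(values)
    y = [2 * math.floor(v / delta) for v in values]  # y_i = 2 q_Delta^a(.)
    xi = [2] * n                                     # xi_i = 2
    rng = random.Random(seed)
    for _ in range(max_iter):
        # stopping test (centralized here; distributed via max/min-consensus)
        M = max(math.ceil(y[i] / xi[i]) for i in range(n))
        m = min(math.floor(y[i] / xi[i]) for i in range(n))
        if M - m <= 1:
            return [m * delta] * n                   # z_i = m_i * Delta
        new_y, new_xi = [0] * n, [0] * n
        for i in range(n):
            q, r = divmod(y[i], xi[i])
            pieces = [q + 1] * r + [q] * (xi[i] - r)  # near-equal integer split
            new_y[i] += pieces[0]                     # keep one piece locally
            new_xi[i] += 1
            for p in pieces[1:]:                      # scatter the rest randomly
                dest = rng.choice(out_neighbors[i])
                new_y[dest] += p
                new_xi[dest] += 1
        y, xi = new_y, new_xi
    return [math.floor(y[i] / xi[i]) * delta for i in range(n)]

# complete digraph on 4 nodes: final states agree and lie within a few
# multiples of delta of the true average 0.25
out = {i: [0, 1, 2, 3] for i in range(4)}
z = quantized_average([0.1, 0.2, 0.3, 0.4], 0.05, out)
```

The sketch keeps the key invariants of the algorithm: the total mass $\sum_i y_i$ and the total counter $\sum_i \xi_i = 2n$ are conserved, and each node always retains at least one piece (so $\xi_i \geq 1$).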
In Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}, initially each node $v_i$ uses an asymmetric quantizer to quantize its state; see Initialization-step $2$. Then, at each time step $\eta$ each node $v_i$: - splits $y_i$ into $\xi_i$ almost equal pieces (the value of some pieces might be greater than that of others by one); see Iteration-steps $4.1$, $4.2$, - transmits each piece to a randomly selected out-neighbor or to itself; see Iteration-step $4.3$, - receives the pieces transmitted from its in-neighbors, adds them to $y_i$ and $\xi_i$, and repeats the operation; see Iteration-step $4.4$. Finally, every $D \mathcal{B}$ time steps, each node $v_i$ performs in parallel a max-consensus and a min-consensus operation; see Iteration-steps $1$ and $2$. If the results of the max-consensus and min-consensus differ by at most one (see Iteration-step $5$), each node $v_i$ (i) scales the solution according to the quantization level, (ii) stops the operation of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}, and (iii) uses the value $z_i^{[k+1]}$ to continue the operation of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}. Note that Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} converges in a finite number of steps according to [@2021:Rikos_Hadj_Splitting_Autom Theorem $1$], since it has a structure similar to that of the QuAsAvCo algorithm in [@2021:Rikos_Hadj_Splitting_Autom]. **Remark 1**. *It is important to note here that during the initialization of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, the error tolerance $\epsilon$ is chosen to be a rational number (i.e., $\epsilon \in \mathds {Q}$). This is not a limitation for the ADMM optimization process in Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, since the real-valued $\epsilon$ can be chosen such that it is representable as a rational value. 
Furthermore, this choice facilitates the operation of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}. Specifically, a rational value for $\epsilon$ facilitates the choice of a suitable quantization level $\Delta$ (since $\Delta = \epsilon/3$). During the execution of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} nodes quantize their states, thus an error $e_{q_1} \leq \Delta$ is imposed on every state. Then, Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} converges to the quantized average; thus, the final states of the nodes have an error $e_{q_2} \leq \Delta$. This means that after executing Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}, we have $| z_i - z_j | \leq 2\Delta < \epsilon$, and thus we have $z_i^{[k+1]} \in \mathcal{C}$ in [\[setC\]](#setC){reference-type="eqref" reference="setC"}, $\forall v_i \in \mathcal{V}$. For this reason, any choice of $\Delta$ for which $\Delta < \epsilon / 2$ is suitable for the operation of our algorithm for a given error tolerance $\epsilon$.* **Remark 2**. *In practical applications, nodes do not know the value of $\mathcal{B}$. However, $\mathcal{B}$ time steps (the upper bound on processing delays) are guaranteed to elapse within $T$ seconds (see Assumption [Assumption 1](#assump_time_index_bound){reference-type="ref" reference="assump_time_index_bound"}). As noted previously, consistent pacing of each node's clock ensures that the check for convergence at each node will happen at roughly the same time (see [@lamport_time_2019]). Therefore, every $DT$ seconds, each node checks whether Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} can be terminated.* ## Convergence of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} {#ConvADMMAlg} We now analyze the convergence time of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} via the following theorem. 
Our theorem is inspired by [@2021:Wei_Themis] but is adjusted to the quantized nature of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}. However, due to space limitations we omit the proof (we will include it in an extended version of our paper). **Theorem 1**. *Let us consider a strongly connected digraph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. Each node $v_i \in \mathcal{V}$, is endowed with a scalar local cost function $f_i(x): \mathds {R}^p \mapsto \mathds {R}$, and Assumptions [Assumption 1](#assump_time_index_bound){reference-type="ref" reference="assump_time_index_bound"}-[Assumption 4](#digr_diam){reference-type="ref" reference="digr_diam"} hold. Furthermore, every node $v_i$ has knowledge of a parameter $\rho$, the network diameter $D$, an error tolerance $\epsilon \in \mathds {Q}$, and an upper bound on processing delays $\mathcal{B}$. During the operation of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, let us consider the variables $\{X^{[k]}, z^{[k]}, \lambda^{[k]}\}$, where $X^{[k]} = [{x_1^{[k]}}^{\mbox{\tiny T}},{x_2^{[k]}}^{\mbox{\tiny T}}, \ldots, {x_n^{[k]}}^{\mbox{\tiny T}}]^{\mbox{\tiny T}}$ and $\lambda^{[k]} = [{\lambda_1^{[k]}}^{\mbox{\tiny T}},{\lambda_2^{[k]}}^{\mbox{\tiny T}}, \ldots, {\lambda_n^{[k]}}^{\mbox{\tiny T}}]^{\mbox{\tiny T}}$; then, define $\bar{X}^{[k]} = \frac{1}{k} \sum_{s=0}^{k-1}X^{[s+1]}, \bar{z}^{[k]} = \frac{1}{k} \sum_{s=0}^{k-1}z^{[s+1]}$. 
During the operation of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} we have $$\begin{aligned} 0&\le L(\bar X^{[k]},\bar z^{[k]}, \lambda^{*})- L ( X^{*},z^{*},\lambda^{*}) \label{convergence_relationship}\\ &\le \frac{1}{k}\left( \frac{1}{2\rho} \|\lambda^{*}-\lambda^{[0]}\|^2 + \frac{\rho}{2}\|X^{*}-z^{[0]}\|^2\right) + \mathcal{O}( 2\Delta \sqrt{n}) ,\nonumber\end{aligned}$$ for every time step $k$, where $\Delta$ is the quantization level for calculating $z_i \in \mathcal{C}$ in [\[setC\]](#setC){reference-type="eqref" reference="setC"} during the operation of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}.* It is important to note that in Theorem [Theorem 1](#converge_Alg1){reference-type="ref" reference="converge_Alg1"} we focus on the convergence of the optimization steps, i.e., the steps executed during the operation of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}. Due to the operation of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} we have that in [\[convergence_relationship\]](#convergence_relationship){reference-type="eqref" reference="convergence_relationship"} an additional term $\mathcal{O}(2\Delta \sqrt{n})$ appears. This term (as will be seen later in Section [6](#sec:results){reference-type="ref" reference="sec:results"}) affects the precision according to which the optimal solution is calculated. However, we can adjust Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} to operate with a dynamically refined quantization level $\Delta$. For example, we can initially set $\Delta = \epsilon / 3$ (where $\epsilon \in \mathds {Q}$). Then, execute Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} during every time step $k$ with quantization level $\Delta' = \frac{\Delta}{10(k+1)}$. 
Since $\frac{\Delta}{10(k+1)} < \frac{\Delta}{10k}$ for every $k$, Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"} will lead to a reduction of the error on the optimal solution that depends on the quantization level (i.e., the term $\mathcal{O}(2\Delta \sqrt{n})$ in [\[convergence_relationship\]](#convergence_relationship){reference-type="eqref" reference="convergence_relationship"} will be reduced after every execution of Algorithm [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}). However, this analysis is outside the scope of this paper and will be considered in an extended version.

# Simulation Results {#sec:results}

In this section, we present simulation results to demonstrate the operation of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} and its advantages. Furthermore, we compare Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} against existing algorithms and emphasize the introduced improvements. In Fig. [2](#Themis_1){reference-type="ref" reference="Themis_1"}, we focus on a network comprised of $100$ nodes modelled as a directed graph. Each node $v_i$ is endowed with a scalar local cost function $f_i(x) = 0.5 x^\top P_i x + q_i^\top x + r_i$. This cost function is quadratic and convex. Furthermore, for $f_i(x)$ we have that (i) $P_i$ is initialized as the square of a randomly generated symmetric matrix $A_i$ (ensuring it is positive definite), (ii) $q_i$ is initialized as the negation of the product of the transpose of $A_i$ and a randomly generated vector $b_i$ (i.e., it is a linear term), and (iii) $r_i$ is initialized as half of the squared norm of the randomly generated vector $b_i$ (i.e., it is a scalar constant).
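With the initialization (i)–(iii) above, each local cost reduces to $f_i(x) = \tfrac{1}{2}\|A_i x - b_i\|^2$, so every $f_i$ is nonnegative and minimized wherever $A_i x = b_i$. A minimal NumPy sketch of this setup (the dimension, seed, and the name `make_local_cost` are our own choices, not from the paper):

```python
import numpy as np

def make_local_cost(rng, p):
    M = rng.standard_normal((p, p))
    A = 0.5 * (M + M.T)         # randomly generated symmetric matrix A_i
    b = rng.standard_normal(p)  # randomly generated vector b_i
    P = A @ A                   # (i)   P_i = A_i^2, positive (semi)definite
    q = -A.T @ b                # (ii)  q_i = -A_i^T b_i
    r = 0.5 * b @ b             # (iii) r_i = 0.5 * ||b_i||^2
    # f_i(x) = 0.5 x^T P_i x + q_i^T x + r_i = 0.5 ||A_i x - b_i||^2
    f = lambda x: 0.5 * x @ P @ x + q @ x + r
    return f, A, b

rng = np.random.default_rng(0)
f, A, b = make_local_cost(rng, 3)
```

This choice makes each local minimizer easy to read off, which is convenient when checking convergence in the experiments.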
We execute Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} and show how the nodes' states converge to the optimal solution for $\epsilon = 0.03, 0.003, 0.0003$, and $\Delta = 0.01, 0.001, 0.0001$, respectively. We plot the error $e^{[k]}$ defined as $$\label{eq:distance_optimal} e^{[k]} = \frac{\sqrt{\sum_{j=1}^n (x_j^{[k]} - x^*)^2}}{\sqrt{\sum_{j=1}^n (x_j^{[0]} - x^*)^2}} ,$$ where $x^*$ is the optimal solution of the optimization problem in $\eqref{objective_function_no_quant}$. Note that from Remark [Remark 1](#epsilon_choice){reference-type="ref" reference="epsilon_choice"}, we have that any $\Delta < \epsilon / 2$ is suitable for the operation of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} for a given $\epsilon$. In Fig. [2](#Themis_1){reference-type="ref" reference="Themis_1"}, we execute Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} for $\Delta = \epsilon / 3$. We can see that Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} converges to the optimal solution for the three different values of $\epsilon$. However, Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} is able to approximate the optimal solution with a precision that depends on the quantization level (i.e., during Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, nodes are able to calculate a neighborhood of the optimal solution). Reducing the quantization level $\Delta$ allows calculation of the optimal solution with higher precision. Furthermore, we can see that after calculating the optimal solution our algorithm exhibits an oscillatory behavior due to quantized communication. This means that quantized communication introduces nonlinearities into the consensus calculation, which in turn affect the values of the parameters $x$, $z$, and $\lambda$ (see iteration steps $1$, $2$, $3$); this is the source of the oscillatory behavior.
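The error metric above is simply the Euclidean distance of the stacked (scalar) node states from $x^*$, normalized by its initial value. A minimal sketch (the function name is ours):

```python
import math

def relative_error(x_k, x_0, x_star):
    # e^[k] = sqrt(sum_j (x_j^[k] - x*)^2) / sqrt(sum_j (x_j^[0] - x*)^2)
    num = math.sqrt(sum((xj - x_star) ** 2 for xj in x_k))
    den = math.sqrt(sum((xj - x_star) ** 2 for xj in x_0))
    return num / den
```

By construction $e^{[0]} = 1$, so the plotted curves all start at one and decrease as the states approach $x^*$.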
Finally, we can see that Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} exhibits comparable performance to [@2021:Wei_Charalambous] (which is plotted until optimization step $14$) until the neighborhood of the optimal solution is calculated. However, in [@2021:Wei_Charalambous] nodes are able to exchange real-valued messages. Specifically, in [@2021:Wei_Charalambous] nodes are required to form the Hankel matrix and perform additional computations when the matrix loses rank. This requires nodes to exchange the exact values of their states. Therefore, the main advantage of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} compared to [@2021:Wei_Charalambous] is that it exhibits comparable performance while guaranteeing efficient (quantized) communication among nodes.

![Comparison of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} with [@2021:Wei_Charalambous] over a directed graph comprised of $100$ nodes for $\epsilon = 0.03, 0.003, 0.0003$, and $\Delta = 0.01, 0.001, 0.0001$, respectively.](Figures/Themis_3.pdf){#Themis_1 width=".95\\columnwidth"}

# Conclusions and Future Directions {#sec:conclusions}

In this paper, we presented an asynchronous distributed optimization algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. We showed that our proposed algorithm is able to calculate the optimal solution while operating over directed communication networks in an asynchronous fashion, and guaranteeing efficient (quantized) communication between nodes. We analyzed the operation of our algorithm and showed that it converges to a neighborhood of the optimal solution (that depends on the quantization level) at a rate of $O(1/k)$. Finally, we demonstrated the operation of our algorithm and compared it against other algorithms from the literature.
In the future, we aim to enhance the operation of our algorithm to avoid the oscillatory behavior after calculating the optimal solution. Furthermore, we plan to develop strategies that allow calculation of the *exact* optimal solution while guaranteeing efficient communication among nodes. Finally, we will focus on designing efficient communication strategies for non-convex distributed optimization problems. [^1]: Apostolos I. Rikos is with the Department of Electrical and Computer Engineering, Division of Systems Engineering, Boston University, Boston, MA 02215, US. E-mail: `arikos@bu.edu`. [^2]: Wei Jiang resides in Hong Kong, China. Email: `wjiang.lab@gmail.com`. [^3]: T. Charalambous is with the Department of Electrical and Computer Engineering, School of Engineering, University of Cyprus, 1678 Nicosia, Cyprus. He is also with the Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, Espoo, Finland. Email: `charalambous.themistoklis@ucy.ac.cy`. [^4]: K. H. Johansson is with the Division of Decision and Control Systems, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden. He is also affiliated with Digital Futures. E-mail: `kallej@kth.se`. [^5]: Part of this work was supported by the Knut and Alice Wallenberg Foundation, the Swedish Research Council, and the Swedish Foundation for Strategic Research. The work of T. Charalambous was partly supported by the European Research Council (ERC) Consolidator Grant MINERVA (Grant agreement No. 101044629).
---
abstract: |
  For two relatively prime square-free positive integers $a$ and $b$, we study integers of the form $a p+b P_{2}$ and give a new lower bound for the number of such representations, where $a p$ and $b P_{2}$ are both square-free, $p$ is a prime, and $P_{2}$ has at most two prime factors. We also consider four special cases: $p$ small, $p$ and $P_2$ within short intervals, $p$ belonging to a fixed arithmetic sequence, and a Goldbach-type upper bound result. Our new results generalize and improve previous results.
address: The High School Affiliated to Renmin University of China, Beijing 100080, People's Republic of China
author:
- Runbo Li
bibliography:
- bib.bib
title: Remarks on additive representations of natural numbers
---

*Dedicated to Professor Chen Jingrun on the occasion of the 90th anniversary of his birth and the 50th anniversary of the birth of Chen's theorem.*

# Introduction

Let $N_e$ be a sufficiently large even integer, $p$ be a prime, and let $P_{r}$ denote an integer with at most $r$ prime factors counted with multiplicity. For each $N_e \geqslant 4$ and $r \geqslant 2$, we define $$D_{1,r}(N_e):= \left|\left\{p : p \leqslant N_e, N_e-p=P_{r}\right\}\right|.$$ In 1966 Jingrun Chen [@Chen1966] proved his remarkable theorem, now known as Chen's theorem: let $N_e$ be a sufficiently large even integer; then $$D_{1,2}(N_e)>0.67\frac{C_e(N_e) N_e}{(\log N_e)^2},$$ where $$C_e(N_e):=\prod_{\substack{p \mid N_e \\ p>2}} \frac{p-1}{p-2} \prod_{p>2}\left(1-\frac{1}{(p-1)^{2}}\right).$$ The details were published in [@Chen1973]. The original proof of Jingrun Chen was simplified by Pan, Ding and Wang [@PDW1975], Halberstam and Richert [@HR74], Halberstam [@Halberstam1975], and Ross [@Ross1975]. As Halberstam and Richert indicated in [@HR74], it would be interesting to know whether a more elaborate weighting procedure could be adapted to the purpose of (2). This might lead to numerical improvements and could be important.
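The singular series $C_e(N_e)$ converges rapidly, so it can be approximated numerically by truncating both Euler products at a finite cutoff; for $N_e$ a power of $2$ its value is just the twin-prime-type constant $\prod_{p>2}\left(1-1/(p-1)^2\right) \approx 0.6602$. A minimal sketch (the cutoff and function names are our own choices):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def C_e(N, cutoff=10 ** 5):
    # truncation of prod_{p | N, p > 2} (p-1)/(p-2) * prod_{p > 2} (1 - 1/(p-1)^2)
    val = 1.0
    for p in primes_up_to(cutoff):
        if p == 2:
            continue
        val *= 1.0 - 1.0 / (p - 1) ** 2
        if N % p == 0:
            val *= (p - 1) / (p - 2)
    return val
```

The factor $(p-1)/(p-2)$ enters only for odd primes dividing $N$, so, e.g., $C_e(30) = \tfrac{2}{1}\cdot\tfrac{4}{3}\,C_e(16)$.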
Chen's constant 0.67 was improved successively to $$0.689, 0.7544, 0.81, 0.8285, 0.836, 0.867, 0.899$$ by Halberstam and Richert [@HR74] [@Halberstam1975], Chen [@Chen1978_1] [@Chen1978_2], Cai and Lu [@CL2002], Wu [@Wu2004], Cai [@CAI867] and Wu [@Wu2008], respectively.

In 1990, Wu [@WU90] proved that $$D_{1,3}(N_e) \geqslant 0.67 \frac{C_e(N_e) N_e}{(\log N_e)^2} \log \log N_e$$ and $$D_{1,r}(N_e) \geqslant 0.67 \frac{C_e(N_e) N_e}{(\log N_e)^2} (\log \log N_e)^{r-2}.$$ Kan [@Kan1] also proved a similar result in 1991: $$D_{1,r}(N_e) \geqslant \frac{0.77}{(r-2)!} \frac{C_e(N_e) N_e}{(\log N_e)^2} (\log \log N_e)^{r-2},$$ which is better than Wu's result when $r=3$. In 2023, Li [@LRB] improved their results and obtained $$D_{1,3}(N_e) \geqslant 0.8671 \frac{C_e(N_e) N_e}{(\log N_e)^2} \log \log N_e$$ and $$D_{1,r}(N_e) \geqslant 0.8671 \frac{C_e(N_e) N_e}{(\log N_e)^2} (\log \log N_e)^{r-2}.$$ Kan [@Kan2] proved a more general theorem in 1992: $$D_{s,r}(N_e) \geqslant \frac{0.77}{(s-1)!(r-2)!} \frac{C_e(N_e) N_e}{(\log N_e)^2} (\log \log N_e)^{s+r-3},$$ where $s \geqslant 1$, $$D_{s,r}(N_e):= \left|\left\{P_s : P_s \leqslant N_e, N_e-P_s=P_{r}\right\}\right|.$$ Furthermore, for two relatively prime square-free positive integers $a$ and $b$, let $N$ be a sufficiently large integer that is relatively prime to both $a$ and $b$, $a,b \leqslant N^{\varepsilon}$ and let $N$ be even if $a$ and $b$ are both odd. Let $R_{a,b}(N)$ be the number of primes $p$ such that $ap$ and $N-ap$ are both square-free, $b \mid (N-ap)$, and $\frac{N-ap}{b}=P_2$. In 2023, Li [@LIHUIXI] proved that $$R_{a, b}(N) \geqslant 0.68 \frac{C(N) N}{a b(\log N)^{2}},$$ where $$C(N):=\prod_{\substack{p \mid a b \\ p>2}} \frac{p-1}{p-2}\prod_{\substack{p \mid N \\ p>2}} \frac{p-1}{p-2} \prod_{p>2}\left(1-\frac{1}{(p-1)^{2}}\right).$$ In this paper, we improve the result by using a delicate sieve process similar to that of [@CAI867] and prove that

**Theorem 1**.
*$$R_{a, b}(N) \geqslant 0.8671\frac{C(N) N}{a b(\log N)^{2}}.$$*

It is easy to see that when we take $a=1$ and $b=1$, Theorem [Theorem 1](#t1){reference-type="ref" reference="t1"} implies Cai's result on Chen's theorem \[[@CAI867], Theorem 1\]; when we take $a=1$ and $b=2$, Theorem [Theorem 1](#t1){reference-type="ref" reference="t1"} improves Li's result related to Lemoine's conjecture \[[@LiHuixi2019], Theorem 1\]. When we take $a=q_1 q_2 \cdots q_s$ and $b=q^\prime_1 q^\prime_2 \cdots q^\prime_r$, where $\varepsilon$ is a sufficiently small positive number and $q_i, q^\prime_j$ denote primes satisfying $$s,r\geqslant 1, \quad (\log N)^{\varepsilon} \leqslant q_i,q^\prime_j \leqslant N^{\varepsilon},$$ $$(q_i, N)=(q^\prime_j, N)=1\ \operatorname{for\ every}\ 1 \leqslant i \leqslant s,1 \leqslant j \leqslant r,$$ Theorem [Theorem 1](#t1){reference-type="ref" reference="t1"} generalizes and improves the previous results of Kan \[[@Kan1], Theorem 2\] \[[@Kan2], Theorem 2\], Wu \[[@WU90], Theorems 1 and 2\], and Li \[[@LRB], Theorems 1.1 and 1.2\]. Clearly one can modify our proof of Theorem [Theorem 1](#t1){reference-type="ref" reference="t1"} to get a similar lower bound for the twin prime version. For this, we refer the interested readers to \[[@Opera], Sect. 25.6\]. Chen's theorem with small primes was first studied by Cai [@Cai2002]. For $0<\theta \leqslant 1$, we define $$D_{1,r}^{\theta}(N_e):= \left|\left\{p : p \leqslant N_e^{\theta}, N_e-p=P_{r}\right\}\right|.$$ Then it is proved in [@Cai2002] that for $0.95 \leqslant \theta \leqslant 1$, we have $$D_{1,2}^{\theta}(N_e) \gg \frac{C_e(N_e) N_e^{\theta}}{(\log N_e)^2}.$$ Cai's range $0.95 \leqslant \theta \leqslant 1$ was extended successively to $0.945 \leqslant \theta \leqslant 1$ in [@CL2011] and to $0.941 \leqslant \theta \leqslant 1$ in [@Cai2015]. In this paper, we generalize their results to integers of the form $ap+bP_2$.
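For small $N$, the counting function $R_{a,b}(N)$ of Theorem 1 can be evaluated by brute force directly from its definition, which is a useful sanity check on the conditions involved. A minimal sketch (function names are ours; trial division only, so suitable only for small $N$):

```python
def prime_exponents(n):
    # exponents in the prime factorization of n, by trial division
    exps = []
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)
    return exps

def is_squarefree(n):
    return all(e == 1 for e in prime_exponents(n))

def big_omega(n):
    # number of prime factors counted with multiplicity
    return sum(prime_exponents(n))

def R(a, b, N):
    # primes p <= N/a with a*p and N - a*p squarefree, b | (N - a*p),
    # and (N - a*p)/b = P_2 (at most two prime factors with multiplicity)
    count = 0
    for p in range(2, N // a + 1):
        if big_omega(p) != 1:  # p must be prime
            continue
        r = N - a * p
        if r < 2 or r % b:
            continue
        if is_squarefree(a * p) and is_squarefree(r) and big_omega(r // b) <= 2:
            count += 1
    return count
```

For instance, with $a=b=1$ and $N=10$ the admissible primes are $p=3,5,7$ (while $p=2$ fails because $8$ is not square-free).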
For two relatively prime square-free positive integers $a$ and $b$, let $N$ be a sufficiently large integer that is relatively prime to both $a$ and $b$, $a,b \leqslant N^{\varepsilon}$ and let $N$ be even if $a$ and $b$ are both odd. Let $R_{a,b}^{\theta}(N)$ be the number of primes $p\leqslant N^{\theta}$ such that $ap$ and $N-ap$ are both square-free, $b \mid (N-ap)$, and $\frac{N-ap}{b}=P_2$. Then by using a delicate sieve process similar to that of [@Cai2015] we prove that

**Theorem 2**. *For $0.941 \leqslant \theta \leqslant 1$ we have $$R_{a, b}^{\theta}(N) \gg \frac{C(N) N^{\theta}}{a b(\log N)^{2}}.$$*

Chen's theorem in short intervals was first studied by Ross [@Ross1978]. For $0<\kappa \leqslant 1$, we define $$D_{1,r}(N_e,\kappa):= \left|\left\{p : N_e/2-N_e^{\kappa} \leqslant p,P_r \leqslant N_e/2+N_e^{\kappa}, N_e=p+P_{r}\right\}\right|.$$ Then it is proved in [@Ross1978] that for $\kappa \geqslant 0.98$, we have $$D_{1,2}(N_e,\kappa) \gg \frac{C_e(N_e) N_e^{\kappa}}{(\log N_e)^2}.$$ The constant 0.98 was improved successively to $$0.974, 0.973, 0.9729, 0.972, 0.971, 0.97$$ by Wu [@Wu1993] [@Wu1994], Salerno and Vitolo [@SV1993], Cai and Lu [@CL1999], Wu [@Wu2004] and Cai [@CAI867], respectively. In this paper, we generalize their results to integers of the form $ap+bP_2$. For two relatively prime square-free positive integers $a$ and $b$, let $N$ be a sufficiently large integer that is relatively prime to both $a$ and $b$, $a,b \leqslant N^{\varepsilon}$ and let $N$ be even if $a$ and $b$ are both odd. Let $R_{a,b}(N,\kappa)$ be the number of primes $N/2-N^{\kappa} \leqslant p \leqslant N/2+N^{\kappa}$ such that $ap$ and $N-ap$ are both square-free, $b \mid (N-ap)$, $\frac{N-ap}{b}=P_2$, and $N/2-N^{\kappa} \leqslant \frac{N-ap}{b} \leqslant N/2+N^{\kappa}$. Then by using a delicate sieve process similar to that of [@CAI867] we prove that

**Theorem 3**.
*For $\kappa \geqslant 0.97$ we have $$R_{a, b}(N,\kappa) \gg \frac{C(N) N^{\kappa}}{a b(\log N)^{2}}.$$*

Chen's theorem with a prime belonging to a fixed arithmetic sequence was first studied by Kan and Shan [@KanShan]. For $c \ll (\log N_e)^{\varepsilon}$, we define $$D_{1,r}(N_e,c,d):= \left|\left\{p : p \leqslant N_e, p \equiv d(\bmod c), (c,d)=1, (N_e-p,c)=1, N_e-p=P_{r}\right\}\right|.$$ Then it is proved in [@KanShan] that $$D_{1,2}(N_e,c,d) \geqslant 0.77\frac{1}{\varphi(c)} \prod_{\substack{p \mid c \\ p \nmid N_e\\ p>2}} \left(\frac{p-1}{p-2}\right)\frac{C_e(N_e) N_e}{(\log N_e)^2}$$ and $$D_{1,r}(N_e,c,d) \geqslant \frac{0.77}{(r-2)!}\frac{1}{\varphi(c)} \prod_{\substack{p \mid c \\ p \nmid N_e\\ p>2}} \left(\frac{p-1}{p-2}\right)\frac{C_e(N_e) N_e}{(\log N_e)^2} (\log \log N_e)^{r-2}.$$ Clearly their results (18) and (19) generalized the previous results (2), (4), (5) and (6). They also obtained similar results for the twin prime version, and Lewulis [@lewulis] considered a similar problem. In this paper, we further generalize their results to integers of the form $ap+bP_2$. For two relatively prime square-free positive integers $a$ and $b$, let $N$ be a sufficiently large integer that is relatively prime to both $a$ and $b$, $a,b \leqslant N^{\varepsilon}$ and let $N$ be even if $a$ and $b$ are both odd. Let $c \ll (\log N)^{\varepsilon}$ and let $R_{a,b}(N,c,d)$ be the number of primes $p \equiv d(\bmod c)$ such that $ap$ and $N-ap$ are both square-free, $b \mid (N-ap)$, and $\frac{N-ap}{b}=P_2$. Then by using a delicate sieve process similar to that of [@CAI867] we prove that

**Theorem 4**. *$$R_{a,b}(N,c,d) \geqslant 0.8671 \frac{1}{\varphi(c)} \prod_{\substack{p \mid c \\ p \nmid N\\ p>2}} \left(\frac{p-1}{p-2}\right)\frac{C(N) N}{ab(\log N)^2}.$$*

Similar to \[[@LIHUIXI], Theorem 1.
(2)\], we also improve the upper bound of the number of primes $p$ such that $ap$ and $N-ap$ are both square-free, $b \mid (N-ap)$, and $\frac{N-ap}{b}$ is a prime number. By using a delicate sieve process similar to that of \[[@ERPAN], Chap. 9.2\], we prove that

**Theorem 5**. *$$\sum_{\substack{a p_{1}+b p_{2}=N \\ p_{1} \text { and } p_{2} \text { are primes }}} 1 \leqslant 7.928 \frac{C(N) N}{a b(\log N)^{2}}.$$*

In fact, Lemmas [Lemma 18](#l31){reference-type="ref" reference="l31"}-- [Lemma 23](#l35){reference-type="ref" reference="l35"} are also valid for the sets $\mathcal{A}_3$ and $\mathcal{A}_4$ in section 2 if we make some suitable modifications. Since the details of the proofs of Theorems [Theorem 3](#t3){reference-type="ref" reference="t3"}-- [Theorem 5](#t5){reference-type="ref" reference="t5"} are similar to those of [@CL1999], [@KanShan], [@ERPAN] and Theorems [Theorem 1](#t1){reference-type="ref" reference="t1"}-- [Theorem 2](#t2){reference-type="ref" reference="t2"}, we omit them in this paper. We do not focus on Chen's double sieve technique in this paper; it might be used to improve our Theorems [Theorem 1](#t1){reference-type="ref" reference="t1"}-- [Theorem 5](#t5){reference-type="ref" reference="t5"}. For this, we refer the interested readers to [@Chen1978_3], [@Wu2004], [@Wu2008] and David Quarel's thesis [@quarel]. Li [@LIHUIXI] also mentioned that if the large integer $N$ is carefully chosen, our proofs of Theorems [Theorem 1](#t1){reference-type="ref" reference="t1"}-- [Theorem 5](#t5){reference-type="ref" reference="t5"} also work when we remove the square-free restrictions on $a$ and $b$.

# The sets we want to sieve

Let $\theta=0.941$ in this paper. Put $$\begin{aligned} \nonumber \mathcal{A}_1=&\left\{\frac{N-a p}{b}: p \text { is a prime }, p \leqslant \frac{N}{a}, (p,a b N)=1,\right. \\ \nonumber & \left.
p \equiv N a_{b^{2}}^{-1}+k b \left(\bmod b^{2}\right), 0 \leqslant k \leqslant b-1,(k, b)=1\right\}, \\ \nonumber \mathcal{A}_2=&\left\{\frac{N-a p}{b}: p \text { is a prime }, p \leqslant \frac{N^{\theta}}{a}, (p,a b N)=1,\right. \\ \nonumber & \left. p \equiv N a_{b^{2}}^{-1}+k b \left(\bmod b^{2}\right), 0 \leqslant k \leqslant b-1,(k, b)=1\right\},\\ \nonumber \mathcal{A}_3=&\left\{\frac{N-a p}{b}: p \text { is a prime },\frac{N/2-N^{0.97}}{a} \leqslant p \leqslant \frac{N/2+N^{0.97}}{a}, (p,a b N)=1,\right. \\ \nonumber & \left. p \equiv N a_{b^{2}}^{-1}+k b \left(\bmod b^{2}\right), 0 \leqslant k \leqslant b-1,(k, b)=1\right\},\\ \nonumber \mathcal{A}_4=&\left\{\frac{N-a p}{b}: p \text { is a prime }, p \leqslant \frac{N}{a}, (p,a b N)=1, p \equiv d(\bmod c), (c,d)=1, \right. \\ \nonumber & \left. \left(\frac{N-ad}{b},c\right)=1, p \equiv N a_{b^{2}}^{-1}+k b \left(\bmod b^{2}\right), 0 \leqslant k \leqslant b-1,(k, b)=1\right\}, \\ \nonumber \mathcal{B}_1= & \left\{\frac{N-b p_{1} p_{2} p_{3}}{a}: p_{1}, p_{2}, p_{3} \text { are primes, }\left(p_{1} p_{2} p_{3}, a b N\right)=1,\right. \\ \nonumber & p_{3} \leqslant \frac{N}{b p_{1} p_{2}},\left(\frac{N}{b}\right)^{\frac{1}{13.2}} \leqslant p_1 \leqslant\left(\frac{N}{b}\right)^{\frac{1}{3}} <p_2 \leqslant\left(\frac{N}{b p_{1}}\right)^{\frac{1}{2}},\\ \nonumber &\left.p_{3} \equiv N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a \left(\bmod a^{2}\right), 0 \leqslant j \leqslant a-1,(j, a)=1\right\},\\ \nonumber \mathcal{B}_2= & \left\{\frac{N-b p_{1} p_{2} p_{3}}{a}: p_{1}, p_{2}, p_{3} \text { are primes, }\left(p_{1} p_{2} p_{3}, a b N\right)=1,\right. 
\\ \nonumber & \frac{N-N^{\theta}}{b p_{1} p_{2}} \leqslant p_{3} \leqslant \frac{N}{b p_{1} p_{2}},\left(\frac{N}{b}\right)^{\frac{1}{14}} \leqslant p_1 \leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}} <p_2 \leqslant\left(\frac{N}{b p_{1}}\right)^{\frac{1}{2}},\\ \nonumber &\left.p_{3} \equiv N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a \left(\bmod a^{2}\right), 0 \leqslant j \leqslant a-1,(j, a)=1\right\},\\ \nonumber\mathcal{C}_1=&\left\{\frac{N-bmp_1p_2p_3p_4}{a}: p_1, p_2, p_3, p_4 \text { are primes, }\left(p_{1} p_{2} p_{4}, abN\right)=1, \right.\\ \nonumber&\left. \left(\frac{N}{b}\right)^{\frac{1}{13.2}} \leqslant p_{1}<p_{4}<p_{2}<\left(\frac{N}{b}\right)^{\frac{1}{8.4}},1 \leqslant m \leqslant \frac{N}{bp_{1} p_{2}^{2} p_{4}},\left(m, p_{1}^{-1} abN P\left(p_{4}\right)\right)=1,\right. \\ \nonumber&\left.p_{3} \equiv N\left(bm p_{1} p_{2} p_4\right)_{a^{2}}^{-1}+j a \left(\bmod a^{2}\right), 0 \leqslant j \leqslant a-1,(j, a)=1,\right.\\ \nonumber&\left.p_2<p_3<\min\left(\left(\frac{N}{b}\right)^\frac{1}{8.4},\left(\frac{N}{bmp_1p_2p_4}\right)\right) \right\},\\ \nonumber\mathcal{C}_2=&\left\{\frac{N-bmp_1p_2p_3p_4}{a}: p_1, p_2, p_3, p_4 \text { are primes, }\left(p_{1} p_{2} p_3 p_{4}, abN\right)=1, \right.\\ \nonumber&\left. \left(\frac{N}{b}\right)^{\frac{1}{14}} \leqslant p_{1}<p_{2}<p_3<p_{4}<\left(\frac{N}{b}\right)^{\frac{1}{8.8}},\right. \\ \nonumber&\left.bm p_{1} p_{2} p_3 p_{4} \equiv N+j a \left(\bmod a^{2}\right), 0 \leqslant j \leqslant a-1,(j, a)=1,\right.\\ \nonumber&\left. \frac{N-N^{\theta}}{bp_{1} p_{2} p_3 p_{4}} \leqslant m \leqslant \frac{N}{bp_{1} p_{2} p_3 p_{4}},\left(m, p_{1}^{-1} abN P\left(p_{2}\right)\right)=1\right\},\\ \nonumber\mathcal{C}_3=&\left\{\frac{N-bmp_1p_2p_3p_4}{a}: p_1, p_2, p_3, p_4 \text { are primes, }\left(p_{1} p_{2} p_3 p_{4}, abN\right)=1, \right.\\ \nonumber&\left. 
\left(\frac{N}{b}\right)^{\frac{1}{14}} \leqslant p_{1}<p_{2}<p_3<\left(\frac{N}{b}\right)^{\frac{1}{8.8}}\leqslant p_4<\left(\frac{N}{b}\right)^{\frac{\theta}{2}-\frac{3}{14}}p_3^{-1},\right. \\ \nonumber&\left.bm p_{1} p_{2} p_3 p_{4} \equiv N+j a \left(\bmod a^{2}\right), 0 \leqslant j \leqslant a-1,(j, a)=1,\right.\\ \nonumber&\left. \frac{N-N^{\theta}}{bp_{1} p_{2} p_3 p_{4}} \leqslant m \leqslant \frac{N}{bp_{1} p_{2} p_3 p_{4}},\left(m, p_{1}^{-1} abN P\left(p_{2}\right)\right)=1\right\},\\ \nonumber\mathcal{E}_1=&\left\{p_1 p_2:p_1, p_2 \text { are primes, }\left(p_{1} p_{2}, abN\right)=1,\right.\\ \nonumber&\left.\left(\frac{N}{b}\right)^{\frac{1}{13.2}} \leqslant p_{1}<\left(\frac{N}{b}\right)^{\frac{1}{3}}\leqslant p_{2}<\left(\frac{N}{bp_1}\right)^{\frac{1}{2}} \right\},\\ \nonumber\mathcal{E}_2=&\left\{p_1 p_2:p_1, p_2 \text { are primes, }\left(p_{1} p_{2}, abN\right)=1,\right.\\ \nonumber&\left.\left(\frac{N}{b}\right)^{\frac{1}{14}} \leqslant p_{1}<\left(\frac{N}{b}\right)^{\frac{1}{3.106}}\leqslant p_{2}<\left(\frac{N}{bp_1}\right)^{\frac{1}{2}} \right\},\\ \nonumber\mathcal{F}_1=&\left\{mp_1 p_2 p_4:p_1, p_2, p_4 \text { are primes, }\left(\frac{N}{b}\right)^{\frac{1}{13.2}} \leqslant p_{1}<p_{4}<p_{2}<\left(\frac{N}{b}\right)^{\frac{1}{8.4}},\right.\\ \nonumber&\left.\left(p_{1} p_{2} p_{4}, abN\right)=1, 1 \leqslant m \leqslant \frac{N}{bp_{1} p_{2}^{2} p_{4}}, \left(m, p_{1}^{-1} abN P\left(p_{4}\right)\right)=1 \right\},\\ \nonumber\mathcal{F}_2=&\left\{bmp_1 p_2 p_3 p_4:p_1, p_2, p_3, p_4 \text { are primes, }\left(p_{1} p_{2} p_3 p_{4}, abN\right)=1, \right.\\ \nonumber&\left.\left(\frac{N}{b}\right)^{\frac{1}{14}} \leqslant p_{1}<p_{2}<p_3<p_{4}<\left(\frac{N}{b}\right)^{\frac{1}{8.8}},\right. \\ \nonumber&\left. 
\frac{N-N^{\theta}}{bp_{1} p_{2} p_3 p_{4}} \leqslant m \leqslant \frac{N}{bp_{1} p_{2} p_3 p_{4}}, \left(m, p_{1}^{-1} abN P\left(p_{2}\right)\right)=1 \right\},\\ \nonumber\mathcal{F}_3=&\left\{bmp_1 p_2 p_3 p_4:p_1, p_2, p_3, p_4 \text { are primes, }\left(p_{1} p_{2} p_3 p_{4}, abN\right)=1, \right.\\ \nonumber&\left. \left(\frac{N}{b}\right)^{\frac{1}{14}} \leqslant p_{1}<p_{2}<p_3<\left(\frac{N}{b}\right)^{\frac{1}{8.8}}\leqslant p_4<\left(\frac{N}{b}\right)^{\frac{\theta}{2}-\frac{3}{14}}p_3^{-1},\right. \\ \nonumber&\left. \frac{N-N^{\theta}}{bp_{1} p_{2} p_3 p_{4}} \leqslant m \leqslant \frac{N}{bp_{1} p_{2} p_3 p_{4}},\left(m, p_{1}^{-1} abN P\left(p_{2}\right)\right)=1\right\},\end{aligned}$$ where $a_{b^{2}}^{-1}$ is the multiplicative inverse of $a \bmod b^{2}$, which exists by our assumption $(a, b)=1$. # Preliminary lemmas Let $\mathcal{A}$ denote a finite set of positive integers, $\mathcal{P}$ denote an infinite set of primes and $z \geqslant 2$. Suppose that $|\mathcal{A}| \sim X_{\mathcal{A}}$ and for square-free $d$, put $$\mathcal{P}=\{p : (p, N)=1\},\quad \mathcal{P}(r)=\{p : p \in \mathcal{P},(p, r)=1\},$$ $$P(z)=\prod_{\substack{p\in \mathcal{P}\\p<z}} p,\quad \mathcal{A}_{d}=\{a : a \in \mathcal{A}, a \equiv 0(\bmod d)\},\quad S(\mathcal{A}; \mathcal{P},z)=\sum_{\substack{a \in \mathcal{A} \\ (a, P(z))=1}} 1.$$ **Lemma 6**. *(\[[@Kan2], Lemma 1\]). If $$\sum_{z_{1} \leqslant p<z_{2}} \frac{\omega(p)}{p}=\log \frac{\log z_{2}}{\log z_{1}}+O\left(\frac{1}{\log z_{1}}\right), \quad z_{2}>z_{1} \geqslant 2,$$ where $\omega(d)$ is a multiplicative function, $0 \leqslant \omega(p)<p, X>1$ is independent of $d$. 
Then $$S(\mathcal{A}, \mathcal{P}, z) \geqslant X_{\mathcal{A}} W(z)\left\{f\left(\frac{\log D}{\log z}\right)+O\left(\frac{1}{\log ^{\frac{1}{3}} D}\right)\right\}-\sum_{\substack{n\leqslant D \\ n \mid P(z)}}\eta(X_{\mathcal{A}}, n)$$ $$S(\mathcal{A}, \mathcal{P}, z) \leqslant X_{\mathcal{A}} W(z)\left\{F\left(\frac{\log D}{\log z}\right)+O\left(\frac{1}{\log ^{\frac{1}{3}} D}\right)\right\}+\sum_{\substack{n\leqslant D \\ n \mid P(z)}}\eta(X_{\mathcal{A}}, n)$$ where $$W(z)=\prod_{\substack{p<z \\ (p,N)=1}}\left(1-\frac{\omega(p)}{p}\right),\quad \eta(X_{\mathcal{A}}, n)=\left||\mathcal{A}_n|-\frac{\omega(n)}{n} X_\mathcal{A}\right|=\left|\sum_{\substack{a \in \mathcal{A} \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} X_\mathcal{A}\right|,$$ $\gamma$ denotes the Euler's constant, $f(s)$ and $F(s)$ are determined by the following differential-difference equation $$\begin{aligned} \begin{cases} F(s)=\frac{2 e^{\gamma}}{s}, \quad f(s)=0, \quad &0<s \leqslant 2,\\ (s F(s))^{\prime}=f(s-1), \quad(s f(s))^{\prime}=F(s-1), \quad &s \geqslant 2 . \end{cases}\end{aligned}$$* **Lemma 7**. *(\[[@CAI867], Lemma 2\], deduced from [@HR74]). 
$$\begin{aligned} F(s)=&\frac{2 e^{\gamma}}{s}, \quad 0<s \leqslant 3;\\ F(s)=&\frac{2 e^{\gamma}}{s}\left(1+\int_{2}^{s-1} \frac{\log (t-1)}{t} d t\right), \quad 3 \leqslant s \leqslant 5 ;\\ F(s)=&\frac{2 e^{\gamma}}{s}\left(1+\int_{2}^{s-1} \frac{\log (t-1)}{t} d t+\int_{2}^{s-3} \frac{\log (t-1)}{t} d t \int_{t+2}^{s-1} \frac{1}{u} \log \frac{u-1}{t+1} d u\right), \quad 5 \leqslant s \leqslant 7;\\ f(s)=&\frac{2 e^{\gamma} \log (s-1)}{s}, \quad 2 \leqslant s \leqslant 4 ;\\ f(s)=&\frac{2 e^{\gamma}}{s}\left(\log (s-1)+\int_{3}^{s-1} \frac{d t}{t} \int_{2}^{t-1} \frac{\log (u-1)}{u} d u\right), \quad 4 \leqslant s \leqslant 6 ;\\ f(s)=& \frac{2 e^{\gamma}}{s}\left(\log (s-1)+\int_{3}^{s-1} \frac{d t}{t} \int_{2}^{t-1}\frac{\log (u-1)}{u} d u\right.\\ & \left.+\int_{2}^{s-4} \frac{\log (t-1)}{t} d t \int_{t+2}^{s-2} \frac{1}{u} \log \frac{u-1}{t+1} \log \frac{s}{u+2} d u\right), \quad 6 \leqslant s \leqslant 8.\end{aligned}$$* **Lemma 8**. *(\[[@CAI867], Lemma 4\], deduced from [@JIA96], [@ERPAN]). Let $$x>1, \quad z=x^{\frac{1}{u}}, \quad Q(z)=\prod_{p<z} p.$$ Then for $u \geqslant 1$, we have $$\sum_{\substack{n \leqslant x \\(n, Q(z))=1}} 1=w(u) \frac{x}{\log z}+O\left(\frac{x}{\log ^{2} z}\right),$$ where $w(u)$ is determined by the following differential-difference equation $$\begin{aligned} \begin{cases} w(u)=\frac{1}{u}, & \quad 1 \leqslant u \leqslant 2, \\ (u w(u))^{\prime}=w(u-1), & \quad u \geqslant 2 . \end{cases}\end{aligned}$$ Moreover, we have $$\begin{cases}w(u) \leqslant \frac{1}{1.763}, & u \geqslant 2, \\ w(u)<0.5644, & u \geqslant 3, \\ w(u)<0.5617, & u \geqslant 4.\end{cases}$$* *Remark 9*. (\[[@CL2011], Lemma 2.6\]). 
Let $$x>1, \quad x^{\frac{19}{24}+\varepsilon} \leqslant y \leqslant \frac{x}{\log x}, \quad z=x^{\frac{1}{u}}, \quad Q(z)=\prod_{p<z} p.$$ Then for $u \geqslant 1$, we have $$\sum_{\substack{x-y \leqslant n \leqslant x \\(n, Q(z))=1}} 1=w(u) \frac{y}{\log z}+O\left(\frac{y}{\log ^{2} z}\right),$$ where $w(u)$ is defined in Lemma [Lemma 8](#l4){reference-type="ref" reference="l4"}. **Lemma 10**. *If $N^{\frac{1}{\alpha}-\varepsilon}<z \leqslant N^{\frac{1}{\alpha}}$, then we have $$W(z)=\frac{2\alpha e^{-\gamma} C(N)(1+o(1))}{\log N}.$$* *Proof.* By \[[@LIHUIXI], Lemma 2\] we have $$W(z)=\frac{N}{\varphi(N)} \prod_{(p,N)=1}\left(1-\frac{\omega(p)}{p}\right)\left(1-\frac{1}{p}\right)^{-1} \frac{e^{-\gamma}}{\log z}\left(1+O\left(\frac{1}{\log z}\right)\right),$$ where $\varphi$ is the Euler's totient function and $\gamma$ is the Euler's constant. Since $2 \mid a b N$, we have $$\begin{aligned} \nonumber W(z)&=\frac{N}{\varphi(N)} \prod_{(p,N)=1}\left(1-\frac{\omega(p)}{p}\right)\left(1-\frac{1}{p}\right)^{-1} \frac{e^{-\gamma}}{\log z}\left(1+O\left(\frac{1}{\log z}\right)\right) \\ \nonumber & =\prod_{p \mid N} \frac{p}{p-1} \prod_{(p,N)=1} \left(1-\frac{\omega(p)}{p}\right)\left(1-\frac{1}{p}\right)^{-1} \frac{\alpha e^{-\gamma}(1+o(1))}{\log N} \\ \nonumber & =\prod_{p \mid N} \frac{p}{p-1} \prod_{p \mid a b} \left(1-\frac{1}{p}\right)^{-1} \prod_{(p,abN)=1} \left(1-\frac{1}{p-1}\right)\left(1-\frac{1}{p}\right)^{-1} \frac{\alpha e^{-\gamma}(1+o(1))}{\log N} \\ \nonumber & =\prod_{\substack{p \mid N \\ p>2}} \frac{p}{p-1} \prod_{\substack{p \mid a b \\ p>2}} \frac{p}{p-1} \frac{\prod_{p>2} \frac{p(p-2)}{(p-1)^{2}}}{\prod_{\substack{p \mid a b N \\ p>2}} \frac{p(p-2)}{(p-1)^{2}}} \frac{2\alpha e^{-\gamma}(1+o(1))}{\log N} \\ \nonumber & =\frac{2\alpha e^{-\gamma} C(N)(1+o(1))}{\log N}.\end{aligned}$$ ◻ **Lemma 11**. *(\[[@LIHUIXI], Lemma 1\], deduced from \[[@IwaniecKowalski], Corollary 5.29\]). Let $q \geqslant 1$ and $A>0$. 
Let $\pi(x ; q, l)$ be the number of primes up to $x$ that are congruent to $l$ modulo $q$. For $(l, q)=1$, we have $$\pi(x ; q, l)=\frac{\operatorname{Li}(x)}{\varphi(q)}+O\left(\frac{x}{(\log x)^{A}}\right),$$ where $$\operatorname{Li}(x)=\int_{2}^{x} \frac{\mathrm{d} t}{\log t}.$$* **Lemma 12**. *(\[[@LIHUIXI], Lemma 6\], deduced from \[[@Sitaramachandra], p. 579\]). We have $$\sum_{n \leqslant x} \frac{1}{\varphi(n)}=C_{2}\left(\log x+C_{3}\right)+O\left(\frac{\log x}{x}\right)$$ for some constants $C_{2}$ and $C_{3}$.* # Mean value theorems **Lemma 13**. *(\[[@ERPAN], p. 192, Corollary 8.2\]). Let $$\pi(x ; k, d, l)=\sum_{\substack{k p \leq x \\ k p \equiv l(\bmod d)}} 1$$ and let $g(k)$ be a real function, $g(k) \ll 1$. Then, for any given constant $A>0$, there exists a constant $B=B(A)>0$ such that $$\sum_{d \leqslant x^{1/2} (\log x)^{-B}}\max _{y \leqslant x}\max _{(l, d)=1} \left|\sum_{\substack{k \leqslant E(x) \\ (k, d)=1}} g(k) H(y ; k, d, l)\right| \ll \frac{x}{\log ^{A} x},$$ where $$H(y ; k, d, l)=\pi(y ; k, d, l)-\frac{1}{\varphi(d)}\pi(y ; k, 1, 1)=\sum_{\substack{k p \leqslant y \\ k p \equiv l (\bmod d)}} 1-\frac{1}{\varphi(d)} \sum_{k p \leqslant y} 1,$$ $$\frac{1}{2} \leqslant E(x) \ll x^{1-\alpha}, \quad 0<\alpha \leqslant 1,\quad B(A)=\frac{3}{2}A+17.$$* *Remark 14*. (\[[@ERPAN], p. 195--196, Corollary 8.3 and 8.4\]). Let $r_{1}(y)$ be a positive function depending on $x$ and satisfying $r_{1}(y) \ll x^{\alpha}$ for $y \leqslant x$. Then under the conditions in Lemma [Lemma 13](#l3){reference-type="ref" reference="l3"}, we have $$\sum_{d \leqslant x^{1/2} (\log x)^{-B}} \max _{y \leqslant x}\max _{(l, d)=1} \left|\sum_{\substack{k \leqslant E(x) \\ (k, d)=1}} g(k) H\left(kr_{1}(y) ; k, d, l\right)\right| \ll \frac{x}{\log ^{A} x} .$$ Let $r_{2}(k)$ be a positive function depending on $x$ and $y$ such that $k r_{2}(y) \ll x$ for $k \leqslant E(x), y \leqslant x$. 
Then under the conditions in Lemma [Lemma 13](#l3){reference-type="ref" reference="l3"}, we have $$\sum_{d \leqslant x^{1/2} (\log x)^{-B}} \max _{y \leqslant x}\max _{(l, d)=1} \left|\sum_{\substack{k \leqslant E(x) \\ (k, d)=1}} g(k) H\left(kr_{2}(y) ; k, d, l\right)\right| \ll \frac{x}{\log ^{A} x} .$$ **Lemma 15**. *(\[[@Wu1993], Theorem 2\]). Let $g(k)$ be a real function such that $$\sum_{k \leqslant x} \frac{g^{2}(k)}{k} \ll \log ^{C} x$$ for some $C>0$. Then, for any given constant $A>0$, there exists a constant $B=B(A,C)>0$ such that $$\sum_{d \leqslant x^{\theta-1/2} (\log x)^{-B}} \max _{x/2 \leqslant y \leqslant x}\max _{(l, d)=1}\max _{h \leqslant x^\theta} \left|\sum_{\substack{k \leqslant x^\beta \\ (k, d)=1}} g(k) \bar{H}(y ,h, k, d, l)\right| \ll \frac{x^\theta}{\log ^{A} x},$$ where $$\begin{aligned} \bar{H}(y ,h, k, d, l)=&\left(\pi(y+h ; k, d, l)-\pi(y ; k, d, l)\right)\\ &-\frac{1}{\varphi(d)}\left(\pi(y+h ; k, 1, 1)-\pi(y ; k, 1, 1)\right)\\ =&\sum_{\substack{y<k p \leqslant y+h \\ k p \equiv l (\bmod d)}} 1-\frac{1}{\varphi(d)} \sum_{y<k p \leqslant y+h} 1, \end{aligned}$$ $$0 \leqslant \beta<\frac{5 \theta-3}{2},\quad B(A,C)=3A+C+34.$$* *Remark 16*. (\[[@Cai2015], Lemma 7\]). Let $g(k)$ be a real function such that $$\sum_{k \leqslant x} \frac{g^{2}(k)}{k} \ll \log ^{C} x$$ for some $C>0$.
Let $r_1(k,h)$ and $r_2(k,h)$ be positive functions such that $$y \leqslant kr_1(k,h), kr_2(k,h) \leqslant y+h.$$ Then, for any given constant $A>0$, there exists a constant $B=B(A,C)>0$ such that $$\sum_{d \leqslant x^{\theta-1/2} (\log x)^{-B}}\max _{x/2 \leqslant y \leqslant x}\max _{(l, d)=1}\max _{h \leqslant x^\theta} \left|\sum_{\substack{k \leqslant x^\beta \\ (k, d)=1}} g(k) \bar{H}^{\prime}(y ,h, k, d, l)\right| \ll \frac{x^\theta}{\log ^{A} x},$$ where $$\begin{aligned} \bar{H}^{\prime}(y ,h, k, d, l)=&\left(\pi(kr_2(k,h) ; k, d, l)-\pi(kr_1(k,h) ; k, d, l)\right)\\ &-\frac{1}{\varphi(d)}\left(\pi(kr_2(k,h); k, 1, 1)-\pi(kr_1(k,h) ; k, 1, 1)\right)\\ =&\sum_{\substack{kr_1(k,h)<k p \leqslant kr_2(k,h) \\ k p \equiv l (\bmod d)}} 1-\frac{1}{\varphi(d)} \sum_{kr_1(k,h)<k p \leqslant kr_2(k,h)} 1, \end{aligned}$$ $$0 \leqslant \beta<\frac{5 \theta-3}{2},\quad B(A,C)=3A+C+34.$$ **Lemma 17**. *For $j=2,3$ and any $A>0$, there exists a constant $B=B(A)>0$ such that $$\sum_{d \leqslant x^{\theta-1/2} (\log x)^{-B}}\max _{(l, d)=1}\left|\sum_{\substack{bm p_1 p_2 p_3 p_4 \in \mathcal{F}_j \\ bm p_1 p_2 p_3 p_4 \equiv l(\bmod d)}} 1-\frac{1}{\varphi(d)} \sum_{\substack{bm p_1 p_2 p_3 p_4 \in \mathcal{F}_j \\(bm p_1 p_2 p_3 p_4, d)=1}} 1\right| \ll \frac{x^\theta}{\log ^{A} x} .$$* *Proof.* This result can be proved in the same way as \[[@Cai2015], Lemma 8\]. ◻ # Weighted sieve method The parameters $\alpha$ and $\beta$ in Lemma [Lemma 20](#l32){reference-type="ref" reference="l32"}--[Lemma 21](#l33){reference-type="ref" reference="l33"} are taken from Cai's papers [@CAI867] and [@Cai2015], where Cai and Guang-Liang Zhou showed, by solving the corresponding equations, that these parameters are optimal. **Lemma 18**. *Let $\mathcal{A}=\mathcal{A}_1$ in section 2 and $0<\alpha<\beta \leqslant \frac{1}{3}$.
Then we have $$\begin{aligned} 2R_{a, b}(N) \geqslant& 2S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p<(\frac{N}{b})^{\beta} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1<(\frac{N}{b})^{\beta} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1}} S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},p_2\right) - 2\sum_{\substack{(\frac{N}{b})^{\beta} \leqslant p_1 < p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},p_2\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1 < p_2 < p_3<(\frac{N}{b})^{\beta} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1),p_2\right) +O\left(N^{1-\alpha}\right).\end{aligned}$$* *Proof.* It is similar to that of \[[@CAI867], Lemma 5\]. By the trivial inequality $$R_{a,b}(N)\geqslant S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\beta}\right)-\sum_{\substack{(\frac{N}{b})^{\beta} \leqslant p_{1}<p_{2}<(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right)$$ and Buchstab's identity we have $$\begin{aligned} \nonumber R_{a,b}(N) \geqslant& S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\beta}\right)-\sum_{\substack{(\frac{N}{b})^{\beta} \leqslant p_{1}<p_{2}<(\frac{N}{bp_1})^{\frac{1}{2}} \\(p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right) \\ \nonumber =&S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p<(\frac{N}{b})^{\beta} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\alpha}\right) \\ &+\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1<p_2<(\frac{N}{b})^{\beta} \\ (p_1 p_2, N)=1}} S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},p_1\right) 
-\sum_{\substack{(\frac{N}{b})^{\beta} \leqslant p_{1}<p_{2}<(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right).\end{aligned}$$ On the other hand, we have the trivial inequality $$\begin{aligned} \nonumber R_{a,b}(N)\geqslant& S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_{1}<p_{2}<(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right)\\ \nonumber =&S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_{1}<p_{2}<(\frac{N}{b})^{\beta} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right)\\ \nonumber &-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_{1}<(\frac{N}{b})^{\beta} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\beta} \leqslant p_{1}<p_{2}<(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right).\end{aligned}$$ Now by Buchstab's identity we have $$\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1<p_2<(\frac{N}{b})^{\beta} \\ (p_1 p_2, N)=1}} S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},p_1\right)-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_{1}<p_{2}<(\frac{N}{b})^{\beta} \\ (p_{1} p_{2}, N)=1}} S\left(\mathcal{A}_{p_{1} p_{2}} ; \mathcal{P}(p_{1}), p_{2}\right)$$ $$=\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1 < p_2 < p_3<(\frac{N}{b})^{\beta} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1),p_2\right) +O\left(N^{1-\alpha}\right),$$ where the trivial bound $$\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1<p_2<(\frac{N}{b})^{\beta} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p^2_1 p_2};\mathcal{P},p_1\right) 
\ll N^{1-\alpha}$$ is used. Adding (20) and (21) and applying (22), we obtain Lemma [Lemma 18](#l31){reference-type="ref" reference="l31"}. ◻ **Lemma 19**. *Let $\mathcal{A}=\mathcal{A}_2$ in section 2 and $0<\alpha<\beta \leqslant \frac{1}{3}$. Then we have $$\begin{aligned} 2R_{a, b}^{\theta}(N) \geqslant& 2S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p<(\frac{N}{b})^{\beta} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P}, \left(\frac{N}{b}\right)^{\alpha}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1<(\frac{N}{b})^{\beta} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1}} S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},p_2\right) - 2\sum_{\substack{(\frac{N}{b})^{\beta} \leqslant p_1 < p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},p_2\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\alpha} \leqslant p_1 < p_2 < p_3<(\frac{N}{b})^{\beta} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1),p_2\right) +O\left(N^{1-\alpha}\right).\end{aligned}$$* *Proof.* It is similar to that of Lemma [Lemma 18](#l31){reference-type="ref" reference="l31"}, so we omit it here. ◻ **Lemma 20**.
*Let $\mathcal{A}=\mathcal{A}_1$ in section 2. Then we have $$\begin{aligned} 4R_{a,b}(N) \geqslant &3S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)+S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{8.4}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_2<(\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_1 \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{4.1001}{13.2}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{3.6}{13.2}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),\left(\frac{N}{bp_1 p_2}\right)^{\frac{1}{2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{3.6}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.604}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{8.4}}\right)\\
&-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1 < p_2 < p_3< p_4<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2 p_3 p_4, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3 p_4};\mathcal{P}(p_1),p_2\right) \\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1 < p_2 < p_3<(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_4< (\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_3 \\ (p_1 p_2 p_3 p_4, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3 p_4};\mathcal{P}(p_1),p_2\right) \\ &-2\sum_{\substack{(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_1 < p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right) +O\left(N^{\frac{12.2}{13.2}}\right)\\ =&\left(3 S_{11}+S_{12}\right)+\left(S_{21}+S_{22}\right)-\left(S_{31}+S_{32}\right)-\left(S_{41}+S_{42}\right)\\ &-\left(S_{51}+S_{52}\right)-\left(S_{61}+S_{62}\right)-2S_{7}+O\left(N^{\frac{12.2}{13.2}}\right)\\ =&S_1+S_2-S_3-S_4-S_5-S_6-2S_7+O\left(N^{\frac{12.2}{13.2}}\right).\end{aligned}$$* *Proof.* It is similar to that of \[[@CAI867], Lemma 6\]. 
By Buchstab's identity, we have $$\begin{aligned} \nonumber S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{8.4}}\right)=&S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ \nonumber &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ \nonumber &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2, N)=1}} S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<p_3<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P},p_1\right),\end{aligned}$$ $$\begin{aligned} \nonumber &\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p<(\frac{N}{b})^{\frac{3.6}{13.2}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{8.4}}\right)\\ \nonumber \leqslant &\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p<(\frac{N}{b})^{\frac{3.6}{13.2}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ \nonumber &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_2<(\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_1 \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_3<(\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_2 \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P},p_1\right),\end{aligned}$$ $$\begin{aligned} \nonumber &\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} 
}S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ \nonumber =&\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{3}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}},\ (\frac{N}{bp_1})^{\frac{1}{3}}\leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right).\end{aligned}$$ If $p_{2} \leqslant \left(\frac{N}{bp_{1}}\right)^{\frac{1}{3}}$, then $p_{2} \leqslant \left(\frac{N}{bp_{1} p_{2}}\right)^{\frac{1}{2}}$ and by Buchstab's identity we have $$\begin{aligned} \nonumber &\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{3}} \\ (p_1 p_2, N)=1}} S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ \nonumber=&\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{3}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),\left(\frac{N}{bp_1 p_2}\right)^{\frac{1}{2}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2\leqslant p_3 <(\frac{N}{bp_1 p_2})^{\frac{1}{2}} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1 p_2),p_3\right).\end{aligned}$$ On the other hand, if $p_{2} \geqslant \left(\frac{N}{bp_{1}}\right)^{\frac{1}{3}}$, then $p_{2} \geqslant \left(\frac{N}{bp_{1} p_{2}}\right)^{\frac{1}{2}}$ and we have $$\begin{aligned} \nonumber&\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}},\ (\frac{N}{bp_1})^{\frac{1}{3}}\leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ \leqslant 
&\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}},\ (\frac{N}{bp_1})^{\frac{1}{3}}\leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),\left(\frac{N}{bp_1 p_2}\right)^{\frac{1}{2}}\right).\end{aligned}$$ By (26)--(28) we get $$\begin{aligned} \nonumber&\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ \nonumber\leqslant &\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),\left(\frac{N}{bp_1 p_2}\right)^{\frac{1}{2}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2\leqslant p_3 <(\frac{N}{bp_1 p_2})^{\frac{1}{2}} \\ (p_1 p_2 p_3, N)=1}} S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1 p_2),p_3\right).\end{aligned}$$ By Buchstab's identity we have $$\begin{aligned} \nonumber&\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<p_3<(\frac{N}{b})^{\frac{1}{3}} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1),p_2\right)\\ \nonumber&-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<p_3<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P},p_1\right)\\ \nonumber&-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1<p_2<(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_3<(\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_2 \\ (p_1 p_2 p_3, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P},p_1\right)\\ \nonumber&-\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.604}} \leqslant p_2\leqslant p_3 <(\frac{N}{bp_1 p_2})^{\frac{1}{2}} \\ (p_1 p_2 p_3, N)=1} 
}S\left(\mathcal{A}_{p_1 p_2 p_3};\mathcal{P}(p_1 p_2),p_3\right)\\ \nonumber\geqslant &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1 < p_2 < p_3< p_4<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2 p_3 p_4, N)=1}} S\left(\mathcal{A}_{p_1 p_2 p_3 p_4};\mathcal{P}(p_1),p_2\right) \\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1 < p_2 < p_3<(\frac{N}{b})^{\frac{1}{8.4}} \leqslant p_4< (\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_3 \\ (p_1 p_2 p_3 p_4, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3 p_4};\mathcal{P}(p_1),p_2\right) +O\left(N^{\frac{12.2}{13.2}}\right),\end{aligned}$$ where an argument similar to (23) is used. By Lemma [Lemma 18](#l31){reference-type="ref" reference="l31"} with $(\alpha, \beta)=\left(\frac{1}{13.2}, \frac{1}{3}\right)$ and $(\alpha, \beta)=\left(\frac{1}{8.4}, \frac{1}{3.604}\right)$, together with (24)--(25) and (29)--(30), we complete the proof of Lemma [Lemma 20](#l32){reference-type="ref" reference="l32"}. ◻ **Lemma 21**. *Let $\mathcal{A}=\mathcal{A}_2$ in section 2. Then we have $$\begin{aligned} 4R_{a,b}^{\theta}(N) \geqslant &3S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{14}}\right)+S\left(\mathcal{A};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{8.8}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_1<p_2<(\frac{N}{b})^{\frac{1}{8.8}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)\\ &+\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{8.8}} \leqslant p_2<(\frac{N}{b})^{\frac{\theta}{2}-\frac{2}{14}}p^{-1}_1 \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p<(\frac{N}{b})^{\frac{4.0871}{14}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant
p<(\frac{N}{b})^{\frac{\theta}{2}-\frac{3}{14}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.106}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{8.8}} \leqslant p_1<(\frac{N}{b})^{\frac{1}{3.73}} \leqslant p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),\left(\frac{N}{bp_1 p_2}\right)^{\frac{1}{2}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{\theta}{2}-\frac{3}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.73}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{8.8}}\right)\\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_1 < p_2 < p_3< p_4<(\frac{N}{b})^{\frac{1}{8.8}} \\ (p_1 p_2 p_3 p_4, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3 p_4};\mathcal{P}(p_1),p_2\right) \\ &-\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_1 < p_2 < p_3<(\frac{N}{b})^{\frac{1}{8.8}} \leqslant p_4< (\frac{N}{b})^{\frac{\theta}{2}-\frac{3}{14}}p_3^{-1} \\ (p_1 p_2 p_3 p_4, N)=1} }S\left(\mathcal{A}_{p_1 p_2 p_3 p_4};\mathcal{P}(p_1),p_2\right) \\ &-2\sum_{\substack{(\frac{N}{b})^{\frac{1}{3.106}} \leqslant p_1 < p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)\\ &-2\sum_{\substack{(\frac{N}{b})^{\frac{1}{3.73}} \leqslant p_1 < p_2 <(\frac{N}{bp_1})^{\frac{1}{2}} \\ (p_1 p_2, N)=1} }S\left(\mathcal{A}_{p_1 p_2};\mathcal{P}(p_1),p_2\right)+O\left(N^{\frac{13}{14}}\right)\\ =&\left(3 
S_{11}^{\prime}+S_{12}^{\prime}\right)+\left(S_{21}^{\prime}+S_{22}^{\prime}\right)-\left(S_{31}^{\prime}+S_{32}^{\prime}\right)-\left(S_{41}^{\prime}+S_{42}^{\prime}\right)\\ &-\left(S_{51}^{\prime}+S_{52}^{\prime}\right)-\left(S_{61}^{\prime}+S_{62}^{\prime}\right)-2\left(S_{71}^{\prime}+S_{72}^{\prime}\right)+O\left(N^{\frac{13}{14}}\right)\\ =&S_1^{\prime}+S_2^{\prime}-S_3^{\prime}-S_4^{\prime}-S_5^{\prime}-S_6^{\prime}-2S_7^{\prime}+O\left(N^{\frac{13}{14}}\right).\end{aligned}$$* *Proof.* It is similar to that of Lemma [Lemma 20](#l32){reference-type="ref" reference="l32"} and \[[@Cai2015], Lemma 9\] so we omit it here. ◻ **Lemma 22**. *See [@CAI867]. Let $\mathcal{A}=\mathcal{A}_1$ in section 2, $D_{\mathcal{A}_1}=N^{1/2} (\log N)^{-B}$ with $B=B(A)>0$ in Lemma [Lemma 13](#l3){reference-type="ref" reference="l3"}, and $\underline{p}=\frac{D_{\mathcal{A}_1}}{bp}$. Then for $\left(\frac{N}{b}\right)^{\frac{1}{4.5}}<D_{1}<D_{2} \leqslant \left(\frac{N}{b}\right)^{\frac{1}{3}}$ we have $$\begin{aligned} & \sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{2.5}}\right) \\ \leqslant& \sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{3.675}}\right) \\ &-\frac{1}{2}\sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}}\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<\underline{p}^{\frac{1}{2.5}} \\ (p_1, N)=1}}S\left(\mathcal{A}_{p p_1};\mathcal{P}, \underline{p}^{\frac{1}{3.675}}\right) \\ &+\frac{1}{2} \sum _{ \substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1} } \sum _{ \substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<p_3<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2 p_3, N)=1} } S\left(\mathcal{A}_{p p_1 p_2 p_3};\mathcal{P}(p_1), p_2\right) +O\left(N^{\frac{19}{20}}\right).\end{aligned}$$* *Proof.* It is similar to that of \[[@CAI867], Lemma 7\]. 
By Buchstab's identity, we have $$\begin{aligned} \nonumber S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{2.5}}\right)= & S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{3.675}}\right)-\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<\underline{p}^{\frac{1}{2.5}} \\ (p_1, N)=1}}S\left(\mathcal{A}_{p p_1};\mathcal{P}, \underline{p}^{\frac{1}{3.675}}\right)\\ &+\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2, N)=1}}S\left(\mathcal{A}_{p p_1 p_2};\mathcal{P}, p_1\right),\\ \nonumber S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{2.5}}\right)=&S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{3.675}}\right)-\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<\underline{p}^{\frac{1}{2.5}} \\ (p_1, N)=1}}S\left(\mathcal{A}_{p p_1};\mathcal{P}, \underline{p}^{\frac{1}{2.5}}\right)\\ &-\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2, N)=1}}S\left(\mathcal{A}_{p p_1 p_2};\mathcal{P}(p_1), p_2\right),\end{aligned}$$ $$\begin{aligned} \nonumber &\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2, N)=1}}S\left(\mathcal{A}_{p p_1 p_2};\mathcal{P}, p_1\right)-\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2, N)=1}}S\left(\mathcal{A}_{p p_1 p_2};\mathcal{P}(p_1), p_2\right)\\ =&\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<p_3<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2 p_3, N)=1}}S\left(\mathcal{A}_{p p_1 p_2 p_3};\mathcal{P}(p_1), p_2\right)+\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2, N)=1}}S\left(\mathcal{A}_{p p^2_1 p_2};\mathcal{P}, p_1\right).\end{aligned}$$ Now we add (31) and (32), sum over $p$ in the interval $\left[D_{1}, D_{2}\right)$ and by (33), we get Lemma [Lemma 
22](#l34){reference-type="ref" reference="l34"}, where the trivial inequality $$\sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}}\sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2, N)=1}}S\left(\mathcal{A}_{p p^2_1 p_2};\mathcal{P}, p_1\right) \ll N^{\frac{19}{20}}$$ is used. ◻ **Lemma 23**. *See [@Cai2015]. Let $\mathcal{A}=\mathcal{A}_2$ in section 2, $D_{\mathcal{A}_2}=N^{\theta-1/2} (\log N)^{-B}$ with $B=B(A)>0$ in Lemma [Lemma 13](#l3){reference-type="ref" reference="l3"}, and $\underline{p}^{\prime}=\frac{D_{\mathcal{A}_2}}{bp}$. Then for $\left(\frac{N}{b}\right)^{\frac{1}{4.5}}<D_{1}<D_{2} \leqslant \left(\frac{N}{b}\right)^{\frac{1}{3}}$ we have $$\begin{aligned} & \sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\prime\frac{1}{2.5}}\right) \\ \leqslant& \sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\prime\frac{1}{3.675}}\right) \\ &-\frac{1}{2}\sum_{\substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1}}\sum_{\substack{\underline{p}^{\prime\frac{1}{3.675}} \leqslant p_1<\underline{p}^{\prime\frac{1}{2.5}} \\ (p_1, N)=1}}S\left(\mathcal{A}_{p p_1};\mathcal{P}, \underline{p}^{\prime\frac{1}{3.675}}\right) \\ &+\frac{1}{2} \sum _{ \substack{D_{1} \leqslant p<D_{2} \\ (p, N)=1} } \sum _{ \substack{\underline{p}^{\prime\frac{1}{3.675}} \leqslant p_1<p_2<p_3<\underline{p}^{\prime\frac{1}{2.5}} \\ (p_1 p_2 p_3, N)=1} } S\left(\mathcal{A}_{p p_1 p_2 p_3};\mathcal{P}(p_1), p_2\right) +O\left(N^{\theta-\frac{1}{20}}\right).\end{aligned}$$* *Proof.* It is similar to that of Lemma [Lemma 22](#l34){reference-type="ref" reference="l34"} and \[[@Cai2015], Lemma 10\], so we omit it here. ◻ # Proof of Theorem 1.1 In this section, the sets $\mathcal{A}_1$, $\mathcal{B}_1$, $\mathcal{C}_1$, $\mathcal{E}_1$ and $\mathcal{F}_1$ are as defined in section 2.
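The evaluations below repeatedly invoke Lemma [Lemma 10](#Wfunction){reference-type="ref" reference="Wfunction"}, whose main analytic input is Mertens' third theorem, $\prod_{p<z}\left(1-\frac{1}{p}\right)=\frac{e^{-\gamma}}{\log z}(1+o(1))$. The following sketch (in Python, purely as a numerical sanity check and not part of the proof; the cutoff $z=2\cdot 10^{5}$ is an arbitrary choice) illustrates the asymptotic at a modest height:

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes p < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [p for p in range(2, n) if sieve[p]]

EULER_GAMMA = 0.5772156649015329  # Euler's constant

z = 200_000  # arbitrary test height
product = 1.0
for p in primes_below(z):
    product *= 1.0 - 1.0 / p

mertens = math.exp(-EULER_GAMMA) / math.log(z)
print(product / mertens)  # ratio close to 1
```

By the explicit estimates of Rosser and Schoenfeld, the relative error at this height should be of size roughly $1/(2\log^{2} z)$, so the printed ratio lies within about half a percent of $1$.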
We define the function $\omega$ as $\omega(p)=0$ for primes $p \mid a b N$ and $\omega(p)=\frac{p}{p-1}$ for other primes. ## Evaluation of $S_{1}, S_{2}, S_{3}$ Let $D_{\mathcal{A}_{1}}=\left(\frac{N}{b}\right)^{1 / 2}\left(\log \left(\frac{N}{b}\right)\right)^{-B}$ and $D_{\mathcal{A}_{1_{p}}}=\frac{D_{\mathcal{A}_1}}{p}=$ $\frac{\left(\frac{N}{b}\right)^{1 / 2}\left(\log \left(\frac{N}{b}\right)\right)^{-B}}{p}$ for some positive constant $B$. By Lemma [Lemma 11](#l6){reference-type="ref" reference="l6"}, we can take $$X_{\mathcal{A}_1}=\sum_{\substack{0 \leqslant k \leqslant b-1 \\(k, b)=1}}\pi\left(\frac{N}{a} ; b^{2}, N a_{b^{2}}^{-1}+k b\right) \sim \frac{\varphi(b) N}{a \varphi\left(b^{2}\right) \log N} \sim \frac{N}{a b \log N}$$ so that $|\mathcal{A}_1| \sim X_{\mathcal{A}_1}$. By Lemma [Lemma 10](#Wfunction){reference-type="ref" reference="Wfunction"} for $z_{\mathcal{A}_1}=N^{\frac{1}{\alpha}}$ we have $$W(z_{\mathcal{A}_1})=\frac{2\alpha e^{-\gamma} C(N)(1+o(1))}{\log N}.$$ To deal with the error term, we have $$\sum_{\substack{n \leqslant D_{\mathcal{A}_1} \\ n \mid P(z_{\mathcal{A}_1})}} \eta\left(X_{\mathcal{A}_1}, n\right) \ll \sum_{n \leqslant D_{\mathcal{A}_1}} \mu^{2}(n) \eta\left(X_{\mathcal{A}_1}, n\right),$$ and $$\sum_{p}\sum_{\substack{n \leqslant D_{\mathcal{A}_{1_p}} \\ n \mid P(z_{\mathcal{A}_1})}} \eta\left(X_{\mathcal{A}_1}, pn\right)\ll \sum_{n \leqslant D_{\mathcal{A}_1}} \mu^{2}(n)\eta\left(X_{\mathcal{A}_1}, n\right).$$ By our previous discussion, any $\frac{N-a p}{b}$ in $\mathcal{A}_1$ is relatively prime to $b$, so $\eta\left(X_{\mathcal{A}_1}, n\right)=0$ for any integer $n$ that shares a common prime divisor with $b$. If $n$ and $a$ share a common prime divisor $r$, say $n=r n^{\prime}$ and $a=r a^{\prime}$, then $\frac{N-a p}{b n}=\frac{N-r a^{\prime} p}{b r n^{\prime}} \in \mathbb{Z}$ implies $r \mid N$, which is a contradiction to $(a, N)=1$. Similarly, we have $\eta\left(X_{\mathcal{A}_1}, n\right)=0$ if $(n, N)>1$. 
We conclude that $\eta\left(X_{\mathcal{A}_1}, n\right)=0$ if $(n, a b N)>1$. For a square-free integer $n \leqslant D_{\mathcal{A}_1}$ such that $(n, abN)=1$, to make $n \mid \frac{N-a p}{b}$ for some $\frac{N-a p}{b} \in \mathcal{A}_1$, we need $a p \equiv N(\bmod b n)$, which implies $a p \equiv N+k b n$ $\left(\bmod b^{2} n\right)$ for some $0 \leqslant k \leqslant b-1$. Since $\left(\frac{N-a p}{b n}, b\right)=1$, we can further require $(k, b)=1$. When $k$ runs through the reduced residues modulo $b$, we know $a_{b^{2} n}^{-1} k$ also runs through the reduced residues modulo $b$. Therefore, we have $p \equiv a_{b^{2} n}^{-1} N+k b n\left(\bmod b^{2} n\right)$ for some $0 \leqslant k \leqslant b-1$ such that $(k, b)=1$. Conversely, if $p=a_{b^{2} n}^{-1} N+k b n+m b^{2} n$ for some integer $m$ and some $0 \leqslant k \leqslant b-1$ such that $(k, b)=1$, then $\left(\frac{N-a p}{b n}, b\right)=$ $\left(\frac{N-a a_{b^{2} n}^{-1} N-a k b n-a m b^{2} n}{b n}, b\right)=(-a k, b)=1$. Therefore, for square-free integers $n$ such that $(n, abN)=1$, we have $$\begin{aligned} \nonumber \eta\left(X_{\mathcal{A}_1}, n\right) =&\left|\sum_{\substack{a \in \mathcal{A}_1 \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} X_{\mathcal{A}_1}\right| \\ \nonumber =&\left|\sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}} \pi\left(\frac{N}{a} ; b^{2} n, a_{b^{2} n}^{-1} N+k b n\right)-\frac{X_{\mathcal{A}_1}}{\varphi(n)}\right| \\ \nonumber \ll&\left|\sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}} \pi\left(\frac{N}{a} ; b^{2} n, a_{b^{2} n}^{-1} N+k b n\right)-\varphi(b) \frac{\pi\left(\frac{N}{a} ; 1,1\right)}{\varphi\left(b^{2} n\right)}\right|\\ \nonumber& +\left|\sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}} \frac{\pi\left(\frac{N}{a} ; b^{2}, a_{b^{2}}^{-1} N+k b\right)}{\varphi(n)}-\varphi(b) \frac{\pi\left(\frac{N}{a} ; 1,1\right)}{\varphi\left(b^{2} n\right)}\right| \\ \nonumber \ll& \sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}}\left|\pi\left(\frac{N}{a} ; b^{2} n, a_{b^{2} n}^{-1} N+k b n\right)-\frac{\pi\left(\frac{N}{a} ; 1,1\right)}{\varphi\left(b^{2} n\right)}\right| \\ & +\frac{1}{\varphi(n)} \sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}}\left|\pi\left(\frac{N}{a} ; b^{2}, a_{b^{2}}^{-1} N+k b\right)-\frac{\pi\left(\frac{N}{a} ; 1,1\right)}{\varphi\left(b^{2}\right)}\right| .\end{aligned}$$ By Lemmas [Lemma 12](#l7){reference-type="ref" reference="l7"} and  [Lemma 13](#l3){reference-type="ref" reference="l3"} with $g(k)=1$ for $k=1$ and $g(k)=0$ for $k>1$, we know $$\sum_{n \leqslant D_{\mathcal{A}_1}} \mu^{2}(n) \eta\left(X_{\mathcal{A}_1}, n\right) \ll N(\log N)^{-3}.$$ Then by (34)--(39), Lemma [Lemma 6](#l1){reference-type="ref" reference="l1"}, Lemma [Lemma 7](#l2){reference-type="ref" reference="l2"} and some routine arguments we have $$\begin{aligned} \nonumber S_{11} &\geqslant X_{\mathcal{A}_1} W\left(z_{\mathcal{A}_1}\right)\left\{f\left(\frac{1/2}{1/13.2}\right)+O\left(\frac{1}{\log ^{\frac{1}{3}} D}\right)\right\}-\sum_{\substack{n<D_{\mathcal{A}_1} \\ n \mid P(z_{\mathcal{A}_1})}}\eta(X_{\mathcal{A}_1}, n)\\ \nonumber &\geqslant \frac{N}{a b \log N}\frac{2\times 13.2 e^{-\gamma} C(N)(1+o(1))}{\log N}\left(\frac{2 e^{\gamma}}{\frac{13.2}{2}}\left(\log 5.6+\int_{2}^{4.6} \frac{\log (s-1)}{s} \log \frac{5.6}{s+1} d s\right)\right)\\ \nonumber &\geqslant (1+o(1)) \frac{8C(N) N}{ab(\log N)^2}\left(\log 5.6+\int_{2}^{4.6} \frac{\log (s-1)}{s} \log \frac{5.6}{s+1} d s\right)\\ \nonumber&\geqslant 14.82216 \frac{C(N) N}{ab(\log N)^2},\\ \nonumber S_{12} &\geqslant X_{\mathcal{A}_1} W\left(z_{\mathcal{A}_1}\right)\left\{f\left(\frac{1/2}{1/8.4}\right)+O\left(\frac{1}{\log ^{\frac{1}{3}} D}\right)\right\}-\sum_{\substack{n<D_{\mathcal{A}_1} \\ n \mid P(z_{\mathcal{A}_1})}}\eta(X_{\mathcal{A}_1},n)\\ \nonumber &\geqslant \frac{N}{a b \log N}\frac{2\times 8.4 e^{-\gamma} C(N)(1+o(1))}{\log N}\left(\frac{2 e^{\gamma}}{\frac{8.4}{2}}\left(\log
3.2+\int_{2}^{2.2} \frac{\log (s-1)}{s} \log \frac{3.2}{s+1} d s\right)\right)\\ \nonumber &\geqslant (1+o(1)) \frac{8C(N) N}{ab(\log N)^2}\left(\log 3.2+\int_{2}^{2.2} \frac{\log (s-1)}{s} \log \frac{3.2}{s+1} d s\right) \\ \nonumber&\geqslant 9.30664 \frac{C(N) N}{ab(\log N)^2},\\ S_{1}&=3 S_{11}+S_{12} \geqslant 53.77312\frac{C(N) N}{ab(\log N)^2} .\end{aligned}$$ Similarly, we have $$\begin{aligned} \nonumber S_{21} &\geqslant(1+o(1)) \frac{8 C(N) N}{ab(\log N)^{2} } \left(\int_{\frac{1}{13.2}}^{\frac{1}{8.4}} \int_{t_{1}}^{\frac{1}{8.4}} \frac{\log \left(5.6-13.2\left(t_{1}+t_{2}\right)\right)}{t_{1} t_{2}\left(1-2\left(t_{1}+t_{2}\right)\right)} d t_{1} d t_{2}\right),\\ \nonumber S_{22} &\geqslant(1+o(1)) \frac{8 C(N) N}{ab(\log N)^{2}} \left(\int_{\frac{1}{13.2}}^{\frac{1}{8.4}} \int_{\frac{1}{8.4}}^{\frac{4.6}{13.2}-t_{1}} \frac{\log \left(5.6-13.2\left(t_{1}+t_{2}\right)\right)}{t_{1} t_{2}\left(1-2\left(t_{1}+t_{2}\right)\right)} d t_{1} d t_{2}\right),\\ \nonumber S_{2}&=S_{21}+S_{22}\\ \nonumber &\geqslant(1+o(1)) \frac{8 C(N) N}{ab(\log N)^{2}} \left(\int_{\frac{1}{13.2}}^{\frac{1}{8.4}} \int_{t_{1}}^{\frac{4.6}{13.2}-t_{1}} \frac{\log \left(5.6-13.2\left(t_{1}+t_{2}\right)\right)}{t_{1} t_{2}\left(1-2\left(t_{1}+t_{2}\right)\right)} d t_{1} d t_{2}\right)\\ &\geqslant 5.201296 \frac{C(N) N}{ab(\log N)^{2}},\end{aligned}$$ $$\begin{aligned} \nonumber S_{31} \leqslant&(1+o(1)) \frac{8 C(N) N}{ab(\log N)^{2}}\left(\log \frac{4.1001(13.2-2)}{13.2-8.2002}+\int_{2}^{4.6} \frac{\log (s-1)}{s} \log \frac{5.6(5.6-s)}{s+1} d s\right.\\ \nonumber&\left.+\int_{2}^{2.6} \frac{\log (s-1)}{s} d s \int_{s+2}^{4.6} \frac{1}{t} \log \frac{t-1}{s+1} \log \frac{5.6(5.6-t)}{t+1} d t\right)\leqslant 21.9016 \frac{C(N) N}{ab(\log N)^{2}},\\ \nonumber S_{32} \leqslant&(1+o(1)) \frac{8 C(N) N}{ab(\log N)^{2}}\left(\log \frac{3.6(13.2-2)}{13.2-7.2}+\int_{2}^{4.6} \frac{\log (s-1)}{s} \log \frac{5.6(5.6-s)}{s+1} d s\right.\\ \nonumber&\left.+\int_{2}^{2.6} \frac{\log (s-1)}{s} 
d s \int_{s+2}^{4.6} \frac{1}{t} \log \frac{t-1}{s+1} \log \frac{5.6(5.6-t)}{t+1} d t\right) \leqslant 19.40136 \frac{C(N) N}{ab(\log N)^{2}},\\ S_{3}=& S_{31}+S_{32} \leqslant 41.30296 \frac{C(N) N}{ab(\log N)^{2}}.\end{aligned}$$

## Evaluation of $S_{4}, S_{7}$

Let $D_{\mathcal{B}_1}=N^{1 / 2}(\log N)^{-B}$. By Chen's role-reversal trick we know that $$S_{41} \leqslant S(\mathcal{B}_1;\mathcal{P},D_{\mathcal{B}_1}^{\frac{1}{2}})+O\left(N^{\frac{2}{3}}\right).$$ We can take $$X_{\mathcal{B}_1}=\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}} \\ 0 \leqslant j \leqslant a-1,(j, a)=1}} \pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)$$ so that $|\mathcal{B}_1| \sim X_{\mathcal{B}_1}$. By Lemma [Lemma 10](#Wfunction){reference-type="ref" reference="Wfunction"} for $z_{\mathcal{B}_1}=N^{\frac{1}{4}-\varepsilon}$ we have $$W(z_{\mathcal{B}_1})=\frac{8 e^{-\gamma} C(N)(1+o(1))}{\log N}.$$ By Lemma [Lemma 11](#l6){reference-type="ref" reference="l6"} and integration by parts we get that $$\begin{aligned} \nonumber X_{\mathcal{B}_1} &=(1+o(1)) \sum_{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}}} \frac{\varphi(a) \frac{N}{b p_{1} p_{2}}}{\varphi\left(a^{2}\right) \log \left(\frac{N}{b p_{1} p_{2}}\right)} \\ \nonumber & =(1+o(1)) \frac{N}{a b} \sum_{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}}} \frac{1}{p_{1} p_{2} \log \left(\frac{N}{p_{1} p_{2}}\right)} \\ \nonumber& =(1+o(1)) \frac{N}{a b} \int_{(\frac{N}{b})^{\frac{1}{13.2}}}^{(\frac{N}{b})^{\frac{1}{3}}} \frac{d t}{t \log t} \int_{(\frac{N}{b})^{\frac{1}{3}}}^{(\frac{N}{bt})^{\frac{1}{2}}} \frac{d u}{u \log u \log \left(\frac{N}{u t}\right)}\\ & =(1+o(1)) \frac{N}{ab\log N} \int_{2}^{12.2} \frac{\log
\left(2-\frac{3}{s+1}\right)}{s} d s .\end{aligned}$$ To deal with the error term, we have $$\sum_{\substack{n \leqslant D_{\mathcal{B}_1} \\ n \mid P(z_{\mathcal{B}_1})}} \eta\left(X_{\mathcal{B}_1}, n\right) \ll \sum_{n \leqslant D_{\mathcal{B}_1}} \mu^{2}(n)\eta\left(X_{\mathcal{B}_1}, n\right) .$$ For an integer $n$ such that $(n, a b N)>1$, similarly to the discussion for $\eta\left(X_{\mathcal{A}_1}, n\right)$, we have $\eta\left(X_{\mathcal{B}_1}, n\right)=0$. For a square-free integer $n$ such that $(n, a b N)=1$, if $n \mid \frac{N-b p_{1} p_{2} p_{3}}{a}$, then $(p_{1}, n)=1$ and $(p_{2}, n)=1$. Moreover, if $\left(\frac{N-b p_{1} p_{2} p_{3}}{a n}, a\right)=1$, then we have $b p_{1} p_{2} p_{3} \equiv$ $N+j a n\left(\bmod a^{2} n\right)$ for some $j$ such that $0 \leqslant j \leqslant a-1$ and $(j, a)=1$. Conversely, if $b p_{1} p_{2} p_{3}=N+j a n+s a^{2} n$ for some integer $j$ such that $0 \leqslant j \leqslant a-1$ and $(j, a)=1$, some integer $n$ relatively prime to $p_{1} p_{2}$ such that $an \mid\left(N-b p_{1} p_{2} p_{3}\right)$, and some integer $s$, then $\left(\frac{N-b p_{1} p_{2} p_{3}}{a n}, a\right)=(-j, a)=1$.
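As a brief aside before completing the error-term analysis: the numerical constants $14.82216$ and $9.30664$ in the lower bounds for $S_{11}$ and $S_{12}$, and the resulting constant for $S_1$, can be reproduced independently by quadrature. The following sketch (a plain composite Simpson rule; the helper names are ours, not from the paper) is only a sanity check, not part of the proof:

```python
from math import log

def simpson(f, a, b, n=2000):
    # Composite Simpson rule on [a, b]; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

# Inner integrals from the lower bounds for S_11 and S_12.
i1 = simpson(lambda s: log(s - 1) / s * log(5.6 / (s + 1)), 2, 4.6)
i2 = simpson(lambda s: log(s - 1) / s * log(3.2 / (s + 1)), 2, 2.2)

c11 = 8 * (log(5.6) + i1)  # should come out just above 14.82216
c12 = 8 * (log(3.2) + i2)  # should come out just above 9.30664
c1 = 3 * 14.82216 + 9.30664  # = 53.77312, the constant in the S_1 bound
print(c11, c12, c1)
```

Both quadrature values land within rounding distance of the constants quoted above, confirming that the stated bounds are consistent truncations of the underlying integrals.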
Since $j b p_{1} p_{2}$ runs through the reduced residues modulo $a$ when $j$ runs through the reduced residues modulo $a$ and $\pi\left(x ; k, 1,1\right)=\pi\left(\frac{x}{k} ; 1,1\right)$, for square-free integers $n$ such that $(n, a b N)=1$, we have $$\begin{aligned} \nonumber \eta\left(X_{\mathcal{B}_1}, n\right) =&\left|\sum_{\substack{a \in \mathcal{B}_1 \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} X_{\mathcal{B}_1}\right|=\left|\sum_{\substack{a \in \mathcal{B}_1 \\ a \equiv 0(\bmod n)}} 1-\frac{X_{\mathcal{B}_1}}{\varphi(n)}\right| \\ \nonumber =&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}} \pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)\right.\\ \nonumber&\left. -\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}} \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}} \frac{\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}\right| \\ \nonumber \ll& \left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)\right.\right. 
\\ \nonumber& \left.\left.-\frac{\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}\right)\right| \\ \nonumber& +\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}} \\ \left(p_{1} p_{2}, nN\right)>1, 0 \leqslant j \leqslant a-1,(j, a)=1}} \frac{\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}\\ \nonumber \ll&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)-\frac{\pi\left(N ; b p_{1} p_{2}, 1,1\right)}{\varphi\left(a^{2} n\right)}\right)\right| \\ \nonumber& +\left| \sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\frac{\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}-\frac{\pi\left(\frac{N}{b p_{1} p_{2}} ; 1,1\right)}{\varphi\left(a^{2} n\right)}\right)\right| \\ \nonumber&+N^{\frac{12.2}{13.2}}(\log N)^{2}\\ \nonumber \ll&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)-\frac{\pi\left(N ; b p_{1} p_{2}, 1,1\right)}{\varphi\left(a^{2} n\right)}\right)\right| \\ \nonumber& +\frac{1}{\varphi(n)}\left| \sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} 
\leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)-\frac{\pi\left(\frac{N}{b p_{1} p_{2}} ; 1,1\right)}{\varphi\left(a^{2}\right)}\right)\right| \\ &+N^{\frac{12.2}{13.2}}(\log N)^{2}.\end{aligned}$$ By Lemma [Lemma 13](#l3){reference-type="ref" reference="l3"} with $$g(k)= \begin{cases} 1, & \text { if } k \in \mathcal{E}_1 \\ 0, & \text { otherwise } \end{cases}$$ and Lemma [Lemma 12](#l7){reference-type="ref" reference="l7"}, we have $$\sum_{n \leqslant D_{\mathcal{B}_1}} \mu^{2}(n)\eta\left(X_{\mathcal{B}_1}, n\right)=\sum_{\substack{n \leqslant D_{\mathcal{B}_1}\\(n,abN)=1}} \mu^{2}(n) \eta\left(X_{\mathcal{B}_1}, n\right) \ll N(\log N)^{-3}.$$ Then by (43)--(49) and some routine arguments we have $$S_{41} \leqslant (1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}} \int_{2}^{12.2} \frac{\log \left(2-\frac{3}{s+1}\right)}{s} d s.$$ Similarly, we have $$S_{42} \leqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\int_{2.604}^{7.4} \frac{\log \left(2.604-\frac{3.604}{s+1}\right)}{s} d s,$$ $$S_{4}=S_{41}+S_{42} \leqslant (1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}} \left(\int_{2}^{12.2} \frac{\log \left(2-\frac{3}{s+1}\right)}{s} d s+\int_{2.604}^{7.4} \frac{\log \left(2.604-\frac{3.604}{s+1}\right)}{s} d s\right)$$ $$\leqslant 10.69152 \frac{C(N) N}{ab(\log N)^{2}},$$ $$S_7 \leqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\int_{2}^{2.604} \frac{\log (s-1)}{s} d s \leqslant 0.5160672 \frac{C(N) N}{ab(\log N)^{2}}.$$ ## Evaluation of $S_{6}$ Let $D_{\mathcal{C}_1}=N^{1 / 2}(\log N)^{-B}$. 
By Chen's role-reversal trick we know that $$S_{61} \leqslant S(\mathcal{C}_1;\mathcal{P},D_{\mathcal{C}_1}^{\frac{1}{2}})+O\left(N^{\frac{12.2}{13.2}}\right).$$ By Lemma [Lemma 8](#l4){reference-type="ref" reference="l4"} we have $$\begin{aligned} \nonumber |\mathcal{C}_1|&= \sum_{mp_1 p_2 p_4 \in \mathcal{F}_1}\sum_{\substack{p_2<p_3<\min((\frac{N}{b})^\frac{1}{8.4},(\frac{N}{bmp_1p_2p_4})) \\p_{3} \equiv N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a (\bmod a^{2}) \\ 0 \leqslant j \leqslant a-1,(j, a)=1 }}1 \\ \nonumber &=\sum_{\substack{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_1 < p_4 < p_2< p_3<(\frac{N}{b})^{\frac{1}{8.4}} \\ (p_1 p_2 p_3 p_4, N)=1} }\sum_{\substack{1\leqslant m\leqslant \frac{N}{bp_1 p_2 p_3 p_4}\\\left(m, p_{1}^{-1} abN P\left(p_{4}\right)\right)=1 }}\frac{\varphi(a)}{\varphi(a^2)}+O\left(N^{\frac{12.2}{13.2}}\right)\\ \nonumber &<(1+o(1))\frac{N}{ab}\sum_{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}<p_{4}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}} \frac{0.5617}{p_{1} p_{2} p_{3} p_{4} \log p_{4}}+O\left(N^{\frac{12.2}{13.2}}\right)\\ &=(1+o(1))\frac{0.5617N}{ab\log N}\int_{\frac{1}{13.2}}^{\frac{1}{8.4}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.4}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \frac{1}{8.4 t_{2}} d t_{2} .\end{aligned}$$ To deal with the error term, we have $$\sum_{\substack{n \leqslant D_{\mathcal{C}_1} \\ n \mid P\left(z_{\mathcal{C}_1}\right)}} \eta\left(|{\mathcal{C}_1}|, n\right) \ll \sum_{n \leqslant D_{\mathcal{C}_1}} \mu^{2}(n) \eta\left(|{\mathcal{C}_1}|, n\right) .$$ Since $\omega(p)=0$ for primes $p \mid a b N$ and $\omega(p)=\frac{p}{p-1}$ for other primes, for an integer $n$ such that $(n, a b N)>1$, similarly to the discussion for $\eta\left(X_{\mathcal{B}_1}, n\right)$, we have $\eta\left(|{\mathcal{C}_1}|, n\right)=0$. 
For a square-free integer $n$ that is relatively prime to $a b N$, if $n \mid \frac{N-bm p_{1} p_{2} p_{3} p_4}{a}$, then $(p_{1}, n)=1, (p_{2}, n)=1$ and $(p_{4}, n)=1$. Moreover, if $\left(\frac{N-bm p_{1} p_{2} p_{3} p_4}{a n}, a\right)=1$, then we have $bm p_{1} p_{2} p_{3} p_4 \equiv$ $N+j a n\left(\bmod a^{2} n\right)$ for some $j$ such that $0 \leqslant j \leqslant a-1$ and $(j, a)=1$. Conversely, if $bm p_{1} p_{2} p_{3} p_4=N+j a n+s a^{2} n$ for some integer $j$ such that $0 \leqslant j \leqslant a-1$ and $(j, a)=1$, some integer $n$ relatively prime to $p_{1} p_{2} p_4$ such that $an \mid\left(N-bm p_{1} p_{2} p_{3} p_4\right)$, and some integer $s$, then $\left(\frac{N-bm p_{1} p_{2} p_{3} p_4}{a n}, a\right)=(-j, a)=1$. Since $j bm p_{1} p_{2} p_4$ runs through the reduced residues modulo $a$ when $j$ runs through the reduced residues modulo $a$ and $\pi\left(x ; k, 1,1\right)=\pi\left(\frac{x}{k} ; 1,1\right)$, for a square-free integer $n$ relatively prime to $a b N$, we have $$\begin{aligned} \nonumber \eta\left(|{\mathcal{C}_1}|, n\right) &=\left|\sum_{\substack{a \in \mathcal{C}_1 \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} |\mathcal{C}_1|\right|=\left|\sum_{\substack{a \in \mathcal{C}_1 \\ a \equiv 0(\bmod n)}} 1-\frac{|\mathcal{C}_1|}{\varphi(n)}\right| \\ \nonumber &=\left|\sum_{\substack{e \in \mathcal{F}_1 \\ (e,n)=1}}\left(\sum_{\substack{p_2<p_3<\min((\frac{N}{b})^\frac{1}{8.4},(\frac{N}{be})) \\be p_3 \equiv N+jan (\bmod a^{2}n) \\ 0 \leqslant j \leqslant a-1,(j, a)=1 }}1-\frac{1}{\varphi(n)}\sum_{\substack{p_2<p_3<\min((\frac{N}{b})^\frac{1}{8.4},(\frac{N}{be})) \\p_{3} \equiv N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a (\bmod a^{2}) \\ 0 \leqslant j \leqslant a-1,(j, a)=1 }}1\right)\right|\\ &+\frac{1}{\varphi(n)}\sum_{\substack{e \in \mathcal{F}_1 \\ (e,n)>1}}\sum_{\substack{p_2<p_3<\min((\frac{N}{b})^\frac{1}{8.4},(\frac{N}{be})) \\p_{3} \equiv N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a (\bmod a^{2}) \\ 0 \leqslant j \leqslant a-1,(j, a)=1
}}1.\end{aligned}$$ Let $$g(k)=\sum_{\substack{e=k \\ e\in \mathcal{F}_1 \\ 0 \leqslant j \leqslant a-1,(j, a)=1}}1,$$ then $$\begin{aligned} \nonumber \eta\left(|{\mathcal{C}_1}|, n\right) \ll& \left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\sum_{\substack{p_2<p_3<\min((\frac{N}{b})^\frac{1}{8.4},(\frac{N}{bk})) \\bk p_3 \equiv N+jan (\bmod a^{2}n) }}1-\frac{1}{\varphi(n)}\sum_{\substack{p_2<p_3<\min((\frac{N}{b})^\frac{1}{8.4},(\frac{N}{bk})) \\p_{3} \equiv N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a (\bmod a^{2}) }}1\right)\right|\\ \nonumber &+N^{\frac{12.2}{13.2}}(\log N)^2\\ \nonumber =&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{7.4}{8.4}}\\ (k,n)=1}}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^\frac{1}{8.4}; b k, a^2 n, N + j a n\right)-\frac{\pi\left(\left(\frac{N}{b}\right)^\frac{1}{8.4}; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{7.4}{8.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(N; b k, a^2 n, N + j a n\right)-\frac{\pi\left(\frac{N}{bk}; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(b k p_2; b k, a^2 n, N + j a n\right)-\frac{\pi\left(p_2; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}\right)\right|\\ \nonumber &+N^{\frac{12.2}{13.2}}(\log N)^2\\ \nonumber \ll&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{7.4}{8.4}}\\ (k,n)=1}}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^\frac{1}{8.4}; b k, a^2 n, N + j a n\right)-\frac{\pi\left(bk\left(\frac{N}{b}\right)^\frac{1}{8.4}; b k, 1, 1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{7.4}{8.4}}\\ 
(k,n)=1}}g(k)\left(\frac{\pi\left(\left(\frac{N}{b}\right)^\frac{1}{8.4}; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}-\frac{\pi\left(\left(\frac{N}{b}\right)^\frac{1}{8.4}; 1, 1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{7.4}{8.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(N; b k, a^2 n, N + j a n\right)-\frac{\pi\left(N; b k, 1, 1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{7.4}{8.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\frac{\pi\left(\frac{N}{bk}; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}-\frac{\pi\left(\frac{N}{bk}; 1,1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(b k p_2; b k, a^2 n, N + j a n\right)-\frac{\pi\left(b k p_2; b k, 1,1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\frac{\pi\left(p_2; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)}{\varphi(n)}-\frac{\pi\left(p_2;1,1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+N^{\frac{12.2}{13.2}}(\log N)^2\\ \nonumber \ll&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{7.4}{8.4}}\\ (k,n)=1}}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^\frac{1}{8.4}; b k, a^2 n, N + j a n\right)-\frac{\pi\left(bk\left(\frac{N}{b}\right)^\frac{1}{8.4}; b k, 1, 1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\frac{1}{\varphi(n)}\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{7.4}{8.4}}\\ (k,n)=1}}g(k)\left(\pi\left(\left(\frac{N}{b}\right)^\frac{1}{8.4}; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)-\frac{\pi\left(\left(\frac{N}{b}\right)^\frac{1}{8.4}; 1, 1\right)}{\varphi(a^2 )}\right)\right|\\ \nonumber 
&+\left|\sum_{\substack{(\frac{N}{b})^{\frac{7.4}{8.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(N; b k, a^2 n, N + j a n\right)-\frac{\pi\left(N; b k, 1, 1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\frac{1}{\varphi(n)}\left|\sum_{\substack{(\frac{N}{b})^{\frac{7.4}{8.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(\frac{N}{bk}; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)-\frac{\pi\left(\frac{N}{bk}; 1,1\right)}{\varphi(a^2 )}\right)\right|\\ \nonumber &+\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(b k p_2; b k, a^2 n, N + j a n\right)-\frac{\pi\left(b k p_2; b k, 1,1\right)}{\varphi(a^2 n)}\right)\right|\\ \nonumber &+\frac{1}{\varphi(n)}\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{4.4}}<k< (\frac{N}{b})^{\frac{12.2}{13.2}}\\ (k,n)=1}}g(k)\left(\pi\left(p_2; a^2, N(bm p_{1} p_{2} p_4)_{a^{2}}^{-1}+j a\right)-\frac{\pi\left(p_2;1,1\right)}{\varphi(a^2 )}\right)\right|\\ &+N^{\frac{12.2}{13.2}}(\log N)^2.\end{aligned}$$ By Lemma [Lemma 13](#l3){reference-type="ref" reference="l3"}, Remark [Remark 14](#remark1){reference-type="ref" reference="remark1"} and Lemma [Lemma 12](#l7){reference-type="ref" reference="l7"}, we have $$\sum_{n \leqslant D_{\mathcal{C}_1}} \mu^{2}(n) \eta\left(|{\mathcal{C}_1|}, n\right)=\sum_{\substack{n \leqslant D_{\mathcal{C}_1}\\(n,abN)=1}} \mu^{2}(n) \eta\left(|{\mathcal{C}_1|}, n\right) \ll N(\log N)^{-3}.$$ By (52)--(57) we have $$\begin{aligned} \nonumber S_{61} &\leqslant(1+o(1)) \frac{0.5617 \times 8 C(N) N}{ab(\log N)^2} \int_{\frac{1}{13.2}}^{\frac{1}{8.4}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.4}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \frac{1}{8.4 t_{2}} d t_{2} \\ &\leqslant 0.0864362 \frac{C(N) N}{ab(\log N)^2}.\end{aligned}$$ Similarly, we have $$S_{62}= \sum_{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{ \frac{1}{8.4}} 
\leqslant p_{4}<(\frac{N}{b})^{\frac{1.4}{8.4}} } S\left(\mathcal{A}_{p_{1} p_{2} p_{3} p_{4}} ; \mathcal{P}\left(p_{1}\right), p_{2}\right)$$ $$+\sum_{(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{ \frac{1}{8.4}}<(\frac{N}{b})^{ \frac{1.4}{8.4}} \leqslant p_{4}<(\frac{N}{b})^{\frac{4.6}{13.2}}p^{-1}_3 } S\left(\mathcal{A}_{p_{1} p_{2} p_{3} p_{4}} ; \mathcal{P}\left(p_{1}\right), p_{2}\right)$$ $$\leqslant(1+o(1)) \frac{0.5617 \times 8 C(N) N}{ab(\log N)^2} \left(21.6 \log \frac{13.2}{8.4}-9.6\right) \log 1.4$$ $$+(1+o(1)) \frac{0.5644 \times 8 C(N) N}{ab(\log N)^2}\int_{\frac{1}{13.2}}^{\frac{1}{8.4}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.4}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \left(\frac{8.4}{1.4}\left(\frac{4.6}{13.2}-t_{2}\right)\right) d t_{2}$$ $$\leqslant 0.5208761\frac{ C(N) N}{ab(\log N)^2}.$$ By (58) and (59) we have $$\begin{aligned} \nonumber S_{6}=S_{61}+S_{62} &\leqslant 0.0864362 \frac{C(N) N}{ab(\log N)^2} +0.5208761 \frac{C(N) N}{ab(\log N)^2}\\ &\leqslant 0.6073123 \frac{C(N) N}{ab(\log N)^2} .\end{aligned}$$ ## Evaluation of $S_{5}$ For $p \geqslant \left(\frac{N}{b}\right)^{\frac{4.1001}{13.2}}$ we have $$\underline{p}^{\frac{1}{2.5}} \leqslant \left(\frac{N}{b}\right)^{\frac{1}{13.2}}, \quad S\left(\mathcal{A}_{p};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right) \leqslant S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\frac{1}{2.5}}\right).$$ By Lemma [Lemma 22](#l34){reference-type="ref" reference="l34"} we have $$\begin{aligned} \nonumber S_{51}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\\ &\leqslant \sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{2.5}}\right) \leqslant \Gamma_{1}-\frac{1}{2} 
\Gamma_{2}+\frac{1}{2} \Gamma_{3}+O\left(N^{\frac{19}{20}}\right) .\end{aligned}$$ By Lemmas [Lemma 6](#l1){reference-type="ref" reference="l1"},  [Lemma 7](#l2){reference-type="ref" reference="l2"},  [Lemma 13](#l3){reference-type="ref" reference="l3"} and some routine arguments we get $$\begin{aligned} \nonumber\Gamma_{1}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)\\ &\leqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\left(\int_{\frac{4.1001}{13.2}}^{\frac{1}{3}} \frac{d t}{t(1-2 t)}\right)\left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t\right),\\ \nonumber\Gamma_{2}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} \sum_{\substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<\underline{p}^{\frac{1}{2.5}} \\ (p_1, N)=1}}S\left(\mathcal{A}_{p p_1};\mathcal{P}, \underline{p}^{\frac{1}{3.675}}\right)\\ &\geqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\left(\int_{\frac{4.1001}{13.2}}^{\frac{1}{3}} \frac{d t}{t(1-2 t)}\right)\left(\int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t\right).\end{aligned}$$ By an argument similar to the evaluation of $S_{61}$ we get that $$\begin{aligned} \nonumber \Gamma_{3}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} \sum _{ \substack{\underline{p}^{\frac{1}{3.675}} \leqslant p_1<p_2<p_3<\underline{p}^{\frac{1}{2.5}} \\ (p_1 p_2 p_3, N)=1} } S\left(\mathcal{A}_{p p_1 p_2 p_3};\mathcal{P}(p_1), p_2\right)\\ &\leqslant(1+o(1)) \frac{16C(N) N}{1.763ab(\log N)^{2}}\left(\int_{\frac{4.1001}{13.2}}^{\frac{1}{3}} \frac{d t}{t(1-2 t)}\right)\left(6.175 \log \frac{3.675}{2.5}-2.35\right).\end{aligned}$$ By (68)--(71) we have $$S_{51}=\sum_{\substack{(\frac{N}{b})^{\frac{4.1001}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}} \\ (p, N)=1}} 
S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)$$ $$\begin{gathered} \leqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\left(\int_{\frac{4.1001}{13.2}}^{\frac{1}{3}} \frac{d t}{t(1-2 t)}\right)\times\\ \left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t-\frac{1}{2} \int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t +\frac{1}{1.763}\left(6.175 \log \frac{3.675}{2.5}-2.35\right)\right). \end{gathered}$$ Similarly, we have $$S_{52}=\sum_{\substack{(\frac{N}{b})^{\frac{3.6}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.604}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{8.4}}\right)$$ $$\begin{gathered} \leqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\left(\int_{\frac{3.6}{13.2}}^{\frac{1}{3.604}} \frac{d t}{t(1-2 t)}\right)\times\\ \left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t-\frac{1}{2} \int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t +\frac{1}{1.763}\left(6.175 \log \frac{3.675}{2.5}-2.35\right)\right) \end{gathered}$$ $$S_{5}=S_{51}+S_{52}$$ $$\begin{gathered} \leqslant(1+o(1)) \frac{8C(N) N}{ab(\log N)^{2}}\left(\int_{\frac{4.1001}{13.2}}^{\frac{1}{3}} \frac{d t}{t(1-2 t)}+\int_{\frac{3.6}{13.2}}^{\frac{1}{3.604}} \frac{d t}{t(1-2 t)}\right)\times\\ \left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t-\frac{1}{2} \int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t +\frac{1}{1.763}\left(6.175 \log \frac{3.675}{2.5}-2.35\right)\right) \end{gathered}$$ $$\leqslant 1.87206 \frac{C(N) N}{ab(\log N)^{2}}.$$

## Proof of Theorem 1.1

By (40)--(42), (50)--(51), (60) and (65) we get $$S_{1}+S_{2} \geqslant 58.974416 \frac{C(N) N}{ab(\log N)^{2}},$$ $$S_{3}+S_{4}+S_{5}+S_{6}+2S_{7} \leqslant 55.505987 \frac{C(N) N}{ab(\log N)^{2}},$$ $$4R_{a,b}(N) \geqslant (S_{1}+S_{2})-(S_{3}+S_{4}+S_{5}+S_{6}+2S_{7}) \geqslant 3.468429 \frac{C(N) N}{ab(\log N)^{2}},$$ $$R_{a,b}(N) \geqslant 0.8671 \frac{C(N) N}{ab(\log N)^{2}}.$$ Theorem [Theorem
1](#t1){reference-type="ref" reference="t1"} is proved.

# Proof of Theorem 1.2

In this section, the sets $\mathcal{A}_2$, $\mathcal{B}_2$, $\mathcal{C}_2$, $\mathcal{C}_3$, $\mathcal{E}_2$, $\mathcal{F}_2$ and $\mathcal{F}_3$ are defined analogously to their counterparts in the proof of Theorem 1.1. We define the function $\omega$ as $\omega(p)=0$ for primes $p \mid a b N$ and $\omega(p)=\frac{p}{p-1}$ for other primes.

## Evaluation of $S_{1}^{\prime}, S_{2}^{\prime}, S_{3}^{\prime}$

Let $D_{\mathcal{A}_{2}}=\left(\frac{N}{b}\right)^{\theta / 2}\left(\log \left(\frac{N}{b}\right)\right)^{-B}$ and $D_{\mathcal{A}_{2_{p}}}=\frac{D_{\mathcal{A}_2}}{p}=$ $\frac{\left(\frac{N}{b}\right)^{\theta / 2}\left(\log \left(\frac{N}{b}\right)\right)^{-B}}{p}$ for some positive constant $B$. By Lemma [Lemma 11](#l6){reference-type="ref" reference="l6"}, we can take $$X_{\mathcal{A}_2}=\sum_{\substack{0 \leqslant k \leqslant b-1 \\(k, b)=1}}\pi\left(\frac{N^\theta}{a} ; b^{2}, N a_{b^{2}}^{-1}+k b\right) \sim \frac{\varphi(b) N^\theta}{a \varphi\left(b^{2}\right) \log N^\theta} \sim \frac{N^\theta}{a b\theta \log N}$$ so that $|\mathcal{A}_2| \sim X_{\mathcal{A}_2}$.
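The second asymptotic equivalence above uses the standard identity $\varphi\left(b^{2}\right)=b \varphi(b)$, so that $\varphi(b) / \varphi\left(b^{2}\right)=1 / b$ (the same simplification was used for $X_{\mathcal{A}_1}$ and $X_{\mathcal{B}_1}$). A quick illustration with a hand-rolled totient (the helper name is ours):

```python
from math import gcd

def phi(n):
    # Euler totient by direct count; fine for small illustrative n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# phi(b^2) = b * phi(b) for every b >= 1 (clear from the prime factorization).
for b in range(1, 50):
    assert phi(b * b) == b * phi(b)
print("phi(b^2) = b*phi(b) verified for b < 50")
```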
By Lemma [Lemma 10](#Wfunction){reference-type="ref" reference="Wfunction"} for $z_{\mathcal{A}_2}=N^{\frac{1}{\alpha}}$ we have $$W(z_{\mathcal{A}_2})=\frac{2\alpha e^{-\gamma} C(N)(1+o(1))}{\log N}.$$ To deal with the error term, we have $$\sum_{\substack{n \leqslant D_{\mathcal{A}_2} \\ n \mid P(z_{\mathcal{A}_2})}} \eta\left(X_{\mathcal{A}_2}, n\right) \ll \sum_{n \leqslant D_{\mathcal{A}_2}} \mu^{2}(n)\eta\left(X_{\mathcal{A}_2}, n\right),$$ and $$\sum_{p}\sum_{\substack{n \leqslant D_{\mathcal{A}_{2_p}} \\ n \mid P(z_{\mathcal{A}_2})}} \eta\left(X_{\mathcal{A}_2}, pn\right)\ll \sum_{n \leqslant D_{\mathcal{A}_2}} \mu^{2}(n)\eta\left(X_{\mathcal{A}_2}, n\right).$$ By our previous discussion, any $\frac{N-a p}{b}$ in $\mathcal{A}_2$ is relatively prime to $b$, so $\eta\left(X_{\mathcal{A}_2}, n\right)=0$ for any integer $n$ that shares a common prime divisor with $b$. If $n$ and $a$ share a common prime divisor $r$, say $n=r n^{\prime}$ and $a=r a^{\prime}$, then $\frac{N-a p}{b n}=\frac{N-r a^{\prime} p}{b r n^{\prime}} \in \mathbb{Z}$ implies $r \mid N$, which is a contradiction to $(a, N)=1$. Similarly, we have $\eta\left(X_{\mathcal{A}_2}, n\right)=0$ if $(n, N)>1$. We conclude that $\eta\left(X_{\mathcal{A}_2}, n\right)=0$ if $(n, a b N)>1$. For a square-free integer $n \leqslant D_{\mathcal{A}_2}$ such that $(n, abN)=1$, to make $n \mid \frac{N-a p}{b}$ for some $\frac{N-a p}{b} \in \mathcal{A}_2$, we need $a p \equiv N(\bmod b n)$, which implies $a p \equiv N+k b n$ $\left(\bmod b^{2} n\right)$ for some $0 \leqslant k \leqslant b-1$. Since $\left(\frac{N-a p}{b n}, b\right)=1$, we can further require $(k, b)=1$. When $k$ runs through the reduced residues modulo $b$, we know $a_{b^{2} n}^{-1} k$ also runs through the reduced residues modulo $b$. Therefore, we have $p \equiv a_{b^{2} n}^{-1} N+k b n\left(\bmod b^{2} n\right)$ for some $0 \leqslant k \leqslant b-1$ such that $(k, b)=1$. 
Conversely, if $p=a_{b^{2} n}^{-1} N+k b n+m b^{2} n$ for some integer $m$ and some $0 \leqslant k \leqslant b-1$ such that $(k, b)=1$, then $\left(\frac{N-a p}{b n}, b\right)=$ $\left(\frac{N-a a_{b^{2} n}^{-1} N-a k b n-a m b^{2} n}{b n}, b\right)=(-a k, b)=1$. Therefore, for square-free integers $n$ such that $(n, abN)=1$, we have $$\begin{aligned} \nonumber \eta\left(X_{\mathcal{A}_2}, n\right) =&\left|\sum_{\substack{a \in \mathcal{A}_2 \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} X_{\mathcal{A}_2}\right| \\ \nonumber=&\left|\sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}} \pi\left(\frac{N^\theta}{a} ; b^{2} n, a_{b^{2} n}^{-1} N+k b n\right)-\frac{X_{\mathcal{A}_2}}{\varphi(n)}\right| \\ \nonumber \ll&\left|\sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}} \pi\left(\frac{N^\theta}{a} ; b^{2} n, a_{b^{2} n}^{-1} N+k b n\right)-\varphi(b) \frac{\pi\left(\frac{N^\theta}{a} ; 1,1\right)}{\varphi\left(b^{2} n\right)}\right|\\ \nonumber& +\left|\sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}} \frac{\pi\left(\frac{N^\theta}{a} ; b^{2}, a_{b^{2}}^{-1} N+k b\right)}{\varphi(n)}-\varphi(b) \frac{\pi\left(\frac{N^\theta}{a} ; 1,1\right)}{\varphi\left(b^{2} n\right)}\right| \\ \nonumber \ll& \sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}}\left|\pi\left(\frac{N^\theta}{a} ; b^{2} n, a_{b^{2} n}^{-1} N+k b n\right)-\frac{\pi\left(\frac{N^\theta}{a} ; 1,1\right)}{\varphi\left(b^{2} n\right)}\right| \\ & +\frac{1}{\varphi(n)} \sum_{\substack{0 \leqslant k \leqslant b-1 \\ (k, b)=1}}\left|\pi\left(\frac{N^\theta}{a} ; b^{2}, a_{b^{2}}^{-1} N+k b\right)-\frac{\pi\left(\frac{N^\theta}{a} ; 1,1\right)}{\varphi\left(b^{2}\right)}\right| .\end{aligned}$$ By Lemmas [Lemma 12](#l7){reference-type="ref" reference="l7"} and [Lemma 13](#l3){reference-type="ref" reference="l3"} with $g(k)=1$ for $k=1$ and $g(k)=0$ for $k>1$, we know $$\sum_{n \leqslant D_{\mathcal{A}_2}} \mu^{2}(n)\eta\left(X_{\mathcal{A}_2}, n\right) \ll
N^\theta(\log N)^{-3}.$$ Then by (66)--(71), Lemma [Lemma 6](#l1){reference-type="ref" reference="l1"}, Lemma [Lemma 7](#l2){reference-type="ref" reference="l2"} and some routine arguments we have $$\begin{aligned} \nonumber S_{11}^{\prime} &\geqslant X_{\mathcal{A}_2} W\left(z_{\mathcal{A}_2}\right)\left\{f\left(\frac{\theta/2}{1/14}\right)+O\left(\frac{1}{\log ^{\frac{1}{3}} D}\right)\right\}-\sum_{\substack{n<D_{\mathcal{A}_2} \\ n \mid P(z_{\mathcal{A}_2})}}3^{\nu(n)} \eta(X_{\mathcal{A}_2}, n)\\ \nonumber &\geqslant \frac{N^\theta}{a b\theta \log N}\frac{2\times 14 e^{-\gamma} C(N)(1+o(1))}{\log N}\times\\ \nonumber & \left(\frac{2 e^{\gamma}}{\frac{14\theta}{2}}\left(\log (7\theta-1)+\int_{2}^{7\theta-2} \frac{\log (s-1)}{s} \log \frac{7\theta-1}{s+1} d s\right)\right)\\ \nonumber &\geqslant (1+o(1)) \frac{8C(N) N^\theta}{ab\theta^2 (\log N)^2}\left(\log (7\theta-1)+\int_{2}^{7\theta-2} \frac{\log (s-1)}{s} \log \frac{7\theta-1}{s+1} d s\right)\\ \nonumber&\geqslant 16.706129 \frac{C(N) N^\theta}{ab (\log N)^2},\\ \nonumber S_{12}^{\prime} &\geqslant X_{\mathcal{A}_2} W\left(z_{\mathcal{A}_2}\right)\left\{f\left(\frac{\theta/2}{1/8.8}\right)+O\left(\frac{1}{\log ^{\frac{1}{3}} D}\right)\right\}-\sum_{\substack{n<D_{\mathcal{A}_2} \\ n \mid P(z_{\mathcal{A}_2})}}3^{\nu(n)} \eta(X_{\mathcal{A}_2},n)\\ \nonumber &\geqslant \frac{N^\theta}{a b\theta \log N}\frac{2\times 8.8 e^{-\gamma} C(N)(1+o(1))}{\log N}\times\\ \nonumber &\left(\frac{2 e^{\gamma}}{\frac{8.8\theta}{2}}\left(\log (4.4\theta-1)+\int_{2}^{4.4\theta-2} \frac{\log (s-1)}{s} \log \frac{4.4\theta-1}{s+1} d s\right)\right)\\ \nonumber &\geqslant (1+o(1)) \frac{8C(N) N^\theta}{ab\theta^2(\log N)^2}\left(\log (4.4\theta-1)+\int_{2}^{4.4\theta-2} \frac{\log (s-1)}{s} \log \frac{4.4\theta-1}{s+1} d s\right) \\ \nonumber&\geqslant 10.339329 \frac{C(N) N^\theta}{ab(\log N)^2},\\ S_{1}^{\prime}&=3 S_{11}^{\prime}+S_{12}^{\prime} \geqslant 60.457716\frac{C(N) N^\theta}{ab(\log N)^2} .\end{aligned}$$ 
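The combination $S_{1}^{\prime}=3 S_{11}^{\prime}+S_{12}^{\prime}$ is plain arithmetic on the two constants just obtained; as a trivial check:

```python
# Lower-bound constants for S_11' and S_12' from the two displays above.
c11p, c12p = 16.706129, 10.339329
c1p = 3 * c11p + c12p
print(round(c1p, 6))  # 60.457716, the constant in the S_1' bound
```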
Similarly, we have $$\begin{aligned} \nonumber S_{21}^{\prime} &\geqslant(1+o(1)) \frac{8 C(N) N^\theta}{ab\theta(\log N)^{2} } \left(\int_{\frac{1}{14}}^{\frac{1}{8.8}} \int_{t_{1}}^{\frac{1}{8.8}} \frac{\log \left((7 \theta-1)-14\left(t_{1}+t_{2}\right)\right)}{t_{1} t_{2}\left(\theta-2\left(t_{1}+t_{2}\right)\right)} d t_{1} d t_{2}\right),\\ \nonumber S_{22}^{\prime} &\geqslant(1+o(1)) \frac{8 C(N) N^\theta}{ab\theta(\log N)^{2}} \left(\int_{\frac{1}{14}}^{\frac{1}{8.8}} \int_{\frac{1}{8.8}}^{\frac{\theta}{2}-\frac{2}{14}-t_{1}} \frac{\log \left((7 \theta-1)-14\left(t_{1}+t_{2}\right)\right)}{t_{1} t_{2}\left(\theta-2\left(t_{1}+t_{2}\right)\right)} d t_{1} d t_{2}\right),\\ \nonumber S_{2}^{\prime}&=S_{21}^{\prime}+S_{22}^{\prime}\\ \nonumber &\geqslant(1+o(1)) \frac{8 C(N) N^\theta}{ab\theta(\log N)^{2}} \left(\int_{\frac{1}{14}}^{\frac{1}{8.8}} \int_{t_{1}}^{\frac{\theta}{2}-\frac{2}{14}-t_{1}} \frac{\log \left((7 \theta-1)-14\left(t_{1}+t_{2}\right)\right)}{t_{1} t_{2}\left(\theta-2\left(t_{1}+t_{2}\right)\right)} d t_{1} d t_{2}\right)\\ &\geqslant 5.916004 \frac{C(N) N^\theta}{ab(\log N)^{2}},\end{aligned}$$ $$\begin{aligned} \nonumber S_{31}^{\prime} \leqslant&(1+o(1)) \frac{8 C(N) N^\theta}{ab\theta^2(\log N)^{2}}\left(\log \frac{4.0871(14\theta-2)}{14\theta-8.1742}\right.\\ \nonumber&\left.+\int_{2}^{7\theta-2} \frac{\log (s-1)}{s} \log \frac{(7\theta-1)(7\theta-1-s)}{s+1} d s\right.\\ \nonumber&\left.+\int_{2}^{7\theta-4} \frac{\log (s-1)}{s} d s \int_{s+2}^{7\theta-2} \frac{1}{t} \log \frac{t-1}{s+1} \log \frac{(7\theta-1)(7\theta-1-t)}{t+1} d t\right)\\ \nonumber\leqslant& 24.636554 \frac{C(N) N^\theta}{ab(\log N)^{2}},\\ \nonumber S_{32}^{\prime} \leqslant&(1+o(1)) \frac{8 C(N) N^\theta}{ab\theta^2(\log N)^{2}}\left(\log \frac{(7\theta-3)(7\theta-1)}{3}\right.\\ \nonumber&\left.+\int_{2}^{7\theta-2} \frac{\log (s-1)}{s} \log \frac{(7\theta-1)(7\theta-1-s)}{s+1} d s\right.\\ \nonumber&\left.+\int_{2}^{7\theta-4} \frac{\log (s-1)}{s} d s 
\int_{s+2}^{7\theta-2} \frac{1}{t} \log \frac{t-1}{s+1} \log \frac{(7\theta-1)(7\theta-1-t)}{t+1} d t\right)\\ \nonumber\leqslant& 21.808803 \frac{C(N) N^\theta}{ab(\log N)^{2}},\\ S_{3}^{\prime}=& S_{31}^{\prime}+S_{32}^{\prime} \leqslant 46.445357 \frac{C(N) N^\theta}{ab(\log N)^{2}}.\end{aligned}$$ ## Evaluation of $S_{4}^{\prime}, S_{7}^{\prime}$ Let $D_{\mathcal{B}_2}=N^{\theta-1 / 2}(\log N)^{-B}$. By Chen's role-reversal trick we know that $$S_{41}^{\prime} \leqslant S(\mathcal{B}_2;\mathcal{P},D_{\mathcal{B}_2}^{\frac{1}{2}})+O\left(N^{\frac{2}{3}}\right).$$ We can take $$\begin{gathered} \nonumber X_{\mathcal{B}_2}=\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}} \\ 0 \leqslant j \leqslant a-1,(j, a)=1}} \left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right.\\ \left.-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right)\end{gathered}$$ so that $|\mathcal{B}_2| \sim X_{\mathcal{B}_2}$. 
By Lemma [Lemma 10](#Wfunction){reference-type="ref" reference="Wfunction"} for $z_{\mathcal{B}_2}=N^{\frac{2\theta-1}{4}-\varepsilon}$ we have $$W(z_{\mathcal{B}_2})=\frac{8 e^{-\gamma} C(N)(1+o(1))}{(2\theta-1)\log N}.$$ By Lemma [Lemma 11](#l6){reference-type="ref" reference="l6"} and integration by parts we get that $$\begin{aligned} \nonumber X_{\mathcal{B}_2} &=(1+o(1)) \sum_{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}}} \frac{\varphi(a) \frac{N^\theta}{b p_{1} p_{2}}}{\varphi\left(a^{2}\right) \log \left(\frac{N}{b p_{1} p_{2}}\right)} \\ \nonumber & =(1+o(1)) \frac{N^\theta}{a b} \sum_{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}}} \frac{1}{p_{1} p_{2} \log \left(\frac{N}{p_{1} p_{2}}\right)} \\ \nonumber& =(1+o(1)) \frac{N^\theta}{a b} \int_{(\frac{N}{b})^{\frac{1}{14}}}^{(\frac{N}{b})^{\frac{1}{3.106}}} \frac{d t}{t \log t} \int_{(\frac{N}{b})^{\frac{1}{3.106}}}^{(\frac{N}{bt})^{\frac{1}{2}}} \frac{d u}{u \log u \log \left(\frac{N}{u t}\right)}\\ & =(1+o(1)) \frac{N^\theta}{ab\log N} \int_{2.106}^{13} \frac{\log \left(2.106-\frac{3.106}{s+1}\right)}{s} d s .\end{aligned}$$ To deal with the error term, we have $$\sum_{\substack{n \leqslant D_{\mathcal{B}_2} \\ n \mid P(z_{\mathcal{B}_2})}} \eta\left(X_{\mathcal{B}_2}, n\right) \ll \sum_{n \leqslant D_{\mathcal{B}_2}} \mu^{2}(n)\eta\left(X_{\mathcal{B}_2}, n\right) .$$ For an integer $n$ such that $(n, a b N)>1$, similarly to the discussion for $\eta\left(X_{\mathcal{A}_2}, n\right)$, we have $\eta\left(X_{\mathcal{B}_2}, n\right)=0$. For a square-free integer $n$ such that $(n, a b N)=1$, if $n \mid \frac{N-b p_{1} p_{2} p_{3}}{a}$, then $(p_{1} ,n)=1$ and $(p_{2} , n)=1$.
Moreover, if $\left(\frac{N-b p_{1} p_{2} p_{3}}{a n}, a\right)=1$, then we have $b p_{1} p_{2} p_{3} \equiv$ $N+j a n\left(\bmod a^{2} n\right)$ for some $j$ such that $0 \leqslant j \leqslant a-1$ and $(j, a)=1$. Conversely, if $b p_{1} p_{2} p_{3}=N+j a n+s a^{2} n$ for some integer $j$ such that $0 \leqslant j \leqslant a$ and $(j, a)=1$, some integer $n$ relatively prime to $p_{1} p_{2}$ such that $an \mid\left(N-b p_{1} p_{2} p_{3}\right)$, and some integer $s$, then $\left(\frac{N-b p_{1} p_{2} p_{3}}{a n}, a\right)=(-j, a)=1$. Since $j b p_{1} p_{2}$ runs through the reduced residues modulo $a$ when $j$ runs through the reduced residues modulo $a$ and $\pi\left(x ; k, 1,1\right)=\pi\left(\frac{x}{k} ; 1,1\right)$, for square-free integers $n$ such that $(n, a b N)=1$, we have $$\begin{aligned} \nonumber \eta\left(X_{\mathcal{B}_2}, n\right) =&\left|\sum_{\substack{a \in \mathcal{B}_2 \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} X_{\mathcal{B}_2}\right|=\left|\sum_{\substack{a \in \mathcal{B}_2 \\ a \equiv 0(\bmod n)}} 1-\frac{X_{\mathcal{B}_2}}{\varphi(n)}\right| \\ \nonumber =&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}} \left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)-\pi\left(N-N^\theta ; b p_{1} p_{2}, a^{2} n, N+j a n\right)\right)\right.\\ \nonumber&\left. 
-\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}} \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}} \frac{\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right)}{\varphi(n)}\right| \\ \nonumber \ll& \left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)-\pi\left(N-N^\theta ; b p_{1} p_{2}, a^{2} n, N+j a n\right)\right)\right.\right. \\ \nonumber& \left.\left.-\frac{\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right)}{\varphi(n)}\right)\right| \\ \nonumber& +\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}} \\ \left(p_{1} p_{2}, nN\right)>1, 0 \leqslant j \leqslant a-1,(j, a)=1}} \frac{\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right)}{\varphi(n)}\\ \nonumber \ll&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)-\pi\left(N-N^\theta ; b p_{1} p_{2}, a^{2} n, N+j a 
n\right)\right)\right.\right.\\ \nonumber&\left.\left.-\frac{\left(\pi\left(N ; b p_{1} p_{2}, 1,1\right)-\pi\left(N-N^\theta ; b p_{1} p_{2}, 1,1\right)\right)}{\varphi\left(a^{2} n\right)}\right)\right| \\ \nonumber& +\left| \sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\right.\\ \nonumber&\left.\left(\frac{\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right)}{\varphi(n)}\right.\right.\\ \nonumber&\left.\left.-\frac{\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; 1,1\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; 1,1\right)\right)}{\varphi\left(a^{2} n\right)}\right)\right|+N^{\frac{13}{14}}(\log N)^{2}\\ \nonumber \ll&\left|\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\left(\left(\pi\left(N ; b p_{1} p_{2}, a^{2} n, N+j a n\right)-\pi\left(N-N^\theta ; b p_{1} p_{2}, a^{2} n, N+j a n\right)\right)\right.\right.\\ \nonumber&\left.\left.-\frac{\left(\pi\left(N ; b p_{1} p_{2}, 1,1\right)-\pi\left(N-N^\theta ; b p_{1} p_{2}, 1,1\right)\right)}{\varphi\left(a^{2} n\right)}\right)\right| \\ \nonumber& +\frac{1}{\varphi(n)}\left| \sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{b p_{1}})^{\frac{1}{2}},(p_1 p_2,N)=1 \\ \left(p_{1} p_{2}, n\right)=1, 0 \leqslant j \leqslant a-1,(j, a)=1}}\right.\\ \nonumber&\left.\left(\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; a^{2}, N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; a^{2}, 
N\left(b p_{1} p_{2}\right)_{a^{2}}^{-1}+j a\right)\right)\right.\right.\\ &\left.\left.-\frac{\left(\pi\left(\frac{N}{b p_{1} p_{2}} ; 1,1\right)-\pi\left(\frac{N-N^\theta}{b p_{1} p_{2}} ; 1,1\right)\right)}{\varphi\left(a^{2} \right)}\right)\right|+N^{\frac{13}{14}}(\log N)^{2}.\end{aligned}$$ By Lemma [Lemma 15](#BVshort){reference-type="ref" reference="BVshort"} with $$g(k)= \begin{cases} 1, & \text { if } k \in \mathcal{E}_2 \\ 0, & \text { otherwise } \end{cases}$$ and Lemma [Lemma 12](#l7){reference-type="ref" reference="l7"}, we have $$\sum_{n \leqslant D_{\mathcal{B}_2}} \mu^{2}(n)\eta\left(X_{\mathcal{B}_2}, n\right)=\sum_{\substack{n \leqslant D_{\mathcal{B}_2}\\(n,abN)=1}} \mu^{2}(n)\eta\left(X_{\mathcal{B}_2}, n\right) \ll N(\log N)^{-3}.$$ Then by (75)--(81) and some routine arguments we have $$S_{41}^{\prime} \leqslant (1+o(1)) \frac{8C(N) N^\theta}{ab(2\theta-1)(\log N)^{2}} \int_{2.106}^{13} \frac{\log \left(2.106-\frac{3.106}{s+1}\right)}{s} d s.$$ Similarly, we have $$S_{42}^{\prime} \leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab(2\theta-1)(\log N)^{2}}\int_{2.73}^{7.8} \frac{\log \left(2.73-\frac{3.73}{s+1}\right)}{s} d s,$$ $$S_{4}^{\prime}=S_{41}^{\prime}+S_{42}^{\prime} \leqslant (1+o(1)) \frac{8C(N) N^\theta}{ab(2\theta-1)(\log N)^{2}} \left(\int_{2.106}^{13} \frac{\log \left(2.106-\frac{3.106}{s+1}\right)}{s} d s+\int_{2.73}^{7.8} \frac{\log \left(2.73-\frac{3.73}{s+1}\right)}{s} d s\right)$$ $$\leqslant 14.062223 \frac{C(N) N^\theta}{ab(\log N)^{2}},$$ $$\begin{aligned} \nonumber S_{71}^{\prime} &\leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab(2\theta-1)(\log N)^{2}}\int_{2}^{2.106} \frac{\log (s-1)}{s} d s\\ \nonumber S_{72}^{\prime} &\leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab(2\theta-1)(\log N)^{2}}\int_{2}^{2.73} \frac{\log (s-1)}{s} d s,\end{aligned}$$ $$S_{7}^{\prime}=S_{71}^{\prime}+S_{72}^{\prime} \leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab(2\theta-1)(\log N)^{2}}\left(\int_{2}^{2.106} \frac{\log (s-1)}{s} d s+\int_{2}^{2.73} \frac{\log 
(s-1)}{s} d s\right)$$ $$\leqslant 0.827696 \frac{C(N) N^\theta}{ab(\log N)^{2}}.$$ ## Evaluation of $S_{6}^{\prime}$ Let $D_{\mathcal{C}_2}=N^{\theta-1 / 2}(\log N)^{-B}$. By Chen's role-reversal trick we know that $$S_{61}^{\prime} \leqslant S(\mathcal{C}_2;\mathcal{P},D_{\mathcal{C}_2}^{\frac{1}{2}})+O\left(N^{\frac{2\theta-1}{4}}\right).$$ By Remark [Remark 9](#buchstabshort){reference-type="ref" reference="buchstabshort"} we have $$\begin{aligned} \nonumber |\mathcal{C}_2|&= \sum_{\substack{f \in \mathcal{F}_2 \\f \equiv N+j a (\bmod a^{2}) \\ 0 \leqslant j \leqslant a-1,(j, a)=1 }}1 \\ \nonumber &=\sum_{\substack{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_1 < p_2 < p_3< p_4<(\frac{N}{b})^{\frac{1}{8.8}} \\ (p_1 p_2 p_3 p_4, N)=1} }\sum_{\substack{\frac{N-N^\theta}{bp_1 p_2 p_3 p_4}\leqslant m\leqslant \frac{N}{bp_1 p_2 p_3 p_4}\\\left(m, p_{1}^{-1} abN P\left(p_{2}\right)\right)=1 }}\frac{\varphi(a)}{\varphi(a^2)}\\ \nonumber &<(1+o(1))\frac{N^\theta}{ab}\sum_{(\frac{N}{b})^{\frac{1}{14}} \leqslant p_{1}<p_{2}<p_{3}<p_{4}<(\frac{N}{b})^{\frac{1}{8.8}}} \frac{0.5617}{p_{1} p_{2} p_{3} p_{4} \log p_{2}}\\ &=(1+o(1))\frac{0.5617N^{\theta}}{ab\log N}\int_{\frac{1}{14}}^{\frac{1}{8.8}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.8}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \frac{1}{8.8 t_{2}} d t_{2} .\end{aligned}$$ To deal with the error term, we have $$\sum_{\substack{n \leqslant D_{\mathcal{C}_2} \\ n \mid P\left(z_{\mathcal{C}_2}\right)}} \eta\left(|{\mathcal{C}_2}|, n\right) \ll \sum_{n \leqslant D_{\mathcal{C}_2}} \mu^{2}(n) \eta\left(|{\mathcal{C}_2}|, n\right) .$$ Since $\omega(p)=0$ for primes $p \mid a b N$ and $\omega(p)=\frac{p}{p-1}$ for other primes, for an integer $n$ such that $(n, a b N)>1$, similarly to the discussion for $\eta\left(X_{\mathcal{B}_2}, n\right)$, we have $\eta\left(|{\mathcal{C}_2}|, n\right)=0$. 
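Before completing the evaluation of $S_6'$, we note that the numerical constants $14.062223$ and $0.827696$ obtained above for $S_4'$ and $S_7'$ can also be reproduced by quadrature. As before, evaluating at $\theta=0.941$ (where the prefactor $8/(2\theta-1)$ is largest) is an assumption about how the constants were computed; the sketch uses a plain composite Simpson rule.

```python
from math import log

def simpson(f, a, b, n=4000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

theta = 0.941                 # assumed evaluation point
pref = 8 / (2 * theta - 1)    # common prefactor in the S_4', S_7' bounds

# S_4' <= pref * ( int_{2.106}^{13} log(2.106 - 3.106/(s+1))/s ds
#                + int_{2.73}^{7.8} log(2.73 - 3.73/(s+1))/s ds )
S4 = pref * (simpson(lambda s: log(2.106 - 3.106 / (s + 1)) / s, 2.106, 13)
             + simpson(lambda s: log(2.73 - 3.73 / (s + 1)) / s, 2.73, 7.8))

# S_7' <= pref * ( int_2^{2.106} log(s-1)/s ds + int_2^{2.73} log(s-1)/s ds )
S7 = pref * (simpson(lambda s: log(s - 1) / s, 2, 2.106)
             + simpson(lambda s: log(s - 1) / s, 2, 2.73))
```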
For a square-free integer $n$ that is relatively prime to $a b N$, if $n \mid \frac{N-bm p_{1} p_{2} p_{3} p_4}{a}$, then $(p_{1} ,n)=1, (p_{2} ,n)=1, (p_{3} ,n)=1$ and $(p_{4} , n)=1$. Moreover, if $\left(\frac{N-bm p_{1} p_{2} p_{3} p_4}{a n}, a\right)=1$, then we have $bm p_{1} p_{2} p_{3} p_4 \equiv$ $N+j a n\left(\bmod a^{2} n\right)$ for some $j$ such that $0 \leqslant j \leqslant a-1$ and $(j, a)=1$. Conversely, if $bm p_{1} p_{2} p_{3} p_4=N+j a n+s a^{2} n$ for some integer $j$ such that $0 \leqslant j \leqslant a$ and $(j, a)=1$, some integer $n$ relatively prime to $p_{1} p_{2} p_3 p_4$ such that $an \mid\left(N-bm p_{1} p_{2} p_{3} p_4\right)$, and some integer $s$, then $\left(\frac{N-bm p_{1} p_{2} p_{3} p_4}{a n}, a\right)=(-j, a)=1$. Since $j bm p_{1} p_{2} p_3 p_4$ runs through the reduced residues modulo $a$ when $j$ runs through the reduced residues modulo $a$, for a square-free integer $n$ relatively prime to $a b N$, we have $$\begin{aligned} \nonumber \eta\left(|{\mathcal{C}_2}|, n\right) =&\left|\sum_{\substack{a \in \mathcal{C}_2 \\ a \equiv 0(\bmod n)}} 1-\frac{\omega(n)}{n} |\mathcal{C}_2|\right|=\left|\sum_{\substack{a \in \mathcal{C}_2 \\ a \equiv 0(\bmod n)}} 1-\frac{|\mathcal{C}_2|}{\varphi(n)}\right| \\ \nonumber \ll&\left|\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1 \\ f \equiv N+j a n (\bmod a^{2} n) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1-\frac{1}{\varphi(n)}\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1 \\ f \equiv N+j a (\bmod a^{2} ) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1\right|\\ \nonumber&+\frac{1}{\varphi(n)}\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)>1 \\ f \equiv N+j a (\bmod a^{2} ) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1.\\ \nonumber\ll&\left|\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1 \\ f \equiv N+j a n (\bmod a^{2} n) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1-\frac{1}{\varphi(a^2 n)}\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1}}1\right|\\ \nonumber&+\left|\frac{1}{\varphi(n)}\sum_{\substack{f \in 
\mathcal{F}_2 \\ (f,n)=1 \\ f \equiv N+j a (\bmod a^{2} ) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1-\frac{1}{\varphi(a^2 n)}\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1}}1\right|\\ \nonumber&+N^{\theta-\frac{1}{14}}(\log N)^{2}\\ \nonumber\ll&\left|\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1 \\ f \equiv N+j a n (\bmod a^{2} n) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1-\frac{1}{\varphi(a^2 n)}\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1}}1\right|\\ \nonumber&+\frac{1}{\varphi(n)}\left|\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1 \\ f \equiv N+j a (\bmod a^{2} ) \\ 0 \leqslant j \leqslant a-1, (j,a)=1}}1-\frac{1}{\varphi(a^2 )}\sum_{\substack{f \in \mathcal{F}_2 \\ (f,n)=1}}1\right|\\ &+N^{\theta-\frac{1}{14}}(\log N)^{2}.\end{aligned}$$ By Lemma [Lemma 17](#newmeanvalue){reference-type="ref" reference="newmeanvalue"} and Lemma [Lemma 12](#l7){reference-type="ref" reference="l7"}, we have $$\sum_{n \leqslant D_{\mathcal{C}_2}} \mu^{2}(n) \eta\left(|{\mathcal{C}_2|}, n\right)=\sum_{\substack{n \leqslant D_{\mathcal{C}_2}\\(n,abN)=1}} \mu^{2}(n) \eta\left(|{\mathcal{C}_2|}, n\right) \ll N(\log N)^{-3}.$$ Then by (84)--(88) and some routine arguments we have $$S_{61}^{\prime} \leqslant(1+o(1)) \frac{0.5617 \times 8 C(N) N^\theta}{ab(2\theta-1)(\log N)^2} \int_{\frac{1}{14}}^{\frac{1}{8.8}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.8}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \frac{1}{8.8 t_{2}} d t_{2}.$$ Similarly, we have $$S_{62}^{\prime} \leqslant(1+o(1)) \frac{0.5617 \times 8 C(N) N^\theta}{ab(2\theta-1)(\log N)^2} \int_{\frac{1}{14}}^{\frac{1}{8.8}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.8}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \left(8.8\left(\frac{\theta}{2}-\frac{2}{14}-t_2\right)\right) d t_{2}.$$ By (89) and (90) we have $$\begin{gathered} \nonumber S_{6}^{\prime}=S_{61}^{\prime}+S_{62}^{\prime} \leqslant(1+o(1)) \frac{0.5617 \times 8 C(N) N^\theta}{ab(2\theta-1)(\log N)^2} 
\left(\int_{\frac{1}{14}}^{\frac{1}{8.8}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.8}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \frac{1}{8.8 t_{2}} d t_{2}+\right. \\ \left.\int_{\frac{1}{14}}^{\frac{1}{8.8}} \frac{d t_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.8}} \frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log \left(8.8\left(\frac{\theta}{2}-\frac{2}{14}-t_2\right)\right) d t_{2}\right)\leqslant 0.769041 \frac{C(N) N^\theta}{ab(\log N)^{2}}\end{gathered}$$ ## Evaluation of $S_{5}^{\prime}$ For $p \geqslant \left(\frac{N}{b}\right)^{\frac{4.0871}{14}}$ we have $$\underline{p}^{\prime\frac{1}{2.5}} \leqslant \left(\frac{N}{b}\right)^{\frac{1}{14}}, \quad S\left(\mathcal{A}_{p};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{14}}\right) \leqslant S\left(\mathcal{A}_{p};\mathcal{P}, \underline{p}^{\prime\frac{1}{2.5}}\right).$$ By Lemma [Lemma 23](#l35){reference-type="ref" reference="l35"} we have $$\begin{aligned} \nonumber S_{51}^{\prime}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)\\ &\leqslant \sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\prime\frac{1}{2.5}}\right) \leqslant \Gamma_{1}^{\prime}-\frac{1}{2} \Gamma_{2}^{\prime}+\frac{1}{2} \Gamma_{3}^{\prime}+O\left(N^{\theta-\frac{1}{20}}\right) .\end{aligned}$$ By Lemmas [Lemma 6](#l1){reference-type="ref" reference="l1"},  [Lemma 7](#l2){reference-type="ref" reference="l2"},  [Lemma 13](#l3){reference-type="ref" reference="l3"} and some routine arguments we get $$\begin{aligned} \nonumber\Gamma_{1}^{\prime}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\prime\frac{1}{3.675}}\right)\\ &\leqslant(1+o(1)) 
\frac{8C(N) N^\theta}{ab\theta(\log N)^{2}}\left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}} \frac{d t}{t(\theta-2 t)}\right)\left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t\right),\\ \nonumber\Gamma_{2}^{\prime}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} \sum_{\substack{\underline{p}^{\prime\frac{1}{3.675}} \leqslant p_1<\underline{p}^{\prime\frac{1}{2.5}} \\ (p_1, N)=1}}S\left(\mathcal{A}_{p p_1};\mathcal{P}, \underline{p}^{\prime\frac{1}{3.675}}\right)\\ &\geqslant(1+o(1)) \frac{8C(N) N^\theta}{ab\theta(\log N)^{2}}\left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}} \frac{d t}{t(\theta-2 t)}\right)\left(\int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t\right).\end{aligned}$$ By an argument similar to the evaluation of $S_{61}^{\prime}$ we get that $$\begin{aligned} \nonumber \Gamma_{3}^{\prime}&=\sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} \sum _{ \substack{\underline{p}^{\prime\frac{1}{3.675}} \leqslant p_1<p_2<p_3<\underline{p}^{\prime\frac{1}{2.5}} \\ (p_1 p_2 p_3, N)=1} } S\left(\mathcal{A}_{p p_1 p_2 p_3};\mathcal{P}(p_1), p_2\right)\\ &\leqslant(1+o(1)) \frac{16C(N) N^\theta}{1.763ab(2\theta-1)(\log N)^{2}}\left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}} \frac{d t}{t(\theta-2 t)}\right)\left(6.175 \log \frac{3.675}{2.5}-2.35\right).\end{aligned}$$ By (68)--(71) we have $$S_{51}^{\prime}=\sum_{\substack{(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}} \\ (p, N)=1}} S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{14}}\right)$$ $$\begin{gathered} \leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab\theta(\log N)^{2}}\left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}} \frac{d t}{t(\theta-2 t)}\right)\times\\ \left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t-\frac{1}{2} \int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t 
+\frac{\theta}{1.763(2\theta-1)}\left(6.175 \log \frac{3.675}{2.5}-2.35\right)\right). \end{gathered}$$ Similarly, we have $$S_{52}^{\prime}=\sum_{\substack{(\frac{N}{b})^{\frac{\theta}{2}-\frac{3}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.73}} \\ (p, N)=1} }S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{8.8}}\right)$$ $$\begin{gathered} \leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab\theta(\log N)^{2}}\left(\int_{\frac{\theta}{2}-\frac{3}{14}}^{\frac{1}{3.73}} \frac{d t}{t(\theta-2 t)}\right)\times\\ \left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t-\frac{1}{2} \int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t +\frac{\theta}{1.763(2\theta-1)}\left(6.175 \log \frac{3.675}{2.5}-2.35\right)\right) \end{gathered}$$ $$S_{5}^{\prime}=S_{51}^{\prime}+S_{52}^{\prime}$$ $$\begin{gathered} \leqslant(1+o(1)) \frac{8C(N) N^\theta}{ab\theta(\log N)^{2}}\left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}} \frac{d t}{t(\theta-2 t)}+\int_{\frac{\theta}{2}-\frac{3}{14}}^{\frac{1}{3.73}} \frac{d t}{t(\theta-2 t)}\right)\times\\ \left(1+\int_{2}^{2.675} \frac{\log (t-1)}{t} d t-\frac{1}{2} \int_{1.5}^{2.675} \frac{\log \left(2.675-\frac{3.675}{t+1}\right)}{t} d t +\frac{\theta}{1.763(2\theta-1)}\left(6.175 \log \frac{3.675}{2.5}-2.35\right)\right) \end{gathered}$$ $$\leqslant 3.43654 \frac{C(N) N^\theta}{ab(\log N)^{2}}.$$ ## Proof of theorem 1.2 By (72)--(74), (82)--(83), (91) and (96) we get $$S_{1}^{\prime}+S_{2}^{\prime} \geqslant 66.37372 \frac{C(N) N^\theta}{ab(\log N)^{2}},$$ $$S_{3}^{\prime}+S_{4}^{\prime}+S_{5}^{\prime}+S_{6}^{\prime}+2S_{7}^{\prime} \leqslant 66.368553 \frac{C(N) N^\theta}{ab(\log N)^{2}},$$ $$4R_{a,b}^{\theta}(N) \geqslant (S_{1}^{\prime}+S_{2}^{\prime})-(S_{3}^{\prime}+S_{4}^{\prime}+S_{5}^{\prime}+S_{6}^{\prime}+2S_{7}^{\prime}) \geqslant 0.005167 \frac{C(N) N^\theta}{ab(\log N)^{2}},$$ $$R_{a,b}^{\theta}(N) \geqslant 0.00129 \frac{C(N) N^\theta}{ab(\log N)^{2}}.$$ Theorem [Theorem 2](#t2){reference-type="ref" 
reference="t2"} is proved. During the computation, we also tried to extend the range to $0.9409 \leqslant \theta \leqslant 1$, but failed; the range $0.941 \leqslant \theta \leqslant 1$ is therefore quite close to the limit of what this method can achieve. # Acknowledgements {#acknowledgements .unnumbered} The author would like to thank Huixi Li and Guang-Liang Zhou for providing information about the papers [@LiHuixi2019; @LIHUIXI] and for some helpful discussions.
{ "id": "2309.03218", "title": "Remarks on additive representations of natural numbers", "authors": "Runbo Li", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by-sa/4.0/" }
--- abstract: | In this note, we give sufficient conditions for the almost sure convergence and the convergence in $\mathbb L^p$ of a $U$-statistic of order $m$ built on a strictly stationary but not necessarily ergodic sequence. address: Institut de Recherche Mathématique Avancée UMR 7501, Université de Strasbourg and CNRS 7 rue René Descartes 67000 Strasbourg, France author: - Davide Giraudo title: Some notes on ergodic theorem for $U$-statistics of order $m$ for stationary and not necessarily ergodic sequences --- # Introduction and main results [@MR26294] introduced the concept of $U$-statistics of order $m\in\mathbb N^*$, defined as follows: if $\left(X_i\right)_{i\geqslant 1}$ is a strictly stationary sequence taking values in a measurable space $\left(S,\mathcal{S}\right)$ and $h\colon S^m\to\mathbb R$, the $U$-statistic of kernel $h$ is given by $$U_{m,n,h}:=\frac 1{\binom nm}\sum_{\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}\in\operatorname{Inc}^m_n } h\left(X_{i_1},\dots,X_{i_m}\right),$$ where $\llbracket 1,m\rrbracket=\left\{ k\in\mathbb N,1\leqslant k\leqslant m\right\}$ and $\operatorname{Inc}^m_n =\left\{ \left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}, 1\leqslant i_1<i_2<\dots<i_m\leqslant n\right\}$. If $\left(X_i\right)_{i\geqslant 1}$ is i.i.d. and $\mathbb E\left[\left|h\left(X_1,\dots,X_m\right)\right|\right]$ is finite, then $U_{m,n,h}\to \mathbb E\left[h\left(X_1,\dots,X_m\right)\right]$ a.s. and in $\mathbb L^1$. A natural question is whether for a strictly stationary sequence $\left(X_i\right)_{i\geqslant 1}$, the sequence $\left(U_{m,n,h}\right)_{n\geqslant m}$ converges almost surely or in $\mathbb L^1$ to some random variable. Assume first that $\left(X_i\right)_{i\geqslant 1}$ is ergodic.
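As a quick sanity check of the definition, $U_{m,n,h}$ is simply the average of $h$ over the $\binom nm$ increasing index tuples. The helper below is illustrative, not from the paper:

```python
from itertools import combinations
from math import comb

def u_statistic(sample, m, h):
    # U_{m,n,h}: average of h over all index tuples i_1 < ... < i_m
    n = len(sample)
    return sum(h(*(sample[i] for i in idx))
               for idx in combinations(range(n), m)) / comb(n, m)

# order-2 kernel h(x, y) = x*y on the sample (1, 2, 3, 4):
# the pairs give 2 + 3 + 4 + 6 + 8 + 12 = 35, and C(4, 2) = 6, so U = 35/6
U = u_statistic([1, 2, 3, 4], 2, lambda x, y: x * y)
```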
It is shown in [@MR1363941] that if $S=\mathbb R$, $\left(X_i\right)_{i\geqslant 1}$ has common distribution $\mathbb P_{X_0}$, $h$ is bounded and $\mathbb P_{X_0}\times\dots\times \mathbb P_{X_0}$ almost everywhere continuous, then $$\label{eq:conv_ps_generale} U_{m,n,h}\to \int h\left(x_1,\dots,x_m\right)d\mathbb P_{X_0}\left(x_1\right)\dots d\mathbb P_{X_0}\left(x_m\right)\mbox{ a.s.}.$$ Convergence in probability was also investigated in [@MR1979966]. A proof of [\[eq:conv_ps_generale\]](#eq:conv_ps_generale){reference-type="eqref" reference="eq:conv_ps_generale"} in the context of absolutely regular sequences has been given in [@MR1624866]. Moreover, the Marcinkiewicz law of large numbers for $U$-statistics of order two has been established in [@MR2571765] for absolutely regular sequences and in [@MR4243516] for sequences expressible as functions of an independent sequence. It is worth pointing out that in general, the sequence $\left(U_{2,n,h}\right)_{n\geqslant 2}$ may fail to converge. For instance, [@MR1363941], Example 4.5, exhibits an unbounded kernel $h$ and a strictly stationary sequence such that $\limsup_{n\to\infty}U_{2,n,h}=\infty$. Moreover, the example given in Proposition 3 of [@dehling2023remarks] shows the existence of a bounded kernel $h$ and a stationary ergodic sequence $\left(X_i\right)_{i\geqslant 1}$ such that one subsequence of $\left(U_{2,n,h}\right)_{n\geqslant 2}$ converges to $0$ almost surely and another subsequence of $\left(U_{2,n,h}\right)_{n\geqslant 2}$ converges to $1$ almost surely. Also, as Proposition 4 there shows, boundedness in $\mathbb L^1$ of $\left(h\left(X_1,X_j\right)\right)_{j\geqslant 2}$ plays a key role: otherwise, one can find a kernel $h$ and a strictly stationary sequence $\left(X_i\right)_{i\geqslant 1}$ for which the sequence $\left(U_{2,n,h}-\mathbb E\left[U_{2,n,h}\right]\right)_{n\geqslant 2}$ converges in distribution to a non-degenerate normal law.
Some results have been established in [@dehling2023remarks], assuming that the strictly stationary sequence $\left(X_i\right)_{i\geqslant 1}$ is ergodic. 1. If $S$ is a separable metric space, $h\colon S\times S\rightarrow \mathbb R$ is a symmetric kernel that is bounded and $\mathbb P_{X_0}\times \mathbb P_{X_0}$-almost everywhere continuous, then, as $n\rightarrow \infty$, $U_{2,n,h}\to \int h\left(x,y\right)d\mathbb P_{X_0}\left(x\right)d\mathbb P_{X_0}\left(y\right)$ almost surely. 2. If $S=\mathbb R^d$, the family $\left\{ h\left(X_1,X_j\right),j\geqslant 1\right\}$ is uniformly integrable, $h$ is $\mathbb P_{X_0}\times \mathbb P_{X_0}$-almost everywhere continuous and symmetric, then $$\label{eq:conv_thm_ergo_densite_bornee} \lim_{n\to\infty}\mathbb E\left[\left|U_{2,n,h}-\int_{\mathbb R^d}\int_{\mathbb R^d}h\left(x,y\right)d\mathbb P_{X_0}\left(x\right)d\mathbb P_{X_0}\left(y\right) \right|\right]= 0.$$ 3. If $S=\mathbb R^d$, the family $\left\{ h\left(X_1,X_j\right),j\geqslant 1\right\}$ is uniformly integrable, $\int_{\mathbb R^d}\int_{\mathbb R^d}\left|h\left(x,y\right)\right|d\mathbb P_{X_0}\left(x\right)d\mathbb P_{X_0}\left(y\right)$ is finite, the random variable $X_0$ has a bounded density with respect to the Lebesgue measure on $\mathbb R^d$ and for each $k\geqslant 1$, the vector $\left(X_0,X_k\right)$ has a density $f_k$ with respect to the Lebesgue measure of $\mathbb R^d\times\mathbb R^d$ and $\sup_{k\geqslant 1}\sup_{s,t\in\mathbb R^d}f_k\left(s,t\right)$ is finite, then [\[eq:conv_thm_ergo_densite_bornee\]](#eq:conv_thm_ergo_densite_bornee){reference-type="eqref" reference="eq:conv_thm_ergo_densite_bornee"} holds. Such results lead us to consider the following extensions.
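The ergodic-case convergence described in item 1 can be illustrated numerically in the simplest ergodic setting, an i.i.d. sequence; the kernel and sample size below are arbitrary illustrative choices. For $X,Y$ independent and uniform on $[0,1]$ and $h(x,y)=|x-y|$ (bounded, everywhere continuous, symmetric), the limit is $\mathbb E\left|X-Y\right|=1/3$:

```python
import random

random.seed(1)
n = 500
xs = [random.random() for _ in range(n)]  # i.i.d. Uniform(0, 1): ergodic

def h(x, y):
    # bounded, everywhere continuous, symmetric kernel
    return abs(x - y)

# U_{2,n,h}: average of h over all pairs i < j
total = sum(h(xs[i], xs[j]) for i in range(n) for j in range(i + 1, n))
U = total / (n * (n - 1) / 2)

# U should be close to the limit E|X - Y| = 1/3
```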
The case of $U$-statistics of order two has been addressed, and we may want to extend these results to $U$-statistics of arbitrary order, especially because such objects are widely used in statistics, for instance in [@MR3127883] for the distance covariance, and in stochastic geometry (see [@MR3585402]). Moreover, it is natural to ask what happens in the non-ergodic case. One could consider a decomposition of $\Omega$ into ergodic components and apply the results of the ergodic case to each of them. However, the multiple integral expression of the limit does not then take a simple form. Moreover, the assumptions of the ergodic case on each ergodic component, namely, almost everywhere continuity of the kernel (with respect to the product law of the marginal distribution) and an assumption on the density of the vector $\left(X_{i_1},\dots,X_{i_m}\right)$, do not seem to yield a tractable condition. Instead, we will use the following approach: when $h$ is symmetric and bounded, the convergence of the considered $U$-statistic is viewed as the convergence of random product measures toward a product of random measures (deterministic measures in the ergodic case). When we make an assumption on the density of $\left(X_{i_1},\dots,X_{i_m}\right)$, we approximate $h$ by linear combinations of products of indicator functions. This approach has similarities with the one used in [@MR3256808]. The case of products of indicators then follows from an application of the usual ergodic theorem. We will assume that the strictly stationary sequence is such that $X_i=X_0\circ T^i$, where $T\colon\Omega\to\Omega$ is a measure preserving map. Since the convergences we will study only involve the law of $\left(X_i\right)_{i\geqslant 1}$, there is no loss of generality in assuming such a representation (see [@MR832433], page 178). We will study the almost sure convergence of $\left(U_{m,n,h}\right)_{n\geqslant m}$ and the convergence in $\mathbb L^p$.
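The indicator-product reduction just described can be sketched as follows (a heuristic display, not part of the actual proof): for a product kernel, the full multilinear average factorizes into Birkhoff averages, and the $U$-statistic differs from the multilinear average only by diagonal terms.

```latex
% For h = \mathbf{1}_{A_1 \times \dots \times A_m}, the full multilinear
% average factorizes into Birkhoff averages, each converging almost surely:
\frac{1}{n^m}\sum_{i_1,\dots,i_m=1}^{n}\ \prod_{\ell=1}^{m}
  \mathbf{1}_{A_\ell}\left(X_{i_\ell}\right)
  =\prod_{\ell=1}^{m}\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{A_\ell}\left(X_i\right)
  \xrightarrow[n\to\infty]{\text{a.s.}}\ \prod_{\ell=1}^{m}\mu_\omega\left(A_\ell\right),
% while the tuples with at least one repeated index, which distinguish this
% average from U_{m,n,h}, number O(n^{m-1}) and are asymptotically negligible.
```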
We will denote by $\left\lVert Z \right\rVert_p:= \left(\mathbb E\left[\left|Z\right|^p\right]\right)^{1/p}$ the norm of a real-valued random variable $Z$. It turns out that the limit will be expressed as an integral with respect to products of a random measure defined as follows: $$\label{eq:def_mu_omega} \mu_\omega\left(A\right)=\mathbb E\left[\mathbf{1}_{X_0\in A}\mid\mathcal{I}\right]\left(\omega\right), A\in\mathcal{B}\left(S\right),$$ where $\mathcal{I}$ denotes the $\sigma$-algebra of invariant sets, that is, the sets $E$ such that $T^{-1}E=E$. The limit of $U$-statistics will be expressed as an integral with respect to the product measure of $\mu_\omega$, which leads us to define $$\label{eq:definition_of_I_h_omega} I_m\left(S,h,\omega\right):=\int_{S^m}h\left(x_1,\dots,x_m\right)d\mu_\omega\left(x_1\right) \dots d\mu_\omega\left(x_m\right).$$ We will also denote by $I_m\left(S,h,\cdot\right)$ the random variable given by $$\label{eq:definition_of_I_h_omega_dot} I_m\left(S,h,\cdot\right) \colon \omega\mapsto I_m\left(S,h,\omega\right).$$ Some assumptions will be made on the set of discontinuity points of $h$, which will be denoted by $D\left(h\right)$. ## Almost sure convergence Our first result deals with the almost sure convergence of a $U$-statistic under boundedness of the kernel and negligibility of its set of discontinuity points with respect to products of the random measure $\mu_\omega$. **Theorem 1**. *Let $\left(S,d\right)$ be a separable metric space, let $\left(X_i\right)_{i\in\mathbb Z}$ be a strictly stationary sequence. Suppose that $h\colon S^m\to\mathbb R$ satisfies the following assumptions:* 1. *[\[ass:h_sym\]]{#ass:h_sym label="ass:h_sym"} $h$ is symmetric, that is, $h\left(x_{\sigma\left(1\right)},\dots,x_{\sigma\left(m\right)}\right)=h\left(x_1,\dots,x_m\right)$ for each $x_1,\dots,x_m\in S$ and each bijective $\sigma\colon\llbracket 1,m\rrbracket\to\llbracket 1,m\rrbracket$,* 2.
*[\[ass:h_borne\]]{#ass:h_borne label="ass:h_borne"} $h$ is bounded and* 3. *[\[ass:h_cont\]]{#ass:h_cont label="ass:h_cont"} for almost every $\omega\in\Omega$, $I_m\left(S,\mathbf{1}_{D_h},\omega\right)=0$, where $D_h$ denotes the set of discontinuity points of $h$.* *Then for almost every $\omega\in\Omega$, the following convergence holds: $$\label{eq:conv_ps} \lim_{n\to\infty} U_{m,n,h}\left(\omega\right) =I_m\left(S,h,\omega\right) ,$$ where $I_m\left(S,h,\omega\right)$ is defined as in [\[eq:definition_of_I\_h_omega\]](#eq:definition_of_I_h_omega){reference-type="eqref" reference="eq:definition_of_I_h_omega"}.* This result extends Theorem 1 in [@dehling2023remarks] in two directions: first, the case of $U$-statistics of arbitrary order is considered. Second, we address here the not necessarily ergodic case. When $\left(X_{i}\right)_{i\geqslant 1}$ is ergodic, the measure $\mu_\omega$ is simply the distribution of $X_0$, hence the right hand side of [\[eq:conv_ps\]](#eq:conv_ps){reference-type="eqref" reference="eq:conv_ps"} can be simply expressed as $\mathbb E\left[h\left(X_1^{\left(1\right)},\dots,X_1^{\left(m\right)}\right)\right]$, where $X_1^{\left(1\right)},\dots,X_1^{\left(m\right)}$ are independent copies of $X_1$. The symmetry assumption is needed in order to relate $U_{m,n,h}$ to a sum over a rectangle and then see the convergence in [\[eq:conv_ps\]](#eq:conv_ps){reference-type="eqref" reference="eq:conv_ps"} as a convergence in distribution of products of random measures. ## Convergence in $\mathbb L^p$, $p\geqslant 1$ In this subsection, we present sufficient conditions for the convergence in $\mathbb L^p$ of $\left(U_{m,n,h}\right)_{n\geqslant m}$. We start by mentioning the following consequence of Theorem [Theorem 1](#thm:ergodic_theorem_sym_ps){reference-type="ref" reference="thm:ergodic_theorem_sym_ps"}. **Corollary 2**.
*Let $\left(S,d\right)$ be a separable metric space, let $\left(X_i\right)_{i\in\mathbb Z}$ be a strictly stationary sequence and let $p\geqslant 1$. Suppose that $h\colon S^m\to\mathbb R$ satisfies the following assumptions:* 1. *[\[ass:h_sym_cor\]]{#ass:h_sym_cor label="ass:h_sym_cor"} $h$ is symmetric, that is, $h\left(x_{\sigma\left(1\right)},\dots,x_{\sigma\left(m\right)}\right)=h\left(x_1,\dots,x_m\right)$ for each $x_1,\dots,x_m\in S$ and each bijective $\sigma\colon\llbracket 1,m\rrbracket\to\llbracket 1,m\rrbracket$,* 2. *[\[ass:h_cor_UI\]]{#ass:h_cor_UI label="ass:h_cor_UI"} the family $\left\{ \left|h\left(X_{i_1},\dots, X_{i_m}\right)\right|^p, 1\leqslant i_1<\dots<i_m\right\}$ is uniformly integrable.* 3. *[\[ass:h_cor_limit_Lp\]]{#ass:h_cor_limit_Lp label="ass:h_cor_limit_Lp"} the following integral is finite: $$\int_\Omega\int_{S^m} \left|h\left(x_1,\dots,x_m\right)\right|^p d\mu_{\omega}\left(x_1\right) \dots d\mu_{\omega}\left(x_m\right)d\mathbb P\left(\omega\right).$$* 4. *[\[ass:h_cont_cor\]]{#ass:h_cont_cor label="ass:h_cont_cor"} $\mathbb P_{X_0}\times \dots\times\mathbb P_{X_0}\left(D_h\right)=0$, where $D_h$ denotes the set of discontinuity points of $h$ and $\mathbb P_{X_0}$ the distribution of $X_0$.* *Then the following convergence takes place: $$\lim_{n\to\infty} \left\lVert U_{m,n,h} -I_m\left(S,h,\cdot\right) \right\rVert_p=0,$$ where $I_m\left(S,h,\cdot\right)$ is defined as in [\[eq:definition_of_I\_h_omega_dot\]](#eq:definition_of_I_h_omega_dot){reference-type="eqref" reference="eq:definition_of_I_h_omega_dot"}.* One can wonder what happens if we remove the symmetry assumption. **Theorem 3**. *Let $\left(S,d\right)$ be a separable metric space, let $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$ be a strictly stationary sequence taking values in $S$ and let $p\geqslant 1$. Suppose that $h\colon S^2\to\mathbb R$ and $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$ satisfy the following assumptions:* 1.
*[\[ass:UI_ergodique\]]{#ass:UI_ergodique label="ass:UI_ergodique"} the collection $\left\{ \left|h\left(X_{i},X_{j}\right)\right|^p,1\leqslant i<j\right\}$ is uniformly integrable.* 2. *[\[ass:h_cont_Lp\]]{#ass:h_cont_Lp label="ass:h_cont_Lp"} for almost every $\omega\in\Omega$, $I_m\left(S,\mathbf{1}_{D_h},\omega\right)=0$, where $D_h$ denotes the set of discontinuity points of $h$.* 3. *[\[ass:h_copies_ordre_p\]]{#ass:h_copies_ordre_p label="ass:h_copies_ordre_p"} the following integral is finite: $$\int_\Omega \int_{S^2}\left|h\left(x,y\right)\right|^pd\mu_\omega\left(x\right)d\mu_\omega\left(y\right)d\mathbb P\left(\omega\right).$$* *Then the following convergence takes place: $$\label{eq:conv_Lp_non_sym} \lim_{n\to\infty}\left\lVert U_{2,n,h}-I_2\left(S,h,\cdot\right) \right\rVert _p=0,$$ where $I_2\left(S,h,\cdot\right)$ is defined as in [\[eq:definition_of_I\_h_omega_dot\]](#eq:definition_of_I_h_omega_dot){reference-type="eqref" reference="eq:definition_of_I_h_omega_dot"}.* This improves Theorem 2 in [@dehling2023remarks] under assumption (A.1) in that paper, since we do not require symmetry of the kernel. One may wonder why we do not present a similar result for $U$-statistics of order $m$. A first idea would be an argument by induction on the order. In order to perform the induction step, say from $m=2$ to $m=3$, we would need to show, after a use of the weighted ergodic theorem, the convergence in $\mathbb L^p$ of $\binom{n}{2}^{-1}\sum_{1\leqslant i<j\leqslant n}h\left(X_{-j},X_{-i},X_0\right)$. Since we assume uniform integrability, it suffices to show the almost sure convergence, which could be established by seeing this almost sure convergence as that of a product of random measures. But without symmetry, we do not know whether the almost sure convergence of the sequence of random measures $\binom{n}{2}^{-1}\sum_{1\leqslant i<j\leqslant n}\delta_{\left(X_{-j},X_{-i}\right)}$ takes place.
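For intuition on Theorem 3, here is a numerical sketch of our own (names hypothetical): the non-symmetric kernel $h(x,y)=\mathbf 1_{x<y}$ is discontinuous only on the diagonal, which is null for the product of continuous marginals, and for an iid uniform sequence $U_{2,n,h}$ should approach $\int\int \mathbf 1_{x<y}\,d\mathbb P_{X_0}(x)\,d\mathbb P_{X_0}(y)=1/2$.

```python
import random

def u_stat2(xs, h):
    """U_{2,n,h} over pairs i < j; h is NOT symmetrized."""
    n = len(xs)
    total = sum(h(xs[i], xs[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

random.seed(2)
xs = [random.random() for _ in range(600)]
# Non-symmetric kernel, discontinuous only on the null diagonal {x = y}.
u = u_stat2(xs, lambda x, y: 1.0 if x < y else 0.0)
```

The observed value concentrates near $1/2$ even though no symmetrization of the kernel is performed, in line with the theorem.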
Let us now state a result on the convergence in $\mathbb L^p$ without imposing any continuity of the kernel, but making assumptions on the distribution of the vectors $\left(X_{i_1},\dots,X_{i_m}\right)$ . **Theorem 4**. *Let $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$ be a strictly stationary sequence taking values in $\mathbb R^d$ and let $p\geqslant 1$. Suppose that $h\colon \left(\mathbb R^d\right)^m\to\mathbb R$ and $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$ satisfy the following assumptions:* 1. *[\[ass:Ui_dens\]]{#ass:Ui_dens label="ass:Ui_dens"} the collection $\left\{ \left|h\left(X_{i_1},\dots,X_{i_m}\right)\right|^p,1\leqslant i_1<\dots<i_m\right\}$ is uniformly integrable.* 2. *[\[ass:densite_bornee\]]{#ass:densite_bornee label="ass:densite_bornee"} for each $\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}$ such that $1\leqslant i_1<\dots<i_m$, the vector $\left(X_{i_1},\dots,X_{i_m}\right)$ has a density $f_{i_1,\dots,i_m}$ and there exists a $q_0>1$ such that $$M_1:=\sup_{\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}:1\leqslant i_1<\dots<i_m} \int_{\left(\mathbb R^d\right)^m}f_{i_1,\dots,i_m}\left(t_1,\dots,t_m\right)^{q_0}dt_1\dots dt_m<\infty.$$* 3. *[\[ass:den_bornee_m\_omega\]]{#ass:den_bornee_m_omega label="ass:den_bornee_m_omega"} For almost every $\omega$, the measure $\mu_{\omega}$ defined as in [\[eq:def_mu_omega\]](#eq:def_mu_omega){reference-type="eqref" reference="eq:def_mu_omega"} admits a density $f_\omega$ with respect to the Lebesgue measure and there exists a set $\Omega'$ having probability one and $q_1>1$ for which $$M_2:=\sup_{\omega\in\Omega'}\int_{\mathbb R^d}f_{\omega}\left(t\right)^{q_1}dt<\infty.$$* 4. 
*[\[ass:inte_limite\]]{#ass:inte_limite label="ass:inte_limite"} the following integral is finite: $$\int_{\Omega}\int_{\left(\mathbb R^d\right)^m}\left|h\left(x_1,\dots,x_m\right)\right|^pd\mu_\omega\left(x_1\right)\dots d\mu_{\omega}\left(x_m\right)d\mathbb P\left(\omega\right) .$$* *Then the following convergence holds: $$\label{eq:thm_erg_densite} \lim_{n\to\infty}\left\lVert U_{m,n,h}-I_m\left(\mathbb R^d,h,\cdot\right) \right\rVert_p=0,$$ where $I_m\left(\mathbb R^d,h,\cdot\right)$ is defined as in [\[eq:definition_of_I\_h_omega_dot\]](#eq:definition_of_I_h_omega_dot){reference-type="eqref" reference="eq:definition_of_I_h_omega_dot"}.* Assumption [\[ass:densite_bornee\]](#ass:densite_bornee){reference-type="ref" reference="ass:densite_bornee"} is needed in order to approximate $h$ by a linear combination of indicator functions of products of Borel sets, uniformly with respect to the distribution of $\left(X_{i_1},\dots,X_{i_m}\right)$. Our Theorem [Theorem 4](#thm:conv_Lp_non-ergodique_densite){reference-type="ref" reference="thm:conv_Lp_non-ergodique_densite"} improves Theorem 2 in [@dehling2023remarks] under assumption (A.2) in the following directions. First, we provide a result for $U$-statistics of arbitrary order. Second, the not necessarily ergodic case is addressed. Third, even in the ergodic case, our assumption only requires a uniform control on the $\mathbb L^{q_1}$ norm of the densities instead of a uniform bound.
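As an illustration of the density-based setting (our own sketch; the AR(1) model and all names are assumptions of ours, not the paper's): a stationary Gaussian AR(1) sequence has joint densities, and for the discontinuous kernel $h(x,y)=\mathbf 1_{xy>0}$ the ergodic limit is $\mu_\omega\times\mu_\omega\left(\{xy>0\}\right)=1/2$ by symmetry of the marginal law.

```python
import random

def u_stat2(xs, h):
    """U_{2,n,h} over pairs i < j."""
    n = len(xs)
    total = sum(h(xs[i], xs[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

random.seed(3)
phi, n = 0.5, 500
x = random.gauss(0.0, (1 / (1 - phi ** 2)) ** 0.5)  # start from the stationary law
xs = []
for _ in range(n):                                  # X_{i+1} = phi * X_i + eps_{i+1}
    x = phi * x + random.gauss(0.0, 1.0)
    xs.append(x)

# Kernel discontinuous on the axes, a Lebesgue-null set: joint densities
# of the pairs make the discontinuity harmless, as in Theorem 4.
u = u_stat2(xs, lambda a, b: 1.0 if a * b > 0 else 0.0)
```

Despite the dependence and the discontinuity of the kernel, `u` stays close to $1/2$.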
# Proofs ## Proof of Theorem [Theorem 1](#thm:ergodic_theorem_sym_ps){reference-type="ref" reference="thm:ergodic_theorem_sym_ps"} {#proof-of-theorem-thmergodic_theorem_sym_ps} The symmetry assumption guarantees the following decomposition $$U_{m,n,h}=\frac{1}{m!\binom{n}m}\sum_{i_1,\dots,i_m=1}^n h\left(X_{i_1},\dots,X_{i_m}\right)-\frac 1{m!\binom{n}m}\sum_{\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}\in J_n }h\left(X_{i_1},\dots,X_{i_m}\right),$$ where $J_n$ denotes the set of elements $\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket} \in\llbracket 1,n\rrbracket^m$ for which there exist at least two distinct indices $\ell$ and $\ell'$ for which $i_\ell=i_{\ell'}$. Since $h$ is bounded and $\operatorname{Card}\left(J_n\right)/\binom{n}m$ goes to $0$ as $n$ goes to infinity, it suffices to prove that for almost every $\omega\in\Omega$, $$\lim_{n\to\infty}\frac 1{n^m}\sum_{i_1,\dots,i_m=1}^n h\left(X_{i_1}\left(\omega\right),\dots,X_{i_m}\left(\omega\right)\right)= \int_{S^m}h\left(x_1,\dots,x_m\right)d\mu_\omega\left(x_1\right)\dots d\mu_{\omega}\left(x_m\right),$$ where $\mu_\omega$ is defined as in [\[eq:def_mu_omega\]](#eq:def_mu_omega){reference-type="eqref" reference="eq:def_mu_omega"}. Observe that for each $\omega\in\Omega$, $$\frac 1{n^m}\sum_{i_1,\dots,i_m=1}^n h\left(X_{i_1}\left(\omega\right),\dots,X_{i_m}\left(\omega\right)\right)= \int_{S^m}h\left(x_1,\dots,x_m\right)d\nu_{n,\omega}\left(x_1\right) \dots d\nu_{n,\omega}\left(x_m\right),$$ where $$\nu_{n,\omega}=\frac 1n\sum_{i=1}^n \delta_{X_i\left(\omega\right)}.$$ Separability of $S$ guarantees the existence of a countable collection $\left(f_k\right)_{k\geqslant 1}$ of continuous and bounded functions from $S$ to $\mathbb R$ such that a sequence $\left(\mu_n\right)_{n\geqslant 1}$ of probability measures converges weakly to a probability measure $\mu$ if and only if for each $k\geqslant 1$, $\int f_kd\mu_n\to \int f_kd\mu$.
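As a numerical aside (our illustration, not part of the proof): integrals of bounded continuous test functions against the empirical measure $\nu_{n,\omega}$ are just sample averages, and the ergodic theorem drives them to the corresponding integrals against $\mu_\omega$; for iid uniforms, $\int_0^1\cos = \sin 1$ and $\int_0^1 x^2\,dx = 1/3$.

```python
import math
import random

def integral_against_empirical(sample, f):
    """Integral of f against the empirical measure (1/n) * sum of Dirac masses."""
    return sum(f(x) for x in sample) / len(sample)

random.seed(4)
xs = [random.random() for _ in range(20000)]
i_cos = integral_against_empirical(xs, math.cos)        # near sin(1)
i_sq = integral_against_empirical(xs, lambda x: x * x)  # near 1/3
```

Testing against a countable family of such functions is exactly what characterizes the weak convergence used in the proof.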
By the ergodic theorem, we know that for each $k\geqslant 1$, there exists a set $\Omega_k$ having probability one for which the convergence $$\lim_{n\to\infty}\int f_k\left(x\right)d\nu_{n,\omega} =\lim_{n\to\infty}\frac 1n\sum_{j=1}^n f_k\left(X_j\left(\omega\right)\right) =\mathbb E\left[f_k\left(X_0\right)\mid\mathcal{I}\right]\left(\omega\right)=\int f_k\left(x\right)d\mu_{\omega}\left(x\right)$$ holds for each $\omega\in\Omega_k$. Therefore, for each $\omega$ belonging to the set of probability one $\Omega':=\bigcap_{k\geqslant 1}\Omega_k$, the sequence $\left(\nu_{n,\omega}\right)_{n\geqslant 1}$ converges weakly to $\mu_\omega$. Recall that Theorem 3.2 (page 21) of [@MR0233396] shows that if $\mu_n\to\mu$ and $\mu'_n\to\mu'$ in distribution on metric spaces $S_1$ and $S_2$ respectively, then $\mu_n\times\mu'_n\to \mu\times \mu'$ in distribution on $S_1\times S_2$. Applying this result inductively and using assumptions [\[ass:h_borne\]](#ass:h_borne){reference-type="ref" reference="ass:h_borne"} and [\[ass:h_cont\]](#ass:h_cont){reference-type="ref" reference="ass:h_cont"} shows that for each $\omega\in\Omega'$, [\[eq:conv_ps\]](#eq:conv_ps){reference-type="eqref" reference="eq:conv_ps"} holds. ## Proof of Corollary [Corollary 2](#cor:conv_Lp_sym){reference-type="ref" reference="cor:conv_Lp_sym"} {#proof-of-corollary-corconv_lp_sym} Let $h_R$ be as in [\[eq:def_de_hR\]](#eq:def_de_hR){reference-type="eqref" reference="eq:def_de_hR"}. Observe that assumption [\[ass:h_cor_limit_Lp\]](#ass:h_cor_limit_Lp){reference-type="ref" reference="ass:h_cor_limit_Lp"} guarantees that $I_m\left(S,h,\cdot\right)$ defined as in [\[eq:definition_of_I\_h_omega\]](#eq:definition_of_I_h_omega){reference-type="eqref" reference="eq:definition_of_I_h_omega"} belongs to $\mathbb L^p$.
Moreover, the triangle inequality implies $$\left\lVert U_{m,n,h}-I_m\left(S,h,\cdot\right) \right\rVert_p\leqslant\left\lVert U_{m,n,h}- U_{m,n,h_R} \right\rVert_p +\left\lVert U_{m,n,h_R}-I_m\left(S,h_R,\cdot\right) \right\rVert_p+\left\lVert I_m\left(S,h,\cdot\right)-I_m\left(S,h_R,\cdot\right) \right\rVert_p.$$ Using assumption [\[ass:h_cor_UI\]](#ass:h_cor_UI){reference-type="ref" reference="ass:h_cor_UI"} combined with the triangle inequality, one gets $$\lim_{R\to\infty}\sup_{n\geqslant m}\left\lVert U_{m,n,h}- U_{m,n,h_R} \right\rVert_p=0.$$ Moreover, assumption [\[ass:h_cor_limit_Lp\]](#ass:h_cor_limit_Lp){reference-type="ref" reference="ass:h_cor_limit_Lp"} combined with monotone convergence shows that $$\lim_{R\to\infty}\left\lVert I_m\left(S,h,\cdot\right)-I_m\left(S,h_R,\cdot\right) \right\rVert_p=0,$$ hence it suffices to show that for each fixed $R>0$, $\left\lVert U_{m,n,h_R}-I_m\left(S,h_R,\cdot\right) \right\rVert_p\to 0$ as $n$ goes to infinity. This follows from an application of Theorem [Theorem 1](#thm:ergodic_theorem_sym_ps){reference-type="ref" reference="thm:ergodic_theorem_sym_ps"} with $h$ replaced by $h_R$ (note that continuity of $\phi_R$ guarantees that $D\left(h_R\right)\subset D\left(h\right)$), which gives that $U_{m,n,h_R}\to I_m\left(S,h_R,\cdot\right)$ almost surely, and the dominated convergence theorem allows us to conclude. ## Convergence of weighted averages The proof of Theorems [Theorem 3](#thm:conv_Lp_ergodique){reference-type="ref" reference="thm:conv_Lp_ergodique"} and [Theorem 4](#thm:conv_Lp_non-ergodique_densite){reference-type="ref" reference="thm:conv_Lp_non-ergodique_densite"} rests on weighted versions of the ergodic theorem, which read as follows. **Lemma 5**. *Let $T$ be a measure preserving map on the probability space $\left(\Omega,\mathcal{F},\mathbb P\right)$ and let $\mathcal{I}$ be the $\sigma$-algebra of $T$-invariant sets.
Let $p\geqslant 1$ and let $\left(f_j\right)_{j\geqslant 1}$ be a sequence of functions such that $\left\lVert f_j-f \right\rVert_p\to 0$. Then for each $m\geqslant 0$, the following convergence holds: $$\label{eq:weighted_ergodic_theorem_ordre_m} \lim_{n\to\infty}\left\lVert \frac 1{\binom n{m+1}}\sum_{j=m+1}^n\binom{j-1}m f_j\circ T^j-\mathbb E\left[f\mid\mathcal{I}\right] \right\rVert_p=0.$$* *Proof.* First observe that since $T$ is measure preserving, $$\begin{gathered} \left\lVert \frac 1{\binom n{m+1}}\sum_{j=m+1}^n\binom{j-1}m f_j\circ T^j-\mathbb E\left[f\mid\mathcal{I}\right] \right\rVert_p\\ \leqslant\frac 1{\binom n{m+1}}\sum_{j=m+1}^n\binom{j-1}m \left\lVert f_j-f \right\rVert_p+ \left\lVert \frac 1{\binom n{m+1}}\sum_{j=m+1}^n\binom{j-1}m f\circ T^j-\mathbb E\left[f\mid\mathcal{I}\right] \right\rVert_p.\end{gathered}$$ The first term goes to zero as $n$ goes to infinity by the elementary fact that $\sum_{i=1}^nc_i x_i/\left(\sum_{j=1}^n c_j\right) \to 0$ whenever $x_j\to 0$, $c_j>0$ and $\sum_{j=1}^n c_j\to \infty$. For the second term, we assume without loss of generality that $\mathbb E\left[f\mid\mathcal{I}\right]=0$; otherwise, we replace $f$ by $f-\mathbb E\left[f\mid\mathcal{I}\right]$. Let $S_j:=\sum_{i=1}^j f\circ T^i$. Then $$\sum_{j=m+1}^n\binom{j-1}m f\circ T^j =\sum_{j=m+1}^n\binom{j-1}m S_j-\sum_{j=m}^{n-1}\binom{j}mS_j$$ and it follows that $$\label{eq:step_proof_ergodic_weighted} \left\lVert \frac 1{\binom n{m+1}}\sum_{j=m+1}^n\binom{j-1}m f\circ T^j \right\rVert_p\leqslant\frac{\binom{n-1}m }{\binom n{m+1}}\left\lVert S_n \right\rVert_p+ \frac 1{\binom n{m+1}} \sum_{j=m}^{n-1}\left(\binom{j}m-\binom{j-1}m\right)\left\lVert S_j \right\rVert_p.$$ Since $\left\lVert S_n \right\rVert_p/n\to 0$, the first term of the right hand side of [\[eq:step_proof_ergodic_weighted\]](#eq:step_proof_ergodic_weighted){reference-type="eqref" reference="eq:step_proof_ergodic_weighted"} goes to $0$ as $n$ goes to infinity.
For the second term, one has for each $m\leqslant R\leqslant n$ that $$\begin{gathered} \frac 1{\binom n{m+1}} \sum_{j=m}^{n-1}\left(\binom{j}m-\binom{j-1}m\right)\left\lVert S_j \right\rVert_p \leqslant\frac R{\binom n{m+1}} \sum_{j=m}^{R}\left(\binom{j}m-\binom{j-1}m\right)\left\lVert S_j \right\rVert_p\\ + \frac 1{\binom n{m+1}} \sum_{j=R}^{n-1}j\left(\binom{j}m-\binom{j-1}m\right)\sup_{k\geqslant R}\frac{\left\lVert S_k \right\rVert_p}k\end{gathered}$$ hence, since $j\left(\binom{j}m-\binom{j-1}m\right)=j\binom{j-1}{m-1}=m\binom{j}m$ and $\sum_{j=R}^{n-1}\binom{j}m\leqslant\binom n{m+1}$, $$\limsup_{n\to\infty} \frac 1{\binom n{m+1}}\sum_{j=m}^{n-1}\left(\binom{j}m-\binom{j-1}m\right)\left\lVert S_j \right\rVert_p \leqslant m\sup_{k\geqslant R}\frac{\left\lVert S_k \right\rVert_p}k$$ and we conclude using again that $\frac{\left\lVert S_k \right\rVert_p}k\to 0$ as $k$ goes to infinity. ◻ ## Proof of Theorem [Theorem 3](#thm:conv_Lp_ergodique){reference-type="ref" reference="thm:conv_Lp_ergodique"} {#proof-of-theorem-thmconv_lp_ergodique} The proofs will lead us to consider truncated versions of the kernel $h$. Define for each fixed $R>0$ the maps $\phi_R\colon\mathbb R\to\mathbb R$ by $$\phi_R\left(t\right):= \begin{cases} -R&\mbox{ if }t<-R,\\ t&\mbox{ if }-R\leqslant t<R,\\ R&\mbox{ if }t\geqslant R \end{cases}$$ and $h_R\colon S^m\to\mathbb R$ by $$\label{eq:def_de_hR} h_R\left(x_1,\dots,x_m\right):=\phi_R\left(h\left(x_1,\dots,x_m\right)\right),\quad x_1,\dots,x_m\in S.$$ Then $\left|h_R\right|$ is bounded by $R$ and, since $D_{h_R}\subset D_h$, assumption [\[ass:h_cont_Lp\]](#ass:h_cont_Lp){reference-type="ref" reference="ass:h_cont_Lp"} also holds with $h$ replaced by $h_R$. We claim that it suffices to prove that [\[eq:conv_Lp_non_sym\]](#eq:conv_Lp_non_sym){reference-type="eqref" reference="eq:conv_Lp_non_sym"} holds for each $R$ with $h$ replaced by $h_R$.
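The truncation $\phi_R$ and the truncated kernel $h_R$ just defined can be transcribed directly into code (a sketch; function names are ours):

```python
def phi_R(t, R):
    """Clamp t to [-R, R]: returns -R if t < -R, t if -R <= t < R, R if t >= R."""
    return max(-R, min(t, R))

def truncate_kernel(h, R):
    """h_R = phi_R o h: bounded by R, and D(h_R) is contained in D(h)
    because phi_R is continuous."""
    return lambda *xs: phi_R(h(*xs), R)
```

For instance, `truncate_kernel(lambda x, y: x * y, 1.0)(3.0, 4.0)` clamps the product 12.0 to the truncation level 1.0.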
Indeed, by the triangle inequality, $$\sup_{n\geqslant 2}\left\lVert U_{2,n,h}-U_{2,n,h_R} \right\rVert_p\leqslant \sup_{1\leqslant i<j}\left\lVert h\left(X_{i},X_{j}\right) \mathbf{1}_{\left|h\left(X_i,X_{j}\right)\right|>R} \right\rVert_p$$ and $$\begin{gathered} \int_\Omega \left| \int_{S^2}h\left(x,y\right)d\mu_{\omega}\left(x\right) d\mu_{\omega}\left(y\right)- \int_{S^2}h_R\left(x,y\right)d\mu_{\omega}\left(x\right) d\mu_{\omega}\left(y\right) \right|^pd\mathbb P\left(\omega\right) \\ \leqslant \int_\Omega \int_{S^2} \left|h\left(x,y\right)\right|^p \mathbf{1}_{\left|h\left(x,y\right)\right|>R} d\mu_{\omega}\left(x\right) d\mu_{\omega}\left(y\right)d\mathbb P\left(\omega\right),\end{gathered}$$ hence assumptions [\[ass:UI_ergodique\]](#ass:UI_ergodique){reference-type="ref" reference="ass:UI_ergodique"} and [\[ass:h_copies_ordre_p\]](#ass:h_copies_ordre_p){reference-type="ref" reference="ass:h_copies_ordre_p"} allow us to choose $R$ making the previous quantities as small as we wish. Defining $$d_{j,R}:=\frac 1j\sum_{i=1}^{j-1}h_R\left(X_{-i},X_0\right),$$ we get that $$\label{eq:U_2n_en_terms_de_djR} U_{2,n,h_R}=\frac 1{\binom n2}\sum_{j=2}^n jd_{j,R}\circ T^j.$$ We first show that there exists a set of probability one $\Omega'$ such that for each $\omega\in \Omega'$, $$\label{eq:conv_djR} d_{j,R}\left(\omega\right)\to \int_{S^2}h_R\left(x,y\right)d\mu_{\omega}\left(x\right)d\delta_{X_0\left(\omega\right)}\left(y\right)=:Y_R\left(\omega\right).$$ First, separability of $S$ guarantees the existence of a countable collection $\left(f_k\right)_{k\geqslant 1}$ of continuous and bounded functions from $S$ to $\mathbb R$ such that a sequence $\left(\mu_n\right)_{n\geqslant 1}$ of probability measures converges weakly to a probability measure $\mu$ if and only if for each $k\geqslant 1$, $\int f_kd\mu_n\to \int f_kd\mu$.
Taking $\mu_{n,\omega}:= n^{-1}\sum_{i=1}^n \delta_{X_{-i}\left(\omega\right)}$, the ergodic theorem furnishes for each $k\geqslant 1$ a set $\Omega_k$ having probability one for which the convergence $$\lim_{n\to\infty}\int f_k\left(x\right)d\mu_{n,\omega} =\lim_{n\to\infty}\frac 1n\sum_{j=1}^n f_k\left(X_{-j}\left(\omega\right)\right) =\mathbb E\left[f_k\left(X_0\right)\mid\mathcal{I}\right]\left(\omega\right)=\int f_k\left(x\right)d\mu_{\omega}\left(x\right)$$ holds for each $\omega\in\Omega_k$. Consequently, for each $\omega\in\Omega':=\bigcap_{k\geqslant 1}\Omega_k$, one has $\mu_{n,\omega}\to \mu_{\omega}$ weakly in $S$ and by Theorem 3.2 (page 21) of [@MR0233396], we get that $\mu_{n,\omega}\times\delta_{X_0\left(\omega\right)}\to \mu_{\omega}\times\delta_{X_0\left(\omega\right)}$ weakly in $S^2$. Since $h_R$ is bounded and for almost every $\omega\in \Omega$, $\mu_{\omega}\times\delta_{X_0\left(\omega\right)}\left(D_{h_R}\right)=0$, we get [\[eq:conv_djR\]](#eq:conv_djR){reference-type="eqref" reference="eq:conv_djR"}.
Moreover, $\left|d_{j,R}\right|\leqslant R$ hence by dominated convergence, $$\label{eq:convergence_de_djR} \lim_{j\to\infty}\left\lVert d_{j,R} - Y_R \right\rVert_p=0.$$ By [\[eq:U_2n_en_terms_de_djR\]](#eq:U_2n_en_terms_de_djR){reference-type="eqref" reference="eq:U_2n_en_terms_de_djR"} and the fact that $T$ is measure preserving, we infer that $$\label{eq:conv_Ustat_ordre_2_etape} \left\lVert U_{2,n,h_R} -\mathbb E\left[Y_R\mid\mathcal{I}\right] \right\rVert_p \leqslant\frac 1{\binom n2}\sum_{j=2}^nj\left\lVert d_{j,R}-Y_R \right\rVert_p +\left\lVert \frac 1{\binom n2}\sum_{j=2}^nj Y_R\circ T^j -\mathbb E\left[Y_R\mid\mathcal{I}\right] \right\rVert_p.$$ An application of [\[eq:conv_djR\]](#eq:conv_djR){reference-type="eqref" reference="eq:conv_djR"} combined with the dominated convergence theorem shows that the first term of the right hand side of [\[eq:conv_Ustat_ordre_2\_etape\]](#eq:conv_Ustat_ordre_2_etape){reference-type="eqref" reference="eq:conv_Ustat_ordre_2_etape"} goes to $0$ as $n$ goes to infinity. Then Lemma [Lemma 5](#lem:weighted_averatges){reference-type="ref" reference="lem:weighted_averatges"} with $m=1$ shows that $\left\lVert U_{2,n,h_R} - \mathbb E\left[Y_R\mid\mathcal{I}\right] \right\rVert_p\to 0$. It remains to check that $$\mathbb E\left[Y_R\mid\mathcal{I}\right]\left(\omega\right)=\int_{S^2}h_R\left(x,y\right)d\mu_{\omega}\left(x\right) d\mu_{\omega}\left(y\right).$$ Observe that by the ergodic theorem, $\mathbb E\left[Y_R\mid\mathcal{I}\right]\left(\omega\right) =\lim_{n\to\infty} n^{-1}\sum_{k=1}^n Y_R\left(T^k\omega\right)$.
Since $\mu_{T^k\omega}=\mu_{\omega}$, it follows that $$\mathbb E\left[Y_R\mid\mathcal{I}\right]\left(\omega\right)=\lim_{n\to\infty}\frac 1n \sum_{k=1}^n\int_{S^2}h_R\left(x,y\right)d\mu_{\omega}\left(x\right)d\delta_{X_k\left(\omega\right)}\left(y\right).$$ Using similar arguments as before gives that for almost every $\omega$, $\mu_\omega\times \left(n^{-1}\sum_{k=1}^n\delta_{X_k\left(\omega\right)}\right)$ converges weakly to $\mu_\omega\times\mu_\omega$. This ends the proof of Theorem [Theorem 3](#thm:conv_Lp_ergodique){reference-type="ref" reference="thm:conv_Lp_ergodique"}. ## Proof of Theorem [Theorem 4](#thm:conv_Lp_non-ergodique_densite){reference-type="ref" reference="thm:conv_Lp_non-ergodique_densite"} {#proof-of-theorem-thmconv_lp_non-ergodique_densite} We start by proving Theorem [Theorem 4](#thm:conv_Lp_non-ergodique_densite){reference-type="ref" reference="thm:conv_Lp_non-ergodique_densite"} in the case where $h\left(x_1,\dots,x_m\right)=\prod_{\ell=1}^m \mathbf{1}_{x_\ell\in A_\ell}$. We show by induction over $m$ that if $A_\ell,\ell\in\llbracket 1,m\rrbracket$ are Borel subsets of $\mathbb R^d$ and $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$ is a strictly stationary sequence with invariant $\sigma$-algebra $\mathcal{I}$, then $$\label{eq:thm_ergo_indic} \lim_{n\to\infty}\left\lVert \frac 1{\binom nm}\sum_{\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}\in\operatorname{Inc}^m_n} \prod_{\ell=1}^m \mathbf{1}_{X_{i_\ell}\in A_\ell} -\prod_{\ell=1}^m\mathbb E\left[\mathbf{1}_{X_0\in A_\ell} \mid\mathcal{I}\right] \right\rVert_p=0.$$ The case $m=1$ is a direct consequence of the ergodic theorem. Let us show the case $m=2$.
We start from $$\frac{1}{\binom n2}\sum_{1\leqslant i<j\leqslant n}\mathbf{1}_{X_i\in A_1}\mathbf{1}_{X_j\in A_2} =\frac{1}{\binom n2}\sum_{j=2}^n j \left( \frac 1j\sum_{i=1}^{j-1}\mathbf{1}_{X_{-i}\in A_1}\mathbf{1}_{X_0\in A_2} \right)\circ T^j.$$ Let $f_j:=\frac 1j\sum_{i=1}^{j-1}\mathbf{1}_{X_{-i}\in A_1}\mathbf{1}_{X_0\in A_2}$. By the ergodic theorem, $f_j\to \mathbb E\left[\mathbf{1}_{X_0\in A_1}\mid\mathcal{I}\right]\mathbf{1}_{X_0\in A_2}$ in $\mathbb L^p$, and Lemma [Lemma 5](#lem:weighted_averatges){reference-type="ref" reference="lem:weighted_averatges"} with $m=1$ gives [\[eq:thm_ergo_indic\]](#eq:thm_ergo_indic){reference-type="eqref" reference="eq:thm_ergo_indic"} for $m=2$. Suppose now that [\[eq:thm_ergo_indic\]](#eq:thm_ergo_indic){reference-type="eqref" reference="eq:thm_ergo_indic"} holds for each Borel subset $A_1,\dots,A_m$ of $\mathbb R^d$ and each strictly stationary sequence $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$. Let $A_1,\dots,A_{m+1}$ be Borel subsets of $\mathbb R^d$ and let $\left(X_0\circ T^i\right)_{i\in\mathbb Z}$ be a strictly stationary sequence. We start from $$\begin{gathered} \frac 1{\binom n{m+1}}\sum_{\left(i_\ell\right)_{\ell\in\llbracket 1,m+1\rrbracket}\in\operatorname{Inc}^{m+1}_n} \prod_{\ell\in\llbracket 1,m+1\rrbracket} \mathbf{1}_{X_{i_\ell}\in A_\ell}\\ =\frac 1{\binom n{m+1}}\sum_{j=m+1}^n \left(\mathbf{1}_{X_0\in A_{m+1}}\sum_{\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}\in\operatorname{Inc}^m_{j-1}} \prod_{\ell\in\llbracket 1,m\rrbracket }\mathbf{1}_{X_{i_\ell-j}\in A_\ell }\right)\circ T^j.\end{gathered}$$ Define $$f_j:=\frac 1{\binom{j-1}m}\mathbf{1}_{X_0\in A_{m+1}}\sum_{\left(i_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}\in\operatorname{Inc}^m_{j-1}} \prod_{\ell\in\llbracket 1,m\rrbracket }\mathbf{1}_{X_{i_\ell-j}\in A_\ell }.$$ Doing the change of indices $k_1=j-i_m,\dots,k_m=j-i_1$, the previous expression can be rewritten as $$f_j=\mathbf{1}_{X_0\in A_{m+1}}\frac 1{\binom{j-1}m}\sum_{\left(k_\ell\right)_{\ell\in\llbracket 1,m\rrbracket}\in\operatorname{Inc}^m_{j-1}} \prod_{\ell\in\llbracket 1,m\rrbracket
}\mathbf{1}_{X_{-k_\ell}\in A_{m-\ell+1} }$$ and using the induction assumption, we derive that $$\lim_{j\to\infty}\left\lVert f_j- \mathbf{1}_{X_0\in A_{m+1}}\prod_{\ell\in \llbracket 1,m\rrbracket} \mathbb E\left[\mathbf{1}_{X_0\in A_{m-\ell+1}}\mid \mathcal{I}\right] \right\rVert_p=0.$$ Then we conclude by [\[eq:weighted_ergodic_theorem_ordre_m\]](#eq:weighted_ergodic_theorem_ordre_m){reference-type="eqref" reference="eq:weighted_ergodic_theorem_ordre_m"}. We now show [\[eq:thm_erg_densite\]](#eq:thm_erg_densite){reference-type="eqref" reference="eq:thm_erg_densite"} in the general case. Fix a positive $\varepsilon$ and define for a positive $K$ $$h^{\left(K\right)}\left(x_1,\dots,x_m\right)=h\left(x_1,\dots,x_m\right) \mathbf{1}_{\left|h\left(x_1,\dots,x_m\right)\right|\leqslant K} \prod_{\ell=1}^m\mathbf{1}_{\left|x_\ell\right|_d\leqslant K},$$ where $\left|\cdot\right|_d$ denotes the Euclidean norm on $\mathbb R^d$. Observe that by the triangle inequality, $$\begin{gathered} \left\lVert U_{m,n,h}-U_{m,n,h^{\left(K\right)}} \right\rVert_p \leqslant\sup_{1\leqslant i_1< \dots<i_m} \left\lVert h\left(X_{i_1},\dots,X_{i_m}\right)\mathbf{1}_{\left|h\left(X_{i_1},\dots,X_{i_m}\right)\right|>K} \right\rVert_p\\ +\sum_{\ell=1}^m \sup_{1\leqslant i_1< \dots<i_m} \left\lVert h\left(X_{i_1},\dots,X_{i_m}\right)\mathbf{1}_{\left| X_{i_\ell}\right|_d>K} \right\rVert_p,\end{gathered}$$ hence by assumption [\[ass:Ui_dens\]](#ass:Ui_dens){reference-type="ref" reference="ass:Ui_dens"}, we can find $K'$ such that for each $K\geqslant K'$, $$\sup_{n\geqslant m}\left\lVert U_{m,n,h}-U_{m,n,h^{\left(K\right)}} \right\rVert_p\leqslant\varepsilon.$$ Moreover, by assumption [\[ass:inte_limite\]](#ass:inte_limite){reference-type="ref" reference="ass:inte_limite"}, we can choose $K''$ such that for each $K\geqslant K''$, $$\int_{\Omega} I_m\left(\mathbb R^d, \left|h-h^{\left(K\right)}\right|^p,\omega\right)d\mathbb P\left(\omega\right) \leqslant\varepsilon^p .$$ Let $K_0=\max\left\{
K',K''\right\}$. Observe that in assumptions [\[ass:densite_bornee\]](#ass:densite_bornee){reference-type="ref" reference="ass:densite_bornee"} and [\[ass:den_bornee_m\_omega\]](#ass:den_bornee_m_omega){reference-type="ref" reference="ass:den_bornee_m_omega"} , we can assume without loss of generality that $q_0=q_1$. By standard results in measure theory, we know that we can find an integer $J$, constants $c_1,\dots,c_J$ and Borel subsets $A_{\ell,j}$, $\ell\in\llbracket 1,m\rrbracket, j\in\llbracket 1,J\rrbracket$ such that $$\label{eq:approx_h_K_comb_ind} \int_{\left(\mathbb R^d\right)^m}\left|h^{\left(K_0\right)}\left(x_1,\dots,x_m\right)- \widetilde{ h}^{\left(K_0\right)}\left(x_1,\dots,x_m\right)\right|^{p\frac{q_0}{q_0-1}} dx_1\dots dx_m <\left(M_1+M_2\right)^p\varepsilon^{p\frac{q_0}{q_0-1}} ,$$ where $$\label{eq:expression_h_K_comb_ind} \widetilde{ h}^{\left(K_0\right)}\left(x_1,\dots,x_m\right) =\sum_{j=1}^Jc_j\prod_{\ell=1}^m \mathbf{1}_{x_\ell \in A_{\ell,j}}.$$ Notice that for each $1\leqslant i_1<\dots<i_m$, $$\begin{gathered} \left\lVert h^{\left(K_0\right)}\left(X_{i_1},\dots,X_{i_m}\right)- \widetilde{ h}^{\left(K_0\right)}\left(X_{i_1},\dots,X_{i_m}\right) \right\rVert_p^p\\ = \int_{\left(\mathbb R^d\right)^m}\left|h^{\left(K_0\right)}\left(x_1,\dots,x_m\right)- \widetilde{ h}^{\left(K_0\right)}\left(x_1,\dots,x_m\right)\right|^{p } f_{i_1,\dots,i_m}\left(x_1,\dots,x_m\right) dx_1\dots dx_m\end{gathered}$$ hence using Hölder's inequality, [\[eq:approx_h\_K_comb_ind\]](#eq:approx_h_K_comb_ind){reference-type="eqref" reference="eq:approx_h_K_comb_ind"} and assumption [\[ass:densite_bornee\]](#ass:densite_bornee){reference-type="ref" reference="ass:densite_bornee"}, we derive that $$\sup_{1\leqslant i_1<\dots<i_m}\left\lVert h^{\left(K_0\right)}\left(X_{i_1},\dots,X_{i_m}\right)- \widetilde{ h}^{\left(K_0\right)}\left(X_{i_1},\dots,X_{i_m}\right) \right\rVert_p \leqslant\varepsilon$$ and by the triangle inequality, $$\sup_{N\geqslant m}\left\lVert 
U_{m,N,h^{\left(K_0\right)}}-U_{m,N,\widetilde{h}^{\left(K_0\right)}} \right\rVert_p\leqslant\varepsilon.$$ Moreover, using Hölder's inequality, we find that $$\int_{\Omega} I_m\left(\mathbb R^d,\left|h^{\left(K_0\right)}-\widetilde{h}^{\left(K_0\right)}\right|^p,\omega\right)d\mathbb P\left(\omega\right) \leqslant\varepsilon^p.$$ As a consequence, $$\begin{aligned} \left\lVert U_{m,n,h}-I_m\left(\mathbb R^d,h,\cdot\right) \right\rVert_p &\leqslant\sup_{N\geqslant m}\left\lVert U_{m,N,h}-U_{m,N,h^{\left(K_0\right)}} \right\rVert_p + \sup_{N\geqslant m}\left\lVert U_{m,N,h^{\left(K_0\right)}}-U_{m,N,\widetilde{h}^{\left(K_0\right)}} \right\rVert_p \\ &+\left\lVert U_{m,n,\widetilde{h}^{\left(K_0\right)}}-I_m\left(\mathbb R^d,\widetilde{h}^{\left(K_0\right)},\cdot\right) \right\rVert_p +\left\lVert I_m\left(\mathbb R^d,\widetilde{h}^{\left(K_0\right)},\cdot\right)- I_m\left(\mathbb R^d,h^{\left(K_0\right)},\cdot\right) \right\rVert_p\\ &+\left\lVert I_m\left(\mathbb R^d,h^{\left(K_0\right)},\cdot\right)- I_m\left(\mathbb R^d,h,\cdot\right) \right\rVert_p.\end{aligned}$$ By [\[eq:thm_ergo_indic\]](#eq:thm_ergo_indic){reference-type="eqref" reference="eq:thm_ergo_indic"} and [\[eq:expression_h\_K_comb_ind\]](#eq:expression_h_K_comb_ind){reference-type="eqref" reference="eq:expression_h_K_comb_ind"}, we can find $n_0$ such that for each $n\geqslant n_0$, $\left\lVert U_{m,n,\widetilde{h}^{\left(K_0\right)}}-I_m\left(\mathbb R^d,\widetilde{h}^{\left(K_0\right)},\cdot\right) \right\rVert_p\leqslant\varepsilon$, hence we derive that for such $n$'s, $\left\lVert U_{m,n,h}-I_m\left(\mathbb R^d,h,\cdot\right) \right\rVert_p\leqslant 5\varepsilon$. This ends the proof of Theorem [Theorem 4](#thm:conv_Lp_non-ergodique_densite){reference-type="ref" reference="thm:conv_Lp_non-ergodique_densite"}. J. Aaronson, R. Burton, H. Dehling, D. Gilat, T. Hill, and B. Weiss. Strong laws for $L$- and $U$-statistics. *Trans. Amer. Math.
Soc.*, 348 (7): 2845--2866, 1996. ISSN 0002-9947. URL `https://doi.org/10.1090/S0002-9947-96-01681-9`. M. A. Arcones, *The law of large numbers for $U$-statistics under absolute regularity*, Electron. Comm. Probab. **3** (1998), 13--19. P. Billingsley. *Convergence of probability measures*. John Wiley & Sons Inc., New York, 1968. S. Borovkova, R. Burton, and H. Dehling. From dimension estimation to asymptotics of dependent $U$-statistics. In *Limit theorems in probability and statistics, Vol. I (Balatonlelle, 1999)*, pages 201--234. János Bolyai Math. Soc., Budapest, 2002. I. P. Cornfeld, S. V. Fomin, and Ya. G. Sinaı̆. *Ergodic theory*, volume 245 of *Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\]*. Springer-Verlag, New York, 1982. ISBN 0-387-90580-4. URL `http://dx.doi.org/10.1007/978-1-4615-6927-5`. Translated from the Russian by A. B. Sosinskiı̆. H. Dehling, D. Giraudo, and D. Volny. Some remarks on the ergodic theorem for $U$-statistics, 2023. URL `https://arxiv.org/abs/2302.04539`, to appear in *Comptes Rendus Mathématique*. H. G. Dehling and O. Sh. Sharipov, *Marcinkiewicz-Zygmund strong laws for $U$-statistics of weakly dependent observations*, Statist. Probab. Lett. **79** (2009), no. 19, 2028--2036. M. Denker and M. Gordin. Limit theorems for von Mises statistics of a measure preserving transformation. *Probab. Theory Related Fields*, 160 (1-2): 1--45, 2014. ISSN 0178-8051. URL `https://doi.org/10.1007/s00440-013-0522-z`. D. Giraudo, *Limit theorems for $U$-statistics of Bernoulli data*, ALEA Lat. Am. J. Probab. Math. Stat. **18** (2021), no. 1, 793--828. W. Hoeffding. A class of statistics with asymptotically normal distribution. *Ann. Math. Statistics*, 19: 293--325, 1948. ISSN 0003-4851. URL `https://doi.org/10.1214/aoms/1177730196`. R. Lachièze-Rey and M. Reitzner. $U$-statistics in stochastic geometry.
In *Stochastic analysis for Poisson point processes*, volume 7 of *Bocconi Springer Ser.*, pages 229--253. Bocconi Univ. Press, 2016. R. Lyons. Distance covariance in metric spaces. *Ann. Probab.*, 41 (5): 3284--3305, 2013. ISSN 0091-1798. URL `https://doi.org/10.1214/12-AOP803`.
{ "id": "2309.05988", "title": "Some notes on ergodic theorem for $U$-statistics of order $m$ for\n stationary and not necessarily ergodic sequences", "authors": "Davide Giraudo (IRMA, UNISTRA UFR MI)", "categories": "math.PR math.ST stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - "Manting Peng[^1]" - "Zheng Sun[^2]" - "Kailiang Wu[^3]" title: | OEDG: Oscillation-eliminating discontinuous Galerkin\ method for hyperbolic conservation laws [^4] --- [^1]: Department of Mathematics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China. [^2]: Department of Mathematics, The University of Alabama, Tuscaloosa, AL 35487, USA (). [^3]: Corresponding author. Department of Mathematics and SUSTech International Center for Mathematics, Southern University of Science and Technology, and National Center for Applied Mathematics Shenzhen (NCAMS), Shenzhen, Guangdong 518055, China (). [^4]: M. Peng and K. Wu were partially supported by Shenzhen Science and Technology Program (No. RCJC20221008092757098) and National Natural Science Foundation of China (No. 12171227). Z. Sun was partially supported by NSF grant DMS-2208391.
{ "id": "2310.04807", "title": "OEDG: Oscillation-eliminating discontinuous Galerkin method for\n hyperbolic conservation laws", "authors": "Manting Peng, Zheng Sun, Kailiang Wu", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Let $\mathcal{X}\to Y$ be a birational modification of a variety by an Artin stack. In previous work, under the assumption that $\mathcal{X}$ is smooth, we proved a change of variables formula relating motivic integrals over arcs of $Y$ to motivic integrals over arcs of $\mathcal{X}$. In this paper, we extend that result to the case where $\mathcal{X}$ is singular. We may therefore apply this generalized formula to the so-called warping stack $\mathscr{W}(\mathcal{X})$ of $\mathcal{X}$, which may be singular even when $\mathcal{X}$ is smooth. We thus obtain a change of variables formula *canonically* expressing any given motivic integral over arcs of $Y$ as a motivic integral over *warped arcs* of $\mathcal{X}$. address: - Matthew Satriano, Department of Pure Mathematics, University of Waterloo - Jeremy Usatine, Department of Mathematics, Florida State University author: - Matthew Satriano and Jeremy Usatine bibliography: - SingularMCVF.bib title: Motivic integration for singular Artin stacks --- # Introduction Motivic integration was pioneered by Kontsevich [@Kontsevich] in his proof that birational smooth Calabi--Yau varieties have equal Hodge numbers. Since then, the field has seen broad-ranging applications in birational geometry, mirror symmetry, and the study of singularities. Central to these applications is a *motivic change of variables formula* relating the motivic measures of $Y$ and $X$ under a birational modification $X \to Y$, see e.g., [@DenefLoeser1999; @Looijenga]. Motivated by mirror symmetry for singular Calabi--Yau varieties, Batyrev [@Batyrev1998] introduced the *stringy Hodge numbers* for varieties with *log-terminal singularities*. These are defined in terms of the combinatorial information of a resolution of singularities rather than the dimensions of cohomology groups. However, when $Y$ admits a *crepant* resolution $X\to Y$, the stringy Hodge numbers of $Y$ agree with their classical counterparts on $X$.
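For orientation, we recall the shape of the scheme-level change of variables formula, as in [@DenefLoeser1999]: if $\pi\colon X\to Y$ is a proper birational morphism of $k$-varieties with $X$ smooth, then $$\int_{\mathscr{L}(Y)} \mathbb{L}^{f}\, \mathrm{d}\mu_Y = \int_{\mathscr{L}(X)} \mathbb{L}^{f \circ \mathscr{L}(\pi) - \mathop{\mathrm{ord}}_{\mathrm{Jac}_\pi}}\, \mathrm{d}\mu_X$$ for every measurable function $f\colon \mathscr{L}(Y)\to\mathbb{Z}\cup\{\infty\}$ such that $\mathbb{L}^f$ is integrable, where $\mathscr{L}(\cdot)$ denotes the arc space, $\mu$ the motivic measure, and $\mathop{\mathrm{ord}}_{\mathrm{Jac}_\pi}$ the order function of the Jacobian ideal of $\pi$. The results of this paper generalize this identity to stacks, with the exponent correction $\mathop{\mathrm{ord}}_{\mathrm{Jac}_\pi}$ replaced by suitable height functions.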
In [@Yasuda2004] (see also [@Yasuda2006; @Yasuda2019]), Yasuda gave a beautiful description for the stringy Hodge numbers of $Y$ in the case where $Y$ has only quotient singularities. Such varieties need not admit a crepant resolution by a scheme, however Vistoli [@Vistoli89] proved that they always admit small (hence crepant) resolutions $\pi\colon\mathcal{X}\to Y$ by Deligne--Mumford stacks. Yasuda extended the theory of motivic integration to the case of smooth Deligne--Mumford stacks and proved a motivic change of variables for $\pi$. As a result, he obtained that the stringy Hodge numbers of $Y$ coincide with the orbifold Hodge numbers of $\mathcal{X}$ in the sense of [@ChenRuan]. Since varieties with log-terminal singularities rarely admit crepant resolutions by Deligne--Mumford stacks, if we wish to say something similar for log-terminal singularities in general, we must consider Artin stacks. Indeed, it is easy to write down examples (see e.g., [@SatrianoUsatine3 Example 7.1]) of log-terminal varieties admitting a small (hence crepant) resolution by an Artin stack, yet no crepant resolution by a Deligne--Mumford stack and even no non-commutative crepant resolution (NCCR), as defined by van den Bergh [@vdB2004]. Furthermore, we recently proved [@SatrianoUsatine3] that *every* log-terminal variety admits a crepant resolution by a smooth Artin stack. In order to apply a motivic change of variables formula to a resolution $\pi\colon\mathcal{X}\to Y$ by a stack, one must understand how arcs of $Y$ lift to arcs of $\mathcal{X}$. Here we must draw an important distinction between three cases of increasing generality: when $\mathcal{X}$ is a scheme, when $\mathcal{X}$ is Deligne--Mumford, or when $\mathcal{X}$ is Artin. In the scheme case, (outside a set of measure 0) every arc $\varphi\colon D\to Y$ lifts uniquely to $\mathcal{X}$. 
When $\mathcal{X}$ is Deligne--Mumford, $\varphi$ need not lift; however, it does lift uniquely to a so-called *twisted arc* as introduced by [@Yasuda2004]. When $\mathcal{X}$ is Artin, the situation is far more complicated: since Artin stacks are non-separated, $\varphi$ may admit many (even infinitely many) lifts to $\mathcal{X}$. Thus, one is in need of a more general notion than twisted arcs, one that yields a *unique* lift of $\varphi$. In [@SatrianoUsatine4], we construct this desired theory, obtaining new notions of arcs which we call *warped arcs*. We prove that (outside a set of measure 0) every arc $\varphi$ of $Y$ admits a *canonical* warped lift $\varphi^{\mathop{\mathrm{can}}}$, and that such lifts can naturally be interpreted as usual arcs of an auxiliary stack $\mathscr{W}(\mathcal{X})$, see [@SatrianoUsatine4 Theorem 1.8 and Theorem 1.15]. It is thus natural to ask for a motivic change of variables formula that can be applied to $\mathscr{W}(\mathcal{X})$. A technical hurdle arises here: even when $\mathcal{X}$ is smooth and finite type, $\mathscr{W}(\mathcal{X})$ need not be. Thus, even if one is only interested in resolutions by smooth Artin stacks, in order to obtain canonical lifts, one must work with $\mathscr{W}(\mathcal{X})$, and is therefore in need of a more general motivic change of variables formula. The purpose of this paper is to accomplish this necessary generalization. Namely, we prove a motivic change of variables formula for stacks that are: (i) singular, (ii) locally of finite type, and (iii) non-equidimensional, see . This generalizes our previous work [@SatrianoUsatine2] where we proved a motivic change of variables formula for smooth stacks. Ultimately, we prove our main theorem () by reducing to the smooth case; thus, the current work relies on [@SatrianoUsatine2] rather than superseding it.
To state our main theorem, we must first discuss new kinds of functions that we introduced in [@SatrianoUsatine2] known as *height functions*. Let $\mathscr{L}(\mathcal{X})$ denote the stack of arcs of $\mathcal{X}$ and let $|\mathscr{L}(\mathcal{X})|$ denote its associated topological space. Given an object of the derived category $E\in D^-_{\mathop{\mathrm{coh}}}(\mathcal{X})$, we let $$\mathop{\mathrm{ht}}^{(i)}_E\colon|\mathscr{L}(\mathcal{X})|\to\mathbb{Z}\cup\{\infty\}$$ be the function assigning to an arc $\varphi\colon \mathop{\mathrm{Spec}}k'\llbracket t \rrbracket\to\mathcal{X}$ the value $$\mathop{\mathrm{ht}}^{(i)}_E(\varphi)=\dim_{k'} H^i(L\varphi^*E).$$ This definition was inspired by recent work [@ESZB] of the first author, Ellenberg, and Zureick-Brown, where we generalized the notion of number-theoretic Weil heights to the case of stacks, and unified the Batyrev--Manin and Malle Conjectures. When $\mathcal{X}$ is smooth, our motivic change of variables formula for $\pi\colon\mathcal{X}\to Y$ is governed by the difference of heights $$\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/Y}}-\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/Y}},$$ where $L_{\mathcal{X}/Y}$ is the relative cotangent complex; see [@SatrianoUsatine2 Theorem 1.3] and [@SatrianoUsatine3 Theorem 2.3]. When $\mathcal{X}$ is a scheme, $\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/Y}}$ vanishes and $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/Y}}$ agrees with the order function of the relative Jacobian ideal of $\pi$. Hence, we recover Kontsevich's original motivic change of variables formula. In this paper, we show that for singular $\mathcal{X}$, the above height function needs to be modified by an interesting correction term.
Namely, we introduce the following *relative height function*: $$\begin{aligned} \mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}(\varphi) :=\ &\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi) - \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi)\\ &- \dim_{k'}\mathop{\mathrm{coker}}\left(H^0(L\varphi^*L\pi^*L_\mathcal{Y})_\mathrm{tor}\to H^0(L\varphi^* L_\mathcal{X})_\mathrm{tor}\right),\end{aligned}$$ where the subscript $\mathrm{tor}$ denotes the torsion submodule; see Definition [Definition 8](#def:relhet){reference-type="ref" reference="def:relhet"}. We may now state our main theorem after fixing the following notation. See for the definition of the measure $\mu_{\mathcal{X},d}$. **Notation 1**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers and separated diagonal, let $Y$ be an irreducible finite type scheme over $k$, let $\pi: \mathcal{X}\to Y$ be a morphism, and let $U$ be a non-empty smooth open subscheme of $Y$ such that $\mathcal{U}:=\pi^{-1}(U) \to U$ is an isomorphism. **Theorem 1**. *Keep Notation [Notation 1](#not:mainnot){reference-type="ref" reference="not:mainnot"}, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder such that the map $$\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|\ \longrightarrow\ \mathscr{L}(Y) \smallsetminus\mathscr{L}(Y \smallsetminus U)$$ induced by $\pi$ is a bijection on isomorphism classes of $k'$-points for all field extensions $k'$ of $k$. Then $$\int_{\mathscr{L}(Y)} \mathbb{L}^{f} \mathrm{d}\mu_Y = \int_{\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|} \mathbb{L}^{f \circ \mathscr{L}(\pi)-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, \dim Y}$$ for every measurable function $f\colon \mathscr{L}(Y) \to \mathbb{Z}\cup \{\infty\}$ such that $\mathbb{L}^f$ is integrable on $\mathscr{L}(Y)$.* **Remark 1**.
See Theorem [Theorem 5](#theoremMotivicChangeOfVariablesMeasurable-realversion){reference-type="ref" reference="theoremMotivicChangeOfVariablesMeasurable-realversion"} for a stronger version of the above theorem; in particular, Theorem [Theorem 5](#theoremMotivicChangeOfVariablesMeasurable-realversion){reference-type="ref" reference="theoremMotivicChangeOfVariablesMeasurable-realversion"}([\[theoremMotivicChangeOfVariablesMeasurable-realversion::a\]](#theoremMotivicChangeOfVariablesMeasurable-realversion::a){reference-type="ref" reference="theoremMotivicChangeOfVariablesMeasurable-realversion::a"}) shows that the left hand side of the above equation converges if and only if the right hand side does. See also Theorem [Theorem 4](#theoremMCVFCylinder){reference-type="ref" reference="theoremMCVFCylinder"} for a variant. Since $\mathcal{X}$ may be locally of finite type and non-equidimensional, proving requires some extra bookkeeping. For this we introduce the notions of *small cylinders* and *$d$-convergent cylinders*, see Definitions [Definition 2](#def:smallcylinder){reference-type="ref" reference="def:smallcylinder"} and [Definition 4](#def:d-measurability-and-measure){reference-type="ref" reference="def:d-measurability-and-measure"}. The latter concept essentially involves fixing a "virtual dimension" $d$ for $\mathcal{X}$ and only considering those sets that behave like cylinders in the $d$-dimensional case. This leads naturally to the measure $\mu_{\mathcal{X},d}$, see . Applying to $\mathscr{W}(\mathcal{X})$ gives our desired motivic change of variables formula for warped arcs. In the statement below, $\mathcal{C}_\mathcal{X}$ denotes the set of canonical warped lifts of $\mathscr{L}(Y)$ and $\underline{\pi}\colon\mathscr{W}(\mathcal{X})\to Y$ denotes a canonical map induced by $\pi$. **Theorem 2** ([@SatrianoUsatine4 Theorem 1.18]). 
*Keep Notation [Notation 1](#not:mainnot){reference-type="ref" reference="not:mainnot"}, assume $\mathcal{X}$ has affine diagonal, and assume $\mathcal{X}\to Y$ is a good moduli space map. For any measurable function $f\colon\mathscr{L}(Y) \to \mathbb{Z}\cup \{\infty\}$ with $\mathbb{L}^f$ integrable on $\mathscr{L}(Y)$, we have $$\int_{\mathscr{L}(Y)} \mathbb{L}^{f} \mathrm{d}\mu_Y \ =\ \int_{\mathcal{C}_\mathcal{X}} \mathbb{L}^{f \circ \mathscr{L}(\underline{\pi}) - \mathop{\mathrm{ht}}_{\mathscr{W}(\mathcal{X})/Y}} \mathrm{d}\mu_{\mathscr{W}(\mathcal{X}), \dim Y}.$$* Because $\mathscr{W}(\mathcal{X})$ may not be smooth, equidimensional, or finite type, is a crucial ingredient in the proof of . Given any resolution $\pi\colon\mathcal{X}\to Y$, this shows that any motivic integral over $Y$ can be expressed canonically as a motivic integral over warped arcs of $\mathcal{X}$. ## Conventions {#conventions .unnumbered} Throughout this paper, let $k$ be an algebraically closed field of characteristic 0. For any stack $\mathcal{X}$ over $k$, we let $|\mathcal{X}|$ denote its associated topological space, and for any subset $\mathcal{C}\subset |\mathcal{X}|$ and field extension $k'$ of $k$, we let $\mathcal{C}(k')$ (resp. $\overline{\mathcal{C}}(k')$) denote the category of (resp. set of isomorphism classes of) $k'$-valued points of $\mathcal{X}$ whose class in $|\mathcal{X}|$ is contained in $\mathcal{C}$. # Preliminaries In this section, we collect several properties of cylinders which will be useful throughout this paper. **Definition 1**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$. We call $\mathcal{C}$ a *cylinder* if $\mathcal{C}= \theta_n^{-1}(\mathcal{C}_n)$ for some $n \in \mathbb{Z}_{\geq 0}$ and $\mathcal{C}_n$ a locally constructible subset of $|\mathscr{L}_n(\mathcal{X})|$. **Remark 2**. 
If $\mathcal{W}$ is a locally finite type Artin stack over $k$ and $\mathcal{E}\subset |\mathcal{W}|$, then $\mathcal{E}$ is locally constructible in $|\mathcal{W}|$ if and only if for every quasi-compact open substack $\mathcal{V}$ of $\mathcal{W}$ the intersection $\mathcal{E}\cap |\mathcal{V}|$ is a constructible subset of $|\mathcal{V}|$. Since our motivic change of variables formula applies to locally finite type stacks which are not necessarily quasi-compact, we require a boundedness assumption on our cylinders. This is given in the following definition. **Definition 2**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder. We call $\mathcal{C}$ *small* if $\theta_0(\mathcal{C})$ is contained in a quasi-compact subset of $|\mathcal{X}|$. The following shows that boundedness of the $0$-truncation of a cylinder forces boundedness of all truncations. **Proposition 1**. *Let $\mathcal{X}$ be a locally finite type Artin stack over $k$, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a small cylinder. If $n \in \mathbb{Z}_{\geq 0}$, then $\theta_n(\mathcal{C})$ is a quasi-compact locally constructible subset of $|\mathscr{L}_n(\mathcal{X})|$.* *Proof.* Because $\theta_0(\mathcal{C})$ is contained in a quasi-compact subset of $|\mathcal{X}|$, by replacing $\mathcal{X}$ with a quasi-compact open substack that contains $\theta_0(\mathcal{C})$, we may assume that $\mathcal{X}$ is finite type over $k$. Chevalley's Theorem for Artin stacks [@HallRydh17 Theorem 5.1] and [@SatrianoUsatine1 Lemma 3.24] then reduce the proposition to the case where $\mathcal{X}$ is a scheme, which is well-known, see e.g., [@ChambertLoirNicaiseSebag Chapter 5 Corollary 1.5.7(b)]. ◻ **Remark 3**. Let $\mathcal{W}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers and $\mathcal{E}$ be a quasi-compact locally constructible subset of $|\mathcal{W}|$.
Then there exists a quasi-compact open substack $\mathcal{V}$ of $\mathcal{W}$ such that $\mathcal{E}$ is a (constructible) subset of $|\mathcal{V}|$, allowing us to associate to $\mathcal{E}$ a class $\mathop{\mathrm{e}}(\mathcal{E}) \in \widehat{\mathscr{M}}_k$. It is straightforward to verify that $\mathop{\mathrm{e}}(\mathcal{E})$ does not depend on the choice of $\mathcal{V}$. In particular if $\mathcal{X}$ is a locally finite type Artin stack over $k$ with affine geometric stabilizers and $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ is a small cylinder, for every $n \in \mathbb{Z}_{\geq 0}$, we have a well defined class $\mathop{\mathrm{e}}(\theta_n(\mathcal{C})) \in \widehat{\mathscr{M}}_k$ by . **Proposition 2**. *Let $\mathcal{X}$ be a locally finite type Artin stack over $k$, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a small cylinder. Then every cover of $\mathcal{C}$ by cylinders has a finite subcover.* *Proof.* Let $\{\mathcal{C}^{(i)}\}_i$ be a collection of cylinders in $|\mathscr{L}(\mathcal{X})|$ such that $\mathcal{C}\subset \bigcup_i \mathcal{C}^{(i)}$. Because $\mathcal{C}$ is small, there exists a quasi-compact open substack $\mathcal{X}_1$ of $\mathcal{X}$ such that $\theta_0(\mathcal{C}) \subset |\mathcal{X}_1|$. Replacing $\mathcal{X}$ with $\mathcal{X}_1$ and each $\mathcal{C}^{(i)}$ with $\mathcal{C}^{(i)} \cap |\mathscr{L}(\mathcal{X}_1)|$, we assume that $\mathcal{X}$ is finite type. The result then follows from [@SatrianoUsatine2 Proposition 2.2]. ◻ We next relate cylinders in a stack to that of a smooth cover. **Lemma 1**. *Let $\eta: \mathcal{W}\to \mathcal{Y}$ be a smooth surjective morphism of finite type Artin stacks over $k$, and let $\mathcal{D}\subset |\mathscr{L}(\mathcal{Y})|$. 
If $\mathscr{L}(\eta)^{-1}(\mathcal{D})$ is a cylinder in $|\mathscr{L}(\mathcal{W})|$, then $\mathcal{D}$ is a cylinder in $|\mathscr{L}(\mathcal{Y})|$.* *Proof.* By replacing $\mathcal{W}$ with a smooth cover by a finite type $k$-scheme, we may assume that $\mathcal{W}= W$ is a scheme. Set $C = \mathscr{L}(\eta)^{-1}(\mathcal{D})$. We will use throughout this proof that because $\eta$ is smooth and surjective, $\mathscr{L}(\eta): \mathscr{L}(W) \to |\mathscr{L}(\mathcal{Y})|$ is surjective. In particular, $\mathcal{D}= \mathscr{L}(\eta)(C)$. Because $C$ is a cylinder, there exists some $n$ and a constructible subset $C_n \subset \mathscr{L}_n(W)$ such that $C = \theta_n^{-1}(C_n)$. Because $C \subset \theta_n^{-1}(\theta_n (C)) \subset \theta_n^{-1}(C_n) = C$, we may replace $C_n$ with the constructible set $\theta_n(C)$ and thus assume that $C_n = \theta_n(C)$. By Chevalley's theorem for Artin stacks, it is sufficient to show that $$\mathcal{D}= \theta_n^{-1}(\mathscr{L}_n(\eta)(C_n)).$$ For the first direction of this equality, if $\alpha \in \mathcal{D}$ then $$\theta_n(\alpha) \in \theta_n(\mathcal{D}) = \theta_n(\mathscr{L}(\eta)(C)) = \mathscr{L}_n(\eta)(\theta_n(C)) = \mathscr{L}_n(\eta)(C_n).$$ Thus $\mathcal{D}\subset \theta_n^{-1}(\mathscr{L}_n(\eta)(C_n))$. For the other direction, let $\alpha \in \theta_n^{-1}(\mathscr{L}_n(\eta)(C_n))$ and let $\varphi \in \mathscr{L}(W)$ be such that $\alpha = \mathscr{L}(\eta)(\varphi)$. Then $$\mathscr{L}_n(\eta)(\theta_n(\varphi)) = \theta_n(\mathscr{L}_n(\eta)(\varphi)) = \theta_n(\alpha) \in \mathscr{L}_n(\eta)(C_n) = \theta_n(\mathcal{D}),$$ so $$\theta_n(\varphi) \in \mathscr{L}_n(\eta)^{-1}(\theta_n(\mathcal{D})) = \theta_n(C) = C_n$$ where the first equality of sets follows from [@SatrianoUsatine1 Lemma 3.24]. Therefore $\varphi \in \theta_n^{-1}(C_n) = C$, so $\alpha = \mathscr{L}(\eta)(\varphi) \in \mathscr{L}(\eta)(C) = \mathcal{D}$ and we are done. 
◻ The following shows a type of base change result for cylinders. **Lemma 2**. *Let $\mathcal{X}, \mathcal{Y}, \mathcal{W}$ be locally finite type Artin stacks over $k$, let $\pi: \mathcal{X}\to \mathcal{Y}$ and $\eta: \mathcal{W}\to \mathcal{Y}$ be morphisms, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$. Set $\mathcal{Z}= \mathcal{X}\times_\mathcal{Y}\mathcal{W}$ and let $\rho: \mathcal{Z}\to \mathcal{W}$ and $\xi: \mathcal{Z}\to \mathcal{X}$ be the projection maps. Then $$\mathscr{L}(\rho)(\mathscr{L}(\xi)^{-1}(\mathcal{C})) = \mathscr{L}(\eta)^{-1}(\mathscr{L}(\pi)(\mathcal{C})).$$* *Proof.* For the first direction, let $\alpha \in \mathscr{L}(\rho)(\mathscr{L}(\xi)^{-1}(\mathcal{C}))$, and let $\varphi \in \mathscr{L}(\xi)^{-1}(\mathcal{C})$ such that $\alpha = \mathscr{L}(\rho)(\varphi)$. Then $$\mathscr{L}(\eta)(\alpha) = \mathscr{L}(\eta)(\mathscr{L}(\rho)(\varphi)) = \mathscr{L}(\pi)(\mathscr{L}(\xi)(\varphi)) \in \mathscr{L}(\pi)(\mathcal{C}),$$ so $\mathscr{L}(\rho)(\mathscr{L}(\xi)^{-1}(\mathcal{C})) \subset \mathscr{L}(\eta)^{-1}(\mathscr{L}(\pi)(\mathcal{C}))$. For the other direction, let $\alpha \in (\mathscr{L}(\eta)^{-1}(\mathscr{L}(\pi)(\mathcal{C})))(k')$ and by slight abuse of notation, also let $\alpha$ denote the corresponding map $\mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to \mathcal{W}$. Possibly by extending $k'$, we may assume there exists $\psi \in \mathcal{C}(k')$ such that $\mathscr{L}(\pi)(\psi) \cong \mathscr{L}(\eta)(\alpha)$. Again by slight abuse of notation, also let $\psi$ denote the corresponding map $\mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to \mathcal{X}$, so $\pi \circ \psi \cong \eta \circ \alpha$. Thus there exists $\varphi: \mathop{\mathrm{Spec}}(k' \llbracket t \rrbracket) \to \mathcal{Z}$ such that $\xi \circ \varphi = \psi$ and $\rho \circ \varphi = \alpha$. 
By slight abuse of notation, let $\varphi$ also denote the corresponding object of $(\mathscr{L}(\mathcal{Z}))(k')$, so $\mathscr{L}(\xi)(\varphi) = \psi$ and $\mathscr{L}(\rho)(\varphi) = \alpha$. Therefore $\varphi \in (\mathscr{L}(\xi)^{-1}(\mathcal{C}))(k')$, so $\alpha \in (\mathscr{L}(\rho)(\mathscr{L}(\xi)^{-1}(\mathcal{C})))(k')$ and we are done. ◻ We now prove a liftability lemma for elements in the truncation of a cylinder. **Lemma 3**. *Let $\mathcal{X}$ be an equidimensional finite type Artin stack over $k$ with separated diagonal and affine geometric stabilizers, let $\mathcal{U}$ be a smooth open substack of $\mathcal{X}$, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$. There exists some $N_\mathcal{C}$ such that for any $n \geq N_{\mathcal{C}}$, any field extension $k'$ of $k$, and any $\varphi_n \in (\theta_n(\mathcal{C}))(k')$, there exists some $\varphi \in \mathcal{C}(k')$ such that $\theta_n(\varphi) \cong \varphi_n$.* *Proof.* Because $\mathcal{C}$ is a cylinder, there exists some $N_0 \in \mathbb{Z}_{\geq 0}$ such that $\mathcal{C}$ is the preimage along $\theta_{N_0}$ of some constructible subset of $\mathscr{L}_{N_0}(\mathcal{X})$. For any $n \geq N_0$, $$\mathcal{C}= \theta_n^{-1}(\theta_n(\mathcal{C})).$$ Let $\xi: X \to \mathcal{X}$ be a smooth cover by a finite type $k$-scheme $X$ such that for all field extensions $k'$ of $k$, the map $X(k') \to \overline{\mathcal{X}}(k')$ is surjective. Such a smooth cover exists, e.g., by [@Deshmukh Theorem 1.2(b)], and by [@stacks-project Lemmas 0DRP and 0DRQ] we may assume that $X$ is equidimensional. Let $C = \mathscr{L}(\xi)^{-1}(\mathcal{C}) \subset \mathscr{L}(X)$.
Noting that $\xi^{-1}(\mathcal{U})$ is smooth and that $C$ is disjoint from $\mathscr{L}(X) \smallsetminus\mathscr{L}(\xi^{-1}(\mathcal{U}))$, [@SatrianoUsatine1 Lemma 7.5] implies there exists some $N_C$ such that for any $n \geq N_C$, any field extension $k'$ of $k$, and any $\psi_n \in (\theta_n(C))(k')$, there exists some $\psi \in C(k')$ such that $\theta_n(\psi) = \psi_n$. Note that the proof of [@SatrianoUsatine1 Lemma 7.5] still holds when the irreducibility hypothesis is replaced with equidimensionality. Set $N_\mathcal{C}= \max(N_0, N_C)$. Let $n \geq N_\mathcal{C}$, let $k'$ be a field extension of $k$, and let $\varphi_n \in (\theta_n(\mathcal{C}))(k')$. Because $X(k') \to \overline{\mathcal{X}}(k')$ is surjective and $X \to \mathcal{X}$ is smooth, there exists some $\psi_n \in \mathscr{L}_n(X)(k')$ such that $\mathscr{L}_n(\xi)(\psi_n) \cong \varphi_n$. Because $\varphi_n \in (\theta_n(\mathcal{C}))(k')$, there exists some field extension $k''$ of $k'$ and some $\varphi' \in \mathcal{C}(k'')$ such that $\theta_n(\varphi') \cong \varphi_n \otimes_{k'} k''$. Because $X \to \mathcal{X}$ is smooth, there exists some $\psi' \in \mathscr{L}(X)(k'')$ such that $\mathscr{L}(\xi)(\psi') \cong \varphi'$ and $\theta_n(\psi') \cong \psi_n \otimes_{k'} k''$. Thus $\psi_n \in (\theta_n(C))(k')$, so by our choice of $N_C$, there exists some $\psi \in C(k')$ such that $\theta_n(\psi) = \psi_n$. Set $\varphi = \mathscr{L}(\xi)(\psi) \in \mathscr{L}(\mathcal{X})(k')$. Then $\theta_n(\varphi) \cong \varphi_n$. Because $n \geq N_0$, we have $\mathcal{C}= \theta_n^{-1}(\theta_n(\mathcal{C}))$. Thus $\varphi \in \mathcal{C}(k')$, and we are done. ◻ We end this section with the following general result. **Lemma 4**. *Let $\mathcal{X}$ be a locally finite type Artin stack over $k$.
If $\mathcal{U}$ is an open substack of $\mathcal{X}$, then $\theta_0(|\mathscr{L}(\mathcal{X})| \smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|)$ is contained in the closure of $|\mathcal{U}|$ in $|\mathcal{X}|$. Furthermore, if $\mathcal{W}$ is a closed substack of $\mathcal{X}$ such that $|\mathcal{U}| \subset |\mathcal{W}|$, then $$|\mathscr{L}(\mathcal{X})| \smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})| \subset |\mathscr{L}(\mathcal{W})|.$$* *Proof.* Let $k'$ be a field extension of $k$, and let $\varphi: \mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to \mathcal{X}$ correspond to a $k'$-point of $\mathscr{L}(\mathcal{X})$ that is not in $\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})$. Then the generic point $\mathop{\mathrm{Spec}}(k'\llparenthesis t \rrparenthesis) \to \mathop{\mathrm{Spec}}(k' \llbracket t \rrbracket) \xrightarrow{\varphi} \mathcal{X}$ factors through $\mathcal{U}$, so the special point $\theta_0(\varphi): \mathop{\mathrm{Spec}}(k') \hookrightarrow \mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \xrightarrow{\varphi} \mathcal{X}$ has image in the closure of $|\mathcal{U}|$ in $|\mathcal{X}|$, verifying the first sentence of the lemma. We have also shown $$\varphi(|\mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket)|) \subset |\mathcal{W}|.$$ Then because $\mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket)$ is reduced, $\varphi$ factors through $\mathcal{W}$, so we have verified the second sentence of the lemma. ◻ # Motivic change of variables for representable resolutions of singularities We will ultimately prove Theorem [Theorem 1](#theoremMotivicChangeOfVariablesMeasurable){reference-type="ref" reference="theoremMotivicChangeOfVariablesMeasurable"} by reducing to the case of smooth stacks and making use of our motivic change of variables formula [@SatrianoUsatine2 Theorem 1.3].
The first step in this reduction is to prove the following change of variables formula for representable resolutions of singularities. **Theorem 3**. *Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers and separated diagonal. Let $\mathcal{Z}$ be an irreducible smooth finite type Artin stack over $k$, and let $\pi: \mathcal{Z}\to \mathcal{X}$ be a morphism. Assume that $\pi$ factors as a proper morphism that is representable by schemes followed by an open immersion. Let $\mathcal{U}$ be an open substack of $\mathcal{X}$ such that $\pi^{-1}(\mathcal{U}) \to \mathcal{U}$ is an isomorphism, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, and assume that $\theta_0(\mathcal{C}) \subset \pi(|\mathcal{Z}|)$. Set $\mathcal{E}= \mathscr{L}(\pi)^{-1}(\mathcal{C})$.* (a) *[\[theoremPartCylinderBijection\]]{#theoremPartCylinderBijection label="theoremPartCylinderBijection"} The subset $\mathcal{E}\subset |\mathscr{L}(\mathcal{Z})|$ is a cylinder, and for any field extension $k'$ of $k$, the map $\overline{\mathcal{E}}(k') \to \overline{\mathcal{C}}(k')$ induced by $\pi$ is a bijection.* (b) *[\[theoremPartConstructibleRepresentableOfMCVF\]]{#theoremPartConstructibleRepresentableOfMCVF label="theoremPartConstructibleRepresentableOfMCVF"} The restriction of $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}: |\mathscr{L}(\mathcal{Z})| \to \mathbb{Z}\cup \{\infty\}$ to $\mathcal{E}$ is integer valued and constructible.* (c) *[\[theoremPartIntegralOfMCVF\]]{#theoremPartIntegralOfMCVF label="theoremPartIntegralOfMCVF"} There exists $N \in \mathbb{Z}_{\geq 0}$ such that for all $n \in \mathbb{Z}_{\geq N}$, $$\mathop{\mathrm{e}}(\theta_n(\mathcal{C}))\mathbb{L}^{-(n+1)\dim\mathcal{Z}} = \int_{\mathcal{E}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}}\mathrm{d}\mu_\mathcal{Z}.$$* **Remark 4**. 
Because $\pi: \mathcal{Z}\to \mathcal{X}$ is proper, $\mathcal{Z}$ has separated diagonal and thus its geometric stabilizers are schemes by [@stacks-project Tag 0B8D]. Therefore, because $\pi: \mathcal{Z}\to \mathcal{X}$ is representable, $\mathcal{Z}$ has affine geometric stabilizers, so the motivic measure $\mu_\mathcal{Z}$ is well defined [@SatrianoUsatine2 Definition 2.3]. Furthermore, part ([\[theoremPartConstructibleRepresentableOfMCVF\]](#theoremPartConstructibleRepresentableOfMCVF){reference-type="ref" reference="theoremPartConstructibleRepresentableOfMCVF"}) implies that the integral $\int_{\mathcal{E}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}}\mathrm{d}\mu_\mathcal{Z}$ is well defined [@SatrianoUsatine2 Definition 2.8]. Also note that $\pi(|\mathcal{Z}|)$ is quasi-compact, so $\mathcal{C}$ is small and thus each $\theta_n(\mathcal{C})$ is a quasi-compact locally constructible subset of $|\mathscr{L}_n(\mathcal{X})|$ by . In particular, $\mathop{\mathrm{e}}(\theta_n(\mathcal{C}))$ is well defined. The following result gives a representable resolution of singularities for stacks. This will be used throughout this paper. **Proposition 3**. *Let $\mathcal{W}$ be a reduced locally finite type Artin stack over $k$, and let $\mathcal{U}$ be the locus of $\mathcal{W}$ that is smooth over $k$. Then there exists a smooth Artin stack $\mathcal{Z}$ over $k$ and a morphism $\pi: \mathcal{Z}\to \mathcal{W}$ that is proper and representable by schemes such that $\pi^{-1}(\mathcal{U}) \to \mathcal{U}$ is an isomorphism and $|\pi^{-1}(\mathcal{U})|$ is dense in $|\mathcal{Z}|$.
In particular, if $\mathcal{W}$ is irreducible then $\mathcal{Z}$ is irreducible, and if $\mathcal{W}$ is finite type over $k$ then $\mathcal{Z}$ is finite type over $k$.* *Proof.* By considering a smooth cover of $\mathcal{W}$ by a scheme, this is a straightforward consequence of functorial resolution of singularities (i.e., a resolution algorithm that is functorial with respect to smooth morphisms) for the case where $\mathcal{W}$ is a scheme. Such algorithms exist, e.g., by [@AbramovichTemkinWlodarczyk Theorem 8.1.2]; see [@Wlodarczyk Theorem 1.1.6] for the not-necessarily-equidimensional case. Note also that a morphism being representable by schemes is a property that is smooth local on the target. ◻ Our next goal is to prove that, for the cylinders of interest, images of small cylinders remain small. We do so after a preliminary technical lemma. **Lemma 5**. *Let $\mathcal{X}$ be a smooth finite type Artin stack over $k$, let $Y$ be a finite type scheme over $k$, let $\pi: \mathcal{X}\to Y$ be a morphism, let $\mathcal{U}$ be a smooth open substack of $\mathcal{X}$ such that $\mathcal{U}\hookrightarrow \mathcal{X}\xrightarrow{\pi} Y$ is an open immersion, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, let $\ell \in \mathbb{Z}$ be such that $\mathcal{C}$ is the preimage along $\theta_\ell$ of a constructible subset of $|\mathscr{L}_\ell(\mathcal{X})|$, suppose there exists $h \in \mathbb{Z}$ such that $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/Y}}$ is equal to $h$ on all of $\mathcal{C}$, let $\mathscr{J}$ be the $(\dim Y)$th Fitting ideal of $\Omega^1_Y$, suppose there exists $e \in \mathbb{Z}$ such that $\mathop{\mathrm{ord}}_{\mathscr{J}} \circ \mathscr{L}(\pi)$ is equal to $e$ on all of $\mathcal{C}$, and assume that $\mathcal{X}$ and $Y$ are equidimensional.* *Let $k'$ be a field extension of $k$, let $\varphi \in \mathcal{C}(k')$, let $\alpha \in
\mathscr{L}(Y)(k')$, and let $n \in \mathbb{Z}_{\geq 0}$.* *If $n \geq \max(\ell + h, e)$ and $\theta_n(\mathscr{L}(\pi)(\varphi)) = \theta_n(\alpha)$, then there exists some $\varphi' \in \mathcal{C}(k')$ such that $\mathscr{L}(\pi)(\varphi') = \alpha$ and $\theta_{n-h}(\varphi) \cong \theta_{n-h}(\varphi')$.* *Proof.* Set $\varphi^{(0)} = \varphi$. We will find a sequence $\{\varphi^{(i)}\}_{i \in \mathbb{Z}_{\geq 0}} \subset \mathcal{C}(k')$ such that for all $i \in \mathbb{Z}_{\geq 0}$, we have $\theta_{n-h+i}(\varphi^{(i+1)}) \cong \theta_{n-h+i}(\varphi^{(i)})$ and $\theta_{n+i}(\mathscr{L}(\pi)(\varphi^{(i)})) = \theta_{n+i}(\alpha)$. We will do so inductively on $i$, so assume we already have $\varphi^{(0)}, \dots, \varphi^{(i)}$. We know $\varphi^{(i)} \in \mathcal{C}(k')$, $n + i \geq \max(\ell + h, e)$, and $\theta_{n+i}(\mathscr{L}(\pi)(\varphi^{(i)})) = \theta_{n+i}(\alpha)$, so [@SatrianoUsatine2 Proposition 5.2] implies that there exists $\varphi^{(i+1)} \in \mathscr{L}(\mathcal{X})(k')$ such that $\theta_{n-h+i}(\varphi^{(i+1)}) \cong \theta_{n-h+i}(\varphi^{(i)})$ and $\theta_{n+i+1}(\mathscr{L}(\pi)(\varphi^{(i+1)})) = \theta_{n+i+1}(\alpha)$ (for more details see the paragraph following [@SatrianoUsatine2 Proposition 5.2], and note that the proof of [@SatrianoUsatine2 Proposition 5.2] never uses the injectivity hypothesis on $\mathcal{C}(k')$ and still works if the hypothesis that $\mathcal{X}$ and $Y$ are irreducible is replaced with the assumption that they are equidimensional). Because $n-h+i \geq \ell$, we also have $\varphi^{(i+1)} \in \mathcal{C}(k')$, so we have the desired sequence. Because $\theta_{n-h+i}(\varphi^{(i+1)}) \cong \theta_{n-h+i}(\varphi^{(i)})$ for all $i \in \mathbb{Z}_{\geq 0}$, there exists some $\varphi' \in \mathscr{L}(\mathcal{X})(k')$ such that $\theta_{n-h+i}(\varphi') \cong \theta_{n-h+i}(\varphi^{(i+1)})$ for all $i \in \mathbb{Z}_{\geq 0}$. 
Thus for all $i \in \mathbb{Z}_{\geq 0}$, $$\begin{aligned} \theta_{n-h+i}(\mathscr{L}(\pi)(\varphi')) &= \mathscr{L}_{n-h+i}(\pi)(\theta_{n-h+i}(\varphi')) \\ &= \mathscr{L}_{n-h+i}(\pi)(\theta_{n-h+i}(\varphi^{(i+1)})) \\ &= \theta_{n-h+i}(\mathscr{L}(\pi)(\varphi^{(i+1)})) \\ &= \theta_{n-h+i}(\alpha),\end{aligned}$$ so $\mathscr{L}(\pi)(\varphi') = \alpha$. Also $$\theta_{n-h}(\varphi') \cong \theta_{n-h}(\varphi^{(1)}) \cong \theta_{n-h}(\varphi^{(0)}) = \theta_{n-h}(\varphi).$$ Because $n - h \geq \ell$, this also implies $\varphi' \in \mathcal{C}(k')$, and we are done. ◻ **Proposition 4**. *Let $\mathcal{X}$ and $\mathcal{Y}$ be locally finite type Artin stacks over $k$, let $\pi: \mathcal{X}\to \mathcal{Y}$ be a morphism, let $\mathcal{U}$ be a smooth open substack of $\mathcal{X}$ such that $\mathcal{U}\hookrightarrow \mathcal{X}\xrightarrow{\pi} \mathcal{Y}$ is an open immersion, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a small cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, and assume that $\mathcal{Y}$ is equidimensional. Then $\mathscr{L}(\pi)(\mathcal{C}) \subset |\mathscr{L}(\mathcal{Y})|$ is a small cylinder.* *Proof.* By replacing $\mathcal{X}$ with a quasi-compact open substack $\mathcal{X}_1$ that contains $\theta_0(\mathcal{C})$, replacing $\mathcal{U}$ with $\mathcal{U}\cap \mathcal{X}_1$, and replacing $\mathcal{Y}$ with a quasi-compact open substack $\mathcal{Y}_1$ such that $\pi(|\mathcal{X}_1|) \subset |\mathcal{Y}_1|$, we may assume that $\mathcal{X}$ and $\mathcal{Y}$ are finite type over $k$. Let $\eta: W \to \mathcal{Y}$ be a smooth cover by a finite type $k$-scheme $W$. Because $\mathcal{Y}$ is equidimensional and $W \to \mathcal{Y}$ is smooth, every connected component of $W$ is equidimensional, e.g., by [@stacks-project Lemmas 0DRP and 0DRQ]. 
Therefore, possibly replacing some of these connected components with their products with an affine space, we may assume that $W$ is equidimensional. Let $\rho: \mathcal{Z}\to W$ be the base change of $\pi$ along $W \xrightarrow{\eta} \mathcal{Y}$, and let $\mathcal{E}$ be the preimage of $\mathcal{C}$ in $|\mathscr{L}(\mathcal{Z})|$. By , $$\mathscr{L}(\eta)^{-1}(\mathscr{L}(\pi)(\mathcal{C})) = \mathscr{L}(\rho)(\mathcal{E}),$$ so by , we may replace $\pi, \mathcal{X}, \mathcal{Y}, \mathcal{U}, \mathcal{C}$ with $\rho, \mathcal{Z}, W, \mathcal{U}\times_\mathcal{Y}W, \mathcal{E}$ and thus assume that $\mathcal{Y}$ is a scheme $Y$. By replacing $\pi: \mathcal{X}\to Y$ with its reduction, we may assume that $\mathcal{X}$ and $Y$ are reduced. Then by , there exists a smooth finite type Artin stack $\mathcal{X}_2$ over $k$ and a morphism $\xi: \mathcal{X}_2 \to \mathcal{X}$ that is proper and representable by schemes such that $\xi^{-1}(\mathcal{U}) \to \mathcal{U}$ is an isomorphism. By replacing $\mathcal{X}_2$ with the union of its connected components that intersect $\xi^{-1}(\mathcal{U})$, we may assume that $\mathcal{X}_2$ is equidimensional. Because $\xi$ is proper and representable by schemes and an isomorphism above $\mathcal{U}$, the induced map $$|\mathscr{L}(\mathcal{X}_2)| \smallsetminus|\mathscr{L}(\mathcal{X}_2 \smallsetminus\xi^{-1}(\mathcal{U}))| \to |\mathscr{L}(\mathcal{X})| \smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$$ is surjective. In particular, $\mathscr{L}(\xi)(\mathscr{L}(\xi)^{-1}(\mathcal{C})) = \mathcal{C}$. Therefore, by replacing $\pi, \mathcal{X}, \mathcal{U}, \mathcal{C}$ with $\pi \circ \xi, \mathcal{X}_2, \xi^{-1}(\mathcal{U}), \mathscr{L}(\xi)^{-1}(\mathcal{C})$, we may assume that $\mathcal{X}$ is smooth and equidimensional. Let $\mathscr{J}$ be the $(\dim Y)$th Fitting ideal of $\Omega^1_Y$.
Because $\mathcal{U}\hookrightarrow \mathcal{X}\xrightarrow{\pi} Y$ is an open immersion and $\mathcal{U}$ is smooth, we have that $\mathop{\mathrm{ord}}_{\mathscr{J}} \circ \mathscr{L}(\pi)$ and $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/Y}}$ are constructible functions on $\mathcal{C}$ [@SatrianoUsatine2 Remarks 2.6 and 3.5]. These constructible functions take only finitely many values on $\mathcal{C}$ [@SatrianoUsatine2 Proposition 2.2], so by separately considering each stratum of $\mathcal{C}$ where $\mathop{\mathrm{ord}}_{\mathscr{J}} \circ \mathscr{L}(\pi)$ and $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/Y}}$ are both constant, we may assume there exist $h, e \in \mathbb{Z}$ such that $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/Y}}$ is equal to $h$ on all of $\mathcal{C}$ and $\mathop{\mathrm{ord}}_{\mathscr{J}} \circ \mathscr{L}(\pi)$ is equal to $e$ on all of $\mathcal{C}$. Because $\mathcal{C}$ is a cylinder, there exists $\ell \in \mathbb{Z}_{\geq 0}$ and a constructible subset $\mathcal{C}_\ell \subset |\mathscr{L}_\ell(\mathcal{X})|$ such that $\mathcal{C}= \theta_\ell^{-1}(\mathcal{C}_\ell)$. Set $\mathcal{C}_{\ell + h+e} = \theta_{\ell + h+e}(\mathcal{C}) \subset |\mathscr{L}_{\ell + h+e}(\mathcal{X})|$. By Chevalley's Theorem for Artin stacks [@HallRydh17 Theorem 5.1], $\mathscr{L}_{\ell + h+e}(\pi)(\mathcal{C}_{\ell + h+e})$ is a constructible subset of $|\mathscr{L}_{\ell+h+e}(Y)|$. We are thus done if we can show $$\mathscr{L}(\pi)(\mathcal{C}) = \theta_{\ell+h+e}^{-1}(\mathscr{L}_{\ell + h+e}(\pi)(\mathcal{C}_{\ell + h+e})).$$ Clearly $\mathscr{L}(\pi)(\mathcal{C}) \subset \theta_{\ell+h+e}^{-1}(\mathscr{L}_{\ell + h+e}(\pi)(\mathcal{C}_{\ell + h+e}))$. To show the other inclusion, let $\alpha \in (\theta_{\ell+h+e}^{-1}(\mathscr{L}_{\ell + h+e}(\pi)(\mathcal{C}_{\ell + h+e})))(k')$ for some field extension $k'$ of $k$.
Possibly after extending $k'$, we may assume there exists some $\varphi \in \mathcal{C}(k')$ such that $\theta_{\ell+h+e}(\mathscr{L}(\pi)(\varphi)) = \theta_{\ell+h+e}(\alpha)$. Then by , there exists some $\varphi' \in \mathcal{C}(k')$ such that $\mathscr{L}(\pi)(\varphi') = \alpha$, so $\alpha \in (\mathscr{L}(\pi)(\mathcal{C}))(k')$, completing our proof. ◻ We turn now to the main theorem of this section. *Proof of Theorem [Theorem 3](#theoremRepresentableChangeOfVariables){reference-type="ref" reference="theoremRepresentableChangeOfVariables"}.* By assumption $\pi$ is the composition of a proper morphism $\mathcal{Z}\to \mathcal{X}_1$ that is representable by schemes and an open immersion $\mathcal{X}_1 \to \mathcal{X}$. Let $\mathcal{X}_2$ be a quasi-compact open substack of $\mathcal{X}$ such that $\pi(|\mathcal{Z}|) \subset |\mathcal{X}_2|$. Replacing $\mathcal{X}$ with $\mathcal{X}_1 \cap \mathcal{X}_2$, we may assume that $\mathcal{X}$ is finite type over $k$ and that $\pi$ is a proper morphism that is representable by schemes. Because the theorem is obvious when $\mathcal{U}= \emptyset$, we may also assume that $\mathcal{U}$ is nonempty. In particular, $|\pi^{-1}(\mathcal{U})|$ is dense in $|\mathcal{Z}|$. Let $\mathcal{W}$ be the scheme theoretic image of the inclusion $\mathcal{U}\to \mathcal{X}$. Because $|\pi^{-1}(\mathcal{U})|$ is dense in $|\mathcal{Z}|$, we have $\pi(|\mathcal{Z}|) \subset |\mathcal{W}|$. Then because $\mathcal{Z}$ is reduced, $\pi$ factors as $\mathcal{Z}\xrightarrow{\rho} \mathcal{W}\hookrightarrow \mathcal{X}$. We will show that we can replace $\mathcal{X}, \pi$ with $\mathcal{W}, \rho$ and therefore assume that $|\mathcal{U}|$ is dense in $|\mathcal{X}|$. First, note that because $\pi$ is proper and representable by schemes and $\mathcal{W}\hookrightarrow \mathcal{X}$ is a closed immersion, the morphism $\rho$ is proper and representable by schemes. By , we have $\mathcal{C}\subset |\mathscr{L}(\mathcal{W})|$.
In particular, each $\theta_n(\mathcal{C}) \subset |\mathscr{L}_n(\mathcal{W})| \subset |\mathscr{L}_n(\mathcal{X})|$, so each class $\mathop{\mathrm{e}}(\theta_n(\mathcal{C}))$ does not depend on whether we consider $\theta_n(\mathcal{C})$ as a constructible subset of $|\mathscr{L}_n(\mathcal{W})|$ or $|\mathscr{L}_n(\mathcal{X})|$. Thus to show that we may replace $\mathcal{X}, \pi$ with $\mathcal{W}, \rho$, we just need to show that $$\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}} = \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{W}}}.$$ To do so, consider an arc $\varphi: \mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to \mathcal{Z}$. The exact triangle $$L\rho^* L_{\mathcal{W}/\mathcal{X}} \to L_{\mathcal{Z}/ \mathcal{X}} \to L_{\mathcal{Z}/ \mathcal{W}}$$ induces an exact triangle $$L\varphi^*L\rho^* L_{\mathcal{W}/\mathcal{X}} \to L\varphi^*L_{\mathcal{Z}/ \mathcal{X}} \to L\varphi^*L_{\mathcal{Z}/ \mathcal{W}},$$ so we obtain the exact sequence $$H^0(L\varphi^*L\rho^* L_{\mathcal{W}/\mathcal{X}}) \to H^0(L\varphi^*L_{\mathcal{Z}/ \mathcal{X}}) \to H^0(L\varphi^*L_{\mathcal{Z}/ \mathcal{W}}) \to H^1(L\varphi^*L\rho^* L_{\mathcal{W}/\mathcal{X}}).$$ Because $\mathcal{W}\to \mathcal{X}$ is a closed immersion, $$H^0(L\varphi^*L\rho^* L_{\mathcal{W}/\mathcal{X}}) = H^1(L\varphi^*L\rho^* L_{\mathcal{W}/\mathcal{X}}) = 0,$$ and therefore $$\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}(\varphi) = \dim_{k'} H^0(L\varphi^*L_{\mathcal{Z}/ \mathcal{X}}) = \dim_{k'} H^0(L\varphi^*L_{\mathcal{Z}/ \mathcal{W}}) = \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{W}}}(\varphi).$$ Thus we may replace $\mathcal{X}, \pi$ with $\mathcal{W}, \rho$, so we may assume that $|\mathcal{U}|$ is dense in $|\mathcal{X}|$. We will now prove each part of the theorem. 
(a) Because $\mathcal{C}$ is a cylinder and $\mathcal{X}$ is finite type over $k$, there exists some $n \in \mathbb{Z}_{\geq 0}$ and a constructible subset $\mathcal{C}_n \subset |\mathscr{L}_n(\mathcal{X})|$ such that $\mathcal{C}= \theta_n^{-1}(\mathcal{C}_n)$. Therefore $\mathcal{E}= \theta_n^{-1}(\mathscr{L}_n(\pi)^{-1}(\mathcal{C}_n))$ is a cylinder. Because $\mathcal{C}$ is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, the valuative criterion for proper morphisms that are representable by schemes implies that $\overline{\mathcal{E}}(k') \to \overline{\mathcal{C}}(k')$ is bijective for all field extensions $k'$ of $k$. (b) Noting that $\pi$ being representable implies $\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{Z}/\mathcal{X}}} = 0$, we get that the function $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}$ is integer valued and constructible on $\mathcal{E}$ by [@SatrianoUsatine2 Remark 3.5]. (c) For each $h \in \mathbb{Z}_{\geq 0}$, let $\mathcal{E}^{(h)} = \mathcal{E}\cap (\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}})^{-1}(h)$ and $\mathcal{C}^{(h)} = \mathscr{L}(\pi)(\mathcal{E}^{(h)})$. By part ([\[theoremPartConstructibleRepresentableOfMCVF\]](#theoremPartConstructibleRepresentableOfMCVF){reference-type="ref" reference="theoremPartConstructibleRepresentableOfMCVF"}), $\mathcal{E}= \bigsqcup_{h \in \mathbb{Z}_{\geq 0}} \mathcal{E}^{(h)}$, so by part ([\[theoremPartCylinderBijection\]](#theoremPartCylinderBijection){reference-type="ref" reference="theoremPartCylinderBijection"}), $\mathcal{C}= \bigsqcup_{h \in \mathbb{Z}_{\geq 0}} \mathcal{C}^{(h)}$ and $\mathcal{E}^{(h)} = \mathscr{L}(\pi)^{-1}(\mathcal{C}^{(h)})$ for all $h$. 
By parts ([\[theoremPartCylinderBijection\]](#theoremPartCylinderBijection){reference-type="ref" reference="theoremPartCylinderBijection"}) and ([\[theoremPartConstructibleRepresentableOfMCVF\]](#theoremPartConstructibleRepresentableOfMCVF){reference-type="ref" reference="theoremPartConstructibleRepresentableOfMCVF"}) each $\mathcal{E}^{(h)}$ is a cylinder. Because $\mathcal{Z}$ is irreducible and $|\mathcal{U}|$ is dense in $|\mathcal{X}|$, we have $\mathcal{X}$ is irreducible and therefore equidimensional [@stacks-project Lemma 0DRX]. Thus by , each $\mathcal{C}^{(h)}$ is a cylinder. By [@SatrianoUsatine2 Proposition 2.2], $\mathcal{E}^{(h)}$ is empty for all but finitely many $h$, so $$\int_{\mathcal{E}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}}\mathrm{d}\mu_\mathcal{Z}= \sum_{h \in \mathbb{Z}_{\geq 0}} \int_{\mathcal{E}^{(h)}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}}\mathrm{d}\mu_\mathcal{Z},$$ with the right-hand-side being a finite sum. Because $\mathcal{C}^{(h)}$ is empty for all but finitely many $h$, there exists some $N_0 \in \mathbb{Z}_{\geq 0}$ such that for all $h$, we have $\mathcal{C}^{(h)}$ is the preimage along $\theta_{N_0}$ of a constructible subset of $|\mathscr{L}_{N_0}(\mathcal{X})|$. Then for all $n \geq N_0$, $$\theta_n(\mathcal{C}) = \bigsqcup_{h \in \mathbb{Z}_{\geq 0}} \theta_n(\mathcal{C}^{(h)}),$$ so $$\mathop{\mathrm{e}}(\theta_n(\mathcal{C}))\mathbb{L}^{-(n+1)\dim\mathcal{Z}} = \sum_{h \in \mathbb{Z}_{\geq 0}} \mathop{\mathrm{e}}(\theta_n(\mathcal{C}^{(h)}))\mathbb{L}^{-(n+1)\dim\mathcal{Z}},$$ with the right-hand-side being a finite sum. Therefore by considering each $\mathcal{C}^{(h)}, \mathcal{E}^{(h)}$ in place of $\mathcal{C}, \mathcal{E}$, we may assume that there exists some $h \in \mathbb{Z}_{\geq 0}$ such that $\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}$ is equal to $h$ on all of $\mathcal{E}$. 
Because $\mathcal{Z}$ is smooth and irreducible and $\mathcal{E}$ is a cylinder, there exists some $N_1 \in \mathbb{Z}_{\geq 0}$ such that for all $n \geq N_1$, $$\int_{\mathcal{E}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}}\mathrm{d}\mu_\mathcal{Z}= \mathbb{L}^{-h} \mu_\mathcal{Z}(\mathcal{E}) = \mathbb{L}^{-h}\mathop{\mathrm{e}}(\theta_n(\mathcal{E}))\mathbb{L}^{-(n+1)\dim\mathcal{Z}}.$$ Thus it is sufficient to show that there exists some $N_2 \in \mathbb{Z}_{\geq 0}$ such that for all $n \geq N_2$, $$\mathop{\mathrm{e}}(\theta_n(\mathcal{E})) = \mathbb{L}^h \mathop{\mathrm{e}}(\theta_n(\mathcal{C})).$$ Using that $\mathcal{Z}$ is smooth and that $\mathcal{C}$ is a cylinder, it is straightforward to check that there exists some $N_3 \in \mathbb{Z}_{\geq 0}$ such that for all $n \geq N_3$, $$\mathscr{L}_n(\pi)^{-1}(\theta_n(\mathcal{C})) \subset \theta_n(\mathcal{E}).$$ Therefore by [@SatrianoUsatine1] it is sufficient to show that there exists some $N_4 \in \mathbb{Z}_{\geq 0}$ such that for any $n \geq N_4$, any field extension $k'$ of $k$, and any $\varphi_n \in (\theta_n(\mathcal{C}))(k')$, $$(\mathscr{L}_n(\pi)^{-1}(\varphi_n))_\mathrm{red}\cong \mathbb{A}_{k'}^h.$$ Let $\xi: X \to \mathcal{X}$ be a smooth cover by a finite type $k$-scheme $X$ such that for all field extensions $k'$ of $k$, the map $X(k') \to \overline{\mathcal{X}}(k')$ is surjective. Such a smooth cover exists, e.g., by [@Deshmukh Theorem 1.2(b)], and by [@stacks-project Lemmas 0DRP and 0DRQ] we may assume that $X$ is equidimensional. Let $\rho: Z \to X$ be the base change of $\pi: \mathcal{Z}\to \mathcal{X}$ along $\xi: X \to \mathcal{X}$. Because $\pi$ is representable by schemes, $Z$ is a scheme. Let $U = \xi^{-1}(\mathcal{U})$. Because $Z \to |\mathcal{Z}|$ is an open map and $|\pi^{-1}(\mathcal{U})|$ is dense in $|\mathcal{Z}|$, we have that $\rho^{-1}(U)$ is dense in $Z$. Then because $\rho^{-1}(U) \to U$ is an isomorphism, $Z$ is equidimensional.
Let $C = \mathscr{L}(\xi)^{-1}(\mathcal{C}) \subset \mathscr{L}(X)$, and let $E = \mathscr{L}(\rho)^{-1}(C) \subset \mathscr{L}(Z)$. Clearly $C$ and $E$ are cylinders, and by the valuative criterion applied to the proper map $\rho:Z \to X$, we have $E(k') \to C(k')$ is bijective for all field extensions $k'$ of $k$. Furthermore, because $X \to \mathcal{X}$ is flat, $L_{Z/X}$ is isomorphic to the derived pullback of $L_{\mathcal{Z}/\mathcal{X}}$ along $Z \to \mathcal{Z}$. In particular, $\mathop{\mathrm{ht}}^{(0)}_{L_{Z/X}}$ is equal to $h$ on all of $E$. By [@ChambertLoirNicaiseSebag Chapter 5 Theorem 3.2.2], noting that $\mathop{\mathrm{ht}}^{(0)}_{L_{Z/X}}$ coincides with the function denoted $\mathop{\mathrm{ordjac}}_{\rho}$ in [@ChambertLoirNicaiseSebag], there exists some $N_5 \in \mathbb{Z}_{\geq 0}$ such that for any $n \geq N_5$, any field extension $k'$ of $k$, and any $\psi_n \in (\theta_n(C))(k')$, $$(\mathscr{L}_n(\rho)^{-1}(\psi_n))_\mathrm{red}\cong \mathbb{A}_{k'}^h.$$ Let $N_\mathcal{C}$ be as in the conclusion of , and set $N_4 = \max(N_5, N_{\mathcal{C}})$. Now let $n \geq N_4$, let $k'$ be a field extension of $k$, and let $\varphi_n \in (\theta_n(\mathcal{C}))(k')$. By our choice of $N_\mathcal{C}$, there exists $\varphi \in \mathcal{C}(k')$ such that $\theta_n(\varphi) \cong \varphi_n$. Because $X(k') \to \overline{\mathcal{X}}(k')$ is surjective and $X \to \mathcal{X}$ is smooth, there exists some $\psi \in C(k')$ such that $\mathscr{L}(\xi)(\psi) \cong \varphi$. Then $$(\mathscr{L}_n(\pi)^{-1}(\varphi_n))_\mathrm{red}\cong (\mathscr{L}_n(\rho)^{-1}(\theta_n(\psi)))_\mathrm{red}\cong \mathbb{A}_{k'}^h,$$ completing the proof.  ◻ # Stacks and the Grothendieck group of varieties We prove $\Vert \mathop{\mathrm{e}}(\mathcal{Y}) \Vert = \exp(\dim\mathcal{Y})$ for our stacks of interest. We make use of this in the following section to prove properties of our measure $\mu_{\mathcal{X},d}$. **Lemma 6**. 
*Let $\mathcal{Y}$ be a finite type Artin stack over $k$, and let $\{\mathcal{Y}_i\}_{i \in I}$ be a finite collection of locally closed substacks of $\mathcal{Y}$ such that $|\mathcal{Y}|=\coprod_{i\in I}|\mathcal{Y}_i|$. Then $$\dim\mathcal{Y}= \max_{i \in I} \dim \mathcal{Y}_i.$$* *Proof.* Let $Y\to\mathcal{Y}$ be a smooth cover by a scheme, let $R=Y\times_\mathcal{Y}Y$, let $s,t\colon R\to Y$ be the two projection maps, and let $e\colon Y\to R$ be the canonical section. Let $Y_i:=Y\times_\mathcal{Y}\mathcal{Y}_i$, let $R_i:=R\times_\mathcal{Y}\mathcal{Y}_i$, and let $s_i,t_i\colon R_i\to Y_i$ and $e_i\colon Y_i\to R_i$ be the induced maps. Fix $i$, let $y'_i\in Y_i$, and let $y_i\in|\mathcal{Y}|$ be the image of $y'_i$. Since $Y_i\subset Y$ is locally closed, we have $\dim_{y'_i} Y_i\leq \dim_{y'_i} Y$ with equality if $Y_i$ is open in $Y$, i.e., if $\mathcal{Y}_i\subset\mathcal{Y}$ is open. Also note that $$(R_i)_{y'_i}:=R_i\times_{s_i,Y_i,y'_i} \mathop{\mathrm{Spec}}k(y'_i)=R\times_{s,Y,y'_i} \mathop{\mathrm{Spec}}k(y'_i)=:R_{y'_i}.$$ Then by definition, $$\begin{aligned} \dim_{y_i} \mathcal{Y}_i &= \dim_{y'_i} Y_i - \dim_{e(y'_i)} (R_i)_{y'_i}\\ & \leq \dim_{y'_i} Y - \dim_{e(y'_i)} R_{y'_i} = \dim_{y_i} \mathcal{Y}\end{aligned}$$ with equality if $\mathcal{Y}_i$ is open in $\mathcal{Y}$. The result follows since $\dim\mathcal{Y}=\sup_{y\in|\mathcal{Y}|}\{\dim_y\mathcal{Y}\}$, $\dim\mathcal{Y}_i=\sup_{y\in|\mathcal{Y}_i|}\{\dim_y\mathcal{Y}_i\}$, and for every irreducible component $\mathcal{Z}$ of $\mathcal{Y}$, there exists $i\in I$ with $\mathcal{Z}\cap\mathcal{Y}_i$ open in $\mathcal{Z}$. 
◻ Let $\mathop{\mathrm{EP}}: \widehat{\mathscr{M}}_k \to \mathbb{Q}\llparenthesis T^{-1} \rrparenthesis$ be the unique continuous ring homomorphism that takes each class $\mathop{\mathrm{e}}(Y)$ of a finite type $k$-scheme $Y$ to its Euler--Poincaré polynomial, and let $\mathop{\mathrm{val}}: \mathbb{Q}\llparenthesis T^{-1} \rrparenthesis \to \mathbb{Z}\cup \{\infty\}$ denote the valuation given by taking the order of vanishing of $T^{-1}$. **Lemma 7**. *Let $\mathcal{Y}$ be a finite type Artin stack over $k$ with affine geometric stabilizers. Then $$\mathop{\mathrm{val}}(\mathop{\mathrm{EP}}(\mathop{\mathrm{e}}(\mathcal{Y}))) = -\dim\mathcal{Y},$$ and the coefficient of $T^{\dim\mathcal{Y}}$ in $\mathop{\mathrm{EP}}(\mathop{\mathrm{e}}(\mathcal{Y}))$ is positive.* **Remark 5**. Although later we will only use the first part of this lemma, the positivity part is useful in its proof. Specifically, this positivity allows us to avoid cancellation during the reduction steps below. *Proof.* By and the fact that $\mathcal{Y}$ can be stratified into quotient stacks [@Kresch Proposition 3.5.9], we may assume that $\mathcal{Y}= [Y / G]$, where $G$ is a general linear group acting on a finite type $k$-scheme $Y$. Note that $$\dim[Y / G] = \dim Y - \dim G,$$ and because $G$ is a special group, $\mathop{\mathrm{e}}(G)$ is a unit and $$\mathop{\mathrm{e}}([Y / G]) = \mathop{\mathrm{e}}(Y) \mathop{\mathrm{e}}(G)^{-1}.$$ Therefore we are done if the lemma holds in the case where $\mathcal{Y}$ is a scheme (applied to the schemes $Y$ and $G$), but this case is well known, e.g., by using Nagata compactifications, Chow's lemma, and resolution of singularities to reduce to the case of smooth and projective varieties where Euler--Poincaré polynomials coincide with Poincaré polynomials. ◻ **Proposition 5**. *Let $\mathcal{Y}$ be a finite type Artin stack over $k$ with affine geometric stabilizers.
Then $$\Vert \mathop{\mathrm{e}}(\mathcal{Y}) \Vert = \exp(\dim\mathcal{Y}).$$* *Proof.* First we note that implies $\Vert \mathop{\mathrm{e}}(\mathcal{Y}) \Vert \geq \exp(\dim\mathcal{Y})$, so we only need to show $\Vert \mathop{\mathrm{e}}(\mathcal{Y}) \Vert \leq \exp(\dim\mathcal{Y})$. Then by , the fact that $\Vert \cdot \Vert$ satisfies the non-archimedean triangle inequality, and the fact that $\mathcal{Y}$ can be stratified into quotient stacks [@Kresch Proposition 3.5.9], we may assume that $\mathcal{Y}= [Y / G]$, where $G$ is a general linear group acting on a finite type $k$-scheme $Y$. Because $G$ is a special group, $\mathop{\mathrm{e}}(G)$ is a unit and $$\mathop{\mathrm{e}}(\mathcal{Y}) = \mathop{\mathrm{e}}(Y)\mathop{\mathrm{e}}(G)^{-1},$$ and by [@SatrianoUsatine1 Lemma 3.17], $$\Vert \mathop{\mathrm{e}}(G)^{-1} \Vert = \Vert \mathop{\mathrm{e}}(G) \Vert^{-1}.$$ Therefore $$\Vert \mathop{\mathrm{e}}(\mathcal{Y}) \Vert \leq \Vert \mathop{\mathrm{e}}(Y) \Vert \cdot \Vert \mathop{\mathrm{e}}(G) \Vert^{-1} \leq \exp(\dim Y) \cdot \exp(\dim G)^{-1} = \exp(\dim \mathcal{Y}),$$ where we note that $\Vert \mathop{\mathrm{e}}(Y) \Vert \leq \exp(\dim Y)$ by definition of $\Vert \cdot \Vert$ and $\Vert \mathop{\mathrm{e}}(G) \Vert \geq \exp(\dim G)$ by the beginning of this proof. ◻ The following consequence will be useful when studying the behaviour of $\mu_{\mathcal{X},d}$. **Proposition 6**. *Let $m \in \mathbb{Z}_{\geq 0}$ and for each $i \in \{1, \dots, m\}$, let $\{\mathcal{Y}_{i, n}\}_{n \in \mathbb{Z}_{\geq 0}}$ be a sequence of finite type Artin stacks over $k$ with affine geometric stabilizers such that $\{\mathop{\mathrm{e}}(\mathcal{Y}_{i,n})\}_{n \in \mathbb{Z}_{\geq 0}}$ converges as $n \to \infty$. 
Then $$\Vert \sum_{i = 1}^m \lim_{n \to \infty} \mathop{\mathrm{e}}(\mathcal{Y}_{i,n}) \Vert = \max_{1\leq i\leq m} \Vert \lim_{n \to \infty} \mathop{\mathrm{e}}(\mathcal{Y}_{i,n}) \Vert.$$* *Proof.* The proposition follows from the continuity of $\Vert \cdot \Vert: \widehat{\mathscr{M}}_k \to \mathbb{R}_{\geq 0}$ and the fact that for all $n \in \mathbb{Z}_{\geq 0}$, $$\begin{aligned} \Vert \sum_{i=1}^m \mathop{\mathrm{e}}(\mathcal{Y}_{i, n}) \Vert &= \Vert \mathop{\mathrm{e}}( \mathcal{Y}_{1, n} \sqcup \dots \sqcup \mathcal{Y}_{m, n}) \Vert = \exp(\dim(\mathcal{Y}_{1, n} \sqcup \dots \sqcup \mathcal{Y}_{m, n}))\\ &= \max_{1\leq i\leq m}\, \exp(\dim \mathcal{Y}_{i,n}) = \max_{1\leq i \leq m} \Vert \mathop{\mathrm{e}}(\mathcal{Y}_{i,n}) \Vert,\end{aligned}$$ where the second and final equalities are due to and the third equality is due to (or the definition of dimension). ◻ **Remark 6**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, let $d \in \mathbb{Z}$, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a $d$-convergent cylinder, and let $m \in \mathbb{Z}$. Then there exists a sequence $\{\mathcal{Y}_n\}_{n \in \mathbb{Z}_{\geq 0}}$ of finite type Artin stacks over $k$ with affine geometric stabilizers such that $\mathbb{L}^m\mu_{\mathcal{X}, d}(\mathcal{C}) = \lim_{n \to \infty} \mathop{\mathrm{e}}(\mathcal{Y}_n)$. Specifically, one may take $\mathcal{Y}_n=\mathbb{A}^m\times\theta_n(\mathcal{C})\times B\mathbb{G}_a^{(n+1)d}$ where, if $m<0$, we define $\mathbb{A}^m$ as $B\mathbb{G}_a^{-m}$, and, if $d<0$, we define $B\mathbb{G}_a^{(n+1)d}$ as $\mathbb{A}^{-(n+1)d}$. # $d$-convergence and $d$-measurable sets The following notion of convergence plays an integral role throughout the remainder of the paper. **Definition 3**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder, and let $d \in \mathbb{Z}$.
We call $\mathcal{C}$ *$d$-convergent* if $\mathcal{C}$ is small and the sequence $$\{\mathop{\mathrm{e}}(\theta_n(\mathcal{C})) \mathbb{L}^{-(n+1)d}\}_{n \in \mathbb{Z}_{\geq 0}}$$ converges. In that case, we set $$\mu_{\mathcal{X}, d}(\mathcal{C}) = \lim_{n \to \infty} \mathop{\mathrm{e}}(\theta_n(\mathcal{C})) \mathbb{L}^{-(n+1)d}.$$ **Remark 7**. If $\mathcal{X}$ is equidimensional, finite type, and either smooth or a global quotient, then any cylinder $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ is $\dim \mathcal{X}$-convergent by [@SatrianoUsatine1 Theorems 3.9 and 3.33] and $\mu_{\mathcal{X},\dim \mathcal{X}}(\mathcal{C}) = \mu_\mathcal{X}(\mathcal{C})$. **Proposition 7**. *Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers and separated diagonal, let $\mathcal{U}$ be an open substack of $\mathcal{X}$ that is smooth and finite type over $k$ and irreducible, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a small cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$. Then $\mathcal{C}$ is $\dim\mathcal{U}$-convergent.* *Proof.* By replacing $\mathcal{X}$ with a quasi-compact open substack that contains $\theta_0(\mathcal{C})$ and replacing $\mathcal{U}$ with its intersection with that quasi-compact open substack, we may assume that $\mathcal{X}$ is finite type over $k$. Then the scheme theoretic image $\mathcal{X}'$ of $\mathcal{U}\hookrightarrow \mathcal{X}$ is integral and finite type over $k$, so by there exists an irreducible smooth finite type Artin stack $\mathcal{Z}$ over $k$ and a morphism $\pi': \mathcal{Z}\to \mathcal{X}'$ that is proper and representable by schemes and such that $\pi'^{-1}(\mathcal{U}) \to \mathcal{U}$ is an isomorphism. Considering the composition $\pi: \mathcal{Z}\xrightarrow{\pi'} \mathcal{X}' \hookrightarrow \mathcal{X}$ and noting , we see that $\mathcal{X}, \mathcal{Z}, \pi, \mathcal{U}, \mathcal{C}$ satisfy the hypotheses of . 
Therefore, noting that $\dim\mathcal{U}= \dim\mathcal{Z}$, parts ([\[theoremPartConstructibleRepresentableOfMCVF\]](#theoremPartConstructibleRepresentableOfMCVF){reference-type="ref" reference="theoremPartConstructibleRepresentableOfMCVF"}) and ([\[theoremPartIntegralOfMCVF\]](#theoremPartIntegralOfMCVF){reference-type="ref" reference="theoremPartIntegralOfMCVF"}) imply that $\mathcal{C}$ is $\dim\mathcal{U}$-convergent. ◻ Having defined $d$-convergence, we may now introduce the corresponding notion of $d$-measurability. **Definition 4**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, and let $d \in \mathbb{Z}$. Let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$, let $\varepsilon \in \mathbb{R}_{>0}$, let $\mathcal{C}^{(0)} \subset |\mathscr{L}(\mathcal{X})|$ be a $d$-convergent cylinder, and let $\{\mathcal{C}^{(i)}\}_{i \in I}$ be a collection of $d$-convergent cylinders in $|\mathscr{L}(\mathcal{X})|$. We call $(\mathcal{C}^{(0)}, \{\mathcal{C}^{(i)}\}_{i \in I})$ a *$d$-convergent cylindrical $\varepsilon$-approximation* of $\mathcal{C}$ if $$(\mathcal{C}\cup \mathcal{C}^{(0)}) \smallsetminus(\mathcal{C}\cap \mathcal{C}^{(0)}) \subset \bigcup_{i \in I} \mathcal{C}^{(i)},$$ and $\Vert \mu_{\mathcal{X},d}(\mathcal{C}^{(i)}) \Vert < \varepsilon$ for all $i$. **Definition 5**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, let $d \in \mathbb{Z}$, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$. We say $\mathcal{C}$ is *$d$-measurable* if for any $\varepsilon > 0$, the set $\mathcal{C}$ has a $d$-convergent cylindrical $\varepsilon$-approximation. 
In that case we let $\mu_{\mathcal{X},d}(\mathcal{C}) \in \widehat{\mathscr{M}}_k$ denote the unique element of $\widehat{\mathscr{M}}_k$ such that $$\Vert \mu_{\mathcal{X},d}(\mathcal{C}) - \mu_{\mathcal{X},d}(\mathcal{C}^{(0)}) \Vert < \varepsilon$$ for any $d$-convergent cylindrical $\varepsilon$-approximation $(\mathcal{C}^{(0)}, \{\mathcal{C}^{(i)}\}_{i \in I})$ of $\mathcal{C}$. **Remark 8**. When $\mathcal{C}$ is $d$-measurable, such an element $\mu_{\mathcal{X}, d}(\mathcal{C})$ exists by standard methods. See, e.g., [@ChambertLoirNicaiseSebag Chapter 6, Theorem 3.3.2] for the case where $\mathcal{X}$ is a finite type equidimensional scheme and $d = \dim\mathcal{X}$, and note that the exact same argument works in our setting. **Remark 9**. Suppose $\mathcal{X}$ is equidimensional, finite type, and a global quotient. Then $\mathcal{C}$ is $\dim\mathcal{X}$-measurable if and only if $\mathcal{C}$ is measurable in the sense of [@SatrianoUsatine1 Definition 3.13], and in that case $\mu_{\mathcal{X},\dim \mathcal{X}}(\mathcal{C}) = \mu_\mathcal{X}(\mathcal{C})$. In particular, these definitions are consistent with the usual definitions in the case where $\mathcal{X}$ is an equidimensional finite type scheme. Our notion of $d$-measurability allows us to speak of $d$-measurable functions. **Definition 6**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, let $d \in \mathbb{Z}$, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$, and let $f: \mathcal{C}\to \mathbb{Z}\cup \{\infty\}$. We say $f$ is $d$-measurable if for all $n \in \mathbb{Z}$, the set $f^{-1}(n)$ is a $d$-measurable subset of $|\mathscr{L}(\mathcal{X})|$. **Definition 7**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, let $d \in \mathbb{Z}$, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$, and let $f: \mathcal{C}\to \mathbb{Z}\cup \{\infty\}$.
We say $\mathbb{L}^{f}$ is $d$-integrable if $f$ is $d$-measurable and the series $\sum_{n \in \mathbb{Z}} \mathbb{L}^n \mu_{\mathcal{X}, d}(f^{-1}(n))$ converges. In that case we set $$\int_\mathcal{C}\mathbb{L}^f \mathrm{d}\mu_{\mathcal{X},d} = \sum_{n \in \mathbb{Z}} \mathbb{L}^n \mu_{\mathcal{X}, d}(f^{-1}(n)).$$ **Remark 10**. If $f$ is a function defined on a larger subset than $\mathcal{C}$, we may say $f$ is $d$-measurable (resp. $\mathbb{L}^{f}$ is $d$-integrable) on $\mathcal{C}$ when we mean that the corresponding condition is satisfied by the restriction of $f$ to $\mathcal{C}$. **Notation 2**. Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$, let $d \in \mathbb{Z}$, let $f: \mathcal{C}\to \mathbb{Z}$ be a function that takes only finitely many values, and assume that for all $n \in \mathbb{Z}$ the set $f^{-1}(n)$ is a $d$-convergent cylinder in $|\mathscr{L}(\mathcal{X})|$. Then we will set $$\int_\mathcal{C}\mathbb{L}^f \mathrm{d}\mu_{\mathcal{X}, d} = \sum_{n \in \mathbb{Z}} \mathbb{L}^n \mu_{\mathcal{X}, d}( f^{-1}(n) ).$$ For the next two results, fix $\mathcal{X}$ a locally finite type Artin stack over $k$ with affine geometric stabilizers, and fix $d \in \mathbb{Z}$. **Lemma 8**. *Let $\mathcal{C}, \mathcal{D}\subset |\mathscr{L}(\mathcal{X})|$ be $d$-convergent cylinders. If $\mathcal{C}\subset \mathcal{D}$, then $$\Vert \mu_{\mathcal{X},d}(\mathcal{C}) \Vert \leq \Vert \mu_{\mathcal{X},d}(\mathcal{D}) \Vert.$$* *Proof.* It is straightforward to check that $\mathcal{D}\smallsetminus\mathcal{C}$ is a $d$-convergent cylinder and that $\mu_{\mathcal{X}, d}(\mathcal{D}) = \mu_{\mathcal{X},d}(\mathcal{C}) + \mu_{\mathcal{X},d}(\mathcal{D}\smallsetminus\mathcal{C})$. The lemma then follows from and . ◻ **Proposition 8**. *Let $\{\mathcal{C}^{(n)}\}_{n \in \mathbb{Z}_{\geq 0}}$ be a sequence of pairwise disjoint $d$-measurable subsets of $|\mathscr{L}(\mathcal{X})|$.
Then $\lim_{n \to \infty} \mu_{\mathcal{X}, d}( \mathcal{C}^{(n)}) = 0$ if and only if $\bigcup_{n \in \mathbb{Z}_{\geq 0}} \mathcal{C}^{(n)}$ is $d$-measurable, and in that case, $$\mu_{\mathcal{X},d}\left(\bigcup_{n \in \mathbb{Z}_{\geq 0}} \mathcal{C}^{(n)}\right) = \sum_{n \in \mathbb{Z}_{\geq 0}} \mu_{\mathcal{X},d}(\mathcal{C}^{(n)}).$$* *Proof.* A version of this proposition including the special case where $\mathcal{X}$ is a finite type scheme over $k$ is proved in [@ChambertLoirNicaiseSebag Chapter 6 Proposition 3.4.3]. Using and , the same proof works here. ◻ # Relative height functions As mentioned in the introduction, our motivic change of variables formula is controlled by the following relative height function which we introduce. **Notation 3**. If $k'$ is a field extension of $k$ and $M$ is a module over $k'\llbracket t \rrbracket$, we will let $M_\mathrm{tor}$ denote the torsion submodule of $M$. **Definition 8**. Let $\pi: \mathcal{X}\to \mathcal{Y}$ be a morphism of locally finite type Artin stacks over $k$. Let $k'$ be a field extension of $k$, and let $\varphi \in \mathscr{L}(\mathcal{X})(k')$. The canonical morphism $L\pi^* L_\mathcal{Y}\to L_\mathcal{X}$ induces a morphism $H^0(L\varphi^*L\pi^*L_\mathcal{Y})_\mathrm{tor}\to H^0(L\varphi^* L_\mathcal{X})_\mathrm{tor}$ of $k'\llbracket t \rrbracket$-modules, where by slight abuse of notation we also let $\varphi$ denote the corresponding map $\mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to \mathcal{X}$. 
If $\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi) \neq \infty$, set $$\begin{aligned} \mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}(\varphi) = &\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi) - \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi)\\ &- \dim_{k'}\mathop{\mathrm{coker}}\left(H^0(L\varphi^*L\pi^*L_\mathcal{Y})_\mathrm{tor}\to H^0(L\varphi^* L_\mathcal{X})_\mathrm{tor}\right),\end{aligned}$$ and otherwise set $\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}(\varphi) = \infty$. This induces a map $$\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}: |\mathscr{L}(\mathcal{X})| \to \mathbb{Z}\cup \{\infty\}$$ that we call the *height function of $\mathcal{X}$ over $\mathcal{Y}$*. **Remark 11**. It is straightforward to check that if $\mathcal{U}$ is an open substack of $\mathcal{X}$, then the restriction of $\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}$ to $|\mathscr{L}(\mathcal{U})|$ coincides with $\mathop{\mathrm{ht}}_{\mathcal{U}/\mathcal{Y}}$. We begin by showing that $\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}$ is invariant under finite field extensions, and hence does indeed define a function on $|\mathscr{L}(\mathcal{X})|$. **Lemma 9**. *With notation as in , $\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}$ is a well-defined function on $|\mathscr{L}(\mathcal{X})|$.* *Proof.* Since $\mathop{\mathrm{ht}}^{(i)}_{L_{\mathcal{X}/\mathcal{Y}}}$ is well-defined, it suffices to prove that the quantity $$\dim_{k'}\mathop{\mathrm{coker}}\left(H^0(L\varphi^*L\pi^*L_\mathcal{Y})_\mathrm{tor}\to H^0(L\varphi^* L_\mathcal{X})_\mathrm{tor}\right)$$ is invariant under finite extensions of $k'$ (which are necessarily étale since we are working in characteristic $0$). For this, let $k''$ be a finite extension of $k'$, let $R' = k'\llbracket t \rrbracket$ and $R'' := k''\llbracket t \rrbracket = R' \otimes_{k'} k''$. Let $K' = k'\llparenthesis t\rrparenthesis$ and $K'' = k''\llparenthesis t\rrparenthesis = K' \otimes_{k'} k''$.
Now, for any $R'$-module $M$, we have an exact sequence $$0\to M_{\mathrm{tor}}\to M\to M\otimes_{R'}K'\to 0.$$ Tensoring with $R''$ (which is étale over $R'$) shows that $$M_{\mathrm{tor}}\otimes_{R'}R''=(M\otimes_{R'}R'')_{\mathrm{tor}}.$$ Hence, given a map $M\to N$ of $R'$-modules and letting $Q$ be the cokernel of $M_{\mathrm{tor}}\to N_{\mathrm{tor}}$, we see $$Q\otimes_{R'}R''=\mathop{\mathrm{coker}}((M\otimes_{R'}R'')_{\mathrm{tor}}\to (N\otimes_{R'}R'')_{\mathrm{tor}}).$$ Applying this to the case $M=H^0(L\varphi^*L\pi^*L_\mathcal{Y})$ and $N=H^0(L\varphi^* L_\mathcal{X})$ proves the result. ◻ The following lemma allows us to understand the relative height function when the source is smooth. **Lemma 10**. *Let $\mathcal{Z}$ be a smooth Artin stack over $k$, let $k'$ be a field extension of $k$, and let $\varphi: \mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to \mathcal{Z}$ be a map. Then $H^0(L\varphi^* L_\mathcal{Z})$ is a torsion-free $k'\llbracket t \rrbracket$-module and $H^i(L\varphi^*L_\mathcal{Z}) = 0$ for all $i \neq 0, 1$.* *Proof.* Let $\xi: Z \to \mathcal{Z}$ be a smooth cover by a scheme. Possibly after extending $k'$, we may assume there exists some $\psi: \mathop{\mathrm{Spec}}(k'\llbracket t \rrbracket) \to Z$ such that $\xi \circ \psi \cong \varphi$. Applying $L\psi^*$ to the exact triangle $$L\xi^* L_\mathcal{Z}\to \Omega^1_Z \to \Omega^1_{Z/\mathcal{Z}},$$ we have an exact triangle $$L\varphi^*L_\mathcal{Z}\to \psi^* \Omega^1_Z \to \psi^* \Omega^1_{Z/\mathcal{Z}}.$$ The desired result then follows immediately from the fact that $\psi^* \Omega^1_Z$ is free. ◻ **Corollary 1**. *Let $\pi: \mathcal{X}\to \mathcal{Y}$ be a morphism of locally finite type Artin stacks over $k$. 
If $\mathcal{X}$ is smooth, then $$\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}} = \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Y}}} - \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Y}}}.$$* *Proof.* This is immediate from as $H^0(L\varphi^* L_\mathcal{X})_\mathrm{tor}$ vanishes. ◻ We end this section by showing that the relative height function is additive under compositions of morphisms. **Proposition 9**. *Let $\mathcal{X}\xrightarrow{f}\mathcal{Y}\xrightarrow{g}\mathcal{Z}$ be morphisms of stacks. Assume there is an open substack $\mathcal{V}\subset\mathcal{X}$ such that $\mathcal{V}\to\mathcal{Y}$ and $\mathcal{V}\to\mathcal{Z}$ are open immersions. If $\mathcal{V}$ is smooth, then for all $\varphi\in\mathscr{L}(\mathcal{X})\smallsetminus\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{V})$, we have $$\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Z}}(\varphi)=\mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}(\varphi)+\mathop{\mathrm{ht}}_{\mathcal{Y}/\mathcal{Z}}(f\varphi).$$* *Proof.* We begin by reducing to the case where $\mathcal{X}$ is smooth. When $\mathcal{X}$ is not smooth, by , we may choose a representable resolution of singularities $\rho\colon\widetilde{\mathcal{X}}\to\mathcal{X}$ with $\rho$ an isomorphism over $\mathcal{V}$. Since $\rho$ is proper and representable and $\varphi$ does not factor through $\mathcal{X}\smallsetminus\mathcal{V}$, the valuative criterion shows that $\varphi$ lifts to an arc $\widetilde{\varphi}$ of $\widetilde{\mathcal{X}}$.
We then have $$\begin{aligned} \mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Y}}(\varphi) & = \mathop{\mathrm{ht}}_{\widetilde{\mathcal{X}}/\mathcal{Y}}(\widetilde{\varphi}) - \mathop{\mathrm{ht}}_{\widetilde{\mathcal{X}}/\mathcal{X}}(\widetilde{\varphi})\\ \mathop{\mathrm{ht}}_{\mathcal{X}/\mathcal{Z}}(\varphi) & = \mathop{\mathrm{ht}}_{\widetilde{\mathcal{X}}/\mathcal{Z}}(\widetilde{\varphi}) - \mathop{\mathrm{ht}}_{\widetilde{\mathcal{X}}/\mathcal{X}}(\widetilde{\varphi})\\ \mathop{\mathrm{ht}}_{\mathcal{Y}/\mathcal{Z}}(f\varphi) & = \mathop{\mathrm{ht}}_{\widetilde{\mathcal{X}}/\mathcal{Z}}(\widetilde{\varphi}) - \mathop{\mathrm{ht}}_{\widetilde{\mathcal{X}}/\mathcal{Y}}(\widetilde{\varphi})\end{aligned}$$ from which the result follows. It remains to handle the case when $\mathcal{X}$ is smooth. By , we must show $$-\mathop{\mathrm{ht}}_{\mathcal{Y}/ \mathcal{Z}} (\mathscr{L}(f)(\varphi)) = -\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Z}}}(\varphi) + \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Z}}}(\varphi) + \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi)-\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi).$$ Consider the exact triangle $$Lf^*L_{\mathcal{Y}/ \mathcal{Z}} \to L_{\mathcal{X}/ \mathcal{Z}} \to L_{\mathcal{X}/ \mathcal{Y}}.$$ Pulling back along $\varphi$ we get the exact triangle $$L\varphi^*Lf^*L_{\mathcal{Y}/\mathcal{Z}} \to L\varphi^*L_{\mathcal{X}/\mathcal{Z}} \to L\varphi^* L_{\mathcal{X}/\mathcal{Y}},$$ which gives the exact sequence $$\begin{aligned} &H^{-1}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}}) \to H^{-1}(L\varphi^* L_{\mathcal{X}/\mathcal{Y}}) \\ &\to H^0(L\varphi^*Lf^*L_{\mathcal{Y}/\mathcal{Z}}) \to H^{0}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}}) \to H^{0}(L\varphi^* L_{\mathcal{X}/\mathcal{Y}})\\ &\to H^1(L\varphi^*Lf^*L_{\mathcal{Y}/\mathcal{Z}}) \to H^{1}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}}) \to H^{1}(L\varphi^*L_{\mathcal{X}/\mathcal{Y}}) \to 0.\end{aligned}$$ The restrictions of 
$Lf^*L_{\mathcal{Y}/ \mathcal{Z}}, L_{\mathcal{X}/ \mathcal{Z}}$, and $L_{\mathcal{X}/ \mathcal{Y}}$ to $\mathcal{V}$ are trivial and $\varphi$ is not in $\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{V})$, so these $k'\llbracket t \rrbracket$-modules are finite dimensional over $k'$. Thus $$\begin{aligned} 0 = &\dim_{k'}\mathop{\mathrm{coker}}\left( H^{-1}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}}) \to H^{-1}(L\varphi^* L_{\mathcal{X}/\mathcal{Y}}) \right)\\ &- \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Y}/\mathcal{Z}}}(\mathscr{L}(f)(\varphi)) + \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Z}}}(\varphi) - \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi)\\ &+\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{Y}/\mathcal{Z}}}(\mathscr{L}(f)(\varphi)) - \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Z}}}(\varphi)+\mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{X}/\mathcal{Y}}}(\varphi).\end{aligned}$$ We are therefore done if we can prove that $$\begin{aligned} \dim_{k'}&\mathop{\mathrm{coker}}\left( H^{-1}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}}) \to H^{-1}(L\varphi^* L_{\mathcal{X}/\mathcal{Y}}) \right) \\ &= \dim_{k'}\mathop{\mathrm{coker}}\left(H^0(L\varphi^*Lf^*Lg^*L_\mathcal{Z})_\mathrm{tor}\to H^0(L\varphi^*Lf^* L_\mathcal{Y})_\mathrm{tor}\right).\end{aligned}$$ Consider the commuting diagram whose rows are exact triangles. Noting that $H^{-1}(L\varphi^*L_\mathcal{X}) = 0$ by , pulling back along $\varphi$ and taking cohomology gives the commuting diagram whose rows are exact. Recall that because $\varphi$ is not in $\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{V})$, the $k'\llbracket t \rrbracket$-modules $H^{-1}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}})$ and $H^{-1}(L\varphi^* L_{\mathcal{X}/\mathcal{Y}})$ are torsion. Also the $k'\llbracket t \rrbracket$-module $H^0 (L\varphi^* L_\mathcal{X})$ is torsion-free by . Thus the middle horizontal arrows are just the inclusions of the respective torsion submodules. 
The commutativity of the diagram then implies that the induced map $$\begin{aligned} \mathop{\mathrm{coker}}&\left( H^{-1}(L\varphi^*L_{\mathcal{X}/\mathcal{Z}}) \to H^{-1}(L\varphi^* L_{\mathcal{X}/\mathcal{Y}}) \right)\\ &\xrightarrow{\simeq} \mathop{\mathrm{coker}}\left(H^0(L\varphi^*Lf^*Lg^*L_\mathcal{Z})_\mathrm{tor}\to H^0(L\varphi^*Lf^* L_\mathcal{Y})_\mathrm{tor}\right),\end{aligned}$$ is an isomorphism. ◻ # A variant on Our goal in this section is to prove the following motivic change of variables formula. **Theorem 4**. *Let $\mathcal{X}$ be a locally finite type Artin stack over $k$ with affine geometric stabilizers and separated diagonal, let $Y$ be an irreducible finite type scheme over $k$, let $\pi: \mathcal{X}\to Y$ be a morphism, let $\mathcal{U}$ be a smooth open substack of $\mathcal{X}$ such that $\mathcal{U}\hookrightarrow \mathcal{X}\xrightarrow{\pi} Y$ is an open immersion, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, and let $D \subset \mathscr{L}(Y)$ be a cylinder such that $\pi$ induces a bijection $\overline{\mathcal{C}}(k') \to D(k')$ for every field extension $k'$ of $k$.* (a) *The restriction of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ to $\mathcal{C}$ is integer valued and takes only finitely many values.* (b) *For all $n \in \mathbb{Z}$, the set $\mathop{\mathrm{ht}}_{\mathcal{X}/ Y}^{-1}(n) \cap \mathcal{C}$ is a $\dim Y$-convergent cylinder in $|\mathscr{L}(\mathcal{X})|$.* (c) *We have an equality $$\mu_Y(D) = \int_\mathcal{C}\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, \dim Y}.$$* We turn to the proof after a preliminary result. **Lemma 11**. 
*Let $\mathcal{X}$ and $\mathcal{Y}$ be locally finite type Artin stacks over $k$, let $\pi: \mathcal{X}\to \mathcal{Y}$ be a morphism, let $\mathcal{U}$ be a smooth open substack of $\mathcal{X}$ such that $\mathcal{U}\hookrightarrow \mathcal{X}\xrightarrow{\pi} \mathcal{Y}$ is an open immersion, let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, let $\mathcal{D}\subset |\mathscr{L}(\mathcal{Y})|$ be a cylinder, assume that $\mathcal{Y}$ is equidimensional, and assume that $\pi$ induces a bijection $\overline{\mathcal{C}}(k') \to \overline{\mathcal{D}}(k')$ for every field extension $k'$ of $k$. Then $\mathcal{C}$ is small if and only if $\mathcal{D}$ is small.* *Proof.* Because $\theta_0(\mathcal{D})$ is the image of $\theta_0(\mathcal{C})$ under the continuous map $|\mathcal{X}| \to |\mathcal{Y}|$, it is clear that if $\mathcal{C}$ is small then $\mathcal{D}$ is small. For the other direction, assume that $\mathcal{D}$ is small. For every $\varphi \in \mathcal{C}$ choose a quasi-compact open substack $\mathcal{X}_\varphi$ of $\mathcal{X}$ such that $\theta_0(\varphi) \in |\mathcal{X}_\varphi|$, and set $\mathcal{C}_\varphi = \mathcal{C}\cap \theta_0^{-1}(|\mathcal{X}_\varphi|)$ and $\mathcal{D}_\varphi = \mathscr{L}(\pi)(\mathcal{C}_\varphi)$. Then $\mathcal{C}= \bigcup_{\varphi \in \mathcal{C}} \mathcal{C}_\varphi$ and each $\mathcal{C}_\varphi$ is a small cylinder that is disjoint from $|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$. So, $\mathcal{D}= \bigcup_{\varphi \in \mathcal{C}} \mathcal{D}_\varphi$ as $\overline{\mathcal{C}}(k') \to \overline{\mathcal{D}}(k')$ is surjective, and each $\mathcal{D}_\varphi$ is a small cylinder by . By , there exist $\varphi_1, \dots, \varphi_r \in \mathcal{C}$ such that $\mathcal{D}= \mathcal{D}_{\varphi_1} \cup \dots \cup \mathcal{D}_{\varphi_r}$. 
Because $\overline{\mathcal{C}}(k') \to \overline{\mathcal{D}}(k')$ is injective for every field extension $k'$ of $k$, it is straightforward to check that $\mathcal{C}= \mathcal{C}_{\varphi_1} \cup \dots \cup \mathcal{C}_{\varphi_r}$, which is small. ◻ *Proof of .* Because the theorem is trivially true when $\mathcal{U}$ is empty, we will assume that $\mathcal{U}$ is nonempty. In particular $\mathcal{U}$ is integral and $\dim\mathcal{U}= \dim Y$. Because $Y$ is finite type, $D$ is small, so $\mathcal{C}$ is small by . Because $Y$ is finite type and $\mathcal{U}$ is isomorphic to an open subscheme of $Y$, we have $\mathcal{U}$ is quasi-compact. Replacing $\mathcal{X}$ with a quasi-compact open substack that contains $\theta_0(\mathcal{C})$ and $\mathcal{U}$ and noting , we may assume that $\mathcal{X}$ is finite type over $k$. Let $\mathcal{X}'$ be the scheme theoretic image of $\mathcal{U}\hookrightarrow \mathcal{X}$. Then $\mathcal{X}'$ is integral and finite type over $k$, so by there exists an irreducible smooth finite type Artin stack $\mathcal{Z}$ over $k$ and a morphism $\rho': \mathcal{Z}\to \mathcal{X}'$ that is proper and representable by schemes and such that $\rho'^{-1}(\mathcal{U}) \to \mathcal{U}$ is an isomorphism. Let $\rho: \mathcal{Z}\to \mathcal{X}$ be the composition $\mathcal{Z}\xrightarrow{\rho'} \mathcal{X}' \hookrightarrow \mathcal{X}$, and set $\mathcal{E}= \mathscr{L}(\rho)^{-1}(\mathcal{C})$. Noting , we have $\mathcal{X}, \mathcal{Z}, \rho, \mathcal{U}, \mathcal{C}, \mathcal{E}$ satisfy the hypotheses of .
By ([\[theoremPartCylinderBijection\]](#theoremPartCylinderBijection){reference-type="ref" reference="theoremPartCylinderBijection"}), the hypotheses of [@SatrianoUsatine2 Theorem 1.3] hold, hence combining with [@SatrianoUsatine3 Theorem 2.3], we see $-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/Y}} + \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{Z}/Y}}$ is integer valued and constructible on $\mathcal{E}$ and $$\mu_Y(D) = \int_{\mathcal{E}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/Y}} + \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{Z}/Y}}}\mathrm{d}\mu_\mathcal{Z}.$$ For each $n, m \in \mathbb{Z}$ set $$\mathcal{E}^{(n, m)} = \mathcal{E}\cap (-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/Y}} + \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{Z}/Y}})^{-1}(n) \cap (\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}})^{-1}(m)$$ and $$\mathcal{C}^{(n,m)} = \mathscr{L}(\rho)(\mathcal{E}^{(n,m)}).$$ We have each $\mathcal{E}^{(n,m)}$ is a cylinder, so each $\mathcal{C}^{(n,m)}$ is a cylinder in $|\mathscr{L}(\mathcal{X}')|$ by . We will show that each $\mathcal{C}^{(n,m)}$ is a cylinder in $|\mathscr{L}(\mathcal{X})|$. There exists some $\ell \in \mathbb{Z}_{\geq 0}$ and constructible subset $\mathcal{C}_\ell \subset |\mathscr{L}_\ell(\mathcal{X}')|$ such that $$\mathcal{C}^{(n,m)} = \theta_{\ell, \mathcal{X}'}^{-1}(\mathcal{C}_\ell) = \theta_{\ell, \mathcal{X}}^{-1}(\mathcal{C}_\ell) \cap |\mathscr{L}(\mathcal{X}')|.$$ Intersecting with $\mathcal{C}$ on both sides and noting $\mathcal{C}\subset |\mathscr{L}(\mathcal{X}')|$ and $\mathcal{C}^{(n,m)} \subset \mathcal{C}$, $$\mathcal{C}^{(n,m)} = \theta_{\ell, \mathcal{X}}^{-1}(\mathcal{C}_\ell) \cap \mathcal{C},$$ which is a cylinder in $|\mathscr{L}(\mathcal{X})|$. 
Now for all $n,m$, we have $\mathcal{X}, \mathcal{Z}, \rho, \mathcal{U}, \mathcal{C}^{(n,m)}, \mathcal{E}^{(n,m)}$ satisfy the hypotheses of , so by ([\[theoremPartIntegralOfMCVF\]](#theoremPartIntegralOfMCVF){reference-type="ref" reference="theoremPartIntegralOfMCVF"}) each $\mathcal{C}^{(n,m)}$ is $\dim Y$-convergent and $$\mu_{\mathcal{X}, \dim Y}(\mathcal{C}^{(n,m)}) = \int_{\mathcal{E}^{(n,m)}} \mathbb{L}^{-\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}}\mathrm{d}\mu_\mathcal{Z}= \mathbb{L}^{-m}\mu_\mathcal{Z}(\mathcal{E}^{(n,m)}).$$ Therefore $$\mu_Y(D) = \sum_{n,m \in \mathbb{Z}} \mathbb{L}^n \mu_\mathcal{Z}(\mathcal{E}^{(n,m)}) = \sum_{n,m \in \mathbb{Z}} \mathbb{L}^{n+m} \mu_{\mathcal{X}, \dim Y}(\mathcal{C}^{(n,m)}).$$ Thus we are done if we can show that for all $n, m \in \mathbb{Z}$, the function $-\mathop{\mathrm{ht}}_{\mathcal{X}/ Y}$ is equal to $n + m$ on all of $\mathcal{C}^{(n,m)}$. This follows since $$-\mathop{\mathrm{ht}}_{\mathcal{X}/ Y} (\mathscr{L}(\rho)(\psi)) = -\mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/Y}}(\psi) + \mathop{\mathrm{ht}}^{(1)}_{L_{\mathcal{Z}/Y}}(\psi) + \mathop{\mathrm{ht}}^{(0)}_{L_{\mathcal{Z}/\mathcal{X}}}(\psi),$$ by and applied to $\mathcal{Z}\to\mathcal{X}\to Y$. ◻ # Motivic change of variables formula: proof of Throughout this subsection, let $\mathcal{X}, Y, \pi, U, \mathcal{U}, \mathcal{C}$ be as in the hypotheses of , and set $d = \dim Y$. We first prove our desired result for measurable sets contained in a cylinder. **Lemma 12**. *Let $D \subset \mathscr{L}(Y)$ be a measurable set, let $E$ be a cylinder contained in $\mathscr{L}(Y) \smallsetminus\mathscr{L}(Y \smallsetminus U)$, and assume $D \subset E$. 
Then the restriction of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ to $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)$ is integer valued and takes only finitely many values, $\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ is $d$-integrable on $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)$, and $$\mu_Y(D) = \int_{\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)} \mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, d}.$$* *Proof.* For all $n \in \mathbb{Z}$, set $\mathcal{C}_n = \mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(n)$. For all $\varepsilon \in \mathbb{R}_{> 0}$, choose a cylindrical $\varepsilon$-approximation $(D_\varepsilon^{(0)}, \{D_\varepsilon^{(i)}\}_{i \in I_\varepsilon})$ of $D$. Since $D \subset E$ and noting e.g., , we may replace each $D_\varepsilon^{(0)}$ and $D_\varepsilon^{(i)}$ with their intersections with $E$ and therefore assume that all $D_\varepsilon^{(0)}$ and $D_\varepsilon^{(i)}$ are contained in $E$. For all $\varepsilon \in \mathbb{R}_{> 0}$ and $n \in \mathbb{Z}$, set $$\mathcal{C}_{\varepsilon, n}^{(0)} = \mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D_\varepsilon^{(0)}) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(n),$$ and for all $i \in I_\varepsilon$, set $$\mathcal{C}_{\varepsilon, n}^{(i)} = \mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D_\varepsilon^{(i)}) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(n).$$ We will show that for all $\varepsilon, n$, we have $(\mathcal{C}_{\varepsilon, n}^{(0)}, \{\mathcal{C}_{\varepsilon, n}^{(i)}\}_{i \in I_\varepsilon})$ is a $d$-convergent cylindrical $\varepsilon\exp(n)$-approximation of $\mathcal{C}_n$. 
It is straightforward to check that $$(\mathcal{C}_n \smallsetminus\mathcal{C}^{(0)}_{\varepsilon, n}) \cup (\mathcal{C}^{(0)}_{\varepsilon, n} \smallsetminus\mathcal{C}_n) \subset \bigcup_{i \in I_\varepsilon} \mathcal{C}^{(i)}_{\varepsilon, n}.$$ Applying to $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(E) \to E$ gives that the restriction of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ to $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(E)$ is integer valued and takes only finitely many values. Because $D \subset E$, we also have that the restriction of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ to $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)$ is integer valued and takes only finitely many values. Let $n_0 \in \mathbb{Z}_{\geq 0}$ be the maximum of the absolute value of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ on $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(E)$. In particular, for all $\varepsilon, i$, we have $\mathcal{C}_n$, $\mathcal{C}^{(0)}_{\varepsilon, n}$ and $\mathcal{C}^{(i)}_{\varepsilon, n}$ are empty whenever $|n| > n_0$. 
For each $\varepsilon, i$, applying to $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D^{(i)}_\varepsilon) \to D^{(i)}_\varepsilon$ gives that - each $\mathcal{C}^{(i)}_{\varepsilon, n}$ is a $d$-convergent cylinder in $|\mathscr{L}(\mathcal{X})|$, and - we have the equality $$\mu_Y(D^{(i)}_\varepsilon) = \sum_{n = -n_0}^{n_0} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}^{(i)}_{\varepsilon, n}).$$ Then by and , $$\Vert \mu_{\mathcal{X}, d}(\mathcal{C}^{(i)}_{\varepsilon, n}) \Vert = \exp(n)\Vert \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}^{(i)}_{\varepsilon, n})\Vert \leq \exp(n) \Vert \mu_Y(D^{(i)}_\varepsilon) \Vert < \varepsilon\exp(n).$$ Similarly, for all $\varepsilon$ - each $\mathcal{C}^{(0)}_{\varepsilon, n}$ is a $d$-convergent cylinder in $|\mathscr{L}(\mathcal{X})|$, and - we have the equality $$\mu_Y(D^{(0)}_\varepsilon) = \sum_{n = -n_0}^{n_0} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}^{(0)}_{\varepsilon, n}).$$ In particular, for all $\varepsilon, n$ we have $(\mathcal{C}_{\varepsilon, n}^{(0)}, \{\mathcal{C}_{\varepsilon, n}^{(i)}\}_{i \in I_\varepsilon})$ is a $d$-convergent cylindrical $\varepsilon\exp(n)$-approximation of $\mathcal{C}_n$. Thus each $\mathcal{C}_n$ is $d$-measurable and $$\mu_{\mathcal{X},d}(\mathcal{C}_n) = \lim_{\varepsilon \to 0} \mu_{\mathcal{X},d}(\mathcal{C}^{(0)}_{\varepsilon, n}).$$ Since $\mathcal{C}_n$ is empty for all but finitely many $n$, this also implies that $\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ is $d$-integrable on $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)$. 
Finally $$\begin{aligned} \int_{\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)} \mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, d} &= \sum_{n = -n_0}^{n_0} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}_n)\\ &= \sum_{n = -n_0}^{n_0}\mathbb{L}^{-n} \lim_{\varepsilon \to 0} \mu_{\mathcal{X},d}(\mathcal{C}^{(0)}_{\varepsilon, n})\\ &= \lim_{\varepsilon \to 0} \sum_{n = -n_0}^{n_0} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}^{(0)}_{\varepsilon, n})\\ &= \lim_{\varepsilon \to 0} \mu_Y(D^{(0)}_\varepsilon) = \mu_Y(D).\end{aligned}$$ ◻ We next loosen the restriction that $D$ be contained in a cylinder. **Lemma 13**. *Let $D \subset \mathscr{L}(Y)$ be a measurable set. Then $\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ is $d$-integrable on $(\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|) \cap \mathscr{L}(\pi)^{-1}(D)$ and $$\mu_Y(D) = \int_{(\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|) \cap \mathscr{L}(\pi)^{-1}(D)} \mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, d}.$$* *Proof.* Throughout this proof, choose some closed subscheme structure for $Y \smallsetminus U \hookrightarrow Y$, set $$E_0 = \mathscr{L}(Y) \smallsetminus\theta_0^{-1}(\mathscr{L}_0(Y \smallsetminus U)),$$ and for all $\ell \in \mathbb{Z}_{\geq 1}$ set $$E_\ell = \theta_{\ell-1}^{-1}(\mathscr{L}_{\ell-1}(Y \smallsetminus U)) \smallsetminus\theta_{\ell}^{-1}(\mathscr{L}_{\ell}(Y\smallsetminus U)) \subset \mathscr{L}(Y).$$ Therefore $$\mathscr{L}(Y) \smallsetminus\mathscr{L}(Y \smallsetminus U) = \bigsqcup_{\ell \in \mathbb{Z}_{\geq 0}} E_\ell.$$ Because $\mu_Y(\mathscr{L}(Y \smallsetminus U)) = 0$, we may replace $D$ with $D \smallsetminus\mathscr{L}(Y \smallsetminus U)$ and therefore assume that $D$ is disjoint from $\mathscr{L}(Y \smallsetminus U)$. 
We then have $$(\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|) \cap \mathscr{L}(\pi)^{-1}(D) = \mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D),$$ and $$D = \bigsqcup_{\ell \in \mathbb{Z}_{\geq 0}} D \cap E_\ell.$$ Because each $E_\ell$ is a cylinder and therefore measurable, each $D \cap E_\ell$ is measurable. Then, e.g., by , $$\mu_Y(D) = \sum_{\ell \in \mathbb{Z}_{\geq 0}} \mu_Y(D \cap E_\ell).$$ Also by applied to $D \cap E_\ell \subset E_\ell$, - the restriction of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ to $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell)$ is integer valued and takes finitely many values, - $\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ is $d$-integrable on $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell)$, and - we have the equality $$\mu_Y(D \cap E_\ell) = \int_{\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell)} \mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X},d}.$$ For each $\ell \in \mathbb{Z}_{\geq 0}$, let $n_\ell$ be the maximum of the absolute value of $\mathop{\mathrm{ht}}_{\mathcal{X}/Y}$ on $\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell)$. 
Then combining the above, $$\begin{aligned} \mu_Y(D) &= \sum_{\ell \in \mathbb{Z}_{\geq 0}} \mu_Y(D \cap E_\ell)\\ &= \sum_{\ell \in \mathbb{Z}_{\geq 0}} \int_{\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell)} \mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X},d}\\ &= \sum_{\ell \in \mathbb{Z}_{\geq 0}} \sum_{n = -n_\ell}^{n_\ell} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n)).\end{aligned}$$ The convergence of the final series implies that for any $n \in \mathbb{Z}$, $$\lim_{\ell \to \infty} \mu_{\mathcal{X}, d}(\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n)) = 0.$$ Thus by for any $n \in \mathbb{Z}$, $$\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n)= \bigsqcup_{\ell \in \mathbb{Z}_{\geq 0}}(\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n))$$ is $d$-measurable and $$\begin{aligned} \mu_{\mathcal{X}, d}(\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D) &\cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n)) \\ &= \sum_{\ell \in \mathbb{Z}_{\geq 0}} \mu_{\mathcal{X}, d}(\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n)).\end{aligned}$$ Therefore, if for each $n\in\mathbb{Z}_{\geq0}$, we set $\mathcal{S}_n=\{\ell\mid |n|\leq n_\ell\}$, we find $$\begin{aligned} \int_{\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D)} &\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}\mathrm{d}\mu_{\mathcal{X}, d} \\ &= \sum_{n \in \mathbb{Z}_{\geq 0}} \sum_{\ell \in \mathcal{S}_n} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}\cap \mathscr{L}(\pi)^{-1}(D \cap E_\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n))\\ &= \mu_Y(D),\end{aligned}$$ and we are done. ◻ Lastly, we prove the following more general form of . **Theorem 5**. 
*Keep Notation [Notation 1](#not:mainnot){reference-type="ref" reference="not:mainnot"}, and let $\mathcal{C}\subset |\mathscr{L}(\mathcal{X})|$ be a cylinder such that the map $$\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|\ \longrightarrow\ \mathscr{L}(Y) \smallsetminus\mathscr{L}(Y \smallsetminus U)$$ induced by $\pi$ is a bijection on isomorphism classes of $k'$-points for all field extensions $k'$ of $k$. Let $f: \mathscr{L}(Y) \to \mathbb{Z}\cup \{\infty\}$ be a measurable function.* (a) *[\[theoremMotivicChangeOfVariablesMeasurable-realversion::a\]]{#theoremMotivicChangeOfVariablesMeasurable-realversion::a label="theoremMotivicChangeOfVariablesMeasurable-realversion::a"} $\mathbb{L}^f$ is integrable on $\mathscr{L}(Y)$ if and only if $\mathbb{L}^{f \circ \mathscr{L}(\pi)-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ is $\dim Y$-integrable on $\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$.* (b) *If $\mathbb{L}^f$ is integrable on $\mathscr{L}(Y)$, then $$\int_{\mathscr{L}(Y)} \mathbb{L}^{f} \mathrm{d}\mu_Y = \int_{\mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|} \mathbb{L}^{f \circ \mathscr{L}(\pi)-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, \dim Y}.$$* *Proof.* Set $\mathcal{C}' = \mathcal{C}\smallsetminus|\mathscr{L}(\mathcal{X}\smallsetminus\mathcal{U})|$, and for all $\ell \in \mathbb{Z}$, set $D_\ell = f^{-1}(\ell)$. 
Then by , $\mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ is $d$-integrable on $\mathcal{C}' \cap \mathscr{L}(\pi)^{-1}(D_\ell)$ and $$\begin{aligned} \mu_Y(D_\ell) &= \int_{\mathcal{C}' \cap \mathscr{L}(\pi)^{-1}(D_\ell)} \mathbb{L}^{-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, d}\\ &= \sum_{n \in \mathbb{Z}} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}' \cap \mathscr{L}(\pi)^{-1}(D_\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n) )\\ &= \sum_{n \in \mathbb{Z}} \mathbb{L}^{-n} \mu_{\mathcal{X}, d}(\mathcal{C}' \cap (f \circ \mathscr{L}(\pi))^{-1}(\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n) ).\end{aligned}$$ Therefore $\mathbb{L}^f$ is integrable if and only if $$\begin{aligned} \sum_{\ell \in \mathbb{Z}} &\sum_{n \in \mathbb{Z}} \mathbb{L}^{\ell-n} \mu_{\mathcal{X}, d}(\mathcal{C}' \cap (f \circ \mathscr{L}(\pi))^{-1}(\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n) )\\ &= \sum_{m \in \mathbb{Z}} \mathbb{L}^{m} \sum_{\ell - n = m} \mu_{\mathcal{X}, d}(\mathcal{C}' \cap (f \circ \mathscr{L}(\pi))^{-1}(\ell) \cap \mathop{\mathrm{ht}}_{\mathcal{X}/Y}^{-1}(-n) )\end{aligned}$$ converges, which by is equivalent to $\mathbb{L}^{f \circ \mathscr{L}(\pi)-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}}$ being $d$-integrable on $\mathcal{C}'$, and in that case $$\int_{\mathscr{L}(Y)} \mathbb{L}^{f} \mathrm{d}\mu_Y = \int_{\mathcal{C}'} \mathbb{L}^{f \circ \mathscr{L}(\pi)-\mathop{\mathrm{ht}}_{\mathcal{X}/Y}} \mathrm{d}\mu_{\mathcal{X}, d}.$$ This completes our proof. ◻
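The final regrouping over $m = \ell - n$ is a rearrangement of an absolutely convergent double series. As a purely numerical stand-in (replacing the symbol $\mathbb{L}$ by a real number and the measures $\mu_{\mathcal{X},d}(\cdot)$ by summable toy weights; none of this is part of the motivic setting, where convergence is instead taken in the completed Grothendieck ring), one can check that the two groupings agree:

```python
# Toy check: regrouping a double series over (l, n) by m = l - n.
# L_SUB stands in for the symbol L; a(l, n) stands in for the measures.
L_SUB = 2.0

def a(l, n):
    # summable toy weights making sum a(l, n) * L_SUB**(l - n) absolutely convergent
    return 8.0 ** (-(l + abs(n)))

LMAX, NMAX = 30, 30

# direct summation over pairs (l, n)
s1 = sum(a(l, n) * L_SUB ** (l - n)
         for l in range(LMAX) for n in range(-NMAX, NMAX + 1))

# regrouped by m = l - n, as in the proof above
s2 = 0.0
for m in range(-NMAX, LMAX + NMAX):
    s2 += L_SUB ** m * sum(a(l, l - m) for l in range(LMAX)
                           if -NMAX <= l - m <= NMAX)

assert abs(s1 - s2) < 1e-9 * abs(s1)
```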
--- abstract: | The paper studies a multi-dimensional mean-field reflected backward stochastic differential equation (MF-RBSDE) with a reflection constraint depending on both the value process $Y$ and its distribution $[Y]$. We establish the existence, uniqueness and the stability of the solution of MF-RBSDE. We also investigate the associated interacting particle systems of RBSDEs and prove a propagation of chaos result. Lastly, we investigate the relationship between MF-RBSDE and an obstacle problem for partial differential equations in Wasserstein space within a Markovian framework. Our work provides a connection between the work of Briand et al. (2020) on BSDEs with normal reflection in law and the work of Gegout-Petit and Pardoux (1996) on classical multi-dimensional RBSDEs. author: - Ruisen Qian title: Multi-dimensional reflected McKean-Vlasov BSDEs with the obstacle depending on both the first unknown and its distribution --- mean-field, reflected backward stochastic differential equation, interacting particle system, obstacle problem, Wasserstein space 35R15 ,60H10 # Introduction Since the seminal paper by Pardoux and Peng [@PP90], backward stochastic differential equations (BSDEs) have been extensively studied. The solution of a typical BSDE with a random driver $f$ and a terminal condition $\xi$ is a pair of progressively measurable processes $(Y, Z)$ that satisfy the dynamic $$\label{BSDE} Y_t=\xi+\int_t^T f(s,Y_s,Z_s)ds -\int_t^T Z_sdB_s, \quad 0\leq t\leq T.$$ Pardoux and Peng [@PP90] demonstrated the existence and uniqueness of such a solution, and since then, many extensions of the dynamic have been proposed and studied. El Karoui et al. [@EKPPM97] introduced the notion of a reflected BSDE. 
The solution of a reflected BSDE contains an additional adapted non-decreasing process $K$ with $K_0=0$, such that the triplet of processes $(Y,Z,K)$ satisfies the dynamic $$\label{RBSDE} \begin{aligned} Y_t=\xi +\int_t^T f(s,Y_s,Z_s)ds-\int_t^T Z_s dB_s + K_T-K_t, \quad 0\leq t\leq T \end{aligned}$$ with a chosen constraint on the solution. El Karoui et al. [@EKPPM97] considered the constraint of the form $$\begin{aligned} Y_t\geq S_t, \quad 0\leq t\leq T, \end{aligned}$$ with a continuous progressively measurable obstacle process $S$, along with a Skorokhod condition $$\begin{aligned} \int_0^T (Y_t-S_t) dK_t=0 \end{aligned}$$ to ensure the minimality of the solution. The multi-dimensional case where the process $Y$ is constrained in a convex domain is studied by Gegout-Petit and Pardoux [@GPP96]. More recently, Briand et al. [@BCDH20; @BEH18; @BH21] proposed and studied a class of BSDEs with normal reflection in law, where the distribution $\mu$ of the $Y$ component of the solution is required to satisfy the constraint and the corresponding Skorokhod condition $$\label{El} \begin{aligned} h(\mu_t)\geq0, \quad 0\leq t\leq T, \,\,\mathrm{and}\,\, \int_0^T h(\mu_s) dK_s=0, \end{aligned}$$ for some Lions differentiable concave functional $h:{\cal P}_2(\mathbb{R}^n)\rightarrow \mathbb{R}$. The dynamic for this class of BSDEs reads $$\label{RBSDE-BCDH20} \begin{aligned} Y_t=\xi +\int_t^T f(s,Y_s,Z_s)ds-\int_t^T Z_sdB_s + \int_t^T \partial_\mu h(\mu_s)(Y_s) dK_s, \quad 0\leq t\leq T. \end{aligned}$$ Briand et al. [@BCDH20] showed that the system ([\[El\]](#El){reference-type="ref" reference="El"})-([\[RBSDE-BCDH20\]](#RBSDE-BCDH20){reference-type="ref" reference="RBSDE-BCDH20"}) admits a unique solution if $K$ is only allowed to be deterministic. This type of reflected BSDE finds applications in quantile hedging problems [@FL99; @FL00] and super-hedging problems under risk measure constraints [@BEH18 Section 6]. 
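In the degenerate case $f\equiv 0$ with no noise ($Z\equiv 0$), the dynamic above reduces to $y_t=\xi+K_T-K_t$ with $y\geq S$, and the Skorokhod condition forces $y_t=\max(\xi,\sup_{s\geq t}S_s)$. A short backward recursion (an illustrative toy, not a scheme from the cited papers) makes the minimality of $K$ visible:

```python
import numpy as np

N = 200
t = np.linspace(0.0, 1.0, N + 1)
S = np.sin(2 * np.pi * t)          # illustrative continuous obstacle
xi = 0.0                           # terminal condition, with xi >= S[-1]

y = np.empty(N + 1)
y[-1] = xi
for i in range(N - 1, -1, -1):     # y_t = max(xi, sup_{s >= t} S_s)
    y[i] = max(y[i + 1], S[i])

dK = y[:-1] - y[1:]                # increments of the reflection process K
assert np.all(dK >= 0.0)           # K is non-decreasing
# discrete Skorokhod condition: K increases only where y touches S
assert np.sum((y[:-1] - S[:-1]) * dK) == 0.0
```

The last assertion is the discrete analogue of $\int_0^T (Y_t-S_t)\,dK_t=0$: whenever $K$ grows, $y$ sits exactly on the obstacle.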
The aim of this paper is to extend the result of [@BCDH20] and to provide a connection to the classical results on reflected BSDEs [@GPP96]. We study a general class of mean-field reflected BSDEs where the constraint takes the form $$\label{refl} H(Y_t,\mu_t)\geq 0,$$ where the functional $H$ depends on both the value process $Y$ and its distribution $\mu$. Moreover, the driver $f$ is allowed to depend on the variables $(Y, Z)$ as well as their joint distribution $\nu$. The dynamic of our problem reads $$Y_t=\xi+\int_t^T f(s,Y_s,Z_s,\nu_s)ds - \int_t^T Z_s dB_s + \int_t^T \partial_y H(Y_s, \mu_s)dK_s +\widetilde{\mathbb{E}}\left[\int_t^T \partial_\mu H(\widetilde{Y}_s,\mu_s)(Y_s)d\widetilde{K}_s\right] ,$$ for all $t\in [0,T]$. For example, if $H(y,\mu)=y+h(\mu)$, then the reflection part of this dynamic becomes $$K_T - K_t + \int_t^T \partial_\mu h(\mu_s)(Y_s) d\mathbb{E}[K_s] ,$$ which can be viewed as a combination of the reflections in [\[RBSDE\]](#RBSDE){reference-type="eqref" reference="RBSDE"} and [\[RBSDE-BCDH20\]](#RBSDE-BCDH20){reference-type="eqref" reference="RBSDE-BCDH20"}. Reflected BSDEs with constraints depending on both the value process $Y$ and its distribution $\mu$ have found applications in insurance and risk management. One such application is in the pricing of guaranteed life endowment policies with a surrender/withdrawal option, as described by [@DEH19]. In this context, the obstacle $h$ contains a bonus option, which is linked to the distribution of the possible surplus realized by the average of all involved contracts. The paper establishes the well-posedness of mean-field reflected BSDEs, connects these problems with the mean-field limit of a system of reflected BSDEs, and investigates the relationship between the mean-field reflected BSDEs and an obstacle problem for partial differential equations (PDEs) in Wasserstein space. The rest of the paper is organized as follows. 
In Section [2](#sec-framework){reference-type="ref" reference="sec-framework"}, the problem is formulated in detail and the assumptions are clarified. We also give some a priori estimates of the solution, which will be used later on. In Sections [3](#sec-unique-stability){reference-type="ref" reference="sec-unique-stability"} and [4](#sec-exist){reference-type="ref" reference="sec-exist"}, the well-posedness of the solution is analyzed. We first present a stability result. Next, we establish the uniqueness of the solution by combining the stability result and an analysis of the reflection component. Finally, we prove the existence of a solution using the penalization technique. In Section [5](#sec-limit){reference-type="ref" reference="sec-limit"}, we attempt to interpret the mean-field reflected BSDEs at the particle level. We consider a corresponding particle system and investigate the limiting properties of the solution of this system. We demonstrate that the solution of the mean-field reflected BSDEs is the mean-field limit of the particle system. Finally, in Section [6](#sec:obstacle problem){reference-type="ref" reference="sec:obstacle problem"}, we establish a connection between our mean-field reflected BSDEs and an obstacle problem for PDEs in Wasserstein space. We show that, given that the problem is formulated within a Markovian framework, the solution of mean-field reflected BSDEs provides a probabilistic representation of a viscosity solution of an obstacle problem in Wasserstein space. #### Notations. {#notations. .unnumbered} Throughout this paper, we will work on a classical Wiener space $(\Omega, \cal F, \mathbb{P})$ with a finite time horizon $T>0$ and a $d$-dimensional standard Brownian motion $B=(B_t)_{0\leq t \leq T}$. We endow the probability space $(\Omega, \cal F, \mathbb{P})$ with the filtration $\mathbf{F}=({\cal F}_t)_{0\leq t\leq T}$ generated by the Brownian motion $B$. 
We also denote by: - $S^{2,n}$ the set of $\mathbb{R}^n$-valued continuous adapted processes $Y$ on $[0, T]$ such that $\left \| Y \right \|_{S^{2,n}}^2 := \mathbb{E}\left[ \sup_{t\in [0, T]} |Y_t|^2\right]<\infty$, - $H^{2,n}$ the set of $\mathbb{R}^{n\times d}$-valued predictable processes $Z$ such that $\left \| Z \right \|_{H^{2,n}}^2 := \mathbb{E}\left[\int_0^T |Z_t|^2 dt\right]<\infty$, - $A^{2,1}$ the subset of $S^{2,1}$ consisting of non-decreasing processes starting from $0$, - ${\cal P}_2(\mathbb{R}^n)$ the Wasserstein space equipped with the 2-Wasserstein distance $W_2(\cdot,\cdot)$, - $[\xi]$ the distribution of a random variable $\xi$. # Framework {#sec-framework} ## Mean-field reflected BSDEs Throughout this paper, we consider the following problem: for all $t\in[0,T]$, $$\label{MF-RBSDE} Y_t=\xi+\int_t^T f(s,Y_s,Z_s,\nu_s)ds - \int_t^T Z_s dB_s + \int_t^T \partial_y H(Y_s, \mu_s)dK_s +\widetilde{\mathbb{E}}\left[\int_t^T \partial_\mu H(\widetilde{Y}_s,\mu_s)(Y_s)d\widetilde{K}_s\right] ,$$ $$\label{refl} H(Y_t,\mu_t)\geq 0,\quad \int_0^T H(Y_s, \mu_s)dK_s=0 ,$$ where $\mu_s:=[Y_s], \nu_s:=[(Y_s,Z_s)]$ denote the distribution of $Y_s$ and the joint distribution of $(Y_s,Z_s)$, and $\widetilde{Y}_s, \widetilde{K}_s$ are independent copies of $Y_s,K_s$. We define a solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) as a triple of progressively measurable processes $(Y,Z,K)$ with values in $\mathbb{R}^n\times\mathbb{R}^{n\times d}\times\mathbb{R}$ such that $K$ is continuous, non-decreasing, and $K_0=0$. Some special cases of our problem are: 1. $H(y,\mu)=H(y)$, meaning the constraint depends solely on the value process $Y$. In this case, the system ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) corresponds to the classical reflected BSDE with normal reflection, as studied in [@GPP96]. 2. 
$H(y,\mu)=H(\mu)$, meaning the constraint depends only on the distribution $\mu$ of the value process $Y$. In this case, the system ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) corresponds to the reflected BSDE with normal constraint in law, which is investigated in [@BCDH20; @BEH18]. We note that the reflection terms of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"}) reduce to $\int_t^T \partial_\mu H(\mu_s)(Y_s)\,\mathbb{E}\left[ dK_s \right]$. Consequently, the reflection process $K$ is usually assumed to be deterministic in order to obtain uniqueness results. 3. $H(y,\mu)=y+h(\mu)$ for some concave functional $h: {\cal P}_2(\mathbb{R})\to\mathbb{R}$, meaning the constraint decomposes into a part that depends on $y$ and a part that depends on $\mu$. This is a generalization of cases 1 and 2, which is also described earlier in the Introduction. We study the system ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) under the following assumptions: **Assumption 1**. *The driver $f$, the functional $H$, and the terminal condition $\xi$ satisfy:* 1. *[\[H_f\]]{#H_f label="H_f"} $f$ is a mapping from $\Omega \times [0, T] \times \mathbb{R}^n \times \mathbb{R}^{n\times d} \times {\cal P}_2(\mathbb{R}^n\times\mathbb{R}^{n\times d})$ into $\mathbb{R}^n$ such that* 1. *[\[H_f\_H2\]]{#H_f_H2 label="H_f_H2"} The process $(f(t,0,0,\delta_0))_{0\leq t\leq T}$ is progressively measurable and $$\mathbb{E}\left[\int_0^T |f(t,0,0,\delta_0)|^4 dt \right]<\infty.$$* 2. *There exists a constant $L \geq 0$, such that for all $t\in[0,T], i\in\{1,2\}$, and for all $y_i \in \mathbb{R}^n$, $z_i\in \mathbb{R}^{n\times d}$, $\nu_i\in{\cal P}_2(\mathbb{R}^n\times\mathbb{R}^{n\times d})$, $$|f(t, y_1,z_1,\nu_1)-f(t, y_2,z_2,\nu_2)| \leq L\left(|y_1-y_2|+|z_1-z_2|+W_2(\nu_1,\nu_2)\right).$$* 2. 
*[\[H_H\]]{#H_H label="H_H"} The functional $H: \mathbb{R}^n \times {\cal P}_2(\mathbb{R}^n) \to \mathbb{R}$ has the following properties* 1. *[\[H-differentiable-y\]]{#H-differentiable-y label="H-differentiable-y"} For any $\mu\in {\cal P}_2(\mathbb{R}^n)$, the function $\mathbb{R}^n\ni y\mapsto H(y,\mu)$ is twice continuously differentiable in $\mathbb{R}^n$, and the functions $\partial_y H$ and $\partial^2_{yy} H$ are jointly continuous in $(y,\mu)$.* 2. *[\[H-differentiable-mu\]]{#H-differentiable-mu label="H-differentiable-mu"} For any $y\in\mathbb{R}^n$, the functional ${\cal P}_2(\mathbb{R}^n)\ni \mu\mapsto H(y,\mu)$ is continuously L-differentiable and, for any $\mu\in{\cal P}_2(\mathbb{R}^n)$, there exists a version of the function $\mathbb{R}^n\ni v\mapsto\partial_\mu H(y,\mu)(v)$, such that the functional $\partial_\mu H$ is jointly continuous in $(y,\mu,v)$.* 3. *[\[H-differentiable-mu-v\]]{#H-differentiable-mu-v label="H-differentiable-mu-v"} For the version $\partial_\mu H$ mentioned above and for any $(y,\mu)\in\mathbb{R}^n\times{\cal P}_2(\mathbb{R}^n)$, the function $\mathbb{R}^n\ni v\mapsto\partial_\mu H(y,\mu)(v)$ is continuously differentiable in $\mathbb{R}^n$, and its derivative, denoted by $\mathbb{R}^n\ni v\mapsto \partial_v\partial_\mu H(y,\mu)(v)\in\mathbb{R}^{n\times n}$, is jointly continuous in $(y,\mu,v)$.* 4. *[\[H-Lipschitz\]]{#H-Lipschitz label="H-Lipschitz"} There exist $0<\beta<M<\infty$, such that for all $(y,\mu,v)\in \mathbb{R}^n\times{\cal P}_2(\mathbb{R}^n)\times\mathbb{R}^n$, $$\label{H-beta} |\partial_y H(y,\mu)|\geq \beta,$$ and $$\label{H-M} |\partial_y H(y,\mu)|+|\partial^2_{yy} H(y,\mu)|+|\partial_\mu H(y,\mu)(v)|+|\partial_v\partial_\mu H(y,\mu)(v)|\leq M.$$* 5. 
*[\[Hy-Lipschitz\]]{#Hy-Lipschitz label="Hy-Lipschitz"} The functionals $\partial_y H(y,\cdot)$ and $\partial_\mu H(y,\cdot)(\cdot)$ are Lipschitz continuous, i.e., for all $y\in\mathbb{R}^n$, $Y_1,Y_2\in L^2(\Omega, \mathbb{R}^n)$, and $\mu_1,\mu_2\in{\cal P}_2(\mathbb{R}^n)$, $$\begin{gathered}\label{Hy-L} \left| \partial_y H(y,\mu_1)- \partial_y H(y,\mu_2) \right| \leq L \, W_2(\mu_1,\mu_2), \\ \mathbb{E}\left| \partial_\mu H(y,[Y_1])(Y_1)- \partial_\mu H(y,[Y_2])(Y_2) \right|^2 \leq L \, \mathbb{E}\left| Y_1-Y_2 \right|^2. \end{gathered}$$* 6. *[\[Hc\]]{#Hc label="Hc"} For all $y_1,y_2,v\in\mathbb{R}^n$ and for all $\mu_1,\mu_2\in{\cal P}_2(\mathbb{R}^n)$, $$\begin{gathered}\label{Hx-Hy} \partial_y H(v,\mu_1)\cdot \partial_\mu H(y_2,\mu_2)(v)\geq 0, \\ \partial_\mu H(y_1,\mu_1)(v) \cdot\partial_\mu H(y_2,\mu_2)(v)\geq 0. \end{gathered}$$* 7. *[\[H-concave\]]{#H-concave label="H-concave"} The functional $H$ is concave: for all $(y_1,\mu_1),(y_2,\mu_2)\in \mathbb{R}^n\times{\cal P}_2(\mathbb{R}^n)$, $$\label{H1-H2} H(y_2,\mu_2)-H(y_1,\mu_1) \leq \partial_y H(y_1,\mu_1)\cdot (y_2-y_1) +\mathbb{E}\left[\partial_\mu H(y_1,\mu_1)(Y_1)\cdot(Y_2-Y_1)\right],$$ whenever $Y_1$ and $Y_2$ are square integrable random variables with distributions $\mu_1$ and $\mu_2$.* 3. *The terminal value $\xi$ is an ${\cal F}_T$-measurable random variable with $\mathbb{E}|\xi|^4<\infty$, such that $$\label{H-xi-geq-0} H(\xi,\left[\xi\right])\geq 0.$$* **Remark 1**. 1. Assumption [Assumption 1](#H){reference-type="ref" reference="H"} 2.(a)-(c) are equivalent to the Assumption (Joint Chain Rule) in [@CD18 Section 5.6.4], which is used to derive chain rules in the Wasserstein space. 2. We assume that $\int_0^T |f(t,0,0,\delta_0)|^4 dt$ and $\xi$ have finite fourth moments. These technical assumptions are necessary in the proof of Lemma [Lemma 11](#lem-Ym-Zm-Cauchy){reference-type="ref" reference="lem-Ym-Zm-Cauchy"}, which is crucial for establishing the existence of a solution. 
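The $2$-Wasserstein distance entering the Lipschitz conditions of Assumption 1 can be made concrete in dimension one: for two empirical measures with the same number of equally weighted atoms, the monotone (sorted) coupling is optimal, which is a standard fact. The snippet below is a hypothetical illustration, not part of the paper's framework:

```python
import numpy as np

def w2_empirical(x, y):
    """W_2 between the empirical measures (1/m) sum_i delta_{x_i} and
    (1/m) sum_i delta_{y_i} on R: the sorted (monotone) coupling is optimal."""
    x, y = np.sort(x), np.sort(y)
    return float(np.sqrt(np.mean((x - y) ** 2)))

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
assert w2_empirical(a, a) == 0.0
# translating a measure by c moves it by exactly |c| in W_2
assert abs(w2_empirical(a, a + 2.0) - 2.0) < 1e-12
```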
Let us begin with a simple proposition, which provides a key ingredient for obtaining a priori estimates. **Proposition 2**. *Under Assumption [Assumption 1](#H){reference-type="ref" reference="H"}, there exists $\widehat{y}\in\mathbb{R}^n$, such that $$\begin{aligned} H(\widehat{y},\delta_{\widehat{y}})\geq0. \end{aligned}$$* *Proof.* We define $(y(t))_{t\geq0}$ as the solution of $$\begin{aligned} y(t)=\int_0^t \partial_y H(y(s),\delta_{y(s)})ds, \end{aligned}$$ and derive from [\[H-beta\]](#H-beta){reference-type="eqref" reference="H-beta"} and [\[Hx-Hy\]](#Hx-Hy){reference-type="eqref" reference="Hx-Hy"} that $$\begin{aligned} & H(y(t),\delta_{y(t)})-H(0,\delta_0) \\ = & \int_0^t \left( \left|\partial_y H(y(s),\delta_{y(s)})\right|^2 + \partial_\mu H(y(s),\delta_{y(s)})(y(s)) \cdot \partial_y H(y(s),\delta_{y(s)}) \right) ds \\ \geq & \int_0^t \left|\partial_y H(y(s),\delta_{y(s)})\right|^2 ds \\ \geq & \beta^2 t. \end{aligned}$$ Therefore, there exists $t^*\geq0$, such that $H(y(t^*),\delta_{y(t^*)})\geq0$. We set $\widehat{y}:=y(t^*)$ to obtain the desired result. ◻ ## A priori estimate In the following, we provide some useful a priori estimates for any solution of the MF-RBSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). **Lemma 3**. *Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied, and let $(Y,Z,K)$ be a solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) in $S^{2,n}\times H^{2,n}\times A^{2,1}$. 
Then, there exists $C>0$, such that $(Y,Z)$ satisfies the following: $$\label{est-supE-Y} \begin{aligned} \sup_{0\leq t \leq T}\mathbb{E}\left[ \left| Y_t \right|^2\right] + \mathbb{E}\left[\int_0^T \left|Z_s\right|^2ds\right]\leq C \mathbb{E}\left[ |\xi|^2 + \int_0^T |f^0(s)|^2ds + |\widehat{y}|^2 \right], \end{aligned}$$ $$\label{est-supE-Y4} \begin{aligned} \sup_{0\leq t \leq T}\mathbb{E}\left[ \left| Y_t \right|^4\right] \leq C \mathbb{E}\left[ |\xi|^4 + \int_0^T |f^0(s)|^4ds + |\widehat{y}|^4 \right], \end{aligned}$$ where $\widehat{y}$ is the constant vector appearing in Proposition [Proposition 2](#pr-widetilde-y){reference-type="ref" reference="pr-widetilde-y"}, and $f^0(s):=f(s,0,0,\delta_0)$.* *Proof.* Let $\Delta Y:=Y-\widehat{y}$ (recalling that $\widehat{y}\in\mathbb{R}^n$ and $H(\widehat{y},\delta_{\widehat{y}})\geq0$). Applying Itô's formula to $e^{\alpha t}|\Delta Y_t|^2$, we obtain $$\label{ito10} \begin{aligned} & e^{\alpha t} |\Delta Y_t|^2 + \int_t^T e^{\alpha s} |Z_s|^2ds \\ = & e^{\alpha T} |\xi-\widehat{y}|^2 - \int_t^T \alpha e^{\alpha s} |\Delta Y_s|^2 ds + \int_t^T 2 e^{\alpha s} \Delta Y_s \cdot f(s,Y_s,Z_s,\nu_s) ds \\ & + \int_t^T 2 e^{\alpha s} \Delta Y_s \cdot \partial_y H(Y_s,\mu_s) dK_s + \int_t^T 2 e^{\alpha s} \Delta Y_s \cdot \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y}_s,\mu_s)(Y_s)d\widetilde{K}_s \right] \\ & - \int_t^T 2 e^{\alpha s} \Delta Y_s \cdot Z_s dB_s. 
\\ \end{aligned}$$ From the Lipschitz continuity of $f$ and Young's inequality, we obtain $$\begin{aligned} & 2 e^{\alpha s} \Delta Y_s \cdot f(s,Y_s,Z_s,\nu_s) \\ \leq & 2 e^{\alpha s} |\Delta Y_s| \left[ |f^0(s)| + L\left(|Y_s| + |Z_s| + W_2(\nu_s,\delta_{(0,0)})\right) \right] \\ \leq & 2 e^{\alpha s} |\Delta Y_s| \left[ |f^0(s)| + L\left(|\Delta Y_s| + |\widehat{y}| + |Z_s| + W_2(\nu_s,\delta_{(\widehat{y},0)}) + |\widehat{y}| \right) \right] \\ \leq & e^{\alpha s} \left[ \left(1 +2L +12 L^2\right) |\Delta Y_s|^2 + \mathbb{E}|\Delta Y_s|^2 + |f^0(s)|^2 + |\widehat{y}|^2 +\frac{1}{4} |Z_s|^2 + \frac{1}{4} \mathbb{E}|Z_s|^2\right]. \end{aligned}$$ Setting $\alpha:= 2 +2L +12 L^2$ and taking the expectation on both sides of ([\[ito10\]](#ito10){reference-type="ref" reference="ito10"}), we obtain $$\label{eito10} \begin{aligned} & \mathbb{E}\left[ e^{\alpha t} |\Delta Y_t|^2 + \frac{1}{2} \int_t^T e^{\alpha s} |Z_s|^2ds \right] \\ \leq & \mathbb{E}\left[ e^{\alpha T} |\xi-\widehat{y}|^2 + \int_t^T e^{\alpha s} \left(\left| f^0(s) \right|^2 +|\widehat{y}|^2\right)ds \right] \\ &+2\mathbb{E}\left[\int_t^T e^{\alpha s} \Delta Y_s \cdot \left( \partial_y H(Y_s,\mu_s) dK_s + \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y}_s,\mu_s)(Y_s)d\widetilde{K}_s \right] \right)\right]. 
\\ \end{aligned}$$ From the concavity of $H$, the inequality $H(\widehat{y},\delta_{\widehat{y}})\geq 0$, and Fubini's theorem, we obtain $$\label{est-K0} \begin{aligned} & \mathbb{E}\left[\int_t^T e^{\alpha s} \Delta Y_s \cdot \left( \partial_y H(Y_s,\mu_s) dK_s + \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y}_s,\mu_s)(Y_s)d\widetilde{K}_s \right] \right)\right] \\ = & \mathbb{E}\left[\int_t^T e^{\alpha s} \left(\Delta Y_s \cdot \partial_y H(Y_s,\mu_s) + \widetilde{\mathbb{E}}\left[\Delta \widetilde{Y}_s \cdot \partial_\mu H(Y_s,\mu_s)(\widetilde{Y}_s) \right] \right) dK_s \right] \\ & + \mathbb{E}\left[\int_t^T e^{\alpha s} \left( \Delta Y_s \cdot \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y}_s,\mu_s)(Y_s)d\widetilde{K}_s \right] -\widetilde{\mathbb{E}}\left[\Delta \widetilde{Y}_s \cdot \partial_\mu H(Y_s,\mu_s)(\widetilde{Y}_s) \right] dK_s \right) \right] \\ = & \mathbb{E}\left[\int_t^T e^{\alpha s} \left(\Delta Y_s \cdot \partial_y H(Y_s,\mu_s) + \widetilde{\mathbb{E}}\left[\Delta \widetilde{Y}_s \cdot \partial_\mu H(Y_s,\mu_s)(\widetilde{Y}_s) \right] \right) dK_s \right] \\ \leq & \mathbb{E}\left[\int_t^T e^{\alpha s} \left(H(Y_s,\mu_s)-H(\widehat{y}, \delta_{\widehat{y}})\right) dK_s\right] \leq 0, \end{aligned}$$ with the last inequality coming from the Skorokhod condition and the inequality $H(\widehat{y},\delta_{\widehat{y}}) \geq0$. From ([\[eito10\]](#eito10){reference-type="ref" reference="eito10"}) and ([\[est-K0\]](#est-K0){reference-type="ref" reference="est-K0"}), we have $$\label{est-supE0} \begin{aligned} \sup_{0\leq t \leq T} \mathbb{E}\left[ e^{\alpha t} |\Delta Y_t|^2\right] + \mathbb{E}\left[ \int_0^T e^{\alpha s} |Z_s|^2ds \right] \leq & C \mathbb{E}\left[ e^{\alpha T} |\xi-\widehat{y}|^2 + \int_0^T e^{\alpha s} \left| f^0(s)\right|^2 ds \right], \\ \end{aligned}$$ which proves [\[est-supE-Y\]](#est-supE-Y){reference-type="eqref" reference="est-supE-Y"}. 
Then, [\[est-supE-Y4\]](#est-supE-Y4){reference-type="eqref" reference="est-supE-Y4"} is deduced by applying Itô's formula to $e^{\alpha t}|\Delta Y_t|^4$, $$\label{eq:ito-4} \begin{aligned} e^{\alpha t}|\Delta Y_t|^4 + \int_t^T 4 e^{\alpha s} |\Delta Y_s \cdot Z_s|^2 ds = e^{\alpha T} |\xi-\widehat{y}|^4 - \int_t^T \alpha e^{\alpha s}|\Delta Y_s|^4 ds - \int_t^T 2 e^{\alpha s}|\Delta Y_s|^2 d|\Delta Y_s|^2, \end{aligned}$$ and following a similar approach to the one previously described. ◻ **Lemma 4**. *Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied. Let $(Y,Z,K)$ be a solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) in $S^{2,n}\times H^{2,n}\times A^{2,1}$. Then, there exists $C>0$, such that $(Y,Z)$ satisfies the following: $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq t \leq T} \left| Y_t \right|^2 + \int_0^T \left|Z_s\right|^2ds \right] \leq C \mathbb{E}\left[ |\xi|^2 + \int_0^T |f^0(s)|^2ds + |\widehat{y}|^2 + (K_T)^2 \right]. \end{aligned}$$* *Proof.* Using the same notation as in Lemma [Lemma 3](#lem-a-priori-supE){reference-type="ref" reference="lem-a-priori-supE"}, we take the supremum in $t$ and the expectation in ([\[ito10\]](#ito10){reference-type="ref" reference="ito10"}) to obtain $$\label{e16} \begin{aligned} & \mathbb{E}\left[\sup _{0 \leq t \leq T} e^{\alpha t}\left|\Delta Y_t\right|^2+\frac{1}{2} \int_0^T e^{\alpha s}\left|Z_s\right|^2 d s\right] \\ \leq & \mathbb{E}\left[e^{\alpha T}|\xi-\widehat{y}|^2+\int_0^T e^{\alpha s}\left(|f^0(s)|^2+|\widehat{y}|^2\right) d s\right]+2 \mathbb{E}\sup _{0 \leq t \leq T}\left|\int_t^T e^{\alpha s} \Delta Y_s \cdot Z_s d B_s\right| \\ & +\mathbb{E}\sup _{0 \leq t \leq T}\left[\int_t^T e^{\alpha s} \Delta Y_s \cdot\left(\partial_y H(Y_s, \mu_s) d K_s+\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}_s, \mu_s)\left(Y_s\right) d \widetilde{K}_s\right]\right)\right]. 
\\ \end{aligned}$$ From the concavity of $H$, the boundedness of $\partial_y H$ and $\partial_\mu H$, the equation $\mathbb{E}\left[ \int_0^T H(Y_t,\mu_t)dK_t\right]=0$, and [\[est-supE-Y\]](#est-supE-Y){reference-type="eqref" reference="est-supE-Y"}, we obtain $$\label{e16-K-L} \begin{aligned} & \mathbb{E}\sup _{0 \leq t \leq T}\left[\int_t^T e^{\alpha s} \Delta Y_s \cdot\left(\partial_y H(Y_s, \mu_s) d K_s+\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}_s, \mu_s)\left(Y_s\right) d \widetilde{K}_s\right]\right)\right] \\ \leq & \mathbb{E}\sup _{0 \leq t \leq T}\left[\int_t^T e^{\alpha s}\left(\Delta Y_s \cdot \partial_y H(Y_s, \mu_s)+\widetilde{\mathbb{E}}\left[\Delta \widetilde{Y}_s \cdot \partial_\mu H(Y_s, \mu_s)(\widetilde{Y}_s)\right]\right) d K_s\right] \\ & +\mathbb{E}\sup _{0 \leq t \leq T}\left[-\int_t^T e^{\alpha s} \widetilde{\mathbb{E}}\left[\Delta \widetilde{Y}_s \cdot \partial_\mu H(Y_s, \mu_s)(\widetilde{Y}_s)\right] d K_s\right] \\ & +\mathbb{E}\sup _{0 \leq t \leq T}\left[\int_t^T e^{\alpha s} \Delta Y_s \cdot \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}_s, \mu_s)\left(Y_s\right) d \widetilde{K}_s\right]\right] \\ \leq & \mathbb{E}\left[\int_0^T e^{\alpha s}\left(H(Y_s, \mu_s)-H(\widehat{y}, \delta_{\widehat{y}})\right) d K_s\right] + C \mathbb{E}\left[ K_T \right] \sup _{0 \leq t \leq T}\mathbb{E}\left[e^{\alpha t} |\Delta Y_t| \right] \\ \leq & C \mathbb{E}\left[ |\xi|^2 + \int_0^T |f^0(s)|^2ds + |\widehat{y}|^2 + (K_T)^2 \right]. \\ \end{aligned}$$ Combining ([\[e16\]](#e16){reference-type="ref" reference="e16"}) with the last inequality and applying the BDG and Young inequalities, we obtain the desired result. ◻ # Stability and uniqueness of the solution {#sec-unique-stability} In this section, we investigate the stability and uniqueness of the solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). 
First, we prove the stability of the solution in Section [3.1](#subsec-stability){reference-type="ref" reference="subsec-stability"}. This stability result plays a crucial role in Section [6](#sec:obstacle problem){reference-type="ref" reference="sec:obstacle problem"}, as it facilitates the demonstration of the continuity of the decoupling function for the corresponding obstacle problem in Wasserstein space. Subsequently, we leverage the stability result to establish the uniqueness of the solution in Section [3.2](#subsec-uniqueness){reference-type="ref" reference="subsec-uniqueness"}. ## Stability result {#subsec-stability} **Proposition 5**. *Assume that $(Y^i,Z^i,K^i)_{i=1,2}$ are two solutions of the mean-field reflected BSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) with data $(f^i, \xi^i)_{i=1,2}$. Let Assumptions [Assumption 1](#H){reference-type="ref" reference="H"}, [Assumption 2](#H:SDE){reference-type="ref" reference="H:SDE"}, and [\[H-uniqueness\]](#H-uniqueness){reference-type="eqref" reference="H-uniqueness"} hold true for $(f^i, \xi^i)_{i=1,2}$ and $H$. 
Then, there exists a constant $C>0$ depending on $T$ and $L$, such that $$\label{eq:est-stability} \begin{aligned} \sup_{0 \leq t \leq T} \mathbb{E}\left| \Delta Y_t \right|^2 + \mathbb{E}\left[\int_0^T \left| \Delta Z_s \right|^2 \,ds\right] \leq C I_\delta^2, \end{aligned}$$ and $$\label{eq:est-R} \begin{aligned} \sup_{0 \leq t \leq T} \mathbb{E}\left| \Delta R_t \right| \leq C I_\delta, \end{aligned}$$ where $\Delta Y:= Y^1-Y^2, \Delta Z:= Z^1-Z^2, \Delta R:=R^1-R^2, \Delta \xi:= \xi^1-\xi^2, \delta f:= f^1-f^2$, $$\label{eq:R-def-1} \begin{aligned} & R^i_t := \int_0^t \partial_y H(Y^i_{s}, \mu^i_{s}) \,dK^i_s + \widetilde{\mathbb{E}} \left[ \int_0^t \partial_\mu H(\widetilde{Y}^i_{s}, \mu^i_{s})(Y^i_{s}) \, d \widetilde{K}^i_s \right], \quad i=1,2, \end{aligned}$$ and $$\label{eq:I-delta} \begin{aligned} I_\delta := \mathbb{E}^{\frac{1}{2} } \left[ \left| \Delta \xi \right|^2 + \int_0^T \left| \delta f(s,Y^1_s,Z^1_s,\nu^1_s) \right|^2 \,ds \right] . \end{aligned}$$* *Proof.* We apply Itô's formula to $\left| \Delta Y_t \right|^2$. Following the same technique used in the proof of Lemma [Lemma 3](#lem-a-priori-supE){reference-type="ref" reference="lem-a-priori-supE"}, from the concavity of $H$, we obtain $$\label{ito-delta-y} \begin{aligned} \mathbb{E}\left[ \left| \Delta Y_t \right|^2 + \int_t^T \left| \Delta Z_s \right|^2 \,ds \right] \leq 2 \mathbb{E}\left[ \int_t^T \left| \Delta Y_s \right| \left| \Delta f(s) \right| \,ds \right] , \end{aligned}$$ where $\Delta f(s):= f^1(s,Y^1_s,Z^1_s,\nu^1_s) - f^2(s,Y^2_s,Z^2_s,\nu^2_s)$. 
From the Lipschitz continuity of $f^2$, we deduce that $$\label{delta-y-f} \begin{aligned} & 2 \left| \Delta Y_s \right| \left| \Delta f(s) \right| \\ \leq & 2 \left| \Delta Y_s \right| \left( \left| \delta f(s,Y^1_s,Z^1_s,\nu^1_s) \right| + \left| f^2(s,Y^1_s,Z^1_s,\nu^1_s) - f^2(s,Y^2_s,Z^2_s,\nu^2_s) \right| \right) \\ \leq & \left| \delta f(s,Y^1_s,Z^1_s,\nu^1_s) \right|^2 + C_L \left| \Delta Y_s \right|^2 + \mathbb{E}\left| \Delta Y_s \right|^2 + \frac{1}{4} \left( \left| \Delta Z_s \right|^2 + \mathbb{E}\left| \Delta Z_s \right|^2 \right) . \\ \end{aligned}$$ Substituting [\[delta-y-f\]](#delta-y-f){reference-type="eqref" reference="delta-y-f"} into [\[ito-delta-y\]](#ito-delta-y){reference-type="eqref" reference="ito-delta-y"} and applying Gronwall's lemma, we obtain [\[eq:est-stability\]](#eq:est-stability){reference-type="eqref" reference="eq:est-stability"}. Then, [\[eq:est-R\]](#eq:est-R){reference-type="eqref" reference="eq:est-R"} is obtained from [\[eq:est-stability\]](#eq:est-stability){reference-type="eqref" reference="eq:est-stability"}, the equation $$\label{eq:Delta R} \begin{aligned} \Delta R_t = \Delta Y_0 - \Delta Y_t - \int_0^t \Delta f(s) \,ds + \int_0^t \Delta Z_s \,dB_s , \quad 0\leq t\leq T, \end{aligned}$$ and the Lipschitz continuity of $f^2$. ◻ ## Uniqueness of the solution {#subsec-uniqueness} In this section, we consider the uniqueness of the solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). **Proposition 6**. *Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied. Assume that $(Y^i,Z^i,K^i)_{i=1,2}$ are two solutions of the mean-field reflected BSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). 
Then $(Y^1,Z^1)=(Y^2,Z^2)$.* *Moreover, if there exists a constant $\delta_0>0$ such that $$\label{H-uniqueness} \begin{aligned} \left|\mathbb{E}_{ X\sim\mu }\left[\partial_\mu H(y,\mu)(X)\right]\right| \leq (1-\delta_0) \left|\partial_y H(y,\mu)\right| , \quad \forall(y,\mu)\in \mathbb{R}^n\times {\cal P}_2(\mathbb{R}^n), \end{aligned}$$ then $(Y^1,Z^1,K^1)=(Y^2,Z^2,K^2)$.* *Proof.* We set $\Delta Y:=Y^1-Y^2, \Delta Z:=Z^1-Z^2$, and $\Delta K:=K^1-K^2$. From Proposition [Proposition 5](#pr:stability){reference-type="ref" reference="pr:stability"}, applied with $\xi^1=\xi^2$ and $f^1=f^2$, so that $I_\delta=0$, we obtain $$\label{ee} \begin{aligned} \mathbb{E}\left[|\Delta Y_t|^2\right] +\frac{1}{2}\mathbb{E}\left[\int_t^T |\Delta Z_s|^2 ds\right] = 0. \\ \end{aligned}$$ Hence $(Y^1,Z^1)=(Y^2,Z^2)=(Y,Z)$. Assume moreover that [\[H-uniqueness\]](#H-uniqueness){reference-type="eqref" reference="H-uniqueness"} holds true. We shall prove the uniqueness of $K$. We derive from the uniqueness of $Y$ and $Z$ that for all $0\leq s\leq t\leq T$, $$\label{e18} \begin{aligned} \int_s^t \partial_y H(Y_{u},\mu^{}_{u}) d\Delta K_u +\widetilde{\mathbb{E}}\left[\int_s^t \partial_\mu H(\widetilde{Y}_{u},\mu^{}_{u})(Y^{}_{u}) d \widetilde{\Delta K}_u\right]=0. \end{aligned}$$ From ([\[H-beta\]](#H-beta){reference-type="ref" reference="H-beta"}) and the last equation, we obtain $$\label{e2} \begin{aligned} d\Delta K_t=-\frac{\partial_y H(Y_{t},\mu^{}_{t})^*}{\left| \partial_y H(Y_{t},\mu^{}_{t}) \right|^2} \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y}_{t},\mu^{}_{t})(Y^{}_{t}) d \widetilde{\Delta K}_t \right].
\end{aligned}$$ Multiplying both sides by $\partial_\mu H(Y_{t},\mu^{}_{t})(\widetilde{Y}^{}_{t})$ and taking the expectation $\mathbb{E}[\cdot]$, we obtain $$\label{e19} \begin{aligned} F(\widetilde{Y}_t) +\mathbb{E}\left[ G(Y_t, \widetilde{Y}_t) F(Y_t)\right] =0, \end{aligned}$$ where $$\begin{aligned} F(Y)= \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y},\mu)(Y) d \widetilde{\Delta K} \right] , \quad G(Y,\widetilde{Y})=\frac{\partial_\mu H(Y,\mu)(\widetilde{Y}) \partial_y H(Y,\mu)^*}{\left| \partial_y H(Y,\mu) \right|^2}. \end{aligned}$$ From ([\[H-uniqueness\]](#H-uniqueness){reference-type="ref" reference="H-uniqueness"}), we deduce that $\widetilde{\mathbb{E}}\|G(Y,\widetilde{Y})\|_2\leq 1-\delta_0$. Then, from Fubini's theorem, we obtain $$\begin{aligned} \mathbb{E}| F(Y_t) | = \widetilde{\mathbb{E}}|F(\widetilde{Y}_t)| \leq \mathbb{E}\left[\widetilde{\mathbb{E}}\| G(Y_t, \widetilde{Y}_t)\|_2 \left| F(Y_t) \right|\right] \leq (1-\delta_0) \mathbb{E}| F(Y_t) | . \end{aligned}$$ Hence, we obtain from [\[e2\]](#e2){reference-type="eqref" reference="e2"} $$\begin{aligned} \partial_y H(Y_{t},\mu^{}_{t})d\Delta K_t = \widetilde{\mathbb{E}}\left[ \partial_\mu H(\widetilde{Y}_{t},\mu^{}_{t})(Y^{}_{t}) d \widetilde{\Delta K}_t\right]=0. \end{aligned}$$ Then, the uniqueness of $K$ follows from [\[H-beta\]](#H-beta){reference-type="eqref" reference="H-beta"}. ◻ **Remark 7**. 1. As discussed in Section [5](#sec-limit){reference-type="ref" reference="sec-limit"}, the solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) is obtained as the limit of solutions of a sequence of particle systems with oblique reflection. The assumption [\[H-uniqueness\]](#H-uniqueness){reference-type="eqref" reference="H-uniqueness"} naturally arises in the literature on obliquely reflected BSDEs.
It ensures that the perturbing operator for the reflection direction is (uniformly) positive definite, satisfying [@CR20 Assumption (SB).(iii)]. We refer the reader to [@R02; @CR20] for the assumptions and well-posedness results related to obliquely reflected BSDEs. 2. As noted in [@BEH18 Section 1, page 484], in the case where $H$ depends only on $\mu$, the reflecting process $K$ is generally not unique. Nevertheless, under the assumption that $$\begin{aligned} \inf_{\mu\in{\cal P}_2(\mathbb{R}^n)}\mathbb{E}_{X\sim\mu}\left| \partial_\mu H(\mu)(X) \right|^2>0, \end{aligned}$$ the mean process $\mathbb{E}[K]$ is unique [@BCDH20 Theorem 25]. At the end of the section, we provide a counterexample in which uniqueness fails when inequalities [\[Hx-Hy\]](#Hx-Hy){reference-type="eqref" reference="Hx-Hy"} and [\[H-uniqueness\]](#H-uniqueness){reference-type="eqref" reference="H-uniqueness"} are not satisfied. #### Counterexample. {#counterexample. .unnumbered} Consider $n=1, \xi=1, f=0$ and $H(x,\mu)=x-\int_{\mathbb{R}} y\,\mu(dy)$. Then, it is straightforward to verify that, for all $\lambda\geq 0$, $(1,0,\lambda t)_{0\leq t\leq T}$ is a solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). Indeed, $\partial_y H\equiv 1$ and $\partial_\mu H(x,\mu)(v)\equiv -1$, so for the deterministic process $K_t=\lambda t$ the two reflection terms cancel, while $H(1,\delta_1)=0$ makes the constraint and the Skorokhod condition hold trivially. Therefore, the process $K$ is not unique. # Existence of the solution {#sec-exist} In this section, we consider the existence of the solution of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). We construct a solution through a penalized BSDE approach.
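As a sanity check on the penalization mechanism, consider the simplest possible instance of the penalized equation: one dimension, $f=0$, a deterministic terminal value $\xi$, no mean-field term and no Brownian noise, and the scalar constraint $H(y)=y\geq0$, so that $\partial_y H\equiv1$ and the penalized equation reduces to the backward ODE $Y^m_t=\xi+\int_t^T m\,(Y^m_s)^-\,ds$, whose value at time $0$ is $\xi e^{-mT}$ for $\xi<0$ and $\xi$ otherwise. The sketch below (the function name `penalized_y0` and all numerical parameters are illustrative choices of ours, not part of the paper) shows the convergence $Y^m_0\to\max(\xi,0)$ as $m\to\infty$:

```python
import math

def penalized_y0(xi, m, T=1.0, n_steps=20000):
    """Explicit Euler, stepping backward in time, for the toy penalized ODE
        Y_t = xi + int_t^T m * max(-Y_s, 0) ds,
    a deterministic, driver-free, one-dimensional analogue of the penalized
    BSDE with scalar constraint H(y) = y >= 0 (our simplifying assumption).
    Requires m * (T / n_steps) < 1 for the explicit scheme to be stable."""
    dt = T / n_steps
    y = xi  # terminal value Y_T = xi
    for _ in range(n_steps):
        # one step from time t down to t - dt:
        # Y_{t-dt} = Y_t + dt * m * max(-Y_t, 0)
        y = y + dt * m * max(-y, 0.0)
    return y

# Closed form: Y^m_0 = xi * exp(-m*T) for xi < 0, and Y^m_0 = xi for xi >= 0,
# so the penalized value converges to the reflected value max(xi, 0).
for m in (1.0, 10.0, 100.0):
    print(m, penalized_y0(-1.0, m))
```

The same mechanism underlies the estimates that follow: the penalty $mH^-$ forces the constraint violation $H^-(Y^m,\mu^m)$ to vanish as $m\to\infty$, which is what Lemma [Lemma 9](#lem-eHm){reference-type="ref" reference="lem-eHm"} quantifies.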
For $m\geq1$, let $(Y^m,Z^m)$ be the solution of the following BSDE: for all $t\in[0,T]$, $$\label{penal} \begin{aligned} Y^m_t= & \xi+\int_t^T f(s,Y^m_s,Z^m_s,\nu^m_s)ds - \int_t^T Z^m_s dB_s \\ & + \int_t^T \partial_y H(Y^m_s, \mu^m_s)dK^m_s +\widetilde{\mathbb{E}}\left[\int_t^T \partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right], \end{aligned}$$ where $$\label{def-Km} \begin{aligned} K^m_t:=\int_0^t m H^-(Y^m_s, \mu^m_s)ds, \quad \mu^m_t:=[Y^m_t],\quad \nu^m_t:=[(Y^m_t,Z^m_t)], \\ \end{aligned}$$ and $H^-$ denotes the negative part of $H$. We then have the following (uniform) a priori estimates of $(Y^m,Z^m)$. **Lemma 8**. *Suppose that Assumption [Assumption 1](#H){reference-type="ref" reference="H"} holds true. Then, for any $m\geq 1$, the penalized equation ([\[penal\]](#penal){reference-type="ref" reference="penal"}) admits a unique solution $(Y^m,Z^m)\in S^{2,n}\times H^{2,n}$. Moreover, there exists a constant $C>0$ independent of $m$, such that $$\label{est-supE-Ym} \begin{aligned} \sup_{0\leq t \leq T} \mathbb{E}\left[ \left| Y^m_t \right|^2\right] + \mathbb{E}\left[ \int_0^T \left|Z^m_s\right|^2ds \right]\leq C \mathbb{E}\left[ |\xi|^2 + \int_0^T |f^0(s)|^2ds + |\widehat{y}|^2 \right], \end{aligned}$$ $$\label{est-supE-Ym4} \begin{aligned} \sup_{0\leq t \leq T}\mathbb{E}\left[ \left| Y^m_t \right|^4\right] \leq C \mathbb{E}\left[ |\xi|^4 + \int_0^T |f^0(s)|^4ds + |\widehat{y}|^4 \right]. \end{aligned}$$* *Proof.* The existence and uniqueness of a solution $(Y^m,Z^m)$ of ([\[penal\]](#penal){reference-type="ref" reference="penal"}) follow from a (slightly) generalized version of [@BLP09 Theorem 3.1]. 
It can easily be deduced from the definition of $K^m$ that for all $t\in[0,T]$, $$\begin{aligned} \int_t^T H(Y^m_s, \mu^m_s)dK^m_s\leq0, \end{aligned}$$ and the estimates ([\[est-supE-Ym\]](#est-supE-Ym){reference-type="ref" reference="est-supE-Ym"}) and [\[est-supE-Ym4\]](#est-supE-Ym4){reference-type="eqref" reference="est-supE-Ym4"} are obtained in an identical way to the proof of Lemma [Lemma 3](#lem-a-priori-supE){reference-type="ref" reference="lem-a-priori-supE"}. ◻ **Lemma 9**. *Suppose that Assumption [Assumption 1](#H){reference-type="ref" reference="H"} holds true. Then, there exists a constant $C>0$, such that for all $m\geq 1$, $$\label{est-H^-} \begin{aligned} \mathbb{E}\left[\sup_{0\leq t \leq T} H^-(Y^m_t,\mu^m_t)^2\right]\leq \frac{C}{m} \,\,\,\mathrm{and}\,\,\, \mathbb{E}\left[\int_0^T H^-(Y^m_s,\mu^m_s)^2ds\right] \leq \frac{C}{m^2}. \end{aligned}$$ In particular, for all $m\geq1$, $$\label{EKm} \begin{aligned} \mathbb{E}\left(K^m_T\right)^2\leq C. \end{aligned}$$* *Proof.* The main idea of the proof is to apply Itô's formula to $H^-(Y^m_t,\mu^m_t)^2$ and to use the concavity of $H$ and [\[est-supE-Ym\]](#est-supE-Ym){reference-type="eqref" reference="est-supE-Ym"} to obtain the desired estimate. However, since the process $Z^m$ lacks the required integrability, we cannot apply Itô's formula to $H^-(Y^m_t,\mu^m_t)^2$ directly. Instead, we replace $\mu^m_t$ with an empirical distribution and apply the classical Itô formula. Define $(Y^{m,i},Z^{m,i})_{1\leq i\leq N}$ as $N$ i.i.d. copies of $(Y^m,Z^m)$. Let $\mu^{m,N}_t:=N^{-1}\sum_{i=1}^N \delta_{Y^{m,i}_t}$.
Applying Itô's formula to $H^-(Y^m_t,\mu^{m,N}_t)^2$ and using the concavity of $H$, we obtain $$\label{s4l2-e1} \begin{aligned} & H^-(Y^m_t,\mu^{m,N}_t)^2-H^-(\xi,\mu^{m,N}_T)^2 +\int_t^T \mathbf{1}_{\{ H(Y^m_s,\mu^{m,N}_s)\leq0 \}} \left|\partial_y H(Y^m_s,\mu^{m,N}_s)^* Z^m_s\right|^2ds \\ & +\frac{1}{N^2}\sum_{i=1}^N \int_t^T \mathbf{1}_{\{ H(Y^m_s,\mu^{m,N}_s)\leq0 \}}\left|\partial_\mu H(Y^m_s,\mu^{m,N}_s)(Y^{m,i}_s)^* Z^{m,i}_s\right|^2ds \\ \leq & \sum_{i=1}^{2} {\cal E}^f_i + \sum_{i=1}^{2} {\cal E}^{\cal M}_i + \sum_{i=1}^{4} {\cal E}^R_i, \end{aligned}$$ where $$\begin{aligned} {\cal E}^f_1 &= -\int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_y H(Y^m_s,\mu^{m,N}_s) \cdot f(s,Y^m_s,Z^m_s,\nu^m_s)ds, \\ {\cal E}^f_2 &= -\frac{1}{N}\sum_{i=1}^N \int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_\mu H(Y^m_s,\mu^{m,N}_s)(Y^{m,i}_s) \cdot f^i(s,Y^{m,i}_s,Z^{m,i}_s,\nu^m_s) ds, \\ {\cal E}^{\cal M}_1 &= \int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_y H(Y^m_s,\mu^{m,N}_s)\cdot Z^m_sdB_s, \\ {\cal E}^{\cal M}_2 &= \frac{1}{N}\sum_{i=1}^N\int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_\mu H(Y^m_s,\mu^{m,N}_s)(Y^{m,i}_s)\cdot Z^{m,i}_sdB^i_s, \\ {\cal E}^R_1 &= -\int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_y H(Y^m_s,\mu^{m,N}_s) \cdot \partial_y H(Y^m_s,\mu^{m}_s) dK^m_s, \\ {\cal E}^R_2 &= -\int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_y H(Y^m_s,\mu^{m,N}_s) \cdot \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^{m}_s)(Y^m_s) d\widetilde{K}^m_s\right], \\ {\cal E}^R_3 &= -\frac{1}{N}\sum_{i=1}^N \int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_\mu H(Y^{m}_s,\mu^{m,N}_s)(Y^{m,i}_s)\cdot \partial_y H(Y^{m,i}_s,\mu^{m}_s)dK^{m,i}_s, \\ {\cal E}^R_4 &= -\frac{1}{N}\sum_{i=1}^N \int_t^T 2 H^-(Y^m_s,\mu^{m,N}_s) \partial_\mu H(Y^{m}_s,\mu^{m,N}_s)(Y^{m,i}_s)\cdot \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^{m}_s)(Y^{m,i}_s) d\widetilde{K}^m_s\right]. \\ \end{aligned}$$ We now study each term separately.
We start by dealing with the reflection terms ${\cal E}^R:=\sum_{1\leq i\leq 4} {\cal E}^R_i$. From [\[H-beta\]](#H-beta){reference-type="eqref" reference="H-beta"} and [\[def-Km\]](#def-Km){reference-type="eqref" reference="def-Km"}, we obtain $$\label{s4l2-e1-k} \begin{aligned} {\cal E}^R \leq & {\cal E}^R_1 \\ = & -\int_t^T 2 H^-(Y^m_s,\mu^{m}_s) \left|\partial_y H(Y^m_s,\mu^{m}_s)\right|^2 dK^m_s \\ & + \int_t^T 2 \left( H^-(Y^m_s,\mu^{m}_s) \partial_y H(Y^m_s,\mu^{m}_s) - H^-(Y^m_s,\mu^{m,N}_s) \partial_y H(Y^m_s,\mu^{m,N}_s) \right) \cdot \partial_y H(Y^m_s,\mu^{m}_s) dK^m_s \\ \leq & -2\beta^2 m \int_t^T H^-(Y^m_s,\mu^{m}_s)^2 ds \\ & + 2 L M m \int_t^T H^-(Y^m_s,\mu^{m}_s) \left( H^-(Y^m_s,\mu^{m}_s) + \left| \partial_y H(Y^m_s,\mu^{m,N}_s) \right| \right) W_2(\mu^{m,N}_s,\mu^m_s) ds. \\ \end{aligned}$$ Next, the terms ${\cal E}^f:=\sum_{1\leq i\leq 2}{\cal E}^f_i$ are handled using the same technique as in the proof of [@GPP96 Lemma 5.4]. From Young's inequality, we have $$\label{s4l2-e1-f} \begin{aligned} {\cal E}^f \leq & 2M\int_t^T H^-(Y^m_s,\mu^{m,N}_s)\left( |f(s,Y^m_s,Z^m_s,\nu^m_s)|+\frac{1}{N}\sum_{i=1}^N |f(s,Y^{m,i}_s,Z^{m,i}_s,\nu^m_s)| \right)ds \\ \leq & \frac{C}{m} \int_t^T \left( |f(s,Y^m_s,Z^m_s,\nu^m_s)|^2 + \frac{1}{N}\sum_{i=1}^N |f^i(s,Y^{m,i}_s,Z^{m,i}_s,\nu^m_s)|^2 \right)ds \\ & + \beta^2 m \int_t^T H^-(Y^m_s,\mu^{m,N}_s)^2ds . 
\end{aligned}$$ Bringing together the estimates on ${\cal E}^R$ and ${\cal E}^f$, we obtain $$\label{s4l2-e2} \begin{aligned} & H^-(Y^m_t,\mu^{m,N}_t)^2-H^-(\xi,\mu^{m,N}_T)^2 +\int_t^T \mathbf{1}_{\{ H(Y^m_s,\mu^{m,N}_s)\leq0 \}} \left|\partial_y H(Y^m_s,\mu^{m,N}_s)^* Z^m_s\right|^2ds \\ & +\frac{1}{N^2}\sum_{i=1}^N \int_t^T \mathbf{1}_{\{ H(Y^m_s,\mu^{m,N}_s)\leq0 \}}\left|\partial_\mu H(Y^m_s,\mu^{m,N}_s)(Y^{m,i}_s)^* Z^{m,i}_s\right|^2ds \\ \leq & \frac{C}{m} \int_t^T \left( |f(s,Y^m_s,Z^m_s,\nu^m_s)|^2 + \frac{1}{N}\sum_{i=1}^N |f^i(s,Y^{m,i}_s,Z^{m,i}_s,\nu^m_s)|^2 \right)ds \\ & + 2 L M m \int_t^T W_2(\mu^{m,N}_s,\mu^m_s) H^-(Y^m_s,\mu^{m}_s) \left( H^-(Y^m_s,\mu^{m}_s) + M \right)ds \\ & -\beta^2 m \int_t^T H^-(Y^m_s,\mu^{m}_s)^2 ds +{\cal E}^{\cal M}, \\ \end{aligned}$$ where ${\cal E}^{\cal M}={\cal E}^{\cal M}_1+{\cal E}^{\cal M}_2$. From the Lipschitz continuity of $H$ and Lemma [Lemma 8](#lem-a priori-supE-Ym){reference-type="ref" reference="lem-a priori-supE-Ym"}, we deduce that the local martingale ${\cal E}^{\cal M}$ is a true martingale. Then, we take the expectation in [\[s4l2-e2\]](#s4l2-e2){reference-type="eqref" reference="s4l2-e2"} and let $N\to\infty$.
From [\[H-M\]](#H-M){reference-type="eqref" reference="H-M"}, [\[Hy-L\]](#Hy-L){reference-type="eqref" reference="Hy-L"} and [\[est-supE-Ym\]](#est-supE-Ym){reference-type="eqref" reference="est-supE-Ym"}, we have $$\label{s4l2-e10} \begin{aligned} & \mathbb{E}\left|H^-(Y^m_t,\mu^{m,N}_t)^2-H^-(Y^m_t,\mu^{m}_t)^2\right| \\ \leq & \mathbb{E}^{\frac{1}{2}}\left|H^-(Y^m_t,\mu^{m,N}_t)+H^-(Y^m_t,\mu^{m}_t)\right|^2 \mathbb{E}^{\frac{1}{2}}\left|H^-(Y^m_t,\mu^{m,N}_t)-H^-(Y^m_t,\mu^{m}_t)\right|^2 \\ \leq & C \mathbb{E}^{\frac{1}{2}} \left[ \left( |Y^m_t|+ \frac{1}{N}\sum_{i=1}^N |Y^{m,i}_t| \right)^2 \right] \mathbb{E}^{\frac{1}{2}}\left[ W^2_2(\mu^{m,N}_t,\mu^m_t) \right] \\ \leq & C \mathbb{E}^{\frac{1}{2}}\left[ W^2_2(\mu^{m,N}_t,\mu^m_t) \right], \\ \end{aligned}$$ for some constant $C\geq0$ independent of $N$. From the Lipschitz continuity of $H$ and [\[est-supE-Ym4\]](#est-supE-Ym4){reference-type="eqref" reference="est-supE-Ym4"}, we have $$\label{s4l2-e11} \begin{aligned} & \mathbb{E}\left[\int_0^T W_2(\mu^{m,N}_s,\mu^m_s) H^-(Y^m_s,\mu^{m}_s)^2ds\right] \\ \leq & C \mathbb{E}^{\frac{1}{2}}\left[\int_0^T W^2_2(\mu^{m,N}_s,\mu^m_s) ds \right] \mathbb{E}^{\frac{1}{2}}\left[ H^-(Y^m_s,\mu^{m}_s)^4 \right] \\ \leq & C \mathbb{E}^{\frac{1}{2}}\left[\int_0^T W^2_2(\mu^{m,N}_s,\mu^m_s) ds \right].
\end{aligned}$$ Hence, passing to the limit in [\[s4l2-e2\]](#s4l2-e2){reference-type="eqref" reference="s4l2-e2"}, from [\[s4l2-e10\]](#s4l2-e10){reference-type="eqref" reference="s4l2-e10"}, [\[s4l2-e11\]](#s4l2-e11){reference-type="eqref" reference="s4l2-e11"} and the limit $$\label{eq:limit} \begin{aligned} \mathbb{E}\left[\int_0^T W^2_2(\mu^{m,N}_s,\mu^m_s) ds\right]\rightarrow0, \end{aligned}$$ we obtain $$\label{s4l2-e3} \begin{aligned} & H^-(Y^m_t,\mu^{m}_t)^2 +\int_t^T \mathbf{1}_{\{ H(Y^m_s,\mu^{m}_s)\leq0 \}} \left|\partial_y H(Y^m_s,\mu^{m}_s)^* Z^m_s\right|^2ds \\ \leq & \frac{C}{m} \int_t^T \left( |f(s,Y^m_s,Z^m_s,\nu^m_s)|^2 + \mathbb{E}\left[ |f(s,Y^{m}_s,Z^{m}_s,\nu^m_s)|^2\right] \right)ds \\ &+\int_t^T 2 H^-(Y^m_s,\mu^{m}_s) \partial_y H(Y^m_s,\mu^{m}_s)\cdot Z^m_sdB_s -\beta^2 m \int_t^T H^-(Y^m_s,\mu^{m}_s)^2ds . \\ \end{aligned}$$ Taking the expectation, we have $$\label{s4l2-e4} \begin{aligned} \sup_{ 0 \leq t \leq T } \mathbb{E}\left[ H^-(Y^m_t,\mu^{m}_t)^2 \right] +\mathbb{E}\left[\int_0^T \mathbf{1}_{\{ H(Y^m_s,\mu^{m}_s)\leq0 \}} \left|\partial_y H(Y^m_s,\mu^{m}_s)^* Z^m_s\right|^2ds \right] \leq \frac{C}{m}, \\ \end{aligned}$$ and $$\label{s4l2-e5} \begin{aligned} \mathbb{E}\left[ \int_0^T H^-(Y^m_s,\mu^{m}_s)^2ds \right] \leq \frac{C}{m^2}. \\ \end{aligned}$$ Coming back to ([\[s4l2-e3\]](#s4l2-e3){reference-type="ref" reference="s4l2-e3"}), we take the supremum in time and the expectation. 
From the BDG inequality and Young's inequality, we obtain $$\label{e13-B} \begin{aligned} & \mathbb{E}\left[\sup_{0\leq t\leq T}\left| \int_t^T H^-(Y^m_s,\mu^{m}_s) \partial_y H(Y^m_s,\mu^{m}_s)\cdot Z^m_sdB_s \right| \right] \\ \leq & C\, \mathbb{E}\left[ \left( \int_0^T \mathbf{1}_{\{H(Y^m_s,\mu^m_s)\leq0\}} |H^-(Y^m_s,\mu^{m}_s)|^2 |\partial_y H(Y^m_s,\mu^{m}_s)^* Z^m_s|^2 ds \right)^{\frac{1}{2}} \right] \\ \leq & \epsilon \mathbb{E}\left[\sup_{0\leq t \leq T} |H^-(Y^m_t,\mu^m_t)|^2 \right] + C_\epsilon \mathbb{E}\left[\int_0^T \mathbf{1}_{\{H(Y^m_s,\mu^{m}_s)\leq0\}} |\partial_y H(Y^m_s,\mu^{m}_s)^* Z^m_s|^2 ds\right] . \end{aligned}$$ Then, from ([\[s4l2-e3\]](#s4l2-e3){reference-type="ref" reference="s4l2-e3"}), ([\[s4l2-e4\]](#s4l2-e4){reference-type="ref" reference="s4l2-e4"}) and the last inequality, we obtain $$\label{e13-Esup} \begin{aligned} & \mathbb{E}\left[\sup_{0\leq t\leq T}H^-(Y^m_t,\mu^{m}_t)^2\right]+\mathbb{E}\left[\int_0^T \mathbf{1}_{\{H(Y^m_s,\mu^{m}_s)\leq0\}} |\partial_y H(Y^{m}_{s},\mu^{m}_{s}) ^* Z^m_s|^2ds\right] \leq \frac{C}{m}, \end{aligned}$$ which, together with [\[s4l2-e5\]](#s4l2-e5){reference-type="eqref" reference="s4l2-e5"}, proves [\[est-H\^-\]](#est-H^-){reference-type="eqref" reference="est-H^-"}. ◻ **Lemma 10**. *Under Assumption [Assumption 1](#H){reference-type="ref" reference="H"}, there exists a constant $C>0$ independent of $m$, such that $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq t \leq T} \left| Y^m_t \right|^2 + \int_0^T \left|Z^m_s\right|^2ds \right]\leq C \mathbb{E}\left[ |\xi|^2 + \int_0^T |f^0(s)|^2ds + |\widehat{y}|^2 + 1 \right]. \end{aligned}$$* *Proof.* The result can be obtained in an identical way to the proof of Lemma [Lemma 4](#lem-a-priori-Esup){reference-type="ref" reference="lem-a-priori-Esup"}. Therefore, we omit the proof. ◻ **Lemma 11**.
*Under Assumption [Assumption 1](#H){reference-type="ref" reference="H"}, the sequence $(Y^m,Z^m)_{m\geq 1}$ is a Cauchy sequence in $S^{2,n} \times H^{2,n}$.* *Proof.* The proof follows mainly from arguments in the proof of [@GPP96 Lemma 5.6]. However, some extra work is required to handle the reflection terms properly. Let $m,l$ be two positive integers, and set $\Delta Y := Y^m -Y^l$ and $\Delta Z := Z^m -Z^l$. We apply Itô's formula to $e^{\alpha t}|\Delta Y_t|^2$ and obtain $$\label{ito-Ym-Yl} \begin{aligned} & e^{\alpha t}|\Delta Y_t|^2 + \int_t^T e^{\alpha s}|\Delta Z_s|^2ds \\ = & - \int_t^T \alpha e^{\alpha s} |\Delta Y_s|^2ds+ 2\int_t^T e^{\alpha s}\Delta Y_s \cdot \Delta f(s)ds - 2 \int_t^T e^{\alpha s}\Delta Y_s \cdot \Delta Z_s dB_s \\ & + 2 \int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^m_s,\mu^m_s)dK^m_s - \partial_y H(Y^l_s,\mu^l_s)dK^l_s \right) \\ & + 2 \int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] - \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^l_s,\mu^l_s)(Y^l_s)d\widetilde{K}^l_s\right] \right) , \\ \end{aligned}$$ where $\Delta f(s)=f(s,Y^m_s,Z^m_s,\nu^m_s)-f(s,Y^l_s,Z^l_s,\nu^l_s)$. Taking the expectation on both sides of ([\[ito-Ym-Yl\]](#ito-Ym-Yl){reference-type="ref" reference="ito-Ym-Yl"}) and choosing $\alpha$ large enough, depending on the Lipschitz constant of $f$, to absorb the driver term, we have $$\label{eito-Ym-Yl} \begin{aligned} & \mathbb{E}\left[ e^{\alpha t} |\Delta Y_t|^2 + \frac{1}{2}\int_t^T e^{\alpha s} |\Delta Z_s|^2ds \right] \\ \leq & 2 \mathbb{E}\left[ \int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^m_s,\mu^m_s)dK^m_s - \partial_y H(Y^l_s,\mu^l_s)dK^l_s \right) \right] \\ & + 2\mathbb{E}\left[ \int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] - \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^l_s,\mu^l_s)(Y^l_s)d\widetilde{K}^l_s\right] \right)\right].
\\ \end{aligned}$$ From the concavity of $H$, Lemma [Lemma 9](#lem-eHm){reference-type="ref" reference="lem-eHm"} and Fubini's theorem, we obtain $$\label{est-Km-Kl-1} \begin{aligned} & \mathbb{E}\left[\int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^m_s,\mu^m_s)dK^m_s + \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] \right) \right] \\ = & \mathbb{E}\left[ \int_t^T e^{\alpha s}\Delta Y_s\cdot \partial_y H(Y^m_s,\mu^m_s)dK^m_s + \int_t^T e^{\alpha s} \widetilde{\mathbb{E}}\left[ \Delta\widetilde{Y}_s\cdot \partial_\mu H(Y^m_s,\mu^m_s)(\widetilde{Y}^m_s)\right]dK^m_s \right] \\ & + \mathbb{E}\left[ \int_t^T e^{\alpha s}\Delta Y_s\cdot\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] - \int_t^T e^{\alpha s}\widetilde{\mathbb{E}}\left[ \Delta\widetilde{Y}_s\cdot \partial_\mu H(Y^m_s,\mu^m_s)(\widetilde{Y}^m_s)\right]dK^m_s \right] \\ \leq & \mathbb{E}\left[\int_t^T e^{\alpha s} \left( H(Y^m_s,\mu^m_s) - H(Y^l_s,\mu^l_s) \right) dK^m_s\right] \\ \leq & \mathbb{E}\left[\int_t^T e^{\alpha s} H^-(Y^l_s,\mu^l_s) dK^m_s\right] \\ = & m \mathbb{E}\left[\int_t^T e^{\alpha s} H^-(Y^l_s,\mu^l_s)H^-(Y^m_s,\mu^m_s) ds\right] \\ \leq & e^{\alpha T} m \, \mathbb{E}^{\frac{1}{2} } \left[\int_t^T H^-(Y^l_s,\mu^l_s)^2ds \right] \mathbb{E}^{\frac{1}{2} } \left[\int_t^T H^-(Y^m_s,\mu^m_s)^2ds \right] \\ \leq & \frac{C}{l} , \\ \end{aligned}$$ where $C>0$ is the constant appearing in Lemma [Lemma 9](#lem-eHm){reference-type="ref" reference="lem-eHm"}, multiplied by $e^{\alpha T}$; it is independent of $m$ and $l$. Arguing similarly, we also have $$\label{est-Km-Kl-2} \begin{aligned} & - \mathbb{E}\left[\int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^l_s,\mu^l_s)dK^l_s + \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^l_s,\mu^l_s)(Y^l_s)d\widetilde{K}^l_s\right] \right) \right] \leq \frac{C}{m}.
\\ \end{aligned}$$ Combining ([\[eito-Ym-Yl\]](#eito-Ym-Yl){reference-type="ref" reference="eito-Ym-Yl"}), ([\[est-Km-Kl-1\]](#est-Km-Kl-1){reference-type="ref" reference="est-Km-Kl-1"}), and ([\[est-Km-Kl-2\]](#est-Km-Kl-2){reference-type="ref" reference="est-Km-Kl-2"}), we obtain $$\label{est-supE-Km-Kl} \begin{aligned} & \sup_{0\leq t\leq T} \mathbb{E}\left[ |\Delta Y_t|^2\right] + \mathbb{E}\left[ \int_0^T |\Delta Z_s|^2ds \right] \leq C \left( \frac{1}{l} + \frac{1}{m} \right) . \end{aligned}$$ Coming back once again to ([\[ito-Ym-Yl\]](#ito-Ym-Yl){reference-type="ref" reference="ito-Ym-Yl"}), taking the supremum in $t$ and the expectation, we obtain $$\label{e14-Esup} \begin{aligned} & \mathbb{E}\left[\sup_{0\leq t \leq T}e^{\alpha t}|\Delta Y_t|^2 + \frac{1}{2}\int_0^T e^{\alpha s}|\Delta Z_s|^2ds\right] \\ \leq & 2 \mathbb{E}\left[\sup_{0\leq t \leq T}\left|\int_t^T e^{\alpha s}\Delta Y_s\cdot\Delta Z_s dB_s\right|\right] \\ & + 2 \mathbb{E}\sup_{0\leq t \leq T}\left[\int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^m_s,\mu^m_s)dK^m_s + \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] \right) \right] \\ & + 2 \mathbb{E}\sup_{0\leq t \leq T}\left[-\int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^l_s,\mu^l_s)dK^l_s + \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^l_s,\mu^l_s)(Y^l_s)d\widetilde{K}^l_s\right] \right) \right]. \\ \end{aligned}$$ From the BDG inequality and Young's inequality, we have $$\label{e14-B} \begin{aligned} \mathbb{E}\left[\sup_{0\leq t \leq T}\left|\int_t^T e^{\alpha s}\Delta Y_s\cdot\Delta Z_s dB_s\right|\right] \leq \epsilon \mathbb{E}\left[\sup_{0\leq t \leq T} e^{\alpha t} |\Delta Y_t|^2 \right] + C_\epsilon \mathbb{E}\left[ \int_0^T e^{\alpha s}|\Delta Z_s|^2ds \right].
\end{aligned}$$ From the concavity of $H$ and the boundedness of $\partial_y H$ and $\partial_\mu H$, thanks to Lemma [Lemma 9](#lem-eHm){reference-type="ref" reference="lem-eHm"}, we have $$\label{e14-Km-Lm} \begin{aligned} & \mathbb{E}\sup_{0\leq t \leq T}\left[\int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^m_s,\mu^m_s)dK^m_s + \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] \right) \right] \\ \leq & \mathbb{E}\sup_{0\leq t \leq T}\left[ \int_t^T e^{\alpha s} \Delta Y_s \cdot \partial_y H(Y^m_{s},\mu^{m}_{s}) dK^m_s + \int_t^T e^{\alpha s} \widetilde{\mathbb{E}}\left[ \Delta Y_s\cdot \partial_\mu H(Y^m_s,\mu^m_s)(\widetilde{Y}^m_s)\right]dK^m_s \right] \\ & + \mathbb{E}\sup_{0\leq t \leq T}\left[ -\int_t^T e^{\alpha s} \widetilde{\mathbb{E}}\left[ \Delta\widetilde{Y}_s\cdot \partial_\mu H(Y^m_s,\mu^m_s)(\widetilde{Y}^m_s)\right]dK^m_s \right] \\ & +\mathbb{E}\sup_{0\leq t \leq T}\left[\int_t^T e^{\alpha s}\Delta Y_s\cdot\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^m_s,\mu^m_s)(Y^m_s)d\widetilde{K}^m_s\right] \right] \\ \leq & \mathbb{E}\sup_{0\leq t \leq T}\left[ \int_t^T e^{\alpha s} \left( H(Y^m_s,\mu^m_s) - H(Y^l_s,\mu^l_s) \right) dK^m_s \right] + 2M \sup_{0\leq t \leq T}\mathbb{E}\left[ e^{\alpha t}|\Delta Y_t|\right] \mathbb{E}\left[K^m_T\right] \\ \leq & C \left(\frac{1}{l}+\frac{1}{\sqrt{l}}+\frac{1}{\sqrt{m}}\right). \\ \end{aligned}$$ Arguing similarly, we also have $$\label{e14-Kl-Ll} \begin{aligned} & \mathbb{E}\sup_{0\leq t \leq T}\left[ - \int_t^T e^{\alpha s}\Delta Y_s \cdot \left( \partial_y H(Y^{l}_{s},\mu^{l}_{s}) dK^l_s + \widetilde{\mathbb{E}} \left[ \partial_\mu H(\widetilde{Y}^{l}_{s},\mu^{l}_{s})(Y^{l}_{s})d \widetilde{K}^l_s \right] \right) \right] \\ \leq & C \left(\frac{1}{m}+\frac{1}{\sqrt{l}}+\frac{1}{\sqrt{m}}\right).
\\ \end{aligned}$$ Combining ([\[e14-Esup\]](#e14-Esup){reference-type="ref" reference="e14-Esup"}), ([\[e14-B\]](#e14-B){reference-type="ref" reference="e14-B"}), ([\[e14-Km-Lm\]](#e14-Km-Lm){reference-type="ref" reference="e14-Km-Lm"}) and ([\[e14-Kl-Ll\]](#e14-Kl-Ll){reference-type="ref" reference="e14-Kl-Ll"}), we deduce that $$\label{e15} \begin{aligned} & \mathbb{E}\left[\sup_{0\leq t \leq T}e^{\alpha t}|\Delta Y_t|^2 + \frac{1}{2}\int_0^T e^{\alpha s}|\Delta Z_s|^2ds\right] \leq C \left(\frac{1}{\sqrt{l}}+\frac{1}{\sqrt{m}}\right). \\ \end{aligned}$$ Therefore, we conclude that $(Y^m,Z^m)_{m\geq1}$ is a Cauchy sequence in $S^{2,n}\times H^{2,n}$. ◻ We now have all the key ingredients to show the existence result. **Proposition 12**. *Under Assumption [Assumption 1](#H){reference-type="ref" reference="H"}, there exists a solution $(Y,Z,K)$ of ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) in $S^{2,n}\times H^{2,n}\times A^{2,1}$.* *Proof.* It follows from Lemma [Lemma 11](#lem-Ym-Zm-Cauchy){reference-type="ref" reference="lem-Ym-Zm-Cauchy"} that $$\begin{aligned} Y^m\overset{S^{2,n}}{\longrightarrow}Y, \quad Z^m\overset{H^{2,n}}{\longrightarrow}Z, \,\,\, \mathrm{and}\,\,\, \int_0^\cdot Z^m_sdB_s\overset{L^{2,n}}{\longrightarrow}\int_0^\cdot Z_sdB_s. \end{aligned}$$ Set $\mu_t:=[Y_t]$ and $\nu_t:=[(Y_t,Z_t)]$. From the Lipschitz continuity of $f$, we have $$\begin{aligned} f(\cdot,Y^m_\cdot,Z^m_\cdot,\nu^m_\cdot)\overset{H^{2,n}}{\longrightarrow}f(\cdot,Y_\cdot,Z_\cdot,\nu_\cdot). \end{aligned}$$ Define $k^m_\cdot:=m H^-(Y^m_\cdot,\mu^m_\cdot)$. Thanks to ([\[est-H\^-\]](#est-H^-){reference-type="ref" reference="est-H^-"}), we have $\mathbb{E}\left[ \int_0^T |k^m_s|^2ds \right] = m^2\, \mathbb{E}\left[ \int_0^T H^-(Y^m_s,\mu^m_s)^2ds \right] \leq C$. Hence, there is a subsequence of $k^m$ converging weakly to some $k$ in $L^2(\Omega\times[0,T])$.
From Mazur's Lemma, we know that there exists a convex combination of $k^m$ converging strongly to $k$, namely $$\begin{aligned} \sum_{i=m}^{N_m} \lambda^m_i k^i \overset{L^{2}}{\longrightarrow} k , \end{aligned}$$ where $\lambda^m_i\geq0$ for all $m\geq1$ and $m\leq i\leq N_m$, and $\sum_{i=m}^{N_m}\lambda^m_i=1$. For all $m\geq1$, we have $$\begin{aligned} \sum_{i=m}^{N_m} \lambda^m_i \partial_y H(Y^i,\mu^i) k^i = \sum_{i=m}^{N_m} \lambda^m_i \left(\partial_y H(Y^i,\mu^i)-\partial_y H(Y,\mu)\right) k^i + \partial_y H(Y,\mu) \sum_{i=m}^{N_m}\lambda^m_i k^i. \end{aligned}$$ From the Lipschitz continuity of $\partial_y H$, the strong $S^{2,n}$-convergence of $(Y^m)_{m\geq1}$ and the uniform $L^2(\Omega\times[0,T])$-boundedness of $k^i$, we obtain $$\begin{aligned} \sum_{i=m}^{N_m} \lambda^m_i \left(\partial_y H(Y^i,\mu^i)-\partial_y H(Y,\mu)\right) k^i \overset{L^{2}}{\longrightarrow}0, \end{aligned}$$ and $$\begin{aligned} \partial_y H(Y,\mu) \sum_{i=m}^{N_m}\lambda^m_i k^i \overset{L^{2}}{\longrightarrow} \partial_y H(Y,\mu) k. \end{aligned}$$ Arguing similarly, we also have $$\begin{aligned} \sum_{i=m}^{N_m} \lambda^m_i \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}^i,\mu^i)(Y^i) \widetilde{k}^i\right] \overset{L^{2}}{\longrightarrow} \widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y},\mu)(Y) \widetilde{k}\right]. \end{aligned}$$ Setting $K_t:=\int_0^t k_sds$ and passing to the limit in $$\begin{aligned} \sum_{i=m}^{N_m} \lambda^m_i Y^i_t = & \xi+ \sum_{i=m}^{N_m} \int_t^T \lambda^m_i f(s,Y^i_s,Z^i_s,\nu^i_s)ds - \sum_{i=m}^{N_m} \int_t^T \lambda^m_i Z^i_sdB_s \\ & + \sum_{i=m}^{N_m} \int_t^T \lambda^m_i \partial_y H(Y^i_s,\mu^i_s)dK^i_s + \sum_{i=m}^{N_m} \widetilde{\mathbb{E}}\left[ \int_t^T \lambda^m_i \partial_\mu H(\widetilde{Y}^i_s,\mu^i_s)(Y^i_s)d\widetilde{K}^i_s \right], \end{aligned}$$ we conclude that $(Y,Z,K)$ satisfies the BSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"}).
Lastly, we need to check that $H(Y_t,\mu_t)\geq0$, and that the Skorokhod condition is satisfied. Indeed, since $\mathbb{E}\left[\sup_{0\leq t\leq T}H^-(Y^m_t,\mu^m_t)^2\right]\leq C/m$ by Lemma [Lemma 9](#lem-eHm){reference-type="ref" reference="lem-eHm"}, we have $$\begin{aligned} H(Y_t,\mu_t) = \lim_{m\to\infty} H(Y^m_t,\mu^m_t) \geq 0. \end{aligned}$$ From the inequality $\int_0^T H(Y^i_s,\mu^i_s)dK^i_s\leq0$, we have $$\begin{aligned} 0\leq \int_0^T H(Y_s,\mu_s)dK_s = \lim_{m\to\infty}\sum_{i=m}^{N_m} \int_0^T \lambda^m_i H(Y^i_s,\mu^i_s)dK^i_s\leq0, \end{aligned}$$ which verifies the Skorokhod condition. ◻ Combining Proposition [Proposition 6](#pr-unique){reference-type="ref" reference="pr-unique"} and Proposition [Proposition 12](#pr-exist){reference-type="ref" reference="pr-exist"}, we are ready to state the main well-posedness theorem for ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). **Theorem 13**. *Assume that Assumption [Assumption 1](#H){reference-type="ref" reference="H"} holds true. Then, there exists a solution $(Y,Z,K)$ to the mean-field reflected BSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}), and the tuple $(Y,Z)$ is unique. If ([\[H-uniqueness\]](#H-uniqueness){reference-type="ref" reference="H-uniqueness"}) holds true, then the reflecting process $K$ is also unique. Moreover, almost every path of $K$ is absolutely continuous with respect to the Lebesgue measure.* # Particle system and mean-field limit {#sec-limit} ## Definition and well-posedness of the particle system The objective of this section is to establish a propagation of chaos result for the mean-field reflected BSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}). Let $\{B^i\}_{1\leq i\leq N}$ be $N$ independent $d$-dimensional standard Brownian motions and denote by $\mathbf{F}^i=({\cal F}^i_t)_{0\leq t\leq T}$ the filtration generated by $B^i$.
Let $\xi^i$ be independent ${\cal F}^i_T$-measurable copies of $\xi$. We will consider the following interacting particle system: for all $i\in\{1,2,\cdots,N\}$ and $t\in[0,T]$, $$\label{RBSDE-N} \begin{dcases} & \begin{aligned} \widehat{Y}^i_t = & \widehat{\xi}^i+\int_t^T f(s,\widehat{Y}^i_s,\widehat{Z}^{i,i}_s,\widehat{\nu}^{N}_{s})ds - \int_t^T \sum_{j=1}^N \widehat{Z}^{i,j}_s dB^j_s + \int_t^T \partial_y H(\widehat{Y}^{i}_{s},\widehat{\mu}^{N}_{s}) d\widehat{K}^i_s \\ & +\frac{1}{N}\sum_{j=1}^N \int_t^T \partial_\mu H(\widehat{Y}^{j}_{s},\widehat{\mu}^{N}_{s})(\widehat{Y}^{i}_{s}) d\widehat{K}^j_s , \end{aligned} \\ & H(\widehat{Y}^i_t, \widehat{\mu}^{N}_t)\geq 0,\quad \int_0^T H(\widehat{Y}^i_s, \widehat{\mu}^{N}_s)d\widehat{K}^i_s=0 , \\ \end{dcases}$$ where $\widehat{\mu}^{N}_t:=N^{-1}\sum_{j=1}^N \delta_{\widehat{Y}^j_t}$ and $\widehat{\nu}^{N}_t:=N^{-1}\sum_{j=1}^N \delta_{(\widehat{Y}^j_t,\widehat{Z}^{j,j}_t)}$. For each $i,j\in\{1,2,\cdots,N\}$, $(\widehat{Y}^i,\widehat{Z}^{i,j},\widehat{K}^i)$ are progressively measurable processes, and $\widehat{K}^i$ is continuous and non-decreasing with $\widehat{K}^i_0=0$. Note that we have to define the terminal condition $\widehat{\xi}^i$ in the above particle system so that $\widehat{\xi}^i$ is ${\cal F}^i_T$-measurable and satisfies the constraint $H(\widehat{\xi}^i,\widehat{\mu}^{N}_T)\geq0$. The following lemma constructs such a modified terminal condition $\widehat{\xi}^i$ from $\xi^i$. **Lemma 14**. *Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied.
Then, there exist $C>0$ and $N$ independent, ${\cal F}^i_T$-measurable random variables $(\widehat{\xi}^i)_{1\leq i\leq N}$ with $\mathbb{E}|\widehat{\xi}^i|^4<\infty$, such that $$\label{est-xi} \begin{aligned} H(\widehat{\xi}^i,\widehat{\mu}^{N}_T)\geq0 \,\,\,\mathrm{and}\,\,\, \mathbb{E}\left| \widehat{\xi}^i-\xi^i \right|^2 \leq C \mathbb{E}\left[ W^2_2(\mu^N_T,\mu_T)\right], \quad i=1,2,\cdots,N, \end{aligned}$$ where $\widehat{\mu}^{N}_T=N^{-1}\sum_{j=1}^N \delta_{\widehat{\xi}^j}$, $\mu^N_T=N^{-1}\sum_{j=1}^N \delta_{\xi^j}$ and $\mu_T={\cal L}(\xi)$.* *Proof.* We define $(X^i)_{1\leq i\leq N}$ as the solution of $$\label{e31} \begin{aligned} X^i_t=\xi^i+\int_0^t \partial_y H(X^{i}_{s},\mu^{X,N}_{s}) ds , \end{aligned}$$ where $\mu^{X,N}:=N^{-1}\sum_{j=1}^N \delta_{X^j}$. Then, from [\[H-beta\]](#H-beta){reference-type="eqref" reference="H-beta"}, [\[Hx-Hy\]](#Hx-Hy){reference-type="eqref" reference="Hx-Hy"} and [\[H1-H2\]](#H1-H2){reference-type="eqref" reference="H1-H2"}, we obtain $$\label{e17} \begin{aligned} & H(X^i_t,\mu^{X,N}_t) - H(\xi^i,\mu^{N}_T) \\ = & \int_0^t |\partial_y H(X^{i}_{s},\mu^{X,N}_{s}) |^2ds +\frac{1}{N}\sum_{j=1}^{N} \int_0^t \partial_\mu H(X^{i}_{s},\mu^{X,N}_{s})(X^{j}_{s})\cdot \partial_y H(X^j_s,\mu^{X,N}_s)ds \\ \geq & \beta^2 t. \\ \end{aligned}$$ Set $t^*:=\inf \{ t\geq0 : H(X^i_t,\mu^{X,N}_t) \geq0 \}$ and $\widehat{\xi}^i:=X^i_{t^*}$. Then, we have $t^*\leq \beta^{-2} H^-(\xi^i,\mu^N_T)$. From $H^-(\xi^i,\mu_T)=0$ and the Lipschitz continuity of $H$, we obtain $$\begin{aligned} & \mathbb{E}\left| \widehat{\xi}^i-\xi^i \right|^2 = \mathbb{E}\left| X^i_{t^*} - X^i_0 \right|^2 \leq M^2 \mathbb{E}\left(t^*\right)^2 \\ \leq & \frac{M^2}{\beta^4} \mathbb{E}\left| H^-(\xi^i,\mu^N_T) - H^-(\xi^i,\mu_T) \right|^2 \\ \leq & \frac{M^4}{\beta^4} \mathbb{E}\left[ W^2_2(\mu^N_T,\mu_T)\right]. \end{aligned}$$ The result follows. ◻ **Proposition 15**.
*Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied and assume that ([\[H-uniqueness\]](#H-uniqueness){reference-type="ref" reference="H-uniqueness"}) holds. Then, the particle system ([\[RBSDE-N\]](#RBSDE-N){reference-type="ref" reference="RBSDE-N"}) is well posed. Moreover, there exists $C>0$ independent of $N$, such that for all $1\leq i\leq N$, $$\label{est-5.2} \begin{gathered} \mathbb{E}\left[\sup_{0\leq t \leq T} \left| \widehat{Y}^i_t \right|^2 + \int_0^T \sum_{j=1}^N\left|\widehat{Z}^{i,j}_s\right|^2ds +\int_0^T |\widehat{k}^i_s|^2 ds \right]\leq C , \\ \sup_{0\leq t \leq T} \mathbb{E}\left| \widehat{Y}^i_t \right|^4 \leq C , \end{gathered}$$ where $\widehat{k}^i_t:=\frac{d\widehat{K}^i_t}{dt}$ is the Radon--Nikodym derivative of $\widehat{K}^i$ with respect to the Lebesgue measure.* *Proof.* The uniqueness is obtained in an identical way to the proof of Proposition [Proposition 6](#pr-unique){reference-type="ref" reference="pr-unique"}. For $m\geq1$, we set $$\begin{aligned} \widehat{k}^{m,i}_t:=m H^-\big(\widehat{Y}^{m,i}_t,\widehat{\mu}^{m,N}_t\big), \quad \widehat{\mu}^{m,N}_t:=\frac{1}{N}\sum_{j=1}^N \delta_{\widehat{Y}^{m,j}_t}, \quad \widehat{K}^{m,i}_t:=\int_0^t \widehat{k}^{m,i}_s \,ds, \end{aligned}$$ and define a penalized system of BSDEs similar to [\[penal\]](#penal){reference-type="eqref" reference="penal"}. Then, the existence is obtained by repeating the procedure in Section [4](#sec-exist){reference-type="ref" reference="sec-exist"}. ◻ ## The mean-field convergence In this section, we study the mean-field convergence of the solution of the particle system [\[RBSDE-N\]](#RBSDE-N){reference-type="eqref" reference="RBSDE-N"}. Let $(Y^i,Z^i,K^i)_{1\leq i\leq N}$ be the solutions of the mean-field reflected BSDE ([\[MF-RBSDE\]](#MF-RBSDE){reference-type="ref" reference="MF-RBSDE"})-([\[refl\]](#refl){reference-type="ref" reference="refl"}) with data $(B^i,\xi^i)_{1\leq i\leq N}$. Denote by $\Delta Y^i:=\widehat{Y}^i-Y^i$ and $\Delta Z^{i,j}:=\widehat{Z}^{i,j}-Z^i \delta_{i,j}$. Then, the following propagation of chaos result holds. **Theorem 16**.
*Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied and assume that ([\[H-uniqueness\]](#H-uniqueness){reference-type="ref" reference="H-uniqueness"}) holds. Then, there exists $C>0$, such that $$\label{est-poc} \begin{aligned} \mathbb{E}\left[\sup_{0\leq t \leq T} \frac{1}{N}\sum_{i=1}^{N}|\Delta Y^i_t|^2+\int_0^T \frac{1}{N}\sum_{i,j=1}^N |\Delta Z^{i,j}_s|^2ds \right] \leq C\left( \frac{1}{\sqrt{N}} +\sup_{0 \leq t \leq T} \mathbb{E}^{\frac{1}{8} }\left[W^4_2(\mu^N_t,\mu_t)\right] \right). \end{aligned}$$* *Proof.* Let $\alpha>0$ be a sufficiently large constant. For $1\leq i\leq N$, we apply Itô's formula to $e^{\alpha t} |\Delta Y^i_t|^2$. From the Lipschitz continuity of $f$, we obtain $$\label{t5.3-e1} \begin{aligned} & e^{\alpha t} |\Delta Y^i_t|^2+\frac{1}{2} \int_t^T e^{\alpha s} \sum_{j=1}^{N} |\Delta Z^{i,j}_s|^2ds \\ \leq & e^{\alpha T} |\Delta \xi^i|^2 + \int_t^T e^{\alpha s}\left( \frac{1}{N}\sum_{j=1}^{N} |\Delta Y^j_s|^2 + W^2_2(\mu^N_s,\mu_s) \right) ds -\int_t^T 2 e^{\alpha s} \Delta Y^i_s \cdot \sum_{j=1}^{N} \Delta Z^{i,j}_s dB^j_s \\ & + J_1^i(t,T) + J_2^i(t,T), \end{aligned}$$ with $$\label{eq:J1} \begin{aligned} J_1^i(t,T) & = \int_t^T 2 e^{\alpha s} \Delta Y^i_s \cdot \left( \partial_y H(\widehat{Y}^{i}_{s},\widehat{\mu}^{N}_{s}) \widehat{k}^i_s +\frac{1}{N}\sum_{j=1}^{N} \partial_\mu H(\widehat{Y}^{j}_{s},\widehat{\mu}^{N}_{s})(\widehat{Y}^{i}_{s}) \widehat{k}^{j}_{s} \right) ds \end{aligned}$$ and $$\label{eq:J2} \begin{aligned} J_2^i(t,T)&= -\int_t^T 2 e^{\alpha s} \Delta Y^i_s \cdot \left( \partial_y H(Y^{i}_{s},\mu_{s}) k^i_s + \widetilde{\mathbb{E}} \left[ \partial_\mu H(\widetilde{Y}_{s},\mu_{s})(Y^{i}_{s}) \widetilde{k}_s \right] \right) ds . \end{aligned}$$ We sum the preceding inequality over $i$, take the expectation, and then examine each term on the r.h.s. separately. 1\. We now deal with $J_1^i(t,T)$.
From the concavity of $H$, we obtain $$\label{t5.3-ej1} \begin{aligned} & \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}\left[ J_1^i(t,T) \right] \\ = & \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T 2 e^{\alpha s} \widehat{k}^i_s \left( \Delta Y^i_s \cdot\partial_y H(\widehat{Y}^{i}_{s},\widehat{\mu}^{N}_{s}) + \frac{1}{N}\sum_{j=1}^{N} \Delta Y^j_s \cdot\partial_\mu H(\widehat{Y}^{i}_{s},\widehat{\mu}^{N}_{s})(\widehat{Y}^{j}_{s}) \right) ds \right] \\ & + \frac{1}{N^2}\sum_{i,j=1}^{N}\mathbb{E}\left[ \int_t^T 2 e^{\alpha s} \left(- \widehat{k}^i_s \Delta Y^j_s \cdot\partial_\mu H(\widehat{Y}^{i}_{s},\widehat{\mu}^{N}_{s})(\widehat{Y}^{j}_{s}) + \widehat{k}^j_s \Delta Y^i_s\cdot \partial_\mu H(\widehat{Y}^{j}_{s},\widehat{\mu}^{N}_{s})(\widehat{Y}^{i}_{s}) \right) ds \right] \\ \leq & \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T e^{\alpha s} \left( H(\widehat{Y}^i_s,\widehat{\mu}^N_s)-H(Y^i_s,\mu^N_s) \right) \widehat{k}^i_s ds \right] \\ \leq & \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T e^{\alpha s} \left( H(Y^i_s,\mu_s)-H(Y^i_s,\mu^N_s) \right) \widehat{k}^i_s ds \right] \\ \leq & \frac{C}{N}\sum_{i=1}^{N} \mathbb{E}^{\frac{1}{2}} \left[ \int_t^T W_2^2(\mu^N_s,\mu_s) ds \right] \, \mathbb{E}^{\frac{1}{2}} \left[ \int_t^T (\widehat{k}^i_s)^2 ds \right] \\ \leq & C \sup_{t \leq s \leq T} \mathbb{E}^{\frac{1}{2}}\left[W_2^2(\mu^N_s,\mu_s)\right]. \\ \end{aligned}$$ 2\. 
The mean of the second term $J_2^i(t,T)$ reads $$\label{t5.3-l123} \begin{aligned} & \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}\left[ J_2^i(t,T) \right] = I_1(t,T) + I_2(t,T) + I_3(t,T), \\ \end{aligned}$$ where $$\label{eq:I1} \begin{aligned} I_1(t,T)&= \frac{2}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T e^{\alpha s} k^i_s \Delta Y^i_s \cdot \left(- \partial_y H(Y^{i}_{s},\mu_{s}) + \partial_y H(Y^{i}_{s},\mu^{N}_{s}) \right) ds \right] , \end{aligned}$$ $$\label{eq:I2} \begin{aligned} I_2(t,T)&= -\frac{2}{N^2}\sum_{i,j=1}^{N} \mathbb{E}\left[ \int_t^T e^{\alpha s} k^i_s \left( \Delta Y^i_s \cdot \partial_y H(Y^{i}_{s},\mu^{N}_{s}) + \Delta Y^j_s \cdot \partial_\mu H(Y^{i}_{s},\mu^{N}_{s})(Y^{j}_{s}) \right) ds \right] , \end{aligned}$$ and $$\label{eq:I3} \begin{aligned} I_3(t,T)&= \frac{2}{N^2}\sum_{i,j=1}^{N} \mathbb{E}\left[ \int_t^T e^{\alpha s} \Delta Y^i_s \cdot\left( \partial_\mu H(Y^{j}_{s},\mu^{N}_{s})(Y^{i}_{s}) k^j_s - \widetilde{\mathbb{E}} \left[ \partial_\mu H(\widetilde{Y}_{s},\mu_{s})(Y^{i}_{s})\widetilde{k}_s \right] \right) ds \right]. \end{aligned}$$ 2.1. For the first part, from [\[est-5.2\]](#est-5.2){reference-type="eqref" reference="est-5.2"}, the Lipschitz continuity of $\partial_y H$ and Hölder's inequality, we obtain $$\label{t5.3-eqi1} \begin{aligned} I_1(t,T) \leq & \frac{C}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T e^{\alpha s} k^i_s |\Delta Y^i_s| W_2(\mu^N_s,\mu_s) ds \right] \\ \leq & \frac{C}{N}\sum_{i=1}^{N} \mathbb{E}^{\frac{1}{2} } \left[\int_t^T (k^i_s)^2 \,ds\right] \mathbb{E}^{\frac{1}{4} } \left[ \int_t^T |\Delta Y^i_s|^4 \,ds\right] \mathbb{E}^{\frac{1}{4} } \left[ \int_t^T W^4_2(\mu^N_s,\mu_s) \,ds\right] \\ \leq & C \sup_{t \leq s \leq T} \mathbb{E}^{\frac{1}{4} } \left[ W^4_2(\mu^N_s,\mu_s) \right]. \end{aligned}$$ 2.2.
For the second part, from the concavity of $H$, the Skorokhod condition, and the inequality $H(\widehat{Y}^i_s,\mu^N_s)\geq0$, we obtain $$\label{t5.3-eq92} \begin{aligned} I_2(t,T) \leq & \frac{2}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T \left( H(Y^i_s,\mu^N_s)-H(\widehat{Y}^i_s,\widehat{\mu}^N_s) \right) k^i_s ds \right] \\ \leq & \frac{2}{N}\sum_{i=1}^{N} \mathbb{E}\left[ \int_t^T \left( H(\widehat{Y}^i_s,\mu^N_s)-H(\widehat{Y}^i_s,\widehat{\mu}^N_s) \right) k^i_s ds \right] \\ \leq & C \sup_{t \leq s \leq T} \mathbb{E}^{\frac{1}{2}} \left[ W^2_2(\mu^N_s,\mu_s) \right]. \\ \end{aligned}$$ 2.3. Then, we deal with $I_3(t,T)$. We split it into two parts: $$\begin{aligned} I_3(t,T) = & (a) + (b), \\ \end{aligned}$$ where $$\label{eq:(a)} \begin{aligned} (a) = \frac{2}{N^2} \sum_{i, j=1}^N \mathbb{E}\left[\int_t^T e^{\alpha s}\Delta Y_s^i \cdot \left( \partial_\mu H(Y_s^j, \mu_s^N)(Y_s^i) -\partial_\mu H(Y_s^j, \mu_s)(Y_s^i) \right) k_s^j d s\right] \end{aligned}$$ and $$\label{eq:(b)} \begin{aligned} (b) = \frac{2}{N^2} \sum_{i, j=1}^N \mathbb{E}\left[\int_t^T e^{\alpha s} \Delta Y_s^i \cdot\left(\partial_\mu H(Y_s^j, \mu_s)(Y_s^i) k_s^j-\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}_s, \mu_s)\left(Y_s^i\right) \widetilde{k}_s\right]\right) d s\right]. \end{aligned}$$ From the Lipschitz continuity of $\partial_{\mu} H$ and [\[est-5.2\]](#est-5.2){reference-type="eqref" reference="est-5.2"}, we obtain in an identical way to the calculation of [\[t5.3-eqi1\]](#t5.3-eqi1){reference-type="eqref" reference="t5.3-eqi1"}: $$\label{t5.3-eq94} \begin{aligned} (a) \leq C \sup_{t \leq s \leq T} \mathbb{E}^{\frac{1}{4} } \left[ W^4_2(\mu^N_s,\mu_s) \right]. 
\\ \end{aligned}$$ Recalling that $(Y^i)_{1\leq i\leq N}$ are independent of each other, we obtain $$\begin{aligned} \mathbb{E}\left[\int_t^T \Delta Y_s^i \cdot\left(\partial_\mu H(Y_s^j, \mu_s)(Y_s^i) k_s^j-\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}_s, \mu_s)\left(Y_s^i\right) \widetilde{k}_s\right]\right)ds\right]=0,\quad \forall 1\leq i\neq j\leq N. \\ \end{aligned}$$ From the preceding equation and the boundedness of $\partial_\mu H$, we obtain $$\begin{aligned} (b) =& \frac{2}{N^2} \sum_{i=1}^N \mathbb{E}\left[\int_t^T e^{\alpha s} \Delta Y_s^i \cdot\left(\partial_\mu H(Y_s^i, \mu_s)(Y_s^i) k_s^i-\widetilde{\mathbb{E}}\left[\partial_\mu H(\widetilde{Y}_s, \mu_s)\left(Y_s^i\right) \widetilde{k}_s\right]\right) d s\right] \\ \leq &\frac{C}{N^2} \sum_{i=1}^N \mathbb{E}^{\frac{1}{2}}\left[\int_t^T \left|\Delta Y_s^i\right|^2 d s\right] \mathbb{E}^{\frac{1}{2}}\left[\int_t^T\left(\left(k_s^i\right)^2+\left(k_s\right)^2\right) d s\right] \leq \frac{C}{N}.\\ \end{aligned}$$ Bringing together the above estimates, we have $$\label{t5.3-ej23} \begin{aligned} &\frac{1}{N} \sum_{i=1}^N \mathbb{E}\left[J_2^i(t, T)\right] \leq C\left(\frac{1}{N}+\sup _{t \leq s \leq T} \mathbb{E}^{\frac{1}{4} }\left[W_2^4\left(\mu_s^N, \mu_s\right)\right]\right). 
\end{aligned}$$ From [\[est-xi\]](#est-xi){reference-type="eqref" reference="est-xi"}, [\[est-5.2\]](#est-5.2){reference-type="eqref" reference="est-5.2"}, [\[t5.3-e1\]](#t5.3-e1){reference-type="eqref" reference="t5.3-e1"}, [\[t5.3-ej1\]](#t5.3-ej1){reference-type="eqref" reference="t5.3-ej1"}, [\[t5.3-ej23\]](#t5.3-ej23){reference-type="eqref" reference="t5.3-ej23"}, and Gronwall's lemma, we obtain $$\label{t5.3-supe} \begin{aligned} &\sup _{0 \leq t \leq T} \mathbb{E}\left[\frac{1}{N} \sum_{i=1}^N\left|\Delta Y_t^i\right|^2\right]+\mathbb{E}\left[\int_0^T \frac{1}{N} \sum_{i, j=1}^N\left|\Delta Z_s^{i, j}\right|^2 d s\right]\\ \leq& C\left(\frac{1}{N} \sum_{i=1}^N \mathbb{E}\left|\Delta \xi^i\right|^2 +\sup _{0 \leq t \leq T} \mathbb{E}^{\frac{1}{2}}\left[W_2^2\left(\mu_t^N, \mu_t \right)\right] +\frac{1}{N} +\sup _{0 \leq t \leq T} \mathbb{E}^{\frac{1}{4} }\left[W_2^4\left(\mu_t^N, \mu_t\right)\right]\right)\\ \leq& C\left(\frac{1}{N}+\sup _{0 \leq t \leq T} \mathbb{E}^{\frac{1}{4}}\left[W_2^4\left(\mu_t^N, \mu_t\right)\right]\right). \\ \end{aligned}$$ Coming back to [\[t5.3-e1\]](#t5.3-e1){reference-type="eqref" reference="t5.3-e1"}, summing the inequality over $i$, taking the supremum in $t$ and the expectation, we deduce from the BDG inequality $$\label{t5.3-esup} \begin{aligned} &\mathbb{E}\sup _{0 \leq t \leq T}\left[ \frac{1}{N} \sum_{i=1}^N e^{\alpha t}\left|\Delta Y_t^i\right|^2 \right] +\mathbb{E}\left[\int_0^T \frac{1}{N} \sum_{i, j=1}^N e^{\alpha s}\left|\Delta Z_s^{i, j}\right|^2 d s\right]\\ \leq &\frac{C}{N} \mathbb{E}\left[ \sum_{i=1}^N e^{\alpha T}\left|\Delta\xi^i\right|^2 + \sum_{i=1}^N \int_0^T e^{\alpha s} |\Delta Y^i_s|^2 \,ds + \sum_{k=1}^2 \sup _{0 \leq t \leq T}\sum_{i=1}^N J_k^i(t, T) \right].
\\ \end{aligned}$$ In view of the definitions [\[eq:J1\]](#eq:J1){reference-type="eqref" reference="eq:J1"} and [\[eq:J2\]](#eq:J2){reference-type="eqref" reference="eq:J2"} of $J^i_k(t,T)$, from the boundedness of $\partial_y H$ and $\partial_\mu H$, we obtain $$\label{t5.3-esup-j} \begin{aligned} & \frac{1}{N} \mathbb{E}\left[\sum_{k=1}^2\sup _{0 \leq t \leq T}\sum_{i=1}^N J_k^i(t, T)\right] \\ \leq & \frac{C}{N} \sum_{i=1}^N \mathbb{E}\left[\int_0^T e^{\alpha s} \left(k_s^i+\mathbb{E}[k_s^i]\right) \left|\Delta Y_s^i\right| d s\right]\\ \leq & \frac{C}{N} \sum_{i=1}^N \mathbb{E}^{\frac{1}{2} }\left[\int_0^T \left(k_s^i\right)^2 d s\right] \mathbb{E}^{\frac{1}{2} }\left[\int_0^T \left|\Delta Y_s^i\right| ^2 ds\right] \\ \leq & C\left(\frac{1}{\sqrt{N}}+\sup _{0 \leq t \leq T} \mathbb{E}^{\frac{1}{8}}\left[W_2^4\left(\mu_t^N, \mu_t\right)\right]\right).\\ \end{aligned}$$ From [\[est-xi\]](#est-xi){reference-type="eqref" reference="est-xi"}, [\[t5.3-esup\]](#t5.3-esup){reference-type="eqref" reference="t5.3-esup"}, the last two inequalities, and Gronwall's lemma, we obtain the desired result. ◻ **Corollary 17**. *Let Assumption [Assumption 1](#H){reference-type="ref" reference="H"} be satisfied and assume that ([\[H-uniqueness\]](#H-uniqueness){reference-type="ref" reference="H-uniqueness"}) holds. Assume that $f^0\in H^{q,n}$ and $\mathbb{E}|\xi|^q<\infty$ for some $q>4$. Then, there exists $C>0$, such that $$\label{est-poc-cor} \begin{aligned} \mathbb{E}\left[\sup_{0\leq t \leq T} \frac{1}{N}\sum_{i=1}^{N}|\Delta Y^i_t|^2+\int_0^T \frac{1}{N}\sum_{i,j=1}^N |\Delta Z^{i,j}_s|^2ds \right] \leq C \begin{dcases} N^{-\frac{1}{8} }, & n<4, \\ N^{-\frac{1}{8} } \ln (N+1) , & n=4, \\ N^{-\frac{1}{2n} } , & n>4. \\ \end{dcases} \end{aligned}$$* *Proof.* By repeating the proof of Lemma [Lemma 3](#lem-a-priori-supE){reference-type="ref" reference="lem-a-priori-supE"}, we can show that $$\begin{aligned} \sup_{0 \leq t \leq T} \mathbb{E}|\widehat{Y}^i_t|^q<\infty, \quad i=1,2,\cdots,N.
\end{aligned}$$ Hence, we obtain the result by combining the preceding theorem and [@FG15 Theorem 1]. ◻ # Related obstacle problem for PDEs in Wasserstein space {#sec:obstacle problem} In this section, we connect the MF-RBSDE with an obstacle problem for PDEs in Wasserstein space. Throughout this section, we assume that $Y_t$ is a one-dimensional process, i.e., $n=1$. Note that [\[H-beta\]](#H-beta){reference-type="eqref" reference="H-beta"} and [\[Hx-Hy\]](#Hx-Hy){reference-type="eqref" reference="Hx-Hy"} imply that $\partial_y H(y,\mu)$ and $\partial_\mu H(y,\mu)$ are either always positive or always negative in the one-dimensional case. Without loss of generality, we assume that $$\label{eq:Hy,Hmu} \begin{aligned} \partial_y H(y,\mu) < 0 \,\,\, \mathrm{and} \,\,\, \partial_\mu H(y,\mu)(v)<0, \quad \forall (y,v,\mu)\in \mathbb{R}\times\mathbb{R}\times{\cal P}_2(\mathbb{R}). \end{aligned}$$ We consider a Markovian setup, where the terminal condition $\xi$ is given by $g(X^{t, X_0}_T, [X^{t, X_0}_T])$, and $X^{t, X_0}$ is the solution of an SDE.
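For orientation, a simple hypothetical example of a map $H$ matching the sign convention [\[eq:Hy,Hmu\]](#eq:Hy,Hmu){reference-type="eqref" reference="eq:Hy,Hmu"} (our own illustration, not taken from the assumptions above) is the linear functional

```latex
% Hypothetical example: for a fixed threshold c \in \mathbb{R}, set
\[
  H(y,\mu) \;=\; c - y - \int_{\mathbb{R}} v \,\mu(dv),
  \qquad
  \partial_y H(y,\mu) = -1,
  \qquad
  \partial_\mu H(y,\mu)(v) = -1,
\]
% so that both derivatives are negative, and
%   |\partial_y H|^2 + \partial_\mu H \cdot \partial_y H = 1 + 1 = 2,
% a strictly positive lower bound of the type required in (H-beta).
```

In this example the constraint $H(Y_t,[Y_t])\geq 0$ reads $Y_t+\mathbb{E}[Y_t]\leq c$, i.e., a mean-dependent upper barrier.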
Namely, we consider the following Forward-Backward SDE with mean-field reflection (MF-RFBSDE): $$\label{eq:FBSDE} \begin{dcases} \begin{aligned} X^{t, X_0}_s =& X_0 + \int_t^s b(r, X^{t, X_0}_r) \,dr + \int_t^s \sigma(r, X^{t, X_0}_r) \,dB_r, \quad s\in[t,T], \\ X^{t, X_0}_s =& X_0, \quad s\in[0,t) \\ Y^{t, X_0}_s =& g(X^{t, X_0}_T, [X^{t, X_0}_T]) + \int_s^T f(r, X^{t, X_0}_r, Y^{t, X_0}_r, [X^{t, X_0}_r], [Y^{t, X_0}_r]) \,dr \\ &- \int_s^T Z^{t, X_0}_r \,dB_r + R^{t, X_0}_T - R^{t, X_0}_s , \quad s\in[t,T], \\ Y^{t, X_0}_s =& Y^{t, X_0}_t, \quad s\in[0,t), \\ H( Y^{t, X_0 }_s & , [Y^{t, X_0}_s]) \geq0, \quad s\in[t,T], \quad \int_t^T H(Y^{t, X_0}_r,[Y^{t, X_0}_r]) \,dK^{t, X_0}_r = 0, \\ \end{aligned} \end{dcases}$$ where $$\label{eq:R-def} \begin{aligned} & R^{t, X_0}_s := \int_t^s \partial_y H(Y^{t, X_0}_{r}, [Y^{t, X_0}_r]) \,dK^{t, X_0}_r + \widetilde{\mathbb{E}} \left[ \int_t^s \partial_\mu H(\widetilde{Y}^{t, X_0}_{r}, [Y^{t, X_0}_r])(Y^{t, X_0}_{r}) \, d \widetilde{K}^{t, X_0}_r \right], & s\in[t,T], \end{aligned}$$ and the superscript $(t,X_0)$ stands for the initial condition of the SDE. **Remark 18**. We assume that $f$ does not depend on $Z$, and only depends on the marginal distribution $\mu$ of $\nu$. This assumption is due to the lack of a comparison principle for mean-field FBSDEs (see [@BLP09 Section 3]), which poses difficulty in analyzing the existence of a decoupling field when the driver depends on the $Z$ argument. The coefficients $b, \sigma$ of the SDE and the functions $f,g$ satisfy the following conditions. **Assumption 2**. 1. 
*[\[H:b,sigma\]]{#H:b,sigma label="H:b,sigma"} The functions $b$ and $\sigma$ are progressively measurable mappings from $\Omega \times [0, T] \times \mathbb{R}^l$ to $\mathbb{R}^l$ and $\mathbb{R}^{l \times d}$, respectively, such that for all $s\in[0,T]$ and $x_1, x_2 \in \mathbb{R}^l$, $$\label{eq:b,sigma} \begin{gathered} \left| b(s, x_1) - b(s, x_2) \right| +\left| \sigma(s, x_1) - \sigma(s, x_2) \right| \leq L \left| x_1 - x_2 \right|, \\ \left| b(s, 0) \right| + \left| \sigma(s, 0) \right| \leq L . \end{gathered}$$* 2. *The driver $f$ is a mapping from $[0,T]\times \mathbb{R}^l \times \mathbb{R}\times {\cal P}_2(\mathbb{R}^l) \times {\cal P}_2(\mathbb{R})$ to $\mathbb{R}$, which satisfies the linear growth condition and the Lipschitz condition, i.e., there exists $L>0$, such that for all $t\in[0,T], x,x_1,x_2\in\mathbb{R}^l, y_1,y_2\in\mathbb{R}, \lambda,\lambda_1,\lambda_2 \in {\cal P}_2(\mathbb{R}^l)$ and $\mu_1,\mu_2 \in {\cal P}_2(\mathbb{R})$, $$\label{eq:f-linear-growth} \begin{aligned} \left| f(t,x,0,\lambda,\delta_0) \right| \leq L \left( 1+ \left| x \right| + W_2(\lambda,\delta_0) \right), \end{aligned}$$ $$\label{eq:f-Lipschitz-2} \begin{aligned} & \left| f(t,x_1,y_1,\lambda_1,\mu_1) - f(t,x_2,y_2,\lambda_2,\mu_2) \right| \\ \leq & L\left(|x_1-x_2| + |y_1-y_2| +W_2(\lambda_1,\lambda_2) +W_2(\mu_1,\mu_2) \right). \end{aligned}$$* 3. *[\[H:g\]]{#H:g label="H:g"} The function $g$ is a Lipschitz continuous mapping from $\mathbb{R}^l \times {\cal P}_2(\mathbb{R})$ to $\mathbb{R}$, i.e., for all $x_1,x_2\in\mathbb{R}^l$ and $\lambda_1,\lambda_2 \in {\cal P}_2(\mathbb{R}^l)$, $$\label{eq:g-Lipschitz} \begin{aligned} & \left| g(x_1,\lambda_1) - g(x_2,\lambda_2) \right| \leq L\left(|x_1-x_2| +W_2(\lambda_1,\lambda_2) \right). \end{aligned}$$* Under these assumptions, the SDE part of [\[eq:FBSDE\]](#eq:FBSDE){reference-type="eqref" reference="eq:FBSDE"} has a unique strong solution $X^{t, X_0}$. 
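Under the Lipschitz and boundedness conditions in [\[eq:b,sigma\]](#eq:b,sigma){reference-type="eqref" reference="eq:b,sigma"}, the forward diffusion can be approximated by a standard Euler--Maruyama scheme. The following is a minimal sketch (our own illustration; the coefficients `b` and `sigma` below are hypothetical Lipschitz examples, not taken from the paper, with scalar state $l=d=1$):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, t0, T, n_steps, n_paths, rng):
    """Simulate dX_s = b(s, X_s) ds + sigma(s, X_s) dB_s on [t0, T].

    b and sigma are Lipschitz coefficients (s, x) -> array, as required
    for the forward SDE; the state is scalar here for simplicity.
    """
    dt = (T - t0) / n_steps
    x = np.full(n_paths, float(x0))
    s = t0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + b(s, x) * dt + sigma(s, x) * dB
        s += dt
    return x

# Hypothetical Ornstein-Uhlenbeck-type coefficients (Lipschitz in x):
b = lambda s, x: -x
sigma = lambda s, x: 0.5 * np.ones_like(x)

rng = np.random.default_rng(0)
xT = euler_maruyama(b, sigma, x0=1.0, t0=0.0, T=1.0,
                    n_steps=200, n_paths=10_000, rng=rng)
# For dX = -X ds + 0.5 dB with X_0 = 1, the exact mean is E[X_1] = e^{-1}.
mean_xT = xT.mean()
```

Under the Lipschitz assumption, this scheme converges strongly at rate $1/2$ in the number of time steps, which is why the strong solution $X^{t,X_0}$ can be used as a numerical building block for the backward component.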
From Theorem [Theorem 13](#thm-exist-unique){reference-type="ref" reference="thm-exist-unique"}, there exists a unique solution $(Y^{t, X_0},Z^{t, X_0},K^{t, X_0})$ to the mean-field reflected BSDE part of [\[eq:FBSDE\]](#eq:FBSDE){reference-type="eqref" reference="eq:FBSDE"}. Therefore, the MF-RFBSDE [\[eq:FBSDE\]](#eq:FBSDE){reference-type="eqref" reference="eq:FBSDE"} admits a unique solution. We will connect the solution of [\[eq:FBSDE\]](#eq:FBSDE){reference-type="eqref" reference="eq:FBSDE"} to a viscosity solution of an obstacle problem for PDE. Consider the following problem in Wasserstein space: $$\label{PDE} \begin{aligned} \begin{dcases} \min \left\{ \left( \partial_{t} + {\cal L}_t \right) u(t,x,\lambda) + f(t,x,u(t,x,\lambda), \lambda, u(t,\cdot,\lambda)_* \lambda) , \,\, H(u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) \right\} = 0, \\ u(T,\cdot,\cdot) = g, \\ \end{dcases} \end{aligned}$$ where $u(t,\cdot,\lambda)_* \lambda$ stands for the pushforward measure of $\lambda$ under the mapping $u(t, \cdot, \lambda)$, and ${\cal L}_t$ is an operator given by $$\label{eq:Lc} \begin{aligned} {\cal L}_t \varphi(t,x,\lambda):= & b(t,x) \nabla_x \varphi(t,x,\lambda) + \frac{1}{2} \mathrm{Tr}\left[ \left( \sigma\sigma^* \right)(t,x) \nabla^2_x \varphi(t,x,\lambda) \right] \\ & + \mathbb{E}_{X\sim\lambda}\left[ b(t,X) \cdot \partial_\mu \varphi(t,x,\lambda)(X) + \frac{1}{2} \mathrm{Tr} \left[ \left( \sigma\sigma^* \right)(t,X) \partial_v \partial_\mu \varphi(t,x,\lambda)(X) \right] \right] , \end{aligned}$$ for all smooth $\varphi:[0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l) \to \mathbb{R}$. We define the notion of viscosity solution of [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"} as follows: **Definition 19**. A function $u:[0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l) \to \mathbb{R}$ is called a viscosity subsolution (resp. supersolution) of the obstacle problem [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"} if: 1. 
the function $u$ is continuous and locally bounded, 2. for any $(t,x,\lambda)\in [0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$, for any test function $\varphi:[0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l) \to \mathbb{R}$ (see [@CD18 Definition 11.18] for the definition of test functions) such that $u-\varphi$ has a global minimum (resp. maximum) at $(t,x,\lambda)$, we have $$\label{eq:sub-super solution} \begin{aligned} \min \left\{ \left( \partial_{t} + {\cal L}_t \right) \varphi(t,x,\lambda) + f(t,x,u(t,x,\lambda), \lambda, u(t,\cdot,\lambda)_* \lambda) , \,\, H(u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) \right\} \leq 0 \, (\mathrm{resp. } \geq0), \end{aligned}$$ 3. $u(T,\cdot,\cdot)=g$ on $\mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$. A function $u:[0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l) \to \mathbb{R}$ is called a viscosity solution if it is both a subsolution and a supersolution. For any $(t,x,\lambda) \in [0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$, we define $$\label{eq:decoupling} \begin{aligned} u(t,x,\lambda) := Y^{t,x,\lambda}_t, \end{aligned}$$ where $(X^{t,x,\lambda},Y^{t,x,\lambda},Z^{t,x,\lambda})$ is the solution of $$\label{eq:FBSDE-x} \begin{dcases} \begin{aligned} X^{t, x, \lambda}_s =& x + \int_t^s b(r, X^{t, x, \lambda}_r) \,dr + \int_t^s \sigma(r, X^{t, x, \lambda}_r) \,dB_r, \quad s\in[t,T], \\ X^{t, x, \lambda}_s =& x, \quad s\in[0,t), \\ Y^{t, x, \lambda}_s =& g(X^{t, x, \lambda}_T, [X^{t, \lambda}_T]) + \int_s^T f(r, X^{t, x, \lambda}_r, Y^{t, x, \lambda}_r, [X^{t, \lambda}_r], [Y^{t,\lambda}_r]) \,dr \\ &- \int_s^T Z^{t, x, \lambda}_r \,dB_r + R^{t, x, \lambda}_T - R^{t, x, \lambda}_s , \quad s\in[t,T], \\ Y^{t, x, \lambda}_s =& Y^{t, x, \lambda}_t, \quad s\in[0,t), \\ \end{aligned} \end{dcases}$$ and $$\label{eq:Rx-def} \begin{aligned} & R^{t, x, \lambda}_s := \mathbb{E}\left[ R^{t, \lambda}_s \mid X^{t, \lambda}_t = x \right], \quad s\in[t,T].
\\ \end{aligned}$$ We present the following result that connects $u$ with the solution of the obstacle problem [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"}. **Theorem 20**. *Under Assumption [Assumption 1](#H){reference-type="ref" reference="H"} 2.(a)-(g) and [Assumption 2](#H:SDE){reference-type="ref" reference="H:SDE"}, the decoupling field $u$ defined in [\[eq:decoupling\]](#eq:decoupling){reference-type="eqref" reference="eq:decoupling"} is a viscosity solution of the obstacle problem [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"}.* **Remark 21**. Theorem [Theorem 20](#thm-viscosity){reference-type="ref" reference="thm-viscosity"} establishes the existence of a viscosity solution of the obstacle problem [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"} in the Wasserstein space setting. However, this result does not address the issue of uniqueness for the viscosity solution. The study of uniqueness for viscosity solutions of PDEs in Wasserstein space is inherently difficult, as traditional techniques used for proving uniqueness, such as comparison principles, often do not apply in this context [@BLP09 Section 3]. Recently, Talbi et al. [@TTZ22] introduced an alternative notion of viscosity solutions for obstacle problems in Wasserstein space, establishing the existence, uniqueness, stability, and comparison principles. While their results do not directly apply to our specific problem, their work may provide insights for future research. The following lemma addresses the (joint) continuity of the decoupling field $u$. **Lemma 22**. 
*Under Assumption [Assumption 1](#H){reference-type="ref" reference="H"} 2.(a)-(g) and [Assumption 2](#H:SDE){reference-type="ref" reference="H:SDE"}, the decoupling field $u$ defined in [\[eq:decoupling\]](#eq:decoupling){reference-type="eqref" reference="eq:decoupling"} is continuous in $[0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$.* *Proof.* For simplicity, we denote $X^i:=X^{t^i,x^i,\lambda^i}$ and $Y^i:=Y^{t^i,x^i,\lambda^i}$ for $i\geq0$. Fix $(t^0,x^0,\lambda^0)$ in $[0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$. For any sequence $(t^m,x^m,\lambda^m)_{m\geq1}$ converging to $(t^0,x^0,\lambda^0)$ as $m\to\infty$, we will prove that $$\label{eq:u-continuous} \begin{aligned} \left| u(t^m,x^m,\lambda^m) - u(t^0,x^0,\lambda^0) \right| = \left| Y^{m}_{t^m} - Y^{0}_{t^0} \right| \to 0. \end{aligned}$$ We split it into two terms and then deal with them separately. Note that $Y^m_{t^m}$ and $Y^0_{t^0}$ are deterministic, so $$\label{eq:Ym-Y0} \begin{aligned} & \left| Y^m_{t^m} - Y^0_{t^0} \right| = \left| \mathbb{E}\left[ Y^m_{t^m} - Y^0_{t^0} \right] \right| \leq \left| \mathbb{E}\left[ Y^m_{t^m} - Y^0_{t^m} \right] \right| + \left| \mathbb{E}\left[ Y^0_{t^m} - Y^0_{t^0} \right] \right| . \\ \end{aligned}$$ From Proposition [Proposition 5](#pr:stability){reference-type="ref" reference="pr:stability"}, we have $$\label{eq:Ym-Y0:1} \begin{aligned} & \left| \mathbb{E}\left[ Y^m_{t^m} - Y^0_{t^m} \right] \right| \leq C \mathbb{E}^{\frac{1}{2} } \left[ \left| \Delta g \right|^2 + \int_0^T \left| \delta f(s,Y^0_s,[Y^{t^0,\lambda^0}_s]) \right|^2 \,ds \right], \end{aligned}$$ where $$\label{eq:delta-g-f} \begin{gathered} \Delta g := g(X^{m}_T, [X^{t^m, \lambda^m}_T]) - g(X^{0}_T, [X^{t^0, \lambda^0}_T]), \\ \delta f(s,y,\mu):=f(s,X^m_s,y,[X^{t^m, \lambda^m}_s],\mu) - f(s,X^0_s,y,[X^{t^0, \lambda^0}_s],\mu). \end{gathered}$$ From the Lipschitz continuity of $g$ and $f$ and the standard estimate [@Z17 Theorem 3.4.3] of solutions of SDE, the r.h.s. 
of [\[eq:Ym-Y0:1\]](#eq:Ym-Y0:1){reference-type="eqref" reference="eq:Ym-Y0:1"} converges to $0$ as $m\to\infty$. Next, we will deal with the second term in [\[eq:Ym-Y0\]](#eq:Ym-Y0){reference-type="eqref" reference="eq:Ym-Y0"}. $$\label{eq:Ym-Y0:2} \begin{aligned} & \left| \mathbb{E}\left[ Y^0_{t^m} - Y^0_{t^0} \right] \right| \leq \mathbb{E}\left| \int_{t^0}^{t^m} f(r,X^0_r,Y^0_r,[X^{t^0, \lambda^0}_r],[Y^{t^0,\lambda^0}_r]) \,dr \right| + \mathbb{E}\left| R^0_{t^m} - R^0_{t^0} \right|. \end{aligned}$$ In view of the definition [\[eq:R-def\]](#eq:R-def){reference-type="eqref" reference="eq:R-def"} and [\[eq:Rx-def\]](#eq:Rx-def){reference-type="eqref" reference="eq:Rx-def"} of $R^0$, and [\[H-M\]](#H-M){reference-type="eqref" reference="H-M"}, we have $$\label{eq:E|R|} \begin{aligned} & \mathbb{E}\left| R^0_{t^m} - R^0_{t^0} \right| \leq C \mathbb{E}\left| \int_{t^0}^{t^m} k^0_r \,dr \right| , \end{aligned}$$ where $k^0_r:=\frac{dK^0_r}{dr}$. Recall that we have $k^0\in L^2(\Omega\times[0,T])$ from the proof of Proposition [Proposition 12](#pr-exist){reference-type="ref" reference="pr-exist"}. From the growth condition of $f$ and the last two inequalities, we obtain $$\label{eq:Ym-Y0:2.1} \begin{aligned} & \left| \mathbb{E}\left[ Y^0_{t^m} - Y^0_{t^0} \right] \right| \leq C \mathbb{E}\left| \int_{t^0}^{t^m} \left( 1 + \left| X^0_r \right| + \left| Y^0_r \right| + \mathbb{E}\left| X^{t^0,\lambda^0}_r \right| + \mathbb{E}\left| Y^{t^0,\lambda^0}_r \right| \right) \,dr \right| + C \mathbb{E}\left| \int_{t^0}^{t^m} k^0_r \,dr \right| , \end{aligned}$$ which converges to $0$ as $m\to\infty$. ◻ *Proof of Theorem [Theorem 20](#thm-viscosity){reference-type="ref" reference="thm-viscosity"}.* [\[proof:thm-viscosity\]]{#proof:thm-viscosity label="proof:thm-viscosity"} Our approach is inspired by the proofs of [@EKPPM97 Theorem 8.5] and [@BCDH20 Theorem 37]. 
We will first prove that the function $u$ defined in [\[eq:decoupling\]](#eq:decoupling){reference-type="eqref" reference="eq:decoupling"} is a subsolution of the obstacle problem [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"}. Let $(t,x,\lambda)\in [0,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$. For any test function $\varphi$, such that $u-\varphi$ has a global minimum at $(t,x,\lambda)$, we aim to show that [\[eq:sub-super solution\]](#eq:sub-super solution){reference-type="eqref" reference="eq:sub-super solution"} is valid. Recall that $u(t,x,\lambda) = Y^{t,x,\lambda}_t$. From the dynamics [\[eq:FBSDE\]](#eq:FBSDE){reference-type="eqref" reference="eq:FBSDE"} of $Y^{t,x,\lambda}$, we obtain, for all $s\in[t,T]$, $$\label{eq:Eu} \begin{aligned} & \mathbb{E}\left[ u(s,X^{t,x,\lambda}_s,[X^{t,\lambda}_s]) \right] \\ = & u(t,x,\lambda)-\mathbb{E}\left[ \int_t^s f(r,X^{t,x,\lambda}_r,Y^{t,x,\lambda}_r,[X^{t,\lambda}_r],[Y^{t,\lambda}_r]) \,dr \right] - \mathbb{E}\left[ R^{t,x,\lambda}_s - R^{t,x,\lambda}_t \right]. \\ \end{aligned}$$ Applying Itô's formula, we derive $$\label{eq:Ephi} \begin{aligned} \mathbb{E}\left[ \varphi(s,X^{t,x,\lambda}_s,[X^{t,\lambda}_s]) \right] = \varphi(t,x,\lambda) + \mathbb{E}\left[ \int_t^s \left( \partial_{t} + {\cal L}_t \right) \varphi(r,X^{t,x,\lambda}_r,[X^{t,\lambda}_r]) \,dr \right] . \end{aligned}$$ Without loss of generality, we assume that $u(t,x,\lambda)=\varphi(t,x,\lambda)$. Since $u-\varphi$ is minimized at $(t,x,\lambda)$, we have $$\label{eq:u>=phi} \begin{aligned} \mathbb{E}\left[ u(s,X^{t,x,\lambda}_s,[X^{t,\lambda}_s]) \right] \geq \mathbb{E}\left[ \varphi(s,X^{t,x,\lambda}_s,[X^{t,\lambda}_s]) \right]. 
\end{aligned}$$ Combining [\[eq:Eu\]](#eq:Eu){reference-type="eqref" reference="eq:Eu"} with [\[eq:Ephi\]](#eq:Ephi){reference-type="eqref" reference="eq:Ephi"}, we deduce $$\label{eq:E(u-phi)} \begin{aligned} & \mathbb{E}\left[ \int_t^s \left( \left( \partial_{t} + {\cal L}_t \right) \varphi(r,X^{t,x,\lambda}_r,[X^{t,\lambda}_r]) + f(r,X^{t,x,\lambda}_r,Y^{t,x,\lambda}_r,[X^{t,\lambda}_r],[Y^{t,\lambda}_r]) \right) \,dr \right] \\ \leq & - \mathbb{E}\left[ R^{t,x,\lambda}_s - R^{t,x,\lambda}_t \right] . \end{aligned}$$ We now verify [\[eq:sub-super solution\]](#eq:sub-super solution){reference-type="eqref" reference="eq:sub-super solution"}. Assume now that $H(u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda)>0$. Then, from the continuity of $u$, there exists $\delta>0$, such that for all $(s,x',\lambda') \in [t,T]\times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$ with $(s-t) + |x-x'| + W_2(\lambda,\lambda') \leq 3\delta$, we have $$\label{eq:H>0} \begin{aligned} H(u(s,x',\lambda'), u(s,\cdot,\lambda')_* \lambda')>0. \end{aligned}$$ Moreover, from the standard estimate [@Z17 Theorem 3.4.3] of solutions of SDEs, there exists $s_1>t$, such that $W_2(\lambda,[X^{t,\lambda}_s])\leq \delta,\,\, \forall s\in [t,s_1]$. Now fix $s\in [t,s_1 \wedge (t+\delta)]$ and estimate the r.h.s. of [\[eq:E(u-phi)\]](#eq:E(u-phi)){reference-type="eqref" reference="eq:E(u-phi)"}. From [\[eq:H>0\]](#eq:H>0){reference-type="eqref" reference="eq:H>0"} and the Skorokhod condition, we deduce that $$\begin{aligned} \int_{t}^{s} \mathbf{1}_{\left\{ \left| X^{t,x,\lambda}_r - x \right| \leq \delta \right\}} k^{t,x,\lambda}_r \,dr = 0.
\end{aligned}$$ From [\[eq:E\|R\|\]](#eq:E|R|){reference-type="eqref" reference="eq:E|R|"}, Young's inequality and Chebyshev's inequality, we deduce that $$\label{eq:ER} \begin{aligned} & \left| \mathbb{E}\left[ R^{t,x,\lambda}_s - R^{t,x,\lambda}_t \right] \right| \leq C \mathbb{E}\int_{t}^{s} k^{t,x,\lambda}_r \,dr = C \mathbb{E}\int_{t}^{s} \mathbf{1}_{\left\{ \left| X^{t,x,\lambda}_r - x \right| > \delta \right\}} k^{t,x,\lambda}_r \,dr \\ \leq & C \mathbb{P}^{\frac{1}{2} } \left[ \sup_{t \leq r \leq s} \left| X^{t,x,\lambda}_r - x \right| > \delta \right] \mathbb{E}^{\frac{1}{2}} \left[ \int_{t}^{s} \left( k^{t,x,\lambda}_r \right)^2 \,dr \right] \\ \leq & \frac{C}{\delta^2} \mathbb{E}^{\frac{1}{2}} \left[ \sup_{t \leq r \leq s} \left| X^{t,x,\lambda}_r - x \right|^4 \right] \mathbb{E}^{\frac{1}{2}} \left[ \int_{t}^{s} \left( k^{t,x,\lambda}_r \right)^2 \,dr \right] \\ \leq & \frac{C}{\delta^2} (s-t) \mathbb{E}^{\frac{1}{2}} \left[ \int_{t}^{s} \left( k^{t,x,\lambda}_r \right)^2 \,dr \right] \\ \end{aligned}$$ Dividing both sides of [\[eq:E(u-phi)\]](#eq:E(u-phi)){reference-type="eqref" reference="eq:E(u-phi)"} by $(s-t)$ and letting $s\to t$, from the last inequality and $$\label{eq:Ek->0} \begin{aligned} \lim_{s\to t} \mathbb{E}\left[ \int_{t}^{s} \left( k^{t,x,\lambda}_r \right)^2 \,dr \right] = 0, \end{aligned}$$ we obtain $$\label{eq:d_t phi} \begin{aligned} \left( \partial_{t} + {\cal L}_t \right) \varphi(t,x,\lambda) + f(t,x,u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) \leq 0. \end{aligned}$$ Therefore, we have $$\label{eq:subsolution} \begin{aligned} \min \left\{ \left( \partial_{t} + {\cal L}_t \right) \varphi(t,x,\lambda) + f(t,x,u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) , \,\, H(u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) \right\} \leq 0. \end{aligned}$$ We conclude the proof by showing that $u$ is also a supersolution of the obstacle problem [\[PDE\]](#PDE){reference-type="eqref" reference="PDE"}. 
We already know that $H(u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) \geq 0$. Let $(t,x,\lambda) \in [0,T] \times \mathbb{R}^l \times {\cal P}_2(\mathbb{R}^l)$. For any test function $\varphi$, such that $u - \varphi$ has a global maximum at $(t,x,\lambda)$, we obtain the following using the same technique as in the calculation of [\[eq:E(u-phi)\]](#eq:E(u-phi)){reference-type="eqref" reference="eq:E(u-phi)"}: $$\label{eq:E(u-phi),2} \begin{aligned} & \mathbb{E}\left[ \int_t^s \left( \left( \partial_{t} + {\cal L}_t \right) \varphi(r,X^{t,x,\lambda}_r,[X^{t,\lambda}_r]) + f(r,X^{t,x,\lambda}_r,Y^{t,x,\lambda}_r,[X^{t,\lambda}_r],[Y^{t,\lambda}_r]) \right) \,dr \right] \\ \geq & - \mathbb{E}\left[ R^{t,x,\lambda}_s - R^{t,x,\lambda}_t \right] \geq 0, \quad \forall s\in[t,T], \\ \end{aligned}$$ with the last inequality coming from [\[eq:Hy,Hmu\]](#eq:Hy,Hmu){reference-type="eqref" reference="eq:Hy,Hmu"} and the definitions [\[eq:R-def\]](#eq:R-def){reference-type="eqref" reference="eq:R-def"} and [\[eq:Rx-def\]](#eq:Rx-def){reference-type="eqref" reference="eq:Rx-def"} of $R^{t,x,\lambda}$. Dividing both sides of [\[eq:E(u-phi),2\]](#eq:E(u-phi),2){reference-type="eqref" reference="eq:E(u-phi),2"} by $(s-t)$ and letting $s \to t$, we obtain $$\label{eq:d_t phi,2} \begin{aligned} \left( \partial_{t} + {\cal L}_t \right) \varphi(t,x,\lambda) + f(t,x,u(t,x,\lambda), u(t,\cdot,\lambda)_* \lambda) \geq 0. \end{aligned}$$ ◻ # Declarations {#declarations .unnumbered} This work was supported by National Natural Science Foundation of China (Grant No. 12031009). The author has no competing interests to disclose. P. Briand, P. Cardaliaguet, P. E. C. De Raynal and Y. Hu (2020). Forward and backward stochastic differential equations with normal constraints in law. , 130(12): 7021--7097. P. Briand, R. Elie and Y. Hu (2018). BSDEs with mean reflection. , 28(1): 482--510. P. Briand and H. Hibon (2021). Particles Systems for mean reflected BSDEs. , 131: 253--275. R. 
Buckdahn, J. Li and S. Peng (2009). Mean-field backward stochastic differential equations and related partial differential equations. , 119(10): 3133--3154. R. Carmona and F. Delarue (2018). Probabilistic Theory of Mean Field Games with Applications I-II. . J. F. Chassagneux and A. Richou (2020). Obliquely reflected backward stochastic differential equations. , 56(4): 2868--2896. B. Djehiche, R. Elie and S. Hamadène (2019). Mean-field reflected backward stochastic differential equations. arXiv preprint arXiv: 1911.06079. N. El Karoui, C. Kapoudjian, E. Pardoux, S. Peng and M. C. Quenez (1997). Reflected solutions of backward SDE's, and related obstacle problems for PDE's. , 25(2): 702--737. N. Fournier and A. Guillin (2015), On the rate of convergence in Wasserstein distance of the empirical measure. , 162(3): 707--738. H. Föllmer and P. Leukert (1999), Quantile hedging. , 3(3): 251--273. H. Föllmer and P. Leukert (2000), Efficient hedging: cost versus shortfall risk. , 4(2): 117--146. A. Gegout-Petit and É. Pardoux (1996). Equations différentielles stochastiques rétrogrades réfléchies dans un convexe. , 57(1-2): 111--128. É. Pardoux and S. Peng (1990). Adapted solution of a backward stochastic differential equation. , 14(1): 55--61. S. Ramasubramanian (2002). Reflected backward stochastic differential equations in an orthant. , 112(2): 347--360. M. Talbi, N. Touzi, and J. Zhang (2022). Viscosity solutions for obstacle problems on Wasserstein space. arXiv preprint arXiv:2203.17162. J. Zhang (2017). Backward stochastic differential equations. .
--- abstract: | Consider the ring $C_c(X)_F$ of real valued functions which are discontinuous on a finite set with countable range. We discuss $(\mathcal{Z}_c)_F$-filters on $X$ and $(\mathcal{Z}_c)_F$-ideals of $C_c(X)_F$. We establish an analogous version of the Gelfand-Kolmogoroff theorem in our setting. We prove some equivalent conditions when $C_c(X)_F$ is a Baer-ring and a regular ring. Lastly, we talk about the zero divisor graph on $C_c(X)_F$. address: - $^{1, 4}$ Department of Mathematics, Bangabasi Evening College, 19 Rajkumar Chakraborty Sarani, Kolkata 700009, West Bengal, India - $^{2,3}$ Department of Pure Mathematics, University of Calcutta, 35, Ballygunge Circular Road, Kolkata 700019, West Bengal, India author: - Achintya Singha$^1$, D. Mandal$^{2*}$, Samir Ch Mandal$^3$ and Sagarmoy Bag$^4$ title: Rings of functions which are discontinuous on a finite set with countable range --- [^1] \ \ \ \ \ \ # Introduction We start with a $T_1$ topological space $(X,\tau)$. Let $C_c(X)_F$ be the collection of all real valued functions on $X$ which are discontinuous on a finite set with countable range. Then $C_c(X)_F$ is a commutative ring with unity, where addition and multiplication are defined pointwise. For $f,g\in C_c(X)_F$, we define $f\leq g$ if and only if $f(x)\leq g(x)$ for all $x\in X$. Then $f\vee g=\frac{f+g+\vert f-g\vert}{2}\in C_c(X)_F$ and $f\wedge g=-(-f\vee -g)\in C_c(X)_F$. Thus $(C_c(X)_F, +, \cdot, \leq)$ is a lattice ordered ring. Clearly, $C_c(X)_F\subseteq C(X)_F$ ($\equiv$ rings of functions which are discontinuous on a finite set, studied briefly in [@MZ(2021); @ZMA(2018)]). It is interesting to see that taking an infinite set $X$ with the co-finite topology (or any irreducible topological space), we get $C_c(X)_F=C(X)_F$. Let $C_c(X)$ be the ring of all real valued continuous functions with countable range. We see that the ring $C_c(X)_F$ properly contains the ring $C_c(X)$. 
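For concreteness, the containment $C_c(X)\subsetneq C_c(X)_F$ can be illustrated with a characteristic function; the sketch below (the choice $X=\mathbb{R}$ with its usual topology is ours, for illustration only) records the standard example.

```latex
% Illustrative example (our choice: X = R with the usual topology).
% The characteristic function of a single non-isolated point:
\[
  \chi_{\{0\}}\colon \mathbb{R}\to\mathbb{R},\qquad
  \chi_{\{0\}}(x)=
  \begin{cases}
    1, & x=0,\\
    0, & x\neq 0.
  \end{cases}
\]
% chi_{{0}} is discontinuous only on the finite set {0} and has range {0,1},
% hence chi_{{0}} belongs to C_c(R)_F; it is not continuous at the
% non-isolated point 0, so chi_{{0}} does not belong to C_c(R).
```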
Our intention in this paper is to study some ring properties of $C_c(X)_F$ and interpret certain topological behaviours of $X$ via $C_c(X)_F$. In Section 2, we define $\mathcal{F}_c$-completely separated subsets of $X$ and establish that two subsets are $\mathcal{F}_c$-completely separated if and only if they are contained in two disjoint zero sets. In Section 3, we develop a connection between $(\mathcal{Z}_c)_F$-filters and ideals of $C_c(X)_F$. We define $(\mathcal{Z}_c)_F$-ideals, prove some equivalent conditions for $(\mathcal{Z}_c)_F$-prime ideals of $C_c(X)_F$ (see Theorem [Theorem 13](#SMP3,3.7){reference-type="ref" reference="SMP3,3.7"}) and conclude that $C_c(X)_F$ is a Gelfand ring. Also, we develop conditions under which every ideal of $C_c(X)_F$ is fixed. We prove that an ideal of $C_c(X)_F$ is an essential ideal if and only if it is free and also show that the set of all $(\mathcal{Z}_c)_F$-ideals and the set of all $z^\circ$-ideals are identical (see Corollary [Corollary 21](#SMP3,3.15){reference-type="ref" reference="SMP3,3.15"}). Moreover, we establish some results related to the socle of $C_c(X)_F$. In the next section, we discuss structure spaces of $C_c(X)_F$ and prove that the set of all maximal ideals of $C_c(X)_F$ with the hull-kernel topology is homeomorphic to the set of all $(\mathcal{Z}_c)_F$-ultrafilters on $X$ with the Stone topology (see Theorem [Theorem 28](#SMP3,4.3){reference-type="ref" reference="SMP3,4.3"}) and also establish an analogous version of the Gelfand-Kolmogoroff theorem (see Theorem [Theorem 29](#SMP3,4.4){reference-type="ref" reference="SMP3,4.4"}). In Example [Example 30](#SMP3,4.5){reference-type="ref" reference="SMP3,4.5"}, we show that $\beta_\circ X$ and the structure space of $C_c(X)_F$ are not homeomorphic. In the next section, we establish that the ring $C_c(X)_F$ properly contains the ring $C_c(X)$ and discuss some relations between $C_c(X)$ and $C_c(X)_F$. 
In Section 6, we furnish some equivalent conditions when $C_c(X)_F$ is a Baer-ring. A space $X$ is called an $F_cP$-space if $C_c(X)_F$ is regular, and in Theorem [Theorem 43](#SMP3,7.6){reference-type="ref" reference="SMP3,7.6"}, some equivalent conditions for $F_cP$-spaces are proved. Finally, we introduce and study the main features of the zero divisor graph of $C_c(X)_F$ in Section 8. # Definitions and Preliminaries For any $f\in C_c(X)_F$, $Z(f)=\{x\in X:f(x)=0\}$ is called the zero set of the function $f$, and $Z[C_c(X)_F]$ denotes the aggregate of all zero sets in $X$. Then $Z[C_c(X)_F]=Z[C_c^*(X)_F]$, where $C_c^*(X)_F=\{f\in C_c(X)_F:f$ is bounded$\}$. Now we can easily check the following properties of zero sets: **Properties 1**. Let $f,g\in C_c(X)_F$ and for any $r\in\mathbb{R}$, $\underline{r}$ stands for the constant function from $X$ to $\mathbb{R}$. Then - $Z(f)=Z(\vert f\vert)=Z(\vert f\vert\wedge\underline{1})=Z(f^n)$ (for all $n\in\mathbb{N}$). - $Z(\underline{0})=X$ and $Z(\underline{1})=\emptyset$. - $Z(f^2+g^2)=Z(f)\cap Z(g)=Z(\vert f\vert + \vert g\vert)$. - $Z(f\cdot g)=Z(f)\cup Z(g)$. - $\{x\in X:f(x)\geq r\}$ and $\{x\in X:f(x)\leq r\}$ are zero sets in $X$. Any two subsets $A$ and $B$ of a topological space $X$ are called completely separated \[see 1.15, [@GJ]\] if there exists a continuous function $f:X\rightarrow [0,1]$ such that $f(A)=\{0\}$ and $f(B)=\{1\}$. Analogously, we define $\mathcal{F}_c$-completely separated as follows: **Definition 2**. Two subsets $A$ and $B$ of a topological space $X$ are said to be $\mathcal{F}_c$-completely separated in $X$ if there exists an element $f\in C_c(X)_F$ such that $f(A)=0$ and $f(B)=1$. **Theorem 3**. *Two subsets $A$ and $B$ of a topological space $X$ are $\mathcal{F}_c$-completely separated if and only if they are contained in disjoint members of $Z[C_c(X)_F]$.* *Proof.* Let $A,B$ be two $\mathcal{F}_c$-completely separated subsets of $X$. 
Then there exists $f\in C_c(X)_F$ with $f:X\rightarrow [0,1]$ such that $f(A)=\{0\}$ and $f(B)=\{1\}$. Take $Z_1=\{x\in X:f(x)\leq \frac{1}{5}\}$ and $Z_2=\{x\in X:f(x)\geq \frac{1}{3}\}$. Then $Z_1,Z_2$ are disjoint zero sets in $Z[C_c(X)_F]$ and $A\subseteq {Z_1}, B\subseteq {Z_2}$. Conversely, let $A\subseteq Z(f)$, $B\subseteq Z(g)$, where $Z(f)\cap Z(g)=\emptyset$ and $f,g\in C_c(X)_F$. Since $Z(f^2+g^2)=Z(f)\cap Z(g)=\emptyset$, the function $h=\frac{f^2}{f^2+g^2}:X\rightarrow [0,1]$ is well defined and $h\in C_c(X)_F$. Also $h(A)=\{0\},$ $h(B)=\{1\}$. This shows that $A, B$ are $\mathcal{F}_c$-completely separated in $X$. ◻ **Corollary 4**. Any two disjoint zero sets in $Z[C_c(X)_F]$ are $\mathcal{F}_c$-completely separated in $X$. **Theorem 5**. *If two disjoint subsets $A$ and $B$ of $X$ are $\mathcal{F}_c$-completely separated, then there exists a finite subset $F$ of $X$ such that $A\setminus F$ and $B\setminus F$ are completely separated in $X\setminus F$.* *Proof.* Let $A$ and $B$ be $\mathcal{F}_c$-completely separated in $X$. Then by Theorem [Theorem 3](#SMP3,2.3){reference-type="ref" reference="SMP3,2.3"}, there exist two zero sets $Z(f_1)$ and $Z(f_2)$ such that $A\subseteq Z(f_1)$ and $B\subseteq Z(f_2)$. Since $f_1, f_2\in C_c(X)_F$, there is a finite subset $F$ such that $f_1,f_2\in C(X\setminus F)$. Now $A\setminus F\subseteq Z(f_1)\setminus F$ and $B\setminus F\subseteq Z(f_2)\setminus F$. Also, $Z(f_1)\setminus F$ and $Z(f_2)\setminus F$ are disjoint zero sets in $X\setminus F$. Thus by Theorem 1.15 in [@GJ], $A\setminus F$ and $B\setminus F$ are completely separated in $X\setminus F$. ◻ Recall that $C_c(X)$ is the ring of all real valued continuous functions with countable range and $C_c^*(X)=\{f\in C_c(X): f$ is bounded$\}$. Then we have the following lemma. **Lemma 6**. 
*For a topological space $X$, the following statements hold.* - *$C_c(X)_F$ is a reduced ring.* - *An element $f\in C_c(X)_F$ is a unit if and only if $Z(f)=\emptyset$.* - *Every element of $C_c(X)_F$ is either a zero divisor or a unit.* - *$C_c(X)_F=C_c^*(X)_F$ if and only if for any finite subset $F$ of $X$, $C_c(X\setminus F)=C_c^*(X\setminus F)$.* *Proof.* (i) It is trivial. \(ii\) Let $f\in C_c(X)_F$ be a unit. Then there exists $g\in C_c(X)_F$ such that $fg=\underline{1}$. Therefore $Z(f)=\emptyset$. Conversely, let $Z(f)=\emptyset$. Then $\frac{1}{f}\in C_c(X)_F$ is the inverse of $f$. \(iii\) Let $f\in C_c(X)_F$ and $Z(f)=\emptyset$. Then $f$ is a unit element. If $Z(f)\neq \emptyset$, then for $x\in Z(f)$, $\chi_{\{x\}}\in C_c(X)_F$ and $f\cdot\chi_{\{x\}}=0$, i.e., $f$ is a zero divisor. \(iv\) Suppose that $F$ is a finite subset of $X$ and $f\in C_c(X\setminus F)$. Now we define $g$ as $$g(x)= \left\{ \begin{array}{ll} 0, & if~~ x\in F \\ f(x), & otherwise.\\ \end{array} \right.$$ Then $g\in C_c(X)_F=C_c^*(X)_F$ and $g\vert_{X\setminus F}=f$, hence $C_c(X\setminus F)=C_c^*(X\setminus F)$. Conversely, let $f\in C_c(X)_F$. Then there exists a finite subset $F$ of $X$ such that $f$ is continuous on $X\setminus F$. By hypothesis $f$ is bounded on $X\setminus F$. Therefore $f\in C_c^*(X)_F$. ◻ # $(\mathcal{Z}_c)_F$-filters and ideals of $C_c(X)_F$ Throughout the article, an ideal of $C_c(X)_F$ (or $C_c^*(X)_F$) always stands for a proper ideal. **Definition 7**. A non-empty subfamily $\mathcal{F}$ of $Z[C_c(X)_F]$ is called a $(\mathcal{Z}_c)_F$-filter on $X$ if it satisfies the following three conditions: - $\phi\notin\mathcal{F}$. - $Z_1,Z_2\in\mathcal{F}$ implies $Z_1\cap Z_2\in\mathcal{F}$. - If $Z\in\mathcal{F}$ and $Z'\in Z[C_c(X)_F]$ such that $Z\subseteq Z'$, then $Z'\in\mathcal{F}$. A $(\mathcal{Z}_c)_F$-filter on $X$ which is not properly contained in any $(\mathcal{Z}_c)_F$-filter on $X$ is called a $(\mathcal{Z}_c)_F$-ultrafilter. 
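As a basic instance of the definition (our worked example, using only Properties 1), every point of $X$ yields a fixed $(\mathcal{Z}_c)_F$-filter:

```latex
% Worked example (ours). For a fixed point p in X, set
\[
  \mathcal{A}_p=\{\,Z\in Z[C_c(X)_F] : p\in Z\,\}.
\]
% (i)   The empty set is not in A_p, since p belongs to every member.
% (ii)  If p lies in Z(f) and in Z(g), then p lies in
%       Z(f) \cap Z(g) = Z(f^2+g^2), again a zero set by Properties 1.
% (iii) If Z is in A_p and Z \subseteq Z' with Z' in Z[C_c(X)_F],
%       then p is in Z', so Z' is in A_p.
% Hence A_p is a (Z_c)_F-filter on X; it is precisely Z[M_p] for the
% fixed ideal M_p = { f in C_c(X)_F : f(p)=0 }.
```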
A straightforward application of Zorn's lemma ensures that a $(\mathcal{Z}_c)_F$-filter on $X$ can be extended to a $(\mathcal{Z}_c)_F$-ultrafilter on $X$. There is an expected duality between ideals (maximal ideals) in $C_c(X)_F$ and the $(\mathcal{Z}_c)_F$-filters ($(\mathcal{Z}_c)_F$-ultrafilters) on $X$. This is realized by the following theorem. **Theorem 8**. *For the ring $C_c(X)_F$, the following statements are true.* - *If $I$ is an ideal of $C_c(X)_F$, then $Z[I]=\{Z(f):f\in I\}$ is a $(\mathcal{Z}_c)_F$-filter on $X$. Dually for any $(\mathcal{Z}_c)_F$-filter $\mathcal{F}$ on $X$, $Z^{-1}[\mathcal{F}]=\{f\in C_c(X)_F:Z(f)\in \mathcal{F}\}$ is an ideal (proper) in $C_c(X)_F$.* - *If $M$ is a maximal ideal of $C_c(X)_F$ then $Z[M]$ is a $(\mathcal{Z}_c)_F$-ultrafilter on $X$. If $\mathcal{U}$ is a $(\mathcal{Z}_c)_F$-ultrafilter on $X$, then $Z^{-1}[\mathcal{U}]$ is a maximal ideal of $C_c(X)_F$. Furthermore the assignment: $M\mapsto Z[M]$ defines a bijection on the set of all maximal ideals in $C_c(X)_F$ and the aggregate of all $(\mathcal{Z}_c)_F$-ultrafilters on $X$.* Like the notion of $z$-ideal in $C(X)$ (see 2.7 in [@GJ]), we now define $(\mathcal{Z}_c)_F$-ideals in $C_c(X)_F$. **Definition 9**. An ideal $I$ of $C_c(X)_F$ is called a $(\mathcal{Z}_c)_F$-ideal if $Z^{-1}Z[I]=I$. It follows from Theorem [Theorem 8](#SMP3,3.2){reference-type="ref" reference="SMP3,3.2"}(ii) that each maximal ideal of $C_c(X)_F$ is a $(\mathcal{Z}_c)_F$-ideal; the converse of this statement is false as is shown by the following example. **Example 10**. Let $I=\{f\in C_c(X)_F:f(0)=f(1)=0\}$. Then $I$ is a $(\mathcal{Z}_c)_F$-ideal in $C_c(X)_F$, which is not even a prime ideal in the ring $C_c(X)_F$. The next theorem describes $M_f$, the intersection of all maximal ideals of $C_c(X)_F$ containing $f$. **Theorem 11**. *For any $f\in C_c(X)_F$, we have $M_f=\{g\in C_c(X)_F:Z(f)\subseteq Z(g)\}$.* *Proof.* Let $g\in M_f$ and suppose that there exists $x\in Z(f)\setminus Z(g)$. 
Now $M_x=\{h\in C_c(X)_F:x\in Z(h)\}$ is a maximal ideal of $C_c(X)_F$ that contains $f$ but does not contain $g$, a contradiction. Thus $Z(f)\subseteq Z(g)$. For the reverse inclusion, let $M$ be a maximal ideal of $C_c(X)_F$ which contains $f$ and let $Z(f)\subseteq Z(g)$ for some $g\in C_c(X)_F$. Then we have $Z(g)\in Z[M]$ and this implies that $g\in Z^{-1}Z[M]$. Since $M$ is a $(\mathcal{Z}_c)_F$-ideal, $g\in M$. ◻ **Corollary 12**. An ideal $I$ of $C_c(X)_F$ is a $(\mathcal{Z}_c)_F$-ideal if and only if whenever $Z(f)\subseteq Z(g)$, where $f\in I$ and $g\in C_c(X)_F$, then $g\in I$. The following two results are analogous to Theorem 2.9 and Theorem 2.11 respectively in [@GJ] and thus we state them without any proof. **Theorem 13**. *The following four statements are equivalent for a $(\mathcal{Z}_c)_F$-ideal $I$ in $C_c(X)_F:$* - *$I$ is a prime ideal.* - *$I$ contains a prime ideal in $C_c(X)_F$.* - *For all $f,g\in C_c(X)_F, fg=0\Rightarrow f\in I$ or $g\in I$.* - *Given $f\in C_c(X)_F$, there exists $Z\in Z[I]$ on which $f$ does not change its sign.* **Corollary 14**. Each prime ideal in $C_c(X)_F$ is contained in a unique maximal ideal, in other words $C_c(X)_F$ is a Gelfand ring. **Theorem 15**. *The sum of any two $(\mathcal{Z}_c)_F$-ideals in $C_c(X)_F$ is a $(\mathcal{Z}_c)_F$-ideal.* *Proof.* Let $I$ and $J$ be two $(\mathcal{Z}_c)_F$-ideals of $C_c(X)_F$. Let $f\in I$, $g\in J$, $h\in C_c(X)_F$ and $Z(f+g)\subseteq Z(h)$. Then by Corollary [Corollary 12](#SMP3,3.6){reference-type="ref" reference="SMP3,3.6"}, it is enough to prove that $h\in I+J$. Now we can find a finite subset $F$ of $X$ such that $f,g$ and $h$ are continuous on $X\setminus F$. 
Define $$k(x)= \left\{ \begin{array}{ll} 0, & if~~ x\in (Z(f)\cap Z(g))\setminus F \\ \frac{hf^2}{f^2+g^2}, &if ~~ x\in (X\setminus F)\setminus (Z(f)\cap Z(g)).\\ \end{array} \right.$$ $$l(x)= \left\{ \begin{array}{ll} 0, & if~~ x\in (Z(f)\cap Z(g))\setminus F \\ \frac{hg^2}{f^2+g^2}, &if ~~ x\in (X\setminus F)\setminus (Z(f)\cap Z(g)).\\ \end{array} \right.$$ Now we show that $k$ and $l$ are continuous on $X\setminus F$; for this, it is enough to show that they are continuous at each point of $(Z(f)\cap Z(g))\setminus F$. For $x\in (Z(f)\cap Z(g))\setminus F$, $h(x)=0$ and for any $\epsilon>0$ there exists a neighbourhood $U$ of $x$ such that $h(U)\subseteq (-\epsilon,\epsilon)$. On the other hand, $\vert k(x)\vert\leq \vert h(x)\vert$ and $\vert l(x)\vert\leq \vert h(x)\vert$ for all $x\in U$. Hence $k$ and $l$ are continuous on $X\setminus F$. Define $k^*$ by $k^*=k$ on $X\setminus F$ and $k^*=h$ on $F$, and $l^*$ by $l^*=l$ on $X\setminus F$ and $l^*=0$ on $F$. Then $k^*,l^*\in C_c(X)_F$, $Z(f)\subseteq Z(k)\subseteq Z(k^*)$, $Z(g)\subseteq Z(l)\subseteq Z(l^*)$ and $h=k^*+l^*$. Since $I$ and $J$ are $(\mathcal{Z}_c)_F$-ideals of $C_c(X)_F$, $k^*\in I$ and $l^*\in J$. Therefore $h\in I+J$. ◻ **Corollary 16**. Suppose that $\{I_k\}_{k\in S}$ is a collection of $(\mathcal{Z}_c)_F$-ideals of $C_c(X)_F$. Then $\sum\limits_{k\in S}I_k=C_c(X)_F$ or $\sum\limits_{k\in S}I_k$ is a $(\mathcal{Z}_c)_F$-ideal of $C_c(X)_F$. In a reduced ring, every minimal prime ideal is a $z$-ideal (see [@Mason(1973)]). Now using this result, Theorem [Theorem 13](#SMP3,3.7){reference-type="ref" reference="SMP3,3.7"} and the above corollary, we have the following corollary. **Corollary 17**. Let $\{P_i\}_{i\in I}$ be a collection of minimal prime ideals of $C_c(X)_F$. Then $\sum\limits_{i\in I}P_i=C_c(X)_F$ or $\sum\limits_{i\in I}P_i$ is a prime ideal of $C_c(X)_F$. **Definition 18**. An ideal $I$ of $C_c(X)_F$ is called fixed if $\cap Z[I]\neq\emptyset$. Otherwise it is called free. **Theorem 19**. 
*For any topological space $X$, the following statements are equivalent.* - *The space $X$ is finite.* - *Every proper ideal of $C_c(X)_F$ (or $C_c^*(X)_F$) is fixed.* - *Every maximal ideal of $C_c(X)_F$ (or $C_c^*(X)_F$) is fixed.* - *Each $(\mathcal{Z}_c)_F$-filter on $X$ is fixed.* - *Each $(\mathcal{Z}_c)_F$-ultrafilter on $X$ is fixed.* Let $M_A=\{f\in C_c(X)_F:A\subseteq Z(f)\}$, for a subset $A$ of $X$. Then $M_A$ is an ideal of $C_c(X)_F$ and $M_A=\bigcap\limits_{x\in A} M_x$, where $M_x=\{f\in C_c(X)_F: x\in Z(f)\}$ is a fixed maximal ideal of $C_c(X)_F$. **Theorem 20**. *The following statements are true.* - *For two ideals $I$ and $J$ of $C_c(X)_F$, $Ann(I)\subseteq Ann(J)$ if and only if $\bigcap Z[I]\subseteq \bigcap Z[J]$ if and only if $\bigcup COZ[J]\subseteq\bigcup COZ[I]$.* - *For any subset $S$ of $C_c(X)_F$ we have $Ann(S)=M_{(\bigcup COZ[S])}=\{f\in C_c(X)_F: \bigcup COZ[S]\subseteq Z(f)\}$.* *Proof.* (i) Let $x\in \bigcap\limits_{f\in I}Z(f)$. Then $h=\chi_{\{x\}}\in C_c(X)_F$ and $x\in X\setminus Z(h)\subseteq\bigcap\limits_{f\in I}Z(f)$. Hence $fh=0$ for all $f\in I$. Then $h\in Ann(I)\subseteq Ann(J)$. Therefore $gh=0$ for each $g\in J$. Thus $x\in X\setminus Z(h)\subseteq \bigcap\limits_{g\in J}Z(g)$. Conversely, let $h\in Ann(I)$. Then $hf=0$ for all $f\in I$. This implies that $X\setminus Z(h)\subseteq \bigcap\limits_{f\in I}Z(f)$. Then by the given hypothesis, $X\setminus Z(h)\subseteq Z(g)$ for each $g\in J$. Thus $gh=0$ for each $g\in J$, and hence $Ann(I)\subseteq Ann(J)$. \(ii\) Let $f\in Ann(S)$. Then $fg=0$, for all $g\in S$. This shows that $\bigcup COZ[S]\subseteq Z(f)$ i.e., $f\in M_{(\bigcup COZ[S])}$. For the reverse part, let $f\in M_{(\bigcup COZ[S])}$. Then $X\setminus Z(g)\subseteq \bigcup COZ[S]\subseteq Z(f)$ for each $g\in S$. Thus $f\in Ann(S)$. ◻ A non-zero ideal in a commutative ring is said to be essential if it intersects every non-zero ideal non-trivially. Let $R$ be a commutative ring with unity. 
For $a\in R$, let $P_a$ be the intersection of all minimal prime ideals of $R$ containing $a$. Then an ideal $I$ of $R$ is called a $z^\circ$-ideal of $R$ if for each $a\in I$, $P_a\subseteq I$. We now state a well-known result that if $I$ is an ideal of a commutative reduced ring $R$, then $I$ is an essential ideal if and only if $Ann(I)=\{r\in R:rI=0\}=0$ (see [@MO(2008); @OM(1985)] and Lemma 2.1 in [@Taherifar(2014)]). **Corollary 21**. The following statements hold. - An ideal $I$ of $C_c(X)_F$ is an essential ideal if and only if $I$ is a free ideal. - The set of all $(\mathcal{Z}_c)_F$-ideals and $z^\circ$-ideals of $C_c(X)_F$ are identical. *Proof.* (i) It follows trivially from the above Theorem [Theorem 20](#SMP3,3.14){reference-type="ref" reference="SMP3,3.14"}. \(ii\) Clearly, every $z^\circ$-ideal is a $(\mathcal{Z}_c)_F$-ideal. Now let $I$ be a $(\mathcal{Z}_c)_F$-ideal, $f\in I$ and $g\in C_c(X)_F$ with $Ann(f)\subseteq Ann(g)$. Then using Theorem [Theorem 20](#SMP3,3.14){reference-type="ref" reference="SMP3,3.14"}, we have $Z(f)\subseteq Z(g)$. Therefore $g\in I$. This completes the proof. ◻ It is well known that in a commutative ring with unity, the intersection of all essential ideals, equivalently the sum of all minimal ideals, is called the socle (see [@OM(1985)]). **Proposition 22**. *For the ring $C_c(X)_F$, the following statements are true.* - *A non-zero ideal $I$ of $C_c(X)_F$ is minimal if and only if $I$ is generated by $\chi_{\{a\}}$, for some $a\in X$.* - *A non-zero ideal $I$ of $C_c(X)_F$ is minimal if and only if $\vert Z[I]\vert=2$.* - *The socle of $C_c(X)_F$ consists of all functions which vanish everywhere except on a finite subset of $X$.* - *The socle of $C_c(X)_F$ is an essential ideal which is also free.* *Proof.* (i) Let $I$ be a non-zero minimal ideal of $C_c(X)_F$ and let $f$ be a non-zero element of $I$. Then there exists $a\in X$ such that $f(a)\neq 0$. Now $\chi_{\{a\}}=\frac{1}{f(a)}\chi_{\{a\}}f\in I$. Since $I$ is minimal, this shows that $I$ is generated by $\chi_{\{a\}}$. 
Conversely, let $a\in X$. Then the ideal generated by $\chi_{\{a\}}$ is the set of all constant multiples of $\chi_{\{a\}}$, which is clearly a minimal ideal. \(ii\) Let us assume $\vert Z[I]\vert=2$ and $0\neq f\in I$ with $f(a)\neq 0$ for some $a\in X$. Then $\chi_{\{a\}}\in I$ and for any non-zero element $g\in I$, $Z(g)=Z(\chi_{\{a\}})=X\setminus \{a\}$. Thus $g=g(a)\chi_{\{a\}}$ and hence $I$ is generated by $\chi_{\{a\}}$. Hence by (i), $I$ is minimal and the remaining part of the proof follows immediately. \(iii\) From $(i)$, the socle of $C_c(X)_F$ is equal to the ideal generated by the $\chi_{\{a\}}$'s, which is the set of all functions that vanish everywhere except on a finite set. \(iv\) Clearly from $(i)$, any non-zero function $f$ has a non-zero multiple which is in the socle of $C_c(X)_F$. This implies that the socle is essential. Then by Corollary [Corollary 21](#SMP3,3.15){reference-type="ref" reference="SMP3,3.15"}, the socle is a free ideal. ◻ **Corollary 23**. For a topological space $X$, the following statements are equivalent: - $X$ is a finite set. - $C_c(X)_F=Soc(C_c(X)_F)$, where $Soc(C_c(X)_F)$ is the socle of $C_c(X)_F$. *Proof.* $(i)\implies (ii):$ Let $X=\{x_1,x_2,\cdots ,x_n\}$. Then $1=\sum\limits_{i=1}^n\chi_{\{x_i\}}\in Soc(C_c(X)_F)$, using result $(iii)$ of the above proposition. Therefore $C_c(X)_F=Soc(C_c(X)_F)$. $(ii)\implies (i):$ Let $C_c(X)_F=Soc(C_c(X)_F)$. Then $\underline{1}\in Soc(C_c(X)_F)$. Hence from the above proposition, $X=X\backslash Z(\underline{1})$ is a finite set. This completes the proof. ◻ **Proposition 24**. *The following statements are true.* - *Any fixed maximal ideal of $C_c(X)_F$ is generated by an idempotent.* - *Any non-maximal prime ideal of $C_c(X)_F$ is an essential ideal.* *Proof.* (i) Let $\alpha\in X$ and consider the maximal ideal $M_\alpha$. Then for any $f\in M_\alpha$, $f=f(1-\chi_{\{\alpha\}})$. This shows that $M_\alpha$ is generated by the idempotent $1-\chi_{\{\alpha\}}$. 
\(ii\) Let $P$ be a non-maximal prime ideal of $C_c(X)_F$. For each $\alpha\in X$, the ideal generated by $1-\chi_{\{\alpha\}}$ is a maximal ideal (since the ideal generated by $\chi_{\{\alpha\}}$ is minimal); as $P$ is non-maximal, $1-\chi_{\{\alpha\}}\notin P$. Since $\chi_{\{\alpha\}}(1-\chi_{\{\alpha\}})=0\in P$ and $P$ is prime, $\chi_{\{\alpha\}}\in P$, and hence $P$ contains the socle. Since the socle is essential, $P$ is essential. ◻ The following theorem is an analogous version of Theorem 2.2 in [@MA(2014)]. **Theorem 25**. *Let $J_1=\{f\in C_c(X)_F:$ for all $g, Z(1-fg)$ is finite$\}$. Then $J_1$ is equal to the intersection of all essential maximal ideals (free maximal ideals) of $C_c(X)_F$. Also, for all $f\in J_1$, $COZ(f)$ is a countable set.* *Proof.* Let $f\in J_1$ and $M$ be an essential maximal ideal in $C_c(X)_F$ such that it does not contain $f$. Then for some $g\in C_c(X)_F$ and $m\in M$, we have $gf+m=1$. This implies that $m=1-gf$ and hence $Z(m)$ is finite. Take $h=m+\chi_{Z(m)}$. Then $h\in M$, since $m\in M$ and $\chi_{Z(m)}\in Soc(C_c(X)_F)\subseteq M$. On the other hand $h$ is invertible, a contradiction. Thus $f$ belongs to each essential maximal ideal of $C_c(X)_F$. Therefore $J_1\subseteq \bigcap\limits_{M\in S} M$, where $S$ is the collection of all essential maximal ideals of $C_c(X)_F$. Next, let $f$ be any element in the intersection of all essential maximal ideals of $C_c(X)_F$. Let $g\in C_c(X)_F$ be such that $Z(1-gf)$ is infinite. This implies that for any $s\in Soc(C_c(X)_F)$ and any $t\in C_c(X)_F$, the function $s+t(1-gf)$ has a zero and thus it cannot be equal to $1$. Then the ideal $Soc(C_c(X)_F)+<1-gf>$ is a proper essential ideal. Thus there exists an essential maximal ideal $M$ containing it. Therefore $1-gf\in M$ and $f\in M$, whence $1\in M$, a contradiction. Hence $Z(1-gf)$ is finite. This completes the proof. For the second part, we define $F_n=\{x\in X:\vert f(x)\vert\geq \frac{1}{n}\}$, for each $n\in \mathbb{N}$. Since $COZ(f)=\bigcup\limits_{n=1}^\infty F_n$, it is enough to show that $F_n$ is a finite set for any $n\in \mathbb{N}$. 
If possible, let $F_n$ be infinite for some $n\in \mathbb{N}$. Let $g:\mathbb{R}\rightarrow \mathbb{R}$ be a continuous function such that $g(x)=\frac{1}{x}$ if $\vert x\vert\geq\frac{1}{n}$ and take $h=g\circ f$. Then $h\in C_c(X)_F$ and $F_n\subseteq Z(1-hf)$. This implies that $Z(1-fh)$ is infinite, a contradiction. ◻ # Structure space of $C_c(X)_F$ Let $Max(C_c(X)_F)$ be the structure space of $C_c(X)_F$ i.e., $Max(C_c(X)_F)$ is the set of all maximal ideals of $C_c(X)_F$ equipped with the hull-kernel topology. Then $\{\mathcal{M}_f:f\in C_c(X)_F\}$ forms a base for the closed sets in this hull-kernel topology (see 7M [@GJ]), where $\mathcal{M}_f=\{M\in Max(C_c(X)_F):f\in M\}$. By Theorem 1.2 of [@GA(1971)], $Max(C_c(X)_F)$ is a compact Hausdorff space. It can be checked that the structure space of $C_c(X)_F$ is identical with the set of all $(\mathcal{Z}_c)_F$-ultrafilters on $X$ with the Stone topology. Let $\beta_\circ^F X$ be an index set for the family of all $(\mathcal{Z}_c)_F$-ultrafilters on $X$, i.e., for each $p\in\beta_\circ^F X$, there exists a $(\mathcal{Z}_c)_F$-ultrafilter on $X$, which is denoted by $\mathcal{U}^p$. For any $p\in X$, we can find a fixed $(\mathcal{Z}_c)_F$-ultrafilter $\mathcal{U}_p$ and set $\mathcal{U}_p=\mathcal{U}^p$. Thus we may regard $X$ as a subset of $\beta_\circ^F X$. Now we wish to define a topology on $\beta_\circ^F X$. Let $\beta=\{\overline{Z}:Z\in Z[C_c(X)_F]\}$, where $\overline{Z}=\{p\in\beta_\circ^F X: Z\in \mathcal{U}^p\}$. Then $\beta$ is a base for closed sets for some topology on $\beta_\circ^F X$. Since $X$ belongs to every $(\mathcal{Z}_c)_F$-ultrafilter on $X$, we have $\overline{X}=\beta_\circ^F X$. We can easily check that $\overline{Z}\cap X=Z$, and that if $Z_1\subseteq Z_2$ for $Z_1,Z_2\in Z[C_c(X)_F]$, then $\overline{Z_1}\subseteq\overline{Z_2}$. This leads to the following result. **Theorem 26**. 
*For $Z\in Z[C_c(X)_F]$, $\overline{Z}=Cl_{\beta_\circ^F X}{Z}$.* *Proof.* Let $Z\in Z[C_c(X)_F]$ and $\overline{Z_1}\in\beta$ be such that $Z\subseteq\overline{Z_1}$. Then $Z\subseteq\overline{Z_1}\cap X=Z_1$. This implies $\overline{Z}\subseteq\overline{Z_1}$. Therefore $\overline{Z}$ is the smallest basic closed set containing $Z$. Hence $\overline{Z}=Cl_{\beta_\circ^F X}Z$. ◻ **Corollary 27**. $X$ is a dense subset of $\beta_\circ^F X$. *Proof.* Since $X$ is a member of every $(\mathcal{Z}_c)_F$-ultrafilter on $X$, it follows that $\overline{X}=\beta_\circ^F X$. ◻ Now, we want to show that $Max(C_c(X)_F)$ and $\beta_\circ^F X$ are homeomorphic. **Theorem 28**. *The map $\phi: Max(C_c(X)_F)\rightarrow \beta_\circ^F X$, defined by $\phi(M)= p$, where $Z[M]=\mathcal{U}^p$, is a homeomorphism.* *Proof.* The map $\phi$ is bijective by Theorem [Theorem 8](#SMP3,3.2){reference-type="ref" reference="SMP3,3.2"} (ii). A basic closed set of $Max(C_c(X)_F)$ is of the form $\mathcal{M}_f=\{M\in Max(C_c(X)_F):f\in M\}$, for some $f\in C_c(X)_F$. Now $M\in \mathcal{M}_f \Leftrightarrow f\in M\Leftrightarrow Z(f)\in Z[M]$ (since every maximal ideal is a $(\mathcal{Z}_c)_F$-ideal$)\Leftrightarrow Z(f)\in\mathcal{U}^p \Leftrightarrow p\in\overline{Z(f)}$. Thus $\phi(\mathcal{M}_f)=\overline{Z(f)}$. Therefore $\phi$ interchanges basic closed sets of $Max(C_c(X)_F)$ and $\beta_\circ^F X$. Hence $Max(C_c(X)_F)$ is homeomorphic to $\beta_\circ^F X$. ◻ Now we prove the following theorem which is an analogous version of the Gelfand-Kolmogoroff Theorem 7.3 [@GJ]. **Theorem 29**. *Every maximal ideal of $C_c(X)_F$ is of the form $M^p=\{f\in C_c(X)_F:p\in Cl_{\beta_\circ^F X}Z(f)\}$, for some $p\in\beta_\circ^F X$.* *Proof.* Let $M$ be any maximal ideal of $C_c(X)_F$. Then $Z[M]$ is a $(\mathcal{Z}_c)_F$-ultrafilter on $X$. Thus $Z[M]=\mathcal{U}^p$, for some $p\in \beta_\circ^F X$. 
So, $f\in M \Leftrightarrow Z(f)\in Z[M]$ (as $M$ is a $(\mathcal{Z}_c)_F$-ideal) $\Leftrightarrow Z(f)\in Z[M]=\mathcal{U}^p \Leftrightarrow p\in \overline{Z(f)}=Cl_{\beta_\circ^F X}Z(f)$. Hence $M=\{f\in C_c(X)_F:p\in Cl_{\beta_\circ^F X} Z(f)\}$, and so we can write $\{f\in C_c(X)_F:p\in Cl_{\beta_\circ^F X} Z(f)\}=M^p$, $p\in \beta_\circ^F X$. This completes the proof. ◻ We know that the structure space of $C_c(X)$ is homeomorphic to $\beta_\circ X$ ($\equiv$ the Banaschewski compactification of a zero-dimensional space $X$). Also, it is interesting to note that the structure space of $C_c(X)$ and the structure space of $C_c(X)_F$ are the same if $X$ is equipped with the discrete topology. The following example shows that these spaces may fail to be homeomorphic to each other. **Example 30**. Take $X=\{\frac{1}{n}:n\in\mathbb{N}\}\cup\{0\}$. Consider $(X,\tau_u)$, where $\tau_u$ is the subspace topology on $X$ inherited from the real line. Since $X$ is a zero-dimensional space (it has a base of clopen sets), the Stone-$\check{C}$ech compactification of $X$ satisfies $\beta X=\beta_\circ X$ (see [@JR(1988)], subsection 4.7). Moreover, since $X$ is compact, $X$ is homeomorphic to $\beta X=\beta_\circ X$. On the other hand, since $X$ is a countable set containing only one non-isolated point, $C_c(X)_F=\mathbb{R}^X= C(X,\tau_d)$, where $C(X,\tau_d)$ is the ring of continuous functions on $X$ equipped with the discrete topology $\tau_d$. Hence $\beta_\circ^F X$ is the Stone-$\check{C}$ech compactification of $X$ equipped with the discrete topology, and the cardinality of $\beta_\circ^F X$ is equal to $\vert \beta \mathbb{N}\vert =2^c$ (see 9.3 in [@GJ]), where $\beta\mathbb{N}$ is the Stone-$\check{C}$ech compactification of the set $\mathbb{N}$ of all natural numbers. Since the cardinality of $\beta X$ is $\aleph_\circ$, it follows that $\beta_\circ^F X$ is not homeomorphic to $\beta X=\beta_\circ X$. # $C_c(X)_F$ and $C_c(X)$ In this section, we shall discuss the relation between $C_c(X)_F$ and $C_c(X)$.
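Example 30 rests on the fact that every point of $X=\{\frac{1}{n}:n\in\mathbb{N}\}\cup\{0\}$ other than $0$ is isolated, so an arbitrary real-valued function on $X$ can be discontinuous only at $0$, whence $C_c(X)_F=\mathbb{R}^X$. The following is a minimal numeric sketch of this fact on a finite truncation of $X$; the truncation bound `N` and the helper `nearest_gap` are illustrative choices for this sketch, not from the text.

```python
# Finite truncation of the space X = {1/n : n in N} ∪ {0} of Example 30.
N = 200
X = [0.0] + [1.0 / n for n in range(1, N + 1)]

def nearest_gap(x, points):
    """Distance from x to the nearest other point of the sample."""
    return min(abs(p - x) for p in points if p != x)

# Each 1/n keeps the gap 1/n - 1/(n+1) = 1/(n(n+1)) no matter how far we
# truncate, so every 1/n is an isolated point of X.
assert abs(nearest_gap(1.0, X) - 1.0 / 2) < 1e-12   # gap at 1/1
assert abs(nearest_gap(0.5, X) - 1.0 / 6) < 1e-12   # gap at 1/2

# The gap around 0 is 1/N and shrinks as N grows: 0 is the unique
# non-isolated point of X.
assert abs(nearest_gap(0.0, X) - 1.0 / N) < 1e-12

# Hence chi_{{0}} is discontinuous exactly at 0: its values along the
# sequence 1/n -> 0 stay 0, while chi_{{0}}(0) = 1.  So it belongs to
# C_c(X)_F (one discontinuity, range {0, 1}) but not to C(X).
chi0 = lambda x: 1.0 if x == 0.0 else 0.0
assert [chi0(1.0 / n) for n in range(1, 6)] == [0.0] * 5
assert chi0(0.0) == 1.0
print("0 is the only non-isolated point of the truncated sample")
```

The gap around $\frac{1}{n}$ is $\frac{1}{n(n+1)}$, independent of the truncation, while the gap around $0$ is $\frac{1}{N}$ and shrinks as the truncation grows; this is exactly why $\chi_{\{0\}}$ lies in $C_c(X)_F\setminus C(X)$.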
It is interesting to see that the ring $C_c(X)_F$ properly contains the ring $C_c(X)$ whenever $X$ has a non-isolated point. In fact, if $x$ is a non-isolated point of a topological space $X$, then $\chi_{\{x\}}\in C_c(X)_F$, but $\chi_{\{x\}}\not \in C_c(X)$.\ Now we recall that an overring $S$ of a reduced ring $R$ is called a quotient ring of $R$ if for any non-zero element $s\in S$, there is an element $r\in R$ such that $0\neq sr\in R$. **Theorem 31**. *For any topological space $X$, the following statements are equivalent.* - *$C_c(X)_F=C_c(X)$.* - *$X$ is a discrete space.* - *$C_c(X)_F$ is a quotient ring of $C_c(X)$.* *Proof.* $(i)\Leftrightarrow (ii)$ If $X$ is a discrete space, then clearly $C_c(X)_F=C_c(X)$. Conversely, assume that $C_c(X)_F=C_c(X)$. Then for each $x\in X$, $\chi_{\{x\}}\in C_c(X)_F=C_c(X)$ is a continuous map, so $\{x\}$ is an isolated point. Therefore $X$ is a discrete space. $(ii)\Rightarrow (iii)$ It is trivial. $(iii)\Rightarrow (ii)$ Let $x_\circ\in X$. Then $\chi_{\{x_\circ\}}\in C_c(X)_F$, so by the hypothesis there exists a function $f\in C_c(X)$ such that $0\neq f\cdot\chi_{\{x_\circ\}}\in C_c(X)$. Now $f(x)\chi_{\{x_\circ\}}(x)=f(x_\circ)\chi_{\{x_\circ\}}(x)$ for all $x\in X$, and $f(x_\circ)\neq 0$. Hence $\chi_{\{x_\circ\}}=\frac{1}{f(x_\circ)}\,f\cdot\chi_{\{x_\circ\}}$ is a continuous function. This implies that $\{x_\circ\}$ is an isolated point. Therefore $X$ is a discrete space. ◻ **Lemma 32**. *Let $\phi:C_c(X)_F\rightarrow C_c(Y)$ be a ring isomorphism, where $X$ and $Y$ are two topological spaces. Then the following statements are true.* - *Both $\phi$ and $\phi^{-1}$ are order-preserving functions.* - *Both $\phi$ and $\phi^{-1}$ preserve constant functions (and their values).* - *For any $x_\circ\in X$ there is an element $y_\circ\in Y$ such that $\phi(\chi_{\{x_\circ\}})=\chi_{\{y_\circ\}}$ and $\phi(f)(y_\circ)=f(x_\circ)$ for any $f\in C_c(X)_F$.* *Proof.* (i) Let $f\in C_c(X)_F$ with $f\geq 0$. Then there exists an element $g\in C_c(X)_F$ such that $f=g^2$. Hence $\phi(f)=(\phi(g))^2\geq 0$.
Thus $\phi$ is order preserving. Similarly, $\phi^{-1}$ is also order preserving. \(ii\) One can easily check that $\phi(1)=1$, and using this one can see that $\phi$ maps any constant function with rational value to the same constant; this, together with the order-preserving property of $\phi$, shows that $\phi$ preserves constant functions. Similarly, constant functions are preserved by $\phi^{-1}$. \(iii\) For $x_\circ\in X$, $\phi(\chi_{\{x_\circ\}})$ is an idempotent element of $C_c(Y)$. Then there is a clopen subset $A$ of $Y$ such that $\phi(\chi_{\{x_\circ\}})=\chi_A$. Clearly, $A$ is non-empty. Now, we prove that $A$ is a singleton set. If possible, let $y,z\in A$ with $y\neq z$. Then there exists a continuous function $f:Y\rightarrow [0,1]$ such that $f(y)=0$ and $f(z)=1$. Take $g=\min\{f,\chi_A\}\in C_c(Y)$. Then $0\leq g\leq \chi_A$, and we have $0\leq \phi^{-1}(g)\leq\chi_{\{x_\circ\}}$; consequently $\phi^{-1}(g)=k\chi_{\{x_\circ\}}$ for some real number $k$. Hence $g=k \chi_A$. Now $g(y)=k \chi_A(y)$ implies that $k=0$, and $g(z)=\chi_A(z)$ implies that $k=1$, a contradiction. Therefore $A$ is a singleton set. To prove the second part, let $x_\circ\in X$, $f\in C_c(X)_F$ and $\phi(\chi_{\{x_\circ\}})=\chi_{\{y_\circ\}}$ for some $y_\circ \in Y$. If possible, let $\phi(f-f(x_\circ))(y_\circ)\neq 0$. Then we have $(\phi(f-f(x_\circ)))^2\geq \frac{d^2}{3}\chi_{\{y_\circ\}}=\phi(\frac{d^2}{3}\chi_{\{x_\circ\}})$, where $d=\phi(f-f(x_\circ))(y_\circ)$. Thus $(f-f(x_\circ))^2\geq \frac{d^2}{3}\chi_{\{x_\circ\}}$, which is a contradiction when evaluated at $x_\circ$. Therefore $\phi(f)(y_\circ)=f(x_\circ)$. ◻ **Theorem 33**. *Let $X$ be a topological space. There exists a topological space $Y$ such that $C_c(X)_F\cong C_c(Y)$ if and only if the set of all non-isolated points of $X$ is finite.* *Proof.* Let $\{x_1,x_2,\cdots, x_n\}$ be the set of all non-isolated points of $X$. Set $Y=X\setminus\{x_1,x_2,\cdots,x_n\}$.
Then for each $f\in C_c(X)_F$, we have $f\vert_Y\in C_c(Y)$, and $f\mapsto f\vert_Y$ is the required isomorphism. For the reverse part, let $\phi:C_c(X)_F\rightarrow C_c(Y)$ be a ring isomorphism. If possible, let the set of non-isolated points of $X$ be infinite, containing $\{x_1,x_2,\cdots\}$, where without loss of generality each $x_i$ is a limit point of $X\setminus\{x_1,x_2,\cdots\}$. Then by Lemma [Lemma 32](#SMP3,5.2){reference-type="ref" reference="SMP3,5.2"}(iii), we get $\{y_1,y_2,\cdots\}$ such that $\phi(\chi_{\{x_i\}})=\chi_{\{y_i\}}$ for each $i$. Let $g=\sum\limits_{i=1}^\infty\chi_{\{y_i\}}$ (which is an element of $C_c(Y)$) and let $f$ be the inverse image of $g$ under $\phi$. Then $f(x)=0$ if and only if $x\notin\{x_1,x_2,\cdots\}$, and hence the discontinuity set of $f$ is infinite, which contradicts $f\in C_c(X)_F$. ◻ # $C_c(X)_F$ as a Baer-ring In this section, we give a characterization of $C_c(X)_F$ as a Baer-ring. A ring $R$ is called a Baer-ring if the annihilator of every non-empty ideal is generated by an idempotent. A ring $R$ is said to be an $SA$-ring if for any two ideals $I$ and $J$, there exists an ideal $K$ of $R$ such that $Ann(I)\cap Ann(J)=Ann(K)$. A ring $R$ is called an $IN$-ring if for any two ideals $I$ and $J$, $Ann(I\cap J)=Ann(I)\cap Ann(J)$. Clearly, any $SA$-ring is always an $IN$-ring. The next lemma gives a characterization of $IN$-rings among reduced rings. **Lemma 34**. *([@Taherifar(2014)(1)]) Let $R$ be a reduced ring. Then the following statements are equivalent.* - *For any two orthogonal ideals $I$ and $J$ of $R$, $Ann(I)+Ann(J)=R$.* - *For any two ideals $I$ and $J$ of $R$, $Ann(I)+Ann(J)=Ann(I\cap J)$.* Furthermore, the next lemma gives some equivalent conditions for a ring to be a Baer-ring. **Lemma 35**. *([@GMA(2015)]) Let $R$ be a commutative reduced ring.
Then the following statements are equivalent.* - *$R$ is a Baer-ring.* - *$R$ is an $SA$-ring.* - *The space of prime ideals of $R$ is extremally disconnected.* - *$R$ is an $IN$-ring.* Now we wish to establish some equivalent conditions under which $C_c(X)_F$ is an $IN$-ring, an $SA$-ring or a Baer-ring. For this purpose we first prove the following lemma. **Lemma 36**. *For any subset $A$ of a space $X$, there exists a subset $S$ of $C_c(X)_F$ such that $A=\bigcup COZ[S]=\bigcup\{COZ(f):f\in S\}$.* *Proof.* This follows immediately, since $A=\bigcup \{COZ(\chi_{\{x\}}):x\in A\}$ and $\chi_{\{x\}}\in C_c(X)_F$ for all $x\in X$. ◻ Now, in this situation, we are ready to prove the following equivalent conditions. **Theorem 37**. *The following statements are equivalent.* - *Any two disjoint subsets of $X$ are $\mathcal{F}_c$-completely separated.* - *$C_c(X)_F$ is an $IN$-ring.* - *$C_c(X)_F$ is an $SA$-ring.* - *$C_c(X)_F$ is a Baer-ring.* - *The space of prime ideals of $C_c(X)_F$ is an extremally disconnected space.* - *Any subset of $X$ is of the form $COZ(e)$ for some idempotent $e\in C_c(X)_F$.* - *For any subset $A$ of $X$, there exists a finite subset $F$ of $X$ such that $A\setminus F$ is clopen in $X\setminus F$.* *Proof.* From Lemma [Lemma 35](#SMP3,6.2){reference-type="ref" reference="SMP3,6.2"}, statements $(ii), (iii), (iv)$ and $(v)$ are equivalent. $(i)\implies (ii)$: Let $I$ and $J$ be two orthogonal ideals of $C_c(X)_F$. Then $IJ=0$, and $\bigcup COZ[I]$, $\bigcup COZ[J]$ are two disjoint subsets of $X$. Now, by the hypothesis there exist two elements $f_1,f_2\in C_c(X)_F$ such that $\bigcup COZ[I]\subseteq Z(f_1)$ and $\bigcup COZ[J]\subseteq Z(f_2)$ with $Z(f_1)\cap Z(f_2)=\emptyset$. This implies that $f_1\in Ann(I)$, $f_2\in Ann(J)$ and $Z(f_1^2+f_2^2)=\emptyset$. Then $f_1^2+f_2^2$ is a unit element lying in $Ann(I)+Ann(J)$. Hence $Ann(I)+Ann(J)=C_c(X)_F$. Thus by Lemma [Lemma 34](#SMP3,6.1){reference-type="ref" reference="SMP3,6.1"}, $C_c(X)_F$ is an $IN$-ring.
$(ii)\implies (i)$: Suppose that $A$ and $B$ are two disjoint subsets of $X$. Then by Lemma [Lemma 36](#SMP3,6.3){reference-type="ref" reference="SMP3,6.3"}, there are two subsets $S_1$ and $S_2$ of $C_c(X)_F$ such that $A=\bigcup COZ[S_1]$ and $B=\bigcup COZ[S_2]$. Let $I$ and $J$ be the ideals generated by $S_1$ and $S_2$, respectively. Then we have $\bigcup COZ[I]\cap \bigcup COZ[J]=A\cap B=\emptyset$. This implies that $IJ=0$, i.e., $I$ and $J$ are orthogonal ideals of $C_c(X)_F$. Then by Lemma [Lemma 34](#SMP3,6.1){reference-type="ref" reference="SMP3,6.1"}, $Ann(I)+Ann(J)=C_c(X)_F$. So there exist $h_1\in Ann(I)$ and $h_2\in Ann(J)$ such that $1=h_1+h_2$. Then $Z(h_1)$ and $Z(h_2)$ are disjoint. Since $h_1\in Ann(I)$, $A=\bigcup COZ[S_1]=\bigcup COZ[I]\subseteq Z(h_1)$. Similarly, $B\subseteq Z(h_2)$. This proves $(i)$. $(iv)\implies (vi)$: Let $A$ be a subset of $X$. Then by Lemma [Lemma 36](#SMP3,6.3){reference-type="ref" reference="SMP3,6.3"}, there exists a subset $S$ of $C_c(X)_F$ such that $A=\bigcup COZ[S]$. Let $I$ be the ideal generated by $S$. Then by the hypothesis, there exists an idempotent $e\in C_c(X)_F$ such that $Ann(e)=Ann(I)$. Thus, using Theorem [Theorem 20](#SMP3,3.14){reference-type="ref" reference="SMP3,3.14"}, we have $A=\bigcup COZ[I]=COZ(e)$. $(vi)\implies (iv)$: Suppose that $I$ is an ideal of $C_c(X)_F$. Then by the hypothesis there is an idempotent $e\in C_c(X)_F$ such that $\bigcup COZ[I]=COZ(e)$. Then by Theorem [Theorem 20](#SMP3,3.14){reference-type="ref" reference="SMP3,3.14"}, $Ann(I)=Ann(e)=(1-e)C_c(X)_F$. Hence $C_c(X)_F$ is a Baer-ring. $(vi)\implies (vii)$: Let $A=COZ(e)$, for some idempotent $e\in C_c(X)_F$. Then $e\in C(X\setminus F)$ for some finite subset $F$ of $X$. Now $A\setminus F=COZ(e)\cap (X\setminus F)$ is clopen in $X\setminus F$. $(vii)\implies (vi)$: Trivial, since the characteristic function $\chi_A$ of such a subset $A$ is an idempotent in $C_c(X)_F$ with $A=COZ(\chi_A)$. ◻ # $F_cP$-space **Definition 38**.
A commutative ring $R$ with unity is said to be a von Neumann regular ring, or simply a regular ring, if for each $a\in R$, there exists $r\in R$ such that $a=a^2 r$. We recall that $X$ is a $P$-space if and only if $C(X)$ is a regular ring \[4J, [@GJ]\].\ Now we define $F_cP$-spaces as follows. **Definition 39**. A space $X$ is called an $F_cP$-space if $C_c(X)_F$ is regular. **Example 40**. Consider the space $X=\{0,1,\frac{1}{2},\frac{1}{3}, \cdots \}$ (endowed with the subspace topology from the usual topology on the real line $\mathbb{R}$). It is clear that $C_c(X)_F=\mathbb{R}^X$, which means that $X$ is an $F_cP$-space. On the other hand, by 4K.1 [@GJ], $X$ is not a $P$-space. The next example shows that for a $P$-space $X$, $C_c(X)_F$ may fail to be regular, i.e., $X$ may not be an $F_cP$-space. **Example 41**. Let the set of rational numbers $\mathbb{Q}$ carry the subspace topology from the real line $\mathbb{R}$, and let $X=\mathbb{Q}^*=\mathbb{Q}\cup\{\infty\}$ be the one-point compactification of $\mathbb{Q}$. Then every continuous function $f:\mathbb{Q}^*\rightarrow \mathbb{R}$ is a constant function. Hence $C(\mathbb{Q}^*)$ is isomorphic to $\mathbb{R}$, a regular ring, and so $X$ is a $P$-space. We now show that $C_c(X)_F$ is not a regular ring. Let $f:\mathbb{Q}^*\rightarrow\mathbb{R}$ be defined as $$f(x)= \left\{ \begin{array}{ll} \cos(\frac{\pi x}{2}), & if~ x\in \mathbb{Q} \\ 2, & if~ x=\infty.\\ \end{array} \right.$$ Then $f\in C_c(X)_F$. If possible, let there exist $g\in C_c(X)_F$ such that $f=f^2 g$. Then $g(x)=\frac{1}{f(x)}$ when $f(x)\neq 0$, and the discontinuity set $D_g$ of $g$ satisfies $D_g\supseteq\{1,-1,3,-3,5,-5,\cdots\}$. This shows that $g\notin C_c(X)_F$, a contradiction. Hence $C_c(X)_F$ is not regular, i.e., $X$ is not an $F_cP$-space. However, if we consider $X$ to be a Tychonoff space, then the following statement is true. **Theorem 42**. *If $X$ is a $P$-space, then it is also an $F_cP$-space.* *Proof.* Let $f\in C_c(X)_F$. Then $f\in C(X\setminus F)$ for a finite subset $F$ of $X$.
Since a subspace of a $P$-space is a $P$-space (see 4K.4 in [@GJ]), $X\setminus F$ is a $P$-space. So $C(X\setminus F)$ is regular. This implies that $(f\vert_{X\setminus F})^2 g=f\vert_{X\setminus F}$, for some $g\in C(X\setminus F)$. Now, we define $g^*:X\rightarrow \mathbb{R}$ by $$g^*(x)= \left\{ \begin{array}{ll} g(x), & if~ x\in (X\setminus Z(f))\setminus F \\ 0, & if~ x\in (Z(g)\cap Z(f))\setminus F\\ 0, & if~ x\in (Z(f)\setminus Z(g))\setminus F\\ \frac{1}{f(x)}, & if ~ x\in F\setminus Z(f)\\ 0, & if ~ x\in Z(f)\cap F.\\ \end{array} \right.$$ Since $f\vert_{X\setminus F}$ is continuous, $(X\setminus Z(f))\setminus F$ is open in $X\setminus F$. Also, $X\setminus F$ is open in $X$, and hence $(X\setminus Z(f))\setminus F$ is open in $X$. Now, $(Z(g)\cap Z(f))\setminus F$ is a $G_\delta$-set in $X\setminus F$ and $X\setminus F$ is a $G_\delta$-set in $X$, so $(Z(g)\cap Z(f))\setminus F$ is a $G_\delta$-set in $X$. Hence it is open in $X$ (see 4J.4 [@GJ]). Again, $(Z(f)\setminus Z(g))\setminus F=(Z(f)\cap (Z(g))^c)\setminus F$ is a $G_\delta$-set in $X\setminus F$ and hence open in $X$ (using 4J.4 [@GJ]). Then, using the pasting lemma, we can easily observe that $g^*$ is well defined and continuous on $X\setminus F$. Thus $g^*\in C_c(X)_F$ and $f=f^2 g^*$. Hence $X$ is an $F_cP$-space. ◻   **Theorem 43**. *The following statements are equivalent.* - *The space $X$ is an $F_cP$-space.* - *For any $Z\in Z[C_c(X)_F]$, there exists a finite subset $F$ of $X$ such that $Z\setminus F$ is a clopen subset of $X\setminus F$.* - *$C_c(X)_F$ is a $PP$-ring, that is, the annihilator of every element is generated by an idempotent.* *Proof.* $(i)\implies (ii)$: Let $Z(f)\in Z[C_c(X)_F]$. By $(i)$, there exists $g\in C_c(X)_F$ such that $f^2g=f$. Also, there is a finite subset $F$ of $X$ such that $f,g\in C(X\setminus F)$. Hence for any $x\in X\setminus F$ we have $f(x)^2g(x)=f(x)$. Therefore $Z(f\vert_{X\setminus F})\cup Z((1-fg)\vert_{X\setminus F})=X\setminus F$.
On the other hand, $Z(f\vert_{X\setminus F})\cap Z((1-fg)\vert_{X\setminus F})=\emptyset$. This shows that $Z(f\vert_{X\setminus F})=Z(f)\setminus F$ is clopen in $X\setminus F$. $(ii)\implies (iii)$: Let $f\in C_c(X)_F$. Then there is a finite subset $F$ of $X$ such that $Z(f)\setminus F$ is clopen in $X\setminus F$. So $Z(f)\setminus F =Z(e)$ for some idempotent $e\in C_c(X\setminus F)$. Therefore $Z(f)=Z(e^\star)$, where $e^\star\vert_{X\setminus F}=e$, $e^\star$ is zero on $Z(f)\cap F$, and $e^\star$ is equal to $1$ on $F\setminus Z(f)$. Then by Theorem [Theorem 20](#SMP3,3.14){reference-type="ref" reference="SMP3,3.14"}, we have $Ann(f)=Ann(e^\star)=(1-e^\star)C_c(X)_F$, i.e., $C_c(X)_F$ is a $PP$-ring. $(iii)\implies(i)$: Assume that $f\in C_c(X)_F$. By hypothesis, there is an idempotent $e\in C_c(X)_F$ such that $Ann(e)=Ann(f)$. By Theorem [Theorem 20](#SMP3,3.14){reference-type="ref" reference="SMP3,3.14"}, $Z(e)=Z(f)$. Now, let $F$ be a finite subset of $X$ such that $f,e\in C_c(X\setminus F)$. Then $Z(f)\setminus F=Z(e)\setminus F$ is a clopen subset of $X\setminus F$. Now, we define $f^\star:X\rightarrow\mathbb{R}$ by $$f^\star(x)= \left\{ \begin{array}{ll} 0, & if~ x\in Z(f)\\ \frac{1}{f(x)}, & otherwise.\\ \end{array} \right.$$ Then $f^\star\in C_c(X)_F$ and $f^2f^\star=f$. Thus $C_c(X)_F$ is a regular ring, i.e., $X$ is an $F_cP$-space. ◻ # Zero divisor graph of $C_c(X)_F$ Consider the graph $\Gamma(C_c(X)_F)$ of the ring $C_c(X)_F$ with vertex set $V$, the collection of all nonzero zero divisors of the ring $C_c(X)_F$, where two vertices $f,g$ are adjacent if and only if $fg=0$ on $X$. We recall that for $f\in C_c(X)_F$, $Z(f)=\{x\in X: f(x)=0\}$ is called the zero set of $f$. For a $T_1$ space $X$ and $x\in X$, the characteristic function $\chi_{\{x\}}$ defined by $\chi_{\{x\}}(y)=0$ if $y\neq x$ and $\chi_{\{x\}}(x)=1$ is a member of $C_c(X)_F$. The following result characterizes the adjacency of two vertices of $\Gamma(C_c(X)_F)$ in terms of their zero sets.
**Lemma 44**. *Two vertices $f,g$ in the graph $\Gamma(C_c(X)_F)$ are adjacent if and only if $Z(f)\cup Z(g)=X$.* *Proof.* To begin with, let us assume that $Z(f)\cup Z(g)=X$. Then $fg=0$, so $f$ and $g$ are adjacent in $\Gamma(C_c(X)_F).$ Conversely, let $f$ and $g$ be adjacent in $\Gamma(C_c(X)_F)$. Then $fg=0$. This implies $Z(0)=Z(fg)=Z(f)\cup Z(g)=X$. ◻ **Lemma 45**. *For any two vertices $f,g$ there is another vertex $h$ in the graph $\Gamma(C_c(X)_F)$ which is adjacent to both $f$ and $g$ if and only if $Z(f)\cap Z(g)\neq \emptyset$.* *Proof.* First, suppose that there is a vertex $h$ such that $h$ is adjacent to both $f$ and $g$. Then $hf=0$ and $hg=0$. As $h$ is non-zero, there exists a point $x_\circ\in X$ such that $h(x_\circ)\neq 0$. Then obviously $f(x_\circ)=0$ and $g(x_\circ)=0$. Hence $x_\circ\in Z(f)\cap Z(g)$, and thus $Z(f)\cap Z(g)\neq \emptyset$. Conversely, let $Z(f)\cap Z(g)\neq \emptyset$ and $y\in Z(f)\cap Z(g)$. Take $h=\chi_{\{y\}}$. Then $h\in C_c(X)_F$ and both $hf=0$ and $hg=0$. So $h$ is adjacent to both $f$ and $g$ in the graph $\Gamma(C_c(X)_F)$. ◻ **Lemma 46**. *For any two vertices $f,g$ there are distinct vertices $h_1$ and $h_2$ in $\Gamma(C_c(X)_F)$ such that $f$ is adjacent to $h_1$, $h_1$ is adjacent to $h_2$ and $h_2$ is adjacent to $g$ if $Z(f)\cap Z(g)= \emptyset$.* *Proof.* Let $Z(f)\cap Z(g)= \emptyset$. Let us choose $x\in Z(f)$ and $y\in Z(g)$. Consider the two functions $h_1=\chi_{\{x\}}$ and $h_2=\chi_{\{y\}}$. Then $Z(h_1)=X\setminus \{x\}$ and $Z(h_2)=X\setminus \{y\}.$ So $Z(h_1)\cup Z(f)=X$, $Z(h_2)\cup Z(g)=X$ and $Z(h_1)\cup Z(h_2)=X$. Hence by Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}, we can say that $f$ is adjacent to $h_1$, $h_1$ is adjacent to $h_2$ and $h_2$ is adjacent to $g$. ◻ **Definition 47**. For two vertices $f,g$ in any graph $G$, $d(f,g)$ is defined as the length of a shortest path between $f$ and $g$. **Theorem 48**.
*For any two vertices $f,g$ in the graph $\Gamma(C_c(X)_F)$, the following hold:* 1. *$d(f,g)=1$ if and only if $Z(f)\cup Z(g)=X$.* 2. *$d(f,g)=2$ if and only if $Z(f)\cup Z(g)\neq X$ and $Z(f)\cap Z(g)\neq \emptyset$.* 3. *$d(f,g)=3$ if and only if $Z(f)\cup Z(g)\neq X$ and $Z(f)\cap Z(g) = \emptyset$.* *Proof.* (i) It follows from Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}. \(ii\) Let $d(f,g)=2.$ Then $f$ and $g$ are not adjacent to each other, so by Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}, $Z(f)\cup Z(g)\neq X$. Moreover, there is a vertex $h$ in $\Gamma(C_c(X)_F)$ such that $h$ is adjacent to both $f$ and $g$. Hence by Lemma [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, we have $Z(f)\cap Z(g)\neq \emptyset$. Conversely, let $Z(f)\cup Z(g)\neq X$ and $Z(f)\cap Z(g)\neq \emptyset$. Then by Lemmas [Lemma 44](#deg1){reference-type="ref" reference="deg1"} and [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, $f$ and $g$ are not adjacent and there is a third vertex $h$ adjacent to both $f$ and $g$. Hence $d(f,g)=2.$ \(iii\) Let $d(f,g)=3$. Then by Lemmas [Lemma 44](#deg1){reference-type="ref" reference="deg1"} and [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, we get $Z(f)\cup Z(g)\neq X$ and $Z(f)\cap Z(g) = \emptyset$. Conversely, let $Z(f)\cup Z(g)\neq X$ and $Z(f)\cap Z(g) = \emptyset$. Then by Lemmas [Lemma 44](#deg1){reference-type="ref" reference="deg1"} and [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, $f$ and $g$ are not adjacent to each other and there is no common vertex $h$ which is adjacent to both $f$ and $g$. Hence $d(f,g)\geq 3.$ Since $Z(f)\cap Z(g) = \emptyset$, applying Lemma [Lemma 46](#deg 3){reference-type="ref" reference="deg 3"}, there are two distinct vertices $h_1$ and $h_2$ such that $f$ is adjacent to $h_1$, $h_1$ is adjacent to $h_2$ and $h_2$ is adjacent to $g$. As a consequence, $d(f,g)=3.$ ◻ **Definition 49**.
The maximum of $d(f,g)$ over all pairs of vertices $f,g$ is called the diameter of the graph $G$, and it is denoted by $diam(G)$. Also, the length of the smallest cycle in the graph $G$ is called the girth of the graph $G$, and it is denoted by $gr(G)$. If there does not exist any cycle in the graph $G$, we declare $gr(G)=\infty.$ **Theorem 50**. *If a space $X$ contains at least three elements, then\ $diam(\Gamma(C_c(X)_F))=gr(\Gamma(C_c(X)_F))=3.$* *Proof.* Let us take three distinct points $x,y,z$ in $X$. Consider the functions $f=1-\chi_{\{x\}}$ and $g=1-\chi_{\{y\}}$. Then $Z(f)=\{x\}$ and $Z(g)=\{y\}$. Thus $Z(f)\cup Z(g)\neq X$ because $z\notin Z(f)\cup Z(g).$ As $Z(f)\cap Z(g)=\emptyset$, by Theorem [Theorem 48](#deg-thm){reference-type="ref" reference="deg-thm"}(iii), $d(f,g)=3.$ But we know that $d(f,g)\leq 3$ for all vertices $f,g$ in $\Gamma(C_c(X)_F)$. Hence we have $diam(\Gamma(C_c(X)_F))=3.$ For the girth of the graph, take $h_1=\chi_{\{x\}}$, $h_2=\chi_{\{y\}}$ and $h_3=\chi_{\{z\}}$. Then the union of any two zero sets among $Z(h_1)$, $Z(h_2)$ and $Z(h_3)$ is $X$. Thus $h_1,h_2$ and $h_3$ form a triangle. Since any cycle has length at least $3$, the girth $gr(\Gamma(C_c(X)_F))=3$. ◻ **Theorem 51**. *$diam(\Gamma(C_c(X)_F))=2$ if and only if $gr(\Gamma(C_c(X)_F))=\infty$ if and only if $|X|=2$.* *Proof.* Let $X=\{x,y\}$. Then for any vertex $f$ of $\Gamma(C_c(X)_F)$, $Z(f)$ must be a singleton. Let us consider $f=\chi_{\{x\}}$ and $g=\chi_{\{y\}}$. Then $f$ and $2f$ are not adjacent to each other, whereas $g$ is adjacent to both $f$ and $2f$. Now, if two vertices $f$ and $g$ have the same zero set, then they are constant multiples of each other; thus they cannot be adjacent, and their distance is $2$. If their zero sets are not the same, then they are adjacent to each other. Hence for any two vertices $f,g$, $d(f,g)$ is either $1$ or $2$.
Thus we conclude that $diam(\Gamma(C_c(X)_F))=2.$ Since there are only two distinct zero sets, there cannot exist any cycle in the graph $\Gamma(C_c(X)_F)$. Thus the girth $gr(\Gamma(C_c(X)_F))=\infty$. Now suppose $diam(\Gamma(C_c(X)_F))=2$ or $gr(\Gamma(C_c(X)_F))=\infty$. By Theorem [Theorem 50](#diam){reference-type="ref" reference="diam"}, we see that if $X$ contains more than two points, then the diameter and the girth are both $3$. Hence we have $|X|=2$, because if $X$ is a singleton, then there is no zero divisor. ◻ **Definition 52**. For a vertex $f$ in a graph $G$, the associated number $e(f)$ is defined by $e(f)=\max\{d(f,g):g (\neq f)$ is a vertex in $G\}$. A vertex with the smallest associated number is called a centre of the graph. The associated number of a centre vertex of $G$ is called the radius of the graph, and it is denoted by $\rho(G)$. The following result is about the associated number of any vertex in the graph $\Gamma(C_c(X)_F)$. **Lemma 53**. *For any vertex $f$ in the graph $\Gamma(C_c(X)_F)$, we have $$e(f)=\left\{ \begin{array}{ll} 2 \text{ if } X\setminus Z(f)\text{ is singleton}\\ 3 \text{ otherwise.} \end{array} \right.$$* *Proof.* Suppose $X\setminus Z(f)=\{x_\circ\}$. Let $g$ be any vertex in $\Gamma(C_c(X)_F)$ such that $g\neq f$. Then there are only two possibilities, namely $x_\circ\in Z(g)$ or $x_\circ\notin Z(g)$. If $Z(g)$ contains $x_\circ$, then $fg=0$; in this case $f$ and $g$ are adjacent to each other, so $d(f,g)=1$. On the other hand, if $Z(g)$ does not contain $x_\circ$, then $Z(g)\subseteq Z(f)$. This implies that $Z(f)\cap Z(g) = Z(g)\neq \emptyset$ and $Z(f)\cup Z(g)=Z(f)\neq X$. Therefore by Theorem [Theorem 48](#deg-thm){reference-type="ref" reference="deg-thm"}, $d(f,g)=2$; in particular, $d(f,2f)=2$. Hence we obtain $e(f)=2.$ On the other hand, let $X\setminus Z(f)$ contain at least two points, say $x_\circ$ and $y_\circ$.
By Theorem [Theorem 48](#deg-thm){reference-type="ref" reference="deg-thm"}, we see that $e(f)\leq 3.$ Now choose $g=1-\chi_{\{x_\circ\}}$. Then $Z(g)=\{x_\circ\}$. Clearly, $Z(f)\cap Z(g)=\emptyset$, and $Z(f)\cup Z(g)\neq X$ because $y_\circ$ does not belong to the union. Hence by Theorem [Theorem 48](#deg-thm){reference-type="ref" reference="deg-thm"}, for this particular $g$, we get $d(f,g)=3$. Thus we obtain $e(f)=3$. ◻ **Corollary 54**. The radius $\rho(\Gamma(C_c(X)_F))$ of the graph $\Gamma(C_c(X)_F)$ is always $2$. *Proof.* We can always find a vertex $f$ with $e(f)=2$: for example, take $f=\chi_{\{x_\circ\}}$, so that $X\setminus Z(f)$ is a singleton. Since, by the above lemma, every vertex has associated number $2$ or $3$, we have $\rho(\Gamma(C_c(X)_F))=2$. ◻ **Definition 55**. A graph $G$ is said to be 1. triangulated if every vertex of the graph $G$ is a vertex of a triangle. 2. hyper-triangulated if every edge of the graph $G$ is an edge of a triangle. **Theorem 56**. *The graph $\Gamma(C_c(X)_F)$ is neither triangulated nor hyper-triangulated.* *Proof.* First, we prove that the graph $\Gamma(C_c(X)_F)$ is not triangulated. For this, let us consider $x_\circ\in X$ and define $f= 1-\chi_{\{x_\circ\}}.$ Then $Z(f) = \{x_\circ\}.$ We claim that there is no triangle containing $f$ as a vertex. If possible, let $g,h$ be two vertices such that $f,g,h$ form a triangle. Then by Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}, $Z(g) = X\setminus\{x_\circ\}=Z(h).$ But then, again by Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}, $g$ and $h$ cannot be adjacent, contradicting the assumption that $f,g,h$ form a triangle. To show that the graph is not hyper-triangulated, take a point $x_\circ\in X$ and the two functions $f=\chi_{\{x_\circ\}}$ and $g=1-\chi_{\{x_\circ\}}$. Then $Z(f)\cup Z(g)=X$ and $Z(f)\cap Z(g)=\emptyset$. Then by Lemma [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, it is not possible to get a triangle that contains the edge connecting $f$ and $g$.
So the graph $\Gamma(C_c(X)_F)$ is neither triangulated nor hyper-triangulated. ◻ The above result is in sharp contrast with the case of $C(X)$. In fact, we have **Proposition 57** ([@zero]). *The following results are true.* 1. *$\Gamma(C(X))$ is triangulated if and only if $X$ does not contain any non-isolated points.* 2. *$\Gamma(C(X))$ is hyper-triangulated if and only if $X$ is a connected middle $P$-space.* For the definition of a middle $P$-space, see [@zero]. **Definition 58**. For two vertices $f$ and $g$ in any graph $G$, we denote by $c(f,g)$ the length of the smallest cycle containing $f$ and $g$. If there is no cycle containing $f$ and $g$, we declare $c(f,g)=\infty$. In the following theorem, we shall discuss all possible values of $c(f,g)$ in the graph $\Gamma(C_c(X)_F)$. **Theorem 59**. *Let $f$ and $g$ be two vertices in the graph $\Gamma(C_c(X)_F)$. Then* 1. *$c(f,g)=3$ if and only if $Z(f)\cup Z(g)=X$ and $Z(f) \cap Z(g)\neq \emptyset$.* 2. *$c(f,g)=4$ if and only if either $Z(f)\cup Z(g)=X$ and $Z(f) \cap Z(g)= \emptyset$, or $Z(f)\cup Z(g)\neq X$ and $Z(f) \cap Z(g) \neq \emptyset$.* 3. *$c(f,g)=6$ if and only if $Z(f)\cup Z(g)\neq X$ and $Z(f) \cap Z(g) = \emptyset$.* *Proof.* (i) Suppose $Z(f)\cup Z(g)=X$ and $Z(f) \cap Z(g)\neq \emptyset$. Then by Lemmas [Lemma 44](#deg1){reference-type="ref" reference="deg1"} and [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, $f$ and $g$ are adjacent to each other and there is another vertex $h$ adjacent to both $f$ and $g$. Hence we obtain a triangle with vertices $f,g$ and $h$. This shows that $c(f,g)=3$. Conversely, if $c(f,g)=3$, then there exists a triangle with $f,g$ and $h$ as its vertices, for some other vertex $h$. Now, using Lemmas [Lemma 44](#deg1){reference-type="ref" reference="deg1"} and [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, we find that $Z(f)\cup Z(g)=X$ and $Z(f) \cap Z(g)\neq \emptyset$.\ (ii) Consider $Z(f)\cup Z(g)=X$ and $Z(f) \cap Z(g)= \emptyset$.
Then, using Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}, $f$ and $g$ are adjacent to each other. Now by Lemma [Lemma 46](#deg 3){reference-type="ref" reference="deg 3"}, there are vertices $h_1$ and $h_2$ such that $f$ is adjacent to $h_1$, $h_1$ is adjacent to $h_2$ and $h_2$ is adjacent to $g$. Thus we get a cycle of length $4$ with vertices, in order, $f,h_1,h_2$ and $g$. As $Z(f) \cap Z(g)= \emptyset$, by Lemma [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, there is no triangle containing $f$ and $g$ as its vertices. Thus $c(f,g)=4$. Now suppose $Z(f)\cup Z(g)\neq X$ and $Z(f) \cap Z(g) \neq \emptyset$. Then, using Lemma [Lemma 44](#deg1){reference-type="ref" reference="deg1"}, $f$ and $g$ are not adjacent to each other. By Lemma [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, there exists a vertex $h$ such that $h$ is adjacent to both $f$ and $g$. Then $2h$ is also adjacent to both $f$ and $g$. Thus we get a quadrilateral containing, in order, the vertices $f,h,g$ and $2h$. Again, the condition $Z(f)\cup Z(g)\neq X$ implies that it is not possible to have a triangle containing $f$ and $g$ as its vertices. So $c(f,g)=4.$ To prove the converse, let $c(f,g)=4.$ If $Z(f)\cup Z(g)=X$, then we must have $Z(f)\cap Z(g)=\emptyset$; otherwise, by (i), there is a triangle having vertices $f$ and $g$. If $Z(f)\cup Z(g)\neq X$, then $f$ and $g$ are not adjacent to each other, but there is a quadrilateral containing $f$ and $g$. So there must exist two functions $h_1$ and $h_2$ such that both $h_1$ and $h_2$ are adjacent to both $f$ and $g$. So by Lemma [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"}, we have $Z(f) \cap Z(g) \neq \emptyset$. \(iii\) Let $Z(f)\cup Z(g)\neq X$ and $Z(f) \cap Z(g) = \emptyset$. Then $f$ and $g$ are not adjacent to each other.
As $Z(f) \cap Z(g) = \emptyset$, by Lemma [Lemma 46](#deg 3){reference-type="ref" reference="deg 3"}, there are two vertices $h_1$ and $h_2$ in $\Gamma(C_c(X)_F)$ such that there is a path connecting $f,h_1,h_2$ and $g$ in order. Immediately, there is another path connecting $g,2h_2,2h_1$ and $f$. So we get a cycle of length $6$, namely $f,h_1,h_2,g,2h_2,2h_1,f$. Let us make it clear that, under the given conditions, it is not possible to get a cycle of length $4$ or $5$ through $f$ and $g$: as $f$ and $g$ are not adjacent to each other, such a cycle would have to contain a path of length $2$ joining $f$ and $g$, i.e., a common neighbour of $f$ and $g$, which is impossible by Lemma [Lemma 45](#deg 2){reference-type="ref" reference="deg 2"} since $Z(f)\cap Z(g)=\emptyset.$ This implies that $c(f,g)=6.$ Conversely, let $c(f,g)=6.$ Then, by the proofs of (i) and (ii), we have $Z(f)\cup Z(g)\neq X$ and $Z(f) \cap Z(g) = \emptyset$. ◻ Ahmadi, M. R., Khosravi, Z.: Remarks on the rings of functions which have a finite number of discontinuities. Appl. Gen. Topol. 22(1), 139-147 (2021) Azarpanah, F., Motamedi, M.: Zero-divisor graph of $C(X)$. Acta Math. Hungar. 108(1-2), 25-36 (2005) Bag, S., Acharyya, S. K., Mandal, D., Raha, A. B., Singha, A.: Rings of functions which are discontinuous on a set of measure zero. Positivity. 26(12), 1-15 (2022) Birkenmeier, G. F., Ghirati, M., Taherifar, A.: When is the sum of annihilator ideals an annihilator ideal? Comm. Algebra. 43(7), 2690-2702 (2015) Gharebaghi, Z., Ghirati, M., Taherifar, A.: On the rings of functions which are discontinuous on a finite set. Houston J. Math. 44(2), 721-739 (2018) Ghirati, M., Karamzadeh, O. A. S.: On strongly essential submodules. Commun. Algebra. 36(2), 564-580 (2008) Ghirati, M., Taherifar, A.: Intersection of essential (resp. free) maximal ideals of $C(X)$. Topol. Appl. 167, 62-68 (2014) Gillman, L., Jerison, M.: Rings of Continuous Functions. Springer, London (1976) Karamzadeh, O. A. S., Rostami, M.: On the intrinsic topology and some related ideals of $C(X)$. Proc. Amer. Math. Soc. 93(1), 179-184 (1985) De Marco, G., Orsatti, A.: Commutative rings in which every prime ideal is contained in a unique maximal ideal. Proc. Amer. Math. Soc. 30, 459-466 (1971) Mason, G.: Prime $z$-ideals of $C(X)$ and related rings. J. Algebra. 26, 280-297 (1973) Mandal, S. Ch., Bag, S., Mandal, D.: Convergence and the zero divisor graph on the ring of functions which are discontinuous on a finite set. Afr. Mat. 34(43), 1-10 (2023) Porter, J. R., Woods, R. G.: Extensions and Absolutes of Hausdorff Spaces. Springer, New York (1988) Taherifar, A.: Intersection of essential minimal prime ideals. Comment. Math. Univ. Carolin. 55(1), 121-130 (2014) Taherifar, A.: Some new classes of topological spaces and annihilator ideal. Topology Appl. 165, 84-97 (2014)
--- abstract: | We provide a few characterizations of a strictly convex Banach space. Using this we improve the main theorem of \[Digar, Abhik; Kosuru, G. Sankara Raju; Cyclic uniform Lipschitzian mappings and proximal uniform normal structure. Ann. Funct. Anal. 13 (2022)\]. address: - Indian Institute of Technology Kanpur, Department of Mathematics & Statistics, Kalyanpur, Uttar Pradesh, 208016, India - Indian Institute of Technology Punjab, Rupnagar, Punjab 140 001, India author: - Abhik Digar - G. Sankara Raju Kosuru title: Characterizations of strictly convex spaces and proximal uniform normal structure --- # Introduction and preliminaries {#Section1} We first fix a few notations, which will be used in the sequel. In the next section, we prove characterizations of strictly convex spaces and, using one of these characterizations, we fix a gap in the main theorem (Theorem 5.8) of [@AbhikRaju2021]. Let $\mathcal{E}$ be a Banach space and let $(\mathcal{P},\mathcal{Q})$ be a pair of non-empty subsets of $\mathcal{E}.$ We denote the sets $\mathcal{P}_0=\{p\in \mathcal{P}: \|p-q\|=d(\mathcal{P}, \mathcal{Q})~\mbox{for some}~q\in \mathcal{Q}\}$ and $\mathcal{Q}_0=\{q\in \mathcal{Q}: \|p-q\|=d(\mathcal{P}, \mathcal{Q})~\mbox{for some}~p\in \mathcal{P}\},$ where $d(\mathcal{P}, \mathcal{Q})=\operatorname{inf}\{\|p-q\|: p\in \mathcal{P}, q\in \mathcal{Q}\}.$ Then $(\mathcal{P}, \mathcal{Q})$ is said to be proximal if $\mathcal{P}=\mathcal{P}_0$ and $\mathcal{Q}=\mathcal{Q}_0.$ Also, $(\mathcal{P}, \mathcal{Q})$ is said to be semisharp proximal ([@Raju2011]) if for any $w$ in $\mathcal{P}$ (similarly, in $\mathcal{Q}$), whenever $u, v$ in $\mathcal{Q}$ (similarly, in $\mathcal{P}$) satisfy $\|w-u\|=d(\mathcal{P}, \mathcal{Q})=\|w-v\|,$ it is the case that $u=v.$ In this case, we denote $u$ by $w'$ (as well as $w$ by $u'$). A proximal pair is known as sharp proximal if it is semisharp.
Further, the sharp proximal pair $(\mathcal{P}, \mathcal{Q})$ is parallel ([@Espinola2008]) if $\mathcal{Q}=\mathcal{P}+h$ for some $h\in \mathcal{E}.$ In this case, we have $p'=p+h,~q'=q-h$ for $p\in \mathcal{P}, q\in \mathcal{Q}$ and $\|h\|=d(\mathcal{P},\mathcal{Q}).$ Suppose $\mathcal{D}$ is a non-empty subset of $\mathcal{E}.$ The metric projection $P_\mathcal{D}: \mathcal{E}\to \mathcal{D}$ is defined by $P_\mathcal{D}(x)=\{z\in \mathcal{D}: \|x-z\|=d(x, \mathcal{D})\}$ for all $x\in \mathcal{E}.$ It is known that if $\mathcal{D}$ is a weakly compact and convex subset of a strictly convex Banach space, then $P_\mathcal{D}(x)$ is a singleton for every $x\in \mathcal{E}.$ For $x\in \mathcal{E}$ and $r>0,$ the closed ball centered at $x$ with radius $r$ is denoted by $\mathbb{B}(x,r).$ We recall a lemma from [@AbhikRaju2021]. **Lemma 1**. *Let $(\mathcal{P},\mathcal{Q})$ be a non-empty closed convex semisharp proximal pair in a Banach space. Suppose that $\{u_n\}$ is a sequence in $\mathcal{P}$ and $(u,v)$ is a proximal pair in $(\mathcal{P},\mathcal{Q})$ such that $\displaystyle\lim_{n\to \infty}\|u_n-v\|=d(\mathcal{P}, \mathcal{Q}).$ Then $\displaystyle\lim_{n\to \infty} u_n=u.$* The following two results were proved in [@EldredRaj2014]. **Theorem 2**. *Let $\mathcal{C}$ be a non-empty closed convex subset of a strictly convex Banach space. Then $\mathcal{C}$ contains not more than one point of minimum norm.* **Theorem 3**. *Let $(\mathcal{P},\mathcal{Q})$ be a non-empty closed convex pair in a strictly convex Banach space.
Then the restriction of the metric projection operator $P_{\mathcal{P}_0}$ to $\mathcal{Q}_0$ is an isometry.* # Characterizations of strictly convex spaces {#Section2} Let $(\mathcal{P}, \mathcal{Q})$ be a non-empty pair in a Banach space $\mathcal{E}.$ For $w\in \mathcal{E},$ a sequence $\{p_n\}$ in $\mathcal{P}$ is said to be minimizing with respect to $w$ if $\displaystyle\lim_{n\to \infty}\|p_n-w\|=d(w,\mathcal{P}).$ Also, the sequence is said to be minimizing with respect to $\mathcal{Q}$ if $d(p_n, \mathcal{Q})\to d(\mathcal{P}, \mathcal{Q}).$ It is well known that $\mathcal{P}$ is approximatively compact if every minimizing sequence in $\mathcal{P}$ has a convergent subsequence ([@book:Megginson]). We say that $\mathcal{P}$ is weak approximatively compact with respect to $\mathcal{Q}$ if every minimizing sequence in $\mathcal{P}$ with respect to $\mathcal{Q}$ has a weakly convergent subsequence in $\mathcal{P}.$ The pair $(\mathcal{P}, \mathcal{Q})$ is weak approximatively compact if $\mathcal{P}$ and $\mathcal{Q}$ are weak approximatively compact with respect to $\mathcal{Q}$ and $\mathcal{P}$, respectively. The pair $(\mathcal{P}, \mathcal{Q})$ is said to have the property UC ([@Suzuki2009]) if, whenever $\{u_n\}, \{v_n\}$ are sequences in $\mathcal{P}$ and $\{q_n\}$ is a sequence in $\mathcal{Q}$ with $\displaystyle \lim_{n\to \infty} \|u_n-q_n\|=d(\mathcal{P}, \mathcal{Q})=\displaystyle \lim_{n\to \infty} \|v_n-q_n\|,$ it is the case that $\displaystyle\lim_{n\to \infty} \|u_n- v_n\|=0.$ It is easy to see that if a non-empty pair has property UC, then it is semisharp proximal. The Banach space $\mathcal{E}$ is said to have the Kadec-Klee property ([@book:Megginson]) if weak convergence and strong convergence coincide for sequences on the unit sphere $S_\mathcal{E}$. A normed linear space is said to be strictly convex if its unit sphere contains no non-trivial line segment. The following result provides several characterizations of a strictly convex Banach space.
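Before stating it, here is a small numeric sketch of our own (not part of the paper), in $\mathbb{R}^2$, of how strict convexity fails for the max norm, whose unit sphere contains the non-trivial segment from $(1,-1)$ to $(1,1)$, while the Euclidean norm is strictly convex. The segment is the same construction that appears repeatedly in the proof below.

```python
import numpy as np

# Sample the segment D = {(1, t) : -1 <= t <= 1} and measure distances to 0.
ts = np.linspace(-1.0, 1.0, 2001)
seg = np.stack([np.ones_like(ts), ts], axis=1)

# Euclidean norm (strictly convex): ||(1, t)||_2 = sqrt(1 + t^2) has a strict
# minimum at t = 0, so the nearest point of D to the origin is unique.
d_euc = np.linalg.norm(seg, axis=1)
t_best = ts[np.argmin(d_euc)]           # essentially 0: the unique minimizer

# Max norm (not strictly convex): ||(1, t)||_inf = 1 on the whole segment, so
# every point of D is a nearest point -- the metric projection of 0 onto D is
# not a singleton, and (D, {0}) is not semisharp proximal.
d_max = np.abs(seg).max(axis=1)
print(bool(np.allclose(d_max, 1.0)))    # True
```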
**Theorem 4**. *Let $\mathcal{E}$ be a Banach space. Then the following statements are equivalent:* 1. *$\mathcal{E}$ is strictly convex.* 2. *Suppose $(\mathcal{P},\mathcal{Q})$ is a non-empty closed bounded convex pair in $\mathcal{E}$. Then $(\mathcal{P},\mathcal{Q})$ is semisharp proximal.* 3. *If $(\mathcal{P},\mathcal{Q})$ is a non-empty closed bounded convex pair in $\mathcal{E}$, $(p,q)\in (\mathcal{P},\mathcal{Q})$, and $\{p_n\}$ is a sequence in $\mathcal{P}$ such that $\displaystyle\lim_{n\to \infty}\|p_n-q\|=d(\mathcal{P}, \mathcal{Q})=\|p-q\|,$ then $\displaystyle\lim_{n\to \infty}p_n=p.$* 4. *If $\mathcal{P}$ and $\mathcal{Q}$ are compact subsets of $\mathcal{E}$ with $\mathcal{P}$ convex, then $(\mathcal{P},\mathcal{Q})$ has property UC.* 5. *Suppose $(\mathcal{P},\mathcal{Q})$ is a non-empty closed bounded convex pair in $\mathcal{E}$. If $\mathcal{E}$ has the Kadec-Klee property and $(\mathcal{P},\mathcal{Q})$ is weak approximatively compact, then $(\mathcal{P},\mathcal{Q})$ has property UC.* 6. *Suppose $(\mathcal{P},\mathcal{Q})$ is a non-empty closed bounded convex pair in $\mathcal{E}$. If $p,w\in \mathcal{P}$ and $q\in \mathcal{Q}$ satisfy $\|p-q\|=d(q,\mathcal{P})$ and $\|w-q\|=d(q,\mathcal{P}),$ then $p=w.$* 7. *If $\mathcal{P}$ is a non-empty closed convex subset of $\mathcal{E},$ then it has no more than one element of minimum norm.* 8. *If $(\mathcal{P},\mathcal{Q})$ is a non-empty closed bounded convex pair in $\mathcal{E}$ and $\mathcal{P}_0\neq \emptyset$, then the restriction of the metric projection $P_{\mathcal{P}_0}: \mathcal{Q}_0\to \mathcal{P}_0$ is an isometry.* 9. *Let $(\mathcal{P},\mathcal{Q})$ be a non-empty closed bounded convex pair in $\mathcal{E}$.
For any $p\in \mathcal{P},$ the closed ball $\mathbb{B}(p,d(\mathcal{P},\mathcal{Q}))$ intersects $\mathcal{Q}$ at not more than one point.* *Proof.* We only prove the following implications: $(b)\Rightarrow (a), (b)\Leftrightarrow (d), (b)\Leftrightarrow (e), (a)\Leftrightarrow (f), (a)\Leftrightarrow (g), (a)\Leftrightarrow (h), (a)\Leftrightarrow (i).$ The proofs of $(a) \Rightarrow (b)$ and $(c) \Rightarrow (b)$ are obvious, and $(b) \Rightarrow (c)$ follows from Lemma [Lemma 1](#Lemma:AbhikRaju2021){reference-type="ref" reference="Lemma:AbhikRaju2021"}. $(b)\Rightarrow (a):$ Suppose $\mathcal{E}$ is not strictly convex. Then there exists a nontrivial line segment $\mathcal{Q}=\{tc_1+(1-t)c_2: 0\leq t\leq 1\}$ in $S_\mathcal{E},$ where $c_1, c_2\in S_\mathcal{E}$ with $c_1\neq c_2.$ Set $\mathcal{P}=\{0\}.$ Then $d(\mathcal{P}, \mathcal{Q})=1.$ Clearly $\mathcal{Q}$ is a non-empty closed bounded convex subset of $\mathcal{E}$ such that $\left\|0-\left(tc_1+(1-t)c_2\right)\right\|=1$ for all $0\leq t\leq 1.$ This contradicts the semisharp proximality of $(\mathcal{P}, \mathcal{Q}).$ Thus we have $(b) \Rightarrow (a).$ $(b) \Leftrightarrow (d):$ Assume $(b)$, and let $(\mathcal{P}, \mathcal{Q})$ be a compact pair in $\mathcal{E}$ with $\mathcal{P}$ convex.
Suppose there exist sequences $(u_n), (z_n)$ in $\mathcal{P}$ and $(v_n)$ in $\mathcal{Q}$ such that $\operatorname{max}\{\|u_n-v_n\|, \|z_n-v_n\|\}\to d(\mathcal{P},\mathcal{Q}).$ Choose a subsequence $(n_j)$ and points $u, z\in \mathcal{P}, v\in \mathcal{Q}$ such that $u_{n_j}\to u, z_{n_j}\to z$ and $v_{n_j}\to v.$ Then $\|u-v\|=d(\mathcal{P},\mathcal{Q})=\|z-v\|.$ By $(b),~u=z.$ We now prove that $\|u_n-z_n\|\to 0.$ If not, then there exist $\epsilon_0>0$ and a subsequence $(n_l)$ such that $\|u_{n_l}-z_{n_l}\|>\epsilon_0$ for all $l\geq 1.$ By the earlier argument, there exists a subsequence $(n_{l_k})$ such that $u_{n_{l_k}}\to p$ and $z_{n_{l_k}}\to p$ for some $p\in \mathcal{P}.$ This is a contradiction. Conversely, assume $(d).$ Let $(\mathcal{P},\mathcal{Q})$ be a non-empty closed bounded convex pair in $\mathcal{E}.$ Let $\|u-v\|=d(\mathcal{P},\mathcal{Q})=\|z-v\|$ for some $u, z\in \mathcal{P}$ and $v\in \mathcal{Q}.$ Set $\mathcal{C}=$ $\overline{co}\{u,z\}.$ Then $(\mathcal{C},\{v\})$ is a compact convex pair in $\mathcal{E}$ with $d(\mathcal{P},\mathcal{Q})\leq d(\mathcal{C},\{v\})\leq \|u-v\|=d(\mathcal{P},\mathcal{Q}).$ By $(d),$ we have $z=u.$ $(b) \Leftrightarrow (e):$ Assume $(b).$ Let $(\mathcal{P},\mathcal{Q})$ be a bounded convex semisharp proximal pair in $\mathcal{E}.$ Suppose $(\mathcal{P},\mathcal{Q})$ is weak approximatively compact, and $\mathcal{E}$ has the Kadec-Klee property. Let $(u_n),(z_n)$ in $\mathcal{P}$ and $(v_n)$ in $\mathcal{Q}$ be sequences such that $\|u_n- v_n\|\to d(\mathcal{P},\mathcal{Q})$ and $\|z_n-v_n\|\to d(\mathcal{P},\mathcal{Q}).$ It is clear that $(u_n), (z_n)$ in $\mathcal{P}$ and $(v_n)$ in $\mathcal{Q}$ are minimizing sequences. Then there exists a subsequence $(n_k)$ of $(n)$ such that $u_{n_k}\to u, z_{n_k}\to z$ and $v_{n_k}\to v$ weakly for some $u, z\in \mathcal{P}$ and $v\in \mathcal{Q}.$ Then $u_{n_k}-v_{n_k}\to u-v$ and $z_{n_k}-v_{n_k}\to z-v$ weakly.
Since the norm is weakly lower semicontinuous, $\|u-v\|\leq \displaystyle\liminf_{k}\|u_{n_k}-v_{n_k}\|=d(\mathcal{P},\mathcal{Q})\leq \|u-v\|,$ so $\|u_{n_k}-v_{n_k}\|\to \|u-v\|$ and, similarly, $\|z_{n_k}-v_{n_k}\|\to \|z-v\|.$ By the Kadec-Klee property of $\mathcal{E},$ it is clear that $u_{n_k}-v_{n_k}\to u-v$ and $z_{n_k}-v_{n_k}\to z-v.$ Thus $u_{n_k}-z_{n_k}\to u-z.$ Moreover, $\|u-v\|=d(\mathcal{P},\mathcal{Q})=\|z-v\|.$ By $(b), u=z.$ Hence $\|u_{n_k}-z_{n_k}\|\to 0.$ By applying the earlier argument, we have $\|u_n-z_n\|\to 0.$ To see $(e) \Rightarrow (b),$ suppose $u,z\in \mathcal{P}$ and $v\in \mathcal{Q}$ are such that $\|u-v\|=d(\mathcal{P},\mathcal{Q})=\|z-v\|.$ Set $\mathcal{C}=\overline{co}(\{u,z\})$ and $\mathcal{D}=\{v\}.$ Then $(\mathcal{C},\mathcal{D})$ is a closed bounded weak approximatively compact convex pair. By property UC, we have $u=z.$ $(a)\Leftrightarrow (f):$ Suppose $\mathcal{E}$ is strictly convex and $(\mathcal{P},\mathcal{Q})$ is a non-empty pair with $\mathcal{P}$ convex. Let $u, z\in \mathcal{P}$ with $u\neq z$ and $v\in \mathcal{Q}$ be such that $\|u-v\|=d(v,\mathcal{P})=\|z-v\|.$ By strict convexity, $\left\|\frac{u+z}{2}-v\right\|<d(v,\mathcal{P}),$ a contradiction. Conversely, suppose that $\mathcal{E}$ is not strictly convex. Then there exist $c_1, c_2\in S_\mathcal{E}, c_1\neq c_2$ such that $\{tc_1+(1-t)c_2: 0\leq t\leq 1\}\subseteq S_\mathcal{E}.$ Set $\mathcal{C}=\{tc_1+(1-t)c_2: 0\leq t\leq 1\}$ and $\mathcal{D}=\{0\}.$ It is clear that $(\mathcal{C},\mathcal{D})$ contradicts $(f).$ $(a)\Leftrightarrow (g):$ Suppose $\mathcal{E}$ is not strictly convex. Then, as mentioned above, there exists a set $\mathcal{P}=\{tc_1+(1-t)c_2: 0\leq t\leq 1\}\subseteq S_\mathcal{E}$ where $c_1, c_2\in S_\mathcal{E}$ with $c_1\neq c_2.$ We see that all elements of $\mathcal{P}$ are of minimum norm. This contradicts $(g)$, and proves $(g)\Rightarrow (a).$ The proof of $(a)\Rightarrow (g)$ follows from Theorem [Theorem 2](#Thm1:EldredRaj2014){reference-type="ref" reference="Thm1:EldredRaj2014"}.
$(a)\Leftrightarrow (h):$ Suppose $\mathcal{E}$ is not strictly convex. Then, as mentioned above, there exists a set $\mathcal{Q}=\{tc_1+(1-t)c_2: 0\leq t\leq 1\}\subseteq S_\mathcal{E}$ where $c_1, c_2\in S_\mathcal{E}$ with $c_1\neq c_2.$ Set $\mathcal{P}=\{0\}.$ Then $\mathcal{P}_0=\mathcal{P}$ and $\mathcal{Q}_0=\mathcal{Q}.$ Moreover, $\|P_\mathcal{P}(c_1)-P_\mathcal{P}(c_2)\|=0\neq \|c_1-c_2\|.$ Thus $P_\mathcal{P}$ is not an isometry. This proves $(h)\Rightarrow (a).$ The proof of $(a)\Rightarrow (h)$ follows from Theorem [Theorem 3](#Thm2:EldredRaj2014){reference-type="ref" reference="Thm2:EldredRaj2014"}. $(a)\Leftrightarrow (i):$ Suppose $\mathcal{E}$ is strictly convex and $(\mathcal{P},\mathcal{Q})$ is a non-empty convex pair in $\mathcal{E}.$ Let $v_1, v_2\in \mathcal{Q}$ and $u\in \mathcal{P}$ be such that $v_1, v_2\in \mathbb{B}(u, d(\mathcal{P},\mathcal{Q})).$ Then $\|u-v_i\|= d(\mathcal{P},\mathcal{Q}), 1\leq i\leq 2.$ By strict convexity, $v_1=v_2.$ Conversely, suppose that $\mathcal{E}$ is not strictly convex.
Then there is a set $\mathcal{C}=\{tc_1+(1-t)c_2: 0\leq t\leq 1\}\subseteq S_\mathcal{E}$ where $c_1, c_2\in S_\mathcal{E}$ with $c_1\neq c_2.$ Set $\mathcal{D}=\{0\}.$ Then $d(\mathcal{C},\mathcal{D})=1$ and the closed ball $\mathbb{B}(0,1)=\mathbb{B}(0,d(\mathcal{C},\mathcal{D}))$ intersects $\mathcal{C}$ at more than one point, contradicting $(i).$ ◻ # On proximal uniform normal structure and best proximity pairs We fix a few notations for a non-empty bounded convex pair $(\mathcal{P},\mathcal{Q})$ of subsets of a Banach space $\mathscr{E}.$ For $u\in \mathcal{P},~r(u,\mathcal{Q}) =\operatorname{sup}\{\|u-v\|:v\in \mathcal{Q}\}.$ We denote $d(\mathcal{P},\mathcal{Q})=\operatorname{inf}\{\|z-w\|:z\in \mathcal{P}, w\in \mathcal{Q}\},~\delta(\mathcal{P},\mathcal{Q})=\operatorname{sup}\{\|u-v\|:u\in \mathcal{P}, v\in \mathcal{Q}\},~ r(\mathcal{P},\mathcal{Q})=\operatorname{inf}\{r(z, \mathcal{Q}):z\in \mathcal{P}\},$ $R(\mathcal{P},\mathcal{Q})=\operatorname{max}\{r(\mathcal{P},\mathcal{Q}), r(\mathcal{Q}, \mathcal{P})\},$ $\mathcal{P}_0=\{a\in \mathcal{P}: \|a-b\|=d(\mathcal{P},\mathcal{Q})~\mbox{for some}~b\in \mathcal{Q}\}$ and $\mathcal{Q}_0=\{b\in \mathcal{Q}: \|b-a\|=d(\mathcal{P},\mathcal{Q})~\mbox{for some}~a\in \mathcal{P}\}.$ Also, we denote by $\overline{co}(\mathcal{P})$ the closed convex hull of $\mathcal{P}$. The pair $(\mathcal{P},\mathcal{Q})$ is proximal (resp., semisharp proximal) ([@Espinola2008; @Raju2011]) if for $(u,v)\in (\mathcal{P},\mathcal{Q})$ there exists one (resp., at most one) pair $(z,w)\in (\mathcal{P},\mathcal{Q})$ for which $\|u-w\|=d(\mathcal{P},\mathcal{Q})=\|v-z\|$. If such a pair $(z,w)$ is unique for any given $(u,v) \in (\mathcal{P},\mathcal{Q})$, then $(\mathcal{P},\mathcal{Q})$ is said to be a sharp proximal pair and $(z,w)$ is denoted by $(v',u')$.
We denote by $\Upsilon (\mathcal{P},\mathcal{Q})$ the collection of non-empty bounded closed convex proximal pairs $(A_1,B_1)$ of $(\mathcal{P},\mathcal{Q})$ such that at least one of $A_1$ and $B_1$ contains more than one point and $d(A_1,B_1)=d(\mathcal{P},\mathcal{Q}).$ According to [@AbhikRaju2021], the pair $(\mathcal{P},\mathcal{Q})$ is said to have proximal uniform normal structure ([@AbhikRaju2021 Definition 5.1, p. 9]) if $N(\mathcal{P},\mathcal{Q})=\displaystyle \operatorname{sup}\frac{R(A_1,B_1)}{\delta(A_1,B_1)} < 1.$ Here the supremum is taken over all $(A_1,B_1)$ in $\Upsilon(\mathcal{P},\mathcal{Q}).$ If $d(\mathcal{P},\mathcal{Q})=0$, then proximal uniform normal structure turns out to be uniform normal structure (which was introduced in [@GW-UNS-1979]) and the existence of the best proximity pair ([@AbhikRaju2021 Theorem 5.8, p. 11]) boils down to the corresponding existing fixed point result ([@LImXu1995 Theorem 7, p. 1234]). Hence, throughout this note, without loss of generality we assume that $d(\mathcal{P},\mathcal{Q})>0$. Let $(A_1,B_1)$ be an element in $\Upsilon(\mathcal{P},\mathcal{Q}).$ Then it is possible to construct a sequence $\{(A_n, B_n)\}$ in $\Upsilon(A_1,B_1)$ such that $\delta(A_n)=\delta(B_n)\to 0.$ Consequently, $R(A_n, B_n)\leq R(A_n)+ d(\mathcal{P},\mathcal{Q})\leq \delta(A_n)+ d(\mathcal{P},\mathcal{Q})\to d(\mathcal{P},\mathcal{Q})$ and $\delta(A_n, B_n)\leq \delta(A_n)+ d(\mathcal{P},\mathcal{Q})\to d(\mathcal{P},\mathcal{Q}).$ Hence $N(\mathcal{P},\mathcal{Q})=1$, and so no pair $(\mathcal{P},\mathcal{Q})$ can have proximal uniform normal structure. Thus, to enrich the family of non-empty pairs that satisfy proximal uniform normal structure, we reform the notion. **Definition 5**.
*A non-empty bounded convex pair $(\mathcal{P},\mathcal{Q})$ of a Banach space $\mathscr{E}$ is said to have proximal uniform normal structure if $$\mathcal{N}(\mathcal{P},\mathcal{Q})=\displaystyle\operatorname{sup}\left\{\frac{R(A_1, A_2)-d(A_1,A_2)}{\delta(A_1, A_2)-d(A_1,A_2)}:(A_1, A_2)\in\Upsilon (\mathcal{P},\mathcal{Q}) \right\}<1.$$* It is easy to observe that if a non-empty bounded convex pair $(\mathcal{P},\mathcal{Q})$ has proximal uniform normal structure as defined in Definition [Definition 5](#PNUS){reference-type="ref" reference="PNUS"}, then it has proximal normal structure ([@Eldred2005]). We say that a Banach space $\mathscr{E}$ has proximal uniform normal structure if every non-empty closed bounded convex proximal pair of $\mathscr{E}$ has proximal uniform normal structure. It is known ([@AbhikRaju2021]) that if a non-empty closed bounded convex pair in a Banach space has proximal uniform normal structure, then it is semisharp proximal. Thus, in view of Theorem [Theorem 4](#Thm2.1:StrictConvexityEquivalence){reference-type="ref" reference="Thm2.1:StrictConvexityEquivalence"}, we have the following theorem. **Theorem 6**. *Let $X$ be a Banach space having proximal uniform normal structure. Then $X$ is strictly convex.* Let $(A,B)$ be as given in Example 5.2 of [@AbhikRaju2021]. It can be verified (by computations similar to those in Example 5.2 of [@AbhikRaju2021]) that for $(A_1,B_1)\in \Upsilon(A,B),$ there is a constant $c<1$ (independent of $(A_1,B_1)$) such that $\displaystyle\frac{R(A_1,B_1)^2-d(A_1,B_1)^2}{\delta(A_1,B_1)^2-d(A_1,B_1)^2}= \frac{R(B_1,B_1)^2}{\delta(B_1,B_1)^2}\leq \displaystyle \frac{c^2 \delta(B_1,B_1)^2}{\delta(B_1,B_1)^2}=c^2<1.$ Hence $(A,B)$ has proximal uniform normal structure. It should be observed that Lemma 5.5 of [@AbhikRaju2021] holds true with the revised notion of proximal uniform normal structure. We mention that we need to modify Lemma 5.6 and Proposition 5.7 of [@AbhikRaju2021]. **Proposition 7**.
*Let $(\mathcal{P},\mathcal{Q})$ be a non-empty weakly compact and convex pair in a Banach space $\mathscr{E}.$ Suppose $\{z_n\}$ and $\{u_n\}$ are two sequences in $\mathcal{P}$ and $\{v_n\}$ is a sequence in $\mathcal{Q}.$ Then there exists $v_0\in \mathcal{Q}$ such that $\displaystyle\operatorname{max}\{\limsup_{n}\|z_n-v_0\|, \limsup_{n}\|u_n-v_0\|\} \leq \limsup_{i}\operatorname{max}\{\limsup_{n}\|z_n-v_i\|, \limsup_{n}\|u_n-v_i\|\}.$* *Proof.* Set $C = \displaystyle\bigcap_{n=1}^{\infty}\overline{co}\left(\{v_i:i\geq n\}\right).$ By weak compactness, $C\neq \emptyset$. As the functions $h_1(x)=\displaystyle\limsup_{n}\|z_n-x\|$ and $h_2(x)=\displaystyle\limsup_{n}\|u_n-x\|$ are weakly lower semicontinuous, so is the function $h(x)=\operatorname{max}\{h_1(x), h_2(x)\}$. Thus there exist a subsequence $\{v_{k_i}\}$ and a point $v_0$ in $C$ such that $$\displaystyle h(v_0)\leq \liminf_{i} h(v_{k_i})\leq \limsup_{i} h(v_i)=\limsup_{i}\operatorname{max}\{h_1(v_i), h_2(v_i)\}.$$ This completes the proof. ◻ **Lemma 8**. *Let $(\mathcal{P},\mathcal{Q})$ be a non-empty pair as in Proposition [Proposition 7](#propertyP){reference-type="ref" reference="propertyP"}. Suppose $(\mathcal{P},\mathcal{Q})$ has proximal uniform normal structure. Let $\{u_n\}$ be a sequence in $\mathscr{E}$ such that $\{u_{2n}\}\subseteq \mathcal{P}$ and $\{u_{2n+1}\}\subseteq \mathcal{Q}$.
*Then there exists $x\in \mathcal{P}_0$ such that* - *$\displaystyle\operatorname{max}\{\limsup_{n} \left\|x-u_{2n}'\right\|, \limsup_{n} \left\|x-u_{2n+1}\right\|\}-d(\mathcal{P},\mathcal{Q})\\ \leq \mathcal{N}(\mathcal{P},\mathcal{Q})\left(\delta\left(\{u_{2n}, u_{2n+1}'\}, \{u_{2n}', u_{2n+1}\}\right)-d(\mathcal{P},\mathcal{Q})\right);$* - *$\|x-v\|\leq \displaystyle \operatorname{max}\{\limsup_{j}\|v-u_{2j}\|, \limsup_{j}\|v-u_{2j+1}'\|\}$ for all $v\in \mathcal{Q}.$* *Proof.* Set $\mathcal{P}_{2l}=\overline{co}\left(\{u_{2j}, u_{2j+1}' :j\geq l\}\right)$ and $\mathcal{P}_{2l}'=\overline{co}\left(\{u_{2j+1}, u_{2j}':j\geq l\}\right),$ $l\geq 1$. Then $\{\mathcal{P}_{2l}\}$ (similarly, $\{\mathcal{P}_{2l}'\}$) is a decreasing sequence of non-empty closed convex subsets of $\mathcal{P}$ (respectively, of $\mathcal{Q}$). By weak compactness, the set $\mathcal{P}^{0}=\displaystyle \bigcap_{l=1}^{\infty}\mathcal{P}_{2l}\neq \emptyset.$ For any $x\in \mathcal{P}^{0}$ and $v\in \mathcal{Q},$ $\|v-x\|\leq r(v, \mathcal{P}_{2l})=\displaystyle \operatorname{max}\{\operatorname{sup}_{j\geq l}\|v-u_{2j}\|, \operatorname{sup}_{j\geq l}\|v-u_{2j+1}'\|\}.$ Hence $\|v-x\|\leq \displaystyle \operatorname{max}\{\limsup_{j}\|v-u_{2j}\|, \limsup_{j}\|v-u_{2j+1}'\|\}.$ This proves (ii).
To prove (i), without loss of generality assume that $\delta(\mathcal{P},\mathcal{Q})>d(\mathcal{P},\mathcal{Q}).$ For $l\in \mathbb{N},$ choose $\epsilon_l>0$ such that $\displaystyle \lim_{l\to \infty} \epsilon_l =0.$ As $\delta\left(\mathcal{P}_{2l},\mathcal{P}_{2l}'\right)\leq \delta\left(\{u_{2l}, u_{2l+1}'\}, \{u_{2l}', u_{2l+1}\}\right)$, for $l\geq 1,$ there exists $x_{2l}\in \mathcal{P}_{2l}$ such that $r(x_{2l},\mathcal{P}_{2l}') -d(\mathcal{P},\mathcal{Q}) < R(\mathcal{P}_{2l},\mathcal{P}_{2l}')-d(\mathcal{P},\mathcal{Q})+\epsilon_l.$ Hence $$\begin{aligned} \operatorname{max}\{\displaystyle\limsup_{j}\left\|x_{2l}-u_{2j}'\right\|, \limsup_{j}\left\|x_{2l}-u_{2j+1}\right\|\}-d(\mathcal{P},\mathcal{Q})\\ \leq \mathcal{N}(\mathcal{P},\mathcal{Q}) \left(\delta \left(\{u_{2l}, u_{2l+1}'\}, \{u_{2l}', u_{2l+1}\}\right)-d(\mathcal{P},\mathcal{Q})\right) +\epsilon_l.\end{aligned}$$ As $F=\displaystyle \bigcap_{j=1}^{\infty}\overline{co}\{x_{2i}:i\geq j\}\neq \emptyset,$ by Proposition [Proposition 7](#propertyP){reference-type="ref" reference="propertyP"}, there is $w\in F$ satisfying $$\begin{aligned} &&\displaystyle \operatorname{max}\{\limsup_{j}\|w-u_{2j}'\|, \limsup_{j}\|w-u_{2j+1}\|\}-d(\mathcal{P},\mathcal{Q})\\ &&\leq \mathcal{N}(\mathcal{P},\mathcal{Q}) (\delta \left(\{u_{2l}, u_{2l+1}'\}, \{u_{2l}', u_{2l+1}\}\right)-d(\mathcal{P},\mathcal{Q})).\end{aligned}$$ This proves (i). ◻ Let $(\mathcal{P}, \mathcal{Q})$ be a non-empty closed bounded convex proximal pair in a Banach space $\mathcal{E}.$ Suppose $\mathcal{E}$ has proximal uniform normal structure. Let $x,y\in \mathcal{P}.$ By strict convexity (using Theorem [Theorem 6](#Thm:StrictConvexity){reference-type="ref" reference="Thm:StrictConvexity"}) we have $\|x-y'\|=\|x'-y\|.$ Below we use this fact to prove the main theorem of [@AbhikRaju2021]; we replace $(A,B)$ by $(\mathcal{P}, \mathcal{Q})$ in Theorem 5.8 of [@AbhikRaju2021].
Let $u_0\in \mathcal{P}_0.$ Then $\{T^{2m}u_0\}\subseteq \mathcal{P}_0$ and $\{T^{2m+1}u_0\}\subseteq \mathcal{Q}_0.$ Hence there exists $u_1\in \mathcal{P}_0$ satisfying the conclusion of Lemma [Lemma 8](#Main Lemma 1){reference-type="ref" reference="Main Lemma 1"} with $x$ replaced by $u_1$ and $(u_{2m}, u_{2m+1})$ replaced by $\left(T^{2m}u_0, T^{2m+1}u_0\right).$ By induction, we can find a sequence $\{u_i\}$ in $\mathcal{P}_0$ such that, for $i\in \mathbb{N},$ $$\begin{aligned} \label{eqn4.3.4} \begin{split} & \displaystyle\operatorname{max}\{\displaystyle \limsup_{m} \|u_i- T^{2m}u_{i-1}'\|,\limsup_{m} \|u_i- T^{2m+1}u_{i-1}\|\}-d(\mathcal{P},\mathcal{Q})\\ & \leq \mathcal{N}(\mathcal{P},\mathcal{Q})\left(\delta\big(\{T^{2m}u_{i-1}, T^{2m+1}u_{i-1}'\}, \{T^{2m}u_{i-1}', T^{2m+1}u_{i-1}\}\big)-d(\mathcal{P},\mathcal{Q})\right);\\ & \|u_i-v\|\leq \operatorname{max}\{\displaystyle\limsup_{m}\|T^{2m}u_{i-1}-v\|, \limsup_{m}\|T^{2m+1}u_{i-1}'-v\|\}~~\mbox{for all}~v~\mbox{in}~\mathcal{Q}. \end{split}\end{aligned}$$ Set $g_i(\mathcal{P})=\operatorname{max}\{\displaystyle \limsup_{j} \|u_{i+1}- T^{2j}u_{i}'\|,\limsup_{j} \|u_{i+1}- T^{2j+1}u_{i}\|\}.$ Using ([\[eqn4.3.4\]](#eqn4.3.4){reference-type="ref" reference="eqn4.3.4"}) and techniques similar to those used in the proof of Theorem 5.8 of [@AbhikRaju2021], we get $$g_i(\mathcal{P})-d(\mathcal{P},\mathcal{Q})\leq \mathcal{N}(\mathcal{P},\mathcal{Q})\, k^{2}\left[g_{i-1}(\mathcal{P})-d(\mathcal{P},\mathcal{Q})\right]\leq \cdots \leq \left(\mathcal{N}(\mathcal{P},\mathcal{Q})\cdot k^2\right)^i [g_{0}(\mathcal{P})-d(\mathcal{P},\mathcal{Q})].$$ Thus, $\operatorname{max}\left\{\displaystyle\lim_{i\to \infty} g_i(\mathcal{P}), \displaystyle \lim_{i\to \infty}\delta\big(\{T^{2m}u_{i}, T^{2m+1}u_{i}'\}, \{T^{2m}u_{i}', T^{2m+1}u_{i}\}\big)\right\}$ $=d(\mathcal{P},\mathcal{Q}).$ Substituting $v=u_{i}'$ in ([\[eqn4.3.4\]](#eqn4.3.4){reference-type="ref" reference="eqn4.3.4"}), we have $\displaystyle\limsup_{m}\|T^{2m}u_{i-1}-u_{i}'\|\to d(\mathcal{P},\mathcal{Q})$ and $\displaystyle
\limsup_{m}\|T^{2m+1}u_{i-1}'-u_{i}'\|\to d(\mathcal{P},\mathcal{Q})$ as $i\to \infty.$ By weak compactness, $K =\displaystyle \bigcap_{j=1}^{\infty}\overline{co}(\{u_i':i\geq j\})\neq \emptyset.$ By Proposition [Proposition 7](#propertyP){reference-type="ref" reference="propertyP"}, there exist $w\in K$ and a subsequence $\{u_{i_j}\}$ such that $\displaystyle \lim_{j\to \infty} \|w-u_{i_j}\| = d(\mathcal{P},\mathcal{Q}).$ By a technique similar to that used in the proof of Theorem 5.8 of [@AbhikRaju2021], we get a point $z_0\in H=\displaystyle \bigcap_{j=1}^{\infty}\overline{co}(\{u_{i_j}:i\geq j\})$ such that $\|z_0-Tz_0\|=d(\mathcal{P},\mathcal{Q}).$ Digar, Abhik and Kosuru, G. Sankara Raju, Cyclic uniform Lipschitzian mappings and proximal uniform normal structure, Ann. Funct. Anal., **vol 13(1)**, (2022) Paper No. 5, 14 pp. Eldred, A. Anthony, Kirk, W. A., Veeramani, P., Proximal normal structure and relatively nonexpansive mappings, Studia Math., **vol 171(3)**, (2005) 283--293. Espínola, Rafa, A new approach to relatively nonexpansive mappings, Proc. Amer. Math. Soc., **vol 136(6)**, (2008) 1987--1995. Gillespie, A. A. and Williams, B. B., Fixed point theorem for non-expansive mappings on Banach spaces with uniform normal structure, Appl. Anal., **9** (1979) 121--124. Kosuru, G. Sankara Raju and Veeramani, P., A note on existence and convergence of best proximity points for pointwise cyclic contractions, Numer. Funct. Anal. Optim., **vol 32(7)**, (2011) 821--830. Lim, T.-C. and Xu, Hong Kun, Uniformly Lipschitzian mappings in metric spaces with uniform normal structure, Nonlinear Anal., **25**(11) (1995) 1231--1235. Megginson, Robert E., An introduction to Banach space theory, Springer-Verlag, New York, **vol 183**, (1998). Sankar Raj, V. and Anthony Eldred, A., A characterization of strictly convex spaces and applications, J. Optim. Theory Appl., **vol 160(2)**, (2014) 703--710.
Suzuki, Tomonari, Kikkawa, Misako, Vetro, Calogero, The existence of best proximity points in metric spaces with the property UC, Nonlinear Anal., **vol 71(7-8)**, (2009) 2918--2926.
--- abstract: | We study the problem of enumerating Braess edges for Kemeny's constant in trees. We obtain bounds and asymptotic results for the number of Braess edges in some families of trees. title: Kemeny's constant and enumerating Braess edges in trees --- Jihyeug Jang^1^, Mark Kempton^2^, Sooyeong Kim[^1]^3^, Adam Knudson^2^, Neal Madras^3^ and Minho Song^1^ ^1^Department of Mathematics, Sungkyunkwan University, Suwon, 16419, South Korea ^2^Department of Mathematics, Brigham Young University, Provo UT, USA ^3^Department of Mathematics and Statistics, York University, 4700 Keele Street, Toronto, Canada Kemeny's constant, Braess' paradox, trees. **AMS subject classifications.** 05C81, 05C50, 05A16, 05C05, 60J10 # Introduction An important parameter arising from the study of Markov chains that has received increased attention in graph theory in recent years is Kemeny's constant. Given a discrete-time finite-state irreducible Markov chain with stationary distribution $\pi$, we define *Kemeny's constant* of the Markov chain by $$\kappa = \sum_{j=1}^n m_{ij}\pi(j)$$ where $m_{ij}$ denotes the *mean first passage time* from $i$ to $j$, which is the expected number of steps for the Markov chain to go from state $i$ to state $j$. Surprisingly, this quantity is independent of the state $i$ [@KemenySnell]. In this paper, we will be studying Kemeny's constant of a random walk on a graph. As a graph parameter, Kemeny's constant provides a measure of how hard it is for a random walker to move around the graph, and gives a way of measuring how well connected the graph is---a small Kemeny's constant corresponds to a well-connected graph in which it is easy to move around, while a large Kemeny's constant corresponds to a poorly connected graph, possibly with bottlenecks. For a graph $G$, we will denote Kemeny's constant for the random walk on $G$ by $\kappa(G)$, and we will simply call it Kemeny's constant of $G$.
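As a concrete illustration (our own sketch, not part of the paper), Kemeny's constant can be computed from the fundamental matrix $Z=(I-P+\mathbf{1}\pi^{T})^{-1}$ via the standard identity $m_{ij}=(z_{jj}-z_{ij})/\pi(j)$, with the convention $m_{ii}=0$. The script below does this for the random walk on a small graph and confirms that every starting state gives the same value; for the complete graph $K_n$ that value is $(n-1)^2/n$.

```python
import numpy as np

def kemeny_constants(adj):
    """Return the vector (sum_j m_ij * pi(j))_i for the random walk on `adj`."""
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]                       # transition matrix of the walk
    pi = deg / deg.sum()                         # stationary distribution
    n = len(pi)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]  # mean first passage times, m_ii = 0
    return M @ pi

K3 = np.ones((3, 3)) - np.eye(3)                 # complete graph on 3 vertices
kappa = kemeny_constants(K3)
print(kappa)   # one identical entry per start state: (n-1)^2/n = 4/3 for K_3
```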
Kemeny's constant has received considerable attention recently in the study of graphs and networks, and has many applications to a variety of subjects, including the study of disease spread [@kahn2014mixing; @kim2023effectiveness; @ruhi2015sirs; @van2008virus; @yilmaz2020kemeny], network algorithms [@dudkina2021node; @fouss2016algorithms; @levene2002kemeny], robotic surveillance [@patel2015robotic], and many others. One interesting phenomenon studied in connection with Kemeny's constant is *Braess' paradox*. Loosely speaking, Braess' paradox occurs when the deletion of an edge from (or insertion of a new edge to) a graph affects the overall connectivity in a counterintuitive way. Originally, Braess [@braess2005paradox] studied road networks, and observed instances in road network models in which the closure of a road would actually improve overall traffic flow in a city. Intuitively, one expects that adding an edge to a graph will improve the graph's overall connectivity. When this is not the case, we say that Braess' paradox occurs. Braess' paradox in traffic networks has been widely studied since this initial observation; see for instance [@ding2012traffic; @hagstrom2001characterizing; @rapoport2009choice] and references therein. The study of Braess' paradox in the context of Kemeny's constant was first introduced by Kirkland and Zeng in [@kirkland2016kemeny]. We say a pair of nonadjacent vertices $\{ u,v \}$ in a graph $G$ is a *Braess edge* of $G$ if adding the edge $\{ u,v \}$ to $G$ increases Kemeny's constant. Kirkland and Zeng proved that if $G$ is a tree with two pendent twin vertices (degree-one vertices adjacent to the same vertex), then those two pendent vertices constitute a Braess edge. This fact was generalized by Ciardo [@ciardo2022kemeny] to arbitrary connected graphs with a pair of pendent twins.
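The pendent-twin phenomenon can be verified numerically on the smallest convenient case (a sketch of our own, using the standard fundamental-matrix identity $\kappa=\operatorname{tr}(Z)-1$ with $Z=(I-P+\mathbf{1}\pi^{T})^{-1}$ and the convention $m_{ii}=0$): adding the edge between two leaves of the star $K_{1,3}$ raises Kemeny's constant from $5/2$ to $61/24$.

```python
import numpy as np

def kemeny(adj):
    """Kemeny's constant (m_ii = 0 convention) of the random walk on `adj`."""
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]
    pi = deg / deg.sum()
    n = len(pi)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return float(np.trace(Z) - 1)    # equals sum_j m_ij * pi(j), for any i

# Star K_{1,3}: vertex 0 is the centre; 1, 2, 3 are pendent twins.
star = np.zeros((4, 4))
for leaf in (1, 2, 3):
    star[0, leaf] = star[leaf, 0] = 1
k_before = kemeny(star)

star_plus = star.copy()              # add the edge {1, 2} between two twins
star_plus[1, 2] = star_plus[2, 1] = 1
k_after = kemeny(star_plus)

print(k_before, k_after)   # 2.5 < 61/24 ~ 2.5417, so {1, 2} is a Braess edge
```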
Intuitively, when the edge between the twin pendent vertices is added, there is some probability that a random walker will get stuck walking between those two vertices for some time, thus increasing travel times to other parts of the graph. Work in [@faught20221] looked more generally at graphs with a cut vertex (a vertex whose deletion disconnects the graph) and found that Braess edges occur frequently in such graphs. In particular, graphs with a cut vertex whose deletion gives rise to two paths were studied by Kim [@kim2022families], and it was discovered that the eccentricity of the cut vertex is related to the formation of the Braess edge joining the end-vertices of each path. Furthermore, work by Hu and Kirkland [@hu2019complete] established equivalent conditions for complete multipartite graphs and complete split graphs to have every non-edge as a Braess edge. Recently, work from [@kirkland2023edge] studied the overall effect that adding an edge to a graph can have on Kemeny's constant. The purpose of this paper is to study the occurrence of Braess edges in trees in greater depth. Our motivation is to understand what kind of Braess edges can occur in trees and to enumerate how many Braess edges can occur in a tree. We are able to give partial answers to these questions for various families of trees. After establishing the terminology, notation, and tools we will use in Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we will first focus on paths in Section [3](#sec:path){reference-type="ref" reference="sec:path"}. Paths are of particular interest because a path on $n$ vertices is the tree on $n$ vertices with the largest Kemeny's constant, as shown in [@faught20221]. We show that the number of Braess edges for a path on $k$ vertices is $$\frac13k\ln k - ck +o(k)$$ for a constant $c\approx 0.548$ (see Corollary [Corollary 11](#cor-asymp){reference-type="ref" reference="cor-asymp"} below).
We further show that an edge that crosses the middle of the path cannot be a Braess edge, showing that the Braess edges in a path are concentrated towards the endpoints (see Theorems [Theorem 7](#thm:nonedge crossing the center){reference-type="ref" reference="thm:nonedge crossing the center"} and [Theorem 9](#thm:equiv Braess on path){reference-type="ref" reference="thm:equiv Braess on path"} as well as Remark [Remark 8](#rmk:far from the center){reference-type="ref" reference="rmk:far from the center"} below). This fits our intuition: an edge connecting points that were originally far apart will tend to decrease travel times of the random walker, whereas edges close to the endpoints of a path create bottlenecks where the random walker could get stuck for a time. In Section [4](#sec:spider){reference-type="ref" reference="sec:spider"}, we study a family of trees that we call *spider graphs*: the spider graph $\mathcal{S}_{a,b}$ is obtained by taking $b$ paths of length $a$ and connecting one endpoint of each at a central vertex of degree $b$. This is motivated by the case of stars, for which results from [@ciardo2022kemeny; @kirkland2016kemeny] show that every pair of nonadjacent vertices forms a Braess edge. We show that the number of Braess edges for $\mathcal{S}_{a,b}$ is $$\Theta(ab\ln(a))$$ for sufficiently large $a$. (See Theorem [Theorem 21](#thm:spider){reference-type="ref" reference="thm:spider"} for lower and upper bounds.) We further show that adding an edge that crosses this central vertex cannot be a Braess edge (see Theorem [Theorem 18](#prop:nonBraess1){reference-type="ref" reference="prop:nonBraess1"}). For vertices along the branches of a spider graph, we prove that in $\mathcal{S}_{a,b}$, if there is a Braess edge in a path of length $2a$ (which is $\mathcal{S}_{a,2}$), then the corresponding edge is also Braess in $\mathcal{S}_{a,b}$ (see Lemma [Lemma 20](#lem:Braess on spider){reference-type="ref" reference="lem:Braess on spider"}).
For sufficiently large $a$ and $b$, $\mathcal{S}_{a,b}$ can contain more Braess edges as well (see Theorems [Theorem 21](#thm:spider){reference-type="ref" reference="thm:spider"} and [Theorem 23](#thm:spider2){reference-type="ref" reference="thm:spider2"}). As a consequence of results from this section, we are able to show that there are no Braess edges in $\mathcal{S}_{2,b}$ for $b\geq2$ (see Corollary [Corollary 19](#cor:no Braess){reference-type="ref" reference="cor:no Braess"}), thus exhibiting a family of trees in which Braess' paradox does not occur for any pair of vertices. Finally, in Section [5](#sec:broom){reference-type="ref" reference="sec:broom"}, we combine ideas from the previous two sections and study a family of graphs that we call *brooms*. The broom $\mathcal{B}_{k,p}$ is obtained by attaching $p$ pendent vertices to the endpoint of a path of length $k$. We determine the asymptotics for the number of Braess edges in broom graphs. In particular, there are three kinds of non-edges in a broom graph: pairs whose endpoints are both among the pendent vertices of the star at the end, those where both vertices are along the path, and those where one endpoint is on the star and the other on the path. We show that in a broom $\mathcal{B}_{k,p}$, as $k\rightarrow\infty$, there are $\binom{p}{2}$ Braess edges among the pendent vertices, $\Theta(k\ln{k})$ Braess edges along the path, and $O(k)$ Braess edges attaching a pendent vertex to the path (see Theorem [Theorem 24](#thm:broom){reference-type="ref" reference="thm:broom"}). # Preliminaries {#sec:prelim} We begin with necessary terminology and notation in graph theory. Let $G$ be a graph of order $n$ with vertex set $V(G)$ and edge set $E(G)$ where $n=|V(G)|$. An edge joining vertices $v$ and $w$ of $G$ is denoted by $\{ v,w \}$. Let $m_G$ be defined as $|E(G)|$.
The subgraph of $G$ *induced* by a subset $S$ of $V(G)$ is the graph with vertex set $S$, where two vertices in $S$ are adjacent if and only if they are adjacent in $G$. For $v\in V(G)$, we denote by $\mathrm{deg}_G(v)$ the degree of $v$. A vertex $v$ of a graph $G$ is said to be *pendent* if $\mathrm{deg}_G(v)=1$. Given a labelling of $V(G)$, we define $\mathbf{d}_G$ to be the column vector whose $i^\text{th}$ component is $\mathrm{deg}_G(v_i)$ for $1\leq i\leq n$, where $v_i$ is the $i^\text{th}$ vertex in $V(G)$. For $v,w\in V(G)$, the distance between $v$ and $w$ in $G$ is denoted by $\mathrm{dist}_G(v,w)$. We define $F_{G}$ to be the matrix given by $F_{G}=[f_{i,j}^{G}]$ where $f_{i,j}^{G}$ is the number of $2$-tree spanning forests of $G$ such that one of the two trees contains vertex $i$, and the other contains vertex $j$. Note that $f_{i,i}^{G}=0$, that is, the diagonal entries of $F_G$ are zero. We denote by $\mathbf e_v$ the column vector whose component in the $v^\text{th}$ position is $1$ and zeros elsewhere, and denote by $\mathbf{f}_G^v$ the $v^{\text{th}}$ column of $F_{G}$. That is, the $i^{\text{th}}$ entry of $\mathbf{f}_G^v$ is $f_{i,v}^{G}$. We define $B(G)$ to be the number of Braess edges for $G$, and denote by $\tau_G$ the number of spanning trees of $G$. Throughout the paper, we will be using standard notation regarding asymptotic analysis of a function of the number $n$ of vertices of a graph. In particular, we say that $f(n)$ is $O(g(n))$ (or $f(n)=O(g(n))$) if $|f(n)|\leq C|g(n)|$ for some constant $C$ not depending on $n$, for all sufficiently large $n$. We say that $f(n)$ is $o(g(n))$ if $\lim_{n\rightarrow\infty} (f(n)/g(n)) = 0$. Finally, we say that $f(n)\sim g(n)$ if $\lim_{n\rightarrow\infty} (f(n)/g(n)) = 1$. **Proposition 1**.
*[@kim2022families][\[Prop:dFd with a cut-vertex\]]{#Prop:dFd with a cut-vertex label="Prop:dFd with a cut-vertex"} Let $H_1$ and $H_2$ be connected graphs, and let $v_1\in V(H_1)$ and $v_2\in V(H_2)$. Assume that $G$ is obtained from $H_1$ and $H_2$ by identifying $v_1$ and $v_2$ as a vertex $v$. Let $\widetilde{H}_1=H_1-v_1$ and $\widetilde{H}_2=H_2-v_2$. Then, labelling the vertices of $G$ in order of $V(\widetilde{H}_1)$, $v$, and $V(\widetilde{H}_2)$, we have: $$\begin{aligned} \mathbf{d}_{G}^T&=[\mathbf{d}_{H_1}^T\;\mathbf{0}^T_{|V(\widetilde{H}_2)|}]+[\mathbf{0}^T_{|V(\widetilde{H}_1)|}\;\mathbf{d}_{H_2}^T],\\ m_{G}&=m_{H_1}+m_{H_2},\\ \tau_{G}&=\tau_{H_1}\tau_{H_2},\\ F_{G}&=\left[\begin{array}{c|c|c} \tau_{H_2}F_{\widetilde{H}_1} & \tau_{H_2}\mathbf{f}_1 & \tau_{H_2}\mathbf{f}_1\mathbf{1}^T+\tau_{H_1}\mathbf{1}\mathbf{f}_2^T \\\hline \tau_{H_2}\mathbf{f}^T_1 & 0 & \tau_{H_1}\mathbf{f}^T_2\\\hline \tau_{H_1}\mathbf{f}_2\mathbf{1}^T+\tau_{H_2}\mathbf{1}\mathbf{f}_1^T & \tau_{H_1}\mathbf{f}_2 & \tau_{H_1}F_{\widetilde{H}_2} \end{array}\right], \end{aligned}$$ where $\mathbf{f}_1$ and $\mathbf{f}_2$ are the column vectors obtained from $\mathbf{f}_{H_1}^v$ and $\mathbf{f}_{H_2}^v$ by deleting the $v^\text{th}$ component (which is $0$), respectively. This implies that $$\begin{aligned} &\mathbf{d}_G^TF_G\mathbf{d}_G\\ =&\tau_{H_2}\mathbf{d}_{H_1}^TF_{H_1}\mathbf{d}_{H_1}+\tau_{H_1}\mathbf{d}_{H_2}^TF_{H_2}\mathbf{d}_{H_2}+4\tau_{H_2}m_{H_2}\mathbf{d}_{H_1}^T\mathbf{f}_{H_1}^v+4\tau_{H_1}m_{H_1}\mathbf{d}_{H_2}^T\mathbf{f}_{H_2}^v. \end{aligned}$$* A formula for Kemeny's constant of a graph $G$ from [@kirkland2016kemeny] is $$\label{formula:kemeny} \kappa(G)=\frac{\mathbf{d}_G^T F_G\mathbf{d}_G}{4m_G\tau_G}.$$ In describing some graph families it will occasionally be convenient to have the following notation. **Definition 2**. Let $G_1, G_2$ be simple connected graphs with labelled vertices $v_1 \in V(G_1)$ and $v_2 \in V(G_2)$.
The *1-sum* $G = G_1\bigoplus_{v_1,v_2} G_2$ is the graph created by taking copies of $G_1$ and $G_2$, removing $v_1$, and replacing every edge of the form $\{i, v_1\} \in E(G_1)$ with $\{i, v_2\}$. We often omit the subscript when the choice and/or labelling of vertices is clear. We say $G_1\bigoplus_vG_2$ has a *1-separation*, and that $v$ is a *1-separator* or *cut vertex.* For cleaner expressions we also introduce the following notation, which was used in [@ciardo2022kemeny; @faught20221]. **Definition 3**. Let $G$ be a connected graph. The *moment* of $v \in V(G)$ is $$\mu(G, v) = \frac{1}{\tau_G} \mathbf{d}_G^T F_G \mathbf e_v.$$ **Example 4**. [@kim2022families][\[moment:P_n\]]{#moment:P_n label="moment:P_n"} Consider the path $\mathcal{P}_n=(1,2,\dots,n)$ where $n\geq 2$. Let $v$ be a vertex of $\mathcal{P}_n$. For $v=1,\dots,n$, $$\begin{aligned} \kappa(\mathcal{P}_n)=\frac{1}{3}(n-1)^2+\frac{1}{6},\;\;\;\;\mu(\mathcal{P}_n,v)=(v-1)^2+(n-v)^2. \end{aligned}$$ **Example 5**. [@kim2022families][\[moment:S_n\]]{#moment:S_n label="moment:S_n"} Consider a star $\mathcal{S}_n$ of order $n$ where $n\geq 3$. Suppose that $n$ is the center vertex. For $v=1,\dots,n$, $$\begin{aligned} \kappa(\mathcal{S}_n) = n-\frac{3}{2},\;\;\;\;\mu(\mathcal{S}_n,v)=\begin{cases*} n-1, & \text{if $v=n$,}\\ 3n-5, & \text{if $v\neq n$.} \end{cases*} \end{aligned}$$ # Braess edges on a path {#sec:path} In this section, we consider which non-edges in a path are Braess edges. Throughout this section, we assume $k\geq 3$. Let $\mathcal P_k=(1,\dots,k)$ be the path of length $k-1$. From Example [\[moment:P_n\]](#moment:P_n){reference-type="ref" reference="moment:P_n"} with [\[formula:kemeny\]](#formula:kemeny){reference-type="eqref" reference="formula:kemeny"}, $$\begin{aligned} \label{eq:dfd(P)} \mathbf d_{\mathcal P_k}^TF_{\mathcal P_k}\mathbf d_{\mathcal P_k}=\frac{4k^3-12k^2+14k-6}{3}=\frac{4}{3}(k-1)^3+\frac{2}{3}(k-1).
\end{aligned}$$ Throughout this paper, we will always assume that $i<j$ when representing the edge $\{i,j\}$. Let $\mathcal P_k^{\{ i,j \}}$ be the graph obtained from $\mathcal P_k$ by inserting the edge between vertex $i$ and vertex $j$ with $j-i\geq 2$. We now compute $\kappa(\mathcal P_k^{\{ i,j \}})$ using Proposition [\[Prop:dFd with a cut-vertex\]](#Prop:dFd with a cut-vertex){reference-type="ref" reference="Prop:dFd with a cut-vertex"}. The vertex $i$ is a cut vertex in $\mathcal P_k^{\{ i,j \}}$. There are exactly two branches of $\mathcal P_k^{\{ i,j \}}$ at the vertex $i$: one is the path $\mathcal P_i$ of length $i-1$, and the other is the lollipop graph $\mathcal L_{j-i+1,k-i+1}$, where the *lollipop graph* $\mathcal L_{a,b}$ is the graph obtained from a cycle of length $a$ by appending a path of length $b-a$. See Figure [\[Figure:P\^i,j\]](#Figure:P^i,j){reference-type="ref" reference="Figure:P^i,j"}. We let $s=i-1$, $l=j-i+1$, $B_1=\mathcal P_{s+1}$, and $B_2=\mathcal L_{l,k-s}$. The graph $\mathcal P_k^{\{ i,j \}}$ is obtained from $B_1$ and $B_2$ by identifying the vertex $i$ in $B_1$ and the vertex $i$ in $B_2$. We already have $\mathbf d_{B_1}^TF_{B_1}\mathbf d_{B_1}$ from [\[eq:dfd(P)\]](#eq:dfd(P)){reference-type="eqref" reference="eq:dfd(P)"} with $k = s+1$. To compute $\kappa(\mathcal P_k^{\{ i,j \}})$, we need to find $\mathbf d_{B_1}^T\mathbf{f}_{B_1}^i, \mathbf d_{B_2}^TF_{B_2}\mathbf d_{B_2}$, and $\mathbf d_{B_2}^T\mathbf{f}_{B_2}^i$. From [@kim2022families], we have $\mathbf d_{B_1}^T\mathbf{f}_{B_1}^i=s^2$.
It is shown in [@kirkland2016kemeny] that $$\mathbf d_{B_2}^TF_{B_2}\mathbf d_{B_2}=\frac{2l(2(k-s)^3-4(k-s)l^2+3l^3-(k-s))}{3}.$$ Furthermore, it can be seen that $$\begin{aligned} \mathbf d_{B_2}^T\mathbf{f}_{B_2}^i&=\frac{1}{3}(l-1)l(l+1)+(k-s-l)(l(k-s-l)+2l-2)\\ &=l(k-s)^2-2(l^2-l+1)(k-s)+\frac{1}{3}(4l^3-6l^2+5l).\end{aligned}$$ Therefore, by Proposition [\[Prop:dFd with a cut-vertex\]](#Prop:dFd with a cut-vertex){reference-type="ref" reference="Prop:dFd with a cut-vertex"}, we have $$\begin{aligned} &\mathbf d_{\mathcal P_k^{\{ i,j \}}}^TF_{\mathcal P_k^{\{ i,j \}}}\mathbf d_{\mathcal P_k^{\{ i,j \}}}\\ =&~\tau_{B_2}\mathbf d_{B_1}^TF_{B_1}\mathbf d_{B_1}+\tau_{B_1}\mathbf d_{B_2}^TF_{B_2}\mathbf d_{B_2}+4\tau_{B_2}m_{B_2}\mathbf d_{B_1}^T\mathbf{f}_{B_1}^i+4\tau_{B_1}m_{B_1}\mathbf d_{B_2}^T\mathbf{f}_{B_2}^i\\ =&~\frac{2}{3}\left( 12(l^2-l+1)s^2 + 12(l^3 - l^2k - l^2 + lk + l - k)s + l(3l^3 -4l^2k +2k^3 -k)\right).\end{aligned}$$ Hence, we have $$\begin{aligned} \label{eq:k(P^ij)} \kappa\left(\mathcal P_k^{\{ i,j \}}\right) &= \frac{\mathbf d_{\mathcal P_k^{\{ i,j \}}}^TF_{\mathcal P_k^{\{ i,j \}}}\mathbf d_{\mathcal P_k^{\{ i,j \}}}}{4kl} \\ \notag &= \frac{1}{6kl}( 12(l^2-l+1)s^2 + 12(l^3 - l^2k - l^2 + lk + l - k)s \\ \notag &+ l(3l^3 -4l^2k +2k^3 -k)). \end{aligned}$$ A straightforward computation using Example [\[moment:P_n\]](#moment:P_n){reference-type="ref" reference="moment:P_n"} and [\[eq:k(P\^ij)\]](#eq:k(P^ij)){reference-type="eqref" reference="eq:k(P^ij)"} then gives $$\begin{aligned} \kappa\left(\mathcal P_k^{\{ i,j \}}\right) - \kappa\left(\mathcal P_k\right) = \frac{f(s,l,k)}{6lk},\end{aligned}$$ where $$\begin{aligned} \label{eq:f(s,l,k)} f(s,l,k) = & ~4lk^2 -4( l + 3s - 3ls + 3l^2s + l^3)k \\ \notag &+ s(12l^3 - 12l^2 + 12l) + s^2(12l^2 - 12l + 12) + 3l^4. \end{aligned}$$ This means that the non-edge $\{s+1, s+l\}$ is a Braess edge on $\mathcal P_k$ if $f(s,l,k) >0$.
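The identity above is easy to test numerically. The sketch below (our own check, not part of the paper) computes Kemeny's constant directly from the random walk's fundamental matrix $Z=(I-P+\mathbf 1\pi^T)^{-1}$, using the standard identity $\kappa=\operatorname{tr}(Z)-1$, and verifies $\kappa(\mathcal P_k^{\{i,j\}})-\kappa(\mathcal P_k)=f(s,l,k)/(6lk)$ for every non-edge of the path on $10$ vertices.

```python
import numpy as np

def kemeny(A):
    """Kemeny's constant of the simple random walk on adjacency matrix A."""
    d = A.sum(axis=1)
    P = A / d[:, None]
    pi = d / d.sum()
    n = len(d)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return np.trace(Z) - 1.0        # equals sum_j m_ij pi(j) for every i

def f(s, l, k):
    """The polynomial f(s, l, k) from the displayed formula above."""
    return (4*l*k**2 - 4*(l + 3*s - 3*l*s + 3*l**2*s + l**3)*k
            + s*(12*l**3 - 12*l**2 + 12*l) + s**2*(12*l**2 - 12*l + 12)
            + 3*l**4)

k = 10
path = np.zeros((k, k))
for v in range(k - 1):
    path[v, v + 1] = path[v + 1, v] = 1.0

for i in range(1, k - 1):           # vertices labelled 1..k, with j - i >= 2
    for j in range(i + 2, k + 1):
        s, l = i - 1, j - i + 1
        A = path.copy()
        A[i - 1, j - 1] = A[j - 1, i - 1] = 1.0
        diff = kemeny(A) - kemeny(path)
        assert abs(diff - f(s, l, k) / (6 * l * k)) < 1e-9
```

For instance, with $k=4$ and the chord $\{1,3\}$ (so $s=0$, $l=3$), one gets $f(0,3,4)=-45$, so adding the chord changes Kemeny's constant by $-45/72=-0.625$.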
We now count the number of Braess edges on the path $\mathcal P_k$. **Theorem 6**. *For an integer $k \ge 3$, the number of Braess edges on the path $\mathcal P_k$ is $$\begin{aligned} \label{eq:Braess edges on path2} 2\sum_{l= 3}^{\lfloor \sqrt{k} \rfloor} \left(\left\lfloor A(l) \right\rfloor +1 \right) + 2\max\left\{\left\lfloor A(\lfloor \sqrt{k} \rfloor+1) \right\rfloor+1 ,0\right\}, \end{aligned}$$ where $$\begin{aligned} A(l) := \frac{k-l}{2} - \sqrt{\frac{(3l^2-7l+3)k^2-2(l^3-3l^2+l)k-3l^2(l-1)}{12(l^2-l+1)}}. \end{aligned}$$* *Proof.* We rewrite the function $f(s,l,k)$ in [\[eq:f(s,l,k)\]](#eq:f(s,l,k)){reference-type="eqref" reference="eq:f(s,l,k)"} as $$\begin{aligned} 12(l^2-l+1)\left(s-\frac{k-l}{2}\right)^2 -(3l^2-7l+3)k^2+2(l^3-3l^2+l)k+3l^2(l-1). \end{aligned}$$ Recall that a pair $(s,l)$ gives a Braess edge on $\mathcal P_k$ if $f(s,l,k)>0$, equivalently, $$\begin{aligned} \label{eq:symmetry on P_k} s > (k-l) - A(l)\quad \mbox{ or}\quad s < A(l), \end{aligned}$$ where $$\begin{aligned} A(l) = \frac{k-l}{2} - \sqrt{\frac{(3l^2-7l+3)k^2-2(l^3-3l^2+l)k-3l^2(l-1)}{12(l^2-l+1)}}. \end{aligned}$$ By the symmetry in [\[eq:symmetry on P_k\]](#eq:symmetry on P_k){reference-type="eqref" reference="eq:symmetry on P_k"}, a pair $(s,l)$ gives a Braess edge on $\mathcal P_k$ if and only if the pair $\left((k-l)-s,l\right)$ gives a Braess edge on $\mathcal P_k$. Therefore, the number of Braess edges for given $l$ is $$\begin{aligned} 2\times \max\left\{\left\lfloor A(l) \right\rfloor +1,0\right\}. \end{aligned}$$ Hence, the number of Braess edges on $\mathcal P_k$ is $$\begin{aligned} \label{eq:Braess edges on path1} 2\sum_{l\ge 3}\max\left\{\left\lfloor A(l) \right\rfloor +1,0\right\}. \end{aligned}$$ We now find the range of $l$ such that the function $A(l)$ is nonnegative, for given $k$. We have $$\begin{aligned} A(l) \ge 0 &\Longleftrightarrow \frac{(3l^3 -4kl^2 +4k^2-4k)l}{12(l^2-l+1)}\ge 0\\ &\Longleftrightarrow 3l^3 -4kl^2 +4k^2-4k\ge 0. 
\end{aligned}$$ Let $h(l)= 3l^3 -4kl^2 +4k^2-4k$. For $k\ge 3$, one can check that $h(\sqrt{k}) >0$, $h(\sqrt{k}+1)<0$. Since $h'(l) = 9l^2 -8kl$, we observe that $h(l)$ is decreasing for $0\le l \le \sqrt{k}$. This ensures that $A(l)$ is positive when $0< l \le \sqrt{k}$. Furthermore, we observe that $A(l)$ is negative when $\sqrt{k}+1 \le l \le k$, since $h(l)$ is decreasing for $\sqrt{k}+1 \le l \le 8k/9$, and increasing for $8k/9 \le l \le k$, in addition to $h(\sqrt{k}+1)<0$ and $h(k) = -k(k-2)^2<0$. Therefore, we can express [\[eq:Braess edges on path1\]](#eq:Braess edges on path1){reference-type="eqref" reference="eq:Braess edges on path1"} as a finite sum, which completes the proof. ◻ We now show that if a non-edge crosses or contains a *center vertex* $\lceil k/2\rceil$ in a path $\mathcal P_k$, then it is not a Braess edge. **Theorem 7**. *For a path $\mathcal P_k$, we have $\kappa\left(\mathcal P_k^{\{ i,j \}}\right) - \kappa\left(\mathcal P_k\right) < 0$ if $i\le\lceil k/2\rceil < j$.* *Proof.* We know that $\kappa\left(\mathcal P_k^{\{ i,j \}}\right) - \kappa\left(\mathcal P_k\right) = \frac{f(s,l,k)}{6lk},$ where $f(s,l,k)$ is defined in [\[eq:f(s,l,k)\]](#eq:f(s,l,k)){reference-type="eqref" reference="eq:f(s,l,k)"}, $s=i-1$, and $l=j-i+1$. To see $f(s,l,k)<0$ under the assumption, we rewrite $f(s,l,k)$ as $$\begin{aligned} f(s,l,k)=12(l^2-l+1)s^2+12(l^2-l+1)(l-k)s+l\cdot c(l,k), \end{aligned}$$ where $c(l,k)=4k^2-4(l^2+1)k+3l^3$. We consider $f(s,l,k)$ as a quadratic polynomial in the variable $s$, say $f(s)$. From this point of view, the condition $i\le\lceil k/2\rceil < j$ is equivalent to the following: $$\begin{aligned} 0\le s\le \frac{k-1}{2}, \quad \frac{k}{2}-l\le s\le k-l,\text{ and } 3\le l\le k. \end{aligned}$$ The polynomial $f(s)$ is minimized at $s=\frac{k-l}{2}.$ Thus, $f(s)$ is maximized when $s$ is the admissible point farthest from $\frac{k-l}{2}$, which depends on $k$ and $l$.
We consider two cases: (1) $\frac{k-1}{2}\leq k-l$ and (2) $\frac{k-1}{2}> k-l$. We claim that in both cases the maximum value of $f(s)$ is negative. 1. Suppose $\frac{k-1}{2}\le k-l$, which is equivalent to $2l-1\le k$. First, let $k/2\ge l$. Then we have $k-l-(k-1)/2-(k/2-l)=1/2 >0$. In this case, the farthest point from $(k-l)/2$ is $s=k/2-l$, see Figure [\[fig:visualization of f(s)\]](#fig:visualization of f(s)){reference-type="ref" reference="fig:visualization of f(s)"}. For a given $l$, let $$\begin{aligned} \widetilde{f}(k)&=f\left(\frac{k}{2}-l,l,k\right)\\ &=-(3l^2-7l+3)k^2+2l(l^2-3l+1)k+3l^4. \end{aligned}$$ For $l\ge3$, $\widetilde{f}(k)$ is decreasing for $k>l(l^2-3l+1)/(3l^2-7l+3)$. Since $k\ge 2l > l(l^2-3l+1)/(3l^2-7l+3)$, the possible maximum is attained at $k=2l$ and $\widetilde{f}(2l)=-l^2(5l^2-16l+8)<0$ for $l\ge3$. Second, let $k/2<l$. From the assumption, we have $k=2l-1$. Obviously, the farthest point from $(k-l)/2$ is $s=0$. For a given $l\ge3$, $f(0,l,2l-1)=-l(l-2)(5l^2-10l+4)<0.$ 2. Suppose $\frac{k-1}{2}>k-l$. In this case, the farthest point from $\frac{k-l}{2}$ is $s=k-l$. For a given $l$, let $\hat{f}(k)=f(k-l,l,k)=4lk^2-4l(l^2+1)k+3l^4$. Letting $r_1$ and $r_2$, with $r_1<r_2$, denote the two roots of $\hat{f}(k)$, we claim that $r_1<l<2l-1<r_2$ for $l\ge3$. When $l\ge3$, using the fact that $l^4-3l^3+2l^2+1>(l-1)^4$, we have $$\begin{aligned} l-r_1=&~\frac{\sqrt{l^4-3l^3+2l^2+1}}{2}-\frac{(l-1)^2}{2}>0, \\ r_2-(2l-1)=&~\frac{\sqrt{l^4-3l^3+2l^2+1}}{2}+\frac{l^2-4l+3}{2}\\ >&\frac{(l-1)^2}{2}+\frac{l^2-4l+3}{2}=(l-1)(l-2)>0. \end{aligned}$$ Additionally, since the coefficient of $k^2$ in $\hat{f}(k)$ is positive, we have $\hat{f}(k)<0$ for $l\le k<2l-1$. This completes the proof.  ◻ **Remark 8**.
As we have seen in the proof of Theorem [Theorem 7](#thm:nonedge crossing the center){reference-type="ref" reference="thm:nonedge crossing the center"}, $f(s,l,k)$ is minimized at $s=\frac{k-l}{2}$ for given $k$ and $l$, and it increases as $s$ moves away from $\frac{k-l}{2}$. Roughly speaking, the further a non-edge moves away from the center of the path, the more likely it is to be a Braess edge. **Theorem 9**. *For a path $\mathcal P_k$, there exists a Braess edge making a cycle of length $l\ge3$ if and only if $k>l^2-\frac{3}{4}l+\frac{7}{16}.$* *Proof.* Let $f(s,l,k)$ be defined as in [\[eq:f(s,l,k)\]](#eq:f(s,l,k)){reference-type="eqref" reference="eq:f(s,l,k)"}. By the symmetry of $f(s,l,k)$ with respect to $s=\frac{k-l}{2}$ together with Remark [Remark 8](#rmk:far from the center){reference-type="ref" reference="rmk:far from the center"}, there exists a Braess edge if and only if $f(0,l,k)>0$. The inequality $f(0,l,k)=l\cdot (4k^2-4(l^2+1)k+3l^3)>0$ is equivalent to $$\begin{aligned} r(l):=\frac{l^2+1}{2}+\frac{\sqrt{l^4-3l^3+2l^2+1}}{2}<k \end{aligned}$$ because we have $\frac{l^2+1}{2}-\frac{\sqrt{l^4-3l^3+2l^2+1}}{2}<l\le k$. One can check that $l^2-\frac{3}{4}l+\frac{7}{16}>r(l)$ for $l\ge3$, which gives one direction of the proof. Now assume to the contrary that $k\le l^2-\frac{3}{4}l+\frac{7}{16}$. Let $h(l)=l^2-\frac{3}{4}l+\frac{7}{16}-r(l)$. One can check the following: $$\begin{aligned} h(3)<h(5)<h(4)<0.01308, \frac{dh(l)}{dl}<0 \text{ for } l\ge5, \text{ and } \lim_{l\rightarrow\infty}h(l)=0. \end{aligned}$$ Thus, $h(l)$ is maximized at $l=4$ and $h(l)<0.01308$. Furthermore, the fractional part of $l^2-3l/4+7/16$ depends only on $l \bmod 4$, so it takes one of four values. The smallest of these values is greater than $0.18$, and hence $\lfloor l^2- \frac{3}{4}l+\frac{7}{16}\rfloor=\lfloor l^2-\frac{3}{4}l+\frac{7}{16}-0.01308\rfloor$.
Therefore, we have $$k\le \left\lfloor l^2-\frac{3}{4}l+\frac{7}{16}\right\rfloor = \left\lfloor l^2-\frac{3}{4}l+\frac{7}{16}-0.01308\right\rfloor \le \lfloor r(l) \rfloor \le r(l). \qedhere$$ ◻ ## Asymptotic behaviour We now examine the asymptotic behaviour of $\sum_{l= 3}^{\lfloor \sqrt{k} \rfloor}\left\lfloor A(l) \right\rfloor$ described in Theorem [Theorem 6](#thm:B(P)){reference-type="ref" reference="thm:B(P)"}. For $k>l>0$, we have $$A(l) := A(l,k) = \frac{k-l}{2}- \sqrt{ \frac{(3l^2-7l+3)k^2 -2(l^3-3l^2+l)k-3l^2(l-1)}{12(l^2-l+1)} }.$$ Also, let $$A^*(k) = \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \lfloor A(l)\rfloor .$$ For convenience, we also introduce the notation $$\begin{aligned} h(l) = &~ \frac{3l^2-7l+3}{3(l^2-l+1)} = 1- \frac{4l}{3(l^2-l+1)} , \\ g(l) = &~ \frac{2l(l^2-3l+1)}{3(l^2-l+1)} , \\ f(l) = &~ \frac{l^2(l-1)}{l^2-l+1} ,\end{aligned}$$ so that we have $$\begin{aligned} \label{eq.Aelldef} A(l) = \frac{k-l}{2} - \sqrt{ \frac{h(l)k^2}{4} - \frac{g(l)k}{4} - \frac{f(l)}{4} }.\end{aligned}$$ Recall that Euler's constant satisfies $$\label{eq.euler} \gamma = \lim_{N\rightarrow\infty}\left(-\ln N +\sum_{l=1}^N\frac{1}{l} \right) = 0.5772\ldots$$ **Proposition 10**. *$$\label{eq.Alimprop} \lim_{k\rightarrow\infty} \left(\frac{A^*(k)}{k} -\frac{1}{6}\ln(k) \right) = \frac{1}{2}S_1^{\infty} + \frac{1}{3}\gamma- \frac{2}{3}$$ where $\gamma$ is Euler's constant and $S_1^{\infty}$ is the absolutely convergent sum $$\label{eq.s1infdef} S_1^{\infty}= \sum_{l=3}^{\infty} \left(1- \sqrt{h(l)} - \frac{2}{3l} \right) .$$* **Corollary 11**. *The number of Braess edges on the path $\mathcal P_k$ is $\frac{k}{3}\ln k +\left(S_1^{\infty}+\frac{2}{3}\gamma-\frac{4}{3}\right)k + o(k)$.* **Remark 12**. The value of $S_1^{\infty}$ is approximately $0.400$. Thus the right-hand side of Equation ([\[eq.Alimprop\]](#eq.Alimprop){reference-type="ref" reference="eq.Alimprop"}) is approximately $-0.274$.
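The numerical values in Remark 12 are easy to reproduce. The following sketch (our check, using only the definitions above) sums the series for $S_1^{\infty}$ up to $l=10^6$; since the terms are $O(1/l^2)$, the truncation error at that range is negligible for three-decimal accuracy.

```python
import numpy as np

# h(l) = 1 - 4l / (3(l^2 - l + 1)), as defined above
l = np.arange(3, 10**6, dtype=float)
h = 1.0 - 4.0 * l / (3.0 * (l * l - l + 1.0))

# partial sum of S_1^inf = sum_{l >= 3} (1 - sqrt(h(l)) - 2/(3l));
# the tail is bounded by sum of 16/(9(l-1)^2) + 2/(3 l^2), about 3e-6 here
S1 = np.sum(1.0 - np.sqrt(h) - 2.0 / (3.0 * l))

# right-hand side of the limit in Proposition 10
const = 0.5 * S1 + np.euler_gamma / 3.0 - 2.0 / 3.0
print(f"S_1^inf ≈ {S1:.3f}, limit ≈ {const:.3f}")
```

The printed values match Remark 12: $S_1^{\infty}\approx 0.400$ and a limiting constant of approximately $-0.274$, so the linear coefficient in Corollary 11 is $S_1^{\infty}+\frac23\gamma-\frac43\approx -0.548$.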
To prove Proposition [Proposition 10](#prop-asymp){reference-type="ref" reference="prop-asymp"}, we will need the following lemma. **Lemma 13**. *Assume $D,D+u\geq w >0$. Then $$\label{eq.taylor} \left| \sqrt{D+u}- \left( \sqrt{D}+ \frac{u}{2\sqrt{D}}\right) \right| \leq \frac{u^2}{8 w^{3/2}} .$$* **Proof of Lemma [Lemma 13](#lem.taylor){reference-type="ref" reference="lem.taylor"}:** Let $\phi(x) =x^{1/2}$. Then $$\phi'(x) =\frac{1}{2} x^{-1/2} \hspace{5mm}\hbox{and}\hspace{5mm} \phi''(x) = -\frac{1}{4} x^{-3/2} .$$ By Taylor's Theorem, there exists a $v$ between $D$ and $D+u$ such that $$\phi(D+u) = \phi(D) + u\phi'(D) + \frac{u^2 \phi''(v)}{2} .$$ We have $|\phi''(v)|\leq 1/(4w^{3/2})$ because $v\geq w$, and the lemma follows. $\Box$ **Proof of Proposition [Proposition 10](#prop-asymp){reference-type="ref" reference="prop-asymp"}:** First observe from Equation ([\[eq.Aelldef\]](#eq.Aelldef){reference-type="ref" reference="eq.Aelldef"}) that $$\begin{aligned} \label{eq.Aelldef2} \frac{A(l)}{k} = \frac{1}{2}\left( 1 - \frac{l}{k} - \sqrt{h(l) - \frac{g(l)}{k} - \frac{f(l)}{k^2} } \right) .\end{aligned}$$ Then we have $$\label{eq.bigsum} \frac{A^*(k)}{k}-\frac{1}{6}\ln k =S_0(k)+\frac{1}{2}\left(S_1(k) -S_2(k)+S_3(k) +E(k) \right)$$ where $$\begin{aligned} \nonumber S_0(k) = & ~ \frac{1}{k}\sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \left( \lfloor A(l)\rfloor - A(l) \right) , \\ \nonumber S_1(k) = & ~ \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \left( 1-\sqrt{h(l)} - \frac{2}{3l} \right) , \\ \nonumber S_2(k) = & ~ \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \frac{l}{k} , \\ \label{eq.S3def} S_3(k) = & ~ \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \left( \sqrt{h(l)} - \sqrt{ h(l)- \frac{g(l)}{k} - \frac{f(l)}{k^2} } \right) , \quad \hbox{and} \\ \label{eq.Edef} E(k) = & ~ \frac{2}{3}\left( -\ln \sqrt{k} + \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \frac{1}{l} \right).\end{aligned}$$ (Note that $S_0(k)$ enters without the factor $\frac{1}{2}$, since $\frac{A^*(k)}{k} = \frac{1}{k}\sum_{l} A(l) + S_0(k)$.) Since $-1\leq \lfloor A(l)\rfloor - A(l)\leq 0$, we see that
$-\sqrt{k}/k\leq S_0(k)\leq 0$, and hence $$\label{eq.S0lim} \lim_{k\rightarrow\infty}S_0(k) = 0 .$$ The sum $S_2(k)$ is easy to evaluate exactly, and we conclude $$\label{eq.S2lim} \lim_{k\rightarrow\infty}S_2(k) = \frac{1}{2} .$$ For $E(k)$, notice that the lower limits of the sums are different in Equations ([\[eq.euler\]](#eq.euler){reference-type="ref" reference="eq.euler"}) and ([\[eq.Edef\]](#eq.Edef){reference-type="ref" reference="eq.Edef"}). Accounting for this, and using the fact that $\lim_{k\rightarrow \infty}(\ln\sqrt{k}-\ln \lfloor \sqrt{k}\rfloor)=0$, we see that $$\lim_{k\rightarrow\infty}E(k)+\frac{2}{3}\sum_{l=1}^2\frac{1}{l} = \frac{2}{3}\gamma ,$$ and hence $$\label{eq.Elim} \lim_{k\rightarrow\infty}E(k) = \frac{2}{3}\gamma - 1.$$ At this point, it is useful to have some simple bounds on the functions $h$, $g$, and $f$. For $l\geq 3$, we have $$\begin{aligned} \label{eq.hbd1a} h(l) = &~ 1 - \frac{4l}{3(l^2-l+1)} > 1- \frac{4l}{3(l^2-l)} = 1- \frac{4}{3(l-1)} \\ \label{eq.hbd1b} \geq &~ 1-\frac{4}{6} = \frac{1}{3} , \\ \nonumber g(l) = &~ \frac{2l(l^2-3l+1)}{3(l^2-l+1)} < \frac{2l}{3} , \hspace{5mm}\hbox{and} \\ \label{eq.fbd1} f(l) = & ~ \frac{l^2(l-1)}{l^2-l+1} < \frac{l^2(l-1)}{l^2-l} = l.\end{aligned}$$ Also, $$\begin{aligned} \label{eq.hbd2} \frac{1-h(l)}{2}-\frac{2}{3l} = \frac{2l}{3(l^2-l+1)}-\frac{2}{3l} = \frac{2(l-1)}{3l(l^2-l+1)} \end{aligned}$$ and hence $$\begin{aligned} \label{eq.hbd3} 0 \leq \frac{1-h(l)}{2}-\frac{2}{3l} \leq \frac{2(l-1)}{3l(l^2-l)} = \frac{2}{3l^2} . \end{aligned}$$ To handle $S_1(k)$, we shall apply Lemma [Lemma 13](#lem.taylor){reference-type="ref" reference="lem.taylor"} with $D=1$, $u=h(l)-1$, and $w=1/4$.
The bound ([\[eq.hbd1b\]](#eq.hbd1b){reference-type="ref" reference="eq.hbd1b"}) shows that $D+u>w$, so we have $$\begin{aligned} \left| 1-\sqrt{h(l)} - \frac{2}{3l} \right| = &~ \left|\sqrt{h(l)} -1 -\frac{h(l)-1}{2} + \frac{h(l)-1}{2} + \frac{2}{3l} \right| \\ \leq & ~ \left|\sqrt{h(l)} -1 -\frac{h(l)-1}{2}\right| + \left|\frac{1-h(l)}{2} - \frac{2}{3l} \right| \\ \leq & ~ \frac{ (h(l)-1)^2}{8(1/4)^{3/2}} + \frac{2}{3l^2} \\ &~ \hspace{18mm}\hbox{(by Lemma \ref{lem.taylor} and Equation (\ref{eq.hbd3}))} \\ \leq & ~\frac{16}{9(l-1)^2} + \frac{2}{3l^2} \hspace{9mm}\hbox{(by Equation (\ref{eq.hbd1a}))}. \end{aligned}$$ It follows that the series $S_1^{\infty}$ of Equation ([\[eq.s1infdef\]](#eq.s1infdef){reference-type="ref" reference="eq.s1infdef"}) is absolutely convergent, and of course $$\begin{aligned} \label{eq.S1lim} \lim_{k\rightarrow\infty}S_1(k) = S_1^{\infty} .\end{aligned}$$ It remains to deal with $S_3(k)$. We begin by applying Lemma [Lemma 13](#lem.taylor){reference-type="ref" reference="lem.taylor"} with $$D=h(l) \hspace{5mm}\hbox{and}\hspace{5mm} u = - \frac{g(l)}{k} - \frac{f(l)}{k^2} .$$ By Equations ([\[eq.hbd1a\]](#eq.hbd1a){reference-type="ref" reference="eq.hbd1a"}--[\[eq.fbd1\]](#eq.fbd1){reference-type="ref" reference="eq.fbd1"}), when $3\leq l \leq \sqrt{k}$, we have $D>\frac{1}{3}$ and $$\begin{aligned} \label{eq.hgfbd2} 0 > u > -\frac{2l}{3k} - \frac{l}{k^2} \geq -\frac{2}{3\sqrt{k}} - \frac{1}{k^{3/2}} \geq - \frac{5}{3\sqrt{k}}.\end{aligned}$$ Since we are really interested in $k$ tending to infinity, it is harmless to assume that $k\geq 36$. Then Equation ([\[eq.hgfbd2\]](#eq.hgfbd2){reference-type="ref" reference="eq.hgfbd2"}) implies that $$\begin{aligned} \label{eq.hgfbd3} D+u > \frac{1}{3} - \frac{5}{3\sqrt{36}} = \frac{1}{18} \hspace{5mm}\hbox{when $3\leq l \leq \sqrt{k}$ and $k\geq 36$.}\end{aligned}$$ Therefore we can take $w=1/18$ in Lemma [Lemma 13](#lem.taylor){reference-type="ref" reference="lem.taylor"}. 
Thus the lemma tells us that we can write $$\label{eq.hdiff1} \sqrt{h(l)} - \sqrt{ h(l)- \frac{g(l)}{k} - \frac{f(l)}{k^2} } = \frac{ \frac{g(l)}{k}+\frac{f(l)}{k^2} }{2\sqrt{h(l)} } +\epsilon(l,k).$$ where the remainder term $\epsilon$ satisfies $$\begin{aligned} \label{eq.epsbound} \left|\epsilon(l,k) \right| \leq \frac{\left(\frac{5}{3\sqrt{k}}\right)^2}{8\left(\frac{1}{18}\right)^{3/2}} = \frac{75}{2\sqrt{2}\,k} \hspace{5mm}\hbox{when $3\leq l \leq \sqrt{k}$ and $k\geq 36$.} \end{aligned}$$ Now, by Equations ([\[eq.S3def\]](#eq.S3def){reference-type="ref" reference="eq.S3def"}) and ([\[eq.hdiff1\]](#eq.hdiff1){reference-type="ref" reference="eq.hdiff1"}), we can rewrite $S_3(k)$ as $$\begin{aligned} \label{eq.Sdecomp} S_3(k) = S_{3,g}(k) +S_{3,f}(k)+ S_{3,\epsilon}(k),\end{aligned}$$ where $$\begin{aligned} S_{3,g}(k)=~\frac{1}{k} \sum_{l=3}^{\lfloor \sqrt{k}\rfloor}\frac{g(l)}{2\sqrt{h(l)}},\;\;\; S_{3,f}(k) = \frac{1}{k^2} \sum_{l=3}^{\lfloor \sqrt{k}\rfloor}\frac{f(l)}{2\sqrt{h(l)}},\;\;\; S_{3,\epsilon}(k) = \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \epsilon(l,k).\end{aligned}$$ By Equation ([\[eq.epsbound\]](#eq.epsbound){reference-type="ref" reference="eq.epsbound"}), $|S_{3,\epsilon}(k)| \leq \frac{75}{2\sqrt{2k}}$ when $k\geq 36$, and so $$\label{eq.s3epslim} \lim_{k\rightarrow\infty} S_{3,\epsilon}(k) = 0.$$ By Equations ([\[eq.hbd1b\]](#eq.hbd1b){reference-type="ref" reference="eq.hbd1b"}) and ([\[eq.fbd1\]](#eq.fbd1){reference-type="ref" reference="eq.fbd1"}), we see that $$|S_{3,f}(k)| \leq \frac{1}{k^2} \frac{\sqrt{k}(\sqrt{k}+1)}{4\sqrt{1/3}} ,$$ and therefore $$\label{eq.s3flim} \lim_{k\rightarrow\infty} S_{3,f}(k) = 0.$$ Next, we claim that $$\label{eq.s3glim} \lim_{k\rightarrow\infty} S_{3,g}(k) = \frac{1}{6} .$$ To obtain this result, we write $g(l)$ as $$\begin{aligned} g(l) = \frac{2l(l^2-3l+1)}{3(l^2-l+1)} = \frac{2(l-2)}{3} + g_1(l),\;\;\hbox{where}\;\;\; g_1(l) = -\frac{4(l-1)}{3(l^2-l+1)} . 
\end{aligned}$$ Accordingly, we write $$\begin{aligned} \label{eq.gg1suma} S_{3,g}(k) = T(k) + T_1(k), \end{aligned}$$ where $$\begin{aligned} T(k) = \frac{1}{k} \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \frac{(l-2)}{3\sqrt{h(l)}} \;\;\;\hbox{and}\;\;\;T_1(k) = \frac{1}{k} \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \frac{g_1(l)}{2\sqrt{h(l)} }.\end{aligned}$$ We have from Equation ([\[eq.hbd1b\]](#eq.hbd1b){reference-type="ref" reference="eq.hbd1b"}) that $$\begin{aligned} \left| \frac{g_1(l)}{2\sqrt{h(l)}} \right| \leq \frac{2(l-1)}{3(l^2-l)\sqrt{1/3}} < \frac{4}{3l} ,\end{aligned}$$ and therefore $\left| T_1(k) \right| \leq \frac{4}{3}\,\sqrt{k}/k$, from which we obtain $$\begin{aligned} \label{eq.T1lim} \lim_{k\rightarrow\infty} T_1(k) = 0.\end{aligned}$$ Now we consider $T(k)$. Note that $h(l)<1$ for every $l\geq 3$. Let $\delta\in (0,1)$. Since $\lim_{l\rightarrow\infty}h(l)=1$, we can choose $L$ such that $1> \sqrt{h(l)} > 1-\delta$ whenever $l>L$. Then $$\label{eq.Tbd1} T(k) \geq \frac{1}{k} \sum_{l=3}^{\lfloor \sqrt{k}\rfloor} \frac{(l-2)}{3} = \frac{ ( \lfloor \sqrt{k}\rfloor-1)(\lfloor \sqrt{k}\rfloor-2)}{6k} .$$ For the corresponding upper bound, assume $\sqrt{k}>L$. Then $$\begin{aligned} \nonumber T(k) & \leq & \frac{1}{k} \sum_{l=L}^{\lfloor \sqrt{k}\rfloor} \frac{(l-2)}{3(1-\delta)} + \frac{1}{k}\sum_{l=3}^{L-1} \frac{l-2}{3\sqrt{h(l)}} \\ & \leq & \label{eq.Tbd2} \frac{ ( \lfloor \sqrt{k}\rfloor-1)(\lfloor \sqrt{k}\rfloor-2)}{6k(1-\delta)} % - \frac{L(L-1)}{6k(1-\delta)} . 
+ \frac{1}{k}\sum_{l=3}^{L-1} \frac{l-2}{3\sqrt{h(l)}}.\end{aligned}$$ Combining Equations ([\[eq.Tbd1\]](#eq.Tbd1){reference-type="ref" reference="eq.Tbd1"}) and ([\[eq.Tbd2\]](#eq.Tbd2){reference-type="ref" reference="eq.Tbd2"}) yields (since $L$ is fixed) $$\frac{1}{6} \leq \liminf_{k\rightarrow\infty}T(k) \leq \limsup_{k\rightarrow\infty}T(k) \leq \frac{1}{6(1-\delta)}.$$ Since the above holds for every $\delta$ in $(0,1)$, we conclude that $$\label{eq.Tlim1} \lim_{k\rightarrow\infty}T(k) = \frac{1}{6} .$$ Now the claimed Equation ([\[eq.s3glim\]](#eq.s3glim){reference-type="ref" reference="eq.s3glim"}) follows from Equations ([\[eq.gg1suma\]](#eq.gg1suma){reference-type="ref" reference="eq.gg1suma"}), ([\[eq.T1lim\]](#eq.T1lim){reference-type="ref" reference="eq.T1lim"}), and ([\[eq.Tlim1\]](#eq.Tlim1){reference-type="ref" reference="eq.Tlim1"}). Then by Equations ([\[eq.s3epslim\]](#eq.s3epslim){reference-type="ref" reference="eq.s3epslim"})--([\[eq.s3glim\]](#eq.s3glim){reference-type="ref" reference="eq.s3glim"}), we obtain $$\label{eq.S3lim} \lim_{k\rightarrow\infty}S_3(k) = \frac{1}{6} .$$ Finally, by Equations ([\[eq.bigsum\]](#eq.bigsum){reference-type="ref" reference="eq.bigsum"}), ([\[eq.S0lim\]](#eq.S0lim){reference-type="ref" reference="eq.S0lim"})--([\[eq.Elim\]](#eq.Elim){reference-type="ref" reference="eq.Elim"}), ([\[eq.S1lim\]](#eq.S1lim){reference-type="ref" reference="eq.S1lim"}), and ([\[eq.S3lim\]](#eq.S3lim){reference-type="ref" reference="eq.S3lim"}), we conclude $$\begin{aligned} \lim_{k\rightarrow\infty} \left(\frac{A^*(k)}{k} -\frac{1}{6}\ln(k) \right) & = & \frac{1}{2}\left( 0+S_1^{\infty} -\frac{1}{2} +\frac{1}{6} +\frac{2}{3}\gamma-1\right) \\ & = & \frac{1}{2}S_1^{\infty} + \frac{1}{3}\gamma- \frac{2}{3}.\end{aligned}$$ This proves the proposition. $\Box$ # Braess edges on a spider graph {#sec:spider} First we mention a calculation that is useful for identifying Braess edges. 
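Independently of the closed-form machinery, Kemeny's constant of any small graph can be computed by brute force, which is useful for spot-checking the formulas that follow. For the simple random walk on a connected graph $G$ with edge set $E$, $\kappa(G)=\sum_{j\neq i}\pi_j m_{ij}$ for any fixed vertex $i$, where $\pi_j=\deg(j)/(2|E|)$ and $m_{ij}$ is the mean first passage time from $i$ to $j$. The pure-Python sketch below (the helper names `kemeny`, `path`, and `spider` are ours, not the paper's) solves the standard hitting-time linear systems by Gaussian elimination.

```python
def solve(A, b):
    """Solve the linear system Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (M[c][n] - sum(M[c][k] * x[k] for k in range(c + 1, n))) / M[c][c]
    return x

def kemeny(adj):
    """Kemeny's constant of the simple random walk on a connected graph,
    computed as the sum over j != i of pi_j * m_{ij} for a fixed start vertex i."""
    verts = sorted(adj)
    deg = {v: len(adj[v]) for v in verts}
    two_m = sum(deg.values())
    i, total = verts[0], 0.0
    for j in verts:
        if j == i:
            continue
        # hitting times to j solve m(v) = 1 + (1/deg v) * sum_{w ~ v} m(w), m(j) = 0
        others = [v for v in verts if v != j]
        idx = {v: r for r, v in enumerate(others)}
        A = [[0.0] * len(others) for _ in others]
        rhs = [1.0] * len(others)
        for v in others:
            A[idx[v]][idx[v]] = 1.0
            for w in adj[v]:
                if w != j:
                    A[idx[v]][idx[w]] -= 1.0 / deg[v]
        m = solve(A, rhs)
        total += deg[j] / two_m * m[idx[i]]
    return total

def path(n):
    """The path P_n = (1, ..., n)."""
    return {v: [w for w in (v - 1, v + 1) if 1 <= w <= n] for v in range(1, n + 1)}

def spider(a, b):
    """The spider S_{a,b}: center 0 with b legs of length a (integer labels)."""
    adj = {0: []}
    for t in range(b):
        prev = 0
        for s in range(a):
            v = 1 + t * a + s
            adj.setdefault(v, []).append(prev)
            adj[prev].append(v)
            prev = v
    return adj

assert abs(kemeny(path(3)) - 3 / 2) < 1e-9
assert abs(kemeny(path(5)) - 11 / 2) < 1e-9
assert abs(kemeny(spider(2, 3)) - ((3 - 2 / 3) * 2 ** 2 + 1 / 6)) < 1e-9
```

The assertions reproduce, up to floating-point error, the values quoted in this section: $\kappa(\mathcal{P}_3)=\tfrac32$, $\kappa(\mathcal{P}_5)=\tfrac{11}{2}$, and $\kappa(\mathcal{S}_{2,3})=\left(3-\tfrac{2}{3}\right)2^2+\tfrac16=\tfrac{19}{2}$.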
The result in [@faught20221 Theorem 4.2] can be recast as follows. **Theorem 14**. *Let $G=G_1\bigoplus_v G_2$. Let $\tilde{G}$ (resp. $\tilde{G}_1$) be the graph obtained from $G$ (resp. $G_1$) by adding an edge to $G_1$. Then, $$\begin{aligned} &\kappa(\tilde{G})-\kappa(G)\\ =&\kappa(\tilde{G}_1)-\kappa(G_1)+\frac{m_{G_2}}{m_{\tilde{G}}}\left(\mu(\tilde{G}_1,v)-\kappa(\tilde{G}_1)-\mu(G_2,v)+\kappa(G_2)\right)\\ &-\frac{m_{G_2}}{m_G}\left(\mu(G_1,v)-\kappa(G_1)-\mu(G_2,v)+\kappa(G_2)\right). \end{aligned}$$* A *spider graph* is a tree with one vertex of degree at least $3$ and all other vertices of degree $1$ or $2$. For $a\geq 1$ and $b\geq 2$, we use $\mathcal{S}_{a,b}$ to denote the spider graph with a vertex $v$ of degree $b$ such that each pendent vertex is at distance $a$ from $v$ (when $b=2$, we interpret $\mathcal{S}_{a,2}$ simply as the path $\mathcal{P}_{2a+1}$, whose center plays the role of $v$). From [@ciardo2022kemeny Proposition 6.3], we find $$\begin{aligned} \kappa(\mathcal{S}_{a,b}) = \left(b-\frac{2}{3}\right)a^2+\frac{1}{6}.\end{aligned}$$ Considering the distance matrix of $\mathcal{S}_{a,b}$, we find $$\begin{aligned} \mu(\mathcal{S}_{a,b},v) = ba^2.\end{aligned}$$ Then, $$\begin{aligned} \label{eqn:temp111} \mu(\mathcal{S}_{a,b},v)-\kappa(\mathcal{S}_{a,b}) =\frac{2}{3}a^2-\frac{1}{6}= \mu(\mathcal{P}_{a+1},w)-\kappa(\mathcal{P}_{a+1})\end{aligned}$$ where $w$ is a pendent vertex of the path. We remark that by [@ciardo2020braess Theorem 2.2], every non-edge in $\mathcal{S}_{1,b}$ for $b\geq 2$ is a Braess edge, and so we shall consider $\mathcal{S}_{a,b}$ for $a \geq 2$ throughout this section. **Proposition 15**. *Let $a\geq 2$, $b\geq 2$, and $v$ be the center vertex of $\mathcal{S}_{a,b}$. Choose two pendent vertices in $\mathcal{S}_{a,b}$. We let $\mathcal{P}_{2a+1}=(1,\dots,2a+1)$ be the path from one pendent vertex to the other. For $1\leq i<j\leq 2a+1$ with $j-i\geq 2$, let $\mathcal{S}_{a,b}^{\{ i,j \}}$ denote the graph obtained from $\mathcal{S}_{a,b}$ by adding edge $\{ i,j \}$.
Then, we have $$\begin{aligned} \nonumber &\kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b}) \\\label{exp:spider} = &\kappa(\mathcal P_{2a+1}^{\{ i,j \}}) - \kappa\left(\mathcal P_{2a+1}\right)+\frac{a(b-2)}{ab+1}\left(\mu(\mathcal P_{2a+1}^{\{ i,j \}},v) -\kappa(\mathcal P_{2a+1}^{\{ i,j \}})-\frac{2}{3}a^2+\frac{1}{6}\right). \end{aligned}$$* *Proof.* When $b=2$, it is trivial. Let $b= 3$. Choosing $G_1=\mathcal{S}_{a,2}$, $\tilde{G}_1 = \mathcal{S}_{a,2}^{\{ i,j \}}$, and $G_2 = \mathcal{P}_{a+1}$, using Theorem [Theorem 14](#thm:Braess with cut vertex){reference-type="ref" reference="thm:Braess with cut vertex"} with [\[eqn:temp111\]](#eqn:temp111){reference-type="eqref" reference="eqn:temp111"}, Equation [\[exp:spider\]](#exp:spider){reference-type="eqref" reference="exp:spider"} follows. Similarly, for $b \geq 4$, choosing $G_1=\mathcal{S}_{a,2}$, $\tilde{G}_1 = \mathcal{S}_{a,2}^{\{ i,j \}}$, and $G_2 = \mathcal{S}_{a,b-2}$, we can establish the result. ◻ To find the moment $\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)$, we calculate each entry in $F_{\mathcal P_{2a+1}^{\{ i,j \}}}$ case by case: **Lemma 16**. *Let $\mathcal{P}_k=(1,\dots,k)$ be the path of length $k-1$. Let $F_{\mathcal{P}_k^{\{i, j\}}}=(f_{p,q})_{1\le p,q \le k}$ and $l$ be the length of the cycle in $\mathcal{P}_k^{\{ i,j \}}$. We have the following:* - *if $p,q<i$ or $p,q>j$, then $f_{p,q} = l|p-q|$;* - *if $p<i$ and $i\le q\le j$, then $f_{p,q} = l(i-p)+(q-i)(l-q+i)$;* - *if $p<i$ and $q>j$, then $f_{p,q} = l(i-p)+l(q-j)+(l-1)$;* - *if $i\le p \le j$ and $i\le q \le j$, then $f_{p,q} = |p-q|(l-|p-q|)$;* - *if $i\le p\le j$ and $q>j$, then $f_{p,q}=l(q-j)+(j-p)(l-j+p)$.* *Other cases not stated are obtained by symmetry.* *Proof.* Using a similar argument as in [@kirkland2016kemeny Example 3.4], one can easily compute $f_{p,q}$, considering two cases: the vertex $p$ is on the cycle in $\mathcal{P}_k^{\{i, j\}}$ if $i \le p \le j$, and $p$ is not on the cycle if $p<i$ or $p>j$. 
◻ Preserving the notation of the above lemma, by the definition of the moment, we have $$\begin{aligned} \nonumber \mu(\mathcal P_{2a+1}^{\{ i,j \}},v) =&~ \frac{1}{\tau_{\mathcal P_{2a+1}^{\{ i,j \}}}} \mathbf{d}_{\mathcal P_{2a+1}^{\{ i,j \}}}^T F_{\mathcal P_{2a+1}^{\{ i,j \}}} e_v \\\label{eq:moment Pij} =&~ \frac{1}{l}\left( f_{1,v} + 2\sum_{m=2}^{i-1} f_{m,v} + 2\sum_{m=i}^{j} f_{m,v} + 2\sum_{m=j+1}^{2a} f_{m,v} + f_{2a+1,v} \right).\end{aligned}$$ Note that $v$ is the center of $\mathcal P_{2a+1}$, that is, $v=a+1$. Let $i = s+1$ and $j = s+l$, so that $l$ is the length of the cycle in $\mathcal P_{2a+1}^{\{i,j\}}$. Now we shall find the moment $\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)$ by considering two cases: 1. $v$ is contained in the cycle of $\mathcal P_{2a+1}^{\{ i,j \}}$; 2. $v$ is not contained in the cycle of $\mathcal P_{2a+1}^{\{ i,j \}}$. For the case (i), we have $i\le v \le j$, equivalently, $s+1\leq a+1\leq s+l$. Applying Lemma [Lemma 16](#lem:f_pq){reference-type="ref" reference="lem:f_pq"} and [\[eq:moment Pij\]](#eq:moment Pij){reference-type="eqref" reference="eq:moment Pij"}, we obtain $$\begin{gathered} \label{eq:moment1} \mu(\mathcal P_{2a+1}^{\{ i,j \}},v) = -\frac{2(2a-2l+3)}{l}s^2+\frac{2(2a-2l+3)(2a-l+1)}{l}s\\ +\frac{4l^2}{3}-2(3a+2)l+10a^2+14a+\frac{14}{3}-\frac{2(2a^3+5a^2+4a+1)}{l}.\end{gathered}$$ For the case (ii), considering the symmetry, we may only consider $i<j<v$, equivalently, $0\leq s$ and $s+l\leq a$. Again, applying Lemma [Lemma 16](#lem:f_pq){reference-type="ref" reference="lem:f_pq"} and [\[eq:moment Pij\]](#eq:moment Pij){reference-type="eqref" reference="eq:moment Pij"}, we obtain $$\begin{aligned} \label{eq:moment2} \mu(\mathcal P_{2a+1}^{\{ i,j \}},v)= -\frac{2(l^2-l+1)}{l}s+2a^2+2a-\frac{2}{3}l^2+\frac{2}{3}.\end{aligned}$$ We identify some non-Braess edges on $\mathcal{S}_{a,b}$. **Lemma 17**. *Let $a\geq 2$. Suppose that the cycle of $\mathcal P_{2a+1}^{\{ i,j \}}$ contains $v$ where $v = a+1$. 
Then, $$\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)- \kappa\left(\mathcal P_{2a+1}\right)-\frac{2}{3}a^2+\frac{1}{6}\leq 0.$$* *Proof.* Let $s=i-1$ and $l=j-i+1$. Using [\[eq:moment1\]](#eq:moment1){reference-type="eqref" reference="eq:moment1"}, we can find $$\begin{aligned} H_1(s,l,a):=&~l\left(\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)- \kappa\left(\mathcal P_{2a+1}\right)-\frac{2}{3}a^2+\frac{1}{6}\right)\\ =&-2(2a - 2l + 3)s^2 + 2(2a - 2l + 3)(2a - l + 1)s\\ & - 4a^3 + (8l - 10)a^2 -2(l-1)(3l-4)a +\frac{2}{3}(l-1)(2l^2 - 4l + 3). \end{aligned}$$ Let $a\geq l-1$. If we consider $H_1$ as a polynomial in variable $s$, then $H_1$ is concave down and the maximum is given by $$\begin{aligned} H_1\left(\frac{2a-l+1}{2},l,a\right)= -(l-1)^2a + \frac{1}{3}l^3 - \frac{1}{2}l^2 + \frac{2}{3}l - \frac{1}{2}, \end{aligned}$$ which is a linear function in variable $a$ with a negative slope. Since $H_1\left(\frac{l-1}{2},l,l-1\right)=-\frac{1}{6}(l-1)(4l^2-11l+3)<0$ for $l\ge 3$, we have $H_1(s,l,a)<0$ for $a\geq l-1$. Suppose $\frac{l-1}{2}<a<l-1$. Then, the polynomial $H_1$ in variable $s$ is concave up. Examining the proof of Theorem [Theorem 7](#thm:nonedge crossing the center){reference-type="ref" reference="thm:nonedge crossing the center"}, given $l$ and $a$, the maximum is attained at $s=0$. So, $$\begin{aligned} H_1\left(0,l,a\right)= - 4a^3+(8l - 10)a^2 -2(l-1)(3l-4)a + \frac{2}{3}(l-1)(2l^2 - 4l + 3). \end{aligned}$$ Taking the derivative of $H_1(0,l,a)$ with respect to $a$ yields $- 12a^2 + 4(4l - 5)a -2(l-1)(3l-4)$, which can be shown to be negative. Since $H_1\left(0,l,\frac{l}{2}\right)=-\frac{1}{6}(l-3)(l-1)(l+2)\leq 0$ for $l \geq 3$, we have $H_1(s,l,a)<0$ for $\frac{l-1}{2}<a<l-1$. Finally, we suppose $l =2a+1$. Then $s = 0$ and we have $$H_1\left(0,l,\frac{l-1}{2}\right) = -\frac{1}{6}l(l-1)(l-5).$$ Since $a\geq 2$, we have $l\geq 5$. It can be seen that $H_1\left(0,l,\frac{l-1}{2}\right)\leq 0$ for $l\geq 5$. Therefore, the conclusion follows. ◻ **Theorem 18**.
*Let $a\geq 2$ and $b\geq 2$. Suppose that the cycle in $\mathcal{S}_{a,b}^{\{ i,j \}}$ contains $v$, where $v$ is the center vertex of $\mathcal{S}_{a,b}$. Then, $$\kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b}) < 0.$$* *Proof.* Consider [\[exp:spider\]](#exp:spider){reference-type="eqref" reference="exp:spider"}. From Theorem [Theorem 7](#thm:nonedge crossing the center){reference-type="ref" reference="thm:nonedge crossing the center"}, we have $\kappa(\mathcal P_{2a+1}^{\{ i,j \}}) - \kappa\left(\mathcal P_{2a+1}\right)<0$. If $\mu(\mathcal P_{2a+1}^{\{ i,j \}},v) -\kappa(\mathcal P_{2a+1}^{\{ i,j \}})-\frac{2}{3}a^2+\frac{1}{6}\leq 0$, then $\kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b}) < 0$. Suppose that $\mu(\mathcal P_{2a+1}^{\{ i,j \}},v) -\kappa(\mathcal P_{2a+1}^{\{ i,j \}})-\frac{2}{3}a^2+\frac{1}{6}>0$. Since $0 \leq \frac{a(b-2)}{ab+1}<1$, we have $$\begin{aligned} \label{tempeqn:11} \kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b}) < \mu(\mathcal P_{2a+1}^{\{ i,j \}},v)- \kappa\left(\mathcal P_{2a+1}\right)-\frac{2}{3}a^2+\frac{1}{6}. \end{aligned}$$ By Lemma [Lemma 17](#lem:ineq1){reference-type="ref" reference="lem:ineq1"}, it follows that $\kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b})<0$. This proves the theorem. ◻ We obtain the following corollary for a family of trees with no Braess edges (see Figure [\[Figure:2-star\]](#Figure:2-star){reference-type="ref" reference="Figure:2-star"}). **Corollary 19**. *For an integer $b\geq2$, $B(\mathcal{S}_{2,b}) = 0$.* We now consider which edges are Braess on $\mathcal{S}_{a,b}$ in the following Lemma. **Lemma 20**. *Let $a\geq 2$. 
If $\kappa(\mathcal P_{2a+1}^{\{ i,j \}}) - \kappa\left(\mathcal P_{2a+1}\right)>0$, then $$\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)-\kappa(\mathcal P_{2a+1}^{\{ i,j \}})-\frac{2}{3}a^2+\frac{1}{6}>0.$$ This implies that if $\{ i,j \}$ is a Braess edge for $\mathcal P_{2a+1}$, then it also is for $\mathcal{S}_{a,b}$.* *Proof.* Let $s=i-1$ and $l=j-i+1$. Using [\[eq:k(P\^ij)\]](#eq:k(P^ij)){reference-type="eqref" reference="eq:k(P^ij)"} and [\[eq:moment2\]](#eq:moment2){reference-type="eqref" reference="eq:moment2"}, we can find that $$\begin{aligned} F_1(s,l,a) :=&~ 6l(2a+1)\left(\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)-\kappa(\mathcal P_{2a+1}^{\{ i,j \}})-\frac{2}{3}a^2+\frac{1}{6}\right)\\ =& -12(l^2 - l + 1)s^2 - 12l(l^2 - l + 1)s + l(8a^2 + 12a - 3l^3 + 4). \end{aligned}$$ Since $\{ i,j \}$ is a Braess edge, we have from Theorem [Theorem 9](#thm:equiv Braess on path){reference-type="ref" reference="thm:equiv Braess on path"} and [\[eq:symmetry on P_k\]](#eq:symmetry on P_k){reference-type="eqref" reference="eq:symmetry on P_k"} that $$\begin{aligned} a>K\;\;\text{and}\;\;\;\; 0\leq s< \frac{(2a+1)-l}{2}-\sqrt{\frac{N}{D}}:=s_0, \end{aligned}$$ where $$\begin{aligned} K =&~\frac{1}{2}\left(l^2-\frac{3}{4}l-\frac{9}{16}\right),\\ N =&~ (3l^2-7l+3)(2a+1)^2-2(l^3-3l^2+l)(2a+1)-3l^2(l-1), \\ D =&~ 12(l^2-l+1). \end{aligned}$$ Considering $F_1(s,l,a)$ as a polynomial in variable $s$, observe that it is concave down. It is enough to show that $F_1(0,l,a)>0$ and $F_1(s_0,l,a)>0$ for $l\geq 3$. For the first inequality, let $R := R(l,a) = l(8a^2 + 12a - 3l^3 + 4)$. We can see that $$\begin{aligned} F_1(0,l,a)= R(l,a) > R(l,K) = 2l^5 - 6l^4 + \frac{39}{8}l^3 - \frac{45}{16}l^2 + \frac{161}{128}l>0. \end{aligned}$$ Now we consider $$\begin{aligned} F_1(s_0,l,a) =&~(2a+1)\sqrt{ND}+R-D\left(\frac{(2a+1-l)^2}{4}+\frac{l(2a+1-l)}{2}+\frac{N}{D}\right). 
\end{aligned}$$ Then $F_1(s_0,l,a)>0$ is implied by $$F_2(l,a) := (2a+1)^2ND-\left(R-D\left(\frac{(2a+1-l)^2}{4}+\frac{l(2a+1-l)}{2}+\frac{N}{D}\right)\right)^2>0.$$ We can find that $$\begin{aligned} F_2(l,a) = &~192l(l-2)(2l-1)a^4-192l(l^3 - 8l^2 + 13l - 5)a^3\\ &-16l(l - 2)(l - 1)( l^3 + 6l^2 + 29l - 27)a^2\\ &-16l( l^5 + 3l^4 + 4l^3 - 42l^2 + 52l - 21)a\\ &-4l(l^5 + 3l^4 + l^3 - 24l^2 + 28l - 12). \end{aligned}$$ Since the coefficient of $a^4$ in $F_2(l,a)$ is positive, the expression $F_2^{(1)}(l,a)$ obtained from $F_2(l,a)$ by replacing $a^4$ with $Ka^3$ is less than $F_2(l,a)$. Then the coefficient of $a^3$ in $F_2^{(1)}(l,a)$ is $$6l(8l^2(l-3)(4l-5)+210l^2 - 395l + 142)>0.$$ So, the expression $F_2^{(2)}(l,a)$ obtained from $F_2^{(1)}(l,a)$ by replacing $a^3$ with $Ka^2$ is less than $F_2^{(1)}(l,a)$. Then the coefficient of $a^2$ in $F_2^{(2)}(l,a)$ is $$16l^5(l-3)(6l-13)+2l^3(l-3)(285l - 98)+\frac{l}{16}(28830l^2 - 30031l + 9990)>0.$$ Similarly, define $F_2^{(3)}(l,a)$ as the expression obtained from $F_2^{(2)}(l,a)$ by replacing $a^2$ with $Ka$. Then the coefficient of $a$ in $F_2^{(3)}(l,a)$ is $$\begin{aligned} C = &~4l^7(l - 3)(12l - 35)+\frac{3}{4}l^5(l-3)(448l - 359)+\frac{3}{64}l^4(15320l - 29069)\\ &+\frac{1}{512}l^2(520134l - 275585)+\frac{41061}{256}l. \end{aligned}$$ It can be seen that $C>0$ for $l\geq 3$. Finally, we have $$\begin{aligned} &F_2^{(4)}(l,a)\\ :=&~CK -4l(l^5 + 3l^4 + l^3 - 24l^2 + 28l - 12)\\ =&~8l^9(l-3)(3l-11)+\frac{3}{4}l^7(l-3)(276l - 295)+\frac{1}{32}l^6( 11697l - 28745)\\ &+\frac{3}{512}l^4(98606l - 46225)+\frac{49}{16384}l^2(30870l - 6943)+\frac{23667}{8192}l>0. \end{aligned}$$ Therefore, $$F_2(l,a)>F_2^{(1)}(l,a)>F_2^{(2)}(l,a)>F_2^{(3)}(l,a)>F_2^{(4)}(l,a)>0.\qedhere$$ ◻ Here is our main result in this section. **Theorem 21**.
*For integers $a\geq 2$ and $b\geq 2$, $$\frac{b}{2}B(\mathcal{P}_{2a+1})\leq B(\mathcal{S}_{a,b}) \leq \left(\frac{1}{2}\ln(3a+1)+T_1^\infty+\gamma-2\right)ab+o(a)b,$$ where $\gamma$ is Euler's constant and $T_1^{\infty}$ is the absolutely convergent sum $$T_1^{\infty}= \sum_{l=3}^{\infty} \left(\frac{l}{l^2-l+1}-\frac{1}{l} \right) .$$* **Remark 22**. The value of $T_1^{\infty}$ is approximately $0.3701$. Thus $T_1^\infty+\gamma-2$ is approximately $-1.0527$. **Proof of Theorem [Theorem 21](#thm:spider){reference-type="ref" reference="thm:spider"}:** From Lemma [Lemma 20](#lem:Braess on spider){reference-type="ref" reference="lem:Braess on spider"}, we see that each branch of $\mathcal{S}_{a,b}$ at $v$, where $v$ is the center vertex of $\mathcal{S}_{a,b}$, has at least $\frac{1}{2}B(\mathcal{P}_{2a+1})$ Braess edges. This gives the lower bound $\frac{b}{2}B(\mathcal{P}_{2a+1})\leq B(\mathcal{S}_{a,b})$. Now, we shall find an upper bound on the number of Braess edges on $\mathcal{S}_{a,b}$. Suppose that $s\geq 0$ and $s+l\leq a$. Let $i = s+1$ and $j = s+l$. Note that $v = a+1$. Since $j<v$, the moment $\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)$ is given by [\[eq:moment2\]](#eq:moment2){reference-type="eqref" reference="eq:moment2"}. We can find that $$\begin{aligned} H_2(s,l,a):=&~\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)- \kappa\left(\mathcal P_{2a+1}\right)-\frac{2}{3}a^2+\frac{1}{6}\\ =&~\frac{-6(l^2 - l +1)s + 6al - 2l^3+ 2l}{3l}.\end{aligned}$$ We denote by $\Gamma(a)$ the number of pairs $(s,l)$ such that $H_2(s,l,a)>0$. Since $$H_2(s,l,a) =\left(\kappa(\mathcal P_{2a+1}^{\{ i,j \}}) - \kappa\left(\mathcal P_{2a+1}\right)\right)+\left(\mu(\mathcal P_{2a+1}^{\{ i,j \}},v)-\kappa(\mathcal P_{2a+1}^{\{ i,j \}})-\frac{2}{3}a^2+\frac{1}{6}\right),$$ it follows from Lemma [Lemma 20](#lem:Braess on spider){reference-type="ref" reference="lem:Braess on spider"} that $\frac{1}{2}B(\mathcal{P}_{2a+1})\leq \Gamma(a)$.
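As a numerical aside supporting Remark 22: writing each term of $T_1^{\infty}$ over a common denominator gives $\frac{l}{l^2-l+1}-\frac{1}{l}=\frac{l-1}{l(l^2-l+1)}=O(l^{-2})$, so a direct partial sum converges with an $O(1/N)$ tail. A quick sketch (illustrative only):

```python
# T_1^inf = sum_{l >= 3} (l/(l^2-l+1) - 1/l); each term equals (l-1)/(l*(l^2-l+1)).
N = 10 ** 6  # truncation point; the neglected tail is of order 1/N
partial = sum((l - 1) / (l * (l * l - l + 1)) for l in range(3, N))
assert abs(partial - 0.3701) < 1e-3  # agrees with Remark 22 to the stated precision
```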
We can see from [\[exp:spider\]](#exp:spider){reference-type="eqref" reference="exp:spider"} that for any non-edge $\{ i,j \}$, $\kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b})$ can be evaluated by Kemeny's constant and the moment of the path $\mathcal{P}_{2a+1}$ in terms of $a$, $b$, $s$, and $l$. Moreover, recall from [\[tempeqn:11\]](#tempeqn:11){reference-type="eqref" reference="tempeqn:11"} that, since $0\leq \frac{a(b-2)}{ab+1}<1$, we have $\kappa(\mathcal{S}_{a,b}^{\{ i,j \}})-\kappa(\mathcal{S}_{a,b})<H_2(s,l,a)$. Hence, if $\{ i,j \}$ is a Braess edge, then $H_2(s,l,a)>0$. Consequently, the number of Braess edges on each branch of $\mathcal{S}_{a,b}$ at $v$ is less than or equal to $\Gamma(a)$ and thus $$\begin{aligned} B(\mathcal{S}_{a,b}) \leq b\Gamma(a).\end{aligned}$$ The inequality $H_2(s,l,a)>0$ is equivalent to $$\begin{aligned} 0\leq s<\frac{6al-2l^3+2l}{6(l^2-l+1)}=\frac{(3a+1)l}{3(l^2-l+1)}+\frac{1}{3(l^2-l+1)}-\frac{l+1}{3}=:X(l). \end{aligned}$$ For a feasible $s\geq 0$ to exist, $6al-2l^3+2l$ must be positive; this yields $$l< \sqrt{3a+1} =:U_0.$$ Hence, $$\begin{aligned} \Gamma(a) = \sum_{l = 3}^{\lceil U_0\rceil -1} \lceil X(l)\rceil .\end{aligned}$$ Now we consider the asymptotic behaviour of $\Gamma(a)$ as $a\rightarrow \infty$.
Then we see that $$\frac{\Gamma(a)}{a}-\ln\left( U_0\right) = T_0(a)+T_1(a)+T_2(a)+T_3(a)$$ where $$\begin{aligned} \nonumber T_0(a) = & ~ \frac{1}{a}\sum_{l=3}^{\lceil U_0\rceil -1} \left( \lceil X(l)\rceil - X(l) \right),\\ T_1(a) =& ~ \sum_{l=3}^{\lceil U_0\rceil -1}\left(\frac{(3a+1)l}{3a(l^2-l+1)}-\frac{1}{l}\right) = \sum_{l=3}^{\lceil U_0\rceil -1}\left(\frac{l}{3a(l^2-l+1)}+\frac{l-1}{l(l^2-l+1)}\right), \\ T_2(a) = & ~ \frac{1}{a}\sum_{l=3}^{\lceil U_0\rceil -1} \left(\frac{1}{3(l^2-l+1)}-\frac{l+1}{3}\right),\\ T_3(a) = & ~ -\ln\left( U_0\right)+\ln(\lceil U_0\rceil -1)+\left(-\ln(\lceil U_0\rceil -1)+\sum_{l=1}^{\lceil U_0\rceil -1}\frac{1}{l}\right)-\sum_{l=1}^{2}\frac{1}{l}.\end{aligned}$$ As done in the previous section for finding the asymptotic behaviour of the number of Braess edges on the path, one can find $$\lim_{a\rightarrow\infty}T_0(a)=0,\;\;\;\lim_{a\rightarrow\infty}T_3(a)=\gamma-\frac{3}{2}.$$ We can see that $$\lim_{a\rightarrow\infty}T_2(a) = -\frac{1}{2}.$$ Since $\frac{l-1}{l(l^2-l+1)}$ is a term of a convergent series, $T_1^{\infty}$ is absolutely convergent. It follows that $$\lim_{a\rightarrow\infty}T_1(a) = T_1^\infty.$$ This proves the theorem. $\Box$ A natural question arises: Given a function $f$ such that $f(n)\leq n^2/2$ and $f(n)\rightarrow\infty$ as $n\rightarrow\infty$, can we identify a family of trees $G$ such that $B(G)\sim f$? Considering variants of spider graphs in which the paths emanating from the central vertex may have different lengths, we can answer the question in the following special case. **Theorem 23**. *For $b_1, b_2\geq 1$, $B(S_{1, b_1}\bigoplus_{v_1, v_2} S_{2, b_2}) = \binom{b_1}{2}$, where $v_1$ is the vertex of degree $b_1$ and $v_2$ is the vertex of degree $b_2$.* *Proof.* By [@ciardo2020braess Theorem 2.2], any edge added in $S_{1, b_1}$ is Braess. Thus $B(S_{1, b_1}\bigoplus_{v_1, v_2} S_{2, b_2}) \geq \binom{b_1}{2}$.
Repeated use of Theorem [Theorem 14](#thm:Braess with cut vertex){reference-type="ref" reference="thm:Braess with cut vertex"}, together with straightforward computation, verifies that no other edges are Braess, which gives the result. There are six types of non-edges $\{i,j\}$ to consider. 1. $i,j$ are pendent vertices with $i,j\in S_{2,b_2}$ 2. $i,j\in S_{2,b_2}$ with $i$ a pendent vertex and $j$ of degree 2 3. $i,j\in S_{2,b_2}$ with $i,j$ both of degree 2 4. $i,j\in S_{2,b_2}$ with $i$ a pendent vertex and $j = v_2$ 5. $i,j$ are pendent vertices with $i\in S_{1, b_1}$ and $j\in S_{2,b_2}$ 6. $i$ a pendent vertex with $i\in S_{1,b_1}$ and $j\in S_{2,b_2}$ with $j$ of degree 2. Using Proposition [\[Prop:dFd with a cut-vertex\]](#Prop:dFd with a cut-vertex){reference-type="ref" reference="Prop:dFd with a cut-vertex"}, one can find that for $a_1\geq 1$ and $a_2\geq 1$, $$\begin{aligned} \kappa\left(S_{1,a_1}\bigoplus\nolimits_{v_1, v_2}S_{2, a_2}\right) =&~ \frac{a_1^2 + 8a_2^2 - \frac{1}{2}a_1 - 5a_2 + 6a_1a_2}{a_1 + 2a_2},\\ \mu\left(S_{1, a_1}\bigoplus\nolimits_{v_1, v_2}S_{2, a_2}, v\right) =&~ a_1 + 4a_2, \end{aligned}$$ where $v$ is the vertex of degree $a_1+a_2$. For the remainder of this proof we follow the notation of Theorem [Theorem 14](#thm:Braess with cut vertex){reference-type="ref" reference="thm:Braess with cut vertex"}. The graphs $G_1$, $\tilde G_1$ and $G_2$ vary from case to case. For cases (i)-(iii), $G_1 = \mathcal{P}_5$ and $G_2 = S_{1, b_1}\bigoplus_{v_1, v_2}S_{2, b_2-2}$, so by Example [\[moment:P_n\]](#moment:P_n){reference-type="ref" reference="moment:P_n"} $$\begin{aligned} \kappa(G_1) = \frac{11}{2}, \;\;\; \mu(G_1, v) = 8. \end{aligned}$$ Case (iv) has $G_1 = \mathcal{P}_3$ and $G_2=S_{1,b_1}\bigoplus\nolimits_{v_1, v_2}S_{2, b_2-1}$, hence $$\begin{aligned} \kappa(G_1) = \frac{3}{2},\;\;\; \mu(G_1, v) = 4.
\end{aligned}$$ For cases (v) and (vi), $G_1 = \mathcal{P}_4$ and $G_2=S_{1,b_1-1}\bigoplus\nolimits_{v_1, v_2}S_{2, b_2-1}$, hence $$\begin{aligned} \kappa(G_1) = \frac{19}{6}, \;\;\; \mu(G_1, v) = 5. \end{aligned}$$ We now work on the individual cases. Values of $\kappa(\tilde G_1)$ and $\mu(\tilde G_1, v)$ can be obtained through [\[eq:k(P\^ij)\]](#eq:k(P^ij)){reference-type="eqref" reference="eq:k(P^ij)"} and Lemma [Lemma 16](#lem:f_pq){reference-type="ref" reference="lem:f_pq"}, respectively. In cases (i)-(iii), assume that $b_2 \geq 2$; otherwise such an edge does not exist. In these cases we have $$\begin{aligned} \mathrm{(i)}& \kappa(\tilde G_1) = 4, \mu(\tilde G_1, v) = 8,& \kappa(\tilde G) - \kappa(G) = -\frac{19b_1 + 30b_2}{2(b_1 + 2b_2)(b_1 + 2b_2 + 1)} < 0,\\ \mathrm{(ii)}&\kappa(\tilde G_1) = \frac{39}{10}, \mu(\tilde G_1, v) = \frac{15}{2},& \kappa(\tilde G) - \kappa(G) = -\frac{b_1^2 + 4 b_1 (4 + b_2) + 4 b_2 (6 + b_2)}{ 2 (b_1 + 2 b_2) (b_1 + 2 b_2 + 1)}<0,\\ \mathrm{(iii)}&\kappa(\tilde G_1) = \frac{59}{15}, \mu(\tilde G_1, v) = \frac{22}{3},& \kappa(\tilde G) - \kappa(G) = -\frac{4 b_1^2 + b_1 (43 + 16 b_2) + 2 b_2 (31 + 8 b_2)}{ 6 (b_1 + 2 b_2) (b_1 + 2 b_2 + 1)} < 0. \end{aligned}$$ For the remaining cases it suffices to assume that $b_2\geq 1$. We have $$\begin{aligned} \mathrm{(iv)}& \kappa(\tilde G_1) = \frac{4}{3},\;\;\;\mu(\tilde G_1, v) = \frac{8}{3}, & \kappa(\tilde G) - \kappa(G) = -\frac{8b_1^2 + 32b_2^2 + 32b_1b_2 - b_1 - 26b_2}{6 (b_1 + 2 b_2) (b_1 + 2 b_2+1)} < 0,\\ \mathrm{(v)}&\kappa(\tilde G_1) = \frac{5}{2},\;\;\; \mu(\tilde G_1, v) = 5, & \kappa(\tilde G) - \kappa(G) = -\frac{4 (b_1 + b_2)}{(b_1 + 2 b_2) (b_1 + 2 b_2 + 1)} < 0,\\ \mathrm{(vi)}&\kappa(\tilde G_1) = \frac{61}{24},\;\;\; \mu(\tilde G_1, v) = 5, & \kappa(\tilde G) - \kappa(G) = -\frac{23 b_1 + 22 b_2}{6 (b_1 + 2 b_2) (b_1 + 2 b_2+1)} < 0. \end{aligned}$$  ◻ # Braess edges on a broom {#sec:broom} Let $\mathcal{P}_k = (1, \hdots, k)$.
A *broom* is the graph $\mathcal{B}_{k, p} = \mathcal{P}_k \bigoplus_{k,v} \mathcal{S}_{1, p}$ with $p\geq 2$, where $v$ is the vertex in $\mathcal{S}_{1,p}$ with degree $p$. See Figure [\[fig:Broom\]](#fig:Broom){reference-type="ref" reference="fig:Broom"} for an example. We let $B_1$ be the number of Braess edges joining two vertices of $\mathcal{S}_{1, p}$ for $\mathcal{B}_{k, p}$, $B_2$ be the number of Braess edges for $\mathcal{B}_{k, p}$ joining a vertex of $\mathcal{P}_k$ and a pendent vertex of $\mathcal{S}_{1, p}$, and $B_3$ be the number of Braess edges on $\mathcal{P}_k$ for $\mathcal{B}_{k, p}$. Then, $B(\mathcal{B}_{k,p}) = B_1+B_2+B_3$. Since any non-edge between twin pendent vertices is a Braess edge by [@ciardo2020braess Theorem 2.2], we have $B_1=\frac{p(p-1)}{2}$. We shall investigate $B_2$ and $B_3$ in Subsections 5.1 and 5.2, respectively, together with information about where the Braess edges occur. Here is the main result of this section. **Theorem 24**. *Let $B_1$, $B_2$ and $B_3$ be defined as above. Then, $$\begin{aligned} B_1 = \frac{p(p-1)}{2}, \;\;\;B_2 = O(k),\;\;\;B_3 = \Theta(k\ln(k)). \end{aligned}$$ This implies that as $k\rightarrow \infty$, we have $$\begin{array}{cll} B(\mathcal{B}_{k,p})\;\; =& \Theta(k \ln (k)+p^2), & \text{if $p/k\rightarrow0$;}\\ B(\mathcal{B}_{k,p})\;\; \sim& \frac{1}{2}p^2, & \text{if $\frac{p}{k}\rightarrow\beta\in (0,\infty].$} \end{array}$$* *Proof.* As we shall see below, whether $p/k$ tends to a finite limit or to infinity, Corollary [Corollary 39](#cor:B2isO(k)){reference-type="ref" reference="cor:B2isO(k)"} tells us that $B_2 = O(k)$, and Theorem [Theorem 42](#thm:B3){reference-type="ref" reference="thm:B3"} provides $B_3 = \Theta(k \ln(k))$. ◻ **Remark 25**.
We see a threshold at $p=\sqrt{k \ln(k)}$: if $p$ grows faster than this rate, then $p^2$ dominates and most of the Braess edges for the broom occur on $\mathcal{S}_{1,p}$; if $p$ grows more slowly than this rate, then $k\ln(k)$ dominates, and most of the Braess edges occur on $\mathcal{P}_k$. ## Braess edges from pendent to path Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"} describes the change in Kemeny's constant in a broom $\mathcal{B}_{k,p}$ after adding an edge from one of the $p$ pendent vertices in $\mathcal{S}_{1, p}$ to one of the vertices in $\mathcal{P}_k$ to create a cycle of length $l$. Denote the resulting graph by $\mathcal{B}_{k,p}^l$. An edge of this sort will go across the 1-separation suggested in the definition of $\mathcal{B}_{k,p}$. In order to still make use of the 1-separation formulas for Kemeny's constant, we note that we can write $\mathcal{B}_{k, p} = \mathcal{P}_{k+1}\bigoplus_{k,v} \mathcal{S}_{1, p-1}$. Thinking of the graph in this way, the edge added to get $\mathcal{B}_{k,p}^l$ will not go across the 1-separation, but will be entirely contained in the $\mathcal{P}_{k+1}$ part. In considering Braess edges of this sort in a broom $\mathcal{B}_{k,p}$, it will be helpful to understand Kemeny's constant for a lollipop graph, $\mathcal{L}_{l, n}$. Here we recall that this is a graph obtained by taking a path $\mathcal{P}_n = (1, \hdots, n)$ and adding the edge $\{n, n-l+1\}$ to create a cycle of length $l$ at one end of the graph. Notice that $\mathcal{B}_{k,p}^l = \mathcal{L}_{l, k+1}\bigoplus_{k, v} \mathcal{S}_{1, p-1}$. **Lemma 26**. *Let $k\geq 3$ and $3\leq l\leq k+1$. Then $$\begin{aligned} \kappa(\mathcal{L}_{l, k+1}) &= \frac{2(k+1)^3 - 4(k+1)l^2 + 3l^3 - (k+1)}{6(k+1)}\\ \mu(\mathcal{L}_{l, k+1}, k) &= \frac{(l+1)(l-1)}{3} + (k - l + 1)^2 + \frac{4(k - l + 1)(l-2)}{l}. \end{aligned}$$* *Proof.* The formula for Kemeny's constant was given in (3.4) of [@kirkland2016kemeny].
The result on the moment comes from using [@faught20221 Lemma 2.3]. ◻ Though Theorem [Theorem 14](#thm:Braess with cut vertex){reference-type="ref" reference="thm:Braess with cut vertex"} already gives an expression for the change in Kemeny's constant, here we give another. The following can be found by using Theorem 2.1 of [@faught20221] together with Examples [\[moment:P_n\]](#moment:P_n){reference-type="ref" reference="moment:P_n"} and [\[moment:S_n\]](#moment:S_n){reference-type="ref" reference="moment:S_n"} and Lemma [Lemma 26](#lem:LollipopValues){reference-type="ref" reference="lem:LollipopValues"} to find $\kappa(\tilde G)$ and $\kappa(G)$, then taking the difference. **Proposition 27**. *Let $k\geq 3$, $p\geq 2$ and $3\leq l\leq k+1$. Then $$\begin{aligned} &\kappa(\mathcal{B}_{k, p}^l) - \kappa(\mathcal{B}_{k,p})\\ =& \left[\frac{k+1}{k+p}\kappa(\mathcal{L}_{l,k+1}) - \frac{k}{k+p-1}\kappa(\mathcal{P}_{k+1})\right] +\left[ \frac{p-1}{k+p}\mu(\mathcal{L}_{l,k+1}, k) - \frac{p-1}{k+p-1}(k^2-2k+2)\right]\\ &\hspace{4em}+ \frac{p-1}{2(k+p)(k+p-1)}. \end{aligned}$$* These next three lemmas will be helpful in ruling out edges that will not be Braess edges. **Lemma 28**. *Let $k\geq 5$ and $p\geq 2$. Then $\kappa(\mathcal{B}_{k, p}^l) - \kappa(\mathcal{B}_{k,p})$ is decreasing in $l$ for $3\leq l \leq (k+1)/2$.* *Proof.* Note that only two terms in Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"} depend on $l$. $\kappa(\mathcal{L}_{l, k+1})$ was observed to be decreasing in $l$ for $3\leq l\leq 8(k+1)/9$ in Example 3.4 of [@kirkland2016kemeny]. 
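Both monotonicity claims in this proof are cheap to probe numerically from the closed forms in Lemma [Lemma 26](#lem:LollipopValues){reference-type="ref" reference="lem:LollipopValues"} before doing any calculus (a sketch; the function names are ours):

```python
def kappa_lollipop(k, l):
    # Kemeny's constant of L_{l, k+1}: the path on k+1 vertices plus an edge
    # closing a cycle of length l at one end (closed form from Lemma 26)
    n = k + 1
    return (2 * n ** 3 - 4 * n * l * l + 3 * l ** 3 - n) / (6 * n)

def mu_lollipop(k, l):
    # moment of L_{l, k+1} at vertex k (closed form from Lemma 26)
    return (l + 1) * (l - 1) / 3 + (k - l + 1) ** 2 + 4 * (k - l + 1) * (l - 2) / l

for k in (5, 10, 50, 200):
    # kappa is decreasing in l for 3 <= l <= 8(k+1)/9
    ks = [kappa_lollipop(k, l) for l in range(3, int(8 * (k + 1) / 9) + 1)]
    assert all(x > y for x, y in zip(ks, ks[1:]))
    # mu(., k) is decreasing in l for 3 <= l <= (k+1)/2
    ms = [mu_lollipop(k, l) for l in range(3, (k + 1) // 2 + 1)]
    assert all(x > y for x, y in zip(ms, ms[1:]))
```

As a spot check of the closed forms themselves, $\mathcal{L}_{3,3}$ is the triangle, for which `kappa_lollipop(2, 3)` gives $\tfrac43$.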
Now we show that $\mu(\mathcal{L}_{l, k+1}, k)$ is decreasing in $l$ on $3\leq l \leq (k+1)/2$. Notice $$\frac{d}{dl}\left[\mu(\mathcal{L}_{l, k+1},k)\right] = \frac{8}{3}l - 2k - 6 + \frac{8(k+1)}{l^2}.$$ Taking the second and third derivative we see $$\begin{aligned} \frac{d^2}{dl^2}\left[\mu(\mathcal{L}_{l, k+1},k)\right] =&\: \frac{8}{3} - \frac{16(k+1)}{l^3}\\ \frac{d^3}{dl^3}\left[\mu(\mathcal{L}_{l, k+1},k)\right] = &\:\frac{48(k+1)}{l^4}. \end{aligned}$$ Note that the third derivative is clearly positive for all positive values of $k$, hence the first derivative is a convex function. Thus if we can find two values of $l$ which give a negative value for the first derivative, then all intermediate values of $l$ will also give a negative value for the first derivative. Plugging in $l=3$ and $l= (k+1)/2$ gives $$\begin{aligned} \left.\frac{d}{dl}\left[\mu(\mathcal{L}_{l, k+1},k)\right]\right\rvert_{l=3} &= \frac{26}{9} - \frac{10k}{9} < 0 &\text{if $k\geq 3$.}\\ \left.\frac{d}{dl}\left[\mu(\mathcal{L}_{l, k+1},k)\right]\right\rvert_{l=(k+1)/2} &= \frac{-2(k^2 + 8k - 41)}{3(k+1)} < 0 &\text{if $k\geq 4$.} \end{aligned}$$ Thus we have the desired conclusion. ◻ **Lemma 29**. *Let $p\geq 2$ and $k\geq 5$. Then $$\mu(\mathcal{L}_{l, k+1}, k) - (k^2 - 2k + 2) < 0.$$* *Proof.* We use some of the calculations from the proof of Lemma [Lemma 28](#lem:KemDifferenceDecreasingInl){reference-type="ref" reference="lem:KemDifferenceDecreasingInl"}. For a given $k\ge5$, let $q(l) = \mu(\mathcal{L}_{l, k+1}, k) - (k^2 - 2k + 2)$. Since $\left.\frac{d}{dl}q(l)\right\rvert_{l=3}<0$ and $\frac{d}{dl}q(l)$ is a convex function, the maximum of $q$ on $[3,k+1]$ is attained at an endpoint, so it suffices to check both $q(3)<0$ and $q(k+1)<0$. It is straightforward to verify that $$\begin{aligned} \mu(\mathcal{L}_{3, k+1}, k) - (k^2 - 2k + 2) &= 2 - \frac{2k}{3} < 0, \\ \mu(\mathcal{L}_{k+1, k+1}, k) - (k^2 - 2k + 2) &= -\frac{2}{3}(k-1)(k-3) < 0.
\qedhere \end{aligned}$$ ◻ Note that Lemma [Lemma 29](#lem:2ndTermNegative){reference-type="ref" reference="lem:2ndTermNegative"} implies that the second term in square brackets in Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"} is strictly negative. **Lemma 30**. *Let $k\geq5$. If the edge added into $\mathcal{B}_{k,p}$ to get $\mathcal{B}_{k,p}^l$ is not a Braess edge for $\mathcal{P}_{k+1}$, then it is not a Braess edge for $\mathcal{B}_{k,p}$.* *Proof.* The expression in the second square bracket in Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"} is always negative as was shown in Lemma [Lemma 29](#lem:2ndTermNegative){reference-type="ref" reference="lem:2ndTermNegative"}. Further, we see that the maximum is attained at $l=3$ or at $l = k + 1$. We examine these two cases. Let $l = 3$. This gives $$\left[ \frac{p-1}{k+p}\mu(\mathcal{L}_{3, k+1}, k) - \frac{p-1}{k+p-1}(k^2-2k+2)\right] = -\frac{(p-1)(5k^2 + 2k(p-7) - 6(p-2))}{3(k+p)(k+p-1)}.$$ Now under the assumption that the edge is not a Braess edge, we have $\kappa(\mathcal{L}_{l, k+1}) \leq \kappa(\mathcal{P}_{k+1}) = (2k^2 + 1)/6$. Then $$\begin{aligned} \frac{k+1}{k+p}\kappa(\mathcal{L}_{l, k+1}) - \frac{k}{k+p-1}\kappa(\mathcal{P}_{k+1}) &\leq \frac{2k^2+1}{6}\left(\frac{k+1}{k+p} - \frac{k}{k+p-1}\right)\\ &= \frac{(2k^2 + 1)(p-1)}{6(k+p)(k+p-1)}. \end{aligned}$$ Simplifying the expression in Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"} gives $$\begin{aligned} &\kappa(\mathcal{B}_{k,p}^3) - \kappa(\mathcal{B}_{k,p})\\ &\leq \frac{(2k^2+1)(p-1)}{6(k+p)(k+p-1)} - \frac{(p-1)(5k^2+2k(p-7)-6(p-2))}{3(k+p)(k+p-1)} +\frac{p-1}{2(k+p)(k+p-1)}\\ &= -\frac{2(p-1)}{3(k+p)(k+p-1)}[2k^2 + k(p-7) - 3p + 5]. \end{aligned}$$ This is negative if $p\geq 2$. 
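As a quick sanity check (an aside, not part of the original argument), the algebraic simplification in the $l=3$ case above can be verified with exact rational arithmetic; the two helper functions below (names are mine) simply transcribe the displayed formulas.

```python
from fractions import Fraction as F

def lhs(k, p):
    # the three displayed terms bounding kappa(B_{k,p}^3) - kappa(B_{k,p})
    return (F((2*k**2 + 1)*(p - 1), 6*(k + p)*(k + p - 1))
            - F((p - 1)*(5*k**2 + 2*k*(p - 7) - 6*(p - 2)), 3*(k + p)*(k + p - 1))
            + F(p - 1, 2*(k + p)*(k + p - 1)))

def rhs(k, p):
    # the claimed simplification
    return F(-2*(p - 1)*(2*k**2 + k*(p - 7) - 3*p + 5), 3*(k + p)*(k + p - 1))

for k in range(5, 40):
    for p in range(2, 40):
        assert lhs(k, p) == rhs(k, p)
        assert rhs(k, p) < 0  # negative for p >= 2 and k >= 5, as claimed
```

The loop is only a spot check over a grid; the underlying equality is an exact polynomial identity in $k$ and $p$.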
Now suppose that $l = k+1$. Similar work as above gives $$\begin{aligned} \kappa(\mathcal{B}_{k,p}^{k+1}) - \kappa(\mathcal{B}_{k,p}) &\leq -\frac{2(k-1)(p-1)(k^2+k(p-3)-3p+1)}{3(k+p)(k+p-1)}. \end{aligned}$$ This is easily seen to be negative if $p\geq 2$. Therefore, if a non-edge is not Braess in $\mathcal{P}_{k+1}$ then it is not Braess in $\mathcal{B}_{k,p}$. ◻ The following proposition is the formula in Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"}, expanded out. We include this version as well, since some insights are more readily seen from the expression in this form than from Proposition [Proposition 27](#prop:KemDifferenceLollipopBroom){reference-type="ref" reference="prop:KemDifferenceLollipopBroom"}. **Proposition 31**. *Let $k\geq 3$, $p\geq 2$ and $3\leq l\leq k+1$. Then $$\begin{aligned} &\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p})\\ =&\left(\frac{1}{6l(k+p-1)(k+p)}\right)\bigg(4k^3l - 4k^2[l^3 + 3l^2(p-1) - l(12p-11)+12(p-1) ]\\ &\quad+ k[3l^4 + 4l^3(p-2) - 12l^2(p^2 + p-2) + 16l(3p^2-p-2) - 48(p-1)p]\\ &\quad+ (l-2)(p-1)[3l^3 + 24(p-1) + 2l^2(4p-3)-4l(5p-6)]\bigg). \end{aligned}$$* Looking at Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"}, it is clear that for fixed $l$ and $p$, if the path is long enough ($k$ is large enough) then the added edge is Braess. While we do not pin down exactly how large $k$ must be to ensure an edge is Braess, the next few results partially answer that question. In fact, Proposition [Proposition 35](#prop:KHowBigBraessPendantToPath){reference-type="ref" reference="prop:KHowBigBraessPendantToPath"} will give a sufficient size of $k$ for the edge to be Braess. **Lemma 32**. *Let $k\geq 6$ and $p\geq 2$.
If $l = \sqrt{k} + 1$, then $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p}) < 0$.* *Proof.* As the denominator in Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"} is positive, we see that $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p}) < 0$ if and only if the numerator is negative. Then putting $l = \sqrt{k} + 1$ in the numerator gives that $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p}) < 0$ if and only if $$\begin{aligned} -(\sqrt{k}+1)[k^{5/2}(12p-7) - k^2(16p-21) + k^{3/2}(12p^2 - 7p - 6)&\\ - k(20p^2 - 29p + 10) + \sqrt{k}(4p^2 - 25p + 21) - 3(4p^2 - 5p + 1)] &< 0. \end{aligned}$$ This is true if and only if the expression in the square bracket is positive. Analyzing the $k^{5/2}$ and $k^2$ terms together, they can be seen to be positive for $k\geq 4.$ The $\sqrt{k}$ term is positive if $p\geq 6$. Looking at the $k^{3/2}, k,$ and $k^0$ terms together these are also seen to be positive if $k\geq 4$. This is seen by replacing the constant 3 in the $k^0$ term with $k$. Checking the $2\leq p\leq 5$ cases one finds that the expression in the square bracket is positive if $k\geq 6$. The conclusion follows. ◻ **Proposition 33**. *Let $p\geq 2$ and $k\geq 6$. Suppose we add an edge to $\mathcal{B}_{k,p}$ to obtain $\mathcal{B}_{k,p}^l$. If $l \geq \sqrt{k} + 1$ then the added edge is not Braess. This implies that $B_2 = O(p\sqrt{k})$ as $k\rightarrow \infty$.* *Proof.* By Lemma [Lemma 32](#lem:sqrtK+1NotBraess){reference-type="ref" reference="lem:sqrtK+1NotBraess"} $l = \sqrt{k} + 1$ does not give a Braess edge. By Lemma [Lemma 28](#lem:KemDifferenceDecreasingInl){reference-type="ref" reference="lem:KemDifferenceDecreasingInl"} there can be no Braess edges of this sort for $l\in[\sqrt{k}+1, (k+1)/2]$. 
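The sign analysis in the proof of Lemma 32 can also be spot-checked numerically; the function below (the name is mine) transcribes the bracketed expression from the displayed inequality, whose positivity is exactly what was argued above.

```python
import math

def bracket(k, p):
    # the expression in square brackets from the proof of Lemma 32
    r = math.sqrt(k)
    return (k**2*r*(12*p - 7) - k**2*(16*p - 21) + k*r*(12*p**2 - 7*p - 6)
            - k*(20*p**2 - 29*p + 10) + r*(4*p**2 - 25*p + 21)
            - 3*(4*p**2 - 5*p + 1))

for k in range(6, 300):
    for p in range(2, 60):
        assert bracket(k, p) > 0  # hence l = sqrt(k) + 1 never gives a Braess edge
```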
Further, by Theorem [Theorem 7](#thm:nonedge crossing the center){reference-type="ref" reference="thm:nonedge crossing the center"} combined with Lemma [Lemma 30](#lem:notBraessPathGivesNotBraessBroom){reference-type="ref" reference="lem:notBraessPathGivesNotBraessBroom"} there are no Braess edges for $l\geq (k+1)/2$. ◻ **Corollary 34**. *Let $p\geq 2$ and $k\geq 6$. If $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p})<0$ for some $l\geq 3$, then $\kappa(\mathcal{B}_{k,p}^s) - \kappa(\mathcal{B}_{k,p})<0$ for all $l\leq s\leq k+1$.* *Proof.* We note that $\sqrt{k}+1<\frac{k+1}{2}$ for $k\geq 6$. If $l\geq \sqrt{k}+1$ then the result follows from Proposition [Proposition 33](#prop:NotBraessInBroom){reference-type="ref" reference="prop:NotBraessInBroom"}. If $l< \sqrt{k}+1$ then the conclusion follows from Lemma [Lemma 28](#lem:KemDifferenceDecreasingInl){reference-type="ref" reference="lem:KemDifferenceDecreasingInl"} and Proposition [Proposition 33](#prop:NotBraessInBroom){reference-type="ref" reference="prop:NotBraessInBroom"}. ◻ **Proposition 35**. *Let $l\geq 3$ and $p\geq 2$. Suppose we add an edge to $\mathcal{B}_{k,p}$ to obtain $\mathcal{B}_{k,p}^l$. If $k \geq (0.007p + 1)l^2+3(p-1)l - 7(p-1)$, then the edge added is Braess.* *Proof.* As seen in Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"}, the numerator of $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p})$ is a cubic in $k$, so beyond some point it is increasing in $k$. We first find that point, then show that the bound on $k$ in the statement of this proposition lies beyond it, and finally show that $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p}) > 0$ at that bound. This will show that if $k\geq K_1 = (0.007p + 1)l^2+3(p-1)l - 7(p-1)$ then the edge added will be Braess.
Using Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"}, we take the derivative with respect to $k$ of the numerator. This has roots $$\label{eq:kIncreasingPastHere} k = g_1(l,p) \pm \frac{\sqrt{g_2(l, p)}}{24l},$$ where $$\begin{aligned} g_1(l, p) &~= \frac{1}{3}l^2 + l(p-1) - \frac{1}{3}(12p-11) + \frac{4(p-1)}{l},\\ g_2(l, p) &~= 64(l^3 + 3l^2(p-1) - l(12p-11) + 12(p-1))^2\\ &\quad - 48l(3l^4 + 4l^3(p-2) - 12l^2(p^2 + p - 2) + 16l(3p^2 - p - 2) - 48(p-1)p). \end{aligned}$$ Let $K_2$ be the right hand side of equation ([\[eq:kIncreasingPastHere\]](#eq:kIncreasingPastHere){reference-type="ref" reference="eq:kIncreasingPastHere"}) with the positive square root. It can be shown that $g_2(l,p)\geq 0$ for all $p\geq 2$ and $l\geq 3$, hence $K_2$ is a real number. We want to show that $K_1 - K_2 \geq 0$. Straightforward algebra gives this is true if and only if $$\begin{aligned} &(24l)^2(K_1-g_1(l,p))^2-g_2(l,p)\\ =&~\frac{3}{15625}( 7p + 1000)(21p + 1000)l^6 + \frac{144}{125}(14p^2 + 986p - 875)l^5 \\ &+ \frac{48}{125}( 4437p^2 - 10430p + 6500)l^4 - \frac{2304}{125}(158p + 125)(p - 1)l^3 \\ &-576(p - 1)(27p - 29)l^2 + 2304(p - 1)(13p - 14)l\geq 0. \end{aligned}$$ Looking at the $l^6, l^4,$ and $l^2$ terms together it can be seen that these three terms are positive if $l\geq 4$ and $p\geq 2$. Looking at $l^5$ and $l^3$ terms together they are seen to be positive if $l \geq 14$ and $p\geq 2$. The $l$ term is positive provided $p\geq 2$. Checking when $l=3, 4, \hdots, 13$ it is seen that the entire expression is positive if $p\geq 2$ and $l\geq 3$. Therefore $K_1 - K_2\geq 0$ for $p\geq 2$ and $l\geq 3$. Now as $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p})$ is increasing in $k$ for $k\geq K_1$, we show that this difference is indeed positive at $k = K_1$. 
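Before the final step, the inequality $K_1\geq K_2$ (and the nonnegativity of $g_2$) can be spot-checked numerically; the code below (helper names are mine) transcribes the displayed formulas for $g_1$, $g_2$ and $K_1$.

```python
import math

def g1(l, p):
    return l*l/3 + l*(p - 1) - (12*p - 11)/3 + 4*(p - 1)/l

def g2(l, p):
    return (64*(l**3 + 3*l*l*(p - 1) - l*(12*p - 11) + 12*(p - 1))**2
            - 48*l*(3*l**4 + 4*l**3*(p - 2) - 12*l*l*(p*p + p - 2)
                    + 16*l*(3*p*p - p - 2) - 48*(p - 1)*p))

def K1(l, p):
    return (0.007*p + 1)*l*l + 3*(p - 1)*l - 7*(p - 1)

def K2(l, p):
    return g1(l, p) + math.sqrt(g2(l, p))/(24*l)

for l in range(3, 60):
    for p in range(2, 60):
        assert g2(l, p) >= 0          # K2 is a real number
        assert K1(l, p) >= K2(l, p)   # the stated bound lies past the turning point
```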
Plugging $K_1$ in for $k$ and then doing work similar to what was done just above shows that $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p})\geq 0$ at $k = K_1$ and we have the result. ◻ Recall that $B_2$ denotes the number of Braess edges for $\mathcal{B}_{k,p}$ connecting a pendent vertex in $S_{1,p}$ to a vertex in $\mathcal{P}_k$. Now we present several results regarding $B_2$. **Theorem 36**. *If $p\geq k\geq 3$ then $B_2 =0$.* *Proof.* We shall first show that $\kappa(\mathcal{B}_{k, p}^l) - \kappa(\mathcal{B}_{k,p})<0$ for $p\geq k\geq 6$. From Corollary [Corollary 34](#Cor:l implies s for no Braess){reference-type="ref" reference="Cor:l implies s for no Braess"}, it suffices to show that $\kappa(\mathcal{B}_{k,p}^3) - \kappa(\mathcal{B}_{k,p}) < 0$. Now consider $$\kappa(\mathcal{B}_{k,p}^3) - \kappa(\mathcal{B}_{k,p}) = \frac{-4(k-3 )p^2 - (4k^2-13)p + 4k^3 - 28k^2 + 49k - 25}{6(k + p)(k + p - 1)}.$$ Given $k\geq 6$, the maximum of the numerator over $p\geq k$ is attained at $p=k$, where it equals $-4k^3 - 16k^2 + 62k - 25<0$. Hence, $B_2 = 0$ for $p\geq k\geq 6$. It remains to show that $\kappa(\mathcal{B}_{k,p}^l) - \kappa(\mathcal{B}_{k,p}) < 0$ for $3\leq k\leq 5$ and $p\geq k$. Using the expression in Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"}, computing for each pair $(l,k)$ with $3\leq k\leq 5$ and $3\leq l\leq k+1$, one can establish the desired result. ◻ **Theorem 37**. *Fix a real number $\beta>0$. Assume $\frac{p}{k}\rightarrow \beta$ as $k\rightarrow \infty$. If $\beta \geq \frac{-1 + \sqrt{5}}{2}$, then $B_2 = 0$ for all sufficiently large $k$. If $0 < \beta < \frac{-1 + \sqrt{5}}{2}$, then $B_2 = \Theta(p) = \Theta(k)$ as $k\to\infty$.* *Proof.* We assume that $\frac{p}{k}\to\beta$.
Showing that the edge added to $\mathcal{B}_{k,p}$ to obtain $\mathcal{B}_{k,p}^l$ is Braess is equivalent to showing that the numerator in Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"} is positive. First, we prove the following claim: There is a constant $C$ (depending on $\beta$) such that, for all sufficiently large $k$, no edge with $l>C$ is Braess. Assume that the claim is false. Then there is a sequence $l=l(k)$ such that $l(k)\to\infty$ and the edge added to $\mathcal{B}_{k,p}$ to obtain $\mathcal{B}_{k,p}^{l(k)}$ is Braess for $\mathcal{B}_{k,p}$ (for infinitely many values of $k$). By Proposition [Proposition 33](#prop:NotBraessInBroom){reference-type="ref" reference="prop:NotBraessInBroom"}, we have $l<\sqrt{k}+1$. We then see that the dominant terms in the numerator of Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"} are of order $k^3l^2$, specifically $-12k^2l^2p-12kl^2p^2$. Since these terms are negative, and all the other terms are $o(k^3l^2)$, we have a contradiction. This proves the claim, and shows that $B_2=O(k)$. Given the above claim, we can now assume $l=O(1)$. Using $p = \beta k+o(k)$ and simplifying the numerator in Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"}, we get $$\label{eq.kcubednumerator} k^3 (-48 \beta - 48 \beta^2 + 4 l + 48 \beta l + 48 \beta^2 l - 12 \beta l^2 - 12 \beta^2 l^2) + o(k^3).$$ If for a given $\beta$ and $l$ the $k^3$ coefficient is negative, then in the limit as $k\to\infty$ we know that the numerator must be negative, hence an edge with that given $l$ is not a Braess edge.
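The displayed $k^3$ coefficient can be examined directly: collecting in powers of $\beta$ it equals $4l - 12(l-2)^2(\beta+\beta^2)$, so for $l=3$ it is $12(1-\beta-\beta^2)$, which vanishes at $\beta = \frac{-1+\sqrt{5}}{2}$. A small numerical check (an aside, not part of the proof; the function name is mine):

```python
import math

def c3(beta, l):
    # coefficient of k^3 in the displayed expansion of the numerator
    return (-48*beta - 48*beta**2 + 4*l + 48*beta*l + 48*beta**2*l
            - 12*beta*l**2 - 12*beta**2*l**2)

phi = (-1 + math.sqrt(5)) / 2
assert abs(c3(phi, 3)) < 1e-9   # beta = (-1 + sqrt(5))/2 is the critical ratio for l = 3
assert c3(0.5, 3) > 0           # below the critical ratio: a Braess edge for l = 3
assert c3(0.7, 3) < 0           # above the critical ratio: no Braess edge for l = 3
assert all(abs(c3(b, l) - (4*l - 12*(l - 2)**2*(b + b*b))) < 1e-9
           for l in range(3, 30) for b in (0.1, 0.5, 1.0, 2.0))
```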
Further, by the monotonicity argument of Corollary [Corollary 34](#Cor:l implies s for no Braess){reference-type="ref" reference="Cor:l implies s for no Braess"}, no edge with $\hat l \geq l$ will be Braess. Thus, in particular, if $l = 3$ and this coefficient is negative, then we have $B_2 = 0$ for all sufficiently large $k$. Solving for the roots of this coefficient as a quadratic in $\beta$ we see that this coefficient is negative if $\beta$ is greater than the larger of the two roots, hence if $$\beta > -\frac{1}{2} + \frac{\sqrt{3(3l^2 - 8l + 12)}}{6(l-2)}.$$ If $l = 3$ then the value of the right hand side is $\frac{-1 + \sqrt{5}}{2}\approx 0.618$. Hence if $\beta > \frac{-1 + \sqrt{5}}{2}$ then $B_2 = 0$ for all sufficiently large $k$. One can check that if $\beta = \frac{-1 + \sqrt{5}}{2}$ then $B_2 = 0$ for all sufficiently large $k$ in this case too. Finally, if $0 < \beta < \frac{-1 + \sqrt{5}}{2}$, then Equation ([\[eq.kcubednumerator\]](#eq.kcubednumerator){reference-type="ref" reference="eq.kcubednumerator"}) is positive for $l=3$ as $k\rightarrow\infty$, showing that $B_2\geq p$, and hence that $B_2=\Theta(p)=\Theta(k)$ (since our earlier claim implies that $B_2\leq Cp$). ◻ **Theorem 38**. *Let $n = p+k$ be the number of vertices in $\mathcal{B}_{k,p}$. If $\frac{p}{k}\to 0$ as $n\to\infty$, then $B_2 = O(k)$.* *Proof.* From Proposition [Proposition 33](#prop:NotBraessInBroom){reference-type="ref" reference="prop:NotBraessInBroom"}, we have $B_2 = O(p\sqrt{k})$ as $k\to \infty$. If $p$ is fixed, then $B_2 = O(\sqrt{k})$. We assume that $\frac{p}{k}\to0$ as $k,p\to \infty$. To determine if an edge is Braess, it will suffice to show that the numerator in Proposition [Proposition 31](#prop:KemDifferenceLollipopBroomExpanded){reference-type="ref" reference="prop:KemDifferenceLollipopBroomExpanded"}, which we will denote as $F(k, p, l)$, is positive or negative. We will find an upper bound on $B_2$ by showing that as $n \to\infty$, if $l\geq k/p$, the edge is not Braess.
We claim that if $p\geq 4$ then $F(k,p,l)<0$ for $k\geq 5p$ and $l\geq k/p$. Let $H(k, p) = p^4F(k, p, k/p)$. We have $$\begin{aligned} H(k, p) =&\: k^5(3 - 4p) - k^4(p-1)(8p^2 - 8p - 3) + 4k^3p(9p^3-12p^2+p+3)\\ &- 4k^2p^2(p-9)(p-1) - 8kp^3(p-1)(6p^2-8p+9) - 48p^4(p-1)^2. \end{aligned}$$ Let $c_i$ denote the coefficient of $k^i$. Notice that $c_5, c_4, c_1, c_0 < 0$ if $p\geq 2$. Further $c_3 > 0$ if $p\geq 2$. Regardless of the sign of $c_2$, Descartes' rule of signs implies that there are at most 2 positive roots of $H(k, p)$. We now use this fact to show that $H(k, p) < 0$ for $k\geq 5p$. We can find that for $p\geq 4$, $$\begin{aligned} H(p, p) =& -p^4(20p^3 - 24p^2 - 2p + 3)<0,\\ H(3p, p) =& 15p^4(12p^3 - 48p^2 + 32p - 5)>0,\\ H(5p, p) =& -p^4(740p^3 + 8088p^2 - 7166p + 963)<0. \end{aligned}$$ It follows that $H(k, p) < 0$ for $k\geq 5p$. By the monotonicity argument of Corollary [Corollary 34](#Cor:l implies s for no Braess){reference-type="ref" reference="Cor:l implies s for no Braess"}, if $p\geq 4$ then $F(k,p,l)<0$ for $k\geq 5p$ and $l\geq k/p$. Therefore, if $p/k\to 0$ as $k,p\to\infty$ then $B_2 < p\cdot\frac{k}{p} = k$ and hence $B_2 = O(k)$. ◻ We summarize the previous three results in the following corollary. **Corollary 39**. *Let $n = p+k$ be the number of vertices in $\mathcal{B}_{k,p}$. Then $B_2 = O(k)$ as $n\to\infty.$*

## Braess edges on the path in a broom

Let $\mathcal{P}_k = (1, \hdots, k)$. Consider the broom $\mathcal{B}_{k, p} = \mathcal{P}_k \bigoplus_{k,v} S_{1, p}$ with $k\geq 4$ and $p\geq 2$. Let $1\leq i<j\leq k$ with $j-i\geq 2$. We use $\mathcal{B}_{k,p}^{\{ i,j \}}$ to denote the graph obtained from $\mathcal{B}_{k,p}$ by adding the edge $\{ i,j \}$ into $\mathcal{B}_{k,p}$. **Proposition 40**. *Let $k\geq 4$ and $p\geq 2$. Let $1\leq i<j\leq k$.
Then, $$\begin{aligned} &\kappa(\mathcal{B}_{k,p}^{\{ i,j \}})-\kappa(\mathcal{B}_{k,p})\\ =&~\kappa(\mathcal P_{k}^{\{ i,j \}}) - \kappa\left(\mathcal P_{k}\right)+\frac{p}{k+p}\left(\mu(\mathcal{P}_k^{\{ i,j \}}, k)-\kappa(\mathcal P_{k}^{\{ i,j \}})-\frac{1}{2}\right)-\frac{p}{k+p-1}\left(\frac{2}{3}(k-1)^2-\frac{2}{3}\right). \end{aligned}$$* *Proof.* From [\[moment:S_n\]](#moment:S_n){reference-type="eqref" reference="moment:S_n"}, $\mu(S_{1, p}, k)-\kappa(S_{1, p})=\frac{1}{2}$. The result then follows directly from Theorem [Theorem 14](#thm:Braess with cut vertex){reference-type="ref" reference="thm:Braess with cut vertex"}. ◻ Let $l = j-i+1$ and $s = i-1$. It follows from Lemma [Lemma 16](#lem:f_pq){reference-type="ref" reference="lem:f_pq"} that $$\begin{aligned} \mathbf{d}_{\mathcal P_{k}^{\{ i,j \}}}^TF_{\mathcal P_{k}^{\{ i,j \}}}\mathbf e_k=lk^2 - 2(l^2 -l +1)s - \frac{1}{3}(2l^3+l).\end{aligned}$$ Hence, $$\begin{aligned} \label{eq: mu Pk} \mu(\mathcal{P}_k^{\{ i,j \}}, k) =k^2 - 2\left(l -1 +\frac{1}{l}\right)s - \frac{1}{3}(2l^2+1).\end{aligned}$$ Recalling [\[eq:k(P\^ij)\]](#eq:k(P^ij)){reference-type="eqref" reference="eq:k(P^ij)"}, we have $$\begin{aligned} \label{eq: k Pij} \kappa(\mathcal P_{k}^{\{ i,j \}})=& \frac{1}{6kl}( 12(l^2-l+1)s^2 + 12(l^3 - l^2k - l^2 + lk + l - k)s \\ \notag &+ l(3l^3 -4l^2k +2k^3 -k)).\end{aligned}$$ Let $$\kappa(\mathcal{B}_{k,p}^{\{ i,j \}})-\kappa(\mathcal{B}_{k,p}) = \frac{D(s,l,k,p)}{6l(k+p-1)(k+p)}.$$ Then $D(s,l,k,p)>0$ if and only if $\kappa(\mathcal{B}_{k,p}^{\{ i,j \}})-\kappa(\mathcal{B}_{k,p})>0$.
Simplifying $D(s,l,k,p)$ using Example [\[moment:P_n\]](#moment:P_n){reference-type="ref" reference="moment:P_n"}, Proposition [Proposition 40](#prop: kemeny B-B){reference-type="ref" reference="prop: kemeny B-B"}, [\[eq: mu Pk\]](#eq: mu Pk){reference-type="eqref" reference="eq: mu Pk"} and [\[eq: k Pij\]](#eq: k Pij){reference-type="eqref" reference="eq: k Pij"} and collecting it in terms of $s$ yields $$\begin{aligned} D(s,l,k,p)=& ~12(k + p-1)(l^2 - l + 1)s^2 - 12(k+ p-1)(l^2 - l + 1)(k + p - l )s \\ &~+4l(-l^2 + 3k -2)p^2 + l(12k^2 - 8kl^2 -16k + 3l^3 + 4l^2 + 8)p \\ &~+l(k-1)(4k^2 - 4kl^2 - 4k + 3l^3).\end{aligned}$$ One can verify that the two roots are given by $$\begin{aligned} \label{s_11} s_1(l,k,p)= \frac{k+p-l}{2}-\sqrt{C(l,k,p)}\;\;\text{and}\;\; s_2(l,k,p)=\frac{k+p-l}{2}+\sqrt{C(l,k,p)},\end{aligned}$$ where $$\begin{gathered} \label{eqn:c(l,k,p)} C(l,k,p) =\frac{1}{4}(k+p)^2-\frac{lkp}{l^2-l+1}-\frac{l}{2}(k+p)+\frac{l^3-l}{3(l^2-l+1)}(k+p)\\ +\frac{lp}{l^2-l+1}-\frac{l^3-l^2}{4(l^2-l+1)}-\frac{k}{(k+p-1)}\frac{(k^2-3k+2)l}{3(l^2-l+1)}.\end{gathered}$$ Hence, $\{ i,j \}$ is a Braess edge for $\mathcal{B}_{k,p}$ if and only if $$\begin{aligned} \label{range of s} 0\leq s<s_1(l,k,p)\;\;\text{or}\;\;s_2(l,k,p)<s\leq k-l+1.\end{aligned}$$ We now consider where a Braess edge forms, for fixed $l$, as $n = k+p$ tends to infinity. Setting $\beta = \frac{p}{k}$, we can find that as $k\rightarrow \infty$, $$\begin{aligned} \frac{1}{k}\left(\frac{k+p-l}{2}\pm\sqrt{C(l,k,p)}\right)\rightarrow \frac{1+\beta}{2}\pm\sqrt{\frac{(1+\beta)^2}{4}-\frac{l\beta}{l^2-l+1}-\frac{l}{3(1+\beta)(l^2-l+1)}}.\end{aligned}$$
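As a cross-check of the displayed formulas (an aside, transcribing them verbatim; helper names are mine), the radicand in the roots of the quadratic $D$ in $s$ must coincide with $C(l,k,p)$, which exact rational arithmetic confirms on a grid.

```python
from fractions import Fraction as F

def D_const(l, k, p):
    # the s-free part of the displayed quadratic D(s, l, k, p)
    return (4*l*(-l**2 + 3*k - 2)*p**2
            + l*(12*k**2 - 8*k*l**2 - 16*k + 3*l**3 + 4*l**2 + 8)*p
            + l*(k - 1)*(4*k**2 - 4*k*l**2 - 4*k + 3*l**3))

def C(l, k, p):
    q = l*l - l + 1
    return (F((k + p)**2, 4) - F(l*k*p, q) - F(l*(k + p), 2)
            + F((l**3 - l)*(k + p), 3*q) + F(l*p, q) - F(l**3 - l**2, 4*q)
            - F(k, k + p - 1)*F((k**2 - 3*k + 2)*l, 3*q))

# s_{1,2} = (k+p-l)/2 -+ sqrt(radicand) are the roots of
# a*s^2 - a*(k+p-l)*s + D_const(l,k,p), where a = 12(k+p-1)(l^2-l+1)
for l in range(3, 10):
    for k in range(l + 1, 25):
        for p in range(2, 10):
            a = 12*(k + p - 1)*(l*l - l + 1)
            radicand = F((k + p - l)**2, 4) - F(D_const(l, k, p), a)
            assert radicand == C(l, k, p)
```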
It follows that $$\frac{s_1(l,k,p)}{k}\rightarrow\begin{cases*} \frac{l}{l^2-l+1}, & \text{if $\beta\rightarrow\infty$ as $n\rightarrow\infty$;}\\ \frac{1}{2}-\sqrt{\frac{1}{4}-\frac{l}{3(l^2-l+1)}}, & \text{if $\beta\rightarrow 0$ as $n\rightarrow\infty$.} \end{cases*}$$ We also have $$\frac{s_2(l,k,p)}{k}\rightarrow\begin{cases*} \infty, & \text{if $\beta\rightarrow\infty$ as $n\rightarrow\infty$;}\\ \frac{1}{2}+\sqrt{\frac{1}{4}-\frac{l}{3(l^2-l+1)}}, & \text{if $\beta\rightarrow 0$ as $n\rightarrow\infty$.} \end{cases*}$$ Therefore, if $k$ dominates over $p$, then the location in which Braess edges form is asymptotically the same as that on $\mathcal{P}_{k}$ for sufficiently large $k$; and if $p$ dominates over $k$, then for sufficiently large $p$, edges added where $s<\frac{l}{l^2-l+1}k$ (close to the single pendent vertex) tend to be Braess, while edges added where $s>\frac{l}{l^2-l+1}k$ (on the side opposite the single pendent vertex) do not. We now investigate asymptotic behaviours of $B_3$. **Lemma 41**. *Let $s_1(l,k,p)$ be the expression defined in [\[s_11\]](#s_11){reference-type="eqref" reference="s_11"}. There exists $L_k>0$ depending on $k$ between $\sqrt{k}$ and $\sqrt{3k}$ such that $s_1(l,k,p)>0$ for $3\leq l < L_k$, and $s_1(l,k,p)\le 0$ for $L_k \leq l\leq k$.
Moreover, we have $$\sum_{l=3}^{\lceil L_k\rceil-1} \lceil s_1(l,k,p)\rceil\leq B_3\leq 2\sum_{l=3}^{\lceil L_k\rceil-1} \lceil s_1(l,k,p)\rceil.$$* *Proof.* For fixed $k$ and $p$ and each integer $l\ge 3$, the number of Braess edges on $\mathcal{P}_k$ for $\mathcal{B}_{k,p}$ with this value of $l$ is equal to the number of integers $s$ satisfying [\[range of s\]](#range of s){reference-type="eqref" reference="range of s"}, that is, $$\label{eq:count (45)} \max\{0,\lceil s_1(l,k,p) \rceil \}+ \max\{0,(k-l+1 - \lfloor s_2(l,k,p)\rfloor )\}.$$ Since $(k+p-l) -s_2(l,k,p) = s_1(l,k,p)$ by definition, we have $(k-l+1)-s_2(l,k,p)\leq s_1(l,k,p)$, so the inequalities $$\label{eq:ineq for s1,s2} 0 \le \max\{0,(k-l+1 - \lfloor s_2(l,k,p)\rfloor )\} \le \max\{0,\lceil s_1(l,k,p)\rceil\}$$ hold. Then using [\[eq:count (45)\]](#eq:count (45)){reference-type="eqref" reference="eq:count (45)"} and [\[eq:ineq for s1,s2\]](#eq:ineq for s1,s2){reference-type="eqref" reference="eq:ineq for s1,s2"}, we have $$\sum_{l\in L} \lceil s_1(l,k,p)\rceil\leq B_3\leq 2\sum_{l\in L} \lceil s_1(l,k,p)\rceil,$$ where $L$ is the set of integers $l$ such that $s_1(l,k,p) >0$ for fixed $k$ and $p$. Note that $s_1(l,k,p)> 0$ if and only if $(k+p-l)^2-4C(l,k,p)> 0$. One can compute that $$(k+p-l)^2-4C(l,k,p) = ~\frac{N(l,k,p)}{3(k + p - 1)(l^2-l+1)},$$ where $$\begin{gathered} N(l,k,p) = 3(k + p - 1)l^3 - 4(k+p-1)(k+p)l^2 +4(3k - 2)p^2 + (12k^2 - 16k + 8)p + 4k(k-1)^2. \end{gathered}$$ So, $s_1(l,k,p)> 0$ if and only if $N(l,k,p)>0$. We can find that for $k\geq 4$, $$\begin{aligned} &N(\sqrt{k},k,p) = 3k^{5/2} + 4(p - 1)k^2 + 3(p-1)k^{\frac{3}{2}} + 4(2p-1)(p-1)k -8p(p-1) >0,\\ &N(\sqrt{3k},k,p) = - 8k^3 + 9\sqrt{3}k^{\frac{5}{2}} - 4(3p-1)k^2 +9\sqrt{3}(p-1)k^{\frac{3}{2}} - 4(p-1)k - 8p(p-1) <0,\\ &N(k,k,p) = -k^4 - 5(p-1)k^3 -4(p^2 -4p +2)k^2 + 4(3p-1)(p-1)k - 8p(p-1)<0. \end{aligned}$$ We see that $N(l,k,p)$ is a cubic polynomial in $l$, and $N(0,k,p)>0$.
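The first two evaluations of $N$ above can be spot-checked against the displayed cubic (a numerical aside; the helper names are mine):

```python
import math

def N(l, k, p):
    # the displayed cubic N(l, k, p)
    return (3*(k + p - 1)*l**3 - 4*(k + p - 1)*(k + p)*l**2
            + 4*(3*k - 2)*p**2 + (12*k**2 - 16*k + 8)*p + 4*k*(k - 1)**2)

def N_at_sqrt_k(k, p):
    # claimed closed form of N(sqrt(k), k, p)
    r = math.sqrt(k)
    return (3*k**2*r + 4*(p - 1)*k**2 + 3*(p - 1)*k*r
            + 4*(2*p - 1)*(p - 1)*k - 8*p*(p - 1))

def N_at_sqrt_3k(k, p):
    # claimed closed form of N(sqrt(3k), k, p)
    r, s3 = math.sqrt(k), math.sqrt(3)
    return (-8*k**3 + 9*s3*k**2*r - 4*(3*p - 1)*k**2
            + 9*s3*(p - 1)*k*r - 4*(p - 1)*k - 8*p*(p - 1))

for k in range(4, 60):
    for p in range(2, 60):
        scale = max(1.0, abs(N_at_sqrt_3k(k, p)), abs(N_at_sqrt_k(k, p)))
        assert abs(N(math.sqrt(k), k, p) - N_at_sqrt_k(k, p)) < 1e-7*scale
        assert abs(N(math.sqrt(3*k), k, p) - N_at_sqrt_3k(k, p)) < 1e-7*scale
        assert N_at_sqrt_k(k, p) > 0 and N_at_sqrt_3k(k, p) < 0  # sign change locates L_k
```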
Moreover, the coefficients of $l^3,l^2$, and $l$ are positive, negative, and $0$, respectively. It follows that given $k\geq 4$, there exists $L_k>0$ between $\sqrt{k}$ and $\sqrt{3k}$ such that $N(l,k,p)>0$ for $3\leq l < L_k$; and $N(l,k,p) \le 0$ for $L_k\leq l\leq k$. (See Figure [\[fig:cubicPoly\]](#fig:cubicPoly){reference-type="ref" reference="fig:cubicPoly"} for the illustration of $N(l,k,p)$ with respect to $l$.) This completes the proof. [\[fig:cubicPoly\]]{#fig:cubicPoly label="fig:cubicPoly"} ◻ From [\[eqn:c(l,k,p)\]](#eqn:c(l,k,p)){reference-type="eqref" reference="eqn:c(l,k,p)"}, we can recast $C(l,k,p)$ as follows: $$\begin{aligned} C(l,k,p) = \frac{(k+p)^2}{4}\left(H(l)+G(l)\right),\end{aligned}$$ where $$\begin{aligned} H(l) = &~1-\frac{kp}{(k+p)^2}\frac{4l}{l^2-l+1}-\frac{k}{k+p-1}\frac{k^2}{(k+p)^2}\frac{4l}{3(l^2-l+1)},\\ G(l) =&~ \left(-2l+\frac{4l(l^2-1)}{3(l^2-l+1)}+\frac{4l}{l^2-l+1}\frac{p}{k+p}+\frac{k}{k+p-1}\frac{4l}{l^2-l+1}\frac{k}{k+p}\right)\frac{1}{k+p}\\ &~+\left(-\frac{l(l^2-l)}{l^2-l+1}-\frac{k}{k+p-1}\frac{8l}{3(l^2-l+1)}\right)\frac{1}{(k+p)^2}.\end{aligned}$$ Then we see that $$\frac{s_1(l,k,p)}{k+p} \;=\; \frac{1}{2}\left( 1-\frac{l}{k+p} - \sqrt{H(l) + G(l)} \right).$$ **Theorem 42**. *Suppose that $\frac{p}{k}\rightarrow \beta \in [0,\infty]$ as $k\rightarrow\infty$. Then $$B_3 = \Theta(k \ln (k)).$$* *Proof.* Let $s_1(l,k,p)$ be the expression defined in [\[s_11\]](#s_11){reference-type="eqref" reference="s_11"}, and let $$\begin{aligned} s_1^*(k) \;=\; \sum_{l=3}^{\lceil L_k\rceil-1} \lceil s_1(l,k,p) \rceil \, , \end{aligned}$$ where $L_k$ is the positive number between $\sqrt{k}$ and $\sqrt{3k}$ defined in Lemma [Lemma 41](#lem:beta_0){reference-type="ref" reference="lem:beta_0"}. 
We write $$\begin{aligned} \label{s_1} \frac{s_1^*(k)}{k+p} \;=\; \frac{1}{2}\left(P_0(k)+P_1(k)+P_2(k)\right), \end{aligned}$$ where $$\begin{aligned} P_0(k) \; & = \; \frac{1}{k+p} \sum_{l=3}^{\lceil L_k\rceil-1} \left( \lceil s_1(l,k,p) \rceil - s_1(l,k,p) \right), \\ P_1(k) \;& = \; \sum_{l=3}^{\lceil L_k\rceil-1} \left( 1-\sqrt{H(l)} \right), \\ P_2(k) \; & = \; \sum_{l=3}^{\lceil L_k\rceil-1} \left( -\frac{l}{k+p} +\sqrt{H(l)}-\sqrt{H(l)+G(l) }\right)\,. \end{aligned}$$ We first show that $P_0(k)+P_2(k)$ is $o(\ln k)$. As done in the proof of Proposition [Proposition 10](#prop-asymp){reference-type="ref" reference="prop-asymp"}, we have $P_0(k)=O(\sqrt{k}/k)$. For $P_2(k)$, we use the identity ([\[eq.diffsquares\]](#eq.diffsquares){reference-type="ref" reference="eq.diffsquares"}) to obtain (for some constant $D$) $$\begin{aligned} \left| \sqrt{H(l)}-\sqrt{H(l)+G(l) } \right| \; =\; \frac{|G(l)|}{ \sqrt{H(l)}+\sqrt{H(l)+G(l) } }\;\leq \;\frac{|G(l)|}{\sqrt{H(l)}} \;\leq \; \frac{ 1 }{\sqrt{H(l)}}\frac{Dl}{(k+p)} \,. \end{aligned}$$ Since $\lim_{l\rightarrow \infty}H(l)=1$ and $l\leq L_k \leq \sqrt{3k}$, it follows that $P_2(k)$ is bounded by a constant. Now we shall examine $P_1(k)$. Let $$H_1(l) =1 - \frac{(k+p)^2}{(k+p)^2}\frac{4l}{l^2-l+1}-\frac{k+p-1}{k+p-1}\frac{(k+p)^2}{(k+p)^2}\frac{4l}{3(l^2-l+1)}.$$ Since $1 - H(l)<1-H_1(l)$, we see that $$\begin{aligned} P_1(k)\leq \sum_{l=3}^{\lceil L_k\rceil-1} \left( 1-\sqrt{H_1(l)} \right).
\end{aligned}$$ Using the identity $$\label{eq.diffsquares} \sqrt{x}-\sqrt{y} \;=\; \frac{x-y}{\sqrt{x}+\sqrt{y}} \,,$$ we observe that $$\lim_{l\rightarrow\infty}\frac{1-\sqrt{H_1(l)}}{1/l} \;=\; \lim_{l\rightarrow\infty}\frac{l(1-H_1(l))}{1+\sqrt{H_1(l)}} \; = \; \frac{8}{3}.$$ Now we use an elementary exercise in analysis: if $\{a_{l}\}$ and $\{b_{l}\}$ are two nonnegative real sequences such that $\lim_{l\rightarrow\infty}a_{l}/b_{l}=A>0$, and if $\sum_{l=1}^{\infty}b_{l}$ diverges, then $$\lim_{n\rightarrow\infty}\frac{ \sum_{l=1}^{n}a_{l} }{ \sum_{l=1}^{n}b_{l}} \;=\; A\,.$$ Since $\sum_{l=1}^{n}1/l\,\sim \,\ln n$ and $\sqrt{k} < L_k < \sqrt{3k}$, it follows that $$P_1(k)=O(\ln (k)).$$ Suppose $k\geq \beta p$ for some $\beta>0$. Then $$1 - H(l)\;> \;\frac{k}{k+p}\frac{k^2}{(k+p)^2}\frac{4l}{3(l^2-l+1)}\geq \left(\frac{\beta}{1+\beta}\right)^3\frac{4l}{3(l^2-l+1)}.$$ As done for obtaining $O(\ln(k))$, it follows that $$P_1(k)=\Omega(\ln k).$$ Finally, consider the case for $p>k$. We see that $$1-\sqrt{H(l)} = \frac{1-H(l)}{1+\sqrt{H(l)}} > \frac{1-H(l)}{2}\;>\; \frac{p}{k+p}\frac{k}{k+p}\frac{2l}{l^2-l+1}\;>\; \frac{1}{2}\frac{k}{k+p}\frac{2}{l}=\frac{k}{k+p}\frac{1}{l}.$$ Hence, $$P_1(k) = \Omega(k\ln(k)/(k+p)).$$ From [\[s_1\]](#s_1){reference-type="eqref" reference="s_1"}, $$s_1^*(k) = \Omega(k \ln (k)).$$ The theorem follows. ◻ H. J. Ahn and B. Hassibi. On the mixing time of the SIS Markov chain model for epidemic spread. In *53rd IEEE Conference on Decision and Control*, pages 6221--6227. IEEE, 2014. D. Braess, A. Nagurney, and T. Wakolbinger. On a paradox of traffic planning. , 39(4):446--450, 2005. L. Ciardo. The Braess' paradox for pendent twins. , 590:304--316, 2020. L. Ciardo, G. Dahl, and S. Kirkland. On Kemeny's constant for trees with fixed order and diameter. , 70(12):2331--2353, 2022. C. Ding and S. Song. Traffic paradoxes and economic solutions. , 1(1):63--76, 2012. E. Dudkina, M. Bin, J. Breen, E. Crisostomi, P. Ferraro, S. Kirkland, J.
Marecek, R. Murray-Smith, T. Parisini, L. Stone, et al. On node ranking in graphs. , 2021. N. Faught, M. Kempton, and A. Knudson. A 1-separation formula for the graph Kemeny constant and Braess edges. , 60(1):49--69, 2022. F. Fouss, M. Saerens, and M. Shimbo. . Cambridge University Press, 2016. J. N. Hagstrom and R. A. Abrams. Characterizing Braess's paradox for traffic networks. In *ITSC 2001. 2001 IEEE Intelligent Transportation Systems. Proceedings (Cat. No. 01TH8585)*, pages 836--841. IEEE, 2001. Y. Hu and S. Kirkland. Complete multipartite graphs and Braess edges. , 579:284--301, 2019. J. Kemeny and J. Snell. . Springer-Verlag, New York-Heidelberg, 1976. S. Kim. Families of graphs with twin pendent paths and the Braess edge. , pages 9--31, 2022. S. Kim, J. Breen, E. Dudkina, F. Poloni, and E. Crisostomi. On the effectiveness of random walks for modeling epidemics on networks. , 18(1):e0280277, 2023. S. Kirkland, Y. Li, J. McAlister, and X. Zhang. Edge addition and the change in Kemeny's constant. , 2023. S. Kirkland and Z. Zeng. Kemeny's constant and an analogue of Braess' paradox for trees. , 31:444--464, 2016. M. Levene and G. Loizou. Kemeny's constant and the random surfer. , 109(8):741--745, 2002. R. Patel, P. Agharkar, and F. Bullo. Robotic surveillance and Markov chains with minimal weighted Kemeny constant. , 60(12):3156--3167, 2015. A. Rapoport, T. Kugler, S. Dugar, and E. J. Gisches. Choice of routes in congested traffic networks: Experimental tests of the Braess paradox. , 65(2):538--571, 2009. N. A. Ruhi and B. Hassibi. SIRS epidemics on complex networks: Concurrence of exact Markov chain and approximated models. In *2015 54th IEEE Conference on Decision and Control (CDC)*, pages 2919--2926. IEEE, 2015. P. Van Mieghem, J. Omic, and R. Kooij. Virus spread in networks. , 17(1):1--14, 2008. S. Yilmaz, E. Dudkina, M. Bin, E. Crisostomi, P. Ferraro, R. Murray-Smith, T. Parisini, L. Stone, and R. Shorten. Kemeny-based testing for COVID-19. 
, 15(11):e0242401, 2020. [^1]: Contact: kimswim\@yorku.ca
--- abstract: | We study aspects of noncommutative Riemannian geometry of the path algebra arising from the Kronecker quiver with $N$ arrows. To start with, the framework of derivation based differential calculi is recalled together with a discussion on metrics and bimodule connections compatible with the $\ast$-structure of the algebra. As an illustration, these concepts are applied to the noncommutative torus where examples of torsion free and metric (Levi-Civita) connections are given. In the main part of the paper, noncommutative geometric aspects of (generalized) Kronecker algebras are considered. The structure of derivations and differential calculi is explored, and torsion free bimodule connections are studied together with their compatibility with hermitian forms, playing the role of metrics on the module of differential forms. Moreover, for several different choices of Lie algebras of derivations, non-trivial Levi-Civita connections are constructed. address: | Dept. of Math.\ Linköping University\ 581 83 Linköping\ Sweden author: - Joakim Arnlind bibliography: - references.bib title: | Noncommutative Riemannian geometry\ of Kronecker algebras --- # Introduction Over the last 15 years, a lot of progress has been made in understanding the Riemannian aspects of noncommutative geometry from several different perspectives. For instance, the curvature of the metric, identified from the heat kernel expansion of the Dirac operator, has been computed [@fk:scalarCurvature; @cm:modularCurvature] and related to the Gauss-Bonnet theorem [@ct:gaussBonnet; @fk:gaussBonnet]. From a more (Hopf) algebraic point of view, metrics and connections on bimodules (see e.g. [@ac:ncgravitysolutions] [@as:noncommutative.connections.twists] [@bm:Quantum.Riemannian.geometry] [@ail:lc.quantum.spheres]) have been introduced and studied from different points of view.
A fundamental aspect of classical Riemannian geometry is the existence of a unique metric and torsion free connection -- the Levi-Civita connection. In noncommutative geometry, the definitions of metric compatibility and torsion freeness depend on the context, and the existence and uniqueness of the Levi-Civita connection has been studied from many different perspectives (see e.g. [@bm:starCompatibleConnections; @r:leviCivita; @ps:on.nc.lc.connections; @al:projections.nc.cylinder; @w:braided.cartan.calculi; @a:levi-civita.class.nms]). For instance, starting from a derivation based approach to noncommutative differential geometry (see e.g. [@dv:calculDifferentiel; @dvm:connections.central.bimodules]) the framework of real metric calculi was developed in [@aw:curvature.three.sphere] (and further developed in [@atn:minimal.embeddings.morphisms]) and a Gauss-Bonnet theorem was proven for the noncommutative 4-sphere [@aw:cgb.sphere]. In this context, one can prove a general uniqueness theorem for torsion free and metric connections. On the other hand, if some of these conditions, such as the reality conditions of a real metric calculus, are abandoned, one expects there to be many torsion free and metric connections (see e.g. [@a:levi-civita.class.nms]). Concrete and illustrative examples have traditionally been of great use in noncommutative geometry; both as a means to understand abstract concepts and as an inspiration for new results. In the present paper, we study differential calculi and Levi-Civita connections of the path algebra of generalized Kronecker quivers for different choices of Lie algebras of derivations. These calculi turn out to be one-dimensional, in the sense that there are no differential forms of order greater than one; a result which is independent of the choice of Lie algebra of derivations. 
Despite the low-dimensionality of the calculus, the structure of the module of 1-forms is not trivial and we show that one may construct many interesting examples of Levi-Civita connections. Section [2](#sec:der.calculus){reference-type="ref" reference="sec:der.calculus"} recalls the basic concepts of derivation based differential calculus in order to fix the notation and terminology for our purposes. As an illustration, these concepts are applied to the noncommutative torus in Section [3](#sec:nc.torus){reference-type="ref" reference="sec:nc.torus"} where left module connections, as well as bimodule connections, are constructed on the module of differential forms. In Section [4](#sec:kronecker.alg){reference-type="ref" reference="sec:kronecker.alg"} we study the path algebra of the generalized Kronecker quiver, and derive properties of the algebra and its differential calculi. Section [5](#sec:LC.Omegaoneg){reference-type="ref" reference="sec:LC.Omegaoneg"} is devoted to the construction of Levi-Civita connections on the module of 1-forms for different choices of Lie algebras of derivations. # Derivation based differential calculus {#sec:der.calculus} Let us recall the construction of noncommutative differential forms in a derivation based calculus. Let $\mathcal{A}$ be a unital associative $\ast$-algebra over $\mathbb{C}$, and let $\operatorname{Der}(\mathcal{A})$ denote the set of derivations of $\mathcal{A}$. We shall consider $\operatorname{Der}(\mathcal{A})$ to be both a left and right $Z(\mathcal{A})$-module, where $Z(\mathcal{A})$ denotes the center of $\mathcal{A}$, in the standard way; i.e. $$\begin{aligned} &(z\cdot \partial)(a)=z\partial(a)\\ &(\partial\cdot z)(a)=\partial(a)z=z\partial(a)=(z\cdot \partial)(a)\end{aligned}$$ for $z\in Z(\mathcal{A})$, $a\in\mathcal{A}$ and $\partial\in\operatorname{Der}(\mathcal{A})$. 
For any Lie algebra $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{A})$ we shall in the following assume that $\mathfrak{g}$ is also a $Z(\mathcal{A})$-submodule of $\operatorname{Der}(\mathcal{A})$. Given $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{A})$, one defines $\bar{\Omega}^{k}_{\mathfrak{g}}$ to be the set of $Z(\mathcal{A})$-multilinear alternating maps $$\begin{aligned} \omega:\underbrace{\mathfrak{g}\times\cdots\times\mathfrak{g}}_k\to\mathcal{A},\end{aligned}$$ and sets $$\begin{aligned} \bar{\Omega}^{}_{\mathfrak{g}} = \bigoplus_{k\geq 0}\bar{\Omega}^{k}_{\mathfrak{g}}\end{aligned}$$ with $\bar{\Omega}^{0}_{\mathfrak{g}}=\mathcal{A}$. Furthermore, for $\omega\in\bar{\Omega}^{k}_{\mathfrak{g}}$ and $\tau\in\bar{\Omega}^{l}_{\mathfrak{g}}$ one defines $\omega\tau\in\bar{\Omega}^{k+l}_{\mathfrak{g}}$ as $$\begin{aligned} (\omega\tau)(\partial_1,\ldots,\partial_{k+l}) = \frac{1}{k!l!}\sum_{\sigma\in S_{k+l}}\operatorname{sgn}(\sigma)\omega(\partial_{\sigma(1)},\ldots,\partial_{\sigma(k)}) \tau(\partial_{\sigma(k+1)},\ldots,\partial_{\sigma(k+l)}),\end{aligned}$$ where $S_N$ denotes the symmetric group on $N$ letters, and introduce $d_k:\bar{\Omega}^{k}_{\mathfrak{g}}\to\bar{\Omega}^{k+1}_{\mathfrak{g}}$ as $d_0a(\partial_0)=\partial_0(a)$ for $a\in\bar{\Omega}^{0}_{\mathfrak{g}}=\mathcal{A}$ and for $\omega\in\bar{\Omega}^{k}_{\mathfrak{g}}$ with $k\geq 1$ $$\begin{aligned} d_k\omega(\partial_0,\ldots,\partial_{k}) = &\sum_{i=0}^k(-1)^i\partial_i\big(\omega(\partial_0,\ldots,\hat{\partial}_{i},\ldots,\partial_k)\big)\\ &\qquad+\sum_{0\leq i<j\leq k}(-1)^{i+j}\omega\big([\partial_i,\partial_j],\partial_0,\ldots,\hat{\partial}_i,\ldots,\hat{\partial}_j,\ldots,\partial_k\big),\end{aligned}$$ satisfying $d_{k+1}d_{k}=0$, where $\hat{\partial}_i$ denotes the omission of $\partial_i$ in the argument. When there is no risk of confusion, we shall omit the index $k$ and simply write $d:\bar{\Omega}^{k}_{\mathfrak{g}}\to\bar{\Omega}^{k+1}_{\mathfrak{g}}$. 
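As a sanity check on these formulas, the identity $d_{k+1}d_k=0$ can be verified symbolically in the simplest commutative model, where $\mathcal{A}$ is an algebra of smooth functions and the derivations commute, so that the bracket term in $d_k$ drops out. The sketch below (illustrative names only, not from the paper) checks $d_1d_0=0$ with sympy:

```python
# A commutative sketch: A = smooth functions of x, y, with the commuting
# derivations ∂_x, ∂_y, so [∂_x, ∂_y] = 0 and the bracket term in the
# formula for d_1 vanishes.
import sympy as sp

x, y = sp.symbols('x y')
a = sp.Function('a')(x, y)      # a generic element of A

dx = lambda f: sp.diff(f, x)    # the derivation ∂_x
dy = lambda f: sp.diff(f, y)    # the derivation ∂_y

# d_0 a is the 1-form ∂ ↦ ∂(a)
da = lambda d: d(a)

# d_1 ω(∂_0, ∂_1) = ∂_0(ω(∂_1)) − ∂_1(ω(∂_0)), applied to ω = d_0 a
d1_da = dx(da(dy)) - dy(da(dx))

assert sp.simplify(d1_da) == 0  # d_1 d_0 = 0
```

In the genuinely noncommutative setting the bracket term is essential; the check above only illustrates the mechanism in the commutative limit.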
For instance, if $\omega\in\bar{\Omega}^{1}_{\mathfrak{g}}$ then $$\begin{aligned} d\omega(\partial_0,\partial_1) = \partial_0\omega(\partial_1)-\partial_1\omega(\partial_0)-\omega([\partial_0,\partial_1]). \end{aligned}$$ Moreover, we note that the graded product rule is satisfied $$\begin{aligned} d(\omega\eta) = (d\omega)\eta + (-1)^k\omega d\eta \end{aligned}$$ for $\omega\in\bar{\Omega}^{k}_{\mathfrak{g}}$ and $\eta\in\bar{\Omega}^{l}_{\mathfrak{g}}$. One can endow $\bar{\Omega}^{}_{\mathfrak{g}}$ with the structure of an $\mathcal{A}$-bimodule by setting $$\begin{aligned} &(a\omega)(\partial_1,\ldots,\partial_k) = a\omega(\partial_1,\ldots,\partial_k)\\ &(\omega a)(\partial_1,\ldots,\partial_k) = \omega(\partial_1,\ldots,\partial_k)a\end{aligned}$$ for $a\in\mathcal{A}$, and a $\ast$-structure may be introduced as $$\begin{aligned} \omega^\ast(\partial_1,\ldots,\partial_k) = \omega(\partial_1^\ast,\ldots,\partial_k^\ast)^\ast\end{aligned}$$ satisfying $(a\omega b)^\ast = b^\ast\omega^\ast a^\ast$, making $\bar{\Omega}^{}_{\mathfrak{g}}$ into a $\ast$-bimodule with $$\begin{aligned} (\omega\eta)^\ast = (-1)^{kl}\eta^\ast\omega^\ast\end{aligned}$$ for $\omega\in\bar{\Omega}^{k}_{\mathfrak{g}}$ and $\eta\in\bar{\Omega}^{l}_{\mathfrak{g}}$. We say that $\bar{\Omega}^{}_{\mathfrak{g}}$ is a differential calculus over $\mathcal{A}$. The differential calculus is called connected if $da=0$ implies that $a=\lambda\mathds{1}$ for some $\lambda\in\mathbb{C}$. The cohomology of a differential calculus is introduced in the standard way $$\begin{aligned} H^n(\bar{\Omega}^{}_{\mathfrak{g}}) = \frac{\ker d_n}{\operatorname{im}d_{n-1}}.\end{aligned}$$ Let us now recall the concept of a connection on a left or right $\mathcal{A}$-module. **Definition 1**. Let $M$ be a left $\mathcal{A}$-module. 
A *left connection on $M$* is a map $\nabla:\mathfrak{g}\times M\to M$ such that $$\begin{aligned} &\nabla_{\partial}\big(m+m'\big) = \nabla_{\partial}m + \nabla_{\partial}m'\\ &\nabla_{\partial+\partial'}m = \nabla_{\partial}m + \nabla_{\partial'}m\\ &\nabla_{z\cdot \partial}m = z\nabla_{\partial}m\\ &\nabla_{\partial}(am) = a\nabla_{\partial}m + (\partial a)m \end{aligned}$$ for $m,m'\in M$, $\partial,\partial'\in\mathfrak{g}$, $a\in\mathcal{A}$ and $z\in Z(\mathcal{A})$. **Definition 2**. Let $M$ be a right $\mathcal{A}$-module. A *right connection on $M$* is a map $\nabla:\mathfrak{g}\times M\to M$ such that $$\begin{aligned} &\nabla_{\partial}\big(m+m'\big) = \nabla_{\partial}m + \nabla_{\partial}m'\\ &\nabla_{\partial+\partial'}m = \nabla_{\partial}m + \nabla_{\partial'}m\\ &\nabla_{\partial\cdot z}m = (\nabla_{\partial}m)z\\ &\nabla_{\partial}(ma) = (\nabla_{\partial}m)a + m(\partial a) \end{aligned}$$ for $m,m'\in M$, $\partial,\partial'\in\mathfrak{g}$, $a\in\mathcal{A}$ and $z\in Z(\mathcal{A})$. As in differential geometry, the curvature $R$ of a connection $\nabla$ is defined as $$\begin{aligned} R(\partial_1,\partial_2)m = \nabla_{\partial_1}\nabla_{\partial_2}m-\nabla_{\partial_2}\nabla_{\partial_1}m -\nabla_{[\partial_1,\partial_2]}m.\end{aligned}$$ Now, assume that $\nabla$ is both a left and right connection on the $\mathcal{A}$-bimodule $M$. In particular, one has $$\begin{aligned} z\nabla_{\partial}m = \nabla_{z\cdot \partial}m = \nabla_{\partial\cdot z}m = (\nabla_{\partial}m)z\end{aligned}$$ for all $m\in M$, $\partial\in\mathfrak{g}$ and $z\in Z(\mathcal{A})$. Hence, a natural setting for bimodule connections is in the context of central bimodules. **Definition 3**. A *central $\mathcal{A}$-bimodule* is an $\mathcal{A}$-bimodule $M$ such that $$\begin{aligned} zm = mz \end{aligned}$$ for all $m\in M$ and $z\in Z(\mathcal{A})$. 
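To see the curvature formula at work, here is a minimal commutative sketch (all names illustrative, not from the paper): take $M=\mathcal{A}=C^\infty(\mathbb{R}^2)$ as a free rank-one module, $\nabla_{\partial_i}m=\partial_im+G_im$ for arbitrary connection coefficients $G_i$, and coordinate derivations with $[\partial_1,\partial_2]=0$:

```python
# Commutative sketch: curvature of ∇_{∂_i} m = ∂_i m + G_i·m on a free
# rank-one module. The bracket term in R vanishes since [∂_1, ∂_2] = 0.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
m = sp.Function('m')(x1, x2)    # a module element
a = sp.Function('a')(x1, x2)    # an algebra element
G1 = sp.Function('G1')(x1, x2)  # connection coefficients
G2 = sp.Function('G2')(x1, x2)

def nabla(i, f):
    xi, Gi = ((x1, G1), (x2, G2))[i - 1]
    return sp.diff(f, xi) + Gi * f

def R(f):                       # R(∂_1, ∂_2) applied to f
    return sp.expand(nabla(1, nabla(2, f)) - nabla(2, nabla(1, f)))

# R is A-linear (a zeroth-order operator), acting as ∂_1 G_2 − ∂_2 G_1:
assert sp.simplify(R(a * m) - a * R(m)) == 0
assert sp.simplify(R(m) - (sp.diff(G2, x1) - sp.diff(G1, x2)) * m) == 0
```

The second assertion shows that, in this model, the derivative terms cancel and the curvature acts by multiplication, mirroring the tensoriality of curvature in differential geometry.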
Note that it follows that $\bar{\Omega}^{}_{\mathfrak{g}}$ is a central bimodule since $$\begin{aligned} (z\omega)(\partial_1,\ldots,\partial_k) = z\omega(\partial_1,\ldots,\partial_k) = \omega(\partial_1,\ldots,\partial_k)z = (\omega z)(\partial_1,\ldots,\partial_k)\end{aligned}$$ for $z\in Z(\mathcal{A})$. **Definition 4**. Let $M$ be a central $\mathcal{A}$-bimodule. If $\nabla$ is both a left and a right connection on $M$ then $\nabla$ is called a *bimodule connection on $M$*. In case the module $M$ is a $\ast$-bimodule, one can ask that the connection is compatible with the involution in the following sense. **Definition 5**. Let $\nabla:\mathfrak{g}\times M\to M$ be a left connection on the $\ast$-bimodule $M$. The connection is called a left $\ast$-connection if $$\big(\nabla_\partial m\big)^\ast = \nabla_{\partial^\ast}m^\ast$$ for all $\partial\in\mathfrak{g}$ and $m\in M$. **Proposition 6**. *If $\nabla$ is a left $\ast$-connection on a central $\ast$-bimodule $M$ then $\nabla$ is a bimodule connection on $M$.* *Proof.* Since $\nabla$ is a left $\ast$-connection and $M$ is a $\ast$-bimodule one has $$\begin{aligned} \nabla_\partial(ma) &= \big(\nabla_{\partial^\ast}a^\ast m^\ast\big)^\ast =\big(a^\ast\nabla_{\partial^\ast}m^\ast+(\partial^\ast a^\ast)m^\ast\big)^\ast\\ &=\big(\nabla_{\partial^\ast}m^\ast\big)^\ast a + m(\partial^\ast a^\ast)^\ast = (\nabla_{\partial}m)a + m(\partial a) \end{aligned}$$ for $\partial\in\mathfrak{g}$, $m\in M$ and $a\in\mathcal{A}$, showing that $\nabla$ is also a right connection on $M$. ◻ A left $\ast$-connection on a central $\ast$-bimodule will be called a $\ast$-bimodule connection. The above proposition shows that bimodule connections arise naturally on $\ast$-bimodules when requiring the connection to be compatible with the $\ast$-structure. In noncommutative geometry, a finitely generated projective module $M$ corresponds to a vector bundle and metrics on vector bundles can be introduced as hermitian forms on $M$. 
Let us recall the definition. **Definition 7**. Let $M$ be a left $\mathcal{A}$-module. A *left hermitian form on $M$* is a map $h:M\times M\to\mathcal{A}$ such that $$\begin{aligned} &h(m_1+m_2,m_3) = h(m_1,m_3)+h(m_2,m_3)\\ &h(am_1,m_2) = ah(m_1,m_2)\\ &h(m_1,m_2)^\ast = h(m_2,m_1) \end{aligned}$$ for all $m_1,m_2,m_3\in M$ and $a\in\mathcal{A}$. Similarly, a right hermitian form on a right $\mathcal{A}$-module $M$ is a bilinear map satisfying $$\begin{aligned} h(m_1,m_2a) = h(m_1,m_2)a\end{aligned}$$ together with $h(m_1,m_2)^\ast=h(m_2,m_1)$ for $m_1,m_2\in M$ and $a\in\mathcal{A}$. In the case of $\ast$-bimodules, one can combine the concepts of left and right hermitian forms into $\ast$-bimodule forms. **Definition 8**. Let $M$ be a $\ast$-bimodule. A $\ast$-bimodule form on $M$ is a $\mathbb{C}$-bilinear map $g:M\times M\to\mathcal{A}$ such that $$\begin{aligned} &g(m_1,m_2)^\ast = g(m_2^\ast,m_1^\ast)\label{eq:star.bimodule.form.starprop}\\ &g(am_1,m_2) = ag(m_1,m_2)\label{eq:star.bimodule.form.leftlinear} \end{aligned}$$ for all $m_1,m_2\in M$ and $a\in\mathcal{A}$. Note that it follows from [\[eq:star.bimodule.form.starprop\]](#eq:star.bimodule.form.starprop){reference-type="eqref" reference="eq:star.bimodule.form.starprop"} and [\[eq:star.bimodule.form.leftlinear\]](#eq:star.bimodule.form.leftlinear){reference-type="eqref" reference="eq:star.bimodule.form.leftlinear"} that $$\begin{aligned} g(m_1,m_2 a) = g(m_1,m_2)a\end{aligned}$$ for all $m_1,m_2\in M$ and $a\in\mathcal{A}$. Given a $\ast$-bimodule form $g$ on $M$ we note that $h_L(m_1,m_2)=g(m_1,m_2^\ast)$ is a left hermitian form on $M$ and $h_R(m_1,m_2)=g(m_1^\ast,m_2)$ is a right hermitian form on $M$. Conversely, given a left hermitian form $h$ on $M$, one obtains a $\ast$-bimodule form $g(m_1,m_2)=h(m_1,m_2^\ast)$ as well as a right hermitian form $h_R(m_1,m_2)=h(m_1^\ast,m_2^\ast)$. As in Riemannian geometry, one is interested in connections that preserve the metric. 
Hence, one makes the following definitions. **Definition 9**. Let $M$ be a left $\mathcal{A}$-module. A left connection $\nabla$ on $M$ is compatible with a left hermitian form $h$ if $$\begin{aligned} &\partial h(m_1,m_2) = h(\nabla_\partial m_1,m_2) + h(m_1,\nabla_{\partial^\ast}m_2) \end{aligned}$$ for all $m_1,m_2\in M$ and $\partial\in\mathfrak{g}$. **Definition 10**. Let $M$ be a right $\mathcal{A}$-module. A right connection $\nabla$ on $M$ is compatible with a right hermitian form $h$ if $$\begin{aligned} &\partial h(m_1,m_2) = h(\nabla_{\partial^\ast} m_1,m_2) + h(m_1,\nabla_{\partial}m_2) \end{aligned}$$ for all $m_1,m_2\in M$ and $\partial\in\mathfrak{g}$. **Definition 11**. Let $M$ be a $\ast$-bimodule and let $g$ be a $\ast$-bimodule form on $M$. A left (right) connection $\nabla:\mathfrak{g}\times M\to M$ is compatible with $g$ if $$\begin{aligned} \partial g(m_1,m_2) = g(\nabla_\partial m_1,m_2) + g(m_1,\nabla_{\partial}m_2) \end{aligned}$$ for all $\partial\in\mathfrak{g}$ and $m_1,m_2\in M$. For $\ast$-bimodule connections, the above concepts of compatibility are equivalent in the following sense. **Proposition 12**. *Let $M$ be a $\ast$-bimodule and let $g$ be a $\ast$-bimodule form on $M$ and set $h_L(m_1,m_2) = g(m_1,m_2^\ast)$ and $h_R(m_1,m_2) = g(m_1^\ast,m_2)$. If $\nabla:\mathfrak{g}\times M\to M$ is a $\ast$-bimodule connection compatible with $g$ then $$\begin{aligned} &\partial h_L(m_1,m_2) = h_L(\nabla_\partial m_1,m_2) + h_L(m_1,\nabla_{\partial^\ast}m_2)\\ &\partial h_R(m_1,m_2) = h_R(\nabla_{\partial^\ast} m_1,m_2) + h_R(m_1,\nabla_{\partial}m_2) \end{aligned}$$ for all $\partial\in\mathfrak{g}$ and $m_1,m_2\in M$. 
Conversely, if $\nabla$ is a $\ast$-bimodule connection compatible with a left hermitian form $h_L$ then $\nabla$ is compatible with the right hermitian form $h_R(m_1,m_2)=h_L(m_1^\ast,m_2^\ast)$ as well as the $\ast$-bimodule form $g(m_1,m_2)=h_L(m_1,m_2^\ast)$.* *Proof.* Assume that $\nabla$ is a $\ast$-bimodule connection compatible with the $\ast$-bimodule form $g$. One easily checks that $$\begin{aligned} h_L(\nabla_\partial m_1,m_2) &+ h_L(m_1,\nabla_{\partial^\ast}m_2) = g(\nabla_{\partial}m_1,m_2^\ast)+g\big(m_1,(\nabla_{\partial^\ast}m_2)^\ast\big)\\ &= g(\nabla_{\partial}m_1,m_2^\ast) + g(m_1,\nabla_{\partial}m_2^\ast) = \partial g(m_1,m_2^\ast) = \partial h_L(m_1,m_2), \end{aligned}$$ using that $\nabla$ is a $\ast$-connection compatible with $g$. The proof that $\nabla$ is compatible with $h_R$ is analogous. Now, assume that $\nabla$ is a $\ast$-bimodule connection compatible with the left hermitian form $h_L$. Then $$\begin{aligned} h_R(\nabla_{\partial^\ast}m_1,m_2) &+ h_R(m_1,\nabla_{\partial}m_2) = h_L\big((\nabla_{\partial^\ast}m_1)^\ast,m_2^\ast\big) +h_L\big(m_1^\ast,(\nabla_{\partial}m_2)^\ast\big)\\ &= h_L(\nabla_{\partial}m_1^\ast,m_2^\ast) + h_L(m_1^\ast,\nabla_{\partial^\ast}m_2^\ast) = \partial h_L(m_1^\ast,m_2^\ast) = \partial h_R(m_1,m_2) \end{aligned}$$ and $$\begin{aligned} g(\nabla_{\partial}m_1,m_2) &+g(m_1,\nabla_{\partial}m_2) = h_L(\nabla_{\partial}m_1,m_2^\ast) + h_L\big(m_1,(\nabla_{\partial}m_2)^\ast\big)\\ &=h_L(\nabla_{\partial}m_1,m_2^\ast) + h_L(m_1,\nabla_{\partial^\ast}m_2^\ast) =\partial h_L(m_1,m_2^\ast) = \partial g(m_1,m_2), \end{aligned}$$ showing that $\nabla$ is indeed compatible with $h_R$ and $g$. 
◻ Proposition [Proposition 12](#prop:bimodule.comp.gives.lr.comp){reference-type="ref" reference="prop:bimodule.comp.gives.lr.comp"} together with Proposition [Proposition 6](#prop:l.st.conn.bimodule.conn){reference-type="ref" reference="prop:l.st.conn.bimodule.conn"} show that for $\ast$-connections on a $\ast$-bimodule, the concepts of left-/right-/bimodule connections coincide, together with their corresponding concepts of compatibility with hermitian forms. Although the above definitions give a natural concept of metric compatibility for a connection, there is in general no unique concept of torsion for arbitrary modules (although one can define torsion via an *anchor map*, see [@aw:curvature.three.sphere]). However, for $\bar{\Omega}^{1}_{\mathfrak{g}}$ one introduces torsion in analogy with differential geometry. **Definition 13**. The *torsion* of a left (right) connection $\nabla$ on $\bar{\Omega}^{1}_{\mathfrak{g}}$ is given by the map $T:\bar{\Omega}^{1}_{\mathfrak{g}}\times\mathfrak{g}\times\mathfrak{g}\to\mathcal{A}$, defined by $$\begin{aligned} \label{eq:def.torsion} T_\omega(\partial,\partial') = (\nabla_{\partial}\omega)(\partial')-(\nabla_{\partial'}\omega)(\partial) -d\omega(\partial,\partial'). \end{aligned}$$ The connection is called *torsion free* if $T_\omega(\partial,\partial')=0$ for all $\partial,\partial'\in\mathfrak{g}$ and $\omega\in\bar{\Omega}^{1}_{\mathfrak{g}}$. It is easy to check that $T_{a\omega}(\partial,\partial')=aT_\omega(\partial,\partial')$ for a left connection (and $T_{\omega a}(\partial,\partial')=T_{\omega}(\partial,\partial')a$ for a right connection), implying that the induced map $T(\partial,\partial'):\bar{\Omega}^{1}_{\mathfrak{g}}\to\mathcal{A}$ is a left (right) module homomorphism. Now, we are ready to discuss torsion free connections compatible with a hermitian form on $\bar{\Omega}^{1}_{\mathfrak{g}}$, so-called *Levi-Civita connections*. **Definition 14**. 
Let $h$ be a left (right) hermitian form on $\bar{\Omega}^{1}_{\mathfrak{g}}$. A *left (right) Levi-Civita connection $\nabla$ on $\bar{\Omega}^{1}_{\mathfrak{g}}$ with respect to $h$* is a torsion free left (right) connection on $\bar{\Omega}^{1}_{\mathfrak{g}}$ compatible with $h$. **Definition 15**. Let $g$ be a $\ast$-bimodule form on $\bar{\Omega}^{1}_{\mathfrak{g}}$. A *$\ast$-bimodule Levi-Civita connection on $\bar{\Omega}^{1}_{\mathfrak{g}}$ with respect to $g$* is a torsion free $\ast$-bimodule connection on $\bar{\Omega}^{1}_{\mathfrak{g}}$ compatible with $g$. It follows from Proposition [Proposition 12](#prop:bimodule.comp.gives.lr.comp){reference-type="ref" reference="prop:bimodule.comp.gives.lr.comp"} that a $\ast$-bimodule Levi-Civita connection with respect to $g$ is also a left and right Levi-Civita connection with respect to the associated left and right hermitian forms. ## Restricted calculi In the following, we shall mostly be interested in so-called restricted calculi $\Omega^{}_{\mathfrak{g}}\subseteq\bar{\Omega}^{}_{\mathfrak{g}}$, which are generated (as left modules) in degree one by $da$ for $a\in\mathcal{A}$. That is, $\Omega^{k}_{\mathfrak{g}}$ is generated by elements of the form $$\begin{aligned} a_0da_1da_2\cdots da_k\end{aligned}$$ for $a_0,a_1,\ldots,a_k\in\mathcal{A}$, acting as $$\begin{aligned} a_0da_1\cdots da_k(\partial_1,\ldots,\partial_k) = \sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)a_0da_1(\partial_{\sigma(1)})\cdots da_k(\partial_{\sigma(k)})\end{aligned}$$ for $\partial_1,\ldots,\partial_k\in\mathfrak{g}$. One readily checks that $\Omega^{k}_{\mathfrak{g}}$ is closed under the right action of $\mathcal{A}$ by repeatedly using $(da)b=d(ab)-adb$ for $a,b\in\mathcal{A}$, giving $$\begin{aligned} da_1da_2\cdots da_k\cdot a_{k+1} &= \sum_{l=1}^k(-1)^{k-l}da_1\cdots d(a_la_{l+1})\cdots da_k da_{k+1}\\ &\qquad+(-1)^ka_1da_2\cdots da_k da_{k+1}\end{aligned}$$ which is clearly an element of $\Omega^{k}_{\mathfrak{g}}$. 
Furthermore, one can show that $$\begin{aligned} d\big(a_0da_1da_2\cdots da_k\big) = da_0da_1da_2\cdots da_k\end{aligned}$$ which is an element of $\Omega^{k+1}_{\mathfrak{g}}$. Thus, $\Omega^{}_{\mathfrak{g}}$ is a differential subalgebra of $\bar{\Omega}^{}_{\mathfrak{g}}$, and the introduced concepts of hermitian forms, connections and torsion apply equally well to this subalgebra. # The noncommutative torus {#sec:nc.torus} Let us illustrate the above concepts by considering the noncommutative torus. The noncommutative torus $T^2_{\theta}$ is a $\ast$-algebra generated by unitaries $U,V$ satisfying $VU=qUV$ with $q=e^{2\pi i\theta}$. We shall assume that $\theta$ is irrational, implying that the center of $T^2_{\theta}$ is trivial, i.e. $Z(T^2_{\theta})=\{\lambda\mathds{1}:\lambda\in\mathbb{C}\}$. Any element $a\in T^2_{\theta}$ can be written as $$\begin{aligned} a = \sum_{k,l\in\mathbb{Z}}a_{kl}U^kV^l\end{aligned}$$ where $a_{kl}\in\mathbb{C}$, and one introduces the standard derivations $\partial_1,\partial_2$, defined by $$\begin{aligned} &\partial_1U = iU\qquad \partial_1V = 0\\ &\partial_2U = 0 \qquad \partial_2V = iV\end{aligned}$$ and it follows that $[\partial_1,\partial_2]=0$ as well as $\partial_a^\ast=\partial_a$ for $a=1,2$. Since $Z(\mathcal{A})$ is trivial, the complex (abelian) Lie algebra $\mathfrak{g}$ generated by $\partial_1,\partial_2$ is a $Z(\mathcal{A})$-module. 
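The relation $VU=qUV$ can be made concrete in finite dimensions: at rational $\theta=1/N$, the clock-and-shift matrices give an $N\times N$ representation. This is only an illustration — the text assumes $\theta$ irrational, where no finite-dimensional representation exists — but it lets one test the defining relations numerically:

```python
# Clock-and-shift representation of VU = qUV at rational θ = 1/N
# (illustrative only; the text takes θ irrational).
import numpy as np

N = 5
q = np.exp(2j * np.pi / N)                     # q = e^{2πiθ} with θ = 1/N
U = np.diag([q**k for k in range(N)])          # clock matrix
V = np.roll(np.eye(N), -1, axis=0)             # cyclic shift matrix

assert np.allclose(V @ U, q * U @ V)           # VU = qUV
assert np.allclose(U @ U.conj().T, np.eye(N))  # U unitary
assert np.allclose(V @ V.conj().T, np.eye(N))  # V unitary
```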
The restricted differential calculus $\Omega^{}_{\mathfrak{g}}$ is generated in first degree by $dU$ and $dV$, and we note that $\{dU,dV\}$ is a basis for $\Omega^1_{\mathfrak{g}}$: $$\begin{aligned} &adU+bdV = 0\quad\Rightarrow\quad \begin{cases} adU(\partial_1)+bdV(\partial_1) = 0\\ adU(\partial_2)+bdV(\partial_2) = 0 \end{cases}\quad\Rightarrow\quad\\ & \begin{cases} iaU = 0\\ ibV = 0 \end{cases}\quad\Rightarrow\quad a=b=0.\end{aligned}$$ Thus, $\Omega^1_{\mathfrak{g}}$ is a free module, and the bimodule structure can be summarized as follows: $$\begin{split} (dU)U &= UdU \qquad (dU)V = q^{-1}V(dU)\\ (dV)V &= VdV \qquad (dV)U = qUdV. \end{split}$$ For instance, checking that $(dU)V=q^{-1}VdU$ amounts to showing that $$\begin{aligned} &\big((dU)V\big)(\partial_1) = (\partial_1U)V = iUV = iq^{-1}VU=q^{-1}(VdU)(\partial_1)\\ &\big((dU)V\big)(\partial_2) = (\partial_2U)V = 0 = q^{-1}(VdU)(\partial_2).\end{aligned}$$ Moreover, one can easily check that $\Omega^{}_{\mathfrak{g}}$ is a connected calculus, since $$\begin{aligned} &\partial_1\big(a_{kl}U^kV^l\big) = 0\quad\Rightarrow\quad ika_{kl}U^kV^l = 0\quad\Rightarrow\quad a_{kl} = 0\text{ for }k\neq 0.\\ &\partial_2\big(a_{kl}U^kV^l\big) = 0\quad\Rightarrow\quad ila_{kl}U^kV^l = 0\quad\Rightarrow\quad a_{kl} = 0\text{ for }l\neq 0,\end{aligned}$$ so the only possible nonzero coefficient is $a_{00}$, and one concludes that $$\begin{aligned} da = 0\quad\Leftrightarrow\quad \begin{cases} \partial_1a = 0\\ \partial_2a = 0 \end{cases}\quad\Leftrightarrow\quad a = \lambda\mathds{1}\end{aligned}$$ for $\lambda\in\mathbb{C}$. 
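In a clock-and-shift representation at rational $\theta$ (again only an illustration of the identities, since the text assumes $\theta$ irrational), the bimodule relations above become matrix identities after evaluating on the derivations, using $dU(\partial_1)=iU$ and $dV(\partial_2)=iV$; the evaluations on the other derivation vanish on both sides:

```python
# Numerical check of (dU)V = q^{-1}V dU and (dV)U = qU dV, evaluated on
# ∂_1 resp. ∂_2, in a clock-and-shift representation at θ = 1/N
# (illustrative only).
import numpy as np

N = 7
q = np.exp(2j * np.pi / N)
U = np.diag([q**k for k in range(N)])        # clock matrix
V = np.roll(np.eye(N), -1, axis=0)           # shift matrix, so VU = qUV

dU_1 = 1j * U                                # dU(∂_1) = iU
dV_2 = 1j * V                                # dV(∂_2) = iV

assert np.allclose(dU_1 @ V, q**(-1) * V @ dU_1)  # ((dU)V)(∂_1) = q^{-1}(V dU)(∂_1)
assert np.allclose(dV_2 @ U, q * U @ dV_2)        # ((dV)U)(∂_2) = q(U dV)(∂_2)
assert np.allclose(dU_1 @ U, U @ dU_1)            # ((dU)U)(∂_1) = (U dU)(∂_1)
```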
A slightly more convenient basis of $\Omega^1_{\mathfrak{g}}$ is given by $$\begin{aligned} \omega^1 = \omega = -iU^{-1}dU\quad\text{and}\quad \omega^2 = \eta = -iV^{-1}dV\end{aligned}$$ satisfying $$\begin{aligned} &\omega^a(\partial_b) = \delta^a_b\mathds{1}\qquad (\omega^a)^\ast = \omega^a\qquad [\omega^a,f] = 0\\ &\omega\eta = -\eta\omega\qquad d\omega=d\eta = 0\end{aligned}$$ for all $f\in T^2_{\theta}$ and $a,b=1,2$. We note that $\omega\eta$ is a basis of $\Omega^{2}_{\mathfrak{g}}$ since $$\begin{aligned} &a\omega\eta = 0\quad\Rightarrow\quad a\omega\eta(\partial_1,\partial_2) = 0\quad\Rightarrow\quad\\ &a\big(\omega(\partial_1)\eta(\partial_2)-\omega(\partial_2)\eta(\partial_1)\big) = 0 \quad\Rightarrow\quad a=0.\end{aligned}$$ Let us now compute the cohomology of the differential calculus $\Omega^{}_{\mathfrak{g}}$. **Proposition 16**. *The cohomology of $\Omega^{}_{\mathfrak{g}}$ is given by $$\begin{aligned} H^0(\Omega^{}_{\mathfrak{g}}) = \mathbb{C}\qquad H^1(\Omega^{}_{\mathfrak{g}}) = \mathbb{C}^2\qquad H^2(\Omega^{}_{\mathfrak{g}}) = \mathbb{C}. \end{aligned}$$* *Proof.* Since $\Omega^{}_{\mathfrak{g}}$ is connected one finds that $$\begin{aligned} H^0(\Omega^{}_{\mathfrak{g}}) = \{a\in T^2_{\theta}: da = 0\} = \{\lambda\mathds{1}:\lambda\in\mathbb{C}\}=\mathbb{C}. \end{aligned}$$ Let us now consider $H^2(\Omega^{}_{\mathfrak{g}})$. First, one notes that $$\begin{aligned} d(U^k) = ikU^k\omega\quad\text{and}\quad d(V^k) = ikV^k\eta \end{aligned}$$ giving $d(U^kV^l)=iU^kV^l(k\omega + l\eta)$. Since $\dim(\mathfrak{g})=2$ any element of $\Omega^{2}_{\mathfrak{g}}$ is closed, so to compute $H^2(\Omega^{}_{\mathfrak{g}})$ one has to determine which 2-forms are not exact. 
An exact 2-form can be written as $d\rho$ for $\rho = a\omega + b\eta\in\Omega^{1}_{\mathfrak{g}}$; writing $a=a_{kl}U^kV^l$ and $b=b_{kl}U^kV^l$, one obtains $$\label{eq:drho} \begin{split} d\rho &= (da)\omega + (db)\eta = \sum_{k,l\in\mathbb{Z}}ia_{kl}U^kV^l\big(k\omega + l\eta\big)\omega +\sum_{k,l\in\mathbb{Z}}ib_{kl}U^kV^l\big(k\omega + l\eta\big)\eta\\ &= \sum_{k,l\in\mathbb{Z}}i\big(kb_{kl}-la_{kl}\big)U^kV^l\omega\eta, \end{split}$$ implying that $(c_{kl}U^kV^l)\omega\eta$ is exact if and only if $c_{00}=0$. Hence, $$\begin{aligned} H^2(\Omega^{}_{\mathfrak{g}}) = \{\lambda\omega\eta : \lambda\in\mathbb{C}\} = \mathbb{C}. \end{aligned}$$ Finally, let us consider $H^1(\Omega^{}_{\mathfrak{g}})$. Let $\rho=a\omega + b\eta$ be a closed $1$-form, giving (using [\[eq:drho\]](#eq:drho){reference-type="eqref" reference="eq:drho"}) $$\begin{aligned} \sum_{k,l\in\mathbb{Z}}i\big(kb_{kl}-la_{kl}\big)U^kV^l\omega\eta = 0\quad\Rightarrow\quad \sum_{k,l\in\mathbb{Z}}i\big(kb_{kl}-la_{kl}\big)U^kV^l = 0 \end{aligned}$$ since $\omega\eta$ is a basis of $\Omega^{2}_{\mathfrak{g}}$. Thus, $$\label{eq:drhoab} d\rho = 0\quad\Leftrightarrow\quad kb_{kl} = la_{kl}$$ for all $k,l\in\mathbb{Z}$, giving $a_{0k}=b_{k0}=0$ for $k\neq 0$. The remaining coefficients can be parametrized by $r_{kl}\in\mathbb{C}$ as $$\label{aklbklr} b_{kl} = lr_{kl}\quad\text{and}\quad a_{kl} = kr_{kl}$$ for $(k,l)\neq(0,0)$, leaving $a_{00}$ and $b_{00}$ arbitrary. 
On the other hand, exact 1-forms are given by $$\begin{aligned} \rho = dc = d\sum_{k,l\in\mathbb{Z}}c_{kl}U^kV^l = \sum_{k,l\in\mathbb{Z}}ikc_{kl}U^kV^l\omega+\sum_{k,l\in\mathbb{Z}}ilc_{kl}U^kV^l\eta, \end{aligned}$$ and comparing with [\[aklbklr\]](#aklbklr){reference-type="eqref" reference="aklbklr"} one concludes that the closed 1-forms that are not exact can be represented by $\rho = a_{00}\omega + b_{00}\eta$ for $a_{00},b_{00}\in\mathbb{C}$; hence $$H^1(\Omega^{}_{\mathfrak{g}}) = \{\lambda\omega + \mu\eta:\lambda,\mu\in\mathbb{C}\} \simeq \mathbb{C}^2.\qedhere$$ ◻ ## Levi-Civita connections Let us now consider connections on $\Omega^1_{\mathfrak{g}}$. Since $\{\omega^1,\omega^2\}$ is a basis of $\Omega^1_{\mathfrak{g}}$, (left) connections can be introduced as $$\label{eq:left.conn.def} \nabla_{\partial_a}\omega^b = \Gamma_{ac}^b\omega^c$$ for arbitrary $\Gamma_{ac}^b\in T^2_{\theta}$. The connection is torsion free if $$\begin{aligned} 0 = \big(\nabla_{\partial_1}\omega^a\big)(\partial_2)-\big(\nabla_{\partial_2}\omega^a\big)(\partial_1) =\Gamma_{1c}^a\omega^c(\partial_2) - \Gamma_{2c}^a\omega^c(\partial_1) = \Gamma^a_{12}-\Gamma^a_{21}\end{aligned}$$ for $a=1,2$; i.e. $\Gamma_{ab}^c=\Gamma_{ba}^c$ for $a,b,c=1,2$. Let $h$ be a left hermitian form on $\Omega^1_{\mathfrak{g}}$, write $h^{ab}=h(\omega^a,\omega^b)$ and assume that there exist $h_{ab}\in T^2_{\theta}$ such that $h^{ac}h_{cb}=h_{bc}h^{ca}=\delta^a_b\mathds{1}$ for $a,b=1,2$. 
A connection of the form [\[eq:left.conn.def\]](#eq:left.conn.def){reference-type="eqref" reference="eq:left.conn.def"} is compatible with $h$ if $$\begin{aligned} 0 = \partial_ch(\omega^a,\omega^b) -h\big(\nabla_{\partial_c}\omega^a,\omega^b\big)-h\big(\omega^a,\nabla_{\partial_c}\omega^b\big) = \partial_ch^{ab}-\Gamma^a_{cp}h^{pb}-\big(\Gamma^b_{cp}h^{pa}\big)^\ast.\end{aligned}$$ Introducing $\Gamma^{ab}_c = \Gamma^a_{cp}h^{pb}$ the compatibility condition may be written as $$\label{eq:torus.metric.cond} \partial_ch^{ab} = \Gamma^{ab}_c+\big(\Gamma^{ba}_c\big)^\ast$$ and setting $\Gamma^{ab}_c = \frac{1}{2}\partial_ch^{ab}+iS^{ab}_c$ condition [\[eq:torus.metric.cond\]](#eq:torus.metric.cond){reference-type="eqref" reference="eq:torus.metric.cond"} becomes $(S^{ab}_c)^\ast=S^{ba}_c$. Hence, $$\label{eq:torus.metric.conn} \nabla_{\partial_a}\omega^b = \Gamma^b_{ac}\omega^c=\Gamma^{bp}_ah_{pc}\omega^c = \big(\tfrac{1}{2}\partial_ah^{bp}+iS^{bp}_a\big)h_{pc}\omega^c$$ defines a connection compatible with $h$ for arbitrary $(S^{ab}_c)^\ast=S^{ba}_c$; in particular, choosing $S^{ab}_c=0$ shows that compatible connections always exist. 
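The parametrization by $S^{ab}_c$ can be tested symbolically in a commutative model, taking $\ast$ to be complex conjugation and $\partial_c=\partial/\partial x_c$; the concrete matrix entries below are arbitrary illustrative test data, not from the paper, chosen only so that $(h^{ab})$ is hermitian and $S$ has the required symmetry:

```python
# Commutative sketch: Γ^{ab}_c = ½∂_c h^{ab} + iS^{ab}_c with (S^{ab}_c)* = S^{ba}_c
# satisfies the compatibility condition ∂_c h^{ab} = Γ^{ab}_c + (Γ^{ba}_c)*.
import sympy as sp

x = sp.symbols('x1 x2', real=True)
I = sp.I

# hermitian h^{ab}: real diagonal, conjugate off-diagonal entries
h = [[sp.exp(x[0]),   x[0] + I*x[1]],
     [x[0] - I*x[1],  1 + x[1]**2]]

# S[a][b][c] with (S^{ab}_c)* = S^{ba}_c
S = [[[x[0]*x[1], x[1]**2],
      [x[0] + I*sp.sin(x[1]), 1 + I*x[0]*x[1]]],
     [[x[0] - I*sp.sin(x[1]), 1 - I*x[0]*x[1]],
      [sp.cos(x[0]), x[0] + x[1]]]]

Gamma = [[[sp.diff(h[a][b], x[c])/2 + I*S[a][b][c] for c in range(2)]
          for b in range(2)] for a in range(2)]

for a in range(2):
    for b in range(2):
        for c in range(2):
            lhs = sp.diff(h[a][b], x[c])
            rhs = Gamma[a][b][c] + sp.conjugate(Gamma[b][a][c])
            assert sp.simplify(lhs - rhs) == 0
```

In the noncommutative torus itself the same cancellation happens formally (using $(h^{ba})^\ast=h^{ab}$ and $\partial_c^\ast=\partial_c$); the commutative check only confirms the bookkeeping of indices and conjugations.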
Let us slightly rewrite the first term in [\[eq:torus.metric.conn\]](#eq:torus.metric.conn){reference-type="eqref" reference="eq:torus.metric.conn"} to obtain (using $\partial_a(h^{bp}h_{pc})=\partial_a\delta^b_c\mathds{1}=0$) $$\nabla_{\partial_a}\omega^b = \big(-\tfrac{1}{2}h^{bp}\partial_ah_{pc}+iS^{bp}_a h_{pc}\big)\omega^c.$$ Furthermore, setting $$\begin{aligned} S^{ab}_c = \tfrac{i}{2}h^{ap}(\partial_qh_{pc})h^{qb} -\tfrac{i}{2}h^{ap}(\partial_ph_{cq})h^{qb} + T^{ab}_c\end{aligned}$$ gives $$\begin{aligned} (S^{ab}_c)^\ast = -\tfrac{i}{2}h^{bq}(\partial_qh_{cp})h^{pa} +\tfrac{i}{2}h^{bq}(\partial_ph_{qc})h^{pa} + (T^{ab}_c)^\ast =S^{ba}_c-T^{ba}_c+(T^{ab}_c)^\ast\end{aligned}$$ implying that $(S^{ab}_c)^\ast=S^{ba}_c$ is equivalent to $(T^{ab}_c)^\ast=T^{ba}_c$, giving $$\begin{aligned} \Gamma^c_{ab} = -\tfrac{1}{2}h^{cp}\partial_ah_{pb} -\tfrac{1}{2}h^{cp}\partial_bh_{pa} +\tfrac{1}{2}h^{cp}\partial_ph_{ab}+iT^{cp}_ah_{pb}.\end{aligned}$$ Thus, the above Christoffel symbols define a connection compatible with $h$ for arbitrary $T^{ab}_c\in T^2_{\theta}$ such that $(T^{ab}_c)^\ast=T^{ba}_c$. Demanding that the connection is torsion free amounts to requiring that $\Gamma_{ab}^c=\Gamma_{ba}^c$, which is equivalent to $$\label{eq:torsion.free.T} \tfrac{1}{2}\partial_q(h_{ab}-h_{ba}) = ih_{qc}T^{cp}_b h_{pa}-ih_{qc}T^{cp}_a h_{pb}.$$ Let us split the components of (the inverse of) the hermitian form into its hermitian and antihermitian parts: $$\begin{aligned} h_{ab} = f_{ab} + ig_{ab}\end{aligned}$$ with $f_{ab}^\ast=f_{ab}$ and $g_{ab}^\ast=g_{ab}$. 
The condition $h_{ab}^\ast=h_{ba}$ implies that $$\begin{aligned} f_{ab}=f_{ba}\quad\text{and}\quad g_{ab}=-g_{ba},\end{aligned}$$ and [\[eq:torsion.free.T\]](#eq:torsion.free.T){reference-type="eqref" reference="eq:torsion.free.T"} becomes $$\label{eq:torsion.free.g.T} \partial_qg_{ab} = h_{qc}T^{cp}_b h_{pa}-h_{qc}T^{cp}_a h_{pb}.$$ In fact, since $a,b\in\{1,2\}$ and $g_{ab}=-g_{ba}$ there are only two independent equations: $$\begin{aligned} &\partial_1g_{12} = h_{1c}T^{cp}_2h_{p1}-h_{1c}T^{cp}_1h_{p2}\label{eq:torsionfree.g.1}\\ &\partial_2g_{12} = h_{2c}T^{cp}_2h_{p1}-h_{2c}T^{cp}_1h_{p2}.\label{eq:torsionfree.g.2}\end{aligned}$$ We note that if $\partial_1g_{12}=\partial_2g_{12}=0$ then one can solve [\[eq:torsion.free.g.T\]](#eq:torsion.free.g.T){reference-type="eqref" reference="eq:torsion.free.g.T"} by setting $T^{ab}_c=0$. For instance, if the hermitian form is diagonal (implying that the inverse is also diagonal) then $\partial_qg_{ab}=0$ since $g_{ab}=0$ for $a\neq b$. Thus, for $$\begin{aligned} (h^{ab}) = \begin{pmatrix} h_1 & 0 \\ 0 & h_2 \end{pmatrix}\quad\Rightarrow\quad (h_{ab}) = \begin{pmatrix} h_1^{-1} & 0 \\ 0 & h_2^{-1} \end{pmatrix}\end{aligned}$$ a torsion free connection on $\Omega^1_{\mathfrak{g}}$ compatible with $h$ is given by $$\begin{aligned} &\nabla_{\partial_1}\omega = -\tfrac{1}{2}h_1(\partial_1h_1^{-1})\omega-\tfrac{1}{2}h_1(\partial_2h_1^{-1})\eta\\ &\nabla_{\partial_1}\eta = \tfrac{1}{2}h_2(\partial_2h_1^{-1})\omega-\tfrac{1}{2}h_2(\partial_1h_2^{-1})\eta\\ &\nabla_{\partial_2}\omega = -\tfrac{1}{2}h_1(\partial_2h_1^{-1})\omega+\tfrac{1}{2}h_1(\partial_1h_2^{-1})\eta\\ &\nabla_{\partial_2}\eta = -\tfrac{1}{2}h_2(\partial_1h_2^{-1})\omega-\tfrac{1}{2}h_2(\partial_2h_2^{-1})\eta.\end{aligned}$$ Let us consider another example where we assume that $h$ is purely off-diagonal, i.e. 
$$\begin{aligned} (h^{ab}) = \begin{pmatrix} 0 & \hat{h}\\ \hat{h}^\ast & 0 \end{pmatrix}\quad\Rightarrow\quad (h_{ab}) = \begin{pmatrix} 0 & (\hat{h}^{-1})^\ast\\ \hat{h}^{-1}& 0 \end{pmatrix}= \begin{pmatrix} 0 & f+ig\\ f-ig & 0 \end{pmatrix}.\end{aligned}$$ Setting $T^{11}_a=T^{22}_a=0$ for $a=1,2$, [\[eq:torsionfree.g.1\]](#eq:torsionfree.g.1){reference-type="eqref" reference="eq:torsionfree.g.1"} and [\[eq:torsionfree.g.2\]](#eq:torsionfree.g.2){reference-type="eqref" reference="eq:torsionfree.g.2"} become (with $g_{12}=g$) $$\begin{aligned} &\partial_1g = -h_{12}T^{21}_1h_{12} = -(\hat{h}^{-1})^\ast T^{21}_1(\hat{h}^{-1})^\ast\\ &\partial_2g = h_{21}T^{12}_2h_{21} = \hat{h}^{-1}T^{12}_2\hat{h}^{-1}\end{aligned}$$ which can be solved by setting $$\begin{aligned} &T^{21}_1 = -\hat{h}^\ast(\partial_1g)\hat{h}^\ast \qquad T^{12}_1=(T^{21}_1)^\ast = -\hat{h}(\partial_1g)\hat{h}\\ &T^{12}_2 = \hat{h}(\partial_2g)\hat{h}\qquad T^{21}_2 = (T^{12}_2)^\ast = \hat{h}^\ast(\partial_2g)\hat{h}^\ast,\end{aligned}$$ giving $$\begin{aligned} {3} &\Gamma^1_{11} = -\hat{h}\partial_1f &\quad &\Gamma^1_{22}= 0 &\quad&\Gamma^1_{12}=\Gamma^1_{21}=i\hat{h}\partial_2g\\ &\Gamma^2_{22} = -\hat{h}^\ast\partial_2f & &\Gamma^2_{11} = 0 & & \Gamma^2_{12}=\Gamma^2_{21}=-i\hat{h}^\ast\partial_1g \end{aligned}$$ and $$\begin{aligned} {2} &\nabla_{\partial_1}\omega = -\hat{h}(\partial_1f)\omega + i\hat{h}(\partial_2g)\eta &\qquad &\nabla_{\partial_1}\eta = -i\hat{h}^\ast (\partial_1g)\eta\\ &\nabla_{\partial_2}\omega = i\hat{h}(\partial_2g)\omega & &\nabla_{\partial_2}\eta = -i\hat{h}^\ast(\partial_1g)\omega - \hat{h}^\ast(\partial_2f)\eta.\end{aligned}$$ ## Bimodule connections Let us now construct bimodule connections on $\Omega^1_{\mathfrak{g}}$.
To this end, let $\nabla$ be a left connection on $\Omega^1_{\mathfrak{g}}$ and write $$\begin{aligned} \nabla_{\partial_a}\omega^c = \Gamma^c_{ab}\omega^b.\end{aligned}$$ If $\nabla$ is also a right connection, then it has to satisfy $$\begin{aligned} \nabla_{\partial_a}(\omega^c f) = \big(\nabla_{\partial_a}\omega^c\big)f + \omega^c\partial_af\end{aligned}$$ for all $f\in T^2_{\theta}$. Since $[\omega^c,f]=0$ and $\nabla$ is a left connection, this implies that $$\begin{aligned} f\nabla_{\partial_a}\omega^c + (\partial_af)\omega^c = \big(\nabla_{\partial_a}\omega^c\big)f + \omega^c\partial_af\quad\Leftrightarrow\quad [f,\nabla_{\partial_a}\omega^c] = 0\quad\Leftrightarrow\quad [f,\Gamma^c_{ab}] = 0\end{aligned}$$ for $f\in T^2_{\theta}$ and $a,b,c\in\{1,2\}$. Hence, $\Gamma^c_{ab}\in Z(T^2_{\theta})$ implying that there exist $\gamma^c_{ab}\in\mathbb{C}$ such that $\Gamma^c_{ab} = \gamma^c_{ab}\mathds{1}$. Moreover, in order for a left connection to be a $\ast$-connection, one needs $$\begin{aligned} 0 &= \big(\nabla_{\partial_a}(f\omega^c)\big)^\ast-\nabla_{\partial_a}(\omega^cf^\ast) =\big(\nabla_{\partial_a}(f\omega^c)\big)^\ast-\nabla_{\partial_a}(f^\ast\omega^c)\\ &=\big(f\nabla_{\partial_a}\omega^c+(\partial_af)\omega^c\big)^\ast-f^\ast\nabla_{\partial_a}\omega^c-(\partial_af^\ast)\omega^c\\ &=(\nabla_{\partial_a}\omega^c)^\ast f^\ast + \omega^c\partial_af^\ast-f^\ast\nabla_{\partial_a}\omega^c-(\partial_af^\ast)\omega^c\\ &= (\nabla_{\partial_a}\omega^c)^\ast f^\ast-f^\ast\nabla_{\partial_a}\omega^c = \big((\Gamma^c_{ab})^\ast f^\ast -f^\ast\Gamma^{c}_{ab}\big)\omega^b.\end{aligned}$$ For $f=\mathds{1}$ one immediately obtains $(\Gamma^c_{ab})^\ast=\Gamma^c_{ab}$, and using that in the above equation gives $[\Gamma^c_{ab},f^\ast]=0$ for all $f\in T^2_{\theta}$, i.e. $\Gamma^c_{ab}\in Z(T^2_{\theta})$.
Hence, a left connection $\nabla$ is a bimodule connection if $\Gamma^c_{ab}\in Z(T^2_{\theta})$ and, moreover, $\nabla$ is a left $\ast$-connection if $(\Gamma^c_{ab})^\ast=\Gamma_{ab}^c$. Clearly, if the hermitian form is constant, i.e. $h^{ab}\sim\mathds{1}$ for $a,b\in\{1,2\}$, then $\Gamma^c_{ab}=0$ gives a torsion free $\ast$-connection compatible with $h$ on the bimodule $\Omega^1_{\mathfrak{g}}$. This is nothing but the canonical connection on a free module given by $\nabla_{\partial_a}f\omega^c=(\partial_af)\omega^c$. However, there are clearly other solutions when $\partial_ch^{ab}=0$. For instance if $$\begin{aligned} (h^{ab}) = \begin{pmatrix} 0 & i\lambda\mathds{1}\\ -i\lambda\mathds{1}& 0 \end{pmatrix} \end{aligned}$$ for $\lambda\in\mathbb{R}$, then $$\begin{aligned} &\nabla_{\partial_1}\eta = \gamma_1\omega \qquad \nabla_{\partial_2}\eta = 0\\ &\nabla_{\partial_2}\omega = \gamma_2\eta \qquad \nabla_{\partial_1}\omega = 0\end{aligned}$$ with $\gamma_1,\gamma_2\in\mathbb{R}$ defines a torsion free $\ast$-connection compatible with $h$. Now, assume that a $\ast$-bimodule form $g$ is given on $\Omega^1_{\mathfrak{g}}$ by $$\begin{aligned} g(f_a\omega^a,\omega^b\tilde{f}_b) = f_ag^{ab}\tilde{f}_b\end{aligned}$$ with $(g^{ab})^\ast=g^{ba}$ and, furthermore, assume that $g$ is diagonal, i.e. $$\begin{aligned} (g^{ab}) = \begin{pmatrix} g_1 & 0 \\ 0 & g_2 \end{pmatrix}.\end{aligned}$$ Compatibility with $g$ amounts to $$\begin{aligned} \partial_cg^{ab} = \Gamma^a_{cp}g^{pb} + g^{ap}\Gamma^b_{cp},\end{aligned}$$ and since $g$ is diagonal one obtains the equations $$\begin{aligned} {2} &\partial_1g_1 = 2\Gamma^1_{11}g_1 &\qquad &\partial_2g_1 = 2\Gamma^1_{21}g_1\\ &\partial_1g_2 = 2\Gamma^2_{12}g_2 & &\partial_2g_2 = 2\Gamma^2_{22}g_2\\ &\Gamma^2_{11}g_1 = -\Gamma^1_{12}g_2 & &\Gamma^2_{21} g_1 = -\Gamma^1_{22}g_2.\end{aligned}$$ We note that $g_1$ and $g_2$ are necessarily proportional. 
For instance, choosing $g_1=U^kV^l$ and $g_2=zg_1=zU^kV^l$ for $z\in\mathbb{C}$ (and $z\neq 0$), one obtains a torsion free bimodule connection compatible with $g$ by setting $$\begin{aligned} {2} &\Gamma^1_{11} = \tfrac{ik}{2} &\qquad &\Gamma^1_{12}=\Gamma^1_{21} = \tfrac{il}{2}\\ &\Gamma^2_{22} = \tfrac{il}{2} & &\Gamma^2_{12}=\Gamma^2_{21}=\tfrac{ik}{2}\\ &\Gamma^1_{22} = -\tfrac{ik}{2z}& &\Gamma^2_{11} = -\tfrac{ilz}{2}\end{aligned}$$ giving $$\begin{aligned} {2} &\nabla_{\partial_1}\omega = i\tfrac{k}{2}\omega + i\tfrac{l}{2}\eta&\qquad &\nabla_{\partial_1}\eta = -i\tfrac{lz}{2}\omega + i\tfrac{k}{2}\eta \\ &\nabla_{\partial_2}\omega = i\tfrac{l}{2}\omega - i\tfrac{k}{2z}\eta & &\nabla_{\partial_2}\eta = i\tfrac{k}{2}\omega + i\tfrac{l}{2}\eta.\end{aligned}$$ Note that $\nabla$ is not a $\ast$-connection since not all $\Gamma^c_{ab}$ are real. # The geometry of Kronecker algebras {#sec:kronecker.alg} Let us turn to our main object of study in this paper; namely, the path algebra of the (generalized) Kronecker quiver with $N$ arrows: $$\begin{tikzcd} 1 \arrow[r, draw=none, "\raisebox{+1.5ex}{\vdots}" description] \arrow[r, bend left, "\alpha_1"] \arrow[r, bend right, swap, "\alpha_N"] & 2 \end{tikzcd}$$ Denoting the leftmost node by $e$, and consequently the rightmost node by $\mathds{1}-e$, we let $\mathcal{K}_N$ be the unital $\mathbb{C}$-algebra generated by $e,\alpha_1,\ldots,\alpha_N$ satisfying $$\begin{aligned} \label{eq:kronecker.alg.relations} e^2=e\qquad e\alpha_k = \alpha_k\qquad \alpha_ke = 0\qquad \alpha_j\alpha_k = 0\end{aligned}$$ for $j,k\in\{1,\ldots,N\}$. The algebra $\mathcal{K}_N$ is finite dimensional, and every element $a\in\mathcal{K}_N$ can be uniquely written as $$\begin{aligned} a = \lambda\mathds{1}+ \mu e + a^i\alpha_i\end{aligned}$$ for $\lambda,\mu,a^i\in\mathbb{C}$, where summation from $1$ to $N$ over the repeated index $i$ is assumed. 
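Before computing further with $\mathcal{K}_N$, it may help to see the relations [\[eq:kronecker.alg.relations\]](#eq:kronecker.alg.relations){reference-type="eqref" reference="eq:kronecker.alg.relations"} realized concretely. The following numerical sketch is our own illustration and is not used in the sequel; the choice of representation $e\mapsto E_{11}$, $\alpha_k\mapsto E_{1,k+1}$ in $M_{N+1}(\mathbb{C})$ (a faithful one, since the matrices $\mathds{1},E_{11},E_{1,k+1}$ are linearly independent) is an assumption of the sketch.

```python
import numpy as np

N = 3              # number of arrows; any N >= 1 works the same way
d = N + 1          # we model K_N inside M_{N+1}(C)

def E(i, j):
    """Matrix unit E_{ij} (0-indexed) in M_d(C)."""
    m = np.zeros((d, d), dtype=complex)
    m[i, j] = 1
    return m

e = E(0, 0)
alpha = [E(0, k) for k in range(1, d)]   # alpha_1, ..., alpha_N

# the defining relations of K_N
assert np.allclose(e @ e, e)                        # e^2 = e
for a in alpha:
    assert np.allclose(e @ a, a)                    # e.alpha_k = alpha_k
    assert np.allclose(a @ e, np.zeros((d, d)))     # alpha_k.e = 0
    for b in alpha:
        assert np.allclose(a @ b, np.zeros((d, d)))  # alpha_j.alpha_k = 0
```

Note that this matrix picture only captures the algebra structure; the $\ast$-structure introduced below is not the matrix adjoint in this representation, since $e^\ast=\mathds{1}-e$.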
The product of two elements $$\begin{aligned} a_1 = \lambda_1\mathds{1}+ \mu_1e + a_1^i\alpha_i\quad\text{and}\quad a_2 = \lambda_2\mathds{1}+ \mu_2e + a_2^i\alpha_i\end{aligned}$$ can then be computed as $$\begin{aligned} a_1a_2 = \lambda_1\lambda_2\mathds{1}+ \big(\lambda_1\mu_2+\lambda_2\mu_1+\mu_1\mu_2\big)e+ \big((\lambda_1+\mu_1)a_2^i+\lambda_2a_1^i\big)\alpha_i,\end{aligned}$$ giving $$\begin{aligned} \label{eq:a1a2.commutator} [a_1,a_2] = \big(\mu_1a_2^i-\mu_2a_1^i\big)\alpha_i.\end{aligned}$$ The subspace $\mathcal{K}^\alpha_N$, generated by $\alpha_1,\ldots,\alpha_N$, is a two-sided ideal of $\mathcal{K}_N$ and we note that $ea=a$ and $ab=0$ for all $a,b\in\mathcal{K}^\alpha_N$. It is straightforward to introduce a $\ast$-structure on $\mathcal{K}_N$. **Proposition 17**. *For $a=\lambda\mathds{1}+ \mu e + a^i\alpha_i\in\mathcal{K}_N$ set $$\begin{aligned} a^\ast = (\bar{\lambda}+\bar{\mu})\mathds{1}-\bar{\mu} e+\bar{a}^i\alpha_i. \end{aligned}$$ Then $(a^\ast)^\ast = a$, $(za+b)^\ast=\bar{z}a^\ast + b^\ast$ and $(ab)^\ast=b^\ast a^\ast$ for all $z\in\mathbb{C}$ and $a,b\in\mathcal{K}_N$.* *Proof.* It is easy to see that $(za+b)^\ast = \bar{z}a^\ast + b^\ast$. Furthermore, one checks that $$\begin{aligned} (a^\ast)^\ast = (\lambda+\mu-\mu)\mathds{1}+\mu e+a^i\alpha_i = a. 
\end{aligned}$$ Now, for $$\begin{aligned} a_1 = \lambda_1\mathds{1}+ \mu_1e + a_1^i\alpha_i\quad\text{and}\quad a_2 = \lambda_2\mathds{1}+ \mu_2e + a_2^i\alpha_i \end{aligned}$$ one computes $$\begin{aligned} (a_1a_2)^\ast = &(\bar{\lambda}_1+\bar{\mu}_1)(\bar{\lambda}_2+\bar{\mu}_2)\mathds{1} -(\bar{\lambda}_1\bar{\mu}_2+\bar{\lambda}_2\bar{\mu}_1+\bar{\mu}_1\bar{\mu}_2)e+\big((\bar{\lambda}_1+\bar{\mu}_1)\bar{a}_2^i+\bar{\lambda}_2\bar{a}_1^i\big)\alpha_i \end{aligned}$$ and $$\begin{aligned} a_2^\ast a_1^\ast &= \Big((\bar{\lambda}_2+\bar{\mu}_2)\mathds{1}-\bar{\mu}_2 e+\bar{a}_2^i\alpha_i\Big) \Big((\bar{\lambda}_1+\bar{\mu}_1)\mathds{1}-\bar{\mu}_1 e+\bar{a}_1^i\alpha_i\Big)\\ &=(\bar{\lambda}_1+\bar{\mu}_1)(\bar{\lambda}_2+\bar{\mu}_2)\mathds{1} +\big(-(\bar{\lambda}_2+\bar{\mu}_2)\bar{\mu}_1-(\bar{\lambda}_1+\bar{\mu}_1)\bar{\mu}_2+\bar{\mu}_1\bar{\mu}_2\big)e\\ &\quad+\big(\bar{\lambda}_2\bar{a}_1^i+(\bar{\lambda}_1+\bar{\mu}_1)\bar{a}_2^i\big)\alpha_i, \end{aligned}$$ showing that $(a_1a_2)^\ast=a_2^\ast a_1^\ast$ for all $a_1,a_2\in\mathcal{K}_N$. ◻ Proposition [Proposition 17](#prop:star.algebra.structure){reference-type="ref" reference="prop:star.algebra.structure"} implies that $\mathcal{K}_N$ is a $\ast$-algebra with $$\begin{aligned} e^\ast = \mathds{1}-e\quad\text{and}\quad\alpha_i^\ast = \alpha_i.\end{aligned}$$ Moreover, note that an arbitrary hermitian element $a\in\mathcal{K}_N$ can be written as $$\begin{aligned} a = (\lambda-\tfrac{i}{2}\mu)\mathds{1}+i\mu e+a^i\alpha_i\end{aligned}$$ with $\lambda,\mu,a^i\in\mathbb{R}$. In the construction of differential calculi, the center plays an important role. For $\mathcal{K}_N$, the center turns out to be trivial. **Proposition 18**. *$Z(\mathcal{K}_N) = \{\lambda\mathds{1}:\lambda\in\mathbb{C}\}$.* *Proof.* Let $c=\lambda\mathds{1}+ \mu e + a^i\alpha_i$ be in the center of $\mathcal{K}_N$.
Writing out $[c,e] = 0$ gives $$\begin{aligned} 0=[c,e] = ce - ec = (\lambda+\mu)e-(\lambda+\mu)e-a^i\alpha_i =-a^i\alpha_i \end{aligned}$$ implying that $a^i=0$ for $i=1,\ldots,N$ and $c=\lambda\mathds{1}+\mu e$. Requiring that $[c,\alpha_i]=0$ for $i=1,\ldots,N$ gives $$\begin{aligned} 0 = [c,\alpha_i] = c\alpha_i-\alpha_ic = (\lambda+\mu)\alpha_i-\lambda\alpha_i = \mu\alpha_i \end{aligned}$$ implying that $\mu=0$ and $c=\lambda\mathds{1}$. Clearly, any element of the form $\lambda\mathds{1}$ is in the center of $\mathcal{K}_N$. ◻ Furthermore, the algebra $\mathcal{K}_N$ has plenty of invertible elements, as shown by the following result. **Proposition 19**. *An element $a=\lambda\mathds{1}+\mu e+a^i\alpha_i\in\mathcal{K}_N$ is invertible if and only if $\lambda\neq 0$ and $\lambda+\mu\neq 0$. In this case, the inverse is given by $$\label{eq:a.inverse} a^{-1} = \frac{1}{\lambda(\lambda+\mu)}\Big((\lambda+\mu)\mathds{1}-\mu e - a^i\alpha_i\Big).$$* *Proof.* If $\lambda\neq 0$ and $\lambda+\mu\neq 0$ it is easy to check that [\[eq:a.inverse\]](#eq:a.inverse){reference-type="eqref" reference="eq:a.inverse"} is the inverse of $a$. Now, assume that $a=\lambda\mathds{1}+\mu e+a^i\alpha_i$ is invertible. Then there exists $$b = \gamma_1\mathds{1}+ \gamma_2e + \gamma^i\alpha_i$$ such that $ab=\mathds{1}$, which is equivalent to $$\begin{aligned} &\lambda\gamma_1 = 1\label{eq:lambda.gamma1}\\ &(\lambda+\mu)\gamma_2 = -\gamma_1\mu\label{eq:lambda.gamma2}\\ &(\lambda+\mu)\gamma^i = -\gamma_1a^i\label{eq:lambda.gammai} \end{aligned}$$ for $i=1,\ldots,N$. It follows immediately from [\[eq:lambda.gamma1\]](#eq:lambda.gamma1){reference-type="eqref" reference="eq:lambda.gamma1"} that $\lambda\neq 0$ and $\gamma_1\neq 0$.
Furthermore, if $\lambda+\mu=0$ then [\[eq:lambda.gamma2\]](#eq:lambda.gamma2){reference-type="eqref" reference="eq:lambda.gamma2"} implies that $\mu=0$ (since $\gamma_1\neq 0$) and, consequently, that $\lambda=-\mu=0$, which contradicts [\[eq:lambda.gamma1\]](#eq:lambda.gamma1){reference-type="eqref" reference="eq:lambda.gamma1"}. Hence, if $ab=\mathds{1}$ then $\lambda\neq 0$ and $\lambda+\mu\neq 0$. ◻ In the following, we will construct derivation-based differential calculi over $\mathcal{K}_N$. As a first step, let us describe the Lie algebra of derivations as well as a basis consisting of hermitian derivations. **Proposition 20**. *A basis of $\operatorname{Der}(\mathcal{K}_N)$ is given by $\{\partial_k\}_{k=1}^N$ and $\{\partial_k^l\}_{k,l=1}^N$ with $$\begin{aligned} {2} &\partial_k(e) = i\alpha_k &\qquad &\partial_k(\alpha_l) = 0\label{eq:di.def}\\ &\partial_k^l(e)=0 &\qquad &\partial_{k}^l(\alpha_j) =\delta_j^l\alpha_k,\label{eq:dij.def} \end{aligned}$$ satisfying $$\begin{aligned} &[\partial_{i}^j,\partial_{k}^l] = \delta_{k}^j\partial_{i}^l-\delta_{i}^l\partial_{k}^j\\ &[\partial_{i}^j,\partial_k] = \delta_{k}^j\partial_i\\ &[\partial_i,\partial_j] = 0. \end{aligned}$$ Moreover, $\partial_i,\partial_i^j$ are hermitian derivations for $i,j=1,\ldots,N$.* *Proof.* Let us start by finding the form of an arbitrary derivation. To this end, we make the Ansatz: $$\begin{aligned} &\partial(e) = \lambda\mathds{1}+ \mu e + a^k\alpha_k\\ &\partial(\alpha_i) = \lambda_i\mathds{1}+ \mu_ie+ b_i^k\alpha_k \end{aligned}$$ for $\lambda,\lambda_i,\mu,\mu_i,a^k,b_i^k\in\mathbb{C}$. These maps extend to derivations of all of $\mathcal{K}_N$ precisely when they preserve the relations in [\[eq:kronecker.alg.relations\]](#eq:kronecker.alg.relations){reference-type="eqref" reference="eq:kronecker.alg.relations"}.
One computes $$\begin{aligned} 0 &= \partial(e^2-e) = e\partial e+(\partial e)e -\partial e\\ &= e(\lambda\mathds{1}+ \mu e + a^k\alpha_k)+(\lambda\mathds{1}+ \mu e + a^k\alpha_k)e-(\lambda\mathds{1}+ \mu e + a^k\alpha_k)\\ &= -\lambda\mathds{1}+ (2\lambda+\mu)e, \end{aligned}$$ implying that $\lambda=\mu=0$ and $\partial e = a^k\alpha_k$. Next, $$\begin{aligned} 0 &= \partial(\alpha_ie) = \alpha_i\partial e + (\partial\alpha_i)e = \alpha_i(a^k\alpha_k) + (\lambda_i\mathds{1}+ \mu_ie + b_i^k\alpha_k)e\\ &= (\lambda_i+\mu_i)e, \end{aligned}$$ implying that $\mu_i=-\lambda_i$ and $\partial\alpha_i = \lambda_i(\mathds{1}-e)+b_i^k\alpha_k$. Furthermore, $$\begin{aligned} 0 &= \partial(e\alpha_i-\alpha_i) = e\partial\alpha_i + (\partial e)\alpha_i - \partial\alpha_i\\ &= e\big(\lambda_i(\mathds{1}-e)+b_i^k\alpha_k\big) + a^k\alpha_k\alpha_i-\lambda_i(\mathds{1}-e)-b_i^k\alpha_k\\ &= -\lambda_i(\mathds{1}-e), \end{aligned}$$ implying that $\lambda_i=0$ and $\partial\alpha_i = b_i^k\alpha_k$. Finally, $$\begin{aligned} 0=\partial(\alpha_i\alpha_j) = \alpha_i\partial\alpha_j + (\partial\alpha_i)\alpha_j = b_j^k\alpha_i\alpha_k + b_i^k\alpha_k\alpha_j = 0. \end{aligned}$$ Hence, $\partial$ is a derivation if and only if there exist $a^k,b_i^k\in\mathbb{C}$ such that $$\begin{aligned} \partial e = a^k\alpha_k\quad\text{and}\quad \partial\alpha_i = b_i^k\alpha_k. \end{aligned}$$ It is now clear that the derivations given in [\[eq:di.def\]](#eq:di.def){reference-type="eqref" reference="eq:di.def"} and [\[eq:dij.def\]](#eq:dij.def){reference-type="eqref" reference="eq:dij.def"} form a vector space basis of $\operatorname{Der}(\mathcal{K}_N)$.
Next, let us compute the commutators of these derivations: $$\begin{aligned} &[\partial_k,\partial_l](e) = \partial_k(i\alpha_l) - \partial_l(i\alpha_k) = 0\\ &[\partial_k,\partial_l](\alpha_m) = \partial_k(0) - \partial_l(0)=0 \end{aligned}$$ gives $[\partial_k,\partial_l]=0$, and $$\begin{aligned} [\partial_i^j,\partial_k^l](e) &= \partial_i^j(0)-\partial_k^l(0) = 0\\ [\partial_i^j,\partial_k^l](\alpha_m) &= \delta_m^l\partial_i^j(\alpha_k)-\delta_m^j\partial_k^l(\alpha_i) =\delta_m^l\delta_k^j\alpha_i-\delta_m^j\delta_i^l\alpha_k\\ &= \delta_k^j\partial_i^l(\alpha_m)-\delta_i^l\partial_k^j(\alpha_m), \end{aligned}$$ from which we conclude that $[\partial_i^j,\partial_k^l]=\delta_k^j\partial_i^l-\delta_i^l\partial_k^j$. Furthermore, $$\begin{aligned} &[\partial_j^k,\partial_l](e) = \partial_j^k(i\alpha_l) = i\delta_l^k\alpha_j = \delta^k_l\partial_j(e)\\ &[\partial_j^k,\partial_l](\alpha_m) = -\delta_m^k\partial_l\alpha_j = 0, \end{aligned}$$ giving $[\partial_j^k,\partial_l] = \delta^k_l\partial_j$. Finally, let us show that $\partial_j$ and $\partial_j^k$ are hermitian derivations; namely $$\begin{aligned} &\partial_j(e^\ast)^\ast = \partial_j(\mathds{1}-e)^\ast=-(i\alpha_j)^\ast = i\alpha_j = \partial_j(e)\\ &\partial_j^k(\alpha_m^\ast)^\ast = \partial_j^k(\alpha_m)^\ast = \delta^k_m\alpha_j = \partial_j^k(\alpha_m) \end{aligned}$$ together with $\partial_j(\alpha_k^\ast)^\ast=\partial_j^k(e^\ast)^\ast = 0$ shows that $\partial_j$ and $\partial_j^k$ are indeed hermitian derivations for $j,k=1,\ldots,N$. ◻ From the above result, we note in particular that $\partial(a)\in\mathcal{K}^\alpha_N$ for all $\partial\in\operatorname{Der}(\mathcal{K}_N)$ and $a\in\mathcal{K}_N$. Proposition [Proposition 20](#prop:derivations){reference-type="ref" reference="prop:derivations"} constructs a basis for all derivations of $\mathcal{K}_N$, and the next result determines which derivations are inner in terms of the basis. **Proposition 21**.
*A derivation $\partial\in\operatorname{Der}(\mathcal{K}_N)$ is inner if and only if there exist $a^0,a^1,\ldots,a^N\in\mathbb{C}$ such that $$\begin{aligned} \label{eq:inner.derivation} \partial= a^i\partial_i + a^0\big(\partial_1^1+ \partial_2^2+ \cdots + \partial_N^N\big). \end{aligned}$$* *Proof.* An inner derivation is by definition of the form $\partial(\cdot)=[\cdot,a]$ for some $a\in\mathcal{K}_N$. Setting $a=\lambda\mathds{1}+\mu e+a^i\alpha_i$ one obtains $$\begin{aligned} &\partial e = [e,a] = a^i\alpha_i\\ &\partial\alpha_k = -\mu\alpha_k \end{aligned}$$ implying that $$\begin{aligned} \partial= -ia^i\partial_i - \mu\big(\partial^1_1+\partial^2_2+\cdots+\partial_N^N\big), \end{aligned}$$ which proves that every inner derivation is of the form given in [\[eq:inner.derivation\]](#eq:inner.derivation){reference-type="eqref" reference="eq:inner.derivation"}. Conversely, since $\big(\partial_1^1+\cdots+\partial_N^N\big)(a)=[e,a]$ and $\partial_k(a)=-i[\alpha_k,a]$ for $a\in\mathcal{K}_N$, every derivation of the form [\[eq:inner.derivation\]](#eq:inner.derivation){reference-type="eqref" reference="eq:inner.derivation"} is inner. ◻ Let us denote by $\mathfrak{g}_{\textrm{inn}}$ the Lie algebra of inner derivations; that is, $\mathfrak{g}_{\textrm{inn}}$ is a Lie algebra of dimension $N+1$ with a basis given by $\{\hat{\partial},\partial_1,\ldots,\partial_N\}$, with $\hat{\partial}=\partial_1^1+\cdots+\partial_N^N$, satisfying $$\begin{aligned} [\partial_i,\partial_j] = 0\quad\text{and}\quad [\hat{\partial},\partial_i] = \partial_i\end{aligned}$$ for $i=1,\ldots,N$ and $$\begin{aligned} \hat{\partial}(a) = [e,a]\qquad \partial_k(a) = -i[\alpha_k,a]\end{aligned}$$ for $a\in\mathcal{K}_N$. ## Differential calculi Given a Lie algebra $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{K}_N)$ (which we shall always assume to be closed under $\partial\to\partial^\ast$), we construct the restricted calculus $\Omega^{}_{\mathfrak{g}}$. Recall that $\Omega^1_{\mathfrak{g}}$ is generated by $da$ for $a\in\mathcal{K}_N$ and, since $\mathcal{K}_N$ is finite dimensional, $\Omega^1_{\mathfrak{g}}$ is generated by $d\alpha_0\equiv ide, d\alpha_1,\ldots, d\alpha_N$. However, depending on the Lie algebra $\mathfrak{g}$, these 1-forms are in general not linearly independent.
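The dependence of the span of $d\alpha_0,\ldots,d\alpha_N$ on $\mathfrak{g}$ can be made concrete numerically. The sketch below is our own (the coefficient encoding $(\lambda,\mu,a^1,\ldots,a^N)$ and the helper `span_dim` are assumptions of the sketch); it implements the basis derivations of Proposition 20, views $da$ as the tuple of values $\partial(a)$ over a derivation basis of $\mathfrak{g}$, and compares two choices of $\mathfrak{g}$.

```python
import numpy as np

N = 2  # number of arrows

# An element a = lam*1 + mu*e + a^i alpha_i is stored as (lam, mu, a^1,...,a^N).
def d_k(k):
    """partial_k of Proposition 20: e -> i alpha_k, alpha_l -> 0."""
    def der(c):
        out = np.zeros(N + 2, dtype=complex)
        out[2 + k] = 1j * c[1]          # coefficient mu gives mu * i*alpha_k
        return out
    return der

def d_kl(k, l):
    """partial_k^l of Proposition 20: e -> 0, alpha_j -> delta_j^l alpha_k."""
    def der(c):
        out = np.zeros(N + 2, dtype=complex)
        out[2 + k] = c[2 + l]           # coefficient a^l gives a^l * alpha_k
        return out
    return der

def span_dim(gen_basis, der_basis):
    """Dimension of span{ da : a in gen_basis }, where (da)(der) = der(a)."""
    rows = [np.concatenate([der(c) for der in der_basis]) for c in gen_basis]
    return int(np.linalg.matrix_rank(np.array(rows)))

# generators alpha_0 := i*e and alpha_1, ..., alpha_N
gens = [np.array([0, 1j] + [0] * N, dtype=complex)]
gens += [np.eye(N + 2, dtype=complex)[2 + i] for i in range(N)]

# For g = Der(K_N) the N+1 one-forms d(alpha_I) are linearly independent ...
full = [d_k(k) for k in range(N)] + [d_kl(k, l) for k in range(N) for l in range(N)]
print(span_dim(gens, full))                           # N + 1

# ... but for g spanned by partial_1,...,partial_N alone, d(alpha_i) = 0 for i >= 1
print(span_dim(gens, [d_k(k) for k in range(N)]))     # 1
```

In the second case all $d\alpha_i$ with $i\geq 1$ vanish identically on $\mathfrak{g}$, so only $d\alpha_0$ survives, illustrating the dependence on $\mathfrak{g}$.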
Nevertheless, one can express the bimodule structure of $\Omega^1_{\mathfrak{g}}$ independently of the choice of $\mathfrak{g}$. **Proposition 22**. *For any $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{K}_N)$ the bimodule structure of $\Omega^{1}_{\mathfrak{g}}$ is given by $$\begin{aligned} &ed\alpha_I = d\alpha_I\qquad \alpha_id\alpha_I = 0\\ &(d\alpha_I)e = 0\qquad (d\alpha_I)\alpha_i = 0, \end{aligned}$$ for $i=1,\ldots,N$ and $I=0,1,\ldots,N$; furthermore, $(d\alpha_I)^\ast=d\alpha_I$ for $I=0,1,\ldots,N$.* *Proof.* Since $\partial a\in\mathcal{K}^\alpha_N$ for all $\partial\in\operatorname{Der}(\mathcal{K}_N)$ and $a\in\mathcal{K}_N$ one obtains $$\begin{aligned} &ed\alpha_I(\partial) = e\partial(\alpha_I) = \partial(\alpha_I)=d\alpha_I(\partial)\\ &\alpha_id\alpha_I(\partial) = \alpha_i\partial(\alpha_I) = 0 \end{aligned}$$ and in the same way one obtains the right bimodule relations. Next, one computes $$\begin{aligned} &(d\alpha_k)^\ast(\partial) = \big(d\alpha_k(\partial^\ast)\big)^\ast = \big(\partial^\ast\alpha_k\big)^\ast = \partial(\alpha_k^\ast) =\partial\alpha_k = d\alpha_k(\partial)\\ &(d\alpha_0)^\ast(\partial) = \big(ide(\partial^\ast)\big)^\ast = -i\big(\partial^\ast e\big)^\ast = -i\partial(e^\ast) = -i\partial(\mathds{1}-e) =i\partial e = d\alpha_0(\partial) \end{aligned}$$ for $\partial\in\mathfrak{g}$, showing that $(d\alpha_I)^\ast=d\alpha_I$. ◻ The above result implies that $\Omega^1_{\mathfrak{g}}$ is generated by $\{d\alpha_I\}_{I=0}^N$ as a complex vector space; i.e. every element $\omega\in\Omega^1_{\mathfrak{g}}$ can be written as $\omega=\lambda^Id\alpha_I$ for $\lambda^I\in\mathbb{C}$. Moreover, Proposition [Proposition 22](#prop:Omega1.bimodule.structure){reference-type="ref" reference="prop:Omega1.bimodule.structure"} implies that $\Omega^1_{\mathfrak{g}}$ is not a free module since $\alpha_k\omega=0$ for any $\omega\in\Omega^1_{\mathfrak{g}}$.
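The bimodule relations of Proposition 22 reduce, after evaluating against any derivation, to multiplications inside $\mathcal{K}_N$ and can therefore be checked numerically. The following sketch is our own; it assumes the coefficient encoding $(\lambda,\mu,a^1,\ldots,a^N)$, the product formula for $\mathcal{K}_N$ derived earlier, and the general form $\partial e=a^k\alpha_k$, $\partial\alpha_i=b_i^k\alpha_k$ of a derivation from the proof of Proposition 20.

```python
import numpy as np

N = 2
rng = np.random.default_rng(1)

def mult(c1, c2):
    """Product in K_N for coefficient vectors (lam, mu, a^1,...,a^N)."""
    out = np.zeros(N + 2, dtype=complex)
    out[0] = c1[0] * c2[0]
    out[1] = c1[0] * c2[1] + c2[0] * c1[1] + c1[1] * c2[1]
    out[2:] = (c1[0] + c1[1]) * c2[2:] + c2[0] * c1[2:]
    return out

def derivation(a_img, b):
    """Derivation with partial(e) = sum_k a_img[k] alpha_k and
    partial(alpha_i) = sum_k b[i,k] alpha_k (general form, Prop. 20)."""
    def der(c):
        out = np.zeros(N + 2, dtype=complex)
        out[2:] = c[1] * a_img + c[2:] @ b
        return out
    return der

e = np.array([0, 1] + [0] * N, dtype=complex)
alphas = [np.eye(N + 2, dtype=complex)[2 + i] for i in range(N)]

# a random derivation and a random element a
par = derivation(rng.normal(size=N), rng.normal(size=(N, N)))
a = rng.normal(size=N + 2) + 1j * rng.normal(size=N + 2)

# evaluated against any derivation, da satisfies the relations of Prop. 22
da = par(a)
assert np.allclose(mult(e, da), da)        # e.da = da
assert np.allclose(mult(da, e), 0)         # da.e = 0
for al in alphas:
    assert np.allclose(mult(al, da), 0)    # alpha_i.da = 0
    assert np.allclose(mult(da, al), 0)    # da.alpha_i = 0
```

The check succeeds for any derivation precisely because $\partial(a)\in\mathcal{K}^\alpha_N$, which is the only fact used in the proof above.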
For $\mathcal{K}_N$, it turns out that there are no higher order forms, as the next result shows. **Proposition 23**. *If $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{K}_N)$ then $\Omega^{k}_{\mathfrak{g}}=0$ for $k\geq 2$.* *Proof.* The module $\Omega^{k}_{\mathfrak{g}}$ is generated by elements of the form $da_1da_2\cdots da_k$ for $a_1,\ldots,a_k\in\mathcal{K}_N$, and $$\begin{aligned} da_1da_2\cdots da_k(\partial_1,\ldots,\partial_k) =\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)(\partial_{\sigma(1)}a_1)(\partial_{\sigma(2)}a_2)\cdots(\partial_{\sigma(k)}a_k). \end{aligned}$$ For $k\geq 2$ the above expression is zero since $\partial a\in\mathcal{K}^\alpha_N$ for all $\partial\in\operatorname{Der}(\mathcal{K}_N)$ and $a\in\mathcal{K}_N$. ◻ Recall that the $k$'th cohomology $H^k(\Omega^{}_{\mathfrak{g}})$ is defined as the closed $k$-forms modulo the exact $k$-forms; that is, $$\begin{aligned} H^k(\Omega^{}_{\mathfrak{g}}) = \ker(d_k)\slash \operatorname{im}(d_{k-1}).\end{aligned}$$ Since $\Omega^{k}_{\mathfrak{g}}=0$ for $k\geq 2$, by Proposition [Proposition 23](#prop:Omegak.zero){reference-type="ref" reference="prop:Omegak.zero"}, the only remaining cohomology is $H^1(\Omega^{}_{\mathfrak{g}})$. **Proposition 24**. *If $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{K}_N)$ then $H^1(\Omega^{}_{\mathfrak{g}})=0$.* *Proof.* Since $\Omega^{k}_{\mathfrak{g}}=0$ for $k\geq 2$ (by Proposition [Proposition 23](#prop:Omegak.zero){reference-type="ref" reference="prop:Omegak.zero"}) every element of $\Omega^{1}_{\mathfrak{g}}$ is closed. Moreover, since every element of $\Omega^{1}_{\mathfrak{g}}$ can be written as $\lambda^Id\alpha_I$ for $\lambda^I\in\mathbb{C}$ it follows immediately that every element of $\Omega^{1}_{\mathfrak{g}}$ is exact, since $$\lambda^0de + \lambda^1d\alpha_1+\cdots+\lambda^Nd\alpha_N =d\big(\lambda^0e + \lambda^1\alpha_1+\cdots+\lambda^N\alpha_N\big).\qedhere$$ ◻ Let us now consider the structure of $\Omega^1_{\mathfrak{g}}$ in more detail.
Given $a\in\mathcal{K}_N$ it follows from Proposition [Proposition 22](#prop:Omega1.bimodule.structure){reference-type="ref" reference="prop:Omega1.bimodule.structure"} that $$\begin{aligned} \left\langle da \right\rangle = \{\lambda da:\lambda\in\mathbb{C}\}\end{aligned}$$ is a sub-bimodule of $\Omega^{1}_{\mathfrak{g}}$ with $$\label{eq:da.module.relations} \begin{split} &e da = da\qquad da\cdot e = 0\\ &\alpha_ida = 0 \qquad da\cdot \alpha_i =0, \end{split}$$ and due to the above bimodule relations one finds that when $da\neq 0$ $$\begin{aligned} \left\langle da \right\rangle\simeq\{\lambda\alpha_k:\lambda\in\mathbb{C}\} \equiv M_{\alpha_k}\end{aligned}$$ as a $\mathcal{K}_N$-bimodule for $k=1,\ldots,N$. **Proposition 25**. *$M_{\alpha_k}=\{\lambda\alpha_k:\lambda\in\mathbb{C}\}$ is a projective $\mathcal{K}_N$-bimodule.* *Proof.* Let $\mathcal{K}_N^e=\mathcal{K}_N\otimes_{\mathbb{C}}\mathcal{K}_N^{op}$ denote the enveloping algebra of $\mathcal{K}_N$, and let $\hat{e}\in\mathcal{K}_N^e$ denote the idempotent $\hat{e}=e\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)}$, where $\bar{a}$ denotes $a\in\mathcal{K}_N$ as an element in $\mathcal{K}_N^{op}$; i.e. $\bar{a}\cdot\bar{b}=\overline{ba}$. Then $\mathcal{K}_N^e\hat{e}$ is a projective left $\mathcal{K}_N^e$-module. Let us now show that $M_{\alpha_k}$ is isomorphic to $\mathcal{K}_N^e\hat{e}$ as a left $\mathcal{K}_N^e$-module. By definition, an arbitrary element of $\mathcal{K}_N^e\hat{e}$ can be written as sums of elements of the form $$\begin{aligned} (a\otimes_{\mathbb{C}}\bar{b})(e\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)}) =ae\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)b} \end{aligned}$$ and by using the algebra relations one concludes that $$\begin{aligned} \mathcal{K}_N^e\hat{e} = \{\lambda\big(e\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)}\big):\lambda\in\mathbb{C}\}.
\end{aligned}$$ Defining $\phi:\mathcal{K}_N^e\hat{e}\to M_{\alpha_k}$ as $$\begin{aligned} \phi\big(\lambda\big(e\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)}\big)\big) = \lambda\alpha_k \end{aligned}$$ it is clear that $\phi$ is a vector space isomorphism. Furthermore, for $$\begin{aligned} a_1 = \lambda_1\mathds{1}+\mu_1e + a_1^i\alpha_i\quad\text{and}\quad a_2 = \lambda_2\mathds{1}+\mu_2e + a_2^i\alpha_i \end{aligned}$$ one finds that $$\begin{aligned} &\phi\big((a_1\otimes_{\mathbb{C}}\bar{a}_2)e\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)}\big) =\phi\big(a_1e\otimes_{\mathbb{C}}\overline{((\mathds{1}-e)a_2)}\big) =(\lambda_1+\mu_1)\lambda_2\alpha_k\\ &(a_1\otimes_{\mathbb{C}}\bar{a}_2)\phi\big(e\otimes_{\mathbb{C}}\overline{(\mathds{1}-e)}\big) = a_1\alpha_k a_2 = (\lambda_1+\mu_1)\alpha_k a_2 = (\lambda_1+\mu_1)\lambda_2\alpha_k \end{aligned}$$ showing that $\phi$ is an isomorphism of left $\mathcal{K}_N^e$-modules. Hence, $M_{\alpha_k}$ is projective as a left $\mathcal{K}_N^e$-module or, equivalently, as a $\mathcal{K}_N$-bimodule. ◻ In particular, Proposition [Proposition 25](#prop:Malphak.projective.bimodule){reference-type="ref" reference="prop:Malphak.projective.bimodule"} implies that $\left\langle da \right\rangle$ is projective as a $\mathcal{K}_N$-bimodule (and consequently projective as both a left and right $\mathcal{K}_N$-module). As we have seen, for any choice of Lie algebra $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{K}_N)$, the module $\Omega^1_{\mathfrak{g}}$ is a finite dimensional vector space spanned by $d\alpha_0,d\alpha_1,\ldots,d\alpha_N$; however, the dimension of $\Omega^1_{\mathfrak{g}}$ clearly depends on the choice of $\mathfrak{g}$. Let $n=\dim_{\mathbb{C}}(\Omega^{1}_{\mathfrak{g}})$ and let $\{dx_a\}_{a=1}^n$ denote a (vector space) basis of $\Omega^{1}_{\mathfrak{g}}$. Hence, for each $\omega\in\Omega^{1}_{\mathfrak{g}}$ there exist unique $\lambda^1,\ldots,\lambda^n\in\mathbb{C}$ such that $\omega=\lambda^adx_a$.
**Proposition 26**. *Let $\mathfrak{g}\subseteq\operatorname{Der}(\mathcal{K}_N)$ and assume that $\{dx_a\}_{a=1}^n$ is a (vector space) basis of $\Omega^1_{\mathfrak{g}}$. Then $$\begin{aligned} \Omega^1_{\mathfrak{g}}\simeq \bigoplus_{a=1}^n\left\langle dx_a \right\rangle \end{aligned}$$ as a $\mathcal{K}_N$-bimodule.* *Proof.* Let $M=\oplus_{a=1}^n\left\langle dx_a \right\rangle$ and define $\phi:\Omega^1_{\mathfrak{g}}\to M$ by $$\begin{aligned} \phi(\lambda^adx_a) = \lambda^1dx_1\oplus\cdots\oplus\lambda^ndx_n. \end{aligned}$$ Then it is clear that $\phi$ is an isomorphism of vector spaces. To prove that $\phi$ is a left module homomorphism one checks that $$\begin{aligned} &\phi(e \lambda^adx_a)-e\phi(\lambda^adx_a) = \phi(\lambda^adx_a)-e\phi(\lambda^adx_a)\\ &\qquad\qquad=(\mathds{1}-e)\big(\lambda^1dx_1\oplus\cdots\oplus\lambda^ndx_n\big)=0\\ &\phi(\alpha_i\lambda^adx_a)-\alpha_i\phi(\lambda^adx_a) = -\alpha_i\big(\lambda^1dx_1\oplus\cdots\oplus\lambda^ndx_n\big)=0, \end{aligned}$$ and to show that $\phi$ is a right module homomorphism, one computes $$\begin{aligned} &\phi(\lambda^adx_a\cdot e)-\phi(\lambda^adx_a)e =-\big(\lambda^1dx_1\oplus\cdots\oplus\lambda^ndx_n\big)e=0\\ &\phi(\lambda^adx_a\cdot \alpha_i)-\phi(\lambda^adx_a)\alpha_i =-\big(\lambda^1dx_1\oplus\cdots\oplus\lambda^ndx_n\big)\alpha_i=0. \end{aligned}$$ We conclude that $\phi:\Omega^1_{\mathfrak{g}}\to M$ is a bimodule isomorphism. ◻ The following is then an immediate consequence of the above result. **Corollary 27**. *$\Omega^1_{\mathfrak{g}}$ is a projective $\mathcal{K}_N$-bimodule.* Recall that a differential calculus is called connected if $da=0$ implies that $a=\lambda\mathds{1}$ for some $\lambda\in\mathbb{C}$. For $\Omega^1_{\mathfrak{g}}$, one proves the following criterion for connectedness. **Proposition 28**.
*$\Omega^1_{\mathfrak{g}}$ is connected if and only if $d\alpha_0,d\alpha_1,\ldots,d\alpha_N$ is a vector space basis of $\Omega^1_{\mathfrak{g}}$.* *Proof.* Let $a=\lambda\mathds{1}+ \mu e + a^i\alpha_i$ be an arbitrary element of $\mathcal{K}_N$. $\Omega^1_{\mathfrak{g}}$ being connected is equivalent to $$\begin{aligned} &da = 0\quad\Rightarrow\quad a=\lambda\mathds{1}\quad\Leftrightarrow\quad\\ &-i\mu d\alpha_0 + a^id\alpha_i = 0 \quad\Rightarrow\quad\mu=a^i=0 \end{aligned}$$ which is equivalent to $d\alpha_0,d\alpha_1,\ldots,d\alpha_N$ being a vector space basis of $\Omega^1_{\mathfrak{g}}$. ◻ Let us consider the case when $\mathfrak{g}=\operatorname{Der}(\mathcal{K}_N)$ in which case we write $\Omega^1_{\mathfrak{g}}=\Omega^1_{\operatorname{Der}}$. In that case it is easy to see that $\Omega_{\operatorname{Der}}^1$ is connected. Namely, for $a=\lambda\mathds{1}+ \mu e + a^i\alpha_i$, assuming $da=0$ gives $$\begin{aligned} &da(\partial_k)=0 \quad\Rightarrow\quad\mu\partial_ke = 0\quad\Rightarrow\quad i\mu\alpha_k = 0\quad\Rightarrow\quad\mu = 0\\ &da(\partial^k_l) = 0\quad\Rightarrow\quad a^i\partial^k_l\alpha_i = 0\quad\Rightarrow\quad a^k\alpha_l = 0\quad\Rightarrow\quad a^k=0\end{aligned}$$ implying that $a=\lambda\mathds{1}$, giving that $\{d\alpha_I\}_{I=0}^N$ is a vector space basis of $\Omega_{\operatorname{Der}}^1$ via Proposition [Proposition 28](#prop:connected.vspace.basis){reference-type="ref" reference="prop:connected.vspace.basis"}. Furthermore, Proposition [Proposition 26](#prop:Omegag.direct.sum){reference-type="ref" reference="prop:Omegag.direct.sum"} implies that $$\begin{aligned} \Omega^1_{\operatorname{Der}} \simeq \bigoplus_{I=0}^N\left\langle d\alpha_I \right\rangle\end{aligned}$$ as a $\mathcal{K}_N$-bimodule. 
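The connectedness of $\Omega^1_{\operatorname{Der}}$ can also be checked by linear algebra: the map $a\mapsto\big(\partial a\big)_{\partial\in\text{basis}}$ should have kernel exactly $\mathbb{C}\mathds{1}$. The sketch below is our own (coefficient encoding $(\lambda,\mu,a^1,\ldots,a^N)$ assumed) and assembles this map as a matrix whose numerical rank is $N+1$, i.e. nullity $1$.

```python
import numpy as np

N = 3

# basis derivations of Der(K_N) acting on coefficient vectors (lam, mu, a^1..a^N)
def d_k(k):
    def der(c):
        out = np.zeros(N + 2, dtype=complex)
        out[2 + k] = 1j * c[1]              # partial_k(e) = i alpha_k
        return out
    return der

def d_kl(k, l):
    def der(c):
        out = np.zeros(N + 2, dtype=complex)
        out[2 + k] = c[2 + l]               # partial_k^l(alpha_j) = delta_j^l alpha_k
        return out
    return der

basis = [d_k(k) for k in range(N)] + [d_kl(k, l) for k in range(N) for l in range(N)]

# matrix of the linear map a -> (values of da on all basis derivations)
cols = []
for j in range(N + 2):
    c = np.zeros(N + 2, dtype=complex); c[j] = 1
    cols.append(np.concatenate([der(c) for der in basis]))
D = np.array(cols).T

# nullity 1: da = 0 forces mu = a^i = 0, so the kernel is spanned by the unit
assert np.linalg.matrix_rank(D) == N + 1
```

This is the numerical counterpart of the computation above: the columns corresponding to $e$ and the $\alpha_i$ are independent, while the column of $\mathds{1}$ vanishes.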
## Traces and integration A trace on a $\ast$-algebra $\mathcal{A}$ is a $\mathbb{C}$-linear functional $\tau:\mathcal{A}\to\mathbb{C}$ such that $\tau(a^\ast)=\overline{\tau(a)}$ and $\tau(ab)=\tau(ba)$ for all $a,b\in\mathcal{A}$. If $\tau(\mathds{1})=1$ then the trace is said to be normalized. Moreover, if $\tau(a^\ast a)\geq 0$ for all $a\in\mathcal{A}$ then $\tau$ is said to be a positive trace. Note that, if the algebra is finite dimensional, a trace is determined by its values on the basis elements. **Proposition 29**. *A $\mathbb{C}$-linear functional $\tau:\mathcal{K}_N\to\mathbb{C}$ is a trace on $\mathcal{K}_N$ if and only if $\tau(\alpha_i)=0$ for $i=1,\ldots,N$ and there exist $\tau_0,\tau_1\in\mathbb{R}$ such that $$\begin{aligned} \tau(\mathds{1}) = \tau_0\quad\text{and}\quad \tau(e) = \tfrac{1}{2}\tau_0+i\tau_1. \end{aligned}$$* *Proof.* Let $\tau$ be a $\mathbb{C}$-linear functional on $\mathcal{K}_N$, let $a_1,a_2\in\mathcal{K}_N$ be arbitrary elements and write $$\begin{aligned} &a_1 = \lambda_1\mathds{1}+\mu_1e+a_1^i\alpha_i\\ &a_2 = \lambda_2\mathds{1}+\mu_2e+a_2^i\alpha_i. \end{aligned}$$ One finds that $$\begin{aligned} \tau(a_1a_2)-\tau(a_2a_1)= (\mu_1a_2^i-\mu_2a_1^i)\tau(\alpha_i), \end{aligned}$$ and requiring this to be zero for all $a_1,a_2\in\mathcal{K}_N$ is equivalent to $\tau(\alpha_i)=0$ for $i=1,\ldots,N$. Furthermore, for $a=\lambda\mathds{1}+\mu e+a^i\alpha_i$, $\tau(a^\ast)=\overline{\tau(a)}$ for all $a\in\mathcal{K}_N$ is equivalent to $$\begin{aligned} \bar{\lambda}\big(\tau(\mathds{1})-\overline{\tau(\mathds{1})}\big) +\bar{\mu}\big(\tau(\mathds{1})-\tau(e)-\overline{\tau(e)}\big)=0 \end{aligned}$$ for all $\lambda,\mu\in\mathbb{C}$, which is equivalent to $$\begin{aligned} \tau(\mathds{1})=\tau_0\quad\text{and}\quad\tau(e)=\tfrac{1}{2}\tau_0+i\tau_1 \end{aligned}$$ for some $\tau_0,\tau_1\in\mathbb{R}$.
◻ It follows that there is a 1-parameter family of normalized traces on $\mathcal{K}_N$, given by $$\begin{aligned} \tau(\mathds{1}) = 1\qquad \tau(e) = \tfrac{1}{2}+i\tau_1\qquad \tau(\alpha_i)=0\end{aligned}$$ for $\tau_1\in\mathbb{R}$. The next result shows that there are no non-trivial positive traces on $\mathcal{K}_N$. **Proposition 30**. *If $\tau$ is a trace on $\mathcal{K}_N$ such that $\tau(a^\ast a)\geq 0$ for all $a\in\mathcal{K}_N$ then $\tau(a)=0$ for all $a\in\mathcal{K}_N$.* *Proof.* Let $\tau$ be a trace on $\mathcal{K}_N$. From Proposition [Proposition 29](#prop:traces.KN){reference-type="ref" reference="prop:traces.KN"} it follows that $$\begin{aligned} \tau(\mathds{1}) = \tau_0\qquad \tau(e) = \tfrac{1}{2}\tau_0+i\tau_1\qquad \tau(\alpha_i) = 0\quad i=1,\ldots,N \end{aligned}$$ for some $\tau_0,\tau_1\in\mathbb{R}$. For $a=\lambda\mathds{1}+\mu e+a^i\alpha_i$ one obtains $$\begin{aligned} \tau(a^\ast a) &= (|\lambda|^2+\lambda\bar{\mu})\tau_0 +(\mu\bar{\lambda}-\bar{\mu}\lambda)\big(\tfrac{1}{2}\tau_0+i\tau_1\big)\\ &= \big(|\lambda|^2+\operatorname{Re}(\lambda\bar{\mu})\big)\tau_0 + 2\operatorname{Im}(\lambda\bar{\mu})\tau_1. \end{aligned}$$ If $\tau_0\neq 0$ one can always find $\lambda,\mu\in\mathbb{R}$ such that $$\begin{aligned} \tau(a^\ast a) = \lambda(\lambda+\mu)\tau_0<0, \end{aligned}$$ and if $\tau_1\neq 0$ one can always find $\lambda',\mu\in\mathbb{R}$ such that for $\lambda=i\lambda'$ $$\begin{aligned} \tau(a^\ast a) = \lambda'^2\tau_0 + 2\mu\lambda'\tau_1 =\lambda'(\lambda'\tau_0+2\mu\tau_1)<0. \end{aligned}$$ Hence, if $\tau(a^\ast a)\geq 0$ for all $a\in\mathcal{K}_N$ then $\tau_0=\tau_1=0$, implying that $\tau(a)=0$ for all $a\in\mathcal{K}_N$. ◻ Now, we would like to use traces to integrate elements of $\Omega^1_{\mathfrak{g}}$ and we will proceed in analogy with a $1$-dimensional interval $[a,b]\subseteq\mathbb{R}$. 
For $f\in C^\infty([a,b])$ one has $$\begin{aligned} \int_a^b df = \int_a^b\frac{df}{dx}dx = f(b)-f(a),\end{aligned}$$ and $\tilde{\tau}(f)=f(b)-f(a)$ is clearly a trace on the (commutative) algebra $C^\infty([a,b])$. Thus, setting $$\begin{aligned} \int da = \tau(a)\end{aligned}$$ for $a\in\mathcal{K}_N$, the map $\int:\Omega^1_{\mathfrak{g}}\to\mathbb{C}$ is well defined if $\tau(a)=0$ whenever $da=0$. Assuming that $\Omega^1_{\mathfrak{g}}$ is connected (giving $da=0\Rightarrow a=\lambda\mathds{1}$) it is necessary that $\tau(\mathds{1})=\tau_0=0$, which is consistent with $f(b)-f(a)=0$ for any constant function $f$. It follows that $$\begin{aligned} \int d\alpha_0 = i\tau(e) = i(i\tau_1) = -\tau_1\qquad\text{and}\qquad\int d\alpha_k = 0\end{aligned}$$ for $k=1,\ldots,N$. ## Connections and hermitian forms Let us now consider connections on $\Omega^1_{\mathfrak{g}}$; i.e. bilinear maps $\nabla:\mathfrak{g}\times\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}$ satisfying a Leibniz rule, which for left connections reads $$\begin{aligned} \nabla_\partial(a\omega) = a\nabla_{\partial}\omega + (\partial a)\omega\end{aligned}$$ for $a\in\mathcal{K}_N$ and $\omega\in\Omega^1_{\mathfrak{g}}$. Alternatively, as is common for a general differential graded algebra (not necessarily defined by derivations), a left connection on $\Omega^1_{\mathfrak{g}}$ is a linear map $\nabla:\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}\otimes_{\mathcal{K}_N}\Omega^1_{\mathfrak{g}}$ satisfying $$\begin{aligned} \nabla(a\omega) = a\nabla\omega + da\otimes_{\mathcal{K}_N}\omega,\end{aligned}$$ and in case the differential graded algebra is defined from derivations, one can recover a connection $\nabla:\mathfrak{g}\times\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}$ by evaluating the leftmost 1-form in the tensor product against the derivation. However, the converse is not necessarily true, as it depends on the image of the action of the derivations in $\mathfrak{g}$. 
In fact, for $\mathcal{K}_N$ there are no non-trivial connections $\nabla:\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}\otimes_{\mathcal{K}_N}\Omega^1_{\mathfrak{g}}$ due to the following result. **Lemma 31**. *$\Omega^1_{\mathfrak{g}}\otimes_{\mathcal{K}_N}\Omega^1_{\mathfrak{g}}=0$.* *Proof.* Let us start by noting that $$\begin{aligned} d\alpha_I\otimes_{\mathcal{K}_N}d\alpha_J = d\alpha_I\otimes_{\mathcal{K}_N}ed\alpha_J = (d\alpha_I)e\otimes_{\mathcal{K}_N}d\alpha_J = 0 \end{aligned}$$ since $ed\alpha_J=d\alpha_J$ and $(d\alpha_I)e=0$ by Proposition [Proposition 22](#prop:Omega1.bimodule.structure){reference-type="ref" reference="prop:Omega1.bimodule.structure"}. Since every element $\omega\in\Omega^1_{\mathfrak{g}}$ can be written as $\omega=\lambda^Id\alpha_I$ (where $\lambda^I\in\mathbb{C}$), it follows that an arbitrary element of $\Omega^1_{\mathfrak{g}}\otimes_{\mathcal{K}_N}\Omega^1_{\mathfrak{g}}$ can be written as $$\begin{aligned} \lambda^{IJ}d\alpha_I\otimes_{\mathcal{K}_N}d\alpha_J \end{aligned}$$ for $\lambda^{IJ}\in\mathbb{C}$. Hence, by the previous argument it follows that $\Omega^1_{\mathfrak{g}}\otimes_{\mathcal{K}_N}\Omega^1_{\mathfrak{g}}=0$. ◻ Note that the above result does not prevent the existence of connections $\nabla:\mathfrak{g}\times\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}$, as we shall see in the following. Let us start by showing that any bilinear map $\nabla:\mathfrak{g}\times\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}$ defines a connection. **Proposition 32**. *Let $\nabla$ be a $\mathbb{C}$-bilinear map $$\nabla:\mathfrak{g}\times\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}.$$ Then $\nabla$ is a bimodule connection on $\Omega^1_{\mathfrak{g}}$. 
Moreover, if $\partial\in\mathfrak{g}$ then $\nabla_{\partial}:\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}$ is a bimodule homomorphism.* *Proof.* To prove that $\nabla$ is a bimodule connection, one has to check that $$\begin{aligned} &\nabla_{\partial}(a\omega) - a\nabla_{\partial}\omega - (\partial a)\omega = 0\label{eq:left.Leibniz}\\ &\nabla_{\partial}(\omega a) - (\nabla_{\partial}\omega)a - \omega(\partial a) = 0\label{eq:right.Leibniz} \end{aligned}$$ for $\partial\in\mathfrak{g}$, $\omega\in\Omega^1_{\mathfrak{g}}$ and $a\in\mathcal{K}_N$. From Proposition [Proposition 22](#prop:Omega1.bimodule.structure){reference-type="ref" reference="prop:Omega1.bimodule.structure"} it follows that $f\omega=\omega f=0$ for all $f\in\mathcal{K}^\alpha_N$ and $\partial\in\mathfrak{g}$, implying that $(\partial a)\omega=\omega(\partial a)=0$ since $\partial a\in\mathcal{K}^\alpha_N$ for all $a\in\mathcal{K}_N$. Thus, for $a=\lambda\mathds{1}+ \mu e+a^i\alpha_i$, giving $a\omega=(\lambda+\mu)\omega$ and $\omega a=\lambda\omega$ for all $\omega\in\Omega^1_{\mathfrak{g}}$, one finds that $$\begin{aligned} &\nabla_{\partial}(a\omega) - a\nabla_{\partial}\omega - (\partial a)\omega = \nabla_{\partial}\big((\lambda+\mu)\omega\big) -(\lambda+\mu)\nabla_{\partial}\omega = 0\\ &\nabla_{\partial}(\omega a) - (\nabla_{\partial}\omega)a - \omega(\partial a) = \nabla_{\partial}(\lambda\omega) -\lambda\nabla_{\partial}\omega = 0, \end{aligned}$$ showing that [\[eq:left.Leibniz\]](#eq:left.Leibniz){reference-type="eqref" reference="eq:left.Leibniz"} and [\[eq:right.Leibniz\]](#eq:right.Leibniz){reference-type="eqref" reference="eq:right.Leibniz"} are satisfied. Moreover, since $(\partial a)\omega=\omega(\partial a)=0$ it follows that $$\begin{aligned} \nabla_{\partial}(a\omega) = a\nabla_{\partial}\omega\quad\text{and}\quad \nabla_{\partial}(\omega a) = (\nabla_{\partial}\omega)a \end{aligned}$$ showing that $\nabla_{\partial}$ is indeed a bimodule homomorphism. 
◻ Note in particular that the above result implies that there exists a trivial connection on $\Omega^1_{\mathfrak{g}}$, given by $\nabla_{\partial}\omega=0$ for all $\partial\in\mathfrak{g}$ and $\omega\in\Omega^1_{\mathfrak{g}}$. Next, let us consider hermitian forms on $\Omega^1_{\mathfrak{g}}$. It turns out that, due to the bimodule relations, the components of any hermitian form lie in the ideal $\mathcal{K}^\alpha_N$. **Proposition 33**. *Let $\{dx_a\}_{a=1}^n$ denote a vector space basis of $\Omega^1_{\mathfrak{g}}$ and let $\omega=\omega^adx_a$, $\eta=\eta^adx_a$, with $\omega^a,\eta^a\in\mathbb{C}$, be arbitrary elements of $\Omega^1_{\mathfrak{g}}$. The map $h:\Omega^1_{\mathfrak{g}}\times\Omega^1_{\mathfrak{g}}\to\mathcal{K}_N$, defined by $$\label{eq:def.hform.Omega} h(\omega,\eta) = \omega^ah_{ab}\bar{\eta}^b,$$ is a left hermitian form on $\Omega^1_{\mathfrak{g}}$ if and only if $h_{ba}=h_{ab}^\ast\in\mathcal{K}^\alpha_N$ for $a,b=1,\ldots,n$.* *Proof.* First, assume that $h_{ba}=h_{ab}^\ast\in\mathcal{K}^\alpha_N$. To show that [\[eq:def.hform.Omega\]](#eq:def.hform.Omega){reference-type="eqref" reference="eq:def.hform.Omega"} defines a left hermitian form on $\Omega^1_{\mathfrak{g}}$ one needs to check that $$\begin{aligned} h(\omega,\eta)^\ast = h(\eta,\omega)\qquad h(a\omega,\eta) = ah(\omega,\eta) \end{aligned}$$ for all $\omega,\eta\in\Omega^1_{\mathfrak{g}}$ and $a\in\mathcal{K}_N$ (the necessary bilinearity properties are trivial). It follows immediately from [\[eq:def.hform.Omega\]](#eq:def.hform.Omega){reference-type="eqref" reference="eq:def.hform.Omega"} that $$\begin{aligned} h(\omega,\eta)^\ast &= (\omega^ah_{ab}\bar{\eta}^b)^\ast = \eta^bh_{ba}\bar{\omega}^a = h(\eta,\omega) \end{aligned}$$ since $h_{ab}^\ast=h_{ba}$. 
Recall that, for $b\in\mathcal{K}^\alpha_N$ and $a=\lambda\mathds{1}+ \mu e + a^i\alpha_i$, one has $$\begin{aligned} ab = (\lambda+\mu)b\quad\text{and}\quad ba = \lambda b \end{aligned}$$ giving $$\begin{aligned} &h(a\omega,\eta) = h\big((\lambda+\mu)\omega^adx_a,\eta^bdx_b\big) = (\lambda+\mu)h_{ab}\omega^a\bar{\eta}^b = ah_{ab}\omega^a\bar{\eta}^b = ah(\omega,\eta) \end{aligned}$$ using that $a\omega = (\lambda+\mu)\omega$ for all $\omega\in\Omega^1_{\mathfrak{g}}$. Hence, $h$ is a left hermitian form on $\Omega^1_{\mathfrak{g}}$. Conversely, assume that $h$ is a left hermitian form on $\Omega^1_{\mathfrak{g}}$. Let us first show that the bimodule relations imply that $h_{ab}\in\mathcal{K}^\alpha_N$. To this end, let us fix $a,b$ and write $h_{ab}=\lambda\mathds{1}+ \mu e + a^i\alpha_i$. Requiring $h(edx_a,dx_b)=eh(dx_a,dx_b)$ gives $$\begin{aligned} h_{ab} = eh_{ab}\quad\Rightarrow\quad \lambda\mathds{1}+\mu e + a^i\alpha_i = (\lambda+\mu)e + a^i\alpha_i\quad\Rightarrow\quad\lambda = 0 \end{aligned}$$ and, moreover, requiring $h(dx_a,\alpha_idx_b)=h(dx_a,dx_b)\alpha_i^\ast$ (a consequence of the two properties above) gives $$\begin{aligned} 0=h_{ab}\alpha_i^\ast = h_{ab}\alpha_i\quad\Rightarrow\quad 0 = \mu\alpha_i\quad\Rightarrow\quad\mu=0 \end{aligned}$$ showing that $h_{ab}\in\mathcal{K}^\alpha_N$ for all $a,b=1,\ldots,n$. ◻ Recall that a hermitian form $h$ is invertible if the associated map $\hat{h}:\Omega^1_{\mathfrak{g}}\to(\Omega^1_{\mathfrak{g}})^\ast$, given by $\hat{h}(\omega)(\eta) = h(\omega,\eta)$, is a bijection. If $h$ is not invertible, it is said to be degenerate. As we will see in Proposition [Proposition 35](#prop:no.invertible.forms){reference-type="ref" reference="prop:no.invertible.forms"}, every hermitian form on $\Omega^1_{\mathfrak{g}}$ is degenerate. However, in order to prove this statement, let us first recall the following result which gives a characterization of projective modules with invertible hermitian forms. **Proposition 34** ([@a:levi-civita.class.nms], Proposition 2.6). 
*Let $M$ be a left $\mathcal{A}$-module with generators $\{e_a\}_{a=1}^n$, let $h$ be a left hermitian form on $M$ and set $h_{ab}=h(e_a,e_b)$. Then $M$ is a projective module and $h$ is an invertible hermitian form if and only if there exist $h^{ab}\in\mathcal{A}$ (for $a,b=1,\ldots,n$) such that $(h^{ab})^\ast=h^{ba}$ and $h_{cp}h^{pq}e_q=e_c$ for $a,b,c=1,\ldots,n$.* Since $\Omega^1_{\mathfrak{g}}$ is a left projective module, we can use the above result to prove that there are no invertible hermitian forms on $\Omega^1_{\mathfrak{g}}$. **Proposition 35**. *Every left hermitian form on $\Omega^1_{\mathfrak{g}}$ is degenerate.* *Proof.* Let $\{dx_a\}_{a=1}^n$ be a vector space basis of $\Omega^1_{\mathfrak{g}}$. (In particular, $\{dx_a\}_{a=1}^n$ is also a set of generators for $\Omega^1_{\mathfrak{g}}$.) Since $\Omega^1_{\mathfrak{g}}$ is projective, Proposition [Proposition 34](#prop:huab.equiv.projective){reference-type="ref" reference="prop:huab.equiv.projective"} implies that a left hermitian form on $\Omega^1_{\mathfrak{g}}$ is invertible if and only if there exist $h^{ab}\in\mathcal{K}_N$ such that $$\begin{aligned} \label{eq:hab.huab.dxa} h_{ab}h^{bc}dx_c = dx_a \end{aligned}$$ for $a=1,\ldots,n$, where $h_{ab}=h(dx_a,dx_b)$. Since $\mathcal{K}^\alpha_N$ is an ideal and $h_{ab}\in\mathcal{K}^\alpha_N$ (by Proposition [Proposition 33](#prop:hab.in.KaN){reference-type="ref" reference="prop:hab.in.KaN"}), it follows that $h_{ab}h^{bc}\in\mathcal{K}^\alpha_N$ for all $a,c=1,\ldots,n$. Since $adx_c=0$ for all $a\in\mathcal{K}^\alpha_N$ (by Proposition [Proposition 22](#prop:Omega1.bimodule.structure){reference-type="ref" reference="prop:Omega1.bimodule.structure"}) it follows that the left-hand side of [\[eq:hab.huab.dxa\]](#eq:hab.huab.dxa){reference-type="eqref" reference="eq:hab.huab.dxa"} is zero for any $h^{ab}\in\mathcal{K}_N$, which contradicts the fact that $\{dx_a\}_{a=1}^n$ is a basis of $\Omega^1_{\mathfrak{g}}$. 
Thus, we conclude that no such $h^{ab}$ exist, implying that $h$ is degenerate. ◻ In [@a:levi-civita.class.nms] it is shown that one can always construct a class of connections that are compatible with an invertible hermitian form on a projective module. However, Proposition [Proposition 35](#prop:no.invertible.forms){reference-type="ref" reference="prop:no.invertible.forms"} clearly implies that one can not make use of these results. As we shall see, connections compatible with degenerate hermitian forms can exist as well. Now, assume that $\{dx_a\}_{a=1}^n$ is a vector space basis of $\Omega^1_{\mathfrak{g}}$. Proposition [Proposition 32](#prop:C.linear.gives.connection){reference-type="ref" reference="prop:C.linear.gives.connection"} implies that one can define a connection by choosing linear maps $\gamma_a^b:\mathfrak{g}\to\mathbb{C}$ for $a,b=1,\ldots,n$, and setting $$\begin{aligned} \nabla_{\partial}dx_a = \gamma^b_a(\partial)dx_b,\end{aligned}$$ extending linearly to all of $\Omega^1_{\mathfrak{g}}$. Let us now consider the construction of torsion free connections. By definition, a torsion free connection satisfies $$\begin{aligned} d\omega(\partial_1,\partial_2) = \big(\nabla_{\partial_1}\omega\big)(\partial_2) -\big(\nabla_{\partial_2}\omega\big)(\partial_1)\end{aligned}$$ for $\omega\in\Omega^1_{\mathfrak{g}}$ and $\partial_1,\partial_2\in\mathfrak{g}$. The implications of this condition on the maps $\gamma_a^b:\mathfrak{g}\to\mathbb{C}$ depend on the Lie algebra $\mathfrak{g}$. Let us point out two cases below. **Lemma 36**. *Let $\nabla$ be a connection on $\Omega^1_{\mathfrak{g}}$ given by $\nabla_{\partial}d\alpha_I=\gamma_I^J(\partial)d\alpha_J$, where $\gamma_{I}^J:\mathfrak{g}\to\mathbb{C}$ are $\mathbb{C}$-linear maps for $I,J=0,\ldots,N$. 
If $\partial_i^j\in\mathfrak{g}$ for $i,j=1,\ldots,N$ then $\{d\alpha_i\}_{i=1}^N$ are linearly independent over $\mathbb{C}$ and, moreover, if $\nabla$ is a torsion free connection then $\gamma_I^j(\partial_k^l)=0$ for $I=0,\ldots,N$ and $j,k,l=1,\ldots,N$.* *Proof.* Assume that $\partial_j^k\in\mathfrak{g}$ for $j,k=1,\ldots,N$. Let us first show that $\{d\alpha_i\}_{i=1}^N$ are linearly independent. To this end, assume that $$\begin{aligned} &\lambda^id\alpha_i=0 \quad\Rightarrow\quad \lambda^id\alpha_i(\partial_j^k) = 0\quad\Rightarrow\quad \lambda^i\partial_j^k\alpha_i=0\quad\Rightarrow\quad\\ &\lambda^i\delta^k_i\alpha_j = 0\quad\Rightarrow\quad \lambda^k\alpha_j=0 \end{aligned}$$ for all $j,k=1,\ldots,N$, implying that $\lambda^k=0$ for $k=1,\ldots,N$. Hence, $\{d\alpha_i\}_{i=1}^N$ are linearly independent. Since $\nabla$ is a torsion free connection, one finds that $$\begin{aligned} 0 &=(\nabla_{\partial_j^k}d\alpha_I)(\partial_l^m) -(\nabla_{\partial_l^m}d\alpha_I)(\partial_j^k) = \gamma_I^J(\partial_j^k)d\alpha_J(\partial_l^m) -\gamma_I^J(\partial_l^m)d\alpha_J(\partial_j^k)\\ &= \gamma_I^m(\partial_j^k)\alpha_l -\gamma_I^k(\partial_l^m)\alpha_j \end{aligned}$$ for all $I=0,1,\ldots,N$ and $j,k,l,m = 1,\ldots,N$. In particular, for $j\neq l$ this implies that $\gamma_I^m(\partial_j^k) = 0$ for $I=0,1,\ldots,N$ and $j,k,m = 1,\ldots,N$. ◻ Thus, for any Lie algebra $\mathfrak{g}$ such that $\partial_i^j\in\mathfrak{g}$ for $i,j=1,\ldots,N$, torsion free connections on $\Omega^1_{\mathfrak{g}}$ satisfy $$\begin{aligned} \nabla_{\partial_i^j}d\alpha_I = \gamma_{I}^J(\partial_i^j)d\alpha_J = \gamma_{I}^0(\partial_i^j)d\alpha_0.\end{aligned}$$ **Lemma 37**. *Let $\nabla$ be a connection on $\Omega^1_{\mathfrak{g}}$ given by $\nabla_{\partial}d\alpha_I=\gamma_I^J(\partial)d\alpha_J$, where $\gamma_{I}^J:\mathfrak{g}\to\mathbb{C}$ are $\mathbb{C}$-linear maps for $I,J=0,\ldots,N$. 
If $\nabla$ is torsion free and $\partial_i\in\mathfrak{g}$ for $i=1,\ldots,N$, then $\gamma_I^0(\partial_k)=0$ for $I=0,\ldots,N$ and $k=1,\ldots,N$.* *Proof.* Note that if $\partial_k\in\mathfrak{g}$ then $de\neq 0$ since $de(\partial_k)=i\alpha_k\neq 0$. Assuming that $\nabla$ is torsion free, one obtains (for $k\neq j$) $$\begin{aligned} 0 &= \big(\nabla_{\partial_k}d\alpha_I\big)(\partial_j)-\big(\nabla_{\partial_j}d\alpha_I\big)(\partial_k) = \gamma_I^J(\partial_k)d\alpha_J(\partial_j) -\gamma_I^J(\partial_j)d\alpha_J(\partial_k)\\ &= i\gamma_I^0(\partial_k)\partial_je - i\gamma_I^0(\partial_j)\partial_ke = -\gamma_I^0(\partial_k)\alpha_j + \gamma_I^0(\partial_j)\alpha_k \end{aligned}$$ which implies that $\gamma_I^0(\partial_k)=0$ for $k=1,\ldots,N$ and $I=0,\ldots,N$. ◻ In this case, if $\partial_i\in\mathfrak{g}$ for $i=1,\ldots,N$ then any torsion free connection on $\Omega^1_{\mathfrak{g}}$ satisfies $$\begin{aligned} \nabla_{\partial_i}d\alpha_I = \gamma_I^J(\partial_i)d\alpha_J =\gamma_I^k(\partial_i)d\alpha_k.\end{aligned}$$ Clearly, in the case when $\mathfrak{g}=\operatorname{Der}(\mathcal{K}_N)$, Lemma [Lemma 36](#lemma:christoffel.djk){reference-type="ref" reference="lemma:christoffel.djk"} and Lemma [Lemma 37](#lemma:christoffel.dk){reference-type="ref" reference="lemma:christoffel.dk"} apply, giving the following result, showing that there is a unique torsion free connection on $\Omega^1_{\operatorname{Der}}$. **Proposition 38**. *If $\nabla$ is a torsion free connection on $\Omega_{\operatorname{Der}}^1$ then $\nabla_{\partial}\omega=0$ for all $\partial\in\operatorname{Der}(\mathcal{K}_N)$ and $\omega\in\Omega_{\operatorname{Der}}^1$.* *Proof.* Let $\nabla$ be a torsion free connection on $\Omega_{\operatorname{Der}}^1$ and write $$\begin{aligned} \nabla_{\partial}d\alpha_I = \gamma_I^J(\partial)d\alpha_J \end{aligned}$$ where $\gamma_I^J:\mathfrak{g}\to\mathbb{C}$ is a $\mathbb{C}$-linear map. 
Since $\nabla$ is torsion free and $\partial_j^k,\partial_l\in\mathfrak{g}$ for $j,k,l=1,\ldots,N$, one can use Lemma [Lemma 36](#lemma:christoffel.djk){reference-type="ref" reference="lemma:christoffel.djk"} and Lemma [Lemma 37](#lemma:christoffel.dk){reference-type="ref" reference="lemma:christoffel.dk"} to conclude that $\gamma_I^0(\partial_k)=\gamma_I^j(\partial_k^l)=0$ for $j,k,l=1,\ldots,N$ and $I=0,\ldots,N$. Moreover, since $\nabla$ is torsion free, one finds that $$\begin{aligned} 0 &= \big(\nabla_{\partial_j}d\alpha_I\big)(\partial_k^l)-\big(\nabla_{\partial_k^l}d\alpha_I\big)(\partial_j) =\gamma_I^J(\partial_j)d\alpha_J(\partial_k^l) -\gamma_I^J(\partial_k^l)d\alpha_J(\partial_j)\\ &= \gamma_I^m(\partial_j)\delta_m^l\alpha_k-i\gamma_I^0(\partial_k^l)\partial_je =\gamma_I^l(\partial_j)\alpha_k+\gamma_I^0(\partial_k^l)\alpha_j, \end{aligned}$$ implying that $\gamma_I^k(\partial_j)=\gamma_I^0(\partial_j^k)=0$ for $I=0,\ldots,N$ and $j,k=1,\ldots,N$. Together with $\gamma_I^0(\partial_k)=\gamma_I^j(\partial_k^l)=0$, one concludes that $\gamma_I^J(\partial_i)=\gamma_I^J(\partial_i^j)=0$ for $i,j=1,\ldots,N$ and $I,J=0,\ldots,N$. Since $\{\partial_i,\partial_j^k\}$ is a basis of $\operatorname{Der}(\mathcal{K}_N)$ and $\{d\alpha_I\}_{I=0}^N$ is a basis of $\Omega_{\operatorname{Der}}^1$, one concludes that $\nabla_{\partial}\omega=0$ for all $\partial\in\mathfrak{g}$ and $\omega\in\Omega_{\operatorname{Der}}^1$. ◻ # Levi-Civita connections on $\Omega^1_{\mathfrak{g}}$ {#sec:LC.Omegaoneg} Given a left hermitian form $h$ on $\Omega^1_{\mathfrak{g}}$, a (left) Levi-Civita connection is a connection on $\Omega^1_{\mathfrak{g}}$ such that $$\begin{aligned} &\partial h(\omega,\eta) = h\big(\nabla_{\partial}\omega,\eta\big) + h(\omega,\nabla_{\partial^\ast}\eta)\\ &(\nabla_{\partial}\omega)(\partial')-(\nabla_{\partial'}\omega)(\partial)-d\omega(\partial,\partial') = 0\end{aligned}$$ for $\partial,\partial'\in\mathfrak{g}$ and $\omega,\eta\in\Omega^1_{\mathfrak{g}}$. 
If $\mathfrak{g}=\operatorname{Der}(\mathcal{K}_N)$ then Proposition [Proposition 38](#prop:torsion.free.on.OmegaDer){reference-type="ref" reference="prop:torsion.free.on.OmegaDer"} implies that a torsion free connection on $\Omega^1_{\operatorname{Der}}$ is compatible with a hermitian form $h$ if and only if $\partial h(\omega,\eta)=0$ for all $\partial\in\operatorname{Der}(\mathcal{K}_N)$ and $\omega,\eta\in\Omega^1_{\operatorname{Der}}$, giving $h(d\alpha_I,d\alpha_J)=\lambda_{IJ}\mathds{1}$ for some $\lambda_{IJ}\in\mathbb{C}$. However, Proposition [Proposition 33](#prop:hab.in.KaN){reference-type="ref" reference="prop:hab.in.KaN"} then implies that $\lambda_{IJ}=0$ since $h(d\alpha_I,d\alpha_J)$ is necessarily in $\mathcal{K}^\alpha_N$. Hence, it is only for the trivial hermitian form, i.e. $h(\omega,\eta)=0$ for all $\omega,\eta\in\Omega^1_{\operatorname{Der}}$, that a Levi-Civita connection exists on $\Omega^1_{\operatorname{Der}}$. Thus, to get more interesting examples of Levi-Civita connections one needs to choose a different Lie algebra of derivations. In the following, we will present a few examples where Levi-Civita connections exist for non-trivial choices of hermitian form. ## Levi-Civita connections for inner derivations Let us now consider the case when $\mathfrak{g}=\mathfrak{g}_{\textrm{inn}}$ is the Lie algebra of inner derivations spanned by $\hat{\partial}=\partial_1^1+\cdots+\partial_N^N$ and $\partial_i$ for $i=1,\ldots,N$, satisfying $$\begin{aligned} [\partial_i,\partial_j]=0\quad\text{and}\quad [\hat{\partial},\partial_i] = \partial_i\end{aligned}$$ for $i,j=1,\ldots,N$, and $$\begin{aligned} \partial_k(a) = -i[\alpha_k,a]\quad\text{and}\quad \hat{\partial}(a) = [e,a]\end{aligned}$$ for all $a\in\mathcal{K}_N$, as well as $\hat{\partial}\alpha_i=\alpha_i$ for $i=1,\ldots,N$. 
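The bracket relations above admit a direct matrix verification. In the sketch below (our encoding, not from the paper) the products $e^2=e$, $e\alpha_i=\alpha_i$, $\alpha_ie=\alpha_i\alpha_j=0$ implicit in the bimodule rules quoted earlier are used to realize $\hat{\partial}=[e,\cdot]$ and $\partial_k=-i[\alpha_k,\cdot]$ as matrices, and the relations $[\partial_i,\partial_j]=0$, $[\hat{\partial},\partial_i]=\partial_i$ and $\hat{\partial}\alpha_i=\alpha_i$ are checked.

```python
import numpy as np

N = 2
dim = 2 + N  # coordinates (lam, mu, a_1, ..., a_N)

def mult(x, y):
    # product on K_N with e^2 = e, e*alpha_i = alpha_i,
    # alpha_i*e = 0 and alpha_i*alpha_j = 0 (our encoding)
    out = np.zeros(dim, dtype=complex)
    out[0] = x[0] * y[0]
    out[1] = x[0] * y[1] + x[1] * y[0] + x[1] * y[1]
    out[2:] = (x[0] + x[1]) * y[2:] + y[0] * x[2:]
    return out

e = np.zeros(dim, dtype=complex); e[1] = 1.0
alpha = [np.eye(dim, dtype=complex)[2 + i] for i in range(N)]

def ad(b):  # the inner derivation a -> [b, a] as a matrix
    return np.column_stack([mult(b, v) - mult(v, b) for v in np.eye(dim)])

hat = ad(e)
d = [-1j * ad(alpha[k]) for k in range(N)]  # partial_k = -i[alpha_k, .]

for i in range(N):
    for j in range(N):
        assert np.allclose(d[i] @ d[j] - d[j] @ d[i], 0)  # [partial_i, partial_j] = 0
    assert np.allclose(hat @ d[i] - d[i] @ hat, d[i])     # [hat, partial_i] = partial_i
    assert np.allclose(hat @ alpha[i], alpha[i])          # hat(alpha_i) = alpha_i
print("relations verified")
```

Both commutators vanish or reproduce $\partial_i$ exactly because the image of every $\partial_k$ lies in $\mathcal{K}^\alpha_N$, which all the $\partial_k$ annihilate while $\hat{\partial}$ acts on it as the identity.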
The calculus is connected since for $a=\lambda\mathds{1}+ \mu e + a^i\alpha_i$, $da=0$ implies that $$\begin{aligned} &da(\partial_i) = 0\quad\Rightarrow\quad\partial_ia = 0 \quad\Rightarrow\quad\mu\alpha_i = 0\quad\Rightarrow\quad\mu = 0\\ &da(\hat{\partial}) = 0 \quad\Rightarrow\quad\hat{\partial}a = 0\quad\Rightarrow\quad a^i\alpha_i = 0 \quad\Rightarrow\quad a^i = 0\end{aligned}$$ giving $a=\lambda\mathds{1}$. Thus, by Proposition [Proposition 28](#prop:connected.vspace.basis){reference-type="ref" reference="prop:connected.vspace.basis"}, we know that $\{d\alpha_I\}_{I=0}^N$ is a vector space basis of $\Omega^1_{\mathfrak{g}}$. An arbitrary connection on $\Omega^1_{\mathfrak{g}}$ is given by $$\begin{aligned} \label{eq:conn.inner.gamma} \nabla_{\partial}d\alpha_I = \gamma_I^J(\partial)d\alpha_J\end{aligned}$$ where $\gamma_I^J:\mathfrak{g}\to\mathbb{C}$ is a $\mathbb{C}$-linear map. For inner derivations, there is a canonical connection given by $$\begin{aligned} \nabla_{[a,\cdot]}\omega = [a,\omega]\end{aligned}$$ corresponding to $\gamma_I^J(\partial_i)=0$ and $\gamma_I^J(\hat{\partial})=\delta_I^J$ in [\[eq:conn.inner.gamma\]](#eq:conn.inner.gamma){reference-type="eqref" reference="eq:conn.inner.gamma"}. However, this connection is not torsion free since $$\begin{aligned} \big(\nabla_{[a,\cdot]}d\alpha_I\big)([b,\cdot]) -\big(\nabla_{[b,\cdot]}d\alpha_I\big)([a,\cdot]) &= [a,\alpha_I]([b,\cdot])-[b,\alpha_I]([a,\cdot])\\ &= [b,[a,\alpha_I]]-[a,[b,\alpha_I]] =[\alpha_I,[a,b]]\end{aligned}$$ giving, for $I=0$, $a=e$ and $b=\alpha_k$, $$\begin{aligned} \big(\nabla_{[e,\cdot]}de\big)([\alpha_k,\cdot]) -\big(\nabla_{[\alpha_k,\cdot]}de\big)([e,\cdot]) =[e,[e,\alpha_k]] = [e,\alpha_k] = \alpha_k \neq 0.\end{aligned}$$ **Proposition 39**. 
*$\nabla$ is a torsion free connection on $\Omega^1_{\mathfrak{g}}$ if and only if there exist $\gamma_I^J\in\mathbb{C}$, for $I,J=0,\ldots,N$ such that $$\begin{aligned} \nabla_{\hat{\partial}}d\alpha_I &= \gamma_I^Jd\alpha_J\label{eq:inner.nabla.torsion.two}\\ \nabla_{\partial_k}d\alpha_I &= -\gamma_I^0d\alpha_k\label{eq:inner.nabla.torsion.one} \end{aligned}$$ for $k=1,\ldots,N$ and $I=0,\ldots,N$.* *Proof.* Let $\nabla$ be a torsion free connection on $\Omega^1_{\mathfrak{g}}$ and write $$\begin{aligned} \nabla_{\partial}d\alpha_I = \gamma_I^J(\partial)d\alpha_J \end{aligned}$$ where $\gamma_I^J:\mathfrak{g}\to\mathbb{C}$ is a $\mathbb{C}$-linear map. Since $\nabla$ is torsion free, one obtains (for $k\neq j$) $$\begin{aligned} 0 &= \big(\nabla_{\partial_k}d\alpha_I\big)(\partial_j)-\big(\nabla_{\partial_j}d\alpha_I\big)(\partial_k) = \gamma_I^J(\partial_k)d\alpha_J(\partial_j) -\gamma_I^J(\partial_j)d\alpha_J(\partial_k)\\ &= i\gamma_I^0(\partial_k)\partial_je - i\gamma_I^0(\partial_j)\partial_ke = -\gamma_I^0(\partial_k)\alpha_j + \gamma_I^0(\partial_j)\alpha_k \end{aligned}$$ which implies that $\gamma_I^0(\partial_k)=0$ for $k=1,\ldots,N$ and $I=0,\ldots,N$. Furthermore, one finds that $$\begin{aligned} 0 &= \big(\nabla_{\partial_k}d\alpha_I\big)(\hat{\partial})-\big(\nabla_{\hat{\partial}}d\alpha_I\big)(\partial_k) =\gamma_I^J(\partial_k)d\alpha_J(\hat{\partial}) -\gamma_I^J(\hat{\partial})d\alpha_J(\partial_k)\\ &= \gamma_I^j(\partial_k)\alpha_j -i\gamma_I^0(\hat{\partial})de(\partial_k) = \gamma_I^j(\partial_k)\alpha_j +\gamma_I^0(\hat{\partial})\alpha_k \end{aligned}$$ giving $\gamma_I^j(\partial_k)=0$ for $j\neq k$ and $\gamma_I^k(\partial_k)=-\gamma_I^0(\hat{\partial})$ for $I=0,\ldots,N$ and $j,k=1,\ldots,N$. 
Thus, one gets $$\begin{aligned} &\nabla_{\hat{\partial}}d\alpha_I = \gamma_I^J(\hat{\partial})d\alpha_J\\ &\nabla_{\partial_k}d\alpha_I = \gamma_I^J(\partial_k)d\alpha_J =\gamma_I^k(\partial_k)d\alpha_k =-\gamma_I^0(\hat{\partial})d\alpha_k \end{aligned}$$ giving [\[eq:inner.nabla.torsion.one\]](#eq:inner.nabla.torsion.one){reference-type="eqref" reference="eq:inner.nabla.torsion.one"} and [\[eq:inner.nabla.torsion.two\]](#eq:inner.nabla.torsion.two){reference-type="eqref" reference="eq:inner.nabla.torsion.two"} with $\gamma_I^J=\gamma_I^J(\hat{\partial})$. Conversely, it is straightforward to check that the connection defined by [\[eq:inner.nabla.torsion.one\]](#eq:inner.nabla.torsion.one){reference-type="eqref" reference="eq:inner.nabla.torsion.one"} and [\[eq:inner.nabla.torsion.two\]](#eq:inner.nabla.torsion.two){reference-type="eqref" reference="eq:inner.nabla.torsion.two"} is torsion free. ◻ Since $\partial_k^\ast=\partial_k$, $\hat{\partial}^\ast=\hat{\partial}$ and $(d\alpha_I)^\ast=d\alpha_I$, it is easy to check that the torsion free connections in Proposition [Proposition 39](#prop:inner.der.torsion.free){reference-type="ref" reference="prop:inner.der.torsion.free"} are $\ast$-connections if $\gamma_I^J\in\mathbb{R}$. Let us now construct a torsion free $\ast$-connection compatible with the left hermitian form of the type $$\begin{aligned} \big(h_{IJ}\big) = \big(h(d\alpha_I,d\alpha_J)\big) = \begin{pmatrix} \rho_0 & -i\rho_1 & \cdots & -i\rho_N\\ i\rho_1 & \\ \vdots & & 0 & \\ i\rho_N \end{pmatrix}h_0\end{aligned}$$ with $\rho_I\in\mathbb{R}$ for $I=0,\ldots,N$ and $h_0\in\mathcal{K}^\alpha_N$ with $h_0^\ast=h_0$. Note that $h_{IJ}$ defines a left hermitian form via $$\begin{aligned} h(\omega^Id\alpha_I,\eta^J d\alpha_J) = \omega^Ih_{IJ}\bar{\eta}^J\end{aligned}$$ by Proposition [Proposition 33](#prop:hab.in.KaN){reference-type="ref" reference="prop:hab.in.KaN"}. 
Let us now choose $\gamma^I_J=\tfrac{1}{2}\delta_J^I$ in Proposition [Proposition 39](#prop:inner.der.torsion.free){reference-type="ref" reference="prop:inner.der.torsion.free"}, giving a torsion free $\ast$-connection on $\Omega^1_{\mathfrak{g}}$ as $$\begin{aligned} \nabla_{\hat{\partial}}d\alpha_I = \tfrac{1}{2}d\alpha_I\qquad \nabla_{\partial_k}d\alpha_0 = -\tfrac{1}{2}d\alpha_k\qquad \nabla_{\partial_k}d\alpha_l = 0\end{aligned}$$ for $I=0,\ldots,N$ and $k,l=1,\ldots,N$. Let us now check metric compatibility of $\nabla$: $$\begin{aligned} \hat{\partial}h(d\alpha_I,d\alpha_J) - h(\nabla_{\hat{\partial}}d\alpha_I,d\alpha_J)-h(d\alpha_I,\nabla_{\hat{\partial}}d\alpha_J) = \hat{\partial}h_{IJ} -\tfrac{1}{2}h_{IJ}-\tfrac{1}{2}h_{IJ} = 0\end{aligned}$$ since $\hat{\partial}h_{IJ} = h_{IJ}$, and furthermore $$\begin{aligned} \partial_k h(d\alpha_I,d\alpha_J) - h(\nabla_{\partial_k}d\alpha_I,d\alpha_J)-h(d\alpha_I,\nabla_{\partial_k}d\alpha_J) = \tfrac{1}{2}\delta_I^0h_{kJ} + \tfrac{1}{2}\delta_J^0h_{Ik}.\end{aligned}$$ If $I,J\neq 0$ then the above expression clearly vanishes, and if $I=J=0$ then one has $$\begin{aligned} \tfrac{1}{2}\delta_I^0h_{kJ} + \tfrac{1}{2}\delta_J^0h_{Ik} =\tfrac{1}{2}h_{k0} + \tfrac{1}{2}h_{0k} =\tfrac{1}{2}i\rho_kh_0 -\tfrac{1}{2}i\rho_kh_0 = 0. \end{aligned}$$ If $I=0$ and $J=j\neq 0$ (and similarly for $J=0$ and $I\neq 0$) then $$\begin{aligned} \tfrac{1}{2}\delta_I^0h_{kJ} + \tfrac{1}{2}\delta_J^0h_{Ik} = \tfrac{1}{2}h_{kj} = 0\end{aligned}$$ since $h_{kj}=0$. Hence, $\nabla$ is a $\ast$-bimodule Levi-Civita connection compatible with $h$. 
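The componentwise cancellation just carried out can also be confirmed numerically. In the sketch below (ours; the $\rho_I$ are arbitrary sample values) $P_{IJ}$ denotes the coefficient of $h_0$ in $h_{IJ}$, and the defect $\tfrac{1}{2}\delta_I^0h_{kJ}+\tfrac{1}{2}\delta_J^0h_{Ik}$ arising from the $\partial_k$-compatibility condition is checked to vanish for all index combinations.

```python
import numpy as np

N = 3
rho = np.array([0.9, 0.4, -1.1, 2.0])  # rho_0, ..., rho_N (sample values)

# P[I, J] = coefficient of h_0 in h_{IJ} = h(d alpha_I, d alpha_J)
P = np.zeros((N + 1, N + 1), dtype=complex)
P[0, 0] = rho[0]
for k in range(1, N + 1):
    P[0, k] = -1j * rho[k]
    P[k, 0] = 1j * rho[k]

# hat-compatibility is immediate: hat(h_IJ) = h_IJ since h_0 lies in
# K^alpha_N, so h_IJ - h_IJ/2 - h_IJ/2 = 0.  For partial_k the defect
# reduces to (1/2)*delta_{I0}*P[k, J] + (1/2)*delta_{J0}*P[I, k]:
max_defect = max(
    abs(0.5 * (I == 0) * P[k, J] + 0.5 * (J == 0) * P[I, k])
    for k in range(1, N + 1)
    for I in range(N + 1)
    for J in range(N + 1)
)
print(max_defect)  # 0.0
```

The only nonzero contributions occur at $I=J=0$, where $P_{k0}=i\rho_k$ and $P_{0k}=-i\rho_k$ cancel exactly, mirroring the case analysis above.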
## Levi-Civita connections for a set of outer derivations In this section, let $$\begin{aligned} \mathfrak{g}= \mathbb{C}\left\langle \tilde{\partial}_i=\partial_i+\partial_i^i:\,\, i=1,\ldots,N \right\rangle\end{aligned}$$ giving $\tilde{\partial}_i^\ast=\tilde{\partial}_i$, $[\tilde{\partial}_i,\tilde{\partial}_j]=0$ and $$\begin{aligned} \tilde{\partial}_ke = i\alpha_k\quad\text{and}\quad\tilde{\partial}_i\alpha_k = \delta_{ik}\alpha_i.\end{aligned}$$ Note that $\tilde{\partial}_i$ is an outer derivation for $i=1,\ldots,N$, but $$\begin{aligned} \tilde{\partial}_1+\cdots+\tilde{\partial}_N = \partial_1+\cdots+\partial_N+\partial_1^1+\cdots+\partial_N^N\end{aligned}$$ is a sum of inner derivations, and therefore an inner derivation. **Proposition 40**. *A vector space basis for $\Omega^1_{\mathfrak{g}}$ is given by $\{d\alpha_i\}_{i=1}^N$ and it holds that $$\begin{aligned} d\alpha_0 + d\alpha_1+\cdots+d\alpha_N=0. \end{aligned}$$ It follows that $\Omega^1_{\mathfrak{g}}$ is not connected.* *Proof.* One computes $$\begin{aligned} -d\alpha_0(\tilde{\partial}_k) = -i(\partial_k+\partial^k_k)(e) = -i\partial_ke = -i(i\alpha_k)=\alpha_k \end{aligned}$$ and $$\begin{aligned} (d\alpha_1+\cdots+d\alpha_N)(\tilde{\partial}_k) = \partial_k^k(\alpha_1)+\cdots+\partial_k^k(\alpha_N) = \partial_k^k\alpha_k = \alpha_k, \end{aligned}$$ showing that $d\alpha_0+d\alpha_1+\cdots+d\alpha_N = 0$. It follows immediately that $$\begin{aligned} d(\alpha_1+\cdots+\alpha_N+ie) = 0 \end{aligned}$$ which shows that $\Omega^1_{\mathfrak{g}}$ is not connected. Moreover, it follows that $\{d\alpha_k\}_{k=1}^N$ generates $\Omega^1_{\mathfrak{g}}$. Let us now show that $\{d\alpha_k\}_{k=1}^N$ are linearly independent over $\mathbb{C}$. 
Thus, assume that $\lambda^kd\alpha_k=0$ for $\lambda^k\in\mathbb{C}$, which implies that $$\begin{aligned} 0=\lambda^kd\alpha_k(\tilde{\partial}_l) = \lambda^k\partial_l^l\alpha_k =\lambda^l\alpha_l\quad\text{(no sum over $l$)} \end{aligned}$$ giving $\lambda^l=0$ for $l=1,\ldots,N$. ◻ Thus, it follows (cf. Proposition [Proposition 32](#prop:C.linear.gives.connection){reference-type="ref" reference="prop:C.linear.gives.connection"}) that an arbitrary connection on $\Omega^1_{\mathfrak{g}}$ is determined by $$\begin{aligned} \nabla_{\tilde{\partial}_i}d\alpha_j = \gamma_{ij}^kd\alpha_k\end{aligned}$$ with $\gamma_{ij}^k\in\mathbb{C}$. **Proposition 41**. *A connection $\nabla:\mathfrak{g}\times\Omega^1_{\mathfrak{g}}\to\Omega^1_{\mathfrak{g}}$ is torsion free if and only if there exists $\gamma_{ij}\in\mathbb{C}$ such that $$\begin{aligned} \label{eq:dt.torsion.free.conn} \nabla_{\tilde{\partial}_i}d\alpha_j = \gamma_{ij}d\alpha_i \end{aligned}$$ for $i,j=1,\ldots,N$.* *Proof.* One computes (writing out sums explicitly) $$\begin{aligned} \big(\nabla_{\tilde{\partial}_k}d\alpha_i\big)(\tilde{\partial}_l)-\big(\nabla_{\tilde{\partial}_l}d\alpha_i\big)(\tilde{\partial}_k) &=\sum_{m=1}^N\Big[\gamma_{ki}^m\tilde{\partial}_l(\alpha_m)-\gamma_{li}^m\tilde{\partial}_k(\alpha_m)\Big] =\gamma_{ki}^l\alpha_l - \gamma_{li}^k\alpha_k, \end{aligned}$$ for $k,l=1,\ldots,N$, and concludes that the connection is torsion free if and only if $\gamma_{ki}^l=0$ for $i=1,\ldots,N$ and $k\neq l$. Hence, a torsion free connection is given by $$\begin{aligned} \nabla_{\tilde{\partial}_i}d\alpha_j = \gamma_{ij}^id\alpha_i, \end{aligned}$$ showing that every torsion free connection is given as in [\[eq:dt.torsion.free.conn\]](#eq:dt.torsion.free.conn){reference-type="eqref" reference="eq:dt.torsion.free.conn"} (setting $\gamma_{ij}=\gamma_{ij}^i$). 
◻ Since $\tilde{\partial}_i^\ast=\tilde{\partial}_i$ and $(d\alpha_i)^\ast=d\alpha_i$ it is easy to check that the connection in Proposition [Proposition 41](#prop:connection.dti.torsion.free){reference-type="ref" reference="prop:connection.dti.torsion.free"} is a $\ast$-connection if $\gamma_{ij}\in\mathbb{R}$. Now, let us construct a torsion free $\ast$-connection on $\Omega^1_{\mathfrak{g}}$ that is compatible with the diagonal hermitian form given by $$\begin{aligned} h_{ij}=h(d\alpha_i,d\alpha_j)=\delta_{ij}\lambda_i\alpha_i \end{aligned}$$ for $\lambda_i\in\mathbb{R}$. Setting $\gamma_{ij}=\tfrac{1}{2}\delta_{ij}$ in Proposition [Proposition 41](#prop:connection.dti.torsion.free){reference-type="ref" reference="prop:connection.dti.torsion.free"} it follows that $$\begin{aligned} \label{eq:dti.lc.connection} \nabla_{\tilde{\partial}_i}d\alpha_j = \tfrac{1}{2}\delta_{ij}d\alpha_i\end{aligned}$$ is a torsion free $\ast$-connection. Next, one checks that $\nabla$ is compatible with $h$: $$\begin{aligned} \tilde{\partial}_ih_{jk} &- h(\nabla_{\tilde{\partial}_i}d\alpha_j,d\alpha_k)-h(d\alpha_j,\nabla_{\tilde{\partial}_i}d\alpha_k)\\ & = \delta_{jk}\lambda_j\tilde{\partial}_i\alpha_j - \tfrac{1}{2}\delta_{ij}h_{ik} - \tfrac{1}{2}\delta_{ik}h_{ji}\\ &= \lambda_j\delta_{jk}\delta_{ij}\alpha_i - \tfrac{1}{2}\delta_{ij}\delta_{ik}\lambda_i\alpha_i - \tfrac{1}{2}\delta_{ik}\delta_{ji}\lambda_j\alpha_j = 0.\end{aligned}$$ Hence, [\[eq:dti.lc.connection\]](#eq:dti.lc.connection){reference-type="eqref" reference="eq:dti.lc.connection"} defines a $\ast$-bimodule Levi-Civita connection on $\Omega^1_{\mathfrak{g}}$ with respect to the hermitian form $h(d\alpha_i,d\alpha_j)=\delta_{ij}\lambda_i\alpha_i$.

# Acknowledgements {#acknowledgements .unnumbered}

J.A. would like to thank E. Darpö for discussions. Furthermore, J.A. is supported by the Swedish Research Council grant 2017-03710.
--- abstract: | In the local setting, Gröbner cells are affine spaces that parametrize ideals in ${\bf k}[\![x,y]\!]$ that share the same leading term ideal with respect to a local term ordering. In particular, all ideals in a cell have the same Hilbert function, so they provide a cellular decomposition of the punctual Hilbert scheme compatible with its Hilbert function stratification. We exploit the parametrization given in [@HW21] via Hilbert-Burch matrices to compute the Betti strata, with hands-on examples of deformations that preserve the Hilbert function, and revisit some classical results along the way. Moreover, we move towards an explicit parametrization of all local Gröbner cells. address: - Centre de Recerca Matemàtica - Freie Universität Berlin, Arnimallee 3, 14195 Berlin, Germany author: - Roser Homs - Anna-Lena Winz bibliography: - references.bib title: Deformations of local Artin rings via Hilbert-Burch matrices ---

# Introduction

The punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ is a fundamental object in algebraic geometry that parametrizes ideals of colength $n$ in ${\bf k}[\![x,y]\!]$. During the 1970s and 1980s, several authors [@Bri77; @BG74; @ES; @Gal74; @Gra83; @Iar77; @Ya89] studied its geometrical properties. In particular, Briançon [@Bri77] and Iarrobino [@Iar77] proved that $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ is irreducible of dimension $n-1$ for ${\bf k}$ algebraically closed. A natural approach to understanding these schemes better is to decompose them into smaller spaces by considering sets of flat deformations of a given colength $n$ monomial ideal. On the one hand, Briançon and Galligo [@BG74] introduced the technique of vertical standard basis stratification of $\operatorname{Hilb}^n{\bf k}[[x, y]]$, consisting in the study of the generators of such deformations.
Yaméogo [@Ya89] showed that each vertical stratum of $\operatorname{Hilb}^n{\bf k}[[x, y]]$ is isomorphic to an affine space of dimension given by a very simple formula. On the other hand, Conca and Valla [@CV08] focused on studying the syzygies of the deformations of monomial ideals. For a monomial ideal $E$, they used Hilbert-Burch matrices to parametrize the so-called Gröbner cell $V_{\operatorname{lex}}(E)$ of all ideals $J$ with leading term ideal $\operatorname{Lt_{lex}}(J)=E$ with respect to the lexicographical term ordering $\operatorname{lex}$. When the Hilbert scheme is smooth, these cells are affine spaces by [@Bia73]. A term ordering $\tau$ determines a ${\bf k}^*$-action on the punctual Hilbert scheme with isolated fixed points --- the monomial ideals --- and the $V_{\tau}(E)$ are all those points in the Hilbert scheme specialising to the point $E$. When one has a smooth scheme with a ${\bf k}^*$-action with isolated fixed points, one obtains a cellular decomposition of that space. This approach has been used to study certain invariants --- such as Betti numbers --- of Hilbert schemes of points of $\mathbb{A}^2$, $\mathbb{P}^2$ and $(\mathbb{A}^2,0)$, see [@ES; @Got90]. The cellular decomposition of $\operatorname{Hilb}^n{\bf k}[[x, y]]$ resulting from the vertical stratification, or equivalently from Gröbner cells, is not compatible with the Hilbert function stratification of $\operatorname{Hilb}^n{\bf k}[[x, y]]$. In other words, colength $n$ ideals with different Hilbert functions may belong to the same Gröbner cell.
Building on the works of Rossi and Sharifan [@RS10] and Bertella [@Ber09] on producing zero-dimensional ideals in ${\bf k}[[x, y]]$ with a given Hilbert function, the authors of the present paper consider in [@HW21] a local term ordering ${\overline{\tau}}$ to define Gröbner cells that overcome this problem. The parametrization of $V_{\overline{\tau}}(E)$ in terms of Hilbert-Burch matrices has been completely described for a special class of monomial ideals, which includes the well-studied lex-segment ideals, see [@HW21 Theorem 5.7]. The goal of the present contribution is twofold: (1) to exploit the parametrization of Gröbner cells for lex-segment ideals to study the Betti strata and homogeneous subcells of the subscheme $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ that parametrizes ideals with a given Hilbert function $h$; and (2) to move towards a complete description of a cellular decomposition of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ that is compatible with the Hilbert function stratification. The paper is structured as follows. We first introduce the machinery needed for studying Gröbner cells in the local setting and recall the fundamental result of [@HW21], namely the parametrization of Gröbner cells of lex-segment ideals in terms of canonical Hilbert-Burch matrices. We then focus on the use --- in both practice and theory --- of such matrices to obtain deformations of the lex-segment ideal with desired properties, such as a given minimal number of generators or homogeneity. The main results there parameterize the Betti strata and the homogeneous sub-cell, respectively, of the Gröbner cell for lex-segment ideals. Our journey towards these statements includes revisiting classical results from the perspective of canonical Hilbert-Burch matrices.
Finally, the last section is devoted to cellular decompositions of the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ that are compatible with its Hilbert function stratification. We explicitly compute such a decomposition for $n=6$ and briefly describe the Julia programming language module we created to compute Gröbner cells of any monomial ideal according to the parametrization in [@HW21 Conjecture 5.14]. We provide computational evidence for the correctness of the conjecture.

# Gröbner cells in the local setting {#sec:cells}

In this section we will introduce the notation and recall some basics on local term orderings. We will finish with our main result from [@HW21], which will be applied in the following sections to prove and reprove some properties of the punctual Hilbert scheme and its points, codimension two ideals in the power series ring. **Definition 1**. A term ordering $\tau$ in the polynomial ring $P={\bf k}[x_1,\dots,x_n]$ induces a reverse-degree ordering ${\overline{\tau}}$ in $R={\bf k}[\![x_1,\dots,x_n]\!]$ such that for any monomials $m,m'$ in $R$, $m>_{{\overline{\tau}}} m'$ if and only if $$\deg(m) < \deg(m') \mbox{ or }\deg(m) = \deg(m') \mbox{ and } m >_{\tau} m'.$$ We call ${\overline{\tau}}$ the **local term ordering** induced by the global ordering $\tau$. **Definition 2**. Given an ideal $J$ of $R$, we define the **leading term ideal** of $J$ as the monomial ideal in $P$ generated by the leading terms with respect to the local degree ordering ${\overline{\tau}}$, i.e. $\operatorname{Lt_{\overline{\tau}}}(J)=\left( \operatorname{Lt_{\overline{\tau}}}(f): f\in J\right)\subset{\bf k}[x_1,\dots,x_n]$. We call a subset $\lbrace f_1,\dots,f_m\rbrace$ of $J$ a **${\overline{\tau}}$-enhanced standard basis of $J$** if $\operatorname{Lt_{\overline{\tau}}}(J)=(\operatorname{Lt_{\overline{\tau}}}(f_1),\dots,\operatorname{Lt_{\overline{\tau}}}(f_m))$.
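Definition 1 is easy to experiment with. Below is a minimal Python sketch (purely illustrative; the tuple encoding of monomials and the function names are our own) that compares exponent tuples $(a,b)\leftrightarrow x^ay^b$ under the local ordering ${\overline{\tau}}$ induced by $\tau=\operatorname{lex}$ with $x>_\tau y$: lower total degree wins, and ties are broken by lex.

```python
from functools import cmp_to_key

def lex_cmp(m1, m2):
    # exponent tuples compare lexicographically, which matches lex with x > y
    return (m1 > m2) - (m1 < m2)

def local_cmp(m1, m2):
    # tau-bar: a monomial of *lower* total degree is *larger*;
    # equal degrees are compared by the global ordering tau = lex
    d1, d2 = sum(m1), sum(m2)
    if d1 != d2:
        return 1 if d1 < d2 else -1
    return lex_cmp(m1, m2)

# monomials of k[[x, y]] encoded as exponent tuples (a, b) <-> x^a y^b
mons = [(2, 0), (0, 1), (1, 0), (0, 3), (1, 1)]
mons.sort(key=cmp_to_key(local_cmp), reverse=True)  # largest first
print(mons)  # x, y, x^2, xy, y^3
```

In particular, the leading term of a power series is its ${\overline{\tau}}$-largest monomial, hence a term of minimal degree, which is why these leading terms interact well with initial forms in the local setting.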
From the computational point of view, the tangent cone algorithm given by Mora in [@Mor82] --- a variant of Buchberger's algorithm to compute Gröbner bases of ideals in the polynomial ring --- allows us to explicitly obtain ${\overline{\tau}}$-enhanced standard bases of ideals in ${\bf k}[\![x_1,\dots,x_n]\!]$ that are generated by polynomials. See [@GP08 Sections 1.6,1.7,6.4] for a detailed treatment and implementation. From now on we will restrict to the codimension two case. Let $P = {\bf k}[x,y]$, $R = {\bf k}[\![x,y]\!]$ and fix $x>_\tau y$. **Definition 3**. Given a zero-dimensional monomial ideal $E$ in $R$, we denote by $V(E):=\lbrace J\subset R: \operatorname{Lt_{\overline{\tau}}}(J)=E\rbrace$ the **Gröbner cell** of $E$ with respect to the local term ordering ${\overline{\tau}}$. As the name suggests, these are affine spaces, by the results of [@Bia73 Theorem 4.4] on group actions of ${\bf k}^*$ on smooth varieties with isolated fixed points. They have been explicitly described --- at least for a special class of monomial ideals --- for the lexicographical term ordering [@CV08], the degree lexicographical term ordering [@Con11] and the local term ordering induced by both of them [@HW21]. *Remark 4*. It is important to note that, as opposed to what happens for global term orderings, ideals in a Gröbner cell with respect to a local term ordering preserve the Hilbert function of the monomial ideal, see [@HW21 Corollary 5.10]. Indeed, by definition of the local term ordering, $\operatorname{Lt_{\overline{\tau}}}(f)=\operatorname{Lt_{\tau}}(f^\ast)$, where $f^\ast$ is the initial form of $f$. In particular, any ${\overline{\tau}}$-enhanced standard basis $\lbrace f_1,\dots,f_m\rbrace$ of $J$ is also a standard basis of $J$, namely the initial ideal $J^\ast$ is generated by the initial forms $f_1^\ast,\dots,f_m^\ast$.
Therefore, $\operatorname{Lt_{\tau}}(J^\ast)=\operatorname{Lt_{\overline{\tau}}}(J)$ and hence $\operatorname{HF}_{R/J}=\operatorname{HF}_{P/J^\ast}=\operatorname{HF}_{P/ \operatorname{Lt_{\overline{\tau}}}(J)}$. So these Gröbner cells are compatible with the Hilbert function stratification of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ and induce a cellular decomposition of the strata. The strategy used in the description of the Gröbner cells of the three previously mentioned term orderings consists in focusing directly on the syzygies instead of the generators themselves, and to that aim it is convenient to define the notion of canonical Hilbert-Burch matrix. Note that any zero-dimensional monomial ideal in ${\bf k}[\![x,y]\!]$ can be written as $E=(x^t,\dots,x^{t-i}y^{m_i},\dots,y^{m_t})$ with $0=m_0< m_1\leq \dots\leq m_t$. **Definition 5**. The **canonical Hilbert-Burch matrix** of the monomial ideal $E=(x^t,\dots,x^{t-i}y^{m_i},\dots,y^{m_t})$ is the Hilbert-Burch matrix of $E$ of the form $$H=\left(\begin{array}{cccc} y^{d_1} & 0 & \cdots & 0\\ -x & y^{d_2} & \cdots & 0\\ 0 & -x & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & y^{d_t}\\ 0 & 0 & \cdots & -x \end{array}\right),$$ where $d_i=m_i-m_{i-1}$ for any $1\leq i\leq t$. The **degree matrix** $U$ of $E$ is the $(t+1)\times t$ matrix with integer entries $u_{i,j}=m_j-m_{i-1}+i-j$, for $1\leq i\leq t+1$ and $1\leq j\leq t$. It follows from the definition that $u_{i,i}=d_i$ and $u_{i+1,i}=1$, for $1\leq i\leq t$. **Definition 6**. We denote by $\mathcal{M}(E)$ the set of matrices $N=(n_{i,j})$ of size $(t+1)\times t$ with entries in ${\bf k}[y]$ such that

- $n_{i,j}=0$ for any $i\leq j$,

- and for $i>j$ the non-zero entries satisfy $u_{i,j} \leq \mathrm{ord}(n_{i,j}) \leq\deg(n_{i,j})<d_j$.

For the time being we will focus on a special class of monomial ideals called lex-segment ideals. **Definition 7**.
A monomial ideal $L=(x^t,\dots,x^{t-i}y^{m_i},\dots,y^{m_t})$ is a **lex-segment ideal** if and only if $0<m_1<\dots<m_t$. By Macaulay's theorem [@Mac27], any Hilbert function $h$ has a unique lex-segment ideal $\operatorname{Lex}(h)$, see [@HH Theorem 6.3.1] for a modern treatment. Consider an ideal $J$ with Hilbert function $h$. In characteristic 0, after possibly a generic change of coordinates on $J$, $\operatorname{Lt_{\overline{\tau}}}(J)$ coincides with $\operatorname{Lex}(h)$. In other words, the lex-segment ideal is the generic initial ideal with respect to ${\overline{\tau}}$. For the reader familiar with generic initial ideals in the global setting, $\operatorname{Gin}_{{\overline{\tau}}}(J)=\operatorname{Gin}_{\tau}(J^\ast)$. See [@Ber09 Section 1.4] for an exposition about the notion of generic initial ideal in the local setting. **Theorem 8**. *[@HW21 Theorem 5.7] Given a zero-dimensional lex-segment monomial ideal $L\subset R$ with canonical Hilbert-Burch matrix $H$, the map $$\begin{array}{rrcl} \Phi_L: & \mathcal{M}(L) & \longrightarrow & V(L)\\ & N & \longmapsto & I_t(H+N) \end{array}$$* *is a bijection, where $I_t(M)$ denotes the ideal of $t$-minors of the matrix $M$.* This theorem allows us to define the canonical Hilbert-Burch matrix of any zero-dimensional ideal $J\subset R$ that can be obtained as a deformation of a lex-segment ideal $L$ as $H+\Phi_L^{-1}(J)$, where $H$ is the canonical Hilbert-Burch matrix of the monomial ideal $\operatorname{Lt_{\overline{\tau}}}(J)$. In particular, the maximal minors of $H+\Phi_L^{-1}(J)$ form a ${\overline{\tau}}$-enhanced standard basis of $J$. In characteristic 0, the affine space $V(L)$ encodes all ideals with a given Hilbert function up to generic change of coordinates. In other words, the map $\Phi_L$ provides a parametrization via canonical Hilbert-Burch matrices of all ideals with Hilbert function $h=\operatorname{HF}_{P/L}$ up to generic change of coordinates. 
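The parametrization in Theorem 8 is completely explicit, so the degree matrix and the dimension of the cell $V(L)$ can be computed mechanically. The following Python sketch is illustrative only (the authors' accompanying software is a Julia module, and the function names here are ours): it builds $U$ from the exponents $m_0=0,m_1,\dots,m_t$ and counts one free coefficient for each power $y^k$ with $\max(u_{i,j},0)\leq k<d_j$ in the strictly lower entries allowed by Definition 6.

```python
def degree_matrix(m):
    # m = (m_0 = 0, m_1, ..., m_t); entries u_{i,j} = m_j - m_{i-1} + i - j
    t = len(m) - 1
    return [[m[j] - m[i - 1] + i - j for j in range(1, t + 1)]
            for i in range(1, t + 2)]

def cell_dimension(m):
    # one coefficient per admissible power y^k, max(u_{i,j}, 0) <= k < d_j,
    # in each strictly lower entry n_{i,j} of a matrix N in M(E)
    t = len(m) - 1
    d = [m[j] - m[j - 1] for j in range(1, t + 1)]
    U = degree_matrix(m)
    return sum(max(d[j] - max(U[i][j], 0), 0)
               for j in range(t) for i in range(j + 1, t + 1))

# L = (x^4, x^3 y, x^2 y^5, x y^8, y^10), i.e. m = (0, 1, 5, 8, 10)
print(degree_matrix((0, 1, 5, 8, 10))[2])  # third row of U: [-2, 1, 3, 4]
print(cell_dimension((0, 1, 5, 8, 10)))    # 20
```

Run on the two lex-segment ideals of the next section, it returns the cell dimensions $20$ and $12$ found there; for instance `cell_dimension((0, 2, 3, 5, 7))` gives `12`.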
For a formal version of this statement, see [@HW21 Corollary 5.10]. See the last section for a discussion of the description of Gröbner cells and the notion of canonical Hilbert-Burch matrix beyond the lex-segment case.

# Parametrizing ideals with a given Hilbert function {#sec:parametrization}

In this section we will showcase how to exploit canonical Hilbert-Burch matrices to obtain all ideals with a given Hilbert function, and special additional properties, up to generic change of coordinates. In the first part of the section, we will exhibit hands-on examples of how to refine the parametrization of Theorem 8 by adding constraints on the ideals such as the minimal number of generators or homogeneity. The second part is devoted to formalizing these procedures. We will revisit some classical results on admissible numbers of generators and on certain subschemes of the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$, and give alternative proofs based on our canonical Hilbert-Burch matrices. This will assist us in reaching our main results, where we provide explicit descriptions of the affine and quasi-affine varieties forming the Betti strata and homogeneous cells, respectively. In what follows $h$ is an admissible Hilbert function for some zero-dimensional ideal in $R={\bf k}[\![x,y]\!]$, namely $h=(1,2,\dots,t,h_t,\dots,h_s,0)$, with $t\geq h_t\geq \dots\geq h_s$, see [@Mac27]. Moreover, we will denote by $\Delta(h)$ the maximal jump in the Hilbert function $h$, that is $\Delta(h):=\max \lbrace \vert h_i-h_{i-1}\vert: 1\leq i\leq s\rbrace$. For an ideal $J$ we denote by $\mu(J)$ the minimal number of generators of $J$. We will say that a minimal number of generators $d$ is admissible for a Hilbert function $h$ if there exists an ideal $J$ with Hilbert function $h$ and $\mu(J)=d$. Let $L=(x^t,x^{t-1}y^{m_1},\dots,y^{m_t})$ be the lex-segment ideal associated to the Hilbert function $h$ and let us focus on special subsets of $V(L)$: **Definition 9**.
For any admissible minimal number of generators $d$ of the Hilbert function $h$, we call $V_d(L)=\lbrace J\in V(L):\mu(J)=d\rbrace$ the $d$-**Betti stratum** of $V(L)$. The set $V_2(L)$ is the subset of $V(L)$ consisting of complete intersection ideals. Therefore, we will also refer to it as $V_{\textrm{CI}}(L)$. We denote the homogeneous sub-cell of $V(L)$ by $V_{\textrm{hom}}(L)=\lbrace J\in V(L): J=J^\ast\rbrace$. In [@RS10 Remark 4.7], Rossi and Sharifan give a procedure to obtain special instances of ideals in the $d$-Betti stratum $V_d(L)$. The authors considered very specific deformations of a Hilbert-Burch matrix $H$ of the lex-segment ideal $L$: whenever $u_{i+2,i}\leq 0$, the entry $n_{i+2,i}$ is set to $1$ and all other entries are set to $0$. Note that a matrix $N$ obtained this way belongs to $\mathcal{M}(L)$, hence the ideal $I_t(H+N)$ is in $V(L)$. This approach is a particular case of the fact that the minimal number of generators of $J\in V(L)$ decreases as the rank of the matrix $\bar{N}$ formed by the constant terms of $N\in \mathcal{M}(L)$ increases. We reproduce a tailored version of a result of Bertella [@Ber09 Lemma 2.1] and its proof here for the sake of completeness. **Lemma 10**. *Let $L$ be a lex-segment ideal with canonical Hilbert-Burch matrix $H$ and consider $J\in V(L)$. Then the minimal number of generators $\mu(J)=t+1-\operatorname{rk}(\bar{N})$, where $N$ is the unique matrix in $\mathcal{M}(L)$ such that $J=I_t(H+N)$.* *Proof.* Since the columns of the matrix $H+N$ encode a system of generators of the syzygies of its maximal minors (that is, of the generators of the ideal $J$), the sequence $R^t\xrightarrow{H+N} R^{t+1}\longrightarrow J \longrightarrow 0$ is exact. 
Tensoring with $R/\mathfrak m$ over $R$ yields $$(R/\mathfrak m)^t\xrightarrow{\bar{N}} (R/\mathfrak m)^{t+1}\longrightarrow J/\mathfrak mJ \longrightarrow 0,$$ hence the minimal number of generators of $J$ is $\dim_{R/\mathfrak m} J/\mathfrak mJ=t+1-\operatorname{rk}(\bar{N})$. ◻ In the following example, we will show how the parametrization of the Gröbner cell $V(L)$ via canonical Hilbert-Burch matrices generalizes the constructions of Rossi and Sharifan in [@RS10]. **Example 11**. Consider the lex-segment ideal $L = (x^4,x^3y, x^2y^5, xy^8, y^{10})$ with Hilbert function $h=(1,2,3,4,3,3,3,2,2,1)$, canonical Hilbert-Burch matrix $H$ and degree matrix $U$: $$H=\left( \begin{array}{c c c c} y & 0 & 0 & 0\\ -x & y^4 & 0 & 0\\ 0 & -x & y^3 & 0 \\ 0 & 0 & -x & y^2 \\ 0 & 0 & 0 & -x\\ \end{array} \right) \hspace{1.5cm} U = \left( \begin{array}{c c c c} 1 & 4 & 6 & 7\\ 1 & 4 & 6 & 7\\ -2 & 1 & 3 & 4\\ -4 & -1 & 1 & 2\\ -5 & -2 & 0 & 1 \\ \end{array} \right).$$ By Theorem 8, any $J \in V(L)$ is the ideal of maximal minors of the matrix $$H+N = \left( \begin{array}{c c c c} y& 0& 0& 0\\ -x&y^4& 0& 0\\ c_1& -x+c_2 y+c_3 y^2+c_4 y^3&y^3& 0\\ c_5& c_6+c_7y+c_8y^2+c_9y^3& -x+c_{10}y+c_{11}y^2&y^2\\ c_{12}& c_{13}+c_{14}y+c_{15}y^2+c_{16}y^3& c_{17}+c_{18}y+c_{19}y^2& -x+c_{20}y\\ \end{array} \right),$$ for some coefficients $c_1, \dots, c_{20} \in {\bf k}$. In other words, $V(L)$ is a 20-dimensional affine space with origin the lex-segment ideal $L$. Any ideal of the form $J=I_4(H+N)$ can therefore be identified with the point with coordinates $(c_1,\dots,c_{20})$ in $\mathbb{A}^{20}$. Note that the coordinates $c_2,c_{10},c_{17},c_{20}$ play a special role: $J$ is homogeneous if and only if those are the only non-zero coordinates. Indeed, they are the coefficients of the terms whose degree matches the corresponding entry of the degree matrix. Thus $V_{\textrm{hom}}(L)$, as defined in Definition 9, is the subset of $V(L)$ where all other coordinates vanish.
Let $\mathcal{I} \subset {\bf k}[c_1,\dots,c_{20}]$ be an ideal and let $\mathcal{V}(\mathcal{I}) \subset \mathbb{A}^{20}$ denote its vanishing set. We can then formalize the previous statement as: $$V_{\textrm{hom}}(L)=\mathcal{V}(c_1,c_3,c_4,c_5,c_6,c_7,c_8,c_9,c_{11},c_{12},c_{13},c_{14},c_{15},c_{16},c_{18},c_{19})\simeq \mathbb{A}^4.$$ Since a ${\overline{\tau}}$-enhanced standard basis is in particular a standard basis, this offers a very simple procedure for computing the initial ideal $J^\ast$ for a non-homogeneous $J$: set all $c_i=0$ except for $i=2,10,17,20$, namely project onto $\mathbb{A}^4$. Let us revisit the specific deformation of $L$ given in [@RS10 Example 4.8]. The authors constructed the complete intersection ideal $J=(x^4-x^2y^2-x^2y^3-x^2y^4+y^6, x^3y-xy^3-xy^4) \in V(L)$ and calculated $J^\ast$. Note that $J=I_4(H+N)$ with $N \in \mathcal{M}(L)$ given by $c_1=c_6=c_{17}=1$ and 0's otherwise. Indeed, $J^\ast$ is obtained by keeping $c_{17}=1$ and setting 0's otherwise. Note that the third column of the canonical Hilbert-Burch matrix of $J$ allows us to express $f_4$ as a combination of $f_2,f_3$, the second column yields $f_3$ as a combination of $f_1$ and $f_2$ and the first one gives $f_2$ as a combination of $f_0$ and $f_1$. In fact, any ideal obtained from a matrix satisfying $c_1c_6c_{17}\neq 0$ is a complete intersection generated by $f_0$ and $f_1$. As we will see, this is actually the only way to obtain complete intersections in $V(L)$. $V_{\textrm{CI}}(L)$ (as in Definition 9) is a full-dimensional quasi-affine variety obtained by removing the union of the three hyperplanes $\lbrace c_1=0\rbrace$, $\lbrace c_6=0\rbrace$ and $\lbrace c_{17}=0\rbrace$ from $\mathbb{A}^{20}$.
Similarly, the sets of ideals in $V(L)$ with exactly 5, 4 and 3 generators can be described as follows: $$V_5(L)=\mathcal{V}(c_1,c_5,c_6,c_{12},c_{13},c_{17})\simeq \mathbb{A}^{14},$$ $$V_4(L)=\mathcal{V}(c_1c_6,c_1c_{13},c_1c_{17},c_5c_{17},c_6c_{17},c_5c_{13}-c_6c_{12})\backslash V_5(L),$$ $$V_3(L)=\mathcal{V}(c_1c_6c_{17})\backslash \mathcal{V}(c_1c_6,c_1c_{13},c_1c_{17},c_5c_{17},c_6c_{17},c_5c_{13}-c_6c_{12}).$$ In particular, homogeneous ideals with Hilbert function $h$ will always have either 4 or 5 generators, since the only non-zero constant term of $H+N$ allowed in $V_{\textrm{hom}}(L)$ is $c_{17}$. We will now consider the closures of the $d$-Betti strata. The stratum of the maximal admissible minimal number of generators $V_5(L)$ is closed, of codimension $6$ inside the $20$-dimensional $V(L)$ and irreducible. The closure of the $4$-Betti stratum has $3$ irreducible components, all of them reduced of codimension $3$. The closure $\overline{V_3(L)}$ has $3$ irreducible components, each of codimension $1$. The stratum of complete intersection ideals is dense in $V(L)$, thus $\overline{V_{\textrm{CI}}(L)} = V(L)$. Next we will display an example where $\Delta(h)>1$: **Example 12**. Consider the lex-segment ideal $L = (x^4,x^3y^2, x^2y^3, xy^5, y^7)$ with Hilbert function $h=(1,2,3,4,4,2,1)$ and degree matrix $U$: $$U = \left( \begin{array}{cccc} 2 & 2 & 3 & 4 \\ 1 & 1 & 2 & 3 \\ 1 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2\\ -1 & -1 & 0 & 1 \\ \end{array} \right)$$ By Theorem 8, $V(L)\simeq\mathbb{A}^{12}$ and any ideal in $V(L)$ is of the form $$J = I_4 \left( \begin{array}{cccc} y^2& 0& 0& 0\\ -x+c_1y &y & 0& 0\\ c_2y & -x & y^2 & 0\\ c_3+c_4y & c_5 & -x+c_6y & y^2\\ c_7+c_8y & c_9 & c_{10}+c_{11}y & -x+c_{12}y\\ \end{array} \right),$$ for some coefficients $c_1, \dots, c_{12} \in {\bf k}$.
Note that $J$ is homogeneous if and only if, in each entry $(i,j)$ of the matrix above, coefficients of terms with degree other than $u_{i,j}$ vanish, namely $V_{\textrm{hom}}(L)=\mathcal{V}(c_4,c_7,c_8,c_9,c_{11})\simeq \mathbb{A}^7$. We study the matrix of constant terms $$\label{eq:Mbar} \bar{N} = \left( \begin{array}{cccc} 0 & 0& 0& 0\\ 0 & 0 & 0& 0\\ 0 & 0 & 0 & 0\\ c_3 & c_5 & 0 & 0\\ c_7 & c_9 & c_{10} & 0\\ \end{array} \right)$$ to determine the sets of ideals with a given admissible minimal number of generators: $$V_5(L)=\mathcal{V}(c_3,c_5,c_7,c_9,c_{10})\simeq \mathbb{A}^7,$$ $$V_4(L)=\mathcal{V}(c_3c_{10},c_5c_{10},c_3c_9-c_5c_7)\backslash V_5(L),$$ $$V_3(L)=\mathbb{A}^{12}\backslash \mathcal{V}(c_3c_{10},c_5c_{10},c_3c_9-c_5c_7).$$ The stratum $V_5(L)$ with the maximal admissible minimal number of generators is again closed, as in the previous example. It is of codimension $5$ and irreducible. The closure of $V_4(L)$ has two irreducible components, both of codimension $2$. The stratum $V_3(L)$ with the minimal admissible minimal number of generators is dense in $V(L)$. In this example, we can obtain homogeneous ideals with each of the admissible minimal numbers of generators since $\bar{N}$ is projected into $V_{\textrm{hom}}(L)$ just by setting $c_7=c_9=0$, hence its rank can still take any value in $\{0,1,2\}$. For example, take any $J\in V_3(L)$; then its initial ideal $$J^\ast = I_4 \left( \begin{array}{cccc} y^2& 0& 0& 0\\ -x+c_1y &y & 0& 0\\ c_2y & -x & y^2 & 0\\ c_3 & c_5 & -x+c_6y & y^2\\ 0 & 0 & c_{10} & -x+c_{12}y\\ \end{array} \right)$$ is minimally generated by a quartic and two quintics and will have the graded resolution $$0\longrightarrow P(-6)\oplus P(-8)\longrightarrow P(-4)\oplus P^2(-5)\longrightarrow P \longrightarrow P/J^\ast \longrightarrow 0,$$ which can be obtained from the resolution of $P/L$ by the only two admissible zero cancellations $(P(-6),P(-6))$ and $(P(-7),P(-7))$, in the notation of [@RS10].
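Lemma 10 reduces the computation of $\mu(J)$ to the rank of the constant-term matrix $\bar{N}$, which is plain linear algebra. A short Python sketch (our own illustrative code, exact over $\mathbb{Q}$ via fractions), checked against the two examples above:

```python
from fractions import Fraction

def rank(rows):
    # exact Gaussian elimination over Q
    A = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(A[0])):
        piv = next((r for r in range(rk, len(A)) if A[r][col] != 0), None)
        if piv is None:
            continue
        A[rk], A[piv] = A[piv], A[rk]
        for r in range(rk + 1, len(A)):
            f = A[r][col] / A[rk][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[rk])]
        rk += 1
    return rk

def min_gens(t, Nbar):
    # Lemma 10: mu(J) = t + 1 - rk(Nbar)
    return t + 1 - rank(Nbar)

# Example 11 with c_1 = c_6 = c_17 = 1: a complete intersection
Nbar11 = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
print(min_gens(4, Nbar11))  # 2

# Example 12 with c_3 = c_10 = 1 and all other constants zero
Nbar12 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0]]
print(min_gens(4, Nbar12))  # 3
```

With $\bar{N}=0$ one recovers $\mu(L)=t+1=5$, the lex-segment ideal itself.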
As an intermediate step towards proving the main result on Betti strata and homogeneous cells, we will reprove some classical results using the parametrization of Theorem 8. We will start with the well-known numerical condition on Hilbert functions that admit a complete intersection, that is, for which there exists a complete intersection ideal with that Hilbert function. This is a result of Macaulay [@Mac27], which has been reproved several times with different techniques in [@Bri77; @Iar77; @RS10]. From now on we will assume that ${\bf k}$ is a field of characteristic 0. This assumption is necessary whenever we want to translate results on $V(\operatorname{Lex}(h))$ to results on $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$, or from $V_{\mathrm{hom}}(\operatorname{Lex}(h))$ to results on $\operatorname{Hilb}^h_{\mathrm{hom}}({\bf k}[\![x,y]\!])$. The computations of dimensions and strata in the Gröbner cells themselves hold for an arbitrary field ${\bf k}$. **Proposition 13**. *\[Hilbert functions that admit a complete intersection\] A Hilbert function $h$ admits a complete intersection if and only if $\Delta(h)=1$.* *Proof.* Combining Lemma 10 and the fact that $\operatorname{Lex}(h)$ is the generic initial ideal, $h$ admits a complete intersection if and only if $\operatorname{rk}\bar{N}$ can reach $t-1$ for some $J=I_t(H+N)$ in $V(\operatorname{Lex}(h))$, where $N \in \mathcal{M}(\operatorname{Lex}(h))$. This can only occur if $u_{i,i-2}\leq 0$ for all $i=3,\dots,t+1$. Since $0<m_1<\dots <m_t$ in the lex-segment case, $u_{i,j}\leq 1$ for any $j\leq i-1$ and it is decreasing in the sense that $u_{i,j}\leq u_{i-1,j},u_{i-1,j+1},u_{i,j+1}$. More precisely, $u_{i,j}=1$ iff $u_{i-1,j}=u_{i-1,j+1}=u_{i,j+1}=1$ for $j\leq i-2$. From the fact that $d_k=1$ for $k\geq 2$ if and only if there is a jump in $h$ of height at least 2, we see that $u_{i,i-2}\leq 0$ for all $i=3,\dots,t$ if and only if $\Delta(h)=1$.
◻ The main point in the proof of Proposition 13 is to show that for a Hilbert function $h$ with maximal jump $\Delta(h)=1$ there exists $N \in \mathcal{M}(\operatorname{Lex}(h))$ with $\operatorname{rk}\bar{N} = t-1$. Similarly, for Hilbert functions $h$ with $\Delta(h)>1$ we can investigate the maximal rank of $\bar{N}$ that can be achieved. **Example 14**. Let us consider a Hilbert function $h=(1,2,3,4,5,6,7,8,\dots)$ which starts decreasing at $t=8$ and has maximal jump $\Delta(h)=3$. We want to know the possible ranks of a matrix $\bar{N}$ formed by constant terms of $N\in\mathcal{M}(\operatorname{Lex}(h))$. We claim that this only depends on the size of the maximal jump and, more precisely, $0\leq \operatorname{rk}{\bar{N}}\leq 5=8-3$. To illustrate this, we will consider the following extra assumptions: $h$ has a single jump by 3 (reflected by the fact that there are only two consecutive 1's in the main diagonal of the degree matrix $U$, namely $d_5=d_6=1$) and a single jump by 2 (namely $d_3=1$).
$$U=\left(\begin{array}{cccccccc} d_1& & & & & & &\\ 1 & d_2 & & & & & &\\ \ast & 1 & 1 & & & & &\\ \ast & 1 & 1 & d_4 & & & &\\ \ast &\ast & \ast & \mathbf{1} & \mathbf{1} & \mathbf{1} & & \\ \ast &\ast & \ast & \mathbf{1} & \mathbf{1} & \mathbf{1} & & \\ \ast &\ast & \ast & \mathbf{1} & \mathbf{1} & \mathbf{1} & d_7 & \\ \ast & \ast & \ast & \ast & \ast &\ast & 1 & d_8\\ \ast & \ast & \ast &\ast &\ast &\ast & \ast & 1\\ \end{array}\right) \quad \bar{N}= \left(\begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \ast & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \ast & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \fbox{$\ast$} & \ast & \ast & 0 & 0 & 0 & 0 & 0 \\ \ast & \fbox{$\ast$} & \ast & 0 & 0 & 0 & 0 & 0 \\ \ast & \ast & \fbox{$\ast$} & \mathbf{0} & 0 & 0 & 0 & 0 \\ \ast & \ast & \ast & \fbox{$\ast$} & \ast & \ast & 0 & 0 \\ \ast & \ast & \ast & \ast & \fbox{$\ast$} & \ast & \ast & 0 \\ \end{array}\right)$$ The asterisks below the diagonal in $U$ represent entries such that $u_{i,j}\leq 0$, hence the corresponding entries in $\bar{N}$ are allowed to be non-zero. Note that only the maximal size of a square of 1's (here 3) determines the longest main diagonal of non-zeroes (boxed asterisks) and hence $\operatorname{rk}(\bar{N})\leq 5$. By considering a matrix $N$ with (possibly zero) constants in the boxed entries and zeroes elsewhere, we can obtain ideals $J=I_8(H+N)$ with Hilbert function $h$ and $4\leq \mu(J)\leq 9$. This argument is used in the following proposition on the minimal number of generators of an ideal with a given Hilbert function, reproving the results in [@Ber09 Theorem 2.3, Theorem 2.4]: **Proposition 15** (Admissible number of generators of an ideal with a given Hilbert function).
*Given $h$, any ideal $J$ such that $\operatorname{HF}_{R/J}=h$ satisfies $\Delta(h)<\mu(J)\leq t+1$, and we can find ideals with any admissible number of generators.* *Proof.* Combining Lemma 10 and the fact that $\operatorname{Lex}(h)$ is the generic initial ideal, finding ideals $J$ with $\Delta(h)<\mu(J)\leq t+1$ is equivalent to proving that for $N \in \mathcal{M}(L)$, with $L=\operatorname{Lex}(h)$, the rank of $\bar{N}$ is in the range $0\leq\operatorname{rk}{\bar{N}}\leq t-\Delta(h)$ and that for all the values $l$ in this range there exists an $N \in \mathcal{M}(L)$ such that $\operatorname{rk}{\bar{N}}=l$. In the proof of Proposition 13 we saw that $u_{i,j}=1$ iff $u_{i-1,j}=u_{i-1,j+1}=u_{i,j+1}=1$ for $j\leq i-2$ and $d_k=1$ for $k\geq 2$ iff $\Delta(h)>1$. Therefore, $\Delta(h)$ is the size of the maximal square of 1's that appears in the degree matrix $U$, see Example 14. It can be checked that it is precisely this size that gives an upper bound on the maximal rank of $\bar{N}$: the maximal diagonal that admits non-zero entries has exactly $t-\Delta(h)$ entries (see the boxed entries in Example 14). By setting the entries in this diagonal to zero or non-zero values, we obtain matrices with the desired ranks. ◻ We can also use the parametrization of Gröbner cells of lex-segment ideals to recover classical results on the dimension of special subschemes of the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$. The dimension of the subscheme $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ parametrizing ideals with a given Hilbert function was already given in [@Bri77 Theorem III.3.1] and [@Iar77 Theorem 2.12], where $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ was denoted by $S(H^{\alpha})$ and $Z_T$, respectively. **Proposition 16** (Dimension of the subscheme of the punctual Hilbert scheme with a given Hilbert function).
*The dimension of the subscheme of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ corresponding to a certain Hilbert function $h$ is $$\label{eq:dim} \dim (\operatorname{Hilb}^h({\bf k}[\![x,y]\!]))=n-t-\sum_{l\geq 2}n_l\left(\begin{array}{c} l\\2\end{array}\right),$$ where $n_l$ denotes the number of jumps of height $l\geq 2$ in $h$.* *Proof.* Consider the lex-segment ideal $L$ associated to the Hilbert function $h$. It is immediate from the parametrization of $V(L)$ in that $\dim V(L)=\sum_{2\leq j+1\leq i\leq t+1}\left(d_j-\max(u_{i,j},0)\right)$. This expression can be rewritten as $n-t-\#\lbrace u_{i,j}: u_{i,j}=1 \mbox{ for }j\leq i-2 \rbrace$. Using an argument similar to that in the proof of , 1's below the second main diagonal arise only when there is a jump of height $l\geq 2$ in the Hilbert function, and they occur in $U$ in disjoint lower triangular blocks of size $l-1$. Therefore, the cardinality of the set $\lbrace u_{i,j}: u_{i,j}=1 \mbox{ for }j\leq i-2 \rbrace$ is exactly the sum over $l\geq 2$ of the number of jumps of height $l$ times $l(l-1)/2$. ◻ We reprove the result on the dimension of the subscheme $\operatorname{Hilb}^h_{\textrm{hom}}({\bf k}[\![x,y]\!])$ that parametrizes homogeneous ideals with a given Hilbert function presented in [@Iar77 Theorem 2.12] and [@CV08 Corollary 3.1], where $\operatorname{Hilb}^h_{\textrm{hom}}({\bf k}[\![x,y]\!])$ was denoted by $G_T$ and $\mathbb{G}(h)$ respectively: **Proposition 17** (Dimension of the subscheme of homogeneous ideals with a given Hilbert function).
*The dimension of the subscheme of $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ parametrizing homogeneous ideals is $$\label{eq:dimhom} \dim(\operatorname{Hilb}^h_{\textrm{hom}}({\bf k}[\![x,y]\!]))=t+\sum_{i=t-1}^s (h_{i-1}-h_i)(h_i-h_{i+1}).$$* *Proof.* By , the dimension of the affine space $V_{\hom}(L)$ is the sum of the number of 0's in the degree matrix $U$ and the number of 1's in the strictly lower triangle of $U$ whose corresponding diagonal entry is strictly greater than 1. More precisely, $\dim V_{\hom}(L)=\#\lbrace u_{i,j}: u_{i,j}=0 \mbox{ for }j\leq i-2\rbrace+\#\lbrace u_{i,j}: u_{i,j}=1 \mbox{ for }j\leq i-1 \mbox{ such that }d_j\neq 1\rbrace$. We start by counting 0's in the degree matrix. For any $3\leq i\leq t+1$, $u_{i,i-2}=0$ if and only if $d_{i-1}=2$. Note that $d_k=2$, for $k\geq 2$, is the indicator that there are two consecutive jumps of height $l_1\geq 1$ and $l_2\geq 1$, respectively, in the Hilbert function (after position $t$). This yields a rectangle of 0's of size $l_1\times l_2$ and hence the number of 0's is the sum of the areas of all such rectangles. Therefore, $$\#\lbrace u_{i,j}: u_{i,j}=0 \mbox{ for }j\leq i-2\rbrace=\sum_{i=t}^s(h_{i-1}-h_i)(h_i-h_{i+1}).$$ Let us now count the number of admissible 1's. If $d_1\neq 1$, there are $t$ such 1's. Indeed, in the case $\Delta(h)=1$, the only 1's are in the third main diagonal and all of them are admissible. Alternatively, if $\Delta(h)>1$, then we will have squares of 1's in the degree matrix whose edge lengths are the sizes of the jumps, but the admissible ones are only those in the first column of each square, so we still obtain $t$ 1's. However, if $d_1=1$, we have to remove the 1's that might be in the first column of $U$. These 1's occur if and only if $h_t<t$, and there are $t-h_t$ such 1's. Since $h_{t-2}-h_{t-1}=(t-1)-t=-1$, the number of admissible ones in $U$ is exactly $t+(h_{t-2}-h_{t-1})(h_{t-1}-h_t)=t-(t-h_t)$ for any value of $d_1$.
Adding the number of 0's to this quantity completes the proof. ◻ We are now in a position to state the parametrization of the Betti strata of $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ up to generic change of coordinates. Recall that, for the lex-segment ideal $L$ associated to $h$, the Gröbner cell $V(L)$ is an affine space of dimension $D:= n-t-\sum_{l\geq 2}n_l {l\choose 2}$. For the sake of clarity in the notation of the following theorem, $I_d(\bar{N})$ is an ideal in ${\bf k}[c_1,\ldots,c_D]$, where $\bar{N}$ is the matrix obtained by taking only the constant terms of a matrix $N\in \mathcal{M}(L)$ in general form. See [\[eq:Mbar\]](#eq:Mbar){reference-type="eqref" reference="eq:Mbar"} for an example of $\bar{N}$. **Theorem 18** (Betti strata). *Consider a Hilbert function $h$ and its associated lex-segment ideal $L$. For any $\Delta(h) +1\leq d\leq t+1$, the $d$-Betti stratum of the $D$-dimensional affine space $V(L)$ is the quasi-affine variety $$V_d(L)=\mathcal{V}(I_{t+2-d}(\bar{N})) \backslash \mathcal{V}(I_{t+1-d}(\bar{N}))\subset V(L)\simeq \mathbb{A}^D.$$ In particular, $V_{t+1}(L)$ is an affine space and $$V_{\Delta(h)+1}(L)=\mathbb{A}^D\backslash \mathcal{V}(I_{t-\Delta(h)}(\bar{N}))$$ is a $D$-dimensional quasi-affine variety.* Note that a consequence of this result is that ideals that have the minimum minimal number of generators admissible for a given $h$, namely those in $V_{\Delta(h)+1}(L)$, are dense in $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$, as proved in [@Bri77 Proposition III.2.1]. In particular, if $\Delta(h)=1$, then $V_{\textrm{CI}}(L)$ is the quasi-affine variety obtained by removing $t-1$ hyperplanes from $\mathbb{A}^D$. **Proposition 19** (Homogeneous sub-cell). *$V_{\mathrm{hom}}(L)$ is a $D_h$-dimensional affine space, where $D_h$ is the right-hand side of [\[eq:dimhom\]](#eq:dimhom){reference-type="eqref" reference="eq:dimhom"}.
Moreover, for any $J=I_t(H+N)$ its initial ideal $J^\ast$ can be computed by projecting $N$ into that affine space.* *Proof.* The maximal minors $f_0,\dots,f_t$ of $H+N$ are a ${\overline{\tau}}$-enhanced standard basis of $J=I_t(H+N)$ by construction. By , they also form a standard basis of $J$, namely $J^\ast=(f_0^\ast,\dots,f_t^\ast)$. Let us call $N^\ast$ the matrix obtained by only keeping the terms of degree $u_{i,j}$ in each entry $N_{i,j}$. Note that $N^\ast\in \mathcal{M}(L)$ and $I_t(H+N^\ast)=J^\ast$. We claim that an ideal $J=I_t(H+N)$, for some $N\in \mathcal{M}(L)$, is homogeneous if and only if the maximal minors $f_0,\dots,f_t$ of $H+N$ are homogeneous. Indeed, homogeneity of a system of generators yields homogeneity of $J$. Conversely, if $J$ is homogeneous then $J=I_t(H+N)=I_t(H+N^\ast)$, thus $N=N^\ast$ by . ◻ # A Cellular Decomposition of the punctual Hilbert scheme {#sec:conjecture} In this section we leave the class of lex-segment ideals and consider general zero-dimensional monomial ideals $E$ in ${\bf k}[\![x,y]\!]$. The collection of all Gröbner cells $V(E)$ forms a cellular decomposition of the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$. A cellular decomposition which additionally respects the Hilbert function stratification induces a cellular decomposition of $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$, the stratum of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ with a prescribed Hilbert function $h$. When working over $\mathbb{C}$, a cellular decomposition of a scheme can be used to calculate its Betti numbers. This was done in [@ES] for the Hilbert schemes of points of $\mathbb{P}^2$, $\mathbb{A}^2$ and the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$. In [@Got90] this result is refined for the stratum $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$. Both papers use representation theoretical methods without giving explicit parametrizations of the cells.
We conjecture a parametrization of Gröbner cells for general $E$ in terms of Hilbert-Burch matrices, and give detailed examples and evidence for our conjecture. In [@HW21 Proposition 5.1/Definition 5.2] we showed that the set of matrices $\mathcal{M}(E)$ from parametrizes a subset of $V(E)$, namely those ideals $J$ in $V(E)$ that admit a ${\overline{\tau}}$-enhanced standard basis that also forms a Gröbner basis of the ideal $J \cap {\bf k}[x,y]$ with respect to the lexicographic ordering. We proved that for the class of monomial ideals $E=(x^t,x^{t-1}y^{m_1},\dots, y^{m_t})$ that satisfy $$\label{lexGBcond} m_j-j-1 \le m_i-i \mbox{ for all } j<i,$$ this is already the whole Gröbner cell $V(E)$. The class includes lex-segment ideals, but for example also allows $m_i=m_{i+1}$ for exactly one $i$. For general $E$ there exist ideals $J \in V(E)$ which do not admit such a ${\overline{\tau}}$-enhanced standard basis. What occurs in this situation is that $\operatorname{Lt_{lex}}(J \cap {\bf k}[x,y]) \not= E$; for an example see [@HW21 Lemma 5.12, Example 5.13]. In these cases, to obtain $J$ as the ideal of maximal minors of $H+N$ it is no longer enough to consider lower triangular matrices $N$. **Definition 20**. For a monomial ideal $E \subset {\bf k}[\![x,y]\!]$ we denote by $\mathcal{N}_{<\underline{d}}(E)$ the set of matrices $N=(n_{i,j})$ of size $(t+1)\times t$ with entries in ${\bf k}[y]$ whose non-zero entries satisfy - $u_{i,j}+1 \leq {\mathrm{ ord}}(n_{i,j}) \leq \deg(n_{i,j}) <d_i$ for any $i\leq j$, - $u_{i,j} \leq {\mathrm{ ord}}(n_{i,j}) \leq\deg(n_{i,j})<d_j$ for any $i> j$. Note that the set $\mathcal{M}(E)$ is a subset of $\mathcal{N}_{<\underline{d}}(E)$, and for ideals satisfying condition [\[lexGBcond\]](#lexGBcond){reference-type="eqref" reference="lexGBcond"} $\mathcal{M}(E)=\mathcal{N}_{<\underline{d}}(E)$, since the inequality $u_{i,j}+1>d_i$ holds for $i \le j$, and hence forces $n_{i,j}=0$ for all $i \leq j$.
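To make the two order/degree conditions of Definition 20 concrete, here is a small Python sketch (an illustration, independent of the paper's Julia module) that tests whether a matrix of polynomials in $y$ lies in $\mathcal{N}_{<\underline{d}}(E)$. The degree matrix $U$ and the difference vector $\underline{d}$ must be supplied; the sample values below, modelled loosely on the cell $m=(2,4)$ of Example 23, are illustrative assumptions rather than data computed from $E$.

```python
def ord_deg(p):
    """Order and degree of a polynomial in y, given as a coefficient list
    [c_0, c_1, ...]; returns None for the zero polynomial."""
    support = [k for k, c in enumerate(p) if c != 0]
    return (support[0], support[-1]) if support else None

def in_N_less_d(N, U, d):
    """Membership test for the set of Definition 20.  N is a (t+1) x t matrix of
    coefficient lists, U the degree matrix, d the vector of differences.
    Indices are 0-based here; the paper's (i, j) corresponds to (i+1, j+1)."""
    for i, row in enumerate(N):
        for j, entry in enumerate(row):
            od = ord_deg(entry)
            if od is None:
                continue  # zero entries are always allowed
            o, g = od
            if i <= j:   # on or above the diagonal: u_{i,j}+1 <= ord <= deg < d_i
                if not (U[i][j] + 1 <= o and g < d[i]):
                    return False
            else:        # below the diagonal: u_{i,j} <= ord <= deg < d_j
                if not (U[i][j] <= o and g < d[j]):
                    return False
    return True

# Illustrative data (hypothetical U): t = 2, d = (2, 2); N encodes the entries
# 0, c1*y, c2 + c3*y, c4*y of the cell m = (2, 4), with all constants set to 1.
d = [2, 2]
U = [[2, 2], [1, 2], [0, 1]]
N_ok = [[[], []], [[0, 1], []], [[1, 1], [0, 1]]]
N_bad = [[[], []], [[0, 1], []], [[1, 1], [1]]]  # constant in position (3,2): order too small
print(in_N_less_d(N_ok, U, d), in_N_less_d(N_bad, U, d))  # True False
```

The checker only encodes the two displayed inequalities; constructing $U$ and $\underline{d}$ from $E$ follows the conventions recalled earlier in the paper.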
**Conjecture 21**. [@HW21 Conjecture 5.14] [\[Conj\]]{#Conj label="Conj"} Let $E \subset {\bf k}[\![x,y]\!]$ be a zero-dimensional monomial ideal. Then $$\begin{array}{r r c l} \Phi_E: & \mathcal{N}_{<\underline{d}}(E) & \longrightarrow & V(E)\\ & N & \longmapsto & I_t(H+N) \end{array}$$ is a bijection. By [@HW21 Lemma 4.4] it is clear that $\Phi_E$ is well-defined and that the minors form a ${\overline{\tau}}$-enhanced standard basis of $I_t(H+N)$. By , holds for lex-segment ideals. If the conjecture holds for all monomial ideals, we obtain a cellular decomposition of the whole punctual Hilbert scheme and can define a canonical Hilbert-Burch matrix for all ideals without considering a change of coordinates. This decomposition induces a cellular decomposition of $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$. This is not the case for the cellular decomposition given in [@CV08] because the Hilbert function of ideals in a cell can vary, see [@HW21 Example 5.11]. Note that even without a proof of , we can still prove a version of for homogeneous sub-cells of Gröbner cells of general $E$. This result agrees with [@CV08 Corollary 1] and shows that even though their Gröbner cells differ, the subset of homogeneous ideals agrees. This is no surprise, since for homogeneous $f$ we have that $\operatorname{Lt_{\overline{\tau}}}(f)=\operatorname{Lt_{lex}}(f)$ by definition. **Lemma 22**. *The homogeneous sub-cell $V_{\mathrm{hom}}(E)$ of $V(E)$ is an affine space of dimension $\# \{(i,j) \ | \ i >j, 0\le u_{i,j} < d_j \}$.* *Proof.* Homogeneous ideals $J$ in $V(E)$ are inside the set of ideals in $V(E)$ admitting a canonical Hilbert-Burch matrix. This is clear since for homogeneous $f$, the leading terms $\operatorname{Lt_{\overline{\tau}}}(f) = \operatorname{Lt_{lex}}(f)$ coincide, so a homogeneous ${\overline{\tau}}$-enhanced standard basis of $J$ will also form a lex-Gröbner basis of $J \cap {\bf k}[x,y]$. 
Hence, by [@HW21 Proposition 5.1], $V_{\hom}(E) \subset \phi_E(\mathcal{M}(E))$, and there exists a unique $N$ such that $J=I_t(H+N)$. Now $J$ is homogeneous if and only if $N \in \mathcal{M}(E)$ has only terms of degree $u_{i,j}$. Thus $V_{\hom}(E)$ is an affine space of dimension $\# \{(i,j) \ | \ i >j, 0 \le u_{i,j} < d_j \}$. ◻ **Example 23** (A cellular decomposition of $\operatorname{Hilb}^6({\bf k}[\![x,y]\!])$). Let us consider $n=6$. There are eleven partitions of 6, so the punctual Hilbert scheme $\operatorname{Hilb}^6({\bf k}[\![x,y]\!])$ has a cellular decomposition into eleven cells. There are four possible Hilbert functions $[1,2,3]$, $[1,2,2,1]$, $[1,2,1,1,1]$ and $[1,1,1,1,1,1]$. Note that this is significantly different from Briançon's table in [@Bri77] because there the author provides a representative of all possible analytic types of ideals in $\operatorname{Hilb}^6({\bf k}[\![x,y]\!])$, whereas our cells contain ideals with a common leading term ideal, and in general different analytic types coexist in the same cell (for example, ideals with different minimal numbers of generators). The Hilbert function $[1,2,3]$ is only attained by the ideal $(x^3,x^2y,xy^2,y^3) = (x,y)^3$. By Proposition [Proposition 16](#prop:dimCell){reference-type="ref" reference="prop:dimCell"} the dimension of this stratum is $$n-t-\sum_{l\geq 2}n_l\left(\begin{array}{c} l\\2\end{array}\right) = 6 - 3 - 1 \cdot \left(\begin{array}{c} 3\\2\end{array}\right)=0,$$ and the Gröbner cell consists only of the single point corresponding to the monomial ideal itself. It is minimally generated by four elements. There are six cells with Hilbert function $[1,2,2,1]$, see . Admissible parameters to obtain a homogeneous ideal, i.e. those of $V_{\hom}(E)$, are indicated in bold. The Hilbert function has only jumps of height 1, so the maximal dimensional cell among them, corresponding to $m=(2,4)$, has dimension $n-t = 6-2 = 4$.
General ideals in this cell will be minimally generated by two elements. If $c_2 =0$ the ideal is generated by three elements. The other five cells with this Hilbert function are not lex-segment. The cells $[2,2,2]$ and $[3,3]$, corresponding to monomial complete intersection ideals $(x^3,y^2)$ and $(x^2,y^3)$, consist only of complete intersection ideals. The two cells $[1,1,2,2]$ and $[1,1,1,3]$ only contain ideals that are minimally generated by 3 elements while the cell $[1,1,4]$ contains ideals $I,J$ with $\mu(I) = 2$ and $\mu(J)=3$. Notice that $\dim(V(E)) - \dim(V_{\hom}(E)) = 1$ for all $E$ with Hilbert function $[1,2,2,1]$. $$\begin{tabular}{c c c c c} \toprule m & boxes & H+N & dimension & \(\mu\) \\ \toprule \hline & & [1,2,3] \\ \hline \\ \([1, 2, 3]\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (3,0) -- (3,1) -- (2,1) -- (2,2) -- (1,2) -- (1,3) -- (0,3) -- cycle; \draw (0,1) -- (2,1); \draw (0,2) -- (1,2); \draw (1,0) -- (1,3); \draw (2,0) -- (2,2); \end{tikzpicture}\) & \( \left( \begin{array}{c c c} y & 0 & 0\\ -x & y & 0\\ 0 & -x & y\\ 0 & 0 & -x \end{array} \right) \) & 0 & 4\\ \\ \hline & & [1,2,2,1] \\ \hline \\ \([1, 1, 2, 2]\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (0,2); \draw (1,0) -- (1,2); \draw (2,0) -- (2,2); \draw (3,0) -- (3,1); \draw (4,0) -- (4,1); \draw (0,2) -- (2,2); \draw (0,1) -- (4,1); \draw (0,0) -- (4,0); \end{tikzpicture}\) & \( \left( \begin{array}{c c c c} y & 0 & 0 & c_1\\ -x & 1 & 0 & 0\\ 0 & -x & y & 0\\ 0 & 0 & -x & 1\\ 0 & 0 & 0 & -x \end{array} \right) \) & 1 & 3\\ \\ \([1, 1, 1, 3]\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (0,3); \draw (1,0) -- (1,3); \draw (2,0) -- (2,1); \draw (3,0) -- (3,1); \draw (4,0) -- (4,1); \draw (0,0) -- (4,0); \draw (0,1) -- (4,1); \draw (0,2) -- (1,2); \draw (0,3) -- (1,3); \end{tikzpicture}\) & \( \left( \begin{array}{c c c c} y & 0 & c_1 & 0\\ -x & 1 & 0 & 0\\ 0 & -x & 1 & 0\\ 0 & 0 & -x & y^2\\ 0 & 0 & 0 & -x + \bm{c_2} y \end{array} \right) \) & 2 & 
3\\ \\ \([2, 2, 2]\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (0,2); \draw (1,0) -- (1,2); \draw (2,0) -- (2,2); \draw (3,0) -- (3,2); \draw (0,0) -- (3,0); \draw (0,1) -- (3,1); \draw (0,2) -- (3,2); \end{tikzpicture}\) & \( \left( \begin{array}{c c c} y^2 & 0 & c_1 y\\ -x + \bm{c_2} y & 1 & 0\\ 0 & -x & 1\\ 0 & 0 & -x \end{array} \right) \) & 2 & 2\\ \\ \([1, 1, 4]\)& \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (0,4); \draw (1,0) -- (1,4); \draw (2,0) -- (2,1); \draw (3,0) -- (3,1); \draw (0,4) -- (1,4); \draw (0,0) -- (3,0); \draw (0,1) -- (3,1); \draw (0,2) -- (1,2); \draw (0,3) -- (1,3); \end{tikzpicture}\) & \( \left( \begin{array}{c c c} y & 0 & 0\\ -x & 1 & 0\\ 0 & -x & y^3\\ \bm{c_1} & 0 & -x + \bm{c_2} y + c_3 y^2 \end{array} \right) \) & 3 & 2,3\\ \\ \([3, 3]\) & \( \begin{tikzpicture}[scale=0.25, rotate=90] \draw (0,0) -- (0,2); \draw (1,0) -- (1,2); \draw (2,0) -- (2,2); \draw (3,0) -- (3,2); \draw (0,0) -- (3,0); \draw (0,1) -- (3,1); \draw (0,2) -- (3,2); \end{tikzpicture}\) & \( \left( \begin{array}{c c} y^3 & 0\\ -x + \bm{c_1} y + c_2 y^2 & 1\\ \bm{c_3} y^2 & -x \end{array} \right) \) & 3 & 2 \\ \\ \([2, 4]\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (2,0); \draw (0,1) -- (2,1); \draw (0,2) -- (2,2); \draw (0,3) -- (1,3); \draw (0,4) -- (1,4); \draw (2,0) -- (2,2); \draw (1,0) -- (1,4); \draw (0,0) -- (0,4); \end{tikzpicture}\) & \( \left( \begin{array}{c c} y^2 & 0\\ -x + \bm{c_1} y & y^2\\ \bm{c_2} + c_3 y & -x +\bm{c_4} y \end{array} \right) \) & 4 & 2,3 \\ \end{tabular}$$ There are two cells with Hilbert function $[1,2,1,1,1]$, see . Both contain ideals minimally generated by three elements (in case $c_3 =0$ for $E=(x^5,xy,y^2)$ or $c_1=0$ for $E=(x^2,xy,y^5)$) and complete intersection ideals. 
$$\begin{tabular}{c c c c c} \toprule m & boxes & H+N & dimension & \(\mu\) \\ \toprule \hline \hline & & \([1, 2, 1, 1, 1]\)\\ \hline \\ \([1, 1, 1, 1, 2]\)& \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (0,2); \draw (1,0) -- (1,2); \draw (2,0) -- (2,1); \draw (3,0) -- (3,1); \draw (4,0) -- (4,1); \draw (5,0) -- (5,1); \draw (0,0) -- (5,0); \draw (0,1) -- (5,1); \draw (0,2) -- (1,2); \end{tikzpicture}\) & \( \left( \begin{array}{c c c c c} y & 0 & c_1 & c_2 & c_3\\ -x & 1 & 0 & 0 & 0\\ 0 & -x & 1 & 0 & 0\\ 0 & 0 & -x & 1 & 0\\ 0 & 0 & 0 & -x & y\\ 0 & 0 & 0 & 0 & -x \end{array} \right) \) & 3 & 2,3 \\ \\ \([1, 5]\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (2,0); \draw (0,1) -- (2,1); \draw (0,2) -- (1,2); \draw (0,3) -- (1,3); \draw (0,4) -- (1,4); \draw (0,5) -- (1,5); \draw (0,0) -- (0,5); \draw (1,0) -- (1,5); \draw (2,0) -- (2,1); \end{tikzpicture}\) & \( \left( \begin{array}{c c} y & 0\\ -x & y^4\\ c_1 & -x + \bm{c_2} y + c_3 y^2 + c_4 y^3 \end{array} \right) \) & 4 &2,3\\ \\ \end{tabular}$$ The Hilbert function $[1,1,1,1,1,1]$ is attained by the monomial ideals $(x^6,y)$ and the lex-segment monomial ideal $(x,y^6)$. Both cells consist only of complete intersection ideals and have dimensions $4$ and $5$, see . The cell of $m=[6]$ is the dense open subset of the punctual Hilbert scheme $\operatorname{Hilb}^6({\bf k}[\![x,y]\!])$. Note that here the difference between the dimension of $V(E)$ and $V_{\hom}(E)$ is again constant for all $E$ with the same Hilbert function: $\dim(V(E))-\dim(V_{\hom}(E))$ is $3$ for $[1,2,1,1,1]$ and $4$ for $[1,1,1,1,1,1]$. 
$$\begin{tabular}{c c c c c} \toprule m & boxes & H+N & dimension & \(\mu\) \\ \toprule \hline & & \([1, 1, 1, 1, 1, 1]\)\\ \hline \\ \([1]^6\) & \( \begin{tikzpicture}[scale=0.25] \draw (0,0) -- (0,1); \draw (1,0) -- (1,1); \draw (2,0) -- (2,1); \draw (3,0) -- (3,1); \draw (4,0) -- (4,1); \draw (5,0) -- (5,1); \draw (6,0) -- (6,1); \draw (0,0) -- (6,0); \draw (0,1) -- (6,1); \end{tikzpicture}\) & \( \left( \begin{array}{c c c c c c} y & 0 & c_1 & c_2 & c_3 & c_4\\ -x & 1 & 0 & 0 & 0 & 0\\ 0 & -x & 1 & 0 & 0 & 0\\ 0 & 0 & -x & 1 & 0 & 0\\ 0 & 0 & 0 & -x & 1 & 0\\ 0 & 0 & 0 & 0 & -x & 1\\ 0 & 0 & 0 & 0 & 0 & -x \end{array} \right) \) & 4 & 2\\ \([6]\) & \( \begin{tikzpicture}[scale=0.25, rotate=90] \draw (0,0) -- (0,1); \draw (1,0) -- (1,1); \draw (2,0) -- (2,1); \draw (3,0) -- (3,1); \draw (4,0) -- (4,1); \draw (5,0) -- (5,1); \draw (6,0) -- (6,1); \draw (0,0) -- (6,0); \draw (0,1) -- (6,1); \end{tikzpicture}\) & \( \left( \begin{array}{c} y^6\\ -x + \bm{c_1} y + c_2 y^2 + c_3 y^3 + c_4 y^4 + c_5 y^5 \end{array} \right) \) & 5 &2 \\ \\ \hline \end{tabular}$$ As stated before, for $E$ not satisfying condition [\[lexGBcond\]](#lexGBcond){reference-type="eqref" reference="lexGBcond"} the set $\mathcal{N}_{<\underline{d}}(E)$ contains matrices with non-zero entries above the diagonal. This is the case for $m = (1,1,2,2), (1,1,1,3), (2,2,2), (1,1,1,1,2)$ and $(1,1,1,1,1,1)$. For those ideals we checked by comparing to the reduced standard basis that gives a parametrization of the Gröbner cell. We can investigate the dimensions of the occurring cells. When we define $a_i$ as the number of cells of dimension $i$, we obtain $$a = (a_0,a_1,a_2,a_3,a_4,a_5) = (1,1,2,3,3,1).$$ This vector is an invariant of the space. When a scheme $X$ over $\mathbb{C}$ has a cellular decomposition with dimension vector $a$ as defined in , then all other cellular decompositions of $X$ will have the same dimension vector. 
It holds that $a_i=b_{2i}(X)$, the $2i$-th Betti number of $X$, and that the odd Betti numbers $b_{2i+1}(X)=0$, see [@Bia73 Theorem 4.4/4.5] and [@FU98 Chapter 19.1]. In [@ES] this method is used to calculate the Betti numbers of the Hilbert scheme of points of the projective plane, the affine plane and, of most interest to us, the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$. To state their theorem, we need the following definition. **Definition 24**. Let $l,n \in \mathbb{Z}_{>0}$. We define $P(n,l)$ as the number of partitions of $n$ bounded by $l$, i.e. as the number of sequences $0 =m_0 \le m_1 \le \dots \le m_t \le l$ such that $\sum_{i=1}^t m_i = n$. **Theorem 25**. *[@ES Theorem 1.1 (iv)] The non-zero Betti numbers of the punctual Hilbert scheme are $$b_{2i}(\operatorname{Hilb}^n({\bf k}[[x,y]]))=P(i,n-i).$$* This result gives us a plausibility check: the number of monomial ideals $E$ with $\dim(\mathcal{N}_{<\underline{d}}(E))=i$ should be equal to $P(i,n-i)$. We created a Julia ([@julia]) module that uses OSCAR.jl ([@OSCAR]) to calculate examples and perform this plausibility check. The module and Jupyter-notebooks with experiments are available at <https://github.com/anelanna/LocalHilbertBurch.jl>. The module can calculate the (conjectural) cellular decomposition for a given $n$. For all partitions of $n$ it creates a `Cell` that has the following properties: - `m` -- the partition, - `E` -- its associated monomial ideal, - `d` -- the vector of differences, - `U` -- the degree matrix, - `hilb` -- the Hilbert function of the ideals in the cell, - `H` -- the canonical Hilbert-Burch matrix of the monomial ideal, - `M` -- the general canonical Hilbert-Burch matrix $H+N$, - `N` -- the corresponding element in $\mathcal{N}_{<\underline{d}}(E)$, - `I` -- the maximal minors of $M=H+N$, and - `dim` -- the dimension of the cell.
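The counting side of this plausibility check is easy to reproduce outside the module. The following Python sketch (an illustration, not part of the Julia code) computes the bounded-partition numbers $P(n,l)$ of Definition 24 and recovers, via Theorem 25, the dimension vector $a=(1,1,2,3,3,1)$ obtained for $n=6$ above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, l):
    """Number of partitions of n with all parts bounded by l (Definition 24)."""
    if n == 0:
        return 1
    if l <= 0:
        return 0
    if n < l:
        return P(n, l - 1)
    # split on whether some part equals l
    return P(n, l - 1) + P(n - l, l)

n = 6
a = [P(i, n - i) for i in range(n)]   # a_i = b_{2i}(Hilb^6) by Theorem 25
print(a)                              # [1, 1, 2, 3, 3, 1]
print(sum(a))                         # 11 = number of partitions of 6, i.e. of cells
```

For general $n$ the same list reproduces the non-zero Betti numbers of Theorem 25, and its sum is the total number of Gröbner cells, i.e. the number of partitions of $n$.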
The vector of dimensions $a$ can be computed with `sorted_celllist(n)`, a function that returns a dictionary mapping an integer $i$ to a vector with the cells of dimension $i$. The numbers of bounded partitions can be found in the On-Line Encyclopedia of Integer Sequences, [@OEIS], as (diagonals in) A008284, or relabelled, and thus serving our purposes a bit better, as (diagonals in) A058398 (see <https://oeis.org/A008284> and <https://oeis.org/A058398>). For $n \le 50$ we calculated all dimension vectors of our proposed cellular decomposition and checked that they are the correct ones. Stronger evidence for would be that $\phi_E: \mathcal{N}_{<\underline{d}}(E) \to V(E)$ is an isomorphism of affine spaces for all $n \le 40$. We have checked this for $n \le 7$ where we described $V(E)$ by means of the reduced ${\overline{\tau}}$-enhanced standard bases. **Example 26** ( continued). We can use the connection between cellular decompositions and Betti numbers described above to calculate the Betti numbers of $\operatorname{Hilb}^{h}({\bf k}[\![x,y]\!])$ for ${\bf k}=\mathbb{C}$ and $h$ an admissible Hilbert function in the $n=6$ case. We obtain the following Betti numbers: $$b_i(\operatorname{Hilb}^{(1,2,3)}({\bf k}[\![x,y]\!])) = \begin{cases} 1, & i = 0;\\ 0, & \mbox{otherwise}. \end{cases}$$ $$b_i(\operatorname{Hilb}^{(1,2,2,1)}({\bf k}[\![x,y]\!])) = \begin{cases} 1, & i =2,8;\\ 2,& i =4,6;\\ 0, & \mbox{otherwise}. \end{cases}$$ $$b_i(\operatorname{Hilb}^{(1,2,1,1,1)}({\bf k}[\![x,y]\!])) = \begin{cases} 1, & i =6,8;\\ 0, & \mbox{otherwise}. \end{cases}$$ $$b_i(\operatorname{Hilb}^{(1,1,1,1,1,1)}({\bf k}[\![x,y]\!])) = \begin{cases} 1, & i = 8,10;\\ 0, & \mbox{otherwise.} \end{cases}$$ *Remark 27*. More generally, we can describe the subscheme $\operatorname{Hilb}^{(1,1,\dots,1)}({\bf k}[\![x,y]\!])$ of the punctual Hilbert scheme for any $n$. There are only two cells with this Hilbert function, namely $L=(x,y^n)$ and $E=(x^n,y)$.
The dimension of the cell of the lex-segment ideal $L$ is $n-1$. By considering the reduced ${\overline{\tau}}$-enhanced standard basis of $E$ we can verify that holds for $E$ or just directly show that $\dim(V(E))=n-2$. So the cellular decomposition has one cell of dimension $n-2$ and one of dimension $n-1$. Hence, the only non-vanishing Betti numbers are $b_{2(n-2)}(\operatorname{Hilb}^{(1,\dots,1)}({\bf k}[\![x,y]\!]))=b_{2(n-1)}(\operatorname{Hilb}^{(1,\dots,1)}({\bf k}[\![x,y]\!]))=1$. This agrees with the observations in [@Got90 Remark 2.2a]. Another relevant observation from is that within each stratum $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ of the punctual Hilbert scheme $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$, the difference of dimensions between any local Gröbner cell and its homogeneous sub-cell is constant. This fact actually follows from a result by Iarrobino: **Theorem 28**. *[@Iar77 Theorem 2.11, Theorem 3.14] The stratum $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ is a locally trivial bundle over $\operatorname{Hilb}^h_{\hom}({\bf k}[\![x,y]\!])$ whose fibre is an affine space, and it admits a global section.* The cellular decomposition of $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ given by all Gröbner cells $V(E)$, where $E$ is a monomial ideal with Hilbert function $h$, induces a cellular decomposition of $\operatorname{Hilb}^h_{\hom}({\bf k}[\![x,y]\!])$ by taking homogeneous sub-cells $V_{\hom}(E)$. By a combination of and the fibration restricts to $V(E) \to V_{\hom}(E)$. Since we know the dimension of $V_{\hom}(E)$ by , we can check (at least in examples) whether the difference $\dim(\mathcal{N}_{<\underline{d}}(E))-\dim(V_{\hom}(E))$ is constant for all $E$ with the same Hilbert function. If this holds, it implies that our conjectured cells have the right dimension.
Using our `Julia` module, we have checked this for all strata $\operatorname{Hilb}^h({\bf k}[\![x,y]\!])$ of $\operatorname{Hilb}^n({\bf k}[\![x,y]\!])$ with $n \le 50$, thus providing strong evidence for . # Acknowledgements {#acknowledgements .unnumbered} We want to thank Lars Kastner for helping us to set up the Julia module and helping with technical issues. We thank the two anonymous referees for many useful comments on a preliminary version of this paper. The first author wants to thank Anthony Iarrobino, Pedro Macias Marques, Maria Evelina Rossi, and Jean Vallès, organizers of the AMS-EMS-SMF Special Session on Deformation of Artinian Algebras and Jordan Types, for their invitation to the session.
arxiv_math
{ "id": "2309.06871", "title": "Deformations of local Artin rings via Hilbert-Burch matrices", "authors": "Roser Homs and Anna-Lena Winz", "categories": "math.AC math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Fano varieties are basic building blocks in geometry -- they are 'atomic pieces' of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of $X$ know the dimension of $X$? Note that there is as yet no theoretical understanding of this. We show that a simple feed-forward neural network can determine the dimension of $X$ with $98\%$ accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of $X$ from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety. address: - | Department of Mathematics\ Imperial College London\ 180 Queen's Gate\ London\ SW7 2AZ\ UK - | School of Mathematical Sciences\ University of Nottingham\ Nottingham\ NG7 2RD\ UK - | Department of Mathematics\ Imperial College London\ 180 Queen's Gate\ London\ SW7 2AZ\ UK author: - Tom Coates  - Alexander M. Kasprzyk  - Sara Veneziale  bibliography: - bibliography.bib title: Machine Learning the Dimension of a Fano Variety --- # Introduction Algebraic geometry describes shapes as the solution sets of systems of polynomial equations, and manipulates or analyses a shape $X$ by manipulating or analysing the equations that define $X$. 
This interplay between algebra and geometry has applications across mathematics and science; see e.g. [@vanLintvanderGeer1988; @NiederreiterXing2009; @AtiyahDrinfeldHitchinManin1978; @ErikssonRanestadSturmfelsSullivant2005]. Shapes defined by polynomial equations are called *algebraic varieties*. Fano varieties are a key class of algebraic varieties. They are, in a precise sense, atomic pieces of mathematical shapes [@Kollar1987; @KollarMori1998]. Fano varieties also play an essential role in string theory. They provide, through their 'anticanonical sections', the main construction of the Calabi--Yau manifolds which give geometric models of spacetime [@CandelasHorowitzStromingerWitten1985; @Greene1997; @Polchinski2005]. The classification of Fano varieties is a long-standing open problem. The only one-dimensional example is a line; this is classical. The ten smooth two-dimensional Fano varieties were found by del Pezzo in the 1880s [@DelPezzo1887]. The classification of smooth Fano varieties in dimension three was a triumph of 20th century mathematics: it combines work by Fano in the 1930s, Iskovskikh in the 1970s, and Mori--Mukai in the 1980s [@Fano1947; @Iskovskikh1977; @Iskovskikh1978; @Iskovskikh1979; @MoriMukai1982; @MoriMukai1982Erratum]. Beyond this, little is known, particularly for the important case of Fano varieties that are not smooth. A new approach to Fano classification centres around a set of ideas from string theory called Mirror Symmetry [@CandelasdelaOssaGreenParkes1991; @GreenePlesser1990; @HoriVafa2000; @CoxKatz1999]. From this perspective, the key invariant of a Fano variety is its *regularized quantum period* [@CoatesCortiGalkinGolyshevKasprzyk2013] $$\label{eq:Ghat} \widehat{G}_X(t) = \sum_{d=0}^\infty c_d t^d$$ This is a power series with coefficients $c_0 = 1$, $c_1 = 0$, and $c_d = r_d d!$, where $r_d$ is a certain Gromov--Witten invariant of $X$. 
Intuitively speaking, $r_d$ is the number of rational curves in $X$ of degree $d$ that pass through a fixed generic point and have a certain constraint on their complex structure. In general $r_d$ can be a rational number, because curves with a symmetry group of order $k$ are counted with weight $1/k$, but in all known cases the coefficients $c_d$ in [\[eq:Ghat\]](#eq:Ghat){reference-type="eqref" reference="eq:Ghat"} are integers. It is expected that the regularized quantum period $\widehat{G}_X$ uniquely determines $X$. This is true (and proven) for smooth Fano varieties in low dimensions, but is unknown in dimensions four and higher, and for Fano varieties that are not smooth. In this paper we will treat the regularized quantum period as a numerical signature for the Fano variety $X$, given by the sequence of integers $(c_0,c_1,\ldots)$. *A priori* this looks like an infinite amount of data, but in fact there is a differential operator $L$ such that $L \widehat{G}_X\equiv 0$; see e.g. [@CoatesCortiGalkinGolyshevKasprzyk2013 Theorem 4.3]. This gives a recurrence relation that determines all of the coefficients $c_d$ from the first few terms, so the regularized quantum period $\widehat{G}_X$ contains only a finite amount of information. Encoding a Fano variety $X$ by a vector in $\mathbb{Z}^{m+1}$ given by finitely many coefficients $(c_0,c_1,\ldots,c_m)$ of the regularized quantum period allows us to investigate questions about Fano varieties using machine learning. In this paper we ask whether the regularized quantum period of a Fano variety $X$ knows the dimension of $X$. There is currently no viable theoretical approach to this question. Instead we use machine learning methods applied to a large dataset to argue that the answer is probably yes, and then prove that the answer is yes for toric Fano varieties of low Picard rank. 
The use of machine learning was essential to the formulation of our rigorous results (Theorems [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} and [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"} below). This work is therefore proof-of-concept for a larger program, demonstrating that machine learning can uncover previously unknown structure in complex mathematical datasets. Thus the Data Revolution, which has had such impact across the rest of science, also brings important new insights to pure mathematics [@DaviesEtAl2021; @He2022; @Wagner2021; @ErbinFinotello2021; @LevittHajijSazdanovic2022; @WuDeLoera2022]. This is particularly true for large-scale classification questions, e.g. [@KreuzerSkarke2000; @ConwayCurtisNortonParkerWilson1985; @Cremona2016; @LieAtlas; @CoatesKasprzyk2022], where these methods can potentially reveal both the classification itself and structural relationships within it. # Results ### Algebraic varieties can be smooth or have singularities {#algebraic-varieties-can-be-smooth-or-have-singularities .unnumbered} Depending on their equations, algebraic varieties can be smooth (as in Figure [1](#fig:non_singular){reference-type="ref" reference="fig:non_singular"}) or have singularities (as in Figure [2](#fig:singular){reference-type="ref" reference="fig:singular"}). In this paper we consider algebraic varieties over the complex numbers. The equations in Figures [1](#fig:non_singular){reference-type="ref" reference="fig:non_singular"} and [2](#fig:singular){reference-type="ref" reference="fig:singular"} therefore define complex surfaces; however, for ease of visualisation, we have plotted only the points on these surfaces with co-ordinates that are real numbers. Most of the algebraic varieties that we consider below will be singular, but they all have a class of singularities called *terminal quotient singularities*. 
This is the most natural class of singularities to allow from the point of view of Fano classification [@KollarMori1998]. Terminal quotient singularities are very mild; indeed, in dimensions one and two, an algebraic variety has terminal quotient singularities if and only if it is smooth. ![$x^2+y^2 = z^2 + 1$](non_singular.png){#fig:non_singular width="\\linewidth"} ![$x^2 + y^2 = z^2$](singular.png){#fig:singular width="\\linewidth"} ### The Fano varieties that we consider {#the-fano-varieties-that-we-consider .unnumbered} The fundamental example of a Fano variety is projective space $\mathbb{P}^{N-1}$. This is a quotient of $\mathbb{C}^N \setminus \{0\}$ by the group $\mathbb{C}^\times$, where the action of $\lambda \in \mathbb{C}^\times$ identifies the points $(x_1,x_2,\ldots,x_N)$ and $(\lambda x_1, \lambda x_2, \ldots, \lambda x_N)$. The resulting algebraic variety is smooth and has dimension $N-1$. We will consider generalisations of projective spaces called *weighted projective spaces* and *toric varieties of Picard rank two*. A detailed introduction to these spaces is given in §[4](#sec:supp_notes){reference-type="ref" reference="sec:supp_notes"}. To define a weighted projective space, choose positive integers $a_1,a_2,\ldots,a_N$ such that any subset of size $N-1$ has no common factor, and consider $$\mathbb{P}(a_1,a_2,\ldots,a_N) = (\mathbb{C}^N \setminus \{0\})/\mathbb{C}^\times$$ where the action of $\lambda \in \mathbb{C}^\times$ identifies the points $$\begin{aligned} (x_1,x_2,\ldots,x_N) && \text{and} && (\lambda^{a_1} x_1, \lambda^{a_2} x_2, \ldots, \lambda^{a_N} x_N) \end{aligned}$$ in $\mathbb{C}^N \setminus \{0\}$. The quotient $\mathbb{P}(a_1,a_2,\ldots,a_N)$ is an algebraic variety of dimension $N-1$. A general point of $\mathbb{P}(a_1,a_2,\ldots,a_N)$ is smooth, but there can be singular points. 
Indeed, a weighted projective space $\mathbb{P}(a_1,a_2,\ldots,a_N)$ is smooth if and only if $a_i = 1$ for all $i$, that is, if and only if it is a projective space. To define a toric variety of Picard rank two, choose a matrix $$\label{eq:weights} \begin{pmatrix} a_1 & a_2 & \cdots & a_N \\ b_1 & b_2 & \cdots & b_N \end{pmatrix}$$ with non-negative integer entries and no zero columns. This defines an action of $\mathbb{C}^\times\times \mathbb{C}^\times$ on $\mathbb{C}^N$, where $(\lambda,\mu) \in \mathbb{C}^\times\times \mathbb{C}^\times$ identifies the points $$\begin{aligned} (x_1,x_2,\ldots,x_N) && \text{and} && (\lambda^{a_1} \mu^{b_1} x_1, \lambda^{a_2} \mu^{b_2} x_2, \ldots, \lambda^{a_N} \mu^{b_N} x_N) \end{aligned}$$ in $\mathbb{C}^N$. Set $a = a_1 + a_2 + \cdots + a_N$ and $b = b_1 + b_2 + \cdots + b_N$, and suppose that $(a,b)$ is not a scalar multiple of $(a_i,b_i)$ for any $i$. This determines linear subspaces $$\begin{aligned} S_+ & = \left\{ (x_1,x_2,\ldots,x_N)\mid \text{$x_i = 0$ if $b_i/a_i < b/a$} \right\} \\ S_- & = \left\{ (x_1,x_2,\ldots,x_N)\mid \text{$x_i = 0$ if $b_i/a_i > b/a$} \right\} \end{aligned}$$ of $\mathbb{C}^N$, and we consider the quotient $$\label{eq:rank_2} X = (\mathbb{C}^N \setminus S)/(\mathbb{C}^\times\times \mathbb{C}^\times)$$ where $S = S_+ \cup S_-$. The quotient $X$ is an algebraic variety of dimension $N-2$ and second Betti number $b_2(X) \leq 2$. If, as we assume henceforth, the subspaces $S_+$ and $S_-$ both have dimension two or more then $b_2(X) = 2$, and thus $X$ has Picard rank two. In general $X$ will have singular points, the precise form of which is determined by the weights in [\[eq:weights\]](#eq:weights){reference-type="eqref" reference="eq:weights"}. There are closed formulas for the regularized quantum period of weighted projective spaces and toric varieties [@CoatesCortiGalkinKasprzyk2016]. We have $$\label{eq:Ghat_wP} \widehat{G}_{\mathbb{P}}(t) = \sum_{k = 0}^\infty \frac{(ak)!}{(a_1 k)! (a_2 k)! 
\cdots (a_N k)!} t^{ak}$$ where $\mathbb{P}= \mathbb{P}(a_1,\ldots,a_N)$ and $a = a_1 + a_2 + \cdots + a_N$, and $$\label{eq:Ghat_rank_two} \widehat{G}_X(t) =\!\!\! \sum_{(k, l) \in \mathbb{Z}^2 \cap C}\!\frac{(ak + bl)!}{(a_1 k+b_1 l)! \cdots (a_N k + b_N l)!} t^{ak + bl}$$ where the weights for $X$ are as in [\[eq:weights\]](#eq:weights){reference-type="eqref" reference="eq:weights"}, and $C$ is the cone in $\mathbb{R}^2$ defined by the equations $a_i x + b_i y \geq 0$, $i \in \{1,2,\ldots,N\}$. Formula [\[eq:Ghat_wP\]](#eq:Ghat_wP){reference-type="eqref" reference="eq:Ghat_wP"} implies that, for weighted projective spaces, the coefficient $c_d$ from [\[eq:Ghat\]](#eq:Ghat){reference-type="eqref" reference="eq:Ghat"} is zero unless $d$ is divisible by $a$. Formula [\[eq:Ghat_rank_two\]](#eq:Ghat_rank_two){reference-type="eqref" reference="eq:Ghat_rank_two"} implies that, for toric varieties of Picard rank two, $c_d = 0$ unless $d$ is divisible by $\gcd\{a,b\}$. ### Data generation: weighted projective spaces {#data-generation-weighted-projective-spaces .unnumbered} The following result characterises weighted projective spaces with terminal quotient singularities; this is [@Kasprzyk2013 Proposition 2.3]. **Proposition 1**. *Let $X=\mathbb{P}(a_1,a_2,\ldots,a_N)$ be a weighted projective space of dimension at least three. Then $X$ has terminal quotient singularities if and only if $$\sum_{i=1}^N\{ka_i/a\}\in\{2,\ldots,N-2\}$$ for each $k\in\{2,\ldots,a-2\}$. Here $a=a_1+a_2+\cdots+a_N$ and $\{q\}$ denotes the fractional part $q - \lfloor q \rfloor$ of $q \in \mathbb{Q}$.* A simpler necessary condition is given by [@Kasprzyk2009 Theorem 3.5]: **Proposition 2**. *Let $X=\mathbb{P}(a_1,a_2,\ldots,a_N)$ be a weighted projective space of dimension at least two, with weights ordered $a_1\leq a_2\leq\ldots\leq a_N$. 
If $X$ has terminal quotient singularities then $a_i/a<1/(N-i+2)$ for each $i\in\{3,\ldots,N\}$.* Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four [@Kasprzyk2006; @Kasprzyk2013]. Classifications in higher dimensions are hindered by the lack of an effective upper bound on $a$. We randomly generated $150\,000$ distinct weighted projective spaces with terminal quotient singularities, and with dimension up to $10$, as follows. We generated random sequences of weights $a_1\leq a_2\leq\ldots\leq a_N$ with $a_N \leq 10N$ and discarded them if they failed to satisfy any one of the following: 1. [\[item:wellformed\]]{#item:wellformed label="item:wellformed"} for each $i \in \{1,\ldots,N\}$, $\gcd\{a_1,\ldots,\widehat{a}_i,\ldots,a_N\}=1$, where $\widehat{a}_i$ indicates that $a_i$ is omitted; 2. [\[item:terminal_bound\]]{#item:terminal_bound label="item:terminal_bound"} $a_i/a<1/(N-i+2)$ for each $i\in\{3,\ldots,N\}$; 3. [\[item:terminal_iff\]]{#item:terminal_iff label="item:terminal_iff"} $\sum_{i=1}^N\{ka_i/a\}\in\{2,\ldots,N-2\}$ for each $k\in\{2,\ldots,a-2\}$. Condition [\[item:wellformed\]](#item:wellformed){reference-type="eqref" reference="item:wellformed"} here was part of our definition of weighted projective spaces above; it ensures that the set of singular points in $\mathbb{P}(a_1,a_2,\ldots,a_N)$ has dimension at most $N-2$, and also that weighted projective spaces are isomorphic as algebraic varieties if and only if they have the same weights. Condition [\[item:terminal_bound\]](#item:terminal_bound){reference-type="eqref" reference="item:terminal_bound"} is from Proposition [Proposition 2](#prop:wps_bound){reference-type="ref" reference="prop:wps_bound"}; it efficiently rules out many non-terminal examples. 
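Taken together, the three conditions amount to a fast arithmetic filter on a candidate weight sequence. A minimal Python sketch (the function name is ours; `weights` is assumed sorted in increasing order):

```python
from math import gcd

def is_candidate(weights):
    """Conditions (1)-(3) for sorted weights a_1 <= ... <= a_N:
    well-formedness, the Proposition 2 bound, and the terminality
    criterion of Proposition 1."""
    N = len(weights)
    a = sum(weights)
    # (1) omitting any single weight must leave coprime weights
    for i in range(N):
        g = 0
        for w in weights[:i] + weights[i + 1:]:
            g = gcd(g, w)
        if g != 1:
            return False
    # (2) a_i/a < 1/(N - i + 2) for i = 3,...,N; here j = i - 1 is 0-based
    for j in range(2, N):
        if weights[j] * (N - j + 1) >= a:
            return False
    # (3) sum_i {k a_i / a} must lie in {2,...,N-2} for k = 2,...,a-2;
    # the sum equals s/a below, and is automatically an integer
    for k in range(2, a - 1):
        s = sum((k * w) % a for w in weights)
        if not 2 * a <= s <= (N - 2) * a:
            return False
    return True
```

For example, `is_candidate((1, 1, 1, 2))` accepts the terminal threefold $\mathbb{P}(1,1,1,2)$, while `is_candidate((1, 1, 1, 3))` rejects $\mathbb{P}(1,1,1,3)$, whose $\frac{1}{3}(1,1,1)$ singularity is canonical but not terminal.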
Condition [\[item:terminal_iff\]](#item:terminal_iff){reference-type="eqref" reference="item:terminal_iff"} is the necessary and sufficient condition from Proposition [Proposition 1](#prop:wps_terminal){reference-type="ref" reference="prop:wps_terminal"}. We then deduplicated the sequences. The resulting sample sizes are summarised in Table [\[tab:both_samples\]](#tab:both_samples){reference-type="ref" reference="tab:both_samples"}.

### Data generation: toric varieties {#sec:data_generation .unnumbered}

Deduplicating randomly generated toric varieties of Picard rank two is harder than deduplicating randomly generated weighted projective spaces, because different weight matrices in [\[eq:weights\]](#eq:weights){reference-type="eqref" reference="eq:weights"} can give rise to the same toric variety. Toric varieties are uniquely determined, up to isomorphism, by a combinatorial object called a *fan* [@Fulton1993]. A fan is a collection of cones, and one can determine the singularities of a toric variety $X$ from the geometry of the cones in the corresponding fan. We randomly generated $200\,000$ distinct toric varieties of Picard rank two with terminal quotient singularities, and with dimension up to $10$, as follows. We randomly generated weight matrices, as in [\[eq:weights\]](#eq:weights){reference-type="eqref" reference="eq:weights"}, such that $0 \leq a_i, b_j \leq 5$. We then discarded the weight matrix if any column was zero, and otherwise formed the corresponding fan $F$. We discarded the weight matrix unless:

1. [\[item:non_degenerate\]]{#item:non_degenerate label="item:non_degenerate"} $F$ had $N$ rays;

2. [\[item:simplicial\]]{#item:simplicial label="item:simplicial"} each cone in $F$ was simplicial (i.e. its number of rays equals its dimension);

3. [\[item:terminal_fan\]]{#item:terminal_fan label="item:terminal_fan"} the convex hull of the primitive generators of the rays of $F$ contained no lattice points other than those generators and the origin.
Conditions [\[item:non_degenerate\]](#item:non_degenerate){reference-type="eqref" reference="item:non_degenerate"} and [\[item:simplicial\]](#item:simplicial){reference-type="eqref" reference="item:simplicial"} together guarantee that $X$ has Picard rank two, and are equivalent to the conditions on the weight matrix in [\[eq:weights\]](#eq:weights){reference-type="eqref" reference="eq:weights"} given in our definition. Conditions [\[item:simplicial\]](#item:simplicial){reference-type="eqref" reference="item:simplicial"} and [\[item:terminal_fan\]](#item:terminal_fan){reference-type="eqref" reference="item:terminal_fan"} guarantee that $X$ has terminal quotient singularities. We then deduplicated the weight matrices according to the isomorphism type of $F$, by putting $F$ in normal form [@KreuzerSkarke2004; @GrinisKasprzyk2013]. See Table [\[tab:both_samples\]](#tab:both_samples){reference-type="ref" reference="tab:both_samples"} for a summary of the dataset.

| Dimension | Weighted projective spaces: sample size | Percentage | Rank-two toric varieties: sample size | Percentage |
|-----------|-----------------------------------------|------------|---------------------------------------|------------|
| 1         | 1                                       | 0.001      |                                       |            |
| 2         | 1                                       | 0.001      | 2                                     | 0.001      |
| 3         | 7                                       | 0.005      | 17                                    | 0.009      |
| 4         | 8 936                                   | 5.957      | 758                                   | 0.379      |
| 5         | 23 584                                  | 15.723     | 6 050                                 | 3.025      |
| 6         | 23 640                                  | 15.760     | 19 690                                | 9.845      |
| 7         | 23 700                                  | 15.800     | 35 395                                | 17.698     |
| 8         | 23 469                                  | 15.646     | 42 866                                | 21.433     |
| 9         | 23 225                                  | 15.483     | 47 206                                | 23.603     |
| 10        | 23 437                                  | 15.625     | 48 016                                | 24.008     |
| Total     | 150 000                                 |            | 200 000                               |            |

### Data analysis: weighted projective spaces {#data-analysis-weighted-projective-spaces .unnumbered}

We computed an initial segment $(c_0,c_1,\ldots,c_m)$ of the regularized quantum period for all the examples in the sample of $150\,000$ terminal weighted projective spaces, with $m \approx 100\,000$.
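Such an initial segment can be computed exactly, in integer arithmetic, from the closed formula for $\widehat{G}_{\mathbb{P}}$ above. A minimal sketch (the function name is ours):

```python
from math import factorial, prod

def wps_period_coeffs(weights, num_terms):
    """First num_terms coefficients c_0, ..., c_{num_terms-1} of the
    regularized quantum period of P(a_1, ..., a_N): c_d vanishes unless
    a divides d, and c_{ak} = (ak)! / ((a_1 k)! ... (a_N k)!)."""
    a = sum(weights)
    c = [0] * num_terms
    k = 0
    while a * k < num_terms:
        c[a * k] = factorial(a * k) // prod(factorial(w * k) for w in weights)
        k += 1
    return c
```

For $\mathbb{P}^2 = \mathbb{P}(1,1,1)$ this gives $c_3 = 6$ and $c_6 = 90$, matching $c_{3k} = (3k)!/(k!)^3$.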
The non-zero coefficients $c_d$ appeared to grow exponentially with $d$, and so we considered $\{\log c_d\}_{d\in S}$ where $S= \{d \in \mathbb{Z}_{\geq 0}\mid c_d \neq 0\}$. To reduce dimension, we fitted a linear model to the set $\{(d, \log c_d)\mid d \in S\}$ and used the slope and intercept of this model as features; see Figure [3](#fig:lin_approximation_pr1){reference-type="ref" reference="fig:lin_approximation_pr1"} for a typical example. Plotting the slope against the $y$-intercept and colouring datapoints according to the dimension, we obtain Figure [5](#fig:colour_triangle_all_pr1){reference-type="ref" reference="fig:colour_triangle_all_pr1"}: note the clear separation by dimension. A Support Vector Machine (SVM) trained on 10% of the slope and $y$-intercept data predicted the dimension of the weighted projective space with an accuracy of 99.99%. Full details are given in §§[5](#sec:supp_methods_1){reference-type="ref" reference="sec:supp_methods_1"}--[6](#sec:supp_methods_2){reference-type="ref" reference="sec:supp_methods_2"}.

![$\log c_d$ for $\mathbb{P}(5,5,11,23,28,29,33,44,66,76)$.](linear_approx_pr1.png){#fig:lin_approximation_pr1 width="\\linewidth"}

![$\log c_d$ for Example [Example 3](#eg:rank_2_growth){reference-type="ref" reference="eg:rank_2_growth"}.](linear_approx_last250.png){#fig:lin_approximation_pr2 width="\\linewidth"}

![](colour_triangle_pr1.png){#fig:colour_triangle_all_pr1 width="\\textwidth"}

![](colour_triangle.png){#fig:colour_triangle_all width="\\textwidth"}

### Data analysis: toric varieties {#data-analysis-toric-varieties .unnumbered}

As before, the non-zero coefficients $c_d$ appeared to grow exponentially with $d$, so we fitted a linear model to the set $\{(d, \log c_d)\mid d \in S\}$ where $S= \{d \in \mathbb{Z}_{\geq 0} \mid c_d \neq 0\}$. We used the slope and intercept of this linear model as features.

**Example 3**.
In Figure [4](#fig:lin_approximation_pr2){reference-type="ref" reference="fig:lin_approximation_pr2"} we plot a typical example: the logarithm of the regularized quantum period sequence for the nine-dimensional toric variety with weight matrix $$\left(\begin{array}{ccccccccccc} 1 & 2 & 5 & 3 & 3 & 3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 4 & 4 & 1 & 2 & 2 & 3 & 4 \end{array}\right)$$ along with the linear approximation. We see a periodic deviation from the linear approximation; the magnitude of this deviation decreases as $d$ increases (not shown). To reduce computational costs, we computed pairs $(d,\log{c_d})$ for $1000\leq d\leq 20\,000$ by sampling every $100$th term. We discarded the beginning of the period sequence because of the noise it introduces to the linear regression. In cases where the sampled coefficient $c_d$ is zero, we considered instead the next non-zero coefficient. The resulting plot of slope against $y$-intercept, with datapoints coloured according to dimension, is shown in Figure [6](#fig:colour_triangle_all){reference-type="ref" reference="fig:colour_triangle_all"}. We analysed the standard errors for the slope and $y$-intercept of the linear model. The standard errors for the slope are small compared to the range of slopes, but in many cases the standard error $s_{\text{int}}$ for the $y$-intercept is relatively large. As Figure [7](#fig:less_0.3_standard_error){reference-type="ref" reference="fig:less_0.3_standard_error"} illustrates, discarding data points where the standard error $s_{\text{int}}$ for the $y$-intercept exceeds some threshold reduces apparent noise. This suggests that the underlying structure is being obscured by inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence; these inaccuracies are concentrated in the $y$-intercept of the linear model.
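The sampling and fitting procedure just described (every $100$th degree, skipping zero coefficients, then an ordinary least-squares fit) can be sketched in a few lines; the helper name is ours:

```python
from math import log

def slope_intercept(coeffs, start=1000, stop=20000, step=100):
    """Least-squares fit of log(c_d) ~ slope * d + intercept, sampling
    every `step`-th degree in [start, stop]; a sampled zero coefficient
    is replaced by the next non-zero one, as in the text."""
    pts = []
    for d in range(start, min(stop, len(coeffs) - 1) + 1, step):
        e = d
        while e < len(coeffs) and coeffs[e] == 0:
            e += 1  # move on to the next non-zero coefficient
        if e < len(coeffs):
            pts.append((e, log(coeffs[e])))
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n
```

Since Python's `math.log` accepts arbitrarily large integers, the exact coefficients $c_d$ can be passed in directly.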
Note that restricting attention to those data points where $s_{\text{int}}$ is small also greatly decreases the range of $y$-intercepts that occur. As Example [Example 4](#ex:outlier){reference-type="ref" reference="ex:outlier"} and Figure [8](#fig:outlier_intercept){reference-type="ref" reference="fig:outlier_intercept"} suggest, this reflects both transient oscillatory behaviour and also the presence of a subleading term in the asymptotics of $\log c_d$ which is missing from our feature set. We discuss this further below. **Example 4**. Consider the toric variety with Picard rank two and weight matrix $$\begin{pmatrix} 1 & 10 & 5 & 13 & 8 & 12 & 0 \\ 0 & 0 & 3 & 8 & 5 & 14 & 1 \end{pmatrix}$$ This is one of the outliers in Figure [6](#fig:colour_triangle_all){reference-type="ref" reference="fig:colour_triangle_all"}. The toric variety is five-dimensional, and has slope $1.637$ and $y$-intercept $-62.64$. The standard errors are $4.246 \times 10^{-4}$ for the slope and $5.021$ for the $y$-intercept. We computed the first $40\,000$ coefficients $c_d$ in [\[eq:Ghat\]](#eq:Ghat){reference-type="eqref" reference="eq:Ghat"}. As Figure [8](#fig:outlier_intercept){reference-type="ref" reference="fig:outlier_intercept"} shows, as $d$ increases the $y$-intercept of the linear model increases to $-28.96$ and $s_{\text{int}}$ decreases to $0.7877$. At the same time, the slope of the linear model remains more or less unchanged, decreasing to $1.635$. This supports the idea that computing (many) more coefficients $c_d$ would significantly reduce noise in Figure [6](#fig:colour_triangle_all){reference-type="ref" reference="fig:colour_triangle_all"}. In this example, even $40\,000$ coefficients may not be enough. ![The slopes and $y$-intercepts from the linear model. 
This is as in Figure [6](#fig:colour_triangle_all){reference-type="ref" reference="fig:colour_triangle_all"}, but plotting only data points for which the standard error $s_{\text{int}}$ for the $y$-intercept satisfies $s_{\text{int}}< 0.3$. The colour records the dimension of the toric variety.](colour_triangle_zoom_0-3.png){#fig:less_0.3_standard_error width="40%"} ![Variation as we move deeper into the period sequence. The $y$-intercept and its standard error $s_{\text{int}}$ for the toric variety from Example [Example 4](#ex:outlier){reference-type="ref" reference="ex:outlier"}, as computed from pairs $(k,\log{c_k})$ such that $d - 20\,000\leq k \leq d$ by sampling every $100$th term. We also show LOWESS-smoothed trend lines.](outlier_901_intercept.png){#fig:outlier_intercept width="40%"} Computing many more coefficients $c_d$ across the whole dataset would require impractical amounts of computation time. In the example above, which is typical in this regard, increasing the number of coefficients computed from $20\,000$ to $40\,000$ increased the computation time by a factor of more than $10$. Instead we restrict to those toric varieties of Picard rank two such that the $y$-intercept standard error $s_{\text{int}}$ is less than $0.3$; this retains $67\,443$ of the $200\,000$ datapoints. We used 70% of the slope and $y$-intercept data in the restricted dataset for model training, and the rest for validation. An SVM model predicted the dimension of the toric variety with an accuracy of 87.7%, and a Random Forest Classifier (RFC) predicted the dimension with an accuracy of 88.6%. ### Neural networks {#neural-networks .unnumbered} Neural networks do not handle unbalanced datasets well. We therefore removed the toric varieties of dimensions 3, 4, and 5 from our data, leaving $61\,164$ toric varieties of Picard rank two with terminal quotient singularities and $s_{\text{int}}< 0.3$. This dataset is approximately balanced by dimension. 
A Multilayer Perceptron (MLP) with three hidden layers of sizes $(10,30,10)$ using the slope and intercept as features predicted the dimension with 89.0% accuracy. Since the slope and intercept give good control over $\log c_d$ for $d \gg 0$, but not for small $d$, it is likely that the coefficients $c_d$ with $d$ small contain extra information that the slope and intercept do not see. Supplementing the feature set by including the first $100$ coefficients $c_d$ as well as the slope and intercept increased the accuracy of the prediction to 97.7%. Full details can be found in §§[5](#sec:supp_methods_1){reference-type="ref" reference="sec:supp_methods_1"}--[6](#sec:supp_methods_2){reference-type="ref" reference="sec:supp_methods_2"}. ### From machine learning to rigorous analysis {#from-machine-learning-to-rigorous-analysis .unnumbered} Elementary "out of the box" models (SVM, RFC, and MLP) trained on the slope and intercept data alone already gave a highly accurate prediction for the dimension. Furthermore even for the many-feature MLP, which was the most accurate, sensitivity analysis using SHAP values [@LundbergLee2017] showed that the slope and intercept were substantially more important to the prediction than any of the coefficients $c_d$: see Figure [9](#fig:SHAP_plot){reference-type="ref" reference="fig:SHAP_plot"}. This suggested that the dimension of $X$ might be visible from a rigorous estimate of the growth rate of $\log c_d$. In §[3](#sec:methods){reference-type="ref" reference="sec:methods"} we establish asymptotic results for the regularized quantum period of toric varieties with low Picard rank, as follows. These results apply to any weighted projective space or toric variety of Picard rank two: they do not require a terminality hypothesis. Note, in each case, the presence of a subleading logarithmic term in the asymptotics for $\log c_d$. **Theorem 5**. 
*Let $X$ denote the weighted projective space $\mathbb{P}(a_1,\ldots,a_N)$, so that the dimension of $X$ is $N-1$. Let $c_d$ denote the coefficient of $t^d$ in the regularized quantum period $\widehat{G}_X(t)$ given in [\[eq:Ghat_wP\]](#eq:Ghat_wP){reference-type="eqref" reference="eq:Ghat_wP"}. Let $a = a_1 + \cdots + a_N$ and $p_i = a_i/a$. Then $c_d = 0$ unless $d$ is divisible by $a$, and non-zero coefficients $c_d$ satisfy $$\log c_d \sim Ad - \frac{\dim{X}}{2} \log d + B$$ as $d \to \infty$, where $$\begin{aligned} A &= - \sum_{i=1}^N p_i \log p_i \\ B &= - \frac{\dim{X}}{2} \log(2 \pi) - \frac{1}{2} \sum_{i=1}^N \log p_i \end{aligned}$$* Note, although it plays no role in what follows, that $A$ is the Shannon entropy of the discrete random variable $Z$ with distribution $(p_1,p_2,\ldots,p_N)$, and that $B$ is a constant plus half the total self-information of $Z$. **Theorem 6**. *Let $X$ denote the toric variety of Picard rank two with weight matrix $$\begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_N \\ b_1 & b_2 & b_3 & \cdots & b_N \end{pmatrix}$$ so that the dimension of $X$ is $N-2$. Let $a = a_1 + \cdots + a_N$, $b = b_1 + \cdots + b_N$, and $\ell = \gcd\{a,b\}$. Let $[\mu:\nu] \in \mathbb{P}^1$ be the unique root of the homogeneous polynomial $$\prod_{i=1}^N (a_i \mu + b_i \nu)^{a_i b} - \prod_{i=1}^N (a_i \mu + b_i \nu)^{b_i a}$$ such that $a_i \mu + b_i \nu \geq 0$ for all $i \in \{1,2,\ldots,N\}$, and set $$p_i = \frac{\mu a_i + \nu b_i}{\mu a + \nu b}$$ Let $c_d$ denote the coefficient of $t^d$ in the regularized quantum period $\widehat{G}_X(t)$ given in [\[eq:Ghat_rank_two\]](#eq:Ghat_rank_two){reference-type="eqref" reference="eq:Ghat_rank_two"}. 
Then non-zero coefficients $c_d$ satisfy $$\log c_d \sim Ad - \frac{\dim{X}}{2} \log d + B$$ as $d \to \infty$, where $$\begin{aligned} A &= -\sum_{i=1}^N p_i \log p_i \\ B &= - \frac{\dim{X}}{2} \log(2 \pi)\!-\!\frac{1}{2} \sum_{i=1}^N\log p_i\!-\!\frac{1}{2} \log\!\left(\sum_{i=1}^N\!\frac{(a_i b - b_i a)^2}{ \ell^2 p_i} \right) \end{aligned}$$* Theorem [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} is a straightforward application of Stirling's formula. Theorem [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"} is more involved, and relies on a Central Limit-type theorem that generalises the De Moivre--Laplace theorem. ![Model sensitivity analysis using SHAP values. The model is an MLP with three hidden layers of sizes (10,30,10) applied to toric varieties of Picard rank two with terminal quotient singularities. It is trained on the slope, $y$-intercept, and the first 100 coefficients $c_d$ as features, and predicts the dimension with 97.7% accuracy.](shap_plot_layered_violin.png){#fig:SHAP_plot width="40%"} ### Theoretical analysis {#theoretical-analysis .unnumbered} ![](hedgehog_data.png){#fig:hedgehogs width="\\linewidth"} ![](hedgehog_pr2.png){#fig:hedgehogs_pr2 width="\\linewidth"} ![](slope_comparison.png){#fig:slope_comparison width="\\linewidth"} ![](intercept_comparison.png){#fig:intercept_comparison width="\\linewidth"} The asymptotics in Theorems [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} and [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"} imply that, for $X$ a weighted projective space or toric variety of Picard rank two, the quantum period determines the dimension of $X$. Let us revisit the clustering analysis from this perspective. Recall the asymptotic expression $\log c_d\sim A d - \frac{\dim{X}}{2} \log d + B$ and the formulae for $A$ and $B$ from Theorem [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"}. 
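These closed-form asymptotics are easy to probe numerically. A sketch for a single weighted projective space, comparing the exact $\log c_d$ (via `lgamma`) with $Ad - \frac{\dim X}{2}\log d + B$; the helper names are ours:

```python
from math import lgamma, log, pi

def log_cd_exact(weights, k):
    """log c_{ak} for P(a_1, ..., a_N), from the closed formula."""
    a = sum(weights)
    return lgamma(a * k + 1) - sum(lgamma(w * k + 1) for w in weights)

def log_cd_asymptotic(weights, d):
    """A * d - (dim X / 2) * log d + B, with A and B as in Theorem 5."""
    a = sum(weights)
    dim = len(weights) - 1
    p = [w / a for w in weights]
    A = -sum(q * log(q) for q in p)
    B = -0.5 * dim * log(2 * pi) - 0.5 * sum(log(q) for q in p)
    return A * d - 0.5 * dim * log(d) + B

# For P(1,1,1,2), so a = 5, at d = 5 * 2000 the two values agree to
# within about 1e-3.
w = (1, 1, 1, 2)
err = abs(log_cd_exact(w, 2000) - log_cd_asymptotic(w, 10000))
```

The discrepancy decays like $1/d$, as expected from the next-order term in Stirling's formula.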
Figure [10](#fig:hedgehogs){reference-type="ref" reference="fig:hedgehogs"} shows the values of $A$ and $B$ for a sample of weighted projective spaces, coloured by dimension. Note the clusters, which overlap. Broadly speaking, the values of $B$ increase as the dimension of the weighted projective space increases, whereas in Figure [5](#fig:colour_triangle_all_pr1){reference-type="ref" reference="fig:colour_triangle_all_pr1"} the $y$-intercepts decrease as the dimension increases. This reflects the fact that we fitted a linear model to $\log c_d$, omitting the subleading $\log d$ term in the asymptotics. As Figure [\[fig:comparison\]](#fig:comparison){reference-type="ref" reference="fig:comparison"} shows, the linear model assigns the omitted term to the $y$-intercept rather than the slope. The slope of the linear model is approximately equal to $A$. The $y$-intercept, however, differs from $B$ by a dimension-dependent factor. The omitted $\log$ term does not vary too much over the range of degrees ($d<100\,000$) that we considered, and has the effect of reducing the observed $y$-intercept from $B$ to approximately $B - \frac{9}{2} \dim X$, distorting the clusters slightly and translating them downwards by a dimension-dependent factor. This separates the clusters. We expect that the same mechanism applies in Picard rank two as well: see Figure [11](#fig:hedgehogs_pr2){reference-type="ref" reference="fig:hedgehogs_pr2"}. We can show that each cluster in Figure [10](#fig:hedgehogs){reference-type="ref" reference="fig:hedgehogs"} is linearly bounded using constrained optimisation techniques. Consider for example the cluster for weighted projective spaces of dimension five, as in Figure [14](#fig:bounded_hedgehog){reference-type="ref" reference="fig:bounded_hedgehog"}. **Proposition 7**. 
*Let $X$ be the five-dimensional weighted projective space $\mathbb{P}(a_1,\ldots,a_6)$, and let $A$, $B$ be as in Theorem [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"}. Then $B+\frac{5}{2}A \geq \frac{41}{8}$. If in addition $a_i \leq 25$ for all $i$, then $B + 5 A \leq \frac{41}{4}$.*

Fix a suitable $\theta \geq 0$ and consider $$B + \theta A = - \frac{\dim{X}}{2} \log(2 \pi) - \frac{1}{2} \sum_{i=1}^N \log p_i - \theta \sum_{i=1}^N p_i \log p_i$$ with $\dim{X}=N-1=5$. Solving $$\begin{aligned} \min(B + \theta A) && \text{subject to} &&& p_1 + \cdots + p_6 = 1 \\ && &&& p_1,\ldots,p_6 \geq 0 \end{aligned}$$ on the five-simplex gives a linear lower bound for the cluster. This bound does not use terminality: it applies to any weighted projective space of dimension five. The expression $B + \theta A$ is unbounded above on the five-simplex (because $B$ is), so we cannot obtain an upper bound this way. Instead, consider $$\begin{aligned} \max(B + \theta A) && \text{subject to} &&& p_1 + \cdots + p_6 = 1 \\ && &&& \epsilon \leq p_1 \leq p_2 \leq \cdots \leq p_6 \end{aligned}$$ for an appropriate small positive $\epsilon$, which we can take to be $1/a$ where $a$ is the maximum sum of the weights. For Figure [14](#fig:bounded_hedgehog){reference-type="ref" reference="fig:bounded_hedgehog"}, for example, we can take $a = 124$, and in general such an $a$ exists because there are only finitely many terminal weighted projective spaces. This gives a linear upper bound for the cluster.

![Linear bounds for the cluster of five-dimensional weighted projective spaces in Figure [10](#fig:hedgehogs){reference-type="ref" reference="fig:hedgehogs"}. The bounds are given by Proposition [Proposition 7](#pro:bound_5){reference-type="ref" reference="pro:bound_5"}.](bounded_hedgehog.png){#fig:bounded_hedgehog width="40%"}

The same methods yield linear bounds on each of the clusters in Figure [10](#fig:hedgehogs){reference-type="ref" reference="fig:hedgehogs"}.
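The lower bound in Proposition 7 can be checked empirically by sampling the five-simplex (a hedged sketch; it probes the bound rather than proving it, and the helper name is ours):

```python
import random
from math import log, pi

def B_plus_theta_A(p, theta):
    """B + theta * A for a probability vector p, via the Theorem 5 formulae."""
    dim = len(p) - 1
    A = -sum(q * log(q) for q in p)
    B = -0.5 * dim * log(2 * pi) - 0.5 * sum(log(q) for q in p)
    return B + theta * A

random.seed(0)
min_observed = float("inf")
for _ in range(20000):
    cuts = sorted(random.random() for _ in range(5))
    # gaps between sorted cuts give a uniform point on the 5-simplex
    p = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    if min(p) > 0.0:
        min_observed = min(min_observed, B_plus_theta_A(p, 2.5))
```

Every sampled value stays above $41/8 = 5.125$; at the uniform point $p_i = 1/6$ one gets $B + \frac{5}{2}A \approx 5.26$.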
As the Figure shows, however, the clusters are not linearly separable.

## Discussion {#discussion .unnumbered}

We developed machine learning models that predict, with high accuracy, the dimension of a Fano variety from its regularized quantum period. These models apply to weighted projective spaces and toric varieties of Picard rank two with terminal quotient singularities. We then established rigorous asymptotics for the regularized quantum period of these Fano varieties. The form of the asymptotics implies that, in these cases, the regularized quantum period of a Fano variety $X$ determines the dimension of $X$. The asymptotics also give a theoretical underpinning for the success of the machine learning models. Perversely, because the series involved converge extremely slowly, reading the dimension of a Fano variety directly from the asymptotics of the regularized quantum period is not practical. For the same reason, enhancing the feature set of our machine learning models by including a $\log d$ term in the linear regression results in less accurate predictions. So although the asymptotics in Theorems [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} and [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"} determine the dimension in theory, in practice the most effective way to determine the dimension of an unknown Fano variety from its quantum period is to apply a machine learning model. The insights gained from machine learning were the key to our formulation of the rigorous results in Theorems [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} and [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"}. Indeed, it might be hard to discover these results without a machine learning approach.
It is notable that the techniques in the proof of Theorem [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"} -- the identification of generating functions for Gromov--Witten invariants of toric varieties with certain hypergeometric functions -- have been known since the late 1990s and have been studied by many experts in hypergeometric functions since then. For us, the essential step in the discovery of the results was the feature extraction that we performed as part of our ML pipeline. This work demonstrates that machine learning can uncover previously unknown structure in complex mathematical data, and is a powerful tool for developing rigorous mathematical results; cf. [@DaviesEtAl2021]. It also provides evidence for a fundamental conjecture in the Fano classification program [@CoatesCortiGalkinGolyshevKasprzyk2013]: that the regularized quantum period of a Fano variety determines that variety. # Methods {#sec:methods} In this section we prove Theorem [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} and Theorem [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"}. The following result implies Theorem [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"}. **Theorem 8**. *Let $X$ denote the weighted projective space $\mathbb{P}(a_1,\ldots,a_N)$, so that the dimension of $X$ is $N-1$. Let $c_d$ denote the coefficient of $t^d$ in the regularized quantum period $\widehat{G}_X(t)$ given in [\[eq:Ghat_wP\]](#eq:Ghat_wP){reference-type="eqref" reference="eq:Ghat_wP"}. Let $a = a_1 + \ldots + a_N$. 
Then $c_d = 0$ unless $d$ is divisible by $a$, and $$\log c_{ka} \sim ka \left[\log a - \frac{1}{a} \sum_{i=1}^N a_i \log a_i \right] -\frac{\dim{X}}{2} \log(ka) + \frac{1+\dim{X}}{2} \log a - \frac{\dim{X}}{2} \log(2 \pi) - \frac{1}{2} \sum_{i=1}^N \log a_i$$ That is, non-zero coefficients $c_d$ satisfy $$\log c_d \sim Ad - \frac{\dim{X}}{2} \log d + B$$ as $d \to \infty$, where $$\begin{aligned} A = - \sum_{i=1}^N p_i \log p_i && B = - \frac{\dim{X}}{2} \log(2 \pi) - \frac{1}{2} \sum_{i=1}^N \log p_i \end{aligned}$$ and $p_i = a_i/a$.* *Proof.* Combine Stirling's formula $$n! \sim \sqrt{2 \pi n } \left(\frac{n}{e} \right)^n$$ with the closed formula [\[eq:Ghat_wP\]](#eq:Ghat_wP){reference-type="eqref" reference="eq:Ghat_wP"} for $c_{ka}$. ◻ ### Toric varieties of Picard rank 2 {#toric-varieties-of-picard-rank-2 .unnumbered} Consider a toric variety $X$ of Picard rank two and dimension $N-2$ with weight matrix $$\begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_N \\ b_1 & b_2 & b_3 & \cdots & b_N \end{pmatrix}$$ as in [\[eq:weights\]](#eq:weights){reference-type="eqref" reference="eq:weights"}. Let us move to more invariant notation, writing $\alpha_i$ for the linear form on $\mathbb{R}^2$ defined by the transpose of the $i$th column of the weight matrix, and $\alpha = \alpha_1 + \cdots + \alpha_N$. Equation [\[eq:Ghat_rank_two\]](#eq:Ghat_rank_two){reference-type="ref" reference="eq:Ghat_rank_two"} becomes $$\widehat{G}_X(t) = \sum_{k \in \mathbb{Z}^2 \cap C} \frac{(\alpha\cdot k)!}{\prod_{i=1}^N (\alpha_i\cdot k)!} t^{\alpha \cdot k}$$ where $C$ is the cone $C = \{ x \in \mathbb{R}^2\mid\text{$\alpha_i \cdot x \geq 0$ for $i =1,2,\ldots,N$} \}$. As we will see, for $d \gg 0$ the coefficients $$\begin{aligned} \frac{(\alpha\cdot k)!}{\prod_{i=1}^N (\alpha_i\cdot k)!} && \text{where $k \in \mathbb{Z}^2 \cap C$ and $\alpha \cdot k = d$} \end{aligned}$$ are approximated by a rescaled Gaussian. 
We begin by finding the mean of that Gaussian, that is, by minimising $$\begin{aligned} \prod_{i=1}^N (\alpha_i\cdot k)! && \text{where $k \in \mathbb{Z}^2 \cap C$ and $\alpha \cdot k = d$.} \end{aligned}$$ For $k$ in the strict interior of $C$ with $\alpha \cdot k = d$, we have that $$(\alpha_i \cdot k)! \sim \left( \frac{\alpha_i \cdot k}{e}\right)^{\alpha_i \cdot k}$$ as $d \to \infty$. **Proposition 9**. *The constrained optimisation problem $$\begin{aligned} \min & \prod_{i=1}^N (\alpha_i\cdot x)^{\alpha_i \cdot x} & \text{subject to\ } & \begin{cases} x \in C \\ \alpha \cdot x = d \end{cases} \end{aligned}$$ has a unique solution $x = x^*$. Furthermore, setting $p_i = (\alpha_i \cdot x^*)/(\alpha \cdot x^*)$ we have that the monomial $$\prod_{i=1}^N p_i^{\alpha_i \cdot k}$$ depends on $k \in \mathbb{Z}^2$ only via $\alpha \cdot k$.* *Proof.* Taking logarithms gives the equivalent problem $$\begin{aligned} \label{eq:min_problem} \min & \sum_{i=1}^N (\alpha_i \cdot x) \log (\alpha_i \cdot x) & \text{subject to\ } & \begin{cases} x \in C \\ \alpha \cdot x = d \end{cases} \end{aligned}$$ The objective function $\sum_{i=1}^N (\alpha_i \cdot x) \log (\alpha_i \cdot x)$ here is the pullback to $\mathbb{R}^2$ of the function $$f(x_1,\ldots,x_N) = \sum_{i=1}^N x_i \log x_i$$ along the linear embedding $\varphi : \mathbb{R}^2 \to \mathbb{R}^N$ given by $(\alpha_1,\ldots,\alpha_N)$. Note that $C$ is the preimage under $\varphi$ of the positive orthant $\mathbb{R}^N_+$, so we need to minimise $f$ on the intersection of the simplex $x_1 + \cdots + x_N = d$, $(x_1,\ldots,x_N) \in \mathbb{R}^N_+$ with the image of $\varphi$. The function $f$ is convex and decreases as we move away from the boundary of the simplex, so the minimisation problem in [\[eq:min_problem\]](#eq:min_problem){reference-type="eqref" reference="eq:min_problem"} has a unique solution $x^*$ and this lies in the strict interior of $C$. 
We can therefore find the minimum $x^*$ using the method of Lagrange multipliers, by solving $$\label{eq:Lagrange_multiplier} \sum_{i=1}^N \alpha_i \log (\alpha_i \cdot x) + \alpha = \lambda \alpha$$ for $\lambda \in \mathbb{R}$ and $x$ in the interior of $C$ with $\alpha \cdot x = d$. Thus $$\sum_{i=1}^N \alpha_i \log (\alpha_i \cdot x^*) = (\lambda-1) \alpha$$ and, evaluating on $k \in \mathbb{Z}^2$ and exponentiating, we see that $$\prod_{i=1}^N (\alpha_i \cdot x^*)^{\alpha_i \cdot k}$$ depends only on $\alpha \cdot k$. The result follows. ◻ Given a solution $x^*$ to [\[eq:Lagrange_multiplier\]](#eq:Lagrange_multiplier){reference-type="eqref" reference="eq:Lagrange_multiplier"}, any positive scalar multiple of $x^*$ also satisfies [\[eq:Lagrange_multiplier\]](#eq:Lagrange_multiplier){reference-type="eqref" reference="eq:Lagrange_multiplier"}, with a different value of $\lambda$ and a different value of $d$. Thus the solutions $x^*$, as $d$ varies, lie on a half-line through the origin. The direction vector $[\mu:\nu] \in \mathbb{P}^1$ of this half-line is the unique solution to the system $$\label{eq:polynomial_minimum} \begin{aligned} \prod_{i=1}^N (a_i \mu +b_i \nu)^{a_i b} &= \prod_{i=1}^N (a_i \mu + b_i \nu)^{b_i a} \\ \begin{pmatrix} \mu \\ \nu \end{pmatrix} &\in C \end{aligned}$$ Note that the first equation here is homogeneous in $\mu$ and $\nu$; it is equivalent to [\[eq:Lagrange_multiplier\]](#eq:Lagrange_multiplier){reference-type="eqref" reference="eq:Lagrange_multiplier"}, by exponentiating and then eliminating $\lambda$. Any two solutions $x^*$, for different values of $d$, differ by rescaling, and the quantities $p_i$ in Proposition [Proposition 9](#pro:minimum){reference-type="ref" reference="pro:minimum"} are invariant under this rescaling. They also satisfy $p_1 + \cdots + p_N = 1$. We use the following result, known in the literature as the "Local Theorem" [@GnedenkoUshakov2018], to approximate multinomial coefficients. **Local Theorem 1**. 
*For $p_1, \dots, p_n \in [0,1]$ such that $p_1 + \cdots + p_n = 1$, the ratio $$d^{\frac{n-1}{2}} \binom{d}{k_1 \cdots k_n} \prod_{i=1}^n p_i^{k_i} : \frac{\exp(-\frac{1}{2} \sum_{i=1}^n q_i x_i^2)}{(2 \pi)^{\frac{n-1}{2}} \sqrt{p_1 \cdots p_n}} \rightarrow 1$$ as $d \rightarrow \infty$, uniformly in all $k_i$'s, where $$\begin{aligned} q_i = 1- p_i && x_i = \frac{k_i - d p_i}{\sqrt{dp_iq_i}} \end{aligned}$$ and the $x_i$ lie in bounded intervals.* Let $B_r$ denote the ball of radius $r$ about $x^* \in \mathbb{R}^2$. Fix $R > 0$. We apply the Local Theorem with $k_i = \alpha_i \cdot k$ and $p_i = (\alpha_i \cdot x^*)/(\alpha \cdot x^*)$, where $k \in \mathbb{Z}^2 \cap C$ satisfies $\alpha \cdot k = d$ and $k \in B_{R \sqrt{d}}$. Since $$x_i = \frac{\alpha_i \cdot (k - x^*)}{\sqrt{d p_i q_i}}$$ the assumption that $k \in B_{R \sqrt{d}}$ ensures that the $x_i$ remain bounded as $d \to \infty$. Note that, by Proposition [Proposition 9](#pro:minimum){reference-type="ref" reference="pro:minimum"}, the monomial $\prod_{i=1}^N p_i^{k_i}$ depends on $k$ only via $\alpha \cdot k$, and hence here is independent of $k$: $$\prod_{i=1}^N p_i^{k_i} = \prod_{i=1}^N p_i^{\alpha_i \cdot x^*} = \prod_{i=1}^N p_i^{d p_i}$$ Furthermore $$\sum_{i=1}^N q_i x_i^2 = \frac{(k-x^*)^T A \, (k - x^*)}{d}$$ where $A$ is the positive-definite $2 \times 2$ matrix given by $$A = \sum_{i=1}^N \frac{1}{p_i} \alpha_i^T \alpha_i$$ Thus as $d \to \infty$, the ratio $$\label{eq:limit_of_multinomial} \frac{(\alpha\cdot k)!}{\prod_{i=1}^N (\alpha_i\cdot k)!} : \frac{\exp\big({-\frac{1}{2d}} (k - x^*)^T A \, (k-x^*)\big)}{(2 \pi d)^{\frac{N-1}{2}} \prod_{i=1}^N p_i^{d p_i + \frac{1}{2}}} \to 1$$ for all $k \in \mathbb{Z}^2 \cap C \cap B_{R\sqrt{d}}$ such that $\alpha \cdot k = d$. 
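As a numerical sanity check of [\[eq:limit_of_multinomial\]](#eq:limit_of_multinomial){reference-type="eqref" reference="eq:limit_of_multinomial"} — this is illustration, not part of the proof — take the weight matrix with rows $(1,1,0,0)$ and $(0,0,1,1)$, that is, the product $\mathbb{P}^1 \times \mathbb{P}^1$ (our choice of example). By symmetry $x^* = (d/4, d/4)$, each $p_i = \frac{1}{4}$, and $A = \operatorname{diag}(8,8)$. A Python sketch:

```python
import math

# Illustrative example: rows (1,1,0,0) and (0,0,1,1), i.e. P^1 x P^1.
# Here alpha_1 = alpha_2 = (1,0), alpha_3 = alpha_4 = (0,1), alpha = (2,2),
# and by symmetry x* = (d/4, d/4), p_i = 1/4, A = diag(8,8).
d, N = 400, 4
p = [0.25] * N

def log_multinomial(k1, k2):
    # log of (alpha.k)!/prod_i (alpha_i.k)!  for k = (k1, k2), alpha.k = 2k1 + 2k2
    return (math.lgamma(2 * k1 + 2 * k2 + 1)
            - 2 * math.lgamma(k1 + 1) - 2 * math.lgamma(k2 + 1))

def log_gaussian(k1, k2):
    # log of the Gaussian comparison term in the displayed ratio
    quad = 8 * (k1 - d / 4) ** 2 + 8 * (k2 - d / 4) ** 2  # (k-x*)^T A (k-x*)
    return (-quad / (2 * d)
            - (N - 1) / 2 * math.log(2 * math.pi * d)
            - sum((d * q + 0.5) * math.log(q) for q in p))

# lattice points k on the line alpha.k = d, at distance O(sqrt(d)) from x*
ratios = [math.exp(log_multinomial(100 + j, 100 - j) - log_gaussian(100 + j, 100 - j))
          for j in (0, 5, 10)]
print(ratios)
```

The printed ratios are within a few percent of $1$ already at $d = 400$, and tighten as $d$ grows.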
**Theorem [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"}.** *Let $X$ be a toric variety of Picard rank two and dimension $N-2$ with weight matrix $$\begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_N \\ b_1 & b_2 & b_3 & \cdots & b_N \end{pmatrix}$$ Let $a = a_1 + \cdots + a_N$ and $b = b_1 + \cdots + b_N$, let $\ell = \gcd(a,b)$, and let $[\mu:\nu] \in \mathbb{P}^1$ be the unique solution to [\[eq:polynomial_minimum\]](#eq:polynomial_minimum){reference-type="eqref" reference="eq:polynomial_minimum"}. Let $c_d$ denote the coefficient of $t^d$ in the regularized quantum period $\widehat{G}_X(t)$. Then non-zero coefficients $c_d$ satisfy $$\log c_d \sim Ad - \frac{\dim{X}}{2} \log d + B$$ as $d \to \infty$, where $$\begin{aligned} A &= -\sum_{i=1}^N p_i \log p_i \\ B &= - \frac{\dim{X}}{2} \log(2 \pi) - \frac{1}{2} \sum_{i=1}^N \log p_i - \frac{1}{2} \log \left( \sum_{i=1}^N \frac{(a_i b - b_i a)^2}{ \ell^2 p_i} \right) \end{aligned}$$ and $p_i = \displaystyle \frac{\mu a_i + \nu b_i}{\mu a + \nu b}$.* *Proof.* We need to estimate $$c_d = \sum_{\substack{k \in \mathbb{Z}^2 \cap C \\ \text{with $\alpha \cdot k = d$}}} \frac{(\alpha\cdot k)!}{\prod_{i=1}^N (\alpha_i\cdot k)!}$$ Consider first the summands with $k \in \mathbb{Z}^2 \cap C$ such that $\alpha \cdot k = d$ and $k \not \in B_{R \sqrt{d}}$. For $d$ sufficiently large, each such summand is bounded by $c d^{-\frac{1+\dim{X}}{2}}$ for some constant $c$ -- see [\[eq:limit_of_multinomial\]](#eq:limit_of_multinomial){reference-type="eqref" reference="eq:limit_of_multinomial"}. Since the number of such summands grows linearly with $d$, in the limit $d \to \infty$ the contribution to $c_d$ from $k \not \in B_{R \sqrt{d}}$ vanishes. 
As $d \to \infty$, therefore $$c_d \sim \frac{1}{(2 \pi d)^{\frac{N-1}{2}} \prod_{i=1}^N p_i^{d p_i + \frac{1}{2}}} \sum_{\substack{k \in \mathbb{Z}^2 \cap C \cap B_{R \sqrt{d}}\\ \text{with $\alpha \cdot k = d$}}} \exp\left(-\frac{(k - x^*)^T A \, (k-x^*)}{2d}\right)$$ Writing $y_k = (k - x^*)/\sqrt{d}$, considering the sum here as a Riemann sum, and letting $R \to \infty$, we see that $$\begin{aligned} c_d \sim \frac{1}{(2 \pi d)^{\frac{N-1}{2}} \prod_{i=1}^N p_i^{d p_i+ \frac{1}{2}}} \sqrt{d} \int_{L_\alpha} \exp \big( {-\textstyle \frac{1}{2}}y^T A y \big) \, dy \end{aligned}$$ where $L_\alpha$ is the line through the origin given by $\ker \alpha$ and $dy$ is the measure on $L_\alpha$ given by the integer lattice $\mathbb{Z}^2 \cap L_\alpha \subset L_\alpha$. To evaluate the integral, let $$\begin{aligned} \alpha^\perp = \frac{1}{\ell}\begin{pmatrix} b\\-a \end{pmatrix} && \text{where $\ell = \gcd\{a,b\}$} \end{aligned}$$ and observe that the pullback of $dy$ along the map $\mathbb{R}\to L_\alpha$ given by $t \mapsto t \alpha^\perp$ is the standard measure on $\mathbb{R}$. Thus $$\int_{L_\alpha} \exp \big( {-\textstyle \frac{1}{2}}y^T A y \big) \, dy = \int_{-\infty}^\infty \exp \big( {-\textstyle \frac{1}{2}} \theta x^2 \big) \, dx = \sqrt{\frac{2 \pi}{\theta}}$$ where $\theta = \sum_{i=1}^N \frac{1}{\ell^2 p_i} ( \alpha_i \cdot \alpha^\perp)^2$, and $$c_d \sim \frac{1}{(2 \pi d)^{\frac{\dim{X}}{2}} \prod_{i=1}^N p_i^{d p_i + \frac{1}{2}} \sqrt{\theta}}$$ Taking logarithms gives the result. ◻ # Supplementary Notes {#sec:supp_notes} We begin with an introduction to weighted projective spaces and toric varieties, aimed at non-specialists. ### Projective spaces and weighted projective spaces {#projective-spaces-and-weighted-projective-spaces .unnumbered} The fundamental example of a Fano variety is two-dimensional projective space $\mathbb{P}^2$. 
This is a quotient of $\mathbb{C}^3 \setminus \{0\}$ by the group $\mathbb{C}^\times$, where the action of $\lambda \in \mathbb{C}^\times$ identifies the points $(x,y,z)$ and $(\lambda x, \lambda y, \lambda z)$ in $\mathbb{C}^3\setminus \{0\}$. The variety $\mathbb{P}^2$ is smooth: we can see this by covering it with three open sets $U_x$, $U_y$, $U_z$ that are each isomorphic to the plane $\mathbb{C}^2$: $$\begin{aligned} U_x &= \{(1,Y,Z)\} & \text{given by rescaling $x$ to 1} \\ U_y &= \{(X,1,Z)\} & \text{given by rescaling $y$ to 1} \\ U_z &= \{(X,Y,1)\} & \text{given by rescaling $z$ to 1} \end{aligned}$$ Here, for example, in the case $U_x$ we take $x \ne 0$ and set $Y = y/x$, $Z = z/x$. Although the projective space $\mathbb{P}^2$ is smooth, there are closely related Fano varieties called weighted projective spaces [@Dolgachev1982; @IanoFletcher2000] that have singularities. For example, consider the weighted projective plane $\mathbb{P}(1,2,3)$: this is the quotient of $\mathbb{C}^3 \setminus \{0\}$ by $\mathbb{C}^\times$, where the action of $\lambda \in \mathbb{C}^\times$ identifies the points $(x,y,z)$ and $(\lambda x, \lambda^2 y, \lambda^3 z)$. Let us write $$\mu_n = \{e^{2 \pi k \mathtt{i}/n}\mid k \in \mathbb{Z}\}$$ for the group of $n$th roots of unity. The variety $\mathbb{P}(1,2,3)$ is once again covered by open sets $$\begin{aligned} U_x &= \{(1,Y,Z)\} & \text{given by rescaling $x$ to 1} \\ U_y &= \{(X,1,Z)\} & \text{given by rescaling $y$ to 1} \\ U_z &= \{(X,Y,1)\} & \text{given by rescaling $z$ to 1} \end{aligned}$$ but this time we have $U_x \cong \mathbb{C}^2$, $U_y \cong \mathbb{C}^2/\mu_2$, and $U_z = \mathbb{C}^2/\mu_3$. This is because, for example, when we choose $\lambda \in \mathbb{C}^\times$ to rescale $(x,y,z)$ with $z \ne 0$ to $(X,Y,1)$, there are three possible choices for $\lambda$ and they differ by the action of $\mu_3$. In particular this lets us see that $\mathbb{P}(1,2,3)$ is singular. 
For example, functions on the chart $U_y \cong \mathbb{C}^2/\mu_2$ are polynomials in $X$ and $Z$ that are invariant under $X \mapsto -X$, $Z \mapsto -Z$, or in other words $$\begin{aligned} U_y &= \mathop{\mathrm{Spec}}\mathbb{C}[X^2, XZ, Z^2]\\ &= \mathop{\mathrm{Spec}}\mathbb{C}[a,b,c]/(ac-b^2) \end{aligned}$$ Thus the chart $U_y$ is the solution set for the equation $ac-b^2=0$, as pictured in Figure [15](#fig:A1){reference-type="ref" reference="fig:A1"}. Similarly, the chart $U_z \cong \mathbb{C}^2/\mu_3$ can be written as $$\begin{aligned} U_z &= \mathop{\mathrm{Spec}}\mathbb{C}[X^3, XY, Y^3]\\ &= \mathop{\mathrm{Spec}}\mathbb{C}[a,b,c]/(ac-b^3) \end{aligned}$$ and is the solution set to the equation $ac-b^3=0$, as pictured in Figure [16](#fig:A2){reference-type="ref" reference="fig:A2"}. The variety $\mathbb{P}(1,2,3)$ has singular points at $(0,1,0) \in U_y$ and $(0,0,1) \in U_z$, and away from these points it is smooth. ![](A1.pdf){#fig:A1 width="\\linewidth"} ![](A2.pdf){#fig:A2 width="\\linewidth"} There are weighted projective spaces of any dimension. Let $a_1,a_2,\ldots,a_N$ be positive integers such that any subset of size $N-1$ has no common factor, and consider $$\mathbb{P}(a_1,a_2,\ldots,a_N) = (\mathbb{C}^N \setminus \{0\})/\mathbb{C}^\times$$ where the action of $\lambda \in \mathbb{C}^\times$ identifies the points $$(x_1,x_2,\ldots,x_N) \qquad \text{and} \qquad (\lambda^{a_1} x_1, \lambda^{a_2} x_2, \ldots, \lambda^{a_N} x_N)$$ in $\mathbb{C}^N \setminus \{0\}$. The quotient $\mathbb{P}(a_1,a_2,\ldots,a_N)$ is an algebraic variety of dimension $N-1$. A general point of $\mathbb{P}(a_1,a_2,\ldots,a_N)$ is smooth, but there can be singular points. Indeed, $\mathbb{P}(a_1,a_2,\ldots,a_N)$ is covered by $N$ open sets $$U_i = \{(X_1,\ldots,X_{i-1},1,X_{i+1},\ldots,X_N)\} \qquad i \in \{1,2,\ldots,N\}$$ given by rescaling $x_i$ to 1; here we take $x_i \ne 0$ and set $X_j = x_j/x_i$. 
The chart $U_i$ is isomorphic to $\mathbb{C}^{N-1}/\mu_{a_i}$, where $\mu_{a_i}$ acts on $\mathbb{C}^{N-1}$ with weights $a_j$, $j \ne i$. In Reid's notation, this is the cyclic quotient singularity $\frac{1}{a_i}(a_1,\ldots,\widehat{a}_i,\ldots,a_N)$; it is smooth if and only if $a_i = 1$. The topology of weighted projective space is very simple, with $$H^k\big(\mathbb{P}(a_1,a_2,\ldots,a_N); \mathbb{Q}\big) = \begin{cases} \mathbb{Q}& \text{if $0 \leq k \leq 2N-2$ and $k$ is even;} \\ 0 & \text{otherwise.} \end{cases}$$ Hence every weighted projective space has second Betti number $b_2 = 1$. There is a closed formula [@CoatesCortiGalkinKasprzyk2016 Proposition D.9] for the regularized quantum period of $X = \mathbb{P}(a_1,a_2,\ldots,a_N)$: $$\label{eq:Ghat_wP_appendix} \widehat{G}_X(t) = \sum_{k = 0}^\infty \frac{(ak)!}{(a_1 k)! (a_2 k)! \cdots (a_N k)!} t^{ak}$$ where $a = a_1 + a_2 + \cdots + a_N$. ### Toric varieties of Picard rank 2 {#toric-varieties-of-picard-rank-2-1 .unnumbered} As well as weighted projective spaces, which are quotients of $\mathbb{C}^N \setminus \{0\}$ by an action of $\mathbb{C}^\times$, we will consider varieties that arise as quotients of $\mathbb{C}^N \setminus S$ by $\mathbb{C}^\times\times \mathbb{C}^\times$, where $S$ is a union of linear subspaces. These are examples of *toric varieties* [@Fulton1993; @CoxLittleSchenk2011]. Specifically, consider a matrix $$\label{eq:weights_appendix} \begin{pmatrix} a_1 & a_2 & \cdots & a_N \\ b_1 & b_2 & \cdots & b_N \end{pmatrix}$$ with non-negative integer entries and no zero columns. This defines an action of $\mathbb{C}^\times\times \mathbb{C}^\times$ on $\mathbb{C}^N$, where $(\lambda,\mu) \in \mathbb{C}^\times\times \mathbb{C}^\times$ identifies the points $$\begin{aligned} (x_1,x_2,\ldots,x_N) && \text{and} && (\lambda^{a_1} \mu^{b_1} x_1, \lambda^{a_2} \mu^{b_2} x_2, \ldots, \lambda^{a_N} \mu^{b_N} x_N) \end{aligned}$$ in $\mathbb{C}^N$.
Set $a = a_1 + a_2 + \ldots + a_N$ and $b = b_1 + b_2 + \cdots + b_N$, and suppose that $(a,b)$ is not a scalar multiple of $(a_i,b_i)$ for any $i$. This determines linear subspaces $$\begin{aligned} S_+ & = \left\{ (x_1,x_2,\ldots,x_N)\mid \text{$x_i = 0$ if $b_i/a_i < b/a$} \right\} \\ S_- & = \left\{ (x_1,x_2,\ldots,x_N)\mid \text{$x_i = 0$ if $b_i/a_i > b/a$} \right\} \end{aligned}$$ of $\mathbb{C}^N$, and we consider the quotient $$\label{eq:rank_2_appendix} X = (\mathbb{C}^N \setminus S)/(\mathbb{C}^\times\times \mathbb{C}^\times)$$ where $S = S_+ \cup S_-$. See e.g. [@BrownCortiZucconi2004 §A.5]. These quotients behave in many ways like weighted projective spaces. Indeed, if we take the weight matrix [\[eq:weights_appendix\]](#eq:weights_appendix){reference-type="eqref" reference="eq:weights_appendix"} to be $$\begin{pmatrix} a_1 & a_2 & \cdots & a_N & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix}$$ then $X$ coincides with $\mathbb{P}(a_1,a_2,\ldots,a_N)$. We will consider only weight matrices such that the subspaces $S_+$ and $S_-$ both have dimension two or more; this implies that the second Betti number $b_2(X) = 2$, and hence $X$ is not a weighted projective space. We will refer to such quotients [\[eq:rank_2\_appendix\]](#eq:rank_2_appendix){reference-type="eqref" reference="eq:rank_2_appendix"} as *toric varieties of Picard rank two*, because general theory implies that the Picard lattice of $X$ has rank two. The dimension of $X$ is $N-2$. As for weighted projective spaces, toric varieties of Picard rank two can have singular points, the precise form of which is determined by the weights [\[eq:weights_appendix\]](#eq:weights_appendix){reference-type="eqref" reference="eq:weights_appendix"}. There is also a closed formula [@CoatesCortiGalkinKasprzyk2016 Proposition C.2] for the regularized quantum period. Let $C$ denote the cone in $\mathbb{R}^2$ defined by the equations $a_i x + b_i y \geq 0$, $i \in \{1,2,\ldots,N\}$. 
Then $$\label{eq:Ghat_rank_two_appendix} \widehat{G}_X(t) =\!\! \sum_{(k, l) \in \mathbb{Z}^2 \cap C} \frac{(ak + bl)!}{(a_1 k+b_1 l)! (a_2 k + b_2 l)! \cdots (a_N k + b_N l)!} t^{ak + bl}$$ ### Classification results {#classification-results .unnumbered} Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four; see Table [1](#tab:wps_low_dim){reference-type="ref" reference="tab:wps_low_dim"} for a summary. There are $35$ three-dimensional Fano toric varieties with terminal quotient singularities and Picard rank two [@Kasprzyk2006]. There is no known classification of Fano toric varieties with terminal quotient singularities in higher dimension, even when the Picard rank is two. ---------------- ---------------- --------------------- --------------------- Dimension 1 2 3 4 $\mathbb{P}^1$ $\mathbb{P}^2$ see [@Kasprzyk2006] see [@Kasprzyk2013] ---------------- ---------------- --------------------- --------------------- : The classification of low-dimensional weighted projective spaces with terminal quotient singularities. # Supplementary Methods 1 {#sec:supp_methods_1} ### Data analysis: weighted projective spaces {#data-analysis-weighted-projective-spaces-1 .unnumbered} We computed an initial segment $(c_0,c_1,\ldots,c_m)$ of the regularized quantum period, with $m \approx 100\,000$, for all the examples in the sample of $150\,000$ weighted projective spaces with terminal quotient singularities. We then considered $\{\log c_d\}_{d\in S}$ where $S= \{d \in \mathbb{Z}_{\geq 0}\mid c_d \neq 0\}$. To reduce dimension we fitted a linear model to the set $\{(d, \log c_d)\mid d \in S\}$ and used the slope and intercept of this model as features. The linear fit produces a close approximation of the data. 
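For a concrete (and here hypothetical) illustration of this feature extraction, take $X = \mathbb{P}(1,2,3)$: the closed formula for the regularized quantum period gives non-zero coefficients $c_{6k} = (6k)!/\big(k!\,(2k)!\,(3k)!\big)$, and the predicted slope is $A = -\sum_i p_i \log p_i$ with $p = (1/6, 1/3, 1/2)$. The sketch below (pure Python) computes $\log c_d$ via log-Gamma, fits the least-squares line to $\{(d, \log c_d)\}$, and compares:

```python
import math

weights = [1, 2, 3]          # illustrative example: P(1,2,3)
a = sum(weights)             # = 6; c_d = 0 unless a divides d

def log_c(k):
    # log of c_{ak} = (ak)!/prod_i (a_i k)!  computed via log-Gamma
    return math.lgamma(a * k + 1) - sum(math.lgamma(w * k + 1) for w in weights)

pts = [(a * k, log_c(k)) for k in range(1, 201)]   # non-zero coefficients up to d = 1200

# least-squares fit of log c_d against d
n = len(pts)
mx = sum(d for d, _ in pts) / n
my = sum(y for _, y in pts) / n
slope = sum((d - mx) * (y - my) for d, y in pts) / sum((d - mx) ** 2 for d, _ in pts)
intercept = my - slope * mx

p = [w / a for w in weights]
A = -sum(q * math.log(q) for q in p)   # predicted slope
print(slope, A)
```

The fitted slope agrees with $A$ to roughly the third decimal place; the small discrepancy comes from the $-\frac{\dim X}{2} \log d$ term, which the two-parameter fit absorbs imperfectly — the same effect that makes the $y$-intercept a noisier feature than the slope.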
Figure [\[fig:error_histogram_pr1\]](#fig:error_histogram_pr1){reference-type="ref" reference="fig:error_histogram_pr1"} shows the distribution of the standard errors for the slope and the $y$-intercept: the errors for the slope are between $3.9\times 10^{-8}$ and $1.4\times 10^{-5}$, and the errors for the $y$-intercept are between $0.0022$ and $0.82$. As we will see below, the standard error for the $y$-intercept is a good proxy for the accuracy of the linear model. This accuracy decreases as the dimension grows -- see Figure [19](#fig:intercept_histogram_by_dimension_pr1){reference-type="ref" reference="fig:intercept_histogram_by_dimension_pr1"} -- but we will see below that this does not affect the accuracy of the machine learning classification. ![](standard_error_slope_pr1.png){#fig:slope_histogram_pr1 width="\\linewidth"} ![](standard_error_intercept_pr1.png){#fig:intercept_histogram_pr1 width="\\linewidth"} \ ![](standard_error_by_dimension_pr1.png){#fig:intercept_histogram_by_dimension_pr1 width="\\linewidth"} ### Data analysis: toric varieties of Picard rank 2 {#data-analysis-toric-varieties-of-picard-rank-2 .unnumbered} We fitted a linear model to the set $\{(d, \log c_d)\mid d \in S\}$ where $S= \{d \in \mathbb{Z}_{\geq 0} \mid c_d \neq 0\}$, and used the slope and intercept of this linear model as features. The distribution of standard errors for the slope and $y$-intercept of the linear model are shown in Figure [\[fig:error_histogram_pr2\]](#fig:error_histogram_pr2){reference-type="ref" reference="fig:error_histogram_pr2"}. The standard errors for the slope are small compared to the range of slopes, but in many cases the standard error for the $y$-intercept is relatively large. As Figure [\[fig:comparison_errors\]](#fig:comparison_errors){reference-type="ref" reference="fig:comparison_errors"} illustrates, discarding data points where the standard error $s_{\text{int}}$ for the $y$-intercept exceeds some threshold reduces apparent noise. 
As discussed above, we believe that this reflects inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence. ![](standard_error_slope_pr2.png){#fig:slope_histogram width="\\linewidth"} ![](standard_error_intercept_pr2.png){#fig:intercept_histogram width="\\linewidth"} ![](colour_triangle_zoom.png){#fig:all_standard_error width="\\linewidth"} ![](colour_triangle_zoom_1.png){#fig:less_1_standard_error width="\\linewidth"} ![](colour_triangle_zoom_0-3.png){#fig:less_0.2_standard_error width="\\linewidth"}   **Example 10**. Let us consider in more detail the toric variety from Example [Example 3](#eg:rank_2_growth){reference-type="ref" reference="eg:rank_2_growth"}. In Figure [\[fig:linear_approximations_sm\]](#fig:linear_approximations_sm){reference-type="ref" reference="fig:linear_approximations_sm"} we plot $\log c_d$ along with its linear approximation. Figure [25](#fig:linear_approximation_0_250){reference-type="ref" reference="fig:linear_approximation_0_250"} shows only the first $250$ terms, whilst Figure [26](#fig:linear_approximation_1000_1250){reference-type="ref" reference="fig:linear_approximation_1000_1250"} shows the interval between the $1000$th and the $1250$th term. We see considerable deviation from the linear approximation among the first 250 terms; the deviation reduces for larger $d$. ![](linear_approx_first250.png){#fig:linear_approximation_0_250 width="\\linewidth"} ![](linear_approx_last250.png){#fig:linear_approximation_1000_1250 width="\\linewidth"} # Supplementary Methods 2 {#sec:supp_methods_2} We performed our experiments using scikit-learn [@scikit-learn], a standard machine learning library for Python. The computations that produced the data shown in Figure [10](#fig:hedgehogs){reference-type="ref" reference="fig:hedgehogs"} were performed using Mathematica [@Mathematica]. 
All code required to replicate the results in this paper is available from Bitbucket under an MIT license [@supporting-code]. ### Weighted projective spaces {#weighted-projective-spaces .unnumbered} We excluded dimensions one and two from the analysis, since there is only one weighted projective space in each case (namely $\mathbb{P}^1$ and $\mathbb{P}^2$). Therefore we have a dataset of $149\,998$ slope-intercept pairs, labelled by the dimension, which varies between three and ten. We standardised the features by translating the means to zero and scaling to unit variance, and applied a Support Vector Machine (SVM) with linear kernel and regularisation parameter $C=10$. By looking at different train--test splits we obtained the learning curves shown in Figure [27](#fig:learning_curve_sv_pr1){reference-type="ref" reference="fig:learning_curve_sv_pr1"}. The figure displays the mean accuracies for both training and validation data obtained by performing five random train--test splits each time: the shaded areas around the lines correspond to the $1\sigma$ region, where $\sigma$ denotes the standard deviation. Using $10\%$ (or more) of the data for training we obtained an accuracy of $99.99\%$. In Figure [28](#fig:decision_boundaries_pr1){reference-type="ref" reference="fig:decision_boundaries_pr1"} we plot the decision boundaries computed by the SVM between neighbouring dimension classes. ![Learning curves for a Support Vector Machine with linear kernel applied to the dataset of weighted projective spaces. The plot shows the means of the training and validation accuracies for five different random train--test splits. The shaded regions show the $1\sigma$ interval, where $\sigma$ is the standard deviation.](learning_curve_svm_pr1.png){#fig:learning_curve_sv_pr1 width="40%"} ![Decision boundaries computed from a Support Vector Machine with linear kernel trained on $70\%$ of the dataset of weighted projective spaces.
Note that the data has been standardised.](colour_triangle_decision_boundaries_pr1.png){#fig:decision_boundaries_pr1 width="40%"} ### Toric varieties of Picard rank 2 {#toric-varieties-of-picard-rank-2-2 .unnumbered} In light of the discussion above, we restricted attention to toric varieties with Picard rank two such that the $y$-intercept standard error $s_{\text{int}}$ is less than $0.3$. We also excluded dimension two from the analysis, since in this case there are only two varieties (namely, $\mathbb{P}^1\times\mathbb{P}^1$ and the Hirzebruch surface $\mathbb{F}_1$). The resulting dataset contains $67\,443$ slope-intercept pairs, labelled by dimension; the dimension varies between three and ten, as shown in Table [2](#tab:filtered_rank_2){reference-type="ref" reference="tab:filtered_rank_2"}. Rank-two toric varieties with $s_{\text{int}}< 0.3$ ----------------------------------------------------- ------------- ------------ Dimension Sample size Percentage 3 17 0.025 4 758 1.124 5 5 504 8.161 6 12 497 18.530 7 16 084 23.848 8 13 701 20.315 9 10 638 15.773 10 8 244 12.224 Total 67 443 : The distribution by dimension among toric varieties of Picard rank two in our dataset with $s_{\text{int}}< 0.3$. ### Support Vector Machine {#support-vector-machine .unnumbered} We used a linear SVM with regularisation parameter $C=50$. By considering different train--test splits we obtained the learning curves shown in Figure [29](#fig:learning_curve_svm_pr2){reference-type="ref" reference="fig:learning_curve_svm_pr2"}, where the means and the standard deviations were obtained by performing five random samples for each split. Note that the model did not overfit. We obtained a validation accuracy of $88.2\%$ using $70\%$ of the data for training. Figure [30](#fig:decision_boundaries_pr2){reference-type="ref" reference="fig:decision_boundaries_pr2"} shows the decision boundaries computed by the SVM between neighbouring dimension classes. 
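Concretely, the standardise-then-classify pipeline used throughout this section can be sketched in a few lines of scikit-learn. The data below is a synthetic stand-in (Gaussian blobs in the slope–intercept plane, one per dimension label); everything except the scaler, the linear kernel, and $C=50$ is an illustrative choice, not our actual dataset:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the (slope, y-intercept) features: one Gaussian
# blob per dimension label.  The real features are the linear-fit data
# described above; this data is illustrative only.
def sample(rng, n_per_class=500):
    X = np.vstack([rng.normal(loc=(dim, -dim), scale=0.3, size=(n_per_class, 2))
                   for dim in range(3, 11)])
    y = np.repeat(np.arange(3, 11), n_per_class)
    return X, y

rng = np.random.default_rng(0)
X_train, y_train = sample(rng)
X_test, y_test = sample(rng)

# Standardise the features, then fit a linear-kernel SVM (C = 50, as in the text)
model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=50))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

With well-separated blobs the test accuracy is close to $1$; on the real features the classes overlap near their boundaries, which is what the decision-boundary and confusion-matrix plots quantify.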
Figure [\[fig:confusion_matrices_svm\]](#fig:confusion_matrices_svm){reference-type="ref" reference="fig:confusion_matrices_svm"} shows the confusion matrices for the same train--test split. ![Learning curves for a Support Vector Machine with linear kernel applied to the dataset of toric varieties of Picard rank two. The plot shows the means of the training and validation accuracies for five different random train--test splits. The shaded regions show the $1\sigma$ interval, where $\sigma$ is the standard deviation.](learning_curve_svm_pr2.png){#fig:learning_curve_svm_pr2 width="40%"} ![Decision boundaries computed from a Support Vector Machine with linear kernel trained on $70\%$ of the dataset of toric varieties of Picard rank two. Note that the data has been standardised.](colour_triangle_decision_boundaries_pr2.png){#fig:decision_boundaries_pr2 width="40%"} ![Confusion matrix normalised with respect to the true values.](true_confusion_svm.png){width="\\linewidth"} ![Confusion matrix normalised with respect to the predicted values.](predicted_confusion_svm.png){width="\\linewidth"} ### Random Forest Classifier {#random-forest-classifier .unnumbered} We used a Random Forest Classifier (RFC) with $1500$ estimators and the same features (slope and $y$-intercept of the linear model). By considering different train--test splits we obtained the learning curves shown in Figure [31](#fig:learning_curve_rfc_pr2){reference-type="ref" reference="fig:learning_curve_rfc_pr2"}; note again that the model did not overfit. Using $70\%$ of the data for training, the RFC gave a validation accuracy of $89.4\%$. Figure [\[fig:confusion_matrices_rfc\]](#fig:confusion_matrices_rfc){reference-type="ref" reference="fig:confusion_matrices_rfc"} shows confusion matrices for the same train--test split. ![Learning curves for a Random Forest Classifier applied to the dataset of toric varieties of Picard rank two.
The plot shows the means of the training and validation accuracies for five different random train--test splits. The shaded regions show the $1\sigma$ interval, where $\sigma$ is the standard deviation.](learning_curve_rfc_pr2.png){#fig:learning_curve_rfc_pr2 width="40%"} ![Confusion matrix normalised with respect to the true values.](true_confusion_rfc.png){width="\\linewidth"} ![Confusion matrix normalised with respect to the predicted values.](predicted_confusion_rfc.png){width="\\linewidth"} ### Feed-forward neural network {#feed-forward-neural-network .unnumbered} As discussed above, neural networks do not handle unbalanced datasets well, and therefore we removed the toric varieties with dimensions three, four, and five from our dataset: see Table [2](#tab:filtered_rank_2){reference-type="ref" reference="tab:filtered_rank_2"}. We trained a Multilayer Perceptron (MLP) classifier on the same features, using an MLP with three hidden layers $(10,30,10)$, Adam optimiser [@KingmaDiederikBa2014], and rectified linear activation function [@AgarapAbien2018]. Different train--test splits produced the learning curve in Figure [32](#fig:learning_curve_nn){reference-type="ref" reference="fig:learning_curve_nn"}; again the model did not overfit. Using $70\%$ of the data for training, the MLP gave a validation accuracy of $88.7\%$. One could further balance the dataset, by randomly undersampling so that there are the same number of representatives in each dimension (8244 representatives: see Table [2](#tab:filtered_rank_2){reference-type="ref" reference="tab:filtered_rank_2"}). This resulted in a slight decrease in accuracy: the better balance was outweighed by loss of data caused by undersampling. ![Learning curves for a Multilayer Perceptron classifier $\mathop{\mathrm{MLP}}_2$ applied to the dataset of toric varieties of Picard rank two and dimension at least six, using just the regression data as features. 
The plot shows the means of the training and validation accuracies for five different random train--test splits. The shaded regions show the $1\sigma$ interval, where $\sigma$ is the standard deviation.](learning_curve_mlp_pr2.png){#fig:learning_curve_nn width="40%"} ### Feed-forward neural network with many features {#feed-forward-neural-network-with-many-features .unnumbered} We trained an MLP with the same architecture, but supplemented the features by including $\log c_d$ for $1 \leq d \leq 100$ (unless $c_d$ was zero in which case we set that feature to zero), as well as the slope and $y$-intercept as before. We refer to the previous neural network as $\mathop{\mathrm{MLP}}_2$, because it uses 2 features, and refer to this neural network as $\mathop{\mathrm{MLP}}_{102}$, because it uses 102 features. Figure [33](#fig:learning_curve_nn_more){reference-type="ref" reference="fig:learning_curve_nn_more"} shows the learning curves obtained for different train--test splits. Using $70\%$ of the data for training, the $\mathop{\mathrm{MLP}}_{102}$ model gave a validation accuracy of $97.7\%$. ![Learning curves for a Multilayer Perceptron classifier $\mathop{\mathrm{MLP}}_{102}$ applied to the dataset of toric varieties of Picard rank two and dimension at least six, using as features the regression data as well as $\log c_d$ for $1 \leq d \leq 100$. The plot shows the means of the training and validation accuracies for five different random train--test splits. The shaded regions show the $1\sigma$ interval, where $\sigma$ is the standard deviation.](learning_curve_mlp_pr2_more.png){#fig:learning_curve_nn_more width="40%"} We do not understand the reason for the performance improvement between $\mathop{\mathrm{MLP}}_{102}$ and $\mathop{\mathrm{MLP}}_2$. But one possible explanation is the following. 
Recall that the first 1000 terms of the period sequence were excluded when calculating the slope and intercept, because they exhibit irregular oscillations that decay as $d$ grows. These oscillations reduce the accuracy of the linear regression. The oscillations may, however, carry information about the toric variety, and so including the first few values of $\log(c_d)$ potentially makes more information available to the model. For example, examining the pattern of zeroes at the beginning of the sequence $(c_d)$ sometimes allows one to recover the values of $a$ and $b$ -- see [\[eq:Ghat_rank_two_appendix\]](#eq:Ghat_rank_two_appendix){reference-type="eqref" reference="eq:Ghat_rank_two_appendix"} for the notation. This information is relevant to estimating the dimension because, as a very crude approximation, larger $a$ and $b$ go along with larger dimension. Omitting the slope and intercept, however, and training on the coefficients $\log c_d$ for $1 \leq d \leq 100$ with the same architecture gave an accuracy of only 62%. ### Comparison of models {#comparison-of-models .unnumbered} The validation accuracies of the SVM, RFC, and the neural networks $\mathop{\mathrm{MLP}}_2$ and $\mathop{\mathrm{MLP}}_{102}$, on the same data set ($s_{\text{int}}<0.3$, dimension between six and ten), are compared in Table [3](#tab:compare_models){reference-type="ref" reference="tab:compare_models"}. Their confusion matrices are shown in Table [4](#tab:compare_models_cm){reference-type="ref" reference="tab:compare_models_cm"}. All models trained on only the regression data performed well, with the RFC slightly more accurate than the SVM and the neural network $\mathop{\mathrm{MLP}}_2$ slightly more accurate still. Misclassified examples are generally in higher dimension, which is consistent with the idea that misclassification is due to convergence-related noise. The neural network trained on the supplemented feature set, $\mathop{\mathrm{MLP}}_{102}$, outperforms all other models. 
However, as discussed above, feature importance analysis using SHAP values showed that the slope and the intercept were the most influential features in the prediction.

  ----------- ---------- ---------- --------------------------- -------------------------------
   ML models     SVM        RFC      $\mathop{\mathrm{MLP}}_2$   $\mathop{\mathrm{MLP}}_{102}$
   Accuracy    $87.7\%$   $88.6\%$          $88.7\%$                      $97.7 \%$
  ----------- ---------- ---------- --------------------------- -------------------------------

  : Comparison of model accuracies. Accuracies for various models applied to the dataset of toric varieties of Picard rank two and dimension at least six: a Support Vector Machine with linear kernel, a Random Forest Classifier, and the neural networks $\mathop{\mathrm{MLP}}_2$ and $\mathop{\mathrm{MLP}}_{102}$.

  ------------------------------- ----------------------------------------------------------- ----------------------------------------------------------------
   Model                           True confusion matrix                                       Predicted confusion matrix
   SVM                             ![image](true_confusion_svm_nn.png){width="\\linewidth"}    ![image](predicted_confusion_svm_nn.png){width="\\linewidth"}
   RFC                             ![image](true_confusion_rfc_nn.png){width="\\linewidth"}    ![image](predicted_confusion_rfc_nn.png){width="\\linewidth"}
   $\mathop{\mathrm{MLP}}_2$       ![image](true_confusion_nn.png){width="\\linewidth"}        ![image](predicted_confusion_nn.png){width="\\linewidth"}
   $\mathop{\mathrm{MLP}}_{102}$   ![image](true_confusion_nn_more.png){width="\\linewidth"}   ![image](predicted_confusion_nn_more.png){width="\\linewidth"}
  ------------------------------- ----------------------------------------------------------- ----------------------------------------------------------------

  : Comparison of confusion matrices. Confusion matrices for various models applied to the dataset of toric varieties of Picard rank two and dimension at least six: a Support Vector Machine with linear kernel, a Random Forest Classifier, and the neural networks $\mathop{\mathrm{MLP}}_2$ and $\mathop{\mathrm{MLP}}_{102}$.
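For concreteness, the 102-dimensional feature vector used by $\mathop{\mathrm{MLP}}_{102}$ can be assembled as follows. This sketch is an illustrative reconstruction from the description in the text, not the authors' code: the helper name `features_102`, the fitting window, and the synthetic coefficient sequence (with made-up growth coefficients and an index-two divisibility pattern) are our assumptions; only the convention of setting the feature to zero when $c_d=0$, the least-squares fit to $\log c_d$, and the inclusion of $\log c_d$ for $1 \leq d \leq 100$ come from the text.

```python
import numpy as np

def features_102(c, fit_lo=1001, fit_hi=2000):
    """Assemble [slope, intercept, log c_1, ..., log c_100], with log c_d
    replaced by 0 whenever c_d = 0.  The linear fit uses d in [fit_lo, fit_hi]
    and skips the zero terms, mirroring the regression described in the text."""
    d = np.arange(fit_lo, fit_hi + 1)
    vals = c[d]
    nz = vals > 0
    slope, intercept = np.polyfit(d[nz], np.log(vals[nz]), 1)
    head = np.zeros(100)
    first = c[1:101]
    head[first > 0] = np.log(first[first > 0])
    return np.concatenate([[slope, intercept], head])

# Synthetic stand-in: log c_d = A*d + B on even d, and c_d = 0 on odd d
# (mimicking the divisibility pattern for Fano index 2).  A, B are made up.
A, B = 0.1, -2.0
d_all = np.arange(0, 2001)
c = np.where(d_all % 2 == 0, np.exp(A * d_all + B), 0.0)
c[0] = 0.0

x = features_102(c)
print(len(x), x[0], x[1])   # 102 features; x[0] ~ A, x[1] ~ B
```

On this exactly log-linear synthetic sequence the first two features recover the stand-in growth coefficients `A` and `B`, and `x[2]` is zero because $c_1=0$.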
# Supplementary Discussion ### Comparison with Principal Component Analysis {#comparison-with-principal-component-analysis .unnumbered} An alternative approach to dimensionality reduction, rather than fitting a linear model to $\log c_d$, would be to perform Principal Component Analysis (PCA) on this sequence and retain only the first few principal components. Since the vectors $(c_d)$ have different patterns of zeroes -- $c_d$ is non-zero only if $d$ is divisible by the Fano index $r$ of $X$ -- we need to perform PCA for Fano varieties of each index $r$ separately. We analysed this in the weighted projective space case, finding that for each $r$ the first two components of PCA are related to the growth coefficients $(A,B)$ from Theorem [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} by an invertible affine-linear transformation. That is, our analysis suggests that the coefficients $(A,B)$ contain exactly the same information as the first two components of PCA. Note, however, that the affine-linear transformation that relates PCA to $(A,B)$ varies with the Fano index $r$. Using $A$ and $B$ as features therefore allows for meaningful comparison between Fano varieties of different index. Furthermore, unlike PCA-derived values, the coefficients $(A,B)$ can be computed for a single Fano variety, rather than requiring a sufficiently large collection of Fano varieties of the same index. ### Towards more general Fano varieties {#towards-more-general-fano-varieties .unnumbered} Weighted projective spaces and toric varieties of Picard rank two are very special among Fano varieties. It is hard to quantify this, because so little is known about Fano classification in the higher-dimensional and non-smooth cases, but for example this class includes only 18% of the $\mathbb{Q}$-factorial terminal Fano toric varieties in three dimensions. 
On the other hand, one can regard weighted projective spaces and toric varieties of Picard rank two as representative of a much broader class of algebraic varieties called toric complete intersections. Toric complete intersections share the key properties that we used to prove Theorems [Theorem 5](#thm:wps){reference-type="ref" reference="thm:wps"} and [Theorem 6](#thm:rank_2){reference-type="ref" reference="thm:rank_2"} -- geometry that is tightly controlled by combinatorics, including explicit expressions for genus-zero Gromov--Witten invariants in terms of hypergeometric functions -- and we believe that the rigorous results of this paper will generalise to the toric complete intersection case. All smooth two-dimensional Fano varieties and 92 of the 105 smooth three-dimensional Fano varieties are toric complete intersections [@CoatesCortiGalkinKasprzyk2016]. Many theorems in algebraic geometry were first proved for toric varieties and later extended to toric complete intersections and more general algebraic varieties; cf. [@Givental1996; @Givental1998; @GrossSiebert2019] and [@Givental2001; @Teleman2012]. The machine learning paradigm presented here, however, applies much more broadly. Since our models take only the regularized quantum period sequence as input, we expect that whenever we can calculate $\widehat{G}_X$ -- which is the case for almost all known Fano varieties -- we should be able to apply a machine learning pipeline to extract geometric information about $X$. ## Data availability {#data-availability .unnumbered} Our datasets [@wpsData; @rank2Data] and the code for the Magma computer algebra system [@BosmaCannonPlayoust1997] that was used to generate them are available from Zenodo [@Zenodo] under a CC0 license. The data was collected using Magma V2.25-4. ## Code availability {#code-availability .unnumbered} All code required to replicate the results in this paper is available from Bitbucket under an MIT license [@supporting-code]. 
## Acknowledgements {#acknowledgements .unnumbered} TC is funded by ERC Consolidator Grant 682603 and EPSRC Programme Grant EP/N03189X/1. AK is funded by EPSRC Fellowship EP/N022513/1. SV is funded by the EPSRC Centre for Doctoral Training in Geometry and Number Theory at the Interface, grant number EP/L015234/1. We thank Giuseppe Pitton for conversations and experiments that began this project, and thank John Aston and Louis Christie for insightful conversations and feedback. We also thank the anonymous referees for their careful reading of the text and their insightful comments, which substantially improved both the content and the presentation of the paper.
arXiv:2309.05473: *Machine learning the dimension of a Fano variety*, by Tom Coates, Alexander M. Kasprzyk, Sara Veneziale (math.AG, cs.LG; CC BY 4.0).
---
abstract: |
  This paper explores the phenomena of enhanced dissipation and Taylor dispersion in solutions to the passive scalar equations subject to time-dependent shear flows. The hypocoercivity functionals with carefully tuned time weights are applied in the analysis. We observe that as long as the critical points of the shear flow vary slowly, one can derive the sharp enhanced dissipation and Taylor dispersion estimates, mirroring the ones obtained for the time-stationary case.
author:
- "Daniel Coble[^1]"
- "Siming He [^2]"
bibliography:
- SimingBib.bib
- nonlocal_eqns.bib
- JacobBib.bib
title: A Note on Enhanced Dissipation and Taylor Dispersion of Time-dependent Shear Flows
---

# Introduction

In this paper, we consider the passive scalar equations $$\begin{aligned} \label{PS} \partial_t f+V(t,y)\partial_x f=\nu \Delta_\sigma f,\quad f(t=0,x,y)=f_0(x,y). \end{aligned}$$ Here $f$ denotes the density of the transported substance, and $(V(t,y),0)$ is a time-dependent shear flow. The parameter $\nu>0$ is the inverse Péclet number, capturing the ratio between the diffusion and transport effects in the process. Here $\Delta_\sigma=\sigma\partial_{x}^2+\partial_y^2,\, \sigma\in\{0,1\}.$ We consider three types of domains: ${\mathbb T}\times \mathbb{R},\, {\mathbb T}^2, \, \mathbb{R}\times [-1,1]$. The torus ${\mathbb T}$ is normalized such that ${\mathbb T}=[-\pi,\pi].$ In recent years, much research has been devoted to studying enhanced dissipation and Taylor dispersion phenomena associated with the equation [\[PS\]](#PS){reference-type="eqref" reference="PS"} in the regime $0<\nu\ll1.$ To understand these phenomena, we first identify the relevant time scale of the problem.
The standard $L^2$-energy estimate yields the following energy dissipation equality: $$\begin{aligned} \frac{d}{dt}\|f\|_{L^2}^2=-2\nu \sigma\|\partial_x f\|_{L^2}^2-2\nu\|\partial_y f\|_{L^2}^2.\label{engy_rel} \end{aligned}$$ Hence, at least formally, we expect that the energy ($L^2$-norm) of the solution decays to half of its original value on a long time scale $\mathcal{O}(\nu^{-1}).$ This is called the "heat dissipation time scale". However, a natural question remains: since the fluid transport can create growth of the density gradient $\nabla f$, which strengthens the damping effect in [\[engy_rel\]](#engy_rel){reference-type="eqref" reference="engy_rel"}, can one derive a better decay estimate for the solution to [\[PS\]](#PS){reference-type="eqref" reference="PS"}? This question was answered by Lord Kelvin in 1887 for the special flow $V(t,y)=y$ (Couette flow) [@Kelvin87]. He solved the equation [\[PS\]](#PS){reference-type="eqref" reference="PS"} explicitly and read off the exact decay rate through the Fourier transform. To present his observation, we first restrict ourselves to the cylinder ${\mathbb T}\times \mathbb{R}$ or torus ${\mathbb T}^2$ and define the concepts of horizontal average and remainder: $$\begin{aligned} \label{x_av_rm} \langle f\rangle(y) =\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x,y)dx,\quad f_{\neq}(x,y)=f(x,y)-\langle f\rangle(y).\end{aligned}$$ We observe that the $x$-average $\langle f\rangle$ of the solution to [\[PS\]](#PS){reference-type="eqref" reference="PS"} is also a solution to the heat equation. Hence it decays with rate $\nu$. On the other hand, the remainder $f_{\neq}$ still solves the passive scalar equation [\[PS\]](#PS){reference-type="eqref" reference="PS"} with $f_{\neq}(t=0,x,y)=f_{0;{\neq}}(x,y)$, and something nontrivial can be said.
Lord Kelvin showed that there exist constants $C,\,\delta>0$ such that the following estimate holds: $$\begin{aligned} \label{ed_mon_sharp} \|f_{{\neq}}(t)\|_{L^2}\leq C\|f_{0;{\neq}}\|_{L^2}e^{-\delta\nu^{1/3}t},\quad \forall t\geq 0. \end{aligned}$$ One can see that significant decay of the remainder happens on the time scale $\mathcal{O}(\nu^{-1/3})$, which is much shorter than the heat dissipation time scale. This phenomenon is called *enhanced dissipation*. However, new challenges arise when one considers shear flows different from the Couette flow. In these cases, no direct Fourier analytic proof is available at this point. We focus on two families of shear flows, i.e., strictly monotone shear flows and nondegenerate shear flows. In the paper [@BCZ15], J. Bedrossian and M. Coti Zelati apply hypocoercivity techniques to show that for *stationary* strictly monotone shear flows $\{(V(y),0)|\inf|V'(y)|\geq c>0,\, y\in \mathbb{R}\}$, the following estimate is available: $$\begin{aligned} \|f_{\neq}(t)\|_{L^2}\leq C \|f_{{\neq};0}\|_{L ^2}e^{-\delta\nu^{1/3}|\log\nu|^{-2}t},\quad \forall t\geq 0.\end{aligned}$$ Later on, D. Wei applied resolvent estimate techniques to improve their estimate to [\[ed_mon_sharp\]](#ed_mon_sharp){reference-type="eqref" reference="ed_mon_sharp"} [@Wei18]. When we consider non-constant smooth shear flows on the torus ${\mathbb T}^2$, an important geometric constraint has to be respected, namely, the shear profile $V$ must have critical points $\mathcal{C}:=\{y_\ast|\partial_y V(y_\ast)=0\}$. Nondegenerate shear flows are a family of shear flows such that the second derivative of the shear profile does not vanish at these critical points, i.e., $\min_{y_\ast\in \mathcal{C}}|\partial_{y}^2V(y_\ast)|\geq c>0$.
In the papers [@BCZ15; @Wei18], it is shown that if the underlying shear flows are *stationary* and non-degenerate, there exist constants $C\geq 1,\ \delta>0$ such that $$\begin{aligned} \label{ED_non_deg} \|f_{{\neq}}(t)\|_{L^2}\leq C\|f_{0;{\neq}}\|_{L^2}e^{-\delta\nu^{1/2}t},\quad \forall t\in[0,\infty).\end{aligned}$$ In the paper [@CotiZelatiDrivas19], it is shown that the enhanced dissipation estimates [\[ed_mon_sharp\]](#ed_mon_sharp){reference-type="eqref" reference="ed_mon_sharp"}, [\[ED_non_deg\]](#ED_non_deg){reference-type="eqref" reference="ED_non_deg"} are sharp for *stationary* shear flows. In the papers [@CKRZ08; @ElgindiCotiZelatiDelgadino18; @FengIyer19], the authors rigorously justify the relation between the enhanced dissipation effect and the mixing effect. In the paper [@AlbrittonBeekieNovack21], the authors apply the Hörmander hypoellipticity technique to derive the estimates [\[ed_mon_sharp\]](#ed_mon_sharp){reference-type="eqref" reference="ed_mon_sharp"}, [\[ED_non_deg\]](#ED_non_deg){reference-type="eqref" reference="ED_non_deg"} on various domains. For enhanced dissipation in other flow settings, we refer the interested reader to the papers [@He21; @FengFengIyerThiffeault20; @CotiZelatiDolce20] and the references therein. The enhanced dissipation effects have also found applications in many different areas, ranging from hydrodynamic stability to plasma physics; we refer to the following papers [@BMV14; @BGM15I; @BGM15II; @BGM15III; @ChenLiWeiZhang18; @BedrossianHe16; @BedrossianHe19; @GongHeKiselev21; @HeKiselev21; @BedrossianBlumenthalPunshonSmith192; @WeiZhang19; @LiZhao21; @CotiZelatiDietertGerardVaret22; @AlbrittonOhm22; @Bedrossian17; @He; @HuKiselevYao23; @HuKiselev23; @KiselevXu15; @IyerXuZlatos; @CotiZelatiDolceFengMazzucato; @FengShiWang22]. The enhanced dissipation can be rigorously justified for the periodic domains ${\mathbb T}^2,\ {\mathbb T}\times \mathbb{R}$.
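To illustrate the mechanism behind [\[ed_mon_sharp\]](#ed_mon_sharp){reference-type="eqref" reference="ed_mon_sharp"}, one can reproduce Kelvin's Couette-flow computation numerically on the Fourier side. For $V(t,y)=y$ and $\sigma=1$ on ${\mathbb T}\times\mathbb{R}$, solving along characteristics and applying the Plancherel theorem gives $\|f_k(t)\|_{L^2}^2=\int_{\mathbb{R}}|\widehat f_{0;k}(\eta)|^2\exp\{-2\nu(k^2t+\eta^2 t-\eta k t^2+k^2t^3/3)\}\,d\eta$. The Python sketch below (purely illustrative; the Gaussian profile and all parameter values are our choices) evaluates this integral and checks that the mode is essentially gone after a few multiples of $\nu^{-1/3}$, long before plain heat dissipation acts:

```python
import numpy as np

def mode_norm(t, nu, k=1.0, width=60.0, n=4001):
    """L^2_y norm of the k-th Fourier mode for the Couette flow V(y)=y,
    computed from the exact solution: along characteristics the damping
    exponent is 2*nu*(k^2 t + eta^2 t - eta k t^2 + k^2 t^3 / 3)."""
    eta = np.linspace(-8 * width, 8 * width, n)
    f0_sq = np.exp(-(eta / width) ** 2)     # |f_0 hat|^2: illustrative Gaussian
    damp = 2 * nu * (k**2 * t + eta**2 * t - eta * k * t**2 + k**2 * t**3 / 3)
    return np.sqrt(np.trapz(f0_sq * np.exp(-damp), eta))

nu = 1e-6
t = 5 * nu ** (-1 / 3)                      # a few enhanced-dissipation times
ratio = mode_norm(t, nu) / mode_norm(0.0, nu)
heat_only = np.exp(-nu * t)                 # decay from the sigma*k^2 term alone
print(ratio, heat_only)
```

With these values the printed `ratio` is many orders of magnitude smaller than `heat_only`, consistent with the $e^{-\delta\nu^{1/3}t}$ decay.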
Based on these observations, one might ask whether extending these results to infinitely long channels, e.g., $\mathbb{R}\times[-1,1]$ is possible. The question is highly nontrivial. As mentioned above, the enhanced dissipation phenomenon is closely related to the mixing effect associated with the fluid field, [@ElgindiCotiZelatiDelgadino18; @FengIyer19]. However, it is widely recognized that the mixing effect can be weak in infinite channels. As it turns out, in the infinite channel, another fluid transport effect - *Taylor dispersion* - plays an important role, see, e.g., [@Aris56; @Taylor53; @YoungJones91]. For a mathematically rigorous justification, we refer to the papers, [@CotiZelatiGallay21; @BedrossianCotiZelatiDolce22; @BeckChaudharyWayne20; @CotiZelatiDolceLo23]. It is also worth mentioning that the Taylor dispersion is also related to the *quenching phenomenon* of shear flows, see, e.g., [@CKR00; @KiselevZlatos; @HeTadmorZlatos; @Zlatos11; @Zlatos2010]. Most of the results we present thus far are centered around *stationary* flows. In this paper, we focus on *time-dependent* shear flows and hope to identify sufficient conditions that guarantee enhanced dissipation and Taylor dispersion. Before stating the main theorems, we provide some further definitions. After applying a Fourier transformation in the $x$-variable, we end up with the following $k$-by-$k$ equation $$\begin{aligned} \label{k_by_k_eq} \partial_t \widehat f_k(t,y)+V(t,y)ik \widehat f_k(t,y)=\nu\partial_{y}^2 \widehat f_k-\sigma\nu|k|^2 \widehat f_k(t,y),\quad \widehat f_k(t=0,y)= \widehat f_{0;k}(y). \end{aligned}$$ We will drop the $\widehat{(\cdot)}$ notation later for simplicity. The main statements of our theorems are as follows: **Theorem 1**. *Consider the solution to the equation [\[k_by_k\_eq\]](#k_by_k_eq){reference-type="eqref" reference="k_by_k_eq"} initiated from the initial data $f_0\in C_c^\infty({\mathbb T}\times\mathbb{R})$. 
Assume that on the time interval $[0,T]$, the $C_t C^2_{y}$ velocity profile $V(t,y)$ satisfies the following constraint $$\begin{aligned} \inf_{t\in[0,T],\ y\in\mathbb{R}}|\partial_y V(t,y)|\geq \mathfrak{c}>0,\quad \|V\|_{L_t^\infty([0,T]; W_y^{3,\infty})}<C.\label{V_y}\end{aligned}$$ Then there exists a threshold $\nu_0(V)$ such that for $\nu<\nu_0$, the following estimate holds $$\begin{aligned} \label{mon_ED_main} \|f_{k}(t)\|_{L^2}\leq e\| f_{0;k}\|_{L^2}\exp\left\{-\delta\nu^{1/3}{ |k|^{2/3} } t\right\},\quad \forall t\in [0,T].\end{aligned}$$ Here $\delta>0$ is a constant depending only on the parameter $\mathfrak{c}$ and $\|V\|_{L_t^\infty C_y^3}$; see [\[delta_mono\]](#delta_mono){reference-type="eqref" reference="delta_mono"}.* The next theorem is stated as follows. **Theorem 2**. *Consider the solution to the equation [\[PS\]](#PS){reference-type="eqref" reference="PS"} initiated from the smooth initial data $f_0\in C^\infty({\mathbb T}^2)$. Assume that the shear flow $V(t,y)\in C^2_{t,y}$ satisfies the following structure assumptions on the time interval $[0,T]$:* *a) Phase assumption: There exists a nondegenerate reference shear $U\in C_t^1C_y^2$ such that the time-dependent flow $V(t,y)$ and the reference flow $U(t,y)$ share all their nondegenerate critical points $\{y_i(t)\}_{i=1}^N$, where $N$ is a fixed finite number. Moreover, $$\begin{aligned} \partial_y V(t,y)\partial_yU(t,y)\geq &0,\ \ \quad\forall y\in {\mathbb T},\ \forall t\in[0,T], \\ \|\partial_{ty}U\|_{L_t^\infty([0,T];L^\infty_y)}\leq \nu^{3/4},\quad\|V&\|_{L_t^\infty([0,T]; W_y^{2,\infty})}+\|U\|_{L_t^\infty([0,T];W^{2,\infty})}<C.
\label{asmpt0}\end{aligned}$$* *b) Shape assumption: there exist $N$ pairwise disjoint open neighborhoods $\{B_{r}(y_i(t))\}_{i=1}^N$ with fixed radius $0<r=\mathcal{O}(1)$, and two constants $\mathfrak{C}_0,\ \mathfrak{C}_1> 1$ such that the following estimates hold for $Z(t,y)\in \{V(t,y), U(t,y)\}$, $$\begin{aligned} \label{asmp_1} \mathfrak{C}_0^{-1}(y-y_i(t))^2\leq |\partial_yZ|^2\leq& \mathfrak{C}_0(y-y_i(t))^2, \quad \mathfrak{C}_0>0, \quad \forall y\in B_{r}(y_i(t));\\ 0<\mathfrak{C}_1^{-1}\leq |\partial_y Z|\leq& \mathfrak{C}_1 ,\quad \forall y\notin \cup_{i=1}^N B_{r}(y_i(t)),\label{asmpt2} \end{aligned}$$ Then there exists a threshold $\nu_0(U,V)$ such that if $\nu\leq \nu_0$, the following estimate holds$$\begin{aligned} \|f_{k}(t)\|_{L^2}\leq e\|f_{k}(0)\|_{L^2}\exp\left\{-\delta\nu^{1/2}{ |k|^{1/2} } t\right\},\quad \forall t\in [0, T],\label{ndeg_ED_main}\end{aligned}$$ with $\delta$ depending on the functions $U,V$. In particular, it depends only on the parameters specified in the conditions above.* ![Relation between $U,\, V$. The reference $U$ slowly varies, whereas the actual shear $V$ can change fast. However, the two shears share the same critical points. ](Relation_UV.pdf){#Figure:1} **Theorem 3**. *Consider the solution to the equation [\[PS\]](#PS){reference-type="eqref" reference="PS"} initiated from the smooth initial data $f_0\in C^\infty({\mathbb T}^2)$. Assume that the shear flow $V(t,y)\in C^3_{t,y}$ satisfies the following structure assumptions on the time interval $[0,T]$:* *a) Phase assumption: There exists a nondegenerate reference profile $U(\cdot,\cdot)\in C_{t,y}^3(\mathbb{R}_+\times{\mathbb T})$ such that the time-dependent flow $V(t,y)$ and the reference flow $U(t,y)$ share all their nondegenerate critical points $\{y_i(t)\}_{i=1}^N, \, N<\infty$. 
Moreover, $$\begin{aligned} \partial_y V(t,y)\partial_y U(t,y)\geq &0,\ \ \|\partial_{yt}U\|_{L_t^\infty([0,T];L^\infty_y)}\leq \nu^{3/4},\quad \|V\|_{L_t^\infty W_y^{3,\infty}}+\|U\|_{L_t^\infty W^{3,\infty}_y}<C,\quad\forall y\in {\mathbb T},\ \forall t\in[0,T]. \label{asmpt0}\end{aligned}$$* *b) Shape assumption: there exist $N$ pairwise disjoint open neighborhoods $\{B_{r_i}(y_i(t))\}_{i=1}^N,\, r_i=\mathcal{O}(1)$ and two constants $\mathfrak{C}_0,\ \mathfrak{C}_1> 1$ such that the following estimates hold for $Z(t,y)\in \{V(t,y), U(t,y))\}$ $$\begin{aligned} \label{asmp_1} \mathfrak{C}_0^{-1}(y-y_i(t))^2\leq& |\partial_yZ|^2\leq \mathfrak{C}_0(y-y_i(t))^2, \quad \mathfrak{C}_0>0, \quad \forall y\in B_{r_i}(y_i(t));\\ 0<\mathfrak{C}_1^{-1}\leq |\partial_y Z|\leq& \mathfrak{C}_1 ,\quad \forall y\notin \cup_{i=1}^N B_{r_i}(y_i(t)),\label{asmpt2} \end{aligned}$$ Then there exists a threshold $\nu_0(U,V)$ such that if $\nu\leq \nu_0$, the following estimate holds$$\begin{aligned} \|f_{k}(t)\|_{L^2}\leq C\|f_{0;k}\|_{L^2}\exp\left\{-\delta\nu^{1/2}{ |k|^{1/2} } t\right\},\quad \forall t\in [0, T],\label{ndeg_ED_main}\end{aligned}$$ with $C_{ED}$ and $\delta$ depending on the functions $U,V$. In particular, they depend only on the parameters specified in the conditions above.* **Remark 1**. *We remark that if we consider the solution $V(t,y)=e^{-\nu t}\sin(y)$ to the heat equation $\partial_t V=\nu \partial_{yy}V$ on the torus, the structure conditions are satisfied for time $t\in[0,\mathcal{O}(\nu^{-1+})]$.* **Remark 2**. *In our analysis of the time-dependent shear flows, the dynamics of the critical points are crucial. The main theorem encodes the dynamics of the critical points in the reference shear $U$. The relation between $U, \ V$ is highlighted in Figure [1](#Figure:1){reference-type="ref" reference="Figure:1"}. The condition $\|\partial_{ty}U\|_\infty\leq \nu^{3/4}$ enforces that the critical points of the target shear $V$ cannot move too fast. 
If this condition is violated, the fluid can trigger mixing and unmixing effects within a short time. Hence, it is not clear whether the enhanced dissipation phenomenon persists.* Finally, we present the following theorem of the Taylor dispersion in an infinite long channel. **Theorem 4**. *Consider the equation [\[PS\]](#PS){reference-type="eqref" reference="PS"} in an infinite long channel $\mathbb{R}\times [-1,1]$ subject to Dirichlet boundary condition $f(t,\pm 1)=0$. The initial data $f_0\in H^1_0(\mathbb{R}\times [-1,1])$ is consistent with the boundary condition. Assume that on the time interval $[0,T]$, the shear flow $V\in C_{t}^1C_y^2$ has finitely many critical points $\{y_i(t)\}_{i=1}^{N(t)}$ at every time instance $t\in[0,T]$. Furthermore, there exist four parameters $\,m_0\in \mathbb{N},\, r_0=\mathcal{O}(1), \, \mathfrak{C}_2,\,\mathfrak{C}_3\geq 1$ such that the following non-degeneracy condition holds around each critical point $y_i(t)$: $$\begin{aligned} \label{Taylorcnstr} \|\partial_{ty}V\|_\infty\leq \mathfrak{C}_2\nu,\quad |y-y_i|^{m_0}\leq \mathfrak{C}_3|\partial_y V(t,y)|,\quad \forall |y-y_i|\leq r_0,\quad i\in\{1,2,\cdots, N(t)\} .\end{aligned}$$ Then for $0<|k|\leq \nu$, the following estimate holds $$\begin{aligned} \|f_k(t)\|_{L^2}\leq e\|f_k(0)\|_{L^2}\exp\left\{-\delta \frac{|k|^2}{\nu} t\right\},\quad \forall t\in[0,T].\end{aligned}$$ Here, the constant $\delta\in(0,1)$ only depends on the aforementioned properties of the shear flow $V$.* **Remark 3**. *We remark that the $f_0\in H_0^1$ can be relaxed to $f_0\in L^2.$ This is because the passive scalar equation is smoothing, and the solution will automatically lie in $H_0^1$ for $t>0$.* **Remark 4**. *In the paper [@CotiZelatiGallay21], the authors show that one can use the resolvent estimate to derive the enhanced dissipation and Taylor dispersion simultaneously. Here, we observe that one can achieve the same goal utilizing the hypocoercivity techniques. 
This method is potentially more flexible because it can treat certain time-dependent cases.* The hypocoercivity energy functional introduced in [@BCZ15] is our main tool to prove the main theorems. However, we choose to incorporate time-weights introduced in the papers [@WeiZhang19] into our setting. Let us define a parameter and two time weights $$\begin{aligned} \epsilon:=& \nu|k|^{-1},\quad \psi= \min\{\nu^{1/3}{ |k|^{2/3} }t,1\},\quad \phi=\min\{\nu^{1/2}{ |k|^{1/2} }t,1\},\quad \zeta=\min\{\nu^{-1}|k|^{2}t, 1\}.\label{ep&t_weights} \end{aligned}$$ We observe that the derivatives of the time weights are compactly supported: $$\begin{aligned} \psi'(t)=\nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t),\quad \phi'(t)=\nu^{1/2}{ |k|^{1/2} }\mathbbm{1}_{[0,\nu^{-1/2}{ |k|^{-1/2} }]}(t),\quad \zeta'(t)=\nu^{-1}{ |k|^{2} }\mathbbm{1}_{[0,\nu{ |k|^{-2} }]}(t).\\ \label{dt_weights}\end{aligned}$$ To prove Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"}, Theorem [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"} and Theorem [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}, we invoke the following hypocoercivity functionals $$\begin{aligned} \text{Theorem \ref{thm_mon}:\, }&\mathcal{F}[f_k]:= \|f_k\|_{2}^2+\alpha\psi\epsilon^{2/3}\ \|\partial_y f_k\|_{2}^2+\beta \psi^2\epsilon^{1/3} \ \Re\langle i\text{sign}(k) f_k,\partial_y f_k \rangle;\label{hypo_mono}\\ \text{Theorem \ref{ndeg_main_thm}:\, }&\mathcal{G}[f_k]:= \|f_k\|_{2}^2+\alpha\phi\epsilon^{1/2}\ \|\partial_y f_k\|_{2}^2+\beta \phi^2 \ \Re\langle i\text{sign}(k) \partial_y U f_k,\partial_y f_k \rangle+\gamma\phi^3\epsilon^{-1/2} \|\partial_y U f_k\|_2^2;\label{hypo_ndeg}\\ \text{Theorem \ref{Taylor_thm}:\, }&\mathcal{T}[f_k]:= \|f_k\|_{2}^2+\alpha\zeta\ \|\partial_y f_k\|_{2}^2+\beta \zeta \ \Re\langle i\text{sign}(k) \partial_y V f_k,\partial_y f_k \rangle.&&&&\label{hypo_Tay}\end{aligned}$$ Here, the inner product 
$\langle\cdot ,\cdot\rangle$ is defined in [\[inner_prod\]](#inner_prod){reference-type="eqref" reference="inner_prod"}. Through detailed analysis, one can derive the following statements. a\) Assume all conditions in Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"}. There exist parameters $\alpha=\mathcal{O}(1),\, \beta=\mathcal{O}(1)$ such that the following estimate holds on the time interval $[0,T]$: $$\begin{aligned} \mathcal{F}[f_k](t)\leq C\mathcal{F}[f_{0;k}]\exp\left\{-\delta \nu^{1/3}{ |k|^{2/3} }t\right\}=C\|f_{0;k}\|_2 ^2\exp\left\{-\delta \nu^{1/3}{ |k|^{2/3} }t\right\},\quad \forall t\in[0,T].\label{Hypo_est_mon}\end{aligned}$$ b\) Assume all conditions in Theorem [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"}. Then there exist parameters $\alpha=\mathcal{O}(1),\, \beta=\mathcal{O}(1),\, \gamma=\mathcal{O}(1)$ such that the following estimate holds for $t\in[0,T]$, $$\begin{aligned} \mathcal{G}[f_k](t)\leq C\mathcal{G}[f_{0;k}]\exp\{-\delta \nu^{1/2}{ |k|^{1/2} }t\}=C\|f_{0;k}\|_2 ^2\exp\left\{-\delta \nu^{1/2}{ |k|^{1/2} }t\right\},\quad \forall t\in[0,T].\label{Hypo_est_ndeg}\end{aligned}$$ c\) Assume all conditions in Theorem [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}. Then there exist parameters $\alpha=\mathcal{O}(1),\, \beta=\mathcal{O}(1)$ such that the following estimate holds for $t\in[0,T]$, $$\begin{aligned} \mathcal{T}[f_k](t)\leq C\mathcal{T}[f_{0;k}]\exp\{-\delta \nu^{-1}{ |k|^{2} }t\}=C\|f_{0;k}\|_2^2\exp\left\{-\delta \nu^{-1}{ |k|^{2} }t\right\},\quad \forall t\in[0,T].\label{Hypo_est_Taylor}\end{aligned}$$ Hence, we can apply the observation that $\|f_k\|_2^2\leq \mathcal{F}[f_k],\, \mathcal{G}[f_k],\,\mathcal{T}[f_k]$ to derive Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"}, [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"}, [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}. 
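The time weights $\psi$, $\phi$, $\zeta$ introduced above are simple saturating functions, and their piecewise structure is easy to check numerically. The following short Python sketch (with arbitrary illustrative values of $\nu$ and $k$, not tied to any estimate in the paper) confirms that each weight ramps up linearly, reaches $1$ exactly at its characteristic time, and stays capped afterwards, so that its derivative is supported on the initial time layer:

```python
nu, k = 1e-4, 2.0          # illustrative values only

# Time weights from the text; each saturates at 1 on its own time scale.
psi  = lambda t: min(nu ** (1 / 3) * abs(k) ** (2 / 3) * t, 1.0)
phi  = lambda t: min(nu ** (1 / 2) * abs(k) ** (1 / 2) * t, 1.0)
zeta = lambda t: min(abs(k) ** 2 * t / nu, 1.0)

# Characteristic (saturation) times of the three weights.
T_psi  = nu ** (-1 / 3) * abs(k) ** (-2 / 3)
T_phi  = nu ** (-1 / 2) * abs(k) ** (-1 / 2)
T_zeta = nu * abs(k) ** (-2)

for w, T in ((psi, T_psi), (phi, T_phi), (zeta, T_zeta)):
    print(w(0.0), w(T / 2), w(2 * T))   # 0.0, ~0.5, 1.0: ramp, then saturation
```

In particular, beyond the characteristic time each weight is constant, which is why the derivative terms generated by $\psi'$, $\phi'$, $\zeta'$ only need to be absorbed on a short initial interval.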
We organize the remaining sections as follows: in section [2](#monotone){reference-type="ref" reference="monotone"}, we prove Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"}; in section [3](#nondegenerate){reference-type="ref" reference="nondegenerate"}, we prove Theorem [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"}; in section [4](#Taylor){reference-type="ref" reference="Taylor"}, we prove Theorem [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}.

**Notations:** For two complex-valued functions $f,g$, we define the inner product $$\begin{aligned} \langle f,g\rangle=\int_{D} f\overline{g}dy.\label{inner_prod} \end{aligned}$$ Here $D$ is the domain of interest. Furthermore, we introduce the $L^p$-norms $$\begin{aligned} \|f\|_{p}=\|f\|_{L^p}=\left(\int |f|^p dy\right)^{1/p},\quad p\in[1,\infty), \end{aligned}$$ together with the standard extension of this definition to the case $p=\infty$. We further recall the standard definition of Sobolev norms for functions $f(y),\, g(t,y)$: $$\begin{aligned} \|f\|_{W^{m,p}_y}=\left(\sum_{j=0}^m\|\partial_y^j f\|_{L^p}^p\right)^{1/p}, \quad p\in [1,\infty];\quad \ \|g\|_{L_t^q W^{m,p}_y}=\|\|g\|_{W^{m,p}_y}\|_{L_t^q}, \quad p,q\in [1,\infty]. \end{aligned}$$ We will also use the classical notations $H^1=W^{1,2}$ and $H_0^1$ (the space of $H^1$ functions with zero trace on the boundary). We use the notation $A\approx B$ ($A,B>0$) if there exists a constant $C>0$ such that $\frac{1}{C}B\leq A\leq CB$. Similarly, we use the notation $A\lesssim B$ ($A\gtrsim B$) if there exists a constant $C>0$ such that $A\leq CB$ ($A\geq B/C$). Throughout the paper, the constant $C$ can depend on the norms $\|V\|_{L_t^\infty W^{3,\infty}_{y}},\, \|U\|_{L_t^\infty W_y^{3,\infty}}$, but it will never depend on $\nu,\, |k|$. The meaning of the notation $C$ can change from line to line.
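Before turning to the proofs, the enhanced-dissipation mechanism behind the estimate [\[Hypo_est_mon\]](#Hypo_est_mon){reference-type="eqref" reference="Hypo_est_mon"} can be illustrated numerically on the model strictly monotone flow $V(y)=y$ with $k=1$. The sketch below — with an ad hoc truncation to $[-8,8]$, homogeneous Dirichlet conditions, and a Crank–Nicolson scheme, none of which come from the analysis above — integrates the single-mode equation $\partial_t f=\nu\partial_{yy}f-ikVf$ and checks that by a time of order $\nu^{-1/3}$ most of the $L^2$ mass is gone, while the bare heat-semigroup factor $e^{-\nu t}$ is still close to $1$:

```python
import numpy as np

# Single-mode passive scalar f_t = nu * f_yy - i * k * V(y) * f for the model
# strictly monotone flow V(y) = y, truncated to [-8, 8] with homogeneous
# Dirichlet conditions (both truncation and scheme are ad hoc choices).
nu, k = 1e-3, 1.0
N, L = 400, 8.0
y = np.linspace(-L, L, N)
dy = y[1] - y[0]

# Dirichlet second-difference matrix and the generator A = nu*D2 - i*k*diag(y)
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), 1)) / dy**2
A = nu * D2 - 1j * k * np.diag(y)

# Crank-Nicolson propagator over one time step
dt, T = 0.01, 20.0                      # T is about 2 * nu**(-1/3)
P = np.linalg.solve(np.eye(N) - 0.5 * dt * A, np.eye(N) + 0.5 * dt * A)

f = np.exp(-y**2).astype(complex)       # initial datum
n0 = np.sqrt(dy) * np.linalg.norm(f)    # discrete L^2 norm at t = 0
for _ in range(int(T / dt)):
    f = P @ f

# enhanced dissipation: most of the L^2 mass is gone at t ~ nu**(-1/3),
# although the bare heat-semigroup factor exp(-nu*t) is still close to 1
assert np.sqrt(dy) * np.linalg.norm(f) < 0.5 * n0
assert np.exp(-nu * T) > 0.98
```

The shearing produces oscillations $e^{-ikyt}$ in $y$, so the diffusion acts on wavenumbers of size $t$ and the norm decays roughly like $e^{-\nu k^2t^3/3}$, which is the heuristic behind the $\nu^{1/3}|k|^{2/3}$ rate.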
# Enhanced Dissipation: Strictly Monotone Shear Flows {#monotone}

In this section, we prove the estimate [\[mon_ED_main\]](#mon_ED_main){reference-type="eqref" reference="mon_ED_main"} for the hypoelliptic passive scalar equation $\eqref{k_by_k_eq}_{\sigma=0}$. The proof of the $\sigma=1$ case is similar and simpler. Throughout the remaining part of the paper, we adopt the following notation $$\begin{aligned} \label{notation_simp} f(t,y):= \widehat f_k(t,y).\end{aligned}$$ Without loss of generality, we assume that $$\begin{aligned} \label{WLOG} \partial_y V>0, \quad k\geq 1.\end{aligned}$$ Let us start with a simple observation.

**Lemma 1**. *Assume the relation $$\begin{aligned} \label{monotone bound req} {\alpha} > \beta^2. \end{aligned}$$ Then, the following relations hold $$\begin{aligned} \frac{1}{2}(\|f\|_2^2 +\alpha\epsilon^{2/3}\psi\|\partial_yf\|_2^2) \leq \mathcal{F}[f] \leq \frac{3}{2}(\|f\|_2^2 +\alpha\epsilon^{2/3}\psi\|\partial_yf\|_2^2),\quad \forall t \in[0,T].\label{equiv_mono} \end{aligned}$$*

*Proof.* To prove the estimate, we recall the definition [\[hypo_mono\]](#hypo_mono){reference-type="eqref" reference="hypo_mono"} of $\mathcal{F}$ and estimate it using the Hölder and Young inequalities: $$\begin{aligned} \mathcal{F}[f] \leq& \|f\|_2^2 + \alpha\epsilon^{2/3}\psi\|\partial_yf\|_2^2 + \beta\psi^2\epsilon^{1/3}\|f\|_2\|\partial_yf\|_2 \leq \left(1 + \frac{\beta^2}{2\alpha}\psi^3\right)\|f\|_2^2 + \frac{3\alpha}{2} \epsilon^{2/3}\psi\|\partial_yf\|_2^2. \end{aligned}$$ Similarly, we have the following lower bound: $$\begin{aligned} \mathcal{F}[f] \geq& \|f\|_2^2 + \alpha\epsilon^{2/3}\psi\|\partial_yf\|_2^2 - \beta\epsilon^{1/3}\psi^2\|f\|_2\|\partial_yf\|_2\geq \left(1- \frac{\beta^2}{2\alpha}\psi^3\right)\|f\|_2^2 + \frac{\alpha}{2}\epsilon^{2/3}\psi\|\partial_yf\|_2^2. \end{aligned}$$ Since ${\alpha} > {\beta^2}$ and $\psi\leq 1$, we obtain that $$\begin{aligned} \frac{1}{2}\|f\|_2^2 +\frac{1}{2}\alpha\epsilon^{2/3}\psi\|\partial_yf\|_2^2\leq \mathcal{F}[f]
\leq \frac{3}{2}\|f\|_2^2 +\frac{3}{2}\alpha\epsilon^{2/3}\psi\|\partial_yf\|_2^2,\quad \forall t\in [0,T]. \end{aligned}$$ This concludes the proof of the lemma. ◻

By taking the time derivative of the hypocoercivity functional [\[hypo_mono\]](#hypo_mono){reference-type="eqref" reference="hypo_mono"}, we end up with the following decomposition: $$\begin{aligned} \label{T_albe_term} \frac{d}{dt}\mathcal{F}[f]=&\frac{d}{dt} \|f\|_2^2+\alpha\epsilon^{2/3}\frac{d}{dt}\left(\psi\|\partial_{y}f\|_2^2\right)+\beta\epsilon^{1/3}\frac{d}{dt}\left(\psi^2\Re\langle i f,\partial_y f\rangle \right) =: T_{L^2} + T_\alpha+ T_\beta.\end{aligned}$$ Through standard energy estimates, we observe that $$\begin{aligned} T_{L^2}=-2\nu\int |\partial_y f|^2 dy- 2\Re\left( ik\int V |f|^2 dy\right) = - 2\nu\|\partial_yf\|_2^2,\label{T_L2}\end{aligned}$$ since the second integral is purely imaginary. The estimates for the $T_\alpha,\ T_\beta$ terms are trickier, and we collect them in the following technical lemmas whose proofs will be postponed to the end of this section.

**Lemma 2** ($\alpha$-estimate). *For any constant $B>0$, the following estimate holds on the interval $[0,T]$: $$\begin{aligned} T_\alpha \leq&\alpha\epsilon^{2/3}\psi'\|\partial_y f\|_2^2-2\alpha\psi\epsilon^{2/3}\nu\|\partial_y^2f\|_2^2+\frac{\beta}{B} \psi^2\epsilon^{1/3}|k|\left\|\sqrt{|\partial_yV|}f\right\|_2^2+ \frac{B\alpha^2}{\beta}\|\partial_y V\|_\infty\nu \|\partial_yf\|_2^2.\label{monotone al estimate} \end{aligned}$$*

**Lemma 3** ($\beta$-estimate).
*The following estimate holds $$\begin{aligned} \ensuremath{\nonumber} T_\beta \leq&\frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right)\\ &+\frac{ \beta^2}{\alpha}\psi^3\nu\|\partial_y f\|_2^2 +{\alpha}\psi\epsilon^{2/3}\nu\| \partial_{yy}f\|_2^2- {\beta}\psi^2\epsilon^{1/3}|k|\left\| \sqrt{|\partial_yV|}|f|\right\|_{2}^2.\label{monotone beta estimate} \end{aligned}$$* We are ready to prove Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"} with these estimates. *Proof of Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"}.* If $T\leq 2\nu^{-1/3}{ |k|^{-2/3} }$, then standard $L^2$-energy estimate yields [\[mon_ED_main\]](#mon_ED_main){reference-type="eqref" reference="mon_ED_main"}. Hence, we assume $T>2\nu^{-1/3}{ |k|^{-2/3} }$ without loss of generality. We distinguish between two time intervals, i.e., $$\begin{aligned} \label{I1_I2} \mathcal{I}_1=[0, \nu^{-1/3}{ |k|^{-2/3} }], \quad \mathcal{I}_2=[\nu^{-1/3}{ |k|^{-2/3} },T].\end{aligned}$$ We organize the proof in three steps. 
**Step \# 1: Energy bounds.** Combining the estimates [\[T_L2\]](#T_L2){reference-type="eqref" reference="T_L2"}, [\[monotone al estimate\]](#monotone al estimate){reference-type="eqref" reference="monotone al estimate"}, [\[monotone beta estimate\]](#monotone beta estimate){reference-type="eqref" reference="monotone beta estimate"}, we obtain that $$\begin{aligned} \frac{d}{dt}\mathcal{F}[f]\leq&\alpha\epsilon^{2/3}\nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t)\|\partial_y f\|_2^2+\frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right)\\ & - 2\nu\|\partial_yf\|_2^2-2\alpha\psi\epsilon^{2/3}\nu\|\partial_{yy}f\|_2^2+\frac{\beta}{2} \psi^2\epsilon^{1/3}|k|\left\|\sqrt{|\partial_yV|}f\right\|_2^2+ \frac{2\alpha^2}{\beta} {\nu} \|\partial_yV\|_\infty \|\partial_yf\|_2^2\\ &+{\alpha}\psi\epsilon^{2/3}\nu\| \partial_{yy}f\|_2^2+\frac{ \beta^2}{\alpha}\psi^3\nu\|\partial_y f\|_2^2 - {\beta}\psi^2\epsilon^{1/3}|k|\left\| \sqrt{|\partial_yV|}|f|\right\|_{2}^2\\ \leq&\frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right) \\ &- \nu\left(2-\alpha\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) - \frac{2\alpha^2}{\beta }\|\partial_yV\|_\infty - \frac{\beta^2}{\alpha}\psi^3\right)\|\partial_yf\|_2^2-\frac{ 1}{2}\beta\epsilon^{1/3}\psi^2|k| \left\| \sqrt{|\partial_yV|}f\right\|_2^2 . 
\end{aligned}$$ Now we choose $\alpha,\beta$ as follows: $$\begin{aligned} \label{chc_albe_mon} \alpha=\beta=\frac{1}{2(1+\|\partial_y V\|_\infty)}.\end{aligned}$$ Then we check that the condition [\[monotone bound req\]](#monotone bound req){reference-type="eqref" reference="monotone bound req"} and the following hold for all $t\in[0,T]$, $$\begin{aligned} 2 -\alpha\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) - \frac{2\alpha^2}{\beta }\|\partial_yV\|_\infty - \frac{\beta^2}{\alpha}\psi^3\geq 2 -\frac{1}{2(1+\|\partial_yV\|_\infty)}-\frac{\|\partial_yV\|_\infty}{1+\|\partial_yV\|_\infty}-\frac{1}{2(1+\|\partial_yV\|_\infty)}\geq {1}. \end{aligned}$$ As a result, we have [\[equiv_mono\]](#equiv_mono){reference-type="eqref" reference="equiv_mono"} and the following, $$\begin{aligned} \frac{d}{dt}\mathcal{F}[f]\leq&\frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right) -{\nu}\|\partial_y f\|_2^2 - \frac{\beta\epsilon^{1/3}|k|}{2}\psi^2\left\|\sqrt{|\partial_yV|}f\right\|_2^2. \\ \label{energy} \end{aligned}$$ **Step \# 2: Initial time layer estimate.** Thanks to the estimate [\[energy\]](#energy){reference-type="eqref" reference="energy"} and the equivalence [\[equiv_mono\]](#equiv_mono){reference-type="eqref" reference="equiv_mono"}, we have that $$\begin{aligned} \frac{d}{dt}\mathcal{F}[f](t)\leq 2\frac{\beta}{\sqrt{\alpha}}\nu^{1/3}{ |k|^{2/3} }\mathcal{F}[f](t)=\frac{\sqrt{2}}{(1+\|\partial_y V\|_\infty)^{1/2}}\nu^{1/3}{ |k|^{2/3} }\mathcal{F}[f](t),\quad \mathcal{F}[f](t=0)=\|f_{0;k}\|_2^2.\end{aligned}$$ By solving this differential inequality, we have that $$\begin{aligned} \label{Step_2_mono} \mathcal{F}[f](t)\leq \exp\left\{\frac{\sqrt{2}}{(1+\|\partial_y V\|_\infty)^{1/2}}\right\}\|f_{0;k}\|_2^2,\quad \forall t\in[0, \nu^{-1/3}{ |k|^{-2/3} }].\end{aligned}$$ **Step \# 3: Long time estimate.** Now, we focus on the long time interval $\mathcal{I}_2$.
On this interval, we have that $\psi\equiv 1$. The estimate [\[energy\]](#energy){reference-type="eqref" reference="energy"}, together with the lower bound [\[V_y\]](#V_y){reference-type="eqref" reference="V_y"} on $|\partial_y V|$ and the choice [\[chc_albe_mon\]](#chc_albe_mon){reference-type="eqref" reference="chc_albe_mon"} of $\beta$, yields that $$\begin{aligned} \frac{d}{dt}\mathcal{F}[f](t)\leq &-\nu\|\partial_y f\|_2^2-\frac{\nu^{1/3}|k|^{2/3}}{4(1+\|\partial_y V\|_\infty)}\left\|\sqrt{|\partial_y V|}f\right\|_2^2\leq -\frac{ \nu^{1/3}|k|^{2/3}}{4(1+\mathfrak c)(1+\|\partial_y V\|_\infty)}(\|f\|_{2}^2+\alpha\epsilon^{2/3}\|\partial_y f\|_2^2)\\ \leq & -\frac{ \nu^{1/3}|k|^{2/3}}{6(1+\mathfrak c)(1+\|\partial_y V\|_\infty)}\mathcal{F}[f](t).\end{aligned}$$ In the last line, we invoked the equivalence [\[equiv_mono\]](#equiv_mono){reference-type="eqref" reference="equiv_mono"}. Hence, for all $t\in[\nu^{-1/3}{ |k|^{-2/3} }, T]$, $$\begin{aligned} \mathcal{F}[f](t) \leq \mathcal{F}[f](t=\nu^{-1/3}{ |k|^{-2/3} })\exp\left\{-\delta\nu^{1/3}{ |k|^{2/3} }(t-\nu^{-1/3}{ |k|^{-2/3} })\right\},\quad \delta:=\frac{1}{6(1+\mathfrak c)(1+\|\partial_y V\|_\infty)}.\\ \label{delta_mono}\end{aligned}$$ Thanks to the relation [\[Step_2\_mono\]](#Step_2_mono){reference-type="eqref" reference="Step_2_mono"}, we have that $$\begin{aligned} \mathcal{F}[f_k](t) \leq e \|f_{0;k}\|_2^2\exp\left\{-\delta\nu^{1/3}|k|^{2/3} t \right\},\quad \forall t\in[\nu^{-1/3}{ |k|^{-2/3} }, T].\end{aligned}$$ This concludes the proof of [\[Hypo_est_mon\]](#Hypo_est_mon){reference-type="eqref" reference="Hypo_est_mon"} and Theorem [Theorem 1](#thm_mon){reference-type="ref" reference="thm_mon"}. ◻
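The bookkeeping behind the choice [\[chc_albe_mon\]](#chc_albe_mon){reference-type="eqref" reference="chc_albe_mon"} can be verified mechanically — a small numerical sketch, not a proof, with $c$ standing for $\|\partial_y V\|_\infty$ sampled over a grid: with $\alpha=\beta=\frac{1}{2(1+c)}$ and $\psi\leq 1$, the viscous bracket from Step \# 1 is at least $1$, and the condition [\[monotone bound req\]](#monotone bound req){reference-type="eqref" reference="monotone bound req"} holds:

```python
import numpy as np

# With alpha = beta = 1/(2*(1+c)), c = ||d_y V||_inf, and psi <= 1, the bracket
# 2 - alpha*1_{[0,...]} - (2*alpha**2/beta)*c - (beta**2/alpha)*psi**3 from
# Step 1 is bounded below by 1, and (monotone bound req) alpha > beta^2 holds.
for c in np.linspace(0.0, 100.0, 1001):
    ab = 1.0 / (2.0 * (1.0 + c))             # common value of alpha and beta
    bracket = 2.0 - ab - 2.0 * ab * c - ab   # worst case: indicator = psi = 1
    assert bracket >= 1.0 - 1e-12
    assert ab > ab**2                        # alpha > beta^2
```

In fact the bracket equals $1$ identically for this choice, which is why the constant on the right-hand side of the energy estimate is exactly $\nu$.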
Finally, we collect the proofs of the technical lemmas.

*Proof of Lemma [Lemma 2](#lem:monotone al){reference-type="ref" reference="lem:monotone al"}.* We recall the definition of $T_\alpha$ in [\[T_albe_term\]](#T_albe_term){reference-type="eqref" reference="T_albe_term"}. Invoking the equation [\[k_by_k\_eq\]](#k_by_k_eq){reference-type="eqref" reference="k_by_k_eq"} and integrating by parts yields that $$\begin{aligned} T_\alpha=& \alpha\psi' \epsilon^{2/3}\|\partial_yf\|_2^2 + \alpha\psi\epsilon^{2/3}\frac{d}{dt}\|\partial_y f\|_2^2 = \alpha\psi' \epsilon^{2/3}\|\partial_yf\|_2^2 + 2\alpha\psi\epsilon^{2/3}\Re\int \partial_y(\nu\partial_{y}^2f - ikVf) \overline{\partial_y f}dy\\ =& \alpha\psi' \epsilon^{2/3}\|\partial_yf\|_2^2 + 2\alpha\psi\epsilon^{2/3}\Re\int\left(\nu \partial_{y}^3f- ik\partial_yVf - ikV\partial_yf\right)\overline{\partial_y f}dy\\ =&\alpha\psi' \epsilon^{2/3}\|\partial_yf\|_2^2 + 2\alpha\psi\epsilon^{2/3}\left(\nu\Re\int\partial_{y}^3f\overline{\partial_yf}dy - \Re\int ik\partial_yVf\overline{\partial_yf}dy - \Re\int ikV\partial_yf\overline{\partial_yf}dy\right)\\ =& \alpha\psi' \epsilon^{2/3}\|\partial_yf\|_2^2 - 2\alpha\psi\epsilon^{2/3}\left( \nu\|\partial_{y}^2f\|_2^2 + \Re\int ik\partial_yVf\overline{\partial_yf}dy\right)\\ \leq& \alpha\psi' \epsilon^{2/3}\|\partial_yf\|_2^2 - 2\alpha\psi\epsilon^{2/3}\nu\|\partial_{y}^2f\|_2^2 + 2\alpha\psi\epsilon^{2/3}|k|\|\partial_yV\|_{\infty}^{1/2}\left\|\sqrt{|\partial_yV|} f\right\|_2\|\partial_yf\|_2. \end{aligned}$$ An application of Young's inequality yields [\[monotone al estimate\]](#monotone al estimate){reference-type="eqref" reference="monotone al estimate"}. ◻

*Proof of Lemma [Lemma 3](#lem:monotone beta){reference-type="ref" reference="lem:monotone beta"}.* The estimate of the $T_\beta$ term in [\[T_albe_term\]](#T_albe_term){reference-type="eqref" reference="T_albe_term"} is technical. Hence, we further decompose it into three terms: $$\begin{aligned} T_\beta =& 2\beta \psi \psi'\epsilon^{1/3}\Re\langle i f,\partial_y f\rangle + \beta\psi^2\epsilon^{1/3} \Re\int i\partial_tf\overline{\partial_yf}dy + \beta\psi^2\epsilon^{1/3}\Re\int if\overline{\partial_{yt}f}dy=:T_{\beta;1}+T_{\beta;2}+T_{\beta;3}. \label{T_beta123} \end{aligned}$$ We estimate these terms one by one. To begin with, we have the following bound for $T_{\beta;1}$: $$\begin{aligned} |T_{\beta;1}|\leq& 2\frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t)\sqrt{\psi} \|f\|_2 (\sqrt{\alpha}\epsilon^{1/3} \sqrt{\psi}\|\partial_yf\|_2) \\ \leq& \frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right).
\label{T_beta1} \end{aligned}$$ Next we compute the term $T_{\beta;2}$ using the equation [\[k_by_k\_eq\]](#k_by_k_eq){reference-type="eqref" reference="k_by_k_eq"} and the assumption $\partial_y V>0$ [\[WLOG\]](#WLOG){reference-type="eqref" reference="WLOG"}: $$\begin{aligned} T_{\beta;2}=& \beta\psi^2\epsilon^{1/3}\Re\int i\left(\nu\partial_{yy}f - ikVf\right)\overline{\partial_yf}dy = \beta\psi^2\epsilon^{1/3}\left(\nu\Re\int i\partial_{yy}f\overline{\partial_yf}dy + k\Re\int V \partial_y \left(\frac{|f|^2}{2}\right)dy \right)\\ =& \beta\psi^2\epsilon^{1/3}\left(\nu\Re\int i\partial_{yy}f\overline{\partial_yf}dy - \frac{k}{2}\Re\int |f|^2\partial_yV dy\right). \label{T_beta_2} \end{aligned}$$ Finally, we focus on the $T_{\beta;3}$ term in [\[T_beta123\]](#T_beta123){reference-type="eqref" reference="T_beta123"}. Recalling that $0\leq \partial_yV\in \mathbb R$, we have that $$\begin{aligned} T_{\beta;3}=& \beta\psi^2\epsilon^{1/3} \Re\int if\overline{\left(\nu\partial_{y}^3f- ik\partial_yVf - ikV\partial_yf\right)}dy \\ =& \beta\psi^2\epsilon^{1/3}\left(-\nu\Re\int i\partial_y f\overline{\partial_{y}^2f}dy- k\Re\int f\overline{\partial_yVf}dy - k\Re\int V \partial_y \left(\frac{|f|^2}{2}\right)dy\right)\\ =& -\beta\psi^2\epsilon^{1/3}\nu\Re\int i\partial_y f\overline{\partial_{y}^2f}dy- \frac{\beta}{2}\psi^2\epsilon^{1/3}k\Re\int |f|^2 \partial_yV dy.\label{T_beta3}\end{aligned}$$ Combining the estimates [\[T_beta1\]](#T_beta1){reference-type="eqref" reference="T_beta1"}, [\[T_beta_2\]](#T_beta_2){reference-type="eqref" reference="T_beta_2"}, [\[T_beta3\]](#T_beta3){reference-type="eqref" reference="T_beta3"}, we have that $$\begin{aligned} T_\beta\leq& \frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right)-2\beta\psi^2\epsilon^{1/3}\nu\Re\int i\partial_y f\overline{\partial_{y}^2f}dy\\ &- {\beta}\psi^2\epsilon^{1/3}|k|\left\|
\sqrt{|\partial_yV|}|f|\right\|_{2}^2\\ \leq & \frac{\beta}{\sqrt{\alpha}} \nu^{1/3}{ |k|^{2/3} }\mathbbm{1}_{[0,\nu^{-1/3}{ |k|^{-2/3} }]}(t) \left(\|f\|_2^2+ \alpha\epsilon^{2/3} \psi\|\partial_yf\|_2^2\right)+\frac{\beta^2}{\alpha}\psi^3\nu\|\partial_y f\|_2^2 +{\alpha}\psi\epsilon^{2/3}\nu\| \partial_{yy}f\|_2^2\\ &- {\beta}\psi^2\epsilon^{1/3}|k|\left\| \sqrt{|\partial_yV|}|f|\right\|_{2}^2.\end{aligned}$$ In the last step, we applied Young's inequality to the viscous cross term. This proves [\[monotone beta estimate\]](#monotone beta estimate){reference-type="eqref" reference="monotone beta estimate"}. ◻

# Enhanced Dissipation: Nondegenerate Shear Flows {#nondegenerate}

In this section, we prove the estimate [\[ndeg_ED_main\]](#ndeg_ED_main){reference-type="eqref" reference="ndeg_ED_main"} for the hypoelliptic passive scalar equation $\eqref{k_by_k_eq}_{\sigma=0}$. Without loss of generality, we assume that $k \geq 1.$ Let us start with a lemma.

**Lemma 4**. *Consider the flow $V(t,y)$ and the reference flow $U(t,y)$ as in Theorem [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"}.
There exists a constant $C_\ast(\mathfrak {C}_0,\mathfrak {C}_1)> 1$ such that the following estimate holds $$\begin{aligned} \label{equiv_UV} C_\ast^{-1}|\partial_yU(t,y)|\leq |\partial_y V(t,y)|\leq C_\ast |\partial_yU(t,y)|,\quad \forall y\in {\mathbb T},\quad\forall t\in[0,T]. \end{aligned}$$* *Proof.* We distinguish between two cases: a) $y\in B_{r}(y_i(t))$; b) $y\in (\cup_{i=1}^N B_{r}(y_i(t)))^c$. If $y\in B_{r}(y_i(t))$, by [\[asmp_1\]](#asmp_1){reference-type="eqref" reference="asmp_1"}, $$\begin{aligned} |\partial_y V(t,y)|\leq \mathfrak{C}_0^{1/2}|y-y_i(t)|\leq \mathfrak{C}_0|\partial_yU(t,y)|,\quad |\partial_yU(t,y)| \leq \mathfrak{C}_0^{1/2}|y-y_i(t)|\leq \mathfrak{C}_0 |\partial_y V(t,y)| . \end{aligned}$$ In case b), since $|\partial_y V|,|\partial_yU|\in[\mathfrak{C}_1^{-1},\mathfrak{C}_1]$, the relation [\[equiv_UV\]](#equiv_UV){reference-type="eqref" reference="equiv_UV"} is direct. ◻ **Lemma 5**. *Assume the relation $$\begin{aligned} \label{ndeg_bnd_req} { \beta^2 \leq \alpha\gamma. 
} \end{aligned}$$ Then, the following equivalence relation concerning the functional $\mathcal{G}$ [\[hypo_ndeg\]](#hypo_ndeg){reference-type="eqref" reference="hypo_ndeg"} holds $$\begin{aligned} \|f\|_2^2 + \frac{1}{2}\left(\alpha\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \gamma\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2\right) \leq \mathcal{G}[f] \leq \|f\|_2^2 + \frac{3}{2}\left(\alpha\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \gamma\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2\right).\\\label{equiv_ndg} \end{aligned}$$*

*Proof.* We recall the definition of $\mathcal{G}$ [\[hypo_ndeg\]](#hypo_ndeg){reference-type="eqref" reference="hypo_ndeg"} and estimate $\mathcal{G}[f]$ using the Hölder and Young inequalities: $$\begin{aligned} \mathcal{G}[f] \leq& \|f\|_2^2 + \alpha\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \beta\phi^2\|\partial_yUf\|_2\|\partial_yf\|_2 + \gamma\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2\\ \leq& \|f\|_2^2 + \frac{3\alpha}{2}\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \left(\gamma + \frac{\beta^2}{2\alpha}\right)\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2. \end{aligned}$$ Similarly, we have the lower bound, $$\begin{aligned} \mathcal{G}[f] \geq \|f\|_2^2 + \frac{\alpha}{2}\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \left(\gamma - \frac{\beta^2}{2\alpha}\right)\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2. \end{aligned}$$ Since [\[ndeg_bnd_req\]](#ndeg_bnd_req){reference-type="eqref" reference="ndeg_bnd_req"} implies that $\frac{\beta^2}{2\alpha} \leq \frac{\gamma}{2}$, we obtain that $$\begin{aligned} \|f\|_2^2 + \frac{1}{2}\alpha\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \frac{1}{2}\gamma\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2 \leq \mathcal{G}[f] \leq \|f\|_2^2 + \frac{3}{2}\alpha\phi\epsilon^{1/2}\|\partial_yf\|_2^2 + \frac{3}{2}\gamma\phi^3\epsilon^{-1/2}\|\partial_yUf\|_2^2. \end{aligned}$$ This concludes the proof of the lemma. ◻
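The weighted Young inequality used twice in this proof, $\beta\phi^2 ab \leq \frac{\alpha}{2}\phi\epsilon^{1/2}b^2 + \frac{\beta^2}{2\alpha}\phi^3\epsilon^{-1/2}a^2$ with $a=\|\partial_yUf\|_2$ and $b=\|\partial_yf\|_2$, can be spot-checked on random data — a sketch with arbitrary sample ranges for the parameters:

```python
import numpy as np

# Spot-check of beta*phi^2*a*b <= (alpha/2)*phi*eps^(1/2)*b^2
#                               + (beta^2/(2*alpha))*phi^3*eps^(-1/2)*a^2,
# the weighted Young inequality for the cross term of G; a and b stand in
# for ||d_y U f||_2 and ||d_y f||_2 (sample ranges are arbitrary choices).
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.uniform(0.0, 10.0, size=2)
    alpha, beta, eps = rng.uniform(0.1, 5.0, size=3)
    phi = rng.uniform(0.0, 1.0)          # phi = min(..., 1) lies in [0, 1]
    lhs = beta * phi**2 * a * b
    rhs = (0.5 * alpha * phi * np.sqrt(eps) * b**2
           + beta**2 / (2.0 * alpha) * phi**3 / np.sqrt(eps) * a**2)
    assert lhs <= rhs + 1e-9
```

The two weights are tuned so that the geometric mean of the coefficients on the right equals $\beta\phi^2/2$, which is exactly the AM–GM threshold for the left-hand side.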
By taking the time derivative of the hypocoercivity functional [\[hypo_ndeg\]](#hypo_ndeg){reference-type="eqref" reference="hypo_ndeg"}, we end up with the following decomposition: $$\begin{aligned} \frac{d}{dt}\mathcal{G}[f(t)] =& \frac{d}{dt}\|f\|_2^2 + \alpha\epsilon^{1/2}\frac{d}{dt}\left(\phi\|\partial_{y}f\|_2^2\right) + \beta\frac{d}{dt}\left(\phi^2\Re\langle i\partial_yUf,\partial_yf\rangle\right) + \gamma\epsilon^{-1/2}\frac{d}{dt}\left(\phi^3\|\partial_yUf\|_2^2\right)\\ =:& \mathbb T_{L^2} + \mathbb T_\alpha+ \mathbb T_\beta + \mathbb T_\gamma.\label{ndeg_T_albe_term}\end{aligned}$$ The estimates for the $\mathbb T_\alpha,\ \mathbb T_\beta$, and $\mathbb T_\gamma$ terms are tricky, and we collect them in the following technical lemmas whose proofs will be postponed to the end of this section.

**Lemma 6** ($\alpha$-estimate). *The following estimate holds on the interval $[0,T]$: $$\begin{aligned} \label{deg_al_est} \mathbb{T}_\alpha\leq \alpha\nu\left(1+\frac{4\alpha}{\beta}C_\ast^3\right)\|\partial_yf\|_2^2 - 2\alpha\phi\epsilon^{1/2}\nu\|\partial_{y}^2f\|_2^2 + \frac{\beta\phi^2}{4C_\ast} |k|\|\partial_yUf\|_2^2. \end{aligned}$$ Here, the constant $C_\ast$ is defined in [\[equiv_UV\]](#equiv_UV){reference-type="eqref" reference="equiv_UV"}.*

**Lemma 7** ($\beta$-estimate). *The following estimate holds $$\begin{aligned} \mathbb{T}_\beta \leq& \left(\frac{1}{4}+ 4\beta C_\ast \right)\nu\|\partial_yf\|_2^2+2{\alpha} \phi\epsilon^{1/2}\nu\|\partial_y^2f\|_2^2 - \frac{3}{4}\frac{\beta\phi^2|k|}{C_\ast}\|\partial_{y}Uf\|_2^2\\ &+\left(\frac{\beta\phi^2}{|k|^{1/2}}+\frac{\beta\phi}{2\alpha}\|\partial_{yy}U\|_\infty^2\right)\beta \phi^2\nu^{1/2}|k|^{1/2}\|f\|_2^2 + \left(\frac{3\beta^2}{4\alpha\gamma}\right)\gamma\phi^3\epsilon^{-1/2}\nu\|\partial_{y}U\partial_yf\|_2^2 . \label{ndeg_beta_est} \end{aligned}$$ Here the constant $C_\ast$ is defined in [\[equiv_UV\]](#equiv_UV){reference-type="eqref" reference="equiv_UV"}.*

**Lemma 8** ($\gamma$-estimate).
*The following estimate holds on the interval $[0,T]$ $$\begin{aligned} \mathbb{T}_\gamma \leq& \left(\frac{3\gamma C_\ast}{\beta}+\frac{1}{4}\right)\frac{\beta|k|\phi^2\|\partial_yUf\|_2^2}{C_\ast} +\left( \frac{4C_\ast\gamma^2\phi^2}{\beta^2|k|^{1/2}}+\frac{4\gamma}{\beta}\phi\|\partial_{yy}U\|_\infty^2\right)\beta\phi^2\nu^{1/2}|k|^{1/2}\|f\|_2^2 -\gamma\phi^3\epsilon^{-1/2}\nu\|\partial_yU\partial_yf\|_2^2.\label{ndeg_gamma_est} \end{aligned}$$ Here the $C_\ast$ is defined in [\[equiv_UV\]](#equiv_UV){reference-type="eqref" reference="equiv_UV"}.* These estimates allow us to prove Theorem [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"}. *Proof of Theorem [Theorem 3](#ndeg_main_thm){reference-type="ref" reference="ndeg_main_thm"}.* If $T\leq 2\nu^{-1/2}{ |k|^{-1/2} }$, then standard $L^2$-energy estimate yields [\[ndeg_ED_main\]](#ndeg_ED_main){reference-type="eqref" reference="ndeg_ED_main"}. Hence, we assume $T>2\nu^{-1/2}{ |k|^{-1/2} }$ without loss of generality. We distinguish between two time intervals, i.e., $$\begin{aligned} \label{ndeg_I1_I2} \mathcal{I}_1=[0, \nu^{-1/2}{ |k|^{-1/2} }], \quad \mathcal{I}_2=[\nu^{-1/2}{ |k|^{-1/2} },T]. \end{aligned}$$ We organize the proof into three steps. In step \# 1, we choose the $\alpha, \beta,\gamma$ parameters and derive the energy dissipation relation. In step \# 2, we estimate the functional $\mathcal{G}$ in the time interval $\mathcal{I}_1$. In step \# 3, we estimate the functional $\mathcal{G}$ in the time interval $\mathcal{I}_2$ and conclude the proof. 
**Step \# 1: Energy bounds.**
Combining the estimates [\[T_L2\]](#T_L2){reference-type="eqref" reference="T_L2"}, [\[deg_al_est\]](#deg_al_est){reference-type="eqref" reference="deg_al_est"}, [\[ndeg_beta_est\]](#ndeg_beta_est){reference-type="eqref" reference="ndeg_beta_est"}, [\[ndeg_gamma_est\]](#ndeg_gamma_est){reference-type="eqref" reference="ndeg_gamma_est"}, we obtain that $$\begin{aligned} \frac{d}{dt}\mathcal{G}[f(t)] \leq& - \left(\frac{7}{4}- { \alpha } - \frac{4\alpha^2}{\beta}C_\ast^3- 4\beta C_\ast \right)\nu\|\partial_yf\|_2^2- \left(\frac{1}{4} - \frac{3\gamma C_\ast}{\beta}\right)\frac{\beta|k|\phi^2}{C_\ast}\|\partial_{y}Uf\|_2^2 \\ &+\left(\frac{\beta \phi^2}{|k|^{1/2}} + \frac{\beta\phi}{2\alpha}\|\partial_{yy}U\|_\infty^2+ \frac{4C_\ast\gamma^2\phi^2}{\beta^2|k|^{1/2}} + \frac{4\gamma}{\beta}\phi\|\partial_{yy}U\|_\infty^2\right)\beta\phi^2\nu^{1/2}|k|^{1/2}\|f\|_2^2\\ &- \gamma\left(1 - \frac{3\beta^2}{4\alpha\gamma}\right)\phi^3\nu\epsilon^{-1/2}\|\partial_{y}U\partial_yf\|_2^2. \end{aligned}$$ We choose $\alpha$, $\gamma$ in terms of $\beta(\leq 1)$ as follows $$\begin{aligned} \label{chc_al_ga} \alpha= \frac{\beta^{1/2}}{4{ C_\ast^{3/2} }}, \quad\quad \gamma = 4\beta^{3/2}{ C_\ast^{3/2} }. \end{aligned}$$ The resulting differential inequality is $$\begin{aligned} \frac{d}{dt}\mathcal{G}[f(t)]\leq & - \left(\frac{5}{4} - 4\beta C_\ast \right)\underbrace{\epsilon|k|}_{=\nu}\|\partial_yf\|_2^2 - \left(\frac{1}{4} - 12\beta^{1/2}C_\ast^{5/2} \right)\frac{\beta|k|\phi^2}{C_\ast}\|\partial_{y}Uf\|_2^2 \\ &+\underbrace{\left(\beta + 2\beta^{1/2}C_\ast^{3/2} + {64C_\ast^4\beta} + 16 {\beta^{1/2}C_\ast^{3/2}} \right)}_{\leq 83\beta^{1/2}C_\ast^4}\max\{1,\|\partial_{yy}U\|_\infty^2\}\beta\phi^2\underbrace{\epsilon^{1/2}|k|}_{=\nu^{1/2}|k|^{1/2}}\|f\|_2^2\\ &- \frac{\gamma}{4}\phi^3\nu\epsilon^{-1/2}\|\partial_{y}U\partial_yf\|_2^2. 
\end{aligned}$$ Now we invoke the spectral inequality [\[spectral\]](#spectral){reference-type="eqref" reference="spectral"} to obtain that $$\begin{aligned} \frac{d}{dt}\mathcal{G}[f(t)]\leq & - \left(\frac{5}{4} - 4\beta C_\ast-83\beta^{3/2}C_\ast^4\max\{1,\|\partial_{yy}U\|_\infty^2\}\right)\nu\|\partial_yf\|_2^2 \\ & - \left(\frac{1}{4C_\ast} - 12\beta^{1/2}C_\ast^{3/2}-83\beta^{1/2}C_\ast^4\mathfrak{C}_{\text{spec}} \max\{1,\|\partial_{yy}U\|_\infty^2\}\right)\beta|k|\phi^2\|\partial_{y}Uf\|_2^2 .%- \frac{\gamma}{4}\phi^3\nu\ep^{-1/2}\|\pa_{y}U\pa_yf\|_2^2 %MATH %\mathfrak{C}_{\text{spec}} \end{aligned}$$ Hence we can choose $$\begin{aligned} \beta=\beta(C_\ast,\mathfrak{C}_{\mathrm{spec}},\|\partial_{yy}U\|_\infty)<1\label{chc_beta} \end{aligned}$$ small enough, invoke the spectral inequality [\[spectral\]](#spectral){reference-type="eqref" reference="spectral"} and the equivalence relation [\[equiv_ndg\]](#equiv_ndg){reference-type="eqref" reference="equiv_ndg"} to obtain that $$\begin{aligned} \frac{d}{dt}\mathcal{G}[f(t)]\leq&- \frac{1}{2}\epsilon|k|\|\partial_y f\|_2^2-\frac{\beta}{8C_\ast}|k|\phi^2\|\partial_{y}Uf\|_2^2\\ \leq&-\frac{\beta\phi^2}{16\mathfrak{C}_{\mathrm{spec}}C_\ast}\epsilon^{1/2}|k|\|f\|_2^2-\frac{1}{4}\nu^{1/2}|k|^{1/2}\epsilon^{1/2}\phi\|\partial_y f\|_2^2-\frac{\beta}{16C_\ast}\nu^{1/2}|k|^{1/2}\epsilon^{-1/2}\phi^3\|\partial_{y}Uf\|_2^2\\ \leq& -\delta(\beta,\mathfrak{C}_{\text{spec}}^{-1}, C_\ast^{-1})\nu^{1/2}|k|^{1/2}\mathcal{G}[f].\label{dfn_del} \end{aligned}$$ Finally, we observe that the parameter $\delta$ depends only on three parameters $C_{\ast},\,\mathfrak{C}_{\text{spec}}$ and $\|\partial_{yy}U\|_\infty$. **Step \# 2: Initial time layer estimate.** This step is similar to the argument in the strictly monotone shear case. 
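Before turning to the time-layer estimates, the algebraic cancellations behind the parameter choices [\[chc_al_ga\]](#chc_al_ga){reference-type="eqref" reference="chc_al_ga"} can be checked numerically. The sketch below is a side check with arbitrary positive sample values of $\beta$ and $C_\ast$; it is not part of the proof.

```python
# Sanity check (illustrative): with
#   alpha = beta^{1/2} / (4 C*^{3/2}),  gamma = 4 beta^{3/2} C*^{3/2},
# the combinations appearing in the energy estimate collapse to constants:
#   4 alpha^2 C*^3 / beta = 1/4   and   3 beta^2 / (4 alpha gamma) = 3/4.
def combos(beta, c_star):
    alpha = beta ** 0.5 / (4 * c_star ** 1.5)
    gamma = 4 * beta ** 1.5 * c_star ** 1.5
    id1 = 4 * alpha ** 2 * c_star ** 3 / beta
    id2 = 3 * beta ** 2 / (4 * alpha * gamma)
    return id1, id2

for beta in (1e-6, 1e-3, 0.5):
    for c_star in (1.0, 2.0, 10.0):
        id1, id2 = combos(beta, c_star)
        assert abs(id1 - 0.25) < 1e-9 and abs(id2 - 0.75) < 1e-9
```

These two identities are exactly what turn the coefficients $7/4-\alpha-4\alpha^2C_\ast^3/\beta$ and $1-3\beta^2/(4\alpha\gamma)$ into the constants $5/4-\alpha$ and $1/4$ appearing in the differential inequality above.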
Thanks to the energy dissipation relation [\[dfn_del\]](#dfn_del){reference-type="eqref" reference="dfn_del"}, we obtain that $$\begin{aligned} \mathcal{G}[f_k](t)\leq C\|f_{0;k}\|_{L^2}^2,\quad\forall t\in[0, \nu^{-1/2}|k|^{-1/2}]. \end{aligned}$$ **Step \# 3: Long time estimate.** Assume $t\geq \nu^{-1/2}|k|^{-1/2}$. Thanks to the energy dissipation relation [\[dfn_del\]](#dfn_del){reference-type="eqref" reference="dfn_del"}, we obtain $$\begin{aligned} \frac{d}{dt}\mathcal{G}[f]\leq& -\delta\nu^{1/2}|k|^{1/2}\mathcal{G}[f]. \end{aligned}$$ Hence, we obtain that $$\begin{aligned} \mathcal{G}[f(t)] \leq& \mathcal{G}[f(\nu^{-1/2}|k|^{-1/2})]e^{-\delta\nu^{1/2}{ |k|^{1/2} }\left(t-\nu^{-1/2}{ |k|^{-1/2} }\right)} \leq e\mathcal{G}[f(0)] e^{-\delta\nu^{1/2}{ |k|^{1/2} }t}= e\|f(0)\|_2^2 e^{-\delta\nu^{1/2}{ |k|^{1/2} }t}. \end{aligned}$$ Now, the results from Steps 2 and 3 yield [\[Hypo_est_ndeg\]](#Hypo_est_ndeg){reference-type="eqref" reference="Hypo_est_ndeg"}. ◻ We conclude the section by providing the detailed proofs of Lemmas [Lemma 6](#lem:nondegenerate al){reference-type="ref" reference="lem:nondegenerate al"}, [Lemma 7](#lem:nondegenerate beta){reference-type="ref" reference="lem:nondegenerate beta"}, and [Lemma 8](#lem:nondegenerate gamma){reference-type="ref" reference="lem:nondegenerate gamma"}. *Proof of Lemma [Lemma 6](#lem:nondegenerate al){reference-type="ref" reference="lem:nondegenerate al"}.* We recall the definition of $\mathbb{T}_\alpha$ [\[ndeg_T\_albe_term\]](#ndeg_T_albe_term){reference-type="eqref" reference="ndeg_T_albe_term"}.
Invoking the equation [\[k_by_k\_eq\]](#k_by_k_eq){reference-type="eqref" reference="k_by_k_eq"} and integration by parts yields that $$\begin{aligned} \mathbb{T}_\alpha=& \alpha\phi'\epsilon^{1/2}\|\partial_yf\|_2^2 + \alpha\phi\epsilon^{1/2}\frac{d}{dt}\|\partial_yf\|_2^2 = \alpha\phi'\epsilon^{1/2}\|\partial_yf\|_2^2 + 2\alpha\phi\epsilon^{1/2}\Re\int \partial_y(\nu\partial_{y}^2f - ikVf) \overline{\partial_y f}dy\\ %=& \al\phi'\ep^{1/2}\|\pa_yf\|_2^2 + 2\al\phi\ep^{1/2}\Re\int\lf(\nu \pa_{y}^3f- ik\pa_yVf - ikV\pa_yf\rg)\overline{\pa_y f}dy\\ %=&\al\phi'\ep^{1/2}\|\pa_yf\|_2^2 + 2\al\phi\ep^{1/2}\lf(-\nu\Re\int|\pa_{y}^2f|^2dy - \Re\int ik\pa_yVf\overline{\pa_yf}dy - \Re\int ikV|\pa_yf|^2dy\rg)\\ =& \alpha\phi'\epsilon^{1/2}\|\partial_yf\|_2^2 - 2\alpha\phi\epsilon^{1/2}\left( \nu\|\partial_{y}^2f\|_2^2 + \Re\int ik\partial_yVf\overline{\partial_yf}dy\right). \end{aligned}$$ Now we apply Hölder inequality, the expression [\[dt_weights\]](#dt_weights){reference-type="eqref" reference="dt_weights"}, and the equivalence relation [\[equiv_UV\]](#equiv_UV){reference-type="eqref" reference="equiv_UV"} to obtain that $$\begin{aligned} \mathbb{T}_\alpha\leq& \alpha\nu\|\partial_yf\|_2^2 - 2\alpha\phi\epsilon^{1/2}\nu\|\partial_{y}^2f\|_2^2 + 2\alpha\phi\epsilon^{1/2}|k|\|\partial_yVf\|_{2}\|\partial_yf\|_2\\ \leq& \alpha\nu\|\partial_yf\|_2^2 - 2\alpha\phi\epsilon^{1/2}\nu\|\partial_{y}^2f\|_2^2 + \frac{4\alpha^2}{\beta}C_\ast^3\nu\|\partial_yf\|_2^2+\frac{\beta\phi^2|k|}{4C_\ast}\|\partial_{y}Uf\|_2^2. \end{aligned}$$ This is [\[deg_al_est\]](#deg_al_est){reference-type="eqref" reference="deg_al_est"}. ◻ *Proof of Lemma [Lemma 7](#lem:nondegenerate beta){reference-type="ref" reference="lem:nondegenerate beta"}.* The estimate of the $\mathbb{T}_\beta$ term in [\[T_albe_term\]](#T_albe_term){reference-type="eqref" reference="T_albe_term"} is technical. 
We further decompose it into four terms and estimate them one by one: $$\begin{aligned} \mathbb{T}_\beta =& 2\beta\phi\phi'\Re\langle i\partial_{y}Uf,\partial_yf\rangle + \beta\phi^2\Re\int i\partial_{ty}Uf\overline{\partial_yf}dy + \beta\phi^2\Re\int i\partial_{y}U\partial_tf\overline{\partial_yf}dy + \beta\phi^2\Re\int i\partial_{y}Uf\overline{\partial_{yt}f}dy\\ =:& \mathbb{T}_{\beta;1} + \mathbb{T}_{\beta;2} + \mathbb{T}_{\beta;3} + \mathbb{T}_{\beta;4}.\label{T_beta1234} \end{aligned}$$ To begin with, we apply the expression [\[dt_weights\]](#dt_weights){reference-type="eqref" reference="dt_weights"}, the Hölder and Young's inequalities to derive the following bound for the $\mathbb{T}_{\beta;1}$ term, $$\begin{aligned} \mathbb{T}_{\beta;1} \leq& 2\beta\phi\nu^{1/2}|k|^{1/2}\|\partial_{y}Uf\|_2\|\partial_yf\|_2 \leq \frac{\beta\phi^2|k|}{4C_\ast} \|\partial_{y}Uf\|_2^2 + 4\beta C_\ast \nu \|\partial_yf\|_2^2. \end{aligned}$$ Next we estimate the term $\mathbb{T}_{\beta;2}$ using the assumption [\[asmpt0\]](#asmpt0){reference-type="eqref" reference="asmpt0"}, $$\begin{aligned} \mathbb{T}_{\beta;2} \leq \beta\phi^2\|\partial_{ty}U\|_\infty\|f\|_2\|\partial_yf\|_2 \leq \beta^2\phi^4\nu^{1/2}\|f\|_2^2 + \frac{1}{4}\nu\|\partial_yf\|_2^2. 
\end{aligned}$$ To estimate the $\mathbb{T}_{\beta;3}$-term in [\[T_beta1234\]](#T_beta1234){reference-type="eqref" reference="T_beta1234"}, we apply integration by parts and obtain that $$\begin{aligned} \mathbb{T}_{\beta;3} =& \beta\phi^2\Re\int i\partial_{y}U\left(\nu\partial_{yy}f-iVkf\right)\overline{\partial_yf}dy = \beta\phi^2\left(\nu\Re\int i\partial_{y}U\partial_{yy}f\overline{\partial_yf}dy + k\Re\int \partial_{y}UVf\overline{\partial_yf}dy\right)\\ \leq& \beta\phi^2\nu\|\partial_{y}U\partial_yf\|_2\|\partial_{yy}f\|_2 + \beta\phi^2k\Re\int \partial_{y}UVf\overline{\partial_yf}dy\\ \leq& \alpha\phi\epsilon^{1/2}\nu\|\partial_{yy}f\|_2^2 + \left(\frac{\beta^2}{4\alpha\gamma}\right) \gamma\phi^3\epsilon^{-1/2}\nu\|\partial_{y}U\partial_yf\|_2^2 + \beta\phi^2k\Re\int \partial_{y}UVf\overline{\partial_yf}dy.\label{T_beta_3} \end{aligned}$$ Finally we estimate the term $\mathbb{T}_{\beta;4}$ in [\[T_beta1234\]](#T_beta1234){reference-type="eqref" reference="T_beta1234"} $$\begin{aligned} \mathbb{T}_{\beta;4} =& \beta\phi^2\Re\int i\partial_{y}Uf\overline{\left(\nu\partial_y^3f - ik\partial_yVf - ikV\partial_yf\right)}dy\\ =& \beta\phi^2\left(-\nu\Re\int i\left(\partial_{yy}Uf + \partial_{y}U\partial_yf\right)\overline{\partial_{yy}f}dy - k\Re\int (\partial_{y}U\partial_yV)|f|^2dy - k\Re\int \partial_{y}UVf\overline{\partial_yf}dy\right). 
\end{aligned}$$ Now, we invoke the assumption [\[asmpt0\]](#asmpt0){reference-type="eqref" reference="asmpt0"} and the equivalence relation [\[equiv_UV\]](#equiv_UV){reference-type="eqref" reference="equiv_UV"} to obtain that $$\begin{aligned} \mathbb{T}_{\beta,4}\leq& \beta\phi^2\nu\|\partial_{yy}U\|_\infty\|f\|_2\|\partial_y^2f\|_2 + \beta\phi^2\nu\|\partial_{y}U\partial_yf\|_2\|\partial_y^2f\|_2 - \frac{\beta\phi^2|k|}{C_\ast}\Re\int |\partial_{y}U|^2|f|^2 dy - \beta\phi^2k\Re\int \partial_{y}UVf\overline{\partial_yf}dy\\ \leq& {\alpha} \phi\epsilon^{1/2}\nu\|\partial_y^2f\|_2^2 + \frac{\beta^2}{2\alpha}\phi^3\epsilon^{-1/2}\nu\|\partial_{yy}U\|_\infty^2\|f\|_2^2 + \left(\frac{\beta^2}{2\alpha\gamma}\right)\gamma\phi^3\epsilon^{-1/2}\nu\|\partial_{y}U\partial_yf\|_2^2\\ & - \frac{\beta\phi^2|k|}{C_\ast}\|\partial_{y}Uf\|_2^2 - \beta\phi^2k\Re\int \partial_{y}UVf\overline{\partial_yf}dy.\label{T_beta_4} \end{aligned}$$ Combining the estimates, we have $$\begin{aligned} \mathbb{T}_\beta \leq& \left(\frac{1}{4}+ 4\beta C_\ast \right)\nu\|\partial_yf\|_2^2 - \frac{3}{4}\frac{\beta\phi^2|k|}{C_\ast}\|\partial_{y}Uf\|_2^2+\left(\frac{\beta\phi^2}{|k|^{1/2}}+\frac{\beta}{2\alpha}\phi\|\partial_{yy}U\|_\infty^2\right)\beta \phi^2\nu^{1/2}|k|^{1/2}\|f\|_2^2 \\ &+2{\alpha} \phi\epsilon^{1/2}\nu\|\partial_y^2f\|_2^2 + \left(\frac{3\beta^2}{4\alpha\gamma}\right)\gamma\phi^3\epsilon^{-1/2}\nu\|\partial_{y}U\partial_yf\|_2^2 . \end{aligned}$$ This is the estimate [\[ndeg_beta_est\]](#ndeg_beta_est){reference-type="eqref" reference="ndeg_beta_est"}. 
◻ *Proof of Lemma [Lemma 8](#lem:nondegenerate gamma){reference-type="ref" reference="lem:nondegenerate gamma"}.* Combining the equation [\[k_by_k\_eq\]](#k_by_k_eq){reference-type="eqref" reference="k_by_k_eq"}, the smallness assumption [\[asmpt0\]](#asmpt0){reference-type="eqref" reference="asmpt0"}, and integration by parts yields the following bound $$\begin{aligned} \mathbb{T}_\gamma \leq& 3\gamma\phi^2|k|\|\partial_{y}Uf\|_2^2 + 2\gamma\phi^3\epsilon^{-1/2}\left(\int |\partial_{ty}U||\partial_{y}U||f|^2 dy + \Re\int |\partial_{y}U|^2\left(\nu\partial_{yy}f - iVkf\right)\overline{f}dy\right)\\ \leq& 3\gamma\phi^2|k|\|\partial_{y}Uf\|_2^2 + 2\gamma\phi^3\epsilon^{-1/2}\left( \nu^{3/4} \|f\|_2\|\partial_{y}U f\|_2 -2\nu\Re\int \partial_{y}U\partial_yf\ \overline{\partial_{yy}U f}dy - \nu\| \partial_{y}U\partial_yf\|_2^2 \right)\\ \leq& \left(\frac{3\gamma C_\ast}{\beta}+\frac{1}{4}\right)\frac{\beta\phi^2|k|}{C_\ast}\|\partial_{y}Uf\|_2^2 + \left(\frac{4C_\ast\gamma^2}{\beta^2|k|^{1/2}}\phi^2+\frac{4\gamma\phi}{\beta}\|\partial_{yy}U\|_\infty^2\right)\beta\phi^2\nu^{1/2}|k|^{1/2}\|f\|_2^2 -\gamma\phi^3\epsilon^{-1/2}\nu\|\partial_{y}U\partial_yf\|_2^2. \end{aligned}$$ This is [\[ndeg_gamma_est\]](#ndeg_gamma_est){reference-type="eqref" reference="ndeg_gamma_est"}. ◻ # Taylor Dispersion {#Taylor} In this section, we prove Theorem [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}. Let us focus on the Taylor dispersion regime $0<|k|\leq \nu$ and further divide the proof into three steps. **Step \# 1: Preliminaries.** Without loss of generality, we assume that $k> 0$ and focus on the hypoelliptic equation $\eqref{k_by_k_eq}_{\sigma=0}$.
Thanks to the Dirichlet boundary condition $f(t,y=\pm 1)=0$, we have the following boundary constraints for [\[k_by_k\_eq\]](#k_by_k_eq){reference-type="eqref" reference="k_by_k_eq"}: $$\begin{aligned} \widehat f_k(t,y=\pm 1)=0, \quad \partial_{yy}\widehat f_{k}(t, y=\pm 1)=0,\quad \forall t\geq0.\label{bc}\end{aligned}$$ These boundary conditions enable us to implement integration by parts without creating boundary terms. We will adopt the simplified notation $f=\widehat f_k$ in the remaining part of the section. Next, we identify the parameter regime in which the functional $\mathcal{T}$ [\[hypo_Tay\]](#hypo_Tay){reference-type="eqref" reference="hypo_Tay"} is comparable to the $H^1$-norm of the solution. We observe that $$\begin{aligned} \mathcal{T}[f]\geq \|f\|_2^2+\alpha\zeta \|\partial_y f\|_2^2- \beta\zeta\|\partial_y V\|_\infty\|f\|_2\|\partial_yf \|_2\geq \frac{1}{2}\|f\|_2^2+\alpha\zeta\left(1-\frac{\beta^2}{\alpha}\|\partial_yV\|_{\infty}^2\right)\|\partial_yf\|_2^2.\end{aligned}$$ Similar estimate holds for the upper bound. Hence if $$\begin{aligned} \frac{\beta^2}{\alpha}\|\partial_yV\|_\infty^2\leq \frac{1}{2},\label{cnstr_ab_Taylor}\end{aligned}$$ the following equivalence relation holds $$\begin{aligned} \frac{1}{2}\|f\|_2^2+\frac{1}{2}\alpha\zeta\|\partial_yf\|_2^2\leq \mathcal{T}[f]\leq \frac{3}{2}\|f\|_2^2+\frac{3}{2}\alpha\zeta\|\partial_yf\|_2^2. 
\label{Tay_equiv}\end{aligned}$$ we consider the following hypocoercivity functional: $$\begin{aligned} T[f]=\|f\|_2^2+\alpha\|\partial_y f\|_2^2+\beta\Re\langle i\partial_{y}U\text{sign}{k} f,\partial_y f\rangle.\end{aligned}$$ **Step \# 2: Hypocoercivity.** We compute the time derivative of the hypocoercivity functional $\mathcal{T}[f]$ [\[hypo_Tay\]](#hypo_Tay){reference-type="eqref" reference="hypo_Tay"} $$\begin{aligned} \label{Tay_123} \frac{d}{dt}\mathcal T[f]=\frac{d}{dt}\|f\|_2^2+\alpha\frac{d}{dt}\left(\zeta\|\partial_y f\|_2^2\right)+\beta \frac{d}{dt}\left(\zeta\Re\langle i\partial_yV\ \text{sign}{k}\ f,\partial_y f\rangle\right)=:T_1+T_2+T_3.\end{aligned}$$ By recalling the boundary condition [\[bc\]](#bc){reference-type="eqref" reference="bc"}, we apply integration by parts to obtain $$\begin{aligned} T_1=-2\nu\|\partial_y f\|_2^2.\label{Tay_1}\end{aligned}$$ The $T_2$-term can be estimated using the constraint $|k|/\nu\leq1,$ integration by parts (with boundary condition [\[bc\]](#bc){reference-type="eqref" reference="bc"}), Hölder inequality and Young's inequality as follows $$\begin{aligned} T_2=&\alpha\zeta'\|\partial_y f\|_2^2+2\alpha\zeta\Re\langle\nu\partial_{yyy}f -\partial_y Vik f-Vik\partial_y f ,\partial_y f\rangle\\ \leq& \alpha\frac{|k|^2}{\nu^2}\nu\|\partial_y f\|_2^2 -2\alpha\zeta\nu\|\partial_{yy}f\|_2^2+2\alpha\zeta\frac{|k|}{\nu}\nu\|\partial_y Vf\|_2\|\partial_yf\|_2\\ \leq &\alpha\nu\|\partial_y f\|_2^2 -2\alpha\zeta\nu\|\partial_{yy}f\|_2^2+\frac{\beta}{4}\zeta |k|\|\partial_y Vf\|_2^2+ \frac{4\alpha^2}{\beta}\zeta \nu\|\partial_yf\|_2^2.\label{Tay_2}\end{aligned}$$ The $T_3$-term in [\[Tay_123\]](#Tay_123){reference-type="eqref" reference="Tay_123"} can be estimated in a similar fashion as the one in [\[T_beta1234\]](#T_beta1234){reference-type="eqref" reference="T_beta1234"}. 
We decompose the term as follows $$\begin{aligned} T_3=&\zeta' \beta\Re\int i\partial_y V f \overline{\partial_y f}dy+\zeta\beta\Re\int i \partial_{ty}V f\overline{\partial_y f}dy+\beta\zeta\Re\int i \partial_yV (\nu\partial_{yy}f-ikV f)\overline{\partial_y f}dy\\ &+\beta\zeta\Re\int i \partial_yV f \overline{ (\nu\partial_{yyy}f-ik\partial_y V f-ik V\partial_y f)}dy=:\sum_{i=1}^4 T_{3i}.\label{T_3_Tay}\end{aligned}$$ We recall the assumption [\[Taylorcnstr\]](#Taylorcnstr){reference-type="eqref" reference="Taylorcnstr"} and the definition [\[ep&t_weights\]](#ep&t_weights){reference-type="eqref" reference="ep&t_weights"} to obtain$$\begin{aligned} \label{T_312_Tay} T_{31}+T_{32}\leq &\nu^{-1}|k|^2\beta\|\partial_y V\|_{\infty}\|f \|_2\|\partial_y f\|_2+\beta\mathfrak{C}_2\nu\|f\|_2\|\partial_yf\|_2. %\\&+\beta\Re\int i \pa_yV (\nu\pa_{yy}f-ikV f)\overline{\pa_y f}dy+\beta\Re\int i \pa_yV f \overline{ (\nu\pa_{yyy}f-ik\pa_y V f-ik V\pa_y f)}dy=:\end{aligned}$$ Similar to the estimate in [\[T_beta_3\]](#T_beta_3){reference-type="eqref" reference="T_beta_3"}, we estimate the $T_{33}$-term in [\[T_3\_Tay\]](#T_3_Tay){reference-type="eqref" reference="T_3_Tay"} as follows $$\begin{aligned} T_{33}\leq& \beta\zeta\Re\int i\partial_{y}V\left(\nu\partial_{yy}f-iVkf\right)\overline{\partial_yf}dy = \beta\zeta\left(\nu\Re\int i\partial_{y}V\partial_{yy}f\overline{\partial_yf}dy + k\Re\int V\partial_{y}Vf\overline{\partial_yf}dy\right)\\ \leq& \beta\zeta \nu \| \partial_y V\|_{\infty}\|\partial_{yy} f\|_2\|\partial_y f\|_2 + \beta\zeta k\Re\int V\partial_{y}Vf\overline{\partial_yf}dy.\label{T_33_Tay}%\\ \leq &+ \beta k\Re\lan V\pa_yV f,\pa_y f\ran\end{aligned}$$ Similar to the estimate in [\[T_beta_4\]](#T_beta_4){reference-type="eqref" reference="T_beta_4"}, we invoke the boundary condition [\[bc\]](#bc){reference-type="eqref" reference="bc"} to estimate the $T_{34}$-term in [\[T_3\_Tay\]](#T_3_Tay){reference-type="eqref" reference="T_3_Tay"} as follows $$\begin{aligned} 
T_{34}\leq& \beta\zeta\nu\|\partial_{yy}V\|_\infty\|f\|_2\|\partial_{yy}f\|_2 + \beta\zeta\nu\|\partial_{y}V\|_\infty\|\partial_yf\|_2\|\partial_{yy}f\|_2 - {\beta\zeta|k|}\|\partial_{y}Vf\|_2^2 - \beta\zeta k\Re\int V\partial_{y}Vf\overline{\partial_yf}dy.\\ \label{T_34_Tay} % \leq &\beta\zeta\nu\|\pa_{yy}V\|_\infty\|f\|_2\|\pa_{yy}f\|_2 - {\beta |k|} \lf\|\pa_y Vf\rg\|_2^2 - \beta k\Re\lan V\pa_yVf,\pa_yf\ran. \end{aligned}$$ Finally, we recall the Poincaré inequality for $f\in H^1_0([-1,1])$: $$\begin{aligned} \label{Poincare} \|f\|_2\leq C\|\partial_y f\|_2.\end{aligned}$$ Now we combine the decomposition [\[T_3\_Tay\]](#T_3_Tay){reference-type="eqref" reference="T_3_Tay"}, the estimates [\[T_312_Tay\]](#T_312_Tay){reference-type="eqref" reference="T_312_Tay"}, [\[T_33_Tay\]](#T_33_Tay){reference-type="eqref" reference="T_33_Tay"} and [\[T_34_Tay\]](#T_34_Tay){reference-type="eqref" reference="T_34_Tay"} and the Poincaré inequality [\[Poincare\]](#Poincare){reference-type="eqref" reference="Poincare"} to estimate the $T_3$-term as follows $$\begin{aligned} T_3\leq& C_0\left( \nu^{-2}|k|^2\|\partial_yV\|_\infty+ \mathfrak{C}_2+\frac{\beta}{\alpha}\|\partial_y V\|_\infty^2 +\frac{\beta}{\alpha}\|\partial_{yy} V\|_\infty^2 \right)\beta \nu \|\partial_yf\|_2^2+\alpha\zeta\nu \|\partial_{yy}f\|_2^2-\frac12\beta\zeta|k|\left\|\partial_y Vf\right\|_2^2.\\\label{Tay_3}\end{aligned}$$ Here $C_0\geq 1$ is a universal constant. Now we set $$\begin{aligned} \alpha=\beta=\frac{1}{16C_0\left(1+\mathfrak{C}_2+\|\partial_y V\|_{L_{t,y}^\infty }^2+\|\partial_{yy}V\|_{L_{t,y}^\infty}^2\right)},%\quad \eta=\frac{1}{16}, \end{aligned}$$ which is consistent with [\[cnstr_ab_Taylor\]](#cnstr_ab_Taylor){reference-type="eqref" reference="cnstr_ab_Taylor"}. 
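The choice $\alpha=\beta$ above can be verified against the constraint [\[cnstr_ab_Taylor\]](#cnstr_ab_Taylor){reference-type="eqref" reference="cnstr_ab_Taylor"} numerically: with $\alpha=\beta$, the constraint reduces to $\beta\|\partial_yV\|_\infty^2\leq 1/2$. The sketch below uses arbitrary sample values for $C_0$, $\mathfrak{C}_2$, and the norms of $V$; it is only an illustration of the consistency check, not part of the proof.

```python
# Illustrative consistency check: the choice
#   alpha = beta = 1 / (16 C0 (1 + C2 + |dyV|^2 + |dyyV|^2)),  C0 >= 1,
# always satisfies (beta^2/alpha) |dyV|^2 = beta |dyV|^2 <= 1/2,
# since beta |dyV|^2 <= |dyV|^2 / (16 (1 + |dyV|^2)) < 1/16.
def beta_choice(C0, C2, dyV, dyyV):
    return 1.0 / (16 * C0 * (1 + C2 + dyV ** 2 + dyyV ** 2))

for C0 in (1.0, 5.0):
    for dyV in (0.1, 1.0, 100.0):   # arbitrary sample norm values
        b = beta_choice(C0, C2=1.0, dyV=dyV, dyyV=1.0)
        assert b * dyV ** 2 <= 0.5  # constraint with alpha = beta
```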
Combining the decomposition [\[Tay_123\]](#Tay_123){reference-type="eqref" reference="Tay_123"} and the estimates [\[Tay_1\]](#Tay_1){reference-type="eqref" reference="Tay_1"}, [\[Tay_2\]](#Tay_2){reference-type="eqref" reference="Tay_2"}, [\[Tay_3\]](#Tay_3){reference-type="eqref" reference="Tay_3"} yields that $$\begin{aligned} \frac{d}{dt}\mathcal{T}[f]\leq-\nu\|\partial_y f\|_2^2-\frac{1}{4}\beta \zeta|k|\|\partial_yV f \|_2^2.\end{aligned}$$ Thanks to the spectral inequality [\[Spec_Tay\]](#Spec_Tay){reference-type="eqref" reference="Spec_Tay"} and the equivalence [\[Tay_equiv\]](#Tay_equiv){reference-type="eqref" reference="Tay_equiv"}, we obtain that for $|k|\leq \nu$, $$\begin{aligned} \frac{d}{dt}\mathcal{T}[f]\leq&-\frac{1}{2}\nu\|\partial_y f\|_2^2 -\frac{\beta}{4}\frac{|k|^2}{\nu}\left(\frac{\nu^2}{|k|^{2}}\|\partial_y f\|_2^2+\frac{\nu}{|k|}\zeta\|\partial_y V f\|_2^2\right)\\ \leq&-\frac{1}{2}\frac{k^2}{\nu^2}\nu\|\partial_y f\|_2^2 -\frac{|k|^2}{4\nu}\beta\zeta(\|\partial_{y}f\|_{2}^2+\|\partial_y V f\|_2^2)\leq -\frac{1}{2}\frac{|k|^2}{\nu}\|\partial_yf\|_2^2-\frac{|k|^2}{\nu}\frac{\beta\zeta}{4\mathcal{C}_{spec}}\|f\|_2^2\\ \leq&-\frac{|k|^2}{\nu}\frac{\beta\zeta}{4\mathcal{C}_{spec}}\left(\|f\|_2^2+\alpha\zeta\|\partial_yf\|_2^2\right)\leq -\frac{|k|^2}{\nu}\frac{\beta\zeta}{6\mathcal{C}_{spec}}\mathcal{T}[f]=:-\delta(\beta,\mathcal{C}_{spec}^{-1})\zeta\frac{|k|^2}{\nu}\mathcal{T}[f].\label{dtT}\end{aligned}$$ **Step \# 3: Conclusion.** Without loss of generality, set $T\geq2\nu|k|^{-2}$. We observe that for $t\in[0,\nu|k|^{-2}]$, since $\frac{d}{dt}\mathcal{T}\leq 0,$ $$\begin{aligned} \mathcal{T}[f(t)]\leq \mathcal{T}[f(0)]=\|f(0)\|_{2}^2. 
\end{aligned}$$ Thanks to the estimate [\[dtT\]](#dtT){reference-type="eqref" reference="dtT"}, we have that $$\begin{aligned} \mathcal{T}[f(t)]\leq \mathcal{T}[f(\nu|k|^{-2})]\exp\left\{-\delta \frac{|k|^2}{\nu}(t-\nu|k|^{-2})\right\}\leq e\|f(0)\|_{2}^2\exp\left\{-\delta \frac{|k|^2}{\nu} t \right\},\quad \forall t\in[\nu|k|^{-2},T]. \end{aligned}$$ This concludes the proof of [\[Hypo_est_Taylor\]](#Hypo_est_Taylor){reference-type="eqref" reference="Hypo_est_Taylor"} and Theorem [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}. We remark that the time weight associated with the Taylor dispersion is $$\begin{aligned} \zeta(t)=\min\left\{1,\frac{|k|^2}{\nu}t\right\},\end{aligned}$$ and the corresponding weighted functional is $$\begin{aligned} T[f]=\|f\|_2^2+\alpha\zeta\|\partial_y f\|_2^2+\beta\zeta\Re\langle i\partial_{y}U\text{sign}{k} f,\partial_y f\rangle.\end{aligned}$$ Now the time derivative of the $\alpha$ term reads $$\begin{aligned} \frac{|k|^2}{\nu}\alpha\|\partial_y f\|_2^2= \frac{|k|^2}{\nu^2}\nu\alpha\|\partial_y f\|_2^2\leq \alpha\nu\|\partial_y f\|_2^2.\end{aligned}$$ Now the time derivative of the $\beta$ term reads as follows $$\begin{aligned} \frac{|k|^2}{\nu^2}\nu \beta\Re\langle i\partial_{y}U f,\partial_y f\rangle\leq \beta\nu\|\partial_{y}U\|_\infty\|f\|_2\|\partial_y f\|_2.\end{aligned}$$ # Technical Lemmas The proof makes use of several spectral inequalities. We present them below. **Lemma 9**. *a) Consider the periodic domain $y\in{\mathbb T}$. Assume that $U(t,y)$ has $N$ nondegenerate critical points $\{y_i(t)\}_{i=1}^N$ for $t\in[0,T]$. Moreover, there exist $N$ open neighbourhoods $B_{r}(y_i(t)),\, i=1,\cdots, N$, such that $$\begin{aligned} | \partial_y{U}(t,y)|^2 \geq& \mathfrak{C}_0^{-1} (y-y_i(t))^2,\quad \forall t\in[0,T],\quad \forall y\in B_{r}(y_i(t)),\quad \forall y_i(t)\in \left\{y\big|{\partial_y} U(t,y)=0\right\},\\ |\partial_yU(t,y)|&\in[\mathfrak C_1^{-1},\mathfrak C_1],\quad \forall y\in (\cup_{i=1}^NB_{r}(y_i(t)))^c.
\end{aligned}$$ Then for $\nu$ small enough depending on the shear $U$, there exists a constant $\mathfrak{C}_{\text{Spec}}\geq 1$ such that the following estimate holds ($\epsilon=\nu/|k|$) $$\begin{aligned} \label{spectral} \epsilon^{1/2}\|f\|_{L^2({\mathbb T})}^2\leq \epsilon\|\partial_y f\|_{L^2({\mathbb T})}^2+\mathfrak{C}_{\text{Spec}}\left\| \partial_yU(t,\cdot)f\right\|_{L^2({\mathbb T})}^2. \end{aligned}$$* *b) Consider functions $f\in H_0^1([-1,1])$. Assume that the shear flow $V(t,y)$ satisfies the conditions specified in Theorem [Theorem 4](#Taylor_thm){reference-type="ref" reference="Taylor_thm"}. Then there exists a constant $\mathcal{C}_{Spec}=\mathcal{C}_{Spec}(m_0, \mathfrak{C}_3, r_0)\ge 1$ such that the following estimate holds: $$\begin{aligned} \label{Spec_Tay} \|f\|_{L^2([-1,1])}^2\leq \mathcal{C}_{Spec}\|\partial_y f\|_{L^2([-1,1])}^2+\mathcal{C}_{Spec}\|\partial_y V(t,\cdot) f\|_{L^2([-1,1])}^2.\end{aligned}$$* *Proof.* a) A proof of this estimate is given in [@BCZ15]. For the sake of completeness, we provide a different proof here. We can apply a partition of unity $\{\chi_i\}_{i=0}^N$ to decompose the function $f=f(\chi_0+\sum_{i=1}^N\chi_i)$, where $\{\chi_i\}_{i\neq 0}$ are supported near the critical points $y_i(t)$ and $\chi_0$ is supported away from the critical points. Moreover, $\sum_{i=0}^N\|\partial_y \chi_i\|_\infty\leq C$ and the supports of $\{\chi_i\}_{i\neq 0}$ are pairwise disjoint.
Now we use the integration by parts formula $$\begin{aligned} \ensuremath{\nonumber} \epsilon^{1/2}\int_\mathbb{R}|f_i|^2 dy=& \frac{1}{2}\epsilon^{1/2}\bigg|\int_{\mathbb{R}} |f_i|^2 \frac{d^2}{dy^2}(y-y_i)^2 dy\bigg| = \epsilon^{1/2}\bigg|\int_\mathbb{R}\partial_y|f_i|^2 (y-y_i) dy\bigg|\\ \leq& {2}\mathfrak{C}_0\epsilon^{1/2}\bigg|\Re\int_\mathbb{R}\overline{f_i}\partial_y f_i |\partial_yU| dy\bigg|\leq \frac{1}{2}\epsilon\|\partial_y f_i\|_{L^2(\mathbb{R})}^2+C(\mathfrak C_0)\left\|\partial_yU f_i\right\|_{L^2(\mathbb{R})}^2,\quad i\neq 0.\label{spec_nr_crit}\end{aligned}$$ Since the supports of the cutoff functions $\chi_i,\, i\neq 0$ are disjoint, we have that $$\begin{aligned} \epsilon^{1/2}\int_{{\mathbb T}} |f(1-\chi_0)|^2dy\leq \epsilon\|\partial_y (f(1-\chi_0))\|_{L^2}^2+C(\mathfrak{C}_0)\|\partial_y U f(1-\chi_0)\|_{L^2}^2.\end{aligned}$$ We further observe that, since the $|\partial_yU|\geq c>0$ on the support of $\chi_0$, $$\begin{aligned} \epsilon^{1/2}\|f\chi_0\|_{L^2}^2\leq C\||\partial_yU|f\chi_0\|_{L^2}^2.\end{aligned}$$ Combining the above estimates, we have that $$\begin{aligned} \ensuremath{\nonumber} \epsilon^{1/2}\|f\|_{L^2}^2\leq& 2\epsilon^{1/2}\|f\chi_{0}\|_{L^2}^2+2\epsilon^{1/2}\|f(1-\chi_0)\|_{L^2}^2\leq \epsilon\|\partial_y(f(1-\chi_0))\|_{L^2}^2+C(\mathfrak{C}_0)\left\||\partial_yU| f\right\|_{L^2}^2\\ \leq&\epsilon\|\partial_yf\|_{L^2}^2+C(\mathfrak{C}_0)\left\||\partial_yU| f\right\|_{L^2}^2+\epsilon\|\partial_y\chi_0\|_{L^\infty}^2\|f\|_{L^2}^2.%&\end{aligned}$$ We can take the $\nu$ small enough so that the left-hand side absorbs the last term. This concludes the proof of the lemma. b\) The proof of the estimate [\[Spec_Tay\]](#Spec_Tay){reference-type="eqref" reference="Spec_Tay"} is similar to the proof above. 
We decompose the function $f$ as follows $$\begin{aligned} f=\sum_{i=0}^{N(t)}f\chi_i,\quad \text{support }\chi_i(t,\cdot)\in U_i:=B_{r_0/2}(y_i(t))\cap [-1,1], \quad \chi_i\in C^\infty(\mathbb{R}_+\times[-1,1]), \quad \forall i\in\{0,1,\cdots, N(t)\}.\end{aligned}$$ We observe that the sets $U_i$ are pairwise disjoint and $\text{distance}(U_i,U_j)\geq r_0>0$ for $i\neq j,\, i,j\in \{1, 2,\cdots, N(t)\}$. We further assume that $\chi_i(t,y)\equiv 1$ for $y\in B_{r_0/4}(y_i(t))\cap [-1,1]$. As a result of this choice and the condition [\[Taylorcnstr\]](#Taylorcnstr){reference-type="eqref" reference="Taylorcnstr"}, there exists a positive constant $c$, which depends on the shear flow $V$, such that $$\begin{aligned} \label{non_deg_Tay} |\partial_y V(t,y)|\geq c>0,\quad \forall y\in \text{support} \{\partial_y \chi_i(t,\cdot)\}. \end{aligned}$$ We apply the Poincaré inequality and the observation [\[non_deg_Tay\]](#non_deg_Tay){reference-type="eqref" reference="non_deg_Tay"} to obtain that for $i\in \{1,\cdots, N(t)\}$, $$\begin{aligned} \|f \chi_i\|_{L^2([-1,1])}\leq& C\|\partial_y (f\chi_i)\|_{L^2([-1,1])}\leq C\|\partial_y f\|_{L^2([-1,1])}+C\|f \partial_y \chi_i(t,\cdot)\|_{L^2([-1,1])}\\ \leq& C\|\partial_y f\|_{L^2([-1,1])}+C\|f\partial_y V(t,\cdot)\|_{L^2([-1,1])}.\end{aligned}$$ Moreover, we have that $\|f\chi_0\|_{L^2([-1,1])}\leq C\|f\partial_yV(t,\cdot)\|_{L^2([-1,1])}$ because the $\partial_y V$ is non-vanishing on the support of $\chi_0$. Through summing all the contributions, we obtain [\[Spec_Tay\]](#Spec_Tay){reference-type="eqref" reference="Spec_Tay"}. ◻ [^1]: University of South Carolina, Columbia, SC 29208, USA [`dncoble@email.sc.edu`](mailto:dncoble@email.sc.edu) [^2]: Department of Mathematics, University of South Carolina, Columbia, SC 29208, USA [`siming@mailbox.sc.edu `](mailto:siming@mailbox.sc.edu)\ **Acknowledgment.** This work was partly supported by NSF grants DMS 2006660, DMS 2304392, DMS 2006372.
--- abstract: | Despite the provocative title of this paper, it must be emphasized that no questions are raised here about the correctness of Gödel's incompleteness theorems, and there can be no denying their profound impact for almost a century on conversations ranging from the limits of mathematical inquiry to philosophical questions relating to what is fundamentally knowable and not knowable. On the contrary, this paper is only concerned with a very narrow question of whether the scope of Gödel's theorems to the foundations of mathematics is somewhat more limited than commonly recognized. Specifically, we show that recursively-enumerable theorem sets cannot include inferences based on the Law of Excluded Middle (LEM) with respect to undecidable propositions. We then discuss the implications of these results to automated theorem proving. author: - | Jeffrey Uhlmann\ Dept. of Electrical Engineering and Computer Science\ University of Missouri - Columbia title: | A Note on the Incompleteness of\ Gödel's Incompleteness Theorems --- Keywords: Gödel's Incompleteness Theorems, Formal Systems, Recursive Enumeration, Law of Excluded Middle, Automated Theorem Proving, Computer-Assisted Theorem Proving, Artificial Intelligence. # Introduction Technically, Gödel's Incompleteness theorems apply only to recursively enumerable (RE) axiomatic systems [@goedel1]. While this is formally recognized and uncontroversial, it is implicitly presumed that this class of systems effectively encompasses the core of the axiomatic foundations of mathematics. We show that it does not, and the consequences are not inconsequential artifacts of minor technicalities. The structure of this paper is as follows. We begin by informally discussing an inference rule based on the Law of Excluded Middle (LEM). We then show that this rule, LEM-Based Inference (LBI), cannot in general be accommodated unconditionally by RE-defined formal systems. 
We conclude with a discussion of the implications of this result for automated theorem proving. # LEM-Based Inference Use of the Law of Excluded Middle (LEM) as a basis for inference has the following form: $$(x \lor \lnot x) \to y,$$ where the presumption is that either $x$ or $\lnot x$ must be true; thus if both $(x\to y)$ and $(\lnot x\to y)$ can be established, then $y$ must be true. This bivalence was stated explicitly by Aristotle in the *Organon* [@organon], and its use as an inference rule has a long history. Notable examples include the following: - In 1914, Littlewood proved a result relating to the Prime Number Theorem by deriving it both from the truth of the Riemann Hypothesis and from its negation [@littlewood1; @littlewood2]. - From 1918 up to the 1930s, a collection of theorems relating to the Gauss Class Number Conjecture were proven via LBI relative to the Generalized Riemann Hypothesis [@ir1990]. - A series of results relating to Stone-Čech compactification were proven via LBI relative to the Continuum Hypothesis [@vanmill]. - In 1975, Erdős and Nicolas proved a bound on Euler's totient function using LBI relative to the Riemann Hypothesis [@nicolas1; @nicolas2]. - Fermat's Last Theorem (FLT) was proven via LBI [@lipblog] in 1995 by Andrew Wiles [@wiles] with contributions by Richard Taylor [@taylor]. LBI-based proofs can also be found in the engineering and computer science literature and are similarly not regarded as controversial. # LBI, RE, and Formal Systems We begin with the following definitions. **Definition 1**. **Axiomatic System (AS)*: An axiomatic system $\mathcal{S}$ consists of a set of axioms, along with a set of inference rules from which a set of provable theorems can be derived.* **Definition 2**.
**Recursively Enumerable (RE) Axiomatic System*: An axiomatic formal system $\mathcal{S}_{\text{\tiny RE}}$ is said to be RE if its set of theorems $T_{\text{\tiny RE}}$ is recursively enumerable, i.e., there exists a Turing machine that can enumerate all $t \in T_{\text{\tiny RE}}$.* **Definition 3**. **LBI-Accepting System*: An axiomatic system $\mathcal{S}_{\text{\tiny LBI}}$ is said to be LBI-accepting if for any formula $y$ and any proposition $x$, if $(x \lor \lnot x) \to y$ is a valid inference, then $y$ is a theorem in the system.* RE is essentially an ordered construction of a tree of theorem derivations starting from the system's set of axioms. Thus, for every theorem there exists a branch representing the application of a sequence of inference rules to a sequence of theorems established earlier in the construction that gives a proof of the theorem. This tree construction is not unique because it depends on arbitrary choices such as the specific order in which axioms, rules, and theorems are processed, but every valid RE construction will yield the same theorem set. As suggested earlier, it is widely believed that the set of theorems derivable for any reasonable axiomatic system should be recursively enumerable. This is based on the strong intuition that a Turing machine can always be defined to perform the algorithmic derivation of the system's theorem set. We now show that this intuition fails in the case of LBI-accepting systems.\  \ **Theorem**: *The theorem set for an LBI-accepting system $\mathcal{S}_{\text{\tiny LBI}}$ cannot generally be recursively enumerated*.\  \ **Proof**: Assume that the theorem set of $\mathcal{S}_{\text{\tiny LBI}}$ is RE. We know from Gödel that if $\mathcal{S}_{\text{\tiny LBI}}$ is sufficiently powerful there will exist undecidable statements $x$ such that neither $x$ nor $\lnot x$ are elements of the RE theorem set, i.e., the truth value of $x$ cannot be established by the system.
Let $y$ be an LBI-accepted theorem via $(x \lor \lnot x) \to y$. Then $y$ cannot be an element of the RE theorem set because neither $x$ nor $\lnot x$ exists as a theorem from which $y$ can be derived in a branch of the recursive enumeration of theorems. Therefore, the RE assumption must be false.\ *QED*. **Observation 1**: The set of RE axiomatic systems is a strict subset of LBI-accepting systems, which implies that the set of systems to which Gödel's incompleteness theorems apply does not generally accommodate all commonly accepted rules of inference. In other words, it is *not* the case that theorems derived about formal systems, i.e., as per Gödel, hold independently of the choice of base logic for a given system. **Observation 2**: Let $G$ be a purported Gödel statement for $\mathcal{S}_{\text{\tiny LBI}}$. The challenge posed by the main theorem is that there may exist an undecidable statement $R$ such that $(R \lor \lnot R) \to G$ according to the base logic, so Gödel's diagonalization argument, which hinges on RE, cannot go through. **Observation 3**: For a given $\mathcal{S}_{\text{\tiny LBI}}$, it is possible that the set of undecidable statements $U$ satisfies $(U_0 \lor \lnot U_0) \to U_1$, $(U_1 \lor \lnot U_1) \to U_2$, \..., $(U_i \lor \lnot U_i) \to U_0$, i.e., it is an example of a system that is complete but not RE. # Conclusion The main result of this paper is the recognition that practical axiomatic systems that admit the conventionally-accepted LEM-based inference (LBI) rule do not generally fall within the scope of Gödel's incompleteness theorems because their theorem sets cannot generally be recursively enumerated. A natural reaction to this might be to dismiss it as inconsequential unless it is possible to prove completeness and/or consistency within an LBI-accepting system containing an amount of elementary arithmetic beyond what is possible given Gödel's incompleteness theorems with respect to RE-based formal systems.
However, this would put the burden of proof in the wrong place. It should be noted that historically the RE assumption was made precisely to facilitate the proving of results about the set of theorems that could be derived by a given axiomatic system. This assumption was uncontroversial because it was believed that the application of a set of inference rules to a set of axioms could always permit the complete set of possible theorems to be algorithmically generated. Now that this is established to not generally be the case, it should be recognized that the incompleteness theorems of Gödel can technically only be taken as applying to a relatively narrow class of systems, e.g., intuitionist logics that explicitly do not accept LEM, hence LBI [@intuit1; @intuit2], to which only a small minority of mathematicians subscribe. However, we believe that additional effort -- inspired by Gödel's original pioneering vision -- can achieve new insights about $\mathcal{S}_{\text{\tiny LBI}}$. # Discussion Aristotle was the first to envision a system of logic in which all of human reasoning could be mechanized to formally establish objective truths. This is not the same, however, as saying that a mechanistic process can, in principle, *generate* all objective truths. The recursive enumeration assumption is equivalent to saying that mathematics is essentially a Platonic monolith, and mathematicians are little more than surveyors of its features. Without RE, by contrast, the activity of mathematics becomes more like that of artists who must rely on insight and imagination to identify paths to desired results. Yes, pixel values for all possible images of a given size can be algorithmically generated, as can all melodies of a given length [@knuth], but little of value can be obtained from such constructions. 
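The enumeration picture described in the preceding sections can be made concrete with a toy sketch (Python; the rule encoding and all names are ours, purely illustrative, not taken from the paper): a breadth-first closure of a finite rule system reaches every ordinarily derivable theorem at some finite stage, while a theorem justified only by an LBI step $(x \lor \lnot x) \to y$ is never reached, since neither $x$ nor $\lnot x$ ever enters the enumeration.

```python
def enumerate_theorems(axioms, rules, max_rounds=100):
    """Toy recursive enumeration: close a set of axioms under inference rules.

    `rules` is a list of (premises, conclusion) pairs; a rule fires once all
    of its premises have been derived.  Every ordinarily derivable theorem
    appears after finitely many rounds -- the content of the RE assumption.
    """
    theorems = list(axioms)
    known = set(axioms)
    for _ in range(max_rounds):
        new = [c for (ps, c) in rules if ps <= known and c not in known]
        if not new:
            break
        for c in new:
            known.add(c)
            theorems.append(c)
    return theorems

# Ordinary derivations are enumerated in full:
rules = [(frozenset({"a"}), "b"), (frozenset({"a", "b"}), "c")]
print(enumerate_theorems(["a"], rules))

# A theorem y available only via (x or not-x) -> y never appears, because
# neither branch premise is ever derived -- mirroring the main theorem:
lbi_rules = rules + [(frozenset({"x"}), "y"), (frozenset({"not_x"}), "y")]
print("y" in enumerate_theorems(["a"], lbi_rules))
```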
The goal of artists and mathematicians is to identify instances that are in some sense *special* among the vast set of all possible instances, and what is special is often what is revealed about the journey required to obtain them. LEM-based inference can be seen as providing a scattering of seeds amidst the landscape of RE theorems from which non-RE theorems can be reached. Given that the LBI landscape cannot be algorithmically generated -- which is to say that it cannot be fully explored by any deterministic search strategy -- the job of the mathematician must be to apply insight and intuition to seek out fruitful regions that are most deserving of focused exploration. With this in mind, it should be recognized that computer-assisted mathematics is not an intermediate step toward fully automated theorem proving but instead subsumes it. In other words, fully automated general theorem proving is either impossible or requires general artificial intelligence to provide the creative component, i.e., to serve as the mathematician to decide what *should* be pursued from what *can* be pursued. # References Kurt Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I," *Monatshefte für Mathematik und Physik*, v. 38 n. 1, pp. 173--198, 1931. Aristotle, *Organon*, (Andronicus of Rhodes, ed.), 9 October (CDT), 40 BC. Littlewood, J.E., "Sur la distribution des nombres premiers," *Comptes Rendus*, 158, 1914. Ingham, A.E., *The Distribution of Prime Numbers*, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 30, Cambridge University Press, 1932. Erdős, P. and Nicolas, J.L., "Répartition des nombres superabondants," *Bull. Soc. Math. France*, 79 (103): 65--90, 1975. Sárközy, A., "Jean-Louis Nicolas and the partitions," *The Ramanujan Journal*, 9 (1--2): 7--17, 2005. Ireland, Kenneth and Rosen, Michael, *A Classical Introduction to Modern Number Theory* (Second edition), New York: Springer, 1990.
van Mill, Jan, "An introduction to $\beta\omega$," in Kunen, Kenneth; Vaughan, Jerry E. (eds.), *Handbook of Set-Theoretic Topology*, North-Holland, pp. 503--560, 1984. Lipton, R.J., "Proof Checking: Not Line by Line," *Gödel's Lost Letters and P=NP* (blog), 13 June, 2020. Wiles, Andrew, "Modular elliptic curves and Fermat's Last Theorem," *Annals of Mathematics*, 141 (3): 443--551, 1995. Taylor, Richard and Wiles, Andrew, "Ring theoretic properties of certain Hecke algebras," *Annals of Mathematics*, 141 (3): 553--572, 1995. Heyting, Arend, "Die formalen Regeln der intuitionistischen Logik," (in three parts), *Sitzungsberichte der preussischen Akademie der Wissenschaften*: 42--71, 158--169, 1930. van Dalen, Dirk, "Intuitionistic Logic," in Goble, Lou, ed., *The Blackwell Guide to Philosophical Logic*, Blackwell, 2001. Knuth, Donald, "The Complexity of Songs," *SIGACT News*, 9 (2): 17--24, 1977.
--- abstract: | The Jarník-Besicovitch theorem is a fundamental result in metric number theory which concerns the Hausdorff dimension for certain limsup sets. We discuss the analogous problem for liminf sets. Consider an infinite sequence of positive integers, $S=\{q_{n}\}_{n\in\mathbb{N}}$, exhibiting exponential growth. For a given $n$-tuple of functions denoted as $\Psi:=~(\psi_1, \ldots,\psi_n)$, each of the form $\psi_{i}(q)=q^{-\tau_{i}}$ for $(\tau_{1},\dots,\tau_{n})\in\mathbb{R}^{n}_{+}$, we calculate the Hausdorff dimension of the set of points that can be $\Psi$-approximated for all sufficiently large $q\in S$. We prove this result in the generalised setting of approximation by abstract rationals as recently introduced by Koivusalo, Fraser, and Ramirez (LMS, 2023). Some of the examples of this setting include the real weighted inhomogeneous approximation, $p$-adic weighted approximation, Diophantine approximation over complex numbers, and approximation on missing digit sets. address: - Mumtaz Hussain, Department of Mathematical and Physical Sciences, La Trobe University, Bendigo 3552, Australia. - Ben Ward, Department of Mathematical and Physical Sciences, La Trobe University, Bendigo 3552, Australia. author: - Mumtaz Hussain - Benjamin Ward bibliography: - biblioo.bib date: - - title: liminf approximation sets for abstract rationals --- # Introduction Dirichlet's Theorem in Diophantine approximation asserts that for any $n$-tuple of positive real numbers $\boldsymbol{\tau}=(\tau_{1},\dots,\tau_{n})$ with $\tau_{1}+\dots+\tau_{n}=1$, every $\textbf{x}=(x_{1},\dots, x_{n})\in [0,1]^{n}$, and any $N\in\mathbb{N}$ there exists $1\leq q\leq N$ such that $$\| q x_{i}\|<N^{-\tau_{i}} \quad (1\leq i \leq n)\, ,$$ where $\|\cdot\|$ denotes the minimum distance to the nearest integer. 
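Dirichlet's theorem is easy to test numerically. The following sketch (Python; the function name, the sample point, and $N=100$ are our illustrative choices) searches for a denominator $1\leq q\leq N$ with $\|qx_{i}\|<N^{-\tau_{i}}$, whose existence the theorem guarantees when $\tau_{1}+\dots+\tau_{n}=1$:

```python
import math

def dirichlet_witness(x, taus, N):
    """Return some 1 <= q <= N with ||q * x_i|| < N^(-tau_i) for all i,
    which Dirichlet's theorem guarantees to exist when sum(taus) == 1."""
    def dist_to_nearest_int(t):
        return abs(t - round(t))
    for q in range(1, N + 1):
        if all(dist_to_nearest_int(q * xi) < N ** (-ti) for xi, ti in zip(x, taus)):
            return q
    return None  # cannot happen when sum(taus) == 1, by the theorem

x = (math.sqrt(2) - 1, math.sqrt(3) - 1)  # an irrational sample point in [0,1]^2
q = dirichlet_witness(x, (0.5, 0.5), 100)
print(q)
```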
A corollary one can deduce from Dirichlet's theorem is that for every irrational $\textbf{x}\in [0,1]^{n}$ there exists an infinite sequence of integers $\{q_{j}(\textbf{x})\}_{j\in\mathbb{N}}$ such that $$\| q_{j}(\textbf{x})x_{i}\|<q_{j}(\textbf{x})^{-\tau_{i}} \quad (1\leq i\leq n)\, ,$$ for every $j \in \mathbb{N}$. In order for the statement to remain true for all $\textbf{x}\in [0,1]^{n}$, one cannot increase the sum of the components of $\boldsymbol{\tau}$ beyond $1$. Indeed, if $$\tau_{1}+\dots + \tau_{n} >1\, ,$$ then an easy application of the Borel-Cantelli lemma implies that Lebesgue-almost no points in $[0,1]^{n}$ can be $\boldsymbol{\tau}$-approximated by rationals infinitely often. However, the following was proven by Rynne [@R98].\ **Theorem 1** (Rynne, 1998). *For an $n$-tuple $\boldsymbol{\tau}$ with $\tau_{1}+\dots + \tau_{n}>1$ we have $$\dim_{\mathcal{H}}\left\{ \textbf{x}\in[0,1]^{n}:\begin{array}{c}\|q x_{i}\|<q^{-\tau_{i}} \quad (1\leq i \leq n) \\ \text{ for infinitely many } q \in \mathbb{N}\end{array} \right\}=\min_{1\leq j \leq n} \left\{\frac{n+1+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\}\, ,$$ where $\dim_{\mathcal{H}}$ denotes the Hausdorff dimension.*
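Each condition above confines $\textbf{x}$ to a thin layer around rationals with denominator $q_{j}$, and the Lebesgue measure of a single layer is elementary to compute (a minimal Python sketch; the function name is ours): in each coordinate the condition $\|qx_{i}\|<q^{-\tau_{i}}$ carves out intervals of total length $2q^{-\tau_{i}}$, so the layer measures tend to $0$ along any unbounded sequence.

```python
def layer_measure(q, taus):
    """Lebesgue measure of {x in [0,1]^n : ||q x_i|| < q^(-tau_i), 1 <= i <= n}.

    Per coordinate: intervals of radius q^(-(1 + tau_i)) around the points
    p/q, of total length 2 * q^(-tau_i); the coordinates are independent,
    so the measures multiply.
    """
    measure = 1.0
    for tau in taus:
        measure *= min(1.0, 2.0 * q ** (-tau))
    return measure

# The layers shrink even in the borderline case tau_1 + tau_2 = 1:
print([layer_measure(q, (0.5, 0.5)) for q in (10, 100, 1000)])
```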
This set can trivially be seen to be a null set in terms of Lebesgue measure (even when $\tau_{1}+\dots +\tau_{n}=1$) by considering the measure of each individual layer and taking the limit as $j \to \infty$. For the Hausdorff dimension, a corollary of our main result (stated in subsection [1.3](#mainresults){reference-type="ref" reference="mainresults"}) gives us the following. **Theorem 2**. *Let $\{q_{j}\}_{j\in\mathbb{N}}$ be an infinite sequence of positive integers and $\boldsymbol{\tau}=(\tau_{1},\dots, \tau_{n})\in~\mathbb{R}^{n}_{+}$ an $n$-tuple of positive real numbers. Suppose that $$\label{difference_interesting} \lim_{j\to \infty} \frac{\log q_{j}}{\log q_{j-1}}=k>1$$ exists and $\max\limits_{1\leq i \leq n}\tau_{i}<k-1<\infty$. Then $$\dim_{\mathcal{H}}\left\{ \textbf{x}\in [0,1]^{n}: \hspace{-0.2cm}\begin{array}{l} \|q_{j}x_{i}\|<q_{j}^{-\tau_{i}} \quad (1\leq i \leq n) \\ \text{ \rm for all sufficiently large } j \in \mathbb{N}\end{array} \right\} = \min_{1\leq j \leq n}\left\{\frac{n-\frac{1}{k-1}\sum\limits_{i=1}^{n}\tau_{i} +\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\} \, .$$* *Remark 3*.
As an idea of the sorts of sequences $S$ one can choose satisfying [\[difference_interesting\]](#difference_interesting){reference-type="eqref" reference="difference_interesting"} note that for any $(a,b)\in\mathbb{N}_{>1}^{2}$ the sequence $\{a^{b^{c}}\}_{c\in\mathbb{N}}$ has $$\lim_{j\to \infty} \frac{\log q_{j}}{\log q_{j-1}}=b\, .$$ Let $$\mathcal{F}=\left\{ \{a^{b^{c}}\}_{c \in \mathbb{N}} : (a,b)\in\mathbb{N}_{>1}^{2} \right\}\, .$$ Then Theorem [Theorem 2](#interesting?){reference-type="ref" reference="interesting?"} and the countable stability of the Hausdorff dimension (see Section [3.1](#prelims){reference-type="ref" reference="prelims"} for more details) gives us that for any $n$-tuple $\boldsymbol{\tau}$ of finite positive real numbers $$\dim_{\mathcal{H}}\left\{ \textbf{x}\in [0,1]^{n}: \, \exists\, S \in \mathcal{F}\text{ s.t. } \hspace{-0.2cm}\begin{array}{l} \|q_{j}x_{i}\|<q_{j}^{-\tau_{i}} \quad (1\leq i \leq n) \\ \text{ \rm for all sufficiently large } j \in \mathbb{N}\end{array} \right\} = \min_{1\leq j \leq n}\left\{\frac{n +\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\} \, .$$ *Remark 4*. Compared to the Theorem of Rynne, note that the set defined in Theorem [Theorem 2](#interesting?){reference-type="ref" reference="interesting?"} is of strictly smaller dimension, even if the sequence $\{q_{j}\}_{j\in\mathbb{N}}$ is taken to be increasingly sparse. 
In fact, one can deduce from [@R98] that for an $n$-tuple $\boldsymbol{\tau}$ with $\tau_{1}+\dots +\tau_{n}>0$ and a sequence $S$ satisfying [\[difference_interesting\]](#difference_interesting){reference-type="eqref" reference="difference_interesting"}, $$\dim_{\mathcal{H}}\left\{ \textbf{x}\in [0,1]^{n}: \begin{array}{c} \|q_{j}x_{i}\|<q_{j}^{-\tau_{i}} \quad (1\leq i \leq n)\\ \text{ for infinitely many } j\in \mathbb{N}\end{array} \right\} = \min_{1\leq j \leq n}\left\{\frac{n +\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\} \, .$$ This follows from [@R98 Theorem 1] and the observation that the value $v(Q)$ ($v(S)$ in our case) appearing in [@R98 Theorem 1] is equal to zero for the sequences we are considering. *Remark 5*. Theorem [Theorem 2](#interesting?){reference-type="ref" reference="interesting?"} is an extension of the recent work of the first-named author and Shi, who proved the result in the non-weighted setting [@HussainShi23]. Theorem [Theorem 2](#interesting?){reference-type="ref" reference="interesting?"} is a corollary of our main result, which we state in the next section. In particular, we do not require the limit [\[difference_interesting\]](#difference_interesting){reference-type="eqref" reference="difference_interesting"} to exist; we only need to calculate the $\liminf$ of the sequence. See Theorem [Theorem 6](#main){reference-type="ref" reference="main"} and Theorem [Theorem 10](#real_corollary){reference-type="ref" reference="real_corollary"} for more details. In order to state our main theorem we recall and expand upon the definition of *abstract rationals* as introduced in [@FKR23]. We then recall the notation of [@HussainShi23] on the exponential shrinking problem in the non-weighted case and redefine these sets for abstract rationals. Our main results are then given, which are followed by a series of applications.
These include the real weighted inhomogeneous approximation, $p$-adic weighted approximation, complex Diophantine approximation, and approximation on missing digit sets. In §3-5 we prove our main results. ## Abstract rationals Fix $n\in\mathbb{N}$ and for each $1\leq i \leq n$, let $(F_{i},d_{i},\mu_{i})$ be a non-empty totally bounded metric space equipped with a $\delta_{i}$-Ahlfors regular measure $\mu_{i}$ with support $F_{i}$. That is, for any $x_{i}\in F_{i}$ and $0<r<r_{0}$ for some bounded $r_{0}$ there exist constants $0<c_{1,i}\leq c_{2,i}< \infty$ such that $$c_{1,i} r^{\delta_{i}} \leq \mu_{i}(B_{i}(x_{i},r)) \leq c_{2,i} r^{\delta_{i}},$$ where for any $x_{i}\in F_{i}$ and $r>0$ we write $$B_{i}(x_{i},r)=\{y\in F_{i}: d_{i}(x_{i},y)<r\}\, .$$ Let $$F=\prod_{i=1}^{n}F_{i}, \quad d(\cdot,\cdot)=\max_{1\leq i\leq n} d_{i}(\cdot,\cdot), \quad \mu=\prod_{i=1}^{n} \mu_{i}$$ so that $(F,d,\mu)$ is the product metric space. For any two subsets $A,B\subset F_{i}$ by $d_{i}(A,B)$ we mean $$d_{i}(A,B)=\inf\{d_{i}(a,b): a\in A, b\in B\},$$ and for any $\textbf{x}=(x_{1},\dots, x_{n}) \in F$ and $r>0$ we write $$B((x_{1},\dots, x_{n}),r)=\prod_{i=1}^{n}B_{i}(x_{i},r).$$ Let $\mathcal{N}$ be an infinite countable set, and let $\beta:\mathcal{N}\to \mathbb{R}_{+}$ be a function which associates to each $\alpha\in\mathcal{N}$ a weight $\beta(\alpha)=\beta_{\alpha}$. In line with [@FKR23], for each $1\leq i\leq n$ and each $q\in\mathcal{N}$ define the *$\beta$-abstract rationals of level $q$ in $F_{i}$* by fixing a *maximal $\beta_{q}^{-1}$-separated* set of points $P_{i}(q) \subset F_{i}$, where - *$\beta_{q}^{-1}$-separated* means that for all distinct $p_{1},p_{2} \in P_{i}(q)$ we have that $d_{i}(p_{1},p_{2})\geq \beta_{q}^{-1}$, and - *maximal* means that for all $x \in F_{i}$ there exists $p \in P_{i}(q)$ such that $d_{i}(p,x)<\beta_{q}^{-1}$.
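These two conditions are easy to realise by a greedy construction. A minimal sketch (ours, not from [@FKR23]; the grid and the value $\beta_{q}^{-1}=1/5$ are arbitrary illustrative choices) builds a maximal $\beta_{q}^{-1}$-separated subset of a discretised $F_{i}=[0,1]$:

```python
def maximal_separated(points, delta):
    """Greedily select a maximal delta-separated subset of `points`.

    Greedy selection gives both properties from the definition above:
    the chosen points are pairwise at least delta apart (separated), and
    every input point lies within delta of a chosen one (maximal).
    """
    chosen = []
    for p in points:
        if all(abs(p - c) >= delta for c in chosen):
            chosen.append(p)
    return chosen

# Discretised F_i = [0, 1] with beta_q^{-1} = 1/5 (illustrative values):
grid = [i / 1000 for i in range(1001)]
P = maximal_separated(grid, 1 / 5)
print(P)
```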
That is, all the points are reasonably separated from each other and we cannot fit any more level $q$ abstract rationals without being too close to an already present abstract rational. Let $P(q)=\prod_{i=1}^{n}P_{i}(q)$ denote the product of abstract rationals of level $q$ in $F$, and let $$\mathcal{Q}=\bigcup_{q\in\mathcal{N}}P(q)$$ be the set of *$\beta$-abstract rationals in $F$*. As an example consider $$\begin{aligned} F_{i}=&\,[0,1]^{d}\, , \quad d_{i}=|\cdot| \text{ the maximum norm}, \\ \mathcal{N}=&\,\mathbb{N}\, , \quad \quad \beta(q)=q\, , \\ P_{i}(q)&=\left\{\left(\frac{p_{1}}{q},\dots,\frac{p_{d}}{q}\right): 0\leq p_{1}, \dots, p_{d} \leq q\right\}=\frac{1}{q}\mathbb{Z}^{d} \cap [0,1]^{d} \, , \quad (1\leq i \leq n)\, .\end{aligned}$$ Then $\mathcal{Q}=\mathbb{Q}^{dn}\cap [0,1]^{dn}$. Generally one can take each $F_{i}$ to be any bounded convex body in $\mathbb{R}^{d}$, and the lattice $\frac{1}{q}\mathbb{Z}^{d}$ can be shifted by any vector. ## Liminf approximation sets In [@HussainShi23], Hussain and Shi introduced an exponential shrinking problem stated as follows. Given a sequence $S=\{q_{j}\}_{j\in\mathbb{N}}$ of positive integers, fixed $\theta=(\theta_{1},\dots, \theta_{n}) \in [0,1]^{n}$, and $\tau \geq 1$, consider the set $$\Lambda_{\mathbb{Q}^{n}}^{S}(\tau):=\left\{ \textbf{x}\in [0,1]^{n}: \max_{1\leq i\leq n}||q_{j}x_{i}-\theta_{i}||<q_{j}^{-\tau} \quad \text{ for all } j\geq 1 \right\}.$$ This set was introduced to answer a question related to a problem posed by Schleischitz in [@SchleischitzSelecta]. Under certain conditions on $\tau$ and $S$, they provide the exact Hausdorff dimension of the set $\Lambda_{\mathbb{Q}^{n}}^{S}(\tau)$. In this article, we generalise the above setting by considering weighted approximation (the approximation function can vary between coordinate axes) by abstract rationals (which includes approximation by rationals as a special case). 
Fix a sequence $S=\{q_{j}\}_{j\in\mathbb{N}}$ with each $q_{j}\in\mathcal{N}$ and a weight vector $\boldsymbol{\tau}=(\tau_{1},\dots, \tau_{n})\in\mathbb{R}^{n}_{+}$. Define the set $$\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau}):=\left\{ \textbf{x}\in F : \begin{array}{c} d_{i}(x_{i},P_{i}(q_{j}))<\beta(q_{j})^{-\tau_{i}} \quad (1\leq i \leq n) \\ \text{ for all } \, j \in \mathbb{N}\end{array} \right\},$$ and the set $$\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau}):=\left\{ \textbf{x}\in F : \begin{array}{c}d_{i}(x_{i},P_{i}(q_{j}))<\beta(q_{j})^{-\tau_{i}} \quad (1\leq i \leq n)\\ \text{ for all sufficiently large } \, j \in \mathbb{N}\end{array} \right\}.$$ The latter set is a slight relaxation of the former set since we only require the points to be eventually always close to a sequence of abstract rationals. We may write the latter set in terms of the former by defining for any $t\in\mathbb{N}$ $$\sigma^{t} S=\{q_{i+t}\}_{i\in \mathbb{N}},$$ that is, $\sigma$ is the left shift on the sequence $S$. Then $$\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau}) = \bigcup_{t\in\mathbb{N}} \Lambda_{\mathcal{Q}}^{\sigma^{t} S}(\boldsymbol{\tau}).$$ ## Main results {#mainresults} We prove the following results on the Hausdorff dimension of the sets introduced above. **Theorem 6**. *Let $(F,d,\mu)$ be a product space of non-empty totally bounded metric spaces each equipped with Ahlfors regular measures. Let $\mathcal{N}$ be an infinite countable set, $\beta:\mathcal{N}\to \mathbb{R}_{+}$, and $\mathcal{Q}$ be a set of $\beta$-abstract rationals in $F$. Fix an infinite sequence $S$ contained in $\mathcal{N}$ over which $\beta$ is unbounded and strictly increasing.
Suppose that $$\inf_{j\in \mathbb{N}} \frac{\log \beta(q_{j})}{\log \beta(q_{j-1})}=h_{S}>1, \, \, \text{ and } \,\, \liminf_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}\log \beta(q_{i})}{\log \beta(q_{j})}=\alpha_{S}.$$ For any $\boldsymbol{\tau}$ such that $h_{S}>\tau_{i}>1$ for each $1\leq i \leq n$, we have that $$\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})=\min_{1\leq k \leq n}\left\{ \frac{1}{\tau_{k}}\left(\sum_{i=1}^{n}\delta_{i} -\alpha_{S}\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i} + \sum_{j:\tau_{k}\geq \tau_{j}}(\tau_{k}-\tau_{j})\delta_{j} \right)\right\}.$$* *Remark 7*. The infimum condition of Theorem [Theorem 6](#main){reference-type="ref" reference="main"} cannot be replaced by a $\liminf$ condition. To see this consider for example $F=[0,1]$, $\beta(q)=q$, $$\mathcal{Q}=\left\{\frac{p+\theta}{q}: (p,q)\in\mathbb{Z}\times \mathbb{N}\text{ and } \frac{p+\theta}{q} \in [0,1]\right\},$$ and the sequence $$S=\{2,3,3^{h},3^{h^{2}}, \dots \}.$$ For large $h$ and suitable choice of $\theta\in [0,1]$ it can be shown that the sets of points $$\begin{aligned} &\{x\in[0,1]:\|2x+\theta\|<2^{-h+1}\} \text{ and } \\ &\{x\in[0,1]:\|3x+\theta\|<3^{-h+1}\} \end{aligned}$$ are disjoint and so $\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})=\emptyset$. *Remark 8*. In order to state our result for the general case of abstract rationals, it is necessary to suppose each $\tau_{i}<h_{S}$. Without this assumption additional considerations are necessary. See [@HussainShi23 Remark 1.2] for a discussion on complications in the case of real simultaneous approximation (non-weighted). The infimum condition can be replaced by a $\liminf$ condition if we consider the set $\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$. **Theorem 9**. *Let $(F,d,\mu)$, $\mathcal{N}$, $\beta$, and $\mathcal{Q}$ be constructed as above. 
Fix an infinite sequence $S$ contained in $\mathcal{N}$ over which $\beta$ is unbounded and strictly increasing, and suppose that $$\liminf_{j\to \infty} \frac{\log \beta(q_{j})}{\log \beta(q_{j-1})}=h>1, \, \, \text{ and } \,\, \liminf_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}\log \beta(q_{i})}{\log \beta(q_{j})}=\alpha_{S}.$$ For any $\boldsymbol{\tau}$ such that $h>\tau_{i}>1$ for each $1\leq i \leq n$, we have that $$\dim_{\mathcal{H}}\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau})=\min_{1\leq k \leq n}\left\{ \frac{1}{\tau_{k}}\left(\sum_{i=1}^{n}\delta_{i} -\alpha_{S}\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i} + \sum_{j:\tau_{k}\geq \tau_{j}}(\tau_{k}-\tau_{j})\delta_{j} \right)\right\}.$$* It should be noted that Theorem [Theorem 9](#corollary){reference-type="ref" reference="corollary"} essentially follows from Theorem [Theorem 6](#main){reference-type="ref" reference="main"} and the countable stability of the Hausdorff dimension. For completeness, we provide the proof at the end of §[3.4](#Section:CorollaryProof){reference-type="ref" reference="Section:CorollaryProof"}. The research of both authors is supported by the Australian Research Council discovery project 200100994. # Applications We begin with the classical setting of real approximation by rational numbers, which we generalise to the weighted inhomogeneous setting. We then give similar statements in the case of $p$-adic approximation. In later applications, we give the statement in the simplified one-dimensional homogeneous setting. It should be clear from the application in the real weighted inhomogeneous setting that the one-dimensional case readily generalises to the higher dimensional weighted setting. Indeed, the only calculation required to apply Theorems [Theorem 6](#main){reference-type="ref" reference="main"}--[Theorem 9](#corollary){reference-type="ref" reference="corollary"} is to show that our setup aligns with some approximation by abstract rationals.
Once this is done in one dimension it is clear that the product space also satisfies the criteria of abstract rationals. The notion of abstract rationals admits a broad range of applications, though in some instances it is not immediately clear whether a set satisfies the properties of being abstract rationals. The main work in each of these applications is constructing the sets of abstract rationals. The list of applications presented below is far from exhaustive. For example, in increasing levels of difficulty, one could consider formal power series approximation, approximation by irrational rotations, and approximation of real manifolds by rational numbers. The latter two cases seem particularly challenging. ## Real approximation The classical study of approximation of real numbers by rationals is extensive; see [@BRV16] for a survey of the foundational results and [@HY14; @KW23; @KTV06; @R98; @WW19] for the more recent weighted analogues of such results. In this section let $$\begin{aligned} F_{i}=[0,1]\, , \quad d_{i}=&|\cdot|\, , \quad \mu_{i}=\lambda\, , \\ \text{ so } \, (F,d,\mu)=&([0,1]^{n},|\cdot|,\lambda_{n})\, , \end{aligned}$$ for $|\cdot|$ the usual max norm on real space, $\lambda$ the Lebesgue measure, and $\lambda_{n}$ the $n$-dimensional Lebesgue measure. Let $$\begin{aligned} \mathcal{N}=&\mathbb{N}\, , \quad \quad \beta(q)=q\, , \quad \quad \theta:\mathcal{N}\to [0,1]^{n}\, , \text{ and } \\ P_{i}(q)=&\left\{ \frac{p+\theta_{i}(q)}{q}: p \in\mathbb{Z}\, \, \, \text{ and } \, \, \frac{p+\theta_{i}(q)}{q}\in [0,1] \right\} \, , \, \, \, ( 1 \leq i \leq n )\end{aligned}$$ be the $q^{-1}$-abstract rationals of level $q$ in $[0,1]$. Observe that each $P_{i}(q)$ can be seen as a subset of the shifted lattice $\tfrac{1}{q}\left(\mathbb{Z}+ \theta_{i}(q)\right)$, thus it is clear that distinct points are $q^{-1}$-separated, and furthermore the set is maximal.
Note that we only have to show that each $P_{i}(q)$ is a well-defined set of $q^{-1}$-abstract rationals of level $q$ in $[0,1]$. The higher dimensional product space result follows immediately. Hence $$\mathcal{Q}= \bigcup_{q\in\mathbb{N}} \prod_{i=1}^{n}\left\{ \frac{p+\theta_{i}(q)}{q}: p \in\mathbb{Z}\, \, \, \text{ and } \, \, \frac{p+\theta_{i}(q)}{q}\in [0,1] \right\}$$ is a well-defined set of abstract rationals. For $\theta(q)=0$ for all $q\in \mathbb{N}$ this is the standard homogeneous setting, and for $\theta(q)=(\theta_{1},\dots, \theta_{n})$ fixed this is the standard inhomogeneous setting. Let $S=\{q_{i}\}_{i\in\mathbb{N}}$ be an increasing sequence of positive integers and define the sets $$\begin{aligned} W_{n}^{S}(\boldsymbol{\tau})=\left\{ \textbf{x}\in [0,1]^{n} : \begin{array}{c}\left\| q_{j}x_{i}-\theta(q_{j})_{i}\right\|<q_{j}^{-\tau_{i}}\, \quad (1\leq i \leq n) \\ \quad \text{ for all } j \in \mathbb{N}\end{array} \right\}\, , \\ \widehat{W}_{n}^{S}(\boldsymbol{\tau})=\left\{ \textbf{x}\in [0,1]^{n} : \begin{array}{c}\left\| q_{j}x_{i}-\theta(q_{j})_{i}\right\|<q_{j}^{-\tau_{i}}\, \quad (1\leq i \leq n) \\ \quad \text{ for all sufficiently large } j \in \mathbb{N}\end{array} \right\}\, .\end{aligned}$$ Note that dividing through by $q_{j}$ in the inequalities in $W_{n}^{S}(\boldsymbol{\tau})$ gives us $W_{n}^{S}(\boldsymbol{\tau})=\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau}+1)$, and similarly $\widehat{W}_{n}^{S}(\boldsymbol{\tau})=\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau}+1)$. Notice that, by the choice $\beta(q)=q$, for any strictly increasing sequence of positive integers $S$ we immediately have that $\beta$ is unbounded and strictly increasing. Applying Theorem [Theorem 6](#main){reference-type="ref" reference="main"} to this setting we immediately have the following. **Theorem 10**.
*Let $S$ be an increasing sequence of integers with $$\inf_{j\in \mathbb{N}} \frac{\log q_{j}}{\log q_{j-1}}=h_{S}>1, \, \, \text{ and } \,\, \liminf_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}=\alpha_{S}.$$ For any $\boldsymbol{\tau}$ such that $h_{S}-1>\tau_{i}>0$ for each $1\leq i \leq n$, we have that $$\dim_{\mathcal{H}}W_{n}^{S}(\boldsymbol{\tau})=\min_{1\leq k \leq n} \left\{\frac{1}{\tau_{k}+1}\left(n-\alpha_{S}\sum_{i=1}^{n}\tau_{i} + \sum_{i:\tau_{k}\geq \tau_{i}} (\tau_{k}-\tau_{i}) \right)\right\}.$$* *Remark 11*. This result is a generalisation of [@HussainShi23 Theorem 1.1] to the weighted setting; there it was proven that, for $\tau=\tau_{1}=\dots=\tau_{n}$ with $h_{S}-1>\tau>0$, $$\dim_{\mathcal{H}}W^{S}_{n}(\tau)=\frac{n}{\tau+1}(1-\alpha_{S}\tau),$$ which agrees with the theorem above. Unlike [@HussainShi23], in our setting the inhomogeneity $\theta$ is allowed to vary over the sequence $S$. By applying Theorem [Theorem 9](#corollary){reference-type="ref" reference="corollary"} instead of Theorem [Theorem 6](#main){reference-type="ref" reference="main"} we have the following result. **Theorem 12**. *Let $S$ be an increasing sequence of integers with $$\liminf_{j\to \infty} \frac{\log q_{j}}{\log q_{j-1}}=h_{S}>1, \, \, \text{ and } \,\, \liminf_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}=\alpha_{S}.$$ For any $\boldsymbol{\tau}$ such that $h_{S}-1>\tau_{i}>0$ for each $1\leq i \leq n$, we have that $$\dim_{\mathcal{H}}\widehat{W}_{n}^{S}(\boldsymbol{\tau})=\min_{1\leq k \leq n} \left\{\frac{1}{\tau_{k}+1}\left(n-\alpha_{S}\sum_{i=1}^{n}\tau_{i} + \sum_{i:\tau_{k}\geq \tau_{i}} (\tau_{k}-\tau_{i}) \right)\right\}.$$* In later applications we will only give results corresponding to setups of the form $\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$. It should be clear that the statements relating to $\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$ follow immediately.
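The dimension formula in Theorem 10 is straightforward to evaluate. A small sketch (Python; the function name is ours) transcribes the right-hand side and checks that equal weights recover the non-weighted expression $\frac{n}{\tau+1}(1-\alpha_{S}\tau)$ recalled in Remark 11:

```python
def dim_W(taus, alpha_S):
    """Right-hand side of the weighted dimension formula: the minimum over k of
    (n - alpha_S * sum(tau) + sum_{i: tau_k >= tau_i}(tau_k - tau_i)) / (tau_k + 1).
    Assumes the hypotheses of the theorem (h_S - 1 > tau_i > 0) hold."""
    n = len(taus)
    values = []
    for tau_k in taus:
        correction = sum(tau_k - tau_i for tau_i in taus if tau_k >= tau_i)
        values.append((n - alpha_S * sum(taus) + correction) / (tau_k + 1))
    return min(values)

# Equal weights collapse to the non-weighted formula n(1 - alpha_S*tau)/(tau + 1):
n, tau, alpha_S = 3, 0.4, 0.25
assert abs(dim_W((tau,) * n, alpha_S) - n * (1 - alpha_S * tau) / (tau + 1)) < 1e-12

# A genuinely weighted example:
print(dim_W((0.3, 0.5), 0.2))
```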
## $p$-adic weighted approximation Fix a prime number $p$ and let $$\begin{aligned} F=\mathbb{Z}_{p}\, , \quad d=&|\cdot|_{p}\, , \quad \mu=\mu_{p}\, , \end{aligned}$$ for $\mathbb{Z}_{p}$ the ring of $p$-adic integers, $|\cdot|_{p}$ the $p$-adic norm, and $\mu_{p}$ the $p$-adic Haar measure normalised so that $\mu_{p}(\mathbb{Z}_{p})=1$. For metric properties of the classical sets of Diophantine approximation in $p$-adic space see [@A95; @J45; @L55], and for the more recent weighted setting see [@BLW21b; @GHSW23; @KTV06]. Due to the ultrametric properties of $p$-adic space, it is slightly more complicated to construct layers of abstract rationals. We opt for the following setup. Let $$\mathcal{N}=\{p^{k}: k\in\mathbb{N}\}\, , \quad \text{ and } \quad P(q)=\left\{\frac{a}{q-1}: 1\leq a \leq q \right\}\, .$$ Note that for $\frac{a}{q-1},\frac{a'}{q-1}\in P(q)$ with $a\neq a'$ we have that $$\left| \frac{a}{q-1}-\frac{a'}{q-1}\right|_{p}=|q-1|_{p}^{-1}|a-a'|_{p}= |a-a'|_{p} \geq p^{-k}=|q|^{-1},$$ and so $P(q)$ is $q^{-1}$-separated for each $q\in\mathcal{N}$. Thus define $\beta(q)=|q|$. To show each set of $\beta$-abstract rationals of level $q$ is maximal, observe that there are $p^{k}$ points contained in $P(p^{k})$ and each of these lies in $\mathbb{Z}_{p}$ by the coprimality of $(p^{k}-1,p)$. The $p$-adic balls $$\label{p-adic cover} \bigcup_{\frac{a}{p^{k}-1}\in P(p^{k})}B\left(\frac{a}{p^{k}-1}, p^{-k}\right)$$ are disjoint and furthermore are a cover of $\mathbb{Z}_{p}$. To see this, write each $x\in\mathbb{Z}_{p}$ via its $p$-adic expansion $$x=\sum_{i=0}^{\infty}x_{i}p^{i}\, , \quad x_{i}\in \{0,1,\dots , p-1\}.$$ Since the points of $P(p^{k})$ are $p^{-k}$-separated, their corresponding $p$-adic expansions must differ over the first $k$ coefficients. There are $p^{k}$ different $p$-adic expansions over the first $k$ coefficients, so the $p^{k}$ abstract rationals of level $p^{k}$ cover each possible expansion.
Thus any $x\in \mathbb{Z}_{p}$ belongs to some ball of the union above, and so every $x\in\mathbb{Z}_{p}$ is $p^{-k}$-close to an abstract rational of level $p^{k}$. Let $\mathcal{Q}_{p}=\bigcup_{q\in\mathcal{N}}P(q) \subset \mathbb{Z}_{p}$. For $\tau\in\mathbb{R}_{+}$ and a sequence $S=\{p^{k_{j}}\}_{j\in\mathbb{N}}$ with $k_{j}\in\mathbb{N}$ increasing define $$\mathcal{W}_{\mathbb{Z}_{p}}^{S}(\tau)=\left\{ x \in \mathbb{Z}_{p}: \left|x-\frac{a}{p^{k_{j}}-1}\right|_{p}<p^{-k_{j}\tau} \, \ \text{ for some } \frac{a}{p^{k_{j}}-1}\in P(p^{k_{j}}) \text{ and for all } j\in \mathbb{N}\right\}.$$ Note that $\mathcal{W}_{\mathbb{Z}_{p}}^{S}(\tau)=\Lambda_{\mathcal{Q}_{p}}^{S}(\tau)$, and observe that since the sequence $S$ is strictly increasing, $\beta(q)=|q|$ is unbounded and strictly increasing on $S$. Hence applying Theorem [Theorem 6](#main){reference-type="ref" reference="main"} we have the following. **Theorem 13**. *Let $S=\{p^{k_{j}}\}_{j\in\mathbb{N}}$ be a sequence of positive integers with $k_{j}\in\mathbb{N}$ increasing and let $$\inf_{j\in \mathbb{N}} \frac{k_{j}}{k_{j-1}}=h_{S}>1, \, \, \text{ and } \,\, \liminf_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}k_{i}}{ k_{j}}=\alpha_{S}.$$ For any $\tau$ such that $h_{S}>\tau>1$ we have that $$\dim_{\mathcal{H}}\mathcal{W}_{\mathbb{Z}_{p}}^{S}(\tau) = \frac{1}{\tau}\left(1-\alpha_{S}(\tau-1) \right)\, .$$* For completeness in this application, we also provide the weighted result.
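Before the weighted statement, the separation property of the sets $P(p^{k})$ used above can be verified numerically for small parameters; a minimal sketch (the helper functions are ours, not from the paper):

```python
from fractions import Fraction

def padic_norm(x: Fraction, p: int) -> float:
    # |x|_p = p^{-v}, where v is the power of p dividing x; |0|_p = 0.
    if x == 0:
        return 0.0
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

def min_separation(p: int, k: int) -> float:
    # Minimal pairwise p-adic distance between the points a/(p^k - 1),
    # 1 <= a <= p^k, i.e. the points of P(p^k).
    q = p ** k
    pts = [Fraction(a, q - 1) for a in range(1, q + 1)]
    return min(
        padic_norm(pts[i] - pts[j], p)
        for i in range(len(pts))
        for j in range(i + 1, len(pts))
    )

# P(p^k) is p^{-k}-separated, e.g. for p = 3, k = 2.
assert min_separation(3, 2) >= 3.0 ** (-2)
```

Since $p\nmid p^{k}-1$, the denominator contributes nothing to the $p$-adic norm, exactly as in the displayed computation above.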
Redefine $$\begin{aligned} F&=\mathbb{Z}_{p}^{n}\, ,\quad \quad d=\max_{1\leq i \leq n} |\cdot |_{p}\, ,\quad \quad \mu=\mu_{p,n}=\prod_{i=1}^{n}\mu_{p}\, ,\\ \mathcal{N}&=\{p^{k}: k\in\mathbb{N}\} \, , \quad P(q)=\prod_{i=1}^{n}P_{i}(q)=\prod_{i=1}^{n}\left\{\frac{a_{i}}{q-1}:1\leq a_{i}\leq q\right\}\, ,\\ \mathcal{Q}_{p,n}&=\bigcup_{q\in\mathcal{N}}P(q)\subset \mathbb{Z}_{p}^{n}\, .\end{aligned}$$ By the above calculations in the one-dimensional case each $P_{i}(q)$ is a set of $\beta$-abstract rationals of level $q$ in $\mathbb{Z}_{p}$, so $P(q)=\prod_{i=1}^{n}P_{i}(q)$ is a set of $\beta$-abstract rationals of level $q$ in $\mathbb{Z}_{p}^{n}$. For an $n$-tuple $\boldsymbol{\tau}\in\mathbb{R}^{n}_{+}$ and a sequence $S=\{p^{k_{j}}\}_{j\in\mathbb{N}}$ with $k_{j}$ increasing let $$\mathcal{W}_{\mathbb{Z}_{p}^{n}}^{S}(\boldsymbol{\tau})=\left\{ \textbf{x}\in \mathbb{Z}_{p}^{n}:\begin{array}{c} \left|x_{i}-\frac{a_{i}}{p^{k_{j}}-1}\right|_{p}<p^{-k_{j}\tau_{i}} \, \quad (1\leq i \leq n) \\ \text{ for some } \frac{\mathbf{a}}{p^{k_{j}}-1}\in P(p^{k_{j}}) \text{ and for all } j\in \mathbb{N}\end{array} \right\}\,.$$ **Theorem 14**. *Let $S=\{p^{k_{j}}\}_{j\in\mathbb{N}}$ be a sequence of positive integers with $k_{j}\in\mathbb{N}$ increasing and let $$\inf_{j\in \mathbb{N}} \frac{k_{j}}{k_{j-1}}=h_{S}>1, \, \, \text{ and } \,\, \liminf_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}k_{i}}{ k_{j}}=\alpha_{S}.$$ For any $\boldsymbol{\tau}$ such that $h_{S}>\tau_{i}>1$ for each $1\leq i \leq n$ we have that $$\dim_{\mathcal{H}}\mathcal{W}_{\mathbb{Z}_{p}^{n}}^{S}(\boldsymbol{\tau}) = \min_{1\leq k \leq n} \left\{\frac{1}{\tau_{k}}\left(n-\alpha_{S}\sum_{i=1}^{n}(\tau_{i}-1) + \sum_{i:\tau_{k}\geq \tau_{i}} (\tau_{k}-\tau_{i}) \right)\right\}.$$* ## Complex Diophantine approximation The classical setting of Diophantine approximation of complex numbers by Gaussian integers has been studied by a range of authors, see for example [@BGH; @HeXiong2021; @HussainNZJM] and [@DodsonKristensen Sections 4-6].
In the complex case, we consider the following setup. Let $$\begin{aligned} F=\mathfrak{F}=[-\tfrac{1}{2},\tfrac{1}{2}]\times[-\tfrac{1}{2},\tfrac{1}{2}]i\, &, \quad d=\|\cdot\|_{2}\, , \quad \mu=\lambda_{2}\, ,\end{aligned}$$ for $\|\cdot\|_{2}$ the Euclidean norm. Note that $\delta=2$ in this setting. Let $$\begin{aligned} \mathcal{N}=\{a+bi: a,b\in\mathbb{Z}\}\, , \quad &\beta(a+bi)=\|a+bi\|_{2}=\sqrt{a^{2}+b^{2}}\, .\end{aligned}$$ Then, for any $q$ in the set $\mathcal{N}$ of Gaussian integers, let $$P(q)=\left\{\frac{p}{q}: p \in \mathcal{N}\text{ and } \frac{p}{q} \in \mathfrak{F} \right\}\, .$$ This set can be identified with a lattice in $\mathbb{R}^{2}$ in which distinct points satisfy $\|\frac{p}{q}-\frac{p'}{q}\|_{2}\geq \|q\|_{2}^{-1}$, see [@DodsonKristensen Section 4.5]. Hence $P(q)$ is $\|q\|_{2}^{-1}$-separated and maximal, and so $\mathcal{Q}_{\mathbb{C}}=\bigcup_{q\in\mathcal{N}}P(q)$ is a well defined set of $\beta$-abstract rationals on $\mathfrak{F}$. Let $S=\{q_{j}\}_{j\in \mathbb{N}}$ be a sequence of Gaussian integers with strictly increasing norm, and let $\tau \in \mathbb{R}_{+}$. Define the set $$\mathfrak{W}_{\mathbb{C}}^{S}(\tau)=\left\{ z \in \mathfrak{F} : \left\|z-\frac{p}{q_{j}}\right\|_{2}< \|q_{j}\|_{2}^{-\tau-1} \text{ for some } \frac{p}{q_{j}}\in P(q_{j}) \text{ and for all } j\in\mathbb{N}\right\}.$$ Note that, since $S$ is a sequence of Gaussian integers with strictly increasing norm, $\beta(q)=\|q\|_{2}$ is strictly increasing and unbounded on $S$. Furthermore note that $\mathfrak{W}_{\mathbb{C}}^{S}(\tau)=\Lambda_{\mathcal{Q}_\mathbb{C}}^{S}(\tau)$. Applying Theorem [Theorem 6](#main){reference-type="ref" reference="main"} we have the following. **Theorem 15**. *Let $S=\{q_{j}\}_{j\in\mathbb{N}}$ be a sequence of Gaussian integers with strictly increasing norm. Let $h_{S}$ and $\alpha_{S}$ be defined and satisfy the conditions as in Theorem [Theorem 6](#main){reference-type="ref" reference="main"}.
For any $\tau\in\mathbb{R}_{+}$ such that $h_{S}-1>\tau>0$ we have that $$\dim_{\mathcal{H}}\mathfrak{W}_{\mathbb{C}}^{S}(\tau)=\frac{2}{\tau+1}\left(1-\alpha_{S}\tau \right).$$* ## Approximation on missing digit sets Diophantine approximation on fractals has been an area of intense study, particularly since Mahler's 1984 paper [@M84] in which he asked how well points inside the middle-third Cantor set could be approximated by rational points either i) inside, or ii) outside of the middle-third Cantor set. This question has since been generalised significantly. For classical approximation results in this setting see [@AB21; @SBaker; @FS14; @KLW05; @KW05; @LSV07; @TanWangWu], and see [@KW23; @WW19] for the higher dimensional weighted setting. Let $b\in\mathbb{N}_{\geq 3}$ be fixed and let $J\subset \{0,1,\dots , b-1\}$ denote a proper subset with $\# J\geq 2$. For each $j\in J$ define the maps $f_{j}:[0,1] \to [0,1]$ by $$f_{j}(x)=\frac{x+j}{b},$$ and let $\Phi=\{f_{j}:j\in J\}$. Consider the self-similar iterated function system $\Phi$ and let $\mathcal{C}(b,J)$ be the attractor of $\Phi$. That is, $\mathcal{C}(b,J)$ is the unique non-empty compact subset of $[0,1]$ such that $$\mathcal{C}(b,J)=\bigcup_{j\in J} f_{j}(\mathcal{C}(b,J) ).$$ Call $\mathcal{C}(b,J)$ the $(b,J)$-missing digit set. As an example note that $\mathcal{C}(3,\{0,2\})$ is the classical middle-third Cantor set. These sets can be equipped with a self-similar measure $\mu_{\mathcal{C}}$ defined by $$\mu_{\mathcal{C}}(B)=\mathcal H^{\gamma(b, J)}(B\cap \mathcal{C}(b, J))\, ,$$ which was shown by Mauldin and Urbański [@MU96] to be $\gamma(b,J)$-Ahlfors regular, where $$\gamma(b,J)=\dim_{\mathcal{H}}\mathcal{C}(b,J)=\frac{\log\# J}{\log b}.$$ Concisely, let $$\begin{aligned} F=&\mathcal{C}(b,J)\, , \quad d=|\cdot|\, , \quad \mu=\mu_{\mathcal{C}}\, , \\ \mathcal{N}&=\mathbb{N}\, , \quad \text{ and } \, \, \beta(k)=b^{k}\, .\end{aligned}$$ We now construct the abstract rationals in this setting.
Henceforth, fix any point $z\in\mathcal{C}(b,J)$ and for any $k\in \mathcal{N}$ let $$P(k)=\{ f_{\mathbf{i}}(z) : \mathbf{i}\in J^{k} \},$$ where $f_{\mathbf{i}}=f_{i_{1}} \circ \dots \circ f_{i_{k}}$ for the finite word $\mathbf{i}=(i_{1},\dots, i_{k})\in J^{k}$. Writing $z=\sum_{i=1}^{\infty}z_{i}b^{-i}$ for the base-$b$ expansion of $z$, note that for each $f_{\mathbf{v}}(z),f_{\textbf{u}}(z) \in P(k)$ with $\mathbf{v}\neq \textbf{u}$ we have that $$\begin{aligned} |f_{\mathbf{v}}(z)-f_{\textbf{u}}(z)|& =\left|\left(\sum_{i=1}^{k}v_{i}b^{-i} +\sum_{i=k+1}^{\infty}z_{i-k}b^{-i}\right) -\left(\sum_{i=1}^{k}u_{i}b^{-i} +\sum_{i=k+1}^{\infty}z_{i-k}b^{-i}\right) \right| \\ &= \left|\sum_{i=1}^{k}(v_{i}-u_{i})b^{-i} \right|. \end{aligned}$$ Since $\textbf{u}\neq \mathbf{v}$ they must differ in at least one digit. Suppose that $(u_{1},\dots,u_{t})=(v_{1}, \dots , v_{t})$ but $u_{t+1}\neq v_{t+1}$ for some $0\leq t\leq k-1$. Then $$\begin{aligned} \left|\sum_{i=1}^{k}(v_{i}-u_{i})b^{-i} \right|&=\left|\sum_{i=t+1}^{k}(v_{i}-u_{i})b^{-i} \right|\\ & \geq b^{-(t+1)} - \left|\sum_{i=t+2}^{k}(v_{i}-u_{i})b^{-i}\right| \\ & \geq b^{-(t+1)}-(b^{-(t+1)}-b^{-k})\\ & = b^{-k}.\end{aligned}$$ Hence $P(k)$ is $b^{-k}$-separated, and so it is natural to take $\beta(k)=b^{k}$. To show that it is maximal consider any point $x\in\mathcal{C}(b,J)$. Then there exists a word $\mathbf{i}\in J^{\mathbb{N}}$ such that $$x=\lim_{n\to \infty}f_{(i_{1},\dots, i_{n})}(z).$$ Hence, $$|f_{(i_{1}, \dots,i_{k})}(z)-x|=\left|f_{(i_{1},\dots,i_{k})}(z)-\lim_{n\to \infty}f_{(i_{1},\dots, i_{n})}(z) \right| \leq b^{-k}$$ and so $P(k)$ is maximal. Let $\mathcal{Q}(\mathcal{C}(b,J),z)=\bigcup_{k\in\mathbb{N}}P(k)$. Let $S=\{k_{j}\}_{j\in\mathbb{N}}$ be an increasing sequence of integers.
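The $b^{-k}$-separation of $P(k)$ established above is easy to check numerically for small examples; a minimal sketch (the helper names are ours, not from the paper):

```python
from itertools import product

def f_word(word, z, b):
    # Apply f_{i_1} o ... o f_{i_k} to z, innermost map f_{i_k} first,
    # where f_j(x) = (x + j)/b.
    x = z
    for digit in reversed(word):
        x = (x + digit) / b
    return x

def min_gap(b, J, z, k):
    # Minimal distance between distinct points of P(k).
    pts = sorted(f_word(w, z, b) for w in product(J, repeat=k))
    return min(y - x for x, y in zip(pts, pts[1:]))

# Middle-third Cantor set (b = 3, J = {0, 2}) anchored at z = 0:
# P(k) should be 3^{-k}-separated.
assert min_gap(3, (0, 2), 0.0, 4) >= 3.0 ** (-4) - 1e-12
```

Here the anchor $z=0$ is chosen for simplicity; by the argument above any $z\in\mathcal{C}(b,J)$ yields the same separation, since $z$ cancels in the difference.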
For $\tau \in \mathbb{R}_{+}$ define $$\mathbf{W}^{S}_{\mathcal{C}(b,J)}(\tau, z):=\left\{ x \in \mathcal{C}(b,J) : |x-f_{\mathbf{i}}(z)|<b^{-k_{j}\tau}\, \text{ for some } \mathbf{i}\in J^{k_{j}} \text{ and for all } j \in \mathbb{N}\right\}.$$ Note that, since $S$ is strictly increasing, $\beta(k)=b^{k}$ is increasing and unbounded on $S$. Furthermore $\mathbf{W}^{S}_{\mathcal{C}(b,J)}(\tau, z)=\Lambda_{\mathcal{Q}(\mathcal{C}(b,J),z)}^{S}(\tau)$ and so by Theorem [Theorem 6](#main){reference-type="ref" reference="main"} we have the following. **Theorem 16**. *Let $S=\{k_{j}\}_{j\in\mathbb{N}}$ be a sequence of positive integers and let $h_{S}$ and $\alpha_{S}$ be defined by $S$ and satisfy the conditions as in Theorem [Theorem 6](#main){reference-type="ref" reference="main"}. For any $\tau\in\mathbb{R}_{+}$ such that $h_{S}>\tau>1$ we have that $$\dim_{\mathcal{H}}\mathbf{W}^{S}_{\mathcal{C}(b,J)}(\tau, z) = \frac{\gamma(b,J)}{\tau}\left(1 -\alpha_{S}(\tau-1) \right).$$* # Proof of Theorem [Theorem 6](#main){reference-type="ref" reference="main"} {#proof-of-theorem-main} Before giving the proof of Theorem [Theorem 6](#main){reference-type="ref" reference="main"} we show that $\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$ can be written as a countable intersection of unions of rectangles. This $\liminf$-type representation will make the calculation of the upper and lower bounds for $\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$ much easier. For $1\leq i \leq n$ and $j \in \mathbb{N}$, let $$E_{j,i}=\bigcup_{p_{i} \in P_{i}(q_{j})} B_{i}(p_{i},\beta(q_{j})^{-\tau_{i}}),$$ where $q_{j}$ denotes the $j$th value in the sequence $S$, and let $$E_{j}=\prod_{i=1}^{n}E_{j,i}=\bigcup_{\mathbf{p}=(p_{1},\dots, p_{n})\in P(q_{j})} \prod_{i=1}^{n}B_{i}(p_{i},\beta(q_{j})^{-\tau_{i}}).$$ Then $$\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})=\bigcap_{j\in\mathbb{N}} E_{j}.$$ This construction is crucial in the proof of Theorem [Theorem 6](#main){reference-type="ref" reference="main"}.
A brief sketch of the proof is as follows. As is standard for the calculation of the dimension of such sets, we split the proof into two parts, proving the upper and lower bounds separately. The upper bound is proven by considering the standard cover provided in the previous section. One small technicality is that this cover consists of rectangles, so in order to construct an efficient cover of $\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$ by balls we need to consider several different coverings, depending on the side lengths of the rectangles in the layers $E_{j}$. The methodology of the lower bound is as follows. Firstly we construct a Cantor set $L_{\infty}^{S}$ inside of $\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$. This Cantor set is a very natural construction: starting at the layer $E_{1}$ as defined in the construction of $\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})$, we iteratively include all rectangles from $E_{j}$ that are contained in some rectangle from the layer $E_{j-1}$. Hence, starting at $E_{1}$, we obtain a nested Cantor set $L_{\infty}^{S}$ contained in $\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})$. For the second part, we construct a measure $\nu$ supported on $L_{\infty}^{S}$. This measure is defined naturally so that the mass is evenly distributed over each rectangle appearing in a layer, and the sum of the masses of the rectangles contained in a larger rectangle of the previous layer is equal to the mass of said larger rectangle. The calculation of the Hölder exponent of the measure $\nu$ for a general ball is quite technical since our constructed Cantor set is made of rectangles. To calculate the Hölder exponent we have to split into a number of cases determined by the size of the radius of our general ball relative to the side lengths of the rectangles in our Cantor set construction. Once the exponent is calculated we can apply the mass distribution principle to obtain our lower bound dimension result.
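To see the mass distribution strategy in miniature, consider the natural measure on the middle-third Cantor set, in which every level-$k$ interval of length $3^{-k}$ carries mass $2^{-k}$: the Hölder exponent is then constant in $k$ and equals the dimension $\log 2/\log 3$. A minimal numerical sketch (ours, not part of the proof):

```python
import math

# Natural measure on the middle-third Cantor set: each of the 2^k level-k
# intervals (of length 3^{-k}) receives mass 2^{-k}.  The Holder exponent
# log nu(R) / log r(R) is the candidate dimension that the mass
# distribution principle certifies as a lower bound.
def holder_exponent(k: int) -> float:
    mass = 2.0 ** (-k)      # nu(R_k): equal mass over the 2^k intervals
    radius = 3.0 ** (-k)    # r(R_k): side length of a level-k interval
    return math.log(mass) / math.log(radius)

target = math.log(2) / math.log(3)
assert all(abs(holder_exponent(k) - target) < 1e-12 for k in range(1, 30))
```

In the weighted setting of the proof the analogous ratio is no longer constant, which is exactly why the case analysis over the side lengths of the rectangles is required.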
Before going into the proof we state the definitions of Hausdorff measure and dimension and state some easy, but essential, lemmas that will be required in the proof. ## Preliminaries {#prelims} We begin by recalling the definition of Hausdorff measure and dimension; for a thorough exposition see [@F14]. Let $(F,d)$ be a metric space and $X \subset F$. Then for any $0 < \rho \leq \infty$, any finite or countable collection $\{B_i\}$ of subsets of $F$ such that $X\subset \bigcup_i B_i$ and $$r(B_i)=\inf\{r\geq 0: d(x,y)\leq r \quad ( x,y \in B_{i}) \}\leq \rho$$ is called a *$\rho$-cover* of $X$. Let $$\mathcal{H}_{\rho}^{s}(X)=\inf \left\{\sum\limits_{i} r(B_i)^{s}\right\} \, ,$$ where the infimum is taken over all possible $\rho$-covers $\{B_i\}$ of $X$. The *$s$-dimensional Hausdorff measure of $X$* is defined to be $$\mathcal{H}^s(X)=\lim_{\rho\to 0}\mathcal{H}_\rho^s(X).$$ For any set $X\subset F$ denote by $\dim_{\mathcal{H}}X$ the Hausdorff dimension of $X$, defined as $$\dim_{\mathcal{H}}X :=\inf\left\{s\geq 0\;:\; \mathcal{H}^s(X)=0\right\}\,.$$ A property that the Hausdorff dimension enjoys is that it is countably stable. That is, for a sequence of sets $X_{i}$ we have that $$\dim_{\mathcal{H}}\bigcup_{i} X_{i} = \sup_{i} \dim_{\mathcal{H}}X_{i}\, .$$ We will use the following lemma to calculate the lower bound dimension result. **Lemma 17** (Mass Distribution Principle). *Let $\nu$ be a Borel probability measure supported on a subset $X \subseteq F$. Suppose that for $s>0$ there exist constants $c,r_{0}>0$ such that $$\nu(B) \leq c\, r(B)^{s}$$ for all open balls $B \subset F$ with $r(B)<r_{0}$. Then $\mathcal{H}^{s}(X) \geq \frac{1}{c}$.* We now state some easy lemmas in relation to our setup that will be used in both the upper and lower bound dimension calculations. The first lemma gives good bounds on the number of abstract rationals contained in a ball of given radius. **Lemma 18**.
*For any $x_{i}\in F_{i}$, $q\in \mathcal{N}$ and $r>0$ such that $r>\beta(q)^{-1}$ $$\frac{c_{1,i}}{c_{2,i}}2^{-\delta_{i}}(r\beta(q))^{\delta_{i}} \leq \# \left\{ p \in P_{i}(q): p \in B_{i}(x_{i},r)\right\} \leq \frac{c_{2,i}}{c_{1,i}}2^{\delta_{i}+1}(r\beta(q))^{\delta_{i}}.$$* *Proof.* Note that by the $\beta(q)^{-1}$-separation condition on $P_{i}(q)$ the balls $$\bigcup_{p\in P_{i}(q)} B_{i}\left(p,\tfrac{1}{2}\beta(q)^{-1}\right)$$ are disjoint, and so using a volume argument one can deduce that $$\# \left\{ p \in P_{i}(q): p \in B_{i}(x_{i},r)\right\} \leq \frac{\mu_{i}\left(B_{i}(x_{i},r)\right)}{\mu_{i}\left(B_{i}(p,\frac{1}{2}\beta(q)^{-1})\right)}+1 \leq \frac{c_{2,i}}{c_{1,i}}2^{\delta_{i}+1}(r\beta(q))^{\delta_{i}},$$ where the second inequality follows since $r\beta(q)>1$ and $c_{2,i}c_{1,i}^{-1}\geq 1$. A similar argument can be done for the lower bound. Note that the maximality condition on $P_{i}(q)$ ensures, regardless of the arrangement of $P_{i}(q)$, that at least one $p \in P_{i}(q)$ must be contained in any ball $B_{i}(x,2\beta(q)^{-1})$ with $x\in F_{i}$, and by a volume argument $$\# \left\{ p \in P_{i}(q): p \in B_{i}(x_{i},r)\right\}\geq \frac{\mu_{i}\left(B_{i}(x_{i},r)\right)}{\mu_{i}\left(B_{i}(p,2\beta(q)^{-1})\right)} \geq \frac{c_{1,i}}{c_{2,i}}2^{-\delta_{i}}(r\beta(q))^{\delta_{i}}.$$ ◻ The following lemma shows that, for a sequence $S$ satisfying the conditions of Theorem [Theorem 6](#main){reference-type="ref" reference="main"}, $\beta(q_{j})$ grows fast enough that the constant terms appearing later are negligible. **Lemma 19**.
*For any sequence $S=\{q_{j}\}_{j\in\mathbb{N}}$ if $$\inf_{j\in\mathbb{N}} \frac{\log \beta(q_{j})}{\log \beta(q_{j-1})}=h_{S}>1$$ then $$\lim_{j\to \infty} \frac{j}{\log \beta(q_{j})}=0.$$* *Proof.* Observe that the infimum condition on $S$ implies that for any $j\in\mathbb{N}_{>1}$ $$\log \beta(q_{j}) >h_{S}\log \beta(q_{j-1})>h_{S}^{2}\log \beta(q_{j-2})> \dots > h_{S}^{j-1}\log \beta(q_{1}).$$ Thus $$\frac{j}{\log \beta(q_{j})} < \frac{j}{h_{S}^{j-1}\log \beta(q_{1})} \to 0$$ as $j \to \infty$ since $h_{S}>1$. ◻ ## Upper bound of Theorem [Theorem 6](#main){reference-type="ref" reference="main"} {#upper-bound-of-theorem-main} Observe that $E_{1}\cap\dots \cap E_{j}$ is a $\beta(q_{j})^{-\min_{i}\tau_{i}}$-cover of $\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})$ for any $j \in \mathbb{N}$. Consider the cover $E_{1}\cap E_{2}$ and note that for any $x_{i}\in F_{i}$ $$\#\{p_{i}\in P_{i}(q_{2}): p_{i}\in B_{i}(x_{i}, \beta(q_{1})^{-\tau_{i}})\} \overset{\text{Lemma~\ref{vol_arg}}}{\leq} \frac{c_{2,i}}{c_{1,i}}2^{\delta_{i}+1} (\beta(q_{2})\beta(q_{1})^{-\tau_{i}})^{\delta_{i}},$$ and so for any $\textbf{x}\in F$ $$\#\left\{\mathbf{p}\in P(q_{2}): \mathbf{p}\in \prod_{i=1}^{n} B_{i}(x_{i}, \beta(q_{1})^{-\tau_{i}})\right\} \overset{\text{Lemma~\ref{vol_arg}}}{\leq} \left(\prod_{i=1}^{n}\frac{c_{2,i}}{c_{1,i}}\right) 2^{n+\sum\limits_{i=1}^{n}\delta_{i}} \beta(q_{2})^{\sum\limits_{i=1}^{n}\delta_{i}} \beta(q_{1})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}.$$ We should remark here that Lemma [Lemma 18](#vol_arg){reference-type="ref" reference="vol_arg"} is applicable since $$\beta(q_{1})^{-\tau_{i}}>\beta(q_{1})^{-h_{S}}>\beta(q_{2})^{-1}$$ by our definition of $h_{S}$ and assumption on $\boldsymbol{\tau}$.
From now on, for ease of notation, we will write $$\delta=\sum_{i=1}^{n}\delta_{i}\, , \quad c_{1}=\prod_{i=1}^{n}c_{1,i}\, , \quad c_{2}=\prod_{i=1}^{n}c_{2,i}\, .$$ This tells us that the cover $E_{1}\cap E_{2}$ is composed of at most $$\frac{c_{2}}{c_{1}} 2^{n+\delta} \beta(q_{2})^{\delta}\beta(q_{1})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}$$ rectangles of the form $$\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{2})^{-\tau_{i}}\right)$$ for $\mathbf{p}=(p_{1},\dots,p_{n}) \in P(q_{2})$. We can repeat this process iteratively to obtain that $E_{1}\cap\dots\cap E_{j}$ can be covered by at most $$G_{j}:=\#P(q_{1}) \left( \frac{c_{2}}{c_{1}} 2^{n+\delta}\right)^{j-1} \prod_{m=1}^{j-1}\beta(q_{m+1})^{\delta}\beta(q_{m})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}$$ rectangles of the form $$\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{j})^{-\tau_{i}}\right)$$ for some $\mathbf{p}\in P(q_{j})$. For now, fix some $1\leq k \leq n$. Observe that $\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{j})^{-\tau_{i}}\right)$ can be covered by $$\asymp_{c_{1},c_{2}} \prod_{i=1}^{n} \max\left\{1, \beta(q_{j})^{(\tau_{k}-\tau_{i})\delta_{i}}\right\}$$ $n$-dimensional balls, in $F$, of radius $\beta(q_{j})^{-\tau_{k}}$. Note that the implied constant is independent of $j$.
Hence $$\begin{aligned} \label{upper_measure} \mathcal{H}^{s}\left(\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\right) &\ll_{c_{1},c_{2}} \liminf_{j\to \infty} G_{j}\prod_{i=1}^{n} \max\left\{1, \beta(q_{j})^{(\tau_{k}-\tau_{i})\delta_{i}}\right\} \left(\beta(q_{j})^{-\tau_{k}}\right)^{s} \nonumber\\ &\ll_{c_{1},c_{2},\#P(q_{1})}\liminf_{j\to \infty} \left( \frac{c_{2}}{c_{1}} 2^{n+\delta}\right)^{j-1} \prod_{m=1}^{j-1}\beta(q_{m+1})^{\delta}\beta(q_{m})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}\beta(q_{j})^{\sum\limits_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i}} \left(\beta(q_{j})^{-\tau_{k}}\right)^{s}.\end{aligned}$$ For $$\label{s_value} s>\frac{1}{\tau_{k}}\left(\delta + \frac{(j-1)\log\left( \frac{c_{2}}{c_{1}} 2^{n+\delta}\right)}{\log \beta(q_{j})}-\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\frac{\sum\limits_{m=1}^{j-1}\log \beta(q_{m})}{\log \beta(q_{j})} + \sum\limits_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i} \right)$$ we have that [\[upper_measure\]](#upper_measure){reference-type="eqref" reference="upper_measure"} is finite. Furthermore, for any $\epsilon>0$ there exist a subsequence $\{j_{t}\}_{t\in\mathbb{N}}$ and a large $t_{0}\in\mathbb{N}$ such that for any $$s>\frac{1}{\tau_{k}}\left(\delta + \epsilon -\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{S} + \sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i} \right),$$ the expression under the limit in [\[upper_measure\]](#upper_measure){reference-type="eqref" reference="upper_measure"} is bounded for all $j_{t}$ with $t>t_{0}$. This follows from the definition of $\alpha_{S}$, and Lemma [Lemma 19](#constant corrector){reference-type="ref" reference="constant corrector"}.
Thus $$\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau}) \leq \frac{1}{\tau_{k}}\left(\delta -\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{S} + \sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i} \right)\, .$$ We can repeat the above steps for each $1\leq k\leq n$, and so $$\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau}) \leq \min_{1\leq k \leq n} \left\{ \frac{1}{\tau_{k}}\left(\delta -\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{S} + \sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i} \right) \right\}.$$ ## Lower bound of Theorem [Theorem 6](#main){reference-type="ref" reference="main"} {#lower-bound-of-theorem-main} We begin with the construction of the Cantor subset $L_{\infty}^{S}$ of $\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})$ before constructing a measure $\nu$ with support $L_{\infty}^{S}$. Finally we calculate the Hölder exponent of a general ball for this measure. ### Cantor construction of $L_{\infty}^{S}$ The Cantor set $L_{\infty}^{S}$ is constructed as follows:\ 1. Recalling that $$\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})=\bigcap_{j\in\mathbb{N}} E_{j},$$ let $L_{1}^{S}=E_{1}$.\ 2.
Let $$L_{2}^{S}=\underset{\mathbf{p}\in \prod_{i=1}^{n}B_{i}\left(p'_{i},\frac{1}{2}\beta(q_{1})^{-\tau_{i}}\right) \text{ for some } \mathbf{p}'\in P(q_{1})}{\bigcup_{\mathbf{p}\in P(q_{2}):}} \prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{2})^{-\tau_{i}}\right).$$ Observe that for any $\textbf{x}\in F$ $$\#\left\{\mathbf{p}\in P(q_{2}): \mathbf{p}\in \prod_{i=1}^{n}B_{i}\left(x_{i}, \frac{1}{2}\beta(q_{1})^{-\tau_{i}}\right) \right\} \overset{\text{Lemma~\ref{vol_arg}}}{\geq} \prod_{i=1}^{n}\frac{c_{1,i}}{c_{2,i}}2^{-2\delta_{i}}(\beta(q_{2})\beta(q_{1})^{-\tau_{i}})^{\delta_{i}}$$ and so $L_{2}^{S}$ is composed of at least $$\label{L1-count} \frac{c_{1}}{c_{2}} 2^{-2\delta}\beta(q_{2})^{\delta}\beta(q_{1})^{-\sum_{i=1}^{n}\tau_{i}\delta_{i}}$$ rectangles of the form $$\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{2})^{-\tau_{i}}\right)$$ for some $\mathbf{p}=(p_{1},\dots,p_{n}) \in P(q_{2})$. Note that the quantity in [\[L1-count\]](#L1-count){reference-type="eqref" reference="L1-count"} may be small, but since the count is an integer there is at least $1$ rectangle.\ 3. The argument above can be applied inductively.
Namely, for each rectangle $R_{j-1} \in L_{j-1}^{S}$[^1] let $$L_{j}^{S}(R_{j-1})= \underset{\mathbf{p}\in R_{j-1}}{\bigcup_{\mathbf{p}\in P(q_{j}):}} \prod_{i=1}^{n}B_{i}(p_{i},\beta(q_{j})^{-\tau_{i}}),$$ and $$L_{j}^{S}=\bigcup_{R_{j-1}\in L_{j-1}^{S}}L_{j}^{S}(R_{j-1}).$$ Observe, using the same calculation as in the construction of $L_{2}^{S}$, that $$\#\{\mathbf{p}\in P(q_{j}): \mathbf{p}\in R_{j-1}\} \geq \frac{c_{1}}{c_{2}}2^{-2\delta}\beta(q_{j})^{\delta}\beta(q_{j-1})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}$$ and so each set $L_{j}^{S}(R_{j-1})$ is composed of at least $$\label{L-count} \frac{c_{1}}{c_{2}}2^{-2\delta}\beta(q_{j})^{\delta}\beta(q_{j-1})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}$$ rectangles of the form $$\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{j})^{-\tau_{i}}\right).$$ Note that, by our definition of $h_{S}$ and the condition that $h_{S}>\tau_{i}$ for each $1\leq i \leq n$, for sufficiently large $j$ the quantity [\[L-count\]](#L-count){reference-type="eqref" reference="L-count"} will be strictly larger than $1$. Hence the constructed Cantor set is not a singleton. To finish the construction define $$L_{\infty}^{S}=\bigcap_{j\in\mathbb{N}}L_{j}^{S}.$$ ### Construction of measure $\nu$ on $L_{\infty}^{S}$ Define the measure $\nu$ on $L_{\infty}^{S}$ by $$\nu(R_{0})=1 \quad \text{ for } R_{0}=F,$$ and for each $R_{j}\in L_{j}^{S}(R_{j-1})$ define $$\label{measure def} \nu(R_{j})=\nu(R_{j-1})\frac{1}{\#L_{j}^{S}(R_{j-1})}.$$ That is, we equally distribute the mass between the rectangles in each layer. It is easy to see that this mass distribution is consistent, and so $\nu$, defined by $$\nu(A)=\inf\left\{\sum_{i}\nu(R_{i}) : \bigcup R_{i} \supseteq A \quad \text{ and } R_{i} \in \bigcup_{j\in\mathbb{N}}L_{j}^{S} \right\}$$ for any Borel set $A\subseteq F$, is a Borel probability measure with support $L^{S}_{\infty} \subset \Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})$. Consider a rectangle $R_{k}\in L_{k}^{S}$.
Observe that $$r(R_{k})=\max\limits_{1\leq i \leq n}\beta(q_{k})^{-\tau_{i}}=\beta(q_{k})^{-\min\limits_{1\leq i \leq n} \tau_{i}},$$ and by definition that $$\begin{aligned} \nu(R_{k}) & \overset{\eqref{L-count}+\eqref{measure def}}{\leq} \nu(R_{k-1})\frac{c_{2}}{c_{1}}2^{2\delta}\beta(q_{k})^{-\delta}\beta(q_{k-1})^{\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}, \nonumber\\ & \leq \beta(q_{1})^{-\delta} \prod_{j=2}^{k} \frac{c_{2}}{c_{1}}2^{2\delta}\beta(q_{j})^{-\delta}\beta(q_{j-1})^{\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}, \nonumber\\ & \leq \beta(q_{k})^{-\delta}\prod_{j=1}^{k-1}\frac{c_{2}}{c_{1}}2^{2\delta}\beta(q_{j})^{-\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}}. \label{rectangle measure}\end{aligned}$$ Thus the Hölder exponent can be calculated to be $$\begin{aligned} \frac{\log \nu(R_{k})}{\log r(R_{k})} & \geq \frac{\delta \log \beta(q_{k}) +(k-1)\log\left(\frac{c_{1}}{c_{2}}2^{-2\delta}\right) - \left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i} \right)\left(\sum\limits_{j=1}^{k-1}\log \beta(q_{j})\right)}{\min\limits_{1\leq i \leq n}\tau_{i}\log \beta(q_{k})}\\ & = \frac{1}{\min\limits_{1\leq i \leq n}\tau_{i}}\left( \delta + \log\left(\frac{c_{1}}{c_{2}}2^{-2\delta}\right)\frac{(k-1)}{\log \beta(q_{k})}-\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i} \right) \frac{\sum\limits_{j=1}^{k-1}\log \beta(q_{j})}{\log \beta(q_{k})} \right).\end{aligned}$$ By our condition on the sequence $S$ and Lemma [Lemma 19](#constant corrector){reference-type="ref" reference="constant corrector"} for any $\varepsilon>0$ there exists $k_{\varepsilon}$ such that for all $k>k_{\varepsilon}$ $$\label{epsilon size 1} \left|\frac{\sum\limits_{j=1}^{k-1}\log \beta(q_{j})}{\log \beta(q_{k})}-\alpha_{S} \right|< \frac{\varepsilon}{2}\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)^{-1} \, , \quad \text{ and } \quad \left|\log\left(\frac{c_{1}}{c_{2}}2^{-2\delta}\right)\frac{(k-1)}{\log \beta(q_{k})}\right|< \frac{\varepsilon}{2}\, .$$ Hence we have that $$\label{general s 
bound} \frac{\log \nu(R_{k})}{\log r(R_{k})} \geq \frac{1}{\min\limits_{1\leq i \leq n}\tau_{i}}\left( \delta - \left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i} \right)\alpha_{S}-\varepsilon \right):=s_{\min}.$$ Thus $$\nu(R_{k}) \ll r(R_{k})^{s_{\min}}.$$ ### Hölder exponent of a general ball Fix any $\varepsilon>0$ and let $k_{\varepsilon} \in \mathbb{N}$ be large enough such that [\[general s bound\]](#general s bound){reference-type="eqref" reference="general s bound"} holds for all $k > k_{\varepsilon}$ and $$\label{epsilon size 2} \left| \frac{\log\left( \frac{c_{2}}{c_{1}}4^{n+\delta}\right) +(k-1)\log\left(\frac{c_{1}}{c_{2}}2^{\delta}\right)}{\log \beta(q_{k-1})} \right|<\frac{\varepsilon}{2} \, , \quad \text{ and } \quad \left|\frac{k\log\left(\frac{c_{2}}{c_{1}}2^{2\delta}\right)+\log 2^{n+3\delta}}{\log \beta(q_{k-1})}\right|<\frac{\varepsilon}{2}\, .$$ Note that such a choice of $k_{\varepsilon}$ is possible by Lemma [Lemma 19](#constant corrector){reference-type="ref" reference="constant corrector"}. Consider an arbitrary ball $B(x,r) \subset F$ with $x\in L_{\infty}^{S}$ and $0<r<r_{0}:=\beta(q_{k_{\varepsilon}})^{-\min_{1\leq i \leq n} \tau_{i}}$. If $B(x,r)$ intersects exactly one rectangle at each layer of $L_{\infty}^{S}$ then $$\nu(B(x,r)) \leq \nu(R_{k}) \to 0 \quad \text{ as } k \to \infty,$$ and so trivially $\nu(B(x,r))\leq r^{s_{\min}}$. Thus we may assume there exists some $k \in \mathbb{N}_{>k_{\varepsilon}}$ such that $B(x,r)$ intersects exactly one rectangle in $L_{k-1}^{S}$, say $R_{k-1}$, and at least two rectangles in $L_{k}^{S}(R_{k-1})$. Note that $$\nu(B(x,r))=\nu(B(x,r)\cap R_{k-1}).$$ Consider the following cases: 1. $r>\beta(q_{k-1})^{-\min\limits_{1\leq i \leq n}\tau_{i}}$: Then $$\nu(B(x,r)) \leq \nu(R_{k-1}) \leq \left(\beta(q_{k-1})^{-\min\limits_{1\leq i \leq n}\tau_{i}}\right)^{s_{\min}} \leq r^{s_{\min}}.$$ 2.
$\beta(q_{k-1})^{-\min\limits_{1\leq i \leq n}\tau_{i}}\geq r \geq \beta(q_{k-1})^{-\max\limits_{1\leq i \leq n}\tau_{i}}$: We need to find an upper bound of $$\lambda=\#\left\{ \mathbf{p}\in P(q_{k}): \mathbf{p}\in \prod_{i=1}^{n} B_{i}\left(x_{i},\min\left\{ r, 2\beta(q_{k-1})^{-\tau_{i}}\right\}\right) \right\}.$$ That is, the number of centers corresponding to rectangles from $L_{k}^{S}(R_{k-1})$ contained in $B(x,r)\cap R_{k-1}$. Observe that $$\begin{aligned} \lambda &\overset{\text{Lemma~\ref{vol_arg}}}{\leq} \prod_{i: r<2\beta(q_{k-1})^{-\tau_{i}}}\frac{c_{2,i}}{c_{1,i}}(r\beta(q_{k}))^{\delta_{i}} \times \prod_{i: r\geq 2\beta(q_{k-1})^{-\tau_{i}}} \frac{c_{2,i}}{c_{1,i}}4^{1+\delta_{i}}(\beta(q_{k})\beta(q_{k-1})^{-\tau_{i}})^{\delta_{i}}, \\ & \leq \frac{c_{2}}{c_{1}} 4^{n+\delta} \beta(q_{k})^{\delta}\prod_{i: r<2\beta(q_{k-1})^{-\tau_{i}}}r^{\delta_{i}} \times \prod_{i: r\geq 2\beta(q_{k-1})^{-\tau_{i}}} \beta(q_{k-1})^{-\tau_{i}\delta_{i}}.\end{aligned}$$ Hence $$\begin{aligned} \nu(B(x,r)\cap R_{k-1}) &\leq \underset{B(x,r)\cap R_{k} \neq \emptyset}{\sum_{R_{k}\in L_{k}^{S}(R_{k-1}):}} \nu(R_{k}) \nonumber\\[2ex] & \leq \lambda \, \nu(R_{k}), \nonumber \\ & \overset{\eqref{rectangle measure}}{\leq} \lambda \, \beta(q_{k})^{-\delta}\prod_{j=1}^{k-1}\frac{c_{2}}{c_{1}}2^{2\delta}\beta(q_{j})^{-\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}}. 
\label{eq1}\end{aligned}$$ Note that $B(x,r)\cap R_{k-1}$ is contained in a ball with radius $r$, and so for $$N_{\nu,r,k}=\frac{\log\nu(B(x,r)\cap R_{k-1})}{\log r(B(x,r)\cap R_{k-1})}$$ we have $$\begin{aligned} N_{\nu,r,k} & \geq \frac{\left(\sum\limits_{i:r<2\beta(q_{k-1})^{-\tau_{i}}}\delta_{i}\right)\log r - \left(\sum\limits_{i:r\geq 2\beta(q_{k-1})^{-\tau_{i}}}\tau_{i}\delta_{i}\right)\log \beta(q_{k-1}) -\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(\sum\limits_{m=1}^{k-1}\log \beta(q_{m}) \right)}{\log r} \\ & \hspace{2cm} + \frac{\log\left(\frac{c_{2}}{c_{1}}4^{n+\delta}\right) + (k-1)\log\left(\frac{c_{2}}{c_{1}}2^{2\delta}\right)}{\log r}.\end{aligned}$$ Since $$\beta(q_{k-1})^{-\min\limits_{1\leq i \leq n}\tau_{i}}\geq r \geq \beta(q_{k-1})^{-\max\limits_{1\leq i \leq n}\tau_{i}},$$ there exist two indices, say $u$ and $v$, such that $r$ lies in the interval $[\beta(q_{k-1})^{-\tau_{u}},\beta(q_{k-1})^{-\tau_{v}}]$ and no $\beta(q_{k-1})^{-\tau_{i}}$ is contained in its interior. Observe that the right-hand side of the above inequality is monotonic in $r$ over $[\beta(q_{k-1})^{-\tau_{u}},\beta(q_{k-1})^{-\tau_{v}}]$, and so the minimum is obtained at one of the endpoints. For $j\in\{u,v\}$ let $$\begin{aligned} T_{1}&=\left\{i: \beta(q_{k-1})^{-\tau_{j}}<2\beta(q_{k-1})^{-\tau_{i}} \right\}\, , \\ T_{2}&=\left\{ i: \beta(q_{k-1})^{-\tau_{j}}\geq 2\beta(q_{k-1})^{-\tau_{i}} \right\}=\{1,\dots, n\} \backslash T_{1}\, .
\end{aligned}$$ Then we may write $$\begin{aligned} N_{\nu,r,k} &\geq \min_{j=u,v} \left\{ \begin{array}{c} \frac{\left(\sum\limits_{i\in T_{1}}\delta_{i}\right)(-\tau_{j})\log \beta(q_{k-1}) - \left(\sum\limits_{i\in T_{2}}\tau_{i}\delta_{i}\right)\log \beta(q_{k-1}) -\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(\sum\limits_{i=1}^{k-1}\log \beta(q_{i}) \right)}{-\tau_{j}\log \beta(q_{k-1})} \\[2ex] \hspace{2cm} + \frac{\log\left(\frac{c_{2}}{c_{1}}4^{n+\delta}\right) + (k-1)\log\left( \frac{c_{2}}{c_{1}}2^{2\delta}\right)}{-\tau_{j}\log \beta(q_{k-1})} \end{array}\right\} \\ & \geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left(\begin{array}{c} \frac{\left(\sum\limits_{i\in T_{1}}\delta_{i}\right)(-\tau_{j})\log \beta(q_{k-1}) - \left(\sum\limits_{i\in T_{2}}\tau_{i}\delta_{i}\right)\log \beta(q_{k-1}) -\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(\sum\limits_{i=1}^{k-1}\log \beta(q_{i}) \right)}{-\log \beta(q_{k-1})} \\[2ex] \hspace{2cm} + \frac{\varepsilon}{2} \end{array}\right)\right\}\, ,\end{aligned}$$ by [\[epsilon size 2\]](#epsilon size 2){reference-type="eqref" reference="epsilon size 2"}.
Now $$\begin{aligned} N_{\nu,r,k} &\geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left( \begin{array}{c}\left(\sum\limits_{i\in T_{1}}\delta_{i}\tau_{j}\right) + \left(\sum\limits_{i\in T_{2}}\tau_{i}\delta_{i}\right) + \, \, \frac{\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(\sum\limits_{i=1}^{k-1}\log \beta(q_{i}) \right)}{\log \beta(q_{k-1})} +\frac{\varepsilon}{2} \end{array} \right)\right\}\\ &\geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left( \left(\sum\limits_{i\in T_{1}}\delta_{i}\tau_{j}\right) + \left(\sum\limits_{i\in T_{2}}\tau_{i}\delta_{i}\right) + \, \left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(1+ \frac{\left(\sum\limits_{i=1}^{k-2}\log \beta(q_{i}) \right)}{\log \beta(q_{k-1})}\right) +\frac{\varepsilon}{2} \right)\right\}\\ &\geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left( \begin{array}{c} \left(\sum\limits_{i\in T_{1}}\delta_{i}\tau_{j}\right) + \left(\sum\limits_{i\in T_{2}}\tau_{i}\delta_{i}\right) + \, \left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(1+ \alpha_{S}\right) +\varepsilon \end{array} \right)\right\}\, ,\end{aligned}$$ by [\[epsilon size 1\]](#epsilon size 1){reference-type="eqref" reference="epsilon size 1"}.
Splitting the third summation into its two components, we get $$\begin{aligned} N_{\nu,r,k} &\geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left( \begin{array}{c} \delta -\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}+ \left(\sum\limits_{i\in T_{1}}\delta_{i}\tau_{j}\right) + \left(\sum\limits_{i\in T_{2}}\tau_{i}\delta_{i}\right) + \, \left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right) \alpha_{S} +\varepsilon \end{array} \right)\right\}\\ & \geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left( \delta + \left(\sum_{i\in T_{1}}(\tau_{j}-\tau_{i})\delta_{i}\right) +\left(\sum_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\alpha_{S} +\varepsilon \right)\right\} \\ & \geq \min_{j=u,v} \left\{ \frac{1}{\tau_{j}}\left( \delta - \left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{S} + \left(\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})\delta_{i}\right) + \varepsilon \right)\right\} =s_{u,v}\, .\end{aligned}$$ The last line follows from the observation that, for $k$ sufficiently large and $j$ fixed, the sets $\{i:\tau_{j}>\tau_{i}\}$ and $T_{1}\cup \{j\}$ coincide. Omitting $j$ does not affect the summation, since the corresponding term $(\tau_{j}-\tau_{j})\delta_{j}$ vanishes. This completes case ii).\ 3. $r \leq \beta(q_{k-1})^{-\max_{1\leq i \leq n} \tau_{i}}$: We calculate $$\nu\left(B(x,r) \cap R_{k-1}\right)\leq \underset{B(x,r)\cap R_{k} \neq \emptyset}{\sum_{R_{k}\in L_{k}^{S}(R_{k-1})}} \nu(R_{k}).$$ Since $B(x,r)$ intersects at least two rectangles in $L_{k}^{S}$, and $x$ is contained in one of them, we have that $r>\tfrac{1}{2}\beta(q_{k})^{-1}$. Hence $$\begin{aligned} \#\left\{ R_{k}\in L_{k}^{S}: B(x,r)\cap R_{k} \neq \emptyset \right\} & \leq \#\left\{p \in \prod_{i=1}^{n}P_{i}(q_{k}):p\in\prod_{i=1}^{n}B_{i}(x_{i},4r)\right\} \\ & \overset{\text{Lemma~\ref{vol_arg}}}{\leq} \left(\frac{c_{2}}{c_{1}}\right)^{n}2^{3\delta+n}(r\beta(q_{k}))^{\delta}.
\end{aligned}$$ So $$\begin{aligned} \nu(B(x,r) \cap R_{k-1})& \leq \frac{c_{2}}{c_{1}}2^{n+3\delta}(r\beta(q_{k}))^{\delta} \nu(R_{k}) \\ & \leq \frac{c_{2}}{c_{1}}2^{n+3\delta}(r\beta(q_{k}))^{\delta} \prod_{i=1}^{k} \frac{1}{\#L_{i}^{S}(R_{i-1})} \\ &\overset{\eqref{L-count}}{\leq} (r\beta(q_{k}))^{\delta} \left(\frac{c_{2}}{c_{1}}\right)^{k+1}2^{(n+3\delta)+2k\delta} \beta(q_{1})^{-\delta}\prod_{j=2}^{k}\beta(q_{j})^{-\delta}\beta(q_{j-1})^{\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}} \\ &\leq 2^{n+3\delta}\left(\frac{c_{2}}{c_{1}}2^{(2\delta)}\right)^{(k+1)} r^{\delta}\prod_{j=1}^{k-1}\beta(q_{j})^{\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}}. \end{aligned}$$ Hence the Hölder exponent can be estimated as $$\begin{aligned} \frac{\log \nu(B(x,r))}{\log r}&\geq \frac{\delta\log r + \left( \sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\left(\sum\limits_{j=1}^{k-1}\log \beta(q_{j})\right)}{\log r} + \frac{(k+1)}{\log r}\log\left(\frac{c_{2}}{c_{1}}2^{(2\delta)}\right)+\frac{\log 2^{n+3\delta}}{\log r}\\ &\overset{\eqref{epsilon size 2}}{\geq} \delta + \frac{ \left( \sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\left(\sum\limits_{j=1}^{k-1}\log \beta(q_{j})\right)}{\log r}-\left(\max_{1\leq i \leq n}\tau_{i}\right)^{-1}\frac{\varepsilon}{2} \\ & \hspace{-1cm}\overset{\hspace{0.3cm}\left(r<\beta(q_{k-1})^{-\max\limits_{1\leq i \leq n}\tau_{i}}\right)}{\geq} \frac{1}{\max\limits_{1\leq i \leq n}\tau_{i}} \left( \delta \max\limits_{1\leq i \leq n}\tau_{i} - \left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\frac{\sum\limits_{j=1}^{k-1} \log \beta(q_{j})}{\log \beta(q_{k-1})} \right)-\left(\max_{1\leq i \leq n}\tau_{i}\right)^{-1}\frac{\varepsilon}{2} \\ & \overset{\eqref{epsilon size 1}}{\geq} \frac{1}{\max_{1\leq i \leq n}\tau_{i}} \left( \delta \max\limits_{1\leq i \leq n}\tau_{i} - \left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)(\alpha_{S} +1)-\frac{\varepsilon}{2} \right) -\left(\max_{1\leq i \leq n}\tau_{i}\right)^{-1}\frac{\varepsilon}{2}\\ &= \frac{1}{\max\limits_{1\leq i \leq n}\tau_{i}} \left( \delta - (\alpha_{S}-\varepsilon)\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i} + \sum\limits_{i=1}^{n}\left( \left(\max\limits_{1\leq i \leq n} \tau_{i}\right) - \tau_{i}\right)\delta_{i} -\varepsilon \right) \\ &=s_{\max}\, . \end{aligned}$$ This completes case iii).\ Compiling the results of cases i)-iii), we have that $$\mu(B(x,r))\ll r^{\min\limits_{1\leq u,v\leq n}\left\{s_{\min},s_{\max},s_{u,v}\right\}}\, .$$ Note that $\varepsilon>0$ was arbitrary in the values of $s_{\min}$, $s_{\max}$, and each $s_{u,v}$. Thus letting $\varepsilon\to 0$ and noting that $$\min_{1\leq u,v\leq n}\left\{s_{\min},s_{\max},s_{u,v}\right\}=s$$ gives us $$\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau}) \geq s$$ via Lemma [Lemma 17](#Mass distribution principle){reference-type="ref" reference="Mass distribution principle"}, hence completing the proof of the lower bound. ## Proof of Theorem [Theorem 9](#corollary){reference-type="ref" reference="corollary"} {#Section:CorollaryProof} To prove Theorem [Theorem 9](#corollary){reference-type="ref" reference="corollary"} via Theorem [Theorem 6](#main){reference-type="ref" reference="main"}, note that by the countable stability of the Hausdorff dimension $$\dim_{\mathcal{H}}\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau}) = \sup_{t\in\mathbb{N}}\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{\sigma^{t}S}(\boldsymbol{\tau}).$$ Observe that $h\geq h_{\sigma^{t}S}$ for any $t\in\mathbb{N}$. For $\boldsymbol{\tau}$ chosen in Theorem [Theorem 9](#corollary){reference-type="ref" reference="corollary"}, there may exist $\tau_{i}$ with $\tau_{i}>h_{\sigma^{t}S}$. However, the corresponding sets have dimension less than or equal to the exact dimension result of Theorem [Theorem 6](#main){reference-type="ref" reference="main"}, and so can be ignored when considering the supremum over $t\in\mathbb{N}$.
Thus, without loss of generality, we may assume that $h>\tau_{i}>1$ for each $1\leq i \leq n$. Furthermore, choose $t_{0}$ sufficiently large such that $$h>h_{\sigma^{t}S}>\tau_{i} \quad \forall 1\leq i \leq n$$ for all $t\geq t_{0}$. Thus by Theorem [Theorem 6](#main){reference-type="ref" reference="main"} $$\dim_{\mathcal{H}}\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau}) = \sup_{t\in\mathbb{N}} s =s,$$ completing the proof. ## Proof of Theorem [Theorem 2](#interesting?){reference-type="ref" reference="interesting?"} {#proof-of-theorem-interesting} Note that if $$\lim_{j\to \infty} \frac{\log q_{j}}{\log q_{j-1}}=k,$$ then for any sufficiently small $\varepsilon>0$ there exists $j_{0}\in \mathbb{N}$ such that for all $j>j_{0}$ $$(k-\varepsilon)\log q_{j-1}<\log q_{j}< (k+\varepsilon)\log q_{j-1}.$$ So, for $j$ sufficiently large, $$\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}< \sum\limits_{i=1}^{j-1}(k-\varepsilon)^{-i}=\frac{1-(k-\varepsilon)^{-(j-1)}}{k-1-\varepsilon},$$ and similarly for the lower bound $$\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}> \frac{1-(k+\varepsilon)^{-(j-1)}}{k-1+\varepsilon}.$$ Thus, as $k>1$ and $\varepsilon>0$ can be chosen arbitrarily small (say $\varepsilon<k-1$), taking the limit as $j \to \infty$ gives us that $$\alpha_{S}=\lim_{j\to \infty} \frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}} =\frac{1}{k-1}.$$ Applying Corollary [Theorem 10](#real_corollary){reference-type="ref" reference="real_corollary"} with $h_{S}=k$, $\alpha_{S}=\frac{1}{k-1}$, and $0<\tau_{i}<h_{S}-1$ for each $1\leq i \leq n$ gives us Theorem [Theorem 2](#interesting?){reference-type="ref" reference="interesting?"}. [^1]: We should note here that technically $L_{j-1}^{S}$ is a union of rectangles, rather than a collection of rectangles, so strictly speaking "$R_{j-1}\in L_{j-1}^{S}$" does not make sense. In this notation we mean one of the rectangles appearing in the union that defines $L_{j-1}^{S}$.
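The limit $\alpha_{S}=\tfrac{1}{k-1}$ computed above admits a quick numerical sanity check. The sketch below is ours and purely illustrative, under the model assumption that $\log q_{j}$ grows exactly like $k^{j}$ (so that $\log q_{j}/\log q_{j-1}=k$ for every $j$):

```python
# Model check: if log q_j = k^j, the partial ratios in the definition of
# alpha_S converge to 1/(k-1) as j -> infinity.
def alpha_ratio(k, j):
    logs = [k**i for i in range(1, j + 1)]  # stand-in for log q_1, ..., log q_j
    return sum(logs[:-1]) / logs[-1]

for k in (2, 3, 5):
    approx = alpha_ratio(k, 40)
    assert abs(approx - 1 / (k - 1)) < 1e-9
```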
--- abstract: | Let $T_t^{P_2}f(x)$ denote the solution to the linear Schrödinger equation at time $t$, with initial value function $f$, where $P_2 (\xi) = |\xi|^2$. In 1980, Carleson asked for the minimal regularity of $f$ that is required for the pointwise a.e. convergence of $T_t^{P_2} f(x)$ to $f(x)$ as $t \rightarrow 0.$ This was recently resolved by work of Bourgain, and Du and Zhang. This paper considers more general dispersive equations, and constructs counterexamples to pointwise a.e. convergence for a new class of real polynomial symbols $P$ of arbitrary degree, motivated by a broad question: what occurs for symbols lying in a generic class? We construct the counterexamples using number-theoretic methods, in particular the Weil bound for exponential sums, and the theory of Dwork-regular forms. This is the first case in which counterexamples are constructed for indecomposable forms, moving beyond special regimes where $P$ has some diagonal structure. address: - Duke University, 120 Science Drive, Durham NC 27708 - Duke University, 120 Science Drive, Durham NC 27708 author: - Rena Chu - Lillian B. Pierce bibliography: - NoThBibliography.bib title: | Generalizations of the Schrödinger maximal operator:\ building arithmetic counterexamples --- # Introduction Given a polynomial $P(\xi)\in \mathbb{R}[\xi_1,...,\xi_n]$ of degree $k \geq 2$, the operator $$\begin{aligned} \label{T_int_dfn} T_t^{P}f(x):=\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \hat{f}(\xi) e^{i(\xi \cdot x + P(\xi)t)} d\xi, \end{aligned}$$ initially defined for $f$ of Schwartz class on $\mathbb{R}^n$, gives a solution to the linear PDE $$\begin{aligned} \begin{cases} \partial_t u - i \mathcal{P}(D) u =0, \quad (x,t) \in \mathbb{R}^n \times \mathbb{R}, \label{PDE}\\ u(x,0) = f(x), \quad x \in \mathbb{R}^n. 
\end{cases} \end{aligned}$$ Here $D = \frac{1}{i}(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n})$ and $\mathcal{P}(D)$ is defined according to its real symbol by $$\mathcal{P}(D) f(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i \xi\cdot x}P(\xi)\hat{f}(\xi)d\xi.$$ When $P(\xi)= |\xi|^2$, in which case ([\[PDE\]](#PDE){reference-type="ref" reference="PDE"}) is the linear Schrödinger equation, Carleson famously asked [@Car80 Eqn (14) p. 24]: what is the smallest $s>0$ such that $$\begin{aligned} \label{intro_convergence} \lim_{t\rightarrow 0} T_t^{P}f(x)=f(x), \qquad \text{a.e. $x\in \mathbb{R}^n$, for all $f\in H^s(\mathbb{R}^n)$.} \end{aligned}$$ This question was resolved for dimension $n=1$ quite swiftly by [@Car80; @DahKen82], which established that ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) holds if and only if $s\geq 1/4.$ In higher dimensions, there is a long history of work on necessary and sufficient conditions for the Schrödinger pointwise convergence problem, including [@Cow83; @Car85; @Sjo87; @Veg88b; @Bou95; @MVV96; @TaoVar00; @Lee06; @LucRog17; @DGL17; @DGLZ18; @LucRog19]. For several decades it was expected that $s=1/4$ might be the critical threshold in all dimensions, until Bourgain pushed the necessary condition on $s$ above $1/4$ in [@Bou13]. It was very recently resolved (up to the endpoint) by Bourgain [@Bou16], who showed that $s\geq 1/4 + \delta(n)$ with $\delta(n) = (n-1)/(4(n+1))$ is necessary, while Du and Zhang [@DuZha19] showed that $s> 1/4 + \delta(n)$ is sufficient. Bourgain's counterexample construction was interesting: it cleverly employed Gauss sums to force $\sup_{0<t<1}|T_t^Pf(x)|$ to be large (from which a violation to ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) can be deduced) for test functions $f$ defined using exponential sums. 
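For orientation, the solution operator ([\[T_int_dfn\]](#T_int_dfn){reference-type="ref" reference="T_int_dfn"}) can be evaluated directly in the model case $n=1$, $P(\xi)=\xi^2$, with Gaussian data $f(x)=e^{-x^2/2}$, for which $\hat f(\xi)=\sqrt{2\pi}\,e^{-\xi^2/2}$ and the integral closes up to $T_t^{P}f(0)=(1-2it)^{-1/2}$. The sketch below is our illustration (not part of the paper's argument), using a plain midpoint rule:

```python
import cmath, math

# Evaluate T_t^P f(0) from the integral definition for P(xi) = xi^2, n = 1,
# with Gaussian data f(x) = exp(-x^2/2), fhat(xi) = sqrt(2*pi)*exp(-xi^2/2).
def T_t_at_zero(t, a=-10.0, b=10.0, n=4000):
    h = (b - a) / n
    total = 0j
    for i in range(n):
        xi = a + (i + 0.5) * h
        fhat = math.sqrt(2 * math.pi) * math.exp(-xi * xi / 2)
        total += fhat * cmath.exp(1j * xi * xi * t) * h
    return total / (2 * math.pi)

# Closed form: u(0, t) = (1 - 2it)^{-1/2} (principal branch).
for t in (0.0, 0.01, 0.1):
    exact = (1 - 2j * t) ** (-0.5)
    assert abs(T_t_at_zero(t) - exact) < 1e-6

# Pointwise convergence at x = 0: u(0, t) -> f(0) = 1 as t -> 0.
assert abs(T_t_at_zero(0.01) - 1) < 0.05
```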
Recently, in [@ACP23] we expanded this idea into a more flexible method for producing counterexamples to pointwise convergence results of the form ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) for the initial value problem ([\[PDE\]](#PDE){reference-type="ref" reference="PDE"}), using the Weil bound for complete exponential sums. In that initial paper, we demonstrated the new method for symbols of the form $P(X_1,\ldots,X_n) =X_1^k + \cdots + X_n^k$ for any degree $k \geq 3$, and we proved that $s \geq 1/4 + \delta(n,k)$ is necessary for ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) to hold, for $\delta(n,k) = (n-1)/(4((k-1)n+1)).$ Subsequently [@EPV22a] adapted the method of [@ACP23] to achieve a result of the same strength, for any polynomial whose leading form (homogeneous part of highest degree) takes the special shape $$\label{intro_special} P_k(X_1,\ldots,X_n) = X_1^k + Q_k(X_2,\ldots,X_n),$$ where $Q_k \in \mathbb{Q}[X_2,\ldots,X_n]$ is a nonsingular form of degree $k$ that is independent of $X_1$. For degree 2 forms, the special shape ([\[intro_special\]](#intro_special){reference-type="ref" reference="intro_special"}) does not entail a loss of generality, since any quadratic form can be diagonalized over $\mathbb{R}$, and as we will explain, the underlying problem allows for such changes of coordinates. However, for $k \geq 3,$ forms $P_k$ of the shape ([\[intro_special\]](#intro_special){reference-type="ref" reference="intro_special"}) are quite sparse among degree $k$ forms in $\mathbb{Q}[X_1,\ldots,X_n]$ (in a sense we quantify in §[8](#sec_details_examples){reference-type="ref" reference="sec_details_examples"}), and it is well-known that in arithmetic problems, a form with some diagonal structure is generally easier to handle. 
We are motivated by the question: what is the minimal regularity required for ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) when the real polynomial symbol $P$ has leading form belonging to a generic class of degree $k$ forms in $\mathbb{Q}[X_1,\ldots,X_n]$? For any fixed real symbol $P$, the key to proving or disproving pointwise convergence as in ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) is the associated maximal operator $$\label{T_max_dfn} f \mapsto \sup_{0<t<1} |T_t^Pf|.$$ For a given $s$, to prove that pointwise convergence ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) holds for all $f \in H^s(\mathbb{R}^n)$, it suffices to prove (for example) that the maximal operator maps $H^s(\mathbb{R}^n)$ to $L^2_{\mathrm{loc}}(\mathbb{R}^n)$. In the other direction, to prove that convergence ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) fails for some functions in $H^s(\mathbb{R}^n)$, it suffices to prove that the maximal operator is unbounded from $H^s(\mathbb{R}^n)$ to $L^1_{\mathrm{loc}}(\mathbb{R}^n)$; see for example [@Pie20 Appendix A] for a summary of these standard arguments. Thus we state our main result in terms of showing the maximal operator ([\[T_max_dfn\]](#T_max_dfn){reference-type="ref" reference="T_max_dfn"}) is unbounded from $H^s(\mathbb{R}^n)$ to $L^1_{\mathrm{loc}}(\mathbb{R}^n)$ for $s$ in a certain range. For any fixed $n \geq 2$ and degree $k \geq 2,$ nonsingular forms are generic among degree $k$ forms in $\mathbb{Q}[X_1,\ldots,X_n]$. 
Given a value $s>0$, the truth (or falsity) of a bound of the form $$\label{T_bound_inv} \|\sup_{0<t<1}|T_t^{P}f|\|_{L^1_{\mathrm{loc}}(\mathbb{R}^n)} \leq C_s \|f\|_{H^s(\mathbb{R}^n)} \qquad \text{for all $f \in H^s(\mathbb{R}^n)$}$$ is invariant under $\mathrm{GL}_n(\mathbb{R})$-action on the polynomial $P$ (see §[3.5](#sec_change_var){reference-type="ref" reference="sec_change_var"}). Thus if one wishes to understand this putative bound for an arbitrary polynomial $P$ with nonsingular leading form $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$, it is no loss of generality to first apply a $\mathrm{GL}_n(\mathbb{Q})$ change of variable to put $P_k$ in a convenient form. We heavily exploit the following property: for *every* nonsingular form in $L[X_1,\ldots,X_n]$ for an infinite field $L$, there is a $\mathrm{GL}_n(L)$ change of variables under which the form becomes *Dwork-regular* (see the definition in ([\[DR_dfn\]](#DR_dfn){reference-type="ref" reference="DR_dfn"})). Thus in the study of generic forms, it is no loss of generality to focus on Dwork-regular forms, and we do so here. The fact that diagonalization, so convenient for quadratic forms, is out of reach for most higher-degree forms, is a dominant theme in the study of symmetric tensors (which, roughly speaking, generalize the symmetric matrix associated to a quadratic form). This has led to the development of many notions of rank for degree $k$ forms, including the Schmidt rank (or $h$-index), Waring rank (symmetric tensor rank), slicing rank, relative rank, the property of decomposability, and more. Each such notion of rank is motivated by specific applications in algebraic invariant theory, number theory, algebraic geometry, computational complexity, etc. Similarly, our present work leads to a new notion of rank, which we now define. **Definition 1** (Intertwining rank). 
A variable $X_i$ intertwines with $X_j$ (with $i \neq j$) in a form $P_k$ of degree $k \geq 2$ if $(\partial^2/\partial X_i\partial X_j)P_k \not\equiv 0$. By convention, $X_i$ intertwines with itself. The intertwining rank $r(X_i)$ of $X_i$ in $P_k$ is the number of variables with which $X_i$ intertwines. The *intertwining rank* of the form $P_k$ is $\min_{1 \leq i \leq n} r(X_i)$. For example, $X_1^3 + X_2^3 + X_3^3+X_4^3$ has intertwining rank 1, while $X_1^3 + X_1X_2^2 + X_2X_3X_4$ has intertwining rank $2.$ Our main result is: **Theorem 1**. *Fix $n\geq 2$ and $k\geq 2$. Let $P\in \mathbb{R}[X_1,...,X_n]$ be a polynomial whose leading form $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$ is Dwork-regular in $X_1,\ldots,X_n$ over $\mathbb{Q}$ and has intertwining rank $r$. Suppose there is a constant $C_s$ such that for all $f\in H^s(\mathbb{R}^n)$, $$\label{thm_T_bound} \|\sup_{0<t<1}|T_t^{P}f|\|_{L^1(B_n(0,1))} \leq C_s \|f\|_{H^s(\mathbb{R}^n)}.$$ Then $s\geq \frac{1}{4} + \delta(n,k,r)$ with $$\delta(n,k,r)=\frac{n-r}{4((k-1)(n-(r-1))+1)}.$$* We now briefly situate Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} with respect to previous literature, and then we explain the context of Dwork-regular forms, describe more precisely the notion of "generic" forms, and illustrate that a strength of the theorem is its application to indecomposable forms. ## Relation to previous literature on convergence problems As an immediate consequence of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"}, pointwise convergence as in ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) fails for some $f \in H^s(\mathbb{R}^n)$ for the initial value problem ([\[PDE\]](#PDE){reference-type="ref" reference="PDE"}) defined by $P$, for each $s< 1/4 + \delta(n,k,r)$ (following the standard arguments recorded in [@Pie20 Appendix A]). 
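In characteristic zero, the mixed partial $(\partial^2/\partial X_i\partial X_j)P_k$ with $i\neq j$ is nonzero exactly when some monomial of $P_k$ involves both $X_i$ and $X_j$ (distinct monomials differentiate to distinct monomials, so no cancellation can occur), so the intertwining rank can be read off from the monomial support. A minimal Python sketch (the exponent-tuple representation and the helper name are ours, not from the paper):

```python
def intertwining_rank(monomials, n):
    """Intertwining rank of a form in n variables, given its monomials as
    exponent tuples (all with nonzero coefficients, characteristic zero).
    X_i intertwines with X_j (i != j) iff some monomial involves both."""
    inter = [{i} for i in range(n)]  # by convention X_i intertwines with itself
    for e in monomials:
        support = [i for i in range(n) if e[i] > 0]
        for i in support:
            inter[i].update(support)
    return min(len(s) for s in inter)

# The two examples from the definition above:
fermat = [(3, 0, 0, 0), (0, 3, 0, 0), (0, 0, 3, 0), (0, 0, 0, 3)]
mixed = [(3, 0, 0, 0), (1, 2, 0, 0), (0, 1, 1, 1)]  # X1^3 + X1*X2^2 + X2*X3*X4
assert intertwining_rank(fermat, 4) == 1
assert intertwining_rank(mixed, 4) == 2
```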
Our present work also adapts (in a trivial way) to dimension $n=r=1$ (see Remark [Remark 17](#remark_n_r_1){reference-type="ref" reference="remark_n_r_1"}), but we omit the details, since in 1 dimension, ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) holds for all polynomials $P$ of degree $k \geq 2$ if $s \geq 1/4$ and fails if $s<1/4$, by [@KPV91 Cor. 2.6], [@DahKen82; @KenRui83]. The threshold $1/4$ is a common sticking point of many methods in the literature relating to the convergence problem ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}); see e.g. the survey in [@ACP23 §1.2]. The main content of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} is for $n \geq 3$, $k \geq 3$, and $2 \leq r<n$. In all dimensions, counterexamples to ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) and ([\[thm_T\_bound\]](#thm_T_bound){reference-type="ref" reference="thm_T_bound"}) for all $s<1/4$, for any real polynomial symbol (with leading form of any intertwining rank $r \geq 1$), are due to Sjölin [@Sjo98]. Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} is the first result to go beyond $1/4$ for intertwining rank $r \geq 2$, for all dimensions $n \geq 3$. The strength of our result decreases as $r$ increases, and subsides to the requirement $s \geq 1/4$ when $r=n.$ For intertwining rank $r=1$, Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} recovers the special case of diagonal symbols considered in [@ACP23], and the symbols of the form ([\[intro_special\]](#intro_special){reference-type="ref" reference="intro_special"}) considered in [@EPV22a]. 
For degree $k=2$, by the spectral theorem, any quadratic leading form is diagonalizable under $\mathrm{GL}_n(\mathbb{R})$, which (after further renormalization) reduces the case of quadratic forms to the case of intertwining rank $r=1$. For dimension $n=2$, the only cases are intertwining rank $r=1$, in which case Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} recovers a result of [@EPV22a], and $r=2$, in which case Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} states $s \geq 1/4$, which was previously known. It remains an interesting open question whether the regularity condition in Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} can be increased further. In all dimensions, ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) holds for all $s>1/2$, for a wide class of differentiable functions, including any real polynomial $P$ of principal type of order $\alpha$ for $\alpha>1$ (meaning $|\nabla P(\xi)| \gg (1+|\xi|)^{\alpha-1}$ for all sufficiently large $|\xi|$), by [@BenDev91 Thm. D] and [@RVV06 Remark 2.2]. Positive results proving bounds related to ([\[thm_T\_bound\]](#thm_T_bound){reference-type="ref" reference="thm_T_bound"}) for $s \leq 1/2$, such as the celebrated work in the case $P(\xi)=|\xi|^2$ in [@DuZha19], must proceed by entirely different methods. For further notes on the vast literature on convergence results, maximal operators and connections to local smoothing, we refer to [@ACP23 §1.2]. ## The role of Dwork-regular forms Dwork-regular forms have been extensively developed by Dwork [@Dwo62] and later Katz (e.g. [@Kat08; @Kat09]). 
To set the context for their definition, first recall that a form $P_k$ is said to be nonsingular over $\mathbb{Q}$ if the polynomials $P_k$, $\partial P_k /\partial X_1,\ldots, \partial P_k/ \partial X_n$ have no common zeroes in $\mathbb{P}_{\overline{\mathbb{Q}}}^{n-1}$ (correspondingly the projective hypersurface defined by $P_k=0$ in $\mathbb{P}_{\overline{\mathbb{Q}}}^{n-1}$ is nonsingular). (Here and throughout, $\overline{L}$ denotes a fixed algebraic closure of a given field $L$, and $\mathbb{P}_{\overline{L}}^{n-1}$ denotes the $(n-1)$-dimensional projective space over the field $\overline{L}$.) In comparison, $P_k$ is said to be Dwork-regular over $\mathbb{Q}$ in the variables $X_1,\ldots,X_n$ if there are no simultaneous solutions in $\mathbb{P}^{n-1}_{\overline{\mathbb{Q}}}$ to $$\label{DR_dfn} P_k(X_1,\ldots,X_n)=0, \qquad X_i\frac{\partial P_k}{\partial X_i}(X_1,\ldots,X_n)=0, \qquad 1 \leq i \leq n.$$ A comparison of the definitions shows that any Dwork-regular form over $\mathbb{Q}$ is nonsingular over $\mathbb{Q}$. As mentioned before, any nonsingular form becomes Dwork-regular under an appropriate change of variables (see §[3](#sec:dwork){reference-type="ref" reference="sec:dwork"}). Our interest in passing to Dwork-regular forms is that they are particularly amenable to applications of the Weil-Deligne bound (Lemma [Lemma 11](#lem:deligne){reference-type="ref" reference="lem:deligne"}) even after fixing one or more variables (a consequence of Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"}). This allows us to make new progress on the convergence problem ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) despite a central difficulty that appears if each variable "interacts" with other variables in the leading form $P_k$. Intertwining rank captures the amount of such interaction. 
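As a standard concrete illustration (our example, included for orientation), the diagonal form $X_1^k+\cdots+X_n^k$ is Dwork-regular over $\mathbb{Q}$, since the system ([\[DR_dfn\]](#DR_dfn){reference-type="ref" reference="DR_dfn"}) degenerates completely:

```latex
X_i\frac{\partial}{\partial X_i}\left(X_1^k+\cdots+X_n^k\right) = kX_i^k,
\qquad 1\leq i\leq n,
```

so any common solution of ([\[DR_dfn\]](#DR_dfn){reference-type="ref" reference="DR_dfn"}) would satisfy $X_1^k=\cdots=X_n^k=0$, i.e. $X_1=\cdots=X_n=0$, which is not a point of $\mathbb{P}^{n-1}_{\overline{\mathbb{Q}}}$.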
The novelty in our present work is that we can prove new results for forms of all intertwining ranks $1<r<n$. ## Genericity: an underlying motivating question We are motivated by the question: what is the behavior of the initial value problem ([\[PDE\]](#PDE){reference-type="ref" reference="PDE"}) when $P_k$ is a "generic" form in $\mathbb{Q}[X_1,\ldots,X_n]$? Technically, a class of forms is said to be generic if it corresponds to an open set (in the Zariski topology) in the moduli space of all degree $k$ homogeneous forms in $\mathbb{Q}[X_1,\ldots,X_n].$ As one example, nonsingular forms are generic. It is equivalent to show that being singular is a condition on the coefficients of $P_k$ that defines a closed set in the Zariski topology. Since $P_k$ is singular if and only if $P_k, \partial P_k/\partial X_1,...,\partial P_k/\partial X_n$ have a common nonzero root, $P_k$ is singular if and only if the resultant of $P_k, \partial P_k/\partial X_1,...,\partial P_k/\partial X_n$ vanishes. This resultant is a (nonzero) polynomial in the coefficients of $P_k, \partial P_k/\partial X_1,...,\partial P_k/\partial X_n$, so that $P_k$ being singular is characterized by its coefficients lying in the vanishing set of a polynomial, proving the claim. As another example, *indecomposable* forms are generic; we describe this property thoroughly in §[1.4](#sec:indecomposable){reference-type="ref" reference="sec:indecomposable"} below. The union of two Zariski closed sets (e.g. the set of singular forms and the set of decomposable forms) is closed, and so the complement (e.g. the set of forms that are nonsingular and indecomposable) is open, and hence generic. Indeed, any generic condition will include nonsingular forms (generically), and indecomposable forms (generically). This fact cuts in two directions, one convenient and one inconvenient.
On the one hand, even if we are interested in studying generic forms, it is reasonable to consider only nonsingular forms (which is advantageous for an application of Lemma [Lemma 11](#lem:deligne){reference-type="ref" reference="lem:deligne"}). But on the other hand, it shows that to understand the generic situation, we must understand the case of indecomposable forms. One strength of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} is that it provides the first (nontrivial) counterexamples to ([\[intro_convergence\]](#intro_convergence){reference-type="ref" reference="intro_convergence"}) that apply to leading forms $P_k$ that are indecomposable. Thus in the next section we describe decomposability/indecomposability in more detail. Nevertheless, Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} falls short of proving nontrivial results for a generic class of forms: it is only nontrivial for forms of intertwining rank strictly smaller than $n$, and these are not generic. In §[8.4](#sec_codim){reference-type="ref" reference="sec_codim"} we compute the codimension of Dwork-regular forms of intertwining rank $r<n$, among degree $k$ forms in $\mathbb{Q}[X_1,\ldots,X_n]$. This codimension quantifies that Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} proves nontrivial results for a class of forms that is not generic, but that nevertheless contains "many more" real symbols than were tractable in previous works. ## Indecomposable forms of degree $k$: definition, remarks and examples {#sec:indecomposable} A form is called decomposable (or sometimes of Sebastiani-Thom type) over a field $L$ if there is a $\mathrm{GL}_n(L)$ change of variables so that the form can be written as a sum of at least two forms in disjoint sets of variables, for example $$\label{PQQ} P(X_1,...,X_n) = Q_1(X_{1},...,X_{m})+Q_2(X_{m+1},...,X_{n})$$ for some $1 \leq m < n$. Otherwise, a form is indecomposable over $L$.
All of the forms considered in [@ACP23; @EPV22a] were of the special shape ([\[intro_special\]](#intro_special){reference-type="ref" reference="intro_special"}), and thus have intertwining rank $r=1$. All forms with intertwining rank $r=1$ are decomposable. Yet we are motivated to tackle indecomposable forms, since in the moduli space of degree $k \geq 3$ forms in $\mathbb{Q}[X_1,\ldots,X_n],$ indecomposable forms are generic for $n \geq 2$, at least for $(n,k)\neq (2,3)$. (This is related e.g. to [@Wan15 §6], [@HLYZ21 Thm. 3.2], [@ORySha03 p. 303]; see further details in §[8](#sec_details_examples){reference-type="ref" reference="sec_details_examples"}.) Note that the intertwining rank of any decomposable polynomial is at most $\lfloor n/2 \rfloor$; indeed, in the coordinates of ([\[PQQ\]](#PQQ){reference-type="ref" reference="PQQ"}) no variable of $Q_1$ intertwines with a variable of $Q_2$, so the intertwining rank is at most $\min(m,n-m)\leq \lfloor n/2 \rfloor$. We use this to deduce the following immediate corollary of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"}, verified in §[3.6](#sec_cor){reference-type="ref" reference="sec_cor"}. **Corollary 2**. *Fix $n\geq 2$ and $k\geq 2$. Let $P\in \mathbb{R}[X_1,...,X_n]$ be a polynomial whose leading form $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$ is decomposable and nonsingular in $X_1,\ldots,X_n$ over $\mathbb{Q}$. Suppose there is a constant $C_s$ such that for all $f\in H^s(\mathbb{R}^n)$, $$\|\sup_{0<t<1}|T_t^{P}f|\|_{L^1(B_n(0,1))} \leq C_s \|f\|_{H^s(\mathbb{R}^n)}.$$ Then $s\geq \frac{1}{4} + \frac{n}{4((k-1)(n+2)+2)}$.* Since it has been remarked that it is challenging to exhibit indecomposable forms (see e.g. [@Pum06 p. 348], [@Wan15 p. 576]), we provide explicit examples.
For each degree $k \geq 3$ and rank $2\leq r \leq n,$ we exhibit indecomposable forms of degree $k$ that are Dwork-regular over $\mathbb{Q}$ in $X_1,\ldots, X_n$ with intertwining rank $r$, namely $$\begin{aligned} P_k(X_1,\ldots,X_n)&=X_1^k+\cdots + X_n^k + \sum_{2\leq j \leq r} X_1X_j^{k-1} + \sum_{2\leq i < j \leq n} X_iX_j^{k-1},\quad \text{$k \geq 3$ odd;} \\ P_k(X_1,\ldots,X_n)&=X_1^k+\cdots + X_n^k + \sum_{2\leq j \leq r} X_1^2X_j^{k-2}+ \sum_{2\leq i < j \leq n} X_i^2X_j^{k-2}, \quad \text{$k \geq 4$ even.} \end{aligned}$$ For example, in dimension $n=3$ the examples with intertwining rank $2$ are $$\begin{aligned} P_k(X_1,X_2,X_3)&=X_1^k+X_2^k + X_3^k + X_1X_2^{k-1}+ X_2X_3^{k-1},\quad \text{$k \geq 3$ odd;} \\ P_k(X_1,X_2,X_3)&=X_1^k+X_2^k + X_3^k + X_1^2X_2^{k-2} + X_2^2X_3^{k-2}, \quad \text{$k \geq 4$ even.}\end{aligned}$$ In §[8](#sec_details_examples){reference-type="ref" reference="sec_details_examples"}, we use a criterion of Harrison [@Har75; @HarPar88] to verify that these are indecomposable forms over $\mathbb{Q}$ (and hence in particular cannot be brought to have intertwining rank $1$ by any $\mathrm{GL}_n(\mathbb{Q})$ change of variables). ## Further directions By the invariance of ([\[T_bound_inv\]](#T_bound_inv){reference-type="ref" reference="T_bound_inv"}) under $\mathrm{GL}_n(\mathbb{Q})$-action on $P$, the main result of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} furthermore applies to any polynomial $P$ with leading form $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$ lying in the $\mathrm{GL}_n(\mathbb{Q})$-orbit of a Dwork-regular form of intertwining rank $r$. How big is such an orbit? This points to an interesting question, which is in fact typical when one encounters a notion of rank for a higher degree form. 
Given a particular notion of rank, in an application one often wants to manipulate the original form (or class of forms) to make the (particular) rank more advantageous; the limits of this procedure may depend on the underlying field (and whether it is algebraically closed). For example, in the case of Schmidt rank, there is recent work on regularization and a new relative rank in [@LamZie21x], and the relation to algebraic closure in [@LamZie22x]. In the case of Waring rank, see the celebrated work of Alexander and Hirschowitz [@AleHir95] over $\mathbb{C}$ (and a nice overview in [@RanSch00]), or more recent work for monomials in [@CCG12] and partial progress over $\mathbb{R}$ or $\mathbb{Q}$ in [@HanMoo22], with intriguing remarks on the dependence on the underlying field in [@Rez13]. For our particular setting, this becomes the question: how does the intertwining rank behave under minimization via $\mathrm{GL}_n(\mathbb{Q})$? We pursue this question, which requires completely different methods, in other work. ## Notation In this paper we employ the convention $e(t) = e^{it}$. Correspondingly, $\hat{f}(\xi) = \int_{\mathbb{R}^m}f(x) e^{-i x\cdot \xi} dx$ and $f(x) = (2\pi)^{-m}\int_{\mathbb{R}^m} \hat{f}(\xi) e^{i x \cdot \xi} d\xi$, so that Plancherel's theorem takes the form $\|f\|^{2}_{L^2(\mathbb{R}^m)} = (2\pi)^{-m} \|\hat{f}\|_{L^2(\mathbb{R}^m)}^2.$ The Sobolev space $H^s(\mathbb{R}^m)$ is defined to be all $f \in \mathcal{S}'(\mathbb{R}^m)$ with finite Sobolev norm $$\|f\|_{H^s(\mathbb{R}^m)}^2 = \frac{1}{(2\pi)^m} \int_{\mathbb{R}^m} (1+|\xi|^2)^s |\hat{f}(\xi)|^2 d\xi.$$ We use the convention that $B_m(c,r)$ is the Euclidean ball of radius $r$ centered at $c$ in $\mathbb{R}^m$. The notation $A \ll_\kappa B$ denotes that $|A| \leq C(\kappa) B$ for a constant $C(\kappa)$. 
It is harmless in our argument to allow constants to depend on the dimension $n$, the symbol $P$ of degree $k$, the intertwining rank $r$, and a Schwartz function $\phi$ we will fix once and for all. Certain small constants, which we can choose freely, we will denote by $c_0,c_1,c_2,...$; we will demarcate these explicitly in inequalities when we are preparing to exploit their small size. For $v=(v_1,...,v_m),w=(w_1,...,w_m)\in \mathbb{R}^m$, we define $v\circ w = (v_1w_1,...,v_mw_m)$. For a multi-index $\alpha$ we set $y^\alpha= y_1^{\alpha_1} \cdots y_n^{\alpha_n}$, $\alpha! = \alpha_1! \cdots \alpha_n!,$ $|\alpha| = \alpha_1 + \cdots + \alpha_n,$ and $\partial^\alpha= \partial_1^{\alpha_1} \cdots \partial_n^{\alpha_n};$ for two multi-indices $\alpha,\beta$, $\alpha\geq \beta$ and $\alpha- \beta$ denote coordinate-wise relations. # Method of proof {#sec:method} To prove Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"}, we construct a family of test functions $\{f_j\}$ that are Fourier-supported in an annulus $\{(1/C)R_j\leq |\xi|\leq CR_j\}$ of radius $R_j$ for a sequence of $R_j \rightarrow\infty$ as $j \rightarrow\infty$. By definition $$R_j^s \|f_j\|_{L^2}\ll_{C,s} \|f_j\|_{H^s} \ll_{C,s} R_j^s \|f_j\|_{L^2}.$$ Hence Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} follows immediately from an explicit construction: **Theorem 3**. *Let $n \geq 2$. Let $P\in \mathbb{R}[X_1,...,X_n]$ be a polynomial of degree $k \geq 2$ whose leading form $P_k\in \mathbb{Q}[X_1,\ldots,X_n]$ is Dwork-regular over $\mathbb{Q}$ in the variables $X_1,\ldots,X_n$, and has intertwining rank $r \leq n$. Fix any $s< \frac{1}{4} + \delta(n,k,r)$ with $\delta(n,k,r)$ as in Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"}. 
Then there exists an infinite sequence of indices $j \rightarrow\infty$ such that for $R_j=2^j$ there exists a function $f_j\in L^2(\mathbb{R}^n)$ with $\|f_j\|_{L^2}=1$, whose Fourier transform $\hat{f}_j$ is supported in the annulus $\{(1/C)R_j\leq |\xi|\leq CR_j\}$, and such that $$\lim_{j\rightarrow \infty} \frac{\|\sup_{0<t<1} |T_t^{P}f_j(x)|\|_{L^1(B_n(0,1))}}{R_j^s} = \infty.$$* To prove Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}, we define each test function $f$ so that $|T_t^Pf(x)|$ can be approximated by an $(n-r)$-dimensional exponential sum, which we show is "large" for many $x \in B_n(0,1)$, after choosing $t$ appropriately (depending on $x$). This strategy is motivated by ideas of Bourgain (for degree $k=2$) as explained in [@Pie20], and by the flexible construction (applicable to degree $k\geq 3$) developed in our earlier work [@ACP23], for the diagonal case $P(X_1,\ldots,X_n)=X_1^k + \cdots + X_n^k.$ However, a difficulty arises if the leading form $P_k$ of the real symbol has intertwining rank $r>1$: in order to optimize the test functions $f$ to violate the supposed upper bound ([\[thm_T\_bound\]](#thm_T_bound){reference-type="ref" reference="thm_T_bound"}) for $s$ as large as possible, one is naturally led to dilate one variable within the exponential sum, say $X_1,$ by a large parameter. This large dilation contributes large error terms to certain approximation arguments. We overcome this by using the notion of intertwining rank. To assist the reader in tracking the main ideas of the method, we now present a series of heuristic computations that are not rigorous, but simply emphasize numerology. The remainder of the paper carries out each step rigorously.
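The reduction above rests on the equivalence $R_j^s \|f_j\|_{L^2} \ll \|f_j\|_{H^s} \ll R_j^s \|f_j\|_{L^2}$ for functions whose Fourier transform lives in an annulus of radius $R_j$. A minimal numerical sketch (ours, in one dimension, with an arbitrary indicator profile for $\hat f$) evaluates both norms on the Fourier side, where the $(2\pi)^{-m}$ factors from the Notation section cancel in the ratio:

```python
import math

def hs_over_l2(R, s, N=4000):
    """Ratio ||f||_{H^s} / ||f||_{L^2} when hat(f) is the indicator of
    {R <= xi <= 2R}, via a midpoint Riemann sum on the Fourier side.
    (A symmetric negative-frequency piece would contribute identically.)"""
    h = R / N
    l2 = hs = 0.0
    for i in range(N):
        xi = R + (i + 0.5) * h
        l2 += h                         # integrand 1 for the L^2 norm
        hs += (1.0 + xi * xi) ** s * h  # Sobolev weight (1 + xi^2)^s
    return math.sqrt(hs / l2)

# The ratio is trapped between R^s and roughly (2R)^s, matching the
# two-sided bound with C = 2.
for j in (6, 10, 14):
    R = 2.0 ** j
    print(j, hs_over_l2(R, 0.7) / R ** 0.7)
```

On the support, $(1+\xi^2)^{s/2}$ is comparable to $R^s$ with constants depending only on $C$ and $s$, which is all that the two-sided bound uses.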
## Heuristic overview {#sec_heuristic} Let $\Phi_n(x_1,\ldots,x_n)$ be a non-negative Schwartz function with $\Phi_n (0)=1$ and $\hat{\Phi}_n$ supported in $[-1,1]^n.$ As in Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}, we think of $R$ as a parameter that will go to infinity. Define a test function $$\label{intro_f_dfn} f(x) = \Phi_n(S \circ x) \sum_{\substack{m \in \mathbb{Z}^n \\ m_j \approx R/\Lambda_j}} e((\Lambda \circ m) \cdot x)$$ for some parameters $S=(S_1,\ldots,S_n)$ and $\Lambda = (\Lambda_1,\ldots,\Lambda_n)$, with each $S_i,\Lambda_i$ chosen later to be $1, R$ or a small power of $R$. Let $\|S\| = \prod S_j$ for the moment, and similarly for $\|\Lambda\|$. The Fourier transform $\hat{f}$ is supported in an annulus of radius $\approx R$ if each $S_j\ll R$, and $\|f\|_{H^s} \approx R^s \|S\|^{-1/2} R^{n/2}\|\Lambda\|^{-1/2}$, so that by normalizing appropriately, $f$ fits the hypotheses of Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}. For this test function, $$T_t^Pf(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} \hat{\Phi}_n(\xi) e((S \circ \xi)\cdot x) \sum_{\substack{m \in \mathbb{Z}^n \\ m_j \approx R/\Lambda_j}} e((\Lambda \circ m) \cdot x) e(P(S \circ \xi + \Lambda \circ m)t)d\xi.$$ For simplicity we temporarily assume $P$ is a homogeneous form of degree $k\geq 2$, defined in terms of coefficients and multi-indices by $$P(y_1,\ldots,y_n) = \sum_{|\alpha| =k} c_{\alpha} y^\alpha.$$ Step 1: Use partial summation to remove all terms in the sum over $m$ that depend on $\xi$ (i.e. 
that are "not arithmetic"); this replaces the sum over $m$ (up to an error term) by $$\label{intro_arith} \approx w(R/\Lambda_1,\ldots, R/\Lambda_n) \sum_{\substack{m \in \mathbb{Z}^n \\ m_j \approx R/\Lambda_j}} e((\Lambda \circ m) \cdot x+P(\Lambda \circ m)t),$$ in which the new sum is the "arithmetic contribution" while $$\label{intro_w_dfn} w(y_1,\ldots,y_n) = e((P(S \circ \xi + \Lambda \circ y) - P(\Lambda \circ y))t)$$ is the "weight" that has been removed by partial summation. The weight $w(R/\Lambda_1,\ldots, R/\Lambda_n)$ contributes to the integral over $\xi$ a factor with a linear phase in $\xi$, namely $\approx e( S \circ \xi \cdot t \nabla P(\underline{R})),$ where $\underline{R}=(R,\ldots,R).$ Step 2: Use integration by parts to remove all terms in the phase of the integral over $\xi$ that are order 2 or higher in $\xi$ (up to an error term). Step 3: After Step 2, one may immediately apply Fourier inversion to the remaining integral over $\xi$, so that the main contribution to $T_t^Pf(x)$ is a product of the arithmetic contribution and $$\approx \Phi_n(S \circ (x + t \nabla P(\underline{R}))).$$ Then place constraints on $x$ and $t$ so that $S \circ (x + t \nabla P(\underline{R})) \approx 0$, so that applying $\Phi_n(0)=1$ (and continuity of $\Phi_n$) implies that $\Phi_n(S \circ (x + t \nabla P(\underline{R})))\approx 1$ and hence $$\label{intro_T_sum} |T_t^Pf(x)| \approx |\sum_{\substack{m \in \mathbb{Z}^n \\ m_j \approx R/\Lambda_j}} e((\Lambda \circ m) \cdot x+P(\Lambda \circ m)t)|.$$ Step 4: Construct a set of $x \in \mathbb{R}^n$ that is a positive proportion of $B_n(0,1)$ so that for each $x$ in the set there exists $t \in (0,1)$ for which the arithmetic contribution ([\[intro_T\_sum\]](#intro_T_sum){reference-type="ref" reference="intro_T_sum"}) is large. 
Up to some simple changes of variables, the set is a union of boxes centered at rationals $(a_1/q,a_2/q,\ldots,a_n/q)$ for primes $q \approx Q,$ where $Q$ is a small power of $R$ to be chosen later. Step 5: Optimize the choices of $S=(S_1,\ldots,S_n),$ $\Lambda = (\Lambda_1,\ldots,\Lambda_n)$, and $Q$, subject to the constraints that the error terms in all previous steps are acceptable. To unravel the chain of dependencies that make these steps efficient and compatible, first consider Step 3, which requires that for each $j=1,\ldots,n$ $$\label{t_relation_heuristic} S_j (x_j + t R^{k-1}) \ll 1 \iff t \approx -\frac{x_j}{R^{k-1}} + O( \frac{1}{R^{k-1}S_j}).$$ (This sketch assumes in particular that $\partial_1 P(\underline{R})\gg R^{k-1}$; to achieve this, we develop Lemma [Lemma 8](#lem:coefficients){reference-type="ref" reference="lem:coefficients"}.) Given $(x_1,\ldots,x_n)$, if we choose $t$ to satisfy this for $x_1$, then the only way it can simultaneously satisfy it for $x_2,\ldots,x_n$ is for $x_2,\ldots,x_n$ to all lie in $O(S_j^{-1})$ neighborhoods of $x_1$. This is too limiting in Step 4 unless we set $S_2=\cdots=S_n=1$, which we now do, so $S=(S_1,1,\ldots,1)$ and $\|S\|=S_1.$ From now on, because of the "large" rescaling factor $S_1$, the first coordinate $x_1$ will play a special role. Next consider Step 2, in which we use iterated integration by parts (coordinate by coordinate) to remove a "weight" from the integral that contains all terms in the phase that are order 2 or higher in $\xi$; this weight takes the approximate form $$W(\xi_1,\ldots,\xi_n) = e([P(S \circ \xi + \underline{R}) - L_0(\xi) - L_1(\xi)]t) = e(t \sum_{\substack{|\beta+ \gamma| = k \\ |\beta|\geq 2}}c_{\beta+ \gamma}C(\beta,\gamma) (S \circ \xi)^\beta\underline{R}^\gamma),$$ in which $L_0$ (respectively $L_1$) represents terms in $P(S \circ \xi + \underline{R})$ that are order 0 (respectively order 1) in $\xi$, and $C(\beta,\gamma)$ are positive combinatorial constants. 
As usual, the error term when a weight is removed by integration by parts (or summation by parts) will be smaller if the weight is slowly-varying, and thus we must control the derivatives of $W$. The error accrued in Step 2 must be at most a small proportion of the main term ([\[intro_T\_sum\]](#intro_T_sum){reference-type="ref" reference="intro_T_sum"}). This will be achieved if for each multi-index $\kappa \in \{0,1\}^n,$ for all $\xi \in {\rm supp \;}\hat{\Phi}_n \subseteq [-1,1]^n$, $$\label{intro_W_sum} |\frac{\partial^{|\kappa|}}{\partial \xi^\kappa} W(\xi)| \ll 1 \iff t \sum_{\substack{|\beta+ \gamma| = k \\ |\beta| \geq 2, \beta\geq \kappa}}c_{\beta+ \gamma}C(\beta,\gamma) (S \circ \xi)^{\beta- \kappa}S^\kappa \frac{\beta!}{( \beta- \kappa)!} \underline{R}^\gamma\ll 1.$$ From Step 3 we know that $t\approx R^{-(k-1)},$ so we require the sum above to satisfy $\ll R^{k-1}$. Each term in the sum is roughly of size $S^\beta\underline{R}^\gamma= S_1^{\beta_1} R^{k-|\beta|}$ for some $|\beta|\geq 2$. There are two scenarios: if $|\beta|>\beta_1$, this term is $\ll R^{k-1}$ as long as $S_1 \ll R$. If $\beta_1 = |\beta|$, which can only occur if $\beta_1\geq 2,$ then this term is $\ll R^{k-1} (S_1^{\beta_1}/R^{\beta_1-1})$, which is $\ll R^{k-1}$ as long as $S_1 \ll R^{\frac{\beta_1 -1}{\beta_1}},$ which is most restrictive when $\beta_1=2$. Thus we impose the condition $S_1 \ll R^{1/2}.$ Next consider Step 1, in which the error introduced by iterated partial summation must be at most a small proportion of the main term ([\[intro_T\_sum\]](#intro_T_sum){reference-type="ref" reference="intro_T_sum"}). 
This will be achieved if for each multi-index $\kappa \in \{0,1\}^n,$ for all $y$ with $y_j \approx R/\Lambda_j$, $$\label{intro_w_sum} |\frac{\partial^{|\kappa|}}{\partial y^\kappa} w(y)| \ll \Lambda^\kappa R^{-|\kappa|} \iff t \sum_{\substack{|\beta+ \gamma| = k \\ |\beta| \geq 1, \gamma\geq \kappa}}c_{\beta+ \gamma}C(\beta,\gamma) (S \circ \xi)^{\beta} (\Lambda \circ y)^{\gamma-\kappa} \Lambda^\kappa \frac{\gamma!}{( \gamma- \kappa)!}\ll \Lambda^\kappa R^{-|\kappa|}.$$ Note that in contrast to ([\[intro_W\_sum\]](#intro_W_sum){reference-type="ref" reference="intro_W_sum"}), in this case the phase in the weight $w(y)$ includes terms of order 1 in $\xi$, so that $|\beta|=1$ is allowed in the sum immediately above. Each term in the sum immediately above is roughly of size $S^\beta\Lambda^\kappa (\Lambda \circ y)^{\gamma- \kappa } \approx S_1^{\beta_1} \Lambda^\kappa \underline{R}^{\gamma- \kappa}\approx S_1^{\beta_1} R^{k - |\beta|} \cdot \Lambda^\kappa R^{- |\kappa|}.$ Thus the condition ([\[intro_w\_sum\]](#intro_w_sum){reference-type="ref" reference="intro_w_sum"}) will be met, recalling $t\approx R^{-(k-1)}$, as long as $S_1^{\beta_1} R^{k - |\beta|} \ll R^{k-1}$ for all $|\beta| \geq 1$. There are again several scenarios: if $|\beta| > \beta_1$ or if $\beta_1 = |\beta| \geq 2$, this term is $\ll R^{k-1}$ by arguing as in Step 2, under our assumption $S_1 \ll R^{1/2}$. The problem is that there is now also a third case, with $\beta_1 = |\beta|=1$ in which case the requirement is asking that $S_1^{1} R^{k-1} \ll R^{k-1}$. These problematic terms can be seen as the contribution to the weight ([\[intro_w\_dfn\]](#intro_w_dfn){reference-type="ref" reference="intro_w_dfn"}) that is varying the fastest with respect to $y$, namely the portion of the phase that is highest order in $y$ (total degree $k-1$) and linear in $\xi$. (We also provide an explicit example of such terms in ([\[P_example\]](#P_example){reference-type="ref" reference="P_example"}).) 
One way to achieve the requirement $S_1^{1} R^{k-1} \ll R^{k-1}$ is to impose $S_1 \ll 1$, but this is inefficient in Step 5. The strategy we adopt is to modify the definition of the test function $f$ so that such terms never appear. Precisely, in the definition ([\[intro_f\_dfn\]](#intro_f_dfn){reference-type="ref" reference="intro_f_dfn"}) of the test function $f$, we now restrict the sum over $m \in \mathbb{Z}^n$ to sum only over those coordinates $m_j$ with the following property: for each multi-index $\alpha=(\alpha_1,\ldots,\alpha_n)$, if $\alpha_j \geq 1$ and $\alpha_1 \geq 1$ then the coefficient $c_\alpha=0$ in the original polynomial $P(y)$. Equivalently, we define the sum to be only over those coordinates $m_j$ such that $X_j$ never appears in a monomial with $X_1$ in the original polynomial $P(y)$. (Equivalently, set $\Lambda_j \approx R$ for each $j$ such that $X_j$ appears in a monomial with $X_1$, and in all other coordinates take $\Lambda_j=L$, for $L$ a small power of $R$ to be chosen later.) Because the exponential sum is a source of gain in Steps 4 and 5, we wish to sum over as many coordinates as possible, so depending on $P(y)$ we relabel coordinates in the beginning so that $X_1$ is the variable that appears in monomials with as few other coordinates as possible, say $X_2,\ldots, X_r$ with $r<n$; this is the motivation for defining intertwining rank. We now let $m = (m_{r+1},\ldots,m_n)$, and only sum over these coordinates. The conclusion is that in place of ([\[intro_T\_sum\]](#intro_T_sum){reference-type="ref" reference="intro_T_sum"}) we arrive at a main contribution of the form $$\label{intro_T_sum_mod} |T_t^Pf(x)| \approx |\sum_{\substack{(m_{r+1},\ldots,m_n) \in \mathbb{Z}^{n-r} \\ m_j \approx R/L}} e(m \cdot (Lx_{r+1},\ldots,Lx_n)+P(R/L,\ldots,R/L,m)L^k t)|.$$ (Here we used homogeneity of $P$ of degree $k$; to make $R/L$ integral, see Remark [Remark 19](#remark_integer){reference-type="ref" reference="remark_integer"}.) 
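The complete sums mod $q$ that will emerge from ([\[intro_T\_sum_mod\]](#intro_T_sum_mod){reference-type="ref" reference="intro_T_sum_mod"}) in Step 4 exhibit square-root cancellation. As a quick one-dimensional sanity check (an illustration of ours, with the arbitrary Deligne-type choice $f(m)=m^3+am$ and a prime $q$ with $q \nmid 3$, writing $e_q(x)=e^{2\pi i x/q}$): the Weil bound gives $|\sum_{m \bmod q} e_q(f(m))| \leq (\deg f - 1)\sqrt{q}$ for every twist $a$, while the mean of the modulus squared over $a \bmod q$ is exactly $q$, so many twists attain size $\approx \sqrt{q}$:

```python
import cmath
import math

def complete_sum(coeffs, q):
    """sum_{m mod q} exp(2*pi*i*f(m)/q) for f with integer coefficients
    coeffs = [c_0, c_1, ..., c_d] (so f has degree d)."""
    total = 0j
    for m in range(q):
        fm = 0
        for c in reversed(coeffs):  # Horner evaluation of f(m) mod q
            fm = (fm * m + c) % q
        total += cmath.exp(2j * math.pi * fm / q)
    return total

q = 101  # prime not dividing the degree d = 3
sums = [abs(complete_sum([0, a, 0, 1], q)) for a in range(q)]  # f = m^3 + a*m

print(max(sums) <= (3 - 1) * math.sqrt(q) + 1e-9)  # Weil bound holds
print(sum(x * x for x in sums) / q)                # mean square: exactly q
```

In the argument the relevant complete sums are $(n-r)$-dimensional, and the optimal size $q^{(n-r)/2}$ appearing in Step 4 is the higher-dimensional analogue of the $\sqrt{q}$ scale visible here.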
In Step 4, to construct a set of $x$ for which the above sum in ([\[intro_T\_sum_mod\]](#intro_T_sum_mod){reference-type="ref" reference="intro_T_sum_mod"}) is "large," imagine that each $Lx_j =a_j/q$ and $L^kt=a_1/q$ are rationals of prime denominator $q \approx Q,$ so that the sum can be regarded as $\approx ((R/L)q^{-1})^{n-r}$ copies of a sum where each coordinate $m_j$ runs over a complete set of residues modulo $q$. Since $P(y_1,\ldots,y_n)$ is Dwork-regular, even after specializing the first $r$ variables, the remaining polynomial is well-behaved (Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"}). A key point in our argument is that for a positive proportion ($\gg q^{n-r+1}$) of choices of $a_1, a_{r+1},\ldots, a_n$ the $(n-r)$-dimensional sum mod $q$ is of the optimal size $\approx q^{(n-r)/2}$ (Proposition [Proposition 10](#prop:bound_T){reference-type="ref" reference="prop:bound_T"}). Hence at precisely such a point $x$ and for such a $t,$ $$\label{intro_T_size} |T_t^Pf(x)| \approx \left(\frac{R}{Lq}\right)^{n-r} q^{(n-r)/2} \approx \left(\frac{R}{LQ^{1/2}}\right)^{n-r}.$$ To achieve a similar result for a positive measure set of $x$, we need to show this continues to hold for $Lx_j$ and $L^kt$ merely "close" to rationals with denominator $q$. Typically, deducing this from the case where they are precisely rationals would follow by applying partial summation to the sum in ([\[intro_T\_sum_mod\]](#intro_T_sum_mod){reference-type="ref" reference="intro_T_sum_mod"}). The error incurred by partial summation will be too large if the "weight" removed involves terms of the highest order in $m$. Thus we must choose $t$ so that (i): $L^kt=a_1/q$ precisely. Fortunately this is possible because of the wiggle room allowed in (ii): $L^kt \approx -L^kx_1/R^{k-1} + O(L^k/R^{k-1}S_1)$, from ([\[t_relation_heuristic\]](#t_relation_heuristic){reference-type="ref" reference="t_relation_heuristic"}) in Step 3.
Given $x_1$ with $-L^kx_1/R^{k-1}$ in an interval of length $O(1/q)$ centered at $a_1/q$ we can always choose $t$ meeting both requirements (i) and (ii) as long as $Q^{-1} \ll L^k/R^{k-1}S_1,$ which we now assume. In contrast, the coordinates $Lx_{r+1},\ldots,Lx_n$ appear as coefficients of the lowest-order (linear) terms in $m$ so that partial summation will contribute reasonable errors when we allow $Lx_j$ to vary in an interval around $a_j/q$. If the interval is of length $V$, say, the contributed error will be proportional to $(R/L)^{n-r}V^{n-r}$ times the size of the main term, so we require $(R/L)^{n-r}V^{n-r}\ll 1$. At the same time, we want $V$ to be large in order for the boxes so constructed to cover a positive proportion of $(x_{r+1},\ldots,x_n) \in [0,1]^{n-r}$. In this regard the principle of simultaneous Dirichlet approximation in $n-r$ dimensions motivates the choice $V=1/(qQ^{1/(n-r)}) \approx Q^{-1-1/(n-r)}$ (see e.g. [@Pie20 Appendix B]). Taken together, these two requirements force the condition $Q^{-1-1/(n-r)} \ll L/R.$ Cumulatively, this construction yields boxes in the coordinates $x_1,x_{r+1},\ldots,x_n$, each of measure $\approx Q^{-1} V^{n-r}$, centered at the $\gg q^{n-r+1}$ rational tuples with denominator $q$ for each prime $q \approx Q$. A naive calculation suggests the total measure of the union of the boxes could be $\approx (Q/\log Q) Q^{n-r+1} Q^{-1} (Q^{-1-1/(n-r)})^{n-r} \approx 1/\log Q$. Since the boxes can overlap significantly, a more careful justification is required, although the conclusion agrees with the above (Proposition [Proposition 20](#prop:Omega){reference-type="ref" reference="prop:Omega"}). Upon reaching Step 5, these heuristics suggest that for the test function $f$ so constructed, with $\|f\|_{H^s} \approx R^s S_1^{-1/2} (R/L)^{(n-r)/2}$, on a set $x \in B_n(0,1)$ of measure $\gg 1/\log Q,$ $|T_t^Pf(x)| \gg (R/LQ^{1/2})^{n-r}$.
This occurs under the constraints $S_1 \leq R^{1/2}$, $Q^{-1-1/(n-r)} \ll L/R$, $Q^{-1} \ll L^k/R^{k-1}S_1$. The claim of Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"} consequently holds for each $s$ such that $$\frac{(\frac{R}{LQ^{1/2}})^{n-r}(\log Q)^{-1}}{S_1^{-1/2} (R/L)^{(n-r)/2}} \gg R^{s'}$$ for some $s'>s.$ Upon setting $L=R^\lambda, Q=R^\kappa, S_1= R^\sigma$ this is equivalent to a linear condition on $\lambda, \kappa, \sigma$ and $s$ subject to linear constraints, and it can be optimized, which we do in detail in §[7](#sec:parameters){reference-type="ref" reference="sec:parameters"}. ## Outline of the paper In §[3](#sec:dwork){reference-type="ref" reference="sec:dwork"} we state and prove all the key properties of Dwork-regular polynomials we will use, including that upon fixing one or more variables, the resulting polynomial is a Deligne polynomial over $\mathbb{F}_q$ (for all but finitely many primes $q$). In §[4](#sec:exponential){reference-type="ref" reference="sec:exponential"} we prove upper and lower bounds on (complete and incomplete) exponential sums involving Deligne polynomials. In §[5](#sec:reducing){reference-type="ref" reference="sec:reducing"} we approximate $T_t^{P}f$, for appropriate test functions $f$, by an exponential sum. In §[6](#sec:arithmetic){reference-type="ref" reference="sec:arithmetic"} we define a set of $x \in B_n(0,1)$ for which we can approximate this sum by complete exponential sums to which we can apply the arithmetic results of §[4](#sec:exponential){reference-type="ref" reference="sec:exponential"}. In §[7](#sec:parameters){reference-type="ref" reference="sec:parameters"} we then optimize the choices of all parameters, thus proving Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}. 
Finally, in §[8](#sec_details_examples){reference-type="ref" reference="sec_details_examples"} we provide details on examples of Dwork-regular, indecomposable forms of arbitrary intertwining rank and degree, and remark on the codimension of such forms.

# Properties of Dwork-regular forms {#sec:dwork}

In this section we first gather three algebraic properties of Dwork-regular forms that we will apply throughout the proof. Then in §[3.4](#sec_gradient){reference-type="ref" reference="sec_gradient"}, we prove a lower bound on a partial derivative of a Dwork-regular form, in §[3.5](#sec_change_var){reference-type="ref" reference="sec_change_var"} we show that boundedness (or unboundedness) of the maximal operator is invariant under a $\mathrm{GL}_n(\mathbb{Q})$ change of variables, in §[3.6](#sec_cor){reference-type="ref" reference="sec_cor"} we verify Corollary [Corollary 2](#corollary_decomp){reference-type="ref" reference="corollary_decomp"}, and finally in §[3.7](#sec:dispersive){reference-type="ref" reference="sec:dispersive"} we remark that the PDEs we consider are dispersive. It is convenient to work temporarily in a more abstract setting, and simply fix a field $L$, which could for example be $\mathbb{Q}$, $\mathbb{R}$ or a finite field $\mathbb{F}_q$. We will later call upon the lemmas we prove both in the setting of infinite fields such as $\mathbb{Q}$ and $\mathbb{R}$, and in that of finite fields $\mathbb{F}_q$ for $q$ prime. Let $L$ be a field and $H\in L[X_1,...,X_n]$ a homogeneous polynomial of degree $k \geq 2$. Then $H$ is nonsingular over $L$ if $H, \partial H /\partial X_1,\ldots, \partial H/ \partial X_n$ have no common zeroes in $\mathbb{P}_{\overline{L}}^{n-1}$ (correspondingly the projective hypersurface defined by $H=0$ in $\mathbb{P}_{\overline{L}}^{n-1}$ is nonsingular).
Recall that $H$ is Dwork-regular in the variables $X_1,...,X_n$ over $L$ if there are no solutions in $\mathbb{P}^{n-1}_{\overline{L}}$ to the simultaneous equations $$H(X_1,\ldots,X_n)=0, \qquad X_i\frac{\partial H}{\partial X_i}(X_1,\ldots,X_n)=0 \qquad 1 \leq i \leq n.$$ If $H$ is Dwork-regular over $L$, then $H$ is nonsingular over $L$. However, it can be that $H$ is nonsingular but not Dwork-regular: for example, $X_1^k +\cdots + X_{n-1}^{k} + X_{n-1}X_n^{k-1}$ over $\mathbb{Q}$. On the other hand, if the field $L$ is infinite, given any nonsingular form, there exists a $\mathrm{GL}_n(L)$ change of variables under which the form becomes Dwork-regular (see [@Dwo62 pp. 67--68] and [@Kat09 Lemma 3.1]). (In fact, for a given form, there are many such changes of variables: the proof of [@Kat09 Lemma 3.1] can be adapted to show that the set of such elements is dense in $\mathrm{GL}_n(L)$.) We quote from Katz [@Kat08 p. 1252]: the archetypical Dwork-regular polynomial $H$ would be of the form $H(X) = \sum_{i=1}^n X_i ^k + \tilde{H}(X)$, where $\tilde{H}$ is any polynomial of degree at most $k-1$. The antithesis to a Dwork-regular polynomial, when $n=2m$ is even and $L$ has odd characteristic, is something of the form $H(X) = \sum_{i=1}^{m} X_i X_{m+i}$. By Euler's identity, for a form $H$ of degree $k$, $k H = \sum_{i=1}^n X_i (\partial/\partial X_i)H,$ so that when discussing Dwork-regular forms it is natural to assume that $\mathrm{char} L \nmid k$, if $L$ is finite. We first present an equivalent characterization of Dwork-regularity that is easier to work with. (This property has previously been remarked in the context of [@Kat09 Lemma 3.1].)

**Lemma 4**. *Let $L$ be a field and let $H(X_1,...,X_m)\in L[X_1,...,X_m]$ be a homogeneous polynomial of degree $d \geq 2$. For every nonempty $S\subseteq \{1,...,m\}$, define $H_S:=H|_{X_j=0,j\not\in S}$. Then $H$ is Dwork-regular in $X_1,\ldots,X_m$ over $L$ if and only if* 1.
*for all $S$ with $|S|=1$, $H_S$ is not the zero polynomial, and* 2. *for all $S$ with $|S|\geq 2$, the hypersurface defined by $H_S=0$ is nonsingular in $\mathbb{P}^{|S|-1}_{\Bar{L}}$ in the variables $X_i,i\in S$.* The first condition of Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"} implies that a degree $d$ form that is Dwork-regular in $X_1,...,X_m$ necessarily contains a nonzero multiple of each monomial $X_1^d,...,X_m^d$. Second, we verify that if a form $H \in \mathbb{Z}[X_1,\ldots,X_m]$ is Dwork-regular over $\mathbb{Q}$ in the variables $X_1,\ldots, X_m$, then its reduction modulo $q$ is Dwork-regular over $\mathbb{F}_q$ in the variables $X_1,\ldots,X_m$ for all but finitely many primes $q$; in particular this is true for all primes $q \geq K_1$ for a finite constant $K_1 = K_1(H)$. We can describe this abstractly over any field $L$ as follows: **Lemma 5**. *Let $L$ be a field and let $H(X_1,...,X_m)\in L[X_1,...,X_m]$ be Dwork-regular over $L$ in the variables $X_1,...,X_m$. Then there exists a finite set $S$ of finite places of $L$ such that for all finite places ${\mathfrak p}\not\in S$, the reduced polynomial $H\; ( \text{mod} \; {\mathfrak p})$ is Dwork-regular over the residue field $\mathcal{O}_S/{\mathfrak p}$.* We defer the precise definitions of a finite place, a residue field, and the ring $\mathcal{O}_S$ to §[3.2](#sec:proof_of_lemma){reference-type="ref" reference="sec:proof_of_lemma"}. In the case when $L=\mathbb{Q}$, a finite place corresponds to a prime number, and the conclusion of the lemma is that once a finite number of "bad" primes are excluded, then for all remaining prime numbers $q$, the reduction of $H$ modulo $q$ is Dwork-regular over the finite field $\mathbb{F}_q$. We next recall that a polynomial $P\in \mathbb{Z}[X_1,...,X_m]$ of degree $d \geq 2$ is a *Deligne* polynomial over a finite field $L$ of characteristic $q$ if 1. $q \nmid d$, and 2.
the hypersurface defined by the leading form $P_d(X_1,\ldots,X_m)=0$ is nonsingular in $\mathbb{P}_{\overline{L}}^{m-1}$. (In the case that $m=1$, (2) is replaced by $P_d(X_1) \not\equiv 0.$ Recall that the leading form of a polynomial is the homogeneous part of highest degree.) A crucial fact we apply later is that after specializing one or more coordinates of a Dwork-regular form, the remaining polynomial is Deligne: **Proposition 6**. *Let $L$ be a finite field and let $H(X_1,...,X_m) \in L[X_1,...,X_m]$ be a homogeneous polynomial of degree $d$ that is Dwork-regular over $L$ in the variables $X_1,...,X_m$, with $\mathrm{char} L \nmid d$. Fix $1 \leq r \leq m-1$. Then for any constants $c_1,...,c_r\in L$, $H|_{X_1=c_1,...,X_r=c_r}$ is a Deligne polynomial in $X_{r+1},...,X_m$ over $L$.* In particular, we remark that by Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}, the constants $c_i$ in Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"} can be 0 in $L$. We now turn to the proof of these results. ## Proof of Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"} {#sec:dwork_equiv} We suppose $H$ is not Dwork-regular and then show either (1) or (2) is violated. For $H$ not Dwork-regular, there exists an $a=[a_1:\cdots:a_m]\in \mathbb{P}^{m-1}_{\Bar{L}}$ such that $$H(a)=0,\qquad (X_i \frac{\partial H}{\partial X_i})(a)=0 \text{ for } 1\leq i \leq m.$$ In particular, $a_j\neq 0$ for some $1\leq j \leq m$ and so for this $j$, $\frac{\partial H}{\partial X_j}(a)=0$. Define the set $S=\{j:a_j\neq 0 \}\subseteq \{1,...,m\}$. If $|S|=1$ then $H_S$ is either identically zero or a monomial in one variable, say in $X_j$. 
Moreover, when evaluated at the point $a=[0:\cdots : a_j : \cdots :0]\in \mathbb{P}^{m-1}_{\Bar{L}}$, the monomial in $a_j \neq 0$ satisfies $H_S(a) = H(a)=0.$ Thus the coefficient of the monomial is zero, and $H_S \equiv 0$ so that (1) is violated. If on the other hand $|S|\geq 2$, say $S = \{ i_1,\ldots, i_{|S|}\}$ then let $b=[a_{i_1}:\cdots :a_{i_{|S|}}] \in \mathbb{P}^{|S|-1}_{\Bar{L}}$. Then upon regarding $H_S$ as a polynomial in $X_{i_1},\ldots, X_{i_{|S|}},$ $H_S(b) = H(a)=0.$ For each $i\in S$, when we evaluate at the point $b$ (or $a$ respectively), $$\frac{\partial H_S}{\partial X_i}(b) = \frac{\partial H}{\partial X_i}(a) = 0.$$ This produces a singular point on the projective hypersurface $H_S=0$ in $\mathbb{P}^{|S|-1}_{\overline{L}},$ violating (2). Finally, suppose $H$ is Dwork-regular; we will argue that (1) and (2) must hold, by contradiction. Indeed, suppose either $H_S$ is identically zero for some $S$ with $|S|=1$ or $H_S=0$ is singular for some $S$ with $|S|\geq 2$. Write $S=\{i_1,...,i_{|S|}\}\subseteq \{1,...,m\}$. For the case $|S|=1$, define $a=[a_1:\cdots : a_m]\in \mathbb{P}^{m-1}_{\Bar{L}}$ by $a_j=1$ if $j\in S$ and 0 otherwise. Note that $H(a)=H_S(a)=0$. For $i\not\in S$, the coordinate $X_i(a)=0$ while for $i\in S$, at the point $a$, $$\frac{\partial H}{\partial X_i} (a) = \frac{\partial H_S}{\partial X_i} (a) =0$$ since $H_S \equiv 0$. Consequently, $$\begin{aligned} \label{eqn:a_dwork} H(a)=0,\qquad (X_i \frac{\partial H}{\partial X_i})(a)=0 \text{ for } 1\leq i \leq m, \end{aligned}$$ which violates Dwork-regularity of $H$, a contradiction. On the other hand, if $|S|\geq 2$, let $b=[b_{i_1}:\cdots:b_{i_{|S|}}]\in \mathbb{P}^{|S|-1}_{\Bar{L}}$ be such that $$H_S(b)=0,\qquad \frac{\partial H_S}{\partial X_i}(b)=0 \text{ for } i\in S.$$ Define $a=[a_1:\cdots:a_m]\in \mathbb{P}^{m-1}_{\Bar{L}}$ by $a_j=b_j$ if $j\in S$ and 0 otherwise. Then $H(a)=H_S(b)=0$. 
Additionally, for $i\not\in S$, the coordinate $X_i(a)=0$ while for $i\in S$, $(\partial H/\partial X_i)(a)=(\partial H_S/\partial X_i)(b)=0$. Thus $a$ satisfies [\[eqn:a_dwork\]](#eqn:a_dwork){reference-type="eqref" reference="eqn:a_dwork"} and violates Dwork-regularity of $H$, a contradiction. ## Proof of Lemma [Lemma 5](#lemma_Dwork_reduced){reference-type="ref" reference="lemma_Dwork_reduced"} {#sec:proof_of_lemma} Lemma [Lemma 5](#lemma_Dwork_reduced){reference-type="ref" reference="lemma_Dwork_reduced"} follows by a standard argument, which applies a version of the Nullstellensatz; we provide the projective version we apply. **Lemma 7**. *Let $L$ be a field. Let $I\subseteq L[X_1,...,X_m]$ be a homogeneous ideal. Define $Z(I)=\{x\in \mathbb{P}_{\Bar{L}}^{m-1} : f(x)=0 \text{ for all homogeneous } f\in I\} \subseteq \mathbb{P}_{\Bar{L}}^{m-1}$. Then $Z(I)$ is the empty set if and only if $(X_1^d,...,X_m^d)\subseteq I$ for some $d$.* *Proof.* Suppose $Z(I)=\emptyset$ in $\mathbb{P}^{m-1}_{\Bar{L}}$. Define the affine set $Z_{\mathbb{A}}(I) = \{(a_1,...,a_m)\in \mathbb{A}^{m}_{\Bar{L}}: f(a)=0 \text{ for all homogeneous } f\in I\}$. Then $Z_{\mathbb{A}}(I) =\{(0,...,0)\}$ and so for each $i$ the monomial $X_i$ vanishes on $Z_{\mathbb{A}}(I)$. Then by the affine Nullstellensatz [@Lan02 Theorem 1.5], $X_i^{d_i}\in I$ for some $d_i$; taking $d=\max_i d_i$ gives $(X_1^d,...,X_m^d)\subseteq I$. For the other direction, suppose $(X_1^d,...,X_m^d)\subseteq I$ for some $d$. Let $x\in Z(I)$ so that $f(x)=0$ for all homogeneous $f\in I$. Then in particular the monomials $X_i^d$ vanish on $x$ and so $x_i=0$ for all $i$. This contradicts the fact that $x$ belongs to $\mathbb{P}_{\overline{L}}^{m-1}$, so $Z(I) = \emptyset.$ ◻ Now to prove Lemma [Lemma 5](#lemma_Dwork_reduced){reference-type="ref" reference="lemma_Dwork_reduced"}, let us first recall some terminology. For the field $L$, we will denote a finite place by $\mathfrak{p}$ and its associated valuation by $v_{\mathfrak p}$.
For example, in the case we will apply, $L=\mathbb{Q}$ so if we pick a finite place (prime number) $p$, then the associated valuation is defined for nonzero $x \in \mathbb{Q}$ by writing $x = p^a u/v$ with $u,v\in \mathbb{Z}$ and $p \nmid uv$, and setting $v_p(x) = a$; for example, $v_p(x) \geq 0$ for all primes $p$ precisely when $x$ is an integer. When working with polynomials with rational coefficients, it can be convenient to multiply an identity of polynomials by a sufficiently large integer (say $N$) to "clear denominators;" alternatively, we could work in an enlarged set of "integers" that include rational numbers with denominators only divisible by primes $p|N$. For example, we could consider rational numbers with denominators only divisible by powers of $5$ and $7$; we call the set of all such rationals $S$-integers for the set $S=\{5,7\}$. In general let $S$ be a finite set of finite places of a field $L$. An associated ring $\mathcal{O}_S$ called the $S$-integers is defined by $\mathcal{O}_S = \{x\in L: v_{\mathfrak p}(x)\geq 0 \text{ for all finite places }{\mathfrak p}\not\in S\}$. Finally, for any finite place ${\mathfrak p}\not\in S$ we may consider the quotient $\mathcal{O}_S/{\mathfrak p}$, which is the residue field. In the case $L=\mathbb{Q}$ where we apply Lemma [Lemma 5](#lemma_Dwork_reduced){reference-type="ref" reference="lemma_Dwork_reduced"}, given a particular prime $p \not\in S$, $\mathcal{O}_S/{\mathfrak p}= \mathcal{O}_S/p\mathcal{O}_S$ is isomorphic to $\mathcal{O}_L/{\mathfrak p}=\mathbb{Z}/p\mathbb{Z}\cong \mathbb{F}_p$ since the map $\mathcal{O}_S \rightarrow \mathbb{Z}/p\mathbb{Z}$ given by $a/b \mapsto ab^{-1}\pmod p$ (well-defined since $p \nmid b$) is surjective, and its kernel is $p\mathcal{O}_S$. To prove the lemma, initially define $S$ to be the set of places ${\mathfrak p}$ such that either $v_{\mathfrak p}(c)<0$ for some coefficient $c$ of $H$ or $v_{\mathfrak p}(c)>0$ for all coefficients of $H$. Then $H\in \mathcal{O}_S[X_1,...,X_m]$ and hence $X_i \frac{\partial H}{\partial X_i} \in \mathcal{O}_S[X_1,...,X_m]$.
Define the ideal $$I:=\left(H,X_1\frac{\partial H}{\partial X_1},...,X_m\frac{\partial H}{\partial X_m}\right) \subseteq \mathcal{O}_S[X_1,...,X_m].$$ Since by assumption $H$ is Dwork-regular over $L$, $Z(I)=\emptyset$ in $\mathbb{P}^{m-1}_{\Bar{L}}$ where we view $I$ as an ideal in $L[X_1,...,X_m]$. Then by Lemma [Lemma 7](#lem:nullstellensatz){reference-type="ref" reference="lem:nullstellensatz"}, $(X_1^d,...,X_m^d)\subseteq I$ for some $d$. In particular for each $i$, there exist $Q_i,Q_{i,1},...,Q_{i,m}\in L[X_1,...,X_m]$ such that $$X_i^d = HQ_i + X_1\frac{\partial H}{\partial X_1}Q_{i,1} + \cdots + X_m\frac{\partial H}{\partial X_m}Q_{i,m}.$$ Now we enlarge the set $S$ so that it includes places ${\mathfrak p}$ such that $v_{\mathfrak p}(c)<0$ for some coefficient of at least one of $Q_i,Q_{i,1},...,Q_{i,m}$. Then $(X_1^d,...,X_m^d)\subseteq I \subseteq \mathcal{O}_S[X_1,...,X_m]$. For ${\mathfrak p}\not\in S$, $$X_i^d \equiv HQ_i + X_1\frac{\partial H}{\partial X_1}Q_{i,1} + \cdots + X_m\frac{\partial H}{\partial X_m}Q_{i,m} \; ( \text{mod} \; {\mathfrak p})$$ and so $(X_1^d,...,X_m^d) \; ( \text{mod} \; {\mathfrak p})\subseteq (H,X_1\frac{\partial H}{\partial X_1},...,X_m\frac{\partial H}{\partial X_m}) \; ( \text{mod} \; {\mathfrak p})$; that is to say, working over the residue field $L'=\mathcal{O}_S/{\mathfrak p}$ and viewing the ideals now in $L'[X_1,\ldots,X_m],$ the inclusion $(X_1^d,...,X_m^d) \subseteq (H,X_1\frac{\partial H}{\partial X_1},...,X_m\frac{\partial H}{\partial X_m})$ holds. We apply Lemma [Lemma 7](#lem:nullstellensatz){reference-type="ref" reference="lem:nullstellensatz"} again, now with $L'=\mathcal{O}_S/{\mathfrak p},$ to deduce that $H,X_1\frac{\partial H}{\partial X_1},...,X_m\frac{\partial H}{\partial X_m}$ have no common zeros in $\mathbb{P}^{m-1}_{\overline{\mathcal{O}_S/{\mathfrak p}}}$ and hence by definition $H$ is Dwork-regular over $\mathcal{O}_S/{\mathfrak p}$ in the variables $X_1,\ldots,X_m$. 
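The Nullstellensatz criterion just used can also be run as a finite check: by Lemma [Lemma 7](#lem:nullstellensatz){reference-type="ref" reference="lem:nullstellensatz"}, Dwork-regularity over $\mathbb{Q}$ is certified once the ideal $(H,X_1\frac{\partial H}{\partial X_1},...,X_m\frac{\partial H}{\partial X_m})$ is seen to contain a power of each variable. A minimal computational sketch using sympy Gröbner bases; the diagonal cubic is an arbitrary test case, and the exponent `d` must be supplied by hand:

```python
from sympy import symbols, diff, groebner

x, y, z = symbols('x y z')

def contains_variable_powers(H, variables, d):
    # Form the ideal (H, X_i * dH/dX_i) and test, via a Groebner basis,
    # whether it contains X_i^d for every variable.  By Lemma 7 this
    # certifies that the common projective zero set is empty, i.e. that
    # H is Dwork-regular; Lemma 7 guarantees some d works in that case.
    gens = [H] + [v * diff(H, v) for v in variables]
    gens = [g for g in gens if g != 0]     # drop identically-zero generators
    G = groebner(gens, *variables, order='grevlex')
    return all(G.contains(v**d) for v in variables)

# The diagonal cubic is Dwork-regular: its ideal contains 3x^3, 3y^3, 3z^3.
assert contains_variable_powers(x**3 + y**3 + z**3, (x, y, z), 3)

# Viewed in three variables, x^3 + y^3 is not: no power of z is in the ideal.
assert not contains_variable_powers(x**3 + y**3, (x, y, z), 9)
```

A `False` answer only rules out the specific exponent `d`; in practice one increases `d`, or inspects the Gröbner basis directly for pure variable powers.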
## Proof of Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"} {#proof-of-proposition-propdwork_deligne} Let $c_1,...,c_{r}\in L$ be given, and let $G\in L[X_{r+1},\ldots,X_m]$ denote the polynomial $H(c_1,...,c_r,X_{r+1},\ldots,X_m)$. Note that $G$ has degree $d:=\deg H$; by the remark following Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}, $H$ must contain a nonzero multiple of $X_{j}^d$ for each $j$ and in particular for $r+1 \leq j \leq m$, so $\deg G=d$. In particular, $\mathrm{char} L \nmid\deg G$. Next, $H$ can be written as $H = G_dF_0 + G_{d-1}F_1 + \cdots + G_0F_d$ where $G_i\in L[X_{r+1},\ldots,X_m]$ is homogeneous of degree $i$ and $F_i\in L[X_1,...,X_r]$ is homogeneous of degree $i$ (in particular $F_0 \equiv 1, G_0 \equiv 1$). The leading form of $G$ is then precisely $G_d(X_{r+1},\ldots,X_m)$, and $G_d= H_S$ for $S=\{r+1,\ldots,m\}$, in the notation of Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}. Thus the leading form of $G$ defines a nonsingular hypersurface in $\mathbb{P}_{\overline{L}}^{m-r-1}$ (or is not the zero polynomial, if $m-r=1$), by Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}, completing the proof. ## A lower bound for a partial derivative {#sec_gradient} When constructing counterexamples, it will be important to find a lower bound for a partial derivative of the leading form $P_k$ of the polynomial symbol. (Explicitly, this is used to guarantee that for each $x$ in a small neighborhood of the origin in $\mathbb{R}^n$, a choice of $t$ meets all the requirements of ([\[eqn:t_constraints\]](#eqn:t_constraints){reference-type="ref" reference="eqn:t_constraints"}) simultaneously.) 
As a motivating example, in the special case where the symbol has diagonal leading form $P_k(X) = X_1^k + \cdots + X_n^k,$ one immediately has $(\partial_1 P_k)(R,X_2,\ldots,X_n) \gg_k R^{k-1}$ for all $(X_2,\ldots,X_n) \in \mathbb{R}^{n-1}$, which suffices for our application; here and throughout the paper, $R$ is a large parameter that will tend to infinity at the end of the argument. If $P_k$ has higher intertwining rank, we must proceed differently; the following lemma provides an integral point where the partial derivative is sufficiently large. **Lemma 8**. *Let $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$ be a Dwork-regular form of degree $k \geq 2$ with intertwining rank $r<n$; without loss of generality, say $X_1$ intertwines with $X_1,X_2,...,X_r$ but not with $X_{r+1},\ldots,X_n$. Then there exists a tuple $(M_1,...,M_r)\in \mathbb{Z}^r$, with $M_i\geq 1$ for all $i$, such that for all $R \geq 1,$ $$|(\partial_1 P_k)(M_1R,M_2R,...,M_rR,X_{r+1},\ldots,X_n)| \gg R^{k-1},$$ uniformly in $(X_{r+1},\ldots,X_n) \in \mathbb{R}^{n-r}.$* Let $X'=(X_2,...,X_n)$ and rewrite $P_k$ in terms of its coefficients $c_\beta$ as $$P_k(X_1,\ldots, X_n) =\sum_{j=0}^{k} X_1^{k-j} \sum_{|\beta|=j} c_\beta X'^{\beta}$$ where $\beta$ is a multi-index $(\beta_2,...,\beta_n)$ with order $|\beta| = \beta_2 + \cdots + \beta_n,$ and $X'^\beta=X_2^{\beta_2}\cdots X_n^{\beta_n}$. By the hypothesis that $X_1$ does not intertwine with $X_{r+1},...,X_n$, for each $|\beta| <k,$ $c_\beta=0$ for all $\beta$ with $\beta_\ell>0$ for some $\ell >r.$ Consequently, the derivative $\partial_1 P_k$ is a function of $X_1,\ldots, X_{r}$ and is independent of $X_j$ for $j>r$: $$\begin{aligned} \label{eqn:partial_Pk} ( \partial_1 P_k)(X_1,\ldots,X_n) = \sum_{j=0}^{k-1} (k-j) X_1^{k-j-1} \sum_{|\beta|=j} c_\beta X_2^{\beta_2}\cdots X_{r}^{\beta_{r}}. 
\end{aligned}$$ Then for any value of a parameter $R \geq 1$, after plugging in any $(M_1,...,M_{r})\in \mathbb{Z}^{r}$, $$\begin{aligned} (\partial_1 P_k)(M_1R,\ldots,M_{r}R,X_{r+1},\ldots,X_n)&=\sum_{j=0}^{k-1} (k-j)(M_1R)^{k-j-1} \sum_{|\beta|=j} c_\beta (M_2R)^{\beta_2}\cdots (M_{r}R)^{\beta_{r}} \\ &=R^{k-1}\sum_{j=0}^{k-1} (k-j)M_1^{k-j-1} \sum_{|\beta|=j} c_\beta M_2^{\beta_2}\cdots M_{r}^{\beta_{r}}. \end{aligned}$$ Thus we can conclude $|(\partial_1 P_k)(M_1R,...,M_{r}R,X_{r+1},\ldots,X_n)|\gg R^{k-1}$ (uniformly in $X_{r+1},\ldots,X_n$) as long as $$\sum_{j=0}^{k-1} (k-j)M_1^{k-j-1} \sum_{|\beta|=j} c_\beta M_2^{\beta_2}\cdots M_{r}^{\beta_{r}} \neq 0.$$ From [\[eqn:partial_Pk\]](#eqn:partial_Pk){reference-type="eqref" reference="eqn:partial_Pk"}, we see that this condition is equivalent to $(\partial_1 P_k)(M_1,...,M_{r})\neq 0$ where we view $\partial_1P_k$ as an element of $\mathbb{Q}[X_1,...,X_{r}]$. Hence it remains to prove that there exists an integral point $(M_1,...,M_{r})\in \mathbb{Z}^{r}$ with $M_i\geq 1$ for all $i$ such that $(\partial_1 P_k)(M_1,...,M_{r})\neq 0$. First we note that $\partial_1 P_k$ is not the zero polynomial in $X_1,\ldots, X_r$; this follows from Dwork-regularity, since by (1) of Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}, $P_k$ contains a term $cX_1^k$ for some nonzero $c \in \mathbb{Q}$. If $r=1$, $(\partial_1 P_k)(X_1)$ has at most $k-1$ roots, so the claim holds. For $r\geq 2$, by a trivial upper bound (see e.g. [@BCLP22 Lemma 10.1] for a standard statement), for any $B\geq 1,$ there are at most $\ll B^{r-1}$ integral solutions in $[1,B]^r$ to $(\partial_1 P_k)(X_1,\ldots,X_r)=0.$ Since there are $B^r$ integral points in that box, there exists a sufficiently large $B$ such that there is an integral point, say $(M_1,\ldots,M_r) \in [1,B]^r$ with $(\partial_1 P_k)(M_1,\ldots,M_r)\neq 0.$ The lemma is proved. 
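The final counting step is completely effective, so for any concrete form the required tuple can be found by a direct scan of the box $[1,B]^r$. A sketch; the cubic form below is an arbitrary illustration (the scan itself only needs $\partial_1 P_k$ to be not identically zero):

```python
from itertools import product
from sympy import symbols, diff

def nonvanishing_point(Pk, variables, B=5):
    # Scan integer tuples in [1, B]^r for one where dPk/dX1 is nonzero;
    # the zero set fills only O(B^{r-1}) of the B^r points, so a small
    # box suffices once B is large enough (as in the proof of Lemma 8).
    dP = diff(Pk, variables[0])
    for M in product(range(1, B + 1), repeat=len(variables)):
        if dP.subs(dict(zip(variables, M))) != 0:
            return M
    return None

x1, x2 = symbols('x1 x2')
Pk = x1**3 - 3*x1**2*x2 + x2**3    # illustrative form; d1Pk = 3x1^2 - 6x1x2
print(nonvanishing_point(Pk, (x1, x2)))   # (1, 1), since 3 - 6 = -3 is nonzero
```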
## Invariance under $\mathrm{GL}_n(\mathbb{Q})$: nonsingular to Dwork-regular {#sec_change_var} Let $P\in \mathbb{R}[X_1,\ldots,X_n]$ be given, with leading form $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$. Now let $A \in \mathrm{GL}_n(\mathbb{Q})$ and let $Q$ denote the polynomial after changing variables, i.e. $Q(\eta_1,...,\eta_n)=P(\xi_1,...,\xi_n)$ with $\xi_i = \sum_j a_{ij}\eta_j$ or equivalently $\eta=A^{-1}\xi$. (In particular, by [@Kat09 Lemma 3.1], if $P_k$ is nonsingular, there exists a choice of $A$ so that $Q(\eta_1,\ldots,\eta_n)$ is Dwork-regular over $\mathbb{Q}$ in $\eta_1,\ldots, \eta_n$.) Then for any fixed $t>0,$ $$T_t^Pf(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \hat{f}(\xi) e^{i(\xi \cdot x + P(\xi)t)} d\xi\\ =\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \hat{f}(A\eta) e^{i((A\eta) \cdot x + Q(\eta)t)}|A| d\eta.$$ Now define a function $g$ such that $\hat{g}(\eta) = |A|\hat{f}(A\eta)$ where $|A|$ denotes the determinant of $A$; equivalently $g(x) = f((A^{-1})^Tx).$ Then $$(T_t^Pf)(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \hat{g}(\eta) e^{i(\eta \cdot A^Tx + Q(\eta)t)} d\eta = (T_t^Qg)(A^Tx).$$ Now let $\tau: \mathbb{R}^n \rightarrow(0,1)$ be a measurable stopping-time function, so that proving $f(x) \mapsto \sup_{0<t<1}|T_t^Pf(x)|$ is bounded from $H^s$ to $L^1_{\mathrm{loc}}$ is equivalent to proving that $f(x) \mapsto |T_{\tau(x)}^Pf(x)|$ is bounded from $H^s$ to $L^1_{\mathrm{loc}}$, independent of the function $\tau$. 
For a given stopping-time function $\tau$, the computation above shows that $$\|T_{\tau(x)}^Pf(x)\|_{L^1(B_n(0,1))} = \int_{B_n(0,1)} |(T_{\tau(x)}^Qg)(A^Tx)|dx = |A^T|^{-1} \int_{A^TB_n(0,1)} |(T_{\sigma(u)}^Qg)(u)|du,$$ for the stopping-time $\sigma(u) = \tau((A^T)^{-1}u).$ Thus $$\label{TP_bdd} \|T_{\tau(x)}^Pf(x)\|_{L^1(B_n(0,1))} \leq C_s \|f\|_{H^s}$$ is true for all $f \in H^s$, uniformly over all stopping-time functions, if and only if $$\label{TQ_bdd} \|T_{\sigma(x)}^Qg(x)\|_{L^1(A^TB_n(0,1))} \leq C_s C'_A \|g\|_{H^s}$$ is true for all $g \in H^s$, uniformly over all stopping-time functions. (Of course, $\|g\|_{H^s} \ll_{A} \|f\|_{H^s} \ll_{A} \|g\|_{H^s}.$) In particular, if we show for a given real $s>0$ that there is no constant $C_s$ such that $$\|\sup_{0<t<1}|T_t^Qg(x)|\|_{L^1(B_n(0,1))} \leq C_s \|g\|_{H^s}$$ for all $g \in H^s$, then ([\[TQ_bdd\]](#TQ_bdd){reference-type="ref" reference="TQ_bdd"}) fails for all constants $C_s, C_A'$ and consequently ([\[TP_bdd\]](#TP_bdd){reference-type="ref" reference="TP_bdd"}) fails for all constants $C_s$. This invariance under $\mathrm{GL}_n(\mathbb{Q})$ changes of variables shows that it is reasonable to study maximal operators for the class of Dwork-regular polynomial symbols in place of the class of nonsingular polynomial symbols, as we do. (The invariance demonstrated above holds for $\mathrm{GL}_n(\mathbb{R})$ as well.) ## Verification of Corollary [Corollary 2](#corollary_decomp){reference-type="ref" reference="corollary_decomp"} {#sec_cor} Suppose as in the hypothesis that $P_k(X_1,\ldots,X_n)$ is decomposable over $\mathbb{Q}$, so that after an appropriate $\mathrm{GL}_n(\mathbb{Q})$ change of variables, $P_k(X_1,\ldots,X_n) = Q_1(X_1,\ldots,X_m) + Q_2(X_{m+1},\ldots,X_n)$ holds for some $1 \leq m<n.$ Note that $P_k$ is nonsingular as a function of $X_1,\ldots,X_n$ if and only if $Q_1$ is nonsingular as a function of $X_1,\ldots, X_m$ and $Q_2$ is nonsingular as a function of $X_{m+1},\ldots,X_n$. 
By the disjointness of the variables in $Q_1$ and $Q_2$, we can apply $\mathrm{GL}_m(\mathbb{Q})$ (respectively $\mathrm{GL}_{n-m}(\mathbb{Q})$) changes of variables separately to $Q_1$ (respectively $Q_2$) so that both become Dwork-regular over $\mathbb{Q}$. This provides a block diagonal $\mathrm{GL}_n(\mathbb{Q})$ transformation that makes $P_k$ Dwork-regular and still decomposed into a form in $X_1,\ldots,X_m$ and a second form in $X_{m+1},\ldots,X_n$. In particular, its intertwining rank $r$ remains bounded above by $\lfloor n/2 \rfloor$. Since the parameter $\delta(n,k,r)$ in Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} is decreasing as a function of $r$, we have $\delta(n,k,r)\geq \delta(n,k,n/2)$ for $r\leq \lfloor n/2 \rfloor$, and this verifies the corollary. ## Dispersivity {#sec:dispersive} A description of the broad principle of dispersion, in the context of an initial value problem like ([\[PDE\]](#PDE){reference-type="ref" reference="PDE"}), can be found for example in [@Pal97 §3.5] or [@Tao06 Principle 2.1]. The following lemma verifies that for each real symbol considered in the main theorem, ([\[PDE\]](#PDE){reference-type="ref" reference="PDE"}) is a dispersive PDE, in the sense of the criterion presented in [@KPV91 Theorem 4.1]. **Lemma 9**. *Let $P(X_1,...,X_n)\in \mathbb{R}[X_1,...,X_n]$ be a polynomial of degree $k\geq 2$ such that its leading form $P_k(X_1,\ldots,X_n) \in \mathbb{Q}[X_1,\ldots,X_n]$ is Dwork-regular over $\mathbb{Q}$ in the variables $X_1,...,X_n$. Then $\nabla P_k(x_1,...,x_n) \neq 0$ for all $(x_1,...,x_n)\in \mathbb{R}^n \setminus \{0\}$.
Further, there exists a finite $M \in \mathbb{N}$ such that for each $i=1,...,n,$ for all $(c_1,...,c_{i-1},c_{i+1},...,c_n)\in \mathbb{R}^{n-1}$ and $c'\in \mathbb{R}$, the equation $$\label{P_sol} P(c_1,...,c_{i-1},x,c_{i+1},...,c_n)=c',$$ has at most $M$ solutions.* *Proof.* Since $P_k$ is Dwork-regular over $\mathbb{Q}$, it is nonsingular over $\mathbb{Q}$, so that $\nabla P_k(X_1,\ldots,X_n)=0$ has no nontrivial solutions over $\overline{\mathbb{Q}}$; since $P_k$ has rational coefficients, this implies that there are no nontrivial solutions over $\mathbb{R}$ either. Fix $1 \leq i \leq n.$ By the remark following Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}, the leading form $P_k$ contains a term that is a nonzero multiple of $X_i^k$. Hence $P(c_1,...,c_{i-1},x,c_{i+1},...,c_n)-c'$ contains a nonzero multiple of $x^k$, so that ([\[P_sol\]](#P_sol){reference-type="ref" reference="P_sol"}) has at most $k$ solutions. ◻ # Upper and lower bounds for exponential sums {#sec:exponential} Now we prove three results about exponential sums involving a Deligne polynomial, which by Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"} apply to Dwork-regular forms over $\mathbb{Q}$ with fixed variables, for all sufficiently large prime moduli. First, we prove in Proposition [Proposition 10](#prop:bound_T){reference-type="ref" reference="prop:bound_T"} that complete exponential sums with a Deligne polynomial in the phase are often "large". Second, we prove in Proposition [Proposition 14](#prop:bound_incomplete){reference-type="ref" reference="prop:bound_incomplete"} that incomplete exponential sums are "not too large" (and in particular, only a logarithmic factor larger than complete exponential sums).
Third, in Proposition [Proposition 15](#prop:2.7){reference-type="ref" reference="prop:2.7"} we approximate an exponential sum with real coefficients by complete exponential sums, up to an acceptable error. ## A lower bound for many complete exponential sums **Proposition 10**. *Let $Q_k(X_1,...,X_m)\in \mathbb{Z}[X_1,...,X_m]$ be a polynomial of degree $k\geq 2$ and suppose that for all primes $q \geq K_1(Q_k)$, the reduction of $Q_k$ modulo $q$ is a Deligne polynomial over $\mathbb{F}_q$. Define for each prime $q$ and $(a,b) \in \mathbb{F}_q \times \mathbb{F}_q^{m}$, $${\mathbf{T}}(a,b;q):=\sum_{x \; ( \text{mod} \; q)^m} e\left(\frac{2\pi}{q}\left(a Q_k(x) + b \cdot x\right)\right).$$ Then there exist constants $0<\alpha_1, \alpha_2<1$ with $\alpha_2=\alpha_2(k,m)$, and a constant $K_2(k,m)$ such that for every prime $q > \max\{k, K_1(Q_k), K_2(k,m)\}$, there exist at least $\alpha_2 q^{m+1}$ choices of $(a,b)\in \mathbb{F}_q \times \mathbb{F}_q^{m}$ such that $$\begin{aligned} \label{eqn:T_large} \alpha_1 q^{m/2} \leq |{\mathbf{T}}(a,b;q)| \leq (k-1)^mq^{m/2} . \end{aligned}$$* In fact, we may take $\alpha_1=1/2$ and any $\alpha_2 \leq (1/8)(k-1)^{-2m}.$ The Weil-Deligne bound is a key ingredient in the proof, which we now recall, following [@Del74 Thm. 8.4] as stated in [@IwaKow04 Thm. 11.43]. **Lemma 11**. *Let $f(X_1,...,X_m)\in \mathbb{Z}[X_1,...,X_m]$ be a Deligne polynomial of degree $k\geq 2$ over $\mathbb{F}_q$ for $q$ prime. Then $$\left|\sum_{\mathbf{x}\in \mathbb{F}_q^m} e(\frac{2\pi}{q}f(x_1,...,x_m))\right|\leq (k-1)^{m}q^{m/2}.$$* The proof of the proposition is a mild variation on [@ACP23 Prop. 2.2]. 
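Before giving the proof, we note that both the Weil–Deligne bound and the mean-square identity $\sum_{a,b}|{\mathbf{T}}(a,b;q)|^2 = q^{2m+1}$ on which the proof relies can be verified directly in small instances. A brute-force sketch with $q=7$, $k=3$, $m=2$ and the diagonal cubic, parameters chosen only to keep the computation small:

```python
import cmath
import math
from itertools import product

def T(a, b, q, Q, m):
    # Complete sum T(a,b;q) = sum over x mod q of e((2*pi/q)(a Q(x) + b.x)),
    # with e(t) = exp(it) as in the surrounding text.
    return sum(
        cmath.exp(2j * math.pi * (a * Q(x) + sum(bi * xi for bi, xi in zip(b, x))) / q)
        for x in product(range(q), repeat=m)
    )

q, k, m = 7, 3, 2
Q = lambda x: x[0] ** 3 + x[1] ** 3   # Deligne over F_7, since 7 does not divide 3

sums = {(a, b): T(a, b, q, Q, m)
        for a in range(q) for b in product(range(q), repeat=m)}

# mean-square identity: sum over (a,b) of |T|^2 equals q^(2m+1)
mean_square = sum(abs(t) ** 2 for t in sums.values())
assert math.isclose(mean_square, q ** (2 * m + 1), rel_tol=1e-6)

# Weil-Deligne bound for a != 0: |T| <= (k-1)^m q^(m/2) = 28
weil = (k - 1) ** m * q ** (m / 2)
assert all(abs(t) <= weil + 1e-6 for (a, b), t in sums.items() if a != 0)

# a positive proportion of pairs meets the lower bound of Proposition 10
large = sum(1 for t in sums.values() if abs(t) >= 0.5 * q ** (m / 2))
print(large, "of", q ** (m + 1), "pairs are large")
```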
We first claim that $$\label{T_average} \sum_{\substack{a \; ( \text{mod} \; q)\\b \; ( \text{mod} \; q)^m}} |{\mathbf{T}}(a,b;q)|^2 = q^{2m+1}.$$ The left-hand side may be expanded as $$\sum_{x, \tilde{x}} \sum_{a\; ( \text{mod} \; q)} e\left(\frac{2\pi a}{q}(Q_k(x) - Q_k(\tilde{x})) \right) \sum_{b\; ( \text{mod} \; q)^{m}} e\left(\frac{2\pi }{q}b \cdot (x-\tilde{x}) \right).$$ The innermost sum vanishes unless $x_i\equiv \tilde{x}_i\; ( \text{mod} \; q)$ for all $1\leq i\leq m$, in which case it equals $q^m$, while the sum over $a$ collapses to $q$ since $Q_k(x)=Q_k(\tilde{x})$; summing over the $q^m$ diagonal terms $x=\tilde{x}$, the left-hand side evaluates to $q^m \cdot q \cdot q^m = q^{2m+1}$, as claimed. Next we observe that since $Q_k$ is Deligne over $\mathbb{F}_q$ of degree $k\geq 2$, so is $aQ_k(x)+b\cdot x$ for $a\neq 0$ in $\mathbb{F}_q$, so we can apply Lemma [Lemma 11](#lem:deligne){reference-type="ref" reference="lem:deligne"} to bound ${\mathbf{T}}(a,b;q)$; for $a=0$ the sum vanishes unless $b=0$ and we address this below. Since ([\[T_average\]](#T_average){reference-type="ref" reference="T_average"}) shows that ${\mathbf{T}}(a,b;q)$ is of size $q^{m/2}$ on average, and the Weil-Deligne bound shows it cannot ever be much larger, we can deduce that most of the time it is not much smaller either. Precisely, suppose for a given pair $0<\alpha_1,\alpha_2<1$ there are $<\alpha_2q^{m+1}$ choices of $(a,b) \; ( \text{mod} \; q)^{m+1}$ such that $|{\mathbf{T}}(a,b;q)|\geq \alpha_1 q^{m/2}$. Then $$\sum_{\substack{a \; ( \text{mod} \; q)\\b \; ( \text{mod} \; q)^m}} |{\mathbf{T}}(a,b;q)|^2 \\ = q^{2m} + \sum_{\substack{a \not\equiv 0\\ |{\mathbf{T}}(a,b;q)|\geq \alpha_1 q^{m/2}}} |{\mathbf{T}}(a,b;q)|^2 + \sum_{\substack{a\not\equiv 0\\ |{\mathbf{T}}(a,b;q)|< \alpha_1 q^{m/2}}} |{\mathbf{T}}(a,b;q)|^2.$$ Here the first term is the contribution from $a \equiv 0$. The first sum on the right-hand side is $<\alpha_2 q^{m+1} ((k-1)^mq^{m/2})^2$; the second sum on the right-hand side is $<q^{m+1} (\alpha_1 q^{m/2})^2$.
Hence $$\sum_{\substack{a \; ( \text{mod} \; q)\\b \; ( \text{mod} \; q)^m}} |{\mathbf{T}}(a,b;q)|^2 < (q^{-1}+ \alpha_2 (k-1)^{2m} +\alpha_1^2)q^{2m+1} <(1/3+\alpha_2(k-1)^{2m}+\alpha_1^2)q^{2m+1}.$$ This is $<q^{2m+1}$, contradicting ([\[T_average\]](#T_average){reference-type="ref" reference="T_average"}), for sufficiently small $\alpha_1,\alpha_2$; for example, we may take $\alpha_1=1/2$ and any $\alpha_2 \leq (1/4)(k-1)^{-2m}.$ For such $\alpha_1,\alpha_2$, there must then be $\geq \alpha_2 q^{m+1}$ choices of $(a,b) \; ( \text{mod} \; q)^{m+1}$ for which the left-hand inequality in ([\[eqn:T_large\]](#eqn:T_large){reference-type="ref" reference="eqn:T_large"}) holds. Finally, the only case in which $|\mathbf{T}(a,b;q)|>(k-1)^mq^{m/2}$ is when $a=0 \in \mathbb{F}_q$ (in which case $\mathbf{T}(a,b;q)$ is a linear exponential sum which vanishes unless $b = 0 \in \mathbb{F}_q^m$). Thus aside from $(a,b)=(0,0)$, all those $(a,b)$ satisfying the lower bound in ([\[eqn:T_large\]](#eqn:T_large){reference-type="ref" reference="eqn:T_large"}) also satisfy the upper bound. We can summarize this by saying that at least $\alpha_2 q^{m+1}$ choices satisfy both bounds, with the modified choice $\alpha_2= (1/8)(k-1)^{-2m},$ as long as we assume $q$ is sufficiently large that $(1/4)(k-1)^{-2m}q^{m+1} \geq 2,$ which is true for all $q \geq K_2$, for an appropriate choice of $K_2=K_2(k,m).$ The proposition is proved. ## An upper bound for incomplete exponential sums To show the maximal operator associated to $T_t^{P}f$ is large, we will repeatedly approximate integrals and sums by complete exponential sums. We first state general formulas for partial summation and partial integration; these are proved simply by iteration one coordinate at a time, and we omit the proofs. 
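For orientation, in one coordinate the summation formula reduces to classical Abel summation, an exact identity that is easy to confirm numerically; the coefficient sequence and phase below are arbitrary choices for illustration:

```python
import cmath

# Abel summation:
#   sum_{m=M}^{M+N} a(m) b(m)
#     = A(M+N) b(M+N) - sum_{u=M}^{M+N-1} A(u) (b(u+1) - b(u)),
# with A(u) = sum_{m=M}^{u} a(m).  The multidimensional lemma iterates
# this in each coordinate of J, which is how the products of the bounds
# B_alpha arise in the error term.
M, N = 3, 20
a = lambda m: 1.0 / (1 + m)            # arbitrary coefficient sequence
b = lambda m: cmath.exp(0.01j * m**2)  # slowly varying phase e(h(m))
A = lambda u: sum(a(m) for m in range(M, u + 1))

lhs = sum(a(m) * b(m) for m in range(M, M + N + 1))
rhs = A(M + N) * b(M + N) - sum(A(u) * (b(u + 1) - b(u)) for u in range(M, M + N))

assert cmath.isclose(lhs, rhs, rel_tol=1e-9)
```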
Given a subset $J\subseteq \{1,\ldots,n\}$, we define $I = \{1,\ldots,n\}\setminus J$ and use the notation $(\mathbf{x}_{(J)},\mathbf{x}_{(I)}) \in \mathbb{R}^{n}$ to indicate ${\bf x}_{(J)} \in \mathbb{R}^{|J|}$ is indexed by $j \in J$ and ${\bf x}_{(I)} \in \mathbb{R}^{n-|J|}$ is indexed by $j \in \{1,\ldots,n\}\setminus J.$ **Lemma 12**. *Let $a(\mathbf{m})$ be a sequence of complex numbers indexed by $\mathbf{m}=(m_1,...,m_n)\in \mathbb{Z}^n$. Let $h(\mathbf{y})$ be a $C^{(n)}$ function on $\mathbb{R}^n$ such that for every $\boldsymbol{\kappa}= (\kappa_1,...,\kappa_n) \in \{0,1\}^n$, there exists a positive real number $B_{\boldsymbol{\kappa}}$ such that for $$\begin{aligned} \partial_{\boldsymbol{\kappa}} h(\mathbf{y}):=\frac{\partial^{\kappa_1+\cdots+\kappa_n}}{\partial y_1^{\kappa_1} \cdots \partial y_n^{\kappa_n}} h(\mathbf{y}), \end{aligned}$$ we have $\left|\partial_{\boldsymbol{\kappa}} h(\mathbf{y})\right| \leq B_{\boldsymbol{\kappa}}$ uniformly in $\mathbf{y}\in [\mathbf{M},\mathbf{M}+\mathbf{N}],$ where $\mathbf{M}=(M,M,\ldots,M)$ and $\mathbf{N} = (N_1,N_2,\ldots,N_n).$ Then $$\begin{aligned} \sum_{\mathbf{M}\leq \mathbf{m}\leq \mathbf{M+N}} a(\mathbf{m}) e(h(\mathbf{m})) = e(h(\mathbf{M+N}))\sum_{\mathbf{M}\leq \mathbf{m}\leq \mathbf{M+N}} a(\mathbf{m}) + E \end{aligned}$$ where $$\begin{aligned} |E|\ll_n \sup_{\substack{J\subseteq \{1,...,n\}\\|J| \geq 1}}\left\{ \prod_{j \in J}N_j \cdot \sup_{\mathbf{u}_{(J)}\leq \mathbf{N}_{(J)}}|A (\mathbf{u}_{(J)},\mathbf{N}_{(I)})| \cdot \sup_{1\leq \ell \leq |J|} \sup_{\substack{\boldsymbol{\alpha_1},...,\boldsymbol{\alpha_\ell} \in \{0,1\}^n\\\boldsymbol{\alpha_1}+ \cdots +\boldsymbol{\alpha_\ell} = \boldsymbol{1}_J}} \prod_{i=1}^\ell B_{\boldsymbol{\alpha_i}} \right\}, \end{aligned}$$ in which $$A (\mathbf{u}_{(J)},\mathbf{N}_{(I)})=\sum_{\substack{\mathbf{M}_{(I)}\leq \mathbf{m}_{(I)} \leq \mathbf{M}_{(I)}+\mathbf{N}_{(I)}\\ \mathbf{M}_{(J)}\leq \mathbf{m}_{(J)} \leq \mathbf{M}_{(J)}+\mathbf{u}_{(J)}}} 
a(\mathbf{m}_{(J)},\mathbf{m}_{(I)}).$$* **Lemma 13**. *Let $a<b$ be real numbers. Let $f(\mathbf{t})$ be an integrable function supported on $[a,b]^n$ and let $h(\mathbf{t})$ be a $C^{(n)}$ function on $\mathbb{R}^n$ such that for every $\boldsymbol{\kappa} = (\kappa_1,...,\kappa_n)\in \{0,1\}^n$, there exists a positive real number $B_{\boldsymbol{\kappa}}$ such that $\left|\partial_{\boldsymbol{\kappa}} h(\mathbf{t})\right| \leq B_{\boldsymbol{\kappa}},$ uniformly in $\mathbf{t} \in [a,b]^n.$ Then $$\begin{aligned} \int_{[a,b]^n} f(\mathbf{t}) e(h(\mathbf{t})) d\mathbf{t} = e(h(\mathbf{b})) \int_{[a,b]^n} f(\mathbf{t}) d\mathbf{t} + E \end{aligned}$$ where $$\begin{aligned} |E|\ll_n & \sup_{\substack{J\subseteq \{1,...,n\}\\|J| \geq 1}} \left \{(b-a)^{|J|}\sup_{\mathbf{u}_{(J)} \leq \mathbf{b}_{(J)}} |F(\mathbf{u}_{(J)},\mathbf{b}_{(I)})| \cdot \sup_{1\leq \ell \leq |J|} \sup_{\substack{\boldsymbol{\alpha_1},...,\boldsymbol{\alpha_\ell} \in \{0,1\}^n\\\boldsymbol{\alpha_1}+ \cdots +\boldsymbol{\alpha_\ell} = \boldsymbol{1}_J}} \prod_{i=1}^\ell B_{\boldsymbol{\alpha_i}} \right\}, \end{aligned}$$ in which $$F(\mathbf{u}_{(J)},\mathbf{b}_{(I)}) =\int_{[a,b]^{|I|}} \int \cdots \int_{[a,u_{j}],j\in J} f(\mathbf{t}_{(J)},\mathbf{t}_{(I)}) d\mathbf{t}_{(J)}d\mathbf{t}_{(I)}.$$* We bound an incomplete exponential sum by completing the sum and applying the Weil bound: **Proposition 14**. *Let $Q_k(X_1,...,X_n)\in \mathbb{Z}[X_1,...,X_n]$ be a Deligne polynomial of degree $k\geq 2$ over $\mathbb{F}_q$ for a prime $q$. Let $J\subseteq \{1,...,n\}$ and $I=\{1,...,n\}\setminus J$. 
Then for $\mathbf{H}=(H_1,...,H_n)$ with $1\leq H_i \leq q$, $$\left|\sum_{\substack{\mathbf{1}_{(J)} \leq {\mathbf{m}}_{(J)}\leq \mathbf{q}_{(J)},\\\mathbf{1}_{(I)} \leq {\mathbf{m}}_{(I)}\leq \mathbf{H}_{(I)}}} e\left(\frac{2\pi}{q} Q_k({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)})\right)\right| \ll_{k,n} q^{n/2}(\log q)^{|I|} .$$* The sum on the left-hand side may be written as $$\sum_{\substack{\mathbf{1}_{(J)} \leq {\mathbf{m}}_{(J)}\leq \mathbf{q}_{(J)}}}\sum_{\substack{\mathbf{1}_{(I)} \leq \mathbf{a}_{(I)}\leq \mathbf{q}_{(I)}}} e\left(\frac{2\pi}{q} Q_k({\mathbf{m}}_{(J)},\mathbf{a}_{(I)}) \right) \sum_{\substack{\mathbf{1}_{(I)} \leq {\mathbf{m}}_{(I)}\leq \mathbf{H}_{(I)},\\ {\mathbf{m}}_{(I)}\equiv \mathbf{a}_{(I)}\; ( \text{mod} \; q)^{|I|}}} 1.$$ By the identity $$\mathbf{1}_{m\equiv a \; ( \text{mod} \; q)} = \frac{1}{q} \sum_{1\leq h \leq q} e\left(\frac{2\pi }{q}h\cdot (m-a)\right)$$ we can expand the sum as $$\begin{aligned} \frac{1}{q^{|I|}} \sum_{ \mathbf{h}_{(I)} \leq \mathbf{q}_{(I)}} \left( \sum_{\substack{ {\mathbf{m}}_{(J)} \leq \mathbf{q}_{(J)},\\ \mathbf{a}_{(I)} \leq \mathbf{q}_{(I)}}} e\left(\frac{2\pi }{q}(Q_k({\mathbf{m}}_{(J)},\mathbf{a}_{(I)})-\mathbf{h}_{(I)}\cdot \mathbf{a}_{(I)})\right) \sum_{\mathbf{m}_{(I)} \leq \mathbf{H}_{(I)}} e\left(\frac{2\pi}{q} \mathbf{h}_{(I)}\cdot {\mathbf{m}}_{(I)}\right) \right). \end{aligned}$$ Since $Q_k(X_1,\ldots,X_n)$ is a Deligne polynomial of degree $k \geq 2$, so is $Q_k(X_1,\ldots,X_n) - {\bf h}_{(I)} \cdot \mathbf{X}_{(I)}$ so by Lemma [Lemma 11](#lem:deligne){reference-type="ref" reference="lem:deligne"}, the complete sum over ${\bf m}_{(J)},\mathbf{a}_{(I)}$ is bounded above by $(k-1)^n q^{n/2}$. For the remaining double sum over $\mathbf{h}_{(I)}$ and ${\mathbf{m}}_{(I)}$, we recall that for each $h,H \geq 1,$ $$\sum_{1\leq m \leq H} e(2\pi hm/q) \ll \min \{H, \|h/q\|^{-1}\}.$$ (Here $\|t\|$ temporarily indicates the distance from $t$ to the nearest integer.) 
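This geometric-series bound is explicit enough to confirm numerically, in the sharper form $|\sum_{1\leq m \leq H} e(2\pi hm/q)| \leq \min\{H, \tfrac{1}{2}\|h/q\|^{-1}\}$, along with the $q\log q$ estimate obtained from it in the next step. A quick sketch with $q=101$ and $H=50$, values chosen arbitrarily:

```python
import cmath
import math

q, H = 101, 50

def linear_sum(h):
    return sum(cmath.exp(2j * math.pi * h * m / q) for m in range(1, H + 1))

def dist(t):
    # distance from t to the nearest integer, the quantity written ||t||
    return abs(t - round(t))

# pointwise bound min{H, (1/2)||h/q||^{-1}}; the sum is exactly H when q | h
for h in range(1, q + 1):
    d = dist(h / q)
    bound = H if d == 0 else min(H, 0.5 / d)
    assert abs(linear_sum(h)) <= bound + 1e-9

# completing the sum: sum over h of min{H, ||h/q||^{-1}} is O(q log q)
total = sum(H if dist(h / q) == 0 else min(H, 1 / dist(h / q)) for h in range(1, q + 1))
assert total <= 4 * q * math.log(q)
```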
By considering cases $1\leq h \leq q/2$ and $q/2<h\leq q$, note that $\sum_{1\leq h \leq q} \min \{H, \|h/q\|^{-1}\} \ll q \log q$. Hence the double sum over $\mathbf{h}_{(I)}$ and ${\mathbf{m}}_{(I)}$ is bounded above by $(q\log q)^{|I|}$. In total, the sum considered in the proposition is thus bounded by $\ll (k-1)^n q^{n/2}(\log q)^{|I|}$, and the proof is complete. ## Approximation of a sum with real coefficients by complete sums We next approximate an exponential sum with real coefficients in the linear term by complete exponential sums with prime moduli. **Proposition 15**. *Let $Q_k(X_1,...,X_m)\in \mathbb{Z}[X_1,...,X_m]$ be a Deligne polynomial of degree $k\geq 2$ over $\mathbb{F}_q$ for a prime $q$. Let $1\leq a <q$. Let $V \geq 0$ and $y \in \mathbb{R}^m$, and suppose that for each $1\leq i \leq m$ there exists $1\leq b_i \leq q$ such that $|y_i-2\pi b_i/q|<V$. Then for any ${\mathbf{M}}=(M,...,M)\in \mathbb{R}^{m}$ and ${\mathbf{N}}=(N_1,...,N_m)\in \mathbb{R}^{m}$ with $0 \leq N_i \leq N$, $$\left|\sum_{{\mathbf{M}}\leq {\mathbf{m}}\leq {\mathbf{M}}+{\mathbf{N}}} e\left(\frac{2\pi a}{q} Q_k({\mathbf{m}})+ y \cdot {\mathbf{m}}\right)\right| = \prod_{i=1}^m\left\lfloor \frac{N_i}{q} \right\rfloor \cdot \left| \sum_{{\mathbf{m}}\; ( \text{mod} \; q)^m} e\left(\frac{2\pi}{q}(a Q_k({\mathbf{m}})+b \cdot {\mathbf{m}})\right) \right| + E$$ where, under the assumption $VN \leq 1$, $$|E|\ll_{k,m} VN \cdot \prod_{i=1}^m\left\lfloor \frac{N_i}{q}\right\rfloor \cdot q^{m/2} +\sup_{|J|<m} \prod_{j \in J}\left\lfloor \frac{N_j}{q}\right\rfloor \cdot q^{m/2} (\log q)^{m-|J|} .$$* *Remark 16*. In our later application we will take $m=n-r$ and coordinates $(m_{r+1},\ldots,m_n) \in \mathbb{Z}^{n-r}$. Set $N=\max\{N_{r+1},...,N_n\}.$ The bound for $|E|$ is increasing as a function of each $N_j$, so we can replace each $N_j$ by $N$.
Recalling the hypothesis $VN \leq 1,$ the bound for $|E|$ can be crudely estimated by $$|E| \ll_{k,n,r} \left\lfloor \frac{N}{q} \right\rfloor^{n-r} q^{(n-r)/2}(VN+ \left\lfloor \frac{N}{q} \right\rfloor^{-1} (\log q)^{n-r}) .$$ We need the error to be at most a small proportion (say half the size) of the main term. To achieve this, we will choose $(a,b)\in \mathbb{F}_q \times \mathbb{F}_q^{n-r}$ so that the exponential sum in the main term is large (using Proposition [Proposition 10](#prop:bound_T){reference-type="ref" reference="prop:bound_T"}) and we will impose conditions that force $V \leq d_0 N^{-1}$ for a constant $d_0<1$ as small as we like, and $N/q \gg q^{\Delta_0}$ for some $\Delta_0>0$ (see §[6](#sec:arithmetic){reference-type="ref" reference="sec:arithmetic"}). To prove the proposition, we apply partial summation with respect to $\mathbf{m}$, following Lemma [Lemma 12](#lem:partial_summation){reference-type="ref" reference="lem:partial_summation"}, with the function $h(\mathbf{m}) = (y-\tfrac{2\pi}{q}b)\cdot \mathbf{m}.$ Since this is linear, for a multi-index $\boldsymbol{\alpha} \in \{0,1\}^m$, the $\partial_{\boldsymbol{\alpha}}$ partial derivative as a function of $\mathbf{m}$ vanishes unless $|\boldsymbol{\alpha}|=1$, say $\partial_{\boldsymbol{\alpha}}= \partial_i,$ in which case $$|\partial_i((y-\tfrac{2\pi}{q}b)\cdot \mathbf{m})| = |y_i-2\pi b_i/q| < V.$$ Thus using the notation of the lemma, for any nonempty subset $J \subseteq\{1,\ldots,m\},$ we only need to consider the following expression in the case each $|\boldsymbol{\alpha}_i|=1$ (so that $\ell=|J|$): $$\sup_{1\leq \ell \leq |J|} \sup_{\substack{\boldsymbol{\alpha_1},...,\boldsymbol{\alpha_\ell} \in \{0,1\}^m\\\boldsymbol{\alpha_1}+ \cdots +\boldsymbol{\alpha_\ell} = \boldsymbol{1}_J}} \prod_{i=1}^{\ell} B_{\boldsymbol{\alpha_i}} = \prod_{i=1}^{|J|} V = V^{|J|}.$$ Now apply partial summation to see that $$\begin{gathered} \label{eqn:2.7_ps} \sum_{{\mathbf{M}}\leq {\mathbf{m}}\leq
{\mathbf{M}}+{\mathbf{N}}} e\left(\frac{2\pi a}{q} Q_k({\mathbf{m}})+ y\cdot {\mathbf{m}}\right) \\ = e\left( (y-\frac{2\pi}{q}b)\cdot ({\mathbf{M}}+{\mathbf{N}}) \right) \sum_{{\mathbf{M}}\leq {\mathbf{m}}\leq {\mathbf{M}}+{\mathbf{N}}} e\left(\frac{2\pi}{q}(a Q_k({\mathbf{m}})+b \cdot {\mathbf{m}})\right) + E_1 \end{gathered}$$ where $E_1$ is dominated by $$\sup_{|J| \geq 1} V^{|J|}\prod_{j \in J}N_j \cdot \sup_{\mathbf{u}_{(J)} \leq \mathbf{N}_{(J)}} \left| \sum_{\substack{\mathbf{M}_{(I)} \leq {\mathbf{m}}_{(I)}\leq \mathbf{M}_{(I)}+\mathbf{N}_{(I)},\\\mathbf{M}_{(J)} \leq {\mathbf{m}}_{(J)}\leq \mathbf{M}_{(J)}+ \mathbf{u}_{(J)}}} e\left(\frac{2\pi }{q}(a Q_k({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}) + b\cdot ({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}))\right)\right|$$ with nonempty $J\subseteq \{1,...,m\}$ and $I= \{1,...,m\} \setminus J$. The first factor is $\leq (VN)^{|J|} \leq VN \leq 1.$ We may estimate the sum in the error $E_1$ by first breaking it into a main term of as many complete sums (complete in all $m$ coordinates) as possible, that is, $$\prod_{i\in I }\left\lfloor \frac{N_i}{q} \right\rfloor\cdot \prod_{j\in J} \left\lfloor \frac{u_j}{q} \right\rfloor\cdot \sum_{{\mathbf{m}}\; ( \text{mod} \; q)^{m}} e\left(\frac{2\pi }{q}(a Q_k({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}) + b \cdot({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}))\right),$$ plus an error term of incomplete sums, that is, of the form $$\sum_{\substack{\tilde{I}\subseteq I, \tilde{J}\subseteq J,\\ |\tilde{I}\cup \tilde{J}|<m }} \prod_{i\in \tilde{I}}\left\lfloor \frac{N_i}{q} \right\rfloor \cdot \prod_{j\in \tilde{J}} \left\lfloor \frac{u_j}{q} \right\rfloor\cdot \sideset{}{^*}\sum e\left(\frac{2\pi }{q}(a Q_k({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}) + b\cdot ({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}))\right)$$ in which the starred sum is over $\mathbf{1} \leq {\mathbf{m}}_{(\tilde{I})}\leq \mathbf{q}_{(\tilde{I})}$, $\mathbf{1} \leq {\mathbf{m}}_{(I\setminus \tilde{I})}\leq
\mathbf{H}_{(I\setminus \tilde{I})}$, $\mathbf{1} \leq {\mathbf{m}}_{(\tilde{J})}\leq \mathbf{q}_{(\tilde{J})}$, $\mathbf{1} \leq {\mathbf{m}}_{(J \setminus \tilde{J})}\leq \mathbf{H}_{(J\setminus \tilde{J})}$, for some $\mathbf{H}=(H_1,...,H_m)$ with $1\leq H_i < q$. Since $Q_k$ is a Deligne polynomial of degree $k\geq 2$, we may apply the Weil-Deligne bound in Lemma [Lemma 11](#lem:deligne){reference-type="ref" reference="lem:deligne"} so that the contribution of the complete sums is at most $$\ll_{k,m} \prod_{i\in I }\left\lfloor \frac{N_i}{q} \right\rfloor \cdot \prod_{j\in J} \left\lfloor \frac{u_j}{q} \right\rfloor \cdot q^{m/2} \ll \prod_{i=1}^m\left\lfloor \frac{N_i}{q} \right\rfloor\cdot q^{m/2} .$$ Again using that $Q_k$ is a Deligne polynomial, we may apply Proposition [Proposition 14](#prop:bound_incomplete){reference-type="ref" reference="prop:bound_incomplete"} so the contribution of the incomplete sums is $$\ll \sup_{\substack{\tilde{I}\subseteq I,\tilde{J}\subseteq J,\\ | \tilde{I}\cup \tilde{J}|< m}} \prod_{i\in \tilde{I}}\left\lfloor \frac{N_i}{q} \right\rfloor\cdot \prod_{j\in \tilde{J}} \left\lfloor \frac{u_j}{q} \right\rfloor \cdot q^{m/2} (\log q)^{m-(|\tilde{J}|+|\tilde{I}|)} \ll \sup_{|J|< m} \prod_{j \in J} \left\lfloor \frac{N_j}{q} \right\rfloor\cdot q^{m/2} (\log q)^{m-|J|}.$$ This is sufficient for bounding $E_1.$ Finally we can similarly separate the main term of the right-hand side of [\[eqn:2.7_ps\]](#eqn:2.7_ps){reference-type="eqref" reference="eqn:2.7_ps"} into complete and incomplete sums, that is, $$\prod_{i=1}^m\left\lfloor \frac{N_i}{q} \right\rfloor \cdot \left| \sum_{{\mathbf{m}}\; ( \text{mod} \; q)^m} e\left(\frac{2\pi}{q} (a Q_k({\mathbf{m}})+ b\cdot {\mathbf{m}})\right) \right| \\ + E_1',$$ where $$E_1' = \sum_{|J|<m} \prod_{j\in J}\left\lfloor \frac{N_j}{q}\right\rfloor \cdot \sum_{\substack{\mathbf{1}_{(J)} \leq {\mathbf{m}}_{(J)}\leq \mathbf{q}_{(J)},\\\mathbf{1}_{(I)} \leq {\mathbf{m}}_{(I)}\leq \mathbf{H}_{(I)}}} 
e\left(\frac{2\pi}{q}(a Q_k({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)})+b\cdot ({\mathbf{m}}_{(J)},{\mathbf{m}}_{(I)}))\right)$$ for some $H_i <q$. Another application of Proposition [Proposition 14](#prop:bound_incomplete){reference-type="ref" reference="prop:bound_incomplete"} shows that $$|E_1'| \ll \sup_{|J|< m} \prod_{j\in J}\left\lfloor \frac{N_j}{q}\right\rfloor \cdot q^{m/2}(\log q)^{m-|J|}.$$ Combined with the bound for $E_1$, this proves the proposition. # Reducing the maximal operator to an exponential sum {#sec:reducing} ## Initial definition of the test function We now construct a collection of test functions to prove Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}. The symbol $P\in \mathbb{R}[X_1,...,X_n]$ has leading form $P_k \in \mathbb{Q}[X_1,\ldots,X_n]$ that is Dwork-regular in $X_1,\ldots,X_n$ over $\mathbb{Q}$ and has intertwining rank $r$. By relabelling variables we may assume that $X_1$ intertwines with $X_1,X_2,...,X_{r}$ and does not intertwine with $X_{r+1},\ldots,X_n$. By a $\mathrm{GL}_n(\mathbb{Q})$ change of variables we may clear denominators and assume from now on that $P_k$ has integral coefficients. Fix $R\geq 1$, which will later be a parameter that tends to infinity. Let $$S_1 = R^\sigma, \qquad L = R^\lambda$$ where $0 < \sigma< 1$, $0< \lambda < 1$ are parameters we will choose optimally later. Once and for all, fix a Schwartz function $\phi$ on $\mathbb{R}$ with the properties (i) $\phi \geq 0$, (ii) $\phi(0) = (2\pi)^{-1}\int \hat{\phi}(\xi)d\xi=1,$ (iii) $\mathrm{supp}(\hat{\phi}) \subseteq [-1,1]$; such a function may be constructed in a standard way, such as in [@Pie20 §2.1]. Then define for each $m \geq 1$ and variables $u_1,\ldots,u_m$ that $\Phi_m(u_1,\ldots,u_m) = \phi(u_1) \cdots \phi(u_m)$. We will define the test function $f$, tailored to the fact that $X_1$ does not intertwine with $X_{r+1},\ldots,X_n$. 
Fix an integral tuple $M=(M_1,M_2,...,M_r)\in\mathbb{Z}^r$ with $M_i\geq 1$ as provided by Lemma [Lemma 8](#lem:coefficients){reference-type="ref" reference="lem:coefficients"}. Define the test function $f$ to be $$\begin{gathered} f(x) := \Phi_r(S_1x_1,x_2,\ldots,x_r)e((M\circ R)\cdot (x_1,\ldots,x_r)) \\\times \Phi_{n-r}(x_{r+1},\ldots,x_n)\sum_{\substack{m\in \mathbb{Z}^{n-r},\\R/L\leq m_j<2R/L}} e(Lm\cdot (x_{r+1},\ldots,x_n)) \end{gathered}$$ where we recall the notation $M\circ R = (M_1R,...,M_rR)$. Let $M_* = \max\{2,M_1,M_2,...,M_{r}\}$. The Fourier transform of $f$ is supported in $$[M_1R-S_1,M_1R+S_1]\times [M_2R-1,M_2R+1]\times \cdots \times [M_{r}R-1,M_{r}R+1]\times [R-1,2R+1]^{n-r},$$ which is contained in $B_n(0,\sqrt{n} M_*R + \sqrt{n} S_1) \setminus B_n(0,\sqrt{n}R - \sqrt{n}S_1).$ Since $S_1 = R^\sigma$ with $\sigma<1$ there exists some $R_1(\sigma)$ such that for $R \geq R_1(\sigma)$, $S_1 \leq (1/2)R$ so that this region is contained in the annulus $\{ (1/C)R \leq |x|\leq C R\}$ with $C=2M_*\sqrt{n}$, say, so that $C$ depends only on the symbol $P$ and the dimension $n$. Finally, we note for later reference the size of the norm of $f$. Since $\hat{f}$ is supported in the annulus stated above, for all $R \geq 1,$ $R^s\|f\|_{L^2(\mathbb{R}^n)} \ll_s \|f\|_{H^s(\mathbb{R}^n)} \ll_s R^s \|f\|_{L^2(\mathbb{R}^n)}.$ Moreover, $$\label{f_norm} S_1^{-1/2} \lfloor R/L \rfloor^{\frac{n-r}{2}} \|\phi\|_{L^2(\mathbb{R})}^n \leq \|f\|_{L^2(\mathbb{R}^n)} \leq S_1^{-1/2} \lceil R/L \rceil^{\frac{n-r}{2}} \|\phi\|_{L^2(\mathbb{R})}^n.$$ Explicitly, using the notation $\xi = (\xi_1,\ldots,\xi_r)$ and $\eta = (\xi_{r+1},\ldots,\xi_n),$ $$\hat{f}(\xi, \eta) = \sum_{\substack{m \in \mathbb{Z}^{n-r}\\R/L \leq m_j < 2R/L}} g_{m}(\xi,\eta),$$ in which $g_{m}(\xi,\eta) = S_1^{-1} \hat{\Phi}_{r}(S^{-1}\circ(\xi - M\circ R)) \hat{\Phi}_{n-r}(\eta - Lm)$, where $S^{-1}=(S_1^{-1},1,...,1)$. 
Thus $g_m$ is supported in the set $\mathcal{B}+ (M\circ R,Lm)$, where $\mathcal{B}$ is the box $[-S_1,S_1] \times [-1,1]^{n-1}.$ In particular, for $n \geq 2, n-r\geq 1,$ and $L \geq 4$, the supports of $g_{m}$ as $m$ varies are pairwise disjoint. Thus by Plancherel's theorem, $$(2\pi)^n\|f\|_{L^2(\mathbb{R}^n)}^2=\|\hat{f}\|^2_{L^2(\mathbb{R}^n)} = \sum_{\substack{m \in \mathbb{Z}^{n-r}\\R/L \leq m_j < 2R/L}} \|g_{m}\|_{L^2(\mathbb{R}^n)}^2,$$ and the claim ([\[f_norm\]](#f_norm){reference-type="ref" reference="f_norm"}) holds since, again by Plancherel's theorem, $$\|g_{m}\|_{L^2(\mathbb{R}^n)}^2 = S_1^{-1}\|\hat{\phi}\|_{L^2(\mathbb{R})}^{2n} = S_1^{-1}(2\pi)^n\|\phi\|_{L^2(\mathbb{R})}^{2n}.$$ *Remark 17*. Note that if $n=r$ or $n=1$ the above sum is empty; computing the $\|f\|_{H^s}$ norm as above and taking $\sigma=1/2$ produces a counterexample to ([\[thm_T\_bound\]](#thm_T_bound){reference-type="ref" reference="thm_T_bound"}) for all $s < 1/4,$ which is the claim of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} in this case. Thus from now on we may assume $r<n$ and $n \geq 2$. ## Approximation of the maximal operator by an exponential sum Since $f$ treats the intertwined variables $x_1,\ldots,x_r$ differently, it is convenient to define $v=(x_1,\ldots,x_r)$ and $w = (x_{r+1}, \ldots,x_n)$, and similarly use $\xi=(\xi_1,...,\xi_{r})$, $\eta=(\xi_{r+1},...,\xi_n)$; finally, we will continue to denote $S \circ \xi = (S_1\xi_1,\xi_2,\ldots,\xi_r)$ and $M\circ R=(M_1R,...,M_{r}R)$ for $(M_1,\ldots,M_r)$ provided by Lemma [Lemma 8](#lem:coefficients){reference-type="ref" reference="lem:coefficients"}.
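As a quick sanity check on the norm computation for $f$ above, the disjoint-support Plancherel argument can be reproduced in a discrete one-dimensional analogue: a band-limited $\phi$ modulated by $e(Lmx)$, with $L$ larger than the bandwidth, has translated spectra that do not overlap, so the $L^2$ energies of the pieces add exactly. The grid size, spacing, and bandwidth below are illustrative choices only, not the parameters of the construction.

```python
import numpy as np

# Discrete 1-D analogue of the computation of ||f||_{L^2}: phi is band-limited,
# and the modulations exp(i L m x) translate its spectrum to disjoint blocks.
N, L, num_modes = 1024, 16, 8
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer frequencies on Z/N
phi_hat = np.where(np.abs(k) <= 4, 1.0, 0.0)     # spectrum supported in [-4, 4]
phi = np.fft.ifft(phi_hat)

x = 2 * np.pi * np.arange(N) / N
f = phi * sum(np.exp(1j * L * m * x) for m in range(num_modes))

energy_f = np.sum(np.abs(f) ** 2)
energy_pieces = num_modes * np.sum(np.abs(phi) ** 2)
print(energy_f, energy_pieces)  # agree: the translated spectra are disjoint
```

Since the spacing $L=16$ exceeds the spectral width $9$, Plancherel gives exact additivity of the energies, mirroring the identity $\|\hat f\|^2 = \sum_m \|g_m\|^2$.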
By definition, for the test function $f$, $$\begin{gathered} \label{eqn:T_int} T_t^{P}f(x) = \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^{n}} \hat{\Phi}_{n}(\xi,\eta) e((S\circ \xi + M\circ R)\cdot v + \eta\cdot w ) \\ \times \sum_{\substack{m \in \mathbb{Z}^{n-r} \\R/L \leq m_j < 2R/L}} e(Lm\cdot w) e(P(S\circ \xi+M\circ R,\eta+Lm)t)d\xi d\eta.\end{gathered}$$ The main result of this section is that $T_t^{P}f(x)$ is well-approximated by an exponential sum defined as follows: for any $u \in \mathbb{R}^{n-r}$ with each $u_j> R/L$, and any $w \in \mathbb{R}^{n-r},t \in \mathbb{R}$ set $$\mathbf{S}(u;w,t):=\sum_{\substack{m \in \mathbb{Z}^{n-r} \\R/L \leq m_j < u_j}} e\left(Lm\cdot w + P_k(M\circ R,Lm) t\right).$$ For simplicity, when $u = (2R/L,\ldots, 2R/L)$, we denote this sum by $\mathbf{S}(2R/L;w,t)$. Fix $0<c_0<1/2$. Then since $\phi$ is smooth and $\phi(0)=1$, there exists a small constant $\delta_0=\delta_0(c_0, \phi)$ such that $$\begin{aligned} \label{eqn:phi_delta} \phi(y)\geq 1-c_0/2 \qquad \text{for all $|y|\leq \delta_0$.} \end{aligned}$$ The precise statement is the following: **Proposition 18**. *Let $0<c_0<1/2$ be a small constant and $\delta_0$ be as in [\[eqn:phi_delta\]](#eqn:phi_delta){reference-type="eqref" reference="eqn:phi_delta"}. There exist constants $0< c_1(\delta_0,k,P),c_2(k,P)<1/2$ such that for all $c_1<c_1(\delta_0,k,P),c_2<c_2(k,P)$ and $0<c_3<1$ as small as we like, the following holds.* *Let $x\in (-c_1,-c_1/2]\times [-c_1,c_1]^{n-1}$, $\sigma \leq 1/2$ and $R\geq R_2(c_1,c_2,k, P,\sigma)$. Let $t\in (0,1)$ satisfy $$\begin{aligned} \label{eqn:t_constraints} t &= \frac{-x_1}{(\partial_1 P_k)(M\circ R,\tilde{R})} + \tau, \qquad |\tau| \leq \frac{c_2\delta_0}{S_1R^{k-1}},\qquad \text{and} \qquad t \leq \frac{c_3}{R^{k-1}}, \end{aligned}$$ in which $\tilde{R}:=(L(\lceil 2R/L \rceil -1),...,L(\lceil 2R/L \rceil -1)) \in \mathbb{R}^{n-r}$.
Then with $w=(x_{r+1},\ldots,x_n),$ $$|T_t^{P}f(x)| \geq (1-c_0)^n |\mathbf{S}(2R/L;w,t)| + \mathbf{E}_1$$ where $$|\mathbf{E}_1| \ll_{\phi,n,k,r,P}c_3 \sup_{u \in [R/L,2R/L]^{n-r}} |\mathbf{S}(u;w,t)|.$$* Since we will bound $|\mathbf{S}(u;w,t)|$ by a function that is increasing as a function of $u_j$, we will obtain $|\mathbf{E}_1| \ll_{\phi,n,k,r,P} c_3 |\mathbf{S}(2R/L;w,t)|$, so that the error term is at most half the main term, by taking $c_3$ appropriately small, as we may. For the conditions in ([\[eqn:t_constraints\]](#eqn:t_constraints){reference-type="ref" reference="eqn:t_constraints"}) to be compatible we need $(\partial_1 P_k)(M\circ R,\tilde{R})$ to be nonzero and moreover to be of size $R^{k-1}$. This is assured by Lemma [Lemma 8](#lem:coefficients){reference-type="ref" reference="lem:coefficients"}. To prove the proposition, we first use partial summation to remove all terms in the sum over $m$ in ([\[eqn:T_int\]](#eqn:T_int){reference-type="ref" reference="eqn:T_int"}) that do not appear in $\mathbf{S}(2R/L;w,t)$; to accrue only an acceptable error, we require the third constraint in [\[eqn:t_constraints\]](#eqn:t_constraints){reference-type="eqref" reference="eqn:t_constraints"}, and intertwining rank plays a key role. The next step is to use integration by parts to remove all terms in the phase in ([\[eqn:T_int\]](#eqn:T_int){reference-type="ref" reference="eqn:T_int"}) that are nonlinear in $\xi,\eta$, so that we may apply Fourier inversion to the integral of $\hat{\Phi}_n$; this again uses the third constraint in [\[eqn:t_constraints\]](#eqn:t_constraints){reference-type="eqref" reference="eqn:t_constraints"}.
Finally, after Fourier inversion, we arrive at a main term involving the evaluation of $\Phi_n$, and to bound this from below we use the fact that $\Phi_n(0)=1$ forces $t$ to lie in a specified range around a specific point, which is encoded in the first and second constraints in [\[eqn:t_constraints\]](#eqn:t_constraints){reference-type="eqref" reference="eqn:t_constraints"}. ## Removing non-arithmetic terms in the exponential sum {#sec:expsum_partial_sum} We will use the notation $$\begin{aligned} P=P_k+P_{k-1}+\cdots + P_0 \end{aligned}$$ where each $P_i$ with $0 \leq i \leq k-1$ is a homogeneous polynomial of degree $i$ with real coefficients. Then by Taylor expansion (multinomial expansion), $$P(S\circ \xi +M\circ R,\eta+Lm) =P_k(M\circ R,Lm) + Q(S\circ \xi+M\circ R,\eta+Lm)$$ in which we define $$Q(u+v) := \sum_{j=0}^{k-1}P_j(u) + \sum_{j=0}^k \sum_{1\leq |\alpha|\leq j} \frac{(\partial_\alpha P_j)(u)}{\alpha !} v^{\alpha}, \qquad \text{for any $u,v \in \mathbb{R}^n$,}$$ so that $$\label{eqn:Q} Q(S\circ \xi +M\circ R,\eta+Lm) = \sum_{j=0}^{k-1}P_j(M\circ R,Lm) + \sum_{j=0}^k \sum_{1\leq |\alpha|\leq j} \frac{(\partial_\alpha P_j)(M\circ R,Lm)}{\alpha !} (S\circ \xi,\eta)^{\alpha}.$$ Our goal is to extract the non-arithmetic weight $e( Q(S\circ \xi +M\circ R,\eta+Lm)t)$ from the sum over $m$ in ([\[eqn:T_int\]](#eqn:T_int){reference-type="ref" reference="eqn:T_int"}) via partial summation, using the assumption on intertwining rank. To understand the role of intertwining rank, it is helpful to see a motivating example. ### Example {#sec_example_P} Suppose the dimension $n=2$ and $P(X_1,X_2) =X^\beta$ is simply a monomial, where $\beta$ is a multi-index $(\beta_1,\beta_2)$ with $\beta_1 + \beta_2 =k$. 
Then $$\label{P_example} P(S_1\xi_1 +M_1R,\xi_2+Lm_2) = \sum_{j=0}^{\beta_1} {\binom{\beta_1}{j}} (M_1R)^{\beta_1-j}(S_1\xi_1)^j \cdot \sum_{\ell=0}^{\beta_2} {\binom{\beta_2}{\ell}}(Lm_2)^{\beta_2-\ell}\xi_2^\ell.$$ We isolate the arithmetic term $(M_1R)^{\beta_1}(Lm_2)^{\beta_2}$ (the $j=\ell=0$ term), leaving the non-arithmetic terms $$H(m_2):=\sum_{\substack{\alpha\leq \beta\\\alpha\neq (0,0)}} \frac{\beta!}{\alpha! (\beta- \alpha)!} (M_1R,Lm_2)^{\beta- \alpha}(S_1\xi_1,\xi_2)^{\alpha}.$$ In this example, we model the sum in ([\[eqn:T_int\]](#eqn:T_int){reference-type="ref" reference="eqn:T_int"}) by summing over $m_2:$ $$\sum_{R/L \leq m_2 < 2R/L} e(Lm_2x_2+ (M_1R)^{\beta_1}(Lm_2)^{\beta_2}t)e(h(m_2))$$ with $h(m_2) = H(m_2)t.$ By partial summation with respect to $m_2$, the weight $e(h(m_2))$ can be removed with an error term depending on the derivative $(\partial/\partial m_2)h(m_2)$. This derivative is $$tL\sum_{\substack{\alpha\leq \beta- (0,1)\\\alpha\neq (0,0)}} \frac{\beta!}{\alpha! (\beta- \alpha- (0,1))!} (M_1R,Lm_2)^{\beta- \alpha-(0,1)}(S_1\xi_1,\xi_2)^{\alpha} \ll tL\sum_{\substack{\alpha\leq \beta- (0,1)\\\alpha\neq (0,0)}} R^{k - |\alpha|-1}S_1^{\alpha_1} ,$$ where the upper bound holds uniformly for $(\xi_1,\xi_2) \in [-1,1]^2,$ $m_2 \in [R/L,2R/L).$ In order for the resulting error term (after an application of Lemma [Lemma 12](#lem:partial_summation){reference-type="ref" reference="lem:partial_summation"}) to be acceptable, we need $$t L\sum_{\substack{\alpha\leq \beta- (0,1)\\\alpha\neq (0,0)}} R^{k - |\alpha|-1}S_1^{\alpha_1} \ll L/R .$$ Since we will ultimately have $t\approx R^{-(k-1)}$, this requires $R^{k - |\alpha|}S_1^{\alpha_1}\ll R^{k-1}$ for each $\alpha$. If the original monomial $P(X_1,X_2)=X_1^{\beta_1}X_2^{\beta_2}$ has $\beta_1 \geq 1$ and $\beta_2\geq 1$, there will be a term with $|\alpha| = \alpha_1=1$, and this term will force the condition $S_1\ll 1$. 
If however $X_2$ does not intertwine with $X_1$ in $P(X_1,X_2)$, $\beta_1=0$ so that $\alpha_1=0$, removing this difficulty. This concludes our example. In general, we apply Lemma [Lemma 12](#lem:partial_summation){reference-type="ref" reference="lem:partial_summation"} to $$\begin{aligned} \label{eqn:sum_remove_dep} \sum_{\substack{m \in \mathbb{Z}^{n-r} \\R/L \leq m_j < 2R/L}} e(Lm\cdot w) e(P_k(M\circ R,Lm)t) e(h(m)) \end{aligned}$$ with $h(m):=Q(S\circ \xi+M\circ R,\eta+Lm) t.$ We claim that for each multi-index $\kappa=(\kappa_{r+1},...,\kappa_{n})\in \{0,1\}^{n-r}$ with $|\kappa|\geq 1$, $$\begin{aligned} \label{eqn:mixed_partial_P_tilde} \sup_{\substack{(\xi,\eta) \in [-1,1]^n \\y \in [R/L,2R/L)^{n-r}}} \left| \frac{\partial^{|\kappa|}}{\partial y_{r+1}^{\kappa_{r+1}} \cdots \partial y_{n}^{\kappa_{n}}} Q((S\circ \xi,\eta)+(M\circ R,Ly)) \right| \ll_{n,k,P} R^{k-1}(L/R)^{|\kappa|} . \end{aligned}$$ Note that the derivatives controlled by $\kappa$ apply only to the last $n-r$ coordinates. It is in proving this bound that we crucially use the definition of intertwining rank, and the construction of the test function $f$ to accommodate the fact that $X_1$ does not intertwine with $X_{r+1},\ldots,X_n$ in $P_k$; in particular, we will use the fact that for each $\ell>r$, the mixed partial $\partial_\ell \partial_1 P_k$ vanishes identically. Assume ([\[eqn:mixed_partial_P\_tilde\]](#eqn:mixed_partial_P_tilde){reference-type="ref" reference="eqn:mixed_partial_P_tilde"}) for the moment; then for all such $\kappa$, $$\sup_{\substack{(\xi,\eta) \in [-1,1]^n \\y \in [R/L,2R/L)^{n-r}}} \left| \frac{\partial^{|\kappa|}}{\partial y_{r+1}^{\kappa_{r+1}} \cdots \partial y_{n}^{\kappa_{n}}} h(y) \right|\ll R^{k-1}(L/R)^{|\kappa|}t =:B_{\kappa},$$ in the notation of Lemma [Lemma 12](#lem:partial_summation){reference-type="ref" reference="lem:partial_summation"}. 
(Here we also note that $B_\kappa \equiv 0$ if any entry in a multi-index $\kappa$ exceeds the highest corresponding degree in the polynomial $Q$. However, unlike our previous application of partial summation in Proposition [Proposition 15](#prop:2.7){reference-type="ref" reference="prop:2.7"}, in this case the total degree of $Q$ is $k-1$ in $m$, so we could have $\ell=1$ in the application below.) Consequently, for each nonempty $J \subseteq \{r+1,\ldots,n\},$ and $\ell \leq |J|,$ $$\sup_{\substack{\boldsymbol{\alpha}_1,...,\boldsymbol{\alpha}_\ell\in \{0,1\}^{n-r}\\\boldsymbol{\alpha}_1+ \cdots +\boldsymbol{\alpha}_\ell = \boldsymbol{1}_J}} \prod_{i=1}^\ell B_{\boldsymbol{\alpha_i}} = \sup_{\substack{\boldsymbol{\alpha}_1,...,\boldsymbol{\alpha}_\ell\in \{0,1\}^{n-r}\\\boldsymbol{\alpha}_1+ \cdots +\boldsymbol{\alpha}_\ell = \boldsymbol{1}_J}} \prod_{i=1}^\ell (R^{k-1}(L/R)^{|\boldsymbol{\alpha}_i|}t ) = R^{(k-1)\ell}(L/R)^{|J|}t^{\ell} .$$ This is $\ll R^{k-1}t(L/R)^{|J|}$ under the assumption $t \ll R^{-(k-1)},$ which we assume from now on. 
Hence by Lemma [Lemma 12](#lem:partial_summation){reference-type="ref" reference="lem:partial_summation"} the sum in [\[eqn:sum_remove_dep\]](#eqn:sum_remove_dep){reference-type="eqref" reference="eqn:sum_remove_dep"} is identical to $$\mathbf{S}(2R/L;w,t)e(Q(S\circ \xi + M\circ R,\eta+\tilde{R})t) + E_1$$ where we define $\tilde{R}:=(L(\lceil 2R/L \rceil -1),...,L(\lceil 2R/L \rceil -1)) \in \mathbb{R}^{n-r}$ and we may take $$|E_1|\ll_{n,k,r,P} \sup_{u \in [R/L,2R/L)^{n-r}} |\mathbf{S}(u;w,t)|\cdot R^{k-1}t,$$ uniformly in $(\xi,\eta) \in [-1,1]^n.$ Consequently, after integrating $E_1$ trivially in $(\xi,\eta)$ by applying the compact support of $\hat{\Phi}_n$ in $[-1,1]^n$, we conclude that $T_t^{P}f(x)$ is precisely $$\label{eqn:T_int_reduced} \frac{1}{(2\pi)^{n}} \mathbf{S}(2R/L;w,t) \int_{\mathbb{R}^{n}} \hat{\Phi}_{n}(\xi,\eta) e((S\circ \xi + M\circ R)\cdot v +\eta\cdot w)\\ \times e(Q(S\circ \xi +M\circ R,\eta+\tilde{R})t) d\xi d\eta + E_2$$ where $$\label{E2} |E_2| \ll_{\phi,n,k,r,P} \sup_{u \in [R/L,2R/L)^{n-r}} |\mathbf{S}(u;w,t)|\cdot R^{k-1}t.$$ It remains to verify ([\[eqn:mixed_partial_P\_tilde\]](#eqn:mixed_partial_P_tilde){reference-type="ref" reference="eqn:mixed_partial_P_tilde"}). We recall the expansion ([\[eqn:Q\]](#eqn:Q){reference-type="ref" reference="eqn:Q"}), and bound the mixed partial of each term. 
First note that for $y=(y_{r+1},\ldots,y_n)$ and any $\ell > r$, for each degree $j$, $$|(\partial/\partial y_\ell) (P_j(M\circ R,Ly))| = L |( \partial_\ell P_j)(M\circ R,Ly)| \ll R^{j-1}L,$$ uniformly for $y \in [R/L,2R/L)^{n-r}.$ Thus for each fixed $j \leq k-1$, for each $\kappa = (\kappa_{r+1},\ldots,\kappa_n) \in \{0,1\}^{n-r}$ with $|\kappa| \leq j$, $$\sup_{y \in [R/L,2R/L)^{n-r}}\left|\frac{\partial^{|\kappa|}}{\partial y_{r+1}^{\kappa_{r+1}} \cdots \partial y_{n}^{\kappa_{n}}} P_j(M\circ R,Ly)\right| \ll_{n,P_j} L^{|\kappa|} R^{j-|\kappa|} \ll R^{k-1}(L/R)^{|\kappa|},$$ which suffices for ([\[eqn:mixed_partial_P\_tilde\]](#eqn:mixed_partial_P_tilde){reference-type="ref" reference="eqn:mixed_partial_P_tilde"}). For the other terms in the expansion ([\[eqn:Q\]](#eqn:Q){reference-type="ref" reference="eqn:Q"}), fix some multi-index $\alpha=(\alpha_1,\ldots,\alpha_n) \in \mathbb{Z}_{\geq 0}^n$ with $1 \leq |\alpha| \leq k$, and recall the multi-index $\kappa=(\kappa_{r+1},...,\kappa_{n})\in \{0,1\}^{n-r}$ with $|\kappa|\geq 1$ taking derivatives only with respect to coordinates of $y=(y_{r+1},\ldots,y_n)$.
For each $0 \leq j \leq k,$ for $|\kappa| \leq j - |\alpha|,$ $$\begin{gathered} \sup_{\substack{(\xi,\eta) \in [-1,1]^n\\y \in [R/L,2R/L)^{n-r}}}\left| \frac{\partial^{|\kappa|}}{\partial y_{r+1}^{\kappa_{r+1}} \cdots \partial y_{n}^{\kappa_{n}}}(\partial_\alpha P_j)(M\circ R,Ly)\frac{(S\circ \xi,\eta)^{\alpha}}{\alpha!} \right|\\ \ll_{n,k,P} L^{|\kappa|}R^{j-|\alpha|- |\kappa|} S_1^{\alpha_1} \ll R^j (L/R)^{|\kappa|} S_1^{\alpha_1}R^{-|\alpha|}, \end{gathered}$$ in which we recall that $S \circ \xi = (S_1\xi_1,\xi_2,\ldots,\xi_r).$ If $j\leq k-1$, this already suffices for ([\[eqn:mixed_partial_P\_tilde\]](#eqn:mixed_partial_P_tilde){reference-type="ref" reference="eqn:mixed_partial_P_tilde"}), since $S_1 \ll R$ shows that the right-hand side is visibly $\ll R^{k-1}(L/R)^{|\kappa|}.$ (Effectively, for $j \leq k-1$ we are using that $P_j$ is already of degree strictly smaller than $k$.) Finally for the highest degree $j=k$ piece, we must be more careful, and use the fact that in the leading form $P_k$, $X_1$ does not intertwine with the last $n-r$ coordinates, over which we are carrying out the partial summation. Consider the expression $R^{k-1} (L/R)^{|\kappa|} S_1^{\alpha_1}R^{-(|\alpha|-1)},$ recalling $|\alpha|\geq 1$ in the cases we currently consider. As explained in the example in §[5.3.1](#sec_example_P){reference-type="ref" reference="sec_example_P"} and the heuristics in §[2.1](#sec_heuristic){reference-type="ref" reference="sec_heuristic"}, the problematic case is when $|\alpha|=\alpha_1=1$. But since $X_1$ does not intertwine with $X_{r+1},\ldots,X_n$, $\partial_j \partial_1 P_k \equiv 0$ for each $j \geq r+1$. 
Consequently, for any $\alpha$ with $\alpha_1 \geq 1,$ for every $\kappa = (\kappa_{r+1},...,\kappa_n)\in \{0,1\}^{n-r}$ with $|\kappa|\geq 1$, $$\frac{\partial^{|\kappa|}}{\partial y_{r+1}^{\kappa_{r+1}} \cdots \partial y_{n}^{\kappa_{n}}}(\partial_\alpha P_k)(M\circ R,Ly)\equiv 0.$$ Thus for the highest degree form $P_k$, we only need to verify that $R^{k-1} (L/R)^{|\kappa|} S_1^{\alpha_1}R^{-(|\alpha|-1)}\ll R^{k-1} (L/R)^{|\kappa|}$ in cases where $|\alpha| \geq 1$ and $\alpha_1=0$, and this certainly holds. This completes the proof of ([\[eqn:mixed_partial_P\_tilde\]](#eqn:mixed_partial_P_tilde){reference-type="ref" reference="eqn:mixed_partial_P_tilde"}). ## Removing nonlinear terms in the phase, to apply Fourier inversion {#sec:expsum_partial_int} We return our focus to ([\[eqn:T_int_reduced\]](#eqn:T_int_reduced){reference-type="ref" reference="eqn:T_int_reduced"}), and now we remove all nonlinear terms in $(\xi,\eta)$ by integration by parts, in order to apply Fourier inversion. 
First we Taylor expand, writing $$\begin{aligned} Q(S\circ \xi + M\circ R, \eta + \tilde{R}) &=\sum_{j=0}^{k-1}P_j(M\circ R,\tilde{R}) + \sum_{1\leq |\alpha|\leq k} \frac{(\partial_\alpha P)(M\circ R,\tilde{R})}{\alpha !} (S\circ \xi,\eta)^{\alpha} \\ &=(\nabla P_k)(M\circ R,\tilde{R})\cdot (S\circ \xi,\eta) + \tilde{Q}(S\circ \xi + M\circ R, \eta + \tilde{R}) \end{aligned}$$ where we define $\tilde{Q}(S\circ \xi + M\circ R, \eta + \tilde{R})$ to be $$\sum_{j=0}^{k-1}P_j(M\circ R,\tilde{R}) + \sum_{j=1}^{k-1}(\nabla P_j)(M\circ R,\tilde{R})\cdot(S\circ \xi,\eta) + \sum_{2\leq |\alpha|\leq k} \frac{(\partial_\alpha P)(M\circ R,\tilde{R})}{\alpha !} (S\circ \xi,\eta)^{\alpha}.$$ Then the main term of [\[eqn:T_int_reduced\]](#eqn:T_int_reduced){reference-type="eqref" reference="eqn:T_int_reduced"} can be written as $$\begin{gathered} \mathbf{S}(2R/L;w,t)e((M\circ R)\cdot v) \frac{1}{(2\pi)^{n}} \int_{[-1,1]^{n}} \hat{\Phi}_{n}(\xi,\eta) e((S\circ \xi, \eta)\cdot ((v,w)+ (\nabla P_k)(M\circ R,\tilde{R})t) )\\ \times e(\tilde{Q}(S\circ \xi+M\circ R,\eta+\tilde{R}) t) d\xi d\eta. \end{gathered}$$ We will apply integration by parts as in Lemma [Lemma 13](#lem:partial_integration){reference-type="ref" reference="lem:partial_integration"} to remove the nonlinear terms in $(\xi,\eta),$ with $h(\xi,\eta) := \tilde{Q}(S\circ \xi + M\circ R,\eta+\tilde{R}) t.$ We claim that for every $\kappa = (\kappa_1,...,\kappa_n) \in \{0,1\}^n$ with $|\kappa|\geq 1$, $$\begin{aligned} \label{eqn:mixed_partial_P_tilde2} \sup_{(\xi,\eta) \in [-1,1]^n}\left|\frac{\partial^{|\kappa|}}{\partial \xi_1^{\kappa_1} \cdots \partial \xi_r^{\kappa_r}\partial \eta_{r+1}^{\kappa_{r+1}}\cdots \partial \eta_n^{\kappa_n}} h(\xi,\eta)\right| \ll R^{k-1}t =: B_\kappa, \end{aligned}$$ in the notation of Lemma [Lemma 13](#lem:partial_integration){reference-type="ref" reference="lem:partial_integration"}. 
Then by integration by parts, the integral above becomes $$e(\tilde{Q}(S\circ 1+M\circ R,1+\tilde{R})t) \frac{1}{(2\pi)^n}\int_{\mathbb{R}^{n}} \hat{\Phi}_{n}(\xi,\eta) e((S\circ \xi, \eta)\cdot ((v,w)+ t(\nabla P_k)(M\circ R,\tilde{R})) )d\xi d\eta + E_3$$ where $|E_3|\ll_{\phi, n} R^{k-1} t .$ By Fourier inversion, the integral above evaluates precisely to $$\Phi_n((S_1,1,\ldots,1) \circ [(x_1,x_2,...,x_n)+t(\nabla P_k)(M\circ R,\tilde{R})]).$$ Consequently, in absolute value, [\[eqn:T_int_reduced\]](#eqn:T_int_reduced){reference-type="eqref" reference="eqn:T_int_reduced"} is $$\label{eqn:reduced_exp} |\mathbf{S}(2R/L;w,t)| \cdot | \Phi_n((S_1,1,\ldots,1) \circ [(x_1,x_2,...,x_n)+t(\nabla P_k)(M\circ R,\tilde{R})])| + E_2 + E_4$$ with $E_2$ bounded as in ([\[E2\]](#E2){reference-type="ref" reference="E2"}) and $$\label{E4} |E_4| \ll |\mathbf{S}(2R/L;w,t)|\cdot|E_3| \ll_{\phi, n} | \mathbf{S}(2R/L;w,t) | R^{k-1} t.$$ Finally, we verify ([\[eqn:mixed_partial_P\_tilde2\]](#eqn:mixed_partial_P_tilde2){reference-type="ref" reference="eqn:mixed_partial_P_tilde2"}). We bound each term in the expansion defining $\tilde{Q}$. The first sum is independent of $(\xi,\eta)$ so it vanishes when we take partials $\partial_\kappa$ with $|\kappa| \geq 1$. For each $1\leq j \leq k-1$, recall that $(S \circ \xi)=(S_1\xi_1,\xi_2,\ldots,\xi_r)$ and $\tilde{R} \ll R$, so that $$\sup_{(\xi,\eta) \in [-1,1]^n}\left|\frac{\partial^{|\kappa|}}{\partial \xi_1^{\kappa_1} \cdots \partial \xi_r^{\kappa_r}\partial \eta_{r+1}^{\kappa_{r+1}}\cdots \partial \eta_n^{\kappa_n}} (\nabla P_j)(M\circ R, \tilde{R})\cdot(S\circ \xi,\eta) \right| \ll R^{j-1}S_1 \ll R^{k-2}S_1.$$ This is $\ll R^{k-1}$ as desired, since $S_1=R^\sigma< R$. Now fix a multi-index $\alpha$ with $2\leq |\alpha|\leq k$.
For each $0\leq j \leq k$, $$\sup_{(\xi,\eta) \in [-1,1]^n}\left|\frac{\partial^{|\kappa|}}{\partial \xi_1^{\kappa_1} \cdots \partial \xi_r^{\kappa_r}\partial \eta_{r+1}^{\kappa_{r+1}}\cdots \partial \eta_n^{\kappa_n}} \frac{(\partial_\alpha P_j)(M\circ R, \tilde{R})}{\alpha !} (S\circ \xi,\eta)^{\alpha} \right| \ll R^{j-|\alpha|}S_1^{\alpha_1}.$$ For all the lower-degree terms with $j \leq k-1,$ this is $\ll R^{k-1}S_1^{\alpha_1}R^{-|\alpha|}\ll R^{k-1}.$ Finally, consider $R^{k-|\alpha|}S_1^{\alpha_1}$ in the case $2\leq |\alpha| \leq k$ and $j=k$. (Note that unlike in the partial summation in the previous section, we do not need to consider the case $|\alpha|=1$ and $j=k$, which is singled out as the main term here.) If $\alpha_1=0$, this is $\ll R^{k-1}$. If $|\alpha|>\alpha_1\geq 1$, this is $\ll R^{k-1}(S_1/R)$. If $|\alpha| = \alpha_1\geq 2$, then this is $\ll R^{k}(S_1/R)^2=R^{k-1}(S_1^2/R)$. For this to be $\ll R^{k-1}$ we impose $S_1=R^\sigma$ with $$\label{eqn:cond_sigma} \sigma \leq 1/2.$$ This proves ([\[eqn:mixed_partial_P\_tilde2\]](#eqn:mixed_partial_P_tilde2){reference-type="ref" reference="eqn:mixed_partial_P_tilde2"}) and hence ([\[eqn:reduced_exp\]](#eqn:reduced_exp){reference-type="ref" reference="eqn:reduced_exp"}). 
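The mechanism just used — a phase whose derivative is uniformly small can be frozen at an endpoint at a cost proportional to that derivative — can be illustrated numerically in one dimension. In this toy model the quadratic phase $\varepsilon\xi^2$ stands in for $\tilde{Q}\cdot t$, with $\sup|h'|\asymp\varepsilon$ playing the role of $R^{k-1}t$, and the weight is an arbitrary smooth function; this is only a sketch of the estimate, not the integration-by-parts lemma itself.

```python
import numpy as np

# Toy model of the phase-removal step: replacing exp(i*h(xi)) by its value at
# the endpoint xi = 1 changes the weighted integral by O(sup |h'|).
xs = np.linspace(-1.0, 1.0, 200001)
dx = xs[1] - xs[0]
F = np.cos(np.pi * xs / 2) ** 2          # smooth weight vanishing at endpoints

errs = []
for eps in (1e-1, 1e-2, 1e-3):
    exact = np.sum(F * np.exp(1j * eps * xs**2)) * dx   # sup |h'| = 2*eps
    frozen = np.exp(1j * eps) * np.sum(F) * dx          # phase frozen at xi = 1
    errs.append(abs(exact - frozen))
print(errs)  # the error shrinks linearly in eps
```

The linear decay of the error in $\varepsilon$ is the quantitative content of the bound $|E_3|\ll R^{k-1}t$ above, in this simplified setting.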
## Restrictions on $t$ to complete the proof of Proposition [Proposition 18](#prop:approx_maximal){reference-type="ref" reference="prop:approx_maximal"} {#sec:expsum_proof} Recall from [\[eqn:phi_delta\]](#eqn:phi_delta){reference-type="eqref" reference="eqn:phi_delta"} that $\phi(y) \geq 1-c_0/2$ if $|y| \leq \delta_0.$ Thus $\Phi_n((S_1,1,\ldots,1) \circ [(x_1,x_2,...,x_n)+t(\nabla P_k)(M\circ R,\tilde{R})]) \geq (1-c_0/2)^n$, which suffices for Proposition [Proposition 18](#prop:approx_maximal){reference-type="ref" reference="prop:approx_maximal"}, if we choose $$\begin{aligned} \label{eqn:cond_t_1} t=\frac{-x_1}{(\partial_1 P_k)(M\circ R,\tilde{R})} + \tau, \qquad |\tau| \leq \frac{c_2\delta_0}{S_1 R^{k-1}} \end{aligned}$$ where $c_2<1$ is sufficiently small that $c_2|(\partial_jP_k)(M\circ R, \tilde{R})|/R^{k-1}<1/2$ for each $1 \leq j \leq n$, and furthermore restrict $x\in [-c_1,c_1]^n$ where $0<c_1<\delta_0/4$ and $c_1|(\partial_jP_k)(M\circ R, \tilde{R})/(\partial_1P_k)(M\circ R, \tilde{R})|<\delta_0/4$ for each $2 \leq j \leq n$. (Recall from Lemma [Lemma 8](#lem:coefficients){reference-type="ref" reference="lem:coefficients"} that $M$ is chosen so that $|(\partial_1 P_k)(M\circ R,\tilde{R})| \gg R^{k-1} \neq 0$ for all $R$, and in the other direction it is always true that $|(\partial_j P_k)(M\circ R,\tilde{R})| \ll R^{k-1}$ for each $1 \leq j \leq n.$) For then, simultaneously $$|S_1(x_1+(\partial_1 P_k)(M\circ R,\tilde{R}) t)| = |S_1\cdot (\partial_1 P_k)(M\circ R,\tilde{R})\tau| <\delta_0/2,$$ and for $j=2,...,n$, $$|x_j+(\partial_j P_k)(M\circ R,\tilde{R}) t|< \delta_0/4 +\delta_0/4 + \delta_0/2.$$ In fact if $x_1\in [-c_1,-c_1/2]$ then we can ensure $t\in (0,1)$ since for appropriate $c_1,c_2,$ for all sufficiently large $R\geq R_2=R_2(c_1,c_2,\sigma, P,k)$, we have $c_1/(\partial_1 P_k)(M\circ R,\tilde{R}) + c_2/(S_1R^{k-1})<1$ and $c_1/(2(\partial_1 P_k)(M\circ R,\tilde{R})) - c_2/(S_1R^{k-1})>0$. 
Finally, to bound $E_2$ and $E_4$, we impose the condition that $$\begin{aligned} \label{eqn:cond_t} t\leq \frac{c_3}{R^{k-1}} \end{aligned}$$ for some small constant $c_3$ of our choice, and then $\mathbf{E}_1 \ll E_2 + E_4$ is bounded as stated in Proposition [Proposition 18](#prop:approx_maximal){reference-type="ref" reference="prop:approx_maximal"}. For conditions [\[eqn:cond_t\]](#eqn:cond_t){reference-type="eqref" reference="eqn:cond_t"} and [\[eqn:cond_t\_1\]](#eqn:cond_t_1){reference-type="eqref" reference="eqn:cond_t_1"} to be compatible, we verify that we can choose $c_1$ and $c_2$ so that $|R^{k-1}t|\leq c_1(R^{k-1}/(\partial_1 P_k)(M\circ R,\tilde{R}))+c_2(\delta_0/S_1)$ is as small as we want. # The arithmetic contribution {#sec:arithmetic} We now estimate the exponential sum $\mathbf{S}(2R/L;w,t)$ which we extracted from $T_t^{P}f(x)$ in Proposition [Proposition 18](#prop:approx_maximal){reference-type="ref" reference="prop:approx_maximal"}. It is convenient to define the notation: $$\begin{aligned} \label{eqn:change_var} s:=L^k\tau, \qquad y_1:= -\frac{ L^k}{(\partial_1 P_k)(M\circ R,\tilde{R})}x_1 \; ( \text{mod} \; 2\pi), \qquad y_j:= Lx_j \; ( \text{mod} \; 2\pi), \end{aligned}$$ for $r+1\leq j \leq n$, and set $y=(y_{r+1},\ldots,y_n)$. (Implicitly, we also take $y_j=x_j \; ( \text{mod} \; 2\pi)$ for $2 \leq j \leq r,$ although these variables do not play a role here.) Then $y_1+s=L^k t \; ( \text{mod} \; 2\pi)$ and so by recalling $w=(x_{r+1},\ldots,x_n)$ and using homogeneity of $P_k$, $$\begin{aligned} \mathbf{S}(2R/L;w,t) & = \sum_{\substack{m \in \mathbb{Z}^{n-r} \\R/L \leq m_j < 2R/L}} e(Lm\cdot w + L^k P_k(M\circ (R/L),m) t)\\ & = \sum_{\substack{m \in \mathbb{Z}^{n-r} \\R/L \leq m_j < 2R/L}} e(m\cdot y + P_k(M\circ (R/L),m)(y_1+s)). \end{aligned}$$ *Remark 19*. 
Here we encounter a feature that has not arisen in previous special cases of $P_k$ considered in [@ACP23; @EPV22a], for which the first coordinate only appears in $P_k$ in a diagonal term $X_1^k$, so that it can be pulled out of the exponential sum entirely. In our general setting, to evaluate the sum using number-theoretic methods, we require each $M_iR/L$ to be an integer. We will consider a sequence of $R=R_j \rightarrow\infty$ where each $R_j=2^j$ for a sequence of integers $j$. Since $L=R^\lambda$ for some $0< \lambda <1$, we have $R/L=2^{j(1-\lambda)}$ which is an integer if and only if $j\lambda$ is an integer. We will achieve this by choosing $\lambda=\lambda_1/\lambda_2$ to be rational, and then restricting to $j \rightarrow\infty$ along the arithmetic progression $j \equiv 0 \; ( \text{mod} \; \lambda_2)$. For now, we assume $R$ and $L$ are fixed, and $R/L$ is an integer. We define for any prime $q$, and $a\in \mathbb{F}_q,b \in \mathbb{F}_q^{n-r},$ the complete exponential sum $${\mathbf{T}}(a,b;q) = \sum_{m \; ( \text{mod} \; q)^{n-r}} e\left(\frac{2\pi}{q}\left(b \cdot m+a P_k(M\circ (R/L),m) \right)\right).$$ Let $K_1(P_k)$ be the constant provided by Lemma [Lemma 5](#lemma_Dwork_reduced){reference-type="ref" reference="lemma_Dwork_reduced"}, so that the reduction of $P_k(X_1,\ldots,X_n)$ is Dwork-regular over $\mathbb{F}_q$ in $X_1,\ldots,X_n$ for every prime $q \geq K_1(P_k).$ Then for such $q$, by Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"}, $P_k(M\circ (R/L),X_{r+1},\ldots, X_n)$ is a Deligne polynomial in $X_{r+1},\ldots,X_n$ over $\mathbb{F}_q$. Thus we will apply Proposition [Proposition 10](#prop:bound_T){reference-type="ref" reference="prop:bound_T"} to show that for many $a,b$ the sum ${\mathbf{T}}(a,b;q)$ must be "large." 
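The integrality requirement $R/L \in \mathbb{Z}$ from Remark 19 admits a quick numerical sanity check: with $R_j = 2^j$ and $L_j = R_j^\lambda$, the quotient $2^{j(1-\lambda)}$ is an integer exactly for $j$ in the arithmetic progression $\lambda_2 \mid j$. A minimal sketch (the value $\lambda = 1/3$ is a hypothetical sample, not the $\lambda$ chosen later):

```python
from fractions import Fraction

def R_over_L_is_integer(j, lam):
    # R_j = 2^j and L_j = R_j^lam, so R_j/L_j = 2^(j(1-lam));
    # this is an integer exactly when j*(1-lam) is a nonnegative integer,
    # i.e. (for integer j) exactly when j*lam is an integer.
    exponent = Fraction(j) * (1 - lam)
    return exponent.denominator == 1 and exponent >= 0

lam = Fraction(1, 3)  # hypothetical lambda = lambda_1/lambda_2 with lambda_2 = 3
admissible = [j for j in range(1, 13) if R_over_L_is_integer(j, lam)]
print(admissible)  # the multiples of lambda_2 = 3
```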
Consequently, we will construct a set $\Omega$ of $(y_1,\ldots,y_n)\in [0,2\pi]^n$ of relatively large measure on which $|\mathbf{S}(2R/L;w,t)|$, and hence $|T_t^{P}f(x)|,$ is large. We state two propositions, first focused on the set $\Omega$, and then focused on the operator $T_t^{P}f(x).$ **Proposition 20**. *Let $R,L$ be fixed with $R/L$ an integer. There exists a parameter $K_3(P_k,k)$ such that the following holds for all $Q > K_3$. For every prime $q\in [Q/2,Q]$, let $\mathcal{G}(q)$ denote the set of all $(a,b) \in \mathbb{F}_q \times \mathbb{F}_q^{n-r}$ for which $$(1/2) q^{(n-r)/2} \leq |{\mathbf{T}}(a,b;q)|\leq (k-1)^{n-r}q^{(n-r)/2}.$$ For any $0<c_4,c_5<1$ sufficiently small of our choice, define a set $\Omega \subseteq [0,2\pi]^n$ by $$\Omega = \bigcup_{\substack{Q/2 \leq q \leq Q\\\text{$q$ prime}}} \hspace{1em} \bigcup_{(a,b)\in \mathcal{G}(q) } B(a,b;q),$$ where each box $B(a,b;q)$ is defined to be $$\{|y_1-2\pi a/q|\leq c_4q^{-1}, (y_2,\ldots,y_r) \in [0,c_1]^{r-1}, |y_j-2\pi b_j/q|\leq c_5q^{-1-1/(n-r)}, r+1 \leq j \leq n\}.$$ Then $|\Omega| \gg_{c_4,c_5,n,k} (\log Q)^{-1}.$* Next, recall that $L=R^\lambda$ for a parameter $0<\lambda<1$ we will choose later, and from now on we let $Q=R^\kappa$ for a parameter $0<\kappa<1$ we will choose later. **Proposition 21**. *Let $R,L$ be fixed with $R/L$ an integer. Suppose $$\begin{aligned} \label{eqn:cond_LQRS} \frac{1}{Q} \ll \frac{L^{k}}{S_1R^{k-1}}, \qquad \frac{1}{Q^{1+1/(n-r)}} \ll (R/L)^{-1}, \qquad R/L \gg Q^{1+\Delta_0} \end{aligned}$$ where $0<\Delta_0\leq 1/(n-r)$. Assume $R,Q$ are sufficiently large, say $R \geq R_3(\delta_0,c_1,c_2,k,P,\sigma,\lambda,\kappa)$ and $Q \geq K_4(n,k,r,P_k,\Delta_0)$.
Then there exists a set $\Omega^* \subseteq B_n(0,1)$ with $|\Omega^*| \gg_{c_1,c_4,c_5,n,k,r} (\log Q)^{-1}$ and such that for every $x\in \Omega^*$, there exists a $t\in (0,1)$ such that $$|T_t^{P}f(x)| \geq (1-c_0)^n 2^{-(n-r)-1}\bigg(\frac{R}{LQ^{1/2}}\bigg)^{n-r} -| \mathbf{E}_1| - |\mathbf{E}_2|$$ where $$|\mathbf{E}_1| + |\mathbf{E}_2| \ll_{\phi,n,k,r,P} (c_3+c_5+Q^{-\Delta_0/2})\bigg(\frac{R}{LQ^{1/2}}\bigg)^{n-r}.$$* By taking $R$ sufficiently large and choosing the absolute constants $c_3, c_5$ sufficiently small, we will ultimately make the error terms less than half the size of the main term. We prove the propositions and then turn in §[7](#sec:parameters){reference-type="ref" reference="sec:parameters"} to the final choice of parameters, optimizing the counterexample and completing the proof of Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}. ## Proof of Proposition [Proposition 20](#prop:Omega){reference-type="ref" reference="prop:Omega"} {#proof-of-proposition-propomega} Since $M \circ (R/L) \in \mathbb{Z}^{r}$ is fixed and $P_k$ is Dwork-regular, by Proposition [Proposition 6](#prop:dwork_deligne){reference-type="ref" reference="prop:dwork_deligne"}, $P_k(M\circ (R/L),X_{r+1},\ldots, X_n)$ is a Deligne polynomial in $X_{r+1},\ldots,X_n$ over $\mathbb{F}_q$ for all primes $q \geq K_1(P_k).$ By Proposition [Proposition 10](#prop:bound_T){reference-type="ref" reference="prop:bound_T"}, for all primes $q>\max\{k,K_1,K_2\}$, $\alpha_2 q^{n-r+1} \leq |\mathcal{G}(q)| \leq q^{n-r+1}$ for some $\alpha_2>0$ depending only on $n,k,r$. We have defined the set $\Omega$ accordingly, so that for $(y_1,\ldots,y_n)\in B(a,b;q)$ for some $(a,b) \in \mathcal{G}(q),$ a small $s$ can be chosen so that $y_1 + s=2\pi a/q$ precisely, and $(y_{r+1},\ldots,y_n)$ is well-approximated by $(2\pi b_{r+1}/q,\ldots,2\pi b_n/q)$. 
We will use this Diophantine behavior in Proposition [Proposition 21](#prop:evaluate_S){reference-type="ref" reference="prop:evaluate_S"} to show that for $(y_1,\ldots,y_n) \in B(a,b;q)$, $\mathbf{S}(2R/L;w,t)$ is well-approximated by $\lfloor R/Lq\rfloor^{n-r}$ copies of $\mathbf{T}(a,b;q)$ and is hence $\gg \lfloor R/Lq\rfloor^{n-r} q^{(n-r)/2}$. For now we compute a lower bound on the measure of $\Omega$. This is not immediate, since if $n-r>1,$ many of the boxes $B(a,b;q)$ can overlap as $q$ varies; our construction of the set $\mathcal{G}(q)$ is also completely inexplicit about which $a,b$ are chosen. However, we can apply [@ACP23 Lemma 4.1], which shows that if a set of boxes is "well-distributed," then the measure of their union is at least a positive proportion of the sum of their measures. Precisely, choose $K_0$ sufficiently large that (by the prime number theorem) $\pi(x) \geq (1/4) x/\log x$ for all $x \geq K_0/2$. Let $\mathcal{I}$ denote the index set of $(a,b;q)$ over which the unions are taken in the definition of $\Omega.$ Then by [@ACP23 Lemma 4.1(ii)], $$\begin{aligned} | \Omega| &= |\bigcup_{\substack{Q/2 \leq q \leq Q\\\text{$q$ prime}}} \hspace{1em} \bigcup_{(a,b)\in \mathcal{G}(q) } B(a,b;q)| \gg \sum_{\substack{Q/2 \leq q \leq Q\\ \text{$q$ prime}}} \hspace{1em} \sum_{(a,b)\in \mathcal{G}(q) } |B(a,b;q)|\\ & \gg (Q/\log Q) (\alpha_2 Q^{n-r+1}) Q^{-1} (Q^{-1-1/(n-r)})^{n-r} \gg (\log Q)^{-1}\end{aligned}$$ (with implied constants depending on $c_1,c_4,c_5,n,k,r$) as long as all the boxes $B(a,b;q)$ have comparable size and $$\label{welldist} \# \{ (a,b;q), (a',b';q') \in \mathcal{I} : B(a,b;q) \cap B(a',b';q') \neq \emptyset \} \ll |\mathcal{I}|,$$ with an acceptable implied constant. It is clearly true that $1 \ll |B(a,b;q)|/|B(a',b';q')|\ll 1$ for all pairs of indices $(a,b;q)$ and $(a',b';q') \in \mathcal{I}$. 
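The power of $Q$ in the displayed chain of inequalities can be tallied exactly: one factor from the number of primes in $[Q/2,Q]$, $n-r+1$ from $|\mathcal{G}(q)|$, and negative contributions from the box dimensions, summing to zero. A small sketch of this exponent bookkeeping (illustrative only):

```python
from fractions import Fraction

def total_Q_exponent(n, r):
    primes = Fraction(1)                 # ~ Q/log Q primes q in [Q/2, Q]
    good_pairs = Fraction(n - r + 1)     # |G(q)| >= alpha_2 * q^(n-r+1)
    y1_interval = Fraction(-1)           # |y_1 - 2*pi*a/q| <= c_4 * q^(-1)
    # each of the n-r remaining coordinates contributes q^(-1-1/(n-r))
    other_coords = (n - r) * Fraction(-(n - r) - 1, n - r)
    return primes + good_pairs + y1_interval + other_coords

for pair in [(3, 1), (5, 2), (10, 4)]:
    assert total_Q_exponent(*pair) == 0
print("total power of Q is 0, so |Omega| >> (log Q)^{-1}")
```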
The bound ([\[welldist\]](#welldist){reference-type="ref" reference="welldist"}) is a statement that the boxes are well-distributed, since a trivial upper bound for the cardinality would be $|\mathcal{I}|^2$; on the other hand, if all the boxes were pairwise disjoint, the cardinality would be precisely $|\mathcal{I}|.$ It remains to verify ([\[welldist\]](#welldist){reference-type="ref" reference="welldist"}). Upon requiring $Q > 2\max\{k,K_0,K_1,K_2\}$ and recalling the construction of the sets $\mathcal{G}(q)$, $(Q/\log Q) Q^{n-r+1}\ll |\mathcal{I}| \ll (Q/\log Q) Q^{n-r+1}.$ The contribution to ([\[welldist\]](#welldist){reference-type="ref" reference="welldist"}) when $(a,b;q)=(a',b';q')$ as tuples is clearly $\ll |\mathcal{I}|,$ so we consider instead the case when the tuples are distinct, and we suppose that $B(a,b;q) \cap B(a',b';q') \neq \emptyset,$ so that in particular, $$\begin{aligned} |a/q-a'/q'| & \leq \frac{c_4}{2\pi}(1/q + 1/q'), \\ |b_j/q - b_j'/q'| & \leq \frac{c_5}{2\pi}(1/q^{1+1/(n-r)} + 1/(q')^{1+1/(n-r)}), \qquad r+1 \leq j \leq n. \end{aligned}$$ If $q = q'$ then by taking $c_4, c_5<1$, the above relations impose $|a-a'|<1$ and $|b_j - b_j'|<1$ for all $j$, so that $(a,b;q)=(a',b';q')$, which is a case we already considered. So we now assume $q \neq q'$ are distinct primes in $[Q/2,Q]$. Then the above relations show that $$\begin{aligned} |aq'-a'q| & \leq \frac{c_4}{\pi}Q, \nonumber \\ |b_jq' - b_j'q| & \leq \frac{c_5C}{\pi}Q^{1-1/(n-r)}, \qquad r+1 \leq j \leq n \label{relations} \end{aligned}$$ where $C$ is a constant depending on $n,r$. Recall that given an integer $m$ and distinct primes $q,q'$, there is at most one choice of a pair $a,a'$ with $1 \leq a \leq q$ and $1 \leq a' \leq q'$ such that $aq'-a'q=m.$ Indeed, if there were another representation $a_0, a_0'$, then $aq'-a'q=a_0q'-a_0'q$ would imply that $(a-a_0)q'=(a'-a_0')q$, so that $q' | (a'- a_0')$ and $q|(a-a_0)$, implying $a=a_0$ and $a' =a_0'$, as claimed.
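The at-most-one-representation claim above can also be verified by brute force for small moduli; a sketch, with $q=11$, $q'=13$ as hypothetical small stand-ins for the (much larger) primes in $[Q/2,Q]$:

```python
def representations(m, q, qp):
    """All pairs (a, a') with 1 <= a <= q, 1 <= a' <= qp and a*qp - a'*q == m."""
    return [(a, ap) for a in range(1, q + 1) for ap in range(1, qp + 1)
            if a * qp - ap * q == m]

q, qp = 11, 13  # hypothetical distinct primes standing in for q, q'
for m in range(-q * qp, q * qp + 1):
    assert len(representations(m, q, qp)) <= 1
print("each m has at most one representation a*q' - a'*q = m")
```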
Thus once a tuple $(m_1,m_{r+1},\ldots,m_n)$ of integers is chosen with $|m_1| \leq (c_4/\pi)Q$ and $|m_j| \leq (c_5C/\pi)Q^{1-1/(n-r)}$ for each $j=r+1,\ldots,n$, there is at most one choice of a pair $(a,b)\in \mathbb{F}_q \times \mathbb{F}_q^{n-r}$ and $(a',b')\in \mathbb{F}_{q'} \times \mathbb{F}_{q'}^{n-r}$ satisfying the $n-r+1$ relations in ([\[relations\]](#relations){reference-type="ref" reference="relations"}) above. Taking into account all possible values of such $(m_1,m_{r+1},\ldots,m_n)$, this shows that given $q\neq q'$, at most $$\ll_{c_4,c_5,n,r} Q \cdot (Q^{1-1/(n-r)})^{n-r} \ll Q^{n-r}$$ pairs of index tuples $(a,b;q)$ and $(a',b';q')$ can have $B(a,b;q) \cap B(a',b';q') \neq \emptyset$. Taking a union over all pairs of primes $q \neq q' \in [Q/2,Q]$ bounds the left-hand side of ([\[welldist\]](#welldist){reference-type="ref" reference="welldist"}) by $\ll (Q/\log Q)^2 Q^{n-r} \ll |\mathcal{I}|.$ This proves ([\[welldist\]](#welldist){reference-type="ref" reference="welldist"}) and completes the proof of Proposition [Proposition 20](#prop:Omega){reference-type="ref" reference="prop:Omega"}. ## Proof of Proposition [Proposition 21](#prop:evaluate_S){reference-type="ref" reference="prop:evaluate_S"} {#sec_Omega_star} The existence and measure of the set $\Omega^* \subseteq B_n(0,1)$ follow directly from the construction of the set $\Omega$ in Proposition [Proposition 20](#prop:Omega){reference-type="ref" reference="prop:Omega"}. Indeed, $\Omega^*$ is defined to be the set of those $x \in B_n(0,1)$ such that the change of variables defining the $y$-coordinates in ([\[eqn:change_var\]](#eqn:change_var){reference-type="ref" reference="eqn:change_var"}) maps $x$ to a point $(y_1,y_2,\ldots,y_n) \in \Omega$. To bound the measure of $\Omega^*$ from below, one only needs to compute the measure of the pre-image of $\Omega$ under the change of variables ([\[eqn:change_var\]](#eqn:change_var){reference-type="ref" reference="eqn:change_var"}).
Under the assumption that $\lambda> (k-1)/k$ (which will hold for our final choice of $\lambda$), this simple rescaling argument follows precisely the argument given in [@ACP23 §4.5], and we omit it. For every $x \in \Omega^*$ there is thus a corresponding $y =(y_1,\ldots,y_n) \in \Omega$ such that $y \in B(a,b;q)$ for some tuple $a,b,q$ for which $(1/2) q^{(n-r)/2} \leq |\mathbf{T}(a,b;q)| \ll_k q^{(n-r)/2}.$ By the construction of the box $B(a,b;q)$, $|y_1 - 2\pi a/q| \leq c_4 q^{-1}$. Thus we may *choose* a value of $s$ with $|s| \leq c_4q^{-1}$, such that $y_1 + s=2\pi a/q$ exactly. We make this choice for $s$ (which corresponds precisely via ([\[eqn:change_var\]](#eqn:change_var){reference-type="ref" reference="eqn:change_var"}) to a choice of the time parameter $t$). Again by the construction of the box $B(a,b;q)$, $|y_j -2\pi b_j/q| \leq c_5 q^{-1-1/(n-r)}$ for each $r+1 \leq j \leq n$, so an application of Proposition [Proposition 15](#prop:2.7){reference-type="ref" reference="prop:2.7"} with $N_j=R/L$ for all $r+1\leq j \leq n$ and $V=c_5 q^{-1-1/(n-r)}$, yields $$\mathbf{S}(2R/L;w,t) =\left\lfloor \frac{R}{Lq} \right\rfloor^{n-r} {\mathbf{T}}(a,b;q) + \mathbf{E}_2$$ with $\mathbf{E}_2=E$ as in the proposition. For every $q \in [Q/2,Q],$ in the notation of Proposition [Proposition 15](#prop:2.7){reference-type="ref" reference="prop:2.7"} we have $V\ll c_5 N^{-1}$ by the second constraint in ([\[eqn:cond_LQRS\]](#eqn:cond_LQRS){reference-type="ref" reference="eqn:cond_LQRS"}), and $N \gg q^{1+\Delta_0}$ by the third constraint in ([\[eqn:cond_LQRS\]](#eqn:cond_LQRS){reference-type="ref" reference="eqn:cond_LQRS"}). 
Hence we may apply the simplified upper bound for the error term given in Remark [Remark 16](#remark_size_of_V){reference-type="ref" reference="remark_size_of_V"}, which in the present setting yields $$\label{E2_work} |\mathbf{E}_2| \ll \left\lfloor \frac{R}{Lq} \right\rfloor^{n-r} q^{(n-r)/2} (c_5 + \left\lfloor \frac{R}{Lq} \right\rfloor^{-1} (\log q)^{n-r} ) \ll (c_5+Q^{-\Delta_0/2}) \left\lfloor \frac{R}{LQ^{1/2}}\right\rfloor^{n-r}.$$ Here we have applied the third condition in [\[eqn:cond_LQRS\]](#eqn:cond_LQRS){reference-type="eqref" reference="eqn:cond_LQRS"} to see that for all $Q$ sufficiently large, $$\left(\frac{R}{LQ}\right)^{-1}(\log Q)^{n-r} \ll Q^{-\Delta_0} (\log Q)^{n-r} \ll Q^{-\Delta_0/2}.$$ Finally, for all $R$ sufficiently large (with respect to $\lambda,\kappa$), $\lfloor \frac{R}{Lq} \rfloor \geq (1/2) R/Lq$, so that the main term of $|\mathbf{S}(2R/L;w,t)|$ satisfies $$\begin{aligned} \left\lfloor \frac{R}{Lq} \right\rfloor^{n-r} |{\mathbf{T}}(a,b;q)| \geq 2^{-(n-r)-1} \left( \frac{R}{Lq}\right)^{n-r} q^{(n-r)/2} \geq 2^{-(n-r)-1}\bigg(\frac{R}{LQ^{1/2}}\bigg)^{n-r}. \end{aligned}$$ The last step of proving Proposition [Proposition 21](#prop:evaluate_S){reference-type="ref" reference="prop:evaluate_S"} is to control the error term $\mathbf{E}_1$ from Proposition [Proposition 18](#prop:approx_maximal){reference-type="ref" reference="prop:approx_maximal"}. By that proposition, $|\mathbf{E}_1| \ll \sup_{R/L\leq u_j \leq 2R/L} c_3|\mathbf{S}(u;w,t)|,$ and thus it will suffice to prove that uniformly in $R/L\leq u_j \leq 2R/L,$ the sum obeys the upper bound $|\mathbf{S}(u;w,t)|\ll (R/LQ^{1/2})^{n-r}.$ For this we can again apply Proposition [Proposition 15](#prop:2.7){reference-type="ref" reference="prop:2.7"} with $N_j=u_j$ and $V=c_5 q^{-1-1/(n-r)}$, so that $$|\mathbf{S}(u;w,t) | =\prod_{j=r+1}^{n}\left\lfloor \frac{u_j}{q} \right\rfloor \cdot |{\mathbf{T}}(a,b;q)| + E_5$$ with $E_5=E$ as in the proposition. 
We apply the bound $|{\mathbf{T}}(a,b;q)|\ll_k q^{(n-r)/2}$, valid for each pair $(a,b) \in \mathcal{G}(q)$. Upon noting that the expressions for both the main term and $E$ given in Proposition [Proposition 15](#prop:2.7){reference-type="ref" reference="prop:2.7"} are increasing as each range $N_j$ increases, we bound both from above by taking $u_j = 2R/L$ in each case. Hence we may in fact apply the upper bound ([\[E2_work\]](#E2_work){reference-type="ref" reference="E2_work"}) also to $E_5$. In conclusion, $$|\mathbf{E}_1| \ll c_3\left\lfloor \frac{R}{Lq}\right\rfloor^{n-r} q^{(n-r)/2} + c_3(c_5+Q^{-\Delta_0/2})\left( \frac{R}{LQ^{1/2}}\right)^{n-r} \ll c_3\left( \frac{R}{LQ^{1/2}}\right)^{n-r}.$$ # Choosing parameters and conclusion of the proof {#sec:parameters} From Proposition [Proposition 21](#prop:evaluate_S){reference-type="ref" reference="prop:evaluate_S"}, by taking $R$ sufficiently large (relative to $\phi,\delta_0,c_1,c_2,n,k,r,P,\sigma,\lambda,\kappa$) and choosing $c_3,c_5$ sufficiently small (relative to $\phi,c_0,n,k,r,P$) we may conclude that under the hypotheses of the proposition, $$\begin{aligned} \sup_{0<t<1}|T_t^{P}f(x)| \geq \frac{1}{2}(1-c_0)^n 2^{-(n-r)-1} \left( \frac{R}{LQ^{1/2}}\right)^{n-r}. \end{aligned}$$ It then follows from the measure of $\Omega^*$ and the computation ([\[f_norm\]](#f_norm){reference-type="ref" reference="f_norm"}) of $\|f\|_{L^2}$ that $$\begin{aligned} \frac{\|\sup_{0<t<1}|T_t^{P}f(x)|\|_{L^1(B_n(0,1))}}{\|f\|_{L^2}} \gg \left( \frac{R}{LQ^{1/2}}\right)^{n-r}S_1^{1/2}(R/L)^{-(n-r)/2}(\log Q)^{-1}. 
\end{aligned}$$ Set $$\delta(n,k,r)=\frac{n-r}{4((k-1)(n-(r-1))+1)}.$$ To finish the proof of Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}, it suffices to show that we can define the parameters $S_1 = R^\sigma$, $L=R^\lambda$ and $Q=R^\kappa$ such that $R/L$ is an integer, ([\[eqn:cond_sigma\]](#eqn:cond_sigma){reference-type="ref" reference="eqn:cond_sigma"}) and ([\[eqn:cond_LQRS\]](#eqn:cond_LQRS){reference-type="ref" reference="eqn:cond_LQRS"}) are satisfied, and for every $s< \frac{1}{4} + \delta(n,k,r)$, $$\begin{aligned} \label{eqn:main3_equiv} \bigg(\frac{R}{LQ^{1/2}}\bigg)^{n-r} S_1^{1/2}(R/L)^{-(n-r)/2}(\log Q)^{-1}\geq A_s R^{s'} \end{aligned}$$ for some $s'>s$ (and some nonzero constant $A_s$). Note that verifying [\[eqn:main3_equiv\]](#eqn:main3_equiv){reference-type="eqref" reference="eqn:main3_equiv"} is equivalent to choosing $\sigma, \lambda, \kappa$ such that $$\begin{aligned} \label{eqn:s_maximize} s &< \frac{\sigma}{2} + \frac{n-r}{2} - (\kappa + \lambda) \frac{n-r}{2}, \end{aligned}$$ while ([\[eqn:cond_sigma\]](#eqn:cond_sigma){reference-type="ref" reference="eqn:cond_sigma"}) imposes $\sigma\leq 1/2$ and ([\[eqn:cond_LQRS\]](#eqn:cond_LQRS){reference-type="ref" reference="eqn:cond_LQRS"}) imposes $$\label{kls_relations} \kappa + k\lambda \geq k- 1 + \sigma, \qquad \frac{n-(r-1)}{n-r}\kappa + \lambda \geq 1 ,\qquad \lambda \leq 1- \kappa (1+\Delta_0).$$ By taking a linear combination of the first two relations in the line above (namely $1/(k-1)$ times the first relation plus $n-r$ times the second relation), we obtain $$\label{kl_relation} \kappa + \lambda \geq \frac{n-(r-1) + \sigma /(k-1)}{n-(r-1)+1/(k-1)}.$$ To maximize the right-hand side of [\[eqn:s_maximize\]](#eqn:s_maximize){reference-type="eqref" reference="eqn:s_maximize"}, we choose $\kappa, \lambda$ so that equality holds in this relation, and substitute the resulting value for $\kappa+\lambda$ into 
([\[eqn:s_maximize\]](#eqn:s_maximize){reference-type="ref" reference="eqn:s_maximize"}). For all $k \geq 2$ the coefficient of $\sigma$ on the right-hand side of ([\[eqn:s_maximize\]](#eqn:s_maximize){reference-type="ref" reference="eqn:s_maximize"}) is then positive, so in order to enlarge the region in ([\[eqn:s_maximize\]](#eqn:s_maximize){reference-type="ref" reference="eqn:s_maximize"}) as much as possible, we take $\sigma=1/2$. Now solving for $\kappa$ and $\lambda$ that obey the first two relations in ([\[kls_relations\]](#kls_relations){reference-type="ref" reference="kls_relations"}) and satisfy equality in ([\[kl_relation\]](#kl_relation){reference-type="ref" reference="kl_relation"}) reveals $$\kappa = \frac{n-r}{2((k-1)(n-(r-1))+1)}, \qquad \lambda = 1 - \frac{n-(r-1)}{2((k-1)(n-(r-1))+1)}.$$ It is then true that $0<\kappa, \lambda <1$. Moreover, $\lambda= 1 - \kappa (1+\Delta_0)$ with $\Delta_0 = 1/(n-r)$, so that the third relation in ([\[kls_relations\]](#kls_relations){reference-type="ref" reference="kls_relations"}) holds. Additionally, $\lambda>(k-1)/k$, as was required in §[6.2](#sec_Omega_star){reference-type="ref" reference="sec_Omega_star"}. Furthermore, $\lambda=\lambda_1/\lambda_2$ is a rational number, and hence we take a sequence of integers $j \rightarrow\infty$ with $j\equiv 0 \pmod{2((k-1)(n-(r-1))+1)}$. Then for each $R=R_j=2^j$ and $L=L_j=R_j^\lambda$ as $j \rightarrow\infty$ in this sequence, $R_j/L_j = 2^{j(1-\lambda)}$ is an integer, as required in Remark [Remark 19](#remark_integer){reference-type="ref" reference="remark_integer"}. Finally, we conclude that with these choices, ([\[eqn:main3_equiv\]](#eqn:main3_equiv){reference-type="ref" reference="eqn:main3_equiv"}) holds for all $s< 1/4 + \delta(n,k,r),$ which ends the proof of Theorem [Theorem 3](#thm:main3){reference-type="ref" reference="thm:main3"}.
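The parameter choices above can be double-checked with exact rational arithmetic; a sketch verifying, for sample $(n,k,r)$ with $r<n$, the relations in ([\[kls_relations\]](#kls_relations){reference-type="ref" reference="kls_relations"}), the constraint $\lambda>(k-1)/k$, and the resulting exponent $1/4 + \delta(n,k,r)$:

```python
from fractions import Fraction

def check_parameters(n, k, r):
    # D is the common denominator (k-1)(n-(r-1)) + 1 appearing in kappa, lambda
    D = (k - 1) * (n - (r - 1)) + 1
    sigma = Fraction(1, 2)
    kappa = Fraction(n - r, 2 * D)
    lam = 1 - Fraction(n - (r - 1), 2 * D)
    Delta0 = Fraction(1, n - r)
    # the three relations in (kls_relations)
    assert kappa + k * lam >= k - 1 + sigma
    assert Fraction(n - (r - 1), n - r) * kappa + lam >= 1
    assert lam == 1 - kappa * (1 + Delta0)
    # 0 < kappa, lambda < 1, and lambda > (k-1)/k as required in Sec. 6.2
    assert 0 < kappa < 1 and 0 < lam < 1 and lam > Fraction(k - 1, k)
    # the admissible range of s is sigma/2 + (n-r)/2 * (1 - kappa - lambda)
    s_bound = sigma / 2 + Fraction(n - r, 2) * (1 - kappa - lam)
    assert s_bound == Fraction(1, 4) + Fraction(n - r, 4 * D)  # = 1/4 + delta(n,k,r)
    return s_bound

print(check_parameters(5, 3, 2))  # → 1/3
```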
# Forms and intertwining rank: examples and remarks {#sec_details_examples} ## Examples of Dwork-regular forms of any intertwining rank For each $k \geq 3$ and $2 \leq r \leq n,$ we now prove the following forms of degree $k$ are Dwork-regular over $\mathbb{Q}$ in $X_1,\ldots, X_n$ with intertwining rank $r$, namely $$\begin{aligned} P_k(X_1,\ldots,X_n)&=X_1^k+\cdots + X_n^k + \sum_{2\leq j \leq r}X_1X_j^{k-1} + \sum_{2\leq i < j \leq n} X_iX_j^{k-1},\quad \text{$k \geq 3$ odd;} \\ P_k(X_1,\ldots,X_n)&=X_1^k+\cdots + X_n^k + \sum_{2\leq j \leq r} X_1^2X_j^{k-2}+ \sum_{2\leq i < j \leq n} X_i^2X_j^{k-2}, \quad \text{$k \geq 4$ even.}\end{aligned}$$ These visibly have intertwining rank $r$. In the next sections we additionally prove these forms are indecomposable, and we compute the codimension of all Dwork-regular forms of intertwining rank $r<n$, thus quantifying the set of forms for which Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} proves a new result. First we prove that each example $P_k$ defined above is Dwork-regular. We provide a full proof in the case $k \geq 3$ is odd; this relies on the fact that $k-1$ is then even. In the case that $k \geq 4$ is even, the proof is analogous, and relies on the fact that $k-2$ is even. By Lemma [Lemma 4](#lem:dwork_equiv){reference-type="ref" reference="lem:dwork_equiv"}, it suffices to check that for all nonempty $S\subseteq \{1,...,n\}$, $P_S:=P|_{X_i=0,i\not\in S}$ is nonzero if $|S|=1$ and $P_S$ is nonsingular if $|S|\geq 2$. It is clear that $P_S$ is nonzero for $|S|=1$, so henceforth we assume $|S|\geq 2$. Suppose $S=\{\ell_1,...,\ell_m\}$ with $\ell_1<\cdots <\ell_m$. If $1\not\in S$ then $P_S$ is of the form $$\begin{aligned} \label{eqn:PS_form_1} X_{\ell_1}^k+\cdots +X_{\ell_{m}}^k + \sum_{\substack{2\leq \ell_i<\ell_j\leq n,\\\ell_i,\ell_j\in S}} X_{\ell_i}X_{\ell_j}^{k-1}. 
\end{aligned}$$ If $1\in S$ and $S\setminus \{1\}\subseteq \{r+1,...,n\}$ then $P_S$ is of the form $$\begin{aligned} X_1^k + X_{\ell_2}^k+\cdots +X_{\ell_{m}}^k +\sum_{\substack{2\leq \ell_i<\ell_j\leq n,\\\ell_i,\ell_j\in S}} X_{\ell_i}X_{\ell_j}^{k-1}. \end{aligned}$$ This is the sum of $X_1^k$ and a polynomial $\tilde{P}_S$ of the form [\[eqn:PS_form_1\]](#eqn:PS_form_1){reference-type="eqref" reference="eqn:PS_form_1"}, and so it is nonsingular if and only if $\tilde{P}_S$ is nonsingular. (Note that when $|S|=2$ this is diagonal and hence nonsingular.) Lastly if $1\in S$ and $S\setminus \{1\} \not\subseteq \{r+1,...,n\}$ then $P_S$ is of the form $$\begin{aligned} \label{eqn:PS_form_3} X_1^k + X_{\ell_2}^k+\cdots +X_{\ell_{m}}^k + \sum_{\substack{2\leq \ell_j \leq r,\\\ell_j\in S}} X_1X_{\ell_j}^{k-1} +\sum_{\substack{2\leq \ell_i<\ell_j\leq n,\\\ell_i,\ell_j \in S}} X_{\ell_i}X_{\ell_j}^{k-1}. \end{aligned}$$ In particular, if $|S|=2$ then $P_S$ is of the form $$\begin{aligned} \label{eqn:PS_form_4} X_1^k + X_{\ell_2}^k + X_1X_{\ell_2}^{k-1}. \end{aligned}$$ Consequently, in all cases (after relabelling variables) it suffices to check that for each $m\geq m' \geq 2$ and $\alpha \in \{0,1\}$, $$Q(Y_1,...,Y_m) := Y_1^k +\cdots +Y_{m}^k + \sum_{2\leq j\leq m'} Y_{1}Y_{j}^{k-1} +\alpha \sum_{\substack{2\leq i<j\leq m}} Y_{i}Y_{j}^{k-1}$$ is nonsingular. (The case $m'=m,\alpha=1$ corresponds to [\[eqn:PS_form_1\]](#eqn:PS_form_1){reference-type="eqref" reference="eqn:PS_form_1"} with $Y_i=X_{\ell_i}$, the case $m'<m,\alpha=1$ corresponds to [\[eqn:PS_form_3\]](#eqn:PS_form_3){reference-type="eqref" reference="eqn:PS_form_3"}, and the case $m'=m=2,\alpha=0$ corresponds to [\[eqn:PS_form_4\]](#eqn:PS_form_4){reference-type="eqref" reference="eqn:PS_form_4"}.) 
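Before analyzing the gradient of $Q$, its partial derivatives can be confirmed against the closed-form expressions in the system that follows, using exact polynomial arithmetic; a minimal sketch (the parameters $m=4$, $m'=3$, $k=5$, $\alpha=1$ are an arbitrary sample):

```python
def mono(m, c, pairs):
    # monomial c * prod Y_i^e_i in m variables; pairs = [(index, power)], 1-indexed
    if c == 0:
        return {}
    e = [0] * m
    for i, p in pairs:
        e[i - 1] += p
    return {tuple(e): c}

def padd(*polys):  # sum of polynomials stored as {exponent-tuple: coeff}
    out = {}
    for p in polys:
        for e, c in p.items():
            out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def diff(p, i):  # partial derivative with respect to Y_i (1-indexed)
    out = {}
    for e, c in p.items():
        if e[i - 1] > 0:
            e2 = list(e); e2[i - 1] -= 1
            out[tuple(e2)] = out.get(tuple(e2), 0) + c * e[i - 1]
    return out

m, mp, k, alpha = 4, 3, 5, 1  # sample parameters: m >= mp >= 2, k odd
Q = padd(*[mono(m, 1, [(i, k)]) for i in range(1, m + 1)],
         *[mono(m, 1, [(1, 1), (j, k - 1)]) for j in range(2, mp + 1)],
         *[mono(m, alpha, [(i, 1), (j, k - 1)])
           for i in range(2, m + 1) for j in range(i + 1, m + 1)])

def claimed(ell):  # the closed-form partial derivative of Q with respect to Y_ell
    if ell == 1:
        return padd(mono(m, k, [(1, k - 1)]),
                    *[mono(m, 1, [(j, k - 1)]) for j in range(2, mp + 1)])
    terms = [mono(m, k, [(ell, k - 1)])]
    if ell <= mp:
        terms.append(mono(m, k - 1, [(1, 1), (ell, k - 2)]))
    terms += [mono(m, alpha * (k - 1), [(i, 1), (ell, k - 2)]) for i in range(2, ell)]
    terms += [mono(m, alpha, [(j, k - 1)]) for j in range(ell + 1, m + 1)]
    return padd(*terms)

ok = all(diff(Q, ell) == claimed(ell) for ell in range(1, m + 1))
print(ok)
```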
Suppose there exists $a=[a_1: \cdots : a_m]\in \mathbb{P}^{m-1}$ that is a simultaneous solution to the system $$\begin{aligned} \frac{\partial Q}{\partial Y_1} &= kY_{1}^{k-1} + \sum_{1<j\leq m'} Y_{j}^{k-1} =0,\\ \frac{\partial Q}{\partial Y_{\ell}}&=kY_{\ell}^{k-1} +(k-1)Y_1Y_{\ell}^{k-2}+\alpha(k-1)\sum_{2\leq i < \ell}Y_iY_{\ell}^{k-2}+\alpha\sum_{\ell<j\leq m} Y_j^{k-1}=0, \ \ 2 \leq \ell \leq m',\\ \frac{\partial Q}{\partial Y_{\ell}}&=kY_{\ell}^{k-1} +\alpha(k-1)\sum_{2\leq i < \ell}Y_iY_{\ell}^{k-2}+\alpha\sum_{\ell<j\leq m} Y_j^{k-1}=0, \ \ m'+1\leq \ell \leq m. \end{aligned}$$ Since $k$ is odd, $k-1$ is even, and so the vanishing of $\partial Q / \partial Y_{1}$ forces $a_\ell=0$ for $1\leq \ell \leq m'$. If $\alpha=0$, then the vanishing of the partials $\partial Q / \partial Y_{m'+1},\ldots,\partial Q / \partial Y_m$ forces $a_\ell=0$ for $m'+1\leq \ell \leq m$. Otherwise if $\alpha=1$, then the vanishing of $\partial Q / \partial Y_{m'}$ forces $a_\ell=0$ for $m'+1\leq \ell \leq m$ (by recalling $a_\ell=0$ for $1 \leq \ell \leq m'$). So $[a_1: \cdots :a_m]$ cannot represent a point in $\mathbb{P}^{m-1}$. Thus $Q$ is nonsingular, as needed. ## A criterion to check if a form is indecomposable {#sec:decomposability} We will next prove that the examples given above are indecomposable, and in particular, there is no $\mathrm{GL}_n(\mathbb{Q})$ change of variables that brings them to the shape $X_1^k + Q_k(X_2,\ldots,X_n)$ (e.g.
the shape required in previous work [@ACP23; @EPV22a]). We refer to the results of Harrison [@Har75] and Harrison-Pareigis [@HarPar88], who studied the theory of higher-degree forms using the analogous theory for symmetric spaces. (Being decomposable is also referred to as being of Sebastiani-Thom type in algebraic geometry literature, for example in the literature on Carlson and Griffiths' result [@CarGri80] that the generic polynomial can be reconstructed (up to a constant multiplicative factor) from its Jacobian ideal; see the explicit relation to ST-type in [@Wan15].) Let $L$ be a field, in our case with characteristic zero. A symmetric space of degree $k$ over $L$ is a pair $(V,\theta)$ where $V$ is a vector space over $L$ of dimension $n$ and $\theta:V^k \rightarrow L$ is a symmetric multilinear map. This is equivalent to $\mathrm{Sym}^k(V^*)$, the $k$th symmetric power of $V^*$, which is naturally identified with the space of homogeneous polynomials of $n$ variables and degree $k$ (see standard texts such as [@DF03],[@Har92]). To describe explicitly the identification between forms and symmetric spaces, write a homogeneous polynomial $F\in L[X_1,...,X_n]$ in the following symmetric form, $$F(X_1,...,X_n) = \sum_{1\leq i_1,...,i_k \leq n} c_{i_1\cdots i_k}X_{i_1}\cdots X_{i_k},$$ where $c_{i_1\cdots i_k} = c_{\sigma(i_1)\cdots \sigma(i_k)}$ for all $\sigma \in S_k$ and $S_k$ is the symmetric group on $\{1,...,k\}$. Let $V$ be an $n$-dimensional vector space over the field $L$ (we may view $V$ as $L^n$), and let $v_1,...,v_n$ be a basis of $V$. Define $\theta(v_{i_1},...,v_{i_k})= c_{i_1\cdots i_k}.$ Then for all $x_1,...,x_n\in L$, we have the relation $$F(x_1,...,x_n) = \theta (\sum_{i=1}^n v_i x_i, ..., \sum_{i=1}^n v_i x_i).$$ By definition, a symmetric space $(V,\theta)$ is nondegenerate if $\theta(v,v_2,...,v_k)=0$ for all $v_2,...,v_k\in V$ implies $v=0$. 
Further, a symmetric space is decomposable if there exist nonzero symmetric spaces $(U,\phi)$ and $(W,\psi)$ such that $(V,\theta) = (U,\phi) \oplus (W,\psi)$. Harrison showed that the decomposability of a symmetric space $(V,\theta)$ is characterized by its center $Z(V,\theta)$, which is defined as $$Z(V,\theta) = \{f \in \mathrm{End}_L(V): \theta(f(v_1),v_2,\ldots,v_k) = \theta(v_1,f(v_2),v_3,\ldots,v_k) \text{ for all } v_1,\ldots,v_k \in V\}.$$ Note that this also implies $\theta(v_1,\ldots,f(v_i),\ldots,v_j,\ldots,v_k) = \theta(v_1,\ldots,v_i,\ldots,f(v_j),\ldots,v_k)$ for any $i,j$, since $\theta$ is symmetric. Precisely, Harrison proves in [@Har75 Proposition 4.1]: let $(V,\theta)$ be a nondegenerate symmetric space of degree $k\geq 3$ over a field $L$ of characteristic zero. Then $(V,\theta)$ is indecomposable if and only if $Z(V,\theta)$ has no nontrivial idempotents. (An idempotent in a ring is an element $a$ such that $a^2=a$. The trivial idempotents are 0 and 1, respectively the additive identity and the multiplicative identity of the ring.) To translate this into the language of homogeneous polynomials, we define the center of a form $F$, following [@HarPar88], as $$Z(F) = \{A\in M_{n}(L) : A^TH_F= H_FA\}$$ where $H_F$ denotes the Hessian matrix $(\partial^2 F / \partial X_i \partial X_j)_{1\leq i, j \leq n}$. Then the center, and the decomposability, of a form coincide with those of its associated symmetric space. Precisely, let $F\in L[X_1,...,X_n]$ be a form of degree $k\geq 3$ and let $(V,\theta)$ be the symmetric space associated to $F$. Then it can be shown that $Z(F)\cong Z(V,\theta)$, and $F$ is indecomposable as a form if and only if $(V,\theta)$ is indecomposable as a symmetric space. Thus to show a form is indecomposable, it is equivalent to show that its center has no nontrivial idempotents. This is the criterion we will exploit.
It is convenient to note that over a field $L$ (with $\mathrm{char} L \nmid k$), a form is called central if $Z(F) \simeq L.$ Harrison showed that if a form is central (over $L$), then it is absolutely indecomposable (that is, indecomposable over any field extension of $L$). In our case, to show $F$ is indecomposable over $\mathbb{Q}$ it suffices to show that $Z(F)\simeq \mathbb{Q},$ so the form has the even stronger property of being central. We remarked earlier that indecomposable forms are generic. This is implied for $(n,k) \neq (2,3)$ over $\mathbb{C}$ by [@HLYZ21 Thm. 3.2] (which shows the set of central forms is open and dense in the moduli space over $\mathbb{C}$; this proof can be adapted to hold over $\mathbb{Q}$). It is also shown directly for $n \geq 3, k \geq 3$ over $\mathbb{C}$ by [@Wan15 Ex. 4.3, 4.4, Cor. 6.1]. The case $(n,k)=(2,3)$ is more complicated, and we defer its study to a different work. ## The examples are indecomposable For $k \geq 3$, $n \geq 2$ and $2 \leq r \leq n,$ the example form $P_k$ defined above is indecomposable; we will prove this next by showing $Z(P_k)$ has no nontrivial idempotents. Precisely, when $(n,k) \neq (2,3)$, we show that $Z(P_k)\simeq \mathbb{Q};$ for $(n,k)=(2,3)$, the example $P_3$ is also indecomposable, but with a different center. (Note that if $r=1$, the form is decomposable since $X_1$ only appears in a diagonal term. Thus we need only consider $2 \leq r \leq n$.) We present the full proof for all odd $k \geq 5$, $n \geq 2$ and $2 \leq r \leq n$; the proof for $k=3$ and for all even $k \geq 4$ is fundamentally analogous. Fix $P=P_k$ to be the example form defined above. Let $H_P$ denote the Hessian of $P$. 
Then $H_P/(k-1)$ is the $n\times n$ matrix $$\begin{aligned} \begin{pmatrix} kX_1^{k-2} &X_2^{k-2} &\cdots & X_r^{k-2}&0&\cdots&0\\ X_2^{k-2} &kX_2^{k-2}+Q_2& \cdots& X_r^{k-2}&X_{r+1}^{k-2}&\cdots &X_n^{k-2}\\ \vdots &&\ddots&&&&\vdots\\ X_r^{k-2}&X_r^{k-2}&\cdots &kX_r^{k-2}+Q_r&X_{r+1}^{k-2}&\cdots&X_n^{k-2}\\ 0&X_{r+1}^{k-2}&\cdots&X_{r+1}^{k-2}&kX_{r+1}^{k-2}+Q_{r+1}&\cdots&X_{n}^{k-2}\\ \vdots&&&&&\ddots&\vdots\\ 0& X_n^{k-2}&\cdots&X_n^{k-2}&X_n^{k-2}&\cdots&kX_n^{k-2}+Q_n \end{pmatrix} \end{aligned}$$ where $Q_\ell = (k-2)\sum_{1\leq i < \ell}X_iX_\ell^{k-3}$ for $\ell\leq r$ and $Q_\ell = (k-2)\sum_{2\leq i < \ell}X_iX_\ell^{k-3}$ for $\ell> r$. Note that each $Q_\ell \not\equiv 0$. For $k\geq 5$, each $Q_\ell$ consists of monomials that differ from each other and from the entries of $H_P/(k-1)$ that are off the diagonal, and so the vanishing of any linear combination $c_{i_1}Q_{i_1}+\cdots + c_{i_m}Q_{i_m}\equiv 0$ would imply $c_{i_j}=0$ for all $1\leq j \leq m$. Let $A=(a_{ij})\in M_{n}(\mathbb{Q})$ and write $H_P/(k-1)=(h_{ij})$. Let $B_A$ denote $(A^TH_P-H_PA)/(k-1)$. Then $A\in Z(P)$ if and only if $B_A=0$. Note that a priori we have $\{cI_n:c\in\mathbb{Q}\}\subseteq Z(P)$. The assumption that all entries of $B_A$ are 0 implies constraints on the entries of $A$ that show the reverse inclusion $Z(P)\subseteq \{cI_n:c\in\mathbb{Q}\}$, from which we deduce the equality $Z(P) = \{cI_n:c\in \mathbb{Q}\} \simeq \mathbb{Q}$. Write $B_A=(b_{ij})$ so that $b_{ij} = \sum_{\ell=1}^n (a_{\ell i}h_{\ell j} - h_{i\ell}a_{\ell j})$. Note that since $H_P$ is symmetric, $b_{ii}=0$ and $B_A^T=-B_A$. Hence it suffices to consider the $(n^2-n)/2$ entries above the diagonal i.e. $b_{ij}$ with $i<j$. Each entry $b_{ij}$ is a polynomial, so it is $\equiv 0$ if and only if the coefficient of each term (after regrouping) is zero. We split the $(n^2-n)/2$ entries of $b_{ij}$ with $i<j$ into the following five cases of $(i,j)$, based on the shapes of the rows and columns: 1. 
$(1,j)$ with $2\leq j \leq r$, 2. $(i,j)$ with $2\leq i<j\leq r$, 3. $(1,j)$ with $r+1\leq j \leq n$, 4. $(i,j)$ with $r+1\leq j \leq n$ and $2\leq i \leq r$, and 5. $(i,j)$ with $r+1\leq i< j \leq n$. It suffices to learn from the assumption that $b_{ij} \equiv 0$ in the cases (1),(3),(4). For case (1) with $(1,j)$ with $2\leq j \leq r$, we compute $$\begin{gathered} b_{1j} = -a_{1j}kX_1^{k-2} -\sum_{\substack{\ell=2}}^{j-1} a_{\ell j}X_\ell^{k-2}+(\sum_{\ell=1}^{j-1} a_{\ell 1} + a_{j1}k- a_{jj})X_j^{k-2}\\ + \sum_{\ell=j+1}^r( a_{\ell 1}-a_{\ell j})X_\ell^{k-2} + \sum_{\ell=r+1}^n a_{\ell 1}X_\ell^{k-2}+a_{j1}Q_j. \end{gathered}$$ The assumption $b_{1j} \equiv 0$ for all $2\leq j \leq r$ can be seen to imply that $$\begin{aligned} a_{ii}&=a_{11}, \qquad 2 \leq i \leq r,\label{eqn:cond_1_1}\\ a_{i1}&=0, \qquad 2 \leq i \leq n \label{eqn:cond_1_3},\\ a_{ij}&=0, \qquad 1 \leq i \neq j\leq r.\nonumber \end{aligned}$$ These give the desired conditions on the top left $r\times r$ block and the first column of $A$. If $r=n$, the proof is complete; otherwise for $r<n$ we continue, as cases (3), (4) are non-vacuous. For case (3), $(1,j)$ with $r+1\leq j \leq n$, we compute $$b_{1j} =- a_{1j}kX_1^{k-2} -\sum_{\substack{\ell=2}}^{r} a_{\ell j}X_\ell^{k-2} +(\sum_{\ell=2}^{j-1} a_{\ell 1} + a_{j1}k)X_j^{k-2} +\sum_{\ell=j+1}^n a_{\ell 1}X_\ell^{k-2} +a_{j1}Q_j.$$ Thus the assumption $b_{1j}\equiv 0$ gives in particular the new condition $$a_{i j}=0, \qquad 1 \leq i \leq r, \quad r+1 \leq j \leq n.$$ This is the desired result for the top right $r\times (n-r)$ block of $A$. For case (4), $(i,j)$ with $r+1\leq j \leq n$ and $2\leq i \leq r$, we compute $$\begin{gathered} b_{ij} =-(\sum_{\ell=1}^{i-1}a_{\ell j} + a_{ij}k)X_i^{k-2} -\sum_{\ell=i+1}^{j-1} a_{\ell j}X_\ell^{k-2} +(\sum_{\ell=2}^{j-1} a_{\ell i}+a_{ji}k-a_{jj})X_j^{k-2} \\ +\sum_{\ell=j+1}^n (a_{ \ell i}-a_{\ell j})X_\ell^{k-2} -a_{ij}Q_i + a_{ji}Q_j. 
\end{gathered}$$ The assumption $b_{ij} \equiv 0$ for all $r+1\leq j \leq n$ and $2\leq i \leq r$ implies that $$\begin{aligned} a_{ii} &=a_{22}, \qquad 2 \leq i \leq n,\\ a_{ij} &= 0 , \qquad r+1\leq i \leq n, \quad 2 \leq j \leq r,\\ a_{ij} &= 0 , \qquad 2 \leq i \leq n, \quad r+1\leq j \leq n, \quad i\neq j. \end{aligned}$$ The second condition combined with ([\[eqn:cond_1\_3\]](#eqn:cond_1_3){reference-type="ref" reference="eqn:cond_1_3"}) confirms that the entries in the lower-left $(n-r)\times r$ block of $A$ are zeroes, while the third condition finalizes that the off-diagonal entries of the lower-right $(n-r)\times (n-r)$ block of $A$ are zeroes. Finally, combined with ([\[eqn:cond_1\_1\]](#eqn:cond_1_1){reference-type="ref" reference="eqn:cond_1_1"}), the first condition shows that all diagonal entries in the lower-right block are also equal to $a_{11}$. Thus $A= a_{11}I_n$, and this completes the proof that $Z(P)\subseteq \{cI_n:c\in \mathbb{Q}\}$. The above computations focused on the case of $k \geq 5$ odd. If $k\geq 4$ is even, the argument follows exactly the same structure; the assumption that $b_{ij} \equiv 0$ for indices in case (1) proves that the top left $r \times r$ block has the desired structure, and (if $r<n$) indices from cases (3) and (4) complete the information about the remaining matrix. If $k=3$, the proof is more complicated, because the polynomials $Q_\ell$ defined above now must be grouped with various other terms (the monomials appearing are no longer all distinct). Nevertheless, if $n \geq 3$, the assumption $b_{ij} \equiv 0$ in the cases (1)--(4) shows that $Z(P_3)\simeq \mathbb{Q}$. If $(n,k)=(2,3)$, we need only consider rank $r=2$, and only the index case (1) is non-vacuous.
From $b_{ij}\equiv 0$ in case (1) we conclude $3a_{12}-a_{21}=0$ and $a_{11}-a_{22}+3a_{21}=0$ so that $$Z(P_3) = \left\{\begin{pmatrix} \alpha- 9\beta&\beta\\ 3\beta&\alpha \end{pmatrix}: \alpha,\beta\in \mathbb{Q} \right\}.$$ Any idempotent $A \in Z(P_3)$ must satisfy $A^2=A$, by the definition of an idempotent. If $\beta=0$, this forces $\alpha=0$ or $1$, corresponding to $A$ being one of the trivial idempotents (the zero matrix or the identity matrix). If $\beta\neq 0$, the identity $A^2=A$ produces three independent equations in $\alpha,\beta$, and in particular, inspection of these equations shows that $\alpha$ must satisfy a quadratic equation with no rational roots. Thus $Z(P_3)$ contains no nontrivial idempotents over $\mathbb{Q}$. In conclusion, $P_3$ is not central over $\mathbb{Q}$ but it is indecomposable over $\mathbb{Q}$. (It is incidentally decomposable over $\overline{\mathbb{Q}}$, since its center contains nontrivial idempotents over $\overline{\mathbb{Q}}$.) ## Codimension of the class of forms {#sec_codim} Let $\mathcal{M}$ denote the moduli space of forms $P\in \mathbb{Q}[X_1,...,X_n]$ of degree $k \geq 2$. Then $\dim \mathcal{M}= \binom{n+k-1}{n-1}$. To see this by a "stars and bars" argument, note that $\dim \mathcal{M}$ is the number of monomials of degree $k$ in $n$ variables. Each such monomial can be represented as a configuration of $k$ stars and $n-1$ bars (e.g. $X_1X_2^2$ when $k=3$ and $n=4$ would be represented by the configuration $*|**||.$) The number of such configurations is equal to the number of ways to choose the location of the $n-1$ bars (among $k+n-1$ possible places), and this is the binomial coefficient $\binom{n+k-1}{n-1}$. 
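As a quick computational sanity check on this count (not part of the argument, and with helper names of our own choosing), one can enumerate the degree-$k$ monomials directly and compare against the binomial coefficient:

```python
from itertools import combinations_with_replacement
from math import comb

def num_monomials(n, k):
    # a monomial of degree k in n variables is a multiset of size k
    # drawn from {X_1, ..., X_n}
    return sum(1 for _ in combinations_with_replacement(range(n), k))

# dim M = C(n+k-1, n-1), the stars-and-bars count
for n in range(2, 7):
    for k in range(2, 7):
        assert num_monomials(n, k) == comb(n + k - 1, n - 1)
```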
Let $\mathscr{D}\subseteq \mathcal{M}$ denote the set of Dwork-regular forms, and let $\mathscr{P}_r$ denote the set of forms of intertwining rank $\leq r$, for $1\leq r \leq n$, so that $\mathscr{P}_1\subseteq \mathscr{P}_2 \subseteq \cdots \subseteq \mathscr{P}_n = \mathcal{M}.$ (We remark that the set of forms that have intertwining rank precisely $r$, that is, the set $\tilde{\mathscr{P}}_r=\mathscr{P}_r\setminus\mathscr{P}_{r-1}$, has dimension equal to that of $\mathscr{P}_r$, since it can be shown that $\dim \mathscr{P}_{r-1}<\dim \mathscr{P}_r$.) For each $1 \leq r <n,$ Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} proves a nontrivial result for all real symbols $P$ with leading form $P_k \in \mathscr{D}\cap\mathscr{P}_r.$ Here we compare the codimension of $\mathscr{D}\cap\mathscr{P}_{n-1}$ in $\mathcal{M}$ (the largest class to which our theorem applies) to the codimension of $\mathscr{D}\cap\mathscr{P}_{1}$ in $\mathcal{M}$ (equivalent to the largest class to which the previous works [@ACP23; @EPV22a] applied); we focus on a brief summary, since a more complete study of such forms (and their $\mathrm{GL}_n(\mathbb{Q})$-orbits) will be given in other work. Fix $1 \leq r \leq n$. First, $\mathscr{D}$ is open in $\mathcal{M}$, and it can be shown that $\mathscr{P}_r$ is a finite union of (irreducible) affine varieties in $\mathcal{M}$, say $P_{r,i}$ for $i=1,\ldots, N$, so that $\mathscr{P}_r=\cup_{i=1}^N P_{r,i}$. Then $\mathscr{D}\cap\mathscr{P}_r$ is a quasi-affine variety, and consequently by [@Har77 Prop 1.10], $\dim \mathscr{D}\cap\mathscr{P}_r = \dim \overline{\mathscr{D}\cap\mathscr{P}_r}$. In general, if $U\subseteq \mathbb{A}^m$ is an open set and $X=\cup_{i=1}^N X_i\subseteq \mathbb{A}^m$ is a union of irreducible affine varieties $X_i$, then as long as $U\cap X_i\neq \emptyset$ for every $i$, it follows that $\overline{U\cap X} = X$. 
For each $i$, it can be shown that $\mathscr{D}\cap P_{r,i}$ is nonempty, by precisely the examples stated above (up to re-ordering coordinates). Thus we conclude that $\overline{\mathscr{D}\cap\mathscr{P}_r} = \mathscr{P}_r$, so that it suffices to compute $\dim \mathscr{P}_r$ in $\mathcal{M}$. We focus on the cases of $\mathscr{P}_1$ and $\mathscr{P}_{n-1};$ it is easier to count the codimension. By symmetry considerations, the dimension of $\mathscr{P}_{n-1}$ is the dimension of the class of forms for which $X_1$ does not intertwine with $X_n$, or equivalently all those forms that do not contain monomials with the factor $X_1X_n.$ Equivalently, all coefficients of terms of the form $X_1X_nQ_{k-2}(X_1,\ldots,X_n)$ with $Q_{k-2}$ of degree $k-2$, must be zero. This constrains the coefficients of $\binom{n+(k-2)-1}{n-1}$ monomials, so that $$\mathrm{codim}(\mathscr{P}_{n-1}) = \binom{n+k-3}{n-1}.$$ On the other hand, by symmetry considerations, the dimension of $\mathscr{P}_1$ is the dimension of the class of forms $cX_1^k+Q_k(X_2,...,X_n)$, where $Q_k$ has degree $k$; the dimension of such polynomials is $\binom{n+k-2}{n-2}+1$. Equivalently, $$\mathrm{codim}(\mathscr{P}_{1}) = \binom{n+k-1}{n-1} - \binom{n+k-2}{n-2}-1 = \mathrm{codim}(\mathscr{P}_{n-1}) + \binom{n+k-3}{n-2}-1.$$ Thus $\mathscr{P}_{n-1}$ is a larger class of forms, with $\dim (\mathscr{P}_{n-1}) - \dim (\mathscr{P}_{1}) =\binom{n+k-3}{n-2}-1$ behaving asymptotically like $\sim c_nk^{n-2}$ if $n$ is fixed and $k \rightarrow\infty$, or like $\sim c_kn^{k-1}$ if $k$ is fixed and $n \rightarrow\infty.$ # Acknowledgements {#acknowledgements .unnumbered} The first author has been partially supported during portions of this research by NSF DMS-2200470. 
The second author has been partially supported during portions of this research by NSF DMS-2200470, NSF CAREER grant DMS-1652173 and a Joan and Joseph Birman Fellowship, and thanks the Hausdorff Center for Mathematics for productive research visits as a Bonn Research Chair. We thank the anonymous referees for helpful questions and comments, and the second author thanks Matilde Lalín for helpful conversations.
arxiv_math
{ "id": "2309.05872", "title": "Generalizations of the Schr\\\"odinger maximal operator: building\n arithmetic counterexamples", "authors": "Rena Chu, Lillian B. Pierce", "categories": "math.CA math.AP math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove versions of Goldbach conjectures for Gaussian Primes in arbitrary sectors. Fix an interval $\omega \subset \mathbb{T}$. There is an integer $N_\omega$, so that every odd integer $n$ with $\arg (n) \in \omega$ and $N(n)>N_\omega$ is a sum of three Gaussian primes $n=p_1+p_2+p_3$, with $\arg (p_j) \in \omega$, for $j=1,2,3$. A density version of the binary Goldbach conjecture is proved. Both follow from a High/Low decomposition of the Fourier transform of averages over Gaussian primes, defined as follows. Let $\Lambda(n)$ be the Von Mangoldt function for the Gaussian integers and consider the norm function $N:\mathbb Z[i]\rightarrow \mathbb Z^+$, $\alpha + i \beta \mapsto \alpha ^2 + \beta ^2$. Define the averages $$A_Nf(x)=\frac{1}{N}\sum_{N(n)<N}\Lambda(n)f(x-n).$$ Our decomposition also proves the $\ell ^p$ improving estimate $$\lVert A_N f \rVert _{\ell^{p'}}\ll N ^{1/p'- 1/p} \lVert f\rVert _{\ell^p}, \qquad 1<p\leq 2.$$ address: - School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA - Department of Mathematics, King's College, London, WC2R 2LS, UK - School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA - School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA - School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA author: - Christina Giannitsi - Ben Krause - Michael T. Lacey - Hamed Mousavi - Yaghoub Rahimi title: "Averages over the Gaussian Primes: Goldbach's Conjecture and Improving Estimates" --- # Introduction The principal goal of this paper is to establish a density version of the strong Goldbach conjecture for Gaussian integers, restricted to sectors in the complex plane. Briefly, for integers $n \in \mathbb Z[i]$ we write $N(a+ib) = a^2 + b^2$, and if we express $n = \sqrt {N(n)}e^ { i \theta }$, we set $\arg (n) = \theta$; the units are $\pm1$ and $\pm i$. 
The field $\mathbb Z[i]$ is a Euclidean domain, inside of which an integer $p= a+ib$ is a prime if it has no non-unit integer factor $q$ with $1< N(q) < N(p)$. The Gaussian primes can take one of two forms. First, if $a$ and $b$ are non-zero, then $N(p) = a^2 + b^2$ is prime. Second, if $a$ or $b$ is zero, then $p$ is a unit times a prime in $\mathbb{N}$ congruent to $3 \mod 4.$ As in the case of $\mathbb Z$, the Goldbach conjecture over $\mathbb Z[i]$ requires a notion of evenness: a Gaussian integer $x = a+ib$ is *even* if and only if $N (x) = a^2 + b^2$ is even. Evenness is equivalent to either condition below. 1. The integer $a+b$ is even. 2. The Gaussian integer $1+i$ divides $x$. An integer which is not even is *odd*. The main results are of Goldbach type. We show that there are very few even integers which are *not* a sum of two primes. And we do so even with significant restrictions on the arguments of the integers involved. Similarly, all sufficiently large odd integers are a sum of three primes. The only prior results in this direction we could find in the literature correspond to the entire complex plane. Below, we allow arbitrary sectors. **Theorem 1**. *Fix an integer $B>10$ and an interval $\omega\subset \mathbb{T}$. There exists an $N_{\omega, B} >0$ such that for all integers $N> N _{\omega, B}$, the following holds.* 1. *Every odd integer $n$ with $N(n)> N_{\omega , B}$ and $\arg(n)\in \omega$ is a sum of three Gaussian primes $n= p_1 + p_2+p_3$, with $\arg(p_j) \in \omega$ for $j=1,2,3$.* 2. *We have $\lvert E_2(\omega , N) \rvert \ll \frac{ N}{\log(N)^B}$, where $E_2(\omega ,N)$ is the set of *even* Gaussian integers with $N(n)<N$ and $\arg(n) \in \omega$, which *cannot* be represented as a sum of two Gaussian primes $n= p_1 + p_2$, with $\arg(p_j) \in \omega$ for $j=1,2$.* We can further estimate the number of representations in both the binary and ternary cases. 
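Both the evenness equivalences and the two-form description of the Gaussian primes above are easy to sanity-check by brute force on a small range. The following sketch (helper names are ours, standard library only) tests irreducibility directly from the definition and compares it with the classification:

```python
from math import isqrt

def is_rational_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def divides(q, p):
    """q | p in Z[i]; compute p * conj(q) and test divisibility by N(q)."""
    (q1, q2), (p1, p2) = q, p
    Nq = q1 * q1 + q2 * q2
    return (p1 * q1 + p2 * q2) % Nq == 0 and (p2 * q1 - p1 * q2) % Nq == 0

def is_gaussian_prime(a, b):
    """Brute force: no factor q with 1 < N(q) < N(p)."""
    Np = a * a + b * b
    if Np <= 1:
        return False
    m = isqrt(Np)
    return not any(
        divides((q1, q2), (a, b))
        for q1 in range(-m, m + 1)
        for q2 in range(-m, m + 1)
        if 1 < q1 * q1 + q2 * q2 < Np
    )

for a in range(-7, 8):
    for b in range(-7, 8):
        # evenness: N(x) even <=> a+b even <=> (1+i) | x
        even = (a * a + b * b) % 2 == 0
        assert even == ((a + b) % 2 == 0) == divides((1, 1), (a, b))
        # the two forms of Gaussian primes
        if a != 0 and b != 0:
            expected = is_rational_prime(a * a + b * b)
        else:
            expected = is_rational_prime(abs(a + b)) and abs(a + b) % 4 == 3
        assert is_gaussian_prime(a, b) == expected
```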
Our proof derives from a detailed study of the analytic properties of averages over the Gaussian primes. In particular, let $f:\mathbb Z[i]\rightarrow \mathbb{C}$ be an arithmetic function, and for $x\in \mathbb Z[i]$ define $$\begin{aligned} A_Nf(x)=\frac{1}{N}\sum_{N(n)<N} \Lambda(n)f(x-n),\end{aligned}$$ where the von Mangoldt function on $\mathbb Z[i]$ is defined by $$\begin{aligned} \Lambda(n)=\begin{cases} \log(N(\rho)) & \text{ if } n=\rho^{\alpha} \text{ and } \rho\in \mathbb Z[i] \text{ is irreducible} \\ 0 & \text{otherwise.} \end{cases}\end{aligned}$$ We prove an improving estimate for the averages above: the averages of $\ell^1 (\mathbb Z^2)$ functions are nearly bounded. **Theorem 1**. *For all $N$ and $1< p \leq 2$, we have $$\begin{aligned} \langle A_N f,g\rangle \ll N ^{1- 2/p} \lVert f \rVert_p \lVert g \rVert_p. \end{aligned}$$ Equivalently, for all $1 < p \leq 2$, whenever $f$ is supported on a cube, $Q$, of side length $\sqrt{N}$, $$\begin{aligned} \frac{ \| A_N f \|_{p'}}{|Q|^{1/p'}} \ll \frac{ \| f \|_{p}}{|Q|^{1/p}}. \end{aligned}$$* To prove this, we need to develop many expected results, including Ramanujan-type identities and Vinogradov-type estimates for the Fourier transform of $A_N$. Not being able to identify clear-cut references for many of these estimates, we develop them below. We then develop a High/Low decomposition of the Fourier transform of $A_N$. A particular innovation is our approach to the major arc estimate in Lemma [Lemma 1](#majorarcestimate){reference-type="ref" reference="majorarcestimate"}, which is typically proved by Abel summation. But that method is poorly adapted to the question at hand. We develop a different method. Also noteworthy is that the Low term is defined in terms of smooth numbers, a technique used in [@210110401L]. Smoothness facilitates the proof of Lemma [Lemma 1](#l:HLConstantlemma){reference-type="ref" reference="l:HLConstantlemma"}. 
With this decomposition in hand, the deduction of the Theorems is relatively straightforward. Indeed, we develop this for the more specialized averages over sectors of the complex plane. For an interval $\omega \subset \mathbb{T}$, we extend the definition of the averages to this setting: $$\begin{aligned} \label{e:omega} A_N ^{\omega }f(x)=\frac{2 \pi } { \lvert \omega \rvert N} \sum_{\substack{N(n)<N \\ \arg(n) \in \omega }} \Lambda(n)f(x-n). \end{aligned}$$ To address the binary Goldbach question, note that $$A_N ^{\omega } \ast A_N ^{\omega } (m) = \frac{4 \pi^2 } { \lvert \omega \rvert ^2 N ^2 } \sum _{ \substack{ N(n_1), N(n_2) < N \\ \arg (n_1) , \arg (n_2) \in \omega \\ n_1 + n_2 =m}} \Lambda (n_1) \Lambda (n_2).$$ The High/Low decomposition for the $A_N ^{\omega }$ can be leveraged to study the sum above. This is the path we follow to prove Theorem [Theorem 1](#t:Goldbach){reference-type="ref" reference="t:Goldbach"} in §[8](#s:Goldbach){reference-type="ref" reference="s:Goldbach"}. Our motivations come from number theory and analysis. The binary (and higher order) Goldbach conjectures have been addressed in the number field setting. Mitsui [@MR136590]\*§11,12 addresses the binary and higher order cases of Goldbach's Conjecture in an arbitrary number field. Mitsui finds that all sufficiently large totally positive odd integers are the sum of odd totally positive primes. He does not address the sector case. Much earlier, Rademacher [@MR3069422; @MR3069434] studied representations of totally positive integers in a class of quadratic extensions of $\mathbb Z$. These papers assume the Generalized Riemann Hypothesis. Holben and Jordan [@MR238760] raised conjectures in the spirit of our Theorem. Their Conjecture F states: **Conjecture 1**. Each even Gaussian integer $n$ with $N(n)>10$ is the sum of two primes $n=p_1+p_2$, with $\lvert \arg(n)- \arg(p_j) \rvert \leq \pi /6$, for $j=1,2$. 
As far as we know, there is no prior result in this direction, for either the binary or the ternary version of the Goldbach conjecture. But, we also note that our Theorem [Theorem 1](#t:Goldbach){reference-type="ref" reference="t:Goldbach"} provides a density version of a much stronger result. For all $\delta >0$, most even Gaussian integers $n$ with $N(n)> N_\delta$ are the sum of two primes $n=p_1+p_2$, with $\lvert \arg(n)- \arg(p_j) \rvert \leq \delta$, for $j=1,2$. Indeed, partition the unit circle into intervals of length $\delta$, and apply our Theorem to each interval. The study of metric properties of the uniform distribution of $p \alpha$, for irrational $\alpha \in \mathbb C$, and Gaussian prime $p$, is well developed by Harman [@MR2331072]\*Chapters 11, [@MR4045111], with effective results even in small sectors. See the extensions to certain quadratic number fields by Baier and Technau [@MR4206429]. The latter paper also addresses metric questions along lines in the complex plane. On the analytical side, our improving estimate above is part of Discrete Harmonic Analysis, a subject invented by Bourgain [@MR1019960]. The recent textbook of one of us [@MR4512201] serves as a comprehensive summary of the subject. Averages over the primes have been studied extensively, beginning with the work of Bourgain and Wierdl [@MR995574; @BP1], and continued by several authors [@MR4434278; @MR3370012; @MR3299842; @MR3646766; @MR4072599]. # Notation and Preliminaries Throughout the paper, we use $\| \beta\|$ to denote the distance of $\beta\in \mathbb{C}$ to the closest point $n\in \mathbb Z[i]$, and for $a,b,c \in \mathbb Z[i]$ we write $(a,b)=c$ to indicate that $c$ is the greatest (in norm) common divisor of $a$ and $b$, up to a unit. We will use the $\ell^\infty$ balls $B_\infty(r) = \{ x = a+bi \in \mathbb Z[i] \, : \, -r \leq a,b < r \}$ and $B_\infty(c,r) = B_{\infty}(r) +c$. 
Notice that there is a small departure from the traditional notation of a ball with respect to the $\infty$-norm, in the sense that we are only including the lowest endpoint; however, this variation is useful, as it allows us to obtain a tessellation of the complex plane. It is obvious that translating and rotating the grid does not affect the tessellation. For $q = |q| e^{i \theta_q} \in \mathbb Z[i]$, we are interested in the grid formed by the squares $$B_q : = e^{i \theta_q} B_\infty \left((1+i)\frac{|q|}{2}, \frac{|q|}{2} \right)$$ where we have rotated the squares by the argument of $q$, so that their sides are parallel and orthogonal to $q$, respectively; see Figures [\[square_Sq\]](#square_Sq){reference-type="ref" reference="square_Sq"} and [\[square_Sq_2\]](#square_Sq_2){reference-type="ref" reference="square_Sq_2"}. This particular decomposition of lattice points yields a complete set of *distinct* remainders modulo $q$ in $\mathbb Z[i]$ and simplifies our calculations of complex exponentials. Indeed, it is known that for $q \in \mathbb Z[i]$, there are $N(q)$ distinct remainders modulo $q$, which agrees with the number of points inside $B_q$. It is straightforward to verify that no two distinct points in $B_q$ are equivalent modulo $q$, which then proves our claim. We are also going to need the following, more geometric, description of our boxes: $$\label{eq-geometricsquaredefinition} B_q = \{r \in \mathbb Z[i] \mid 0 \leq \langle r , q\rangle < N(q) \ \ \text{and} \ \ 0 \leq \langle r , iq\rangle < N(q) \}.$$ Let $e(x) := e^{2 \pi i x}$. 
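The claim that $B_q$ contains exactly $N(q)$ points, no two of which are congruent modulo $q$, can be illustrated computationally from the geometric description [\[eq-geometricsquaredefinition\]](#eq-geometricsquaredefinition){reference-type="eqref" reference="eq-geometricsquaredefinition"}; the sketch below (helper names are ours) represents $r=(a,b)$ and uses $\langle r,q\rangle = aq_1+bq_2$, $\langle r, iq\rangle = bq_1 - aq_2$:

```python
def box_Bq(q1, q2):
    """Lattice points r=(a,b) with 0 <= <r,q> < N(q), 0 <= <r,iq> < N(q)."""
    Nq = q1 * q1 + q2 * q2
    R = abs(q1) + abs(q2)  # crude bound: the box lies within this range
    return [
        (a, b)
        for a in range(-2 * R, 2 * R + 1)
        for b in range(-2 * R, 2 * R + 1)
        if 0 <= a * q1 + b * q2 < Nq and 0 <= b * q1 - a * q2 < Nq
    ]

def congruent_mod_q(r, s, q1, q2):
    """r == s mod q in Z[i]: (r-s)*conj(q) divisible by N(q) in each coordinate."""
    Nq = q1 * q1 + q2 * q2
    da, db = r[0] - s[0], r[1] - s[1]
    return (da * q1 + db * q2) % Nq == 0 and (db * q1 - da * q2) % Nq == 0

for (q1, q2) in [(2, 1), (3, 2), (1, 3)]:
    pts = box_Bq(q1, q2)
    assert len(pts) == q1 * q1 + q2 * q2          # exactly N(q) points
    assert not any(                                # pairwise inequivalent mod q
        congruent_mod_q(pts[i], pts[j], q1, q2)
        for i in range(len(pts))
        for j in range(i + 1, len(pts))
    )
```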
For a function $f:\mathbb Z[i] \to \mathbb{C}$, we can define the discrete Fourier Transform over the box $B_q$ as $$\begin{aligned} \label{e:FDq} \mathcal F_{q} f \, (x) := \sum_{n\in B_q} f(n) e\bigl(-\langle x,\frac{n}{q}\rangle\bigr),\end{aligned}$$ and the corresponding inverse discrete Fourier transform as $$\begin{aligned} \mathcal F_{q}^{-1} f \, (n) := \frac{1}{N(q)}\sum_{x\in B_q} f(x) e\bigl(\langle n,\frac{x}{\bar{q}}\rangle\bigr).\end{aligned}$$ The Euler totient function $\phi$ for Gaussian integers counts the number of points in $B_z$ that are coprime to $z$. It is equal to $$\begin{aligned} \phi(z) & = N(z) \prod_{\substack{p \divides z \\ p : \text{ prime in } \mathbb Z[i]}} \left( 1 - \frac{1}{N(p)} \right). \end{aligned}$$ Clearly, $\phi (z) = \phi ( \bar z)$. Thus, $\lvert \mathbb{A}_q \rvert = \phi (q)$, where $$\mathbb{A}_q : = \{ r \in B_q \, : \, (r,q)=1 \}.$$ We also need the arithmetic function $r_2(n)$, called the *sum of squares function*, which counts the number of representations of an integer $n$ as a sum of two squares. Equivalently, $r_2(n)$ is the number of $m\in \mathbb Z[i]$ such that $N(m)=n$. It is known that $r_2(n)=O(1)$ in an average sense, namely $$\begin{aligned} \label{sumofsqestimate} \sum_{n<N}r_2(n)&\simeq N.\end{aligned}$$ Define the Möbius function as follows: $$\begin{aligned} \label{e:Mobius} \mu(n)=\begin{cases} (-1)^k & \text{ if } n=\epsilon \rho_1\rho_2\cdots \rho_k\\ 1 & \text{ if } n=\epsilon\\ 0 & \text{otherwise,} \end{cases}\end{aligned}$$ where $\epsilon\in\{\pm 1,\pm i\}$ are units and $\rho_i$ are distinct prime elements of $\mathbb Z[i]$. We define the classical form of the average over a sector. For an interval $\omega \subset \mathbb{T}$, set $$\label{e:MN} M_N ^{\omega } = M_N = \frac{\lvert\omega\rvert} { 2 \pi N} \sum _{ \substack { N(n) < N \\ \arg(n)\in\omega }} \delta _n ,$$ where $\delta _n$ is the Dirac measure at $n$. 
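The average bound [\[sumofsqestimate\]](#sumofsqestimate){reference-type="eqref" reference="sumofsqestimate"} is the Gauss circle problem in disguise: $\sum_{n<N}r_2(n)$ counts the lattice points in a disk of area $\pi N$. A quick numerical illustration (our own helper, standard library only):

```python
from math import isqrt, pi

def r2(n):
    """Number of m in Z[i] with N(m) = n, i.e. of pairs (a,b) with a^2+b^2=n."""
    count = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2:
            count += 1 if b == 0 else 2  # count b and -b
    return count

assert r2(1) == 4 and r2(3) == 0 and r2(5) == 8

# sum_{n<N} r2(n) = #{m : N(m) < N} = pi*N + O(sqrt(N))
N = 10_000
total = sum(r2(n) for n in range(N))
assert abs(total / N - pi) < 0.05
```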
We emphasize that *we do not attempt to track constants that depend upon $\omega$.* Frequently, we will assume that $N$ is large enough once $\omega$ is fixed. **Lemma 1**. *Fix an interval $\omega \subset \mathbb{T}$, $0< |\omega| \leq 2 \pi$. For $N(\beta)<1$, and integers $N> N _{\omega }$, $$\begin{aligned} \widehat M_N ^{\omega }(\beta) & \coloneqq \frac{\lvert\omega\rvert} { 2 \pi N} \sum _{ \substack { N(n) < N \\\arg(n)\in\omega }} e(-\langle n,\beta\rangle) \\ \label{e:expsum} & \ll_{\omega} \min\left(1, (N \cdot N(\beta))^{-\frac{3}{4}}\right) + \frac1{\sqrt N }.\end{aligned}$$ The implied constant only depends on $\omega$.* *Proof.* Let $I_n = B_\infty (n, 1/2)$. For $n=0$, we have $$\int_{I_0} e (-\langle x, \beta\rangle) \;dx = \prod _{j=1}^2 \frac{\sin (\pi \beta_j)} {\pi \beta_j}, \; \; \; \beta = (\beta_1,\beta_2).$$ This function is bounded above, and away from zero, since $N(\beta) <1$. Suppress the dependence on $\omega$ in the notation. Modify the definition of $$S_N (\beta) := N \widehat M_N (\beta )$$ to $$\begin{aligned} T_N (\beta) &\coloneqq \sum _{\substack{n \colon N(n)< N \\ \arg(n)\in\omega }} \int_{I_n} e ( -\langle x, \beta \rangle )\; dx \\& = \prod _{j=1}^2 \frac{\sin (\pi \beta_j)} {\pi \beta_j} \sum _{\substack{ n\colon N(n)< N \\ \arg(n)\in\omega }} e(-\langle n, \beta \rangle) \\ &= S_N (\beta) \prod _{j=1}^2 \frac{\sin (\pi \beta_j)} {\pi \beta_j} . \end{aligned}$$ We see that it suffices to estimate $T_N(\beta)$. The symmetric difference between the sector defined by $\omega$ and the region of integration defining $T_N (\beta)$ is the set $$\Bigl( \bigcup _{\substack{ n\colon N(n)< N \\ \arg(n)\in\omega }} I_n \Bigr) \, \triangle \, \{ x\colon N(x)<N,\ \arg(x)\in\omega \} .$$ It has measure at most $\ll_{\omega} \sqrt N$, as the above set is supported in an $O(1)$ neighborhood of the boundary of the sector $\{ x : N(x) < N, \ \arg(x) \in \omega\}$, which has length $O_{\omega}(\sqrt{N})$. 
Thus, $$\begin{aligned} T_N(\beta) &\ll \Bigl\lvert \int_{ \substack {N( x ) <N \\ \arg (x)\in\omega } } e(-\langle x, \beta\rangle ) \; dx \Bigr\rvert + \sqrt{N}. \end{aligned}$$ By partitioning $\omega$ into smaller arcs and arguing as in [@MONTGOMERY Page 111-112], which addresses the case where $\omega = \mathbb{T}$, we may bound the integral $$\frac{1}{N} \int_{ \substack {N( x ) <N \\ \arg (x)\in\omega } } e(-\langle x, \beta\rangle ) \; dx \ll_{\omega} \big( \frac{1}{N \cdot N(\beta)} \big)^{3/4}.$$ ◻ # Inequalities involving Ramanujan Sums In this section, we prove two-dimensional analogues of estimates and identities already known for one-dimensional Ramanujan sums, like the Cohen Identity. This section is crucial to our High/Low decomposition. Some of these properties are known, but we include details for completeness. We start with standard facts about the Fourier transform on $B_q$. **Lemma 1**. *Consider the set $B_q$ and $n \in \mathbb Z[i]$. We have: $$\begin{aligned} \label{e:orthognality} \sum_{r\in {B_q}} e( \langle r,\frac{n}{\bar q}\rangle) = \begin{cases} N(q) & \textup{ if } \bar q \divides n \\ 0 & \textup{ Otherwise.} \end{cases}\end{aligned}$$* Below, we divide by $\bar d$, where $d$ is a divisor of $q$. **Corollary 1**. *For $d \divides q$ we have $$\begin{aligned} \label{e:orthognality2} \sum_{r\in B_q} e\left( \langle r,\frac{n}{\bar d}\rangle\right) = \begin{cases} N(q) & \textup{ if } \bar d \divides n \\ 0 & \textup{ Otherwise.} \end{cases}\end{aligned}$$* Another consequence of Lemma [Lemma 1](#orthoidentity-lm){reference-type="ref" reference="orthoidentity-lm"} is a form of Parseval's Identity, as stated below. **Proposition 1**. 
*For a function $f$ defined on $\mathbb Z[i]$ the Discrete Fourier Transform $\mathcal{F}_{q}$ defined in [\[e:FDq\]](#e:FDq){reference-type="eqref" reference="e:FDq"} satisfies $$\begin{aligned} \sum_{n\in B_q} |f(n)|^2 = \frac{1}{N(q)} \sum_{x\in B_{\bar q}} |\mathcal{F}_{q} f (x)|^2. \end{aligned}$$* Above, on the left we have $B _{q}$, and on the right $B_{\bar q} = \{\bar{n}:n\in B_q \} =\overline{B_{q}}$. The analog of Ramanujan's sum is $$\begin{aligned} \label{def-generalizedareagausssum1} \tau_{q}(x) & := \sum_{n\in \mathbb A_q} e(\langle x, \tfrac{n}{q}\rangle).\end{aligned}$$ **Lemma 1**. *Let $r\in \mathbb Z[i]$. Then $$\begin{aligned} \mathbf{1}_{\gcd(\bar r, q)=1} = \frac{1}{N(q)} \sum_{k\in B_q} e\left(\langle -r,\frac{k}{q}\rangle\right) \tau_{{\bar q}}( k).\end{aligned}$$* *Proof.* Note from the definition in [\[def-generalizedareagausssum1\]](#def-generalizedareagausssum1){reference-type="eqref" reference="def-generalizedareagausssum1"} that $$\tau_{{\bar q}}( k) = \bigl\langle \mathbf{1}_{\gcd(\bar r, q)=1} , e ( \langle \cdot , k/\bar q \rangle ) \bigr\rangle.$$ Then, the conclusion follows from general facts about orthogonal bases. ◻ **Lemma 1**. *For $x\in \mathbb Z[i]$ we have $$\begin{aligned} \tau_{q}(x) &= \sum_{d \divides (q,\bar x)} \mu(q/d) N(d).\end{aligned}$$ In particular, $\tau_{q}(q) = \phi(q)$ and $\tau_{q}(1) = \mu(q)$. 
In addition, if $(q, \bar x) = 1$ then $\tau_{q}(x) = \mu(q)$.* *Proof.* Note that thanks to Corollary [Corollary 1](#cor:Orthogonality){reference-type="ref" reference="cor:Orthogonality"} we have that if $d \divides q$ then $$\begin{aligned} \sum_{\substack{k\in B_q\\ d|k}} e\left(\langle x,\frac{k}{q}\rangle\right) & = \sum_{k\in B_q} e\left(\langle x,\frac{k}{q}\rangle\right) \mathbf{1}_{d|k} \\ & = \sum_{k\in B_q} e\left(\langle x,\frac{k}{q}\rangle\right) \frac{1}{N(q)} \sum_{r\in B_q} e\left(\langle -r,\frac{ k}{ d}\rangle\right) \\ & = \frac{1}{N(q)} \sum_{r\in B_q} \sum_{k\in B_q} e\left(\langle \frac{x- r\frac{\bar{q}}{\bar d}}{\bar{q}},k \rangle\right) \\ & = \sum_{r\in B_q} \mathbf{1}_{{\bar q}| x- r\frac{{\bar q}}{{\bar d}}} \\ & = N(\frac{q}{d}) \mathbf{1}_{\frac{ q}{d} \divides \bar x}.\end{aligned}$$ The last equality comes from counting the number of $r$'s in $B_q$ for which the indicator is non-zero. Let $\bar x = \frac{ q}{d} x'$. Then $\bar x - \bar r \frac{ q}{d} \equiv 0 \mod q \, \Leftrightarrow \, \frac{ q}{d} (x' - \bar r) \equiv 0 \mod \bar q$. This means that $x' \equiv \bar r \mod d$, which means that there exists a unique such $\bar r \in B_{ d}$ and $N(q/d)$ of them in $B_{ q}$. Hence, and with the assistance of the Inclusion-Exclusion principle, we observe that $$\begin{aligned} \label{e:multiplicative} \tau_{q}(x) &= \sum_{\substack{k\in B_q\\ (k,q)=1}}e(\langle x,\tfrac{k}{q}\rangle) \\ &= \sum_{d \divides q} \mu(d) \sum_{\substack{k\in B_q\\ d|k}} e(\langle x,\tfrac{k}{q}\rangle) \\ & = \sum_{d \divides q} \mu(d) N(\tfrac{q}{d}) \mathbf{1}_{\frac{ q}{d} \divides \bar x} \\ & = \sum_{d \divides q} \mu(q/d) N(d) \mathbf{1}_{d \divides \bar x } \\ & = \sum_{d \divides (q, \bar x)} \mu(q/d) N(d).\end{aligned}$$ ◻ **Corollary 1**. 
*The function $\tau_{q}(x)$ is multiplicative and we have that $$\begin{aligned} \tau_{q}(x) &= \mu\Big(\frac{q}{(q, \bar x)}\Big) \frac{\phi(q)}{\phi(\frac{q}{(q, \bar x)})}.\end{aligned}$$* *Proof.* Let $d = (q,\bar x)$. A direct consequence of the Chinese Remainder Theorem gives us $$\begin{aligned} \tau_{q}(x) & = \sum_{r \in \mathbb{A}_{ q} } e ( \langle r, \tfrac{x}{\bar q} \rangle ) \\ & = \sum_{m \in \mathbb{A}_{ q/d } } \sum_{k \in \mathbb{A}_{d} } e ( \langle k \frac{q}{d} + md, \frac{x}{\bar q} \rangle ) \\ & = \sum_{m \in \mathbb{A}_{ q/d } } e ( \langle m, \tfrac{x}{\bar q / \bar d} \rangle) \sum_{k \in \mathbb{A}_{d} } e ( \langle k , \tfrac{x}{\bar d} \rangle ) \\ & = \tau_{q/d}(x) \, \phi (d) \end{aligned}$$ and the result follows immediately from the previous lemma and the multiplicative properties of $\phi$. ◻ With these identities in mind, we prove Cohen's identity. **Lemma 1**. *For $x\in \mathbb Z[i]$, the following identity holds: $$\begin{aligned} \sum_{r \in \mathbb{A}_{\bar q} } \tau_{q}(x+r) = \mu (q) \tau_{q}(x).\end{aligned}$$* *Proof.* We have $$\begin{aligned} \sum_{r \in \mathbb{A}_{\bar q} } \tau_{q}(x+r) & = \sum_{r\in B_{\bar q}} 1_{( r,\bar q)=1} \tau_{q}(x+r) \\ & =\sum_{r\in B_{\bar q}} \frac{1}{N(q) } \sum_{k\in B_q} e\left( \langle -r,\frac{k}{q}\rangle\right) \, \tau_{\bar q}(k) \, \tau_{q}(x+r) \\ & = \frac{1}{N(q)} \sum_{k\in B_q}\tau_{\bar q}(k) \sum_{r\in B_{\bar q}} e\left( \langle -r,\frac{ k}{q}\rangle\right) \tau_{q}(x+r)\end{aligned}$$ Now we compute the inner sum. 
$$\begin{aligned} \sum_{r\in B_{\bar q}} e\left( \langle -r,\frac{ k}{q}\rangle\right) \tau_{q}(x+r) & = \sum_{r\in B_{\bar q}} e\left( \langle -r,\frac{ k}{q}\rangle\right) \sum_{n \in \mathbb{A}_{ q} } e\left(\langle \frac{ n}{ q} , x+r\rangle\right) \\ & = \sum_{n \in \mathbb{A}_{q} } e\left(\langle \frac{ n}{ q} , x\rangle\right) \sum_{r\in B_{\bar q}} e\left(\langle \frac{ n-k}{ q} , r\rangle\right) \\ & = \sum_{n \in \mathbb{A}_{ q} } e\left(\langle \frac{ n}{ q} , x\rangle\right) N(q) \mathbf{1}_{ q \divides n-k} \\ & = N(q) e\left(\langle \frac{ k}{ q} , x\rangle\right) \mathbf{1}_{(k,q) = 1}.\end{aligned}$$ In addition, note that if $(k,q)=1$, then $\tau_{\bar q}(k) = \tau_{\bar q}(1)$. Indeed, this condition means that $\bar k$ is a unit modulo $\bar q$, so the set $\{ \bar k s \, : \, s \in \mathbb{A}_{\bar q} \}$ is just a rearrangement of $\mathbb{A}_{\bar q}$ modulo $\bar q$, and so $$\begin{aligned} \label{e:cyclicramanujan} \tau_{\bar q}(k) & = \sum_{s\in \mathbb{A}_{\bar q}} e\left(\langle k,\frac{s}{\bar q}\rangle\right) \\ & = \sum_{s\in \mathbb{A}_{\bar q}} e\left(\langle 1,\frac{\bar k s}{\bar q}\rangle\right) \\ & = \sum_{s\in \mathbb{A}_{\bar q}} e\left(\langle 1,\frac{ s}{\bar q}\rangle\right) \\ & = \tau_{\bar q}(1). \end{aligned}$$ Therefore, $$\begin{aligned} \sum_{r\in \mathbb{A}_{\bar q}} \tau_{q}(x+r) & = \sum_{k\in B_q}\tau_{\bar q}(k) e\left(\langle \frac{ k}{ q} , x\rangle\right) \mathbf{1}_{(k,q) = 1} \\ & = \tau_{\bar q}(1) \sum_{k\in B_q}e\left(\langle \frac{ k}{ q} , x\rangle\right) \mathbf{1}_{(k,q) = 1} \\ & = \tau_{\bar q}(1) \tau_{ q}(x).\end{aligned}$$ Finally, $\tau _{\bar q}(1)= \mu (q)$. ◻ **Corollary 1**. *For $x\in \mathbb Z[i]$ we have $$\begin{aligned} \label{e:12} \bigg| \sum_{r\in \mathbb{A}_{\bar q}} \tau_{q}(x+r) \bigg| & \ll N(\gcd(x,q)) \, N(q)^{\epsilon}.\end{aligned}$$* *Proof.* By Lemma [Lemma 1](#xplusrestimate-lm){reference-type="ref" reference="xplusrestimate-lm"}, it suffices to find a bound for $\tau_{ q}(x)$. 
The inequality follows immediately from Lemma [Lemma 1](#lm:multiplicative){reference-type="ref" reference="lm:multiplicative"} and Corollary [Corollary 1](#crlr:multiplicative){reference-type="ref" reference="crlr:multiplicative"}: $$\begin{aligned} |\tau_{q}(x)| \ll N\big(\gcd(\bar x,q)\big) {N(q)}^{\epsilon}.\end{aligned}$$ ◻ Now we are ready to prove the Gaussian version of an inequality, originally due to Bourgain, involving the Ramanujan sums. It assures us that while the summands above can, in general, be as big as $q$, this happens infrequently as $x$ varies. **Proposition 1**. *For every $B, k>2$, integers $N > N _{k, B}$ and $Q \leq (\log N)^B$ we have $$\begin{aligned} \label{e:BourgainRamanujan} \frac{1}{N} \sum_{N(x)<N} \bigg[ \sum_{q \colon N(q)< Q} \bigg\lvert \sum_{r\in \mathbb{A}_{\bar q}} \tau_{q}(x+r) \bigg\rvert \bigg]^k \ll Q^{k+\epsilon}. \end{aligned}$$* *Proof.* The proof of this proposition is inspired by an argument of Bourgain [@MR1209299]. In view of [\[e:12\]](#e:12){reference-type="eqref" reference="e:12"}, it suffices to show that $$\frac{1}{N} \sum_{N(x)<N} \biggl(\sum_{N(q)<Q} N(\gcd(x,q))\biggr)^k \ll Q^{k+\epsilon}.$$ Expanding out the $k$th power, we have $$\begin{aligned} \sum_{N(q_1),N(q_2),\cdots,N(q_k) < Q} \frac{1}{N}\sum_{N(x)<N} \prod_{i=1}^k N(\gcd(x,q_i)). \end{aligned}$$ The function $x \to \prod_{i=1}^k N(\gcd(x,q_i))$ is periodic with period $\mathfrak{L}:=N(\mathop{\mathrm{lcm}}(q_1,\cdots,q_k))$. We are free to restrict attention to the case in which $N$ is much larger than $\mathfrak{L}\leq Q^k$. Thus, the bound is reduced to establishing that $$\begin{aligned} \sum_{N(q_1),N(q_2),\cdots,N(q_k) < Q} \frac{1}{\mathfrak{L}}\sum_{N(x)<\mathfrak{L}} \prod_{i=1}^k N(\gcd(x,q_i)) \ll Q ^{k+ \epsilon }. \end{aligned}$$ We will establish a bound for the inner sum, namely $$\sum_{N(x)<\mathfrak{L}} \prod_{i=1}^k N(\gcd(x,q_i)) \ll Q^k.$$ The sum above is multiplicative in the variables $q_1, \cdots, q_k$.
So, specializing to the case of $q_i=\rho^{r_i}$, we can estimate as follows. Assume that $r = \max r_{i}$. Then $\mathfrak{L} = N(\rho^{r}).$ Let $\rho^a\| x$. There are $\mathfrak{L} N(\rho^{-a}) = N(\rho^{r-a})$ such $x$ with $N(x)<\mathfrak{L}$. Also $N(\gcd(x,\rho^{r_i})) = N(\rho^{\min (r_i,a) })$. So $$\begin{aligned} \label{e:MULTI} \sum_{N(x)< \mathfrak{L}} \prod_{i=1}^k N(\gcd(x,\rho^{r_i})) &\ll N(\rho^{r-a}) \prod_{i=1}^{k} N(\rho^{\min (r_{i},a) }) \\ & \ll \prod_{i=1}^{k} N(\rho ^{r_i}). \end{aligned}$$ Above, we can assume that $r = \max r_i = r_1$, and then estimate $$N(\rho^{\min (r_{i},a) }) \leq \begin{cases} N( \rho ^a) & i=1 \\ N(\rho^{r_i}) & 2\leq i \leq k \end{cases}.$$ Using [\[e:MULTI\]](#e:MULTI){reference-type="eqref" reference="e:MULTI"}, the multiplicative property implies that $$\sum_{N(x)<\mathfrak{L}} \prod_{i=1}^k N(\gcd(x,q_i)) \ll \prod_{i=1}^{k} N(q_i) \leq Q^k.$$ Thus, to conclude the estimate, we note that $$\begin{aligned} \sum_{N(q_1),N(q_2),\cdots,N(q_k) < Q} \frac{1}{\mathfrak{L}} \ll Q^{\epsilon}.\end{aligned}$$ This completes the proof. ◻ # The Vinogradov Inequality We prove an analogue of the Vinogradov inequality for $\Lambda$ on $\mathbb Z[i]$. That is, we control the Fourier transform of averages of the von Mangoldt function over a sector [\[e:omega\]](#e:omega){reference-type="eqref" reference="e:omega"}, at a point which is close to a rational with relatively large denominator. **Theorem 1**. *Let $\alpha\in \mathbb{C}$ and $a,q\in \mathbb Z[i]$ with $0<N(a)<N(q) < \sqrt N$, $\gcd(a,q)=1$, and $N(\alpha-\frac{a}{q})<\frac{1}{N(q)^2}$. Fix $\omega \subset \mathbb T$. For $N > N _{\omega }$, we have $$\begin{aligned} \sum_{\substack{ n\colon N(n)<N \\ \arg(n)\in\omega }} \Lambda(n)e(\langle n, \alpha\rangle) \ll \frac{N\log^2(N)}{N(q) ^{1/2}} + N^{99/100} .\end{aligned}$$* The remainder of the section is devoted to the proof. We collect some background material.
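Before developing that background, the Ramanujan-sum identities above admit a quick numerical sanity check. The Python sketch below is purely illustrative and not part of the argument; the helper names (`gmod`, `ggcd`, `ramanujan`) are ours, and we assume the pairing $\langle z,w\rangle = \operatorname{Re}(z\bar w)$ together with nearest-multiple reduction for the residue systems, consistent with the conventions used here.

```python
import cmath
from itertools import product

# Gaussian integers as integer pairs (a, b) <-> a + b*i; all arithmetic is exact.
def gmul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def gconj(z):
    return (z[0], -z[1])

def gnorm(z):
    return z[0] * z[0] + z[1] * z[1]

def gmod(z, q):
    """Nearest-multiple reduction: canonical representative of z mod q."""
    n = gnorm(q)
    w = gmul(z, gconj(q))                         # z/q equals w/n exactly
    k = tuple((2 * c + n) // (2 * n) for c in w)  # round-half-up of c/n
    kq = gmul(k, q)
    return (z[0] - kq[0], z[1] - kq[1])

def ggcd(z, w):
    """Euclidean algorithm in Z[i]; remainder norms strictly decrease."""
    while w != (0, 0):
        z, w = w, gmod(z, w)
    return z

def residues(q):
    """One representative for each of the N(q) residue classes mod q."""
    n = gnorm(q)
    R = int(n ** 0.5) + 1
    reps = {gmod((a, b), q) for a, b in product(range(-R, R + 1), repeat=2)}
    assert len(reps) == n
    return reps

def ramanujan(q, x):
    """tau_q(x): sum of e(<r, x/conj(q)>) over the units r mod q."""
    n = gnorm(q)
    c = gmul(gconj(x), gconj(q))  # <r, x/conj(q)> = Re(r * conj(x) * conj(q)) / N(q)
    total = 0.0 + 0.0j
    for r in residues(q):
        if gnorm(ggcd(r, q)) == 1:
            t = gmul(r, c)
            total += cmath.exp(2j * cmath.pi * t[0] / n)
    return total

# Multiplicativity: (1+2i)(2+i) = 5i, a product of two coprime Gaussian primes.
t1 = ramanujan((1, 2), (1, 0))
t2 = ramanujan((2, 1), (1, 0))
t12 = ramanujan((0, 5), (1, 0))
```

For the Gaussian prime $q=1+2i$ one finds $\tau_q(1)=-1$ and $\tau_q(\overline q)=\phi(q)=4$, matching the closed formula $\mu(q/(q,\bar x))\,\phi(q)/\phi(q/(q,\bar x))$, and Cohen's identity $\sum_{r\in\mathbb{A}_{\bar q}}\tau_q(x+r)=\mu(q)\tau_q(x)$ can be checked in the same way.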
The ring $\mathbb Z[i]$ is a Euclidean domain, which means that for each $a,b\in \mathbb Z[i]$, there exists a unique pair $q,r$ such that $$a=bq+r \textup{ where } {r\in B_b}$$ Note that if $N(r) =\frac{N(b)}{2}$, then $r$ lies on the boundary of the square spanned by $b$ and $ib$; in this case we pick the point with positive phase on the line that contains the origin, i.e. $0\leq \arg(r)<\pi$. Similarly to the Farey dissection, we can find an approximation of $\xi$ by rational points in $\mathbb{Q}[i]$. In fact, we can say that for every $\xi$ with $N(\xi)<1$, there exists $\frac{a}{q}\in \mathbb{Q}[i]$ such that $$N(\xi - \tfrac{a}{q}) < \frac{c(q)}{N(q)^2}$$ where $c(q) \approx 1$ is a constant that depends on $q$ but is uniformly bounded above and below. Thus, the assumption of the Theorem says that $\alpha$ is well approximated by $a/q$, and we apply the result above with $N(q)$ large. Next, we recall the Prime Number Theorem for Gaussian integers. For any choice of $\omega \subset \mathbb T$, integer $A$, and $N(q)<\log^{\frac{A}{ {2}}-1}N$, $$\begin{aligned} \psi _{\Lambda} (x,q,r,\omega ) & := \sum_{\substack{n \equiv r \mod q \\ N(n) < x \\ \arg(n)\in\omega }} \Lambda (n) = \frac{\lvert \omega\rvert }{2\pi} \frac{x}{\phi(q)} +O\left( \frac{x \, N(q)}{\log ^A x} \right) \label{landaupnt2}\end{aligned}$$ where the implied constants are absolute. This estimate comes from [@MR2061214]\*Theorem 5.36 and the hint following it, which provides a uniform version of the Prime Number Theorem. The principal technical tool is the following lemma. **Lemma 1**. *Let $N$ be a large integer and $m\in \mathbb Z[i]$ with $N(m)>N^{1/100}$ or $m=0$.
For large $T$, and $N(\alpha-\tfrac{a}{q})\leq \frac{1}{N(q)^2}$, where $0<N(a)<N(q)$ and $(a,q)=1$, we have $$\begin{aligned} B(T,N,\alpha,m)& :=\sum_{t\colon 0<N(t)<T} \min\left(\frac{N}{N(t)},\left(\frac{N}{N(t)}\right)^{\frac{1}{4}}\frac{1}{N(\|(\bar{t}+m)\alpha\|)^{\frac{3}{4}}}\right) \\ & \ll \frac{N\log(T)}{N(q)}+N^{\frac{1}{4}} N (q)^{\frac 34} \log N(q)+N(q)^{\frac{1}{4}}N^{\frac{1}{4}}T^{\frac{3}{4}} +N^{99/100}.\end{aligned}$$* *Proof.* The proof proceeds by case analysis based on the values of $h$ and $r$ introduced below. Since $\mathbb Z[i]$ is a Euclidean domain, we know that $t+\bar{m}=\bar{h}\bar{q}+\bar{r}$, with $r\in B_q$. Let $\beta:=\alpha - \frac{a}{q}$. Then $N(\beta) <N(q)^{-2}$ and $$\| (\bar{t} +m)\alpha\| =\bigl\lVert hq\beta+r\beta+\tfrac{ra}{q}\bigr\rVert.$$ So we can rewrite our sum in terms of $h$ and $r$ as follows. $$\begin{aligned} B(T,N,\alpha,m) = \sum_{N(h)<\frac{T}{N(q)}}\sum_{r\in B_q} \min\left(\frac{N}{N(t)},\left(\frac{N}{N(t)}\right)^{\frac{1}{4}}\frac{1}{N(\|(\bar{t}+m)\alpha\|)^{\frac{3}{4}}}\right),\end{aligned}$$ where on the right we understand that $t=t_{h,r}$ is given by $$t+\bar{m}=\bar{h}\bar{q}+\bar{r}.$$ **Case 1: $\bar{t}+ m=0$.** Hence $\bar{t}=-m$, where $m$ is fixed. There is only one term in the sum we are estimating. Since $N(t)>0$, we see that $m\neq 0$, and so $N(t) > N ^{1/100}$. Then we use the trivial bound, which gives a contribution of at most $N^{99/100}$, and the inequality holds in the case that $\bar{t}+m=0$. **Case 2: $h=0$, and $0<N(r)<N(q)/10$.** By assumption $N(r\beta)\leq (2N(q)) ^{-1}$, so $4N(\| (\bar{t}+m)\alpha\|)\geq N(\|\frac{ra}{q}\|)-\frac{1}{2N(q)}$.
Therefore $$\begin{aligned} \label{e:interestcase1} \sum_{0 < N(r) \leq N(q)/10 } \left(\frac{N}{N(t)}\right)^{\frac{1}{4}}\frac{1}{N(\| (\bar{t}+m)\alpha\|)^{\frac{3}{4}}} &\ll \sum_{N(r)<N(q)}\frac{N^{\frac{1}{4}}}{N(r)^{\frac{1}{4}} \left( N(\|\frac{ra}{q}\|) - \frac{1}{4N(q)} \right)^{\frac{3}{4}}}\end{aligned}$$ Let $d_a(r)\in B_q$ denote the representative of $a r \pmod q$, so that $N(\|\frac{ra}{q}\|) = \frac{ N(d_a(r))}{N(q)}$. We see that we can ignore $\frac{1}{4N(q)}$ in the denominator of the right hand side. So $$\begin{aligned} \eqref{e:interestcase1} &\ll N^{\frac{1}{4}}\sum_{N(r)<N(q)}\frac{1}{N(r)^{\frac{1}{4}}N(\|\frac{ra}{q}\|)^{\frac{3}{4}}}\end{aligned}$$ Now, the map $r \to ra$ is a permutation on $B_q$, so that $$\begin{aligned} \eqref{e:interestcase1} &\ll N^{\frac{1}{4}}\sum_{N(r)<N(q)}\frac{1}{N(r)^{\frac{1}{4}}N( \frac{d_a(r)}{q} )^{\frac{3}{4}}} \\ & \ll N^{\frac{1}{4}} N (q)^{3/4} \sum_{N(r)<N(q)}\frac{1}{N(r)^{\frac{1}{4}}N(d_a(r))^{\frac{3}{4}}} \\ & \ll N^{\frac{1}{4}} N (q)^{ 3/4} \sum_{N(r)<N(q)}\frac{1}{N(r)} \ll N^{\frac{1}{4}} N (q)^{\frac 34} \log N(q). \end{aligned}$$ Above, we have used the $\ell^{4}$---$\ell^{4/3}$ Hölder inequality, and [\[sumofsqestimate\]](#sumofsqestimate){reference-type="eqref" reference="sumofsqestimate"}. That completes Case 2. **Case 3: $h\neq 0$, or $h=0$ and $N(r)>\frac{N(q)}{10}$.** This is the principal case. Observe that $N(t)\gg N(q)(N(h)+1)$. If $h \neq 0$ we can immediately see that $$N(t) \gg N(h) \, N(q) - N(r) \geq N(h) \, N(q) - \frac{N(q)}{2} \gg N(q) \, (N(h) + 1),$$ or otherwise $N(t) = N(r) \geq {N(q)}/{10}$.
Combining the two inequalities we see that $$N(t) \gg N(q)(N(h)+1).$$ The key claim comes in two parts: First, there are constants $0<C_1 < C_2$ and $C_3>0$ so that for all $q$ and $h$ as above, and for all $r\in B_q$ there is a $d = d(r, h) \in B_q$ so that first, $$\label{e:permutation} C_1 N(\tfrac{d}{q}) \leq\bigl\lVert \tfrac{ra}{q}+hq\beta+r \beta\bigr\rVert ^2 \leq C_2 N(\tfrac{d}{q}).$$ and second, for all $d$, the cardinality of $\{ r \colon d(r,h)=d\}$ is at most $C_3$. (In this sense, $\| \frac{ra}{q}+hq\beta+r \beta\|$ runs over the set $\|\frac{d}{q}\|$, as illustrated in Figure [\[f:unidist\]](#f:unidist){reference-type="ref" reference="f:unidist"}.) *Proof.* The middle term in [\[e:permutation\]](#e:permutation){reference-type="eqref" reference="e:permutation"} is at most one. We take $d = d(r,h) \in B_q$ to minimize the distance between $d/q$ and the fractional part of $\frac{ra}{q}+hq\beta+r \beta$. Then, the upper and lower bounds in [\[e:permutation\]](#e:permutation){reference-type="eqref" reference="e:permutation"} are immediate. The remainder of the argument concerns the second claim above. But that follows, since we always have for $r_1\neq r_2 \in B_q$ $$\begin{aligned} N\left(\left\| \frac{r_1a}{q}+hq\beta+r_1\beta \right\| - \left\| \frac{r_2a}{q}+hq\beta+r_2\beta \right\| \right) & \gg N\left(\frac{(r_1-r_2)a}{q}+(r_1-r_2)\beta \right) \\ & = N \left(\frac{r_1 -r_2}{q} \alpha\right) \gg \frac{1}{N(q)} . \end{aligned}$$ ◻ We can now turn to the sum $B(T,N, \alpha)$ in this case. Using the trivial bound for the cases that $r=0$ and picking the nontrivial bound for the other cases, we obtain $$\begin{aligned} B(T,N,\alpha) & \ll \sum _{N(h) < \frac{T}{N(q)} } \sum _{ N(r) < N(q)/2 } \min \left( \frac{N}{N(q) \, (N(h) +1)} \right. , \\ & \hspace*{5cm} \left. \frac{N^{1/4}}{N(q)^{1/4} \, (N(h)+1) ^{1/4} \, N(\| hq\beta + \frac{ra}{q} + r\beta \|)^{3/4} }\right) \\ & \ll \sum_{N(h)<\frac{T}{N(q)}} \Bigg( \Bigg. 
\frac{N}{N(q)\left(N(h)+1\right)} \\ & \hspace*{3cm} +\sum_{0<N(r)<N(q)/2}\frac{N^{\frac{1}{4}}}{N(q)^{\frac{1}{4}}(1+N(h))^{\frac{1}{4}}N(\| hq\beta+\frac{ra}{q}+r\beta\|)^{\frac{3}{4}}}\Bigg.\Bigg) \\ &\ll \sum_{N(h)<\frac{T}{N(q)}} \left(\frac{N}{N(q)\left(N(h)+1\right)}+\sum_{0<N(d)<N(q)/2}\frac{N^{\frac{1}{4}}}{N(q)^{\frac{1}{4}}(1+N(h))^{\frac{1}{4}}N( \frac{d}{q})^{\frac{3}{4}}}\right) \\ &\ll \sum_{N(h)<\frac{T}{N(q)}} \left(\frac{N}{N(q)\left(N(h)+1\right)}+\frac{\, N^{\frac{1}{4}}N(q)^{\frac{1}{2}}}{(1+N(h))^{\frac{1}{4}}}\sum_{k=1}^{N(q)}\frac{r_2(k)}{k^{\frac{3}{4}}}\right) \\ &\ll \sum_{N(h)<\frac{T}{N(q)}} \left(\frac{N}{N(q)\left(N(h)+1\right)}+\frac{\, N^{\frac{1}{4}}N(q)^{\frac{3}{4}}}{(1+N(h))^{\frac{1}{4}}}\right) \\ &\ll \sum_{0<\ell<\frac{T}{N(q)}} r_2(\ell) \left( \frac{N}{N(q)\ell} + \frac{ N(q)^{\frac{3}{4}} N^{\frac{1}{4}}}{\ell^{\frac{1}{4}} } \right) \\ & \ll \frac{N\log(T)}{N(q)}+N(q)^{ \frac{1}{4}}N^{\frac{1}{4}}T^{\frac{3}{4}}\end{aligned}$$ where we have used the estimates in [\[sumofsqestimate\]](#sumofsqestimate){reference-type="eqref" reference="sumofsqestimate"}, and our proof is complete. ◻ We are now ready to prove the main theorem of this section. *Proof of Theorem [Theorem 1](#vingradovgaussian){reference-type="ref" reference="vingradovgaussian"}.* Vaughan's identity, well-known in the one-dimensional case, holds in two dimensions as well. That is, let $|f|\leq 1$ be an arithmetic function and fix $UV<N$.
Then $$\begin{aligned} \label{vaughan} \sum_{N(n)<N}f(n)\Lambda(n)\ll U+\log(N)A_1(N,U,V)+N^{\frac{1}{2}}\log^3(N)A_2(N,U,V)\end{aligned}$$ where $A_1$ and $A_2$ are given by $$\begin{aligned} \label{A1} A_1=&\sum_{N(t)\leq UV}\max_{1\leq w\leq N} \biggl\lvert \sum_{w\leq N(r)\leq \frac{N}{N(t)}}f(rt)\biggr\rvert\\ \label{A2} A_2=&\max_{U\leq M\leq N/V}\max_{V\leq N(j)\leq N/M}\left(\sum_{V<N(k)\leq N/M}\biggl\lvert \sum_{M<N(m)<\min(2M,\frac{N}{N(j)},\frac{N}{N(k)})}f(mj)\overline{f(mk)}\biggr\rvert\right)^{\frac{1}{2}}.\end{aligned}$$ The term we need to estimate is [\[vaughan\]](#vaughan){reference-type="eqref" reference="vaughan"}, with $f(x)=e(\langle x,\alpha\rangle)$. The two auxiliary integers are $U=N^{\frac{1}{2}}$ and $V=N^{\frac{1}{4}}$. The first term requires the exponential sum estimate [\[e:expsum\]](#e:expsum){reference-type="eqref" reference="e:expsum"}, and Lemma [Lemma 1](#cong){reference-type="ref" reference="cong"} in which we set $T=UV$. It follows that $$\begin{aligned} \label{a1} A_1 & =\sum_{N(t)<UV}\max_{w<\frac{N}{N(t)}} \biggl\vert \sum_{\substack{w<N(r)<\frac{N}{N(t)} \\\arg(r)\in\omega }}e(\langle rt,\alpha\rangle)\biggr\vert \\ & \ll\sum_{N(t)<UV} \min\biggl(\frac{N}{N(t)},\left(\frac{N}{N(t)}\right)^{\frac{1}{4}}\frac{1}{N(\|\bar{t}\alpha\|)^{\frac{3}{4}}}\biggr) \\ & \ll \frac{N\log(N)}{N(q)} + N^{\frac{1}{4}} N(q)^{\frac{3}{4}} \log N(q) + N^{\frac{1}{4}}[UV]^{\frac{3}{4}}N(q)^{\frac{1}{4}} +N^{99/100} \\ & \ll \frac{N\log(N)}{N(q)} \Bigl(1 + (N(q)/N)^{\frac 34}\log N(q) + \frac{N(q)^{\frac14}}{N^{\frac3{16}}} \Bigr) +N^{99/100} \\ & \ll \frac{N\log^2(N)}{N(q)} + N^{99/100} \end{aligned}$$ where in the last inequality we have used Lemma [Lemma 1](#cong){reference-type="ref" reference="cong"} with the hypothesis that $m=0$. This bound meets the claimed bound in Theorem [Theorem 1](#vingradovgaussian){reference-type="ref" reference="vingradovgaussian"}. 
The second term from Vaughan's identity [\[vaughan\]](#vaughan){reference-type="eqref" reference="vaughan"} is quadratic in nature. $$\begin{aligned} A_2 ^2 & = \max_{U<M<\frac{N}{V}}\max_{V\leq N(j)<N/M} \sum_{V<N(k)\leq N/M}\Biggl\vert \sum_{\substack{M<N(m)<2M\\N(m)<N/N(k),N/N(j) \\\arg(m)\in\omega}}e\left(\langle mj-mk,\alpha\rangle\right)\Biggr\vert \\ &\ll \max_{U<M<\frac{N}{V}}\max_{V\leq N(j)<N/M} \sum_{V<N(k)\leq N/M}\min\biggl( M,\left(\frac{N}{N(k)}\right)^{\frac{1}{4}}N(\| (j-k)\alpha\|)^{-\frac{3}{4}}\biggr)\end{aligned}$$ where we have used Lemma [Lemma 1](#expsum){reference-type="ref" reference="expsum"}. Now we use Lemma [Lemma 1](#cong){reference-type="ref" reference="cong"} for $m=j$. The conclusion of the Lemma applies since in the sum above, we have $M \leq \frac{N}{N(k)}$. Therefore $$\begin{aligned} \label{a2} A_2&\ll \max_{U<M<\frac{N}{V}}\max_{V\leq N(j)<N/M} \left(M+\frac{N\log(N)}{N(q)}+\frac{N\, N(q)^{\frac{1}{4}}}{M^{3/4}}+\left(\frac{N}{M}\right)^{\frac{1}{4}}N(q) +N^{\frac{99}{100}}\right)^{\frac{1}{2}}\\ &\ll{\frac{N^{1/2}\log N}{N(q)^{1/2}}}+\sqrt{\frac{N}{V}} + \frac{N^{\frac{1}{2}}N(q)^{\frac{1}{8}}}{ U^{3/8}} + \left(\frac{N}{U}\right)^{\frac{1}{8}}N(q)^{\frac{1}{2}} +N^{\frac{99}{200}} . \end{aligned}$$ The last inequality follows from our choice of $U=N^{\frac{1}{2}}$ and $V=N^{\frac{1}{4}}$, and completes the proof. ◻ # Approximating the Kernel Define the approximating multiplier $L_N^{a,q}$ as follows: $$\begin{aligned} \label{e:LNaq} \widehat{L_N^{a,q}}(\xi) &:= \Phi(a,\bar q) \widehat{M_{N} ^{\omega }}(\xi - \frac{a}{q}). \end{aligned}$$ Above, we suppress the $\omega$ dependence in the already heavy notation, and we use the notation $$\begin{aligned} \label{e:newgausssum} \Phi(a,q) &\coloneqq \frac{\tau _{ q } (a)}{\phi(q)} .\end{aligned}$$ In addition, recall that $M_N = M_N ^{\omega }$ is an average over a sector defined by a choice of interval $\omega \subset \mathbb{T}$, see [\[e:MN\]](#e:MN){reference-type="eqref" reference="e:MN"}.
The weighted variant is $$A_N ^{\omega } = A_N = \frac{2\pi} {\lvert\omega\rvert N} \sum _{ \substack { N(n) < N \\\arg(n)\in\omega }} \Lambda (n) \delta _n .$$ **Lemma 1**. *For $\omega \subset \mathbb T$, $\alpha \in \mathbb{C}$ with $N(\alpha) < 1$, assume that there are $0 \leq N(a) < N(q) < Q$ such that $N ( \alpha - \frac{a}{q} ) < \frac{Q}{{N \cdot N(q) }}$ and $a\in \mathbb{A}_q$. Then we have, for $A>1$, and $N > N _{\omega ,A}$, $$\bigl\lvert \widehat{A_N ^{\omega }}(\alpha) - \widehat{ L_N^{a, q}}(\alpha ) \bigr\rvert \ll \frac{Q^4}{\log^A(N)}.$$* The usual one-dimensional approach to these estimates uses the Prime Number Theorem, and Abel summation. Implementing that argument in the two-dimensional case engages a number of complications. After all, the two-dimensional Prime Number Theorem is adapted to annular sectors, whereas Abel summation is most powerful on rectangles in the plane. We avoid these technicalities below. (Mitsui [@MR136590] sums over rectangular regions.) *Proof.* The quantifications of the Prime Number Theorem are decisive.
Write $$\begin{aligned} \widehat{A_N ^{\omega }}(\alpha) &= \frac{2\pi} {\lvert\omega\rvert N} \sum_{\substack{N(n) < N \\ \arg(n)\in\omega } } \Lambda_{\mathbb Z[i]}(n) e(\left\langle n, \alpha \right\rangle) \\ & = \frac{2\pi} {\lvert\omega\rvert N} \sum_{\substack{r \in B_{\bar q} \\ (r,\bar{q})=1}} \; \sum_{\substack {n \equiv r \mod \bar{q} \\ N(n) < N \\ \arg(n)\in\omega }} \Lambda(n) e(\left\langle n, \beta + a/q \right\rangle) \\ & = \frac{2\pi} {\lvert\omega\rvert N} \sum_{\substack{r \in B_{\bar q} \\ (r,\bar{q})=1}} e( \left\langle r, a/q \right\rangle) \sum_{\substack {n \equiv r \mod \bar{q} \\ N(n) < N \\ \arg(n)\in\omega }} \Lambda(n) e(\left\langle n, \beta \right\rangle) \\ & = \frac 1 {\phi (q)} \sum_{\substack{r \in B_{\bar q} \\ (r,\bar{q})=1}} e( \left\langle r, a/q \right\rangle) B_N (r,\beta ), \end{aligned}$$ where we define $B_N (r,\beta )$, and a closely related quantity by $$\begin{aligned} B_N (r,\beta ) &\coloneqq \frac {2 \pi\phi (q) }{\lvert \omega \rvert N} \sum_{\substack {n \equiv r \mod \bar{q} \\ N(n) < N \\ \arg(n)\in\omega }} \Lambda(n) e(\left\langle n, \beta \right\rangle) , \\ B'_{N} (r, \beta ) & \coloneqq \frac {2 \pi\phi (q) }{\lvert \omega \rvert N} \sum_{\substack {n \equiv r \mod \bar{q} \\ N(n) < N \\ \arg(n)\in\omega }} e(\left\langle n, \beta \right\rangle).\end{aligned}$$ Compare $B_{N,r}$ to $B'_{N,r}$, as follows. Using the trivial estimate for $N(n) \leq \sqrt N$ $$\begin{aligned} \label{e:differenceprimenonprime} B _{N,r} (\beta ) - B _{N}' (r,\beta ) \ll & N ^{-1/2} + \frac {2 \pi\phi (q) }{\lvert \omega \rvert N} \sum _{\substack{ n \colon \sqrt N < N(n) < N \\ n \equiv r \mod \bar{q} \\ \arg(n)\in\omega }} \bigl( \Lambda(n) - \tfrac q {{\pi}\phi (q)} \bigr) e(\langle n, \beta \rangle). \end{aligned}$$ We continue with the last sum above. It is divided into annular rectangles, as follows. 
Let $\mathcal P$ be a partition of the arc $[0, \omega ] \subset \mathbb{T}$ into intervals of length approximately $(\log N)^{-A}$. Set $\rho = 1+ (\log N)^{-10A}$. For integers $j$ with $$N^{1/{2}} \leq N_j = \rho ^j {\sqrt{N}}< N ,$$ and an interval $P\in \mathcal P$, set $$R (j, P) = \{ n \colon N_j \leq N(n) < N _{j+1},\ \textup{arg}(n) \in P,\ n \equiv r \mod \bar q \}.$$ See Figure [\[f:annular\]](#f:annular){reference-type="ref" reference="f:annular"}. The set $R(j,P)$ is the symmetric difference of four sets to which the prime counting function estimate [\[landaupnt2\]](#landaupnt2){reference-type="eqref" reference="landaupnt2"} applies. From it, we see that $$\begin{aligned} \label{e:PNTHecke} D(j,P)& =\sum _{n\in R(j,P) } \bigl( \Lambda(n) - \tfrac q {\phi (q)} \bigr) e(\langle n, \beta \rangle) \\ &\leq \sup_{n,m\in R(j,P)} \lvert 1 - e(\langle n -m, \beta \rangle) \rvert \sum _{n\in R(j,P) } \bigl( \Lambda(n) + \tfrac q {\phi (q)} \bigr) \\ & \qquad + \Bigl\lvert \sum _{n\in R(j,P) } \bigl( \Lambda(n) - \tfrac q {\phi (q)} \bigr) \Bigr\rvert \\ & \ll \Bigl[ \frac QN \cdot \frac N {\log^{10} N} \Bigr] ^{1/2} \lvert R(j,P) \rvert + \frac{N_{j+1} - N_j } {(\log N)^{10A}} \\ &\ll \sqrt Q \frac{N_{j+1} - N_j } {(\log N)^{10A}}. \end{aligned}$$ The bound for the first term comes from the condition that $N(\beta ) \leq \frac Q N$, and for the second from [\[landaupnt2\]](#landaupnt2){reference-type="eqref" reference="landaupnt2"}. Control of the absolute value of the $D(j,P)$ is sufficient, since $$\begin{aligned} B _{N,r} (\beta ) - B _{N,r}' (\beta ) & \ll N ^{-1/2} + \frac{\phi(q)}N \sum_{ P \in \mathcal{P}} \sum_{ j \colon \rho^j \leq\sqrt N} \lvert D(j,P)\rvert \\ & \ll N ^{-1/2} + \frac{\phi(q)}N \sum_{ P \in \mathcal{P}} \sum_{ j \colon \rho^j \leq\sqrt N} \frac{N_{j+1} - N_j } {(\log N)^{10A}} \\ & \ll \frac{Q (\log N)^{A}} { (\log N)^{10A}} \end{aligned}$$ as there are only $\ll(\log N)^A$ choices of the interval $P$. We are free to choose $A$ as large as we want.
This holds for all $r \in B _{\bar q}$, so that we have $$\widehat {A_N ^{\omega } } (\alpha ) - \frac 1 {\phi (q)} \sum_{\substack{r \in B_{\bar q} \\ (r,\bar{q})=1}} e( \left\langle r, a/q \right\rangle) B_N' (r,\beta ) \ll \frac{Q} {(\log N)^{A}}.$$ Then, observe the elementary inequality that for $r, s \in B _{\bar q}$, $$\begin{aligned} B_N' (r,\beta ) - B_N' (s,\beta ) & \ll \lvert r-s\rvert \cdot \lvert \beta\rvert \ll Q \Bigl[ \frac {Q}{ N} \Bigr] ^{1/2} \end{aligned}$$ which just depends upon the Lipschitz bound on exponentials, and the upper bound on $\beta$. The conclusion of the argument is then clear. Up to an error term of magnitude $Q ^{3/2}(\log N)^{-A}$ we can write $$\begin{aligned} \widehat {A_N ^{\omega } }(\alpha ) &= \frac 1 {\phi (q)} \sum_{\substack{r \in B_{\bar q} \\ (r,\bar{q})=1}} e( \left\langle r, a/q \right\rangle) B_N' (0,\beta ) \\ &= \Phi (a, \bar q)B_N' (0,\beta ) \\ & =\Phi (a, \bar q) \frac 1 {\lvert B _{\bar q} \rvert} \sum_{r\in B _{\bar q}} B_N' (r,\beta ) \\ & = \Phi (a, \bar q) \widehat {M _N ^{\omega } } (\beta ). \end{aligned}$$ That is the conclusion of Lemma [Lemma 1](#majorarcestimate){reference-type="ref" reference="majorarcestimate"}. ◻ Consider the following dyadic decomposition of rationals $$\mathcal{R}_s = \left\lbrace \frac{a}{q}\, : \, 2^s \leq N(q) < 2^{s+1}, \, a \in \mathbb{A}_q \right\rbrace.$$ Let $\eta$ be a continuous function on $\mathbb{C}$, a tensor product of piecewise linear functions, with $$\label{eq-etadefinition} \eta(\xi) = \begin{cases} 1 &\mbox{if } \xi =(0,0) \\ 0 & \mbox{if } \lVert \xi \rVert_\infty \geq 1 \end{cases}$$ and let $\Delta _s(\xi) : = \eta(16^s \xi)$. Here, we remark that this definition differs from the one in many related papers in the literature. With this definition, the function $\check \eta$ is a tensor product of Fejér kernels. In particular, it is a non-negative average.
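One admissible choice of the piecewise linear factors in [\[eq-etadefinition\]](#eq-etadefinition){reference-type="eqref" reference="eq-etadefinition"} is the triangle function; this concrete choice, and the helper names below, are our own illustrative assumptions, not fixed by the text. The inverse Fourier transform of the triangle on $\mathbb R$ is the nonnegative Fejér kernel $(\sin \pi x/\pi x)^2$, which the following sketch checks numerically.

```python
import math

def tent(t):
    """One piecewise-linear factor: the triangle cutoff, 1 at 0, 0 for |t| >= 1."""
    return max(0.0, 1.0 - abs(t))

def eta(xi1, xi2):
    """Tensor-product cutoff: eta(0)=1, and eta vanishes once the sup-norm is 1."""
    return tent(xi1) * tent(xi2)

def inv_transform(x, steps=4000):
    """Numerical inverse Fourier transform of tent at x (trapezoid rule on [-1,1])."""
    h = 2.0 / steps
    total = 0.0
    for k in range(steps + 1):
        t = -1.0 + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * tent(t) * math.cos(2 * math.pi * t * x)
    return total * h

def fejer(x):
    """Closed form (sin(pi x)/(pi x))^2: the Fejer kernel on R, always >= 0."""
    if x == 0:
        return 1.0
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2
```

Since the two-dimensional $\check\eta$ is then a product of two such nonnegative kernels, it is itself a nonnegative average, as claimed above.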
Imposing this choice here will simplify considerations in the analysis of the Goldbach conjectures. Recalling the definition of $L_N^{a, q}$ in [\[e:LNaq\]](#e:LNaq){reference-type="eqref" reference="e:LNaq"}, further define $$\widehat{B ^{\omega }_N }(\xi):= \sum_{s \geq 0} \sum_{a/q \in \mathcal{R}_s } \widehat{ L_N^{a, q}} \left( \xi \right) \Delta _s \left( \xi - \frac{a}{q} \right).$$ We remind the reader that the $\omega$ dependence is suppressed in the notation on the right. **Theorem 1**. *Fix an integer $A >10$ and $\omega \subset \mathbb T$. Then, there is an $N _{\omega }$ so that for all $N > N _{\omega }$, $$\label{e:kernel_approx} \| \widehat{A ^{\omega }_N} -\widehat{B ^{\omega }_N } \|_\infty \ll (\log N)^{-A} .$$ The implied constant is independent of $\omega$.* *Proof.* A useful and familiar fact we will reference below is that for each $s$, the functions below are disjointly supported. $$\label{e:disjoint} \widehat{ L_N^{a, q}} \Bigl( \xi \Bigr) \Delta_s \Bigl( \xi - \frac{a}{q} \Bigr), \qquad a/q\in \mathcal R_s$$ Fix $\xi \in \mathbb T^2$. By the higher-dimensional Dirichlet Theorem, there are relatively prime Gaussian integers $a$ and $q$ that satisfy $1 \leq N(a) \leq N(q) \leq N^{1/4}$ such that $$N \bigl( \xi - \tfrac{a}{q} \bigr) \leq \frac{1}{N(q) \cdot N^{1/4}}.$$ To prove the theorem we need to consider two cases based on the value of $N(q)$. **Case 1: Suppose $1 \leq N(q) \leq (\log N)^{4A}$.** For $\frac{a'}{q'} \neq \frac aq$ and $N(q') \leq (\log N)^{2A}$, we have $$\begin{aligned} N \left( \xi - \frac{a'}{q'} \right) & \gg N \left( \frac{a'}{q'} - \frac{a}{q} \right) - N \left( \xi - \frac{a}{q} \right) \\ & \geq \frac{1}{N(q')N(q)} - \frac{1}{N(q)} \frac{1}{N^{1/4}} \gg (\log N)^{-6A} .
\end{aligned}$$ Using this and the decay estimate for $\widehat M_N$ in [\[e:expsum\]](#e:expsum){reference-type="eqref" reference="e:expsum"}, we see that $$\left| \widehat{ L_N^{a', q'}} ( \xi ) \Delta_s \left( \xi - \frac{a'}{q'} \right) \right| \ll \frac{1}{\sqrt{N(q')}} \left( N N \left( \xi - \frac{a'}{q'} \right) \right)^{-3/4} \ll N^{-1/2}.$$ Appeal to the disjointness property [\[e:disjoint\]](#e:disjoint){reference-type="eqref" reference="e:disjoint"}. We have $$\Biggl\lvert \sum_{s \colon 2^s \leq (\log N)^{2A}} \sum_{ \frac{a'}{q'} \in \mathcal{R}_s, \frac{a'}{q'} \neq \frac{a}{q}} \widehat{ L_N^{a', q'}} ( \xi ) \Delta_s \left( \xi - \frac{a'}{q'} \right) \Biggr\rvert \ll N^{-1/2} \sum_{s \colon 2^s \leq (\log N)^{2A}} 2^{-s} \ll N^{-1/2}.$$ For $2^s > (\log N)^{2A}$ we use the trivial bound for $\widehat{M_N^\omega}$, the estimate for the Gauss sums in [\[e:newgausssum\]](#e:newgausssum){reference-type="eqref" reference="e:newgausssum"}, as well as the support property [\[e:disjoint\]](#e:disjoint){reference-type="eqref" reference="e:disjoint"}. This yields the estimate $$\label{e:larges} \sum_{s \colon 2^s > (\log N)^{2A} } \sum_{ \substack{\frac{a'}{q'} \in \mathcal{R}_s \\ \frac{a'}{q'} \neq \frac{a}{q}} }\widehat{ L_N^{a', q'}} ( \xi) \Delta_s \Bigl( \xi - \frac{a'}{q'} \Bigr) \ll \sum_{s \colon 2^s > (\log N)^{2A} } 2^{- 3s/4} \ll (\log N)^{-A}.$$ Above, the sums exclude the case of $a'/q'= a/q$. That is the central case, the one Lemma [Lemma 1](#majorarcestimate){reference-type="ref" reference="majorarcestimate"} was designed for. We turn to the case of $a'/q'=a/q$ here.
With an appropriate choice of $Q$ and $A$ in that Lemma, we obtain $$\begin{aligned} \Bigl\lvert \widehat{A_N ^{\omega }}(\xi) - \widehat{ L_N^{a, q}} ( \xi ) \Delta _s \Bigl( \xi - \frac{a}{q} \Bigr) \Bigr\rvert &\leq \Bigl\lvert \widehat{A_N ^{\omega }}(\xi) - \widehat{ L_N^{a, q}} ( \xi ) \Bigr\rvert + \Bigl\lvert \widehat{ L_N^{a, q}} ( \xi ) \Bigl(1-\Delta _s \Bigl( \xi - \frac{a}{q} \Bigr)\Bigr) \Bigr\rvert \\ & \ll (\log N)^{-A} + \frac{N(q)^2}{ N(\xi -a/q)^{1/2}} \ll (\log N)^{-A}.\end{aligned}$$ This holds by choice of $a/q$. That completes this case. **Case 2: Suppose $(\log N)^{4A} \leq N(q)$**. Both terms are small. By the Vinogradov inequality in Theorem [Theorem 1](#vingradovgaussian){reference-type="ref" reference="vingradovgaussian"} we have $$\begin{aligned} \lvert \widehat{A_N ^{\omega }}(\xi) \lvert \ll (\log N)^{-A}. \end{aligned}$$ It remains to show that $\widehat B_N(\xi )$ is also small. That function is a sum over integers $s \geq 0$. For $2^s > (\log N)^{2A}$, we only need to use the estimate [\[e:larges\]](#e:larges){reference-type="eqref" reference="e:larges"}. Thus, our focus turns to the case of $2^s \leq (\log N)^{2A}$. For $2^s \leq (\log N)^{2A}$ we have for $\frac{a'}{q'} \in \mathcal{R}_s$ $$\begin{aligned} N \Bigl( \xi - \frac{a'}{q'} \Bigr) & \geq N \Bigl( \frac{a'}{q'} - \frac{a}{q} \Bigr) - N \Bigl( \xi - \frac{a}{q} \Bigr) \\ & \geq \frac{1}{N(q')N(q)} - \frac{1}{N(q)} \frac{1}{N^{1/4}} \\ & \geq \frac{2^{-s-1}}{N(q)}\gg N^{-1/8}.
\end{aligned}$$ From the decay estimate in [\[e:expsum\]](#e:expsum){reference-type="eqref" reference="e:expsum"}, we have $$\widehat{ L_N^{a', q'}} ( \xi) \Delta_s \Bigl( \xi - \frac{a'}{q'} \Bigr) \ll \left( N N \left( \xi - \frac{a'}{q'} \right) \right)^{-3/4} \ll N ^{-3/32}.$$ Using the disjointness property [\[e:disjoint\]](#e:disjoint){reference-type="eqref" reference="e:disjoint"}, it is then easy to see that $$\label{case1} \sum_{s \colon 2^s \leq (\log N)^{2A}} \sum_{ \frac{a'}{q'} \in \mathcal{R}_s, \frac{a'}{q'} \neq \frac{a}{q}} \widehat{ L_N^{a', q'}} ( \xi ) \Delta_s \left( \xi - \frac{a'}{q'} \right) \ll (\log N)^{-A}.$$ That completes the second case, and hence the proof of our Theorem. ◻ # Estimates for the High and Low Parts Our High and Low decomposition of the multiplier incorporates a notion of smooth numbers. For an integer $Q = 2^{q_0} \ll (\log N)^B$, we say that a Gaussian integer $q$ is $Q$-smooth if $q$ is squarefree and a product of primes $\rho$ with $N(\rho )\leq Q$. Here, $B$ will be a fixed integer. We write $A_N ^{\omega } = \textup{Lo}_{Q,N} ^\omega + \textup{Hi} _{Q,N} ^{\omega }$, where $$\begin{aligned} \label{e:Lo} \widehat{\mathop{\mathrm{Lo}}}_{Q,N} ^{\omega }(\xi) &= \sum_{q \colon N(q) < Q} \sum_{a\in \mathbb{A}_q} \Phi(a,\bar q) \widehat{M_N ^{\omega }} (\xi-\frac{a}{q}) \Delta_{q_0} (\xi-\frac{a}{q}). \end{aligned}$$ Here, we recall that $\omega\subset \mathbb{T}$ is an interval, and $N > N _ \omega$ is sufficiently large. The average $M ^\omega_N$ is defined in [\[e:MN\]](#e:MN){reference-type="eqref" reference="e:MN"}, the Gauss sum $\Phi(a,\bar q)$ in [\[e:newgausssum\]](#e:newgausssum){reference-type="eqref" reference="e:newgausssum"}. (Note that the Gauss sum will be zero if $q$ contains a square.) This definition is inspired by Theorem [Theorem 1](#theorem:kernel_approximation){reference-type="ref" reference="theorem:kernel_approximation"}.
The definition above incorporates not only smoothness; the cutoff function $\Delta_{q_0}$ is also a function of $Q$. Both changes are useful in the next section. There are two key properties of these terms. The first is that the 'High' part has small $\ell^2$ norm. **Lemma 1**. *For any $\epsilon >0$, $\omega \subset \mathbb T$, there is an $N_ \omega$ so that for all $N> N_\omega$, $$\begin{aligned} \label{e:Hi} \| \mathop{\mathrm{Hi}}_{Q,N}^{\omega }\|_{\ell_2\rightarrow \ell_2} \ll Q^{-1+\epsilon}.\end{aligned}$$* *Proof.* The $\ell^2$ norm is estimated on the frequency side. By Theorem [Theorem 1](#theorem:kernel_approximation){reference-type="ref" reference="theorem:kernel_approximation"}, the High term is a sum of three terms. They are, suppressing the dependence on $\omega$, $$\begin{aligned} \widehat{\mathop{\mathrm{Hi}}_{Q,N} ^1}(\xi) & = \sum_{Q<2^{s+1} } \sum_{ \substack{2^s \leq N(q) < 2^{s+1} \\ N(q) \geq Q }} \sum_{ \frac{a}{q} \in \mathcal R_s} \Phi(a,\bar q)\widehat{M_N^{\omega }}(\xi-\frac{a}{q})\Delta_s(\xi-\frac{a}{q}) \\ \widehat{\mathop{\mathrm{Hi}}_{Q,N} ^2}(\xi) & = \sum_{s} \sum_{ \substack{2^s \leq N(q) < 2^{s+1} \\ N(q) < Q }} \sum_{ \frac{a}{q} \in \mathcal R_s} \Phi(a,\bar q) \widehat{M_N^{\omega }} (\xi-\frac{a}{q}) \{\Delta_{q_0} (\xi-\frac{a}{q}) - \Delta_s (\xi-\frac{a}{q})\} \\ \widehat{\mathop{\mathrm{Hi}}_{Q,N} ^3}(\xi) & \coloneqq \widehat{A ^{\omega }_N} -\widehat{B ^{\omega }_N }.\end{aligned}$$ We address them in reverse order. The last term is controlled by Theorem [Theorem 1](#theorem:kernel_approximation){reference-type="ref" reference="theorem:kernel_approximation"}. It clearly satisfies our claim [\[e:Hi\]](#e:Hi){reference-type="eqref" reference="e:Hi"}. In $\widehat{\mathop{\mathrm{Hi}}_{Q,N} ^2}$, the key term is the difference between $\Delta _{q_0}$ and $\Delta _s$. In particular, if $\Delta_{q_0} (\xi) - \Delta_s (\xi) \neq 0$, we have $N(\xi)\gg 2 ^{-q_0}$.
It follows that $\widehat{M_N^{\omega }} (\xi)$ is relatively small. This allows us to estimate $$\begin{aligned} \lVert \widehat{\mathop{\mathrm{Hi}}_{Q,N} ^2}(\xi) \rVert_\infty &\leq \sum_{s} \sum_{ \substack{2^s \leq N(q) < 2^{s+1} \\ N(q) < Q}} \max_{ \frac{a}{q} \in \mathcal R_s} \bigl\lVert \Phi(a,\bar q) \widehat{M_N^{\omega }} (\xi-\frac{a}{q}) \{\Delta_{q_0} (\xi-\frac{a}{q}) - \Delta_s (\xi-\frac{a}{q})\} \bigr\rVert_\infty \\ &\ll \sum_{s} 2^{- 3s/4} \min\{1, (N2^{-q_0})^{-3/4} + N^{-1/2} \} \ll Q^{-1}. \end{aligned}$$ Here, we have used the disjointness of support for the different functions, and the exponential sum estimate Lemma [Lemma 1](#expsum){reference-type="ref" reference="expsum"}. It remains to bound the term $\widehat{\mathop{\mathrm{Hi}}_{Q,N} ^1}$. But the smallest denominator $q$ that we sum over satisfies $N(q)\geq Q$, so a similar argument leads to $$\begin{aligned} \lVert \widehat{\mathop{\mathrm{Hi}}_{Q,N} ^1} \rVert_\infty & \ll \sum_{Q<2^s<N^{1/4}} \sum_{ \substack{2^s \leq N(q) < 2^{s+1} \\ N(q) \geq Q}} \max_{ \frac{a}{q} \in \mathcal R_s} \lvert \Phi(a,\bar q) \rvert \\ & \ll \sum_{Q<2^s<N^{1/4}} 2^{-(1- \epsilon )s} \ll Q^{-1+ \epsilon }. \end{aligned}$$ ◻ We turn to the Low term. It has an explicit form in spatial variables. **Lemma 1**. *For $x\in \mathbb Z[i]$ we have the equality below, in which recall that $2^{q_0} = Q < (\log N)^B$. $$\label{e:LoEquals} \textup{Lo}_{N,Q} ^\omega (x) = \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(x) \sum_{ q \colon N(q) < Q }\frac{\mu (q) \tau _q (x)}{ \phi (q)} .$$ And, moreover, for all $\epsilon >0$, and non-negative $f$ $$\begin{aligned} \label{e:LOOO} \mathop{\mathrm{Lo}}_{Q,N} ^\omega f (x) \ll Q ^{ \epsilon } \bigl[ ( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}) \ast f ^{1+ \epsilon } (x) \bigr] ^{1/(1+\epsilon )}.
\end{aligned}$$* *Proof.* We have for fixed $q$, $$\begin{aligned} \sum_{a \in \mathbb{A}_q}\Phi(a,\bar q) \int_{D}\widehat{M_N^\omega}(\xi-\frac{a}{q})\Delta_{q_0}(\xi-\frac{a}{q})e(\langle x,\xi\rangle)d\xi & =\sum_{a \in \mathbb{A}_q}\Phi(a,\bar q) e(\langle x,\frac{a}{q}\rangle)\left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(x) \\ & = \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(x) \frac{1}{ \phi (\bar q)} \sum_{a \in \mathbb{A}_q}\tau_{\bar q}(a) e(\langle x,\frac{a}{q}\rangle) \\ & = \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(x) \frac{1}{ \phi (q)} \sum_{r \in \mathbb{A}_{\bar q} } \tau_{q}(x+r) .\end{aligned}$$ We then apply Lemma [Lemma 1](#xplusrestimate-lm){reference-type="ref" reference="xplusrestimate-lm"}, and sum over $Q$-smooth denominators $q$ to conclude the first claim [\[e:LoEquals\]](#e:LoEquals){reference-type="eqref" reference="e:LoEquals"}. For the second claim, we use [\[e:BourgainRamanujan\]](#e:BourgainRamanujan){reference-type="eqref" reference="e:BourgainRamanujan"} in the standard way.
Fix $s$ with $2 ^{s-1} \leq Q$, and consider the operator $A_s$ with kernel $$A_s (x) = \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(x) \sum _{q \colon 2 ^{s-1} \leq N(q) < 2 ^{s}} \frac {\lvert \tau _q (x) \rvert}{\phi (q)} .$$ For an integer $k > 2\epsilon ^{-1}$, and non-negative $f \in \ell^{1+ \epsilon }$, we have $$\begin{aligned} A_s f (x) & = \sum_y \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(y) \sum _{q \colon 2 ^{s-1} \leq N(q) < 2 ^{s}} \frac {\lvert \tau _q (y) \rvert}{\phi (q)} f (x-y) \\ & \leq \Bigl[ \sum_y \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(y) f(x-y) ^{k/(k-1)} \Bigr] ^{(k-1)/k} \\ \qquad & \times \Bigl[ \sum_y \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(y) \sum _{q \colon 2 ^{s-1} \leq N(q) < 2 ^{s}} \Bigl[ \frac {\lvert \tau _q (y) \rvert}{\phi (q)}\Bigr] ^{k} \Bigr] ^{1/k} \\ & \ll 2 ^{\epsilon s} \Bigl[ \sum_y \left( M_N^{\omega } \ast \widecheck{\Delta_ {q_0}}\right)(y) f(x-y) ^{k/(k-1)} \Bigr] ^{(k-1)/k} . \end{aligned}$$ We sum this over $s$ with $2 ^{s-1} \leq Q$ to complete the proof of [\[e:LOOO\]](#e:LOOO){reference-type="eqref" reference="e:LOOO"}. ◻ # Improving Inequalities In this brief section, we establish the improving inequalities, namely Theorem [Theorem 1](#t:fixedscale){reference-type="ref" reference="t:fixedscale"}, and list some additional results that can be established with the same techniques. For the convenience of the reader, we restate the improving inequality here, in a slightly more convenient form for our subsequent discussion. There is no loss of generality in reducing to the case in which the sector is all of $\mathbb{T}$. **Theorem 1**. *For all $N$, and $1< p \leq 2$, we have, for $\omega = \mathbb{T}$, and functions $f, g$ supported on $[0,\sqrt N]^2$, $$\begin{aligned} N^{-1} \langle A_N ^{\mathbb{T} }f,g\rangle \ll N ^{- 2/p} \lVert f \rVert_p \lVert g \rVert_p. \end{aligned}$$* *Proof.* As the angle $\omega = \mathbb{T}$, we suppress it in the notation. It suffices to prove the inequalities in the open range $1 < p < 2$.
Moreover, it suffices to consider the case that $f = \mathbf{1}_F$ and $g= \mathbf{1}_G$, for $F, G \subset \{ n \colon N(n) \leq N\}$; interpolation then proves the inequalities as stated. Dominating the von Mangoldt function by $\log N$, we always have $$\begin{aligned} \langle A_Nf,g\rangle \ll N (\log N) \lvert F \rvert \cdot \lvert G \rvert. \end{aligned}$$ We can then immediately deduce the inequality if $$N^{-2} \lvert F \rvert \cdot \lvert G \rvert \ll (\log N) ^{-2p'}.$$ So, we assume that this inequality fails, which will allow us to use our High Low decomposition. Namely, for $0< \epsilon < 1/2$ sufficiently small, set $$Q ^{ \frac {2(1+ \epsilon)}{1- \epsilon } } \simeq \frac {N^2} { \lvert F \rvert \cdot \lvert G \rvert } \ll (\log N) ^{2p'}.$$ Write $A_N = \textup{Hi}_{N,Q} + \textup{Lo} _{N,Q}$. Appealing to [\[e:Hi\]](#e:Hi){reference-type="eqref" reference="e:Hi"} for the High term, and [\[e:LOOO\]](#e:LOOO){reference-type="eqref" reference="e:LOOO"} for the Low term, we have $$\begin{aligned} \langle\textup{Hi}_{N,Q} f, g \rangle & \ll Q ^{-1+ \epsilon } \lVert f \rVert_2 \lVert g \rVert_2 \ll Q ^{-1+ \epsilon } [ \lVert f \rVert_1 \lVert g \rVert_1] ^{1/2}, \\ \langle\textup{Lo}_{N,Q} f, g \rangle & \ll N Q ^{\epsilon } \lVert f \rVert_{1+ \epsilon } \lVert g \rVert_{1+\epsilon }. \end{aligned}$$ By the choice of $Q$, the two upper bounds nearly agree and are at most $$\langle\textup{Lo}_{N,Q} f, g \rangle \simeq N ^{-1 + 2 \epsilon }\bigl[ \lvert F \rvert \cdot \lvert G \rvert] ^{1- 2 \epsilon }.$$ That is the desired inequality, for $p' = \frac {1+ 2 \epsilon } \epsilon$, and so completes the proof. ◻ The techniques developed to establish the improving inequality can be elaborated on to prove additional results. We briefly describe them here. 1. An $\ell^p \to \ell^p$ inequality, for $1< p < \infty$, for the maximal function $\sup_N \lvert A_N f \rvert$. Compare to [@MR995574]. 2. A $(p,p)$, $1< p <2$, sparse bound for the maximal function.
Here we use the terminology of [@MR4072599], for instance. The interest in the sparse bound is that it immediately implies a range of weighted inequalities. 3. One can establish pointwise convergence of ergodic averages. Let $(T_1, T_2)$ be commuting invertible measure preserving transformations of a probability space $(X, \mu)$. For all $1< p < \infty$, and $f\in L^p (X)$, the limit $$\lim_{N} \frac1 N \sum_{ N(n)<N} \Lambda (n) f(T^n x)$$ exists for a.e. $x$. Here, $T^{a+ib}=T_1^aT_2^{b}$. Compare to [@MR3646766]. We have given references particular to the primes (in $\mathbb Z$). # Goldbach Conjecture {#s:Goldbach} The purpose of this section is to prove analogues of the Goldbach Conjecture in the Gaussian setting. We recall some elementary facts about Gaussian primes. We address a binary and a ternary form of the Goldbach conjecture. The binary form is addressed in density form: most even Gaussian integers are the sum of two primes. On the other hand, all sufficiently large odd integers are the sum of three primes. We further restrict the arguments of the integers to be in a fixed interval. ## The Binary Goldbach Conjecture {#sub:the_binary_goldbach_conjecture} The Goldbach Conjecture states that every even Gaussian integer can be written as the sum of two primes. We prove a density version of this result. It uses the High/Low decomposition. Observe that $$A_N ^\omega \ast A_N ^\omega (n) = \frac {2 \pi} { \lvert \omega\rvert^2 N^2} \sum_{ \substack{N(m_1), N(m_2) <N \\ m_1 + m_2 =n \\ \arg(m_1), \arg(m_2) \in \omega}} \Lambda (m_1) \Lambda(m_2).$$ If the sum is non-zero, $n$ can be represented as the sum of two numbers in the support of the von Mangoldt function $\Lambda$ intersected with $S ^ \omega_N$, where we define $$S ^\omega_N = \{ n \colon N(n) <N, \arg(n) \in \omega \}.$$ The von Mangoldt function is supported on the Gaussian primes and their powers, and the proper prime powers contribute at most $\ll \sqrt N$ elements.
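The prime power bound just quoted is easy to check numerically. The following sketch is not part of the paper, and all function names are our own; it counts, up to unit multiples, the proper prime powers $\rho^k$, $k \geq 2$, of norm below $N$, using the standard splitting of rational primes in $\mathbb Z[i]$ ($2$ ramifies, $p \equiv 1 \pmod 4$ splits into two non-associate primes of norm $p$, and $p \equiv 3 \pmod 4$ is inert, of norm $p^2$).

```python
# Sanity check, not from the paper: the number of proper Gaussian prime
# powers rho^k (k >= 2) of norm below N is O(sqrt(N)), so discarding them
# from the support of the von Mangoldt function is harmless.
import math

def rational_primes(n):
    """Rational primes p < n, by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n - 1) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n, p)))
    return [p for p in range(n) if sieve[p]]

def gaussian_prime_norms(bound):
    """Pairs (norm, count) over Gaussian primes of norm < bound, counted
    up to unit multiples: 2 ramifies, p = 1 (mod 4) splits into two
    non-associate primes of norm p, p = 3 (mod 4) is inert, norm p^2."""
    out = []
    for p in rational_primes(bound):
        if p == 2:
            out.append((2, 1))
        elif p % 4 == 1:
            out.append((p, 2))
        elif p * p < bound:
            out.append((p * p, 1))
    return out

def count_prime_powers(N):
    """Count rho^k with k >= 2 and N(rho^k) = N(rho)^k < N."""
    total = 0
    for norm, count in gaussian_prime_norms(math.isqrt(N) + 1):
        power = norm * norm
        while power < N:
            total += count
            power *= norm
    return total

for N in (10**4, 10**6):
    assert count_prime_powers(N) < math.isqrt(N)
```

For $N = 10^4$ and $N = 10^6$ the counts come out far below $\sqrt N$, consistent with the claim.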
Thus, it suffices to establish that $$\lvert \{ n \in S ^\omega_N \colon \textup{$n$ even \& } A_N ^\omega\ast A_N^\omega (n) = 0 \} \rvert \ll \frac N {(\log N)^B} .$$ Recall from [\[e:Lo\]](#e:Lo){reference-type="eqref" reference="e:Lo"}, that we can write $$\label{e:GHiLo} A_N ^\omega = {\textup{Hi} } + {\textup{Lo} }.$$ This depends upon a choice of $Q = (\log N)^B$ for some sufficiently large $B$, and we suppress the dependence on $Q$, $N$ and $\omega$ in the notation. Thus, we write $$\label{e:A=LoHi} A_N \ast A_N = \textup{Hi} \ast A_N + \textup{Lo} \ast \textup{Hi} + \textup{Lo} \ast \textup{Lo}.$$ On the right, it is the last term that is crucial. We further write it as $$\textup{Lo} \ast \textup{Lo} = \textup{Main} + \textup{Error}.$$ Aside from the main term, everything is small. Our Theorem easily follows from the Lemma below, in which we collect the required estimates. **Lemma 1**. *We have the estimates below, valid for all choices of $B>1$. $$\begin{aligned} \label{e:LoLo} N ^{-1} &\ll \min _{ \substack{ n\in S^\omega_N \\ n\ \textup{even} } }\textup{Main} (n), \\ \label{e:Error} \lvert \{ 0< N(n) <N \colon \lvert \textup{Error} (n) \rvert &> N^{-1} (\log N)^{ -(B-1)/2} \} \rvert \ll N (\log N) ^{- (B-1)/2}, \\ \label{e:LoHi} \lVert \textup{Lo} \ast \textup{Hi} (n) \rVert _{\ell^2} &\ll ( N (\log N)^{B-1}) ^{-1/2}, \\ \label{e:AHi} \lVert \textup{Hi} \ast A_N (n) \rVert _{\ell^2}& \ll ( N (\log N)^{B-1}) ^{-1/2}. \end{aligned}$$* We focus on the first estimate above. The Main term is $$\label{e:Main} \textup{Main}(x) \coloneqq \sum_{\textup{$q$ is $Q$-smooth}} \frac{1}{\phi(q)^2} \sum_{a \in \mathbb{A}_q} \tau_{q} (a)^2 e\bigl(\langle \tfrac{a}{q} ,x\rangle\bigr) \int_{\mathbb{T}^2} \widehat{M_N^\omega}(\xi)^2 e\bigl(\langle \xi ,x\rangle\bigr) \; d\xi$$ Above, we say that $q$ is *$Q$-smooth* if $q$ is square free and all prime factors $\rho$ of $q$ satisfy $N(\rho)< Q$.
The expression above can be calculated explicitly, using the Ramanujan like sum [\[def-generalizedareagausssum1\]](#def-generalizedareagausssum1){reference-type="eqref" reference="def-generalizedareagausssum1"}. **Lemma 1**. *Recall that $2^{q_0} = Q < (\log N)^B$. For every $N(x) <N$ we have $$\begin{aligned} \label{e:lowlow} \textup{Main}(x) = M_N ^{\omega} \ast M_N ^{\omega} (x)\sum_{\textup{$q$ is $Q$-smooth}} \frac{|\mu(q)|}{\phi(q)^2} \tau_{q} (x). \end{aligned}$$* *Proof.* The term in [\[e:Main\]](#e:Main){reference-type="eqref" reference="e:Main"} is $$\begin{aligned} \textup{Main} (x) &= M_N ^{\omega} \ast M_N ^{\omega} (x) \sum_{\textup{$q$ is $Q$-smooth}} \frac{1}{\phi(q)^2} \sum_{a \in \mathbb{A}_q} \tau_{q} (a)^2 e\bigl(\langle \tfrac{a}{q} ,x\rangle\bigr) \end{aligned}$$ In the arithmetic term above, we fix $q$, expand the Ramanujan sums, and use Lemma [Lemma 1](#xplusrestimate-lm){reference-type="ref" reference="xplusrestimate-lm"}. This gives us $$\begin{aligned} \frac{1}{\phi(q)^2} \sum_{a \in \mathbb{A}_q} \tau_{q} (a)^2 e\bigl(\langle \tfrac{a}{q} ,x\rangle\bigr) &= \frac{1}{\phi(q)^2} \sum_{a \in \mathbb{A}_q} \Bigl[ \sum_{r\in \mathbb{A}_q} e\bigl(\langle \tfrac{a}{q} ,r\rangle\bigr) \Bigr]^2 e\bigl(\langle \tfrac{a}{q} ,x\rangle\bigr) \\ & = \frac{1}{\phi(q)^2} \sum_{r_1 \in \mathbb{A}_q} \sum_{r_2 \in \mathbb{A}_q} \tau_{q} (x+r_1+r_2) \\ &=\frac{1}{\phi(q)^2} \sum_{r_1 \in \mathbb{A}_q} \tau_{q} (x+r_1)\tau_{\bar q} (1) \\ \label{e:tautau} & = \frac{1}{\phi(q)^2} \tau_{q} (x)\tau_{\bar q} (1)^2 , \end{aligned}$$ where in the last line we have used Lemma [Lemma 1](#xplusrestimate-lm){reference-type="ref" reference="xplusrestimate-lm"} again. Finally $\tau_{\bar q} (1)^2 = \lvert \mu (q) \rvert$.
◻ On the right in our equality for the Low-Low expression [\[e:lowlow\]](#e:lowlow){reference-type="eqref" reference="e:lowlow"}, the first convolution satisfies $$\min _{n \in S^\omega_N} M_N ^{\omega }\ast M_N ^{\omega}(n) \gg N^{-1}.$$ The implied constant is a function of $\omega$, but at no point have we sought to track this dependence. We analyze the arithmetic part here, and the Lemma below completes the proof of [\[e:LoLo\]](#e:LoLo){reference-type="eqref" reference="e:LoLo"}. Indeed, it is exactly this Lemma and its proof that motivates the use of the smooth numbers. **Lemma 1**. *We have for all even $x$, $$\begin{aligned} \label{e:HLConstantlemma} \sum_{\textup{$q$ is $Q$-smooth}} \frac{|\mu(q)|}{\phi(q)^2} \tau_{q} (x) \gg 1 \end{aligned}$$* *Proof.* Exploit the multiplicative structure of the sum. One sees that it is a product over primes. For any integer $x$, $$\begin{aligned} \label{e:finallyconstant} \sum_{\textup{$q$ is $Q$-smooth}} \frac{|\mu(q)|}{\phi(q)^2} \tau_{q} (x) & = \prod_{\rho \colon N(\rho ) < Q } \Bigl( 1 + \frac{\tau_{\rho } (x)}{\phi(\rho)^2} \Bigr) \end{aligned}$$ Above, the product is over primes $\rho$ with $N(\rho )< Q$, up to multiplication by units. And recall that if $\rho\mid x$, we have $\tau_\rho(x)= \phi(\rho)$. It is important to single out the prime $1+i$. This is the unique prime with $\phi (1+i)= 1$, and $$1 + \frac{\tau_{1+i} (x)}{\phi(1+i)^2} = \begin{cases} 2 & 1+i \mid x \\ 0 & 1+i \nmid x \end{cases}$$ Thus, if $1+i \nmid x$, the product in [\[e:finallyconstant\]](#e:finallyconstant){reference-type="eqref" reference="e:finallyconstant"} is zero. But, evenness of $x$ is equivalent to $1+i \mid x$. Thus, for even $x$, single out the case of $\rho=1+i$.
$$\begin{aligned} \prod_{\rho \colon N(\rho ) < Q } \Bigl( 1 + \frac{\tau_{\rho } (x)}{\phi(\rho)^2} \Bigr) &= 2\prod_{\substack{\rho|x,\ N(\rho ) < Q \\ \rho\neq 1+i }} \left( 1 + \frac{1}{\phi(\rho)} \right)\left( 1 - \frac{1}{\phi(\rho)^2} \right)^{-1} \times \prod_{\substack{\rho,\ N(\rho ) < Q \\ \rho\neq 1+i}} \Bigl( 1 - \frac{1}{\phi(\rho)^2} \Bigr) \\ & = 2\prod_{\substack{\rho|x,\ N(\rho ) < Q \\ \rho\neq 1+i}} \frac{\phi(\rho)}{\phi(\rho)-1} \times \prod_{\substack{\rho,\ N(\rho ) < Q \\ \rho\neq 1+i}} \Bigl( 1 - \frac{1}{\phi(\rho)^2} \Bigr) \\ & \coloneqq h (x) \mathcal{G}. \end{aligned}$$ In the last line, we have written the product as a 'local term' $h(x)$ and a 'global term' $\mathcal{G}$. If no $Q$-smooth prime other than $1+i$ divides $x$, we understand that the local term is $2$. Thus, $h(x)$ is always at least $2$ for even $x$. The global term is bounded below: $$\mathcal G \geq \prod _{\rho } \Bigl( 1 - \frac{1}{\phi(\rho)^2} \Bigr),$$ and the infinite product converges to a positive number, since $$\begin{aligned} \sum_{\rho\ \textup{prime}} \frac{1}{\phi(\rho)^2} & \ll \sum_{\rho\ \textup{prime}} N(\rho)^{-2} \ll \sum_{k=1}^\infty k 2^{-k} < \infty. \end{aligned}$$ That completes our proof. ◻ *Proof of [\[e:Error\]](#e:Error){reference-type="eqref" reference="e:Error"}.* We control the difference between the Low term and the Main term. This is the term $\textup{Error}$, and we only seek a distributional estimate on it. We begin by writing out this term explicitly.
Recall that $$\widehat{\mathop{\mathrm{Lo}}}(\xi) = \sum_{q \colon N(q)< Q} \sum_{a\in \mathbb{A}_q} \Phi(a,\bar q) \widehat{M_N ^{\omega }} (\xi-\frac{a}{q}) \Delta_{q_0} (\xi-\frac{a}{q}).$$ By the disjointness of the supports of $\Delta _{q_0}( \cdot - \frac aq)$, for $N(q)< Q$, and the definition of the $\textup{Main}$ term in [\[e:Main\]](#e:Main){reference-type="eqref" reference="e:Main"}, we see that $$\begin{aligned} \widehat{\mathop{\mathrm{Lo}}}(\xi) \cdot \widehat{\mathop{\mathrm{Lo}}}(\xi) & = \sum _{q \colon N(q)< Q} \sum _{a\in \mathbb{A} _q } \frac{\tau_{q} (a)^2}{\phi(q)^2} \Delta_ {q_0}(\xi - a/q)^2 \widehat{M_N^\omega}(\xi -a/q)^2 \end{aligned}$$ We have done the work to invert this Fourier transform. In particular from [\[e:tautau\]](#e:tautau){reference-type="eqref" reference="e:tautau"}, we have $$\begin{aligned} {\mathop{\mathrm{Lo}}}\ast {\mathop{\mathrm{Lo}}}(x) & = M_N ^{\omega } \ast \check \Delta _{q_0} \ast M_N ^{\omega } \ast \check \Delta _{q_0} (x) \sum _{q \colon N(q) < Q } \frac{\lvert \mu (q) \rvert}{\phi(q)^2} \tau_{q} (x) . \end{aligned}$$ We can then explicitly write $\textup{Error}(x)$ as the sum of the following three terms. $$\begin{aligned} \label{e:E1} E_1 (x) & \coloneqq \int (1 - \Delta _{q_0} (\xi ) ^2 ) \widehat{M_N ^{\omega }} (\xi) ^2 e ( \langle \xi ,x \rangle) \; d \xi \sum _{q \colon N(q) < Q } \frac{\lvert \mu (q) \rvert}{\phi(q)^2} \tau_{q} (x) , \\ \label{e:E2} E_2 (x) & \coloneqq M_N ^{\omega } \ast M_N ^{\omega } (x) \sum _{q \colon Q \leq N(q) < N^{1/8} } \frac{\lvert \mu (q) \rvert}{\phi(q)^2} \tau_{q} (x) , \\ \label{e:E3} E_3 (x) & \coloneqq M_N ^{\omega } \ast M_N ^{\omega } (x) \sum _{q \colon N(q) \geq N^{1/8} } \frac{\lvert \mu (q) \rvert}{\phi(q)^2} \tau_{q} (x) . \end{aligned}$$ We address them in order. For the control of $E_1$ defined in [\[e:E1\]](#e:E1){reference-type="eqref" reference="e:E1"}, an easily accessible $\ell^2$ estimate applies. We recall that $Q \ll (\log N)^B$.
By selection of $\Delta_{q_0}$ as a scaled version of a Fejer kernel, we have $1 - \Delta _{q_0} (\xi ) ^2 =O(Q^{-3})$ if $N(\xi ) < Q^{-4}$. So in this range we have $$\begin{aligned} \bigl\lVert \int_{N(\xi)<Q^{-4}} (1 - \Delta _{q_0} (\xi ) ^2 ) \widehat{M_N ^{\omega }} (\xi) ^2 e ( \langle \xi ,x \rangle) \; d \xi \bigr\rVert _{\ell^2} \ll Q^{-3}\int |\widehat{M_N^{\omega}}(\xi)|^2d\xi\ll N^{-1}Q^{-3}.\end{aligned}$$ On the complementary range $N(\xi) \geq Q^{-4}$, the decay of $\widehat{M_N ^{\omega }}$ supplies at least as good a bound. It then follows that $$\bigl\lVert \int (1 - \Delta _{q_0} (\xi ) ^2 ) \widehat{M_N ^{\omega }} (\xi) ^2 e ( \langle \xi ,x \rangle) \; d \xi \bigr\rVert _{\ell^2} \ll N ^{-1}Q^{-3}.$$ On the other hand, it is easy to see that $$\sum _{q \colon N(q) < Q } \frac{\lvert \mu (q) \rvert}{\phi(q)^2} \tau_{q} (x) \ll Q .$$ These two estimates prove the control required in [\[e:Error\]](#e:Error){reference-type="eqref" reference="e:Error"}. In particular we have $$\lVert E_1 \rVert _{\ell^2} \ll ( N^2 (\log N)^{B-1}) ^{-1/2}.$$ This estimate is stronger than the required distributional estimate. We turn to the term $E_2$ defined in [\[e:E2\]](#e:E2){reference-type="eqref" reference="e:E2"}. For this and $E_3$, the leading term satisfies $M_N ^{\omega } \ast M_N ^{\omega } (x) \ll N ^{-1}$, so our focus is on the arithmetic terms. The definition of $E_2$ requires that the denominators $q$ are at least $Q$, and less than $N^{1/8}$. The point is that the Ramanujan function $\tau _q$ is rarely more than $1$, as quantified by [\[e:BourgainRamanujan\]](#e:BourgainRamanujan){reference-type="eqref" reference="e:BourgainRamanujan"}. For integers $s$ with $Q/2 < 2^s \leq N ^{1/8}$, we have $$\Bigl\lvert \Bigl\{ N(x) < N \colon \sum _{ 2^s< N(q) \leq 2 ^{s+1}} \lvert \tau _q (x)\rvert > 2 ^{3s/2 } \Bigr\}\Bigr\rvert \ll N 2 ^{-3s}.$$ This follows from [\[e:BourgainRamanujan\]](#e:BourgainRamanujan){reference-type="eqref" reference="e:BourgainRamanujan"}, with $k=8$, and trivial bounds on the totient function.
This can be summed over these values of $s$ to complete the treatment of the term $E_2$. Indeed, we have $$\begin{aligned} \Bigl\lvert \Bigl\{ N(x) < N \colon \sum _{ 2^s< N(q) \leq 2 ^{s+1}} \lvert \tau _q (x)\rvert > 2 ^{3s/2 }\Bigr\}\Bigr\rvert & \ll 2 ^{-12s} \sum _{x \colon N(x)<N} \Bigl[ \sum _{ 2^s< N(q) \leq 2 ^{s+1}} \lvert \tau _q (x)\rvert \Bigr] ^{8} \\ & \ll 2^{-3s} N. \end{aligned}$$ Using a trivial lower bound on the totient function completes this case. We turn to the term $E_3$ defined in [\[e:E3\]](#e:E3){reference-type="eqref" reference="e:E3"}. In this case, we require $N^{1/8} <N(q)$, and $N(q)$ can be as large as $e^{cQ} = e^{c(\log N)^B}$. This is too large to directly apply the previous argument. Instead, we will specialize the proof of [\[e:BourgainRamanujan\]](#e:BourgainRamanujan){reference-type="eqref" reference="e:BourgainRamanujan"} to this setting. For an integer $s$ with $N^{1/8} <2^{s+1}$, estimate $$\begin{aligned} \label{e:E31} \sum _{\substack{q \colon 2^s\leq N(q) < 2 ^{s+1} \\ q \textup{\ $Q$-smooth}} } \frac{\lvert \mu (q) \rvert}{\phi(q)^2} \tau_{q} (x) & \ll s ^2 2 ^{-2s} \sum _{\substack{q \colon 2^s\leq N(q) < 2 ^{s+1} \\ q \textup{\ $Q$-smooth}} } N((x,q)) . \end{aligned}$$ Above, we have used the familiar upper bound $\tau _q (x) \ll N((x,q))$. Write $(x,q)=d$ and $q= q' d$. Continue $$\begin{aligned} \eqref{e:E31} & \ll s ^2 2 ^{-2s} \sum_{d \textup{\ $Q$-smooth}} \mathbf{1}_{d\mid x } N(d) \sum _{\substack{q' \colon 2^s\leq N(q')N(d) < 2 ^{s+1} \\ q' \textup{\ $Q$-smooth}} } \mathbf{1} \\ & \ll s ^2 2 ^{-s} \sum_{d \textup{\ $Q$-smooth}} \mathbf{1}_{d\mid x } \ll 2 ^{-s/2}.\end{aligned}$$ In the last line, we have $0< N(x) < N$, and $2^{s+1}> N^{1/8}$, so that we can use a favorable estimate on the divisor function. This estimate is summable in $s$.
It follows that for $0<N(x) < N$ we have $$E_3(x) \ll N ^{-17/16}.$$ This completes the analysis of the Error term. ◻ *Proof of [\[e:LoHi\]](#e:LoHi){reference-type="eqref" reference="e:LoHi"}.* We have from [\[e:Hi\]](#e:Hi){reference-type="eqref" reference="e:Hi"} and [\[e:LOOO\]](#e:LOOO){reference-type="eqref" reference="e:LOOO"}, $$\begin{aligned} \lVert \textup{Lo} \ast \textup{Hi} \rVert _{\ell^2} ^2 & = \int _{\mathbb{T}^2} \lvert \widehat {\textup{Lo}} \cdot \widehat{\textup{Hi}} \rvert ^2 \; d \xi \\ & \ll Q ^{-1+ \epsilon } \int _{\mathbb{T}^2} \lvert \widehat {\textup{Lo}} \rvert ^2 \; d \xi \\ & \ll \frac {Q ^{-1+ 2\epsilon }} N . \end{aligned}$$ ◻ The same argument immediately implies the final estimate [\[e:AHi\]](#e:AHi){reference-type="eqref" reference="e:AHi"}. That completes the proof of Lemma [Lemma 1](#l:Estimates){reference-type="ref" reference="l:Estimates"}, and hence the proof of our binary Goldbach Theorem. ## The Ternary Goldbach Conjecture {#sub:ternary_goldbach} We turn to the ternary Goldbach conjecture, using the same notation. We will show that for all intervals $\omega \subset \mathbb T$, there is an $N_\omega$, so that for $N>N_\omega$, we have $$\label{e:TernaryGoldbach} \inf_{ \substack{n\in S^\omega _N \\ \textup{$n$ odd}}} A_N ^\omega\ast A_N^\omega \ast A_N^\omega (n) \gg N^{-1}.$$ That is, every odd integer in $S^\omega _N$ has many representations as a sum of three primes, each of which is in the sector $\omega$. Indeed, there are as many as $N^2/(\log N)^2$ representations. This holds for all large $N$, and we can assume that the sector is small, so this completes the proof of the ternary Goldbach Theorem. It remains to establish [\[e:TernaryGoldbach\]](#e:TernaryGoldbach){reference-type="eqref" reference="e:TernaryGoldbach"}.
We turn to the High/Low decomposition as in [\[e:GHiLo\]](#e:GHiLo){reference-type="eqref" reference="e:GHiLo"}, and write $$\begin{aligned} A_N ^\omega\ast A_N^\omega \ast A_N^\omega &= \textup{Hi} \ast A_N^\omega \ast A_N^\omega + \textup{Lo} \ast A_N^\omega \ast A_N^\omega \\ &=\textup{Hi} \ast A_N^\omega \ast A_N^\omega + \textup{Lo} \ast \textup{Hi} \ast A_N^\omega + \textup{Lo} \ast \textup{Lo} \ast A_N^\omega \\ \label{e:Err3terms} &=\textup{Hi} \ast A_N^\omega \ast A_N^\omega + \textup{Lo} \ast \textup{Hi} \ast A_N^\omega + \textup{Lo} \ast \textup{Lo} \ast \textup{Hi} + \textup{Lo} \ast \textup{Lo} \ast \textup{Lo} \\ &\eqqcolon \textup{Err} + \textup{Lo} \ast \textup{Lo} \ast \textup{Lo}. \end{aligned}$$ As with [\[e:Main\]](#e:Main){reference-type="eqref" reference="e:Main"}, we write $$\label{e:Main3} \textup{Lo} \ast \textup{Lo} \ast \textup{Lo} = \textup{Main}+\textup{Error}.$$ Thus, our focus is on the Lemma below, which easily completes the proof. **Lemma 1**. *We have the estimates $$\begin{aligned} \label{e:LoLoLo} N ^{-1} &\ll \min _{ \substack{ x\in S^\omega_N \\ \textup{$x$ odd} } }\textup{Main}(x), \\ \label{e:Err} \lVert \textup{Error} \rVert _{\ell^ \infty} + \lVert \textup{Err} \rVert _{\ell^ \infty} &\ll [ N (\log N)^{B-3}]^{-1}. \end{aligned}$$* Recalling the definition of $Q$-smooth from [\[e:Main\]](#e:Main){reference-type="eqref" reference="e:Main"}, we set $$\label{e:ternary1} \textup{Main}(x) \coloneqq \tilde M (x) \sum_{\textup{$q$ is $Q$-smooth}}\frac{\mu(q)}{\phi(q)^3}\tau_q(x),$$ where $$\tilde M(x) \coloneqq M_N^{\omega}\ast M_N^{\omega}\ast M_N^{\omega}\ast \widecheck{\Delta_{q_0}}\ast\widecheck{\Delta_{q_0}}\ast\widecheck{\Delta_{q_0}}(x).$$ *Proof of [\[e:LoLoLo\]](#e:LoLoLo){reference-type="eqref" reference="e:LoLoLo"}.* The details here are very close to those of the binary case, so we will be a little brief.
Following the arguments of Lemma [Lemma 1](#l:lowlowpart){reference-type="ref" reference="l:lowlowpart"}, in the definition of the Main term, we have $\inf _{x\in S^\omega _N} \tilde M(x) \gg N^{-1}$. In the arithmetic part of [\[e:ternary1\]](#e:ternary1){reference-type="eqref" reference="e:ternary1"}, we have the Möbius function $\mu (q)$, instead of $\lvert \mu (q) \rvert$ as in the binary case, and a third power of the totient function. Using the multiplicative properties, we have $$\begin{aligned} \sum_{\textup{$q$ is $Q$-smooth}}\frac{\mu(q)}{\phi(q)^3}\tau_q(x) &=\prod_{N(p)<Q,p|x}\left(1-\frac{1}{\phi^2(p)}\right)\prod_{N(p)<Q,(p,x)=1}\left(1+\frac{1}{\phi^3(p)}\right) \\&\coloneqq h_3(x)\mathcal{G}_3\end{aligned}$$ Parity is again crucial. We have $$1-\frac{\tau_{1+i}(x)}{\phi^3(1+i)} = \begin{cases} 0 & \textup{$x$ even} \\ 2 & \textup{$x$ odd} \end{cases}$$ That is, the product vanishes for even $x$, and is positive for odd $x$. Note that $\mathcal{G}_3=O(1)$, because, upon expanding the product, $$\begin{aligned} \mathcal{G}_3 = \prod_{N(p)<Q,(p,x)=1}\left(1+\frac{1}{\phi^3(p)}\right) < \sum_{n} \frac{|\mu(n)|}{N(n)^2} = O(1).\end{aligned}$$ This completes the proof of [\[e:LoLoLo\]](#e:LoLoLo){reference-type="eqref" reference="e:LoLoLo"}. ◻ *Proof of [\[e:Err\]](#e:Err){reference-type="eqref" reference="e:Err"}.* There are two estimates to prove. The first is to bound the $\ell^\infty$ norm of $$\begin{aligned} \textup{Error} &\coloneqq \textup{Lo} \ast \textup{Lo} \ast \textup{Lo} -\textup{Main} . \end{aligned}$$ Switch to Fourier variables. We have $$\begin{aligned} \widehat{ \textup{Lo} }(\xi ) ^{3} = \sum _{q \colon N(q)< Q} \sum _{a \in \mathbb{A} _q} \widehat{\tilde M}(\xi - a/q) \frac{\tau _q (a) ^{3}}{\phi(q)^3} . \end{aligned}$$ It follows that $$\begin{aligned} \widehat{\textup{Error}} (\xi ) = \sum _{ \substack {q \textup{\ $Q$-smooth} \\ N(q)\geq Q}} \sum _{a \in \mathbb{A} _q} \widehat{\tilde M}(\xi - a/q)\frac{\tau _q (a) ^{3}}{\phi(q)^3}.
\end{aligned}$$ So, the $\ell^\infty$ norm of $\textup{Error}$ is at most the $L^1(\mathbb{T} ^2 )$ norm of the expression above. That is at most $$\begin{aligned} \sum _{ \substack {q \textup{\ $Q$-smooth} \\ N(q)\geq Q}} \Bigl\lVert \sum _{a \in \mathbb{A} _q} \widehat{\tilde M}(\xi - a/q) & \frac{\tau _q (a) ^{3}}{\phi(q)^3} \Bigr\rVert _{L^1(\mathbb{T} ^2 )} \\ & \ll N ^{-1} \sum _{ \substack {N(q)\geq Q}} \phi (q) ^{-2+\epsilon } \ll N ^{-1} Q ^{-1+\epsilon } .\end{aligned}$$ By our choice of $Q = (\log N)^B$, this estimate meets our requirements. The second estimate concerns the term $\textup{Err}$, which is a sum of three terms of the form $\phi _1 \ast \phi _2 \ast \phi _3$, where $\phi _j \in \{ A_N, \textup{Hi}, \textup{Lo} \}$, and at least one of the terms is a High term. See [\[e:Err3terms\]](#e:Err3terms){reference-type="eqref" reference="e:Err3terms"}. We control each term. To fix ideas, consider $$\begin{aligned} \lVert \mathop{\mathrm{Hi}}\ast A_N^{\omega}\ast A_N^{\omega}\rVert_{\infty} &\leq \lVert \mathop{\mathrm{Hi}}\ast A_N^{\omega}\rVert_{2} \lVert A_N^{\omega}\rVert_{2} \\ &\leq \lVert \widehat{\mathop{\mathrm{Hi}}}\widehat{ A_N^{\omega}}\rVert_{2} \lVert \widehat{A_N^{\omega}}\rVert_{2} \\ &\leq \lVert \widehat{\mathop{\mathrm{Hi}}}\rVert_{\infty} \lVert \widehat{A_N^{\omega}}\rVert_{2}^2\end{aligned}$$ where the last inequality is trivial. Now, $\lVert \widehat{\mathop{\mathrm{Hi}}}\rVert_{\infty} \ll Q^{-1+\epsilon}$, by [\[e:Hi\]](#e:Hi){reference-type="eqref" reference="e:Hi"}. Recall that $Q= (\log N)^B$, for an integer $B>10$. And, $$\begin{aligned} \lVert \widehat{A_N^{\omega}}\rVert_{2}^2 &= \lVert {A_N^{\omega}}\rVert_{2}^2 \\ &\ll N^{-2} \sum_{N(n)<N,\arg(n)\in \omega}\Lambda(n)^2 \\ &\ll N^{-1}(\log N)^2 \end{aligned}$$ We see that $\lVert \mathop{\mathrm{Hi}}\ast A_N^{\omega}\ast A_N^{\omega}\rVert_{\infty} \ll N^{-1} (\log N)^{2-(1-\epsilon)B}$. That completes this case.
The second term to control is $$\begin{aligned} \lVert \mathop{\mathrm{Hi}}\ast \mathop{\mathrm{Lo}}\ast \mathop{\mathrm{Lo}}\rVert_{\infty} &\leq \lVert \widehat{\mathop{\mathrm{Hi}}}\rVert_{\infty} \lVert \mathop{\mathrm{Lo}}\rVert_{2}^2. \end{aligned}$$ To estimate the last factor $\lVert \mathop{\mathrm{Lo}}\rVert_{2}^2$, use [\[e:LOOO\]](#e:LOOO){reference-type="eqref" reference="e:LOOO"}, which gives an even better estimate than in the first case. The third term to control is $$\begin{aligned} \lVert \mathop{\mathrm{Hi}}\ast A_N ^\omega \ast \mathop{\mathrm{Lo}}\rVert_{\infty} &\leq \lVert \widehat{\mathop{\mathrm{Hi}}}\rVert_{\infty} \lVert A_N^\omega \rVert_2 \lVert \mathop{\mathrm{Lo}}\rVert_{2}. \end{aligned}$$ But the right hand side is the geometric mean of the bounds for the other two terms. That completes the proof. ◻
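As a numerical sanity check on the parity mechanism in the binary and ternary singular series (not part of the argument above), one can evaluate truncations of both Euler products, using the closed form $\tau_\rho(x) = \phi(\rho)$ if $\rho \mid x$ and $\tau_\rho(x) = -1$ otherwise, valid for a Gaussian prime $\rho$. All function names below are our own.

```python
# Sanity check, not from the paper: truncated Euler products for the binary
# and ternary singular series over Gaussian primes of norm below Q, using
# tau_rho(x) = phi(rho) if rho | x and -1 otherwise, phi(rho) = N(rho) - 1.
import math

def rational_primes(n):
    """Rational primes p < n, by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n - 1) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n, p)))
    return [p for p in range(n) if sieve[p]]

def gaussian_primes(bound):
    """One representative a + bi per associate class, norms below bound."""
    reps = []
    for p in rational_primes(bound):
        if p == 2:
            reps.append((1, 1))                 # the ramified prime 1 + i
        elif p % 4 == 1:
            a = next(a for a in range(1, math.isqrt(p) + 1)
                     if math.isqrt(p - a * a) ** 2 == p - a * a)
            b = math.isqrt(p - a * a)
            reps += [(a, b), (a, -b)]           # two conjugate prime factors
        elif p * p < bound:
            reps.append((p, 0))                 # inert prime, norm p^2
    return reps

def divides(rho, x):
    """rho | x in Z[i]: both coordinates of x * conj(rho) are 0 mod N(rho)."""
    a, b = rho
    u, v = x
    n = a * a + b * b
    return (u * a + v * b) % n == 0 and (v * a - u * b) % n == 0

def binary_factor(x, Q):
    """Truncation of the product in (e:finallyconstant)."""
    prod = 1.0
    for rho in gaussian_primes(Q):
        phi = rho[0] ** 2 + rho[1] ** 2 - 1
        tau = phi if divides(rho, x) else -1
        prod *= 1 + tau / phi**2
    return prod

def ternary_factor(x, Q):
    """Ternary analogue, with the Moebius sign absorbed into the product."""
    prod = 1.0
    for rho in gaussian_primes(Q):
        phi = rho[0] ** 2 + rho[1] ** 2 - 1
        tau = phi if divides(rho, x) else -1
        prod *= 1 - tau / phi**3
    return prod

for x in [(2, 0), (1, 1), (3, 1)]:      # even: (1+i) divides x
    assert binary_factor(x, 200) > 1 and ternary_factor(x, 200) == 0
for x in [(1, 0), (2, 1), (3, 0)]:      # odd: (1+i) does not divide x
    assert binary_factor(x, 200) == 0 and ternary_factor(x, 200) > 0
```

The binary factor vanishes exactly at the odd Gaussian integers, and the ternary factor exactly at the even ones, matching the role played by the prime $1+i$ in the two proofs.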
arXiv:2309.14249 [math.NT, math.CA]. *Averages over the Gaussian Primes: Goldbach's Conjecture and Improving Estimates*, by Christina Giannitsi, Ben Krause, Michael Lacey, Hamed Mousavi, and Yaghoub Rahimi. License: CC BY 4.0.
--- abstract: | Building on a recent construction of G. Plebanek and the third named author, it is shown that a complemented subspace of a Banach lattice need not be linearly isomorphic to a Banach lattice. This solves a long-standing open question in Banach lattice theory. address: - | Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM)\ Consejo Superior de Investigaciones Científicas\ C/ Nicolás Cabrera, 13--15, Campus de Cantoblanco UAM\ 28049 Madrid, Spain. - Universidad de Murcia, Departamento de Matemáticas, Campus de Espinardo 30100 Murcia, Spain. - | Instituto de Matemáticas IMUEX\ Universidad de Extremadura\ Avda. de Elvas, s/n\ 06006 Badajoz, Spain. - | Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM)\ Consejo Superior de Investigaciones Científicas\ C/ Nicolás Cabrera, 13--15, Campus de Cantoblanco UAM\ 28049 Madrid, Spain. author: - David de Hevia - Gonzalo Martínez-Cervantes - Alberto Salguero-Alarcón - Pedro Tradacete title: A counterexample to the complemented subspace problem in Banach lattices --- # Introduction In their 1987 paper [@CKT], P.G. Casazza, N.J. Kalton and L. Tzafriri started by recalling: *"One of the most important problems in the theory of Banach lattices, which is still open, is whether any complemented subspace of a Banach lattice must be linearly isomorphic to a Banach lattice."* We will refer to this as the *Complemented Subspace Problem* (CSP, in short) for Banach lattices. This question could be framed in the larger research program of understanding the relation between linear and lattice structures on Banach lattices (see [@BVL; @JMST; @kalton; @LT2] or [@Meyer Chapter 5] for classical results, and [@AMRT; @ART; @DLOT; @LLOT] for more recent developments). The CSP for Banach lattices is motivated by the fact that most existing criteria that allow us to determine that a Banach space is not linearly isomorphic to a Banach lattice do not actually distinguish between Banach lattices and their complemented subspaces. 
These criteria include for instance the following well-known characterization of reflexivity: if a (complemented subspace of a) Banach lattice does not contain any subspace isomorphic to $c_0$ nor $\ell_1$, then it is reflexive [@LT2 Theorem 1.c.5]. A similar characterization of weak sequential completeness in terms of lack of $c_0$ subspaces is also known (see [@LT2 Section 1.c]). Another potential difference between Banach spaces and Banach lattices is that the latter contain plenty of unconditional sequences: every sequence of disjoint vectors in a Banach lattice is an unconditional basic sequence; nevertheless, every complemented subspace of a Banach lattice also contains an unconditional basic sequence ([@LT2 Proposition 1.c.6, Theorem 1.c.9]). Based on these criteria one can exhibit examples of Banach spaces which are not isomorphic to complemented subspaces of Banach lattices: the James space [@J] and the space $Y$ constructed by Bourgain and Delbaen in [@BD], since these are non-reflexive but do not contain $c_0$ nor $\ell_1$; also, any hereditarily indecomposable space failing to contain unconditional basic sequences [@GM]. Another example is the construction due to M. Talagrand ([@Tal], see also [@LAT]) of a reflexive space associated to a weakly compact operator $T\colon \ell_1\rightarrow C[0,1]$ which does not factor through any reflexive Banach lattice (the latter by completely different techniques from those mentioned above). The CSP for Banach lattices is actually behind a considerable amount of work in the literature. It was partly the motivation driving the research on local unconditional structures in Banach spaces (see [@BL76; @FJT; @GL]), which generalize the theory of $\mathcal{L}_p$ spaces. In connection with this, recall that a Banach space $X$ has local unconditional structure in the sense of Gordon-Lewis (GL-lust, in short) if and only if $X^{**}$ is complemented in a Banach lattice.
In particular, this allows us to add more examples to the list of spaces not isomorphic to complemented subspaces of Banach lattices: the Kalton-Peck space [@JLS], Schatten $p$-class operators (for $p\neq 2$) [@GL], $H^\infty(\mathbb{D})$ [@Pelczynski], or certain Sobolev spaces and spaces of smooth functions [@PW; @PW2; @Tse], as all of them fail GL-lust. In general, having local unconditional structure does not ensure being complemented in a Banach lattice: the examples constructed in [@BD] of $\mathcal{L}_\infty$-spaces without isomorphic copies of $c_0$ cannot be complemented subspaces of Banach lattices, since if this were the case, then those spaces would be complemented in their biduals [@LT2 Propositions 1.c.4, 1.c.6], which is impossible. Also, one could consider an analogous version of the CSP for *atomic* Banach lattices, and this turns out to be equivalent to the well-known open question of whether every complemented subspace of a space with an unconditional basis has an unconditional basis (a question attributed to S. Banach). Under additional assumptions on the projection, the CSP for Banach lattices is known to have an affirmative solution. This is the case, for example, for a *positive* projection on a Banach lattice, in which case the range is always isomorphic to a Banach lattice (see [@Schaefer-book p. 214]). As for contractive projections, it is well known that for any $1\leq p <\infty$ every $1$-complemented subspace of an $L_p$-space is isometrically isomorphic to an $L_p$-space [@Bernau-LaceyLp], separable $1$-complemented subspaces of $C(K)$-spaces are isomorphic to $C(K)$-spaces [@B] and, in the complex setting, N. J. Kalton and G. V. Wood proved that every $1$-complemented subspace of a space with a $1$-unconditional basis also has a $1$-unconditional basis [@KW] (see also [@Flinn]). This last fact does not hold in the real case, as shown in [@BFL].
All these results and several others concerning $1$-complemented subspaces in Banach lattices may be found in a comprehensive survey about the topic due to B. Randrianantoanina [@Beata]. A closely related long-standing open problem is the CSP for $L_1$-spaces. In the separable setting, this amounts to determining whether every complemented subspace of $L_1[0,1]$ is isomorphic to $L_1[0,1]$, $\ell_1$ or $\ell_1^n$. Significant work on this question can be found in [@Douglas; @ES; @Tal90]. A somewhat dual question is the well-known CSP for $C(K)$-spaces, which asks whether every complemented subspace of a $C(K)$-space is also isomorphic to a $C(K)$-space, with a considerable amount of literature around it (cf. [@B78; @Bou; @JKS; @LW; @Ros72]) and a comprehensive survey due to H.P. Rosenthal [@Ros]. An explicit connection among the different versions of the CSP will be exhibited in the next section. Namely, a positive answer to the CSP for Banach lattices would imply a positive answer to the CSP for both $L_1$-spaces and $C(K)$-spaces (at least in the separable setting). The CSP for $C(K)$-spaces has been recently solved by G. Plebanek and the third named author in [@PSA], providing a non-separable counterexample by means of an appropriate construction of a Johnson-Lindenstrauss space. The main goal of the present paper is to prove that the space constructed in [@PSA], which will be denoted by ${\normalfont\textsf{PS}}_\textsf{2}$ and is $1$-complemented in some $C(K)$-space, is not even isomorphic to a Banach lattice, thus answering the CSP for Banach lattices also in the negative.
The proof of this theorem rests essentially on the following two facts: the local theory of Banach lattices can be used to show that it is enough to check that ${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to an AM-space; in addition, a careful analysis of the inductive argument in the construction of ${\normalfont\textsf{PS}}_\textsf{2}$ will show that not only can no *norming free* $weak^*$-closed subset (see Section [3.2](#s:normingfree){reference-type="ref" reference="s:normingfree"} for the precise definition) lie inside $B_{{\normalfont\textsf{PS}}_\textsf{2}^*}$, but that the same is true for a broader collection of $weak^*$-compact sets with desirable lattice properties. Namely, we will consider those $K\subseteq B_{{\normalfont\textsf{PS}}_\textsf{2}^*}$ which have the following properties:

1. $K$ is norming in ${\normalfont\textsf{PS}}_\textsf{2}^*$,

2. For every $x\in {\normalfont\textsf{PS}}_\textsf{2}$, there exists $y\in {\normalfont\textsf{PS}}_\textsf{2}$ such that $\left.\delta_y\right|_K=\left|\left.\delta_x\right|_K\right|$, where $\delta\colon {\normalfont\textsf{PS}}_\textsf{2}\to \mathcal{C}(B_{{\normalfont\textsf{PS}}_\textsf{2}^*})$ is defined by $\delta_x(x^*)=x^*(x)$, for $x^*\in B_{{\normalfont\textsf{PS}}_\textsf{2}^*}$.

It will be shown that the existence of a $weak^*$-compact subset of $B_{{\normalfont\textsf{PS}}_\textsf{2}^*}$ with the above-mentioned features is necessary for ${\normalfont\textsf{PS}}_\textsf{2}$ to be isomorphic to an AM-space. The paper is organized as follows. Section 2 is devoted to gathering some necessary results about Banach lattices. Given that our results are strongly based on the construction of ${\normalfont\textsf{PS}}_\textsf{2}$ from [@PSA], for the sake of readability we summarize the basic necessary facts about ${\normalfont\textsf{PS}}_\textsf{2}$ in Section 3. These will be used in Sections 4 and 5 to obtain counterexamples to the CSP both for real and complex Banach lattices.
# Preliminaries and auxiliary results on Banach lattices {#SectionAuxiliaryBL}

A Banach lattice is a Banach space $X$, equipped with a partial order (compatible with the vector space structure) and lattice operations $x\vee y$ and $x\wedge y$, being respectively the least upper bound and the greatest lower bound of $x,y\in X$, such that the norm satisfies $\|x\|\leq \|y\|$ whenever $|x|\leq |y|$, where $|x|=x\vee (-x)$. Let $X$ be a Banach lattice. A linear functional $x^* \in X^*$ is said to be a lattice homomorphism if $x^*(x\vee y)=x^*(x) \vee x^*(y)$ and $x^*(x\wedge y)=x^*(x) \wedge x^*(y)$ for every $x,y \in X$. We denote by $\mathop{\mathrm{Hom}}(X, \mathbb{ R})$ the set of all lattice homomorphisms in $X^*$. Note that the Banach space dual $X^*$ is also a Banach lattice in a natural way and that every lattice homomorphism in $X^*$ is in particular positive. Fortunately, it is sufficient to know the lattice structure of $X^*$ in order to determine those linear functionals in $X^*$ which belong to $\mathop{\mathrm{Hom}}(X,\mathbb{ R})$. Namely, a functional $x^* \in X^*$ with $x^*>0$ (i.e. $x^*$ is positive and $x^*\neq 0$) is a lattice homomorphism if and only if it is an atom in $X^*$ (see, e.g., [@AB Section 2.3, Exercise 6]). Recall that an element $x>0$ in a Banach lattice $X$ is said to be an atom if $x \geq u \geq 0$ implies that $u=ax$ for some scalar $a \geq 0$. For instance, if $X=\ell_1(\Gamma)$ for some set $\Gamma$, then the atoms of $X$ are precisely the elements of the form $\lambda e_\alpha$, where $\lambda>0$ and $\{e_\alpha: \alpha\in \Gamma\}$ are the vectors of the canonical basis of $\ell_1(\Gamma)$. Thus, for any Banach lattice $X$ with $X^*=\ell_1(\Gamma)$ we have $\mathop{\mathrm{Hom}}(X,\mathbb{ R})=\{\lambda e_\alpha: \lambda \geq 0,~\alpha\in \Gamma\}$. A Banach lattice $X$ is said to be an AL-space if $\|x+y\|=\|x\|+\|y\|$ for all $x,y\in X$ with $x,y\geq 0$.
Analogously, an AM-space is a Banach lattice $X$ whose norm satisfies $\|x\lor y\|=\max\{\|x\|,\,\|y\|\}$ whenever $x,y\in X$, $x,y\geq 0$. There is a well-known duality relation between these two classes: $X$ is an AL-space (respectively, an AM-space) if and only if $X^*$ is an AM-space (resp., an AL-space). The remarkable representation theorems of Kakutani allow us to identify AL-spaces with $L_1(\mu)$-spaces (up to a lattice isometry), while AM-spaces can be identified with sublattices of $C(K)$-spaces. These classical results may be found in [@Lacey-book; @LT2]. Given $1\leq p\leq \infty$ and $\lambda\geq 1$, a Banach space $X$ is said to be an $\mathcal{L}_{p,\lambda}$-space if for every finite-dimensional subspace $E$ of $X$ there is a finite-dimensional subspace $F$ of $X$ such that $E\subseteq F$ and $F$ is $\lambda$-isomorphic to $\ell_p^{n}$, where $n=\text{dim}\, F$. We say that a Banach space $X$ is an $\mathcal{L}_p$-space if it is an $\mathcal{L}_{p,\lambda}$-space for some $\lambda$. The most relevant properties of these classes of Banach spaces may be found in [@LP68; @LR69]. The discussion in the Introduction about which Banach spaces are, or are not, isomorphic to Banach lattices suggests that this can be a rather delicate question in general. It is, however, considerably easier to prove that a given space is not isometric to a Banach lattice (see for example [@HHM83 Theorem 4.1], where a construction similar to, but less involved than, that of [@PSA] is used to produce a space not isometric to any Banach lattice). In our argument to show that ${\normalfont\textsf{PS}}_\textsf{2}$ cannot be isomorphic to a Banach lattice it will be enough to show that it cannot be isomorphic to a sublattice of $\ell_\infty$ (see Corollary [Corollary 15](#cor:desired property){reference-type="ref" reference="cor:desired property"}). The reason for this is based on the following proposition stated in [@AW75], whose proof is included below for the convenience of the reader.
**Proposition 1**. *Let $X$ be a Banach lattice which is an $\mathcal{L}_1$-space. Then $X$ is lattice isomorphic to an $L_1(\mu)$-space.*

*Proof.* Fix $\lambda>1$ such that $X$ is an $\mathcal{L}_{1,\lambda}$-space. Since $X$ is an $\mathcal{L}_1$-space, it is isomorphic to a subspace of a certain $L_1(\mu)$-space [@LP68 Proposition 7.1]. Hence, $X$ cannot contain isomorphic copies of $c_0$ and, thus, $X$ is an order-continuous Banach lattice [@AB Theorem 4.60]. Let us consider the following expression for $x\in X$: $$\label{eq:definition norm} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert x \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}=\sup\left\{\sum_{i=0}^m \|x_i\|\::\: m\in\omega,\:\, x_0,\ldots, x_m\in X \text{ pairwise disjoint such that } \sum_{i=0}^m x_i=x \right\}.$$ We claim that ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \cdot \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}$ defines an equivalent AL-norm for $X$. This fact implies, by Kakutani's representation theorem, that $X$ endowed with this new norm is lattice isometric to an $L_1(\mu)$-space ([@AB Theorem 4.27]) and thus we obtain that $X$ is lattice isomorphic to an $L_1(\mu)$-space. Let us detail the proof of the previous claim. We begin by showing the following inequalities: $$\label{eq:equivalent norm} \|x\|\leq {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert x \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} \leq (K_G \lambda)^2 \|x\|, \quad \text{ for every } x\in X,$$ where $K_G$ stands for Grothendieck's constant (for real scalars). The first inequality is trivial, so we shall focus on the second one. Fix a natural number $m$ and $x_0,\ldots, x_m$ pairwise disjoint vectors in $X$. For each $i\in\{0,\ldots, m\}$, let $B_i$ be the band generated by $x_i$ in $X$. Since $X$ is order-continuous, it is in particular $\sigma$-complete, and then each $B_i$ is a projection band [@Meyer Proposition 1.2.11].
Let us denote by $P_i$ the band projection of $X$ onto $B_i$ for $0\leq i \leq m$. For each $0\leq i \leq m$, take $x_i^*\in S_{X^*}$ such that $x_i^*(x_i)=\|x_i\|$. Consider the bounded linear operator $$\begin{aligned} S&: &X\longrightarrow \:\ell_2^{m+1} \\ &&x\,\mapsto (x_i^* \circ P_i(x))_{i=0}^m\end{aligned}$$ Since $X$ is an $\mathcal{L}_{1,\lambda}$-space, we have that $S$ is a $1$-summing operator with $\pi_1(S)\leq K_G \lambda \|S\|$ ([@DJT Theorem 3.1]). Now, we are going to show that $\|S\|\leq K_G \lambda$. To this end, we introduce the following family of operators, defined for each $x\in X$ by linear extension of $$\begin{aligned} T_x& :&\ell_\infty^{m+1} \longrightarrow X \\ &&\:e_i \longmapsto P_i(x)\end{aligned}$$ Since $X$ is an $\mathcal{L}_{1,\lambda}$-space, by [@DJT Theorem 3.7] we have that $T_x$ is $2$-summing with $\pi_2(T_x)\leq K_G\lambda \|T_x\|$ for every $x\in X$. Taking into account the fact that the elements $(P_i(x))_{i=0}^m$ are pairwise disjoint and that the $(P_i)_{i=0}^m$ are band projections, we have that $$\left| T_x\bigl((a_i)_{i=0}^m\bigr) \right|=\left|\sum_{i=0}^m a_i P_i(x) \right|=\sum_{i=0}^m |a_i||P_i(x)| \leq |x| \bigl\|(a_i)_{i=0}^m \bigr\|_\infty$$ and, thus, $\|T_x\|\leq \|x\|$.
Now, observe that $$\sup\left\{\left(\sum_{i=0}^m |y^*(e_i)|^2\right)^{\frac{1}{2}}\::\: y^*\in B_{(\ell_\infty^{m+1})^*}\right\}=\sup\left\{\left(\sum_{i=0}^m |a_i|^2\right)^{\frac{1}{2}}\::\: (a_j)_{j=0}^m\in B_{\ell_1^{m+1}}\right\}\leq 1,$$ and since $T_x$ is $2$-summing with $\pi_2(T_x)\leq K_G\lambda \|x\|$, for every $x\in X$, the following inequality holds: $$\left(\sum_{i=0}^m \|P_i(x)\|^2\right)^{\frac{1}{2}}=\left(\sum_{i=0}^m \|T_x(e_i)\|^2\right)^{\frac{1}{2}}\leq K_G\lambda \|x\|, \qquad \text{ for every } x\in X.$$ From the above inequality, it follows that $$\|Sx\|=\left(\sum_{i=0}^m |x_i^*(P_i(x))|^2\right)^{\frac{1}{2}}\leq \left(\sum_{i=0}^m \|P_i(x)\|^2\right)^{\frac{1}{2}} \leq K_G\lambda \|x\|, \qquad \text{ for every } x\in X,$$ so we get that $\pi_1(S)\leq (K_G\lambda)^2$. Given that the vectors $(x_i)_{i=0}^m$ are pairwise disjoint and $\sum_{i=0}^m x_i=x$, for every $x^*\in B_{X^*}$ we have that $$\sum_{i=0}^m|x^*(x_i)|\leq \sum_{i=0}^m|x^*|(|x_i|)=|x^*|\left(\sum_{i=0}^m |x_i|\right)=|x^*|(|x|).$$ Therefore, $\sup\left\{\sum_{i=0}^m|x^*(x_i)|\::\: x^*\in B_{X^*}\right\}\leq \|x\|$ and we finally obtain $$\sum_{i=0}^m \|x_i\|=\sum_{i=0}^m \|Sx_i\|\leq (K_G \lambda)^2\|x\|.$$ Since the preceding inequality does not depend on the choice of $m\in\omega$ and $x_0,\ldots,x_m\in X$, this proves the inequalities in ([\[eq:equivalent norm\]](#eq:equivalent norm){reference-type="ref" reference="eq:equivalent norm"}). It is straightforward to check that the map ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \cdot \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}$ defined in ([\[eq:definition norm\]](#eq:definition norm){reference-type="ref" reference="eq:definition norm"}) is a norm on $X$, and it is complete because it is equivalent to the complete norm $\|\cdot\|$, as shown in ([\[eq:equivalent norm\]](#eq:equivalent norm){reference-type="ref" reference="eq:equivalent norm"}).
It remains to show that it is indeed a lattice norm. To this end, take $x,y\in X$ such that $|x|\leq |y|$ and fix a natural number $m$ and a finite sequence $(y_i)_{i=0}^m$ of pairwise disjoint vectors in $X$ such that $y=\sum_{i=0}^m y_i$. By the Riesz Decomposition Property (see [@AB Theorem 1.13]), there exist $x_0,\ldots, x_m\in X$ satisfying $x=\sum_{i=0}^m x_i$ and $|x_i|\leq |y_i|$ for each $i=0,\ldots, m$. Thus, the vectors $(x_i)_{i=0}^m$ are also pairwise disjoint, and since $\|\cdot\|$ is a lattice norm, we have $\|x_i\|\leq \|y_i\|$ for each $i=0,\ldots, m$. This implies that ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert x \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}\leq {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert y \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}$. Finally, it is easy to check that ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert x+y \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}={\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert x \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}+{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert y \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}$ for every disjoint pair $x,y\in X$, so we conclude that ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \cdot \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}$ is an AL-norm. ◻

**Corollary 2**. *Let $X$ be a Banach lattice which is an $\mathcal{L}_\infty$-space. Then $X$ is lattice isomorphic to an $AM$-space.*

*Proof.* Since $X$ is an $\mathcal{L}_\infty$-space, its dual $X^*$ is an $\mathcal{L}_1$-space ([@LR69 Theorem III (a)]). By the previous proposition, we obtain that $X^*$ is lattice isomorphic to an $L_1(\mu)$-space and, thus, $X^{**}$ is lattice isomorphic to a $C(K)$-space. Since $X$ is a sublattice of $X^{**}$ [@Meyer Proposition 1.4.5 (ii)], $X$ is lattice isomorphic to a sublattice of a $C(K)$-space, which is an $AM$-space. ◻

**Corollary 3**. *Let $X$ be a complemented subspace in an $L_1(\mu)$-space. If $X$ is isomorphic to a Banach lattice, then it is isomorphic to an $L_1(\mu)$-space.*

*Proof.* Since $X$ is complemented in an $L_1(\mu)$-space, by [@LR69 Theorem III (b)] we get that $X$ is an $\mathcal{L}_1$-space.
If $X$ is also isomorphic to a Banach lattice, the preceding proposition ensures that $X$ is isomorphic to an $L_1(\mu)$-space. ◻

**Corollary 4**. *Let $X$ be a separable complemented subspace in a $C(K)$-space. If $X$ is isomorphic to a Banach lattice, then it is isomorphic to a $C(K)$-space.*

*Proof.* By [@LR69 Theorem 3.2] complemented subspaces of $C(K)$-spaces are $\mathcal{L}_\infty$-spaces. If $X$ is also isomorphic to a Banach lattice, Corollary [Corollary 2](#cor:Am-spaces){reference-type="ref" reference="cor:Am-spaces"} shows that $X$ is isomorphic to an AM-space. The conclusion follows from the fact that separable AM-spaces are isomorphic to $C(K)$-spaces [@B]. ◻

**Remark 5**. It is worth noting that a positive answer to the CSP for Banach lattices in the separable setting would imply, by Corollaries [Corollary 3](#cor:L1){reference-type="ref" reference="cor:L1"} and [Corollary 4](#cor:CK){reference-type="ref" reference="cor:CK"}, a positive answer to the CSP for $L_1$-spaces and for subspaces of $C[0,1]$.

**Remark 6**. Proposition [Proposition 1](#p:BLL1){reference-type="ref" reference="p:BLL1"} yields, in particular, that if a Banach lattice $X$ is linearly isomorphic to $\ell_1$, then it must be lattice isomorphic to $\ell_1$. Note that this does not extend to isometries, in the following sense: a Banach lattice linearly isometric to $\ell_1$ need not be lattice isometric to $\ell_1$. In fact, the proof of Proposition [Proposition 1](#p:BLL1){reference-type="ref" reference="p:BLL1"} tells us that if a Banach lattice $X$ is linearly isometric to $\ell_1$, then it is $K_G^2$-lattice isomorphic to $\ell_1$ (where $K_G$ is Grothendieck's constant). Moreover, in the $2$-dimensional setting it can be checked that this isomorphism constant is sharp: $\ell_\infty^2$ is linearly isometric to $\ell_1^2$, and one can check that any lattice isomorphism between them gives constant at least $2$ (which coincides with the square of Grothendieck's constant for dimension $2$ [@kri]).
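The linear isometry behind the $2$-dimensional example in Remark 6 can be written down explicitly; the following is a standard computation, included here only as a sketch. The map $T\colon \ell_\infty^2\to\ell_1^2$ given by $T(x,y)=\tfrac12(x+y,\,x-y)$ satisfies $$\|T(x,y)\|_{\ell_1^2}=\tfrac12|x+y|+\tfrac12|x-y|=\max\{|x|,\,|y|\}=\|(x,y)\|_{\ell_\infty^2},$$ by the elementary identity $|a+b|+|a-b|=2\max\{|a|,|b|\}$; since $T$ is invertible, it is a surjective linear isometry between $\ell_\infty^2$ and $\ell_1^2$.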
# The Plebanek-Salguero space {#SectionTheSpace}

Let us denote by $\omega=\{0,1,2,...\}$ the set of non-negative integers, and write $\operatorname{fin}(\omega)$ for the family of all finite subsets of $\omega$. Given $\mathcal F \subseteq {\mathcal P}(\omega)$, $[\mathcal F]$ denotes the smallest subalgebra of ${\mathcal P}(\omega)$ containing $\mathcal F$. A family ${\mathcal A}$ of infinite subsets of $\omega$ is *almost disjoint* whenever $A\cap B$ is finite for every pair of distinct $A, B\in {\mathcal A}$. Every almost disjoint family ${\mathcal A}$ gives rise to a *Johnson-Lindenstrauss* space $\operatorname{JL}({\mathcal A})$, which is the closed linear span inside $\ell_\infty$ of the set of characteristic functions $\{1_n: n\in \omega\} \cup \{1_A: A\in {\mathcal A}\}\cup \{1_\omega\}$. Alternatively, let us write $\mathfrak A= [\operatorname{fin}(\omega) \cup {\mathcal A}]$. Then $\operatorname{JL}({\mathcal A})$ is precisely the closure in $\ell_\infty$ of the subspace $s(\mathfrak A)$ consisting of all *simple* $\mathfrak A$-measurable functions; that is, functions of the form $f=\sum_{i=1}^n a_i\cdot 1_{B_i}$, where $n\in \omega$, $a_i\in \mathbb{ R}$ and $B_i\in \mathfrak A$. It is well-known that $\operatorname{JL}({\mathcal A})$ is isometrically isomorphic to a $C(K)$-space [@AK Theorem 4.2.5]. The underlying compact space can be realized as the Stone space consisting of all ultrafilters on $\mathfrak A$. More explicitly, we can define $$K_{\mathcal A}= \omega \cup \{p_A: A\in {\mathcal A}\} \cup \{\infty\}$$ and specify a topology on $K_{\mathcal A}$ as follows:

- points in $\omega$ are isolated;

- given $A\in {\mathcal A}$, a basic neighbourhood of $p_A$ is of the form $\{p_A\}\cup (A\setminus F)$, where $F\in \operatorname{fin}(\omega)$;

- $K_{\mathcal A}$ is the one-point compactification of the locally compact space $\omega \cup \{p_A: A\in {\mathcal A}\}$.
The compact space $K_{\mathcal A}$ is often referred to as the *Alexandrov-Urysohn compact space* associated to ${\mathcal A}$. It is a separable, scattered compact space with empty third Cantor-Bendixson derivative. Observe that $\operatorname{JL}({\mathcal A})$ coincides with the subspace $\{f|_\omega: f\in C(K_{\mathcal A})\}$ of $\ell_\infty$. On the other hand, the dual of $\operatorname{JL}({\mathcal A})$ is isometrically isomorphic to the space $M(\mathfrak A)$ of real-valued *finitely* additive measures on $\mathfrak A$. Indeed, every $\mu\in M(\mathfrak A)$ defines a functional on $s(\mathfrak A)$ by means of integration [@DS III.2], and every functional on $\operatorname{JL}({\mathcal A})$ arises in this way. Let us recall that the norm of any measure $\nu\in M(\mathfrak A)$ is given by $\|\nu\| = |\nu|(\omega)$, where the *variation* $|\nu|$ is defined as $$|\nu|(A) = \sup\{|\nu(B)| + |\nu(A\setminus B)|: B\in \mathfrak A, B\subseteq A\}.$$ In particular, since $\operatorname{JL}({\mathcal A})$ is isometrically isomorphic to $C(K_{\mathcal A})$, and $K_{\mathcal A}$ is scattered, $M(\mathfrak A)$ is isometrically isomorphic to $\ell_1(K_{\mathcal A}) \simeq \ell_1(\omega) \oplus_1 \ell_1({\mathcal A}) \oplus_1 \mathbb{ R}$. Therefore, every $\nu\in M(\mathfrak A)$ can be decomposed as $\nu = \mu + \bar\nu$, where $\mu$ is a countably additive measure and $\bar\nu$ is an element of $M(\mathfrak A)$ which vanishes on finite sets. The spaces $\operatorname{JL}({\mathcal A})$ have recently found use in Banach space theory as counterexamples or as a tool to produce them [@KL; @MP; @PSA1]. They are also used in [@PSA] to obtain a negative answer to the complemented subspace problem in $C(K)$-spaces.
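As an illustration of this decomposition (a routine identification, spelled out here only for orientation), fix $A\in{\mathcal A}$ and consider the evaluation functional $f\mapsto f(p_A)$ on $C(K_{\mathcal A})$. Under the identification above, it corresponds to the finitely additive measure $\nu_A\in M(\mathfrak A)$ given by $$\nu_A(B)=\begin{cases} 1, & \text{if } A\setminus B \text{ is finite},\\ 0, & \text{if } A\cap B \text{ is finite}, \end{cases}\qquad B\in\mathfrak A$$ (exactly one of the two alternatives occurs for each $B\in\mathfrak A$, by almost disjointness); its countably additive part is $\mu=0$, while $\bar\nu_A=\nu_A$ vanishes on finite sets.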
## General facts {#subsec:general}

The main purpose of [@PSA] is to construct two almost disjoint families ${\mathcal A}, \protect{\mathcal B}\subseteq {\mathcal P}(\omega\times 2)$ such that the corresponding Johnson-Lindenstrauss spaces enjoy the following properties (see [@PSA Theorem 1.3]):

- $\operatorname{JL}(\protect{\mathcal B})$ is isomorphic to $\operatorname{JL}({\mathcal A})\oplus {\normalfont\textsf{PS}}_\textsf{2}$, and both $\operatorname{JL}({\mathcal A})$ and ${\normalfont\textsf{PS}}_\textsf{2}$ are isometric to $1$-complemented subspaces of $\operatorname{JL}(\protect{\mathcal B})$.

- ${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to any $C(K)$-space.

Let us describe how such families ${\mathcal A}$ and $\protect{\mathcal B}$ are defined and how they interact with each other. For that purpose, we work in the countable set $\omega \times 2$ rather than in $\omega$, and we need some definitions. First, we say that a subset $C\subseteq \omega \times 2$ is a *cylinder* if it is of the form $C=C_0\times 2$ for some $C_0\subseteq \omega$. Given $n\in \omega$, let us write $c_n = \{n\}\times 2$. A partition $B^0, B^1$ of a cylinder $C=C_0\times 2$ *splits* $C$ (or is a *splitting* of $C$) if for every $n\in C_0$, the sets $B^0 \cap c_n$ and $B^1 \cap c_n$ are singletons. Consider two almost disjoint families ${\mathcal A}$ and $\protect{\mathcal B}$ such that:

- ${\mathcal A}= \{A_\xi: \xi < \mathfrak c\}$ is a family of cylinders in $\omega \times 2$.

- $\protect{\mathcal B}= \{B_\xi^0, B_\xi^1: \xi<\mathfrak c\}$ satisfies that, for each $\xi<\mathfrak c$, the pair $B_\xi^0$, $B_\xi^1$ is a splitting of $A_\xi$.
In this context, we will slightly abuse notation by declaring $\operatorname{JL}({\mathcal A})$ to be the closed subspace of $\ell_\infty(\omega \times 2)$ spanned by $\{1_{c_n}: n\in \omega\} \cup \{1_A: A\in {\mathcal A}\}\cup \{1_{\omega \times 2}\}$; that is to say, in the definition of $\operatorname{JL}({\mathcal A})$ we consider only finite *cylinders* instead of all finite subsets of $\omega \times 2$. With these considerations, it is straightforward to see that $\operatorname{JL}({\mathcal A})$ sits inside $\operatorname{JL}(\protect{\mathcal B})$ as the subspace formed by those functions which are constant on cylinders, and the map $P: \operatorname{JL}(\protect{\mathcal B}) \to \operatorname{JL}(\protect{\mathcal B})$ defined as $$Pf(n,0) = Pf(n,1) = \frac{1}{2}\big(f(n,0)+f(n,1)\big)$$ is a norm-one projection whose image is $\operatorname{JL}({\mathcal A})$ --see [@PSA Proposition 3.1]--. Let us write $X=\ker P$, so that we have $\operatorname{JL}(\protect{\mathcal B}) = \operatorname{JL}({\mathcal A}) \oplus X$. Then the map $Q = \operatorname{Id}_{C(K_\protect{\mathcal B})}- P$ acts as $$Q(f)(n,0) = - Q(f)(n,1) = \frac12\big(f(n,0) - f(n,1)\big),$$ and so it is a norm-one projection onto $X$. Therefore both $X$ and $\operatorname{JL}({\mathcal A})$ are isometric to $1$-complemented subspaces of $\operatorname{JL}(\protect{\mathcal B})$, and the space $X$ can be described as follows: $$\label{eq:Csigma} X = \{f\in \operatorname{JL}(\protect{\mathcal B}): f(n,0)=-f(n,1) \ \forall n\in \omega\}.$$ In order to ensure that $X$ is not isomorphic to a $C(K)$-space, the families ${\mathcal A}$ and $\protect{\mathcal B}$ will be chosen to satisfy certain delicate combinatorial properties. Actually, the space which we denote by ${\normalfont\textsf{PS}}_\textsf{2}$ is such an $X$ for a particular choice of ${\mathcal A}$ and $\protect{\mathcal B}$ for which $X$ is not isomorphic to a $C(K)$-space.
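For the reader's convenience, let us verify directly (an elementary computation) that $P$ is indeed a norm-one projection: since $Pf$ is constant on each cylinder $c_n$, for every $f\in \operatorname{JL}(\protect{\mathcal B})$ we have $$P^2f(n,i)=\frac12\big(Pf(n,0)+Pf(n,1)\big)=\frac12\big(f(n,0)+f(n,1)\big)=Pf(n,i),\qquad i\in\{0,1\},$$ so $P^2=P$; moreover $|Pf(n,i)|\leq\|f\|_\infty$ gives $\|P\|\leq 1$, and $P1_{\omega\times 2}=1_{\omega\times 2}$ shows $\|P\|=1$.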
## Norming and free subsets {#s:normingfree}

We now focus on how to produce the almost disjoint families ${\mathcal A}$ and $\protect{\mathcal B}$, intertwined as in Section [3.1](#subsec:general){reference-type="ref" reference="subsec:general"}, so that the resulting space $X$ is not isomorphic to a $C(K)$-space. These techniques will be essential for our counterexample below. We start with the basic idea. [\[subsec:norming\]]{#subsec:norming label="subsec:norming"} **Definition 7**. Given a Banach space $X$ and a weak\* closed subset $K$ of $B_{X^*}$, we say that

- $K$ is *norming* if there is $0<c\leq 1$ such that $\sup_{x^*\in K} |{\left\langle\kern 0ex x^*,x \kern 0ex\right\rangle}| \geq c\cdot\|x\|$ for every $x\in X$.

- $K$ is *free* if for every $f\in C(K)$ there exists $x\in X$ such that $f(x^*)={\left\langle\kern 0ex x^*,x \kern 0ex\right\rangle}$ for every $x^*\in K$.

We will speak of a *$c$-norming* set whenever we need explicit mention of the constant $c$ in the definition of a norming set. Observe that $K$ is a norming and free subset of $B_{X^*}$ precisely when the natural operator $T: X \to C(K)$ defined by $T(x)(x^*)={\left\langle\kern 0ex x^*,x \kern 0ex\right\rangle}$ is an (onto) isomorphism [@PSA1 Lemma 2.2]. The use of norming free sets makes it possible to construct certain Banach spaces which are not $C(K)$-spaces. Applied to our particular case, the idea is to prevent every candidate for a norming set in the dual ball of $X$ from being free. We now indicate how to proceed. First, observe that, for any choice of the families ${\mathcal A}$ and $\protect{\mathcal B}$ as in Section [3.1](#subsec:general){reference-type="ref" reference="subsec:general"}, the dual of the space $X$ can be isometrically identified with the subspace $\operatorname{JL}({\mathcal A})^\perp$ of $M(\mathfrak B)$.
Therefore, every functional $\nu \in X^*$ can be seen as a pair of measures $(\mu, \bar\nu) \in \ell_1(\omega\times 2) \oplus \ell_1(\mathfrak c\times 2)$, where $\mu(n,0)=-\mu(n,1)$ for every $n\in \omega$ and $\bar\nu(\xi,0)=-\bar\nu(\xi,1)$ for every $\xi<\mathfrak c$. On the other hand, for every $n\in \omega$, the function $$f_n = \frac12\big(1_{(n,0)}-1_{(n,1)}\big)$$ is always an element of $X$. Hence, every norming set for $X$ must contain a sequence of functionals $(\nu_n)_{n\in \omega}$ such that $\inf_{n\in\omega} |\nu_n(f_n)| = \inf_{n\in \omega} |\nu_n(n,0)|>0$. This motivates the following definition:

**Definition 8**. A bounded sequence $(\mu_n)_{n\in \omega}$ in $\ell_1(\omega\times 2)$ is *admissible* if

- $\inf_{n\in \omega} |\mu_n(n,0)|>0$.

- $\mu_k(n,0)=-\mu_k(n,1)$ for every $k,n\in \omega$.

Consequently, every norming set for $X$ must contain a sequence $(\mu_n, \bar\nu_n)_{n\in \omega}$ such that $(\mu_n)_{n\in \omega}$ is admissible. The main idea of [@PSA] is to prevent every sequence contained in $\operatorname{JL}({\mathcal A})^\perp$ whose $\ell_1(\omega\times 2)$-parts form an admissible sequence from lying inside a free subset of the dual unit ball of ${\normalfont\textsf{PS}}_\textsf{2}$. In this way, there are no norming free subsets for ${\normalfont\textsf{PS}}_\textsf{2}$, and therefore it cannot be isomorphic to a $C(K)$-space. Our subsequent argument also makes use of admissible sequences to prove that ${\normalfont\textsf{PS}}_\textsf{2}$ is, in fact, not isomorphic to a Banach lattice. This proof relies on the following simple observation (cf. [@PSA Lemma 3.3]):

**Lemma 9**. *Assume $\mathfrak B$ is a Boolean subalgebra of ${\mathcal P}(\omega)$.
If $M\subseteq M(\mathfrak B)$ lies inside a free subset, then for every $B\in \mathfrak B$ and $\varepsilon>0$ there is a simple $\mathfrak B$-measurable function $g$ such that $\big|{\left\langle\kern 0ex \mu,g \kern 0ex\right\rangle} - |\mu(B)|\big|<\varepsilon$ for every $\mu\in M$.*

## Separation of measures

Let $M_1(\mathfrak B)$ denote the unit ball of the space $M(\mathfrak B)$. [\[subsec:separation\]]{#subsec:separation label="subsec:separation"} **Definition 10**. Given a Boolean subalgebra $\mathfrak B$ of ${\mathcal P}(\omega\times 2)$, two sets of measures $M, M'\subseteq M_1(\mathfrak B)$ are *$\mathfrak B$-separated* if there are $\varepsilon>0$ and a finite collection $B_1, ..., B_n\in \mathfrak B$ such that for every pair $(\mu, \mu')\in M\times M'$ there is $k\in\{1,...,n\}$ with $|\mu(B_k)-\mu'(B_k)|\geq \varepsilon$. The notion of $\mathfrak B$-separation is essential in the construction of ${\normalfont\textsf{PS}}_\textsf{2}$. In particular, Definition [Definition 10](#def:sep){reference-type="ref" reference="def:sep"}, in tandem with Lemma [Lemma 9](#lem:no-free){reference-type="ref" reference="lem:no-free"} and some techniques from infinite combinatorics, is what prevents a certain sequence of measures from lying inside a free set. We will also need the following fact about $\mathfrak B$-separation to show that ${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to a Banach lattice --see [@PSA Lemma 4.2]--:

**Lemma 11**.
*Let $M, M'\subseteq M_1(\mathfrak B)$. If there are $\varepsilon>0$ and a simple $\mathfrak B$-measurable function $g$ such that for every $(\mu, \mu')\in M\times M'$ we have $|{\left\langle\kern 0ex \mu,g \kern 0ex\right\rangle}-{\left\langle\kern 0ex \mu',g \kern 0ex\right\rangle}|\geq \varepsilon$, then $M$ and $M'$ are $\mathfrak B$-separated.*

## The heart of the construction of ${\normalfont\textsf{PS}}_\textsf{2}$ {#subsec:heart}

The almost disjoint families ${\mathcal A}$ and $\protect{\mathcal B}$ are constructed through an inductive process of length $\mathfrak c$ --cf. [@PSA Section 6]--. Let us now sketch this process, paying special attention to the properties that will be needed later. Recall that the idea is to construct a family ${\mathcal A}= \{A_\xi: \xi <\mathfrak c\}$ of cylinders in $\omega \times 2$ and define suitable splittings $B_\xi^0, B_\xi^1$ of $A_\xi$ for every $\xi<\mathfrak c$. Given any $\Lambda\subseteq \mathfrak c$, we will denote $\mathfrak B(\Lambda) = [\operatorname{fin}(\omega \times 2) \cup \{B_\alpha^0, B_\alpha^1\colon \alpha\in \Lambda\}]$. In particular, $\mathfrak B(\xi)$ stands for $\mathfrak B(\{\alpha: \alpha<\xi\})$, and the final algebra is denoted $\protect{\mathcal B}=[\operatorname{fin}(\omega \times 2) \cup\{B_\alpha^0, B_\alpha^1\colon \alpha<\mathfrak c\}]$. Let us explain how the sets $B_\xi^0$ and $B_\xi^1$ are obtained for any given $\xi<\mathfrak c$. First, observe that we can "code" all elements in $M(\mathfrak B)$ before the construction of the algebra $\mathfrak B$, since they are represented by elements in $\ell_1(\omega \times 2) \oplus_1 \ell_1(\mathfrak c\times 2) \oplus_1 \mathbb{ R}$. Following the remark preceding the definition of admissible sequences, we only need to consider sequences which will produce elements in $\operatorname{JL}({\mathcal A})^\perp$. Hence, we produce a suitable enumeration of all such sequences of measures.
The result is that, to each ordinal $\xi<\mathfrak c$, we assign a sequence $(\nu_n^\xi)_{n\in\omega}$ in the unit ball of $\ell_1(\omega \times 2) \times \ell_1(\mathfrak c\times 2)$ such that, if we write $\nu_n^\xi = \mu_n^\xi + \bar\nu_n^\xi$ for every $n\in \omega$, then: (i) the sequence $(\mu_n^\xi)_{n\in \omega}$ is admissible; (ii) $\bar\nu_n^\xi(\alpha,0)+\bar\nu_n^\xi(\alpha,1)=0$ for every $\alpha<\mathfrak c$; (iii) $\bar\nu_n^\xi(\alpha,i)=0$ whenever $\alpha\geq \xi$ and $i\in\{0,1\}$. Moreover, every sequence $(\nu_n)_{n\in \omega}$ in the unit ball of $\ell_1(\omega \times 2) \times \ell_1(\mathfrak c\times 2)$ which satisfies properties *(i)*--*(iii)* above is of the form $(\nu_n^\xi)_{n\in\omega}$ for exactly one $\xi<\mathfrak c$. The sets $B_\xi^0$ and $B_\xi^1$ are defined so that $(\nu_n^\xi)_{n\in\omega}$ cannot eventually lie in a free set of $M_1(\mathfrak B)$. This is done as follows. First, let us define $c=\inf_{n\in \omega}|\mu_n^\xi(n,0)|$, which is strictly positive by admissibility, and let $p_n^\xi$ be the one-point subset of $c_n=\{(n,0),(n,1)\}$ for which $\mu_n^\xi(p_n^\xi)>0$. Now, consider three auxiliary infinite subsets $J_2^\xi \subseteq J_1^\xi \subseteq J_0^\xi \subseteq \omega$, such that the differences $\omega\setminus J_0^\xi$, $J_0^\xi\setminus J_1^\xi$ and $J_1^\xi\setminus J_2^\xi$ are also infinite, and define $$B_\xi^0 = \left(\bigcup_{n\in J^\xi_2} p_n^\xi \right) \cup \left(\bigcup_{n\in J^\xi_1 \setminus J^\xi_2} c_n \setminus p_n^\xi \right), \quad B_\xi^1 = (J_1^\xi \times 2) \setminus B_\xi^0.$$ The sets $J_0^\xi$, $J_1^\xi$ and $J_2^\xi$ are chosen so that, for some fixed $\delta \in (0, \frac{c}{16})$, the following properties hold: (P1) For every $n\in J_0^\xi$, $|\mu_n^\xi|( (J_0^\xi \times 2)\setminus c_n)<\delta$ --see (5.a) in the proof of [@PSA Lemma 5.3]. (P2) There is $a\geq c$ such that $|\mu_n^\xi(p_n^\xi)-a|<\delta$ for every $n\in J_1^\xi$ --see (5.b) in the proof of [@PSA Lemma 5.3]. (P3)
The pairs - $\{\nu_n^\xi: n\in J^\xi_2\}$ and $\{\nu_n^\xi: n\in J^\xi_1\setminus J^\xi_2\}$, - $\{\nu_n^\xi: n\in J^\xi_1\}$ and $\{\nu_n^\xi: n\in J^\xi_0 \setminus J^\xi_1\}$ are not $\mathfrak B(\alpha \setminus \{\xi\})$-separated for every $\alpha > \xi$. Therefore, such pairs are not $\mathfrak B(\mathfrak c\setminus \{\xi\})$-separated either. As a consequence of (P1)--(P3) --cf. [@PSA Lemmata 5.3 and 5.5]--, $(\nu_n^\xi)_{n\in \omega}$ cannot lie inside a free set in $M(\mathfrak B)$. Our proof that ${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to a Banach lattice will be substantially based on Properties (P1)--(P3). Finally, let us record, for future reference, the following observation, which will be applied below to functions of the form $f = 1_{B^0_\xi} - 1_{B^1_\xi}$ for a given $\xi<\mathfrak c$. **Remark 12**. Fix any $\xi<\mathfrak c$ and consider the sequence $(\nu_n^\xi)_{n\in \omega}$ in $M_1(\mathfrak B)$. Since $\bar\nu_n^\xi(\alpha, i)=0$ for every $\alpha \geq \xi$ and $i\in\{0,1\}$, we have $|\bar\nu_n^\xi|(B_\alpha^i) = 0$ whenever $\alpha\geq \xi$ and $i\in \{0,1\}$. Hence, for any $n\in \omega$, $\nu_n^\xi$ agrees with $\mu_n^\xi$ inside any set $B_\alpha^i$ for $\alpha \geq \xi$ and $i\in \{0,1\}$. In particular, if $f \in {\normalfont\textsf{PS}}_\textsf{2}$ has its support contained in the set $B^0_\xi \cup B^1_\xi$, then ${\left\langle\kern 0ex \nu^\xi_n,f \kern 0ex\right\rangle} = {\left\langle\kern 0ex \mu^\xi_n,f \kern 0ex\right\rangle}$ for every $n\in \omega$. # The Plebanek-Salguero space is not a Banach lattice We now combine the results in Section [2](#SectionAuxiliaryBL){reference-type="ref" reference="SectionAuxiliaryBL"} with the fundamental properties of the space ${\normalfont\textsf{PS}}_\textsf{2}$ to show that it is not linearly isomorphic to any Banach lattice.
First of all, notice that ${\normalfont\textsf{PS}}_\textsf{2}^*$ is a complemented subspace of $M(\mathfrak B)=\ell_1(\omega\times 2)\oplus \ell_1(\protect{\mathcal B})$ and therefore isomorphic to $\ell_1(\Gamma)$ for an uncountable set $\Gamma$. Furthermore, since the set $\{\delta_{(n,0)}, \delta_{(n,1)}:n\in \omega\}$ is $1$-norming in $M(\mathfrak B)$, by simply taking restrictions to ${\normalfont\textsf{PS}}_\textsf{2}$ we deduce that ${\normalfont\textsf{PS}}_\textsf{2}$ also has a countable $1$-norming set. These two characteristics of ${\normalfont\textsf{PS}}_\textsf{2}$ will make it easier to prove that this space cannot be isomorphic to a Banach lattice, as the next proposition shows. **Proposition 13**. *Let $X$ be an isomorphic predual of $\ell_1(\Gamma)$ which has a countable norming set. If $X$ is isomorphic to a Banach lattice, then it is isomorphic to a sublattice of $\ell_\infty$.* *Proof.* Let $Y$ be a Banach lattice which is isomorphic to $X$. Since $X$ is a predual of $\ell_1(\Gamma)$, $Y$ is an $\mathcal{L}_\infty$-space. Hence, by Corollary [Corollary 2](#cor:Am-spaces){reference-type="ref" reference="cor:Am-spaces"}, we may assume that $Y$ is an $AM$-space. Then, $Y^*$ is an $AL$-space isomorphic to $\ell_1(\Gamma)$, so $Y^*$ is, in fact, lattice isometric to $\ell_1(\Gamma)$ (see [@Lacey-book Theorem 4, Section 18]). Let us denote by $\{e_\gamma^*\::\:\gamma\in\Gamma\}$ the canonical basis of $\ell_1(\Gamma)$ and let $\{y_n^*\}_{n=0}^\infty$ be a countable $c$-norming set in $B_{Y^*}$. We can write each $y_n^*$ as $$y_n^*=\sum_{\gamma\in\Gamma} \lambda_\gamma^n e_\gamma^*,$$ where $\sum_{\gamma\in\Gamma} |\lambda_\gamma^n|\leq 1$. Thus, for every $n\in\omega$, the set $S_n=\{\gamma\in\Gamma\::\:\lambda_\gamma^n\neq 0\}$ is countable and, consequently, $S=\bigcup_{n\in\omega} S_n$ is also a countable set. We claim that the set $\{e_\gamma^*\::\:\gamma\in S\}$ is $c$-norming for $Y$.
Otherwise, there exist $y\in Y$ and $\varepsilon\in (0,c)$ such that $\sup_{\gamma\in S}|e_\gamma^*(y)|<(c-\varepsilon)\|y\|$. But then $$|y_n^*(y)|=\biggl|\sum_{\gamma\in S} \lambda_\gamma^n e_\gamma^*(y) \biggr|\leq \sum_{\gamma\in S} |\lambda_\gamma^n| |e_\gamma^*(y)| \leq \sup_{\gamma\in S}|e_\gamma^*(y)| \sum_{\gamma\in S} |\lambda_\gamma^n|<(c-\varepsilon)\|y\|,$$ for every $n\in\omega$, which yields a contradiction with the fact that $\{y_n^*\}_{n=0}^\infty$ is $c$-norming. Finally, $Y$ is lattice embeddable into $\ell_\infty(S)$ through the lattice embedding given by $y\mapsto \bigl(e_\gamma^*(y)\bigr)_{\gamma\in S}$. ◻ The preceding proposition motivates the following definition: **Definition 14**. We say that a Banach space $X$ has the **Desired Property (DP)** if for every norming sequence $(e_n^*)_{n\in \omega}$ in $X^*$ there exists an element $f \in X$ such that no element $g \in X$ satisfies $$e_n^*(g)=|e_n^*(f)| \mbox{ for every } n \in \omega.$$ We will see next that a Banach space has the (DP) if and only if it is not isomorphic to a sublattice of $\ell_\infty$. Therefore, by Proposition 13, in order to prove that ${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to a Banach lattice it will be sufficient to check that this space has the (DP). **Corollary 15**. *Given an isomorphic predual $X$ of $\ell_1(\Gamma)$ which has a countable norming set, the following statements are equivalent:* 1. *$X$ is isomorphic to a Banach lattice.* 2. *$X$ is isomorphic to a sublattice of $\ell_\infty$.* 3. *$X$ does not have the (DP). That is, there exists a norming sequence $(x_n^*)_{n=0}^\infty$ in $B_{X^*}$ such that for every $f\in X$ there is an element $g\in X$ such that $$x_n^*(g)=|x_n^*(f)|, \text{ for every } n\in \omega.$$* *Proof.* (1) $\Leftrightarrow$ (2) is just Proposition 13. \(2\) $\Rightarrow$ (3). Let $T\colon X\to Y$ be an invertible operator onto a sublattice $Y$ of $\ell_\infty$ and let $C=\|T\|\|T^{-1}\|$.
If we denote by $(e_n^*)_{n=0}^\infty\subseteq \ell_\infty^*$ the canonical basis of $\ell_1$, the natural order in $\ell_\infty$ is given by $$f\leq g \quad \text{ iff } \quad e_n^*(f)\leq e_n^*(g), \quad \text{for every } n\in\omega.$$ It is clear that $(e_n^*)_{n=0}^\infty$ is $1$-norming in $\ell_\infty$ and, hence, the sequence of restrictions $$y_n^*=\left.e_n^*\right|_Y, \quad n\in\omega$$ is $1$-norming in $Y$. Now, define $$x_n^*=\frac{1}{\|T\|}T^*y_n^*,\quad n\in\omega.$$ It is straightforward to check that $(x_n^*)_{n=0}^\infty\subseteq B_{X^*}$ is $1/C$-norming in $X$ and, given $f\in X$, if we take $g=T^{-1}|Tf|$, then for every $n\in\omega$ we have $$x_n^*(g)=\frac{1}{\|T\|}T^*y_n^*\bigl(T^{-1}|Tf|\bigr)=\frac{1}{\|T\|}y_n^*\bigl(|Tf|\bigr)=|x_n^*(f)|.$$ \(3\) $\Rightarrow$ (2). Suppose that $X$ fails (DP). That is, there exists a $c$-norming sequence $(e_n^*)_{n\in\omega}\subseteq B_{X^*}$ such that for every $f\in X$ there is an element $g\in X$ such that $e_n^*(g)=|e_n^*(f)|$ for every $n\in\omega$. We define the following mapping $$\begin{array}{ccc} T: & X\longrightarrow & \ell_\infty \\ &f \longmapsto & \bigl(e_n^*(f)\bigr), \end{array}$$ which is clearly linear and bounded below (since $(e_n^*)_{n\in\omega}$ is norming). Hence, $Y=T(X)$ is a closed subspace of $\ell_\infty$. Moreover, it is a sublattice. Indeed, for any $(e_n^*(f))\in Y$ there exists $g\in X$ such that $(e_n^*(g))=(|e_n^*(f)|)=|(e_n^*(f))|$; that is, the modulus of $(e_n^*(f))$ also belongs to $Y$. ◻ **Theorem 16**. *${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to a Banach lattice.* *Proof.* We will prove this fact by showing that ${\normalfont\textsf{PS}}_\textsf{2}$ has the (DP). Fix a norming sequence $(e_n^*)_{n \in \omega}$ in $B_{{\normalfont\textsf{PS}}_\textsf{2}^*}$. 
Our aim is to find an $f\in {\normalfont\textsf{PS}}_\textsf{2}$ such that no $g\in {\normalfont\textsf{PS}}_\textsf{2}$ satisfies $$\langle e_n^*, g\rangle=|\langle e_n^*, f\rangle|, \text{ for every } n\in\omega.$$ The very definition of ${\normalfont\textsf{PS}}_\textsf{2}$ allows us to write, for every $n\in\omega$, $e_n^*=\mu_n+\bar\nu_n$, where $\mu_n\in \ell_1(\omega\times 2)$ and $\bar\nu_n\in \ell_1(\mathfrak c\times 2)$ --see Section [\[subsec:norming\]](#subsec:norming){reference-type="ref" reference="subsec:norming"}--. Recall that $e_n^*(c_k)=0$ for any $k\in \omega$, where $c_k=\{(k,0),(k,1)\}$, since we are identifying ${\normalfont\textsf{PS}}_\textsf{2}^*$ with $\operatorname{JL}(\mathcal{A})^\perp$. Furthermore, given that $(\bar\nu_n)_{n\in\omega}$ vanishes on finite sets, we have $$0=e_n^*(c_k)=\mu_n(c_k), \quad \text{ for all } k,n\in\omega.$$ In addition, as $(e_n^*)_{n=0}^\infty$ is a norming set, there exists $\tilde{c}>0$ such that $$2\,\sup_n|\mu_n(k,0)|=\sup_n\bigl|e_n^*\bigl(1_{(k,0)}-1_{(k,1)}\bigr)\bigr|\geq \tilde{c},\quad \text{ for every } k\in\omega,$$ where the first equality holds because $\mu_n(c_k)=0$ and $\bar\nu_n$ vanishes on finite sets. Hence we may assume, passing to a subsequence if necessary, that $c=\inf_n |\mu_n(n,0)|>0$ and thus $(\mu_n)_{n\in\omega}$ is an *admissible sequence* in $B_{{\normalfont\textsf{PS}}_\textsf{2}^*}$. Therefore, there exists $\xi<\mathfrak{c}$ such that $(e_n^*)_{n\in\omega} = (\nu_n^\xi)_{n\in\omega}$, with the notations of Section [3.4](#subsec:heart){reference-type="ref" reference="subsec:heart"}. Observe that, by the way the enumeration has been carried out --see item *(iii)* in Section [3.4](#subsec:heart){reference-type="ref" reference="subsec:heart"}--, we have $\bar\nu_n^\xi(\alpha,i)=0$ whenever $\alpha\geq \xi$ and $i\in \{0,1\}$. Moreover, by virtue of (P3), the pairs of measures - $\{e_n^*: n\in J_2^\xi\}$ and $\{e_n^*: n\in J_1^\xi\setminus J_2^\xi\}$, - $\{e_n^*: n\in J_1^\xi\}$ and $\{e_n^*: n\in J_0^\xi\setminus J_1^\xi\}$
are not $\mathfrak{B}(\mathfrak c\setminus\{\xi\})$-separated. For the rest of the proof, we will drop the superscript $\xi$, and simply write $\mu_n$ for the sequence of measures $\mu_n^\xi$. Now, consider the function $$\label{eq:main-thm-1} f=1_{B^0_\xi}-1_{B^1_\xi}\in {\normalfont\textsf{PS}}_\textsf{2}.$$ Since $f$ is supported in $B^0_\xi \cup B^1_\xi$, Remark [Remark 12](#rem:zero){reference-type="ref" reference="rem:zero"} asserts that ${\left\langle\kern 0ex e_n^*,f \kern 0ex\right\rangle} = {\left\langle\kern 0ex \mu_n,f \kern 0ex\right\rangle}$ for every $n\in \omega$. Let us suppose that there exists an element $g\in {\normalfont\textsf{PS}}_\textsf{2}$ such that $$\langle e_n^*, g\rangle=|\langle e_n^*, f\rangle|=|\langle\mu_n,f\rangle|, \text{ for every } n\in\omega.$$ We will derive a contradiction by showing that the above pairs of sets are in fact $\mathfrak{B}(\mathfrak c\setminus\{\xi\})$-separated. Our argument closely follows that of [@PSA Lemma 5.3]. First, we introduce some notation: given $a,b\in \mathbb{R}$ and $\delta>0$, we write $a\approx_\delta b$ to mean $|a-b|<\delta$. Let $\delta\in(0,\frac{c}{16})$ be as in properties (P1) and (P2). Since the subspace of ${\normalfont\textsf{PS}}_\textsf{2}$ consisting of simple $\mathfrak B$-measurable functions is dense in ${\normalfont\textsf{PS}}_\textsf{2}$, there is a simple $\mathfrak B$-measurable function $h\in {\normalfont\textsf{PS}}_\textsf{2}$ such that $\|g-h\|<\delta$. Therefore, $$|\langle\mu_n,f\rangle|=|\langle e_n^*,f\rangle|\approx_\delta \langle e_n^*,h\rangle, \text{ for every } n\in\omega.$$ Without loss of generality, we assume that $h=r f+s$, where $r\in\mathbb{R}$ and $s$ is a simple $\mathfrak{B}(\mathfrak{c}\setminus\{\xi\})$-measurable function lying in ${\normalfont\textsf{PS}}_\textsf{2}$. Let us further suppose that $r\geq 0$; otherwise, we may apply our argument to the function $-g$ instead of $g$ (that is, if we show that $-|f|$ cannot exist, then neither can $|f|$).
Hence, we have: $$\label{eq:main-thm-2} |\langle \mu_n, f\rangle|\approx_\delta r\langle \mu_n, f\rangle+\langle e_n^*,s\rangle, \text{ for every } n\in\omega.$$ Now, observe that properties (P1) and (P2), together with the definition of $B^0_\xi$ and $B^1_\xi$, yield, for every $n\in J_0^\xi$, $$\begin{array}{cl} {\left\langle\kern 0ex \mu_n,f \kern 0ex\right\rangle}\approx_\delta \int_{c_n} f d\mu_n=f(p_n)\mu_n(p_n)+f(c_n\setminus p_n)\mu_n(c_n\setminus p_n) \approx_{2\delta} \left\{\begin{array}{rl} 2a & \text{ if } n\in J_2^\xi,\\[1mm] -2a & \text{ if } n\in J_1^\xi\setminus J_2^\xi, \\[1mm] 0 & \text{ if } n\in J_0^\xi\setminus J_1^\xi. \end{array}\right. \end{array}$$ Hence, $$\label{eq:main-thm-3} \langle \mu_n, f\rangle \approx_{3\delta} \left\{\begin{array}{rl} 2a & \text{ if } n\in J_2^\xi,\\[1mm] -2a & \text{ if } n\in J_1^\xi\setminus J_2^\xi, \\[1mm] 0 & \text{ if } n\in J_0^\xi\setminus J_1^\xi. \end{array}\right.$$ We deduce from the previous equation that $$\label{eq:main-thm-4} |\langle \mu_n,f\rangle | \approx_{3\delta} \left\{\begin{array}{rl} 2a & \text{ if } n\in J_1^\xi,\\[1mm] 0 & \text{ if } n\in J_0^\xi \setminus J_1^\xi. \end{array}\right.$$ Finally, using ([\[eq:main-thm-3\]](#eq:main-thm-3){reference-type="ref" reference="eq:main-thm-3"}) and ([\[eq:main-thm-4\]](#eq:main-thm-4){reference-type="ref" reference="eq:main-thm-4"}), we infer from ([\[eq:main-thm-2\]](#eq:main-thm-2){reference-type="ref" reference="eq:main-thm-2"}) the following relations: $$\label{eq:main-thm-5} \left\{\begin{array}{rrl} 2a\approx_{\delta(4+3r)}& 2ra +\langle e_n^*,s\rangle & \text{ if } n\in J_2^\xi,\\[1mm] 2a\approx_{\delta(4+3r)}& -2ra+\langle e_n^*,s\rangle & \text{ if } n\in J_1^\xi\setminus J_2^\xi,\\[1mm] 0\approx_{\delta(4+3r)}& \langle e_n^*,s\rangle & \text{ if } n\in J_0^\xi \setminus J_1^\xi. \end{array}\right.$$ First, suppose that $0\leq r\leq 1/2$. 
The first two relations of ([\[eq:main-thm-5\]](#eq:main-thm-5){reference-type="ref" reference="eq:main-thm-5"}) give, for every $n\in J_1^\xi$, $$\langle e_n^*,s\rangle\geq 2(1-r)a-\delta (4+3r)\geq a-\frac{11}{2}\delta,$$ while the third one gives, for every $k\in J_0^\xi \setminus J_1^\xi$, $$\langle e_k^*,s\rangle\leq \delta (4+3r)\leq \frac{11}{2}\delta.$$ Thus, for any $n\in J_1^\xi$ and any $k\in J_0^\xi \setminus J_1^\xi$, we have, using that $\delta<c/16$ and $a\geq c$, $$\langle e_n^*,s\rangle-\langle e_k^*,s\rangle\geq a-11\delta>0.$$ This already implies --see Lemma [Lemma 11](#lem:sep){reference-type="ref" reference="lem:sep"}-- that the sets $\{e_n^*: n\in J_1^\xi\}$ and $\{e_n^*: n\in J_0^\xi\setminus J_1^\xi\}$ are $\mathfrak B(\mathfrak c\setminus\{\xi\})$-separated. On the other hand, if $r\geq 1/2$, then using relations ([\[eq:main-thm-5\]](#eq:main-thm-5){reference-type="ref" reference="eq:main-thm-5"}) again, we infer that for every $n\in J_1^\xi \setminus J_2^\xi$ and every $k\in J_2^\xi$ $$\begin{aligned} \langle e_n^*,s\rangle -\langle e_k^*,s\rangle & \geq 2a(1+r)-\delta(4+3r) -\bigl(2a(1-r)+ \delta(4+3r)\bigr) \\ & = 2r(2a-3\delta)-8\delta \geq 2a-11\delta>0.\end{aligned}$$ Hence, the sets $\{e_n^*: n\in J_2^\xi\}$ and $\{e_n^*: n\in J_1^\xi\setminus J_2^\xi\}$ are $\mathfrak{B}(\mathfrak{c}\setminus\{\xi\})$-separated, again by Lemma [Lemma 11](#lem:sep){reference-type="ref" reference="lem:sep"}. Thus, in both cases we arrive at a contradiction. ◻ ## Relation to other classes of ${\mathcal L}_\infty$-spaces Apart from the class of AM-spaces, other well-known classes of ${\mathcal L}_\infty$-spaces are those of G-spaces and $C_\sigma(K)$-spaces.
*G-spaces* can be characterized as the closed subspaces $X$ of some $C(K)$ for which there exists a set of triples $A=\{(t_\alpha, t'_\alpha, \lambda_\alpha): \alpha \in \Gamma\} \subseteq K \times K \times \mathbb R$ so that $X = \{f\in C(K): f(t_\alpha) = \lambda_\alpha f(t'_\alpha) \ \forall \alpha\in \Gamma\}$. On the other hand, *$C_\sigma(K)$-spaces* are the closed subspaces $X$ of some $C(K)$ which are of the form $X=\{f \in C(K): f(\sigma t) = -f(t) \ \forall t\in K\}$, where $\sigma: K \to K$ is a homeomorphism with $\sigma^2 = \operatorname{Id}$. These classes are also rather natural in the context of $1$-complemented subspaces. Indeed, $G$-spaces are precisely those Banach spaces which are $1$-complemented in some AM-space, while a Banach space is $1$-complemented in a $C(K)$-space if and only if it is a $C_\sigma(K)$-space [@LW Theorem 3]. It is clear that $C_\sigma(K)$-spaces are $G$-spaces. On the other hand, Kakutani's representation theorem asserts that every AM-space $X$ is of the form $X = \{f\in C(K): f(t_\alpha) = \lambda_\alpha f(t'_\alpha) \ \forall \alpha\in \Gamma\}$ for some compact space $K$ and a certain set of triples $A=\{(t_\alpha, t'_\alpha, \lambda_\alpha): \alpha\in \Gamma\} \subseteq K \times K \times [0,\infty)$. Therefore, AM-spaces are in particular $G$-spaces. The properties of ${\normalfont\textsf{PS}}_\textsf{2}$ and Theorem [Theorem 16](#thm:main){reference-type="ref" reference="thm:main"} show that, up to isomorphism, these are strictly larger classes: **Corollary 17**. *There is a $C_\sigma(K)$-space which is not isomorphic to an AM-space.
In particular, there is a $G$-space which is not isomorphic to an AM-space.* In fact, equation ([\[eq:Csigma\]](#eq:Csigma){reference-type="ref" reference="eq:Csigma"}) witnesses that ${\normalfont\textsf{PS}}_\textsf{2}$ is a $C_\sigma(K)$-space for $K=K_\protect{\mathcal B}$: we can write $${\normalfont\textsf{PS}}_\textsf{2}= \{f\in C(K_\protect{\mathcal B}): f(\sigma p) = -f(p) \ \forall p\in K_\protect{\mathcal B}\}$$ where $\sigma: K_\protect{\mathcal B}\to K_\protect{\mathcal B}$ is defined as $$\sigma(n,i)=(n,1-i), \quad \sigma(p_\xi^i) = p_\xi^{1-i}, \quad \sigma(\infty)=\infty$$ for $n\in \omega$, $i\in\{0,1\}$ and $\xi<\mathfrak c$. This yields two interesting consequences. First, let us observe that this map $\sigma$ has $\infty$ as its only fixed point. Lacey proves in [@Lacey-book Corollary of Theorem 10, Section 10] that every $C_\sigma(K)$-space, where $K$ is a scattered compact space and $\sigma$ has no fixed points, is isometrically isomorphic to a $C(K)$-space. This result is no longer true if $\sigma$ has only one fixed point, as the existence of ${\normalfont\textsf{PS}}_\textsf{2}$ shows. On the other hand, in [@HHM86 Theorem 7] it is shown that a Banach space $X$ has an ultrapower isometric to an ultrapower of $c_0$ if and only if $X$ is isometric to a $C_\sigma(K)$-space for $K$ a totally disconnected compact space with a dense subset of isolated points and such that $\sigma$ has a unique fixed point which is not isolated in $K$. We therefore conclude that $({\normalfont\textsf{PS}}_\textsf{2})^\mathcal{U}$ is isometric to $(c_0)^\mathcal{U}$ for some ultrafilter $\mathcal{U}$, improving an earlier result of [@HHM83 Theorem 4.1]. # The CSP for complex Banach lattices In this section, we will show how ${\normalfont\textsf{PS}}_\textsf{2}$ can be *modified* to provide a counterexample to the Complemented Subspace Problem for *complex Banach lattices*.
By a complex Banach lattice, as usual, we mean the complexification $X_{\mathbb C}=X\oplus iX$ of a real Banach lattice $X$, equipped with the norm $\|x+iy\|_{X_\mathbb{C}}=\||x+iy|\|_X$, where $|\cdot|:X_\mathbb{C}\to X_+$ is *the modulus* map given by $$\label{eq:norm1} |x+iy|=\sup_{\theta\in [0,2\pi]}\{x\cos\theta +y\sin \theta\}, \qquad \text{for every } x+iy\in X_\mathbb{C}.$$ Given a real Banach space $E$, $E_\mathbb{C}$ denotes the complexification of the real vector space $E$, $E\oplus i E$, endowed with the norm $$\label{eq:norm2} \|x+iy\|=\sup_{\theta\in [0,2\pi]}\|x\cos \theta+y\sin\theta\|, \quad \text{ for every } x+iy\in E_{\mathbb{C}}.$$ It is not difficult to check that the norm induced by [\[eq:norm1\]](#eq:norm1){reference-type="eqref" reference="eq:norm1"} and the one in [\[eq:norm2\]](#eq:norm2){reference-type="eqref" reference="eq:norm2"} are equivalent in the class of complex Banach lattices. Moreover, both definitions actually coincide in the complexification of a $C(K)$-space or an $L_p$-space [@AA-book Section 3.2, Exercises 3 and 5]. A complex sublattice $Z$ of a complex Banach lattice $X_\mathbb{C}$ is the complexification $Y_\mathbb{C}$ of a real sublattice $Y$ of $X$. A $\mathbb{C}$-linear operator between two complex Banach lattices $T:X_\mathbb{C}\to Y_\mathbb{C}$ is said to be a complex lattice homomorphism if there exists a lattice homomorphism $S:X\to Y$ such that $T(x_1+ix_2)=Sx_1+iSx_2$ for all $x_1,x_2\in X$. We refer to [@Schaefer-book Chapter II, Section 11] for further information on complex Banach lattices. As we mentioned in the introduction, an important motivation which led us to consider the complex version of the CSP is the following result of Kalton and Wood [@KW]: every $1$-complemented subspace of a complex space with a $1$-unconditional basis has a $1$-unconditional basis. 
This result is not true in the real case [@BFL], even though it is still unknown whether every complemented subspace of a space with an unconditional basis also has an unconditional basis. Other positive results for complex Banach lattices (which are false in the real case) are: - An $M$-projection $P$ on $X_\mathbb{C}$ (respectively, an $L$-projection), i.e. a projection satisfying $\|x\|=\max\{\|Px\|,\|x-Px\|\}$ for every $x\in X_\mathbb{C}$ (resp., $\|x\|=\|Px\|+\|x-Px\|$), is always a band projection [@DKL]. - If we have a decomposition $X_\mathbb{C}=E\oplus F$ such that $\|x+y\|=\||x|\lor|y|\|$ for all $x\in E$ and $y\in F$, then $|x|\land |y|=0$ for $x\in E$, $y\in F$; in other words, $E\oplus F$ is a band decomposition of $X_\mathbb{C}$ [@Kalton(Dales)]. For the purpose of providing a counterexample to the CSP for complex Banach lattices, a natural approach would be to take $({\normalfont\textsf{PS}}_\textsf{2})_\mathbb{C}$, the complexification of ${\normalfont\textsf{PS}}_\textsf{2}$, which is $1$-complemented in $(\operatorname{JL}(\protect{\mathcal B}))_{\mathbb{C}}$ (which is a $C_\mathbb{C}(K)$-space, that is, a complex space of continuous functions). Although it is not clear whether $({\normalfont\textsf{PS}}_\textsf{2})_\mathbb{C}$ can be isomorphic to a complex Banach lattice, we will see that the construction of ${\normalfont\textsf{PS}}_\textsf{2}$ can be slightly modified in order to give a counterexample to the problem also in the complex setting. Namely, we will show that it is possible to construct a Banach space $\widetilde{{\normalfont\textsf{PS}}_\textsf{2}}$ which is $1$-complemented in a $C(K)$-space and such that its complexification $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_{\mathbb{C}}$ is not isomorphic to a complex Banach lattice. In particular, $\widetilde{{\normalfont\textsf{PS}}_\textsf{2}}$ cannot be isomorphic to a real Banach lattice either.
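As a side remark, in the concrete lattice $\mathbb{R}^n$ the modulus formula defining the complexification norm above evaluates coordinatewise to $\sup_{\theta}(x_k\cos\theta+y_k\sin\theta)=\sqrt{x_k^2+y_k^2}$. The following numerical sketch (illustrative only, not part of the argument) checks this by approximating the supremum on a fine grid of angles:

```python
# Numerical check (illustrative): in the Banach lattice R^n, the complex
# modulus |x + iy| = sup_theta (x cos(theta) + y sin(theta)) computed
# coordinatewise agrees with sqrt(x_k^2 + y_k^2).
import math

def modulus(x, y, steps=100000):
    # coordinatewise supremum over a fine grid of angles in [0, 2*pi)
    return [
        max(xk * math.cos(t) + yk * math.sin(t)
            for t in (2 * math.pi * j / steps for j in range(steps)))
        for xk, yk in zip(x, y)
    ]

x, y = [3.0, 0.0, -1.0], [4.0, 2.0, 1.0]
approx = modulus(x, y)
exact = [math.hypot(xk, yk) for xk, yk in zip(x, y)]
print(all(abs(a - e) < 1e-6 for a, e in zip(approx, exact)))  # True
```

This is exactly the sense in which the complexification norm of a $C(K)$- or $L_p$-space coincides with the usual pointwise complex absolute value.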
We start with a complex version of the notion of admissible sequence (compare to Definition [Definition 8](#def:admissible-real){reference-type="ref" reference="def:admissible-real"}). **Definition 18**. We say that a sequence $(\mu_n)_{n\in\omega}$ in the unit ball of $\ell_1(\omega \times 2)\oplus i \ell_1(\omega \times 2)$ is $\mathbb C$-*admissible* if $\mu_n(c_k)=0$ for every $n,k\in\omega$ and $\inf_{n\in\omega} |\mu_n(n,0)|>0$. Note that if $\mu_n(c_k)=0$ for every $n,k\in\omega$, then it follows that $\mathfrak{Re}\,\mu_n(c_k)=\mathfrak{Im}\,\mu_n(c_k)=0$ for $n,k\in\omega$. Nevertheless, this does not imply that $\inf_{n\in\omega} |\mathfrak{Re}\,\mu_n(n,0)|>0$ or $\inf_{n\in\omega} |\mathfrak{Im}\,\mu_n(n,0)|>0$, so $(\mathfrak{Re}\,\mu_n)_n$ and $(\mathfrak{Im}\,\mu_n)_n$ need not be *real* admissible sequences. This is the obstacle we encounter when trying to show that the complexification of ${\normalfont\textsf{PS}}_\textsf{2}$ is not isomorphic to a complex Banach lattice. Instead, we will work directly with the complex version of the notion of *admissibility* (Definition [Definition 18](#def:admissible-complex){reference-type="ref" reference="def:admissible-complex"}) and show that, with small modifications, one can produce a space $\widetilde{{\normalfont\textsf{PS}}_\textsf{2}}$ with our new desired property.
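For finitely many measures, the two conditions of $\mathbb C$-admissibility are straightforward to test; the toy sketch below (hypothetical data, illustrative only — it ignores the unit-ball requirement) spells them out:

```python
# Toy finite check of C-admissibility: mu_n maps points (k, i) in
# omega x 2 to complex values; we test mu_n(c_k) = mu_n(k,0) + mu_n(k,1) = 0
# for all n, k, together with inf_n |mu_n(n, 0)| > 0.

def c_admissible(mus, tol=1e-12):
    ks = range(len(mus))
    pairs_vanish = all(
        abs(mu.get((k, 0), 0) + mu.get((k, 1), 0)) <= tol
        for mu in mus for k in ks
    )
    diag = min(abs(mu.get((n, 0), 0)) for n, mu in enumerate(mus))
    return pairs_vanish and diag > tol

mus = [
    {(0, 0): 0.5 + 0.5j, (0, 1): -0.5 - 0.5j},
    {(0, 0): 0.1, (0, 1): -0.1, (1, 0): 0.7j, (1, 1): -0.7j},
]
print(c_admissible(mus))                           # True
print(c_admissible([{(0, 0): 0.3, (0, 1): 0.3}]))  # False: mu_0(c_0) != 0
```

Note that the second measure above has $\mathfrak{Re}\,\mu_1(1,0)=0$, illustrating why $\mathbb C$-admissibility does not pass to real and imaginary parts.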
It is possible to construct two almost disjoint families $\mathcal{A}=\{A_\alpha\::\:\alpha<\mathfrak{c}\}$ and $\mathcal{B}=\{B_\alpha^0,\,B_\alpha^1\::\:\alpha<\mathfrak{c}\}$ in $\mathcal{P}(\omega\times 2)$ with the property that all norming sequences $\{(\nu_n^\xi)_{n\in\omega}\::\:\xi<\mathfrak{c}\}$ in the unit ball of $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})^*_{\mathbb{C}}=\operatorname{JL}({\mathcal A})_{\mathbb{C}}^\perp$ are labelled in such a way that for every $\xi<\mathfrak{c}$: - We can decompose $\nu_n^\xi=\mu_n^\xi+\bar{\nu}_n^\xi$ for every $n\in\omega$, where $(\mu_n^\xi)_{n\in\omega}$ is $\mathbb{C}$-admissible and $(\bar{\nu}_n^\xi)_{n\in\omega}$ satisfies the two properties given below: - $\bar\nu_n^\xi(\alpha,0)+\bar\nu_n^\xi(\alpha,1)=0$ for every $\alpha<\mathfrak c$, - $\bar\nu_n^\xi(\alpha,j)=0$ whenever $\alpha\geq \xi$ and $j\in\{0,1\}$. - If $c:=\inf_{n\in\omega}|\mu_n^\xi(n,0)|$ and $\delta$ is a fixed number in the interval $(0,c/22)$, then there are three infinite sets $J_2^\xi \subseteq J_1^\xi \subseteq J_0^\xi \subseteq \omega$ such that $\omega\setminus J_0^\xi$, $J_0^\xi\setminus J_1^\xi$ and $J_1^\xi\setminus J_2^\xi$ are also infinite, with the following properties: (Q1) For every $n\in J_0^\xi$, $|\mu_n^\xi|( (J_0^\xi \times 2)\setminus c_n)<\delta$. (Q2) There exist $a\in\mathbb{C}$ with $|a|\geq c$ and $p_n^\xi \in \{\{(n,0)\},\{(n,1)\}\}$, such that $|\mu_n^\xi(p_n^\xi)-a|<\delta$ for every $n\in J_1^\xi$, and $\mathfrak{Re}\,a\geq \frac{c}{\sqrt{2}}$ or $\mathfrak{Im}\,a\geq \frac{c}{\sqrt{2}}$. (Q3) The pairs - $\{\nu_n^\xi: n\in J^\xi_2\}$ and $\{\nu_n^\xi: n\in J^\xi_1\setminus J^\xi_2\}$, - $\{\nu_n^\xi: n\in J^\xi_1\}$ and $\{\nu_n^\xi: n\in J^\xi_0 \setminus J^\xi_1\}$ are not $\mathfrak B(\alpha \setminus \{\xi\})$-separated for every $\alpha > \xi$. Therefore, such pairs are not $\mathfrak B(\mathfrak c\setminus \{\xi\})$-separated either.
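Property (Q2) relies on the elementary pigeonhole fact that any $b\in\mathbb{C}$ with $|b|\geq c$ satisfies $|\mathfrak{Re}\,b|\geq c/\sqrt{2}$ or $|\mathfrak{Im}\,b|\geq c/\sqrt{2}$, since otherwise $|b|^2=(\mathfrak{Re}\,b)^2+(\mathfrak{Im}\,b)^2<c^2$. A randomized numerical sanity check (illustrative only):

```python
# Sanity check of the pigeonhole fact behind (Q2): |b| >= c forces
# |Re b| >= c/sqrt(2) or |Im b| >= c/sqrt(2).
import cmath, math, random

random.seed(0)
c = 1.0
tol = 1e-9  # small slack for floating-point boundary cases
ok = True
for _ in range(10000):
    mod_b = c * (1 + random.random())                     # any modulus >= c
    b = cmath.rect(mod_b, random.uniform(0.0, 2.0 * math.pi))
    ok = ok and (abs(b.real) >= c / math.sqrt(2) - tol
                 or abs(b.imag) >= c / math.sqrt(2) - tol)
print(ok)  # True
```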
Properties (Q1)--(Q3) can be obtained by adjusting Lemmata 5.3 and 5.5 of [@PSA] to the definition of $\mathbb{C}$-admissibility. To avoid cumbersome repetitions, we will not give an explicit proof of these, as similar computations will be detailed in the proof of Theorem [Theorem 19](#thm:complex-CSP){reference-type="ref" reference="thm:complex-CSP"}. Let us however sketch the idea of how (Q2) could be checked: since, for every $\xi<\mathfrak{c}$, $(\nu_n^\xi)_{n\in\omega}\subseteq B_{(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_{\mathbb{C}}^*}$, we have $$1\geq \frac{1}{2}|\nu_n^\xi(1_{(n,0)}-1_{(n,1)})|=|\mu_n^\xi(n,0)| \quad \text{ for every } n\in\omega.$$ Hence, passing to a subsequence we may assume that $\bigl(\mu_n^\xi(n,0)\bigr)$ converges to some $b\in\mathbb{C}$. Since $\inf_{n\in\omega}|\mu_n^\xi(n,0)|=c>0$, we have $|b|\geq c$. Thus, $|\mathfrak{Re}\,b|\geq \frac{c}{\sqrt{2}}$ or $|\mathfrak{Im}\,b|\geq \frac{c}{\sqrt{2}}$. Let us suppose for instance that $|\mathfrak{Re}\,b|\geq \frac{c}{\sqrt{2}}$. Since $\mu_n^\xi(c_k)=0$ for every $n,k\in\omega$, in particular, $$\mathfrak{Re}\,\mu_n^\xi(n,0)=-\mathfrak{Re}\,\mu_n^\xi(n,1)\quad \text{ for every } n\in\omega.$$ Consequently, for each $n\in\omega$, we can choose $p_n^\xi=\{(n,0)\}$ or $p_n^\xi=\{(n,1)\}$ such that $\mathfrak{Re}\,\mu_n^\xi(p_n^\xi)=|\mathfrak{Re}\,\mu_n^\xi(n,0)|\geq 0$, so $\mathfrak{Re}\,\mu_n^\xi(p_n^\xi)\to |\mathfrak{Re}\,b|$. Finally, passing again to a subsequence if necessary, we obtain $\mu_n^\xi(p_n^\xi)\to a$ with $|a|\geq c$ and $\mathfrak{Re}\,a=|\mathfrak{Re}\,b|\geq \frac{c}{\sqrt{2}}$. We proceed to prove our main result in this section. **Theorem 19**. *$\widetilde{{\normalfont\textsf{PS}}_\textsf{2}}\oplus i\widetilde{{\normalfont\textsf{PS}}_\textsf{2}}$ is not isomorphic to a complex Banach lattice.* *Proof.* We will prove this fact by showing that $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$ has the complex (DP).
Fix a norming sequence $(e_n^*)_{n \in \omega}$ in $B_{(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}^*}$. Our aim is to find an $f\in (\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$ such that no $g\in (\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$ satisfies $$\langle e_n^*, g\rangle=|\langle e_n^*, f\rangle|, \text{ for every } n\in\omega.$$ We shall denote $e_n^*=e_{n,0}^*+ie_{n,1}^*$, where $e_{n,j}^*\in \widetilde{{\normalfont\textsf{PS}}_\textsf{2}}^*$ for $j=0,1$. The very definition of $\widetilde{{\normalfont\textsf{PS}}_\textsf{2}}$ allows us to write, for every $n\in\omega$, $e_{n,j}^*=\mu_{n,j}+\bar\nu_{n,j}$, where $\mu_{n,j}\in \ell_1(\omega\times 2)$ and $\bar\nu_{n,j}\in \ell_1(\mathfrak c\times 2)$ --see Section [\[subsec:norming\]](#subsec:norming){reference-type="ref" reference="subsec:norming"}--. In particular, there exists $\alpha_0<\mathfrak{c}$ such that $\bar\nu_{n,j}(B_\xi^i)=0$ for all $n\in\omega$, $i,j\in \{0,1\}$ and $\xi\geq \alpha_0$. Moreover, since we are identifying $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}^*$ with $\operatorname{JL}({\mathcal A})_{\mathbb{C}}^\perp$ and $(\bar\nu_{n,j})_{n\in\omega}$ vanishes on finite sets for $j=0,1$, we have $$0=e_n^*(c_k)=\mu_n(c_k)=\mu_{n,0}(c_k)+i\mu_{n,1}(c_k), \quad \text{ for all } k,n\in\omega.$$ In addition, as $(e_n^*)_{n=0}^\infty$ is a norming set, there exists $\tilde{c}>0$ such that $$\sup_n\bigl|e_n^*\bigl(1_{(k,0)}-1_{(k,1)}\bigr)\bigr|=2\,\sup_n|\mu_n(k,0)|\geq \tilde{c},\quad \text{ for every } k\in\omega.$$ Hence we may assume, passing to a subsequence if necessary, that $\inf_n |\mu_n(n,0)|=c>0$ and thus $(\mu_n)_{n\in\omega}$ is a $\mathbb C$-*admissible sequence* in $B_{(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})^*_{\mathbb{C}}}$. Therefore, as mentioned in the comments prior to this theorem, there exists $\alpha_0\leq \xi<\mathfrak{c}$ such that $(e_n^*)_{n\in\omega} = (\nu_n^\xi)_{n\in\omega}$ and by (Q3) the pairs of measures - $\{e_n^*: n\in J_2^\xi\}$ and $\{e_n^*: n\in J_1^\xi\setminus J_2^\xi\}$,
- $\{e_n^*: n\in J_1^\xi\}$ and $\{e_n^*: n\in J_0^\xi\setminus J_1^\xi\}$ are not $\mathfrak{B}(\mathfrak c\setminus\{\xi\})$-separated. We have $\mu_n(p_n)\to a\in\mathbb{C}$ as $n\to\infty$ along $J_1^\xi$, where $|a|\geq c$ and $\mathfrak{Re}\,a\geq \frac{c}{\sqrt{2}}$ or $\mathfrak{Im}\,a\geq \frac{c}{\sqrt{2}}$. For simplicity, let us assume that $\mathfrak{Re}\,a\geq \frac{c}{\sqrt{2}}$. Now, consider the function $$\label{eq:main-thm-1c} f=1_{B^0_\xi}-1_{B^1_\xi}\in (\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}.$$ Let us suppose that there exists an element $g\in (\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$ such that $$\langle e_n^*, g\rangle=|\langle e_n^*, f\rangle|=|\langle\mu_n,f\rangle|, \text{ for every } n\in\omega.$$ We will derive a contradiction by showing that the above pairs of sets are in fact $\mathfrak{B}(\mathfrak c\setminus\{\xi\})$-separated. Our argument closely follows that of [@PSA Lemma 5.3]. First, we introduce some notation: given $z,w\in \mathbb{C}$ and $\delta>0$, we write $z\approx_\delta w$ to mean $|z-w|<\delta$. Let us fix $\delta>0$ as in (Q1)--(Q3). Since the subspace of $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$ consisting of simple $\mathfrak B$-measurable functions is dense in $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$, there is a simple $\mathfrak B$-measurable function $h\in (\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$ such that $\|g-h\|<\delta$. Therefore, $$|\langle\mu_n,f\rangle|\approx_\delta \langle e_n^*,h\rangle, \text{ for every } n\in\omega.$$ Without loss of generality, we assume that $h=r f+s$, where $r\in\mathbb{C}$ and $s$ is a simple $\mathfrak{B}(\mathfrak{c}\setminus\{\xi\})$-measurable function lying in $(\widetilde{{\normalfont\textsf{PS}}_\textsf{2}})_\mathbb{C}$. Let us further suppose that $r\geq 0$; if $r=|r|e^{i\theta}$, we may apply our argument to the function $e^{i\theta}g$ instead of $g$ (that is, if we prove that $e^{-i\theta}|f|$ cannot exist, then $|f|$ does not exist either).
Hence, we have: $$\label{eq:main-thm-2c} |\langle \mu_n, f\rangle|\approx_\delta r\langle \mu_n, f\rangle+\langle e_n^*,s\rangle, \text{ for every } n\in\omega.$$ Now, observe that properties (Q1) and (Q2), together with the definition of $B^0_\xi$ and $B^1_\xi$, yield, for every $n\in J_0^\xi$: $$\langle \mu_n,f\rangle\approx_{\delta} \int_{c_n} f \,d\mu_n=f(p_n)\mu_n(p_n)+f(c_n\setminus p_n)\mu_n(c_n\setminus p_n) \approx_{3\delta} \left\{\begin{array}{rl} 2a & \text{ if } n\in J_2^\xi,\\[1mm] -2a & \text{ if } n\in J_1^\xi\setminus J_2^\xi, \\[1mm] 0 & \text{ if } n\in J_0^\xi\setminus J_1^\xi. \end{array}\right.$$ Hence, $$\label{eq:main-thm-3c} \langle \mu_n, f\rangle \approx_{3\delta} \left\{\begin{array}{rl} 2a & \text{ if } n\in J_2^\xi,\\[1mm] -2a & \text{ if } n\in J_1^\xi\setminus J_2^\xi, \\[1mm] 0 & \text{ if } n\in J_0^\xi\setminus J_1^\xi. \end{array}\right.$$ We deduce from the previous equation that $$\label{eq:main-thm-4c} |\langle \mu_n,f\rangle | \approx_{3\delta} \left\{\begin{array}{rl} 2|a| & \text{ if } n\in J_1^\xi,\\[1mm] 0 & \text{ if } n\in J_0^\xi \setminus J_1^\xi. \end{array}\right.$$ Now, using ([\[eq:main-thm-3c\]](#eq:main-thm-3c){reference-type="ref" reference="eq:main-thm-3c"}) and ([\[eq:main-thm-4c\]](#eq:main-thm-4c){reference-type="ref" reference="eq:main-thm-4c"}), we infer from ([\[eq:main-thm-2c\]](#eq:main-thm-2c){reference-type="ref" reference="eq:main-thm-2c"}) the following relations: $$\label{eq:main-thm-5c} \left\{\begin{array}{rrl} 2|a|\approx_{\delta(4+3r)}& 2ra +\langle e_n^*,s\rangle & \text{ if } n\in J_2^\xi,\\[1mm] 2|a|\approx_{\delta(4+3r)}& -2ra+\langle e_n^*,s\rangle & \text{ if } n\in J_1^\xi\setminus J_2^\xi,\\[1mm] 0\approx_{\delta(4+3r)}& \langle e_n^*,s\rangle & \text{ if } n\in J_0^\xi \setminus J_1^\xi. \end{array}\right.$$ First, suppose that $0\leq r\leq 1/2$.
The first two relations of ([\[eq:main-thm-5c\]](#eq:main-thm-5c){reference-type="ref" reference="eq:main-thm-5c"}) give, writing $a=|a|e^{i\alpha}$, for every $n\in J_1^\xi$ $$|\langle e_n^*,s\rangle|\geq 2|1\pm re^{i\alpha}||a|-\delta (4+3r)\geq |a|-\frac{11}{2}\delta,$$ while the third one gives for every $k\in J_0^\xi \setminus J_1^\xi$ $$|\langle e_k^*,s\rangle|\leq \delta (4+3r)\leq\frac{11}{2}\delta.$$ Thus, for any $n\in J_1^\xi$ and any $k\in J_0^\xi \setminus J_1^\xi$, we have, using that $\delta<c/22$ and $|a|\geq c$: $$|\langle e_n^*,s\rangle|-|\langle e_k^*,s\rangle|\geq |a|-11\delta>0.$$ This already implies --see Section [\[subsec:separation\]](#subsec:separation){reference-type="ref" reference="subsec:separation"}-- that the sets $\{e_n^*: n\in J_1^\xi\}$ and $\{e_n^*: n\in J_0^\xi\setminus J_1^\xi\}$ are $\mathfrak B(\mathfrak c\setminus\{\xi\})$-separated, which is a contradiction. On the other hand, if $r\geq 1/2$, then using relations ([\[eq:main-thm-5c\]](#eq:main-thm-5c){reference-type="ref" reference="eq:main-thm-5c"}) again, we infer that for every $n\in J_1^\xi \setminus J_2^\xi$ and every $k\in J_2^\xi$: $$\begin{aligned} \mathfrak{Re}\,\langle e_n^*,s\rangle -\mathfrak{Re}\,\langle e_k^*,s\rangle & \geq 2r\mathfrak{Re}\,a+2|a|-\delta(4+3r)-\bigl(\delta(4+3r)+2|a|-2r\mathfrak{Re}\,a \bigr) \\ & = 2r(2\mathfrak{Re}\,a-3\delta)-8\delta \geq 2\mathfrak{Re}\,a-11\delta>0,\end{aligned}$$ since we have supposed that $\mathfrak{Re}\,a\geq \frac{c}{\sqrt{2}}$. It is clear that if $\mathfrak{Im}\,a\geq \frac{c}{\sqrt{2}}$, the same procedure shows that $\mathfrak{Im}\,\langle e_n^*,s\rangle -\mathfrak{Im}\,\langle e_k^*,s\rangle\geq 2\mathfrak{Im}\,a-11\delta>0$ for every $n\in J_1^\xi \setminus J_2^\xi$ and every $k\in J_2^\xi$. Hence, the sets $\{e_n^*: n\in J_2^\xi\}$ and $\{e_n^*: n\in J_1^\xi\setminus J_2^\xi\}$ are $\mathfrak{B}(\mathfrak{c}\setminus\{\xi\})$-separated. This is a contradiction.
◻ # Acknowledgements {#acknowledgements .unnumbered} We wish to thank Antonio Avilés and Félix Cabello for interesting discussions related to the topic of this paper. Research of D. de Hevia and P. Tradacete was partially supported by grants PID2020-116398GB-I00 and CEX2019-000904-S funded by MCIN/AEI/10.13039/501100011033. D. de Hevia benefited from an FPU Grant FPU20/03334 from the Ministerio de Universidades. G. Martínez-Cervantes was partially supported by Fundación Séneca - ACyT Región de Murcia (21955/PI/22), by Generalitat Valenciana project CIGE/2022/97 and by Agencia Estatal de Investigación and EDRF/FEDER "A way of making Europe" (MCIN/AEI/10.13039/501100011033) (PID2021-122126NB-C32). Research of A. Salguero-Alarcón has been supported by projects PID-2019-103961GB-C21 funded by MCIN/AEI/10.13039/501100011033 and IB20038 (Junta de Extremadura). P. Tradacete is also supported by a 2022 Leonardo Grant for Researchers and Cultural Creators, BBVA Foundation. Y. A. Abramovich and C. D. Aliprantis, *An invitation to operator theory.* Grad. Studies in Math. **50**, American Mathematical Society, Providence, RI, 2002. Y. Abramovich and P. Wojtaszczyk, *The uniqueness of order in the spaces $L_p$(0, 1) and $\ell_p$ ($1 \leq p \leq \infty$)*. Mat. Zametki **18** (1975), 313--325. F. Albiac and N. J. Kalton, *Topics in Banach space theory.* Grad. Texts in Math. **233** (2006). C. D. Aliprantis and O. Burkinshaw, *Positive operators*. Springer, Dordrecht, 2006, Reprint of the 1985 original. A. Avilés, G. Martínez-Cervantes, A. Rueda Zoca and P. Tradacete, *Linear versus lattice embeddings between Banach lattices.* Adv. Math. **406** (2022), Article ID 108574, 14 pp. A. Avilés, J. Rodrı́guez and P. Tradacete, *The free Banach lattice generated by a Banach space.* J. Funct. Anal. **274**, no. 10, 2955--2977 (2018). Y. Benyamini, *Separable $G$ spaces are isomorphic to $C(K)$ spaces.* Israel J. Math. **14** (1973), 287--293. Y.
Benyamini, *An extension theorem for separable Banach spaces.* Israel J. Math. **29** (1978), no. 1, 24--30. Y. Benyamini, P. Flinn and D.R. Lewis, *A space without $1$-unconditional basis which is $1$-complemented in a space with a $1$-unconditional basis.* Texas Functional Analysis Seminar, 1983--84. Longhorn Notes, University of Texas Press (1984), 145--149. S. J. Bernau and H. E. Lacey, *The range of a contractive projection on an $L_p$-space.* Pacific J. Math. **53** (1974), 21--41. S. J. Bernau and H. E. Lacey, *A local characterization of Banach lattices with order continuous norm.* Studia Math. **58** (1976), 101--128. J. Bourgain, *A result on operators on $C[0,1]$.* J. Operator Theory **3** (1980), no. 2, 275--289. J. Bourgain and F. Delbaen, *A class of special $\mathcal{L}_\infty$ spaces.* Acta Math. **145** (1980), 155--176. A. V. Bukhvalov, A. I. Veksler and G. Ya. Lozanovskii, *Banach lattices - some Banach aspects of their theory.* Russ. Math. Surv. **34** (1979), no. 2, 159--212. P. G. Casazza, N. J. Kalton and L. Tzafriri, *Decompositions of Banach lattices into direct sums.* Trans. Amer. Math. Soc., **304** (1987), 771--800. H. G. Dales, N. J. Laustsen, T. Oikhberg and V. G. Troitsky, *Multi-norms and Banach lattices.* Dissertationes Math. **524** (2017), 115 pp. J. Diestel, H. Jarchow and A. Tonge, *Absolutely summing operators.* Cambridge Studies in Advanced Math. **43**, Cambridge University Press, Cambridge, 1995. R. G. Douglas, *Contractive projections on an $L_1$ space.* Pacific J. Math. **15** (1965), 443--462. L. Drewnowski, A. Kamińska and P. Lin, *On Multipliers and L- and M-Projections in Banach Lattices and Köthe Function Spaces.* J. Math. Anal. Appl. **206** (1997), no. 1, 83--102. N. Dunford and J. T. Schwartz, *Linear operators Part 1.* John Wiley and Sons, 1958. P. Enflo and T. W. Starbird, *Subspaces of $L^1$ containing $L^1$.* Studia Math. **65** (1979), no. 2, 203--225. T. Figiel, W. B. Johnson and L. 
Tzafriri, *On Banach Lattices and Spaces Having Local Unconditional Structure, with Applications to Lorentz Function Spaces* , J. Approx. Theory **13** (1975), 395--412. P. Flinn, *On a theorem of N. J. Kalton and G. V. Wood concerning 1-complemented subspaces of spaces having an orthonormal basis.* Texas Functional Analysis Seminar, 1983--84. Longhorn Notes, University of Texas Press (1984), 135--144. Y. Gordon and D. R. Lewis, *Absolutely summing operators and local unconditional structures.* Acta Math. **133** (1974), 27--48. W. T. Gowers and B. Maurey, *The unconditional basic sequence problem.* J. Amer. Math. Soc. **6** (1993), no. 4, 851--874. S. Heinrich, C. W. Henson and L. C. Moore, *Elementary equivalence of $L_1$-preduals.* In: Banach Space Theory and its Applications. Lecture Notes in Math. **991**. Springer, Berlin, Heidelberg (1983), 79--90. S. Heinrich, C. W. Henson and L. C. Moore, *Elementary Equivalence of $C_\sigma(K)$ Spaces for Totally Disconnected, Compact Hausdorff $K$.* J. of Symbolic Logic **51** (1986), no. 1, 135--146. R. C. James, *A non-reflexive Banach space isometric with its second conjugate space.* Proc. Nat. Acad. Sci. U.S.A. **37** (1951), 174--177. W.B. Johnson, T. Kania and G. Schechtman, *Closed ideals of operators on and complemented subspaces of Banach spaces of functions with countable support*. Proc. Amer. Math. Soc. **144** (2016), 4471--4485. W. B. Johnson, J. Lindenstrauss and G. Schechtman, *On the relation between several notions of unconditional structure.* Israel J. Math. **37** (1980), no. 1-2, 120--129. W. B. Johnson, B. Maurey, G. Schechtman and L. Tzafriri, *Symmetric structures in Banach spaces.* Mem. Am. Math. Soc. **217** (1979). N. J. Kalton, *Lattice Structures on Banach Spaces.* Mem. Amer. Math. Soc., **103** (1993). N. J. Kalton, *Hermitian operators on complex Banach lattices and a problem of Garth Dales*. J. Lond. Math. Soc. (2) **86** (2012), 641--656. N. J. Kalton and G. V. 
Wood, *Orthonormal systems in Banach spaces and their applications*. Math. Proc. Cambridge Philos. Soc. **79** (1976), 493--510. P. Koszmider and N. J. Laustsen, *A Banach space induced by an almost disjoint family, admitting only few operators and decompositions.* Adv. Math. **381** (2021), paper ID 107613, 39 pp. J. L. Krivine, *Constantes de Grothendieck et fonctions de type positif sur les sphères*. Adv. Math. **31** (1979), no. 1, 16--30. H. E. Lacey, *The isometric theory of classical Banach spaces*. Springer-Verlag, Berlin, 1974. D. H. Leung, L. Li, T. Oikhberg and M. A. Tursi, *Separable universal Banach lattices.* Israel J. Math. **230** (2019), no. 1, 141--152. J. Lindenstrauss and A. Pełczyński, *Absolutely summing operators in $\mathcal{L}_p$-spaces and their applications.* Studia Math. **29** (1968), 275--326. J. Lindenstrauss and H. P. Rosenthal, *The $\mathcal{L}_p$-spaces.* Israel J. Math. **7** (1969), 325--349. J. Lindenstrauss and L. Tzafriri, *Classical Banach spaces II*. Springer-Verlag, Berlin, 1977. J. Lindenstrauss and D. E. Wulbert, *On the classification of the Banach spaces whose duals are $L_1$ spaces*. J. Funct. Anal. **4** (1969), 332--349. J. López-Abad and P. Tradacete, *Shellable weakly compact subsets of $C[0,1]$.* Math. Ann. **367** (2017), no. 3-4, 1777--1790. W. Marciszewski and G. Plebanek, *Extension operators and twisted sums of $c_0$ and $C(K)$-spaces.* J. Funct. Anal. **274** (2018), 1491--1529. P. Meyer-Nieberg, *Banach lattices*, Springer-Verlag, 1991. A. Pełczyński, *Sur certaines propriétés isomorphiques nouvelles des espaces de Banach de fonctions holomorphes $A$ et $H^\infty$.* C. R. Acad. Sci. Paris Sér. A **279** (1974), 9--12. A. Pełczyński and M. Wojciechowski, *Sobolev spaces in several variables in $L^1$-type norms are not isomorphic to Banach lattices.* Ark. Mat. **40** (2002), no. 2, 363--382. A. Pełczyński and M. Wojciechowski, *Sobolev spaces.* Handbook of the geometry of Banach spaces, vol. 
2, 1361--1423, North-Holland, Amsterdam, 2003. G. Plebanek and A. Salguero Alarcón, *On the three space property for $C(K)$-spaces.* J. Funct. Anal. **281** (2021), paper ID 109193, 15 pp. G. Plebanek and A. Salguero Alarcón, *The complemented subspace problem for $C(K)$-spaces: a counterexample.* Adv. Math. **426** (2023), 109103, 20 pp. B. Randrianantoanina, *Norm-one projections in Banach spaces.* Taiwanese J. Math. **5** (2001), no. 1, 35--95. H. P. Rosenthal, *On factors of $C[0, 1]$ with non-separable dual.* Israel J. Math. **13** (1972), 361--378. H. P. Rosenthal, *The Banach spaces $C(K)$.* Handbook of the Geometry of Banach spaces, vol. 2, North-Holland, Amsterdam, 2003, 1547--1602. H. Schaefer, *Banach Lattices and Positive Operators*, Die Grundlehren der mathematischen Wissenschaften **215**, Springer-Verlag, Berlin-Heidelberg-New York, 1974. M. Talagrand, *Some weakly compact operators between Banach lattices do not factor through reflexive Banach lattices.* Proc. Amer. Math. Soc. **96** (1986), no. 1, 95--102. M. Talagrand, *The three-space problem for $L^1$.* J. Amer. Math. Soc. **3** (1990), no. 1, 9--29. A. Tselishchev, *Absence of Local Unconditional Structure in Spaces of Smooth Functions on Two-Dimensional Torus.* J. Math. Sci. **261** (2022), 832--843.
--- abstract: | EvenQuads is a new card game that is a generalization of the SET game, where each card is characterized by three attributes, each taking four possible values. Four cards form a quad when, for each attribute, the values are the same, all different, or half and half. Given $\ell$ cards from the deck of EvenQuads, we can build an error-correcting linear binary code of length $\ell$ and Hamming distance 4. The quads correspond to codewords of weight 4. Error-correcting codes help us calculate the possible number of quads when given up to 8 cards. We also estimate the number of cards that do not contain quads for decks of different sizes. In addition, we discuss properties of error-correcting codes built on semimagic, magic, and strongly magic quad squares. author: - Nikhil Byrapuram - Hwiseo (Irene) Choi - Adam Ge - Selena Ge - Sylvia Zia Lee - Evin Liang - Rajarshi Mandal - Aika Oki - Daniel Wu - Michael Yang - Tanya Khovanova title: EvenQuads Game and Error-Correcting Codes --- # Introduction EvenQuads is a new card game introduced by Rose and Perreira [@Rose] that is a generalization of the SET game. The new game was initially called SuperSET, but now it is called EvenQuads. The SET deck consists of 81 cards, where each card is characterized by four attributes: number (1, 2, or 3 symbols), color (green, red, or purple), shading (empty, striped, or solid), and shape (oval, diamond, or squiggle). A set is formed by three cards that are either all the same or all different in each attribute. The players try to find sets among given cards, and the fastest player wins. Similarly, the EvenQuads deck consists of $64$ cards, where each card is characterized by three attributes: number (1, 2, 3, or 4), color (red, green, yellow, or blue), and shape (square, icosahedron, circle, or spiral). A quad is formed by four cards that are either all the same, all different, or half and half in each attribute. 
The players try to find quads among given cards, and the fastest player wins. One can view the EvenQuads deck as a deck of numbers from 0 to 63 inclusive. Four numbers form a quad if their bitwise XOR is 0 [@CragerEtAl; @LAGames]. Error-correcting codes allow us to encode transmissions in such a way that, when an error is introduced, the decoder can still completely recover the transmission. The first error-correcting code was introduced in 1950 by Hamming [@Hamming]. Since then, many different types of error-correcting codes have been invented, serving different goals. This paper introduces linear binary error-correcting codes based on the cards from an EvenQuads deck. We start with preliminary information about the EvenQuads game and error-correcting codes in Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}. We extend the deck to any size that is a power of 2. In Section [3](#sec:qecc){reference-type="ref" reference="sec:qecc"}, we build error-correcting codes using a set of EvenQuads cards. These are binary linear codes whose length is the number of cards and whose codewords all have even weight. We call such codes quad codes. The number of quads in a given set of cards corresponds to the number of codewords of weight 4. In Section [4](#sec:la2help){reference-type="ref" reference="sec:la2help"}, we describe the inequality satisfied by the dimension of a quad code. We prove that for any quad code of length $\ell$ and dimension $k$, we can find a set of cards in an EvenQuads-$2^n$ deck that realizes this code if and only if $n \ge \ell - k - 1$. In Section [5](#sec:codesandsymmetries){reference-type="ref" reference="sec:codesandsymmetries"}, we describe symmetries of an EvenQuads deck and show that quad codes correspond to sets of cards up to affine transformations. In Section [6](#sec:smallnumbercards){reference-type="ref" reference="sec:smallnumbercards"}, we calculate possible numbers of quads given up to 7 cards.
For a given number of cards and quads, we calculate the smallest deck size in which we can have the given number of cards having exactly the given number of quads. Section [7](#sec:weighenumerator){reference-type="ref" reference="sec:weighenumerator"} introduces the weight enumerator of a code and uses it to find how many quads are possible given 8 cards. We show that the number of possible quads is one of 0, 1, 2, 3, 5, 6, 7, and 14. In Section [8](#sec:noquads){reference-type="ref" reference="sec:noquads"}, we consider sets of cards from a given deck that do not contain a quad. We calculate the maximum size of such a set for small decks and provide bounds for larger decks. Section [9](#sec:squares){reference-type="ref" reference="sec:squares"} introduces error-correcting codes related to semimagic, magic, and strongly magic quad squares, which were defined in [@QuadSquares]. We calculate the smallest dimension for such codes and the weight enumerator corresponding to this smallest dimension. We also enumerate such codes in each possible dimension. # Preliminaries {#sec:preliminaries} ## EvenQuads An EvenQuads deck consists of $64 = 4^3$ cards with different objects. The cards have 3 attributes with 4 values each: - Number: 1, 2, 3, or 4 symbols. - Color: red, green, yellow, or blue. - Shape: square, icosahedron, circle, or spiral. A *quad* consists of four cards so that for each attribute, the cards must be either all the same, all different, or half and half. In EvenQuads, the dealer lays out 12 cards from the deck facing up. Then, without touching the cards, players look for four cards that form a quad. As soon as the player finds a quad, they have to yell out "quad" and whoever says it first gets to take the four cards, as long as they are correct. If the four cards they chose do not form a quad, they have to put the cards back. If the cards do form a quad, then four new cards are drawn from the deck. 
If players cannot find a quad after a while, the dealer may lay out an additional card. The game ends when no more cards are left in the deck, and there are no possible quads among the remaining cards. The winner of the game is the one with the most quads. We can assume, as suggested in [@CragerEtAl], that each attribute takes values in the set $\{0,1,2,3\}$. Then, four cards form a quad if and only if the bitwise XOR (also called parity sum) of the values in each attribute is zero. This is equivalent to saying that each attribute takes values in $\mathbb{Z}_2^2$ and four cards form a quad if and only if for each attribute the sum of the vector values in $\mathbb{Z}_2^2$ is the zero vector. Thus, we can view our cards as vectors in $\mathbb{Z}_2^6$. For generalizations, we can consider an EvenQuads deck of size $2^n$. It corresponds to vectors in $\mathbb{Z}_2^n$. It follows that four vectors $\vec{a}$, $\vec{b}$, $\vec{c}$, and $\vec{d}$ form a quad if and only if $$\vec{a} + \vec{b} + \vec{c} + \vec{d} = \vec{0}.$$ Consider four vectors $\vec{a}$, $\vec{b}$, $\vec{c}$, and $\vec{d}$ forming a quad. By a translation, we can assume that $\vec{a}$ is the origin. Then, from the above equation we get $\vec{d} = - \vec{b} - \vec{c} = \vec{b} + \vec{c}$. This means that vector $\vec{d}$ is in the same plane as the origin and vectors $\vec{b}$ and $\vec{c}$. Thus, four cards form a quad if and only if their endpoints belong to the same plane. In the rest of the paper, we often use numbers from 0 to $2^n-1$ inclusive to label the cards in the EvenQuads-$2^n$ deck. Sometimes, to better visualize quads, we represent them as quaternary strings: we convert every number to base 4 and pad it with zeros on the left to reach a needed length of $\lceil \frac{n}{2} \rceil$. When $n$ is odd, the first digit in a string representing a card can only be 0 or 1. For example, a quad in the standard deck can be represented as four numbers 0, 21, 42, and 63. 
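In this numeric encoding, the quad test reduces to a single bitwise XOR check, and any three distinct cards determine their fourth card by XOR. The following sketch (helper names are ours) verifies the example 0, 21, 42, 63:

```python
def is_quad(a, b, c, d):
    # Four cards form a quad iff their bitwise XOR vanishes.
    return a ^ b ^ c ^ d == 0

def complete_quad(a, b, c):
    # The unique fourth card completing three distinct cards to a quad.
    return a ^ b ^ c

# The quad 0, 21, 42, 63 from the text (000, 111, 222, 333 in quaternary).
assert is_quad(0, 21, 42, 63)
assert complete_quad(0, 21, 42) == 63
```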
The cards in quaternary notation are 000, 111, 222, and 333. Every quad is defined by three distinct cards. The fourth card in this quad can be uniquely found by bitwise XORing the three cards. In other words, the fourth card is the fourth point of the affine plane defined by the three initial cards. In particular, the EvenQuads-$2^n$ deck contains $\frac{\binom{2^n}{3}}{4}$ quads. Indeed, we can choose any three cards in $\binom{2^n}{3}$ ways, and these three cards always complete to a quad. Each quad is counted 4 times, so we have to divide by 4. The corresponding sequence is A016290: $$1, 14, 140, 1240, 10416, 85344, 690880, 5559680, \ldots.$$ Given four cards from an EvenQuads-$2^n$ deck, the probability that they form a quad is $\frac{1}{2^n-3}$. This follows from the fact that any three cards from this deck can be uniquely extended into a quad. Thus, given $m$ cards from an EvenQuads-$2^n$ deck, the expected number of quads is $\frac{\binom{m}{4}}{2^n-3}$. ## Error-correcting codes Consider a binary string $\boldsymbol{b}$ of length $\ell$. We can view it as a vector in $\mathbb{Z}_2^\ell$. The *Hamming distance* between two strings of equal length is the number of positions at which the corresponding symbols are different. Suppose we choose a set of binary strings $C$ from $\mathbb{Z}_2^\ell$ that forms a linear subspace such that the Hamming distance between any two distinct strings is at least 3. The strings in $C$ are called *codewords*. Such a set forms an error-correcting code of *length* $\ell$. Suppose a sender sends a message $\boldsymbol{b}$ that is one of the binary strings in $C$. We allow for noisy transmission, where not more than 1 bit can be corrupted during the transmission. The receiver receives a binary string $\boldsymbol{b_1}$, where the Hamming distance between $\boldsymbol{b}$ and $\boldsymbol{b_1}$ is not more than 1. In addition, the Hamming distance between $\boldsymbol{b_1}$ and any other codeword in $C$ is at least 2.
This way, the receiver can uniquely correct the corruption and figure out that the sent message is $\boldsymbol{b}$. We assume that our codes are *linear*, meaning that the set of codewords is closed under addition. The fact that the codewords form a subspace means that the number of codewords is a power of 2. We define the *weight* of a codeword to be the number of ones in its representation. Linearity of the code means that the smallest Hamming distance between two distinct codewords equals the smallest weight of a nonzero codeword. **Example 1**. Consider $C = \{000,111\}$. This is a code of length 3. Any received message corrupted by not more than one bit can be recovered by a majority vote. Linear binary codes with a minimum distance of 4 are widely used. Like Hamming codes with a minimum distance of 3, they can correct single-bit errors. Their advantage is that they also detect a double error, although they cannot correct it. Such codes are often known as SECDED (abbreviated from single error correction double error detection). # Quads and error-correcting codes {#sec:qecc} We can analyze the number of quads in sets of cards by converting a set of cards into an error-correcting code. Suppose we are given cards $\vec{a_1}$ through $\vec{a_\ell}$ from an EvenQuads-$2^n$ deck. We construct a linear binary code $C$ of length $\ell$. We say $\boldsymbol{c} = c_1 c_2 \cdots c_\ell$ is a codeword if and only if: - An even number of $c_1$ through $c_\ell$ are 1. - $c_1\vec{a_1}+\cdots+c_\ell\vec{a_\ell} = 0$. By our definition, every codeword has an even weight. As we require the codewords to have an even number of ones, the Hamming distance between any two codewords has to be even. For our codes, the minimum distance is at least 4. Indeed, suppose, to the contrary, that there are two codewords at distance 2. Then their sum is a codeword of weight 2. But weight 2 is impossible, as this would mean that we have two identical cards among our cards. **Example 2**.
Suppose we have four cards $\vec{a_1}$ through $\vec{a_4}$; then the length $\ell$ of the code is 4. If the cards form a quad, the only codewords are 0000 and 1111. The weight of the first codeword is 0, and the weight of the second codeword is 4. It is important to note that the $c_i$ are scalars and the $\vec{a_i}$ are vectors. The cards are in $\mathbb{Z}_2^n$, while the codeword $\boldsymbol{c}$ can also be viewed as a vector in the space $\mathbb{Z}_2^\ell$. We use the vector symbol for the vectors corresponding to cards and bold symbols for the vectors corresponding to codewords to emphasize that they belong to different spaces. Preliminary observations: - The word with all zeros is always a codeword. - If we have a quad among the cards, then the corresponding word with four ones is a codeword. - We cannot have a codeword with two ones. **Definition 1**. When we get code $C$ from a sequence of cards $\vec{a_1},\ldots,\vec{a_\ell}$, we say that the sequence of cards *realizes* code $C$. When the cards belong to the EvenQuads-$2^n$ deck, we say that the code $C$ *is realizable* in the deck of size $2^n$. # Linear Algebra to Help {#sec:la2help} ## From cards to codes We can represent the cards as an $n$-by-$\ell$ matrix $A$, consisting of zeros and ones. Each column is a card, and the $i$-th row collects the values of the $i$-th coordinate of each card. Then, the codewords are the vectors belonging to the nullspace of the matrix $A$. If we denote transpose by $T$, then the codeword $\boldsymbol{c}$ satisfies the equation $$A \boldsymbol{c}^T = 0.$$ The condition that a codeword $\boldsymbol{c}$ has to have an even number of ones is equivalent to $\boldsymbol{c}$ being orthogonal to the vector consisting of all ones in $\mathbb{Z}_2^\ell$. Let us denote by $\boldsymbol{1}$ the vector consisting of all 1s and by $\boldsymbol{0}$ the vector consisting of all 0s in $\mathbb{Z}_2^\ell$.
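The codeword conditions can also be checked by brute force. The sketch below (the function name is ours) enumerates the quad code realized by a list of cards: a word is a codeword exactly when it selects an even number of positions whose cards XOR to zero. It reproduces Example 2.

```python
def codewords(cards):
    """Enumerate the quad code realized by the given cards (brute force).

    A binary word of length len(cards) is a codeword iff it has an even
    number of ones and the XOR of the selected cards is zero.
    """
    ell = len(cards)
    words = []
    for mask in range(1 << ell):
        bits = tuple((mask >> i) & 1 for i in range(ell))
        if sum(bits) % 2 == 0:
            x = 0
            for b, card in zip(bits, cards):
                if b:
                    x ^= card
            if x == 0:
                words.append(bits)
    return words

# Example 2: four cards forming a quad realize the code {0000, 1111}.
assert sorted(codewords([0, 21, 42, 63])) == [(0, 0, 0, 0), (1, 1, 1, 1)]
```

As a sanity check, four cards with no quad among them, such as 0, 1, 2, 4, realize only the zero codeword.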
For every vector $\vec{a_i}$, where $1 \leq i \leq \ell$, let us consider the augmented vector $\vec{a_i}'$, where we add one more coordinate with the value 1. Let us consider matrix $A'$, where we add a row of all ones to the bottom of matrix $A$. In other words, the columns of $A'$ are formed by augmented vectors $\vec{a_i}'$. Then the code $C$ is exactly the nullspace of $A'$. This is because the vectors orthogonal to $\boldsymbol{1}$ are exactly the vectors that contain an even number of ones. In other words, the space of codewords $C$ is the dual space to the space of rows of $A'$. Sometimes, we denote our code $C$ as $C_A$ to emphasize that it is built by the set of cards corresponding to the matrix $A$. Let $\boldsymbol{u}$ and $\boldsymbol{v}$ be two vectors in $\mathbb{Z}_2^\ell$. The dot product $\boldsymbol{u} \cdot \boldsymbol{v}$ is the parity of the number of places where $\boldsymbol{u}$ and $\boldsymbol{v}$ are both 1. Let $C$ be a code. Then the *dual code* $C^*$ is the set of vectors $\boldsymbol{u}$ such that $\boldsymbol{u} \cdot \boldsymbol{v} =0$ for all vectors $\boldsymbol{v}$ of $C$. From standard linear algebra [@Axler], we get the following proposition. **Proposition 1**. *Let $C$ be a linear binary code of length $\ell$ that contains $2^k$ codewords. Then $C^*$ is a linear binary code with $2^{\ell-k}$ codewords. Furthermore, $C^{**}=C$.* It follows that given the code $C_A$, its dual code is a span of rows of $A'$. Note that a linear binary code has the Hamming distance of at least 4 between codewords if and only if the minimum number of ones in a nonzero codeword is at least 4. Let us call a linear binary code a *quad* code if each codeword has an even number of ones, and the minimum number of ones in a nonzero codeword is at least 4. In particular, code $C_A$ described above is a quad code. ## From codes to cards Suppose we have a quad code $C$ of length $\ell$ and dimension $k$. 
We want to construct a set of cards corresponding to it. We start with the known connection between the number of codewords and the code length [@Hamming]. **Lemma 2**. *Let $C$ be a binary code of length $\ell$ with a minimal Hamming distance of at least 4. Then $C$ contains at most $\frac{2^{\ell-1}}{\ell}$ codewords.* In other words, $$k \leq \ell - 1 - \lceil \log_2 \ell \rceil.$$ We are ready to prove our theorem that describes when we can find cards that realize the given code. **Theorem 3**. *Let $C$ be a quad code of length $\ell$ containing $2^k$ codewords, where $\ell> 0$. Then, the code is realizable in an EvenQuads-$2^n$ deck if and only if $$n \geq \ell-k-1.$$* *Proof.* We start with the dual space $C^*$. This space contains the vector $\boldsymbol{1}$ and has dimension $\ell-k$. We pick $\ell - k - 1$ independent vectors that together with $\boldsymbol{1}$ span $C^*$. Suppose $n < \ell - k - 1$; then the matrix $A'$ for any set of cards has fewer than $\ell -k$ rows. Thus, its nullspace has a dimension greater than $k$, implying that $C$ cannot equal its nullspace. For the other direction, suppose $n \geq \ell-k-1$; then in addition $n \ge \lceil \log_2 \ell \rceil$ by Lemma [Lemma 2](#lemma:cwbound){reference-type="ref" reference="lemma:cwbound"}. That means we can choose $n$ vectors that, together with $\boldsymbol{1}$, span $C^*$, by adding more vectors, possibly repeated, to the ones we found. We arrange these $n$ vectors as rows of a matrix $A$. The columns of this matrix are our cards. It remains to show that the columns are different. This follows from the fact that $C$ is the nullspace of the matrix $A'$. If the set of cards had identical cards, then $C$ would have contained a codeword of weight 2. As, by our assumption, this is not the case, all the cards must be different. ◻ # Codes and Symmetries {#sec:codesandsymmetries} There is a natural approach to the symmetries of quads using linear algebra. So far, we have looked at cards as elements of a vector space $\mathbb{Z}_2^n$.
A more conceptual way to do this is to see them as elements of the corresponding affine space (which is the same as a vector space, but without a chosen origin). This view emphasizes that the cards are equivalent to each other. The symmetries of the space are affine transformations: they consist of parallel translations and invertible linear transformations. We can view an affine transformation as a pair $(M,\vec{t})$, where $M$ is an invertible $n$-by-$n$ matrix over the field $\mathbb{F}_2$ and $\vec{t}$ is a translation vector in $\mathbb{Z}_2^n$. The pair acts on a vector $\vec{a}$ as $\vec{y} = M\vec{a} + \vec{t}$. We can simplify this description using augmented vectors. Indeed, $\vec{y} = M\vec{a} + \vec{t}$ is equivalent to the following $$\begin{bmatrix} \vec{y}^T \\1\end{bmatrix} = \left[\begin{array}{ccc|c}&M&& \vec{t}^T \\0&\cdots &0&1\end{array}\right]\begin{bmatrix} \vec{a}^T \\1 \end{bmatrix}.$$ Suppose $\vec{a'}$, $\vec{t'}$, and $\vec{y'}$ are the augmented vectors for $\vec{a}$, $\vec{t}$, and $\vec{y}$. Then, we can express the affine transformation as $$\vec{y'}^T = P \vec{a'}^T,$$ where $P$ is the invertible $(n+1)$-by-$(n+1)$ matrix above, obtained from $M$ by adding a zero row at the bottom and then attaching $\vec{t'}^T$ as the last column. Affine transformations preserve quads. Indeed, if the vectors $\vec{a_i}$, for $1 \leq i \leq 4$, form a quad, then the transformation produces four vectors $\vec{b_i} = M\vec{a_i} + \vec{t}$. We have $$\sum_{i=1}^4\vec{b_i} = \sum_{i=1}^4(M\vec{a_i} + \vec{t}) = \sum_{i=1}^4 M\vec{a_i} + \sum_{i=1}^4\vec{t} = M\sum_{i=1}^4\vec{a_i} + 4\vec{t} = 0,$$ proving that the vectors $\vec{b_i}$ form a quad too (here $4\vec{t} = \vec{0}$ because we work over $\mathbb{Z}_2$). **Theorem 4**. *If we change a set of cards according to an affine transformation, the corresponding code does not change.
Moreover, if two sets of cards correspond to the same code, then one of the sets can be achieved from the other by an affine transformation.* *Proof.* Consider $\ell$ cards $\vec{a_i}$, for $1 \leq i \leq \ell$, and the corresponding matrix $A'$ and code $C_A$, where $C_A$ is the nullspace of $A'$. Applying an affine transformation $(M,\vec{t})$ to vectors $\vec{a_i}$, we get vectors $\vec{b_i} = M\vec{a_i} + \vec{t}$. We denote the new augmented matrix as $B'$. We know that $B' = PA'$. Thus, the nullspaces of $A'$ and $B'$ are the same. It follows that the new code $C_B$ equals the code $C_A$ from before the transformation. For the second part, consider two sets of cards corresponding to the same code $C$. That means the augmented matrices $A'$ and $B'$ for both sets have the same nullspace $C$. It follows that the rows of $A'$ and $B'$ span the same space. That means we can represent the rows of $A'$ as linear combinations of the rows of $B'$. Thus, there exists a matrix $P$ such that $A' = PB'$. Moreover, the last rows of $A'$ and $B'$ are the same. Thus, we can find $P$ such that all the elements of the last row are zero, except for the last element, which is 1. Therefore, $P$ represents an affine transformation in our space. ◻ It follows that if two sets of cards correspond to the same code, the number of quads in each set is the same. # Possible number of quads for a small number of cards {#sec:smallnumbercards} We are interested in the number of possible quads given $\ell$ cards. Suppose that there exists a set of cards that produces $q$ quads with $\ell$ cards from an EvenQuads-$2^n$ deck. Since quads depend only on the cards themselves, any larger deck also contains $\ell$ cards with $q$ quads. Thus, it is enough to find the smallest such deck. Denote by $D(\ell,q)$ the smallest deck size $2^n$ such that there exist $\ell$ cards from the EvenQuads-$2^n$ deck that form exactly $q$ quads. If such a deck does not exist, we define $D(\ell,q)$ as $\infty$.
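As a quick computational illustration of $D(\ell, q)$ (a sketch of our own, not part of the original argument), quads among a set of cards can be counted by brute force: encode each card as an integer and test every 4-subset for a zero bitwise XOR.

```python
from itertools import combinations

def count_quads(cards):
    """Count quads: 4-subsets of cards whose bitwise XOR is zero."""
    return sum(1 for a, b, c, d in combinations(cards, 4)
               if a ^ b ^ c ^ d == 0)

# Cards 0, 1, 2, 3, 4 contain exactly one quad, namely {0, 1, 2, 3}.
print(count_quads([0, 1, 2, 3, 4]))   # 1
# Cards 0, 1, 2, 4, 8 form no quad in any deck containing them.
print(count_quads([0, 1, 2, 4, 8]))   # 0
# The whole EvenQuads-8 deck forms 14 quads.
print(count_quads(range(8)))          # 14
```

Such a brute-force check is enough to verify the small cases discussed in this section.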
The number $A(\ell,4)$ is the maximum number of codewords in a binary code of length $\ell$ with minimum Hamming distance 4. This number is described by sequence A005864 in the OEIS [@OEIS], where the sequence starts with index 1: $$1,\ 1,\ 1,\ 2,\ 2,\ 4,\ 8,\ 16,\ 20,\ 40,\ 72,\ 144,\ \ldots.$$ As each quad gives us a codeword, and there is also an all-zero codeword, the maximum number of quads among $\ell$ cards is bounded by A005864$(\ell) - 1$. Thus, for $q$ greater than A005864$(\ell) - 1$, we have $D(\ell,q) = \infty$. We start by considering small examples without using the machinery of error-correcting codes. We cannot have a quad if we have fewer than 4 cards. The next two examples cover the cases of 4 and 5 cards. **Example 3**. Suppose we have 4 cards. If the deck is of size 4, then we have 1 quad. If the deck is bigger, we can have 0 quads by choosing any three cards, and then for the fourth card, we can pick any other card that avoids the quad. As A005864(4) = 2, the maximum number of quads is not more than 1; thus, we have covered all cases. **Example 4**. Suppose we have 5 cards. It follows that the deck size is at least 8. As A005864(5) = 2, the maximum number of quads is not more than 1. We can prove directly that more than 1 quad is not possible. Indeed, if there are two quads, their intersection has 3 cards. But 3 cards define a quad uniquely. We can pick any quad and an extra card for an example of 1 quad among 5 cards. For example, 0, 1, 2, 3, and 4. If the deck size is 8, then 5 cards always contain a quad: we leave it for the reader to check this. For a larger deck, we can use cards 0, 1, 2, 4, and 8 for an example of 5 cards without a quad. Table [1](#table:possibleNumQuads){reference-type="ref" reference="table:possibleNumQuads"} shows $D(\ell,q)$ for $\ell < 8$. The first column describes the number of cards, and the first row describes the number of quads.
If the number of cards is $\ell$ and the number of quads is $q$, the entry corresponding to $\ell$ and $q$ shows $D(\ell,q)$. We replaced the infinity sign with an empty entry so as not to clutter the table. **Proposition 5**. *Table [1](#table:possibleNumQuads){reference-type="ref" reference="table:possibleNumQuads"} shows $D(\ell,q)$ for $\ell < 8$.*

  \# of Cards   0    1    2    3    4   5   6   7
  ------------- ---- ---- ---- ---- --- --- --- ---
  1             1
  2             2
  3             4
  4             8    4
  5             16   8
  6             16   16        8
  7             32   32   16   16               8

  : $D(\ell,q)$ for $\ell < 8$.

*Proof.* The cases of 1 through 5 cards are discussed above. Given the number of quads, we want to find a corresponding quad code with the maximum possible number of codewords since, by Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, codes with more codewords are realizable in smaller decks. Suppose we have 6 cards with 0 quads. The corresponding quad code can have at most two codewords, 000000 and 111111. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, the code is realizable for $n = 4$. Suppose we have 6 cards with $q$ quads, where $q >0$. Then, 111111 cannot be a codeword, as it has a distance of 2 from any codeword corresponding to a quad. Since the code is linear, the total number of codewords is a power of two, bounded by A005864(6) = 4. So, the total number of codewords, in this case, can be 2 or 4, with the corresponding number of quads being 1 or 3. The code with 2 codewords can be 000000 and 001111. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, the code is realizable in the deck of size 16. The code with 4 codewords can be the following: 000000, 001111, 110011, and 111100. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, the code is realizable in the deck of size 8. Suppose we have 7 cards with $q$ quads. The number of codewords is a power of 2, and A005864(7) = 8.
This means that more than 7 quads are impossible. First, we show that 4, 5, or 6 quads are impossible. We start by showing that we cannot have more than 1 codeword of weight 6. Indeed, two different codewords of length 7, each having six ones, have to differ in exactly two places. That means the Hamming distance between them is 2, which is a contradiction. Suppose we have at least 4 quads; then the total number of codewords is at least 8. As we have 0 or 1 codewords of weight 6, we have either 6 or 7 codewords of weight 4, proving that 4 or 5 quads are impossible. Suppose we have 6 quads, meaning that we have 6 codewords of weight 4 and one codeword of weight 6. Without loss of generality, we can assume that the last digit of the codeword $\boldsymbol{c}$ of weight 6 is zero. It follows that the codewords of weight 4 have to have 1 as the last digit, as otherwise, they will be at distance 2 from the codeword $\boldsymbol{c}$. Since the code is linear, adding two different codewords $\boldsymbol{a}_1$ and $\boldsymbol{a}_2$ of weight 4 gives another nonzero codeword. However, since the last digit is 1 in both codewords, the sum will have a 0 in the last digit. Thus, it must equal $\boldsymbol{c}$. If there is another codeword $\boldsymbol{a}_3$ of weight 4, then again we see that $\boldsymbol{a}_1+\boldsymbol{a}_3 = \boldsymbol{c} = \boldsymbol{a}_1 + \boldsymbol{a}_2$. Thus, $\boldsymbol{a}_3 = \boldsymbol{a}_2$, creating a contradiction. The remaining possibility is 7 quads, or, equivalently, 7 codewords of weight 4. An example of these 7 codewords is as follows: 1111000, 1100110, 0011110, 1010011, 0101011, 0110101, and 1001101. This code is realizable in a deck of size 8. For 0 quads, the largest code contains 2 codewords: 0000000 and 1111111. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, it is realizable in a deck of size 32.
For 1 quad, the code up to symmetries contains the two codewords 0000000 and 0001111 and cannot contain more codewords of weight 4. As we saw before, it cannot contain more than 1 codeword of weight 6. Given that the total number of codewords is a power of 2, the code must contain exactly 2 codewords. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, the code is realizable in a deck of size 32. For 2 or 3 quads, the code needs to have 4 codewords. The code 0000000, 1111000, 0001111, and 1110111 corresponds to 2 quads, while the code 0000000, 1111000, 1100110, and 0011110 corresponds to 3 quads. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, these codes are realizable in a deck of size 16. ◻ Looking at Table [1](#table:possibleNumQuads){reference-type="ref" reference="table:possibleNumQuads"}, one can notice some patterns. **Proposition 6**. *Suppose that $D(\ell, q) = 2^n$ for some values of $\ell$, $q$, and $n$. Then $D(\ell+1, q)$ is finite and not greater than $2^{n+1}$.* *Proof.* Suppose that $D(\ell, q) = 2^n$ for some values of $\ell$, $q$, and $n$. Let us denote the set of $\ell$ cards in the deck of size $2^n$ with $q$ quads as $S$. First proof. We add to the set $S$ an extra card with numerical value $2^n$, which is available in the deck of size $2^{n+1}$, forming a new set $S'$. The new card cannot form a quad with any three cards in $S$, as its highest bit cannot be canceled. Thus, the number of quads in the set $S'$ is $q$, implying that $D(\ell+1, q) \le 2^{n+1}$. Second proof. Consider the quad code $C$ corresponding to the set of cards $S$. We append 0 to each codeword to get a quad code of length $\ell+1$ and the same dimension as code $C$. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, this code is realizable in the deck of size $2^{n+1}$.
As this code has the same number of codewords of weight 4 as code $C$, the cards that realize the code form $q$ quads, implying $D(\ell+1, q) \le 2^{n+1}$. ◻ Now, we want to provide examples of sets of cards for all nonempty cells in Table [1](#table:possibleNumQuads){reference-type="ref" reference="table:possibleNumQuads"}. We already described the examples for up to 5 cards. In the proof of Proposition [Proposition 6](#prop:nondecreasingcolumns){reference-type="ref" reference="prop:nondecreasingcolumns"}, we also showed how to build an example for an entry in the table that is twice the entry in the previous row and the same column. Thus, we only need to provide examples for 6 cards and 0 quads, 6 cards and 3 quads, 7 cards and 2 quads, and 7 cards and 7 quads. All these examples are realizable in the EvenQuads-16 deck and shown in Table [2](#tab:cardexamples){reference-type="ref" reference="tab:cardexamples"} using quaternary notation.

  \# of cards   \# of quads   Cards
  ------------- ------------- ----------------------------
  6             0             00, 01, 02, 10, 20, 33
  6             3             00, 01, 02, 10, 13, 03
  7             2             00, 02, 10, 13, 20, 21, 32
  7             7             00, 01, 02, 03, 10, 11, 12

  : Card examples.

# Weight enumerator {#sec:weighenumerator} We calculated possible numbers of quads for a small number of cards. To make more progress, we want to introduce the weight enumerator. The *weight enumerator* $W(x,y)$ of a binary code $C$ is given by a polynomial $$W(x,y) = \sum_{\boldsymbol{c} \in C} x^{\ell-w(\boldsymbol{c})}y^{w(\boldsymbol{c})},$$ where $\ell$ is the length of the code, $w(\boldsymbol{c})$ is the weight of $\boldsymbol{c} \in C$, and the sum runs over all codewords. Recall that the length of a quad code is the number of cards that define it. **Example 5**. Suppose four cards form a quad as in Example [Example 3](#ex:ecc4){reference-type="ref" reference="ex:ecc4"} above. The weight enumerator is $W(x,y) = x^4 + y^4$. Suppose we have 6 cards with $q$ quads.
Then, from Proposition [Proposition 5](#prop:upto7cards){reference-type="ref" reference="prop:upto7cards"} and its proof, the associated weight enumerator has to be $x^6 + qx^2y^4$, plus the term $y^6$ when 111111 is a codeword (which is possible only for $q = 0$), where $q$ is either 0, 1, or 3. There is a powerful method that can help us with more advanced examples. Let us define the dual polynomial $W^*$ of $W$ by $$W^* = \frac{1}{W(1,1)}W(x+y,x-y).$$ The MacWilliams identity shows how the dual code's weight enumerator relates to the original code's weight enumerator. **Lemma 7** (MacWilliams identity). *Let $C$ be a linear binary code and $W$ be its weight enumerator. Then the dual code $C^*$ has the weight enumerator $W^*$.* In particular, $W^*$ is a polynomial with non-negative integer coefficients. Note that $W(1,1)=|C|=2^k$ is the number of codewords in $C$ because $W(1,1)$ is the sum of the coefficients of the polynomial $W(x, y)$. The dual code $C^*$ is a linear binary code where each codeword has the same length $\ell$ as the original code. We start with an example of 7 cards. We already covered this example in Section [6](#sec:smallnumbercards){reference-type="ref" reference="sec:smallnumbercards"}. However, here, we want to show how to use the weight enumerator. **Example 6**. Suppose we have 7 cards with $q$ quads, where $q$ is 4, 5, or 6. Then, the corresponding code has to have 8 codewords, and the associated weight enumerator has to be $x^7 + qx^3y^4 + (7 -q)xy^6$. Then, the dual weight enumerator is $$\frac{1}{8}((x+y)^7 + q(x+y)^3(x-y)^4 + (7 -q)(x+y)(x-y)^6).$$ Ignoring the multiplier $\frac{1}{8}$, we get $$8 x^7 + (4q- 28) x^6 y + (84 - 12q) x^5 y^2 + 8 q x^4 y^3 + 8 q x^3 y^4 + (84 - 12q) x^2 y^5 + (4q- 28) x y^6+ 8 y^7.$$ The coefficient $4q-28$ is negative for the given values of $q$. Therefore, such codes cannot exist. **Proposition 8**.
*For 8 cards, the possible numbers of quads are 0, 1, 2, 3, 5, 6, 7, and 14, realizable in the smallest decks of sizes 64, 32, 32, 32, 16, 16, 16, and 8, correspondingly.* *Proof.* We already know that 0, 1, 2, 3, and 7 quads are possible from Proposition [Proposition 6](#prop:nondecreasingcolumns){reference-type="ref" reference="prop:nondecreasingcolumns"}. We also know that the maximum number of codewords is bounded by A005864$(8)$, which is 16. The coefficient of $y^8$ in the weight enumerator is at most 1, as only one codeword of weight 8 might exist. Moreover, if 11111111 is a codeword, then there are no codewords of weight 6. We know that the EvenQuads-8 deck has 14 quads, the maximum possible number, as every three cards complete to a quad. Suppose the dimension of the corresponding code is 4; that is, there are 16 codewords. We consider possible weight enumerators. The enumerator $x^8+15x^4y^4$ is impossible, as 15 quads are impossible. The enumerator $x^8+14x^4y^4+y^8$ corresponds to 14 quads, which we already discussed. We are left to investigate enumerators $x^8+qx^4y^4+(15-q)x^2y^6$, where $q$ is the number of quads. The dual weight enumerator (ignoring the multiplier $1/16$) is $$\begin{gathered} (x + y)^8 + q (x + y)^4 (x - y)^4 + (15 - q) (x + y)^2 (x - y)^6 = \\ 16x^8 + (-52+4q)x^7y + (88-8q)x^6y^2 + (116-4q)x^5y^3 + (-80+16q)x^4y^4 + \\ (116-4q)x^3y^5 + (88-8q)x^2y^6 + (-52+4q)xy^7 + 16y^8.\end{gathered}$$ The coefficient $-52+4q$ is negative for $q<13$, and the coefficient $88-8q$ is negative for $q>11$, so the code with such an enumerator does not exist. It follows that we cannot have 8 to 13 quads inclusive. Suppose the dimension of the corresponding code is 3; that is, there are 8 codewords. We know that it is possible to have 7 quads with 7 cards. That means it is possible to have the same number of quads with 8 cards. We know that the dimension of the corresponding code cannot be 4; therefore, it has to be 3. 
It follows that 7 quads with 8 cards are realizable in the EvenQuads-16 deck. For fewer quads, we need to check the weight enumerators $$x^8+qx^4y^4+(7-q)x^2y^6 \quad \textrm{ and } \quad x^8+6x^4y^4+y^8.$$ Consider the dual enumerator for the first one: $$\begin{gathered} (x+y)^8+q(x+y)^4(x-y)^4+(7-q)(x+y)^2(x-y)^6 \\ =8x^8+(-20+4q)x^7y+(56-8q)x^6y^2+(84-4q)x^5y^3+(16q)x^4y^4 \\ +(84-4q)x^3y^5+(56-8q)x^2y^6+(-20+4q)xy^7+8y^8.\end{gathered}$$ The coefficient $-20+4q$ is negative for $q<5$, so we cannot have a code of dimension 3 with fewer than 5 quads. Now, we show that 5 and 6 quads are possible. We start with 5 quads. Our weight enumerator suggests that we have two codewords of weight six. Without loss of generality, we can assume that these are codewords 00111111 and 11001111. We now pick one codeword of weight 4 such that it is at distance 4 from both the codewords of weight 6. We use the codeword 01010011. By linearity, the other codewords are 11110000, 01101100, 10011100, and 10100011. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, this code is realizable in the EvenQuads-16 deck. The 6-quad set has an elegant associated code, where the codewords are $$00000000,\ 11110000,\ 00001111,\ 11001100,\ 00110011,\ 00111100,\ 11000011,\ 11111111.$$ Again, this code is realizable in the EvenQuads-16 deck. Suppose we have 0 quads; then the nonzero codewords can only have weight 6 or 8. Only one codeword of weight 8 exists. In addition, if the code had two different codewords of weight six, their bitwise XOR would have weight 2 or 4, and each of these weights is forbidden; a weight-6 codeword and the weight-8 codeword cannot coexist either, as their XOR would have weight 2. Thus, such a code contains at most 2 codewords. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, this code is realizable in the EvenQuads-64 deck. Suppose we have 1 quad. Without loss of generality, we can assume that the corresponding codeword is 11110000.
We cannot have a codeword of weight 8, as otherwise 00001111 will also be a codeword, thus adding another quad. Then, if there is a weight-6 codeword, its last 4 digits must be 1. Without loss of generality, we can assume that it is 00111111. Then the only other possible weight-6 codeword is 11001111. Thus, the maximum number of codewords is 4. By Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, this code is realizable in the EvenQuads-32 deck. Suppose we have 2 quads. Per our previous discussion, the dimension of the code has to be 2. One example of such a code is 00000000, 00001111, 11110000, and 11111111. This code is realizable in the EvenQuads-32 deck. Suppose we have 3 quads. Again, the code dimension has to be 2. One example of such a code is 00000000, 00001111, 00110011, and 00111100. This code is realizable in the EvenQuads-32 deck. Thus, the only possible numbers of quads in a set of 8 cards are 0, 1, 2, 3, 5, 6, 7, and 14, and they are realizable in the decks of sizes 64, 32, 32, 32, 16, 16, 16, and 8, correspondingly. ◻ **Example 7**. The example of 5 quads among 8 cards in the EvenQuads-16 deck is, using quaternary notation, 00, 01, 02, 03, 10, 20, 30, and 33. The example of 6 quads among 8 cards in the EvenQuads-16 deck is, using quaternary notation, 03, 11, 12, 13, 21, 30, 31, and 33. The weight enumerator was very useful to exclude some cases. However, 8 cards are not too many, and it might be possible to prove the proposition above without the weight enumerator. For example, one can prove directly that the maximum number of codewords of weight 6 is 2. Indeed, the 2 zeros in two different codewords of weight 6 cannot overlap, as otherwise, adding them will give a codeword of weight 2. For the sake of contradiction, we assume there are 3 codewords of weight 6. Without loss of generality, assume these 3 codewords are 00111111, 11001111, and 11110011.
Then, we can add the three codewords to get a codeword with 2 ones, which is a contradiction. # No-quads sets {#sec:noquads} Suppose we want to find sets of cards that do not contain any quads. We call such sets *no-quads* sets. These sets are similar to no-sets in the game of SET, which are also called cap sets [@BB]. Mathematicians are very interested in finding the maximum number of cards in a no-set. Similarly, they are interested in finding the maximum number of cards in a no-quads set. Let $F(n)$ be the maximum number of cards in a no-quads set in the EvenQuads-$2^n$ deck. For the standard deck, it was shown in [@CragerEtAl] that $F(6) = 9$. Translating this to codes, we can say that we are looking for quad codes of length $\ell$ that do not contain codewords of weight 4. In other words, we are looking for quad codes with a minimum distance of 6. A standard technique gives a bijection between quad codes of length $\ell$ and distance at least 6 and linear binary codes of length $\ell-1$ and distance at least 5. We can remove the last digit from a quad code $C$ to get a linear code of length $\ell-1$ where the distance decreases by not more than one. On the other hand, given a linear binary code $D$ of length $\ell-1$ and minimum distance at least 5, we can obtain a quad code by appending a parity-check bit to the end of every codeword in $D$. Thus, our task is equivalent to finding linear binary codes of length $\ell -1$ with a minimum distance of 5. ## Small decks Let $B(\ell)$ be the largest possible number of codewords of length $\ell$ in a linear binary code with a minimal distance of at least 6. Table [3](#table:b){reference-type="ref" reference="table:b"} shows the values of $B(\ell)$ for $\ell \le 12$, which we calculated manually.

  $\ell$      1   2   3   4   5   6   7   8   9   10   11   12
  ----------- --- --- --- --- --- --- --- --- --- ---- ---- ----
  $B(\ell)$   1   1   1   1   1   2   2   2   4   4    4    8

  : Values of $B(\ell)$ for small $\ell$.
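The table entries can be sanity-checked computationally; below is a small sketch of our own, with the generator choices taken from the examples that follow. Codewords are encoded as bitmasks, the code is the XOR-span of its generators, and the minimum distance of a linear code equals its minimum nonzero weight.

```python
def span(generators):
    """The linear code spanned by the generators: all XOR-combinations."""
    words = {0}
    for g in generators:
        words |= {w ^ g for w in words}
    return words

def min_weight(words):
    """Minimum Hamming weight of a nonzero codeword."""
    return min(bin(w).count("1") for w in words if w)

# A length-9 code matching B(9) = 4, generated by 111111000 and 000111111.
code9 = span([0b111111000, 0b000111111])
print(len(code9), min_weight(code9))    # 4 6

# A length-12 code matching B(12) = 8, with three weight-6 generators.
code12 = span([0b111111000000, 0b000000111111, 0b000111111000])
print(len(code12), min_weight(code12))  # 8 6
```

A check like this confirms that the generators span codes of the claimed sizes with no codeword of weight less than 6.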
We give examples of codewords in $B(\ell)$ where $\ell$ is 1, 6, 9, and 12. Other examples with the same values of $B(\ell)$ can be constructed by appending zeros. When $\ell =1$, the only codeword is 0. When $\ell = 6$, the 2 codewords are 000000 and 111111. When $\ell = 9$, an example of a code with 4 codewords is 000000000, 111111000, 000111111, and 111000111. When $\ell = 12$, we have $B(12) = 8$, and an example is the code with codewords 000000000000, 111111000000, 000000111111, 111111111111, 000111111000, 111000111000, 111000000111, and 000111000111. **Theorem 9**. *If $\ell > 0$, we have $F(\ell-\log_2 B(\ell)-1)\ge \ell$. In addition, if $B(\ell) = B(\ell+1)$, then $F(\ell-\log_2 B(\ell)-1) = \ell$.* *Proof.* Consider a quad code $C$ of length $\ell$ with $B(\ell)$ codewords. Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"} implies there is a set of cards in the EvenQuads-$2^{\ell-\log_2 B(\ell)-1}$ deck that realizes $C$. Since all nonzero codewords of $C$ have at least 6 ones, this set of cards must be a no-quads set. Since $C$ has length $\ell$, this set contains $\ell$ cards, proving the first part of the theorem. Assume to the contrary that $B(\ell) = B(\ell + 1)$ but $F(\ell-\log_2 B(\ell)-1) > \ell$; that is, setting $n = \ell - \log_2 B(\ell) - 1$ as in the definition of $F(n)$, some no-quads set in the deck of size $2^n$ has more than $\ell$ cards. So, there is a code $C$ corresponding to this no-quads set that has length more than $\ell$. From the definition of $B(\ell)$, the number of codewords in $C$ is not more than $B(\ell + 1) = B(\ell)$. By Theorem 3, code $C$ is only realizable in a deck of dimension at least $\ell+1 - \log_2 B(\ell) - 1 = \ell - \log_2 B(\ell)$. This contradicts the fact that $n = \ell - \log_2 B(\ell) - 1$.
◻ We can use Theorem [Theorem 9](#thm:noquads){reference-type="ref" reference="thm:noquads"} and Table [3](#table:b){reference-type="ref" reference="table:b"} to calculate the maximum sizes of no-quads sets for small decks. Table [4](#table:a){reference-type="ref" reference="table:a"} shows such values, where the first row is $n$, corresponding to the deck size of $2^n$, and for the second row we choose $\ell$ so that $B(\ell) = B(\ell+1)$ and $n = \ell-\log_2 B(\ell)-1$. This allows us to calculate $F(n)$ precisely. The theorem above confirms that $F(1)=2$ (when $\ell=2$), $F(2)=3$ (when $\ell=3$), $F(3)=4$ (when $\ell=4$), $F(4)=6$ (when $\ell=6$), $F(5)=7$ (when $\ell=7$), $F(6)=9$ (when $\ell=9$), $F(7)=10$ (when $\ell=10$), and $F(8)=12$ (when $\ell=12$).

  $n$                 1   2   3   4   5   6   7
  ------------------- --- --- --- --- --- --- ----
  $\ell$ and $F(n)$   2   3   4   6   7   9   10

  : Values of $F(n)$ for small $n$.

## Bounds for no-quads sets In [@CragerEtAl], the maximum size of a no-quads set was calculated only for the standard deck. Here, we provide bounds for any deck size using bounds from coding theory [@ConwaySloane]. In particular, we use the notion of a Hamming bound. Let $A_{q}(\ell,d)$ denote the maximum possible size of a $q$-ary block code $C$ of length $\ell$ and minimum Hamming distance $d$. Then, the Hamming bound is: $$A_{q}(\ell,d) \leq \frac{q^{\ell}}{\sum _{{k=0}}^{t}{\binom {\ell}{k}}(q-1)^{k}},$$ where $t=\left\lfloor {\frac {d-1}{2}}\right\rfloor$. In particular, this bound estimates $B(\ell)$ for larger values: $B(\ell) \le A_{2}(\ell-1,5)$, which allows us to get a bound for any deck size. **Theorem 10**. *If a no-quads set of $\ell$ cards exists in some deck, then the number of cards in the deck must be at least $\frac{\ell^2- \ell +2}{2}$.* *Proof.* Consider a no-quads set with $\ell$ cards. It corresponds to a quad code of length $\ell$ with a minimal distance of at least 6. Delete any bit, producing a binary code of length $\ell - 1$ with a minimal distance of at least 5.
In the Hamming bound above, we substitute $q=2$, $d=5$, and use $\ell-1$ for the length of the code to get $$A_{2}(\ell-1,5)\leq \frac{2^{\ell-1}}{\sum _{{k=0}}^{2}{\binom {\ell-1}{k}}(2-1)^{k}} = \frac{2^{\ell-1}}{\binom {\ell-1}{0} + \binom {\ell-1}{1} + \binom {\ell-1}{2}} =\frac{2^{\ell}}{\ell^2 - \ell + 2}.$$ The dimension $k$ of the code satisfies $2^k \le \frac{2^\ell}{\ell^2 - \ell + 2}$, implying that $2^{\ell-k} \ge \ell^2-\ell+2$. Then, by Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, this code can only be realized in a deck with at least $2^{\ell-k-1} \ge \frac{\ell^2- \ell +2}{2}$ cards. ◻ The values of $\frac{\ell^2-\ell+2}{2}$ for $\ell$ from 1 to 20 are $$1,\ 2,\ 4,\ 7,\ 11,\ 16,\ 22,\ 29,\ 37,\ 46,\ 56,\ 67,\ 79,\ 92,\ 106,\ 121,\ 137,\ 154,\ 172,\ 191.$$ These are the initial terms of the famous sequence A000124 in the OEIS database [@OEIS], called the lazy caterer's numbers. The sequence describes the maximum number of pieces formed when slicing a pizza with $\ell$ straight cuts. The sequence can also be described as triangular numbers plus 1. We summarize the lower bounds for small deck dimensions in Table [5](#tab:lbds){reference-type="ref" reference="tab:lbds"}, where we put the value in bold when we know the bound is tight. The value in the second row is $\lceil \log_2 \textrm{A000124}(\ell) \rceil$.

  $\ell$   4       5   6   7       8   9       10   11   12   13   14   15
  -------- ------- --- --- ------- --- ------- ---- ---- ---- ---- ---- ----
  $n$      **3**   4   4   **5**   5   **6**   6    6    7    7    7    7

  : Lower bounds for the deck dimension given the size of a no-quads set.

On the other hand, we can estimate when a no-quads set of cards has to exist. **Theorem 11**. *If $\binom{\ell}{4}+3< 2^n$, there exists a no-quads set of $\ell$ cards in an EvenQuads-$2^n$ deck.* *Proof.* We use the probabilistic method. Take a random set of $\ell$ cards. The number of sets of four cards among these cards is $\binom{\ell}{4}$.
Each has a probability of $\frac{1}{2^n-3}$ of being a quad: the fourth card must equal the bitwise XOR of the other three, and it is equally likely to be any of the $2^n-3$ remaining cards. So the probability that a quad exists among our $\ell$ cards is at most $\frac{\binom{\ell}{4}}{2^n-3}$. Consequently, if $\binom{\ell}{4} < 2^n-3$, this probability is less than 1, so there must exist a no-quads set of $\ell$ cards. ◻ We summarize the upper bounds for small sizes of no-quads sets in Table [6](#tab:lbc){reference-type="ref" reference="tab:lbc"}, where we put the value in bold when we know the bound is tight.

  $n$      3       4       5   6   7   8    9    10
  -------- ------- ------- --- --- --- ---- ---- ----
  $\ell$   **4**   **5**   6   7   8   10   12   14

  : Upper bounds for the number of cards in a no-quads set given the deck dimension.

# Error-correcting codes and quad squares {#sec:squares} In our paper studying quad squares [@QuadSquares], we defined a *semimagic quad square* as a 4-by-4 square of EvenQuads cards such that each row and column forms a quad. We can index the 16 cards in a square by numbers from 1 to 16 going left-to-right and top-to-bottom. Then, the property that the set of cards forms a semimagic square is equivalent to the fact that the corresponding quad code contains the four codewords $$1111000000000000,\ 0000111100000000,\ 0000000011110000,\ 0000000000001111$$ corresponding to rows, and the four codewords $$1000100010001000,\ 0100010001000100,\ 0010001000100010,\ 0001000100010001$$ corresponding to columns. We denote this code as $C_s$; it is a vector space of dimension 7, as the 4 codewords corresponding to rows sum to the same value as the 4 codewords corresponding to columns. Similarly, a *magic quad square* is a 4-by-4 square of EvenQuads cards such that each row, column, and diagonal forms a quad. That means the corresponding code $C_m$ must also contain the two codewords $$1000010000100001 \textrm{ and } 0001001001001000$$ corresponding to diagonals. The codewords span a vector space of dimension 8: we add two vectors to $C_s$, but we have one more dependency, namely that rows 1 and 4 plus columns 2 and 3 plus both diagonals sum to 0.
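The dimension counts for $C_s$ and $C_m$ can be double-checked with a short rank computation over $\mathbb{F}_2$; the sketch below is our own, with codewords encoded as 16-bit masks (bit 15 corresponding to position 1).

```python
def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors via Gaussian elimination."""
    pivots = {}  # leading-bit position -> reduced basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = v
                break
            v ^= pivots[lead]  # eliminate the leading bit and continue
    return len(pivots)

rows = [0b1111000000000000, 0b0000111100000000,
        0b0000000011110000, 0b0000000000001111]
cols = [0b1000100010001000, 0b0100010001000100,
        0b0010001000100010, 0b0001000100010001]
diags = [0b1000010000100001, 0b0001001001001000]

print(gf2_rank(rows + cols))           # 7: one dependency among rows and columns
print(gf2_rank(rows + cols + diags))   # 8: the two diagonals add only one dimension
```

The second call confirms the dependency described above: of the two diagonal codewords, only one is independent of the row and column codewords.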
Both codes contain the codeword consisting of all ones. This means there is a bijection between codewords of weights $w$ and $16-w$. Two codewords correspond to each other if their bitwise XOR equals all ones. That means the weight enumerator has to be a symmetric function. We wrote a program to calculate the weight enumerator for both codes. The weight enumerator for $C_s$ is $$W_{C_s}(x, y) = x^{16} + 8x^{12}y^4 + 16x^{10}y^6 + 78x^8y^8 + 16x^6y^{10} + 8x^4y^{12} + y^{16}.$$ This weight enumerator can be calculated manually, too. For example, the codewords of weight 6 are in a bijection with codewords that can be formed as a bitwise XOR of two codewords that correspond to a row and a column of the square. Thus, there should be 16 of them. The weight enumerator for $C_m$ is $$W_{C_m}(x, y) = x^{16} + 12x^{12}y^4 + 64x^{10}y^6 + 102x^8y^8 + 64x^6y^{10} + 12x^4y^{12} + y^{16}.$$ That means every magic quad square contains 12 quads: 4 rows, 4 columns, 2 diagonals, and 2 more. For the two extra quads, we observe that $1000010000100001+0100010001000100+0000000011110000+0001000100010001+1111000000000000 =0010000110000100$. Thus, 0010000110000100 is a quad, and by symmetry, 0100100000010010 is too. We can visualize them in the square as broken diagonals in Table [7](#tab:extraquads){reference-type="ref" reference="tab:extraquads"}, where X marks cards for one quad and Y for the other.

  ----- ----- ----- -----
        X     Y
  X                 Y
  Y                 X
        Y     X
  ----- ----- ----- -----

  : Two extra quads in a magic square.

In [@QuadSquares], a *strongly magic quad square* was defined to be a square such that for any four cards, if their $x$-coordinates are all the same, half and half, or all different, and their $y$-coordinates are all the same, half and half, or all different, then the four cards form a quad. Equivalently, if the coordinates of four cards form a quad, the cards form a quad. We denote this code as $C_{sm}$.
Its dimension is 11, with the weight enumerator $$W_{C_{sm}}(x, y) = x^{16} + 140x^{12}y^4 + 448x^{10}y^6 + 870x^8y^8 + 448x^6y^{10} + 140x^4y^{12} + y^{16}.$$ The code $C_{sm}$ is special: if we drop any particular bit from all codewords, we get a code of length 15 with Hamming distance 3 and dimension 11. That means the result is a perfect code. In particular, it is the Hamming(15,11) code. **Theorem 12**. *For each type of quad square (semimagic, magic, strongly magic), Table [8](#table:dimensions){reference-type="ref" reference="table:dimensions"} shows how many corresponding codes are in each dimension and the smallest size of the deck of cards that realizes it.*

  Dimension        7     8     9      10     11
  ---------------- ----- ----- ------ ------ -----
  Semimagic        1     159   2531   2823   112
  Magic                  1     43     85     10
  Strongly magic                             1
  Deck size        256   128   64     32     16

  : The number of codes in each dimension.

*Proof.* From Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, we know that each quad code is realizable in some deck. Moreover, from Theorem [Theorem 4](#thm:equivalence){reference-type="ref" reference="thm:equivalence"}, we know that the cards corresponding to a code are defined up to an affine transformation. In [@QuadSquares], it was shown that in the EvenQuads-$2^n$ deck, there are $$\begin{aligned} 2^n(2^n-1)&(2^n-2)(2^n-4)(2^n-8)\cdot \\ & (112 + 2823 (2^n - 16) + 2531 (2^n - 16) (2^n - 32) + 159 (2^n - 16) (2^n - 32) (2^n-64) + \\ &(2^n - 16) (2^n - 32)(2^n-64)(2^n-128))\end{aligned}$$ semimagic quad squares. The proof showed that this formula is a sum of the number of equivalence classes of sets of cards forming a semimagic square and spanning a particular deck size, times the number of elements in such a class. From here, the coefficients in the sum above correspond to the number of quad codes defining semimagic squares in different dimensions.
Similarly, in [@QuadSquares], we showed that in the EvenQuads-$2^n$ deck, there are $$\begin{aligned} 2^n(2^n-1)&(2^n-2)(2^n-4)(2^n-8)\cdot \\ & (10 + 85(2^n - 16) + 43(2^n - 16) (2^n - 32) + (2^n - 16) (2^n - 32) (2^n-64))\end{aligned}$$ magic quad squares. The third row of the table is extracted from this formula in the same way as above. For the fourth row, we use the theorem from [@QuadSquares] showing that there are $2^n(2^n-1)(2^n-2)(2^n-4)(2^n-8)$ strongly magic quad squares that can be made by using the cards from the EvenQuads-$2^n$ deck. From Theorem [Theorem 3](#thm:fromCodes2Cards){reference-type="ref" reference="thm:fromCodes2Cards"}, we can find the smallest $n$ such that the deck size realizing a quad code of dimension $k$ corresponding to one of the squares is $2^n$. Given that the length of our codes is 16, such $n$ has to be $16 - k - 1 = 15 - k$. ◻

# Acknowledgments

We are grateful to the MIT PRIMES STEP program and its director, Slava Gerovitch, for allowing us the opportunity to do this research.

Sheldon Axler, *Linear Algebra Done Right*, Springer, 1997.

Tom C. Brown and Joe P. Buhler, Lines imply spaces in density Ramsey theory, *Journal of Combinatorial Theory, Series A* 36 (2), 214--220 (1984).

Nikhil Byrapuram, Hwiseo (Irene) Choi, Adam Ge, Selena Ge, Tanya Khovanova, Sylvia Zia Lee, Evin Liang, Rajarshi Mandal, Aika Oki, Daniel Wu, and Michael Yang, Card Games Unveiled: Exploring the Underlying Linear Algebra, arXiv:2306.09280 (2023).

Nikhil Byrapuram, Hwiseo (Irene) Choi, Adam Ge, Selena Ge, Tanya Khovanova, Sylvia Zia Lee, Evin Liang, Rajarshi Mandal, Aika Oki, Daniel Wu, and Michael Yang, QuadSquares, arXiv:2308.07455 (2023).

John Horton Conway and Neil James Alexander Sloane, *Sphere Packings, Lattices and Groups*, Vol. 290, Springer Science & Business Media, 2013.

Julia Crager, Felicia Flores, Timothy E. Goldberg, Lauren L. Rose, Daniel Rose-Levine, Darrion Thornburgh, and Raphael Walker, How many cards should you lay out in a game of EvenQuads? A study of 2-caps in $AG(2, n)$, arXiv:2212.05353 (2022).

Richard W. Hamming, Error detecting and error correcting codes, *The Bell System Technical Journal* 29 (2), 147--160 (1950).

Lauren L. Rose, Quads: A SET-like game with a Twist, available at <http://sigmaa.maa.org/mcst/QUADS%20-%20SET%20WITH%20A%20TWIST.pdf>.

Jim Vinci, The maximum number of sets for N cards and the total number of internal sets for all partitions of the deck, available at <https://www.setgame.com/sites/default/files/teacherscorner/SETPROOF.pdf>, accessed in 2022.

OEIS Foundation Inc. (2023), The On-Line Encyclopedia of Integer Sequences, published electronically at <https://oeis.org>.
---
abstract: |
  Let $\mathcal{X}$ be a skew-symmetrizable cluster Poisson variety. The cluster complex $\Delta^+(\mathcal{X})$ was introduced in [@GHKK]. It codifies the theta functions on $\mathcal{X}$ that restrict to a character of a seed torus. Every seed ${\bf s}$ for $\mathcal{X}$ determines a fan realization $\Delta^+_{\bf s}(\mathcal{X})$ of $\Delta^+(\mathcal{X})$. For every ${\bf s}$ we provide a simple and explicit description of the cones of $\Delta^+_{{\bf s}}(\mathcal{X})$ and their facets using **c**-vectors. Moreover, we give formulas for the theta functions parametrized by the integral points of $\Delta^+_{{\bf s}}(\mathcal{X})$ in terms of $F$-polynomials. In case $\mathcal{X}$ is skew-symmetric and the quiver $Q$ associated to ${\bf s}$ is acyclic, we describe the normal vectors of the supporting hyperplanes of the cones of $\Delta^+_{\bf s}(\mathcal{X})$ using **g**-vectors of (not necessarily rigid) objects in $\mathsf{K}^{\rm b}(\text{proj} \; kQ)$.
author:
- Carolina Melo and Alfredo Nájera Chávez
date: May 2023
title: The cluster complex for cluster Poisson varieties and representations of acyclic quivers
---

# Introduction

## Overview

Cluster complexes are a class of simplicial complexes that naturally arise in the theory of cluster algebras. They were introduced by Fomin and Zelevinsky in [@FZ_II] as an important tool in the classification of cluster algebras of finite cluster type. Since then, many authors have considered these complexes, as they turned out to be very relevant in various contexts. In particular, the works of Fock-Goncharov [@FG_cluster_ensembles] and of Gross-Hacking-Keel-Kontsevich [@GHKK] show the importance of cluster complexes in the study of cluster varieties and their rings of regular functions. Throughout this introduction we use the standard terminology employed in the theory of cluster algebras and cluster varieties; the precise definitions will be recalled throughout the text.
Cluster varieties are schemes that can be obtained by gluing a (possibly infinite) collection of algebraic tori --the *seed tori*-- using distinguished birational maps called *cluster transformations*. The gluing is governed by a combinatorial framework that is completely determined by the choice of an *initial seed*. In [@GHKK], Gross, Hacking, Keel and Kontsevich introduced trendsetting techniques for the study of cluster algebras and cluster varieties. In particular, they defined theta functions on cluster varieties and showed that cluster complexes allow one to identify those theta functions that restrict to a character of a particular seed torus. There are two kinds of cluster varieties: cluster $\mathcal{A}$-varieties, also known as cluster $K_2$ varieties, and cluster $\mathcal{X}$-varieties, also known as cluster Poisson varieties. From the perspective of [@GHKK], every cluster variety $\mathcal{V}$ (either of type $\mathcal{A}$ or $\mathcal{X}$) has an associated *(Fock-Goncharov) cluster complex* $\Delta^+(\mathcal{V} )$. Every choice of seed $\textbf{s}$ gives rise to a fan realization $\Delta^+_\textbf{s}(\mathcal{V} )$ of $\Delta^+(\mathcal{V} )$. From this point of view, the usual cluster complexes introduced by Fomin and Zelevinsky are associated to the cluster $\mathcal{A}$-varieties and the corresponding fan realizations are the well-known ***g**-vector fans*. The aim of this paper is to initiate a systematic study of the cluster complexes associated to cluster Poisson varieties *without frozen directions*[^1].

## The skew-symmetrizable case

In order to state our description of the cluster complex associated to a cluster Poisson variety we need to recall some well-known concepts from the theory of cluster algebras. Let $\mathcal{A}$ and $\mathcal{X}$ be a pair of skew-symmetrizable cluster varieties associated to the same initial seed $\textbf{s}$. Let $B_{\textbf{s}}$ be the $n\times n$ skew-symmetrizable integer matrix associated to $\textbf{s}$.
We consider the (rooted) $n$-regular tree $\mathbb T_n$. For simplicity we identify the vertices of $\mathbb T_n$ with the seeds mutation equivalent to $\textbf{s}$, let $\textbf{s}$ be the root (*i.e.* a distinguished vertex) of $\mathbb{T}_n$, and write $\textbf{s}'\in \mathbb{T}_n$ to indicate that $\textbf{s}'$ is a vertex of $\mathbb{T}_n$. Associated to every $n\times n$ skew-symmetrizable matrix there are two families of $n\times n$ integral matrices, called the $\mathbf{c}$- and $\mathbf{g}$-matrices, see [@NZ]. The elements of these families are indexed by the vertices of $\mathbb T_n$. Given an initial skew-symmetrizable matrix $B$ and $\textbf{s}'\in \mathbb T_n$ we let $C^{B}_{\textbf{s}'}$ (resp. $G^B_{\textbf{s}'}$) denote the $\mathbf{c}$-matrix (resp. $\mathbf{g}$-matrix) indexed by the vertex $\textbf{s}'$ of $\mathbb T_n$ (taking $B$ as the initial matrix). In particular we consider the skew-symmetrizable matrices $B_\textbf{s}$ and $-B^T_{\textbf{s}}$, where $X^T$ denotes the transpose of a matrix $X$. The columns of $C^{-B_{\textbf{s}}^T}_{\textbf{s}'}$ (resp. $G^{B_\textbf{s}}_{\textbf{s}'}$) are $c_{1;\textbf{s}'}, \dots ,c_{n;\textbf{s}'}$ (resp. $g_{1;\textbf{s}'}, \dots ,g_{n;\textbf{s}'}$) and the vectors that arise in this way are the $\mathbf{c}$-vectors associated to $-B^T_\textbf{s}$ (resp. the $\mathbf{g}$-vectors associated to $B_\textbf{s}$). Let $N$ (resp. $M^\circ$) be the character lattice of the seed tori in the atlas of $\mathcal{X}$ (resp. $\mathcal{A}$). In particular the $\mathbf{c}$-vectors (resp. $\mathbf{g}$-vectors) associated to $-B^T_\textbf{s}$ (resp. $B_\textbf{s}$) belong to $N$ (resp. $M^\circ$). Let $N_{\mathbb{R} }= N\otimes \mathbb{R}$, $M^\circ_{\mathbb{R} }=M^\circ \otimes \mathbb{R}$ and let $\mathcal{G}_{\textbf{s}'}$ be the cone in $M^\circ_{\mathbb{R} }$ spanned by the $\mathbf{g}$-vectors in $G^{B_{\textbf{s}}}_{\textbf{s}'}$.
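As an illustration, $\mathbf{c}$-matrices can be computed concretely via the standard mutation recurrence for exchange matrices with principal coefficients (cf. [@NZ]). The sketch below is our own (the helper names `mutate` and `c_matrix` are not from the references); for the type $A_2$ matrix it also checks two facts used later in the text: sign coherence of $\mathbf{c}$-vectors and invertibility of $\mathbf{c}$-matrices over $\mathbb{Z}$.

```python
import numpy as np
from itertools import product

def mutate(Bt, k):
    """Mutate an extended exchange matrix (exchange part in the top n rows) at k."""
    B = Bt.copy()
    for i in range(Bt.shape[0]):
        for j in range(Bt.shape[1]):
            if i == k or j == k:
                B[i, j] = -Bt[i, j]
            else:
                B[i, j] = Bt[i, j] + (abs(Bt[i, k]) * Bt[k, j]
                                      + Bt[i, k] * abs(Bt[k, j])) // 2
    return B

def c_matrix(B0, path):
    """C-matrix at the vertex of T_n reached from the root along `path`."""
    n = B0.shape[0]
    Bt = np.vstack([B0, np.eye(n, dtype=int)])  # principal coefficients
    for k in path:
        Bt = mutate(Bt, k)
    return Bt[n:, :]

B0 = np.array([[0, 1], [-1, 0]])  # type A_2
for path in product(range(2), repeat=4):
    C = c_matrix(B0, path)
    # sign coherence: every c-vector (column of C) is non-negative or non-positive
    assert all((col >= 0).all() or (col <= 0).all() for col in C.T)
    # c-matrices are invertible over Z
    assert round(abs(np.linalg.det(C))) == 1
print(c_matrix(B0, (0, 1)))  # c-vectors (0, 1) and (-1, -1)
```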
Finally, consider the cluster ensemble lattice map $p^*:N \to M^\circ$ associated to $\mathcal{A}$ and $\mathcal{X}$. The ambient vector space for $\Delta^+_{\textbf{s}}(\mathcal{A} )$ (resp. $\Delta^+_{\textbf{s}}(\mathcal{X} )$) is $M^\circ_{\mathbb{R} }$ (resp. $N_{\mathbb{R} }$). The following summarizes Lemma [Lemma 3](#cone){reference-type="ref" reference="cone"}, Corollary [Corollary 4](#conedescription){reference-type="ref" reference="conedescription"} and Lemma [Lemma 7](#initiallemma){reference-type="ref" reference="initiallemma"} below.

**Theorem 1**. $(1)$ For every $\textbf{s}'\in \mathbb T_n$ the cone $(p^*)^{-1}(\mathcal{G}_{\textbf{s}'})$ is a cone of the fan $\Delta^+_{\textbf{s}}(\mathcal{X} )$ and each of its cones arises in this way. Moreover, if for $\beta\in N_{\mathbb{R} }$ we let $\beta_\textbf{s}$ be the column vector obtained by writing $\beta$ in the basis of $N_{\mathbb{R} }$ determined by $\textbf{s}$, then $$(p^*)^{-1}(\mathcal G_{\textbf{s}'})=\{\beta \in N_\mathbb{R} \; | \; (C_{\textbf{s}'}^{-B_{\textbf{s}}^T})^TB_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}\}.$$ $(2)$ If $p^*(\beta) \in \mathcal{G}_{\textbf{s}'}$ then the theta function parametrized by $\beta_{\textbf{s}}=(\beta_1,\dots, \beta_n)$ is given by $$\vartheta_{\beta}^{\mathcal{X}}:=\prod_{i=1}^{n} y_i^{\beta_i}(F_{i;\textbf{s}'})^{c^T_{i;\textbf{s}'}\cdot B_{\textbf{s}} \cdot \beta_{\textbf{s}}}(y_1, \ldots, y_n),$$ where $y_1,\dots, y_n$ are the coordinates of the seed torus of $\mathcal{X}$ determined by $\textbf{s}$ and $F_{i;\textbf{s}'}$ is the $F$-polynomial associated to $i$ and $\textbf{s}'$ (taking $B_\textbf{s}$ as the initial exchange matrix). The $\mathbf{g}$-vectors in a $\mathbf{g}$-matrix associated to $B_{\textbf{s}}$ span a maximal cone of $\Delta^+_\textbf{s}(\mathcal{A} )$.
Item $(1)$ of the theorem above tells us that every $\mathbf{c}$-matrix also determines a (not necessarily maximal) cone of $\Delta^+_{\textbf{s}}(\mathcal{X} )$, however, in a different way. That is, the $\mathbf{c}$-vectors of a $\mathbf{c}$-matrix in general *do not* span a cone of $\Delta^+_{\textbf{s}}(\mathcal{X} )$, but such $\mathbf{c}$-vectors determine the equations of the supporting hyperplanes of a cone of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ (the dimension of $(p^*)^{-1}(\mathcal{G}_{\textbf{s}'})$ depends on $\textbf{s}'$). Standard results in polyhedral geometry allow us to describe the dimension of $(p^*)^{-1}(\mathcal{G}_{\textbf{s}'})$ using the *implicit equalities* of the system $(C_{\textbf{s}'}^{-B_{\textbf{s}}^T})^TB_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}$, see equation [\[eq:dim\]](#eq:dim){reference-type="eqref" reference="eq:dim"}. In particular, one can search for positive linear combinations of the **c**-vectors $c^T_{1;\textbf{s}'}, \dots, c^T_{n;\textbf{s}'}$ that lie in the kernel of $p^*$ in order to determine the implicit equalities of the system.

## The skew-symmetric acyclic case

The supporting hyperplane of the cone $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$ determined by the $\mathbf{c}$-vector $c_{j;\textbf{s}'}^T$ is defined by the equation $c_{j;\textbf{s}'}^T\cdot B_\textbf{s}\cdot\beta_\textbf{s}=0$. Writing the elements of $M^\circ$ in the basis determined by $\textbf{s}$, the normal vector of such a hyperplane is $c_{j;\textbf{s}'}^T\cdot B_\textbf{s}\in M^\circ$. In the skew-symmetric acyclic case we are able to describe this normal vector using $\mathbf{g}$-vectors (from the perspective of silting theory) and Auslander-Reiten theory. We proceed to explain this. We assume from now on that the quiver $Q$ associated to $B_\textbf{s}$ is acyclic and consider the path algebra $kQ$. Since we are in the skew-symmetric case we have that $M^\circ=M=\mathop{\mathrm{Hom}}(N,\mathbb{Z} )$.
Let $D^b_{kQ}$ be the derived category of the category $\text{mod}\ kQ$ of right $kQ$-modules and $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$ be the homotopy category of bounded complexes of finitely generated projective $kQ$-modules. We consider the Grothendieck groups $K_0 (D^b_{kQ})$ and $K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ))$ and think of the $kQ$-modules (resp. projective $kQ$-modules) as stalk complexes in $D^b_{kQ}$ (resp. in $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$) concentrated in degree $0$. Recall that every $\mathbf{c}$-vector is non-zero and is either positive (all its entries are non-negative) or negative (all its entries are non-positive). By [@Najera; @Nagao], the positive $\mathbf{c}$-vectors are in bijection with the dimension vectors of the rigid indecomposable $kQ$-modules. The lattice of dimension vectors of objects in $D^b_{kQ}$ is $K_0(D^b_{kQ})$ (we think of dimension vectors as row vectors). Hence, every $\mathbf{c}$-vector can be thought of as the (transpose of a) dimension vector of a stalk complex of $D^b_{kQ}$ (concentrated in degree $0$ if it is positive and in degree $-1$ if it is negative). Similarly, every object $X$ in $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$ has an associated $\mathbf{g}$-vector $\mathbf{g} ^X_P$ (with respect to the silting complex $P=kQ$), see [@DIJ]. We can think of $\mathbf{g} ^X_P$ as the class of $X$ in $K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ))$ (the actual integer vector is obtained by expressing the class of $X$ in the basis of $K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ))$ given by the indecomposable projective $kQ$-modules). We have the following result.

**Theorem 2**.
Let $X$ be an indecomposable object of $D^b_{kQ}$; then $$B_\textbf{s}\cdot (\underline{\dim}\; X)^T=-\mathbf{g} ^X_P -\mathbf{g} ^{\tau^{-1}X}_P=-\sum_{X_i\in \overrightarrow{\text{mesh}}_X} \mathbf{g} ^{X_i}_P,$$ where $\underline{\dim}\ X$ is the dimension vector of $X$ and the sum runs over the modules $X_i$ in the middle of the mesh (in the Auslander-Reiten quiver of $D^b_{kQ}$) starting in $X$ and ending in $\tau^{-1}(X)$. In particular, if $c_{j;\textbf{s}'}^T=\underline{\dim}\ X$ then (using the fact that $B_\textbf{s}$ is skew-symmetric) we obtain that $$c_{j;\textbf{s}'}^T \cdot B_\textbf{s}= \sum_{X_i\in \overrightarrow{\text{mesh}}_X} {\bf g}_P^{X_i}.$$

## Organization

The structure of the paper is the following. In §[2](#sec:cluster_complex){reference-type="ref" reference="sec:cluster_complex"} we provide the background on cluster varieties and set up the notation that will be used throughout the text. For simplicity we restrict our attention to the skew-symmetric case, as the notation in this case is simpler. In §[3](#sec:Desc_cluster_complex){reference-type="ref" reference="sec:Desc_cluster_complex"} we give the description of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ in terms of $\mathbf{c}$-vectors. The description is given in the skew-symmetric case, and in §[3.3](#skew-symmetrizable){reference-type="ref" reference="skew-symmetrizable"} we indicate how to generalize the results from the skew-symmetric to the skew-symmetrizable case. In §[4](#sec:theta_functions){reference-type="ref" reference="sec:theta_functions"} we provide the formulas expressing the theta functions on $\mathcal{X}$ parametrized by the integral points of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ using $F$-polynomials.
In §[5](#sec:supporting_hyperplanes){reference-type="ref" reference="sec:supporting_hyperplanes"} we describe the normal vectors of the supporting hyperplanes of the cones of $\Delta^+_\textbf{s}(\mathcal{X} )$ using $\mathbf{g}$-vectors, provided the quiver $Q$ associated to $\textbf{s}$ is acyclic.

# Background on cluster varieties {#sec:cluster_complex}

## Cluster varieties {#sec:cv}

The reader is referred to [@GHK_birational §2] and [@FG_cluster_ensembles] for a detailed account on cluster varieties, to [@GHKK §3], [@FG_cluster_ensembles §1.1] and [@BCMNC §2.2] for background on tropicalization of cluster varieties, and to [@GHKK §3] (resp. [@GHKK §7.2]) for the details concerning the Fock--Goncharov cluster complex associated to cluster $\mathcal{A}$-varieties (resp. cluster Poisson varieties). For simplicity, throughout the paper we focus on the skew-symmetric case; however, some of our main results hold in the skew-symmetrizable case, as explained in §[3.3](#skew-symmetrizable){reference-type="ref" reference="skew-symmetrizable"} and Remark [Remark 9](#rem:tf_sk){reference-type="ref" reference="rem:tf_sk"}. We begin by setting the notation related to cluster varieties. Throughout this work we fix an algebraically closed field $k$ of characteristic $0$[^2]. The split torus over $k$ associated to a lattice $L$ is the scheme $T_L:= \mathop{\mathrm{Spec}}(k[L^*])$, where $L^*$ is the lattice dual to $L$. Recall (for instance from [@GHK_birational]) that in order to construct a cluster variety we need a *fixed data* and a *seed*. In this section we introduce skew-symmetric cluster varieties without frozen directions. Hence the fixed data $\Gamma$ we consider consists of the following:

- the set $I=I_{\rm uf}=\{1,\dots,n\}$;
- a lattice $N\cong \mathbb{Z} ^n$;
- the dual lattice $M = \mathop{\mathrm{Hom}}(N, \mathbb{Z} )$;
- a skew-symmetric bilinear form $\{ \cdot , \cdot \} : N \times N \rightarrow \mathbb{Z}$.
A seed is a tuple $\textbf{s}:= ( e_i )_{i \in I}$ such that $\{ e_i \}_{i\in I}$ is a basis for $N$. We let $\{ e^*_i \}_{i\in I}$ be the basis of $M$ dual to $\{ e_i \}_{i\in I}$. For $i,j\in I$ we let $b_{ij}:=\{e_j,e_i\}$ and consider the $n\times n$ skew-symmetric matrix $$B_{\textbf{s}}= (b_{ij}).$$ The quiver associated to $B_\textbf{s}$ is denoted by $Q_{\textbf{s}}$. Its vertex set is $I$, and the number of arrows between $i$ and $j$ is $|b_{ij}|$; the arrows go from $i$ to $j$ if $b_{ij}$ is positive and from $j$ to $i$ if $b_{ij}$ is negative. We fix a seed $\textbf{s}$ and call it the initial seed. Associated to $\Gamma$ and $\textbf{s}$ there are four cluster varieties (see [@GHKK Appendix A]): $\mathcal{A}$, $\mathcal{X}$, $\mathcal{A}_{\mathrm{prin}}$ and $\mathcal{X}_{\mathrm{prin}}$. These are schemes over $k$, although $\mathcal{A}_{\mathrm{prin}}$ has a very useful structure of a scheme over a Laurent polynomial ring, see [@BFMNC §3] for further details. The dimension of $\mathcal{A}$ and $\mathcal{X}$ is $n$, whereas the dimension of $\mathcal{A}_{\mathrm{prin}}$ and $\mathcal{X}_{\mathrm{prin}}$ is $2n$. Moreover, these schemes are endowed with atlases of tori of the form $$\mathcal{A} = \bigcup_{\textbf{s}'\in \mathbb{T}_n}T_{N, \textbf{s}'}, \quad \quad \mathcal{X} = \bigcup_{\textbf{s}'\in \mathbb{T}_n}T_{M, \textbf{s}'}, \quad \quad \mathcal{A}_{\mathrm{prin}}= \bigcup_{\textbf{s}'\in \mathbb{T}_n}T_{N\oplus M, \textbf{s}'}, \quad \text{and} \quad \mathcal{X}_{\mathrm{prin}}= \bigcup_{\textbf{s}'\in \mathbb{T}_n}T_{M\oplus N, \textbf{s}'},$$ where the unions are taken over the vertices of the $n$-regular tree $\mathbb{T}_n$ and for every lattice $L$ the symbol $T_{L,\textbf{s}'}$ denotes a copy of $T_{L}$ indexed by $\textbf{s}'$. Given a cluster variety $\mathcal{V}$, we denote by $\mathcal{V} _{\textbf{s}'}$ the torus in its atlas parametrized by $\textbf{s}'$. For instance, $\mathcal{A} _{\textbf{s}'}=T_{N,\textbf{s}'}$.
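The arrow convention for $Q_{\textbf{s}}$ can be encoded directly; the helper name below is ours, not from the references.

```python
def quiver_arrows(B):
    """Arrows of the quiver Q_s attached to a skew-symmetric integer matrix B,
    following the convention above: b_ij > 0 gives b_ij arrows from i to j."""
    n = len(B)
    return [(i, j) for i in range(n) for j in range(n)
            for _ in range(max(B[i][j], 0))]

# One arrow 0 -> 1 for the 2x2 matrix with b_01 = 1:
print(quiver_arrows([[0, 1], [-1, 0]]))  # [(0, 1)]
```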
The precise way these tori are glued to each other is described in [@GHK_birational §2]. In fact, the schemes $\mathcal{A}_{\mathrm{prin}}$ and $\mathcal{X}_{\mathrm{prin}}$ are isomorphic as schemes over $k$; however, they are endowed with different coordinate systems. We frequently write $N_{\textbf{s}}$ (resp. $M_{\textbf{s}}$) to stress that $N$ is endowed with the basis $\{e_i\}_{i\in I}$ (resp. that $M$ is endowed with the basis $\{e^*_i\}_{i\in I}$). In particular, if $v$ belongs to $N$ (or to $M$) we write $v_{\textbf{s}}$ to denote the column vector expressing $v$ in the corresponding basis. Moreover, given an $n\times n$ matrix $X$ with entries in $\mathbb{Z}$, we write $X:N_{\textbf{s}} \to M_{\textbf{s}}$ to denote the $\mathbb{Z}$-linear map $N \to M$ induced by $X$. The varieties introduced above are related to each other via distinguished maps, see [@BCMNC §2.1.3] and [@GHKK Appendix A]. The cluster varieties $\mathcal{A}$ and $\mathcal{X}$ are related to each other via a *cluster ensemble map* $$p:\mathcal{A} \to \mathcal{X} .$$ By a slight abuse of notation, the restriction of $p$ to the seed torus $\mathcal{A} _{\textbf{s}}$ is also denoted by $p$. This restriction is in fact a monomial map $p:\mathcal{A} _{\textbf{s}} \to \mathcal{X} _{\textbf{s}}$ determined by the lattice map $p^*:N \to M$ given by $n_{\textbf{s}} \mapsto B_{\textbf{s}}\cdot n_{\textbf{s}}$. We call $p^*$ a *cluster ensemble lattice map*. The variety $\mathcal{X}$ can be described as a geometric quotient of $\mathcal{A}_{\mathrm{prin}}$ by the action of a torus canonically isomorphic to $T_N$. We let $$\tilde{p}:\mathcal{A}_{\mathrm{prin}}\to \mathcal{X}$$ be the associated quotient map. There is a distinguished fibration $$w:\mathcal{X}_{\mathrm{prin}}\to T_M.$$ Both $\mathcal{X}_{\mathrm{prin}}$ and $\mathcal{X}$ are Poisson varieties and there is a canonical inclusion $$\xi:\mathcal{A} \hookrightarrow \mathcal{X}_{\mathrm{prin}}$$ that realizes $\mathcal{A}$ as a fiber of $w$.
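To make the notion of a monomial map concrete: a lattice map given by an integer matrix induces a map of tori that is multiplicative in each coordinate. The sketch below is our own illustration; the indexing convention (the $j$-th output coordinate uses the $j$-th row) is an assumption on our part and may differ from the paper's.

```python
from math import prod

def monomial_map(B, x):
    """Map of tori induced by the lattice map n -> B n: the j-th output
    coordinate is prod_i x_i^{B[j][i]} (an assumed indexing convention)."""
    return [prod(xi ** e for xi, e in zip(x, row)) for row in B]

B = [[0, 1], [-1, 0]]
x, y = (2.0, 3.0), (5.0, 7.0)
xy = tuple(a * b for a, b in zip(x, y))
# A monomial map is a group homomorphism of tori:
print(monomial_map(B, xy))
print([a * b for a, b in zip(monomial_map(B, x), monomial_map(B, y))])
```

Both printed lists agree, illustrating that the induced map respects the group structure of the tori.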
## The weight map

For any semifield $P$ and any cluster variety $\mathcal{V}$ we can consider its set of $P$-valued points $\mathcal{V} (P)$. If the atlas of $\mathcal{V}$ consists of tori of the form $T_L$ then for any $\textbf{s}\in \mathbb{T}_n$ we have a bijection $$r_{\textbf{s}}: \mathcal{V} (P) \to P \otimes_{\mathbb{Z} } L.$$ In this paper we exclusively consider the semifields $\mathbb{Z} ^T=(\mathbb{Z} , \max, +)$, $\mathbb{Z} ^t=(\mathbb{Z} ,\min, +)$ and their real counterparts $\mathbb{R} ^T$ and $\mathbb{R} ^t$. So from now on $P$ will denote one of these semifields. We also have a canonical identification $i:\mathcal{V} (\mathbb{R} ^T)\to \mathcal{V} (\mathbb{R} ^t)$ given by a sign change. More precisely, for every $\textbf{s}$ we have a commutative diagram $$\xymatrix{ \mathcal{V} (\mathbb{R} ^T)\ar_{i}[d] \ar^{r_{\textbf{s}}}[r] & L_{\mathbb{R} } \ar^{ -1\cdot }[d]\\ \mathcal{V} (\mathbb{R} ^t) \ar^{r_{\textbf{s}}}[r] & L_{\mathbb{R} }, }$$ where the vertical arrow on the right is multiplication by $-1$. For this reason, the inverse of $i$ is also denoted by $i$. Given a *positive rational function* $g: T_L \to T_{L'}$ we can consider $g(P):T_L(P) \to T_{L'}(P)$, the *tropicalization* of $g$ with respect to $P$. Since $g(\mathbb{Z} ^T)= g(\mathbb{R} ^T)\mid_{T_{L}(\mathbb{Z} ^T)}$ we denote both $g(\mathbb{R} ^T)$ and $g(\mathbb{Z} ^T)$ simply by $g^T$. The notation $g^t$ is defined analogously. By a slight abuse of notation, for every $\textbf{s}\in \mathbb T_n$, the restriction of $w:\mathcal{X}_{\mathrm{prin}}\to T_M$ to the seed torus $(\mathcal{X}_{\mathrm{prin}})_\textbf{s}$ is also denoted by $w$. With this notation and under the canonical identifications of $(\mathcal{X}_{\mathrm{prin}})_{\textbf{s}}(\mathbb{R} ^T)$ (resp. $T_M(\mathbb{R} ^T)$) with $M_{\mathbb{R} } \oplus N_{\mathbb{R} }$ (resp.
$M_{\mathbb{R} }$), the map $w^T: (\mathcal{X}_{\mathrm{prin}})_{\textbf{s}}(\mathbb{R} ^T)\to T_M(\mathbb{R} ^T)$ is called the weight map (see Lemma [Lemma 1](#lem:identification_w_slice){reference-type="ref" reference="lem:identification_w_slice"} for an explanation of this terminology) and is given by $$\begin{aligned} w^T&: M_{\mathbb{R} } \oplus N_{\mathbb{R} }\to M_{\mathbb{R} }\\ & \ \ \ \ \ \ (m,n) \longmapsto m-p^*(n).\end{aligned}$$

# Description of the cluster complex for cluster Poisson varieties {#sec:Desc_cluster_complex}

The aim of this section is to provide a description of the cluster complex associated to a cluster Poisson variety. Before presenting such a description we briefly discuss the cluster complexes associated to cluster varieties and the cluster complexes associated to cluster algebras. The cluster complex $\Delta^+(A)$ associated to a cluster algebra $A$ was introduced in [@FZ_II]. By definition, its vertices are the cluster variables and its simplices are the sets of compatible cluster variables. So $\Delta^+(A)$ is independent of a choice of initial seed for $A$. Moreover, for every $\textbf{s}\in \mathbb T_n$, the $\mathbf{g}$-fan $\Delta^+_{\textbf{s}}(A)$ is a realization of $\Delta^+(A)$ as a rational polyhedral fan. From the perspective of [@GHKK], every cluster variety $\mathcal{V}$ endowed with a choice of seed torus $\mathcal{V} _\textbf{s}$ has an associated *Fock-Goncharov cluster complex* $\Delta^+_{\textbf{s}}(\mathcal{V} )$, see Definition 7.9 of *loc. cit.*. The definition of $\Delta^+_{\textbf{s}}(\mathcal{V} )$ is given using global monomials on $\mathcal{V}$ (which we discuss in § [4](#sec:theta_functions){reference-type="ref" reference="sec:theta_functions"}). It is shown in Lemma 7.10 of *loc. cit.* that $\Delta^+_{\textbf{s}}(\mathcal{V} )$ is a rational polyhedral fan, and a description of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ using $\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})$ and the weight map is provided there.
In § [3.1](#sec:inequalities){reference-type="ref" reference="sec:inequalities"} we recall this construction and describe the cones of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ using $\mathbf{c}$-matrices and $\mathbf{c}$-vectors. More precisely, we explain how every $\mathbf{c}$-matrix determines a (not necessarily top dimensional) cone of $\Delta^+_{\textbf{s}}(\mathcal{X} )$, and how every $\mathbf{c}$-vector of a $\mathbf{c}$-matrix $C$ determines a supporting hyperplane of a cone determined by $C$.

## Description of the cones and their supporting hyperplanes via **c**-vectors {#sec:inequalities}

For brevity, in this paper we refer to $\Delta^+_{\textbf{s}}(\mathcal{V} )$ as a $\mathcal{V}$-cluster complex. It is well known that the $\mathcal{A}$-cluster complex associated to a seed $\textbf{s}$ coincides with the fan realization induced by $\textbf{s}$ of the cluster complex associated to the corresponding cluster algebra. Namely, for $\mathcal{A} =\mathcal{A} (\textbf{s})$ and $A=A(\textbf{s})$, we have that $$\Delta^+_{\textbf{s}}(\mathcal{A} ) = \Delta^+_{\textbf{s}}(A).$$ The natural ambient space for $\Delta^+_{\textbf{s}}(\mathcal{A} )$ is the "$\max$" tropical space $\mathcal{X} _{\textbf{s}}(\mathbb{R} ^T)$, which is identified with $M_{\mathbb{R} }$. The complex $\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})$ naturally lives inside the "$\max$" tropical space $(\mathcal{X}_{\mathrm{prin}})_{\textbf{s}}(\mathbb{R} ^T)\cong M_\mathbb{R} \oplus N_\mathbb{R}$ and can be described as follows: $$\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})= \Delta^+_{\textbf{s}}(\mathcal{A} ) \times N_{\mathbb{R} }.$$ In particular, the maximal cones of $\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})$ are not strictly convex, in the sense that they contain a linear subspace.
As opposed to $\mathcal{A}$ and $\mathcal{A}_{\mathrm{prin}}$, the $\mathcal{X}$-cluster complex $\Delta^+_{\textbf{s}}(\mathcal{X} )$ naturally lives inside the "$\min$" tropical space $\mathcal{A} _{\textbf{s}}(\mathbb{R} ^t)= N_{\mathbb{R} }$. We briefly explain this difference and refer to [@BCMNC §3.2.6] for the relevant details. There is a natural action of $T_N$ on $\mathcal{A}_{\mathrm{prin}}$ and every theta function on $\mathcal{A}_{\mathrm{prin}}$ is an eigenfunction with respect to this action, see [@BCMNC Proposition 3.14]. Since $\mathcal{X}$ is a quotient of $\mathcal{A}_{\mathrm{prin}}$ by the action of $T_N$, the theta functions on $\mathcal{X}$ are the functions on $\mathcal{X}$ induced by the theta functions on $\mathcal{A}_{\mathrm{prin}}$ whose $T_N$-weight is $0$. A key observation is that the $T_N$-weight of a theta function on $\mathcal{A}_{\mathrm{prin}}$ can be obtained by tropicalizing the weight map $w:(\mathcal{X}_{\mathrm{prin}})_\textbf{s}\to T_M$, see [@BCMNC Proposition 3.14]. In other words, if we consider the *weight zero slice* associated to $\textbf{s}$, which is defined as $(w^T)^{-1}(0)\subset (\mathcal{X}_{\mathrm{prin}})_\textbf{s}(\mathbb{R} ^T)$, then the theta functions on $\mathcal{A}_{\mathrm{prin}}$ parametrized by points in $(w^T)^{-1}(0)$ are of $T_N$-weight $0$. The following result was proved in [@BCMNC Lemma 3.15] in a more general situation.

**Lemma 1**. For every seed $\textbf{s}$ the image of the composition $\xi^T \circ i:\mathcal{A} _{\textbf{s}}(\mathbb{R} ^t)\to (\mathcal{X}_{\mathrm{prin}})_{\textbf{s}}(\mathbb{R} ^T)$ is precisely $(w^T)^{-1}(0)$.

Since $\xi^T \circ i$ is injective, $\mathcal{A} _{\textbf{s}}(\mathbb{R} ^t)$ and $(w^T)^{-1}(0)$ are in bijection. We let $j:=\xi^T \circ i$. One way to describe $\Delta^+_\textbf{s}(\mathcal{X} )$ is as the intersection of $\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})$ with the weight zero slice.
In light of Lemma [Lemma 1](#lem:identification_w_slice){reference-type="ref" reference="lem:identification_w_slice"}, we can realize this cluster complex inside $\mathcal{A} _\textbf{s}(\mathbb{R} ^t)$. So we have the following definitions.

**Definition 2**. The **cluster complex** for $\mathcal{X}$ **inside** $\mathcal{X}_{\mathrm{prin}}(\mathbb{R} ^T)$ is $$\widetilde{\Delta}^+_\textbf{s}(\mathcal{X} ):=\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})\cap (w^{T})^{-1}(0).$$ The **cluster complex** for $\mathcal{X}$ **inside** $\mathcal{A} (\mathbb{R} ^t)$ is $$\Delta^+_\textbf{s}(\mathcal{X} ):=j^{-1}\left(\widetilde{\Delta}^+_\textbf{s}(\mathcal{X} )\right).$$

**Lemma 3**. Every cone of $\Delta^+_{\textbf{s}} (\mathcal{X} )$ is of the form $$(p^*)^{-1}(\mathcal{G}_{\textbf{s}'}),$$ where $\mathcal{G}_{\textbf{s}'}$ is the **g**-cone in $\Delta^+_{\textbf{s}} (\mathcal{A} )$ associated to $\textbf{s}'\in \mathbb T_n$.

*Proof.* First notice that every cone of $\widetilde{\Delta}^+_\textbf{s}(\mathcal{X} )$ can be described as the intersection of a cone of $\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})$ with $(w^T)^{-1}(0)$. Moreover, every cone of $\Delta^+_{\textbf{s}}(\mathcal{A}_{\mathrm{prin}})$ is of the form $\mathcal{G}_{\textbf{s}'} \times N_{\mathbb{R} }$ for some $\textbf{s}' \in \mathbb T_n$. Since $(w^T)^{-1}(0)= \{ (p^*(n),n)\mid n\in N_{\mathbb{R} }\}$, we have that $$(\mathcal{G}_{\textbf{s}'} \times N_{\mathbb{R} }) \cap (w^T)^{-1}(0) = \{ (p^*(n),n) \mid p^*(n) \in \mathcal{G}_{\textbf{s}'}\}.$$ Finally, the map $j:\mathcal{A} _\textbf{s}(\mathbb{R} ^t)\to (\mathcal{X}_{\mathrm{prin}})_{\textbf{s}} (\mathbb{R} ^T)$ is given by $$j(n)=(p^*(n),n)\in M_{\mathbb{R} } \oplus N_{\mathbb{R} },$$ see [@BCMNC Lemma 3.15]. The result follows. ◻

Let us now recall the tropical duality between $C$- and $G$-matrices introduced in [@NZ]. Let $G^{B_{\textbf{s}}}_{\textbf{s}'}$ (resp.
$C^{-B_{\textbf{s}}^T}_{\textbf{s}'}$) be the $G$-matrix (resp. the $C$-matrix) associated to $\textbf{s}'\in \mathbb T_n$ in the cluster pattern with initial matrix $B_{\textbf{s}}$ (resp. $-B_{\textbf{s}}^T$). Then it was proved in *loc. cit.* that $G^{B_{\textbf{s}}}_{\textbf{s}'}$ is invertible over $\mathbb{Z}$ and that $$(G^{B_{\textbf{s}}}_{\textbf{s}'})^{-1}=(C^{-B_{\textbf{s}}^T}_{\textbf{s}'})^T.$$ Combining Lemma [Lemma 3](#cone){reference-type="ref" reference="cone"} and tropical duality, one obtains that every $\mathbf{c}$-matrix determines the system of inequalities of a cone of $\Delta^+_{\textbf{s}}(\mathcal{X} )$, and every $\mathbf{c}$-vector determines a supporting hyperplane of such a cone.

**Corollary 4**. Let $\textbf{s},\textbf{s}'\in \mathbb T_n$ and consider the $\mathbf{g}$-cone $\mathcal{G}_{\textbf{s}'}$ of $\Delta^+_{\textbf{s}}(\mathcal{A} )$. Then the cone $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$ of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ is given as $$(p^*)^{-1}(\mathcal G_{\textbf{s}'})=\{\beta \in N_\mathbb{R} \; | \; (C_{\textbf{s}'}^{-B_{\textbf{s}}^T})^TB_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}\}.$$

*Proof.* Let $\beta^T_{\textbf{s}}=(\beta_1,\dots, \beta_n)$ be such that $$p^{*}(\beta_{\textbf{s}})=\sum_{i=1}^{n}\alpha_{i}\mathbf{g} _{i;\textbf{s}'}, \quad \text{where } \alpha_{1}, \dots, \alpha_{n}\geq 0.$$ This equation can be expressed as $p^{*}(\beta_{\textbf{s}})= G^{B_{\textbf{s}}}_{\textbf{s}'}\cdot \alpha$, where $\alpha^{T}=(\alpha_{1}, \dots, \alpha_{n})$.
Using tropical duality we obtain that $$(C_{\textbf{s}'}^{-B_{\textbf{s}}^T})^T B_{\textbf{s}} \cdot \beta_{\textbf{s}}= \alpha \geq \textbf{0}.$$ ◻ In particular, if $(c_{1;\textbf{s}'},\dots, c_{n;\textbf{s}'})$ are the columns of (*i.e.* the $\mathbf{c}$-vectors that form) the $\mathbf{c}$-matrix $C_{\textbf{s}'}^{-B_{\textbf{s}}^T}$ then the system of inequalities defining the cone $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$ is $$\begin{aligned} (c_{1;\textbf{s}'}^{-B_{\textbf{s}}^T})^T\cdot B_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq 0\phantom{.}\\ \vdots \phantom{aaaaaa}\\ (c_{n;\textbf{s}'}^{-B_{\textbf{s}}^T})^T\cdot B_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq 0.\end{aligned}$$ In other words, every $\mathbf{c}$-vector $c_{j;\textbf{s}'}$ defines a supporting hyperplane of the cone $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$. **Remark 5**. In [@Boss], Bossinger constructs a class of fans associated to certain partial compactifications of $\mathcal{A}$. More precisely, the fans are the totally positive part of the tropicalization of the ideal defining the compactification (see [@Boss Theorem 1.1]). In the cases considered in loc. cit., some cones of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ can be obtained as projections of cones in the totally positive part of the tropicalization. This provides a way to describe the fan $\Delta^+_{\textbf{s}}(\mathcal{X} )$ locally using Gröbner theory. ## Dimension and facets of the cones {#dim} It is possible to compute the dimension of the cone $(p^{*})^{-1}(\mathcal{G}_{\textbf{s}'})$ using its description by inequalities. The most interesting case is when $p^{*}$ is not an isomorphism (if $p^*$ is an isomorphism, then $(p^{*})^{-1}(\mathcal{G}_{\textbf{s}'})$ is always a top-dimensional cone). For brevity in this subsection we write $C_{\textbf{s}'}^T$ as a shorthand for $(C_{\textbf{s}'}^{-B_{\textbf{s}}^T})^T$.
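As a small computational illustration of the discussion so far, the rows of $C_{\textbf{s}'}^TB_{\textbf{s}}$ are exactly the normal vectors of the inequalities of Corollary [Corollary 4](#conedescription){reference-type="ref" reference="conedescription"}. The sketch below uses the rank-$3$ exchange matrix and $\mathbf{c}$-matrix that reappear in Example 8; the matrix `G` is the $\mathbf{g}$-matrix paired with this $\mathbf{c}$-matrix by tropical duality (here $-B_{\textbf{s}}^T=B_{\textbf{s}}$, since $B_{\textbf{s}}$ is skew-symmetric):

```python
import numpy as np

# Skew-symmetric exchange matrix and a c-matrix (data of a rank-3 example).
B = np.array([[0, 1, 0],
              [-1, 0, -1],
              [0, 1, 0]])
C = np.array([[-1, 1, 0],
              [0, 1, 0],
              [0, 1, -1]])
# g-matrix tropically dual to C: the duality of [NZ] predicts G^{-1} = C^T.
G = np.array([[-1, 0, 0],
              [1, 1, 1],
              [0, 0, -1]])
assert np.allclose(np.linalg.inv(G), C.T)

# Rows of C^T B are the normals of the system C^T B . beta >= 0.
system = C.T @ B
print(system)  # rows encode -b2 >= 0, -b1 + 2 b2 - b3 >= 0, -b2 >= 0
```

The printed rows reproduce the inequalities $-\beta_2\geq 0$ and $-\beta_1+2\beta_2-\beta_3\geq 0$ that cut out the corresponding cone in Example 8.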
We consider $C_{\textbf{s}'}^TB_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}$ as a system of inequalities in the variables $\beta_1, \dots, \beta_n$. Following [@CCZ], we say that the inequality $c_{k;\textbf{s}'}^T\cdot B_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq 0$ is an *implicit equality* of the system $C_{\textbf{s}'}^T B_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}$ if whenever $b^T=(b_1, \dots, b_n)\in \mathbb{R} ^n$ is such that $c_{i;\textbf{s}'}^T\cdot B_{\textbf{s}}\cdot b \geq 0$ for all $i\in \{1, \dots, n\}$ we have that $c_{k;\textbf{s}'}^T\cdot B_{\textbf{s}}\cdot b = 0$. The collection of the implicit equalities of the system is denoted as $[C_{\textbf{s}'}^T B_{\textbf{s}}]^{=}\beta_{\textbf{s}} \geq \textbf{0}^{=}$ and the collection of the remaining inequalities as $[C_{\textbf{s}'}^TB_{\textbf{s}}]^{>}\beta_{\textbf{s}} \geq \textbf{0}^{>}$. In particular, we can think of both $[C_{\textbf{s}'}^T B_{\textbf{s}}]^{=}\beta_{\textbf{s}} \geq \textbf{0}^{=}$ and $[C_{\textbf{s}'}^TB_{\textbf{s}}]^{>}\beta_{\textbf{s}} \geq \textbf{0}^{>}$ as systems of inequalities (with potentially fewer inequalities than the system $C_{\textbf{s}'}^TB_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}$). Since $(p^*)^{-1}(\mathcal G_{\textbf{s}'})\neq \emptyset$, [@CCZ Theorem 3.17] tells us that $$\label{eq:dim} \dim((p^*)^{-1}(\mathcal G_{\textbf{s}'}))+\operatorname{rank}([C_{\textbf{s}'}^TB_{\textbf{s}}]^=)=n,$$ where $[C_{\textbf{s}'}^TB_\textbf{s}]^=$ represents the matrix associated with the system of implicit equalities $[C_{\textbf{s}'}^TB_\textbf{s}]^{=}\beta_{\textbf{s}} \geq \textbf{0}^{=}$.
More precisely, if we define $J^{=}\subseteq \{1,\dots,n\}$ as the set of indices corresponding to the implicit equalities in the system $[C_{\textbf{s}'}^TB_\textbf{s}]^{=}\beta_{\textbf{s}} \geq \textbf{0}^{=}$, then the rank of the matrix with rows $\{c_{j;\textbf{s}'}^T \cdot B_\textbf{s}\}_{j\in J^{=}}$ allows us to determine the dimension of $(p^{*})^{-1}(\mathcal G_{\textbf{s}'})$. In particular, if $C_{\textbf{s}'}^TB_\textbf{s}\cdot \beta_\textbf{s}\geq \textbf{0}$ has no implicit equalities, except possibly for the trivial equality $0=0$, then $(p^{*})^{-1}(\mathcal G_{\textbf{s}'})$ has full dimension $n$. To characterize the implicit equalities of the system $C_{\textbf{s}'}^T B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq \textbf{0}$ notice that if there is a subset $\{c_{h;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq 0\}_{h\in H}$ of inequalities in the system parametrized by some nonempty set $H\subseteq \{1,\dots,n\}$ and there are positive integers $\lambda_h> 0$, for all $h\in H$, such that $\displaystyle{\sum_{h\in H}\lambda_h (c_{h;\textbf{s}'}^T\cdot B_{\textbf{s}} \cdot\beta_{\textbf{s}})=0}$ then $$-c_{k;\textbf{s}'}^T \cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} = \frac{1}{\lambda_k} \sum_{\substack{h\in H \\ h\neq k}}\lambda_h (c_{h;\textbf{s}'}^T \cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}})\geq 0,$$ for all $k\in H$. This implies that $c_{k;\textbf{s}'}^T \cdot B_{\textbf{s}}\cdot b=0$ for every solution $b\in \mathbb{R} ^n$ of the system $C_{\textbf{s}'}^TB_{\textbf{s}}\cdot \beta_{\textbf{s}} \geq \textbf{0}$. Hence $c_{k;\textbf{s}'}^T \cdot B_{\textbf{s}}\cdot \beta_\textbf{s}\geq 0$ is an implicit equality of the system. In particular, a set of inequalities $\{c_{h;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq 0\}_{h\in H}$ is contained in the set of implicit equalities if and only if there are positive integers $\{\lambda_h\}_{h\in H}$ such that $\sum_{h\in H}\lambda_hc_{h;\textbf{s}'} \in \ker(p^{*})$.
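The characterization above can be checked mechanically: over the cone $\{A\beta\geq\textbf{0}\}$, the $k$-th inequality is an implicit equality precisely when the maximum of its left-hand side over the cone is $0$, which is a linear program. A minimal sketch (the toy matrix below is illustrative and is not attached to any particular seed), combined with the dimension count of [\[eq:dim\]](#eq:dim){reference-type="eqref" reference="eq:dim"}:

```python
import numpy as np
from scipy.optimize import linprog

def implicit_equalities(A):
    """Indices k with A[k] @ x = 0 on every solution of A @ x >= 0."""
    implicit = []
    for k in range(A.shape[0]):
        # Maximize A[k] @ x subject to A @ x >= 0 and A[k] @ x <= 1
        # (the cap makes the LP bounded; optimum 0 <=> implicit equality).
        res = linprog(c=-A[k],
                      A_ub=np.vstack([-A, A[k]]),
                      b_ub=np.append(np.zeros(A.shape[0]), 1.0),
                      bounds=[(None, None)] * A.shape[1],
                      method="highs")
        if -res.fun < 0.5:  # optimum 0: the inequality never becomes strict
            implicit.append(k)
    return implicit

# Toy system: x >= 0, -x >= 0 (together forcing x = 0) and y >= 0.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
eq = implicit_equalities(A)
dim = A.shape[1] - np.linalg.matrix_rank(A[eq]) if eq else A.shape[1]
print(eq, dim)  # the cone {0} x R_{>=0} is one-dimensional
```

The same routine applied to the matrices $C_{\textbf{s}'}^TB_{\textbf{s}}$ computes the dimension of the cones $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$.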
In the skew-symmetric acyclic case, inequalities belonging to the set of implicit equalities can be determined by finding positive linear combinations of $\mathbf{g}$-vectors (see Remark [Remark 17](#rem:relationgvectors){reference-type="ref" reference="rem:relationgvectors"}). The observations above can be used to identify the implicit equalities of the system $C_{\textbf{s}'}^T B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq \textbf{0}$. Regarding the facets of $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$, first define $J^{>}$ as the set of indices corresponding to the inequalities in the system $[C_{\textbf{s}'}^TB_{\textbf{s}}]^{>}\beta_{\textbf{s}} \geq \textbf{0}^{>}$. Then $$(p^*)^{-1}(\mathcal G_{\textbf{s}'})=\left\{\beta\in N_\mathbb{R} \; | \; c_{i;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} = 0, i\in J^{=}, \; c_{j;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq 0, j\in J^{>}\right\}.$$ We say that the system $[C_{\textbf{s}'}^TB_{\textbf{s}}]^{>}\beta_{\textbf{s}} \geq \textbf{0}^{>}$ is *redundant* if there is a $k\in J^{>}$ such that $$(p^*)^{-1}(\mathcal G_{\textbf{s}'})=\left\{\beta\in N_\mathbb{R} \; | \; c_{i;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} = 0, i\in J^{=}\text{ and }c_{j;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq 0, j\in J^{>}\setminus \{k\}\right\}.$$ In this case we say that the inequality $c_{k;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}\geq 0$ is superfluous for the system. If for some $k\in J^{>}$ there exist $\lambda_{i}\geq 0$, $i\in J^{>}\setminus\{k\}$, such that $c_{k;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}=\sum_{i\in J^{>}\setminus\{k\}} \lambda_{i}\, c_{i;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}$, then the inequality $c_{k;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}\geq 0$ is superfluous.
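Superfluous inequalities admit a similar mechanical test: the $k$-th inequality of a homogeneous system is implied by the remaining ones exactly when its left-hand side cannot become negative over the cone they cut out. A hedged sketch on an illustrative matrix (not one coming from a seed):

```python
import numpy as np
from scipy.optimize import linprog

def is_superfluous(A, k):
    """True if A[k] @ x >= 0 is implied by the remaining rows of A @ x >= 0."""
    others = np.delete(A, k, axis=0)
    # Minimize A[k] @ x subject to others @ x >= 0 and A[k] @ x >= -1;
    # the artificial cap keeps the LP bounded, and optimum 0 means the
    # k-th inequality can never be violated.
    res = linprog(c=A[k],
                  A_ub=np.vstack([-others, -A[k]]),
                  b_ub=np.append(np.zeros(others.shape[0]), 1.0),
                  bounds=[(None, None)] * A.shape[1],
                  method="highs")
    return res.fun > -0.5

# x >= 0 and y >= 0 already imply x + y >= 0, so the last row is superfluous:
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
flags = [is_superfluous(A, k) for k in range(3)]
print(flags)
```

Applied to $[C_{\textbf{s}'}^TB_{\textbf{s}}]^{>}\beta_{\textbf{s}} \geq \textbf{0}^{>}$, this detects when the system is redundant in the sense just defined.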
Then, following [@CCZ Theorem 3.27], for each facet $F$ of $(p^{*})^{-1}(\mathcal G_{\textbf{s}'})$, there exists $j \in J^{>}$ such that the inequality $c_{j;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}} \geq 0$ defines $F$, that is, $$F = (p^{*})^{-1}(\mathcal G_{\textbf{s}'}) \cap \{\beta \in N_{\mathbb{R} } \mid c_{j;\textbf{s}'}^{T}\cdot B_{\textbf{s}}\cdot \beta_{\textbf{s}} = 0\}.$$ In particular, the number of facets of $(p^{*})^{-1}(\mathcal G_{\textbf{s}'})$ is less than or equal to $|J^{>}|=n-|J^{=}|$. The equality is achieved when the system $[C_{\textbf{s}'}^T B_{\textbf{s}}]^{>} \beta_{\textbf{s}} \geq \textbf{0}^{>}$ is not redundant. ## The skew-symmetrizable case {#skew-symmetrizable} For simplicity we have considered the skew-symmetric case, that is, we have assumed that the codomain of the bilinear form $\{\cdot , \cdot \}$ is $\mathbb{Z}$. However, the results of § [3.1](#sec:inequalities){reference-type="ref" reference="sec:inequalities"} and § [3.2](#dim){reference-type="ref" reference="dim"} still hold (upon a suitable reinterpretation of some constructions) in the skew-symmetrizable case as we proceed to explain. Suppose we are in the skew-symmetrizable case. This means that the codomain of the skew-symmetric bilinear form $\{\cdot, \cdot \}$ is $\mathbb{Q}$ and that there is a finite-index sublattice $N^{\circ}\subset N$ such that $\{ N, N^\circ \}\subset \mathbb{Z}$. Let $M^{\circ}\supset M$ be the $\mathbb{Z}$-dual of $N^{\circ}$. In this case the seed $\textbf{s}=(e_i)_{i\in I}$ is such that $\{e_i\}_{i\in I}$ is a basis of $N$ and $\{d_ie_i\}_{i\in I}$ is a basis of $N^{\circ}$ for positive integers $d_1, \dots, d_n\in \mathbb{Z} _{>0}$. Moreover, $\{e_i^*\}_{i\in I}$ is the basis for $M$ dual to $\{e_i\}_{i\in I}$ and $\{f_i\}_{i\in I}$ is a basis of $M^\circ$, where $f_i=d_i^{-1}e_i^*$. Let $d=\text{lcm}(d_i\mid i\in I)$.
In this setting the $ij$ entry of the matrix $B_{\textbf{s}}$ is $\{e_j,e_i\}d_i$ (the codomain of $p^*$ is now $M^{\circ}$, and its matrix in the bases of $N$ and $M^{\circ}$ described above is still $B_{\textbf{s}}$). The $\mathbf{g}$-vectors now belong to $M^\circ$ and the $\mathbf{c}$-vectors still belong to $N$, see Remark [Remark 6](#d_factor){reference-type="ref" reference="d_factor"}. The ambient vector space for $\Delta^+_{\textbf{s}}(\mathcal{X} )$ is $N_{\mathbb{R} }$ and for $\Delta^+_{\textbf{s}}(\mathcal{A} )$ is $(M^{\circ})_{\mathbb{R} }$. Lemma [Lemma 3](#cone){reference-type="ref" reference="cone"} still holds as stated in the skew-symmetrizable case and its proof is essentially the same. Indeed, the main difference in the proof is that one would need to discuss the weight map in the skew-symmetrizable case (the reader can find the relevant discussion in Proposition 3.14 of [@BCMNC]). However, in this case the map $N_{\mathbb{R} }\to (M^\circ \oplus N)_{\mathbb{R} }$ given by $n \mapsto (p^*(n), n)$ still gives rise to an identification of the ambient tropical space for $\Delta^+_{\textbf{s}}(\mathcal{X} )$ and the corresponding weight $0$ slice (see §3.2.6 of [@BCMNC] for the explanation). So under these considerations, the proof remains the same. Corollary [Corollary 4](#conedescription){reference-type="ref" reference="conedescription"} still holds as stated in the skew-symmetrizable case too. Indeed, this result follows directly from Lemma [Lemma 3](#cone){reference-type="ref" reference="cone"} and tropical duality. The way we have presented tropical duality includes the skew-symmetrizable case and does not need to be modified at all. **Remark 6**. In some constructions the lattice $d\cdot N$ arises more naturally as the ambient lattice for the **c**-vectors, see for example §3.2.6 of [@BCMNC]. However, we can systematically consider the canonical isomorphism between $d\cdot N$ and $N$ to eliminate the factor of $d$.
# Global monomials on cluster Poisson varieties {#sec:theta_functions} A *global monomial* on a cluster variety $\mathcal{V}$ is a regular function on $\mathcal{V}$ which restricts to a character of some seed torus in the atlas of $\mathcal{V}$ (see [@GHKK Definition 0.1]). The integral points of a cone of the cluster complex $\Delta^+_{\textbf{s}}(\mathcal{V} )$ parametrize those global monomials on $\mathcal{V}$ that restrict to a character of the same seed torus and, moreover, every such global monomial is a *theta function* on $\mathcal{V}$. More precisely, if $v_1, \dots, v_r$ are primitive ray generators of a cone of $\Delta^+_{\textbf{s}}(\mathcal{V} )$ then there are theta functions $\vartheta ^{\mathcal{V} }_{v_1}, \dots, \vartheta ^{\mathcal{V} }_{v_r}$ on $\mathcal{V}$ and a seed $\textbf{s}'$ such that every $\vartheta ^{\mathcal{V} }_{v_j}$ restricts to a character of $\mathcal{V} _{\textbf{s}'}$. Moreover, given non-negative integers $a_1,\dots , a_r$ and $v=\sum_{j=1}^ra_jv_j$ then the theta function $\vartheta ^{\mathcal{V} }_{v}$ on $\mathcal{V}$ is such that $$\vartheta ^{\mathcal{V} }_{v}= \prod_{j=1}^r (\vartheta ^{\mathcal{V} }_{v_j})^{a_j}.$$ By [@GHKK Lemma 7.8] a global monomial on $\mathcal{A}$ is the same as a cluster monomial. Every global monomial on $\mathcal{X}$ can be pulled back to $\mathcal{A}_{\mathrm{prin}}$ along the map $\tilde{p}:\mathcal{A}_{\mathrm{prin}}\to \mathcal{X}$ obtained by composing the quotient map $\mathcal{A}_{\mathrm{prin}}\to \mathcal{A}_{\mathrm{prin}}/T_N$ with the canonical isomorphism $\mathcal{A}_{\mathrm{prin}}/T_N \to \mathcal{X}$ (see [@GHKK Lemma 7.10 (3)] for a proof of this statement and [@BCMNC Section 2.1.3] for a precise description of the maps). This allows us to describe the global monomials on $\mathcal{X}$ using the *$F$-polynomials* introduced in [@FZ_IV Definition 3.3] (see Lemma [Lemma 7](#initiallemma){reference-type="ref" reference="initiallemma"} below).
Let $y_1,\dots, y_n$ be the cluster coordinates of the seed torus $\mathcal{X} _{\textbf{s}}$ and $x_1,\dots , x_n,t_1,\dots, t_n$ be the coordinates of the seed torus $(\mathcal{A}_{\mathrm{prin}})_{\textbf{s}}=T_M\times T_N$. We use the standard vector notation for a monomial in the variables $y_1,\dots, y_n$. Namely, if $v=(v_1,\dots, v_n)\in \mathbb{Z} ^n$ then ${\bf y}^v=y_1^{v_1}\cdots y_n^{v_n}$, and similar notation is used for the $x_i$'s and the $t_i$'s. The $F$-polynomial associated to $B_{\textbf{s}}$, $\textbf{s}'$ and $i$ is denoted by $F_{i;\textbf{s}'}$. **Lemma 7**. Let $\beta^T=(\beta_1,\dots, \beta_n)\in N$ be such that $p^{*}(\beta)\in \mathcal G_{\textbf{s}'}$ for some ${\textbf{s}'}\in \mathbb{T}_n$. Let $$p^{*}(\beta)=\sum_{i=1}^{n}\alpha_{i}\mathbf{g} _{i;\textbf{s}'} \quad \text{where } \alpha_{1}, \dots, \alpha_{n}\geq 0.$$ Then $$\label{eq:gm1} \vartheta_{\beta}^{\mathcal{X}}={\bf y}^{\beta_{\textbf{s}}}\prod_{i=1}^{n} F_{i;\textbf{s}'}^{\alpha_{i}}(y_1, \ldots, y_n).$$ In particular, $$\label{eq:gm2} \vartheta ^{\mathcal{X} }_{\beta}={\bf y}^{\beta_{\textbf{s}}}\prod_{i=1}^n F_{i;\textbf{s}'}^{c^T_{i;\textbf{s}'} \cdot B_{\textbf{s}} \cdot \beta_{\textbf{s}}}(y_1,\dots,y_n).$$ *Proof.* The pull-back $\tilde{p}^*: N \to M\oplus N$ is given by $\beta \mapsto (p^*(\beta),\beta)$.
Using the fact that $\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(p^*(\beta),0)}$ is the global monomial parametrized by $\sum_{i=1}^{n}\alpha_{i}\mathbf{g} _{i;\textbf{s}'}$ we have that $$\label{eq:tf1} \tilde{p}^*(\vartheta ^\mathcal{X} _\beta)=\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(p^*(\beta),\beta)}=\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(0,\beta)}\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(p^*(\beta),0)}={\bf t}^{\beta_{\textbf{s}}}\prod_{i=1}^n\left(\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(\mathbf{g} _{i;\textbf{s}'},0)}\right)^{\alpha_{i}}.$$ The theta function $\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(\mathbf{g} _{i;\textbf{s}'},0)}$ is in fact a cluster variable, so it is given by $$\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(\mathbf{g} _{i;\textbf{s}'},0)}={\bf x}^{\mathbf{g} _{i;\textbf{s}'}}F_{i;\textbf{s}'}(\hat{y}_1, \dots, \hat{y}_n),$$ where $$\label{eq:tf2} \hat{y}_j=t_j\prod_{i=1}^n x_i^{b_{ij}}=\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(0,e_j)}\vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(p^*(e_j),0)}= \vartheta ^{\mathcal{A}_{\mathrm{prin}}}_{(p^*(e_j),e_j)}= \tilde{p}^*(y_j),$$ for $j \in \{1,\dots ,n\}$ is the weight 0 monomial on $(\mathcal{A}_{\mathrm{prin}})_{\textbf{s}}$ inducing $y_j$.
Putting together equations [\[eq:tf1\]](#eq:tf1){reference-type="eqref" reference="eq:tf1"} and [\[eq:tf2\]](#eq:tf2){reference-type="eqref" reference="eq:tf2"} we obtain $$\begin{aligned} \tilde{p}^*(\vartheta ^\mathcal{X} _\beta)&=&{\bf t}^{\beta_{\textbf{s}}}\prod_{i=1}^{n}({\bf x}^{\alpha_{i}\mathbf{g} _{i;\textbf{s}'}})\prod_{i=1}^{n}F^{\alpha_i}_{i;\textbf{s}'}(\tilde{p}^*(y_1),\dots, \tilde{p}^*(y_n))\\ &=&{\bf t}^{\beta_\textbf{s}}{\bf x}^{p^*(\beta_\textbf{s})}\prod_{i=1}^{n}F^{\alpha_i}_{i;\textbf{s}'}(\tilde{p}^*(y_1),\dots, \tilde{p}^*(y_n))\\ &=&\tilde{p}^*({\bf y}^{\beta_\textbf{s}})\prod_{i=1}^{n}\tilde{p}^*(F^{\alpha_i}_{i;\textbf{s}'}(y_1,\dots, y_n))\\ &=&\tilde{p}^*\left({\bf y}^{\beta_\textbf{s}}\prod_{i=1}^{n} F_{i;\textbf{s}'}^{\alpha_{i}}(y_1, \ldots, y_n)\right).\end{aligned}$$ Equation [\[eq:gm1\]](#eq:gm1){reference-type="eqref" reference="eq:gm1"} follows from the injectivity of $\tilde{p}^*$ and [\[eq:gm2\]](#eq:gm2){reference-type="eqref" reference="eq:gm2"} from the fact that $\alpha_{i}=c^T_{i;\textbf{s}'}\cdot B_\textbf{s}\cdot \beta_\textbf{s}$. ◻ **Example 8**. Let $B_{\textbf{s}}=\begin{psmallmatrix} 0 & 1 & 0 \\ -1 & 0 & -1 \\ 0 & 1 & 0 \end{psmallmatrix}$ and $\beta^{T}_\textbf{s}=(\beta_1,\beta_2,\beta_3)$ be such that $p^{*}(\beta_{\textbf{s}})\in \mathcal G_{\textbf{s}'}$, where $G^B_{\textbf{s}'}=\begin{psmallmatrix} -1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & -1 \end{psmallmatrix}$ is its associated $\mathbf{g}$-matrix.
Hence, $$\begin{aligned} \vartheta_{\beta_{\textbf{s}}}^{\mathcal{X}}&=y_1^{\beta_1} y_2^{\beta_2} y_3^{\beta_3}(1+y_1)^{(-1,0,0)\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}}1^{(1,1,1)\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}}(1+y_3)^{(0,0,-1)\cdot B_{\textbf{s}}\cdot\beta_{\textbf{s}}} \\ &= y_1^{\beta_1} y_2^{\beta_2} y_3^{\beta_3}(1+y_1)^{(0,-1,0)\cdot\beta_{\textbf{s}}}(1+y_3)^{(0,-1,0)\cdot\beta_{\textbf{s}}}\\ &= y_1^{\beta_1} y_2^{\beta_2} y_3^{\beta_3}(1+y_1)^{-\beta_2}(1+y_3)^{-\beta_2},\end{aligned}$$ where $-\beta_2\geq 0$ and $-\beta_1+2\beta_2-\beta_3\geq 0$ are the inequalities in the system $C_{\textbf{s}'}^TB_\textbf{s}\cdot \beta_{\textbf{s}} \geq \textbf{0}$, with $C_{\textbf{s}'}=\begin{psmallmatrix} -1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & -1 \end{psmallmatrix}$. **Remark 9**. Lemma [Lemma 7](#initiallemma){reference-type="ref" reference="initiallemma"} also holds as stated in the skew-symmetrizable case (see Remark [Remark 6](#d_factor){reference-type="ref" reference="d_factor"}). The only difference in the preamble is that the torus acting on $\mathcal{A}_{\mathrm{prin}}$ is $T_{N^\circ}$ as opposed to $T_N$. # Normal vectors via **g**-vectors in the acyclic case {#sec:supporting_hyperplanes} ## The cluster ensemble map via representation theory From now on we assume that $Q=Q_\textbf{s}$ is acyclic and denote $B_\textbf{s}$ also by $B_Q$. In this section we describe the map $p^*$ using representation theory and use this to provide a formula (see Theorem [Theorem 14](#dv-gv){reference-type="ref" reference="dv-gv"} below) that allows us to compute the image under $p^*$ of a dimension vector using Auslander-Reiten theory. We begin by recalling the notions of dimension vectors for objects of $D^b_{kQ}$ and **g**-vectors for objects of $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$. The reader is referred to [@AIR] and [@DIJ] for a complete treatment of **g**-vectors using silting theory.
### Dimension vectors and **g**-vectors All $kQ$-modules are thought of as stalk complexes concentrated in degree $0$. Let $S_1, \dots , S_n$ be the simple $kQ$-modules and $P_1, \dots, P_n$ be the indecomposable projective $kQ$-modules. The classes of the simple modules form a basis of $K_0(D^b_{kQ})$ and the classes of the indecomposable projective modules form a basis of $K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ))$. We shall use the following conventions. The dimension vector of an object $X$ of $D^b_{kQ}$ is $$\underline{\dim}\ X = \sum_{i=1}^n d_i e_i\in N,$$ where $$[X]=\sum_{i=1}^n d_i [S_i].$$ Let $P=P_1\oplus \dots \oplus P_n$. The **g**-vector with respect to $P$ of an object $X$ of $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$ is $$\mathbf{g} _P^X:= \sum_{i=1}^n g_i e^*_i \in M,$$ where $$[X]= \sum_{i=1}^n g_i[P_i] \in K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)).$$ Similarly, let $I_1, \dots , I_n$ be the indecomposable injective $kQ$-modules and let $\mathsf{K}^{\rm b}(\mathop{\mathrm{inj}}kQ)$ be the homotopy category of bounded complexes of finitely generated injective $kQ$-modules. The classes of the indecomposable injective modules form a basis of the Grothendieck group $K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{inj}}kQ))$. In particular we can define $$\mathbf{g} _I^X:= \sum_{i=1}^n g_i e^*_i \in M,$$ where $$[X]= \sum_{i=1}^n g_i[I_i] \in K_0(\mathsf{K}^{\rm b}(\mathop{\mathrm{inj}}kQ)).$$ **Remark 10**. It is well known that if $X\in \mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$ is the projective presentation of an indecomposable rigid module then $\mathbf{g} _P^X$ corresponds to the **g**-vector of a cluster variable. **Remark 11**. Since $\text{mod } kQ$ is hereditary, the **g**-vectors are additive in exact triangles. In other words, if $X \to E \to Y \to X[1]$ is an exact triangle in $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$ (resp. in $\mathsf{K}^{\rm b}(\mathop{\mathrm{inj}}kQ)$) then $\mathbf{g} _P^{E}=\mathbf{g} _P^{X} + \mathbf{g} _P^{Y}$ (resp.
$\mathbf{g} _I^{E}=\mathbf{g} _I^{X} + \mathbf{g} _I^{Y}$). ### Interpretation of $p^*$ via quiver representations Let $C_{kQ}$ be the Cartan matrix associated to $kQ$. This is an $n\times n$ integer matrix whose columns are the dimension vectors of the indecomposable projective $kQ$-modules. Since $Q$ is acyclic, the *Euler characteristic* of $kQ$ is the bilinear map $\langle -, - \rangle_{kQ}: N\times N \to \mathbb{Z}$ given by $$\langle \underline{\dim}\ X, \underline{\dim}\ Y \rangle_{kQ} = \dim_k \mathop{\mathrm{Hom}}_{kQ} (X,Y)- \dim_k \mathop{\mathrm{Ext}}^1_{kQ} (X,Y).$$ Moreover, in the basis of $N$ given by $\textbf{s}$ (*i.e.* by the dimension vectors of the simple modules) the matrix associated to $\langle -, - \rangle_{kQ}$ is $(C^{-1}_{kQ})^T$. The following is a well-known result that asserts that the exchange matrix $B_Q$ can be described using the Cartan matrix. For the convenience of the reader we include a proof. To keep notation light we denote $(\underline{\dim}\ X)_{\textbf{s}}$ simply by $\underline{\dim}\ X$. **Lemma 12**. Let $Q$ be an acyclic quiver.
Then $$B_Q=(C_{kQ}^{-1})^T-C_{kQ}^{-1}.$$ *Proof.* Let $c_{ij}$ be the entry in the $i^{\rm th}$ row and the $j^{\rm th}$ column of the matrix $(C_{kQ}^{-1})^T-C_{kQ}^{-1}$, then $$\begin{aligned} c_{ij}= & \ (\underline{\dim}\; S_i ) (C_{kQ}^{-1})^T (\underline{\dim}\; S_j )^T - (\underline{\dim}\; S_i) (C^{-1}_{kQ}) (\underline{\dim}\; S_j )^T \\ = & \ (\underline{\dim}\; S_i) (C_{kQ}^{-1})^T (\underline{\dim}\; S_j )^T - (\underline{\dim}\; S_i) (C^{-1}_{kQ^{op}})^T (\underline{\dim}\; S_j )^T\\ = &\ \langle \underline{\dim}\ S_i, \underline{\dim}\ S_j \rangle_{kQ}- \langle \underline{\dim}\ S_i, \underline{\dim}\ S_j \rangle_{kQ^{\rm op}}\\ = &\ \dim_k \mathop{\mathrm{Ext}}^1_{kQ^{\rm op}}(S_i, S_j) -\dim_k \mathop{\mathrm{Ext}}^1_{kQ}(S_i, S_j).\end{aligned}$$ Now recall that the entry in the $i^{\rm th}$ row and the $j^{\rm th}$ column of the matrix $B_Q$ is given by the number of arrows in $Q$ from $i$ to $j$ minus the number of arrows in $Q$ from $j$ to $i$. Moreover, we have that $\dim_k \mathop{\mathrm{Ext}}^1_{kQ}(S_i, S_j)$ coincides with the number of arrows from $j$ to $i$ in $Q$ (see for example [@Assem Lemma 2.12.b]). The result follows. ◻ ## Normal vectors of the facets We now describe the facets of the cones of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ in terms of their normal vectors. Since $kQ$ has finite global dimension, there is a canonical triangulated equivalence $D^b_{kQ} \to \mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}kQ)$ that maps a module to its minimal projective presentation. For an object $X$ of $D^b_{kQ}$ we let $P_X$ be the image of $X$ under this equivalence. Since $\text{mod } kQ$ is hereditary, any module is quasi-isomorphic to its minimal projective presentation. Motivated by this observation, for an object $X$ of $D^b_{kQ}$ we define $$\mathbf{g} ^X_P:=\mathbf{g} ^{P_X}_P.$$ We define $\mathbf{g} ^{X}_I$ analogously.
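As a quick numerical sanity check of Lemma [Lemma 12](#B-C){reference-type="ref" reference="B-C"}, consider the quiver $Q: 1\to 2\leftarrow 3$. Assuming the path-composition convention implicit in the proof of the lemma (so that $\dim_k \mathop{\mathrm{Ext}}^1_{kQ}(S_i,S_j)$ counts the arrows from $j$ to $i$), the indecomposable projectives are taken to have dimension vectors $(1,0,0)$, $(1,1,1)$ and $(0,0,1)$:

```python
import numpy as np

# Cartan matrix of kQ for Q: 1 -> 2 <- 3; its columns are the assumed
# dimension vectors of P_1, P_2, P_3, namely (1,0,0), (1,1,1), (0,0,1).
C = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 1, 1]])
# Exchange matrix: b_ij = (#arrows i -> j) - (#arrows j -> i).
B = np.array([[0, 1, 0],
              [-1, 0, -1],
              [0, 1, 0]])

Cinv = np.linalg.inv(C)
D = np.round(Cinv.T - Cinv).astype(int)
print(D)  # reproduces B, as Lemma 12 predicts
```

The same check can be run for any acyclic quiver once the Cartan matrix is written down in the matching convention.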
We denote by $\tau$ the Auslander--Reiten translation of $\text{mod }kQ$ (or $\mathsf{K}^{\rm b}(\mathop{\mathrm{proj}}\ kQ)$ or $D^b_{kQ}$). **Lemma 13**. Let $Q$ be an acyclic quiver and $X$ be an object in $D^b_{kQ}$. Then i) $C_{kQ}^{-1} (\underline{\dim}\; X)^T=\mathbf{g} _P^{X}.$ ii) $(C_{kQ}^{-1})^T (\underline{\dim}\; X)^T=-\mathbf{g} _P^{\tau^{-1}X}.$ *Proof.* Since the columns of $C_{kQ}$ are the dimension vectors of the indecomposable projective modules, we have that $C_{kQ}(\mathbf{g} ^{P_i}_P)^T=\underline{\dim}\ P_i$. Equivalently, $C_{kQ}^{-1} (\underline{\dim}\ P_i)^T=\mathbf{g} ^{P_i}_P$. Then $(i)$ follows from the fact that dimension vectors and the **g**-vectors are additive in exact triangles and that $X$ is quasi-isomorphic to $P_X$. We now show $(ii)$. By additivity, we can assume that $X$ is an indecomposable module. Let $$0 \to X \to I^{\prime} \to I^{\prime\prime} \to 0$$ be a minimal injective presentation of $X$. By definition of the Auslander-Reiten translation and since $\text{mod }kQ$ is hereditary we have that $$0 \to \nu^{-1}I^{\prime} \to \nu^{-1}I^{\prime\prime} \to \tau^{-1}X \to 0$$ is a minimal projective presentation of $\tau^{-1}X$, where $\nu$ is the Nakayama functor. By additivity we obtain that $$\mathbf{g} _P^{\tau^{-1}X}=\mathbf{g} _P^{\nu^{-1}I^{\prime\prime}}-\mathbf{g} _P^{\nu^{-1}I^{\prime}}.$$ Since $\nu^{-1}I_i=P_i$ we have that $\mathbf{g} ^{\nu^{-1}I^{\prime}}_P=\mathbf{g} ^{I^{\prime}}_I$ and $\mathbf{g} ^{\nu^{-1}I^{\prime\prime}}_P=\mathbf{g} ^{I^{\prime\prime}}_I$. Moreover, by additivity we have that $\mathbf{g} _I^{X}=\mathbf{g} ^{I^{\prime}}_I - \mathbf{g} ^{I^{\prime\prime}}_I$. Putting these observations together we obtain that $$-\mathbf{g} ^{\tau^{-1}X}_{P}=\mathbf{g} ^{X}_I.$$ Finally, the columns of $C^{T}_{kQ}$ are the dimension vectors of the indecomposable injective modules. Hence we can argue as in the preceding case that $(C_{kQ}^{-1})^T(\underline{\dim}\ X)^T=\mathbf{g} ^X_I$.
The result follows. ◻ For every indecomposable complex $X$ of $D^b_{kQ}$ there is an Auslander-Reiten triangle $$\label{eq:AR_triang} X\to X_1\oplus \dots \oplus X_r \to \tau^{-1}X \to X[1],$$ where $X_1,\dots, X_r$ are indecomposable objects, see [@Hap § 3]. A mesh is a sequence of the form $X\to X_1\oplus \dots \oplus X_r \to \tau^{-1}X$; we write $$X_i\in \overrightarrow{\text{mesh}}_X$$ to indicate that $X_i$ is an indecomposable summand in the middle of such mesh. **Theorem 14**. Let $Q$ be an acyclic quiver and $X$ be an indecomposable object in $D^b_{kQ}$. Then $$B_Q(\underline{\dim}\; X)^T=-\mathbf{g} ^X_P -\mathbf{g} ^{\tau^{-1}X}_P=-\sum_{X_i\in \overrightarrow{\text{mesh}}_X} \mathbf{g} ^{X_i}_P.$$ *Proof.* By Lemma [Lemma 12](#B-C){reference-type="ref" reference="B-C"}, we know that $B_Q=(C_{kQ}^{-1})^T-C_{kQ}^{-1}$. Using Lemma [Lemma 13](#lem:mesh){reference-type="ref" reference="lem:mesh"} and the additivity of **g**-vectors we obtain that $$B_Q (\underline{\dim}\; X)^T=(C_{kQ}^{-1})^{T}(\underline{\dim}\; X)^T-C_{kQ}^{-1}(\underline{\dim}\; X)^T = -\mathbf{g} ^X_P -\mathbf{g} ^{\tau^{-1}X}_P= -\sum_{X_i\in \overrightarrow{\text{mesh}}_X} \mathbf{g} ^{X_i}_P.$$ ◻ **Example 15**. Let $Q$ be the $2$-Kronecker quiver $$\begin{tikzcd} Q: 1 && 2. \arrow[ll,shift left,""] \arrow[ll,shift right,swap,""] \end{tikzcd}$$ So $$B_Q=\begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}.$$ We proceed to verify Theorem [Theorem 14](#dv-gv){reference-type="ref" reference="dv-gv"} in this case. Observe that it is enough to verify the statement for the stalk complexes concentrated in degree $0$. Indeed, for every $X\in D^b_{kQ}$ we have $\underline{\dim}\ X[1]=-\underline{\dim}\ X$ and $\mathbf{g} ^{X[1]}_P=-\mathbf{g} ^X_P$. It is well-known that every indecomposable preprojective module over $kQ$ has a dimension vector of the form $(d, d+1)$, while every indecomposable preinjective module has a dimension vector of the form $(d+1, d)$, where $d\in \mathbb{N}$.
Furthermore, indecomposable modules in the regular component of the Auslander-Reiten quiver of $kQ$ have dimension vectors of the form $(d+1, d+1)$. Additionally, at the level of dimension vectors, the meshes in the preprojective and preinjective components, $\mathcal{P}(A)$ and $\mathcal{Q}(A)$, of the Auslander-Reiten quiver in the derived category are of the following form, respectively $$\begin{tikzcd} (d,d+1) \arrow[rr,shift left] \arrow[rr,shift right,swap] && (d+1,d+2) \arrow[rr,shift left] \arrow[rr,shift right,swap] && (d+2,d+3) \end{tikzcd}$$ and $$\begin{tikzcd} (d+3,d+2) \arrow[rr,shift left] \arrow[rr,shift right,swap] && (d+2,d+1) \arrow[rr,shift left] \arrow[rr,shift right,swap] && (d+1,d) \end{tikzcd}$$ and in the regular component $\mathcal{R}(A)$ they take the form $$\begin{tikzcd} (d,d) \arrow[rr] && (d-1,d-1)\oplus(d+1,d+1) \arrow[rr] && (d,d). \end{tikzcd}$$ On the other hand, it can be observed that if $X$ is an indecomposable module with $\underline{\dim}\; X = (d_1,d_2)$, $d_1,d_2\in \mathbb{N}$, then $\mathbf{g} _P^X=(g_1,g_2)$, where $g_1=d_1$ and the value of $g_2$ depends on the specific form of $X$ as follows: $$g_2= \left\{ \begin{array}{lll} -d_2+2, & \text{if} & X\in \mathcal{P}(A) \\ \\ -d_2-2, & \text{if} & X\in \mathcal{Q}(A) \\ \\ -d_2, & \text{if} & X \in \mathcal{R}(A) \\ \end{array} \right.$$ Then, we have the following cases: 1. for $X\in \mathcal{P}(A)$ with $\underline{\dim}\ X = (d,d+1)$, $$\begin{aligned} B_Q (\underline{\dim}\ X) = & (-2(d+1),2d) = -2 (d+1,-(d+2)+2) =-2 \mathbf{g} _P^{X^{\prime}},\end{aligned}$$ where $\underline{\dim}\ X^{\prime} = (d+1,d+2)$. 2. for $X\in \mathcal{Q}(A)$ with $\underline{\dim}\; X = (d+2,d+1)$, $$\begin{aligned} B_Q (\underline{\dim}\; X) = & (-2(d+1),2(d+2)) = -2 (d+1,-d-2) =-2 \mathbf{g} _P^{X^{\prime}},\end{aligned}$$ where $\underline{\dim}\ X^{\prime} = (d+1,d)$. 3. for $X$ with $\underline{\dim}\; X = (1,1)$, $$\begin{aligned} B_Q (\underline{\dim}\; X) = & (-2,2) = - \mathbf{g} _P^{X^{\prime}},\end{aligned}$$ where $\underline{\dim}\ X^{\prime} = (2,2)$. 4.
for $X$ in the regular component such that $\underline{\dim}\; X = (d,d)$, $d\geq 2$, $$\begin{aligned} B_Q (\underline{\dim}\; X) = & (-2d,2d) = -\big[(d+1,-d-1)+(d-1,-d+1)\big] =- [\mathbf{g} _P^{X^{\prime\prime}}+\mathbf{g} _P^{X^{\prime}}],\end{aligned}$$ where $\underline{\dim}\ X^{\prime} = (d-1,d-1)$ and $\underline{\dim}\ X^{\prime\prime} = (d+1,d+1)$. This illustrates Theorem [Theorem 14](#dv-gv){reference-type="ref" reference="dv-gv"} for all $kQ$-modules $X$. We can also see from these computations that the result holds for all $X\in D^b_{kQ}$. ## Supporting hyperplanes via **g**-vectors The aim of this subsection is to apply Theorem [Theorem 14](#dv-gv){reference-type="ref" reference="dv-gv"} to give the equation of every supporting hyperplane of a cone of $\Delta^+_{\textbf{s}}(\mathcal{X} )$ using **g**-vectors. Using the description of the cone $(p^*)^{-1}(\mathcal G_{\textbf{s}'})$ through the system of inequalities $$C_{\textbf{s}'}^T B_Q\cdot\beta_{\textbf{s}} \geq \textbf{0},$$ as discussed in § [3.1](#sec:inequalities){reference-type="ref" reference="sec:inequalities"}, the following result tells us that a normal vector of a hyperplane of the form $c_{i;\textbf{s}'}^T\cdot B_Q\cdot\beta_{\textbf{s}} = 0$ can be described using **g**-vectors of $kQ$-modules. **Corollary 16**. Suppose $c_{i;\textbf{s}'}$ is a positive $\mathbf{c}$-vector. Let $X$ be the indecomposable rigid $kQ$-module such that $c_{i;\textbf{s}'}^T=\underline{\dim}\ X$. Then the vector $\displaystyle{\sum_{X_j\in \overrightarrow{\text{mesh}}_X}\mathbf{g} _P^{X_j}}$ is normal to the hyperplane $c_{i;\textbf{s}'}^T\cdot B_Q\cdot\beta_{\textbf{s}} = 0$.
*Proof.* Theorem [Theorem 14](#dv-gv){reference-type="ref" reference="dv-gv"} immediately implies that $$\begin{aligned} (-B_Q \cdot (\underline{\dim}\; X)^T)^{T}= & \bigg(\sum_{X_i\in \overrightarrow{\text{mesh}}_X} \mathbf{g} _P^{X_i}\bigg)^T \\ (\underline{\dim}\; X) \cdot (-B_Q^{T})= & \sum_{X_i\in \overrightarrow{\text{mesh}}_X} (\mathbf{g} _P^{X_i})^T. \end{aligned}$$ Hence, we can deduce $$\alpha_{i}=c_{i;\textbf{s}'}^{T}\cdot B_Q\cdot\beta_{\textbf{s}}=\sum_{X_j\in \overrightarrow{\text{mesh}}_X}(\mathbf{g} _P^{X_j})^T\cdot \beta_{\textbf{s}},$$ and the result follows. ◻ **Remark 17**. The results of § [3.2](#dim){reference-type="ref" reference="dim"} together with Corollary [Corollary 16](#Corollary-normalvectorhyperplane){reference-type="ref" reference="Corollary-normalvectorhyperplane"} imply that we can search for linear relations with positive coefficients between certain sets of **g**-vectors to determine the implicit equalities of the system $C_{\textbf{s}'}^T B_Q\cdot\beta_{\textbf{s}} \geq \textbf{0}$. More precisely, if $X_i$ is an indecomposable rigid $kQ$-module such that $c_{i;\textbf{s}'}^T=\underline{\dim}\ X_i$, for $i\in \{1,\dots,n\}$, then $$\sum_{k\in H}\lambda_k(c_{k;\textbf{s}'}^T \cdot B_Q\cdot\beta_{\textbf{s}})=0,$$ is equivalent to $$\label{eq:equiv} \sum_{k\in H}\lambda_k(\mathbf{g} _P^{X_k}+\mathbf{g} _P^{\tau^{-1}X_k})=\sum_{\substack{X_{j_k}\in \overrightarrow{\text{mesh}}_{X_k} \\ k\in H}}\lambda_k\mathbf{g} _P^{X_{j_k}}=\textbf{0}.$$ Hence, finding relations as in [\[eq:equiv\]](#eq:equiv){reference-type="eqref" reference="eq:equiv"} allows us to identify sets of implicit equations of the system. **Remark 18**. For each $\textbf{s}'\in \mathbb{T}_n$, all the hyperplanes $c_{i;\textbf{s}'}^{T} \cdot B_Q \cdot \beta_\textbf{s}=0$ can be determined through a "mesh relation" inherited from the mesh relation satisfied by the corresponding dimension vectors in the Auslander-Reiten quiver of $\text{mod }kQ$.
Consequently, the inequalities linked to the indecomposable projective modules assume a crucial role in deducing other inequalities related to $\mathbf{c}$-vectors. For instance, for quivers of finite type, the inequalities associated with the indecomposable projective modules allow us to determine all the supporting hyperplanes of the cones.

**Example 19**. All the hyperplanes associated with the $2$-Kronecker quiver in Example [Example 15](#2Kronecker){reference-type="ref" reference="2Kronecker"} determined by a positive $\mathbf{c}$-vector $c_{i;\textbf{s}'}^T=\underline{\dim}(X)$ take the following forms:

1. $(d+1)\beta_1-d\beta_2=0$, if $X\in \mathcal{P}(A)$,
2. $d\beta_1-(d+1)\beta_2=0$, if $X\in \mathcal{Q}(A)$,
3. $\beta_1-\beta_2=0$, if $X$ is a regular module,

where $d\in\mathbb{N}$. These hyperplanes can be positioned in the Auslander-Reiten quiver according to the location of their respective $\mathbf{c}$-vectors. In this way, the supporting hyperplanes look as follows. The colors blue, red and green represent whether the module belongs to $\mathcal{P}(A)$, $\mathcal{Q}(A)$ or $\mathcal{R}(A)$, respectively. Besides, by the relations between **g**-vectors and $p^{*}$ obtained in Example [Example 15](#2Kronecker){reference-type="ref" reference="2Kronecker"}, the following behavior of the cones and the generating rays can be observed.

**Example 20**. Consider the quiver $Q=1\rightarrow 2 \leftarrow 3$ and let $B_Q=\begin{psmallmatrix} 0 & 1 & 0 \\ -1 & 0 & -1 \\ 0 & 1 & 0 \end{psmallmatrix}$ be its associated matrix. We proceed to describe the cones $(p^*)^{-1}(\mathcal{G}_{\textbf{s}'})$ for all $\textbf{s}'$. It is convenient to work with triangulations of the $6$-gon to consider all the possible seeds; we fix the triangulation associated to $\textbf{s}$. We proceed to verify that the seeds $\textbf{s}'$ such that $(p^*)^{-1}(\mathcal{G}_{\textbf{s}'})$ is $3$-dimensional are precisely the bipartite seeds.
We also identify those seeds that give rise to $2$- and $1$-dimensional cones. In the case of seeds corresponding to bipartite orientations of $\mathbb{A}_3$, they generate the maximal cones, since no inequalities belong to $[C_{\textbf{s}'}^TB_Q]^=\beta_{\textbf{s}} \geq \textbf{0}^{=}$, as shown in Table [1](#tab:3A3){reference-type="ref" reference="tab:3A3"}.

| $\mathbf{c}$-matrix | $C_{\textbf{s}'}^TB_Q\cdot\beta_{\textbf{s}} \geq \textbf{0}$ |
|:---:|:---:|
| $\begin{psmallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{psmallmatrix}$ | $\begin{smallmatrix} \beta_2 \geq 0 \\ -\beta_1-\beta_3 \geq 0 \\ \beta_2 \geq 0 \end{smallmatrix}$ |
| $\begin{psmallmatrix} -1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & -1 \end{psmallmatrix}$ | $\begin{smallmatrix} -\beta_2 \geq 0 \\ -\beta_1+2\beta_2-\beta_3 \geq 0 \\ -\beta_2 \geq 0 \end{smallmatrix}$ |
| $\begin{psmallmatrix} 0 & -1 & 1 \\ 1 & -1 & 1 \\ 1 & -1 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} -\beta_1+\beta_2-\beta_3 \geq 0 \\ \beta_1-2\beta_2+\beta_3 \geq 0 \\ -\beta_1+\beta_2-\beta_3 \geq 0 \end{smallmatrix}$ |
| $\begin{psmallmatrix} 0 & 0 & -1 \\ -1 & 1 & -1 \\ -1 & 0 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} \beta_1-\beta_2+\beta_3 \geq 0 \\ -\beta_1-\beta_3 \geq 0 \\ \beta_1-\beta_2+\beta_3 \geq 0 \end{smallmatrix}$ |
| $\begin{psmallmatrix} 0 & 0 & -1 \\ 0 & -1 & 0 \\ -1 & 0 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} -\beta_2 \geq 0 \\ \beta_1+\beta_3 \geq 0 \\ -\beta_2 \geq 0 \end{smallmatrix}$ |
| $\begin{psmallmatrix} 0 & 0 & 1 \\ 0 & -1 & 0 \\ 1 & 0 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} \beta_2 \geq 0 \\ \beta_1+\beta_3 \geq 0 \\ \beta_2 \geq 0 \end{smallmatrix}$ |

: 3-dimensional cones $\mathbb{A}_3$

Additionally, for each bipartite seed $\textbf{s}'$, we observe that
$c_{1;\textbf{s}'}^T\cdot B_Q \cdot \beta_{\textbf{s}} = c_{3;\textbf{s}'}^T\cdot B_Q \cdot \beta_{\textbf{s}}$, implying that the subset $\{c_{1;\textbf{s}'}^T\cdot B_Q \cdot \beta_{\textbf{s}} \geq 0, c_{2;\textbf{s}'}^T\cdot B_Q \cdot \beta_{\textbf{s}}\geq 0\}$ of $[C_{\textbf{s}'}^TB_Q]^{>}\beta_{\textbf{s}} \geq \textbf{0}^{>}$ is not redundant. Consequently, $(p^{*})^{-1}(\mathcal G_{\textbf{s}'})$ has two facets: $$F_1 := (p^{*})^{-1}(\mathcal G_{\textbf{s}'}) \cap \{\beta \in N_{\mathbb{R} } \mid c_{1;\textbf{s}'}^{T}\cdot B_Q\cdot \beta_{\textbf{s}} = 0\}, \quad F_2 := (p^{*})^{-1}(\mathcal G_{\textbf{s}'}) \cap \{\beta \in N_{\mathbb{R} } \mid c_{2;\textbf{s}'}^{T}\cdot B_Q\cdot \beta_{\textbf{s}} = 0\}.$$ Next, we consider the Auslander-Reiten quiver of $D^b_{kQ}$. We can use Theorem [Theorem 14](#dv-gv){reference-type="ref" reference="dv-gv"} to see that the difference between each pair of elements enclosed within the same red rectangle belongs to the kernel of $p^{*}$. Consequently, a $\mathbf{c}$-matrix containing one of these two elements with positive sign and also containing the other element with negative sign generates a 2-dimensional cone of $\Delta^+_\textbf{s}(\mathcal{X} )$.
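Since Table 1 is entirely explicit, its systems can be rechecked mechanically. The sketch below reads the printed matrices as the $\mathbf{c}$-matrices $C_{\textbf{s}'}$, forms $C_{\textbf{s}'}^T B_Q$, and recovers the coefficient rows of the listed inequalities:

```python
import numpy as np

B_Q = np.array([[0, 1, 0],
                [-1, 0, -1],
                [0, 1, 0]])  # exchange matrix of Q = 1 -> 2 <- 3

# The six c-matrices of Table 1, in the order printed.
c_matrices = [
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[-1, 1, 0], [0, 1, 0], [0, 1, -1]],
    [[0, -1, 1], [1, -1, 1], [1, -1, 0]],
    [[0, 0, -1], [-1, 1, -1], [-1, 0, 0]],
    [[0, 0, -1], [0, -1, 0], [-1, 0, 0]],
    [[0, 0, 1], [0, -1, 0], [1, 0, 0]],
]

# Coefficient rows of the corresponding inequality systems in Table 1,
# e.g. [0, 1, 0] stands for beta_2 >= 0 and [-1, 0, -1] for -beta_1 - beta_3 >= 0.
systems = [
    [[0, 1, 0], [-1, 0, -1], [0, 1, 0]],
    [[0, -1, 0], [-1, 2, -1], [0, -1, 0]],
    [[-1, 1, -1], [1, -2, 1], [-1, 1, -1]],
    [[1, -1, 1], [-1, 0, -1], [1, -1, 1]],
    [[0, -1, 0], [1, 0, 1], [0, -1, 0]],
    [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
]

for C, S in zip(c_matrices, systems):
    assert np.array_equal(np.array(C).T @ B_Q, np.array(S))
```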
Hence, 2-dimensional cones correspond to the seeds whose associated quiver is linearly oriented, as observed in the following table:

| $\mathbf{c}$-matrices | $C_{\textbf{s}'}^TB_Q\cdot\beta_{\textbf{s}} \geq \textbf{0}$ | $[C_{\textbf{s}'}^TB_Q]^{=}$ |
|:---:|:---:|:---:|
| $\begin{psmallmatrix} 0 & -1 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 0 \end{psmallmatrix}$, $\begin{psmallmatrix} 0 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & -1 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} \beta_1-\beta_2+\beta_3 \geq 0 \\ -\beta_2 \geq 0 \\ -\beta_1+\beta_2-\beta_3 \geq 0 \end{smallmatrix}$ | $\begin{psmallmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{psmallmatrix}$ |
| $\begin{psmallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & -1 \end{psmallmatrix}$, $\begin{psmallmatrix} -1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{psmallmatrix}$ | $\begin{smallmatrix} \beta_2 \geq 0 \\ -\beta_1+2\beta_2-\beta_3 \geq 0 \\ -\beta_2 \geq 0 \end{smallmatrix}$ | $\begin{psmallmatrix} 0 & 1 & 0 \\ 0 & -1 & 0 \end{psmallmatrix}$ |
| $\begin{psmallmatrix} 0 & 0 & 1 \\ 0 & -1 & 0 \\ -1 & 0 & 0 \end{psmallmatrix}$, $\begin{psmallmatrix} 0 & 0 & -1 \\ 0 & -1 & 0 \\ 1 & 0 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} -\beta_2 \geq 0 \\ \beta_1+\beta_3 \geq 0 \\ \beta_2 \geq 0 \end{smallmatrix}$ | $\begin{psmallmatrix} 0 & -1 & 0 \\ 0 & 1 & 0 \end{psmallmatrix}$ |

: 2-dimensional cones $\mathbb{A}_3$

In accordance with § [3.2](#dim){reference-type="ref" reference="dim"}, we can see that in each case considered in Table [2](#tab:2A3){reference-type="ref" reference="tab:2A3"}, inequalities $c_{1;\textbf{s}'}^T\cdot B_Q \cdot \beta_{\textbf{s}}\geq 0$ and $c_{3;\textbf{s}'}^T\cdot B_Q \cdot \beta_{\textbf{s}}\geq 0$ belong to the set of implicit equalities of $C_{\textbf{s}'}^TB_Q\cdot\beta_{\textbf{s}} \geq \textbf{0}$ and
$\operatorname{rank}([C_{\textbf{s}'}^TB_Q]^{=})=1$. Next, the seeds giving rise to $1$-dimensional cones are those whose quiver is a cyclic orientation of a triangle. The corresponding seeds and the inequalities they induce are presented in Table [3](#tab:A31){reference-type="ref" reference="tab:A31"}. In this case all the inequalities of the system are implicit equalities. Moreover, we have that $\operatorname{rank}([C_{\textbf{s}'}^TB_Q]^{=})=2$ and the one-dimensional cone defined by these seeds is precisely the linear space spanned by the kernel of $p^{*}$.

| $\mathbf{c}$-matrices | $C_{\textbf{s}'}^TB_Q \cdot\beta_{\textbf{s}} \geq \textbf{0}$ | $[C_{\textbf{s}'}^TB_Q]^{=}$ |
|:---:|:---:|:---:|
| $\begin{psmallmatrix} 0 & -1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{psmallmatrix}$, $\begin{psmallmatrix} 1 & 0 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & 0 \end{psmallmatrix}$ | $\begin{smallmatrix} \beta_2 \geq 0 \\ \beta_1-\beta_2+\beta_3 \geq 0 \\ -\beta_1-\beta_3 \geq 0 \end{smallmatrix}$ | $\begin{psmallmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ -1 & 0 & -1 \end{psmallmatrix}$ |
: 1-dimensional cones $\mathbb{A}_3$

As a result, $\Delta_{\textbf{s}}^+(\mathcal{X} )$ can be visualized as follows. For a clearer perspective, the next image displays the fan induced by $\Delta^+_{\textbf{s}}(\mathcal{X} )$ in $N_{\mathbb{R} }/\ker(p^*)$. Furthermore, the image includes the **c**-vectors and the seeds responsible for generating each cone of co-dimension one and maximal cone, respectively, as indicated within their corresponding rays and cones. By Corollary [Corollary 16](#Corollary-normalvectorhyperplane){reference-type="ref" reference="Corollary-normalvectorhyperplane"}, we can see the normal vector of each of the walls in the Auslander-Reiten quiver given by the **g**-vectors.

**Remark 21**. Although in the $\mathbb{A}_3$ case the maximal cones correspond only to the bipartite seeds, this is not true in general. In fact, for $k\geq 2$, seeds associated with quivers $\mathbb{A}_{2k+1}$ whose orientation is not bipartite can also determine maximal cones of $\Delta^+_{\textbf{s}}(\mathcal{X} )$.
[^1]: Unlike the $\mathcal{A}$ case, the cluster complex associated to a cluster Poisson variety can change if we add frozen directions.

[^2]: Part of the framework in this paper related to representation theory does not require that $k$ is algebraically closed nor of characteristic $0$; however, these hypotheses are needed for the framework related to cluster varieties.
---
abstract: |
  We completely classify the locally finite, infinite graphs with pure mapping class groups admitting a coarsely bounded generating set. We also study algebraic properties of the pure mapping class group: We establish a semidirect product decomposition, compute first integral cohomology, and classify when they satisfy residual finiteness and the Tits alternative. These results provide a framework and some initial steps towards quasi-isometric and algebraic rigidity of these groups.
author:
- George Domat, Hannah Hoganson, and Sanghoon Kwak
bibliography:
- bib.bib
title: Generating Sets and Algebraic Properties of Pure Mapping Class Groups of Infinite Graphs
---

# Introduction {#sec:intro}

A recent surge of interest in mapping class groups of infinite-type surfaces has prompted the emergence of a "big" analog of $\mathop{\mathrm{Out}}(F_{n})$ as well. Algom-Kfir--Bestvina [@AB2021] propose that the appropriate analog is the group of self proper homotopy equivalences up to proper homotopy of a locally finite, infinite graph. One main difficulty of studying these "big" groups is that the classical approach of geometric group theory is not applicable. In particular, the mapping class groups of infinite-type surfaces and those of locally finite, infinite graphs are generally not finitely generated, and not even compactly generated. Fortunately, they are still Polish groups (separable and completely metrizable topological groups), for which Rosendal provides a generalized geometric group theoretic approach. The role of a finite or compact generating set is replaced with a *coarsely bounded* (CB) generating set. For example, a group that admits a coarsely bounded generating set has a well-defined quasi-isometry type [@rosendal2022 Theorem 1.2 and Proposition 2.72], and a coarsely bounded group is quasi-isometric to a point.
A group with a coarsely bounded neighborhood around the identity is said to be *locally coarsely bounded*, which is equivalent to having a well-defined *coarse equivalence* type, and is necessary to have a coarsely bounded generating set. Using this framework, Mann--Rafi [@mann2023large] gave a classification of (tame) surfaces whose mapping class groups are coarsely bounded, locally coarsely bounded, and generated by a coarsely bounded set. This established a first step toward studying the coarse geometry of mapping class groups of infinite-type surfaces. Recently, Thomas Hill [@hill2023largescale] gave a complete classification of surfaces that have *pure* mapping class groups with the aforementioned coarse geometric properties, without the tameness condition. In the authors' previous work [@DHK2023], we gave a complete classification of graphs with coarsely bounded, and locally coarsely bounded, pure mapping class groups, the subgroup of the mapping class group fixing the ends of the graph pointwise. In this paper, we provide the complete classification of infinite graphs with CB-generated pure mapping class groups, fulfilling our goal to provide a foundation for studying the coarse geometry of these groups. In the following statement, $E$ refers to the space of ends of the graph ${\Gamma}$ and $E_{\ell}$ is the subset of ends accumulated by loops.

**Theorem 1**. *Let ${\Gamma}$ be a locally finite, infinite graph. Then its pure mapping class group, $\mathop{\mathrm{PMap}}({\Gamma})$, is *CB generated* if and only if either ${\Gamma}$ is a tree, or it satisfies both:*

1. *${\Gamma}$ has finitely many ends accumulated by loops, and*
2. *there is no accumulation point in $E \setminus E_\ell$.*

**Remark 1**. Alternatively, we have a constructive description: $\mathop{\mathrm{PMap}}({\Gamma})$ is CB generated if and only if ${\Gamma}$ can be written (not necessarily uniquely) as a finite wedge sum of the four graphs in the figure below.
![Every graph with a CB-generated pure mapping class group can be written as a finite wedge sum of these four graphs. From left to right these are: a single loop, a single ray, a Loch Ness monster graph, and a Millipede monster graph.](pics/CBsummands.pdf){#fig:CBsummands width=".6\\textwidth"}

This completes the classification of graphs with CB, locally CB, and CB-generated pure mapping class group. Observe the trend that $\mathop{\mathrm{PMap}}({\Gamma})$ admits more complicated coarse geometric properties when ${\Gamma}$ has more complicated geometry. The main tool used to prove this classification is the following semidirect decomposition of the pure mapping class group (reminiscent of [@APV2020 Corollary 4] in the surface setting):

**Theorem 2**. *Let ${\Gamma}$ be a locally finite graph. Let $\alpha = \max\{0,|E_\ell({\Gamma})| - 1\}$ for $|E_\ell({\Gamma})|<\infty$ and $\alpha = \aleph_0$ otherwise. Then we have the following short exact sequence, $$1 \longrightarrow\overline{\mathop{\mathrm{PMap}}_c({\Gamma})} \longrightarrow\mathop{\mathrm{PMap}}({\Gamma}) \longrightarrow\mathbb{Z}^\alpha \longrightarrow 1$$ which splits. In particular, we have $\mathop{\mathrm{PMap}}({\Gamma})=\overline{\mathop{\mathrm{PMap}}_c({\Gamma})} \rtimes \mathbb{Z}^{\alpha}.$*

Here, $\overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ is the closure of the group of compactly supported mapping classes and $\mathbb{Z}^{\alpha}$ is generated by commuting loop shifts. As a corollary, we compute the rank of the first integral cohomology of $\mathop{\mathrm{PMap}}({\Gamma})$. This allows us to see that the number of ends accumulated by loops of a graph ${\Gamma}$ is an algebraic invariant of $\mathop{\mathrm{PMap}}({\Gamma})$.

**Corollary 3**.
*For every locally finite, infinite graph ${\Gamma}$, $$\mathop{\mathrm{rk}}\left(H^1(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})\right) = \begin{cases} 0 & \text{if $|E_\ell| \le 1$}, \\ n-1 & \text{if $2 \le |E_\ell|=n< \infty$}, \\ \aleph_0 & \text{otherwise}. \end{cases}$$*

We also show that $\mathop{\mathrm{PMap}}({\Gamma})$ distinguishes graphs of finite rank from graphs of infinite rank. Recall a group is *residually finite* if it can be embedded into a direct product of finite groups.

**Theorem 4**. *$\mathop{\mathrm{PMap}}({\Gamma})$ is residually finite if and only if ${\Gamma}$ has finite rank.*

A group satisfies the *Tits Alternative* if every subgroup is either virtually solvable or contains a nonabelian free group. Interestingly, $\mathop{\mathrm{PMap}}({\Gamma})$ satisfies the Tits Alternative for exactly the graphs for which it is residually finite.

**Theorem 5**. *$\mathop{\mathrm{PMap}}({\Gamma})$ satisfies the Tits Alternative if and only if ${\Gamma}$ has finite rank.*

These three results are steps towards determining when the isomorphism type of $\mathop{\mathrm{PMap}}({\Gamma})$ determines the graph ${\Gamma}$, as in the surface case [@BDR2020]. If ${\Gamma}$ is the infinite rank graph with a single end (the Loch Ness monster graph) and ${\Gamma}'$ is the wedge sum of ${\Gamma}$ with a single ray, then the groups $\mathop{\mathrm{PMap}}({\Gamma})$ and $\mathop{\mathrm{PMap}}({\Gamma}')$ inject into $\mathop{\mathrm{Out}}(F_{\infty})$ and $\mathop{\mathrm{Aut}}(F_{\infty})$, respectively, by [@AB2021 Theorem 3.1 and Lemma 3.2]. Thus we immediately get the following corollary. We note that one can instead prove this directly, e.g. see [@OutFinfinity2011].

**Corollary 6**.
*For $F_{\infty}$, the free group on a countably infinite set, $\mathop{\mathrm{Aut}}(F_{\infty})$ and $\mathop{\mathrm{Out}}(F_{\infty})$ are not residually finite and do not satisfy the Tits alternative.*

## Comparison with Surfaces {#comparison-with-surfaces .unnumbered}

The statement of our semidirect product decomposition is exactly the same as for pure mapping class groups of surfaces, seen in Aramayona--Patel--Vlamis [@APV2020]. Although the proof we give is similar in spirit as well, we have to make use of different tools. In [@APV2020] the authors make use of the *homology of separating curves* on a surface and build an isomorphism between the first cohomology of the pure mapping class group and this homology group. For graphs, we do not have any curves to take advantage of. Instead we use partitions of the space of ends accumulated by loops. In order to make this precise and give this an algebraic structure we make use of the group of locally constant integral functions on $E_{\ell}({\Gamma})$, i.e., the zeroth cohomology of $E_{\ell}({\Gamma})$, denoted as $\mathring{C}(E_{\ell}({\Gamma}))$. On a surface, any separating simple closed curve determines a partition of the end space. We can use this to show that the first cohomology groups of pure mapping class groups of graphs and surfaces are often in fact *naturally* isomorphic. This also gives an alternate proof of the main results in [@APV2020].

**Corollary 7**. *Let $S$ be an infinite-type surface of genus at least one and ${\Gamma}$ a locally finite, infinite graph. If $E_{g}(S)$ is homeomorphic to $E_{\ell}({\Gamma})$, then both $H^{1}(\mathop{\mathrm{PMap}}(S);\mathbb{Z})$ and $H^{1}(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$ are isomorphic to $\mathring{C}(E_{g}(S)) \cong \mathring{C}(E_{\ell}({\Gamma}))$.*

## Rigidity Questions {#rigidity-questions .unnumbered}

The results above fit naturally into a general question about (pure) mapping class groups of infinite graphs.
Namely: "How much does the group $\mathop{\mathrm{PMap}}({\Gamma})$ determine the graph ${\Gamma}$?" One can obtain more concrete questions by considering certain types of rigidity. We will focus on *algebraic* and *quasi-isometric* rigidity. In the finite-type setting, mapping class groups of surfaces and $\mathop{\mathrm{Out}}(F_n)$ are known to exhibit strong rigidity properties. Various results starting with Ivanov [@Ivanov1988] (see also [@McCarthy1986; @Harer1986; @BLM1983; @Ivanov1997]) establish strong forms of algebraic rigidity and Behrstock--Kleiner--Minsky--Mosher [@BKMM2012] establish quasi-isometric rigidity for $\mathop{\mathrm{Map}}(S)$. For $\mathop{\mathrm{Out}}(F_n)$ we also have strong forms of algebraic rigidity from the work of Farb--Handel [@FH2007] building on [@Khramtsov1990; @BV2000] (see also [@HW2020]). Quasi-isometric rigidity for $\mathop{\mathrm{Out}}(F_n)$ is still unknown. For infinite-type surfaces, the work of Bavard--Dowdall--Rafi [@BDR2020] established a strong form of algebraic rigidity à la Ivanov (see also [@HMV2018]). The question of quasi-isometric rigidity is still open, but Mann--Rafi [@mann2023large] give a classification of which mapping class groups of tame infinite-type surfaces have a well-defined quasi-isometry type and which of those are trivial. This allows one to begin to distinguish between some of the mapping class groups (see also [@schaffer2020]). One can ask the same rigidity questions for infinite graphs. The picture becomes less clear than in the surface case. In particular, trees fail to have algebraic rigidity for the *pure* mapping class group, as they all have trivial pure mapping class group. Failure is also present for the full mapping class group. Let $T$ be the regular trivalent tree and let $T'$ be the wedge sum of $T$ with a single ray. Note that $E(T) = \mathcal{C}$, a Cantor set, and $E(T') = \mathcal{C}\sqcup \{*\}$, a Cantor set together with a single isolated point.
Now we have that $\mathop{\mathrm{Map}}(T) = \mathop{\mathrm{Homeo}}(\mathcal{C})$ and $\mathop{\mathrm{Map}}(T') = \mathop{\mathrm{Homeo}}(\mathcal{C}\sqcup \{*\})$. However, these two groups are isomorphic, as any homeomorphism fixes the extra end $*$ of $T'$. There are even more complicated examples of this failure of algebraic rigidity for mapping class groups of trees that come from work on Boolean algebras by McKenzie [@McKenzie1977automorphism] answering a rigidity conjecture of Monk [@Monk1975automorphism]. The results in this paper allow one to ask several natural rigidity questions for the pure mapping class groups of infinite graphs. We will restrict to some nice classes of graphs in order to state concrete questions. All of the following families of graphs are CB generated by our classification and hence have a well-defined quasi-isometry type. Let ${\Gamma}_{n}$ denote the graph with exactly $n$ ends, each of which is accumulated by loops.

**Question 2**. Let $n,m \ge 2$. If $\mathop{\mathrm{PMap}}({\Gamma}_{n})$ is quasi-isometric to $\mathop{\mathrm{PMap}}({\Gamma}_{m})$, then does $n=m$?

We do know that $\mathop{\mathrm{PMap}}({\Gamma}_n)$ is algebraically isomorphic to $\mathop{\mathrm{PMap}}({\Gamma}_m)$ if and only if $n=m$. We can also use the fact that $\mathop{\mathrm{PMap}}({\Gamma}_1)$ is CB to see that $\mathop{\mathrm{PMap}}({\Gamma}_1)$ is not quasi-isometric to $\mathop{\mathrm{PMap}}({\Gamma}_n)$ for any $n \neq 1$. However, the general question is still open. In the authors' previous work [@DHK2023], we computed asymptotic dimension for all of these groups. However, it is infinite for $n>1$. Therefore, in order to answer this question one would need to study and/or develop other "big" quasi-isometry invariants. Instead of comparing the effect of changing the number of ends accumulated by loops, one could ask the same question for rays. Namely, let ${\Gamma}_{n,r}$ denote the graph with $n$ ends accumulated by loops and $r$ rays.
We start by asking for distinguishing features of "no ray" versus "one ray."

**Question 3**. Is $\mathop{\mathrm{PMap}}({\Gamma}_{n,0})$ quasi-isometric to $\mathop{\mathrm{PMap}}({\Gamma}_{n,1})$?

In fact, here we do not even know algebraic rigidity.

**Question 4**. Is $\mathop{\mathrm{PMap}}({\Gamma}_{n,0})$ isomorphic to $\mathop{\mathrm{PMap}}({\Gamma}_{n,1})$?

The other large family of graphs with CB-generated pure mapping class groups are the finite-type ones. Let $\Omega_{n,r}$ denote the graph of finite rank $n$ with $r<\infty$ rays attached. We know that no $\mathop{\mathrm{PMap}}(\Omega_{n,r})$ is isomorphic to any $\mathop{\mathrm{PMap}}({\Gamma}_{m})$ by using either residual finiteness or the Tits alternative. We do not know if any of them are quasi-isometric. Note that $\mathop{\mathrm{PMap}}(\Omega_{n,r})$ is always finitely generated, but this does not preclude it from being quasi-isometric to an uncountable group.

**Question 5**. Is $\mathop{\mathrm{PMap}}(\Omega_{m,r})$ ever quasi-isometric to $\mathop{\mathrm{PMap}}({\Gamma}_{n})$, for $m,r,n>1$?

## Outline {#outline .unnumbered}

We first give background on mapping class groups of infinite graphs, examples of elements in the pure mapping class group, and the coarse geometry of groups. We then prove our semidirect product decomposition. By exploiting the semidirect decomposition of $\mathop{\mathrm{PMap}}({\Gamma})$, we prove the CB-generation classification. We finish by proving the residual finiteness and Tits alternative characterizations.

## Acknowledgments {#acknowledgments .unnumbered}

Thank you to Mladen Bestvina for providing an idea for one of the proofs and the suggestion to use the zeroth cohomology. We also thank Priyam Patel for helpful conversations, along with answering questions regarding [@PatelVlamis] and [@APV2020]. Also we thank Camille Horbez for clarifying one of the proofs.
The first author was supported in part by the Fields Institute for Research in Mathematical Sciences, NSF DMS--1745670, and NSF DMS--2303262. The second author was supported by NSF DMS--2303365. The third author acknowledges the support from the University of Utah Graduate Research Fellowship.

# Preliminaries {#sec:prelims}

## Mapping class groups of infinite graphs

Let ${\Gamma}$ be a locally finite, infinite graph. Informally, an *end* of a graph is a way to travel to infinity in the graph. The space of ends (or, the end space), denoted by $E({\Gamma})$, is defined as: $$E({\Gamma}) = \varprojlim_{K \subset {\Gamma}}\pi_0({\Gamma}\setminus K),$$ where $K$ runs over compact sets of ${\Gamma}$ in the inverse limit. Then each element of $E({\Gamma})$ is called an **end** of ${\Gamma}$. An end $e$ of ${\Gamma}$ is said to be **accumulated by loops** if the sequence of complementary components in ${\Gamma}$ corresponding to $e$ only consists of infinite rank graphs. Colloquially, an end is accumulated by loops if one continues to see loops along the way to $e$. We denote by $E_\ell({\Gamma})$ the set of ends of ${\Gamma}$ accumulated by loops. Note $E_\ell({\Gamma})$ is a closed subset of $E({\Gamma})$, and $E({\Gamma})$ can be realized as a closed subset of a Cantor set (hence so is $E_\ell({\Gamma})$). We say that the **characteristic triple** of ${\Gamma}$ is the triple $(r({\Gamma}),E({\Gamma}),E_{\ell}({\Gamma}))$, where $r({\Gamma}) \in \mathbb{Z}_{\geq 0 } \cup \{\infty\}$ is the rank of $\pi_{1}({\Gamma})$. Now we define the mapping class group of a locally finite, infinite graph ${\Gamma}$. Recall that a map is **proper** if the pre-image of every compact set is compact.

**Definition 6**. [@AB2021] The **mapping class group** of ${\Gamma}$, denoted $\mathop{\mathrm{Map}}({\Gamma})$, is the group of proper homotopy classes of proper homotopy equivalences of ${\Gamma}$.
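The inverse-limit definition of the end space above lends itself to a finite sanity check: on larger and larger truncations of a graph, one can count the components of the complement of a ball. This toy computation (not from the paper; plain Python, with a fixed truncation radius standing in for the inverse limit) recovers that the line has two ends while a ray has one:

```python
def count_components(vertices, edges):
    """Count connected components of the subgraph induced on `vertices` (union-find)."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        if a in parent and b in parent:  # keep only edges inside the induced subgraph
            parent[find(a)] = find(b)
    return len({find(v) for v in vertices})


def complementary_components(vertices, edges, ball):
    """Number of components of the complement of `ball` in the truncated graph."""
    outside = [v for v in vertices if v not in ball]
    return count_components(outside, edges)


R = 50  # truncation radius
line = list(range(-R, R + 1))                    # truncation of the line: two ends
line_edges = [(i, i + 1) for i in range(-R, R)]
ray = list(range(0, R + 1))                      # truncation of a ray: one end
ray_edges = [(i, i + 1) for i in range(0, R)]

for k in (1, 5, 10):  # growing compact sets K
    assert complementary_components(line, line_edges, set(range(-k, k + 1))) == 2
    assert complementary_components(ray, ray_edges, set(range(0, k + 1))) == 1
```

The counts stabilize as the compact set $K$ grows, which is the finite shadow of the inverse limit defining $E({\Gamma})$.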
The **pure mapping class group**, denoted $\mathop{\mathrm{PMap}}({\Gamma})$, is the closed subgroup consisting of maps that fix the ends of ${\Gamma}$ pointwise. More precisely, it is the kernel of the action of $\mathop{\mathrm{Map}}({\Gamma})$ on the end space $(E({\Gamma}),E_\ell({\Gamma}))$ by homeomorphisms, hence fitting into the following short exact sequence: $$1 \longrightarrow\mathop{\mathrm{PMap}}({\Gamma}) \longrightarrow\mathop{\mathrm{Map}}({\Gamma}) \longrightarrow\mathop{\mathrm{Homeo}}(E,E_\ell) \longrightarrow 1.$$ When $E({\Gamma}) \setminus E_\ell({\Gamma})$ is nonempty and compact, we can further decompose $\mathop{\mathrm{PMap}}({\Gamma})$ into subgroups of *core maps* and of *ray maps*. To state the result, we need to introduce a few concepts.

**Definition 7**. Let ${\Gamma}$ be a locally finite, infinite graph. Denote by ${\Gamma}_c$ the **core graph** of ${\Gamma}$, the smallest connected subgraph of ${\Gamma}$ that contains all immersed loops in ${\Gamma}$. When $E({\Gamma}) \setminus E_\ell({\Gamma})$ is nonempty, pick $e_0 \in E({\Gamma}) \setminus E_\ell({\Gamma})$ and denote by ${\Gamma}_c^*$ the subgraph consisting of ${\Gamma}_c$ and a choice of embedded ray in ${\Gamma}$ limiting to $e_0$ such that the ray intersects ${\Gamma}_c$ in exactly one point. Define $\pi_1({\Gamma}_c^*,e_0)$ to be the set of proper homotopy equivalence classes of lines in ${\Gamma}_c^*$, both ends of which limit to $e_0$. We endow it with a group structure by concatenation. This group is naturally isomorphic to $\pi_{1}({\Gamma}_{c}^{*},p)$ for any choice of basepoint $p \in {\Gamma}_{c}^{*}$. Finally, define $\mathcal{R}$ as the group of maps $h: E({\Gamma}) \to \pi_1({\Gamma}_c^*,e_0)$ such that

1. $h(e_0) =1$, and
2. $h$ is locally constant,

where the group operation is the pointwise multiplication in $\pi_1({\Gamma}_c^*,e_0)$. We have the following decomposition of $\mathop{\mathrm{PMap}}({\Gamma})$:

**Proposition 8** ([@AB2021 Corollary 3.9]).
*Let ${\Gamma}$ be a locally finite, infinite graph with $E({\Gamma}) \setminus E_\ell({\Gamma})$ nonempty and compact. Then $$\mathop{\mathrm{PMap}}({\Gamma}) \cong \mathcal{R}\rtimes \mathop{\mathrm{PMap}}({\Gamma}_c^*).$$ In particular, when ${\Gamma}$ has finite rank $n \ge 0$ and finitely many ends, say $|E({\Gamma})|=e$, then $$\mathop{\mathrm{PMap}}({\Gamma}) \cong \begin{cases} \mathop{\mathrm{Out}}(F_n), & \text{if $e=0$,} \\ F_n^{e-1} \rtimes \mathop{\mathrm{Aut}}(F_n), & \text{if $e \ge 1$.} \end{cases}$$* **Remark 9**. Any time $K$ is a connected, compact subgraph of a locally finite, infinite graph ${\Gamma}$, we use $\mathop{\mathrm{PMap}}(K)$ to refer to the group of proper homotopy equivalences of $K$ that fix $\partial K$ pointwise up to proper homotopy fixing $\partial K$. This group is naturally isomorphic to the group $\mathop{\mathrm{PMap}}(\tilde{K})$ where $\tilde{K}$ is the graph $K$ together with a ray glued to each point in $\partial K$. Applying the above proposition we see that $\mathop{\mathrm{PMap}}(K)$ is always of the form $F_{n}^{e-1} \rtimes \mathop{\mathrm{Aut}}(F_{n})$ for some $n$ and $e$ because $K$ is always a proper subset of ${\Gamma}$, so $\partial K$ is nonempty. The pure mapping class group $\mathop{\mathrm{PMap}}({\Gamma})$ records the internal symmetries of ${\Gamma}$. Contractible graphs (trees) have no internal symmetries. This follows from the work of Ayala--Dominguez--Márquez--Quintero [@ayala1990proper]. They give a proper homotopy equivalence classification of locally finite, infinite graphs. **Theorem 10** ([@ayala1990proper Theorem 2.7]). *Let ${\Gamma}$ and ${\Gamma}'$ be two locally finite graphs of the same rank. A homeomorphism of end spaces $(E({\Gamma}),E_\ell({\Gamma})) \to (E({\Gamma}'),E_\ell({\Gamma}'))$ extends to a proper homotopy equivalence ${\Gamma}\to {\Gamma}'$. 
If ${\Gamma}$ and ${\Gamma}'$ are trees, then this extension is unique up to proper homotopy.* The second statement of Theorem 10 implies the following. **Proposition 11**. *Let ${\Gamma}$ be a locally finite, infinite graph with $\pi_1({\Gamma})=1$. Then $\mathop{\mathrm{PMap}}({\Gamma})=1$.* In [@AB2021] the authors endow $\mathop{\mathrm{Map}}({\Gamma})$ with the compact-open topology and show that this gives $\mathop{\mathrm{Map}}({\Gamma})$, and hence $\mathop{\mathrm{PMap}}({\Gamma})$, the structure of a Polish group. A neighborhood basis about the identity for the topology is given by sets of the form $$\begin{aligned} \mathcal{V}_{K} &:=\{[f] \in \mathop{\mathrm{Map}}({\Gamma})\vert~ \exists f' \in [f] \text{ s.t. } f'\vert_{K} = \mathop{\mathrm{id}}\text{ and } \\ &f' \text{ preserves the complementary components of $K$ setwise}\}\end{aligned}$$ where $K$ is a compact subset of ${\Gamma}$. Recall that the **support** of a continuous map $\phi:X \to X$ is the closure of the set of points $x \in X$ such that $\phi(x) \neq x$. The group of compactly supported mapping classes, denoted by $\mathop{\mathrm{PMap_{\it c}}}({\Gamma})$, is the subgroup of $\mathop{\mathrm{PMap}}({\Gamma})$ consisting of classes that have a compactly supported representative. Its closure in this topology is denoted by $\overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ and it is a closed (hence Polish) subgroup of $\mathop{\mathrm{PMap}}({\Gamma})$. As proper homotopy equivalences are not necessarily injective, unlike homeomorphisms, we need the following alternate notion of support for a proper homotopy equivalence. **Definition 12**. We say that $[f] \in \mathop{\mathrm{Map}}({\Gamma})$ is **totally supported** on $K \subset {\Gamma}$ if there is a representative $f' \in [f]$ so that $f'(K) = K$ and $f'\vert_{{\Gamma}\setminus K} = \mathop{\mathrm{id}}$.
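For later use, we record how total support interacts with the topology above (a standard observation, phrased in the notation of the neighborhood basis $\mathcal{V}_K$): if each $[f_k]$ is totally supported on a set $K_k$, and the sets $K_k$ eventually leave every compact set, then the sequence converges to the identity.

```latex
% Suppose [f_k] is totally supported on K_k, and for every compact K \subset \Gamma
% we have K_k \cap K = \emptyset for all large k. Then f_k restricts to the
% identity on K and preserves the complementary components of K setwise, so
% [f_k] \in \mathcal{V}_K for all large k. Since the sets \mathcal{V}_K form a
% neighborhood basis at the identity, this means
[f_k] \longrightarrow [\mathop{\mathrm{id}}] \quad \text{in } \mathop{\mathrm{Map}}({\Gamma}).
```

This is the mechanism by which partial products of compactly supported maps, with supports marching off to infinity, converge; it is how elements of $\overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ typically arise.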
To see how a proper homotopy equivalence can have different support and total support, consider a rose graph with two loops labeled by $a_1$ and $a_2$. Then a (proper) homotopy equivalence mapping $a_1$ to $a_1a_2$ which is the identity elsewhere is supported on $a_1$, but not totally supported on $a_1$. It is totally supported on $a_1 \cup a_2$. This is in contrast with homeomorphisms on surfaces, where $f$ is supported on $K$ if and only if $f$ is totally supported on $K$. As mapping class groups of graphs are independent of the proper homotopy equivalence representative of the graph, it is often useful to consider a 'standard' representative within a proper homotopy equivalence class of graphs. **Definition 13**. A locally finite graph, ${\Gamma}$, is in **standard form** if ${\Gamma}$ is a tree with loops attached at some of the vertices. We endow ${\Gamma}$ with the path metric that assigns each edge length $1$. We also give special names to specific graphs that we will reference often. **Definition 14**. The **Loch Ness Monster** graph is the graph with characteristic triple $(\infty,\ \{*\},\ \{*\})$. The **Millipede Monster** graph is the graph with characteristic triple $(\infty,\ \{0\} \cup \{\frac{1}{n}\ |\ n \in \mathbb{Z}_{>0}\},\ \{0\})$. A **monster** graph refers to either one of these. ## Elements of $\mathop{\mathrm{PMap}}({\Gamma})$ {#ssec:elements} Here we give a brief treatment of elements of $\mathop{\mathrm{PMap}}({\Gamma})$. For more detailed definitions with examples, see [@DHK2023 Section 3]. ### Loop swaps A loop swap is an order-2 proper homotopy equivalence induced from a transposition automorphism of a free group. It is totally supported on a compact set. More precisely, we define it as follows. **Definition 15**. Let ${\Gamma}$ be a locally finite graph in standard form with $\mathop{\mathrm{rk}}{\Gamma}\ge 2$. Let $A$ and $B$ be disjoint finite subsets of loops such that $|A|=|B|$.
Then the **loop swap** $\mathcal{L}(A,B)$ is a proper homotopy equivalence induced from the group isomorphism on $\pi_1({\Gamma})$ swapping the free factors corresponding to $A$ and $B$. More concretely, pick a basepoint $p \in {\Gamma}$ and collapse each maximal tree of the subgraphs corresponding to $A$ and $B$ in $\pi_1({\Gamma},p)$. This results in two roses of $|A|=|B|$ petals. Then swap the two roses, followed by blowing up each rose to the original subgraph. Define $\mathcal{L}(A,B)$ as the composition of these three maps. Note $\mathcal{L}(A,B) \in \mathop{\mathrm{PMap_{\it c}}}({\Gamma}).$ As mentioned above, loop swaps of a graph correspond to the transposition free group automorphisms, which form part of a generating set for $\mathop{\mathrm{Aut}}(F_n)$. ### Word maps Next, we consider word maps, the most diverse kind of mapping classes among the three kinds of maps introduced in this section. **Definition 16**. Let ${\Gamma}$ be a locally finite graph with $\mathop{\mathrm{rk}}{\Gamma}\ge 1$, with a base point $p \in {\Gamma}$. Let $w \in \pi_1({\Gamma},p)$ and $I$ be an interval in an edge of ${\Gamma}$. Then the **word map** $\varphi_{(w,I)}$ is a proper homotopy equivalence that maps $I$ to a path in ${\Gamma}$ determined by $w \in \pi_1({\Gamma},p)$ and is the identity outside of $I$. See [@DHK2023 Section 3.3] for a careful construction of these maps. Note $\varphi_{(w,I)}$ is supported on $I$, but in general not *totally* supported on $I$. Rather, it is totally supported on the compact set that is the union of $I$ with a path in ${\Gamma}$ determined by $w \in \pi_1({\Gamma},p)$. The following two properties of word maps will be important in . **Lemma 17** ([@DHK2023 Lemma 3.5]).
*If $I$ is contained in an edge of ${\Gamma}\setminus {\Gamma}_c$ and $w_1,w_2$ are elements in $\pi_1({\Gamma},p)$, then $$[\varphi_{(w_1,I)} \circ \varphi_{(w_2,I)}] = [\varphi_{(w_1w_2,I)}]$$ in $\mathop{\mathrm{PMap}}({\Gamma})$.* **Lemma 18** ([@DHK2023 Lemma 3.10]). *Let $I$ be an interval of ${\Gamma}$ which is outside of ${\Gamma}_c$, and $\psi \in \mathop{\mathrm{PMap}}({\Gamma})$ be totally supported on a compact subgraph of ${\Gamma}_{c}$. Then $$\psi \circ [\varphi_{(w,I)}] \circ \psi^{-1} = [\varphi_{(\psi_*(w),I)}].$$* In particular, we can apply this when $\psi$ is a loop swap. ### Loop shifts Loop shifts are to graphs as handle shifts, introduced in Patel--Vlamis [@PatelVlamis], are to surfaces. We first define a loop shift on the standard form of the graph $\Lambda$, the graph with characteristic triple $(\infty, \{e_{-},e_{+}\},\{e_{-},e_{+}\})$. Embed $\Lambda$ in $\mathbb{R}^2$ by identifying the maximal tree with the $x$-axis such that $e_{\pm}$ is identified with $\pm \infty$ of the $x$-axis, and each vertex is identified with an integer point in the $x$-axis. Identify the loops with the circles $\{(x-n)^2 + (y-\frac{1}{4})^2 = \frac{1}{16}\}_{n \in \mathbb{Z}}$. Note these circles are tangent to the integer points $\{(n,0)\}_{n \in \mathbb{Z}}$, thus representing the loops in $\Lambda$. Now define the **primitive loop shift** $h$ on $\Lambda$ as the horizontal translation $x \mapsto x+1$. One can also omit some loops from $\Lambda$ and define the loop shift to avoid those loops. For a more general definition, see [@DHK2023 Section 3.4]. **Definition 19**. Now we define the loop shift on a locally finite, infinite graph ${\Gamma}$ with $|E_\ell| \ge 2$. Pick two distinct ends $e_{-}, e_+ \in E_\ell({\Gamma})$ accumulated by loops. By considering a standard form of ${\Gamma}$, we can find an embedded ladder graph $\Lambda$ in ${\Gamma}$ such that $e_{\pm}$ is identified with $e_{\pm}$ of $\Lambda$, respectively.
Now define the **primitive loop shift** $h$ on ${\Gamma}$ associated to $(e_-, e_+)$ as the proper homotopy equivalence induced from the primitive loop shift on the embedded ladder graph $\Lambda$. For the rest of the graph, define $h$ to be the identity outside of the $\frac{1}{2}$-neighborhood of $\Lambda$ and interpolate between the shift and the identity on the $\frac{1}{2}$-neighborhood. Finally, a proper homotopy equivalence $f$ on ${\Gamma}$ is a **loop shift** if $f = h^n$ for some primitive loop shift $h$ and $n \in \mathbb{Z}\setminus \{0\}$. ## Coarse geometry of groups **Definition 20**. Let $A$ be a subset of a topological group $G$. Then $A$ is **coarsely bounded (CB)** in $G$ if for every continuous isometric action of $G$ on a metric space, every orbit is bounded. We say a group is **CB-generated** if it has an algebraic generating set that is CB. Similarly, a group is **locally CB** if it admits a CB neighborhood of the identity. Later, we will construct a CB-generating set for the pure mapping class groups of certain graphs, proving the `if' direction of our main classification theorem. On the other hand, we have previously classified which graphs have CB or locally CB mapping class groups: **Theorem 21** ([@DHK2023 Theorem A, D]). *Let ${\Gamma}$ be a locally finite, infinite graph. Then its pure mapping class group $\mathop{\mathrm{PMap}}({\Gamma})$ is *coarsely bounded* if and only if one of the following holds:* - *${\Gamma}$ has rank zero, or* - *${\Gamma}$ has rank one, and has one end, or* - *${\Gamma}$ is a monster graph with finitely many rays attached.* *Moreover, $\mathop{\mathrm{PMap}}({\Gamma})$ is *locally coarsely bounded* if and only if one of the following holds:* - *${\Gamma}$ has finite rank, or* - *${\Gamma}$ satisfies both:* 1. *$|E_\ell({\Gamma})| <\infty$, and* 2. *only finitely many components of ${\Gamma}\setminus {\Gamma}_c$ have infinite end spaces.* **Remark 22**.
Mirroring the constructive description of the CB-generated $\mathop{\mathrm{PMap}}({\Gamma})$ classification, we can alternatively characterize the locally CB condition as: $\mathop{\mathrm{PMap}}({\Gamma})$ is locally CB if and only if ${\Gamma}$ can be written as a finite wedge sum of single loops, monster graphs, and trees. After confirming that a group is CB-generated, the Rosendal framework enables the exploration of the group through the lens of coarse geometry. **Theorem 23**. *[@rosendal2022 Theorem 1.2, Proposition 2.72] Let $G$ be a CB-generated Polish group. Then $G$ has a well-defined quasi-isometry type. Namely, any two CB-generating sets for $G$ give rise to quasi-isometric word metrics on $G$.* # Semidirect product structure and cohomology {#sec:semidirect} In this section, we prove: **Theorem 24** (revisited). *Let ${\Gamma}$ be a locally finite graph. Let $\alpha = \max\{0,|E_\ell({\Gamma})| - 1\}$ for $|E_\ell({\Gamma})|<\infty$ and $\alpha = \aleph_0$ otherwise. Then we have the following short exact sequence, $$1 \longrightarrow\overline{\mathop{\mathrm{PMap}}_c({\Gamma})} \longrightarrow\mathop{\mathrm{PMap}}({\Gamma}) \longrightarrow\mathbb{Z}^\alpha \longrightarrow 1$$ which splits. In particular, we have $\mathop{\mathrm{PMap}}({\Gamma})=\overline{\mathop{\mathrm{PMap}}_c({\Gamma})} \rtimes \mathbb{Z}^{\alpha}.$* The map to $\mathbb{Z}^{\alpha}$ is defined using *flux maps*, which were first defined for locally finite, infinite graphs in [@DHK2023]. We first quickly treat the case when the graph has at most one end accumulated by loops. Then we recap the necessary definitions for flux maps and further expand on their properties. Next, we characterize $\overline{\mathop{\mathrm{PMap}}_c({\Gamma})}$ as the common kernel of all flux maps (Theorem 34), which provides the left side of the desired splitting short exact sequence. We then construct the other side of the short exact sequence by finding a section, proving Theorem 24.
This requires us to study the space of flux maps, which is done in the same subsection. As an application, we compute the first integral cohomology of $\mathop{\mathrm{PMap}}({\Gamma})$. Finally, we show that the same approach could have been applied to infinite-type surfaces to recover the surface version of this result due to Aramayona--Patel--Vlamis [@APV2020], by showing that there is a natural isomorphism between the first cohomology of the pure mapping class groups of infinite-type surfaces and infinite graphs. ## The case $|E_\ell| \le 1$ {#ssec:atmostOneEnd} **Proposition 25**. *Let ${\Gamma}$ be a locally finite, infinite graph with $|E_\ell| \le 1$. Then $\mathop{\mathrm{PMap}}({\Gamma}) = \overline{\mathop{\mathrm{PMap}}_c({\Gamma})}$. Furthermore, if $\lvert E_{\ell} \rvert =0$, then $\mathop{\mathrm{PMap}}({\Gamma})=\mathop{\mathrm{PMap}}_c({\Gamma})$.* *Proof.* The case when $|E_\ell({\Gamma})|=1$ is the result of [@DHK2023 Corollary 4.5]. Now we assume $|E_\ell({\Gamma})|=0$, i.e., ${\Gamma}$ has finite rank. Let $f \in \mathop{\mathrm{PMap}}({\Gamma})$. Because $f$ is proper, $f^{-1}({\Gamma}_{c})$ is compact. Thus, there is some connected compact set $K$ such that ${\Gamma}_{c} \cup f^{-1}({\Gamma}_{c}) \subset K$. Now $f\vert_{{\Gamma}\setminus K}$ is a proper homotopy equivalence between two contractible sets and thus $f$ can be homotoped to be totally supported on $K$. Hence, we conclude $f \in \mathop{\mathrm{PMap}}_{c}({\Gamma})$. ◻ ## Flux maps {#ssec:fluxmaps} We now take up the case when $|E_\ell| \ge 2$, where the flux maps come onto the scene. Here we recap the definitions and properties of flux maps developed in [@DHK2023 Section 7]. Let ${\Gamma}$ be a locally finite, infinite graph with $|E_\ell| \ge 2$. For each nonempty, proper, clopen subset $\mathcal{E}$ of $E_\ell$, we will construct a flux map $\Phi_\mathcal{E}$, which will evaluate to 1 for every primitive loop shift that goes from an end in $E_\ell \setminus \mathcal{E}$ to an end in $\mathcal{E}$.
We fix such a subset $\mathcal{E}$ for this discussion. After potentially applying a proper homotopy equivalence, we can put ${\Gamma}$ into a standard form so that there is a maximal tree $T$ and a choice of $x_0$ in $T$ such that ${\Gamma}\setminus \{x_0\}$ defines a partition of the ends that is compatible with the partition $\mathcal{E}\sqcup (E_\ell \setminus \mathcal{E})$ of $E_\ell$. That is, the components of ${\Gamma}\setminus \{x_{0}\}$ determine a partition $E = \bigsqcup_{i=1}^{m} \mathcal{F}_{i}$ so that we can write $\mathcal{E}= \bigsqcup_{i=1}^{k} (\mathcal{F}_{i} \cap E_{\ell})$ and $E_{\ell} \setminus \mathcal{E}= \bigsqcup_{i=k+1}^{m} (\mathcal{F}_i \cap E_{\ell})$. Now we group the components of ${\Gamma}\setminus \{x_{0}\}$ by the set $\mathcal{E}$. Let ${\Gamma}_{+}$ and ${\Gamma}_{-}$ be the unions of the closures of the components of ${\Gamma}\setminus \{x_{0}\}$ so that $E_{\ell}({\Gamma}_{+}) = \mathcal{E}$ and $E_{\ell}({\Gamma}_{-}) = E_{\ell} \setminus \mathcal{E}$. More precisely, ${\Gamma}_{+}$ is exactly the union of the complementary components of $x_0$ with end spaces corresponding to $\mathcal{F}_1,\ldots,\mathcal{F}_k$, together with $x_{0}$ added back in. Similarly, ${\Gamma}_{-}$ is the union of the components corresponding to $\mathcal{F}_{k+1},\ldots, \mathcal{F}_{m}$, together with $x_0$. Finally, let $T_{-}$ be the maximal tree of $\Gamma_{-}$ contained in $T$. Define for each $n \in \mathbb{Z}$: $$\begin{aligned} \Gamma_{n} &:=\begin{cases} \overline{\Gamma_{-} \cup B_{n}(x_{0})} &\text{ if }n \geq 0, \\ \left(\Gamma_{-} \setminus B_{|n|}(x_{0})\right) \cup T_{-} &\text{ if }n < 0, \end{cases}\end{aligned}$$ where $B_{r}(x_{0})$ denotes the open metric ball of radius $r$ about $x_{0}$. See [@DHK2023 Section 7.2] for more details and pictures of the ${\Gamma}_n$'s. Recall that a subgroup $A$ of a group $G$ is a **free factor** if there exists another subgroup $P$ such that $G = A * P$.
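As a quick illustration of the last notion (a standard free-group example, not taken from the sources above), consider $F_3 = \langle a, b, c \rangle$:

```latex
% <a> is a free factor of F_3: take the complementary subgroup P = <b,c>, so that
F_3 \;=\; \langle a \rangle \ast \langle b, c \rangle.
% By contrast, <a^2> is NOT a free factor of F_3,
% since a^2 does not extend to any free basis of F_3.
```

In our setting, free factors arise from subgraphs: if ${\Gamma}'$ is a connected subgraph of ${\Gamma}$ containing the basepoint, then $\pi_1({\Gamma}')$ is a free factor of $\pi_1({\Gamma})$, as one sees by extending a maximal tree of ${\Gamma}'$ to a maximal tree of ${\Gamma}$.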
Given a free factor $A$ of $B$, we define the **corank** of $A$ in $B$, denoted by $\mathop{\mathrm{cork}}(B,A)$, as the rank of $B/\langle\!\langle A\rangle\!\rangle$, the quotient of $B$ by the normal closure of $A$. For the ${\Gamma}_{n}$ defined above we write $A_{n} = \pi_{1}({\Gamma}_{n},x_{0})$, the free factor determined by the subgraph ${\Gamma}_{n}$. Denote by $\mathop{\mathrm{PPHE}}({\Gamma})$ the group of proper homotopy equivalences on ${\Gamma}$ that fix the ends of ${\Gamma}$ pointwise and fix the basepoint $x_{0}$, i.e., the group of *pure* proper homotopy equivalences. Any pure mapping class can be properly homotoped to fix a point, hence every pure mapping class has a representative in $\mathop{\mathrm{PPHE}}({\Gamma})$. Note a proper homotopy equivalence on ${\Gamma}$ induces an isomorphism on the level of fundamental groups. Hence, with our choice of basepoint $x_0 \in {\Gamma}$, for each element $f \in \mathop{\mathrm{PPHE}}({\Gamma})$, we denote by $f_*$ the induced map on $\pi_1({\Gamma},x_0)$. **Definition 26** ([@DHK2023 Definition 7.9]). Given $f \in \mathop{\mathrm{PPHE}}(\Gamma)$, we say that a pair of integers, $(m,n)$, with $m>n$, is **admissible** for $f$ if 1. $A_{n}$ and $f_{*}(A_{n})$ are free factors of $A_{m}$, and 2. both $\mathop{\mathrm{cork}}(A_{m},A_{n})$ and $\mathop{\mathrm{cork}}(A_{m},f_{*}(A_{n}))$ are finite. In [@DHK2023 Corollary 7.8], we showed that for every $f \in \mathop{\mathrm{PPHE}}({\Gamma})$ and $n \in \mathbb{Z}$, there exists $m \in \mathbb{Z}$ such that $m>n$ and $(m,n)$ is admissible for $f$. Hence, we can define: **Definition 27**. For a map $f \in \mathop{\mathrm{PPHE}}(\Gamma)$ and an admissible pair $(m,n)$ for $f$, we let $$\begin{aligned} \phi_{m,n}(f) := \mathop{\mathrm{cork}}(A_{m},A_{n}) - \mathop{\mathrm{cork}}(A_{m},f_{*}(A_{n})). \end{aligned}$$ Call such a $\phi_{m,n}$ a **PPHE-flux map**. **Lemma 28** ([@DHK2023 Lemma 7.10]).
*The PPHE-flux of a map $f \in \mathop{\mathrm{PPHE}}({\Gamma})$ is well-defined over the choice of admissible pair $(m,n)$. That is, if $(m,n)$ and $(m',n')$ are two admissible pairs for the map $f \in \mathop{\mathrm{PPHE}}(\Gamma)$ then $\phi_{m,n}(f) = \phi_{m',n'}(f)$.* Furthermore: **Proposition 29** ([@DHK2023 Proposition 7.11 and Lemma 7.12]). *The PPHE-flux maps are homomorphisms. Moreover, for any nonempty proper clopen subset $\mathcal{E}$ of $E_\ell$, if $f,g \in \mathop{\mathrm{PPHE}}({\Gamma})$ are properly homotopic, then $\phi_\mathcal{E}(f) = \phi_\mathcal{E}(g)$.* Hence, the PPHE-flux map factors through $\mathop{\mathrm{PMap}}({\Gamma})$, so we can define the flux map on $\mathop{\mathrm{PMap}}({\Gamma})$ as follows. **Definition 30**. For each nonempty proper clopen subset $\mathcal{E}$ of $E_\ell$, we define the **flux map** as: $$\begin{aligned} \Phi_\mathcal{E}:\mathop{\mathrm{PMap}}(\Gamma) \rightarrow \mathbb{Z}\\ [f] \mapsto \phi_\mathcal{E}(f),\end{aligned}$$ which is a well-defined homomorphism by Proposition 29. This independence of the choice of admissible pairs further implies the independence of the choice of the basepoint $x_0$. **Lemma 31** (Independence of the choice of $x_0$). *For a nonempty proper clopen subset $\mathcal{E}$ of $E_\ell$, let $x_0$ and $x_0'$ be two different points that realize the partition $E_\ell = \mathcal{E}\sqcup (E_\ell \setminus \mathcal{E})$. Say $\phi_\mathcal{E}$ and $\phi_{\mathcal{E}}'$ are the flux maps constructed from $x_0$ and $x_0'$ respectively, with the same orientation, i.e., $E_\ell({\Gamma}_+) = E_\ell({\Gamma}_+') = \mathcal{E}$. Then $\phi_{\mathcal{E}} = \phi_{\mathcal{E}}'$.* *Proof.* Note $x_0$ and $x_0'$ together cut ${\Gamma}$ into three parts (not necessarily connected), where two of them are of infinite rank and realize $\mathcal{E}$ and $E_\ell \setminus \mathcal{E}$ respectively, and the middle part is of finite rank (but not necessarily compact), and we call it $M$.
Let $\{{\Gamma}_n\}$ and $\{{\Gamma}_n'\}$ be the chains of graphs used to define $\phi_\mathcal{E}$ and $\phi_{\mathcal{E}}'$ respectively. Then since $\phi_\mathcal{E}$ and $\phi_{\mathcal{E}}'$ are in the same direction, there exists $k \in \mathbb{Z}$ such that $A_{n+k} = A_{n}'$ for all $n \in \mathbb{Z}$. To be precise, this holds for $k$ such that ${\Gamma}_{k}$ and ${\Gamma}_{0}'$ have the same core graph. Now, given $f \in \mathop{\mathrm{PMap}}({\Gamma})$ and an admissible pair $(m,n)$ for $f$ at $x_{0}$, the pair $(m-k,n-k)$ is admissible for $f$ at $x_{0}'$. Then $$\begin{aligned} (\phi_{\mathcal{E}})_{m,n}(f) &= \mathop{\mathrm{cork}}(A_m,A_n) - \mathop{\mathrm{cork}}(A_m,f_*(A_n))\\ &= \mathop{\mathrm{cork}}(A'_{m-k},A'_{n-k}) - \mathop{\mathrm{cork}}(A'_{m-k}, f_*(A'_{n-k})) =(\phi_{\mathcal{E}}')_{m-k,n-k}(f). \end{aligned}$$ All in all, the independence of the choice of admissible pairs (Lemma 28) proves that $\phi_{{\mathcal{E}}}(f) = \phi_{{\mathcal{E}}}'(f)$. Since $f$ was chosen arbitrarily, this concludes the proof. ◻ Therefore, for each nonempty proper clopen subset $\mathcal{E}$ of $E_\ell$, we can write the resulting flux map as $\phi_{\mathcal{E}}$ without specifying $x_0$. We end this subsection by exploring basic properties of flux maps, to be used in subsequent subsections. Note that flux maps inherit the group operation from $\mathop{\mathrm{Hom}}(\mathop{\mathrm{PMap}}({\Gamma}),\mathbb{Z})$, namely pointwise addition. **Proposition 32**. *Let $\mathcal{E}\subset E_\ell$ be a nonempty proper clopen subset of $E_\ell$, where $|E_\ell| \ge 2$. Let $A,B$ and $B'$ be nonempty proper clopen subsets of $E_\ell$, such that $A$ and $B$ are disjoint, and $B$ is a proper subset of $B'$.
Then the following hold:* *[\[prop:fluxcomplement\]]{#prop:fluxcomplement label="prop:fluxcomplement"} $\Phi_{\mathcal{E}^c} = -\Phi_\mathcal{E}$.* *[\[prop:flux_disjoint_sets\]]{#prop:flux_disjoint_sets label="prop:flux_disjoint_sets"} $\Phi_{A \sqcup B} = \Phi_A + \Phi_B.$* *[\[prop:fluxdifference\]]{#prop:fluxdifference label="prop:fluxdifference"} $\Phi_{B' \setminus B} = \Phi_{B'} - \Phi_{B}$.* *Proof.* We first note that (iii) follows from (ii), noting that $B' \setminus B$ and $B$ are disjoint. Hence, it suffices to prove (i) and (ii). 1. Let $f \in \mathop{\mathrm{PPHE}}({\Gamma})$ and $\mathcal{E}\subset E_\ell$ be a nonempty proper clopen subset. Choose $g \in \mathop{\mathrm{PPHE}}({\Gamma})$ to be a proper homotopy inverse of $f$. Take ${\Gamma}_L$ and ${\Gamma}_R$ with ${\Gamma}_L \subset {\Gamma}_R$ to be an admissible pair of graphs for $f$ and $g$ with respect to $\mathcal{E}.$ Fixing ${\Gamma}_L$, we can enlarge ${\Gamma}_R$ so that $({\Gamma}\setminus {\Gamma}_L, {\Gamma}\setminus {\Gamma}_R)$ is an admissible pair for $f$ with respect to $\mathcal{E}^c$. Note $({\Gamma}_R, {\Gamma}_L)$ is still an admissible pair of graphs for $f$ with respect to $\mathcal{E}$. In summary, we have: - $f({\Gamma}_L) \subset {\Gamma}_R, \quad g({\Gamma}_L) \subset {\Gamma}_R$ - $f({\Gamma}\setminus {\Gamma}_R) \subset {\Gamma}\setminus {\Gamma}_L$, - $\mathop{\mathrm{cork}}(\pi_1({\Gamma}_R), \pi_1({\Gamma}_L))<\infty, \quad \mathop{\mathrm{cork}}(\pi_1({\Gamma}_R), f_*(\pi_1({\Gamma}_L))) < \infty$. - $\mathop{\mathrm{cork}}(\pi_1({\Gamma}_R), g_*(\pi_1({\Gamma}_L))) < \infty$. - $\mathop{\mathrm{cork}}(\pi_1({\Gamma}\setminus {\Gamma}_L), \pi_1({\Gamma}\setminus {\Gamma}_R))<\infty, \quad \mathop{\mathrm{cork}}(\pi_1({\Gamma}\setminus {\Gamma}_L), f_*(\pi_1({\Gamma}\setminus {\Gamma}_R))) < \infty$. 
Because $f_*$ is a $\pi_1$-isomorphism, we have the following three different free factor decompositions of $\pi_{1}({\Gamma})$: $$\begin{aligned} \pi_{1}({\Gamma}) &= f_*(\pi_1({\Gamma}_R)) \ast f_*(\pi_1({\Gamma}\setminus {\Gamma}_R)), \\ \pi_{1}({\Gamma}) &= \pi_1({\Gamma}_R) \ast \pi_1({\Gamma}\setminus {\Gamma}_R), \text{ and} \\ \pi_{1}({\Gamma}) &= \pi_1({\Gamma}_L) \ast \pi_1({\Gamma}\setminus {\Gamma}_L). \end{aligned}$$ We also have the free factor decompositions $$\begin{aligned} f_{*}(\pi_{1}({\Gamma}_{R})) &= \pi_{1}({\Gamma}_{L}) \ast B, \text{ and}\\ \pi_{1}({\Gamma}\setminus{\Gamma}_{L}) &= f_{*}(\pi_1({\Gamma}\setminus{\Gamma}_{R})) \ast C, \end{aligned}$$ for some free factors $B$ and $C$ of $\pi_1({\Gamma})$. Putting together these decompositions, we get: $$\begin{aligned} \pi_{1}({\Gamma}) &= \pi_{1}({\Gamma}_{L}) \ast B \ast f_*(\pi_1({\Gamma}\setminus {\Gamma}_R)) \\ \pi_{1}({\Gamma}) &= \pi_1({\Gamma}_L) \ast f_{*}(\pi_1({\Gamma}\setminus{\Gamma}_{R})) \ast C. \end{aligned}$$ Therefore, we have $\mathop{\mathrm{rk}}(B)=\mathop{\mathrm{rk}}(C)$. Translating these equalities, we compute: $$\begin{aligned} \Phi_{\mathcal{E}^c}(f) &= \mathop{\mathrm{cork}}(\pi_1({\Gamma}\setminus {\Gamma}_L), \pi_1({\Gamma}\setminus {\Gamma}_R)) - \mathop{\mathrm{cork}}(\pi_1({\Gamma}\setminus {\Gamma}_L), f_*(\pi_1({\Gamma}\setminus {\Gamma}_R))) \\ &= \mathop{\mathrm{cork}}(\pi_1({\Gamma}_R),\pi_1({\Gamma}_L)) - \mathop{\mathrm{cork}}(f_*(\pi_1({\Gamma}_R)), \pi_1({\Gamma}_L)) \\ &= \mathop{\mathrm{cork}}(\pi_1({\Gamma}_R),\pi_1({\Gamma}_L)) - \mathop{\mathrm{cork}}(\pi_1({\Gamma}_R), g_*(\pi_1({\Gamma}_L))) \\ &= \Phi_\mathcal{E}(g) = -\Phi_\mathcal{E}(f), \end{aligned}$$ where the last equality follows from the fact that $g$ is a proper homotopy inverse of $f$ and $\Phi_\mathcal{E}$ is a homomorphism. 2. Let $f \in \mathop{\mathrm{PPHE}}({\Gamma})$.
Choose an $x_{0}$ that determines a partition that is compatible with both $A^{c}$ and $B^{c}$ as in the beginning of this section. Then there exist admissible pairs $({\Gamma}_{R_{A^{c}}},{\Gamma}_{L_{A^{c}}})$ and $({\Gamma}_{R_{B^{c}}}, {\Gamma}_{L_{B^{c}}})$ for $f$ with respect to $A^{c}$ and $B^{c}$ respectively. By taking small enough ${\Gamma}_{L_{A^{c}}}$ and ${\Gamma}_{L_{B^{c}}}$, we can ensure that ${\Gamma}_{R_{A^{c}}}$ and ${\Gamma}_{R_{B^{c}}}$ have contractible intersection in ${\Gamma}$; see the figure below. ![Illustration of the choices of subgraphs for the proof of Proposition 32. Here the paths from $x_0$ to each subgraph are omitted. We can choose pairs of graphs $({\Gamma}_{R_{A^{c}}},{\Gamma}_{L_{A^{c}}})$ and $({\Gamma}_{R_{B^{c}}},{\Gamma}_{L_{B^{c}}})$ such that the graphs from different pairs have contractible intersections.](pics/flux_disjoint_sets.pdf){#fig:flux_disjoint_sets width=".5\\textwidth"} Then we observe that $({\Gamma}_{R_{A^{c}}} \cup {\Gamma}_{R_{B^{c}}}, {\Gamma}_{L_{A^{c}}} \cup {\Gamma}_{L_{B^{c}}})$ is an admissible pair for $f$ with respect to $A^{c} \cap B^{c} = (A \sqcup B)^{c}$ (still with the basepoint $x_{0}$). We then have a free decomposition $$\pi_1({\Gamma}_{R_{A^{c}}} \cup {\Gamma}_{R_{B^{c}}}, x_0) \cong \pi_1({\Gamma}_{R_{A^{c}}},x_0) \ast \pi_1({\Gamma}_{R_{B^{c}}},x_0),$$ and the same for $\pi_1({\Gamma}_{L_{A^{c}}} \cup {\Gamma}_{L_{B^{c}}},x_0)$.
Finally, we compute $$\begin{aligned} \Phi_{(A \sqcup B)^{c}}(f) &= \mathop{\mathrm{cork}}\left(A_{R_{A^{c}}} \ast A_{R_{B^{c}}}, A_{L_{A^{c}}} \ast A_{L_{B^{c}}} \right) - \mathop{\mathrm{cork}}\left(A_{R_{A^{c}}} \ast A_{R_{B^{c}}}, f_*(A_{L_{A^{c}}} \ast A_{L_{B^{c}}}) \right)\\ &= \left(\mathop{\mathrm{cork}}(A_{R_{A^{c}}}, A_{L_{A^{c}}}) + \mathop{\mathrm{cork}}(A_{R_{B^{c}}}, A_{L_{B^{c}}}) \right) \\ &\hspace{20pt}- \left(\mathop{\mathrm{cork}}(A_{R_{A^{c}}}, f_*(A_{L_{A^{c}}})) + \mathop{\mathrm{cork}}(A_{R_{B^{c}}}, f_*(A_{L_{B^{c}}})) \right)\\ &= \left(\mathop{\mathrm{cork}}(A_{R_{A^{c}}}, A_{L_{A^{c}}}) - \mathop{\mathrm{cork}}(A_{R_{A^{c}}}, f_*(A_{L_{A^{c}}})) \right) \\ &\hspace{20pt}+ \left(\mathop{\mathrm{cork}}(A_{R_{B^{c}}}, A_{L_{B^{c}}}) - \mathop{\mathrm{cork}}(A_{R_{B^{c}}}, f_*(A_{L_{B^{c}}})) \right) \\ &= \Phi_{A^{c}}(f) + \Phi_{B^{c}}(f). \end{aligned}$$ Then we apply part (i) to see that $$\Phi_{A \sqcup B} = -\Phi_{(A \sqcup B)^{c}} = -\Phi_{A^{c}} - \Phi_{B^{c}} = \Phi_{A} + \Phi_{B}. \qedhere$$  ◻ **Remark 33**. We remark that by Proposition 32, we can even formally define the flux map with respect to the empty set or the whole set $E_\ell$: $$\Phi_{\emptyset}:= \Phi_A - \Phi_A \equiv 0, \qquad \Phi_{E_\ell} := \Phi_{A} + \Phi_{A^c } \equiv 0.$$ This allows us to define a flux map for any clopen $\mathcal{E}\subset E$ by $\Phi_{\mathcal{E}}=\Phi_{\mathcal{E}\cap E_{\ell}}$. ## Flux zero maps {#ssec:semidirect} In this section we will prove the following characterization of flux zero maps. **Theorem 34**. *Let ${\Gamma}$ be a locally finite, infinite graph with $|E_{\ell}({\Gamma})| \geq 2$, and $f \in \mathop{\mathrm{PMap}}({\Gamma})$. Then $f \in \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ if and only if $\Phi_{\mathcal{E}}(f) = 0$ for every clopen subset $\mathcal{E}$ of $E(\Gamma)$.* We have proved the forward direction already in a previous paper. **Proposition 35** ([@DHK2023 Proposition 7.13]).
*If $f \in \overline{\mathop{\mathrm{PMap}}_c({\Gamma})}$, then $\Phi_\mathcal{E}(f)=0$ for every clopen subset $\mathcal{E}$ of $E({\Gamma})$.* We will first assume that ${\Gamma}$ is a core graph, i.e., $E_{\ell}({\Gamma})=E({\Gamma})$. For brevity, we will temporarily drop the subscript $\ell$ for $E_\ell$ while we work under this assumption. To leverage the algebraic information (flux 0) to obtain topological information (homotopy equivalence), we need the following fact: **Lemma 36** ([@Hatcher2002 Proposition 1B.9]). *Let $X$ be a connected CW complex and let $Y$ be a $K(G,1)$. Then every homomorphism $\pi_1(X,x_0) \to \pi_1(Y,y_0)$ is induced by a continuous map $(X,x_0) \to (Y,y_0)$ that is unique up to homotopy fixing $x_0$.* Recall that a graph is a $K(F,1)$ for $F$ a free group (the fundamental group of the graph). Now we prove a preliminary lemma to construct a compact approximation of a proper homotopy equivalence. **Lemma 37**. *Let $\mathcal{E}\subset E({\Gamma})$ be a nonempty proper clopen subset and $f \in \mathop{\mathrm{PMap}}({\Gamma})$. If $\Phi_{\mathcal{E}}(f) = 0$, then given any compact $K \subset {\Gamma}$, there exists $\psi \in \mathop{\mathrm{PMap}}({\Gamma})$ such that* 1. *[**(Compact approximation)**]{.roman} $\psi f^{-1} \in \mathcal{V}_{K}$,* 2. *[**(Truncation)**]{.roman} there exist disjoint subgraphs ${\Gamma}_{\mathcal{E}}$ and ${\Gamma}_{\mathcal{E}^{c}}$ of ${\Gamma}$ with end spaces $\mathcal{E}$ and $\mathcal{E}^{c}$ respectively, such that $\psi\vert_{{\Gamma}_{\mathcal{E}}} = \mathop{\mathrm{id}}$ and $\psi\vert_{{\Gamma}_{\mathcal{E}^{c}}} = f\vert_{{\Gamma}_{\mathcal{E}^{c}}}$, and* 3. *[**(Same flux)**]{.roman} $\Phi_{\eta}(\psi) = \Phi_{\eta}(f)$ for every clopen subset $\eta \subset E({\Gamma}) \setminus \mathcal{E}$.* *Proof.* Let $\{{\Gamma}_{n}\}_{n \in \mathbb{Z}}$ be as in the definition of $\Phi_{\mathcal{E}}$, for some choice of basepoint $x_{0}$.
Now, given $f \in \mathop{\mathrm{PMap}}({\Gamma})$ and any $n$ there is some $m_{n}>n$ that makes $(m_n,n)$ into an admissible pair for $f$; see the figure below. ![Illustration of ${\Gamma}_n$ and ${\Gamma}_{m_n}$ for the flux map $\Phi_{\mathcal{E}}$. Note $x_0$ is realizing a partition of ends compatible with the partition $\mathcal{E}\sqcup \mathcal{E}^c$.](pics/compactApproximation.pdf){#fig:compactApproximation width=".6\\textwidth"} Since $\Phi_{\mathcal{E}}(f) = 0$ we have $$\begin{aligned} \label{eq:flux0} \mathop{\mathrm{cork}}(\pi_1({\Gamma}_{m_{n}},x_0),\pi_1({\Gamma}_{n},x_0)) = \mathop{\mathrm{cork}}(\pi_1({\Gamma}_{m_{n}},f(x_0)), f_*(\pi_1({\Gamma}_{n},x_0))) \tag{$\ast$} \end{aligned}$$ for each $n \in \mathbb{Z}$. This allows us to define an isomorphism $\Psi_{n}: \pi_{1}({\Gamma},x_{0}) \rightarrow \pi_{1}({\Gamma},f(x_{0}))$ for each $n$. Here we use the notation $G \setminus \hspace{-5pt}\setminus H$ to denote the complementary free factor of $H$ in $G$. Define $$\begin{aligned} \Psi_{n}= \begin{cases} \mathop{\mathrm{Id}}& \text{on $\pi_{1}(\Gamma,x_{0}) \setminus \hspace{-5pt}\setminus\ \pi_{1}({\Gamma}_{m_{n}},x_{0})$}, \\ \sigma_n & \text{on } \pi_{1}(\Gamma_{m_{n}},x_{0}) \setminus \hspace{-5pt}\setminus\ \pi_{1}({\Gamma}_{n},x_{0}), \\ f_{*} & \text{on $\pi_{1}({\Gamma}_{n},x_{0})$}, \end{cases} \end{aligned}$$ where $\sigma_n: \pi_{1}({\Gamma}_{m_{n}},x_{0}) \setminus \hspace{-5pt}\setminus\ \pi_{1}({\Gamma}_{n},x_{0}) \rightarrow \pi_{1}({\Gamma}_{m_{n}},f(x_{0})) \setminus \hspace{-5pt}\setminus\ f_{*}(\pi_{1}({\Gamma}_{n},x_{0}))$ is any isomorphism. Such $\sigma_n$ is guaranteed to exist by [\[eq:flux0\]](#eq:flux0){reference-type="eqref" reference="eq:flux0"}. Now by Lemma 36, for each $n$ there exists a homotopy equivalence $\psi_{n}:({\Gamma},x_{0}) \to ({\Gamma},f(x_{0}))$ such that $$\psi_{n}= \begin{cases} \mathop{\mathrm{Id}}& \text{on ${\Gamma}\setminus {\Gamma}_{m_{n}}$,} \\ f & \text{on ${\Gamma}_{n}$}.
\end{cases}$$ Also note $\psi_{n}$ is a *proper* homotopy equivalence, as it can be defined in pieces as proper maps. Further, $\psi_n$ fixes the ends of ${\Gamma}$, because $f$ does and ${\Gamma}_{m_n} \setminus {\Gamma}_n$ is compact. One can similarly define its proper homotopy inverse. Hence, for each $n$ we have $[\psi_n] \in \mathop{\mathrm{PMap}}({\Gamma})$. The subgraphs $\{{\Gamma}_{n}\}_{n \in \mathbb{Z}}$ form an exhaustion of ${\Gamma}$, so $\psi_{n} \rightarrow f$ in $\mathop{\mathrm{PMap}}({\Gamma})$. Therefore, for a compact $K \subset {\Gamma}$, there exists an $n'\in \mathbb{Z}$ such that $\psi_{n'}f^{-1} \in \mathcal{V}_{K}$. Take $\psi = \psi_{n'}$ and set ${\Gamma}_{\mathcal{E}^{c}} = {\Gamma}_{n'}$ and ${\Gamma}_{\mathcal{E}} = \overline{{\Gamma}\setminus {\Gamma}_{m_{n'}}}$. This gives (i) and (ii) by construction. We now check that (iii) follows from (ii). Let $\eta$ be a clopen subset of $E({\Gamma})$ that is disjoint from $\mathcal{E}$. We will actually check that $\Phi_{\eta^{c}}(\psi) = \Phi_{\eta^{c}}(f)$. This will imply (iii) by Proposition 32. Note $\eta \subset \mathcal{E}^c$. Now let ${\Gamma}_{m}$ be a subgraph from the definition of $\Phi_{\eta^{c}}$ so that ${\Gamma}_{m} \subset {\Gamma}_{\mathcal{E}^{c}}$. Then there exists $n < m$ such that $(m,n)$ is admissible for $\psi$ with respect to the flux map $\Phi_{\eta^c}$. Since $f=\psi$ on ${\Gamma}_n \subset {\Gamma}_m \subset {\Gamma}_{\mathcal{E}^{c}}$ by (ii), we see that $\Phi_{\eta^c}(\psi) = \Phi_{\eta^c}(f)$ with the admissible pair of graphs $({\Gamma}_m,{\Gamma}_n)$. ◻ **Remark 38**. The reader may wonder why in the proof above we chose to define this sequence of maps and argue via convergence in place of constructing the map $\psi$ by hand as in [@APV2020]. While it is not too difficult to construct a $\psi$ so that $\psi f^{-1}$ is the identity on a given compact $K$, it is significantly more finicky to guarantee that $\psi f^{-1}$ preserves the complementary components of $K$.
The convergence argument given above allows us to avoid the messy details of this. **Proposition 39**. *Let ${\Gamma}$ be a locally finite, infinite graph with $E({\Gamma}) = E_{\ell}({\Gamma})$, $|E({\Gamma})| \geq 2$, and $f \in \mathop{\mathrm{PMap}}({\Gamma})$. If $\Phi_{{\mathcal{E}}}(f) = 0$ for every clopen subset $\mathcal{E}$ of $E({\Gamma})$, then $f \in \overline{\mathop{\mathrm{PMap}}_c({\Gamma})}$.* *Proof.* Assume $f \in \mathop{\mathrm{PMap}}({\Gamma})$ has $\Phi_{\mathcal{E}}(f) = 0$ for every nonempty proper clopen subset $\mathcal{E}$ of the end space $E({\Gamma})$. Given any compact $K \subset {\Gamma}$ we will find $\psi \in \mathop{\mathrm{PMap_{\it c}}}({\Gamma})$ such that $\psi f^{-1} \in \mathcal{V}_{K}$. Without loss of generality we may enlarge $K$ so that it is connected, has at least two complementary components, and every complementary component of $K$ is infinite. Then the complement of $K$ induces a partition of the ends. Write $$\mathcal{P}_K = \mathcal{E}_1 \sqcup \ldots \sqcup \mathcal{E}_n$$ for this partition. Apply to $f$ using $\mathcal{E}_{1}$ to obtain $\psi_{1}$. Note that by (iii) we still have $\Phi_{\mathcal{E}_{2}}(\psi_{1}) = \Phi_{\mathcal{E}_2}(f) = 0$. Thus we can apply the lemma again to $\psi_{1}$ using $\mathcal{E}_{2}$ to obtain a $\psi_{2}$. Continue this process recursively to obtain $\psi_{n}$. Now, by (i) of , there exist $v_1,\ldots,v_n \in \mathcal{V}_K$ such that $$\begin{aligned} \psi_{i} &= \begin{cases} v_i\psi_{i-1} & \text{for $1 < i \le n$,}\\ v_1f & \text{for $i=1$}. \end{cases} \end{aligned}$$ Putting these together gives $\psi_{n}f^{-1} = v_{n}v_{n-1}\cdots v_{1} \in \mathcal{V}_{K}$ as $\mathcal{V}_{K}$ is a subgroup. It remains to check that $\psi_{n} \in \mathop{\mathrm{PMap_{\it c}}}({\Gamma})$. However, by (ii), we have that $\psi_{n}$ is equal to the identity on $\bigcup_{i=1}^{n}{\Gamma}_{\mathcal{E}_{i}}$. 
This exactly covers all of the ends of ${\Gamma}$ as $\mathcal{P}_{K}$ was a partition of the ends. Therefore we see that $\psi_{n}$ is supported on $\overline{\bigcap_{i=1}^{n} {\Gamma}\setminus {\Gamma}_{\mathcal{E}_{i}}}$, a compact set. Taking $\psi = \psi_{n}$ gives the desired compact approximation of $f$. Finally, since the $K$ above was taken to be arbitrary, starting with a compact exhaustion of ${\Gamma}$ we can apply the above to obtain a sequence of compactly supported maps that converge to $f$. ◻ Now we turn to the case where ${\Gamma}$ is not necessarily a core graph. *Proof of .* The forward direction follows from . For the backward direction, we first homotope $f$ so that it fixes the vertices of ${\Gamma}$. Then we see that we can write $f=f_{T}f_{c}$ where $f_{T}$ has support on ${\Gamma}\setminus {\Gamma}_{c}$ and $f_{c}$ has support on ${\Gamma}_{c}$. We can see that $f_{T} \in \overline{\mathop{\mathrm{PMap}}_{c}({\Gamma})}$. Indeed, enumerate the components of ${\Gamma}\setminus {\Gamma}_{c}$ as $\{R_{i}\}_{i\in I}$ where each $R_{i}$ is a tree and $I$ is either finite or $I=\mathbb{N}$. Then we can decompose $f_{T} = \prod_{i\in I} f_{i}$ where each $f_{i}$ has compact support on $R_{i}$. Indeed, each $f_{i}$ has compact support: since $f_{T}$ is proper, the pre-image of the cutpoint $\overline{R_{i}} \cap {\Gamma}_{c}$ is compact, so $f_{i}$ can be homotoped to have support contained within the convex hull of this pre-image. Furthermore, all of the $f_{i}$ pairwise commute as each $f_{i}$ can be homotoped so that it is totally supported away from the support of each other $f_{j}$. Thus, we see that $f_{T} \in \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ as it is realized as the limit of partial products of the $f_{i}$. This also shows that given any flux map $\Phi_{\mathcal{E}}$ we must have that $\Phi_{\mathcal{E}}(f_{T}) = 0$, again by .
Therefore, given an $\mathcal{E}$ with $\Phi_{\mathcal{E}}(f) = 0$ we must have that $\Phi_{\mathcal{E}}(f_{c})=0$ as $\Phi_\mathcal{E}$ is a homomorphism. We can then apply to conclude the desired result. ◻ ## Space of flux maps {#ssec:SpaceOfFluxmaps} Before we can prove we need to endow the set of flux maps with an algebraic structure. In the surface case, [@APV2020] could utilize the first integral (co)homology of separating curves on the surface to give structure to the flux maps they defined. Here we will be using the group of locally constant $\mathbb{Z}$-valued functions on $E_{\ell}({\Gamma})$ in place of the homology of separating curves. We remark that this is really the zeroth cohomology of $E_{\ell}({\Gamma})$ with coefficients in the constant sheaf $\mathbb{Z}$. In we observe that this perspective also works in the surface case. For a topological space $X$, we denote by $\check{C}(X)$ the group of locally constant $\mathbb{Z}$-valued functions on $X$. The group operation is given by addition of functions. We let $\mathring{C}(X) = \check{C}(X)/\mathbb{Z}$, the quotient obtained by identifying the constant functions with zero. We now collect some facts about $\mathring{C}(E)$ when $E$ is a compact, totally disconnected, and metrizable space (i.e. a closed subset of a Cantor set). We identify the Cantor set, $\mathcal{C}= 2^{\mathbb{N}} = \{0,1\}^{\mathbb{N}}$, with the set of infinite binary sequences. A countable basis of clopen sets for the topology is then given by the cylinder sets $$\begin{aligned} C_{a_{1}\cdots a_{k}} :=\{ (x_{n}) \in 2^{\mathbb{N}}\ \vert\ x_{i} = a_{i}, \; i=1,\ldots, k\} \end{aligned}$$ where $a_{1}\cdots a_{k}$ is some finite binary sequence of length $k$. Say such a cylinder set has **width** $k$.
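To make the cylinder-set notation concrete, here is a minimal Python sketch (our own illustration; the helper `cylinder` and the depth cutoff are not from the text) that models points of $2^{\mathbb{N}}$ by finite binary strings and cylinder sets by prefix sets:

```python
from itertools import product

DEPTH = 6  # model points of 2^N by binary strings of this length

def cylinder(w, depth=DEPTH):
    """All depth-length binary strings extending the finite word w:
    a finite-depth stand-in for the cylinder set C_w of width len(w)."""
    assert len(w) <= depth
    return {w + ''.join(tail) for tail in product('01', repeat=depth - len(w))}

# C_{01} sits inside C_0, and cylinders with different first letters are disjoint.
assert cylinder('01') <= cylinder('0')
assert cylinder('0') & cylinder('1') == set()
# C_0 and C_1 partition the whole space (the width-0 cylinder over the empty word).
assert cylinder('0') | cylinder('1') == cylinder('')
# A width-k cylinder splits into the two width-(k+1) cylinders below it.
assert cylinder('01') == cylinder('010') | cylinder('011')
```

Since every clopen subset of $\mathcal{C}$ is a finite union of cylinder sets, a finite-depth model of this kind suffices for checking identities among their indicator functions.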
For $E$ a closed subset of the Cantor set $\mathcal{C}$, a **cylinder set** of $E$ is the intersection of a cylinder set for $\mathcal{C}$ with $E$, i.e., a set of the form $C_{w} \cap E$ where $w \in 2^{k}$ for some $k \ge 0$. The standard tree model for the Cantor set is the usual rooted binary tree, and for an arbitrary closed subset $E \subset \mathcal{C}$ we take the subtree with the end space $E$. Given a subset $A$ of a topological space, we let $\chi_{A}$ denote the indicator function on $A$. **Theorem 40** (Countable Basis for $\mathring{C}(E)$). *Let $E$ be a compact, totally disconnected, and metrizable space. There exists a countable collection $\mathcal{A}=\{A_i\}_{i \in I}$ of cylinder sets of $E$ so that* 1. *Any cylinder set $C$ of $E$ that is not in $\mathcal{A}$ can be written as $C = A_0 \setminus (A_{1} \sqcup \cdots \sqcup A_{n})$ for some $A_0\in \mathcal{A}$, and some $A_{j} \in \mathcal{A}$, with $A_j\subset A_0$ and $A_{j} \cap A_{k} = \emptyset$ for all distinct $j,k \in \{1,\ldots,n\}$,* 2. *$\mathcal{B} = \{\chi_{A_{i}}\}_{i\in I}$ is a free basis for $\mathring{C}(E)$. In particular, $\mathring{C}(E) = \oplus_{i\in I} \mathbb{Z}$, and* 3. *for $T$ the standard tree model of the end space $(E,\emptyset)$, there exists an injective map $\iota:\mathcal{A}\rightarrow T$ so that $\iota$ maps into the interior of edges and $\iota(\mathcal{A})$ cuts the graph into a collection of one-ended graphs.* *Proof.* Note that if $|E| = n <\infty$ then the result is immediate by taking $\mathcal{A}$ to be the collection of all individual ends except one. Hence, we will assume that $E$ is infinite. We first prove the result for $E = \mathcal{C}$ the Cantor set. We define $\mathcal{A}'$ to be the collection of all cylinder sets of the form $C_{a_{1}\cdots a_{k-1} 0}$, together with the whole space $\mathcal{C}$.
That is, $$\begin{aligned} \mathcal{A}' = \{\mathcal{C},C_{0},C_{00},C_{10},C_{000},C_{100},C_{010},C_{110},\ldots \}. \end{aligned}$$ We claim that $\{\chi_{A}\}_{A \in \mathcal{A}'}$ forms a free basis for $\check{C}(\mathcal{C})$. We first have **Claim 1**. *For every $f \in \check{C}(\mathcal{C})$, there exist finitely many disjoint clopen subsets $B_1,\ldots,B_n$ and integers $b_{1},\ldots, b_{n}$ such that $$f = \sum_{j=1}^n b_{j}\chi_{B_j}.$$* *Proof.* Suppose $f$ is a locally constant function on $\mathcal{C}$ with *infinitely many* distinct $\mathbb{Z}$-values $b_1,b_2,\ldots$. Then $\{f^{-1}(b_j)\}_{j=1}^\infty$ forms a clopen cover of $\mathcal{C}$ which does not have a finite subcover, contradicting the compactness of $\mathcal{C}$. Therefore, $f$ can assume at most finitely many different values in $\mathbb{Z}$, and taking $B_{j} = f^{-1}(b_{j})$ proves the claim. ◻ Thus we can check that $\{\chi_{A}\}_{A \in \mathcal{A}'}$ generates $\check{C}(\mathcal{C})$ by verifying that for an arbitrary clopen set $B$ of $\mathcal{C}$, we can write $\chi_{B}$ as a finite linear combination of elements from $\{\chi_{A}\}_{A \in \mathcal{A}'}$. Since the cylinder sets form a clopen basis for the topology, we only need to check when $B$ is a cylinder set. Take $B = C_{a_1\cdots a_k}$ for some $k > 0$ and $a_{1}\cdots a_{k} \in 2^{k}$. Then we have either $B \in \mathcal{A}'$ or $a_{k} = 1$. Supposing the latter, let $$m = \begin{cases} 0 & \text{if $a_1=\ldots=a_k=1$,}\\ \max \{j\vert a_{j}=0\} & \text{otherwise}. \end{cases}$$ Then we can write $$\begin{aligned} \chi_B = \chi_{C_{a_{1}\cdots a_{k}}} = \chi_{C_{a_{1}\cdots a_{m}}} - \left(\sum_{j=m}^{k-1} \chi_{C_{a_{1}\cdots a_{j}0}} \right), \end{aligned}$$ where we take $a_1\cdots a_m$ to be the empty sequence when $m=0$. Thus we see that $\{\chi_{A}\}_{A \in \mathcal{A}'}$ generates $\check{C}(\mathcal{C})$. This also shows that property (1) holds.
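The displayed decomposition of $\chi_B$ can be verified mechanically on a finite-depth truncation of the Cantor set. The following Python sketch (our own verification code; all names are ours) encodes indicator functions of cylinder sets as vectors and checks the identity for every short word ending in $1$:

```python
from itertools import product

DEPTH = 7
POINTS = [''.join(p) for p in product('01', repeat=DEPTH)]

def chi(w):
    """Indicator vector of the cylinder C_w, sampled on all depth-DEPTH points."""
    return [1 if x.startswith(w) else 0 for x in POINTS]

def rhs(word):
    """Right-hand side chi_{C_{a_1..a_m}} - sum_{j=m}^{k-1} chi_{C_{a_1..a_j 0}},
    where m is the (1-indexed) position of the last 0 in word, or 0 if none."""
    k = len(word)
    m = max((j + 1 for j in range(k) if word[j] == '0'), default=0)
    vec = chi(word[:m])
    for j in range(m, k):
        vec = [u - v for u, v in zip(vec, chi(word[:j] + '0'))]
    return vec

# Verify chi_B = rhs(B) for every cylinder word of length <= 4 ending in 1.
for k in range(1, 5):
    for word in map(''.join, product('01', repeat=k)):
        if word.endswith('1'):
            assert rhs(word) == chi(word), word
```

The check succeeds because, for $j \ge m$, the cylinder $C_{a_1\cdots a_j}$ splits as the disjoint union of $C_{a_1\cdots a_j 0}$ and $C_{a_1\cdots a_j 1}$, and peeling off the $0$-branches one width at a time telescopes down to $C_{a_1\cdots a_k}$.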
Next we verify that the set $\mathcal{B}':=\{\chi_{A}\}_{A \in \mathcal{A}'}$ is linearly independent. Suppose $$\begin{aligned} 0=\sum_{j=1}^{n} a_{j} \chi_{A_{j}}, \end{aligned}$$ for some distinct $A_1,\ldots,A_n \in \mathcal{A}'$. We will proceed by induction on $n$. The case when $n=1$ is straightforward. Now let $n>1$ and without loss of generality we can assume that $A_{n}$ is of minimal width. Let $w$ be the word defining $A_{n}$, i.e. $A_{n} = C_{w}$. Note that $w$ may be the empty word (when $A_n = \mathcal{C}$). Consider the sequence $w\bar{1}$ consisting of the starting word $w$ followed by the constant infinite sequence of $1$s. Then by minimality of $w$, we have $$\begin{aligned} 0 = \sum_{j=1}^{n} a_{j} \chi_{A_{j}}(w\bar{1}) = a_{n}. \end{aligned}$$ Therefore, we have $0=\sum_{j=1}^{n} a_{j} \chi_{A_{j}} = \sum_{j=1}^{n-1} a_{j} \chi_{A_{j}}$ so by induction on $n$ we see that $a_{j} = 0$ for all $j$. Thus we see that $\mathcal{B}'$ is a free basis for $\check{C}(\mathcal{C})$. Taking $\mathcal{A}:= \mathcal{A}' \setminus \{\mathcal{C} \} = \{C_{0},C_{00},C_{10},C_{000},C_{100},C_{010},C_{110},\ldots\}$, the free basis $\mathcal{B}'$ for $\check{C}(\mathcal{C})$ descends (allowing for a slight abuse of notation) to a free basis $\mathcal{B}:=\{\chi_{A}\}_{A \in \mathcal{A}}$ for $\mathring{C}(\mathcal{C})$, proving (2). Finally, we can define $\iota: \mathcal{A}\rightarrow T$ by using the labels on each of the cylinder sets to map each cylinder set to the midpoint of its corresponding edge in the standard binary tree model of the Cantor set. See for a picture of the map. The components of $T\setminus \iota(\mathcal{A})$ each contain one end of $T$. Now to go from the Cantor set to a general infinite end space we identify $E$ with a subspace of $\mathcal{C}$ and take $\mathcal{A}= \{C_{0} \cap E, C_{00} \cap E,C_{10} \cap E,\ldots \}$, deleting duplicated sets if necessary.
Then the set $\{\chi_A\}_{A \in \mathcal{A}}$ will still determine a free basis for $\mathring{C}(E)$. ◻ Apply this theorem to $E_{\ell}({\Gamma})$ in order to obtain the set $\mathcal{A}=\{A_i\}_{i \in I}$. We now define the homomorphism $$\begin{aligned} \Pi: \mathop{\mathrm{PMap}}({\Gamma}) &\rightarrow \prod_{i \in I} \mathbb{Z}\\ f &\mapsto (\Phi_{A_{i}}(f))_{i \in I}.\end{aligned}$$ We will check that this map is surjective and has kernel exactly $\overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$, i.e. it forms the following short exact sequence: $$\begin{aligned} 1 \longrightarrow \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})} \longrightarrow \mathop{\mathrm{PMap}}({\Gamma}) \overset{\Pi}{\longrightarrow} \prod_{i \in I} \mathbb{Z}\longrightarrow 1.\end{aligned}$$ **Lemma 41**. *Let $\mathcal{E}$ be a clopen subset of $E({\Gamma})$ so that $\mathcal{E}\cap E_{\ell}({\Gamma})$ is a proper nontrivial subset. If $f \in \mathop{\mathrm{PMap}}({\Gamma})$ satisfies $\Phi_{A} (f) = 0$ for all $A \in \mathcal{A}$, then $\Phi_{\mathcal{E}}(f) = 0$ as well.* *Proof.* We first note that $\mathcal{E}$ can be written as a disjoint union of finitely many cylinder sets. Thus, by it suffices to check when $\mathcal{E}$ is a cylinder set $C$ of $E({\Gamma})$. Assume that $f \in \mathop{\mathrm{PMap}}({\Gamma})$ satisfies $\Phi_{A_{i}} (f) = 0$ for all $i \in I$. Note that $C\cap E_{\ell}({\Gamma})$ is again a cylinder set of $E_{\ell}({\Gamma})$. Applying property (1) of we have either $C \in \mathcal{A}$, or $C = A_{0} \setminus (\bigsqcup_{j=1}^{n} A_{j})$ for some $A_{0} \in \mathcal{A}$ and $A_j \in \mathcal{A}$. If $C \in \mathcal{A}$, then we conclude $\Phi_C(f)=0$. For the other case, we can apply and to write $$\begin{aligned} \Phi_{C} (f) = \Phi_{A_{0}}(f) - \sum_{j=1}^{n} \Phi_{A_{j}}(f) = 0 - 0 =0. \end{aligned}$$ ◻ **Corollary 42**.
*For ${\Gamma}$ and $\Pi$ as above, $\ker(\Pi) = \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$.* *Proof.* The forward direction of implies $\ker(\Pi) \supset \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$. On the other hand, together with the backward direction of imply the other containment $\ker(\Pi) \subset \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$. ◻ Next, we will build a section to show $\Pi$ is surjective, and more importantly, this sequence splits. This gives us our desired semidirect product decomposition in . **Proposition 43**. *There exists an injective homomorphism $\hat{\iota}:\prod_{i\in I} \mathbb{Z}\rightarrow \mathop{\mathrm{PMap}}({\Gamma})$ so that $\Pi \circ \hat{\iota}$ is the identity on $\prod_{i \in I}\mathbb{Z}$.* *Proof.* Let $T$ be the maximal tree of the graph ${\Gamma}_{c}$ in standard form. Note that the end space of $T$ is homeomorphic to $E_{\ell}({\Gamma})$ and let $\mathcal{A}=\{A_i\}_{i \in I}$ be the set obtained from (2) of applied to the set $E_{\ell}({\Gamma})$ and $\iota:\mathcal{A}\rightarrow T$ be the map given by property (3) of . The closure in ${\Gamma}_{c}$ of every complementary component of $\iota(\mathcal{A})$ is a one-ended subgraph with infinite rank. Call one such component ${\Gamma}'$. It has at most a countably infinite number of half edges coming from the points of $\iota(\mathcal{A})$. Now we will modify ${\Gamma}'$ via a proper homotopy equivalence that fixes $\partial{\Gamma}'$ so that the new graph has a "grid of loops" above $\partial{\Gamma}'$. See for how this replacement is done. Such a replacement by a proper homotopy equivalence is possible by the classification of infinite graphs. After replacing each component of ${\Gamma}_{c} \setminus\iota(\mathcal{A})$ we obtain a new graph that is proper homotopy equivalent to the original ${\Gamma}_{c}$.
We can also extend this proper homotopy equivalence to the entire graph ${\Gamma}$, as our proper homotopy equivalence fixes the boundary points of each complementary component of $\iota(\mathcal{A})$. Now for each $i$, there are exactly two complementary components whose closures in ${\Gamma}_c$ contain $\iota(A_{i})$. Let $\ell_{i} \in \mathop{\mathrm{PMap}}({\Gamma})$ be the loop shift supported on the two columns of loops sitting above $\iota(A_{i})$ in these components. Orient the loop shift so that it is shifting towards the end in $A_{i}$. Note that each $\ell_{i}$ has total support disjoint from each other $\ell_{j}$ so that $\ell_{i}\ell_{j} = \ell_{j}\ell_{i}$ for all $i,j \in I$. Therefore, $\prod_{i \in I} \langle\ell_{i} \rangle< \mathop{\mathrm{PMap}}({\Gamma})$, and we can define the homomorphism $\hat{\iota}: \prod_{i \in I}\mathbb{Z} \to \mathop{\mathrm{PMap}}({\Gamma})$ by $$\begin{aligned} \hat{\iota}\left((n_{i})_{i\in I}\right) := \prod_{i\in I} \ell_{i}^{n_{i}}. \end{aligned}$$ It remains to check that $\Pi \circ \hat{\iota}$ is the identity on $\prod_{i\in I}\mathbb{Z}$. By the construction of the loop shifts, $\ell_{i}$ crosses exactly one of the clopen subsets in $\mathcal{A}$, namely $A_{i}$. Therefore, we have $$\begin{aligned} \Phi_{A_{j}} (\ell_{i}) = \delta_{ij}:=\begin{cases} 1 \;\; \text{ if } i=j, \\ 0 \;\; \text{ if } i\neq j.\end{cases} \end{aligned}$$ Now, given any tuple $(n_{i})_{i \in I} \in \prod_{i\in I}\mathbb{Z}$ we compute $$\begin{aligned} (\Pi \circ \hat{\iota})\left((n_{i})_{i\in I}\right) &= \Pi\left(\prod_{i\in I} \ell_{i}^{n_{i}}\right) = \left(\Phi_{A_{j}}\left(\prod_{i\in I} \ell_{i}^{n_{i}}\right)\right)_{j\in I} = (n_{i})_{i\in I}. 
\qedhere \end{aligned}$$ ◻ *Proof of .* and above give the desired splitting short exact sequence $1 \longrightarrow \overline{\mathop{\mathrm{PMap}}_c({\Gamma})} \longrightarrow \mathop{\mathrm{PMap}}({\Gamma}) \longrightarrow \mathbb{Z}^\alpha \longrightarrow 1$, with $\alpha = |\mathcal{A}|$. ◻ ## The rank of integral cohomology {#ssec:firstcohomology} As pointed out in , we define $\Phi_{\emptyset} = \Phi_{E_\ell} \equiv 0$. **Lemma 44**. *Let $\{A_i\}_{i \in I}$ be a clopen collection of subsets of $E_\ell({\Gamma})$ such that $\mathcal{B} = \{\chi_{A_i}\}_{i \in I}$ is a free basis for $\mathring{C}(E_\ell)$ as in . Then the map $$\begin{aligned} \Theta: \mathring{C}(E_\ell({\Gamma})) &\longrightarrow H^1(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z}), \\ \sum_{i \in I}{n_i}\chi_{A_i} &\longmapsto \sum_{i \in I}{n_i}\Phi_{A_i} \end{aligned}$$ is a well-defined injective homomorphism.* *Proof.* Since $\mathcal{B}$ is a free basis for $\mathring{C}(E_\ell)$, the map $\chi_{A_i} \mapsto \Phi_{A_i}$ on $\mathcal{B}$ extends to a well-defined homomorphism on the whole group $\mathring{C}(E_\ell)$. To see $\Theta$ is injective, suppose $\Theta(\sum_i n_i\chi_{A_i}) = \sum_i n_i\Phi_{A_i} =0$ for $\chi_{A_i} \in \mathcal{B}$. Then for each $j$ that arises as an index of the summation, we evaluate the sum at the loop shift $\ell_j$ constructed in the proof of : $$0 = \sum_i n_i \Phi_{A_i}(\ell_j) = n_j\Phi_{A_j}(\ell_j) = n_j,$$ which implies that $\sum_i n_i\chi_{A_i} \equiv 0$, concluding that $\Theta$ is injective. ◻ Here we collect relevant results on the first homology of the pure mapping class group of graphs of rank $n$ with $s$ rays. **Fact 45** ([@HV1998 Theorem 1.1]). *$H_1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Q})=0$ for all $n \ge 1$.* **Fact 46** ([@Hatcher1995 Section 4]).
*For $n \ge 3$ and $s \ge 1$, $$H_1(F_n^{s-1} \rtimes \mathop{\mathrm{Aut}}(F_n);\mathbb{Z}) \cong H_1(F_n^{s} \rtimes \mathop{\mathrm{Aut}}(F_n);\mathbb{Z}).$$ This still holds for $n=1,2$ if $s \ge 2$.* **Proposition 47**. *$H^1(\mathop{\mathrm{PMap}}_c({\Gamma});\mathbb{Z})=0$ for every locally finite, infinite graph ${\Gamma}$.* *Proof.* Let $\{{\Gamma}_k\}$ be a compact exhaustion of ${\Gamma}$. Then $\mathop{\mathrm{PMap}}_c({\Gamma})$ is a direct limit of $\mathop{\mathrm{PMap}}({\Gamma}_k)$'s, each of which is isomorphic to $F_{n_k}^{e_k} \rtimes \mathop{\mathrm{Aut}}(F_{n_k})$ for some $e_k \ge 0$ and $n_k \ge 1$ (Recall ). Since the direct limit commutes with $H^1(-;\mathbb{Z}) \equiv \mathop{\mathrm{Hom}}(-,\mathbb{Z})$, it suffices to show that groups of the form $F_n^e \rtimes \mathop{\mathrm{Aut}}(F_n)$ have trivial first cohomology. We first show $H^1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z})=0$. By the universal coefficient theorem for cohomology, $$0 \longrightarrow \mathop{\mathrm{Ext}}\left(H_0(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z}),\mathbb{Z}\right) \longrightarrow H^1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z}) \longrightarrow \mathop{\mathrm{Hom}}(H_1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z}),\mathbb{Z}) \longrightarrow 0,$$ where $\mathop{\mathrm{Ext}}\left(H_0(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z}),\mathbb{Z}\right)=0$ as $H_0(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z})\cong \mathbb{Z}$ is free. Also, $H_1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Q})=0$ by , so $H_1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z})$ is torsion and $\mathop{\mathrm{Hom}}(H_1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z}),\mathbb{Z})=0$. It follows that $H^1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Z})=0$. On the other hand, repeatedly applying together with the universal coefficient theorem for homology shows that for $n \ge 3$, $$H_1(F_n^{s} \rtimes \mathop{\mathrm{Aut}}(F_n);\mathbb{Q}) = H_1(F_n^{s-1} \rtimes \mathop{\mathrm{Aut}}(F_n);\mathbb{Q}) = \ldots = H_1(\mathop{\mathrm{Aut}}(F_n);\mathbb{Q})=0.$$ The last equality comes from .
For $n=1,2$, the argument is the same, except we reduce the problem of showing $H^1(F_n^{s-1} \rtimes \mathop{\mathrm{Aut}}(F_n);\mathbb{Z})=0$ to checking $H_1(F_n \rtimes \mathop{\mathrm{Aut}}(F_n);\mathbb{Q})=0$. One can check $\mathbb{Z}\rtimes \mathbb{Z}_2$ and $F_2 \rtimes \mathop{\mathrm{Aut}}(F_2)$ have finite abelianization to conclude this. (See e.g. [@AFV2008presentation Corollary 2] for a finite presentation of $\mathop{\mathrm{Aut}}(F_2)$.) This completes the proof of $H^1(\mathop{\mathrm{PMap}}_c({\Gamma}); \mathbb{Z})=0$. ◻ **Theorem 48**. *The map $\Theta$ in is an isomorphism.* *Proof.* We only need to check the surjectivity of $\Theta$. Pick $\phi \in H^1(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z}) = \mathop{\mathrm{Hom}}(\mathop{\mathrm{PMap}}({\Gamma}),\mathbb{Z})$. By , we have $\phi(\mathop{\mathrm{PMap_{\it c}}}({\Gamma})) = \{0\}$. By Dudley's automatic continuity [@dudley1961], $\phi$ is continuous, so $\phi(\overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})})=\{0\}$. Recall the semidirect product decomposition $\mathop{\mathrm{PMap}}({\Gamma}) \cong \overline{\mathop{\mathrm{PMap}}_c({\Gamma})} \rtimes L$ from , where $L \cong \prod_{i \in I} \langle\ell_{i} \rangle$, the product of commuting loop shifts. Furthermore, these loop shifts are dual to the collection of $\{\Phi_{A_{i}}\}_{i\in I} \subset H^{1}(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$ so that $\Phi_{A_j}(\ell_i) = \delta_{ij}$. Since $\phi$ is zero on the $\overline{\mathop{\mathrm{PMap}}_c({\Gamma})}$-factor, it follows that $\phi$ is completely determined by its value on $L$. Note also that $L \cong \prod_{i \in I}\mathbb{Z}$ so that $H^{1}(L;\mathbb{Z}) \cong \oplus_{i \in I}\mathbb{Z}$ where a basis for $H^{1}(L;\mathbb{Z})$ is given exactly by the set $\{\Phi_{A_{i}}\}_{i\in I}$, as in (2). Hence, $\phi = \phi|_{L} \in H^1(L; \mathbb{Z})$ can be described by a finite linear combination of $\Phi_{A_i}$'s. 
Such a finite linear combination is the image of a finite linear combination of $\chi_{A_i}$ under $\Theta$, so $\Theta$ is surjective. ◻ **Corollary 49** (, revisited). *For every locally finite, infinite graph ${\Gamma}$, $$\mathop{\mathrm{rk}}\left(H^1(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})\right) = \begin{cases} 0 & \text{if $|E_\ell| \le 1$} \\ n-1 & \text{if $2 \le |E_\ell|=n< \infty$} \\ \aleph_0 & \text{otherwise}. \end{cases}$$* *Proof.* This follows from the isomorphism $\Theta: \mathring{C}(E_\ell({\Gamma})) \cong H^1(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$ in . ◻ ## Relation to surfaces {#ssec:surfacescohom} Aramayona--Patel--Vlamis in [@APV2020] obtain a result similar to in the infinite-type *surface* case using the homology of separating curves in place of $\mathring{C}(E_{\ell}({\Gamma}))$. Here we show that these approaches can be unified, as they each rely solely on the subspace of ends accumulated by loops or genus. Let $S$ be an infinite-type surface and let $\hat{S}$ be the surface obtained from $S$ by forgetting the planar ends of $S$. Let $H_{1}^{sep}(\hat{S};\mathbb{Z})$ be the subgroup of $H_{1}(\hat{S};\mathbb{Z})$ generated by homology classes that have separating simple closed curves of $\hat{S}$ as representatives. Note that when $S$ has only planar ends, $H_{1}^{sep}(\hat{S};\mathbb{Z})$ is trivial. **Theorem 50** ([@APV2020 Theorem 4] for genus $\geq 2$, [@DP2020 Theorem 1.1] for genus $1$). *Let $S$ be an infinite-type surface of genus at least one. Then $$\begin{aligned} H^{1}(\mathop{\mathrm{PMap}}(S);\mathbb{Z}) \cong H_{1}^{sep}(\hat{S};\mathbb{Z}). \end{aligned}$$* Let $E_{g}(S)$ denote the space of ends of $S$ accumulated by genus (i.e., the non-planar ends). **Proposition 51**. *Let $S$ be an infinite-type surface. Then $$\begin{aligned} H_{1}^{sep}(\hat{S};\mathbb{Z}) \cong \mathring{C}(E_{g}(S)). \end{aligned}$$* *Proof.* We first note that by definition, $E_{g}(S) = E(\hat{S})$. 
Let $v \in H_{1}^{sep}(\hat{S};\mathbb{Z})$ be a primitive element, i.e. $v$ has a representative $\gamma$ that is an oriented and separating simple closed curve. Now $v$ determines a partition of $E(\hat{S})$ into two clopen subsets, $v^{+}$, those ends to the right of $\gamma$, and $v^{-}$, those ends to the left of $\gamma$. Note that these are proper subsets if and only if $v \neq 0$ if and only if $\chi_{v^+} \neq 0$ in $\mathring{C}(E)$. Define $$\begin{aligned} \Xi(v) :=\chi_{v^{+}} \in \mathring{C}(E), \end{aligned}$$ for each nonzero primitive element $v$ of $H_1^{sep}(\hat{S};\mathbb{Z})$. This linearly extends to define an isomorphism $\Xi: H_{1}^{sep}(\hat{S};\mathbb{Z}) \xrightarrow{\sim} \mathring{C}(E_{g}(S))$. ◻ **Corollary 52**. *Let $S$ be an infinite-type surface of genus at least one and ${\Gamma}$ be a locally finite, infinite graph. If $E_{g}(S)$ is homeomorphic to $E_{\ell}({\Gamma})$, then there is a natural isomorphism between $H^{1}(\mathop{\mathrm{PMap}}(S);\mathbb{Z})$ and $H^{1}(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$.* *Proof.* We first note that if $E_{g}(S)$ is empty (i.e. $S$ has finite genus), then $H^{1}(\mathop{\mathrm{PMap}}(S);\mathbb{Z})$ is trivial by [@APV2020 Theorem 1] and [@DP2020 Theorem 1.1]. Similarly, if $E_{\ell}({\Gamma})$ is empty, then $H^{1}(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$ is trivial by and . Otherwise, the isomorphism is obtained by composing the maps from , , and : $$H^1(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z}) \overset{\Theta}{\cong} \mathring{C}(E_\ell({\Gamma})) \cong \mathring{C}(E_g(S)) \overset{\Xi}{\cong} H_1^{sep}(\hat{S};\mathbb{Z}) \cong H^1(\mathop{\mathrm{PMap}}(S);\mathbb{Z}).$$ ◻ # CB generation classification {#sec:CBgen} As an application of and , in this section we obtain , the classification of infinite graphs with CB generating sets. We revisit the theorem for convenience. **Theorem 53** (, revisited). *Let ${\Gamma}$ be a locally finite, infinite graph.
Then $\mathop{\mathrm{PMap}}({\Gamma})$ is CB generated if and only if either ${\Gamma}$ is a tree, or ${\Gamma}$ satisfies the following:* 1. *${\Gamma}$ has finitely many ends accumulated by loops, and* 2. *there is no accumulation point in $E \setminus E_\ell$.* The only if direction of comes from [@DHK2023]: **Proposition 54** ([@DHK2023 Theorem 6.1]). *Let ${\Gamma}$ be a locally finite, infinite graph. If $\mathop{\mathrm{rk}}({\Gamma})>0$ and $E \setminus E_\ell$ has an accumulation point, then $\mathop{\mathrm{PMap}}({\Gamma})$ is not CB-generated.* **Proposition 55** ([@DHK2023 Theorem 8.2]). *Let ${\Gamma}$ be a locally finite, infinite graph. If ${\Gamma}$ has infinitely many ends accumulated by loops, then $\mathop{\mathrm{PMap}}({\Gamma})$ is not locally CB. In particular, $\mathop{\mathrm{PMap}}({\Gamma})$ is not CB-generated.* Now we show that those conditions are also sufficient for CB-generation. First, recall by that when ${\Gamma}$ is a tree, $\mathop{\mathrm{PMap}}({\Gamma})$ is the trivial group. We proceed to show that (1) and (2) suffice for $\mathop{\mathrm{PMap}}({\Gamma})$ to be CB generated. We start with the case where ${\Gamma}$ has finite rank and satisfies Condition (2): **Proposition 56**. *Let ${\Gamma}$ be a locally finite, infinite graph. If ${\Gamma}$ has finite rank with no accumulation point in $E$, then $\mathop{\mathrm{PMap}}({\Gamma})$ is finitely generated.* *Proof.* Note in this case $E_\ell$ is the empty set, so having no accumulation point in $E \setminus E_\ell$ is equivalent to having a finite end space. Hence $\mathop{\mathrm{PMap}}({\Gamma})$ is isomorphic to one of $\mathop{\mathrm{Out}}(F_n), \mathop{\mathrm{Aut}}(F_n)$ or $F_n^e \rtimes \mathop{\mathrm{Aut}}(F_n)$ for some $e=|E|-1 \ge 1$, all of which are finitely generated, concluding the proof. ◻ Now assume ${\Gamma}$ has infinite rank but finitely many ends accumulated by loops with no accumulation point in $E \setminus E_\ell$.
As in , ${\Gamma}$ can be realized as a finite wedge sum of rays, Loch Ness monster graphs (infinite rank graph with end space $(E,E_{\ell}) \cong (\{*\},\{*\})$), and Millipede monster graphs (infinite rank graph with end space $(E,E_{\ell}) \cong (\mathbb{N}\cup \{\infty\}, \{\infty\})$). Then ${\Gamma}$ is characterized by the triple $(r,l,m)$ where $r$ is the number of ray summands, $l$ is the number of Loch Ness monster summands, and $m$ is the number of Millipede monster summands. Then ${\Gamma}$ is as in . Note that this triple is not unique; in fact, if $m>0$ then we do not need to keep track of $r$ as any additional ray can simply be moved via a proper homotopy into a Millipede monster summand. However, in order to avoid a case-by-case analysis we prove that $\mathop{\mathrm{PMap}}({\Gamma})$ is CB-generated for *any* triple $(r,l,m)$. Note that we already know by that both the Loch Ness monster graph, $(0,1,0)$, and the Millipede monster graph, $(0,0,1)$, have CB and thus CB-generated pure mapping class groups. Therefore we will ignore these two graphs throughout this section. The foundation for our choice of CB-generating set for $\mathop{\mathrm{PMap}}({\Gamma})$ will be the set $\mathcal{V}_K$, where $K$ is the wedge point as in . Recall that an appropriate choice of a compact set $K$ provides a CB neighborhood of the identity certifying that $\mathop{\mathrm{PMap}}({\Gamma})$ is locally CB. **Proposition 57** ([@DHK2023 Proposition 8.3]). *Let ${\Gamma}$ be a locally finite, infinite graph with finitely many ends accumulated by loops. Then $\mathop{\mathrm{PMap}}({\Gamma})$ is locally CB if and only if ${\Gamma} \setminus {\Gamma}_c$ has only finitely many components whose end space is infinite.
Moreover, for any choice of connected compact subgraph $K$ whose complementary components are either trees or monster graphs, $\mathcal{V}_K$ is a CB neighborhood of the identity in $\mathop{\mathrm{PMap}}({\Gamma})$.* We remark that the moreover statement is absent in [@DHK2023 Proposition 8.3]; however, it can be deduced readily from the proof. We thus have that our choice of $\mathcal{V}_{K}$ is CB. This is the starting point for our CB generating set; we now describe how to choose the remaining elements. Enumerate each of the ray summands of ${\Gamma}$ as $R_{1},\ldots,R_{r}$, the Loch Ness monster summands as $L_{1},\ldots,L_{l}$, and the Millipede monster summands as $M_{1},\ldots,M_{m}$ (skip the enumeration if there are no summands of a given type). We also sequentially label the loops in $L_{i}$ by $a_{i,j}$ where $a_{i,1}$ is the loop closest to $K$. We similarly label the loops in $M_{i}$ by $b_{i,j}$. For each $R_{i}$ let $I_{i}$ be an interval in the interior of $R_{i}$. Then we have the following finite collection of word maps: $$\begin{aligned} W :=\{\phi_{(a_{1,1},I_{i})}\}_{i=1}^{r}.\end{aligned}$$ If $l=0$ then we use $W:= \{\phi_{(b_{1,1}, I_{i})}\}_{i=1}^{r}$ instead. Note we cannot have $l=m=0$ as ${\Gamma}$ has infinite rank. If $r=0$, we set $W := \emptyset$. Next, we have the following finite collection of loop swaps: $$\begin{aligned} B :=&\{\alpha_{ij}:=\text{swaps } a_{i,1} \leftrightarrow a_{j,1}\ |\ 1 \le i < j \le l\}\\ &\cup\{\beta_{ij}:= \text{swaps } b_{i,1} \leftrightarrow b_{j,1}\ |\ 1 \le i < j \le m \}\\ &\cup\{\gamma_{ij}:= \text{swaps } a_{i,1} \leftrightarrow b_{j,1}\ |\ 1 \le i \le l,\ 1\le j \le m\}. \end{aligned}$$ In words, $B$ is the collection of all loop swaps between loops that are adjacent to $K$. Finally, we need a finite collection of loop shifts. The graph ${\Gamma}$ has only finitely many ends accumulated by loops, so by , $H^{1}(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$ has finite rank. 
Let $H$ be a finite collection of primitive loop shifts dual to a finite basis of $H^{1}(\mathop{\mathrm{PMap}}({\Gamma});\mathbb{Z})$. We claim that the set $$\begin{aligned} \mathcal{S}:= \mathcal{V}_{K} \cup W \cup B \cup H\end{aligned}$$ is a CB generating set for $\mathop{\mathrm{PMap}}({\Gamma})$. Note that $\mathcal{S}$ is CB since $\mathcal{V}_{K}$ is CB by and each of $W,B,$ and $H$ is simply a finite set. Thus we only need to verify that $\mathcal{S}$ is a generating set for $\mathop{\mathrm{PMap}}({\Gamma})$. We will first check that $\mathcal{S}$ generates $\mathop{\mathrm{PMap_{\it c}}}({\Gamma})$. **Lemma 58**. *If $K' \subset {\Gamma}$ is any connected compact subset of ${\Gamma}$, then $\mathop{\mathrm{PMap}}(K') \subset \langle\mathcal{S}\rangle$.* Before we give the proof of this lemma, we review finite generating sets for $\mathop{\mathrm{Aut}}(F_{n})$. Let $F_n$ be a free group of rank $n$, and denote by $a_1,\ldots,a_n$ its free generators. In 1924, Nielsen [@Nielsen1924] gave a finite presentation of $\mathop{\mathrm{Aut}}(F_n)$, with generating set $\{\tau_i\}_{i=1}^n \cup \{\sigma_{ij}, \lambda_{ij}, \rho_{ij}\}_{1 \le i \neq j \le n}$, where: $$\begin{aligned} &\tau_i= \begin{cases} a_i \mapsto a_i^{-1}, \\ a_j \mapsto a_j & \text{for $j \neq i$.} \end{cases} &&\sigma_{ij}= \begin{cases} a_i \leftrightarrow a_j, \\ a_k \mapsto a_k &\text{for $k \neq i,j$.} \end{cases}\\ &\lambda_{ij}= \begin{cases} a_i \mapsto a_ja_i, \\ a_k \mapsto a_k & \text{for $k \neq i,j$.} \end{cases} &&\rho_{ij}= \begin{cases} a_i\mapsto a_ia_j, \\ a_k \mapsto a_k & \text{for $k \neq i,j$.} \end{cases}\end{aligned}$$ We call $\tau_i$ a **flip**, $\sigma_{ij}$ a **transposition,** and $\lambda_{ij},\rho_{ij}$ **left/right Nielsen automorphisms**, respectively. 
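These Nielsen generators act on words in $F_n$ by substituting the images of the generators and then freely reducing. The following Python sketch (the signed-integer encoding of letters and all names are ours) implements this action and checks, for instance, that flips and transpositions are involutions, as is the composite $\tau_2\lambda_{12}$ (composed as $\tau_2$ after $\lambda_{12}$):

```python
def reduce_word(w):
    """Freely reduce a word in F_n; letters are nonzero ints, -i = a_i^{-1}."""
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return out

def apply_auto(images, w):
    """Apply the automorphism a_i -> images[i] letter by letter, then reduce."""
    res = []
    for x in w:
        img = images[abs(x)]
        res.extend(img if x > 0 else [-y for y in reversed(img)])
    return reduce_word(res)

def nielsen(n, **moves):
    """Automorphism of F_n given by its nontrivial images, e.g. nielsen(3, a1=[2, 1])."""
    images = {i: [i] for i in range(1, n + 1)}
    for key, img in moves.items():
        images[int(key[1:])] = img
    return images

n = 3
tau1 = nielsen(n, a1=[-1])            # flip: a_1 -> a_1^{-1}
tau2 = nielsen(n, a2=[-2])            # flip: a_2 -> a_2^{-1}
sigma12 = nielsen(n, a1=[2], a2=[1])  # transposition: a_1 <-> a_2
lam12 = nielsen(n, a1=[2, 1])         # left Nielsen: a_1 -> a_2 a_1

w = [1, 2, -3, 1]  # the word a_1 a_2 a_3^{-1} a_1
assert apply_auto(tau1, apply_auto(tau1, w)) == w        # tau_1 is an involution
assert apply_auto(sigma12, apply_auto(sigma12, w)) == w  # so is sigma_12
assert apply_auto(lam12, [1]) == [2, 1]                  # a_1 -> a_2 a_1

afv = lambda v: apply_auto(tau2, apply_auto(lam12, v))   # tau_2 lambda_12
assert afv(afv(w)) == w                                  # also an involution
```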
In fact, Armstrong--Forrest--Vogtmann [@AFV2008presentation Theorem 1] reduced this generating set to consist only of involutions: $$\label{eqn:AFVgenset} \{\tau_i\}_{i=1}^n \cup \{\sigma_{i,i+1}\}_{i=1}^{n-1}\cup\{\tau_2\lambda_{12}\}. \tag{$\dagger$}$$ *Proof of .* Let $K'$ be a connected compact subset of ${\Gamma}$. Without loss of generality, we can increase the size of $K'$ so that it is as in . In particular, $K'$ satisfies the following: - $K'$ contains at least two loops of $L_{1}$ (or $M_{1}$ if $l=0$), - $K'$ contains at least one loop from every monster summand, - the vertices in $K'$ are contained in its interior, - every component of ${\Gamma}\setminus K'$ is infinite, - $K'$ is connected and contains the wedge point $K$. Note that the last two properties imply that $K'$ contains a subsegment of every ray summand $R_i$. ![Illustration of $K'$ in ${\Gamma}$. Here we have $l=2, m=3, r=3$ and $\mathop{\mathrm{PMap}}(K') \cong F_9^{11} \rtimes \mathop{\mathrm{Aut}}(F_9)$. We may assume $K'$ contains: at least one loop from every monster summand, at least two loops from one of the monster summands, and initial segments of the ray summands, as well as $K$. If needed, we can further enlarge $K'$ so that its vertices lie in its interior and so that it contains each loop either entirely or not at all.](pics/compactcbgen.pdf){#fig:compactcbgen width=".7\\textwidth"} By and , we have that $\mathop{\mathrm{PMap}}(K') \cong F_{m}^{k} \rtimes \mathop{\mathrm{Aut}}(F_{m})$ for some $k > 0$ and $m=\mathop{\mathrm{rk}}(K')$. We first check that $\langle\mathcal{S}\rangle$ contains an Armstrong--Forrest--Vogtmann generating set for $\mathop{\mathrm{Aut}}(F_{m})$. Relabel the loops of $K'$ by $a_1,\ldots,a_m$ in the following manner. The loop of $L_{1}$ closest to $K$ is labeled $a_{1}$, the next loop in $L_{1}$ is $a_{2}$, and so on until the loops of $L_{1}$ are exhausted. 
Then the next loop, say $a_{j+1}$, is the first loop on $L_{2}$, etc., until all of the loops in all of the $L_{i}$ are exhausted. Finally, continue relabeling by $a_\bullet$'s the loops in $M_{1}$ through $M_{m}$, in the same manner. Note that when $l=0$, then $a_1$ and $a_2$ are contained in $M_1$. Note that we immediately have $\tau_{1},\ldots,\tau_m,\lambda_{12} \in \mathcal{V}_{K} \subset \mathcal{S}$. Therefore it remains to check that $\sigma_{i,i+1} \in \langle\mathcal{S}\rangle$ for all $i=1,\ldots,m-1$. Each such $\sigma_{i,i+1}$ either swaps two adjacent loops in a single component of $K'\setminus K$ or swaps the last loop in a component of $K'\setminus K$ with the first loop in the next component. In the former case we already have that $\sigma_{i,i+1} \in \mathcal{V}_{K}$. For the latter case, let $a_t$ be the first loop in the component of $K' \setminus K$ containing $a_i$. Then consider the loop swap $\sigma_{i,t}$ that swaps $a_i$ with $a_t$ (note those two loops could coincide, and then $\sigma_{i,t}$ is the identity) and let $\sigma_{t,i+1}$ be the loop swap that swaps $a_t$ with $a_{i+1}$, which is the first loop in the component of $K' \setminus K$ containing $a_{i+1}$. Then we have that $\sigma_{i,t} \in \mathcal{V}_K$, $\sigma_{t,i+1} \in B$ and $\sigma_{i,i+1} = \sigma_{i,t} \sigma_{t,i+1} \sigma_{i,t} \in \langle\mathcal{S}\rangle$. Thus we see that every Armstrong--Forrest--Vogtmann generator for the $\mathop{\mathrm{Aut}}(F_{m})$ subgroup of $\mathop{\mathrm{PMap}}(K') \cong F_{m}^{k} \rtimes \mathop{\mathrm{Aut}}(F_{m})$ is contained in $\langle\mathcal{S}\rangle$. Finally we need to be able to obtain each of the $k$ factors of $F_{m}$ in $\mathop{\mathrm{PMap}}(K')$. Each $F_{m}$ subgroup can be identified with the group of word maps supported on an interval adjacent to the boundary of $K'$. Recall by that there are $k+1$ such boundary-adjacent intervals, say $I_1,\ldots,I_{k+1}$. 
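The identity $\sigma_{i,i+1} = \sigma_{i,t}\sigma_{t,i+1}\sigma_{i,t}$ used above is the familiar fact that conjugating a transposition by a transposition simply relabels the swapped entries. A quick check on hypothetical loop labels (the specific numbers are arbitrary):

```python
def swap(x, y):
    """The transposition x <-> y, as a function on labels."""
    return lambda z: y if z == x else (x if z == y else z)

def compose(f, g):
    """(f o g)(z) = f(g(z))."""
    return lambda z: f(g(z))

# a_i, a_t (first loop of the component containing a_i), and a_{i+1}
i, t, j = 5, 3, 6
s_it, s_tj = swap(i, t), swap(t, j)

# sigma_{i,t} sigma_{t,i+1} sigma_{i,t} agrees with sigma_{i,i+1} on all labels
conjugated = compose(s_it, compose(s_tj, s_it))
target = swap(i, j)
assert all(conjugated(z) == target(z) for z in range(10))
```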
Since we have already generated the $\mathop{\mathrm{Aut}}(F_{m})$ subgroup of $\mathop{\mathrm{PMap}}(K')$ with $\mathcal{S}$ and we can change the word of the word map using and , it suffices to show that a *single* word map on each interval $I_j$ that maps onto a generator of $F_{m}$ is in $\langle\mathcal{S}\rangle$. However, we even have one such word map in $\mathcal{S}$ already. Indeed, if $I_j$ is contained in some ray then we have already added a corresponding word map to $W$. Otherwise, if $I_j$ is contained in some monster summand, then there is an appropriate word map already in $\mathcal{V}_{K}$ obtained by mapping $I_j$ over the first loop of that summand. We can thus conclude that $\mathop{\mathrm{PMap}}(K') \cong F_{m}^{k} \rtimes \mathop{\mathrm{Aut}}(F_{m})$ is contained in $\langle\mathcal{S}\rangle$. ◻ We are now ready to prove . Note that in the above lemma we never made use of the loop shifts in $H$. They will now be used to push an arbitrary mapping class into the closure of the compactly supported mapping classes. *Proof of .* As discussed in the beginning of the section, the only if direction comes from and . Now we prove the if direction. When ${\Gamma}$ is a tree, we have $\mathop{\mathrm{PMap}}({\Gamma})=1$ by . If ${\Gamma}$ has finite rank, $\mathop{\mathrm{PMap}}({\Gamma})$ is finitely generated by . Also if ${\Gamma}$ is either the Loch Ness monster or Millipede monster graph, then by , $\mathop{\mathrm{PMap}}({\Gamma})$ is CB. Hence we may assume $1 \le |E_\ell|<\infty$, there is no accumulation point in $E\setminus E_\ell$, and ${\Gamma}$ is neither the Loch Ness monster nor the Millipede monster graph. Let $\mathcal{S}$ be as defined above; $\mathcal{S}= \mathcal{V}_K \cup W \cup B \cup H$. We will show that $\mathcal{S}$ generates $\mathop{\mathrm{PMap}}({\Gamma})$. Let $f \in \mathop{\mathrm{PMap}}({\Gamma})$. 
If $|E_\ell|=1$, then $\mathop{\mathrm{PMap}}({\Gamma}) = \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ by , so we obtain $f \in \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$. Otherwise, if $|E_\ell| \ge 2$, then by postcomposing $f$ with primitive loop shifts in $H$, we may assume the flux of $f$ is zero with respect to any 2-partition of $E_\ell$. By , we can assume $f \in \overline{\mathop{\mathrm{PMap_{\it c}}}({\Gamma})}$ for this case as well. Then there exists a compact set $K'$ containing $K$, and $g \in \mathop{\mathrm{PMap_{\it c}}}({\Gamma})$ such that $g$ is totally supported in $K'$ and $fg^{-1} \in \mathcal{V}_K$. Therefore, it suffices to show that $g$ is contained in the group generated by $\mathcal{S}$. Since $g$ is totally supported in $K'$, the map $g$ can be identified with an element in $\mathop{\mathrm{PMap}}(K')$, which is contained in $\langle\mathcal{S}\rangle$ by . This concludes the proof that $\mathop{\mathrm{PMap}}({\Gamma})$ is generated by $\mathcal{S}$. Finally, $\mathcal{S}$ is CB as it is the union of the three finite sets $W$, $B$, and $H$ with the set $\mathcal{V}_{K}$, which is CB by . ◻ # Residual finiteness {#sec:RF} In this section, we prove : **Theorem 59** (, revisited). *$\mathop{\mathrm{PMap}}({\Gamma})$ is residually finite if and only if ${\Gamma}$ has finite rank.* ## Forgetful map {#ssec:forgetful} Throughout this section, we let ${\Gamma}$ be a locally finite, infinite graph with no ends accumulated by loops. That is, $E_\ell({\Gamma})=\emptyset$ but $E({\Gamma}) \neq \emptyset$. Fix an end $\alpha_0 \in E({\Gamma})$. 
Define $E_{<\infty}({\Gamma})$ as the collection of finite subsets of $E({\Gamma})$ containing $\alpha_0$: $$E_{<\infty}({\Gamma}) = \{ \mathcal{E}\subset E({\Gamma}): \alpha_0 \in \mathcal{E}, \text{ and } |\mathcal{E}|<\infty\}.$$ For each $\mathcal{E}\in E_{<\infty}({\Gamma})$, we define the graph ${\Gamma}_{\mathcal{E}}$ as a subgraph of ${\Gamma}$ such that: - $\mathop{\mathrm{rk}}{\Gamma}_{\mathcal{E}} = \mathop{\mathrm{rk}}{\Gamma}$, and - $E({\Gamma}_{\mathcal{E}}) = \mathcal{E}$. Note ${\Gamma}_\mathcal{E}$ is properly homotopic to the core graph ${\Gamma}_c$ of ${\Gamma}$ with $|\mathcal{E}|$ rays attached. Now we use : $\mathop{\mathrm{PMap}}({\Gamma}) \cong \mathcal{R}\rtimes \mathop{\mathrm{PMap}}({\Gamma}_c^*)$ if $E({\Gamma}) \setminus E_\ell({\Gamma})$ is nonempty and compact. In our case ${\Gamma}$ is of infinite type and has no ends accumulated by loops, so $E({\Gamma}) \setminus E_\ell({\Gamma}) = E({\Gamma})$ is automatically nonempty and compact. Now we denote by $\mathcal{R}_\mathcal{E}$ the $\mathcal{R}$ subgroup for ${\Gamma}_{\mathcal{E}}$. Then we have a map $\rho_\mathcal{E}: \mathcal{R}\to \mathcal{R}_\mathcal{E}$ by 'restricting' the domain to $E({\Gamma}_\mathcal{E})$. Namely, given a locally constant map $[f: E({\Gamma}) \to \pi_1({\Gamma}, \alpha_0)] \in \mathcal{R}$, we define $\rho_\mathcal{E}(f):= f|_{E({\Gamma}_\mathcal{E})}$, where we note that $f|_{E({\Gamma}_\mathcal{E})}:E({\Gamma}_\mathcal{E}) \to \pi_1({\Gamma},\alpha_0) = \pi_1({\Gamma}_\mathcal{E},\alpha_0)$ is a locally constant map to $\pi_1({\Gamma}_\mathcal{E},\alpha_0)$, so $\rho_\mathcal{E}(f) \in \mathcal{R}_\mathcal{E}$. **Lemma 60**. *The restriction map $\rho_\mathcal{E}:\mathcal{R}\to \mathcal{R}_\mathcal{E}$ is a group homomorphism.* *Proof.* Recall the group operation in $\mathcal{R}$ is the pointwise multiplication in $\pi_1({\Gamma},\alpha_0)$. 
Hence the restriction of $f \cdot g$ for $f, g \in \mathcal{R}$ is just the product of their restrictions: $$\rho_\mathcal{E}(f\cdot g) = (f\cdot g)|_{E({\Gamma}_\mathcal{E})} = f|_{E({\Gamma}_\mathcal{E})} \cdot g|_{E({\Gamma}_\mathcal{E})} = \rho_\mathcal{E}(f) \cdot \rho_\mathcal{E}(g). \qedhere$$ ◻ Observe ${\Gamma}_c^* = ({\Gamma}_\mathcal{E})_c^*$. Hence, from the lemma we can build the homomorphism $\mathcal{F}_\mathcal{E}$ from $\mathop{\mathrm{PMap}}({\Gamma})$ to $\mathop{\mathrm{PMap}}({\Gamma}_\mathcal{E})$ by just letting $\mathcal{F}_\mathcal{E}= \rho_\mathcal{E}\times \mathop{\mathrm{Id}}$ on the decomposition $\mathop{\mathrm{PMap}}({\Gamma}) = \mathcal{R}\rtimes \mathop{\mathrm{PMap}}({\Gamma}_c^*)$ given in : $$\mathcal{F}_\mathcal{E}: \mathop{\mathrm{PMap}}({\Gamma}) \to \mathop{\mathrm{PMap}}({\Gamma}_\mathcal{E}),$$ which we call the **forgetful homomorphism** to $\mathcal{E}\subset E({\Gamma})$. ## Finite rank if and only if residually finite {#ssec:RFiffFIN} Now we prove . The following fact is used for the only if direction of the proof. Denote by $\mathop{\mathrm{SAut}}(F_n)$ the unique index 2 subgroup of $\mathop{\mathrm{Aut}}(F_n)$. **Fact 61** ([@baumeister2019smallest Theorem 9.1]). *There exists a strictly increasing sequence of integers $\{a_n\}_{n \ge 3}$ such that for $n \ge 3$, every nontrivial finite quotient of $\mathop{\mathrm{SAut}}(F_n)$ has cardinality at least $a_n$.* *Proof of .* Suppose that ${\Gamma}$ has finite rank $n$. If ${\Gamma}$ has no ends then $\mathop{\mathrm{PMap}}({\Gamma})$ is isomorphic to $\mathop{\mathrm{Out}}(F_n)$, which is residually finite by [@grossman1974]. If ${\Gamma}$ has only one end, then $\mathop{\mathrm{PMap}}({\Gamma})$ is isomorphic to $\mathop{\mathrm{Aut}}(F_n)$, which is residually finite by [@baumslag1963]. 
If ${\Gamma}$ has finitely many ends, then $\mathop{\mathrm{PMap}}({\Gamma}) \cong F_n^{{|E|-1}} \rtimes \mathop{\mathrm{Aut}}(F_n)$, which is again residually finite as both factors are residually finite and $F_n^{|E|-1}$ is finitely generated [@mal1958homomorphisms Theorem 1]. Now we assume ${\Gamma}$ has finite rank and infinitely many ends. The proof is similar to the proof for infinite-type surfaces; see [@PatelVlamis Proposition 4.6]. Let $f \in \mathop{\mathrm{PMap}}({\Gamma})$ be a nontrivial element. Since ${\Gamma}$ is of finite rank and $f$ is proper, it follows that $({\Gamma}\setminus {\Gamma}_c) \cap \mathop{\mathrm{supp}}(f)$ is compact. In particular, there exists some finite set $\mathcal{E}\subset E$ such that $({\Gamma}_\mathcal{E}\setminus {\Gamma}_c) \cap \mathop{\mathrm{supp}}(f)$ is still nonempty. This implies that the forgetful map $\mathcal{F}_\mathcal{E}: \mathop{\mathrm{PMap}}({\Gamma}) \to \mathop{\mathrm{PMap}}({\Gamma}_\mathcal{E})$ sends $f$ to a nontrivial element $\mathcal{F}_\mathcal{E}(f) \in \mathop{\mathrm{PMap}}({\Gamma}_\mathcal{E})$. However, ${\Gamma}_\mathcal{E}$ has finite end space, so $\mathop{\mathrm{PMap}}({\Gamma}_\mathcal{E})$ is residually finite by the previous paragraph. Therefore, there exists a homomorphism $\psi: \mathop{\mathrm{PMap}}({\Gamma}_\mathcal{E}) \to F$ for some finite group $F$ so that $\psi(\mathcal{F}_\mathcal{E}(f))$ is nontrivial. Thus $\mathop{\mathrm{PMap}}({\Gamma})$ is residually finite. Conversely, let ${\Gamma}$ have infinite rank and assume it is in standard form. Let $\{{\Gamma}_{k}\}$ be a compact exhaustion of ${\Gamma}$ by connected subgraphs. Then there exist non-decreasing sequences $\{n_k\}, \{e_k\}$ such that $\mathop{\mathrm{PMap}}({\Gamma}_k) \cong F_{n_k}^{e_k} \rtimes \mathop{\mathrm{Aut}}(F_{n_k})$. Here $e_k+1$ is the number of boundaries of ${\Gamma}_k$ (i.e. the size of $\overline{{\Gamma}\setminus {\Gamma}_k} \cap {\Gamma}_k$), and $n_k$ is the rank of ${\Gamma}_k$. 
As ${\Gamma}$ has infinite rank, we have $\lim_{k \to \infty}n_k = \infty.$ Also, note $\mathop{\mathrm{Aut}}(F_{n_k})$ has the unique index 2 subgroup $\mathop{\mathrm{SAut}}(F_{n_k})$ for each $k$, whose isomorphic copy in $\mathop{\mathrm{PMap}}({\Gamma}_k)$ we denote by $G_k$. The group $\mathop{\mathrm{Aut}}(F_{n_k})$ can be identified with the subgroup of mapping classes totally supported on the core graph of ${\Gamma}_k$, and $G_{k} \cong \mathop{\mathrm{SAut}}(F_{n_{k}})$ with the set of those mapping classes that preserve orientation. Since the core graph of ${\Gamma}_{k}$ is contained in the core graph of ${\Gamma}_{k+1}$, and orientation preserving mapping classes on ${\Gamma}_k$ are orientation preserving on ${\Gamma}_{k+1}$, it follows that we have the inclusion $G_k \hookrightarrow G_{k+1}$. Hence the direct limit $\mathop{\mathrm{SAut}}_\infty({\Gamma}) := \varinjlim G_k$ is a well-defined subgroup of $\mathop{\mathrm{PMap}}({\Gamma})$. We claim that $\mathop{\mathrm{SAut}}_{\infty}({\Gamma})$ has no nontrivial finite quotients. To see this, suppose $H$ is a proper normal subgroup of $\mathop{\mathrm{SAut}}_{\infty}({\Gamma})$ with finite index $r \ge 2$. Then as $H$ is a proper subgroup of $\mathop{\mathrm{SAut}}_{\infty}({\Gamma})$ and $\varinjlim G_k = \mathop{\mathrm{SAut}}_\infty({\Gamma})$, it follows that there exists some $k_{0}$ such that whenever $k \ge k_{0}$, $G_k$ is not contained in $H$. Hence $H \cap G_k$ is a *proper* normal subgroup of $G_k$. Note $$1 \neq [G_k : G_k \cap H] \le [\mathop{\mathrm{SAut}}_\infty({\Gamma}) : H] = r,$$ but the minimal finite index of proper subgroups of $G_k \cong \mathop{\mathrm{SAut}}(F_{n_k})$ increases as $k$ does by . Therefore, $[G_k : G_k \cap H]$ cannot be uniformly bounded by $r$, a contradiction. Therefore $\mathop{\mathrm{SAut}}_\infty({\Gamma})$ has no nontrivial finite quotient, implying that both $\mathop{\mathrm{PMap}}({\Gamma})$ and $\mathop{\mathrm{Map}}({\Gamma})$ are not residually finite. 
◻ # Tits alternative {#sec:TitsAlternative} In a series of three papers [@BFH2000; @BFH2005; @BFH2004], Bestvina, Feighn, and Handel prove that $\mathop{\mathrm{Out}}(F_{n})$ satisfies what we call the **strong Tits alternative**: Every subgroup either contains a nonabelian free group or is virtually abelian. The same was previously known for mapping class groups of compact surfaces by work of Ivanov, McCarthy, and Birman--Lubotzky--McCarthy [@Ivanov1984; @McCarthy1985; @BLM1983]. However, it was shown by Lanier and Loving [@LL2020] that this is not the case for big mapping class groups. They prove that big mapping class groups *never* satisfy the strong Tits alternative by showing that they always contain a subgroup isomorphic to the wreath product $\mathbb{Z}\wr \mathbb{Z}$. In [@abbott2021finding], the authors extend this idea to find many subgroups isomorphic to wreath products. Allcock [@Allcock2021] further showed that most big mapping class groups fail the (standard) Tits alternative by finding a poison subgroup that surjects onto a Grigorchuck group. A group satisfies the **Tits alternative** if every subgroup either contains a nonabelian free subgroup or is virtually solvable. Note that some references require the subgroups to be finitely generated, but we do not need to make that restriction. ## Infinite rank: Fails to satisfy TA In this section, we find poison subgroups (analogous to the surface case) in $\mathop{\mathrm{PMap}}({\Gamma})$ for graphs ${\Gamma}$ of infinite rank. **Theorem 62**. *Let ${\Gamma}$ be a locally finite graph of infinite rank. Then $\mathop{\mathrm{PMap}}({\Gamma})$ contains a subgroup isomorphic to $\mathop{\mathrm{Aut}}(F_n) \wr \mathbb{Z}$ for all $n\in \mathbb{N}$.* *Proof.* Recall that to define the wreath product $G \wr \mathbb{Z}$, we need a $\mathbb{Z}$-indexed set of copies of $G$, denoted by $\{G_i\}_{i \in \mathbb{Z}}$. 
Then $\mathbb{Z}$ acts on the index set by translation, so it also acts on $\oplus_{i \in \mathbb{Z}} G_i$ by translation on the indices. Now set $G = \mathop{\mathrm{Aut}}(F_n)$ and denote by $\phi$ the translation action by $\mathbb{Z}$ on the direct sum. We then define $$\mathop{\mathrm{Aut}}(F_n) \wr \mathbb{Z}:=\left(\bigoplus_{\mathbb{Z}} \mathop{\mathrm{Aut}}(F_n)\right) \rtimes_{\phi} \mathbb{Z}.$$ To realize this group as a subgroup of $\mathop{\mathrm{PMap}}({\Gamma})$, we will find $\mathbb{Z}$ copies of $\mathop{\mathrm{Aut}}(F_n)$ together with a translation action. For each $n\in \mathbb{N}$, let $\Delta_n$ be the graph obtained from a line, identified with $\mathbb{R}$, by attaching a wedge of $n$ circles by an edge at each integer point; see . If ${\Gamma}$ has at least two ends accumulated by loops, we can properly homotope ${\Gamma}$ to have $\Delta_n$ as a subgraph. ![The graph $\Delta_4$, with two ends accumulated by roses with $4$ petals. It admits a translation of roses, denoted by the green dotted arrow. Up to proper homotopy, such a graph arises as a subgraph of any graph with at least two ends accumulated by loops.](pics/TA_2end.pdf){#fig:TA_2end width=".8\\textwidth"} For each $i \in \mathbb{Z}$, let $R_i$ be the wedge of circles supported above the integer point $i$ in $\Delta_n \subset \Gamma$. Let $G_i$ be the subgroup of elements of $\mathop{\mathrm{PMap}}({\Gamma})$ which are totally supported on $R_i$. Each $G_i$ is isomorphic to $\mathop{\mathrm{Aut}}(F_n)$ (see ) and the $G_i$'s have disjoint total support, so $\langle G_i\rangle_{i \in \mathbb{Z}}\cong \oplus_{\mathbb{Z}} \mathop{\mathrm{Aut}}(F_n)$. There is a shift map, call it $h$, that translates along $\Delta_n$ by $+1$ on the line and maps $R_i$ to $R_{i+1}$ isometrically. Because $h^mG_i=G_{i+m}h^m$, the subgroup of $\mathop{\mathrm{PMap}}({\Gamma})$ generated by $G_0$ and $h$ is isomorphic to $\mathop{\mathrm{Aut}}(F_n)\wr \mathbb{Z}$. 
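The relation $h^m G_i = G_{i+m}h^m$ is precisely the semidirect product rule in the wreath product. The following sketch (ours; it substitutes $\mathbb{Z}$ for $\mathop{\mathrm{Aut}}(F_n)$ to keep the code short, so it really models the subgroup $\mathbb{Z}\wr\mathbb{Z}$) represents elements as pairs (finitely supported function, shift) and verifies that conjugating the copy $G_i$ by the shift lands in $G_{i+1}$:

```python
def wreath_mult(x, y):
    """Multiply (f, s)(g, t) in Z wr Z: the product is (f + s.g, s + t),
    where (s.g)(k) = g(k - s); functions are finitely supported dicts."""
    f, s = x
    g, t = y
    h = dict(f)
    for k, v in g.items():
        h[k + s] = h.get(k + s, 0) + v  # shift the support of g by s, then add
    return ({k: v for k, v in h.items() if v != 0}, s + t)

def gen(i, v=1):
    """An element of the copy G_i: supported only at index i."""
    return ({i: v}, 0)

shift = ({}, 1)       # the shift h
shift_inv = ({}, -1)  # h^{-1}

# h g h^{-1} lies in G_3 whenever g is in G_2, i.e. h G_i = G_{i+1} h
lhs = wreath_mult(wreath_mult(shift, gen(2)), shift_inv)
assert lhs == gen(3)
```

Iterating the same computation with $h^m$ in place of $h$ verifies the general relation.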
In general, if ${\Gamma}$ has at least one end accumulated by loops, we can embed a copy of $\Delta_n$ into ${\Gamma}$ where the images of the two ends of $\Delta_n$ need not be distinct. The corresponding "shift map" will no longer be shifting between distinct ends, but this does not affect the construction of an $\mathop{\mathrm{Aut}}(F_n)\wr \mathbb{Z}$ subgroup. ◻ immediately tells us that $\mathop{\mathrm{PMap}}({\Gamma})$ fails the strong Tits alternative because $\mathbb{Z}\wr \mathbb{Z}$ is a subgroup of $\mathop{\mathrm{Aut}}(F_n) \wr \mathbb{Z}$. In [@Allcock2021], Allcock shows that big mapping class groups of surfaces with infinite genus fail the (standard) Tits alternative. His idea is to find elements of the mapping class group that "look like" the action of the Grigorchuck group on a rooted binary tree. Because these elements are not of finite order, the resulting subgroup of the mapping class group is an extension of the Grigorchuck group. When this same idea is implemented in the pure mapping class group of a graph, we instead find an exact copy of Grigorchuck's group. Many graphs, such as an infinite binary tree, also contain Grigorchuck's group as a subgroup of their *full* mapping class group in the obvious way. **Theorem 63**. *Let ${\Gamma}$ be a locally finite graph of infinite rank. Then $\mathop{\mathrm{PMap}}({\Gamma})$ contains a subgroup isomorphic to the Grigorchuck group.* *Proof.* First, we define proper homotopy equivalences $a,b,c,d$ on an infinite binary tree $T$ as in . Note that only $a$ swaps the level-1 branches. Each of the other three homotopy equivalences $b,c,d$ misses the $(3k+1)$-st, $3k$-th, and $(3k-1)$-st branch swaps for $k \ge 1$, respectively, as well as the level-1 swap. These four elements generate the Grigorchuck group, $G$ [@grigorchuck1980burnside; @grigorchuck2008intermediate]. ![Proper homotopy equivalences $a,b,c$ and $d$ on infinite binary tree $T$. 
Each green arrow denotes the swap of the two subtrees induced by the swap of the two branches.](pics/Grigorchuck.pdf){#fig:Grigorchuck width=".8\\textwidth"} Now let $\Delta$ be the infinite graph with one end accumulated by loops, constructed as in . Specifically, we start with a ray and label its vertices by $v_1,v_2,\ldots$. Attach a finite binary tree $T_i$ of level $i$ to $v_i$ for each $i \ge 1$. Then attach a single loop at each leaf of the trees. For any graph ${\Gamma}$ with infinite rank, we can apply a proper homotopy equivalence so that $\Delta$ is a subgraph. Hence, $\mathop{\mathrm{PMap}}(\Delta) \le \mathop{\mathrm{PMap}}({\Gamma})$, so it suffices to find a copy of the Grigorchuck group inside $\mathop{\mathrm{PMap}}(\Delta)$. ![The graph $\Delta$ and the proper homotopy equivalence $\hat{b}$ on $T_1,T_2,\ldots \subset \Delta$ mimicking the definition of $b$ on $T$.](pics/1endGrigorchuck.pdf){#fig:1endGrigorchuck width=".8\\textwidth"} Define a proper homotopy equivalence $\hat{b}$ as the map on the *finite* binary trees $T_1,T_2,\ldots$ 'mimicking' $b$ defined on the *infinite* binary tree $T$. See for an illustration of $\hat{b}$, denoted in green arrows. Similarly define $\hat{a},\hat{c}$ and $\hat{d}$ from $a,c$ and $d$. Denote by $\widetilde{a},\widetilde{b},\widetilde{c}$ and $\widetilde{d}$ the proper homotopy classes of $\hat{a},\hat{b},\hat{c}$ and $\hat{d}$, respectively. Following the same proof as [@Allcock2021 Lemma 4.1], we see that $\widetilde{a},\widetilde{b},\widetilde{c},\widetilde{d}$ satisfy exactly the defining relations of $G$, and $\widetilde{G}:=\langle\widetilde{a},\widetilde{b},\widetilde{c},\widetilde{d}\rangle$ is isomorphic to the Grigorchuck group. ◻ **Corollary 64**. *Let ${\Gamma}$ be a locally finite graph of infinite rank. 
Then $\mathop{\mathrm{PMap}}({\Gamma})$ and $\mathop{\mathrm{Map}}({\Gamma})$ fail the Tits alternative.* ## Finite rank: Satisfies TA On the other hand, when ${\Gamma}$ has finite rank, we get the following contrasting result. **Theorem 65**. *Let ${\Gamma}$ be a locally finite graph with finite rank. Then $\mathop{\mathrm{PMap}}({\Gamma})$ satisfies the Tits alternative. That is, every subgroup either contains a nonabelian free group or is virtually solvable.* We first need the following stability property of the Tits alternative. **Proposition 66**. *Satisfying the Tits alternative is stable under subgroups, finite-index supergroups, and group extensions. More precisely,* 1. *Let $H \le G$. If $G$ satisfies the Tits alternative, then so does $H$.* 2. *Let $H \le G$ with $[G:H]<\infty$. If $H$ satisfies the Tits alternative, then so does $G$.* 3. *[(cf. [@Cantat2011 Proposition 6.3])]{.roman} Suppose the groups $K,G,H$ form a short exact sequence as follows: $$1 \longrightarrow K \longrightarrow G \longrightarrow H \longrightarrow 1.$$ If $K$ and $H$ satisfy the Tits alternative, then so does $G$.* *Proof.* Claim (1) holds because every subgroup of $H$ is a subgroup of $G$. Claim (2) will follow from (3), applied to the normal core of $H$ in $G$ (which still has finite index), as finite groups are virtually trivial and so satisfy the Tits alternative. Now we prove (3). Let $L \le G$ be a subgroup. Then we have the following commutative diagram: $$\begin{tikzcd} 1 \rar &K \rar &G \rar{q} &H \rar &1 \\ 1 \rar &K \cap L \rar \arrow[u, hook] &L \rar{q} \arrow[u, hook] &q(L) \rar \arrow[u, hook] &1 \end{tikzcd}$$ Indeed, $K\cap L \trianglelefteq L$ and $q(L) \cong L/(K\cap L) \le H$. By (1), both $K \cap L$ and $q(L)$ satisfy the Tits alternative. If $K \cap L$ has $F_2$ as a subgroup, then so does $L$. If $q(L)$ has $F_2$ as a subgroup, then we can find a section of $q$ over it to lift $F_2$ inside $L$. Hence, we may assume both $K \cap L$ and $q(L)$ are virtually solvable. In this case, the following fact finishes the proof. 
**Fact 67** ([@dinh2012 Lemma 5.5], see also [@Cantat2011 Lemme 6.1]). *Suppose $N$ is a normal subgroup of a group $G$. If both $N$ and $G/N$ are virtually solvable, then $G$ is virtually solvable.* Hence $L$ is virtually solvable, concluding that $G$ satisfies the Tits alternative. ◻ Now we are ready to prove . *Proof of .* Let $\mathop{\mathrm{rk}}{\Gamma}= n.$ Then we have the following short exact sequence [@AB2021 Theorem 3.5]: $$1 \longrightarrow \mathcal{R}\longrightarrow \mathop{\mathrm{PMap}}({\Gamma}) \longrightarrow \mathop{\mathrm{Aut}}(F_n) \longrightarrow 1,$$ where $\mathcal{R}$ is the group of locally constant functions from $E$ to $F_n$ with pointwise multiplication. The subgroup of $\mathop{\mathrm{Out}}(F_{n+1})$ fixing one $\mathbb{Z}$ factor is naturally isomorphic to $\mathop{\mathrm{Aut}}(F_n)$. Recall that $\mathop{\mathrm{Out}}(F_{n+1})$ satisfies the Tits alternative by [@BFH2000], so $\mathop{\mathrm{Aut}}(F_n)$ does too. We will show that $\mathcal{R}$ satisfies the (strong) Tits alternative; then part (3) above guarantees that $\mathop{\mathrm{PMap}}({\Gamma})$ satisfies the Tits alternative as well. **Claim 2**. *$\mathcal{R}$ satisfies the strong Tits alternative.* Consider a (not necessarily finitely generated) subgroup $H \subset \mathcal{R}$. If all elements of $H$ commute, then $H$ is abelian and we are done. Otherwise, there exist $\phi, \psi \in H$ that do not commute; so there exists an $e \in E$ such that $\phi(e)\psi(e) \neq \psi(e) \phi(e)$. Now we use the ping-pong lemma to prove $\langle \phi, \psi \rangle \cong F_2$, which will show that $\mathcal{R}$ satisfies the strong Tits alternative. Let $X_{\phi(e)}$ and $X_{\psi(e)}$ be the sets of words in $F_n$ that start with $\phi(e)$ and $\psi(e)$, respectively. We note $X_{\phi(e)}$ and $X_{\psi(e)}$ are disjoint, as otherwise $\phi(e)=\psi(e)$, contradicting the assumption $\phi(e)\psi(e) \neq \psi(e)\phi(e)$. 
We consider the action of $H$ on $F_n$ as: $$\phi \cdot w := \phi(e)w, \qquad \text{for $\phi \in \mathcal{R}$ and $w \in F_n$}.$$ Then the same assumption $\phi(e)\psi(e) \neq \psi(e) \phi(e)$ implies that $\phi \cdot X_{\psi(e)} \subset X_{\phi(e)}$ and $\psi \cdot X_{\phi(e)} \subset X_{\psi(e)}$. Therefore, by the ping-pong lemma, we conclude that $\langle\phi, \psi\rangle \cong F_2$, so $\mathcal{R}$ satisfies the (strong) Tits alternative. ◻ We now extend these results to determine which full mapping class groups satisfy the Tits alternative. **Corollary 68**. *Let ${\Gamma}$ be a locally finite, infinite graph. Then $\mathop{\mathrm{Map}}({\Gamma})$ satisfies the Tits alternative if and only if ${\Gamma}$ has finite rank and finite end space.* *Proof.* We divide into cases. First, if ${\Gamma}$ has at least one end accumulated by loops, then $\mathop{\mathrm{Map}}({\Gamma})$ fails the Tits alternative by . Otherwise, ${\Gamma}$ has finite rank, and we continue to divide into cases. If ${\Gamma}$ has finite end space, then $\mathop{\mathrm{Map}}({\Gamma})$ is a finite extension of $\mathop{\mathrm{PMap}}({\Gamma})$, so by property (2), the full mapping class group $\mathop{\mathrm{Map}}({\Gamma})$ satisfies the Tits alternative. If ${\Gamma}$ has countably infinite end space, then we can modify the proof of by replacing the loops with rays, to realize Grigorchuck's group as a subgroup of $\mathop{\mathrm{Map}}({\Gamma})$. If the end space of ${\Gamma}$ is uncountable, then there is a closed subset of the ends homeomorphic to the whole Cantor set, so $\mathop{\mathrm{Map}}({\Gamma})$ contains Grigorchuck's group in the natural way and again fails the Tits alternative. ◻ The strong Tits alternative is not stable under group extensions (consider $\mathbb{Z}\wr \mathbb{Z}$). So, the best we could conclude about $\mathop{\mathrm{PMap}}({\Gamma})$ from the decomposition as $\mathcal{R}\rtimes \mathop{\mathrm{Aut}}(F_n)$ was . 
However, our proof that $\mathcal{R}$ satisfies the strong Tits alternative actually shows the slightly stronger statement: Any two elements of $\mathcal{R}$ which do not commute generate $F_2$. This property could be useful in answering the following question. **Question 69**. If ${\Gamma}$ is a locally finite graph of finite rank, does $\mathop{\mathrm{PMap}}({\Gamma})$ satisfy the strong Tits alternative?
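As a concrete illustration of the two-element criterion above, the sketch below (ours; words in $F_2$ are encoded as lists of signed integers) takes the noncommuting pair $\phi(e)=ab$, $\psi(e)=ba$ in $F_2=\langle a,b\rangle$ and verifies by brute force that no nontrivial reduced word of length at most $6$ in two abstract generators evaluates to the identity under the substitution, and that distinct reduced words give distinct elements, as freeness predicts:

```python
from itertools import product

def reduce_word(w):
    """Freely reduce; letters are nonzero ints, -i is the inverse of i."""
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return out

def evaluate(word, img):
    """Evaluate a word in abstract generators (letters +-1, +-2) under i -> img[i]."""
    res = []
    for x in word:
        res.extend(img[abs(x)] if x > 0 else [-y for y in reversed(img[abs(x)])])
    return tuple(reduce_word(res))

img = {1: [1, 2], 2: [2, 1]}  # phi(e) = ab and psi(e) = ba in F_2 = <a, b>
seen = {}
for L in range(1, 7):
    for w in product([1, -1, 2, -2], repeat=L):
        if len(reduce_word(list(w))) != L:
            continue                       # skip non-reduced words
        v = evaluate(list(w), img)
        assert v != ()                     # no relation of length <= 6
        assert seen.setdefault(v, w) == w  # distinct words, distinct elements
```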
{ "id": "2309.07885", "title": "Generating Sets and Algebraic Properties of Pure Mapping Class Groups of\n Infinite Graphs", "authors": "George Domat, Hannah Hoganson, Sanghoon Kwak", "categories": "math.GR math.GT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We investigate the limiting behavior of multiple ergodic averages along sparse sequences evaluated at prime numbers. Our sequences arise from smooth and well-behaved functions that have polynomial growth. Central to this topic is a comparison result between standard Cesàro averages along positive integers and averages weighted by the (modified) von Mangoldt function. The main ingredients are a recent result of Matomäki, Shao, Tao and Teräväinen on the Gowers uniformity of the latter function in short intervals, a lifting argument that allows one to pass from actions of integers to flows, a simultaneous (variable) polynomial approximation in appropriate short intervals, and some quantitative equidistribution results for the former polynomials. We derive numerous applications in multiple recurrence, additive combinatorics, and equidistribution in nilmanifolds along primes. In particular, we deduce that any set of positive density contains arithmetic progressions with step $\lfloor p^c \rfloor$, where $c$ is a positive non-integer and $p$ denotes a prime, establishing a conjecture of Frantzikinakis. address: - Department of Mathematics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece - Department of Mathematics and Applied Mathematics, University of Crete, Voutes University Campus, Heraklion 70013, Greece author: - Andreas Koutsogiannis and Konstantinos Tsinas bibliography: - final.bib title: Ergodic averages for sparse sequences along primes --- # Introduction and main results {#Section-Introduction} ## Motivation The seminal work of Furstenberg [@Furstenberg-original] towards Szemerédi's theorem has initiated significant interest in the study of ergodic theoretic problems and their applications in problems of combinatorial or number-theoretic nature. Dynamical methods have proven extremely effective at tackling problems relating to the combinatorial richness of positive density subsets of integers. 
Furthermore, there are frequently no alternative methods that can recover the results that ergodic theoretic tools provide. For instance, we have far-reaching generalizations of Szemerédi's theorem, such as the Bergelson-Leibman theorem [@Bergelson-Leibman-polynomial-VDW] that produces polynomial progressions on sets of integers with positive density. The general structure of our problems is the following: we are given a collection of sequences $a_1(n),\dots, a_k(n)$ of integers and a standard probability space $(X,\mathcal{X},\mu)$ equipped with invertible, commuting, measure-preserving transformations $T_1,\dots, T_k$ that act on $X,$ and we examine the limiting behavior of the multiple averages $$\label{E: general ergodic averages} \frac{1}{N}\sum_{n=1}^{N} T_1^{a_1(n)}f_1\cdot\ldots\cdot T_k^{a_k(n)}f_k.$$ Throughout the article, these assumptions on the transformations will be implicit; we call the tuple $(X,\mathcal{X},\mu, T_1,\ldots, T_k)$ a *measure-preserving system* (or just *system*). Here $f_1,\dots, f_k$ are functions in $L^{\infty}(\mu)$ and we concern ourselves with their convergence mainly in the $L^2$-sense. In view of Furstenberg's correspondence principle, a satisfactory answer to this problem typically ensures that sets with positive density possess patterns of the form $(m,m+a_1(n),\dots, m+a_k(n))$, where $m,n\in \mathbb{N}$. Specializing to the case where all the sequences are equal and $T_i=T^i$, we arrive at the averages $$\label{E: Furstenberg averages} \frac{1}{N}\sum_{n=1}^{N} T^{a(n)}f_1\cdot T^{2a(n)}f_2\cdot\ldots\cdot T^{ka(n)}f_k,$$which relate to patterns of arithmetic progressions, whose common difference belongs to the set $\{a(n){:}\;n\in \mathbb{N}\}$. 
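For orientation, the averages above can be simulated in the simplest instance $k=1$, $a(n)=n$, for an irrational rotation $Tx=x+\alpha \pmod 1$ on the circle. The following sketch is ours, purely illustrative and not part of the paper's argument: for the character $f(x)=e^{2\pi i x}$, the Birkhoff averages tend to $\int f\, dx=0$ by unique ergodicity of the rotation.

```python
import cmath

def birkhoff_average(alpha, x0, N, f):
    """Compute (1/N) * sum_{n=1}^{N} f(T^n x0) for the rotation T x = x + alpha (mod 1)."""
    total = 0.0 + 0.0j
    x = x0
    for _ in range(N):
        x = (x + alpha) % 1.0
        total += f(x)
    return total / N

# For f(x) = e(x) := exp(2*pi*i*x) and irrational alpha, the average tends to 0.
e = lambda x: cmath.exp(2j * cmath.pi * x)
avg = birkhoff_average(2 ** 0.5, 0.0, 10_000, e)
print(abs(avg))  # close to 0
```

For a rational $\alpha$ the orbit is periodic and the same average instead converges to a discrete mean over the orbit, which is the basic obstruction that "irrationality" (and, later, the growth conditions on Hardy field functions) removes.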
Furthermore, it is particularly tempting to conjecture that results pertaining to mean convergence of the averages in [\[E: general ergodic averages\]](#E: general ergodic averages){reference-type="eqref" reference="E: general ergodic averages"} should still be valid, if we restrict the range of summation to a sparse set such as the primes. Normalizing appropriately, we contemplate whether or not the averages $$\label{E: general ergodic averages along the primes} \frac{1}{\pi(N)} \sum_{ p\in \mathbb{P}{:}\;p\leq N}T_1^{a_1(p)}f_1\cdot\ldots\cdot T_k^{a_k(p)}f_k$$converge in $L^2(\mu)$ and what is the corresponding limit of these averages. Here, $\pi(N)$ denotes the number of primes less than or equal to $N$ and $\mathbb{P}$ is the set of primes. The first results in this direction were established in the case $k=1$. Namely, Sárközy [@sarkozy3] used methods from analytic number theory to show that sets of positive density contain patterns of the form $(m,m+p-1)$, where $p$ is a prime.[^1] Additionally, Wierdl [@Wierdl-primes] established the even stronger pointwise convergence result for the averages [\[E: general ergodic averages along the primes\]](#E: general ergodic averages along the primes){reference-type="eqref" reference="E: general ergodic averages along the primes"} in the case $k=1$ and $a_1(n)=n$, while Nair generalized this theorem to polynomials evaluated at primes [@nair-primes]. In the setting of several iterates, the first results were provided by Frantzikinakis, Host, and Kra [@Fra-Ho-Kra-primes-1], who established that sets of positive density contain 3-term arithmetic progressions whose common difference is a shifted prime. Furthermore, they demonstrated that the averages in [\[E: general ergodic averages along the primes\]](#E: general ergodic averages along the primes){reference-type="eqref" reference="E: general ergodic averages along the primes"} converge in the case $k=2$, $T_1=T_2$ and $a_i(n)=in,\ i\in\{1,2\}$. 
This was generalized significantly by Wooley and Ziegler [@Wooley-Ziegler] to hold in the case that the sequences $a_i(n),\ i\in \{1,\dots,k\}$ are polynomials with integer coefficients and the transformations $T_1,\dots, T_k$ are the same. Following that, Frantzikinakis, Host, and Kra confirmed the validity of the Bergelson-Leibman theorem in [@Fra-Host-Kra-primes] along the shifted primes. In addition, they showed that the averages in [\[E: general ergodic averages along the primes\]](#E: general ergodic averages along the primes){reference-type="eqref" reference="E: general ergodic averages along the primes"} converge in norm when $a_i(n)$ are integer polynomials. Furthermore, Sun obtained convergence and recurrence results in [@Wenbo-primes] for a single transformation and iterates of the form $i{\left \lfloor an \right \rfloor},\ i\in \{1,\dots, k\}$ or ${\left \lfloor ja n \right \rfloor}, \ j\in\{1,\dots,k\},$ with $a$ irrational. Finally, using the convergence results in [@Koutsogiannis-correlations] along $\mathbb{N}$ for integer parts of real polynomials and several transformations, the first author extended the convergence result of [@Fra-Host-Kra-primes] to real polynomials in [@koutsogiannis-closest], obtaining recurrence for polynomials with real coefficients rounded to the closest integer. In all of the previous cases, combinatorial applications along the shifted primes were derived as well. In the case of multiple iterates, a shared theme in the methods used has been the close reliance on the deep results provided by the work of Green and Tao in their effort to show that primes contain arbitrarily long arithmetic progressions [@Green-Tao-primes-progressions]. For instance, all results[^2] relied on the Gowers uniformity of the (modified) von Mangoldt function that was established in [@Green-Tao-linearequationsprimes] conditional to two deep conjectures, which were subsequently verified in [@GreeN-Tao-Ziegler-inversetheorem] and [@Green-Tao-Mobius]. 
It was conjectured by Frantzikinakis that the polynomial theorems along primes should hold for more general sequences involving fractional powers $n^c$, such as ${\left \lfloor n^{3/2} \right \rfloor}, {\left \lfloor n^{\sqrt{2}} \right \rfloor}$ or even linear combinations thereof. Indeed, it was conjectured in [@Fra-Hardy-singlecase] that the sequence ${\left \lfloor p_n^c \right \rfloor}$, where $c$ is a positive non-integer and $p_n$ denotes the $n$-th prime, is good for multiple recurrence and convergence. To be more precise, he conjectured that the averages $$\label{E: Szemeredi averages for p^c} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N}^{} T^{{\left \lfloor p^c \right \rfloor}}f_1\cdot\ldots\cdot T^{k{\left \lfloor p^c \right \rfloor}}f_k$$converge in $L^2(\mu)$ for all positive integers $k$ and all positive non-integers $c$. Analogously, we have the associated multiple recurrence conjecture, namely that all sets of positive upper density contain $k$-term arithmetic progressions with common difference of the form ${\left \lfloor p^c \right \rfloor}$. When $0<c<1$, one can leverage the fact that the range of ${\left \lfloor p_n^c \right \rfloor}$ contains all sufficiently large integers to establish the multiple recurrence result. Additionally, the convergence of the previous averages is known in the case $k=1$, since one can use the spectral theorem and the fact that the sequence $\{p_n^c a \}$ is equidistributed mod 1 for all non-zero $a\in \mathbb{R}$. This last assertion follows from [@Stux] or [@Wolke] when $c<1$ and from [@Leitmann] in the case $c>1$. There were significant obstructions to the solution of this problem. One approach would be to modify the comparison method from [@Fra-Host-Kra-primes] (concerning polynomials), but the Gowers uniformity of the von Mangoldt function is insufficient to establish this claim. 
The other approach would be to use the method of characteristic factors, which is based on the structure theorem of Host-Kra [@Host-Kra-annals]. Informally, this reduces the task of proving convergence to a specific class of systems with special algebraic structure called nilmanifolds. However, this required some equidistribution results on nilmanifolds for the sequence ${\left \lfloor p_n^c \right \rfloor}$, which were very difficult to establish. A similar conjecture by Frantzikinakis was made for more general averages of the form $$\frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} T^{{\left \lfloor p^{c_1} \right \rfloor}}f_1\cdot\ldots\cdot T^{{\left \lfloor p^{c_k} \right \rfloor}}f_k$$for distinct positive non-integer $c_1,\dots, c_k$. The recent result of Frantzikinakis [@Fra-primes] verifies that these averages converge in $L^2(\mu)$ to the product of the integrals of the functions $f_1,\dots, f_k$ in any ergodic system, even in the more general case where the sequences in the iterates are linearly independent fractional polynomials. The number theoretic input required is a sieve-theoretic upper bound for the number of tuples of primes of a specific form, as well as an equidistribution result on fractional powers of primes in the torus that was already known. These methods relied heavily on the use of the joint ergodicity results in [@Fra-jointly-ergodic] and, thus, the linear independence assumption on the fractional polynomials was absolutely essential. In the same paper, it was conjectured [@Fra-primes Problem] that the case of fractional polynomials can be generalized to a significantly larger class of functions of polynomial growth, called Hardy field functions, which we consider below. The conjecture asks for necessary and sufficient conditions so that the averages along primes converge to the product of the integrals in ergodic systems. 
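As a sanity check on the number-theoretic input mentioned above (equidistribution of fractional powers of primes in the torus), Weyl's criterion reduces equidistribution of $\{a p^c\}$ mod 1 to the smallness of exponential sums over primes. The sketch below is our illustration, not from the paper; it probes only the first harmonic for sample values of $c$ and $a$.

```python
import cmath

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def weyl_sum_along_primes(c, a, N):
    """Normalized Weyl sum (1/pi(N)) * |sum_{p <= N} e(a * p^c)|.
    A small value indicates equidistribution of {a p^c} mod 1
    (Weyl's criterion, restricted to the first harmonic)."""
    ps = primes_up_to(N)
    S = sum(cmath.exp(2j * cmath.pi * a * p ** c) for p in ps)
    return abs(S) / len(ps)

print(weyl_sum_along_primes(1.5, 1.0, 20_000))  # typically small for non-integer c
```

Running the same sum with an integer exponent $c$ and rational $a$ shows no cancellation at all, which is the arithmetic obstruction that forces the $W$-trick in the polynomial-like cases.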
The arguments in [@Fra-primes] cannot cover this larger class of functions,[^3] as it was remarked in Subsection 1.3 of that article. In this article, our objective is to strengthen the convergence results in [@Fra-Host-Kra-primes] and [@Fra-primes] and resolve the convergence problem of the averages in [\[E: Szemeredi averages for p\^c\]](#E: Szemeredi averages for p^c){reference-type="eqref" reference="E: Szemeredi averages for p^c"}. Actually, there is no advantage in confining ourselves to sequences of the form ${\left \lfloor p^c \right \rfloor}$, so we consider the more general class of sequences arising from Hardy field functions of polynomial growth (see Section [2](#Section: Background){reference-type="ref" reference="Section: Background"} for the general definition), which, loosely speaking, are functions with pleasant behavior (such as smoothness, for instance). The prototypical example of a Hardy field is the field $\mathcal{LE}$ of logarithmico-exponential functions, which are defined by a finite combination of the operations $+,-,\times, \div$ and the functions $\exp,\log$ acting on a real variable $t$ and real constants. For instance, the field $\mathcal{LE}$ contains the functions $\log^{3/2}t,$ $t^{\pi},$ $t^{17}\log t+\exp(\sqrt{t^{\log t}+\log\log t }).$ The fact that $\mathcal{LE}$ is a Hardy field was established in [@Hardy-1] and the reader can keep this in mind as a model case throughout this article. We resolve several conjectures involving the convergence of the averages in [\[E: general ergodic averages along the primes\]](#E: general ergodic averages along the primes){reference-type="eqref" reference="E: general ergodic averages along the primes"} along Hardy sequences. Consequently, we derive several applications in recurrence and combinatorics that expand the known results in the literature. Finally, we also establish an equidistribution result in nilmanifolds for sequences evaluated at primes. 
## Main results We present here our main theorems. We start by stating our mean convergence results, followed by their applications to multiple recurrence and combinatorics, and conclude our presentation with the equidistribution results in nilmanifolds. We will assume below that we are working with a Hardy field $\mathcal{H}$ that contains the polynomial functions. This assumption is not necessary, but it simplifies the proofs of our main theorems. Besides, this restriction is very mild and the most interesting Hardy fields contain the polynomials. A few results impose additional assumptions on $\mathcal{H}$ and we state those when necessary. These extra assumptions are a byproduct of convergence results along $\mathbb{N}$ in the literature that were proved under these hypotheses and we will not need to use the implied additional structure on $\mathcal{H}$ in any of our arguments. ### Comparison between averaging schemes For many number-theoretic problems, a suitable proxy for capturing the distribution of the prime numbers is the von Mangoldt function, which is defined on $\mathbb{N}$ by $$\label{E: Definition of con Mangoldt} \Lambda(n)=\begin{cases} \log p&, \ \text{if } n=p^k\ \text{for some prime } p \text{ and } k\in \mathbb{N}\\ 0&, \ \text{otherwise} \end{cases}.$$ The function $\Lambda$ has mean value 1 by the prime number theorem. Usually, the prime powers with exponents at least 2 contribute a term of significantly lower order in asymptotics, so one can think of $\Lambda$ as being supported on primes. However, due to the irregularity of the distribution of $\Lambda$ in residue classes to small moduli, one typically considers a modified version of $\Lambda$, called the W-tricked version. To define this, let $w$ be a positive integer and let $W=\prod_{p\leq w,p\in \mathbb{P}} p$. 
Then, for any integer $1\leq b\leq W$ with $(b,W)=1$, we define the $W$-tricked von Mangoldt function $\Lambda_{w,b}$ by $$\label{E: defintion of W-tricked von Mangoldt} \Lambda_{w,b}(n)=\frac{\phi(W)}{W}\Lambda(Wn+b),$$where $\phi$ denotes the Euler totient function. Our main result provides a comparison between ergodic averages along primes and averages along natural numbers. This will allow us to transfer mean convergence results for Cesàro averages to the prime setting, answering numerous conjectures regarding norm convergence of averages such as those in [\[E: general ergodic averages along the primes\]](#E: general ergodic averages along the primes){reference-type="eqref" reference="E: general ergodic averages along the primes"}, followed by applications in multiple recurrence and combinatorics. We explain the choice of the conditions on the functions $a_{ij}$ in Subsection [1.3](#strategysubsection){reference-type="ref" reference="strategysubsection"}. Roughly speaking, the first condition implies that the sequence $a_{ij}$ is equidistributed mod 1 due to a theorem of Boshernitzan (see Theorem [\[T: Boshernitzan\]](#T: Boshernitzan){reference-type="ref" reference="T: Boshernitzan"} in Section [2](#Section: Background){reference-type="ref" reference="Section: Background"}). **Theorem 1**. 
*Let $\ell,k$ be positive integers and, for all $1\leq i\leq k,\ 1\leq j\leq \ell$, let $a_{ij}\in \mathcal{H}$ be functions of polynomial growth such that $$\label{E: far away from rational polynomials} \lim_{t\to+\infty} \left|\frac{a_{ij}(t)-q(t)}{\log t} \right|=+\infty\ \text{ for every polynomial } q(t)\in \mathbb{Q}[t],$$ or $$\label{E: essentially equal to a polynomial} \lim\limits_{t\to+\infty} |a_{ij}(t)-q(t)|=0\ \text{ for some polynomial } q(t)\in \mathbb{Q}[t]+\mathbb{R}.$$Then, for any measure-preserving system $(X,\mathcal{X}, \mu, T_1,\dots, T_k)$ and functions $f_1,\dots,f_{\ell}\in L^{\infty}(\mu)$, we have $$\label{E: main average in Proposition P: the main comparison} \lim_{w\to+\infty} \ \limsup\limits_{N\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \Big\lVert \frac{1}{N}\sum_{n=1}^{N} \big(\Lambda_{w,b}(n) -1\big) \prod_{j=1}^{\ell}\big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{ij}(Wn+b) \right \rfloor}} \big)f_j \Big\rVert_{L^2(\mu)}=0.$$* **Remark 1**. *We can easily verify that each of the integer parts can be individually replaced by other rounding functions, such as the ceiling function (which we denote by $\lceil\cdot\rceil$) or the closest integer function (denoted by $[[\cdot]]$). 
This is an immediate consequence of the identities $\lceil x\rceil=-{\left \lfloor -x \right \rfloor}$ and $[[x]]={\left \lfloor x+1/2 \right \rfloor},$ for all $x\in\mathbb{R}$ and the fact that the affine shifts (by rationals) $q_1 a_{ij}+q_2, q_1,q_2\in \mathbb{Q}$, still satisfy [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} or [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"} if $a_{ij}$ does.* Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} is the main tool that we use to derive all of our applications. The bulk of the article is aimed towards establishing it and everything else is practically a corollary (in combination with known norm convergence theorems for Cesàro averages). We remark that unlike several of the theorems below, there are no "independence" assumptions between the functions $a_{ij}$, although, in applications, we will need to impose analogous assumptions to ensure convergence of the averages, firstly along $\mathbb{N}$, and then along $\mathbb{P}$. In order to clarify how the comparison works, we present the following theorem, which is effectively a corollary of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} and which shall be proven in Section [6](#Section-Proofs of remaining theorems){reference-type="ref" reference="Section-Proofs of remaining theorems"}. **Theorem 2**. *Let $\ell, k$ be positive integers, $(X,\mathcal{X}, \mu, T_1,\dots, T_k)$ be a measure-preserving system and $f_1,\dots, f_k\in L^{\infty}(\mu)$. 
Assume that for all $1\leq i\leq k,\ 1\leq j\leq \ell$, $a_{ij}\in\mathcal{H}$ are functions of polynomial growth such that the following conditions are satisfied:\ (a) Each one of the functions $a_{ij}(t)$ satisfies either $\eqref{E: far away from rational polynomials}$ or [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"}.\ (b) For all positive integers $W,b$, the averages $$\label{E: averages along Wn+b} \frac{1}{N}\sum_{n=1}^{N} \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i1}(Wn+b) \right \rfloor}} \big)f_1\cdot\ldots \cdot \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i\ell}(Wn+b) \right \rfloor}} \big)f_{\ell}$$converge in $L^2(\mu)$.* *Then, the averages $$\label{E: averages along primes converge} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i1}(p) \right \rfloor}} \big)f_1\cdot\ldots \cdot \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i\ell}(p) \right \rfloor}} \big)f_{\ell}$$converge in $L^2(\mu)$.* *Furthermore, if the averages in [\[E: averages along Wn+b\]](#E: averages along Wn+b){reference-type="eqref" reference="E: averages along Wn+b"} converge to the function $F\in L^{\infty}(\mu)$ for all positive integers $W,b$, then the limit in $L^2(\mu)$ of the averages [\[E: averages along primes converge\]](#E: averages along primes converge){reference-type="eqref" reference="E: averages along primes converge"} is equal to $F$.* In the setting of Hardy field functions, the fact that we require convergence for sequences along arithmetic progressions is typically harmless. 
Indeed, convergence results along $\mathbb{N}$ typically follow from a growth condition on the implicit functions $a_{ij}$ (such as [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"}) and it is straightforward to check that the function $a_{ij}(Wt+b)$ satisfies a similar growth condition as well. Therefore, one can think of the second condition morally as asking to establish convergence in the case $W=1$. The final part of Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"} allows us to compute the limit of averages along primes in cases where we have an expression for the limit of the standard Cesàro averages. This is possible, in rough terms, whenever the linear combinations of the functions $a_{ij}$ do not contain polynomials or functions that are approximately equal to a polynomial. The reason is that there is no explicit description of the limit of polynomial ergodic averages in a general measure-preserving system (although one can get a simplified expression in special cases, or under some total ergodicity assumptions on the system). ### Convergence of ergodic averages along primes The foremost application is that the averages in [\[E: Furstenberg averages\]](#E: Furstenberg averages){reference-type="eqref" reference="E: Furstenberg averages"} converge when $a(n)$ is a Hardy sequence and when we average along primes. This will also lead to generalizations of Szemerédi's theorem in our applications. The following theorem is a corollary of our comparison and the convergence results in [@Fra-Hardy-singlecase] (specifically, Theorems 2.1 and 2.2 of that paper). 
In conjunction with the corresponding recurrence result of Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"} below, we get an affirmative answer to a stronger version of [@Fra-Hardy-singlecase Problem 7] (this problem also reappeared in [@Fra-open Problem 27]), which was stated only for sequences of the form $n^c, c\in \mathbb{R}^{+}\setminus \mathbb{N}$. **Theorem 3**. *Let $a\in \mathcal{H}$ be a function of polynomial growth that satisfies either$$\label{E: far away from real multiples of integer polynomials} \lim_{t\to+\infty} \left|\frac{a(t)-cq(t)}{\log t} \right|=+\infty \text{ for every } c\in \mathbb{R}\text{ and every } q\in \mathbb{Z}[t],$$or $$\label{E: equal to a real multiple of integer polynomial} \lim\limits_{t\to+\infty}|a(t)-cq(t)|=d\ \text{for some } c,d\in \mathbb{R}\ \text{and some } q\in \mathbb{Z}[t].$$ Then, for any positive integer $k$, any measure-preserving system $(X,\mathcal{X},\mu,T)$ and functions $f_1,\dots,f_k\in L^{\infty}(\mu)$, we have that the averages $$\label{E: aek ole} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} T^{{\left \lfloor a(p) \right \rfloor}}f_1\cdot\ldots\cdot T^{k{\left \lfloor a(p) \right \rfloor}}f_k$$converge in $L^2(\mu)$.* *In particular, if $a$ satisfies [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"}, the limit of the averages in [\[E: aek ole\]](#E: aek ole){reference-type="eqref" reference="E: aek ole"} is equal to the limit in $L^2(\mu)$ of the averages $$\frac{1}{N}\sum_{n=1}^{N} T^nf_1\cdot\ldots\cdot T^{kn} f_k.$$* ***Comment** 1*. We can replace the floor function in [\[E: aek ole\]](#E: aek ole){reference-type="eqref" reference="E: aek ole"} with either the function $\lceil\cdot\rceil$ or the function $[[\cdot ]]$. 
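The interchangeability of the rounding functions rests on the elementary identities $\lceil x\rceil=-{\left \lfloor -x \right \rfloor}$ and $[[x]]={\left \lfloor x+1/2 \right \rfloor}$ recalled earlier. A quick computational check (ours; the helper name `closest_integer` is hypothetical):

```python
import math

def closest_integer(x):
    """[[x]] = floor(x + 1/2): round to the closest integer, halves rounding up."""
    return math.floor(x + 0.5)

# ceil(x) = -floor(-x) holds for every real x
for x in [3.2, -3.2, 0.5, -0.5, 7.0, 2 ** 0.5]:
    assert math.ceil(x) == -math.floor(-x)

# [[.]] rounds to the nearest integer, with ties going up
assert closest_integer(3.49) == 3 and closest_integer(3.51) == 4
assert closest_integer(0.5) == 1 and closest_integer(-0.5) == 0
print("identities verified")
```

Since an affine shift $q_1 a + q_2$ with rational $q_1,q_2$ preserves the growth conditions imposed on $a$, passing between the three rounding functions only perturbs the iterates by such shifts, which is why each rounding can be chosen independently in each iterate.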
The assumption that the iterates are Hardy field functions can also be relaxed. We discuss this more in Section [7](#Section: more general iterates){reference-type="ref" reference="Section: more general iterates"}. Observe that there is only one function appearing in the statement of the previous theorem. The following convergence results concern the case where we may have several different Hardy field functions. In both cases, there are some "independence" assumptions between the functions involved, which has the advantage of providing an exact description of the limit for the averages along $\mathbb{N}$. Thus, we can get a description for the limit along $\mathbb{P}$ as well. The following theorem concerns the "jointly ergodic" case for one transformation, which refers to the setting when we have convergence to the product of the integrals in ergodic systems. Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} combines with [@Tsinas Theorem 1.2] to provide the next result. This generalizes the theorem of Frantzikinakis [@Fra-primes Theorem 1.1] and gives a positive answer to [@Fra-primes Problem]. Unlike the previous theorem, we have to impose here an additional assumption on $\mathcal{H}$, since the respective convergence result along $\mathbb{N}$ is established under this condition. The field $\mathcal{LE}$ does not have the property appearing in the ensuing theorem, but it is contained in the Hardy field of Pfaffian functions, which does (for the definition, see [@Tsinas Section 2]). **Theorem 4**. 
*Let $\mathcal{H}$ be a Hardy field that contains $\mathcal{LE}$ and is closed under composition and compositional inversion of functions, when defined.[^4] For a positive integer $k,$ let $a_1,\dots,a_k$ be functions of polynomial growth and assume that every non-trivial linear combination $a$ of them satisfies $$\label{E: jointly ergodic condition} \lim\limits_{t\to+\infty} \Bigl| \frac{a(t)-q(t)}{\log t} \Bigr|=+\infty \text{ for every } q(t)\in \mathbb{Z}[t].\footnote{Actually, we can assume here the more general condition that $$\frac{1}{N}\sum_{n=1}^N e(t_1{\left \lfloor a_1(n) \right \rfloor}+\ldots +t_k{\left \lfloor a_k(n) \right \rfloor})\to 0, $$ for every $(t_1,\ldots,t_k)\in [0,1)^k\setminus\{(0,\ldots,0)\},$ where $e(x)=e^{2\pi i x},$ $x\in \mathbb{R}$ (see the remark under \cite[Theorem 1.2]{Tsinas}). This condition is necessary and sufficient in order for \eqref{E: jointly ergodic averages along primes} to hold. }$$* *Then, for any measure-preserving system $(X,\mathcal{X},\mu,T)$ and functions $f_1,\dots, f_k\in L^{\infty}(\mu)$, we have that $$\label{E: jointly ergodic averages along primes} \lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} T^{{\left \lfloor a_1(p) \right \rfloor}}f_1\cdot \ldots \cdot T^{{\left \lfloor a_k(p) \right \rfloor}}f_k = \tilde{f}_1\cdot\ldots\cdot \tilde{f}_k, %\footnote{ Via the ergodic decomposition, for a general system, we get that the corresponding limit equals to the product of conditional expectations of the $f_i$'s with respect to the $\sigma$-algebra of $T$-invariant sets.}$$where $\tilde{f}_i:=\mathbb{E}(f_i|\mathcal{I}(T))=\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^N T^n f_i$ and the convergence is in $L^2(\mu)$.* **Remark 2**. 
*We remark that we can also transfer the convergence result appearing in [@tsinas-pointwise Theorem 1.3] to primes, although we do not have useful information on the limiting behavior of the associated averages to deduce recurrence results.* In the case of several commuting transformations, knowledge of the limiting behavior for averages along $\mathbb{N}$ is sparse. This is naturally a barrier to proving multidimensional analogs of our recurrence results below along primes. Nonetheless, we have the following convergence theorem, which adapts the convergence result in [@Fra-Hardy-multidimensional Theorem 2.3] to the prime setting. By a shift-invariant Hardy field, we are referring to a Hardy field such that $a(t+h)\in \mathcal{H}$ for any $h\in \mathbb{Z}$ and function $a(t)\in \mathcal{H}$. **Theorem 5**. *Let $k\in \mathbb{N},$ $\mathcal{H}$ be a shift-invariant Hardy field, $a_1,\dots,a_k$ be functions in $\mathcal{H}$ with pairwise distinct growth rates and such that there exist integers $d_i\geq 0$ satisfying $$\lim\limits_{t\to+\infty} \Bigl| \frac{a_i(t)}{t^{d_i} \log t} \Bigr|=\lim\limits_{t\to+\infty} \Bigl| \frac{t^{d_i+1}}{a_i(t)} \Bigr|=0.$$ Then, for any system $(X,\mathcal{X},\mu,T_1,\dots, T_k)$ and functions $f_1,\dots, f_k\in L^{\infty}(\mu)$, we have $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} T_1^{{\left \lfloor a_1(p) \right \rfloor}}f_1\cdot \ldots \cdot T_k^{{\left \lfloor a_k(p) \right \rfloor}}f_k = \tilde{f}_1\cdot\ldots\cdot \tilde{f}_k,$$where $\tilde{f}_i:=\mathbb{E}(f_i|\mathcal{I}(T_i))=\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^N T_i^n f_i$ and the convergence is in $L^2(\mu)$.* While there are more restrictions compared to Theorem [Theorem 4](#T: jointly ergodic case){reference-type="ref" reference="T: jointly ergodic case"}, we note that Theorem [Theorem 5](#T: nikos result to primes){reference-type="ref" reference="T: nikos result to primes"} shows that we have norm convergence in the case when 
$a_i(t)=t^{c_i}$ for distinct, positive non-integers $c_i$. ***Comment** 2*. In the previous two theorems, we can replace the integer part with any other rounding function in each iterate individually (see Remark [Remark 1](#R: replacing rounding functions){reference-type="ref" reference="R: replacing rounding functions"}). ### Applications to multiple recurrence and combinatorics In this subsection, we will translate the previous convergence results to multiple recurrence results and then combine them with Furstenberg's correspondence principle to derive combinatorial applications. Due to arithmetic obstructions arising from polynomials, we have to work with the set of shifted primes in some cases. In addition, it was observed in [@koutsogiannis-closest] that in the case of real polynomials, one needs to work with the rounding to the closest integer function instead of the floor function. Indeed, even in the case of sequences of the form ${\left \lfloor ap(n)+b \right \rfloor}$, explicit conditions that describe multiple recurrence are very complicated (cf. [@Fra-Hardy-singlecase Footnote 4]). Our first application relates to averages of the form [\[E: general ergodic averages along the primes\]](#E: general ergodic averages along the primes){reference-type="eqref" reference="E: general ergodic averages along the primes"}. We have the following theorem. **Theorem 6**. *Let $a\in \mathcal{H}$ be a function of polynomial growth. 
Then, for any measure-preserving system $(X,\mathcal{X},\mu,T),$ $k\in \mathbb{N},$ and set $A$ with positive measure, we have the following:\ (a) If $a$ satisfies [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"}, we have $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \mu(A\cap T^{-{\left \lfloor a(p) \right \rfloor}}A\cap \dots\cap T^{-{k{\left \lfloor a(p) \right \rfloor}}}A )> 0.$$ (b) If $a$ satisfies [\[E: equal to a real multiple of integer polynomial\]](#E: equal to a real multiple of integer polynomial){reference-type="eqref" reference="E: equal to a real multiple of integer polynomial"} with $cq(0)+d=0$,[^5] then for any set $A$ with positive measure, the set $$\left\{n\in\mathbb{N}:\; \mu\big(A\cap T^{-[[a(n)]]}A \cap \dots \cap T^{-k[[a(n)]]}A \big)>0\right\}$$has non-empty intersection with the sets $\mathbb{P}-1$ or $\mathbb{P}+1$.* We recall that for a subset $E$ of $\mathbb{N}$, its upper density $\bar{d}(E)$ is defined by $$\bar{d}(E):=\limsup\limits_{N\to+\infty}\frac{|E\cap\{1,\ldots,N\}|}{N}.$$ **Corollary 7**. 
*For any set $E\subseteq \mathbb{N}$ of positive upper density, $k\in \mathbb{N},$ and function $a\in \mathcal{H}$ of polynomial growth, the following holds:\ (a) If $a$ satisfies [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"}, we have$$\liminf\limits_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} \bar{d}\big(E\cap (E-{\left \lfloor a(p) \right \rfloor})\cap \dots\cap (E-k{\left \lfloor a(p) \right \rfloor})\big)>0.$$ (b) If $a$ satisfies [\[E: equal to a real multiple of integer polynomial\]](#E: equal to a real multiple of integer polynomial){reference-type="eqref" reference="E: equal to a real multiple of integer polynomial"} with $cq(0)+d=0$, then the set $$\left\{n\in\mathbb{N}:\; \bar{d}\big(E\cap (E-[[a(n)]]) \cap \dots \cap (E-k[[a(n)]]) \big)>0\right\}$$has non-empty intersection with the sets $\mathbb{P}-1$ or $\mathbb{P}+1$.[^6]* Specializing to the case where $a(n)=n^c$ with $c$ a positive non-integer, Theorem [Theorem 3](#T: convergence of Furstenberg averages){reference-type="ref" reference="T: convergence of Furstenberg averages"} and part (a) of Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"} provide an affirmative answer to [@Fra-open Problem 27]. **Remark 3**. *In part (a) of both Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"} and Corollary [Corollary 7](#C: Szemeredi corollary){reference-type="ref" reference="C: Szemeredi corollary"}, one can evaluate the sequences along $p+u$ instead of $p$, for any $u\in \mathbb{Z}$, or even more generally along the affine shifts $ap+b$ for $a, b\in \mathbb{Q}$ with $a\neq 0$. 
This follows from the fact that the function $a_i(at+b)$ satisfies [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"} as well. However, the shifts $p-1$ and $p+1$ are the only correct ones in part (b) of Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"}. Notice also that the function ${\left \lfloor \cdot \right \rfloor}$ can be replaced by $\lceil\cdot\rceil$ or $[[\cdot]]$ in part (a) of the two previous statements.* Now, we state the recurrence result obtained by Theorem [Theorem 4](#T: jointly ergodic case){reference-type="ref" reference="T: jointly ergodic case"}. **Theorem 8**. *Let $k\in\mathbb{N},$ $\mathcal{H}$ be a Hardy field that contains $\mathcal{LE}$ and is closed under composition and compositional inversion of functions, when defined, and suppose $a_1,\dots,a_k\in \mathcal{H}$ are functions of polynomial growth whose non-trivial linear combinations satisfy [\[E: jointly ergodic condition\]](#E: jointly ergodic condition){reference-type="eqref" reference="E: jointly ergodic condition"}. Then, for any measure-preserving system $(X,\mathcal{X},\mu,T),$ and set $A$ with positive measure, we have that $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} \mu\big(A\cap T^{-{\left \lfloor a_1(p) \right \rfloor}}A \cap \dots \cap T^{-{\left \lfloor a_k(p) \right \rfloor}}A \big)\geq \big(\mu(A)\big)^{k+1}.$$* **Corollary 9**. 
*For any $k\in\mathbb{N},$ set $E\subseteq \mathbb{N}$ of positive upper density, Hardy field $\mathcal{H}$ and functions $a_1,\dots, a_k\in \mathcal{H}$ as in Theorem [Theorem 8](#T: multiple recurrence in the jointly ergodic case){reference-type="ref" reference="T: multiple recurrence in the jointly ergodic case"}, we have $$\liminf\limits_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} \bar{d}\big(E\cap (E-{\left \lfloor a_1(p) \right \rfloor})\cap \dots \cap (E-{\left \lfloor a_k(p) \right \rfloor})\big)\geq \big(\bar{d}(E)\big)^{k+1}.$$* In particular, we conclude that for any set $E\subseteq \mathbb{N}$ with positive upper density and $a_1,\dots, a_k$ as above, the set $$\{n\in \mathbb{N}{:}\;\text{ there exists } m\in\mathbb{N}\text{ such that } m, m+{\left \lfloor a_1(n) \right \rfloor},\ldots, m+{\left \lfloor a_k(n) \right \rfloor} \in E \}$$ has non-empty intersection with the set $\mathbb{P}$. The following is a multidimensional analog of Theorem [Theorem 8](#T: multiple recurrence in the jointly ergodic case){reference-type="ref" reference="T: multiple recurrence in the jointly ergodic case"} and relies on the convergence result of Theorem [Theorem 5](#T: nikos result to primes){reference-type="ref" reference="T: nikos result to primes"}. **Theorem 10**. *Let $k\in\mathbb{N},$ $\mathcal{H}$ be a shift-invariant Hardy field and suppose that $a_1,\dots,a_k\in \mathcal{H}$ are functions of polynomial growth that satisfy the hypotheses of Theorem [Theorem 5](#T: nikos result to primes){reference-type="ref" reference="T: nikos result to primes"}. 
Then, for any system $(X,\mathcal{X},\mu,T_1,\dots, T_k)$ and set $A$ with positive measure, we have that $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \mu\big(A\cap T_1^{-{\left \lfloor a_1(p) \right \rfloor}}A\cap \dots \cap T_k^{-{\left \lfloor a_k(p) \right \rfloor}}A \big)\geq \big(\mu(A)\big)^{k+1}.$$* Lastly, we present the corresponding combinatorial application of our last multiple recurrence result. We recall that for a set $E\subseteq \mathbb{Z}^d,$ its *upper density* is given by $$\Bar{d}(E):=\limsup_{N\to+\infty}\frac{|E\cap\{-N,\ldots,N\}^d|}{(2N)^d}.$$ **Corollary 11**. *For any $k\in\mathbb{N},$ set $E\subseteq \mathbb{Z}^d$ of positive upper density, Hardy field $\mathcal{H}$ and functions $a_1,\dots, a_k\in \mathcal{H}$ as in Theorem [Theorem 10](#T: multidimensional recurrence for primes){reference-type="ref" reference="T: multidimensional recurrence for primes"} and vectors ${\bf v}_1,\ldots,{\bf v}_k\in \mathbb{Z}^d$, we have $$\liminf\limits_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} \bar{d}\big(E\cap (E-{\left \lfloor a_1(p) \right \rfloor}{\bf v}_1)\cap \dots \cap (E-{\left \lfloor a_k(p) \right \rfloor}{\bf v}_k)\big)\geq \big(\bar{d}(E)\big)^{k+1}.$$* ***Comment** 3*. Once again, we remark that in the recurrence results in both Theorem [Theorem 8](#T: multiple recurrence in the jointly ergodic case){reference-type="ref" reference="T: multiple recurrence in the jointly ergodic case"} and Theorem [Theorem 10](#T: multidimensional recurrence for primes){reference-type="ref" reference="T: multidimensional recurrence for primes"} and the corresponding corollaries, one can replace $p$ with any other affine shift $ap+b$ with $a, b\in \mathbb{Q}$ $(a\neq 0),$ as we explained in Remark [Remark 3](#R: remark for shifts of primes){reference-type="ref" reference="R: remark for shifts of primes"}. In addition, one can replace the floor functions with either $\lceil \cdot \rceil$ or $[[\cdot]]$. 
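As a purely numerical illustration of the density notions above, the truncated ratios in the definition of $\bar{d}$ are easy to experiment with. The following sketch (in Python; the function name and the example sets are ours, and a single truncation level $N$ only approximates the $\limsup$) computes such a ratio in the one-dimensional case.

```python
import math

def upper_density_estimate(in_E, N):
    """Truncated ratio |E ∩ {1,...,N}| / N; the upper density d̄(E) is the
    limsup of these ratios as N → +∞, so this is only a finite approximation."""
    return sum(1 for n in range(1, N + 1) if in_E(n)) / N

# The even numbers have upper density 1/2, while the perfect squares have upper density 0.
evens = lambda n: n % 2 == 0
squares = lambda n: math.isqrt(n) ** 2 == n
```

For instance, `upper_density_estimate(evens, 10**4)` equals exactly $1/2$, while for the squares the ratio decays like $N^{-1/2}$.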
### Equidistribution in nilmanifolds In this part, we present some results relating to pointwise convergence in nilmanifolds along Hardy sequences evaluated at primes. We have the following theorem that is similar in spirit to Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"}. **Theorem 12**. *Let $k$ be a positive integer. Assume that $a_1,\dots, a_k\in \mathcal{H}$ are functions of polynomial growth, such that the following conditions are satisfied:\ (a) For every $1\leq i\leq k$, the function $a_{i}(t)$ satisfies either $\eqref{E: far away from rational polynomials}$ or [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"}.\ (b) For all positive integers $W,b$, any nilmanifold $Y=H/\Delta$, pairwise commuting elements $u_1,\dots,u_k$ of $H$ and points $y_1,\dots, y_k\in Y$, the sequence $$\Big( u_1^{{\left \lfloor a_1(Wn+b) \right \rfloor}}y_1, \ldots, u_k^{{\left \lfloor a_k(Wn+b) \right \rfloor}} y_k\Big)$$ is equidistributed on the nilmanifold $\overline{(u_1^{\mathbb{Z}} y_1)}\times\dots \times~ \overline{(u_k^{\mathbb{Z}} y_k)}$.* *Then, for any nilmanifold $X=G/\Gamma$, pairwise commuting elements $g_1,\dots, g_k\in G$ and points $x_1,\dots, x_k\in X$, the sequence $$\Big(g_1^{{\left \lfloor a_1(p_n) \right \rfloor}}x_1,\dots, g_k^{{\left \lfloor a_k(p_n) \right \rfloor}}x_k \Big)_{n\in \mathbb{N}},$$where $p_n$ denotes the $n$-th prime, is equidistributed on the nilmanifold $\overline{(g_1^{\mathbb{Z}} x_1)}\times\dots \times~ \overline{(g_k^{\mathbb{Z}} x_k)}$.* The "pointwise convergence" assumption (b) could be replaced by a weaker hypothesis of convergence in the $L^2$-sense. However, we will not benefit from this in applications, so we opt not to state our results in that setup.
In the case of a polynomial function, a convergence result along primes follows by combining [@Green-Tao-Mobius Theorem 7.1] (which is the case of linear polynomials) and the fact that any polynomial orbit on a nilmanifold can be lifted to a linear orbit of a unipotent affine transformation on a larger nilmanifold (an argument due to Leibman [@Leibman-nil-polynomial-equidistribution]). Nonetheless, in this case, we do not have a nice description for the orbit of this polynomial sequence. On the other hand, equidistribution results in higher-step nilmanifolds (along primes) for sequences such as ${\left \lfloor n^c \right \rfloor}$, with $c$ a non-integer ($c>1$), are unknown even in the simplest case of one fractional power. Theorem [Theorem 12](#T: criterion for pointwise convergence along primes-nil version){reference-type="ref" reference="T: criterion for pointwise convergence along primes-nil version"} will allow us to obtain the first results in this direction from the corresponding results along $\mathbb{N}$. Equidistribution results for Hardy sequences along $\mathbb{N}$ were obtained originally by Frantzikinakis in [@Fra-equidsitribution], while more recently new results were established by Richter [@Richter] and the second author [@tsinas-pointwise]. In view of the structure theory of Host-Kra [@Host-Kra-annals], results of this nature are essential to demonstrate that the corresponding multiple ergodic averages along $\mathbb{N}$ converge in $L^2(\mu)$. All of the pointwise convergence theorems that we mentioned above can be transferred to the prime setting. As an application, we state the following sample corollary of Theorem [Theorem 12](#T: criterion for pointwise convergence along primes-nil version){reference-type="ref" reference="T: criterion for pointwise convergence along primes-nil version"}. 
The term *invariant under affine shifts* refers to a Hardy field $\mathcal{H}$ for which $a(Wt+b)\in \mathcal{H}$ whenever $a\in \mathcal{H}$, for all $W,b\in \mathbb{N}$. **Corollary 13**. *Let $k$ be a positive integer, $\mathcal{H}$ be a Hardy field invariant under affine shifts, and suppose that $a_1,\dots, a_k\in \mathcal{H}$ are functions of polynomial growth, for which there exists an $\varepsilon>0$, so that every non-trivial linear combination $a$ of them satisfies $$\label{E: t^e away from polynomials} \lim\limits_{t\to+\infty} \Bigl| \frac{a(t)-q(t)}{ t^{\varepsilon}} \Bigr|=+\infty \text{ for every } q(t)\in \mathbb{Z}[t].$$ Then, for any collection of nilmanifolds $X_i=G_i/\Gamma_i$, $i=1,\dots, k$, elements $g_i\in G_i$ and points $x_i\in X_i$, the sequence $$\big( g_1^{{\left \lfloor a_1(p_n) \right \rfloor}}x_1,\ldots,g_k^{{\left \lfloor a_k(p_n) \right \rfloor}}x_k \big)_{n\in \mathbb{N}},$$where $p_n$ denotes the $n$-th prime, is equidistributed on the nilmanifold $\overline{(g_1^{\mathbb{Z}} x_1)}\times\dots \times~ \overline{(g_k^{\mathbb{Z}} x_k)}$.* The assumption in [\[E: t\^e away from polynomials\]](#E: t^e away from polynomials){reference-type="eqref" reference="E: t^e away from polynomials"} is a byproduct of the corresponding equidistribution result along $\mathbb{N}$ proven in [@tsinas-pointwise]. Also, the assumption on $\mathcal{H}$ can be dropped since the arguments in [@tsinas-pointwise] rely on some growth assumptions on the functions $a_{i}$ which translate over to their shifted versions. We choose not to remove the assumption here since the results in [@tsinas-pointwise] are not stated in this setup.
Our corollary implies that the sequence $$\big( g_1^{{\left \lfloor p_n^{c_1} \right \rfloor}}x_1,\ldots,g_k^{{\left \lfloor p_n^{c_k} \right \rfloor}}x_k \big)$$is equidistributed on the subnilmanifold $\overline{(g_1^{\mathbb{Z}} x_1)}\times\dots \times~ \overline{(g_k^{\mathbb{Z}} x_k)}$ of $X_1\times\dots\times X_k$, for any distinct positive non-integers $c_1,\dots, c_k$ and for all points $x_i\in X_i$. This is stronger than the result of Frantzikinakis [@Fra-primes] that establishes convergence in the $L^2$-sense (for linearly independent fractional polynomials). Our result is novel even in the simplest case $k=1$. Furthermore, we remark that in the case $k=1$ we can actually replace [\[E: t\^e away from polynomials\]](#E: t^e away from polynomials){reference-type="eqref" reference="E: t^e away from polynomials"} with the optimal condition that $a(t)-q(t)$ grows faster than $\log t$, for all $q(t)$ that are real multiples of integer polynomials, using the results from [@Fra-equidsitribution]. ## Strategy of the proof and organization {#strategysubsection} The bulk of the paper is spent on establishing the asserted comparison between the $W$-tricked averages and the standard Cesàro averages (Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}). The main trick is to recast our problem in the setting where our averages range over a short interval of the form $[N, N+L(N)]$, where $L(t)$ is a function of sub-linear growth chosen so that Hardy sequences are approximated sufficiently well by polynomials in these intervals.
Naturally, the study of the primes in short intervals requires strong number theoretic input and this is provided by the recent result in [@MSTT] on the Gowers uniformity of several arithmetic functions in short intervals (this is Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"} in the following section). The strategy of restricting ergodic averages to short intervals was first used by Frantzikinakis in [@Fra-Hardy-singlecase] to demonstrate the convergence of the averages in [\[E: Furstenberg averages\]](#E: Furstenberg averages){reference-type="eqref" reference="E: Furstenberg averages"} when $a(n)$ is a Hardy sequence and then amplified further by the second author in [@Tsinas] to resolve the problem in the more general setting of the averages in [\[E: general ergodic averages\]](#E: general ergodic averages){reference-type="eqref" reference="E: general ergodic averages"} (for one transformation). Certainly, the uniformity estimate in Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"} requires that the interval is not too short, but it was observed in [@Tsinas] that one can take the function $L(t)$ to grow sufficiently fast, as long as one is willing to tolerate polynomial approximations with much larger degrees. After this step has been completed, one typically employs a complexity reduction argument (commonly referred to as PET induction in the literature) that relies on repeated applications of the van der Corput inequality. Using this approach, one derives iterates that are comprised of several expressions with integer parts, which are then assembled together using known identities for the floor function (with an appropriate error). 
This approach was used for the conventional averages over $\mathbb{N}$ in [@Fra-Hardy-singlecase] and [@Tsinas], because one can sloppily combine integer parts in the iterates at the cost of inserting a bounded weight in the corresponding averages. To be more precise, this weight is actually the characteristic function of a subset of $\mathbb{N}$. However, we cannot afford to do this blindly in our setting, since there is no guarantee that this subset of $\mathbb{N}$ does not correlate very strongly with $\Lambda_{w,b}(n)-1$, which could imply that the resulting average is large. The fact that the weight $\Lambda_{w,b}-1$ is unbounded complicates this step as well. Nonetheless, it was observed in [@koutsogiannis-closest] (using an argument from [@Koutsogiannis-correlations]),[^7] that if the fractional parts of the sequences in the iterates do not concentrate heavily around 1, then one can pass to an extension of the system $(X,\mathcal{X},\mu, T_1,\dots, T_k)$, wherein the actions $T_i$ are lifted to $\mathbb{R}$-actions (also called measure-preserving flows) and the integer parts are removed. Since there are no nuisances with combining rounding functions in the iterates, one can then run the complexity reduction argument in the new system and obtain the desired bounds. Unfortunately, there is still an obstruction in this approach arising from the fact that the flows in the extension are not continuous. To be more precise, let us assume that we have derived an approximation of the form $a(n)=p_N(n)+\varepsilon_N(n)$, where $n\in [N, N+L(N)]$, $p_N(n)$ is a Taylor polynomial and $\varepsilon_N(n)$ is the remainder term. The PET induction can eliminate the polynomials $p_N(n)$, by virtue of the simple observation that taking sufficiently many "discrete derivatives" makes a polynomial vanish. However, this procedure cannot eliminate the error term $\varepsilon_N(n)$ at all, and the fact that the flow is not continuous prohibits us from replacing it with zero.
Thus, we discard $\varepsilon_N(n)$ beforehand. This is done by studying the equidistribution properties of the polynomial $p_N(n)$ in the prior approximation, using standard results from the equidistribution theory of finite polynomial orbits due to Weyl. Practically, we show that for "almost all" values of $n$ in the interval $[N, N+L(N)]$, we can write ${\left \lfloor p_N(n)+\varepsilon_N(n) \right \rfloor}={\left \lfloor p_N(n) \right \rfloor}$, so that the error $\varepsilon_N(n)$ can be removed from the expressions in the iterates. In our approach, some equidistribution assumptions on our original functions are required. This clarifies the conditions in Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}. Indeed, [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} implies that the sequence $\big(a_{ij}(n)\big)_{n}$ is equidistributed modulo 1 (due to Theorem [\[T: Boshernitzan\]](#T: Boshernitzan){reference-type="ref" reference="T: Boshernitzan"}), while condition [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"} implies that the function $a_{ij}(t)$ is essentially equal to a polynomial with rational coefficients (thus periodic modulo 1). ### A simple example We demonstrate the methods discussed above in a basic case that avoids most complications that appear in the general setting. Even this simple case, however, is not covered by prior methods in the literature. We will use some prerequisites from the following section, such as Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"}.
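Before turning to the example, we note that the equidistribution modulo 1 guaranteed by condition [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} can be observed numerically for the function $t^{3/2}$ featured below. The following sketch (in Python; the target interval $[0,1/2)$ and the sample size are arbitrary choices of ours, and this is a sanity check rather than a proof) counts how often the fractional parts land in $[0,1/2)$.

```python
import math

# Fraction of n <= N with {n^(3/2)} in [0, 1/2);
# equidistribution mod 1 predicts a proportion close to 1/2.
N = 100_000
hits = sum(1 for n in range(1, N + 1) if (n * math.sqrt(n)) % 1.0 < 0.5)
proportion = hits / N
```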
We consider the averages $$\label{E: three Aps example primes} \frac{1}{\pi(N)} \sum_{p\in\mathbb{P}:\; p\leq N}T^{{\left \lfloor p^{3/2} \right \rfloor}}f_1\cdot T^{2{\left \lfloor p^{3/2} \right \rfloor}}f_2,$$where $(X,\mathcal{X},\mu,T)$ is a system and $f_1, f_2\in L^\infty(\mu)$. For every $1\leq b\leq W$ with $(b,W)=1$, we study the averages $$\label{E: three Aps example} \frac{1}{N}\sum_{n=1}^N \big(\Lambda_{w,b}(n)-1\big) T^{{\left \lfloor n^{3/2} \right \rfloor}}f_1\cdot T^{2{\left \lfloor n^{3/2} \right \rfloor}}f_2,$$which is the required comparison for the averages in [\[E: three Aps example primes\]](#E: three Aps example primes){reference-type="eqref" reference="E: three Aps example primes"}. We will show that as $N\to+\infty$ and then $w\to+\infty$, the norm of this average converges to 0 uniformly in $b$. We set $L(t)=t^{0.65}$. Notice that $L(t)$ grows faster than $t^{5/8}$, which is a necessary condition to use Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"}. In order to establish the required convergence for the averages in [\[E: three Aps example\]](#E: three Aps example){reference-type="eqref" reference="E: three Aps example"}, it suffices to show that $$\label{E: second equation in first example} \limsup\limits_{r\to+\infty} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} \big(\Lambda_{w,b}(n)-1\big) T^{{\left \lfloor n^{3/2} \right \rfloor}}f_1 \cdot T^{2{\left \lfloor n^{3/2} \right \rfloor}} f_2 \Big\rVert_{L^2(\mu)}=o_w(1)$$uniformly in $b$. We remark that in the more general case that encompasses several functions, we will need to average over the parameter $r$ as well and, thus, we are dealing with a double-averaging scheme. This reduction is the content of Lemma [Lemma 39](#L: long averages to short averages){reference-type="ref" reference="L: long averages to short averages"}. 
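To get a feeling for the weight appearing in [\[E: three Aps example\]](#E: three Aps example){reference-type="eqref" reference="E: three Aps example"}, one can check numerically that the averages of $\Lambda_{w,b}$ are close to 1, which is what makes $\Lambda_{w,b}-1$ a plausible error weight. The sketch below (in Python) assumes the standard $W$-tricked normalization $\Lambda_{w,b}(n)=\frac{\phi(W)}{W}\Lambda(Wn+b)$ with $W=\prod_{p\leq w}p$; since this normalization is not restated in the present section, it should be read as our assumption, and the parameters $w$, $b$, $N$ are illustrative.

```python
import math

def von_mangoldt(m):
    """Λ(m) = log p if m = p^k for some prime p and k >= 1, and 0 otherwise."""
    if m < 2:
        return 0.0
    p, d = None, 2
    while d * d <= m:
        if m % d == 0:
            p = d
            break
        d += 1 if d == 2 else 2  # trial division: 2, then odd candidates
    if p is None:
        return math.log(m)  # m itself is prime
    while m % p == 0:
        m //= p
    return math.log(p) if m == 1 else 0.0

# w = 5 gives W = 2*3*5 = 30 and φ(W) = 8; take b = 1, which is coprime to W.
W, phi_W, b, N = 30, 8, 1, 20_000
mean = (phi_W / W) * sum(von_mangoldt(W * n + b) for n in range(1, N + 1)) / N
```

By the prime number theorem in arithmetic progressions, `mean` should be close to 1 for large $N$, so the weight $\Lambda_{w,b}(n)-1$ has small average.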
Using the Taylor expansion around $r$, we can write for every $0\leq h\leq L(r)$:$$(r+h)^{3/2}=r^{3/2}+\frac{3r^{1/2}h}{2}+\frac{3h^2}{8r^{1/2}} -\frac{3h^3}{48\xi^{3/2}_{h}}, \ \text{ where } \xi_{h}\in [r,r+h].$$ Observe that the error term is smaller than a constant multiple of $$\frac{\big(L(r)\big)^{3}}{r^{3/2}}=o_r(1).$$ We show that we have $${\left \lfloor (r+h)^{3/2} \right \rfloor}={\left \lfloor r^{3/2}+\frac{3r^{1/2}h}{2}+\frac{3h^2}{8r^{1/2}} \right \rfloor}$$for "almost all" $0\leq h \leq L(r)$, in the sense that the number of $h$'s that do not obey this relation is bounded by a constant multiple of $L(r)\log^{-100} r$ (say). Thus, their contribution to the average is negligible, since the sequence $\Lambda_{w,b}$ has size comparable to $\log r$. Let us denote by $p_r(h)$ the quadratic polynomial in the Taylor expansion above. In order to establish this assertion, we will investigate the discrepancy of the finite sequence $\big(p_r(h)\big)_{0\leq h \leq L(r)}$, using some exponential sum estimates and the Erdős-Turán inequality (Theorem [\[T: Erdos-Turan\]](#T: Erdos-Turan){reference-type="ref" reference="T: Erdos-Turan"}). This is the content of Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"}. Assuming that all the previous steps were completed, we shall ultimately reduce our problem to showing that $$\limsup\limits_{r\to+\infty} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{0\leq h\leq L(r)} \big(\Lambda_{w,b}(r+h)-1\big) T^{{\left \lfloor p_r(h) \right \rfloor}}f_1 \cdot T^{2{\left \lfloor p_r(h) \right \rfloor}} f_2 \Big\rVert_{L^2(\mu)}=o_w(1)$$uniformly for $1\leq b\leq W$ coprime to $W$. Now that the error terms have been eliminated, we are left with an average that involves polynomial iterates. Next, we use an argument from [@Koutsogiannis-correlations] that allows us to pass to an extension of the system $(X,\mathcal{X},\mu, T)$.
To be more precise, there exists an $\mathbb{R}$-action (see the definition in Section [2](#Section: Background){reference-type="ref" reference="Section: Background"}) $(Y,\mathcal{Y},v,S)$ and functions $\widetilde{f}_1,\widetilde{f}_2$, such that we have the equality $$T^{{\left \lfloor p_r(h) \right \rfloor}}f_1 \cdot T^{2{\left \lfloor p_r(h) \right \rfloor}} f_2= S_{p_r(h)}\widetilde{f}_1\cdot S_{2p_r(h)}\widetilde{f}_2.$$This procedure can be done because the polynomial $p_r(h)$ has good equidistribution properties (which we analyze in the previous step) and thus the fractional parts of the finite sequence $\big(p_r(h)\big)_{0\leq h\leq L(r)}$ fall inside a small interval around 1 with the correct frequency. This is a necessary condition in order to use Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"}, which provides a bound for the inner average. To be more specific, we have the expression $$\limsup\limits_{r\to+\infty} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{0\leq h\leq L(r)} \big(\Lambda_{w,b}(r+h)-1\big) S_{p_r(h)}\widetilde{f}_1\cdot S_{2p_r(h)}\widetilde{f}_2 \Big\rVert_{L^2(\mu)}.$$ The inner average involves polynomials and can be bounded uniformly by the Gowers norm of the sequence $\Lambda_{w,b}(n)-1$ by Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"} (modulo some constants and error terms that we ignore for the sake of this discussion). In particular, we have that the average in [\[E: second equation in first example\]](#E: second equation in first example){reference-type="eqref" reference="E: second equation in first example"} is bounded by $$\big\lVert \Lambda_{w,b}(n)-1 \big\rVert_{U^s(r,r+L(r)] }$$for some $s\in \mathbb{N}$. 
Finally, Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"} implies that for sufficiently large values of $r$, the quantity $\big\lVert \Lambda_{w,b}(n)-1 \big\rVert_{U^s(r,r+L(r)] }$ is $o_w(1)$ uniformly in $b$. Then, sending $w$ to $+\infty$, we reach the desired conclusion. This argument is much simpler than the general case since it involves only one function. As we commented briefly, one extra complication is that we are dealing with a double averaging, unlike the model example. During the proof of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}, we will also need to split the functions $a_{ij}$ into several distinct classes, which are handled with different methods. For example, the argument above works nicely for the function $t^{3/2}$ but has to be modified in the case of the function $\log^2 t$, because the latter cannot be approximated by polynomials of degree 1 or higher on our short intervals. Namely, the Taylor polynomial corresponding to $\log^2 t$ is essentially constant and the previous method is rendered ineffective. Thus, we present an additional, more elaborate model example in Section [4](#Section-Removing error terms section){reference-type="ref" reference="Section-Removing error terms section"}, which exemplifies the possible cases that arise in the main proof.
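The floor-stability step above, namely that ${\left \lfloor (r+h)^{3/2} \right \rfloor}={\left \lfloor p_r(h) \right \rfloor}$ outside a sparse set of $h$'s, can also be checked in exact arithmetic for concrete parameters. The sketch below (in Python) takes $r=10^6$, a perfect square, so that the coefficients of $p_r$ are rational, and restricts to $h\leq 500$, where the Lagrange remainder $\frac{3h^3}{48\xi_h^{3/2}}$ stays below $10^{-2}$; these cut-offs are our own illustrative choices, not the ones used in the proof.

```python
import math
from fractions import Fraction

r, sqrt_r = 10**6, 10**3   # r is a perfect square, so r^(1/2) and r^(3/2) are integers
H = 500                    # on this range the remainder h^3/(16 ξ^(3/2)) is below 0.01

mismatches = []
for h in range(1, H + 1):
    # exact value of floor((r+h)^(3/2)): for an integer m, floor(m^(3/2)) = isqrt(m^3)
    true_floor = math.isqrt((r + h) ** 3)
    # the quadratic Taylor polynomial p_r(h) = r^(3/2) + (3/2) r^(1/2) h + (3/8) r^(-1/2) h^2,
    # represented exactly as a rational number
    p = Fraction(sqrt_r**3) + Fraction(3 * sqrt_r * h, 2) + Fraction(3 * h * h, 8 * sqrt_r)
    if true_floor != math.floor(p):
        mismatches.append((h, p))
```

Since $(r+h)^{3/2}=p_r(h)-\delta_h$ with $0<\delta_h\leq h^3/(16\,r^{3/2})$, a mismatch can occur only when the fractional part of $p_r(h)$ is smaller than this bound, which happens for only a few values of $h$.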
## Open problems and further directions {#SUBSection-Open problems} We expect that condition [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"} in Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"} can be relaxed significantly and still provide a multiple recurrence result. Motivated by [@Fra-Hardy-singlecase Theorem 2.3], we make the following conjecture. **Conjecture 1**. *Let $a\in \mathcal{H}$ be a function of polynomial growth which satisfies $$\lim\limits_{t\to+\infty} \bigl| a(t)-cp(t) \bigr|=+\infty \text{ for every } c\in \mathbb{R}\text{ and } p(t)\in \mathbb{Z}[t].$$ Then, for any $k\in\mathbb{N},$ measure-preserving system $(X,\mathcal{X},\mu,T)$ and set $A$ of positive measure, the set $$\{n\in \mathbb{N}:\; \mu\big(A\cap T^{-{\left \lfloor a(n) \right \rfloor}} A \cap \dots \cap T^{-k{\left \lfloor a(n) \right \rfloor}}A \big)>0\}$$ has non-empty intersection with $\mathbb{P}$.* Comparing the assumptions on the function $a$ to those in Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"}, we see that we are very close to establishing Conjecture [Conjecture 1](#Conjecture 1){reference-type="ref" reference="Conjecture 1"}. However, there are examples that our work does not encompass, such as the function $t^4 +\log t$ or $t^2+\log\log (5t)$. In the setting of multiple recurrence along $\mathbb{N}$, the corresponding result was established in [@Fra-Hardy-singlecase] and was generalized for more functions in [@BMR-Hardy2]. In view of [@BMR-Hardy2 Corollary B.3, Corollary B.4], we also make the following conjecture: **Conjecture 2**. 
*Let $k\in\mathbb{N}$ and $a_1,\dots, a_k\in \mathcal{H}$ be functions of polynomial growth. Assume that every non-trivial linear combination $a$ of the functions $a_1,\dots,a_k$ has the property $$\lim_{t\to +\infty}|a(t)-p(t)|=+\infty \text{ for all } p(t)\in \mathbb{Z}[t].$$ Then, for any measure-preserving system $(X,\mathcal{X},\mu,T)$ and set $A$ of positive measure, the set $$\{n\in \mathbb{N}:\; \mu\big(A\cap T^{-{\left \lfloor a_1(n) \right \rfloor}} A \cap \dots \cap T^{-{\left \lfloor a_k(n) \right \rfloor}}A \big)>0\}$$ has non-empty intersection with $\mathbb{P}$.* We remark that if one wants to also include functions that are essentially equal to a polynomial, then there are more results in this direction in [@BMR-Hardy2], where it was shown that a multiple recurrence result for functions that are approximately equal to jointly-intersective polynomials is valid. Certainly, one would need to work with the sets $\mathbb{P}+1$ or $\mathbb{P}-1$ in this setting to transfer this result from $\mathbb{N}$ to the primes. It is known that a convergence result along $\mathbb{N}$ with typical Cesàro averages cannot be obtained if one works with the weaker conditions of the previous two conjectures. Indeed, the result would fail even for rotations on tori, because the corresponding equidistribution statement is false. The main approach employed in [@BMR-Hardy2] was to consider a weaker averaging scheme than Cesàro averages. With a different averaging method, one can recover an appropriate equidistribution property for functions that are not equidistributed in the standard sense. For instance, it is well-known that the sequence $(\log n)_{n\in \mathbb{N}}$ is not equidistributed mod 1 using Cesàro averages, but it is equidistributed under logarithmic averaging.
Thus, it is natural to expect that an analog of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} for other averaging schemes would allow one to relax the conditions [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} and [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"} in order to tackle the previous conjectures. A comparison result similar to Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} (but for other averaging schemes) appears to be a potential first step in this problem. We expect that, under the same hypotheses, the analogous result in the setting of multiple commuting transformations will also hold. In particular, aside from the special cases established in [@Fra-Hardy-multidimensional], convergence results along $\mathbb{N}$ for Hardy sequences and commuting transformations are still open. For instance, it is unknown whether the averages in Theorem [Theorem 5](#T: nikos result to primes){reference-type="ref" reference="T: nikos result to primes"} converge when the functions $a_i$ are linear combinations of fractional powers.
In view of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} and Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"}, any new result in this direction can be transferred to the setting of primes in a rather straightforward fashion, since conditions [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} and [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"} are general enough to work with. ## Acknowledgements We thank Nikos Frantzikinakis for helpful discussions. ## Notational conventions Throughout this article, we denote by $\mathbb{N}=\{1,2,\ldots\},$ $\mathbb{Z},$ $\mathbb{Q},$ $\mathbb{R},$ and $\mathbb{C}$ the sets of natural, integer, rational, real, and complex numbers respectively. We denote by $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ the one-dimensional torus and by $e(t)=e^{2\pi it}$ the exponential phases, while $\left\Vert x\right\Vert_{\mathbb{T}}=d(x,\mathbb{Z}),$ $[[x]],$ ${\left \lfloor x \right \rfloor},$ $\lceil x\rceil,$ and $\{x\}$ denote the distance of $x$ from the nearest integer, the nearest integer to $x,$ the greatest integer less than or equal to $x,$ the smallest integer greater than or equal to $x,$ and the fractional part of $x$, respectively. We also let ${\bf 1}_A$ denote the characteristic function of a set $A$ and $|A|$ its cardinality. For any integer $Q$ and $0\leq a\leq Q-1$, we use the symbol $a\; (Q)$ to denote the residue class $a$ modulo $Q$. Therefore, the notation ${\bf 1}_{a\; (Q)}$ refers to the characteristic function of the set of those integers whose residue when divided by $Q$ is equal to $a$.
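Since four different rounding conventions appear side by side above, a tiny reference implementation may help keep them apart. The sketch below (in Python) fixes ties at half-integers for $[[x]]$ by rounding up; the text does not specify a tie-breaking rule, and none is needed for the irrational values that actually occur.

```python
import math

def nearest_int(x):
    """[[x]]: the nearest integer to x (ties at half-integers rounded up here)."""
    return math.floor(x + 0.5)

def torus_norm(x):
    """||x||_T = d(x, Z), the distance from x to the nearest integer."""
    return abs(x - nearest_int(x))

def frac(x):
    """{x} = x - floor(x), always in [0, 1)."""
    return x - math.floor(x)
```

The built-ins `math.floor` and `math.ceil` play the roles of ${\left \lfloor \cdot \right \rfloor}$ and $\lceil\cdot\rceil$.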
For two sequences $a_n,b_n$, we say that $b_n$ *dominates* $a_n$ and write $a_n\prec b_n$ or $a_n=o(b_n)$ when $a_n/b_n$ goes to $0$ as $n\to+\infty$. In addition, we write $a_n\ll b_n$ or $a_n=O(b_n)$ if there exists a positive constant $C$ such that $|a_n|\leq C|b_n|$ for large enough $n$. When we want to denote the dependence of the constant $C$ on some parameters $h_1,\dots,h_k$, we will use the notation $a_n=O_{h_1,\dots,h_k}(b_n)$. In the case that $b_n\ll a_n\ll b_n$, we shall write $a_n\sim b_n$. We say that $a_n$ and $b_n$ have the same growth rate when the limit of $\frac{a_n}{b_n}$ as $n\to+\infty$ exists and is a non-zero real number. We use a similar notation and terminology for asymptotic relations when comparing functions of a real variable $t$. Under the same setup as in the previous paragraph, we say that the sequence $a_n$ *strongly dominates* the sequence $b_n$ if there exists $\delta>0$ such that $$\frac{a_n}{b_n}\gg n^{\delta}.$$In this case, we write $b_n \lll a_n$ or $a_n\ggg b_n$.[^8] We use similar terminology and notation for functions of a real variable $t$. Finally, for any sequence $(a(n))$, we employ the notation $$\mathop{\mathrm{\mathbb{E}}}_{n\in S} a(n)=\frac{1}{|S|} \sum_{ n\in S}^{} a(n)$$ to denote averages over a finite non-empty set $S$. We will typically work with averages over the integers in a specified interval, whose endpoints will generally be non-integers. We shall avoid using this notation for the Cesàro averages. # Background {#Section: Background} ## Measure-preserving actions Let $(X,\mathcal{X},\mu)$ be a Lebesgue probability space. A transformation $T:X\to X$ is *measure-preserving* if $\mu(T^{-1}(A))=\mu(A)$ for all $A\in \mathcal{X}$. It is called *ergodic* if all the $T$-invariant functions are constant. If $T$ is invertible, then $T$ induces a $\mathbb{Z}$-action on $X$ by $(n,x)\mapsto T^n x$, for every $n\in \mathbb{Z}$ and $x\in X$. More generally, let $G$ be a group.
A *measure-preserving $G$-action* on a Lebesgue probability space $(X,\mathcal{X},\mu)$ is an action on $X$ by measure-preserving maps $T_g$ for every $g\in G$ such that, for all $g_1,g_2\in G$, we have $T_{g_1g_2}=T_{g_1}\circ T_{g_2}$. For the purposes of this article, we will only need to consider actions by the additive groups of $\mathbb{Z}$ or $\mathbb{R}$. Throughout the following sections, we will also refer to $\mathbb{R}$-actions as *measure-preserving flows*. In the case of $\mathbb{Z}$-actions, we follow the usual notation and write $T^n$ to indicate the map $T_n$. ## Hardy fields Let $(\mathcal{B},+,\cdot)$ denote the ring of germs at infinity of real-valued functions defined on a half-line $(t_0,+\infty)$. A sub-field $\mathcal{H}$ of $\mathcal{B}$ that is closed under differentiation is called a *Hardy field*. For any two functions $f,g\in\mathcal{H}$, with $g$ not identically zero, the limit $$\lim\limits_{t\to+\infty} \frac{f(t)}{g(t)}$$ exists in the extended line and thus we can always compare the growth rates of two functions in $\mathcal{H}$. In addition, every non-constant function in $\mathcal{H}$ is eventually monotone and has a constant sign eventually. We define below some notions that will be used repeatedly throughout the remainder of the paper. **Definition 14**. *Let $a$ be a function in $\mathcal{H}$. We say that the function $a$ has *polynomial growth* if there exists a positive integer $k$ such that $a(t)\ll t^k$. The smallest positive integer $k$ for which this holds will be called the *degree* of $a$. The function $a$ is called *sub-linear* if $a(t)\prec t$. It will be called *sub-fractional* if $a(t)\prec t^{\varepsilon}$, for all $\varepsilon>0$. Finally, we will say that $a$ is *strongly non-polynomial* if, for all positive integers $k$, we have that the functions $a(t)$ and $t^k$ have distinct growth rates.* Throughout the proofs in the following sections, we will assume that we have fixed a Hardy field $\mathcal{H}$. 
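The growth classes of Definition 14 can be probed numerically. The following Python sketch is our own illustration (the sample functions are typical Hardy field functions, not ones used in the paper): $t^{3/2}$ is strongly non-polynomial of degree $2$, $\sqrt{t}\log t$ is sub-linear, and $\log t$ is sub-fractional.

```python
import math

# Sample functions from a Hardy field of logarithmico-exponential functions.
a = lambda t: t ** 1.5                     # strongly non-polynomial, degree 2
b = lambda t: math.sqrt(t) * math.log(t)   # sub-linear: b(t) < t eventually
c = lambda t: math.log(t)                  # sub-fractional: c(t) < t^eps for all eps > 0

for t in (1e3, 1e6, 1e9):
    # a(t)/t grows while a(t)/t^2 decays: a has degree 2 and its growth rate
    # is distinct from that of every integer power of t.
    print(a(t) / t, a(t) / t ** 2)
    # b(t)/t decays: b is sub-linear.
    print(b(t) / t)
    # c(t)/t^0.1 decays: consistent with c being sub-fractional.
    print(c(t) / t ** 0.1)
```

Of course, sampling at finitely many points only suggests the limits; for functions in a Hardy field the ratios above are eventually monotone, which is what makes such comparisons meaningful.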
Some of the theorems impose certain additional assumptions on $\mathcal{H}$, but this is a byproduct of the arguments used to establish the case of convergence of Cesàro averages in [@Tsinas] and we will not need to use these hypotheses in any of our arguments. ## Gowers uniformity norms on intervals of integers Let $N$ be a positive integer and let $f:\mathbb{Z}_N\to \mathbb{C}$ be a function. For any positive integer $s$, we define the *Gowers uniformity norm* $\left\Vert f\right\Vert_{U^s(\mathbb{Z}_N)}$ inductively by $$\big\lVert f \big\rVert_{U^1(\mathbb{Z}_N)}=\bigl| \mathop{\mathrm{\mathbb{E}}}_{n\in \mathbb{Z}_N} \ f(n) \bigr|$$and for $s\geq 2$,$$\big\lVert f \big\rVert_{U^{s}(\mathbb{Z}_N)}^{2^s}=\mathop{\mathrm{\mathbb{E}}}_{h\in \mathbb{Z}_N} \big\lVert \overline{f(\cdot )}f(\cdot+h) \big\rVert_{U^{s-1}(\mathbb{Z}_N)}^{2^{s-1}}.$$ A straightforward computation implies that $$\big\lVert f \big\rVert_{U^{s}(\mathbb{Z}_N)}=\Big( \mathop{\mathrm{\mathbb{E}}}_{{\underline{h}}\in \mathbb{Z}_N^s}\mathop{\mathrm{\mathbb{E}}}_{n\in \mathbb{Z}_N} \prod_{{\underline{\varepsilon}}\in \{0,1\}^s}\mathcal{C}^{|{\underline{\varepsilon}}|} f(n+{\underline{h}}\cdot {\underline{\varepsilon}})\Big)^{\frac{1}{2^s}}.$$ Here, the notation $\mathcal{C}$ denotes the conjugation map in $\mathbb{C}$, whereas for ${\underline{\varepsilon}}\in \{0,1\}^s$, $|{\underline{\varepsilon}}|$ is the sum of the entries of ${\underline{\varepsilon}}$ (the number of coordinates equal to $1$). It can be shown that for $s\geq 2$, $\left\Vert \cdot\right\Vert_{U^s(\mathbb{Z}_N)}$ is a norm and that $$\left\Vert f\right\Vert_{U^s(\mathbb{Z}_N)}\leq \left\Vert f\right\Vert_{U^{s+1}(\mathbb{Z}_N)}$$ for any function $f$ on $\mathbb{Z}_N$ [@Host-Kra-structures Chapter 6]. For the purposes of this article, it will be convenient to consider similar expressions that are not necessarily defined only for functions in an abelian group $\mathbb{Z}_N$. 
Therefore, for any $s\geq 1$ and a finitely supported sequence $f(n), n\in \mathbb{Z}$, we define the *unnormalized Gowers uniformity norm* $$\label{E: unnormalized gowers norm} \big\lVert f \big\rVert_{U^s(\mathbb{Z})}=\Big( \sum_{{\underline{h}}\in \mathbb{Z}^s}\ \sum_{n\in \mathbb{Z}} \ \prod_{{\underline{\varepsilon}}\in \{0,1\}^s}\mathcal{C}^{|{\underline{\varepsilon}}|} f(n+{\underline{h}}\cdot {\underline{\varepsilon}})\Big)^{\frac{1}{2^s}}$$and for a bounded interval $I\subset{\mathbb{R}}$, we define $$\label{Definition: Gowers norms along intervals} \big\lVert f \big\rVert_{U^s(I)} =\frac{ \big\lVert f\cdot {\bf 1}_{I} \big\rVert_{U^s(\mathbb{Z})}}{\big\lVert {\bf 1}_{I} \big\rVert_{U^s(\mathbb{Z}) }}.$$ First of all, observe that a simple change of variables in the summation in [\[Definition: Gowers norms along intervals\]](#Definition: Gowers norms along intervals){reference-type="eqref" reference="Definition: Gowers norms along intervals"} implies that for $X\in \mathbb{Z}$ $$\big\lVert f \big\rVert_{U^s(X,X+H]}=\big\lVert f(\cdot+X) \big\rVert_{U^s[1,H]}.$$ Naturally, we want to compare uniformity norms on the interval $[1, H]$ with the corresponding norms on the abelian group $\mathbb{Z}_H$. To this end, we will use the following lemma, whose proof can be found in [@Host-Kra-structures Chapter 22, Proposition 11]. **Lemma 15**. *Let $s$ be a positive integer and $N,N'\in \mathbb{N}$ with $N'\geq 2N$. Then, for any sequence $\big(f(n)\big)_{ n\in \mathbb{Z}}$, we have $$\big\lVert f \big\rVert_{U^s[1,N] } =\frac{\big\lVert f\cdot 1_{[1,N]} \big\rVert_{U^s(\mathbb{Z}_{N'})} }{\big\lVert 1_{[1,N] } \big\rVert_{U^s(\mathbb{Z}_{N'})}}.$$* We will need one final lemma, which asserts that the Gowers uniformity norm does not increase when the sequence is restricted to an arithmetic progression. **Lemma 16**. *Let $u(n)$ be a sequence of complex numbers.
Then, for any integer $s\geq 2$, any positive integer $Q$, and any integer $0\leq a\leq Q-1$, we have $$\big\lVert u(n){\bf 1}_{a\;( Q)}(n) \big\rVert_{U^s(X,X+H]}\leq \big\lVert u(n) \big\rVert_{U^s(X,X+H]},$$for all integers $X\geq 0$ and all $H\geq 1$.* *Proof.* We set $u_X(n)=u(X+n)$, so that we can rewrite the norm on the left-hand side as $\big\lVert u_X(n){\bf 1}_{a\;(Q)}(X+n) \big\rVert_{U^s[1,H]}$. Observe that the function ${\bf 1}_{a\; (Q)}(n)$ is periodic modulo $Q$. Thus, treating it as a function in $\mathbb{Z}_Q$, we have the Fourier expansion $${\bf 1}_{a\; (Q)}(n)=\sum_{\xi\in \mathbb{Z}_Q} \widehat{{\bf 1}}_{a\; (Q)}(\xi)e\Big(\frac{n\xi}{Q}\Big),$$for every $0\leq n\leq Q-1$, and this can be extended to hold for all $n\in \mathbb{Z}$ due to periodicity. Furthermore, we have the bound $$\bigl| \widehat{{\bf 1}}_{a\; (Q)}(\xi) \bigr|=\frac{1}{Q}\Bigl| e\bigg(\frac{a\xi}{Q}\bigg) \Bigr|\leq \frac{1}{Q}.$$Applying the triangle inequality, we deduce that $$\big\lVert u_X(n){\bf 1}_{a\;(Q)}(X+n) \big\rVert_{U^s[1,H]}\leq \sum_{\xi\in \mathbb{Z}_Q} \bigl| \widehat{{\bf 1}}_{a\; (Q)}(\xi) \bigr|\cdot \Big\lVert u_X(n)e\Big(\frac{(X+n)\xi}{Q}\Big) \Big\rVert_{U^s[1,H]}.$$However, it is immediate from [\[E: unnormalized gowers norm\]](#E: unnormalized gowers norm){reference-type="eqref" reference="E: unnormalized gowers norm"} that the $U^s$-norm is invariant under multiplication by linear phases, for every $s\geq 2$. Therefore, we conclude that $$\big\lVert u_X(n){\bf 1}_{a\;(Q)}(X+n) \big\rVert_{U^s[1,H]}\leq \big\lVert u_X(n) \big\rVert_{U^s[1,H]}= \big\lVert u(n) \big\rVert_{U^s(X,X+H]},$$which is the desired result. ◻ The primary utility of the Gowers uniformity norms is the fact that they arise naturally in complexity reduction arguments that involve multiple ergodic averages with polynomial iterates.
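The inductive definition of the $U^s(\mathbb{Z}_N)$ norms, together with the monotonicity in $s$ and the invariance under linear phases used in the proof above, can be checked numerically. The following Python sketch is our own illustration and plays no role in the arguments:

```python
import numpy as np

def gowers_norm(f, s):
    """U^s(Z_N) norm of f: Z_N -> C, computed via the inductive definition."""
    N = len(f)
    if s == 1:
        return abs(f.mean())
    # ||f||_{U^s}^{2^s} = E_h ||conj(f(.)) f(.+h)||_{U^{s-1}}^{2^{s-1}}
    total = sum(gowers_norm(np.conj(f) * np.roll(f, -h), s - 1) ** (2 ** (s - 1))
                for h in range(N))
    return (total / N) ** (1.0 / 2 ** s)

rng = np.random.default_rng(0)
N = 16
f = np.exp(2j * np.pi * rng.random(N))  # a random unimodular function on Z_N

u2, u3 = gowers_norm(f, 2), gowers_norm(f, 3)
print(u2 <= u3 + 1e-9)   # monotonicity: ||f||_{U^2} <= ||f||_{U^3}

# invariance under multiplication by a linear phase e(n * xi / N), xi in Z_N
phase = np.exp(2j * np.pi * 3 * np.arange(N) / N)
print(abs(gowers_norm(f * phase, 2) - u2) < 1e-9)
```

The constant function has norm $1$ for every $s$, and the direct formula with the product over $\underline{\varepsilon}\in\{0,1\}^s$ gives the same values as the recursion, at a higher computational cost.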
In particular, Proposition [\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]](#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA){reference-type="ref" reference="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"} below implies that polynomial ergodic averages weighted by a sequence $(a(n))_{n\in \mathbb{N}}$ can be bounded in terms of the Gowers norm of $a$ on the abelian group $\mathbb{Z}_{sN}$ for some positive integer $s$ (that depends only on the degrees of the underlying polynomials). **Proposition 17**. *[@Fra-Host-Kra-primes Lemma 3.5][\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]]{#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA label="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"} Let $k, \ell\in \mathbb{N},$ $(X,\mathcal{X},\mu,T_1,\ldots, T_k)$ be a system of commuting $\mathbb{Z}$-actions, $p_{i,j}\in\mathbb{Z}[t]$ be polynomials for every $1\leq i\leq k,$ $1\leq j\leq \ell,$ $f_1,\ldots,f_{\ell}\in L^{\infty}(\mu)$ and $a:\mathbb{N}\to\mathbb{C}$ be a sequence. Then, there exists $s\in\mathbb{N},$ depending only on the maximum degree of the polynomials $p_{i,j}$ and the integers $k,\ell$, and a constant $C_s$ depending on $s,$ such that $$\label{E: bound of polynomial ergodic averages in terms of Gowers norm of the weight} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{1\leq n\leq N} a(n)\cdot \prod_{j=1}^{\ell} \prod_{i=1}^k T_i^{p_{i,j}(n)} f_j \Big\rVert_{L^2(\mu)}\leq C_{s} \left( \left\Vert a\cdot {\bf 1}_{[1,N]} \right\Vert_{U^s(\mathbb{Z}_{sN})}+\frac{\max \{1,\left\Vert a\right\Vert^{2s}_{\ell^{\infty}[1,sN]}\}}{N} \right).$$* **Remark 4**. *$(i)$ The statement presented in [@Fra-Host-Kra-primes] asserts that the second term in the above bound is just $o_N(1)$, under the assumption that $a(n)\ll n^c$ for all $c>0$. However, a simple inspection of the proof gives the error term presented above.
Indeed, the error terms appearing in the proof of Proposition [\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]](#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA){reference-type="ref" reference="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"} are precisely of the form $$\frac{1}{N}\mathop{\mathrm{\mathbb{E}}}_{n\in [1,N]}\ \mathop{\mathrm{\mathbb{E}}}_{ {\underline{h}}\in [1,N]^k} \Bigl| \ \prod_{{\underline{\varepsilon}}\in \{0,1\}^k} \mathcal{C}^{|{\underline{\varepsilon}}|}\ a(n+{\underline{h}}\cdot {\underline{\varepsilon}}) \Bigr|$$for $k\leq s-1$, which are the error terms in the van der Corput inequality. Deducing the error term on [\[E: bound of polynomial ergodic averages in terms of Gowers norm of the weight\]](#E: bound of polynomial ergodic averages in terms of Gowers norm of the weight){reference-type="eqref" reference="E: bound of polynomial ergodic averages in terms of Gowers norm of the weight"} is then straightforward.\ $(ii)$ The number $s-1$ is equal to the number of applications of the van der Corput inequality in the associated PET argument and we may always assume that $s\geq 2$. 
In that case, Lemma [Lemma 15](#L: relations for gowers norms defined on intervals and on groups){reference-type="ref" reference="L: relations for gowers norms defined on intervals and on groups"} and the bound $\left\Vert {\bf 1}_{[1,N]} \right\Vert_{U^s(\mathbb{Z}_{sN})}\leq 1$ imply that we can replace the norm in [\[E: bound of polynomial ergodic averages in terms of Gowers norm of the weight\]](#E: bound of polynomial ergodic averages in terms of Gowers norm of the weight){reference-type="eqref" reference="E: bound of polynomial ergodic averages in terms of Gowers norm of the weight"} with the term $\left\Vert a\right\Vert_{U^s[1,N]}$.* For polynomials $p_{i,j}(t)\in\mathbb{R}[t]$ of the form $$p_{i,j}(t)=a_{ij,d_{ij}}t^{d_{ij}}+\dots+a_{ij,1}t+a_{ij,0},$$ and $\mathbb{R}$-actions $(T_{i,s})_{s\in \mathbb{R}}$, we have $$T_{i,p_{i,j}(n)}=\Big(T_{i,a_{ij,d_{ij}}}\Big)^{n^{d_{ij}}}\cdot\ldots \cdot \Big(T_{i,a_{ij,1}}\Big)^{n}\cdot \Big(T_{i,a_{ij,0}}\Big).$$ Thus, Proposition [\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]](#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA){reference-type="ref" reference="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"} implies the following. **Corollary 18**. *Let $k, \ell\in \mathbb{N},$ $(X,\mathcal{X},\mu,S_1,\ldots, S_k)$ be a system of commuting $\mathbb{R}$-actions, $p_{i,j}\in\mathbb{Z}[t]$ be polynomials for all $1\leq i\leq k,$ $1\leq j\leq \ell,$ $f_1,\ldots,f_{\ell}\in L^{\infty}(\mu)$ and $a:\mathbb{N}\to\mathbb{C}$ be a sequence.
Then, there exists $s\in\mathbb{N},$ depending only on the maximum degree of the polynomials $p_{i,j}$ and the integers $k,\ell$, and a constant $C_s$ depending on $s,$ such that $$\Big\lVert \mathop{\mathrm{\mathbb{E}}}_{1\leq n\leq N} a(n)\cdot \prod_{j=1}^{\ell} \prod_{i=1}^k S_{i,p_{i,j}(n)} f_j \Big\rVert_{L^2(\mu)}\leq C_{s} \left( \left\Vert a\cdot {\bf 1}_{[1,N]} \right\Vert_{U^s(\mathbb{Z}_{sN})}+\frac{\max \{1,\left\Vert a\right\Vert^{2s}_{\ell^{\infty}[1,sN]}\}}{N} \right).$$* ## Number theoretic tools The following lemma is a standard consequence of the prime number theorem and the sparseness of prime powers (actually, we use this argument in the proof of Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"} below). For a proof, see, for instance, [@Host-Kra-structures Chapter 25]. **Lemma 19**. *For any bounded sequence $(a(n))_{n\in \mathbb{N}}$ in a normed space, we have $$\lim\limits_{N\to+\infty} \Big\lVert \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} a(p) -\frac{1}{N}\sum_{n=1}^{N} \Lambda(n) a(n) \Big\rVert=0.$$* Therefore, in order to study ergodic averages along primes, we can replace them with the ergodic averages over $\mathbb{N}$ weighted by the function $\Lambda(n)$. For the modified von Mangoldt function, we have the following deep theorem, which was recently established in [@MSTT]. [@MSTT Theorem 1.5][\[T: Gowers uniformity in short intervals\]]{#T: Gowers uniformity in short intervals label="T: Gowers uniformity in short intervals"} Let $\varepsilon>0$ and assume $L(N)$ is a positive sequence that satisfies the bounds $N^{\frac{5}{8}+\varepsilon}\leq L(N)\leq N^{1-\varepsilon}$. Let $s$ be a fixed integer and let $w$ be a positive integer.
Then, if $N$ is large enough in terms of $w$, we have that $$\label{E: W-trick} \left\Vert \Lambda_{w,b}-1\right\Vert_{U^s(N,N+L(N)]}=o_w(1)$$for every $1\leq b\leq W$ with $(b,W)=1$. We will need to use the orthogonality of $\Lambda_{w,b}$ to polynomial phases in short intervals. This is an immediate consequence of the $U^d$ uniformity in Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"} in conjunction with $d$ applications of the van der Corput inequality, until the polynomial phase is eliminated. Alternatively, one can use Proposition [\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]](#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA){reference-type="ref" reference="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"} for a rotation on the torus $\mathbb{T}$ to carry out the reduction to Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"}.[^9] We omit its proof. **Lemma 20**. *Let $L(N)$ be a positive sequence satisfying $N^{\frac{5}{8}+\varepsilon}\prec L(N)\prec N^{1-\varepsilon}$ for some $\varepsilon>0$. Then, we have that $$\label{E: orthogonality to polynomial phases} \max_{\underset{(b,W)=1}{1\leq b\leq W}} \sup_{\underset{\deg p=d}{p\in \mathbb{R}[t]}} \Bigl| \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)}\big(\Lambda_{w,b}(n)-1\big)e(p(n)) \Bigr|=o_w(1)$$for every $N$ large enough in terms of $w$.* **Remark 5**.
*$(i)$ The error term $o_w(1)$ depends on the degree $d$, but since this will be fixed in applications, we suppressed that dependence above.\ $(ii)$ Quantitative bounds for similar expressions (involving the more general class of nilsequences, as well) were the main focus in [@MSTT], though in that setting the authors used a different weight of the form $\Lambda-\Lambda^{\#}$, where $\Lambda^{\#}$ is a carefully chosen approximant for the von Mangoldt function arising from considerations of the (modified) Cramer random model for the primes.* Finally, we will also use a corollary of the Brun-Titchmarsh inequality to bound the contribution of bad residue classes in our ergodic averages by a constant term. For $q\geq 2$ and $(a,q)=1$, we denote by $\pi(x,q,a)$ the number of primes $\leq x$ that are congruent to $a$ modulo $q$. Alternatively, one could also use the asymptotics for averages of $\Lambda$ in short intervals that were established by Huxley [@Huxley], since $L(N)$ will be chosen to grow sufficiently fast in our applications. [\[T: Brun-Titchmarsh inequality\]]{#T: Brun-Titchmarsh inequality label="T: Brun-Titchmarsh inequality"} We have $$\label{E: Brun-Titchmarsh} \pi(x+y,q,a)-\pi(x,q,a)\leq \frac{2y}{\phi(q)\log(\frac{y}{q})}$$for every $x\geq y>q$. While we referred to this as the Brun-Titchmarsh inequality, the previous theorem was established in [@Montgomery-Vaughan] by Montgomery and Vaughan (prior results contained the term $2+o(1)$ in the numerator). We will need a variant of this theorem adapted to the von Mangoldt function. This follows easily from the previous theorem and a standard partial summation argument. **Corollary 21**. 
*For every $q\leq y\leq x$, we have $$\sum_{\underset{n\equiv a\;(q)}{x\leq n\leq x+y}}\Lambda(n)\leq \frac{2y\log x}{\phi(q)\log(\frac{y}{q})}+O\big(\frac{y}{\log x}\big)+O\big(x^{\frac{1}{2}}\log x\big).$$* *Proof.* Consider the function $$\pi(x,q,a)=\sum_{\underset{n\equiv a\;(q)}{1\leq n\leq x} } 1_{\mathbb{P}}(n)$$ as in the statement of Theorem [\[T: Brun-Titchmarsh inequality\]](#T: Brun-Titchmarsh inequality){reference-type="ref" reference="T: Brun-Titchmarsh inequality"}, defined for all $x\geq 3/2$. Let $$\theta(x,q,a)=\sum_{\underset{n\equiv a\;(q)}{1\leq n\leq x} } 1_{\mathbb{P}}(n)\log n,\ \ \psi(x,q,a)=\sum_{\underset{n\equiv a\;(q)}{1\leq n\leq x} } \Lambda(n).$$ It is evident that $$\label{E: prime powers suck} \Bigl| \theta(x,q,a)-\psi(x,q,a) \Bigr|\leq \sum_{p^k\leq x{:}\;p\in \mathbb{P}, k\geq 2} \log p\leq x^{1/2}\log x,$$since there are at most $x^{1/2}$ prime powers $\leq x$ and each one of them contributes at most $\log x$ in this sum. Now, we use summation by parts to deduce that $$\begin{gathered} \theta(x+y,q,a)-\theta(x,q,a)=\sum_{\underset{n\equiv a\;(q)}{x< n\leq x+y} } 1_{\mathbb{P}}(n)\log n+O(1)=\pi(x+y,q,a)\log(x+y)-\\ \pi(x,q,a)\log(x+1)+\sum_{\underset{n\equiv a\;(q)}{x< n\leq x+y} } \pi(n,q,a)\Big(\log n -\log(n+1)\Big)+O(1). \end{gathered}$$Using the inequalities $\log n-\log(n+1)\leq- (n+1)^{-1}$ and $\log(x+y)\leq \log x +y/x$, we deduce that $$\begin{gathered} \theta(x+y,q,a)-\theta(x,q,a)\leq \log x\Big( \pi(x+y,q,a)-\pi(x,q,a)\Big)+\frac{\pi(x+y,q,a)y}{x}-\\ \sum_{\underset{n\equiv a\;(q)}{x< n\leq x+y} } \frac{\pi(n,q,a)}{n+1}+O(1).
\end{gathered}$$Using the estimate $\pi(x,q,a)\ll \frac{x}{\phi(q)\log x}$ and Theorem [\[T: Brun-Titchmarsh inequality\]](#T: Brun-Titchmarsh inequality){reference-type="ref" reference="T: Brun-Titchmarsh inequality"}, we bound the sum in the previous expression by $$\log x\frac{2y}{\phi(q)\log (\frac{y}{q})}+ O\Big(\frac{(x+y)y}{\phi(q)x\log(x+y)}\Big)+ O\Big(\sum_{\underset{n\equiv a\;(q)}{x< n\leq x+y} } \frac{1}{\phi(q)\log n}\Big)+O(1).$$Since $$\begin{gathered} \sum_{\underset{n\equiv a\;(q)}{x< n\leq x+y} } \frac{1}{\log n}\leq \int_{x}^{x+y}\frac{dt}{\log t}+O(1)=\frac{x+y}{\log(x+y)}-\frac{x}{\log x}+\int_{x}^{x+y}\frac{dt}{\log^2 t} +O(1)\leq\\ \frac{y}{\log x}+O(\frac{y}{\log^2 x})+O(1), \end{gathered}$$we conclude that $$\label{E: estimate for sums of von Mangoldt on primes} \theta(x+y,q,a)-\theta(x,q,a)\leq \frac{2y\log x}{\phi(q)\log(\frac{y}{q})}+O(\frac{y}{\log x})+O(1).$$Consequently, if we combine [\[E: prime powers suck\]](#E: prime powers suck){reference-type="eqref" reference="E: prime powers suck"} and [\[E: estimate for sums of von Mangoldt on primes\]](#E: estimate for sums of von Mangoldt on primes){reference-type="eqref" reference="E: estimate for sums of von Mangoldt on primes"}, we arrive at $$\psi(x+y,q,a)-\psi(x,q,a)\leq \frac{2y\log x}{\phi(q)\log(\frac{y}{q})}+O(\frac{y}{\log x})+O(x^{\frac{1}{2}}\log x),$$ as was to be shown. ◻ **Remark 6**. *We will apply this corollary for $q=W$ and $y\gg x^{5/8+\varepsilon}$. Note that for $y$ in this range, the second error term can be absorbed into the first one.* ## Quantitative equidistribution mod 1 **Definition 22**. *Let $(x_n)_{n\in \mathbb{N}}$ be a real-valued sequence.
We say that $(x_n)_{n\in\mathbb{N}}$ is* - **equidistributed $mod \;1$* if for all $0\leq a< b \leq 1,$ we have $$\label{Equi} \lim_{N\to+\infty}\frac{\big|\big\{ n\in \{1,\ldots, N\}:\;\{x_n\}\in [a,b)\big\}\big|}{N}=b-a.$$* - **well distributed $mod \;1$* if for all $0\leq a<b\leq 1,$ we have $$\label{Well_Equi} \lim_{N\to+\infty}\frac{\big|\big\{n\in \{1,\ldots,N\}:\;\{x_{k+n}\}\in [a,b)\big\}\big|}{N}=b-a, \text{ uniformly in } k=0,1,\ldots.$$* In the case of polynomial sequences, their equidistribution properties are well understood. If all the non-constant coefficients of the polynomial are rational, it is straightforward to check that the sequence of its fractional parts is periodic. On the other hand, for polynomials with at least one irrational coefficient other than the constant term, we have the following theorem. [\[T: Weyl\]]{#T: Weyl label="T: Weyl"} Let $p\in \mathbb{R}[t]$ be a polynomial with at least one irrational coefficient other than the constant term. Then, the sequence $(p(n))_{n\in \mathbb{N}}$ is well distributed $mod\: 1$. This theorem is classical and for a proof, we refer the reader to [@Kuipers-Niederreiter Chapter 1, Theorem 3.2].[^10] In the case of Hardy field functions, we have a complete characterization of equidistribution modulo 1 due to Boshernitzan. We recall here [@Boshernitzan-equidistribution Theorem 1.3]. [\[T: Boshernitzan\]]{#T: Boshernitzan label="T: Boshernitzan"} Let $a\in\mathcal{H}$ be a function of polynomial growth. Then, the sequence $(a(n))_{n\in \mathbb{N}}$ is equidistributed $mod \; 1$ if and only if $|a(t)-p(t)|\succ \log t$ for every $p\in \mathbb{Q}[t]$. This theorem explains the assumptions in Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} and, in particular, condition [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"}.
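Both phenomena are easy to observe numerically. In the Python sketch below (our own illustration), the sequence $n\sqrt{2}$ is equidistributed by Weyl's theorem, while $\log n$ fails Boshernitzan's criterion (take $p=0$, since $|\log t|\not\succ\log t$), and its fractional parts visibly deviate from the uniform distribution:

```python
import math

def frequency(xs, a, b):
    """Empirical frequency with which the fractional parts {x} land in [a, b)."""
    xs = list(xs)
    hits = sum(1 for x in xs if a <= x % 1 < b)
    return hits / len(xs)

N = 100_000
a, b = 0.2, 0.5  # equidistribution predicts the frequency b - a = 0.3

weyl = frequency((n * math.sqrt(2) for n in range(1, N + 1)), a, b)
slow = frequency((math.log(n) for n in range(1, N + 1)), a, b)

print(weyl)  # close to 0.3
print(slow)  # noticeably off 0.3: log n grows too slowly to equidistribute
```

Sequences growing as slowly as $\log n$ spend long stretches near a single fractional part, which is exactly the kind of behavior that the growth condition above rules out.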
Indeed, since we need equidistribution assumptions for our method to work, this condition appears to be vital. We will invoke Boshernitzan's theorem only in the case of sub-fractional functions. Indeed, we will investigate the equidistribution properties of fast-growing functions by studying their exponential sums in short intervals. This leads to a proof of the previous theorem indirectly, at least in the case that the function involved is not sub-fractional. For our purposes, we will need a quantitative version of the equidistribution phenomenon. For a finite sequence of real numbers $(u_n)_{1\leq n\leq N}$ and an interval $[a,b]\subseteq [0,1]$, we define the *discrepancy* of the sequence $u_n$ with respect to $[a,b]$ by $$\label{E: discrepancy} \Delta_{[a,b]}(u_1,\dots,u_N)=\Bigg|\frac{\bigl| \big\{n\in \{1,\ldots,N\}{:}\;\{u_n\}\in [a,b] \big\} \bigr|}{N}-(b-a)\Bigg|.$$ The discrepancy of a sequence is a quantitative measure of how close a sequence of real numbers is to being equidistributed modulo 1. For example, it is immediate that for an equidistributed sequence $u_n$, we have that $$\lim\limits_{N\to+\infty} {\Delta_{[a,b]} (u_1,\dots,u_N) } =0,$$for all $0\leq a\leq b\leq 1.$ For an in-depth discussion on the concept of discrepancy and the more general theory of equidistribution on $\mathbb{T}$, we refer the reader to [@Kuipers-Niederreiter]. Our only tool will be an upper bound of Erdős and Turán on the discrepancy of a finite sequence. 
For a proof of this result, see [@Kuipers-Niederreiter Chapter 2, Theorem 2.5].[^11] [\[T: Erdos-Turan\]]{#T: Erdos-Turan label="T: Erdos-Turan"} There exists an absolute constant $C$, such that for any positive integer $M$ and any Borel probability measure $\nu$ on $\mathbb{T}$, we have $$\sup_{A\subseteq \mathbb{T}} |\nu(A)-\lambda(A)|\leq C\Big(\frac{1}{M}+\sum_{m=1}^{M}\frac{|\widehat{\nu}(m)|}{m}\Big),$$where $\lambda$ is the Lebesgue measure on $\mathbb{T}$ and the supremum is taken over all arcs $A$ of $\mathbb{T}$. In particular, specializing to the case that $\nu= N^{-1}\sum_{i=1}^{N}\delta_{\{u_i\}}$, where $u_1,\dots,u_N$ is a finite sequence of real numbers, we have $$\Delta_{[a,b]}(u_1,\dots,u_N)\leq C\Big( \frac{1}{M}+\sum_{m=1}^{M}\frac{1}{m} \Bigl| \frac{1}{N} \sum_{n=1}^{N} e(mu_n) \Bigr| \Big)$$ for all positive integers $M$ and all $0\leq a\leq b<1.$ It is clear that in order to get the desired bounds on the discrepancy in our setting, we will need some estimates for exponential sums of Hardy field sequences in short intervals. Due to the Taylor approximation, this is morally equivalent to establishing estimates for exponential sums of polynomial sequences. There are several well-known estimates in this direction, the most fundamental of these being a result of Weyl that shows that an exponential sum along a polynomial sequence is small unless all non-constant coefficients of the polynomial are "major-arc". In the case of strongly non-polynomial Hardy field functions, we will only need to study the leading coefficient of the polynomial in its Taylor approximation, which will not satisfy such a major-arc condition. To this end, we require the following lemma. **Lemma 23**. *Let $0<\delta <1$ and $d\in \mathbb{N}$. 
There exists a positive constant $C$ depending only on $d$, such that if $p(x)=a_dx^d+\dots +a_1x+a_0$ is a real polynomial that satisfies $$\Bigl| \frac{1}{N}\sum_{n=1}^{N} e(p(n)) \Bigr|>\delta,$$then, for every $1\leq k\leq d$, there exists $q\in \mathbb{Z}$ with $|q|\leq \delta ^{-C}$, such that $N^k \left\Vert qa_k \right\Vert_{\mathbb{T}}\leq \delta^{-C}$.* Note that there is no dependency of the constant on the length of the averaging interval, or on the polynomial $p$ itself (apart from its degree). For a proof of this lemma, see [@Green-Tao-quantitative Proposition 4.3], where a more general theorem is established in the setting of nilmanifolds as well. ## Nilmanifolds and correlation sequences Let $G$ be a nilpotent Lie group with nilpotency degree $s$ and let $\Gamma$ be a discrete and cocompact subgroup. The space $X=G/\Gamma$ is called an $s$-step *nilmanifold*. The group $G$ acts on the space $X$ by left multiplication and the measure on $X$ that is invariant under this action is called the *Haar measure* of $X$, which we shall denote by $m_X$. Given a sequence of points $x_n\in X$, we will say that the sequence $x_n$ is *equidistributed* on $X$ if for any continuous function $F:X\to \mathbb{C}$ we have that $$\lim\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} F(x_n)=\int F \ d\, m_X.$$ A *subnilmanifold* of $X=G/\Gamma$ is a set of the form $Hx$, where $H$ is a closed subgroup of the Lie group $G$ and $x\in X$ are such that $Hx$ is closed in $X$. Let $g$ be any element of the group $G$. Then, for any $x\in X$, the closure of the orbit of $x$ under $g$ will be denoted by $\overline{(g^{\mathbb{Z}}x)}$. It is known that this set is a subnilmanifold of $X=G/\Gamma$ and that the sequence $g^n x$ is equidistributed in the subnilmanifold $\overline{(g^{\mathbb{Z}}x)}$ (see, for example, [@Host-Kra-structures Chapter 11, Theorem 9]).
This is a corollary of the following lemma, whose proof can be found in [@Leibman-nil-polynomial-equidistribution Theorem 2.21]. **Lemma 24**. *Let $A$ be a countable, finitely generated amenable group and $\phi: A\to G$ be a homomorphism. Then, there exists a closed subgroup $E$ of $G$, such that $\overline{\phi(A)x}=Ex$ for all $x\in G/\Gamma$ and $(\phi(u)x)_{u\in A}$ is equidistributed on $\overline{\phi(A)x}$ (with respect to any Følner sequence). In particular, $\overline{\phi(A)x}$ is a subnilmanifold of $G/\Gamma$.* We can use this lemma to establish the following corollary, which will be helpful in computing some specific expressions that arise in the proof of Theorem [Theorem 12](#T: criterion for pointwise convergence along primes-nil version){reference-type="ref" reference="T: criterion for pointwise convergence along primes-nil version"}. **Corollary 25**. *Let $s, k, \ell_1,\dots, \ell_s$ be positive integers and $X=G/\Gamma$ be a nilmanifold. Assume that $g_1,\dots, g_k\in G$, $x_1,\dots,x_s\in X$ and $G_1,\dots, G_s$ are continuous functions from $X$ to $\mathbb{C}$.
Then, we have $$\lim\limits_{N\to+\infty} \frac{1}{(2N+1)^k} \sum_{n_1,\dots, n_k\in [-N,N]}\prod_{j=1}^s G_j(g_1^{\ell_jn_1}\cdot\ldots\cdot g_k^{\ell_jn_k} x_j)= \prod_{j=1}^{s}\int_{Y_j} G_j\, dm_{Y_j}$$where $Y_j$ is the nilmanifold $\overline{(g_1^{\ell_j})^{\mathbb{Z}}\cdot\ldots \cdot (g_k^{\ell_j})^{\mathbb{Z}} x_j }$, for every $1\leq j\leq s$.* *Proof of Corollary [Corollary 25](#C: limit of averages along linear actions of g_1,...g_k){reference-type="ref" reference="C: limit of averages along linear actions of g_1,...g_k"}.* It is sufficient to show that the sequence $$\big(g_1^{\ell_1n_1}\cdot\ldots \cdot g_k^{\ell_1n_k} x_1,\ldots, g_{1}^{\ell_sn_1}\cdot\ldots \cdot g_k^{\ell_sn_k} x_s \big)$$is equidistributed on the nilmanifold $Y_1\times\dots\times Y_s$ (with respect to the Følner sequence $[-N,N]^k$), as the required convergence follows by applying this equidistribution statement to the continuous function $G_1\otimes \dots\otimes G_s$. If we define the homomorphism $\phi:\mathbb{Z}^k\to G^s$ by $$\phi(n_1,\dots, n_k)=(g_1^{\ell_1n_1}\cdot\ldots\cdot g_k^{\ell_1n_k},\ldots,g_{1}^{\ell_sn_1}\cdot\ldots \cdot g_k^{\ell_sn_k} ),$$ then the result follows from Lemma [Lemma 24](#L: linear action equidistribution in nil){reference-type="ref" reference="L: linear action equidistribution in nil"}. ◻ We now present the following definition of nilsequences in several variables. **Definition 26**. *Let $k,s$ be positive integers and let $X=G/\Gamma$ be an $s$-step nilmanifold. Assume that $g_1,\dots, g_k$ are pairwise commuting elements of the group $G$, $F: X\to\mathbb{C}$ is a continuous function on $X$ and $x\in X$. Then, the sequence $$\psi(n_1,\dots, n_k)=F(g_1^{n_1}\cdot\ldots\cdot g_k^{n_k}x), \ \text{where } n_1,\dots,n_k\in \mathbb{Z}$$ is called an *$s$-step nilsequence in $k$ variables*.* The main tool that we will need is an approximation of general nilsequences by multi-correlation sequences in the $\ell^{\infty}$-sense.
The following lemma is established in [@Fra-Host-weighted Proposition 4.2]. **Lemma 27**. *Let $k,s$ be positive integers and $\psi:\mathbb{Z}^k\to \mathbb{C}$ be an $(s-1)$-step nilsequence in $k$ variables. Then, for every $\varepsilon>0$, there exists a system $(X,\mathcal{X},\mu,T_1,\dots, T_k)$ and functions $F_1,\dots, F_s$ in $L^{\infty}(\mu)$, such that the sequence $b(n_1,\dots,n_k)$ defined by $$b(n_1,\dots,n_k)=\int \prod_{j=1}^{s}\big( T_1^{\ell_j n_1}\cdot\ldots\cdot T_k^{\ell_jn_k} \big)F_j\ d\,\mu, \ (n_1,\dots, n_k)\in \mathbb{Z}^k$$with $\ell_j=s!/j$ satisfies $$\left\Vert \psi-b\right\Vert_{\ell^{\infty}(\mathbb{Z}^k)}\leq \varepsilon.$$* **Lemma 28**. *Let $k,s$ be positive integers and $\psi:\mathbb{Z}^k\to \mathbb{C}$ be an $(s-1)$-step nilsequence in $k$ variables. Then, for every $\varepsilon>0$, there exists a nilmanifold $Y=H/\Delta$, pairwise commuting elements $u_1,\dots,u_k\in H$ and continuous, complex-valued functions $F_1,\dots, F_s$ on $Y$, such that the sequence $b(n_1,\dots,n_k)$ defined by $$b(n_1,\dots,n_k)=\int \prod_{j=1}^{s}F_j\big( u_1^{\ell_j n_1}\cdot\ldots\cdot u_k^{\ell_jn_k} y\big)\ d\,m_Y(y), \ (n_1,\dots, n_k)\in \mathbb{Z}^k$$with $\ell_j=s!/j$ satisfies $$\left\Vert \psi-b\right\Vert_{\ell^{\infty}(\mathbb{Z}^k)}\leq \varepsilon.$$* ***Comment** 4*. The definition of nilsequences used in [@Fra-Host-weighted] imposed that $x=\Gamma$ and that ${\bf n}\in \mathbb{N}^k$. However, their arguments generalize in a straightforward manner to the slightly more general setting that we presented above. # Lifting to an extension flow {#Section-Lifting section} In this section, we use a trick that allows us to replace the polynomial ergodic averages with similar ergodic averages over $\mathbb{R}$-actions on an extension of the original probability space, removing the rounding functions in the process. This argument is implicit in [@koutsogiannis-closest] for Cesàro averages, so we adapt its proof to the setting of short intervals.
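Before stating the proposition, it may help to see the mechanism in the simplest case $k=\ell=1$. The following identity is a minimal sketch, where $T$ is a measure-preserving transformation on $X$, $f\in L^{\infty}(\mu)$, $s\in\mathbb{R}$, and the flow on $X\times[0,1)$ is $S_s(x,a)=(T^{\lfloor s+a\rfloor}x,\{s+a\})$.

```latex
% Averaging over the fiber coordinate a removes the floor function:
% if {s} < 1 - delta, then floor(s + a) = floor(s) for every a in [0, delta], so
\int_{0}^{1} \mathbf{1}_{[0,\delta]}(a)\, f\big(T^{\lfloor s+a\rfloor}x\big)\, da
\;=\; \int_{0}^{\delta} f\big(T^{\lfloor s\rfloor}x\big)\, da
\;=\; \delta\, f\big(T^{\lfloor s\rfloor}x\big).
% This is the source of the factor delta (delta^{k l} in general) in the
% statement below, while the values of s with {s} in [1 - delta, 1) form the
% exceptional set controlled by the non-concentration hypothesis.
```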
**Proposition 29**. *Let $k,\ell,d$ be positive integers and let $L(N)$ be a positive sequence satisfying $N^{\frac{5}{8}+\varepsilon}\ll L(N)\ll N^{1-\varepsilon}$. Let $(X,\mathcal{X},\mu,T_1,\dots,T_k)$ be a system of commuting transformations. Then, there exists a positive integer $s$ depending only on $k,\ell,d$, such that for any variable family $\mathcal{P}=\{p_{i,j,N}{:}\;1\leq i\leq k, 1\leq j\leq \ell\}$ of polynomials with degrees at most $d$ that, for all $i,j,$ satisfy $$\label{E: non-concetration around 1} \lim_{\delta\to 0^+}\lim_{N\to+\infty}\frac{\left|\{N\leq n\leq N+L(N):\; \{p_{i,j,N}(n)\}\in [1-\delta,1)\}\right|}{L(N)}=0,$$ we have that for any $0<\delta<1$ and functions $f_1,\dots, f_{\ell}\in L^{\infty}(\mu)$ $$\begin{gathered} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \ \big(\Lambda_{w,b}(n)-1\big) \prod_{j=1}^{\ell} \prod_{i=1}^{k} T_i^{{\left \lfloor p_{i,j,N}(n) \right \rfloor}} f_j \Big\rVert_{L^2(\mu)}\ll_{k,\ell,d}\\ \frac{1}{\delta^{k\ell }}\Big(\big\lVert \Lambda_{w,b}(n)-1 \big\rVert_{U^s(N,N+sL(N)]}+o_w(1)\Big)+ o_{\delta}(1)(1+o_w(1)), \end{gathered}$$ for all $1\leq b\leq W,\ (b,W)=1$, where $W=\prod_{p\in \mathbb{P}{:}\;p\leq w}p$.* *Proof.* Let $\lambda$ denote the Lebesgue measure on $[0,1)$ and we define (as in [@koutsogiannis-closest]) the measure-preserving $\mathbb{R}^{k \ell}$-action $\displaystyle \prod_{i=1}^{k}S_{i,s_{i,1}}\cdot \ldots\cdot \prod_{i=1}^k S_{i,s_{i,\ell}}$ on the space $Y:=X\times [0,1)^{k\ell},$ endowed with the measure $\nu:=\mu\times \lambda^{k\ell}$, by $$\prod_{j=1}^{\ell} \prod_{i=1}^k S_{i,s_{i,j}}(x,a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k,\ell})=$$ $$\left(\prod_{j=1}^{\ell} \prod_{i=1}^k T_i^{[s_{i,j}+a_{i,j}]}x,\{s_{1,1}+a_{1,1}\},\ldots,\{s_{k,1}+a_{k,1}\},\ldots,\{s_{1,\ell}+a_{1,\ell}\},\ldots,\{s_{k,\ell}+a_{k,\ell}\}\right).$$ If $f_1,\ldots,f_{\ell}$ are bounded functions on $X,$ we define the $Y$-extensions of $f_j,$ setting for 
every element $(a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k,\ell})\in [0,1)^{k\ell}$: $$\hat{f}_j(x,a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k,\ell})=f_j(x),\;\;1\leq j\leq \ell;\;\;$$ and we also define the function $$\hat{f}_0(x,a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,\ell})= 1_{[0,\delta]^{k\ell }}(a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,\ell}).$$ For every $N\leq n\leq N+L(N),$ we consider the functions (on the original space $X$) $$b_N(n):= (\prod_{i=1}^k T_i^{[p_{i,1,N}(n)]})f_1\cdot\ldots\cdot(\prod_{i=1}^k T_i^{[p_{i,\ell,N}(n)]})f_\ell$$ as well as the functions $$\tilde{b}_N(n):= \hat{f}_0\cdot(\prod_{j=1}^{\ell}\prod_{i=1}^k S_{i,\delta_{j 1}\cdot p_{i,1,N}(n)})\hat{f}_1\cdot \ldots\cdot (\prod_{j=1}^{\ell}\prod_{i=1}^k S_{i,\delta_{j \ell}\cdot p_{i,\ell,N}(n)})\hat{f}_{\ell}$$ defined on the extension $Y$. Here, $\delta_{ij}$ denotes the Kronecker $\delta$, meaning that the only terms that do not vanish are the diagonal ones (i.e., when $i=j$). 
For every $x\in X$, we also let $$b'_N(n)(x):=\int_{[0,1)^{k\ell }}\tilde{b}_N(n)(x,a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k,\ell})\,d\lambda^{k\ell },$$ where the integration is with respect to the variables $a_{i,j}.$ Using the triangle and Cauchy-Schwarz inequalities, we have $$\begin{gathered} \label{E: triangle Andreas 2} \delta^{k\ell }\Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big) b_N(n) \Big\rVert_{L^2(\mu)} \leq \\ \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big)\cdot (\delta^{k\ell }b_N(n)-b'_N(n)) \Big\rVert_{L^2(\mu)} + \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big)\tilde{b}_N(n) \Big\rVert_{L^2(\nu)}.\end{gathered}$$ Using Proposition [\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]](#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA){reference-type="ref" reference="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"}, we find an integer $s\in \mathbb{N},$ depending only on the integers $k, \ell, d,$ and a constant $C_s$ depending on $s,$ such that $$\label{E: bound for the first term} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big)\tilde{b}_N(n) \Big\rVert_{L^2(\nu)}\leq C_s\left(\big\lVert \Lambda_{w,b}-1 \big\rVert_{U^s(N,N+sL(N)]}+o_N(1)\right),$$ where the $o_N(1)$ term depends only on the integer $s$ and the sequence $\Lambda_{w,b}(n).$ Now we study the first term $$\Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big)\cdot (\delta^{k\ell }b_N(n)-b'_N(n)) \Big\rVert_{L^2(\mu)}$$ in [\[E: triangle Andreas 2\]](#E: triangle Andreas 2){reference-type="eqref" reference="E: triangle Andreas 2"}. 
For every $x\in X$ and $N\leq n\leq N+L(N),$ we have $$\begin{split} &\quad \Big|\delta^{k\ell } b_N(n)(x)-b'_N(n)(x)\Big|= \\& \quad\quad\quad\quad\quad\quad\quad \left|\int_{[0,\delta]^{k\ell }} \left(\prod_{j=1}^\ell f_j(\prod_{i=1}^k T_i^{[p_{i,j,N}(n)]}x)-\prod_{j=1}^\ell f_j(\prod_{i=1}^k T_i^{[p_{i,j,N}(n)+a_{i,j}]}x) \right)\, d \lambda^{k\ell }\right|. \end{split}$$ Since all the integration variables $a_{i,j}$ are at most $\delta,$ we deduce that if all of the implicit polynomials satisfy $\{p_{i,j,N}(n)\}<1-\delta,$ we have $T_i^{[p_{i,j,N}(n)+a_{i,j}]}=T_i^{[p_{i,j,N}(n)]}$ for all $1\leq i\leq k,$ $1\leq j\leq \ell.$ To deal with the possible case where $\{p_{i,j,N}(n)\}\geq 1-\delta$ for at least one of our polynomials, we define, for every $1\leq i\leq k,$ $1\leq j\leq \ell,$ the set $$E_{\delta,N}^{i,j}:=\{n\in [N,N+L(N)]{:}\;\{p_{i,j,N}(n)\}\in [1-\delta,1)\}.$$ Then, by using the fact that $${\bf 1}_{E_{\delta,N}^{1,1}\cup \ldots\cup E_{\delta,N}^{1,\ell}\cup E_{\delta,N}^{2,1}\cup\ldots\cup E_{\delta,N}^{k,\ell}}\leq \sum_{(i,j)\in[1,k]\times[1,\ell]} {\bf 1}_{E_{\delta,N}^{i,j}}$$ and that ${\bf 1}_{E_{\delta,N}^{i,j}}(n)={\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\}),$ we infer that $$\Big|\delta^{k\ell }b_N(n)(x)-b'_N(n)(x)\Big|\leq 2\delta^{k\ell }\sum_{(i,j)\in[1,k]\times[1,\ell]} {\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})$$for every $x\in X$.
In view of the above, using the inequality $|\Lambda_{w,b}(n)-1|\leq \Lambda_{w,b}(n)+1$, we deduce that $$\begin{gathered} \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)}\big|\big(\Lambda_{w,b}(n)-1\big)\big|\cdot {\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\}) \leq \ \ \ \ \\ \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big)\cdot{\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})+ 2\mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)}{\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})\leq \\ \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)}\big(\Lambda_{w,b}(n)-1\big)\cdot{\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})+2\cdot\frac{|E^{i,j}_{\delta,N}|}{L(N)}.\end{gathered}$$ Since each polynomial $p_{i,j,N}$ satisfies [\[E: non-concetration around 1\]](#E: non-concetration around 1){reference-type="eqref" reference="E: non-concetration around 1"}, each term $\frac{|E_{\delta,N}^{i,j}|}{L(N)}$ (and hence the sum of the finitely many terms of this form) becomes as small as we wish, provided that $N$ is large and $\delta$ is small enough. It remains to show that the term $$\mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n)-1\big)\cdot{\bf 1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})$$ goes to zero as $N\to\infty,$ then $w\to\infty$ and finally $\delta\to 0^+.$ To this end, it suffices to show $$\mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)}\ \big(\Lambda_{w,b}(n)-1\big)e^{2\pi i m p_{i,j,N}(n)}\to 0$$ as $N\to\infty$ and then $w\to\infty$ for all $m\in \mathbb{Z}\setminus\{0\},$[^12] which follows from Lemma [Lemma 20](#L: discorrelation of W-tricked with polynomial phases){reference-type="ref" reference="L: discorrelation of W-tricked with polynomial phases"}. ◻ # Equidistribution in short intervals {#Section-Removing error terms section} We gather here some useful propositions that describe the behavior of a Hardy field function when restricted to intervals of the form $[N, N+L(N)]$, where $L(N)$ grows more slowly than the parameter $N$.
In our applications, we will typically need the function $L(N)$ to grow faster than $N^{5/8}$ in order to be able to use the uniformity results in short intervals, but we will not need to work under this assumption throughout most of this section, the only exception being Proposition [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"} below. We will also present an example that illustrates the main points in the proof of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} in the following section. ## Details on the proof In the case of strongly non-polynomial functions that also grow faster than some fractional power, we show that the associated Taylor polynomial $p_N(n)$ has ideal equidistribution properties. Indeed, by picking the length $L(N)$ a little more carefully, one gains arbitrary logarithmic powers over the trivial bound in the exponential sums of $p_N$. Consequently, we demonstrate that the number of integers in $[N, N+L(N)]$ for which ${\left \lfloor a(n) \right \rfloor}\neq {\left \lfloor p_N(n) \right \rfloor}$ is less than $L(N)(\log N)^{-100}$ (say) and, thus, their contribution to the average is negligible. Therefore, for all intents and purposes, one can suppose that the error terms are identically zero. The situation is different when a function that grows slower than all fractional powers is involved since these functions are practically constant in these short intervals. For instance, if one has the function $p(t)+\log^2 t$, where $p$ is a polynomial, the only feasible approximation is of the form $p(n)+\log^2 n=p(n)+\log^2 N +e_N(n)$, where $e_N(n)$ converges to 0. While it seems that we do have a polynomial as the main term in the approximation (at least when $p$ is non-constant), quantitative bounds on the exponential sums of the polynomial component cannot be established in this case at all. 
The main reason is that such bounds depend heavily on the diophantine properties of the coefficients of $p$, for which we have no data. In the case that $p$ is a constant polynomial, we can use the equidistribution (mod 1) of the sequence $\log^2 n$ to show that in most short intervals $[N, N+L(N)]$, we have ${\left \lfloor \log^2 n \right \rfloor}={\left \lfloor \log^2 N \right \rfloor}$ for all $n\in [N, N+L(N)]$. The contribution of the bad short intervals is then bounded using the triangle inequality and Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"}. Suppose that the polynomial $p$ above is non-constant. In the case that $p$ has rational non-constant coefficients, we split our averages into suitable arithmetic progressions so that the resulting polynomials have integer coefficients (aside from the constant term), and the effect of $e_N(n)$ will be eliminated when we calculate the integer parts. In the case that $p$ has a non-constant irrational coefficient, we can invoke the well-distribution of $p(n)$ to conclude that the number of elements of the set $$E_N=\{n\in [N, N+L(N)]{:}\;{\left \lfloor p(n)+\log^2 n \right \rfloor}\neq {\left \lfloor p(n)+\log^2 N \right \rfloor}\}$$is $O(\varepsilon L(N))$, for a fixed small parameter $\varepsilon$ and $N$ large. However, in order to bound the total contribution of the set $E_N$, we can only use the triangle inequality in the corresponding ergodic averages, so we are forced to extract information on how large the quantity $$\frac{1}{L(N)}\sum_{N\leq n\leq N+L(N)} \Lambda_{w, b}(n){\bf 1}_{E_N}(n)$$ can be. This can be bounded effectively if the corresponding exponential sums $$\frac{1}{L(N)}\sum_{N\leq n\leq N+L(N)} \Lambda_{w,b}(n)e\big(p(n)\big)$$ are small.
This is demonstrated by combining the fact that the exponential sums of $p(n)$ are small (due to the presence of an irrational coefficient) with the fact that exponential sums weighted by $\Lambda_{w,b}(n)-1$ are small due to the uniformity of the $W$-tricked von Mangoldt function. The conclusion follows again by an application of the Erdős-Turán inequality, this time for a probability measure weighted by $\Lambda_{w,b}(n)$. ## A model example We sketch the main steps in the case of the ergodic averages $$\label{E: example averages} \frac{1}{N}\sum_{n=1}^{N} \big(\Lambda_{w,b}(n)-1\big)T^{{\left \lfloor n\log n \right \rfloor}}f_1\cdot T^{{\left \lfloor an^2+\log n \right \rfloor}}f_2\cdot T^{{\left \lfloor \log^2 n \right \rfloor}}f_3,$$where $a$ is an irrational number. We will show that the $L^2$-norm of this expression converges to 0, as $N\to+\infty$ and then $w\to+\infty$. Note that the three sequences in the iterates satisfy our hypotheses. In addition, we remark that the arguments below are valid in the setting where we have three commuting transformations, but we consider a simpler case for convenience. Additionally, we do not evaluate the sequences at $Wn+b$ (as we should in order to be in the setup of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}), since the underlying arguments remain identical apart from changes in notation.
We choose $L(t)=t^{0.66}$ (actually, any power $t^c$ with $5/8<c<2/3$ works here) and claim that it suffices to show that $$\label{E: example averages in short intervals} \mathop{\mathrm{\mathbb{E}}}_{1\leq r \leq R} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r) }\big(\Lambda_{w,b}(n)-1\big)T^{{\left \lfloor n\log n \right \rfloor}}f_1\cdot T^{{\left \lfloor an^2+\log n \right \rfloor}}f_2\cdot T^{{\left \lfloor \log^2 n \right \rfloor}}f_3 \Big\rVert_{L^2(\mu)}=0.$$This reduction is the content of Lemma [Lemma 39](#L: long averages to short averages){reference-type="ref" reference="L: long averages to short averages"}. Now, we can use the Taylor expansion around $r$ to write $$\begin{aligned} n\log n&=r\log r+(\log r+1)(n-r)+\frac{(n-r)^2}{2r}-\frac{(n-r)^3}{6\xi_{1,n,r}^2}\\ \log n&=\log r+\frac{n-r}{\xi_{2,n,r}}\\ \log^2 n&=\log^2 r +\frac{2(n-r)\log \xi_{3,n,r}}{\xi_{3,n,r}},\end{aligned}$$for some real numbers $\xi_{i,n,r}\in [r,n]$ ($i=1,2,3$). Our choice of $L(t)$ implies that $$\Bigl| \frac{(n-r)^3}{6\xi_{1,n,r}^2} \Bigr|\leq \frac{r^{3\cdot 0.66}}{6r^2}\ll 1,$$and similarly for the other two cases. To be more specific, there exists a $\delta>0$, such that all the error terms (the ones involving the quantities $\xi_{i,n,r}$) are $O(r^{-\delta})$. Let us fix a small $\varepsilon>0$. Firstly, we shall deal with the third iterate, since this is the simplest one. Observe that if $r$ is chosen large enough and such that it satisfies $\{\log^2 r\}\in (\varepsilon,1-\varepsilon)$, then for all $n\in [r,r+L(r)]$, we will have $${\left \lfloor \log^2 n \right \rfloor}={\left \lfloor \log^2 r \right \rfloor},$$since the error terms in the expansion are $O(r^{-\delta})$, which is smaller than $\varepsilon$ for large $r$. In addition, the sequence $\log^2 n$ is equidistributed modulo 1, so our prior assumption can fail for at most $3\varepsilon R$ (say) values of $r\in [1, R]$, provided that $R$ is sufficiently large.
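For concreteness, the exponent $\delta$ above can be made explicit. The following direct computation, for $n\in[r,r+L(r)]$ with $L(t)=t^{0.66}$ and $\xi_{i,n,r}\in[r,n]$, bounds all three error terms.

```latex
% First expansion: (n-r)^3 <= L(r)^3 = r^{1.98} and xi_{1,n,r} >= r, so
\Bigl|\frac{(n-r)^3}{6\,\xi_{1,n,r}^2}\Bigr|
  \le \frac{r^{1.98}}{6r^{2}} = \tfrac{1}{6}\, r^{-0.02}.
% Second expansion: n - r <= r^{0.66} and xi_{2,n,r} >= r, so
\Bigl|\frac{n-r}{\xi_{2,n,r}}\Bigr| \le r^{-0.34}.
% Third expansion: xi_{3,n,r} in [r, 2r] for large r, so
\Bigl|\frac{2(n-r)\log \xi_{3,n,r}}{\xi_{3,n,r}}\Bigr|
  \le \frac{2\,r^{0.66}\log (2r)}{r} \ll r^{-0.33}.
% Hence any delta < 0.02 (say delta = 0.01) works for all three expansions.
```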
For the bad values of $r$, we use the triangle inequality for the corresponding norm to deduce that their contribution to the average is $O(\varepsilon)$, which will be acceptable if $\varepsilon$ is small. Actually, in order to establish this, we will need to use Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"}, though we will ignore that in this exposition. In conclusion, we can rewrite the expression in [\[E: example averages in short intervals\]](#E: example averages in short intervals){reference-type="eqref" reference="E: example averages in short intervals"} as $$\label{E: example first reduction} \mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r) }\big(\Lambda_{w,b}(n)-1\big)T^{{\left \lfloor n\log n \right \rfloor}}f_1\cdot T^{{\left \lfloor an^2+\log n \right \rfloor}}f_2\cdot T^{{\left \lfloor \log^2 r \right \rfloor}}f_3 \Big\rVert_{L^2(\mu)}+O(\varepsilon).$$ Now, we deal with the first function. We claim that the discrepancy of the finite sequence $$\Big(\{r\log r+(\log r+1)(n-r)+\frac{(n-r)^2}{2r}\}\Big)_{r\leq n\leq r+L(r)}$$is $O_A(\log^{-A }r)$ for any $A>0$. We will establish this in Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"} using Lemma [Lemma 23](#L: Weyl-type estimate){reference-type="ref" reference="L: Weyl-type estimate"} and Theorem [\[T: Erdos-Turan\]](#T: Erdos-Turan){reference-type="ref" reference="T: Erdos-Turan"}. As a baby case, we show the following estimate for some simple trigonometric averages: $$\Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} e\Big(\frac{(n-r)^2}{2r}\Big) \Bigr|\leq \frac{1}{\log^A r}$$for $r$ large enough.
Indeed, if that inequality fails for some $r\in \mathbb{N}$, there exists an integer $|q_r|\leq \log^{O(A)}r$, such that $$\Big\lVert \frac{q_r}{2r} \Big\rVert_{\mathbb{T}}\leq \frac{\log^{O(A)}r}{(L(r))^2}.$$Certainly, if $r$ is large enough, we can replace the norm with the absolute value, so that the previous inequality implies that $$\big(L(r)\big)^2 \leq \frac{2r\log^{O(A)}r}{|q_r|}.$$However, the choice $L(t)=t^{0.66}$ implies that this inequality is false for large $r$. In our problem, we can just pick $A=2$. Using the definition of discrepancy, we deduce that the number of integers in $[r,r+L(r)]$, for which we have $$\{r\log r+(\log r+1)(n-r)+\frac{(n-r)^2}{2r}\}\in [0,r^{-\delta/2}]\cup [1-r^{-\delta/2},1)$$ is $O(L(r)\log^{-2} r)$. However, if $n$ does not belong to this set of bad values, we conclude that $${\left \lfloor n\log n \right \rfloor}={\left \lfloor r\log r+(\log r+1)(n-r)+\frac{(n-r)^2}{2r} \right \rfloor}$$since the error terms are $O(r^{-\delta})$. Furthermore, since $\Lambda_{w,b}(n)=O(\log r)$ for $n\in [r,r+L(r)]$, we conclude that the contribution of the bad values is $o_r(1)$ on the inner average. Therefore, we can rewrite the expression in [\[E: example first reduction\]](#E: example first reduction){reference-type="eqref" reference="E: example first reduction"} as $$\label{E: example second reduction} \mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r) }\big(\Lambda_{w,b}(n)-1\big)T^{{\left \lfloor p_r(n) \right \rfloor}}f_1\cdot T^{{\left \lfloor an^2+\log n \right \rfloor}}f_2\cdot T^{{\left \lfloor \log^2 r \right \rfloor}}f_3 \Big\rVert_{L^2(\mu)}+O(\varepsilon)+o_R(1),$$where $p_r(n)=r\log r+(\log r+1)(n-r)+\dfrac{(n-r)^2}{2r}$. Finally, we deal with the second iterate. We consider the parameter $\varepsilon$ as above and set $M=1/\varepsilon$. Once again, we shall assume that $r$ is very large compared to $M$. 
Since $a$ is irrational, we have that the sequence $an^2$ is well-distributed modulo 1, so we would expect the number of $n$ for which $\{an^2+\log r\}\not\in [\varepsilon,1-\varepsilon]$ to be small. Note that for the remaining values of $n$, we have ${\left \lfloor an^2+\log n \right \rfloor}={\left \lfloor an^2+\log r \right \rfloor}$, since the error term in the approximation is $O(r^{-\delta})$. Therefore, we estimate the size of the set $$\mathcal{B}_{r,\varepsilon}:=\{n\in [r,r+L(r)]{:}\;\{an^2+\log r\}\in [0,\varepsilon]\cup [1-\varepsilon,1) \}.$$ Using Weyl's theorem, we conclude that $$\label{E: exponential sums in example} \max_{1\leq m\leq M} \Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} e\big(m(an^2+\log r)\big) \Bigr|=o_r(1).$$ Here, the $o_r(1)$ term depends on $M=1/\varepsilon$, but since we will send $r\to+\infty$ and then $\varepsilon\to 0$, this will not cause any issues. We suppress these dependencies in this exposition. An application of Theorem [\[T: Erdos-Turan\]](#T: Erdos-Turan){reference-type="ref" reference="T: Erdos-Turan"} implies that $$\label{E: size of bad set in example} \frac{|\mathcal{B}_{r,\varepsilon}|}{L(r)}\ll 2\varepsilon+\frac{1}{M} +\sum_{m=1}^{M}\frac{1}{m}\Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} e\big(m(an^2+\log r)\big) \Bigr|,$$so that $|\mathcal{B}_{r,\varepsilon}|\ll (\varepsilon+o_r(1))L(r)$. Additionally, we will need to estimate $$\frac{1}{L(r)} \sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n) {\bf 1}_{\mathcal{B}_{r,\varepsilon}}(n),$$which will arise when we apply the triangle inequality to bound the contribution of the set $\mathcal{B}_{r,\varepsilon}$.
However, we have that $$\max_{1\leq m\leq M} \Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n)e\big(m(an^2+\log r)\big) \Bigr|=o_w(1)+o_r(1),$$which can be seen by splitting $\Lambda_{w,b}(n)=(\Lambda_{w,b}(n)-1)+1$, applying the triangle inequality and using Lemma [Lemma 20](#L: discorrelation of W-tricked with polynomial phases){reference-type="ref" reference="L: discorrelation of W-tricked with polynomial phases"} and [\[E: exponential sums in example\]](#E: exponential sums in example){reference-type="eqref" reference="E: exponential sums in example"}, respectively, to treat the resulting exponential averages. In view of this, we can apply the Erdős-Turán inequality (Theorem [\[T: Erdos-Turan\]](#T: Erdos-Turan){reference-type="ref" reference="T: Erdos-Turan"}) for the probability measure $$\nu(S)=\dfrac{\sum\limits_{r\leq n\leq r+L(r)}^{} \Lambda_{w,b}(n)\delta_{\{an^2+\log r\}}(S) }{\sum\limits_{r\leq n\leq r+L(r)}^{} \Lambda_{w,b}(n)}$$ as well as Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"} (to bound the sum in the denominator) to conclude that $$\frac{1}{L(r)} \sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n) {\bf 1}_{\mathcal{B}_{r,\varepsilon}}(n)\ll \varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1).$$Therefore, if we apply the triangle inequality, we conclude that the contribution of the set $\mathcal{B}_{r,\varepsilon}$ to the average over $[r,r+L(r)]$ is at most $O(\varepsilon+o_w(1)\log \frac{1}{\varepsilon} +o_r(1))$. This is acceptable if we send $R\to +\infty$, then $w\to +\infty$, and then $\varepsilon\to 0$ at the end.
Ignoring the error terms, which turn out to be acceptable, we can rewrite the expression in [\[E: example second reduction\]](#E: example second reduction){reference-type="eqref" reference="E: example second reduction"} as $$\mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r) }\big(\Lambda_{w,b}(n)-1\big)T^{{\left \lfloor p_r(n) \right \rfloor}}f_1\cdot T^{{\left \lfloor an^2+\log r \right \rfloor}}f_2\cdot T^{{\left \lfloor \log^2 r \right \rfloor}}f_3 \Big\rVert_{L^2(\mu)}.$$ Now, the iterates satisfy the assumptions of Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"}. This is true for the first iterate since we have a good bound on the discrepancy and it is also true for the second iterate because the polynomial $an^2$ has an irrational coefficient (so we can use its well-distribution modulo 1). For the third one, our claim is obvious because we simply have an integer in the iterate. Therefore, we can bound the inner average by a constant multiple of the norm $$\left\Vert \Lambda_{w,b}-1\right\Vert_{U^s(r,r+L(r)]}$$ with some error terms that we will ignore here. Finally, we invoke Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"} to show that the average $$\mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R}\left\Vert \Lambda_{w,b}-1\right\Vert_{U^s(r,r+L(r)]}$$converges to 0, which leads us to our desired conclusion. ## Some preparatory lemmas Let us fix a Hardy field $\mathcal{H}$. Firstly, we will need a basic lemma that relates the growth rate of a Hardy field function of polynomial growth to the growth rate of its derivative. To do this, we recall a lemma due to Frantzikinakis [@Fra-equidsitribution Lemma 2.1], as well as [@Tsinas Proposition A.1]. **Lemma 30**.
*Let $a\in \mathcal{H}$ satisfy $t^{-m}\prec a(t)\prec t^m$ for some positive integer $m$ and assume that $a(t)$ does not converge to a non-zero constant as $t\to+\infty$. Then, $$\frac{a(t)}{t\log^2 t}\prec a'(t)\ll \frac{a(t)}{t}.$$* Observe that if a function $a(t)$ satisfies the growth inequalities in the hypothesis of this lemma, then the function $a'(t)$ satisfies $\frac{t^{-1-m}}{\log^2 t} \prec a'(t)\prec t^{m-1}$. Therefore, we deduce the relations $t^{-m-2} \prec a'(t)\prec t^{m+2}$, which implies that the function $a'(t)$ satisfies a similar growth condition. Provided that the function $a'(t)$ does not converge to a non-zero constant as $t\to+\infty$, the above lemma can then be applied to the function $a'(t)$. When a function $a(t)$ is strongly non-polynomial and dominates the logarithmic function $\log t$, one can get a nice ordering relation for the growth rates of consecutive derivatives. This is the content of the following proposition. **Proposition 31**. *[@Tsinas Proposition A.2][\[P: Taylor expansion-preliminary\]]{#P: Taylor expansion-preliminary label="P: Taylor expansion-preliminary"} Let $a\in \mathcal{H}$ be a function of polynomial growth that is strongly non-polynomial and also satisfies $a(t)\succ \log t$. Then, for all sufficiently large $k\in \mathbb{N}$, we have $$1\prec \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}\prec t.$$* **Remark 7**. 
*The proof of Proposition [\[P: Taylor expansion-preliminary\]](#P: Taylor expansion-preliminary){reference-type="ref" reference="P: Taylor expansion-preliminary"} in [@Tsinas] establishes the fact that if $a$ satisfies the previous hypotheses, then the derivatives of $a$ always satisfy the conditions of Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"}.* This proposition is the main tool used to show that a strongly non-polynomial function $a(t)$ can be approximated by polynomials in short intervals. Indeed, assume that a positive sub-linear function $L(t)$ satisfies $$\label{E: growth relations} \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\prec L(t) \prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$$for some sufficiently large $k\in \mathbb{N}$ (large enough so that the inequalities in Proposition [\[P: Taylor expansion-preliminary\]](#P: Taylor expansion-preliminary){reference-type="ref" reference="P: Taylor expansion-preliminary"} hold). In particular, this implies that $a^{(k+1)}(t)\to 0$ as $t\to+\infty$. Then, we can use the Taylor expansion around the point $N$ to write $$a(N+h)=a(N) +{ha'(N)}+\dots +\frac{h^ka^{(k)}(N)}{k!}+\frac{h^{k+1}a^{(k+1)} (\xi_{N,h}) }{(k+1)!}\ \text{for some } \xi_{N,h}\in [N,N+h]$$for every $0\leq h\leq L(N)$. However, we observe that $$\Bigl| \frac{h^{k+1}a^{(k+1)} (\xi_{N,h} ) }{(k+1)!} \Bigr| \leq \frac{L(N)^{k+1} |a^{(k+1)} (N )| }{(k+1)!}=o_N(1),$$where we used the fact that $|a^{(k+1)}(t)|\to 0$ monotonically (since $a^{(k+1)}(t)\in \mathcal{H}$). Therefore, we have $$a(N+h)= a(N) +{ha'(N)}+\dots +\frac{h^ka^{(k)}(N)}{k!}+o_N(1),$$which implies that the function $a(N+h)$ is essentially a polynomial in $h$. The following proposition shows that if the function $L(t)$ satisfies certain growth assumptions, then a strongly non-polynomial function $a(t)$ can be approximated by a polynomial of some degree $k$. **Proposition 32**.
*Let $a\in \mathcal{H}$ be a strongly non-polynomial function of polynomial growth, such that $a(t)\succ \log t$. Assume that $L(t)$ is a positive sub-linear function, such that $1\prec L(t)\ll t^{1-\varepsilon}$ for some $\varepsilon>0$. Then, there exists a non-negative integer $k$ depending on the function $a(t)$ and $L(t)$, such that $$\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\prec L(t)\prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}},$$where we adopt the convention that $\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}$ denotes the constant function 1, when $k=0$.* *Proof.* We split the proof into two cases depending on whether $a$ is sub-fractional or not. Assume first that $a(t)\ll t^{\delta}$ for all $\delta>0$. We will establish the claim for $k=0$. This means that functions that are sub-fractional become essentially constant when restricted to intervals of the form $[N, N+L(N)]$. The left inequality is obvious. Furthermore, since $a(t)\prec t^{\varepsilon}$, Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"} implies that $$a'(t)\prec \frac{1}{t^{1-\varepsilon}}\ll \frac{1}{L(t)},$$ which yields the desired result. Assume now that $a(t)\succ t^{\delta}$ for some $\delta>0$. Observe that, in this case, we have that $$\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}} \prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$$for $k$ large enough, due to Proposition [\[P: Taylor expansion-preliminary\]](#P: Taylor expansion-preliminary){reference-type="ref" reference="P: Taylor expansion-preliminary"}. We also consider the integer $d$, such that $t^d\prec a(t)\prec t^{d+1}$. This number exists because the function $a$ is strongly non-polynomial. If $L(t)\prec \bigl| a^{(d+1)}(t) \bigr|^{-\frac{1}{d+1}}$, then the claim holds for $k={d}$, since $\bigl| a^{(d)}(t) \bigr|^{-\frac{1}{d}}\prec 1\prec L(t)$. 
It suffices to show that there exists $k\in \mathbb{N}$, such that $L(t)\prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$, which, in turn, follows if we show that $$\label{E: large derivatives surpass sub-linear powers} t^{1-\varepsilon}\prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$$for some $k\in \mathbb{N}$. We can rewrite the above inequality as $a^{(k+1)}(t)\prec t^{(k+1)(\varepsilon-1)}$. However, since the function $a(t)$ is strongly non-polynomial and $a(t)\succ \log t$, the functions $a^{(k)}(t)$ satisfy the hypotheses of Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"} (see also Remark [Remark 7](#R: why we can use Lemma 4.1 for derivatives){reference-type="ref" reference="R: why we can use Lemma 4.1 for derivatives"}). Therefore, iterating the aforementioned lemma, we deduce that $$a^{(k+1)}(t)\ll \frac{a(t)}{t^{k+1}}.$$ Hence, it suffices to find $k$ such that $a(t)\ll t^{(k+1)\varepsilon}$ and such a number exists, because the function $a(t)$ has polynomial growth. ◻ **Remark 8**. *The condition $L(t)\prec t^{1-\varepsilon}$ is necessary. For example, if $a(t)=t\log t$ and $L(t)=\frac{t}{\log t}$, then for any $k\in \mathbb{N}$, we can write $$(N+h)\log (N+h)=N\log N+\dots +\frac{C_1h^k}{N^{k-1}} +\frac{C_2h^{k+1}}{\xi_{N,h}^k}$$for every $0\leq h\leq \frac{N}{\log N}$ and some numbers $C_1,C_2\in \mathbb{R}$. However, there is no positive integer $k$ for which the last term in this expansion can be made to be negligible since $\frac{N}{\log N}\succ N^{\frac{k}{k+1}}$ for all $k\in \mathbb{N}$. 
Essentially, in order to approximate the function $t\log t$ in these specific short intervals, one would be forced to use the entire Taylor series instead of some appropriate cutoff.* ## Eliminating the error terms in the approximations In the previous subsection, we saw that any Hardy field function can be approximated by polynomials in short intervals using the Taylor expansion. Namely, if $a(t)$ diverges and $L(t)\to+\infty$ is a positive function, such that $$\label{E: ti na valw twra edw?} \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\prec L(t)\prec \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$$then, for any $0\leq h\leq L(N)$, we have $$a(N+h)=a(N) +\dots +\frac{h^ka^{(k)}(N)}{k!}+\frac{h^{k+1}a^{(k+1)} (\xi_{N,h}) }{(k+1)!}=p_N(h)+\theta_N(h)$$for some $\xi_{N,h}\in [N,N+h]$, where we denote $$p_N(h)=a(N) +\dots +\frac{h^ka^{(k)}(N)}{k!}.$$ Observe that our growth assumption on $L(t)$ implies that the term $\theta_N(h)$ is bounded by a quantity that converges to 0, as $N\to+\infty$. Therefore, for large values of $N$, we easily deduce that $${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor p_N(h) \right \rfloor}+\varepsilon_{N,h},$$where $\varepsilon_{N,h}\in \{-1,0,1\}$. In order to be able to apply Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"}, we will need to eliminate the error terms $\varepsilon_{N,h}$. We will consider three distinct cases, which are tackled using somewhat different arguments. ### The case of fast-growing functions Firstly, we establish the main proposition that will allow us to remove the error terms in the case of functions that contain a "non-polynomial part" which does not grow too slowly.
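The displayed relation ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor p_N(h) \right \rfloor}+\varepsilon_{N,h}$ with $\varepsilon_{N,h}\in\{-1,0,1\}$ can be checked numerically under illustrative choices (none of which are fixed by the text): $a(t)=t^{3/2}$, $L(N)=N^{2/5}$ and the corresponding degree $k=2$.

```python
import math

def taylor_floor_errors(N, k=2):
    """Set of values floor(a(N+h)) - floor(p_N(h)) for a(t) = t**1.5 and
    h = 0, ..., floor(L(N)) with L(N) = N**0.4, where p_N is the degree-k
    Taylor polynomial of a at the point N."""
    def deriv(j, t):
        # j-th derivative of t**1.5
        coef = 1.0
        for i in range(j):
            coef *= 1.5 - i
        return coef * t ** (1.5 - j)

    H = int(N ** 0.4)
    errors = set()
    for h in range(H + 1):
        a_val = (N + h) ** 1.5
        p_val = sum(deriv(j, N) * h ** j / math.factorial(j) for j in range(k + 1))
        errors.add(math.floor(a_val) - math.floor(p_val))
    return errors

# every error term epsilon_{N,h} lies in {-1, 0, 1}
assert taylor_floor_errors(10**6) <= {-1, 0, 1}
```

Here the Lagrange remainder is of size roughly $L(N)^{3}|a'''(N)| \approx N^{6/5}\cdot N^{-3/2}\to 0$, which is exactly why the floors can only disagree by at most one.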
We will need a slight strengthening of the growth conditions in [\[E: ti na valw twra edw?\]](#E: ti na valw twra edw?){reference-type="eqref" reference="E: ti na valw twra edw?"}, which, as we saw previously, are sufficient to have a Taylor approximation in the interval $[N, N+L(N)]$. **Proposition 33**. *Let $A>0$ and let $a(t)$ be a $C^{\infty}$ function defined for all sufficiently large $t\in \mathbb{R}$. Assume $L(t)$ is a positive sub-linear function going to infinity and let $k$ be a positive integer, such that $$\label{E: strong domination conditions} 1\lll \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\lll L(t)\lll \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$$and such that the function $a^{(k+1)}(t)$ converges to 0 monotonically. Then, for $N$ large enough, we have that, for all $0\leq c\leq d<1$, $$\label{E: discrepancy of Hardy sequences on short intervals} \dfrac{\big|\{n\in [N, N+L(N)]{:}\;\{a(n)\}\in [c,d]\}\big|}{L(N)}=|d-c|+O_A(\log^{-A} N).\footnote{One can actually get a small power saving here, with an exponent that depends on $k$ and the implicit fractional powers in the growth relations of \eqref{E: strong domination conditions}, though this will not be any more useful for our purposes.}$$ Consequently, for all $N$ sufficiently large, we have that $${\left \lfloor a(N+h) \right \rfloor} ={\left \lfloor a(N) +{ha'(N)}+\dots +\frac{h^ka^{(k)}(N)}{k!} \right \rfloor}$$for all except at most $O_A(L(N)\log^{-A}(N))$ values of integers $h\in [0,L(N)]$.* *Proof.* Our hypothesis on $L(t)$ implies that there exist $\varepsilon_1,\varepsilon_2>0$ such that $$\label{E: powersavinggrowthinequality} L(t)\bigl| a^{(k)}(t) \bigr|^{\frac{1}{k}} \gg t^{\varepsilon_1} \text{ and } \ L(t)\bigl| a^{(k+1)}(t) \bigr|^{\frac{1}{k+1}}\ll t^{-\varepsilon_2}.$$In addition, the leftmost inequality in [\[E: strong domination conditions\]](#E: strong domination conditions){reference-type="eqref" reference="E: strong domination conditions"} implies that there exists $\varepsilon_3>0$, such that $a^{(k)}(t)\ll t^{-\varepsilon_3}$.
Using the Taylor expansion around the point $N$, we can write $$\label{E: Taylor approximation for C apeiro function} a(N+h)= a(N) +{ha'(N)}+\dots +\frac{h^ka^{(k)}(N)}{k!} +\frac{h^{k+1}a^{(k+1)}(\xi_h) }{(k+1)!},\ \text{ for some } \xi_h\in [N,N+h],$$for every $h\in [0,L(N)]$. We denote $$p_N(h)=a(N) +\dots +\frac{h^ka^{(k)}(N)}{k!}$$ and $$\theta_N(h)=\frac{h^{k+1}a^{(k+1)}(\xi_h) }{(k+1)!}.$$ The function $a^{(k+1)}(t)$ converges to 0 monotonically due to our hypothesis. Therefore, for sufficiently large $N$, $$\label{E: definition of theta_N} \max_{0\leq h\leq L(N)}^{} |\theta_N(h) |\leq \Bigl| \frac{a^{(k+1)}(N)}{(k+1)!} \Bigr|(L(N))^{k+1}=\theta_N,$$and the quantity $\theta_N$ is strongly dominated by the constant 1 due to [\[E: powersavinggrowthinequality\]](#E: powersavinggrowthinequality){reference-type="eqref" reference="E: powersavinggrowthinequality"}. More precisely, we have that $\theta_N\ll N^{-(k+1)\varepsilon_2}$. Let $A>0$ be any constant. We study the discrepancy of the finite polynomial sequence $$p_N(h),\ \text{where } 0\leq h\leq{L(N)}.$$ We shall establish that we have $$\Delta_{[c,d]}\big( p_N(h) \big)\ll_A \log^{-A}N$$for any choice of the interval $[c,d]\subseteq [0,1]$. To this end, we apply Theorem [\[T: Erdos-Turan\]](#T: Erdos-Turan){reference-type="ref" reference="T: Erdos-Turan"} for the finite sequence $(p_N(h))_{0\leq h\leq L(N)}$ to deduce that $$\label{E: discrepancy bound} \Delta_{[c,d]}\Big(\big (p_N(h)\big)_{0\leq h\leq L(N)} \Big)\leq \frac{C}{{\left \lfloor \log^A N \right \rfloor}}+C\sum_{m=1}^{{\left \lfloor \log^A N \right \rfloor}} \frac{1}{m}\Bigl| \underset{0\leq h \leq {L(N)}}{\mathop{\mathrm{\mathbb{E}}}} \ e(mp_N(h)) \Bigr|,$$where $C$ is an absolute constant. 
We claim that for every $1\leq m\leq {\left \lfloor \log^A N \right \rfloor}$, we have that $$\label{E: log^A saving } \Bigl| \underset{0\leq h \leq {L(N)}}{\mathop{\mathrm{\mathbb{E}}}} \ e(mp_N(h)) \Bigr|\leq \frac{1}{\log ^A N},$$provided that $N$ is sufficiently large. Indeed, assume for the sake of contradiction that there exists $1\leq m_0\leq {\left \lfloor \log^A N \right \rfloor}$, such that$$\label{E: application of Erdos-Turan} \Bigl| \underset{0\leq h\leq {L(N)}}{\mathop{\mathrm{\mathbb{E}}}} \ e(m_0p_N(h)) \Bigr|>\frac{1}{\log ^A N}.$$ The leading coefficient of $m_0p_N(h)$ is equal to $$\frac{m_0a^{(k)} (N)}{k!}.$$ Then, Lemma [Lemma 23](#L: Weyl-type estimate){reference-type="ref" reference="L: Weyl-type estimate"} implies that there exist a constant $C_k$ (depending only on $k$) and an integer $q$ satisfying $|q|\leq \log^{C_k A} N$ and such that $$\Big\lVert q\cdot\frac{m_0a^{(k)} (N)}{k!} \Big\rVert_{\mathbb{T}} \leq \frac{ \log^{C_kA} N}{{\left \lfloor L(N) \right \rfloor}^k}.$$ The number $qm_0$ is bounded in magnitude by $\log^{(C_k+1)A}(N)$, so that $$q\cdot\frac{m_0a^{(k)} (N)}{k!}\ll \log^{(C_k+1)A} N\cdot N^{-\varepsilon_3}=o_N(1).$$ Therefore, for large values of $N$, we can replace the circle norm of this quantity with the absolute value, which readily implies that $$\Bigl| q\cdot\frac{m_0a^{(k)} (N)}{k!} \Bigr|\leq \frac{ \log^{C_kA} N}{{\left \lfloor L(N) \right \rfloor}^k}\implies {\left \lfloor L(N) \right \rfloor}^k\big|a^{(k)}(N)\big|\leq k!\log^{C_kA} N.$$However, this implies that $L(t)$ cannot strongly dominate the function $\big(a^{(k)}(t)\big)^{-\frac{1}{k}}$, which is a contradiction due to our hypothesis.
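For intuition on the size of such averages (a numerical aside; the quadratic phase $\sqrt{2}\,h^2$ below is a hypothetical stand-in for $m_0p_N(h)$ and appears nowhere in the text), normalized exponential sums with a badly approximable leading coefficient exhibit strong cancellation:

```python
import cmath, math

def exp_sum_average(alpha, H):
    """|E_{0 <= h <= H} e(alpha * h^2)| with e(x) = exp(2*pi*i*x)."""
    total = sum(cmath.exp(2j * math.pi * alpha * h * h) for h in range(H + 1))
    return abs(total) / (H + 1)

# integer phase: no cancellation at all, the average is exactly 1
assert exp_sum_average(0.0, 100) == 1.0
# the badly approximable phase sqrt(2): far below the trivial bound 1
# (heuristically of size about H**(-1/2))
assert exp_sum_average(math.sqrt(2), 4000) < 0.1
```

This is the quantitative phenomenon that the Weyl-type estimate captures: the average can only be large when the leading coefficient is close to a rational with small denominator.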
We have established that for every $1\leq m\leq {\left \lfloor \log^A N \right \rfloor}$ and large $N$, inequality [\[E: log\^A saving \]](#E: log^A saving ){reference-type="eqref" reference="E: log^A saving "} holds. Substituting this in [\[E: discrepancy bound\]](#E: discrepancy bound){reference-type="eqref" reference="E: discrepancy bound"}, we deduce that $$\Delta_{[c,d]}\Big(\big(p_N(h)\big)_{0\leq h\leq {L(N)}}\Big)\leq \frac{C}{{\left \lfloor \log^A N \right \rfloor}}+C\sum_{m=1}^{{\left \lfloor \log^A N \right \rfloor}} \frac{1}{m\log^A N},$$which implies that $$\Delta_{[c,d]}\Big(\big(p_N(h)\big)_{0\leq h\leq {L(N)}}\Big)\ll \frac{A\log \log N}{\log^A N}.$$In particular, since $A$ was arbitrary, we get $$\label{E: logarithmic power savings} \Delta_{[c,d]}\Big(\big(p_N(h)\big)_{0\leq h\leq {L(N)}}\Big)\ll_A \frac{1}{\log^A N}.$$ This establishes the first part of the proposition. The second part of our statement follows from an application of the bound on the discrepancy of the finite polynomial sequence $(p_N(h))$. Indeed, we consider the set $$S_N=[0,\theta_N]\cup [1-\theta_N,1),$$where we recall that $\theta_N$ was defined in [\[E: definition of theta_N\]](#E: definition of theta_N){reference-type="eqref" reference="E: definition of theta_N"} and decays faster than a small fractional power. Then, if $\{p_N(h)\}\notin S_N$, we have ${\left \lfloor p_N(h) +\theta_N(h) \right \rfloor}={\left \lfloor p_N(h) \right \rfloor}$, as can be seen by noticing that the error term in [\[E: Taylor approximation for C apeiro function\]](#E: Taylor approximation for C apeiro function){reference-type="eqref" reference="E: Taylor approximation for C apeiro function"} is bounded in magnitude by $\theta_N$. Now, we estimate the number of integers $h\in [0, L(N)]$ for which $\{p_N(h)\}\in S_N$. 
Using the definition of discrepancy and the recently established bounds, we deduce that $$\dfrac{\bigl| \{h\in [0,{L(N)}]{:}\;\{p_N(h)\}\in [0,\theta_N] \} \bigr|}{L(N)} -\theta_N\ll_A \frac{1}{\log^A N}$$for every $A>0$. Since the number $\theta_N$ is dominated by $N^{-(k+1)\varepsilon_2}$, this implies that $$\bigl| \{h\in [0,L(N)]{:}\;\{p_N(h)\}\in [0,\theta_N] \} \bigr|\ll_A \frac{L(N)}{\log ^A N}.$$ An entirely similar argument yields the analogous relation for the interval $[1-\theta_N, 1)$. Therefore, the number of integers in $[0,L(N)]$ for which $\{p_N(h)\}\in S_N$ is at most $O_A({L(N)}\log^{-A} N)$. In conclusion, since ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor p_N(h) \right \rfloor}$ for all integers $h\in [0,L(N)]$ with $\{p_N(h)\}\notin S_N$, the number of integers that do not satisfy this last relation is $O_A(L(N)\log ^{-A} N)$, which yields the desired result. ◻ The above proposition asserts that, for almost all values of $h\in [0,L(N)]$, we can write ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor p_N(h) \right \rfloor}$. The logarithmic power saving in the statement will be helpful since we are dealing with averages weighted by the sequence $\Lambda_{w,b}(n)-1$, which has size comparable to $\log N$ on the interval $[N, N+L(N)]$. Furthermore, notice that we did not assume that $a$ is a Hardy field function in the proof. Thus, the conditions in this proposition can be used to prove a comparison result for more general iterates. ### The case of slow functions Unfortunately, the previous proposition cannot deal with functions whose Taylor approximations involve only a constant term. This case will emerge when we have sub-fractional functions (see Definition [Definition 14](#D: growthdefinitions){reference-type="ref" reference="D: growthdefinitions"}) since, as we have already remarked, these functions have a polynomial approximation of degree 0 in short intervals (assuming that $L(t)\ll t^{1-\varepsilon}$).
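The degree-0 phenomenon is easy to observe numerically (an illustrative computation with the sub-fractional choice $a(t)=\log^2 t$ and $L(t)=t^{1/2}$, neither of which is prescribed by the text): the variation of $a$ across the whole interval $[N,N+L(N)]$ is already tiny for moderate $N$.

```python
import math

def max_increment(N):
    """max over h in [0, L(N)] of |a(N+h) - a(N)| for the sub-fractional
    choice a(t) = log(t)**2 and L(N) = sqrt(N); since a is increasing,
    the maximum is attained at h = L(N)."""
    L = math.sqrt(N)
    return math.log(N + L) ** 2 - math.log(N) ** 2

# a'(t) = 2 log(t) / t, so the increment is about 2 log(N) / sqrt(N) -> 0
assert max_increment(10**8) < 0.01
```

Consequently, unless $a(N)$ happens to sit just below an integer, the floor $\lfloor a(N+h)\rfloor$ is constant on the whole short interval, which is what the next proposition exploits.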
To cover this case, we will need the following proposition, which is practically of a qualitative nature. **Proposition 34**. *Let $a(t)\in \mathcal{H}$ be a sub-fractional function such that $a(t)\succ \log t$. Assume $L(t)$ is a positive sub-linear function going to infinity and such that $L(t)\ll t^{1-\delta}$, for some $\delta>0$. Then, for every $0<\varepsilon<1$, we have the following: for all sufficiently large $R\in \mathbb{N}$, we have ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor a(N) \right \rfloor}$ for every $h\in [0,L(N)]$, for all except at most $\varepsilon R$ values of $N\in [1,R]$.* *Proof.* Observe that for any $h\in [0,L(N)]$, we have $$\label{E: approximation of subfractional} a(N+h)=a(N)+ha'(\xi_h)$$for some $\xi_h\in [N,N+h]$. In addition, since $a'(t)$ converges to 0 monotonically, we have $$|ha'(\xi_h)|\leq L(N)a'(N)\ll N^{1-\delta} a'(N) \lll 1,$$ where the last inequality follows from Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"} and the assumption that $a(t)$ is sub-fractional. In particular, there exists a positive real number $q$, such that $|ha'(\xi_h)|\ll N^{-q}$, for all $h\in [0, L(N)]$.[^13] The sequence $a(n)$ is equidistributed mod 1 by Theorem [\[T: Boshernitzan\]](#T: Boshernitzan){reference-type="ref" reference="T: Boshernitzan"}, since it dominates the function $\log t$. Now, given $\varepsilon>0$, choose a number $R_0$ such that ${R_0^{-2q}}<\varepsilon/2$. Then, for $R\geq R_0$, the number of integers $N\in [R_0,R]$ such that $\{a(N)\}\in [\frac{\varepsilon}{2},1-\frac{\varepsilon}{2}]$ is $$(R-R_0)(1-\varepsilon+o_R(1))$$ due to the fact that $a(n)$ is equidistributed.
For these values of $N$, we have that $$\{a(N)\}\notin [0,N^{-2q}]\cup [1-N^{-2q}, 1],$$ which implies that for all $h\in [0,L(N)]$, we have that ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor a(N) \right \rfloor}$, as follows easily from [\[E: approximation of subfractional\]](#E: approximation of subfractional){reference-type="eqref" reference="E: approximation of subfractional"} and the fact that the error term is $O(N^{-q})$. If we consider the integers $N$ in the interval $[1,R_0]$ as well, then the number of "bad values" (that is, the numbers $N$ for which we do not have ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor a(N) \right \rfloor}$ for every $h\in [0,L(N)]$) is at most $$R_0+(R-R_0)(\varepsilon+o_R(1)).$$ Finally, choosing $R$ sufficiently large, we get that this number is smaller than $2\varepsilon R$ and the claim follows. ◻ In simplistic terms, what we have established is that if we restrict our attention to short intervals $[N, N+L(N)]$ for the natural numbers $N$, such that $\{a(N)\}\in [\varepsilon,1-\varepsilon]$, then we can just write ${\left \lfloor a(N+h) \right \rfloor}={\left \lfloor a(N) \right \rfloor}$ for all $h\in [0, L(N)]$. Due to the equidistribution of $a(n)$ mod 1 (which follows from Theorem [\[T: Boshernitzan\]](#T: Boshernitzan){reference-type="ref" reference="T: Boshernitzan"}), this is practically true for almost all $N$, if we take $\varepsilon$ sufficiently small. ### The case of polynomial functions The final case is that of functions of the form $p(t)+x(t)$, where $p$ is a polynomial with real coefficients and $x(t)$ is a sub-fractional function. The equidistribution of the corresponding sequence will be affected only by the polynomial $p$ when restricted to short intervals.
Nonetheless, the techniques of Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"} cannot be employed, because we cannot establish quantitative bounds on the exponential sums uniformly over all real polynomials. Therefore, we will use the following proposition, which allows us to calculate the integer parts in this case. Unlike the previous two propositions which can be bootstrapped to give a similar statement for several functions, we establish this one for several functions from the outset. We do not need to concern ourselves with rational polynomials, since these can be trivially reduced to the case of integer polynomials by passing to arithmetic progressions. **Proposition 35**. *Let $k,d$ be positive integers, let $0<\varepsilon<1/2$ be a real number and let $w\in \mathbb{N}$. We define $W=\prod_{p\in \mathbb{P}{:}\;p\leq w}p$ and let $1\leq b\leq W$ be any integer with $(b,W)=1$. Suppose that $a_1,\dots, a_k\in \mathcal{H}$ are functions of the form $p_i(t)+x_i(t)$, where $p_i$ are polynomials of degree at most $d$ and with at least one irrational non-constant coefficient, while $x_i(t)$ are sub-fractional functions. 
Finally, assume that $L(t)$ is a positive sub-linear function going to infinity and such that $$t^{\frac{5}{8}}\lll L(t)\lll t.\footnote{See the notational conventions for the definition of $\lll$.}$$* *Then, for every $r$ sufficiently large in terms of $w$ and $\frac{1}{\varepsilon}$, there exists a subset $\mathcal{B}_{r,\varepsilon}$ of integers in the interval $[r,r+L(r)]$ with at most $O_k(\varepsilon L(r))$ elements, such that for all integers $n\in [r,r+L(r)]\setminus\mathcal{B}_{r,\varepsilon}$, we have $${\left \lfloor p_i(n)+x_i(n) \right \rfloor}={\left \lfloor p_i(n)+x_i(r) \right \rfloor}.$$ Furthermore, the set $\mathcal{B}_{r,\varepsilon}$ satisfies $$\label{E: bound on weighted sums of B_r} \frac{1}{L(r)} \sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n){\bf 1}_{\mathcal{B}_{r,\varepsilon}}(n)\ll_{k,d} \varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1).$$* **Remark 9**. *The $o_r(1)$ term depends on the fixed parameters $w,\varepsilon$. However, in our applications, we will send $r\to+\infty$, then we will send $w\to+\infty$, and then $\varepsilon\to 0$. We shall reiterate this observation in the proof of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}. On the other hand, the $o_w(1)$ term is the same as the one in [Lemma 20](#L: discorrelation of W-tricked with polynomial phases){reference-type="ref" reference="L: discorrelation of W-tricked with polynomial phases"} and depends on the degree $d$ of the polynomials, which will be fixed in applications.* *Proof of Proposition [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"}.* Fix an index $1\leq i\leq k$ and consider a sufficiently large integer $r$.
Using the mean value theorem and the fact that $|x_i'(t)|$ decreases to 0 faster than all fractional powers by Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"}, we deduce that $$\max_{0\leq h\leq L(r)} |x_{i}(r+h)-x_i(r)|\leq L(r)|x'_i(r)|\lll 1.$$In particular, there exists $\delta_0>0$ depending only on the functions $a_1,\dots, a_k$ and $L(t)$, such that $$\label{E: locally constant function} \max_{0\leq h\leq L(r)} |x_{i}(r+h)-x_i(r)|\ll r^{-\delta_0}$$for all $1\leq i\leq k$. Thus, we observe that if $\{p_i(n)+x_i(r)\}\in (\varepsilon,1-\varepsilon)$ and $r$ is large enough in terms of $1/\varepsilon$, then we have that $${\left \lfloor p_i(n)+x_i(n) \right \rfloor}={\left \lfloor p_i(n)+x_i(r) \right \rfloor}.$$Naturally, we consider the set $$\mathcal{B}_{i,r,\varepsilon}=\{n\in [r,r+L(r)]{:}\;\{p_i(n)+x_i(r)\}\in [0,\varepsilon]\cup [1-\varepsilon,1)\}$$and take $\mathcal{B}_{r,\varepsilon}=\mathcal{B}_{1,r,\varepsilon}\cup\dots\cup\mathcal{B}_{k,r,\varepsilon}$. Now, we observe that the polynomial sequence $p_i$ is well-distributed modulo 1, since it has at least one non-constant irrational coefficient. Therefore, if $r$ is large enough, we have that the set $\mathcal{B}_{i,r,\varepsilon}$ has less than $3\varepsilon L(r)$ elements (say). Using the union bound, we conclude that the set $\mathcal{B}_{r,\varepsilon}$ has $O(\varepsilon k L(r))$ elements. This shows the first requirement of the proposition. We have to establish [\[E: bound on weighted sums of B_r\]](#E: bound on weighted sums of B_r){reference-type="eqref" reference="E: bound on weighted sums of B_r"}. We shall set $M={\left \lfloor \varepsilon^{-1} \right \rfloor}$ for brevity so that $r$ is assumed to be very large in terms of $M$. 
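The size bound on $\mathcal{B}_{i,r,\varepsilon}$ just obtained can be checked numerically (hypothetical data only: the phase $p(n)=\sqrt{2}\,n^2$ with the shift $x_i(r)$ omitted, $r=10^5$, $L(r)=\lfloor r^{0.7}\rfloor$ and $\varepsilon=0.1$, none of which are fixed by the text): the proportion of $n\in[r,r+L(r)]$ landing in the bad set is indeed below $3\varepsilon$.

```python
import math

def bad_set_fraction(r, eps):
    """Fraction of n in [r, r + L(r)] whose fractional part {sqrt(2) * n^2}
    lies in [0, eps] u [1 - eps, 1), with L(r) = floor(r**0.7)."""
    L = int(r ** 0.7)
    alpha = math.sqrt(2)
    bad = sum(
        1
        for n in range(r, r + L + 1)
        if (alpha * n * n) % 1.0 <= eps or (alpha * n * n) % 1.0 >= 1.0 - eps
    )
    return bad / (L + 1)

frac = bad_set_fraction(10**5, 0.1)
# well-distribution of the quadratic phase makes this close to 2 * eps = 0.2
assert frac < 0.3
```

This reflects the well-distribution modulo 1 of a polynomial with an irrational non-constant coefficient, even along the short interval $[r, r+L(r)]$.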
Since the polynomials $p_i$ have at least one non-constant irrational coefficient, we can use Weyl's criterion for well-distribution (see, for instance, [@Kuipers-Niederreiter Theorem 5.2, Chapter 1]) to conclude that $$\max_{1\leq m\leq M} \Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} e\big(m(p_i(n)+x_i(r) ) \big) \Bigr|=o_r(1),$$for all $r$ sufficiently large in terms of $M$, as we have assumed to be the case.[^14] On the other hand, Lemma [Lemma 20](#L: discorrelation of W-tricked with polynomial phases){reference-type="ref" reference="L: discorrelation of W-tricked with polynomial phases"} implies that $$\max_{1\leq m\leq M} \Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} \big(\Lambda_{w,b}(n)-1 \big) e\big(m(p_i(n)+x_i(r) ) \big) \Bigr|=o_w(1)$$for $r$ sufficiently large in terms of $w$. Combining the last two bounds, we deduce that $$\label{E: W-tricked Lambda exponential sums} \max_{1\leq m\leq M} \Bigl| \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n) e\big(m(p_i(n)+x_i(r) ) \big) \Bigr|=o_w(1)+o_r(1).$$ Since we have estimates on the exponential sums weighted by $\Lambda_{w,b}(n)$, we can now make the passage to [\[E: bound on weighted sums of B_r\]](#E: bound on weighted sums of B_r){reference-type="eqref" reference="E: bound on weighted sums of B_r"}. 
To this end, we apply Theorem [\[T: Erdos-Turan\]](#T: Erdos-Turan){reference-type="ref" reference="T: Erdos-Turan"} for the probability measure $$\nu(S)=\frac{\sum\limits_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n)\delta_{\{p_i(n)+x_i(r)\}}(S) }{\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n) }.\footnote{ The denominator is non-zero if $r$ is large enough.}$$Setting $$S_r=\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)$$ for brevity, we conclude that $$\begin{gathered} \frac{\sum\limits_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n)\delta_{\{p_i(n)+x_i(r)\}}\big([0,\varepsilon]\cup[1-\varepsilon,1)\big) }{S_r }\ll 2\varepsilon+ \frac{1}{M} +\\ \sum_{m=1}^{M}\frac{1}{m}\Bigl| \frac{1}{S_r}\sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n) e\big(m(p_i(n)+x_i(r) ) \big) \Bigr|,\end{gathered}$$ where the implied constant is absolute. Applying the bounds in [\[E: W-tricked Lambda exponential sums\]](#E: W-tricked Lambda exponential sums){reference-type="eqref" reference="E: W-tricked Lambda exponential sums"} and recalling the definition of $\mathcal{B}_{i,r,\varepsilon}$, we conclude that $$\begin{gathered} \label{E: forza aekara} \sum\limits_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n){\bf 1}_{\mathcal{B}_{i,r,\varepsilon}}(n)\ll \Big(\varepsilon+\frac{1}{M}\Big)S_r+\sum_{m=1}^M \frac{L(r)}{m}(o_w(1)+o_r(1))\\ \ll \varepsilon S_r+ L(r)\big(o_w(1)+o_r(1)\big)\log \frac{1}{\varepsilon},\end{gathered}$$since $M={\left \lfloor \varepsilon^{-1} \right \rfloor}$. 
Next, we bound $S_r$ by applying Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"} to conclude that $$\begin{gathered} \label{E: sieve upper bound on the S_r} S_r=\frac{\phi(W)}{W}\sum_{\underset{n\equiv b\;(W)}{Wr+b\leq n\leq Wr+b+WL(r)}} \Lambda(n)\leq \frac{\phi(W)}{W}\Big( \frac{2WL(r)\log r}{\phi(W)\log\big(\frac{L(r)}{W}\big)} +\\ O\Big(\frac{L(r)}{\log r}\Big)+ O(r^{1/2}\log r)\Big)\ll L(r)(1+o_r(1)),\end{gathered}$$where we used the fact that $L(r)\gg r^{5/8}$ to bound the first fraction by an absolute constant. Applying this in [\[E: forza aekara\]](#E: forza aekara){reference-type="eqref" reference="E: forza aekara"}, we conclude that $$\frac{1}{L(r)} \sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n){\bf 1}_{\mathcal{B}_{i,r,\varepsilon}}(n)\ll \varepsilon(1+o_r(1))+\big(o_w(1)+o_r(1)\big)\log \frac{1}{\varepsilon}.$$ Finally, we recall that $\mathcal{B}_{r,\varepsilon}=\mathcal{B}_{1,r,\varepsilon}\cup\dots\cup\mathcal{B}_{k,r,\varepsilon}$ and use the union bound to get$$\frac{1}{L(r)} \sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n){\bf 1}_{\mathcal{B}_{r,\varepsilon}}(n)\ll_k \varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1),$$provided that $r$ is very large in terms of $1/\varepsilon, w$. This is the desired conclusion. ◻ ## Simultaneous approximation of Hardy field functions In view of Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"}, we would like to show that we can find a function $L(t)$ such that the growth rate condition of the statement is satisfied for several functions in $\mathcal{H}$ simultaneously. This is the content of the following proposition.
We will only need to consider the case where the functions dominate some fractional power, since for sub-fractional functions, we have Propositions [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"} and [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"} that can cover them adequately. We refer again to our notational conventions in Section [1](#Section-Introduction){reference-type="ref" reference="Section-Introduction"} for the notation $\lll$. **Proposition 36**. *Let $\ell\in \mathbb{N}$ and suppose $a_1,\dots,a_{\ell}\in \mathcal{H}$ are strongly non-polynomial functions of polynomial growth that are not sub-fractional. Then, for all $0<c<1$, there exists a positive sub-linear function $L(t)$, such that $t^c\ll L(t)\ll t^{1-\varepsilon}$ for some $\varepsilon>0$ and such that, for all $1\leq i\leq \ell,$ there exist positive integers $k_i$, which satisfy $$1\lll \bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}}\lll L(t)\lll \bigl| a_i^{(k_i+1)}(t) \bigr|^{-\frac{1}{k_i+1}}.$$ Furthermore, the integers $k_i$ can be chosen to be arbitrarily large, provided that $c$ is sufficiently close to 1.* **Proposition 37**. *Let $t$ be a tempered function
with $\lim_{x\to\infty}\frac{xt'(x)}{t(x)}=\alpha\in (0,\infty)\setminus \mathbb{N}$ and let $0<c<1.$ For all large enough positive integers $k$, we have $$x^c\prec \big(t^{(k)}(x)\big)^{-\frac{1}{k}}\lll \big(t^{(k+1)}(x)\big)^{-\frac{1}{k+1}}.$$* *Proof.* If $d$ is the degree of $t,$ we have that $t(x)\prec x^{d+1}.$ Since $0<c<1,$ for all large enough $k\in \mathbb{N}$ we have $t(x)\prec x^{k(1-c)},$ so $$\frac{t^{(k)}(x)}{x^{-ck}}=\frac{t(x)}{x^{k(1-c)}}\cdot \prod_{i=1}^k \frac{xt^{(i)}(x)}{t^{(i-1)}(x)}\to 0.$$ Hence, $t^{(k)}(x)\prec x^{-ck},$ or, $x^c\prec (t^{(k)}(x))^{-\frac{1}{k}}.$ For the aforementioned $k$'s, let $0<q<1$ be such that $x^{kq}\prec t(x).$ Since $\alpha\notin \mathbb{N},$ $$\frac{x^{k(q-1)}}{t^{(k)}(x)}=\frac{x^{kq}}{t(x)}\cdot \prod_{i=1}^k \frac{t^{(i-1)}(x)}{xt^{(i)}(x)}\to 0,$$ so $x^{k(q-1)}\prec t^{(k)}(x).$ Since $\lim_{x\to\infty}\frac{xt^{(k+1)}(x)}{t^{(k)}(x)}\in\mathbb{R}\setminus\{0\},$ we have that $t^{(k+1)}(x)\ll x^{-1}t^{(k)}(x),$ so, for $\delta=\frac{q}{k+1},$ we have $$\frac{(t^{(k+1)}(x))^{-\frac{1}{k+1}}}{(t^{(k)}(x))^{-\frac{1}{k}}}\gg \frac{x^{\frac{1}{k+1}}(t^{(k)}(x))^{-\frac{1}{k+1}}}{(t^{(k)}(x))^{-\frac{1}{k}}}=x^{\frac{1}{k+1}}(t^{(k)}(x))^{\frac{1}{k(k+1)}}\succ x^{\frac{1}{k+1}}\cdot x^{\frac{q-1}{k+1}}=x^\delta,$$ as was to be shown. ◻ *Proof of Proposition [Proposition 36](#P: Taylor expansion final000){reference-type="ref" reference="P: Taylor expansion final000"}.* We will use induction on $\ell$. For $\ell=1$, it suffices to show that there exists a positive integer $k$, such that the function $\bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$ strongly dominates the function $\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}$.
Then, we can pick the function $L(t)$ to be the geometric mean of these two functions to get our claim.[^15] Firstly, note that if we pick $k$ sufficiently large, then we can ensure that $(a^{(k)}(t))^{-\frac{1}{k}}\gg t^{c}$, which would also imply the lower bound on the other condition imposed on the function $L(t)$. To see why this last claim is valid, observe that the derivatives of $a$ satisfy the assumptions of Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"}, so that we have $a^{(k)}(t)\ll t^{-k}a(t)$. Thus, if $d$ is a positive integer, such that $t^{d}$ grows faster than $a(t)$ and we choose $k>\frac{d}{c}-1$, we verify that our claim holds. Secondly, we will show that for all $k\in \mathbb{N}$, we have $$\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\ll t^{1-\varepsilon}$$ for some $0<\varepsilon<1$, as this relation (with $k+1$ in place of $k$) will yield the upper bound on the growth of the function $L(t)$ that we chose above. For the sake of contradiction, we assume that this fails and use the lower bound from Lemma [Lemma 30](#L: Frantzikinakis growth inequalities){reference-type="ref" reference="L: Frantzikinakis growth inequalities"}, to deduce that $$t^{k(\varepsilon-1)}\gg a^{(k)}(t) \succ \frac{a(t)}{t^{k}\log^{2k} t}$$for every $0<\varepsilon<1$. This implies that $a(t)\ll t^{k\varepsilon}\log^{2k}t$ for all small $\varepsilon$, which contradicts the hypothesis that $a(t)$ is not sub-fractional. We remark in passing that this argument also indicates that the integer $k$ can be made arbitrarily large by choosing $c$ to be sufficiently close to $1$, as the last claim in our statement suggests.
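In exponent terms, the geometric-mean choice of $L(t)$ mentioned above can be made explicit on a model function (the illustrative choice $a(t)=t^{3/2}$ with $k=3$; the proof does not fix these): the geometric mean of the two bounding functions has growth exponent exactly halfway between theirs, so both strong dominations hold with a fractional-power gap.

```python
from fractions import Fraction

def bound_exponent(alpha, k):
    """Growth exponent of |a^(k)(t)|^(-1/k) for a(t) = t**alpha."""
    return -(alpha - k) / Fraction(k)

alpha, k = Fraction(3, 2), 3
lower = bound_exponent(alpha, k)      # exponent of |a^(k)|^(-1/k)
upper = bound_exponent(alpha, k + 1)  # exponent of |a^(k+1)|^(-1/(k+1))
L_exp = (lower + upper) / 2           # geometric mean of functions = average of exponents

# L(t) sits a fractional power above the lower bound and below the upper one
assert lower < L_exp < upper
print(lower, L_exp, upper)  # → 1/2 9/16 5/8
```

In this example $L(t)=t^{9/16}$ strongly dominates $|a'''(t)|^{-1/3}\asymp t^{1/2}$ and is strongly dominated by $|a^{(4)}(t)|^{-1/4}\asymp t^{5/8}$, as required.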
In order to complete the base case of the induction, we show that for all sufficiently large $k$, we have $$\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\lll\bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}.$$ Equivalently, we prove that $$\label{E: fractionally away} \frac{ \bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}}{ \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}}\gg t^{\delta}$$for some $\delta>0$ that will depend on $k$. Choose a real number $0<q<1$ (the value of $q$ depends on $k$), such that $\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\ll t^{1-q}$, which can be done as we demonstrated above. In order to establish [\[E: fractionally away\]](#E: fractionally away){reference-type="eqref" reference="E: fractionally away"}, we combine the inequality $a^{(k)}(t)\gg ta^{(k+1)}(t)$ with the inequality $\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\ll t^{1-q}$, which after some computations gives the desired result for $\delta=q/(k+1)$. This completes the base case. Assume that the claim has been established for the integer $\ell$. Now, let $a_1,\dots, a_{\ell+1}$ be functions that satisfy the hypotheses of the proposition. Our induction hypothesis implies that there exists a function $L(t)$ with $t^c \ll L(t)\ll t^{1-\varepsilon}$ and integers $k_1,\dots,k_{\ell}$, such that $$\bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}}\lll L(t)\lll \bigl| a_i^{(k_i+1)}(t) \bigr|^{-\frac{1}{k_i+1}},\;\;1\leq i\leq \ell.$$ Due to Proposition [Proposition 32](#P: Taylor expansion main){reference-type="ref" reference="P: Taylor expansion main"}, there exists a positive integer $s$, such that $$\label{E: prec equation} \bigl| a_{\ell+1}^{(s)}(t) \bigr|^{-\frac{1}{s}}\prec L(t)\prec \bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}.$$ Without loss of generality, we may assume that $c$ is sufficiently close to 1.
This implies that the integer $s$ can be chosen to be sufficiently large as well, so that the relation $\bigl| a_{\ell+1}^{(s)}(t) \bigr|^{-\frac{1}{s}}\lll \bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$ holds, as we established in the base case of the induction. If each function strongly dominates the preceding one in [\[E: prec equation\]](#E: prec equation){reference-type="eqref" reference="E: prec equation"}, then we are finished. Therefore, assume that $L(t)$ is not strongly dominated by the function $\bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$ (the other case is similar). Note that for every $1\leq i\leq \ell$, we have that $$\bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}} \lll \bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}.$$Indeed, since the function $L(t)$ strongly dominates the function $\bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}}$ (by the induction hypothesis) and $L(t)$ grows slower than the function $\bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$, this claim follows immediately. Among the functions $a_1,\dots,a_{\ell+1}$, we choose a function for which the growth rate of $\bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}}$ is maximized.[^16] Assume that this happens for the index $i_0\in \{1,\dots,\ell+1\}$ and observe that the function $\bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$ strongly dominates $\bigl| a_{i_0}^{(k_{i_0})}(t) \bigr|^{-\frac{1}{k_{i_0}}}$, because the first function grows faster than $L(t)$ and $L(t)$ strongly dominates the latter (in the case $i_0={\ell+1}$, this follows from the fact that $\bigl| a_{\ell+1}^{(s)}(t) \bigr|^{-\frac{1}{s}}\lll \bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$). Define the function ${\widetilde{L}(t)}$ to be the geometric mean of the functions $\bigl| a_{i_0}^{(k_{i_0})}(t) \bigr|^{-\frac{1}{k_{i_0}}}$ and $\bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$.
Observe that this function grows slower than the function $L(t)$, since it is strongly dominated by the function $\bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}},$ while the original function $L(t)$ is not. Due to its construction, we deduce that the function ${\widetilde{L}(t)}$ satisfies $$\bigl| a_{\ell+1}^{(s)}(t) \bigr|^{-\frac{1}{s}}\lll {\widetilde{L}(t)}\lll \bigl| a_{\ell+1}^{(s+1)}(t) \bigr|^{-\frac{1}{s+1}}$$and $$\bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}}\lll {\widetilde{L}(t)}$$for all $1\leq i\leq \ell$. This is a simple consequence of the fact that $\widetilde{L}(t)$ strongly dominates the function $\bigl| a_{i_0}^{(k_{i_0})}(t) \bigr|^{-\frac{1}{k_{i_0}}}$ and of the fact that the index $i_0$ was chosen so that the growth rate of the associated function is maximized. In addition, the function $L(t)$ grows faster than the function ${\widetilde{L}(t)}$, which implies that $${\widetilde{L}(t)}\prec L(t) \lll \bigl| a_i^{(k_i+1)}(t) \bigr|^{-\frac{1}{k_i+1}}$$for all $1\leq i\leq \ell$. The analogous relation in the case $i=\ell+1$ is also correct, as we pointed out previously. Therefore, the function ${\widetilde{L}(t)}$ satisfies all of our required properties and the induction is complete. Finally, the assertion that the integers $k_i$ can be made arbitrarily large follows by enlarging $c$ appropriately, together with the fact that, given a fixed $k_i\in \mathbb{N}$, the function $\bigl| a_i^{(k_i+1)}(t) \bigr|^{-\frac{1}{k_i+1}}$ cannot dominate all powers $t^c$ with $c<1$, as we showed in the base case of the induction. ◻ We can actually weaken the hypothesis that the functions are strongly non-polynomial. The following proposition is more convenient to use and its proof is an immediate consequence of Proposition [Proposition 36](#P: Taylor expansion final000){reference-type="ref" reference="P: Taylor expansion final000"}. **Proposition 38**.
*Let $\ell\in \mathbb{N}$ and suppose $a_1,\dots,a_{\ell}\in \mathcal{H}$ are functions of polynomial growth, such that $|a_i(t)-p(t)|\ggg 1$, for all real polynomials $p(t)$ and every $i\in \{1,\dots, \ell\}$. Then, for all $0<c<1$, there exists a positive sub-linear function $L(t)$, such that $t^c\prec L(t)\ll t^{1-\varepsilon}$ for some $\varepsilon>0$ and such that there exist positive integers $k_i$, which satisfy $$1\lll \bigl| a_i^{(k_i)}(t) \bigr|^{-\frac{1}{k_i}}\lll L(t)\lll \bigl| a_i^{(k_i+1)}(t) \bigr|^{-\frac{1}{k_i+1}}.$$* *Proof.* Each of the functions $a_i$ can be written in the form $p_i(t)+x_i(t)$, where $p_i$ is a polynomial with real coefficients and $x_i\in \mathcal{H}$ is strongly non-polynomial. The hypothesis implies that the functions $x_i$ are not sub-fractional. If $k$ is large enough, then we have $a_i^{(k)}(t)=x_i^{(k)}(t)$ for all $t\in \mathbb{R}$. The conclusion follows from Proposition [Proposition 36](#P: Taylor expansion final000){reference-type="ref" reference="P: Taylor expansion final000"} applied to the functions $x_i(t)$, where the corresponding integers $k_i$ are chosen large enough so that the equality $a_i^{(k_i)}(t)=x_i^{(k_i)}(t)$ holds. ◻ # The main comparison {#Section-The main comparison} In this section, we will establish the main proposition that asserts that averages weighted by the W-tricked von-Mangoldt function are morally equal to the standard Cesàro averages over $\mathbb{N}$. 
In order to do this, we will use the polynomial approximations for our Hardy field functions and we will try to remove the error terms arising from these approximations using Propositions [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"}, [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"} and [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"}. Firstly, we will use a lemma that allows us to pass from long averages over the interval $[1,N]$ to shorter averages over intervals of the form $[N,N+L(N)]$. This lemma is similar to [@Tsinas Lemma 3.3], the only difference being the presence of the unbounded weights. **Lemma 39**. *Let $(A_{n})_{n\in \mathbb{N}}$ be a sequence in a normed space, such that $\left\Vert A_{n}\right\Vert\leq 1$ and let $L(t)\in\mathcal{H}$ be an (eventually) increasing sub-linear function, such that $L(t)\gg t^{\varepsilon}$ for some $\varepsilon>0$. Suppose that $w$ is a fixed natural number. 
Then, we have $$\Big\lVert \underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(r)-1\big)A_r \Big\rVert\leq \underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \ \Big\lVert \underset{r\leq n\leq r+L(r)}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(n)-1\big) A_n \Big\rVert+o_R(1),$$uniformly for all $1\leq b\leq W$ with $(b,W)=1$.* *Proof.* Using the triangle inequality, we deduce that $$\underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \ \Big\lVert \underset{r\leq n\leq r+L(r)}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(n)-1\big) A_{n} \Big\rVert\geq \Big\lVert \underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \big( \underset{r\leq n\leq r+L(r)}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(n)-1\big) A_{n} \big) \Big\rVert.$$Therefore, our result will follow if we show that $$\Big\lVert \underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \Big( \underset{r\leq n\leq r+L(r)}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(n)-1\big)A_{n}\Big) -\underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(r)-1\big)A_{r} \Big\rVert=o_R(1).$$ Let $u$ denote the inverse of the function $t+L(t)$, which is well-defined for sufficiently large $t$ due to monotonicity. Furthermore, it is straightforward to derive that $\lim\limits_{t\to+\infty} u(t)/t=1$ from the fact that $t+L(t)$ also grows linearly. Now, we have $$\begin{gathered} \underset{1\leq r\leq R}{\mathop{\mathrm{\mathbb{E}}}} \Big(\ \underset{r\leq n\leq r+L(r)}{\mathop{\mathrm{\mathbb{E}}}} \big(\Lambda_{w,b}(n)-1\big)A_{n} \Big) = \frac{1}{R} \Big( \sum_{n=1}^{R}p_R(n)\big(\Lambda_{w,b}(n)-1\big)A_{n} + \\ \sum_{n=R+1}^{R+L(R)} p_R(n)\big(\Lambda_{w,b}(n)-1\big)A_{n}\Big) \end{gathered}$$for some real numbers $p_R(n)$, which denote the number of appearances of $A_n$ in the previous expression (weighted by the term $1/L(r)$ that appears on each inner average). 
Assuming that $n$ (and thus $R$) is sufficiently large, so that $u(n)$ is positive, we can calculate $p_R(n)$ to be equal to $$p_R(n)= \frac{1}{L({\left \lfloor u(n) \right \rfloor})+1}+\cdots +\frac{1}{L(n)+1} +o_n(1),$$since the term $A_{n}$ appears in the average $\underset{r\leq n\leq r+L(r)}{\mathop{\mathrm{\mathbb{E}}}}$ if and only if $u(n)\leq r\leq n$. Note that $p_R(n)$ is actually independent of $R$ (for $n$ large enough) and therefore, we will denote it simply as $p(n)$ from now on. We have that $$\label{E: p(n)limit} \lim_{n\to +\infty } p(n)=1.$$ This follows exactly as in the proof of Lemma 3.3 in [@Tsinas], so we omit its proof here. Now, we show that $$\label{E: to kommati poy jefeygei} \frac{1}{R} \sum_{n=R+1}^{R+L(R)} p(n)\big(\Lambda_{w,b}(n)-1\big)A_{n}=o_R(1).$$Bounding $p(n)$ trivially by 2 (since its limit is equal to 1) and $\left\Vert A_n\right\Vert$ by $1$, we infer that it is sufficient to show that $$\frac{1}{R} \sum_{n=R+1}^{R+L(R)} \big|\Lambda_{w,b}(n)-1\big|=o_R(1).$$Using the triangle inequality and the fact that $L(R)\prec R$, this reduces to $$\frac{1}{R} \sum_{n=R+1}^{R+L(R)} \Lambda_{w,b}(n)=o_R(1).$$To establish this, we apply Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"} to conclude that $$\begin{gathered} \frac{1}{R} \sum_{n=R+1}^{R+L(R)} \frac{\phi(W)}{W}\Lambda(Wn+b)=\frac{1}{R}\sum_{\underset{n\equiv b\;(W)}{WR+W+b\leq n\leq WR+WL(R)+b}} \frac{\phi(W)}{W}\Lambda(n)\leq\\ \frac{\phi(W)}{WR}\Big( \frac{2WL(R)\log R}{\phi(W)\log\big(\frac{L(R)}{W}\big)} +O\big(\frac{L(R)}{\log (WR+R+b)}\big) +O(R^{1/2}\log R)\Big)=o_R(1).\end{gathered}$$ This follows from the fact that $L(R)\prec R$ and that the quantity $\log R/\log(L(R))$ is bounded by the hypothesis $L(R)\gg R^{\varepsilon}$.
In view of this, it suffices to show that $$\Big\lVert \frac{1}{R}\sum_{n=1}^{R}p(n)\big(\Lambda_{w,b}(n)-1\big)A_{n}-\frac{1}{R}\sum_{n=1}^{R}\big(\Lambda_{w,b}(n)-1\big)A_{n} \Big\rVert=o_R(1).$$ We have $$\Big\lVert \frac{1}{R}\sum_{n=1}^{R}p(n)\big(\Lambda_{w,b}(n)-1\big)A_{n}-\frac{1}{R}\sum_{n=1}^{R}\big(\Lambda_{w,b}(n)-1\big)A_{n} \Big\rVert\leq \frac{1}{R}\sum_{n=1}^{R}|p(n)-1||\Lambda_{w,b}(n)-1|,$$by the triangle inequality and the bound $\left\Vert A_{n}\right\Vert\leq 1$. Now, given $\varepsilon>0$, we can bound this by $$\frac{1}{R}\sum_{n=1}^{R}\varepsilon\big(\Lambda_{w,b}(n)+1\big)+o_R(1),$$where the $o_R(1)$ term reflects the fact that the bound $|p(n)-1|\leq \varepsilon$ is valid only for large values of $n$. It suffices to bound the term $$\frac{\varepsilon}{R}\sum_{n=1}^{R}\Lambda_{w,b}(n),$$since the remainder is simply $O(\varepsilon)$. However, using Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"} (or the prime number theorem in arithmetic progressions), we see that this term is also $O(\varepsilon)$, exactly as we did above. Sending $\varepsilon\to 0$, we reach the desired conclusion. ◻ We restate here our main theorem for convenience.
Let $\ell,k$ be positive integers and, for all $1\leq i\leq k,\ 1\leq j\leq \ell$, let $a_{ij}\in \mathcal{H}$ be functions of polynomial growth such that $$|a_{ij}(t) -q(t) |\succ \log t\ \text{ for every polynomial } q(t)\in \mathbb{Q}[t],$$ or $$\lim\limits_{t\to+\infty} |a_{ij}(t)-q(t)|=0\ \text{ for some polynomial } q(t)\in \mathbb{Q}[t]+\mathbb{R}.$$Then, for any measure-preserving system $(X,\mathcal{X},\mu,T_1,\dots,T_k)$ of commuting transformations and functions $f_1,\dots,f_{\ell}\in L^{\infty}(\mu)$, we have $$\lim_{w\to+\infty} \ \limsup\limits_{N\to+\infty}\max_{\underset{(b,W)=1}{1\leq b\leq W}} \Big\lVert \frac{1}{N}\sum_{n=1}^{N} \ \big(\Lambda_{w,b}(n) -1\big) \prod_{j=1}^{\ell}\big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{ij}(Wn+b) \right \rfloor}} \big)f_j \Big\rVert_{L^2(\mu)}=0.$$ *Proof.* We split this reduction into several steps. For a function $a\in \mathcal{H}$, we will use the notation $a_{w,b}(t)$ to denote the function $a(Wt+b)$ and we will need to keep in mind that the asymptotic constants must not depend on $W$ and $b$. As is typical in these arguments, we shall rescale the functions $f_1,\dots, f_{\ell}$ so that they are all bounded by 1. ## Step 1: A preparatory decomposition of the functions {#step-1-a-preparatory-decomposition-of-the-functions .unnumbered} Each function $a_{ij}$ can be written in the form $$a_{ij}(t)=g_{ij}(t)+p_{ij}(t)+q_{ij}(t)$$where $g_{ij}(t)$ is a strongly non-polynomial function (or identically zero), $p_{ij}(t)$ is either a polynomial with at least one non-constant irrational coefficient or a constant polynomial, and, lastly, $q_{ij}(t)$ is a polynomial with rational coefficients. Observe that there exists a fixed positive integer $Q_0$ for which all the polynomials $q_{ij}(Q_0n+s_0)$ have integer coefficients except possibly the constant term, for all $0\leq s_0\leq Q_0-1$. These non-integer constant terms can be absorbed into the polynomial $p_{ij}(t)$.
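As a purely illustrative (hypothetical) example of this decomposition, consider $a(t)=t^{3/2}+\sqrt{2}\,t^{2}+\tfrac{1}{3}t$; this particular function plays no role elsewhere in the argument:

```latex
% Decomposition a = g + p + q for the illustrative function above:
a(t) = \underbrace{t^{3/2}}_{g(t)\ \text{strongly non-polynomial}}
     + \underbrace{\sqrt{2}\, t^{2}}_{p(t)\ \text{irrational coefficient}}
     + \underbrace{\tfrac{1}{3}\, t}_{q(t)\in \mathbb{Q}[t]} .
% Taking Q_0 = 3, we get q(3n + s_0) = n + s_0/3: the coefficient of n is
% an integer, and the non-integer constant term s_0/3 is absorbed into p(t).
```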
Therefore, splitting our average into the arithmetic progressions $(Q_0n+s_0)$, it suffices to show that $$\lim_{w\to+\infty} \ \limsup\limits_{N\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}} \Big\lVert \frac{1}{N}\sum_{n=1}^{N} \big(\Lambda_{w,b}(Q_0n+s_0)-1\big) \prod_{j=1}^{\ell}\big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{ij,w,b}(Q_0n+s_0) \right \rfloor}} \big)f_j \Big\rVert_{L^2(\mu)}=0$$for all $s_0\in\{0,\dots,Q_0-1\}$. Observe that each one of the functions $a_{ij,w,b}(Q_0t+s_0)$ satisfies either [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} or [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"}. Since the polynomials $q_{ij,w,b}(Q_0n+s_0)$ have integer coefficients, we can rewrite the previous expression as $$\begin{gathered} \label{E: Step 1 final expression} \lim_{w\to+\infty} \ \limsup\limits_{N\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}} \Big\lVert \frac{1}{N}\sum_{n=1}^{N} {\bf 1}_{s_0\;( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big)\\ \prod_{j=1}^{\ell}\big( \prod_{i=1}^{k} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \big)f_j \Big\rVert_{L^2(\mu)}=0.\end{gathered}$$ ## Step 2: Separating the iterates {#step-2-separating-the-iterates .unnumbered} Define the sets $$\label{E: set S_1} S_1= \{(i,j)\in [1, k]\times [1, \ell] {:}\;g_{ij}(t)\ll t^{\delta} \text{ for all } \delta>0 \ \text{and } p_{ij} \text{ is non-constant}\},$$and $$\label{E: set S_2} S_2=\{(i,j)\in [1, k]\times [1, \ell] {:}\;g_{ij}(t)\ll t^{\delta} \text{ for all } \delta>0 \ \text{and } p_{ij} \text{ is constant}\},$$whose union contains precisely the pairs $(i,j)$, for which $g_{ij}(t)$ is sub-fractional.
Our first observation is that if a pair $(i,j)$ belongs to $S_2$, then the function $a_{ij}(t)$ has the form $g_{ij}(t)+q_{ij}(t)$, where $g_{ij}$ is sub-fractional and $q_{ij}$ is a rational polynomial. Thus, [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} and [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"} imply that we either have that $g_{ij}(t)\succ \log(t)$ or $g_{ij}(t)$ converges to a constant, as $t\to+\infty$. The constant can be absorbed into the constant polynomial $p_{ij}$. In view of this, we will subdivide $S_2$ further into the following two sets:$$\begin{aligned} \label{E: sets S_2',S_2''} &S'_2=\{(i,j)\in S_2{:}\;g_{ij}(t)\succ \log t\},\\ \tag*{} &S''_2=\{(i,j)\in S_2{:}\;g_{ij}(t)\prec 1\}.\end{aligned}$$ Observe that iterates corresponding to pairs $(i,j)$ that do not belong to the union $S_1\cup S'_2\cup S''_2$ have an expression inside the integer part that has the form $g(t)+p(t)$, where $g$ is a strongly non-polynomial function that is not sub-fractional. In particular, these functions satisfy the hypotheses of Proposition [Proposition 38](#P: Taylor expansion final){reference-type="ref" reference="P: Taylor expansion final"}. Furthermore, functions that correspond to the set $S_1$ have the form $p(t)+x(t)$, where $p$ is an irrational polynomial and $x$ is sub-fractional, while functions in $S'_2$ are sub-fractional functions that dominate $\log t$. We will use Proposition [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"} and Proposition [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"} for these two collections respectively. 
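For orientation, here are some hypothetical examples of how functions of the above form are classified (these specific functions appear nowhere else in the argument):

```latex
% Hypothetical examples for the classification of pairs (i,j), with
% a = g + p + q decomposed as in Step 1:
% a(t) = \sqrt{2}\,t^{2} + \log^{2} t : g sub-fractional, p non-constant
%                                       irrational              => S_1
% a(t) = \tfrac{1}{2}\,t + \log^{2} t : g sub-fractional, g \succ \log t,
%                                       p constant              => S_2'
% a(t) = t^{2} + e^{-t}               : g \prec 1               => S_2''
% a(t) = t^{2} + t^{3/2}              : g strongly non-polynomial, not
%                                       sub-fractional          => complement
%                                       of S_1 \cup S_2' \cup S_2''
```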
Finally, observe that if $(i,j)\in S_2''$, then for $n$ sufficiently large, we can write $${\left \lfloor a_{ij}(Q_0n+s_0) \right \rfloor}= q_{ij}(Q_0n+s_0)+{\left \lfloor c_{ij} \right \rfloor}+e_{ij,Q_0n+s_0},$$ where $e_{ij, Q_0n+s_0}\in \{0,-1\}$ and $c_{ij}$ is a constant term arising from the constant (in this case) polynomial $p_{ij}$. The error term $e_{ij, Q_0n+s_0}$ actually exists only if $c_{ij}$ is an integer. In particular, we have $e_{ij,Q_0n+s_0}=0$ for all large enough $n$ when $g_{ij}(t)$ decreases to 0 and $e_{ij,Q_0n+s_0}=-1$ if $g_{ij}(t)$ increases to 0. Therefore, if we redefine the polynomials $q_{ij}(t)$ accordingly so that both ${\left \lfloor c_{ij} \right \rfloor}$ and the error term $e_{ij, Q_0n+s_0}$ (which is independent of $s_0$) are absorbed into the constant term, we may assume without loss of generality that for all $n$ sufficiently large, we have $${\left \lfloor g_{ij}(Q_0n+s_0)+p_{ij}(Q_0n+s_0) \right \rfloor}+q_{ij}(Q_0n+s_0)=q_{ij}(Q_0n+s_0).$$We will employ this relation to simplify the iterates in [\[E: Step 1 final expression\]](#E: Step 1 final expression){reference-type="eqref" reference="E: Step 1 final expression"}, where $n$ will be replaced by $Wn+b$.
We rewrite the limit in [\[E: Step 1 final expression\]](#E: Step 1 final expression){reference-type="eqref" reference="E: Step 1 final expression"} as $$\begin{gathered} \lim_{w\to+\infty} \ \limsup\limits_{N\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}} \Big\lVert \frac{1}{N}\sum_{n=1}^{N} {\bf 1}_{s_0\;( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big)\\ \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_1}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \cdot \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) }\cdot\\ \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j \Big\rVert_{L^2(\mu)}.\end{gathered}$$ ## Step 3: Passing to short intervals {#step-3-passing-to-short-intervals .unnumbered} The functions $g_{ij}(t)+p_{ij}(t)$ with $(i,j)\in S_1$ satisfy the assumptions of Proposition [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"}, while the functions $g_{ij}(t)+p_{ij}(t)$ with $(i,j)\notin S_1\cup S'_2\cup S''_2$ satisfy the assumptions of Proposition [Proposition 38](#P: Taylor expansion final){reference-type="ref" reference="P: Taylor expansion final"} (thus, each one of them satisfies Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"} for some appropriately chosen values of the integer $k$ in that statement). Lastly, the functions of the set $S_2'$ satisfy the assumptions of Proposition [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"}.
It is straightforward to infer that, in each case, the corresponding property continues to hold when the functions $g_{ij}(t)+p_{ij}(t)$ are replaced by the functions $g_{ij,w,b}(t)+p_{ij,w,b}(t)$. This is a simple consequence of the fact that if $f\in \mathcal{H}$ has polynomial growth, then the functions $f$ and $f_{w,b}$ have the same growth rate. Let $d_0$ be the maximal degree appearing among the polynomials $p_{ij}(t)$. Then, we can find a sub-linear function $L(t)$ such that $$\label{E: L(t) satisfies the condition of Polynomial proposition} t^{ \frac{5}{8}} \lll L(t)\lll t$$ and such that there exist positive integers $k_{ij}$ for $(i,j)\notin S_1\cup S'_2\cup S''_2$, for which we have the growth inequalities $$\label{E: inequalities that give taylor approximation0} \Big| g_{ij}^{(k_{ij})}(t)\Big|^{-\frac{1}{k_{ij}}}\lll L(t) \lll \Big| g_{ij}^{(k_{ij}+1)}(t)\Big|^{-\frac{1}{k_{ij}+1}}.$$ Furthermore, we can assume that $k_{ij}$ are very large compared to the maximal degree $d_0$ of the polynomials $p_{ij}(t)$, by taking $L(t)$ to grow sufficiently fast. We remark that [\[E: inequalities that give taylor approximation0\]](#E: inequalities that give taylor approximation0){reference-type="eqref" reference="E: inequalities that give taylor approximation0"} also implies the inequalities $$\label{E: inequalities that give taylor approximation} \Big| g_{ij,w,b}^{(k_{ij})}(t)\Big|^{-\frac{1}{k_{ij}}}\lll L(t) \lll \Big| g_{ij,w,b}^{(k_{ij}+1)}(t)\Big|^{-\frac{1}{k_{ij}+1}}$$for any fixed $w,b$.
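The passage from the unscaled growth inequalities to their $w,b$-counterparts rests on the chain rule for the rescaled function; a brief sketch:

```latex
% Chain rule for the rescaled function g_{ij,w,b}(t) = g_{ij}(Wt + b):
g_{ij,w,b}^{(k)}(t) = W^{k}\, g_{ij}^{(k)}(Wt+b)
\quad\Longrightarrow\quad
\bigl| g_{ij,w,b}^{(k)}(t) \bigr|^{-\frac{1}{k}}
   = W^{-1}\, \bigl| g_{ij}^{(k)}(Wt+b) \bigr|^{-\frac{1}{k}} .
% For fixed w and b, the factor W^{-1} is a constant and t \mapsto Wt + b
% distorts growth rates only by constants, so the strong-domination
% inequalities for g_{ij} transfer to g_{ij,w,b}.
```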
For the choice of $L(t)$ that we made above, we apply Lemma [Lemma 39](#L: long averages to short averages){reference-type="ref" reference="L: long averages to short averages"} to infer that it suffices to show that $$\begin{gathered} \label{E: final expression in Step 3} \lim_{w\to+\infty} \ \limsup\limits_{R\to+\infty} \ \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} {\bf 1}_{s_0\;( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big)\\ \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_1}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \cdot \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \cdot\\ \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j \Big\rVert_{L^2(\mu)} =0.\end{gathered}$$ ## Step 4: Reducing to polynomial iterates and using uniformity bounds {#step-4-reducing-to-polynomial-iterates-and-using-uniformity-bounds .unnumbered} We now fix $w$ (thus $W$) and the integer $b$. 
Suppose that $R$ is sufficiently large and consider the expression $$\begin{gathered} \label{E: I could not come up with something good} \mathcal{J}_{w,b,s_0}(R):= \mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} {\bf 1}_{s_0\;( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big)\\ \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_1}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \cdot \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) }\cdot \\ \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j \Big\rVert_{L^2(\mu)}.\end{gathered}$$ We will apply Propositions [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"}, [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"} and [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"} to replace the iterates with polynomials (with coefficients depending on $r$). Due to the nature of Proposition [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"} (namely, that it excludes a small set of $r\in [1, R]$), we let $\mathcal{E}_{R,w,b}$ denote a subset of $\{1,\dots, R\}$, which will be constructed throughout the proof and will have small size. We remark that the iterates corresponding to $S_2''$ have been dealt with (morally), so we will focus our attention on the other three sets. Let $d$ be the maximum of the degrees of the polynomials $p_{ij},q_{ij}$ and of the integers $k_{ij}$.
Let $\varepsilon>0$ be a small (but fixed) quantity and assume that $r$ is large enough in terms of $1/\varepsilon$, i.e., larger than some $R_0=R_0(\varepsilon)$. Observe that if $R$ is sufficiently large, then we have $R_0\leq \varepsilon R$. We include the "small" $r$ in the exceptional set $\mathcal{E}_{R,w,b}$, so that $\mathcal{E}_{R,w,b}$ now has at most $\varepsilon R$ elements. We will need to bound the expression $\mathcal{J}_{w,b,s_0}(R)$ for large $R$ uniformly in $b$. *Throughout the rest of this step, we implicitly assume that all terms of the form $o_r(1)$ or $o_R(1)$ are allowed to depend on the parameters $w$ and $\varepsilon$, which will be fixed up until the end of Step 4. One can keep in mind the following hierarchy $\frac{1}{\varepsilon}\ll w\ll r$.* $\underline{\text{Case 1}}:$ We first deal with the functions in $S'_2$. Fix an $(i,j)\in S'_2$ and consider the function $g_{ij,w,b}(n)+p_{ij,w,b}(n)$ appearing in the corresponding iterate. Observe that due to the definition of $S'_2$ in [\[E: sets S_2\',S_2\'\'\]](#E: sets S_2',S_2''){reference-type="eqref" reference="E: sets S_2',S_2''"}, the polynomial $p_{ij}(t)$ is constant, so that $p_{ij,w,b}(t)$ is also constant. In addition, the function $g_{ij}(t)$ is a sub-fractional function and dominates $\log t$. Therefore, the same is true for the function $g_{ij,w,b}(t)$. We apply Proposition [Proposition 34](#P: remove error term for slow functions){reference-type="ref" reference="P: remove error term for slow functions"}: for all except at most $\varepsilon R$ values of $r\in [1,R]$, we have that $$\label{E: iterates of the set S'_2: completed} {\left \lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n) \right \rfloor}={\left \lfloor g_{ij,w,b}(r)+p_{ij,w,b}(r) \right \rfloor} \ \text{for all } n\in [r,r+L(r)].$$For each $(i,j)\in S'_2$, we include the "bad" values of $r$ in the set $\mathcal{E}_{R,w,b}$, so that the set $\mathcal{E}_{R,w,b}$ now has at most $(k\ell+1)\varepsilon R$ elements.
$\underline{\text{Case 2}}:$ Now, we turn our attention to functions on the complement of the set $S_1\cup S'_2\cup S''_2$. The functions $g_{ij}$ satisfy [\[E: inequalities that give taylor approximation\]](#E: inequalities that give taylor approximation){reference-type="eqref" reference="E: inequalities that give taylor approximation"} and recall that we have chosen $k_{ij}$ to be much larger than the degrees of the $p_{ij}$, so that the derivative of order $k_{ij}$ of our polynomial vanishes. Hence, we deduce that $g_{ij}(t)+p_{ij}(t)$ satisfies the assumptions of Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"} for the integer $k_{ij}$ (and the sub-linear function $L(t)$ that we have already chosen). Given $A>0$, we infer that for all but $O_A(L(r)\log^{-A} r)$ values of $n\in [r,r+L(r)]$, we have $$\label{E: iterates of the complement set: completed} {\left \lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n) \right \rfloor}={\left \lfloor \widetilde{p}_{ij,w,b,r}(n) \right \rfloor},$$where $\widetilde{p}_{ij,w,b,r}(n)$ is the polynomial $$\sum_{l=0}^{k_{ij}} \frac{(n-r)^{l} g_{ij,w,b}^{(l)}(r) }{l!} +p_{ij,w,b}(n).$$Additionally, the polynomials $\widetilde{p}_{ij,w,b,r}$ satisfy $$\label{E: strong non-concentration of polynomial approximations- strongly non-polynomial case} \frac{\bigl| \{n\in [r,r+L(r)]{:}\;\{\widetilde{p}_{ij,w,b,r}(n)\}\in [1-\delta, 1)\} \bigr| }{L(r)}=\delta+O_{A}(\log^{-A} r)$$ for any $\delta<1$. Practically, this last condition signifies that the polynomials $\widetilde{p}_{ij,w,b,r}$ satisfy the equidistribution condition in Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"}, which we shall invoke later. $\underline{\text{Case 3}}:$ Finally, we deal with the case of the set $S_1$.
Proposition [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"} provides a subset $\mathcal{B}_{w,b,r,\varepsilon}$ of $[r,r+L(r)]$ of size $O_{k,\ell}(\varepsilon L(r))$, such that for every $n\in [r,r+L(r)]\setminus \mathcal{B}_{w,b,r,\varepsilon}$, we have $$\label{E: iterates of the set S_1: completed} {\left \lfloor p_{ij,w,b}(n)+g_{ij,w,b}(n) \right \rfloor}={\left \lfloor p_{ij,w,b}(n)+g_{ij,w,b}(r) \right \rfloor}.$$ Additionally, the set $\mathcal{B}_{w,b,r,\varepsilon}$ satisfies $$\label{E: bound on the size of B_r} \frac{1}{L(r)} \sum_{r\leq n\leq r+L(r)} \Lambda_{w,b}(n){\bf 1}_{\mathcal{B}_{w,b,r,\varepsilon}}(n)\ll_{k,\ell,d} \ \varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1).$$ We emphasize that the asymptotic constant in [\[E: bound on the size of B_r\]](#E: bound on the size of B_r){reference-type="eqref" reference="E: bound on the size of B_r"} depends only on $k,\ell,d$, so that the constant is the same regardless of the choice of the parameters $w,b$. First of all, we apply [\[E: iterates of the set S\'\_2: completed\]](#E: iterates of the set S'_2: completed){reference-type="eqref" reference="E: iterates of the set S'_2: completed"} to simplify the expression for $\mathcal{J}_{w,b,s_0}(R)$.
Namely, for any $r\notin \mathcal{E}_{R,w,b}$, we have that the inner average in the definition of $\mathcal{J}_{w,b,s_0}(R)$ is equal to $$\begin{gathered} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} {\bf 1}_{s_0\;( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big) \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_{1}}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) }\cdot \\ \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(r) \right \rfloor}+q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot\\ \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j \Big\rVert_{L^2(\mu)}.\end{gathered}$$ Thus, we have replaced the iterates of the set $S'_2$ with polynomials in the averaging parameter $n$. Secondly, we use [\[E: iterates of the complement set: completed\]](#E: iterates of the complement set: completed){reference-type="eqref" reference="E: iterates of the complement set: completed"} to deduce that for all except at most $O_A(k\ell L(r)\log^{-A} r)$ values of $n\in [r, r+L(r)]$, the product of transformations appearing in the previous relation can be written as $$\begin{gathered} \label{E: first reduction of the product on the iterates} \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_{1}}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(r) \right \rfloor}+q_{ij,w,b}(n) }\cdot\\ \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor \widetilde{p}_{ij,w,b,r}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j.\end{gathered}$$ The contribution of the exceptional set can be at most $$k\ell \log(Wr+WL(r)+b)\cdot O_A(\log^{-A}r),$$since each $\Lambda_{w,b}(n)$ is bounded by $\log(Wn+b)$.
Therefore, if we choose $A\geq 2$, this contribution is $o_r(1)$ and we can rewrite the average over the corresponding short interval as $$\begin{gathered} \label{E: equation before lifting two flows first time} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} {\bf 1}_{s_0\;( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big) \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_{1}}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \\ \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(r) \right \rfloor}+q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot\\ \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor \widetilde{p}_{ij,w,b,r}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j \Big\rVert_{L^2(\mu)} +o_r(1).\end{gathered}$$Thus, we have reduced our iterates to polynomial form in this case as well. Finally, we follow the same procedure for the set $S_1$. Namely, for all integers $n$ in the interval $[r,r+L(r)]$ such that $n\notin \mathcal{B}_{w,b,r,\varepsilon}$, we use [\[E: iterates of the set S_1: completed\]](#E: iterates of the set S_1: completed){reference-type="eqref" reference="E: iterates of the set S_1: completed"} to rewrite [\[E: first reduction of the product on the iterates\]](#E: first reduction of the product on the iterates){reference-type="eqref" reference="E: first reduction of the product on the iterates"} as $$\begin{gathered} \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_{1}}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) }
\prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(r) \right \rfloor}+q_{ij,w,b}(n) }\cdot\\ \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor \widetilde{p}_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j.\end{gathered}$$ The contribution of the set $\mathcal{B}_{w,b,r,\varepsilon}$ on the average over the interval $[r,r+L(r)]$ can be estimated using the triangle inequality. More specifically, this contribution is smaller than $$\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}{\bf 1}_{s_0\;(Q_0)}(n) \Bigl| \Lambda_{w,b}(n)-1 \Bigr|{\bf 1}_{\mathcal{B}_{w,b,r,\varepsilon}}(n).$$We bound the characteristic function ${\bf 1}_{s_0\;(Q_0)}$ trivially by 1, so that the above quantity is smaller than $$\label{E: contribution of B_r on the average} \frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n){\bf 1}_{\mathcal{B}_{w,b,r,\varepsilon}}(n)+ \frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}{\bf 1}_{\mathcal{B}_{w,b,r,\varepsilon}}(n).$$The second term contributes $O_{k,\ell}(\varepsilon)$, since $\mathcal{B}_{w,b,r,\varepsilon}$ has at most $O_{k,\ell}(\varepsilon L(r))$ elements. On the other hand, we have a bound for the first term already in [\[E: bound on the size of B_r\]](#E: bound on the size of B_r){reference-type="eqref" reference="E: bound on the size of B_r"}. 
Thus, the total contribution is $O_{k,\ell,d}(1)$ times the expression $$\varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1).$$ In view of the above, we deduce that the average in [\[E: equation before lifting two flows first time\]](#E: equation before lifting two flows first time){reference-type="eqref" reference="E: equation before lifting two flows first time"} is bounded by $O_{k,\ell,d}(1)$ times $$\begin{gathered} \label{E: final polynomial ergodic average } \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} {\bf 1}_{s_0( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big) \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_{1}}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(n) +q_{ij,w,b}(n) \right \rfloor} } \\ \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(r) +p_{ij,w,b}(r) +q_{ij,w,b}(n) \right \rfloor} }\cdot \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{{\left \lfloor q_{ij,w,b}(n) \right \rfloor} }\cdot\\ \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor \widetilde{p}_{ij,w,b}(n)+q_{ij,w,b}(n) \right \rfloor} } \Big)f_j \Big\rVert_{L^2(\mu)} + \varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1). \end{gathered}$$ Here, we moved the polynomials $q_{ij,w,b}$ back inside the integer parts, which we are allowed to do since they have integer coefficients. The polynomials in the iterates corresponding to $S_1, S'_2, S''_2$, and the complement of $S_1\cup S'_2\cup S''_2$ fulfill the hypothesis of Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"}. To keep the number of parameters lower, we will apply this proposition for $\delta=\varepsilon$, where we have assumed that $\varepsilon$ is a very small parameter. Accordingly, we assume (as we may) that $w$ and $r$ are much larger than $\frac{1}{\varepsilon}$. 
To see why the hypotheses are satisfied, observe that for the first set, this follows from the fact that $p_{ij,w,b}$ has at least one non-constant irrational coefficient (since $p_{ij}$ is non-constant by the definition of $S_1$). Therefore, the number of integers $n\in [r,r+L(r)]$ for which we have $$\{g_{ij,w,b}(r) +p_{ij,w,b}(n) +q_{ij,w,b}(n)\}\in (1-\varepsilon,1)$$is smaller than $2\varepsilon L(r)$ for $r$ sufficiently large. At the same time, the result is immediate for the second and third sets, since the iterates involve polynomials with integer coefficients (except, possibly, their constant terms). For the final set, this claim follows from [\[E: strong non-concentration of polynomial approximations- strongly non-polynomial case\]](#E: strong non-concentration of polynomial approximations- strongly non-polynomial case){reference-type="eqref" reference="E: strong non-concentration of polynomial approximations- strongly non-polynomial case"}. In view of the prior discussion, we conclude that there exists a positive integer $s$, that depends only on $d,k,\ell$, such that the expression in [\[E: final polynomial ergodic average \]](#E: final polynomial ergodic average ){reference-type="eqref" reference="E: final polynomial ergodic average "} is bounded by $$\begin{gathered} \label{E: final gowers norm bound} \varepsilon^{-k\ell} \big\lVert {\bf 1}_{s_0\;(Q_0)}\big(\Lambda_{w,b}(n)-1\big) \big\rVert_{U^s(r,r+sL(r)]} +\varepsilon^{-k\ell}o_w(1) + o_{\varepsilon}(1)(1+o_w(1))+\\ \varepsilon+o_w(1)\log \frac{1}{\varepsilon}+o_r(1).\end{gathered}$$ Applying Lemma [Lemma 16](#L: Gowers uniformity norms evaluated at arithmetic progressions){reference-type="ref" reference="L: Gowers uniformity norms evaluated at arithmetic progressions"}, we can bound the previous Gowers norm along the residue class $s_0\; (Q_0)$ as follows: $$\label{E: gowers norm bound for Lambda in progressions} \big\lVert {\bf 1}_{s_0\;(Q_0)}\big(\Lambda_{w,b}(n)-1\big) 
\big\rVert_{U^s(r,r+sL(r)]}\leq \big\lVert \Lambda_{w,b}(n)-1 \big\rVert_{U^s(r,r+sL(r)]}.$$ In view of the arguments above, we conclude that, for every $r\notin \mathcal{E}_{R,w,b}$, the following inequality holds $$\begin{gathered} \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{r\leq n\leq r+L(r)} {\bf 1}_{s_0( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big)\\ \prod_{j=1}^{\ell}\Big( \prod_{i{:}\;(i,j)\in S_1}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \cdot \prod_{i{:}\;(i,j)\in S'_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) }\cdot \\ \prod_{i{:}\;(i,j)\in S''_2}^{} T_i^{q_{ij,w,b}(n) }\cdot \prod_{i{:}\;(i,j)\notin S_1\cup S'_2\cup S''_2}^{} T_i^{{\left \lfloor g_{ij,w,b}(n) +p_{ij,w,b}(n) \right \rfloor}+q_{ij,w,b}(n) } \Big)f_j \Big\rVert_{L^2(\mu)}\ll_{k,\ell,d}\\\ \varepsilon^{-k\ell}\big\lVert \big(\Lambda_{w,b}(n)-1\big) \big\rVert_{U^s(r,r+sL(r)]}+\varepsilon+\big(\varepsilon^{-k\ell}+\log \frac{1}{\varepsilon}+o_{\varepsilon}(1)\big)o_w(1)+o_{\varepsilon}(1)+o_r(1).\end{gathered}$$We apply this estimate to the double average defining $\mathcal{J}_{w,b,s_0}(R)$ in [\[E: I could not come up with something good\]](#E: I could not come up with something good){reference-type="eqref" reference="E: I could not come up with something good"}. This estimate holds for every $r\notin \mathcal{E}_{R,w,b}$ and, thus, we need an estimate for the values of $r$ in this exceptional set. In order to achieve this, we recall that the set $\mathcal{E}_{R,w,b}$ has at most $(2k\ell+1)\varepsilon R$ elements. 
For each $r\in \mathcal{E}_{R,w,b}$, we use the triangle inequality to bound the average over the corresponding short interval by $$\frac{1}{L(r)} \sum_{\underset{n\equiv s_0\; ( \, Q_0)}{r\leq n\leq r+L(r)}} (\Lambda(Wn+b) +1).$$ We bound the characteristic function of the residue class $n\equiv s_0\;(Q_0)$ trivially by 1 and apply Corollary [Corollary 21](#C: Brun-Titchmarsh inequality for von Mangoldt sums){reference-type="ref" reference="C: Brun-Titchmarsh inequality for von Mangoldt sums"} to conclude that this expression is $O(1)+o_r(1)$, using similar estimates as the ones used in the proof of Proposition [Proposition 35](#P: remove error terms for polynomial functions){reference-type="ref" reference="P: remove error terms for polynomial functions"} (see [\[E: sieve upper bound on the S_r\]](#E: sieve upper bound on the S_r){reference-type="eqref" reference="E: sieve upper bound on the S_r"}). Therefore, the contribution of the set $\mathcal{E}_{R,w,b}$ is at most $O_{k,\ell}(\varepsilon)+o_r(1)$. Combining all of the above, we arrive at the estimate$$\begin{gathered} \label{E: penultimate bound} \mathcal{J}_{w,b,s_0}(R)\ll_{d,k,\ell} \ \varepsilon^{-k\ell} \Big( \mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R}\big\lVert \big(\Lambda_{w,b}(n)-1\big) \big\rVert_{U^s(r,r+sL(r)]}\Big) +\varepsilon^{-k\ell}o_w(1)+\\ o_{\varepsilon}(1)(1+o_w(1))+o_R(1).\end{gathered}$$ We restate [\[E: final expression in Step 3\]](#E: final expression in Step 3){reference-type="eqref" reference="E: final expression in Step 3"} here. 
Namely, we want to show that $$\limsup\limits_{R\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \mathcal{J}_{w,b,s_0}(R)=o_w(1).$$ Applying [\[E: penultimate bound\]](#E: penultimate bound){reference-type="eqref" reference="E: penultimate bound"}, we conclude that for a fixed $w$, we have$$\begin{gathered} \limsup\limits_{R\to+\infty} \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \mathcal{J}_{w,b,s_0}(R)\ll_{d,k,\ell} \varepsilon^{-k\ell} \Big( \lim_{R\to+\infty}\mathop{\mathrm{\mathbb{E}}}_{1\leq r\leq R}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \big\lVert \big(\Lambda_{w,b}(n)-1\big) \big\rVert_{U^s(r,r+L(r)]}\Big)+\\ \varepsilon^{-k\ell}o_w(1)+o_{\varepsilon}(1)(1+o_w(1)).\end{gathered}$$ Due to Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"}, we have that $$\max_{\underset{(b,W)=1}{1\leq b \leq W}} \big\lVert \big(\Lambda_{w,b}(n)-1\big) \big\rVert_{U^s(r,r+L(r)]}=o_w(1)$$for every sufficiently large $r$. Thus, we conclude that $$\limsup\limits_{R\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \mathcal{J}_{w,b,s_0}(R)\ll_{d,k,\ell} \varepsilon^{-k\ell}o_w(1)+ o_{\varepsilon}(1)(1+o_w(1)).$$ ## Step 5: Putting all the bounds together {#step-5-putting-all-the-bounds-together .unnumbered} We restate here our conclusion. 
We have shown that for every fixed integer $w$ and every real number $0<\varepsilon<1$, we have $$\begin{gathered} \label{E: Final Estimate} \limsup\limits_{R\to+\infty} \lim\limits_{N\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}} \Big\lVert \frac{1}{N}\sum_{n=1}^{N} {\bf 1}_{s_0( Q_0)}(n)\big(\Lambda_{w,b}(n)-1\big) \prod_{j=1}^{\ell}\big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{ij,w,b}(n) \right \rfloor} } \big)f_j \Big\rVert_{L^2(\mu)}\\ \ll_{d,k,\ell} \varepsilon^{-k\ell}o_w(1)+o_{\varepsilon}(1)(1+o_w(1)),\end{gathered}$$where we recall that $d$ was the maximum among the integers $k_{ij}$ and the degrees of the polynomials $p_{ij},q_{ij}$ (all of these depend only on the initial functions $a_{ij}$). Sending $w\to+\infty$, we deduce that the limit in [\[E: Step 1 final expression\]](#E: Step 1 final expression){reference-type="eqref" reference="E: Step 1 final expression"} (in view of [\[E: Final Estimate\]](#E: Final Estimate){reference-type="eqref" reference="E: Final Estimate"}) is smaller than a constant (depending on $k,\ell,d$) multiple of $o_{\varepsilon}(1)$. Sending $\varepsilon\to 0$, we conclude that the original limit is $0$, which is the desired result. ◻ # Proofs of the remaining theorems {#Section-Proofs of remaining theorems} We finish the proofs of our theorems in this section. ## Proof of the convergence results *Proof of Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"}.* Let $(X,\mathcal{X},\mu,T_1,\dots,T_k)$ be the system and $a_{ij}\in \mathcal{H}$ the functions in the statement.
In view of Lemma [Lemma 19](#L: indicator of primes to von-Mangoldt){reference-type="ref" reference="L: indicator of primes to von-Mangoldt"}, it suffices to show that the averages $$A(N):= \frac{1}{N}\sum_{n=1}^{N} \Lambda(n)\big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i1}(n) \right \rfloor}} \big)f_1\cdot\ldots \cdot \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i\ell}(n) \right \rfloor}} \big)f_{\ell}$$converge in $L^2(\mu)$. For a fixed $w \in \mathbb{N}$, we define $W=\prod_{p\leq w, p\in\mathbb{P}}p$ as usual and let $b\in \mathbb{N}$. We define $$B_{w,b}(N):= \frac{1}{N}\sum_{n=1}^{N} \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i1}(Wn+b) \right \rfloor}} \big)f_1\cdot\ldots \cdot \big( \prod_{i=1}^{k} T_i^{{\left \lfloor a_{i\ell}(Wn+b) \right \rfloor}} \big)f_{\ell}.$$ Let $\varepsilon>0$. Using Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}, we can find $w_0\in \mathbb{N}$ (which yields a corresponding $W_0$) such that $$\label{E: comparison between the A, B} \Big\lVert A(W_0N)-\frac{1}{\phi(W_0)}\sum_{\underset{(b,W_0)=1}{1\leq b\leq W_0} }B_{w_0,b}(N) \Big\rVert_{L^2(\mu)}=O(\varepsilon)$$for all $N$ sufficiently large. Our hypothesis implies that the sequence of bounded functions $B_{w_0,b}(N)$ is a Cauchy sequence in $L^2(\mu)$, which, in conjunction with [\[E: comparison between the A, B\]](#E: comparison between the A, B){reference-type="eqref" reference="E: comparison between the A, B"}, implies that the sequence $A(W_0N)$ is a Cauchy sequence. In particular, we have $$\left\Vert A(W_0M)-A(W_0N)\right\Vert_{L^2(\mu)}=O(\varepsilon),$$for all $N,M$ sufficiently large. Finally, since $$\left\Vert A(W_0N+b)-A(W_0N)\right\Vert_{L^2(\mu)}=o_N(1),$$ for all $1\leq b\leq W_0$, we conclude that $A(N)$ is a Cauchy sequence, which implies the required convergence.
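For the reader's convenience, the chaining implicit in the last step can be written out (a routine sketch; the decomposition $M=W_0M'+b_1$ and $N=W_0N'+b_2$ with $0\leq b_1,b_2<W_0$ is notation introduced only here): $$\begin{gathered} \left\Vert A(M)-A(N)\right\Vert_{L^2(\mu)}\leq \left\Vert A(W_0M'+b_1)-A(W_0M')\right\Vert_{L^2(\mu)}+\left\Vert A(W_0M')-A(W_0N')\right\Vert_{L^2(\mu)}\\ +\left\Vert A(W_0N')-A(W_0N'+b_2)\right\Vert_{L^2(\mu)}=o_{M}(1)+O(\varepsilon)+o_{N}(1),\end{gathered}$$so that $\left\Vert A(M)-A(N)\right\Vert_{L^2(\mu)}=O(\varepsilon)$ for all sufficiently large $M,N$; since $\varepsilon>0$ was arbitrary, $A(N)$ is indeed Cauchy in $L^2(\mu)$.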
Furthermore, if the sequence $B_{w,b}(N)$ converges to the function $F$ in $L^2(\mu)$ for all $w,b\in \mathbb{N}$, then [\[E: comparison between the A, B\]](#E: comparison between the A, B){reference-type="eqref" reference="E: comparison between the A, B"} implies that $\left\Vert A(W_0N)-F\right\Vert_{L^2(\mu)}=O(\varepsilon)$, for all large enough $N$. Repeating the same argument as above, we infer that $A(N)$ converges to the function $F$ in norm, as desired. ◻ *Proof of Theorem [Theorem 3](#T: convergence of Furstenberg averages){reference-type="ref" reference="T: convergence of Furstenberg averages"}.* Let $a\in \mathcal{H}$ satisfy either [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"} or [\[E: equal to a real multiple of integer polynomial\]](#E: equal to a real multiple of integer polynomial){reference-type="eqref" reference="E: equal to a real multiple of integer polynomial"}, let $k\in\mathbb{N}$, let $(X,\mathcal{X},\mu,T)$ be any measure-preserving system, and let $f_1,\dots,f_k\in L^{\infty}(\mu)$. Observe that in either case, the function $a$ satisfies [\[E: far away from rational polynomials\]](#E: far away from rational polynomials){reference-type="eqref" reference="E: far away from rational polynomials"} or [\[E: essentially equal to a polynomial\]](#E: essentially equal to a polynomial){reference-type="eqref" reference="E: essentially equal to a polynomial"}. In addition, when $a(t)$ satisfies either of the two latter conditions, then the function $a(Wt+b)$ satisfies the same condition, for all $W,b\in \mathbb{N}$. Using [@Fra-Hardy-singlecase Theorem 2.1],[^17] we have that, for all $W,b \in \mathbb{N}$, the averages $$\frac{1}{N}\sum_{n=1}^{N} T^{{\left \lfloor a(Wn+b) \right \rfloor}}f_1\cdot\ldots\cdot T^{k{\left \lfloor a(Wn+b) \right \rfloor}}f_k$$converge in $L^2(\mu)$.
We conclude that the two conditions of Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"} are satisfied, which shows that the desired averages converge. In particular, if $a$ satisfies condition [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"}, we can invoke [@Fra-Hardy-singlecase Theorem 2.2] to conclude that the limit of the averages $$\frac{1}{N}\sum_{n=1}^{N} T^{{\left \lfloor a(Wn+b) \right \rfloor}}f_1\cdot\ldots\cdot T^{k{\left \lfloor a(Wn+b) \right \rfloor}}f_k$$ is equal to the limit (in $L^2(\mu)$) of the averages $$\frac{1}{N}\sum_{n=1}^{N} T^nf_1\cdot\ldots\cdot T^{kn} f_k.$$ Again, Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"} yields the desired conclusion. ◻ *Proof of Theorem [Theorem 4](#T: jointly ergodic case){reference-type="ref" reference="T: jointly ergodic case"}.* We argue analogously to the proof of Theorem [Theorem 3](#T: convergence of Furstenberg averages){reference-type="ref" reference="T: convergence of Furstenberg averages"}. The only difference is that in this case, we use [@Tsinas Theorem 1.2] to deduce that, for all positive integers $W$ and $b$, the averages $$\frac{1}{N}\sum_{n=1}^{N} T^{{\left \lfloor a_1(Wn+b) \right \rfloor}}f_1\cdot\ldots\cdot T^{{\left \lfloor a_k(Wn+b) \right \rfloor}}f_k$$ converge in $L^2(\mu)$ to the product $\widetilde{f}_1\cdot \ldots \cdot \widetilde{f}_{k}$. The result follows from Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"}.
◻ *Proof of Theorem [Theorem 5](#T: nikos result to primes){reference-type="ref" reference="T: nikos result to primes"}.* The proof is identical to that of Theorem [Theorem 4](#T: jointly ergodic case){reference-type="ref" reference="T: jointly ergodic case"}, using [@Fra-Hardy-multidimensional Theorem 2.3] instead of [@Tsinas Theorem 1.2]. ◻ ## Proof of the recurrence results We recall Furstenberg's Correspondence Principle for $\mathbb{Z}^d$-actions [@Furstenberg-book], for the reader's convenience. Let $d\in \mathbb{N}$ and $E\subseteq \mathbb{Z}^d.$ There exists a system $(X,\mathcal{X},\mu,T_1,\ldots,T_d)$ and a set $A\in \mathcal{X}$ with $\bar{d}(E)=\mu(A),$ such that $$\bar{d}\big(E\cap(E-{\bf{n}}_1)\cap\dots\cap(E-{\bf{n}}_k)\big)\geq \mu\left(A\cap\prod_{i=1}^d T_i^{-n_{i,1}}A\cap\dots\cap \prod_{i=1}^d T_i^{-n_{i,k}}A\right),$$ for all $k\in \mathbb{N}$ and ${\bf{n}}_j=(n_{1,j},\dots,n_{d,j})\in\mathbb{Z}^d,$ $1\leq j\leq k.$ In view of the correspondence principle, the corollaries in Section [1](#Section-Introduction){reference-type="ref" reference="Section-Introduction"} follow easily. *Proof of Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"}.* (a) We apply Theorem [Theorem 3](#T: convergence of Furstenberg averages){reference-type="ref" reference="T: convergence of Furstenberg averages"} for the functions $f_1=\dots=f_k={\bf 1}_{A}$.
Since convergence in $L^2(\mu)$ implies weak convergence, integrating over $A$ the relation $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} T^{{\left \lfloor a(p) \right \rfloor}}{\bf 1}_A\cdot \ldots\cdot T^{k{\left \lfloor a(p) \right \rfloor}} {\bf 1}_A = \\ \lim\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} T^n{\bf 1}_A\cdot\ldots\cdot T^{kn}{\bf 1}_{A},$$and applying Furstenberg's multiple recurrence theorem we infer that $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \mu\big(A\cap T^{-{\left \lfloor a(p) \right \rfloor}}A\cap \dots\cap T^{-{k{\left \lfloor a(p) \right \rfloor}}}A \big)> 0,$$which is the desired result. We turn our attention toward establishing the assertion for the set $\mathbb{P}-1$. Using Furstenberg's multiple recurrence theorem [@Furstenberg-original], we conclude that there exists $c>0$, such that (the following limit exists due to [@Host-Kra-annals]) $$\lim\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N}\mu\big(A\cap T^{-n}A\cap \dots\cap T^{-kn}A\big)\geq c.$$ Using [@Fra-Hardy-singlecase Theorem 2.2], we have that for every $R\in \mathbb{N}$ $$\lim\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} \mu\big(A\cap T^{-{\left \lfloor a(Rn) \right \rfloor}}A\cap \dots\cap T^{-k{\left \lfloor a(Rn) \right \rfloor}}A\big)\geq c,$$since the function $a(R\cdot)$ satisfies [\[E: far away from real multiples of integer polynomials\]](#E: far away from real multiples of integer polynomials){reference-type="eqref" reference="E: far away from real multiples of integer polynomials"}.[^18] \(b\) We write $a(t)=cq(t)+\varepsilon(t)$, where $q(t)\in \mathbb{Z}[t],\ q(0)=0,\ c\in \mathbb{R}$ and $\varepsilon(t)$ is a function that converges to $0$, as $t\to+\infty$.
Using [@koutsogiannis-closest Proposition 3.8], we have that there exists $c_0$ depending only on $\mu(A)$, the degree of $q$ and $k$, such that $$\liminf\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N} \mu(A\cap T^{-[[cq(n)]]}A\cap \dots\cap T^{-k[[cq(n)]]}A)\geq c_0.$$ Now, we consider two separate cases. If $c$ is rational with denominator $Q$ in lowest terms, then for $t$ sufficiently large, we have $|\varepsilon(t)|\leq (2Q)^{-1}$. Therefore, we immediately deduce that $$[[cq(t)+\varepsilon(t)]]=[[cq(t)]].$$ Thus, we conclude that $$\label{E: recurrence lower bound for polynomials} \liminf\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N} \mu(A\cap T^{-[[cq(n)+\varepsilon(n)]]}A\cap \dots\cap T^{-k[[cq(n)+\varepsilon(n)]]}A)\geq c_0.$$ If $c$ is irrational, then the polynomial $cq(t)$ is uniformly distributed mod 1. Given $\delta>0$, we consider the set $S:=\{n\in \mathbb{N}{:}\;\{cq(n)\}\in [\delta,1-\delta] \}$, which has density $1-2\delta$. Therefore, we have $$\begin{gathered} \Bigl| \frac{1}{N} \sum_{n=1}^{N} \mu(A\cap T^{-[[cq(n)+\varepsilon(n)]]}A\cap \dots\cap T^{-k[[cq(n)+\varepsilon(n)]]}A)-\\ \frac{1}{N}\sum_{n=1}^{N} \mu(A\cap T^{-[[cq(n)]]}A\cap \dots\cap T^{-k[[cq(n)]]}A) \Bigr|\leq 2\delta+o_N(1).\end{gathered}$$Sending $\delta\to 0^{+}$, we derive [\[E: recurrence lower bound for polynomials\]](#E: recurrence lower bound for polynomials){reference-type="eqref" reference="E: recurrence lower bound for polynomials"} in this case as well. Notice that since $c_0$ depends only on $\mu(A)$, $k$ and the degree of $q$ (which is unchanged when $q(t)$ is replaced by $q(Rt)$), we have that $$\liminf\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N} \mu(A\cap T^{-[[cq(Rn)+\varepsilon(Rn)]]}A\cap \dots\cap T^{-k[[cq(Rn)+\varepsilon(Rn)]]}A)\geq c_0,$$for all positive integers $R$.
Now, we apply Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} with $b=1$ and the functions $a(\cdot -1)$, where we recall that $a(t)=cq(t)+\varepsilon(t)$, to obtain that for some sufficiently large $w$, we have $$\liminf\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} \Lambda_{w,1}(n) \mu\big(A\cap T^{-{\left \lfloor a(Wn) \right \rfloor}}A\cap \dots\cap T^{-k{\left \lfloor a(Wn) \right \rfloor}}A\big) \geq c_0/2,$$where $W$ is defined as usual in terms of $w$. Finally, we observe that we can replace the function $\Lambda_{w,1}(n)$ in the previous relation with the function $\Lambda_{w,1}(n){\bf 1}_{\mathbb{P}}(Wn+1)$, since the contribution of the prime powers (i.e. with exponent $\geq 2$) is negligible on average. Therefore, we conclude that $$\liminf\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} \Lambda_{w,1}(n){\bf 1}_{\mathbb{P}}(Wn+1) \mu\big(A\cap T^{-{\left \lfloor a(Wn) \right \rfloor}}A\cap \dots\cap T^{-k{\left \lfloor a(Wn) \right \rfloor}}A\big) \geq c_0/2,$$which implies the desired result. Analogously, we reach the expected conclusion for the set $\mathbb{P}+1$ instead of $\mathbb{P}-1$. ◻ *Proof of Theorem [Theorem 8](#T: multiple recurrence in the jointly ergodic case){reference-type="ref" reference="T: multiple recurrence in the jointly ergodic case"}.* Similarly to the proof of Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"}, we apply Theorem [Theorem 4](#T: jointly ergodic case){reference-type="ref" reference="T: jointly ergodic case"} for the functions $f_1=\cdots=f_k={\bf1}_{A}$.
We deduce that $$\label{E: vale oti thes edw pera} \lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \mu\big(A\cap T^{-{\left \lfloor a_1(p) \right \rfloor}}A \cap \dots \cap T^{-{\left \lfloor a_k(p) \right \rfloor}}A \big)=\int {\bf 1}_A\cdot \big( \mathop{\mathrm{\mathbb{E}}}({\bf1}_A| \mathcal{I}(T))\big)^{k}\, d\mu.$$However, using that the function ${\bf 1}_A$ is non-negative and Hölder's inequality, we get $$\int {\bf 1}_A\cdot \big( \mathop{\mathrm{\mathbb{E}}}({\bf1}_A| \mathcal{I}(T))\big)^{k}\, d\mu\geq \Big(\int \mathop{\mathrm{\mathbb{E}}}({\bf 1}_A|\mathcal{I}(T))\, d\mu\Big )^{k+1}=\big(\mu(A) \big)^{k+1},$$ and the conclusion follows. In the setting of shifted primes, we have that for any $R\in \mathbb{N}$ $$\lim\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} \mu\big(A\cap T^{-{\left \lfloor a_1(Rn) \right \rfloor}}A\cap \dots\cap T^{-{\left \lfloor a_k(Rn) \right \rfloor}}A\big)\geq \big(\mu(A) \big)^{k+1},$$since the functions $a_1(Rn),\dots, a_k(Rn)$ are Hardy field functions whose linear combinations satisfy [\[E: jointly ergodic condition\]](#E: jointly ergodic condition){reference-type="eqref" reference="E: jointly ergodic condition"}. Using Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"} with $b=1$ and the functions $a_i(\cdot -1)$, we infer that for $w$ sufficiently large, we have $$\liminf\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} \Lambda_{w,1}(n) \mu\big(A\cap T^{-{\left \lfloor a_1(Wn) \right \rfloor}}A\cap \dots\cap T^{-{\left \lfloor a_k(Wn) \right \rfloor}}A\big) \geq \frac{1}{2} \big(\mu(A)\big)^{k+1}.$$The rest of the proof is completed similarly as in the proof of part (a) in Theorem [Theorem 6](#T: multiple recurrence for szemeredi type patterns){reference-type="ref" reference="T: multiple recurrence for szemeredi type patterns"}. Analogously, we reach the expected conclusion for the set $\mathbb{P}+1$ in place of $\mathbb{P}-1$.
◻ *Proof of Theorem [Theorem 10](#T: multidimensional recurrence for primes){reference-type="ref" reference="T: multidimensional recurrence for primes"}.* The proof is similar to the proof of Theorem [Theorem 8](#T: multiple recurrence in the jointly ergodic case){reference-type="ref" reference="T: multiple recurrence in the jointly ergodic case"}. The only distinction is made in [\[E: vale oti thes edw pera\]](#E: vale oti thes edw pera){reference-type="eqref" reference="E: vale oti thes edw pera"}, namely we have $$\begin{gathered} \lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \mu\big(A_0\cap T_1^{-{\left \lfloor a_1(p) \right \rfloor}}A_1 \cap \dots \cap T_k^{-{\left \lfloor a_k(p) \right \rfloor}}A_k \big)=\\ \int {\bf 1}_{A_0}\cdot \mathop{\mathrm{\mathbb{E}}}({\bf1}_{A_1}| \mathcal{I}(T_1))\cdot \ldots\cdot \mathop{\mathrm{\mathbb{E}}}({\bf1}_{A_k}| \mathcal{I}(T_k))\, d\mu, \end{gathered}$$ where the sets $A_0, A_1, \dots, A_k$ satisfy the hypothesis. Since each function $\mathop{\mathrm{\mathbb{E}}}({\bf1}_{A_i}| \mathcal{I}(T_i))$ is $T_i$-invariant, we deduce that the integral on the right-hand side is larger than $$\int f\cdot \mathop{\mathrm{\mathbb{E}}}(f|\mathcal{I}(T_1))\cdot\ldots\cdot \mathop{\mathrm{\mathbb{E}}}(f|\mathcal{I}(T_k))\, d\mu,$$where $f={\bf 1}_{A_0\cap T^{\ell_1} A_1\cap \dots \cap T^{\ell_k}A_k}$. However, since the function $f$ is non-negative, [@Chu-2-commuting Lemma 1.6] implies that $$\int f\cdot \mathop{\mathrm{\mathbb{E}}}(f|\mathcal{I}(T_1))\cdot\ldots\cdot \mathop{\mathrm{\mathbb{E}}}(f|\mathcal{I}(T_k))\, d\mu\geq \left(\int f\;d\mu\right)^{k+1}= \mu(A)^{k+1},$$and the conclusion follows. ◻ ## Proof of the equidistribution results in nilmanifolds In this final part of this section, we offer a proof for Theorem [Theorem 12](#T: criterion for pointwise convergence along primes-nil version){reference-type="ref" reference="T: criterion for pointwise convergence along primes-nil version"}. 
The main tool is the approximation of Lemma [Lemma 28](#L: approximation by nilsequences){reference-type="ref" reference="L: approximation by nilsequences"}. *Proof of Theorem [Theorem 12](#T: criterion for pointwise convergence along primes-nil version){reference-type="ref" reference="T: criterion for pointwise convergence along primes-nil version"}.* Let $X$ and $g_1,\dots, g_k,x_1,\dots, x_k$ be as in the statement, and let $s$ denote the nilpotency degree of $X$. It suffices to show that, for any continuous functions $f_1,\dots, f_k$ on $X$, we have the following:$$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in\mathbb{P}{:}\;p\leq N}f_1(g_1^{{\left \lfloor a_1(p) \right \rfloor}}x_1)\cdot \ldots\cdot f_k(g_k^{{\left \lfloor a_k(p) \right \rfloor}}x_k) =\int_{Y_1}f_1 \, dm_{Y_1}\cdot\ldots\cdot \int_{Y_k} f_k\, d m_{Y_k},$$where $Y_i=\overline{(g_i^{\mathbb{Z}}x_i )}$ for all admissible values of $i$. We rewrite this in terms of the von Mangoldt function as $$\label{E: what we want to show} \lim\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N}\Lambda(n) f_1(g_1^{{\left \lfloor a_1(n) \right \rfloor}}x_1)\cdot \ldots\cdot f_k(g_k^{{\left \lfloor a_k(n) \right \rfloor}}x_k) =\int_{Y_1}f_1 \, dm_{Y_1}\cdot\ldots\cdot \int_{Y_k} f_k\, d m_{Y_k},$$where the equivalence of the last two relations is a consequence of Lemma [Lemma 19](#L: indicator of primes to von-Mangoldt){reference-type="ref" reference="L: indicator of primes to von-Mangoldt"}.
Our equidistribution assumption implies that for all $W,b\in \mathbb{N}$, we have $$\label{E: what the shit assumption gives} \lim\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N} f_1(g_1^{{\left \lfloor a_1(Wn+b) \right \rfloor}}x_1)\cdot \ldots\cdot f_k(g_k^{{\left \lfloor a_k(Wn+b) \right \rfloor}}x_k) =\int_{Y_1}f_1 \, dm_{Y_1}\cdot\ldots\cdot \int_{Y_k} f_k\, d m_{Y_k}.$$ We write $Y_i=G_i/\Gamma_i$ for some nilpotent Lie groups $G_i$ with discrete and co-compact subgroups $\Gamma_i$ and denote $Y=Y_1\times \dots\times Y_k$. Define the function $F:Y\to \mathbb{C}$ by $F(y_1,\dots, y_k)=f_1(y_1)\cdot\ldots\cdot f_k(y_k)$ and rewrite [\[E: what we want to show\]](#E: what we want to show){reference-type="eqref" reference="E: what we want to show"} as $$\label{E: modified what we want to show} \lim\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N} \Lambda(n) F(\widetilde{g}_1^{{\left \lfloor a_1(n) \right \rfloor}}\cdot\ldots\cdot \widetilde{g}_k^{{\left \lfloor a_k(n) \right \rfloor}} \widetilde{x}) =\int_{Y}F \, dm_{Y},$$where $\widetilde{g}_i$ is the element of the nilpotent Lie group $G_1\times\dots\times G_k$ whose $i$-th coordinate is equal to $g_i$ and the rest of its entries are the corresponding identity elements. Lastly, $\widetilde{x}$ is the point $(x_1,\dots, x_k)\in Y$.
Similarly, we rewrite [\[E: what the shit assumption gives\]](#E: what the shit assumption gives){reference-type="eqref" reference="E: what the shit assumption gives"} as $$\label{E: modified what the shit assumption gives} \lim\limits_{N\to+\infty} \frac{1}{N} \sum_{n=1}^{N} F(\widetilde{g}_1^{{\left \lfloor a_1(Wn+b) \right \rfloor}}\cdot\ldots\cdot \widetilde{g}_k^{{\left \lfloor a_k(Wn+b) \right \rfloor}} \widetilde{x}) =\int_{Y}F \, dm_{Y}.$$Therefore, we want to prove [\[E: modified what we want to show\]](#E: modified what we want to show){reference-type="eqref" reference="E: modified what we want to show"} under the assumption that [\[E: modified what the shit assumption gives\]](#E: modified what the shit assumption gives){reference-type="eqref" reference="E: modified what the shit assumption gives"} holds for all $W,b\in \mathbb{N}$. We use the notation $$A(N):=\frac{1}{N} \sum_{n=1}^{N} \Lambda(n) F(\widetilde{g}_1^{{\left \lfloor a_1(n) \right \rfloor}}\cdot\ldots\cdot \widetilde{g}_k^{{\left \lfloor a_k(n) \right \rfloor}} \widetilde{x}),$$and $$B_{W,b}(N):=\frac{1}{N} \sum_{n=1}^{N} F(\widetilde{g}_1^{{\left \lfloor a_1(Wn+b) \right \rfloor}}\cdot\ldots\cdot \widetilde{g}_k^{{\left \lfloor a_k(Wn+b) \right \rfloor}} \widetilde{x})$$for convenience. Let $\varepsilon>0$. Observe that the sequence $\psi({\bf n})=F(\widetilde{g}_1^{n_1}\cdot\ldots\cdot \widetilde{g}_k^{n_k} \widetilde{x})$ is an $s$-step nilsequence in $k$-variables. We apply Lemma [Lemma 28](#L: approximation by nilsequences){reference-type="ref" reference="L: approximation by nilsequences"} to deduce that there exists a system $(X',\mathcal{X}',\mu,S_1,\dots, S_k)$ and functions $G_1,\dots, G_{s+1}\in L^{\infty}(\mu)$ such that $$\Bigl| F(\widetilde{g}_1^{n_1}\cdot\ldots\cdot \widetilde{g}_k^{n_k} \widetilde{x})-\int \prod_{j=1}^{s+1} \Big(\prod_{i=1}^{k} S_i^{\ell_jn_i}\Big)G_j \,d\mu \Bigr|\leq \varepsilon$$for all $n_1,\dots, n_k \in \mathbb{Z}$, where $\ell_j=(s+1)!/j$.
Thus, if we define $$A'(N):=\frac{1}{N} \sum_{n=1}^{N} \Lambda(n) \int \prod_{j=1}^{s+1} \Big(\prod_{i=1}^{k} S_i^{\ell_j{\left \lfloor a_i(n) \right \rfloor}}\Big)G_j \,d\mu,$$and $$B'_{W,b}(N)=\frac{1}{N} \sum_{n=1}^{N} \int \prod_{j=1}^{s+1} \Big(\prod_{i=1}^{k} S_i^{\ell_j{\left \lfloor a_i(Wn+b) \right \rfloor}}\Big)G_j \,d\mu,$$we deduce that $|B_{W,b}(N)-B'_{W,b}(N)|\leq \varepsilon$, for all $N\in \mathbb{N}$, whereas $|A(N)-A'(N)|\leq \varepsilon(1+o_N(1))$, by the prime number theorem. The functions $a_1,\dots, a_k$ satisfy the assumptions of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}. Thus, we deduce that if we pick $w_0$ (which provides a corresponding $W_0$) sufficiently large and apply the Cauchy-Schwarz inequality, we will get $$\label{E: application of Theorem 1.1 in nilproof} \max_{\underset{(b,W_0)=1}{1\leq b\leq W_0}} \Bigl| \frac{1}{N}\sum_{n=1}^{N} \big(\Lambda_{w_0,b}(n)-1\big) \int \prod_{j=1}^{s+1} \Big(\prod_{i=1}^{k} S_i^{\ell_j{\left \lfloor a_i(W_0n+b) \right \rfloor}}\Big)G_j \,d\mu \Bigr|\leq \varepsilon$$for every sufficiently large $N\in \mathbb{N}$. In addition, we use [\[E: modified what the shit assumption gives\]](#E: modified what the shit assumption gives){reference-type="eqref" reference="E: modified what the shit assumption gives"}, the inequality $|B_{W_0,b}(N)-B'_{W_0,b}(N)|\leq \varepsilon$ and the triangle inequality to infer that for $N$ large enough, we have $$\label{E: approximation of B(N) by the integral} \Bigl| B'_{W_0,b}(N) -\int_{Y}F \, dm_{Y} \Bigr|\leq 2\varepsilon,$$for all $1\leq b\leq W_0$ coprime to $W_0$. 
Observe that [\[E: application of Theorem 1.1 in nilproof\]](#E: application of Theorem 1.1 in nilproof){reference-type="eqref" reference="E: application of Theorem 1.1 in nilproof"} implies that for all $N$ sufficiently large, we have $$\Bigl| A'(W_0N)-\frac{1}{\phi(W_0)} \sum_{\underset{(b,W_0)=1}{1\leq b\leq W_0}}B'_{W_0,b}(N) \Bigr|\leq 2\varepsilon,$$and we can combine this with [\[E: approximation of B(N) by the integral\]](#E: approximation of B(N) by the integral){reference-type="eqref" reference="E: approximation of B(N) by the integral"} to conclude that $$\Bigl| A'(W_0N)- \int_{Y}F \, dm_{Y} \Bigr|\leq 4\varepsilon$$for all $N$ sufficiently large. Since $|A'(N)-A(N)|\leq \varepsilon(1+o_N(1))$, we finally arrive at the inequality $$\Bigl| A(W_0N) -\int_{Y}F \, dm_{Y} \Bigr|\leq 6\varepsilon,$$for all large enough $N\in \mathbb{N}$. Since $|A(W_0N)-A(W_0N+b)|=o_N(1)$ for all $1\leq b\leq W_0$, we conclude that $$\Bigl| A(N) -\int_{Y}F \, dm_{Y} \Bigr|\leq 7\varepsilon,$$for all sufficiently large $N\in \mathbb{N}$. Sending $\varepsilon\to 0$, we deduce [\[E: modified what we want to show\]](#E: modified what we want to show){reference-type="eqref" reference="E: modified what we want to show"}, as desired. ◻ *Proof.* Let $s$ be the degree of nilpotency of the nilmanifold $Y$. 
Thus, Lemma [Lemma 28](#L: approximation by nilsequences){reference-type="ref" reference="L: approximation by nilsequences"} implies that there exists a nilmanifold $Z=H/\Delta$ where the Lie group $H$ is connected and simply connected, pairwise commuting elements $u_1,\dots, u_k\in H$ and continuous functions $G_1,\dots, G_{s+1}$, such that the sequence $$b({\bf n})= \int \prod_{j=1}^{s+1} G_j(u_1^{\ell_jn_1} \cdot\ldots\cdot u_k^{\ell_jn_k} z)\, dm_{Z}(z), \quad {\bf n}=(n_1,\dots, n_k)\in \mathbb{Z}^{k},$$where $\ell_j=(s+1)!/j$, satisfies $$\label{E: approximation by nil-correlation} \left\Vert \psi- b\right\Vert_{\ell^{\infty}(\mathbb{Z}^k)}\leq \varepsilon.$$ Define the sequences of functions (on the nilmanifolds $Y,Z$ respectively) $$A_N(\widetilde{x})= \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} F(\widetilde{g}_1^{{\left \lfloor a_1(p) \right \rfloor}}\cdot\ldots\cdot \widetilde{g}_k^{{\left \lfloor a_k(p) \right \rfloor}} \widetilde{x})$$and $$B_N(z)= \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \prod_{j=1}^{s+1} G_j(u_1^{\ell_j{\left \lfloor a_1(p) \right \rfloor}} \cdot\ldots\cdot u_k^{\ell_j{\left \lfloor a_k(p) \right \rfloor}} z ).$$ We want to show that $A_N(\widetilde{x})$ converges to $\int_Y F\, dm_Y$ for all $\widetilde{x}\in Y$. Observe that [\[E: approximation by nil-correlation\]](#E: approximation by nil-correlation){reference-type="eqref" reference="E: approximation by nil-correlation"} implies that $$\Bigl| A_N(\widetilde{x}) -\int B_N(z) dm_Z(z) \Bigr|\leq \varepsilon$$ for all $N\in \mathbb{N}$. Therefore, in order to prove our assertion, it suffices to establish that[^19] $$\label{E: nilpotent average bullshit} \Bigl| \lim\limits_{N\to+\infty} \int B_N(z)dm_Z(z)-\int_Y F\, dm_Y \Bigr|\leq 3\varepsilon,$$since that would imply that $$\Bigl| A_N(\widetilde{x}) -\int_Y F \, dm_Y \Bigr|\leq 5\varepsilon$$for all sufficiently large $N$. Then, sending $\varepsilon\to 0$, we would get the desired conclusion. 
We establish [\[E: nilpotent average bullshit\]](#E: nilpotent average bullshit){reference-type="eqref" reference="E: nilpotent average bullshit"}. Once again we can "lift" the expression for $B_N(z)$ to the product nilmanifold $Z^{s+1}$ as we did above. To be more precise, if we write $\widetilde{u_i}=(u_i^{\ell_1},\dots, u_i^{\ell_{s+1}})\in H^{s+1}$ and the function $\widetilde{G}: Z^{s+1}\to\mathbb{C}$ is the tensor product $G_1\otimes \dots\otimes G_{s+1}$, then $$B_N(z)=\frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N}\widetilde{G}(\widetilde{u_1}^{{\left \lfloor a_1(p) \right \rfloor}}\cdot \ldots\cdot \widetilde{u}_{k}^{{\left \lfloor a_k(p) \right \rfloor}} z^{\otimes (s+1)}),$$where we used the abbreviated notation $z^{\otimes(s+1)}$ for the element of $Z^{s+1}$ whose coordinates are all equal to $z$. The second assumption on the functions $a_1,\dots, a_k\in \mathcal{H}$ implies that for all $W,b\in \mathbb{N}$, we have the equality $$\label{E: application of equidistribution hypothesis} \lim\limits_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N}\widetilde{G}(\widetilde{u_1}^{{\left \lfloor a_1(Wn+b) \right \rfloor}}\cdot \ldots\cdot \widetilde{u}_{k}^{{\left \lfloor a_k(Wn+b) \right \rfloor}} z^{\otimes (s+1)})=\int_{{W}_z} \widetilde{G} \, dm_{{W}_z}$$for all $z\in Z$, where ${W}_z=\overline{\widetilde{u_1}^{\mathbb{Z}} \cdot\ldots\cdot \widetilde{u_k}^{\mathbb{Z}}z^{\otimes(s+1)} }$. Unpacking the definitions of the function $\widetilde{G}$ and the elements $\widetilde{u_i}$, we calculate the integral on the right-hand side to be equal to $$\int_{\widetilde{Y}_1(z)} G_1\, dm_{\widetilde{Y}_1(z)} \cdot \ldots \cdot \int_{\widetilde{Y}_{s+1}(z)} G_{s+1}\, dm_{\widetilde{Y}_{s+1}(z)},$$where $\widetilde{Y}_j(z)=\overline{(u_1^{\ell_j})^{\mathbb{Z}}\cdot\ldots\cdot (u_k^{\ell_j})^{\mathbb{Z}}\cdot z}$ is a subnilmanifold of $Z$. 
Since we have convergence in the pointwise sense in [\[E: application of equidistribution hypothesis\]](#E: application of equidistribution hypothesis){reference-type="eqref" reference="E: application of equidistribution hypothesis"}, we deduce that the sequence of functions on the left-hand side converges to the integral on the right-hand side in $L^2(m_{W_z})$ as well. Therefore, Theorem [Theorem 2](#T: criterion for convergence along primes){reference-type="ref" reference="T: criterion for convergence along primes"} applies. In combination with the Cauchy-Schwarz inequality, we conclude that $$\begin{gathered} \label{E: weak convergence along primes} \lim\limits_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N}\int_Z \widetilde{G}(\widetilde{u}_1^{{\left \lfloor a_1(p) \right \rfloor}}\cdot \ldots\cdot \widetilde{u}_{k}^{{\left \lfloor a_k(p) \right \rfloor}} z^{\otimes (s+1)})dm_Z(z)= \\ \int_Z \Big( \int_{\widetilde{Y}_1(z)} G_1\, dm_{\widetilde{Y}_1(z)}\Big)\cdot\ldots\cdot \Big(\int_{\widetilde{Y}_{s+1}(z)} G_{s+1}\, dm_{\widetilde{Y}_{s+1}(z)} \Big)dm_Z(z).\end{gathered}$$ Observe that $\widetilde{Y}_j(z)$ is the closure of the orbit of $z$ under the linear action of the elements $u_1^{\ell_j},\dots, u_k^{\ell_j}$. We apply Corollary [Corollary 25](#C: limit of averages along linear actions of g_1,...g_k){reference-type="ref" reference="C: limit of averages along linear actions of g_1,...g_k"} to deduce that $$\begin{gathered} \lim\limits_{N\to+\infty} \frac{1}{(2N+1)^k}\sum_{n_1,\dots, n_k\in [-N,N]} \prod_{j=1}^{s+1} G_j(u_1^{\ell_jn_1} \cdot\ldots\cdot u_k^{\ell_jn_k} z)=\\ \Big( \int_{\widetilde{Y}_1(z)} G_1\, dm_{\widetilde{Y}_1(z)}\Big)\cdot \ldots\cdot \Big( \int_{\widetilde{Y}_{s+1}(z)} G_{s+1}\, dm_{\widetilde{Y}_{s+1}(z)}\Big)\end{gathered}$$for every $z\in Z$. 
Integrating with respect to the variable $z$ and swapping the limit and the integral using the dominated convergence theorem, we deduce that $$\begin{gathered} \lim\limits_{N\to\infty} \frac{1}{(2N+1)^k}\sum_{n_1,\dots, n_k\in [-N,N]}\int_Z \prod_{j=1}^{s+1} G_j(u_1^{\ell_jn_1} \cdot\ldots\cdot u_k^{\ell_jn_k} z)dm_Z(z)=\\ \int_Z \Big( \int_{\widetilde{Y}_1(z)} G_1\, dm_{\widetilde{Y}_1(z)}\Big)\cdot \ldots\cdot \Big( \int_{\widetilde{Y}_{s+1}(z)} G_{s+1}\, dm_{\widetilde{Y}_{s+1}(z)}\Big)dm_Z(z).\end{gathered}$$Using the approximation in [\[E: approximation by nil-correlation\]](#E: approximation by nil-correlation){reference-type="eqref" reference="E: approximation by nil-correlation"}, we deduce that $$\begin{gathered} \Bigl| \lim\limits_{N\to\infty} \frac{1}{(2N+1)^k}\sum_{n_1,\dots, n_k\in [-N,N]} F\big(\widetilde{g}_1^{n_1}\cdot\ldots\cdot \widetilde{g}_k^{n_k}\widetilde{x} \big)- \\ \int_Z \Big( \int_{\widetilde{Y}_1(z)} G_1\, dm_{\widetilde{Y}_1(z)}\Big)\cdot \ldots\cdot \Big( \int_{\widetilde{Y}_{s+1}(z)} G_{s+1}\, dm_{\widetilde{Y}_{s+1}(z)}\Big)dm_Z(z) \Bigr|\leq 2\varepsilon.\end{gathered}$$ Finally, since the averages of the function $F$ converge to $\int_Y F\, dm_Y$ (by another application of Corollary [Corollary 25](#C: limit of averages along linear actions of g_1,...g_k){reference-type="ref" reference="C: limit of averages along linear actions of g_1,...g_k"}), we conclude that $$\Bigl| \int_Y F\, dm_Y - \int_Z \Big( \int_{\widetilde{Y}_1(z)} G_1\, dm_{\widetilde{Y}_1(z)}\Big)\cdot \ldots\cdot \Big( \int_{\widetilde{Y}_{s+1}(z)} G_{s+1}\, dm_{\widetilde{Y}_{s+1}(z)}\Big)dm_Z(z) \Bigr|\leq 3\varepsilon.$$ Plugging this into [\[E: weak convergence along primes\]](#E: weak convergence along primes){reference-type="eqref" reference="E: weak convergence along primes"}, we conclude that $$\Bigl| \lim\limits_{N\to+\infty}\int_Z B_N(z) dm_Z(z) -\int_Y F\, dm_Y \Bigr|\leq 3\varepsilon,$$which is the desired equation [\[E: nilpotent average 
bullshit\]](#E: nilpotent average bullshit){reference-type="eqref" reference="E: nilpotent average bullshit"}. Thus, our conclusion follows. ◻ *Proof of Corollary [Corollary 13](#C: equidistribution nilmanifolds){reference-type="ref" reference="C: equidistribution nilmanifolds"}.* The result follows readily from Theorem [Theorem 12](#T: criterion for pointwise convergence along primes-nil version){reference-type="ref" reference="T: criterion for pointwise convergence along primes-nil version"}. The first hypothesis of the criterion is satisfied, since each of the functions $a_i(t)$ satisfies [\[E: t\^e away from polynomials\]](#E: t^e away from polynomials){reference-type="eqref" reference="E: t^e away from polynomials"}, while condition (b) follows from [@tsinas-pointwise Theorem 1.1] and our assumption that $a_i(Wt+b)$ belongs to $\mathcal{H}$. ◻ # More general iterates {#Section: more general iterates} In this last section of the article, we discuss how the hypothesis that the functions $a_i(t)$ in the iterates belong to a Hardy field $\mathcal{H}$ can be weakened. The starting point is Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"}, which was established for general smooth functions, subject to some growth inequalities on the derivative of some particular order (the integer $k$ in the statement). Unfortunately, one cannot generalize theorems such as Theorem [Theorem 4](#T: jointly ergodic case){reference-type="ref" reference="T: jointly ergodic case"}, which involve several functions, to a more general class. 
The main obstruction is that in order to obtain the simultaneous Taylor expansions, one needs to find a function $L(t)$ (the length of the short interval) that satisfies a growth relation for all functions at the same time, which is non-trivial, because we do not know how the derivatives of one function grow relative to the derivatives of another function. This obstruction does not arise in the case of a single function, as in Theorem [Theorem 3](#T: convergence of Furstenberg averages){reference-type="ref" reference="T: convergence of Furstenberg averages"}, which leads to Szemerédi-type results. We have the following proposition. **Proposition 40**. *Let $a(t)$ be a function, defined for all sufficiently large $t$ and satisfying $|a(t)|\to+\infty$, as $t\to+\infty$. Suppose there exists a positive integer $k$ for which $a$ is $C^{k+1},$ $a^{(k+1)}(t)$ converges to 0 monotonically, and such that[^20]$$t^{5/8}\ll \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\lll\bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}\ll t.$$* *Then, for any $\ell\in\mathbb{N},$ measure-preserving system $(X,\mathcal{X}, \mu, T_1,\dots, T_{\ell}),$ and functions $f_1,\dots,f_{\ell}\in L^{\infty}(\mu)$, we have $$\label{E: main average comparison tempered} \lim_{w\to+\infty} \ \limsup\limits_{N\to+\infty}\ \max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \Big\lVert \frac{1}{N}\sum_{n=1}^{N} \big(\Lambda_{w,b}(n) -1\big) T_1^{{\left \lfloor a(Wn+b) \right \rfloor}}f_1\cdot\ldots\cdot T_{\ell}^{{\left \lfloor a(Wn+b) \right \rfloor}} f_{\ell} \Big\rVert_{L^2(\mu)}=0.$$* We remark that any improvement in the parameter $5/8$ in Theorem [\[T: Gowers uniformity in short intervals\]](#T: Gowers uniformity in short intervals){reference-type="ref" reference="T: Gowers uniformity in short intervals"} will also lower the term $t^{5/8}$ on the leftmost part of the growth inequalities accordingly. 
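To make the growth condition concrete, consider the illustrative choice $a(t)=t^{3/2}$ (not an example from the text) with $k=5$: then $|a^{(5)}(t)|^{-1/5}\asymp t^{7/10}$ and $|a^{(6)}(t)|^{-1/6}\asymp t^{3/4}$, so the chain $t^{5/8}\ll t^{7/10}\lll t^{3/4}\ll t$ holds. A short symbolic sketch verifying these exponents:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a = t ** sp.Rational(3, 2)  # illustrative choice a(t) = t^{3/2}, with k = 5
k = 5

def growth_exponent(expr):
    # for expr = c * t**p, the logarithmic derivative t * expr'/expr equals p
    return sp.simplify(t * sp.diff(expr, t) / expr)

p_k = -growth_exponent(sp.diff(a, t, k)) / k            # exponent of |a^(k)|^(-1/k)
p_k1 = -growth_exponent(sp.diff(a, t, k + 1)) / (k + 1)  # exponent of |a^(k+1)|^(-1/(k+1))
print(p_k, p_k1)  # 7/10 and 3/4, so 5/8 < 7/10 < 3/4 < 1
```

Here $a^{(6)}(t)\to0$ monotonically as well, so all hypotheses of the proposition are met for this choice.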
*Sketch of the proof of Proposition [Proposition 40](#P: comparison for more general iterates){reference-type="ref" reference="P: comparison for more general iterates"}.* We define $L(t)$ to be the geometric mean of the functions $\bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}$ and $\bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}$, which is well-defined for all $t$ sufficiently large. A standard computation implies the relation $$t^{5/8}\ll \bigl| a^{(k)}(t) \bigr|^{-\frac{1}{k}}\lll L(t) \lll\bigl| a^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}\ll t.$$Regarding the parameter $w$ as fixed, it suffices to show that $$\limsup\limits_{N\to+\infty}\max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \Big\lVert \frac{1}{N}\sum_{n=1}^{N} \big(\Lambda_{w,b}(n) -1\big) T_1^{{\left \lfloor a(Wn+b) \right \rfloor}}f_1\cdot\ldots\cdot T_{\ell}^{{\left \lfloor a(Wn+b) \right \rfloor}} f_{\ell} \Big\rVert_{L^2(\mu)}=o_w(1).$$This follows if we show that $$\limsup\limits_{N\to+\infty}\max_{\underset{(b,W)=1}{1\leq b\leq W}}\ \Big\lVert \mathop{\mathrm{\mathbb{E}}}_{N\leq n\leq N+L(N)} \big(\Lambda_{w,b}(n) -1\big) T_1^{{\left \lfloor a(Wn+b) \right \rfloor}}f_1\cdot\ldots\cdot T_{\ell}^{{\left \lfloor a(Wn+b) \right \rfloor}} f_{\ell} \Big\rVert_{L^2(\mu)}=o_w(1).$$ This derivation is very similar to the proof of [@Fra-Hardy-singlecase Lemma 4.3], which was stated only for bounded sequences. It is proven by covering the interval $[1,N]$ with non-overlapping sub-intervals of the form $[m,m+L(m)]$ (for $m$ large enough), where the term of the average on the last set of the covering is bounded as in [\[E: to kommati poy jefeygei\]](#E: to kommati poy jefeygei){reference-type="eqref" reference="E: to kommati poy jefeygei"}.[^21] Using Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"} and the abbreviated notation $a_{W,b}(t)$ for the function $a(Wt+b)$, we deduce that we can write $${\left \lfloor a_{W,b}(n) \right \rfloor}={\left \lfloor a_{W,b}(N)+\dots+\frac{(n-N)^ka^{(k)}_{W,b}(N)}{k!} \right \rfloor}$$for all except at most $O(L(N)\log^{-100}N)$ values of $n\in [N,N+L(N)]$. Furthermore, we also have the equidistribution assumption of Proposition [Proposition 33](#P: remove error term for fast functions){reference-type="ref" reference="P: remove error term for fast functions"}, which implies that Proposition [Proposition 29](#P: Gowers norm bound on variable polynomials){reference-type="ref" reference="P: Gowers norm bound on variable polynomials"} is applicable for the polynomial $$a_{W,b}(N)+\dots+\frac{(n-N)^ka^{(k)}_{W,b}(N)}{k!}$$ appearing in the iterates. The conclusion then follows similarly as in the proof of Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}, so we omit the rest of the details. ◻ An application of the previous comparison is to the class of *tempered* functions, which we define promptly. **Definition 41**. *Let $i$ be a non-negative integer. A real-valued function $g$ which is $(i+1)$-times continuously differentiable on $(t_0,\infty)$ for some $t_0\geq 0,$ is called a *tempered function of degree $i$* (we write $d_g=i$), if the following hold:* 1. *$g^{(i+1)}(t)$ tends monotonically to $0$ as $t\to\infty;$* 2. *$\lim_{t\to+\infty}t|g^{(i+1)}(t)|=+\infty.$* *Tempered functions of degree $0$ are called *Fejér* functions.* For example, consider the functions $$\label{E: Examples tempered} g_1(t)=t^{1/25}(100+\sin\log t)^3, \; g_2(t)=t^{1/25}, \; g_3(t)=t^{17/2}(2+\cos\sqrt{\log t}).$$ We have that $g_1$ and $g_2$ are Fejér functions, while $g_3$ is tempered of degree $8$ (and is not a Hardy field function, see [@Bergelson-Haland]). 
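These examples can be probed numerically. Differentiating logarithms (our own computation, not from the text) gives $$\frac{tg_1'(t)}{g_1(t)}=\frac{1}{25}+\frac{3\cos\log t}{100+\sin\log t}, \qquad \frac{tg_3'(t)}{g_3(t)}=\frac{17}{2}-\frac{\sin\sqrt{\log t}}{2\sqrt{\log t}\,\big(2+\cos\sqrt{\log t}\big)},$$so the first ratio oscillates between roughly $0.01$ and $0.07$, while the second tends to $\frac{17}{2}\in(8,9)$, matching the degree $8$. A finite-difference sketch:

```python
import numpy as np

def log_ratio(g, t, rel_h=1e-6):
    # numerical t * g'(t) / g(t), with g' approximated by a central difference
    h = rel_h * t
    return t * (g(t + h) - g(t - h)) / (2 * h) / g(t)

g1 = lambda t: t ** (1 / 25) * (100 + np.sin(np.log(t))) ** 3
g3 = lambda t: t ** 8.5 * (2 + np.cos(np.sqrt(np.log(t))))

# g1: the ratio is ~0.07 along t = exp(2*pi*k) and ~0.01 along t = exp((2k+1)*pi)
high = [log_ratio(g1, np.exp(2 * np.pi * k)) for k in range(2, 5)]
low = [log_ratio(g1, np.exp((2 * k + 1) * np.pi)) for k in range(2, 5)]

# g3: the ratio settles near 17/2 = 8.5
g3_vals = [log_ratio(g3, 10.0 ** p) for p in (6, 10, 14)]
print(high, low, g3_vals)
```

The persistent gap between the two subsequences for $g_1$ is exactly the failure of the limit discussed next.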
Every tempered function of degree $i$ is eventually monotone and grows at least as fast as $t^i\log t$ but slower than $t^{i+1}$ (see [@Bergelson-Haland]), so that, under the obvious modification of Definition [Definition 14](#D: growthdefinitions){reference-type="ref" reference="D: growthdefinitions"}, tempered functions are strongly non-polynomial. Also, for every tempered function $g,$ we have that $(g(n))_{n\in\mathbb{N}}$ is equidistributed mod 1.[^22] In general, it is more restrictive to work with tempered functions than with Hardy field ones. To see this, notice that ratios of tempered functions need not have limits, in contrast to the Hardy field case. For example, the functions $g_1$ and $g_2$ in [\[E: Examples tempered\]](#E: Examples tempered){reference-type="eqref" reference="E: Examples tempered"} are such that $g_1(t)/g_2(t)$ has no limit as $t\to+\infty$. This issue persists even when we are dealing with a single function, as ratios that involve derivatives of the same function may not have a limit either. Indeed, we can easily see that $g_1$ from [\[E: Examples tempered\]](#E: Examples tempered){reference-type="eqref" reference="E: Examples tempered"} (which was first studied in [@DKS-pointwise]) has the property that $\frac{tg_1'(t)}{g_1(t)}$ does not have a limit as $t\to+\infty.$ The existence of this limit is important, as it allows us to compare (via L'Hôpital's rule) growth rates of derivatives of functions with comparable growth rates. In order to sidestep the aforementioned problematic cases, we restrict our study to the following subclass of tempered functions (see also [@Bergelson-Haland], [@Koutsogiannis-tempered]). 
Let $$\mathcal{R}:=\Big\{g\in C^\infty(\mathbb{R}^+):\;\lim_{t\to+\infty}\frac{tg^{(i+1)}(t)}{g^{(i)}(t)}\in \mathbb{R}\;\;\text{for all}\;\;i\in\mathbb{N}\cup\{0\}\Big\},$$ $$\mathcal{T}_i:=\Big\{g\in\mathcal{R}:\;\exists\;i<\alpha< i+1,\;\lim_{t\to+\infty}\frac{tg'(t)}{g(t)}=\alpha,\;\lim_{t\to+\infty}g^{(i+1)}(t)=0\Big\},$$ and $\mathcal{T}:=\bigcup_{i=0}^\infty \mathcal{T}_i.$ For example, $g_2\in \mathcal{T}_0$ and $g_3\in\mathcal{T}_8$ (where $g_2, g_3$ are the functions from [\[E: Examples tempered\]](#E: Examples tempered){reference-type="eqref" reference="E: Examples tempered"}). Notice that while the class of Fejér functions contains sub-fractional functions, $\mathcal{T}_0$ does not: according to [@DKS-hardy Lemma 6.4], if $g\in \mathcal{T}$ with $\lim_{t\to+\infty}\frac{tg'(t)}{g(t)}=\alpha,$ then for every $0<\beta<\alpha$ we have $t^\beta\prec g(t).$ We will prove a convergence result for the class $\mathcal{T}$ through an application of Proposition [Proposition 40](#P: comparison for more general iterates){reference-type="ref" reference="P: comparison for more general iterates"}. **Lemma 42**. *Let $g$ be a function in $\mathcal{T}$ and $0<c<1$. 
Then, for all large enough positive integers $k$, we have $$t^c\prec \bigl| g^{(k)}(t) \bigr|^{-\frac{1}{k}}\lll \bigl| g^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}} \prec t.$$* *Proof.* Since $g(t)\prec t^{d_g+1}$ and $0<c<1,$ we have $g(t)\prec t^{k(1-c)}$ for all large enough $k\in \mathbb{N}$, which implies $$\frac{g^{(k)}(t)}{t^{-ck}}=\frac{g(t)}{t^{k(1-c)}}\cdot \prod_{i=1}^k \frac{tg^{(i)}(t)}{g^{(i-1)}(t)}\to 0.$$ Hence, $g^{(k)}(t)\prec t^{-ck}$ or, equivalently, $t^c\prec \bigl| g^{(k)}(t) \bigr|^{-\frac{1}{k}}.$ For the aforementioned $k$'s, let $0<q<1$ so that $t^{kq}\prec g(t).$ Since $\lim_{t\to+\infty}\frac{tg'(t)}{g(t)}\notin \mathbb{N},$ $$\frac{t^{k(q-1)}}{g^{(k)}(t)}=\frac{t^{kq}}{g(t)}\cdot \prod_{i=1}^k \frac{g^{(i-1)}(t)}{tg^{(i)}(t)}\to 0,$$ so $t^{k(q-1)}\prec g^{(k)}(t).$ As $\lim_{t\to+\infty}\frac{tg^{(k+1)}(t)}{g^{(k)}(t)}\in\mathbb{R}\setminus\{0\},$ we get $g^{(k+1)}(t)\ll t^{-1}g^{(k)}(t)$, so, if we let $\delta=\frac{q}{k+1},$ we have $$\frac{\bigl| g^{(k+1)}(t) \bigr|^{-\frac{1}{k+1}}}{\bigl| g^{(k)}(t) \bigr|^{-\frac{1}{k}}}\gg \frac{t^{\frac{1}{k+1}}\bigl| g^{(k)}(t) \bigr|^{-\frac{1}{k+1}}}{\bigl| g^{(k)}(t) \bigr|^{-\frac{1}{k}}}=t^{\frac{1}{k+1}}\bigl| g^{(k)}(t) \bigr|^{\frac{1}{k(k+1)}}\succ t^{\frac{1}{k+1}}\cdot t^{\frac{q-1}{k+1}}=t^\delta,$$ completing the proof of the lemma (the rightmost inequality follows by [@DKS-hardy]). ◻ Using Proposition [Proposition 40](#P: comparison for more general iterates){reference-type="ref" reference="P: comparison for more general iterates"} and [@Fra-Hardy-singlecase Theorem 2.2] we get the following result. More precisely, we use the fact here that [@Fra-Hardy-singlecase Theorem 2.2] holds for a single function $a$ which has the property that, for some $k\in\mathbb{N},$ $a$ is $C^{k+1},$ $a^{(k+1)}(t)$ converges to $0$ monotonically, $1/t^k\prec a^{(k)}(t),$ and $|a^{(k)}(t)|^{-1/k}\prec |a^{(k+1)}(t)|^{-1/(k+1)}$ (see comments in [@Fra-Hardy-singlecase Subsection 2.1.5]). 
We omit its proof as it is identical to that of Theorem [Theorem 3](#T: convergence of Furstenberg averages){reference-type="ref" reference="T: convergence of Furstenberg averages"}. **Theorem 43**. *Let $g\in \mathcal{T}.$ For any $k\in\mathbb{N},$ measure-preserving system $(X,\mathcal{X}, \mu, T),$ and functions $f_1,\dots,f_{k}\in L^{\infty}(\mu)$, we have $$\label{E: aek ole tempered} \lim_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} T^{{\left \lfloor g(p) \right \rfloor}}f_1\cdot\ldots\cdot T^{k{\left \lfloor g(p) \right \rfloor}}f_k=\lim_{N\to+\infty} \frac{1}{N}\sum_{n=1}^{N} T^nf_1\cdot\ldots\cdot T^{kn} f_k,$$ where the convergence takes place in $L^2(\mu)$.* As in the Hardy field case, we have the corresponding recurrence result. **Theorem 44**. *Let $g\in \mathcal{T}.$ For any $k\in\mathbb{N},$ measure-preserving system $(X,\mathcal{X}, \mu,T),$ and set $A$ with positive measure, we have $$\lim\limits_{N\to+\infty} \frac{1}{\pi(N)} \sum_{p\in \mathbb{P}{:}\;p\leq N} \mu(A\cap T^{-{\left \lfloor g(p) \right \rfloor}}A\cap \dots\cap T^{-{k{\left \lfloor g(p) \right \rfloor}}}A )> 0.$$* The latter implies the following corollary, which guarantees arbitrarily long arithmetic progressions, with steps coming from the class of tempered functions evaluated at primes. **Corollary 45**. *Let $g\in \mathcal{T}.$ For any set $E\subseteq \mathbb{N}$ of positive upper density, and $k\in \mathbb{N},$ we have $$\liminf\limits_{N\to+\infty} \frac{1}{\pi(N)}\sum_{p\in \mathbb{P}{:}\;p\leq N} \bar{d}\big(E\cap (E-{\left \lfloor g(p) \right \rfloor})\cap \dots\cap (E-k{\left \lfloor g(p) \right \rfloor})\big)>0.$$* **Comment 5.** 
In Theorem [Theorem 43](#T: convergence of Furstenberg averages tempered){reference-type="ref" reference="T: convergence of Furstenberg averages tempered"}, and, thus, in Theorem [Theorem 44](#T: recurrence tempered){reference-type="ref" reference="T: recurrence tempered"} and Corollary [Corollary 45](#C: Szemeredi tempered){reference-type="ref" reference="C: Szemeredi tempered"}, the floor function can be replaced with either the function $\lceil\cdot\rceil$ or the function $[[\cdot ]]$. Furthermore, in each of these results, one can alternatively evaluate the sequences along the affine shifts $ap+b,$ for $a, b\in \mathbb{R}$ with $a\neq 0.$ As we saw, the comparison method provides results along primes through the corresponding results for averages along $\mathbb{N}$, though in the case of tempered functions, we do not have a comparison result of the same strength as Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}. Nonetheless, it is expected that convergence results along $\mathbb{N}$ for iterates which are comprised of multiple tempered functions (or even combinations of tempered and Hardy field functions) can be transferred to the prime setting. Even in the case of averages along $\mathbb{N}$, the convergence results are still not established under the most general expected assumptions. For a single function and commuting transformations, a result in this direction was proven in [@DKS-hardy]. We note that [@DKS-hardy Theorem 6.1] reflects the complexity of the assumptions we have to impose on the growth rates of functions to deduce such results. This analysis is beyond the scope of this paper. [^1]: Throughout this article, it will be a recurring theme that in combinatorial applications, certain arithmetic obstructions force one to consider the set of shifted primes $\mathbb{P}-1$ (or $\mathbb{P}+1$) in place of $\mathbb{P}$, when dealing with polynomials. 
This is a necessary assumption, as in such cases the corresponding results for the set $\mathbb{P}$ are easily seen to be incorrect (see, for example, [@Wenbo-primes Remark 1.4]). [^2]: The methods in [@Wooley-Ziegler] do not invoke the full power of this theorem, although their approach draws heavily from the work of Green and Tao. [^3]: A more fundamental obstruction in this more general setting was that the necessary seminorm estimates were unavailable even in the simplest case of averages along $\mathbb{N}$, apart from some known special cases. This was established a few months later by the second author [@Tsinas]. [^4]: *This means that if $f,g\in \mathcal{H}$ are such that $g(t)\to+\infty$, then $f\circ g\in \mathcal{H}$ and $g^{-1}\in \mathcal{H}$.* [^5]: *Notice here the usual necessary assumption that we have to postulate on the polynomial, i.e., to have no constant term, in order to obtain a recurrence, and, consequently, a combinatorial result.* [^6]: *In this case only, $\bar{d}(E)$ can be replaced by $d^\ast(E):=\limsup\limits_{|I|\to+\infty}\frac{|E\cap I|}{|I|}$ following the arguments from [@koutsogiannis-closest], where the $\limsup$ is taken along all intervals $I\subseteq \mathbb{Z}$ with lengths tending to infinity.* [^7]: This argument was first used for $k=1$ in [@Boshernitzan-Jones-Wierdl-book] and [@Lesigne-single] to prove that when a sequence of real positive numbers is good for (single term) pointwise convergence, then its floor value is also good. The method was later adapted to the $k=2$ setting by Wierdl (personal communication with the first author, 2015). [^8]: This notation is non-standard, so we may refer back to this part quite often throughout the text. 
[^9]: Evidently, both statements rely on similar complexity reduction arguments, though Proposition [\[P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA\]](#P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA){reference-type="ref" reference="P: PROPOSITION THAT WILL BE COPYPASTED FROM FRA-HO-KRA"} is stated in much larger generality involving numerous polynomials. [^10]: While this theorem concerns the case of equidistribution, the more general result follows easily by a straightforward adaptation of van der Corput's difference theorem to the case of well-distribution. The authors of [@Kuipers-Niederreiter] discuss this in the notes of Section 5 in Chapter 1. [^11]: In this book, the theorem is proven for measures of the form $\nu= \frac{1}{N}\sum_{i=1}^{N}\delta_{x_i}$, although the more general statement follows by noting that every Borel probability measure is a weak limit of measures of the previous form. [^12]: This follows by the fact that if $f$ is Riemann integrable on $[0,1)$ with $\int_{[0,1)}f(x)\,dx=c,$ then, for every $\varepsilon>0,$ we can find trigonometric polynomials $q_1,\;q_2,$ with no constant terms, with $q_1(t)+c-\varepsilon\leq f(t)\leq q_2(t)+c+\varepsilon.$ We use this for the function $f={\bf 1}_{[1-\delta,1)}.$ [^13]: We do not actually need this quantity to converge to zero faster than some power of $N$. The same argument applies if this quantity simply converges to zero. [^14]: A bound that is uniform over all $m\in \mathbb{N}$ is in general false, so we have to restrict $m$ to a finite range. [^15]: It is straightforward to check that if $f\lll g$, then $f\lll \sqrt{fg}\lll g$, assuming, of course, that the square root is well-defined (e.g. when the functions $f,g$ are eventually positive). [^16]: In the case $i=\ell+1$, we are referring to the function $\bigl| a_i^{(s)}(t) \bigr|^{-\frac{1}{s}}$. 
[^17]: There is a slight issue here, in that we would need the assumption that the function $a(Wn+b)$ belongs to $\mathcal{H}$ in order to apply Theorem 2.2 from [@Fra-Hardy-singlecase]. However, the proof in [@Fra-Hardy-singlecase] only requires some specific growth conditions on the derivatives of the function $a(Wn+b)$ (specifically those outlined in equation 26 of that paper), which follow naturally from the assumption that $a\in \mathcal{H}$. [^18]: While it is not necessarily true that $a(R\cdot )$ belongs to $\mathcal{H}$, the arguments in the proof of [@Fra-Hardy-singlecase Theorem 2.2] show that the conclusion holds for such functions uniformly in $R$. Indeed, the fact that $a\in \mathcal{H}$ implies some specific growth conditions on the derivatives of $a$, which are also true for $a(R\cdot)$, and these conditions are the only necessary ingredient in the argument of the aforementioned theorem. Furthermore, the same is true for $a(R\cdot+b)$ for $b\in \mathbb{R}$. [^19]: We will show that the limit appearing in this expression exists. [^20]: *See the subsection with the notational conventions in Section [1](#Section-Introduction){reference-type="ref" reference="Section-Introduction"} for the notation $\lll$.* [^21]: In particular, this case is much simpler than the method used to establish Theorem [Theorem 1](#T: the main comparison){reference-type="ref" reference="T: the main comparison"}, in that we do not have to consider the more complicated double averaging scheme. In addition, we do not need any assumptions on $L(t)$ other than that it is positive and $L(t)\prec t$. [^22]: For Fejér functions this is a classical result due to Fejér (for a proof see [@Kuipers-Niederreiter]). The general case follows inductively by van der Corput's difference theorem.
--- abstract: | This paper is concerned with numerical solutions of one-dimensional SDEs with the drift being a generalised function, in particular belonging to the Hölder-Zygmund space $\mathcal{C}^{-\gamma}$ of negative order $-\gamma<0$ in the spatial variable. We design an Euler-Maruyama numerical scheme and prove its convergence, obtaining an upper bound for the strong $L^1$ convergence rate. We finally implement the scheme and discuss the results obtained. **MSC:** Primary 65C30; Secondary 60H35, 65C20, 46F99 **Keywords:** distributional drift, Euler-Maruyama scheme, rate of convergence, Besov space, stochastic differential equation, numerical scheme address: - $^\dag$School of Mathematics, University of Leeds, Leeds, LS2 9JT, United Kingdom - $^\ddag$Dept. of Mathematics "G. Peano", University of Turin, Via Carlo Alberto 10, 10123, Torino, Italy author: - Luis Mario Chaparro Jáquez$^{\dag1}$ - Elena Issoglio$^{\ddag2}$ - Jan Palczewski$^{\dag3}$ bibliography: - ./references.bib title: Convergence rate of numerical scheme for SDEs with a distributional drift in Besov space --- # Introduction {#sec:intro} This paper is concerned with the Euler-Maruyama scheme and its rate of convergence for a one-dimensional stochastic differential equation (SDE) of the form $$\label{eq:sde_S} X_{t} = x + \int_{0}^{t} b(s, X_{s}) ds + W_{t},$$ where $(W_{t} )_{t\ge0}$ is a Brownian motion and the drift $b(t, \cdot)$ belongs to the space of Schwartz distributions $\mathcal{S}'(\mathbb{R})$ for all $t\in [0,T]$. More precisely, the map $t \mapsto b(t)$ is $\frac12$-Hölder continuous on $[0, T]$ with values in the Hölder-Zygmund space $\mathcal{C}^{-\gamma}$ of negative order $-\gamma<0$, which we denote by $b \in C_T^{1/2} \mathcal{C}^{-\gamma}$. For a precise definition of these spaces see Section [2.1](#subsec:function_spaces){reference-type="ref" reference="subsec:function_spaces"} below. 
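To fix ideas, the classical Euler-Maruyama discretisation of [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} for a drift that is a genuine function reads $X^n_{t_{k+1}} = X^n_{t_k} + b(t_k, X^n_{t_k})\,\Delta t + (W_{t_{k+1}} - W_{t_k})$; the scheme analysed in this paper modifies this to cope with a distributional $b$. A minimal sketch of the classical scheme (the drift below is a placeholder, not the distributional drift of the paper):

```python
import numpy as np

def euler_maruyama(b, x0, T, n, m, rng):
    # m independent Euler-Maruyama paths for dX = b(t, X) dt + dW on [0, T]
    # with n uniform steps; b must be an ordinary function of (t, x),
    # e.g. a regularisation of the distributional drift
    dt = T / n
    x = np.full(m, float(x0))
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt), size=m)
        x = x + b(k * dt, x) * dt + dw
    return x  # samples approximating X_T

# placeholder drift b(t, x) = -x: the scheme then approximates an
# Ornstein-Uhlenbeck process, with E[X_1] = 0.5 e^{-1} and Var[X_1] = (1 - e^{-2})/2
rng = np.random.default_rng(0)
x_T = euler_maruyama(lambda t, x: -x, 0.5, 1.0, 200, 5000, rng)
print(x_T.mean(), x_T.var())
```

Coupling the drift evaluation and the Brownian increments on the same grid is the basic structure that the paper's scheme retains, with the drift replaced by a suitable approximation.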
SDEs with distributional coefficients have been studied by several authors in different settings and with different noises, starting from the early 2000s with [@bass_chen; @flandoliSDEsDistributionalDrift2003; @flandoliSDEsDistributionalDrift2004] and then in recent years by [@flandoliMultidimensionalStochasticDifferential2017; @delarueRoughPaths1d2016; @cannizzaro; @chaudru_menozzi; @issoglioSDEsSingularCoefficients2023]. In all these works the drift is a distribution and the authors investigate theoretical questions of existence and uniqueness of solutions, without exploring numerical aspects. The specific setting we consider here is the one studied in [@issoglioSDEsSingularCoefficients2023], where the authors formulate the notion of solution to [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} as a suitable martingale problem (c.f. Section [2.4](#subsec:soln_sde){reference-type="ref" reference="subsec:soln_sde"}). SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} is solved in [@issoglioSDEsSingularCoefficients2023] in any dimension $d$, and the notion of solution is intrinsically a weak solution, since it is formulated as a martingale problem. Here we restrict ourselves to dimension $1$, in which case there is a unique strong solution $X$ (see Lemma [Lemma 3](#rm:strong){reference-type="ref" reference="rm:strong"}); the strong convergence studied in this paper can only be defined for strong solutions. The study of weak convergence for multi-dimensional SDEs of the above type is left for future research. The first results on numerical schemes for SDEs date back to the 1980s; see the book by Kloeden and Platen [@kloeden_platen] for the case of smooth coefficients. On the other hand, numerical schemes for SDEs with low-regularity coefficients are an active area of research, but almost all contributions deal with SDEs whose coefficients are at least functions. 
We refer to the introduction of [@kohatsu-higa_et.al] for a list of other relevant papers, and to the introduction of [@dareiotisQuantifyingConvergenceTheorem2021] for a short summary of techniques used for numerical schemes with low-regularity coefficients. Paper [@dareiotisQuantifyingConvergenceTheorem2021] goes on to investigate convergence rates for SDEs with bounded measurable drifts, obtaining a strong $L^p$-rate of $\frac12$ for a non-unitary diffusion coefficient, and a strong $L^p$-rate of $\frac{1+\gamma}2$ when the drift $b$ is time-homogeneous and an element of the Sobolev space $\dot W_{d\vee2}^\gamma$ for $\gamma \in (0,1)$ and the diffusion coefficient is the identity. Notice that the drift has a positive Sobolev regularity of $\gamma$, hence it is a possibly discontinuous function. Another relevant paper is [@jourdainConvergenceRateEulerMaruyama2021], where the authors deal with the case of $L^q$-$L^p$ drifts and unitary diffusion. They prove that the weak error of the Euler-Maruyama scheme is $\frac\alpha2$, where $\alpha:= 1- \left(\frac dp + \frac2q\right)$ measures the distance from the singularity and $d$ is the dimension of the problem. They also conjecture that their methods of proof should produce a rate of $\frac{1+\gamma}2$ for time-inhomogeneous drifts that belong to $C_T^{\frac\gamma2}C^\gamma$. A different line of research investigates discontinuous drifts and possibly degenerate diffusion coefficients, where the discontinuities lie on finitely many points or hypersurfaces, see [@neunekirch_szolgyenyi; @Leobacher_szolgyenyi; @Przybylowicz_szolgyenyi] for more details, and [@szolgyenyi] for a review. Only a few works deal with numerical schemes for SDEs with distributional coefficients. 
In [@deangelisNumericalSchemeStochastic2022], the SDE is like [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} but the drift $b$ belongs to a different distribution space, namely to the fractional Sobolev space of negative order $b \in C^{\kappa}_T H^{-\gamma}_{{(1-\gamma)}^{-1}, q}$ for $\kappa \in (\frac{1}{2}, 1)$, $\gamma \in (0, \frac{1}{4})$ and $q \in (4, \frac{1}\gamma)$. The authors obtain a strong $L^1$-rate of convergence depending on $\gamma$, which vanishes as $-\gamma$ approaches the boundary $-\frac14$, and tends to $\frac16$ when $-\gamma$ approaches $0$ (i.e., when it approaches measurable drifts). In [@goudenegeNumericalApproximationSDEs2022], the authors study SDEs in $d$-dimensions with drifts in negative Besov spaces $B^{-\gamma}_p$, and the noise is a fractional Brownian motion with Hurst index $H\in (0,\frac{1}{2})$. They require that $1-\frac{1}{2H} < -\gamma-\frac{d}{p} <0$, i.e., the roughness of the drift is compensated by the roughness of the noise. The case $p=\infty$, $d=1$ and $H=\frac{1}{2}$ would correspond to our case, but this combination of parameters violates the above condition as the left-hand side becomes $0$. Techniques used in [@goudenegeNumericalApproximationSDEs2022] cannot be easily extended to our case. Indeed, for their proofs of convergence the authors rely on the well-known fact that a rougher noise gives more regularity to the solution, hence allowing for a rougher drift coefficient $b$ (or a higher dimension). In this paper, we set up a two-step numerical scheme. The first step is to regularise the distributional drift with the action of the heat semigroup, which gives a smooth function and allows us to use Schauder estimates to control the approximation error bounds for the solution of the SDE with smoothed drift. Establishing the error bound for this step is the bulk of the paper. 
The second step is the bound on the error of the Euler-Maruyama scheme, which requires ad-hoc estimates (rather than the standard Euler-Maruyama estimates found in most of the literature) to be able to control the constant in front of the rate in terms of the properties of the smoothed drift. To do so, we borrow ideas and results from [@deangelisNumericalSchemeStochastic2022], but we still have to prove a delicate $L^1$-bound on the local time of the error process (see Lemma [Lemma 15](#lemma:local-time-at-0){reference-type="ref" reference="lemma:local-time-at-0"}). Notice that we consider the $L^1$ strong error, and not the more common $L^2$ error, because we would otherwise get some terms that we could not bound. Finally, we link the smoothing parameter in Step 1 and the time step in Step 2 to obtain a one-step scheme and its convergence rate. This paper is organised as follows. In Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} we set up the problem and the notation, recalling all relevant theoretical tools such as Hölder-Zygmund spaces, the heat kernel and semigroup, Schauder estimates and the notion of virtual solution to SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"}. In Section [3](#sec:numerics_results){reference-type="ref" reference="sec:numerics_results"} we describe the numerical scheme and state the main result (Theorem [Theorem 7](#th:convergence_rate_es){reference-type="ref" reference="th:convergence_rate_es"}), which provides a convergence rate in terms of the regularity parameter $\gamma$ (Corollary [Corollary 9](#cor:rate){reference-type="ref" reference="cor:rate"}). 
Section [4](#sec:proof){reference-type="ref" reference="sec:proof"} contains the proof of the building block of the main theoretical result (Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"}), which is a bound on the difference between the solution to the original SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} and its approximation after smoothing the drift $b$. This is where we make use of the bound on the local time borrowed from [@deangelisNumericalSchemeStochastic2022]. Finally, in Section [5](#sec:numerics){reference-type="ref" reference="sec:numerics"}, we describe a numerical implementation of the scheme and analyse numerical results. It is striking to see that the empirical convergence rate seems to be $\frac12 - \frac{\gamma}2$, which would be the extension of the rate found in [@dareiotisQuantifyingConvergenceTheorem2021] if they allowed for a negative regularity index (and hence for distributional drifts). A straightforward application of their techniques is not possible, and further investigations in this direction, for example using the stochastic sewing lemma introduced in [@Le], are left for future research. # Preliminaries {#sec:preliminaries} ## Notation {#subsec:function_spaces} For a function $f: [0, T] \times \mathbb{R}\to \mathbb{R}$ that is sufficiently smooth, we denote by $f_t$ the partial derivative with respect to $t$, by $f_x$ the partial derivative with respect to $x$, and by $f_{xx}$ the second partial derivative with respect to $x$. For a function $g:\mathbb{R}\to \mathbb{R}$ sufficiently smooth we denote its derivative by $g'$. We now recall some useful definitions and facts from the literature. First of all, let $\mathcal{S}(\mathbb{R})$ be the space of Schwartz functions on $\mathbb{R}$ and $\mathcal{S}'(\mathbb{R})$ the space of tempered distributions. 
We denote by $(\cdot)^{\wedge}$ and $(\cdot)^{\vee}$ the Fourier transform and inverse Fourier transform on $\mathcal{S}$ respectively, extended to $\mathcal{S}'$ in the standard way. For $\gamma \in \mathbb{R}$, the Hölder-Zygmund space is defined by $$\label{eq:hs_space} \mathcal{C}^{\gamma}(\mathbb{R}) = \Big\{ f \in \mathcal{S}' : \left\| f \right\|_{\gamma} := \sup_{j \in \mathbb{N}} 2^{j \gamma} \Big\| \big( \phi_{j} \hat{f}\, \big)^{\!\vee} \Big\|_{L^{\infty}} < \infty \Big\},$$ where $( \phi_{j})_j$ is a dyadic partition of unity. The Hölder-Zygmund space $\mathcal{C}^{\gamma}(\mathbb{R})$ is also known as the Besov space $B^\gamma_{\infty, \infty}(\mathbb{R})$. For more details see [@triebelTheoryFunctionSpaces2010; @bahouriFourierAnalysisNonlinear2011]. To shorten notation we write $\mathcal{C}^{\gamma}$ instead of $\mathcal{C}^{\gamma} (\mathbb{R})$. Note that if $\gamma \in \mathbb{R}^{+} \setminus \mathbb{N}$ the space above coincides with the classical Hölder space. These spaces will be used widely in the paper, so we recall the equivalent norms that we will work with. If $\gamma \in (0,1)$, the classical $\gamma$-Hölder norm $$\label{eq:equiv_norm01} \left\| f \right\|_{L^{\infty}} + \sup_{ \substack{ x \neq y \\ \left| x - y \right| < 1 } } \frac{\left| f(x) - f(y) \right|}{\left| x - y \right|^{\gamma}},$$ is an equivalent norm in $\mathcal{C}^{\gamma}$. 
If $\gamma \in (1,2)$, an equivalent norm is $$\label{eq:equiv_norm12} \left\| f \right\|_{L^{\infty}} + \left\| f' \right\|_{L^{\infty}} + \sup_{ \substack{ x \neq y \\ \left| x - y \right| < 1 } } \frac{\left| f'(x) - f'(y) \right|}{\left| x - y \right|^{\gamma}}.$$ We will write $C_{T} \mathcal{C}^{\gamma} := C([0, T]; \mathcal{C}^{\gamma})$ with the norm $$\|f\|_{C_{T} \mathcal{C}^{\gamma}}:= \sup_{t\in[0,T]} \|f(t)\|_{\mathcal{C}^{\gamma}}.$$ We will also use a family of equivalent norms $\|\cdot \|^{(\rho)}_{C_{T} \mathcal{C}^{\gamma}}$, for $\rho\geq0$, given by $$\|f\|^{(\rho)}_{C_{T} \mathcal{C}^{\gamma}} := \sup_{t\in[0,T]} e^{-\rho(T-t)} \|f(t)\|_{\mathcal{C}^{\gamma}}.$$ Indeed, it is easy to see that $$\label{eq:rho_eq_norm} \|f\|_{C_{T} \mathcal{C}^{\gamma}} \leq e^{\rho T} \|f\|_{C_{T} \mathcal{C}^{\gamma}}^{(\rho)}.$$ For any given $\gamma \in \mathbb{R}$ we denote by $\mathcal{C}^{\gamma+}$ and $\mathcal{C}^{\gamma-}$ the following spaces $$\label{eq:c+} \mathcal{C}^{\gamma+} := \cup_{\alpha>\gamma} \mathcal{C}^{\alpha},$$ $$\label{eq:c-} \mathcal{C}^{\gamma-} := \cap_{\alpha<\gamma} \mathcal{C}^{\alpha}.$$ Similarly, we also write $C_{T} \mathcal{C}^{\gamma+} := \cup_{\alpha > \gamma} C_T \mathcal C^\alpha$. The following bound in Hölder-Zygmund spaces will be useful later; it is known as the Bernstein inequality. **Lemma 1** (Bernstein inequality). *For any $\gamma \in \mathbb{R}$, there is $c > 0$ such that $$\label{eq:bernstein_inequality} \left\| f' \right\|_{\gamma} \leq c \left\| f \right\|_{\gamma + 1}, \qquad f \in \mathcal{C}^{\gamma + 1}.$$* Let $\kappa \in (0,1)$. We denote by $C_T^\kappa L^\infty$ the space of $\kappa$-Hölder continuous functions from $[0,T]$ with values in $L^\infty$, and by $C^\kappa_T C^1_b$ the space of $\kappa$-Hölder continuous functions from $[0,T]$ with values in the space of $C^1$ functions which are bounded and have a bounded derivative. 
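As a concrete illustration of the equivalent norm [\[eq:equiv_norm01\]](#eq:equiv_norm01){reference-type="eqref" reference="eq:equiv_norm01"}, the following sketch (ours, with hypothetical function names) estimates the classical $\gamma$-Hölder norm of a function sampled on a grid. For $f(x)=\sqrt{x}$ on $[0,2]$ the seminorm is attained by pairs near the origin, reflecting that $\sqrt{\cdot}$ is exactly $\frac12$-Hölder there.

```python
import numpy as np

def holder_norm(f_vals, xs, gamma):
    """Grid estimate of the classical gamma-Hoelder norm (equivalent norm on
    C^gamma for gamma in (0,1)):
        sup|f| + sup_{0 < |x-y| < 1} |f(x) - f(y)| / |x-y|^gamma."""
    sup = np.max(np.abs(f_vals))
    semi = 0.0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            d = xs[j] - xs[i]
            if d >= 1.0:       # the sup in the seminorm runs over |x-y| < 1
                break
            semi = max(semi, abs(f_vals[j] - f_vals[i]) / d ** gamma)
    return sup + semi

xs = np.linspace(0.0, 2.0, 201)
f = np.sqrt(xs)  # sqrt is 1/2-Hoelder on [0,2] but not Lipschitz near 0
```

Evaluating `holder_norm(f, xs, 0.5)` gives approximately $\sqrt 2 + 1$ (sup-norm plus seminorm), while larger exponents $\gamma > \tfrac12$ blow up on the grid as the mesh is refined, consistent with $\sqrt\cdot \notin \mathcal{C}^{\gamma}$ for $\gamma > \tfrac12$ near the origin.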
We define the following norms and seminorms for $g \in C_T^\kappa L^\infty$: $$\label{eq:norm_infty} \left\| g \right\|_{\infty, L^\infty} := \sup_{t\in[0,T]} \sup_{x\in\mathbb{R}}\left | g(t,x) \right|$$ and $$\label{eq:norm_infinity_holder} [g]_{\kappa, L^{\infty}} := \sup_{t,s \in [0,T], t \neq s} \frac{\|g(t) - g(s)\|_{L^{\infty}}}{|t - s|^{\kappa}}.$$ Notice that if $g\in C_T^\kappa C^1_b$ then $g_x \in C_T^\kappa L^\infty$. We finish this section by introducing an asymptotic relation between functions. For functions $f, g$ defined on an unbounded subset of $\mathbb{R}^{+}$, we write $f(x) = o (g(x))$ if $\lim_{x \to \infty} |f(x) | / |g(x)| = 0$. ## Standing assumption {#subsec:assumptions} The following assumption will hold throughout the paper. **Assumption 1**. *We have $b \in C_T^{1/2} \mathcal{C}^{(-\hat \beta)+}$ for some $0 < \hat{\beta} < 1/2$.* We will derive our numerical scheme and prove its convergence under the above assumption, but solutions to the SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} exist under a weaker assumption $b \in C_T \mathcal{C}^{- \hat \beta}$ and this fact will be exploited in the derivation of the rate of convergence. Although solutions to SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} exist in higher dimensions, we work in dimension $1$ because in this case one can construct a strong solution, see Lemma [Lemma 3](#rm:strong){reference-type="ref" reference="rm:strong"}, which is fundamental to the definition of the strong convergence error. ## Heat kernel and heat semigroup {#subsec:heat_kernel} We will use heat kernel smoothing to derive a sequence of approximating regularised SDEs. Here we introduce notation and provide background information about the action of the heat semigroup on elements of $\mathcal{C}^{\gamma}$. 
The function $$\label{eq:heat_kernel} p_{t}(x) = \frac{1}{\sqrt{2 \pi t}} e^{-\frac{|x|^{2}}{2t}}$$ is called the heat kernel and is the fundamental solution of the heat equation. The operator acting by convolution of the heat kernel with a (generalised) function is called the heat semigroup, and it is denoted by $P_{t}$: for any $g \in \mathcal{S}$ we have $$\label{eq:heat_semigroup} \left(P_{t} g\right) (y) = \left(p_{t} \ast g\right) (y) = \int_{\mathbb{R}} p_{t}(x) g(y - x) dx.$$ The semigroup $P_t$ can be extended to $\mathcal S'$ by duality. For a distribution $\varrho \in \mathcal{S}'$, the convolution and derivative commute, as mentioned in [@renardyIntroductionPartialDifferential2004 Section 5.3] and [@issoglioPdeDriftNegative2022 Remark 2.5], that is, $$\label{eq:comm_sem_der} (p_{t} \ast \varrho)' = p'_{t} \ast \varrho = p_{t} \ast \varrho' .$$ This fact is useful for efficient construction of regularised SDEs when $b = B_x$ for some function $B \in C^{1/2}_T \mathcal{C}^{1-\hat \beta}$, as we do in the numerical example studied in Section [5](#sec:numerics){reference-type="ref" reference="sec:numerics"}. We recall the so-called Schauder estimates, which quantify the effect of heat semigroup smoothing. **Lemma 2** (Schauder estimates). 
*For any $\gamma \in \mathbb{R}$, there is a constant $c>0$ such that for any $\theta \geq 0$ and $f \in \mathcal{C}^{\gamma}$ $$\label{eq:se_Pt} \left\| P_t f \right\|_{\gamma + 2 \theta} \leq ct^{-\theta} \left\| f \right\|_{\gamma}.$$ Moreover, the above constant can be chosen so that if $\theta < 1$ and $f \in \mathcal{C}^{\gamma + 2\theta}$ then $$\label{eq:se_Pt-I} \left\| P_t f - f \right\|_{\gamma} \leq c t^{\theta} \left\| f \right\|_{\gamma + 2 \theta}.$$* ## Existence of solutions to the SDE {#subsec:soln_sde} We recall from [@issoglioPdeDriftNegative2022; @issoglioSDEsSingularCoefficients2023] the main results on the construction, existence and uniqueness of solutions to SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} under the following assumption $$\label{eqn:ass_b} b \in C_T \mathcal{C}^{(- \beta)+}, \qquad \beta \in (0, 1/2).$$ There exist two equivalent notions of solution to SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"}: virtual solutions and solutions via the martingale problem. The formulation via the martingale problem is omitted and the reader is referred to [@issoglioSDEsSingularCoefficients2023]. We describe below the formulation via virtual solutions, because it is particularly suited to our analysis of the numerical scheme. For $\lambda > 0$, let us consider a Kolmogorov-type PDE $$\label{eq:kolmogorov} \begin{cases} u_t + \frac{1}{2} u_{xx} + b u_x = \lambda u - b \\ u(T) = 0 \end{cases}$$ with the solution understood in a mild sense, i.e., as a solution to the integral equation $$\label{eq:mild_solution} u(t) = \int_{t}^{T} P_{s - t} \left( u_x (s) b(s) \right) ds - \int_{t}^{T} P_{s-t} \left( \lambda u(s) - b(s) \right) ds, \quad \forall t \in [0, T] .$$ It is shown in [@issoglioPdeDriftNegative2022 Theorem 4.7] that a solution $u$ exists in $C_{T} \mathcal{C}^{(2 - \beta)-}$ and is unique in $C_T \mathcal{C}^{(1+\beta)+}$. Hence, $u_x$ is $\alpha$-Hölder continuous for any $\alpha <1-\beta$. 
By [@issoglioPdeDriftNegative2022 Proposition 4.13], we have $\|u_x\|_\infty<1/2$ for $\lambda$ large enough; from now on we fix such a $\lambda$. By [@issoglioPdeDriftNegative2022 Proposition 4.13], the mapping $$\label{eq:phi} \phi(t, x):= x + u(t,x)$$ is invertible in the space variable, and we denote this space-inverse by $\psi(t, \cdot)$. Consider now a weak solution $Y$ to the SDE $$\label{eq:Y_def} Y_t = y_0 + \lambda \int_0^t u(s, \psi(s, Y_s)) ds + \int_0^t ( u_x (s, \psi(s, Y_s)) + 1 ) dW_s.$$ We say that $X_t := \psi (t, Y_t)$ is a *virtual solution*[^1] to SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"}. Clearly, $Y_t = \phi (t, X_t) = X_t + u(t, X_t)$ solves [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"} when $X$ is a weak solution to the equation $$\label{eq:virtual_int_eq} X_{t} = x + u(0, x) - u(t, X_{t}) + \lambda \int_{0}^{t} u(s, X_{s}) ds + \int_{0}^{t} \left( u_x (s, X_{s}) + 1 \right) dW_{s},$$ so any solution $(X_t)$ of [\[eq:virtual_int_eq\]](#eq:virtual_int_eq){reference-type="eqref" reference="eq:virtual_int_eq"} is also a virtual solution to SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"}. We also note that the virtual solution does not depend on $\lambda$ thanks to the links between virtual solutions and the martingale problem developed in [@issoglioSDEsSingularCoefficients2023]; the reader is referred to the aforementioned paper for a complete presentation of those links. We complete this section by showing that the virtual solution of SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} is a strong solution in the sense that the solution $Y$ to SDE [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"} is a unique strong solution. **Lemma 3**. 
*Under condition [\[eqn:ass_b\]](#eqn:ass_b){reference-type="eqref" reference="eqn:ass_b"}, there exists a process $(Y_t)$ on the original probability space with the Brownian motion $(W_t)$ that satisfies [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"}, and this process is unique up to modifications. Equivalently, there is a unique process $(X_t)$ that satisfies [\[eq:virtual_int_eq\]](#eq:virtual_int_eq){reference-type="eqref" reference="eq:virtual_int_eq"}.* *Proof.* Recall that $u$ and $u_x$ are bounded, hence they have linear growth uniformly in time, which implies there exists a strong solution $Y$ to SDE [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"}, see [@baldiStochasticCalculus2017 Remark 9.6]. Uniqueness for SDE [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"} follows in dimension 1 from a result by Yamada and Watanabe, see e.g. [@karatzasBrownianMotionStochastic1991 Proposition 2.13]. Indeed, the drift and the diffusion coefficients have linear growth and the diffusion coefficient is $\alpha$-Hölder continuous for some $\alpha>1/2$. ◻ # The numerical scheme and main results {#sec:numerics_results} The numerical scheme for SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} is based on two approximations. The first one replaces the distributional drift with a sequence of functional drifts so that the solutions of the respective SDEs converge to the solution of the original SDE. Subsequently, the approximating SDEs are simulated with an Euler-Maruyama scheme. We will then balance errors coming from the approximation of the drift and the discretisation of time to maximise the rate of convergence. We regularise the drift by applying the heat semigroup. For a real number $N > 0$, we define $$\label{eq:bN} b^N = P_{\frac1N} b.$$ Since $b^N(t, \cdot) \in \mathcal{C}^{\gamma}$ for any $\gamma>0$ (c.f. 
Lemma [Lemma 2](#lemma:schauder_estimates){reference-type="ref" reference="lemma:schauder_estimates"}) for all $t\in[0,T]$, we have $b^N (t, \cdot) \in C^1_b$ for all $t\in[0,T]$; in particular, $b^N(t, \cdot)$ is Lipschitz continuous in the spatial variable. Hence the SDE $$\label{eqn:X^N} dX^N_t = b^N(t, X^N_t) dt + dW_t$$ has a unique strong solution. In keeping with the usual approach, for $m \in \mathbb{N}$ we take an equally spaced partition $t_k = t^m_{k} = kT/m$, $k = 0, \ldots, m$, of the interval $[0, T]$. We define $$\label{eq:time_sup} k(t) = k^m(t) = \max \left\{ k: t_{k} \leq t \right\}, \qquad t \in [0, T].$$ Consider an Euler-Maruyama approximation of $(X^N)$ with $m$ time steps $$\label{eq:m_em_approx} X^{Nm}_{t} = x + \int_{0}^{t} {b^N}\big(t_{k(s)}, X^{Nm}_{t_{k(s)}}\big) ds + W_{t}, \qquad t \in [0, T].$$ We first obtain a bound on the strong error between the approximation $(X^{Nm})$ and the process $(X^N)$ with explicit dependence of constants on the properties of the drift $b^N$. This is needed in order to balance the smoothing via the choice of $N$ and the number of time steps $m$ to optimise the convergence rate of the Euler-Maruyama approximation to the true solution $(X)$. Following arguments in the proof of [@deangelisNumericalSchemeStochastic2022 Prop. 3.4] we obtain the following result. **Proposition 4**. *Assume that $b^N \in C^{1/2}_T L^\infty \cap L^\infty_T C_b^1$. Then $$\sup_{0 \leq t \leq T} \mathbb{E}\left[\left| X^{Nm}_{t} - X^N_{t}\right|\right] \leq A^N m^{-1} + B^N m^{-1/2},$$ where $$\begin{aligned} A^N &=\left\|b^N\right\|_{\infty,L^{\infty}} \left( 1 + \left\| b^N_x\right\|_{\infty,L^{\infty}} \right),\\ B^N &= \left\| b^N_x \right\|_{\infty, L^{\infty}} + \left[b^N\right]_{\frac12, L^{\infty}}.\end{aligned}$$* **Corollary 5**. 
*Under Assumption [Assumption 1](#ass:b_betahat){reference-type="ref" reference="ass:b_betahat"}, the condition of Proposition [Proposition 4](#pr:Euler_conv){reference-type="ref" reference="pr:Euler_conv"} is satisfied and for any $\epsilon > 0$ there is a constant $c>0$ such that $$A^N \le c N^{\frac{\epsilon + \hat{\beta}}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}} \left( 1+ N^{\frac{\epsilon + \hat{\beta} + 1}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}} \right)$$ and $$B^N \le c N^{\frac{\epsilon + \hat{\beta} + 1}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}} + c N^{\frac{\epsilon+\hat\beta}2} \|b\|_{C_T^\frac12 \mathcal{C}^{-\hat\beta}} .$$* *Proof.* We first show that $b^N \in C^\frac12 _T L^\infty \cap L^\infty_T C^1_b$. In the proof, a constant $c$ may change from line to line. We apply Lemma [Lemma 2](#lemma:schauder_estimates){reference-type="ref" reference="lemma:schauder_estimates"} with $\epsilon$ from the statement of the corollary, $\theta=\frac{\epsilon+\hat\beta}{2}$ and $\gamma= -\hat\beta$ to get $$\begin{aligned} \|b^N(t,\cdot) - b^N(s, \cdot)\|_{L^\infty} &= \|P_{\frac1N} (b(t,\cdot) - b(s, \cdot))\|_{L^\infty} \leq c \|P_{\frac1N} (b(t,\cdot) - b(s, \cdot))\|_{\mathcal{C}^{\epsilon}}\\ &\leq c N^{\frac{\epsilon+\hat\beta}2} \|b(t,\cdot) - b(s, \cdot)\|_{\mathcal{C}^{-\hat\beta}},\end{aligned}$$ where the first inequality is by [\[eq:equiv_norm01\]](#eq:equiv_norm01){reference-type="eqref" reference="eq:equiv_norm01"}. 
Hence, $$\label{eq:er04} [b^N]_{\frac12, L^\infty} \leq c N^{\frac{\epsilon+\hat\beta}2} \|b\|_{C_T^\frac12 \mathcal{C}^{-\hat\beta}}.$$ By the same arguments applied to $b^N(t, \cdot)$, we obtain $\|b^N(t, \cdot) \|_{L^\infty} \le c N^{\frac{\epsilon+\hat\beta}2} \|b(t,\cdot)\|_{\mathcal{C}^{-\hat\beta}}$, which yields $$\label{eq:er02} \left\| b^{N} \right\|_{\infty,L^{\infty}} \leq \left\| b^{N} \right\|_{C_{T} \mathcal{C}^{\epsilon}} \leq c N^{\frac{\epsilon + \hat{\beta}}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}}.$$ Bounds [\[eq:er04\]](#eq:er04){reference-type="eqref" reference="eq:er04"} and [\[eq:er02\]](#eq:er02){reference-type="eqref" reference="eq:er02"} allow us to conclude that $b^N \in C_T^\frac12 L^\infty$. It remains to show that $b_x^N(t, \cdot)$ exists and is bounded uniformly in $t\in[0,T]$. The derivative $b^N_x(t, \cdot)$ is well defined for all $t\in[0,T]$ because $b^N \in C_T\mathcal{C}^{\gamma}$ for all $\gamma>0$. Using the equivalent norm [\[eq:equiv_norm01\]](#eq:equiv_norm01){reference-type="eqref" reference="eq:equiv_norm01"} and Bernstein's inequality [\[eq:bernstein_inequality\]](#eq:bernstein_inequality){reference-type="eqref" reference="eq:bernstein_inequality"} we have $$\label{eq:er03} \left\| b^{N}_x \right\|_{\infty, L^{\infty}} \leq \left\| b^{N}_x \right\|_{C_{T} \mathcal{C}^{\epsilon}} \leq c \left\|b^{N}\right\|_{C_{T} \mathcal{C}^{\epsilon+1}} \leq c N^{\frac{\epsilon + \hat{\beta} + 1}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}},$$ where the last inequality is by Lemma [Lemma 2](#lemma:schauder_estimates){reference-type="ref" reference="lemma:schauder_estimates"}. Inequalities [\[eq:er02\]](#eq:er02){reference-type="eqref" reference="eq:er02"} and [\[eq:er03\]](#eq:er03){reference-type="eqref" reference="eq:er03"} show that $b^N \in L^\infty_T C^1_b$. 
It remains to insert the bounds derived above into formulas for $A^N$ and $B^N$ from Proposition [Proposition 4](#pr:Euler_conv){reference-type="ref" reference="pr:Euler_conv"}. ◻ Before stating the main result, we state another auxiliary result, whose proof is the main content of Section [4](#sec:proof){reference-type="ref" reference="sec:proof"} below. **Proposition 6**. *Under Assumption [Assumption 1](#ass:b_betahat){reference-type="ref" reference="ass:b_betahat"}, for any $\hat \alpha \in (1/2, 1 - \hat\beta)$ and any $\beta \in (\hat \beta, 1/2)$, there is a constant $c$ such that $$\sup_{t \in [0, T]} \mathbb{E} [ |X^N_t - X_t| ] \leq c \|b^N - b\|_{C_{T} \mathcal{C}^{-\beta}}^{2 \hat \alpha - 1}$$ for all $N$ sufficiently large so that $\|b^N - b\|_{C_{T} \mathcal{C}^{-\beta}} < 1$.* The approximation error of our numerical scheme comes from two sources: the time discretisation error from the Euler-Maruyama scheme, which depends on $m$, and the smoothing error coming from replacing $b$ with $b^N$ in the SDE, which depends on $N$. We will now show how to balance those two sources of errors and bound the resulting convergence rate. To this end, we parametrise $N$ in terms of $m$ and postulate that this parametrisation is of the form $N(m) = m^{\eta}$ for some $\eta > 0$. We denote $$\hat X^m := X^{N(m)m}$$ and consider the strong error $$\Upsilon(m) = \sup_{0 \leq t \leq T} \mathbb{E} \left[ \left| \hat X^{m}_{t} - X_{t} \right| \right].$$ Take any $\epsilon \in (0, 1/2-\hat\beta)$, $\beta \in (\hat \beta, 1/2)$ and ${\hat \alpha} \in (1/2, 1 - \hat\beta)$. 
By the triangle inequality, we have $$\label{eq:er_constants} \begin{aligned} \Upsilon(m) &\leq \sup_{0 \leq t \leq T} \mathbb{E}\left[\left| \hat X^{m}_{t} - X^{N(m)}_{t}\right|\right] + \sup_{0 \leq t \leq T} \mathbb{E}\left[\left| X^{N(m)}_{t} - X_{t}\right|\right]\\ &\leq A^{N(m)} m^{-1} + B^{N(m)} m^{-1/2} + c\left\|b^{N(m)} - b \right\|_{C_{T} \mathcal{C}^{-\beta}}^{2\hat \alpha - 1}\\ & \leq c\Big[ m^{\eta\frac{\epsilon + \hat{\beta}}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}} \left( 1+ m^{\eta\frac{\epsilon + \hat{\beta} + 1}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}} \right) m^{-1}\\ &\hspace{20pt}+ \Big( m^{\eta\frac{\epsilon + \hat{\beta} + 1}{2}} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat{\beta}}} +m^{\eta\frac{\epsilon+\hat\beta}2} \|b\|_{C_T^\frac12 \mathcal{C}^{-\hat\beta}} \Big) m^{-\frac12} \\ &\hspace{20pt} + m^{-\eta\frac{\beta-\hat\beta}2({2\hat\alpha}-1)} \left\| b \right\|_{C_{T} \mathcal{C}^{-\hat\beta}}^{{2\hat\alpha} - 1} \Big], \end{aligned}$$ where in the second inequality we used Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"} and in the last inequality Corollary [Corollary 5](#cor:Euler_conv){reference-type="ref" reference="cor:Euler_conv"} and the following estimate arising from Lemma [Lemma 2](#lemma:schauder_estimates){reference-type="ref" reference="lemma:schauder_estimates"}: $$\|b^{N(m)}(t, \cdot) - b(t, \cdot) \|_{\mathcal{C}^{-\beta}} \le c N(m)^{-\frac{\beta-\hat\beta}{2}} 
\|b(t, \cdot)\|_{\mathcal{C}^{-\hat\beta}}, \qquad t \in [0, T].$$ Since all the norms appearing on the right-hand side are finite, they can be absorbed by a constant and we have $$\label{eq:subs_m_eta} \Upsilon(m) \leq c {\bigg[} {m}^{\eta \frac{\epsilon + \hat{\beta}}{2} - 1} + {m}^{\eta \frac{2\epsilon + 2\hat{\beta} + 1}{2} - 1} \\ + {m}^{\eta \frac{\epsilon + \hat{\beta} + 1}{2} - \frac{1}{2}} + {m}^{\eta \frac{\epsilon + \hat{\beta}}{2} - \frac{1}{2}} + m^{ -\eta \frac{(\beta - \hat{\beta})({2 \hat\alpha} - 1)}{2} } {\bigg]}.$$ Before proceeding further, we optimise the last term in $\beta$ and ${\hat\alpha}$ to maximise its rate of decrease in $m$. Recalling the constraints for $\beta$ and ${\hat\alpha}$, the product $(\beta - \hat{\beta})({2 \hat\alpha} - 1)$ is maximised for ${\hat\alpha} \approx 1-\hat\beta$ and $\beta \approx 1/2$. We take ${\hat\alpha} = 1 - \hat\beta -\epsilon$ and $\beta = 1/2 - \epsilon$ which yields the value $2(1/2-\hat\beta - \epsilon)^2$. The last term of [\[eq:subs_m\_eta\]](#eq:subs_m_eta){reference-type="eqref" reference="eq:subs_m_eta"} takes the form $$m^{ -\eta (\frac12 - \hat \beta - \epsilon)^2}.$$ The monotonicity of the remaining four terms in [\[eq:subs_m\_eta\]](#eq:subs_m_eta){reference-type="eqref" reference="eq:subs_m_eta"} depends on $\eta$. For the scheme to converge, we need to make sure that they all decrease, which is guaranteed if $\eta\frac{2\epsilon+2\hat\beta+1}2-1<0$ and $\eta\frac{\epsilon+\hat\beta+1}2-\frac12<0$. This leads to the constraint $$\label{eq:contraint_eta} 0 < \eta < \frac{1}{\epsilon + \hat{\beta} + 1}.$$ At this point we have to find the optimal value of $\eta$. It is easy to see that the slowest decreasing term within the first four terms of [\[eq:subs_m\_eta\]](#eq:subs_m_eta){reference-type="eqref" reference="eq:subs_m_eta"} is ${m}^{\eta \frac{\epsilon + \hat{\beta} + 1}{2} - \frac{1}{2}}$. 
To balance the Euler-Maruyama error, measured by the first four terms, and the error of approximating $b$ with $b^{N(m)}$ in the last term, we equate the rates $$\label{eqn:eta_equation} {\eta \frac{\epsilon + \hat{\beta} + 1}{2} - \frac{1}{2}} =-\eta (\frac12 - \hat \beta - \epsilon)^2.$$ This leads to $$\label{eq:eta} \eta = \frac1{\epsilon + \hat{\beta} + 1 + 2(\frac12 - \hat\beta -\epsilon)^2 }.$$ Inserting this expression for $\eta$ into the right-hand side of [\[eqn:eta_equation\]](#eqn:eta_equation){reference-type="eqref" reference="eqn:eta_equation"} we obtain the rate of convergence of our scheme as $$\Big(\frac{\epsilon + \hat{\beta} + 1}{(\frac12 - \hat\beta -\epsilon)^2} + 2 \Big)^{-1}.$$ We summarise the above derivation in the following theorem. **Theorem 7**. *Let Assumption [Assumption 1](#ass:b_betahat){reference-type="ref" reference="ass:b_betahat"} hold and fix $\epsilon \in (0, 1/2-\hat\beta)$. By taking $$N(m) = m^{\frac1{\epsilon + \hat{\beta} + 1 + 2(\frac12 - \hat\beta -\epsilon)^2 }},$$ the strong error of our scheme is bounded as follows: $$\label{eq:euler_rate} \sup_{0 \leq t \leq T} \mathbb{E} \left[ \left| X^{N(m) m}_{t} - X_{t} \right| \right] \leq c m^{-\big(\frac{\epsilon + \hat{\beta} + 1}{(1/2 - \hat\beta -\epsilon)^2} + 2 \big)^{-1}},$$ where the constant $c$ depends on $\epsilon$ and on the drift $b$.* **Remark 8**. *Note that the bound stated in Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"} holds for $N$ large enough so that $\|b^{N(m)} - b\|_{C_T\mathcal{C}^{-\beta}} < 1$. Formally, the error for small $N(m)$, i.e., small $m$, could be incorporated into the constant $c$ in [\[eq:euler_rate\]](#eq:euler_rate){reference-type="eqref" reference="eq:euler_rate"} when the condition $\|b^{N(m)} - b\|_{C_T\mathcal{C}^{-\beta}} < 1$ is not satisfied. 
However, when doing numerical estimation of the convergence rate (see Section [5](#sec:numerics){reference-type="ref" reference="sec:numerics"}) we have to pick $m$ large enough so that $\|b^{N(m)} - b\|_{C_T\mathcal{C}^{-\beta}} < 1$, since otherwise the numerical estimate of the rate would not be reliable.* **Corollary 9**. *Our construction allows us to achieve any strong convergence rate strictly smaller than $$\lim_{\epsilon \downarrow 0} \Big(\frac{\epsilon + \hat{\beta} + 1}{(\frac12 - \hat\beta -\epsilon)^2} + 2 \Big)^{-1} = \Big(\frac{\hat{\beta} + 1}{(\frac12 - \hat\beta)^2} + 2 \Big)^{-1} =: r(\hat \beta).$$* **Remark 10**. *We rewrite $r(\hat \beta) =\frac{\left(\frac{1}{2} - \hat{\beta}\right)^2}{2\left(\frac{1}{2} - \hat{\beta}\right)^2 + \hat{\beta} + 1}$ from Corollary [Corollary 9](#cor:rate){reference-type="ref" reference="cor:rate"}. Figure [3](#fig:empirical_rate){reference-type="ref" reference="fig:empirical_rate"} [in Section [5](#sec:numerics){reference-type="ref" reference="sec:numerics"}]{style="color: purple"} displays this function over the range $\hat \beta \in (0,\frac{1}{2})$. We take a closer look at the limits as $\hat \beta$ approaches $0$ and $\frac{1}{2}$:* - *$\lim_{\hat\beta \downarrow 0} r(\hat \beta) = \frac16$. This corresponds to $b\in C_T \mathcal{C}^{0+}$, which is comparable to the case of a measurable function $b\in C_T \mathcal{C}^{0}$.* - *$\lim_{\hat\beta \uparrow \frac{1}{2}} r(\hat \beta) = 0$, so when the roughness of the drift approaches the boundary $\frac{1}{2}$, the scheme deteriorates. This is expected due to the nature of the estimate in Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"} that we use.* *A direct comparison of our rate with other rates found in the literature is only possible in the case of measurable drifts with Brownian noise, which has been treated in [@butkovskyApproximationSDEsStochastic2021].
The rate obtained there is $\frac{1}{2}$, so our estimate is clearly not optimal because we obtain $\frac{1}{6}$. However, the technique they use is different from ours; in particular, they use the stochastic sewing lemma of [@leStochasticSewingLemma2020] to improve the rate, but it seems that it is not straightforward to apply the techniques of [@butkovskyApproximationSDEsStochastic2021] in our setting.* *Paper [@goudenegeNumericalApproximationSDEs2022] considers time-homogeneous distributional drifts and, among others, covers drifts $b\in C^{-\beta}$ for some $\beta>0$, but the driving noise is a fractional Brownian motion with Hurst index $H\in(0,\frac{1}{2})$. For $\beta \in (0,\frac1{2H}-1)$, the rate of convergence is $\frac{1}{2(1+\beta)}-\epsilon$ for any $\epsilon > 0$; this excludes the case $H=\frac{1}{2}$ of Brownian noise, which would lead to a rate of $1/2$ for measurable functions (since $\beta$ would also approach 0).* *In [@deangelisNumericalSchemeStochastic2022], the authors consider the SDE [\[eq:sde_S\]](#eq:sde_S){reference-type="eqref" reference="eq:sde_S"} with the drift $b \in C^{\kappa}_T H^{-\beta}_{{(1-\beta)}^{-1}, q}$ for $\kappa \in (\frac{1}{2}, 1)$, $\beta \in (0, \frac{1}{4})$ and $q \in (4, \frac{1}\beta)$. The space $H^{-\beta}_{{(1-\beta)}^{-1}, q}$ is a fractional Sobolev space of negative regularity $-\beta$. Their $\beta$ is closely related to our $\hat\beta$, as fractional Sobolev spaces $H^{s}_{p,q}$ are related to, albeit different from, Besov spaces $B^s_{p,q}$, see e.g. [@triebelTheoryFunctionSpaces2010]. In both cases the index $s$ is a measure of smoothness of the elements of the space. When $\beta \downarrow 0$, the convergence rate of the numerical scheme in [@deangelisNumericalSchemeStochastic2022] tends to $\frac{1}{6}$, as in our case.
However, when $\beta\to \frac{1}{4}$ the convergence rate in [@deangelisNumericalSchemeStochastic2022] vanishes, while we get $r(\frac{1}{4}) = {\frac{1}{22}}$.* *Finally, we would like to mention that in our numerical study in Section [5](#sec:numerics){reference-type="ref" reference="sec:numerics"} we obtain a numerical estimate of the convergence rate equal to $\frac{1+(-\hat \beta)}{2}$, which is the equivalent of the rate $\frac{1+\gamma}{2}$ found in [@butkovskyApproximationSDEsStochastic2021] for positive $\gamma$, i.e. for $\gamma$-Hölder continuous drifts. This suggests that the rate from the latter paper could apply in our setting, but it would require a different approach and is left for future work.* # Proof of Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"} {#sec:proof} Let us introduce an auxiliary process $Y^N$, which is the (weak) solution of the SDE $$\label{eq:YN_def} Y^N_t = y^N_0 + \lambda \int_0^t u^N(s, \psi^N(s, Y^N_s)) ds + \int_0^t ( u_x^N (s, \psi^N(s, Y^N_s)) + 1 ) dW_s,$$ where $u^N$ is the unique solution to the regularised Kolmogorov equation $$\label{eq:kolmogorov_N} \begin{cases} u^N_t + \frac{1}{2} u^N_{xx} + b^N u_x^N = \lambda u^N - b^N \\ u^N(T) = 0, \end{cases}$$ and $\psi^N (t, x)$ is the space-inverse of $$\label{eq:phiN} \phi^N(t, x):= x+ u^N (t, x),$$ which exists by [@issoglioPdeDriftNegative2022 Proposition 4.16] for $\lambda$ large enough. Since $b^N \to b$ in [$C_T \mathcal{C}^{-\hat\beta}$]{style="color: purple"} by Lemma [Lemma 2](#lemma:schauder_estimates){reference-type="ref" reference="lemma:schauder_estimates"} and Assumption [Assumption 1](#ass:b_betahat){reference-type="ref" reference="ass:b_betahat"}, we can apply [@issoglioPdeDriftNegative2022 Lemma 4.19].
This lemma and its proof provide key properties of the above system and its relation to [\[eq:kolmogorov\]](#eq:kolmogorov){reference-type="eqref" reference="eq:kolmogorov"} and [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"}. The solution of the regularised Kolmogorov equation [\[eq:kolmogorov_N\]](#eq:kolmogorov_N){reference-type="eqref" reference="eq:kolmogorov_N"} enjoys a bound $\|u^N_x\|_\infty <1/2$ for $\lambda$ large enough; the constant $\lambda$ can be chosen independently of $N$ (but it depends on the drift $b$ of the original SDE), and from now on we fix such $\lambda$. We also have $u^N \to u$ and $u^N_x \to u_x$ uniformly on $[0,T]\times \mathbb R^d$. This section is devoted to the proof of Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"} with the overall structure inspired by [@deangelisNumericalSchemeStochastic2022]. The main idea of the proof is to rewrite the solutions $X^N$ and $X$ in their equivalent virtual formulations and thus study the error between the auxiliary processes $Y^N$ defined in [\[eq:YN_def\]](#eq:YN_def){reference-type="eqref" reference="eq:YN_def"} and $Y$ defined in [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"}. This transformation has the advantage that the SDEs for $Y^N$ and $Y$ are classical SDEs with strong solutions, see Remark [Lemma 3](#rm:strong){reference-type="ref" reference="rm:strong"}. After writing the difference $X^N-X$ in terms of $Y^N-Y$ we apply Itô's formula to $|Y^N-Y|$. We will control the stochastic and Lebesgue integrals by $u^N-u$ and $\psi^N-\psi$, which in turn are bounded by some function of $b^N-b$. The term involving the local time at $0$ of $Y^N-Y$ is handled in Lemma [Lemma 15](#lemma:local-time-at-0){reference-type="ref" reference="lemma:local-time-at-0"}. 
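For intuition only, the space-inversion at the heart of this transformation can be computed by a Banach fixed-point iteration: since $\|u_x\|_\infty \le 1/2$, for fixed $y$ the map $x \mapsto y - u(t,x)$ is a contraction whose fixed point is $\psi(t,y)$. A minimal Python sketch, with a hypothetical stand-in for $u$ in place of the actual solution of the Kolmogorov equation:

```python
import math

def invert_phi(y, u, tol=1e-12, max_iter=200):
    """Solve phi(x) = x + u(x) = y via the contraction x -> y - u(x);
    the iteration converges geometrically because u is (at most) 1/2-Lipschitz."""
    x = y
    for _ in range(max_iter):
        x_new = y - u(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# hypothetical stand-in for u(t, .) with |u'| <= 0.4 < 1/2
u = lambda x: 0.4 * math.sin(x)

psi_y = invert_phi(1.3, u)
assert abs(psi_y + u(psi_y) - 1.3) < 1e-9          # phi(psi(y)) = y
assert abs(invert_phi(2.0, u) - invert_phi(1.0, u)) <= 2.0   # psi is 2-Lipschitz
```

The same contraction estimate is what yields the $2$-Lipschitz bound for $\psi$ used repeatedly below.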
In the remainder of this section, we fix $\beta \in (\hat \beta, 1/2)$ and $\hat\alpha \in (0, 1 - \hat\beta)$ as in the statement of Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"}. Since $\beta \in (\hat \beta, 1/2)$, we have $b \in C_T \mathcal{C}^{(-\beta)+}$ by Assumption [Assumption 1](#ass:b_betahat){reference-type="ref" reference="ass:b_betahat"}. Notice that the solutions $u$ and $u^N$ obtained when taking $b$ as an element of $C^{1/2}_T \mathcal{C}^{(-\hat\beta)+}$ are the same as when viewing the same $b$ as an element of $C_T \mathcal{C}^{(-\beta)+}$. We obviously have $u, u^N \in C_{T} \mathcal{C}^{(2- \beta)-}$ and $u, u^N \in C_{T} \mathcal{C}^{1 + \alpha}$ for any $\alpha < 1- \hat\beta$. **Lemma 11**. *There are $\rho_0$ and $c$ such that for any $\rho\geq \rho_0$ and for any $\alpha <1-\beta$ we have $$\| u - u^N \|^{(\rho)}_{C_{T} \mathcal{C}^{1 + \alpha}} \leq 2c \| b - b^N \|_{C_{T} \mathcal{C}^{-\beta}} (\| u \|^{(\rho)}_{C_{T} \mathcal{C}^{1 + \alpha}} - 1) \rho^{\frac{\alpha + \beta - 1}{2}}.$$* *Proof.* The bound in the statement of the lemma forms the main part of the proof of [@issoglioPdeDriftNegative2022 Lemma 4.17] in the special case when the terminal condition is zero and $g^N = b^N$, $g = b$. ◻ The parameter $\rho$ and the exact dependence of the bound on it are not important for our arguments, so we may fix $\rho$ that satisfies the conditions of the above lemma. However, for completeness of the presentation, we will mention the dependence on $\rho_0$ and $\rho$ in the results below. Using Lemma [Lemma 11](#lemma:diff_u_uN){reference-type="ref" reference="lemma:diff_u_uN"} we can derive a uniform bound on the $L^\infty$-norm of the differences $u(t) - u^N(t)$ and $u_x(t) - u_x^N(t)$. **Lemma 12**.
*For any $\rho \ge \rho_0$, where $\rho_0$ is from Lemma [Lemma 11](#lemma:diff_u_uN){reference-type="ref" reference="lemma:diff_u_uN"}, there is $\kappa > 0$ such that for all $t \in [0, T]$ $$%\label{eq:uNu_bounded_by_bNb} \| u(t) - u^N(t) \|_{L^\infty} + \| u_x(t) - u^N_x(t) \|_{L^\infty} \leq \kappa \| b - b^N \|_{ C_{T} \mathcal{C}^{-\beta}}.$$ The constant $\kappa$ also depends on $c$ from Lemma [Lemma 11](#lemma:diff_u_uN){reference-type="ref" reference="lemma:diff_u_uN"} and on $T$ and $u$.* *Proof.* We recall that $u, u^N \in C_T \mathcal{C}^{1+\alpha}$ and $u_x, u^N_x \in C_T \mathcal{C}^{\alpha}$ for any $\alpha<1-\beta$. Recall also that the norm in $\mathcal{C}^{\gamma}$ for $\gamma\in(0,1)$ is given by [\[eq:equiv_norm01\]](#eq:equiv_norm01){reference-type="eqref" reference="eq:equiv_norm01"} and in $\mathcal{C}^{\gamma}$ for $\gamma\in(1,2)$ is given by [\[eq:equiv_norm12\]](#eq:equiv_norm12){reference-type="eqref" reference="eq:equiv_norm12"}. Using this together with the Bernstein inequality [\[eq:bernstein_inequality\]](#eq:bernstein_inequality){reference-type="eqref" reference="eq:bernstein_inequality"} we get for all $t\in[0,T]$ that $$\| u(t) - u^N(t) \|_{L^\infty} + \| u_x(t) - u^N_x(t) \|_{L^\infty} \leq c' \| u(t) - u^N(t) \|_{\mathcal{C}^{1+\alpha}} ,$$ for some $c'>0$. We conclude using [\[eq:rho_eq_norm\]](#eq:rho_eq_norm){reference-type="eqref" reference="eq:rho_eq_norm"} and Lemma [Lemma 11](#lemma:diff_u_uN){reference-type="ref" reference="lemma:diff_u_uN"}.
The thesis holds with the constant $\kappa$ given by $$\kappa = c'e^{\rho T} 2c (\| u \|^{(\rho)}_{C_{T} \mathcal{C}^{1 + \alpha}} - 1) {\rho^{\frac{\alpha + \beta - 1}{2}}}.$$ ◻ Next we derive a bound for the difference $\psi-\psi^N$, where we recall that the two functions are the space-inverses of $\phi$ and $\phi^N$ defined in [\[eq:phi\]](#eq:phi){reference-type="eqref" reference="eq:phi"} and [\[eq:phiN\]](#eq:phiN){reference-type="eqref" reference="eq:phiN"}. **Lemma 13**. *Take $\kappa$ from Lemma [Lemma 12](#lemma:diff_uN_graduN){reference-type="ref" reference="lemma:diff_uN_graduN"}. We have $$\sup_{(t,y) \in [0,T] \times \mathbb{R}} \left|\psi (t,y) - \psi^N(t,y)\right| \leq 2 \kappa \left\|b - b^N\right\|_{C_{T} \mathcal{C}^{-\beta}}.$$* *Proof.* Let us first recall that $\|u_x\|_\infty \leq \frac12$, see the discussion at the beginning of this section. Hence $u(t,\cdot)$ is $\frac12$-Lipschitz. Using this we have for any $x, x' \in \mathbb R$ $$\label{eq:phi_difference} \begin{aligned} \left| \phi(t, x) - \phi(t, x') \right| &= \left| \left( x + u(t, x) \right) - \left( x' + u(t, x') \right) \right| \\ &\geq |x - x'| - |u(t, x) - u(t, x')| \\ &\geq |x - x'| - \frac12 |x-x'| \\ &\geq \frac12 |x-x'| . \end{aligned}$$ Insert $x= \psi(t, y)$ and $x' = \psi^{N}(t, y)$ for some $y \in \mathbb{R}$ to get the bound $$\left| \phi(t, \psi(t,y)) - \phi(t, \psi^{N}(t, y)) \right| \geq \frac12 \left| \psi(t, y) - \psi^N(t, y) \right|.$$ This implies the first inequality below $$\label{eq:psi_phi_1} \begin{aligned} \left| \psi(t, y) - \psi^{N}(t, y) \right| &\leq 2 \left| \phi(t, \psi(t, y)) - \phi(t, \psi^{N}(t,y)) \right| \\ &= 2 \left| \phi^{N}(t, \psi^{N}(t, y)) - \phi(t, \psi^{N}(t,y)) \right| \\ &= 2 \left| u^{N}(t, \psi^{N}(t, y)) - u(t, \psi^{N}(t,y)) \right|, \end{aligned}$$ where we used $\phi^{N}(t, \psi^{N}(t, y)) = y = \phi(t, \psi(t, y))$ and the definition of $\phi$ and $\phi^N$.
Thus $$\left| \psi(t, y) - \psi^{N}(t, y) \right| \leq 2\left\| u(t) - u^{N}(t) \right\|_{L^{\infty}} \leq 2\kappa \left\| b - b^{N} \right\|_{C_T\mathcal{C}^{-\beta}},$$ having used Lemma [Lemma 12](#lemma:diff_uN_graduN){reference-type="ref" reference="lemma:diff_uN_graduN"} in the last inequality. ◻ Finally we derive a bound for $|u^N(s, \psi^N(s, y)) - u(s, \psi(s, y'))|$ and for $|u_x^N(s, \psi^N(s, y)) - u_x(s, \psi(s, y'))|$. **Lemma 14**. *For any $\beta\in(\hat\beta, 1/2)$, any $\alpha< 1-\hat\beta$ and any $y,y' \in \mathbb{R}$, the following bounds are satisfied $$\label{eq:bound_u_abs} \left|u^N(s, \psi^N(s, y')) - u(s, \psi(s, y))\right| \leq 2 \kappa \left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}} + \left|y-y'\right|,$$ and $$\label{eq:bound_ux_abs} \begin{split} \left|u_x^N(s, \psi^N(s, y')) - u_x(s, \psi(s, y))\right| \leq & \kappa \left\| b - b^N \right\|_{C_{T} \mathcal{C}^{-\beta}} + 2^{\alpha}\kappa^{\alpha} \left\|u\right\|_{C_{T} \mathcal{C}^{1 + \alpha}} \left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{\alpha} \\ &+\left|u_x(s, \psi(s, y)) - u_x(s, \psi(s, y'))\right|, \end{split}$$ where $\kappa$ is from Lemma [Lemma 12](#lemma:diff_uN_graduN){reference-type="ref" reference="lemma:diff_uN_graduN"}.* *Proof.* We start with [\[eq:bound_u\_abs\]](#eq:bound_u_abs){reference-type="eqref" reference="eq:bound_u_abs"}. We rewrite the left-hand side as $$\label{eq:uN-u} \begin{aligned} \left|u^N\left(s, \psi^N\left(s,y'\right)\right) - u\left(s, \psi\left(s, y\right)\right)\right| &\leq \left|u^N\left(s, \psi^N\left(s, y'\right)\right) - u\left(s, \psi^N\left(s, y'\right)\right)\right| \\ &\hspace{11pt}+ \left|u\left(s, \psi^N\left(s, y'\right)\right) - u\left(s, \psi\left(s, y'\right)\right)\right| \\ &\hspace{11pt}+ \left|u\left(s, \psi\left(s,y'\right)\right) - u\left(s, \psi\left(s, y\right)\right)\right|. 
\end{aligned}$$ Using Lemma [Lemma 12](#lemma:diff_uN_graduN){reference-type="ref" reference="lemma:diff_uN_graduN"}, we bound the first term above by $$\left|u^N\left(s, \psi^N\left(s, y'\right)\right) - u\left(s, \psi^N\left(s, y'\right)\right)\right| \leq \|u^N\left(s\right) - u\left(s\right)\|_{L^\infty} \leq \kappa \left\| b - b^N \right\|_{C_{T} \mathcal{C}^{- \beta}}.$$ The second term in [\[eq:uN-u\]](#eq:uN-u){reference-type="eqref" reference="eq:uN-u"} is bounded using the fact that $\|u_x\|_\infty \le \frac12$ (see the discussion at the beginning of this section) and Lemma [Lemma 13](#lemma:bound_psi-psiN){reference-type="ref" reference="lemma:bound_psi-psiN"}: $$\left|u\left(s, \psi^N\left(s, y'\right)\right) - u\left(s, \psi\left(s, y'\right)\right)\right| \leq \frac12 |\psi^N\left(s, y'\right) - \psi\left(s, y'\right)| \leq \kappa \left\| b - b^N \right\|_{\textcolor{purple}{C_{T} \mathcal{C}^{-\beta}}}.$$ For the third term in [\[eq:uN-u\]](#eq:uN-u){reference-type="eqref" reference="eq:uN-u"}, we use again that $u(t, \cdot)$ is $\tfrac12$-Lipschitz $$\left|u\left(s, \psi\left(s,y'\right)\right) - u\left(s, \psi\left(s, y\right)\right)\right| \leq \frac12|\psi\left(s,y'\right) - \psi\left(s, y\right)| \leq |y-y'|,$$ where the last inequality follows from the fact that $\psi(t, \cdot)$ is $2$-Lipschitz, which we argue as follows. By the definition of $\psi$ as the space inverse of $\phi$, we have $z = \psi(t,z) + u(t, \psi(t,z))$. Hence $$\begin{aligned} \left| \psi(t, y) - \psi(t, y') \right| &\leq |u(t, \psi(t,y)) - u(t, \psi(t, y')| + |y - y'| \le \frac12 | \psi(t, y) - \psi(t, y')| + |y - y'|,\end{aligned}$$ where the last inequality uses again that $u(t, \cdot)$ is $\frac12$-Lipschitz. Combining the above estimates proves [\[eq:bound_u\_abs\]](#eq:bound_u_abs){reference-type="eqref" reference="eq:bound_u_abs"}. Let us now prove [\[eq:bound_ux_abs\]](#eq:bound_ux_abs){reference-type="eqref" reference="eq:bound_ux_abs"}. 
Similarly to the above, we write $$\label{eq:uNx-ux} \begin{aligned} \left| u^N_x(s, \psi^N(s, y')) - u_x(s, \psi(s, y))\right| &\leq \left| u_x^N(s, \psi^N(s, y')) - u_x(s, \psi^N(s, y'))\right| \\ &\hspace{11pt}+ \left| u_x(s, \psi^N(s, y')) - u_x(s, \psi(s, y'))\right| \\ &\hspace{11pt}+ \left| u_x(s, \psi(s, y')) - u_x(s, \psi(s, y))\right|. \end{aligned}$$ The first term is bounded using Lemma [Lemma 12](#lemma:diff_uN_graduN){reference-type="ref" reference="lemma:diff_uN_graduN"} as follows $$\left|u^N_x\left(s, \psi^N\left(s, y'\right)\right) - u_x\left(s, \psi^N\left(s, y'\right)\right)\right| \leq \|u_x^N\left(s\right) - u_x\left(s\right)\|_{L^\infty} \leq \kappa \left\| b - b^N \right\|_{{C_{T} \mathcal{C}^{-\beta}}}.$$ Since $b \in C_T \mathcal{C}^{-\hat\beta}$ and $\alpha \in (1/2, 1-\hat\beta)$, we have $u\in C_T \mathcal{C}^{1+ \alpha}$, so by the Bernstein inequality (Lemma [Lemma 1](#lemma:bernstein_inequality){reference-type="ref" reference="lemma:bernstein_inequality"}) we have $u_x \in C_T \mathcal{C}^{{\alpha}}$ with $\|u_x\|_{{C_T\mathcal{C}^{\alpha}}} \le \|u\|_{{C_T\mathcal{C}^{1+\alpha}}}$. Recalling the norm [\[eq:equiv_norm01\]](#eq:equiv_norm01){reference-type="eqref" reference="eq:equiv_norm01"} in $\mathcal{C}^{\alpha}$, we conclude that $u_x(t)$ is $\alpha$-Hölder continuous with constant $\|u\|_{{C_T \mathcal{C}^{1+\alpha}}}$. This allows us to bound the second term in [\[eq:uNx-ux\]](#eq:uNx-ux){reference-type="eqref" reference="eq:uNx-ux"} as $$\begin{aligned} \left|u_x(s, \psi^N(s, y')) - u_x\left(s, \psi\left(s, y'\right)\right)\right| &\leq \|u\|_{{C_T \mathcal{C}^{1+\alpha}}} |\psi^N\left(s, y'\right) - \psi\left(s, y'\right)|^{\alpha} \\ &\leq \|u\|_{C_T \mathcal{C}^{1+\alpha}} 2^{\alpha}\kappa^{\alpha} \left\| b - b^N \right\|_{C_{T} \mathcal{C}^{-\beta}}^{\alpha},\end{aligned}$$ where the last inequality uses Lemma [Lemma 13](#lemma:bound_psi-psiN){reference-type="ref" reference="lemma:bound_psi-psiN"}. Since the third term in [\[eq:uNx-ux\]](#eq:uNx-ux){reference-type="eqref" reference="eq:uNx-ux"} appears unchanged in [\[eq:bound_ux_abs\]](#eq:bound_ux_abs){reference-type="eqref" reference="eq:bound_ux_abs"}, this concludes the proof.
◻ In order to bound the $L^1$-norm of the difference $X^N - X$, we need to bound the local time of $Y^N - Y$ at zero. To this end, we recall the definition of the local time of a continuous semimartingale $Z$ and a bound for this local time established in [@deangelisNumericalSchemeStochastic2022]. We define the local time of $Z$ at $0$ by $$\label{eq:local_time_zero} L^0_t (Z) = \lim_{\epsilon \to 0} \frac{1}{2 \epsilon} \int_0^t \mathbbm{1}_{\{| Z_s | \leq \epsilon\}} d \langle Z \rangle_s, \qquad t \geq 0.$$ **Lemma 15** ([@deangelisNumericalSchemeStochastic2022 Lemma 5.1]). *For any $\epsilon \in (0,1)$ and any real-valued, continuous semimartingale $Z$ we have $$\mathbb{E}[L^0_t(Z)] \leq 4 \epsilon - 2 \mathbb{E}\left[ \int_0^t \left( \mathbbm{1}_{\{Z_s \in (0, \epsilon)\}} + \mathbbm{1}_{\{Z_s \geq \epsilon\}} e^{1 - Z_s/\epsilon} \right) dZ_s \right] + \frac{1}{\epsilon} \mathbb{E}\left[ \int_0^t \mathbbm{1}_{\{Z_s > \epsilon\}} e^{1 - Z_s/\epsilon} d \langle Z \rangle_s \right].$$* The next proposition provides a bound for the local time of $Y^N-Y$, where $Y$ and $Y^N$ are the solutions to the SDEs [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"} and [\[eq:YN_def\]](#eq:YN_def){reference-type="eqref" reference="eq:YN_def"}. This is a key bound in the proof of the $L^1$-convergence of $X^N$ to $X$, due to the application of Itô's formula to $|Y^N-Y|$. **Proposition 16**.
*For any $\alpha \in (1/2, 1 - \hat\beta)$ and $N$ such that $\|b^N - b\|_{{C_T\mathcal{C}^{-\beta}}} < 1$ we have $$\label{eq:local_time_YNY_bound} \begin{aligned} \mathbb{E} [L^0_t(Y^N - Y)] &\leq A \, \mathbb{E} \left[ \int_0^t |Y^N_s - Y_s| ds \right] + B \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{{2 \alpha} - 1} + o\left(\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{ {2 \alpha - 1}}\right), \end{aligned}$$ for some constants $A, B > 0$.* *Proof.* This proof follows the same steps as the proof of [@deangelisNumericalSchemeStochastic2022 Proposition 5.4] with differences coming from the spaces that $b$ and $b^N$ belong to. It is provided for the reader's convenience. Recall that $Y_{t}, Y^{N}_{t}$ are strong solutions of [\[eq:Y_def\]](#eq:Y_def){reference-type="eqref" reference="eq:Y_def"} and [\[eq:YN_def\]](#eq:YN_def){reference-type="eqref" reference="eq:YN_def"} respectively, see Remark [Lemma 3](#rm:strong){reference-type="ref" reference="rm:strong"}. Their difference satisfies $$\label{eq:YN-Y} \begin{aligned} Y^N_t - Y_t &= (y^N_0 - y_0) + \lambda \int_0^t ( u^N(s, \psi^N(s, Y^N_s)) - u(s, \psi(s, Y_s)) ) ds \\ &\hspace{11pt}+ \int_0^t ( u_x^N (s, \psi^N(s, Y^N_s)) - u_x (s, \psi(s, Y_s))) dW_s. 
\end{aligned}$$ We apply Lemma [Lemma 15](#lemma:local-time-at-0){reference-type="ref" reference="lemma:local-time-at-0"} to $Y^N_t - Y_t$ for $\epsilon \in (0,1)$: $$\begin{aligned} &\mathbb{E}[L_t^0 (Y^N - Y)] \leq 4 \epsilon\notag\\ & - 2\lambda \mathbb{E} \left[ \int_0^t \left( \mathbbm{1}_{\{Y^N_s - Y_s \in (0, \epsilon)\}} + \mathbbm{1}_{\{Y^N_s - Y_s \geq \epsilon\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \right) \left( u^N(s, \psi^N(s, Y^N_s)) - u(s, \psi(s, Y_s)) \right) ds \right] \label{eq:local_time_diff_u} \\ &+ \frac{1}{\epsilon} \mathbb{E} \left[ \int_0^t \mathbbm{1}_{\{ Y^N_s - Y_s > \epsilon \}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left( u_x^N(s, \psi^N(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right)^2 ds \right], \label{eq:local_time_diff_gradu} \end{aligned}$$ where the expectation of the integral with respect to the Brownian motion $(W_t)$ is zero thanks to the fact that $u_x$ and $u_x^N$ are bounded uniformly in $N$. Notice that if $Y^N_s - Y_s \geq \epsilon$, then $e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \leq 1$. Hence, [\[eq:local_time_diff_u\]](#eq:local_time_diff_u){reference-type="eqref" reference="eq:local_time_diff_u"} is bounded above by $$4\lambda \mathbb{E} \left[ \int_0^t \big|u(s, \psi(s, Y_s)) - u^N(s, \psi^N(s, Y^N_s))\big| ds \right] \le 4 \lambda \left( 2\kappa \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}} t + \mathbb{E}\left[\int_0^t\left|Y^N_s - Y_s\right| ds \right] \right),$$ where the last inequality is by Lemma [Lemma 14](#lemma:uN-n_bound_for_integral){reference-type="ref" reference="lemma:uN-n_bound_for_integral"}.
Now for [\[eq:local_time_diff_gradu\]](#eq:local_time_diff_gradu){reference-type="eqref" reference="eq:local_time_diff_gradu"}, we use again the observation that $\mathbbm{1}_{\{ Y^N_s - Y_s > \epsilon \}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \le 1$, the estimate [\[eq:bound_ux_abs\]](#eq:bound_ux_abs){reference-type="eqref" reference="eq:bound_ux_abs"} from Lemma [Lemma 14](#lemma:uN-n_bound_for_integral){reference-type="ref" reference="lemma:uN-n_bound_for_integral"} choosing $\alpha' \in (\alpha, 1-\hat \beta)$, and the inequality $(x_1 + x_2 + x_3)^2 \leq 3(x_1^2 + x_2^2 + x_3^2)$ to get the bound $$% \label{eq:bound_integral_graduNu} \begin{aligned} & \frac{1}{\epsilon} \mathbb{E} \int_0^t \left( 3 \kappa^2 \left\| b - b^N \right\|_{{C_{T} \mathcal{C}^{-\beta}}}^2 + 3 \cdot 2^{2\alpha'} \kappa^{2\alpha'} \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2\alpha'} \left\|u\right\|_{C_{T} \mathcal{C}^{1 + \alpha'}}^2 \right)ds \\ &\qquad+ \frac{1}{\epsilon} \mathbb{E} \int_0^t 3 \mathbbm{1}_{\left\{Y^N_s - Y_s > \epsilon\right\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s))\right|^2 ds \\ &\leq \frac{3 t}{\epsilon} \left\| b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}} \left( \kappa^2 \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}} + (2 \kappa)^{2 \alpha'} \left\|u\right\|_{C_{T} \mathcal{C}^{1 + \alpha'}}^2 \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2 \alpha' - 1} \right) \\ &\qquad+ \frac{3}{\epsilon} \mathbb{E} \left( \int_0^t \mathbbm{1}_{\left\{ Y^N_s - Y_s > \epsilon \right\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right|^2 ds \right). 
\end{aligned}$$ Putting the above estimates together yields the following bound for the expectation of the local time: $$\label{eqn:localtime} \begin{aligned} \mathbb{E}[L_t^0 (Y^N - Y)] &\leq 4 \epsilon + 4 \lambda \left( 2\kappa \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}} t + \mathbb{E}\left[\int_0^t\left|Y^N_s - Y_s\right| ds \right] \right)\\ &\hspace{11pt}+\frac{3t}{\epsilon} \left\| b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}} \left( \kappa^2 \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}} + (2 \kappa)^{2 \alpha'} \left\|u\right\|_{C_{T} \mathcal{C}^{1 + \alpha'}}^2 \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2 \alpha' - 1} \right) \\ &\hspace{11pt}+ \frac{3}{\epsilon} \mathbb{E} \left[ \int_0^t \mathbbm{1}_{\left\{ Y^N_s - Y_s > \epsilon \right\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right|^2 ds \right]. \end{aligned}$$ We take $\epsilon = \|b^N - b\|_{{C_T\mathcal{C}^{-\beta}}} < 1$ and choose $\zeta \in (0,1)$ such that $\alpha'\zeta > 1/2$.
Recall that $u_x(s, \cdot)$ is $\alpha'$-Hölder continuous with the constant $\|u\|_{C_T\mathcal{C}^{1+\alpha'}}$, as $\alpha'<1-\hat\beta$ (see Section [2.4](#subsec:soln_sde){reference-type="ref" reference="subsec:soln_sde"}), and $\psi(s, \cdot)$ is $2$-Lipschitz, so $$\left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right| \le \|u\|_{C_T\mathcal{C}^{1+\alpha'}} 2^{\alpha'} |Y^N_s - Y_s|^{\alpha'}.$$ This bound and the observation that $\epsilon^\zeta > \epsilon$ yield $$\begin{aligned} &\frac{3}{\epsilon} \mathbb{E}\left[ \int_0^t \mathbbm{1}_{\left\{ Y^N_s - Y_s > \epsilon \right\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right|^2 ds \right]\\ &\le \frac{3}{\epsilon} \mathbb{E}\left[ \int_0^t \mathbbm{1}_{\left\{ \epsilon < Y^N_s - Y_s \le \epsilon^\zeta \right\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right|^2 ds \right]\\ &+ \frac{3}{\epsilon} \mathbb{E}\left[ \int_0^t \mathbbm{1}_{\left\{ Y^N_s - Y_s > \epsilon^\zeta \right\}} e^{1 - \frac{Y^N_s - Y_s}{\epsilon}} \left| u_x(s, \psi(s, Y^N_s)) - u_x(s, \psi(s, Y_s)) \right|^2 ds \right]\\ &\le \frac{3}{\epsilon} t \|u\|^2_{C_T\mathcal{C}^{1+\alpha'}} 2^{2\alpha'} \epsilon^{2\alpha'\zeta} + \frac{3}{\epsilon} t e^{1-\epsilon^{\zeta-1}},\end{aligned}$$ where in the last inequality we used that $\|u_x\|_{L^\infty} \le 1/2$.
We insert this bound into [\[eqn:localtime\]](#eqn:localtime){reference-type="eqref" reference="eqn:localtime"} and recall that $\epsilon = \|b^N - b\|_{{C_T\mathcal{C}^{-\beta}}}$ to obtain $$\begin{aligned} & \mathbb{E}[L_t^0 (Y^N - Y)]\\ &\leq (4 + 8 \lambda \kappa t + 3\kappa^2 t) \left\|b^N - b\right\|_{\textcolor{purple}{C_{T} \mathcal{C}^{-\beta}}} + 4 \lambda \left( \mathbb{E}\left[\int_0^t\left|Y^N_s - Y_s\right| ds \right] \right)\\ &\hspace{11pt}+3t \left( (2 \kappa)^{2 \alpha'} \left\|u\right\|_{C_{T} \mathcal{C}^{1 + \alpha'}}^2 \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2 \alpha' - 1} \right) \\ &\hspace{11pt}+ 3 t \|u\|^2_{C_T\mathcal{C}^{1+\alpha'}} 2^{2\alpha'} \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2\alpha'\zeta - 1} + \frac{3t}{\left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}} e^{1-\left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{\zeta-1}}\\ &\le 3 T \|u\|^2_{C_T\mathcal{C}^{1+\alpha'}} 2^{2\alpha'} \left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2\alpha'\zeta - 1} + 4 \lambda \left(\mathbb{E}\left[\int_0^T\left|Y^N_s - Y_s\right| ds \right] \right) + o \left(\left\|b^N - b\right\|_{{C_{T} \mathcal{C}^{-\beta}}}^{2\alpha'\zeta - 1}\right).\end{aligned}$$ Taking $\zeta = \alpha / \alpha'$ completes the proof. ◻ *Proof of Proposition [Proposition 6](#prop:reg_to_original){reference-type="ref" reference="prop:reg_to_original"}.* Our arguments are inspired by the proof of [@deangelisNumericalSchemeStochastic2022 Proposition 3.1]. We fix $\rho$ that satisfies the conditions of Lemma [Lemma 11](#lemma:diff_u_uN){reference-type="ref" reference="lemma:diff_u_uN"} and denote by $\kappa$ the constant from Lemma [Lemma 12](#lemma:diff_uN_graduN){reference-type="ref" reference="lemma:diff_uN_graduN"}.
By the definition of $\psi, \psi^N$, Lemma [Lemma 13](#lemma:bound_psi-psiN){reference-type="ref" reference="lemma:bound_psi-psiN"} and the fact that $\psi$ is 2-Lipschitz, we have $$\label{eq:|XN-X|} \begin{aligned} |X^N_t - X_t| &= | \psi^N(t, Y_t^N) - \psi(t, Y_t)| \leq | \psi^N(t, Y_t^N) - \psi(t, Y_t^N)| + | \psi(t, Y_t^N) - \psi(t, Y_t)|\\ &\leq 2 \kappa \| b^N - b \|_{C_{T} \mathcal{C}^{-\beta}} + 2 |Y^N_t - Y_t|. \end{aligned}$$ For the term $|Y^N - Y|$ we use the Itô-Tanaka formula and take expectations of both sides: $$%\label{eq:EYNY} \begin{aligned} \mathbbm{E}\left[|Y^N_t - Y_t|\right] &= \mathbbm{E}|Y^N_0 - Y_0| + \mathbbm{E}\left[\frac{1}{2} L^0_t (Y^N - Y)\right] \\ &\hspace{11pt}+ \lambda \mathbbm{E}\left[\int_0^t \mathop{\mathrm{sgn}}(Y^N_s - Y_s)(u^N(s, \psi^N(s, Y^N_s)) - u(s, \psi(s, Y_s))) ds\right], \end{aligned}$$ where the stochastic integral disappears due to the boundedness of $u_x$ and $u^N_x$. For the first term we use that $Y_0 = x + u(0, x)$, $Y^N_0 = x + u^N(0, x)$ and Lemma [Lemma 11](#lemma:diff_u_uN){reference-type="ref" reference="lemma:diff_u_uN"} to conclude that $$|u^N(0,x)-u(0,x)| \le \|u^N - u\|_{C_{T} \mathcal{C}^{1+\alpha}}^{(\rho)} = o\left(\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{2 \hat\alpha - 1}\right),$$ where we used that $\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}} = o\left(\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{2 \hat\alpha - 1}\right)$, because $2\hat\alpha - 1 < 1$ and $\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}} < 1$ as assumed in the statement of the proposition. The second term is bounded by Proposition [Proposition 16](#prop:bound_local_time_sde){reference-type="ref" reference="prop:bound_local_time_sde"}. For the third term we employ a bound from Lemma [Lemma 14](#lemma:uN-n_bound_for_integral){reference-type="ref" reference="lemma:uN-n_bound_for_integral"} and the fact that $|\mathop{\mathrm{sgn}}(x)| \leq 1$.
In summary, we obtain $$\mathbbm{E}\left[|Y^N_t - Y_t|\right] \leq o\left(\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{2 \hat\alpha - 1}\right) + (A/2 + \lambda) \mathbb{E}\left[ \int_0^t |Y^N_s - Y_s| ds \right] + B/2 \left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{2 \hat\alpha - 1}.$$ Finally, using Gronwall's lemma we get the following bound $$%\label{eq:gronwallYNY} \mathbbm{E}\left[|Y^N_t - Y_t|\right] \leq B/2 \| b^N - b \|^{2 \hat\alpha - 1}_{C_{T} \mathcal{C}^{-\beta}} e^{(A/2+\lambda)t} + o\left(\left\|b^N - b\right\|_{C_{T} \mathcal{C}^{-\beta}}^{2 \hat\alpha - 1}\right).$$ We take expectation of both sides of [\[eq:\|XN-X\|\]](#eq:|XN-X|){reference-type="eqref" reference="eq:|XN-X|"}, take the supremum over $t \in [0, T]$ and insert the above bound to conclude. ◻ # Numerical Implementation {#sec:numerics} In this section we describe an implementation of our numerical scheme and analyse the results obtained. [ Our implementation, written in the Python programming language, can be found in [@chaparrojaquezImplementationNumericalMethods2023].]{style="color: violet"} We recall that the numerical implementation proceeds in two steps. First we approximate the drift $b$ with $b^N$ as in [\[eqn:X\^N\]](#eqn:X^N){reference-type="eqref" reference="eqn:X^N"}. Then we apply the classical Euler-Maruyama scheme [\[eq:m_em_approx\]](#eq:m_em_approx){reference-type="eqref" reference="eq:m_em_approx"} to approximate the SDE with drift $b^N$. For the numerical example we will consider a time-homogeneous drift, i.e. $b \in \mathcal C^{(-\hat\beta)+}$. Since the drift $b$ is a Schwartz distribution in $\mathcal C^{(-\hat \beta)+}$, producing a numerical approximation of it poses some challenges.
We cannot simply discretise $b$ and then convolve it with the heat kernel to get $b^N$ as in [\[eq:bN\]](#eq:bN){reference-type="eqref" reference="eq:bN"}, see also [\[eq:heat_semigroup\]](#eq:heat_semigroup){reference-type="eqref" reference="eq:heat_semigroup"}, because the discretisation of a distribution is not meaningful. Instead, we use the fact that an element of $\mathcal C^{(-\hat\beta)+}$ can be obtained as the distributional derivative of a function $h$ in $\mathcal C^{(-\hat\beta+1)+}$, and that the derivative commutes with the heat kernel as explained in [\[eq:comm_sem_der\]](#eq:comm_sem_der){reference-type="eqref" reference="eq:comm_sem_der"}. Indeed, since $b \in \mathcal{C}^{-\hat\beta+\varepsilon}$ for some $\varepsilon > 0$ and $-\hat\beta\in (-1/2,0)$, we have $\gamma:=-\hat\beta+ \varepsilon+1>0$, so that $h\in \mathcal{C}^{\gamma}$ is a genuine function. Using [\[eq:comm_sem_der\]](#eq:comm_sem_der){reference-type="eqref" reference="eq:comm_sem_der"} allows us to discretise the function $h$ first, then convolve it with the heat kernel, which has a smoothing effect, and finally take the derivative. Without loss of generality we will pick a function $h$ that is constant outside of a compact set, which implies that the distribution $h'$ is supported on the same compact set. For example, one can choose $L>0$ large enough so that the solution $X_t$ of the SDE with the drift $b\mathbbm 1_{[-L, L]}$ replacing $b$ will, with large probability, stay within the interval $[-L, L]$ for all times $t\in[0,T]$, so that cutting the drift $b$ off outside the compact set $[-L, L]$ is numerically irrelevant. The simplest example of a drift $b \in \mathcal{C}^{\gamma-1}$, $\gamma \in (1/2,1)$, would be given by the derivative of the locally $\gamma$-Hölder continuous function $|x|^\gamma$, smoothly cut outside the compact set $[-L, L]$. However, this function fails to be differentiable only at the single point $x=0$, so its roughness is confined to one point.
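The commuting relation [\[eq:comm_sem_der\]](#eq:comm_sem_der){reference-type="eqref" reference="eq:comm_sem_der"} can be sanity-checked numerically on a smooth example. The sketch below is our own illustration (not part of the implementation in [@chaparrojaquezImplementationNumericalMethods2023]); it takes $h=\sin$, for which the Gaussian smoothing is known in closed form, $p_t \ast \sin = e^{-t/2}\sin$, so that $p_t \ast h' = \frac{d}{dx}(p_t \ast h) = e^{-t/2}\cos$.

```python
import numpy as np

def smooth(f, t, x, z):
    """Trapezoidal quadrature of (p_t * f)(x) = int f(x - z) p_t(z) dz."""
    p = np.exp(-z ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    y = f(x - z) * p
    return (z[1] - z[0]) * (y.sum() - 0.5 * (y[0] + y[-1]))

t, x = 0.1, 0.7
z = np.linspace(-6.0, 6.0, 40001)
lhs = smooth(np.cos, t, x, z)      # p_t * h'  with  h = sin
rhs = np.exp(-t / 2) * np.cos(x)   # d/dx (p_t * sin), known in closed form
```

Both quantities agree up to quadrature error: smoothing the derivative equals differentiating the smoothed function.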
Instead, for our numerical implementation we consider a function $h\in \mathcal C^\gamma$ that fully captures the rough nature of the drift $b=h'$. This can be achieved if the function $h$ has a 'rough' behaviour at (almost) every point of the interval $[-L,L]$. In this section, we take as $h$ a transformation of a trajectory of a fractional Brownian motion (fBm)[^2] $(B^H_x)_{x\geq 0}$. Indeed, it is known that $\mathbb P$-almost all paths of fBm $B^H$ are locally $\gamma$-Hölder continuous for any $\gamma < H$, see [@biaginiStochasticCalculusFractional2008 Section 1.6], while its paths are almost surely nowhere differentiable, see [@biaginiStochasticCalculusFractional2008 Section 1.7]. We construct a compactly supported function $h$ as follows. We take a path $(B_x^H (\omega))_{ x \in [0, 2L]}$ with Hurst index $H = - \hat\beta+1 +2 \varepsilon \in(1/2,1)$ for some small $\varepsilon>0$, which is thus locally $\gamma$-Hölder continuous with $\gamma = - \hat\beta+1 + \varepsilon$. The constraint $H\in(1/2,1)$ is consistent with $\hat \beta \in (0,1/2)$, as needed in Assumption [Assumption 1](#ass:b_betahat){reference-type="ref" reference="ass:b_betahat"}. We define the function $g:\mathbb{R}\to \mathbb{R}$ as $$g(x) = \big(B^H_x - \frac{B^H_{2L}}{2L} x\big) \mathbbm{1}_{\{x \in [0,2L]\}} (x),$$ which ensures that $g(0) = g(2L) = 0$. This transformation, inspired by the so-called Brownian bridge, ensures that $g$ is continuous and preserves the regularity of the fBm $B^H$. Finally, for convenience, we translate $g$ so that it is supported in $[-L,L]$ rather than $[0,2L]$. This is done for the sake of symmetry, since we choose to start our SDE from an initial condition close to $0$. We choose $L$ large enough so that the path $(X_t)$ stays in the strip $[-L, L]$ with high probability up to our terminal time $T$.
In summary, the function $h$ is constructed as $$h(x) = \Big( B^H_{x+L}(\omega) - \frac{B^H_{2L}(\omega)}{2L} (x+L) \Big) \mathbbm 1_{\{x \in [-L,L]\}} (x), \label{eq:fBbridge}$$ and we have $h \in \mathcal C^{\gamma}$ with $\gamma =-\hat \beta+ \varepsilon+1$, so that $b := h' \in \mathcal C^{(- \hat \beta)+ }$, and both are supported on $[-L, L]$. By a slight abuse of notation we denote $B^H(x) =h(x)$ so that $b = (B^H)'$. To compute $b^N$ we apply the semigroup $P_{1/N}$ to $b$, as in [\[eq:bN\]](#eq:bN){reference-type="eqref" reference="eq:bN"}, which is equivalent to a convolution with the heat kernel $p_{1/N}$. Since the derivative commutes with the convolution, as recalled in [\[eq:comm_sem_der\]](#eq:comm_sem_der){reference-type="eqref" reference="eq:comm_sem_der"}, we get $$\label{eq:numerical_integration} b^N(x) := (P_{1/N} b)(x) = \left(p_{1/N} \ast (B^H)'\right)(x) = \left(p_{1/N}' \ast B^H\right) (x) = \int_{-\infty}^{\infty} p_{1/N}' (y) B^H(x - y) dy.$$ As the derivative of the heat kernel is $p_{1/N}' (y) = - \frac{y}{1/N} p_{1/N} (y)$, we have $$\label{eq:rep_der_hk} b^N(x) = - \int_{-\infty}^{\infty} B^H(x-y) \frac y {1/N} p_{1/N} (y) dy .$$ In order to approximate the above integral numerically, we create a uniform discretisation of the interval $[-L, L]$ with $2M+1$ points, denoted by $\mathfrak{M}= \{ x_{-M}, ..., x_{M} \}$, with the mesh size $\delta: = \frac L M$.
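The construction [\[eq:fBbridge\]](#eq:fBbridge){reference-type="eqref" reference="eq:fBbridge"} takes only a few lines to implement. The sketch below is our own (function names are hypothetical, not the paper's implementation) and uses plain Cholesky sampling of the fBm, which is slower than the specialised generators a production code would use but easy to verify:

```python
import numpy as np

def fbm_cholesky(H, points, rng):
    """Sample fBm (B^H_x) at strictly positive points via a Cholesky
    factorisation of the covariance
    E[B^H_x B^H_y] = (|x|^{2H} + |y|^{2H} - |x - y|^{2H}) / 2."""
    x, y = points[:, None], points[None, :]
    cov = 0.5 * (np.abs(x) ** (2 * H) + np.abs(y) ** (2 * H)
                 - np.abs(x - y) ** (2 * H))
    chol = np.linalg.cholesky(cov + 1e-8 * np.eye(len(points)))
    return chol @ rng.standard_normal(len(points))

def bridge_h(H, L, M, rng):
    """h of eq. (fBbridge): fBm on [0, 2L], pinned to zero at both ends
    and shifted so that it is supported on [-L, L]."""
    xs = np.linspace(-L, L, 2 * M + 1)   # the grid \mathfrak{M}
    s = xs + L                           # corresponding points in [0, 2L]
    B = np.zeros_like(s)
    B[1:] = fbm_cholesky(H, s[1:], rng)  # B^H_0 = 0 by definition
    return xs, B - (B[-1] / (2 * L)) * s # subtract linear drift: h(+-L) = 0
```

The subtraction of the linear drift reproduces the bridge transformation, so the returned vector vanishes at both endpoints of $[-L, L]$.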
We sample the fBm $B^H(x)$ at $x \in \mathfrak{M}$ and extend it to the real line as a piecewise constant function $\hat B^H$ as follows: $$\label{eq:disc_fbm} \hat{B}^H(z) = \sum_{j=-M}^{M} B^H(x_j) \mathbbm{1}_{\{z \in [x_j-\frac{\delta}{2}, x_{j}+\frac{\delta}{2})\}}, \qquad z \in \mathbb{R}.$$ Inserting [\[eq:disc_fbm\]](#eq:disc_fbm){reference-type="eqref" reference="eq:disc_fbm"} into [\[eq:rep_der_hk\]](#eq:rep_der_hk){reference-type="eqref" reference="eq:rep_der_hk"} and performing the change of variable $z = x_i - y$ we obtain a numerical approximation of $b^N$ at any point $x_i \in \mathfrak{M}$: $$\label{eq:approx_drift_3} \begin{split} b^N(x_i) &\approx - \int_{-\infty}^{\infty} \hat B^H(x_i-y) \frac{y}{1/N} p_{1/N} (y) dy\\ &= - \int_{-\infty}^{\infty} \sum_{j=-M}^{M} B^H(x_j) \mathbbm{1}_{\{z \in [x_j-\frac{\delta}{2}, x_{j}+\frac{\delta}{2})\}} \frac{x_i - z}{1/N} p_{1/N} (x_i - z) dz \\ &= - \sum_{j=-M}^{M} B^H(x_j) \int_{x_j-\delta/2}^{x_{j}+\delta/2} \frac{x_i - z}{1/N} p_{1/N} (x_i - z) dz \\ &= - \sum_{j=-M}^{M} B^H(x_j) \int_{-\delta/2}^{\delta/2} \frac{x_i - x_j - z'}{1/N} p_{1/N} (x_i - x_j - z') dz', \end{split}$$ where we performed a second change of variable $z' = z - x_j$ in the last equality. Let us denote the integral in the last line of [\[eq:approx_drift_3\]](#eq:approx_drift_3){reference-type="eqref" reference="eq:approx_drift_3"} as $$\mathcal{I}^N(y) := \int_{-\delta/2}^{\delta/2} \frac{y - z'}{1/N} p_{1/N} (y - z') dz', \label{eq:integral_delta_half}$$ so that equation [\[eq:approx_drift_3\]](#eq:approx_drift_3){reference-type="eqref" reference="eq:approx_drift_3"} reads $$\label{eq:discrete_convolution} b^N(x_i) \approx - \sum_{j=-M}^{M} B^H(x_j) \mathcal I^N (x_i - x_j) =: - ( B^H \ast \mathcal I^N) (x_i) ,$$ where $\ast$ denotes the discrete convolution between vectors $(B^H(x_i))_{i=-M}^M$ and $(\mathcal{I}^N(x_i))_{i=-2M}^{2M}$.
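Incidentally, $\mathcal I^N$ from [\[eq:integral_delta_half\]](#eq:integral_delta_half){reference-type="eqref" reference="eq:integral_delta_half"} admits a closed form: the integrand $\frac{y-z'}{1/N}\,p_{1/N}(y-z')$ is exactly the $z'$-derivative of $p_{1/N}(y-z')$, so the integral telescopes to $p_{1/N}(y-\delta/2)-p_{1/N}(y+\delta/2)$. A sketch of this observation and of the discrete convolution [\[eq:discrete_convolution\]](#eq:discrete_convolution){reference-type="eqref" reference="eq:discrete_convolution"} (our own, with hypothetical function names; $t = 1/N$):

```python
import numpy as np

def heat_kernel(t, y):
    """Gaussian heat kernel p_t with variance t."""
    return np.exp(-y ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def I_N(t, y, delta):
    """I^N(y): the integrand ((y - z')/t) p_t(y - z') is the exact
    z'-derivative of p_t(y - z'), so the integral telescopes."""
    return heat_kernel(t, y - delta / 2) - heat_kernel(t, y + delta / 2)

def drift_bN(BH, t, delta):
    """b^N(x_i) ~ -sum_j B^H(x_j) I^N(x_i - x_j) on a uniform grid of
    mesh delta, computed with one full discrete convolution."""
    P = len(BH)
    lags = delta * np.arange(-(P - 1), P)   # all differences x_i - x_j
    full = np.convolve(BH, I_N(t, lags, delta))
    return -full[P - 1: 2 * P - 1]          # central part, aligned with grid
```

The closed form avoids any quadrature for $\mathcal I^N$; the final slice picks out the entries of the full convolution corresponding to $\sum_j B^H(x_j)\,\mathcal I^N(x_i - x_j)$ for $x_i$ on the grid.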
The latter vector needs to be formally defined for a $\delta$-discretisation of the interval $[-2L, 2L]$; however, in practice, the values of $\mathcal{I}^N(y)$ decrease quickly to $0$ as $|y|$ increases (this is illustrated in Figure [1](#fig:df){reference-type="ref" reference="fig:df"} for two values of $1/N$), so it is enough to use the values of $(\mathcal{I}^N(x_i))$ for $x_i \in \mathfrak{M}$. We will use the above approximation [\[eq:discrete_convolution\]](#eq:discrete_convolution){reference-type="eqref" reference="eq:discrete_convolution"} in our numerical computations. ![From left to right: $\mathcal I^N (y)$ for $y \in [-5, 5]$ and $1/N =0.199494, 0.068111$ respectively. ](Figures/integral_grid.pdf){#fig:df width="95%"} At this stage a few remarks regarding the size of $\mathfrak{M}$ are in order. The drift $b^N$ is implemented with a discrete convolution given in equation [\[eq:discrete_convolution\]](#eq:discrete_convolution){reference-type="eqref" reference="eq:discrete_convolution"}, where one of the convoluting terms is $\mathcal I^N$, which is depicted in Figure [1](#fig:df){reference-type="ref" reference="fig:df"}. The peaks of $\mathcal I^N$ depend on the magnitude of $1/N$; indeed, they become higher and thinner as the variance $1/N$ of the heat kernel $p_{1/N}$ decreases. We choose $N= N(m) = m^{\eta}$ for some $\eta>0$ given in equation [\[eq:eta\]](#eq:eta){reference-type="eqref" reference="eq:eta"}, so as the number of steps $m$ in the Euler scheme increases, the variance $1/N(m) = m^{-\eta}$ decreases, leading to a vector $\mathcal I^{N(m)}$ whose entries are mostly zero up to numerical precision. If the discretisation $\mathfrak{M}$ is too coarse compared to $1/N(m)$, the vector $\mathcal I^{N(m)}$ does not retain enough information. To avoid this issue we have to make sure that there are sufficiently many points in $\mathfrak{M}$, i.e. that $2M$ is large enough compared to the size of the support $[-L, L]$.
More specifically, we require that the distance $\delta$ between points in the grid is less than the standard deviation of the heat kernel $\sqrt{1/N(m)}$, i.e. $\delta < \sqrt{1/N(m)} = m^{-\eta/2}$ for every $m$ considered in numerical computations, which leads to $$\label{eq:lb_A} 2M>2Lm^{\eta/2}.$$ As an example, let us consider a discretisation of the interval $[-5, 5]$ (hence $L=5$) and use $m =2^{12}$ points in the Euler-Maruyama scheme. We need $2M+1> 2Lm^{\eta/2}+1 = 10 \cdot 2^{6\eta}+1$ points in $\mathfrak{M}$, where the value $\eta$ depends on $\hat\beta$ and is given explicitly in equation [\[eq:eta\]](#eq:eta){reference-type="eqref" reference="eq:eta"}. The smallest admissible number of points $2M+1$ of the discretisation $\mathfrak{M}$ for varying $\hat\beta$ is given in Table [1](#tab:A_beta){reference-type="ref" reference="tab:A_beta"}. [\[tab:A_beta\]]{#tab:A_beta label="tab:A_beta"} $\hat \beta$ $10^{-6}$ $1/16$ $1/8$ $1/4$ $1/2$ ---------------------------------- ----------- ---------- ---------- ---------- ---------- $1/N(m) = m^{-\eta(\hat \beta)}$ 0.001288 0.001398 0.001489 0.001768 0.003906 $2M+1$ 279 269 261 239 161 : Minimum number of points $2M+1$ needed in the discretisation $\mathfrak{M}$ when $m=2^{12}$ and $L=5$. This varies according to the variance $1/N$, which in turn is a function of $\hat \beta$. \ We now summarise the procedure to compute numerically an approximation of the drift $b^{N(m)}$. 1. Fix $\hat \beta, L$ and the number $m$ of steps of the Euler scheme. Compute the smallest integer $M$ that satisfies [\[eq:lb_A\]](#eq:lb_A){reference-type="eqref" reference="eq:lb_A"}. Define the discretisation $\mathfrak{M}$ and the corresponding mesh size $\delta$. 2. Simulate a single path of fBm on the interval $[0,2L]$ with mesh size $\delta$ and with a given Hurst parameter $H = -\hat \beta+1+\epsilon$ for some small $\epsilon > 0$. 3.
Transform the path of fBm into a bridge on $[-L, L]$ by applying the transformation [\[eq:fBbridge\]](#eq:fBbridge){reference-type="eqref" reference="eq:fBbridge"}, to get a vector $B^H(x_i)$, for $x_i \in \mathfrak{M}$. 4. Compute numerically the integral $\mathcal I^{N(m)} (x_i)$ for $x_i \in \mathfrak{M}$. 5. Perform the discrete convolution $-(B^H \ast \mathcal I^{N(m)})$ as in [\[eq:discrete_convolution\]](#eq:discrete_convolution){reference-type="eqref" reference="eq:discrete_convolution"} to approximate numerically $b^{N(m)}(x_i)$ for $x_i \in \mathfrak{M}$. 6. Extend $b^{N(m)}(x)$ for all $x\in[-L, L]$ by linear interpolation. Once we have a numerical approximation of the drift coefficient, the remaining step is to apply the standard Euler-Maruyama scheme. To calculate the empirical rate of convergence of the numerical scheme we must have approximations with an increasing number of steps $m$, as well as a proxy of the real solution, since the real solution is not known in closed form. The strong error of the scheme is calculated by Monte Carlo approximation of the $L^1$ norm of the difference between those approximations and the proxy at time $T$. The procedure reads as follows: 1. Choose a 'large' $m$ for the proxy of the real solution, and $m_i \ll m$, $i=0, \ldots, I$, for the approximations and such that each $m_i$ divides $m$. Choose a number of sample paths $Q\in\mathbb N$ sufficiently large. 2. Denote by $\Delta t_i= T/m_i$ the time-step for the approximated solutions, where $T$ is the terminal time. 3. Define $\mathfrak{M}$ with $2M+1$ points, where $M$ and $m$ satisfy [\[eq:lb_A\]](#eq:lb_A){reference-type="eqref" reference="eq:lb_A"}. Generate one fBm path on the discrete grid $\mathfrak{M}$. 4. Run the Euler-Maruyama scheme for the proxy solution and for the approximated solutions up to time $T$, with the same $Q$ Brownian motion paths. 5.
Compute the strong error between the proxy solution corresponding to $m$ and the approximations corresponding to $m_i$, by calculating a Monte Carlo average of the absolute differences between computed solutions at time $T$ across the $Q$ sample paths. Denote these approximations of the strong error by $\epsilon_i$, $i=0, \ldots, I$. 6. As we expect $\epsilon_i \approx c {m_i}^{-r} = c (\Delta t_i)^r$, where $r$ is the convergence rate, we compute the empirical rate $r$ by performing a linear regression of $\log_{10}(\epsilon_i)$ on $\log_{10}(\Delta t_i)$, $i=0, \ldots, I$. ![Examples of empirical convergence rates for $\hat \beta = \varepsilon, 1/8, 1/4, 1/2-\varepsilon$, with $\varepsilon=10^{-6}$, obtained running an Euler scheme with $Q=10^4$ sample paths, $T=1$, $m=2^{15}$ points for the proxy of the real solution and $m_i = 2^{8+i}$ for $i=0, \ldots, 4$, for the approximated solutions. The empirical convergence rate $r$ for each $\hat \beta$ is the slope of its corresponding line, which is plotted in a log-log graph. ](rates_beta.pdf){#fig:rates width="\\textwidth"} In Figure [2](#fig:rates){reference-type="ref" reference="fig:rates"} we plot the empirical convergence rates we obtained for different choices of the smoothness parameter $\hat \beta$, that is for $\hat \beta = \varepsilon, 1/8, 1/4, 1/2-\varepsilon$, with $\varepsilon=10^{-6}$, and with the rest of the parameters as indicated in the caption of Figure [2](#fig:rates){reference-type="ref" reference="fig:rates"}. Note that as $\hat \beta$ grows and the drift becomes rougher, the empirical convergence rate becomes smaller and at the same time the strong error increases. Note that the empirical convergence rate is close to $1/2$ when $\hat \beta \approx 0$, which agrees with the theoretical results obtained by [@dareiotisQuantifyingConvergenceTheorem2021] in the realm of measurable functions, see also [@butkovskyApproximationSDEsStochastic2021].
Indeed, they show a strong convergence rate of $1/2+\alpha/2$ when $b\in C^{\alpha}$ for $\alpha\in[0,1)$, which reduces to $1/2$ for measurable functions ($\alpha=0$). On the other hand, for $\hat\beta \to 1/2$ we have an empirical convergence rate close to $1/4$. Finally, we performed a further experiment to better compare the empirical rate with the theoretical results. Since the drift of the SDE is obtained from a single path of an fBm, which introduces additional randomness, we decided to run the algorithm for 50 different paths, for each value of $\hat \beta$, and then we computed the average of the empirical convergence rates as well as its 95% confidence interval. We compared this with the theoretical rate obtained in Theorem [Theorem 7](#th:convergence_rate_es){reference-type="ref" reference="th:convergence_rate_es"} and with the conjecture that the rate should be $1/2 - \hat \beta/2$. The latter would be the natural extension of the results of [@dareiotisQuantifyingConvergenceTheorem2021] if they could be extended to the case of distributions, in particular with $-\hat \beta \in (-1/2,0)$, which is the case we treat here. We collected the results in Table [2](#table:rates){reference-type="ref" reference="table:rates"} below and also plotted them in Figure [3](#fig:empirical_rate){reference-type="ref" reference="fig:empirical_rate"}. This experiment strongly suggests that our theoretical result is not optimal, and that the convergence rate indeed could be $1/2 - \hat \beta/2$. Further studies are needed to prove or disprove this conjecture.
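The rate estimation in step 6 amounts to a least-squares fit in log-log coordinates. A minimal sketch (our own, with synthetic error values rather than the data of Table 2):

```python
import numpy as np

def empirical_rate(dts, errors):
    """Convergence rate r in the model error ~ c * dt^r, estimated as the
    slope of a linear regression of log10(error) on log10(dt)."""
    slope, _intercept = np.polyfit(np.log10(dts), np.log10(errors), 1)
    return slope

m_i = 2 ** np.arange(8, 13)   # numbers of Euler steps, m_i = 2^{8+i}
dts = 1.0 / m_i               # time steps (terminal time T = 1)
errs = 3.0 * dts ** 0.4       # synthetic, exactly log-linear errors, r = 0.4
```

On real Monte Carlo errors the fit is only approximate, and the estimated slope carries the statistical noise discussed above, which is why averaging over several fBm paths is useful.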
[\[table:rates\]]{#table:rates label="table:rates"} $\hat \beta$ $\varepsilon = 10^{-6}$ $1/16$ $1/8$ $1/4$ $3/8$ $7/16$ $1/2 - \varepsilon$ --------------------------- ------------------------- --------- --------- --------- --------- --------- --------------------- Average of empirical rate 0.50814 0.48542 0.44063 0.36960 0.30478 0.29048 0.28275 $1/2 - \hat \beta/2$ 0.49999 0.46875 0.43750 0.37500 0.31250 0.28125 0.25000 Theoretical rate 0.16666 0.13243 0.10000 0.04545 0.01111 0.00270 0.00000 : Average of empirical convergence rates obtained using 50 Euler-Maruyama approximations with $10^4$ sample paths for each $\hat \beta$, the conjectured rate of $1/2-\hat \beta/2$ and the theoretical rate from Theorem [Theorem 7](#th:convergence_rate_es){reference-type="ref" reference="th:convergence_rate_es"}. All values rounded to 5 decimal places. Same values are plotted in Figure [3](#fig:empirical_rate){reference-type="ref" reference="fig:empirical_rate"}. \ ![Plot of the average of empirical convergence rates over 50 Euler-Maruyama approximations with $10^4$ sample paths for each $\hat \beta$. The plot also contains the 95% confidence interval for the average, the conjectured rate of $1/2 - \hat \beta/2$ and the theoretical rate obtained in Theorem [Theorem 7](#th:convergence_rate_es){reference-type="ref" reference="th:convergence_rate_es"}. ](rates.pdf){#fig:empirical_rate width="\\textwidth"} [^1]: We borrow here the term virtual solution from [@flandoliMultidimensionalStochasticDifferential2017], where the authors use an analogous equation for $Y$ to define the solution of an SDE with distributional drift $b$ in a fractional Sobolev space. In [@issoglioSDEsSingularCoefficients2023] the authors instead define the solution via a martingale problem, but the equivalence with the notion of virtual solutions follows from their Theorem 3.9.
[^2]: A fractional Brownian motion $(B^H_x)_{x\geq 0}$ with Hurst parameter $H\in(0,1)$ on a probability space $(\Omega, \mathcal F, \mathbb P)$ is a centered Gaussian stochastic process with covariance given by $\mathbb E (B^H_x B^H_y) = \frac12 (|x|^{2H} + |y|^{2H} - |x-y|^{2H})$. When $H=\tfrac12$ we recover a Brownian motion.
--- abstract: | Let $G_{k,n}$ be an $n$-balanced $k$-partite graph, whose vertex set can be partitioned into $k$ parts, each with $n$ vertices. In this paper, we prove that if $k \geq 2$, $n \geq 1$ and the edge set $E(G)$ of $G_{k,n}$ satisfies $$|E(G)| \geq\left\{\begin{array}{cc} 1 & \text { if } k=2, n=1, \\ \binom{k}{2} n^{2}-(k-1) n+2 & \text { otherwise, } \end{array}\right.$$ then $G_{k,n}$ is Hamiltonian. Moreover, the bound may be best possible. address: - School of Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China - School of Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China - School of Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China author: - Zongyuan Yang - Yi Zhang - Shichang Zhao bibliography: - achemso-demo.bib title: The existence of Hamilton cycle in $n$-balanced $k$-partite graphs --- # 1 Introduction A Hamiltonian cycle is a cycle that contains all the vertices of the graph. The existence of Hamiltonian cycles is a central problem in graph theory, and deciding it is NP-hard. Researchers have therefore sought tight sufficient conditions for the existence of Hamiltonian cycles, among which the minimum degree condition (Dirac condition), the minimum degree sum condition (Ore condition) and the Fan-type condition are three classical results. Inspired by Turán-type extremal problems, we give a sufficient condition for the existence of Hamiltonian cycles in balanced multipartite graphs in terms of the number of edges, i.e., we determine how many edges a balanced multipartite graph must have to guarantee a Hamiltonian cycle. Complete balanced multipartite graphs have many desirable structural properties, such as Hamiltonicity, which makes them a suitable structural basis for interconnection networks.
Connections in networks often fail, so edge fault tolerance is an important indicator of the stability of a network. This paper provides theoretical support for determining, from the perspective of the number of edges, the fault tolerance of complete multipartite graphs with respect to Hamiltonicity, making them a basis for network structures. Let $G_{k,n}$ be an $n$-balanced $k$-partite graph, whose vertex set can be partitioned into $k$ parts, each with $n$ vertices. Our main result is the following theorem.\ **Theorem 1**. *Let $G=(V,E)$ be an $n$-balanced $k$-partite graph with $k \geq 2$, $n \geq 1$ except $k=2$, $n=1$. If $|E(G)| \geq \binom{k}{2} n^{2}-(k-1) n+2$, then $G$ is Hamiltonian.* # 2 Notations and useful results All graphs considered in this paper are simple, finite, loop-free and undirected. For a graph $G$, let $V(G),E(G)$ denote the vertex set and edge set of $G$, respectively. For a vertex $u \in V(G)$, let $N_G(u)$ be the set of vertices adjacent to $u$ in $G$ and $d_G(u) =|N_G(u)|$. Let $G_{n_1,\cdots,n_k}=(V(G_{n_1,\cdots,n_k}),E(G_{n_1,\cdots,n_k}))$ be a $k$-partite graph with order $n_1+n_2+\cdots+n_k$, where the vertex set $V(G_{n_1,\cdots,n_k})$ can be divided into $k$ parts $V_1,V_2,\cdots,V_k$ with $|V_i|=n_i$ for $1 \leq i \leq k$ and the edge set $E(G_{n_1,\cdots,n_k})$ contains only edges whose endpoints lie in different parts. If the edge set $E(G_{n_1,\cdots,n_k})$ contains all edges whose endpoints lie in different parts, then we call $G_{n_1,\cdots,n_k}$ the complete $k$-partite graph with order $n_1+n_2+\cdots+n_k$, and denote the graph by $CG_{n_1,\cdots,n_k}$. If $n_1=\cdots=n_k$, for simplicity, we write $G_{k,n}$ and $CG_{k,n}$ instead, and call $G_{k,n}$ an $n$-balanced $k$-partite graph. Let $\overline{G_{n_1,\cdots,n_k}}$ be the $k$-partite graph with the vertex set $V(G_{n_1,\cdots,n_k})$ and the edge set $E(CG_{n_1,\cdots,n_k})\setminus E(G_{n_1,\cdots,n_k})$.
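These definitions, and the bound of Theorem 1 in its smallest nontrivial balanced case $(k,n)=(3,2)$, can be verified exhaustively. The sketch below is our own illustration (function names are hypothetical; the brute force is feasible only for tiny $k$ and $n$):

```python
from itertools import combinations, permutations

def complete_balanced_kpartite(k, n):
    """Vertex and edge lists of CG_{k,n}; (i, a) is the a-th vertex of V_i."""
    verts = [(i, a) for i in range(k) for a in range(n)]
    return verts, [(u, v) for u, v in combinations(verts, 2) if u[0] != v[0]]

def is_hamiltonian(verts, edges):
    """Brute force: search for a Hamilton cycle through the first vertex."""
    E = {frozenset(e) for e in edges}
    first, rest = verts[0], verts[1:]
    for perm in permutations(rest):
        cyc = (first,) + perm
        if all(frozenset((cyc[i], cyc[(i + 1) % len(cyc)])) in E
               for i in range(len(cyc))):
            return True
    return False

def check_bound(k, n):
    """Every subgraph of CG_{k,n} with >= C(k,2) n^2 - (k-1) n + 2 edges
    is Hamiltonian (exhaustive over all admissible edge deletions)."""
    verts, all_edges = complete_balanced_kpartite(k, n)
    bound = k * (k - 1) // 2 * n * n - (k - 1) * n + 2
    for r in range(len(all_edges) - bound + 1):
        for removed in combinations(all_edges, r):
            kept = [e for e in all_edges if e not in set(removed)]
            if not is_hamiltonian(verts, kept):
                return False
    return True
```

For $(k,n)=(3,2)$ the bound is $10$, and deleting all but one edge at a single vertex yields a $9$-edge graph with a degree-one vertex, hence non-Hamiltonian, illustrating why the bound cannot be lowered there.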
Clearly $|E(G_{n_1,\cdots,n_k})|+|E(\overline{G_{n_1,\cdots,n_k}})| = |E(CG_{n_1,\cdots,n_k})|$ and $|E(G_{k,n})|+|E(\overline{G_{k,n}})| = |E(CG_{k,n})|$. Given a vertex $u\in V(G_{k,n})$, clearly $d_{\overline{G_{k,n}}}(u)+d_{G_{k,n}}(u) =(k-1)n$. Let $\delta(G_{k,n})$ be the smallest degree among all vertices and $\sigma(G_{k,n})$ be the smallest degree sum of two nonadjacent vertices from different parts. The following results will be used in our proof. **Theorem 2**. *(Ore [@1960Note]) Let $G$ be a graph with $n \geq 3$ vertices. If $d(u)+d(v) \geq n$ for any two nonadjacent vertices $u$ and $v$, then $G$ is Hamiltonian.* **Theorem 3**. *For any graph $G$ with $n \geq 3$ vertices, if $|E(G)| \geq \binom{n-1}{2} + 2$, then $G$ is Hamiltonian.* Proof. We claim that $d(u)+d(v) \geq n$ for any two nonadjacent vertices $u$ and $v$. Otherwise, we let $u_0$ and $v_0$ be two nonadjacent vertices with $d(u_0)+d(v_0) \leq n-1$. Then we obtain that $|E(G)| \leq \binom{n-2}{2} + n-1 < \binom{n-1}{2} + 2$, a contradiction. Furthermore, by Ore's Theorem, $G$ is Hamiltonian. **Lemma 4**. *Let $G$ be a graph with $n \geq 3$ vertices and $u_1-P-u_n$ be a Hamilton path. If $d(u_1)+d(u_n) \geq n$, then $G$ is Hamiltonian.* Proof. Let $u_1-u_2-\cdots-u_{n-1}-u_n$ be a Hamilton path. To the contrary, we assume that $G$ is not Hamiltonian. If $u_k$ is adjacent to $u_1$, then $u_{k-1}$ is not adjacent to $u_n$, otherwise we find a Hamilton cycle. Therefore $d(u_n) \leq n-1-d(u_1)$. Similarly, $d(u_1) \leq n-1-d(u_n)$. It follows that $d(u_1)+d(u_n) \leq n-1$, a contradiction. ------------------------------------------------------------------------ **Theorem 5**. *[@1995Hamiltonicity] Let $G_{k,n}$ be an $n$-balanced $k$-partite graph with $k \geq 2$.
If $$\delta(G_{k,n})>\left\{\begin{array}{ll} \left(\frac{k}{2}-\frac{1}{k+1}\right) n & \text { if } k \text { is odd, } \\ \left(\frac{k}{2}-\frac{2}{k+2}\right) n & \text { if } k \text { is even, } \end{array}\right.$$ then $G_{k,n}$ is Hamiltonian.* **Theorem 6**. *[@1997Degree] Let $G_{k,n}$ be an $n$-balanced $k$-partite graph with $k \geq 2$. If $$\sigma(G_{k,n})>\left\{\begin{array}{ll} \left(k-\frac{2}{k+1}\right) n & \text { if } k \text { is odd, } \\ \left(k-\frac{4}{k+2}\right) n & \text { if } k \text { is even, } \end{array}\right.$$ then $G_{k,n}$ is Hamiltonian.* # Proof of Theorem [Theorem 1](#theorem1){reference-type="ref" reference="theorem1"} {#proof-of-theorem-theorem1} Suppose that $G_{k,n}=(V(G_{k,n}),E(G_{k,n}))$ is an $n$-balanced $k$-partite graph with $k$ parts $V_1$, $V_2$, $\cdots,$ $V_k$ and $|E(G_{k,n})| \geq \binom{k}{2}n^{2}-(k-1) n+2$, where $k \geq 2$, $n \geq 1$ except $k=2$, $n=1$. Recalling that $E(\overline{G_{k,n}})=E(CG_{k,n})\setminus E(G_{k,n})$, we have $$\label{eq111} |E(\overline{G_{k,n}})| \leq(k-1) n-2.$$ If $n=1$, then $|E(G_{k,1})| \geq \binom{k-1}{2}+2$, so by Theorem [Theorem 3](#theorem2){reference-type="ref" reference="theorem2"} the result holds. **Claim 7**. *If $k=2$, the result holds.* *Proof.* In this case, we have $|E(G_{2,n})| \geq n^2-n+2$ as $k=2$, which implies that $$\label{eq222} \sigma (G_{2,n}) \geq n+1 \text{\, and\, } \delta (G_{2,n}) \geq 2.$$ Otherwise $|E(G_{2,n})| \leq n^2-n+1$, a contradiction. By Theorem [Theorem 6](#theorem3){reference-type="ref" reference="theorem3"} with $k=2$, the result holds. ◻ **Claim 8**. *If $n=2$, the result holds.* *Proof.* In this case we have $|E(G_{k,2})| \geq 2k^2-4k+4$ as $n=2$, which implies that $$\label{eq2} \sigma (G_{k,2}) \geq 2k-1 \text{\, and\, } \delta (G_{k,2}) \geq 2.$$ Otherwise $|E(G_{k,2})| \leq 2k^2-4k+3$, a contradiction.
Theorem [Theorem 6](#theorem3){reference-type="ref" reference="theorem3"} with $n=2$ implies that if $$\label{eq1} \sigma (G_{k,2}) >\left\{\begin{array}{ll} 2k-\frac{4}{k+1} & \text { when } k \text { is odd, } \\ 2k-\frac{8}{k+2} & \text { when } k \text { is even, } \end{array}\right.$$ then $G_{k,2}$ is Hamiltonian. It follows directly that the result holds for $k=2,4$. If $\sigma (G_{k,2}) \geq 2k$, then $G_{k,2}$ is Hamiltonian by ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}). By ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}), we just need to consider the case when $\sigma (G_{k,2}) =2k-1$. Without loss of generality, we let $u_1 \in V_1$ and $u_2 \in V_{2}$ be two nonadjacent vertices such that $d(u_1)+d(u_2)=2k-1$ and $d(u_2) > d(u_1)$. It follows that $d(u_2) \geq k$. Since $|E(G_{k,2})| \geq 2k^2-4k+4$, we obtain that $G_{k,2}-\{u_1,u_2\}$ is the complete $k$-partite graph with order $1+1+2+\cdots+2=2k-2$. Otherwise $|E(G_{k,2})| \leq 2k^2-4k+3$, a contradiction. Suppose $k=3$. We obtain that $d(u_1)=2$, $d(u_2)=3$. Combining with $|E(G_{3,2})| \geq 10$, it is easy to find a Hamilton cycle in $G_{3,2}$. We prove the case when $k \geq 5$ by induction on $k$. It is easy to obtain that $G_{k,2}[V_2\cup V_3\cup\cdots\cup V_k]$ is a $2$-balanced $(k-1)$-partite graph with at least $\binom{k-2}{2}2^2+2(k-2)+k-1 > 2(k-1)^2-4(k-1)+4$ edges as $k \geq 5$. By inductive hypothesis, $G_{k,2}[V_2\cup V_3\cup\cdots\cup V_k]$ contains a Hamilton cycle $C$. Let $u_1'$ be the other vertex of $V_1$ different from $u_1$. By ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}), $\delta (G_{k,2}) \geq 2$, so we can let $u, v \in V(C)$ be two vertices adjacent to $u_1$. If $\{u,v\}\in E(C)$, then we can construct a cycle of size $2k-1$ in $G_{k,2}-u_1'$. Since $d_{G_{k,2}}(u_1') \geq 2$, it is easy to find a Hamilton path in $G_{k,2}$ with one end being $u_1'$ and the other end, denoted by $w$, different from $u_1$. Clearly $d(u_1')+d(w) \geq 2(k-2)+1+k > 2k$ as $k \geq 5$.
By Lemma [Lemma 4](#lemma1){reference-type="ref" reference="lemma1"}, $G_{k,2}$ has a Hamilton cycle. If $\{u,v\}\notin E(C)$, suppose that $C=u-u'-C_1-v-v'-C_2-u$, where $u'-C_1-v$ is the path from $u'$ to $v$ on $C$ not passing $u$ and $v'-C_2-u$ is the path from $v'$ to $u$ on $C$ not passing $v$, then at least one of $u'$ and $v'$ is adjacent to $u_1'$ as $d(u_1') \geq 2(k-2)+1$, say $\{u_1',v'\}\in E(G_{k,2})$. Then $u_1'-v'-C_2-u-u_1-v-C_1-u'$ is a Hamilton path in $G_{k,2}$. Clearly $d(u_1')+d(u') \geq 2(k-2)+1+k > 2k$ as $k \geq 5$. By Lemma [Lemma 4](#lemma1){reference-type="ref" reference="lemma1"}, $G_{k,2}$ has a Hamilton cycle. ◻ Next we prove the case when $k \geq 3,n \geq 3$ by induction on $n$. We assume $$\sigma(G_{k,n})\leq\left\{\begin{array}{ll} \left(k-\frac{2}{k+1}\right) n & \text { if } k \text { is odd, } \\ \left(k-\frac{4}{k+2}\right) n & \text { if } k \text { is even. } \end{array}\right.$$ Otherwise, by Theorem [Theorem 6](#theorem3){reference-type="ref" reference="theorem3"}, $G_{k,n}$ is Hamiltonian. Without loss of generality, we let $u_1 \in V_1$ and $v \in V_2$ be two nonadjacent vertices such that $$\label{eq3} d_{G_{k,n}}(u_1)+d_{G_{k,n}}(v) \leq\left\{\begin{array}{ll} \left(k-\frac{2}{k+1}\right) n & \text { if } k \text { is odd, } \\ \left(k-\frac{4}{k+2}\right) n & \text { if } k \text { is even, } \end{array}\right.$$ and $$\label{eq4} d_{G_{k,n}}(u_1) \leq\left\{\begin{array}{ll} \left(\frac{k}{2}-\frac{1}{k+1}\right) n & \text { if } k \text { is odd, } \\ \left(\frac{k}{2}-\frac{2}{k+2}\right) n & \text { if } k \text { is even. 
} \end{array}\right.$$ Furthermore, since $d_{\overline{G_{k,n}}}(u)+d_{G_{k,n}}(u) =(k-1)n$ for any vertex $u\in V(G_{k,n})$, we obtain: $$\label{eq55} d_{\overline{G_{k,n}}}(u_1)+d_{\overline{G_{k,n}}}(v) \geq\left\{\begin{array}{ll} \left(k+\frac{2}{k+1}-2\right) n & \text { if } k \text { is odd, } \\ \left(k+\frac{4}{k+2}-2\right) n & \text { if } k \text { is even, } \end{array}\right.$$ and $$\label{eq66} d_{\overline{G_{k,n}}}(u_1) \geq\left\{\begin{array}{ll} \left(\frac{k}{2}+\frac{1}{k+1}-1\right) n & \text { if } k \text { is odd, } \\ \left(\frac{k}{2}+\frac{2}{k+2}-1\right) n & \text { if } k \text { is even. } \end{array}\right.$$ Clearly $\delta(G_{k,n}) \geq 2$ as $|E(G_{k,n}) | \geq \binom{k}{2}n^{2}-(k-1) n+2$. Therefore $d_{G_{k,n}}(u_1) \geq2$. We distinguish the following two cases: **Case 1.** There exist $i \neq j \in \{2,3,\cdots,k\}$ such that $N_{G_{k,n}}(u_1) \cap V_{i} \neq \emptyset$ and $N_{G_{k,n}}(u_1) \cap V_{j} \neq \emptyset$. Let $G=G_{k,n}-\{u_1,v\}$. We claim that $$\begin{aligned} \label{eq5} \delta(G) \geq (k-2)n.\end{aligned}$$ Otherwise, there exists a vertex $u \in V(G)$ such that $d_G(u) \leq (k-2)n-1$. If $u\in V_1$ or $V_2$, then $$\begin{aligned} d_{\overline{G}}(u) \geq (k-2)n+(n-1)-((k-2)n-1)=n.\end{aligned}$$ If $u\in V_i$, $3 \leq i \leq k$, then $$\begin{aligned} d_{\overline{G}}(u) \geq (k-3)n+2(n-1)-((k-2)n-1)=n-1.\end{aligned}$$ Combining with ([\[eq55\]](#eq55){reference-type="ref" reference="eq55"}), we obtain that $$|E(\overline{G_{k,n}})|\geq n-1+ \left\{\begin{array}{ll} \left(k+\frac{2}{k+1}-2\right) n-1 & \text { if } k \text { is odd, } \\ \left(k+\frac{4}{k+2}-2\right) n-1 & \text { if } k \text { is even. 
} \end{array}\right.$$ If $k$ is odd, then $$\begin{aligned} &n-1+\left(k+\frac{2}{k+1}-2\right) n-1 = (k-1)n-2+ \frac{2}{k+1}n > (k-1)n-2;\end{aligned}$$ if $k$ is even, then $$\begin{aligned} &n-1+\left(k+\frac{4}{k+2}-2\right) n-1 = (k-1)n-2+ \frac{4}{k+2}n > (k-1)n-2.\end{aligned}$$ It is a contradiction. If $N_{G_{k,n}}(u_1) \cap V_2 \neq \emptyset$, without loss of generality, we let $u_2 \in V_2$ and $u_3\in V_3$ be two adjacent vertices to $u_1$. By [\[eq5\]](#eq5){reference-type="ref" reference="eq5"}, we can greedily find a path $P=u_2-u_1-u_3-\cdots-u_k$ on $k$ vertices, where $u_i\in V_i$ for $i=2,\cdots,k$. Otherwise there exists $3 \leq i \leq k-1$ such that $N_G(u_i) \cap V_{i+1}=\emptyset$. Then $d_G(u_i) \leq 2(n-1)+(k-4)n=(k-2)n-2$, a contradiction. If $N_{G_{k,n}}(u_1) \cap V_2 = \emptyset$, without loss of generality, we let $u_3 \in V_3$ and $u_4\in V_4$ be two adjacent vertices to $u_1$. Similarly, by [\[eq5\]](#eq5){reference-type="ref" reference="eq5"}, we can greedily find a path $P=u_3-u_1-u_4-\cdots-u_k-u_2$, where $u_i\in V_i$ for $i=2,\cdots,k$ and $u_2 \neq v$. Otherwise there exists $4 \leq i \leq k-1$ such that $N_G(u_i) \cap V_{i+1}=\emptyset$ or $N_G(u_k) \cap V_{2}\setminus\{v\}=\emptyset$. Then $d_G(u_i) \leq (k-2)n-1$, a contradiction. Clearly the path $P$ contains exactly one vertex from each part. Suppose $P=u_2-u_1-u_3-u_4-\cdots-u_k$ (The case when $P=u_3-u_1-u_4-\cdots-u_k-u_2$ is similar). By ([\[eq111\]](#eq111){reference-type="ref" reference="eq111"}) and ([\[eq66\]](#eq66){reference-type="ref" reference="eq66"}), we obtain that $\overline{G_{k,n}-V(P)}$ is an $(n-1)$-balanced $k$-partite graph with at most $\left(\frac{k}{2}-\frac{1}{k+1}\right) n-2$ edges if $k$ is odd and at most $\left(\frac{k}{2}-\frac{2}{k+2}\right) n-2$ edges if $k$ is even.
It is not difficult to check that $$\max\Big\{\left(\frac{k}{2}-\frac{1}{k+1}\right) n-2, \left(\frac{k}{2}-\frac{2}{k+2}\right) n-2 \Big\} \leq (k-1)(n-1)-2$$ since $k \geq 3$ and $n \geq 3$. It follows that $G_{k,n}-V(P)$ contains at least $\binom{k}{2}(n-1)^{2}-(k-1)(n-1)+2$ edges. By the inductive hypothesis, $G_{k,n}-V(P)$ contains a Hamilton cycle, denoted by $$C = v_1-v_2-\cdots-v_{k(n-1)}-v_1.$$ If $k(n-1)$ is odd, we construct a matching $M$ of size $\frac{k(n-1)-1}{2}$ from the edges of $C$ with $v \notin V(M)$: $$M=\Big\{\{v_{2i-1},v_{2i}\}: i=1,2,\cdots,\frac{k(n-1)-1}{2}\Big\}.$$ If $k(n-1)$ is even, we construct a matching $M$ of size $\frac{k(n-1)-2}{2}$ from the edges of $C$ with $v \notin V(M)$: $$M=\Big\{\{v_{2i-1},v_{2i}\}: i=1,2,\cdots,\frac{k(n-1)-2}{2}\Big\}.$$ We have the following claim. **Claim 9**. *There exists an edge $\{v_{2i-1},v_{2i}\}$ of $M$ such that either $\{u_2,v_{2i-1}\} \in E(G_{k,n})$ and $\{u_k,v_{2i}\}\in E(G_{k,n})$, or $\{u_2,v_{2i}\} \in E(G_{k,n})$ and $\{u_k,v_{2i-1}\}\in E(G_{k,n})$.* *Proof.* To the contrary, if $k(n-1)$ is odd, the number of vertices in $\overline{G_{k,n}-V(P)-v}$ that are adjacent to $u_2$ or $u_k$ is at least $$2\frac{k(n-1)-1}{2} - 2(n-1)=kn-k-2n+1,$$ since the number of vertices in the same part as $u_2$ (resp. $u_k$) is at most $n-1$ in $\overline{G_{k,n}-V(P)-v}$. By ([\[eq55\]](#eq55){reference-type="ref" reference="eq55"}), we obtain: $$|E(\overline{G_{k,n}})| \geq kn-k-2n+1+ \left\{\begin{array}{ll} \left(k+\frac{2}{k+1}-2\right) n-1 & \text { if } k \text { is odd, } \\ \left(k+\frac{4}{k+2}-2\right) n-1 & \text { if } k \text { is even. } \end{array}\right.$$ If $k$ is odd, then $$\begin{aligned} & kn-k-2n+1+ \left(k+\frac{2}{k+1}-2\right)n-1 = (k-1)n-2+ \left(k-3+\frac{2}{k+1}\right) n-k+2 \\ \geq & (k-1)n-2+ 3\left(k-3+\frac{2}{k+1}\right)-k+2=(k-1)n-2+ 2k+\frac{6}{k+1} -7> (k-1)n-2.\end{aligned}$$ We derive that $|E(\overline{G_{k,n}})| > (k-1)n-2$. The case when $k$ is even is similar. This is a contradiction.
◻ By Claim [Claim 9](#claim777){reference-type="ref" reference="claim777"}, we obtain a Hamilton cycle of $G_{k,n}$ from $P$ and $C$. Without loss of generality, we may assume that the edge $\{v_{1},v_{2}\}$ of $M$ satisfies $\{u_2,v_1\},\{u_k,v_{2}\} \in E(G_{k,n})$. Then $v_1-u_2-P-u_k-v_2-C-v_1$ is a Hamilton cycle of $G_{k,n}$. **Case 2.** There exists exactly one $i \in \{2,3,\cdots,k\}$ such that $N_{G_{k,n}}(u_1) \cap V_{i} \neq \emptyset$. In this case we have $d_{G_{k,n}}(u_1) \leq n$. If $k=3$ and $n=3$, then it is easy to check that $G_{3,3}$ contains a Hamilton cycle. Suppose now that $k\geq3$ and $n\geq3$ with $(k,n)\neq(3,3)$. We have the following claim. **Claim 10**. *$\delta(G_{k,n}-u_1) \geq (k-2)n+1$.* *Proof.* Let $G' =G_{k,n}-u_1$. To the contrary, suppose that there exists $u \in V(G')$ such that $d_{G'}(u) \leq (k-2)n$. If $u\in V_1$, then $|E(G'-u)|\leq \binom{k-1}{2}n^2+(n-2)(k-1)n$. It follows that $$\begin{aligned} |E(G_{k,n})| & \leq d_{G'}(u)+ |E(G'-u)|+d_{G_{k,n}}(u_1) \\ & \leq (k-2)n+ \binom{k-1}{2}n^2+(n-2)(k-1)n+n= \binom{k}{2}n^2-(k-1)n,\end{aligned}$$ a contradiction. Now assume that $u \in V_i$ for some $2 \leq i \leq k$. Then $|E(G'-u)|\leq \binom{k-2}{2}n^2+(2n-2)(k-2)n+(n-1)(n-1)$. Therefore $$\begin{aligned} |E(G_{k,n})| & \leq d_{G'}(u)+ |E(G'-u)|+d_{G_{k,n}}(u_1) \\ & \leq (k-2)n+\binom{k-2}{2}n^2+(2n-2)(k-2)n+(n-1)(n-1)+n= \binom{k}{2}n^2-(k-1)n+1,\end{aligned}$$ a contradiction. ◻ Suppose that $N_{G_{k,n}}(u_1) \subseteq V_2$. Let $u_2,u_2' \in N_{G_{k,n}}(u_1) \subseteq V_2$. We claim that there exist two paths meeting only at $u_1$: $$\begin{aligned} P_1=u_1-u_2-u_3-\cdots-u_k { \,\, \text {and} \,\,} P_2=u_1-u_2'-u_3'-\cdots-u_k'-u_1',\end{aligned}$$ where $u_i \neq u_i'\in V_i$ for $i=1,2,\cdots,k$. First, we can greedily construct $P_1$. By Claim [Claim 10](#claim8){reference-type="ref" reference="claim8"}, $\delta(G_{k,n}-u_1) \geq (k-2)n+1$, so $N_{G_{k,n}-u_1}(u_i) \cap V_{i+1} \neq \emptyset$ for $2 \leq i \leq k-1$.
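As a toy illustration of the greedy step used throughout this proof (at each part, pick any neighbor of the current endpoint in the next part; the minimum-degree bound guarantees such a neighbor exists), consider the following sketch. The graph and its part structure below are assumed purely for illustration and are not part of the argument.

```python
# Illustrative greedy path construction across the parts of a k-partite
# graph; the adjacency structure below is assumed toy data.
def greedy_path(adj, parts_in_order, start):
    path = [start]
    for part in parts_in_order:
        # degree condition of the proof guarantees such a neighbor exists
        nxt = next(v for v in part if v in adj[path[-1]] and v not in path)
        path.append(nxt)
    return path

# toy 3-partite graph: V1 = {0}, V2 = {1, 2}, V3 = {3, 4}
adj = {0: {1, 3}, 1: {0, 3, 4}, 2: {3}, 3: {0, 1, 2}, 4: {1}}
assert greedy_path(adj, [[1, 2], [3, 4]], start=0) == [0, 1, 3]
```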
Next, we can also greedily construct $P_2$. Otherwise, there exists $u_i' \in V_i$ such that $N_{G_{k,n}-u_1}(u_i') \cap V_{i+1}\setminus\{u_i\} = \emptyset$ for some $2 \leq i \leq k-1$ or $N_{G_{k,n}-u_1}(u_k') \cap V_{1}\setminus\{u_1\} = \emptyset$. It follows that $d_{G_{k,n}-u_1}(u_i') \leq kn-1-n-(n-1)=(k-2)n$, a contradiction. Suppose that $N_{G_{k,n}}(u_1) \subseteq V_i$ for some $3 \leq i \leq k$. Without loss of generality, let $u_3,u_3' \in N_{G_{k,n}}(u_1) \subseteq V_3$. We claim that there exist two paths meeting only at $u_1$: $$\begin{aligned} P_1=u_1-u_3-u_2-u_4-\cdots-u_k { \,\, \text {and} \,\,} P_2=u_1-u_3'-u_2'-u_4'-\cdots-u_k'-u_1',\end{aligned}$$ where $u_i \neq u_i'\in V_i$ for $i=1,2,\cdots,k$. In particular, if $k=3$, we can choose $u_2\neq v$. The argument is similar to the case $N_{G_{k,n}}(u_1) \subseteq V_2$. Recall that $d_{G_{k,n}}(u_1) \leq n$, so $d_{\overline{G_{k,n}}}(u_1) \geq (k-2)n$. By ([\[eq111\]](#eq111){reference-type="ref" reference="eq111"}), we derive that $$|E(\overline{G_{k,n}-V(P_1\cup P_2)})| \leq n-2 .$$ Therefore $G_{k,n}-V(P_1)-V(P_2)$ is an $(n-2)$-balanced $k$-partite graph with at least $$\begin{aligned} \binom{k}{2}(n-2)^{2}-(n-2) \geq \binom{k}{2}(n-2)^{2}-(k-1)(n-2)+2\end{aligned}$$ edges, since $k\geq3$ and $n\geq3$ with $(k,n)\neq(3,3)$. By the inductive hypothesis, $G_{k,n}-V(P_1 \cup P_2)$ contains a Hamilton cycle, denoted by $$C = v_1-v_2-\cdots-v_{k(n-2)}-v_1.$$ If $k(n-2)$ is odd, we construct a matching $M$ of size $\frac{k(n-2)-1}{2}$ from the edges of $C$ with $v \notin V(M)$: $$M=\Big\{\{v_{2i-1},v_{2i}\}: i=1,2,\cdots,\frac{k(n-2)-1}{2}\Big\}.$$ If $k(n-2)$ is even, we construct a matching $M$ of size $\frac{k(n-2)-2}{2}$ from the edges of $C$ with $v \notin V(M)$: $$M=\Big\{\{v_{2i-1},v_{2i}\}: i=1,2,\cdots,\frac{k(n-2)-2}{2}\Big\}.$$ We have the following claim. **Claim 11**.
*There exists an edge $\{v_{2i-1},v_{2i}\}$ of $M$ such that either $\{u_1',v_{2i-1}\} \in E(G_{k,n})$ and $\{u_k,v_{2i}\}\in E(G_{k,n})$, or $\{u_1',v_{2i}\} \in E(G_{k,n})$ and $\{u_k,v_{2i-1}\}\in E(G_{k,n})$.* *Proof.* To the contrary, if $k(n-2)$ is odd, the number of vertices in $\overline{G_{k,n}-V(P_1\cup P_2)-v}$ that are adjacent to $u_1'$ or $u_k$ is at least $$2\frac{k(n-2)-1}{2} - 2(n-2)=kn-2k-2n+3,$$ since the number of vertices in the same part as $u_1'$ (resp. $u_k$) is at most $n-2$ in $\overline{G_{k,n}-V(P_1\cup P_2)-v}$. By ([\[eq55\]](#eq55){reference-type="ref" reference="eq55"}), we obtain: $$|E(\overline{G_{k,n}})| \geq kn-2k-2n+3+ \left\{\begin{array}{ll} \left(k+\frac{2}{k+1}-2\right) n-1 & \text { if } k \text { is odd, } \\ \left(k+\frac{4}{k+2}-2\right) n-1 & \text { if } k \text { is even. } \end{array}\right.$$ If $k$ is odd, then $$\begin{aligned} & kn-2k-2n+3+ \left(k+\frac{2}{k+1}-2\right)n-1 \\ = & (k-1)n-2+ \left(k-3+\frac{2}{k+1}\right) n-2k+4 \\ \geq & (k-1)n-2+ 3\left(k-3+\frac{2}{k+1}\right)-2k+4\\ = & (k-1)n-2+ k+\frac{6}{k+1}-5 >(k-1)n-2.\end{aligned}$$ We derive that $|E(\overline{G_{k,n}})| > (k-1)n-2$. The case when $k$ is even is similar. This is a contradiction. ◻ By Claim [Claim 11](#claim7){reference-type="ref" reference="claim7"}, we obtain a Hamilton cycle of $G_{k,n}$ from $P_1$, $P_2$, and $C$. Without loss of generality, we may assume that the edge $\{v_{1},v_{2}\}$ of $M$ satisfies $\{u_1',v_1\},\{u_k,v_{2}\} \in E(G_{k,n})$. Then $v_1-u_1'-P_2-u_1-P_1-u_k-v_2-C-v_1$ is a Hamilton cycle of $G_{k,n}$. Having proved Theorem [Theorem 1](#theorem1){reference-type="ref" reference="theorem1"}, we obtain the following useful consequence. **Theorem 12**. *Let $G=(V,E)$ be an $n$-balanced $k$-partite graph with $k \geq 2$ and $n \geq 1$, except $k=2$, $n=1$.
If $|E(G)| \geq \binom{k}{2}n^{2}-(k-1) n+1$ and $\delta(G) \geq 2$, then $G$ is Hamiltonian.* # Proof of Theorem [Theorem 12](#theorem11){reference-type="ref" reference="theorem11"} {#proof-of-theorem-theorem11} Suppose that $G_{k,n}=(V(G_{k,n}),E(G_{k,n}))$ is an $n$-balanced $k$-partite graph with $k$ parts $V_1$, $V_2$, $\cdots,$ $V_k$, $|E(G_{k,n})| \geq \binom{k}{2}n^{2}-(k-1) n+1$, and $\delta(G_{k,n}) \geq 2$, where $k \geq 2$ and $n \geq 1$, except $k=2$, $n=1$. Recalling that $E(\overline{G_{k,n}})=E(CG_{k,n})\setminus E(G_{k,n})$, we have $$\label{eq121} |E(\overline{G_{k,n}})| \leq(k-1) n-1.$$ For every pair of nonadjacent vertices $u$ and $v$ which lie in different partite sets, $$d(u)+d(v) \geq \max \{4,(k-1) n\}=\sigma_{G_{k,n}}.$$ If $G$ satisfies the conditions of Theorem [Theorem 6](#theorem3){reference-type="ref" reference="theorem3"}, then $G$ is Hamiltonian. Otherwise, there exists a pair of nonadjacent vertices $u$ and $v$ in different partite sets such that $$d_{\overline{G_{k,n}}}(u) \leq\left\{\begin{array}{l} \left(\frac{k}{2}-\frac{1}{k+1}\right) n-1 \text { if } k \text { is odd, } \\ \left(\frac{k}{2}-\frac{2}{k+2}\right) n-1 \text { if } k \text { is even, } \end{array}\right.$$ and $$\label{eq1111} \left|E(\overline{G_{k,n} - \{u\}}) \bigcap E(\overline{G_{k,n} - \{v\}})\right| \leq\left\{\begin{array}{l} \left(1-\frac{2}{k+1}\right) n \text { if } k \text { is odd, } \\ \left(1-\frac{4}{k+2}\right) n \text { if } k \text { is even } \end{array}\right.$$ hold. Note that $\delta(G_{k,n}) + 1>\left\{\begin{array}{l}\left(k-\frac{2}{k+1}\right) n \quad \text{if } k \text{ is odd,}\\ \left(k-\frac{4}{k+2}\right) n \quad \text{if } k \text{ is even,}\end{array}\right.$ only when $k=2,n \geq 2$, or $k=4,n=2$, or $k \geq 2,n=1$. In these cases, we only need to consider $d(u)+d(v)=\delta(G_{k,n})>\left\{\begin{array}{l}\left(k-\frac{2}{k+1}\right) n \quad \text{if } k \text{ is odd,}\\ \left(k-\frac{4}{k+2}\right) n \quad \text{if } k \text{ is even.}\end{array}\right.$
It is straightforward to prove these cases by an argument comparable to the proof of the $k \geq 2,n=2$ case in the proof of Claim [Claim 8](#claim88){reference-type="ref" reference="claim88"}. The other case is $\delta(G_{k,n}) + 1 \leq \left\{\begin{array}{l}\left(k-\frac{2}{k+1}\right) n \quad \text{if } k \text{ is odd,}\\ \left(k-\frac{4}{k+2}\right) n \quad \text{if } k \text{ is even.}\end{array}\right.$ If $d(u)+d(v)=\delta(G_{k,n})$, these cases can likewise be handled by an argument comparable to the proof of the $k \geq 2,n=2$ case in the proof of Claim [Claim 8](#claim88){reference-type="ref" reference="claim88"}. If $d(u)+d(v) > \delta(G_{k,n})$, then $|E(\overline{G_{k,n} - \{u\}}) \bigcap E(\overline{G_{k,n} - \{v\}})| \geq 1$, so there exists an edge of $E(\overline{G_{k,n}})$ that is incident to neither $u$ nor $v$. Denote this edge by $ab$ and consider the graph $G' := G \cup \{ab\}$. Applying Theorem [Theorem 1](#theorem1){reference-type="ref" reference="theorem1"}, $G'$ is Hamiltonian. Let the Hamiltonian cycle of $G^{\prime}$ be $H$. If $H$ does not include the edge $ab$, the conclusion holds. Otherwise, $H$ contains the edge $ab$. Let the vertices adjacent to $a$ and $b$ on $H$ (other than $b$ and $a$, respectively) be $a_{left}$ and $b_{right}$, and write $H: w \ldots a_{l e f t}\, a\, b\, b_{r i g h t} \ldots w$. Since $d(a) \geq 2$ and $d(b) \geq 2$ in $G$, we may choose $a^{\prime} \in N(a)$ and $b^{\prime} \in N(b)$ with $a^{\prime} \neq a_{l e f t}$ and $b^{\prime} \neq b_{r i g h t}$. If $H$ contains the edge $a^{\prime}b^{\prime}$, there exists a new Hamiltonian cycle $H^{\prime}: w-\overrightarrow{H}-a^{\prime} a-\overleftarrow{H}-b^{\prime} b-\overrightarrow{H}-w\left(H^{\prime}: w-\overrightarrow{H}-a a^{\prime}-\overleftarrow{H}-b^{\prime} b-\overrightarrow{H}-w\right)$, and the conclusion holds.
Thus, $|N_{\overline{G_{k,n}}}(a) \bigcup N_{\overline{G_{k,n}}}(b)| \geq k n-3+1-(n-1)-(n-1)=(k-2) n$, and hence $|E(\overline{G_{k,n} - \{u\}}) \bigcap E(\overline{G_{k,n} - \{v\}})| \geq |N_{\overline{G_{k,n}}}(a) \bigcup N_{\overline{G_{k,n}}}(b)| -4 \geq(k-2) n-4$. By ([\[eq1111\]](#eq1111){reference-type="ref" reference="eq1111"}), this yields a contradiction when $k$ and $n$ are sufficiently large.
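The splicing step used in both cases above (e.g., forming $v_1-u_2-P-u_k-v_2-C-v_1$ from a path $P$, a Hamilton cycle $C$ of the remaining vertices, and a matching edge $\{v_1,v_2\}$ of $C$) can be sanity-checked numerically. Below is a minimal sketch with assumed toy vertex labels; the helper `splice` is purely illustrative.

```python
# Illustrative sketch (not the paper's code): splice a path P and a
# Hamilton cycle C of the remaining vertices into one Hamilton cycle,
# walking C from v2 back to v1 while avoiding the matching edge {v1, v2}.
def splice(path, cycle, v1, v2):
    """path: [u2, ..., uk]; cycle: list of the other vertices in cyclic
    order; {v1, v2} is an edge of the cycle with v1 adjacent to path[0]
    and v2 adjacent to path[-1] in the host graph (assumed)."""
    i, j = cycle.index(v1), cycle.index(v2)
    assert abs(i - j) == 1 or abs(i - j) == len(cycle) - 1  # must be a cycle edge
    # choose the direction that does not immediately cross the edge {v2, v1}
    step = 1 if (j + 1) % len(cycle) != i else -1
    arc, k = [], j
    while True:
        arc.append(cycle[k])
        if cycle[k] == v1:
            break
        k = (k + step) % len(cycle)
    return path + arc  # closes back to path[0] via the edge {v1, path[0]}

# toy example: P = [1, 2, 3], C = [4, 5, 6, 7], matching edge {7, 4}
ham = splice([1, 2, 3], [4, 5, 6, 7], v1=7, v2=4)
assert sorted(ham) == [1, 2, 3, 4, 5, 6, 7]
```

Every edge of the returned cycle is an edge of $P$, an edge of $C$ other than $\{v_1,v_2\}$, or one of the two connecting edges.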
{ "id": "2309.00232", "title": "The Existence of Hamilton Cycle in n-Balanced k-Partite Graphs", "authors": "Zongyuan Yang, Yi Zhang, Shichang Zhao", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We establish a collection of closed-loop guarantees and propose a scalable, Newton-type optimization algorithm for distributionally robust model predictive control (DRMPC) applied to linear systems, zero-mean disturbances, convex constraints, and quadratic costs. Via standard assumptions for the terminal cost and constraint, we establish distributionally robust long-term and stage-wise performance guarantees for the closed-loop system. We further demonstrate that a common choice of the terminal cost, i.e., as the solution to the discrete-algebraic Riccati equation, renders the origin input-to-state stable for the closed-loop system. This choice of the terminal cost also ensures that the exact long-term performance of the closed-loop system is *independent* of the choice of ambiguity set for the DRMPC formulation. Thus, we establish conditions under which DRMPC does *not* provide a long-term performance benefit relative to stochastic MPC (SMPC). To solve the proposed DRMPC optimization problem, we propose a Newton-type algorithm that empirically achieves *superlinear* convergence by solving a quadratic program at each iteration and guarantees the feasibility of each iterate. We demonstrate the implications of the closed-loop guarantees and the scalability of the proposed algorithm via two examples. author: - Robert D. McAllister and Peyman Mohajerin Esfahani bibliography: - paper.bib title: | Distributionally Robust Model Predictive Control:\ Closed-loop Guarantees and Scalable Algorithms --- **Keywords.** Model predictive control, distributionally robust optimization, closed-loop stability, second-order algorithms # Introduction Model predictive control (MPC) defines an implicit control law via a finite horizon optimal control problem.
This optimal control problem is defined by the stage cost $\ell(x,u)$, state/input constraints, and a (discrete-time) dynamical model: $$x^+ = f(x,u,w)$$ in which $x$ is the state, $u$ is the manipulated input, and $w$ is the disturbance. The primary difference between variants of this general MPC formulation (e.g., nominal, robust, and stochastic MPC) is their approach to modeling the disturbance $w$ in the optimization problem. In nominal MPC, the optimization problem uses a nominal dynamical model, i.e., $w=0$. Nonetheless, feedback affords nominal MPC a nonzero margin of *inherent* robustness to disturbances [@grimm:messina:tuna:teel:2004; @yu:reble:chen:allgower:2014; @allan:bates:risbeck:rawlings:2017]. This nonzero margin, however, may be insufficient in certain safety-critical applications with high uncertainty. Robust MPC (RMPC) and stochastic MPC (SMPC) offer a potential means to improve on the inherent robustness of nominal MPC by characterizing the disturbance and including this information in the optimal control problem. RMPC describes the disturbance via a set $\mathbb{W}$ and requires that the state and input constraints in the optimization problem are satisfied for all possible realizations of $w\in\mathbb{W}$. The objective function of RMPC considers only the nominal system ($w=0$) and these methods are sometimes called tube-based MPC if the constraint tightening is computed offline [@mayne:seron:rakovic:2005; @goulart:kerrigan:maciejowski:2006]. SMPC includes a stochastic description of the disturbance $w\sim \mathds{P}$ ($w$ is distributed according to the probability distribution $\mathds{P}$) and defines the objective function based on the expected value of the cost function subject to this distribution [@cannon:kouvaritakis:wu:2009; @farina:giulioni:scattolini:2016; @mesbah:2016; @lorenzen:dabbene:tempo:allgower:2016]. This stochastic description of the disturbance also permits the use of so-called chance constraints. 
The performance of SMPC therefore depends on the disturbance distribution $\mathds{P}$. Analogous to nominal MPC, feedback affords SMPC a small margin of inherent distributional robustness, i.e., robustness to inaccuracies in the disturbance distribution [@mcallister:rawlings:2023]. If this distribution is identified from limited data, however, there may be significant distributional uncertainty. Therefore, a distributionally robust (DR) approach to the SMPC optimization problem may provide desirable benefits in applications with high uncertainty and limited data. Advances in distributionally robust optimization (DRO) have inspired a range of distributionally robust MPC (DRMPC) formulations. In general, these problems take the following form: $$\min_{\theta\in\Pi(x)} \max_{\mathds{P}\in\mathcal{P}}\mathbb{E}_{\mathds{P}}[J(x,\theta,\mathbf{w})] \label{eq:dro_intro}$$ in which $x$ is the current state of the system, $\theta$ defines the control inputs for the MPC horizon (potentially as parameters in a previously defined feedback law), $\mathbb{E}_{\mathds{P}}\left[\cdot\right]$ denotes the expected value with respect to the distribution $\mathds{P}$, and $\mathcal{P}$ is the ambiguity set for the distribution $\mathds{P}$ of the disturbances $\mathbf{w}$. The goal is to select $\theta$ to minimize the worst-case expected value of the cost function $J(\cdot)$ and satisfy the (chance-)constraints embedded in the set $\Pi(x)$. Note that SMPC and RMPC are special cases of DRMPC via specific choices of $\mathcal{P}$. The key feature of all MPC formulations is that the finite horizon optimal control problem in [\[eq:dro_intro\]](#eq:dro_intro){reference-type="ref" reference="eq:dro_intro"} is solved with an updated state estimate at each time step, i.e., a rolling horizon approach. 
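The rolling-horizon mechanism can be sketched with a toy closed-loop simulation. The system matrices, the fixed gain inside `kappa`, and the disturbance scale below are assumptions for illustration only; `kappa` stands in for the map obtained by re-solving the finite-horizon problem at each measured state.

```python
import numpy as np

# Toy rolling-horizon (closed-loop) simulation; all data here are assumed.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
G = np.eye(2)

def kappa(x):
    K = np.array([[-10.0, -5.0]])   # placeholder feedback law
    return K @ x

x = np.array([1.0, 0.0])
costs = []
for k in range(50):                       # closed loop: new input at every step
    u = kappa(x)
    costs.append(float(x @ x + u @ u))    # stage cost ell(x, u) with Q = R = I
    w = 0.01 * rng.standard_normal(2)     # zero-mean disturbance sample
    x = A @ x + B @ u + G @ w             # plant update x+ = Ax + Bu + Gw
avg_cost = sum(costs) / len(costs)        # finite-time estimate of the average cost
```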
With this approach, DRMPC defines an implicit feedback control law $\kappa(x)$ and the closed-loop system $$\label{eq:fcl} x^+ = f(x,\kappa(x),w)$$ The performance of this controller is ultimately defined by this closed-loop system and the stage cost. In particular, we often define performance based on the expected average closed-loop stage cost at time $k\geq 1$, i.e., $$\mathcal{J}_k(\mathds{P}) := \mathbb{E}_{\mathds{P}}\left[\frac{1}{k}\sum_{i=0}^{k-1}\ell\Big(\phi(i),\kappa(\phi(i))\Big)\right]$$ in which $\phi(i)$ is the closed-loop state trajectory defined by [\[eq:fcl\]](#eq:fcl){reference-type="ref" reference="eq:fcl"} and $\mathds{P}$ is the distribution for the closed-loop disturbance. In this work, we focus on DRMPC formulations for linear systems with additive disturbances and quadratic costs. We note that there are also DRMPC formulations that consider parametric uncertainty [@coulson:lygeros:dorfler:2021] as well as piecewise affine cost functions [@micheli:summers:lygeros:2022]. In both cases, the proposed DRMPC formulations solve for only a single input trajectory for all realizations of the disturbance. To better address the realization of uncertainty in the open-loop trajectory, RMPC/SMPC formulations typically solve for a trajectory of parameterized control policies instead of a single input trajectory. A common choice of this parameterization is the state-feedback law $u=Kx+v$ in which $K$ is the fixed feedback gain and the parameter to be optimized is $v$. Using this parameterization, several DRMPC formulations were proposed to tighten probabilistic constraints for linear systems based on different ambiguity sets [@mark:liu:2020; @tan:yang:chen:li:2022; @fochesato:lygeros:2022; @li:tan:wu:duan:2021]. In these formulations, however, the cost function is unaltered from the corresponding SMPC formulation due to the fixed feedback gain in the control law parameterization.
If the control law parameterization is chosen as a more flexible feedback affine policy (see [\[eq:feedbackaffine\]](#eq:feedbackaffine){reference-type="ref" reference="eq:feedbackaffine"}), first proposed for MPC formulations in [@goulart:kerrigan:maciejowski:2006], distributional uncertainty enters the cost function of the DRMPC problem in a nontrivial manner. @vanparys:kuhn:goulart:morari:2015 propose a tractable method to solve linear quadratic control problems with unconstrained inputs and a distributionally robust chance constraint on the state. Other authors consider a disturbance feedback affine parameterization with conic representable ambiguity sets and demonstrate a tractable reformulation of the DRMPC problem. @mark:liu:2022 consider a similar formulation with a simplified ambiguity set and also establish some performance guarantees for the closed-loop system. @tacskesen:iancu:kocyigit:kuhn:2023 demonstrate that for *unconstrained* linear systems, additive disturbances, and quadratic costs, a linear feedback law is optimal and can be found via a semidefinite program (SDP). @pan:faulwasser:2023 use polynomial chaos expansion to approximate and solve the distributionally robust optimal control problem. While these new formulations are interesting, there remain important questions about the efficacy of including yet another layer of uncertainty in the MPC problem. For example, what properties should DRMPC provide to the closed-loop system in [\[eq:fcl\]](#eq:fcl){reference-type="ref" reference="eq:fcl"}? And what conditions are required to achieve these properties? Due to the rolling horizon nature of DRMPC, distributionally robust closed-loop properties are not necessarily obtained by simply solving a distributionally robust optimization problem. Moreover, the conditions under which DRMPC provides significant performance benefits relative to SMPC are currently unknown. One of the main contributions of this paper is to provide greater insight into these questions.
The focus, in particular, is on the performance benefits and guarantees that may be obtained by considering distributional uncertainty in the cost function [\[eq:dro_intro\]](#eq:dro_intro){reference-type="ref" reference="eq:dro_intro"}. Chance constraints are therefore not considered in the proposed DRMPC formulation or closed-loop analysis. DRMPC is also limited by practical concerns related to the computational cost of solving DRO problems. While these DRMPC problems can often be reformulated as convex optimization problems, in particular SDPs, these optimization problems are often significantly more difficult to solve relative to the quadratic programs (QPs) that are ubiquitous in nominal, robust, and stochastic MPC problems. In this work, we consider a DRMPC formulation for linear dynamical models, additive disturbances, convex constraints, and quadratic costs. This DRMPC formulation uses a Gelbrich ambiguity set with fixed first moment (zero mean) as a conservative approximation for a Wasserstein ball of the same radius [@gelbrich:1990]. The key contributions of this work are (informally) summarized in the following two categories. 1. **Closed-loop guarantees:** 1. **Distributionally robust long-term performance.** We establish sufficient conditions for DRMPC, in particular the terminal cost and constraint, such that the closed-loop system satisfies a distributionally robust long-term performance bound ([Theorem 1](#thm:performance){reference-type="ref" reference="thm:performance"}), i.e., we define a function $C(\mathds{P})$ such that $$\label{eq:intro_performance} \limsup_{k\rightarrow\infty} \mathcal{J}_k(\mathds{P}) \leq \max_{\tilde{\mathds{P}}\in\mathcal{P}} C(\tilde{\mathds{P}})$$ for all $\mathds{P}\in\mathcal{P}$. This bound is *distributionally robust* because it holds for all distributions $\mathds{P}\in\mathcal{P}$. 2. 
**Distributionally robust stage-wise performance.** If the stage cost is also positive definite, we establish that the closed-loop system satisfies a distributionally robust stage-wise performance bound ([Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"}), i.e., there exist $\lambda\in(0,1)$ and $c,\gamma>0$ such that $$\label{eq:intro_resie} \mathbb{E}_{\mathds{P}}\left[\ell\Big(\phi(k),\kappa(\phi(k))\Big)\right] \leq \lambda^k c|\phi(0)|^2 + \max_{\tilde{\mathds{P}}\in\mathcal{P}} \gamma C(\tilde{\mathds{P}})$$ for all $\mathds{P}\in\mathcal{P}$. Moreover, this result directly implies that the closed-loop system is distributionally robust, *mean-squared* input-to-state stable (ISS) ([Corollary 3](#cor:mss){reference-type="ref" reference="cor:mss"}), i.e., the left-hand side of [\[eq:intro_resie\]](#eq:intro_resie){reference-type="ref" reference="eq:intro_resie"} becomes $\mathbb{E}_{\mathds{P}}\left[|\phi(k)|^2\right]$. 3. **Pathwise input-to-state stability.** A common approach in MPC design is to select the terminal cost based on the discrete-algebraic Riccati equation (DARE) for the unconstrained linear system. Under these conditions, we establish that the closed-loop system is in fact (pathwise) ISS ([Theorem 4](#thm:iss){reference-type="ref" reference="thm:iss"}), which is a stronger property than mean-squared ISS. 4. **Exact long-term performance.** Given this stronger property of (pathwise) ISS, we can further establish an *exact* value for the long-term performance of DRMPC based on this terminal cost and the closed-loop disturbance distribution ([Theorem 5](#thm:performance_opt){reference-type="ref" reference="thm:performance_opt"}), i.e., $$\label{eq:intro_exact} \lim_{k\rightarrow\infty} \mathcal{J}_k(\mathds{P}) = C(\mathds{P})$$ for all distributions $\mathds{P}$ supported on $\mathbb{W}$. Of particular interest is the fact that this result is *independent* of the choice of ambiguity set $\mathcal{P}$.
Thus, the long-term performances of DRMPC, SMPC, and RMPC are *equivalent* for this choice of terminal cost ([Corollary 6](#cor:equal){reference-type="ref" reference="cor:equal"}). 2. **Scalable algorithm: Newton-type saddle point algorithm.** We present a novel optimization algorithm tailored to solve the DRMPC problem of interest (Algorithm [\[alg:NT\]](#alg:NT){reference-type="ref" reference="alg:NT"}). In contrast to Frank-Wolfe algorithms previously proposed to solve DRO problems (e.g., [@nguyen:shafieezadeh:kuhn:mohajerin:2023; @sheriff:mohajerin:2023]), the proposed algorithm solves a QP at each iteration instead of an LP. The Newton-type algorithm achieves *superlinear* (potentially quadratic) convergence in numerical experiments (Figure [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}) and reduces computation time by 50% compared to solving the DRMPC problem as an LMI optimization problem with state-of-the-art solvers, i.e., MOSEK (Figure [2](#fig:comparison){reference-type="ref" reference="fig:comparison"}). #### Organization {#organization .unnumbered} In Section [2](#s:problem){reference-type="ref" reference="s:problem"}, we introduce the DRMPC problem formulation and associated DRO problem. In Section [3](#s:cl){reference-type="ref" reference="s:cl"}, we present the main technical results on closed-loop guarantees. In Section [4](#s:cl_proofs){reference-type="ref" reference="s:cl_proofs"}, we provide the technical proofs and supporting lemmata for the main technical results introduced in Section [3](#s:cl){reference-type="ref" reference="s:cl"}. In Section [5](#s:algorithms){reference-type="ref" reference="s:algorithms"}, we discuss the DRO problem of interest and introduce the proposed Newton-type saddle point algorithm. In Section [6](#s:examples){reference-type="ref" reference="s:examples"}, we study two examples to demonstrate the closed-loop properties established in Section [3](#s:cl){reference-type="ref" reference="s:cl"} and the scalability of the proposed algorithm.
#### Notation {#notation .unnumbered} Let $\mathbb{R}$ denote the reals and subscripts/superscripts denote the restrictions/dimensions for the reals, i.e., $\mathbb{R}^n_{\geq 0}$ is the set of nonnegative reals with dimension $n$. The transpose of a square matrix $M\in\mathbb{R}^{n\times n}$ is denoted $M'$. The trace of a matrix $M\in\mathbb{R}^{n\times n}$ is denoted $\textnormal{tr}(M)$. A positive (semi)definite matrix $M\in\mathbb{R}^{n\times n}$ is denoted by $M\succ 0$ ($M\succeq 0$). For $M\succeq 0$, let $|x|_M^2$ denote the quadratic form $x'Mx$. If $\mathds{P}$ is a probability distribution, $\mathds{P}(A)$ denotes the probability of event $A$. Let $\mathbb{E}_{\mathds{P}}\left[\cdot\right]$ denote the expected value with respect to $\mathds{P}$. Let $\mathbb{E}_{\mathds{P}}\left[\cdot\mid x\right]$ denote the expected value with respect to $\mathds{P}$ and given the value of $x$. A function $\alpha:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ is said to be in class $\mathcal{K}$, denoted $\alpha(\cdot)\in\mathcal{K}$, if $\alpha(\cdot)$ is continuous, strictly increasing, and $\alpha(0)=0$. # Problem Formulation {#s:problem} We consider the linear system with additive disturbances $$\label{eq:f} x^+ = Ax + Bu + Gw$$ in which $x\in\mathbb{R}^n$, $u\in\mathbb{U}\subseteq\mathbb{R}^m$, and $w\in\mathbb{W}\subseteq\mathbb{R}^q$. We also consider the state/input constraints $$\label{eq:bbZ} (x,u) \in \mathbb{Z}\subseteq \mathbb{R}^n \times \mathbb{U}$$ and the terminal constraint $\mathbb{X}_f\subseteq\mathbb{R}^n$. We consider convex constraints as follows. **Assumption 1** (Convex state-action constraints). The sets $\mathbb{Z}$ and $\mathbb{X}_f$ are closed, convex, and contain the origin in their interior. The set $\mathbb{U}$ is compact and contains the origin. The set $\mathbb{W}$ is compact and contains the origin in its interior.
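For a linear system of this form, a common terminal-cost choice discussed in the introduction is the solution $P$ of the discrete-algebraic Riccati equation (DARE). A minimal sketch, with assumed toy system data, computes it by fixed-point iteration:

```python
import numpy as np

# Sketch: solve the DARE  P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
# by fixed-point (value) iteration on assumed toy data.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

def dare_step(P):
    BtPB = R + B.T @ P @ B
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(BtPB, B.T @ P @ A)

P = np.eye(2)
for _ in range(1000):            # converges for stabilizable (A, B)
    P = dare_step(P)

residual = np.linalg.norm(P - dare_step(P))
```

The same solution can also be obtained directly with library solvers (e.g., a discrete-ARE routine); the iteration above is shown only to make the fixed-point structure explicit.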
To ensure constraint satisfaction, we use the following disturbance feedback parameterization [@goulart:kerrigan:maciejowski:2006]. $$\label{eq:feedbackaffine} u(i) = v(i) + \sum_{j=0}^{i-1}M(i,j)w(j)$$ in which $v(i)\in\mathbb{R}^m$ and $M(i,j)\in\mathbb{R}^{m\times q}$. With this parameterization and a finite horizon $N\geq 1$, the input sequence $\mathbf{u}:=(u(0),u(1),\dots,u(N-1))$ is defined as $$\label{eq:bfu} \mathbf{u}= \mathbf{M}\mathbf{w}+ \mathbf{v}$$ in which $\mathbf{w}:=(w(0),w(1),\dots,w(N-1))$ is the disturbance trajectory. Note that the structure of $\mathbf{M}$ must satisfy the following requirements to enforce causality. $$(\mathbf{M},\mathbf{v})\in \Theta := \left\{(\mathbf{M},\mathbf{v}) \ \middle| \ \begin{matrix} \mathbf{M}\in\mathbb{R}^{Nm\times Nq}, \ \mathbf{v}\in\mathbb{R}^{Nm} \\ \ M(i,j)=0 \ \forall j \geq i \end{matrix}\right\}$$ The state trajectory $\mathbf{x}:=(x(0),x(1),\dots,x(N))$ is therefore $$\label{eq:bfx} \mathbf{x}= \mathbf{A}x + \mathbf{B}\mathbf{v}+ (\mathbf{B}\mathbf{M}+ \mathbf{G})\mathbf{w}$$ and the constraints for this parameterization are given by $$\Pi(x) := \bigcap_{\mathbf{w}\in\mathbb{W}^N} \left\{ (\mathbf{M},\mathbf{v})\in\Theta\ \middle| \ \begin{matrix} \textnormal{s.t. } \cref{eq:bfu}, \ \cref{eq:bfx} \\ (x(k),u(k))\in\mathbb{Z}\ \forall \ k \\ x(N) \in \mathbb{X}_f \end{matrix} \right\}$$ That is, if $(\mathbf{M},\mathbf{v})\in\Pi(x)$ then the constraints in [\[eq:bbZ\]](#eq:bbZ){reference-type="ref" reference="eq:bbZ"} are satisfied for all realizations of the disturbance trajectory $\mathbf{w}\in\mathbb{W}^N$. We also define the feasible set $$\mathcal{X}:= \{x\in\mathbb{R}^n \mid \Pi(x)\neq\emptyset \}$$ To streamline notation, we define $$\theta := (\mathbf{M},\mathbf{v})\in\Pi(x)$$ **Lemma 1** (Policy constraints). 
*If [Assumption 1](#as:convex){reference-type="ref" reference="as:convex"} holds, then $\Pi(x)$ is compact and convex for all $x\in\mathcal{X}$ and $\mathcal{X}$ is closed and convex.* See [7](#s:appendix){reference-type="ref" reference="s:appendix"} for the proof. Note that [Lemma 1](#lem:compact){reference-type="ref" reference="lem:compact"} uses a slightly different formulation and set of assumptions than in [@goulart:kerrigan:maciejowski:2006], and we are therefore able to establish that $\Pi(x)$ is also bounded. Moreover, if $\mathbb{Z}$ and $\mathbb{X}_f$ are polyhedral and $\mathbb{W}$ is a polytope, then $\Pi(x)$ is also a (bounded) polytope [@goulart:kerrigan:maciejowski:2006]. For the MPC problem, we consider quadratic stage and terminal costs defined as $$\ell(x,u) = x'Qx + u'Ru \qquad V_f(x) = x'Px$$ with the following standard assumption. **Assumption 2** (Positive semidefinite cost). The matrices $Q$ and $P$ are positive semidefinite ($Q,P\succeq 0$) and $R$ is positive definite ($R\succ 0$). For a given input and disturbance trajectory, we have the following deterministic cost function. $$\Phi(x,\mathbf{u},\mathbf{w}) := \sum_{k=0}^{N-1}\ell(x(k),u(k)) + V_f(x(N))$$ If we embed the disturbance feedback parameterization in this cost function, we have $$\begin{aligned} J(x,\theta,\mathbf{w}) := \Phi(x,\mathbf{M}\mathbf{w}+ \mathbf{v},\mathbf{w}) = |H_xx+H_u\mathbf{v}+ (H_u\mathbf{M}+ H_w)\mathbf{w}|^2\end{aligned}$$ with constant matrices $H_x$, $H_u$, and $H_w$. 
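To make the parameterized cost concrete, the following sketch (with assumed toy data and $Q=R=P=I$) evaluates $\Phi(x,\mathbf{M}\mathbf{w}+\mathbf{v},\mathbf{w})$ by simulating the recursion and checks that it is quadratic in $\mathbf{w}$, consistent with the squared-norm form of $J(x,\theta,\mathbf{w})$ above:

```python
import numpy as np

# Toy data (assumed): n = q = 2, m = 1, horizon N = 3, and Q = R = P = I,
# so Phi is a plain sum of squared norms along the trajectory.
n, m, q, N = 2, 1, 2, 3
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
G = np.eye(2)

def rollout_cost(x0, v, M, w):
    """Phi(x, Mw + v, w), evaluated by simulating x+ = Ax + Bu + Gw."""
    u_seq = (M @ w + v).reshape(N, m)
    w_seq = w.reshape(N, q)
    x, cost = x0, 0.0
    for k in range(N):
        cost += x @ x + u_seq[k] @ u_seq[k]   # ell(x, u) with Q = R = I
        x = A @ x + B @ u_seq[k] + G @ w_seq[k]
    return cost + x @ x                        # V_f(x(N)) with P = I

# strictly lower block-triangular M enforces causality: M(i, j) = 0 for j >= i
mask = np.kron(np.tril(np.ones((N, N)), -1), np.ones((m, q)))
M = rng.standard_normal((N * m, N * q)) * mask
v = rng.standard_normal(N * m)
x0, w0 = np.array([1.0, -0.5]), rng.standard_normal(N * q)

# J(x, theta, w) is quadratic in w, so its second differences are constant
f = [rollout_cost(x0, v, M, w0 + t * np.ones(N * q)) for t in range(4)]
assert abs((f[0] - 2 * f[1] + f[2]) - (f[1] - 2 * f[2] + f[3])) < 1e-6
```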
We make the following standing assumption for the remainder of this paper: *The disturbances $w$ are zero mean, independent in time, and satisfy $w\in\mathbb{W}$ with probability one.* Let $\mathcal{M}(\mathbb{W})$ denote all probability distributions of the random variable $w$ with zero mean such that $w\in\mathbb{W}$ with probability one, i.e., $$\mathcal{M}(\mathbb{W}) := \left\{\mathds{P}\ \middle| \ \mathbb{E}_{\mathds{P}}\left[w\right]=0, \ \mathds{P}\left(w\in\mathbb{W}\right)=1\right\}$$ For any distribution $\mathds{P}\in\mathcal{M}(\mathbb{W}^N)$ of $\mathbf{w}$ with covariance $\boldsymbol{\Sigma}:=\mathbb{E}_{\mathds{P}}\left[\mathbf{w}\mathbf{w}'\right]$, we have $$\begin{aligned} & L(x,\theta,\boldsymbol{\Sigma}) := \mathbb{E}_{\mathds{P}}\left[J(x,\theta,\mathbf{w})\right] = |H_xx+H_u\mathbf{v}|^2 + \textnormal{tr}\Big((H_u\mathbf{M}+H_w)'(H_u\mathbf{M}+H_w)\boldsymbol{\Sigma}\Big) \label{eq:L_obj}\end{aligned}$$ Note that $L(x,\theta,\boldsymbol{\Sigma})$ is quadratic in $\theta$ and linear in $\boldsymbol{\Sigma}$. In SMPC, we minimize $L(x,\theta,\boldsymbol{\Sigma})$ for a specific covariance $\boldsymbol{\Sigma}$ and the current state $x$. For DRMPC, we instead consider a worst-case version of the SMPC problem in which $\mathds{P}$ takes the worst value within some ambiguity set. 
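The closed form in [\[eq:L_obj\]](#eq:L_obj){reference-type="ref" reference="eq:L_obj"} rests on the identity $\mathbb{E}|a+Bw|^2=|a|^2+\textnormal{tr}(B'B\Sigma)$ for zero-mean $w$ with covariance $\Sigma$. A minimal numerical check of this identity (arbitrary illustrative matrices; Gaussian disturbances chosen only for convenience):

```python
import numpy as np

# Sketch verifying the identity behind the expected cost L(x, theta, Sigma):
# for zero-mean w with covariance Sigma, E|a + B w|^2 = |a|^2 + tr(B'B Sigma).
rng = np.random.default_rng(1)
n, p = 3, 5
a = rng.standard_normal(n)
B = rng.standard_normal((n, p))
S = rng.standard_normal((p, p))
Sigma = S @ S.T  # a positive semidefinite covariance

closed_form = a @ a + np.trace(B.T @ B @ Sigma)

# Monte Carlo estimate with zero-mean Gaussian disturbances.
w = rng.multivariate_normal(np.zeros(p), Sigma, size=200_000)
mc = np.mean(np.sum((a + w @ B.T) ** 2, axis=1))

assert abs(mc - closed_form) / closed_form < 0.05
```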
To define this ambiguity set, we first consider the Gelbrich ball for the covariance of a single disturbance $w\in\mathbb{R}^q$ centered at the nominal covariance $\widehat{\Sigma}\in\mathbb{R}^{q\times q}$ with radius $\varepsilon\geq 0$ defined as $$\mathbb{B}_{\varepsilon}(\widehat{\Sigma}) := \left\{\Sigma \succeq 0 \ \middle| \ \textnormal{tr}\left(\widehat{\Sigma} + \Sigma - 2\left(\widehat{\Sigma}^{1/2}\Sigma\widehat{\Sigma}^{1/2}\right)^{1/2}\right) \leq \varepsilon^2 \right\}$$ To streamline notation, we define $$\mathbb{B}_d := \mathbb{B}_{\varepsilon}(\widehat{\Sigma}), \qquad \text{where} \qquad d:=\left(\varepsilon,\widehat{\Sigma}\right)$$ This Gelbrich ball produces the following Gelbrich ambiguity set for the distributions of $w$: $$\mathcal{P}_d := \left\{\mathds{P}\in\mathcal{M}(\mathbb{W}) \ \middle| \ \mathbb{E}_{\mathds{P}}[ww']=\Sigma\in\mathbb{B}_d\right\}$$ We further assume that this Gelbrich ambiguity set is compatible with $\mathbb{W}$, i.e., all covariances $\Sigma\in\mathbb{B}_d$ can be achieved by at least one distribution $\mathds{P}\in\mathcal{M}(\mathbb{W})$. For example, in the extreme case that $\mathbb{W}=\{0\}$ then $\mathcal{M}(\mathbb{W})$ contains only one distribution with all the weight at zero and the only reasonable Gelbrich ball to consider is $\mathbb{B}_d=\{0\}$. Formally, we consider only ambiguity parameters $d\in\mathcal{D}$ with $$\mathcal{D}:= \left\{ (\varepsilon,\widehat{\Sigma}) \ \middle| \begin{matrix} d=(\varepsilon,\widehat{\Sigma}), \ \varepsilon\geq 0, \ \widehat{\Sigma}\succeq 0, \\ \ \forall \ \Sigma\in\mathbb{B}_{d} \ \exists \ \mathds{P}\in\mathcal{M}(\mathbb{W}) \ \textnormal{s.t. } \mathbb{E}_{\mathds{P}}[ww']=\Sigma \end{matrix} \right\}$$ Note that $\mathcal{D}$ depends on $\mathbb{W}$, but we suppress this dependence to streamline the notation. 
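A membership test for the Gelbrich ball can be sketched directly from the definition. The helper `psd_sqrt` below is an assumed utility (eigendecomposition-based symmetric square root), not part of any referenced code:

```python
import numpy as np

# Sketch of a Gelbrich-ball membership test: Sigma lies in B_eps(Sigma_hat) iff
# tr(Sigma_hat + Sigma - 2 (Sigma_hat^{1/2} Sigma Sigma_hat^{1/2})^{1/2}) <= eps^2.
def psd_sqrt(A):
    # Symmetric PSD square root via eigendecomposition (assumed helper).
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def gelbrich_dist_sq(Sigma_hat, Sigma):
    root = psd_sqrt(Sigma_hat)
    return np.trace(Sigma_hat + Sigma - 2.0 * psd_sqrt(root @ Sigma @ root))

def in_ball(Sigma_hat, Sigma, eps):
    return gelbrich_dist_sq(Sigma_hat, Sigma) <= eps ** 2

Sigma_hat = np.diag([1.0, 4.0])
# The distance from Sigma_hat to itself is zero, so Sigma_hat is in every ball.
assert abs(gelbrich_dist_sq(Sigma_hat, Sigma_hat)) < 1e-9
# With Sigma = 0 the squared distance reduces to tr(Sigma_hat) = 5.
assert abs(gelbrich_dist_sq(Sigma_hat, np.zeros((2, 2))) - 5.0) < 1e-9
```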
If $d\in\mathcal{D}$, then for any $\Sigma\in\mathbb{B}_d$ there exists $\mathds{P}\in\mathcal{P}_d$ such that $\mathbb{E}_{\mathds{P}}\left[ww'\right]=\Sigma$. For the disturbance trajectory $\mathbf{w}\in\mathbb{W}^N$, we define the following ambiguity set that enforces independence in time: $$\mathcal{P}_d^N := \prod_{k=0}^{N-1}\mathcal{P}_d = \left\{\mathds{P}\in\mathcal{M}(\mathbb{W}^N) \ \middle| \ \begin{matrix} \mathbb{E}_{\mathds{P}}[w(k)w(k)'] \in \mathbb{B}_d \\ w(k) \textrm{ are independent} \end{matrix}\right\}$$ We can equivalently represent $\mathcal{P}_d^N$ as $$\mathcal{P}_d^N = \left\{\mathds{P}\in\mathcal{M}(\mathbb{W}^N) \ \middle| \ \begin{matrix} \mathbb{E}_{\mathds{P}}\left[\mathbf{w}\mathbf{w}'\right]=\boldsymbol{\Sigma}\in\mathbb{B}_d^N \\ w(k) \textrm{ are independent} \end{matrix}\right\}$$ in which the set $\mathbb{B}^N_d$ is defined as $$\mathbb{B}^N_d := \left\{\boldsymbol{\Sigma}= \begin{bmatrix} \Sigma_0 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \Sigma_{N-1} \end{bmatrix} \ \middle| \ \Sigma_k\in\mathbb{B}_{d} \ \forall k \right\}$$ The worst-case expected cost is defined as $$\label{eq:V_d} V_d(x,\theta) := \max_{\mathds{P}\in\mathcal{P}_d^N} \mathbb{E}_{\mathds{P}}\left[J(x,\theta,\mathbf{w})\right] = \max_{\boldsymbol{\Sigma}\in\mathbb{B}^N_d} L(x,\theta,\boldsymbol{\Sigma})$$ We note that equality between the two maximization problems in [\[eq:V_d\]](#eq:V_d){reference-type="ref" reference="eq:V_d"} holds because $d\in\mathcal{D}$. We now define the DRO problem for DRMPC as $$\begin{aligned} \label{eq:dro} V_d^0(x) := \min_{\theta\in\Pi(x)}\max_{\mathds{P}\in\mathcal{P}_d^N} \mathbb{E}_{\mathds{P}}\left[J(x,\theta,\mathbf{w})\right] = \min_{\theta\in\Pi(x)} V_d(x,\theta) \end{aligned}$$ and we denote the solution(s) to the outer minimization problem as $\theta^0_d(x)$. 
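In the scalar case ($q=1$) the Gelbrich distance between variances reduces to $|\sqrt{\widehat{\sigma}}-\sqrt{\sigma}|$, so the ball is an interval and the inner maximization in [\[eq:V_d\]](#eq:V_d){reference-type="ref" reference="eq:V_d"} of a linear objective is attained at the upper endpoint $(\sqrt{\widehat{\sigma}}+\varepsilon)^2$. A brute-force sketch (illustrative numbers; the support-compatibility constraint encoded in $\mathcal{D}$ is ignored here):

```python
import numpy as np

# Scalar (q = 1) sketch of the inner maximization: for variances the Gelbrich
# distance is |sqrt(sig_hat) - sqrt(sig)|, so the ball is the interval
# [max(0, sqrt(sig_hat) - eps)^2, (sqrt(sig_hat) + eps)^2], and the worst case
# of a linear objective c * sig (with c >= 0) sits at the upper endpoint.
sig_hat, eps, c = 2.0, 0.5, 3.0   # illustrative numbers
sig_worst = (np.sqrt(sig_hat) + eps) ** 2

# Brute-force check on a grid of candidate variances inside the ball.
grid = np.linspace(0.0, 10.0, 100_001)
mask = np.abs(np.sqrt(grid) - np.sqrt(sig_hat)) <= eps
assert np.max(c * grid[mask]) <= c * sig_worst + 1e-6
assert np.isclose(np.max(grid[mask]), sig_worst, atol=1e-3)
```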
Note that SMPC ($d=(0,\widehat{\Sigma})$) and RMPC ($d=(0,0)$) are special cases of the optimization problem in [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"}. Thus, all subsequent statements about DRMPC include SMPC and RMPC as special cases of $d\in\mathcal{D}$. Fundamental mathematical properties for this optimization problem (e.g., existence, continuity, and measurability) are provided in [9](#app:problem){reference-type="ref" reference="app:problem"}.

# Closed-loop guarantees: Main results {#s:cl}

## Preliminaries and closed-loop system

We now define the controller and closed-loop system derived from this DRMPC formulation. The control law is defined as the *first* input given by the optimal control law parameterization $\theta^0_d(x)$. Although $\theta^0_d(x)$ may be set-valued, i.e., there are multiple solutions, we assume that some selection rule is applied such that the control law $\kappa_d:\mathcal{X}\rightarrow\mathbb{U}$ is a single-valued function that satisfies $$\kappa_d(x) \in \left\{v^0(0)\mid (\mathbf{M}^0,\mathbf{v}^0)\in\theta^0_d(x)\right\}$$ With this control law, the closed-loop system is $$\label{eq:cl} x^+ = Ax + B\kappa_d(x) + Gw$$ Let $\phi_d(k;x,\mathbf{w}_{\infty})$ denote the closed-loop state of [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"} at time $k\geq 0$, given the initial state $x\in\mathcal{X}$ and the disturbance trajectory $\mathbf{w}_{\infty}\in\mathbb{W}^\infty$, i.e., a disturbance trajectory in the $\ell^{\infty}$ space of sequences. Define the infinity norm of the sequence $\mathbf{w}_{\infty}$ as $||\mathbf{w}_{\infty}||:=\sup_{k\geq 0} |w(k)|$. Note that the deterministic value of $\phi_d(k;x,\mathbf{w}_{\infty})$ for a given realization of $\mathbf{w}_{\infty}\in\mathbb{W}^{\infty}$ is a function of $d\in\mathcal{D}$ via the DRMPC control law.
The goal of this section is to demonstrate that the closed-loop system in [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"} satisfies certain desirable properties for the class of distributions considered in $\mathcal{P}_d$. We consider the set of all distributions for the infinite sequence of disturbances $\mathbf{w}_{\infty}$ such that the disturbances are independent in time and their marginal distributions are in $\mathcal{P}_d$, i.e., we consider the set $$\mathcal{P}_d^\infty := \prod_{k=0}^{\infty}\mathcal{P}_d = \left\{\mathds{P}\in\mathcal{M}(\mathbb{W}^\infty) \ \middle| \ \begin{matrix} \mathbb{E}_{\mathds{P}}[w(k)w(k)'] \in \mathbb{B}_d \\ w(k) \textrm{ are independent} \end{matrix}\right\}$$ An important property for the DRMPC algorithm is robust positive invariance, defined as follows. **Definition 1** (Robust positive invariance). A set $X\subseteq\mathbb{R}^n$ is robustly positively invariant (RPI) for the system in [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"} if $x^+\in X$ for all $x\in X$, $w\in\mathbb{W}$, and $d\in\mathcal{D}$. Note that this definition is adapted for DRMPC to consider all possible $d\in\mathcal{D}$. If we choose $\mathcal{D}=\{(0,0)\}$ (RMPC) then this definition reduces to the standard definition of RPI found in, e.g., [@rawlings:mayne:diehl:2020 Def 3.7]. Since the control law $\kappa_d:\mathcal{X}\rightarrow\mathbb{U}$ is defined only on the feasible set $\mathcal{X}$, the first step in the closed-loop analysis is to establish that this feasible set is RPI. We define the expected average performance of the closed-loop system for $k\geq 1$, given the initial state $x\in\mathcal{X}$, ambiguity parameters $d\in\mathcal{D}$, and the distribution $\mathds{P}\in\mathcal{P}_d^{\infty}$, as follows.
$$\mathcal{J}_k(x,d,\mathds{P}) := \mathbb{E}_{\mathds{P}}\left[\frac{1}{k}\sum_{i=0}^{k-1}\ell\Big(\phi_d(i;x,\mathbf{w}_{\infty}),\kappa_d\big(\phi_d(i;x,\mathbf{w}_{\infty})\big)\Big)\right]$$

## Main results and key technical assumptions

To establish desirable properties for the closed-loop system, we consider the following assumption for the terminal cost $V_f(x)=x'Px$ and constraint $\mathbb{X}_f$. This assumption is also used in SMPC and RMPC analysis. **Assumption 3** (Terminal cost and constraint). The terminal cost matrix $P$ is chosen such that there exists $K_f\in\mathbb{R}^{m\times n}$ satisfying $$\label{eq:term} P - Q - K_f'RK_f \succeq (A+BK_f)'P(A+BK_f)$$ Moreover, the terminal set $\mathbb{X}_f$ contains the origin in its interior and is chosen such that $(x,K_fx)\in\mathbb{Z}$ and $(A+BK_f)x+Gw\in\mathbb{X}_f$ for all $x\in\mathbb{X}_f$ and $w\in\mathbb{W}$. Verifying [Assumption 3](#as:term){reference-type="ref" reference="as:term"} is tantamount to finding a stabilizing linear control law $u=K_fx$, i.e., $A+BK_f$ is Schur stable, that satisfies the required constraints $(x,K_fx)\in\mathbb{Z}$ within some robustly positively invariant neighborhood of the origin $\mathbb{X}_f$. With this stabilizing linear control law, we can then construct an appropriate terminal cost matrix $P$ by, e.g., solving a discrete time Lyapunov equation. With this assumption, we can guarantee that the feasible set $\mathcal{X}$ is RPI and establish the following distributionally robust long-term performance guarantee. This performance guarantee is a distributionally robust version of the stochastic performance guarantee typically derived for SMPC (e.g., [@lorenzen:dabbene:tempo:allgower:2016; @hewing:wabersich:zeilinger:2020; @cannon:kouvaritakis:wu:2009]). **Theorem 1** (DR long-term performance).
*If [\[as:convex,as:cost,as:term\]](#as:convex,as:cost,as:term){reference-type="ref" reference="as:convex,as:cost,as:term"} hold, then the set $\mathcal{X}$ is RPI for [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"} and $$\label{eq:performance} \limsup_{k\rightarrow\infty}\mathcal{J}_k(x,d,\mathds{P}) \leq \max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$$ for all $\mathds{P}\in\mathcal{P}^{\infty}_d$, $d\in\mathcal{D}$, and $x\in\mathcal{X}$.* [Theorem 1](#thm:performance){reference-type="ref" reference="thm:performance"}, however, applies only to the average performance in the limit as $k\rightarrow\infty$. If we are also interested in the transient or stage-wise behavior of the closed-loop system at a given time $k\geq 0$, one can include the following assumption. **Assumption 4** (Positive definite stage cost). The matrix $Q$ is positive definite, i.e., $Q\succ 0$. Moreover, the feasible set $\mathcal{X}$ is bounded or $\mathbb{X}_f=\mathbb{R}^n$. By also including [Assumption 4](#as:track){reference-type="ref" reference="as:track"}, we can establish the following distributionally robust stage-wise performance guarantee. **Theorem 2** (DR stage-wise performance). *If [\[as:convex,as:cost,as:term,as:track\]](#as:convex,as:cost,as:term,as:track){reference-type="ref" reference="as:convex,as:cost,as:term,as:track"} hold, then there exist $\lambda\in(0,1)$ and $c,\gamma>0$ such that $$\label{eq:resie} \mathbb{E}_{\mathds{P}}\left[\ell(x(k),u(k))\right] \leq \lambda^kc|x|^2 + \gamma\left(\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)\right)$$ in which $x(k)=\phi_d(k;x,\mathbf{w}_{\infty})$, $u(k)=\kappa_d(x(k))$ for all $\mathds{P}\in\mathcal{P}^{\infty}_d$, $d\in\mathcal{D}$, $x\in\mathcal{X}$, and $k\geq 0$.* [Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"} ensures that the effect of the initial condition $x$ on the closed-loop stage cost vanishes exponentially fast as $k\rightarrow\infty$.
There is also a persistent term on the right-hand side of [\[eq:resie\]](#eq:resie){reference-type="ref" reference="eq:resie"} that accounts for the continuing effect of the disturbance. We note, however, that the persistent term on the right-hand side of [\[eq:resie\]](#eq:resie){reference-type="ref" reference="eq:resie"} is a constant that depends on the design of the DRMPC algorithm and does not depend on the actual distribution $\mathds{P}$. Since $Q\succ 0$, we can also establish the following corollary of [Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"}. **Corollary 3** (DR mean-squared ISS). *If [\[as:convex,as:cost,as:term,as:track\]](#as:convex,as:cost,as:term,as:track){reference-type="ref" reference="as:convex,as:cost,as:term,as:track"} hold, then there exist $\lambda\in(0,1)$ and $c,\gamma>0$ such that $$\label{eq:mss} \mathbb{E}_{\mathds{P}}\left[|\phi_d(k;x,\mathbf{w}_{\infty})|^2\right] \leq \lambda^kc|x|^2 + \gamma\left(\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)\right)$$ for all $\mathds{P}\in\mathcal{P}^{\infty}_d$, $d\in\mathcal{D}$, $x\in\mathcal{X}$, and $k\geq 0$.* The ISS-style bound in [\[eq:mss\]](#eq:mss){reference-type="ref" reference="eq:mss"} applies to the *mean-squared* norm of the closed-loop state, a commonly referenced quantity in stochastic stability analysis. Note that [\[eq:mss\]](#eq:mss){reference-type="ref" reference="eq:mss"} also implies a similar bound for $\mathbb{E}_{\mathds{P}}\left[|\phi_d(k;x,\mathbf{w}_{\infty})|\right]$ via Jensen's inequality. In MPC formulations, a common strategy is to choose the terminal cost matrix $P$ according to the discrete algebraic Riccati equation (DARE), i.e., the cost for the linear-quadratic regulator (LQR) of the unconstrained linear system.[^1] Specifically, we now consider the following stronger version of [Assumption 3](#as:term){reference-type="ref" reference="as:term"}. **Assumption 5** (DARE terminal cost).
The matrix $P\succ 0$ satisfies $$\label{eq:dare} P = A'PA - A'PB(R+B'PB)^{-1}B'PA + Q$$ and $K_f:=-(R+B'PB)^{-1}B'PA$. Moreover, $(x,K_fx)\in\mathbb{Z}$ and $(A+BK_f)x+Gw\in\mathbb{X}_f$ for all $x\in\mathbb{X}_f$ and $w\in\mathbb{W}$. The terminal set $\mathbb{X}_f$ contains the origin in its interior. With this stronger assumption, we can establish significantly stronger properties for the DRMPC controller, similar to results for SMPC reported in [@goulart:2007 Lemma 4.18]. In particular, we can establish that the closed-loop system is (pathwise) ISS. **Theorem 4** (Pathwise ISS). *Let [\[as:convex,as:cost,as:track,as:term_lqr\]](#as:convex,as:cost,as:track,as:term_lqr){reference-type="ref" reference="as:convex,as:cost,as:track,as:term_lqr"} hold. Then, for any $d\in\mathcal{D}$, the origin is (pathwise) ISS for [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"}, i.e., there exist $\lambda\in(0,1)$, $c>0$, and $\gamma(\cdot)\in\mathcal{K}$ such that $$\label{eq:iss} |\phi_d(k;x,\mathbf{w}_{\infty})| \leq \lambda^kc|x| + \gamma(||\mathbf{w}_{\infty}||)$$ for all $k\geq 0$, $\mathbf{w}_{\infty}\in\mathbb{W}^{\infty}$, and $x\in\mathcal{X}$.* The property of (pathwise) ISS in [Theorem 4](#thm:iss){reference-type="ref" reference="thm:iss"} is notably stronger than mean-squared ISS in [Corollary 3](#cor:mss){reference-type="ref" reference="cor:mss"}. The key distinction is that the persistent term on the right-hand side of [\[eq:iss\]](#eq:iss){reference-type="ref" reference="eq:iss"} is specific to a given realization of the disturbance trajectory $\mathbf{w}_{\infty}$, while the persistent term in [Corollary 3](#cor:mss){reference-type="ref" reference="cor:mss"} depends only on the DRMPC design. If $\mathbf{w}_{\infty}=\mathbf{0}$, then [\[eq:iss\]](#eq:iss){reference-type="ref" reference="eq:iss"} implies that the origin is exponentially stable.
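A numerical sketch of the DARE ingredients in Assumption 5: the example below uses an illustrative double-integrator system (not from this paper) and a plain fixed-point Riccati iteration rather than a dedicated DARE solver, then checks the terminal cost inequality of Assumption 3 (which holds with equality at the DARE solution) and Schur stability of $A+BK_f$:

```python
import numpy as np

# Sketch: compute the DARE terminal cost P by fixed-point iteration, then check
# P - Q - K_f' R K_f >= (A + B K_f)' P (A + B K_f)  (with equality here).
# The system matrices are illustrative, not from the paper.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.eye(2)
for _ in range(500):  # Riccati iteration converges for this stabilizable pair
    P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) + Q

K_f = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
Acl = A + B @ K_f

residual = P - Q - K_f.T @ R @ K_f - Acl.T @ P @ Acl
assert np.max(np.abs(residual)) < 1e-8
# A + B K_f is Schur stable: all eigenvalues strictly inside the unit circle.
assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1.0
```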
By contrast, the weaker restriction on the terminal cost in [Assumption 3](#as:term){reference-type="ref" reference="as:term"} does *not* ensure that the closed-loop system is ISS. We demonstrate this fact in [6](#s:examples){reference-type="ref" reference="s:examples"} via a counterexample. We now consider a class of disturbances that are both independent *and* identically distributed (i.i.d.) in time. We also require that arbitrarily small values of these disturbances occur with nonzero probability. Specifically, we define the following set of distributions. $$\mathcal{Q}:= \prod_{k=0}^{\infty}\left\{\mathds{P}\in\mathcal{M}(\mathbb{W}) \ \middle| \ \forall \ \varepsilon>0, \ \mathds{P}(|w|\leq \varepsilon) >0\right\}$$ Note that $\mathcal{Q}$ includes most distributions of interest such as uniform, truncated Gaussian, and even finite distributions with $\mathds{P}(w=0)>0$. For this class of disturbances, we have the following exact long-term performance guarantee. **Theorem 5** (Exact long-term performance). *Let [\[as:convex,as:cost,as:track,as:term_lqr\]](#as:convex,as:cost,as:track,as:term_lqr){reference-type="ref" reference="as:convex,as:cost,as:track,as:term_lqr"} hold. Then, $$\label{eq:performance_optK} \lim_{k\rightarrow\infty}\mathcal{J}_k(x,d,\mathds{P}) = \textnormal{tr}(G'PG\Sigma)$$ in which $\Sigma=\mathbb{E}_{\mathds{P}}\left[w(i)w(i)'\right]$ for all $d\in\mathcal{D}$, $x\in\mathcal{X}$, and $\mathds{P}\in\mathcal{Q}$.* Note that [\[eq:performance_optK\]](#eq:performance_optK){reference-type="ref" reference="eq:performance_optK"} provides an exact value for the long-term performance based on the distribution of the disturbance in the closed-loop system. By contrast, [\[eq:performance\]](#eq:performance){reference-type="ref" reference="eq:performance"} provides a conservative and constant bound based on the design parameter $d\in\mathcal{D}$. 
Furthermore, the values of $d\in\mathcal{D}$ do *not* affect the long-term performance in [\[eq:performance_optK\]](#eq:performance_optK){reference-type="ref" reference="eq:performance_optK"}. By recalling that SMPC and RMPC are special cases of DRMPC, we have the following corollary of [Theorem 5](#thm:performance_opt){reference-type="ref" reference="thm:performance_opt"}. **Corollary 6** (DRMPC versus SMPC). *If [\[as:convex,as:cost,as:track,as:term_lqr\]](#as:convex,as:cost,as:track,as:term_lqr){reference-type="ref" reference="as:convex,as:cost,as:track,as:term_lqr"} hold, then the long-term performance of DRMPC, SMPC ($\varepsilon=0$), and RMPC ($\varepsilon=0$, $\widehat{\Sigma}=0$) are equivalent, i.e., $$\begin{aligned} \lim_{k\rightarrow\infty}\mathcal{J}_k\Big(x,(\varepsilon,\widehat{\Sigma}),\mathds{P}\Big) = \lim_{k\rightarrow\infty}\mathcal{J}_k\Big(x,(0,\widehat{\Sigma}),\mathds{P}\Big) = \lim_{k\rightarrow\infty}\mathcal{J}_k\Big(x,(0,0),\mathds{P}\Big) \end{aligned}$$ for all $x\in\mathcal{X}$, $\mathds{P}\in\mathcal{Q}$, and $(\varepsilon,\widehat{\Sigma})\in\mathcal{D}$.* Although selecting $P$ to satisfy [\[eq:dare\]](#eq:dare){reference-type="ref" reference="eq:dare"} is a standard design method in MPC, there are also systems in which one cannot satisfy the requirements of [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"} for a given $Q,R\succ 0$. In particular, if the origin is sufficiently close to (or on) the boundary of $\mathbb{Z}$, then satisfying all of the requirements in [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"} is typically not possible. In chemical process control, for example, processes often operate near input constraints (e.g., maximum flow rates) to ensure high throughput for the process. Thus, the terminal cost and constraint are chosen to satisfy only the weaker condition in [Assumption 3](#as:term){reference-type="ref" reference="as:term"}. 
In this case, there is a possibility that DRMPC produces superior long-term performance relative to SMPC and RMPC. We therefore focus on examples in [6](#s:examples){reference-type="ref" reference="s:examples"} that satisfy [Assumption 3](#as:term){reference-type="ref" reference="as:term"}, but cannot satisfy [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}. **Remark 1** (Detectable stage cost). We can also weaken [Assumption 4](#as:track){reference-type="ref" reference="as:track"} to $Q\succeq 0$ and $(A,Q^{1/2})$ detectable. By defining an input-output-to-state stability (IOSS) Lyapunov function, e.g., [@rawlings:mayne:diehl:2020 Thm. 2.24], we can apply the same approach used for nominal MPC to establish [Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"} and [Corollary 3](#cor:mss){reference-type="ref" reference="cor:mss"} for DRMPC under this weaker restriction for $Q$.

# Closed-loop guarantees: Technical proofs {#s:cl_proofs}

This section includes several technical lemmata that serve as preliminaries to the proofs of the main results of this study.

## Distributionally robust long-term performance[\[ss:standard\]]{#ss:standard label="ss:standard"}

To establish [Theorem 1](#thm:performance){reference-type="ref" reference="thm:performance"}, we begin by establishing that the feasible set $\mathcal{X}$ is RPI and providing a distributionally robust expected cost decrease condition. **Lemma 2** (DR cost decrease).
*If [\[as:convex,as:cost,as:term\]](#as:convex,as:cost,as:term){reference-type="ref" reference="as:convex,as:cost,as:term"} hold, then the feasible set $\mathcal{X}$ is RPI for [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"} and $$\label{eq:cost_dec} \mathbb{E}_{\mathds{P}}\left[V^0_d(x^+)\right] \leq V^0_d(x) - \ell(x,\kappa_d(x)) + \max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$$ for all $\mathds{P}\in\mathcal{P}_d$, $d\in\mathcal{D}$, and $x\in\mathcal{X}$.* *Proof.* Choose $x(0)\in\mathcal{X}$ and $d\in\mathcal{D}$. Define $(\mathbf{M}^0,\mathbf{v}^0)=\theta^0\in\theta^0_d(x(0))$. Consider the subsequent state $x(1)=Ax(0)+Bv^0(0)+Gw(0)$ for some $w(0)\in\mathbb{W}$. For the state $x(1)$, we choose a candidate solution $$\label{eq:candidate} \tilde{\theta}^+(w(0)) = \Big(\tilde{\mathbf{M}}^+,\tilde{\mathbf{v}}^+(w(0))\Big)$$ such that the open-loop input trajectory remains the same as the previous optimal solution, i.e., $$\label{eq:u_cand} u(k) = v^0(k) + \sum_{j=0}^{k-1}M^0(k,j)w(j) = \tilde{v}^+(k) + \sum_{j=1}^{k-1}\tilde{M}^+(k,j)w(j)$$ for all $k\in\{1,\dots,N-1\}$ and $\mathbf{w}\in\mathbb{W}^N$. With this choice of parameters, the open-loop state trajectories $x(k)$ are also the same for all $k\in\{1,\dots,N-1\}$ and $\mathbf{w}\in\mathbb{W}^N$. The candidate solution is therefore $$\tilde{\mathbf{M}}^+ = \begingroup % keep the change local \setlength\arraycolsep{1pt} \begin{bmatrix} 0 & \cdots & \cdots & 0 & 0 \\ M^0(2,1) & 0 & \cdots & 0 & 0\\ \vdots & \ddots & \ddots & \vdots & \vdots \\ M^0(N-1,1) & \cdots & M^0(N-1,N-2) & 0 & 0 \\ \tilde{M}^+(N,1) & \cdots & \tilde{M}^+(N,N-2) & \tilde{M}^+(N,N-1) & 0 \end{bmatrix} \endgroup$$ $$\tilde{\mathbf{v}}^+(w(0)) = \begin{bmatrix} v^0(1) + M^0(1,0)w(0) \\ v^0(2) + M^0(2,0)w(0) \\ \vdots \\ v^0(N-1) + M^0(N-1,0)w(0) \\ \tilde{v}^+(N) \end{bmatrix}$$ in which the last rows of $\tilde{\mathbf{M}}^+$ and $\tilde{\mathbf{v}}^+(w(0))$ are not yet defined.
We define these last rows by the terminal control law $u(N)=K_fx(N)$. Specifically, we have $$\label{eq:warmstart_N} K_fx(N) = \tilde{v}^+(N) + \sum_{i=1}^{N-1} \tilde{M}^+(N,i)w(i)$$ By definition of $x(N)$, we have $$x(N) = A^{N-1}x(1) + \sum_{i=1}^{N-1}A^{N-1-i}\Big(Bu(i) + Gw(i)\Big)$$ We then substitute the values of $u(i)$ for the candidate solution in [\[eq:u_cand\]](#eq:u_cand){reference-type="ref" reference="eq:u_cand"} to give $$\begin{aligned} x(N) = A^{N-1}x(1) &+ \sum_{i=1}^{N-1}A^{N-1-i}B\Big(v^0(i) + M^0(i,0)w(0)\Big) \\ & + \sum_{i=1}^{N-1}A^{N-1-i}\left(\sum_{j=1}^{i-1}BM^0(i,j)w(j) + G w(i)\right) \end{aligned}$$ With some manipulation, we can therefore define $$\begin{aligned} \tilde{v}^+(N) & = K_fA^{N-1}x(1) + \sum_{i=1}^{N-1}K_fA^{N-1-i}B\Big(v^0(i) + M^0(i,0)w(0)\Big) \\ \tilde{M}^+(N,i) & = K_f\left(\sum_{j=i+1}^{N-1}A^{N-1-j}BM^0(j,i) + A^{N-1-i}G\right) \end{aligned}$$ to satisfy [\[eq:warmstart_N\]](#eq:warmstart_N){reference-type="ref" reference="eq:warmstart_N"}. Note that $\tilde{\mathbf{M}}^+$ is independent of $w(0)$ and $\tilde{\mathbf{v}}^+(w(0))$ is an affine function of $w(0)$. We first establish that this candidate solution is feasible for any $w(0)\in\mathbb{W}$ and that $\mathcal{X}$ is RPI. Since $(\mathbf{M}^0,\mathbf{v}^0)\in\Pi(x(0))$, we have that $(x(k),u(k))\in\mathbb{Z}$ for all $k\in\{1,\dots,N-1\}$ and $x(N)\in\mathbb{X}_f$ for all $\mathbf{w}\in\mathbb{W}^N$. From [Assumption 3](#as:term){reference-type="ref" reference="as:term"}, we also have that $(x(N),K_fx(N))\in\mathbb{Z}$ for all $\mathbf{w}\in\mathbb{W}^N$. Therefore, $(x(N),u(N))\in\mathbb{Z}$ and $x(N+1)=(A+BK_f)x(N) + Gw(N)\in\mathbb{X}_f$ for all $w(N)\in\mathbb{W}$ by [Assumption 3](#as:term){reference-type="ref" reference="as:term"}. Thus, $(\tilde{\mathbf{M}}^+,\tilde{\mathbf{v}}^+)\in\Pi(x(1))$. Since $\Pi(x(1))\neq\emptyset$, we also know that $x(1)\in\mathcal{X}$ for any $w(0)\in\mathbb{W}$.
Since the choice of $x(0)\in\mathcal{X}$ and $d\in\mathcal{D}$ was arbitrary, we have that $\mathcal{X}$ is RPI. Choose $\mathbf{w}=(w(0),\dots,w(N-1))\in\mathbb{W}^N$ and define $\mathbf{w}^+=(w(1),\dots,w(N))$ with some additional $w(N)\in\mathbb{W}$. We have that $$\begin{gathered} \label{eq:J_diff} J(x(1),\tilde{\theta}^+(w(0)),\mathbf{w}^+) - J(x(0),\theta^0,\mathbf{w}) = -\ell(x(0),v^0(0)) \\ + V_f(x(N+1)) - V_f(x(N)) + \ell(x(N),K_fx(N)) \end{gathered}$$ We define $$\begin{aligned} \boldsymbol{\Sigma}^+ & = \arg\max_{\boldsymbol{\Sigma}\in\mathbb{B}_d^N} L(x(1),\tilde{\theta}^+(w(0)),\boldsymbol{\Sigma}) = \arg\max_{\boldsymbol{\Sigma}\in\mathbb{B}_d^N} \textnormal{tr}\left((H_u\tilde{\mathbf{M}}^+ + H_w)'(H_u\tilde{\mathbf{M}}^+ + H_w)\boldsymbol{\Sigma}\right) \end{aligned}$$ and note that $\boldsymbol{\Sigma}^+$ is independent of $w(0)$ because $\tilde{\mathbf{M}}^+$ is independent of $w(0)$. We also define the distribution $\mathds{Q}\in\mathcal{M}(\mathbb{W}^N)$ for $\mathbf{w}^+$ such that $\mathbb{E}_{\mathds{Q}}[(\mathbf{w}^+)(\mathbf{w}^+)']=\boldsymbol{\Sigma}^+$. Note that such a $\mathds{Q}$ exists because $d\in\mathcal{D}$. 
For this distribution, we take the expected value of each side of [\[eq:J_diff\]](#eq:J_diff){reference-type="ref" reference="eq:J_diff"} to give $$\begin{gathered} \mathbb{E}_{\mathds{Q}}\left[J(x(1),\tilde{\theta}^+(w(0)),\mathbf{w}^+) - J(x(0),\theta^0,\mathbf{w})\mid w(0)\right] = \\ \mathbb{E}_{\mathds{Q}}\left[V_f(x(N+1)) - V_f(x(N)) + \ell(x(N),K_fx(N))\mid w(0)\right] -\ell(x(0),v^0(0)) \end{gathered}$$ From [Assumption 3](#as:term){reference-type="ref" reference="as:term"} and the fact that $x(N)\in\mathbb{X}_f$ for all $\mathbf{w}\in\mathbb{W}^N$, we have that $$\mathbb{E}_{\mathds{Q}}\left[V_f(x(N+1)) - V_f(x(N)) + \ell(x(N),K_fx(N))\mid w(0)\right] \leq \textnormal{tr}(G'PG\Sigma_N) \leq \delta_d$$ in which $\Sigma_N=\mathbb{E}_{\mathds{Q}}[w(N)w(N)']\in\mathbb{B}_d$ and $\delta_d:=\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$. From the definition of $\boldsymbol{\Sigma}^+$ and optimality, we have $$V_d\Big(x(1),\tilde{\theta}^+(w(0))\Big) = \mathbb{E}_{\mathds{Q}}\left[J(x(1),\tilde{\theta}^+(w(0)),\mathbf{w}^+)\mid w(0)\right]$$ and therefore $$\label{eq:E_Q} V_d\Big(x(1),\tilde{\theta}^+(w(0))\Big) \leq \mathbb{E}_{\mathds{Q}}\left[J(x(0),\theta^0,\mathbf{w})\mid w(0)\right] - \ell(x(0),v^0(0)) + \delta_d$$ Choose any $\mathds{P}\in\mathcal{P}_d$ for the distribution of $w(0)$. From the definition of $\mathds{P}$ and $\mathds{Q}$, we have $$\mathbb{E}_{\mathds{P}}\left[\mathbb{E}_{\mathds{Q}}\left[J(x(0),\theta^0,\mathbf{w})\mid w(0)\right]\right] \leq V_d(x(0),\theta^0) = V^0_d(x(0))$$ because $\theta^0\in\theta^0_d(x(0))$.
We take the expected value of [\[eq:E_Q\]](#eq:E_Q){reference-type="ref" reference="eq:E_Q"} with respect to $\mathds{P}$ and use this inequality to give $$\label{eq:E_QP} \mathbb{E}_{\mathds{P}}\left[V_d\Big(x(1),\tilde{\theta}^+(w(0))\Big)\right] \leq V^0_d(x(0)) - \ell(x(0),v^0(0)) + \delta_d$$ By optimality, we have $$V_d^0(x(1)) \leq V_d\Big(x(1),\tilde{\theta}^+(w(0))\Big)$$ We combine this inequality with [\[eq:E_QP\]](#eq:E_QP){reference-type="ref" reference="eq:E_QP"} and substitute in $x=x(0)$, $x^+=x(1)$, $\kappa_d(x)=v^0(0)$ to give [\[eq:cost_dec\]](#eq:cost_dec){reference-type="ref" reference="eq:cost_dec"}. Note that the choices of $\mathds{P}\in\mathcal{P}_d$, $d\in\mathcal{D}$, and $x(0)\in\mathcal{X}$ were arbitrary and therefore [\[eq:cost_dec\]](#eq:cost_dec){reference-type="ref" reference="eq:cost_dec"} holds for all values in these sets. ◻ The key difference between [Lemma 2](#lem:costdec){reference-type="ref" reference="lem:costdec"} and the typical expected cost decrease condition for SMPC is that this inequality holds for all distributions $\mathds{P}\in\mathcal{P}_d$, i.e., the inequality is distributionally robust. We can then apply [Lemma 2](#lem:costdec){reference-type="ref" reference="lem:costdec"} to prove [Theorem 1](#thm:performance){reference-type="ref" reference="thm:performance"}. *Proof of [Theorem 1](#thm:performance){reference-type="ref" reference="thm:performance"}.* Choose $x\in\mathcal{X}$, $d\in\mathcal{D}$, and $\mathds{P}\in\mathcal{P}_d^{\infty}$. Define $x(i)=\phi_d(i;x,\mathbf{w}_{\infty})$ and $u(i)=\kappa_d(x(i))$. From [Lemma 2](#lem:costdec){reference-type="ref" reference="lem:costdec"}, we have that $\mathcal{X}$ is RPI and $$\label{eq:costdec_Q} \mathbb{E}_{\mathds{Q}}\left[V^0_d(x(i+1))\mid x(i)\right] \leq V^0_d(x(i)) - \ell(x(i),u(i)) + \max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$$ for all $\mathds{Q}\in\mathcal{P}_d$. 
Let $\delta_d:=\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$ to streamline notation. From the law of total expectation and [\[eq:costdec_Q\]](#eq:costdec_Q){reference-type="ref" reference="eq:costdec_Q"}, we have $$\mathbb{E}_{\mathds{P}}\left[\ell(x(i),u(i))\right] \leq \mathbb{E}_{\mathds{P}}\left[V^0_d(x(i))\right] - \mathbb{E}_{\mathds{P}}\left[V^0_d(x(i+1))\right] + \delta_d$$ We sum both sides of this inequality from $i=0$ to $i=k-1$ and divide by $k\geq 1$ to give $$\mathcal{J}_k(x,d,\mathds{P}) \leq \frac{\mathbb{E}_{\mathds{P}}\left[V^0_d(x(0))\right] - \mathbb{E}_{\mathds{P}}\left[V^0_d(x(k))\right]}{k} + \delta_d$$ Note that $V^0_d(x(k))\geq 0$. We take the $\limsup$ as $k\rightarrow \infty$ of both sides of the inequality and substitute in the definition of $\delta_d$ to give [\[eq:performance\]](#eq:performance){reference-type="ref" reference="eq:performance"}. ◻

## Distributionally robust stage-wise performance

To establish [Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"}, we first establish the following upper bound for the optimal cost function. **Lemma 3** (Upper bound). *If [\[as:convex,as:cost,as:term,as:track\]](#as:convex,as:cost,as:term,as:track){reference-type="ref" reference="as:convex,as:cost,as:term,as:track"} hold, then $$\label{eq:upper_bound} V^0_d(x) \leq c_2|x|^2 + N\left(\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)\right)$$ for all $d\in\mathcal{D}$ and $x\in\mathcal{X}$.* *Proof.* For any $x\in\mathbb{X}_f$, we define the control law as $u=K_fx$ from [Assumption 3](#as:term){reference-type="ref" reference="as:term"}. Therefore, $$\mathbf{u}= \mathbf{K}_f\mathbf{x}, \quad \mathbf{x}= (I-\mathbf{B}\mathbf{K}_f)^{-1}(\mathbf{A}x + \mathbf{G}\mathbf{w})$$ in which $\mathbf{K}_f := \begin{bmatrix} I_N\otimes K_f & 0\end{bmatrix}$. Note that the inverse $(I-\mathbf{B}\mathbf{K}_f)^{-1}$ exists because $\mathbf{B}\mathbf{K}_f$ is nilpotent (lower triangular with zeros along the diagonal).
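The invertibility claim for $(I-\mathbf{B}\mathbf{K}_f)$ can be sketched numerically: a block strictly lower triangular matrix $T$ satisfies $T^N=0$, so $(I-T)^{-1}=I+T+\dots+T^{N-1}$ always exists (illustrative dimensions below):

```python
import numpy as np

# Sketch: a block strictly lower triangular T (N block rows) is nilpotent,
# T^N = 0, so I - T is invertible with Neumann-series inverse
# I + T + T^2 + ... + T^{N-1}. Dimensions are illustrative.
rng = np.random.default_rng(2)
N, n = 4, 2
T = np.zeros((N * n, N * n))
for i in range(N):
    for j in range(i):
        T[i * n:(i + 1) * n, j * n:(j + 1) * n] = rng.standard_normal((n, n))

assert np.allclose(np.linalg.matrix_power(T, N), 0.0)

I = np.eye(N * n)
neumann = sum(np.linalg.matrix_power(T, k) for k in range(N))
assert np.allclose((I - T) @ neumann, I)
```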
We represent this control law as $\theta_f(x)=(\mathbf{M}_f,\mathbf{v}_f(x))$ so that $$\label{eq:theta_f} \mathbf{v}_f(x) := \mathbf{K}_f(I-\mathbf{B}\mathbf{K}_f)^{-1}\mathbf{A}x \qquad \mathbf{M}_f := \mathbf{K}_f(I-\mathbf{B}\mathbf{K}_f)^{-1}\mathbf{G}$$ We have from [Assumption 3](#as:term){reference-type="ref" reference="as:term"} that this control law ensures that $(x(k),u(k))\in\mathbb{Z}$ and $x(k+1)\in\mathbb{X}_f$ for all $k\in\{0,\dots,N-1\}$. Therefore $\theta_f(x)\in\Pi(x)$ for all $x\in\mathbb{X}_f$ and $d\in\mathcal{D}$. Choose any $x\in\mathbb{X}_f$ and $d\in\mathcal{D}$. Choose any $\boldsymbol{\Sigma}\in\mathbb{B}^N_d$ and corresponding $\mathds{P}\in\mathcal{P}_d^N$ such that $\mathbb{E}_{\mathds{P}}\left[\mathbf{w}\mathbf{w}'\right]=\boldsymbol{\Sigma}$. From [Assumption 3](#as:term){reference-type="ref" reference="as:term"}, we have that $$\mathbb{E}_{\mathds{P}}\left[V_f(x(k+1)) - V_f(x(k)) + \ell(x(k),K_fx(k))\right] \leq \textnormal{tr}(G'PG\Sigma_k) \leq \delta_d$$ in which $x(k+1)=(A+BK_f)x(k)+Gw(k)$ for all $k\in\{0,1,\dots,N-1\}$, $x(0)=x$, and $\delta_d:=\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$. We sum both sides of this inequality from $k=0$ to $k=N-1$ and rearrange to give $$L(x,\theta_f(x),\boldsymbol{\Sigma}) \leq V_f(x(0)) + N\delta_d$$ for all $\boldsymbol{\Sigma}\in\mathbb{B}^N_d$. Therefore, $$V^0_d(x) \leq V_d(x,\theta_f(x)) \leq V_f(x) + N\delta_d \leq \bar{\lambda}_P|x|^2 + N\delta_d$$ in which $\bar{\lambda}_P$ is the maximum eigenvalue of $P$ for all $x\in\mathbb{X}_f$. If $\mathbb{X}_f=\mathbb{R}^n$, the proof is complete because $\mathcal{X}\subseteq\mathbb{R}^n$. Otherwise, we use the fact that $\mathcal{X}$ is bounded to extend this bound to all $x\in\mathcal{X}$. Define the function $$F(x) = \sup \left\{V^0_d(x) - N\delta_d \ \middle| \ d\in\mathcal{D}\right\}$$ Since $\mathbb{W}$ is bounded, $\mathcal{D}$ is bounded as well ([Lemma 7](#lem:calD){reference-type="ref" reference="lem:calD"}). 
Therefore, $F(x)$ is finite for all $x\in\mathcal{X}$. We further define $$r := \sup\left\{\frac{F(x)}{|x|^2} \ \middle| \ x\in\mathcal{X}\setminus \mathbb{X}_f\right\}$$ Note that since $\mathbb{X}_f$ contains the origin in its interior and $F(x)$ is finite for all $x\in\mathcal{X}$, $r$ exists and is finite. Therefore, $$V^0_d(x) \leq F(x) + N\delta_d \leq r|x|^2 + N\delta_d$$ for all $x\in\mathcal{X}\setminus\mathbb{X}_f$. We define $c_2:=\max\{r,\bar{\lambda}_P\}$ and substitute in the definition of $\delta_d$ to complete the proof. ◻ With this upper bound, we prove [Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"} by using $V_d^0(x)$ as a Lyapunov-like function. *Proof of [Theorem 2](#thm:resie){reference-type="ref" reference="thm:resie"}.* Since $Q\succ 0$, there exists $c_1>0$ such that $$c_1|x|^2 \leq \ell(x,\kappa_d(x)) \leq V^0_d(x)$$ for all $x\in\mathcal{X}$. Let $\delta_d=\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$. From [\[eq:upper_bound\]](#eq:upper_bound){reference-type="ref" reference="eq:upper_bound"}, we have $$-|x|^2 \leq -(1/c_2)V^0_d(x) + (N\delta_d/c_2)$$ We combine this inequality and the lower bound for $\ell(x,\kappa_d(x))$ with [\[eq:cost_dec\]](#eq:cost_dec){reference-type="ref" reference="eq:cost_dec"} to give $$\label{eq:exp_costdec} \mathbb{E}_{\mathds{Q}}\left[V^0_d(x^+)\mid x\right] \leq \lambda V^0_d(x) + (1+Nc_1/c_2)\delta_d$$ in which $\lambda = (1-c_1/c_2)\in(0,1)$ and $x^+=Ax+B\kappa_d(x)+Gw$ for all $\mathds{Q}\in\mathcal{P}_d$, $d\in\mathcal{D}$, and $x\in\mathcal{X}$. Choose $x\in\mathcal{X}$, $d\in\mathcal{D}$, and $\mathds{P}\in\mathcal{P}_d^{\infty}$. Define $x(k)=\phi_d(k;x,\mathbf{w}_{\infty})$ and $u(k)=\kappa_d(x(k))$. Note that because $\mathcal{X}$ is RPI for the closed-loop system ([Lemma 2](#lem:costdec){reference-type="ref" reference="lem:costdec"}), $x(k)\in\mathcal{X}$ for all $x\in\mathcal{X}$, $d\in\mathcal{D}$, $\mathbf{w}_{\infty}\in\mathbb{W}^{\infty}$, and $k\geq 0$.
Therefore, from [\[eq:exp_costdec\]](#eq:exp_costdec){reference-type="ref" reference="eq:exp_costdec"} we have $$\label{eq:exp_costdec_k} \mathbb{E}_{\mathds{Q}}\left[V^0_d(x(k+1))\mid x(k)\right] \leq \lambda V^0_d(x(k)) + (1+Nc_1/c_2)\delta_d$$ for all $\mathds{Q}\in\mathcal{P}_d$. We take the expected value of [\[eq:exp_costdec_k\]](#eq:exp_costdec_k){reference-type="ref" reference="eq:exp_costdec_k"} with respect to $\mathds{P}$ and the corresponding $\mathds{Q}$ to give $$\label{eq:E_exp_costdec} \mathbb{E}_{\mathds{P}}\left[V^0_d(x(k+1))\right] \leq \lambda \mathbb{E}_{\mathds{P}}\left[V^0_d(x(k))\right] + (1+Nc_1/c_2)\delta_d$$ By iterating [\[eq:E_exp_costdec\]](#eq:E_exp_costdec){reference-type="ref" reference="eq:E_exp_costdec"}, we have $$\mathbb{E}_{\mathds{P}}\left[V^0_d(x(k))\right] \leq \lambda^k V^0_d(x(0)) + \frac{1+Nc_1/c_2}{1-\lambda} \delta_d$$ Substitute in the lower and upper bounds for $V^0_d(\cdot)$, rearrange, and define $c:=c_2/c_1$ and $\gamma := N + (c_1^{-1}+Nc_2^{-1})/(1-\lambda)$ to give [\[eq:resie\]](#eq:resie){reference-type="ref" reference="eq:resie"}. ◻

## Pathwise input-to-state stability

To establish [Theorem 4](#thm:iss){reference-type="ref" reference="thm:iss"}, we first prove the following property of the DRMPC control law within the terminal region $\mathbb{X}_f$, similar to [@goulart:2007 Lemma 4.18]. **Lemma 4** (Terminal control law). *If [\[as:convex,as:cost,as:track,as:term_lqr\]](#as:convex,as:cost,as:track,as:term_lqr){reference-type="ref" reference="as:convex,as:cost,as:track,as:term_lqr"} hold, then $$\kappa_d(x) = K_fx$$ for all $x\in\mathbb{X}_f$ and $d\in\mathcal{D}$.
Moreover, $\mathbb{X}_f$ is RPI for the closed-loop system in [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"}.* *Proof.* From the definition of $P$ and $K_f$ in [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}, we have for any $\mathds{P}\in\mathcal{M}(\mathbb{W}^{N})$ that $$\mathbb{E}_{\mathds{P}}\left[ \Phi(x,\mathbf{u},\mathbf{w}) \right] = |x|_P^2 + \mathbb{E}_{\mathds{P}}\left[|\mathbf{u}-\mathbf{K}_f\mathbf{x}|^2_{\mathbf{S}}\right] + \mathbb{E}_{\mathds{P}}\left[ |\mathbf{w}|^2_{\mathbf{P}}\right]$$ in which $\mathbf{K}_f := \begin{bmatrix}I_N\otimes K_f & 0 \end{bmatrix}$, $\mathbf{S}:= I_N \otimes (R+B'PB)$, and $\mathbf{P}:= I_N\otimes (G'PG)$ (see [@goulart:2007 eq. (4.46)]). Using the control law parameterization $\theta=(\mathbf{M},\mathbf{v})$, we have $$\begin{aligned} \label{eq:EJ} \mathbb{E}_{\mathds{P}}\left[ J(x,\theta,\mathbf{w}) \right] = |x|_P^2 &+ |\mathbf{v}- \mathbf{K}_f\mathbf{A}x - \mathbf{K}_f\mathbf{B}\mathbf{v}|^2_{\mathbf{S}} \\ &+ \mathbb{E}_{\mathds{P}}\left[|(\mathbf{M}-\mathbf{K}_f\mathbf{B}\mathbf{M}- \mathbf{K}_f\mathbf{G})\mathbf{w}|^2_{\mathbf{S}}\right] + \mathbb{E}_{\mathds{P}}\left[ |\mathbf{w}|^2_{\mathbf{P}}\right] \end{aligned}$$ The optimal value is therefore bounded below by $$\min_{\theta\in\Pi(x)}\max_{\mathds{P}\in\mathcal{P}_d^N} \mathbb{E}_{\mathds{P}}\left[ J(x,\theta,\mathbf{w}) \right] \geq |x|_P^2 + \max_{\boldsymbol{\Sigma}\in\mathbb{B}_{d}^N}\textnormal{tr}\Big(\mathbf{P}\boldsymbol{\Sigma}\Big) \label{eq:Vopt_lb}$$ for all $x\in\mathcal{X}$. This lower bound is attained by the candidate solution $$\mathbf{v}_c(x) := (I - \mathbf{K}_f\mathbf{B})^{-1}\mathbf{K}_f\mathbf{A}x \qquad \mathbf{M}_c := (I-\mathbf{K}_f\mathbf{B})^{-1}\mathbf{K}_f\mathbf{G}$$ and $\theta_c(x)=(\mathbf{M}_c,\mathbf{v}_c(x))$ for any $\mathds{P}\in\mathcal{P}_d^N$.
Note that the inverse $\left(I-\mathbf{K}_f\mathbf{B}\right)^{-1}$ exists because $\mathbf{K}_f\mathbf{B}$ is nilpotent (lower triangular with zeros along the diagonal). By application of the matrix inversion lemma, we have that $\theta_c(x)=\theta_f(x)$ in [\[eq:theta_f\]](#eq:theta_f){reference-type="ref" reference="eq:theta_f"}. Therefore $\theta_c(x)=\theta_f(x)\in\Pi(x)$ for all $x\in\mathbb{X}_f$. Moreover, the solution $\mathbf{v}_c(x)$ is unique because $\mathbf{S}\succ 0$. Therefore, $$\kappa_d(x) = v^0(0;x) = v_c(0;x) = K_fx$$ is the unique control law for all $x\in\mathbb{X}_f$. Since $\kappa_d(x)=K_fx$ for all $x\in\mathbb{X}_f$ and $\mathbb{X}_f$ is RPI for the system $x^+ = (A+BK_f)x + Gw$, we have that $\mathbb{X}_f$ is also RPI for [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"}. ◻ [Lemma 4](#lem:K_f){reference-type="ref" reference="lem:K_f"} ensures that within the terminal region, the DRMPC control law is *equivalent* to the LQR control law defined by the DARE in [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}. Moreover, this controller is the same regardless of the choice of $d\in\mathcal{D}$. This control law also renders the terminal set RPI. Therefore, once the state of the system reaches the terminal region, there is no difference between the control laws for DRMPC, SMPC, RMPC, and LQR. With this result, we can now prove [Theorem 4](#thm:iss){reference-type="ref" reference="thm:iss"}. *Proof of [Theorem 4](#thm:iss){reference-type="ref" reference="thm:iss"}.* Choose $d\in\mathcal{D}$ and $x\in\mathcal{X}$. We define $(\mathbf{M}^0,\mathbf{v}^0)=\theta^0\in\theta^0_d(x)$ and the corresponding candidate solution $(\tilde{\mathbf{M}}^+,\tilde{\mathbf{v}}^+(w)) = \tilde{\theta}^+(w)$ defined in [\[eq:candidate\]](#eq:candidate){reference-type="ref" reference="eq:candidate"}.
Recall that $\tilde{\mathbf{v}}^+(w)$ is an affine function of $w$, i.e., $$\tilde{\mathbf{v}}^+(w) = c + Zw$$ in which $c\in\mathbb{R}^{Nm}$ and $Z\in\mathbb{R}^{Nm\times q}$ are fixed quantities for a given $\theta^0$. Let $$\widehat{x}^+=Ax+B\kappa_d(x) = Ax+Bv^0(0)$$ From the proof of [Lemma 2](#lem:costdec){reference-type="ref" reference="lem:costdec"}, we have that $$\mathbb{E}_{\mathds{Q}}\left[V_d(\widehat{x}^++Gw,\tilde{\theta}^+(w))\right] \leq V_d^0(x) - \ell(x,v^0(0)) + \delta_d\label{eq:costdec_xhat}$$ for all $\mathds{Q}\in\mathcal{P}_d$ in which $\delta_d:=\max_{\Sigma\in\mathbb{B}_d}\textnormal{tr}(G'PG\Sigma)$. Define $$\begin{aligned} \boldsymbol{\Sigma}^+ & := \arg\max_{\boldsymbol{\Sigma}\in\mathbb{B}_{d}^N} L(\widehat{x}^++Gw,\tilde{\theta}^+(w),\boldsymbol{\Sigma}) = \arg\max_{\boldsymbol{\Sigma}\in\mathbb{B}_{d}^N} \textnormal{tr}\left((H_u\tilde{\mathbf{M}}^++H_w)'(H_u\tilde{\mathbf{M}}^++H_w)\boldsymbol{\Sigma}\right) \end{aligned}$$ and note that $\boldsymbol{\Sigma}^+$ is independent of $w$ because $\tilde{\mathbf{M}}^+$ is independent of $w$. We also define $\mathds{P}\in\mathcal{P}_d^N$ such that $\boldsymbol{\Sigma}^+=\mathbb{E}_{\mathds{P}}\left[(\mathbf{w}^+)(\mathbf{w}^+)'\right]$. By applying [\[eq:EJ\]](#eq:EJ){reference-type="ref" reference="eq:EJ"}, we have $$\begin{gathered} L(\widehat{x}^+,\tilde{\theta}^+(0),\boldsymbol{\Sigma}^+) - L(\widehat{x}^++Gw,\tilde{\theta}^+(w),\boldsymbol{\Sigma}^+) = |\widehat{x}^+|_P^2 + |c-\mathbf{K}_f\mathbf{A}\widehat{x}^+ - \mathbf{K}_f\mathbf{B}c|_{\mathbf{S}}^2 \\ - |\widehat{x}^++Gw|_P^2 - |c+Zw - \mathbf{K}_f\mathbf{A}(\widehat{x}^++Gw) - \mathbf{K}_f\mathbf{B}(c+Zw)|^2_{\mathbf{S}} \label{eq:L_diff} \end{gathered}$$ Note that the terms involving $\mathds{P}$ and $\tilde{\mathbf{M}}^+$ in [\[eq:EJ\]](#eq:EJ){reference-type="ref" reference="eq:EJ"} do not change with $w$ and therefore vanish in this difference.
By the definition of $\boldsymbol{\Sigma}^+$ and optimality, we have that $$V_d^0(\widehat{x}^+) - V_d(\widehat{x}^++Gw,\tilde{\theta}^+(w)) \leq L(\widehat{x}^+,\tilde{\theta}^+(0),\boldsymbol{\Sigma}^+) - L(\widehat{x}^++Gw,\tilde{\theta}^+(w),\boldsymbol{\Sigma}^+) \label{eq:V_diff}$$ We now define $$\overline{\Sigma} := \arg\max_{\Sigma\in\mathbb{B}_{d}}\textnormal{tr}(G'PG\Sigma)$$ and choose $\overline{\mathds{Q}}\in\mathcal{P}_d$ such that $\mathbb{E}_{\overline{\mathds{Q}}}[ww']=\overline{\Sigma}$. We therefore combine [\[eq:L_diff\]](#eq:L_diff){reference-type="ref" reference="eq:L_diff"}, [\[eq:V_diff\]](#eq:V_diff){reference-type="ref" reference="eq:V_diff"}, and take the expected value with respect to $\overline{\mathds{Q}}$ to give $$\begin{aligned} & V_d^0(\widehat{x}^+) - \mathbb{E}_{\overline{\mathds{Q}}}\left[V_d(\widehat{x}^++Gw,\tilde{\theta}^+(w))\right] = -\textnormal{tr}(G'PG\overline{\Sigma}) - \mathbb{E}_{\overline{\mathds{Q}}}\left[|Zw-\mathbf{K}_f\mathbf{A}Gw - \mathbf{K}_f\mathbf{B}Zw|^2_{\mathbf{S}}\right] \leq -\delta_d \end{aligned}$$ We combine this inequality with [\[eq:costdec_xhat\]](#eq:costdec_xhat){reference-type="ref" reference="eq:costdec_xhat"} to give $$\label{eq:Vopt_costdec} V_d^0(\widehat{x}^+) \leq V_d^0(x) - \ell(x,v^0(0)) \leq V_d^0(x) - c_3|x|^2$$ in which $c_3>0$ because $Q\succ 0$. Note that the choice of $x\in\mathcal{X}$ was arbitrary and therefore this inequality holds for all $x\in\mathcal{X}$. Next, we define the Lyapunov function $$H(x) := V_d^0(x) - \max_{\boldsymbol{\Sigma}\in\mathbb{B}_{d}^N}\textnormal{tr}(\mathbf{P}\boldsymbol{\Sigma})$$ in which $\mathbf{P}:=I_N\otimes (G'PG)$ from [\[eq:Vopt_lb\]](#eq:Vopt_lb){reference-type="ref" reference="eq:Vopt_lb"}. Note that $H:\mathcal{X}\rightarrow\mathbb{R}_{\geq 0}$ is convex because $V_d^0(x)$ is convex (Danskin's Theorem).
From [\[eq:Vopt_costdec\]](#eq:Vopt_costdec){reference-type="ref" reference="eq:Vopt_costdec"}, [\[eq:Vopt_lb\]](#eq:Vopt_lb){reference-type="ref" reference="eq:Vopt_lb"}, and [Lemma 3](#lem:upperbound){reference-type="ref" reference="lem:upperbound"}, there exist $c_1,c_2,c_3>0$ such that $c_1|x|^2\leq H(x)\leq c_2|x|^2$ and $$H(\widehat{x}^+) \leq H(x) - c_3|x|^2$$ Since $H(x)$ is a convex Lyapunov function, $x^+=\widehat{x}^++Gw$, and $\mathcal{X}$ is compact with the origin in its interior, we have from [@goulart:2007 Prop. 4.13] that [\[eq:cl\]](#eq:cl){reference-type="ref" reference="eq:cl"} is ISS for any $d\in\mathcal{D}$. ◻

## Exact long-term performance

For the class of disturbances in $\mathcal{Q}$, @munoz:cannon:2020 established that ISS systems converge to the minimal RPI set for the system with probability one. By [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}, the terminal set must contain the minimal RPI set for the system. Thus, we have the following result adapted from [@munoz:cannon:2020 Thm. 5]. **Lemma 5** (Convergence to terminal set). *If [\[as:convex,as:cost,as:track,as:term_lqr\]](#as:convex,as:cost,as:track,as:term_lqr){reference-type="ref" reference="as:convex,as:cost,as:track,as:term_lqr"} hold, then for all $\mathds{P}\in\mathcal{Q}$, $d\in\mathcal{D}$, and $x\in\mathcal{X}$, there exists $p\in[0,\infty)$ such that $$\sum_{k=0}^{\infty}\mathds{P}\big(\phi_d(k;x,\mathbf{w}_{\infty})\notin\mathbb{X}_f\big) \leq p$$* From [Lemma 4](#lem:K_f){reference-type="ref" reference="lem:K_f"} and the Borel-Cantelli lemma, [Lemma 5](#lem:bc){reference-type="ref" reference="lem:bc"} implies that for all $x\in\mathcal{X}$, we have $$\mathds{P}\left(\lim_{k\rightarrow\infty} \phi_d(k;x,\mathbf{w}_{\infty})\in\mathbb{X}_f\right) = 1$$ In other words, the state of the closed-loop system converges to the terminal set $\mathbb{X}_f$ with probability one.
Once in $\mathbb{X}_f$, the closed-loop state remains in this terminal set by applying the fixed control law $K_fx$ for all subsequent time steps ([Lemma 4](#lem:K_f){reference-type="ref" reference="lem:K_f"}). The long-term performance of the closed-loop system is therefore determined by the control law $K_f$ and, by definition, the matrix $P$ from the DARE in [\[eq:dare\]](#eq:dare){reference-type="ref" reference="eq:dare"}. We now prove [Theorem 5](#thm:performance_opt){reference-type="ref" reference="thm:performance_opt"} by formalizing these arguments. *Proof of [Theorem 5](#thm:performance_opt){reference-type="ref" reference="thm:performance_opt"}.* Choose $d\in\mathcal{D}$, $x\in\mathcal{X}$, and $\mathds{P}\in\mathcal{Q}$. Define $x(i)=\phi_d(i;x,\mathbf{w}_{\infty})$, $u(i)=\kappa_d(x(i))$, $\Sigma=\mathbb{E}_{\mathds{P}}\left[w(i)w(i)'\right]$, and $$\zeta(i) := |x(i+1)|_P^2 - |x(i)|_P^2 + \ell(x(i),u(i)) - |Gw(i)|_P^2$$ Recall that $x(i)\in\mathcal{X}$ for all $i\geq 0$ because $\mathcal{X}$ is RPI. From [Lemma 4](#lem:K_f){reference-type="ref" reference="lem:K_f"}, we have that if $x(i)\in\mathbb{X}_f$, then $u(i)=K_fx(i)$ and therefore $\mathbb{E}_{\mathds{P}}\left[\zeta(i)\right] = 0$. Therefore, we have $$\mathbb{E}_{\mathds{P}}\left[\zeta(i)\right] = \mathbb{E}_{\mathds{P}}\left[\zeta(i)\mid x(i)\notin\mathbb{X}_f\right]\mathds{P}\left(x(i)\notin\mathbb{X}_f\right)$$ Since $\mathcal{X}$, $\mathbb{U}$, and $\mathbb{W}$ are bounded, there exists $\eta\geq 0$ such that $|\zeta(i)|\leq \eta$.
Thus, $$-\eta\mathds{P}\left(x(i)\notin\mathbb{X}_f\right)\leq \mathbb{E}_{\mathds{P}}\left[\zeta(i)\right] \leq \eta\mathds{P}\left(x(i)\notin\mathbb{X}_f\right)$$ By definition $$\mathbb{E}_{\mathds{P}}\left[\ell(x(i),u(i)) \right] = \mathbb{E}_{\mathds{P}}\left[|x(i)|_P^2 - |x(i+1)|_P^2\right] + \mathbb{E}_{\mathds{P}}\left[\zeta(i)\right] + \textnormal{tr}(G'PG\Sigma)$$ We sum both sides from $i=0$ to $k-1$, divide by $k\geq 1$, and rearrange to give $$\label{eq:sum_delta} \mathcal{J}_k(x,d,\mathds{P}) = \frac{|x(0)|^2_P - \mathbb{E}_{\mathds{P}}\left[|x(k)|_P^2\right]}{k} + \frac{1}{k}\sum_{i=0}^{k-1}\mathbb{E}_{\mathds{P}}\left[\zeta(i)\right] + \textnormal{tr}(G'PG\Sigma)$$ We apply the upper bound on $\mathbb{E}_{\mathds{P}}\left[\zeta(i)\right]$ in [\[eq:sum_delta\]](#eq:sum_delta){reference-type="ref" reference="eq:sum_delta"} and note that $|x(k)|_P^2\geq 0$ to give $$\mathcal{J}_k(x,d,\mathds{P}) \leq \frac{|x(0)|^2_P}{k} + \frac{\eta}{k}\sum_{i=0}^{k-1}\mathds{P}\left(x(i)\notin\mathbb{X}_f\right) + \textnormal{tr}(G'PG\Sigma)$$ From [Lemma 5](#lem:bc){reference-type="ref" reference="lem:bc"}, there exists $p\in[0,\infty)$ such that $$\mathcal{J}_k(x,d,\mathds{P}) \leq \frac{|x(0)|^2_P+\eta p}{k} + \textnormal{tr}(G'PG\Sigma)$$ Therefore, $$\label{eq:limsup} \limsup_{k\rightarrow \infty} \mathcal{J}_k(x,d,\mathds{P})\leq \textnormal{tr}(G'PG\Sigma)$$ We apply the lower bound for $\mathbb{E}_{\mathds{P}}\left[\zeta(i)\right]$ in [\[eq:sum_delta\]](#eq:sum_delta){reference-type="ref" reference="eq:sum_delta"} and note that $|x(0)|_P^2\geq 0$ to give $$\mathcal{J}_k(x,d,\mathds{P}) \geq \frac{-\mathbb{E}_{\mathds{P}}\left[|x(k)|^2_P\right]}{k} - \frac{\eta}{k}\sum_{i=0}^{k-1}\mathds{P}\left(x(i)\notin\mathbb{X}_f\right) + \textnormal{tr}(G'PG\Sigma)$$ From [Lemma 5](#lem:bc){reference-type="ref" reference="lem:bc"}, there exists $p\in[0,\infty)$ such that $$\mathcal{J}_k(x,d,\mathds{P})\geq \frac{-\mathbb{E}_{\mathds{P}}\left[|x(k)|^2_P\right]- \eta p}{k} 
+ \textnormal{tr}(G'PG\Sigma)$$ Note that $\mathbb{E}_{\mathds{P}}\left[|x(k)|^2_P\right]$ is bounded from [Corollary 3](#cor:mss){reference-type="ref" reference="cor:mss"} and therefore $$\label{eq:liminf} \liminf_{k\rightarrow\infty}\mathcal{J}_k(x,d,\mathds{P}) \geq \textnormal{tr}(G'PG\Sigma)$$ We combine [\[eq:limsup\]](#eq:limsup){reference-type="ref" reference="eq:limsup"} and [\[eq:liminf\]](#eq:liminf){reference-type="ref" reference="eq:liminf"} to give $$\lim_{k\rightarrow\infty}\mathcal{J}_k(x,d,\mathds{P}) = \textnormal{tr}(G'PG\Sigma)$$ Since the choice of $d\in\mathcal{D}$, $\mathds{P}\in\mathcal{Q}$, and $x\in\mathcal{X}$ was arbitrary, this equality holds for all $d\in\mathcal{D}$, $\mathds{P}\in\mathcal{Q}$, and $x\in\mathcal{X}$. ◻

# Scalable algorithms {#s:algorithms}

We assume for the subsequent discussion that $\varepsilon >0$ and $\theta$ is in a vectorized form, i.e., $\mathbf{M}$ is converted to a vector. We first present an exact reformulation of the DRO problem in [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"}. We then introduce the Frank-Wolfe algorithm and subsequently the proposed Newton-type saddle point algorithm.

## Exact reformulation

Using existing results in [@nguyen:kuhn:mohajerin:2022 Prop. 2.8] and [@kuhn:et-al:2019 Thm. 16], we provide an exact reformulation of [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"} via linear matrix inequalities (LMIs) to serve as a baseline for the subsequently proposed Frank-Wolfe and Newton-type algorithms. **Proposition 6** (Exact LMI reformulation). *Let [\[as:convex,as:cost\]](#as:convex,as:cost){reference-type="ref" reference="as:convex,as:cost"} hold and $x\in\mathcal{X}$.
For any $\varepsilon>0$ and $\widehat{\Sigma}\succeq 0$, the min-max problem in [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"} is equivalent to the optimization problem $$\begin{aligned} \inf_{\mathbf{M},\mathbf{v},Z,Y,\gamma} \ & |H_x x+ H_u\mathbf{v}|^2 + \sum_{k=0}^{N-1}\left(c\gamma_k + \textnormal{tr}(Y_k)\right) \label{eq:lmi}\\ \textnormal{s.t. } & \gamma_k I \succeq Z_k \quad \forall \ k\in\{0,\dots,N-1\} \nonumber \\ & \begin{bmatrix} Y_k & \gamma_k\widehat{\Sigma}^{1/2} \\ \gamma_k\widehat{\Sigma}^{1/2} & \gamma_k I - Z_{k} \end{bmatrix} \succeq 0 \quad \forall \ k\in\{0,\dots,N-1\} \nonumber \\ & \begin{bmatrix} Z & (H_u\mathbf{M}+ H_w)' \\ (H_u\mathbf{M}+ H_w) & I \end{bmatrix} \succeq 0 \nonumber \\ & (\mathbf{M},\mathbf{v})\in\Pi(x) \nonumber \end{aligned}$$ in which $c=\varepsilon^2-\textnormal{tr}(\widehat{\Sigma})$ and $Z_{k}\in\mathbb{R}^{q\times q}$ is the $k^{\rm th}$ block diagonal of $Z$.* *Proof of [Proposition 6](#prop:sdp){reference-type="ref" reference="prop:sdp"}.* Define $\tilde{Z}(\theta)=(H_u\mathbf{M}+H_w)'(H_u\mathbf{M}+H_w)\succeq 0$. We have that $$V_d(x,\theta) = \max_{\boldsymbol{\Sigma}\in\mathbb{B}_d^N}\textnormal{tr}\Big(\tilde{Z}(\theta)\boldsymbol{\Sigma}\Big) = \min_{Z\succeq \tilde{Z}(\theta)}\max_{\boldsymbol{\Sigma}\in\mathbb{B}_d^N}\textnormal{tr}\Big(Z\boldsymbol{\Sigma}\Big)$$ From the structure of $\boldsymbol{\Sigma}\in\mathbb{B}_d^N$, we have $$\max_{\boldsymbol{\Sigma}\in\mathbb{B}_d^N}\textnormal{tr}\Big(Z\boldsymbol{\Sigma}\Big) = \sum_{k=0}^{N-1}\max_{\Sigma_k\in\mathbb{B}_d}\textnormal{tr}\Big(Z_k\Sigma_k\Big) \label{eq:sum_Sigma}$$ in which $Z_k\in\mathbb{R}^{q\times q}$ is the $k$-th block diagonal of $Z$. From [@kuhn:et-al:2019 Thm. 16] and [@nguyen:kuhn:mohajerin:2022 Prop. 2.8], we can write the dual of $\max_{\Sigma_k\in\mathbb{B}_d}\textnormal{tr}(Z_k\Sigma_k)$ via an LMI. 
Substituting this dual formulation into [\[eq:sum_Sigma\]](#eq:sum_Sigma){reference-type="ref" reference="eq:sum_Sigma"} and reformulating $Z \succeq (H_u\mathbf{M}+H_w)'(H_u\mathbf{M}+H_w)$ via Schur complement gives [\[eq:lmi\]](#eq:lmi){reference-type="ref" reference="eq:lmi"}. ◻ If $\mathbb{Z}$, $\mathbb{X}_f$, and $\mathbb{W}$ are polytopes, then this reformulation can be solved as an LMI optimization problem with standard software such as MOSEK [@mosek:2022]. While this LMI optimization problem can be solved quickly and reliably for small problems, it unfortunately does not scale to larger problems, in contrast to the standard QPs encountered in linear MPC formulations. **Remark 2** (Saddle point). Another interesting fact is that the min-max problem [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"} admits a non-empty and compact set of saddle points $(\theta^*,\boldsymbol{\Sigma}^*)$ for any $x\in\mathcal{X}$ and $d\in\mathcal{D}$. This fact is a classical result for the convex-concave function $L(x,\cdot)$ ensured by the convexity and compactness of $\Pi(x)$ ([Lemma 1](#lem:compact){reference-type="ref" reference="lem:compact"}) and $\mathbb{B}_d^N$ [@nguyen:shafieezadeh:kuhn:mohajerin:2023 Lemma A.6], see for instance [@ref:bertsekas2009convex Prop. 5.5.7].

## Frank-Wolfe

Frank-Wolfe algorithms that exploit the structure of the min-max program in [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"} have shown promising results for similar DRO problems (e.g., [@nguyen:shafieezadeh:kuhn:mohajerin:2023; @sheriff:mohajerin:2023]). Thus, we propose such an algorithm here, building on these previous works. We subsequently assume that $\widehat{\Sigma}\succ 0$. For fixed $x\in\mathcal{X}$ and $d\in\mathcal{D}$, we define the objective function $$\label{eq:innermax} f(\theta) := \max_{\boldsymbol{\Sigma}\in\mathbb{B}_d^N} L_x(\theta,\boldsymbol{\Sigma})$$ in which $L_x(\cdot)=L(x,\cdot)$.
Note that the inner maximization problem in [\[eq:innermax\]](#eq:innermax){reference-type="ref" reference="eq:innermax"} is *linear* in $\boldsymbol{\Sigma}$. Furthermore, the structure of $\mathbb{B}_d^N$ allows us to rewrite [\[eq:innermax\]](#eq:innermax){reference-type="ref" reference="eq:innermax"} as $$\label{eq:innermax_iid} L_x(\theta,\mathbf{0}) + \sum_{k=0}^{N-1}\max_{\Sigma_k\in\mathbb{B}_d}\textnormal{tr}\big(Z_k(\theta)\Sigma_k\big)$$ in which $Z_k(\theta)\in\mathbb{R}^{q\times q}$ is the $k$^th^ block diagonal of $(H_u\mathbf{M}+H_w)'(H_u\mathbf{M}+H_w)$. Each maximization in [\[eq:innermax_iid\]](#eq:innermax_iid){reference-type="ref" reference="eq:innermax_iid"} can be solved in finite time using a bisection algorithm detailed in @nguyen:shafieezadeh:kuhn:mohajerin:2023 [Alg. 2, Thm. 6.4]. With this bisection algorithm, we can also compute the optimal solution $$\label{eq:innermax_k} \Sigma_k^0(\theta) := \arg\max_{\Sigma_k\in\mathbb{B}_d}\textnormal{tr}\big(Z_k(\theta)\Sigma_k\big)$$ and construct the matrix $$\boldsymbol{\Sigma}^0(\theta) = \begin{bmatrix} \Sigma_0^0(\theta) & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \Sigma_{N-1}^0(\theta) \end{bmatrix}$$ Thus, we subsequently treat $f(\cdot)$ as the function of interest and consider only the outer minimization problem, i.e., the DRO problem in [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"} is now $$\label{eq:opt} f^* := \min_{\theta\in\Pi}f(\theta)$$ in which $\Pi=\Pi(x)$ is convex. Since the solution to [\[eq:innermax_k\]](#eq:innermax_k){reference-type="ref" reference="eq:innermax_k"} is unique for $\widehat{\Sigma}\succ 0$ [@nguyen:shafieezadeh:kuhn:mohajerin:2023 Prop.
A.2], we have from Danskin's theorem that $f(\theta)$ is convex and $$\nabla f(\theta) = \nabla_{\theta} L_x(\theta, \boldsymbol{\Sigma}^0(\theta))$$ i.e., the gradient of $f(\cdot)$ at $\theta$ is given by the gradient of $L_x(\cdot)$ with respect to $\theta$, evaluated at $(\theta,\boldsymbol{\Sigma}^0(\theta))$. Thus, the gradient of $f(\cdot)$ is available in semi-analytic form, and we define the first-order oracle as $$\begin{aligned} \label{FW_oracle} F_1(\theta) := \arg\min_{\vartheta\in\Pi}\nabla f(\theta)'\vartheta\end{aligned}$$ If the set $\Pi$ is a polytope, the oracle is evaluated by solving a linear program (LP). The solution to this minimization problem provides the search direction for the Frank-Wolfe algorithm and leads to the following iterative update rule: $$\label{eq:fw_update} \theta_{t+1} = \theta_t + \eta_t\left(F_1(\theta_t) - \theta_t\right)$$ in which $\eta_t\in(0,1]$ is the stepsize, chosen according to some (adaptive) rule that is subsequently introduced. The adaptive Frank-Wolfe algorithm uses the stepsize $$\label{eq:adaptive} \eta_t(\beta) = \min\left\{1, \frac{(\theta_t-F_1(\theta_t))'\nabla f(\theta_t)}{\beta |\theta_t-F_1(\theta_t)|^2}\right\}$$ in which $\beta$ is the global smoothness parameter for $f(\cdot)$, i.e., $\beta>0$ satisfies $$|\nabla f(\theta_1) - \nabla f(\theta_2)| \leq \beta | \theta_1-\theta_2| \qquad \forall \ \theta_1,\theta_2\in\Pi$$ Note that we do not verify that $f(\theta)$ defined in [\[eq:innermax\]](#eq:innermax){reference-type="ref" reference="eq:innermax"} is in fact $\beta$-smooth. To improve the convergence of the Frank-Wolfe algorithm, one can also replace the global smoothness parameter $\beta$ in [\[eq:adaptive\]](#eq:adaptive){reference-type="ref" reference="eq:adaptive"} with an adaptive smoothness parameter $\beta_t$ [@pedregosa:negiar:askari:jaggi:2020].
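To make the update [\[eq:fw_update\]](#eq:fw_update){reference-type="ref" reference="eq:fw_update"} and the stepsize [\[eq:adaptive\]](#eq:adaptive){reference-type="ref" reference="eq:adaptive"} concrete, the following toy sketch (our own construction, not the paper's implementation) runs Frank-Wolfe on a smooth quadratic over a box, where the LP oracle reduces to selecting a vertex:

```python
import numpy as np

# Toy smooth objective f(x) = 0.5*|x - b|^2 over the box [-1, 1]^2,
# for which beta = 1 is a valid global smoothness constant.
b = np.array([0.3, 2.0])
beta = 1.0

def grad(x):
    return x - b

def lp_oracle(g):
    # A minimizer of g'x over the box [-1, 1]^2 (vertex-wise sign rule).
    return -np.sign(g)

x = np.zeros(2)
for t in range(200):
    s = lp_oracle(grad(x))
    d = s - x
    gap = -d @ grad(x)                    # Frank-Wolfe duality gap
    if gap < 1e-8:
        break
    # adaptive stepsize with global smoothness parameter beta
    eta = min(1.0, gap / (beta * (d @ d)))
    x = x + eta * d

print(np.round(x, 3))  # converges to the box-constrained minimizer (0.3, 1.0)
```

The duality gap used in the termination test is the same quantity that appears in the numerator of [\[eq:adaptive\]](#eq:adaptive){reference-type="ref" reference="eq:adaptive"}, so the stepsize computation comes essentially for free once the oracle has been evaluated.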
We require this $\beta_t$ to satisfy the inequality $$\label{eq:fadaptive} f\Big(\theta_t + \eta_t(\beta_t)\big(F_1(\theta_t)-\theta_t\big)\Big) \leq f(\theta_t) + \eta_t(\beta_t)\big(F_1(\theta_t)-\theta_t\big)'\nabla f(\theta_t) + \frac{1}{2}\beta_t\eta_t(\beta_t)^2|F_1(\theta_t)-\theta_t|^2$$ in which $\eta_t(\beta_t)$ is the adaptive stepsize calculation in [\[eq:adaptive\]](#eq:adaptive){reference-type="ref" reference="eq:adaptive"}. The value of $\beta_t$ at each iteration is chosen according to a backtracking line search algorithm. Specifically, $\beta_t$ is chosen as the smallest element of the discrete search space $(\beta_{t-1}/\zeta)\cdot\{1,\tau,\tau^2,\tau^3,\dots\}$ that satisfies [\[eq:fadaptive\]](#eq:fadaptive){reference-type="ref" reference="eq:fadaptive"}, in which $\zeta,\tau>1$ are prescribed line search parameters. Unfortunately, Frank-Wolfe algorithms for MPC optimization problems are often limited to sublinear convergence rates because the constraint set $\Pi$ is not strongly convex (e.g., a polytope) and the solution to the optimization problem is frequently on the boundary of $\Pi$. We observe this same limitation for DRMPC, as demonstrated in Figure [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}.

## Newton-type saddle point algorithm

We now introduce an algorithm based on a new search direction (i.e., oracle) that solves the following optimization problem over the outer variable for a fixed $\boldsymbol{\Sigma}$: $$\begin{aligned} \label{2-oracle} F_2(\theta) := \arg\min_{\vartheta\in\Pi} L_x(\vartheta,\boldsymbol{\Sigma}^0(\theta))\end{aligned}$$ Recall that our objective function $L_x$ defined in [\[eq:L_obj\]](#eq:L_obj){reference-type="eqref" reference="eq:L_obj"} is a quadratic function in the first argument.
Hence, when $\Pi$ is a polytope, the oracle [\[2-oracle\]](#2-oracle){reference-type="eqref" reference="2-oracle"} is a QP, whereas the Frank-Wolfe oracle [\[FW_oracle\]](#FW_oracle){reference-type="eqref" reference="FW_oracle"} is an LP. We also note that the QP solved in $F_2(\theta)$ is equivalent to solving an SMPC problem for the same system with a fixed value of $\boldsymbol{\Sigma}$. Using the new QP oracle [\[2-oracle\]](#2-oracle){reference-type="eqref" reference="2-oracle"}, we follow the Frank-Wolfe update rule [\[eq:fw_update\]](#eq:fw_update){reference-type="eqref" reference="eq:fw_update"}, i.e., $$\label{eq:nt_update} \theta_{t+1} = \theta_{t} + \eta_t\left(F_2(\theta_t) - \theta_t\right)$$ where the stepsize $\eta_t$ is chosen according to the same stepsize rules introduced for the Frank-Wolfe algorithm. **Remark 3** (Newton-type saddle point computation). The following provides further details on our view regarding the proposed algorithm: 1. *Newton update rule:* The proposed second-order oracle [\[2-oracle\]](#2-oracle){reference-type="eqref" reference="2-oracle"} coincides with the Newton step for the function $f$ defined in [\[eq:innermax\]](#eq:innermax){reference-type="eqref" reference="eq:innermax"} provided that the Hessian $\nabla^2 f(\theta) = \nabla_{\theta\theta}^2L_x(\theta,\boldsymbol{\Sigma}^0(\theta))$ exists. Namely, the oracle [\[2-oracle\]](#2-oracle){reference-type="eqref" reference="2-oracle"} optimizes the quadratic function $$L_x(\vartheta,\boldsymbol{\Sigma}^0(\theta)) = f(\theta) + \nabla f(\theta)'(\vartheta-\theta) + \frac{1}{2}(\vartheta-\theta)'\nabla^2 f(\theta)(\vartheta-\theta)$$ which locally approximates [\[eq:innermax\]](#eq:innermax){reference-type="eqref" reference="eq:innermax"} at a given $\theta$. 2.
*Saddle-point computation:* While $\theta_t$ converges to a minimizer of the outer problem [\[eq:innermax\]](#eq:innermax){reference-type="eqref" reference="eq:innermax"}, the condition $\widehat{\Sigma} \succ 0$ ensures that the inner maximizer $\boldsymbol{\Sigma}^0(\theta_t)$ is indeed unique [@nguyen:shafieezadeh:kuhn:mohajerin:2023 Prop. A.2]. It is a classical result in the saddle point literature that this property ensures that $\boldsymbol{\Sigma}_t = \boldsymbol{\Sigma}^0(\theta_t)$ also converges to the optimizer of the dual problem [@ref:bertsekas2009convex Sec. 5.5.2]. [\[alg:NT\]]{#alg:NT label="alg:NT"} We summarize the proposed algorithm in pseudocode [\[alg:NT\]](#alg:NT){reference-type="ref" reference="alg:NT"}: set $t\leftarrow 0$; at each iteration, compute the inner maximizer $\boldsymbol{\Sigma}^0(\theta_t)$ via the bisection algorithm, evaluate the oracle $F_2(\theta_t)$ in [\[2-oracle\]](#2-oracle){reference-type="eqref" reference="2-oracle"}, update $\theta_{t+1}$ via [\[eq:nt_update\]](#eq:nt_update){reference-type="eqref" reference="eq:nt_update"}, and set $t\leftarrow t+1$; terminate when the duality gap falls below a prescribed tolerance. We close this section by noting three practical advantages of the proposed algorithm compared to solving the LMI reformulation [\[eq:lmi\]](#eq:lmi){reference-type="ref" reference="eq:lmi"}: 1. *Per-iteration complexity and existing QP solvers:* When $\Pi$ is a polytope, each iteration of [\[eq:nt_update\]](#eq:nt_update){reference-type="ref" reference="eq:nt_update"} involves only a QP. Therefore, no additional software is required to implement this algorithm relative to other versions of linear MPC, which already require the solution to a QP. There are also many state-of-the-art open-source solvers available for QPs, such as OSQP [@osqp:2020]. 2. *Anytime algorithm:* Each iteration of [\[eq:nt_update\]](#eq:nt_update){reference-type="ref" reference="eq:nt_update"} is guaranteed to be a feasible solution of the optimization problem, i.e., $\theta_t\in\Pi$ for all $t\geq 0$. Thus, [\[eq:nt_update\]](#eq:nt_update){reference-type="ref" reference="eq:nt_update"} is a so-called "anytime algorithm" in the sense that it can be terminated anytime after the first iteration. 3.
*Speedup by warm-start:* The algorithm can benefit from a "warm-start", i.e., an initial value of $\theta$ that is feasible ($\theta\in\Pi$) and potentially near the optimal solution. For MPC applications in particular, a natural warm-start is the solution to the optimization problem at the previous time step, updated by applying the terminal control law in [Assumption 3](#as:term){reference-type="ref" reference="as:term"}, e.g., $\tilde{\theta}^+$ in [\[eq:candidate\]](#eq:candidate){reference-type="ref" reference="eq:candidate"}. # Numerical Examples {#s:examples} We present two examples. The first is a small-scale (2 state) example that is used to demonstrate the closed-loop performance guarantees presented in [3](#s:cl){reference-type="ref" reference="s:cl"} and investigate the computational performance of the proposed Newton-type algorithm. The second is a large-scale (20 state) example, based on the Shell oil fractionator case study in @maciejowski:2001 [s. 9.1], that is used to demonstrate the scalability of the proposed Newton-type algorithm. All optimization problems (LP, QP, or LMI) are solved with MOSEK with default parameter settings [@mosek:2022]. ## Small-scale example We consider a two-state, two-input system in which $$A = \begin{bmatrix} 0.9 & 0 \\ 0.2 & 0.8 \end{bmatrix} \qquad B = G = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$ We define the input constraints $\mathbb{U}:=\{u\in\mathbb{R}^2\mid |u|_{\infty}\leq 1, \ u_2\geq 0\}$ and note that the origin is on the boundary of $\mathbb{U}$. We consider the disturbance set $\mathbb{W}:=\{w\in\mathbb{R}^2\mid |w|_{\infty}\leq 1\}$ with the nominal covariance $\widehat{\Sigma}:=0.01 I_2$ and an ambiguity radius of $\varepsilon=0.1$.
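For readers following along numerically, the system data above can be encoded directly. The sketch below (the helper `in_U` is ours, introduced only for illustration) checks that $A$ is Schur stable and that the origin sits on the boundary of $\mathbb{U}$:

```python
import numpy as np

# Small-scale example data from the text: two states, two inputs.
A = np.array([[0.9, 0.0],
              [0.2, 0.8]])
B = np.eye(2)                  # B = G = I_2
Q = np.diag([0.1, 10.0])
R = np.diag([10.0, 0.1])

# A is Schur stable: all eigenvalues lie strictly inside the unit circle.
spectral_radius = max(abs(np.linalg.eigvals(A)))
assert spectral_radius < 1.0   # eigenvalues are 0.9 and 0.8

def in_U(u, tol=1e-12):
    """Membership test for U = {u in R^2 : |u|_inf <= 1, u_2 >= 0}."""
    return np.max(np.abs(u)) <= 1.0 + tol and u[1] >= -tol

# The origin is feasible but lies on the boundary of U: any perturbation
# with u_2 < 0 leaves the constraint set.
assert in_U(np.zeros(2)) and not in_U(np.array([0.0, -0.01]))
```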
For this system, we consider the cost matrices $$Q = \begin{bmatrix} 0.1 & 0 \\ 0 & 10 \end{bmatrix} \qquad R = \begin{bmatrix} 10 & 0 \\ 0 & 0.1 \end{bmatrix}$$ and define $P\succ 0$ as the solution to the Lyapunov equation for this system with $u=0$, i.e., $P\succ 0$ that satisfies $A'PA - P = -Q$. This DRMPC problem formulation satisfies [\[as:convex,as:cost,as:term,as:track\]](#as:convex,as:cost,as:term,as:track){reference-type="ref" reference="as:convex,as:cost,as:term,as:track"}, but *not* [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}. **Computational performance.** We solve the DRMPC problem for this formulation using three different methods: 1) the LMI optimization problem in [\[eq:lmi\]](#eq:lmi){reference-type="ref" reference="eq:lmi"} with MOSEK, 2) the Frank-Wolfe (FW) algorithm in [\[eq:fw_update\]](#eq:fw_update){reference-type="ref" reference="eq:fw_update"}, and 3) the proposed Newton-type (NT) saddle point algorithm in [\[alg:NT\]](#alg:NT){reference-type="ref" reference="alg:NT"}. To compare these algorithms, we use a fixed initial condition of $x_0=\begin{bmatrix} 1 & 1 \end{bmatrix}'$ and horizon length $N=5$. We plot the convergence rate in terms of the suboptimality gap ($f(\theta_t)-f^*$) for the FW and NT algorithms in [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}. For this suboptimality gap, we determine $f^*$ via the LMI optimization problem in [\[eq:lmi\]](#eq:lmi){reference-type="ref" reference="eq:lmi"}. Therefore, convergence in the suboptimality gap implies that the Frank-Wolfe/Newton-type algorithms converge to the same optimal cost as the exact LMI reformulation. For both algorithms, we consider the adaptive and fully adaptive step-size rules. We terminate when the duality gap is less than $10^{-6}$ or we exceed $10^3$ iterations. First, we discuss the per-iteration convergence rate shown in the top of [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}.
For the FW algorithm, the convergence rate is *sublinear* for both step-size rules.[^2] Moreover, the suboptimality gaps of the FW algorithms remain significantly larger than the specified tolerance of $10^{-6}$ after $10^3$ iterations. By contrast, the NT algorithm appears to obtain a *superlinear* (perhaps quadratic) convergence rate near the optimal solution. This behavior is also observed for all other values of the initial condition and horizon length investigated. In fact, the fully adaptive NT algorithm typically converges in fewer than $5$ iterations. The significant improvement in per-iteration convergence rate ensures that the NT algorithm is also significantly faster than the FW algorithm in terms of computation time, as shown in the bottom of [1](#fig:convergence){reference-type="ref" reference="fig:convergence"}. ![Convergence of Frank-Wolfe (FW) and Newton-type (NT) algorithm for the DRMPC problem ($N=10$) in terms of suboptimality gap as a function of iteration (top) and computation time (bottom).](convergence_arxiv.pdf){#fig:convergence} In [2](#fig:comparison){reference-type="ref" reference="fig:comparison"}, we compare the computation times required to solve the small-scale DRMPC problem for different horizon lengths $N$ via 1) the LMI optimization problem in [\[eq:lmi\]](#eq:lmi){reference-type="ref" reference="eq:lmi"} with MOSEK and 2) the NT algorithm. For $N\leq 5$, and therefore fewer variables and constraints, solving the DRMPC problem as an LMI optimization problem is faster. For $N>5$, however, solving the DRMPC problem as an LMI optimization problem becomes notably slower than using the NT algorithm. For $N\geq 15$, the computation time to solve the DRMPC problem as an LMI optimization problem is typically more than *twice* the computation time required for the Newton-type algorithm.
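To make the generic iteration behind [\[eq:fw_update\]](#eq:fw_update){reference-type="ref" reference="eq:fw_update"} and [\[eq:nt_update\]](#eq:nt_update){reference-type="ref" reference="eq:nt_update"} concrete, here is a minimal conditional-gradient loop. The oracle, step size, and problem are stand-ins, not the paper's DRMPC oracles: we use the classical $\eta_t = 2/(t+2)$ rule and a toy box-constrained quadratic whose LP oracle is just a sign pattern.

```python
import numpy as np

def fw_iterations(grad, oracle, theta0, max_iter=1000, tol=1e-6):
    """Generic conditional-gradient loop: theta <- theta + eta*(F(theta) - theta).

    `oracle(g)` returns argmin_{v in Pi} g'v; the duality gap g'(theta - v)
    upper-bounds the suboptimality and serves as the stopping criterion."""
    theta = theta0.astype(float).copy()
    for t in range(max_iter):
        g = grad(theta)
        v = oracle(g)
        if g @ (theta - v) < tol:            # duality gap small -> done
            break
        theta += 2.0 / (t + 2.0) * (v - theta)  # classical FW step size
    return theta

# Toy instance: minimize 0.5*|theta - c|^2 over the box Pi = [-1, 1]^2.
# The unconstrained minimizer c lies outside the box, so the constrained
# optimum is the vertex (1, -1), which the iteration reaches quickly.
c = np.array([2.0, -2.0])
theta_star = fw_iterations(lambda th: th - c, lambda g: -np.sign(g), np.zeros(2))
assert np.allclose(theta_star, [1.0, -1.0])
```

Swapping the LP oracle for a QP oracle of the form $F_2$ turns the same loop into the Newton-type scheme; only the oracle call changes.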
![Comparison of computation times for the fully adaptive Newton-type (NT) algorithm and LMI formulation solved by MOSEK for the horizon $N$.](comparison.pdf){#fig:comparison} **Closed-loop performance.** We initialize the state at $x(0)=\begin{bmatrix}1 & 1 \end{bmatrix}'$ and consider a fixed horizon of $N=10$. In the following discussion, we consider three different controllers: DRMPC with $d=(\varepsilon,\widehat{\Sigma})$, SMPC with $d=(0,\widehat{\Sigma})$, and RMPC with $d=(0,0)$. To demonstrate the differences between the control laws for DRMPC, SMPC, and RMPC, we plot the first element of the closed-loop state trajectory assuming the disturbance is zero, i.e., $w=0$, in [3](#fig:n2_nom){reference-type="ref" reference="fig:n2_nom"}. RMPC drives the closed-loop state to the origin. SMPC, however, does not drive the closed-loop state to the origin even though the disturbance is zero. Since $u_2\geq 0$, the SMPC controller keeps $x_1$ slightly below the origin to mitigate the effect of positive values for $w_2$. The amount of offset is determined by the covariance of the disturbance. Since DRMPC considers a worst-case covariance for the disturbances, the offset is larger. Thus, for $||\mathbf{w}_{\infty}||=0$, the closed-loop state for DRMPC (SMPC) does not converge to the origin. The origin is therefore not ISS for DRMPC (SMPC), despite satisfying [\[as:convex,as:cost,as:term,as:track\]](#as:convex,as:cost,as:term,as:track){reference-type="ref" reference="as:convex,as:cost,as:term,as:track"}. By contrast, these assumptions are sufficient to render the origin ISS for the closed-loop system generated by RMPC [@goulart:kerrigan:maciejowski:2006 Thm. 23]. To summarize: SMPC and DRMPC hedge against uncertainty and thereby give up the deterministic properties of RMPC, such as ISS, in pursuit of improved performance in terms of the expected value of the stage cost, i.e., $\mathcal{J}_k(\cdot)$.
![Closed-loop trajectories with zero disturbance, i.e., $x(k)=\phi_d(k;x,\mathbf{0})$, for the first element of the state, denoted $x_1(k)$.](n2_nom_plot.pdf){#fig:n2_nom} We now investigate the performance of DRMPC relative to SMPC/RMPC for a distribution $\mathds{P}\in\mathcal{P}_d^{\infty}$. Specifically, we consider $w(k)$ to be i.i.d. in time and sampled from a zero-mean uniform distribution with a covariance of $$\Sigma = \begin{bmatrix} 0.01 & 0.01 \\ 0.01 & 0.035 \end{bmatrix}$$ We simulate $S=100$ different realizations of the disturbance trajectory for each controller. For each simulation $s\in\{1,\dots,S\}$, we define the closed-loop state and input trajectory $x^s(k)$ and $u^s(k)$, as well as the time-average stage cost $$\mathcal{J}_k^s:=\frac{1}{k}\sum_{i=0}^{k-1}\ell(x^s(i),u^s(i))$$ In accordance with the results in [\[thm:performance,cor:mss\]](#thm:performance,cor:mss){reference-type="ref" reference="thm:performance,cor:mss"}, we consider the sample average approximations of $\mathbb{E}_{\mathds{P}}\left[|\phi(k;x,\mathbf{w}_{\infty})|^2\right]$ and $\mathcal{J}_k(x,d,\mathds{P})$ defined as $$\tilde{\mathbb{E}}_{\mathds{P}}\left[|x(k)|^2\right] := \frac{1}{S}\sum_{s=1}^{S} |x^s(k)|^2 \qquad \tilde{\mathcal{J}}_k:= \frac{1}{S}\sum_{s=1}^{S}\mathcal{J}_k^s$$ In [4](#fig:n2_stats){reference-type="ref" reference="fig:n2_stats"}, we plot $\tilde{\mathbb{E}}_{\mathds{P}}\left[|x(k)|^2\right]$ and $\tilde{\mathcal{J}}_k$. For each algorithm, we observe an initial, exponential decay in the mean-squared distance $\tilde{\mathbb{E}}_{\mathds{P}}\left[|x(k)|^2\right]$ towards a constant, but nonzero, value. These results for DRMPC are consistent with [Corollary 3](#cor:mss){reference-type="ref" reference="cor:mss"}. We note, however, that DRMPC produces the largest value of $\tilde{\mathbb{E}}[|x(k)|^2]$, i.e., the mean-squared distance between the closed-loop state and the setpoint is larger for DRMPC than for SMPC or RMPC.
While this result may initially seem counter-intuitive, the objective prescribed to the DRMPC problem is to minimize the expected value of the stage cost, not the expected distance to the origin. In terms of the expected value of the stage cost, i.e., $\tilde{\mathcal{J}}_k$, the performance of DRMPC is better than SMPC, which is better than RMPC. This difference becomes more pronounced as $k\rightarrow\infty$. ![Sample averages of $\mathbb{E}_{\mathds{P}}\left[|\phi(k;x,\mathbf{w}_{\infty})|^2\right]$ and $\mathcal{J}_k(x,d,\mathds{P})$, denoted $\tilde{\mathbb{E}}_{\mathds{P}}\left[|x(k)|^2\right]$ and $\tilde{\mathcal{J}}_k$, for $S=100$ realizations of the closed-loop trajectory.](n2_stats_arxiv.pdf){#fig:n2_stats} We note that $\Sigma\in\mathbb{B}_d$ in this example is intentionally chosen to exacerbate the effect of the disturbance on $x_2$ and thereby increase the cost of the closed-loop trajectory, i.e., a worst-case distribution. Therefore, DRMPC produces a superior control law relative to SMPC. If the ambiguity set, however, becomes too large relative to this value of $\Sigma$, the additional conservatism of DRMPC can produce worse performance than SMPC in terms of $\tilde{\mathcal{J}}_k$ for a fixed value of $\Sigma$. To demonstrate this tradeoff, we consider the same closed-loop simulation and plot the value of $\tilde{\mathcal{J}}_T$ at $T=500$ for various values of $\varepsilon$ and fixed $\widehat{\Sigma}$. In [5](#fig:n2_rhos){reference-type="ref" reference="fig:n2_rhos"}, we observe that $\varepsilon\approx 0.11$ achieves the minimum value of $\tilde{\mathcal{J}}_T$, with an approximately 13% decrease in the value of $\tilde{\mathcal{J}}_T$ compared to $\varepsilon=0.01$. For values of $\varepsilon> 0.11$, the value of $\tilde{\mathcal{J}}_T$ increases significantly until leveling off around $\varepsilon=1$. For large values of $\varepsilon$, DRMPC is too conservative because $\Sigma$ is now well within the interior of $\mathbb{B}_d$.
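The sample-average quantities $\mathcal{J}_k^s$, $\tilde{\mathcal{J}}_k$, and $\tilde{\mathbb{E}}_{\mathds{P}}[|x(k)|^2]$ defined above amount to a plain Monte Carlo loop over disturbance realizations. A minimal sketch follows, in which a fixed linear feedback $u = Kx$ (here simply $K=0$, valid since $A$ is Schur stable) stands in for the MPC control laws, and the uniform disturbance bound $0.1$ is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.0], [0.2, 0.8]])
B = G = np.eye(2)
Q, R = np.diag([0.1, 10.0]), np.diag([10.0, 0.1])
K = np.zeros((2, 2))                       # placeholder for the MPC feedback
stage = lambda x, u: x @ Q @ x + u @ R @ u # stage cost l(x, u)

S, T = 100, 200
x0 = np.array([1.0, 1.0])
J = np.zeros(S)                            # time-average stage cost J_T^s
x_sq = np.zeros(S)                         # |x^s(T)|^2
for s in range(S):
    x, cost = x0.copy(), 0.0
    for k in range(T):
        u = K @ x
        cost += stage(x, u)
        w = rng.uniform(-0.1, 0.1, size=2)  # i.i.d. zero-mean disturbance
        x = A @ x + B @ u + G @ w
    J[s] = cost / T
    x_sq[s] = x @ x

J_tilde, Ex_sq = J.mean(), x_sq.mean()     # sample averages over realizations
assert np.isfinite(J_tilde) and J_tilde > 0.0
assert Ex_sq < 1.0                         # state has decayed from |x0|^2 = 2
```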
Another interesting feature in [5](#fig:n2_rhos){reference-type="ref" reference="fig:n2_rhos"} is that the range of values for $\mathcal{J}_T^s$ for each $s\in\{1,\dots,30\}$, shown by the shaded region, decreases as $\varepsilon$ increases. This behavior might also be explained by the increased conservatism of DRMPC as $\varepsilon$ increases. As the value of $\varepsilon$ increases, DRMPC generates a closed-loop system that is distributionally robust to a range of possible covariances. In this case, we might expect DRMPC to drive the closed-loop system to an operating region that attenuates the effect of all disturbances on the closed-loop cost at the expense of nominal performance. Thus, the closed-loop system becomes less sensitive to disturbances and thereby decreases the variability in performance at the expense of an increase in the average cost. In summary, we are left with the classic trade-off in robust controller design: if we choose $\varepsilon$ too large, the excessively conservative DRMPC may perform worse than SMPC ($\varepsilon=0$). Thus, the design goal for DRMPC is to select a value of $\varepsilon$ that balances these two extremes. ![Sample average of the performance metric $\mathcal{J}_T(x,d,\mathds{P})$, denoted $\tilde{\mathcal{J}}_T$, for $T=500$ as a function of the ambiguity radius $\varepsilon$ for $S=30$ realizations of the disturbance trajectory. The shaded blue region indicates the range of values of $\mathcal{J}_T^s$ for $s\in\{1,\dots,30\}$.](n2_rhos.pdf){#fig:n2_rhos} ## Large-scale example: Shell oil fractionator To demonstrate the applicability of the Newton-type algorithm to control problems of an industrially relevant size, we now consider the Shell oil fractionator example in @maciejowski:2001 [s. 9.1] with $n=20$ states, $m=3$ inputs, and $p=3$ outputs. We include two disturbances ($q=2$): the intermediate reflux duty and the upper reflux duty. We assume these disturbances are i.i.d. and zero mean.
We also note that $A$ in this problem is Schur stable. We consider the input constraints $\mathbb{U}:= \left\{u\in\mathbb{R}^3\mid |u|_{\infty}\leq 1, \ u_3\geq 0\right\}$ in which the origin is again on the boundary of the input constraint $\mathbb{U}$. The disturbance set is $\mathbb{W}:=\{w\in\mathbb{R}^2\mid |w|_{\infty}\leq 1\}$ with the nominal covariance $\widehat{\Sigma}=0.01I_2$ and an ambiguity radius of $\varepsilon=0.1$. The outputs $y$ satisfy $y=Cx$ and we define cost matrices as $Q=C'Q_yC$ and $R=0.1I_3$ in which $Q_y=\textnormal{diag}([20, \ 10, \ 1])$. We then define the terminal cost matrix $P\succ 0$ as the solution to the Lyapunov equation $A'PA-P=-Q$. This DRMPC problem formulation satisfies [\[as:convex,as:cost,as:term\]](#as:convex,as:cost,as:term){reference-type="ref" reference="as:convex,as:cost,as:term"} and $(A,Q^{1/2})$ is detectable (see [Remark 1](#rem:detectable){reference-type="ref" reference="rem:detectable"}). We initialize the state at $x(0)=0$ and use $N=10$. We consider the performance of the closed-loop system in which $w(k)$ is sampled from a zero-mean uniform distribution with a covariance of $\Sigma = \textnormal{diag}([0.04, \ 0.01])$. Note that $\Sigma\in\mathbb{B}_d$. We simulate $T=500$ time steps for $S=30$ realizations of the disturbance trajectory. We plot $\tilde{\mathcal{J}}_k$ in [6](#fig:oil_stats){reference-type="ref" reference="fig:oil_stats"} for DRMPC, SMPC, and RMPC. At $T=500$, we observe an almost negligible 0.2% decrease in $\tilde{\mathcal{J}}_{T}$ for DRMPC relative to SMPC. Longer horizons may increase this difference, but we expect the overall benefit of DRMPC to remain small and therefore not worth the extra computational demand.
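Both examples define the terminal cost $P$ through the Lyapunov equation $A'PA - P = -Q$. With SciPy this is a one-line solve; the sketch below uses the small-scale example's data and verifies the residual and positive definiteness (note SciPy's convention solves $aXa' - X + q = 0$, hence the transpose):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.0], [0.2, 0.8]])   # Schur stable
Q = np.diag([0.1, 10.0])

# SciPy solves a X a' - X + q = 0; passing a = A' yields A'PA - P = -Q.
P = solve_discrete_lyapunov(A.T, Q)
assert np.allclose(A.T @ P @ A - P, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)  # P is positive definite
```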
![Sample average of $\mathcal{J}_k(x,d,\mathds{P})$, denoted $\tilde{\mathcal{J}}_k$, for $S=30$ realizations of the closed-loop trajectory.](oilfr_sim_plot.pdf){#fig:oil_stats} # Additional technical proofs and results {#s:appendix} *Proof of [Lemma 1](#lem:compact){reference-type="ref" reference="lem:compact"}.* From @goulart:2007 [Thm. 3.5] we have that $\Pi(x)$ is closed and convex for all $x\in\mathcal{X}$ and $\mathcal{X}$ is closed and convex. We now establish that $\Pi(x)$ is bounded. If $(\mathbf{M},\mathbf{v})\in\Pi(x)$, then $\mathbf{M}\mathbf{w}+ \mathbf{v}\in \mathbb{U}^N$ for all $\mathbf{w}\in\mathbb{W}^N$. Since $0\in\mathbb{W}$, we have that for any $(\mathbf{M},\mathbf{v})\in\Pi(x)$, $\mathbf{v}$ must satisfy $\mathbf{v}\in\mathbb{U}^N$. Moreover, since the origin is in the interior of $\mathbb{W}$, there exists $\delta>0$ such that $B_{\delta}:=\{\mathbf{w}\in\mathbb{R}^{Nq}\mid |\mathbf{w}|\leq\delta \}\subseteq\mathbb{W}$. Since $\mathbb{U}$ is bounded, there exists $b\geq 0$, such that $|\mathbf{u}|\leq b$ for all $\mathbf{u}\in\mathbb{U}^N$. For any $(\mathbf{M},\mathbf{v})\in\Pi(x)$, we have $$\label{eq:M_bound} |\mathbf{M}\mathbf{w}| \leq 2b \quad \forall \ \mathbf{w}\in B_{\delta}$$ because $\mathbf{v}\in\mathbb{U}^N$. We have that [\[eq:M_bound\]](#eq:M_bound){reference-type="ref" reference="eq:M_bound"} is equivalent to $$\lVert \mathbf{M}\rVert_2 := \sup\left\{|\mathbf{M}\mathbf{w}| \ \middle| \ |\mathbf{w}|\leq 1 \right\} \leq 2b/\delta$$ and we can construct a bounded set for $\mathbf{M}$ as follows: $$\mathbf{M}\in \mathbb{M} := \left\{ \mathbf{M}\in\mathbb{R}^{Nm\times Nq} \ \middle| \ ||\mathbf{M}||_2 \leq 2b/\delta \right\}$$ for all $(\mathbf{M},\mathbf{v})\in\Pi(x)$. Therefore, $(\mathbf{M},\mathbf{v})\in\Pi(x) \subseteq \mathbb{M} \times \mathbb{U}^N$ for all $x\in\mathcal{X}$. Since $\mathbb{U}$ and $\mathbb{M}$ are bounded, $\Pi(x)$ is bounded as well, uniformly for all $x\in\mathcal{X}$. ◻ **Lemma 7**.
*If $\mathbb{W}$ is bounded, then $\mathcal{D}$ is bounded.* *Proof.* Define the set $\mathbb{S}:=\{\Sigma = \mathbb{E}_{\mathds{P}}\left[ww'\right] \mid \mathds{P}\in\mathcal{M}(\mathbb{W})\}$ and note that $\mathbb{S}$ is bounded because $\mathbb{W}$ is bounded. By definition, $\mathbb{B}_d\subseteq\mathbb{S}$ for all $d\in\mathcal{D}$. Since $\widehat{\Sigma}\in \mathbb{B}_d$ for all $d=(\varepsilon,\widehat{\Sigma})$, then $\widehat{\Sigma}\in\mathbb{S}$ for all $d\in\mathcal{D}$ and therefore $\mathcal{D}\subseteq \mathbb{R}_{\geq 0}\times \mathbb{S}$. Define $\rho:=\sup_{\Sigma\in\mathbb{S}}\textnormal{tr}(\Sigma)^{1/2}<\infty$. Therefore, $\mathbb{S}\subseteq\mathbb{B}_{(\rho,0)}$. Choose any $\widehat{\Sigma}\in\mathbb{S}$. If $\Sigma\in\mathbb{B}_{(\rho,0)}$, then $$\textnormal{tr}\left(\widehat{\Sigma} + \Sigma - 2\left(\widehat{\Sigma}^{1/2}\Sigma\widehat{\Sigma}^{1/2}\right)^{1/2}\right) \leq \textnormal{tr}\left(\widehat{\Sigma}\right) + \textnormal{tr}\left(\Sigma\right) \leq 2\rho^2 \leq (2\rho)^2$$ and therefore $\Sigma\in\mathbb{B}_{(2\rho,\hat{\Sigma})}$. Thus, $\mathbb{S}\subseteq\mathbb{B}_{(\rho,0)}\subseteq \mathbb{B}_{(2\rho,\hat{\Sigma})}$. For all $(\varepsilon,\widehat{\Sigma})\in\mathcal{D}$, we have $\mathbb{B}_{(\varepsilon,\widehat{\Sigma})} \subseteq \mathbb{S} \subseteq \mathbb{B}_{(2\rho,\hat{\Sigma})}$ and therefore $\varepsilon\leq 2\rho$. Hence, $\mathcal{D}\subseteq [0,2\rho]\times\mathbb{S}$ and $\mathcal{D}$ is bounded. ◻ # Optimal terminal cost {#app:optP} Since the term $\textnormal{tr}(G'PG\Sigma)$ appears on the right-hand side of all bounds in [\[thm:performance,thm:resie,cor:mss\]](#thm:performance,thm:resie,cor:mss){reference-type="ref" reference="thm:performance,thm:resie,cor:mss"}, a straightforward design strategy for DRMPC is to select the value of $P$ that minimizes this term subject to the constraints in [Assumption 3](#as:term){reference-type="ref" reference="as:term"}.
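Assuming [\[eq:dare\]](#eq:dare){reference-type="ref" reference="eq:dare"} is the standard discrete algebraic Riccati equation of unconstrained LQR, this choice of $P$ and the associated gain can be computed and checked numerically. A sketch using the small-scale example's matrices (at the DARE solution, the terminal-cost inequality of [Assumption 3](#as:term){reference-type="ref" reference="as:term"} holds with equality):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.0], [0.2, 0.8]])
B = np.eye(2)
Q, R = np.diag([0.1, 10.0]), np.diag([10.0, 0.1])

P = solve_discrete_are(A, B, Q, R)                   # DARE solution
Kf = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal terminal gain
Acl = A + B @ Kf

# At the DARE solution, Acl' P Acl = P - Q - Kf' R Kf (equality).
assert np.allclose(Acl.T @ P @ Acl, P - Q - Kf.T @ R @ Kf)
assert max(abs(np.linalg.eigvals(Acl))) < 1.0        # closed loop is Schur stable
```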
In the following lemma, we show that the value of $P$ that achieves this goal is given by [\[eq:dare\]](#eq:dare){reference-type="ref" reference="eq:dare"} in [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}. **Lemma 8** (Optimal terminal cost). *For any $Q,R\succ 0$, a solution to $$\begin{aligned} \inf_{P\succ 0,K_f} \ & \textnormal{tr}(G'PG\Sigma) \\ \textnormal{s.t. } & P - Q - K_f'RK_f \succeq (A+BK_f)'P(A+BK_f) \end{aligned}$$ is given by the solution to the DARE [\[eq:dare\]](#eq:dare){reference-type="ref" reference="eq:dare"} and the associated controller $K_f:=-(R+B'PB)^{-1}B'PA$ for all $G\in\mathbb{R}^{m\times q}$ and $\Sigma\succeq 0$.* This lemma suggests that, if possible, one should always select $P$ according to [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"} to minimize the value of $\textnormal{tr}(G'PG\Sigma)$ for all $\Sigma\succeq 0$. Therefore, [Assumption 5](#as:term_lqr){reference-type="ref" reference="as:term_lqr"}, in addition to being a convenient and common method to select $P$, also produces the best theoretical bound for the performance of the closed-loop system. *Proof of [Lemma 8](#lem:optP){reference-type="ref" reference="lem:optP"}.* Consider the unique solution $P^*\succ 0$ that satisfies [\[eq:dare\]](#eq:dare){reference-type="ref" reference="eq:dare"} and choose any $G\in\mathbb{R}^{m\times q}$ and $\Sigma\succeq 0$. We denote $D:=G\Sigma G'$ and note that $D\succeq 0$. We consider the equivalent optimization problem $$\begin{aligned} \inf_{K_f,S} \ & \textnormal{tr}((P^*-S) D) \\ \textnormal{s.t. } & P^* - S - Q - K_f'RK_f \succeq (A+BK_f)'(P^*-S)(A+BK_f) \label{eq:Lyap_Pstar}\\ & P^*-S \succ 0, \quad S \succeq 0 \label{eq:pd_Pstar} \end{aligned}$$ If the solution to this optimization problem is $S=0$ for all $D\succeq 0$, then $P^*$ must be a solution to the original optimization problem. We now prove that $S=0$.
Assume $K_f\in\mathbb{R}^{m\times n}$ and $S\succeq 0$ exist such that [\[eq:Lyap_Pstar,eq:pd_Pstar\]](#eq:Lyap_Pstar,eq:pd_Pstar){reference-type="ref" reference="eq:Lyap_Pstar,eq:pd_Pstar"} hold. Note that this inequality implies that $A+BK_f$ is Schur stable. Since $P^*$ defines the optimal cost of the infinite horizon unconstrained LQR problem, any $K_f\in\mathbb{R}^{m\times n}$ must satisfy $$\label{eq:Pstar} (A+BK_f)'(P^*)(A+BK_f) \succeq P^* - Q - K_f'RK_f$$ By combining [\[eq:Lyap_Pstar,eq:Pstar\]](#eq:Lyap_Pstar,eq:Pstar){reference-type="ref" reference="eq:Lyap_Pstar,eq:Pstar"}, we have that $K_f$ and $S\succeq 0$ must satisfy $$\label{eq:S} (A+BK_f)'S(A+BK_f) \succeq S$$ However, if there exists $S\neq 0$ satisfying [\[eq:S\]](#eq:S){reference-type="ref" reference="eq:S"} with $S\succeq 0$, then $A+BK_f$ is not Schur stable. Since $A+BK_f$ is Schur stable, the only value of $S\succeq 0$ that satisfies [\[eq:S\]](#eq:S){reference-type="ref" reference="eq:S"} is $S=0$. Therefore, $S=0$ is the only solution to the modified optimization problem and $P^*$ is the solution to the original optimization problem. Note that since the choices of $G\in\mathbb{R}^{m\times q}$ and $\Sigma\succeq 0$ were arbitrary, this solution holds for any $G\in\mathbb{R}^{m\times q}$ and $\Sigma\succeq 0$. ◻ # Fundamental mathematical properties of DRMPC formulation {#app:problem} We now establish some fundamental mathematical properties of [\[eq:dro\]](#eq:dro){reference-type="ref" reference="eq:dro"} to ensure that subsequently defined quantities are indeed well-defined. We first introduce some notation and definitions. A function $f:X\rightarrow \mathbb{R}$ is lower semicontinuous if $\liminf_{x\rightarrow x_0} f(x) \geq f(x_0)$ for all $x_0\in X$. Let $\mathcal{B}(\Omega)$ denote the Borel field of some set $\Omega$, i.e., the subsets of $\Omega$ generated through relative complements and countable unions of all open subsets of $\Omega$.
For the metric spaces $X$ and $Y$, a function $f:X\rightarrow Y$ is Borel measurable if for each open set $O\subseteq Y$, we have $f^{-1}(O):=\{x\in X\mid f(x)\in O\}\in\mathcal{B}(X)$. For the metric spaces $X$ and $Y$, a set-valued mapping $F:X\rightrightarrows Y$ is Borel measurable if for every open set $O\subseteq Y$, we have $F^{-1}(O):=\{x\in X \mid F(x)\cap O\neq\emptyset\}\in\mathcal{B}(X)$ [@rockafellar:wets:2009 Def. 14.1]. **Proposition 9** (Existence and measurability). *If [\[as:convex,as:cost\]](#as:convex,as:cost){reference-type="ref" reference="as:convex,as:cost"} hold, then for all $d\in\mathcal{D}$, $V_d:\mathcal{X}\times\Theta\rightarrow \mathbb{R}_{\geq 0}$ is convex and continuous, $V_d^0:\mathcal{X}\rightarrow \mathbb{R}_{\geq 0}$ is convex and lower semicontinuous, $\theta^0_d:\mathcal{X}\rightrightarrows\Theta$ is Borel measurable, and $\theta^0_d(x)\neq\emptyset$ for all $x\in\mathcal{X}$.* *Proof.* We have that $L(\cdot)$ is continuous and $\mathbb{B}_d^N$ is compact [@nguyen:shafieezadeh:kuhn:mohajerin:2023 Lemma A.6]. Therefore, $V_d(x,\theta)$ is continuous for all $(x,\theta)\in\mathbb{R}^n\times\Theta$ [@rawlings:mayne:diehl:2020 Thm. C.28]. We also have that $L(x,\theta,\boldsymbol{\Sigma})$ is convex in $(x,\theta)\in\mathbb{R}^n\times\Theta$ for all $\boldsymbol{\Sigma}\in\mathbb{B}_d^N$. Since the pointwise maximum of a family of convex functions is convex, we have that $V_d(x,\theta)$ is also convex. Since $V_d(x,\theta)$ is continuous and $\Pi(x)$ is compact for each $x\in\mathcal{X}$, we have that $\theta^0_d(x)\neq\emptyset$, i.e., the minimum is attained, for all $x\in\mathcal{X}$.
Since $\mathbb{Z}$ and $\mathbb{X}_f$ are closed, we have that $$\mathcal{Z}:= \bigcap_{\mathbf{w}\in\mathbb{W}^N} \left\{ (x,\mathbf{M},\mathbf{v})\in\mathbb{R}^n\times\Theta\ \middle| \ \begin{matrix} \mathbf{x}= \mathbf{A}x + \mathbf{B}\mathbf{v}+ (\mathbf{B}\mathbf{M}+ \mathbf{G})\mathbf{w}\\ \mathbf{u}= \mathbf{M}\mathbf{w}+ \mathbf{v}\\ (x(k),u(k))\in\mathbb{Z}\ \forall \ k\in\mathbb{I}_{0:N-1} \\ x(N) \in \mathbb{X}_f \end{matrix} \right\}$$ is also closed. From the Proof of [Lemma 1](#lem:compact){reference-type="ref" reference="lem:compact"}, we have that $\mathcal{Z}\subseteq\mathbb{R}^n\times(\mathbb{M}\times\mathbb{U})$ in which $\mathbb{U}$ and $\mathbb{M}$ are compact. Therefore, $$V^0_d(x) = \min_{\theta}\left\{V_d(x,\theta) \mid (x,\theta)\in\mathcal{Z}\right\} \quad \text{and} \quad \theta^0_d(x) = \arg\min_{\theta}\left\{V_d(x,\theta) \mid (x,\theta)\in\mathcal{Z}\right\}$$ From @bertsekas:shreve:1996 [Prop. 7.33], we have that $V_d^0:\mathcal{X}\rightarrow\mathbb{R}_{\geq 0}$ is lower semicontinuous and $\theta^0_d:\mathcal{X}\rightrightarrows\Theta$ is Borel measurable. ◻ Since $\theta^0_d(x)$ is Borel measurable and $\mathbb{U}$ is compact, we have from [Proposition 9](#prop:exist){reference-type="ref" reference="prop:exist"} and [@bertsekas:shreve:1996 Lemma 7.18] that there exists a selection rule such that $\kappa_d:\mathcal{X}\rightarrow\mathbb{U}$ is also Borel measurable. Thus, the closed-loop system $\phi(k;x,\mathbf{w}_{\infty})$ is measurable with respect to $\mathbf{w}_{\infty}$ for all $k\geq 0$. Moreover, all probabilities and expected values for the closed-loop system $\phi(\cdot)$ are well defined. We also have the following corollary to [Proposition 9](#prop:exist){reference-type="ref" reference="prop:exist"} if $\mathbb{Z}$, $\mathbb{X}_f$, and $\mathbb{W}$ are polytopes. **Corollary 7** (Continuity of optimal value function).
*If [\[as:convex,as:cost\]](#as:convex,as:cost){reference-type="ref" reference="as:convex,as:cost"} hold and $\mathbb{Z},\mathbb{X}_f,\mathbb{W}$ are polyhedral, then for all $d\in\mathcal{D}$, $V^0_d:\mathcal{X}\rightarrow\mathbb{R}_{\geq 0}$ is continuous.* *Proof.* Since $\mathbb{Z},\mathbb{X}_f,\mathbb{W}$ are polyhedral, we have from @goulart:2007 [Cor. 3.8] that $\mathcal{X}$ is polyhedral. From the Proof of [Lemma 1](#lem:compact){reference-type="ref" reference="lem:compact"}, we have that $\mathcal{Z}\subseteq\mathbb{R}^n\times(\mathbb{M}\times\mathbb{U})$ in which $\mathbb{U}$ and $\mathbb{M}$ are bounded. Recall from the Proof of [Proposition 9](#prop:exist){reference-type="ref" reference="prop:exist"} that $V_d:\mathbb{R}^n\times\Theta\rightarrow\mathbb{R}_{\geq 0}$ is continuous. Therefore, $$V^0_d(x) = \min_{\theta}\left\{V_d(x,\theta) \mid (x,\theta)\in\mathcal{Z}\right\} \quad \text{and} \quad \theta^0_d(x) = \arg\min_{\theta}\left\{V_d(x,\theta) \mid (x,\theta)\in\mathcal{Z}\right\}$$ From @rawlings:mayne:diehl:2020 [Thm. C.34], we have that $V^0_d:\mathcal{X}\rightarrow\mathbb{R}$ is continuous and $\theta^0_d:\mathcal{X}\rightrightarrows \Theta$ is outer semicontinuous. ◻ [^1]: This strategy is in fact optimal in terms of minimizing $\textnormal{tr}(G'PG\Sigma)$. See [8](#app:optP){reference-type="ref" reference="app:optP"}. [^2]: Even for RMPC, i.e., $d=(0,0)$, we observe only sublinear convergence for the FW algorithm.
## Stability of homogeneous control system with quantized state measurements Due to nonlinearity, the design of finite/fixed-time controllers with quantized state measurements is a non-trivial task. To tackle this problem we propose the following two step procedure: *Step 1*: First, we design a continuous-time closed-loop system with finite/fixed-time stabilizing feedback. Next, we replace the continuous states with quantized states in the feedback law, and derive some sufficient finite/fixed-time stability conditions of the closed-loop system with quantization. *Step 2*: Leveraging the derived quantization conditions, we design a quantization function using a logarithmic quantizer such that finite/fixed-time stabilization is preserved. The continuous-time homogeneous finite/fixed-time stabilizer [\[eq:hom_con_P\]](#eq:hom_con_P){reference-type="eqref" reference="eq:hom_con_P"} is assumed to be designed for the LTI system by means of Theorem [\[thm:hom_linear\]](#thm:hom_linear){reference-type="ref" reference="thm:hom_linear"}. Our goal now is to derive a sufficient condition for the quantization function to preserve finite/fixed-time stability of the closed-loop system in the case of quantized state measurements. The homogeneous controller [\[eq:hom_con_P\]](#eq:hom_con_P){reference-type="eqref" reference="eq:hom_con_P"} contains two terms. The linear one $K_0x$ is aimed at homogenization of $A_0=A+BK_0$ with non-zero degree (see Lemma [\[lem:nilpotent\]](#lem:nilpotent){reference-type="ref" reference="lem:nilpotent"}), while the nonlinear term stabilizes the closed-loop homogeneous system. If $A$ is a nilpotent matrix, then $K_0$ may be equal to zero. Below we study both cases $K_0\neq\mathbf{0}$ and $K_0=\mathbf{0}$, since, in the latter case, the control law also becomes a homogeneous function and restrictions on the quantizer can be relaxed.
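Step 2 calls for a logarithmic quantizer whose error is proportional to the magnitude of its argument. A minimal componentwise sketch follows; the construction here is ours for illustration (rounding in log-space with level ratio $\rho = 1/(1+\kappa)^2$), and it yields the Euclidean-type bound of the form [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"}, whereas the homogeneous-norm bound [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} requires the dilation-adapted construction developed in the sequel:

```python
import numpy as np

def log_quantize(x, kappa=0.2):
    """Componentwise logarithmic quantizer with levels {0} U {± rho^j : j in Z}.

    Rounding |x_i| to the nearest level in log-space, with level ratio
    rho = 1/(1 + kappa)**2, guarantees |q(x_i) - x_i| <= kappa*|x_i|."""
    rho = 1.0 / (1.0 + kappa) ** 2
    x = np.asarray(x, dtype=float)
    q = np.zeros_like(x)
    nz = x != 0.0
    j = np.round(np.log(np.abs(x[nz])) / np.log(rho))
    q[nz] = np.sign(x[nz]) * rho ** j
    return q

# Relative error bound holds on a dense grid (q(0) = 0 by construction).
xs = np.linspace(-3.0, 3.0, 1001)
qs = log_quantize(xs, kappa=0.2)
assert np.all(np.abs(qs - xs) <= 0.2 * np.abs(xs) + 1e-12)
assert log_quantize(np.array([0.0]))[0] == 0.0
```

The countable level set matches the requirement that $\mathfrak{q}$ maps into a discrete set $\mathcal{Q}$, while the relative (rather than absolute) error bound is what allows the stability analysis below to go through near the origin.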
[\[thm:main\]]{#thm:main label="thm:main"} *Let all parameters of the homogeneous feedback [\[eq:hom_con_P\]](#eq:hom_con_P){reference-type="eqref" reference="eq:hom_con_P"} be designed as in Theorem [\[thm:hom_linear\]](#thm:hom_linear){reference-type="ref" reference="thm:hom_linear"}, and define the quantized feedback as: $$\label{eq:u_q} u(\mathfrak{q}(x))=K_0\mathfrak{q}(x)+ \|\mathfrak{q}(x)\|_\dn^{1+\mu}K\dn(-\ln\|\mathfrak{q}(x)\|_\dn)\mathfrak{q}(x)$$ where $\mathfrak{q}:\mathbb{R}^n\mapsto \mathcal{Q}$ is a quantization function. If $K_0=\mathbf{0}$ and the quantization error satisfies $$\label{eq:con1} \|\mathfrak{q}(x)-x\|_\dn\le \epsilon \|x\|_\dn, \quad \forall x\in\mathbb{R}^n, \ \epsilon\in\mathbb{R}_+,$$* *then, for a sufficiently small $\epsilon$, the system [\[eq:q_sys\]](#eq:q_sys){reference-type="eqref" reference="eq:q_sys"}, [\[eq:u_q\]](#eq:u_q){reference-type="eqref" reference="eq:u_q"} is* - *globally uniformly finite-time stable for $\mu<0$;* - *globally uniformly nearly fixed-time stable for $\mu>0$.* *If $K_0\neq \mathbf{0}$ and the quantization error additionally satisfies $$\label{eq:con2} \|\mathfrak{q}(x)-x\|\le\kappa\|x\|, \quad \forall x\in\mathbb{R}^n, \ \kappa\in\mathbb{R}_+,$$ then the system [\[eq:q_sys\]](#eq:q_sys){reference-type="eqref" reference="eq:q_sys"}, [\[eq:u_q\]](#eq:u_q){reference-type="eqref" reference="eq:u_q"} is* - *locally uniformly finite-time stable for $\mu<\min\{\underline{\eta}-1, 0\}$;* - *globally uniformly practically fixed-time stable [^1] for $\mu\!>\!\max\{\overline{\eta}\!-\!1, 0\}$*. *Proof.* The inequality [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} implies that $\mathfrak{q}(\boldsymbol{0})=\boldsymbol{0}$ and $x=\boldsymbol{0}$ is the equilibrium of the system. 
Since $\|x\|_\dn$ serves as a Lyapunov function for the quantization-free system [\[eq:lin_sys\]](#eq:lin_sys){reference-type="eqref" reference="eq:lin_sys"}, let us calculate its derivative along the trajectories of the quantized system [\[eq:q_sys\]](#eq:q_sys){reference-type="eqref" reference="eq:q_sys"}: $$\begin{aligned} &\tfrac{d}{dt}\|x\|_\dn = \|x\|_\dn\tfrac{x^\top\dn(-s_{x})P\dn(-s_{x})}{x^\top\dn(-s_{x})PG_\dn \dn(-s_{x})x} \left(Ax+ Bu(\mathfrak{q}(x))\right)\\ &\!=\!\|x\|_\dn\tfrac{x^\top\dn(-s_{x})P\dn(-s_{x})}{x^\top\dn(-s_{x})PG_\dn \dn(-s_{x})x} \!\left(Ax\!+\! Bu(x) \!+\! Bu(\mathfrak{q}_x)\!-\!Bu(x)\right) \end{aligned}$$ where $s_{x}=\ln\|x\|_\dn$ and $\mathfrak{q}_x=\mathfrak{q}(x)$. The equation [\[eq:G0\]](#eq:G0){reference-type="eqref" reference="eq:G0"} leads to $A_0\dn(s)=e^{\mu s}\dn(s)A_0$, $\dn(s)B = e^{s}B, \forall s\in \R$ and $$\tfrac{d}{dt}\|x\|_\dn = -\rho\|x\|_\dn^{1+\mu}+\tfrac{x^\top\dn(-s_{x})PB\left(u(\mathfrak{q}_x)-u(x)\right)}{x^\top\dn(-s_{x})PG_\dn \dn(-s_{x})x}.$$ The control $u(x)$ consists of two parts: the linear part $K_0x$ and the homogeneous part $\tilde{u}(x) = \|x\|_{\dn}^{1+\mu}K\dn(-\ln\|x\|_{\dn})x$. Therefore, we derive $$\begin{aligned} \tfrac{d}{dt}\|x\|_\dn \!=\! -\rho\|x\|_\dn^{1+\mu}&+\tfrac{x^\top\dn(-s_{x})PBK_0(\mathfrak{q}_x-x)}{x^\top\dn(-s_{x})PG_\dn \dn(-s_{x})x}\\ & + \tfrac{x^\top\dn(-s_{x})PB\left(\tilde{u}(\mathfrak{q}_x)-\tilde{u}(x)\right) }{x^\top\dn(-s_{x})PG_\dn \dn(-s_{x})x}. \end{aligned}$$ Recalling that, by the definition of the canonical homogeneous norm, $\|\dn(-s_{x})x\|=1$, and using the Cauchy--Schwarz inequality, we obtain $$\tfrac{d}{dt}\|x\|_\dn \!\le\! -\!\rho\|x\|_\dn^{1+\mu}\!+\!c_1\|\mathfrak{q}_x\!-\!x\| \!+\! \tfrac{x^\top\dn(-s_{x})PB\left(\tilde{u}(\mathfrak{q}_x)\!-\!\tilde{u}(x)\right) }{x^\top\dn(-s_{x})PG_\dn \dn(-s_{x})x},$$ where $c_1 = \tfrac{\sqrt{\lambda_{\max}(K_0^\top B^\top PBK_0)}}{\lambda_{\min}(P^{1/2}G_\dn P^{-1/2} + P^{-1/2}G_\dn^\top P^{1/2})}$.
Since $B\tilde{u}(\dn(s)z) = e^{(1+\mu)s}B\tilde{u}(z), \forall s\in \R, \forall z\in \R^n$, we obtain $$\label{eq:d_norm_1} \begin{aligned} \tfrac{d}{dt}\|x\|_\dn \!\le\!& -\rho\|x\|_\dn^{1+\mu}+c_1\|\mathfrak{q}_x-x\|\\ & + c_2\|x\|_\dn^{1+\mu}\|\tilde{u}\left(\dn(-s_x)\mathfrak{q}_x\right)\!-\!\tilde{u}(\dn(-s_{x})x)\|, \end{aligned}$$ where $c_2 = \tfrac{\sqrt{\lambda_{\max}(K^\top B^\top PBK)}}{\lambda_{\min}(P^{1/2}G_\dn P^{-1/2} + P^{-1/2}G_\dn^\top P^{1/2})}$. Taking into account the monotonicity of the dilation $\dn$ (see Definition [\[def:monocity\]](#def:monocity){reference-type="eqref" reference="def:monocity"}), the condition [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} implies that $$\label{eq:con_d} \begin{aligned} &\quad \;\;e^{-\ln\epsilon}e^{-\ln\|x\|_\dn}\|\mathfrak{q}_x-x\|_{\dn}\le 1\\ &\Rightarrow\|\dn(-\ln\epsilon)\dn(-\ln\|x\|_\dn)(\mathfrak{q}_x-x)\|_{\dn}\le 1\\ &\Rightarrow \|\dn(-\ln\epsilon)\dn(-\ln\|x\|_\dn)(\mathfrak{q}_x-x)\|\le 1\\ &\Rightarrow \lfloor\dn(-\ln\epsilon)\rfloor\cdot\|\dn(-\ln\|x\|_\dn)(\mathfrak{q}_x-x)\|\le 1. \end{aligned}$$ Due to [\[eq:d_bound\]](#eq:d_bound){reference-type="eqref" reference="eq:d_bound"}, there exists $\alpha\in\mathcal{K}$ such that $\tfrac{1}{\lfloor\dn(-\ln\epsilon)\rfloor}\le \alpha(\epsilon)$, leading to $$\begin{aligned} \|\dn(-\ln\|x\|_\dn)\mathfrak{q}_x\|\! &=\! \|\dn(-\ln\|x\|_\dn)x \!+\! \dn(-\ln\|x\|_\dn)(\mathfrak{q}_x\!-\!x)\|\\ &\le 1+ \alpha(\epsilon) \end{aligned}$$ for all $x\neq 0$ with a sufficiently small $\epsilon>0$. Since $\dn(-\ln\|x\|_\dn)x$ lies on the unit sphere for any $x\neq 0$, for a sufficiently small $\epsilon>0$ there exists a compact set, which does not contain the origin, such that $\dn(-\ln\|x\|_\dn)\mathfrak{q}_x$ and $\dn(-\ln \|x\|_{\dn})x$ always belong to this compact set for all $x\neq \mathbf{0}$. Since $\tilde{u}$ is continuous on $\mathbb{R}^n\backslash\{\mathbf{0}\}$, it is uniformly continuous on this compact set.
Therefore, there exists a class-$\mathcal{K}$ function $\gamma$, such that $$\label{eq:class_K} \!\left\| \tilde{u}(\dn(-s_x)\mathfrak{q}_x)\!-\!\tilde{u}(\dn(-s_{x})x)\right\| \!\!\le\! \gamma\!\left(\|\dn(-s_{x})(\mathfrak{q}_x\!-\!x)\| \right)\!, \forall x\!\neq\! \mathbf{0}$$ Then, using the latter inequality and [\[eq:con_d\]](#eq:con_d){reference-type="eqref" reference="eq:con_d"}, we derive $$\label{eq:d_norm_2} \begin{aligned} \tfrac{d}{dt}\|x\|_\dn \!\le\! -(\rho-c_2\overline{\gamma}(\epsilon))\|x\|_\dn^{1+\mu}+c_1\|\mathfrak{q}_x-x\|, \end{aligned}$$ where $\overline{\gamma}=\gamma\circ\alpha \in\mathcal{K}$. For $K_0=\mathbf{0}$ (so that $c_1 = 0$) we derive $$\begin{aligned} \tfrac{d}{dt}\|x\|_\dn \!\le\! -(\rho-c_2\overline{\gamma}(\epsilon))\|x\|_\dn^{1+\mu}. \end{aligned}$$ If $0<\epsilon< \overline{\gamma}^{-1}\left(\tfrac{\rho}{c_2}\right)$, where $\overline{\gamma}^{-1}$ is the inverse function of $\overline{\gamma}$, then the system is globally finite-time stable for $\mu<0$, exponentially stable for $\mu=0$ and nearly fixed-time stable for $\mu>0$. If $K_0\neq \mathbf{0}$ (i.e., $c_1\neq 0$), let us consider the relation between the Euclidean norm and the canonical homogeneous norm given in Proposition [\[prop:3\]](#prop:3){reference-type="ref" reference="prop:3"}. This yields $$\left\{ \begin{aligned} &\|\mathfrak{q}_x\!-\!x\| \!\le\! \kappa\|x\|_\dn^{\underline{\eta}}, \ \|x\|_\dn\!\le\! 1,\\ &\|\mathfrak{q}_x\!-\!x\|\!\le\! \kappa\|x\|_\dn^{\overline{\eta}}, \ \|x\|_\dn\!\ge\! 1. \end{aligned} \right.$$ Using the latter estimate and [\[eq:d_norm_2\]](#eq:d_norm_2){reference-type="eqref" reference="eq:d_norm_2"}, we derive that $$\label{eq:d_norm_3} \tfrac{d\|x\|_\dn}{dt} \!\le\!\!\left\{\begin{aligned} &\!\!-\!\!\left[\rho\!-\!c_2\overline{\gamma}(\epsilon) \!-\! c_1\kappa\|x\|_\dn^{\underline{\eta}-(1+\mu)}\right]\|x\|_\dn^{1+\mu},\ \|x\|_\dn\!\le\!1,\\ &\!\!-\!\!\left[\rho\!-\!c_2\overline{\gamma}(\epsilon) \!-\!
c_1\kappa\|x\|_\dn^{\overline{\eta}-(1+\mu)}\right]\|x\|_\dn^{1+\mu},\ \|x\|_\dn\!\ge\!1. \end{aligned} \right.$$ for some $\kappa>0$. Let $0<\epsilon < \overline{\gamma}^{-1}(\frac{\rho}{c_2})$, and we have the following two cases: 1. If $\mu > \max\{\overline{\eta}-1,\underline{\eta}-1\}=\overline{\eta}-1$ we have $\|x\|_\dn^{\overline{\eta}-(1+\mu)} \to 0$ and $\|x\|_\dn^{\underline{\eta}-(1+\mu)} \to 0$ as $\|x\|_\dn \to \infty$. Thus, the system is globally practically fixed-time stable due to $\mu>0$. 2. If $\mu \le \min\{\overline{\eta}-1, \underline{\eta}-1\}=\underline{\eta}-1$ then $\|x\|_\dn^{\overline{\eta}-(1+\mu)} \to 0$ and $\|x\|_\dn^{\underline{\eta}-(1+\mu)} \to 0$ as $\|x\|_\dn \to 0$. Hence, the system [\[eq:q_sys\]](#eq:q_sys){reference-type="eqref" reference="eq:q_sys"} is locally finite-time stable if $\mu<0$. The proof is complete. $\hfill \square$ ◻ For $K_0\neq 0$, the region of attraction (resp., the attractive set) of locally finite-time (resp., globally practically fixed-time) stable system [\[eq:q_sys\]](#eq:q_sys){reference-type="eqref" reference="eq:q_sys"}, [\[eq:u_q\]](#eq:u_q){reference-type="eqref" reference="eq:u_q"} with $\mu<0$ (resp. $\mu>0)$ can be tuned arbitrarily large (resp., small) by a selection of a small enough $0<\epsilon < \overline{\gamma}^{-1}(\frac{\rho}{c_2})$ and a small enough $\kappa>0$. *The condition $\|\mathfrak{q}(x) - x\|\le\epsilon\|x\|$ is sufficient for exponential stabilization of a linear control system with states quantization (see, e.g., [@Fu_etal_2005_TAC], [@bullo_Liberzon2006TAC]). Similarly, for $\dn$-homogeneous LTI system with the $\dn$-homogeneous quantized feedback, the finite/nearly fixed-time stabilization condition is expressed in the same form but utilizing the canonical homogeneous norm [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}. Furthermore, for $G_\dn = I_n$ (i.e., $\mu=0$) we have $\|x\|_{\dn}=\|x\|$. 
This corresponds to the linear case considered above.* At first sight, the restriction on the quantizer in Theorem [\[thm:main\]](#thm:main){reference-type="ref" reference="thm:main"} is not easy to check, since it is expressed in terms of the implicitly defined canonical homogeneous norm. However, in the next section, we show that using the well-known logarithmic quantizer, the required condition can be satisfied. ## Finite/fixed-time stabilization under logarithmic quantization The objective of this section is to study the feasibility of the finite/fixed-time stabilization problem with state quantization using a logarithmic quantizer (see Fig. [1](#fig:quantizer){reference-type="ref" reference="fig:quantizer"}). Let us briefly recall the corresponding definitions. The distinguishing property of a logarithmic quantizer is that the quantization error decreases as the states approach the origin. This allows the convergence/stability property of the original system to be preserved. The ***logarithmic quantizer*** is described by $$\label{eq:log_quantizer} q(\phi)\!=\!\left\{ \begin{aligned} &\nu^i \zeta_0, \ \tfrac{1}{1+\delta} \nu^i \zeta_0 \!<\! \phi \!\leq \tfrac{1}{1-\delta} \nu^i \zeta_0, i\!=\!0, \pm 1, \pm 2, \ldots \\ & 0, \ \qquad \phi = 0\\ & -q(-\phi), \ \phi<0 \end{aligned}\right.$$ where $\phi\in\mathbb{R}$ is the quantizer input, $\zeta_0\in\mathbb{R}_+$, $\nu \in(0,1)$ represents the quantization density, and $\delta=(1-\nu) /(1+\nu)$. A small $\nu$ (resp., large $\delta$) implies a coarse quantization, while a large $\nu$ (resp., a small $\delta$) means a dense quantization. For the logarithmic quantizer, the quantization error is sector bounded [@Fu_etal_2005_TAC]: $$\label{eq:log_q} |q(\phi) - \phi|\le \delta|\phi|, \ \delta \in (0,1)$$ and vanishes as the state approaches the origin.
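The piecewise definition [\[eq:log_quantizer\]](#eq:log_quantizer){reference-type="eqref" reference="eq:log_quantizer"} and the sector bound [\[eq:log_q\]](#eq:log_q){reference-type="eqref" reference="eq:log_q"} can be checked numerically. The following is a minimal Python sketch; the parameter values $\nu=0.6$, $\zeta_0=1$ and the helper name `log_quantize` are illustrative assumptions, not taken from the paper:

```python
import math
import random

def log_quantize(phi, nu=0.6, zeta0=1.0):
    """Logarithmic quantizer: returns the level nu**i * zeta0 whose sector
    (nu**i*zeta0/(1+delta), nu**i*zeta0/(1-delta)] contains phi, where
    delta = (1-nu)/(1+nu); q(0) = 0 and q(-phi) = -q(phi)."""
    if phi == 0.0:
        return 0.0
    if phi < 0.0:
        return -log_quantize(-phi, nu, zeta0)
    delta = (1.0 - nu) / (1.0 + nu)
    # the sectors tile (0, inf), so exactly one integer i fits
    i = math.floor(math.log(phi * (1.0 + delta) / zeta0) / math.log(nu)) + 1
    # guard against floating-point rounding at sector boundaries
    while phi > nu**i * zeta0 / (1.0 - delta):
        i -= 1
    while phi <= nu**i * zeta0 / (1.0 + delta):
        i += 1
    return nu**i * zeta0

# sector-bound check |q(phi) - phi| <= delta * |phi| on random inputs
nu = 0.6
delta = (1.0 - nu) / (1.0 + nu)
random.seed(0)
for _ in range(10_000):
    phi = random.uniform(-100.0, 100.0)
    assert abs(log_quantize(phi, nu) - phi) <= delta * abs(phi) + 1e-9
print("sector bound holds with delta =", delta)
```

Note that the quantization error is relative: the levels accumulate geometrically toward zero, which is exactly why the error vanishes as the state approaches the origin.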
![ Logarithmic quantizer $q(x)$.](log_quantizer.eps){#fig:quantizer width="40%"} As shown in [\[eq:log_q\]](#eq:log_q){reference-type="eqref" reference="eq:log_q"}, the quantization error of the logarithmic quantizer is formulated as an absolute value. To validate the stability condition [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}, a new relation between $|\cdot|$ and $\|\cdot\|_\dn$ has been derived in Lemma [\[lem:1\]](#lem:1){reference-type="ref" reference="lem:1"}. Based on the results of Lemma [\[lem:1\]](#lem:1){reference-type="ref" reference="lem:1"}, we construct a logarithmic quantization function $\mathfrak{q}(x)$ that employs the transformation matrix $\Gamma$ and fulfills the conditions of Theorem [\[thm:main\]](#thm:main){reference-type="ref" reference="thm:main"}. [\[cor:2\]]{#cor:2 label="cor:2"} *Let $G_{\dn}$ be a diagonalizable generator of the linear dilation in $\R^n$: $$\exists \Gamma\!\in\! \R^{n\times n} : \;\;G_{\dn}\!=\!\Gamma \Lambda \Gamma^{-1}, \;\; \Lambda\!=\!\mathrm{diag}(\lambda_1,\lambda_2,...,\lambda_n)\!\succ\! 0$$ and the canonical homogeneous norm $\|x\|_{\dn}$ be induced by the weighted Euclidean norm $\|x\|=\sqrt{x^{\top}Px}$, where $x\in \R^n$ and $0\prec P=P^{\top}\in \R^{n\times n}$ satisfies [\[eq:mon_cond\]](#eq:mon_cond){reference-type="eqref" reference="eq:mon_cond"}.
If $q$ is a logarithmic quantization function with parameter $\delta>0$ and* $$\label{eq:log_quan_Gamma} \mathfrak{q}(x) = \Gamma[q(\xi_{x,1}), q(\xi_{x,2}),\cdots, q(\xi_{x,n})]^\top,$$ *where $\xi_{x} = \Gamma^{-1}x = [\xi_{x,1}, \xi_{x,2},\cdots, \xi_{x,n}]^\top$, $x\in\mathbb{R}^n$, then for any $\epsilon>0$ and any $\kappa>0$ there exists $\delta>0$ such that the inequalities [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}, [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} are fulfilled simultaneously.* *Proof.* On the one hand, from the formula of quantization function $\mathfrak{q}$, we have $$\Gamma^{-1}(\mathfrak{q}(x)-x) \!=\! [q(\xi_{x,1})-\xi_{x,1}, q(\xi_{x,2})-\xi_{x,2},\cdots, q(\xi_{x,n})-\xi_{x,n}]^\top,$$ where $\Gamma^{-1}x = [\xi_{x,1}, \xi_{x,2},\cdots, \xi_{x,n}]^\top$. Then, since $G_\dn$ is diagonalizable, according to the quantization error of the logarithmic quantizer [\[eq:log_q\]](#eq:log_q){reference-type="eqref" reference="eq:log_q"} and Lemma [\[lem:1\]](#lem:1){reference-type="ref" reference="lem:1"}, we conclude that for any given $\epsilon>0$, there exists a sufficiently small $\delta>0$ such that the inequality $|q(\xi_{x,i})-\xi_{x,i}|\le \delta |\xi_{x,i}|$, $\forall i=1,2,\cdots, n$ implies the inequality [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}. 
On the other hand, since $\Gamma$ is invertible, the matrix $\Gamma^{-\top}\Gamma^{-1}$ is positive definite; hence, the inequality [\[eq:log_q\]](#eq:log_q){reference-type="eqref" reference="eq:log_q"} also implies that $$\begin{aligned} \|\mathfrak{q}(x) - x\|_2& = \|\Gamma\left([q(\xi_{x,1}), q(\xi_{x,2}),\cdots, q(\xi_{x,n})]^\top - \xi_x\right)\|_2\\ &\le \lambda_{\max}^{1/2}(\Gamma^\top \Gamma) \left(\sum_{i=1}^{n}|q(\xi_{x,i})-\xi_{x,i}|^2\right)^{1/2}\\ &\le \lambda_{\max}^{1/2}(\Gamma^\top \Gamma) \left(\sum_{i=1}^{n}|\delta\xi_{x,i}|^2\right)^{1/2}\\ &\le \delta\lambda_{\max}^{1/2}(\Gamma^\top \Gamma) \lambda_{\max}^{1/2}(\Gamma^{-\top} \Gamma^{-1})\|x\|_2. \end{aligned}$$ The latter yields [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} with $$\label{eq:kappa} \kappa=\delta\lambda_{\max}^{1/2}(\Gamma^\top \Gamma) \lambda_{\max}^{1/2}(\Gamma^{-\top} \Gamma^{-1})\tfrac{\sqrt{\lambda_{\max}(P)}}{\sqrt{\lambda_{\min}(P)}}.$$ The proof is complete. $\hfill \square$ ◻ The proven corollary immediately implies that all conditions of Theorem [\[thm:main\]](#thm:main){reference-type="ref" reference="thm:main"} are fulfilled for the constructed quantizer [\[eq:log_quan_Gamma\]](#eq:log_quan_Gamma){reference-type="eqref" reference="eq:log_quan_Gamma"} with a sufficiently small $\delta>0$ provided that the linear dilation $\dn$ has a diagonalizable generator. Therefore, the corresponding quantized control [\[eq:u_q\]](#eq:u_q){reference-type="eqref" reference="eq:u_q"} is a global (local) homogeneous stabilizer of the system [\[eq:lin_sys\]](#eq:lin_sys){reference-type="eqref" reference="eq:lin_sys"} if $K_0=\mathbf{0}$ (resp., $K_0\neq \mathbf{0}$). *For the controlled chain of integrators: $$A=\left(\begin{smallmatrix} 0 & 1 & 0 & ... & 0 & 0\\ 0 & 0 &1 & ...& 0 & 0\\ ... & ... & ... & ... & ... & ...\\ 0 & 0 & 0 & ... & 0 & 1\\ 0 & 0 & 0 & ... & 0 & 0\\ \end{smallmatrix}\right), \quad B=\left(\begin{smallmatrix} 0 \\ 0\\ ...
\\ 0 \\ 1\\ \end{smallmatrix}\right)$$ the dilation $\dn$ has a diagonal generator, so $\Gamma=I_n$ and all conditions of Theorem [\[thm:main\]](#thm:main){reference-type="ref" reference="thm:main"} hold even for the classical logarithmic quantizer: $\mathfrak{q}(x)\!=\![q(x_1),...,q(x_n)]^{\top}$, $x\!=\!(x_1,...,x_n)^{\top}\!\in\! \R^n$.* The global fixed-time stabilizer can be designed by a combination of various homogeneous stabilizers with negative and positive degrees. [\[cor:comu\]]{#cor:comu label="cor:comu"} *Let $G_0$, $Y_0$, and $K_{0}=Y_0(G_0-I_n)^{-1}$, $A_0=A+BK_0$ be defined as in Theorem [\[thm:hom_linear\]](#thm:hom_linear){reference-type="ref" reference="thm:hom_linear"}. Let $G_0$ be diagonalizable $$\exists \Gamma\!\in\! \R^{n\times n} : \;\;G_0\!=\!\Gamma \Lambda_0 \Gamma^{-1}, \;\; \Lambda_0\!=\!\mathrm{diag}(\tilde \lambda_1,\tilde \lambda_2,...,\tilde \lambda_n)\!\succ\! 0$$ and the logarithmic quantizer $\mathfrak{q}$ be defined by the formula [\[eq:log_quan_Gamma\]](#eq:log_quan_Gamma){reference-type="eqref" reference="eq:log_quan_Gamma"} with a parameter $\delta>0$. Suppose that the feedback control with quantized states is defined as follows: $$\label{eq:u_c} u \!=\!\! \left\{ \begin{smallmatrix} \!\!\!K_0\mathfrak{q}(x) + \|\mathfrak{q}(x)\|_{\dn_1}^{1+\mu_1}\!K\dn_1(\!-\!\ln\|\mathfrak{q}(x)\|_{\dn_1}\!)\mathfrak{q}(x) &\text{ if } & \|\mathfrak{q}(x)\|\ge 1, \\ \!\!\!K_0\mathfrak{q}(x) + \|\mathfrak{q}(x)\|_{\dn_2}^{1+\mu_2}\!K\dn_2(\!-\!\ln\|\mathfrak{q}(x)\|_{\dn_2}\!)\mathfrak{q}(x) & \text{ if } & \|\mathfrak{q}(x)\|< 1, \end{smallmatrix} \right.$$ where $K= YX^{-1}$, $\dn_i(\cdot)$, $i=1,2$ are generated by $G_{\dn_i}=G_0+\mu_iI_n$, with $0\leq \mu_1\leq 1/n$ and $-1\leq \mu_2<0$, $\|\cdot\|_{\dn_i}$ are induced by the norm $\|x\|\!=\!$ $\sqrt{x^{\top} P x}$, $P=X^{-1}$. 
If $X \in \mathbb{R}^{n \times n}$ and $Y \in \mathbb{R}^{m \times n}$ satisfy the following algebraic system: $$\label{eq:LMI1}\small \left\{\!\begin{aligned} &\!X A_{0}^{\top}\!+\!A_{0} X\!+\!Y^{\top} B^{\top}\!+\!B Y\prec\!0, \quad X \succ 0,\\ &\!2(1+\mu_1)X\succeq X G_{\dn_1}^{\top}+G_{\dn_1} X \succ 0,\\ & XG_{\dn_2}^{\top}+G_{\dn_2} X \succeq 2(1+\mu_2)X, \end{aligned}\right.$$ then for sufficiently small $0\!<\!\delta\!<\!1$, the system [\[eq:lin_sys\]](#eq:lin_sys){reference-type="eqref" reference="eq:lin_sys"}, [\[eq:u_c\]](#eq:u_c){reference-type="eqref" reference="eq:u_c"} is:* - *globally uniformly finite-time stable for $\mu_1=0$;* - *globally uniformly fixed-time stable for $\mu_1>0$.* *Proof.* Let us consider the following Lyapunov function candidate: $$V = \left\{ \begin{aligned} &\|x\|_{\dn_1}, & \|x\|\ge 1\\ &\|x\|_{\dn_2}, & \|x\|\le 1\\ \end{aligned} \right.$$ Notice that $V\in C(\R^{n})\cap C^{1}(\R^{n}\backslash (S\cup\{\mathbf{0}\}))$, where $S=\{x\in \R^n: \|x\|=1\}$. For sufficiently small $\delta\in (0,1)$ we have $$(1-\kappa)\|x\| \leq \|\mathfrak{q}(x)\| \leq (1+\kappa)\|x\|,$$ where $\kappa=\kappa(\delta)$ is given by [\[eq:kappa\]](#eq:kappa){reference-type="eqref" reference="eq:kappa"}. Then $$\tfrac{1}{(1-\kappa)}\!\leq\! \|x\| \Rightarrow 1 \!\leq\! \|\mathfrak{q}(x)\|,\!\; \|x\|\!\le\! \tfrac{1}{(1+\kappa)}\Rightarrow \|\mathfrak{q}(x)\|\!\le\! 1.$$ Let $\Omega_1:=\{x\in\mathbb{R}^n:\tfrac{1}{(1-\kappa)}\leq \|x\|\}$, $\Omega_2:=\{x\in\mathbb{R}^n:\|x\|\le \tfrac{1}{(1+\kappa)}\}$, $\Omega_3:=\{x\in\mathbb{R}^n:\tfrac{1}{(1+\kappa)}\leq \|x\|\le \tfrac{1}{(1-\kappa)}\}$. By Corollary [\[cor:2\]](#cor:2){reference-type="ref" reference="cor:2"}, the logarithmic quantizer $\mathfrak{q}(x)$ satisfies the conditions [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} and [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} for a sufficiently small $0 < \delta < 1$.
Thus, according to Theorem [\[thm:main\]](#thm:main){reference-type="ref" reference="thm:main"} and [\[eq:d_norm_3\]](#eq:d_norm_3){reference-type="eqref" reference="eq:d_norm_3"}, the derivative of $V$ admits the estimate $$\label{eq:d_norm_4} \dot{V} \!\le\!\!\left\{\begin{aligned} &\!\!-\!\!\left[\rho_1\!-\!c_{2,\mu_1}\overline{\gamma}_1(\epsilon) \!-\! c_{1,\mu_1}\kappa\|x\|_{\dn_1}^{\overline{\eta}_1-(1+\mu_1)}\right]\|x\|_{\dn_1}^{1+\mu_1},\ x\!\in\!\Omega_1\\ &\!\!-\!\!\left[\rho_2\!-\!c_{2,\mu_2}\overline{\gamma}_2(\epsilon) \!-\! c_{1,\mu_2}\kappa\|x\|_{\dn_2}^{\underline{\eta}_2-(1+\mu_2)}\right]\|x\|_{\dn_2}^{1+\mu_2},\ x\!\in\!\Omega_2, \end{aligned} \right.$$ where $c_{1,\mu_i} \!=\! \tfrac{\lambda_{\max}^{1/2}(K_0^\top B^\top PBK_0)}{h_i}$, $c_{2,\mu_i}\!=\!\tfrac{\lambda_{\max}^{1/2}(K^\top B^\top PBK)}{h_i}$,\ $\rho_i = -\tfrac{\lambda_{\max}(X A_{0}^{\top}\!+\!A_{0} X\!+\!Y^{\top} B^{\top}\!+\!B Y)}{h_i}$,\ $h_i=\lambda_{\min}(P^{1/2}G_{\dn_i} P^{-1/2} \!+\! P^{-1/2}G_{\dn_i}^\top P^{1/2})$, $i=1,2$. The functions $\overline{\gamma}_1, \overline{\gamma}_2\in\mathcal{K}$ are obtained from [\[eq:con_d\]](#eq:con_d){reference-type="eqref" reference="eq:con_d"}, [\[eq:class_K\]](#eq:class_K){reference-type="eqref" reference="eq:class_K"} with $(\dn_i,\mu_i)$, $i=1,2$. The LMIs defined in ([\[eq:LMI1\]](#eq:LMI1){reference-type="ref" reference="eq:LMI1"}) guarantee $\overline{\eta}_1-1\le \mu_1$ and $\mu_2\le\underline{\eta}_2-1$, where $\overline{\eta}_i\!=\!\tfrac{1}{2} \lambda_{\max }(X^{-\frac{1}{2}} G_{\dn_i} X^{\frac{1}{2}}+X^{\frac{1}{2}} G_{\dn_i}^{\top} X^{-\frac{1}{2}})$, $\underline{\eta}_i=\tfrac{1}{2} \lambda_{\min }(X^{-\frac{1}{2}} G_{\dn_i} X^{\frac{1}{2}}+X^{\frac{1}{2}} G_{\dn_i}^{\top} X^{-\frac{1}{2}})$, $i=1,2$.
Then, $\|x\|_{\dn_1}^{\overline{\eta}_1-(1+\mu_1)}$ decreases monotonically to zero as $\|x\|_{\dn_1}$ increases to infinity, and $\|x\|_{\dn_2}^{\underline{\eta}_2-(1+\mu_2)}$ decreases monotonically to zero as $\|x\|_{\dn_2}$ decreases to zero. Thus, we conclude that for a sufficiently small $\delta$ we have $$\dot{V}<0, \ x\in\Omega_1\cup\Omega_2.$$ On the other hand, for $\tfrac{1}{1+\kappa}\le\|x\|\le \tfrac{1}{1-\kappa}$, the quantization error satisfies $$\|\mathfrak{q}_x-x\|\le \tfrac{\kappa}{1-\kappa}.$$ Using [\[eq:d_norm_2\]](#eq:d_norm_2){reference-type="eqref" reference="eq:d_norm_2"}, for a sufficiently small $\delta>0$ we derive $$\label{eq:d_norm_5} \begin{aligned} \dot{V} \!\le\! \max\limits_{i=1,2}\left\{\!-\!\tfrac{\rho_i\!-\!c_{2,\mu_i}\overline{\gamma}_i(\epsilon)}{(1+\kappa)^{1+\mu_i}}\!+\!c_{1,\mu_i}\tfrac{\kappa}{1-\kappa}\right\}<0,\ x\!\in\!\Omega_3 \end{aligned}$$ Therefore, $V$ is a global Lyapunov function and the system is globally asymptotically stable. Since $\mu_2<0$, by Theorem [\[thm:main\]](#thm:main){reference-type="ref" reference="thm:main"} the system is finite-time stable if $\mu_1=0$ and fixed-time stable if $\mu_1>0$. The proof is complete. $\hfill \square$ ◻ [^1]: The system [\[eq:lin_sys\]](#eq:lin_sys){reference-type="eqref" reference="eq:lin_sys"} is globally uniformly practically - Lyapunov stable if $\exists r\!\in\! \mathbb{R}_+, \exists \chi\in\mathcal{K}$ : $\|x(t)\|\!\le\! r + \chi(\|x_0\|)$, $\forall t\!\ge\! t_0$; - fixed-time stable if it is globally uniformly practically Lyapunov stable and $\exists \tilde T>0$ such that $\|x(t)\| \leq r, \;\; \forall t\geq \tilde T, \ \forall x_0\in \R^n.$
{ "id": "2309.01412", "title": "Finite/fixed-time Stabilization of Linear Systems with States\n Quantization", "authors": "Yu Zhou, Andrey Polyakov, Gang Zheng", "categories": "math.OC cs.SY eess.SY", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We present a distributed conjugate gradient method for distributed optimization problems, where each agent computes an optimal solution of the problem locally *without* any central computation or coordination, while communicating with its immediate, one-hop neighbors over a communication network. Each agent updates its local problem variable using an estimate of the *average conjugate direction* across the network, computed via a dynamic consensus approach. Our algorithm enables the agents to use *uncoordinated* step-sizes. We prove convergence of the local variable of each agent to the optimal solution of the aggregate optimization problem, without requiring decreasing step-sizes. In addition, we demonstrate the efficacy of our algorithm in distributed state estimation problems, and its robust counterparts, where we show its performance compared to existing distributed first-order optimization methods. author: - "Ola Shorinwa$^{1}$ and Mac Schwager$^{2}$ [^1] [^2] [^3]" bibliography: - references.bib title: "**Distributed Conjugate Gradient Method via Conjugate Direction Tracking**" --- # Introduction {#sec:introduction} A variety of problems in many disciplines can be formulated as distributed optimization problems, where a group of agents seek to compute the optimal estimate, action, or control that minimizes (or maximizes) a specified objective function. Examples of such problems include distributed target tracking [@zhang2018adaptive; @zhu2013distributed; @shorinwa2020distributed; @park2019distributed], pose/state/signal estimation in sensor/robotic networks [@rabbat2004distributed; @necoara2011parallel; @mateos2010distributed]; machine learning and statistical modeling [@konevcny2016federated; @li2020federated; @zhou2021communication]; process control [@nedic2018distributed; @wang2017distributed; @erseghe2014distributed]; and multi-agent planning and control [@rostami2017admm; @tang2019distributed; @shorinwa2023distributed]. 
In these problems, the data is collected and stored locally by each agent, with the additional constraint that no individual agent has access to all the problem data across the network. In many situations, the limited availability of communication and data storage resources, in addition to privacy regulations, preclude the aggregation of the problem data at a central location or node, effectively rendering *centralized optimization* methods infeasible. *Distributed optimization* enables each agent to compute an optimal solution via local computation procedures while communicating with its neighbors over a communication network. In essence, via distributed optimization, each agent *collaborates* with its neighbors to compute an optimal solution without access to the aggregate problem data. Some distributed optimization methods require a central coordinator for execution or coordination of some of the update procedures. These methods are often used in machine learning for parallel processing on a cluster of computing nodes, especially in problems involving large datasets. In contrast, in this work, we focus on *fully-distributed* algorithms that do not require a central node for coordination or computation. We derive a distributed conjugate gradient algorithm, termed DC-Grad, for distributed optimization problems. In our algorithm, each agent utilizes *first-order* information (i.e., gradients) of its local objective function to compute its local conjugate directions for updating its local estimate of the solution of the optimization problem at each iteration and communicates with its one-hop neighbor over a point-to-point communication network. Each agent does not share its local problem data, including its objective function and gradients, with other agents, preserving the *privacy* of the agents. For simplicity of exposition, we limit our analysis to distributed optimization problems with smooth, convex objective functions. 
We prove convergence of the local problem variables of all agents to the optimal solution of the aggregate optimization problem. We examine the performance of our distributed algorithm in comparison to notable existing distributed optimization methods in distributed state estimation and robust-state-estimation problems. In both problems, we show that our algorithm converges with the least communication overhead in densely-connected communication networks, with some additional computation overhead in comparison to the best-competing distributed algorithm DIGing-ATC. On sparsely-connected graphs, our algorithm performs similarly to other first-order distributed optimization methods. # Related Work {#sec:related_work} Distributed optimization methods have received significant attention, with many such methods developed from their centralized counterparts. Distributed first-order methods leverage the local gradients (i.e., first-order information) of each agent to iteratively improve each agent's local estimate of the optimal solution of the optimization problem, bearing similarities with centralized first-order methods such as gradient descent. Distributed incremental (sub)gradient methods require a central node that receives the local gradient information from each agent and performs the associated update step [@nedic2001distributed]. As such, these methods require a hub-spoke communication model --- where all the agents are connected to the central node (hub) --- or a ring communication model (a cyclic network), which is quite restrictive. Distributed (sub)gradient methods circumvent this limitation, enabling distributed optimization over arbitrary network topologies. At each iteration, each agent exchanges its local iterates and other auxiliary variables (such as estimates of the average gradient of the joint (global) objective function) with other neighboring agents.
In distributed (sub)gradient methods, each agent recursively updates its local estimate using its local (sub)gradient and *mixes* its estimates with the estimates of its neighbors via a convex combination, where the *mixing* step is achieved via average consensus [@nedic2009; @matei2011performance] or the push-sum technique [@olshevsky2009convergence; @benezit2010weighted]. Generally, distributed (sub)gradient methods require a diminishing step-size for convergence to the optimal solution in convex problems [@lobel2010distributed; @yuan2016convergence], which typically slows down convergence. With a constant step-size, these methods converge to a neighborhood of the optimal solution. Distributed gradient-tracking methods were developed to eliminate the need for diminishing step-sizes [@shi2015extra; @qu2017harnessing; @nedic2017achieving]. In these methods, in addition to the local estimate of the optimal solution, each agent maintains an estimate of the *average gradient* of the objective function and updates its local estimate of the optimal solution by taking a descent step in the direction of the estimated average gradient. Distributed gradient-tracking methods provide faster convergence guarantees with constant step-sizes. Further, diffusion-based distributed algorithms [@chen2012diffusion; @yuan2018exact; @yuan2018exact2] converge to the optimal solution with constant step-sizes. Distributed first-order methods have been derived for undirected [@shi2015extra; @xu2015augmented] and directed [@saadatniaki2018optimization; @zeng2017extrapush] networks, as well as static [@xi2017add; @xin2018linear] and time-varying [@nedic2017achieving] networks. In addition, acceleration schemes such as momentum and Nesterov acceleration have been applied to distributed first-order methods [@xin2019distributedNesterov; @qu2019accelerated; @lu2020nesterov]. 
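The gradient-tracking mechanism described above — each agent descends along a running estimate of the average gradient, updated by dynamic consensus — can be made concrete with a small Python sketch. The ring network, uniform mixing weights, quadratic local objectives, and step-size below are all illustrative assumptions, not a setup taken from any of the cited works:

```python
import numpy as np

# Toy setup: agent i holds f_i(x) = 0.5 * (x - b_i)^2, so the minimizer of
# the average objective (1/N) * sum_i f_i is mean(b) = 1.5.
N = 5
b = np.array([1.0, 3.0, -2.0, 5.0, 0.5])
grad = lambda x, i: x - b[i]  # local gradient of f_i

# Ring network with uniform (doubly stochastic) mixing weights.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0

alpha = 0.1                                      # constant step-size
x = np.zeros(N)                                  # local estimates
y = np.array([grad(x[i], i) for i in range(N)])  # tracks the average gradient

for _ in range(400):
    x_new = W @ x - alpha * y                    # mix, then descend along y
    # dynamic consensus: mix y and add the change in the local gradient
    y = W @ y + np.array([grad(x_new[i], i) - grad(x[i], i) for i in range(N)])
    x = x_new

print(np.round(x, 4))  # every agent's estimate is close to mean(b) = 1.5
```

Because $W$ is doubly stochastic, the average of the $y_i$ always equals the average of the current local gradients, which is what removes the need for a diminishing step-size.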
Distributed methods that leverage higher-order information have been developed for distributed optimization, including distributed quasi-Newton methods that approximate the inverse Hessian of the objective function [@Mokhtari2015; @Eisen2019; @mansoori2019fast]. Further, the alternating direction method of multipliers (ADMM) is amenable to consensus optimization problems. In ADMM, each agent maintains an estimate of the optimal solution in addition to dual variables associated with the consensus constraints between agents. However, ADMM, in its original form, requires a central node for computation of the dual update procedure. Fully-distributed variants of ADMM have been developed, addressing this limitation [@mateos2010distributed; @Ling2015; @chang2014multi; @farina2019distributed]. In general, ADMM methods are amenable to static, undirected communication networks. The conjugate gradient (CG) method was originally developed for computing the solution of a linear system of equations (i.e., $Ax = b$), where the matrix ${A \in \mathbb{R}^{n \times n}}$ is square, symmetric, and positive-definite [@hestenes1952methods; @hager2006survey]. More generally, the method applies to strongly-convex quadratic programming problems, where the conjugate gradient method is guaranteed to compute the optimal solution in at most $n$ iterations, in the absence of roundoff errors. The conjugate gradient method has been extended to nonlinear optimization problems (which include non-quadratic problems) [@dai1999nonlinear; @yuan2020conjugate]. In general, the conjugate gradient method provides faster convergence compared to gradient descent methods [@gilbert1992global; @yuan2019global; @shewchuk1994introduction]. Variants of the conjugate gradient method for parallel execution on multiple computing nodes (processors) have been developed [@ismail2013implementation; @chen2004implementing; @lanucara1999conjugate; @helfenstein2012parallel; @engelmann2021essentially].
These methods decompose the data matrix associated with the linear system of equations into individual components assigned to each processor, enabling parallelization of the matrix-vector operations arising in the conjugate gradient method, which constitute the major computational bottleneck in the CG method. However, these methods are only amenable to problems with hub-spoke communication models or all-to-all communication models. Some other distributed CG methods eliminate the need for a hub-spoke communication model [@xu2016distributed], but require a ring communication model, which does not support parallel execution of the update procedures, ultimately degrading the computational speed of the algorithm. The distributed variant [@ping2021dcg] allows for more general communication networks. Nonetheless, these CG methods are limited to solving a linear system of equations and do not consider a more general optimization problem. Few distributed CG methods for nonlinear optimization problems exist. The work in [@xu2020distributed] derives a distributed CG method for online optimization problems where mixing of information is achieved using the average consensus scheme. Like distributed (sub)gradient methods, this algorithm requires a diminishing step-size for convergence to the optimal solution, converging to a neighborhood of the optimal solution if a constant step-size is used. In this paper, we derive a distributed conjugate gradient method for a more general class of optimization problems and prove convergence of the algorithm to the optimal solution in convex problems with Lipschitz-continuous gradients. Moreover, we note that, in our algorithm, each agent can use uncoordinated constant step-sizes. # Notation and Preliminaries {#sec:preliminaries} In this paper, we denote the gradient of a function $f$ by $\nabla f$ and $g$, interchangeably. We denote the all-ones vector as ${\bm{1}_{n} \in \mathbb{R}^{n}}$.
We represent the inner-product of two matrices ${A \in \mathbb{R}^{m \times n}}$ and ${B \in \mathbb{R}^{m \times n}}$ as ${\langle A, B \rangle = \ensuremath{\mathrm{trace}\left({A^{\mathsf{T}}B}\right)}}$. We denote the standard scalar-vector product, matrix-vector product, and matrix-matrix product (composition) as ${A \cdot B}$, with the interpretation determined by the mathematical context. For a given matrix ${A \in \mathbb{R}^{m \times n}}$, we denote its spectral norm as ${\rho(A) = \left\Vert A \right\Vert_{2}}$. Further, we denote its Frobenius norm by $\left\Vert A \right\Vert_{F}$. Likewise, we define the mean of a matrix ${B \in \mathbb{R}^{N \times n}}$, computed across its rows, as ${\ensuremath{\overline{B}} = \frac{1}{N}\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}B \in \mathbb{R}^{N \times n}}$, where each row of $\ensuremath{\overline{B}}$ is the same. In addition, we define the consensus violation between the matrix ${B \in \mathbb{R}^{N \times n}}$ and its mean ${\ensuremath{\overline{B}} \in \mathbb{R}^{N \times n}}$ as ${\tilde{B} = B - \ensuremath{\overline{B}}}$. We denote the domain of a function $f$ as ${\mathrm{dom}(f)}$, the non-negative orthant as $\mathbb{R}_{+}$, and the strictly-positive orthant as $\mathbb{R}_{++}$. We introduce the following definitions that will be relevant to our discussion. **Definition 1** (Conjugacy). *Two vectors ${a, b \in \mathbb{R}^{n}}$ are conjugate with respect to a symmetric positive-definite matrix ${C \in \mathbb{R}^{n \times n}}$ if: $$a^{\mathsf{T}}Cb = \langle a, Cb \rangle = \langle Ca, b \rangle = 0.$$* **Definition 2** (Convex Function). *A function ${f: \mathbb{R}^{n} \rightarrow \mathbb{R}}$ is convex if for all ${x,y \in \mathrm{dom}(f)}$ and all ${\zeta \in [0,1]}$: $$f(\zeta x + (1 - \zeta)y) \leq \zeta f(x) + (1 - \zeta) f(y),$$ and the domain of $f$, ${\ensuremath{\mathrm{dom}\left({f}\right)} \subseteq \mathbb{R}^{n}}$, is convex.* **Definition 3** (Smoothness).
*A function ${f: \mathbb{R}^{n} \rightarrow \mathbb{R}}$ is $L$-smooth if it is continuously differentiable over its domain and its gradients are $L$-Lipschitz continuous, i.e.: $$\left\Vert \nabla{f}(x) - \nabla{f}(y) \right\Vert_{2} \leq L \left\Vert x - y \right\Vert_{2},\ \forall x, y \in \ensuremath{\mathrm{dom}\left({f}\right)},$$ where ${L \in \mathbb{R}_{++}}$ is the Lipschitz constant.*

**Definition 4** (Coercive Function). *A function ${f: \mathbb{R}^{n} \rightarrow \mathbb{R}}$ is coercive if ${f(x) \rightarrow \infty}$ as ${\left\Vert x \right\Vert \rightarrow \infty}$.*

We represent the agents as nodes in an undirected, connected communication graph ${\mathcal{G} = (\mathcal{V}, \mathcal{E})}$, where ${\mathcal{V} = \{1, \ldots, N\}}$ denotes the set of vertices, representing the agents, and ${\mathcal{E} \subset \mathcal{V} \times \mathcal{V}}$ represents the set of edges. An edge $(i, j)$ exists in $\mathcal{E}$ if agents $i$ and $j$ share a communication link. Moreover, we denote the set of neighbors of agent $i$ as $\mathcal{N}_{i}$. We associate a *mixing matrix* ${W \in \mathbb{R}^{N \times N}}$ with the underlying communication graph. A mixing matrix $W$ is compatible with $\mathcal{G}$ if ${w_{ij} = 0}$, ${\forall j \notin \mathcal{N}_{i}}$, ${\forall i \in \mathcal{V}}$. We denote the *degree* of agent $i$ as ${\deg(i) = |\mathcal{N}_{i}|}$, representing the number of neighbors of agent $i$, and the *adjacency matrix* associated with $\mathcal{G}$ as ${\mathcal{A} \in \mathbb{R}^{N \times N}}$, where ${\mathcal{A}_{ij} = 1}$ if and only if ${j \in \mathcal{N}_{i}}$, ${\forall i \in \mathcal{V}}$. In addition, we denote the *graph Laplacian* of $\mathcal{G}$ as ${L = \mathrm{diag}(\deg(1),\ldots,\deg(N)) - \mathcal{A}}$.
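To make these graph quantities concrete, the following minimal sketch builds the adjacency matrix, degrees, and graph Laplacian of a small path graph, together with a Laplacian-based mixing matrix ${W = I - L/\tau}$ compatible with $\mathcal{G}$; the graph and the choice ${\tau = \max_{i}\deg(i) + 1}$ are illustrative assumptions.

```python
import numpy as np

# Undirected path graph on N = 4 nodes (0-indexed), chosen for illustration.
edges = [(0, 1), (1, 2), (2, 3)]
N = 4

# Adjacency matrix: A_ij = 1 iff j is a neighbor of i.
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)          # deg(i) = |N_i|
L = np.diag(deg) - A         # graph Laplacian

# Laplacian-based mixing matrix; tau larger than max degree is a safe choice.
tau = deg.max() + 1.0
W = np.eye(N) - L / tau

# W is compatible with the graph (w_ij = 0 for non-neighbors), symmetric,
# doubly-stochastic, and its consensus-error matrix has spectral norm < 1.
ones = np.ones(N)
lam = np.linalg.norm(W - np.outer(ones, ones) / N, 2)
print(np.allclose(W.sum(axis=1), 1.0), lam < 1.0)
```

The quantity `lam` is exactly the spectral parameter that governs consensus in the analysis later in the paper: the closer it is to one, the more sparsely connected the graph.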
# Problem Formulation And The Centralized Conjugate Gradient Method

We consider the distributed optimization problem: $$\label{eq:global_prob} \ensuremath{\underset{x \in \mathbb{R}^{n}}{\text{minimize}}\ } \frac{1}{N} \sum_{i = 1}^{N} f_{i}(x),$$ over $N$ agents, where ${f_{i}: \mathbb{R}^{n} \rightarrow \mathbb{R}}$ denotes the local objective function of agent $i$ and ${x \in \mathbb{R}^{n}}$ denotes the optimization variable. The objective function of the optimization problem [\[eq:global_prob\]](#eq:global_prob){reference-type="eqref" reference="eq:global_prob"} consists of a sum of $N$ local components, making it *separable*, with each component associated with an agent. We assume that agent $i$ only knows its local objective function $f_{i}$ and has no knowledge of the objective functions of the other agents. We begin with a description of the centralized nonlinear conjugate gradient method, before deriving our method in Section [5](#sec:distirbuted_alg){reference-type="ref" reference="sec:distirbuted_alg"}. The nonlinear conjugate gradient method (a generalization of the conjugate gradient method to optimization problems beyond quadratic programs) is an iterative first-order optimization algorithm that utilizes the gradient of the objective function to generate iterates from the recurrence: $$\label{eq:cen_conjugate_method} x^{(k+1)} = x^{(k)} + \alpha^{(k)} \cdot s^{(k)},$$ where ${x^{(k)} \in \mathbb{R}^{n}}$ denotes the estimate at iteration $k$, ${\alpha^{(k)} \in \mathbb{R}_{+}}$ denotes the step-size at iteration $k$, and ${s^{(k)} \in \mathbb{R}^{n}}$ denotes the conjugate direction at iteration $k$. In the nonlinear conjugate gradient method, the conjugate direction is initialized as the negative gradient of the objective function at the initial estimate, with ${s^{(0)} = -g^{(0)}}$.
Further, the conjugate directions are generated from the recurrence: $$s^{(k + 1)} = -g^{(k+1)} + \beta^{(k)} \cdot s^{(k)},$$ at iteration $k$, where ${\beta^{(k)} \in \mathbb{R}}$ denotes the conjugate gradient update parameter. Different schemes have been developed for updating this parameter; we provide a few of them here:

- *Hestenes-Stiefel Scheme* [@hestenes1952methods]: $$\label{eq:hestenes_stiefel_CG_parameter} \beta_{HS}^{(k)} = \frac{\left(g^{(k+1)} - g^{(k)}\right)^\mathsf{T}g^{(k+1)}}{\left(g^{(k+1)} - g^{(k)}\right)^ \mathsf{T}s^{(k)}}$$

- *Fletcher-Reeves Scheme* [@fletcher1964function]: $$\label{eq:flecther_reeves_CG_parameter} \beta_{FR}^{(k)} = \frac{\left\Vert g^{(k+1)} \right\Vert_{2}^{2}}{\left\Vert g^{(k)} \right\Vert_{2}^{2}}$$

- *Polak-Ribière Scheme* [@polak1969note; @polyak1969conjugate]: $$\label{eq:polak_ribiere_CG_parameter} \beta_{PR}^{(k)} = \frac{\left(g^{(k+1)} - g^{(k)}\right)^\mathsf{T}g^{(k+1)}}{\left\Vert g^{(k)} \right\Vert_{2}^{2}}$$

We note that these update schemes are equivalent when $f$ is a strongly-convex quadratic function, in which case the search directions $\{s^{(k)}\}_{\forall k}$ are conjugate and the iterate $x^{(k)}$ converges to the optimal solution in at most $n$ iterations. For non-quadratic problems, the search directions lose conjugacy, and convergence may require more than $n$ iterations. In many practical problems, the value of the update parameter $\beta$ is selected via a hybrid scheme, obtained from a combination of the fundamental update schemes, including the ones above. Simple hybrid schemes are also used, e.g., ${\beta^{(k)} = \max\{0, \beta_{PR}^{(k)}\}}$.
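The recurrence above can be sketched in a few lines; the quadratic test function and the exact line search used below are illustrative assumptions, not part of the method itself, chosen so that the Fletcher-Reeves iteration reduces to the linear conjugate gradient method.

```python
import numpy as np

def ncg_fletcher_reeves(grad, line_search, x0, max_iters=100, tol=1e-10):
    """Nonlinear CG: x^(k+1) = x^(k) + alpha^(k) s^(k), with s^(0) = -g^(0)."""
    x = x0.astype(float).copy()
    g = grad(x)
    s = -g
    for _ in range(max_iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(x, s)
        x = x + alpha * s
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
        s = -g_new + beta * s
        g = g_new
    return x

# Illustrative strongly-convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
b = np.array([1.0, -2.0, 0.5])
grad = lambda x: Q @ x - b
# Exact line search for a quadratic: argmin over alpha of f(x + alpha s).
exact = lambda x, s: -(grad(x) @ s) / (s @ (Q @ s))

x_star = ncg_fletcher_reeves(grad, exact, np.zeros(3))
```

On this quadratic, the generated directions are $Q$-conjugate and the minimizer $Q^{-1}b$ is reached in at most $n = 3$ iterations, matching the finite-termination property stated above.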
# Distributed Conjugate Gradient Method {#sec:distirbuted_alg}

In this section, we derive a distributed optimization algorithm based on the nonlinear conjugate gradient method for [\[eq:global_prob\]](#eq:global_prob){reference-type="eqref" reference="eq:global_prob"}. We assign a local copy of $x$ to each agent, representing its local estimate of the solution of the optimization problem, with each agent computing its conjugate directions locally. Agent $i$ maintains the variables: ${x_{i} \in \mathbb{R}^{n}}$, ${s_{i} \in \mathbb{R}^{n}}$, along with ${\alpha_{i} \in \mathbb{R}}$ and ${\beta_{i} \in \mathbb{R}}$. In addition, we denote the gradient of $f_{i}$ at $x_{i}$ by $g_{i}(x_{i})$. Before proceeding with the derivation, we introduce the following notation: $$\begin{aligned} \bm{x} = \begin{bmatrix} x_{1}^{\mathsf{T}} \\ \vdots \\ x_{N}^{\mathsf{T}} \end{bmatrix}, \ \bm{s} = \begin{bmatrix} s_{1}^{\mathsf{T}} \\ \vdots \\ s_{N}^{\mathsf{T}} \end{bmatrix}, \ \bm{g}(\bm{x}) = \begin{bmatrix} \left(\nabla f_{1}(x_{1})\right)^{\mathsf{T}} \\ \vdots \\ \left(\nabla f_{N}(x_{N})\right)^{\mathsf{T}} \end{bmatrix}, \end{aligned}$$ ${\bm{\alpha} = \mathrm{diag}(\alpha_{1},\ldots,\alpha_{N})}$, and ${\bm{\beta} = \mathrm{diag}(\beta_{1},\ldots,\beta_{N})}$, where the variables are obtained by stacking the local variables of each agent, with ${\bm{x} \in \mathbb{R}^{N \times n}}$, ${\bm{s} \in \mathbb{R}^{N \times n}}$, ${\bm{g}(\bm{x}) \in \mathbb{R}^{N \times n}}$, and ${\bm{\alpha}, \bm{\beta} \in \mathbb{R}^{N \times N}}$. To simplify notation, we denote $\bm{g}(\bm{x}^{(k)})$ by $\bm{g}^{(k)}$. In addition, we note that all agents achieve *consensus* when all the rows of $\bm{x}$ are the same. Moreover, *optimality* is achieved when ${\bm{1}_{N}^{\mathsf{T}} \bm{g}(\bm{x}) = \bm{0}_{n}^{\mathsf{T}}}$, i.e., the first-order optimality condition is satisfied.
Further, we define the *aggregate* objective function considering the local variables of each agent as: $$\bm{f}(\bm{x}) = \frac{1}{N} \sum_{i = 1}^{N} f_{i}(x_{i}).$$ To obtain a distributed variant of the centralized conjugate gradient method, one could utilize the average consensus technique to eliminate the need for centralized procedures, yielding the distributed algorithm: $$\begin{aligned} \label{eq:vanilla_distributed_method} \bm{x}^{(k+1)} &= W \bm{x}^{(k)} + \bm{\alpha}^{(k)} \cdot \bm{s}^{(k)}, \\ \bm{s}^{(k + 1)} &= -\bm{g}^{(k+1)} + \bm{\beta}^{(k)} \cdot \bm{s}^{(k)},\end{aligned}$$ which simplifies to: $$\begin{aligned} \label{eq:vanilla_distributed_method_agent} x_{i}^{(k+1)} &= w_{ii} x_{i}^{(k)} + \sum_{j \in \mathcal{N}_{i}} w_{ij} x_{j}^{(k)} + \alpha_{i}^{(k)} \cdot s_{i}^{(k)}, \\ s_{i}^{(k + 1)} &= -g_{i}^{(k+1)} + \beta_{i}^{(k)} \cdot s_{i}^{(k)},\end{aligned}$$ when expressed with respect to agent $i$, with initialization ${x_{i}^{(0)} \in \mathbb{R}^{n}}$, ${s_{i}^{(0)} = - \nabla f_{i}(x_{i}^{(0)})}$, and ${\alpha_{i}^{(0)} \in \mathbb{R}_{+}}$. One can show that the above distributed algorithm does not converge to the optimal solution with a non-diminishing step-size. Here, we provide a simple proof by contradiction showing that the optimal solution $x^{\star}$ is not a fixed point of the distributed algorithm [\[eq:vanilla_distributed_method\]](#eq:vanilla_distributed_method){reference-type="eqref" reference="eq:vanilla_distributed_method"}: assume that $x^{\star}$ is a fixed point of the algorithm. Under this assumption, the first two terms on the right-hand side of [\[eq:vanilla_distributed_method_agent\]](#eq:vanilla_distributed_method_agent){reference-type="eqref" reference="eq:vanilla_distributed_method_agent"} simplify to $x^{\star}$. Further, the conjugate update parameter $\beta_{i}^{(k - 1)}$ simplifies to zero, where we define the ratio $\frac{0}{0}$ to be zero if the Fletcher-Reeves Scheme is utilized.
However, in general, the local conjugate direction of agent $i$, denoted by $s_{i}^{(k)}$, may not be zero, since the critical point of the joint objective function $\frac{1}{N}\sum_{i = 1}^{N} f_{i}$ may not coincide with the critical point of $f_{i}$, i.e., ${\nabla f_{i}(x^{\star})}$ may not be zero. Consequently, the last term in [\[eq:vanilla_distributed_method\]](#eq:vanilla_distributed_method){reference-type="eqref" reference="eq:vanilla_distributed_method"} is not zero, in general, and as a result, agent $i$'s iterate $x_{i}^{(k+1)}$ deviates from $x^{\star}$, showing that $x^{\star}$ is not a fixed point of the distributed algorithm given by [\[eq:vanilla_distributed_method_agent\]](#eq:vanilla_distributed_method_agent){reference-type="eqref" reference="eq:vanilla_distributed_method_agent"}. This property mirrors that of distributed (sub)gradient methods where a diminishing step-size is required for convergence. Further, we note that the last term in [\[eq:vanilla_distributed_method_agent\]](#eq:vanilla_distributed_method_agent){reference-type="eqref" reference="eq:vanilla_distributed_method_agent"} is zero if agent $i$ utilizes the average conjugate direction in place of its local conjugate direction. With this modified update procedure, the optimal solution $x^{\star}$ represents a fixed point of the resulting, albeit non-distributed, algorithm. 
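The fixed-point argument above is easy to check numerically. The sketch below uses two made-up scalar quadratic local objectives (an illustrative assumption): starting the vanilla update from consensus at the minimizer of the aggregate objective, the iterates immediately drift away, because the local gradients $\nabla f_{i}(x^{\star})$ are nonzero.

```python
import numpy as np

# Two agents on a complete graph, with local objectives f_i(x) = 0.5 (x - c_i)^2.
c = np.array([-1.0, 3.0])
x_star = c.mean()                  # minimizer of (1/N) sum_i f_i
grad = lambda i, x: x - c[i]       # local gradients; nonzero at x_star

W = np.full((2, 2), 0.5)           # doubly-stochastic mixing matrix
alpha = 0.1

# Start from consensus at x_star, with s_i^(0) = -grad f_i(x_star).
x = np.array([x_star, x_star])
s = -np.array([grad(0, x[0]), grad(1, x[1])])

# One vanilla step: x_i^+ = sum_j w_ij x_j + alpha * s_i.
x_next = W @ x + alpha * s

print(np.abs(x_next - x_star))     # nonzero: x_star is not a fixed point
```

Here the mixing terms reproduce $x^{\star}$ exactly, so the entire deviation comes from the local conjugate-direction term, in line with the argument above.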
To address this challenge, we assign an auxiliary variable $z$ to each agent, representing an estimate of the average conjugate direction, which is updated locally using dynamic average consensus [@zhu2010discrete], yielding the *Distributed Conjugate Gradient Method* (DC-Grad), given by: $$\begin{aligned} \bm{x}^{(k+1)} &= W (\bm{x}^{(k)} + \bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}), \label{eq:x_sequence} \\ \bm{s}^{(k + 1)} &= -\bm{g}^{(k+1)} + \bm{\beta}^{(k)} \cdot \bm{s}^{(k)}, \label{eq:s_sequence} \\ \bm{z}^{(k + 1)} &= W(\bm{z}^{(k)} + \bm{s}^{(k+1)} - \bm{s}^{(k)}), \label{eq:z_sequence}\end{aligned}$$ which is initialized with ${x_{i}^{(0)} \in \mathbb{R}^{n}}$, ${s_{i}^{(0)} = - \nabla f_{i}(x_{i}^{(0)})}$, ${z_{i}^{(0)} = s_{i}^{(0)}}$, and ${\alpha_{i}^{(0)} \in \mathbb{R}_{+}}$, ${\forall i \in \mathcal{V}}$. Using dynamic average consensus theory, we can show that the agents reach consensus with ${z_{i}^{(\infty)} = \bar{z}^{(\infty)} = \bar{s}^{(\infty)}}$, ${\forall i \in \mathcal{V}}$. The resulting distributed conjugate gradient method enables each agent to compute the optimal solution of the optimization problem using *uncoordinated*, *non-diminishing* step-sizes. Considering the update procedures in terms of each agent, at each iteration $k$, agent $i$ performs the following updates: $$\begin{aligned} x_{i}^{(k +1)} &= \sum_{j \in \mathcal{N}_{i} \cup \{i\}} w_{ij} \left(x_{j}^{(k)} + \alpha_{j}^{(k)} \cdot z_{j}^{(k)}\right), \label{eq:x_update_procedure}\\ s_{i}^{(k +1)} &= -g_{i}^{(k + 1)} + \beta_{i}^{(k)} \cdot s_{i}^{(k)}, \label{eq:s_update_procedure}\\ z_{i}^{(k + 1)} &= \sum_{j \in \mathcal{N}_{i} \cup \{i\}} w_{ij} \left(z_{j}^{(k)} + s_{j}^{(k + 1)} - s_{j}^{(k)} \right), \label{eq:z_update_procedure}\end{aligned}$$ where agent $i$ communicates: $$\begin{aligned} u_{i}^{(k)} &= x_{i}^{(k)} + \alpha_{i}^{(k)} \cdot z_{i}^{(k)}, \\ v_{i}^{(k)} &= z_{i}^{(k)} + s_{i}^{(k + 1)} - s_{i}^{(k)} \end{aligned}$$ with its neighbors. 
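The per-agent updates above can be sketched in stacked matrix form as follows; this is a minimal illustration in which the per-agent Fletcher-Reeves parameters are clipped at a cap `beta_cap`, an illustrative safeguard of our choosing rather than a prescription of the analysis, and `grads[i]` is an assumed callable returning $\nabla f_{i}$.

```python
import numpy as np

def dc_grad(grads, W, x0, alphas, num_iters, beta_cap=0.5):
    """Sketch of DC-Grad; x, s, z are N-by-n stacked agent variables."""
    N, n = x0.shape
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(N)])
    s = -g.copy()                                  # s^(0) = -g^(0)
    z = s.copy()                                   # z^(0) = s^(0)
    for _ in range(num_iters):
        x = W @ (x + alphas[:, None] * z)          # x-update
        g_new = np.stack([grads[i](x[i]) for i in range(N)])
        # Per-agent Fletcher-Reeves parameters, clipped for robustness
        # (an illustrative choice; the other schemes also apply).
        num = np.einsum("ij,ij->i", g_new, g_new)
        den = np.einsum("ij,ij->i", g, g)
        beta = np.minimum(num / np.maximum(den, 1e-16), beta_cap)
        s_new = -g_new + beta[:, None] * s         # s-update
        z = W @ (z + s_new - s)                    # z-update (dynamic avg. consensus)
        s, g = s_new, g_new
    return x
```

Each row of the returned array holds one agent's local estimate; under the stated assumptions the rows reach consensus and approach a critical point of the aggregate objective, with the step-sizes in `alphas` allowed to differ across agents.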
We summarize the distributed conjugate gradient algorithm in Algorithm [\[alg:distributed_algorithm\]](#alg:distributed_algorithm){reference-type="ref" reference="alg:distributed_algorithm"}.

[\[alg:distributed_algorithm\]]{#alg:distributed_algorithm label="alg:distributed_algorithm"} **Initialization:** ${x_{i}^{(0)} \in \mathbb{R}^{n}}$, ${s_{i}^{(0)} = - \nabla f_{i}(x_{i}^{(0)})}$, ${z_{i}^{(0)} = s_{i}^{(0)}}$, and ${\alpha_{i}^{(0)} \in \mathbb{R}_{+}}$, ${\forall i \in \mathcal{V}}$.

**For each agent ${i \in \mathcal{V}}$ (in parallel), while not converged or a stopping criterion is not met:**

1. $x_{i}^{(k + 1)} \leftarrow$ Procedure [\[eq:x_update_procedure\]](#eq:x_update_procedure){reference-type="eqref" reference="eq:x_update_procedure"}
2. $s_{i}^{(k + 1)} \leftarrow$ Procedure [\[eq:s_update_procedure\]](#eq:s_update_procedure){reference-type="eqref" reference="eq:s_update_procedure"}
3. $z_{i}^{(k + 1)} \leftarrow$ Procedure [\[eq:z_update_procedure\]](#eq:z_update_procedure){reference-type="eqref" reference="eq:z_update_procedure"}
4. $k \leftarrow k + 1$

We present some assumptions that will be relevant in analyzing the convergence properties of our algorithm.

**Assumption 1**. *The local objective function of each agent, $f_{i}$, is closed, proper, and convex. Moreover, $f_{i}$ is $L_{i}$-smooth, i.e., its gradients are $L_{i}$-Lipschitz continuous.*

**Remark 1**. *From Assumption [Assumption 1](#assm:convex_lipschitz){reference-type="ref" reference="assm:convex_lipschitz"}, we note that the aggregate objective function $\bm{f}$ is closed, proper, convex, and $L$-smooth, with: $$\left\Vert \nabla{\bm{f}}(x) - \nabla{\bm{f}}(y) \right\Vert_{2} \leq L \left\Vert x - y \right\Vert_{2},\ \forall x, y \in \mathbb{R}^{n},$$ where ${L = \max_{i \in \mathcal{V}} \{L_{i}\}}$.*

**Assumption 2**. *The local objective function of each agent $f_{i}$ is coercive.*

**Assumption 3**.
*The optimization problem [\[eq:global_prob\]](#eq:global_prob){reference-type="eqref" reference="eq:global_prob"} has a non-empty feasible set, and further, an optimal solution $x^{\star}$ exists for the optimization problem.*

In addition, we make the following assumption on the stochastic weight matrix.

**Assumption 4**. *The mixing matrix $W$ associated with the communication graph $\mathcal{G}$ satisfies:*

1. **(Double-Stochasticity)* $W \bm{1} = \bm{1}$ and $\bm{1}^{\mathsf{T}} W = \bm{1}^{\mathsf{T}},$*

2. **(Spectral Property)* $\lambda = \rho(W - \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N}) < 1$. [\[assm:mixing_matrix_spectral\]]{#assm:mixing_matrix_spectral label="assm:mixing_matrix_spectral"}*

Part [\[assm:mixing_matrix_spectral\]](#assm:mixing_matrix_spectral){reference-type="ref" reference="assm:mixing_matrix_spectral"} of Assumption [Assumption 4](#assm:mixing_matrix){reference-type="ref" reference="assm:mixing_matrix"} specifies that the matrix ${M = W - \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N}}$ has a spectral norm less than one. This assumption is necessary and sufficient for consensus, i.e., $$\lim_{k \rightarrow \infty} W^{k} = \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N}.$$ We note that Assumption [Assumption 4](#assm:mixing_matrix){reference-type="ref" reference="assm:mixing_matrix"} is not restrictive in undirected communication networks. We provide common choices for the mixing matrix $W$:

1. *Metropolis-Hastings Weights*: $$w_{ij} = \begin{cases*} \frac{1}{\max\{\deg(i), \deg(j)\} + \epsilon}, & if $(i,j) \in \mathcal{E},$ \\ 0 & if $(i,j) \notin \mathcal{E}$ and $i \neq j,$ \\ 1 - \sum_{r \in \mathcal{V} \setminus \{i\}} w_{ir} & if $i = j,$ \end{cases*}$$ where ${\epsilon \in \mathbb{R}_{++}}$ denotes a small positive constant, e.g., ${\epsilon = 1}$ [@xiao2007distributed].

2.
*Laplacian-based Weights*: $$W = I - \frac{L}{\tau},$$ where $L$ denotes the Laplacian matrix of $\mathcal{G}$, and ${\tau \in \mathbb{R}}$ denotes a scaling parameter with ${\tau > \frac{1}{2} \lambda_{\max}(L)}$. One can choose ${\tau = \max_{i \in \mathcal{V}} \{\deg(i)\} + \epsilon}$ if computing $\lambda_{\max}(L)$ is infeasible, where ${\epsilon \in \mathbb{R}_{++}}$ represents a small positive constant [@sayed2014diffusion].

The aforementioned assumptions are standard in the convergence analysis of distributed optimization algorithms.

# Convergence Analysis {#sec:convergence_analysis}

We analyze the convergence properties of our distributed algorithm. Before proceeding with the analysis, we consider the following sequences, derived by taking the mean (across agents) of the recurrences [\[eq:x_sequence\]](#eq:x_sequence){reference-type="eqref" reference="eq:x_sequence"}--[\[eq:z_sequence\]](#eq:z_sequence){reference-type="eqref" reference="eq:z_sequence"}: $$\begin{aligned} \ensuremath{\overline{\bm{x}}}^{(k+1)} &= \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} W \left(\bm{x}^{(k)} + \bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}\right) = \ensuremath{\overline{\bm{x}}}^{(k)} + \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}}, \label{eq:mean_x_sequence} \\ \ensuremath{\overline{\bm{s}}}^{(k+1)} &= \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} \left(-\bm{g}^{(k+1)} + \bm{\beta}^{(k)} \cdot \bm{s}^{(k)}\right) = -\ensuremath{\overline{\bm{g}}}^{(k+1)} + \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}}, \label{eq:mean_s_sequence} \\ \ensuremath{\overline{\bm{z}}}^{(k+1)} &= \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} W \left(\bm{z}^{(k)} + \bm{s}^{(k+1)} - \bm{s}^{(k)}\right) = \ensuremath{\overline{\bm{z}}}^{(k)} + \ensuremath{\overline{\bm{s}}}^{(k+1)} - \ensuremath{\overline{\bm{s}}}^{(k)}, \label{eq:mean_z_sequence}\end{aligned}$$ where we have utilized the fact that $W$ is column-stochastic, so that ${\frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} W = \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N}}$.
From [\[eq:mean_z\_sequence\]](#eq:mean_z_sequence){reference-type="eqref" reference="eq:mean_z_sequence"}, we note that ${\ensuremath{\overline{z}}^{(k)} = \ensuremath{\overline{s}}^{(k)}}$, $\forall k$, given that ${\ensuremath{\overline{z}}^{(0)} = \ensuremath{\overline{s}}^{(0)}}$. In addition, we introduce the following definitions: ${\alpha_{\max} = \max_{k \in \mathbb{Z}_{+}} \{\left\Vert \bm{\alpha}^{(k)} \right\Vert_{2}\}}$; ${\beta_{\max} = \max_{k \in \mathbb{Z}_{+}} \{\left\Vert \bm{\beta}^{(k)} \right\Vert_{2}\}}$; ${r_{\alpha} = \alpha_{\max} \max_{k \in \mathbb{Z}_{+}} \frac{1}{\left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}}}$; ${r_{\beta} = \beta_{\max} \max_{k \in \mathbb{Z}_{+}} \frac{1}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}}}$. Likewise, we define ${\ensuremath{\overline{\bm{\alpha}}}^{(k)} = \frac{\bm{1}_{N} \bm{1}_{N}^{\mathsf{T}}}{N} \sum_{i \in \mathcal{V}} \alpha_{i}^{(k)}}$, with a similar definition for ${\ensuremath{\overline{\bm{\beta}}}^{(k)}}$. We state the following lemma, bounding the norms of the sequences ${\{\bm{x}^{(k)}\}_{\forall k}}$, ${\{\bm{z}^{(k)}\}_{\forall k}}$, and ${\{\bm{s}^{(k)}\}_{\forall k}}$.

**Lemma 1**.
*If the sequences ${\{\bm{x}^{(k)}\}_{\forall k}}$, ${\{\bm{s}^{(k)}\}_{\forall k}}$, and ${\{\bm{z}^{(k)}\}_{\forall k}}$ are generated by the recurrences in [\[eq:x_sequence\]](#eq:x_sequence){reference-type="eqref" reference="eq:x_sequence"}, [\[eq:s_sequence\]](#eq:s_sequence){reference-type="eqref" reference="eq:s_sequence"}, and [\[eq:z_sequence\]](#eq:z_sequence){reference-type="eqref" reference="eq:z_sequence"}, then the auxiliary sequences ${\{\tilde{\bm{x}}^{(k)}\}_{\forall k}}$, ${\{\tilde{\bm{s}}^{(k)}\}_{\forall k}}$, and ${\{\tilde{\bm{z}}^{(k)}\}_{\forall k}}$ satisfy the following bounds: $$\begin{aligned} \left\Vert \tilde{\bm{x}}^{(k+1)} \right\Vert_{2} & \leq \lambda \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} + \lambda \alpha_{\max} (1 + r_{\alpha}) \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \notag \\ & \quad + \lambda r_{\alpha} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}, \\ \left\Vert \tilde{\bm{s}}^{(k+1)} \right\Vert_{2} &\leq \left\Vert \tilde{\bm{g}}^{(k + 1)} \right\Vert_{2} + \beta_{\max} (1 + r_{\beta}) \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \notag \\ & \quad + \left( 1 + r_{\beta} \right) \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}, \\ \left\Vert \tilde{\bm{z}}^{(k+1)} \right\Vert_{2} &\leq (\lambda + \lambda^{2} L \alpha_{\max} ( 1 + r_{\alpha})) \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \notag \\ & \quad + \lambda L ( \lambda + 1) \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} \notag \\ & \quad + \lambda L (\lambda r_{\alpha} + 1) \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} \notag \\ & \quad + \lambda\left( \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} + \left\Vert \bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)} \right\Vert_{2} \right).
\end{aligned}$$* Please refer to [Appendix [8.1](#appdx:lemma_sequence_bounds){reference-type="ref" reference="appdx:lemma_sequence_bounds"}](#appdx:lemma_sequence_bounds) for the proof. Further, all agents reach agreement on their local iterates, which we state in the following theorem. **Theorem 1** (Agreement). *Given the recurrence [\[eq:x_sequence\]](#eq:x_sequence){reference-type="eqref" reference="eq:x_sequence"}, [\[eq:s_sequence\]](#eq:s_sequence){reference-type="eqref" reference="eq:s_sequence"}, and [\[eq:z_sequence\]](#eq:z_sequence){reference-type="eqref" reference="eq:z_sequence"}, the local iterates of agent $i$, ${\left(x_{i}^{(k)}, s_{i}^{(k)}, z_{i}^{(k)}\right)}$, converge to the mean, ${\forall i \in \mathcal{V}}$, i.e., each agent reaches agreement with all other agents, for sufficiently large $k$. In particular: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} = 0, \ \lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} = 0, \ \lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} = 0. \label{eq:tilde_errors_theorem} \end{aligned}$$ Further, the local iterates of each agent converge to a limit point, as ${k \rightarrow \infty}$, with: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} = \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2} = 0. \end{aligned}$$ Moreover, the norm of the mean of the agents' local iterates tracking the average conjugate direction converges to zero, with the norm of the average gradient evaluated at the local iterate of each agent also converging to zero, for sufficiently large $k$. 
Specifically, the following holds: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} = 0, \ \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{z}}}^{(k)} \right\Vert_{2} = 0, \ \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{2} = 0. \label{eq:mean_gradient_direction_theorem} \end{aligned}$$*

We refer readers to [Appendix [8.2](#appdx:thm_agreement){reference-type="ref" reference="appdx:thm_agreement"}](#appdx:thm_agreement) for the proof. Theorem [Theorem 1](#thm:agreement){reference-type="ref" reference="thm:agreement"} indicates that the local iterates of all agents, ${\{x_{i}^{(k)}\}_{\forall i \in \mathcal{V}}}$, converge to a common limit point $x^{(\infty)}$, given by the mean ${\frac{1}{N}\sum_{i \in \mathcal{V}} x_{i}^{(k)}}$, as ${k \rightarrow \infty}$. Further, since ${\nabla f = \frac{1}{N}\sum_{i \in \mathcal{V}} \nabla f_{i}}$: $$\label{eq:mean_gradient_direction_simplified} \begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{2} &= \lim_{k \rightarrow \infty} \left\Vert \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} \bm{g}^{(k)} \right\Vert_{2}, \\ &= \left\Vert \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} \bm{g}(\bm{x}^{(\infty)}) \right\Vert_{2}, \\ &= \left\Vert \bm{1}_{N} \left(\nabla f(x^{(\infty)})\right)^{\mathsf{T}} \right\Vert_{2}, \\ &= \left\Vert \bm{1}_{N} \right\Vert_{2} \left\Vert \nabla f(x^{(\infty)}) \right\Vert_{2}, \\ &= \sqrt{N} \left\Vert \nabla f(x^{(\infty)}) \right\Vert_{2}, \end{aligned}$$ where ${\bm{x}^{(\infty)} = \bm{1}_{N} \left(x^{(\infty)}\right)^{\mathsf{T}}}$.
From [\[eq:mean_gradient_direction_theorem\]](#eq:mean_gradient_direction_theorem){reference-type="eqref" reference="eq:mean_gradient_direction_theorem"} and [\[eq:mean_gradient_direction_simplified\]](#eq:mean_gradient_direction_simplified){reference-type="eqref" reference="eq:mean_gradient_direction_simplified"}, we note that ${\left\Vert \nabla f(x^{(\infty)}) \right\Vert_{2} = 0}$. Hence, the limit point of the distributed algorithm represents a critical point of the optimization problem [\[eq:global_prob\]](#eq:global_prob){reference-type="eqref" reference="eq:global_prob"}.

**Theorem 2** (Convergence of the Objective Value). *The value of the objective function $\bm{f}$ evaluated at the mean of the local iterates of all agents converges to the optimal objective value. Moreover, the value of $\bm{f}$ evaluated at the agents' local iterates converges to the optimal objective value, for sufficiently large $k$. Particularly: $$\lim_{k \rightarrow \infty} \bm{f}({\bm{x}}^{(k)}) = \lim_{k \rightarrow \infty} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) = f^{\star}.$$*

We provide the proof in [Appendix [8.3](#appdx:thm_convergence){reference-type="ref" reference="appdx:thm_convergence"}](#appdx:thm_convergence).

# Simulations {#sec:simulations}

In this section, we examine the performance of our distributed conjugate gradient method (DC-Grad) in comparison to other existing distributed optimization algorithms, namely: DIGing-ATC [@nedic2017achieving], C-ADMM [@mateos2010distributed], $AB$/Push-Pull [@xin2020general], and $ABm$ [@xin2019distributed], which utilizes *momentum* acceleration to achieve faster convergence. We note that $AB$/Push-Pull reduces to DIGing-CTA when the matrices $A$ and $B$ are selected to be doubly-stochastic. We assess the convergence rate of our algorithm across a range of communication networks with varying degrees of connectivity, described by the connectivity ratio ${\kappa = \frac{2 \vert \mathcal{E} \vert}{N(N - 1)}}$.
We consider a *state estimation problem*, formulated as a least-squares optimization problem, in addition to its robust variant derived with the Huber loss function. In each problem, we utilize *Metropolis-Hastings* weights for the mixing matrix $W$. Since Metropolis-Hastings weights yield doubly-stochastic (DS) mixing matrices, we use the terms $ABm$ and $ABm$-DS interchangeably. We compute the convergence error of the local iterate of each agent to the optimal solution, in terms of the *relative-squared error* (RSE) given by: $$\mathrm{RSE} = \frac{\left\Vert x_{i} - x^{\star} \right\Vert_{2}}{\left\Vert x^{\star} \right\Vert_{2}},$$ where $x_{i}$ denotes the local iterate of agent $i$ and $x^{\star}$ denotes the optimal solution, computed from the aggregate optimization problem. We set the threshold for convergence at $10^{-13}$, a threshold attainable by all methods, to enable a fair comparison of the computation and communication overhead incurred by each method. In our simulation study, we note that DIGing-ATC and DC-Grad yield higher-accuracy solutions compared to the other methods, with $AB$/Push-Pull yielding solutions with the least accuracy. We utilize the *golden-section search* to select an optimal step-size for our distributed conjugate gradient method, DIGing-ATC, $AB$/Push-Pull, and $ABm$. Likewise, we select an optimal value for the penalty parameter $\rho$ in C-ADMM using golden-section search. Further, we assume that each scalar component in the agents' iterates is represented using the double-precision floating-point format.

## Distributed State Estimation {#sec:dis_state_estimation}

In the state estimation problem, we seek to compute an estimate of a parameter (*state*) given a set of observations (*measurements*).
In many situations (e.g., in robotics, process control, and finance), the observations are collected by a network of sensors, resulting in decentralization of the problem data, giving rise to the distributed state estimation problem. Here, we consider the distributed state estimation problem over a network of $N$ agents, where the agents estimate the state ${x \in \mathbb{R}^{n}}$, representing the parameter of interest, such as the location of a target. Each agent makes noisy observations of the state, given by the model: ${y_{i} = C_{i} x + w_{i}}$, where ${y_{i} \in \mathbb{R}^{m_{i}}}$ denotes the observations of agent $i$, ${C_{i} \in \mathbb{R}^{m_{i} \times n}}$ denotes the observation (measurement) matrix, and $w_{i}$ denotes random noise. We can formulate the state estimation problem as a least-squares optimization problem, given by: $$\label{eq:state_estimation_least_squares} \ensuremath{\underset{x \in \mathbb{R}^{n}}{\text{minimize}}\ } \frac{1}{N} \sum_{i = 1}^{N} \left\Vert C_{i}x - y_{i} \right\Vert_{2}^{2}.$$ We determine the number of local observations for each agent randomly by sampling from the uniform distribution over the closed interval $[5, 30]$. We randomly generate the problem data: $C_{i}$ and $y_{i}$,  ${\forall i \in \mathcal{V}}$, with ${N = 50}$ and ${n = 10}$. We examine the convergence rate of the distributed optimization algorithms over randomly-generated connected communication graphs. We update the conjugate gradient parameter $\beta$ using a modified *Fletcher-Reeves Scheme* [\[eq:flecther_reeves_CG_parameter\]](#eq:flecther_reeves_CG_parameter){reference-type="eqref" reference="eq:flecther_reeves_CG_parameter"}. 
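This setup can be sketched as follows; the Gaussian noise model and its scale are illustrative assumptions (the experiments only specify the observation model $y_{i} = C_{i}x + w_{i}$), and the optimal solution $x^{\star}$ of the aggregate problem is obtained by stacking the local data into one least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 10
m = rng.integers(5, 31, size=N)        # m_i sampled uniformly from {5, ..., 30}
x_state = rng.standard_normal(n)       # true state

# Local observations y_i = C_i x + w_i (Gaussian noise assumed here).
C = [rng.standard_normal((m_i, n)) for m_i in m]
y = [C_i @ x_state + 0.01 * rng.standard_normal(m_i) for C_i, m_i in zip(C, m)]

# Optimal solution of the aggregate problem, via stacked least squares.
x_opt, *_ = np.linalg.lstsq(np.vstack(C), np.concatenate(y), rcond=None)

def rse(x_i, x_opt):
    """Relative-squared error of a local iterate with respect to x_opt."""
    return np.linalg.norm(x_i - x_opt) / np.linalg.norm(x_opt)
```

Each agent $i$ holds only $(C_{i}, y_{i})$; the stacked solve is used purely as the ground truth against which the RSE of the distributed iterates is measured.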
In Table [\[tab:state_estimation_computation_least_squares\]](#tab:state_estimation_computation_least_squares){reference-type="ref" reference="tab:state_estimation_computation_least_squares"}, we present the mean and standard deviation of the cumulative computation time per agent, in seconds, required for convergence by each distributed algorithm, over $20$ randomly-generated problems for each communication network. We utilize a closed-form solution for the primal update procedure arising in C-ADMM, making it competitive with other distributed optimization methods in terms of computation time. From Table [\[tab:state_estimation_computation_least_squares\]](#tab:state_estimation_computation_least_squares){reference-type="ref" reference="tab:state_estimation_computation_least_squares"}, we note that DIGing-ATC requires the shortest computation time, closely followed by DC-Grad, on densely-connected communication graphs, i.e., on graphs with ${\kappa}$ close to one, where we note that DC-Grad requires an update procedure for $\beta$, increasing its computation time. However, on more sparsely-connected communication graphs, C-ADMM requires the shortest computation time. 
| Algorithm | $\kappa = 0.48$ | $\kappa = 0.80$ | $\kappa = 0.97$ | $\kappa = 1.00$ |
|:---|:---|:---|:---|:---|
| $AB$/Push-Pull [@xin2020general] | $8.95\mathrm{e}^{-4} \pm 1.60\mathrm{e}^{-4}$ | $9.42\mathrm{e}^{-4} \pm 1.44\mathrm{e}^{-4}$ | $1.03\mathrm{e}^{-3} \pm 1.59\mathrm{e}^{-4}$ | $1.06\mathrm{e}^{-3} \pm 1.43\mathrm{e}^{-4}$ |
| $ABm$-DS [@xin2019distributed] | $5.54\mathrm{e}^{-4} \pm 2.23\mathrm{e}^{-5}$ | $3.19\mathrm{e}^{-4} \pm 3.86\mathrm{e}^{-5}$ | $2.85\mathrm{e}^{-4} \pm 4.71\mathrm{e}^{-5}$ | $2.91\mathrm{e}^{-4} \pm 4.10\mathrm{e}^{-5}$ |
| C-ADMM [@mateos2010distributed] | $\bm{1.71\mathrm{e}^{-4} \pm 1.87\mathrm{e}^{-5}}$ | $\bm{1.34\mathrm{e}^{-4} \pm 1.20\mathrm{e}^{-5}}$ | $1.18\mathrm{e}^{-4} \pm 6.84\mathrm{e}^{-6}$ | $1.17\mathrm{e}^{-4} \pm 7.87\mathrm{e}^{-6}$ |
| DIGing-ATC [@nedic2017achieving] | $5.98\mathrm{e}^{-4} \pm 3.08\mathrm{e}^{-5}$ | $2.12\mathrm{e}^{-4} \pm 1.61\mathrm{e}^{-5}$ | $\bm{6.79\mathrm{e}^{-5} \pm 9.10\mathrm{e}^{-6}}$ | $\bm{3.87\mathrm{e}^{-5} \pm 2.75\mathrm{e}^{-6}}$ |
| DC-Grad (ours) | $7.94\mathrm{e}^{-4} \pm 4.39\mathrm{e}^{-5}$ | $2.85\mathrm{e}^{-4} \pm 2.04\mathrm{e}^{-5}$ | $9.10\mathrm{e}^{-5} \pm 9.32\mathrm{e}^{-6}$ | $4.47\mathrm{e}^{-5} \pm 2.08\mathrm{e}^{-6}$ |

Moreover, we provide the mean and standard deviation of the cumulative size of messages exchanged per agent, in Megabytes (MB), for each distributed algorithm in Table [\[tab:state_estimation_communication_least_squares\]](#tab:state_estimation_communication_least_squares){reference-type="ref" reference="tab:state_estimation_communication_least_squares"}. We note that C-ADMM requires agents to communicate fewer variables, by a factor of 2, compared to $AB$/Push-Pull, $ABm$, DIGing-ATC, and DC-Grad.
Table [\[tab:state_estimation_communication_least_squares\]](#tab:state_estimation_communication_least_squares){reference-type="ref" reference="tab:state_estimation_communication_least_squares"} shows that DC-Grad incurs the least communication overhead for convergence on more-densely-connected graphs, closely followed by DIGing-ATC. This finding reveals that DC-Grad requires fewer iterations for convergence on these graphs, compared to the other algorithms. On more-sparsely-connected graphs, C-ADMM incurs the least communication overhead.

| Algorithm | $\kappa = 0.48$ | $\kappa = 0.80$ | $\kappa = 0.97$ | $\kappa = 1.00$ |
|:---|:---|:---|:---|:---|
| $AB$/Push-Pull [@xin2020general] | $6.06\mathrm{e}^{-2} \pm 1.18\mathrm{e}^{-2}$ | $6.52\mathrm{e}^{-2} \pm 1.10\mathrm{e}^{-2}$ | $6.97\mathrm{e}^{-2} \pm 1.23\mathrm{e}^{-2}$ | $7.20\mathrm{e}^{-2} \pm 1.13\mathrm{e}^{-2}$ |
| $ABm$-DS [@xin2019distributed] | $3.64\mathrm{e}^{-2} \pm 1.31\mathrm{e}^{-3}$ | $2.13\mathrm{e}^{-2} \pm 2.21\mathrm{e}^{-3}$ | $1.85\mathrm{e}^{-2} \pm 3.17\mathrm{e}^{-3}$ | $1.90\mathrm{e}^{-2} \pm 2.78\mathrm{e}^{-3}$ |
| C-ADMM [@mateos2010distributed] | $\bm{1.15\mathrm{e}^{-2} \pm 9.80\mathrm{e}^{-4}}$ | $\bm{9.24\mathrm{e}^{-3} \pm 8.17\mathrm{e}^{-4}}$ | $7.98\mathrm{e}^{-3} \pm 4.78\mathrm{e}^{-4}$ | $8.03\mathrm{e}^{-3} \pm 5.00\mathrm{e}^{-4}$ |
| DIGing-ATC [@nedic2017achieving] | $4.68\mathrm{e}^{-2} \pm 1.27\mathrm{e}^{-3}$ | $1.69\mathrm{e}^{-2} \pm 1.11\mathrm{e}^{-3}$ | $5.16\mathrm{e}^{-3} \pm 2.19\mathrm{e}^{-4}$ | $3.00\mathrm{e}^{-3} \pm 2.20\mathrm{e}^{-4}$ |
| DC-Grad (ours) | $4.63\mathrm{e}^{-2} \pm 1.70\mathrm{e}^{-3}$ | $1.70\mathrm{e}^{-2} \pm 1.10\mathrm{e}^{-3}$ | $\bm{5.16\mathrm{e}^{-3} \pm 2.00\mathrm{e}^{-4}}$ | $\bm{2.58\mathrm{e}^{-3} \pm 1.00\mathrm{e}^{-4}}$ |

In Figure
[1](#fig:algorithm_comparison_fully_connected_least_squares){reference-type="ref" reference="fig:algorithm_comparison_fully_connected_least_squares"}, we show the convergence error of the agents' iterates, per iteration, on a fully-connected communication network. Figure [1](#fig:algorithm_comparison_fully_connected_least_squares){reference-type="ref" reference="fig:algorithm_comparison_fully_connected_least_squares"} highlights that DC-Grad requires the fewest iterations for convergence, closely followed by DIGing-ATC. In addition, $ABm$ and C-ADMM converge at roughly the same rate. Similarly, we show the convergence error of the iterates of each agent on a randomly-generated connected communication graph with ${\kappa = 0.48}$ in Figure [2](#fig:algorithm_comparison_non_fully_connected_least_squares_0p48){reference-type="ref" reference="fig:algorithm_comparison_non_fully_connected_least_squares_0p48"}. We note that C-ADMM converges the fastest in Figure [2](#fig:algorithm_comparison_non_fully_connected_least_squares_0p48){reference-type="ref" reference="fig:algorithm_comparison_non_fully_connected_least_squares_0p48"}. In addition, we note that the convergence plot of DIGing-ATC overlays that of DC-Grad, with both algorithms exhibiting similar performance. ![Convergence error of all agents per iteration in the distributed state estimation problem on a fully-connected communication graph. DC-Grad converges the fastest, closely followed by DIGing-ATC.](figures/least_squares_fully_connected.pdf){#fig:algorithm_comparison_fully_connected_least_squares width="0.85\\linewidth"} ![Convergence error of all agents per iteration in the distributed state estimation problem on a randomly-generated connected communication graph with ${\kappa = 0.48}$. C-ADMM attains the fastest convergence rate.
The convergence plot of DIGing-ATC overlays that of DC-Grad, with both algorithms converging at the same rate.](figures/least_squares_non_fully_connected_k_0p48.pdf){#fig:algorithm_comparison_non_fully_connected_least_squares_0p48 width="0.85\\linewidth"} ## Distributed Robust Least-Squares We consider the robust least-squares formulation of the state estimation problem, presented in Section [7.1](#sec:dis_state_estimation){reference-type="ref" reference="sec:dis_state_estimation"}. We replace the $\ell_{2}^{2}$-loss function in [\[eq:state_estimation_least_squares\]](#eq:state_estimation_least_squares){reference-type="eqref" reference="eq:state_estimation_least_squares"} with the Huber loss function, given by: $$\label{eq:huber_loss} f_{\mathrm{hub}, \xi}(u) = \begin{cases} \frac{1}{2} u^{2}, & \text{if } \lvert u \rvert \leq \xi \ (\ell_{2}^{2}\text{-zone}), \\ \xi (\lvert u \rvert - \frac{1}{2}\xi ), & \text{otherwise } (\ell_{1}\text{-zone}). \end{cases}$$ We note that the Huber loss function is less sensitive to outliers, since the penalty function $f_{\mathrm{hub}, \xi}$ grows linearly for large values of $u$. The corresponding robust least-squares optimization problem is given by: $$\label{eq:state_estimation_robust_least_squares} \ensuremath{\underset{x \in \mathbb{R}^{n}}{\text{minimize}}\ } \frac{1}{N} \sum_{i = 1}^{N} f_{\mathrm{hub}, \xi}(C_{i}x - y_{i}).$$ We assume each agent has a single observation, i.e., ${m_{i} = 1}$, ${\forall i \in \mathcal{V}}$, and assess the convergence rate of the distributed algorithms on randomly-generated connected communication graphs, with ${N = 50}$ and ${n = 10}$. We randomly initialize $x_{i}$ such that $x_{i}$ lies in the $\ell_{1}$-zone, ${\forall i \in \mathcal{V}}$. Further, we randomly generate the problem data such that the optimal solution $x^{\star}$ lies in the $\ell_{2}^{2}$-zone. We set the maximum number of iterations to $3000$.
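A short sketch of the Huber penalty [\[eq:huber_loss\]](#eq:huber_loss){reference-type="eqref" reference="eq:huber_loss"} and its gradient (the default threshold value below is illustrative, not the value used in our experiments):

```python
import numpy as np

def huber(u, xi=1.0):
    """Huber penalty: quadratic for |u| <= xi, linear otherwise."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= xi, 0.5 * u**2, xi * (np.abs(u) - 0.5 * xi))

def huber_grad(u, xi=1.0):
    """Gradient of the Huber penalty: the residual, clipped to [-xi, xi]."""
    u = np.asarray(u, dtype=float)
    return np.clip(u, -xi, xi)
```

The two branches agree in value and slope at ${\lvert u \rvert = \xi}$, so the penalty is continuously differentiable while growing only linearly in the $\ell_{1}$-zone.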
We note that a closed-form solution does not exist for the primal update procedure for C-ADMM in this problem. Consequently, we do not include C-ADMM in this study, noting that solving the primal update procedure with iterative solvers would negatively impact the computation time of C-ADMM, effectively limiting its competitiveness. Further, we update the conjugate gradient parameter $\beta$ using a modified *Polak-Ribière Scheme* [\[eq:polak_ribiere_CG_parameter\]](#eq:polak_ribiere_CG_parameter){reference-type="eqref" reference="eq:polak_ribiere_CG_parameter"}. We provide the mean computation time per agent, in seconds, required for convergence of each algorithm, along with the standard deviation in Table [\[tab:state_estimation_computation_robust_least_squares\]](#tab:state_estimation_computation_robust_least_squares){reference-type="ref" reference="tab:state_estimation_computation_robust_least_squares"}, over $20$ randomly-generated problems for each communication network. From Table [\[tab:state_estimation_computation_robust_least_squares\]](#tab:state_estimation_computation_robust_least_squares){reference-type="ref" reference="tab:state_estimation_computation_robust_least_squares"}, we note that $ABm$ requires the shortest computation time for convergence on more-sparsely-connected communication graphs. However, on more-densely-connected communication graphs, DIGing-ATC achieves the shortest computation time, followed by DC-Grad. 
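For reference, the classical Polak-Ribière coefficient that underlies such schemes can be sketched as follows; the paper's *modified* scheme may differ in its exact form and safeguarding, so this is illustrative only:

```python
import numpy as np

def polak_ribiere_beta(g_new, g_old):
    """Classical Polak-Ribiere (PR+) conjugate gradient coefficient.

    beta = g_new^T (g_new - g_old) / (g_old^T g_old), truncated at zero,
    a common safeguard that effectively restarts the conjugate direction
    when beta would go negative.
    """
    denom = float(g_old @ g_old)
    if denom == 0.0:
        return 0.0
    return max(float(g_new @ (g_new - g_old)) / denom, 0.0)
```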
| Algorithm | $\kappa = 0.42$ | $\kappa = 0.74$ | $\kappa = 0.96$ | $\kappa = 1.00$ |
|:---|:---|:---|:---|:---|
| $AB$/Push-Pull [@xin2020general] | $7.55\mathrm{e}^{-3} \pm 1.89\mathrm{e}^{-3}$ | $7.76\mathrm{e}^{-3} \pm 1.99\mathrm{e}^{-3}$ | $8.16\mathrm{e}^{-3} \pm 2.04\mathrm{e}^{-3}$ | $8.63\mathrm{e}^{-3} \pm 2.02\mathrm{e}^{-3}$ |
| $ABm$-DS [@xin2019distributed] | $8.96\mathrm{e}^{-4} \pm 1.66\mathrm{e}^{-4}$ | $9.06\mathrm{e}^{-4} \pm 2.64\mathrm{e}^{-4}$ | $9.47\mathrm{e}^{-4} \pm 2.96\mathrm{e}^{-4}$ | $9.69\mathrm{e}^{-4} \pm 2.75\mathrm{e}^{-4}$ |
| DIGing-ATC [@nedic2017achieving] | $\bm{7.43\mathrm{e}^{-4} \pm 6.89\mathrm{e}^{-5}}$ | $\bm{4.46\mathrm{e}^{-4} \pm 9.93\mathrm{e}^{-5}}$ | $\bm{1.87\mathrm{e}^{-4} \pm 4.33\mathrm{e}^{-5}}$ | $\bm{1.71\mathrm{e}^{-4} \pm 4.51\mathrm{e}^{-5}}$ |
| DC-Grad (ours) | $1.28\mathrm{e}^{-3} \pm 1.15\mathrm{e}^{-4}$ | $7.65\mathrm{e}^{-4} \pm 1.75\mathrm{e}^{-4}$ | $3.05\mathrm{e}^{-4} \pm 7.27\mathrm{e}^{-5}$ | $2.85\mathrm{e}^{-4} \pm 6.69\mathrm{e}^{-5}$ |

In Table [\[tab:state_estimation_communication_robust_least_squares\]](#tab:state_estimation_communication_robust_least_squares){reference-type="ref" reference="tab:state_estimation_communication_robust_least_squares"}, we show the mean and standard deviation of the cumulative size of messages exchanged by each agent (in MB) for each distributed algorithm. Generally, on more-sparsely-connected graphs, $ABm$ converges the fastest, in terms of the number of iterations, and as a result, incurs the least communication overhead, closely followed by DIGing-ATC and DC-Grad. On the other hand, on more-densely-connected communication graphs, DC-Grad incurs the least communication overhead.
| Algorithm | $\kappa = 0.42$ | $\kappa = 0.74$ | $\kappa = 0.96$ | $\kappa = 1.00$ |
|:---|:---|:---|:---|:---|
| $AB$/Push-Pull [@xin2020general] | $8.86\mathrm{e}^{-1} \pm 2.26\mathrm{e}^{-1}$ | $9.02\mathrm{e}^{-1} \pm 2.30\mathrm{e}^{-1}$ | $9.80\mathrm{e}^{-1} \pm 2.48\mathrm{e}^{-1}$ | $1.01\mathrm{e}^{0} \pm 2.387\mathrm{e}^{-1}$ |
| $ABm$-DS [@xin2019distributed] | $1.02\mathrm{e}^{-1} \pm 1.85\mathrm{e}^{-2}$ | $1.02\mathrm{e}^{-1} \pm 2.99\mathrm{e}^{-2}$ | $1.09\mathrm{e}^{-1} \pm 3.45\mathrm{e}^{-2}$ | $1.08\mathrm{e}^{-1} \pm 3.14\mathrm{e}^{-2}$ |
| DIGing-ATC [@nedic2017achieving] | $\bm{1.14\mathrm{e}^{-1} \pm 1.05\mathrm{e}^{-2}}$ | $\bm{6.65\mathrm{e}^{-2} \pm 1.48\mathrm{e}^{-2}}$ | $2.81\mathrm{e}^{-2} \pm 6.74\mathrm{e}^{-3}$ | $2.52\mathrm{e}^{-2} \pm 6.70\mathrm{e}^{-3}$ |
| DC-Grad (ours) | $1.14\mathrm{e}^{-1} \pm 1.06\mathrm{e}^{-2}$ | $6.65\mathrm{e}^{-2} \pm 1.55\mathrm{e}^{-2}$ | $\bm{2.71\mathrm{e}^{-2} \pm 6.69\mathrm{e}^{-3}}$ | $\bm{2.47\mathrm{e}^{-2} \pm 5.95\mathrm{e}^{-3}}$ |

We show the convergence error of each agent's iterate $x_{i}$, per iteration, on a fully-connected communication network in Figure [3](#fig:algorithm_comparison_fully_connected_robust_least_squares){reference-type="ref" reference="fig:algorithm_comparison_fully_connected_robust_least_squares"}. We note that DC-Grad converges in the fewest iterations, closely followed by DIGing-ATC. In addition, $AB$/Push-Pull requires the greatest number of iterations for convergence. We note that $AB$/Push-Pull (which is equivalent to DIGing-CTA) utilizes the *combine-then-adapt* update scheme, which generally results in slower convergence [@nedic2017achieving].
Moreover, the objective function in [\[eq:huber_loss\]](#eq:huber_loss){reference-type="eqref" reference="eq:huber_loss"} is not strongly convex over its entire domain, specifically in the $\ell_{1}$-zone. In addition, gradient-tracking methods, in general, require (*restricted*) strong convexity for linear convergence. As a result, all the algorithms exhibit sublinear convergence initially, since they are initialized with $x_{i}$ in the $\ell_{1}$-zone, ${\forall i \in \mathcal{V}}$. The algorithms exhibit linear convergence when the iterates enter the $\ell_{2}^{2}$-zone, as depicted in Figure [3](#fig:algorithm_comparison_fully_connected_robust_least_squares){reference-type="ref" reference="fig:algorithm_comparison_fully_connected_robust_least_squares"}. In addition, Figure [4](#fig:algorithm_comparison_non_fully_connected_robust_least_squares_0p42){reference-type="ref" reference="fig:algorithm_comparison_non_fully_connected_robust_least_squares_0p42"} shows the convergence error of each agent's iterates on a randomly-generated communication network with ${\kappa = 0.42}$. On these graphs, $ABm$ requires the fewest iterations for convergence. We note that the convergence plot for DIGing-ATC overlays that of DC-Grad, with both algorithms exhibiting essentially the same performance. ![Convergence error of all agents per iteration in the distributed robust-state-estimation problem on a fully-connected communication network. DC-Grad attains the fastest convergence rate, while $AB$/Push-Pull attains the slowest convergence rate.](figures/robust_least_squares_fully_connected.pdf){#fig:algorithm_comparison_fully_connected_robust_least_squares width="0.85\\linewidth"} ![Convergence error of all agents per iteration in the distributed robust-state-estimation problem on a randomly-generated connected communication network with ${\kappa = 0.42}$.
The convergence plot of DIGing-ATC overlays that of DC-Grad, with both methods converging faster than the other methods in this trial, although, in general, $ABm$ converges marginally faster on more-sparsely-connected graphs.](figures/robust_least_squares_non_fully_connected_k_0p42.pdf){#fig:algorithm_comparison_non_fully_connected_robust_least_squares_0p42 width="0.85\\linewidth"} # Conclusion {#sec:conclusion} We introduce DC-Grad, a distributed conjugate gradient method, where each agent communicates with its immediate neighbors to compute an optimal solution of a distributed optimization problem. Our algorithm utilizes only first-order information of the optimization problem, without requiring second-order information. Through simulations, we show that our algorithm incurs the least communication overhead for convergence on densely-connected communication graphs, in general, at the expense of slightly increased computation overhead compared to the best-competing algorithm. In addition, on sparsely-connected communication graphs, our algorithm performs similarly to other first-order distributed algorithms. Preliminary convergence analysis suggests that our algorithm converges linearly. In future work, we seek to characterize the convergence rate of our method. Further, in our simulation studies, our algorithm exhibits notably similar performance to DIGing-ATC. We intend to examine this similarity in future work.
# Appendix {#sec:appendix .unnumbered} ## Proof of Lemma [Lemma 1](#lem:sequence_bounds){reference-type="ref" reference="lem:sequence_bounds"} {#appdx:lemma_sequence_bounds} Before proceeding with the proof, we state the following relation: $$\begin{aligned} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} &= \left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \cdot \ensuremath{\overline{\bm{z}}}^{(k)} + \ensuremath{\overline{\tilde{\bm{\alpha}}^{(k)} \cdot \tilde{\bm{z}}^{(k)}}} \right\Vert_{2}, \\ & \geq \left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2} \cdot \left\Vert \ensuremath{\overline{\bm{z}}}^{(k)} \right\Vert_{2} - \left\Vert \tilde{\bm{\alpha}}^{(k)} \right\Vert_{2} \cdot \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2}, \label{eq:x_update_mean_z_relation}\end{aligned}$$ where we have used the fact that ${\ensuremath{\overline{\alpha}}^{(k)} = \left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}}$ and ${\left\Vert \bm{1}_{N} \bm{1}_{N}^{\mathsf{T}} \right\Vert_{2} = \sqrt{N} \sqrt{N} = N}$. Considering the recurrence in [\[eq:mean_z\_sequence\]](#eq:mean_z_sequence){reference-type="eqref" reference="eq:mean_z_sequence"}: $$\label{eq:z_diff_relation} \begin{aligned} \bm{z}^{(k+1)} - \ensuremath{\overline{\bm{z}}}^{(k+1)} &= W \bm{z}^{(k)} - \ensuremath{\overline{\bm{z}}}^{(k)} + W(\bm{s}^{(k+1)} - \bm{s}^{(k)}) \\ & \quad - (\ensuremath{\overline{\bm{s}}}^{(k+1)} - \ensuremath{\overline{\bm{s}}}^{(k)}), \\ \tilde{\bm{z}}^{(k+1)} &= M \tilde{\bm{z}}^{(k)} + M (\bm{s}^{(k+1)} - \bm{s}^{(k)}), \end{aligned}$$ where we have utilized the relation: ${M \ensuremath{\overline{\bm{z}}}^{(k)} = 0}$. 
From [\[eq:z_diff_relation\]](#eq:z_diff_relation){reference-type="eqref" reference="eq:z_diff_relation"}: $$\begin{aligned} \left\Vert \tilde{\bm{z}}^{(k+1)} \right\Vert_{2} &= \left\Vert M \tilde{\bm{z}}^{(k)} + M (\bm{s}^{(k+1)} - \bm{s}^{(k)}) \right\Vert_{2}, \\ &\leq \left\Vert M \tilde{\bm{z}}^{(k)} \right\Vert_{2} + \left\Vert M (\bm{s}^{(k+1)} - \bm{s}^{(k)}) \right\Vert_{2}, \\ &\leq \lambda \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} + \lambda \left\Vert \bm{s}^{(k+1)} - \bm{s}^{(k)} \right\Vert_{2}. \end{aligned}$$ Using the recurrence [\[eq:s_sequence\]](#eq:s_sequence){reference-type="eqref" reference="eq:s_sequence"}: $$\begin{aligned} \left\Vert \tilde{\bm{z}}^{(k+1)} \right\Vert_{2} &\leq \lambda \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} + \lambda \left\Vert \bm{g}^{(k+1)} - \bm{g}^{(k)} \right\Vert_{2} \\ & \quad + \lambda \left(\left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} + \left\Vert \bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)} \right\Vert_{2} \right), \\ &\leq \lambda \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} + \lambda L \left\Vert \bm{x}^{(k+1)} - \bm{x}^{(k)} \right\Vert_{2} \\ & \quad + \lambda\left( \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} + \left\Vert \bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)} \right\Vert_{2} \right), \end{aligned}$$ from Lipschitz continuity of $\nabla \bm{f}$, with: $$\label{eq:tilde_z_norm_bound_step_a} \begin{aligned} \left\Vert \tilde{\bm{z}}^{(k+1)} \right\Vert_{2} &\leq \lambda \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda L \left\Vert \tilde{\bm{x}}^{(k+1)} + \ensuremath{\overline{\bm{x}}}^{(k+1)} - \tilde{\bm{x}}^{(k)} - \ensuremath{\overline{\bm{x}}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda\left( \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} + \left\Vert \bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)} \right\Vert_{2} \right), \\ &\leq \lambda \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda L \left(
\left\Vert \tilde{\bm{x}}^{(k+1)} \right\Vert_{2} + \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} \right. \\ & \hspace{4em} \left. + \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} \right) \\ & \quad + \lambda\left( \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} + \left\Vert \bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)} \right\Vert_{2} \right), \end{aligned}$$ using [\[eq:mean_x\_sequence\]](#eq:mean_x_sequence){reference-type="eqref" reference="eq:mean_x_sequence"}. In addition: $$\begin{aligned} \left\Vert \tilde{\bm{x}}^{(k+1)} \right\Vert_{2} &= \left\Vert \bm{x}^{(k+1)} - \ensuremath{\overline{\bm{x}}}^{(k+1)} \right\Vert_{2}, \\ &= \left\Vert W \left(\bm{x}^{(k)} + \bm{\alpha}^{(k)} \cdot \bm{z}^{(k)} \right) - \ensuremath{\overline{\bm{x}}}^{(k)} - \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}, \\ &= \left\Vert M \tilde{\bm{x}}^{(k)} + M \left(\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)} - \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right) \right\Vert_{2}, \\ &= \left\Vert M \tilde{\bm{x}}^{(k)} + M \left(\bm{\alpha}^{(k)} \cdot \tilde{\bm{z}}^{(k)} + \tilde{\bm{\alpha}}^{(k)} \cdot \ensuremath{\overline{\bm{z}}}^{(k)} \right) \right\Vert_{2}, \end{aligned}$$ noting: ${M \ensuremath{\overline{\bm{v}}} = 0}$.
Thus: $$\label{eq:tilde_x_norm_bound} \begin{aligned} \left\Vert \tilde{\bm{x}}^{(k+1)} \right\Vert_{2} &\leq \lambda \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} + \lambda \left\Vert \bm{\alpha}^{(k)} \cdot \tilde{\bm{z}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda \left\Vert \tilde{\bm{\alpha}}^{(k)} \cdot \ensuremath{\overline{\bm{z}}}^{(k)} \right\Vert_{2}, \\ &\leq \lambda \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} + \lambda \alpha_{\max} \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda \frac{ \left\Vert \tilde{\bm{\alpha}}^{(k)} \right\Vert_{2}}{\left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}} \cdot \left(\left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} \right. \\ & \hspace{8em} \left. + \left\Vert \tilde{\bm{\alpha}}^{(k)} \right\Vert_{2} \cdot \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \right), \\ & \leq \lambda \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} + \lambda \alpha_{\max} (1 + r_{\alpha}) \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda r_{\alpha} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}, \end{aligned}$$ using [\[eq:x_update_mean_z\_relation\]](#eq:x_update_mean_z_relation){reference-type="eqref" reference="eq:x_update_mean_z_relation"} in the second inequality and the fact ${\left\Vert \tilde{\bm{\alpha}}^{(k)} \right\Vert_{2} \leq \alpha_{\max}}$. Likewise, from [\[eq:s_sequence\]](#eq:s_sequence){reference-type="eqref" reference="eq:s_sequence"} and [\[eq:mean_s\_sequence\]](#eq:mean_s_sequence){reference-type="eqref" reference="eq:mean_s_sequence"}: $$\label{eq:tilde_s_sequence} \begin{aligned} \tilde{\bm{s}}^{(k+1)} = -\tilde{\bm{g}}^{(k + 1)} + \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} - \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}}.
\end{aligned}$$ Considering the second term in [\[eq:tilde_s\_sequence\]](#eq:tilde_s_sequence){reference-type="eqref" reference="eq:tilde_s_sequence"}: $$\begin{aligned} \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} &= \left\Vert \bm{\beta}^{(k)} \cdot \left(\tilde{\bm{s}}^{(k)} + \ensuremath{\overline{\bm{s}}}^{(k)}\right) \right\Vert_{2}, \\ &\leq \beta_{\max} \left\Vert \tilde{\bm{s}}^{(k)} + \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2}, \\ &\leq \beta_{\max} \left( \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} + \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} \right). \end{aligned}$$ Further, considering the third term in [\[eq:tilde_s\_sequence\]](#eq:tilde_s_sequence){reference-type="eqref" reference="eq:tilde_s_sequence"}: $$\begin{aligned} \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2} & \geq \left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2} \cdot \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} - \left\Vert \tilde{\bm{\beta}}^{(k)} \right\Vert_{2} \cdot \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2}; \end{aligned}$$ hence: $$\label{eq:norm_mean_s_upper_bound} \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} \leq \frac{1}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}} \left( \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2} + \left\Vert \tilde{\bm{\beta}}^{(k)} \right\Vert_{2} \cdot \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \right),$$ which yields: $$\begin{aligned} \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} &\leq \beta_{\max} (1 + r_{\beta}) \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \\ & \quad + \frac{\beta_{\max}}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}} \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}. 
\end{aligned}$$ Hence, from [\[eq:tilde_s\_sequence\]](#eq:tilde_s_sequence){reference-type="eqref" reference="eq:tilde_s_sequence"}: $$\label{eq:tilde_s_norm_bound} \begin{aligned} \left\Vert \tilde{\bm{s}}^{(k+1)} \right\Vert_{2} &\leq \left\Vert \tilde{\bm{g}}^{(k + 1)} \right\Vert_{2} + \beta_{\max} (1 + r_{\beta}) \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \\ & \quad + \left( 1 + \frac{\beta_{\max}}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}}\right) \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}, \\ &\leq \left\Vert \tilde{\bm{g}}^{(k + 1)} \right\Vert_{2} + \beta_{\max} (1 + r_{\beta}) \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \\ & \quad + \left( 1 + r_{\beta} \right) \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}. \end{aligned}$$ In addition, from [\[eq:tilde_z\_norm_bound_step_a\]](#eq:tilde_z_norm_bound_step_a){reference-type="eqref" reference="eq:tilde_z_norm_bound_step_a"} and [\[eq:tilde_x\_norm_bound\]](#eq:tilde_x_norm_bound){reference-type="eqref" reference="eq:tilde_x_norm_bound"}: $$\label{eq:tilde_z_norm_bound} \begin{aligned} \left\Vert \tilde{\bm{z}}^{(k+1)} \right\Vert_{2} &\leq (\lambda + \lambda^{2} L \alpha_{\max} ( 1 + r_{\alpha})) \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda L ( \lambda + 1) \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} \\ & \quad + \lambda L (\lambda r_{\alpha} + 1) \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} \\ & \quad + \lambda\left( \left\Vert \bm{\beta}^{(k)} \cdot \bm{s}^{(k)} \right\Vert_{2} + \left\Vert \bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)} \right\Vert_{2} \right). 
\end{aligned}$$ ## Proof of Theorem [Theorem 1](#thm:agreement){reference-type="ref" reference="thm:agreement"} {#appdx:thm_agreement} We introduce the following sequences: $$\begin{aligned} X^{(k)} &= \sqrt{\sum_{l = 0}^{k} \left\Vert \tilde{\bm{x}}^{(l)} \right\Vert_{2}^{2}}, \quad S^{(k)} = \sqrt{\sum_{l = 0}^{k} \left\Vert \tilde{\bm{s}}^{(l)} \right\Vert_{2}^{2}}, \\ Z^{(k)} &= \sqrt{\sum_{l = 0}^{k} \left\Vert \tilde{\bm{z}}^{(l)} \right\Vert_{2}^{2}}, \\ R^{(k)} &= \sqrt{\sum_{l = 0}^{k} \left(\left\Vert \ensuremath{\overline{\bm{\alpha}^{(l)} \cdot \bm{z}^{(l)}}} \right\Vert_{2}^{2} + \left\Vert \ensuremath{\overline{\bm{\beta}^{(l)} \cdot \bm{s}^{(l)}}} \right\Vert_{2}^{2} + \left\Vert \tilde{\bm{g}}^{(l + 1)} \right\Vert_{2}^{2} \right) }.\end{aligned}$$ We state the following lemma, and refer readers to [@xu2015augmented] for its proof. **Lemma 2**. *Given the non-negative scalar sequence $\{\nu^{(k)} \}_{\forall k > 0}$, defined by: $$\nu^{(k + 1)} \leq \lambda \nu^{(k)} + \omega^{(k)},$$ where ${\lambda \in (0, 1)}$, the following relation holds: $$V^{(k + 1)} \leq \gamma \Omega^{(k)} + \epsilon,$$ where ${V^{(k)} = \sqrt{\sum_{l = 0}^{k} \left\Vert \nu^{(l)} \right\Vert_{2}^{2}}}$, ${\Omega^{(k)} = \sqrt{\sum_{l = 0}^{k} \left\Vert \omega^{(l)} \right\Vert_{2}^{2}}}$, ${\gamma = \frac{\sqrt{2}}{1 - \lambda}}$, and ${\epsilon = \nu^{(0)} \sqrt{\frac{2}{1 - \lambda^{2}}}}$.* From [\[eq:tilde_x\_norm_bound\]](#eq:tilde_x_norm_bound){reference-type="eqref" reference="eq:tilde_x_norm_bound"}, let: $$\omega^{(k)} = \lambda \alpha_{\max} (1 +r_{\alpha}) \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} + \lambda r_{\alpha} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2},$$ which yields: $$\label{eq:X_recurrence_step_a} \begin{aligned} X^{(k)} & \leq \rho_{xz} Z^{(k)} + \rho_{xr} R^{(k)} + \epsilon_{x}, \end{aligned}$$ where ${\rho_{xz} = \frac{\sqrt{2}}{1 - \lambda} \lambda \alpha_{\max} (1 +r_{\alpha})}$, ${\rho_{xr} = 
\frac{\sqrt{2}}{1 - \lambda} \lambda r_{\alpha}}$, and ${\epsilon_{x} = \left\Vert \tilde{\bm{x}}^{(0)} \right\Vert_{2} \sqrt{\frac{2}{1 - \lambda^{2}}}}$. Likewise, from [\[eq:tilde_s\_norm_bound\]](#eq:tilde_s_norm_bound){reference-type="eqref" reference="eq:tilde_s_norm_bound"}, assuming ${\lambda_{s} = \beta_{\max} (1 + r_{\beta}) < 1}$: $$\label{eq:S_recurrence_step} \begin{aligned} S^{(k)} & \leq \mu_{sr} R^{(k)} + \mu_{sc}, \end{aligned}$$ where ${\mu_{sr} = \rho_{sr} = \frac{\sqrt{2}}{1 - \lambda_{s}} (2 + r_{\beta})}$ and ${\mu_{sc} = \epsilon_{s} = \left\Vert \tilde{\bm{s}}^{(0)} \right\Vert_{2} \sqrt{\frac{2}{1 - \lambda_{s}^{2}}}}$; from [\[eq:tilde_z\_norm_bound\]](#eq:tilde_z_norm_bound){reference-type="eqref" reference="eq:tilde_z_norm_bound"}, assuming ${\lambda_{z} = \lambda + \lambda^{2} L \alpha_{\max} (1 + r_{\alpha}) < 1}$, $$\label{eq:Z_recurrence_step_a} \begin{aligned} Z^{(k)} & \leq \rho_{zx} X^{(k)} + \rho_{zr} R^{(k)} + \epsilon_{z}, \end{aligned}$$ where ${\rho_{zx} = \frac{\sqrt{2}}{1 - \lambda_{z}} \lambda L (\lambda + 1)}$, ${\rho_{zr} = \frac{\sqrt{2}}{1 - \lambda_{z}} \left(\lambda L (\lambda r_{\alpha} + 1) + 2 \lambda \right)}$ and ${\epsilon_{z} = \left\Vert \tilde{\bm{z}}^{(0)} \right\Vert_{2} \sqrt{\frac{2}{1 - \lambda_{z}^{2}}}}$.
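As an informal numerical sanity check of Lemma 2 (the values of $\lambda$, $\nu^{(0)}$, and the driving sequence below are arbitrary choices of ours), the bound can be exercised on the worst-case equality recursion:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, nu0, K = 0.7, 2.0, 300
omega = rng.uniform(0.0, 1.0, size=K)

# Worst case of the lemma's hypothesis: nu^{k+1} = lam * nu^k + omega^k.
nu = [nu0]
for k in range(K):
    nu.append(lam * nu[-1] + omega[k])
nu = np.array(nu)

gamma = np.sqrt(2.0) / (1.0 - lam)
eps = nu0 * np.sqrt(2.0 / (1.0 - lam**2))

# Check V^{(k+1)} <= gamma * Omega^{(k)} + eps for every k.
V = np.sqrt(np.cumsum(nu**2))          # V[j] sums l = 0..j
Omega = np.sqrt(np.cumsum(omega**2))   # Omega[k] sums l = 0..k
assert np.all(V[1:] <= gamma * Omega + eps)
```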
From [\[eq:X_recurrence_step_a\]](#eq:X_recurrence_step_a){reference-type="eqref" reference="eq:X_recurrence_step_a"} and [\[eq:Z_recurrence_step_a\]](#eq:Z_recurrence_step_a){reference-type="eqref" reference="eq:Z_recurrence_step_a"}: $$\label{eq:X_recurrence_step} \begin{aligned} X^{(k)} & \leq \mu_{xr} R^{(k)} + \mu_{xc}, \\ \end{aligned}$$ where ${\mu_{xr} = \frac{\rho_{xz} \rho_{zr} + \rho_{xr}}{1 - \rho_{xz} \rho_{zx}}}$ and ${\mu_{xc} = \frac{\rho_{xz} \epsilon_{z} + \epsilon_{x}}{1 - \rho_{xz} \rho_{zx}}}$; likewise, $$\label{eq:Z_recurrence_step} \begin{aligned} Z^{(k)} & \leq \mu_{zr} R^{(k)} + \mu_{zc}, \\ \end{aligned}$$ where ${\mu_{zr} = \frac{\rho_{zx} \rho_{xr} + \rho_{zr}}{1 - \rho_{zx} \rho_{xz}}}$ and ${\mu_{zc} = \frac{\rho_{zx} \epsilon_{x} + \epsilon_{z}}{1 - \rho_{zx} \rho_{xz}}}$. From $L$-Lipschitz continuity of the gradient: $$f_{i}(y) \leq f_{i}(x) + \nabla f_{i}(x)^{\mathsf{T}}(y - x) + \frac{L_{i}}{2} \left\Vert y - x \right\Vert_{2}^{2}, \ \forall i \in \mathcal{V}.$$ Hence: $$\label{eq:L_Lipschitz_upper_bound_step_a} \begin{aligned} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k+1)}) &\leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) \\ & \quad + \frac{1}{N} \ensuremath{\mathrm{trace}\left({\bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)}) \cdot (\ensuremath{\overline{\bm{x}}}^{(k+1)} - \ensuremath{\overline{\bm{x}}}^{(k)})^{\mathsf{T}} }\right)} \\ & \quad + \frac{1}{N} \cdot \frac{L}{2} \left\Vert \ensuremath{\overline{\bm{x}}}^{(k+1)} - \ensuremath{\overline{\bm{x}}}^{(k)} \right\Vert_{F}^{2}, \\ &\leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) + \frac{1}{N} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{F} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{F} \\ & \quad + \frac{1}{N} \cdot \frac{L}{2} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{F}^{2} \\ & \quad + \frac{1}{N} \left\Vert \bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)}) - \bm{g}(\bm{x}^{(k)}) \right\Vert_{F}
\left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{F}. \end{aligned}$$ Considering the second term in [\[eq:L_Lipschitz_upper_bound_step_a\]](#eq:L_Lipschitz_upper_bound_step_a){reference-type="eqref" reference="eq:L_Lipschitz_upper_bound_step_a"}: $$\label{eq:mean_g_relation_step_a} \begin{aligned} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{F} &= \left\Vert \ensuremath{\overline{\bm{\beta}^{(k-1)} \cdot \bm{s}^{(k-1)}}} - \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{F}, \\ &\leq \left\Vert \ensuremath{\overline{\bm{\beta}^{(k-1)} \cdot \bm{s}^{(k-1)}}} \right\Vert_{F} + \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{F}, \end{aligned}$$ from [\[eq:mean_s\_sequence\]](#eq:mean_s_sequence){reference-type="eqref" reference="eq:mean_s_sequence"}. Further: $$\begin{aligned} \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} &= \ensuremath{\overline{\bm{\beta}}}^{(k)} \cdot \ensuremath{\overline{\bm{s}}}^{(k)} + \ensuremath{\overline{\tilde{\bm{\beta}}^{(k)} \cdot \tilde{\bm{s}}^{(k)}}}, \\ \end{aligned}$$ which shows that: $$\label{eq:mean_s_relation} \begin{aligned} \ensuremath{\overline{\bm{s}}}^{(k)} = \ensuremath{\overline{\bm{\beta}}}^{(k)^{\dagger}} \left( \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} - \ensuremath{\overline{\tilde{\bm{\beta}}^{(k)} \cdot \tilde{\bm{s}}^{(k)}}} \right), \\ \end{aligned}$$ where ${\ensuremath{\overline{\bm{\beta}}}^{(k)^{\dagger}}}$ denotes the inverse of ${\ensuremath{\overline{\bm{\beta}}}^{(k)}}$. 
Hence, from [\[eq:mean_g\_relation_step_a\]](#eq:mean_g_relation_step_a){reference-type="eqref" reference="eq:mean_g_relation_step_a"} and [\[eq:mean_s\_relation\]](#eq:mean_s_relation){reference-type="eqref" reference="eq:mean_s_relation"}: $$\begin{aligned} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{F} &\leq \left\Vert \ensuremath{\overline{\bm{\beta}^{(k-1)} \cdot \bm{s}^{(k-1)}}} \right\Vert_{F} \\ & \quad + \frac{1}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{F}} \left( \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{F} + \left\Vert \ensuremath{\overline{\tilde{\bm{\beta}}^{(k)} \cdot \tilde{\bm{s}}^{(k)}}} \right\Vert_{F} \right), \\ \end{aligned}$$ where ${\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)^{\dagger}} \right\Vert_{F} = \frac{\sqrt{N}}{\left\Vert \ensuremath{\overline{\beta}}^{(k)} \right\Vert_{2}}}$. Further: $$\begin{aligned} \left\Vert \ensuremath{\overline{\tilde{\bm{\beta}}^{(k)} \cdot \tilde{\bm{s}}^{(k)}}} \right\Vert_{F} &= \left\Vert \frac{\bm{1}_{N}\bm{1}_{N}^{\mathsf{T}}}{N} \left(\tilde{\bm{\beta}}^{(k)} \cdot \tilde{\bm{s}}^{(k)}\right) \right\Vert_{F}, \\ & \leq \left\Vert \tilde{\bm{\beta}}^{(k)} \right\Vert_{F} \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{F}. \end{aligned}$$ In addition, we note that: $$\begin{aligned} \left\Vert \ensuremath{\overline{\bm{\beta}^{(k-1)} \cdot \bm{s}^{(k-1)}}} \right\Vert_{F} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{F} &\leq \frac{1}{2} \left\Vert \ensuremath{\overline{\bm{\beta}^{(k-1)} \cdot \bm{s}^{(k-1)}}} \right\Vert_{F}^{2} \\ & \quad + \frac{1}{2} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{F}^{2}, \end{aligned}$$ from Young's inequality ${a b \leq \frac{1}{2} (a^{2} + b^{2})}$. 
Hence, from [\[eq:L_Lipschitz_upper_bound_step_a\]](#eq:L_Lipschitz_upper_bound_step_a){reference-type="eqref" reference="eq:L_Lipschitz_upper_bound_step_a"}: $$\label{eq:L_Lipschitz_upper_bound_step_b} \begin{aligned} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k+1)}) &\leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) \\ & \quad + \frac{1}{2} \left(\left\Vert \ensuremath{\overline{\bm{\beta}^{(k-1)} \cdot \bm{s}^{(k-1)}}} \right\Vert_{2}^{2} + \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}^{2} \right) \\ & \quad + \frac{1}{2 \left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}} \left(\left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}^{2} + \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}^{2} \right) \\ & \quad + \frac{\left\Vert \tilde{\bm{\beta}}^{(k)} \right\Vert_{2}}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}} \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} \\ & \quad + \frac{L}{2} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}^{2} \\ & \quad + L \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}, \end{aligned}$$ where the last term results from Lipschitz continuity of $\nabla f$. 
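The quadratic upper bound used throughout this step is the standard descent lemma for an $L$-smooth function. A minimal numeric illustration for a one-dimensional quadratic, where the lemma in fact holds with equality:

```python
import numpy as np

# Descent-lemma check for f(x) = 0.5*L*x^2, whose gradient g(x) = L*x is
# L-Lipschitz; for this quadratic the upper bound is tight.
L = 2.0
f = lambda x: 0.5 * L * x ** 2
g = lambda x: L * x

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2)
    bound = f(x) + g(x) * (y - x) + 0.5 * L * (y - x) ** 2
    assert f(y) <= bound + 1e-12  # f(y) <= f(x) + g(x)(y-x) + (L/2)(y-x)^2
```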
Summing [\[eq:L_Lipschitz_upper_bound_step_b\]](#eq:L_Lipschitz_upper_bound_step_b){reference-type="eqref" reference="eq:L_Lipschitz_upper_bound_step_b"} over $k$ from $0$ to $t$: $$\begin{aligned} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(t+1)}) &\leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) + \frac{1}{2} \left(1 + \frac{1}{\sqrt{N} \beta_{\max}} + L \right) \left(R^{(t)}\right)^{2} \\ & \quad + r_{\beta} S^{(t)} R^{(t)} + L X^{(t)} R^{(t)} \end{aligned}$$ where ${\bm{\beta}^{(-1)} = \bm{s}^{(-1)} = \bm{0}}$, and we have added the term ${\frac{1}{2} \left\Vert \ensuremath{\overline{\bm{\beta}^{(t)} \cdot \bm{s}^{(t)}}} \right\Vert_{2}^{2}}$. Given [\[eq:S_recurrence_step\]](#eq:S_recurrence_step){reference-type="eqref" reference="eq:S_recurrence_step"} and [\[eq:X_recurrence_step\]](#eq:X_recurrence_step){reference-type="eqref" reference="eq:X_recurrence_step"}: $$\label{eq:objective_bound} \begin{aligned} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(t+1)}) &\leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) + a_{1} \left(R^{(t)}\right)^{2} + a_{2} R^{(t)}, \end{aligned}$$ where ${a_{1} = \frac{1}{2} \left(1 + \frac{1}{\sqrt{N} \beta_{\max}} + L \right) + r_{\beta} \mu_{sr} + L \mu_{xr}}$ and ${a_{2} = r_{\beta} \mu_{sc} + L \mu_{xc}}$. 
Subtracting ${f^{\star} = f(x^{\star})}$ from both sides in [\[eq:objective_bound\]](#eq:objective_bound){reference-type="eqref" reference="eq:objective_bound"} yields: $$\begin{aligned} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(t+1)}) - f^{\star} &\leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) - f^{\star} + a_{1} \left(R^{(t)}\right)^{2} + a_{2} R^{(t)}, \end{aligned}$$ showing that: $$0 \leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(t+1)}) - f^{\star} \leq \bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) - f^{\star} + a_{1} \left(R^{(t)}\right)^{2} + a_{2} R^{(t)}.$$ Hence: $$a_{1} \left(R^{(t)}\right)^{2} + a_{2} R^{(t)} + \bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) - f^{\star} \geq 0.$$ We note that ${a_{2} \geq 0}$, while ${a_{1} < 0}$ exactly when: $$\frac{1}{2} \left(1 + \frac{1}{\sqrt{N} \beta_{\max}} + L \right) + r_{\beta} \mu_{sr} + L \mu_{xr} < 0.$$ With ${a_{1} < 0}$, the preceding inequality is equivalent to: $$-a_{1} \left(R^{(t)}\right)^{2} - a_{2} R^{(t)} - \left(\bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) - f^{\star}\right) \leq 0,$$ which yields: $$\label{eq:R_Sequence_lim} \lim_{t \rightarrow \infty} R^{(t)} \leq \frac{a_{2} + \sqrt{a_{2}^{2} - 4 a_{1} \left(\bm{f}(\ensuremath{\overline{\bm{x}}}^{(0)}) - f^{\star}\right)}}{- 2 a_{1}} = \Sigma < \infty.$$ Since $R^{(t)}$ is nondecreasing and, by [\[eq:R_Sequence_lim\]](#eq:R_Sequence_lim){reference-type="eqref" reference="eq:R_Sequence_lim"}, bounded above, the monotone convergence theorem shows that $R^{(t)}$ converges, and: $$\lim_{k \rightarrow \infty} \left(\left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}^{2} + \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}^{2} + \left\Vert \tilde{\bm{g}}^{(k+ 1)} \right\Vert_{2}^{2} \right) = 0,$$ which shows that: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2}^{2} &= \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2}^{2} \\ &= 
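The bound $\Sigma$ is the larger root of a concave quadratic. The following sketch (with made-up values of ${a_1 < 0}$, ${a_2 \geq 0}$, and the initial optimality gap, purely for illustration) computes $\Sigma$ and checks that the quadratic is nonnegative only up to that root:

```python
import math

# Made-up values with a1 < 0, a2 >= 0, gap = f(x0) - f* >= 0 (assumptions).
a1, a2, gap = -0.5, 0.3, 1.0

# The concave quadratic a1*R^2 + a2*R + gap is nonnegative exactly between
# its roots, so R is bounded by the larger root Sigma.
Sigma = (a2 + math.sqrt(a2 ** 2 - 4.0 * a1 * gap)) / (-2.0 * a1)

q = lambda R: a1 * R ** 2 + a2 * R + gap
assert abs(q(Sigma)) < 1e-12     # Sigma is a root
assert q(Sigma + 0.1) < 0.0      # beyond Sigma the quadratic turns negative
```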
\lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{g}}^{(k+ 1)} \right\Vert_{2}^{2} \\ &= 0. \end{aligned}$$ From [\[eq:S_recurrence_step\]](#eq:S_recurrence_step){reference-type="eqref" reference="eq:S_recurrence_step"}: $$\lim_{k \rightarrow \infty} S^{(k)} \leq \lim_{k \rightarrow \infty} (\mu_{sr} R^{(k)} + \mu_{sc}) \leq \mu_{sr} \Sigma + \mu_{sc} < \infty.$$ Similarly, from the monotone convergence theorem: $$\label{eq:s_tilde_agreement} \lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} = 0.$$ Likewise, from [\[eq:X_recurrence_step\]](#eq:X_recurrence_step){reference-type="eqref" reference="eq:X_recurrence_step"}: $$\lim_{k \rightarrow \infty} X^{(k)} \leq \lim_{k \rightarrow \infty} (\mu_{xr} R^{(k)} + \mu_{xc}) \leq \mu_{xr} \Sigma + \mu_{xc} < \infty;$$ $$\label{eq:x_tilde_agreement} \lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} = 0.$$ Similarly, from [\[eq:Z_recurrence_step\]](#eq:Z_recurrence_step){reference-type="eqref" reference="eq:Z_recurrence_step"}: $$\lim_{k \rightarrow \infty} Z^{(k)} \leq \lim_{k \rightarrow \infty} (\mu_{zr} R^{(k)} + \mu_{zc}) \leq \mu_{zr} \Sigma + \mu_{zc} < \infty,$$ showing that: $$\label{eq:z_tilde_agreement} \lim_{k \rightarrow \infty} \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2} = 0.$$ From [\[eq:s_tilde_agreement\]](#eq:s_tilde_agreement){reference-type="eqref" reference="eq:s_tilde_agreement"}, [\[eq:x_tilde_agreement\]](#eq:x_tilde_agreement){reference-type="eqref" reference="eq:x_tilde_agreement"}, and [\[eq:z_tilde_agreement\]](#eq:z_tilde_agreement){reference-type="eqref" reference="eq:z_tilde_agreement"}, we note that the agents reach *agreement* or *consensus*, with the local iterate of each agent converging to the mean as ${k \rightarrow \infty}$. 
Moreover, from [\[eq:x_update_mean_z\_relation\]](#eq:x_update_mean_z_relation){reference-type="eqref" reference="eq:x_update_mean_z_relation"}: $$\begin{aligned} \left\Vert \ensuremath{\overline{\bm{z}}}^{(k)} \right\Vert_{2} &\leq \frac{1}{\left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}} \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} + \frac{\left\Vert \tilde{\bm{\alpha}}^{(k)} \right\Vert_{2}}{\left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}} \cdot \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2};\end{aligned}$$ hence: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{z}}}^{(k)} \right\Vert_{2} &\leq \lim_{k \rightarrow \infty} \left(\frac{1}{\left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}} \cdot \left\Vert \ensuremath{\overline{\bm{\alpha}^{(k)} \cdot \bm{z}^{(k)}}} \right\Vert_{2} \right. \\ & \hspace{6em} \left. + \frac{\left\Vert \tilde{\bm{\alpha}}^{(k)} \right\Vert_{2}}{\left\Vert \ensuremath{\overline{\bm{\alpha}}}^{(k)} \right\Vert_{2}} \cdot \left\Vert \tilde{\bm{z}}^{(k)} \right\Vert_{2}\right), \\ &= 0, \end{aligned}$$ yielding: $$\lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{z}}}^{(k)} \right\Vert_{2} = 0.$$ Likewise, from [\[eq:norm_mean_s\_upper_bound\]](#eq:norm_mean_s_upper_bound){reference-type="eqref" reference="eq:norm_mean_s_upper_bound"}: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} &\leq \lim_{k \rightarrow \infty} \left( \frac{1}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}} \cdot \left\Vert \ensuremath{\overline{\bm{\beta}^{(k)} \cdot \bm{s}^{(k)}}} \right\Vert_{2} \right. \\ & \hspace{6em} \left. 
+ \frac{\left\Vert \tilde{\bm{\beta}}^{(k)} \right\Vert_{2}}{\left\Vert \ensuremath{\overline{\bm{\beta}}}^{(k)} \right\Vert_{2}} \cdot \left\Vert \tilde{\bm{s}}^{(k)} \right\Vert_{2} \right), \\ &= 0, \end{aligned}$$ giving the result: $$\lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} = 0.$$ Further, from [\[eq:mean_s\_sequence\]](#eq:mean_s_sequence){reference-type="eqref" reference="eq:mean_s_sequence"}: $$\lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{2} \leq \lim_{k \rightarrow \infty} \left(\left\Vert \ensuremath{\overline{\bm{s}}}^{(k)} \right\Vert_{2} + \left\Vert \ensuremath{\overline{\bm{\beta}^{(k - 1)} \cdot \bm{s}^{(k - 1)}}} \right\Vert_{2} \right)= 0,$$ yielding: $$\lim_{k \rightarrow \infty} \left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{2} = 0.$$ ## Proof of Theorem [Theorem 2](#thm:convergence){reference-type="ref" reference="thm:convergence"} {#appdx:thm_convergence} Since $\bm{f}$ is convex: $$\label{eq:objective_bound_convex} \begin{aligned} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) - f^{\star} &\leq \frac{1}{N} \ensuremath{\mathrm{trace}\left({\bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)}) \cdot ( \ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{\star})^{\mathsf{T}} }\right)}, \\ &\leq \frac{1}{N} \left\Vert \ensuremath{\overline{\bm{g}}}(\bm{x}^{(k)}) \right\Vert_{F} \left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{\star} \right\Vert_{F} \\ & \quad + \frac{1}{N} \left\Vert \bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)}) - \bm{g}({\bm{x}}^{(k)}) \right\Vert_{F} \left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{\star} \right\Vert_{F}, \\ &\leq \left\Vert \ensuremath{\overline{\bm{g}}}(\bm{x}^{(k)}) \right\Vert_{2} \left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{\star} \right\Vert_{2} \\ & \quad + \frac{L}{2}\left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{(k)} \right\Vert_{2} \left\Vert 
\ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{\star} \right\Vert_{2}, \end{aligned}$$ where ${\bm{x}^{\star} = \bm{1}_{N} \left(x^{\star}\right)^{\mathsf{T}}}$. Since $\bm{f}$ is coercive by assumption and $\bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)})$ is bounded from [\[eq:objective_bound\]](#eq:objective_bound){reference-type="eqref" reference="eq:objective_bound"}, $\left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} \right\Vert_{2}$ is bounded, and thus, ${\left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} - \bm{x}^{\star} \right\Vert_{2} \leq \left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} \right\Vert_{2} + \left\Vert \bm{x}^{\star} \right\Vert_{2}}$ is bounded. Hence, since ${\left\Vert \ensuremath{\overline{\bm{g}}}^{(k)} \right\Vert_{2} \rightarrow 0}$ and ${\left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} \rightarrow 0}$ as ${k \rightarrow \infty}$: $$\lim_{k \rightarrow \infty} \left(\bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) - f^{\star} \right) \leq 0,$$ which indicates that: $$\label{eq:objective_mean_convergence} \lim_{k \rightarrow \infty} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) = f^{\star}.$$ From the mean-value theorem: $$\label{eq:f_bound_mean_value_theorem} \bm{f}({\bm{x}}^{(k)}) = \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) + \frac{1}{N} \ensuremath{\mathrm{trace}\left({\bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)} + \xi \tilde{\bm{x}}^{(k)}) \cdot \left(\tilde{\bm{x}}^{(k)}\right)^{\mathsf{T}} }\right)},$$ where ${0 \leq \xi \leq 1}$. In addition, ${\left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} + \xi \tilde{\bm{x}}^{(k)} \right\Vert_{2} \leq \left\Vert \ensuremath{\overline{\bm{x}}}^{(k)} \right\Vert_{2} + \xi \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2}}$ is bounded, as well as ${\left\Vert \bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)} + \xi \tilde{\bm{x}}^{(k)}) \right\Vert_{2}}$, since $\bm{g}$ is Lipschitz-continuous. 
As a result, from [\[eq:f_bound_mean_value_theorem\]](#eq:f_bound_mean_value_theorem){reference-type="eqref" reference="eq:f_bound_mean_value_theorem"}, $$\begin{aligned} &\left\lvert \bm{f}({\bm{x}}^{(k)}) - \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) \right\rvert \\ & \quad = \frac{1}{N} \left\lvert \ensuremath{\mathrm{trace}\left({\bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)} + \xi \tilde{\bm{x}}^{(k)}) \cdot \left(\tilde{\bm{x}}^{(k)}\right)^{\mathsf{T}} }\right)} \right\rvert, \\ & \quad \leq \left\Vert \bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)} + \xi \tilde{\bm{x}}^{(k)}) \right\Vert_{2} \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2}. \end{aligned}$$ Hence: $$\begin{aligned} \lim_{k \rightarrow \infty} \left\lvert \bm{f}({\bm{x}}^{(k)}) - \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) \right\rvert &\leq \lim_{k \rightarrow \infty} \left( \left\Vert \bm{g}(\ensuremath{\overline{\bm{x}}}^{(k)} + \xi \tilde{\bm{x}}^{(k)}) \right\Vert_{2} \right. \\ & \hspace{4em} \left. \cdot \left\Vert \tilde{\bm{x}}^{(k)} \right\Vert_{2} \right), \\ &= 0, \end{aligned}$$ from [\[eq:x_tilde_agreement\]](#eq:x_tilde_agreement){reference-type="eqref" reference="eq:x_tilde_agreement"}. As a result: $$\lim_{k \rightarrow \infty} \bm{f}({\bm{x}}^{(k)}) = \lim_{k \rightarrow \infty} \bm{f}(\ensuremath{\overline{\bm{x}}}^{(k)}) = f^{\star},$$ from [\[eq:objective_mean_convergence\]](#eq:objective_mean_convergence){reference-type="eqref" reference="eq:objective_mean_convergence"}, proving convergence to the optimal objective value. [^1]: \*This work was supported in part by NSF NRI awards 1830402 and 1925030 and ONR grant N00014-18-1-2830. [^2]: $^{1}$Ola Shorinwa is with the Department of Mechanical Engineering, Stanford University, CA, USA `shorinwa@stanford.edu`. [^3]: $^{2}$Mac Schwager is with the Department of Aeronautics and Astronautics Engineering, Stanford University, CA, USA `schwager@stanford.edu`.
--- abstract: | Earth introduces strong attenuation and dispersion to propagating waves. The time-fractional wave equation with very small fractional exponent, based on Kjartansson's constant-Q theory, is widely recognized in the field of geophysics as a reliable model for frequency-independent Q anelastic behavior. Nonetheless, the numerical resolution of this equation poses considerable challenges due to the requirement of storing a complete time history of wavefields. To address this computational challenge, we present a novel approach: a nearly optimal sum-of-exponentials (SOE) approximation to the Caputo fractional derivative with very small fractional exponent, utilizing the machinery of generalized Gaussian quadrature. This method minimizes the number of memory variables needed to approximate the power attenuation law within a specified error tolerance. We establish a mathematical equivalence between this SOE approximation and the continuous fractional stress-strain relationship, relating it to the generalized Maxwell body model. Furthermore, we prove an improved SOE approximation error bound to thoroughly assess the ability of rheological models to replicate the power attenuation law. Numerical simulations on constant-Q viscoacoustic equation in 3D homogeneous media and variable-order P- and S- viscoelastic wave equations in 3D inhomogeneous media are performed. These simulations demonstrate that our proposed technique accurately captures changes in amplitude and phase resulting from material anelasticity. This advancement provides a significant step towards the practical usage of the time-fractional wave equation in seismic inversion. 
74D05; 65M22; 41A05; 35R11; 33F05 viscoelasticity, fractional derivative, sum-of-exponentials approximation, generalized Gaussian quadrature, fast algorithm, rheological model author: - Xu Guo - Shidong Jiang - Yunfeng Xiong - Jiwei Zhang title: Compressing the memory variables in constant-Q viscoelastic wave propagation via an improved sum-of-exponentials approximation --- # Introduction Seismic wave propagation has anelastic characteristics in real earth materials due to the loss of energy from the geometrical effect of the enlargement of the wavefront and the intrinsic absorption of the earth [@LiuAndersonKanamori1976; @Kjartansson1979; @CarcioneCavalliniMainardiHanyga2002]. The attenuation of seismic waves causes a decrease in the resolution of seismic images with depth, and transmission losses lead to variations in amplitude with offset [@UrsinToverud2002]. Therefore, accurate attenuation compensation is crucial for enhancing the reliability of seismic data interpretation [@EmmerichKorn1987; @YangBrossierMetivierVirieux2017; @Tromp2020; @MirzanejadTranWang2022; @WangHarrierBaiSaadYangChen2022]. Empirically, the attenuation in an elastic solid is often described by formulations using memory variables, where the elastic moduli act as time convolution operators instead of constants [@UrsinToverud2002; @bk:MarquesGreus2012]. Energy dissipation is characterized by the quality factor $Q$, which is loosely defined as the number of wavelengths through which a wave can propagate in a medium before its amplitude decreases by $e^{-\pi}$ [@BlanchRobertssonSymes1995]. 
The most common methods are rheological models composed of Hookean elements (springs) and dashpots with different connecting styles, including the standard linear solid [@LiuAndersonKanamori1976], the generalized Maxwell body (GMB), and the generalized Zener body (GZB) [@BlanchRobertssonSymes1995; @DayMinster1984; @ParkSchapery1999; @MoczoKristek2005; @ZhuCarcioneHarris2013; @CaoYin2015; @BlancKomatitschChaljubLombardXie2016]. These methods are based on an approximation of the viscoacoustic/viscoelastic modulus by a low-order rational function, or equivalently, replacing the relaxation spectrum with a discrete one, so that the frequency-independent Q behavior is mimicked by a superposition of mechanical elements [@GrobyTsogka2006; @WangXingZhuZhouShi2022]. Despite their sound physical interpretation and convenience in the time-domain finite-difference implementation [@GrobyTsogka2006; @RobertssonBlanchSymes1994], there remains an open question regarding the overall accuracy of such approaches, especially when propagating waves over a large number of wavelengths [@BlancKomatitschChaljubLombardXie2016]. Additionally, the quality factor $Q$ is implicitly parameterized by a set of parameters obtained via either linear or nonlinear optimization [@BlanchRobertssonSymes1995; @DayMinster1984; @BlancKomatitschChaljubLombardXie2016], meaning that parameter crosstalk presents a significant challenge to the nonlinear inversion of the $Q$ value [@YangBrossierMetivierVirieux2017; @WangLiuJiChenLiu2019]. By contrast, the power attenuation law, which belongs to a family of fractional-derivative viscoelastic models, has garnered significant attention from geophysicists due to its succinct capability to describe the frequency-independent attenuation behavior consistently with the requirements of causality and dissipativity [@Kjartansson1979; @CarcioneKosloffKosloff1988; @HanyaSerdynska2003; @bk:Mainardi2010]. 
The time-fractional viscoelastic wave equation [@CarcioneKosloffKosloff1988; @Carcione2009; @bk:Mainardi2010; @Zhu2017], based on Kjartansson's constant-Q theory [@Kjartansson1979], is entirely specified by two parameters: the phase velocity at the reference frequency and the quality factor $Q$ [@Carcione2009]. Consequently, it provides a much more economical parameterization across a broad range of frequencies and might potentially reduce the parameter crosstalk in the Q-inversion problem [@Zhu2017]. However, the forward and inverse wavefield modeling via the time-fractional wave equation remains challenging because of the need to store a complete time history [@SchmidtGaul2006; @Diethelm2008; @HuangLiLiZengGuo2022]. Since the loss caused by scattering and absorption is relatively minor in most situations, the attenuation of seismic energy is only treated as a minor perturbation to the propagation [@Kjartansson1979; @SunGaoLiu2019]. For instance, $Q$ values typically range from $10$ to $100$ in real viscoelastic media, and the fractional exponent $2\gamma = \frac{2}{\pi} \arctan \frac{1}{Q}$ ranges from $10^{-3}$ to $10^{-1}$ [@CarcioneCavalliniMainardiHanyga2002]. Due to the Abel kernel's flatness, the classical finite difference methods for the temporal fractional stress-strain relation struggle with the extensive memory requirement. As indicated in [@Zhu2017; @LiuHuGuoLuo2018], this requires the storage of $400$-$500$ levels of the wavefield history in real imaging applications, resulting in substantial computational efforts to address the nonlocality, which hinders the broader application of Kjartansson's theory. However, such long-memory dependence seems to contradict our intuition that stress should rely more on the immediate history of strain than on its distant history [@bk:Christensen2012]. 
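The quoted range of the fractional exponent can be checked directly; a minimal sketch evaluating ${2\gamma = \frac{2}{\pi} \arctan \frac{1}{Q}}$ for representative $Q$ values:

```python
import math

# Fractional exponent 2*gamma = (2/pi) * arctan(1/Q).
def fractional_exponent(Q):
    return (2.0 / math.pi) * math.atan(1.0 / Q)

# For Q between 10 and 100 (typical real viscoelastic media) the exponent
# stays small, consistent with treating attenuation as a weak perturbation.
for Q in (10.0, 100.0):
    assert 1e-3 < fractional_exponent(Q) < 1e-1

# Weak-attenuation limit: the exponent vanishes as Q grows.
assert fractional_exponent(1e6) < 1e-6
```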
Since the fractional Caputo derivative with a small exponent is very close to an identity operator, a more appropriate discretized approximation should adhere to the short-memory principle [@YuanAgrawal2002; @BlancChiavassaLombard2014; @XiongGuo2022]. To achieve this, an effective strategy is to convert the flat Abel kernel into a localized power function through an integral representation of the power creep, yielding the sum-of-exponentials (SOE) approximation to the fractional stress-strain relation. The challenge lies in accurately mimicking a given attenuation law using a minimal set of memory variables [@BlancKomatitschChaljubLombardXie2016], that is, in determining how to reduce the number of exponentials, $N_{\textup{exp}}$, for a given error tolerance. The purpose of our paper is two-fold. First, we aim to explore the limit of compressing the memory variables for a practically small fractional exponent. While there is a vast body of literature devoted to constructing efficient SOE approximations [@jiang2001thesis; @JiangGreengard2004; @BeylkinMonzon2010; @JiangZhangZhangZhang2017; @JRLi2010; @HuangLiLiZengGuo2022], existing methods might still suffer from stagnation. This means they might require more exponentials than necessary due to the low-exponent breakdown in the initial integral representation. To address this challenge, we begin with a new integral representation of the power function and employ the generalized Gaussian quadrature (GGQ) machinery [@BremerGimbutasRokhlin2010; @ggq2; @ggq3] to construct a (nearly) optimal SOE approximation, which requires less than half the nodes compared to existing methods [@JiangZhangZhangZhang2017]. To validate the accuracy of our new SOE approximation, we simulate the constant-Q viscoacoustic wave equation in 3-D homogeneous media and compare the results with the analytical solution [@Hanyga2002]. 
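To convey the idea behind SOE compression (though not the GGQ construction developed in this paper), even a crude least-squares fit shows that a short exponential sum can reproduce the flat power kernel $t^{-2\gamma}$ on a finite time window; the rates and sampling grid below are ad hoc choices for illustration:

```python
import numpy as np

# Ad hoc illustration: fit t^(-2*gamma) on [0.1, 10] by a short sum of
# exponentials sum_j w_j * exp(-s_j * t) with log-spaced rates s_j.
gamma = 0.03                    # roughly Q ~ 10 (strong attenuation)
t = np.logspace(-1, 1, 400)
target = t ** (-2.0 * gamma)

rates = np.logspace(-2, 2, 12)  # 12 memory variables
A = np.exp(-np.outer(t, rates))
w, *_ = np.linalg.lstsq(A, target, rcond=None)

rel_err = np.max(np.abs(A @ w - target) / target)
assert rel_err < 1e-2           # a short sum already tracks the flat kernel
```

The paper's GGQ machinery replaces this naive fit with optimized nodes and weights, which is what drives $N_{\textup{exp}}$ down to a near-minimal count for a prescribed tolerance.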
Numerical results show that the new SOE approximation can accurately capture the changes in both amplitude and phase induced by material anelasticity [@ZhuCarcioneHarris2013]. For the strong attenuation case ($Q=10$), it can achieve a relative error less than $10^{-3}$ with $N_{\textup{exp}}< 10$. Additionally, it demands fewer memory variables for larger $Q$ values, and $N_{\textup{exp}}$ decreases as the fractional exponent approaches zero, ensuring that stagnation is effectively circumvented. Second, we establish a rigorous mathematical equivalence between the SOE approximation of the power creep model and GMB. The latter has been proven to be equivalent to other rheological models such as GZB [@MoczoKristek2005; @CaoYin2015]. It is observed that the discretized constitutive equation [\[fractional_stress_strain\]](#fractional_stress_strain){reference-type="eqref" reference="fractional_stress_strain"} can be transformed into a differential equation form [@YuanAgrawal2002; @XiongGuo2022], leading to the relaxed dynamics of a collection of parallel springs and dashpots [@ParkSchapery1999; @SchmidtGaul2006]. In some sense, the SOE approximation provides a nearly optimal finite spring-dashpot representation to approximate the continuous fractional stress-strain relation through a uniformly accurate curve fitting to the power function $(t/t_0)^{-2\gamma}$ [@ParkSchapery1999; @ZhuCarcioneHarris2013]. An improved error bound for the new SOE approximation is presented, ensuring numerical accuracy within the specified wave propagation time and range of wavelengths, and firmly establishing the ability of rheological models to approximate the frequency-independent Q behavior. Compared with other approaches, such as the Yuan-Agrawal method [@YuanAgrawal2002; @LuHanyga2005] or the diffusive approximation [@Diethelm2008; @BlancChiavassaLombard2014], the new SOE approximation avoids the augmentation of projection errors [@XiongGuo2022]. 
This is because it possesses greater flexibility in stiffnesses and viscosities to adapt to the viscoelastic behaviors of the given material data, thereby addressing some of the criticisms noted in [@SchmidtGaul2006]. This represents a significant advancement towards the practical use of the time-fractional wave equation in real 3D geological applications. The rest of this paper is organized as follows. Section 2 provides a brief introduction to the constant-Q viscoelastic and viscoacoustic wave equations. In [3](#sec.scheme){reference-type="ref" reference="sec.scheme"}, a new SOE approximation to the time-fractional constitutive relation is developed, along with an exponential operator splitting scheme. An improved SOE error bound is proved to rigorously establish the ability of rheological models to mimic the power attenuation law. Section 4 showcases numerical experiments on the viscoacoustic wave equation in 3D homogeneous media and the variable-order P- and S- viscoelastic wave equations in 3D inhomogeneous media to validate the convergence and accuracy of the new SOE approximation. Finally, conclusions and discussions are presented in [5](#sec.conclusion){reference-type="ref" reference="sec.conclusion"}. # The constant-Q wave equation {#sec.backgroud} In this section, we briefly review the theory of constant-Q viscoelastic wave equation, which is completely characterized by the group velocities $c_P$, $c_S$ and the quality factors $Q_P$, $Q_S$ for longitudinal and shear waves (P- and S-waves), respectively. The dynamics of constant-Q wave propagation is governed by three sets of equations. 
First, the conservation of linear momentum leads to $$\label{conservation_momentum} \rho(\bm{x}) \frac{\partial^2}{\partial t^2} u_i(\bm{x}, t) = \sum_{j=1}^3 \frac{\partial}{\partial x_j} \sigma_{ij}(\bm{x}, t) + f_i(\bm{x}, t), \quad i = 1, 2, 3, \; \bm{x}\in \mathbb{R}^3,$$ where $\sigma_{ij}$ are the components of the stress tensor, $u_i$ are the components of the displacement vector, $\rho$ is the mass density and $f_i$ are the components of the body forces per unit volume (source term). Second, the strain tensor $\varepsilon_{ij}$ is related to the displacement vector via the formula $$\label{defintion_strain} \varepsilon_{ij}(\bm{x}, t) = \frac{1}{2} \left( \frac{\partial}{\partial x_i} u_j(\bm{x}, t) + \frac{\partial}{\partial x_j} u_i(\bm{x}, t) \right), \quad i, j = 1, \dots, 3.$$ Third, the constitutive equation for the viscoelastic media [@Carcione2009] is $$\label{stress_strain_relation} \begin{aligned} \sigma_{ij}(\bm{x}, t) &= \frac{M_P(\bm{x})}{\omega_0^{2\gamma_P(\bm{x})}} { _{C}D_t^{2\gamma_P(\bm{x})}}\left[ \sum_{k=1}^3\varepsilon_{kk}(\bm{x}, t) \delta_{ij}\right]\\ &+ \frac{2 M_S(\bm{x})}{\omega_0^{2\gamma_S(\bm{x})}} {_{C}D_t^{2\gamma_S(\bm{x})}} \left[ \varepsilon_{ij}(\bm{x}, t) - \sum_{k=1}^3\varepsilon_{kk}(\bm{x}, t) \delta_{ij} \right], \quad i, j=1,\dots,3. \end{aligned}$$ Here $\omega_0 =t_0^{-1}$ is the reference frequency, $\gamma_P(\bm{x}) = \pi^{-1}\arctan Q_P^{-1}(\bm{x})$, $\gamma_S(\bm{x}) = \pi^{-1}\arctan Q_S^{-1}(\bm{x})$. $M_P(\bm{x}) = \rho(\bm{x}) c_P^2(\bm{x}) \cos^2(\pi \gamma_P(\bm{x})/2)$, $M_S(\bm{x}) = \rho(\bm{x}) c_S^2(\bm{x}) \cos^2(\pi \gamma_S(\bm{x})/2)$ are the bulk moduli for P- and S-wave, respectively. 
$_{C}D_t^{\beta}$ is the Caputo fractional derivative operator defined as $$\label{def.extended_Caputo_time_fractional_operator} {_C}D_t^{\beta} \varepsilon(\bm{x}, t) = \frac{1}{\Gamma(1 - \beta)} \int_0^t (t - \tau)^{-\beta} \left[\frac{\partial}{\partial \tau} \varepsilon(\bm{x}, \tau)\right] \mathrm{d}\tau, \quad 0 < \beta < 1.$$ The variable-order Caputo fractional derivative operators ${_C}D_t^{2\gamma_P(\bm{x})}$ and ${_C}D_t^{2\gamma_S(\bm{x})}$ characterize the anisotropic attenuation in inhomogeneous materials, e.g., the multi-layer viscoelastic media. When $c_P=c_S$ and $\gamma_P=\gamma_S$, [\[conservation_momentum\]](#conservation_momentum){reference-type="ref" reference="conservation_momentum"} --[\[stress_strain_relation\]](#stress_strain_relation){reference-type="ref" reference="stress_strain_relation"} can be reduced via the Helmholtz decomposition and the P-wave propagation can be described by a scalar viscoacoustic wave equation [@ChapmanHobroRobertsson2014] $$\rho(\bm{x}) \frac{\partial^2}{\partial t^2} u(\bm{x}, t) = \nabla \cdot (\rho(\bm{x}) C_P(\bm{x}) {_C D_t^{2\gamma_P(\bm{x})}} \nabla u(\bm{x}, t)) + f(\bm{x}, t) \label{pwaveequation}$$ with $C_P(\bm{x}) = c_P^2(\bm{x})\cos^2(\pi \gamma_P(\bm{x})/2) \omega_0^{-2\gamma_P(\bm{x})}$. 
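As a concrete check of the definition above, the following sketch (an illustrative midpoint-rule quadrature, not the scheme proposed in this paper) evaluates the Caputo derivative of the linear strain history $\varepsilon(\tau) = \tau$, whose closed form is $t^{1-\beta}/\Gamma(2-\beta)$:

```python
import math

# Midpoint-rule quadrature of the Caputo derivative (0 < beta < 1) applied to
# eps(tau) = tau, so d eps / d tau = 1:
#   _C D_t^beta eps = (1/Gamma(1-beta)) * int_0^t (t - tau)^(-beta) d tau.
def caputo_linear(t, beta, n=50000):
    h = t / n
    # Midpoint samples avoid the integrable singularity at tau = t.
    s = sum((t - (j + 0.5) * h) ** (-beta) for j in range(n)) * h
    return s / math.gamma(1.0 - beta)

beta, t = 0.06, 2.0  # a small exponent, as in constant-Q media
exact = t ** (1.0 - beta) / math.gamma(2.0 - beta)
assert abs(caputo_linear(t, beta) - exact) < 1e-4
```

The direct quadrature illustrates why the operator is expensive in practice: every evaluation revisits the full history of the strain, which is exactly the cost the SOE compression is designed to remove.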
In the case of $\rho(\bm{x}) \equiv \rho_0$, $C_P(\bm{x}) \equiv C_P$, $\gamma_P(\bm{x}) \equiv \gamma_P$ and $f = 0$, [\[pwaveequation\]](#pwaveequation){reference-type="ref" reference="pwaveequation"} becomes $$_C D_t^{2-2\gamma_P} u(\bm{x}, t) = C_P \Delta u(\bm{x}, t), \label{constantpwave}$$ where the Caputo derivative (with order $1<\beta <2$) is defined as $$\label{caputodef2} {_C}D_t^{\beta} u(\bm{x}, t) = \frac{1}{\Gamma(2 - \beta)} \int_0^t (t - \tau)^{1-\beta} \left[\frac{\partial^2}{\partial \tau^2} u(\bm{x}, \tau)\right] \mathrm{d}\tau, \quad 1 < \beta < 2.$$ When [\[constantpwave\]](#constantpwave){reference-type="ref" reference="constantpwave"} is combined with the initial data $u(\bm{x}, 0^+) = u_0(\bm{x})$ and $\frac{\partial }{\partial t} u(\bm{x}, 0^+) = 0$, the solution to this initial value problem (IVP) can be expressed as follows: $$\label{exact_solution} u(\bm{x}, t) = \int_{\mathbb{R}^3} G^{(3)}(\bm{y}, t; 1-\gamma_P) u_0(\bm{x}- \bm{y}) \mathrm{d}\bm{y},$$ where $G^{(3)}(\bm{x}, t; 1-\gamma_P)$ is the Green's function of the given IVP (see [\[Green\]](#Green){reference-type="ref" reference="Green"} for its explicit expression in terms of the derivatives of Mainardi functions). The Green's function $G^{(3)}$ with different quality factors $Q_P$ (or equivalently, $\gamma_P$) is visualized in [\[Green_function\]](#Green_function){reference-type="ref" reference="Green_function"}. We observe that Green's functions for time-fractional wave equation are oscillatory, bounded from below and above, and approach the derivative of the Heaviside function as $\gamma_P \to 0$ ($Q_P \to \infty$). Moreover, the power function relaxation in the fractional stress-strain relation not only leads to amplitude loss but also contributes to phase dispersion. It may even change the first arrival time of the seismic signals [@CarcioneKosloffKosloff1988]. 
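For the constant-coefficient equation [\[constantpwave\]](#constantpwave){reference-type="ref" reference="constantpwave"}, each spatial Fourier mode satisfies ${_C}D_t^{\beta}\hat{u} = -C_P |\bm{k}|^2 \hat{u}$ with $\beta = 2 - 2\gamma_P$, whose solution under the stated initial data is $\hat{u}(\bm{k}, t) = \hat{u}_0(\bm{k})\, E_{\beta}(-C_P |\bm{k}|^2 t^{\beta})$ in terms of the Mittag-Leffler function; this is a standard fact about Caputo fractional ODEs rather than a statement made in the text. A rough sketch with a truncated Taylor series (adequate for moderate arguments, not a production evaluator) checks the lossless limit $\gamma_P \to 0$, where $E_2(-x^2) = \cos x$ recovers classical, non-dispersive propagation:

```python
import math

def mittag_leffler(beta, z, terms=120):
    """E_beta(z) = sum_{k>=0} z**k / Gamma(beta*k + 1), truncated Taylor
    series; fine for moderate |z|, inaccurate for large |z|."""
    total = 0.0
    for k in range(terms):
        # exp(-lgamma(.)) underflows harmlessly to 0.0 for large k
        total += z**k * math.exp(-math.lgamma(beta * k + 1.0))
    return total

# lossless limit beta = 2: E_2(-x^2) = cos(x), the classical wave mode
x = 2.0
assert abs(mittag_leffler(2.0, -x * x) - math.cos(x)) < 1e-12
# beta = 1 reduces to the exponential: E_1(z) = exp(z)
assert abs(mittag_leffler(1.0, -1.0) - math.exp(-1.0)) < 1e-12
```

For $0 < \gamma_P < 1/2$ the mode instead decays and disperses, which is the amplitude loss and phase dispersion described above.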
The smoothness of the wavefield at the wave fronts is an important consequence of singular memory, implying that the signals build up with some delay after the passage of the wave front. Therefore, a correction of the travel time is necessary in view of the dependence of the signal delay on the propagation distance [@HanyaSerdynska2003]. # Short-memory scheme for constant-Q wave equation {#sec.scheme} The major difficulty in numerically solving the time-fractional viscoelastic wave equation, as pointed out in [@Carcione2009; @Zhu2017], lies in the necessity of storing the whole history. A potential way to alleviate this problem is to utilize an alternative form of the fractional Caputo derivative and the reciprocal relation $t^{-2\gamma} \to s^{2\gamma-1}$, and then truncate the time-reciprocal variable $s$ instead. For $\gamma \ll 1$, SOE approximations can dramatically reduce the number of memory variables since $s^{2\gamma-1}$ is strongly localized. In the following, we propose a new SOE approximation and demonstrate its equivalence to a generalized Maxwell body (GMB). ## Exponential splitting scheme {#sec.differential_form} To simplify the notation, we will use the wave equation in one dimension to illustrate the key ideas. Introduce the velocity $v(x, t) = \frac{\partial }{\partial t} u(x, t)$.
In one dimension, [\[conservation_momentum\]](#conservation_momentum){reference-type="ref" reference="conservation_momentum"}--[\[stress_strain_relation\]](#stress_strain_relation){reference-type="ref" reference="stress_strain_relation"} are reduced to $$\label{1d_wave} \left\{ \begin{split} &\frac{\partial }{\partial t} v(x, t) = \frac{1}{\rho(x)} \frac{\partial }{\partial x} \sigma(x, t) + \frac{1}{\rho(x)} f(x), \\ &\frac{\partial }{\partial t} \sigma(x, t) = \rho(x) C_P(x) {_C}D_t^{2\gamma_P(x)} \frac{\partial }{\partial x} v(x, t), \end{split} \right.$$ where we have used the following relationship between the strain and the velocity $$\label{strainvelocity} \frac{\partial}{\partial t} \varepsilon(x, t) = \frac{\partial }{\partial x} v(x, t).$$ Note that the power function in the Caputo fractional derivative has the integral representation $$\label{powerfunctionrep} \frac{1}{t^\beta} = \frac{1}{\Gamma(\beta)}\int_0^\infty e^{-t s}s^{\beta-1}\mathrm{d}s.$$ Combining [\[caputodef2\]](#caputodef2){reference-type="ref" reference="caputodef2"} and [\[1d_wave\]](#1d_wave){reference-type="ref" reference="1d_wave"}--[\[powerfunctionrep\]](#powerfunctionrep){reference-type="ref" reference="powerfunctionrep"}, one has the stress-strain relation $$\label{extension_stress_strain} \begin{aligned} \sigma(x, t) &= \frac{ \rho(x) C_P(x)}{\Gamma(1-2\gamma_P(x))\Gamma(2\gamma_P(x))} \int_0^{\infty} s^{2\gamma_P(x) - 1} \left\{\int_0^t e^{-s (t - \tau)} \left[\frac{\partial}{\partial \tau} \varepsilon(x, \tau)\right] \mathrm{d}\tau\right\} \mathrm{d}s\\ &= \frac{ \rho(x) C_P(x)}{\Gamma(1-2\gamma_P(x))\Gamma(2\gamma_P(x))} \int_0^{\infty} s^{2\gamma_P(x) - 1} \Phi(x,s,t)\mathrm{d}s, \end{aligned}$$ where the auxiliary function $\Phi$ is defined by the formula $$\begin{split} &\Phi(x, s, t) = \int_{0}^{t} e^{- s(t - \tau) } \left[\frac{\partial}{\partial \tau} \varepsilon(x, \tau)\right] \mathrm{d}\tau = \int_{0}^{t} e^{- s(t - \tau) } \left[\frac{\partial}{\partial x} v(x, 
\tau)\right] \mathrm{d}\tau. \end{split}$$ It is easy to see that the auxiliary function $\Phi$ satisfies the differential equation $$\label{stiff_equation} \frac{\partial }{\partial t} \Phi(x, s, t) = -s \Phi(x, s, t) + \frac{\partial }{\partial x} v(x, t)$$ with the initial condition $$\label{initialcondition} \Phi(x, s, t=0^+) = \frac{\Gamma(1-2\gamma_P(x))}{ \rho(x) C_P(x)} e^{-s} \sigma(x, t=0^+).$$ Here, we would like to remark that the initial condition [\[initialcondition\]](#initialcondition){reference-type="ref" reference="initialcondition"} matches the asymptotic behaviors of the power function (see [@XiongGuo2022] for details). Suppose now that the power function admits an efficient SOE approximation $$\label{powersoeappr} \frac{1}{t^{2\gamma_P(x)}} \approx \sum_{j=1}^{N_{\rm exp}}w_j e^{-s_jt}.$$ We would like to remark that the nodes $s_j$ and weights $w_j$ in the SOE approximation [\[powersoeappr\]](#powersoeappr){reference-type="ref" reference="powersoeappr"} may depend on the spatial variable $x$ since $\gamma_P$ is a function of $x$. Analyzing the case where $\gamma_P(x)$ is an arbitrary function of $x$ can be challenging. Fortunately, in many geophysical models, $\gamma_P(x)$ is a piecewise constant function, which corresponds to layered media. See [4.3](#sec.3d.layeredmedia){reference-type="ref" reference="sec.3d.layeredmedia"} for a numerical example of this case. 
Combining [\[powerfunctionrep\]](#powerfunctionrep){reference-type="ref" reference="powerfunctionrep"}, [\[extension_stress_strain\]](#extension_stress_strain){reference-type="ref" reference="extension_stress_strain"}, and [\[powersoeappr\]](#powersoeappr){reference-type="ref" reference="powersoeappr"}, we obtain $$\label{fractional_stress_strain} \begin{aligned} \sigma(x, t)&\approx \frac{\rho(x) C_P(x)}{\Gamma(1-2\gamma_P(x))} \sum_{j=1}^{N_{\textup{exp}}} w_j \underbrace{\int_0^t e^{-s_j \tau}\frac{\partial}{\partial t} \varepsilon(x, t - \tau) \mathrm{d}\tau}_{\textup{memory variables}}\\ &= \frac{\rho(x) C_P(x)}{\Gamma(1-2\gamma_P(x))} \sum_{j=1}^{N_{\textup{exp}}} w_j \Phi(x, s_j, t). \end{aligned}$$ The fractional derivative is thus removed from [\[1d_wave\]](#1d_wave){reference-type="ref" reference="1d_wave"}, which reduces to the following system of differential equations $$\label{1d_differential_form} \left\{ \begin{split} &\frac{\partial }{\partial t} v(x, t) = \frac{1}{\rho(x)} \frac{\partial }{\partial x} \sigma(x, t) + \frac{1}{\rho(x)} f(x), \\ & \sigma(x, t) = \frac{\rho(x) C_P(x)}{\Gamma(1-2\gamma_P(x))} \sum_{j=1}^{N_{\textup{exp}}} w_j \Phi(x, s_j, t), \\ & \frac{\partial }{\partial t} \Phi(x, s_j, t) = -s_j \Phi(x, s_j, t) + \frac{\partial }{\partial x} v(x, t), \quad j = 1, \dots, N_{\textup{exp}}. \end{split} \right.$$ It seems that the stiff terms (i.e., the terms with large values of $s_j$) in [\[1d_differential_form\]](#1d_differential_form){reference-type="ref" reference="1d_differential_form"} may impose a severe stability constraint on the time step size $\Delta t$ [@Diethelm2008]. However, they can be integrated stably by an exponential operator splitting scheme, where the velocity and stress components are updated alternately [@XiongGuo2022]. Indeed, for a small interval $[t_n, t_{n+1}]$ with $t_{n+1} - t_n = \Delta t$, the stress components can be assumed to be invariant when updating the velocity components.
In the meantime, the velocity components can be assumed to be invariant when updating the stress components, so that the exact solution of the relaxation dynamics [\[dynamics_phi_trace\]](#dynamics_phi_trace){reference-type="eqref" reference="dynamics_phi_trace"} can be utilized. In this way, the constraint on the time step is removed, since the exact exponential flows are exploited. In practice, we use the following second-order exponential Strang splitting scheme: $$\label{1d_exponential_splitting} \left\{ \begin{split} &v(x, t_{n+\frac{1}{2}}) = v(x, t_{n}) + \frac{\Delta t}{2} \frac{1}{\rho(x)} \frac{\partial}{\partial x} \sigma(x, t_n) + \int_{t_n}^{t_n +\frac{\Delta t}{2}} \frac{f(x, s)}{\rho(x)} \mathrm{d}s, \\ &\Phi(x, s_j , t_{n+1}) = e^{- s_j \Delta t} \Phi(x, s_j , t_{n}) + \frac{1}{s_j}(1 - e^{-s_j \Delta t}) \frac{\partial}{\partial x}v(x, t_{n+\frac{1}{2}}), \quad j = 1, \dots, N_{\textup{exp}},\\ & \sigma(x, t_{n+1}) = \frac{\rho(x) C_P(x)}{\Gamma(1-2\gamma_P(x))} \sum_{j=1}^{N_{\textup{exp}}} w_j \Phi(x, s_j, t_{n+1}), \\ &v(x, t_{n+1}) = v(x, t_{n+\frac{1}{2}}) + \frac{\Delta t}{2}\frac{1}{ \rho(x)} \frac{\partial}{\partial x} \sigma(x, t_{n+1}) + \int_{t_{n+\frac{1}{2}}}^{t_{n+\frac{1}{2}} +\frac{\Delta t}{2}} \frac{f(x, s)}{\rho(x)} \mathrm{d}s. \end{split} \right.$$ ## Nearly optimal SOE approximations for the power function with small exponent {#sec.setting_new_SOE} [\[sec.SOE\]]{#sec.SOE label="sec.SOE"} We now construct an efficient SOE approximation of the power function $t^{-\beta}$ when the exponent $\beta$ is small (and possibly close to zero).
That is, we try to find nodes $s_j$ and weights $w_j$ ($j=1,\ldots,N_{\textup{exp}}$) such that $$\label{soeappr} \left|{t^{-\beta}}-\sum_{j=1}^{N_{\textup{exp}}} w_j e^{-s_j t}\right| \leq \varepsilon {t^{-\beta}}, \quad t\in[\delta, T],$$ for exponent $0<\beta\le 1$ within a prescribed (relative) error tolerance $\varepsilon$, a small gap $\delta$ and final time $T$ [@BeylkinMonzon2010; @JiangZhangZhangZhang2017; @JRLi2010]. In general, we can set $\delta = \Delta t$. When the exponent $\beta$ is very small, the power function becomes flatter and approaches a constant function away from the origin. Intuitively, one expects that the SOE approximation would need fewer exponentials as the exponent decreases. However, existing methods for constructing the SOE approximation of the power function suffer from stagnation when $\beta$ approaches zero, because their starting integral representation has a low-exponent breakdown. As a result, many practitioners *incorrectly* conclude that $N_{\textup{exp}}$ must grow as $\beta\rightarrow 0$, which contradicts the observation that the power function $t^{-\beta}$ becomes closer and closer to a constant function as $\beta\rightarrow 0$. This incorrect conclusion stems partly from the $1/\beta$ and $\log \beta$ factors in Theorem 5 of [@BeylkinMonzon2010], which suggest that $N_{\textup{exp}}$ grows rapidly as $\beta\rightarrow 0$. In [@JiangZhangZhangZhang2017; @JRLi2010], the construction of the SOE approximations for the general power function starts from the integral representation [\[powerfunctionrep\]](#powerfunctionrep){reference-type="eqref" reference="powerfunctionrep"}.
It then truncates the infinite integral in [\[powerfunctionrep\]](#powerfunctionrep){reference-type="eqref" reference="powerfunctionrep"} according to $\delta$ and $\varepsilon$, divides the resulting finite interval into dyadic subintervals towards the origin, and approximates the integral on each subinterval by the Gauss-Legendre quadrature of a proper order. In [@BeylkinMonzon2010], a further change of variable is used to obtain another integral representation of the power function $$\label{intrep2} \frac{1}{t^\beta} = \frac{1}{\Gamma(\beta)}\int_{-\infty}^\infty e^{-te^x+\beta x}\mathrm{d}x.$$ The infinite integral in [\[intrep2\]](#intrep2){reference-type="eqref" reference="intrep2"} is then discretized via the truncated trapezoidal rule. The construction in [@JiangZhangZhangZhang2017] tries to bound the absolute error of the SOE approximations in order to prove the stability of the overall numerical scheme for solving fractional PDEs. In practice, one observes that SOE approximations with controlled relative error are sufficient, even though the bound on the relative error is actually weaker than that on the absolute error for $t\in [\delta, 1]$. The analysis in [@BeylkinMonzon2010] provides an estimate on $N_{\textup{exp}}$ based on the relative error. However, because of the additional change of variable introduced in [\[intrep2\]](#intrep2){reference-type="eqref" reference="intrep2"}, the truncated trapezoidal rule used in [@BeylkinMonzon2010] leads to many small exponentials, and the resulting estimate on $N_{\textup{exp}}$ blows up as $\beta \rightarrow 0$. We would like to remark here that in [@BeylkinMonzon2010] a modified Prony's method is applied to reduce the number of small exponentials. Thus, the algorithms in [@BeylkinMonzon2010] can be used to construct SOE approximations for the power function with very small $\beta$, and the number of exponentials does not blow up as $\beta\rightarrow 0$, although the result is slightly inefficient.
When the exponent $\beta$ is very close to zero, the integrand in [\[powerfunctionrep\]](#powerfunctionrep){reference-type="eqref" reference="powerfunctionrep"} is nearly singular due to the factor $s^{\beta-1}$. Simultaneously, $\Gamma(\beta)$ tends to infinity as $\beta\rightarrow 0$. In fact, the estimate on $N_{\textup{exp}}$ in [@BeylkinMonzon2010 Eq. (22)] contains the factor ${1}/{\beta}$, which blows up as $\beta\rightarrow 0$. Nevertheless, the algorithm in [@BeylkinMonzon2010] can still be used to obtain efficient SOE approximations for the power function in this case. The reason is that Prony's method is able to reduce the number of small exponentials significantly, effectively removing the factor ${1}/{\beta}$ in the complexity estimate of $N_{\textup{exp}}$. ### Improved bound on $N_{\textup{exp}}$ for $\beta > 0$ We now refine the analysis in [@JiangZhangZhangZhang2017] to give an improved estimate on $N_{\textup{exp}}$ when one is concerned with the relative error. By the simple scaling $\tilde \delta = \delta/T$, we may consider the same approximation problem as in [@BeylkinMonzon2010]: $$\label{soeappr2} \Big |\frac{1}{t^{\beta}}-\sum_{j=1}^{N_{\textup{exp}}} w_j e^{-s_j t}\Big | \leq \varepsilon\frac{1}{t^{\beta}}, \quad t\in[\tilde \delta, 1].$$ The original approximation problem [\[soeappr\]](#soeappr){reference-type="eqref" reference="soeappr"} is identical to [\[soeappr2\]](#soeappr2){reference-type="eqref" reference="soeappr2"} with $\tilde \delta=\delta/T$.
Using the property $\Gamma(1+x) = x\Gamma(x)$, we have $$\label{intrep4} \frac{1}{t^\beta} = \frac{1}{\Gamma(\beta)}\int_0^\infty e^{-t s}s^{\beta-1}\mathrm{d}s =\frac{\beta}{\Gamma(1+\beta)}\int_0^\infty e^{-t s}s^{\beta-1}\mathrm{d}s,$$ We now decompose the integral on the right side of [\[intrep4\]](#intrep4){reference-type="ref" reference="intrep4"} into three parts: $$\int_0^\infty e^{-t s}s^{\beta-1}\mathrm{d}s = \underbrace{\int_{0}^{1} e^{-ts}s^{\beta-1} \mathrm{d}s}_{\textup{weakly singular}} + \underbrace{\sum_{j = 0}^{N-1}\int_{2^j}^{2^{j+1}} e^{-ts}s^{\beta-1} \mathrm{d}s}_{\textup{non-singular}} + \underbrace{\int_{2^N}^{\infty} e^{-t s} s^{\beta-1}\mathrm{d}s}_{\textup{truncation}}.$$ The weakly singular part can be approximated by the Gauss-Jacobi quadrature, and the non-singular part on $[2^j,2^{j+1}]$ ($j=0,\ldots,N-1$) can be approximated by the Gauss-Legendre quadrature. The truncation term can be ignored since it becomes sufficiently small when $N$ is large. [\[soelem1\]]{#soelem1 label="soelem1"} Suppose that $p_0 \tilde \delta >1$. Then for $t\geq \tilde \delta >0$, $$\frac{t^{\beta}}{\Gamma(\beta)}\int_{p_0}^\infty e^{-ts}s^{\beta-1} \mathrm{d}s \leq \frac{\beta}{\Gamma(1+\beta)}e^{- \tilde \delta p_0}.$$ *Proof.* It is readily verified that $$\begin{aligned} \frac{t^\beta}{\Gamma(\beta)}\int_{p_0}^\infty e^{-ts}s^{\beta-1} \mathrm{d}s &= \frac{1}{\Gamma(\beta)} e^{-tp_0}\int_0^\infty e^{-x}(x+tp_0)^{\beta-1} \mathrm{d}x \leq e^{-\tilde \delta p_0}\frac{\beta}{\Gamma(1+\beta)}. \end{aligned}$$ ◻ [\[soelem2\]]{#soelem2 label="soelem2"} For a dyadic interval $[a, b] = [2^j, 2^{j+1}]$, let $s_1,\dots, s_n$ and $w_1,\dots,w_n$ be the nodes and weights for $n$-point Gauss-Legendre quadrature on the interval. 
Then for $\beta > 0$ and $n\ge 8$, $$\frac{t^\beta}{\Gamma(\beta)} \left|\int_a^b e^{-ts}s^{\beta-1}ds-\sum_{k=1}^n w_ks_k^{\beta-1}e^{-s_k t}\right| <\frac{\beta}{\Gamma(1+\beta)} 2^{ \frac{5}{2}}\pi (2n)^\beta \left(\frac{1}{2}\right)^{2n}.$$ *Proof.* By Eq. (2.18) in [@JiangZhangZhangZhang2017], we have $$\frac{t^\beta}{\Gamma(\beta)} \left|\int_a^b e^{-ts}s^{\beta-1}ds-\sum_{k=1}^n w_ks_k^{\beta-1}e^{-s_k t}\right| \leq \frac{1}{\Gamma(\beta)} 2^{\frac{5}{2}}\pi (at)^\beta e^{-at}\left(\frac{eat}{8n}+\frac{1}{4}\right)^{2n}.$$ Consider the function $$f(x)=x^\beta e^{-x}\left(\frac{ex}{8n}+\frac{1}{4}\right)^{2n}.$$ Straightforward calculation shows that for $x>0$, $f$ achieves its maximum at $$x_\ast=\frac{\beta}{2}+\left(1-\frac{1}{e}\right)n +\sqrt{\left(\frac{\beta}{2}+\left(1-\frac{1}{e}\right)n\right)^2+\frac{2n}{e}\beta}.$$ It is easy to see that $x_\ast>2\left(1-\frac{1}{e}\right)n$ and $$\begin{aligned} x_\ast&<\beta+2\left(1-\frac{1}{e}\right)n+\frac{1}{2}\frac{2n}{e}\beta \frac{1}{\frac{\beta}{2}+\left(1-\frac{1}{e}\right)n}\\ &<\beta+2\left(1-\frac{1}{e}\right)n+\frac{\beta}{e-1} <4+2\left(1-\frac{1}{e}\right)n. \end{aligned}$$ Thus it yields that $$\begin{aligned} \max_{x>0} f(x) &= f(x_\ast)=x_\ast^\beta e^{-x_\ast}\left(\frac{ex_\ast}{8n}+\frac{1}{4}\right)^{2n}\\ &<\left(4+2\left(1-\frac{1}{e}\right)n\right)^\beta e^{-2\left(1-\frac{1}{e}\right)n} \left(\frac{e}{8n}\left(4+2\left(1-\frac{1}{e}\right)n\right) +\frac{1}{4}\right)^{2n}\\ &=\left(4+2\left(1-\frac{1}{e}\right)n\right)^\beta \left(\frac{e^{1/e}}{2n} +\frac{e^{1/e}}{4}\right)^{2n} <(2n)^\beta\left(\frac{1}{2}\right)^{2n}, \end{aligned}$$ where the last inequality follows from $4+2\left(1-\frac{1}{e}\right)n<2n$ and $\frac{e^{1/e}}{2n}+\frac{e^{1/e}}{4}<\frac{1}{2}$ for $n\ge 8$. And the lemma follows. ◻ For the weakly singular part, the following bound slightly improves the estimate in [@JiangZhangZhangZhang2017]. 
[\[soelem3\]]{#soelem3 label="soelem3"} Let $s_1,\dots, s_n$ and $w_1,\dots,w_n$ ($n\geq 2$) be the nodes and weights for $n$-point Gauss-Jacobi quadrature with the weight function $s^{\beta-1}$ on the interval $[0,1]$. Then for $t\in [\tilde \delta,1]$ and $\beta > 0$, $$\label{error_singular} \frac{t^\beta}{\Gamma(\beta)}\left|\int_0^1 e^{-ts}s^{\beta-1}ds-\sum_{k=1}^n w_ke^{-s_k t}\right| < \frac{\beta}{\Gamma(1+\beta)} \frac{2n+1}{2n+\beta} \sqrt{\frac{\pi}{n}}\left(\frac{e}{8n}\right)^{2n}.$$ *Proof.* It starts from the error estimate of the Gauss-Jacobi quadrature, $$\left|\int_0^a e^{-s t} s^{\beta-1} \mathrm{d}s - \sum_{k=1}^{n} w_k e^{-s_k t} \right| \le \frac{a^{2n+\beta}}{2n+\beta} c_{n, \beta}\max_{s\in(0, a)} |D_s^{2n} (e^{-s t})|, \quad c_{n, \beta} = \frac{(n!)^2}{(2n)!} \left[\frac{\Gamma(n+\beta)}{\Gamma(2n+\beta)}\right]^2,$$ where $\max_{s\in[0, 1]} |D_s^{2n} (e^{-s t})| = \max_{s\in[0, 1]} |t^{2n} e^{-s t}| \le 1$. Now we introduce a strictly monotonically increasing sequence $\{d_n\}$, $d_n < d_{n+1}$, $$\label{def_dn} d_n= \frac{2^{4n+1}(n!)^4}{(2n)!(2n+1)!}, \quad \lim_{n \to \infty} d_n = \pi.$$ From [@Kambo1970], we have $2.66 < d_n \le \pi$ ($n > 1$). Thus, $$c_{n, \beta} < \frac{(n!)^2}{(2n)!} \left[ \frac{\Gamma(n)}{\Gamma(2n)}\right]^2 = \frac{[n!(n-1)!]^2}{(2n)! \left[(2n-1)!\right]^2} = \frac{d_n (4n^2 + 2n)}{ 2^{4n+1} n^2 (2n-1)!} \le \frac{\pi (4n+2)}{ 2^{4n}(2n)!},$$ where the first inequality utilizes the fact that $$\frac{\Gamma(n + \beta)}{\Gamma(2n+\beta)} = \frac{(n -1 + \beta) \dots (1+\beta) \Gamma(1 + \beta)}{(2n-1+\beta) \dots (1+ \beta) \Gamma(1 + \beta)} = \frac{1}{(2n-1+\beta)\dots(n+\beta)} < \frac{\Gamma(n)}{\Gamma(2n)}.$$ The application of Stirling's approximation $(2n)! > \sqrt{2\pi}{(2n)^{2n+\frac{1}{2}}}e^{-2n}$ leads to $$\left|\int_0^1 e^{-s t} s^{\beta-1} \mathrm{d}s - \sum_{k=1}^{n} w_k e^{-s_k t} \right| < \frac{\pi(4n+2)}{2n + \beta} \frac{1}{(2n)!} \left(\frac{1}{4}\right)^{2n} < \sqrt{\frac{\pi}{n}} \frac{2n+1}{2n+\beta}\left(\frac{e}{8}\right)^{2n} \left(\frac{1}{n}\right)^{2n},$$ which implies the estimate [\[error_singular\]](#error_singular){reference-type="eqref" reference="error_singular"}. ◻ We are now in a position to combine the above three lemmas to give an efficient SOE approximation for $t^{-\beta}$ on $[\tilde \delta, 1]$ for $\beta > 0$ as follows. [\[soethm\]]{#soethm label="soethm"} Let $0<\tilde \delta\leq t \leq 1$, let $\varepsilon > 0$ be the desired precision, let $n_{o}=O(\log\frac{\beta}{\varepsilon})$, and let $N=O(\log\log\frac{\beta}{\varepsilon}+\log\frac{1}{\tilde \delta})$. Furthermore, let ${s_{o,1}, \dots, s_{o,n_{o}}}$ and ${w_{o,1}, \dots, w_{o,n_{o}}}$ be the nodes and weights for the $n_{o}$-point Gauss-Jacobi quadrature on the interval $[0,1]$, and let ${s_{j,1}, \dots, s_{j,n_l}}$ and ${w_{j,1}, \dots, w_{j,n_l}}$ be the nodes and weights for $n_l$-point Gauss-Legendre quadrature on the intervals $[2^j,2^{j+1}]$, $j=0,\dots,N-1$, where $n_l= O\left(\log\frac{\beta}{\varepsilon}+\log\log\frac{1}{\tilde \delta}\right)$.
Then for $t\in [\tilde \delta,1]$ and $\beta > 0$, $$\left|\frac{1}{t^\beta} - \frac{\beta}{\Gamma(1+\beta)}\left(\sum_{k=1}^{n_{o}} e^{-s_{o,k}t}w_{o,k} +\sum_{j=0}^{N-1}\sum_{k=1}^{n_l} e^{-s_{j,k}t}s_{j,k}^{\beta-1}w_{j,k} \right) \right| \leq \varepsilon\frac{1}{t^\beta}.$$ *Proof.* By [\[soelem1\]](#soelem1){reference-type="ref" reference="soelem1"}, we may choose $$p_0 =O\left(\frac{1}{\tilde \delta}\log\frac{\beta}{\varepsilon}\right), \quad N=\lceil \log_2 p_0\rceil = O(\log\log\frac{\beta}{\varepsilon}+ \log\frac{1}{\tilde \delta})$$ so that $$\begin{aligned} \left|\frac{1}{t^\beta} -\frac{\beta}{\Gamma(1+\beta)}\int_0^{2^N} e^{-ts}s^{\beta-1}\mathrm{d}s\right| & =\frac{\beta}{\Gamma(1+\beta)}\int_{2^N}^\infty e^{-ts}s^{\beta-1}\mathrm{d}s \le \varepsilon\frac{1}{t^{\beta}}, \quad t\in[\tilde \delta, 1], \end{aligned}$$ where we have used the fact that $\Gamma(x)>0.88$ for any $x>0$. We now split the integration range $[0,2^N]$ into $[0,1]$ and $[2^j,2^{j+1}]$, $j=0,\cdots,N-1$, and approximate the integral on $[0,1]$ by the $n_o$-point Gauss-Jacobi quadrature and the other $N$ integrals on dyadic intervals by the $n_l$-point Gauss-Legendre quadrature. This leads to $$\begin{aligned} & \left| \frac{\beta}{\Gamma(1+\beta)} \left(\sum_{k=1}^{n_{o}} e^{-s_{o,k}t}w_{o,k} +\sum_{j=0}^{N-1}\sum_{k=1}^{n_l} e^{-s_{j,k}t}s_{j,k}^{\beta-1}w_{j,k}\right) -\frac{\beta}{\Gamma(1+\beta)}\int_0^{2^N} e^{-ts}s^{\beta-1}\mathrm{d}s\right|\\ & \leq \frac{1}{t^\beta}\frac{\beta}{\Gamma(1+\beta)} \left( N 2^{\frac{5}{2}} \pi (2n_l)^{\beta} \left(\frac{1}{2}\right)^{2n_l} + \sqrt{\frac{\pi}{n_{o}}} \frac{2n_o+1}{2n_{o}+\beta} \left(\frac{e}{8n_o}\right)^{2n_o} \right).
\end{aligned}$$ By choosing $n_l = O(\log \frac{N\beta}{\varepsilon})$ and $n_{o}=O(\log\frac{\beta}{\varepsilon})$, the error is bounded by $\varepsilon/t^{\beta}$ and $$\begin{aligned} N_{\textup{exp}}&= N n_l + n_o \lesssim \left(\log \frac{\beta}{\varepsilon} + \log N \right) N + n_o \\ &=O\left( \left(\log\frac{\beta}{\varepsilon}+\log\log\frac{1}{\tilde \delta}\right) \left(\log\log\frac{\beta}{\varepsilon} + \log\frac{1}{\tilde \delta}\right)\right), \qquad \beta>\varepsilon. \end{aligned}\label{neestimate}$$ ◻ [\[remark1\]]{#remark1 label="remark1"} For $\beta \le \varepsilon$, it is easy to see that the integration range in [\[intrep4\]](#intrep4){reference-type="ref" reference="intrep4"} can be truncated to $[0,1]$ and the Gauss-Jacobi quadrature with $O(1)$ nodes is sufficient to approximate the integral on $[0,1]$ with relative precision $\varepsilon$. In this case, $N_{\textup{exp}}= O(1)$. We would like to emphasize that [\[neestimate\]](#neestimate){reference-type="ref" reference="neestimate"} provides a new complexity bound on the total number of exponentials $N_{\textup{exp}}$ needed to achieve *relative precision* $\varepsilon$, which is a key parameter to determine the memory length in our scheme for solving the constant-Q wave equations. Compared with our previous estimate in [@JiangZhangZhangZhang2017] when bounding the *absolute error* $$\label{nexpbound} N_{\textup{exp}}= O\left(\log\frac{1}{\varepsilon}\left( \log\log\frac{1}{\varepsilon}+\log\frac{T}{\delta}\right) +\log\frac{1}{\delta}\left( \log\log\frac{1}{\varepsilon}+\log\frac{1}{\delta}\right) \right),$$ the $\log^2 ({1}/{\delta})$ factor in [\[nexpbound\]](#nexpbound){reference-type="ref" reference="nexpbound"} is gone. The dependence on $\beta$ is explicit in [\[neestimate\]](#neestimate){reference-type="ref" reference="neestimate"}. 
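As a concrete, non-optimized instantiation of the construction in [\[soethm\]](#soethm){reference-type="ref" reference="soethm"}, the following sketch assembles the Gauss-Jacobi nodes on $[0,1]$ and the Gauss-Legendre nodes on the dyadic intervals, with generous node counts and truncation level rather than the sharp estimates of the theorem (SciPy's `roots_jacobi` is assumed to be available):

```python
import numpy as np
from scipy.special import roots_jacobi
from math import gamma, log, ceil, log2

def soe_nodes_weights(beta, delta, eps, n_o=12, n_l=18):
    """Composite Gauss-Jacobi + dyadic Gauss-Legendre discretization of
    1/t**beta = (beta/Gamma(1+beta)) * int_0^inf exp(-t*s) s**(beta-1) ds
    targeting relative accuracy eps on [delta, 1]."""
    pref = beta / gamma(1.0 + beta)              # = 1/Gamma(beta)
    # weakly singular part on [0,1]: Gauss-Jacobi, weight (1+x)^(beta-1)
    x, w = roots_jacobi(n_o, 0.0, beta - 1.0)
    nodes = list((1.0 + x) / 2.0)
    weights = list(pref * 0.5**beta * w)
    # dyadic intervals [2^j, 2^(j+1)] up to a generous truncation point
    N = ceil(log2(log(1.0 / eps) / delta))
    xg, wg = np.polynomial.legendre.leggauss(n_l)
    for j in range(N):
        a, b = 2.0**j, 2.0**(j + 1)
        s = (a + b) / 2.0 + (b - a) / 2.0 * xg
        nodes += list(s)
        weights += list(pref * (b - a) / 2.0 * wg * s**(beta - 1.0))
    return np.array(nodes), np.array(weights)

beta, delta, eps = 0.0635, 0.005, 1e-8       # Q = 10 in the notation above
s, w = soe_nodes_weights(beta, delta, eps)
t = np.linspace(delta, 1.0, 1000)
approx = (w[None, :] * np.exp(-t[:, None] * s[None, :])).sum(axis=1)
assert np.abs(1.0 - approx * t**beta).max() < eps
```

The quadrature produced here is then handed to a node-reduction step (Prony or GGQ) in practice; the raw composite rule above already meets the relative tolerance but uses more exponentials than necessary.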
It is easy to see that $N_{\textup{exp}}$ decreases as $\beta\rightarrow 0$ when other parameters are fixed (see [\[neestimate\]](#neestimate){reference-type="eqref" reference="neestimate"} and [\[remark1\]](#remark1){reference-type="ref" reference="remark1"}). This is in agreement with the observation that the power function $1/t^\beta$ becomes closer and closer to a constant function as $\beta\rightarrow 0$. Indeed, our nearly optimal SOE approximations for very small $\beta$ have a nearly static mode (i.e., an exponential node very close to zero). On the other hand, the estimate on $N_{\textup{exp}}$ in Theorem 5 of [@BeylkinMonzon2010] contains $1/\beta$ and $\log \beta$ factors, which grow rapidly as $\beta\rightarrow 0$. ### Generalized Gaussian quadrature and the new integral representation Recently, generalized Gaussian quadrature (GGQ) (see, for example, [@ggq2]) has been applied to construct efficient SOE approximations for $t^{-1}$ in [@GimbutasMarshallRokhlin2020]. As pointed out in [@ggq3], the first step of GGQ is to use the singular value decomposition or a pivoted QR decomposition to reduce the number of basis functions that need to be integrated. For SOE approximation problems, there appear to be infinitely many linearly independent functions, since $t$ is a continuous parameter on the interval $[\delta,T]$ (cf. [\[powerfunctionrep\]](#powerfunctionrep){reference-type="eqref" reference="powerfunctionrep"}). However, to any *finite* precision $\varepsilon$, the number of *numerically* linearly independent basis functions $n_f$ is surprisingly low for almost all problems encountered in practice, even when some or all of these functions are singular or nearly singular. This simple yet profound observation is critical to the usefulness of GGQ.
The second step of GGQ is to find an $n_f$-point quadrature to integrate these $n_f$ basis functions exactly, which is easily achieved by preselecting $n_f$ quadrature nodes, say, at the shifted and scaled Gauss-Legendre nodes, and then solving a square linear system to obtain the associated quadrature weights. Finally, GGQ uses a nonlinear optimization procedure to reduce the number of quadrature nodes by one in each step while maintaining the prescribed precision, until the final number of quadrature nodes is reduced by a factor of $2$ (or very close to $2$) (see, for example, [@BremerGimbutasRokhlin2010; @serkh2016thesis]). In order to apply the GGQ machinery to construct efficient SOE approximations for the power function with small exponent, we introduce another change of variable $s=u^{1/\beta}$, leading to $$\label{intrep3} \frac{1}{t^\beta} = \frac{1}{\Gamma(\beta+1)}\int_0^\infty e^{-t u^{1/\beta}}\mathrm{d}u.$$ As compared with the integral representation [\[powerfunctionrep\]](#powerfunctionrep){reference-type="eqref" reference="powerfunctionrep"}, the advantage of the new integral representation [\[intrep3\]](#intrep3){reference-type="eqref" reference="intrep3"} is obvious, especially for very small $\beta$. First, the factor $\Gamma(\beta+1)$ approaches $1$ as $\beta\rightarrow 0$. Second, the integrand is much better behaved both at the origin and at infinity. At the origin, the nearly singular term $s^{\beta-1}$ is gone, and the integral can be truncated at a much smaller number $$p=O\left(\left(\frac{1}{\delta}\log\frac{1}{\varepsilon}\right)^\beta\right), \quad t>\delta.$$ Algorithm [\[alg.fat\]](#alg.fat){reference-type="ref" reference="alg.fat"} contains a short summary of the construction of nearly optimal SOE approximations of the power function via the application of GGQ to the integral representation [\[intrep3\]](#intrep3){reference-type="eqref" reference="intrep3"}.
- Start from the integral representation [\[intrep3\]](#intrep3){reference-type="eqref" reference="intrep3"} of $t^{-\beta}$. Use SVD or pivoted QR decomposition to reduce the number of basis functions that need to be integrated [@ggq3; @GimbutasMarshallRokhlin2020]. This step selects $n_f$ basis functions.
- Preselect $n_f$ shifted and scaled Gauss-Legendre nodes and then solve a square linear system to obtain the associated quadrature weights. This step returns an $n_f$-point quadrature to integrate these $n_f$ basis functions exactly.
- Use a nonlinear optimization procedure [@BremerGimbutasRokhlin2010; @serkh2016thesis] to reduce the number of quadrature nodes by one in each step while maintaining the prescribed precision.

Here we provide some numerical tests of the new SOE approximation. From [\[curve_fitting\]](#curve_fitting){reference-type="ref" reference="curve_fitting"}, it is confirmed that the new SOE approximation provides a good curve fit for the power creep $t^{-\beta}$, even with a very small number of exponentials $N_{\textup{exp}}\le 5$ (the precision is set to $\varepsilon = 1\times10^{-2}$). Indeed, as presented in [\[curve_error\]](#curve_error){reference-type="ref" reference="curve_error"}, it achieves a uniform accuracy within the prescribed precision $\varepsilon$. For typical quality factors, we provide the number of exponentials in [\[Nexp_SOE\]](#Nexp_SOE){reference-type="ref" reference="Nexp_SOE"} to attain the given precision tolerance. It can be observed that GGQ is able to compress the memory variables to a large extent. Additionally, it requires fewer memory variables as the exponent becomes smaller. When the exponent is small, say, less than $0.1$, the number of exponentials $N_{\textup{exp}}$ needed in the SOE approximation is reduced by a factor of two or more as compared with published results [@BeylkinMonzon2010; @JiangZhangZhangZhang2017].
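As a sanity check, independent of the GGQ construction itself, the integral representation [\[intrep3\]](#intrep3){reference-type="eqref" reference="intrep3"} can be verified directly with adaptive quadrature (illustrative only; the split point $u=2$ is an arbitrary choice that isolates the steep but smooth cut-off):

```python
import math
from scipy.integrate import quad

def power_via_intrep3(t, beta):
    """Evaluate 1/t**beta through
    1/t**beta = (1/Gamma(1+beta)) * int_0^inf exp(-t * u**(1/beta)) du."""
    f = lambda u: math.exp(-t * u**(1.0 / beta))
    head, _ = quad(f, 0.0, 2.0, limit=200)   # contains the cut-off region
    tail, _ = quad(f, 2.0, math.inf, limit=200)
    return (head + tail) / math.gamma(1.0 + beta)

# the identity holds uniformly, including for very small exponents
for beta in (0.3, 0.05):
    for t in (0.1, 0.5, 1.0):
        assert abs(power_via_intrep3(t, beta) - t**(-beta)) < 1e-7
```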
For long-time simulations, as seen in [\[Nexp_SOE\]](#Nexp_SOE){reference-type="ref" reference="Nexp_SOE"}, $N_{\textup{exp}}$ has to increase to maintain the prescribed precision due to the $\log \frac{T}{\delta}$ factor in [\[neestimate\]](#neestimate){reference-type="eqref" reference="neestimate"}. Fortunately, the increase in $N_{\textup{exp}}$ is fairly mild as the final instant $T$ increases from $10$ to $1000$ (i.e., the total number of time steps increases from $2000$ to $200000$).

|  | $Q=10$, $\beta \approx 0.0635$ |  |  | $Q=32$, $\beta \approx 0.0199$ |  |  | $Q=50$, $\beta \approx 0.0127$ |  |  | $Q=100$, $\beta \approx 0.0064$ |  |  |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\varepsilon$ $\backslash$ $T$ | 10 | $10^2$ | $10^3$ | 10 | $10^2$ | $10^3$ | 10 | $10^2$ | $10^3$ | 10 | $10^2$ | $10^3$ |
| $10^{-2}$ | 5 | 5 | 6 | 4 | 4 | 5 | 3 | 4 | 5 | 2 | 4 | 4 |
| $10^{-3}$ | 7 | 7 | 9 | 6 | 7 | 8 | 5 | 6 | 7 | 5 | 6 | 7 |
| $10^{-4}$ | 9 | 10 | 12 | 8 | 9 | 11 | 7 | 9 | 10 | 6 | 8 | 9 |
| $10^{-5}$ | 11 | 13 | 15 | 10 | 12 | 14 | 9 | 11 | 13 | 8 | 11 | 12 |
| $10^{-6}$ | 13 | 15 | 18 | 12 | 14 | 17 | 11 | 14 | 16 | 10 | 13 | 15 |
| $10^{-7}$ | 15 | 17 | 21 | 14 | 17 | 20 | 13 | 16 | 19 | 12 | 16 | 18 |
| $10^{-8}$ | 17 | 20 | 24 | 16 | 19 | 23 | 15 | 19 | 22 | 14 | 18 | 21 |

: Number of exponentials needed to approximate $t^{-\beta}$ with various error tolerances $\varepsilon$ and fractional exponents $\beta = 2\pi^{-1}\arctan Q^{-1}$. Here, the gap (time step) is $\delta = 0.005$ and the final instants are $T=10, 100, 1000$, which are equivalent to $N_T=T/\delta= 2000, 20000, 200000$, respectively. Note that $N_{\textup{exp}}$ decreases as the fractional exponent $\beta$ becomes smaller (for larger quality factor $Q$). [\[Nexp_SOE\]]{#Nexp_SOE label="Nexp_SOE"}

### Equivalence between SOE approximations to the power creep and a rheological model

In fact, the error bound [\[soeappr\]](#soeappr){reference-type="ref" reference="soeappr"} offers a rigorous mathematical justification for the capacity of rheological models to approximate the frequency-independent Q behavior.
The fractional stress-strain relation [\[fractional_stress_strain\]](#fractional_stress_strain){reference-type="ref" reference="fractional_stress_strain"} is equivalent to $$\label{SOE_stress_strain} \begin{split} \sigma(x, t) &= \frac{\rho(x) C_P(x)}{\Gamma(1 - 2\gamma(x))} \left[\frac{1}{t^{2\gamma(x)}} H(t) \ast \frac{\partial}{\partial t} \varepsilon(x, t)\right] \\ &\approx E_R(x) \sum_{j=1}^{N_{\textup{exp}}} w_j \int_{0}^{t} e^{- s_j(t - \tau) } \left[\frac{\partial}{\partial \tau} \varepsilon(x, \tau)\right] \mathrm{d}\tau = \underbrace{E_R(x) \sum_{j=1}^{N_{\textup{exp}}} \frac{\tau_{\varepsilon_j}}{\tau_{\sigma_j}} e^{-\frac{t}{\tau_{\sigma_j}}} H(t)}_{\textup{modulus of GMB}} \ast \frac{\partial}{\partial t} \varepsilon(x, t), \end{split}$$ where $E_R(x) = \rho(x) C_P(x)/\Gamma(1-2\gamma(x))$ is the equilibrium response of the viscoelastic material [@CaoYin2015], $\tau_{\sigma_j} = s_j^{-1}$ and $\tau_{\varepsilon_j} = w_j \tau_{\sigma_j}$ are regarded as the relaxation times, $H(t)$ is the Heaviside step function, and $\ast$ denotes convolution in time. In practice, the first node $s_1$ is very close to $0$. The finite-node system [\[SOE_stress_strain\]](#SOE_stress_strain){reference-type="eqref" reference="SOE_stress_strain"} can thus be directly interpreted as a GMB [@ParkSchapery1999; @SchmidtGaul2006; @CaoYin2015], which consists of $N_{\textup{exp}}$ Maxwell chains in parallel, as depicted in [1](#viscoelastic_model){reference-type="ref" reference="viscoelastic_model"}. The force acting in the $j$-th spring branch, with relaxation time $\tau_{\sigma_j} = s_j^{-1}$, produces the response $\Phi_j = \Phi(x, s_j, t)$, and the stress $\sigma$ is recovered by summing the $\Phi_j$ weighted by $w_j$ and the relaxation modulus $E_R(x)$. ![The connection between the SOE approximation of the fractional stress-strain relation and a classical rheological model.
The coefficients of the SOE approximation are determined by a uniformly accurate curve fit to the power creep function, adapted to the constant-Q viscoelastic behavior. [\[viscoelastic_model\]]{#viscoelastic_model label="viscoelastic_model"}](Viscoelastic.pdf){#viscoelastic_model width="80%" height="37%"} From [\[curve_error\]](#curve_error){reference-type="ref" reference="curve_error"}, it is evident that the new SOE approximation achieves a uniformly accurate approximation to the Caputo fractional derivative operator up to the final time $T$. This contrasts with the behavior of projection errors in the Gauss-Laguerre quadrature rule, as adopted in the Yuan-Agrawal method [@YuanAgrawal2002; @BlancChiavassaLombard2014; @XiongGuo2022], which may amplify over time. Therefore, it partially addresses the criticism made in [@SchmidtGaul2006]. The accuracy of the SOE approximation, as well as of the short-memory numerical scheme, can be ensured by allowing more flexibility in the parameters of the SOE approximation to adapt to the viscoelastic behavior of the provided material data.

## Application to P- and S-wave propagation in 3D media

We outline the changes in our scheme for solving the constant-Q wave equation [\[conservation_momentum\]](#conservation_momentum){reference-type="ref" reference="conservation_momentum"}--[\[stress_strain_relation\]](#stress_strain_relation){reference-type="ref" reference="stress_strain_relation"}. The scheme extends directly to P- and S-wave propagation, where the vector form of the fractional stress-strain relation involves two Caputo fractional derivative operators.
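The practical value of the GMB form is that each exponential convolution can be advanced one step at a time: if the strain rate is frozen over a step of size $\Delta t$, the recursion $\Phi^{n+1} = e^{-s\Delta t}\,\Phi^n + \frac{1 - e^{-s\Delta t}}{s}\,\dot{\varepsilon}^n$ holds exactly, so only the previous value of each memory variable must be stored. A minimal sketch (variable names are illustrative):

```python
import math

def advance_memory(phi, strain_rate, s, dt):
    """One step of the short-memory recursion for
    Phi(t) = int_0^t exp(-s*(t - tau)) * g(tau) dtau;
    exact when the strain rate g is held constant over the step."""
    decay = math.exp(-s * dt)
    return decay * phi + (1.0 - decay) / s * strain_rate

# For a constant strain rate g, the recursion reproduces the closed form
# Phi(t) = (1 - exp(-s*t))/s * g regardless of the number of steps taken.
s, dt, g = 2.0, 0.01, 1.5
phi = 0.0
for _ in range(1000):
    phi = advance_memory(phi, g, s, dt)
```

This one-step update is precisely what makes the operator splitting scheme below "short-memory": the cost per step is independent of the number of past time steps.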
Starting with the SOE approximations for two different power creeps, $$t^{-2\gamma_P}\approx \sum_{l=1}^{M_P} w_{l}^{(P)} e^{-s_l^{(P)} t}, \quad t^{-2\gamma_S}\approx \sum_{l=1}^{M_S} w_{l}^{(S)} e^{-s_{l}^{(S)} t},$$ we define $M_P$ auxiliary functions for the trace of strain, $$\Phi_{123}(\bm{x}, s_l^{(P)}, t) = \int_{0}^{t} e^{- s_l^{(P)}(t - \tau) } \left[\frac{\partial}{\partial \tau} (\varepsilon_{11}(\bm{x}, \tau)+\varepsilon_{22}(\bm{x}, \tau)+\varepsilon_{33}(\bm{x}, \tau))\right] \mathrm{d}\tau, \quad l=1,\ldots, M_P,$$ and $6M_S$ auxiliary functions for the other six strain components, $$\Phi_{ij}(\bm{x}, s_l^{(S)}, t) = \int_{0}^{t} e^{- s_l^{(S)}(t - \tau) } \left[\frac{\partial}{\partial \tau} \varepsilon_{ij}(\bm{x}, \tau)\right] \mathrm{d}\tau, \quad i\ne j,~ 1\le i, j \le 3,~l=1,\ldots, M_S.$$ Thus, it requires storing $M_P + 6M_S$ memory variables in total. Similar to [\[stiff_equation\]](#stiff_equation){reference-type="ref" reference="stiff_equation"}, $\Phi_{123}(\bm{x}, s_l^{(P)}, t)$ and $\Phi_{ij}(\bm{x}, s_l^{(S)}, t)$ satisfy the differential equations (without fractional derivatives) $$\begin{aligned} &\frac{\partial \Phi_{123}(\bm{x}, s_l^{(P)}, t)}{\partial t} = - s_l^{(P)}\Phi_{123}(\bm{x}, s_l^{(P)}, t) +\frac{\partial v_{1}(\bm{x}, t)}{\partial x_1} + \frac{\partial v_{2}(\bm{x}, t)}{\partial x_2} + \frac{\partial v_{3}(\bm{x}, t)}{\partial x_3} , \label{dynamics_phi_trace}\\ & \frac{\partial \Phi_{ij}(\bm{x}, s_l^{(S)}, t) }{\partial t} = - s_l^{(S)}\Phi_{ij}(\bm{x}, s_l^{(S)}, t) + \frac{1}{2}\left(\frac{\partial v_{j}(\bm{x}, t)}{\partial x_i} + \frac{\partial v_{i}(\bm{x}, t)}{\partial x_j} \right), \; i\ne j,\; 1 \le i, j \le 3,\label{dynamics_phi}\end{aligned}$$ and the initial conditions $$\begin{aligned} &\Phi_{123}(\bm{x}, s_l^{(P)}, t=0^+) = e^{-s_l^{(P)}}\frac{\sigma_{11}(\bm{x}, t=0^+) + \sigma_{22}(\bm{x}, t=0^+) + \sigma_{33}(\bm{x}, t=0^+)}{(\Gamma(1-2\gamma_P(\bm{x})))^{-1} \rho(\bm{x}) C_P(\bm{x}) }, \\ 
&\Phi_{ij}(\bm{x}, s_l^{(S)}, t=0^+) = e^{-s_l^{(S)}}\frac{ \sigma_{ij}(\bm{x}, t=0^+)}{2(\Gamma(1-2\gamma_S(\bm{x})))^{-1}\rho(\bm{x}) C_S(\bm{x}) }, \quad i\ne j, \quad 1 \le i, j \le 3.\end{aligned}$$ The short-memory operator splitting scheme for the 3D viscoelastic wave equation is summarized in Algorithm [\[SMOS\]](#SMOS){reference-type="ref" reference="SMOS"}, where $v^{n}$, $\sigma^n$, $\Phi_{123}^n$ and $\Phi_{ij}^n$ denote the numerical solutions at $t = t_n$. Note that the first node $s_1$ is usually very close to $0$ (see the tables of SOE formulae in our supplementary material). To avoid numerical instability, we use the approximations $e^{-s_1 \Delta t} \approx 1$ and ${(1 - e^{-s_1 \Delta t})}/{s_1} \approx \Delta t$.

- Half-step update of the velocity components: $$v_i^{n+\frac{1}{2}}(\bm{x}) = v_i^n(\bm{x}) + \frac{1}{\rho(\bm{x})} \left(\frac{\Delta t}{2} \sum_{j=1}^3\frac{\partial}{\partial x_j} \sigma^n_{ij}(\bm{x}) + \int_{t_n}^{t_{n}+\frac{\Delta t}{2}} f_i(\bm{x}, \tau) \mathrm{d}\tau \right), \quad i =1, 2, 3.$$

- Full-step update of the auxiliary functions for $l_1=1, 2, \dots, M_P$, $l_2=1, 2, \dots, M_S$, $i, j = 1, 2, 3$: $$\begin{split} &\Phi_{123}^{n+1}(\bm{x}, s_{l_1}^{(P)}) = e^{-s_{l_1}^{(P)} \Delta t}\Phi_{123}^n(\bm{x}, s_{l_1}^{(P)}) + \frac{1 - e^{-s_{l_1}^{(P)} \Delta t}}{s_{l_1}^{(P)} }\sum_{j=1}^3\frac{\partial}{\partial x_j} v_j^{n+\frac{1}{2}}(\bm{x}),\\ &\Phi_{ij}^{n+1}(\bm{x}, s_{l_2}^{(S)}) = e^{-s_{l_2}^{(S)} \Delta t}\Phi_{ij}^n(\bm{x}, s_{l_2}^{(S)}) + \frac{1 - e^{-s_{l_2}^{(S)} \Delta t}}{2s_{l_2}^{(S)}}\left(\frac{\partial}{\partial x_i} v_j^{n+\frac{1}{2}}(\bm{x}) + \frac{\partial}{\partial x_j} v_i^{n+\frac{1}{2}}(\bm{x})\right).
\end{split}$$

- Update of the stress components: $$\begin{split} &\sigma^{n+1}_{11}(\bm{x}) = E_R^{(P)}(\bm{x})\sum_{l=1}^{M_P} w_l^{(P)} \Phi_{123}^{n+1}(\bm{x}, s_l^{(P)}) - 2 E_R^{(S)}(\bm{x}) \sum_{l=1}^{M_S} w_l^{(S)} (\Phi^{n+1}_{22}+\Phi^{n+1}_{33})(\bm{x}, s_l^{(S)}), \\ &\sigma^{n+1}_{22}(\bm{x}) = E_R^{(P)}(\bm{x}) \sum_{l=1}^{M_P} w_l^{(P)} \Phi_{123}^{n+1}(\bm{x}, s_l^{(P)}) - 2 E_R^{(S)}(\bm{x}) \sum_{l=1}^{M_S} w_l^{(S)} (\Phi^{n+1}_{11}+\Phi^{n+1}_{33})(\bm{x}, s_l^{(S)}), \\ &\sigma^{n+1}_{33}(\bm{x}) = E_R^{(P)}(\bm{x}) \sum_{l=1}^{M_P} w_l^{(P)} \Phi_{123}^{n+1}(\bm{x}, s_l^{(P)}) - 2 E_R^{(S)}(\bm{x}) \sum_{l=1}^{M_S} w_l^{(S)} (\Phi^{n+1}_{11}+\Phi^{n+1}_{22})(\bm{x}, s_l^{(S)}), \\ & \sigma^{n+1}_{ij}(\bm{x}) = 2 E_R^{(S)}(\bm{x}) \sum_{l=1}^{M_S} w_l^{(S)} \Phi_{ij}^{n+1}(\bm{x}, s_l^{(S)}), \quad 1 \le i < j \le 3, \end{split}$$ where $E_R^{(P)}(\bm{x}) = \frac{\rho(\bm{x}) C_P(\bm{x})}{\Gamma(1-2\gamma_P(\bm{x}))}$ and $E_R^{(S)}(\bm{x}) = \frac{\rho(\bm{x}) C_S(\bm{x})}{\Gamma(1-2\gamma_S(\bm{x}))}$.

- Half-step update of the velocity components: $$v_i^{n+1}(\bm{x}) = v_i^{n+\frac{1}{2}}(\bm{x}) + \frac{1}{\rho(\bm{x})} \left(\frac{\Delta t}{2} \sum_{j=1}^3\frac{\partial}{\partial x_j} \sigma^{n+1}_{ij}(\bm{x}) + \int_{t_n + \frac{\Delta t}{2}}^{t_{n+1}} f_i(\bm{x}, \tau) \mathrm{d}\tau \right), \quad i =1, 2, 3.$$

# Numerical experiments {#sec.numerical}

We have implemented the scheme for solving the constant-Q wave equation [\[conservation_momentum\]](#conservation_momentum){reference-type="ref" reference="conservation_momentum"}--[\[stress_strain_relation\]](#stress_strain_relation){reference-type="ref" reference="stress_strain_relation"} and its special case, the scalar viscoacoustic P-wave equation [\[pwaveequation\]](#pwaveequation){reference-type="ref" reference="pwaveequation"}, in Fortran with OpenMP shared-memory parallelization.
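The special treatment of the near-zero node $s_1$ in the time stepping above can alternatively be made uniformly robust by evaluating the step weight with `expm1`, which avoids the cancellation in computing $1 - e^{-s\Delta t}$ directly for small $s$. The following sketch is in Python for illustration only, not the Fortran code used in the paper:

```python
import math

def step_weight(s, dt):
    """Evaluate (1 - exp(-s*dt))/s without cancellation; tends to dt as s -> 0."""
    x = s * dt
    if x == 0.0:
        return dt          # limiting value, mirroring (1 - e^{-s dt})/s -> dt
    return -math.expm1(-x) / s
```

For moderate $s$ this agrees with the direct formula; for $s \to 0$ it degrades gracefully to $\Delta t$, which is exactly the approximation adopted in the algorithm.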
All numerical results are obtained on a machine with an AMD Ryzen 5950X (3.40GHz, 72MB cache, 16 cores, 32 threads) and 128GB memory @ 3600MHz. The Fourier spectral method is used for spatial discretization, while the exponential Strang operator splitting scheme is used for time evolution. To avoid artificial wave reflections, we choose a sufficiently large computational domain such that the wavepacket does not reach the boundary during the entire simulation. For the readers' convenience, we collect the nodes and weights of the SOE approximations in our supplementary material for the quality factors $Q=10, 32, 50, 100$, relative precisions $\varepsilon = 10^{-2},\ldots, 10^{-7}$, $T = 10$ and $\Delta t = 0.005$. To measure the numerical error, we use the relative $L^2$-error $\varepsilon_2[v](t)$ and the relative maximum error $\varepsilon_{\infty}[v](t)$, $$\begin{split} \varepsilon_2[v](t) = \frac{\left(\int_{\Omega} |v^{\textup{num}}(\bm{x}, t) - v^{\textup{ref}}(\bm{x}, t)|^2\mathrm{d}\bm{x}\right)^{1/2}}{\max_{\bm{x}}|v^{\textup{ref}}(\bm{x}, t)|}, \quad \varepsilon_\infty[v](t) = \frac{\max_{\bm{x}\in \Omega} |v^{\textup{num}}(\bm{x}, t) - v^{\textup{ref}}(\bm{x}, t)|}{\max_{\bm{x}}|v^{\textup{ref}}(\bm{x}, t)|}, \end{split}$$ where $v^{\textup{num}}$ and $v^{\textup{ref}}$ denote the numerical and reference velocity wavefields, respectively.

## 3D viscoacoustic wave equation

We first study the scalar viscoacoustic P-wave equation [\[pwaveequation\]](#pwaveequation){reference-type="ref" reference="pwaveequation"}.
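The error measures $\varepsilon_2$ and $\varepsilon_\infty$ defined above are straightforward to compute from sampled wavefields; a minimal sketch, assuming a uniform grid with cell volume `dV` (names are illustrative):

```python
import numpy as np

def rel_errors(v_num, v_ref, dV):
    """Relative L2 and maximum errors of a numerical wavefield against a
    reference, both normalized by max|v_ref| as in the definitions above."""
    denom = np.max(np.abs(v_ref))
    diff = np.abs(v_num - v_ref)
    e2 = np.sqrt(np.sum(diff ** 2) * dV) / denom     # quadrature of the L2 integral
    einf = np.max(diff) / denom
    return e2, einf
```

Note that both measures share the same normalization $\max_{\bm{x}}|v^{\textup{ref}}|$, so they can be compared directly.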
The initial condition is chosen to be $v_0(\bm{x}) = e^{-|\bm{x}|^2}$, and the analytical solution (see [\[exact_solution\]](#exact_solution){reference-type="ref" reference="exact_solution"}) is given by the formula $$\label{analytical} v(\bm{x}, t) = -\frac{1}{4\pi C_P t^{2-2\gamma_P}} \int_{\mathbb{R}^3} \frac{e^{-|\bm{x}- \bm{y}|^2}}{|\bm{y}|} \frac{\partial M_{1 - \gamma_P}(z)}{\partial z} \Big |_{z = \frac{|\bm{y}|}{\sqrt{C_P} t^{1-\gamma_P}}} \mathrm{d}\bm{y},$$ where the Mainardi function $M_{l}(z)$ is a special Wright function of the second kind [@bk:Mainardi2010] (see Appendix [\[Green\]](#Green){reference-type="ref" reference="Green"}). Here we focus on the solution on the line $\bm{x}= (0, 0, x_3)$, $$\begin{split} v(\bm{x}, t) & = -\frac{e^{-|x_3|^2}}{4\pi C_P t^{2-2\gamma_P}} \int_{0}^{+\infty} \mathrm{d}r\int_{0}^{\pi} \mathrm{d}\theta \int_0^{2\pi} \mathrm{d}\phi ~e^{-2x_3 r \cos\theta} r e^{-r^2} \sin \theta \left(\frac{\partial M_{1 - \gamma_P}(z)}{\partial z} \Big |_{z = \frac{r}{\sqrt{C_P} t^{1-\gamma_P}}}\right) \\ & = -\frac{1}{4 C_P t^{2-2\gamma_P}} \int_{0}^{+\infty} \frac{1 - e^{-4x_3 r} }{x_3} e^{-(x_3-r)^2} \left(\frac{\partial M_{1 - \gamma_P}(z)}{\partial z} \Big |_{z = \frac{r}{\sqrt{C_P} t^{1-\gamma_P}}}\right) \mathrm{d}r.
\end{split}$$ To seek an accurate approximation to the analytical solution, we first truncate the $r$-domain by utilizing the rapid decay of $\frac{\partial M_{1 - \gamma_P}(z)}{\partial z}$, then apply the Gauss-Legendre quadrature on the finite interval to obtain $$\label{convolution_approx} \begin{split} v(\bm{x}, t)&\approx \left\{ \begin{split} &\frac{1}{C_P t^{2-2\gamma_P}} \sum_{j = 1}^{N_{H}} \omega_j^{H} \frac{1 - e^{-4x_3 r_j}}{4x_3} e^{-(x_3-r_j)^2}\left(-\frac{\partial M_{1 - \gamma_P}(z)}{\partial z} \Big |_{z = \frac{r_j}{\sqrt{C_P} t^{1-\gamma_P}}} \right), \quad x_3 > 0, \\ &\frac{1}{C_P t^{2-2\gamma_P}} \sum_{j = 1}^{N_{H}} \omega_j^{H} r_j e^{-(x_3-r_j)^2} \left(-\frac{\partial M_{1 - \gamma_P}(z)}{\partial z} \Big |_{z = \frac{r_j}{\sqrt{C_P} t^{1-\gamma_P}}} \right), \quad x_3 = 0, \end{split} \right. \end{split}$$ where $(r_j, \omega_j^H)_{j=1}^{N_H}$ are the quadrature nodes and weights, respectively. The Mainardi functions are calculated using the algorithm in [@XiongGuo2022], where the saddle point approximation [\[saddle_point\]](#saddle_point){reference-type="eqref" reference="saddle_point"} is adopted for large $|z|$. This yields a reference solution with a precision of about $10^{-8}$, though the accuracy is still limited by the truncation error in the asymptotic expansion. Therefore, we also use numerical results obtained with an SOE approximation accurate to $10$ digits to check the numerical accuracy. As a first step, we verify the convergence of the exponential Strang splitting and the Fourier spectral method. The model parameters are set as follows: the group velocity $c_P =1$km/s, the quality factor $Q_P = 10$, the mass density $\rho = 1\textup{g}/\textup{cm}^3$, and the reference frequency $\omega_0 = 2\pi \times 100$Hz.
The computational domain is $[-15, 15]^3$, and several groups of simulations under different grid meshes $N^3 = 8^3,16^3, 32^3, 64^3, 128^3$ and time stepsizes $\Delta t= 0.04, 0.02, 0.01, 0.005, 0.0025$ are performed up to the final time $T=8$s, where the numerical solution produced under $N^3 = 256^3$ and $\Delta t= 0.001$ is used as the reference. As presented in [\[spatial_temporal_convergence\]](#spatial_temporal_convergence){reference-type="ref" reference="spatial_temporal_convergence"}, the scheme exhibits second-order convergence in time and spectral convergence in space. For comparison, we also use the analytical solution as the reference and plot the error curve in the panel of [\[spatial_temporal_convergence\]](#spatial_temporal_convergence){reference-type="ref" reference="spatial_temporal_convergence"}; the convergence trend is almost the same. More importantly, it is necessary to investigate the convergence with respect to the memory length $N_{\textup{exp}}$ for different quality factors $Q_P$. To this end, we adopt the same parameters: the group velocity $c_P = 1$km/s, the mass density $\rho =1\textup{g}/\textup{cm}^3$, the reference frequency $\omega_0 = 2\pi \times 100$Hz, and the computational domain $[-15, 15]^3$ under the grid mesh $N^3 = 256^3$ and time stepsize $\Delta t = 0.005$s. Three groups of simulations with $Q_P = 10, 32, 50$ are performed and the numerical results are collected in [\[SOE_error_Q\]](#SOE_error_Q){reference-type="ref" reference="SOE_error_Q"}, where the references are chosen to be the numerical solutions with $N_{\textup{exp}}= 19, \varepsilon=10^{-9}$ for $Q_P = 10$, $N_{\textup{exp}}= 18, \varepsilon=10^{-9}$ for $Q_P = 32$, and $N_{\textup{exp}}= 17, \varepsilon=10^{-9}$ for $Q_P = 50$, respectively. To make the numerical results more convincing, we also calculate the analytical solutions as the reference and plot the error curves in the panels of [\[SOE_error_Q\]](#SOE_error_Q){reference-type="ref" reference="SOE_error_Q"}.
In both cases, rapid convergence with respect to $N_{\textup{exp}}$ is verified. Moreover, for a larger $Q_P$, fewer memory wavefields are needed to attain the same precision, as clearly observed in [\[SOE_error_Q\]](#SOE_error_Q){reference-type="ref" reference="SOE_error_Q"}. Therefore, the stagnation in the SOE approximation has indeed been avoided. A comparison between the numerical wavefield $v(0, 0, x_3)$ at $t=8$s and the analytical solution [\[analytical\]](#analytical){reference-type="eqref" reference="analytical"} for different $Q_P$, as well as a visualization of the relative errors, is given in [\[acoustic_error_QP10\]](#acoustic_error_QP10){reference-type="ref" reference="acoustic_error_QP10"}--[\[acoustic_error_QP50\]](#acoustic_error_QP50){reference-type="ref" reference="acoustic_error_QP50"}. The two approaches coincide with only a few memory variables, and the accuracy can be systematically improved with a larger $N_{\textup{exp}}$. Moreover, even in the strong attenuation case ($Q_P = 10$), our SOE approximation can still achieve a relative error of less than $10^{-3}$ with $N_{\textup{exp}} < 10$. Therefore, the efficient compression of memory wavefields in the new SOE approximation may significantly reduce the memory requirement and computational complexity when solving the time-fractional wave equation.

## Viscoelastic wave propagation in 3D homogeneous media {#sec.3d.homo}

We now study the vector wave equation [\[conservation_momentum\]](#conservation_momentum){reference-type="ref" reference="conservation_momentum"}--[\[stress_strain_relation\]](#stress_strain_relation){reference-type="ref" reference="stress_strain_relation"}.
To study the propagation of P- and S-waves in viscoelastic media, we set the initial velocity and stress tensors to an equilibrium state and drive the velocity components with the source functions $f_1(\bm{x}, t) = f_2(\bm{x}, t) = f_3(\bm{x}, t) = A(\bm{x}) f_r(t)$, which combine a Ricker-type wavelet history with a Gaussian profile $A(\bm{x})$, $$A(\bm{x}) = e^{-{|\bm{x}-\bm{x}^c|^2}}, \quad f_r(t) = (1 - 2(\pi f_P (t- d_r))^2 )e^{-(\pi f_P(t- d_r))^2},$$ where $f_P$ is the peak frequency and $d_r$ is the temporal delay. Here we choose $f_P = 100$Hz, $d_r = 0$ and the center position $\bm{x}^c = (0, 0, 10)$. The model parameters in the homogeneous media are set as follows: the group velocities for the P-wave and S-wave are $c_P = 2.614$km/s and $c_S = 0.802$km/s, respectively, the mass density is $\rho = 2.2 \textup{g}/\textup{cm}^3$, and the reference frequency is $\omega_0 = 2\pi \times 100$Hz [@Carcione2009]. The computational domain is $[-40, 40]^3$ ($80\textup{km}\times 80 \textup{km}\times 80 \textup{km}$), and the Fourier spectral method is adopted for spatial discretization with the grid size $N^3 = 256^3$. The reference solutions for $Q_P = 32$, $Q_S = 10$ are produced using the grid mesh $N^3 = 256^3$, $\Delta t = 0.005$s, $\varepsilon = 10^{-9}$ and $M_P = 18$, $M_S = 19$ for the P- and S-waves, respectively.
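The source time history $f_r(t)$ above is a standard Ricker wavelet; a direct transcription of the formula:

```python
import math

def ricker(t, f_p, d_r=0.0):
    """Ricker wavelet f_r(t) = (1 - 2*(pi*f_p*(t - d_r))**2) * exp(-(pi*f_p*(t - d_r))**2)."""
    a = (math.pi * f_p * (t - d_r)) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)
```

The wavelet peaks with value $1$ at $t = d_r$ and crosses zero where $(\pi f_P (t - d_r))^2 = 1/2$.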
| Type | $M_P$ | $M_S$ | Wavefields | Memory | Time(h) | $\varepsilon_{2}[v_3](t=10)$ | $\varepsilon_{\infty}[v_{3}](t=10)$ |
|---|---|---|---|---|---|---|---|
| Elastic | 0 | 0 | 9 | 2.0GB | 4.32 | - | - |
| $Q_P=32$, $Q_S=10$ | 4 | 5 | 43 | 6.3GB | 4.77 | 6.741$\times10^{-1}$ | 5.626$\times10^{-2}$ |
| | 6 | 7 | 57 | 7.8GB | 4.92 | 1.406$\times10^{-1}$ | 1.025$\times10^{-2}$ |
| | 8 | 9 | 71 | 9.5GB | 5.24 | 3.587$\times10^{-2}$ | 3.011$\times10^{-3}$ |
| | 10 | 11 | 85 | 11.3GB | 5.42 | 1.849$\times10^{-2}$ | 1.522$\times10^{-3}$ |
| | 12 | 13 | 99 | 13.0GB | 5.87 | 7.669$\times10^{-3}$ | 6.299$\times10^{-4}$ |
| | 14 | 15 | 113 | 14.8GB | 6.00 | 7.098$\times10^{-3}$ | 5.886$\times10^{-4}$ |
| | 16 | 17 | 127 | 16.5GB | 6.21 | 2.489$\times10^{-3}$ | 2.061$\times10^{-4}$ |
| | 18 | 19 | 141 | 18.3GB | 6.43 | Reference | Reference |

  : Homogeneous case: a comparison of memory requirements and computational time up to $T=10$s (with $\Delta t = 0.005$s, 2000 steps and spatial resolution $N^3 = 256^3$) under various memory lengths $M_P$ and $M_S$. The relative maximum and $L^2$-errors at $t = 10$s exhibit rapid convergence, which verifies the accuracy of the nearly optimal SOE approximation. The reference solution is produced with $M_P = 18$, $M_S = 19$. [\[cpu_time\]]{#cpu_time label="cpu_time"}

The wavefield snapshots compare the propagation of the vibrational wavefield $v_3$ in both the elastic media and the viscoelastic media under different $Q_P$ and $Q_S$. Sections of $v_3$ at $y = 0$ and two sectional drawings are provided to present the propagation patterns in the homogeneous media. Clearly, the reduction in amplitude caused by the fractional stress-strain relation becomes more evident in strongly attenuated media ($Q_P = 32$, $Q_S = 10$). The snapshots also show that viscoelasticity not only affects the amplitude but also alters the phase information. Specifically, the first arrival time of the seismic signals might be delayed by strong viscoelasticity [@ZhuCarcioneHarris2013], a finding that aligns with theoretical predictions [@HanyaSerdynska2003].
The error curves demonstrate rapid convergence with respect to the memory length $M_P + 6M_S$. Once again, the new SOE approximation can achieve a relative maximum error of $10^{-3}$ with $M_P = 8$, $M_S = 9$, thus requiring only $M_P+6M_S = 62$ extra memory wavefields to be stored. Table [2](#cpu_time){reference-type="ref" reference="cpu_time"} records the total memory requirement and computational time for different $M_P$ and $M_S$. The short-memory operator splitting scheme takes slightly more time to solve the auxiliary dynamics [\[dynamics_phi_trace\]](#dynamics_phi_trace){reference-type="eqref" reference="dynamics_phi_trace"} and [\[dynamics_phi\]](#dynamics_phi){reference-type="eqref" reference="dynamics_phi"} for the memory wavefields. Nonetheless, the increase in computational time is relatively modest compared to the Strang splitting scheme for the elastic problem, as the primary complexity remains in the calculation of the first-order spatial derivatives of the velocity and stress components. Moreover, when $M_P = 8$ and $M_S = 9$, the scheme requires less than five times the memory of the elastic case, thereby significantly reducing the need for long-term storage.

## Viscoelastic wave propagation in 3D layered media {#sec.3d.layeredmedia}

In a more challenging example, we study wave propagation in a double-layer structure. Here, $\gamma_P$ is a function of the spatial variable and different SOE approximations must be used for each layer (see the discussion in [3.1](#sec.differential_form){reference-type="ref" reference="sec.differential_form"} for details). We compare the wavefields in both elastic and viscoelastic media, using the same source impulse as in [4.2](#sec.3d.homo){reference-type="ref" reference="sec.3d.homo"}.
The model parameters of the heterogeneous media, depicted in [\[double_layer\]](#double_layer){reference-type="ref" reference="double_layer"}, are as follows. For the upper layer spanning $50^2 \times [0, 50]$, the group velocities are $c_P = 2.614$km/s and $c_S = 0.802$km/s, the quality factors are $Q_P=32$, $Q_S=10$ and the mass density is $\rho = 2.2 \textup{g}/\textup{cm}^3$. For the lower layer spanning $50^2 \times [-50, 0]$, the group velocities are $c_P = 3.2$km/s and $c_S = 1.85$km/s, with quality factors $Q_P=100$, $Q_S=50$ and a mass density of $\rho = 2.5\textup{g}/\textup{cm}^3$. The domain is $[-50, 50]^3$ ($100\textup{km}\times 100 \textup{km}\times 100 \textup{km}$), and the Fourier spectral method is adopted with $N^3 = 256^3$. The snapshots present the propagation of the vibrational wavefield $v_3$ in both the elastic and viscoelastic double-layer media. A reflected field from the interface at $x_3 = 0$ is evident, with notable attenuation above it, attributed to the low quality factor of the upper medium. The lower medium experiences much less attenuation; thus, the refracted field exhibits a high amplitude. From [\[hetero_PV_field\]](#hetero_PV_field){reference-type="ref" reference="hetero_PV_field"}, it is clear that the material anelasticity not only delays the passage of the initial wavefront but also results in significant phase dispersion and amplitude loss for the refracted waves, potentially impacting the quality of inversion. As the quality factors in the two layers differ, it is necessary to store $M_P^{(u)} + 6 M_S^{(u)}$ memory variables for the upper layer and an additional $M_P^{(l)} + 6 M_S^{(l)}$ for the lower layer, given the same error tolerance $\varepsilon$. The reference solutions are generated with $M_P^{(u)} = 14$, $M_S^{(u)} = 15$, $M_P^{(l)} = 12$, $M_S^{(l)} = 13$, $\varepsilon=10^{-8}$. The error curves illustrate rapid convergence with respect to the memory length $M_P + 6M_S$, where $M_P = \max(M_P^{(u)}, M_P^{(l)})$ and $M_S = \max(M_S^{(u)}, M_S^{(l)})$.
This further confirms the uniform accuracy of the new SOE approximation. From [\[hetero_accuracy\]](#hetero_accuracy){reference-type="ref" reference="hetero_accuracy"}, the relative error is about $3\%$ even with much fewer memory variables, such as $M_P^{(u)} = 6$, $M_S^{(u)} = 7$, $M_P^{(l)} = 4$, $M_S^{(l)} = 5$. The table below lists the total memory requirement and computational time for various numbers of memory variables. Clearly, the storage requirement and arithmetic operations have been reduced.

| $M_P^{(u)}$ | $M_S^{(u)}$ | $M_P^{(l)}$ | $M_S^{(l)}$ | Wavefields | Memory | Time(h) | $\varepsilon_{2}[v_3](t=10)$ | $\varepsilon_{\infty}[v_{3}](t=10)$ |
|---|---|---|---|---|---|---|---|---|
| 4 | 5 | 2 | 3 | 43 | 6.3GB | 7.92 | 4.391 | 1.200$\times10^{-1}$ |
| 6 | 7 | 5 | 5 | 57 | 8.0GB | 8.94 | 3.480$\times10^{-1}$ | 9.853$\times10^{-3}$ |
| 8 | 9 | 6 | 7 | 71 | 9.6GB | 10.76 | 9.503$\times10^{-2}$ | 2.048$\times10^{-3}$ |
| 10 | 11 | 8 | 9 | 85 | 11.3GB | 11.86 | 4.545$\times10^{-2}$ | 9.381$\times10^{-4}$ |
| 12 | 13 | 10 | 11 | 99 | 12.9GB | 13.42 | 1.598$\times10^{-2}$ | 3.033$\times10^{-4}$ |
| 14 | 15 | 12 | 13 | 113 | 14.6GB | 14.93 | Reference | Reference |

  : Heterogeneous case: a comparison of memory requirements and computational time up to $T=10$s (with $\Delta t = 0.005$s, 2000 steps and spatial resolution $N^3 = 256^3$) under different memory lengths $M_P^{(u)}$, $M_P^{(l)}$, $M_S^{(u)}$ and $M_S^{(l)}$. The relative maximum and $L^2$-errors at $t = 10$s exhibit rapid convergence, which verifies the accuracy of the nearly optimal SOE approximation. The reference solution is produced with $M_P^{(u)} = 14$, $M_S^{(u)} = 15$, $M_P^{(l)} = 12$, $M_S^{(l)} = 13$. [\[hetero_cpu_time\]]{#hetero_cpu_time label="hetero_cpu_time"}

# Conclusions and further discussions {#sec.conclusion}

The time-fractional wave equation accounts for both wave attenuation and velocity dispersion in real viscoelastic media.
However, existing numerical methods usually require storing the entire history of the wavefields due to the very small fractional exponent, which hinders their practical usage. This paper proposes a new SOE approximation to resolve this long-memory limitation. The new SOE approximation seeks an optimal curve fit to the power function $t^{-2\gamma}$ via the generalized Gaussian quadrature applied to a new integral representation, and requires only half the nodes of existing SOE approximations. The equivalence between the SOE approximation of the continuous fractional stress-strain relation and the rheological model is established, and an improved error bound for the new SOE approximation provides a quantitative characterization of their differences. Numerical simulations of the 3D constant-Q viscoacoustic equation and the P- and S-wave viscoelastic equations have been conducted to demonstrate that the proposed method can effectively capture the changes in both amplitude and phase induced by material anelasticity. Furthermore, the memory wavefields can be significantly compressed. In real seismic applications, complex geological structures might exhibit strong anisotropy in each direction. Using a domain decomposition strategy, one only needs to fit the fractional stress-strain relation within each subdomain. The application of the time-fractional wave equation in seismic inversion may be a topic of our future work.

# Acknowledgement {#acknowledgement .unnumbered}

This research was supported by the Natural Science Foundation of Shandong Province for Excellent Youth Scholars (No. ZR2020YQ02), the Taishan Scholars Program of Shandong Province of China (No. tsqn201909044), the National Natural Science Foundation of China (No. 1210010642) and the Fundamental Research Funds for the Central Universities (No. 310421125).

H. Liu, D. L. Anderson, and H. Kanamori. Velocity dispersion due to anelasticity; implications for seismology and mantle composition.
- 47:41--58, 1976.
- E. Kjartansson. Constant Q-wave propagation and attenuation. 84.B9:4737--4748, 1979.
- J. M. Carcione, F. Cavallini, F. Mainardi, and A. Hanyga. Time-domain seismic modeling of constant-Q wave propagation using fractional derivatives. 159:1714--1736, 2002.
- B. Ursin and T. Toverud. Comparison of seismic dispersion and attenuation models. 46:293--320, 2002.
- H. Emmerich and D. Korn. Incorporation of attenuation into time-domain computations of seismic wave fields. 52(9):1252--1264, 1987.
- P. Yang, R. Brossier, L. Métivier, and J. Virieux. A review on the systematic formulation of 3-D multiparameter full waveform inversion in viscoelastic medium. 207(1):129--149, 2017.
- J. Tromp. Seismic wavefield imaging of Earth's interior across scales. 1(1):40--53, 2020.
- M. Mirzanejad, K. T. Tran, and Y. Wang. Three-dimensional Gauss--Newton constant-Q viscoelastic full-waveform inversion of near-surface seismic wavefields. 231(3):1767--1785, 2022.
- Y. Wang, J. M. Harris, M. Bai, O. M. Saad, L. Yang, and Y. Chen. An explicit stabilization scheme for Q-compensated reverse time migration. 87(3):F25--F40, 2022.
- S. P. C. Marques and C. J. Creus. Springer Science & Business Media, 2012.
- J. O. Blanch, J. O. A. Robertsson, and W. W. Symes. Modeling of a constant Q; methodology and algorithm for an efficient and optimally inexpensive viscoelastic technique. 60(1):176--184, 1995.
- S. M. Day and J. B. Minster. Numerical simulation of attenuated wavefields using a Padé approximant method. 78(1):105--118, 1984.
- S. W. Park and R. A. Schapery. Methods of interconversion between linear viscoelastic material functions. Part I - a numerical method based on Prony series. 36(11):1653--1675, 1999.
- P. Moczo and J. Kristek. On the rheological models used for time-domain methods of seismic wave propagation. 32:L01306, 2005.
- T. Zhu, J. M. Carcione, and J. M. Harris. Approximating constant-Q seismic propagation in the time domain. 61(5):931--940, 2013.
- D. Cao and X. Yin. Equivalence relations of generalized rheological models for viscoelastic seismic-wave modeling. 104(1):260--268, 2014.
- É. Blanc, D. Komatitsch, E. Chaljub, B. Lombard, and Z. Xie. Highly accurate stability-preserving optimization of the Zener viscoelastic model, with application to wave propagation in the presence of strong attenuation. 205(1):427--439, 2016.
- J. Groby and C. Tsogka. A time domain method for modeling viscoacoustic wave propagation. 14(2):201--236, 2006.
- N. Wang, G. Xing, T. Zhu, H. Zhou, and Y. Shi. Propagating seismic waves in VTI attenuating media using fractional viscoelastic wave equation. 127(4):e2021JB023280, 2022.
- J. O. A. Robertsson, J. O. Blanch, and W. W. Symes. Viscoelastic finite-difference modeling. 59(9):1444--1456, 1994.
- E. Wang, Y. Liu, Y. Ji, T. Chen, and T. Liu. Q full-waveform inversion based on the viscoacoustic equation. 16(1):77--91, 2019.
- J. M. Carcione, D. Kosloff, and R. Kosloff. Wave propagation simulation in a linear viscoelastic medium. 95:597--611, 1988.
- A. Hanyga and M. Seredyńska. Power-law attenuation in acoustic and isotropic anelastic media. 155(3):830--838, 2003.
- F. Mainardi. World Scientific, Singapore, 2010.
- J. M. Carcione. Theory and modeling of constant-Q P- and S-waves using fractional time derivatives. 74(1):1787--1795, 2009.
- T. Zhu. Numerical simulation of seismic wave propagation in viscoelastic-anisotropic media using frequency-independent Q wave equation. 82(4):WA1--WA10, 2017.
- A. Schmidt and L. Gaul. On a critique of a numerical scheme for the calculation of fractionally damped dynamical systems. 33(1):99--107, 2006.
- K. Diethelm. An investigation of some nonclassical methods for the numerical approximation of Caputo-type fractional derivatives. 47(4):361--390, 2008.
- Y. Huang, Q. Li, R. Li, F. Zeng, and L. Guo. A unified fast memory-saving time-stepping method for fractional operators and its applications. 15(3):679--714, 2022.
- F. Sun, J. Gao, and N. Liu. The approximate constant Q and linearized reflection coefficients based on the generalized fractional wave equation. 145(1):243--253, 2019.
- S. Liu, D. Yang, X. Dong, Q. Liu, and Y. Zheng. Numerical simulation of the wavefield in a viscous fluid-saturated two-phase VTI medium based on the constant-Q viscoelastic constitutive relation with a fractional temporal derivative. 61(6):2446--2458, 2018.
- R. Christensen. Elsevier, 2012.
- L. Yuan and O. P. Agrawal. A numerical scheme for dynamic systems containing fractional derivatives. 124(2):321--324, 2002.
- E. Blanc, G. Chiavassa, and B. Lombard. Wave simulation in 2D heterogeneous transversely isotropic porous media with fractional attenuation: A Cartesian grid approach. 275:118--142, 2014.
- Y. Xiong and X. Guo. A short-memory operator splitting scheme for constant-Q viscoelastic wave equation. 449:110796, 2022.
- S. Jiang. ProQuest LLC, Ann Arbor, MI, 2001. Thesis (Ph.D.)--New York University.
- S. Jiang and L. Greengard. Fast evaluation of nonreflecting boundary conditions for the Schrödinger equation in one dimension. 47(6-7):955--966, 2004.
- G. Beylkin and L. Monzón. Approximation by exponential sums revisited. 28(2):131--149, 2010.
- S. Jiang, J. Zhang, Q. Zhang, and Z. Zhang. Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations. 21(3):650--678, 2017.
- J.-R. Li. A fast time stepping method for evaluating fractional integrals. 31(6):4696--4714, 2010.
- J. Bremer, Z. Gimbutas, and V. Rokhlin. A nonlinear optimization procedure for generalized Gaussian quadratures. 32(4):1761--1788, 2010.
- J. Ma, V. Rokhlin, and S. Wandzura. Generalized Gaussian quadrature rules for systems of arbitrary functions. 33(3):971--996, 1996.
- N. Yarvin and V. Rokhlin. Generalized Gaussian quadratures and singular value decompositions of integral operators. 20(2):699--718, 1998.
- A. Hanyga. Multidimensional solutions of time-fractional diffusion-wave equations. 458:933--957, 2002.
- J. F. Lu and A. Hanyga. Wave field simulation for heterogeneous porous media with singular memory drag force. 208(2):651--674, 2005.
- C. H. Chapman, J. W. D. Hobro, and J. O. A. Robertsson. Correcting an acoustic wavefield for elastic effects. 197:1196--1214, 2014.
- Z. Gimbutas, N. F. Marshall, and V. Rokhlin. A fast simple algorithm for computing the potential of charges on a line. 49(3):815--830, 2020.
- K. Serkh. ProQuest LLC, Ann Arbor, MI, 2016. Thesis (Ph.D.)--Yale University.
- N. S. Kambo. Error of the Newton-Cotes and Gauss-Legendre quadrature formulas. 24(110):261--269, 1970.
- Y. Luchko. Algorithms for evaluation of the Wright function for the real arguments' values. 11(1):57--75, 2008.

# [\[Green\]]{#Green label="Green"}Green's function for 3-D viscoacoustic wave equation

The Green's function $G^{(3)}(\bm{x}, t; 1-\gamma_P)$ for the 3-D viscoacoustic wave equation is given by $$G^{(3)}(\bm{x}, t; 1-\gamma_P) = -\frac{1}{4\pi C_P |\bm{x}| t^{2-2\gamma_P}} \frac{\partial M_{1 - \gamma_P}(z)}{\partial z}\Big |_{z = \frac{|\bm{x}|}{\sqrt{C_P} t^{1-\gamma_P}}}.$$ Here the Mainardi function $M_{l}(z)$ is a special Wright function of the second kind [@bk:Mainardi2010], namely $M_{l}(z) \coloneqq W_{-l, 1 - l}(-z)$, $0 < l < 1$, $z\in \mathbb{C}$.
Moreover, for $-1 < \lambda < 0$, $l < 1$ and $x > 0$ [@Luchko2008], one has $$W_{\lambda, l}(- x) = \frac{1}{\pi} \int_0^{+\infty} K_{\lambda, l}(-x, r ) \mathrm{d}r, ~~ K_{\lambda, l}(x, r ) = r^{-l} e^{-r} \left[e^{x r^{-\lambda} \cos(\lambda \pi)} \sin(x r^{-\lambda} \sin(\pi \lambda) + \pi l)\right].$$ Noting that $$\begin{split} \frac{\partial K_{\lambda, l}(x, r)}{\partial x} = &r^{-(l+\lambda)} e^{-r} \left[e^{x r^{-\lambda} \cos(\lambda \pi)} \sin(x r^{-\lambda} \sin(\pi \lambda) + \pi l) \cos(\lambda\pi) \right] \\ &+ r^{-(l+\lambda)} e^{-r} \left[e^{x r^{-\lambda} \cos(\lambda \pi)} \cos(x r^{-\lambda} \sin(\pi \lambda) + \pi l) \sin(\lambda\pi) \right] \\ = & r^{-(l+\lambda)} e^{-r} \left[e^{x r^{-\lambda} \cos(\lambda \pi)} \sin(x r^{-\lambda} \sin(\pi \lambda) + \pi (l+\lambda)) \right] = K_{\lambda, \lambda+l}(x, r), \end{split}$$ the derivative of the Mainardi function can be represented as $$\frac{\partial M_l(x)}{\partial x} = - \frac{1}{\pi} \int_{0}^{+\infty} K_{-l, 1-2l}(x, r)\mathrm{d}r, \quad \frac{\partial M_l(0)}{\partial x} = - \frac{1}{\Gamma(1 - 2l)}.$$ For sufficiently large $x$, the Mainardi function has a saddle-point approximation [@bk:Mainardi2010], $$\label{saddle_point} \begin{split} &\frac{\partial M_l(\frac{z}{l})}{\partial z} \sim \frac{1}{l \sqrt{2\pi(1-l)}} \left(\frac{l-1/2}{1-l}z^{\frac{2l-3/2}{1-l}} - \frac{1}{l}z^{\frac{2l-1/2}{1-l}}\right)\exp\left(-\frac{1-l}{l} z^{\frac{1}{1-l}}\right), \quad |z| \to \infty. \end{split}$$ The derivatives of Mainardi's functions can be calculated via the Gauss-Laguerre quadrature and the saddle point approximation, and details can be found in the Appendix of [@XiongGuo2022].
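The value of the derivative at the origin can be checked numerically from the kernel representation above. The following is a small self-contained sketch (our own illustration, not code from the paper): it evaluates $-\frac{1}{\pi}\int_0^{+\infty} K_{-l,1-2l}(0, r)\,\mathrm{d}r$ with a plain Riemann sum after the substitution $r = t^2$, and compares the result against the closed form $-1/\Gamma(1-2l)$.

```python
import math

def kernel(lam, l, x, r):
    # K_{lam,l}(x, r) from the integral representation of the Wright function
    return (r ** (-l) * math.exp(-r)
            * math.exp(x * r ** (-lam) * math.cos(lam * math.pi))
            * math.sin(x * r ** (-lam) * math.sin(math.pi * lam) + math.pi * l))

def mainardi_deriv(l, x, n=100_000, tmax=8.0):
    # dM_l/dx = -(1/pi) * int_0^inf K_{-l,1-2l}(x, r) dr; the substitution
    # r = t^2 removes the weak r^(2l-1) singularity at the origin.
    h = tmax / n
    total = 0.0
    for i in range(1, n + 1):
        t = i * h
        total += kernel(-l, 1.0 - 2.0 * l, x, t * t) * 2.0 * t
    return -total * h / math.pi

l = 0.3
approx = mainardi_deriv(l, 0.0)
exact = -1.0 / math.gamma(1.0 - 2.0 * l)   # = dM_l(0)/dx
print(approx, exact)
```

With $l = 0.3$ the two values agree to several digits; the quadrature here is deliberately naive, and the Gauss-Laguerre route mentioned above is the efficient choice in practice.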
# Tables of nodes and weights in new SOE for typical quality factors $\omega_j$ $s_j$ $\omega_j$ $s_j$ ------------------------------------------------- ------------------------ -------------------------------------------------- ------------------------ $\varepsilon = 10^{-2}$, $N_{\textup{exp}}= 5$ $\varepsilon = 10^{-3}$, $N_{\textup{exp}}= 7$ 0.1440797465700240D-10 0.8415261929142592D+00 0.1440797155149404D-10 0.8240830013371835D+00 0.1954901542163180D+00 0.1636845055652096D+00 0.1322237166027692D+00 0.1501811717860068D+00 0.1723999921378503D+01 0.1323100227004369D+00 0.8517903732042016D+00 0.9648611144577410D-01 0.1229908868952335D+02 0.1547537593365945D+00 0.3441114686073438D+01 0.9644807153788990D-01 0.9777983682050727D+02 0.1869902132381615D+00 0.1350088871384277D+02 0.1069689948347042D+00 0.5492464653183761D+02 0.1205169710604457D+00 0.2338504327514402D+03 0.1374498357868730D+00 $\varepsilon = 10^{-4}$, $N_{\textup{exp}}= 9$ $\varepsilon = 10^{-5}$, $N_{\textup{exp}}= 11$ 0.1440797164254682D-10 0.8112378597026695D+00 0.1440797619283847D-10 0.8010997231278196D+00 0.1009890841402805D+00 0.1443806338391152D+00 0.8197828436191787D-01 0.1409941501440947D+00 0.5823334989875405D+00 0.8332378558784953D-01 0.4498296707524729D+00 0.7719857356447507D-01 0.1869388736178351D+01 0.7366620152204306D-01 0.1296039075552287D+01 0.6269524338587557D-01 0.5367910686013082D+01 0.7625660830946936D-01 0.3160106693096034D+01 0.6078138363618767D-01 0.1531655543336788D+02 0.8230551789077256D-01 0.7409771919428429D+01 0.6321002136990252D-01 0.4439653407641731D+02 0.8962282114855959D-01 0.1734398686592705D+02 0.6712827947872171D-01 0.1312617293848889D+03 0.9788699963267940D-01 0.4092281319055039D+02 0.7169125085778819D-01 0.4001238289113953D+03 0.1103304595213679D+00 0.9756294726492744D+02 0.7671133135458129D-01 0.2354800125540291D+03 0.8253927154623260D-01 0.5852581776377881D+03 0.9324471363963684D-01 $\varepsilon = 10^{-6}$, $N_{\textup{exp}}= 13$ $\varepsilon = 10^{-7}$, 
$N_{\textup{exp}}= 15$ 0.1440797344443702D-10 0.7927416768056691D+00 0.1440797301373925D-10 0.7856419964970299D+00 0.6909132948947617D-01 0.1386674750682972D+00 0.5974823699768806D-01 0.1369104454153603D+00 0.3693004676261487D+00 0.7379286075670194D-01 0.3144306059252867D+00 0.7163361769803406D-01 0.1005372969224108D+01 0.5678739803068599D-01 0.8284526998828101D+00 0.5329152980471319D-01 0.2235671903599100D+01 0.5201787459951138D-01 0.1744202695136431D+01 0.4672404154667911D-01 0.4653227235497959D+01 0.5208079978662095D-01 0.3366361346154247D+01 0.4508321149250512D-01 0.9520871769286824D+01 0.5406183576472692D-01 0.6296947578816033D+01 0.4569204223027506D-01 0.1946905282824594D+02 0.5680493830595190D-01 0.1167667863594848D+02 0.4728179409751884D-01 0.3999161843226904D+02 0.5991962774574524D-01 0.2164708182376960D+02 0.4930960413677760D-01 0.8264328348675848D+02 0.6329721168957117D-01 0.4024043478382247D+02 0.5157105998736930D-01 0.1719522811815946D+03 0.6699623446010711D-01 0.7508563548322061D+02 0.5399937693294307D-01 0.3613875662253792D+03 0.7162043024628401D-01 0.1406994214974785D+03 0.5660145559683093D-01 0.7832726493548571D+03 0.8143769999391330D-01 0.2650393315555953D+03 0.5952492491283821D-01 0.5041800183069405D+03 0.6350067238989261D-01 0.9906709743596185D+03 0.7274601644296926D-01 : The nodes and weights in new SOE for $Q = 10$, $2\gamma =2\pi^{-1}\arctan(Q^{-1}) \approx 0.0635$ to attain $|t^{-\beta} - \sum_{j=1}^{N_{\textup{exp}}} w_j e^{-s_j t}| \le \varepsilon t^{-\beta}$ for $\delta \le t \le T$, with the final instant $T \le 10$ and temporal gap $\delta \ge 5\times 10^{-3}$. 
[\[SOE_table_Q10\]]{#SOE_table_Q10 label="SOE_table_Q10"} $s_j$ $\omega_j$ $s_j$ $\omega_j$ ------------------------------------------------- ------------------------ -------------------------------------------------- ------------------------ $\varepsilon = 10^{-2}$, $N_{\textup{exp}}= 4$ $\varepsilon = 10^{-3}$, $N_{\textup{exp}}= 6$ 0.1233271244807241D-27 0.9507315622743330D+00 0.1233271244807303D-27 0.9307117960462683D+00 0.2446613360651417D+00 0.5909871860442109D-01 0.7955490392898384D-01 0.5410148687511617D-01 0.3188494705145181D+01 0.5108257353637974D-01 0.6400754573055761D+00 0.3427590000250326D-01 0.4374046349798036D+02 0.6006469461273273D-01 0.3406885956980713D+01 0.3469155299642254D-01 0.1938932219421636D+02 0.3838452138989368D-01 0.1244724672270921D+03 0.4246441195877351D-01 $\varepsilon = 10^{-4}$, $N_{\textup{exp}}= 8$ $\varepsilon = 10^{-5}$, $N_{\textup{exp}}= 10$ 0.1233271244806500D-27 0.9257639935683603D+00 0.1233271244808180D-27 0.9342219590785330D+00 0.5960153475701349D-01 0.5267049620909144D-01 0.8687488238252516D-01 0.4970195686328412D-01 0.4181740222757921D+00 0.2931551656158208D-01 0.5015374242392519D+00 0.2548030796317139D-01 0.1574911535622993D+01 0.2518001231324935D-01 0.1529011432214429D+01 0.2044968316336439D-01 0.5391000223314470D+01 0.2570793408412855D-01 0.4035666163812632D+01 0.1963888134955552D-01 0.1891729095316842D+02 0.2722073088872728D-01 0.1039920617855594D+02 0.1995668303240832D-01 0.6935590107043559D+02 0.2891757255269721D-01 0.2692365365300807D+02 0.2052617564942962D-01 0.2670138625582335D+03 0.3114529866907899D-01 0.7037020812725662D+02 0.2113029151314107D-01 0.1857132902212041D+03 0.2178984669856504D-01 0.5017183552663466D+03 0.2340202613949829D-01 $\varepsilon = 10^{-6}$, $N_{\textup{exp}}= 12$ $\varepsilon = 10^{-7}$, $N_{\textup{exp}}= 14$ 0.1233271244834537D-27 0.9311450359309102D+00 0.1233271246154151D-27 0.9224243946813435D+00 0.7309291099348825D-01 0.4918924204721100D-01 0.4565633910836096D-01 0.4882502116086868D-01 
0.4081678128820871D+00 0.2436636893296938D-01 0.2549389400613811D+00 0.2394565431114788D-01 0.1156721990103773D+01 0.1841162414492538D-01 0.7027385624420864D+00 0.1727215242352003D-01 0.2722207854479077D+01 0.1679826283180114D-01 0.1543612188551032D+01 0.1481758253849639D-01 0.6089902279269454D+01 0.1663193579207381D-01 0.3117126857811117D+01 0.1407730402715572D-01 0.1351634416771198D+02 0.1690890326587400D-01 0.6138445183215538D+01 0.1406238726348420D-01 0.3010967506196511D+02 0.1729694788251238D-01 0.1206000042888941D+02 0.1431217725498386D-01 0.6743945752842677D+02 0.1768423697908001D-01 0.2381472369894094D+02 0.1464745537413458D-01 0.1516100483822089D+03 0.1802327708903234D-01 0.4736179967023599D+02 0.1501032214584595D-01 0.3414653124039590D+03 0.1838524006642803D-01 0.9490353496367300D+02 0.1538740114341876D-01 0.7811858483832548D+03 0.1969170630104930D-01 0.1916971393801842D+03 0.1579940949347425D-01 0.3916535995905957D+03 0.1639177497211432D-01 0.8266960970664088D+03 0.1810583027037206D-01 : The nodes and weights in new SOE for $Q = 32$, $2\gamma =2\pi^{-1}\arctan(Q^{-1}) \approx 0.0199$ to attain $|t^{-\beta} - \sum_{j=1}^{N_{\textup{exp}}} w_j e^{-s_j t}| \le \varepsilon t^{-\beta}$ for $\delta \le t \le T$, with the final instant $T \le 10$ and temporal gap $\delta \ge 5\times 10^{-3}$.[\[SOE_table_Q32\]]{#SOE_table_Q32 label="SOE_table_Q32"} $s_j$ $\omega_j$ $s_j$ $\omega_j$ ------------------------------------------------- ------------------------ -------------------------------------------------- ------------------------ $\varepsilon = 10^{-2}$, $N_{\textup{exp}}= 3$ $\varepsilon = 10^{-3}$, $N_{\textup{exp}}= 5$ 0.3291459309486449D-43 0.9715159334883926D+00 0.3291459309486449D-43 0.9654032847335073D+00 0.3844977954137703D+00 0.4494377137687228D-01 0.1805141287964131D+00 0.3541837895493161D-01 0.1269217440126866D+02 0.5029910225330850D-01 0.1603711027271578D+01 0.2495583781684037D-01 0.1131524795360649D+02 0.2656277187664437D-01 0.9103455146880404D+02 
0.2933916818950616D-01 $\varepsilon = 10^{-4}$, $N_{\textup{exp}}= 7$ $\varepsilon = 10^{-5}$, $N_{\textup{exp}}= 9$ 0.3291459309486449D-43 0.9613696925562936D+00 0.3291459309486449D-43 0.9583545824772974D+00 0.1230055319059663D+00 0.3335077521224538D-01 0.9421313698694372D-01 0.3257050484288297D-01 0.8113385187918014D+00 0.1901845369491019D-01 0.5594525408055533D+00 0.1684018645821342D-01 0.3262465799193611D+01 0.1760194323773591D-01 0.1799033081583265D+01 0.1390725893797957D-01 0.1273359255878506D+02 0.1827779127280318D-01 0.5146221618958586D+01 0.1362728285045673D-01 0.5190919485559557D+02 0.1930292644345800D-01 0.1464101300598260D+02 0.1397871517834396D-01 0.2237616215581631D+03 0.2060469071916570D-01 0.4245035087292208D+02 0.1447337311323433D-01 0.1260744262177191D+03 0.1502467492901445D-01 0.3878389209183667D+03 0.1606353834964511D-01 $\varepsilon = 10^{-6}$, $N_{\textup{exp}}= 11$ $\varepsilon = 10^{-7}$, $N_{\textup{exp}}= 13$ 0.3291459309486449D-43 0.9559470828794648D+00 0.3291459309486449D-43 0.9539432545841504D+00 0.7659112881304787D-01 0.3217361279926604D-01 0.6461018484650244D-01 0.3193336413074308D-01 0.4338032622992327D+00 0.1585545388214821D-01 0.3568618864911536D+00 0.1533257985633187D-01 0.1256503454895687D+01 0.1209976708574026D-01 0.9785073400536500D+00 0.1113044331387444D-01 0.3060007372246838D+01 0.1116945869588176D-01 0.2177669719981179D+01 0.9746907371580906D-02 0.7156010727142673D+01 0.1112346488285604D-01 0.4525655673256203D+01 0.9387145829502928D-02 0.1671889839142481D+02 0.1133261712262682D-01 0.9242537672647016D+01 0.9399519115875818D-02 0.3944184916633604D+02 0.1161367233920682D-01 0.1887578469291693D+02 0.9535911538885434D-02 0.9421749253717621D+02 0.1192214677313131D-01 0.3876276251779908D+02 0.9713434014349787D-02 0.2283913597457030D+03 0.1230024163842561D-01 0.8018021213022408D+02 0.9907610910937507D-02 0.5714449581431992D+03 0.1329624863545951D-01 0.1672104632942592D+03 0.1012302317471267D-01 0.3527186673439197D+03 
0.1044035591633067D-01 0.7683347210259111D+03 0.1142908470080125D-01 : The nodes and weights in new SOE for $Q = 50$, $2\gamma =2\pi^{-1}\arctan(Q^{-1}) \approx 0.0127$ to attain $|t^{-\beta} - \sum_{j=1}^{N_{\textup{exp}}} w_j e^{-s_j t}| \le \varepsilon t^{-\beta}$ for $\delta \le t \le T$, with the final instant $T \le 10$ and temporal gap $\delta \ge 5\times 10^{-3}$.[\[SOE_table_Q50\]]{#SOE_table_Q50 label="SOE_table_Q50"} $s_j$ $\omega_j$ $s_j$ $\omega_j$ ------------------------------------------------- ------------------------ -------------------------------------------------- ------------------------ $\varepsilon = 10^{-2}$, $N_{\textup{exp}}= 2$ $\varepsilon = 10^{-3}$, $N_{\textup{exp}}= 5$ 0.1717351273768353D-90 0.9847183908720862D+00 0.1717351273768353D-90 0.9816124627012480D+00 0.3016163347415830D+00 0.1946540375040938D-01 0.1513258304760864D+00 0.1749586258243257D-01 0.1243840094780066D+01 0.1190462163297126D-01 0.8568483928967764D+01 0.1322284392208735D-01 0.7517155220295190D+02 0.1506498116754216D-01 $\varepsilon = 10^{-4}$, $N_{\textup{exp}}= 6$ $\varepsilon = 10^{-5}$, $N_{\textup{exp}}= 8$ 0.1717351273768353D-90 0.9813979844412281D+00 0.1717351273768353D-90 0.9796394795856220D+00 0.1444688547833037D+00 0.1725070612123545D-01 0.1056560707574286D+00 0.1667886723036181D-01 0.1062521612149110D+01 0.1054578706877307D-01 0.6560288935303707D+00 0.8844634987341105D-02 0.5267690362852202D+01 0.1032992162487232D-01 0.2306291959214068D+01 0.7630982018328110D-02 0.2703967558153169D+02 0.1094329897487313D-01 0.7494069882130154D+01 0.7643894387121303D-02 0.1513162921362690D+03 0.1168001387206569D-01 0.2475380224244508D+02 0.7895263738883522D-02 0.8455768375945300D+02 0.8193476657837220D-02 0.3014062855622324D+03 0.8669821820757132D-02 $\varepsilon = 10^{-6}$, $N_{\textup{exp}}= 10$ $\varepsilon = 10^{-7}$, $N_{\textup{exp}}= 12$ 0.1717351273768353D-90 0.9782726238548010D+00 0.1717351273768353D-90 0.9771544730076503D+00 0.8372863302094305D-01 
0.1643118013623608D-01 0.6947921924972357D-01 0.1629754539058215D-01 0.4855505122592347D+00 0.8148771417133997D-02 0.3895637403085457D+00 0.7808468229375633D-02 0.1469355573407596D+01 0.6388908896360456D-02 0.1094368348489869D+01 0.5750937211655103D-02 0.3830248867072182D+01 0.6035304461848845D-02 0.2532138118439746D+01 0.5133367812236056D-02 0.9747221022108484D+01 0.6067666015209520D-02 0.5541693247062111D+01 0.5003442742181595D-02 0.2501534578981606D+02 0.6195686630251158D-02 0.1201464446288306D+02 0.5031093636773381D-02 0.6524253082974616D+02 0.6348046228383885D-02 0.2617235093833228D+02 0.5105593812435561D-02 0.1733585387699787D+03 0.6525047468028670D-02 0.5751774621815093D+02 0.5194450859496128D-02 0.4760435514260730D+03 0.6968897297155060D-02 0.1276960094034739D+03 0.5292138293131431D-02 0.2871575858061491D+03 0.5429770443927676D-02 0.6667008790524359D+03 0.5876242926897542D-02 : The nodes and weights in new SOE for $Q = 100$, $2\gamma =2\pi^{-1}\arctan(Q^{-1}) \approx 0.0064$ to attain $|t^{-\beta} - \sum_{j=1}^{N_{\textup{exp}}} w_j e^{-s_j t}| \le \varepsilon t^{-\beta}$ for $\delta \le t \le T$, with the final instant $T \le 10$ and temporal gap $\delta \ge 5\times 10^{-3}$.[\[SOE_table_Q100\]]{#SOE_table_Q100 label="SOE_table_Q100"}
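The tables can be consumed directly in code. The sketch below (our own illustration) takes the $\varepsilon = 10^{-2}$, $N_{\textup{exp}} = 5$ block of the $Q = 10$ table, reading the first column as the exponents $s_j$ and the second as the weights $\omega_j$ (the monotone growth of the first column and the caption's error bound suggest this pairing), rewrites the Fortran `D` exponents as `e`, and checks $|t^{-\beta} - \sum_j \omega_j e^{-s_j t}| \le \varepsilon\, t^{-\beta}$ at a few points of $[5\times 10^{-3}, 10]$, taking $\beta = 2\gamma$, which is consistent with the caption's value $\approx 0.0635$.

```python
import math

# (s_j, w_j) pairs read from the Q = 10, eps = 1e-2, N_exp = 5 block
# (Fortran D-exponents rewritten as E-exponents).
soe = [
    (1.440797465700240e-11, 0.8415261929142592),
    (0.1954901542163180,    0.1636845055652096),
    (1.723999921378503,     0.1323100227004369),
    (12.29908868952335,     0.1547537593365945),
    (97.77983682050727,     0.1869902132381615),
]

beta = (2.0 / math.pi) * math.atan(1.0 / 10.0)   # 2*gamma for Q = 10, ~0.0635

def soe_approx(t):
    # sum-of-exponentials surrogate for t**(-beta)
    return sum(w * math.exp(-s * t) for s, w in soe)

for t in (5e-3, 5e-2, 0.5, 5.0, 10.0):
    rel = abs(t ** (-beta) - soe_approx(t)) / t ** (-beta)
    print(f"t = {t:7.3f}  relative error = {rel:.2e}")
```

All sampled points stay within the advertised $10^{-2}$ relative tolerance; the same recipe applies to the other $(Q, \varepsilon)$ blocks.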
arxiv_math
{ "id": "2309.05125", "title": "Compressing the memory variables in constant-Q viscoelastic wave\n propagation via an improved sum-of-exponentials approximation", "authors": "Xu Guo, Shidong Jiang, Yunfeng Xiong, Jiwei Zhang", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | A broad class of optimization problems can be cast in composite form, that is, considering the minimization of the composition of a lower semicontinuous function with a differentiable mapping. This paper discusses the versatile template of composite optimization without any convexity assumptions. First- and second-order optimality conditions are discussed, advancing the variational analysis of compositions. We highlight the difficulties that stem from the lack of convexity when dealing with necessary conditions in a Lagrangian framework and when considering error bounds. Building upon these characterizations, a local convergence analysis is delineated for a recently developed augmented Lagrangian method, deriving rates of convergence in the fully nonconvex setting. **Keywords** Augmented Lagrangian framework $\cdot$  Composite nonconvex optimization $\cdot$  Error bounds $\cdot$  Local convergence properties $\cdot$  Second-order variational analysis **MSC (2020)** [49J52](http://www.ams.org/mathscinet/msc/msc2020.html?t=49J52) $\cdot$  [49J53](http://www.ams.org/mathscinet/msc/msc2020.html?t=49J53) $\cdot$  [65K10](http://www.ams.org/mathscinet/msc/msc2020.html?t=65K10) $\cdot$  [90C30](http://www.ams.org/mathscinet/msc/msc2020.html?t=90C30) $\cdot$  [90C33](http://www.ams.org/mathscinet/msc/msc2020.html?t=90C33) author: - "Alberto De Marchi[^1]" - "Patrick Mehlitz[^2]" bibliography: - biblio.bib date: - - title: - "**Local properties and augmented Lagrangians in fully nonconvex composite optimization**" - Local properties and augmented Lagrangians in fully nonconvex composite optimization --- "*In fact the great watershed in optimization isn't between linearity and nonlinearity, but between convexity and nonconvexity*."\ --- R. T. 
Rockafellar [@rockafellar1993lagrange] # Introduction {#sec:introduction} In this paper, we are concerned with finite-dimensional optimization problems of the form $$\mathop{\mathrm{minimize}}_{x} {}\quad{} \varphi(x) :=f(x) + g( c(x) ) , \tag{P}\label{eq:P}$$ where $x\in\mathbb{R}^n$ is the decision variable, $f \colon \mathbb{R}^n \to \mathbb{R}$ and $c \colon \mathbb{R}^n \to \mathbb{R}^m$ are smooth mappings, and $g \colon \mathbb{R}^m \to \overline{\mathbb{R}}:=\mathbb{R}\cup \{\infty\}$ is merely proper and lower semicontinuous. The data functions $f$ and $g$ are allowed to be nonconvex mappings, the nonsmooth cost $g$ is not necessarily continuous nor real-valued, and the mapping $c$ is potentially nonlinear. Such a setting of nonsmooth nonconvex composite optimization was named *generalized* [@rockafellar2022convergence] (or *extended* [@rockafellar2000extended]) nonlinear programming by Rockafellar, and it is well known that the model problem [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} covers numerous applications from signal processing, sparse optimization, compressed sensing, machine learning, and disjunctive programming. Let us mention that the situation where $g$ is *convex* has been intensively studied in the literature, see e.g. [@rockafellar2000extended; @rockafellar2022augmented; @rockafellar2022convergence]. We do *not* make such an assumption and consider the far more challenging general setup. Our motivation behind the potential nonconvexity of $g$ is driven by applications from sparse or low-rank optimization. Although the (convex) $\ell_1$- and nuclear norm are known to promote sparse or low-rank behavior, solutions are often not sparse or low-rank *enough* in certain settings. In order to overcome this issue, one can rely on the $\ell_0$-quasi-norm or the matrix rank as "regularizers", which are discontinuous functions.
Intermediate choices for $g$ like the $\ell_q$-quasi-norm or the $q$-Schatten-quasi-norm for $q\in(0,1)$ are nonconvex but uniformly continuous, and have turned out to work well in numerical practice. Another driving force behind this work comes from disjunctive programming, in particular from the observation that constraints can be naturally formulated in a function-in-set format whereby sets are nonconvex yet simple (to project onto). The template [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} lends itself to capture this scenario, taking full advantage of $g$ as the indicator of a nonconvex set, comprising structures typical for, e.g., complementarity, switching, and vanishing constraints, see the classical monographs [@LuoPangRalph1996; @OutrataKocvaraZowe1998] and the more recent contributions [@Gfrerer2014; @mehlitz2020]. The possibility to include constraints in the model problem [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} becomes apparent with a direct reformulation. Introducing an auxiliary variable $z \in \mathbb{R}^m$, [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} can be equivalently rewritten in the form $$\mathop{\mathrm{minimize}}_{x,z} {}\quad{} f(x) + g(z) {}\qquad{} \mathop{\mathrm{subject~to}} {}\quad{} c(x) - z = 0 , \tag{P$_{\text{S}}$}\label{eq:ALMz:P}$$ which has a separable objective function, without nontrivial compositions, and explicitly includes some equality constraints. An analogous template has been studied in [@demarchi2023constrained], demonstrating its modeling versatility, mostly enabled by accepting potentially nonconvex $g$. A fundamental technique for solving constrained optimization problems is the augmented Lagrangian (AL) framework, which can effortlessly handle nonsmooth objectives, see e.g. 
[@bolte2018nonconvex; @chen2017augmented; @demarchi2023implicit; @demarchi2023constrained; @DhingraKhongJavanovic2019; @HangSarabi2021; @hang2023convergence; @KanzowKraemerMehlitzWachsmuthWerner2023; @rockafellar2022augmented; @rockafellar2022convergence; @sabach2019lagrangian] for some recent contributions, and [@bertsekas1996constrained; @birgin2014practical; @conn1991globally] for some fundamental literature which addresses the setting of standard nonlinear programming. Particularly, Rockafellar extended the approach in [@rockafellar2022augmented; @rockafellar2022convergence] to the broad setting of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} with $g$ convex, relying on some local duality to build a connection with the proximal point algorithm (PPA), see [@rockafellar1973dual; @rockafellar1976augmented]. Embracing the fully nonconvex setting, we are interested here in investigating AL methods for generalized nonlinear programming without any convexity assumption. Although the shifted-penalty approach underpinning the seminal "method of multipliers" still applies in our setting, it appears more difficult to leverage the perspective of PPA. Moreover, the nonconvexity of $g$ leads to a lack of regularity, as its proximal mapping is potentially set-valued. Here, we seek a better understanding of the variational properties of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and the convergence guarantees of AL methods for this class of problems. Building upon the global characterization in [@demarchi2023implicit], we will focus on second-order optimality conditions and local analysis, portraying a convergence theory for the fully nonconvex setting including rates-of-convergence results. The following blanket assumptions are considered throughout, without further mention. Technical definitions are given in [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}. **Assumption 1**. 
The following hold in [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}: 1. [\[ass:f\]]{#ass:f label="ass:f"} $f \colon \mathbb{R}^n \to \mathbb{R}$ and $c \colon \mathbb{R}^n \to \mathbb{R}^m$ are twice continuously differentiable; 2. [\[ass:g\]]{#ass:g label="ass:g"} $g \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ is proper, lower semicontinuous, and prox-bounded; 3. [\[ass:phi\]]{#ass:phi label="ass:phi"} $\inf\varphi\in\mathbb{R}$. The prox-boundedness assumption on $g$ in [Assumption 1](#ass:P){reference-type="ref" reference="ass:P"}[\[ass:g\]](#ass:g){reference-type="ref" reference="ass:g"} is included to ensure that, for some suitable parameters, the proximal mapping of $g$ is well-defined, and so is the overall numerical scheme considered in this work. However, such a stipulation is not necessary. As suggested in [@rockafellar2022convergence], a "trust region" can be specified to localize the proximal minimization and to support its attainment. While avoiding the artificial unboundedness potentially introduced by relaxing the composition constraint, this localization would affect some algorithmic and global aspects, but not the local behavior and properties we are interested in here. ## Related Work and Contributions {#sec:relatedWorkContributions} Since its inception [@hestenes1969multiplier; @powell1969method; @rockafellar1973dual], the AL framework has been extensively investigated and developed [@birgin2014practical; @conn1991globally], also in infinite dimensions [@kanzow2018augmented]. It was soon recognized that, in the convex setting, the method of multipliers can be associated with the PPA applied to a dual problem, see [@rockafellar1976augmented].
Following this pattern, local convexity enabled by some second-order optimality conditions made it possible to reconcile the AL scheme with an application of the PPA, thereby establishing convergence beyond the convex setting [@bertsekas1996constrained; @rockafellar2022augmented; @rockafellar2022convergence]. However, when it comes to *local* convergence properties, available results remain confined to the case with $g$ convex. #### Contributions With this work, we extend recent results by Rockafellar from [@rockafellar2022augmented; @rockafellar2022convergence], where composite optimization problems with convex $g$ are considered, to the more general setting. In particular, - we advance the variational analysis of compositions $g \circ c$ for twice continuously differentiable $c$ and lower semicontinuous, not necessarily convex $g$, which is a more general setting than that discussed in [@MohammadiMordukhovichSarabi2022; @rockafellar1998variational]. - We provide second-order sufficient optimality conditions (in terms of so-called second subderivatives and the directional proximal subdifferential) for constrained composite optimization problems. As a byproduct, we obtain a refined growth condition for unconstrained composite optimization problems in [Lemma 17](#lem:SOSC_implies_enhanced_growth_condition){reference-type="ref" reference="lem:SOSC_implies_enhanced_growth_condition"}, enhancing [@BenkoMehlitz2023 Thm 6.1]. - We investigate error bound conditions for a system of necessary optimality conditions associated with [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. - We study the implicit AL method from [@demarchi2023implicit] and characterize its local convergence behavior. Particularly, under suitable conditions, we show convergence of the full sequence with linear or superlinear rate in [Theorem 39](#thm:rate_convergence){reference-type="ref" reference="thm:rate_convergence"}. To proceed, we make use of problem-tailored second-order conditions.
Moreover, the Lagrange multiplier has to be locally unique (in the potentially nonconvex set of Lagrange multipliers), see [Lemma 23](#lem:uniqueness_of_multiplier_CQ){reference-type="ref" reference="lem:uniqueness_of_multiplier_CQ"}. Sparsity-promoting terms and nonconvex constraint sets have turned out to work well in the AL framework---at least from a global perspective, e.g. in [@demarchi2023constrained; @JiaKanzowMehlitzWachsmuth2023]. We are also interested in local properties now, with a focus on the numerical method proposed in [@demarchi2023implicit], which favorably avoids the use of slack variables, see [@benko2021implicit] for a recent study. Local fast convergence of an AL method in composite optimization has been considered from the viewpoint of variational analysis in the recent paper [@HangSarabi2021] in the context where $g$ is a continuous, piecewise quadratic, convex function. This allows for a unified analysis as the standard second-order sufficient condition (SOSC) already gives the necessary error bound condition (due to the result from [Lemma 7](#lem:subderivative_vs_graphical_derivative){reference-type="ref" reference="lem:subderivative_vs_graphical_derivative"} and the analysis in [@MohammadiMordukhovichSarabi2022; @rockafellar1998variational]). A recent analysis of the local convergence of AL methods for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} is that in [@hang2023convergence], restricted to convex $g$, which considers SOSCs and establishes Q-linear convergence of the primal-dual sequence, without assuming any constraint qualification (CQ). Our analysis also took inspiration from [@BoergensKanzowSteck2019; @KanzowSteck2018; @steck2018dissertation] where, among other things, the local analysis of AL methods for (smooth) optimization problems with geometric constraints of type $c(x)\in K$ for some closed, convex set $K$ is considered in a Banach space setting. 
We, at least roughly, follow the arguments in [@steck2018dissertation] and (apart from the fact that we are working in a fully finite-dimensional setting) generalize the findings therein to nonsmooth composite problems. Let us point out that desirable local convergence properties of AL methods in standard nonlinear programming can be guaranteed with no more than the SOSC, i.e., no additional CQ is necessary, see [@FernandezSolodov2012], and the SOSC can even be replaced by a weaker noncriticality assumption on the involved multipliers, as shown later in [@IzmailovKurennoySolodov2015]. One reason for this behavior is the inherent (convex) polyhedrality of the involved constraint set, see [Example 13](#ex:NLP){reference-type="ref" reference="ex:NLP"}, which also gives (convex) polyhedrality of the associated set of Lagrange multipliers. The fact that polyhedrality comes along with certain stability properties (in the sense of error bounds) is well known from the seminal papers [@Robinson1981; @WalkupWets1969]. In the general, nonpolyhedral situation, such a result is not likely to hold, see [@KanzowSteck2018], and an additional CQ might be necessary. For instance, this has been illustrated in the papers [@HangMordukhovichSarabi2022; @KanzowSteck2019] where AL methods in second-order cone programming have been investigated. In order to obtain convergence rates, the authors not only postulate the validity of a second-order condition but also make use of an additional assumption. In [@KanzowSteck2019], the authors exploit the strict Robinson condition (which guarantees uniqueness of the underlying Lagrange multiplier) while in [@HangMordukhovichSarabi2022], a certain multiplier mapping is assumed to be calm and, at the point of interest, uniqueness of the Lagrange multiplier is needed as well. #### Roadmap The remainder of the paper is organized as follows.
In [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}, we comment on the exploited notation and provide some preliminary results from variational analysis and generalized differentiation. [3](#sec:constrained_structured_problems){reference-type="ref" reference="sec:constrained_structured_problems"} is dedicated to the investigation of first-order necessary and second-order sufficient optimality conditions in *constrained* nonconvex composite optimization, a seemingly more general problem class than the one represented by [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. These results are applied to [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} in [4](#sec:compositeOptimization){reference-type="ref" reference="sec:compositeOptimization"}. Furthermore, we comment on a reasonable choice for an AL function and investigate error bounds for a system of necessary optimality conditions associated with [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. In [5](#sec:ALM){reference-type="ref" reference="sec:ALM"}, we first introduce the AL method of our interest before presenting some global convergence results which complement the analysis provided in [@demarchi2023implicit]. Furthermore, local convergence results are presented. We start by clarifying the existence and convergence of minimizers for the associated AL subproblems before focusing on the derivation of rates-of-convergence results. [6](#sec:examples){reference-type="ref" reference="sec:examples"} is devoted to illustrating our findings by means of two exemplary settings: sparsity-promoting nonlinear optimization and complementarity-constrained optimization. The paper closes with some concluding remarks in [7](#sec:conclusions){reference-type="ref" reference="sec:conclusions"}. 
# Preliminaries {#sec:preliminaries} This section provides notation, preliminaries, and known facts based on [@mordukhovich2018; @rockafellar1998variational], with some additional basic results. With $\mathbb{R}$ and $\overline{\mathbb{R}}:=\mathbb{R}\cup \{\infty\}$ we indicate the real and extended-real line, respectively. The set of natural numbers is denoted by $\mathbb{N}$. We equip the appearing Euclidean spaces with the associated Euclidean norm $\Vert \cdot \Vert$. In product spaces, we make use of the associated maximum norm. With $\mathop{\mathrm{\mathbb{B}}}_r(x)$ we indicate the closed ball centered at $x\in\mathbb{R}^n$ with radius $r>0$. Given a set $A\subseteq\mathbb{R}^n$, we use $x+A:=A+x:=\{a+x\,|\,a\in A\}$ for brevity. The notation $\{a^k\}_{k\in K}$ represents a sequence indexed by elements of the set $K\subseteq\mathbb{N}$, and we write $\{a^k\}_{k\in K} \subseteq A$ to indicate that $a^k \in A$ for all indices $k\in K$. Whenever clear from context, we may simply write $\{a^k\}$ to indicate $\{a^k\}_{k\in\mathbb{N}}$. Notation $a^k\to_K x$ ($a^k\to x$) is used to express convergence of $\{a^k\}_{k\in K}$ (of $\{a^k\}$) to $x$. If $n = 1$, we use $\{a_k\}_{k\in K}$ and $\{a_k\}$ to emphasize that we are dealing with sequences of scalars. We will adopt the *little-o* notation for asymptotics: given sequences $\{a_k\}$ and $\{\varepsilon_k\}\subset(0,\infty)$, we write $a_k \in \mathpzc{o}(\varepsilon_k)$ to indicate that $\lim_{k\to\infty} {|a_k|}/{\varepsilon_k} = 0$. A function $f \colon \mathbb{R}^n\times \mathbb{R}^m \to \overline{\mathbb{R}}$ with values $f(x,z)$ is *level-bounded in $x$ locally uniformly in $z$* if for each $\alpha \in \mathbb{R}$ and $\bar{z} \in \mathbb{R}^m$ there exists $\varepsilon > 0$ such that the set $\{ (x,z)\in\mathbb{R}^n\times\mathbb{R}^m\,\vert\, f(x,z) \leq \alpha, \Vert z - \bar{z} \Vert \leq \varepsilon \}$ is bounded. 
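To make the notion of uniform level-boundedness concrete, the following Python snippet is a small illustration of ours (not taken from the paper): for the hypothetical choice $P(x,z):=\tfrac12(x-z)^2$, any $x$ with $P(x,z)\leq\alpha$ and $\Vert z-\bar z\Vert\leq\varepsilon$ satisfies $|x|\leq|\bar z|+\varepsilon+\sqrt{2\alpha}$, a bound independent of $(x,z)$, which we verify on a sample grid.

```python
import numpy as np

# Hypothetical example: P(x, z) = 0.5 * (x - z)^2 is level-bounded in x
# locally uniformly in z, since P(x, z) <= alpha and |z - zbar| <= eps imply
# |x| <= |zbar| + eps + sqrt(2 * alpha), a bound independent of (x, z).
P = lambda x, z: 0.5 * (x - z) ** 2

zbar, eps, alpha = 1.0, 0.25, 2.0
bound = abs(zbar) + eps + np.sqrt(2.0 * alpha)
for x in np.linspace(-10.0, 10.0, 2001):
    for z in np.linspace(zbar - eps, zbar + eps, 11):
        if P(x, z) <= alpha:
            assert abs(x) <= bound + 1e-12
```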
The *effective domain* of a function $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ is denoted by $\mathop{\mathrm{dom}}h :=\{ z\in\mathbb{R}^m\,\vert\, h(z) < \infty \}$. The set $\mathop{\mathrm{epi}}h:=\{(z,\alpha)\in\mathbb{R}^m\times\mathbb{R}\,|\,\alpha\geq h(z)\}$ is called the *epigraph* of $h$. We say that $h$ is *proper* if $\mathop{\mathrm{dom}}h \neq \emptyset$ and *lower semicontinuous* if $h(\bar{z}) \leq \liminf_{z\to\bar{z}} h(z)$ for all $\bar{z} \in \mathbb{R}^m$. Note that $h$ is lower semicontinuous if and only if $\mathop{\mathrm{epi}}h$ is closed. Given a point $\bar{z}\in\mathop{\mathrm{dom}}h$, we may avoid assuming that $h$ is continuous and instead appeal to $h$-*attentive* convergence of a sequence $\{ z^k \}$, denoted as $z^k \overset{h}{\to} \bar{z}$ and given by $z^k \to \bar{z}$ with $h(z^k) \to h(\bar{z})$. The *conjugate function* $h^\ast \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ associated with (proper and lower semicontinuous) $h$ is defined by $$h^\ast(y) := \sup\limits_z\{ \langle y, z \rangle - h(z) \} .$$ We note that $h^\ast$ is a convex function by definition. For a proper and lower semicontinuous function $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$, a point $\bar z\in \mathbb{R}^m$ is called *feasible* if $\bar z \in \mathop{\mathrm{dom}}h$. A feasible point $\bar z \in \mathbb{R}^m$ is said to be *locally optimal*, or called a *local minimizer*, if there exists $r > 0$ such that $h(\bar z) \leq h(z)$ holds for all feasible $z \in \mathop{\mathrm{\mathbb{B}}}_r(\bar z)$. Additionally, if this inequality holds for all feasible $z \in \mathbb{R}^m$, then $\bar z$ is said to be *(globally) optimal*. We use the notation $\Gamma \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ to indicate a point-to-set function $\Gamma \colon \mathbb{R}^n \to 2^{\mathbb{R}^m}$. The set $\mathop{\mathrm{gph}}\Gamma:=\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^m\,|\,y\in\Gamma(x)\}$ is called the *graph* of $\Gamma$. 
The set-valued mapping $\Gamma^{-1} \colon \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ given by $\mathop{\mathrm{gph}}\Gamma^{-1}:=\{(y,x)\in\mathbb{R}^m\times\mathbb{R}^n\,|\,(x,y)\in\mathop{\mathrm{gph}}\Gamma\}$ is referred to as the *inverse* of $\Gamma$. The set $\ker\Gamma:=\{x\in\mathbb{R}^n\,|\,0\in\Gamma(x)\}$ is the *kernel* of $\Gamma$. Recall that $\Gamma$ is said to be a *polyhedral* mapping if $\mathop{\mathrm{gph}}\Gamma$ can be represented as the union of finitely many convex polyhedral sets. ## Proximal mappings Let $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ be proper and lower semicontinuous. Given a parameter value $\mu>0$, the *proximal* mapping $\mathop{\mathrm{prox}}_{\mu h} \colon \mathbb{R}^m \rightrightarrows \mathbb{R}^m$ is defined by $$\mathop{\mathrm{prox}}_{\mu h}(z) {}:={} \mathop{\mathrm{\arg\min}}_{z^\prime} \left\{ h(z^\prime) + \frac{1}{2\mu}\Vert z^\prime-z \Vert^2 \right\} . \label{eq:proxmapping}$$ We say that $h$ is *prox-bounded* if it is proper and $h + \Vert \cdot \Vert^2 / (2\mu)$ is bounded below on $\mathbb{R}^m$ for some $\mu > 0$, see [@rockafellar1998variational Def. 1.23]. The supremum of all such $\mu$ is the threshold $\mu_h$ of prox-boundedness for $h$. In particular, if $h$ is bounded below by an affine function, then $\mu_h = \infty$. For any $\mu \in (0,\mu_h)$, the proximal mapping $\mathop{\mathrm{prox}}_{\mu h}$ is locally bounded as well as nonempty- and compact-valued, see [@rockafellar1998variational Thm 1.25]. 
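The proximal mapping in [\[eq:proxmapping\]](#eq:proxmapping){reference-type="eqref" reference="eq:proxmapping"} can be made concrete with a small numerical sketch of ours (not part of the paper): for $h=|\cdot|$, the proximal mapping has the well-known closed form of soft-thresholding, which we compare against a brute-force minimization of the defining objective over a fine grid.

```python
import numpy as np

# h(z) = |z|: its proximal mapping has the closed form of soft-thresholding.
def prox_abs(z, mu):
    return np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)

# Brute-force approximation of argmin_{z'} { h(z') + (1/(2*mu)) * (z' - z)^2 }.
def prox_numeric(h, z, mu, grid):
    return grid[np.argmin(h(grid) + (grid - z) ** 2 / (2.0 * mu))]

grid = np.linspace(-5.0, 5.0, 200001)
for z in [-2.3, -0.1, 0.7, 3.0]:
    assert abs(prox_abs(z, 0.5) - prox_numeric(np.abs, z, 0.5, grid)) < 1e-4
```

Since $|\cdot|$ is bounded below, it is prox-bounded with threshold $\mu_h=\infty$, so the comparison is valid for every $\mu>0$.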
The value function of the minimization problem defining the proximal mapping is the *Moreau envelope* with parameter $\mu \in (0,\mu_h)$, denoted $h^\mu \colon \mathbb{R}^m \to \mathbb{R}$, namely $$h^\mu(z) :=\inf_{z^\prime} \left\{ h(z^\prime) + \frac{1}{2\mu}\Vert z^\prime-z \Vert^2 \right\} .$$ The *projection* mapping $\mathop{\mathrm{\Pi}}_\Omega \colon \mathbb{R}^m \rightrightarrows \mathbb{R}^m$ and the *distance* function $\mathop{\mathrm{dist}}_\Omega \colon \mathbb{R}^m \to \mathbb{R}$ of a nonempty set $\Omega\subseteq\mathbb{R}^m$ are defined by $$\mathop{\mathrm{\Pi}}_\Omega(z) {}:={} \mathop{\mathrm{\arg\min}}_{z^\prime \in \Omega} \Vert z^\prime - z \Vert, \quad \mathop{\mathrm{dist}}(z, \Omega) {}:={} \inf_{z^\prime \in \Omega} \Vert z^\prime - z \Vert.$$ The former is a set-valued mapping whenever $\Omega$ is nonconvex, whereas the latter is always single-valued. The following technical lemmas are used later on. **Lemma 2**. *Let $h \colon \mathbb{R}^m \to \overline\mathbb{R}$ be proper, lower semicontinuous, and prox-bounded. Let $\mathop{\mathrm{dom}}h$ be closed and fix $\bar z\in\mathbb{R}^m$. Then $$\lim\limits_{z^\prime\to\bar z,\,\mu\downarrow 0}\inf\limits_{z\in\mathop{\mathrm{dom}}h} \left\{\mu h(z) + \frac12\Vert z-z^\prime \Vert^2\right\} = \frac12\mathop{\mathrm{dist}}^2(\bar z,\mathop{\mathrm{dom}}h).$$* *Proof.* As $\mathop{\mathrm{dom}}h$ is nonempty and closed, we find $\tilde z\in\mathop{\mathrm{dom}}h$ such that $\mathop{\mathrm{dist}}(\bar z,\mathop{\mathrm{dom}}h)=\Vert \tilde z-\bar z \Vert$. 
For every $z^\prime\in\mathbb{R}^m$ and $\mu>0$, this gives $$\inf\limits_{z\in\mathop{\mathrm{dom}}h}\left\{\mu h(z)+\frac12\Vert z-z^\prime \Vert^2\right\} \leq \mu h(\tilde z)+\frac12\Vert \tilde z-z^\prime \Vert^2,$$ and taking the upper limit, we find $$\label{eq:upper_semicontinuity_relation} \limsup\limits_{z^\prime\to\bar z,\,\mu\downarrow 0} \inf\limits_{z\in\mathop{\mathrm{dom}}h}\left\{\mu h(z)+\frac12\Vert z-z^\prime \Vert^2\right\} \leq \frac12\Vert \tilde z-\bar z \Vert^2 = \frac12\mathop{\mathrm{dist}}^2(\bar z,\mathop{\mathrm{dom}}h).$$ Next, we define $\psi \colon \mathbb{R}^m\times[0,\infty) \to \mathbb{R}\cup\{-\infty\}$ by means of $$\forall z^\prime\in\mathbb{R}^m,\,\forall \mu\in[0,\infty)\colon\quad \psi(z^\prime,\mu) := \inf\limits_{z\in\mathop{\mathrm{dom}}h}\left\{\mu h(z)+\frac12\Vert z-z^\prime \Vert^2\right\}.$$ As $h$ is prox-bounded, $\psi$ takes finite values for all sufficiently small $\mu$, and these finite values are attained, see [@rockafellar1998variational Thm 1.25]. Suppose that there are sequences $\{z^k\},\{\bar z^k\}\subseteq\mathbb{R}^m$ and $\{\mu_k\}\subseteq[0,\infty)$ with $z^k\to\bar z$ and $\mu_k\downarrow 0$, $\psi(z^k,\mu_k)=\mu_k h(\bar z^k)+\tfrac12\Vert \bar z^k-z^k \Vert^2$, and $\Vert \bar z^k \Vert\to\infty$. On the one hand, boundedness of $\{z^k\}$ and $\{\mu_k\}$ gives the existence of a constant $C>0$ such that $$\label{eq:optimal_value_bounded_from_above} \forall k\in\mathbb{N}\colon\quad \psi(z^k,\mu_k)\leq \mu_k h(\tilde z)+\frac12\Vert \tilde z-z^k \Vert^2\leq C.$$ On the other hand, the prox-boundedness of $h$ implies that $h$ is minorized by a quadratic function, see [@rockafellar1998variational Ex. 1.24]. 
Hence, there are constants $c_1,c_2,c_3>0$ such that, for sufficiently large $k\in\mathbb{N}$, $$\begin{aligned} \psi(z^k,\mu_k) &\geq -\mu_kc_1\Vert \bar z^k \Vert^2-\mu_kc_2\Vert \bar z^k \Vert-\mu_k c_3+\frac12\Vert \bar z^k-z^k \Vert^2\\ &\geq \left(\frac12-\mu_kc_1\right)\Vert \bar z^k \Vert^2-(\mu_kc_2+\Vert z^k \Vert)\Vert \bar z^k \Vert-\mu_kc_3. \end{aligned}$$ Boundedness of $\{z^k\}$ and $\mu_k\downarrow 0$ thus yield $\psi(z^k,\mu_k)\to\infty$ since $\Vert \bar z^k \Vert\to\infty$. This, however, is a contradiction to [\[eq:optimal_value_bounded_from_above\]](#eq:optimal_value_bounded_from_above){reference-type="eqref" reference="eq:optimal_value_bounded_from_above"}. Hence, we can choose a compact set $\mathcal C\subseteq\mathbb{R}^m$ such that, for each $\mu\geq 0$ small enough and each $z^\prime$ sufficiently close to $\bar z$, we have $$\psi(z^\prime,\mu)=\inf\limits_{z\in\mathcal C\cap\mathop{\mathrm{dom}}h} \left\{\mu h(z)+\frac12\Vert z-z^\prime \Vert^2\right\}.$$ Thus, due to the lower semicontinuity of $h$, we can apply [@BankGuddatKlatteKummerTammer1983 Thm 4.2.1(1)] in order to obtain $$\liminf\limits_{z^\prime\to\bar z,\,\mu\downarrow 0}\inf\limits_{z\in\mathop{\mathrm{dom}}h}\left\{\mu h(z)+\frac12\Vert z-z^\prime \Vert^2\right\} \geq \psi(\bar z,0) = \frac12\mathop{\mathrm{dist}}^2(\bar z,\mathop{\mathrm{dom}}h).$$ Together with [\[eq:upper_semicontinuity_relation\]](#eq:upper_semicontinuity_relation){reference-type="eqref" reference="eq:upper_semicontinuity_relation"}, the assertion follows. ◻ **Lemma 3**. *Let $h \colon \mathbb{R}^m \to \overline\mathbb{R}$ be proper, lower semicontinuous, and prox-bounded. 
Fix $\bar z\in\mathbb{R}^m$ as well as sequences $\{\mu_k\}\subseteq(0,\infty)$, $\{z^k\}\subseteq\mathop{\mathrm{dom}}h$, and $\{\tilde z^k\}\subseteq\mathbb{R}^m$ such that $\mu_k\downarrow 0$, $\{\tilde z^k\}$ is bounded, and $$\label{eq:some_upper_limit} \limsup\limits_{k\to\infty}\left(\mu_k h(z^k)+\frac12\Vert z^k-\tilde z^k \Vert^2\right) \leq 0.$$ Then $\mu_kh(z^k)\to 0$ and $\Vert z^k-\tilde z^k \Vert\to 0$.* *Proof.* As in the proof of [Lemma 2](#lem:dist_to_domain){reference-type="ref" reference="lem:dist_to_domain"}, we use [@rockafellar1998variational Ex. 1.24] to find constants $c_1,c_2,c_3>0$ such that $h(z)\geq -c_1\Vert z \Vert^2-c_2\Vert z \Vert-c_3$ holds for all $z\in\mathbb{R}^m$. Thus, we have $$\limsup\limits_{k\to\infty} \left( \left(\frac12-\mu_kc_1\right)\Vert z^k \Vert^2-(\mu_kc_2+\Vert \tilde z^k \Vert)\Vert z^k \Vert-\mu_kc_3 \right) \leq 0$$ from [\[eq:some_upper_limit\]](#eq:some_upper_limit){reference-type="eqref" reference="eq:some_upper_limit"}. By $\mu_k\downarrow 0$ and boundedness of $\{\tilde z^k\}$, this implies that $\{z^k\}$ is bounded as well. Hence, $\{h(z^k)\}$ is bounded below, which gives $\liminf_{k\to\infty}\mu_k h(z^k)\geq 0$. Now, [\[eq:some_upper_limit\]](#eq:some_upper_limit){reference-type="eqref" reference="eq:some_upper_limit"} yields the claim. ◻ ## Variational analysis and generalized differentiation {#sec:VA} ### Tangent and normal cones {#tangent-and-normal-cones .unnumbered} We start by recalling the definitions of several cones which are well known in variational analysis. To this end, we fix some closed set $\Omega\subseteq\mathbb{R}^m$ and $\bar z\in\Omega$. 
We refer to $$\begin{aligned} T_\Omega(\bar z) &:= \left\{ v\in\mathbb{R}^m\,\middle|\, \begin{aligned} &\exists\{t_k\}\subset(0,\infty),\,\exists\{v^k\}\subseteq\mathbb{R}^m\colon\\ &\quad t_k\downarrow 0,\,v^k\to v,\,\bar z+t_kv^k\in\Omega\,\forall k\in\mathbb{N} \end{aligned} \right\}, \\ T^\textup{Cl}_\Omega(\bar z) &:= \left\{ v\in\mathbb{R}^m\,\middle|\, \begin{aligned} &\forall\{z^k\}\subseteq\Omega\text{ such that }z^k\to\bar z,\, \forall\{t_k\}\subset(0,\infty)\text{ such that }t_k\downarrow 0\\ &\quad\exists\{v^k\}\subseteq\mathbb{R}^m\colon\, v^k\to v,\,z^k+t_kv^k\in\Omega\,\forall k\in\mathbb{N} \end{aligned} \right\}\end{aligned}$$ as the *tangent* and *Clarke tangent cone* to $\Omega$ at $\bar z$. Both are closed, and $T^\textup{Cl}_\Omega(\bar z)$ is, additionally, convex. Furthermore, we make use of $$\begin{aligned} \widehat N_\Omega(\bar z) &:= \{ y\in\mathbb{R}^m\,|\, \forall z\in\Omega\colon\,\langle y, z-\bar z \rangle\leq\mathpzc{o}(\Vert z-\bar z \Vert) \},\\ N_\Omega(\bar z) &:= \{ y\in\mathbb{R}^m\,|\, \begin{aligned} \exists\{z^k\}\subseteq\Omega,\,\exists\{y^k\}\subseteq\mathbb{R}^m\colon\, z^k\to\bar z,\,y^k\to y,\,y^k\in\widehat N_\Omega(z^k)\,\forall k\in\mathbb{N} \end{aligned} \}\end{aligned}$$ which are called *regular* (or Fréchet) and *limiting* (or Mordukhovich) *normal cone* to $\Omega$ at $\bar z$. Both of these cones are closed, and $\widehat N_\Omega(\bar z)$ is, additionally, convex. 
For a convex set $\Omega$, we have $$T_\Omega(\bar z)=T^\textup{Cl}_\Omega(\bar z), \quad \widehat N_\Omega(\bar z)=N_\Omega(\bar z) = \{y\in\mathbb{R}^m\,|\,\forall z\in\Omega\colon\,\langle y, z-\bar z \rangle\leq 0\}.$$ We would like to point out the polar relations $$\label{eq:polarization_rules} \widehat N_\Omega(\bar z) = T_\Omega(\bar z)^\circ, \quad T^\textup{Cl}_\Omega(\bar z) = N_\Omega(\bar z)^\circ.$$ Here, we made use of $A^\circ:=\{y\in\mathbb{R}^m\,|\,\forall z\in A\colon\,\langle y, z \rangle\leq 0\}$, the *polar cone* of $A\subseteq\mathbb{R}^m$. For $\bar z\in\Omega$ and $v\in\mathbb{R}^m$, we refer to $$N_\Omega(\bar z,v) := \left\{ y\in\mathbb{R}^m\,\middle|\, \begin{aligned} &\exists\{t_k\}\subseteq(0,\infty),\,\exists\{v^k\}\subseteq\mathbb{R}^m,\,\exists\{y^k\}\subseteq\mathbb{R}^m\colon\,\\ &\quad t_k\downarrow 0,\,v^k\to v,\,y^k\to y,\,y^k\in\widehat N_\Omega(\bar z+t_kv^k)\,\forall k\in\mathbb{N} \end{aligned} \right\}$$ as the *limiting normal cone* to $\Omega$ at $\bar z$ *in direction* $v$, see [@BenkoGfrererOutrata2019] for a recent study of this variational object. For $v\notin T_\Omega(\bar z)$, we have $N_\Omega(\bar z,v)=\emptyset$. By definition, $N_\Omega(\bar z,v)\subseteq N_\Omega(\bar z)$ is valid. Whenever $\Omega$ is convex, $N_\Omega(\bar z,v)=N_\Omega(\bar z)\cap\{y\in\mathbb{R}^m\,|\,\langle y, v \rangle=0\}$ holds true. 
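The convex-case normal cone formula above can be checked numerically. The following sketch is our own illustration (the box $\Omega=[0,1]^2$ and the sampling scheme are arbitrary choices): for convex $\Omega$, the projection residual $z-\mathop{\mathrm{\Pi}}_\Omega(z)$ always lies in $N_\Omega(\mathop{\mathrm{\Pi}}_\Omega(z))$, i.e., it makes a nonpositive inner product with $w-\mathop{\mathrm{\Pi}}_\Omega(z)$ for every $w\in\Omega$.

```python
import numpy as np

# Omega = [0, 1]^2 is convex with componentwise projection (clipping).
def proj_box(z):
    return np.clip(z, 0.0, 1.0)

# For convex Omega, the residual y = z - proj(z) lies in the normal cone at
# p = proj(z), i.e., <y, w - p> <= 0 for every w in Omega.
rng = np.random.default_rng(0)
for _ in range(1000):
    z = rng.uniform(-3.0, 3.0, size=2)
    p = proj_box(z)
    y = z - p
    w = rng.uniform(0.0, 1.0, size=2)  # a sample point of Omega
    assert y @ (w - p) <= 1e-12
```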
### Subdifferentials and stationarity {#subdifferentials-and-stationarity .unnumbered} For a lower semicontinuous function $h \colon \mathbb{R}^m \to \overline\mathbb{R}$ and $\bar z\in\mathop{\mathrm{dom}}h$, $$\begin{aligned} \widehat{\partial} h(\bar{z}) &:= \{ y\in\mathbb{R}^m\,|\, (y,-1)\in\widehat N_{\mathop{\mathrm{epi}}h}(\bar z,h(\bar z)) \},\\ \partial h(\bar z) &:= \{ y\in\mathbb{R}^m\,|\, (y,-1)\in N_{\mathop{\mathrm{epi}}h}(\bar z,h(\bar z)) \},\\ \partial^\infty h(\bar z) &:= \{ y\in\mathbb{R}^m\,|\, (y,0)\in N_{\mathop{\mathrm{epi}}h}(\bar z,h(\bar z)) \}\end{aligned}$$ are referred to as the *regular* (or Fréchet), *limiting* (or Mordukhovich), and *singular* (or horizon) *subdifferential* of $h$ at $\bar z$. Whenever $h$ is Lipschitz continuous around $\bar z$, then $\partial^\infty h(\bar z)=\{0\}$. Let us mention that, among others, the subdifferential operators $\widehat\partial$, $\partial$, and $\partial^\infty$ are compatible with respect to smooth additions. Indeed, for each continuously differentiable function $h_0 \colon \mathbb{R}^m \to \mathbb{R}$, it holds $$\widehat\partial (h_0+h)(\bar z) = \nabla h_0(\bar z)+\widehat\partial h(\bar z), \quad \partial(h_0+h)(\bar z) = \nabla h_0(\bar z)+\partial h(\bar z), \quad \partial^\infty(h_0+h)(\bar z) = \partial^\infty h(\bar z).$$ Whenever $h:=\mathop{\mathrm{\delta}}_\Omega$, where $\mathop{\mathrm{\delta}}_\Omega$ is the *indicator function* of $\Omega$, vanishing on $\Omega$ and being $\infty$ otherwise, we have $\mathop{\mathrm{dom}}\mathop{\mathrm{\delta}}_\Omega=\Omega$, and $$\widehat\partial\mathop{\mathrm{\delta}}_\Omega(\bar z) = \widehat N_\Omega(\bar z), \quad \partial\mathop{\mathrm{\delta}}_\Omega(\bar z) = \partial^\infty\mathop{\mathrm{\delta}}_\Omega(\bar z) = N_\Omega(\bar z)$$ for $\bar z\in\Omega$. The function $\mathop{\mathrm{\delta}}_\Omega$ is proper and lower semicontinuous. 
The proximal mapping of $\mathop{\mathrm{\delta}}_\Omega$ is the projection $\mathop{\mathrm{\Pi}}_\Omega$, so that $\mathop{\mathrm{\Pi}}_\Omega$ is locally bounded. For $\bar z\in\mathop{\mathrm{dom}}h$ and $(v,\beta)\in\mathbb{R}^m\times\mathbb{R}$, we make use of $$\partial h(\bar z,(v,\beta)) := \{y\in\mathbb{R}^m\,|\,(y,-1)\in N_{\mathop{\mathrm{epi}}h}((\bar z,h(\bar z)),(v,\beta))\},$$ the *directional limiting subdifferential* of $h$ at $\bar z$ in direction $(v,\beta)$, see [@BenkoGfrererOutrata2019]. Clearly, if $(v,\beta)\notin T_{\mathop{\mathrm{epi}}h}(\bar z,h(\bar z))$, then $\partial h(\bar z,(v,\beta))=\emptyset$. Furthermore, we always have $\partial h(\bar z,(v,\beta))\subseteq\partial h(\bar z)$ by construction. Whenever $h:=\mathop{\mathrm{\delta}}_\Omega$ and $\bar z\in\Omega$, then $\mathop{\mathrm{\delta}}_\Omega(\bar z)=0$ and $T_{\mathop{\mathrm{epi}}\mathop{\mathrm{\delta}}_\Omega}(\bar z,0)=T_\Omega(\bar z)\times[0,\infty)$, and [@YeZhou2018 Prop. 3.2] yields $N_{\mathop{\mathrm{epi}}\mathop{\mathrm{\delta}}_\Omega}((\bar z,0),(v,\beta))=N_\Omega(\bar z,v)\times N_{[0,\infty)}(\beta)$ giving $$\partial \mathop{\mathrm{\delta}}_\Omega(\bar z,(v,\beta)) = \begin{cases} N_\Omega(\bar z,v) & v\in T_\Omega(\bar z),\,\beta=0,\\ \emptyset & \text{otherwise.} \end{cases}$$ A point $\bar z\in\mathop{\mathrm{dom}}h$ is said to be *M-stationary* whenever $0 \in \partial h(\bar z)$ is valid, and this constitutes a necessary condition for the local minimality of $\bar z$ for $h$, also known as *Fermat's rule*, see [@rockafellar1998variational Thm 10.1]. It should be noted that $0\in\widehat\partial h(\bar z)$ serves as a (potentially sharper) necessary optimality condition as well. Given some tolerance $\varepsilon \geq 0$, an approximate M-stationarity concept for the minimization of $h$ requires $\mathop{\mathrm{dist}}( 0, \partial h(\bar z) ) \leq \varepsilon$; we refer to this as *$\varepsilon$-M-stationarity*. 
By closedness of $\partial h(\bar z)$, $\varepsilon$-M-stationarity with $\varepsilon = 0$ recovers the notion of M-stationarity. Below, we introduce a stationarity concept that will be used later to qualify the iterates of our implicit AL algorithm, see [@demarchi2023implicit Sec. 3.2]. Therefore, let us consider a parametric optimization problem with an objective $p \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ and an oracle $\mathbf{O} \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ given by $$p(x) :=\inf_{z\in\mathbb{R}^m} P(x,z), \qquad \mathbf{O}(x) :=\mathop{\mathrm{\arg\min}}_{z\in\mathbb{R}^m} P(x,z) \label{eq:fbo}$$ for a proper, lower semicontinuous function $P \colon \mathbb{R}^n\times \mathbb{R}^m \to \overline{\mathbb{R}}$. Recalling that the notion of uniform level-boundedness corresponds to a parametric extension of level-boundedness, see [@rockafellar1998variational Def. 1.16], we suppose that $P$ is level-bounded in $z$ (second argument) locally uniformly in $x$ (first argument). Then, from [@rockafellar1998variational Thm 10.13] we have for every $\bar{x}\in\mathop{\mathrm{dom}}p$ the inclusion $$\partial p(\bar{x}) {}\subseteq{} \Upsilon(\bar{x}) {}:={} \bigcup_{\bar{z} \in \mathbf{O}(\bar{x})} \{ \xi\in\mathbb{R}^n\,\vert\, (\xi,0) \in \partial P(\bar{x},\bar{z}) \}. \label{eq:Upsilon}$$ In the setting [\[eq:fbo\]](#eq:fbo){reference-type="eqref" reference="eq:fbo"}, because of the parametric nature of $p$, the subdifferential mapping $\partial p \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is not a simple object in general, making M-stationarity difficult to check. Therefore, for the minimization of $p$, one can resort to the concept of *$\Upsilon$-stationarity*, coined in [@demarchi2023implicit Def. 3.1]. **Definition 4** ($\Upsilon$-stationarity). Let $\varepsilon \geq 0$ be fixed and let $P \colon \mathbb{R}^n\times \mathbb{R}^m \to \overline{\mathbb{R}}$ be chosen as specified above. 
Define $p \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ and $\Upsilon \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ as in [\[eq:fbo\]](#eq:fbo){reference-type="eqref" reference="eq:fbo"} and [\[eq:Upsilon\]](#eq:Upsilon){reference-type="eqref" reference="eq:Upsilon"}, respectively. Then, relative to the minimization of $p$, a point $\bar x \in \mathop{\mathrm{dom}}p$ is called *$\varepsilon$-$\Upsilon$-stationary* if $\mathop{\mathrm{dist}}(0, \Upsilon(\bar x)) \leq \varepsilon$. In the case $\varepsilon = 0$, such a point $\bar x$ is said to be *$\Upsilon$-stationary*. Notice that the inclusion $\bar x \in \mathop{\mathrm{dom}}p$ is implicitly required to have the set $\Upsilon(\bar x)$ nonempty. Furthermore, in the exact case, namely $\varepsilon = 0$, $\Upsilon$-stationarity of $\bar{x}$ coincides with $0 \in \Upsilon(\bar x)$, by closedness of $\Upsilon(\bar x)$ [@rockafellar1998variational Thm 10.13]. We shall point out that $\Upsilon$-stationarity provides an intermediate qualification between (the stronger) M-stationarity for $p$ and (the weaker) M-stationarity for $P$, see [@demarchi2023implicit Prop. 3.2 and 3.3] for details. Finally, it appears from [\[eq:Upsilon\]](#eq:Upsilon){reference-type="eqref" reference="eq:Upsilon"} that an $\varepsilon$-$\Upsilon$-stationary point $\bar x\in\mathbb{R}^n$ can be associated with a (possibly nonunique) *certificate* $\bar z \in \mathbf{O}(\bar x)$ that satisfies $$\mathop{\mathrm{dist}}(0, \Upsilon(\bar x)) {}\leq{} \min_{\xi\in\mathbb{R}^n} \left\{ \Vert \xi \Vert \,\vert\, (\xi,0) \in \partial P(\bar x,\bar z) \right\} \leq \varepsilon . \label{eq:UpsilonCertificate}$$ Given such an upper bound, the *pair* $(\bar x, \bar z)$ certifies the $\varepsilon$-$\Upsilon$-stationarity of $\bar x$. 
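As a toy instance of the oracle framework [\[eq:fbo\]](#eq:fbo){reference-type="eqref" reference="eq:fbo"} (our own one-dimensional example, not taken from the paper): for $P(x,z):=\tfrac12(z-x)^2+|z|$, the oracle $\mathbf{O}$ is soft-thresholding with threshold $1$, $p$ is the associated Moreau envelope (the Huber function), and a short computation with the sum rule shows that $(\xi,0)\in\partial P(x,z)$ forces $\xi=x-z$, so $\Upsilon(x)=\{x-\operatorname{soft}(x)\}$; hence $\bar x=0$ is the unique $\Upsilon$-stationary point, matching the global minimizer of $p$.

```python
import numpy as np

# Toy instance of the oracle framework: P(x, z) = 0.5 * (z - x)^2 + |z|.
# Then O(x) is soft-thresholding with threshold 1, p is the Huber function,
# and (xi, 0) in the subdifferential of P at (x, z) forces xi = x - z, so
# Upsilon(x) = {x - soft(x)}.
soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)
P = lambda x, z: 0.5 * (z - x) ** 2 + np.abs(z)
Upsilon = lambda x: x - soft(x)

assert Upsilon(0.0) == 0.0                   # 0 is Upsilon-stationary
assert abs(abs(Upsilon(2.0)) - 1.0) < 1e-12  # dist(0, Upsilon(2)) = 1

# Consistency: the value function p(x) = min_z P(x, z) is minimized at x = 0.
grid = np.linspace(-3.0, 3.0, 60001)
p = lambda x: np.min(P(x, grid))
assert p(0.0) <= min(p(x) for x in np.linspace(-2.0, 2.0, 41)) + 1e-12
```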
### Generalized derivatives of set-valued mappings {#generalized-derivatives-of-set-valued-mapping .unnumbered} Let us fix a set-valued mapping $\Gamma \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ and some point $(\bar x,\bar z)\in\mathop{\mathrm{gph}}\Gamma$. We refer to the set-valued mappings $D\Gamma(\bar x,\bar z) \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ and $D^*\Gamma(\bar x,\bar z) \colon \mathbb{R}^m \rightrightarrows \mathbb{R}^n$, given by $$\begin{aligned} D\Gamma(\bar x,\bar z)(u) &:=\{v\in\mathbb{R}^m\,|\,(u,v)\in T_{\mathop{\mathrm{gph}}\Gamma}(\bar x,\bar z)\},&\\ D^*\Gamma(\bar x,\bar z)(y^*) &:=\{x^*\in\mathbb{R}^n\,|\,(x^*,-y^*)\in N_{\mathop{\mathrm{gph}}\Gamma}(\bar x,\bar z)\},& \end{aligned}$$ as the *graphical derivative* and the *limiting coderivative* of $\Gamma$ at $(\bar x,\bar z)$. Subsequently, we review some stability properties of set-valued mappings, see e.g. [@BenkoMehlitz2022; @rockafellar1998variational]. We say that $\Gamma$ is *metrically regular* at $(\bar x,\bar z)$ whenever there are neighborhoods $U\subseteq\mathbb{R}^n$ of $\bar x$ and $V\subseteq\mathbb{R}^m$ of $\bar z$ as well as a constant $\kappa>0$ such that $$\forall x\in U,\,\forall z\in V\colon\quad \mathop{\mathrm{dist}}(x,\Gamma^{-1}(z))\leq\kappa\mathop{\mathrm{dist}}(z,\Gamma(x)).$$ If just $$\forall x\in U\colon\quad \mathop{\mathrm{dist}}(x,\Gamma^{-1}(\bar z))\leq\kappa\mathop{\mathrm{dist}}(\bar z,\Gamma(x))$$ holds, i.e., if $z:=\bar z$ can be fixed in the estimate required for metric regularity, then $\Gamma$ is called *metrically subregular* at $(\bar x,\bar z)$. Furthermore, $\Gamma$ is said to be *strongly metrically subregular* at $(\bar x,\bar z)$, whenever there exist a neighborhood $U\subseteq\mathbb{R}^n$ and a constant $\kappa>0$ such that $$\forall x\in U\colon\quad \Vert x-\bar x \Vert\leq\kappa\mathop{\mathrm{dist}}(\bar z,\Gamma(x))$$ is valid. 
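As a small numerical illustration of strong metric subregularity (an example of ours): for the single-valued mapping $\Gamma(x):=\{F(x)\}$ with $F(x):=x+x^2$ and the reference point $(\bar x,\bar z)=(0,0)$, the defining estimate holds with $\kappa=2$ on $U:=[-\tfrac12,\tfrac12]$, since $|F(x)|=|x|\,|1+x|\geq|x|/2$ there.

```python
import numpy as np

# Gamma(x) = {F(x)} for F(x) = x + x^2 and the reference point (0, 0).
# On U = [-1/2, 1/2], |F(x)| = |x| * |1 + x| >= |x| / 2, so the defining
# estimate ||x - 0|| <= kappa * dist(0, Gamma(x)) holds with kappa = 2.
F = lambda x: x + x ** 2

for x in np.linspace(-0.5, 0.5, 1001):
    assert abs(x) <= 2.0 * abs(F(x)) + 1e-12
```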
Recall that strong metric subregularity of $\Gamma$ at $(\bar x,\bar z)$ is equivalent to $$\ker D\Gamma(\bar x,\bar z) = \{0\}$$ by the so-called *Levy--Rockafellar criterion*, see [@DontchevRockafellar2014 Thm 4E.1] and [@Levy1996 Prop. 4.1]. Furthermore, $\Gamma$ is metrically regular at $(\bar x,\bar z)$ if and only if $$\ker D^*\Gamma(\bar x,\bar z) = \{0\}$$ by the so-called *Mordukhovich criterion*, see [@rockafellar1998variational Thm 9.40]. Let $F \colon \mathbb{R}^n \to \mathbb{R}^m$ be continuously differentiable and let $\Omega\subseteq\mathbb{R}^m$ be closed. We consider the so-called *feasibility mapping* $\Phi \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ given by $$\label{eq:feasibility_mapping} \Phi(x) :=F(x)-\Omega.$$ We fix some point $\bar x\in\mathbb{R}^n$ satisfying $F(\bar x)\in \Omega$, i.e., $(\bar x,0)\in\mathop{\mathrm{gph}}\Phi$. It is well known that $\Phi$ is metrically regular at $(\bar x,0)$ if and only if $$\label{eq:MRCQ} N_\Omega(F(\bar x))\cap\ker F^\prime(\bar x)^\top=\{0\}$$ is valid, as we have $$D^*\Phi(\bar x,0)(y) = \begin{cases} \{F^\prime(\bar x)^\top y\} & y\in N_\Omega(F(\bar x)),\\ \emptyset & \text{otherwise} \end{cases}$$ e.g. from [@BenkoMehlitz2022 Lem. 2.1]. The following lemma provides a certain openness-type property of the feasibility mapping from [\[eq:feasibility_mapping\]](#eq:feasibility_mapping){reference-type="eqref" reference="eq:feasibility_mapping"} around points of its graph where it is metrically regular. **Lemma 5**. *Fix $(\bar x,0)\in\mathop{\mathrm{gph}}\Phi$ where $\Phi$, the mapping given in [\[eq:feasibility_mapping\]](#eq:feasibility_mapping){reference-type="eqref" reference="eq:feasibility_mapping"}, is metrically regular. 
Then there exists $r>0$ such that, for each $\tilde r\in(0,r)$, there is some $\varepsilon>0$ such that $$\mathop{\mathrm{\mathbb{B}}}_{\tilde r}(0) \subseteq F^\prime(x)\mathop{\mathrm{\mathbb{B}}}_1(0) + \mathop{\mathrm{conv}}\bigl(T_\Omega(z)\cap \mathop{\mathrm{\mathbb{B}}}_1(0)\bigr)$$ holds true for all $x\in \mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x)$ and all $z\in \Omega\cap \mathop{\mathrm{\mathbb{B}}}_\varepsilon(F(\bar x))$.* *Proof.* The assumptions of the lemma guarantee that [\[eq:MRCQ\]](#eq:MRCQ){reference-type="eqref" reference="eq:MRCQ"} is valid. Polarizing this equation while respecting [@BonnansShapiro2000 eq. (2.32)] and [\[eq:polarization_rules\]](#eq:polarization_rules){reference-type="eqref" reference="eq:polarization_rules"} gives $$\mathop{\mathrm{cl}}\bigl(F^\prime(\bar x)\mathbb{R}^n+T^\textup{Cl}_\Omega(F(\bar x))\bigr)=\mathbb{R}^m.$$ Noting that the only convex cone which is dense in $\mathbb{R}^m$ is $\mathbb{R}^m$ itself, the closure is actually superfluous. Next, we apply the *generalized open mapping theorem* from [@ZoweKurcyusz1979 Thm 2.1] in order to find $r>0$ such that $$\label{eq:consequence_open_mapping} \mathop{\mathrm{\mathbb{B}}}_r(0) \subseteq F^\prime(\bar x)\mathop{\mathrm{\mathbb{B}}}_1(0)+T^\textup{Cl}_\Omega(F(\bar x))\cap \mathop{\mathrm{\mathbb{B}}}_1(0).$$ Subsequently, we will show that the constant $r$ from above satisfies the requirements of the lemma. For brevity, we introduce $$Z(x,z) {}:={} F^\prime(x)\mathop{\mathrm{\mathbb{B}}}_1(0)+\mathop{\mathrm{conv}}\bigl(T_\Omega(z)\cap\mathop{\mathrm{\mathbb{B}}}_1(0)\bigr)$$ for arbitrary $x\in\mathbb{R}^n$ and $z\in\Omega$. Let us note that $Z(x,z)$ is a nonempty, compact, convex set. 
Suppose that there is some $\tilde r\in(0,r)$ such that, for each $\varepsilon>0$, there are $x_\varepsilon\in \mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x)$, $z_\varepsilon\in \Omega\cap\mathop{\mathrm{\mathbb{B}}}_\varepsilon(F(\bar x))$, and $w_\varepsilon\in\mathop{\mathrm{\mathbb{B}}}_{\tilde r}(0)\setminus Z(x_\varepsilon,z_\varepsilon)$. Let $\varepsilon\in(0,1)$ be fixed for a moment. Then $w_\varepsilon$ can be separated from the compact, convex set $Z(x_\varepsilon,z_\varepsilon)$, i.e., we find $\xi_\varepsilon\in\mathbb{R}^m$ with $\Vert \xi_\varepsilon \Vert=1$ and $$\label{eq:consequence_of_separation} \forall w\in Z(x_\varepsilon,z_\varepsilon)\colon\quad \langle \xi_\varepsilon, w_\varepsilon \rangle\leq\langle \xi_\varepsilon, w \rangle.$$ Clearly, $-r \xi_\varepsilon\in\mathop{\mathrm{\mathbb{B}}}_r(0)$. Hence, due to [\[eq:consequence_open_mapping\]](#eq:consequence_open_mapping){reference-type="eqref" reference="eq:consequence_open_mapping"}, there are $u_\varepsilon\in\mathop{\mathrm{\mathbb{B}}}_1(0)$ and $v_\varepsilon\in T^\textup{Cl}_\Omega(F(\bar x))\cap\mathop{\mathrm{\mathbb{B}}}_1(0)$ such that $-r\xi_\varepsilon=F^\prime(\bar x)u_\varepsilon+v_\varepsilon$. Fix $\theta\in(0,1)$. Then we have $\theta v_\varepsilon\in T^\textup{Cl}_\Omega(F(\bar x))\cap\mathop{\mathrm{\mathbb{B}}}_\theta(0)$. For sufficiently small $\varepsilon\in(0,1)$, $\Vert F^\prime(x_\varepsilon)-F^\prime(\bar x) \Vert<\varepsilon$ follows by continuous differentiability of $F$. Further, [@rockafellar1998variational Thm 6.26] ensures the existence of $\tilde v_{\varepsilon,\theta}\in T_\Omega(z_\varepsilon)\cap\mathop{\mathrm{\mathbb{B}}}_1(0)$ such that $\Vert \theta v_\varepsilon-\tilde v_{\varepsilon,\theta} \Vert<\varepsilon$ for sufficiently small $\varepsilon\in(0,1)$ and sufficiently large $\theta\in(0,1)$. 
Hence, we find $$\begin{aligned} -(1-\varepsilon)\theta r \xi_\varepsilon &= (1-\varepsilon)(F^\prime(\bar x)(\theta u_\varepsilon)+\theta v_\varepsilon) \\ &= (F^\prime(\bar x)-F^\prime(x_\varepsilon))((1-\varepsilon)\theta u_\varepsilon) +(1-\varepsilon)\theta v_\varepsilon-\tilde v_{\varepsilon,\theta} \\ &\qquad +F^\prime(x_\varepsilon)((1-\varepsilon)\theta u_\varepsilon)+\tilde v_{\varepsilon,\theta}. \end{aligned}$$ Let us set $w_{\varepsilon,\theta} :=F^\prime(x_\varepsilon)((1-\varepsilon)\theta u_\varepsilon)+\tilde v_{\varepsilon,\theta}$ and observe that $w_{\varepsilon,\theta}\in Z(x_\varepsilon,z_\varepsilon)$ is valid. Furthermore, by construction, we find $$\begin{aligned} \Vert w_{\varepsilon,\theta}+(1-\varepsilon)\theta r \xi_\varepsilon \Vert < \varepsilon\theta\Vert u_\varepsilon \Vert + \varepsilon\theta\Vert v_\varepsilon \Vert + \varepsilon \leq 3\varepsilon. \end{aligned}$$ This gives $$\begin{aligned} -\tilde r &\leq \langle \xi_\varepsilon, w_\varepsilon \rangle \leq \langle \xi_\varepsilon, w_{\varepsilon,\theta} \rangle \\ &= \langle \xi_\varepsilon, w_{\varepsilon,\theta}+(1-\varepsilon)\theta r \xi_\varepsilon \rangle - \langle \xi_\varepsilon, (1-\varepsilon)\theta r \xi_\varepsilon \rangle \\ &< 3\varepsilon-(1-\varepsilon)\theta r, \end{aligned}$$ where we used the Cauchy--Schwarz inequality and $\Vert \xi_\varepsilon \Vert=1$. As, for sufficiently small $\varepsilon$, this holds for all sufficiently large $\theta\in(0,1)$, we find $-\tilde r\leq 3\varepsilon-(1-\varepsilon)r$, and taking the limit $\varepsilon\downarrow 0$ yields the contradiction $\tilde r\geq r$. ◻ ### Subderivatives and the directional proximal subdifferential {#subderivatives-and-the-directional-proximal-subdifferential .unnumbered} Let us fix a lower semicontinuous function $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$. 
For $\bar z\in\mathop{\mathrm{dom}}h$ and $v\in\mathbb{R}^m$, the lower limit $$\mathrm{d}h(\bar z)(v) := \liminf\limits_{t\downarrow 0,\,v^\prime\to v} \frac{h(\bar z+tv^\prime)-h(\bar z)}{t}$$ is called the *subderivative* of $h$ at $\bar z$ in direction $v$, and the mapping $v\mapsto \mathrm{d}h(\bar z)(v)$, which, by definition, is lower semicontinuous and positively homogeneous, is referred to as the subderivative of $h$ at $\bar z$. We note that $\mathop{\mathrm{epi}}\mathrm{d}h(\bar z)=T_{\mathop{\mathrm{epi}}h}(\bar z,h(\bar z))$, see [@rockafellar1998variational Thm 8.2(a)]. Furthermore, for $\bar y\in\mathbb{R}^m$, $$\label{eq:def_second_subderivative} \mathrm{d}^2 h(\bar z,\bar y)(v) := \liminf\limits_{t\downarrow 0,\,v^\prime\to v} \frac{h(\bar z+tv^\prime)-h(\bar z)-t\langle \bar y, v^\prime \rangle}{\tfrac12 t^2}$$ is called the *second subderivative* of $h$ at $\bar z$ for $\bar y$ in direction $v$. The mapping $v\mapsto\mathrm{d}^2h(\bar z,\bar y)(v)$, which, by definition, is lower semicontinuous and positively homogeneous of degree $2$, is referred to as the second subderivative of $h$ at $\bar z$ for $\bar y$. The recent study [@BenkoMehlitz2023] presents an overview of calculus rules addressing these variational tools. **Lemma 6**. *Let $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ be a lower semicontinuous function, and fix $\bar z\in\mathop{\mathrm{dom}}h$ and $\bar y\in\mathbb{R}^m$. Then we have $\mathrm{d}^2h(\bar z,\bar y)(0)\in\{-\infty,0\}$.* *Proof.* We first observe that $\mathrm{d}^2h(\bar z,\bar y)(0)\leq 0$ holds by definition of the second subderivative simply by choosing $v^\prime :=0$ in [\[eq:def_second_subderivative\]](#eq:def_second_subderivative){reference-type="eqref" reference="eq:def_second_subderivative"}. 
Positive homogeneity of degree $2$ of the second subderivative guarantees validity of $\mathrm{d}^2 h(\bar z,\bar y)(0)=\alpha^2\mathrm{d}^2 h(\bar z,\bar y)(0)$ for each $\alpha>0$, and this is only possible if $\mathrm{d}^2h(\bar z,\bar y)(0)\in\{-\infty,0\}$. ◻ The first statement of the following lemma is taken from [@rockafellar1998variational Thm 13.40], whereas the second assertion is inspired by the proof of [@MohammadiMordukhovichSarabi2022 Thm 8.2]. The definitions of prox-regularity, subdifferential continuity, and twice epi-differentiability, which can be found in [@rockafellar1998variational Def. 13.27, 13.28, and 13.6(b)], are not stated, as the precise meaning of these concepts is not exploited in this paper. **Lemma 7**. *Let $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ be a lower semicontinuous function, and fix $\bar z\in\mathop{\mathrm{dom}}h$ and $\bar y\in\partial h(\bar z)$. Assume that $h$ is prox-regular, subdifferentially continuous, and twice epi-differentiable at $\bar z$ for $\bar y$. Then the following assertions hold.* 1. *We have $$\forall v\in\mathbb{R}^m\colon\quad D(\partial h)(\bar z,\bar y)(v) = \frac12\partial \mathrm{d}^2 h(\bar z,\bar y)(v).$$* 2. *Additionally, assume that $\mathrm{d}^2 h(\bar z,\bar y)$ is convex. Then we have $$\forall v,w\in\mathbb{R}^m\colon\quad w\in D(\partial h)(\bar z,\bar y)(v) \quad \Longrightarrow \quad \mathrm{d}^2 h(\bar z,\bar y)(v)=\langle w, v \rangle.$$* *Proof.* The first statement is taken from [@rockafellar1998variational Thm 13.40]. For the proof of the second statement, pick $v,w\in\mathbb{R}^m$ such that $w\in D(\partial h)(\bar z,\bar y)(v)$. The first assertion yields $2w\in \partial \mathrm{d}^2 h(\bar z,\bar y)(v)$. 
Noting that $\mathrm{d}^2 h(\bar z,\bar y)$ is convex, we find $$\forall v^\prime\in\mathbb{R}^m\colon\quad \mathrm{d}^2 h(\bar z,\bar y)(v^\prime)\geq \mathrm{d}^2 h(\bar z,\bar y)(v) + 2\langle w, v^\prime-v \rangle.$$ For $\varepsilon\in(0,1)$, we test this inequality with $v^\prime :=(1\pm\varepsilon)v$. As $\mathrm{d}^2 h(\bar z,\bar y)$ is positively homogeneous of degree $2$, this gives $$\pm 2 \langle w, v \rangle\leq (\varepsilon\pm2)\mathrm{d}^2h(\bar z,\bar y)(v),$$ and taking the limit $\varepsilon\downarrow 0$ yields the claim. ◻ Finally, for $\bar z\in\mathop{\mathrm{dom}}h$ and $v\in\mathbb{R}^m$ such that $|\mathrm{d}h(\bar z)(v)|<\infty$, we refer to $$\widehat{\partial}^\textup{p}h(\bar z,v) := \{ y\in\mathbb{R}^m\,|\,\mathrm{d}^2h(\bar z,y)(v)>-\infty,\,\mathrm{d}h(\bar z)(v)=\langle y, v \rangle \}$$ as the *proximal subdifferential* of $h$ at $\bar z$ in direction $v$. In the case where $|\mathrm{d}h(\bar z)(v)|=\infty$, we set $\widehat{\partial}^\textup{p}h(\bar z,v):=\emptyset$. This variational object, which has been introduced in [@BenkoMehlitz2023 Sec. 3.2], is closely related to the proximal subdifferential as it has been defined in [@rockafellar1998variational Def. 8.45], see [@BenkoMehlitz2023 Lem. 3.8]. Whenever $h$ is convex, then $$\widehat{\partial}^\textup{p}h(\bar z,v) = \partial h(\bar z)\cap\{y\in\mathbb{R}^m\,|\,\mathrm{d}h(\bar z)(v)=\langle y, v \rangle\}$$ due to [@BenkoMehlitz2023 Cor. 3.14]. For some closed set $\Omega\subseteq\mathbb{R}^m$, $\bar z\in \Omega$, and $v\in T_\Omega(\bar z)$, $$\widehat{N}^\textup{p}_\Omega(\bar z,v) := \widehat{\partial}^\textup{p}\mathop{\mathrm{\delta}}_\Omega(\bar z,v)$$ is referred to as the *proximal normal cone* of $\Omega$ at $\bar z$ *in direction* $v$. For $v\notin T_\Omega(\bar z)$, we set $\widehat{N}^\textup{p}_\Omega(\bar z,v):=\emptyset$. This variational object originated in [@BenkoGfrererYeZhangZhou2022 Def. 2.8], see [@BenkoMehlitz2023 Ex. 3.9] as well. **Lemma 8**. 
*Let $h \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ be a lower semicontinuous function, and fix $\bar z\in\mathop{\mathrm{dom}}h$ and $v\in\mathbb{R}^m$. Then $\widehat{\partial}^\textup{p}h(\bar z,v) \subseteq \partial h(\bar z,(v,\mathrm{d}h(\bar z)(v)))$.* *Proof.* By means of [@BenkoMehlitz2023 Cor. 3.14], we find $$\widehat{\partial}^\textup{p}h(\bar z,v) = \left\{ y\in\mathbb{R}^m\,\middle|\, (y,-1)\in\widehat{N}^\textup{p}_{\mathop{\mathrm{epi}}h}((\bar z,h(\bar z)),(v,\mathrm{d}h(\bar z)(v))) \right\}.$$ Now, the desired inclusion follows by [@BenkoGfrererYeZhangZhou2022 Prop. 2.9] and the definition of the directional limiting subdifferential. ◻ Let us specify the result of [Lemma 8](#lem:directional_proximal_subdifferential){reference-type="ref" reference="lem:directional_proximal_subdifferential"} for $h:=\mathop{\mathrm{\delta}}_\Omega$, $\bar z\in\Omega$, and $v\in T_\Omega(\bar z)$. Noting that this gives $\mathrm{d}\mathop{\mathrm{\delta}}_\Omega(\bar z)(v)=0$, the inclusion therein reduces to $\widehat{N}^\textup{p}_\Omega(\bar z,v) \subseteq N_\Omega(\bar z,v)$, see [@BenkoGfrererYeZhangZhou2022 Prop. 2.9] for more details. 
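For finite-valued Lipschitz continuous functions, the auxiliary limit $v^\prime\to v$ in the definition of the subderivative is superfluous, so plain one-sided difference quotients already approximate $\mathrm{d}h(\bar z)(v)$. The following sketch, an illustration with $h=|\cdot|$ and not part of the formal development, makes this concrete:

```python
# Numerical illustration: for the Lipschitz function h(z) = |z|, the
# subderivative dh(z)(v) = liminf_{t->0+, v'->v} (h(z + t v') - h(z)) / t
# reduces to a one-sided limit of plain difference quotients.

def subderivative_approx(h, z, v, t=1e-8):
    """One-sided difference quotient approximating dh(z)(v)."""
    return (h(z + t * v) - h(z)) / t

h = abs

# At the kink z = 0 we get dh(0)(v) = |v| ...
print(subderivative_approx(h, 0.0, 1.0))   # approx  1.0
print(subderivative_approx(h, 0.0, -1.0))  # approx  1.0

# ... while at the smooth point z = 1 we get dh(1)(v) = v.
print(subderivative_approx(h, 1.0, -1.0))  # approx -1.0
```

In particular, the approximation reflects that $v\mapsto\mathrm{d}h(0)(v)=|v|$ is positively homogeneous but not linear.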
# Constrained structured problems {#sec:constrained_structured_problems} Consider the constrained structured optimization problem $$\tag{Q}\label{eq:Q} \mathop{\mathrm{minimize}}_{x} {}\quad{} \mathbf{f}(x) + \mathbf{g}( \mathbf{c}(x) ) {}\quad{} \mathop{\mathrm{subject~to}} {}\quad{} \mathbf{h}(x)\in C$$ where $\mathbf{f} \colon \mathbb{R}^n \to \mathbb{R}$, $\mathbf{c} \colon \mathbb{R}^n \to \mathbb{R}^m$, and $\mathbf{h} \colon \mathbb{R}^n \to \mathbb{R}^p$ are continuously differentiable (twice continuously differentiable in [3.2](#sec:SOSC:general){reference-type="ref" reference="sec:SOSC:general"}), $\mathbf{g} \colon \mathbb{R}^m \to \overline{\mathbb{R}}$ is merely lower semicontinuous, and $C\subseteq\mathbb{R}^p$ is a closed set, the so-called *constraint set* of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}. Let us mention that the model [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} also covers [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} as we could use $$\label{eq:Q_as_P} f(x):=\mathbf{f}(x), \quad g(z_1,z_2):=\mathbf{g}(z_1)+\mathop{\mathrm{\delta}}_C(z_2), \quad c(x):=(\mathbf{c}(x),\mathbf{h}(x)),$$ so [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} is *not* more general than [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. However, it will turn out that the possibility of considering some constraints separately opens the way to second-order optimality conditions which go beyond the standard local second-order growth of the objective function over the feasible set. Before dealing with that, we consider some Lagrangian terminology and notions useful for first-order analysis. 
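To see the embedding [\[eq:Q_as_P\]](#eq:Q_as_P){reference-type="eqref" reference="eq:Q_as_P"} at work, the following sketch instantiates it on a one-dimensional toy problem (all data chosen here purely for illustration) and checks that the composite objective of the resulting unconstrained model, with the indicator term included, reproduces the data of the constrained problem:

```python
import math

# Toy instance of (Q):  minimize  bf(x) + bg(bc(x))  s.t.  bh(x) in C,
# with  bf(x) = x^2,  bg = |.|,  bc(x) = x - 1,  bh(x) = x,  C = (-inf, 0].
INF = math.inf
bf = lambda x: x ** 2
bg = lambda z: abs(z)
bc = lambda x: x - 1.0
bh = lambda x: x
delta_C = lambda z: 0.0 if z <= 0.0 else INF  # indicator of C

# Reformulation (eq:Q_as_P):  f := bf,  g(z1, z2) := bg(z1) + delta_C(z2),
# c(x) := (bc(x), bh(x)),  so that (Q) reads:  minimize  f(x) + g(c(x)).
f = bf
g = lambda z: bg(z[0]) + delta_C(z[1])
c = lambda x: (bc(x), bh(x))

phi = lambda x: f(x) + g(c(x))  # composite objective

# Feasible point: the composite value agrees with the (Q)-objective.
print(phi(-0.5))  # 0.25 + |-1.5| = 1.75
# Infeasible point: the indicator drives the composite objective to +inf.
print(phi(2.0))   # inf
```

An infeasible point is thus penalized by the indicator with value $+\infty$, exactly as the reformulation prescribes.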
## First-order optimality conditions {#sec:FONOC:general} Introducing auxiliary variables in the spirit of [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"}, and keeping [\[eq:Q_as_P\]](#eq:Q_as_P){reference-type="eqref" reference="eq:Q_as_P"} in mind, we can reformulate [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} as $$\tag{Q$_{\text{S}}$}\label{eq:Qslack} \mathop{\mathrm{minimize}}_{x,z_1,z_2} {}\quad{} \mathbf{f}(x) + \mathbf{g}( z_1 ) + \mathop{\mathrm{\delta}}_C(z_2) {}\quad{} \mathop{\mathrm{subject~to}} {}\quad{} \mathbf{c}(x)-z_1=0 ,~ \mathbf{h}(x)-z_2=0$$ with slack variables $z_1\in\mathbb{R}^m$ and $z_2\in\mathbb{R}^p$, involving merely equality constraints but no composition of nonsmooth functions with (nontrivial) smooth ones. Introducing Lagrange multipliers $y\in\mathbb{R}^m$ and $\lambda\in\mathbb{R}^p$ for the constraints, we define a Lagrangian-type function $\mathcal{L}^{\mathrm{S}}_\textup{Q} \colon \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^p\times\mathbb{R}^m\times\mathbb{R}^p \to \overline{\mathbb{R}}$ associated with [\[eq:Qslack\]](#eq:Qslack){reference-type="eqref" reference="eq:Qslack"}, the lifted version of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}, by means of $$\label{eq:Qslack:Lagrangian} \mathcal{L}^{\mathrm{S}}_\textup{Q}(x,z_1,z_2,y,\lambda) {}:={} \mathbf{f}(x) + \mathbf{g}(z_1) + \mathop{\mathrm{\delta}}_C(z_2) + \langle y, \mathbf{c}(x) - z_1 \rangle + \langle \lambda, \mathbf{h}(x) - z_2 \rangle.$$ Focusing on those terms of $\mathcal{L}^{\mathrm{S}}_\textup{Q}$ depending on $x$, we call the function $\mathcal{L}_\textup{Q} \colon \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^p \to \mathbb{R}$ given by $$\label{eq:Q:Lagrangian} \mathcal{L}_\textup{Q}(x,y,\lambda) := \mathbf{f}(x) + \langle y, \mathbf{c}(x) \rangle + \langle \lambda, \mathbf{h}(x) \rangle$$ the *Lagrangian* function of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}. 
Then, acting as a precursor of $\mathcal{L}_\textup{Q}$, we refer to $\mathcal{L}^{\mathrm{S}}_\textup{Q}$ as the *pre-Lagrangian* function of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}. These objects are tightly related to the so-called M-stationarity conditions of both problems [\[eq:Qslack\]](#eq:Qslack){reference-type="eqref" reference="eq:Qslack"} and [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}. These first-order optimality conditions can be expressed in Lagrangian form as $$0\in\partial_x \mathcal{L}^{\mathrm{S}}_\textup{Q} (\bar{x}, \mathbf{c}(\bar{x}), \mathbf{h}(\bar{x}), \bar{y}, \bar{\lambda}), \ldots, 0\in\partial_\lambda\mathcal{L}^{\mathrm{S}}_\textup{Q} (\bar{x}, \mathbf{c}(\bar{x}), \mathbf{h}(\bar{x}), \bar{y}, \bar{\lambda}).$$ More explicitly, albeit omitting the auxiliary variables $\bar{z}_1 = \mathbf{c}(\bar{x})$ and $\bar{z}_2 = \mathbf{h}(\bar{x})$, these read [\[eq:Q:Mstationary\]]{#eq:Q:Mstationary label="eq:Q:Mstationary"} $$\begin{aligned} 0 {}={}& \nabla_x \mathcal{L}_\textup{Q}(\bar{x},\bar{y},\bar{\lambda}) , \label{eq:Q:Mstationary:x}\\ \bar{y} {}\in{}& \partial \mathbf{g}( \mathbf{c}(\bar{x}) ) , \label{eq:Q:Mstationary:y}\\ \bar{\lambda} {}\in{}& N_C( \mathbf{h}(\bar{x}) ) . \label{eq:Q:Mstationary:lambda} \end{aligned}$$ Notice that [\[eq:Q:Mstationary:y\]](#eq:Q:Mstationary:y){reference-type="eqref" reference="eq:Q:Mstationary:y"}--[\[eq:Q:Mstationary:lambda\]](#eq:Q:Mstationary:lambda){reference-type="eqref" reference="eq:Q:Mstationary:lambda"} implicitly require the feasibility of $\bar{x}$, namely $\mathbf{c}(\bar{x}) \in \mathop{\mathrm{dom}}\mathbf{g}$ and $\mathbf{h}(\bar{x}) \in C$, for otherwise the subdifferential $\partial \mathbf{g}( \mathbf{c}(\bar{x}) )$ and the cone $N_C( \mathbf{h}(\bar{x}) )$ are empty. For a discussion on several Lagrangian-type functions, we refer the interested reader to [4.1](#sec:concepts){reference-type="ref" reference="sec:concepts"}. 
There we clarify, among other aspects, why $\mathcal{L}^{\mathrm{S}}_\textup{Q}$ is not referred to as the Lagrangian of [\[eq:Qslack\]](#eq:Qslack){reference-type="eqref" reference="eq:Qslack"} here, but only as a precursor in view of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}. ## Second-order sufficient optimality conditions {#sec:SOSC:general} Throughout the subsection, we investigate the rather general optimization problem [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} where the functions $\mathbf{f} \colon \mathbb{R}^n \to \mathbb{R}$, $\mathbf{c} \colon \mathbb{R}^n \to \mathbb{R}^m$, and $\mathbf{h} \colon \mathbb{R}^n \to \mathbb{R}^p$ are twice continuously differentiable. For some point $\bar x\in\mathbb{R}^n$ satisfying $\mathbf{c}(\bar x)\in\mathop{\mathrm{dom}}\mathbf{g}$ and $\mathbf{h}(\bar x)\in C$, we introduce the so-called *critical cone* of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} at $\bar x$ by means of $$\mathcal C_{\textup Q}(\bar x) := \{ u\in\mathbb{R}^n\,|\, \mathbf{f}^\prime(\bar x)u+\mathrm{d}\mathbf{g}(\mathbf{c}(\bar x))(\mathbf{c}^\prime(\bar x)u)\leq 0,\, \mathbf{h}^\prime(\bar x)u\in T_C(\mathbf{h}(\bar x)) \}.$$ Furthermore, for each $u\in\mathcal C_{\textup{Q}}(\bar x)$, we make use of a so-called *directional multiplier set* $\Lambda_{\textup{Q}}(\bar x,u)$ given by $$\Lambda_{\textup Q}(\bar x,u) := \left\{ (y,\lambda)\in\mathbb{R}^m\times\mathbb{R}^p \,\middle|\, \begin{aligned} &\nabla_x\mathcal{L}_\textup{Q}(\bar x,y,\lambda)=0, \\ & y\in\widehat{\partial}^\textup{p}\mathbf{g}(\mathbf{c}(\bar x),\mathbf{c}^\prime(\bar x)u),\, \lambda\in\widehat{N}^\textup{p}_C(\mathbf{h}(\bar x),\mathbf{h}^\prime(\bar x)u) \end{aligned} \right\}$$ where $\mathcal{L}_\textup{Q}$ is the Lagrangian of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} given in [\[eq:Q:Lagrangian\]](#eq:Q:Lagrangian){reference-type="eqref" reference="eq:Q:Lagrangian"}. 
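As a sanity check of these definitions, consider the toy instance $\mathbf{f}(x)=x$, $\mathbf{c}(x)=x$, $\mathbf{g}=|\cdot|$, $\mathbf{h}(x)=x$, $C=(-\infty,0]$ at $\bar x=0$ (all data chosen purely for illustration). There, $\mathrm{d}\mathbf{g}(0)(w)=|w|$ and $T_C(0)=(-\infty,0]$, so $\mathcal C_{\textup Q}(0)=(-\infty,0]$; the sketch below tests membership of sample directions:

```python
# Toy instance (illustration only):  bf(x) = x,  bc(x) = x,  bg = |.|,
# bh(x) = x,  C = (-inf, 0],  reference point xbar = 0.  Then the critical
# cone is  C_Q(0) = { u :  u + |u| <= 0  and  u <= 0 } = (-inf, 0].

def in_critical_cone(u):
    bf_prime, bc_prime, bh_prime = 1.0, 1.0, 1.0  # derivatives at xbar = 0
    d_bg = lambda w: abs(w)                       # subderivative of |.| at 0
    in_tangent_cone = bh_prime * u <= 0.0         # T_C(0) = (-inf, 0]
    return bf_prime * u + d_bg(bc_prime * u) <= 0.0 and in_tangent_cone

print(in_critical_cone(-1.0))  # True
print(in_critical_cone(0.0))   # True
print(in_critical_cone(1.0))   # False:  1 + |1| = 2 > 0
```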
The proof of the following theorem has been inspired by [@BenkoGfrererYeZhangZhou2022 Thm 3.3] and [@BenkoMehlitz2023 Thm 6.1]. **Theorem 9**. *Fix some point $\bar x\in\mathbb{R}^n$ satisfying $\mathbf{c}(\bar x)\in\mathop{\mathrm{dom}}\mathbf{g}$ and $\mathbf{h}(\bar x)\in C$. Assume that, for each $u\in\mathcal C_{\textup{Q}}(\bar x)\setminus\{0\}$, there exists a pair $(y,\lambda)\in\Lambda_{\textup{Q}}(\bar x,u)$ such that $$\label{eq:SOSC_general} \nabla^2_{xx}\mathcal{L}_\textup{Q}(\bar x,y,\lambda)[u,u] + \mathrm{d}^2\mathbf{g}(\mathbf{c}(\bar x),y)(\mathbf{c}^\prime(\bar x)u) + \mathrm{d}^2\mathop{\mathrm{\delta}}_C(\mathbf{h}(\bar x),\lambda)(\mathbf{h}^\prime(\bar x)u) > 0.$$ Then there exist constants $\varepsilon>0$ and $\beta>0$ such that $$\label{eq:essential_local_minimizer_general} \forall x\in\mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x)\colon\quad \max\{ \mathbf{f}(x)+\mathbf{g}(\mathbf{c}(x))-\mathbf{f}(\bar x)-\mathbf{g}(\mathbf{c}(\bar x)), \mathop{\mathrm{dist}}(\mathbf{h}(x),C) \} \geq \frac\beta2\Vert x-\bar x \Vert^2.$$ Particularly, $\bar x$ is a strict local minimizer of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}.* *Proof.* We argue by contradiction. Suppose that there is a sequence $\{x^k\}$ converging to $\bar x$ such that [\[eq:proof_SOSC_contra\]]{#eq:proof_SOSC_contra label="eq:proof_SOSC_contra"} $$\begin{aligned} \label{eq:proof_SOSC_contra_values} \mathbf{f}(x^k)+\mathbf{g}(\mathbf{c}(x^k))-\mathbf{f}(\bar x)-\mathbf{g}(\mathbf{c}(\bar x))&\leq \mathpzc{o}(\Vert x^k-\bar x \Vert^2), \\ \label{eq:proof_SOSC_contra_feasibility} \mathop{\mathrm{dist}}(\mathbf{h}(x^k),C)&\leq \mathpzc{o}(\Vert x^k-\bar x \Vert^2) \end{aligned}$$ are satisfied along $\{x^k\}$ while $x^k\neq \bar x$ is valid for all $k\in\mathbb{N}$. Let us set $t_k :=\Vert x^k-\bar x \Vert$ and $u^k :=(x^k-\bar x)/t_k$ for each $k\in\mathbb{N}$. 
Then, without loss of generality, we may assume $u^k\to u$ for some $u$ in the unit sphere of $\mathbb{R}^n$. From [\[eq:proof_SOSC_contra_values\]](#eq:proof_SOSC_contra_values){reference-type="eqref" reference="eq:proof_SOSC_contra_values"}, we find $$\label{eq:proof_SOSC_liminf_function_values} \liminf\limits_{k\to\infty} \frac{\mathbf{g}(\mathbf{c}(x^k))-\mathbf{g}(\mathbf{c}(\bar x))}{\frac12 t_k^2} \leq \liminf\limits_{k\to\infty} -\frac{\mathbf{f}(x^k)-\mathbf{f}(\bar x)}{\tfrac12t_k^2}$$ as well as $\mathrm{d}(\mathbf{f}+(\mathbf{g}\circ \mathbf{c}))(\bar x)(u)\leq 0$, and based on the sum and chain rule for the first subderivative, see e.g. [@BenkoMehlitz2023], the latter gives $\mathbf{f}^\prime(\bar x)u+\mathrm{d}\mathbf{g}(\mathbf{c}(\bar x))(\mathbf{c}^\prime(\bar x)u)\leq 0$. Next, we turn our attention to [\[eq:proof_SOSC_contra_feasibility\]](#eq:proof_SOSC_contra_feasibility){reference-type="eqref" reference="eq:proof_SOSC_contra_feasibility"} which, for each $k\in\mathbb{N}$, yields the existence of $z^k\in\mathbb{R}^p$ such that $\mathbf{h}(x^k)+t_k^2z^k\in C$ and $\Vert z^k \Vert\to 0$. For each $k\in\mathbb{N}$, let us set $$v^k :=\frac{\mathbf{h}(x^k)+t_k^2z^k-\mathbf{h}(\bar x)}{t_k}, \qquad w^k :=\frac{\mathbf{c}(x^k)-\mathbf{c}(\bar x)}{t_k}.$$ Clearly, $w^k\to\mathbf{c}^\prime(\bar x)u$. A first-order Taylor expansion of $\mathbf{h}$ gives $$v^k = \frac{\mathbf{h}^\prime(\bar x)t_ku^k+\mathpzc{o}(t_k)+t_k^2z^k}{t_k} = \mathbf{h}^\prime(\bar x)u^k+\frac{\mathpzc{o}(t_k)}{t_k}+t_kz^k \to \mathbf{h}^\prime(\bar x)u,$$ so that $\mathbf{h}(\bar x)+t_kv^k=\mathbf{h}(x^k)+t_k^2z^k\in C$ implies $\mathbf{h}^\prime(\bar x)u\in T_C(\mathbf{h}(\bar x))$ and, hence, $u\in\mathcal C_{\textup{Q}}(\bar x)\setminus\{0\}$. The assumptions of the theorem now guarantee the existence of $(y,\lambda)\in\Lambda_{\textup{Q}}(\bar x,u)$ such that [\[eq:SOSC_general\]](#eq:SOSC_general){reference-type="eqref" reference="eq:SOSC_general"} holds. 
On the other hand, the definition of the second subderivative, [\[eq:proof_SOSC_liminf_function_values\]](#eq:proof_SOSC_liminf_function_values){reference-type="eqref" reference="eq:proof_SOSC_liminf_function_values"}, $\Vert z^k \Vert\to 0$, and $\nabla_x\mathcal{L}_\textup{Q}(\bar x,y,\lambda)=0$ give $$\begin{aligned} &\mathrm{d}^2\mathbf{g}(\mathbf{c}(\bar x),y)(\mathbf{c}^\prime(\bar x)u) + \mathrm{d}^2\mathop{\mathrm{\delta}}_C(\mathbf{h}(\bar x),\lambda)(\mathbf{h}^\prime(\bar x)u) \\ &\qquad \leq \liminf\limits_{k\to\infty}\frac{\mathbf{g}(\mathbf{c}(\bar x)+t_kw^k)-\mathbf{g}(\mathbf{c}(\bar x))-t_k\langle y, w^k \rangle}{\frac12t_k^2} + \liminf\limits_{k\to\infty}\frac{-t_k\langle \lambda, v^k \rangle}{\tfrac12 t_k^2} \\ &\qquad \leq \liminf\limits_{k\to\infty}\frac{-\mathbf{f}(x^k)+\mathbf{f}(\bar x)-\langle y, \mathbf{c}(x^k)-\mathbf{c}(\bar x) \rangle-\langle \lambda, \mathbf{h}(x^k)+t_k^2z^k-\mathbf{h}(\bar x) \rangle}{\tfrac12 t_k^2} \\ &\qquad = \liminf\limits_{k\to\infty}\frac{-\mathcal{L}_\textup{Q}(x^k,y,\lambda)+\mathcal{L}_\textup{Q}(\bar x,y,\lambda)}{\tfrac12 t_k^2} \\ &\qquad = \liminf\limits_{k\to\infty}\frac{-\frac12t_k^2\,\nabla^2_{xx}\mathcal{L}_\textup{Q}(\bar x,y,\lambda)[u^k,u^k]-\mathpzc{o}(t_k^2)}{\frac12t_k^2} = -\nabla^2_{xx}\mathcal{L}_\textup{Q}(\bar x,y,\lambda)[u,u], \end{aligned}$$ which clearly contradicts [\[eq:SOSC_general\]](#eq:SOSC_general){reference-type="eqref" reference="eq:SOSC_general"}. 
◻ Let us note that the growth condition [\[eq:essential_local_minimizer_general\]](#eq:essential_local_minimizer_general){reference-type="eqref" reference="eq:essential_local_minimizer_general"} is slightly stronger than the commonly known *second-order growth condition* which demands the existence of constants $\varepsilon>0$ and $\beta>0$ such that $$\label{eq:second_order_growth_condition_general} \forall x\in\mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x)\cap \mathbf{h}^{-1}(C)\colon\quad \mathbf{f}(x)+\mathbf{g}(\mathbf{c}(x))-\mathbf{f}(\bar x)-\mathbf{g}(\mathbf{c}(\bar x)) \geq \frac\beta2\Vert x-\bar x \Vert^2.$$ Indeed, condition [\[eq:essential_local_minimizer_general\]](#eq:essential_local_minimizer_general){reference-type="eqref" reference="eq:essential_local_minimizer_general"} allows for small perturbations of feasibility while [\[eq:second_order_growth_condition_general\]](#eq:second_order_growth_condition_general){reference-type="eqref" reference="eq:second_order_growth_condition_general"} only addresses feasible points. Let us recall that a point $\bar x\in\mathbb{R}^n$ satisfying $\mathbf{c}(\bar x)\in\mathop{\mathrm{dom}}\mathbf{g}$, $\mathbf{h}(\bar x)\in C$, and [\[eq:essential_local_minimizer_general\]](#eq:essential_local_minimizer_general){reference-type="eqref" reference="eq:essential_local_minimizer_general"} is referred to as an *essential local minimizer of second order* of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"}, and this notion can be traced back to [@Penot1998]. Note that the precise definition of the directional multiplier set $\Lambda_\textup{Q}(\bar x,u)$ has not been fully exploited in the proof of [Theorem 9](#thm:SOSC_general){reference-type="ref" reference="thm:SOSC_general"}. Indeed, we only made use of $\nabla_x\mathcal{L}_\textup{Q}(\bar x,y,\lambda)=0$ for $(y,\lambda)\in\Lambda_\textup{Q}(\bar x,u)$. 
However, we would like to emphasize that for each $u\in\mathcal{C}_\textup{Q}(\bar x)\setminus\{0\}$, condition [\[eq:SOSC_general\]](#eq:SOSC_general){reference-type="eqref" reference="eq:SOSC_general"} implicitly demands the relations $\mathrm{d}^2\mathbf{g}(\mathbf{c}(\bar x),y)(\mathbf{c}^\prime(\bar x)u)>-\infty$ and $\mathrm{d}^2\mathop{\mathrm{\delta}}_C(\mathbf{h}(\bar x),\lambda)(\mathbf{h}^\prime(\bar x)u)>-\infty$ (the undetermined situation "$\infty-\infty$" has to be avoided to make [\[eq:SOSC_general\]](#eq:SOSC_general){reference-type="eqref" reference="eq:SOSC_general"} meaningful). From [@BenkoMehlitz2023 formulas (12) and (15)], we find $$\langle y, \mathbf{c}^\prime(\bar x)u \rangle\leq\mathrm{d}\mathbf{g}(\mathbf{c}(\bar x))(\mathbf{c}^\prime(\bar x)u), \quad \langle \lambda, \mathbf{h}^\prime(\bar x)u \rangle\leq 0.$$ Now, on the one hand, the definition of the critical cone and $\nabla_x\mathcal{L}_\textup{Q}(\bar x,y,\lambda)=0$ give $$\mathrm{d}\mathbf{g}(\mathbf{c}(\bar x))(\mathbf{c}^\prime(\bar x)u) \leq -\mathbf{f}^\prime(\bar x)u = \langle y, \mathbf{c}^\prime(\bar x)u \rangle+\langle \lambda, \mathbf{h}^\prime(\bar x)u \rangle \leq \langle y, \mathbf{c}^\prime(\bar x)u \rangle$$ which yields $\mathrm{d}\mathbf{g}(\mathbf{c}(\bar x))(\mathbf{c}^\prime(\bar x)u)=\langle y, \mathbf{c}^\prime(\bar x)u \rangle$ and, thus, $y\in\widehat{\partial}^\textup{p}\mathbf{g}(\mathbf{c}(\bar x),\mathbf{c}^\prime(\bar x)u)$. 
On the other hand, based on a similar reasoning, we obtain $$-\langle \lambda, \mathbf{h}^\prime(\bar x)u \rangle = \mathbf{f}^\prime(\bar x)u+\langle y, \mathbf{c}^\prime(\bar x) u \rangle = \mathbf{f}^\prime(\bar x)u+\mathrm{d}\mathbf{g}(\mathbf{c}(\bar x))(\mathbf{c}^\prime(\bar x)u) \leq 0,$$ i.e., $\langle \lambda, \mathbf{h}^\prime(\bar x)u \rangle=0=\mathrm{d}\mathop{\mathrm{\delta}}_C(\mathbf{h}(\bar x))(\mathbf{h}^\prime(\bar x)u)$ due to $\mathbf{h}^\prime(\bar x)u\in T_C(\mathbf{h}(\bar x))$, and this yields $\lambda\in\widehat{N}^\textup{p}_C(\mathbf{h}(\bar x),\mathbf{h}^\prime(\bar x)u)$. Hence, the definition of the directional multiplier set is rather natural, see [@BenkoMehlitz2023 Rem. 5.3, Sec. 6] for related observations. # Fundamentals of composite optimization {#sec:compositeOptimization} We now move our attention to [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and discuss relevant optimality and stationarity notions, building on the results in [3](#sec:constrained_structured_problems){reference-type="ref" reference="sec:constrained_structured_problems"}. Furthermore, we investigate local characterizations using second-order tools, regularity concepts, and error bounds. ## Stationarity concepts and Lagrangian-type functions {#sec:concepts} Interpreting [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} as an *unconstrained* problem, first-order necessary optimality conditions using the notion of M-stationarity pertain to a point $\bar{x}\in\mathbb{R}^n$ such that $0\in\partial\varphi(\bar x)$. We now aim to rewrite this condition in terms of initial problem data, i.e., first-order (generalized) derivatives of $f$, $c$, and $g$. Exploiting compatibility of the limiting subdifferential with respect to smooth additions, we find $0\in\nabla f(\bar x)+\partial(g\circ c)(\bar x)$. It has been recognized, e.g. in [@MohammadiMordukhovichSarabi2022 Sec. 
3], that metric subregularity of the set-valued mapping $\Xi \colon \mathbb{R}^n\times\mathbb{R} \rightrightarrows \mathbb{R}^m\times\mathbb{R}$ given by $$\label{eq:svm_composition} \Xi(x,\alpha) := (c(x),\alpha)-\mathop{\mathrm{epi}}g$$ is enough to guarantee that a subdifferential chain rule can be used to approximate the limiting subdifferential of $g\circ c$ from above in terms of the subdifferential of $g$ and the derivative of $c$. More precisely, if $\Xi$ is metrically subregular at $((\bar x,g(c(\bar x))),(0,0))$, then $\partial(g\circ c)(\bar x)\subseteq c^\prime(\bar x)^\top\partial g(c(\bar x))$. From a Lagrangian perspective, this gives the existence of some Lagrange multiplier $\bar{y}\in\partial g(c(\bar{x}))$ such that $0=\nabla f(\bar x)+c^\prime(\bar x)^\top\bar y$ holds. This stationarity characterization resembles, at least in spirit, the so-called Karush--Kuhn--Tucker (or KKT) conditions in nonlinear programming, see e.g. [@bertsekas1996constrained; @birgin2014practical]. These considerations lead to the following definition, which, in accordance with [\[eq:Q:Mstationary\]](#eq:Q:Mstationary){reference-type="eqref" reference="eq:Q:Mstationary"}, uses the Lagrangian function $\mathcal{L} \colon \mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}$ associated with [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, given by $$\label{eq:Lagrangian} \mathcal{L}(x,y) {}:={} f(x) + \langle y, c(x) \rangle .$$ **Definition 10** (M-stationarity). Relative to [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, a point $\bar x \in \mathbb{R}^n$ is called *M-stationary* if there exists a multiplier $\bar{y} \in \mathbb{R}^m$ such that [\[eq:Mstationary\]]{#eq:Mstationary label="eq:Mstationary"} $$\begin{aligned} 0 {}={}& \nabla_x\mathcal{L}(\bar x,\bar y) , \label{eq:Mstationary:x} \\ \bar y {}\in{}& \partial g( c(\bar x) ) . 
\label{eq:Mstationary:y} \end{aligned}$$ Let $\Lambda(\bar{x})$ denote the set of multipliers $\bar{y}\in\mathbb{R}^m$ such that the M-stationarity conditions [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} are satisfied by $(\bar{x},\bar{y})$. As a reminder of the possible gap highlighted above, where metric subregularity of $\Xi$ is invoked to formulate first-order optimality conditions in Lagrangian terms, the notion given in [Definition 10](#def:Mstationary){reference-type="ref" reference="def:Mstationary"} could be referred to as *KKT*-stationarity, as in [@demarchi2023implicit]. For simplicity, we stick to the nomenclature of *M*-stationarity. Introducing an auxiliary variable $z \equiv c(x) \in \mathbb{R}^m$ to avoid the composition $g \circ c$, in the spirit of [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"}, we can write [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} as [\[eq:ALMz:Mstationary\]]{#eq:ALMz:Mstationary label="eq:ALMz:Mstationary"} $$\begin{aligned} 0 {}={}& \nabla_x \mathcal{L}(\bar{x},\bar{y}) , \label{eq:ALMz:Mstationary:x} \\ \bar{y} {}\in{}& \partial g( \bar{z} ) , \label{eq:ALMz:Mstationary:y} \\ 0 {}={}& c(\bar{x})-\bar{z} . \label{eq:ALMz:Mstationary:z} \end{aligned}$$ These conditions also arise when considering the pre-Lagrangian $\mathcal{L}^{\mathrm{S}}$ of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, with auxiliary variable $z\in\mathbb{R}^m$ and Lagrange multiplier $y\in\mathbb{R}^m$, given by $$\label{eq:ALMz:Lagrangian} \mathcal{L}^{\mathrm{S}}(x,z,y) {}:={} f(x) + g(z) + \langle y, c(x) - z \rangle,$$ and taking $0 \in \partial_x \mathcal{L}^{\mathrm{S}}(\bar{x},\bar{z},\bar{y})$, $0\in\partial_z \mathcal{L}^{\mathrm{S}}(\bar x,\bar z,\bar y)$, and $0\in\partial_y \mathcal{L}^{\mathrm{S}}(\bar x,\bar z,\bar y)$. 
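For illustration, the M-stationarity system [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} can be verified by hand on a one-dimensional composite problem; the following sketch (toy data, chosen purely for illustration) performs this check numerically:

```python
# Toy composite problem (P):  minimize  f(x) + g(c(x))  with
#   f(x) = (x - 1)^2,  c(x) = x,  g = |.|   (data chosen for illustration).
# The unique minimizer is xbar = 1/2 with multiplier ybar = 1.

f_prime = lambda x: 2.0 * (x - 1.0)
c_prime = lambda x: 1.0

def subdiff_abs(z):
    """Convex subdifferential of |.| as an interval [lo, hi]."""
    if z > 0.0:
        return (1.0, 1.0)
    if z < 0.0:
        return (-1.0, -1.0)
    return (-1.0, 1.0)

xbar, ybar = 0.5, 1.0

# (eq:Mstationary:x):  0 = grad_x L(xbar, ybar) = f'(xbar) + c'(xbar) * ybar.
stationarity = f_prime(xbar) + c_prime(xbar) * ybar
print(stationarity)  # 0.0

# (eq:Mstationary:y):  ybar in  subdiff g(c(xbar)) = subdiff |.|(1/2) = {1}.
lo, hi = subdiff_abs(xbar)  # c(xbar) = xbar
print(lo <= ybar <= hi)  # True
```

Hence $(\bar x,\bar y)=(1/2,1)$ satisfies both conditions, in line with the Lagrangian form of the stationarity system.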
Following the nomenclature in [@bolte2018nonconvex; @sabach2019lagrangian], $\mathcal{L}^{\mathrm{S}}$ would be referred to as the Lagrangian of [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"}, as a standalone problem, and not only as the pre-Lagrangian in view of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. In fact, the definition of $\mathcal{L}^{\mathrm{S}}$ in [\[eq:ALMz:Lagrangian\]](#eq:ALMz:Lagrangian){reference-type="eqref" reference="eq:ALMz:Lagrangian"} complies with the classical concept of Lagrangian function for equality-constrained optimization problems, such as [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"}, and reflects the (nonsmooth, extended real-valued) objective $(x,z) \mapsto f(x) + g(z)$ of [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"} and its equality constraints $c(x) - z = 0$. However, containing (primal) nonsmooth terms, $\mathcal{L}^{\mathrm{S}}$ is not differentiable. The object $\mathcal{L}$ defined in [\[eq:Lagrangian\]](#eq:Lagrangian){reference-type="eqref" reference="eq:Lagrangian"} corresponds to the *ordinary* Lagrangian function of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} as described in [@rockafellar2022convergence], and this is consistent with several other papers which exploit the variational analysis approach to composite optimization, see e.g. [@BenkoGfrererYeZhangZhou2022; @BenkoMehlitz2023; @HangMordukhovichSarabi2022; @HangSarabi2021; @MohammadiMordukhovichSarabi2022; @MordukhovichSarabi2018; @MordukhovichSarabi2019] and, particularly, the setting of standard nonlinear programming, see [Example 13](#ex:NLP){reference-type="ref" reference="ex:NLP"} below. 
Above, we derived the M-stationarity conditions of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} at some feasible point $\bar x\in\mathbb{R}^n$ by using the chain rule for the limiting subdifferential which, in general, requires a qualification condition like metric subregularity of $\Xi$ from [\[eq:svm_composition\]](#eq:svm_composition){reference-type="eqref" reference="eq:svm_composition"} at $((\bar x,g(c(\bar x))),(0,0))$. Note that [\[eq:MRCQ\]](#eq:MRCQ){reference-type="eqref" reference="eq:MRCQ"} reduces to $$\label{eq:metric_regularity_CQ} \partial^\infty g(c(\bar x))\cap\ker c^\prime(\bar x)^\top = \{0\}$$ when applied to $\Xi$ at the given reference point, and the latter is equivalent to the mapping $\Xi$ being metrically regular at $((\bar x,g(c(\bar x))),(0,0))$, which also extends to a neighborhood of the point of interest. Thus, [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} is sufficient for the subregularity requirement stated earlier. Clearly, [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} is valid whenever $g$ is locally Lipschitz continuous at $c(\bar x)$ or if $c^\prime(\bar x)$ has full row rank. As we know that $0\in\partial\varphi(\bar x)$ provides a necessary optimality condition for the local optimality of $\bar x$, the M-stationarity conditions from [Definition 10](#def:Mstationary){reference-type="ref" reference="def:Mstationary"} do so as well in the presence of a suitable CQ like [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} as outlined above. We shall introduce *augmented* Lagrangian functions, which not only offer the basic component for AL methods, but also closely relate to first-order optimality concepts. 
An AL function for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} can be obtained in two steps: augmenting the pre-Lagrangian $\mathcal{L}^{\mathrm{S}}$ with a penalty term, and then marginalizing over the auxiliary variables. For some penalty parameter $\mu>0$, the AL function $\mathcal{L}^{\mathrm{S}}_\mu \colon \mathbb{R}^n\times \mathbb{R}^m\times \mathbb{R}^m \to \overline{\mathbb{R}}$ associated to [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"} is the sum of the pre-Lagrangian $\mathcal{L}^{\mathrm{S}}$ and a quadratic penalty for the constraint violation, weighted by $1/(2\mu)$. This leads to the definition of $\mathcal{L}^{\mathrm{S}}_\mu$ as $$\label{eq:ALMz:L} \mathcal{L}^{\mathrm{S}}_\mu(x,z,y) {}:={} \mathcal{L}^{\mathrm{S}}(x,z,y) + \frac{1}{2 \mu} \Vert c(x) - z \Vert^2 . \nonumber$$ Then, since [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"} involves the minimization over both original and auxiliary variables, whereas the latter ones do not appear in the original [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, we consider the marginalization of $\mathcal{L}^{\mathrm{S}}_\mu$ over $z$, which yields the *augmented Lagrangian* function $\mathcal{L}_\mu \colon \mathbb{R}^n\times \mathbb{R}^m \to \mathbb{R}$ associated to [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}: $$\begin{aligned} \label{eq:L} \mathcal{L}_\mu(x,y) {}:={}& \inf_z \mathcal{L}^{\mathrm{S}}_\mu(x,z,y) \\ {}={}& f(x) + \inf_z \left\{ g(z) + \frac{1}{2 \mu} \Vert c(x) + \mu y - z \Vert^2 \right\} - \frac{\mu}{2} \Vert y \Vert^2 \nonumber\\ {}={}& f(x) + g^\mu( c(x) + \mu y ) - \frac{\mu}{2} \Vert y \Vert^2 . \nonumber\end{aligned}$$ Notice that the minimization over $z$ is well-defined only for sufficiently small penalty parameters, relative to the prox-boundedness threshold of $g$, in particular $\mu \in (0, \mu_g)$. 
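As a concrete toy illustration (not taken from the paper), the closed-form expression $\mathcal{L}_\mu(x,y)=f(x)+g^\mu(c(x)+\mu y)-\frac{\mu}{2}\Vert y\Vert^2$ can be compared numerically against the defining infimum for $f(x)=x^2$, $c(x)=x$, and $g=|\cdot|$, whose Moreau envelope is the Huber function; all function names below are hypothetical.

```python
import numpy as np

def moreau_env_abs(v, mu):
    # Moreau envelope of g = |.|: the Huber function,
    # g^mu(v) = v^2/(2*mu) if |v| <= mu, and |v| - mu/2 otherwise
    return np.where(np.abs(v) <= mu, v**2 / (2 * mu), np.abs(v) - mu / 2)

def aug_lagrangian(x, y, mu):
    # closed form: L_mu(x, y) = f(x) + g^mu(c(x) + mu*y) - (mu/2)*||y||^2,
    # here with f(x) = x^2 and c(x) = x
    return x**2 + moreau_env_abs(x + mu * y, mu) - 0.5 * mu * y**2

def aug_lagrangian_bruteforce(x, y, mu, grid):
    # defining infimum: inf_z { f(x) + g(z) + <y, c(x) - z> + ||c(x) - z||^2/(2*mu) }
    vals = x**2 + np.abs(grid) + y * (x - grid) + (x - grid)**2 / (2 * mu)
    return vals.min()

x, y, mu = 0.7, -0.3, 0.5
grid = np.linspace(-5.0, 5.0, 200001)
assert abs(aug_lagrangian(x, y, mu) - aug_lagrangian_bruteforce(x, y, mu, grid)) < 1e-6
```

The two values agree up to the grid resolution; the minimizer of the infimum is exactly $\mathop{\mathrm{prox}}_{\mu g}(c(x)+\mu y)$, here the soft-thresholding of $x+\mu y$.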
Moreover, we highlight that the Moreau envelope $g^\mu \colon \mathbb{R}^m \to \mathbb{R}$ of $g$ is real-valued and strictly continuous [@rockafellar1998variational Ex. 10.32], but not continuously differentiable in general, as the proximal mapping of $g$ is possibly set-valued. With the AL tools at hand, one can readily recover the M-stationarity conditions [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. Through the augmented pre-Lagrangian function $\mathcal{L}^{\mathrm{S}}_\mu$ associated to [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"}, the first-order optimality conditions in the form of [\[eq:ALMz:Mstationary\]](#eq:ALMz:Mstationary){reference-type="eqref" reference="eq:ALMz:Mstationary"} can be written, for any $\mu > 0$, as $0 \in \partial_{x} \mathcal{L}^{\mathrm{S}}_\mu(\bar{x}, c(\bar{x}), \bar{y})$, $0\in\partial_z\mathcal{L}^{\mathrm{S}}_\mu(\bar x,c(\bar x),\bar y)$, and $0\in\partial_y\mathcal{L}^{\mathrm{S}}_\mu(\bar x,c(\bar x),\bar y)$, which hold if and only if $(\bar{x},\bar{y}) \in \mathbb{R}^n\times \mathbb{R}^m$ satisfies [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"}. The following lemma, which is inspired by [@BoergensKanzowSteck2019 Lem. 3.1], will come in handy later on. **Lemma 11**. *If $x\in\mathbb{R}^n$ is feasible for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, then $\mathcal{L}_\mu(x,y) \leq \varphi(x)$ for all $\mu > 0$ and $y \in \mathbb{R}^m$.* *Proof.* By feasibility of $x$, we have $c(x) \in \mathop{\mathrm{dom}}g$. Then, for all $\mu > 0$ and $y\in\mathbb{R}^m$, $$g^\mu(c(x)+\mu y ) {}={} \inf_z \left\{ g(z) + \frac{1}{2\mu} \Vert c(x)+\mu y-z \Vert^2 \right\} {}\leq{} g(c(x)) + \frac{1}{2\mu} \Vert \mu y \Vert^2 {}={} g(c(x)) + \frac{\mu}{2} \Vert y \Vert^2$$ by selecting $z = c(x) \in \mathop{\mathrm{dom}}g$. 
Hence, $$\mathcal{L}_\mu(x,y) = f(x) + g^\mu(c(x)+\mu y ) - \frac{\mu}{2} \Vert y \Vert^2 \leq f(x) + g(c(x)) = \varphi(x) ,$$ concluding the proof. ◻ In view of the AL subproblems arising below in [5](#sec:ALM){reference-type="ref" reference="sec:ALM"}, the subsequent remark applies the notion of $\Upsilon$-stationarity, already discussed in [2.2](#sec:VA){reference-type="ref" reference="sec:VA"}, to the AL function $\mathcal{L}_\mu$. **Remark 12**. Motivated by the minimization of the AL function $\mathcal{L}_\mu(\cdot,\hat y)$ where $\hat y\in\mathbb{R}^m$ is fixed, we are interested in pairs $(\bar x,\bar z)\in\mathbb{R}^n\times\mathbb{R}^m$ which certify $\varepsilon$-$\Upsilon$-stationarity of $\bar x$ for $\mathcal{L}_\mu(\cdot,\hat y)$, for some given $\varepsilon\geq 0$. A simple calculation reveals that $$\Upsilon(\bar x) = \bigcup\limits_{\bar z\in\mathop{\mathrm{prox}}_{\mu g}(c(\bar x)+\mu \hat y)} \left\{ \nabla_x\mathcal{L}(\bar x,\bar y) \,\middle|\, \bar y:=\hat y + \frac{c(\bar x)-\bar z}{\mu} \in\partial g(\bar z) \right\}$$ holds in this situation. Clearly, $\bar z\in\mathop{\mathrm{prox}}_{\mu g}(c(\bar x)+\mu \hat y)$ always gives $\hat y+(c(\bar x)-\bar z)/\mu\in\partial g(\bar z)$ by Fermat's rule (the converse is true for convex $g$), so $\varepsilon$-$\Upsilon$-stationarity boils down to the existence of $\bar z\in\mathop{\mathrm{prox}}_{\mu g}(c(\bar x)+\mu \hat y)$ such that $\Vert \nabla_x\mathcal{L}(\bar x,\bar y) \Vert\leq\varepsilon$ where $\bar y :=\hat y+(c(\bar x)-\bar z)/\mu$. 
Obviously, for arbitrary $\mu>0$ and any pair $(\bar x,\bar z)$ certifying $\Upsilon$-stationarity (where $\varepsilon:=0$) of $\bar x$ for $\mathcal{L}_\mu(\cdot,\hat y)$ in the above sense such that $\bar z=c(\bar x)$ holds, $\bar x$ is also M-stationary (in the sense of [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} and [\[eq:ALMz:Mstationary\]](#eq:ALMz:Mstationary){reference-type="eqref" reference="eq:ALMz:Mstationary"}). Note that this implicitly demands $\mu\in(0,\mu_g)$. The converse is true whenever $g$ is a convex function, and, in this case, the proximal mapping is single-valued. Some of the concepts addressed in this section are visualized in the following example in terms of standard nonlinear programming. **Example 13**. Nonlinear programming can be cast in the form [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} via many reformulations. Let us consider the setting $$\tag{NLP}\label{eq:NLP} \mathop{\mathrm{minimize}}_x {}\quad{} f(x) {}\quad{} \mathop{\mathrm{subject~to}} {}\quad{} c(x) \in C$$ with $g \equiv \mathop{\mathrm{\delta}}_C$ being the indicator of a nonempty, closed, convex set $C :=[c_l, c_u]$. Allowing entries of $c_l$ and $c_u$ to take infinite values, namely $c_l \in (\mathbb{R}\cup\{-\infty\})^m$ and $c_u \in (\mathbb{R}\cup \{\infty\})^m$, the model includes equalities, inequalities, and bounds in a compact form, and the constraint set $C$ is convex polyhedral. 
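In this box-constrained setting, $\mathop{\mathrm{prox}}_{\mu g}$ with $g=\mathop{\mathrm{\delta}}_C$ is the Euclidean projection onto $C=[c_l,c_u]$ (independently of $\mu$), and the Moreau envelope is the scaled squared distance $g^\mu=\mathop{\mathrm{dist}}_C^2/(2\mu)$. A minimal numerical sketch with toy data; the helper names are hypothetical and not part of the paper:

```python
import numpy as np

def proj_box(v, lo, hi):
    # prox of the indicator of C = [lo, hi]: the projection onto the box
    return np.clip(v, lo, hi)

def env_box(v, lo, hi, mu):
    # Moreau envelope of delta_C: squared distance to C, scaled by 1/(2*mu)
    return np.sum((v - proj_box(v, lo, hi))**2) / (2 * mu)

v = np.array([1.5, -0.2, 0.3])
lo = np.array([0.0, 0.0, -np.inf])  # infinite bounds encode one-sided constraints
hi = np.array([1.0, np.inf, 1.0])
assert np.allclose(proj_box(v, lo, hi), [1.0, 0.0, 0.3])
assert np.isclose(env_box(v, lo, hi, mu=0.5), (0.5**2 + 0.2**2) / 1.0)
```

Since $\mathop{\mathrm{\delta}}_C$ is convex, no prox-boundedness restriction on $\mu$ is needed here, and the squared-distance envelope is continuously differentiable.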
The pre-Lagrangian for [\[eq:NLP\]](#eq:NLP){reference-type="eqref" reference="eq:NLP"} with auxiliary variable $z\in\mathbb{R}^m$ and multiplier $y\in\mathbb{R}^m$ reads $$\mathcal{L}^{\mathrm{S}}(x,z,y) {}={} f(x) + \mathop{\mathrm{\delta}}_C(z) + \langle y, c(x) - z \rangle .$$ The first-order optimality conditions $0 \in \partial_x\mathcal{L}^{\mathrm{S}}(\bar{x},c(\bar{x}),\bar{y}),0\in\partial_z\mathcal{L}^{\mathrm{S}}(\bar x,c(\bar x),\bar y),0\in\partial_y\mathcal{L}^{\mathrm{S}}(\bar x,c(\bar x),\bar y)$ can be expressed as $$\nabla f(\bar{x}) + c^\prime(\bar{x})^\top \bar{y} {}={} 0 , \qquad \bar{y} {}\in{} N_C( c(\bar{x}) ) ,$$ where the inclusion coincides with the classical complementarity conditions and imposes the feasibility condition $c(\bar{x}) \in C$ as well. The Lagrangian $\mathcal{L}$ for [\[eq:NLP\]](#eq:NLP){reference-type="eqref" reference="eq:NLP"} is $\mathcal{L}(x,y) {}={} f(x) + \langle y, c(x) \rangle$ and the AL $\mathcal{L}_\mu$ is given by $$\mathcal{L}_\mu(x,y) {}={} f(x) + \frac{1}{2\mu} \mathop{\mathrm{dist}}_C^2( c(x) + \mu y ) - \frac{\mu}{2} \Vert y \Vert^2 ,$$ recovering all classical quantities. As $C$ is convex in [\[eq:NLP\]](#eq:NLP){reference-type="eqref" reference="eq:NLP"}, the squared distance term in the AL function is continuously differentiable, see e.g. [@BauschkeCombettes2011 Cor. 12.30]. **Remark 14**. Yet another way to the definition of a Lagrangian-type function in composite optimization with *convex* function $g$ has been promoted by Rockafellar in his recent papers [@rockafellar2022augmented; @rockafellar2022convergence] where he introduces the so-called *generalized* Lagrangian of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} by marginalization of the pre-Lagrangian $\mathcal{L}^{\mathrm{S}}$ given in [\[eq:ALMz:Lagrangian\]](#eq:ALMz:Lagrangian){reference-type="eqref" reference="eq:ALMz:Lagrangian"}. 
The marginalization step enters here because [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"} involves the minimization over both original and auxiliary variables, whereas the latter ones do not appear in [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. Marginalization of $\mathcal{L}^{\mathrm{S}}$ over $z$ results in the generalized Lagrangian $\ell \colon \mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}\cup\{-\infty\}$ given by $$\begin{aligned} \ell(x,y) {}:={}& \inf_z \mathcal{L}^{\mathrm{S}}(x,z,y) \\ {}={}& f(x) + \langle y, c(x) \rangle + \inf_z \{ g(z) - \langle y, z \rangle \} \nonumber\\ {}={}& \mathcal{L}(x,y) - g^\ast(y) , \nonumber \end{aligned}$$ where $\mathcal{L}$ is the Lagrangian defined in [\[eq:Lagrangian\]](#eq:Lagrangian){reference-type="eqref" reference="eq:Lagrangian"}. Clearly, $\nabla_x \mathcal{L}= \nabla_x \ell$. One could hope that the generalized Lagrangian $\ell$ encapsulates all information needed to state the M-stationarity conditions [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, for instance as $0 \in \partial_x\ell(\bar{x},\bar{y}),0 \in \partial_y(-\ell)(\bar{x},\bar{y})$. The negative sign appearing for the multipliers relates to the (generalized) saddle-point properties of the (generalized) Lagrangian. Expanding terms, this gives [\[eq:stat_via_gen_Lag\]]{#eq:stat_via_gen_Lag label="eq:stat_via_gen_Lag"} $$\begin{aligned} \label{eq:stat_via_gen_Lag_x} 0 {}={}& \nabla_x \mathcal{L}(\bar{x},\bar{y}), \\ \label{eq:stat_via_gen_Lag_y} c(\bar{x}) {}\in{}& \partial g^\ast(\bar{y}). \end{aligned}$$ If $g$ is convex, [\[eq:stat_via_gen_Lag_y\]](#eq:stat_via_gen_Lag_y){reference-type="eqref" reference="eq:stat_via_gen_Lag_y"} is equivalent to [\[eq:Mstationary:y\]](#eq:Mstationary:y){reference-type="eqref" reference="eq:Mstationary:y"}, see [@rockafellar1998variational Prop. 
11.3], so that M-stationarity can be fully characterized via the derivatives of the generalized Lagrangian. Whenever $g$ is a nonconvex function, this reasoning is, unfortunately, no longer possible. Under additional assumptions on $g$ (and $g^\ast$), one may apply the convex hull property, see e.g. [@DempeDuttaMordukhovich2007 formula (2.7)], and a marginal function rule, see e.g. [@BenkoMehlitz2022 Thm 5.1] or [@rockafellar1998variational Thm 10.13], in order to find $$\partial g^\ast(y) \subseteq -\mathop{\mathrm{conv}}\partial(-g^\ast)(y) \subseteq -\mathop{\mathrm{conv}}\{-z\in\mathbb{R}^m\,|\,y\in\partial g(z)\} = \mathop{\mathrm{conv}}(\partial g)^{-1}(y).$$ Hence, whenever $(\partial g)^{-1}(y)$ is convex, [\[eq:stat_via_gen_Lag_y\]](#eq:stat_via_gen_Lag_y){reference-type="eqref" reference="eq:stat_via_gen_Lag_y"} yields [\[eq:Mstationary:y\]](#eq:Mstationary:y){reference-type="eqref" reference="eq:Mstationary:y"} if the aforementioned calculus rules apply. Consequently, under additional assumptions, [\[eq:stat_via_gen_Lag\]](#eq:stat_via_gen_Lag){reference-type="eqref" reference="eq:stat_via_gen_Lag"} implies the M-stationarity conditions [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} even for nonconvex $g$. However, the converse implication is likely to fail even in very simple situations, as illustrated in [Example 15](#ex:conjugates_for_nonconvex_g_messy){reference-type="ref" reference="ex:conjugates_for_nonconvex_g_messy"} below. **Example 15**. We investigate the model problem $$\label{eq:sparse_programming} \mathop{\mathrm{minimize}}_{x} {}\quad{} f(x) + \Vert c(x) \Vert_0,$$ i.e., where $g$ plays the role of the $\ell_0$-quasi-norm $\Vert \cdot \Vert_0 \colon \mathbb{R}^m \to \mathbb{R}$ which simply counts the nonzero entries of the argument vector. Clearly, $\Vert \cdot \Vert_0$ is a merely lower semicontinuous function which is not convex. 
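Although $\Vert\cdot\Vert_0$ is nonconvex, its proximal mapping and Moreau envelope are available in closed form: the prox is hard thresholding, which is genuinely set-valued at $|v_i|=\sqrt{2\mu}$ (both $0$ and $v_i$ are minimizers there). A small numerical sketch, returning one selection of the prox; the names are hypothetical and not from the paper:

```python
import numpy as np

def prox_l0(v, mu):
    # one selection of the set-valued prox of mu*||.||_0: hard thresholding;
    # keep v_i whenever v_i^2 > 2*mu, otherwise set it to zero
    return np.where(v**2 > 2 * mu, v, 0.0)

def env_l0(v, mu):
    # Moreau envelope of ||.||_0: sum of min(1, v_i^2/(2*mu)) over coordinates
    return np.sum(np.minimum(1.0, v**2 / (2 * mu)))

mu = 0.5
v = np.array([2.0, 0.5, -1.0])  # threshold sqrt(2*mu) = 1
assert np.allclose(prox_l0(v, mu), [2.0, 0.0, 0.0])

# brute-force check of the envelope on a grid containing 0 exactly
grid = np.arange(-40000, 40001) * 1e-4
def env_bruteforce(vi):
    return np.min((grid != 0).astype(float) + (vi - grid)**2 / (2 * mu))
assert abs(env_l0(v, mu) - sum(env_bruteforce(vi) for vi in v)) < 1e-6
```

At the threshold coordinate $v_3=-1$ above, this particular selection sets the entry to zero, although keeping it would be equally valid; this set-valuedness is exactly why the envelope of a nonconvex $g$ fails to be continuously differentiable in general.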
For some point $\bar x\in\mathbb{R}^n$, we will exploit the index sets $$I^0(\bar x):=\{i\in\{1,\ldots,m\}\,|\,c_i(\bar x)=0\}, \quad I^{\pm}(\bar x):=\{1,\ldots,m\}\setminus I^0(\bar x).$$ One can easily check that $$\partial\Vert \cdot \Vert_0(c(\bar x)) = \{y\in\mathbb{R}^m\,|\,\forall i\in I^{\pm}(\bar x)\colon\,y_i=0\}$$ holds true. Hence, $\bar x$ is M-stationary for [\[eq:sparse_programming\]](#eq:sparse_programming){reference-type="eqref" reference="eq:sparse_programming"} if and only if there is some $\bar y\in\mathbb{R}^m$ such that [\[eq:Mstationary:x\]](#eq:Mstationary:x){reference-type="eqref" reference="eq:Mstationary:x"} is valid and, for all $i\in I^\pm(\bar x)$, we have $\bar y_i=0$. A simple calculation reveals that $\Vert \cdot \Vert_0^\ast= \mathop{\mathrm{\delta}}_{\{0\}}$, which is why condition [\[eq:stat_via_gen_Lag_y\]](#eq:stat_via_gen_Lag_y){reference-type="eqref" reference="eq:stat_via_gen_Lag_y"} reduces to $\bar y=0$. This is a much stronger requirement on the multiplier than the one demanded by M-stationarity. ## Second-order sufficient optimality conditions {#sec:SOSC:P} Motivated by our findings from [3.2](#sec:SOSC:general){reference-type="ref" reference="sec:SOSC:general"} and [@BenkoMehlitz2023 Sec. 6], we now state second-order sufficient optimality conditions for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and discuss some consequences. 
Let us fix a feasible point $\bar x\in\mathbb{R}^n$ of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, and define the *critical cone* of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} at $\bar x$ by means of $$\mathcal C(\bar x) := \{u\in\mathbb{R}^n\,|\,f^\prime(\bar x)u+\mathrm{d}g(c(\bar x))(c^\prime(\bar x)u)\leq 0\}.$$ Furthermore, for each $u\in\mathcal C(\bar x)$, we make use of the *directional multiplier set* $\Lambda(\bar x,u)$ given by $$\Lambda(\bar x,u) := \{y\in\mathbb{R}^m\,|\,\nabla_x\mathcal{L}(\bar x,y)=0,\,y\in\widehat{\partial}^\textup{p}g(c(\bar x),c^\prime(\bar x)u)\}.$$ Note that this is consistent with the notation coined in [3.2](#sec:SOSC:general){reference-type="ref" reference="sec:SOSC:general"}. **Definition 16** (Second-order sufficient conditions). For a feasible point $\bar x\in\mathbb{R}^n$ of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, we say that the *Weak Second-Order Sufficient Condition* (WSOSC for brevity) is valid, whenever for each $u\in\mathcal C(\bar x)\setminus\{0\}$, there exists $y\in\Lambda(\bar x,u)$ such that $$\label{eq:SOSC_P} \nabla^2_{xx}\mathcal{L}(\bar x,y)[u,u] + \mathrm{d}^2g(c(\bar x),y)(c^\prime(\bar x)u) > 0.$$ For a fixed multiplier $y\in \Lambda(\bar x)$, we say that the *Strong Second-Order Sufficient Condition* (SSOSC for brevity) is valid at $\bar x$ for $y$, whenever [\[eq:SOSC_P\]](#eq:SOSC_P){reference-type="eqref" reference="eq:SOSC_P"} is satisfied for each $u\in\mathcal C(\bar x)\setminus\{0\}$. Note that [Lemma 8](#lem:directional_proximal_subdifferential){reference-type="ref" reference="lem:directional_proximal_subdifferential"} yields $$\Lambda(\bar x,u) \subseteq \{y\in\mathbb{R}^m\,|\,\nabla_x\mathcal{L}(\bar x,y)=0,\,y\in\partial g(c(\bar x),(c^\prime(\bar x)u,\mathrm{d}g(c(\bar x))(c^\prime(\bar x)u)))\} \subseteq \Lambda(\bar x).$$ Hence, a point where WSOSC holds must be M-stationary. 
Furthermore, whenever SSOSC holds at $\bar x$ for $y$, then a similar reasoning as in [3.2](#sec:SOSC:general){reference-type="ref" reference="sec:SOSC:general"} yields $y\in\bigcap_{u\in\mathcal C(\bar x)\setminus\{0\}}\widehat{\partial}^\textup{p}g(c(\bar x),c^\prime(\bar x)u)$. Particularly, SSOSC implies WSOSC. Applying [Theorem 9](#thm:SOSC_general){reference-type="ref" reference="thm:SOSC_general"} with $\mathbf{f}:=f$, $\mathbf{g}:=g$, and $\mathbf{c}:=c$ in the absence of constraints yields that whenever $\bar x\in\mathbb{R}^n$ is a feasible point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} where WSOSC holds, then the second-order growth condition $$\label{eq:second_order_growth_P} \forall x\in\mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x)\colon\quad \varphi(x)-\varphi(\bar x)\geq\frac{\beta}{2}\Vert x-\bar x \Vert^2$$ holds for constants $\varepsilon>0$ and $\beta>0$. However, we will see in a moment that WSOSC even implies a slightly stronger growth condition which allows for perturbations of the input of the function $g$. **Lemma 17**. *Let $\bar x\in\mathbb{R}^n$ be a feasible point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} where WSOSC holds. 
Then there are constants $\varepsilon>0$ and $\beta>0$ such that $$\label{eq:essential_local_minimizer_P_S} \forall (x,z)\in\mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x,c(\bar x))\colon\quad \max\{f(x)+g(z)-\varphi(\bar x),\Vert c(x)-z \Vert\}\geq\frac{\beta}{2}(\Vert x-\bar x \Vert^2+\Vert z-c(\bar x) \Vert^2).$$ Particularly, $\bar x$ is a strict local minimizer of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}.* *Proof.* Let us investigate the lifted model [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"}, which is a special instance of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} with $$\mathbf{f}(x,z):=f(x),\quad \mathbf{g}(x,z):=g(z),\quad \mathbf{c}(x,z):=(x,z),\quad \mathbf{h}(x,z):=c(x)-z,\quad C:=\{0\}.$$ Let us now investigate the point $(\bar x,c(\bar x))$. We will show that WSOSC implies validity of the assumptions of [Theorem 9](#thm:SOSC_general){reference-type="ref" reference="thm:SOSC_general"} in this situation. Then the assertion directly follows as [\[eq:essential_local_minimizer_general\]](#eq:essential_local_minimizer_general){reference-type="eqref" reference="eq:essential_local_minimizer_general"} reduces to [\[eq:essential_local_minimizer_P\_S\]](#eq:essential_local_minimizer_P_S){reference-type="eqref" reference="eq:essential_local_minimizer_P_S"}. 
In the present situation, the critical cone for [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} at $(\bar x,c(\bar x))$ is given by $$\begin{aligned} \mathcal C_{\textup{Q}}(\bar x,c(\bar x)) &= \{(u,w)\in\mathbb{R}^n\times\mathbb{R}^m\,|\,f^\prime(\bar x)u+\mathrm{d}g(c(\bar x))(w)\leq 0,\,c^\prime(\bar x)u-w=0\} \\ &= \{(u,c^\prime(\bar x)u)\in\mathbb{R}^n\times\mathbb{R}^m\,|\,u\in\mathcal C(\bar x)\}, \end{aligned}$$ and for $u\in\mathcal C(\bar x)$, the directional multiplier set reduces to $$\begin{aligned} &\Lambda_\textup{Q}((\bar x,c(\bar x)),(u,c^\prime(\bar x)u)) \\ &\quad = \left\{(y_x,y_z,\lambda)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m\,\middle|\, \begin{aligned} & \nabla f(\bar x)+y_x+c^\prime(\bar x)^\top\lambda=0,\,y_z-\lambda=0, \\ & y_x=0,\,y_z\in\widehat{\partial}^\textup{p}g(c(\bar x),c^\prime(\bar x)u),\,\lambda\in\mathbb{R}^m \end{aligned} \right\} \\ &\quad = \{(0,y,y)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m\,|\,y\in\Lambda(\bar x,u)\}. \end{aligned}$$ Finally, we observe that for $y\in\Lambda(\bar x,u)$, we have $$\nabla^2_{(x,z),(x,z)}\mathcal{L}_\textup{Q}((\bar x,c(\bar x)),(0,y,y))[(u,c^\prime(\bar x)u),(u,c^\prime(\bar x)u)] = \nabla^2_{xx}\mathcal{L}(\bar x,y)[u,u]$$ as well as $$\begin{aligned} \mathrm{d}^2\mathbf{g}(\mathbf{c}(\bar x,c(\bar x)),(0,y))(u,c^\prime(\bar x)u) &= \mathrm{d}^2g(c(\bar x),y)(c^\prime(\bar x)u), \\ \mathrm{d}^2\mathop{\mathrm{\delta}}_C(\mathbf{h}(\bar x,c(\bar x)),y)(c^\prime(\bar x)u-c^\prime(\bar x)u) &= \mathrm{d}^2\mathop{\mathrm{\delta}}_{\{0\}}(0,y)(0) = 0. \end{aligned}$$ Thus, WSOSC implies validity of the assumptions of [Theorem 9](#thm:SOSC_general){reference-type="ref" reference="thm:SOSC_general"} in the present situation, and the claim follows. ◻ Let us assume that [\[eq:essential_local_minimizer_P\_S\]](#eq:essential_local_minimizer_P_S){reference-type="eqref" reference="eq:essential_local_minimizer_P_S"} holds for some constants $\varepsilon>0$ and $\beta>0$. 
Then there is some $\tilde\varepsilon\in(0,\varepsilon)$ such that for each $x\in\mathop{\mathrm{\mathbb{B}}}_{\tilde\varepsilon}(\bar x)$, we have $\Vert c(x)-c(\bar x) \Vert\leq\varepsilon$. Choosing $z = c(x)$ in [\[eq:essential_local_minimizer_P\_S\]](#eq:essential_local_minimizer_P_S){reference-type="eqref" reference="eq:essential_local_minimizer_P_S"} gives $$\varphi(x)-\varphi(\bar x) \geq \frac{\beta}{2}(\Vert x-\bar x \Vert^2+\Vert c(x)-c(\bar x) \Vert^2) \geq \frac{\beta}{2}\Vert x-\bar x \Vert^2,$$ i.e., the standard second-order growth condition for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} from [\[eq:second_order_growth_P\]](#eq:second_order_growth_P){reference-type="eqref" reference="eq:second_order_growth_P"} holds for $\tilde\varepsilon$ in place of $\varepsilon$. **Remark 18**. Let us fix a feasible point $\bar x\in\mathbb{R}^n$ of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. We investigate the surrogate model $$\label{eq:surrogate_Rockafellar} \mathop{\mathrm{minimize}}_{x , z^\prime} {}\quad{} f(x) + g(c(x)-z^\prime) {}\qquad{} \mathop{\mathrm{subject~to}} {}\quad{} z^\prime = 0$$ with perturbation variable $z^\prime\in\mathbb{R}^m$, which has been employed by Rockafellar in his recent papers [@rockafellar2022augmented; @rockafellar2022convergence] to analyze AL methods for composite optimization problems where the function $g$ is *convex*. 
We notice that $(\bar x,0)$ is feasible to [\[eq:surrogate_Rockafellar\]](#eq:surrogate_Rockafellar){reference-type="eqref" reference="eq:surrogate_Rockafellar"}, and that this problem is a special instance of [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} with $$\mathbf{f}(x,z^\prime) :=f(x), \quad \mathbf{g}:=g, \quad \mathbf{c}(x,z^\prime) :=c(x)-z^\prime, \quad \mathbf{h}(x,z^\prime) :=z^\prime, \quad C :=\{0\}.$$ Proceeding in a similar way as in the proof of [Lemma 17](#lem:SOSC_implies_enhanced_growth_condition){reference-type="ref" reference="lem:SOSC_implies_enhanced_growth_condition"}, we can show that whenever WSOSC holds at $\bar x$, then there are constants $\varepsilon>0$ and $\beta>0$ such that $$\label{eq:essential_local_minimizer_surrogate_Rockafellar} \forall (x,z^\prime)\in\mathop{\mathrm{\mathbb{B}}}_\varepsilon(\bar x,0)\colon\quad \max\{f(x)+g(c(x)-z^\prime)-\varphi(\bar x),\Vert z^\prime \Vert\} \geq \frac\beta2(\Vert x-\bar x \Vert^2+\Vert z^\prime \Vert^2),$$ encapsulating the fact that $(\bar x,0)$ is an essential local minimizer of [\[eq:surrogate_Rockafellar\]](#eq:surrogate_Rockafellar){reference-type="eqref" reference="eq:surrogate_Rockafellar"}. Similarly to [\[eq:essential_local_minimizer_P\_S\]](#eq:essential_local_minimizer_P_S){reference-type="eqref" reference="eq:essential_local_minimizer_P_S"}, the growth condition [\[eq:essential_local_minimizer_surrogate_Rockafellar\]](#eq:essential_local_minimizer_surrogate_Rockafellar){reference-type="eqref" reference="eq:essential_local_minimizer_surrogate_Rockafellar"} allows for perturbations of feasibility. Validity of WSOSC at some feasible point $\bar x\in\mathbb{R}^n$ of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} gives a sufficient condition for a sequence of asymptotically feasible points to converge to $\bar{x}$, see [@BoergensKanzowSteck2019 Cor. 6.2] for a related result. **Corollary 19**. 
*Let $\bar{x} \in \mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} satisfying WSOSC. Then there exists $r > 0$ such that, whenever $\{x^k\} \subseteq \mathop{\mathrm{\mathbb{B}}}_r(\bar{x})$ and $\{z^k\}\subseteq \mathop{\mathrm{dom}}g$ are sequences with $\Vert c(x^k) - z^k \Vert \to 0$ and $\limsup_{k \to \infty} (f(x^k) + g(z^k)) \leq \varphi(\bar{x})$, then $x^k \to \bar{x}$ and $z^k \to c(\bar{x})$.* *Proof.* Due to [Lemma 17](#lem:SOSC_implies_enhanced_growth_condition){reference-type="ref" reference="lem:SOSC_implies_enhanced_growth_condition"}, the growth condition [\[eq:essential_local_minimizer_P\_S\]](#eq:essential_local_minimizer_P_S){reference-type="eqref" reference="eq:essential_local_minimizer_P_S"} holds as $\bar x$ satisfies WSOSC. Let $\varepsilon>0$ and $\beta>0$ be the constants used in [\[eq:essential_local_minimizer_P\_S\]](#eq:essential_local_minimizer_P_S){reference-type="eqref" reference="eq:essential_local_minimizer_P_S"}, and set $r :=\varepsilon$. Hence, for any sequences satisfying the requirements stated by the corollary, $$\begin{aligned} 0 {}\leq{}& \limsup_{k\to\infty} \left[ \max\{ f(x^k) + g(z^k) - \varphi(\bar{x}), \Vert c(x^k) - z^k \Vert \} - \frac{\beta}{2} \left( \Vert x^k - \bar{x} \Vert^2 + \Vert z^k - c(\bar{x}) \Vert^2 \right) \right] \\ {}\leq{}& - \frac{\beta}{2} \liminf_{k\to\infty} \left( \Vert x^k - \bar{x} \Vert^2 + \Vert z^k - c(\bar{x}) \Vert^2 \right) \leq 0 \end{aligned}$$ is obtained, which implies $x^k \to \bar{x}$ and $z^k \to c(\bar{x})$. ◻ ## Error bounds {#sec:ErrorBounds} Here, we aim to establish a connection between the second-order sufficient conditions from [Definition 16](#def:SOSC){reference-type="ref" reference="def:SOSC"} and an error bound property. 
Relating to stability properties and involving the distance to the primal-dual solution set, error bounds are an essential ingredient for deriving rates of local convergence for numerical methods addressing [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. In order to quantify the violation of the M-stationarity conditions from [Definition 10](#def:Mstationary){reference-type="ref" reference="def:Mstationary"}, it is (almost) natural to define the residual mapping $$\label{eq:ErrorBound:error} \Theta(x,z,y) := \Vert \nabla_x \mathcal{L}(x,y) \Vert + \Vert c(x) - z \Vert + \mathop{\mathrm{dist}}( y, \partial g(z) ).$$ Clearly, the M-stationarity conditions [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} for some $(\bar x,\bar y)\in\mathbb{R}^n\times\mathbb{R}^m$ are equivalent to $\Theta(\bar x,c(\bar x),\bar y) = 0$. We shall see now that, under certain assumptions, $\Theta$ allows us to quantify not only the violation of [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"}, but also the distance to the (primal-dual) solution set. The following lemma, which provides the foundations of our analysis in this section, relates the error bound property of our interest with the strong metric subregularity of a certain set-valued mapping. Moreover, the latter characterization is quantifiable via a condition which can be stated in terms of initial problem data. **Lemma 20**. *Let $\bar x\in\mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and pick $\bar y\in\Lambda(\bar x)$. Assume that the qualification condition $$\label{eq:strong_metric_subregularity} 0=\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)u+c^\prime(\bar x)^\top \eta ,\quad \eta\in D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u) \quad \Longrightarrow \quad u=0,\,\eta=0$$ is valid. 
Then there are a constant $\varrho_{\mathrm{u}}>0$ and a neighborhood $U$ of $(\bar x,c(\bar x),\bar y)$ such that, for each $(x,z,y)\in U\cap(\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m)$, we have the upper estimate $$\label{eq:error_bound} \Vert x-\bar x \Vert + \Vert z-c(\bar x) \Vert + \Vert y-\bar y \Vert \leq \varrho_{\mathrm{u}}\,\Theta(x,z,y).$$* *Proof.* We define a set-valued mapping $G \colon \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m \rightrightarrows \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m$ by means of $$G(x,z,y) := (\nabla_x\mathcal{L}(x,y),c(x)-z,y)-\{0\}\times\{0\}\times \partial g(z).$$ By continuous differentiability of the single-valued part of $G$, one can easily check, e.g., by means of the change-of-coordinates formula for tangents from [@rockafellar1998variational Ex. 6.7], that $$\begin{aligned} &DG((\bar x,c(\bar x),\bar y),(0,0,0))(u,v,\eta) \\ &\qquad = (\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)u+c^\prime(\bar x)^\top\eta,c^\prime(\bar x)u-v,\eta) - \{0\}\times\{0\}\times D(\partial g)(c(\bar x),\bar y)(v) \end{aligned}$$ holds. Thus, [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"} is equivalent to $$\ker DG((\bar x,c(\bar x),\bar y),(0,0,0))=\{(0,0,0)\}.$$ By the Levy--Rockafellar criterion, $G$ is strongly metrically subregular at $(\bar x,c(\bar x),\bar y)$, and the latter is equivalent to the desired error bound condition. ◻ Note that the proof of [Lemma 20](#lem:strong_ms_yields_error_bound){reference-type="ref" reference="lem:strong_ms_yields_error_bound"} actually shows that [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"} is *equivalent* to the local validity of the error bound property [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"}. **Lemma 21**. 
*Let $\bar x\in\mathbb{R}^n$ be an M-stationary point for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, and fix $\bar y\in\Lambda(\bar x)$. If SSOSC is satisfied at $\bar x$ for $\bar y$, and if we have $$\label{eq:crucial_assumption_for_error_bound} \forall u\in\mathbb{R}^n,\,\forall \eta\in D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u)\colon\quad \langle \eta, c^\prime(\bar x)u \rangle \geq \mathrm{d}^2 g(c(\bar x),\bar y)(c^\prime(\bar x)u)$$ and $$\label{eq:CQ_uniqueness_of_multiplier} D(\partial g)(c(\bar x),\bar y)(0)\cap\ker c^\prime(\bar x)^\top=\{0\},$$ then there are a constant $\varrho_{\mathrm{u}}>0$ and a neighborhood $U$ of $(\bar x,c(\bar x),\bar y)$ such that the upper estimate [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"} holds for each triplet $(x,z,y)\in U\cap(\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m)$.* *Proof.* We just show that the qualification condition [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"} is valid. Then the assertion follows from [Lemma 20](#lem:strong_ms_yields_error_bound){reference-type="ref" reference="lem:strong_ms_yields_error_bound"}. Thus, pick $u\in\mathbb{R}^n$ and $\eta\in\mathbb{R}^m$ with $0=\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)u+c^\prime(\bar x)^\top\eta$ and $\eta\in D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u)$. Taking the scalar product of the equation with $u$ gives $\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]+\langle \eta, c^\prime(\bar x)u \rangle=0$, so that [\[eq:crucial_assumption_for_error_bound\]](#eq:crucial_assumption_for_error_bound){reference-type="eqref" reference="eq:crucial_assumption_for_error_bound"} gives $\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]+\mathrm{d}^2g(c(\bar x),\bar y)(c^\prime(\bar x)u)\leq 0$. 
If $u\notin\mathcal C(\bar x)$, we have $$\mathrm{d}g(c(\bar x))(c^\prime(\bar x)u)>-f^\prime(\bar x)u=\langle \bar y, c^\prime(\bar x)u \rangle$$ from $\nabla_x\mathcal{L}(\bar x,\bar y)=0$, which gives $\mathrm{d}^2 g(c(\bar x),\bar y)(c^\prime(\bar x)u)=\infty$, see [@BenkoMehlitz2023 formula (5)], and, thus, a contradiction. Hence, we have $u\in\mathcal C(\bar x)$, and validity of SSOSC gives $u=0$. Thus, $\eta\in D(\partial g)(c(\bar x),\bar y)(0)\cap\ker c^\prime(\bar x)^\top$, and [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} yields $\eta=0$. Consequently, [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"} is valid, and the assertion follows. ◻ **Remark 22**. Let us note that [\[eq:crucial_assumption_for_error_bound\]](#eq:crucial_assumption_for_error_bound){reference-type="eqref" reference="eq:crucial_assumption_for_error_bound"} is valid whenever $g$ is prox-regular, subdifferentially continuous, and twice epi-differentiable at $c(\bar x)$ for $\bar y$ while $\mathrm{d}^2 g(c(\bar x),\bar y)$ is convex, see [Lemma 7](#lem:subderivative_vs_graphical_derivative){reference-type="ref" reference="lem:subderivative_vs_graphical_derivative"}. All these properties hold if $g$ is a convex piecewise linear-quadratic function, see [@rockafellar1998variational Def. 10.20] for a definition and [@MohammadiMordukhovichSarabi2022 Prop. 7.1] as well as [@rockafellar1998variational Prop. 13.9] for the validation of these properties. 
Let us note that validity of [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"} is equivalent to validity of [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} and $$\label{eq:a_really_crucial_CQ} 0=\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)u+c^\prime(\bar x)^\top \eta,\quad \eta\in D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u) \quad \Longrightarrow \quad u=0.$$ Let us elaborate on [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"}. Similar considerations in a much more specific setting can be found in [@MohammadiMordukhovichSarabi2022 Sec. 8] and [@MordukhovichSarabi2019 Sec. 4] where $g$ is assumed to be a convex function of a special type. Notice that the results obtained in [@MohammadiMordukhovichSarabi2022; @MordukhovichSarabi2019] are, expectedly, slightly stronger. **Lemma 23**. *Let $\bar x\in\mathbb{R}^n$ be an M-stationary point for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, and fix $\bar y\in\Lambda(\bar x)$.
We investigate the mapping $H \colon \mathbb{R}^m \rightrightarrows \mathbb{R}^n\times\mathbb{R}^m$ given by $$\label{eq:definition_of_H} \forall y\in\mathbb{R}^m\colon\quad H(y) := (\nabla_x\mathcal{L}(\bar x,y),y)-\{0\}\times\partial g(c(\bar x)).$$ Then [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} implies that $H$ is strongly metrically subregular at $(\bar y,(0,0))$, and the converse holds true whenever $(\partial g)^{-1}$ is metrically subregular at $(\bar y,c(\bar x))$.* *Proof.* Patterning the proof of [Lemma 20](#lem:strong_ms_yields_error_bound){reference-type="ref" reference="lem:strong_ms_yields_error_bound"}, we find $$DH(\bar y,(0,0))(\eta)=(c^\prime(\bar x)^\top\eta,\eta)-\{0\}\times T_{\partial g(c(\bar x))}(\bar y).$$ Furthermore, we have $T_{\partial g(c(\bar x))}(\bar y)\subseteq D(\partial g)(c(\bar x),\bar y)(0)$, and the converse holds true whenever $(\partial g)^{-1}$ is metrically subregular at $(\bar y,c(\bar x))$, see [@BenkoMehlitz2022 Thm 3.2]. Thus, [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} implies $$\ker DH(\bar y,(0,0))=\{0\},$$ and the converse holds true under the additional subregularity of $(\partial g)^{-1}$. Hence, the assertion follows from the Levy--Rockafellar criterion. ◻ **Corollary 24**. *Let $\bar x\in\mathbb{R}^n$ be an M-stationary point for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, and fix $\bar y\in\Lambda(\bar x)$. Then the following assertions hold.* 1. *[\[item:cor:unique_multiplier:cap\]]{#item:cor:unique_multiplier:cap label="item:cor:unique_multiplier:cap"} If [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} holds, then there is a neighborhood $V\subseteq\mathbb{R}^m$ of $\bar y$ such that $\Lambda(\bar x)\cap V=\{\bar y\}$. 
Particularly, if $\Lambda(\bar x)$ is convex (which happens if $\partial g(c(\bar x))$ is convex), then $\Lambda(\bar x)=\{\bar y\}$.* 2. *[\[item:cor:unique_multiplier:metricsubreg\]]{#item:cor:unique_multiplier:metricsubreg label="item:cor:unique_multiplier:metricsubreg"} If $\Lambda(\bar x)=\{\bar y\}$, if the mapping $H$ from [\[eq:definition_of_H\]](#eq:definition_of_H){reference-type="eqref" reference="eq:definition_of_H"} is metrically subregular at $(\bar y,(0,0))$, and if $(\partial g)^{-1}$ is metrically subregular at $(\bar y,c(\bar x))$ (both subregularity assumptions are satisfied if $\partial g$ is a polyhedral mapping as this also gives polyhedrality of $H$), then [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} holds.* *Proof.* Due to [Lemma 23](#lem:uniqueness_of_multiplier_CQ){reference-type="ref" reference="lem:uniqueness_of_multiplier_CQ"}, the assumptions guarantee that $H$ from [\[eq:definition_of_H\]](#eq:definition_of_H){reference-type="eqref" reference="eq:definition_of_H"} is strongly metrically subregular at $(\bar y,(0,0))$. Hence, we find a neighborhood $V\subseteq\mathbb{R}^m$ of $\bar y$ and a constant $\kappa>0$ such that $$\label{eq:strong_metric_subregularity_of_H} \forall y\in V\colon\quad \Vert y-\bar y \Vert \leq \kappa\bigl(\Vert \nabla_x\mathcal{L}(\bar x,y) \Vert+\mathop{\mathrm{dist}}(y,\partial g(c(\bar x)))\bigr).$$ Particularly, this estimate shows that whenever $y\in V$ is different from $\bar y$, then $y\notin\Lambda(\bar x)$. The additional statement in assertion [\[item:cor:unique_multiplier:cap\]](#item:cor:unique_multiplier:cap){reference-type="ref" reference="item:cor:unique_multiplier:cap"} readily follows. 
For assertion [\[item:cor:unique_multiplier:metricsubreg\]](#item:cor:unique_multiplier:metricsubreg){reference-type="ref" reference="item:cor:unique_multiplier:metricsubreg"}, notice first that $H^{-1}(0,0)=\Lambda(\bar x)$ is valid. Then metric subregularity of $H$ at $(\bar y,(0,0))$ together with $\Lambda(\bar x)=\{\bar y\}$ show that [\[eq:strong_metric_subregularity_of_H\]](#eq:strong_metric_subregularity_of_H){reference-type="eqref" reference="eq:strong_metric_subregularity_of_H"} is valid for some neighborhood $V\subseteq\mathbb{R}^m$ of $\bar y$ and some constant $\kappa>0$. Hence, $H$ is strongly metrically subregular at $(\bar y,(0,0))$. Finally, metric subregularity of $(\partial g)^{-1}$ at $(\bar y,c(\bar x))$ and [Lemma 23](#lem:uniqueness_of_multiplier_CQ){reference-type="ref" reference="lem:uniqueness_of_multiplier_CQ"} can be used to infer validity of [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"}. ◻ In the following remark we comment on [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"}. **Remark 25**. Let $\bar x\in\mathbb{R}^n$ be an M-stationary point for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, and fix $\bar y\in\Lambda(\bar x)$. Suppose that SSOSC is valid at $\bar x$ for $\bar y$, and that $0\in\mathop{\mathrm{dom}}\mathrm{d}^2g(c(\bar x),\bar y)$. We first note that this means that $\bar u :=0$ is the uniquely determined global minimizer of $$\label{eq:second_order_minimization_problem} \mathop{\mathrm{minimize}}_{u} {}\quad{} \frac12\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]+\frac12\mathrm{d}^2g(c(\bar x),\bar y)(c^\prime(\bar x)u).$$ In order to see this, one has to observe two facts. 
First, for each $u\notin \mathcal C(\bar x)$, we have $$\mathrm{d}g(c(\bar x))(c^\prime(\bar x)u) > -f^\prime(\bar x)u = \langle \bar y, c^\prime(\bar x)u \rangle$$ which gives $\mathrm{d}^2g(c(\bar x),\bar y)(c^\prime(\bar x)u)=\infty$. Second, $\mathrm{d}^2g(c(\bar x),\bar y)(0)=0$ follows from [Lemma 6](#lem:trivial_second_subderivative){reference-type="ref" reference="lem:trivial_second_subderivative"}. Under the general assumptions of [Lemma 7](#lem:subderivative_vs_graphical_derivative){reference-type="ref" reference="lem:subderivative_vs_graphical_derivative"}, the limiting subdifferential of the scaled second subderivative $v\mapsto\tfrac12\mathrm{d}^2g(c(\bar x),\bar y)(v)$ can be computed in terms of the graphical derivative of $\partial g$ at $(c(\bar x),\bar y)$. Hence, under some suitable assumptions, the chain rule from [@mordukhovich2018 Cor. 4.6] can be applied in order to derive first-order necessary optimality conditions for problem [\[eq:second_order_minimization_problem\]](#eq:second_order_minimization_problem){reference-type="eqref" reference="eq:second_order_minimization_problem"}, and these conditions take the following shape: $$0=\nabla^2_{xx}\mathcal{L}(\bar x,\bar y)u+c^\prime(\bar x)^\top \eta,\quad \eta\in D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u).$$ That is why [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} demands, roughly speaking, that $\bar u :=0$ is the uniquely determined stationary point of [\[eq:second_order_minimization_problem\]](#eq:second_order_minimization_problem){reference-type="eqref" reference="eq:second_order_minimization_problem"}. This is clearly different from postulating that this point is the uniquely determined global minimizer of this problem, i.e., SSOSC. Equivalence of these conditions can only be guaranteed under some additional convexity of the second subderivative. Following [@MordukhovichSarabi2018 Def.
3.1], validity of [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} demands that the multiplier $\bar y$ is so-called *noncritical*. The concept of critical multipliers dates back to [@Izmailov2005; @IzmailovSolodov2012] where it has been introduced for standard nonlinear programs. In [@IzmailovSolodov2012], it has been pointed out that the presence of critical multipliers slows down the convergence of Newton-type methods when applied for the solution of stationarity systems, and this observation can be extended to certain composite optimization problems as shown in [@MordukhovichSarabi2018; @MordukhovichSarabi2019]. Let us mention that [@MordukhovichSarabi2018 Thm 4.1] and [@MordukhovichSarabi2019 Thm 5.6] show that [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} is equivalent to a local primal-dual (upper) error bound, based on a not necessarily unique Lagrange multiplier, whenever $g$ is a function whose epigraph is a convex polyhedral set or the indicator function of a so-called $\mathcal C^2$-cone reducible set, and in the latter case, further assumptions are required. For brevity, we abstain here from investigating further refinements of the error bound [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"} to situations where the Lagrange multiplier is not uniquely determined, but indicate that this is an interesting topic for future research. The above puts our comments from [Remark 22](#rem:error_bound_for_convex_piecewise_quadratic_functions){reference-type="ref" reference="rem:error_bound_for_convex_piecewise_quadratic_functions"} into some new light. On the one hand, our arguments highlight that in situations where the second subderivative of $g$ is not convex, SSOSC might be too weak to yield the error bound of our interest. 
On the other hand, in the absence of any additional assumptions, [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"} may not be sufficient for SSOSC as uniqueness of stationary points says nothing about the existence of a global minimizer for [\[eq:second_order_minimization_problem\]](#eq:second_order_minimization_problem){reference-type="eqref" reference="eq:second_order_minimization_problem"}. However, the condition $$\label{eq:some_new_second_order_condition_for_error_bound} \forall u\in\mathbb{R}^n\setminus\{0\},\,\forall \eta\in D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u)\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]+\langle \eta, c^\prime(\bar x)u \rangle>0$$ is clearly sufficient for [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} and, together with [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"}, gives [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"}. The following example shows that [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} does not necessarily imply SSOSC. Furthermore, [Example 41](#ex:SSOSC_does_not_give_error_bound){reference-type="ref" reference="ex:SSOSC_does_not_give_error_bound"} illustrates that SSOSC does not necessarily imply [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"}. These two conditions are, thus, independent in general as indicated in [Remark 25](#rem:on_the_really_crucial_CQ){reference-type="ref" reference="rem:on_the_really_crucial_CQ"}. **Example 26**.
We consider [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} for the functions $f,c,g \colon \mathbb{R} \to \mathbb{R}$ given by $$f(x):=\frac12x^2,\qquad c(x):=x,\qquad g(z):=-z^2,$$ and choose $\bar x$ to be the origin in $\mathbb{R}$. Note that $\bar x$ is M-stationary with $\Lambda(\bar x)=\{0\}$. Thus, we consider the uniquely determined multiplier $\bar y :=0$. Obviously, $\bar x$ is a strict local maximizer of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} which is why SSOSC fails to hold at $\bar x$ for $\bar y$, see [Lemma 17](#lem:SOSC_implies_enhanced_growth_condition){reference-type="ref" reference="lem:SOSC_implies_enhanced_growth_condition"}. Clearly, we have $\nabla_{xx}^2\mathcal{L}(\bar x,\bar y)=1$, and twice continuous differentiability of $g$ gives $D(\partial g)(c(\bar x),\bar y)(c^\prime(\bar x)u)=\{-2u\}$ for each $u\in\mathbb{R}$. Hence, one can easily check that [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} is valid. **Remark 27** (Geometric constraints). In the special case where $g:=\mathop{\mathrm{\delta}}_D$ holds for some closed set $D\subseteq\mathbb{R}^m$, the qualification conditions [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"}, [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"}, [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"}, and [\[eq:some_new_second_order_condition_for_error_bound\]](#eq:some_new_second_order_condition_for_error_bound){reference-type="eqref" reference="eq:some_new_second_order_condition_for_error_bound"} involve the graphical derivative of the (limiting) normal cone mapping associated with $D$. 
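The elementary computations in this example can be sanity-checked numerically. The following sketch is our own, with the data of the example hard-coded; it verifies that $\bar x=0$ is a strict local maximizer of the composite objective and that the stationarity system behind [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} only admits $u=0$:

```python
# Data of the example: f(x) = x^2/2, c(x) = x, g(z) = -z^2.
f = lambda x: 0.5 * x ** 2
g = lambda z: -z ** 2
phi = lambda x: f(x) + g(x)          # composite objective, phi(x) = -x^2/2

# M-stationarity at x_bar = 0: grad_x L(x, y) = x + y vanishes for the
# unique multiplier y_bar = g'(0) = 0.
x_bar, y_bar = 0.0, 0.0
assert x_bar + y_bar == 0.0

# x_bar is a strict local maximizer of phi, so SSOSC cannot hold there.
assert all(phi(x) < phi(x_bar) for x in (-0.1, 0.01, 0.1))

# Noncriticality: with Hessian_xx L = 1 and D(dg)(0,0)(u) = {-2u}, the
# system 0 = u + eta, eta = -2u reduces to -u = 0, leaving only u = 0.
grid = [k / 10 for k in range(-10, 11)]
solutions = [u for u in grid if u + (-2.0 * u) == 0.0]
print(solutions)                     # -> [0.0]
```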
For several different choices of $D$, including convex cones (like the semidefinite or the second-order cone) or convex sets given via smooth convex inequality constraints, explicit ready-to-use formulas for this variational object are available, see e.g. [@GfrererOutrata2016; @Wachsmuth2017]. For diverse nonconvex sets $D$ of special structure, like sparsity sets of type $\{x\in\mathbb{R}^n\,|\,\Vert x \Vert_0\leq k\}$, $k\in\{1,\ldots,n-1\}$, the explicit computation of this tool is possible as well. As we have seen above, the *upper* error bound in [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"} only holds in the presence of comparatively strong assumptions. Unfortunately, due to our definition of $\Theta$ in [\[eq:ErrorBound:error\]](#eq:ErrorBound:error){reference-type="eqref" reference="eq:ErrorBound:error"} which comprises the distance to the subdifferential of $g$, the converse *lower* error bound seems to demand unrealistic assumptions, too, as the following [Remark 28](#rem:lower_error_bound){reference-type="ref" reference="rem:lower_error_bound"} illustrates. We will, however, circumvent this potential obstacle later on in [5](#sec:ALM){reference-type="ref" reference="sec:ALM"} by the design of our algorithm. **Remark 28**. Let $\bar x\in\mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and pick $\bar y\in\Lambda(\bar x)$. Then, relying on the definition of $\Theta$ in [\[eq:ErrorBound:error\]](#eq:ErrorBound:error){reference-type="eqref" reference="eq:ErrorBound:error"}, it appears indispensable for estimating the distance to the subdifferential to assume its *inner calmness*, see [@BenkoGfrererOutrata2019 Def. 2.2] and [@Benko2021 Sec. 2] for a discussion of this property.
In particular, $\partial g$ shall be inner calm at $(c(\bar{x}),\bar{y})$, which entails the existence of $\kappa > 0$ and a neighborhood $V\subseteq\mathbb{R}^m$ of $c(\bar x)$ such that $$\forall z\in V\colon\quad \mathop{\mathrm{dist}}(\bar y,\partial g(z)) \leq \kappa\Vert z-c(\bar x) \Vert.$$ With this property at hand and exploiting [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"}, for each triplet $(x,z,y)\in\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m$ such that $z\in V$, the triangle inequality yields $$\begin{aligned} \Theta(x,z,y) {}={}& \Vert \nabla_x\mathcal{L}(x,y) \Vert + \Vert c(x)-z \Vert + \mathop{\mathrm{dist}}(y,\partial g(z)) \\ {}\leq{}& \Vert \nabla_x\mathcal{L}(x,y)-\nabla_x\mathcal{L}(\bar x,\bar y) \Vert + \Vert c(x)-c(\bar x) \Vert + \Vert c(\bar x)-z \Vert + \Vert y-\bar y \Vert+\mathop{\mathrm{dist}}(\bar y,\partial g(z)) \\ {}\leq{}& \Vert \nabla_x\mathcal{L}(x,y)-\nabla_x\mathcal{L}(\bar x,\bar y) \Vert + \Vert c(x)-c(\bar x) \Vert + (\kappa+1)\Vert z-c(\bar x) \Vert + \Vert y-\bar y \Vert. \end{aligned}$$ Then, noting that $\nabla_x\mathcal{L}$ and $c$ are locally Lipschitz continuous by [Assumption 1](#ass:P){reference-type="ref" reference="ass:P"}[\[ass:f\]](#ass:f){reference-type="ref" reference="ass:f"}, there are a constant $\varrho_{\mathrm{l}}>0$ and a neighborhood $U$ of $(\bar x,c(\bar x),\bar y)$ such that, for each $(x,z,y)\in U\cap(\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m)$, we obtain the lower estimate $$\varrho_{\mathrm{l}}\,\Theta(x,z,y) \leq \Vert x-\bar x \Vert+\Vert z-c(\bar x) \Vert+\Vert y-\bar y \Vert,$$ which patterns the upper counterpart in [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"}. However, inner calmness of the subdifferential is a very hard assumption, impractical even for convex $g$, and it would restrict our considerations mainly to points where $g$ is smooth in practice. 
As an example, consider the absolute value function $g :=|\cdot|$. Since $\partial g$ is single-valued on $\mathbb{R}\setminus \{0\}$, one can easily check that inner calmness of $\partial g$ at $(0,\bar y)$ fails for every $\bar y\in[-1,1]=\partial g(0)$. In the following [5](#sec:ALM){reference-type="ref" reference="sec:ALM"}, we will *not* rely on any additional property of the subdifferential, but leverage instead the algorithmic scheme to derive a lower error bound *along the iterates*, see [Lemma 37](#lem:lower_error_bound:ALM){reference-type="ref" reference="lem:lower_error_bound:ALM"} below. # Augmented Lagrangian scheme and convergence {#sec:ALM} This section is devoted to describing a numerical scheme for solving [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and to investigating its convergence properties under suitable assumptions. In particular, we consider the implicit AL scheme from [@demarchi2023implicit Alg. 3.1], so called because it makes use of the AL function $\mathcal{L}_\mu \colon \mathbb{R}^n\times \mathbb{R}^m \to \mathbb{R}$ associated to [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, as defined in [\[eq:L\]](#eq:L){reference-type="eqref" reference="eq:L"}. It deviates in this respect from [@demarchi2023constrained Alg. 1], which builds upon [\[eq:ALMz:P\]](#eq:ALMz:P){reference-type="eqref" reference="eq:ALMz:P"} and treats the auxiliary variable explicitly, see [@benko2021implicit] for a discussion on implicit variables (and concealed benefits thereof) in optimization. ## Implicit approach {#sec:ALM:approach} The numerical method considered for addressing [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} is stated in [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}. Fitting into the AL framework [@bertsekas1996constrained; @birgin2014practical; @conn1991globally], the main step of the iterative procedure involves minimizing the AL function with respect to the primal variable.
At the $k$-th iteration of the AL scheme, with some given penalty parameter $\mu_k > 0$ and multiplier estimate $\hat{y}^k \in \mathbb{R}^m$, a subproblem involving the minimization of $\mathcal{L}_{\mu_k}(x,\hat{y}^k)$ over $x \in \mathbb{R}^n$ has to be solved approximately. However, the AL function may lack regularity since, for $g$ nonconvex, the Moreau envelope $g^\mu$ is in general *not* continuously differentiable. Therefore, the concept of (approximate) $\Upsilon$-stationarity introduced in [2.2](#sec:VA){reference-type="ref" reference="sec:VA"} plays a role in characterizing adequate solutions of this AL subproblem, see [Remark 12](#rem:Upsilon_stationarity_AL){reference-type="ref" reference="rem:Upsilon_stationarity_AL"} as well. Let us note that, in practice, such points can be computed with the aid of a nonmonotone descent method, see [@demarchi2023implicit Sec. 4] for details. Then, following classical update rules [@birgin2014practical], the dual estimate $\hat{y}$ and penalty parameter $\mu$ are adjusted, along with the subproblem's tolerance $\varepsilon$. Compared to the classical AL or multiplier penalty approach for the solution of nonlinear programs, see [@bertsekas1996constrained; @conn1991globally], this variant uses a safeguarded update rule for the Lagrange multipliers and has stronger global convergence properties, as demonstrated in [@kanzow2018augmented]. The safeguarded Lagrange multiplier estimate $\hat{y}^k$ is drawn from a bounded set $Y\subseteq\mathbb{R}^m$ at [\[step:ALM:ysafe\]](#step:ALM:ysafe){reference-type="ref" reference="step:ALM:ysafe"}. In practice, it is advisable to choose the safeguarded estimate $\hat{y}^k$ as the projection of the Lagrange multiplier $y^{k-1}$ onto $Y$. We refer to [@birgin2014practical Sec. 4.1] for a detailed discussion. 
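A one-dimensional toy computation (with $g$ of our own choosing, not from the paper) already exhibits this loss of smoothness: for $g:=\mathop{\mathrm{\delta}}_{\{-1,1\}}$, the envelope $g^\mu(w)=\tfrac{1}{2\mu}\mathop{\mathrm{dist}}^2(w,\{-1,1\})$ has a kink at $w=0$, precisely the point where $\mathop{\mathrm{prox}}_{\mu g}$ is set-valued:

```python
# Moreau envelope of the indicator of D = {-1, 1} (a nonconvex g):
# g^mu(w) = dist(w, D)^2 / (2*mu). Its one-sided slopes at w = 0 differ,
# so g^mu is not continuously differentiable there; correspondingly,
# prox_{mu g}(0) = {-1, 1} is set-valued.
mu = 0.5
def envelope(w):
    return min((w + 1.0) ** 2, (w - 1.0) ** 2) / (2.0 * mu)

h = 1e-6
slope_left = (envelope(0.0) - envelope(-h)) / h    # one-sided slopes at 0
slope_right = (envelope(h) - envelope(0.0)) / h
print(slope_left, slope_right)   # approximately +2 and -2: a kink at 0
```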
The monotonicity test at [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"} is adopted to monitor primal infeasibility along the iterates and update the penalty parameter accordingly. Aimed at driving $V_k$ to zero, the penalty parameter $\mu_k$ is reduced in case of insufficient decrease. Before investigating the convergence properties of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}, we provide some characterizations of the iterates $\{(x^k,z^k,y^k)\}$. These are direct consequences of $z^k$ being a certificate of $\varepsilon_k$-$\Upsilon$-stationarity for $x^k$, by [\[step:ALM:subproblem\]](#step:ALM:subproblem){reference-type="ref" reference="step:ALM:subproblem"}, and of the dual update at [\[step:ALM:y\]](#step:ALM:y){reference-type="ref" reference="step:ALM:y"}, see [Remark 12](#rem:Upsilon_stationarity_AL){reference-type="ref" reference="rem:Upsilon_stationarity_AL"} as well. **Proposition 29**. *Let $\{(x^k,z^k,y^k)\}$ be a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}. Then, for each $k\in\mathbb{N}$, $z^k \in \mathop{\mathrm{prox}}_{\mu_k g}(c(x^k)+\mu_k\hat y^k) \subseteq \mathop{\mathrm{dom}}g$, $\Vert \nabla f(x^k) + c^\prime(x^k)^\top y^k \Vert \leq \varepsilon_k$, and $y^k \in \partial g(z^k)$.* Throughout the convergence analysis, based on, and extending, that of [@demarchi2023implicit], it is assumed that [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} is well-defined, thus requiring that each subproblem at [\[step:ALM:subproblem\]](#step:ALM:subproblem){reference-type="ref" reference="step:ALM:subproblem"} admits an approximately stationary point. Moreover, the existence of some accumulation point $\bar x$ for a sequence $\{ x^k \}$ generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} requires, in general, coercivity or (level) boundedness arguments. 
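For orientation, the iteration just described can be sketched on a toy instance. All concrete choices below are ours for illustration only: $f(x)=\tfrac12(x-2)^2$, $c(x)=x$, $g=\mathop{\mathrm{\delta}}_{(-\infty,1]}$ (so the solution is $\bar x=1$ with multiplier $\bar y=1$), a closed-form subproblem solver, and simplified update rules; the actual [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} works with approximate $\Upsilon$-stationary points and tolerances $\varepsilon_k$ instead:

```python
# Toy safeguarded AL iteration: minimize f(x) = (x-2)^2/2 subject to
# c(x) = x in dom g, with g the indicator of (-inf, 1]. The AL subproblem
#     min_x  (x-2)^2/2 + max(0, x + mu*yhat - 1)^2 / (2*mu)
# is solved in closed form (valid since yhat >= 0 keeps the max active),
# z is the prox point, and y = yhat + (c(x) - z)/mu is the dual update,
# matching the relations collected in Proposition 29.
Y = (0.0, 10.0)                      # bounded safeguarding set
mu, theta, kappa = 1.0, 0.5, 0.9     # penalty, shrink factor, test ratio
y, V_old = 0.0, float("inf")
for k in range(40):
    yhat = min(max(y, Y[0]), Y[1])   # safeguarded estimate: project y on Y
    x = (2.0 * mu + 1.0 - mu * yhat) / (mu + 1.0)   # subproblem argmin
    z = min(x + mu * yhat, 1.0)      # prox of the indicator: projection
    y = yhat + (x - z) / mu          # implicit multiplier update
    V = abs(x - z)                   # infeasibility measure V_k
    if V > kappa * V_old:            # monotonicity test failed:
        mu *= theta                  #   decrease the penalty parameter
    V_old = V
print(round(x, 6), round(y, 6))      # -> 1.0 1.0
```

On this instance the monotonicity test always passes, so the penalty parameter is never decreased and the primal-dual pair converges linearly to $(1,1)$.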
While asymptotic M-stationarity as given in [@demarchi2023implicit Def. 2.3] demands the existence of a sequence of, in a certain sense, approximately M-stationary points for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} converging to the point of interest, no quantitative bound on this *approximativity* is required. However, for [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} to terminate, there is a need for an approximate version of the M-stationarity concept of [Definition 10](#def:Mstationary){reference-type="ref" reference="def:Mstationary"}. We refer to the notion delineated in [@demarchi2023implicit Def. 2.2], which comes along with an explicit bound quantifying violation of M-stationarity, while aligning with the asymptotic stipulation. ## Global convergence {#sec:GlobalConvergence} In this section, we are concerned with global convergence properties related to [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}, i.e., we are going to study properties of accumulation points of sequences it generates, regardless of how it is initialized. For that purpose, we will assume that [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} is well-defined and produces an infinite sequence of iterates. Our first results pertain to a *global* optimization perspective on the subproblems at [\[step:ALM:subproblem\]](#step:ALM:subproblem){reference-type="ref" reference="step:ALM:subproblem"} of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}, compare [@birgin2014practical Ch. 5] and [@kanzow2018augmented Sec. 4]. Solving each subproblem up to approximate global optimality, not necessarily with vanishing inner tolerance, one finds in the limit a global minimizer of the infeasibility measure, see [@BoergensKanzowSteck2019 Lem. 4.2] for a related result. **Lemma 30**. 
*Let $\{(x^k,z^k,y^k)\}$ be a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} with $\{\varepsilon_k\}$ bounded. Assume that $\mathop{\mathrm{dom}}g$ is closed and that $$\label{eq:approximate_global_minimizer_subproblem} \forall k\in\mathbb{N},\,\forall x\in\mathbb{R}^n\colon\quad \mathcal{L}_{\mu_k}(x^k,\hat{y}^k) \leq \mathcal{L}_{\mu_k}(x,\hat{y}^k) + \varepsilon_k.$$ Fix an accumulation point $\bar{x}\in\mathbb{R}^n$ of $\{x^k\}$. Then $\bar{x}$ is a global minimizer of $\mathop{\mathrm{dist}}(c(\cdot),\mathop{\mathrm{dom}}g)$. In particular, $\bar{x}$ is feasible if the feasible set of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} is nonempty.* *Proof.* Let us consider two cases. If $\{\mu_k\}$ remains bounded away from zero, then [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"} demands that $\Vert c(x^k) - z^k \Vert \to 0$. As we have $\{z^k\}\subseteq\mathop{\mathrm{dom}}g$ from [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, it follows that $$0 \leq \mathop{\mathrm{dist}}(c(x^k),\mathop{\mathrm{dom}}g) \leq \Vert c(x^k) - z^k \Vert \to 0.$$ Owing to $\mathop{\mathrm{dom}}g$ being closed, we obtain $c(\bar{x})\in\mathop{\mathrm{dom}}g$, proving that $\bar{x}$ is feasible for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. Consider now the case $\mu_k\downarrow 0$. Let $\{x^k\}_{k\in K}$ be a subsequence such that $x^k \to_K \bar{x}$. 
Then [\[eq:approximate_global_minimizer_subproblem\]](#eq:approximate_global_minimizer_subproblem){reference-type="eqref" reference="eq:approximate_global_minimizer_subproblem"} guarantees $$\forall x\in\mathbb{R}^n\colon\quad f(x^k) + g^{\mu_k}(c(x^k) + \mu_k \hat{y}^k) \leq f(x) + g^{\mu_k}(c(x) + \mu_k \hat{y}^k) + \varepsilon_k .$$ Multiplying by $\mu_k$, taking the lower limit as $k\to_K\infty$, and using boundedness of $\{f(x^k)\}_{k\in K}$ and $\{\varepsilon_k\}$ yield $$\liminf\limits_{k\to_K\infty} \mu_k g^{\mu_k}(c(x^k) + \mu_k \hat{y}^k) \leq \liminf\limits_{k\to_K\infty} \mu_k g^{\mu_k}(c(x) + \mu_k \hat{y}^k)$$ for each $x\in\mathbb{R}^n$. Together with [Lemma 2](#lem:dist_to_domain){reference-type="ref" reference="lem:dist_to_domain"}, this gives $$\begin{aligned} \frac{1}{2} \mathop{\mathrm{dist}}^2(c(\bar{x}), \mathop{\mathrm{dom}}g) {}={}& \liminf_{k\to_K\infty} \inf_z\left\{ \mu_k g(z) + \frac{1}{2} \Vert z - (c(x^k) + \mu_k \hat{y}^k) \Vert^2 \right\} \\ {}={}& \liminf_{k\to_K\infty} \mu_k g^{\mu_k}(c(x^k) + \mu_k \hat{y}^k) \\ {}\leq{}& \liminf_{k\to_K\infty} \mu_k g^{\mu_k}(c(x) + \mu_k \hat{y}^k) \\ {}={}& \liminf_{k\to_K\infty} \inf_z\left\{ \mu_k g(z) + \frac{1}{2} \Vert z - (c(x) + \mu_k \hat{y}^k) \Vert^2 \right\} \\ {}={}& \frac{1}{2} \mathop{\mathrm{dist}}^2(c(x), \mathop{\mathrm{dom}}g) \end{aligned}$$ for all $x\in\mathbb{R}^n$, where we used the boundedness of $\{\hat{y}^k\}$ and $\mu_k\downarrow 0$. This shows that $\bar{x}$ globally minimizes $\mathop{\mathrm{dist}}(c(\cdot),\mathop{\mathrm{dom}}g)$. ◻ Notice that, in the proof of [Lemma 30](#lem:global_minimizer_constraint_violation){reference-type="ref" reference="lem:global_minimizer_constraint_violation"}, the requirement of closed $\mathop{\mathrm{dom}}g$ does not affect the case with $\mu_k \to 0$. 
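The limit from [Lemma 2](#lem:dist_to_domain){reference-type="ref" reference="lem:dist_to_domain"} exploited in the proof, $\mu\,g^{\mu}(w)=\inf_z\{\mu g(z)+\tfrac12\Vert z-w\Vert^2\}\to\tfrac12\mathop{\mathrm{dist}}^2(w,\mathop{\mathrm{dom}}g)$ as $\mu\downarrow0$, can also be observed numerically. The instance below is our own illustrative choice, $g(z)=-\sqrt z$ with $\mathop{\mathrm{dom}}g=[0,\infty)$, evaluated at the infeasible point $w=-2$ where the limit equals $2$:

```python
import numpy as np

# Numerical look at mu * g^mu(w) -> dist(w, dom g)^2 / 2 as mu -> 0,
# for g(z) = -sqrt(z) with dom g = [0, inf) and the point w = -2
# outside the domain, so the limit is (1/2) * 2^2 = 2.
def mu_times_envelope(w, mu, zs):
    return float(np.min(mu * (-np.sqrt(zs)) + 0.5 * (zs - w) ** 2))

zs = np.linspace(0.0, 10.0, 400001)          # fine grid over dom g
w, target = -2.0, 2.0
vals = [mu_times_envelope(w, mu, zs) for mu in (1.0, 0.1, 0.01, 0.001)]
print([round(v, 4) for v in vals])           # values climb toward 2.0
```

The minimization over $z$ is done by brute force on a grid, which is adequate in one dimension.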
If, in addition to the assumptions of [Lemma 30](#lem:global_minimizer_constraint_violation){reference-type="ref" reference="lem:global_minimizer_constraint_violation"}, the sequence $\{\varepsilon_k\}$ is chosen so that $\varepsilon_k \downarrow 0$, then (primal) accumulation points of the sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} correspond to global minimizers of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, see [@BoergensKanzowSteck2019 Thm 4.12] for a related result. **Theorem 31**. *Let $\{(x^k,z^k,y^k)\}$ be a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} with $\varepsilon_k \to 0$. Let [\[eq:approximate_global_minimizer_subproblem\]](#eq:approximate_global_minimizer_subproblem){reference-type="eqref" reference="eq:approximate_global_minimizer_subproblem"} hold, and assume that the feasible set of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} is nonempty while $\mathop{\mathrm{dom}}g$ is closed. Then $\limsup_{k \to \infty} (f(x^k) + g(z^k)) \leq \varphi(x)$ holds for all feasible $x\in\mathbb{R}^n$. Moreover, every accumulation point $\bar{x}\in\mathbb{R}^n$ of $\{x^k\}$ is globally optimal for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. For any index set $K \subseteq \mathbb{N}$ such that $x^k \to_K \bar{x}$, we also have $z^k \to_K c(\bar{x})$ and $f(x^k)+g(z^k)\to_K\varphi(\bar x)$.* *Proof.* Let $x\in\mathbb{R}^n$ be a fixed feasible point for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}.
Then, due to [\[eq:approximate_global_minimizer_subproblem\]](#eq:approximate_global_minimizer_subproblem){reference-type="eqref" reference="eq:approximate_global_minimizer_subproblem"}, [Lemma 11](#lem:upper_bound_AL){reference-type="ref" reference="lem:upper_bound_AL"}, and [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, we have $$\begin{aligned} f(x^k) + g(z^k) - \frac{\mu_k}{2}\Vert \hat{y}^k \Vert^2 {}\leq {}& f(x^k) + g(z^k) + \frac{1}{2\mu_k} \Vert c(x^k)+\mu_k \hat{y}^k - z^k \Vert^2 - \frac{\mu_k}{2}\Vert \hat{y}^k \Vert^2 \\ {}={}& \mathcal{L}_{\mu_k}(x^k,\hat{y}^k) {}\leq{} \mathcal{L}_{\mu_k}(x,\hat{y}^k) + \varepsilon_k \\ {}\leq{}& \varphi(x) + \varepsilon_k < \infty \end{aligned}$$ for all $k\in\mathbb{N}$. If $\mu_k\downarrow 0$, then $\mu_k \Vert \hat{y}^k \Vert^2 \to 0$ by boundedness of $\{\hat{y}^k\}$. In this case, $\varepsilon_k \to 0$ and $z^k \in \mathop{\mathrm{dom}}g$ imply that $\limsup_{k \to \infty} (f(x^k) + g(z^k)) \leq \varphi(x)$. Let us now focus on the case where $\{\mu_k\}$ is bounded away from zero. This is possible only if $\|c(x^k) - z^k\| \to 0$ by [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"}. Similar to above, we find $$f(x^k) + g(z^k) + \frac{1}{2\mu_k} \|c(x^k) - z^k\|^2 + \langle \hat{y}^k, c(x^k) - z^k \rangle = \mathcal{L}_{\mu_k}(x^k,\hat{y}^k) \leq \varphi(x)+\varepsilon_k.$$ Since $\{\hat{y}^k\}$ is bounded, $\| c(x^k) - z^k \| \to 0$, and $\varepsilon_k \to 0$, we can take the upper limit in the above estimate to find $\limsup_{k \to \infty} (f(x^k) + g(z^k)) \leq \varphi(x)$. Finally, let $\bar{x}$ be an accumulation point of $\{x^k\}$ and $K\subseteq\mathbb{N}$ an index set such that $x^k \to_K \bar{x}$. 
Then, as [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} admits feasible points, $\bar{x}$ is feasible by [Lemma 30](#lem:global_minimizer_constraint_violation){reference-type="ref" reference="lem:global_minimizer_constraint_violation"}. Let us show that $\Vert c(x^k)-z^k \Vert \to_K 0$ holds. If $\{\mu_k\}$ remains bounded away from zero, this is obvious by [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"}. In the case where $\mu_k\downarrow 0$, we can exploit feasibility of $\bar x$, boundedness of $\{\hat y^k\}$, and [Lemma 2](#lem:dist_to_domain){reference-type="ref" reference="lem:dist_to_domain"} to find $$\mu_kg(z^k)+\frac12\Vert z^k-c(x^k)-\mu_k\hat y^k \Vert^2\to_K 0.$$ Boundedness of $\{\hat y^k\}$ and $\mu_k\downarrow 0$ allow us to apply [Lemma 3](#lem:norm_to_zero){reference-type="ref" reference="lem:norm_to_zero"} which yields $\mu_k g(z^k)\to_K0$ as well as $\Vert z^k - c(x^k) - \mu_k\hat y^k \Vert\to_K0$ and the latter gives $\Vert z^k-c(x^k) \Vert\to_K0$. Due to $\Vert c(x^k)-z^k \Vert\to_K0$, continuity of $c$ gives $z^k\to_K c(\bar x)$. Now, lower semicontinuity of $g$ yields $$\begin{aligned} \varphi(\bar{x}) {}={} f(\bar x)+g(c(\bar x)) {}\leq{}& \liminf\limits_{k\to_K\infty} f(x^k)+\liminf\limits_{k\to_K\infty}g(z^k)\\ {}\leq{}& \liminf\limits_{k\to_K\infty} (f(x^k) + g(z^k) )\\ {}\leq{}& \limsup_{k \to_K \infty} (f(x^k) + g(z^k) ) \leq \varphi(x), \end{aligned}$$ where the last inequality is due to the upper bound obtained previously in the proof. As $x$ is an arbitrary feasible point for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, we have shown that $\bar{x}$ is globally optimal for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. Finally, with the particular choice $x = \bar{x}$, the previous inequalities give that $f(x^k)+g(z^k)\to_K\varphi(\bar x)$, concluding the proof. 
◻ Under an additional assumption on the multiplier estimate $\hat y^k$ in [\[step:ALM:ysafe\]](#step:ALM:ysafe){reference-type="ref" reference="step:ALM:ysafe"}, a stronger result can be proved concerning the behavior of the iterates for infeasible problems. By resetting the multiplier estimate when signs of infeasibility are detected, the algorithm tends to minimize the objective function subject to minimal constraint violation, see e.g. [@birgin2014practical Thm 5.3] for a related result. **Theorem 32**. *Let $\{(x^k,z^k,y^k)\}$ be a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} with $\varepsilon_k \to 0$. Let [\[eq:approximate_global_minimizer_subproblem\]](#eq:approximate_global_minimizer_subproblem){reference-type="eqref" reference="eq:approximate_global_minimizer_subproblem"} hold, suppose that $\mathop{\mathrm{dom}}g$ is closed, and that, for all $k\in\mathbb{N}$, $\hat{y}^{k+1} = 0$ if $y^k \notin Y$. Let $\bar{x}$ be an accumulation point of $\{x^k\}$. Then $\bar{x}$ is a global minimizer of $\mathop{\mathrm{dist}}(c(\cdot), \mathop{\mathrm{dom}}g)$ and, for all $(x,z) \in\mathbb{R}^n\times \mathop{\mathrm{dom}}g$ such that $\| c(x) - z \| = \mathop{\mathrm{dist}}(c(\bar{x}), \mathop{\mathrm{dom}}g)$, it holds that $\limsup_{k \to \infty} (f(x^k) + g(z^k)) \leq f(x) + g(z)$.* *Proof.* If $\mathop{\mathrm{dist}}(c(\bar{x}), \mathop{\mathrm{dom}}g) = 0$, namely $c(\bar{x}) \in \mathop{\mathrm{dom}}g$ by closedness of $\mathop{\mathrm{dom}}g$, then $\bar{x}$ is feasible and the claim follows from [Theorem 31](#thm:global_optimality){reference-type="ref" reference="thm:global_optimality"}. So, let us assume that $\mathop{\mathrm{dist}}(c(\bar{x}), \mathop{\mathrm{dom}}g) > 0$.
Together with [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"} and [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, this implies that $\mu_k\downarrow 0$. Since $\bar{x}$ is a global minimizer of $\mathop{\mathrm{dist}}(c(\cdot),\mathop{\mathrm{dom}}g)$ by [Lemma 30](#lem:global_minimizer_constraint_violation){reference-type="ref" reference="lem:global_minimizer_constraint_violation"}, we obtain $$\label{eq:dist_to_domain} \forall k\in\mathbb{N}\colon\quad \| c(x^k) - z^k \| \geq \mathop{\mathrm{dist}}(c(x^k), \mathop{\mathrm{dom}}g) \geq \mathop{\mathrm{dist}}(c(\bar{x}), \mathop{\mathrm{dom}}g).$$ Thus, by the dual update rule at [\[step:ALM:y\]](#step:ALM:y){reference-type="ref" reference="step:ALM:y"}, boundedness of $Y$, and $\mu_k\downarrow 0$, we have $y^k \notin Y$ for all $k\in\mathbb{N}$ large enough. Therefore, by the choice of the multiplier estimate stated in the premises, we have $\hat{y}^k = 0$ for all large enough $k\in\mathbb{N}$. Then, for all $x\in\mathbb{R}^n$, we have that $$\begin{aligned} f(x^k) + g(z^k) + \frac{1}{2\mu_k} \|c(x^k) - z^k\|^2 {}={}& f(x^k) + g^{\mu_k}(c(x^k)) {}={} \mathcal{L}_{\mu_k}(x^k,0) \\ {}\leq{}& \mathcal{L}_{\mu_k}(x,0) + \varepsilon_k {}={} f(x) + g^{\mu_k}(c(x)) + \varepsilon_k \end{aligned}$$ for all $k\in \mathbb{N}$ large enough. This holds, in particular, for $x \in \mathbb{R}^n$ a global minimizer of $\mathop{\mathrm{dist}}(c(\cdot),\mathop{\mathrm{dom}}g)$, namely such that $\mathop{\mathrm{dist}}(c(x), \mathop{\mathrm{dom}}g) = \mathop{\mathrm{dist}}(c(\bar{x}), \mathop{\mathrm{dom}}g)$. Then there is some $z \in \mathop{\mathrm{dom}}g$ such that $\|c(x) - z\| = \mathop{\mathrm{dist}}(c(\bar{x}), \mathop{\mathrm{dom}}g)$, and [\[eq:dist_to_domain\]](#eq:dist_to_domain){reference-type="eqref" reference="eq:dist_to_domain"} gives $\Vert c(x^k)-z^k \Vert\geq\Vert c(x)-z \Vert$.
Hence, we find $$\begin{aligned} f(x^k) + g(z^k) + \frac{1}{2\mu_k} \|c(x^k) - z^k\|^2 {}\leq{}& f(x) + g^{\mu_k}(c(x)) + \varepsilon_k \\ {}\leq{}& f(x) + g(z) + \frac{1}{2\mu_k}\|c(x) - z\|^2 + \varepsilon_k \\ {}\leq{}& f(x) + g(z) + \frac{1}{2\mu_k}\Vert c(x^k)-z^k \Vert^2 + \varepsilon_k \end{aligned}$$ for all $k\in \mathbb{N}$ large enough. Subtracting the squared norm term on both sides and taking the upper limit yields the claim since $\varepsilon_k\to 0$. ◻ In [Lemma 30](#lem:global_minimizer_constraint_violation){reference-type="ref" reference="lem:global_minimizer_constraint_violation"} as well as [\[thm:global_optimality,thm:optimality_subject_minimal_infeasibility\]](#thm:global_optimality,thm:optimality_subject_minimal_infeasibility){reference-type="ref" reference="thm:global_optimality,thm:optimality_subject_minimal_infeasibility"}, it has been assumed that the AL subproblem can be solved up to approximate global optimality. This, however, might be a delicate issue whenever $f$ or $g$ are nonconvex or $c$ is sufficiently complicated. In practice, affordable solvers only have local scope and return stationary points as candidate local minimizers. Nevertheless, (primal) accumulation points of a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} can be shown to be at least *asymptotically* M-, or AM-, stationary points for [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, see [@demarchi2023implicit Def. 2.3, Thm 3.6] for a detailed discussion. Notice that the (approximate) global optimality is not relaxed to *local* optimality, but to mere ($\Upsilon$-)stationarity, while the subsequential $g$-attentive convergence of certain iterates is required. We refer to [@demarchi2023constrained Ex. 3.4] for an illustration of the importance of attentive convergence.
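To fix ideas, the safeguarded AL iteration discussed in this section can be sketched on a one-dimensional toy instance. The following is a minimal sketch and not the actual implementation of the method: it assumes $f(x)=(x-2)^2$, $c(x)=x$, and $g$ the indicator of $(-\infty,1]$, solves the subproblem exactly by comparing its two smooth branches, uses the classical dual update $y^k = \hat y^k + (c(x^k)-z^k)/\mu_k$ with safeguarding set $Y=[-10,10]$, and halves $\mu_k$ whenever the constraint violation fails to decrease by the factor $\theta$.

```python
import math

def alm_toy(iters=30, mu=1.0, theta=0.5, y_max=10.0):
    """Safeguarded ALM sketch for: minimize (x-2)^2 subject to x <= 1,
    written as f(x) = (x-2)^2, c(x) = x, g = indicator of (-inf, 1]."""
    y_hat, y, viol_old = 0.0, 0.0, math.inf
    x = 0.0
    for _ in range(iters):
        # Subproblem: minimize (x-2)^2 + (1/(2*mu)) * max(x + mu*y_hat - 1, 0)^2;
        # solved exactly by checking which smooth branch is active.
        x_free = 2.0  # unconstrained minimizer of f
        x_act = (4.0 * mu + 1.0 - mu * y_hat) / (2.0 * mu + 1.0)
        x = x_free if x_free + mu * y_hat <= 1.0 else x_act
        # prox of mu*g at c(x) + mu*y_hat: projection onto (-inf, 1]
        z = min(x + mu * y_hat, 1.0)
        # dual update, then safeguarding onto Y = [-y_max, y_max]
        y = y_hat + (x - z) / mu
        y_hat = max(-y_max, min(y, y_max))
        # decrease the penalty parameter if the violation did not improve enough
        viol = abs(x - z)
        if viol > theta * viol_old:
            mu *= 0.5
        viol_old = viol
    return x, y

x_bar, y_bar = alm_toy()
```

On this instance the iterates approach the KKT pair $(\bar x,\bar y)=(1,2)$ of the toy problem.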
It should be noted that a mild asymptotic regularity condition is enough to guarantee that AM-stationary points of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} are indeed M-stationary, see [@demarchi2023implicit Cor. 2.6] or [@demarchi2023constrained Cor. 2.7] for related results. Thus, [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} is likely to compute M-stationary points of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}. A result analogous to [Lemma 30](#lem:global_minimizer_constraint_violation){reference-type="ref" reference="lem:global_minimizer_constraint_violation"} is available in this affordable setting, too, see [@demarchi2023implicit Prop. 3.7], whereas [@demarchi2023implicit Prop. 3.4] provides some sufficient conditions for the feasibility of accumulation points. ## Local convergence {#sec:LocalConvergence} In this section, we investigate the behavior of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} in the vicinity of stationary points of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} under various assumptions. Particularly, we are interested in the existence of strict local minimizers of the AL subproblems in a neighborhood of a strict local minimizer to [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, characterized via the second-order sufficient conditions stated in [Definition 16](#def:SOSC){reference-type="ref" reference="def:SOSC"}, and convergence rates associated with [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} in such situations. ### Existence of local minimizers Let us consider some strict local minimizer $\bar x\in\mathbb{R}^n$ of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}.
We are interested in the existence of local minimizers of the AL function $\mathcal{L}_\mu(\cdot,\hat y)$ for $\mu>0$ sufficiently small and an approximate multiplier $\hat y\in Y$, where $Y\subseteq\mathbb{R}^m$ is a bounded set, see [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}. Note that, by construction, any such local minimizer would be an $\Upsilon$-stationary point of $\mathcal{L}_\mu(\cdot,\hat y)$, and [\[step:ALM:subproblem\]](#step:ALM:subproblem){reference-type="ref" reference="step:ALM:subproblem"} would be meaningful, see [Remark 12](#rem:Upsilon_stationarity_AL){reference-type="ref" reference="rem:Upsilon_stationarity_AL"} as well. We proceed as suggested in [@BoergensKanzowSteck2019 Sec. 7] and, for sufficiently small $r>0$, consider the localized AL subproblem $$\label{eq:subproblem_existence_local_minimizers} \mathop{\mathrm{minimize}}_{x\in\mathbb{R}^n} \quad \mathcal{L}_{\mu_k}(x,\hat{y}^k) \qquad \mathop{\mathrm{subject~to}}\quad x \in \mathop{\mathrm{\mathbb{B}}}_r(\bar{x}),$$ where $\hat y^k\in Y$ is the chosen Lagrange multiplier estimate and $\mu_k\in(0,\mu_g)$. Clearly, by continuity of the Moreau envelope, see e.g. [@rockafellar1998variational Thm 1.25], [\[eq:subproblem_existence_local_minimizers\]](#eq:subproblem_existence_local_minimizers){reference-type="eqref" reference="eq:subproblem_existence_local_minimizers"} possesses a global minimizer $x^k\in\mathbb{R}^n$. Under suitable assumptions, namely WSOSC from [Definition 16](#def:SOSC){reference-type="ref" reference="def:SOSC"}, it is possible to show $\Vert x^k-\bar x \Vert<r$ for sufficiently small $\mu_k$, and the localization in [\[eq:subproblem_existence_local_minimizers\]](#eq:subproblem_existence_local_minimizers){reference-type="eqref" reference="eq:subproblem_existence_local_minimizers"} becomes superfluous. In fact, if $\mu_k\downarrow 0$, we are in a position to verify $x^k\to\bar x$ as desired. The following is an analog of [@BoergensKanzowSteck2019 Lem.
7.1]. **Lemma 33**. *Let $\bar{x}\in\mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} satisfying WSOSC. Furthermore, let $Y\subseteq\mathbb{R}^m$ be bounded. Then there is a radius $r > 0$ such that whenever $\{\hat{y}^k\} \subseteq Y$, $\mu_k\downarrow 0$, $\varepsilon_k \to 0$, and, for all $k\in\mathbb{N}$, $x^k$ is an $\varepsilon_k$-minimizer of [\[eq:subproblem_existence_local_minimizers\]](#eq:subproblem_existence_local_minimizers){reference-type="eqref" reference="eq:subproblem_existence_local_minimizers"} in the sense that $$\label{eq:approx_minimality_AL} \forall x\in\mathop{\mathrm{\mathbb{B}}}_r(\bar x)\colon\quad \mathcal{L}_{\mu_k}(x^k,\hat y^k)\leq\mathcal{L}_{\mu_k}(x,\hat y^k)+\varepsilon_k,$$ then $x^k \to \bar{x}$.* *Proof.* Let $r > 0$ be as in [Corollary 19](#corollary327){reference-type="ref" reference="corollary327"}. For large enough $k\in\mathbb{N}$, $\mu_k\in(0,\mu_g)$ holds, and we can fix $z^k \in \mathop{\mathrm{prox}}_{\mu_k g}(c(x^k) + \mu_k \hat{y}^k)$. For any such $k\in\mathbb{N}$, [\[eq:approx_minimality_AL\]](#eq:approx_minimality_AL){reference-type="eqref" reference="eq:approx_minimality_AL"} and [Lemma 11](#lem:upper_bound_AL){reference-type="ref" reference="lem:upper_bound_AL"} yield $$\begin{aligned} f(x^k) & + g(z^k) + \frac{1}{2 \mu_k} \| c(x^k) + \mu_k \hat{y}^k - z^k \|^2 - \frac{\mu_k}{2} \| \hat{y}^k \|^2 \\ & = f(x^k) + g^{\mu_k}\left( c(x^k) + \mu_k \hat{y}^k \right) - \frac{\mu_k}{2} \| \hat{y}^k \|^2 \\ & = \mathcal{L}_{\mu_k}(x^k,\hat{y}^k) \leq \mathcal{L}_{\mu_k}(\bar{x},\hat{y}^k) + \varepsilon_k \leq \varphi(\bar{x}) + \varepsilon_k < \infty. 
\end{aligned}$$ Multiplying by $\mu_k$ and using the boundedness of $\{\hat{y}^k\}$ and $\{f(x^k)\}$ together with $\mu_k\downarrow 0$ and $\varepsilon_k\to 0$, we obtain $$\limsup\limits_{k\to\infty} \left( \mu_k g(z^k) + \frac12\Vert z^k-c(x^k)-\mu_k\hat y^k \Vert^2 \right) \leq 0.$$ We apply [Lemma 3](#lem:norm_to_zero){reference-type="ref" reference="lem:norm_to_zero"} to find $\Vert z^k-c(x^k)-\mu_k\hat y^k \Vert\to 0$ and, thus, $\| c(x^k) - z^k \| \to 0$. Moreover, the above estimate also guarantees $\limsup_{k \to \infty} ( f(x^k) + g(z^k) ) \leq \varphi(\bar{x})$, again by boundedness of $\{\hat y^k\}$ and $\mu_k\downarrow 0$. Hence, [Corollary 19](#corollary327){reference-type="ref" reference="corollary327"} is applicable and yields the desired convergence. ◻ As a consequence of [Lemma 33](#lem:uniform_convergence_minimizers_subproblems){reference-type="ref" reference="lem:uniform_convergence_minimizers_subproblems"}, we find the following result which parallels [@BoergensKanzowSteck2019 Thm 7.2]. **Theorem 34**. *Let $\bar{x}\in\mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} satisfying WSOSC. Furthermore, let $Y\subseteq\mathbb{R}^m$ be bounded. Then there is a radius $r > 0$ such that, for every $\hat{y} \in Y$ and $\mu \in (0,\mu_g)$, the function $\mathcal{L}_\mu(\cdot,\hat{y})$ admits a local minimizer $x(\mu,\hat{y})$ which lies in $\mathop{\mathrm{\mathbb{B}}}_r(\bar{x})$. Moreover, $x(\mu,\hat y) \to \bar{x}$ uniformly on $Y$ as $\mu \downarrow 0$.* *Proof.* Let $r>0$ be as in [Lemma 33](#lem:uniform_convergence_minimizers_subproblems){reference-type="ref" reference="lem:uniform_convergence_minimizers_subproblems"}. For $\mu\in(0,\mu_g)$, the Moreau envelope $g^{\mu}$ is a continuous function, and this extends to $\mathcal{L}_\mu(\cdot,\hat y)$ for arbitrary $\hat y\in Y$. Hence, this function possesses a global minimizer over $\mathop{\mathrm{\mathbb{B}}}_r(\bar x)$, which we denote by $x(\mu,\hat y)$.
As $\mu\downarrow 0$, we find $x(\mu,\hat y)\to\bar x$ from [Lemma 33](#lem:uniform_convergence_minimizers_subproblems){reference-type="ref" reference="lem:uniform_convergence_minimizers_subproblems"}, and this convergence is uniform for $\hat y\in Y$. ◻ Let us note that [Lemma 33](#lem:uniform_convergence_minimizers_subproblems){reference-type="ref" reference="lem:uniform_convergence_minimizers_subproblems"} and [Theorem 34](#lem:localMin:convergence_to_strict_local_min){reference-type="ref" reference="lem:localMin:convergence_to_strict_local_min"} are based on WSOSC from [Definition 16](#def:SOSC){reference-type="ref" reference="def:SOSC"}, which does not demand uniqueness of the underlying Lagrange multiplier. ### Rates of convergence Our rates-of-convergence analysis of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} is based on a primal-dual pair $(\bar x,\bar y)\in\mathbb{R}^n\times\mathbb{R}^m$ which solves the M-stationarity system [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} associated with [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} such that an (upper) error bound condition is valid. For brevity of notation, we will partially make use of the following assumptions. **Assumption 35** (Rates of convergence). 1. [\[item:error_bound\]]{#item:error_bound label="item:error_bound"} Let $\bar x\in \mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, and let $\bar y\in\Lambda(\bar x)$ be chosen such that there are a constant $\varrho_{\mathrm{u}}>0$ and a neighborhood $U$ of $(\bar x,c(\bar x),\bar y)$ such that the upper error bound condition [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"} holds for each triplet $(x,z,y)\in U\cap(\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m)$. 2. 
[\[item:ALMsequence\]]{#item:ALMsequence label="item:ALMsequence"} Let $\{(x^k,z^k,y^k)\}$ be a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} with $\varepsilon_k\to 0$. 3. [\[item:pure_convergence\]]{#item:pure_convergence label="item:pure_convergence"} The primal-dual sequence $\{(x^k,y^k)\}$ converges to $(\bar{x},\bar{y})$. 4. [\[item:no_safeguarding\]]{#item:no_safeguarding label="item:no_safeguarding"} For each $k\in\mathbb{N}$ large enough, $\hat{y}^k = y^{k-1}$ is valid. Note that we already know, by [Theorem 34](#lem:localMin:convergence_to_strict_local_min){reference-type="ref" reference="lem:localMin:convergence_to_strict_local_min"}, that the AL admits approximate local minimizers and stationary points in a neighborhood of some M-stationary point $\bar{x}\in\mathbb{R}^n$ which satisfies WSOSC. We shall now see that, under the error bound conditions from [4.3](#sec:ErrorBounds){reference-type="ref" reference="sec:ErrorBounds"} involving a fixed multiplier $\bar y\in\Lambda(\bar x)$, if the algorithm chooses these local minimizers (or any other points sufficiently close to $\bar{x}$), then we automatically obtain the convergence $(x^k,z^k,y^k) \to (\bar{x},c(\bar x),\bar{y})$. In this case, the sequence $\{y^k\}$ is necessarily bounded, so it is reasonable to assume that the safeguarded multipliers are eventually chosen as $\hat{y}^k = y^{k-1}$. Let us recall that even validity of SSOSC at $\bar x$ for $\bar y$ may not be sufficient for the error bound condition, see [Remark 25](#rem:on_the_really_crucial_CQ){reference-type="ref" reference="rem:on_the_really_crucial_CQ"}. However, [4.3](#sec:ErrorBounds){reference-type="ref" reference="sec:ErrorBounds"} provides a number of sufficient conditions which still can be checked in terms of initial problem data, so we will abstain here from postulating any more specific assumptions on the upper error bound. 
Furthermore, we do not stipulate any lower error bound conditions, deviating from all other related papers, where the lower estimate was never problematic, see [Remark 28](#rem:lower_error_bound){reference-type="ref" reference="rem:lower_error_bound"}. The following result, which is motivated by [@steck2018dissertation Prop. 4.29], can be considered as (retrospective) justification for [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:pure_convergence\]](#item:pure_convergence){reference-type="ref" reference="item:pure_convergence"}--[\[item:no_safeguarding\]](#item:no_safeguarding){reference-type="ref" reference="item:no_safeguarding"} in the presence of [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:error_bound\]](#item:error_bound){reference-type="ref" reference="item:error_bound"}--[\[item:ALMsequence\]](#item:ALMsequence){reference-type="ref" reference="item:ALMsequence"}. Besides the error bound condition, a CQ is needed. As we require an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, this is not too restrictive. **Proposition 36**. *Let [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:error_bound\]](#item:error_bound){reference-type="ref" reference="item:error_bound"}--[\[item:ALMsequence\]](#item:ALMsequence){reference-type="ref" reference="item:ALMsequence"} hold and suppose that (at least) one of the following conditions is valid:* 1. *[\[item:ErrorBound:convergence_vanishing_error:fullrank\]]{#item:ErrorBound:convergence_vanishing_error:fullrank label="item:ErrorBound:convergence_vanishing_error:fullrank"} $c^\prime(\bar x)$ possesses full row rank $m$;* 2. 
*[\[item:ErrorBound:convergence_vanishing_error:some_CQ\]]{#item:ErrorBound:convergence_vanishing_error:some_CQ label="item:ErrorBound:convergence_vanishing_error:some_CQ"} condition [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} is valid, $\mathop{\mathrm{dom}}g$ is closed, and $g$ is continuous relative to its domain.* *Then there exists a radius $r > 0$ such that, if $x^k \in \mathop{\mathrm{\mathbb{B}}}_r(\bar{x})$ for all sufficiently large $k\in\mathbb{N}$, then we have the convergences $\Theta(x^k,z^k,y^k) \to 0$ and $(x^k,z^k,y^k) \to (\bar{x},c(\bar{x}),\bar{y})$ as $k\to\infty$.* *Proof.* Let $r > 0$ be small enough so that the upper estimate in the error bound condition [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"} holds for all $(x,z,y) \in \mathbb{R}^n\times \mathop{\mathrm{dom}}g \times \mathbb{R}^m$ with $x \in \mathop{\mathrm{\mathbb{B}}}_r(\bar{x})$. Assume now that $x^k \in \mathop{\mathrm{\mathbb{B}}}_r(\bar{x})$ for all $k\in\mathbb{N}$ sufficiently large. The proof is divided into multiple steps. We first show that $c(x^k) - z^k \to 0$ as $k\to \infty$. Consider two cases. If $\{\mu_k\}$ remains bounded away from zero, this assertion readily follows from the penalty updating scheme at [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"}. Instead, if $\mu_k\downarrow 0$, then we can argue from [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"} that $$\label{eq:some_intermediate_convergence} c^\prime(x^k)^\top \left[ c(x^k) + \mu_k\hat y^k - z^k \right] \to 0$$ as $\mu_k\downarrow 0$, by boundedness of $\{\nabla f(x^k)\}$, $\{\hat{y}^k\}$, and $\{\varepsilon_k\}$. Let us now show $c(x^k) + \mu_k\hat y^k-z^k \to 0$, which readily yields $c(x^k)-z^k\to 0$ since $\{\hat y^k\}$ is bounded and $\mu_k\downarrow 0$. 
In case [\[item:ErrorBound:convergence_vanishing_error:fullrank\]](#item:ErrorBound:convergence_vanishing_error:fullrank){reference-type="ref" reference="item:ErrorBound:convergence_vanishing_error:fullrank"} where $c^\prime(\bar x)$ has full row rank, the matrices $c^\prime(x) c^\prime(x)^\top$ are uniformly invertible on $\mathop{\mathrm{\mathbb{B}}}_r(\bar x)$, potentially after shrinking $r$, and [\[eq:some_intermediate_convergence\]](#eq:some_intermediate_convergence){reference-type="eqref" reference="eq:some_intermediate_convergence"} gives $c(x^k) + \mu_k\hat y^k-z^k \to 0$. Next, for case [\[item:ErrorBound:convergence_vanishing_error:some_CQ\]](#item:ErrorBound:convergence_vanishing_error:some_CQ){reference-type="ref" reference="item:ErrorBound:convergence_vanishing_error:some_CQ"}, assume that [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} holds while $\mathop{\mathrm{dom}}g$ is closed and $g$ is continuous on its domain. Note that, for each $k\in\mathbb{N}$, we even have $\mu_k^{-1}(c(x^k)+\mu_k\hat y^k-z^k)\in\widehat{\partial}g(z^k)$ by definition of the prox-operator and compatibility of the regular subdifferential with smooth additions, and this also gives $(c(x^k)+\mu_k\hat y^k-z^k,-\mu_k)\in\widehat{N}_{\mathop{\mathrm{epi}}g}(z^k,g(z^k))$. Recall that [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} is equivalent to the metric regularity of $\Xi$ from [\[eq:svm_composition\]](#eq:svm_composition){reference-type="eqref" reference="eq:svm_composition"} at $((\bar x,g(c(\bar x))),(0,0))$. 
This metric regularity now yields the existence of $s>0$ such that, for all sufficiently large $k\in\mathbb{N}$, we have $$\label{eq:uniform_condition} \mathop{\mathrm{\mathbb{B}}}_s(0,0) \subseteq \begin{bmatrix} c^\prime(x^k) & 0 \\ 0 & 1 \end{bmatrix} \mathop{\mathrm{\mathbb{B}}}_1(0,0) + \mathop{\mathrm{conv}}\bigl(T_{\mathop{\mathrm{epi}}g}(z^k,g(z^k)) \cap \mathop{\mathrm{\mathbb{B}}}_1(0,0)\bigr)$$ as $g$ is continuous on $\mathop{\mathrm{dom}}g$. In order to see this, we need to make sure that $(z^k,g(z^k))$ is sufficiently close to $(c(\bar x),g(c(\bar x)))$ for large enough $k\in\mathbb{N}$, and due to the continuity of $g$, this boils down to showing that $z^k$ is sufficiently close to $c(\bar x)$ for large enough $k\in\mathbb{N}$. Along the tail of the sequence (without relabeling), we have that $\{ x^k \}$ is close to $\bar{x}$. For every $k$, the optimality of $z^k$ in the proximal minimization subproblem reads $$\forall z \in \mathbb{R}^m\colon\qquad \mu_k g(z^k) + \frac{1}{2} \Vert c(x^k) + \mu_k \hat{y}^k - z^k \Vert^2 \leq \mu_k g(z) + \frac{1}{2} \Vert c(x^k) + \mu_k \hat{y}^k - z \Vert^2 .$$ Taking the specific choice $z :=c(\bar{x}) \in \mathop{\mathrm{dom}}g$ and dividing both sides by $\mu_k > 0$ results in $$g(z^k) + \frac{1}{2 \mu_k} \Vert c(x^k) + \mu_k \hat{y}^k - z^k \Vert^2 \leq g(c(\bar{x})) + \frac{1}{2 \mu_k} \Vert c(x^k) + \mu_k \hat{y}^k - c(\bar{x}) \Vert^2 < \infty .$$ By invoking the triangle, Cauchy--Schwarz, and Young's inequalities, this implies that $$\begin{aligned} \Vert z^k - c(\bar{x}) \Vert^2 {}={}& \Vert z^k - [c(x^k) + \mu_k \hat{y}^k] - c(\bar{x}) + [c(x^k) + \mu_k \hat{y}^k] \Vert^2 \\ {}\leq{}& \Vert z^k - [c(x^k) + \mu_k \hat{y}^k] \Vert^2 \\{}{}&\qquad + 2 \Vert z^k - [c(x^k) + \mu_k \hat{y}^k] \Vert \Vert c(\bar{x}) - [c(x^k) + \mu_k \hat{y}^k] \Vert \\{}{}&\qquad + \Vert c(\bar{x}) - [c(x^k) + \mu_k \hat{y}^k] \Vert^2 \\ {}\leq{}& 2 \Vert z^k - [c(x^k) + \mu_k \hat{y}^k] \Vert^2 + 2 \Vert c(\bar{x}) - [c(x^k) + \mu_k \hat{y}^k]
\Vert^2 \\ {}\leq{}& 4 \left[ \mu_k g( c(\bar{x}) ) - \mu_k g(z^k) + \Vert c(x^k) + \mu_k \hat{y}^k - c(\bar{x}) \Vert^2 \right] . \end{aligned}$$ Rearranging gives $$\mu_k g(z^k) + \frac{1}{4} \Vert z^k - c(\bar{x}) \Vert^2 \leq \mu_k g( c(\bar{x}) ) + \Vert c(x^k) + \mu_k \hat{y}^k - c(\bar{x}) \Vert^2.$$ Since $c(\bar{x}) \in \mathop{\mathrm{dom}}g$, the term $\mu_k g( c(\bar{x}) )$ vanishes as $\mu_k\downarrow 0$, and so does $\mu_k \hat{y}^k$. Therefore, possibly shrinking the neighborhood considered around $\bar{x}$, the right-hand side remains bounded by some arbitrarily small $C > 0$ for all large $k\in\mathbb{N}$, by continuous differentiability of $c$, i.e., $$\mu_k g(z^k) + \frac{1}{4} \Vert z^k - c(\bar{x}) \Vert^2 \leq C$$ holds for all $k\in\mathbb{N}$ large enough. By virtue of the prox-boundedness of $g$, [@demarchi2022proximal Lem. 4.1] yields boundedness of $\{z^k\}$ and, thus, of $\{g(z^k)\}$ by continuity of $g$ on its domain, which is assumed to be closed (Heine's theorem yields uniform continuity of $g$ on closed, bounded subsets of $\mathop{\mathrm{dom}}g$). As $\mu_k\downarrow 0$ and $C>0$ can be made arbitrarily small provided $r>0$ is chosen small enough, it follows that $\{z^k\}$ is arbitrarily close to $c(\bar x)$ for all large enough $k\in\mathbb{N}$. Pick an arbitrary $w\in \mathop{\mathrm{\mathbb{B}}}_s(0)$. Then, for each sufficiently large $k\in\mathbb{N}$, we can rely on [\[eq:uniform_condition\]](#eq:uniform_condition){reference-type="eqref" reference="eq:uniform_condition"} to find $(u^k,\alpha_k)\in \mathop{\mathrm{\mathbb{B}}}_1(0,0)$ and $(v^k,\beta_k)\in \mathop{\mathrm{conv}}\bigl(T_{\mathop{\mathrm{epi}}g}(z^k,g(z^k))\cap\mathop{\mathrm{\mathbb{B}}}_1(0,0)\bigr)$ such that $(w,0)=(c^\prime(x^k)u^k+v^k, \alpha_k+\beta_k)$.
From $(v^k,\beta_k)\in \mathop{\mathrm{conv}}T_{\mathop{\mathrm{epi}}g}(z^k,g(z^k))$, we find $\langle (v^k,\beta_k), (c(x^k)+\mu_k\hat y^k-z^k,-\mu_k) \rangle\leq 0$ due to [\[eq:polarization_rules\]](#eq:polarization_rules){reference-type="eqref" reference="eq:polarization_rules"}. Thus $$\begin{aligned} \langle w, c(x^k)+\mu_k\hat y^k-z^k \rangle &= \langle c^\prime(x^k)u^k+v^k, c(x^k)+\mu_k\hat y^k-z^k \rangle \\ &= \langle u^k, c^\prime(x^k)^\top(c(x^k)+\mu_k\hat y^k-z^k) \rangle + \mu_k\beta_k \\ &\qquad +\langle (v^k,\beta_k), (c(x^k)+\mu_k\hat y^k-z^k,-\mu_k) \rangle \\ &\leq \langle u^k, c^\prime(x^k)^\top(c(x^k)+\mu_k\hat y^k-z^k) \rangle + \mu_k\beta_k \to 0, \end{aligned}$$ where we used boundedness of $\{u^k\}$ and $\{\beta_k\}$ as well as $\mu_k\downarrow 0$ and [\[eq:some_intermediate_convergence\]](#eq:some_intermediate_convergence){reference-type="eqref" reference="eq:some_intermediate_convergence"}. Testing this with $w :=\pm s(c(x^k)+\mu_k\hat y^k-z^k)/\Vert c(x^k)+\mu_k\hat y^k-z^k \Vert$ gives $c(x^k)+\mu_k\hat y^k-z^k\to 0$. We now demonstrate that $\Theta(x^k,z^k,y^k) \to 0$ as $k\to\infty$. Observe that $$\nabla_x\mathcal{L}(x^k,y^k) = \nabla_x \mathcal{L}^{\mathrm{S}}(x^k,z^k,y^k) = \nabla_x \mathcal{L}^{\mathrm{S}}_{\mu_k}(x^k,z^k,\hat{y}^k)$$ holds for all $k\in\mathbb{N}$ by construction of the dual update rule. Then the first summand in $\Theta(x^k,z^k,y^k)$ satisfies $$\Vert \nabla_x \mathcal{L}(x^k,y^k) \Vert = \Vert \nabla_x \mathcal{L}^{\mathrm{S}}_{\mu_k}(x^k,z^k,\hat{y}^k) \Vert \leq \varepsilon_k ,$$ which converges to zero by [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:ALMsequence\]](#item:ALMsequence){reference-type="ref" reference="item:ALMsequence"}. Hence, as $\Vert c(x^k) - z^k \Vert \to 0$ was obtained previously, the second term in [\[eq:ErrorBound:error\]](#eq:ErrorBound:error){reference-type="eqref" reference="eq:ErrorBound:error"} vanishes, too.
For the third and last term, it remains to show that $\mathop{\mathrm{dist}}(y^k, \partial g(z^k))\to 0$. This, however, readily follows from [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}. Finally, recall that $x^k \in \mathop{\mathrm{\mathbb{B}}}_r(\bar{x})$ for all $k\in\mathbb{N}$ and that $\Theta(x^k,z^k,y^k) \to 0$. Hence, the convergence $(x^k,z^k,y^k) \to (\bar{x},c(\bar x),\bar{y})$ is an immediate consequence of [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"}. ◻ Subsequently, we will prove convergence rates for the sequence $\{(x^k,z^k,y^k)\}$ in the presence of [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}. Since the distance of $(x^k,z^k,y^k)$ to $(\bar{x},c(\bar x),\bar{y})$ admits an estimate relative to the residual terms $\Theta_k :=\Theta(x^k,z^k,y^k)$ by [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"}, we will largely base our analysis on the sequence $\{\Theta_k\}$, and the results on the sequence $\{(x^k,z^k,y^k)\}$ will follow directly. However, this correspondence heavily relies on a *two-sided* error bound, see the proof of [Theorem 39](#thm:rate_convergence){reference-type="ref" reference="thm:rate_convergence"} below. In stark contrast to [Remark 28](#rem:lower_error_bound){reference-type="ref" reference="rem:lower_error_bound"}, the following [Lemma 37](#lem:lower_error_bound:ALM){reference-type="ref" reference="lem:lower_error_bound:ALM"} shows that, along a sequence generated by [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}, a lower error bound holds. This exploits the fact that, as a consequence of [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, the distance-to-subdifferential in $\Theta$ does not play a role for the error bound *at the iterates*. 
Therefore, complementing the upper estimate of [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:error_bound\]](#item:error_bound){reference-type="ref" reference="item:error_bound"}, a two-sided error bound becomes *algorithmically* available, enabling the derivation of convergence rates. **Lemma 37**. *Let $\bar x\in\mathbb{R}^n$ be an M-stationary point of [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and $\bar y\in\Lambda(\bar x)$ be arbitrary. Suppose [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:ALMsequence\]](#item:ALMsequence){reference-type="ref" reference="item:ALMsequence"}--[\[item:no_safeguarding\]](#item:no_safeguarding){reference-type="ref" reference="item:no_safeguarding"} hold. Then there are a constant $\varrho_{\mathrm{l}}>0$ and a neighborhood $U$ of $(\bar x,c(\bar x),\bar y)$ such that, for each triplet $(x^k,z^k,y^k)\in U\cap(\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m)$, we have $$\label{eq:error_bound_lower:ALM} \varrho_{\mathrm{l}}\,\Theta(x^k,z^k,y^k) \leq \Vert x^k-\bar x \Vert+\Vert z^k-c(\bar x) \Vert+\Vert y^k-\bar y \Vert.$$* *Proof.* For each triplet $(x^k,z^k,y^k)\in\mathbb{R}^n\times\mathop{\mathrm{dom}}g\times\mathbb{R}^m$, we can exploit [\[eq:Mstationary\]](#eq:Mstationary){reference-type="eqref" reference="eq:Mstationary"} and the triangle inequality to obtain $$\begin{aligned} \Theta(x^k,z^k,y^k) {}={}& \Vert \nabla_x\mathcal{L}(x^k,y^k) \Vert + \Vert c(x^k)-z^k \Vert \\ {}\leq{}& \Vert \nabla_x\mathcal{L}(x^k,y^k)-\nabla_x\mathcal{L}(\bar x,\bar y) \Vert + \Vert c(x^k)-c(\bar x) \Vert + \Vert z^k-c(\bar x) \Vert , \end{aligned}$$ where the equality is due to [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:ALMsequence\]](#item:ALMsequence){reference-type="ref" reference="item:ALMsequence"} and [Proposition 
29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, which imply $\mathop{\mathrm{dist}}(y^k,\partial g(z^k)) = 0$ for all $k\in\mathbb{N}$. Then, noting that $\nabla_x\mathcal{L}$ and $c$ are locally Lipschitz continuous by [Assumption 1](#ass:P){reference-type="ref" reference="ass:P"}[\[ass:f\]](#ass:f){reference-type="ref" reference="ass:f"}, the claim follows. ◻ Our next result has been inspired by [@steck2018dissertation Lem. 4.30]. **Lemma 38**. *Let [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"} hold and set $\Theta_k :=\Theta(x^k,z^k,y^k)$ for each $k\in\mathbb{N}$. Then $$( 1 - \varrho_{\mathrm{u}} \mu_k ) \Theta_k \leq \varepsilon_k + \varrho_{\mathrm{u}} \mu_k \Theta_{k-1}$$ for all $k\in\mathbb{N}$ large enough, where $\varrho_{\mathrm{u}}>0$ is the constant from [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"}.* *Proof.* Due to [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, $y^k \in \partial g(z^k)$ holds for all $k\in\mathbb{N}$. Then, by [\[eq:ErrorBound:error\]](#eq:ErrorBound:error){reference-type="eqref" reference="eq:ErrorBound:error"} and [\[step:ALM:subproblem\]](#step:ALM:subproblem){reference-type="ref" reference="step:ALM:subproblem"}, we have $$\Theta_k \leq \varepsilon_k + \Vert c(x^k) - z^k \Vert = \varepsilon_k + \mu_k \Vert y^k - y^{k-1} \Vert ,$$ where the equality is due to the update rule at [\[step:ALM:y\]](#step:ALM:y){reference-type="ref" reference="step:ALM:y"} and [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:no_safeguarding\]](#item:no_safeguarding){reference-type="ref" reference="item:no_safeguarding"}. 
Using the triangle inequality yields $$\Theta_k \leq \varepsilon_k + \mu_k \Vert y^k - \bar{y} \Vert + \mu_k \Vert y^{k-1} - \bar{y} \Vert .$$ By [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:error_bound\]](#item:error_bound){reference-type="ref" reference="item:error_bound"}, since $x^k \to \bar{x}$ and, due to [Proposition 36](#lem:ErrorBound:convergence_vanishing_error){reference-type="ref" reference="lem:ErrorBound:convergence_vanishing_error"}, $z^k \to c(\bar{x})$, we find $\Vert y^k - \bar{y} \Vert \leq \varrho_{\mathrm{u}} \Theta_k$ for all $k\in\mathbb{N}$ large enough. Hence, $$\Theta_k \leq \varepsilon_k + \varrho_{\mathrm{u}} \mu_k (\Theta_k + \Theta_{k-1})$$ holds for all $k\in\mathbb{N}$ large enough, and reordering gives the assertion. ◻ With the above lemma and the two-sided error bound enabled by [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:error_bound\]](#item:error_bound){reference-type="ref" reference="item:error_bound"} and [Lemma 37](#lem:lower_error_bound:ALM){reference-type="ref" reference="lem:lower_error_bound:ALM"}, one can deduce convergence rates for the sequence $\{(x^k,z^k,y^k)\}$, see [@steck2018dissertation Thm 4.31] as well. Notice that the condition $\varepsilon_k \in \mathpzc{o}(\Theta_{k-1})$ can easily be guaranteed in practice. For example, one could compute the next iterate $(x^k,z^k,y^k)$ with a precision $\varepsilon_k \leq \nu_k \Theta_{k-1}$, where $\{\nu_k\}$ is a given null sequence.
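To make the interplay between this tolerance rule and the recurrence of Lemma 38 concrete, the following sketch iterates the worst case of that recurrence with $\varepsilon_k = \nu_k \Theta_{k-1}$; all constants here ($\varrho_{\mathrm{u}} = 2$, $\nu_k = 2^{-k}$, the penalty schedules) are hypothetical and chosen only for illustration:

```python
rho_u = 2.0                 # hypothetical error-bound constant
nu = lambda k: 0.5 ** k     # null sequence defining eps_k = nu_k * Theta_{k-1}

def residuals(mu_of_k, iters=20, theta0=1.0):
    """Iterate the worst case of the recurrence
    (1 - rho_u*mu_k) * Theta_k <= eps_k + rho_u*mu_k * Theta_{k-1}."""
    theta = [theta0]
    for k in range(1, iters + 1):
        mu = mu_of_k(k)
        eps = nu(k) * theta[-1]          # inner precision, in o(Theta_{k-1})
        theta.append((eps + rho_u * mu * theta[-1]) / (1 - rho_u * mu))
    return theta

fixed = residuals(lambda k: 0.05)          # constant penalty: rho_u*mu = 0.1
vanishing = residuals(lambda k: 0.05 / k)  # penalty parameter driven to zero

ratio_fixed = fixed[-1] / fixed[-2]              # settles near 0.1/0.9
ratio_vanishing = vanishing[-1] / vanishing[-2]  # keeps shrinking towards 0
```

With a constant penalty parameter, the ratio $\Theta_k/\Theta_{k-1}$ settles near $\varrho_{\mathrm{u}}\mu/(1-\varrho_{\mathrm{u}}\mu)$, matching the Q-linear regime, while $\mu_k \downarrow 0$ drives the ratio to zero, matching the Q-superlinear regime.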
It should also be mentioned that the value $\Theta_k$ from [\[eq:ErrorBound:error\]](#eq:ErrorBound:error){reference-type="eqref" reference="eq:ErrorBound:error"} becomes algorithmically available thanks to [Proposition 29](#lem:ALM:exact_complementarity){reference-type="ref" reference="lem:ALM:exact_complementarity"}, and can readily be obtained by the dual update rule at [\[step:ALM:y\]](#step:ALM:y){reference-type="ref" reference="step:ALM:y"}. **Theorem 39**. *Let [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"} hold and set $\Theta_k:=\Theta(x^k,z^k,y^k)$ for each $k\in\mathbb{N}$. Assume that $\varepsilon_k \in \mathpzc{o}(\Theta_{k-1})$. Then* 1. *For every $q\in(0,1)$, there exists $\bar{\mu}(q)$ such that, if $\mu_k \leq \bar{\mu}(q)$ for sufficiently large $k\in\mathbb{N}$, then $(x^k,z^k,y^k) \to (\bar{x},c(\bar x),\bar{y})$ Q-linearly with rate $q$.* 2. *If $\mu_k\downarrow 0$, then $(x^k,z^k,y^k) \to (\bar{x},c(\bar x),\bar{y})$ Q-superlinearly.* *Proof.* Let $k\in\mathbb{N}$ be large enough so that $\hat{y}^k = y^{k-1}$.
By [Lemma 38](#lem:ErrorBound:convergence_rate_error){reference-type="ref" reference="lem:ErrorBound:convergence_rate_error"}, if $\mu_k$ is small enough so that $1 - \varrho_{\mathrm{u}} \mu_k > 0$, then $$\frac{\Theta_k}{\Theta_{k-1}} \leq \frac{\varrho_{\mathrm{u}} \mu_k}{1 - \varrho_{\mathrm{u}} \mu_k} + \mathpzc{o}(1).$$ The desired rates for $\{(x^k,z^k,y^k)\}$ are an easy consequence of the upper and lower estimates in [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"} and [Lemma 37](#lem:lower_error_bound:ALM){reference-type="ref" reference="lem:lower_error_bound:ALM"}, as these give $$\begin{aligned} \frac{\Vert x^k-\bar x \Vert+\Vert z^k-c(\bar x) \Vert+\Vert y^k-\bar y \Vert}{\Vert x^{k-1}-\bar x \Vert+\Vert z^{k-1}-c(\bar x) \Vert+\Vert y^{k-1}-\bar y \Vert} \leq \frac{\varrho_{\mathrm{u}}}{\varrho_{\mathrm{l}}} \frac{\varrho_{\mathrm{u}} \mu_k}{1 - \varrho_{\mathrm{u}} \mu_k} + \mathpzc{o}(1) \end{aligned}$$ for all $k\in\mathbb{N}$ large enough. ◻ The following result is analogous to [@steck2018dissertation Cor. 4.32]. **Corollary 40**. *Let [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"} hold and assume that the subproblems occurring at [\[step:ALM:subproblem\]](#step:ALM:subproblem){reference-type="ref" reference="step:ALM:subproblem"} of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"} are solved exactly, i.e., that $\varepsilon_k = 0$ for all $k\in\mathbb{N}$. Then $\{\mu_k\}$ remains bounded away from zero.* *Proof.* For each $k\in\mathbb{N}$, we make use of $V_k:=\Vert c(x^k)-z^k \Vert$ and $\Theta_k:=\Theta(x^k,z^k,y^k)$. Let $k\in\mathbb{N}$ be large enough so that $\hat{y}^k = y^{k-1}$. Arguing as in the proof of [Lemma 38](#lem:ErrorBound:convergence_rate_error){reference-type="ref" reference="lem:ErrorBound:convergence_rate_error"}, we have for all $k\in\mathbb{N}$ that $\Theta_k \leq V_k$ since $\varepsilon_k = 0$. 
Furthermore, using the triangle inequality, the convergences $x^k\to\bar{x}$ and $z^k \to c(\bar{x})$, and [Assumption 35](#ass:rate_convergence){reference-type="ref" reference="ass:rate_convergence"}[\[item:no_safeguarding\]](#item:no_safeguarding){reference-type="ref" reference="item:no_safeguarding"}, we obtain $$V_k = \mu_k\Vert y^k-y^{k-1} \Vert \leq \varrho_{\mathrm{u}} \mu_k (\Theta_k + \Theta_{k-1})$$ from [\[eq:error_bound\]](#eq:error_bound){reference-type="eqref" reference="eq:error_bound"}. Combining these inequalities yields $$\frac{V_k}{V_{k-1}} \leq \varrho_{\mathrm{u}} \mu_k \left(\frac{\Theta_k}{\Theta_{k-1}} + 1 \right) .$$ Assuming now by contradiction that $\mu_k\downarrow 0$, we deduce from the proof of [Theorem 39](#thm:rate_convergence){reference-type="ref" reference="thm:rate_convergence"} that $\Theta_k/\Theta_{k-1} \to 0$, and then $V_k/V_{k-1} \to 0$ follows. Hence, $V_k / V_{k-1} \leq \theta$ for all $k\in\mathbb{N}$ sufficiently large, where $\theta\in(0,1)$ is a fixed parameter of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}, so that [\[step:ALM:penalty\]](#step:ALM:penalty){reference-type="ref" reference="step:ALM:penalty"} gives a contradiction, thus proving the assertion. ◻ In summary, local fast convergence of [\[alg:ALM\]](#alg:ALM){reference-type="ref" reference="alg:ALM"}, even for nonconvex functions $g$, can be obtained in the presence of suitable second-order conditions (one ensuring the existence of minimizers of the subproblems and another one guaranteeing validity of an upper error bound) and a first-order CQ which, in principle, gives us the full convergence of the primal-dual sequence. In comparison with the noteworthy results from [@FernandezSolodov2012; @HangSarabi2021], these assumptions may seem quite strong.
However, let us mention that in the settings discussed in these papers, the (convex) function $g$ under consideration is chosen in such a way that the aforementioned two second-order conditions can already be merged into one, see [Remark 22](#rem:error_bound_for_convex_piecewise_quadratic_functions){reference-type="ref" reference="rem:error_bound_for_convex_piecewise_quadratic_functions"}. Furthermore, it is likely that the additional postulation of a first-order CQ could be avoided in these papers too, since $g$ (or at least its derivative) is convex and/or polyhedral *enough* and, for convex functions, the proximal operator is well-behaved. It remains a question of future research whether, for example, a generalized polyhedral structure of $g$ (where its domain and epigraph are unions of finitely many convex polyhedral sets) makes the additional assumption of a first-order CQ superfluous. # Some exemplary settings {#sec:examples} In light of our theoretical findings for the general problem [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, this section examines two notable illustrative settings: sparsity-promoting and complementarity-constrained optimization. ## Sparsity-promoting optimization Here, we take a closer look at the sparsity-promoting optimization problem [\[eq:sparse_programming\]](#eq:sparse_programming){reference-type="eqref" reference="eq:sparse_programming"} which has already been discussed in [Example 15](#ex:conjugates_for_nonconvex_g_messy){reference-type="ref" reference="ex:conjugates_for_nonconvex_g_messy"}. Let us fix some point $\bar x\in\mathbb{R}^n$. For $\bar y\in\partial\Vert \cdot \Vert_0(c(\bar x))$, we make use of $$I^{00}(\bar x,\bar y):=\{i\in I^0(\bar x)\,|\,\bar y_i=0\}, \quad I^{0\pm}(\bar x,\bar y):=\{i\in I^0(\bar x)\,|\,\bar y_i\neq 0\}$$ where $I^0(\bar x)$ has been defined in [Example 15](#ex:conjugates_for_nonconvex_g_messy){reference-type="ref" reference="ex:conjugates_for_nonconvex_g_messy"}.
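Given the data $(c(\bar x),\bar y)$, these index sets are straightforward to evaluate. A small sketch on hypothetical data (it assumes, as recalled from Example 15, that $I^0(\bar x)$ collects the indices with $c_i(\bar x)=0$):

```python
def sparsity_index_sets(c_xbar, y_bar, tol=1e-12):
    """Compute I^0, I^{00} and I^{0+-} for the sparsity-promoting setting.

    Assumption: I^0(xbar) = {i | c_i(xbar) = 0}, as recalled from Example 15;
    I^{00} and I^{0+-} split I^0 by whether the multiplier entry vanishes.
    """
    I0 = {i for i, ci in enumerate(c_xbar) if abs(ci) <= tol}
    I00 = {i for i in I0 if abs(y_bar[i]) <= tol}
    I0pm = I0 - I00
    return I0, I00, I0pm

# hypothetical data: c(xbar) = (0, 1.5, 0) and ybar = (-2, 0, 0)
I0, I00, I0pm = sparsity_index_sets([0.0, 1.5, 0.0], [-2.0, 0.0, 0.0])
# I0 == {0, 2}, I00 == {2}, I0pm == {0}
```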
With the definition of $I^\pm(\bar x)$ therein, one obtains that $$\label{eq:tangent_cone_ell0_quasi_norm} T_{\mathop{\mathrm{gph}}\partial\Vert \cdot \Vert_0}(c(\bar x),\bar y) = \left\{ (v,\eta)\in\mathbb{R}^m\times\mathbb{R}^m\,\middle|\, \begin{aligned} &\forall i\in I^\pm(\bar x)\colon&&\eta_i=0\\ &\forall i\in I^{0\pm}(\bar x,\bar y)\colon&&v_i=0\\ &\forall i\in I^{00}(\bar x,\bar y)\colon&& v_i\eta_i=0 \end{aligned} \right\}.$$ This can be used to see that [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"} reduces to the linear independence of the family $(\nabla c_i(\bar x))_{i\in I^0(\bar x)}$. For $u\in\mathbb{R}^n\setminus\{0\}$ and $\eta\in D(\partial \Vert \cdot \Vert_0)(c(\bar x),\bar y)(c^\prime(\bar x)u)$, we easily find $$\langle \eta, c^\prime(\bar x)u \rangle = \sum_{i\in I^{\pm}(\bar x)}\underbrace{\eta_i}_{=0}c^\prime_i(\bar x)u + \sum_{i\in I^0(\bar x)}\underbrace{\eta_i c^\prime_i(\bar x)u}_{=0} = 0,$$ so that [\[eq:some_new_second_order_condition_for_error_bound\]](#eq:some_new_second_order_condition_for_error_bound){reference-type="eqref" reference="eq:some_new_second_order_condition_for_error_bound"} reduces to $$\label{eq:sufficient_condition_error_bound_sparse_programming} \forall u\in\{u^\prime\in\mathbb{R}^n\,|\,\forall i\in I^{0\pm}(\bar x,\bar y)\colon\,c^\prime_i(\bar x)u^\prime=0\}\setminus\{0\}\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]>0,$$ and this corresponds to the classical second-order sufficient condition for the nonlinear program $$\mathop{\mathrm{minimize}}_x {}\quad{} f(x) {}\quad{} \mathop{\mathrm{subject~to}} {}\quad{} c_i(x)=0\quad i\in I^{0\pm}(\bar x,\bar y).$$ From [@BenkoMehlitz2023 Ex. 
6.3], we find $$\mathcal C(\bar x) = \{u\in\mathbb{R}^n\,|\,f^\prime(\bar x)u\leq 0,\,\forall i\in I^{0}(\bar x)\colon\,c^\prime_i(\bar x)u=0\}.$$ However, for $u\in\mathbb{R}^n$ satisfying $c^\prime_i(\bar x)u=0$ for all $i\in I^0(\bar x)$, we already have $$f^\prime(\bar x)u = -\langle \bar y, c^\prime(\bar x)u \rangle = -\sum_{i\in I^{\pm}(\bar x)}\underbrace{\bar y_i}_{=0}c^\prime_i(\bar x)u -\sum_{i\in I^{0}(\bar x)}\bar y_i\underbrace{c^\prime_i(\bar x)u}_{=0} = 0,$$ i.e., $u\in\mathcal C(\bar x)$ due to $\bar y\in\partial\Vert \cdot \Vert_0(c(\bar x))$, and this particularly holds for $\bar y\in\Lambda(\bar x)$ which exists whenever $\bar x$ is M-stationary. In the latter case, we thus obtain the simplified representation $$\label{eq:critical_cone_sparse_programming_simplified} \mathcal C(\bar x)=\{u\in\mathbb{R}^n\,|\,\forall i\in I^0(\bar x)\colon\,c^\prime_i(\bar x)u=0\}.$$ Noting that $\bar y_i=0$ holds for each $i\in I^{\pm}(\bar x)$, [@BenkoMehlitz2023 Ex. 6.3] shows that WSOSC is implied by $$\forall u\in\mathcal C(\bar x)\setminus\{0\},\,\exists y\in\Lambda(\bar x)\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,y)[u,u]>0,$$ while $$\exists\bar y\in\Lambda(\bar x),\,\forall u\in\mathcal C(\bar x)\setminus\{0\}\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]>0$$ is sufficient for SSOSC, and these correspond to certain second-order sufficient optimality conditions for the optimization problem $$\mathop{\mathrm{minimize}}_x {}\quad{} f(x) {}\quad{} \mathop{\mathrm{subject~to}} {}\quad{} c_i(x)=0\quad i\in I^{0}(\bar x).$$ Clearly, due to [\[eq:critical_cone_sparse_programming_simplified\]](#eq:critical_cone_sparse_programming_simplified){reference-type="eqref" reference="eq:critical_cone_sparse_programming_simplified"}, both conditions are implied by [\[eq:sufficient_condition_error_bound_sparse_programming\]](#eq:sufficient_condition_error_bound_sparse_programming){reference-type="eqref"
reference="eq:sufficient_condition_error_bound_sparse_programming"}. The following example shows that [\[eq:sufficient_condition_error_bound_sparse_programming\]](#eq:sufficient_condition_error_bound_sparse_programming){reference-type="eqref" reference="eq:sufficient_condition_error_bound_sparse_programming"} can, indeed, be stronger than SSOSC. **Example 41**. We consider [\[eq:sparse_programming\]](#eq:sparse_programming){reference-type="eqref" reference="eq:sparse_programming"} for the functions $f \colon \mathbb{R}^2 \to \mathbb{R}$ and $c \colon \mathbb{R}^2 \to \mathbb{R}^2$ given by $$f(x):=\frac12(x_1-x_2)^2+x_1-x_2, \quad c(x):=\begin{pmatrix}x_1-x_2\\x_1+x_2\end{pmatrix},$$ and choose $\bar x$ to be the origin in $\mathbb{R}^2$. Note that $I^0(\bar x)=\{1,2\}$ and $\Lambda(\bar x)=\{(-1,0)\}$, i.e., $\bar y :=(-1,0)$ is the uniquely determined multiplier in this situation. As the critical cone $\mathcal C(\bar x)$ reduces to the origin, SSOSC at $\bar x$ for $\bar y$ is trivially satisfied. Observe that we have $$\nabla_{xx}^2\mathcal{L}(\bar x,\bar y) = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},$$ and for $\bar u:=(1,1)$ and $\bar\eta:=(0,0)$, we find $\bar \eta\in D(\partial\Vert \cdot \Vert_0)(c(\bar x),\bar y)(c^\prime(\bar x)\bar u)$ from [\[eq:tangent_cone_ell0_quasi_norm\]](#eq:tangent_cone_ell0_quasi_norm){reference-type="eqref" reference="eq:tangent_cone_ell0_quasi_norm"}. Furthermore, $\nabla_{xx}^2\mathcal{L}(\bar x,\bar y)\bar u+c^\prime(\bar x)^\top\bar\eta=0$ is valid. 
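Both displayed facts can be double-checked with a few lines of pure Python (a verification sketch; the paper's indices $1,2$ become $0,1$ here):

```python
# Example data: f(x) = 0.5*(x1 - x2)**2 + x1 - x2, c(x) = (x1 - x2, x1 + x2)
grad_f = [1.0, -1.0]                # gradient of f at the origin
Jc = [[1.0, -1.0], [1.0, 1.0]]      # Jacobian c'(xbar); row i is the gradient of c_i
ybar = [-1.0, 0.0]                  # the unique multiplier

# M-stationarity: gradient of f plus c'(xbar)^T ybar equals zero
res = [grad_f[j] + sum(Jc[i][j] * ybar[i] for i in range(2)) for j in range(2)]

# c is linear, so the Hessian of the Lagrangian equals the Hessian of f
H = [[1.0, -1.0], [-1.0, 1.0]]
ubar, etabar = [1.0, 1.0], [0.0, 0.0]
lhs = [sum(H[j][l] * ubar[l] for l in range(2))
       + sum(Jc[i][j] * etabar[i] for i in range(2)) for j in range(2)]

# both res and lhs come out as [0.0, 0.0], confirming the two displayed facts
```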
Hence, [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"} does not hold, and this also shows that the stronger condition [\[eq:some_new_second_order_condition_for_error_bound\]](#eq:some_new_second_order_condition_for_error_bound){reference-type="eqref" reference="eq:some_new_second_order_condition_for_error_bound"} fails---the latter being equivalent to [\[eq:sufficient_condition_error_bound_sparse_programming\]](#eq:sufficient_condition_error_bound_sparse_programming){reference-type="eqref" reference="eq:sufficient_condition_error_bound_sparse_programming"} in the present setting. ## Complementarity-constrained optimization Let $m :=2p$ for some $p\in\mathbb{N}$ and consider the special situation $g :=\mathop{\mathrm{\delta}}_{C_\textup{cc}}$ where $C_\textup{cc}\subseteq\mathbb{R}^{2p}$ is given by $$C_\textup{cc} :=\{z\in\mathbb{R}^{2p}\,|\,\forall i\in\{1,\ldots,p\}\colon\,0\leq z_i\perp z_{p+i}\geq 0\},$$ i.e., $C_\textup{cc}$ is the standard complementarity set. Problem [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"}, thus, reduces to $$\tag{MPCC}\label{eq:MPCC} \mathop{\mathrm{minimize}}_x {}\quad{} f(x) {}\quad{} \mathop{\mathrm{subject~to}} {}\quad{} c(x) \in C_\textup{cc},$$ a *mathematical problem with complementarity constraints* (MPCC), see the classical monographs [@LuoPangRalph1996; @OutrataKocvaraZowe1998]. Note that standard inequality and equality constraints can be added without any difficulty and are omitted here for brevity of presentation. 
For a feasible point $\bar x\in\mathbb{R}^n$ of [\[eq:MPCC\]](#eq:MPCC){reference-type="eqref" reference="eq:MPCC"}, we make use of the index sets $$\begin{aligned} I^{+0}(\bar x)& :=\{i\in\{1,\ldots,p\}\,|\,c_i(\bar x)>0,\,c_{p+i}(\bar x)=0\},\\ I^{0+}(\bar x)& :=\{i\in\{1,\ldots,p\}\,|\,c_i(\bar x)=0,\,c_{p+i}(\bar x)>0\},\\ I^{00}(\bar x)& :=\{i\in\{1,\ldots,p\}\,|\,c_i(\bar x)=0,\,c_{p+i}(\bar x)=0\},\end{aligned}$$ which provide a disjoint partition of $\{1,\ldots,p\}$. As we have $$\begin{aligned} \partial g(c(\bar x)) = \partial^\infty g(c(\bar x)) &= N_{C_\textup{cc}}(c(\bar x)) \\ &= \left\{ y\in\mathbb{R}^{2p}\,\middle|\, \begin{aligned} &\forall i\in I^{+0}(\bar x)\colon&&y_i=0\\ &\forall i\in I^{0+}(\bar x)\colon&&y_{p+i}=0\\ &\forall i\in I^{00}(\bar x)\colon&&(y_i\leq 0\,\land\,y_{p+i}\leq 0)\,\lor\,y_iy_{p+i}=0 \end{aligned} \right\}, \end{aligned}$$ we can specify the precise meaning of the CQ [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"}. Note that, as $\mathop{\mathrm{\delta}}_{C_\textup{cc}}$ is continuous on its closed domain $C_\textup{cc}$, [\[eq:metric_regularity_CQ\]](#eq:metric_regularity_CQ){reference-type="eqref" reference="eq:metric_regularity_CQ"} can be used in [Proposition 36](#lem:ErrorBound:convergence_vanishing_error){reference-type="ref" reference="lem:ErrorBound:convergence_vanishing_error"}. We also note that $$\begin{aligned} \widehat N_{C_\textup{cc}}(c(\bar x)) = \left\{ y\in\mathbb{R}^{2p}\,\middle|\, \begin{aligned} &\forall i\in I^{+0}(\bar x)\colon&&y_i=0\\ &\forall i\in I^{0+}(\bar x)\colon&&y_{p+i}=0\\ &\forall i\in I^{00}(\bar x)\colon&& y_i\leq 0,\,y_{p+i}\leq 0 \end{aligned} \right\}.\end{aligned}$$ Let us now assume that $\bar x$ is an M-stationary point of [\[eq:MPCC\]](#eq:MPCC){reference-type="eqref" reference="eq:MPCC"}.
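The displayed characterization of $N_{C_{\textup{cc}}}(c(\bar x))$ translates directly into a componentwise membership test. A sketch with a tolerance for the sign comparisons (the concrete data below is hypothetical; indices are zero-based):

```python
def mpcc_index_sets(z, p, tol=1e-12):
    """Partition {0, ..., p-1} according to the complementarity pattern of z."""
    Ip0 = [i for i in range(p) if z[i] > tol and abs(z[p + i]) <= tol]
    I0p = [i for i in range(p) if abs(z[i]) <= tol and z[p + i] > tol]
    I00 = [i for i in range(p) if abs(z[i]) <= tol and abs(z[p + i]) <= tol]
    return Ip0, I0p, I00

def in_limiting_normal_cone(y, z, p, tol=1e-12):
    """Check y in N_{C_cc}(z) via the displayed characterization."""
    Ip0, I0p, I00 = mpcc_index_sets(z, p, tol)
    ok_p0 = all(abs(y[i]) <= tol for i in Ip0)
    ok_0p = all(abs(y[p + i]) <= tol for i in I0p)
    ok_00 = all((y[i] <= tol and y[p + i] <= tol) or abs(y[i] * y[p + i]) <= tol
                for i in I00)
    return ok_p0 and ok_0p and ok_00

# z = c(xbar) = (1, 0, 0, 0) with p = 2: index 0 lies in I^{+0}, index 1 in I^{00}
in_cone = in_limiting_normal_cone([0.0, 3.0, 5.0, 0.0], [1.0, 0.0, 0.0, 0.0], 2)
not_in = in_limiting_normal_cone([1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], 2)
# in_cone is True, not_in is False
```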
Some calculations show that the associated critical cone is given by $$\begin{aligned} \mathcal C(\bar x) = \left\{ u\in\mathbb{R}^n\,\middle|\, \begin{aligned} &&&f^\prime(\bar x)u\leq 0\\ &\forall i\in I^{+0}(\bar x)\colon&&c^\prime_{p+i}(\bar x)u=0\\ &\forall i\in I^{0+}(\bar x)\colon&&c^\prime_i(\bar x)u=0\\ &\forall i\in I^{00}(\bar x)\colon&&0\leq c^\prime_i(\bar x)u\perp c^\prime_{p+i}(\bar x)u\geq 0 \end{aligned} \right\},\end{aligned}$$ see [@BenkoMehlitz2023 Sec. 5.1]. If $\Lambda(\bar x)\cap\widehat{N}_{C_{\textup{cc}}}(c(\bar x))$ is nonempty, i.e., if $\bar x$ is so-called strongly stationary, a simplified representation of the critical cone is available which no longer involves $\nabla f(\bar x)$ but depends on a multiplier $y\in\Lambda(\bar x)\cap\widehat{N}_{C_{\textup{cc}}}(c(\bar x))$ and is given by $$\begin{aligned} \mathcal C(\bar x) = \left\{ u\in\mathbb{R}^n\,\middle|\, \begin{aligned} &\forall i\in I^{+0}(\bar x)\colon&&c^\prime_{p+i}(\bar x)u=0\\ &\forall i\in I^{0+}(\bar x)\colon&&c^\prime_i(\bar x)u=0\\ &\forall i\in I^{00}_{--}(\bar x,y)\colon&&c^\prime_i(\bar x)u=0,\,c^\prime_{p+i}(\bar x)u= 0\\ &\forall i\in I^{00}_{-0}(\bar x,y)\colon&&c^\prime_i(\bar x)u= 0,\,c^\prime_{p+i}(\bar x)u\geq 0\\ &\forall i\in I^{00}_{0-}(\bar x,y)\colon&&c^\prime_i(\bar x)u\geq 0,\,c^\prime_{p+i}(\bar x)u=0\\ &\forall i\in I^{00}_{*}(\bar x,y)\colon&&0\leq c^\prime_i(\bar x)u\perp c^\prime_{p+i}(\bar x)u\geq 0 \end{aligned} \right\},\end{aligned}$$ see [@mehlitz2020 Lem. 4.1]. Here, we used $$\begin{aligned} I^{00}_{--}(\bar x,y)& :=\{i\in I^{00}(\bar x)\,|\,y_i<0,\,y_{p+i}<0\},& I^{00}_{-0}(\bar x,y)& :=\{i\in I^{00}(\bar x)\,|\,y_i<0,\,y_{p+i}=0\},&\\ I^{00}_{0-}(\bar x,y)& :=\{i\in I^{00}(\bar x)\,|\,y_i=0,\,y_{p+i}<0\},& I^{00}_{*}(\bar x,y)& :=\{i\in I^{00}(\bar x)\,|\,y_i=0,\,y_{p+i}=0\},&\end{aligned}$$ which provide a disjoint partition of $I^{00}(\bar x)$.
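For a multiplier $y$ with nonpositive entries on $I^{00}(\bar x)$, as in the strongly stationary case considered here, the four subsets can be read off componentwise (a sketch on hypothetical data, zero-based indices):

```python
def partition_I00(I00, y, p, tol=1e-12):
    """Split I^{00}(xbar) by the signs of (y_i, y_{p+i}); assumes both entries
    are nonpositive on I^{00}, as for y in the regular normal cone."""
    mm = [i for i in I00 if y[i] < -tol and y[p + i] < -tol]              # I^{00}_{--}
    m0 = [i for i in I00 if y[i] < -tol and abs(y[p + i]) <= tol]         # I^{00}_{-0}
    zm = [i for i in I00 if abs(y[i]) <= tol and y[p + i] < -tol]         # I^{00}_{0-}
    star = [i for i in I00 if abs(y[i]) <= tol and abs(y[p + i]) <= tol]  # I^{00}_{*}
    return mm, m0, zm, star

# hypothetical: p = 3, I^{00} = {0, 1, 2}, y = (-1, -1, 0, -1, 0, 0)
mm, m0, zm, star = partition_I00([0, 1, 2], [-1.0, -1.0, 0.0, -1.0, 0.0, 0.0], 3)
# mm == [0], m0 == [1], zm == [], star == [2]
```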
Following the arguments provided at the end of [@BenkoMehlitz2023 Sec. 3.1], we find $$\begin{aligned} \widehat{N}^\textup{p}_{C_\textup{cc}}(c(\bar x),c^\prime(\bar x)u) &= \widehat N_{T_{C_\textup{cc}}(c(\bar x))}(c^\prime(\bar x)u) \\ &= \left\{ y\in\mathbb{R}^{2p}\,\middle|\, \begin{aligned} &\forall i\in I^{+0}(\bar x)\cup I^{00}_{+0}(\bar x,u)\colon&&y_i=0\\ &\forall i\in I^{0+}(\bar x)\cup I^{00}_{0+}(\bar x,u)\colon&&y_{p+i}=0\\ &\forall i\in I^{00}_{00}(\bar x,u)\colon&& y_i\leq 0,\,y_{p+i}\leq 0 \end{aligned} \right\}\end{aligned}$$ for each $u\in\mathcal C(\bar x)$, where we made use of a disjoint partition of $I^{00}(\bar x)$ given by $$\begin{aligned} I^{00}_{+0}(\bar x,u)& :=\{i\in I^{00}(\bar x)\,|\,c^\prime_i(\bar x)u>0,\,c^\prime_{p+i}(\bar x)u=0\},\\ I^{00}_{0+}(\bar x,u)& :=\{i\in I^{00}(\bar x)\,|\,c^\prime_i(\bar x)u=0,\,c^\prime_{p+i}(\bar x)u>0\},\\ I^{00}_{00}(\bar x,u)& :=\{i\in I^{00}(\bar x)\,|\,c^\prime_i(\bar x)u=0,\,c^\prime_{p+i}(\bar x)u=0\}.\end{aligned}$$ Hence, we have $$\Lambda(\bar x,u) = \left\{y\in\Lambda(\bar x)\,\middle|\, \begin{aligned} &\forall i\in I^{00}_{+0}(\bar x,u)\colon&&y_i=0\\ &\forall i\in I^{00}_{0+}(\bar x,u)\colon&&y_{p+i}=0\\ &\forall i\in I^{00}_{00}(\bar x,u)\colon&& y_i\leq 0,\,y_{p+i}\leq 0 \end{aligned} \right\}.$$ Thus, due to [@BenkoMehlitz2023 Thm 5.4], WSOSC can be stated in the form $$\forall u\in\mathcal C(\bar x)\setminus\{0\},\,\exists y\in\Lambda(\bar x,u)\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,y)[u,u]>0.$$ As remarked directly after [Definition 16](#def:SOSC){reference-type="ref" reference="def:SOSC"}, any multiplier $\bar y\in\Lambda(\bar x)$ suitable to appear in SSOSC necessarily belongs to $\bigcap_{u\in\mathcal C(\bar x)\setminus\{0\}}\widehat{N}^\textup{p}_{C_\textup{cc}}(c(\bar x),c^\prime(\bar x)u)$, and for any such multiplier $\bar y$, $\mathrm d^2\mathop{\mathrm{\delta}}_{C_\textup{cc}}(c(\bar x),\bar y)(c^\prime(\bar x)u)$ vanishes, see [@BenkoMehlitz2023 Lem. 3.2, Prop. 3.6].
Hence, SSOSC takes the form $$\exists \bar y\in\Lambda(\bar x)\cap \bigcap_{u\in\mathcal C(\bar x)\setminus\{0\}}\widehat{N}^\textup{p}_{C_\textup{cc}}(c(\bar x),c^\prime(\bar x)u),\, \forall u\in\mathcal C(\bar x)\setminus\{0\}\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]>0.$$ It follows from [@Gfrerer2014 Lem. 3.2] that this is a less restrictive assumption than the standard second-order sufficient condition for [\[eq:MPCC\]](#eq:MPCC){reference-type="eqref" reference="eq:MPCC"} which takes the form $$\exists \bar y\in\Lambda(\bar x)\cap\widehat{N}_{C_{\textup{cc}}}(c(\bar x)),\, \forall u\in\mathcal C(\bar x)\setminus\{0\}\colon\quad \nabla^2_{xx}\mathcal{L}(\bar x,\bar y)[u,u]>0$$ and is based on a strongly stationary point. A detailed study of the relationship between WSOSC and SSOSC as well as other MPCC-tailored second-order optimality conditions is beyond the scope of this paper, see e.g. [@Gfrerer2014; @GuoLinYe2013] for an overview. The graphical derivative of the limiting normal cone mapping associated with $C_\textup{cc}$ has been computed recently in [@BenkoMehlitz2022a Sec. 5.2], and the obtained formulas can be used to specify the CQs [\[eq:strong_metric_subregularity\]](#eq:strong_metric_subregularity){reference-type="eqref" reference="eq:strong_metric_subregularity"}, [\[eq:CQ_uniqueness_of_multiplier\]](#eq:CQ_uniqueness_of_multiplier){reference-type="eqref" reference="eq:CQ_uniqueness_of_multiplier"}, [\[eq:a_really_crucial_CQ\]](#eq:a_really_crucial_CQ){reference-type="eqref" reference="eq:a_really_crucial_CQ"}, and [\[eq:some_new_second_order_condition_for_error_bound\]](#eq:some_new_second_order_condition_for_error_bound){reference-type="eqref" reference="eq:some_new_second_order_condition_for_error_bound"} in the present setting.
# Concluding remarks {#sec:conclusions} The results in this paper could be extended to cover the extra feature in [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} of a geometric *convex* constraint $x \in X$, which was not included here for reasons of exposition. It remains unclear, instead, how to address such an additional constraint with nonconvex $X$, other than by reformulating it into [\[eq:P\]](#eq:P){reference-type="eqref" reference="eq:P"} and accepting $x\in X$ as a soft constraint, see [@demarchi2023implicit Rem. 4.1]. Another challenging question is whether it is possible to dispose of the additional constraint qualification in the nonconvex polyhedral case (i.e., $\mathop{\mathrm{epi}}g$ being the union of finitely many convex polyhedra) in the analysis of [5](#sec:ALM){reference-type="ref" reference="sec:ALM"}. Such a result would yield convergence rates merely via some second-order sufficient conditions and the (upper) error bound, generalizing [@FernandezSolodov2012]. In specific situations, this should be possible even in the nonpolyhedral setting, as [@HangSarabi2021] has shown for the case of convex linear-quadratic $g$. As already pointed out in [1](#sec:introduction){reference-type="ref" reference="sec:introduction"}, such a generalization does not seem to be available in nonpolyhedral settings. Future research may also focus on the relationships between the proximal point algorithm and the augmented Lagrangian method in the fully nonconvex setting, in the vein of [@rockafellar1976augmented; @rockafellar2022convergence], and investigate saddle-point properties of the augmented Lagrangian function in primal-dual terms as in [@steck2018dissertation]. [^1]: University of the Bundeswehr Munich, Department of Aerospace Engineering, Institute of Applied Mathematics and Scientific Computing, 85577 Neubiberg, Germany. [email]{.smallcaps} <alberto.demarchi@unibw.de>, [orcid]{.smallcaps} [0000-0002-3545-6898](https://orcid.org/0000-0002-3545-6898).
[^2]: Brandenburg University of Technology Cottbus-Senftenberg, Institute of Mathematics, 03046 Cottbus, Germany . [email]{.smallcaps} <patrick.mehlitz@b-tu.de>, [orcid]{.smallcaps} [0000-0002-9355-850X](https://orcid.org/0000-0002-9355-850X).
{ "id": "2309.01980", "title": "Local properties and augmented Lagrangians in fully nonconvex composite\n optimization", "authors": "Alberto De Marchi, Patrick Mehlitz", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | A group $G$ is called residually finite if for every non-trivial element $g \in G$, there exists a finite quotient $Q$ of $G$ such that the element $g$ is non-trivial in the quotient as well. Instead of just investigating whether a group satisfies this property, a new perspective is to quantify residual finiteness by studying the minimal size of the finite quotient $Q$ depending on the complexity of the element $g$, for example by using the word norm $\|g\|_G$ if the group $G$ is assumed to be finitely generated. The residual finiteness growth $\mathop{\mathrm{RF}}_G: \mathbb{N}\to \mathbb{N}$ is then defined as the smallest function such that if $\|g\|_G \leq r$, there exists a morphism $\varphi: G \to Q$ to a finite group $Q$ with $|Q| \leq \mathop{\mathrm{RF}}_G(r)$ and $\varphi(g) \neq e_Q$. Although upper bounds have been established for several classes of groups, exact asymptotics for the function $\mathop{\mathrm{RF}}_G$ are only known for very few groups such as abelian groups, the Grigorchuk group and certain arithmetic groups. In this paper, we show that the residual finiteness growth of virtually abelian groups equals $\log^k$ for some $k \in \mathbb{N}$, where the value $k$ is given by an explicit expression. As an application, we show that for every $m \geq 1$ and every $1 \leq k \leq m$, there exists a group $G$ containing a normal abelian subgroup of rank $m$ and with $\mathop{\mathrm{RF}}_G \approx \log^k$. author: - "Jonas Deré and Joren Matthys[^1]" bibliography: - Dere_Matthys_virtually_abelian.bib title: Residual Finiteness Growth in Virtually Abelian Groups --- # Introduction In a residually finite group $G$ there exists by definition for every non-trivial element $g\in G$ a group morphism $\varphi: G \to Q$ to a finite group $Q$ such that $\varphi(g)$ is still non-trivial. 
Examples of such groups include finite groups, free groups, finitely generated virtually nilpotent and more generally polycyclic groups, finitely generated linear groups, and fundamental groups of compact 3-manifolds. As every finitely generated residually finite group is Hopfian, the non-Hopfian Baumslag-Solitar group $BS(2,3)$ is, on the other hand, a non-example. Quite recently, the quantification of this property for finitely generated groups $G$ was initiated by Bou-Rabee in his paper [@bou2010quantifying]. More precisely, Bou-Rabee studied the asymptotic behavior of the (normal) residual finiteness growth $\mathop{\mathrm{RF}}_G: \mathbb{N}\to \mathbb{N}$, i.e. the minimal function such that if $\|g\|_G \leq r$, then $Q$ exists as above with $|Q| \leq \mathop{\mathrm{RF}}_G(r)$. Here $\|\cdot\|_G$ denotes a fixed word norm on $G$, induced by a finite generating set $S$, so satisfying $\|g\|_G \leq r$ if and only if $g$ can be written as a product of at most $r$ elements in $S \cup S^{-1}$. Although needed for the definition, the exact generating set $S$ does not play a role when considering $\mathop{\mathrm{RF}}_G$ up to a certain equivalence relation, see Section [2](#sec_resid_fin_growth){reference-type="ref" reference="sec_resid_fin_growth"} for more details. Since this first paper in 2010, the group invariant $\mathop{\mathrm{RF}}_G$ has been studied for different classes of residually finite groups. In several cases upper bounds have been found, for example in the case of linear groups in [@franz2017quantifying], $S$-arithmetic subgroups of higher rank Chevalley groups in [@bou2012quantifying], the first Grigorchuk group [@bou2010quantifying], free groups [@bradford2019short], lamplighter groups [@bou2019residual], ... However, exact bounds are rare in the literature, most strikingly illustrated by the fact that the exact asymptotic behavior for free groups is still unknown.
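For $G = \mathbb{Z}$ all of this is explicit: with generating set $S = \{1\}$ the word norm of $n$ is $|n|$, every finite quotient factors through some $\mathbb{Z}/m\mathbb{Z}$, and detecting $n \neq 0$ amounts to finding the smallest non-divisor of $n$. The following sketch computes $\mathop{\mathrm{RF}}_{\mathbb{Z}}$ by brute force and illustrates its logarithmic behavior:

```python
def smallest_nondivisor(n):
    """Order of the smallest quotient Z/mZ in which n stays non-trivial."""
    m = 2
    while n % m == 0:
        m += 1
    return m

def rf_Z(r):
    """RF_Z(r): worst case over all non-trivial elements of word norm at most r."""
    return max(smallest_nondivisor(n) for n in range(1, r + 1))

values = [rf_Z(r) for r in (1, 2, 6, 12, 60, 420)]
# values == [2, 3, 4, 5, 7, 8]
```

The extremal elements are $n = \operatorname{lcm}(1,\ldots,k)$, which have size roughly $e^k$ yet are only detected in quotients of size about $k$; this is the mechanism behind $\mathop{\mathrm{RF}}_G \approx \log$ for infinite abelian groups.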
In the same light, the exact connection between the residual finiteness growth of groups and certain group constructions remains largely unknown, for instance when passing to the automorphism group, or when forming semi-direct products or wreath products. In the case of taking finitely generated subgroups $K$ of $G$, it has been shown in [@bou2010quantifying] that $\mathop{\mathrm{RF}}_K$ is bounded above by $\mathop{\mathrm{RF}}_G$. Furthermore, if $G$ is a finite extension of $K$ with index $n = [G:K]$, then $\mathop{\mathrm{RF}}_G$ is itself bounded above by $(\mathop{\mathrm{RF}}_K)^n$. We refer to the survey article [@survey2022] for a more detailed discussion about the known results and the relation to other residual properties such as conjugacy separability. An interesting result in this context is the following from [@bou2011asymptotic]. In this paper, the authors characterize the virtually nilpotent groups within the class of linear groups as exactly those groups $G$ for which $\mathop{\mathrm{RF}}_G$ is bounded by $\log^k$ for some $k\geq 0$. This raises the following question with respect to the exact asymptotics of $\mathop{\mathrm{RF}}_G$ for these groups. **Question 1**. Let $G$ be a finitely generated virtually nilpotent group. Does there exist a $k \geq 0$ with $\mathop{\mathrm{RF}}_G$ equal to $\log^k$? Note that the residual finiteness growth is bounded if and only if the group is finite, and if the group contains an infinite cyclic group, then $\log \preceq \mathop{\mathrm{RF}}_G$. So far, Question [Question 1](#Q1){reference-type="ref" reference="Q1"} has only been answered for abelian groups, where $\mathop{\mathrm{RF}}_G$ is either bounded (so $\log^0$) for the finite ones or $\log$ for the infinite ones, respectively. The answer is also positive for all examples of nilpotent groups for which $\mathop{\mathrm{RF}}_{G}$ has been computed, as for instance the Heisenberg group for which $\mathop{\mathrm{RF}}_G$ equals $\log^3$.
In this paper, we give a positive answer to Question [Question 1](#Q1){reference-type="ref" reference="Q1"} for all virtually abelian groups. **Theorem 1**. *Let $G$ be a finitely generated virtually abelian group, then $\mathop{\mathrm{RF}}_G$ equals $\log^k$ for some $k\geq 0$.* Moreover, this $k$ has an explicit form, depending on an induced integral representation. Indeed, take any abelian torsion-free normal subgroup $K \triangleleft~G$ of finite index, then the finite group $H = G/K$ acts via conjugation on $K$, leading to a representation $\varphi: H \to \mathop{\mathrm{Aut}}(K) \cong \mathop{\mathrm{GL}}(m,\mathbb{Z})$ with $m$ the rank of $K$. In the case of crystallographic groups, i.e. discrete and cocompact subgroups of the isometry group of Euclidean space, the representation $\varphi$ is equal to the holonomy representation, which gets its name from its geometric interpretation. Over the complex numbers $\mathbb{C}$, the representation $\varphi$ decomposes into irreducible subrepresentations of dimension $\leq m$. We establish the following refinement of the previous theorem: **Theorem 2**. *Let $G$ be a finitely generated virtually abelian group with a torsion-free abelian normal subgroup of finite index and rank $m$. Then $\mathop{\mathrm{RF}}_G$ equals $\log^k$, where $0 \leq k \leq m$ is the maximal dimension of the irreducible subrepresentations of $\varphi$ over $\mathbb{C}$.* Moreover, for every $m\geq 1$ we give an example showing that every $1 \leq k \leq m$ is possible. This result can also be interpreted as a specialization of the general bounds for finitely generated subgroups described above. Indeed, the group $G$ is a finite extension of an abelian subgroup $K$ of rank $m\geq 1$, and hence the results above assert that $\mathop{\mathrm{RF}}_G$ is bounded below and above by $\log$ and $\log^n$ respectively, with $n = [G:K]$. However, now we know it equals $\log^k$ for some $1\leq k \leq \min\{m,n\}$, with various examples where the general bounds are not optimal. 
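To illustrate the statement, consider the semidirect product $G = \mathbb{Z}^2 \rtimes \mathbb{Z}/4$, where the generator of $H = \mathbb{Z}/4$ acts on $K = \mathbb{Z}^2$ by a rotation of order $4$; this example is our own choice, not one from the later sections. Over $\mathbb{C}$ the representation splits into the two eigenlines of the rotation matrix, both one-dimensional, so the theorem gives $k = 1$ and $\mathop{\mathrm{RF}}_G$ equal to $\log$. A minimal check of the eigenvalues:

```python
import cmath

# Generator of the Z/4-action on Z^2: rotation by 90 degrees.
A = [[0, -1],
     [1, 0]]

# Characteristic polynomial x^2 - tr(A) x + det(A); its two roots are the
# eigenvalues, i.e. the one-dimensional C-subrepresentations.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigenvalues = [(tr + disc) / 2, (tr - disc) / 2]
```

The eigenvalues come out as $\pm i$, two distinct one-dimensional eigenlines, so the maximal dimension of an irreducible $\mathbb{C}$-subrepresentation is $1$.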
The outline of this paper is as follows. In section [2](#sec_resid_fin_growth){reference-type="ref" reference="sec_resid_fin_growth"}, we will introduce the notion of residual finiteness growth with respect to families of normal subgroups, as this will be crucial later. Next, in section [3](#sec_virt_ab){reference-type="ref" reference="sec_virt_ab"}, we will define virtually abelian groups, and we will make some preliminary considerations concerning their residual finiteness growth. Following this initial discussion, sections [4](#sec_upper){reference-type="ref" reference="sec_upper"} and [5](#sec_lower){reference-type="ref" reference="sec_lower"} will focus on the upper and the lower bound of the main result, Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"}. In order to do so, section [4](#sec_upper){reference-type="ref" reference="sec_upper"} will give more background about linear representations and section [5](#sec_lower){reference-type="ref" reference="sec_lower"} about matrices commuting with such a representation. In the last subsection of section [5](#sec_lower){reference-type="ref" reference="sec_lower"}, the main result will then be proven. Lastly, we will conclude this article with some applications and open questions in sections [6](#sec_applic){reference-type="ref" reference="sec_applic"} and [7](#sec_open_questions){reference-type="ref" reference="sec_open_questions"}. # Residual finiteness growth {#sec_resid_fin_growth} In this section, we introduce the concept of residual finiteness growth and we prove some of its properties. However, as this will be important for future results, we will work in the more general context of an arbitrary family $P$ of normal subgroups of $G$, in a similar flavor to [@bou2015residual]. Every group under consideration is assumed to be finitely generated and residually finite. 
For these groups, we have a word norm $\|\cdot\|_G$ on $G$, which measures the number of generators needed to write an element of $G$. As the only information we need from our metric is the collection of balls of a given radius, we will include this information in our definitions. **Definition 3**. For a group $G$ with finite generating set $S$, we define the ball $B_G(r)$ of radius $r > 0$ as $$\begin{aligned} B_G(r) &= \left\{g\in G \mid \|g\|_G \leq r \right\} \\ &= \left\{s_1^{\pm1} \ldots s_k^{\pm1} \mid s_i \in S, k \leq r \right\}. \end{aligned}$$ As we will see in Definition [Definition 12](#df_resid_fin){reference-type="ref" reference="df_resid_fin"}, residual finiteness growth will be defined in such a way that it is independent of the choice of word metric. However, for the time being, we keep the balls $B_G$ in our definitions. **Definition 4**. We write $\nu(G) \subset \mathcal{P}(G)$ for the set of all finite index, normal subgroups of $G$. Note that a group $G$ is residually finite if and only if $\displaystyle \bigcap_{N\in \nu(G)} N = \left\{e_G\right\}$. However, for the remainder it will be important to build the theory for more general subsets $P \subset \nu(G)$. **Definition 5**. Let $P$ be a subset of $\nu(G)$ for which $\displaystyle \bigcap_{N\in P}N = \left\{e_G\right\}$. Define the divisibility function $D_{G,P}: G \to \mathbb{N}\cup \left\{\infty \right\}$ with respect to $G$ and $P$ as $$D_{G,P}(g) = \begin{cases} \min\left\{[G:N]\mid g\notin N, N \in P \right\} & \text{for } g \neq e_G,\\ \infty & \text{for } g = e_G. 
\end{cases}$$ and the residual finiteness growth $\mathop{\mathrm{RF}}_{G,P,B_G}: \mathbb{R}_{\geq 1} \to \mathbb{N}$ with respect to $G$, $P$ and $B_G$ as $$\mathop{\mathrm{RF}}_{G,P,B_G}(r)=\max\left\{D_{G,P}(g)\mid e_G \neq g \in B_G(r) \right\}.$$ By definition we let $D_{G,P}(e_G) = \infty$, as this will be convenient for writing certain upper bounds as in Lemma [Lemma 14](#lem_direct_sum){reference-type="ref" reference="lem_direct_sum"}. From now on, we will always assume that our subsets $P \subset \nu(G)$ satisfy $\displaystyle \bigcap_{N\in P}N = \left\{e_G\right\}$. In particular, the set $P$ is always non-empty. We will study the function $\mathop{\mathrm{RF}}_{G,P,B_G}$ up to the following preorder and corresponding equivalence relation: **Definition 6**. Let $f,g: \mathbb{R}_{\geq 1} \to \mathbb{R}_{\geq 1}$ be increasing functions. We define the following relations: $$\begin{aligned} f &\preceq g \Leftrightarrow \exists C >0: \forall r \geq \max\{1,1/C\}: f(r) \leq Cg(Cr)\\ f&\approx g \Leftrightarrow f\preceq g \text{ and } g \preceq f\end{aligned}$$ We will give some lemmas concerning the behavior of $\mathop{\mathrm{RF}}_{G,P,B_G}$ and $D_{G,P}$ with respect to changing $G$, $P$ or the metric balls $B_G$. It should be noted that some of the results in this section still hold in a more general setting, for example when allowing for normal subgroups of infinite index in $\nu(G)$ or when relaxing the condition that $\displaystyle \bigcap_{N\in P}N = \left\{e_G \right\}$. We introduce the following operations on subsets $P \subset \nu(G)$. **Definition 7**. - Let $K\leq G$ and $P \subset \nu(G)$. We define the intersection as $$P\cap K = \left\{N\cap K \mid N \in P \right\} \subset \nu(K).$$ Note that, if $K$ is a normal subgroup of $G$, every $N \cap K$ is normal in $G$ as well, and thus we can also consider $P \cap K$ as a subset of $\nu(G)$. It will be clear from the context which convention we use. - Let $P_1\subset \nu(G_1)$ and $P_2 \subset \nu(G_2)$. 
The direct sum is defined as $$P_1\oplus P_2 = \left\{N_1\oplus N_2\mid N_1\in P_1, N_2\in P_2 \right\} \subset \nu(G_1\oplus G_2).$$ - Let $\psi: G_1 \to G_2$ be a homomorphism and $P\subset \nu(G_2)$. We define the inverse image of $P$ as $$\psi^{-1}(P) = \left\{\psi^{-1}(N)\mid N\in P \right\} \subset \nu(G_1).$$ Note that the inverse image of a normal subgroup of finite index is again normal of finite index. It should be noted that the first two constructions above preserve the assumption that $\displaystyle \bigcap_{N\in P}N = \left\{e_G\right\}$, as the reader can check. For example, in the first case, we have that $$\displaystyle \bigcap_{N\in P \cap K}N = \left(\bigcap_{N\in P}N \right) \cap K= \left\{e_G\right\} \cap K = \left\{e_G\right\}.$$ However, the last construction does not preserve this property, as the kernel of $\psi$ always lies in $\psi^{-1}(N)$. We will only use the latter in Lemma [Lemma 15](#lem_surjective){reference-type="ref" reference="lem_surjective"} for the divisibility function. **Lemma 8**. *Let $G$ be a group and $P, \bar{P}$ be subsets of $\nu(G)$. If there exists a constant $C>0$ such that for all $N\in P$, there exists $\bar{N} \in \bar{P}$ with $\bar{N} \subset N$ and $[G:\bar{N}] \leq C[G:N]$, then $\mathop{\mathrm{RF}}_{G, \bar{P}, B_G} \preceq \mathop{\mathrm{RF}}_{G,P,B_G}$.* *Proof.* Let $e_G \neq g \in B_G(r)$ and take $N\in P$ such that $g\notin N$ and $[G:N] = D_{G,P}(g)$. Take $\bar{N} \subset N$ as in the condition of the lemma, then we know that $g\notin \bar{N}$ and hence $$D_{G, \bar{P}}(g) \leq [G: \bar{N}] \leq C[G:N] = C\cdot D_{G,P}(g) \leq C\cdot\mathop{\mathrm{RF}}_{G,P,B_G}(r).$$ Taking the maximum over all non-trivial $g$ in $B_G(r)$ gives the result. ◻ A particular case of this lemma will be used several times in this paper: **Example 9**. 
For any group $G$ and $P \subset \bar{P} \subset \nu(G)$ with $\displaystyle \bigcap_{N\in P}N = \left\{e_G\right\}$, the conditions of Lemma [Lemma 8](#lem_change_P){reference-type="ref" reference="lem_change_P"} are automatically satisfied by taking $N = \bar{N}$ and $C=1$. We conclude that $\mathop{\mathrm{RF}}_{G,\bar{P},B_G} \preceq \mathop{\mathrm{RF}}_{G,P,B_G}$ in this case. Note that if $K$ is a finitely generated subgroup of $G$, then there always exists a constant $D>0$ such that $B_K(D\cdot r) \subset B_G(r)$. If $K$ has finite index in $G$, then $K$ is automatically finitely generated and moreover there exists a constant $C \geq 0$ such that $B_G(r)\cap K \subset B_K(C\cdot r)$, see [@loh2017geometric corollary 5.4.5]. **Lemma 10**. *Let $P\subset \nu(G)$ and suppose $K$ is a finite index subgroup of $G$, then $\mathop{\mathrm{RF}}_{K,P\cap K, B_K} \preceq \mathop{\mathrm{RF}}_{G,P,B_G}$. If moreover every $N\in P$ satisfies $N \subset K$ then $\mathop{\mathrm{RF}}_{K,P\cap K, B_K} \approx \mathop{\mathrm{RF}}_{G,P,B_G}$.* The last condition means that we can consider $P \subset \nu(K)$ as well, as every element lies entirely in $K$. Of course, not every element of $\nu(K)$ lies in $\nu(G)$. *Proof.* Take $D>0$ such that $B_K(D\cdot r) \subset B_G(r)$ and $e_K \neq k \in B_K(D\cdot r)$. Since $e_G \neq k \in B_G(r)$ we find $N\in P$ such that $k\notin N$ and $[G:N] = D_{G,P}(k)$. Hence we have $$D_{K,P\cap K}(k) \leq [K:K\cap N] \leq [G:N] = D_{G,P}(k) \leq \mathop{\mathrm{RF}}_{G,P,B_G}(r),$$ using that $[G:N] = [G:KN]\cdot [K:K\cap N]$. Taking the maximum over all non-trivial $k$ in $B_K(D\cdot r)$ gives the inequality $\mathop{\mathrm{RF}}_{K, P\cap K, B_K} \preceq \mathop{\mathrm{RF}}_{G,P,B_G}$. For the second statement, take $C$ such that $B_G(r)\cap K \subset B_K(C\cdot r)$. Fix a normal subgroup $N'$ of finite index in $G$, lying in $K$, which exists by our assumptions on $P$. Now take $e_G \neq g \in B_G(r)$. 
If $g \notin K$, then $D_{G,P}(g) \leq [G:N']$. If $g \in K$, and hence $g\in B_K(C\cdot r)$, we can take $N \in P$ such that $g \notin N$ and $[K:N] = D_{K,P}(g)$. We obtain $$\begin{aligned} D_{G,P}(g) & \leq [G:N] = [G:K]\cdot [K:N] \\ & = [G:K]\cdot D_{K,P}(g) \leq [G:K]\cdot \mathop{\mathrm{RF}}_{K,P,B_K}(C\cdot r). \end{aligned}$$ In total, we argued that $D_{G,P}(g) \leq \max\left\{[G:N'], [G:K]\cdot\mathop{\mathrm{RF}}_{K,P,B_K}(C\cdot r) \right\}$. As $\mathop{\mathrm{RF}}_{K,P,B_K}$ is bounded below by a positive constant, taking the maximum over all $e_G \neq g\in B_G(r)$ leads to the desired result. ◻ As a consequence, we see that the finite generating set does not play a role when studying the residual finiteness growth. **Corollary 11**. *Let $B_G$ and $B_{G}'$ be two metric balls associated to different word metrics on $G$. Then $\mathop{\mathrm{RF}}_{G,P,B_G} \approx \mathop{\mathrm{RF}}_{G,P,B_{G}'}$.* *Proof.* Apply the previous lemma to the case where $K = G$ and $B_K = B_{G}'$. ◻ This way we can define the residual finiteness growth of $G$ independently of the word metric. The metric will play no active role in the remainder of this paper, so we will no longer specify the word metrics in our results. **Definition 12**. The residual finiteness growth $\mathop{\mathrm{RF}}_G$ of a group $G$ is defined as the equivalence class of $\mathop{\mathrm{RF}}_{G, \nu(G),B_G}$. We will say that $\mathop{\mathrm{RF}}_G$ equals $f$ if $\mathop{\mathrm{RF}}_{G, \nu(G),B_G} \approx f$ for some (and hence every) finite generating set $S$. The next lemma is a combination of Lemmata [Lemma 8](#lem_change_P){reference-type="ref" reference="lem_change_P"} and [Lemma 10](#lem_reduce_group){reference-type="ref" reference="lem_reduce_group"}. **Lemma 13**. *Suppose $K$ is a normal, finite index subgroup of $G$. Let $P\subset \nu(G)$ with $G\in P$ and $\bar{P} \subset P\cap K \subset \nu(K)$. 
If there exists a constant $C>0$ such that for all $N\in P$ there exists $\bar{N} \in \bar{P}$ such that $\bar{N}\subset N$ and $[K:\bar{N}] \leq C[G:N]$, then $\mathop{\mathrm{RF}}_{G,P,B_G} \approx \mathop{\mathrm{RF}}_{K, \bar{P},B_K}$.* *Proof.* We will show the following equivalences: $$\begin{split} \mathop{\mathrm{RF}}_{G,P,B_G} & \approx \mathop{\mathrm{RF}}_{G,P\cap K,B_G} \\ & \approx \mathop{\mathrm{RF}}_{K,P\cap K,B_K} \\ & \approx \mathop{\mathrm{RF}}_{K, \bar{P},B_K}. \end{split}$$ For the first equivalence, we first show that $\mathop{\mathrm{RF}}_{G,P,B_G} \preceq \mathop{\mathrm{RF}}_{G,P\cap K,B_G}$. Take $e_G \neq g\in B_G(r)$ arbitrary, then either $g\notin K$ or $g\in K$. If $g\notin K$, then $D_{G,P}(g) \leq [G:G\cap K] = [G:K]$. In the other case, take $g\notin N\cap K$ that realizes $D_{G,P\cap K}(g)$, i.e. $[G:N\cap K] = D_{G,P\cap K}(g)$. Then it also holds that $g\notin N$. Hence, we see that $$D_{G,P}(g) \leq [G:N] \leq [G:N\cap K] = D_{G,P\cap K}(g) \leq \mathop{\mathrm{RF}}_{G,P\cap K,B_G}(r).$$ In conclusion, $D_{G,P}(g) \leq \max\left\{[G:K], \mathop{\mathrm{RF}}_{G,P\cap K,B_G}(r) \right\}$. Taking the maximum over all non-trivial $g\in B_G(r)$ gives the result. Conversely, if $N\lhd G$, we have $[G:N] = [G:NK]\cdot [K:N\cap K]$, so $[G:N] \geq [K:N\cap K]$. As a consequence, $$[G:N\cap K] = [G:K]\cdot [K:N\cap K] \leq [G:K]\cdot [G:N].$$ By Lemma [Lemma 8](#lem_change_P){reference-type="ref" reference="lem_change_P"} with $C = [G:K]$, we conclude that the first equivalence holds. The second equivalence is a direct application of Lemma [Lemma 10](#lem_reduce_group){reference-type="ref" reference="lem_reduce_group"}. For the third equivalence, we note that by Example [Example 9](#ex:sub){reference-type="ref" reference="ex:sub"} it holds that $\mathop{\mathrm{RF}}_{K,P\cap K,B_K} \preceq \mathop{\mathrm{RF}}_{K, \bar{P},B_K}$. For the converse, we wish to apply Lemma [Lemma 8](#lem_change_P){reference-type="ref" reference="lem_change_P"}. 
To do so, we claim that for every $N\cap K \in P\cap K$, there exists $\bar{N} \in \bar{P}$ with $\bar{N}\subset N\cap K$ and $[K:\bar{N}] \leq C[G:K] \cdot [K:N\cap K]$. Here, the constant in the lemma's statement is in fact $C[G:K]$. Take $N\cap K \in P\cap K$. By assumption, there exists $\bar{N} \in \bar{P}$ such that $\bar{N} \subset N$ and $[K: \bar{N}] \leq C[G:N]$. This implies that $\bar{N} \subset N\cap K \in P\cap K$ as $\bar{P} \subset P\cap K$ and $$[K: \bar{N}] \leq C[G:KN]\cdot [K:N\cap K] \leq C[G:K] \cdot [K:N\cap K].$$ This shows the claim and therefore ends the proof. ◻ We now look at how residual finiteness growth behaves with respect to direct sums. **Lemma 14**. *Let $G = G_1\oplus G_2$ be a direct sum of two groups $G_i$ and $P \subset \nu(G)$. If we write $P_i = P \cap G_i$ and assume that $\bar{P} = P_1\oplus P_2 \subset P$, $G_1 \in P_1$ and $G_2 \in P_2$, then $$\label{eq_minimum} D_{G,P}(g) \leq \min\left\{D_{G_1,P_1}(\pi_1(g)), D_{G_2,P_2}(\pi_2(g)) \right\},$$ and $$\mathop{\mathrm{RF}}_{G,P,B_G} \approx \max\left\{\mathop{\mathrm{RF}}_{G_1,P_1,B_1}, \mathop{\mathrm{RF}}_{G_2,P_2,B_2} \right\}.$$ The maps $\pi_1$ and $\pi_2$ are the natural projections from $G$ onto $G_1$ and $G_2$.* *Proof.* Suppose $G = G_1\oplus G_2$ and $\bar{P} = P_1\oplus P_2 \subset P$. We clearly have $D_{G,P}(g) \leq D_{G, \bar{P}}(g)$ by Example [Example 9](#ex:sub){reference-type="ref" reference="ex:sub"}. If $N = N_1\oplus N_2 \in \bar{P}$ is the normal subgroup such that $g \notin N$ and $[G:N] = D_{G, \bar{P}}(g)$, then either $\pi_1(g) \notin N_1$ or $\pi_2(g) \notin N_2$. Suppose that $\pi_1(g) \notin N_1$, then $g\notin N_1\oplus G_2 \in \bar{P}$ and thus $D_{G, \bar{P}}(g) = [G_1:N_1]$. Choosing $N_1$ to have minimal index in $G_1$ with $\pi_1(g) \notin N_1$ shows that $D_{G, \bar{P}}(g) \leq D_{G_1,P_1}(\pi_1(g))$. Analogously if $\pi_2(g) \notin N_2$, we get $D_{G, \bar{P}}(g) \leq D_{G_2,P_2}(\pi_2(g))$. 
From this, we conclude that $$D_{G,P}(g) \leq \min\left\{D_{G_1,P_1}(\pi_1(g)), D_{G_2,P_2}(\pi_2(g)) \right\}.$$ Now, we need to show that $$\mathop{\mathrm{RF}}_{G,P,B_G} \approx \max\left\{\mathop{\mathrm{RF}}_{G_1,P_1,B_1}, \mathop{\mathrm{RF}}_{G_2,P_2,B_2} \right\}.$$ Since the definition of $\mathop{\mathrm{RF}}$ is up to equivalence independent of the choice of metric, we take $B_G(r)$ as $$B_G(r) = \left\{g\mid \pi_1(g)\in B_{G_1}(r_1), \pi_2(g)\in B_{G_2}(r_2), r_1+r_2 \leq r \right\}.$$ These are exactly the metric balls arising from adjoining generators $S_1$ of $G_1$ with $S_2$ of $G_2$, more concretely with generating set $S = \{(s,e_{G_2}) \mid s \in S_1\} \cup \{(e_{G_1},s) \mid s \in S_2\}$ for $G_1 \oplus G_2$. On the one hand, we have for all $e_G\neq g\in B_G(r)$ that either $\pi_1(g)$ or $\pi_2(g)$ is non-trivial. By symmetry, it suffices to consider the case when $\pi_1(g) \neq e_{G_1}$. Since $\pi_1(g) \in B_{G_1}(r)$, we find that $$\begin{aligned} D_{G,P}(g) & \leq \min\left\{D_{G_1,P_1}(\pi_1(g)), D_{G_2,P_2}(\pi_2(g))\right\}\\ & \leq D_{G_1,P_1}(\pi_1(g)) \\ & \leq \mathop{\mathrm{RF}}_{G_1,P_1,B_1}(r) \\& \leq \max\left\{\mathop{\mathrm{RF}}_{G_1,P_1,B_1}(r), \mathop{\mathrm{RF}}_{G_2,P_2,B_2}(r)\right\}, \end{aligned}$$ using that $D_{G_1,P_1}(\pi_1(g)) < \infty$. This shows the first inequality. On the other hand, take $g\in B_1(r) \subset G_1$ such that $\mathop{\mathrm{RF}}_{G_1,P_1,B_1}(r) = D_{G_1,P_1}(g)$. Then for all $N\in P$ with $g\notin N\cap G_1 \in P_1$ we have $$[G:N]\geq [G_1:N\cap G_1] \geq D_{G_1,P_1}(g) = \mathop{\mathrm{RF}}_{G_1,P_1,B_1}(r),$$ or after taking the minimum over all such normal subgroups: $$D_{G,P}(g) \geq D_{G_1,P_1}(g) = \mathop{\mathrm{RF}}_{G_1,P_1,B_1}(r).$$ Hence surely, $\mathop{\mathrm{RF}}_{G,P,B_G}(r) \geq \mathop{\mathrm{RF}}_{G_1,P_1,B_1}(r)$. By symmetry, this argument also applies to $G_2$, therefore ending the proof. 
◻ Note that the conditions $G_1 \in P_1$ and $G_2 \in P_2$ make sure that the inequality [\[eq_minimum\]](#eq_minimum){reference-type="eqref" reference="eq_minimum"} holds. If those conditions are removed but $P$ is non-empty, a multiplicative constant depending on $G$ and $P$ but not on $g$ should be added. **Lemma 15**. *If $\psi: G_1 \to G_2$ is a surjective homomorphism, $P\subset \nu(G_2)$ and $\psi^{-1}(P) \subset \bar{P}$, then $D_{G_1, \bar{P}}(g) \leq D_{G_2, P}(\psi(g))$.* *Proof.* Since $\psi^{-1}(P) \subset \bar{P}$, it is immediate that $$D_{G_1, \bar{P}}(g) \leq D_{G_1, \psi^{-1}(P)}(g).$$ We will now show that $$D_{G_1, \psi^{-1}(P)}(g) \leq D_{G_2, P}(\psi(g)).$$ If $D_{G_2, P}(\psi(g)) = \infty$, then there is nothing to prove, so we can assume that $D_{G_2, P}(\psi(g)) < \infty$. Choose $N \in P$ such that $\psi(g) \notin N$ and $[G_2:N] = D_{G_2, P}(\psi(g))$, then $g\notin \psi^{-1}(N)$ and so $$\begin{aligned} D_{G_1, \psi^{-1}(P)}(g) & \leq [G_1:\psi^{-1}(N)] \\ & = [G_2:N] \\ &= D_{G_2, P}(\psi(g)). \end{aligned}$$ ◻ In particular, the previous lemma applies to the case where $\bar{P} = \nu(G_1)$ and $P = \nu(G_2)$. # Residual finiteness growth of virtually abelian groups {#sec_virt_ab} In this section, we will find a new expression for the residual finiteness growth of virtually abelian groups, using the results of the previous section for families $P$ of normal subgroups. Let us first start with a general group extension $G$, namely a group fitting in a short exact sequence $$\begin{tikzcd} 1 \arrow[r] & K \arrow[r, "i", hook] & G \arrow[r, "\pi", two heads] & H \arrow[r] & 1. 
\label{eq_short_exact} \end{tikzcd}$$ If $s: H\to G$ is a set theoretic map with the property that $\pi\circ s = \mathop{\mathrm{Id}}$, called a section, then $$\begin{aligned} \varphi: H &\to \mathop{\mathrm{Out}}(K)\\ h & \mapsto (i^{-1}\circ C_{s(h)}\circ i)\mathop{\mathrm{Inn}}(K) \end{aligned}$$ is a well-defined homomorphism, independent of the chosen section. Here $C_g: K \to K$ for $g \in G$ stands for the automorphism of $K$ given by conjugation with $g$, so $C_g(k) = g k g^{-1}$. We write $\mathop{\mathrm{Inn}}(G) = \{C_g \mid g \in G\} \subset \mathop{\mathrm{Aut}}(G)$ for the normal subgroup which contains every conjugation automorphism, and $\mathop{\mathrm{Out}}(G) = \mathop{\mathrm{Aut}}(G) / \mathop{\mathrm{Inn}}(G)$ for the quotient group of outer automorphisms. A first result in this section is Theorem [Theorem 18](#thm_finite_extension){reference-type="ref" reference="thm_finite_extension"} below, which relates $\mathop{\mathrm{RF}}_G$ to $\mathop{\mathrm{RF}}_{K,P}$ for a certain family of subgroups $P$, namely the ones invariant under the morphism $\varphi: H \to \mathop{\mathrm{Out}}(K)$. We start with the following observation. **Lemma 16**. *Assume that $\psi_1$ and $\psi_2$ are two automorphisms of $K$ such that $\psi_1 \mathop{\mathrm{Inn}}(K)= \psi_2 \mathop{\mathrm{Inn}}(K)$, then $\psi_1(N) = \psi_2(N)$ for any normal subgroup $N$ of $K$.* *Proof.* By assumption, there exists $k \in K$ such that $\psi_1 = \psi_2 \circ C_k$. For any normal subgroup $N \triangleleft K$ we have that $$\psi_1(N) = \psi_2(C_k(N)) = \psi_2(N).$$ ◻ Hence, this lemma shows that we can define $\overline{\psi}(N)$ for any normal subgroup and any element $\overline{\psi} \in \mathop{\mathrm{Out}}(K)$, making sense of the following definition. **Definition 17**. 
Let $\varphi: H \to \mathop{\mathrm{Out}}(K)$ be a morphism, then we define $\mathop{\mathrm{Inv}}(\varphi) \subset \nu(K)$ as the set $$\mathop{\mathrm{Inv}}(\varphi) = \left\{N\in \nu(K)\mid \forall h \in H: \varphi(h)(N) = N \right\}.$$ The importance of this set lies in the following application of Lemma [Lemma 13](#lem_reduc_two){reference-type="ref" reference="lem_reduc_two"}. **Theorem 18**. *Let $G$ be a residually finite, finitely generated group in a short exact sequence as in equation [\[eq_short_exact\]](#eq_short_exact){reference-type="eqref" reference="eq_short_exact"}. If $H$ is finite, then $\mathop{\mathrm{RF}}_{G, \nu(G), B_G}$ equals $\mathop{\mathrm{RF}}_{K, \mathop{\mathrm{Inv}}(\varphi), B_K}$.* *Proof.* We first claim that $\nu(G)\cap K$ equals precisely $\mathop{\mathrm{Inv}}(\varphi)$. Indeed, take $N \in \nu(G)$ arbitrary. Since $K$ is also a normal subgroup, the intersection $\bar{N} := N\cap K$ is normal in $G$. Take any $h\in H$, then $C_{s(h)}(\bar{N}) = \bar{N}$, where $C_{s(h)}$ denotes conjugation as mentioned above. By definition, this is precisely $\varphi(h)(\bar{N})$, so $\bar{N} \in \mathop{\mathrm{Inv}}(\varphi)$. Conversely, if $N \in \mathop{\mathrm{Inv}}(\varphi)$, then $N\leq K$ and for $g\in G$ arbitrary we have $$g^{-1}Ng = s(h)^{-1}k^{-1}Nks(h) = \varphi(h)(N) = N,$$ by writing $g = ks(h)$ for $k\in K$ and $h\in H$, so $N\in \nu(G)\cap K$. The statement of the theorem is now a direct application of Lemma [Lemma 13](#lem_reduc_two){reference-type="ref" reference="lem_reduc_two"}, because $K\lhd_f G$ and $\nu(G)\cap K$ equals precisely $\mathop{\mathrm{Inv}}(\varphi)$. For every $N\lhd G$, we can now set $\bar{N} = N\cap K$. We have the inequality $[K:\bar{N}] \leq [G:N]$, because $[G:N] = [G:KN]\cdot [K:K\cap N]$. ◻ Note that the previous result holds for a general group $K$, without the assumption that $K$ is abelian, which we will make further on. 
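In the abelian setting studied below, where $K = \mathbb{Z}^2$ and $\varphi$ takes values in $\mathop{\mathrm{GL}}(2,\mathbb{Z})$, membership in $\mathop{\mathrm{Inv}}(\varphi)$ can be tested concretely: the finite-index sublattice spanned by the columns of an integral matrix $B$ satisfies $A(B\mathbb{Z}^2) = B\mathbb{Z}^2$ for $A \in \mathop{\mathrm{GL}}(2,\mathbb{Z})$ exactly when $B^{-1}AB$ is again integral (since $\det A = \pm 1$, the inclusion $A(B\mathbb{Z}^2) \subset B\mathbb{Z}^2$ already forces equality); for $\mathop{\mathrm{Inv}}(\varphi)$ one runs this test on generators of $\varphi(H)$. A minimal sketch, with helper function and sample matrices of our own choosing:

```python
from fractions import Fraction

def is_invariant(A, B):
    """Check whether the sublattice of Z^2 spanned by the columns of the
    2x2 integral matrix B is invariant under A, i.e. B^{-1} A B is integral."""
    a, b, c, d = B[0][0], B[0][1], B[1][0], B[1][1]
    det = a * d - b * c
    # Inverse of B via the 2x2 adjugate formula, kept exact with Fraction.
    Binv = [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    M = mul(mul(Binv, A), B)
    return all(entry.denominator == 1 for row in M for entry in row)
```

For the order-$4$ rotation $A = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$, the sublattice $2\mathbb{Z}\times 2\mathbb{Z}$ is invariant, while $2\mathbb{Z}\times\mathbb{Z}$ is not.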
However, the assumption that $H$ is finite is crucial in order to use Lemma [Lemma 13](#lem_reduc_two){reference-type="ref" reference="lem_reduc_two"}, as we illustrate with the following example. **Example 19**. The group $\mathbb{Z}^3$ and the discrete Heisenberg group $H_3(\mathbb{Z})$ have different residual finiteness growths, namely $\log$ and $\log^3$ respectively, see [@bou2010quantifying]. However, both groups have a normal subgroup $\mathbb{Z}$ with quotient $\mathbb{Z}^2$, namely $$\dfrac{\mathbb{Z}^3}{\mathbb{Z}\times\left\{0 \right\}^2} \cong \mathbb{Z}^2 \cong \dfrac{H_3(\mathbb{Z})}{Z(H_3(\mathbb{Z}))},$$ and in both cases the morphism $\varphi: \mathbb{Z}^2 \to \mathop{\mathrm{Out}}(\mathbb{Z}) = \mathop{\mathrm{Aut}}(\mathbb{Z})$ induced by conjugation is the trivial one. The class of virtually abelian groups is exactly the special case where $K$ is an abelian group and $H$ is finite. **Definition 20**. A group $G$ is said to be virtually abelian if it has an abelian normal subgroup $K$ of finite index. Note that the map $\varphi$ is then a morphism $H\to \mathop{\mathrm{Aut}}(K)$, as $\mathop{\mathrm{Out}}(K) = \mathop{\mathrm{Aut}}(K)$ when $K$ is abelian. Since we assume that $G$ is finitely generated, and hence $K$ as well, we know that $K \cong \mathbb{Z}^m\oplus T$ for some $m\in \mathbb{N}$ and $|T| < \infty$ by the structure theorem of finitely generated abelian groups. If we take $K^\prime = \vert T \vert K \subset K$, then $K^\prime$ is a characteristic and torsion-free subgroup of $K$ and hence a normal subgroup of $G$. Hence, without loss of generality, we can assume that $K$ is a torsion-free normal subgroup. In fact, this argument is a simplified version of a more general statement for virtually polycyclic groups, which always have a finite index normal subgroup which is torsion-free, see [@ragh72]. 
Since $K \cong \mathbb{Z}^m$ for some $m \geq 0$, we have reduced the problem of finding $\mathop{\mathrm{RF}}_G$ for a virtually abelian group $G$ to the question of determining $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^m}}$ for all $\varphi: H \to \mathop{\mathrm{GL}}(m,\mathbb{Z})$ with $H$ finite. The next two sections deal with the upper and lower bound respectively. However, we first make some final comments about the representation $\varphi$ and how it depends on the choice of the subgroup $K$. In the special case where $G$ is crystallographic, which is equivalent to $G$ being finitely generated, virtually abelian and having only trivial finite normal subgroups by [@deki96-1], we can say more about the representation $\varphi$. Indeed, in this case, $G$ has a maximal abelian subgroup $A$ which is torsion-free and normal. In particular, we can take $K = A$, and then the representation $\varphi: H \to \mathop{\mathrm{Aut}}(A)$ becomes faithful. This representation is known as the holonomy representation. We refer to [@deki96-1] for more details. However, in the case where $G$ is not crystallographic, there is no canonical choice for the abelian normal subgroup $K$. For any two choices $K, K^\prime \subset G$ which are torsion-free abelian of maximal rank, we have that the groups $K$ and $K^\prime$ are commensurable, i.e. $K \cap K^\prime$ is a subgroup of finite index in both $K$ and $K^\prime$. To see how the representation $\varphi$ varies with the subgroup $K$, it thus suffices to consider the case $K^\prime \subset K$ with corresponding groups $H = G / K$, $H^\prime = G /K^\prime$ and maps $\varphi: H \to \mathop{\mathrm{Aut}}(K)$, $\varphi^\prime: H^\prime \to \mathop{\mathrm{Aut}}(K^\prime)$. Note that the subgroup $M = K /K^\prime$ is a normal subgroup of $H^\prime$ which acts trivially on $K^\prime$ by conjugation, and thus $M$ lies in the kernel of $\varphi^\prime$. 
Hence, we find that the induced representation $$\overline{\varphi}^\prime: H = H^\prime/M \to \mathop{\mathrm{Aut}}(K^\prime)$$ is by definition the restriction of the representation $\varphi$ to the invariant subgroup $K^\prime \subset K$. As $K^\prime$ has finite index in $K$, the representations $\varphi$ and $\overline{\varphi}^\prime$ are equivalent over the rational numbers $\mathbb{Q}$. In particular, we conclude that the image $\varphi(H) \subset \mathop{\mathrm{GL}}(m,\mathbb{Z})$ does not depend on the choice of $K$ up to $\mathbb{Q}$-equivalence. As our main result only depends on the representation $\varphi$ over $\mathbb{C}$, this $\mathbb{Q}$-equivalence does not play a role further and we can just fix one choice of subgroup $K$. # Proof of the upper bound {#sec_upper} In the previous section, we showed how $\mathop{\mathrm{RF}}_{G}$ for a finitely generated virtually abelian group $G$ depends only on the corresponding representation $\varphi: H \to \mathop{\mathrm{GL}}(m,\mathbb{Z})$ for some finite group $H$. Recall that a (linear) representation of a finite group $H$ is by definition a group homomorphism $\varphi : H \to \mathop{\mathrm{GL}}(m, \mathbb{F})$ for some field $\mathbb{F}$. The special case where the entries lie in $\mathbb{Z}$ is important for our main results. **Definition 21**. We say a group representation is integral if it is of the form $H\to \mathop{\mathrm{GL}}(m, \mathbb{Z})$. If $\varphi: H \to \mathop{\mathrm{GL}}(m,\mathbb{F})$ is a linear representation and $\mathbb{F} \subset \mathbb{E}$ is a field extension, we also get a linear representation $\varphi^\mathbb{E}: H \to \mathop{\mathrm{GL}}(m,\mathbb{E})$. Sometimes we will write both representations as $\varphi$ if it is clear from the context over which field we are working. The advantage of working over fields is that every representation can be decomposed into irreducible subrepresentations. 
If $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{F})$ is a representation, then a subrepresentation is a subspace $W \subset \mathbb{F}^m$ which is invariant under every element $\varphi(h)$ for $h \in H$. After choosing a basis for $W$, we then also get a map $H \to \mathop{\mathrm{GL}}(m^\prime,\mathbb{F})$, where $m^\prime$ is the dimension of $W$. We call a representation irreducible if the only subrepresentations are the trivial ones, namely $0$ and $\mathbb{F}^m$. Every group representation $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{F})$ can be written as $$\varphi = \varphi_1 \times \dots \times \varphi_n,$$ where the $\varphi_i$ are irreducible subrepresentations, by restricting $\varphi$ to subspaces $W_i$ with $\mathbb{F}^m = W_1 \oplus \ldots \oplus W_n$. After choosing a basis for the spaces $W_i$ we can also see $\varphi_i: H \to \mathop{\mathrm{GL}}(m_i,\mathbb{F})$ with $m_i$ the dimension of $W_i$. If $\varphi$ is irreducible over every field extension of $\mathbb{F}$, then it is called absolutely irreducible, and the field $\mathbb{F}$ is called a splitting field of $\varphi$ if every irreducible subrepresentation of $\varphi$ over $\mathbb{F}$ is absolutely irreducible. By [@curtis2006representation p. 292 & p. 475] we have the following result: **Theorem 22**. *If a field $\mathbb{F}$ contains a primitive $|H|$-th root of unity, then it is a splitting field of $\varphi$.* Our main goal in this section is to give an upper bound for $\mathop{\mathrm{RF}}_G$ depending on how $\varphi$ splits over the complex numbers, as given in the following result. **Theorem 23**. *Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be an integral representation of a finite group $H$. Suppose that all the irreducible $\mathbb{C}$-subrepresentations have degree smaller than or equal to $k$, then $$\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi),B_{\mathbb{Z}^m}} \preceq \log^k.$$* The main tool here is the density of primes in certain subsets of $\mathbb{N}$. 
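As a quick numerical illustration (our own sanity check, not part of the argument), one can compute the two prime-density quantities that will appear in Lemma 25 below for the set of all primes: the counting function and the logarithm of the product of the primes up to $x$.

```python
import math

def primes_up_to(x):
    # simple sieve of Eratosthenes
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, x + 1, i):
                sieve[j] = False
    return [p for p, ok in enumerate(sieve) if ok]

x = 10000
ps = primes_up_to(x)
pi_x = len(ps)                           # pi_S(x) for S the set of all primes
theta_x = sum(math.log(p) for p in ps)   # log of the product of all primes <= x
```

For $x = 10000$ this gives $\pi(x) = 1229$, so $\pi(x)\log(x)/x \approx 1.13$, and $\theta(x)/x \approx 0.99$, in line with both conditions of the lemma.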
As is common in number theory, we write $f \asymp g$ for functions $f,g: \mathbb{R}^+ \to \mathbb{R}^+$ if there exist constants $C_1,C_2 > 0$ such that $$C_1 \leq \liminf_{x\to \infty} \frac{f(x)}{g(x)} \leq \limsup_{x\to \infty} \frac{f(x)}{g(x)} \leq C_2.$$ In other words, $f \asymp g$ if and only if there exist constants $C_1, C_2 > 0$ and $M>0$ such that for all $x\geq M$ it holds that $$C_1g(x) \leq f(x) \leq C_2g(x).$$ **Definition 24**. Let $S\subset \left\{p \in \mathbb{N}\mid p \text{ is prime} \right\}$ be any subset of primes. We define the function $\pi_S: \mathbb{N}\to \mathbb{N}$ as $\pi_S(x) = |\left\{p \in S\mid p\leq x \right\}|$, i.e. $\pi_S(x)$ is the number of elements in $S$ smaller than or equal to $x$. The following results about the asymptotics of $\pi_S$ are well known. **Lemma 25**. *The following are equivalent for subsets $S \subset \left\{p \in \mathbb{N}\mid p \text{ is prime} \right\}$ of prime numbers:* 1. *$\pi_S(x) \asymp \dfrac{x}{\log(x)}$,* 2. *$\log(\prod_{p\in S, p\leq x}p) \asymp x$.* We know that various subsets $S$ satisfy the lemma above by certain prime number theorems: **Theorem 26**. *The following subsets $S$ satisfy the lemma above:* 1. *the set $S = \left\{p \in \mathbb{N}\mid p \text{ is prime} \right\}$ of all primes;* 2. *the set $S = \left\{p\mid p \text{ prime and } p\equiv a \mod n \right\}$ for fixed $a$ and $n$ with $\mathop{\mathrm{gcd}}(a,n) = 1$.* *Proof.* The first claim follows from the prime number theorem, the second from the prime number theorem in arithmetic progressions, see for example [@fine2007number]. ◻ It should be noted that this result is a special case of Chebotarev's density theorem [@tschebotareff1926bestimmung]. However, since we do not need this more general density theorem, we will not discuss it further in this paper. For the sets $S$ mentioned above, the following result holds: **Proposition 27**. 
*Suppose $S$ is a subset of primes satisfying the equivalent conditions of Lemma [Lemma 25](#lem_PNT){reference-type="ref" reference="lem_PNT"}. There exist numbers $D_1,D_2>0$ such that for all $m\in\mathbb{N}$ there exists a prime number $p\in S$ with $p \leq D_1\cdot \log(m) + D_2$ and $p\nmid m$ (or equivalently $m\notin p\mathbb{Z}$).* *Proof.* Let all primes in this proof denote primes in $S$. We first show the result for all $m\geq 2$, with $D_2=0$. Suppose such a fixed number $D_1$ does not exist. This means that for every $n\in\mathbb{N}$, there must be at least one element $m_n \geq 2$ such that $p \mid m_n$ as soon as $p\leq n\cdot \log(m_n)$ and $p \in S$. As a consequence, we see that $$\prod_{\substack{p\leq n \log(m_n) \\ p \in S}} p \mid m_n.$$ In particular, this means that $\displaystyle \prod_{\substack{p\leq n \log(m_n) \\ p \in S}}p \leq m_n$.\ We claim this contradicts Lemma [Lemma 25](#lem_PNT){reference-type="ref" reference="lem_PNT"}. Indeed, since $n \log(m_n)$ goes to infinity (as $m_n \geq 2$), this lemma says precisely that there exists a number $C$ such that for $n$ large enough we have the inequality $$0< C \leq \frac{\log \left( \prod_{\substack{p\leq n \log(m_n) \\ p \in S} }p \right)}{n \log(m_n)}.$$ This means that $m_n^{Cn} \leq \displaystyle \prod_{\substack{p\leq n \log(m_n) \\ p \in S}}p$. Since $m_n \geq 2$ and for $n$ large $Cn > 1$, we have $m_n < m_n^{Cn}$. Hence, this lower bound on $\displaystyle \prod_{\substack{p\leq n \log(m_n) \\ p \in S}}p$ contradicts the earlier upper bound, so the statement must hold. If we pick $D_2$ to be the smallest prime in $S$, then the statement is also satisfied for $m = 1$. ◻ Combining these results about the density of prime numbers with the theory of splitting fields for representations, we get the proof of Theorem [Theorem 23](#thm_upper){reference-type="ref" reference="thm_upper"}.
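Proposition 27 is easy to test numerically. The sketch below (the helper names and the constants $D_1 = D_2 = 2$ are our own illustrative choices, for the set $S$ of all primes, not the optimal values) checks the bound for all $m$ up to $10^4$:

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

PRIMES = primes_up_to(1000)

def smallest_nondividing_prime(m):
    """The smallest prime p with p not dividing m."""
    for p in PRIMES:
        if m % p != 0:
            return p
    raise ValueError("sieve bound too small")

# The proposition predicts p <= D1*log(m) + D2 for some constants D1, D2;
# the values D1 = D2 = 2 below are illustrative, not optimal.
for m in range(1, 10_000):
    p = smallest_nondividing_prime(m)
    assert p <= 2 * log(m) + 2, (m, p)
```

The worst cases are the primorials: for instance $m = 2310 = 2\cdot3\cdot5\cdot7\cdot11$ needs $p = 13$, still far below $2\log(2310) + 2 \approx 17.5$.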
*Proof of Theorem [Theorem 23](#thm_upper){reference-type="ref" reference="thm_upper"}.* Consider the standard word norm on $\mathbb{Z}^m$. Take $0\neq v \in B_{\mathbb{Z}^m}(r)$ arbitrary. Since this vector is non-zero, we can find a non-zero entry $a$. By Proposition [Proposition 27](#prop_Dlogn_gPNT){reference-type="ref" reference="prop_Dlogn_gPNT"}, applied to the set $S$ of primes congruent to $1$ modulo $|H|$, we can take a prime number $$p \leq D_1\cdot \log(|a|) + D_2 \leq D_1\cdot \log(\|v\|_{\mathbb{Z}^m})+D_2$$ such that $a \notin p\mathbb{Z}$ and $p\equiv 1 \mod |H|$. In particular, $\mathbb{Z}_p$ contains a primitive $|H|$-th root of unity. By construction, $v\notin p\mathbb{Z}^m$. Let $\psi$ be the map $\mathbb{Z}^m \to \mathbb{Z}^m_p: v \mapsto v \mod p$. We claim that $$D_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi)}(v) \leq D_{\mathbb{Z}_p^m, \mathop{\mathrm{Inv}}(\phi)}(\psi(v))$$ by Lemma [Lemma 15](#lem_surjective){reference-type="ref" reference="lem_surjective"}, where $\phi: H \to \mathop{\mathrm{GL}}(m,\mathbb{Z}_p)$ is the map $\varphi$ modulo $p$, satisfying $$\phi(h)(\psi(v)) = \psi(\varphi(h)(v))$$ for all $v \in \mathbb{Z}^m, h \in H$. Indeed, we have to verify that $\psi^{-1}(\mathop{\mathrm{Inv}}(\phi)) \subset \mathop{\mathrm{Inv}}(\varphi)$, but this follows easily from the definition of $\phi$. By Theorem [Theorem 22](#thm_splitting_field){reference-type="ref" reference="thm_splitting_field"}, we know $\mathbb{Z}_p$ is a splitting field for $\phi$. Hence, we can write $$\phi = \phi_1\times \dots \times \phi_n,$$ where $\phi_i$ is an absolutely irreducible subrepresentation corresponding to the $\phi$-invariant $\mathbb{Z}_p$-subspaces $V_i$ for each $1\leq i\leq n$. Now we wish to apply Lemma [Lemma 14](#lem_direct_sum){reference-type="ref" reference="lem_direct_sum"}. Hence, we will argue that $\mathop{\mathrm{Inv}}(\phi)\cap V_i$ equals $\mathop{\mathrm{Inv}}(\phi_i)$.
For $\mathop{\mathrm{Inv}}(\phi)\cap V_i \subset \mathop{\mathrm{Inv}}(\phi_i)$, suppose $N\in \mathop{\mathrm{Inv}}(\phi)$. If $w\in N\cap V_i$, then $\phi_i(h)(w) := \phi(h)(w) \in N\cap V_i$, since $N$ and $V_i$ are $\phi$-invariant. Hence, $N\cap V_i$ is $\phi_i$-invariant. The other inclusion is similar. It is also clear that the conditions $$\displaystyle \bigoplus_{i=1}^n\mathop{\mathrm{Inv}}(\phi_i) \subset \mathop{\mathrm{Inv}}(\phi)$$ and $V_i \in \mathop{\mathrm{Inv}}(\phi_i)$ are satisfied. To apply the lemma, write $\psi(v) = v_1 + \dots + v_n$ where $v_i \in V_i$. Note that since $\psi(v)$ is non-zero, so is at least one of the $v_i$'s. We get $$D_{\mathbb{Z}_p^m, \mathop{\mathrm{Inv}}(\phi)}(\psi(v)) \leq \min\left\{D_{V_i, \mathop{\mathrm{Inv}}(\phi_i)}(v_i) \mid 1 \leq i \leq n \right\}.$$ Let the dimension of $V_i$ as a $\mathbb{Z}_p$-vector space be $m_i$; then clearly $D_{V_i, \mathop{\mathrm{Inv}}(\phi_i),B_i}(v_i) \leq |V_i| = p^{m_i}$. In particular, let $k = \max\left\{m_i\mid 1\leq i\leq n \right\}$, then $$D_{\mathbb{Z}_p^m, \mathop{\mathrm{Inv}}(\phi)}(\psi(v)) \leq p^k \leq (D_1\cdot \log(\|v\|_{\mathbb{Z}^m})+D_2)^k.$$ We end by noting that $k$ is also the maximal degree of the irreducible $\mathbb{C}$-subrepresentations of $\varphi$ by [@isaacs2006character Theorem 15.13]. ◻

# Proof of the lower bound {#sec_lower}

The goal of this section is to show that the inequality in Theorem [Theorem 23](#thm_upper){reference-type="ref" reference="thm_upper"} is in fact optimal, by producing the corresponding lower bound. We proceed in three steps. First, we reduce our problem further, showing that we may restrict our attention to $\mathbb{Q}$-irreducible representations. Then, we describe what matrices that commute with a given $\mathbb{Q}$-irreducible representation look like, using the theory of Galois Descent.
Finally, we proceed by comparing $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^m}}$ to the more restrictive $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Com}}(\varphi), B_{\mathbb{Z}^m}}$, where $\mathop{\mathrm{Com}}(\varphi)$ consists of those subgroups that can be written as $\text{Im\,}B$ for some matrix $B$ that commutes with $\varphi$. The claimed lower bound then follows from combining the previous steps.

## Reduction to $\mathbb{Q}$-irreducible representations

If $\varphi$ is an integral representation, considered over the field $\mathbb{Q}$, then we can write $$\varphi = \varphi_1\times \dots \times \varphi_n,$$ where $\varphi_i$ with $1\leq i\leq n$ are irreducible $\mathbb{Q}$-subrepresentations with corresponding irreducible $\mathbb{Q}$-vector subspaces $W_i$, so $\varphi_i$ is the restriction of $\varphi$ to $W_i$. We write $P_i: \mathbb{Q}^m \to W_i$ for the natural projection onto $W_i$. **Theorem 28**. *Let $K_i$ denote $P_i(\mathbb{Z}^m)$; then $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi),B_{\mathbb{Z}^m}} \approx \max\left\{\mathop{\mathrm{RF}}_{K_i, \mathop{\mathrm{Inv}}(\varphi_i),B_i} \mid 1\leq i\leq n \right\}$.* By Corollary [Corollary 11](#cor_independent){reference-type="ref" reference="cor_independent"}, this equivalence holds for any choice of metric balls $B_i$, hence we do not fix a particular metric on the $K_i$. *Proof.* Note first that the linear map $L(v) = \sum_{i=1}^nP_i(v)$ is the identity map on $\mathbb{Q}^m$ by construction. Set $K = \oplus_{i=1}^n K_i$. We have the following inclusion: $$\mathbb{Z}^m = L(\mathbb{Z}^m) \subset \oplus_{i=1}^n P_i(\mathbb{Z}^m) = \oplus_{i=1}^n K_i = K.$$ As $K$ by construction is an abelian group of rank at most $m$, it must have rank exactly $m$ and contain $\mathbb{Z}^m$ as a subgroup of finite index.
Since $\varphi$ is integral, we know that for all $v\in \mathbb{Z}^m$ the vector $\varphi(h)(v)$ lies in $\mathbb{Z}^m$. Decomposing $v$ as $\sum_{i=1}^n v_i$, where $v_i = P_i(v) \in K_i$, we see that $$\varphi(h)(v) = \sum_{i=1}^n\varphi(h)(v_i) = \sum_{i=1}^n\varphi_i(h)(v_i),$$ where thus $\varphi_i(h)(v_i) \in K_i$. In particular, if $v_i \in K_i$, then also $\varphi_i(h)(v_i) \in K_i$ and thus $\varphi_i: H \to \mathop{\mathrm{Aut}}(K_i)$ is well-defined. Let $\bar{\varphi}$ be the map $$H\to \mathop{\mathrm{Aut}}(K): h \mapsto \sum_{i=1}^n\varphi_i(h)\circ P_i.$$ The previous computation shows that $\bar{\varphi}$ is the natural extension of $\varphi: H\to \mathop{\mathrm{GL}}(m, \mathbb{Z}) = \mathop{\mathrm{Aut}}(\mathbb{Z}^m)$ to $\mathop{\mathrm{Aut}}(K)$. We can apply Lemma [Lemma 13](#lem_reduc_two){reference-type="ref" reference="lem_reduc_two"} to obtain the equivalence $$\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi),B_{\mathbb{Z}^m}} \approx \mathop{\mathrm{RF}}_{K, \mathop{\mathrm{Inv}}(\bar{\varphi}),B_K}.$$ Indeed, $\mathbb{Z}^m \lhd_f K$, $\mathop{\mathrm{Inv}}(\bar{\varphi})\cap \mathbb{Z}^m = \mathop{\mathrm{Inv}}(\varphi)$, since $\bar{\varphi}$ is the natural extension of $\varphi$, and for every $N \in \mathop{\mathrm{Inv}}(\bar{\varphi})$ we have $[\mathbb{Z}^m: N\cap \mathbb{Z}^m] \leq [K:N]$.\ We conclude the proof by applying Lemma [Lemma 14](#lem_direct_sum){reference-type="ref" reference="lem_direct_sum"}, using that $\bar{\varphi} = \varphi_1\times \dots \times \varphi_n$ on $K = \oplus_{i=1}^n K_i$. ◻ Note that $K_i \cong \mathbb{Z}^{m_i}$ for some $m_i$. Hence we may indeed reduce our attention to $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^m}}$, where $\varphi$ is $\mathbb{Q}$-irreducible.
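To make the projections $P_i$ and the lattices $K_i = P_i(\mathbb{Z}^m)$ concrete, consider the smallest example (our own illustration, not taken from the text): $C_2$ acting on $\mathbb{Z}^2$ by swapping the coordinates, which splits over $\mathbb{Q}$ into the diagonal line $W_1$ and the anti-diagonal line $W_2$. Exact rational arithmetic confirms that $L = P_1 + P_2$ is the identity and that $\mathbb{Z}^2$ sits inside $K = K_1 \oplus K_2$ with finite index (here index $2$):

```python
from fractions import Fraction as F

# C2 acts on Q^2 by s(x, y) = (y, x); W1 = span{(1,1)}, W2 = span{(1,-1)}.
def P1(v):
    """Projection onto the diagonal W1."""
    t = F(v[0] + v[1], 2)
    return (t, t)

def P2(v):
    """Projection onto the anti-diagonal W2."""
    t = F(v[0] - v[1], 2)
    return (t, -t)

# L(v) = P1(v) + P2(v) is the identity on Q^2:
for v in [(3, 5), (-2, 7)]:
    assert tuple(a + b for a, b in zip(P1(v), P2(v))) == tuple(map(F, v))

# K1 = P1(Z^2) is generated by (1/2, 1/2), which lies outside Z^2,
# so Z^2 is a proper finite-index subgroup of K = K1 + K2:
assert P1((1, 0)) == (F(1, 2), F(1, 2))
```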
## Matrices commuting with $\mathbb{Q}$-irreducible representations {#sec_irrQrepr} Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be an integral representation which is irreducible as a representation over $\mathbb{Q}$. In this section we discuss the matrices that commute with $\varphi$, i.e. we investigate the $B\in \mathop{\mathrm{GL}}(m, \mathbb{Q})$ such that $\varphi(h)B = B\varphi(h)$ for all $h\in H$. For this, we will use some notions of Galois theory, for which we refer to [@Winter1974Fields] for more background and notation. For the following definitions we work with the standard basis of $\mathbb{K}^m$. If $\sigma$ is an automorphism of $\mathbb{K}$ and $v\in \mathbb{K}^m$, then we write $\sigma(v)$ for the vector obtained by applying $\sigma$ on the entries of the vector $v$. **Definition 29**. We say a $\mathbb{K}$-vector subspace $W\subset \mathbb{K}^m$ is minimal over the field $\mathbb{F}$ if $W$ has a basis with entries over $\mathbb{F}$, but no basis with entries over any strictly smaller field $\mathbb{L}$. Note that for any subspace $W \subset \mathbb{K}^m$, the set $\sigma(W) = \{\sigma(w) \mid w \in W\}$ is also a vector space over $\mathbb{K}$, and that $\sigma(W) = W$ if $W$ has a basis with entries over $\mathbb{F}$. **Lemma 30**. *Let $\varphi: H\to \mathop{\mathrm{GL}}(m, \mathbb{Q})$ be a representation of a finite group and $\mathbb{K}$ be some number field. If $W$ is an irreducible $\mathbb{K}$-subspace of $\varphi^\mathbb{K}$ and $\sigma$ is an automorphism of $\mathbb{K}$, then $\sigma(W)$ is also $\varphi^\mathbb{K}$-irreducible.* *Proof.* Let $w\in W$. By definition of being $\varphi^\mathbb{K}$-invariant, we know that $$\forall h\in H: \varphi^\mathbb{K}(h)(w) \in W.$$ Applying $\sigma$ to this expression and using that $\varphi^\mathbb{K}(h)$ is a rational matrix, we obtain $$\forall h \in H: \varphi^\mathbb{K}(h)(\sigma(w)) \in \sigma(W).$$ This shows that $\sigma(W)$ is also $\varphi^\mathbb{K}$-invariant. 
Now suppose $\sigma(W)$ contains a proper subspace $W'$ that is invariant: $$\forall w\in W', \forall h \in H: \varphi^\mathbb{K}(h)(w) \in W'.$$ Now apply $\sigma^{-1}$ to this expression to find that $$\forall w\in W', \forall h \in H: \varphi^\mathbb{K}(h)(\sigma^{-1}(w)) \in \sigma^{-1}(W').$$ This shows that $\sigma^{-1}(W')$ is an invariant subspace. However, this is a proper subspace of $W$. Therefore, $\sigma^{-1}(W') = \{0\}$ and $W' = \{0\}$. This shows that $\sigma(W)$ is irreducible. ◻ This leads to the following result describing how $\varphi$ splits over a splitting field. **Theorem 31**. *Let $\varphi: H\to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be a $\mathbb{Q}$-irreducible representation of a finite group. Suppose it decomposes into absolutely irreducible components over a Galois extension $\mathbb{K}$ of $\mathbb{Q}$. Suppose that $W$ is an irreducible $\mathbb{K}$-subspace which is minimal over $\mathbb{F} \subset \mathbb{K}$. If $\sigma_1$ up to $\sigma_n$ denote the $n$ automorphisms of $\mathbb{K}$ that are pairwise distinct on $\mathbb{F}$, i.e. such that $\sigma_i\big|_\mathbb{F} \neq \sigma_j\big|_\mathbb{F}$ for $i \neq j$, then $$\mathbb{K}^m = \bigoplus_{i=1}^n\sigma_i(W)$$ is a decomposition of $\mathbb{K}^m$ into irreducible subspaces.* *Remark 1*. Note that the automorphisms $\sigma_1$ to $\sigma_n$ can be identified with representatives of the cosets of $\mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{F})$ in $\mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{Q})$. Indeed, $\sigma_i(x) = \sigma_j(x)$ for all $x\in\mathbb{F}$ if and only if $\sigma_i\mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{F}) = \sigma_j\mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{F})$. As a consequence, if $\sigma$ is an automorphism of $\mathbb{K}$, then it induces a permutation on the cosets, implying that $\sigma\circ\sigma_i = \sigma_j\circ\sigma'$ for some $1\leq i,j\leq n$ and $\sigma'\in \mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{F})$. Since $W$ has a basis over $\mathbb{F}$, $\sigma'(W) = W$.
Therefore, $\sigma$ induces a permutation on $\{\sigma_i(W)\mid 1\leq i\leq n\}$. *Proof.* A similar statement is shown in [@dekimpe2015existence Theorem 3.1], from which we already know that if $\dim(W) = k$, then $m = kn$. However, the result does not specify the direct sum along which this decomposition holds, which is crucial for our purposes. Consider the subspace $V = \sigma_1(W) + \dots + \sigma_n(W)$. We will argue that this is in fact a direct sum with $V = \mathbb{K}^m$. The $\mathbb{K}$-vector space $V$ is invariant under $\varphi^\mathbb{K}$, being the sum of invariant subspaces. Also, if $\sigma \in \mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{Q})$ and $\sum_{i=1}^n \sigma_i(w_i)$ is an arbitrary element of $V$, then $\sigma(\sum_{i=1}^n \sigma_i(w_i))$ is still an element of $V$, as $\sigma$ permutes $\{\sigma_i(W)\mid 1\leq i \leq n\}$, so $\sigma(\sigma_i(w_i)) = \sigma_j(w'_i)$ for some $1\leq j \leq n$ and $w'_i \in W$. We conclude that $\sigma(V) = V$ for all $\sigma \in \mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{Q})$. Let $U$ denote the set $\{v \in V\mid \forall \sigma \in \mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{Q}): \sigma(v) = v\}$ of vectors in $V$ fixed under $\mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{Q})$. By the theory of Galois Descent, see [@Winter1974Fields Theorem 3.2.5], $U$ is a $\mathbb{Q}$-vector subspace of $\mathbb{Q}^m$ for which $U\otimes_\mathbb{Q} \mathbb{K} = V$. (In particular, $V$ has a basis with entries over $\mathbb{Q}$.) However, $U$ is now a $\varphi^\mathbb{Q}$-invariant subspace. As $\varphi^\mathbb{Q}$ is irreducible, this implies that $U = \mathbb{Q}^m$ and $V = \mathbb{K}^m$. In conclusion, we obtain $$\mathbb{K}^m = \sigma_1(W) + \dots + \sigma_n(W).$$ Thus, comparing the dimensions of both sides, this must be a direct sum. ◻ Now let $B$ be a rational matrix that commutes with $\varphi: H \to \mathop{\mathrm{GL}}(m,\mathbb{Z})$.
Take $\mathbb{K}$ a number field such that $\varphi$ decomposes into absolutely irreducible components and $B$ can be put in its Jordan normal form over $\mathbb{K}$. We may assume that $\mathbb{K}$ is Galois over $\mathbb{Q}$. **Lemma 32**. *Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be an integral representation and let $B\in \mathop{\mathrm{GL}}(m, \mathbb{Q})$ commute with $\varphi$. There exists an absolutely irreducible $\mathbb{K}$-subspace $W$ of $\varphi^\mathbb{K}$ contained in an eigenspace of $B$.* *Proof.* By the Jordan decomposition of $B$ over $\mathbb{K}$, we know there exists at least one eigenvector $v$ for some eigenvalue $\lambda$. Consider $$V =\text{span}_\mathbb{K}\{\varphi^\mathbb{K}(h)(v)\mid h \in H\}.$$ This subspace is clearly $\varphi^\mathbb{K}$-invariant. Furthermore, we have $$B(\varphi^\mathbb{K}(h)v) = \varphi^\mathbb{K}(h)Bv = \lambda\varphi^\mathbb{K}(h)v$$ for all basis vectors. Hence $V$ is in fact contained in the eigenspace of $\lambda$.\ Since $V$ is $\varphi^\mathbb{K}$-invariant, it contains an absolutely irreducible $\mathbb{K}$-subspace $W$. This ends the proof. ◻ Take the $\mathbb{K}$-vector subspace $W \subset \mathbb{K}^m$ as in the previous lemma. Suppose it is minimal over the field $\mathbb{F}$. Then $\lambda \in \mathbb{F}$ (as $Bw = \lambda w$ for a non-zero $w$ with entries in $\mathbb{F}$, while $B$ is rational), and for all $\sigma_i$ in the direct sum $\mathbb{K}^m = \oplus_{i=1}^n \sigma_i(W)$, we have $$\forall w\in W: B\sigma_i(w) = \sigma_i(Bw) = \sigma_i(\lambda w) = \sigma_i(\lambda)\sigma_i(w).$$ We get the following result: **Proposition 33**. *Using the notation as above, if $B$ commutes with a $\mathbb{Q}$-irreducible representation $\varphi$, then it is of the form $$B \sim_\mathbb{K} \begin{pmatrix} \sigma_1(\lambda)\mathbb{1}_k & 0 & \dots & 0\\ 0 & \sigma_2(\lambda)\mathbb{1}_k & &\\ \vdots & & \ddots &\\0 & 0 & \dots & \sigma_n(\lambda)\mathbb{1}_k \end{pmatrix}$$ with respect to a basis along the direct sum $\oplus_{i=1}^n \sigma_i(W)$.* **Example 34**.
The quaternion group $Q = \{\pm1, \pm i, \pm j, \pm k\}$ of order $8$ has a faithful group representation $\varphi: Q \to \mathop{\mathrm{GL}}(4, \mathbb{Z})$ given by $$\begin{aligned} \varphi(i) &= \begin{pmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix}, \\ \varphi(j) &= \begin{pmatrix}0&0&1&0\\0&0&0&1\\-1&0&0&0\\0&-1&0&0\end{pmatrix}, \\ \varphi(k) &= \begin{pmatrix}0&0&0&1\\0&0&-1&0\\0&1&0&0\\-1&0&0&0\end{pmatrix}. \end{aligned}$$ The following matrix commutes with every element of $\varphi(Q)$: $$B = \begin{pmatrix}1&-1&-2&0\\1&1&0&2\\2&0&1&-1\\0&-2&1&1\end{pmatrix}.$$ A minimal splitting field for both $B$ and $\varphi$ is $K = \mathbb{Q}(\sqrt{5}i)$ with $\mathop{\mathrm{Gal}}(K/\mathbb{Q}) = \{\sigma_1, \sigma_2\}$, where $\sigma_1(x) = x$ and $\sigma_2(x) = \bar{x}$, with $\bar{x}$ denoting complex conjugation. With respect to the basis given by $\left\{\sigma_1(w_1), \sigma_1(w_2), \sigma_2(w_1), \sigma_2(w_2) \right\}$ with $$w_1 = \begin{pmatrix} -\sqrt{5}i\\1\\2\\0\end{pmatrix}\text{ and }w_2 = \begin{pmatrix} 1\\\sqrt{5}i\\0\\2\end{pmatrix}$$ we obtain the following matrices: $$B \sim_\mathbb{C} \begin{pmatrix}1-\sqrt{5}i&0&0&0\\0&1-\sqrt{5}i&0&0\\0&0&1+\sqrt{5}i&0\\0&0&0&1+\sqrt{5}i\end{pmatrix},$$ $$\varphi(i), \varphi(j), \varphi(k) \sim_\mathbb{C} \begin{pmatrix}0&-1&0&0\\1&0&0&0\\0&0&0&1\\0&0&-1&0\end{pmatrix}, \, \begin{pmatrix}\frac{\sqrt{5}i}{2}&-\frac{1}{2}&0&0\\-\frac{1}{2}&-\frac{\sqrt{5}i}{2}&0&0\\0&0&-\frac{\sqrt{5}i}{2}&-\frac{1}{2}\\0&0&-\frac{1}{2}&\frac{\sqrt{5}i}{2}\end{pmatrix}, \, \begin{pmatrix}\frac{1}{2}&\frac{\sqrt{5}i}{2}&0&0\\ \frac{\sqrt{5}i}{2}&-\frac{1}{2}&0&0\\0&0&\frac{1}{2}&-\frac{\sqrt{5}i}{2}\\0&0&-\frac{\sqrt{5}i}{2}&-\frac{1}{2}\end{pmatrix}.$$ **Lemma 35**. *Suppose $B$ is an integral matrix of full rank that commutes with the $\mathbb{Q}$-irreducible representation $\varphi$ of dimension $m$.
If we take notations as in Theorem [Theorem 31](#thm_notation){reference-type="ref" reference="thm_notation"} and write $x = \prod_{1\leq i\leq n} \sigma_i(\lambda)$, then $x$ is an integer, $\det B = x^k$ with $k$ the dimension of a $\mathbb{C}$-irreducible subspace of $\varphi$ and $$x \mathbb{Z}^m \subset \text{Im\,}B.$$* *Proof.* Let $f(X)$ be the polynomial $$\prod_{1\leq i \leq n} (X - \sigma_i(\lambda)) = \sum_{i=0}^n a_i X^i.$$ Since $\mathbb{K}$ is a Galois extension of $\mathbb{Q}$ containing $\mathbb{F}$, every $\sigma \in \mathop{\mathrm{Gal}}(\mathbb{K}/\mathbb{Q})$ permutes the roots of $f(X)$. In particular, we see that $\sigma(f(X)) = f(X)$, so $f(X)$ is a rational polynomial. In fact, it is integral, since all its roots are algebraic integers: they are roots of the characteristic polynomial of $B$, which is the monic integral polynomial $f(X)^k$. The number $x$ equals $(-1)^n a_0$. Note also that $a_n$ equals $1$. It remains to prove that $x \mathbb{Z}^m \subset \text{Im\,}B$. We will argue that the largest possible order of an element in the quotient group $\mathbb{Z}^m/\text{Im\,}B$ divides $x$. The largest possible order is precisely the largest invariant factor of the Smith normal form decomposition of $B$. On the other hand, it is known that this equals $(\det B)/D$ up to sign, where $D$ is the greatest common divisor of the $(m-1)\times(m-1)$ minors of $B$. Hence, we need to prove that all these minors are multiples of $x^{k-1}$. Recall that the minors are, up to sign, the entries of the adjugate matrix $\text{Adj}(B)$. Set $M = \sum_{i=1}^n a_i B^{i-1}$. This is clearly an integral matrix. We find $$BM = MB \sim_\mathbb{C} \begin{pmatrix} (\sum_{i=1}^n a_i \sigma_1(\lambda)^{i-1})\sigma_1(\lambda)\mathbb{1}_k & 0 & \dots \\ 0 & (\sum_{i=1}^n a_i \sigma_2(\lambda)^{i-1})\sigma_2(\lambda)\mathbb{1}_k & \dots \\ \vdots & \vdots & \ddots \end{pmatrix}.$$ Note that $(\sum_{i=1}^n a_i \sigma_1(\lambda)^{i-1})\sigma_1(\lambda) = \sum_{i=1}^n a_i \sigma_1(\lambda)^{i} = -a_0 = (-1)^{n+1}x$, since $\sigma_1(\lambda)$ is a root of $f(X)$, and similarly for the other diagonal blocks. We have thus found that $$BM = MB = (-1)^{n+1}x \mathbb{1},$$ so $B^{-1} = \frac{(-1)^{n+1}}{x}M$ and hence $$\text{Adj}(B) = (\det B)B^{-1} = x^k \cdot \frac{(-1)^{n+1}}{x}M = (-1)^{n+1}x^{k-1}M.$$ In particular, every entry of $\text{Adj}(B)$ is a multiple of $x^{k-1}$, which ends the proof. ◻

## Reduction to Commuting Matrices

Any finite index subgroup of $\mathbb{Z}^m$ is equal to $\text{Im\,}B := B(\mathbb{Z}^m)$ for some matrix $B \in \mathbb{Z}^{m\times m}$ which is invertible over $\mathbb{Q}$, i.e. $\det(B) \neq 0$. If $\varphi: H \to \mathop{\mathrm{GL}}(m,\mathbb{Z})$ is an integral representation, then the set $\mathop{\mathrm{Inv}}(\varphi)$ consists of the images $\text{Im\,}B$ such that $\text{Im\,}\varphi(h)B = \text{Im\,}B$ for all $h \in H$. In the special case when $B$ commutes with every $\varphi(h)$, we surely have $\text{Im\,}\varphi(h)B = \text{Im\,}B\varphi(h) = \text{Im\,}B$, because $\varphi(h) \in \mathop{\mathrm{GL}}(m, \mathbb{Z})$. Hence, commuting is a stronger, yet not equivalent, condition than invariance of the subgroup. The goal of this section is to show that we may replace $\mathop{\mathrm{Inv}}(\varphi)$ by those subgroups which come from commuting matrices. In the proof of the lower bound, we will use this in order to apply Lemma [Lemma 35](#lem_copy_mat){reference-type="ref" reference="lem_copy_mat"}. **Definition 36**. Let $\mathop{\mathrm{Com}}(\varphi)$ be the set $\left\{\text{Im\,}B \in \mathop{\mathrm{Inv}}(\varphi)\mid \forall h\in H: B\varphi(h)=\varphi(h)B \right\}$. **Example 37**.
Continuing Example [Example 34](#ex_commuting){reference-type="ref" reference="ex_commuting"}, the matrix $B$ induces a subgroup $\text{Im\,}B$ in $\mathop{\mathrm{Com}}(\varphi)$, where $\varphi$ is the action of the quaternion group. Note that applying elementary column operations on $B$ (over the ring $\mathbb{Z}$) does not change the subgroup $\text{Im\,}B$. However, the new matrix does not have to commute with $\varphi$ anymore. A bit more work shows that the sets $\mathop{\mathrm{Com}}(\varphi)$ and $\mathop{\mathrm{Inv}}(\varphi)$ are genuinely different here. Indeed, consider $$\text{Im\,}\begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\1&1&1&2 \end{pmatrix}.$$ One can verify that this subgroup is invariant under $\varphi$. However, the determinant of the matrix is two, so the subgroup does not lie in $\mathop{\mathrm{Com}}(\varphi)$: Lemma [Lemma 35](#lem_copy_mat){reference-type="ref" reference="lem_copy_mat"} shows that the determinant would have to be a perfect square (here $k = 2$). **Lemma 38**. *Let $\varphi: H \to \mathop{\mathrm{GL}}(m,\mathbb{Z})$ be an integral representation and let $\text{Im\,}B$ be a $\varphi$-invariant subgroup with $B \in \mathbb{Z}^{m \times m}$ and $\det B \neq 0$. Then there exists an integral representation $\bar{\varphi}: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ such that for all $h\in H$: $\varphi(h)B = B\bar{\varphi}(h)$. In particular, $\varphi$ and $\bar{\varphi}$ are $\mathbb{Q}$-equivalent integral representations.* *Proof.* Define $\bar{\varphi}(h) = B^{-1}\varphi(h)B$. This clearly defines a $\mathbb{Q}$-equivalent representation. It suffices to argue that $\bar{\varphi}$ is an integral representation.\ Take $h\in H$ arbitrary. We argue that $\bar{\varphi}(h)$ is integral. Since $\text{Im\,}B$ is $\varphi$-invariant, we know that for all standard basis vectors $e_i$, we find an integral vector $u_i$ such that $$\varphi(h) Be_i = Bu_i.$$ Note that $Be_i$ is the $i$-th column of $B$, say $B_i$.
In other words, we have $\varphi(h)B_i = Bu_i$. As a consequence, we have $$\varphi(h) B = B \begin{pmatrix}u_1&\dots&u_m\end{pmatrix},$$ so $\bar{\varphi}(h) = \begin{pmatrix}u_1&\dots&u_m\end{pmatrix} \in \mathbb{Z}^{m\times m}$. ◻ By [@curtis2006representation p. 559], we have the following result: **Theorem 39** ((Corollary of) Jordan-Zassenhaus Theorem). *Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be an integral representation. The set $\left\{\tilde{\varphi}: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z}) \mid \varphi \sim_\mathbb{Q} \tilde{\varphi} \right\}$ of all integral representations $\mathbb{Q}$-equivalent to $\varphi$ splits into a finite number of equivalence classes under $\mathbb{Z}$-equivalence.* We use this to prove the following: **Proposition 40**. *There exists a constant $M>0$ such that for each $B \in \mathbb{Z}^{m \times m}$ with $\det(B) \neq 0$ for which $\text{Im\,}B \in \mathop{\mathrm{Inv}}(\varphi)$, there exists a matrix $B'$ with $\text{Im\,}B' \in \mathop{\mathrm{Com}}(\varphi)$ such that $\text{Im\,}B' \leq \text{Im\,}B$ and $0< |\det B'| \leq M |\det B|$.* *Proof.* By the Jordan-Zassenhaus Theorem, we find a finite number of representatives of the $\mathbb{Z}$-equivalence classes of the set $\left\{\tilde{\varphi}: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z}) \mid \varphi \sim_\mathbb{Q} \tilde{\varphi} \right\}$, say $\varphi_1$ up to $\varphi_k$. We also fix matrices $A_i \in \mathop{\mathrm{GL}}(m, \mathbb{Q})$ such that $\varphi_i(h)A_i = A_i\varphi(h)$ for all $h \in H$. By multiplying the matrices $A_i$ by a well-chosen scalar matrix, we may assume that $A_i \in \mathbb{Z}^{m\times m}$.\ Take $M = \max\left\{|\det A_i|\mid 1\leq i\leq k \right\}$. Let $B$ be an arbitrary matrix for which $\text{Im\,}B$ is $\varphi$-invariant. By the previous lemma, we know that there is an integral representation $\bar{\varphi}$ such that $\varphi(h)B = B\bar{\varphi}(h)$. Also, we know that $\bar{\varphi}$ lies in the $\mathbb{Z}$-equivalence class of some $\varphi_i$.
Hence, there exists a matrix $P\in \mathop{\mathrm{GL}}(m, \mathbb{Z})$ such that $\bar{\varphi}(h)P = P\varphi_i(h)$ for all $h \in H$. We claim that the matrix $BPA_i$ is the matrix $B'$ from the proposition's statement.\ First, by construction $PA_i$ is an integral matrix, hence $\text{Im\,}BPA_i$ is clearly a subgroup of $\text{Im\,}B$. Its determinant can be estimated, using $|\det P| = 1$, by $$0 < |\det (BPA_i)| = |\det A_i|\cdot|\det B| \leq M|\det B|.$$ It remains to show that this matrix commutes with $\varphi(h)$ for every $h\in H$. We get $$\varphi(h)BPA_i = B\bar{\varphi}(h)PA_i = BP\varphi_i(h)A_i = BPA_i \varphi(h).$$ ◻ **Theorem 41**. *The following equivalence holds: $$\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi),B_{\mathbb{Z}^m}} \approx \mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Com}}(\varphi),B_{\mathbb{Z}^m}}.$$* *Proof.* This follows directly from Proposition [Proposition 40](#prop_commut){reference-type="ref" reference="prop_commut"} and Lemma [Lemma 8](#lem_change_P){reference-type="ref" reference="lem_change_P"}. ◻

## Proof of the Main Result

In this last subsection we prove the claimed lower bound and then the main result, Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"}. **Theorem 42**. *If $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ is $\mathbb{Q}$-irreducible and has an absolutely irreducible subspace of dimension $k$ over $\mathbb{C}$, then $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Com}}(\varphi),B_{\mathbb{Z}^m}} \succeq \log^k$.* *Proof.* Take the first standard basis vector $v$ of $\mathbb{Z}^m$ and consider the elements $$v_s = \mathop{\mathrm{lcm}}(1,2, \dots , s)v \in B_{\mathbb{Z}^m}(\mathop{\mathrm{lcm}}(1,2, \dots , s))$$ for any $s \in \mathbb{N}$. We first show that $D_{\mathbb{Z}^m, \mathop{\mathrm{Com}}(\varphi)}(v_s) \geq s^k$. Indeed, take any $\text{Im\,}B \in \mathop{\mathrm{Com}}(\varphi)$ such that $v_s \notin \text{Im\,}B$.
By Lemma [Lemma 35](#lem_copy_mat){reference-type="ref" reference="lem_copy_mat"}, we know that $|\det B| = |x|^k$ for some $x\in \mathbb{Z}$ and $x\mathbb{Z}^m \subset \text{Im\,}B$. Thus $v_s$ also does not lie in $x\mathbb{Z}^m$. As $v_s \in l\mathbb{Z}^m$ for any $1 \leq l \leq s$, we conclude that $s < |x|$, implying that $|\det B| \geq s^k$. Now, since $\log(\mathop{\mathrm{lcm}}(1,2, \dots , s)) \asymp s$ by the prime number theorem, this implies that $D_{\mathbb{Z}^m, \mathop{\mathrm{Com}}(\varphi)}(v_s) \geq (D\cdot \log(\mathop{\mathrm{lcm}}(1,2, \dots , s)))^k$ for some fixed constant $D>0$, so $$\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Com}}(\varphi),B_{\mathbb{Z}^m}}(\mathop{\mathrm{lcm}}(1,2, \dots , s)) \geq (D\cdot \log(\mathop{\mathrm{lcm}}(1,2, \dots , s)))^k.$$ This ends the proof. ◻ All the previous results show that Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"} holds: *Proof of Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"}.* The upper bound follows by the relations $$\mathop{\mathrm{RF}}_{G, \nu(G), B_G} \approx \mathop{\mathrm{RF}}_{K, \mathop{\mathrm{Inv}}(\bar{\varphi}),B_{K}} \preceq \log^k$$ by Theorems [Theorem 18](#thm_finite_extension){reference-type="ref" reference="thm_finite_extension"} and [Theorem 23](#thm_upper){reference-type="ref" reference="thm_upper"}.\ The lower bound follows by the relations $$\begin{split} \mathop{\mathrm{RF}}_{G, \nu(G), B_G} & \approx \mathop{\mathrm{RF}}_{K, \mathop{\mathrm{Inv}}(\varphi),B_{K}}\\ & \approx \max\left\{\mathop{\mathrm{RF}}_{K_i, \mathop{\mathrm{Inv}}(\varphi_i),B_i} \mid 1\leq i\leq n \right\} \\ & \approx \max\left\{\mathop{\mathrm{RF}}_{K_i, \mathop{\mathrm{Com}}(\varphi_i),B_i} \mid 1\leq i\leq n \right\}\\ & \succeq \max\left\{ \log^{k_i} \mid 1\leq i\leq n \right\}\\ & \approx \log^k, \end{split}$$ where $k_i$ is the dimension of the $\mathbb{C}$-irreducible subrepresentations of $\varphi_i$.
Here we used consecutively Theorems [Theorem 18](#thm_finite_extension){reference-type="ref" reference="thm_finite_extension"} and [Theorem 28](#thm_irred_Q_split){reference-type="ref" reference="thm_irred_Q_split"}, Theorem [Theorem 41](#thm_reduc_to_comm){reference-type="ref" reference="thm_reduc_to_comm"} and Theorem [Theorem 42](#thm_lower){reference-type="ref" reference="thm_lower"}. ◻

# Applications and examples {#sec_applic}

If the maximal torsion-free abelian subgroup of a virtually abelian group $G$ has rank $m$, then Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"} says that $\mathop{\mathrm{RF}}_G$ equals $\log^k$ for some $1\leq k \leq m$. The following corollary tells us that this result is optimal, namely there exists such a group for every $k$ between $1$ and $m$. **Corollary 43**. *For every $m\geq 1$ and every $1 \leq k \leq m$, there exists a semidirect product $G = \mathbb{Z}^m \rtimes_\varphi H$ for a finite group $H$, such that $\mathop{\mathrm{RF}}_G$ equals $\log^k$. In particular, for every $k\in \mathbb{N}$, there exists a finitely generated, residually finite group $G$ such that $\mathop{\mathrm{RF}}_G$ equals $\log^k$.* *Proof.* Consider the permutation representation of the symmetric group $S_{k+1}$ on $\mathbb{Z}^{k+1}$. It is known that it decomposes into two absolutely irreducible representations over $\mathbb{Q}$ corresponding to the subspaces $\left\{(l,l, \dots, l) \in \mathbb{Q}^{k+1}\mid l\in \mathbb{Q} \right\}$ and $\left\{(l_1,l_2, \dots ,l_{k+1})\mid \sum_{i=1}^{k+1}l_i = 0 \right\}$, see for example [@serre1977linear exercise 2.6]. Therefore, there exists an absolutely irreducible representation $\varphi_k: S_{k+1} \to \mathop{\mathrm{GL}}(k, \mathbb{Z})$. The result follows by taking $G$ to be $\mathbb{Z}^m\rtimes_\varphi S_{k+1}$, where $\varphi = \varphi_k \times \mathop{\mathrm{Id}}_{m-k}$.
◻ Since virtually abelian groups are quasi-isometric if and only if their maximal torsion-free abelian subgroups have the same rank, the following result is now evident: **Corollary 44**. *Residual finiteness growth is not a quasi-isometric invariant in the class of virtually abelian groups. In fact, if $G$ is a virtually abelian group with torsion-free abelian subgroup of rank at least two, then there exists a group $G^\prime$ quasi-isometric to $G$ with $\mathop{\mathrm{RF}}_G \not \approx \mathop{\mathrm{RF}}_{G^\prime}$.* We also wish to remark that the power $k$ can be effectively computed from the character table of the finite group $H$. To illustrate this point, let us recall some notions. For further details, we refer to [@isaacs2006character Chapter 2]. **Definition 45**. Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{C})$ be a complex representation of a finite group $H$. The character afforded by $\varphi$, denoted by $\chi_\varphi$, is the function $$\begin{aligned} \chi_\varphi: H &\to \mathbb{C}\\ h &\mapsto \text{Trace}(\varphi(h)).\end{aligned}$$ Here $\varphi(h)$ is represented as a matrix with respect to any basis of $\mathbb{C}^m$. Note that $\chi_\varphi(e_H)$ must equal the dimension $m$ of $\varphi$. A finite group $H$ admits only finitely many non-equivalent irreducible representations, say $\varphi_1$ to $\varphi_r$. Let $\chi_1$ to $\chi_r$ denote the corresponding characters, called irreducible characters. If $\zeta_1$ and $\zeta_2$ are two characters of $H$, then we can define the following inner product: **Definition 46**.
$$\langle \zeta_1, \zeta_2\rangle = \frac{1}{\vert H \vert} \sum_{h\in H} \zeta_1(h)\overline{\zeta_2(h)} \in \mathbb{C}.$$ With respect to this inner product, the irreducible characters form an orthonormal basis in the sense that $\langle \chi_i, \chi_j \rangle = \delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta, and if $\chi_\varphi$ is any character, then $$\chi_\varphi = \sum_{i=1}^r\langle \chi_\varphi , \chi_i \rangle \chi_i.$$ In terms of the original representation, this means that $\varphi_i$ occurs $\langle \chi_\varphi, \chi_i\rangle$ times as a subrepresentation of $\varphi$. We obtain the following theorem: **Theorem 47**. *Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be a representation of a finite group $H$. Then $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^m}}$ equals $\log^k$ with $k = \max\left\{\chi_i(e_H)\mid 1\leq i\leq r: \langle \chi_\varphi, \chi_i \rangle \neq 0 \right\}$, where $\chi_1$ to $\chi_r$ are the irreducible characters of $H$.* *Proof.* If $\langle \chi_\varphi, \chi_i \rangle \neq 0$, then the irreducible representation $\varphi_i$ is a subrepresentation of $\varphi$. Its dimension is $\chi_i(e_H)$. Hence, by Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"}, the maximum of the set $\left\{\chi_i(e_H)\mid 1\leq i\leq r: \langle \chi_\varphi, \chi_i \rangle \neq 0 \right\}$ is precisely the power of the logarithm. ◻ If $h_1$ and $h_2$ are conjugate and $\chi_\varphi$ is a character, then $\chi_\varphi(h_1) = \chi_\varphi(h_2)$. Therefore, characters are fully determined by their values on the conjugacy classes $C_1$ to $C_t$. **Definition 48**. The character table of a finite group $H$ is the matrix $(\chi_i(C_j))_{1\leq i\leq r, 1 \leq j \leq t}$ consisting of the value of every irreducible character per conjugacy class. Usually, the characters are ordered according to increasing dimension. **Example 49**.
The character table of $D_4 = \langle a, b\mid a^4=b^2=1, bab = a^{-1}\rangle$, the dihedral group on $8$ elements, is given in Table [1](#table_D4){reference-type="ref" reference="table_D4"}. There are five irreducible characters. Suppose we are given the representation $\varphi: D_4 \to \mathop{\mathrm{GL}}(3, \mathbb{Z})$ with $$\begin{aligned} \varphi(a) &= \begin{pmatrix}-1&1&-1\\-2&0&-1\\2&-1&2 \end{pmatrix} \\ \varphi(b) &= \begin{pmatrix}-3&0&-2\\0&1&0\\4&0&3\end{pmatrix}. \end{aligned}$$ The values of its character $\chi_\varphi$ have been added in Table [1](#table_D4){reference-type="ref" reference="table_D4"}. Using those values, we can compute $\langle \chi_\varphi, \chi_5\rangle$: $$\langle \chi_\varphi, \chi_5\rangle = \frac{1}{8}\left( 1\cdot (3\cdot 2) + 2\cdot (1\cdot 0) + 1\cdot (-1\cdot (-2)) + 2\cdot (1\cdot 0) + 2\cdot (1 \cdot 0)\right) = 1.$$ Hence, the two-dimensional irreducible representation corresponding to $\chi_5$ occurs in the decomposition of $\varphi$. Therefore, $\mathop{\mathrm{RF}}_{\mathbb{Z}^3, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^3}}$ equals $\log^2$. (In fact, one can verify that $\chi_\varphi = \chi_1 + \chi_5$.)

                 $\{1\}$   $\{a, a^3\}$   $\{a^2\}$   $\{ab, a^3b\}$   $\{b, a^2b\}$
---------------- --------- -------------- ----------- ---------------- ---------------
$\chi_1$         1         1              1           1                1
$\chi_2$         1         -1             1           -1               1
$\chi_3$         1         1              1           -1               -1
$\chi_4$         1         -1             1           1                -1
$\chi_5$         2         0              -2          0                0
$\chi_\varphi$   3         1              -1          1                1

: The character table of $D_4$ and the values of the character $\chi_\varphi$ of Example [Example 49](#ex_D4){reference-type="ref" reference="ex_D4"}. [\[table_D4\]]{#table_D4 label="table_D4"}

**Corollary 50**. *Let $\varphi: H \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ be a representation of a finite group $H$.
Then $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^m}}$ equals $\log$ if and only if $\varphi(H) \leq \mathop{\mathrm{GL}}(m, \mathbb{Z})$ is an abelian group.* *Proof.* It is clear that $\mathop{\mathrm{Inv}}(\varphi)$ equals $\mathop{\mathrm{Inv}}(\psi)$ with $\psi: \varphi(H) \to \mathop{\mathrm{GL}}(m, \mathbb{Z})$ the natural inclusion map. Hence, $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\varphi), B_{\mathbb{Z}^m}}$ equals $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\psi), B_{\mathbb{Z}^m}}$. By [@isaacs2006character Corollary 2.6], a finite group is abelian if and only if all its irreducible characters are one-dimensional. Thus, if $\varphi(H)$ is abelian, then its irreducible characters are one-dimensional, and therefore Theorem [Theorem 47](#thm_character_table){reference-type="ref" reference="thm_character_table"} shows that its residual finiteness growth is $\log$. Conversely, if $\mathop{\mathrm{RF}}_{\mathbb{Z}^m, \mathop{\mathrm{Inv}}(\psi), B_{\mathbb{Z}^m}}$ is $\log$, then the irreducible subrepresentations of $\psi^\mathbb{C}$ are one-dimensional and thus $\varphi(H)$ must be abelian, as it is conjugate to a subgroup of diagonal matrices. ◻ # Open questions {#sec_open_questions} Our main result, namely Theorem [Theorem 2](#thm_main){reference-type="ref" reference="thm_main"}, describes the residual finiteness growth $\mathop{\mathrm{RF}}_G$ for finitely generated virtually abelian groups $G$. Hence, it forms the first step in determining $\mathop{\mathrm{RF}}_G$ for all finitely generated virtually nilpotent groups, a question arising from previous work in [@bou2011asymptotic] that is still open in full generality, see [@survey2022]. However, the exact nature of our work raises related problems in determining the residual finiteness growth.
In Corollary [Corollary 44](#cor:QI){reference-type="ref" reference="cor:QI"}, we concluded that $\mathop{\mathrm{RF}}_G$ is not a quasi-isometric invariant for virtually abelian groups $G$ in a very strong sense, although it (trivially) is one in the class of abelian groups. It is currently unknown whether the residual finiteness growth is a quasi-isometric invariant for finitely generated nilpotent groups. **Question 2**. Let $G_1$ and $G_2$ be finitely generated nilpotent groups that are quasi-isometric. Does it hold that $\mathop{\mathrm{RF}}_{G_1} \approx \mathop{\mathrm{RF}}_{G_2}$? Another interesting fact following from our main result is that $\mathop{\mathrm{RF}}_G$ only depends on properties of the representation over the complex numbers. Recall that every finitely generated torsion-free nilpotent group $G$ has a unique Mal'cev completion $G^\mathbb{F}$ over any field $\mathbb{F}$ of characteristic zero (see [@sega83-1] for more details). Motivated by the virtually abelian groups, we wonder whether $\mathop{\mathrm{RF}}_G$ only depends on $G^\mathbb{C}$. **Question 3**. Let $G_1$ and $G_2$ be finitely generated torsion-free nilpotent groups with isomorphic complex Mal'cev completions $G_1^\mathbb{C}\cong G_2^\mathbb{C}$. Does it hold that $\mathop{\mathrm{RF}}_{G_1} \approx \mathop{\mathrm{RF}}_{G_2}$? Both questions are related in the following sense. If $G_1$ and $G_2$ are quasi-isometric nilpotent groups, then it is conjectured that their real Mal'cev completions $G_1^{\mathbb{R}}$ and $G_2^{\mathbb{R}}$ are isomorphic (it is even conjectured that the converse holds as well), and in particular so are their complex Mal'cev completions. Hence the latter question is a stronger version of the first one if the conjecture holds.
Even more generally, we could study the same question for virtually nilpotent groups, where we also have a representation of a finite group $H \to \mathop{\mathrm{Aut}}(G^\mathbb{C})$ into the automorphisms of the complex Mal'cev completion, for example as described in the rational case in [@dere22-1 Section 4.1.]. [^1]: KU Leuven Campus Kulak Kortrijk, Department of Mathematics, Research unit 'Algebraic Topology and Group Theory', B-8560 Kortrijk, Belgium. The authors were supported by Internal Funds KU Leuven (project number 3E220559).
--- abstract: | For every nonnegative integer $n$, let $r_F(n)$ be the number of ways to write $n$ as a sum of Fibonacci numbers, where the order of the summands does not matter. Moreover, for all positive integers $p$ and $N$, let $$S_{F}^{(p)}(N) := \sum_{n = 0}^{N - 1} \big(r_F(n)\big)^p .$$ Chow, Jones, and Slattery determined the order of growth of $S_{F}^{(p)}(N)$ for $p \in \{1,2\}$. We prove that, for all positive integers $p$, there exists a real number $\lambda_p > 1$ such that $$S^{(p)}_F(N) \asymp_p N^{(\log \lambda_p) /\!\log \varphi}$$ as $N \to +\infty$. Furthermore, we show that $$\lim_{p \to +\infty} \lambda_p^{1/p} = \varphi^{1/2} ,$$ where $\varphi := (1 + \sqrt{5})/2$ is the golden ratio. Our proofs employ automata theory and a result on the generalized spectral radius due to Blondel and Nesterov. author: - Carlo Sanna$^\dagger$ title: | A note on the power sums of\ the number of Fibonacci partitions --- # Introduction For every nonnegative integer $n$, let $r_F(n)$ be the number of ways to write $n$ as a sum of Fibonacci numbers, where the order of the summands does not matter. Formulas for $r_F(n)$ were proved by Carlitz [@MR0236094], Robbins [@MR1394758], Berstel [@MR1922290], Ardila [@MR2093874], Weinstein [@MR3499711], and Chow--Slattery [@MR4235264]. For all positive integers $p$ and $N$, define the power sum $$S_{F}^{(p)}(N) := \sum_{n = 0}^{N - 1} \big(r_F(n)\big)^p .$$ Chow and Slattery [@MR4235264] proved a recursive formula for $S_{F}^{(1)}(N)$, and obtained that $$\label{equ:SF1asymp} S_F^{(1)}(N) \asymp N^{(\log 2) /\! \log \varphi}$$ as $N \to +\infty$, where $\varphi := (1 + \sqrt{5}) / 2$ is the golden ratio. Moreover, they showed that the limit $$\lim_{N \to +\infty} \frac{S_F^{(1)}(N)}{N^{(\log 2) /\! \log \varphi}}$$ does not exist. Limits of this kind were then studied in a far more general context by Zhou [@preprintZhou23].
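For small $n$, the quantity $r_F(n)$ can be computed by brute force. The sketch below is only an illustration, not part of the paper's arguments; it assumes, consistently with the binary-word encoding used later in the preliminaries, that the summands are distinct terms of the shifted Fibonacci sequence $1, 2, 3, 5, 8, \dots$

```python
def fibs_up_to(n):
    # Strictly increasing Fibonacci numbers f_1 = 1, f_2 = 2, f_3 = 3, ...
    fs, a, b = [], 1, 2
    while a <= n:
        fs.append(a)
        a, b = b, a + b
    return fs

def r_F(n):
    # Number of ways to write n as a sum of distinct Fibonacci numbers
    # (order of the summands does not matter), via subset-sum counting.
    ways = [0] * (n + 1)
    ways[0] = 1
    for f in fibs_up_to(n):
        for t in range(n, f - 1, -1):
            ways[t] += ways[t - f]
    return ways[n]

print([r_F(n) for n in range(13)])  # [1, 1, 1, 2, 1, 2, 2, 1, 3, 2, 2, 3, 1]
```

Summing $p$th powers of these values over $n < N$ then gives $S_F^{(p)}(N)$.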
Chow and Jones [@preprintCJ23] proved an inhomogeneous linear recurrence for a subsequence of $S_F^{(2)}(N)$ and showed that $$\label{equ:SF2asymp} S_F^{(2)}(N) \asymp N^{(\log \lambda_2) /\! \log \varphi}$$ as $N \to +\infty$, where $\lambda_2 = 2.48\dots$ is the greatest root of the polynomial $X^3 - 2X^2 - 2X + 2$. Our first result is the following. **Theorem 1**. *Let $p$ be a positive integer. Then there exists a real number $\lambda_p > 1$ such that $$\label{equ:main-asymp} S^{(p)}_F(N) \asymp_p N^{(\log \lambda_p) /\!\log \varphi}$$ as $N \to +\infty$.* While the proofs of [\[equ:SF1asymp\]](#equ:SF1asymp){reference-type="eqref" reference="equ:SF1asymp"} and [\[equ:SF2asymp\]](#equ:SF2asymp){reference-type="eqref" reference="equ:SF2asymp"} given by Chow, Jones, and Slattery rely on counting arguments and inequalities, the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is based on automata theory. We remark that Berstel [@MR1922290] and Shallit [@MR4195833] had already employed automata theory to study $r_F(n)$. The proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} provides an effective algorithm to compute $\lambda_p$. Precisely, we have that $\lambda_p$ is the greatest real root of an effectively computable monic polynomial with integer coefficients. In particular, it follows that $\lambda_p$ is an algebraic integer. Table [1](#tab:lambda-p){reference-type="ref" reference="tab:lambda-p"} provides the first values of $\lambda_p$ and their minimal polynomials over $\mathbb{Q}$. 
 $p$   $\lambda_p$      minimal polynomial of $\lambda_p$
----- ---------------- --------------------------------------------------------------------------------------------------
 $1$   $2.00000\dots$   $X - 2$
 $2$   $2.48119\dots$   $X^{3} - 2 X^{2} - 2 X + 2$
 $3$   $3.08613\dots$   $X^{3} - 2 X^{2} - 4 X + 2$
 $4$   $3.84606\dots$   $X^{5} - 2 X^{4} - 7 X^{3} - 2 X + 2$
 $5$   $4.80052\dots$   $X^{5} - 2 X^{4} - 11 X^{3} - 8 X^{2} - 20 X + 10$
 $6$   $5.99942\dots$   $X^{7} - 2 X^{6} - 17 X^{5} - 28 X^{4} - 88 X^{3} + 26 X^{2} - 4 X + 4$
 $7$   $7.50569\dots$   $X^{7} - 2 X^{6} - 26 X^{5} - 74 X^{4} - 311 X^{3} + 34 X^{2} - 84 X + 42$
 $8$   $9.39867\dots$   $X^{9} - 2 X^{8} - 40 X^{7} - 174 X^{6} - 969 X^{5} - 2 X^{4} - 428 X^{3} + 174 X^{2} - 4 X + 4$

: First values of $\lambda_p$ and their minimal polynomials over $\mathbb{Q}$. [\[tab:lambda-p\]]{#tab:lambda-p label="tab:lambda-p"}

Our second result regards the growth of $\lambda_p$.

**Theorem 2**. *We have that $$\lim_{p \to +\infty} \lambda_p^{1/p} = \varphi^{1/2} .$$*

We remark that the methods used in the proofs of Theorems [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} and [Theorem 2](#thm:lambda-p){reference-type="ref" reference="thm:lambda-p"} can be adapted to prove similar results, where the sequence of Fibonacci numbers is replaced by other linear recurrences, such as the sequence of Lucas numbers.

## Notation {#notation .unnumbered}

We employ the standard asymptotic notations. Precisely, given two functions $f, g \colon \mathbb{N} \to \mathbb{R}$, we write $f(n) \sim g(n)$ and $f(n) \asymp g(n)$ to mean that $$\label{equ:liminfsup} \lim_{n \to +\infty} \frac{f(n)}{g(n)} = 1 \quad\text{and}\quad 0 < \liminf_{n \to +\infty} \frac{f(n)}{g(n)} \leq \limsup_{n \to +\infty} \frac{f(n)}{g(n)} < +\infty ,$$ respectively. We add parameters as subscripts of the symbol "$\asymp$" if the values of the limit inferior and limit superior in [\[equ:liminfsup\]](#equ:liminfsup){reference-type="eqref" reference="equ:liminfsup"} may depend on such parameters. Other notations are introduced when first used.
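Before turning to the proofs, the data in Table [1](#tab:lambda-p){reference-type="ref" reference="tab:lambda-p"} can be sanity-checked numerically. The sketch below is a verification aid only (it does not replace the exact computation of $\lambda_p$): it confirms a sign change of each listed polynomial in a small window around the reported decimal, certifying a nearby simple real root, and prints $\lambda_p^{1/p}$, which approaches $\varphi^{1/2} = 1.2720\dots$ rather slowly, consistently with Theorem 2.

```python
import math

# (p, reported lambda_p, coefficients of its minimal polynomial, degree descending),
# copied from Table 1.
TABLE = [
    (1, 2.00000, [1, -2]),
    (2, 2.48119, [1, -2, -2, 2]),
    (3, 3.08613, [1, -2, -4, 2]),
    (4, 3.84606, [1, -2, -7, 0, -2, 2]),
    (5, 4.80052, [1, -2, -11, -8, -20, 10]),
    (6, 5.99942, [1, -2, -17, -28, -88, 26, -4, 4]),
    (7, 7.50569, [1, -2, -26, -74, -311, 34, -84, 42]),
    (8, 9.39867, [1, -2, -40, -174, -969, -2, -428, 174, -4, 4]),
]

def horner(coeffs, x):
    # Evaluate a polynomial given by its coefficient list at x.
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

for p, lam, coeffs in TABLE:
    # A sign change in a small window certifies a nearby simple real root.
    assert horner(coeffs, lam - 1e-3) * horner(coeffs, lam + 1e-3) < 0
    print(p, lam ** (1.0 / p))  # tends (slowly) to phi**0.5 = 1.27201...
```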
# Preliminaries

In this section, we collect some preliminary results needed later. Sections [2.1](#sec:zeck){reference-type="ref" reference="sec:zeck"} and [2.2](#sec:berstel){reference-type="ref" reference="sec:berstel"} follow most of the notation of Shallit [@MR4195833 Section 3].

## Fibonacci representation {#sec:zeck}

Let $(f_n)$ be the sequence of Fibonacci numbers, recursively defined by $f_1 := 1$, $f_2 := 2$, and $f_{n + 2} = f_{n + 1} + f_n$ for every positive integer $n$. Note that, differently from the usual definition, we shifted the indices so that the sequence of Fibonacci numbers is strictly increasing, which is more convenient for our purposes. For every binary word $x = x_1 \cdots x_\ell$ of $\ell$ bits $x_1, \dots, x_\ell \in \{0,1\}$, let $[x]_F := \sum_{i=1}^\ell x_i f_{\ell - i + 1}$. Moreover, let $C_F$ be the set of binary words having no consecutive $1$'s and not starting with $0$. It is well known that every nonnegative integer can be written in a unique way as a sum of distinct nonconsecutive Fibonacci numbers [@MR0308032]. Hence, for every integer $n \geq 0$ there exists a unique $y \in C_F$ such that $n = [y]_F$. The word $y$ is called the *canonical Fibonacci representation* of $n$.

## Berstel's automaton {#sec:berstel}

For binary words $x=x_1 \cdots x_\ell$ and $y = y_1 \cdots y_\ell$ of the same length, define $x \times y$ to be the word of pairs $(x_1, y_1) \cdots (x_\ell, y_\ell)$. Berstel [@MR1922290] constructed a deterministic finite automaton $\mathcal{B}$ that takes as input $x \times y$ and accepts if and only if $$[x]_F = [y]_F \quad\text{and}\quad y \in 0^* C_F .$$ Berstel's automaton has $4$ states, which we call $a,b,c,d$, and is depicted in Figure [1](#fig:berstel-automaton){reference-type="ref" reference="fig:berstel-automaton"}.
![Berstel's automaton $\mathcal{B}$.](berstel-automaton.pdf){#fig:berstel-automaton} ## Irreducible aperiodic automata We need to recall some terminology related to the Perron--Frobenius theorem and its application to automata theory (cf. [@MR2483235  Section V.5.2]). Let $M$ be an $n \times n$ matrix with nonnegative real entries. The *dependency graph* of $M$ is the (directed) graph having vertex set $\{1, \dots, n\}$ and edge set $\{i \to j : M_{i,j} \neq 0\}$. The matrix $M$ is *irreducible* if its dependency graph is strongly connected (that is, any two vertices are connected by a directed path). A strongly connected graph is *aperiodic* if its vertex set $V$ cannot be partitioned into disjoint sets $V_0, \dots, V_{m - 1}$, with $m > 1$, such that each edge with source in $V_i$ has destination in $V_{i + 1 \bmod m}$. The matrix $M$ is *aperiodic* if its dependency graph is aperiodic. As a consequence of the Perron--Frobenius theorem [@MR2483235 V.34], if the matrix $M$ is irreducible and aperiodic, then the spectrum of $M$ contains a unique eigenvalue $\lambda$ of maximum absolute value, which is called the *Perron--Frobenius eigenvalue* of $M$. Moreover, the eigenvalue $\lambda$ is real and simple. In particular, we have that $\lambda$ is equal to the spectral radius of $M$. **Theorem 3**. *Let $\mathcal{A}$ be a deterministic finite automaton whose graph is strongly connected and aperiodic. Then the number $A_\ell$ of words of length $\ell$ that are accepted by $\mathcal{A}$ satisfies $$A_\ell \sim c \lambda^\ell$$ as $\ell \to +\infty$, where $c > 0$ is a real number and $\lambda$ is the Perron--Frobenius eigenvalue of the transition matrix of $\mathcal{A}$.* *Proof.* See [@MR2483235 Proposition V.7]. ◻ ## Generalized spectral radius Let $M$ be a square matrix over $\mathbb{C}$. We let $\rho(M)$ denote the *spectral radius* of $M$, that is, the maximum of the absolute values of the eigenvalues of $M$. 
Let $\Sigma$ be a finite set of $n \times n$ matrices over $\mathbb{C}$. Daubechies and Lagarias [@MR1142737 p. 235] defined the *generalized spectral radius* of $\Sigma$ as $$\rho(\Sigma) := \limsup_{k \to +\infty} \big(\rho_k(\Sigma)\big)^{1/k} ,$$ where $$\rho_k(\Sigma) := \max\!\left\{ \,\rho\!\left(\prod_{i=1}^k M_i \right) : M_1, \dots, M_k \in \Sigma \right\} .$$ Note that, if $\Sigma$ contains a single matrix $M$, then $\rho(\Sigma) = \rho(M)$. If $\Sigma = \{M_1, \dots, M_h\}$, we also write $\rho(M_1, \dots, M_h) := \rho(\Sigma)$. For every matrix $M$ over $\mathbb{C}$ and for every positive integer $k$, let $$M^{\otimes k} := \underbrace{M \otimes \cdots \otimes M}_{\text{$k$ times $M$}} ,$$ where $\otimes$ denotes the Kronecker product of matrices. **Theorem 4**. *Let $M_1, \dots, M_h$ be $n \times n$ matrices with real nonnegative entries. Then $$\label{equ:limsup-kronecker} \lim_{k \to +\infty} \left(\rho\big(M_1^{\otimes k} + \cdots + M_h^{\otimes k}\big)\right)^{1/k} = \rho(M_1, \dots, M_h) .$$* *Proof.* This is a result of Blondel and Nesterov [@MR2176820]. Note that they stated [\[equ:limsup-kronecker\]](#equ:limsup-kronecker){reference-type="eqref" reference="equ:limsup-kronecker"} for the *joint spectral radius* instead of the generalized spectral radius. However, for finite sets of matrices the joint spectral radius and the generalized spectral radius are equal (see [@MR1152485 Theorem IV] or [@MR1334574 Theorem 1$^\prime$]). ◻ *Remark 1*. There exist matrices over $\mathbb{R}$ for which [\[equ:limsup-kronecker\]](#equ:limsup-kronecker){reference-type="eqref" reference="equ:limsup-kronecker"} is false (see [@MR2889254 last paragraph of p. 1532]). However, Xiao and Xu [@MR2889254] proved that if the limit is replaced by a limit superior then [\[equ:limsup-kronecker\]](#equ:limsup-kronecker){reference-type="eqref" reference="equ:limsup-kronecker"} holds for all matrices over $\mathbb{C}$. 
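To make the definition concrete, here is a small self-contained numerical sketch, not used in the arguments below. For the classical pair of nonnegative matrices $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, the generalized spectral radius is known to equal the golden ratio $\varphi$, and the supremum of $\big(\rho_k(\Sigma)\big)^{1/k}$ is already attained at $k = 2$ by the alternating product; the code brute-forces $\rho_k(\Sigma)$ over all $2^k$ products of length $k$.

```python
import itertools
import math

M1 = ((1, 1), (0, 1))
M2 = ((1, 0), (1, 1))

def mul(A, B):
    # Product of two 2x2 matrices stored as nested tuples.
    return tuple(
        tuple(sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2))
        for i in range(2)
    )

def spectral_radius(A):
    # 2x2 case: eigenvalues from trace and determinant.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det
    if disc < 0:  # complex conjugate pair, |lambda|^2 = det
        return math.sqrt(abs(det))
    r = math.sqrt(disc)
    return max(abs((tr + r) / 2), abs((tr - r) / 2))

def rho_k(k):
    # Maximum spectral radius over all products of length k.
    best = 0.0
    for word in itertools.product((M1, M2), repeat=k):
        P = word[0]
        for M in word[1:]:
            P = mul(P, M)
        best = max(best, spectral_radius(P))
    return best

approx = max(rho_k(k) ** (1.0 / k) for k in range(1, 7))
print(approx)  # 1.618033..., the golden ratio
```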
# Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} {#proof-of-theorem-thmmain} Let $p$ be a positive integer. In order to prove Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, we proceed as follows. For binary words $$x^{(1)} = x_1^{(1)}\cdots x_\ell^{(1)},\; \dots ,\; x^{(p)} = x_1^{(p)}\cdots x_\ell^{(p)},\; \text{ and }\; y = y_1\cdots y_\ell,$$ of the same length $\ell$, define $x^{(1)} \times \cdots \times x^{(p)} \times y$ to be the word of $(p+1)$-tuples $$(x^{(1)}_1, \dots, x^{(p)}_1, y_1) \cdots (x^{(1)}_\ell, \dots, x^{(p)}_\ell, y_\ell) .$$ First, we construct a deterministic finite automaton $\mathcal{A}_p$ that takes as input $x^{(1)} \times \cdots \times x^{(p)} \times y$ and accepts if and only if $$\label{equ:accept-p} \big[x^{(1)}\big]_F = \cdots = \big[x^{(p)}\big]_F = [y]_F \quad \text{and} \quad y \in 0^* C_F .$$ Hence, by the uniqueness of the canonical Fibonacci representation, it follows easily that the number of words of length $\ell$ that are accepted by $\mathcal{A}_p$ is equal to $S^{(p)}_F(f_{\ell+1})$. Second, we prove that the graph of $\mathcal{A}_p$ is strongly connected and aperiodic. Consequently, by Theorem [Theorem 3](#thm:perron-frobenius-automaton){reference-type="ref" reference="thm:perron-frobenius-automaton"}, we get that $$\label{equ:SpFfell1} S^{(p)}_F(f_{\ell+1}) \sim c_p \lambda_p^\ell ,$$ as $\ell \to +\infty$, where $c_p > 0$ is a real number and $\lambda_p$ is the Perron--Frobenius eigenvalue of the transition matrix of $\mathcal{A}_p$. Finally, for every positive integer $N$, let $\ell$ be the unique positive integer such that $$f_\ell \leq N < f_{\ell + 1} .$$ Hence, we have that $$\label{equ:squeeze} S^{(p)}_F(f_{\ell}) \leq S^{(p)}_F(N) < S^{(p)}_F(f_{\ell+1}) .$$ Moreover, the Binet formula yields that $N \asymp \varphi^\ell$. 
Consequently, from [\[equ:SpFfell1\]](#equ:SpFfell1){reference-type="eqref" reference="equ:SpFfell1"} and [\[equ:squeeze\]](#equ:squeeze){reference-type="eqref" reference="equ:squeeze"} we get that $$S^{(p)}_F(N) \asymp_p \lambda_p^\ell = \big(\varphi^\ell\big)^{(\log \lambda_p) / \! \log \varphi} \asymp_p N^{(\log \lambda_p) / \! \log \varphi} ,$$ as desired. Note that $\lambda_p > 1$, since otherwise [\[equ:main-asymp\]](#equ:main-asymp){reference-type="eqref" reference="equ:main-asymp"} could not be satisfied. It remains to construct $\mathcal{A}_p$ and to prove that its graph is strongly connected and aperiodic. The construction of $\mathcal{A}_p$ proceeds as follows. We begin by constructing a deterministic finite automaton $\mathcal{B}_p$ that consists of $p$ copies of Berstel's automaton $\mathcal{B}$ that run in parallel on inputs $(x^{(1)}, y)$, ..., $(x^{(p)}, y)$. Precisely, we define $\mathcal{B}_p$ as follows.

- The states of $\mathcal{B}_p$ are the elements of $\{a,b,c,d\}^p$. We write the states of $\mathcal{B}_p$ as $p$-tuples or as words of $p$ letters.
- The initial state of $\mathcal{B}_p$ is $a^p$.
- The accepting states of $\mathcal{B}_p$ are $a^p$ and $d^p$.
- There is a transition labeled $(x_1, \dots, x_p, y)$ from the state $(s_1, \dots, s_p)$ to the state $(s_1^\prime, \dots, s_p^\prime)$ of $\mathcal{B}_p$ if and only if for each $i \in \{1, \dots, p\}$ there is a transition labeled $(x_i, y)$ from the state $s_i$ to the state $s_i^\prime$ of $\mathcal{B}$.

By construction, and by the properties of Berstel's automaton $\mathcal{B}$, it follows easily that $\mathcal{B}_p$ accepts the input $x^{(1)} \times \cdots \times x^{(p)} \times y$ if and only if [\[equ:accept-p\]](#equ:accept-p){reference-type="eqref" reference="equ:accept-p"} holds. Note that $\mathcal{B}_p$ has $4^p$ states. We shall prove that (for large $p$) most of these states are not accessible.
Precisely, we shall prove that the set of accessible states of $\mathcal{B}_p$ is equal to $$\mathcal{S}_p := \{a, b\}^p \cup \{a, c\}^p \cup \{b, d\}^p .$$ For all $s_1, s_2 \in \{a, b, c, d\}$, put $\{s_1, s_2\}^{\bullet p} := \{s_1, s_2\}^p \setminus \{s_1^p, s_2^p\}$. The following claims on the states and the transitions of $\mathcal{B}_p$ can be easily checked. 1. [\[ite:t1\]]{#ite:t1 label="ite:t1"} The transitions starting from $a^p$ are of the following two kinds. 1. A unique transition labeled $(0, \dots, 0)$ and going to $a^p$ itself. 2. Transitions labeled $(x_1, \dots, x_p, 1)$ and sending $a^p$ to the state $s = (s_1, \dots, s_p) \in \{b, d\}^p$, where each $x_i \in \{0, 1\}$ is arbitrary, $s_i = b$ if $x_i = 0$, and $s_i = d$ if $x_i = 1$. 2. [\[ite:t2\]]{#ite:t2 label="ite:t2"} There is only one transition starting from $b^p$. This transition goes to $c^p$. 3. [\[ite:t3\]]{#ite:t3 label="ite:t3"} The transitions starting from $c^p$ are of the following three kinds. 1. Transitions labeled $(x_1, \dots, x_p, 0)$ and sending $c^p$ to the state $(s_1, \dots, s_p) \in \{a, b\}^{\bullet p}$, where $x_1, \dots, x_p \in \{0,1\}$ are arbitrary but not all equal, $s_i = b$ if $x_i = 0$, and $s_i = a$ if $x_i = 1$. 2. A unique transition with labels $(0, \dots, 0)$ and $(1, \dots, 1)$ that goes to $b^p$. 3. A unique transition labeled $(1, \dots, 1, 0)$ that goes to $a^p$. 4. [\[ite:t4\]]{#ite:t4 label="ite:t4"} There is only one transition starting from $d^p$. This transition goes to $a^p$. 5. [\[ite:t5\]]{#ite:t5 label="ite:t5"} For each state $s = (s_1, \dots, s_p) \in \{a, b\}^{\bullet p}$ there is a unique transition starting from $s$. This transition is labeled $(x_1, \dots, x_p, 0)$ and sends $s$ to the state $(s_1^\prime, \dots, s_p^\prime) \in \{a, c\}^{\bullet p}$, where $x_i = 0$ and $s_i^\prime = a$ if $s_i = a$, while $x_i = 1$ and $s_i^\prime = c$ if $s_i = b$. 6. 
[\[ite:t6\]]{#ite:t6 label="ite:t6"} For each state $s = (s_1, \dots, s_p) \in \{a, c\}^{\bullet p}$ the transitions starting from $s$ are of the following two kinds. 1. Transitions labeled $(x_1, \dots, x_p, 0)$ and sending $s$ to the state $(s_1^\prime, \dots, s_p^\prime) \in \{a, b\}^p$, where if $s_i = a$ then $x_i = 0$ and $s_i^\prime = a$; while if $s_i = c$ then $x_i = 0$ and $s_i^\prime = b$, or $x_i = 1$ and $s_i^\prime = a$. 2. Transitions labeled $(x_1, \dots, x_p, 1)$ and sending $s$ to the state $(s_1^\prime, \dots, s_p^\prime) \in \{b, d\}^p$, where if $s_i = a$ then $x_i = 0$ and $s_i^\prime = b$, or $x_i = 1$ and $s_i^\prime = d$; while if $s_i = c$ then $x_i = 1$ and $s_i^\prime = b$. 7. [\[ite:t7\]]{#ite:t7 label="ite:t7"} For each state $s = (s_1, \dots, s_p) \in \{b, d\}^{\bullet p}$ there is a unique transition starting from $s$. This transition is labeled $(x_1, \dots, x_p, 0)$ and sends $s$ to the state $(s_1^\prime, \dots, s_p^\prime) \in \{a, c\}^{\bullet p}$, where $x_i = 0$ and $s_i^\prime = a$ if $s_i = d$, while $x_i = 1$ and $s_i^\prime = c$ if $s_i = b$. The transitions described in [\[ite:t1\]](#ite:t1){reference-type="ref" reference="ite:t1"}--[\[ite:t7\]](#ite:t7){reference-type="ref" reference="ite:t7"} are depicted in Figure [2](#fig:transitions-schematic){reference-type="ref" reference="fig:transitions-schematic"}. ![Transitions from states in $\mathcal{S}_p$.](transitions-schematic.pdf){#fig:transitions-schematic} From [\[ite:t1\]](#ite:t1){reference-type="ref" reference="ite:t1"}--[\[ite:t7\]](#ite:t7){reference-type="ref" reference="ite:t7"}, we get that each transition starting from a state in $\mathcal{S}_p$ goes to a state in $\mathcal{S}_p$. Since $\mathcal{S}_p$ contains the initial state of $\mathcal{B}_p$, it follows that the set of accessible states of $\mathcal{B}_p$ is a subset of $\mathcal{S}_p$.
Furthermore, from [\[ite:t1\]](#ite:t1){reference-type="ref" reference="ite:t1"}--[\[ite:t7\]](#ite:t7){reference-type="ref" reference="ite:t7"} it follows easily that the graph of $\mathcal{B}_p$ restricted to the states in $\mathcal{S}_p$ is strongly connected. Hence, the set of accessible states of $\mathcal{B}_p$ is equal to $\mathcal{S}_p$, as previously claimed. At this point, we define $\mathcal{A}_p$ as the deterministic finite automaton obtained by removing from $\mathcal{B}_p$ the states that are not accessible. For example, the automaton $\mathcal{A}_2$ is depicted in Figure [3](#fig:berstel2-automaton){reference-type="ref" reference="fig:berstel2-automaton"}. We already observed that the graph of $\mathcal{A}_p$ is strongly connected. Moreover, it follows easily that the graph of $\mathcal{A}_p$ is aperiodic, since the initial state has a loop. The proof is complete. *Remark 2*. The automaton $\mathcal{A}_p$ has $3 \cdot 2^p - 2$ states but (for large $p$) many of these states are nondistinguishable. By merging the nondistinguishable states of $\mathcal{A}_p$, it is possible to obtain a minimized automaton with only $2^{p+1}$ states. # Proof of Theorem [Theorem 2](#thm:lambda-p){reference-type="ref" reference="thm:lambda-p"} {#proof-of-theorem-thmlambda-p} We let $\mathcal{A}_p$, $\mathcal{B}_p$, $\mathcal{S}_p$, and $\lambda_p$ be defined as in the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}. In particular, recall that $\mathcal{S}_p$ is the set of accessible states of $\mathcal{B}_p$, and that $\lambda_p$ is the spectral radius of the transition matrix of $\mathcal{A}_p$. Let $\mathcal{S}_p^\prime$, respectively $\mathcal{S}_p^{\prime\prime}$, be the set of nonaccessible states $s = (s_1, \dots, s_p)$ of $\mathcal{B}_p$ such that $d$ does not belong, respectively belongs, to $\{s_1, \dots, s_p\}$. Note that $\mathcal{S}_p, \mathcal{S}_p^\prime, \mathcal{S}_p^{\prime\prime}$ is a partition of $\{a, b, c, d\}^p$. 
The following claims on the transitions of $\mathcal{B}_p$ can be easily checked. 1. [\[ite:t8\]]{#ite:t8 label="ite:t8"} There is no transition from a state in $\mathcal{S}_p$ to a state in $\mathcal{S}_p^\prime \cup \mathcal{S}_p^{\prime\prime}$. 2. [\[ite:t9\]]{#ite:t9 label="ite:t9"} For each state $s \in \mathcal{S}_p^\prime$ the transitions starting from $s$ are of the following two kinds. 1. Transitions sending $s$ to a state $s^\prime \notin \mathcal{S}_p^{\prime\prime}$ (not necessarily belonging to $\mathcal{S}_p^\prime$) such that $s^\prime$ contains more $a$'s than $s$. 2. [\[ite:t9ii\]]{#ite:t9ii label="ite:t9ii"} Transitions sending $s = (s_1, \dots, s_p)$ to a state $s^\prime = (s_1^\prime, \dots, s_p^\prime) \in \mathcal{S}_p^\prime$ such that $s^\prime_i$ is equal to $a$, $b$, or $c$ if $s_i$ is equal to $a$, $c$, or $b$, respectively. 3. [\[ite:t10\]]{#ite:t10 label="ite:t10"} No transition starting from a state in $\mathcal{S}_p^{\prime\prime}$ has destination in $\mathcal{S}_p^{\prime\prime}$. We sort the states of $\mathcal{B}_p$ according to the following rules. 1. [\[ite:ord1\]]{#ite:ord1 label="ite:ord1"} The states in $\mathcal{S}_p$ come first (in whatever order). 2. [\[ite:ord2\]]{#ite:ord2 label="ite:ord2"} Then the states in $\mathcal{S}_p^\prime$ follow, sorted so that: 1. the number of $a$'s in each state is less than or equal to the number of $a$'s in the previous state; 2. states connected by a transition in [\[ite:t9ii\]](#ite:t9ii){reference-type="ref" reference="ite:t9ii"} are consecutive. 3. [\[ite:ord3\]]{#ite:ord3 label="ite:ord3"} The states in $\mathcal{S}_p^{\prime\prime}$ come last. Let $T_p$ and $U_p$ be the transition matrices of $\mathcal{A}_p$ and $\mathcal{B}_p$, respectively, according to the ordering of states defined by [\[ite:ord1\]](#ite:ord1){reference-type="ref" reference="ite:ord1"}--[\[ite:ord3\]](#ite:ord3){reference-type="ref" reference="ite:ord3"}.
In light of [\[ite:t8\]](#ite:t8){reference-type="ref" reference="ite:t8"}--[\[ite:t10\]](#ite:t10){reference-type="ref" reference="ite:t10"}, and since $\mathcal{A}_p$ is obtained from $\mathcal{B}_p$ by removing the states that are not in $\mathcal{S}_p$, we get that $$U_p = \begin{pmatrix}\begin{array}{c|c|c} T_p & \mathbf{0} & \mathbf{0} \\[2pt] \hline \rule{0pt}{\normalbaselineskip} * & T_p^\prime & \mathbf{0} \\[2pt] \hline \rule{0pt}{\normalbaselineskip} * & * & \mathbf{0} \end{array}\end{pmatrix}, \quad\text{ with }\quad T_p^\prime := \begin{pmatrix}\begin{array}{c|c|c|c|c} \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\ \hline \rule{0pt}{1.7em} * & \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} & \mathbf{0} & \,\cdots & \mathbf{0} \\ \hline \rule{0pt}{1.7em} * & * & \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} & \,\cdots & \mathbf{0} \\ \hline \rule{0pt}{2em} \vdots & \vdots & \vdots & \,\ddots & \vdots \\[5pt] \hline \rule{0pt}{1.7em} * & * & * & \,\cdots & \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \end{array}\end{pmatrix} ,$$ where the $\mathbf{0}$'s are zero matrices, and the $\mathbf{*}$'s denote matrices of the right sizes. Therefore, the spectrum of $U_p$ is equal to the union of $\{-1, 0, +1\}$ and the spectrum of $T_p$. Recalling that $\lambda_p > 1$, we get that $\rho(U_p) = \rho(T_p) = \lambda_p$. ![The automaton $\mathcal{A}_2$.](berstel2-automaton.pdf){#fig:berstel2-automaton} For each $y \in \{0, 1\}$, let $V_y$ be the $4 \times 4$ matrix whose entry of the $i$th row and $j$th column is equal to the number of transitions labeled $(*,y)$ (where $*$ is any symbol) that go from the $i$th state to the $j$th state of the Berstel automaton $\mathcal{B}$. 
Explicitly, we have that $$V_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{pmatrix} \quad \text{and} \quad V_1 = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} .$$ The next lemma provides the generalized spectral radius of $V_0$ and $V_1$. **Lemma 5**. *We have that $\rho(V_0, V_1) = \varphi^{1/2}$.* *Proof.* For every binary word $x = x_1 \cdots x_k$, let $V_x := V_{x_1} \cdots V_{x_k}$. For all positive integers $h$, we have that $$\big(\rho(V_{(0001)^h})\big)^{1/(4h)} = \big(\rho(V_{0001}^h)\big)^{1/(4h)} = \big(\rho(V_{0001})^h\big)^{1/(4h)} = \big(\rho(V_{0001})\big)^{1/4} = \varphi^{1/2} .$$ Hence, we get that $\rho(V_0, V_1) \geq \varphi^{1/2}$. Thus it suffices to prove that $\rho(V_x)^{1/k} \leq \varphi^{1/2}$ for every binary word $x = x_1 \cdots x_k$. Note that we have $$\big(\rho(V_{0^k})\big)^{1/k} = \big(\rho(V_0^k)\big)^{1/k} = \big(\rho(V_0)^k\big)^{1/k} = \rho(V_0) = 1 .$$ Hence, we can assume that $x \neq 0^k$. It is well known that, if $A$ and $B$ are $n \times n$ complex matrices, then $AB$ and $BA$ have the same eigenvalues. Consequently, by applying a circular shift to $x$, we can assume that $x_k = 1$. Moreover, since $V_1^2 = \mathbf{0}$, we can assume that $x$ does not have two consecutive $1$'s. Again since $V_1^2 = \mathbf{0}$, we get that if $x_1 = 1$ then $V_x^2 = \mathbf{0}$, and consequently $\rho(V_x) = 0$. Hence, we can assume that $x_1 = 0$ (and so $k \geq 2$). Therefore, we have that $x = (0^{k_1 - 1}1) \cdots (0^{k_s - 1}1)$, for some integers $k_1, \dots, k_s \geq 2$ such that $k_1 + \cdots + k_s = k$. Suppose that $\lambda \neq 0$ is an eigenvalue of $V_x$, and let $v$ be a corresponding left eigenvector, so that $v V_x = \lambda v$. Since $x_k = 1$, we get that $v = (0 \; v_2 \; 0 \; v_4)$ for some $v_2, v_4 \in \mathbb{C}$. 
It follows easily by induction that $$(0 \; w_2 \; 0 \; w_4) \, V_{0^{h-1} 1} = (0 \; w_2 \; 0 \; w_4) \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \lfloor h / 2 \rfloor & 0 & \lfloor (h-1) / 2 \rfloor \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} ,$$ for all $w_2, w_4 \in \mathbb{C}$ and for every integer $h \geq 2$. Therefore, we get that $v V_x = \lambda v$ is equivalent to $(v_2 \; v_4) Z_{k_1} \cdots Z_{k_s} = \lambda (v_2 \; v_4)$, where $$Z_h := \begin{pmatrix} \lfloor h / 2 \rfloor & \lfloor (h-1) / 2 \rfloor \\ 1 & 1 \end{pmatrix}$$ for every integer $h \geq 2$. Consequently, we obtain that $$\label{equ:bound-rho-Tx} \rho(V_x) \leq \rho(Z_{k_1} \cdots Z_{k_s}) \leq \|Z_{k_1} \cdots Z_{k_s}\| \leq \|Z_{k_1}\| \cdots \|Z_{k_s}\| ,$$ where $\|\cdot\|$ is the *spectral norm*, which is defined by $\|A\| := \sqrt{\!\rho(A^{\mathsf{H}} A)}$ for every square matrix $A$ over $\mathbb{C}$, with $A^\mathsf{H}$ denoting the conjugate transpose. We claim that $\|Z_h\|^{1/h} \leq \varphi^{1/2}$ for every integer $h \geq 2$. Indeed, since the eigenvalues of $Z_h^{\mathsf{H}}\, Z_h$ are nonnegative real numbers, we have that $$\|Z_h\|^{1/h} = \big(\rho(Z_h^{\mathsf{H}}\, Z_h)\big)^{1/(2h)} \leq \big(\!\mathop{\mathrm{tr}}(Z_h^{\mathsf{H}}\, Z_h)\big)^{1/(2h)} \leq \left(\frac{h^2+4}{2}\right)^{1/(2h)} \leq \varphi^{1/2} ,$$ for every integer $h \geq 7$. Then the claim can be checked for $h \in \{2, \dots, 7\}$. Therefore, from [\[equ:bound-rho-Tx\]](#equ:bound-rho-Tx){reference-type="eqref" reference="equ:bound-rho-Tx"} we get that $$\begin{aligned} \big(\rho(V_x)\big)^{1/k} &\leq \big(\|Z_{k_1}\| \cdots \|Z_{k_s}\|\big)^{1/k} = \big(\|Z_{k_1}\|^{1/k_1}\big)^{k_1/k} \cdots \big(\|Z_{k_s}\|^{1/k_s}\big)^{k_s/k} \\ &\leq \big(\varphi^{1/2}\big)^{k_1/k} \cdots \big(\varphi^{1/2}\big)^{k_s/k} = \big(\varphi^{1/2}\big)^{(k_1 + \cdots + k_s)/k} = \varphi^{1/2} , \end{aligned}$$ as desired. 
◻ Let $U_p^{\textsf{lex}}$ be the transition matrix of $\mathcal{B}_p$, where the states of $\mathcal{B}_p$ are sorted in lexicographic order. Note that the matrices $U_p^{\textsf{lex}}$ and $U_p$ are similar, and consequently $\rho(U_p^{\textsf{lex}}) = \rho(U_p) = \lambda_p$. By the construction of $\mathcal{B}_p$, we have that $U_p^{\textsf{lex}} = V_0^{\otimes p} + V_1^{\otimes p}$. Hence, by Theorem [Theorem 4](#thm:kronecker-power){reference-type="ref" reference="thm:kronecker-power"} and Lemma [Lemma 5](#lem:rho-T0-T1){reference-type="ref" reference="lem:rho-T0-T1"}, we get that $$\lim_{p \to +\infty} \lambda_p^{1/p} = \lim_{p \to +\infty} \big(\rho(U_p^{\textsf{lex}})\big)^{1/p} = \lim_{p \to +\infty} \Big(\rho\big(V_0^{\otimes p} + V_1^{\otimes p}\big)\Big)^{1/p} = \rho(V_0, V_1) = \varphi^{1/2} .$$ The proof is complete. # Statements and Declarations {#statements-and-declarations .unnumbered} ## Competing Interests {#competing-interests .unnumbered} The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported. ## Data availability statement {#data-availability-statement .unnumbered} No new data were created or analysed in this study. Data sharing is not applicable to this article. F. Ardila, *The coefficients of a Fibonacci power series*, Fibonacci Quart. **42** (2004), no. 3, 202--204. M. A. Berger and Y. Wang, *Bounded semigroups of matrices*, Linear Algebra Appl. **166** (1992), 21--27. J. Berstel, *An exercise on Fibonacci representations*, Theor. Inform. Appl. **35** (2001), no. 6, 491--498. V. D. Blondel and Y. Nesterov, *Computationally efficient approximations of the joint spectral radius*, SIAM J. Matrix Anal. Appl. **27** (2005), no. 1, 256--272. L. Carlitz, *Fibonacci representations*, Fibonacci Quart. **6** (1968), no. 4, 193--220. S. Chow and O. 
Jones, *On the variance of the Fibonacci partition function*, 2023, arXiv preprint: <https://arxiv.org/abs/2308.15415>. S. Chow and T. Slattery, *On Fibonacci partitions*, J. Number Theory **225** (2021), 310--326. I. Daubechies and J. C. Lagarias, *Sets of matrices all infinite products of which converge*, Linear Algebra Appl. **161** (1992), 227--263. L. Elsner, *The generalized spectral-radius theorem: an analytic-geometric proof*, Proceedings of the Workshop "Nonnegative Matrices, Applications and Generalizations" and the Eighth Haifa Matrix Theory Conference (Haifa, 1993), vol. 220, 1995, pp. 151--159. P. Flajolet and R. Sedgewick, *Analytic Combinatorics*, Cambridge University Press, Cambridge, 2009. N. Robbins, *Fibonacci partitions*, Fibonacci Quart. **34** (1996), no. 4, 306--313. J. Shallit, *Robbins and Ardila meet Berstel*, Inform. Process. Lett. **167** (2021), Paper No. 106081, 6. F. V. Weinstein, *Notes on Fibonacci partitions*, Exp. Math. **25** (2016), no. 4, 482--500. J. Xu and M. Xiao, *A characterization of the generalized spectral radius with Kronecker powers*, Automatica J. IFAC **47** (2011), no. 7, 1530--1533. E. Zeckendorf, *Représentation des nombres naturels par une somme de nombres de Fibonacci ou de nombres de Lucas*, Bull. Soc. Roy. Sci. Liège **41** (1972), 179--182. N. H. Zhou, *On the representation functions of certain numeration systems*, 2023, arXiv preprint: <https://arxiv.org/abs/2305.00792>.
--- author: - "Yang Liu[^1]" - "Andre Milzarek[^2]" - "Fred Roosta[^3]" bibliography: - biblio.bib title: Obtaining Pseudo-inverse Solutions With MINRES --- # Introduction {#sec:intro} We consider the linear least-squares problem, $$\begin{aligned} \label{eq:least_squares} \min_{\mathbf{x}\in \mathbb{C}^{d}} \| \mathbf{b}- \mathbf{A}\mathbf{x} \|^2,\end{aligned}$$ involving a complex-valued matrix $\mathbf{A}\in \mathbb{C}^{d\times d}$ that is either Hermitian or complex symmetric, and a right-hand side complex vector $\mathbf{b}\in \mathbb{C}^{d}$. Our primary focus is on cases where the system is inconsistent, i.e., $\mathbf{b}\notin \textnormal{Range}(\mathbf{A})$, which arise in a vast variety of situations such as in optimization of non-convex problems [@roosta2018newton; @liu2022newtonmr], numerical solution of partial differential equations [@kaasschieter1988preconditioned], and low-rank matrix computations [@gallivan1996high]. In fact, even if the system is theoretically consistent, due to errors from measurements, discretizations, truncations, or round-off, the matrix $\mathbf{A}$ can become numerically singular, resulting in a numerically incompatible system [@choi2013minimal]. In these situations, problem [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} admits infinitely many solutions. Among all of these solutions, the one with the smallest Euclidean norm is commonly referred to as the pseudo-inverse solution, also known as the minimum-norm solution, and is given by $\mathbf{A}^{\dagger}\mathbf{b}$ where $\mathbf{A}^{\dagger}$ is the Moore-Penrose pseudo-inverse of $\mathbf{A}$. This particular solution, among all other alternatives, holds a special place and offers numerous practical and theoretical advantages in various applications including, e.g., linear matrix equations [@engl1981new], optimization [@roosta2018newton; @liu2021convergence], and machine learning [@derezinski2020exact; @dar2022double]. 
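To fix ideas, the distinguished role of $\mathbf{A}^{\dagger}\mathbf{b}$ can be seen on a small singular example (a hypothetical toy system, not taken from the applications above): among the infinitely many least-squares solutions, the pseudo-inverse solution is the one of minimum Euclidean norm.

```python
import numpy as np

# A small singular symmetric system with b outside Range(A): the least-squares
# problem then has infinitely many solutions, and A^+ b is the one of minimum
# Euclidean norm.  (Toy example, not from the paper.)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])      # singular: Null(A) = span{e_3}
b = np.array([1.0, 1.0, 1.0])        # the e_3-component of b is unreachable

x_pinv = np.linalg.pinv(A) @ b                   # Moore-Penrose solution
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]   # SVD-based min-norm solution
assert np.allclose(x_pinv, x_lstsq)
assert np.allclose(x_pinv, [1/3, 1/3, 0.0])

# Any other solution x = x_pinv + z with z in Null(A) has the same residual
# but a strictly larger norm.
x_other = x_pinv + np.array([0.0, 0.0, 5.0])
assert np.allclose(A @ x_other, A @ x_pinv)
assert np.linalg.norm(x_other) > np.linalg.norm(x_pinv)
```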
When $d$ is small, one can obtain $\mathbf{A}^{\dagger}\mathbf{b}$ by explicitly computing $\mathbf{A}^{\dagger}$ via direct methods, e.g., [@courrieu2005fast; @katsikis2011improved; @bjorck2015numerical; @bjorck1996numerical; @klema1980singular]. Among them, the most well-known direct method to obtain $\mathbf{A}^{\dagger}$ is via the singular value decomposition [@klema1980singular]. However, rather than explicitly computing $\mathbf{A}^{\dagger}$ for large-scale problems when $d \gg 1$, it is more appropriate, if not indispensable, to obtain the pseudo-inverse solution $\mathbf{A}^{\dagger}\mathbf{b}$ by an iterative scheme. LSQR [@paige1982lsqr] and LSMR [@fong2011lsmr] are iterative Krylov sub-space methods that can recover the pseudo-inverse solution $\mathbf{A}^{\dagger}\mathbf{b}$ for any $\mathbf{A}\in \mathbb{C}^{m \times d}$. However, they are mathematically equivalent to the conjugate gradient method (CG) [@hestenes1952methods] and the minimum residual method (MINRES) [@paige1975solution], respectively, applied to the normal equation of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}. Consequently, each iteration of LSQR and LSMR necessitates two matrix-vector product evaluations, which can be computationally challenging when $d \gg 1$. While CG iterations are not well-defined unless $\mathbf{A}\succeq \mathbf{0}$ and $\mathbf{b}\in \textnormal{Range}(\mathbf{A})$, MINRES can effectively solve [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} even for indefinite systems. Nonetheless, MINRES is not guaranteed to obtain $\mathbf{A}^{\dagger}\mathbf{b}$ unless $\mathbf{b}\in \textnormal{Range}(\mathbf{A})$ [@choi2011minres Theorem 3.1]. To address this limitation, a variant of MINRES, called MINRES-QLP was introduced in [@choi2011minres; @choi2014algorithm]. MINRES-QLP ensures convergence to the pseudo-inverse solution in all cases. 
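As a quick illustration of the preceding paragraph (a sketch under the assumption that SciPy's LSQR is run to tight tolerances on a tiny system), LSQR indeed recovers $\mathbf{A}^{\dagger}\mathbf{b}$ on an incompatible system, at the price of the two matrix-vector products per iteration mentioned above:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Toy singular symmetric system (not from the paper); LSQR converges to the
# minimum-norm least-squares solution A^+ b even though b is not in Range(A).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.array([1.0, 1.0, 1.0])

x_lsqr = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
assert np.allclose(x_lsqr, np.linalg.pinv(A) @ b, atol=1e-6)
```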
Compared with MINRES, which requires $\approx 9d$ flops per iteration, the rank-revealing QLP decomposition of the tridiagonal matrix from the Lanczos process in MINRES-QLP necessitates an additional $\approx 5d$ flops per iteration [@choi2011minres]. In addition, due to its complexity, the implementation of MINRES-QLP can be quite challenging. In this paper, we introduce a novel and remarkably simple lifting strategy that is seamlessly incorporated into the final MINRES iteration, allowing us to obtain the minimum-norm solution with negligible additional computational overhead and minimal alterations to the original MINRES algorithm. We consider our lifting strategy in a variety of settings including Hermitian and complex symmetric systems as well as systems with semi-definite preconditioners. The rest of the paper is organized as follows. We end this section by introducing our notation and definitions. The theoretical analyses underpinning our lifting strategy in a variety of settings are given in [\[sec:pinverse,sec:pMINRES\]](#sec:pinverse,sec:pMINRES){reference-type="ref" reference="sec:pinverse,sec:pMINRES"}. In particular, [\[sec:hermitian,sec:CSMINRES\]](#sec:hermitian,sec:CSMINRES){reference-type="ref" reference="sec:hermitian,sec:CSMINRES"} consider Hermitian and complex symmetric systems, respectively, while [\[sec:pMINRES_H,sec:pMINRES_CS\]](#sec:pMINRES_H,sec:pMINRES_CS){reference-type="ref" reference="sec:pMINRES_H,sec:pMINRES_CS"} study our lifting strategy in the presence of positive semi-definite preconditioners in each matrix class. Numerical results are given in [4](#sec:exp){reference-type="ref" reference="sec:exp"}. Conclusion and further thoughts are gathered in [5](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}. ## Notation and Definition {#notation-and-definition .unnumbered} Throughout the paper, vectors and matrices are denoted by bold lower- and upper-case letters, respectively. 
Regular lower-case letters are reserved for scalars. The spaces of complex-symmetric and Hermitian $d \times d$ matrices are denoted by $\mathbb{S}^{d \times d}$ and $\mathbb{H}^{d \times d}$, respectively. The inner-product of complex vectors $\mathbf{x}$ and $\mathbf{y}$ is defined as $\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^{*} \mathbf{y}$, where $\mathbf{x}^*$ represents the conjugate transpose of $\mathbf{x}$. The Euclidean norm of a vector $\mathbf{x}\in \mathbb{C}^{d}$ is given by $\|\mathbf{x}\|$. The conjugate of a vector or a matrix is denoted by $\bar{\mathbf{b}}= \textnormal{Conj}(\mathbf{b})$ or $\bar{\mathbf{A}}= \textnormal{Conj}(\mathbf{A})$. The zero vector and the zero matrix are denoted by $\mathbf{0}$ while the identity matrix of dimension $t \times t$ is given by $\mathbf{I}_{t}$. We use $\mathbf{e}_{j}$ to denote the $j\textsuperscript{th}\xspace$ column of the identity matrix. We use $\mathbf{M}\succeq\,(\succ)\,\mathbf{0}$ to indicate that $\mathbf{M}$ is positive semi-definite (PSD) (positive definite (PD)). For a PSD matrix $\mathbf{M}$, we write $\| \mathbf{x} \|_{\mathbf{M}} = \sqrt{\langle \mathbf{x},\mathbf{M}\mathbf{x} \rangle}$ (this is an abuse of notation since, unless $\mathbf{M}\succ \mathbf{0}$, this does not define a norm). For any $t\geq 1$, the set $\mathcal{K}_{t}(\mathbf{A}, \mathbf{b}) = \textnormal{Span}\left\{\mathbf{b},\mathbf{A}\mathbf{b},\ldots,\mathbf{A}^{t-1}\mathbf{b}\right\}$ denotes the Krylov sub-space of degree $t$ generated using $\mathbf{A}$ and $\mathbf{b}$. The residual vector at the $t\textsuperscript{th}\xspace$ iteration is denoted by $\mathbf{r}_{t} = \mathbf{b}- \mathbf{A}\mathbf{x}_{t}$. For completeness, we recall the definition of the Moore-Penrose pseudo-inverse of $\mathbf{A}$. 
[\[def:pinverse\]]{#def:pinverse label="def:pinverse"} For given $\mathbf{A}\in \mathbb{C}^{d\times m}$, the (Moore-Penrose) pseudo-inverse is the matrix $\mathbf{B}\in \mathbb{C}^{m \times d}$ satisfying the Moore-Penrose conditions, namely [\[eq:pi\]]{#eq:pi label="eq:pi"} $$\begin{aligned} \mathbf{A}\mathbf{B}\mathbf{A}&= \mathbf{A}, \label{eq:pi_1} \\ \mathbf{B}\mathbf{A}\mathbf{B}&= \mathbf{B}, \label{eq:pi_2} \\ (\mathbf{A}\mathbf{B})^{*} &= \mathbf{A}\mathbf{B}, \label{eq:pi_3} \\ (\mathbf{B}\mathbf{A})^{*} &= \mathbf{B}\mathbf{A}. \label{eq:pi_4} \end{aligned}$$ The matrix $\mathbf{B}$ is unique and is denoted by $\mathbf{A}^{\dagger}$. The point $\mathbf{x}^{+}\triangleq\mathbf{A}^{\dagger}\mathbf{b}$ is said to be the (Moore-Penrose) pseudo-inverse solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}. The interplay between $\mathbf{A}$ and $\mathbf{b}$ is a critical factor in the convergence of MINRES and many other iterative solvers. This interplay is encapsulated in the notion of the grade of $\mathbf{b}$ with respect to $\mathbf{A}$. [\[def:grade_b\]]{#def:grade_b label="def:grade_b"} The grade of $\mathbf{b}\in \mathbb{C}^{d}$ with respect to $\mathbf{A}\in \mathbb{C}^{d\times d}$ is the positive integer $g(\mathbf{A}, \mathbf{b})$ such that $\text{dim}(\mathcal{K}_{t}(\mathbf{A}, \mathbf{b})) = \min\{t,g(\mathbf{A},\mathbf{b})\}$. In the special case where $\mathbf{A}$ is complex-symmetric, i.e., $\mathbf{A}^{\intercal}= \mathbf{A}$, we define the grade $g(\mathbf{A},\mathbf{b})$ with respect to a modified Krylov subspace, the so-called Saunders subspaces, cf. [@choi2013minimal; @saunders1988two] and [\[eq:saunders\]](#eq:saunders){reference-type="eqref" reference="eq:saunders"}, i.e., $\text{dim}(\mathcal{S}_{t}(\mathbf{A}, \mathbf{b})) = \min\{t,g(\mathbf{A},\mathbf{b})\}$. 
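The grade defined above can be made concrete with a small computation (an illustrative sketch with a hypothetical diagonal matrix): it is the first $t$ at which the Krylov matrix $[\mathbf{b}, \mathbf{A}\mathbf{b}, \ldots, \mathbf{A}^{t-1}\mathbf{b}]$ stops gaining rank.

```python
import numpy as np

# Numerical sketch of the grade g(A, b): the first t at which the Krylov
# matrix [b, Ab, ..., A^{t-1} b] stops gaining rank.  (Toy example.)
def grade(A, b, tol=1e-10):
    d = len(b)
    K = b.reshape(-1, 1)
    for t in range(1, d + 1):
        if np.linalg.matrix_rank(K, tol=tol) < t:
            return t - 1                      # rank stalled: dim K_t = t - 1
        K = np.hstack([K, A @ K[:, -1:]])     # append the next Krylov vector
    return np.linalg.matrix_rank(K[:, :d], tol=tol)

A = np.diag([1.0, 2.0, 3.0, 0.0])    # Hermitian, singular

b = np.ones(4)                       # components on all four eigenvectors
assert grade(A, b) == 4

b2 = np.array([1.0, 1.0, 0.0, 0.0])  # only two eigenvectors excited
assert grade(A, b2) == 2
```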
Recall that $g(\mathbf{A}, \mathbf{b})$, in fact, determines the iteration at which a Krylov subspace algorithm terminates with a solution (in exact arithmetic). For simplicity, for given pairs $(\mathbf{A}, \mathbf{b})$ and $(\tilde{\mathbf{A}}, \tilde{\mathbf{b}})$, we set $g = g(\mathbf{A}, \mathbf{b})$ and $\tilde{g}= g(\tilde{\mathbf{A}}, \tilde{\mathbf{b}})$. # Pseudo-inverse Solutions in MINRES {#sec:pinverse} The most common method to compute the pseudo-inverse of a matrix is via singular value decomposition (SVD). The computational cost of the SVD is $\mathcal O(d^3)$, which essentially makes such an approach intractable in high dimensions. In our context, we do not seek to directly compute $\mathbf{A}^{\dagger}$ as our interest lies only in obtaining the pseudo-inverse solution, i.e., $\mathbf{x}^{+}= \mathbf{A}^{\dagger}\mathbf{b}$. As alluded to earlier in [1](#sec:intro){reference-type="ref" reference="sec:intro"}, existing iterative methods that can ensure the pseudo-inverse solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} in the incompatible setting demand significantly higher computational effort per iteration compared to plain MINRES. The MINRES method is shown in [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}. Recall that for a Hermitian matrix $\mathbf{A}\in \mathbb{H}^{d\times d}$ and a right-hand side vector $\mathbf{b}\in \mathbb{C}^{d}$, the $t\textsuperscript{th}\xspace$ iteration of MINRES for solving [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} can be written as $$\begin{aligned} \label{eq:MINRES} \min_{\mathbf{x}\in \mathcal{K}_t(\mathbf{A}, \mathbf{b})} \| \mathbf{b}- \mathbf{A}\mathbf{x} \|^2,\end{aligned}$$ where $t \leq g$ and the grade $g$ is defined as in [\[def:grade_b\]](#def:grade_b){reference-type="ref" reference="def:grade_b"}; see [@paige1975solution; @liu2022minres] for more details. 
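For small problems, this variational characterization can be emulated densely (an illustrative sketch, not the actual MINRES recursions): build an explicit basis of $\mathcal{K}_t(\mathbf{A}, \mathbf{b})$ and solve the resulting small least-squares problem.

```python
import numpy as np

# The t-th MINRES iterate minimizes ||b - A x|| over K_t(A, b).  A dense
# stand-in: form a Krylov basis explicitly and solve the small least-squares
# problem at each t.  (Toy example with a positive definite diagonal A.)
A = np.diag([1.0, 2.0, 3.0])         # Hermitian
b = np.ones(3)                       # grade g(A, b) = 3

res_norms = []
K = b.reshape(-1, 1)
for t in range(1, 4):
    y = np.linalg.lstsq(A @ K, b, rcond=None)[0]
    x_t = K @ y                      # minimizer of ||b - A x|| over K_t
    res_norms.append(np.linalg.norm(b - A @ x_t))
    K = np.hstack([K, (A @ K[:, -1]).reshape(-1, 1)])

# The subspaces are nested, so the residual norms are nonincreasing, and
# they vanish at t = g since here b lies in Range(A).
assert all(res_norms[i + 1] <= res_norms[i] + 1e-12 for i in range(2))
assert res_norms[-1] < 1e-10
```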
When $\mathbf{b}\in \textnormal{Range}(\mathbf{A})$, it can be easily shown that MINRES returns the pseudo-inverse solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}, [@choi2011minres]. However, when $\mathbf{b}\notin \textnormal{Range}(\mathbf{A})$, the situation is more complicated. In such settings, as long as $\textnormal{Range}(\mathbf{A}) \subseteq \mathcal{K}_{g}(\mathbf{A}, \mathbf{b})$, the final iterate of MINRES satisfies the properties [\[eq:pi_1,eq:pi_2,eq:pi_3\]](#eq:pi_1,eq:pi_2,eq:pi_3){reference-type="ref" reference="eq:pi_1,eq:pi_2,eq:pi_3"} in [\[def:pinverse\]](#def:pinverse){reference-type="ref" reference="def:pinverse"}, otherwise one can only guarantee [\[eq:pi_2,eq:pi_3\]](#eq:pi_2,eq:pi_3){reference-type="ref" reference="eq:pi_2,eq:pi_3"}; see, e.g., [@choi2006iterative Theorem 2.27]. To obtain a solution that satisfies all four properties of the pseudo-inverse, Choi et al., [@choi2011minres], provide an algorithm, called MINRES-QLP. However, not only is MINRES-QLP rather complicated to implement, but it also demands roughly an extra $50\%$ flops per iteration compared to MINRES. Our goal in this section is to obtain the pseudo-inverse solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} with minimal additional costs and alterations to the MINRES algorithm. Before doing so, we first restate a simple and well-known result for solutions of the least squares problem [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}; the proof of Lemma [\[lemma:MINRES_solve_exactly\]](#lemma:MINRES_solve_exactly){reference-type="ref" reference="lemma:MINRES_solve_exactly"} can be found in, e.g., [@bjorck1996numerical]. 
[\[lemma:MINRES_solve_exactly\]]{#lemma:MINRES_solve_exactly label="lemma:MINRES_solve_exactly"} The vector $\mathbf{x}$ is a solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} if and only if there exists a vector $\mathbf{y}\in \mathbb{C}^{d}$ such that $$\begin{aligned} \mathbf{x}= \mathbf{A}^{\dagger}\mathbf{b}+ (\mathbf{I}_{d} - \mathbf{A}^{\dagger}\mathbf{A}) \mathbf{y}. \end{aligned}$$ Furthermore, for any solution $\mathbf{x}$, we have $\mathbf{A}\mathbf{x}= \mathbf{A}\mathbf{A}^{\dagger}\mathbf{b}$ and $\mathbf{r}= \mathbf{b}- \mathbf{A}\mathbf{x}= (\mathbf{I}- \mathbf{A}\mathbf{A}^{\dagger}) \mathbf{b}$. By [\[lemma:MINRES_solve_exactly\]](#lemma:MINRES_solve_exactly){reference-type="ref" reference="lemma:MINRES_solve_exactly"}, the general solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} is given by $$\begin{aligned} % \label{eq:MINRES_xg} \mathbf{x}= \mathbf{x}^{+}+ \mathbf{z}, \quad \text{where } \quad \mathbf{z}\in \textnormal{Null}(\mathbf{A}).\end{aligned}$$ In other words, at $t = g$, the final iterate of MINRES is a vector that is implicitly given as a linear combination of two vectors, namely $\mathbf{A}^{\dagger}\mathbf{b}$ and some $\mathbf{z}\in \textnormal{Null}(\mathbf{A})$. Our lifting technique amounts to eliminating the component $\mathbf{z}$ to obtain the pseudo-inverse solution $\mathbf{x}^{+}$. ## Hermitian and skew-Hermitian Systems {#sec:hermitian} We now detail our lifting strategy to obtain the pseudo-inverse solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} from MINRES iterations. We show that not only does our strategy apply to the typical case where $\mathbf{A}$ is Hermitian, but it can also be readily adapted to cases where the underlying matrix is skew-Hermitian [@greif2009iterative]. 
[\[thm:MINRES_dagger\]]{#thm:MINRES_dagger label="thm:MINRES_dagger"} Let $\mathbf{x}_{g}$ be the final iterate of MINRES for solving [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} with the Hermitian matrix $\mathbf{A}\in \mathbb{H}^{d\times d}$. 1. If $\mathbf{r}_{g}= \bm{0}$, then $\mathbf{x}_{g}= \mathbf{A}^{\dagger}\mathbf{b}$. 2. If $\mathbf{r}_{g}\neq \bm{0}$, define the lifted vector $$\begin{aligned} \label{eq:MINRES_lifting} \hat{\mathbf{x}}^{+}= \mathbf{x}_{g}- \frac{\langle \mathbf{r}_{g}, \mathbf{x}_{g} \rangle}{\| \mathbf{r}_{g} \|^2} \mathbf{r}_{g}= \left(\mathbf{I}_{d} - \frac{\mathbf{r}_{g}\mathbf{r}_{g}^{*}}{\| \mathbf{r}_{g} \|^{2}}\right) \mathbf{x}_{g}, \end{aligned}$$ where $\mathbf{r}_{g}= \mathbf{b}- \mathbf{A}\mathbf{x}_{g}$. Then, we have $\hat{\mathbf{x}}^{+}= \mathbf{A}^{\dagger}\mathbf{b}$. *Proof.* If $\mathbf{r}_{g}= \bm{0}$, since $\mathbf{b}\in \textnormal{Range}(\mathbf{A})$, we naturally obtain $\mathbf{x}_{g}= \mathbf{x}^{+}$ by [@choi2011minres Theorem 3.1]. Now, suppose $\mathbf{r}_{g}\neq \mathbf{0}$, which implies $\mathbf{b}\not\in \textnormal{Range}(\mathbf{A})$. Since $\mathbf{x}_{g}\in \mathcal{K}_{g}(\mathbf{A}, \mathbf{b})$, we can write $\mathbf{x}_{g}= \sum_{i=1}^{g} a_{i} \mathbf{A}^{i-1} \mathbf{b}$ for some $a_i \in \mathbb{C}$, $i=1,\dots,g$. It then follows $$\begin{aligned} \mathbf{x}_{g}= \mathbf{x}_{g}- a_1(\mathbf{I}-\mathbf{A}\mathbf{A}^{\dagger})\mathbf{b}+ a_1(\mathbf{I}-\mathbf{A}\mathbf{A}^{\dagger})\mathbf{b}= \mathbf{p}+ {\mathbf{q}}, \end{aligned}$$ where $$\begin{aligned} \mathbf{p}&\triangleq\mathbf{x}_{g}- a_1(\mathbf{I}-\mathbf{A}\mathbf{A}^{\dagger})\mathbf{b}= a_1 \mathbf{A}\mathbf{A}^{\dagger}\mathbf{b}+ {\sum}_{i=2}^{g} a_{i} \mathbf{A}^{i-1} \mathbf{b},\\ {\mathbf{q}}&\triangleq a_1 (\mathbf{I}- \mathbf{A}\mathbf{A}^{\dagger}) \mathbf{b}= a_1 \mathbf{r}_{g}. 
\end{aligned}$$ Since $\mathbf{A}$ is Hermitian, we have $\mathbf{A}^{2} \mathbf{A}^{\dagger}= \mathbf{A}$ [@bernstein2009matrix Fact 6.3.17]. So $\mathbf{A}\mathbf{p}= \mathbf{A}\mathbf{x}_{g}$, which implies that the normal equation is satisfied for $\mathbf{p}$. In other words, $\mathbf{p}$ is a solution to [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}. We also recall from [\[lemma:MINRES_solve_exactly\]](#lemma:MINRES_solve_exactly){reference-type="ref" reference="lemma:MINRES_solve_exactly"} that $\mathbf{x}_{g}= \mathbf{A}^{\dagger}\mathbf{b}+ \left(\mathbf{I}_{d} - \mathbf{A}^{\dagger}\mathbf{A}\right) \mathbf{y}$ for some $\mathbf{y}\in \mathbb{C}^{d}$. Now, since $\mathbf{p}\in \textnormal{Range}(\mathbf{A}) = \textnormal{Range}(\mathbf{A}^{\dagger})$ and ${\mathbf{q}}\in \textnormal{Null}(\mathbf{A})$, we must have that $\mathbf{p}= \mathbf{A}^{\dagger}\mathbf{b}$. Also, since $\mathbf{p}\perp {\mathbf{q}}$, we can find $a_1$ as $$\begin{aligned} a_1 = \frac{\langle \mathbf{r}_{g}, \mathbf{x}_{g} \rangle}{\| \mathbf{r}_{g} \|^{2}}, \end{aligned}$$ which gives the desired result. ◻ In [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"}, we relate the lifting strategy at iteration $t$ of MINRES to the orthogonal projection of the iterate ${\mathbf{x}}_{t}$ onto $\mathbf{A}\mathcal{K}_{t}(\mathbf{A}, \mathbf{b})$. Consequently, performing lifting at the final iteration amounts to the orthogonal projection of $\mathbf{x}_{g}$ onto $\mathbf{A}\mathcal{K}_{g}(\mathbf{A}, \mathbf{b})$, effectively eliminating the contribution of the portion of $\mathbf{b}$ that lies in the null space of $\mathbf{A}$ from the final solution. 
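The lifting step of the theorem above can be exercised on a small singular Hermitian system (a toy sketch; the dense Krylov least-squares solve below stands in for running MINRES to its final iteration $g$):

```python
import numpy as np

# Toy singular Hermitian system with b outside Range(A); here the grade is
# g = 2, so K_2(A, b) = span{b, Ab}.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.array([1.0, 1.0, 1.0])

K = np.column_stack([b, A @ b])
y = np.linalg.lstsq(A @ K, b, rcond=None)[0]
x_g = K @ y                          # minimizes ||b - A x|| over K_2
r_g = b - A @ x_g                    # nonzero final residual

# Lifting: remove the component of x_g along r_g.
x_lift = x_g - (r_g @ x_g) / (r_g @ r_g) * r_g

x_plus = np.linalg.pinv(A) @ b
assert np.linalg.norm(r_g) > 1e-8          # the system is incompatible
assert not np.allclose(x_g, x_plus)        # the plain iterate misses A^+ b ...
assert np.allclose(x_lift, x_plus)         # ... but the lifted vector hits it
assert np.linalg.norm(x_lift) < np.linalg.norm(x_g)
```

Note how the final iterate carries a nonzero component along $\mathbf{r}_{g}$, and the one-line correction removes exactly that component.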
[\[prop:lifting_t\]]{#prop:lifting_t label="prop:lifting_t"} In MINRES, $\hat{\mathbf{x}}_{t}^{\natural}$ is the orthogonal projection of ${\mathbf{x}}_{t}$ onto $\mathbf{A}\mathcal{K}_{t}(\mathbf{A}, \mathbf{b})$, where $$\begin{aligned} \label{eq:lifting_t} \hat{\mathbf{x}}_{t}^{\natural}\triangleq {\mathbf{x}}_{t}- \frac{\langle {\mathbf{r}}_{t}, {\mathbf{x}}_{t} \rangle}{\| {\mathbf{r}}_{t} \|^2} {\mathbf{r}}_{t}= \left(\mathbf{I}_{d} - \frac{{\mathbf{r}}_{t}{\mathbf{r}}_{t}^{*}}{\| {\mathbf{r}}_{t} \|^{2}}\right) {\mathbf{x}}_{t}. \end{aligned}$$ *Proof.* Note that, for $t \leq g-1$, $\mathbf{A}\mathcal{K}_{t}(\mathbf{A}, \mathbf{b}) = \textnormal{Span}\{\mathbf{A}\mathbf{b}, \mathbf{A}^2 \mathbf{b}, \ldots, \mathbf{A}^t \mathbf{b}\}$ is a $t$ dimensional subspace. The oblique projection framework gives ${\mathbf{r}}_{t}\perp \mathbf{A}\mathcal{K}_{t}(\mathbf{A}, \mathbf{b})$. Therefore, $\mathcal{K}_{t+1}(\mathbf{A}, \mathbf{b}) = \textnormal{Span}\{{\mathbf{r}}_{t}\} \oplus \mathbf{A}\mathcal{K}_{t}(\mathbf{A}, \mathbf{b})$ and $\mathcal{K}_{t+1}(\mathbf{A}, \mathbf{b})$ has dimension $t+1$. From ${\mathbf{x}}_{t}\in \mathcal{K}_{t}(\mathbf{A}, \mathbf{b}) \subset \mathcal{K}_{t+1}(\mathbf{A}, \mathbf{b})$, it follows that $$\begin{aligned} {\mathbf{x}}_{t}= \mathbf{P}_{{\mathbf{r}}_{t}} {\mathbf{x}}_{t}+ \mathbf{P}_{\mathbf{A}\mathcal{K}_{t}} {\mathbf{x}}_{t}, \end{aligned}$$ where $\mathbf{P}_{{\mathbf{r}}_{t}}$ and $\mathbf{P}_{\mathbf{A}\mathcal{K}_{t}}$ are orthogonal projections onto the span of ${\mathbf{r}}_{t}$ and the sub-space $\mathbf{A}\mathcal{K}_{t}(\mathbf{A},\mathbf{b})$, respectively. Noting that $\mathbf{P}_{{\mathbf{r}}_{t}} = {\mathbf{r}}_{t}{\mathbf{r}}_{t}^{*}/\| {\mathbf{r}}_{t} \|^2$ gives the desired result. ◻ Recall that if $\mathbf{A}$ is skew-Hermitian, i.e., $\mathbf{A}^{*}= -\mathbf{A}$, then we have $i\mathbf{A}\in \mathbb{H}^{d\times d}$. 
Thus, by [@choi2013minimal], we can apply MINRES with inputs $i\mathbf{A}$ and $i\mathbf{b}$ to solve [\[eq:MINRES\]](#eq:MINRES){reference-type="ref" reference="eq:MINRES"}. We immediately obtain the following result. [\[coro:MINRES_SH_dagger\]]{#coro:MINRES_SH_dagger label="coro:MINRES_SH_dagger"} If $\mathbf{A}\in \mathbb{C}^{d\times d}$ is skew-Hermitian, the result of [\[thm:MINRES_dagger\]](#thm:MINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger"} continues to hold as long as MINRES is applied with inputs $i\mathbf{A}$ and $i\mathbf{b}$. Note that, here, $\mathbf{r}_{g}= i\mathbf{b}- i\mathbf{A}\mathbf{x}_{g}$. *Proof.* We can easily verify that $-i\mathbf{A}^{\dagger}$ is the pseudo-inverse of $i\mathbf{A}$ by [\[def:pinverse\]](#def:pinverse){reference-type="ref" reference="def:pinverse"}. Hence, we obtain the desired result as in the proof of [\[thm:MINRES_dagger\]](#thm:MINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger"} by noticing $[i \mathbf{A}]^{\dagger} (i\mathbf{b}) = - i\mathbf{A}^{\dagger}i\mathbf{b}= \mathbf{A}^{\dagger}\mathbf{b}$. ◻ ## Complex-symmetric Systems {#sec:CSMINRES} We now consider the case where $\mathbf{A}\in \mathbb{S}^{d \times d}$ is complex-symmetric, i.e., $\mathbf{A}^{\intercal}= \mathbf{A}$ but $\mathbf{A}^{*}\neq \mathbf{A}$. Least-squares systems involving such matrices arise in many applications such as data fitting [@luk2002exponential; @ammar1999computation], viscoelasticity [@christensen2012theory; @arts1992experimental], quantum dynamics [@bar1997fast; @bar1995new], electromagnetics [@arbenz2004jacobi], and power systems [@howle2005iterative]. Although MINRES was initially designed for handling least-squares problems involving Hermitian matrices, it has since been extended to systems involving complex-symmetric matrices [@choi2013minimal]. Here, we extend our lifting procedure for obtaining the pseudo-inverse solution in such settings. 
In order to provide clear explanations for the lifting strategy in this case, we first briefly review the construction of the extension of MINRES that is adapted for complex-symmetric systems. For this, we mostly follow the development in the original work of [@choi2013minimal] with a few modifications. ### Saunders process {#saunders-process .unnumbered} For complex-symmetric systems, the usual Lanczos process [@paige1975solution] is replaced with what is referred to as the Saunders process [@choi2013minimal; @saunders1988two]. After $t$ iterations of the Saunders process, and in the absence of round-off errors, the Saunders vectors form a matrix $\mathbf{V}_{t+1} = [ \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_{t+1} ] \in \mathbb{C}^{d\times (t+1)}$ whose columns are orthonormal, span $\mathcal{S}_{t+1}(\mathbf{A}, \mathbf{b})$, and satisfy the relation $\mathbf{A}\bar{\mathbf{V}}_{t} = \mathbf{V}_{t+1} \hat{\mathbf{T}}_{t}$. Recall that the Saunders subspace [@choi2013minimal; @saunders1988two] is defined as $$\begin{aligned} \label{eq:saunders} \mathcal{S}_{t}(\mathbf{A}, \mathbf{b}) = {\mathcal{K}_{t_1}} (\mathbf{A}\bar{\mathbf{A}}, \mathbf{b}) \oplus {\mathcal{K}_{t_2}} (\mathbf{A}\bar{\mathbf{A}}, \mathbf{A}\bar{\mathbf{b}}), \quad t_1 + t_2 = t, \quad 0 \leq t_1 - t_2 \leq 1,\end{aligned}$$ where $\oplus$ is the direct sum operator. 
Here, $\hat{\mathbf{T}}_{t}$ is a tridiagonal upper-Hessenberg matrix of the form $$\begin{aligned} \label{eq:tridiagonal_T} \hat{\mathbf{T}}_{t} &= \begin{bmatrix} \alpha_1 & \beta_2 & & & \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \beta_3 & \alpha_3 & \ddots & \\ & & \ddots & \ddots & \beta_{t} \\ & & & \beta_{t} & \alpha_{t} \\ \hdashline & & & & \beta_{t+1} \\ \end{bmatrix} \triangleq \begin{bmatrix} \mathbf{T}_{t} \\ \beta_{t+1}\mathbf{e}^\intercal_{t} \end{bmatrix} \in \mathbb{C}^{(t+1) \times t},\end{aligned}$$ where $\mathbf{T}_{t}^{\intercal} = \mathbf{T}_{t}= \mathbf{V}_{t}^{*}\mathbf{A}\bar{\mathbf{V}}_{t}= \mathbf{V}_{t}^{*}\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\intercal}\bar{\mathbf{V}}_{t}\in \mathbb{C}^{t \times t}$ is complex-symmetric. Subsequently, we get the three-term recursion $$\begin{aligned} \label{eq:lanczos_CS} \mathbf{A}\bar{\mathbf{v}}_{t} = \beta_{t} \mathbf{v}_{t-1} + \alpha_{t} \mathbf{v}_{t} + \beta_{t+1} \mathbf{v}_{t+1}, \quad t \geq 1,\end{aligned}$$ where $\alpha_{t}$ can be computed as $\alpha_{t} = \langle \mathbf{v}_{t}, \mathbf{A}\bar{\mathbf{v}}_{t} \rangle$, and $\beta_{t+1} > 0$ is chosen by enforcing that $\mathbf{v}_{t+1}$ is a unit vector for all $1 \leq t \leq g - 1$, i.e., $\beta_{t+1} = \| \beta_{t+1} \mathbf{v}_{t+1} \|$. 
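The three-term recursion above can be sketched in a few lines of NumPy. The following minimal sketch (our own illustration, with no reorthogonalization and assuming no breakdown, i.e., $\beta_{t+1} > 0$ throughout) builds $\mathbf{V}_{t+1}$ and $\hat{\mathbf{T}}_{t}$ for a random complex-symmetric $\mathbf{A}$, then checks the orthonormality of the Saunders vectors and the relation $\mathbf{A}\bar{\mathbf{V}}_{t} = \mathbf{V}_{t+1} \hat{\mathbf{T}}_{t}$:

```python
import numpy as np

def saunders(A, b, k):
    """k steps of the Saunders process for complex-symmetric A (no reorthogonalization)."""
    d = A.shape[0]
    V = np.zeros((d, k + 1), dtype=complex)
    T_hat = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for t in range(k):
        w = A @ V[:, t].conj()                      # A * conj(v_t)
        if t > 0:
            w = w - T_hat[t, t - 1] * V[:, t - 1]   # subtract beta_t * v_{t-1}
        alpha = np.vdot(V[:, t], w)                 # alpha_t = <v_t, A conj(v_t)>
        w = w - alpha * V[:, t]
        beta_next = np.linalg.norm(w)               # beta_{t+1} > 0 assumed (no breakdown)
        T_hat[t, t] = alpha
        T_hat[t + 1, t] = beta_next
        if t + 1 < k:
            T_hat[t, t + 1] = beta_next             # symmetric tridiagonal block T_k
        V[:, t + 1] = w / beta_next
    return V, T_hat

rng = np.random.default_rng(0)
d, k = 8, 5
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = B + B.T                                          # complex-symmetric: A^T = A, A^* != A
b = rng.standard_normal(d) + 1j * rng.standard_normal(d)

V, T_hat = saunders(A, b, k)
print(np.allclose(V.conj().T @ V, np.eye(k + 1)))    # Saunders vectors orthonormal
print(np.allclose(A @ V[:, :k].conj(), V @ T_hat))   # A conj(V_k) = V_{k+1} T_hat_k
```

In exact arithmetic, the three-term recursion already yields full orthogonality (mirroring the Hermitian Lanczos case); in floating point, orthogonality degrades for larger $t$, just as for Lanczos.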
Letting $\mathbf{x}_{t} = \bar{\mathbf{V}}_{t} \mathbf{y}_{t}$ for some $\mathbf{y}_{t} \in \mathbb{C}^{t}$, it follows that the residual $\mathbf{r}_{t}$ can be written as $$\begin{aligned} %\label{eq:rk} \mathbf{r}_{t} = \mathbf{b}- \mathbf{A}\mathbf{x}_{t} = \mathbf{b}- \mathbf{A}\bar{\mathbf{V}}_{t} \mathbf{y}_{t} = \mathbf{b}- \mathbf{V}_{t+1} \hat{\mathbf{T}}_{t} \mathbf{y}_{t} = \mathbf{V}_{t+1}(\beta_1 \mathbf{e}_1 - \hat{\mathbf{T}}_{t} \mathbf{y}_{t}), \quad \beta_{1} = \| \mathbf{b}\|,\end{aligned}$$ which yields the sub-problems of complex-symmetric MINRES, $$\begin{aligned} \label{eq:MINRES_sub} \min_{\mathbf{y}_{t} \in \mathbb{C}^{t}} \| \beta_1 \mathbf{e}_1 - \hat{\mathbf{T}}_{t} \mathbf{y}_{t} \|^2.\end{aligned}$$ ### QR decomposition {#qr-decomposition .unnumbered} In contrast to [@choi2013minimal], to perform the QR decomposition of $\hat{\mathbf{T}}_{t}$, we employ the complex-symmetric Householder reflector [@trefethen1997numerical Exercise 10.4]. This is a judicious choice to maintain some of the classical properties of MINRES in the Hermitian case such as $\| {\mathbf{r}}_{t}\| = \phi_{t} > 0$, as well as the singular value approximations using $\gamma_{t}^{[2]} > 0$. Doing this, we obtain $\mathbf{Q}_{t}\in \mathbb{C}^{(t+1) \times (t+1)}$ and $\hat{\mathbf{R}}_{t}\in \mathbb{C}^{(t+1) \times t}$ via [\[eq:T_QR_decomp\]]{#eq:T_QR_decomp label="eq:T_QR_decomp"} $$\begin{aligned} \mathbf{Q}_{t} \hat{\mathbf{T}}_{t} = \hat{\mathbf{R}}_{t} \triangleq \begin{bmatrix} \mathbf{R}_{t}\\ \mathbf{0}^{\intercal} \end{bmatrix}, \quad \mathbf{R}_{t} = \begin{bmatrix} \gamma_1^{[2]} & \delta_2^{[2]} & \epsilon_3 & \\ &\gamma_2^{[2]} & \delta_3^{[2]} & \ddots \\ & & \ddots & \ddots & \epsilon_{t} \\ & & & \gamma_{t-1}^{[2]} & \delta_{t}^{[2]} \\ & & & & \gamma_{t}^{[2]} \\ \end{bmatrix}&. 
\label{eq:block_R} \\ \mathbf{Q}_{t} \triangleq \prod_{i=1}^{t}\mathbf{Q}_{i,i+1}, \quad \mathbf{Q}_{i,i+1} \triangleq \begin{bmatrix} \mathbf{I}_{i-1} & & & \\ & \bar{c}_{i} & s_{i} &\\ & s_{i} & -c_{i} & \\ & & & \mathbf{I}_{t-i} \end{bmatrix}&, \label{eq:block_Q} \end{aligned}$$ where for $1 \leq i \leq t$, $\gamma_{i}^{[2]} \geq 0$, $\delta_{i}^{[2]} \in \mathbb{C}$, $\epsilon_{i} \geq 0$, $c_{i} \in \mathbb{C}$, $0 \leq s_{i} \leq 1$, $| c_{i} |^2 + s_{i}^2 = 1$, $$\begin{aligned} % \label{eq:c_s_r2} c_{i} = \frac{\gamma_{i}}{\sqrt{| \gamma_{i} |^2 + \beta_{i+1}^2}}, \quad s_{i} = \frac{\beta_{i+1}}{\sqrt{| \gamma_{i} |^2 + \beta_{i+1}^2}}, \quad \gamma_{i}^{[2]} = \bar{c}_{i} \gamma_{i} + s_{i} \beta_{i+1} = \sqrt{| \gamma_{i} |^2 + \beta_{i+1}^2},\end{aligned}$$ and $\gamma_{i}$ is defined in [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}. In fact, the same series of transformations is also simultaneously applied to $\beta_{1} \mathbf{e}_1$, i.e., $$\begin{aligned} \mathbf{Q}_{t}\beta_1\mathbf{e}_1 = \beta_1 \mathbf{Q}_{t,t+1} \dots \mathbf{Q}_{2,3} \begin{bmatrix} \bar{c}_1\\ s_1\\ \bm{0}_{t-1} \end{bmatrix} = \beta_1 \mathbf{Q}_{t,t+1} \dots \mathbf{Q}_{3,4} \begin{bmatrix} \bar{c}_1\\ s_1 \bar{c}_2\\ s_1 s_2\\ \bm{0}_{t-2} \end{bmatrix} = \beta_1 \begin{bmatrix} \bar{c}_1\\ s_1 \bar{c}_2\\ \vdots\\ s_1 s_2\dots s_{t-1} \bar{c}_{t}\\ s_1 s_2\dots s_{t-1} s_{t} \end{bmatrix} \triangleq \begin{bmatrix} \tau_1\\ \tau_2\\ \vdots\\ \tau_{t}\\ \phi_{t} \end{bmatrix} \triangleq \begin{bmatrix} \mathbf{t}_{t} \\ \phi_{t} \end{bmatrix}.\end{aligned}$$ [\[rm:real\]]{#rm:real label="rm:real"} By [@saad2011numerical Theorem 6.2], the Lanczos process applied to a Hermitian matrix gives rise to a real symmetric tridiagonal matrix $\mathbf{T}_{t}= \mathbf{V}_{t}^{*}\mathbf{A}\mathbf{V}_{t}\in \mathbb{S}^{t \times t}$.
Therefore, the QR decomposition for the preconditioned MINRES with such Hermitian matrices involves real components, e.g., $c_t = \bar{c}_{t} \in \mathbb{R}$. This is in contrast to the complex-symmetric case. Having introduced these different terms, we can solve [\[eq:MINRES_sub\]](#eq:MINRES_sub){reference-type="ref" reference="eq:MINRES_sub"} by noting that $$\begin{aligned} \min_{\mathbf{y}_{t} \in \mathbb{C}^{t}} \| \mathbf{r}_{t} \| &= \min_{\mathbf{y}_{t} \in \mathbb{C}^{t}} \| \beta_1\mathbf{e}_1 - \hat{\mathbf{T}}_{t}\mathbf{y}_{t} \| = \min_{\mathbf{y}_{t} \in \mathbb{C}^{t}} \| \mathbf{Q}_{t}^{*}(\mathbf{Q}_{t}\beta_1\mathbf{e}_1 - \mathbf{Q}_{t} \hat{\mathbf{T}}_{t} \mathbf{y}_{t}) \| \\ & = \min_{\mathbf{y}_{t} \in \mathbb{C}^{t}} \| \mathbf{Q}_{t}\beta_1\mathbf{e}_1 - \mathbf{Q}_{t} \hat{\mathbf{T}}_{t} \mathbf{y}_{t} \| = \min_{\mathbf{y}_{t} \in \mathbb{C}^{t}} \left\|{\begin{bmatrix} \mathbf{t}_{t}\\ \phi_{t} \end{bmatrix} - \begin{bmatrix} \mathbf{R}_{t} \\ \mathbf{0}^{\intercal} \end{bmatrix}\mathbf{y}_{t}}\right\|.\end{aligned}$$ Note that this in turn implies that $\| \mathbf{r}_{t} \| = \phi_{t}$. We also trivially have $\beta_{1} = \phi_{0} = \| \mathbf{b} \|$. ### Updates {#updates .unnumbered} Let $t < g$ and define $\mathbf{D}_{t} \in \mathbb{C}^{d \times t}$ from solving the lower triangular system $\mathbf{R}_{t}^{\intercal} \mathbf{D}_{t}^{\intercal} = \mathbf{V}_{t}^{*}$ where $\mathbf{R}_{t}$ is as in [\[eq:T_QR_decomp\]](#eq:T_QR_decomp){reference-type="ref" reference="eq:T_QR_decomp"}. Now, letting $\mathbf{V}_{t} = [ \mathbf{V}_{t-1}, \mathbf{v}_{t}]$, and using the fact that $\mathbf{R}_{t}$ is upper-triangular, we get the recursion $\mathbf{D}_{t} = [ \mathbf{D}_{t-1}, \mathbf{d}_{t}]$ for some vector $\mathbf{d}_{t}$.
As a result, using $\mathbf{R}_{t} \mathbf{y}_{t} = \mathbf{t}_{t}$, one can update the iterate as $$\begin{aligned} % \label{eq:updates_x_d} \mathbf{x}_{t} = \bar{\mathbf{V}}_{t} \mathbf{y}_{t} = \mathbf{D}_{t} \mathbf{R}_{t} \mathbf{y}_{t} = \mathbf{D}_{t} \mathbf{t}_{t} = \begin{bmatrix} \mathbf{D}_{t-1} & \mathbf{d}_{t} \end{bmatrix} \begin{bmatrix} \mathbf{t}_{t-1} \\ \tau_{t} \end{bmatrix} = \mathbf{x}_{t-1} + \tau_{t} \mathbf{d}_{t},\end{aligned}$$ where we let $\mathbf{x}_{0} = \mathbf{0}$. Furthermore, applying $\bar{\mathbf{V}}_{t} = \mathbf{D}_{t} \mathbf{R}_{t}$ and the upper-triangular form of $\mathbf{R}_{t}$ in [\[eq:T_QR_decomp\]](#eq:T_QR_decomp){reference-type="eqref" reference="eq:T_QR_decomp"}, we obtain the following relationship for $\mathbf{v}_{t}$: $$\begin{aligned} % \label{eq:updates_v_d_CS} \bar{\mathbf{v}}_{t} = \epsilon_{t} \mathbf{d}_{t-2} + \delta^{[2]}_{t} \mathbf{d}_{t-1} + \gamma^{[2]}_{t} \mathbf{d}_{t}.\end{aligned}$$ All of the above steps constitute the MINRES algorithm, adapted for complex-symmetric systems, which is given in [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}. We note that although the bulk of [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} is given for the Hermitian case, the particular modifications needed for complex symmetric settings are highlighted on the right column of [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}. [\[lemma:residual_CS\]]{#lemma:residual_CS label="lemma:residual_CS"} In [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} for complex-symmetric matrices, ${\mathbf{r}}_{t}$ can be updated via $$\begin{aligned} {\mathbf{r}}_{t}= s_t^2 {\mathbf{r}}_{t-1}- \phi_{t} \bar{c}_t \mathbf{v}_{t+1}. 
\end{aligned}$$ *Proof.* By [@choi2013minimal Eqn (B.1)] and [\[eq:block_Q\]](#eq:block_Q){reference-type="ref" reference="eq:block_Q"}, it holds that $$\begin{aligned} {\mathbf{r}}_{t}&= \phi_{t} \mathbf{V}_{t+1}\mathbf{Q}_{t}^{*} \mathbf{e}_{t+1} = \phi_{t} \begin{bmatrix} \mathbf{V}_{t}& \mathbf{v}_{t+1} \end{bmatrix} \begin{bmatrix} \mathbf{Q}_{t-1}^{*} & \\ & 1 \end{bmatrix} \begin{bmatrix} s_t \mathbf{e}_t \\ -\bar{c}_t \end{bmatrix} \\ &= \phi_{t} s_t \mathbf{V}_{t}\mathbf{Q}_{t-1}^{*} \mathbf{e}_t - \phi_{t} \bar{c}_t \mathbf{v}_{t+1} = s_t^2 \phi_{t-1} \mathbf{V}_{t}\mathbf{Q}_{t-1}^{*} \mathbf{e}_t - \phi_{t} \bar{c}_t \mathbf{v}_{t+1}, \end{aligned}$$ as desired. ◻ We now give the lifting procedure to obtain the pseudo-inverse solution of [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} from [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} when $\mathbf{A}$ is complex-symmetric. In our analysis, we use the Takagi singular value decomposition (SVD) of $\mathbf{A}$, which for such matrices is guaranteed to exist in a symmetric form [@bunse1988singular]. Specifically, the Takagi SVD of $\mathbf{A}$ is given by $$\begin{aligned} \label{eq:decomp_A_Ad_CS} \mathbf{A}&= \begin{bmatrix} \mathbf{U}& \mathbf{U}_{\perp} \end{bmatrix} \begin{bmatrix} \mathbf{\Sigma}& \mathbf{0}\\ \mathbf{0}& \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{U}& \mathbf{U}_{\perp} \end{bmatrix}^{\intercal} = \mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\intercal},\end{aligned}$$ where $\mathbf{U}\in \mathbb{C}^{d\times r_{\mathbf{A}}}$ and $\mathbf{U}_{\perp}\in \mathbb{C}^{d\times (d- r_{\mathbf{A}})}$ are orthogonal matrices and $\Sigma \in \mathbb{R}^{r_{\mathbf{A}} \times r_{\mathbf{A}}}$ is a diagonal matrix containing $r_{\mathbf{A}} \leq d$ positive values.
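For reference, a Takagi factorization can be obtained from an ordinary SVD. The sketch below is our own illustration and assumes the singular values of $\mathbf{A}$ are distinct, in which case $\bar{\mathbf{V}} = \mathbf{W}\mathbf{D}$ for a diagonal unitary $\mathbf{D}$ whenever $\mathbf{A}= \mathbf{W}\mathbf{\Sigma}\mathbf{V}^{*}$ is an SVD; a square root of the phase matrix $\mathbf{D}$ is then absorbed into $\mathbf{W}$:

```python
import numpy as np

def takagi(A):
    """Takagi SVD A = U diag(s) U^T of a complex-symmetric A.

    Minimal sketch: assumes the singular values of A are distinct."""
    W, s, Vh = np.linalg.svd(A)               # A = W diag(s) Vh, with V = Vh^*
    D = np.diag(W.conj().T @ Vh.T)            # conj(V) = W D for a diagonal unitary D
    U = W * np.sqrt(D)                        # scale column j of W by sqrt(D_j)
    return U, s

rng = np.random.default_rng(3)
d = 6
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = B + B.T                                    # complex-symmetric

U, s = takagi(A)
print(np.allclose(U @ np.diag(s) @ U.T, A))    # A = U Sigma U^T
print(np.allclose(U.conj().T @ U, np.eye(d)))  # U is unitary
```

For repeated singular values, the phase correction must instead be carried out blockwise.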
Note that $\textnormal{Range}(\mathbf{U}_{\perp})$ is the orthogonal complement of $\textnormal{Range}(\mathbf{U})$, and hence $[\mathbf{U}\hspace*{2mm} \mathbf{U}_{\perp}]$ is unitary. From [\[def:pinverse\]](#def:pinverse){reference-type="ref" reference="def:pinverse"}, we can also see that $$\begin{aligned} \mathbf{A}^{\dagger}= \begin{bmatrix} \bar{\mathbf{U}}& \bar{\mathbf{U}}_{\perp} \end{bmatrix} \begin{bmatrix} \mathbf{\Sigma}^{-1} & \mathbf{0}\\ \mathbf{0}& \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{U}& \mathbf{U}_{\perp} \end{bmatrix}^{*}.\end{aligned}$$ [\[thm:CSMINRES_dagger\]]{#thm:CSMINRES_dagger label="thm:CSMINRES_dagger"} Let $\mathbf{x}_{g}$ be the final iterate of MINRES for solving [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} with complex-symmetric $\mathbf{A}\in \mathbb{C}^{d\times d}$. 1. If $\mathbf{r}_{g}= \bm{0}$, then $\mathbf{x}_{g}= \mathbf{A}^{\dagger}\mathbf{b}$. 2. If $\mathbf{r}_{g}\neq \bm{0}$, let us define the lifted vector $$\begin{aligned} \label{eq:CSMINRES_lifting} \hat{\mathbf{x}}^{+}= \mathbf{x}_{g}- \frac{\langle \bar{\mathbf{r}}_{g}, \mathbf{x}_{g} \rangle}{\| \bar{\mathbf{r}}_{g} \|^2} \bar{\mathbf{r}}_{g}= \left(\mathbf{I}_{d} - \frac{\bar{\mathbf{r}}_{g}\mathbf{r}_{g}^{\intercal}}{\| \bar{\mathbf{r}}_{g} \|^{2}}\right) \mathbf{x}_{g}, \end{aligned}$$ where $\mathbf{r}_{g}= \mathbf{b}- \mathbf{A}\mathbf{x}_{g}$. Then, we have $\hat{\mathbf{x}}^{+}= \mathbf{A}^{\dagger}\mathbf{b}$. *Proof.* When $\mathbf{b}\in \textnormal{Range}(\mathbf{A})$, we have $\mathbf{x}^{+}= \hat{\mathbf{x}}^{+}= \mathbf{x}_{g}$ by [@choi2013minimal Theorem 3.1]. So we only need to consider the case $\mathbf{b}\notin \textnormal{Range}(\mathbf{A})$. We can write $\mathbf{x}_{g}\in \textnormal{Span}(\bar{\mathbf{V}}_{g})$ as $\mathbf{x}_{g}= b_1 \bar{\mathbf{b}}+ b_2 \mathbf{p}_g$ where $\mathbf{p}_g \in \textnormal{Range}(\bar{\mathbf{A}})$.
So, using the matrix $\mathbf{U}$ in [\[eq:decomp_A\_Ad_CS\]](#eq:decomp_A_Ad_CS){reference-type="ref" reference="eq:decomp_A_Ad_CS"} and noting that $\bar{\mathbf{U}}^{*} = \mathbf{U}^{\intercal}$, we obtain $$\begin{aligned} \mathbf{x}_{g}&= b_1 (\bar{\mathbf{U}}\mathbf{U}^{\intercal}+ \bar{\mathbf{U}}_{\perp}\mathbf{U}_{\perp}^{\intercal}) \bar{\mathbf{b}}+ b_2 \mathbf{p}_g = b_1 \bar{\mathbf{U}}\mathbf{U}^{\intercal}\bar{\mathbf{b}}+ b_2 \mathbf{p}_g + b_1 \bar{\mathbf{U}}_{\perp}\mathbf{U}_{\perp}^{\intercal}\bar{\mathbf{b}}. \end{aligned}$$ Just as in the proof of [\[thm:MINRES_dagger\]](#thm:MINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger"}, we see that $\mathbf{A}\mathbf{p}= \mathbf{A}\mathbf{x}_{g}$ and ${\mathbf{q}}\in \textnormal{Null}(\mathbf{A}) = \textnormal{Range}(\bar{\mathbf{U}}_{\perp})$ where $$\begin{aligned} \mathbf{p}&\triangleq b_1 \bar{\mathbf{U}}\mathbf{U}^{\intercal}\bar{\mathbf{b}}+ b_2 \mathbf{p}_g\\ {\mathbf{q}}&\triangleq b_1 \bar{\mathbf{U}}_{\perp}\mathbf{U}_{\perp}^{\intercal}\bar{\mathbf{b}}= b_1 \textnormal{Conj}(\mathbf{U}_{\perp}\mathbf{U}_{\perp}^{*}\mathbf{b}) = b_{1} \bar{\mathbf{r}}_{g}. \end{aligned}$$ Since $\mathbf{p}$ satisfies the normal equation, i.e., $\mathbf{p}$ is a solution to [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}, [\[lemma:MINRES_solve_exactly\]](#lemma:MINRES_solve_exactly){reference-type="ref" reference="lemma:MINRES_solve_exactly"} implies $\mathbf{p}= \mathbf{A}^{\dagger}\mathbf{b}$. Furthermore, since $\mathbf{p}\perp {\mathbf{q}}$, we can compute $$\begin{aligned} b_1 = \frac{\langle \bar{\mathbf{r}}_{g}, \mathbf{x}_{g} \rangle}{\| \bar{\mathbf{r}}_{g} \|^2}, \end{aligned}$$ which gives the desired result. 
◻ Similar to [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"}, the lifting step in [\[eq:CSMINRES_lifting\]](#eq:CSMINRES_lifting){reference-type="ref" reference="eq:CSMINRES_lifting"} can be regarded as the orthogonal projection of the final iterate $\mathbf{x}_{g}$ onto $\bar{\mathbf{A}}\mathcal{S}_{g}(\mathbf{A}, \mathbf{b})$. To establish this, we first show that any iteration of MINRES in the complex-symmetric setting can be formulated as a special Petrov-Galerkin orthogonality condition with respect to the underlying Saunders subspace. [\[lemma:CS_property\]]{#lemma:CS_property label="lemma:CS_property"} In [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} for complex-symmetric matrices, we have $$\begin{aligned} \mathbf{A}\bar{\mathbf{r}}_{t}&= \phi_t (\gamma_{t+1} {\mathbf{v}}_{t+1}+ \delta_{t+2} \mathbf{v}_{t+2}), \\%\label{eq:Abr_CS} \\ \langle \bar{\mathbf{x}}_i, \mathbf{A}\bar{\mathbf{r}}_{t} \rangle &= 0, \quad 0 \leq i \leq t, \\%\label{eq:Abr_perp_CS} \\ {\mathbf{r}}_{t}&\perp \mathbf{A}\mathcal{S}_{t}(\bar{\mathbf{A}}, \bar{\mathbf{b}}), %\label{eq:Saunders_cond} \end{aligned}$$ where $0 \leq t \leq g$ and $\mathbf{v}_{g+1} = \bm{0}$. *Proof.* From [@choi2013minimal Eqn (B.1)], we have $$\begin{aligned} \mathbf{A}\bar{\mathbf{r}}_{t}= \phi_t \mathbf{A}\bar{\mathbf{V}}_{t+1} \mathbf{Q}_{t}^{\intercal}\mathbf{e}_{t+1} = \phi_t \begin{bmatrix} \mathbf{V}_{t}& {\mathbf{v}}_{t+1} \end{bmatrix} \hat{\mathbf{T}}_{t}\mathbf{Q}_{t}^{\intercal}\mathbf{e}_{t+1} = \phi_t (\gamma_{t+1} {\mathbf{v}}_{t+1}+ \delta_{t+2} \mathbf{v}_{t+2}) \end{aligned}$$ where the last equality is obtained using a similar reasoning as in [@choi2011minres Lemma 3.3]. 
By the orthogonality of columns of $\mathbf{V}_{t+1}$, we obtain $\mathbf{A}\bar{\mathbf{r}}_{t}\perp \textnormal{Span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_{t} \}$, i.e., $\langle \mathbf{A}\bar{\mathbf{r}}_{t}, \mathbf{z} \rangle = 0$ for any $\mathbf{z}\in \mathcal{S}_{t}(\mathbf{A}, \mathbf{b})$. This implies $\langle {\mathbf{r}}_{t}, \mathbf{A}\bar{\mathbf{z}} \rangle = 0$ from which the desired result follows. ◻ Using [\[lemma:CS_property\]](#lemma:CS_property){reference-type="ref" reference="lemma:CS_property"}, we get the equivalent of [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"} in the complex symmetric settings. [\[prop:lifting_t\_CS\]]{#prop:lifting_t_CS label="prop:lifting_t_CS"} In MINRES, $\hat{\mathbf{x}}_{t}^{\natural}$ is the orthogonal projection of ${\mathbf{x}}_{t}$ onto $\bar{\mathbf{A}}\mathcal{S}_{t}(\mathbf{A}, \mathbf{b})$, where $$\begin{aligned} \hat{\mathbf{x}}_{t}^{\natural}\triangleq {\mathbf{x}}_{t}- \frac{\langle \bar{\mathbf{r}}_{t}, {\mathbf{x}}_{t} \rangle}{\| \bar{\mathbf{r}}_{t} \|^2} \bar{\mathbf{r}}_{t}= \left(\mathbf{I}_{d} - \frac{\bar{\mathbf{r}}_{t}{\mathbf{r}}_{t}^{\intercal}}{\| \bar{\mathbf{r}}_{t} \|^{2}}\right) {\mathbf{x}}_{t}. \end{aligned}$$ # Pseudo-inverse Solutions in Preconditioned MINRES {#sec:pMINRES} To solve systems involving ill-conditioned matrices and, in turn, to speed up iterative procedures, preconditioning is a very effective strategy, which is in fact indispensable for large-scale problems [@saad2011numerical; @saad2003iterative; @bjorck1996numerical; @bjorck2015numerical]. The primary focus of research efforts on preconditioning has been on solving consistent systems of linear equations [@saad2003iterative; @saad2011numerical; @benzi2005numerical; @gould2017state; @morikuni2013inner; @morikuni2015convergence; @rozloznik2002krylov; @bjorck1996numerical; @bjorck2015numerical]. 
For instance, when solving the linear system $\mathbf{A}\mathbf{x}= \mathbf{b}$, where $\mathbf{A}\succ \mathbf{0}$, one can consider a positive definite matrix $\mathbf{M}\succ \mathbf{0}$ and solve the transformed problem $$\begin{aligned} \label{eq:pCG} (\mathbf{M}^{1/2}\mathbf{A}\mathbf{M}^{1/2}) (\mathbf{M}^{-1/2}\mathbf{x}) = \mathbf{M}^{1/2}\mathbf{b}.\end{aligned}$$ The formulation [\[eq:pCG\]](#eq:pCG){reference-type="ref" reference="eq:pCG"} is often referred to as split preconditioning. Alternative formulations also exist, e.g., the celebrated preconditioned CG uses a left-preconditioning $\mathbf{M}^{1/2}\mathbf{A}\mathbf{x}= \mathbf{M}^{1/2}\mathbf{b}$. If the matrix $\mathbf{M}$ is chosen appropriately, the transformed problem can exhibit significantly improved conditioning compared to the original formulation. For such problems, the preconditioning matrix $\mathbf{M}$ is naturally always taken to be non-singular, see, e.g., [@gould2017state; @morikuni2013inner; @morikuni2015convergence; @saad2011numerical; @saad2003iterative; @bjorck1996numerical; @bjorck2015numerical; @benzi2005numerical; @choi2013minimal; @choi2014algorithm]. In fact, for many iterative procedures, the matrix $\mathbf{M}$ is *required* to be positive definite, as in, e.g., preconditioned CG and MINRES. In stark contrast, preconditioning for linear least-squares problems, i.e., inconsistent systems where $\mathbf{b}\notin \textnormal{Range}(\mathbf{A})$, has received less attention [@bjorck1996numerical; @bjorck1999preconditioners]. In fact, the challenge of obtaining a pseudo-inverse solution to [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} is greatly exacerbated when the underlying matrix is preconditioned. This is in part due to the fact that, unlike the case of consistent linear systems, the particular choice of the preconditioner can have a drastic effect on the type of obtained solutions in inconsistent settings.
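As a toy illustration of the conditioning improvement promised by the transformation above (a sketch; the Jacobi-style choice $\mathbf{M}= \textnormal{diag}(\mathbf{A})^{-1}$ is just one simple possibility), split preconditioning can drastically reduce the condition number of a badly scaled SPD matrix:

```python
import numpy as np

# A badly scaled SPD matrix and Jacobi-style split preconditioning,
# with M = diag(A)^{-1} (one simple choice, used here only for illustration).
A = np.array([[1.0, 0.1],
              [0.1, 1.0e4]])
M_half = np.diag(1.0 / np.sqrt(np.diag(A)))     # M^{1/2}

A_pre = M_half @ A @ M_half                     # M^{1/2} A M^{1/2}, unit diagonal
print(np.linalg.cond(A) > 1.0e3)                # True: original is badly conditioned
print(np.linalg.cond(A_pre) < 1.01)             # True: nearly perfectly conditioned
```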
We illustrate such an effect in the following simple example. For any $a > 0$, consider $$\begin{aligned} \mathbf{A}= \begin{bmatrix} a & 0 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad \mathbf{b}= \begin{bmatrix} 1 \\ 1 \end{bmatrix} \notin \textnormal{Range}(\mathbf{A}). \end{aligned}$$ It can easily be verified that $$\begin{aligned} \mathbf{x}^{+}= \mathbf{A}^{\dagger}\mathbf{b}= \frac{1}{a} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \mathbf{r}^{+}\triangleq \mathbf{b}- \mathbf{A}\mathbf{x}^{+}= \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{aligned}$$ Now, we consider solving the corresponding preconditioned problem [\[eq:pCG\]](#eq:pCG){reference-type="ref" reference="eq:pCG"}, $$\begin{aligned} \min_{\mathbf{y}\in \mathbb{R}^{d}} \| (\mathbf{M}^{1/2}\mathbf{A}\mathbf{M}^{1/2}) \mathbf{y}- \mathbf{M}^{1/2}\mathbf{b} \|^{2}, \quad \text{where} \quad \mathbf{M}^{1/2}= \begin{bmatrix} b & 1 \\ 1 & 1 \end{bmatrix} \end{aligned}$$ for some $b > 1$. Specifically, we have $$\begin{aligned} \mathbf{M}^{1/2}\mathbf{A}\mathbf{M}^{1/2} % = \begin{bmatrix} % a b^2 & a b \\ % a b & a % \end{bmatrix} = a (b^2+1) \mathbf{c}\mathbf{c}^{\intercal}, \quad \text{where} \quad \mathbf{c}\triangleq \frac{1}{\sqrt{b^2+1}} \begin{bmatrix} b \\ 1 \end{bmatrix} \quad \text{and} \quad \mathbf{M}^{1/2}\mathbf{b}= \begin{bmatrix} b+1 \\ 2 \end{bmatrix}.
\end{aligned}$$ The pseudo-inverse solution of the above preconditioned least-squares is given by $$\begin{aligned} \mathbf{y}= [\mathbf{M}^{1/2}\mathbf{A}\mathbf{M}^{1/2}]^{\dagger} \mathbf{M}^{1/2}\mathbf{b}= \frac{1}{a (b^2+1)} \mathbf{c}\mathbf{c}^{\intercal} \begin{bmatrix} b+1 \\ 2 \end{bmatrix} = \frac{b^2 + b + 2}{a (b^2+1)^2}\begin{bmatrix} b \\ 1 \end{bmatrix}, \end{aligned}$$ which in turn amounts to $$\begin{aligned} \mathbf{x}= \mathbf{M}^{1/2}\mathbf{y}= \frac{b^2 + b + 2}{a (b^2+1)^2} \begin{bmatrix} b^2 + 1 \\ b + 1 \end{bmatrix} \quad \text{with} \quad \mathbf{r}= \mathbf{b}- \mathbf{A}\mathbf{x}= \begin{bmatrix} -\frac{b+1}{b^2+1} \\ 1 \end{bmatrix}. \end{aligned}$$ It then follows that $$\begin{aligned} \mathbf{x}- \mathbf{x}^{+}= \frac{b+1}{a (b^2+1)^2}\begin{bmatrix} b^2+1 \\ b^2 + b + 2 \end{bmatrix} \quad \text{and} \quad \mathbf{r}- \mathbf{r}^{+}= -\frac{b+1}{b^2+1} \begin{bmatrix} 1 \\ 0 \end{bmatrix}. \end{aligned}$$ In other words, even though the preconditioner is positive definite, the pseudo-inverse solutions and the respective residuals in the original problem and in the preconditioned version do not coincide. In fact, we can only expect to obtain $\mathbf{r}- \mathbf{r}^{+}\rightarrow \mathbf{0}$ and $\mathbf{x}- \mathbf{x}^{+}\rightarrow \mathbf{0}$ as $b \rightarrow \infty$. Hence, the choice of the preconditioner in inconsistent least-squares plays a much more subtle role than in the usual consistent linear system settings. Though the construction of preconditioners is predominantly done on an ad hoc basis, a wide range of positive (semi)definite preconditioners can be created by introducing a matrix $\mathbf{S}\in \mathbb{C}^{d\times m}$, where $m \geq 1$, and letting $\mathbf{M}= \mathbf{S}\mathbf{S}^{*}$. In this context, the matrix $\mathbf{S}$ is often referred to as the sub-preconditioner. Naturally, depending on the structure of $\mathbf{S}$, the matrix $\mathbf{M}$ can be singular. 
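The closed-form expressions in the example above can be spot-checked numerically for particular values, say $a = 2$ and $b = 3$ (a sketch using NumPy's pseudo-inverse):

```python
import numpy as np

a, b = 2.0, 3.0
A = np.array([[a, 0.0], [0.0, 0.0]])
rhs = np.array([1.0, 1.0])                       # the vector b of the example
M_half = np.array([[b, 1.0], [1.0, 1.0]])        # M^{1/2}

x_plus = np.linalg.pinv(A) @ rhs                 # unpreconditioned pseudo-inverse solution
y = np.linalg.pinv(M_half @ A @ M_half) @ (M_half @ rhs)
x = M_half @ y                                   # solution recovered from the preconditioned problem

coeff = (b**2 + b + 2) / (a * (b**2 + 1) ** 2)
print(np.allclose(x_plus, [1 / a, 0.0]))                        # True
print(np.allclose(x, coeff * np.array([b**2 + 1, b + 1])))      # True
print(np.allclose(rhs - A @ x, [-(b + 1) / (b**2 + 1), 1.0]))   # True
```

In particular, `x` and `x_plus` differ, confirming that a positive definite preconditioner changes the recovered pseudo-inverse solution in the inconsistent setting.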
Square, yet singular, sub-preconditioners $\mathbf{S}\succeq \mathbf{0}$ have been explored in the context of GMRES [@elden2012solving] and MINRES [@hong2022preconditioned]. Here, we consider an arbitrary sub-preconditioner $\mathbf{S}$ and study our lifting strategy in the presence of the resulting preconditioner $\mathbf{M}= \mathbf{S}\mathbf{S}^{*}$. Before delving any deeper, we state several facts that are used throughout the rest of this section. Let the economy SVD of $\mathbf{M}$ and $\mathbf{S}$ be given by [\[eq:def_Md_M\_Sd_S\]]{#eq:def_Md_M_Sd_S label="eq:def_Md_M_Sd_S"} $$\begin{aligned} \mathbf{M}&= \mathbf{P}\mathbf{\Sigma}^2 \mathbf{P}^{*}, \label{eq:def_Md_M}\\ \mathbf{S}&= \mathbf{P}\mathbf{\Sigma}\mathbf{K}^{*}, \label{eq:def_Sd_S} \end{aligned}$$ where $\mathbf{P}\in \mathbb{C}^{d\times r}$ and $\mathbf{K}\in \mathbb{C}^{m \times r}$ are orthogonal matrices, $\mathbf{\Sigma}\in \mathbb{R}^{r \times r}$ is a diagonal matrix with non-zero diagonal elements, and $r \leq \min\{d,m\}$ is the rank of $\mathbf{M}$. [\[fact:M_S\]](#fact:M_S){reference-type="ref" reference="fact:M_S"} lists several relationships among the factors in [\[eq:def_Md_M\_Sd_S\]](#eq:def_Md_M_Sd_S){reference-type="ref" reference="eq:def_Md_M_Sd_S"}. The proofs are trivial, and hence are omitted.
[\[fact:M_S\]]{#fact:M_S label="fact:M_S"} With [\[eq:def_Md_M\_Sd_S\]](#eq:def_Md_M_Sd_S){reference-type="ref" reference="eq:def_Md_M_Sd_S"}, we have [\[eq:M_S\_property\]]{#eq:M_S_property label="eq:M_S_property"} $$\begin{aligned} \mathbf{M}^{\dagger}&= \mathbf{P}\mathbf{\Sigma}^{-2} \mathbf{P}^{*}, \quad \mathbf{S}^{\dagger}= \mathbf{K}\mathbf{\Sigma}^{-1}\mathbf{P}^{*}, \label{eq:Md_Sd}\\ \mathbf{M}&= \mathbf{S}\mathbf{S}^{*}, \quad \mathbf{M}^{\dagger}= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{\dagger}, \label{eq:Md_STS}\\ \mathbf{M}\mathbf{M}^{\dagger}&= \mathbf{M}^{\dagger}\mathbf{M}= \mathbf{S}\mathbf{S}^{\dagger}= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{*}= \mathbf{P}\mathbf{P}^{*}, \label{eq:Md_PPH} \\ \mathbf{S}^{\dagger}\mathbf{S}&= \mathbf{S}^{*}[\mathbf{S}^{\dagger}]^{*}= \mathbf{K}\mathbf{K}^{*}, % \quad \SST \SSdT = \bSSd \bSS = \bKK \KKT, \label{eq:Md_KKH} \end{aligned}$$ ## Preconditioned Hermitian Systems {#sec:pMINRES_H} As one might expect, for linear least-squares problems arising from inconsistent systems, one can relax the invertibility or positive definiteness requirement for $\mathbf{M}$. Specific to our work here, when the matrix $\mathbf{A}$ is Hermitian, the preconditioner $\mathbf{M}\in \mathbb{C}^{d\times d}$ will only be required to be PSD. Recently, [@hong2022preconditioned] has considered such PSD preconditioners for MINRES in the context of solving consistent linear systems with singular matrix $\mathbf{A}$. It was shown that, since $\mathbf{A}$ and $\mathbf{M}$ are Hermitian and $\langle \mathbf{A}\mathbf{M}\mathbf{v}, \mathbf{w} \rangle_{\mathbf{M}} = \langle \mathbf{v},\mathbf{A}\mathbf{M}\mathbf{w} \rangle_{\mathbf{M}}$ for any $\mathbf{v},\mathbf{w}\in \mathbb{R}^{d}$, the matrix $\mathbf{A}\mathbf{M}$ is self-adjoint with respect to the $\langle \mathbf{v},\mathbf{w} \rangle_{\mathbf{M}} = \mathbf{v}^{*} \mathbf{M}\mathbf{w}$ inner-product. 
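The identities above, together with the $\langle \cdot, \cdot \rangle_{\mathbf{M}}$-self-adjointness of $\mathbf{A}\mathbf{M}$, are easy to confirm numerically even for a deliberately singular $\mathbf{M}$ (a sketch on random data):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = B + B.conj().T                                  # Hermitian A
S = rng.standard_normal((d, 3)) + 1j * rng.standard_normal((d, 3))
M = S @ S.conj().T                                  # PSD and singular: rank(M) <= 3 < d

# Pseudo-inverse identities from the fact above.
M_dag, S_dag = np.linalg.pinv(M), np.linalg.pinv(S)
print(np.allclose(M_dag, S_dag.conj().T @ S_dag))   # M^dag = [S^dag]^* S^dag
print(np.allclose(M @ M_dag, S @ S_dag))            # M M^dag = S S^dag = P P^*

# A M is self-adjoint in the semi-inner product <u, w>_M = u^* M w.
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
w = rng.standard_normal(d) + 1j * rng.standard_normal(d)
print(np.isclose(np.vdot(A @ M @ v, M @ w),         # <A M v, w>_M
                 np.vdot(v, M @ (A @ (M @ w)))))    # <v, A M w>_M
```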
This, in turn, gives rise to the right-preconditioning of $\mathbf{A}$ as $\mathbf{A}\mathbf{M}$. In such a consistent setting, the iterates of the right-preconditioned MINRES algorithm are generated as [\[eq:right_prec\]]{#eq:right_prec label="eq:right_prec"} $$\begin{aligned} \check{\mathbf{x}}_{t}&= \mathop{\mathrm{arg\,min}}_{\check{\mathbf{x}}\in \mathcal{K}_{t}(\mathbf{A}\mathbf{M},\mathbf{b})} \| \mathbf{b}- \mathbf{A}\mathbf{M}\check{\mathbf{x}} \|_{\mathbf{M}}^2, \label{eq:right_prec_yy}\\ {\mathbf{x}}_{t}&= \mathbf{M}\check{\mathbf{x}}_{t}, \label{eq:right_prec_xx} \end{aligned}$$ where ${\mathbf{x}}_{0}= \check{\mathbf{x}}_{0} = \bm{0}$ and $\| . \|^{2}_{\mathbf{M}} \triangleq\langle .,. \rangle_{\mathbf{M}}$ defines a semi-norm. This motivates us to consider the following more general problem $$\begin{aligned} \label{eq:right_prec_02} {\mathbf{x}}_{t}= % \argmin_{\xx \in \MM \Kt{\AA\MM,\bb}} \vnorm{\bb - \AA \MM \MMd \xx}_{\MM}^2 = \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathbf{M}\mathcal{K}_{t}(\mathbf{A}\mathbf{M},\mathbf{b})} \| \mathbf{b}- \mathbf{A}\mathbf{x} \|_{\mathbf{M}}^2.\end{aligned}$$ It can be easily shown that the iterate ${\mathbf{x}}_{t}$ from [\[eq:right_prec\]](#eq:right_prec){reference-type="ref" reference="eq:right_prec"} is a solution to [\[eq:right_prec_02\]](#eq:right_prec_02){reference-type="ref" reference="eq:right_prec_02"}. However, the converse is not necessarily true, i.e., for any ${\mathbf{x}}_{t}$ from [\[eq:right_prec_02\]](#eq:right_prec_02){reference-type="ref" reference="eq:right_prec_02"}, there might not exist $\check{\mathbf{x}}_{t}\in \mathcal{K}_{t}(\mathbf{A}\mathbf{M},\mathbf{b})$ satisfying [\[eq:right_prec_xx\]](#eq:right_prec_xx){reference-type="ref" reference="eq:right_prec_xx"}. 
Indeed, for a given ${\mathbf{x}}_{t}$ from [\[eq:right_prec_02\]](#eq:right_prec_02){reference-type="ref" reference="eq:right_prec_02"}, to have [\[eq:right_prec_xx\]](#eq:right_prec_xx){reference-type="ref" reference="eq:right_prec_xx"} we must have $\check{\mathbf{x}}_{t}= \mathbf{M}^{\dagger}{\mathbf{x}}_{t}+ \left(\mathbf{I}- \mathbf{M}^{\dagger}\mathbf{M}\right) \mathbf{z}$ for some $\mathbf{z}\in \mathbb{R}^{d}$. In other words, we must have $\check{\mathbf{x}}_{t}\in \mathbf{M}^{\dagger}\mathbf{M}\mathcal{K}_{t}(\mathbf{A}\mathbf{M},\mathbf{b}) \oplus \textnormal{Null}(\mathbf{M})$. Dropping the inconsequential term $\textnormal{Null}(\mathbf{M})$, it follows that the solutions of [\[eq:right_prec,eq:right_prec_02\]](#eq:right_prec,eq:right_prec_02){reference-type="ref" reference="eq:right_prec,eq:right_prec_02"} coincide in the restricted case where $\mathbf{b}\in \textnormal{Range}(\mathbf{A}) = \textnormal{Range}(\mathbf{M})$, which is precisely the setting considered in [@hong2022preconditioned]. Here, we consider the more general formulation [\[eq:right_prec_02\]](#eq:right_prec_02){reference-type="ref" reference="eq:right_prec_02"}. 
In fact, since $$\begin{aligned} \mathbf{M}\mathcal{K}_{t}(\mathbf{A}\mathbf{M},\mathbf{b}) &= \mathbf{M}\textnormal{Span}\left\{\mathbf{b},\mathbf{A}\mathbf{M}\mathbf{b}, (\mathbf{A}\mathbf{M})^{2} \mathbf{b},\ldots,(\mathbf{A}\mathbf{M})^{t-1} \mathbf{b}\right\} \\ &= \textnormal{Span}\left\{\mathbf{M}\mathbf{b}, \mathbf{M}\mathbf{A}\mathbf{M}\mathbf{b}, (\mathbf{M}\mathbf{A})^2 \mathbf{M}\mathbf{b},\ldots,(\mathbf{M}\mathbf{A})^{t-1}\mathbf{M}\mathbf{b}\right\} = \mathcal{K}_{t}(\mathbf{M}\mathbf{A},\mathbf{M}\mathbf{b}),\end{aligned}$$ the formulation [\[eq:right_prec_02\]](#eq:right_prec_02){reference-type="ref" reference="eq:right_prec_02"} is also equivalent to $$\begin{aligned} \label{eq:right_prec_03} {\mathbf{x}}_{t}= \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathcal{K}_{t}(\mathbf{M}\mathbf{A},\mathbf{M}\mathbf{b})} \| \mathbf{b}- \mathbf{A}\mathbf{x} \|_{\mathbf{M}}^2.\end{aligned}$$ Formulation [\[eq:right_prec_03\]](#eq:right_prec_03){reference-type="ref" reference="eq:right_prec_03"} constitutes the starting point for our theoretical analysis and algorithmic derivations. ### Derivation of the Preconditioned MINRES Algorithm {#derivation-of-the-preconditioned-minres-algorithm .unnumbered} Consider the formulation [\[eq:right_prec_03\]](#eq:right_prec_03){reference-type="ref" reference="eq:right_prec_03"} with $\mathbf{A}\in \mathbb{H}^{d \times d}$ and suppose the preconditioner $\mathbf{M}$ is given as $\mathbf{M}= \mathbf{S}\mathbf{S}^{*}$, where $\mathbf{S}\in \mathbb{C}^{d\times m}$ for some $m \geq 1$ is the sub-preconditioner matrix.
We have $$\begin{aligned} {\mathbf{x}}_{t}= \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathcal{K}_{t}(\mathbf{M}\mathbf{A},\mathbf{M}\mathbf{b})} \| \mathbf{b}- \mathbf{A}\mathbf{x} \|_{\mathbf{M}}^2 &= \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathcal{K}_{t}(\mathbf{M}\mathbf{A},\mathbf{M}\mathbf{b})} \| \mathbf{S}^{*}\left(\mathbf{b}- \mathbf{A}\mathbf{x}\right) \|^{2} = \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathbf{S}\mathcal{K}_{t}(\mathbf{S}^{*}\mathbf{A}\mathbf{S},\mathbf{S}^{*}\mathbf{b})} \| \mathbf{S}^{*}\left(\mathbf{b}- \mathbf{A}\mathbf{x}\right) \|^{2}.\end{aligned}$$ This allows us to consider the equivalent problem [\[eq:preq_MINRES\]]{#eq:preq_MINRES label="eq:preq_MINRES"} $$\begin{aligned} \tilde{\mathbf{x}}_{t}&= \mathop{\mathrm{arg\,min}}_{\tilde{\mathbf{x}}\in \mathcal{K}_{t}(\tilde{\mathbf{A}}, \tilde{\mathbf{b}})} \| \tilde{\mathbf{b}}- \tilde{\mathbf{A}}\tilde{\mathbf{x}} \|^2, \label{eq:pMINRES}\\ {\mathbf{x}}_{t}&= \mathbf{S}\tilde{\mathbf{x}}_{t}, \label{eq:def_x_r} \end{aligned}$$ where we have defined $$\begin{aligned} \label{eq:def_tA_tb_tr_hr} \tilde{\mathbf{A}}\triangleq \mathbf{S}^{*}\mathbf{A}\mathbf{S}\in \mathbb{H}^{m \times m}, \quad \text{and} \quad \tilde{\mathbf{b}}\triangleq \mathbf{S}^{*}\mathbf{b}\in \mathbb{C}^{m}.\end{aligned}$$ The residual of the system in [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"} is given by $\tilde{\mathbf{r}}_{t}= \tilde{\mathbf{b}}- \tilde{\mathbf{A}}\tilde{\mathbf{x}}_{t}$, which implies $$\begin{aligned} \label{eq:trrt_rrt} \tilde{\mathbf{r}}_{t}= \mathbf{S}^{*}{\mathbf{r}}_{t}, \quad \text{where} \quad {\mathbf{r}}_{t}= \mathbf{b}- \mathbf{A}{\mathbf{x}}_{t}.\end{aligned}$$ Clearly, the matrix in the least-squares problem [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"} is itself Hermitian. 
As a result, the Lanczos process [@paige1975solution; @liu2022minres] underlying the MINRES algorithm applied to [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"} amounts to $$\begin{aligned} \label{eq:lanczos} \beta_{t+1}\tilde{\mathbf{v}}_{t+1} = \tilde{\mathbf{A}}\tilde{\mathbf{v}}_{t}- \alpha_{t}\tilde{\mathbf{v}}_{t}- \beta_{t}\tilde{\mathbf{v}}_{t-1}, \quad 1 \leq t \leq \tilde{g}.\end{aligned}$$ Denoting $$\begin{aligned} \label{eq:def_z_w_d} \beta_{t}\tilde{\mathbf{v}}_{t}= \mathbf{S}^{*}\mathbf{z}_{t}, \quad {\mathbf{w}}_{t}= \mathbf{M}\mathbf{z}_{t}= \beta_{t}\mathbf{S}\tilde{\mathbf{v}}_{t},\end{aligned}$$ and using [\[eq:def_z\_w_d,eq:Md_STS\]](#eq:def_z_w_d,eq:Md_STS){reference-type="ref" reference="eq:def_z_w_d,eq:Md_STS"}, we obtain $$\begin{aligned} \mathbf{S}^{*}\mathbf{z}_{t+1} &= \frac{1}{\beta_{t}} \mathbf{S}^{*}\mathbf{A}\mathbf{S}\mathbf{S}^{*}\mathbf{z}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{S}^{*}\mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{S}^{*}\mathbf{z}_{t-1} \\ &= \mathbf{S}^{*}\left(\frac{1}{\beta_{t}} \mathbf{A}\mathbf{M}\mathbf{z}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{z}_{t-1}\right).\end{aligned}$$ This relation allows us to define the following three-term recurrence relation for $\mathbf{z}_{t}$ $$\begin{aligned} \label{eq:updates_z_w} \mathbf{z}_{t+1}= \frac{1}{\beta_{t}} \mathbf{A}\mathbf{M}\mathbf{z}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{z}_{t-1} = \frac{1}{\beta_{t}} \mathbf{A}{\mathbf{w}}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{z}_{t-1}, \quad 1 \leq t \leq \tilde{g},\end{aligned}$$ with $\mathbf{z}_{0} = \mathbf{0}$, $\mathbf{z}_1 = \mathbf{b}$, and $\beta_0 = \beta_1 = \| \tilde{\mathbf{b}}\|$. 
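The recurrence [\[eq:updates_z\_w\]](#eq:updates_z_w){reference-type="ref" reference="eq:updates_z_w"} can also be exercised numerically. The sketch below (ours; random real data, using the coefficient formulas $\alpha_{t}= \langle {\mathbf{w}}_{t}, \mathbf{A}{\mathbf{w}}_{t} \rangle / \beta_{t}^2$ and $\beta_{t+1}= \sqrt{\langle \mathbf{z}_{t+1}, \mathbf{w}_{t+1} \rangle}$ established later in this section) runs the $\mathbf{z}$-updates and verifies that the implicitly generated vectors $\mathbf{S}^{*}\mathbf{z}_{t}/ \beta_{t}$ are orthonormal, as the Lanczos vectors $\tilde{\mathbf{v}}_{t}$ must be:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, steps = 8, 5, 4
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)                 # Hermitian A
S = rng.standard_normal((d, m))         # M = S S^*; S is used only for verification
M = S @ S.T
b = rng.standard_normal(d)

# z-recurrence: z_0 = 0, z_1 = b, beta_0 = beta_1 = ||S^* b||, w_t = M z_t
z_prev, z = np.zeros(d), b.copy()
beta_prev = beta = np.sqrt(b @ (M @ b))
Z, betas = [], []
for _ in range(steps):
    w = M @ z
    Z.append(z)
    betas.append(beta)
    alpha = (w @ (A @ w)) / beta**2                  # alpha_t = <w_t, A w_t> / beta_t^2
    z_next = (A @ w) / beta - (alpha / beta) * z - (beta / beta_prev) * z_prev
    z_prev, z = z, z_next
    beta_prev, beta = beta, np.sqrt(z @ (M @ z))     # beta_{t+1} = sqrt(<z_{t+1}, w_{t+1}>)

# the recovered Lanczos vectors v~_t = S^* z_t / beta_t must be orthonormal
V = np.column_stack([S.T @ zt / bt for zt, bt in zip(Z, betas)])
assert np.allclose(V.T @ V, np.eye(steps), atol=1e-8)
```

Note that the loop touches the preconditioner exclusively through products with $\mathbf{M}$; the sub-preconditioner $\mathbf{S}$ appears only in the final verification.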
Thus, the updates [\[eq:updates_z\_w\]](#eq:updates_z_w){reference-type="ref" reference="eq:updates_z_w"} yield the Lanczos process [\[eq:lanczos\]](#eq:lanczos){reference-type="ref" reference="eq:lanczos"}. To present the main result of this section, we first need to establish a few technical lemmas. [\[lemma:Kry_tA_tb\]](#lemma:Kry_tA_tb){reference-type="ref" reference="lemma:Kry_tA_tb"} expresses the corresponding Krylov subspace $\mathcal{K}_{t}(\tilde{\mathbf{A}}, \tilde{\mathbf{b}})$ in terms of the underlying components $\mathbf{A}$, $\mathbf{b}$, $\mathbf{S}$, and $\mathbf{M}$. [\[lemma:Kry_tA_tb\]]{#lemma:Kry_tA_tb label="lemma:Kry_tA_tb"} For any $1 \leq t \leq \tilde{g}$, we have $\mathcal{K}_{t}(\tilde{\mathbf{A}}, \tilde{\mathbf{b}}) = \mathbf{S}^{\dagger}\mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b})$. *Proof.* By [\[eq:def_tA_tb_tr_hr,eq:Md_STS,eq:Md_KKH\]](#eq:def_tA_tb_tr_hr,eq:Md_STS,eq:Md_KKH){reference-type="ref" reference="eq:def_tA_tb_tr_hr,eq:Md_STS,eq:Md_KKH"} using the identity $\mathbf{S}^{*}= \mathbf{S}^{\dagger}\mathbf{S}\mathbf{S}^{*}= \mathbf{S}^{\dagger}\mathbf{M}$, we have $$\begin{aligned} \mathcal{K}_{t}(\tilde{\mathbf{A}}, \tilde{\mathbf{b}}) &= \textnormal{Span}\{\tilde{\mathbf{b}}, \tilde{\mathbf{A}}\tilde{\mathbf{b}}, \ldots, \tilde{\mathbf{A}}^{t-1} \tilde{\mathbf{b}}\} = \textnormal{Span}\{\mathbf{S}^{*}\mathbf{b}, \mathbf{S}^{*}\mathbf{A}\mathbf{M}\mathbf{b}, \ldots, \mathbf{S}^{*}\mathbf{A}\left[\mathbf{M}\mathbf{A}\right]^{t-2} \mathbf{M}\mathbf{b}\} \\ &= \mathbf{S}^{\dagger}\times \textnormal{Span}\{\mathbf{M}\mathbf{b}, \mathbf{M}\mathbf{A}\mathbf{M}\mathbf{b}, \ldots, \mathbf{M}\mathbf{A}\left[\mathbf{M}\mathbf{A}\right]^{t-2} \mathbf{M}\mathbf{b}\}= \mathbf{S}^{\dagger}\mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}).
\end{aligned}$$ ◻ Next, we give some properties of the vectors $\mathbf{z}_{i}$ and $\mathbf{w}_{i}$. [\[lemma:Kry_spans_z\_w\]]{#lemma:Kry_spans_z_w label="lemma:Kry_spans_z_w"} For any $1\leq t \leq \tilde{g}$, we have $$\begin{aligned} \textnormal{Span}\{\mathbf{w}_{1}, \mathbf{w}_{2}, \ldots, \mathbf{w}_{t}\} &= \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}),\\ \mathbf{P}\mathbf{P}^{*}\textnormal{Span}\{\mathbf{z}_{1}, \mathbf{z}_{2}, \ldots, \mathbf{z}_{t}\} &= \mathbf{P}\mathbf{P}^{*}\mathcal{K}_{t}(\mathbf{A}\mathbf{M}, \mathbf{b}), \end{aligned}$$ where $\mathbf{P}$ is as in [\[eq:def_Md_M\]](#eq:def_Md_M){reference-type="ref" reference="eq:def_Md_M"}. Furthermore, $\mathbf{w}_{\tilde{g}+1} = \mathbf{0}$ and $\mathbf{z}_{\tilde{g}+1} \in \textnormal{Null}(\mathbf{M})$. *Proof.* By [\[eq:def_z\_w_d,lemma:Kry_tA_tb,eq:Md_PPH,eq:Md_STS\]](#eq:def_z_w_d,lemma:Kry_tA_tb,eq:Md_PPH,eq:Md_STS){reference-type="ref" reference="eq:def_z_w_d,lemma:Kry_tA_tb,eq:Md_PPH,eq:Md_STS"}, we have $$\begin{aligned} \textnormal{Span}\left\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_t\right\} &= \mathbf{S}\times \textnormal{Span}\left\{\tilde{\mathbf{v}}_1, \tilde{\mathbf{v}}_2, \ldots, \tilde{\mathbf{v}}_t\right\} = \mathbf{S}\mathcal{K}_{t}(\tilde{\mathbf{A}}, \tilde{\mathbf{b}}) \\ &= \mathbf{S}\mathbf{S}^{\dagger}\mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}) = \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}), \end{aligned}$$ and $$\begin{aligned} \mathbf{P}\mathbf{P}^{*}\textnormal{Span}\left\{\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_t\right\} &= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{*}\times \textnormal{Span}\left\{\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_t\right\} = [\mathbf{S}^{\dagger}]^{*}\times \textnormal{Span}\left\{\tilde{\mathbf{v}}_1, \tilde{\mathbf{v}}_2, \ldots, \tilde{\mathbf{v}}_t\right\} \\ &= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{\dagger}\mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}) = 
\mathbf{M}^{\dagger}\mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}) = \mathbf{P}\mathbf{P}^{*}\mathcal{K}_{t}(\mathbf{A}\mathbf{M}, \mathbf{b}), \end{aligned}$$ where the vectors $\tilde{\mathbf{v}}_i$ are those appearing in the Lanczos process [\[eq:lanczos\]](#eq:lanczos){reference-type="ref" reference="eq:lanczos"}. At $t = \tilde{g}+ 1$, from [\[eq:def_z\_w_d\]](#eq:def_z_w_d){reference-type="ref" reference="eq:def_z_w_d"} and the fact that $\tilde{\mathbf{v}}_{\tilde{g}+1} = \mathbf{0}$, it follows that $\mathbf{w}_{\tilde{g}+1} = \mathbf{0}$ and $\mathbf{S}^{*}\mathbf{z}_{\tilde{g}+1} = \mathbf{0}$, i.e., $\mathbf{z}_{\tilde{g}+1} \in \textnormal{Null}(\mathbf{M})$. ◻ We now show how the coefficients $\alpha_{t}$ and $\beta_{t}$ in [\[eq:updates_z\_w\]](#eq:updates_z_w){reference-type="ref" reference="eq:updates_z_w"} can be computed, i.e., how to construct $\hat{\mathbf{T}}_{t}$ in [\[eq:tridiagonal_T\]](#eq:tridiagonal_T){reference-type="ref" reference="eq:tridiagonal_T"}. Note that by construction, we set $\beta_{1} = \| \tilde{\mathbf{b}}\| = \sqrt{\langle \mathbf{z}_{1}, \mathbf{w}_{1} \rangle}$. [\[lemma:alpha_beta\]]{#lemma:alpha_beta label="lemma:alpha_beta"} For $1 \leq t \leq \tilde{g}$, we can compute $\alpha_{t}$ and $\beta_{t}$ in [\[eq:updates_z\_w\]](#eq:updates_z_w){reference-type="ref" reference="eq:updates_z_w"} as $$\begin{aligned} \alpha_{t}= \frac{1}{\beta_{t}^2} \langle {\mathbf{w}}_{t}, \mathbf{A}{\mathbf{w}}_{t} \rangle, \quad \beta_{t+1}= \sqrt{\langle \mathbf{z}_{t+1}, \mathbf{w}_{t+1} \rangle}. 
\end{aligned}$$ *Proof.* By [\[eq:lanczos,eq:def_z\_w_d,eq:Md_STS\]](#eq:lanczos,eq:def_z_w_d,eq:Md_STS){reference-type="ref" reference="eq:lanczos,eq:def_z_w_d,eq:Md_STS"} and the orthonormality of the Lanczos vectors $\tilde{\mathbf{v}}_{i}$, we obtain $$\begin{aligned} \alpha_{t}= \langle \tilde{\mathbf{v}}_{t}, \tilde{\mathbf{A}}\tilde{\mathbf{v}}_{t} \rangle = \langle \frac{1}{\beta_{t}} \mathbf{S}^{*}\mathbf{z}_{t}, \left(\mathbf{S}^{*}\mathbf{A}\mathbf{S}\right) \frac{1}{\beta_{t}} \mathbf{S}^{*}\mathbf{z}_{t} \rangle = \frac{1}{\beta_{t}^2} \langle \mathbf{M}\mathbf{z}_{t}, \mathbf{A}\mathbf{M}\mathbf{z}_{t} \rangle = \frac{1}{\beta_{t}^2} \langle {\mathbf{w}}_{t}, \mathbf{A}{\mathbf{w}}_{t} \rangle. \end{aligned}$$ By [\[eq:def_z\_w_d,eq:Md_STS\]](#eq:def_z_w_d,eq:Md_STS){reference-type="ref" reference="eq:def_z_w_d,eq:Md_STS"} and the facts that $\beta_{t+1} > 0$ and $\| \tilde{\mathbf{v}}_{t+1} \| = 1$ for any $1 \leq t \leq \tilde{g}-1$, we get $$\begin{aligned} \beta_{t+1}= \sqrt{\| \beta_{t+1}\tilde{\mathbf{v}}_{t+1} \|^2} = \sqrt{\langle \mathbf{S}^{*}\mathbf{z}_{t+1}, \mathbf{S}^{*}\mathbf{z}_{t+1} \rangle} = \sqrt{\langle \mathbf{z}_{t+1}, \mathbf{M}\mathbf{z}_{t+1} \rangle} = \sqrt{\langle \mathbf{z}_{t+1}, \mathbf{w}_{t+1} \rangle}. \end{aligned}$$ For $t = \tilde{g}$, this relation continues to hold since $\beta_{\tilde{g}+1} = 0$ and by [\[lemma:Kry_spans_z\_w\]](#lemma:Kry_spans_z_w){reference-type="ref" reference="lemma:Kry_spans_z_w"}, $\mathbf{w}_{\tilde{g}+1} = \mathbf{0}$. ◻ Now, recall the update direction within MINRES is given by the three-term recurrence relation [@liu2022minres; @paige1975solution] $$\begin{aligned} \label{eq:updates_v_d} \tilde{\mathbf{d}}_{t} = \frac{1}{\gamma^{[2]}_{t} } \left(\tilde{\mathbf{v}}_{t} - \epsilon_{t} \tilde{\mathbf{d}}_{t-2} - \delta^{[2]}_{t} \tilde{\mathbf{d}}_{t-1}\right), \quad 1 \leq t \leq \tilde{g},\end{aligned}$$ where $\tilde{\mathbf{d}}_{0} = \tilde{\mathbf{d}}_{-1} = \mathbf{0}$. 
It is easy to see that $\tilde{\mathbf{d}}_{t}\in \textnormal{Span}\{\tilde{\mathbf{v}}_1, \tilde{\mathbf{v}}_2, \ldots, \tilde{\mathbf{v}}_{t} \}$. Define ${\mathbf{d}}_{t}= \mathbf{S}\tilde{\mathbf{d}}_{t}$ and set $\mathbf{d}_{0} = \mathbf{d}_{-1} = \bm{0}$. Multiplying both sides of [\[eq:updates_v\_d\]](#eq:updates_v_d){reference-type="ref" reference="eq:updates_v_d"} by $\mathbf{S}$ gives $$\begin{aligned} \mathbf{d}_{t} &= \mathbf{S}\tilde{\mathbf{d}}_{t}= \frac{1}{\gamma^{[2]}_{t} } \left(\mathbf{S}\tilde{\mathbf{v}}_{t}- \epsilon_{t} \mathbf{S}\tilde{\mathbf{d}}_{t-2} - \delta^{[2]}_{t} \mathbf{S}\tilde{\mathbf{d}}_{t-1}\right) = \frac{1}{\gamma^{[2]}_{t} } \left(\mathbf{S}\tilde{\mathbf{v}}_{t}- \epsilon_{t} \mathbf{d}_{t-2} - \delta^{[2]}_{t} \mathbf{d}_{t-1}\right),\end{aligned}$$ which, using [\[eq:def_z\_w_d\]](#eq:def_z_w_d){reference-type="ref" reference="eq:def_z_w_d"}, yields the following three-term recurrence relation $$\begin{aligned} \label{eq:update_x_d} \mathbf{d}_{t} = \frac{1}{\gamma^{[2]}_{t} } \left(\frac{1}{\beta_{t}} {\mathbf{w}}_{t}- \epsilon_{t} \mathbf{d}_{t-2} - \delta^{[2]}_{t} \mathbf{d}_{t-1}\right), \quad 1 \leq t \leq \tilde{g}.\end{aligned}$$ Clearly, we have "[\[eq:update_x\_d\]](#eq:update_x_d){reference-type="ref" reference="eq:update_x_d"} $\implies$ [\[eq:updates_v\_d\]](#eq:updates_v_d){reference-type="ref" reference="eq:updates_v_d"}". To see this, note that [\[lemma:Kry_tA_tb\]](#lemma:Kry_tA_tb){reference-type="ref" reference="lemma:Kry_tA_tb"} and the identity $\mathbf{S}^{\dagger}\mathbf{S}\mathbf{S}^{\dagger}= \mathbf{S}^{\dagger}$ imply that $\mathbf{S}^{\dagger}{\mathbf{d}}_{t}= \mathbf{S}^{\dagger}\mathbf{S}\tilde{\mathbf{d}}_{t}= \tilde{\mathbf{d}}_{t}$ and $\tilde{\mathbf{v}}_{t}= \mathbf{S}^{\dagger}\mathbf{S}\tilde{\mathbf{v}}_{t}= \mathbf{S}^{\dagger}{\mathbf{w}}_{t}/ \beta_{t}$.
Hence, multiplying both sides of [\[eq:update_x\_d\]](#eq:update_x_d){reference-type="ref" reference="eq:update_x_d"} by $\mathbf{S}^{\dagger}$, we obtain [\[eq:updates_v\_d\]](#eq:updates_v_d){reference-type="ref" reference="eq:updates_v_d"}. Also, by [\[lemma:Kry_spans_z\_w\]](#lemma:Kry_spans_z_w){reference-type="ref" reference="lemma:Kry_spans_z_w"}, this construction implies that ${\mathbf{d}}_{t}\in \textnormal{Span}\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_{t} \} = \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b})$ for $1 \leq t \leq \tilde{g}$. Finally, from [\[eq:def_x\_r,eq:update_x\_d\]](#eq:def_x_r,eq:update_x_d){reference-type="ref" reference="eq:def_x_r,eq:update_x_d"}, it follows that the update is given by $$\begin{aligned} \mathbf{x}_{t+1}= \mathbf{S}\tilde{\mathbf{x}}_{t+1}= \mathbf{S}(\tilde{\mathbf{x}}_{t}+ {\tau}_{t}\tilde{\mathbf{d}}_{t}) = {\mathbf{x}}_{t}+ {\tau}_{t}{\mathbf{d}}_{t}.\end{aligned}$$ Initializing with $\mathbf{x}_0 = \bm{0}$ gives ${\mathbf{x}}_{t}\in \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b})$. Furthermore, let us define $\tilde{\mathbf{d}}_{t}= \mathbf{S}^{*}\check{\mathbf{d}}_{t}$, $\tilde{\mathbf{x}}_{t}= \mathbf{S}^{*}\check{\mathbf{x}}_{t}$ and construct updates $$\begin{aligned} \label{eq:update_cxx_d} \check{\mathbf{d}}_{t} = \frac{1}{\gamma^{[2]}_{t} } \left(\frac{1}{\beta_{t}} \mathbf{z}_{t}- \epsilon_{t} \check{\mathbf{d}}_{t-2} - \delta^{[2]}_{t} \check{\mathbf{d}}_{t-1}\right), \quad \text{and} \quad \check{\mathbf{x}}_{t}= \check{\mathbf{x}}_{t-1} + {\tau}_{t}\check{\mathbf{d}}_{t},\end{aligned}$$ where $\check{\mathbf{x}}_0 = \check{\mathbf{d}}_0 = \check{\mathbf{d}}_{-1} = \bm{0}$.
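The equivalence between the parallel direction recurrences is purely algebraic and, in particular, holds for arbitrary values of the scalars $\gamma^{[2]}_{t}, \delta^{[2]}_{t}, \epsilon_{t}, {\tau}_{t}$ (in the algorithm these come from the QR factorization of the underlying tridiagonal matrix). The sketch below (ours; random real data and random coefficients) drives all three recurrences with the same scalars and checks ${\mathbf{d}}_{t}= \mathbf{S}\tilde{\mathbf{d}}_{t}$, $\tilde{\mathbf{d}}_{t}= \mathbf{S}^{*}\check{\mathbf{d}}_{t}$, and $\mathbf{S}\tilde{\mathbf{x}}_{t}= \mathbf{M}\check{\mathbf{x}}_{t}$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, steps = 8, 5, 4
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)                 # Hermitian A
S = rng.standard_normal((d, m))         # M = S S^*
M = S @ S.T
b = rng.standard_normal(d)

# generate z_t, w_t = M z_t and v~_t = S^* z_t / beta_t via the z-recurrence
z_prev, z = np.zeros(d), b.copy()
beta_prev = beta = np.sqrt(b @ (M @ b))
Z, W, Vt, betas = [], [], [], []
for _ in range(steps):
    w = M @ z
    Z.append(z); W.append(w); Vt.append(S.T @ z / beta); betas.append(beta)
    alpha = (w @ (A @ w)) / beta**2
    z_next = (A @ w) / beta - (alpha / beta) * z - (beta / beta_prev) * z_prev
    z_prev, z = z, z_next
    beta_prev, beta = beta, np.sqrt(z @ (M @ z))

# drive the three direction recurrences with identical (here: random) scalars
dt = [np.zeros(m), np.zeros(m)]         # d~_{-1}, d~_0
dd = [np.zeros(d), np.zeros(d)]         # d_{-1},  d_0
dc = [np.zeros(d), np.zeros(d)]         # d-check_{-1}, d-check_0
xt, xc = np.zeros(m), np.zeros(d)
for t in range(steps):
    gamma, delta, eps, tau = rng.uniform(0.5, 2.0, size=4)
    dt.append((Vt[t]           - eps * dt[-2] - delta * dt[-1]) / gamma)
    dd.append((W[t] / betas[t] - eps * dd[-2] - delta * dd[-1]) / gamma)
    dc.append((Z[t] / betas[t] - eps * dc[-2] - delta * dc[-1]) / gamma)
    xt, xc = xt + tau * dt[-1], xc + tau * dc[-1]
    assert np.allclose(dd[-1], S @ dt[-1])          # d_t = S d~_t

assert np.allclose(S.T @ dc[-1], dt[-1])            # d~_t = S^* d-check_t
assert np.allclose(S @ xt, M @ xc)                  # S x~_t = M x-check_t
```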
By [\[lemma:Kry_spans_z\_w\]](#lemma:Kry_spans_z_w){reference-type="ref" reference="lemma:Kry_spans_z_w"}, we obtain $\check{\mathbf{d}}_{t}\in \textnormal{Span}\{\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_{t} \} = \mathcal{K}_{t}(\mathbf{A}\mathbf{M}, \mathbf{b})$ and $\check{\mathbf{x}}_{t}\in \mathcal{K}_{t}(\mathbf{A}\mathbf{M}, \mathbf{b})$. Multiplying both sides of [\[eq:update_cxx_d\]](#eq:update_cxx_d){reference-type="ref" reference="eq:update_cxx_d"} by $\mathbf{S}^{*}$ recovers the three-term recurrence relation [\[eq:updates_v\_d\]](#eq:updates_v_d){reference-type="ref" reference="eq:updates_v_d"}. Clearly, by [\[eq:def_x\_r\]](#eq:def_x_r){reference-type="ref" reference="eq:def_x_r"}, ${\mathbf{x}}_{t}= \mathbf{S}\tilde{\mathbf{x}}_{t}= \mathbf{S}\mathbf{S}^{*}\check{\mathbf{x}}_{t}= \mathbf{M}\check{\mathbf{x}}_{t}$ is exactly the solution in [\[eq:right_prec_yy\]](#eq:right_prec_yy){reference-type="ref" reference="eq:right_prec_yy"}. We also have the following recurrence relation on quantities that are connected to the residual. [\[lemma:residuals\]]{#lemma:residuals label="lemma:residuals"} For any $1\leq t \leq \tilde{g}$, define $\hat{\mathbf{r}}_{t}$ as $$\begin{aligned} \label{eq:def_tA_tb_tr_hr_hrrt} \hat{\mathbf{r}}_{t}= \mathbf{S}\tilde{\mathbf{r}}_{t}\in \mathbb{C}^{d}, \end{aligned}$$ where $\tilde{\mathbf{r}}_{t}= \tilde{\mathbf{b}}- \tilde{\mathbf{A}}\tilde{\mathbf{x}}_{t}$. 
We have [\[eq:residuals\]]{#eq:residuals label="eq:residuals"} $$\begin{aligned} \hat{\mathbf{r}}_{t}&= s_{t}^2 \hat{\mathbf{r}}_{t-1}- \frac{\phi_{t} c_{t}}{\beta_{t+1}} \mathbf{w}_{t+1}, \label{eq:hr} \\ \mathbf{P}\mathbf{P}^{*}{\mathbf{r}}_{t}&= \mathbf{P}\mathbf{P}^{*}\left(s_t^2 \mathbf{r}_{t-1} - \frac{\phi_{t} c_t}{\beta_{t+1}} \mathbf{z}_{t+1}\right), \label{eq:PPHr} \end{aligned}$$ where $\mathbf{P}$ and $\mathbf{w}_{t}$ are, respectively, defined in [\[eq:def_Md_M,eq:def_z\_w_d\]](#eq:def_Md_M,eq:def_z_w_d){reference-type="ref" reference="eq:def_Md_M,eq:def_z_w_d"}. *Proof.* By [\[eq:def_tA_tb_tr_hr,eq:def_z\_w_d\]](#eq:def_tA_tb_tr_hr,eq:def_z_w_d){reference-type="ref" reference="eq:def_tA_tb_tr_hr,eq:def_z_w_d"} and multiplying both sides of the residual update in [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} by $\mathbf{S}$, we get $$\begin{aligned} \hat{\mathbf{r}}_{t}= \mathbf{S}\tilde{\mathbf{r}}_{t} = s_{t}^2 \mathbf{S}\tilde{\mathbf{r}}_{t-1} - \phi_{t} c_{t} \mathbf{S}\tilde{\mathbf{v}}_{t+1} = s_{t}^2 \hat{\mathbf{r}}_{t-1}- \frac{\phi_{t} c_{t}}{\beta_{t+1}} \mathbf{w}_{t+1}. 
\end{aligned}$$ Now, by [\[eq:def_tA_tb_tr_hr,eq:Md_PPH,eq:Md_STS,eq:def_z\_w_d,eq:trrt_rrt\]](#eq:def_tA_tb_tr_hr,eq:Md_PPH,eq:Md_STS,eq:def_z_w_d,eq:trrt_rrt){reference-type="ref" reference="eq:def_tA_tb_tr_hr,eq:Md_PPH,eq:Md_STS,eq:def_z_w_d,eq:trrt_rrt"} and noting that $[\mathbf{S}^{\dagger}]^{*}= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{\dagger}\mathbf{S}= \mathbf{M}^{\dagger}\mathbf{S}$, we have $$\begin{aligned} \mathbf{P}\mathbf{P}^{*}{\mathbf{r}}_{t}&= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{*}{\mathbf{r}}_{t}= [\mathbf{S}^{\dagger}]^{*}\tilde{\mathbf{r}}_{t}= \mathbf{M}^{\dagger}\mathbf{S}\tilde{\mathbf{r}}_{t}= \mathbf{M}^{\dagger}\mathbf{S}(s_t^2 \tilde{\mathbf{r}}_{t-1} - \phi_{t} c_{t} \tilde{\mathbf{v}}_{t+1}) = \mathbf{P}\mathbf{P}^{*}(s_t^2 {\mathbf{r}}_{t-1}- \frac{\phi_{t} c_t}{\beta_{t+1}} \mathbf{z}_{t+1}). \end{aligned}$$ ◻ The following result shows that ${\mathbf{x}}_{t}$ and $\hat{\mathbf{r}}_{t}$ belong to a certain Krylov subspace. [\[lemma:Kry_x\_hr\]]{#lemma:Kry_x_hr label="lemma:Kry_x_hr"} With [\[eq:def_tA_tb_tr_hr,eq:def_x\_r\]](#eq:def_tA_tb_tr_hr,eq:def_x_r){reference-type="ref" reference="eq:def_tA_tb_tr_hr,eq:def_x_r"} and $\mathbf{x}_{0} = \mathbf{0}$, we have for any $1 \leq t \leq \tilde{g}$ $$\begin{aligned} {\mathbf{x}}_{t}\in \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b}), \end{aligned}$$ and $\textnormal{Span}\{\hat{\mathbf{r}}_{0}, \hat{\mathbf{r}}_{1}, \ldots, \hat{\mathbf{r}}_{t-1}\} = \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b})$. *Proof.* The first result follows from [\[lemma:Kry_spans_z\_w\]](#lemma:Kry_spans_z_w){reference-type="ref" reference="lemma:Kry_spans_z_w"} and our construction of the update direction, as discussed above.
The second result is also obtained using [\[eq:hr,lemma:Kry_spans_z\_w\]](#eq:hr,lemma:Kry_spans_z_w){reference-type="ref" reference="eq:hr,lemma:Kry_spans_z_w"}, and noting that $\hat{\mathbf{r}}_{0} = \mathbf{w}_{1}$. ◻ [\[rm:all_vectors_subspaces\]]{#rm:all_vectors_subspaces label="rm:all_vectors_subspaces"} By inspecting [\[eq:PPHr\]](#eq:PPHr){reference-type="ref" reference="eq:PPHr"}, we see that a new vector $\check{\mathbf{r}}_{t}$ can be defined as $$\begin{aligned} \label{eq:cr} \check{\mathbf{r}}_{t}= s_t^2 \check{\mathbf{r}}_{t-1}- \frac{\phi_{t} c_t}{\beta_{t+1}} \mathbf{z}_{t+1}, \quad 1 \leq t \leq \tilde{g}, \end{aligned}$$ where $\check{\mathbf{r}}_{-1} \triangleq\mathbf{0}$. This way, we always have $\mathbf{P}\mathbf{P}^{*}\check{\mathbf{r}}_{t}= \mathbf{P}\mathbf{P}^{*}{\mathbf{r}}_{t}$. Hence, we can compute $\mathbf{P}\mathbf{P}^{*}{\mathbf{r}}_{t}$ without performing an extra matrix-vector product $\mathbf{A}{\mathbf{x}}_{t}$. Coupled with [\[eq:updates_z\_w,eq:update_cxx_d\]](#eq:updates_z_w,eq:update_cxx_d){reference-type="ref" reference="eq:updates_z_w,eq:update_cxx_d"} as well as noting that $\mathbf{z}_{0} = \mathbf{0}$ and $\mathbf{z}_1 = \mathbf{b}$, this gives $\mathbf{z}_{t}\in \mathcal{K}_{t}(\mathbf{A}\mathbf{M}, \mathbf{b})$, which in turn implies $\check{\mathbf{d}}_{t}, \check{\mathbf{x}}_{t}, \check{\mathbf{r}}_{t-1}\in \mathcal{K}_{t}(\mathbf{A}\mathbf{M}, \mathbf{b})$. In addition, from [\[lemma:Kry_x\_hr,lemma:Kry_spans_z\_w,eq:update_x\_d\]](#lemma:Kry_x_hr,lemma:Kry_spans_z_w,eq:update_x_d){reference-type="ref" reference="lemma:Kry_x_hr,lemma:Kry_spans_z_w,eq:update_x_d"}, it follows that ${\mathbf{w}}_{t}, {\mathbf{d}}_{t}, {\mathbf{x}}_{t}, \hat{\mathbf{r}}_{t-1}\in \mathcal{K}_{t}(\mathbf{M}\mathbf{A}, \mathbf{M}\mathbf{b})$. 
All in all, we conclude that all the vectors ${\mathbf{w}}_{t}, {\mathbf{d}}_{t}, {\mathbf{x}}_{t}, \hat{\mathbf{r}}_{t-1}$ and $\mathbf{z}_{t}, \check{\mathbf{d}}_{t}, \check{\mathbf{x}}_{t}, \check{\mathbf{r}}_{t-1}$ depend solely on $\mathbf{M}$, and hence are invariant to the particular choice of $\mathbf{S}$ satisfying $\mathbf{M}= \mathbf{S}\mathbf{S}^{*}$. All these iterative relations give us the preconditioned MINRES algorithm, constructed from [\[eq:right_prec_03\]](#eq:right_prec_03){reference-type="ref" reference="eq:right_prec_03"} and depicted in [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"}. As a sanity check, if we set the ideal preconditioner $\mathbf{M}= \mathbf{A}^{\dagger}$, we obtain $\mathbf{A}^{\dagger}\mathbf{b}= {\mathbf{x}}_{t}\in \mathcal{K}_{t}(\mathbf{A}^{\dagger}\mathbf{A}, \mathbf{A}^{\dagger}\mathbf{b})$ for any $t$, which by [\[lemma:Kry_x\_hr\]](#lemma:Kry_x_hr){reference-type="ref" reference="lemma:Kry_x_hr"} implies that [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} terminates at the very first iteration. We now give the lifting procedure for an iterate of [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} applied to Hermitian systems. [\[thm:lifting\]]{#thm:lifting label="thm:lifting"} Let $\mathbf{x}_{\tilde{g}}$ be the final iterate of [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} and let the Hermitian matrix $\mathbf{A}\in \mathbb{H}^{d\times d}$ be given. 1. If $\hat{\mathbf{r}}_{\tilde{g}}= \bm{0}$, then we must have $\mathbf{x}_{\tilde{g}}= \mathbf{S}\tilde{\mathbf{x}}_{\tilde{g}}$, where $\tilde{\mathbf{x}}_{\tilde{g}}= {\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}$. 2.
If $\hat{\mathbf{r}}_{\tilde{g}}\neq \bm{0}$, define the lifted vector $$\begin{aligned} \label{eq:P_lifting} \hat{\mathbf{x}}^{+}= \mathbf{x}_{\tilde{g}}- \frac{\langle \check{\mathbf{r}}_{\tilde{g}}, \mathbf{x}_{\tilde{g}} \rangle}{\langle \hat{\mathbf{r}}_{\tilde{g}}, \check{\mathbf{r}}_{\tilde{g}} \rangle} \hat{\mathbf{r}}_{\tilde{g}}= \left(\mathbf{I}_{d} - \frac{\hat{\mathbf{r}}_{\tilde{g}}\check{\mathbf{r}}_{\tilde{g}}^{*}}{\hat{\mathbf{r}}_{\tilde{g}}^{*} \check{\mathbf{r}}_{\tilde{g}}}\right)\mathbf{x}_{\tilde{g}}, \end{aligned}$$ where $\hat{\mathbf{r}}_{\tilde{g}}$ and $\check{\mathbf{r}}_{\tilde{g}}$ are defined in [\[eq:hr,eq:cr\]](#eq:hr,eq:cr){reference-type="ref" reference="eq:hr,eq:cr"}. Then, we have $\hat{\mathbf{x}}^{+}= \mathbf{S}\tilde{\mathbf{x}}^{+}$ where $\tilde{\mathbf{x}}^{+}= {\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}$. *Proof.* Following the proof in [\[thm:MINRES_dagger\]](#thm:MINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger"}, we obtain the desired result for $\hat{\mathbf{r}}_{\tilde{g}}= \bm{0}$ trivially. Now we consider $\hat{\mathbf{r}}_{\tilde{g}}\neq \bm{0}$. By [\[thm:MINRES_dagger\]](#thm:MINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger"}, we have $$\begin{aligned} \tilde{\mathbf{x}}_{\tilde{g}}= \tilde{\mathbf{x}}^{+}+ \frac{\langle \tilde{\mathbf{r}}_{\tilde{g}}, \tilde{\mathbf{x}}_{\tilde{g}} \rangle}{\| \tilde{\mathbf{r}}_{\tilde{g}} \|^2} \tilde{\mathbf{r}}_{\tilde{g}}. 
\end{aligned}$$ By [\[eq:def_x\_r,eq:Md_PPH,eq:trrt_rrt,eq:def_tA_tb_tr_hr_hrrt\]](#eq:def_x_r,eq:Md_PPH,eq:trrt_rrt,eq:def_tA_tb_tr_hr_hrrt){reference-type="ref" reference="eq:def_x_r,eq:Md_PPH,eq:trrt_rrt,eq:def_tA_tb_tr_hr_hrrt"}, $$\begin{aligned} \mathbf{x}_{\tilde{g}}= \mathbf{S}\tilde{\mathbf{x}}_{\tilde{g}}= \mathbf{S}\tilde{\mathbf{x}}^{+}+ \frac{\langle \mathbf{S}^{*}\mathbf{r}_{\tilde{g}}, \tilde{\mathbf{x}}_{\tilde{g}} \rangle}{\langle \tilde{\mathbf{r}}_{\tilde{g}}, \mathbf{S}^{*}\mathbf{r}_{\tilde{g}} \rangle} \mathbf{S}\tilde{\mathbf{r}}_{\tilde{g}}= \mathbf{S}\tilde{\mathbf{x}}^{+}+ \frac{\langle \check{\mathbf{r}}_{\tilde{g}}, \mathbf{x}_{\tilde{g}} \rangle}{\langle \hat{\mathbf{r}}_{\tilde{g}}, \check{\mathbf{r}}_{\tilde{g}} \rangle} \hat{\mathbf{r}}_{\tilde{g}}. \end{aligned}$$ The desired result follows by noting $\mathbf{x}_{\tilde{g}}, \hat{\mathbf{r}}_{\tilde{g}}\in \textnormal{Range}(\mathbf{S}\mathbf{S}^{\dagger}) = \textnormal{Range}(\mathbf{P}\mathbf{P}^{*})$ and $\mathbf{P}\mathbf{P}^{*}\check{\mathbf{r}}_{t}= \mathbf{P}\mathbf{P}^{*}{\mathbf{r}}_{t}$ from [\[rm:all_vectors_subspaces\]](#rm:all_vectors_subspaces){reference-type="ref" reference="rm:all_vectors_subspaces"}. ◻ Similar to [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"}, the general lifting step at every iteration $1 \leq t \leq \tilde{g}$ can be formulated as $$\begin{aligned} \label{eq:P_lifting_t} \hat{\mathbf{x}}^{\natural}_t = {\mathbf{x}}_{t}- \frac{\langle \check{\mathbf{r}}_{t}, {\mathbf{x}}_{t} \rangle}{\langle \hat{\mathbf{r}}_{t}, \check{\mathbf{r}}_{t} \rangle} \hat{\mathbf{r}}_{t}.\end{aligned}$$ Of course, in general, the lifted solution does not coincide with the pseudo-inverse solution to the original unpreconditioned problem [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"}.
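The bridge between the reduced and lifted quantities in the proof above rests on two inner-product identities that hold for any iterate, not just the final one: with $\mathbf{x}= \mathbf{S}\tilde{\mathbf{x}}$, $\mathbf{r}= \mathbf{b}- \mathbf{A}\mathbf{x}$ and $\hat{\mathbf{r}}= \mathbf{S}\tilde{\mathbf{r}}$, one has $\langle \hat{\mathbf{r}}, \mathbf{r} \rangle = \| \tilde{\mathbf{r}} \|^2$ and $\langle \mathbf{r}, \mathbf{x} \rangle = \langle \tilde{\mathbf{r}}, \tilde{\mathbf{x}} \rangle$, since $\mathbf{S}^{*}\mathbf{r}= \tilde{\mathbf{r}}$. A minimal numerical confirmation (ours; random real data, with the true residual $\mathbf{r}$ standing in for $\check{\mathbf{r}}$, with which it agrees under $\mathbf{P}\mathbf{P}^{*}$):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 8, 5
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)          # Hermitian A
S = rng.standard_normal((d, m))  # M = S S^*
b = rng.standard_normal(d)
At, bt = S.T @ A @ S, S.T @ b

xt = rng.standard_normal(m)      # an arbitrary x~; the identities are iterate-independent
x = S @ xt                       # lifted iterate
rt = bt - At @ xt                # reduced residual r~
r = b - A @ x                    # true residual; note S^* r = r~
r_hat = S @ rt                   # r-hat = S r~

assert np.isclose(rt @ rt, r_hat @ r)    # ||r~||^2 = <r-hat, r>
assert np.isclose(rt @ xt, r @ x)        # <r~, x~> = <r, x>
```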
However, it turns out that in the special case where $\textnormal{Range}(\mathbf{M}) = \textnormal{Range}(\mathbf{A})$, one can indeed recover $\mathbf{A}^{\dagger}\mathbf{b}$ using our lifting strategy. [\[coro:pseudo_P\]]{#coro:pseudo_P label="coro:pseudo_P"} If $\textnormal{Range}(\mathbf{M}) = \textnormal{Range}(\mathbf{A})$, then $\hat{\mathbf{r}}_{\tilde{g}}= \bm{0}$ and $\mathbf{x}_{\tilde{g}}= \mathbf{A}^{\dagger}\mathbf{b}$. *Proof.* From $\textnormal{Range}(\mathbf{M}) = \textnormal{Range}(\mathbf{A})$, it follows that $\tilde{\mathbf{b}}\in \textnormal{Range}(\tilde{\mathbf{A}})$ and $\hat{\mathbf{r}}_{\tilde{g}}= \bm{0}$. By applying [@bernstein2009matrix Fact 6.4.10] twice as well as using [\[eq:Md_PPH,thm:lifting\]](#eq:Md_PPH,thm:lifting){reference-type="ref" reference="eq:Md_PPH,thm:lifting"}, we obtain $$\begin{aligned} \mathbf{x}_{\tilde{g}}= \mathbf{S}{\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}= \mathbf{S}\left[\mathbf{S}^*\mathbf{A}\mathbf{S}\right]^{\dagger} \mathbf{S}^*\mathbf{b}= \mathbf{P}\mathbf{P}^{*}\mathbf{A}^{\dagger}\mathbf{P}\mathbf{P}^{*}\mathbf{b}= \mathbf{A}^{\dagger}\mathbf{b}. \end{aligned}$$ ◻ In the more general case where $\textnormal{Range}(\mathbf{M}) \neq \textnormal{Range}(\mathbf{A})$, we might have $\tilde{\mathbf{b}}\not\in \textnormal{Range}(\tilde{\mathbf{A}})$. This significantly complicates the task of establishing conditions for the recovery of $\mathbf{A}^{\dagger}\mathbf{b}$ from the preconditioned problem. Further investigation in this direction is left for future work. [\[rm:reoth\]]{#rm:reoth label="rm:reoth"} In practice, round-off errors result in a loss of orthogonality in the vectors generated by the Lanczos process, a loss that can safely be ignored in many applications. However, if strict orthogonality is required, it can be enforced by a reorthogonalization strategy.
Let $\tilde{\mathbf{V}}_{t}=\left[\tilde{\mathbf{v}}_1,\tilde{\mathbf{v}}_2,\ldots,\tilde{\mathbf{v}}_{t}\right]$ be the matrix containing the Lanczos vectors as its columns, generated in the course of solving [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"}. Recall that such a reorthogonalization strategy amounts to additionally performing $$\begin{aligned} \label{eq:reoth} \tilde{\mathbf{v}}_{t}= \tilde{\mathbf{v}}_{t}- \tilde{\mathbf{V}}_{t-1} \tilde{\mathbf{V}}_{t-1}^{*} \tilde{\mathbf{v}}_{t}, \end{aligned}$$ at every iteration of [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} applied to [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"}. This allows us to derive the reorthogonalization strategy for [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"}. Indeed, by [\[eq:def_z\_w_d,eq:Md_PPH,eq:Md_STS,eq:reoth\]](#eq:def_z_w_d,eq:Md_PPH,eq:Md_STS,eq:reoth){reference-type="ref" reference="eq:def_z_w_d,eq:Md_PPH,eq:Md_STS,eq:reoth"}, we have $$\begin{aligned} \mathbf{P}\mathbf{P}^{*}\frac{\mathbf{z}_{t}}{\beta_{t}} = [\mathbf{S}^{\dagger}]^{*}\tilde{\mathbf{v}}_{t}&= [\mathbf{S}^{\dagger}]^{*}\mathbf{S}^{*}\frac{\mathbf{z}_{t}}{\beta_{t}} - [\mathbf{S}^{\dagger}]^{*}\begin{bmatrix} \tilde{\mathbf{v}}_1 & \tilde{\mathbf{v}}_2 & \ldots & \tilde{\mathbf{v}}_{t-1} \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{v}}_1 & \tilde{\mathbf{v}}_2 & \ldots & \tilde{\mathbf{v}}_{t-1} \end{bmatrix}^{*} \left(\mathbf{S}^{*}\frac{\mathbf{z}_{t}}{\beta_{t}}\right) \\ &= \mathbf{P}\mathbf{P}^{*}\left(\frac{\mathbf{z}_{t}}{\beta_{t}} - \begin{bmatrix} \frac{\mathbf{z}_1}{\beta_1} & \frac{\mathbf{z}_2}{\beta_2} & \ldots & \frac{\mathbf{z}_{t-1}}{\beta_{t-1}} \end{bmatrix} \begin{bmatrix} \frac{\mathbf{w}_1}{\beta_1} & \frac{\mathbf{w}_2}{\beta_2} & \ldots & \frac{\mathbf{w}_{t-1}}{\beta_{t-1}} \end{bmatrix}^{*} \frac{\mathbf{z}_{t}}{\beta_{t}}\right), \end{aligned}$$ and
$$\begin{aligned} \frac{{\mathbf{w}}_{t}}{\beta_{t}} = \mathbf{S}\tilde{\mathbf{v}}_{t}&= \frac{{\mathbf{w}}_{t}}{\beta_{t}} - \mathbf{S}\begin{bmatrix} \tilde{\mathbf{v}}_1 & \tilde{\mathbf{v}}_2 & \ldots & \tilde{\mathbf{v}}_{t-1} \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{v}}_1 & \tilde{\mathbf{v}}_2 & \ldots & \tilde{\mathbf{v}}_{t-1} \end{bmatrix}^{*} \left(\mathbf{S}^{*}\frac{\mathbf{z}_{t}}{\beta_{t}}\right) \\ &= \frac{{\mathbf{w}}_{t}}{\beta_{t}} - \begin{bmatrix} \frac{\mathbf{w}_1}{\beta_1} & \frac{\mathbf{w}_2}{\beta_2} & \ldots & \frac{\mathbf{w}_{t-1}}{\beta_{t-1}} \end{bmatrix} \begin{bmatrix} \frac{\mathbf{z}_1}{\beta_1} & \frac{\mathbf{z}_2}{\beta_2} & \ldots & \frac{\mathbf{z}_{t-1}}{\beta_{t-1}} \end{bmatrix}^{*} \frac{{\mathbf{w}}_{t}}{\beta_{t}}. \end{aligned}$$ These two relations suggest that to guarantee [\[eq:reoth\]](#eq:reoth){reference-type="ref" reference="eq:reoth"}, we can perform the following additional operations in [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} for Hermitian systems $$\begin{aligned} \mathbf{z}_{t}= \mathbf{z}_{t}- \mathbf{Y}_{t-1} \mathbf{z}_{t}, \quad {\mathbf{w}}_{t}= {\mathbf{w}}_{t}- \mathbf{Y}_{t-1}^{*} {\mathbf{w}}_{t}, \end{aligned}$$ where $\mathbf{Y}_0 = \mathbf{0}$ and $$\begin{aligned} \mathbf{Y}_{t} \triangleq \begin{bmatrix} \frac{\mathbf{z}_1}{\beta_1} & \frac{\mathbf{z}_2}{\beta_2} & \ldots & \frac{\mathbf{z}_{t}}{\beta_{t}} \end{bmatrix} \begin{bmatrix} \frac{\mathbf{w}_1}{\beta_1} & \frac{\mathbf{w}_2}{\beta_2} & \ldots & \frac{\mathbf{w}_{t}}{\beta_{t}} \end{bmatrix}^{*} = \mathbf{Y}_{t-1} + \frac{1}{\beta_{t}^2}\mathbf{z}_{t}{\mathbf{w}}_{t}^{*}. \end{aligned}$$ Indeed, multiplying both sides of the relation on $\mathbf{z}_{t}$ by $\mathbf{S}^{*}/\beta_{t}$ and using [\[eq:def_z\_w_d\]](#eq:def_z_w_d){reference-type="ref" reference="eq:def_z_w_d"}, gives [\[eq:reoth\]](#eq:reoth){reference-type="ref" reference="eq:reoth"}. 
Similarly, if we multiply both sides of the relation on ${\mathbf{w}}_{t}$ by $\mathbf{S}^{\dagger}/\beta_{t}$, and use [\[eq:def_z\_w_d\]](#eq:def_z_w_d){reference-type="ref" reference="eq:def_z_w_d"}, we get $$\begin{aligned} \mathbf{S}^{\dagger}\mathbf{S}\tilde{\mathbf{v}}_{t}&= \mathbf{S}^{\dagger}\mathbf{S}\tilde{\mathbf{v}}_{t}- \mathbf{S}^{\dagger}\begin{bmatrix} \mathbf{S}\tilde{\mathbf{v}}_1 & \mathbf{S}\tilde{\mathbf{v}}_2 & \ldots & \mathbf{S}\tilde{\mathbf{v}}_{t-1} \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{v}}_1 & \tilde{\mathbf{v}}_2 & \ldots & \tilde{\mathbf{v}}_{t-1} \end{bmatrix}^{*} \tilde{\mathbf{v}}_{t}.
\end{aligned}$$ Now, we get [\[eq:reoth\]](#eq:reoth){reference-type="ref" reference="eq:reoth"} by noting that for any $1 \leq i \leq \tilde{g}$, we have $\tilde{\mathbf{v}}_{i} \in \textnormal{Range}(\mathbf{S}^{\dagger})$ by [\[lemma:Kry_tA_tb\]](#lemma:Kry_tA_tb){reference-type="ref" reference="lemma:Kry_tA_tb"}. By construction, [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} is analytically equivalent to [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"}, which involves applying [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} for solving [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"} to obtain iterates $\tilde{\mathbf{x}}_{t}, \; 1 \leq t \leq \min \{m, \tilde{g}\}$, and then recovering ${\mathbf{x}}_{t}= \mathbf{S}\tilde{\mathbf{x}}_{t}\in \mathbb{C}^{d}$ by [\[eq:def_x\_r\]](#eq:def_x_r){reference-type="ref" reference="eq:def_x_r"}. In this light, when $m \ll d$, the sub-preconditioned algorithm [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} can be regarded as a dimensionality-reduced version of [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"}, i.e., we essentially first project onto the lower dimensional space $\mathbb{C}^{m}$, use [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"}, and then project back onto the original space $\mathbb{C}^{d}$. This, for example, allows [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} to require significantly less storage and computation than [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"}; see the experiments in [4.2](#sec:exp_image){reference-type="ref" reference="sec:exp_image"} for more discussion.
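The dimensionality-reduction view can be illustrated end to end. In the sketch below (ours; random real data, with exact least-squares solves over growing Krylov subspaces standing in for the inner MINRES iterations), the reduced $m \times m$ problem is formed once, all iterations run in the reduced space, and lifting back to the ambient space happens only at the end; the $\mathbf{M}$-norm residual decreases monotonically and vanishes once the reduced problem is solved:

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 200, 12                       # m << d: all iterations run in the reduced space
B = rng.standard_normal((d, d))
A = (B + B.T) / 2                    # Hermitian (indefinite) A
S = rng.standard_normal((d, m)) / np.sqrt(d)
b = rng.standard_normal(d)
At, bt = S.T @ A @ S, S.T @ b        # m x m reduced problem, formed once

def reduced_iterates(At, bt, steps):
    """Exact least squares over growing Krylov subspaces of (A~, b~)."""
    n = len(bt)
    Q = np.zeros((n, steps))
    Q[:, 0] = bt / np.linalg.norm(bt)
    out = []
    for t in range(1, steps + 1):
        c, *_ = np.linalg.lstsq(At @ Q[:, :t], bt, rcond=None)
        out.append(Q[:, :t] @ c)
        if t < steps:
            w = At @ Q[:, t - 1]
            w -= Q[:, :t] @ (Q[:, :t].T @ w)   # Gram-Schmidt against previous vectors
            w -= Q[:, :t] @ (Q[:, :t].T @ w)   # repeated once for numerical stability
            Q[:, t] = w / np.linalg.norm(w)
    return out

xs = [S @ xt for xt in reduced_iterates(At, bt, m)]      # lift to the ambient space at the end
res = [np.linalg.norm(S.T @ (b - A @ x)) for x in xs]    # M-norm residuals ||b - A x||_M

assert all(r2 <= r1 + 1e-10 for r1, r2 in zip(res, res[1:]))  # monotonically nonincreasing
assert res[-1] <= 1e-6 * res[0]                               # reduced problem solved at t = m
```

With $d = 200$ and $m = 12$, each iteration manipulates vectors of length $12$ rather than $200$, which is the storage and computation saving alluded to above.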
## Preconditioned Complex-symmetric Systems {#sec:pMINRES_CS}

Our construction of the preconditioned MINRES in the complex-symmetric setting is motivated by [@choi2013minimal] and generally follows the standard framework of [@choi2006iterative; @choi2011minres; @saad2003iterative; @rozloznik2002krylov].

### Derivation of the Preconditioned MINRES Algorithm {#derivation-of-the-preconditioned-minres-algorithm-1 .unnumbered}

Motivated by [3.1](#sec:pMINRES_H){reference-type="ref" reference="sec:pMINRES_H"}, with complex-symmetric $\mathbf{A}\in \mathbb{S}^{d \times d}$, and $\mathbf{M}$ given as $\mathbf{M}= \mathbf{S}\mathbf{S}^{*}$ for some sub-preconditioner matrix $\mathbf{S}\in \mathbb{C}^{d\times m}$, we again see that $$\begin{aligned} {\mathbf{x}}_{t}= \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathcal{S}_{t}(\bar{\mathbf{M}}\mathbf{A},\bar{\mathbf{M}}\mathbf{b})} \| \mathbf{b}- \mathbf{A}\mathbf{x} \|_{\bar{\mathbf{M}}}^2 = \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \mathcal{S}_{t}(\bar{\mathbf{M}}\mathbf{A},\bar{\mathbf{M}}\mathbf{b})} \| \mathbf{S}^{\intercal}\left(\mathbf{b}- \mathbf{A}\mathbf{x}\right) \|^{2} = \mathop{\mathrm{arg\,min}}_{\mathbf{x}\in \bar{\mathbf{S}}\mathcal{S}_{t}(\mathbf{S}^{\intercal}\mathbf{A}\mathbf{S},\mathbf{S}^{\intercal}\mathbf{b})} \| \mathbf{S}^{\intercal}\left(\mathbf{b}- \mathbf{A}\mathbf{x}\right) \|^{2}.\end{aligned}$$ This allows us to consider [\[eq:preq_MINRES\]](#eq:preq_MINRES){reference-type="ref" reference="eq:preq_MINRES"} with $$\begin{aligned} \label{eq:def_tA_tb_tr_hr_CS} \tilde{\mathbf{A}}\triangleq \mathbf{S}^{\intercal}\mathbf{A}\mathbf{S}\in \mathbb{S}^{m \times m}, \quad \text{and} \quad \tilde{\mathbf{b}}\triangleq \mathbf{S}^{\intercal}\mathbf{b}\in \mathbb{C}^{m}.\end{aligned}$$ Using [\[eq:def_x\_r\]](#eq:def_x_r){reference-type="ref" reference="eq:def_x_r"}, for the residual of the system in [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"}, $\tilde{\mathbf{r}}_{t}=
\tilde{\mathbf{b}}- \tilde{\mathbf{A}}\tilde{\mathbf{x}}_{t}$, we have $$\begin{aligned} \label{eq:trrt_rrt_CS} \tilde{\mathbf{r}}_{t}= \mathbf{S}^{\intercal}{\mathbf{r}}_{t}\in \mathbb{C}^{m}, \quad \text{where} \quad {\mathbf{r}}_{t}= \mathbf{b}- \mathbf{A}{\mathbf{x}}_{t}. \end{aligned}$$ Similar to [3.1](#sec:pMINRES_H){reference-type="ref" reference="sec:pMINRES_H"}, we now construct the preconditioned MINRES algorithm for complex-symmetric systems. For this, note that $\tilde{\mathbf{A}}$, defined in [\[eq:def_tA_tb_tr_hr_CS\]](#eq:def_tA_tb_tr_hr_CS){reference-type="ref" reference="eq:def_tA_tb_tr_hr_CS"}, is also itself complex-symmetric. Hence, following [2.2](#sec:CSMINRES){reference-type="ref" reference="sec:CSMINRES"}, the Saunders process [\[eq:lanczos_CS\]](#eq:lanczos_CS){reference-type="ref" reference="eq:lanczos_CS"} yields $$\begin{aligned} \label{eq:saunders_process} \beta_{t+1}\tilde{\mathbf{v}}_{t+1} = \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{v}}_{t}) - \alpha_{t}\tilde{\mathbf{v}}_{t}- \beta_{t}\tilde{\mathbf{v}}_{t-1}.\end{aligned}$$ Denoting $$\begin{aligned} \label{eq:def_z_w_d_CS} \beta_{t}\tilde{\mathbf{v}}_{t}= \mathbf{S}^{\intercal}\mathbf{z}_{t}, \quad {\mathbf{w}}_{t}= \mathbf{M}\bar{\mathbf{z}}_{t}= \beta_{t}\mathbf{S}\textnormal{Conj}(\tilde{\mathbf{v}}_{t}),\end{aligned}$$ and using [\[eq:def_z\_w_d\_CS,eq:Md_STS\]](#eq:def_z_w_d_CS,eq:Md_STS){reference-type="ref" reference="eq:def_z_w_d_CS,eq:Md_STS"}, we obtain $$\begin{aligned} \mathbf{S}^{\intercal}\mathbf{z}_{t+1} &= \frac{1}{\beta_{t}} \mathbf{S}^{\intercal}\mathbf{A}\mathbf{S}\mathbf{S}^{*}\bar{\mathbf{z}}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{S}^{\intercal}\mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{S}^{\intercal}\mathbf{z}_{t-1} \\ &= \mathbf{S}^{\intercal}\left(\frac{1}{\beta_{t}} \mathbf{A}\mathbf{M}\bar{\mathbf{z}}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{z}_{t-1}\right).\end{aligned}$$ This relation
suggests that to guarantee [\[eq:saunders_process\]](#eq:saunders_process){reference-type="ref" reference="eq:saunders_process"}, we can define the following three-term recurrence on $\mathbf{z}_{t}$ $$\begin{aligned} \label{eq:updates_z_w_CS} \mathbf{z}_{t+1}= \frac{1}{\beta_{t}} \mathbf{A}\mathbf{M}\bar{\mathbf{z}}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{z}_{t-1} = \frac{1}{\beta_{t}} \mathbf{A}{\mathbf{w}}_{t}- \frac{\alpha_{t}}{\beta_{t}} \mathbf{z}_{t}- \frac{\beta_{t}}{\beta_{t-1}} \mathbf{z}_{t-1},\end{aligned}$$ where $\beta_0 = \beta_1 = \| \tilde{\mathbf{b}}\|$. The main result of this section relies on a few technical lemmas, which we now present. [\[lemma:Kry_tA_tb_CS\]](#lemma:Kry_tA_tb_CS){reference-type="ref" reference="lemma:Kry_tA_tb_CS"}, similar to [\[lemma:Kry_tA_tb\]](#lemma:Kry_tA_tb){reference-type="ref" reference="lemma:Kry_tA_tb"}, gives an alternative characterization of the Krylov subspace in [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"}. [\[lemma:Kry_tA_tb_CS\]]{#lemma:Kry_tA_tb_CS label="lemma:Kry_tA_tb_CS"} For any $1 \leq t \leq \tilde{g}$, we have ${\mathcal{S}_{t}} (\tilde{\mathbf{A}}, \tilde{\mathbf{b}}) = \mathbf{S}^{\intercal}{\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b})$.
*Proof.* By [\[eq:def_tA_tb_tr_hr_CS,eq:Md_STS,eq:Md_KKH\]](#eq:def_tA_tb_tr_hr_CS,eq:Md_STS,eq:Md_KKH){reference-type="ref" reference="eq:def_tA_tb_tr_hr_CS,eq:Md_STS,eq:Md_KKH"} and the definition of the Saunders subspace in [\[eq:saunders\]](#eq:saunders){reference-type="ref" reference="eq:saunders"}, we have $$\begin{aligned} {\mathcal{S}_{t}} (\tilde{\mathbf{A}}, \tilde{\mathbf{b}}) &= {\mathcal{K}_{t_1}} (\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}}), \tilde{\mathbf{b}}) \oplus {\mathcal{K}_{t_2}} (\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}}), \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{b}})), \quad t_1 + t_2 = t, \quad 0 \leq t_1 - t_2 \leq 1, \end{aligned}$$ where $$\begin{aligned} {\mathcal{K}_{t_1}} (\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}}), \tilde{\mathbf{b}}) &= \textnormal{Span}\left\{\tilde{\mathbf{b}}, \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}}) \tilde{\mathbf{b}}, \ldots, [\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}})]^{t_1-1} \tilde{\mathbf{b}}\right\} \\ &= \textnormal{Span}\left\{\mathbf{S}^{\intercal}\mathbf{b}, \mathbf{S}^{\intercal}\mathbf{A}\mathbf{M}\bar{\mathbf{A}}\bar{\mathbf{M}}\mathbf{b}, \ldots, \mathbf{S}^{\intercal}\left[\mathbf{A}\mathbf{M}\bar{\mathbf{A}}\bar{\mathbf{M}}\right]^{t_1-1} \mathbf{b}\right\} = \mathbf{S}^{\intercal}{\mathcal{K}_{t_1}} (\mathbf{A}\mathbf{M}\bar{\mathbf{A}}\bar{\mathbf{M}}, \mathbf{b})
\end{aligned}$$ and $$\begin{aligned} {\mathcal{K}_{t_2}} (\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}}), \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{b}})) &= \textnormal{Span}\left\{\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{b}}), \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}}) \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{b}}), \ldots, [\tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{A}})]^{t_2-1} \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{b}})\right\} \\ &= \textnormal{Span}\left\{\mathbf{S}^{\intercal}\mathbf{A}\mathbf{M}\bar{\mathbf{b}}, \mathbf{S}^{\intercal}\mathbf{A}\mathbf{M}\bar{\mathbf{A}}\bar{\mathbf{M}}\mathbf{A}\mathbf{M}\bar{\mathbf{b}}, \ldots, \mathbf{S}^{\intercal}\left[\mathbf{A}\mathbf{M}\bar{\mathbf{A}}\bar{\mathbf{M}}\right]^{t_2-1} \mathbf{A}\mathbf{M}\bar{\mathbf{b}}\right\} \\ &= \mathbf{S}^{\intercal}{\mathcal{K}_{t_2}} (\mathbf{A}\mathbf{M}\bar{\mathbf{A}}\bar{\mathbf{M}}, \mathbf{A}\mathbf{M}\bar{\mathbf{b}}). \end{aligned}$$ The result follows by combining the two displayed expressions. ◻ Using [\[lemma:Kry_tA_tb_CS\]](#lemma:Kry_tA_tb_CS){reference-type="ref" reference="lemma:Kry_tA_tb_CS"}, we can now gain insight into the properties of the vectors $\mathbf{z}_{i}$ and $\mathbf{w}_{i}$. [\[lemma:Kry_spans_z\_w_CS\]]{#lemma:Kry_spans_z_w_CS label="lemma:Kry_spans_z_w_CS"} For any $1\leq t \leq \tilde{g}$, we have $$\begin{aligned} \textnormal{Span}\{\mathbf{w}_{1}, \mathbf{w}_{2}, \ldots, \mathbf{w}_{t}\} &= \mathbf{M}{\mathcal{S}_{t}} (\bar{\mathbf{A}}\bar{\mathbf{M}}, \bar{\mathbf{b}}),\\ \bar{\mathbf{P}}\mathbf{P}^{\intercal}\textnormal{Span}\{\mathbf{z}_{1}, \mathbf{z}_{2}, \ldots, \mathbf{z}_{t}\} &= \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b}), \end{aligned}$$ where $\mathbf{P}$ is as in [\[eq:def_Md_M\]](#eq:def_Md_M){reference-type="ref" reference="eq:def_Md_M"}.
Furthermore, $\mathbf{w}_{\tilde{g}+1} = \mathbf{0}$ and $\mathbf{z}_{\tilde{g}+1} \in \textnormal{Null}(\bar{\mathbf{M}})$. *Proof.* Again, recall that for all $1 \leq t \leq \tilde{g}$, we have $\beta_{t} > 0$. Now, similar to the proof of [\[lemma:Kry_spans_z\_w\]](#lemma:Kry_spans_z_w){reference-type="ref" reference="lemma:Kry_spans_z_w"}, by [\[eq:def_z\_w_d\_CS,lemma:Kry_tA_tb_CS,eq:Md_STS,eq:saunders\]](#eq:def_z_w_d_CS,lemma:Kry_tA_tb_CS,eq:Md_STS,eq:saunders){reference-type="ref" reference="eq:def_z_w_d_CS,lemma:Kry_tA_tb_CS,eq:Md_STS,eq:saunders"}, it holds that $$\begin{aligned} \textnormal{Span}\{\mathbf{w}_{1}, \mathbf{w}_{2}, \ldots, \mathbf{w}_{t}\} &= \mathbf{S}\times \textnormal{Span}\{\textnormal{Conj}(\tilde{\mathbf{v}}_{1}), \textnormal{Conj}(\tilde{\mathbf{v}}_{2}), \ldots, \textnormal{Conj}(\tilde{\mathbf{v}}_{t})\} \\ &= \mathbf{S}\times \textnormal{Conj}(\textnormal{Span}\{\tilde{\mathbf{v}}_{1}, \tilde{\mathbf{v}}_{2}, \ldots, \tilde{\mathbf{v}}_{t}\}) \\ &= \mathbf{S}\textnormal{Conj}({\mathcal{S}_{t}} (\tilde{\mathbf{A}}, \tilde{\mathbf{b}})) = \mathbf{M}{\mathcal{S}_{t}} (\bar{\mathbf{A}}\bar{\mathbf{M}}, \bar{\mathbf{b}}), \end{aligned}$$ and $$\begin{aligned} \bar{\mathbf{P}}\mathbf{P}^{\intercal}\textnormal{Span}\{\mathbf{z}_{1}, \mathbf{z}_{2}, \ldots, \mathbf{z}_{t}\} &= \mathbf{S}^{\ddagger}\mathbf{S}^{\intercal}\textnormal{Span}\{\mathbf{z}_{1}, \mathbf{z}_{2}, \ldots, \mathbf{z}_{t}\} = \mathbf{S}^{\ddagger}\times \textnormal{Span}\{\tilde{\mathbf{v}}_{1}, \tilde{\mathbf{v}}_{2}, \ldots, \tilde{\mathbf{v}}_{t}\} \\ &= \mathbf{S}^{\ddagger}{\mathcal{S}_{t}} (\tilde{\mathbf{A}}, \tilde{\mathbf{b}}) = \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b}), \end{aligned}$$ where $\mathbf{S}^{\ddagger}\triangleq(\mathbf{S}^{\dagger})^{\intercal}$.
Also, at $t = \tilde{g}+1$, by [\[eq:def_z\_w_d\_CS,eq:def_Md_M\_Sd_S\]](#eq:def_z_w_d_CS,eq:def_Md_M_Sd_S){reference-type="ref" reference="eq:def_z_w_d_CS,eq:def_Md_M_Sd_S"} and the fact $\tilde{\mathbf{v}}_{\tilde{g}+1} = \mathbf{0}$, we obtain $\mathbf{w}_{\tilde{g}+1} = \mathbf{0}$ and $\mathbf{S}^{\intercal}\mathbf{z}_{\tilde{g}+1} = \mathbf{0}$, i.e., $\mathbf{z}_{\tilde{g}+1} \in \textnormal{Null}(\bar{\mathbf{M}})$. ◻ [\[lemma:alpha_beta_CS\]](#lemma:alpha_beta_CS){reference-type="ref" reference="lemma:alpha_beta_CS"} shows how we can compute $\alpha_{t}$ and $\beta_{t}$ in [\[eq:updates_z\_w_CS\]](#eq:updates_z_w_CS){reference-type="ref" reference="eq:updates_z_w_CS"}, i.e., how to construct $\tilde{\mathbf{T}}_{t}$ in [\[eq:tridiagonal_T\]](#eq:tridiagonal_T){reference-type="ref" reference="eq:tridiagonal_T"}. [\[lemma:alpha_beta_CS\]]{#lemma:alpha_beta_CS label="lemma:alpha_beta_CS"} For $1 \leq t \leq \tilde{g}$, we can compute $\alpha_{t}$ and $\beta_{t}$ in [\[eq:updates_z\_w_CS\]](#eq:updates_z_w_CS){reference-type="ref" reference="eq:updates_z_w_CS"} as $$\begin{aligned} \alpha_{t}= \frac{1}{\beta_{t}^2} \langle \bar{\mathbf{w}}_{t}, \mathbf{A}{\mathbf{w}}_{t} \rangle, \quad \beta_{t+1}= \sqrt{\langle \bar{\mathbf{z}}_{t+1}, \mathbf{w}_{t+1} \rangle}. 
\end{aligned}$$ *Proof.* By [\[eq:saunders_process\]](#eq:saunders_process){reference-type="ref" reference="eq:saunders_process"}, [\[eq:def_z\_w_d\_CS\]](#eq:def_z_w_d_CS){reference-type="ref" reference="eq:def_z_w_d_CS"}, [\[lemma:Kry_spans_z\_w_CS\]](#lemma:Kry_spans_z_w_CS){reference-type="ref" reference="lemma:Kry_spans_z_w_CS"}, [\[eq:Md_STS\]](#eq:Md_STS){reference-type="ref" reference="eq:Md_STS"}, and the orthonormality of the Saunders vectors $\tilde{\mathbf{v}}_{i}$, we obtain $$\begin{aligned} \alpha_{t}= \langle \tilde{\mathbf{v}}_{t}, \tilde{\mathbf{A}}\textnormal{Conj}(\tilde{\mathbf{v}}_{t}) \rangle = \langle \frac{1}{\beta_{t}} \mathbf{S}^{\intercal}\mathbf{z}_{t}, \left(\mathbf{S}^{\intercal}\mathbf{A}\mathbf{S}\right) \frac{1}{\beta_{t}} \mathbf{S}^{*}\bar{\mathbf{z}}_{t} \rangle = \frac{1}{\beta_{t}^2} \langle \bar{\mathbf{M}}\mathbf{z}_{t}, \mathbf{A}\mathbf{M}\bar{\mathbf{z}}_{t} \rangle = \frac{1}{\beta_{t}^2} \langle \bar{\mathbf{w}}_{t}, \mathbf{A}{\mathbf{w}}_{t} \rangle. \end{aligned}$$ By [\[eq:def_z\_w_d\_CS,eq:Md_STS\]](#eq:def_z_w_d_CS,eq:Md_STS){reference-type="ref" reference="eq:def_z_w_d_CS,eq:Md_STS"}, and the facts that $\beta_{t+1} > 0$ and $\| \tilde{\mathbf{v}}_{t+1} \| = 1$ for any $1 \leq t \leq \tilde{g}-1$, we get $$\begin{aligned} \beta_{t+1}= \sqrt{\| \beta_{t+1}\tilde{\mathbf{v}}_{t+1} \|^2} = \sqrt{\| \beta_{t+1}\textnormal{Conj}(\tilde{\mathbf{v}}_{t+1}) \|^2} = \sqrt{\langle \mathbf{S}^{*}\bar{\mathbf{z}}_{t+1}, \mathbf{S}^{*}\bar{\mathbf{z}}_{t+1} \rangle} = \sqrt{\langle \bar{\mathbf{z}}_{t+1}, \mathbf{M}\bar{\mathbf{z}}_{t+1} \rangle} = \sqrt{\langle \bar{\mathbf{z}}_{t+1}, \mathbf{w}_{t+1} \rangle}. \end{aligned}$$ For $t = \tilde{g}$, this relation continues to hold since $\beta_{\tilde{g}+1} = 0$ and by [\[lemma:Kry_spans_z\_w_CS\]](#lemma:Kry_spans_z_w_CS){reference-type="ref" reference="lemma:Kry_spans_z_w_CS"}, $\mathbf{w}_{\tilde{g}+1} = \mathbf{0}$.
◻ Since $\mathbf{M}\succeq \mathbf{0}$ and $\langle \bar{\mathbf{z}}_{t}, {\mathbf{w}}_{t} \rangle = \langle \bar{\mathbf{z}}_{t}, \mathbf{M}\bar{\mathbf{z}}_{t} \rangle \geq 0$, the scalar $\beta_{t}$ is well-defined for all $1 \leq t \leq \tilde{g}$, with $\beta_{1} = \| \tilde{\mathbf{b}}\| = \sqrt{\langle \bar{\mathbf{z}}_{1}, \mathbf{w}_{1} \rangle}$. Recall that the update direction in MINRES applied to [\[eq:pMINRES\]](#eq:pMINRES){reference-type="ref" reference="eq:pMINRES"} with a complex-symmetric matrix is given by $$\begin{aligned} \label{eq:updates_v_d_CS} \tilde{\mathbf{d}}_{t} = \frac{1}{\gamma^{[2]}_{t}}\left( \textnormal{Conj}(\tilde{\mathbf{v}}_{t}) - \epsilon_{t} \tilde{\mathbf{d}}_{t-2} - \delta^{[2]}_{t} \tilde{\mathbf{d}}_{t-1} \right), \quad 1 \leq t \leq \tilde{g},\end{aligned}$$ where $\tilde{\mathbf{d}}_{0} = \tilde{\mathbf{d}}_{-1} = \mathbf{0}$. Define ${\mathbf{d}}_{t}\triangleq \mathbf{S}\tilde{\mathbf{d}}_{t}$ and set $\mathbf{d}_{0} = \mathbf{d}_{-1} = \bm{0}$.
It follows that $$\begin{aligned} {\mathbf{d}}_{t}= \mathbf{S}\tilde{\mathbf{d}}_{t}= \frac{1}{\gamma^{[2]}_{t}}\left( \mathbf{S}\textnormal{Conj}(\tilde{\mathbf{v}}_{t}) - \epsilon_{t} \mathbf{S}\tilde{\mathbf{d}}_{t-2} - \delta^{[2]}_{t} \mathbf{S}\tilde{\mathbf{d}}_{t-1} \right) = \frac{1}{\gamma^{[2]}_{t}}\left( \mathbf{S}\textnormal{Conj}(\tilde{\mathbf{v}}_{t}) - \epsilon_{t} \mathbf{d}_{t-2} - \delta^{[2]}_{t} \mathbf{d}_{t-1} \right).\end{aligned}$$ From [\[eq:def_z\_w_d\_CS,eq:Md_STS\]](#eq:def_z_w_d_CS,eq:Md_STS){reference-type="ref" reference="eq:def_z_w_d_CS,eq:Md_STS"}, we obtain the following three-term recurrence relation $$\begin{aligned} \label{eq:update_x_d_CS} \mathbf{d}_{t} = \frac{1}{\gamma^{[2]}_{t} } \left(\frac{1}{\beta_{t}} {\mathbf{w}}_{t}- \epsilon_{t} \mathbf{d}_{t-2} - \delta^{[2]}_{t} \mathbf{d}_{t-1}\right), \quad 1 \leq t \leq \tilde{g}.\end{aligned}$$ Clearly, [\[eq:update_x\_d_CS\]](#eq:update_x_d_CS){reference-type="ref" reference="eq:update_x_d_CS"} implies [\[eq:updates_v\_d_CS\]](#eq:updates_v_d_CS){reference-type="ref" reference="eq:updates_v_d_CS"}. To see this, note that [\[lemma:Kry_tA_tb_CS\]](#lemma:Kry_tA_tb_CS){reference-type="ref" reference="lemma:Kry_tA_tb_CS"} implies $\textnormal{Conj}(\tilde{\mathbf{v}}_{t}) \in \textnormal{Range}(\mathbf{S}^{*})$, and as a result $\tilde{\mathbf{d}}_{t}\in \textnormal{Range}(\mathbf{S}^{*})$ by [\[eq:updates_v\_d_CS\]](#eq:updates_v_d_CS){reference-type="ref" reference="eq:updates_v_d_CS"}. Hence, the identity $\mathbf{S}^{\dagger}\mathbf{S}\mathbf{S}^{*}= \mathbf{S}^{*}$ implies that $\mathbf{S}^{\dagger}{\mathbf{d}}_{t}= \mathbf{S}^{\dagger}\mathbf{S}\tilde{\mathbf{d}}_{t}= \tilde{\mathbf{d}}_{t}$ and $\textnormal{Conj}(\tilde{\mathbf{v}}_{t}) = \mathbf{S}^{\dagger}{\mathbf{w}}_{t}/ \beta_{t}$ by [\[eq:def_z\_w_d\_CS\]](#eq:def_z_w_d_CS){reference-type="ref" reference="eq:def_z_w_d_CS"}. 
Now, multiplying both sides of [\[eq:update_x\_d_CS\]](#eq:update_x_d_CS){reference-type="ref" reference="eq:update_x_d_CS"} by $\mathbf{S}^{\dagger}$ gives [\[eq:updates_v\_d_CS\]](#eq:updates_v_d_CS){reference-type="ref" reference="eq:updates_v_d_CS"}. Also, by [\[lemma:Kry_spans_z\_w_CS\]](#lemma:Kry_spans_z_w_CS){reference-type="ref" reference="lemma:Kry_spans_z_w_CS"}, this construction implies that ${\mathbf{d}}_{t}\in \textnormal{Span}\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_{t} \} = \mathbf{M}{\mathcal{S}_{t}} (\bar{\mathbf{A}}\bar{\mathbf{M}}, \bar{\mathbf{b}})$ for $1 \leq t \leq \tilde{g}$. Finally, from [\[eq:def_x\_r,eq:update_x\_d_CS\]](#eq:def_x_r,eq:update_x_d_CS){reference-type="ref" reference="eq:def_x_r,eq:update_x_d_CS"}, it follows that the update is given by $$\begin{aligned} \mathbf{x}_{t+1}= \mathbf{S}\tilde{\mathbf{x}}_{t+1}= \mathbf{S}\tilde{\mathbf{x}}_{t}+ \tau \mathbf{S}\tilde{\mathbf{d}}_{t}= {\mathbf{x}}_{t}+ \tau {\mathbf{d}}_{t}.\end{aligned}$$ Initializing with $\mathbf{x}_0 = \bm{0}$ gives ${\mathbf{x}}_{t}\in \mathbf{M}{\mathcal{S}_{t}} (\bar{\mathbf{A}}\bar{\mathbf{M}}, \bar{\mathbf{b}})$. We also have the following recurrence relation on quantities connected to the residual in this complex-symmetric setting, similar to [\[lemma:residuals\]](#lemma:residuals){reference-type="ref" reference="lemma:residuals"} for the case of Hermitian systems. [\[lemma:residuals_CS\]]{#lemma:residuals_CS label="lemma:residuals_CS"} For any $1\leq t \leq \tilde{g}$, define $\hat{\mathbf{r}}_{t}$ as $$\begin{aligned} \label{eq:def_tA_tb_tr_hr_hrrt_CS} \hat{\mathbf{r}}_{t}= \bar{\mathbf{S}}\tilde{\mathbf{r}}_{t}\in \mathbb{C}^{d}, \end{aligned}$$ where $\tilde{\mathbf{r}}_{t}= \tilde{\mathbf{b}}- \tilde{\mathbf{A}}\tilde{\mathbf{x}}_{t}$.
We have $$\begin{aligned} \hat{\mathbf{r}}_{t}&= s_{t}^2 \hat{\mathbf{r}}_{t-1}- \frac{\phi_{t} \bar{c}_{t}}{\beta_{t+1}} \bar{\mathbf{w}}_{t+1}\label{eq:hr_CS}, \\ \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathbf{r}}_{t}&= \bar{\mathbf{P}}\mathbf{P}^{\intercal}\left(s_t^2 \mathbf{r}_{t-1} - \frac{\phi_{t} \bar{c}_t}{\beta_{t+1}} \mathbf{z}_{t+1}\right), \label{eq:PPTr_CS} \end{aligned}$$ where $\mathbf{P}$ and $\mathbf{w}_{t}$ are, respectively, defined in [\[eq:def_Md_M,eq:def_z\_w_d\_CS\]](#eq:def_Md_M,eq:def_z_w_d_CS){reference-type="ref" reference="eq:def_Md_M,eq:def_z_w_d_CS"}. *Proof.* By [\[eq:def_tA_tb_tr_hr_CS,eq:def_z\_w_d\_CS\]](#eq:def_tA_tb_tr_hr_CS,eq:def_z_w_d_CS){reference-type="ref" reference="eq:def_tA_tb_tr_hr_CS,eq:def_z_w_d_CS"}, and multiplying both sides of the residual update in [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} (cf. [\[lemma:residual_CS\]](#lemma:residual_CS){reference-type="ref" reference="lemma:residual_CS"}) by $\bar{\mathbf{S}}$, we get $$\begin{aligned} \hat{\mathbf{r}}_{t}= \bar{\mathbf{S}}\tilde{\mathbf{r}}_{t} = s_{t}^2 \bar{\mathbf{S}}\tilde{\mathbf{r}}_{t-1} - \phi_{t} \bar{c}_{t} \bar{\mathbf{S}}\tilde{\mathbf{v}}_{t+1} = s_{t}^2 \hat{\mathbf{r}}_{t-1}- \frac{\phi_{t} \bar{c}_{t}}{\beta_{t+1}} \bar{\mathbf{w}}_{t+1}.
\end{aligned}$$ Similarly, by [\[eq:def_tA_tb_tr_hr_CS,eq:Md_PPH,eq:Md_STS,eq:def_z\_w_d\_CS,eq:trrt_rrt_CS\]](#eq:def_tA_tb_tr_hr_CS,eq:Md_PPH,eq:Md_STS,eq:def_z_w_d_CS,eq:trrt_rrt_CS){reference-type="ref" reference="eq:def_tA_tb_tr_hr_CS,eq:Md_PPH,eq:Md_STS,eq:def_z_w_d_CS,eq:trrt_rrt_CS"}, and noting that $\mathbf{S}^{\ddagger}= \mathbf{S}^{\ddagger}\bar{\mathbf{S}}^{\dagger}\bar{\mathbf{S}}= \bar{\mathbf{M}}^{\dagger}\bar{\mathbf{S}}$, we have $$\begin{aligned} \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathbf{r}}_{t}&= \mathbf{S}^{\ddagger}\mathbf{S}^{\intercal}{\mathbf{r}}_{t}= \mathbf{S}^{\ddagger}\tilde{\mathbf{r}}_{t}= \bar{\mathbf{M}}^{\dagger}\bar{\mathbf{S}}\tilde{\mathbf{r}}_{t}= s_t^2 \bar{\mathbf{M}}^{\dagger}\bar{\mathbf{S}}\tilde{\mathbf{r}}_{t-1} - \phi_{t} \bar{c}_{t} \bar{\mathbf{M}}^{\dagger}\bar{\mathbf{S}}\tilde{\mathbf{v}}_{t+1} \\ &= s_t^2 \bar{\mathbf{M}}^{\dagger}\bar{\mathbf{M}}\mathbf{r}_{t-1} - \frac{\phi_{t} \bar{c}_t}{\beta_{t+1}} \bar{\mathbf{M}}^{\dagger}\bar{\mathbf{M}}\mathbf{z}_{t+1} = s_t^2 \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathbf{r}}_{t-1}- \frac{\phi_{t} \bar{c}_t}{\beta_{t+1}} \bar{\mathbf{P}}\mathbf{P}^{\intercal}\mathbf{z}_{t+1}. \end{aligned}$$ ◻ Next, we show that ${\mathbf{x}}_{t}$ and the residuals $\hat{\mathbf{r}}_{t}$ from [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} for complex-symmetric systems belong to a certain Krylov subspace; see [\[lemma:Kry_x\_hr\]](#lemma:Kry_x_hr){reference-type="ref" reference="lemma:Kry_x_hr"} for the corresponding result for the Hermitian case. 
[\[lemma:Kry_x\_hr_CS\]]{#lemma:Kry_x_hr_CS label="lemma:Kry_x_hr_CS"} With [\[eq:def_tA_tb_tr_hr_CS,eq:def_x\_r\]](#eq:def_tA_tb_tr_hr_CS,eq:def_x_r){reference-type="ref" reference="eq:def_tA_tb_tr_hr_CS,eq:def_x_r"} and $\mathbf{x}_{0} = \mathbf{0}$, we have for any $1 \leq t \leq \tilde{g}$ $$\begin{aligned} {\mathbf{x}}_{t}\in \mathbf{M}{\mathcal{S}_{t}} (\bar{\mathbf{A}}\bar{\mathbf{M}}, \bar{\mathbf{b}}), \end{aligned}$$ and $\textnormal{Span}\{\hat{\mathbf{r}}_{0}, \hat{\mathbf{r}}_{1}, \ldots, \hat{\mathbf{r}}_{t-1}\} = \bar{\mathbf{M}}{\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b})$. *Proof.* The first claim follows from [\[lemma:Kry_spans_z\_w_CS\]](#lemma:Kry_spans_z_w_CS){reference-type="ref" reference="lemma:Kry_spans_z_w_CS"} and our construction of the update direction, as discussed above. The second claim follows from [\[eq:hr_CS,lemma:Kry_spans_z\_w_CS\]](#eq:hr_CS,lemma:Kry_spans_z_w_CS){reference-type="ref" reference="eq:hr_CS,lemma:Kry_spans_z_w_CS"} by noting that $\hat{\mathbf{r}}_{0} = \bar{\mathbf{w}}_{1}$.
◻ [\[rm:all_vectors_subspaces_CS\]]{#rm:all_vectors_subspaces_CS label="rm:all_vectors_subspaces_CS"} Similar to [\[rm:all_vectors_subspaces\]](#rm:all_vectors_subspaces){reference-type="ref" reference="rm:all_vectors_subspaces"} for Hermitian systems, [\[eq:PPTr_CS\]](#eq:PPTr_CS){reference-type="ref" reference="eq:PPTr_CS"} implies that in the complex-symmetric setting, we can define a new vector $\check{\mathbf{r}}_{t}$ by $$\begin{aligned} \label{eq:cr_CS} \check{\mathbf{r}}_{t}= s_t^2 \check{\mathbf{r}}_{t-1}- \frac{\phi_{t} \bar{c}_t}{\beta_{t+1}} \mathbf{z}_{t+1}, \end{aligned}$$ where $\check{\mathbf{r}}_{-1} = \mathbf{0}$, which then implies $\bar{\mathbf{P}}\mathbf{P}^{\intercal}\check{\mathbf{r}}_{t}= \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathbf{r}}_{t}$, i.e., computing $\bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathbf{r}}_{t}$ can be done without performing an extra matrix-vector product $\mathbf{A}{\mathbf{x}}_{t}$. Similar to [\[rm:all_vectors_subspaces\]](#rm:all_vectors_subspaces){reference-type="ref" reference="rm:all_vectors_subspaces"}, this fact, together with [\[eq:updates_z\_w_CS,lemma:Kry_spans_z\_w_CS\]](#eq:updates_z_w_CS,lemma:Kry_spans_z_w_CS){reference-type="ref" reference="eq:updates_z_w_CS,lemma:Kry_spans_z_w_CS"} and the initializations $\mathbf{z}_{0} = \mathbf{0}$ and $\mathbf{z}_1 = \mathbf{b}$, gives $\mathbf{z}_{t}\in {\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b})$, which in turn implies $\check{\mathbf{r}}_{t-1}\in {\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b})$.
Also, from [\[lemma:Kry_x\_hr_CS,lemma:Kry_spans_z\_w_CS,eq:update_x\_d_CS\]](#lemma:Kry_x_hr_CS,lemma:Kry_spans_z_w_CS,eq:update_x_d_CS){reference-type="ref" reference="lemma:Kry_x_hr_CS,lemma:Kry_spans_z_w_CS,eq:update_x_d_CS"}, it follows that ${\mathbf{w}}_{t}, {\mathbf{d}}_{t}, {\mathbf{x}}_{t}\in \mathbf{M}{\mathcal{S}_{t}} (\bar{\mathbf{A}}\bar{\mathbf{M}}, \bar{\mathbf{b}})$ and $\hat{\mathbf{r}}_{t-1}\in \bar{\mathbf{M}}{\mathcal{S}_{t}} (\mathbf{A}\mathbf{M}, \mathbf{b})$. Hence, for a given $\mathbf{M}$, all the vectors ${\mathbf{w}}_{t}, {\mathbf{d}}_{t}, {\mathbf{x}}_{t}, \hat{\mathbf{r}}_{t-1}, \mathbf{z}_{t}, \check{\mathbf{r}}_{t-1}$ are invariant to the particular choice of $\mathbf{S}$ that gives $\mathbf{M}= \mathbf{S}\mathbf{S}^{*}$. We now give the lifting procedure for iterates of [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} applied to complex-symmetric systems. [\[thm:lifting_CS\]]{#thm:lifting_CS label="thm:lifting_CS"} Let $\mathbf{x}_{\tilde{g}}$ be the final iterate of [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} with $\mathbf{A}\in \mathbb{S}^{d\times d}$ being complex-symmetric. 1. If $\hat{\mathbf{r}}_{\tilde{g}}= \bm{0}$, then we must have $\mathbf{x}_{\tilde{g}}= \mathbf{S}\tilde{\mathbf{x}}_{\tilde{g}}$, where $\tilde{\mathbf{x}}_{\tilde{g}}= {\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}$. 2. 
If $\hat{\mathbf{r}}_{\tilde{g}}\neq \bm{0}$, define the lifted vector $$\begin{aligned} \label{eq:P_lifting_CS} \hat{\mathbf{x}}^{+}= \mathbf{x}_{\tilde{g}}- \frac{\langle \textnormal{Conj}(\check{\mathbf{r}}_{\tilde{g}}), \mathbf{x}_{\tilde{g}} \rangle}{\langle \textnormal{Conj}(\hat{\mathbf{r}}_{\tilde{g}}), \textnormal{Conj}(\check{\mathbf{r}}_{\tilde{g}}) \rangle} \textnormal{Conj}(\hat{\mathbf{r}}_{\tilde{g}}) = \left(\mathbf{I}_{d} - \textnormal{Conj}\left(\frac{\hat{\mathbf{r}}_{\tilde{g}}\check{\mathbf{r}}_{\tilde{g}}^{*} }{\hat{\mathbf{r}}_{\tilde{g}}^{*} \check{\mathbf{r}}_{\tilde{g}}}\right)\right) \mathbf{x}_{\tilde{g}}, \end{aligned}$$ where $\hat{\mathbf{r}}_{\tilde{g}}$ and $\check{\mathbf{r}}_{\tilde{g}}$ are defined, respectively, in [\[eq:hr_CS,eq:cr_CS\]](#eq:hr_CS,eq:cr_CS){reference-type="ref" reference="eq:hr_CS,eq:cr_CS"}. Then, it holds that $\hat{\mathbf{x}}^{+}= \mathbf{S}\tilde{\mathbf{x}}^{+}$, where $\tilde{\mathbf{x}}^{+}= {\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}$. *Proof.* The proof is similar to that of [\[thm:lifting\]](#thm:lifting){reference-type="ref" reference="thm:lifting"}.
From [\[eq:def_x\_r,lemma:Kry_x\_hr_CS,eq:Md_PPH,eq:CSMINRES_lifting,eq:trrt_rrt_CS,eq:def_tA_tb_tr_hr_hrrt_CS\]](#eq:def_x_r,lemma:Kry_x_hr_CS,eq:Md_PPH,eq:CSMINRES_lifting,eq:trrt_rrt_CS,eq:def_tA_tb_tr_hr_hrrt_CS){reference-type="ref" reference="eq:def_x_r,lemma:Kry_x_hr_CS,eq:Md_PPH,eq:CSMINRES_lifting,eq:trrt_rrt_CS,eq:def_tA_tb_tr_hr_hrrt_CS"}, we get $$\begin{aligned} \mathbf{x}_{\tilde{g}}&= \mathbf{S}\mathbf{S}^{\dagger}\mathbf{x}_{\tilde{g}}= \mathbf{S}\tilde{\mathbf{x}}_{\tilde{g}}= \mathbf{S}{\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}+ \frac{\langle \textnormal{Conj}(\tilde{\mathbf{r}}_{\tilde{g}}), \tilde{\mathbf{x}}_{\tilde{g}} \rangle}{\langle \textnormal{Conj}(\tilde{\mathbf{r}}_{\tilde{g}}), \textnormal{Conj}(\tilde{\mathbf{r}}_{\tilde{g}}) \rangle} \mathbf{S}\textnormal{Conj}(\tilde{\mathbf{r}}_{\tilde{g}}) \\ &= \mathbf{S}{\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}+ \frac{\langle \mathbf{S}^{*}\bar{\mathbf{r}}_{\tilde{g}}, \tilde{\mathbf{x}}_{\tilde{g}} \rangle}{\langle \textnormal{Conj}(\tilde{\mathbf{r}}_{\tilde{g}}), \mathbf{S}^{*}\bar{\mathbf{r}}_{\tilde{g}} \rangle} \mathbf{S}\textnormal{Conj}(\tilde{\mathbf{r}}_{\tilde{g}}) = \mathbf{S}{\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}+ \frac{\langle \textnormal{Conj}(\check{\mathbf{r}}_{\tilde{g}}), \mathbf{x}_{\tilde{g}} \rangle}{\langle \textnormal{Conj}(\hat{\mathbf{r}}_{\tilde{g}}), \textnormal{Conj}(\check{\mathbf{r}}_{\tilde{g}}) \rangle} \textnormal{Conj}(\hat{\mathbf{r}}_{\tilde{g}}). \end{aligned}$$ By further noting $\mathbf{x}_{\tilde{g}}, \textnormal{Conj}(\hat{\mathbf{r}}_{\tilde{g}}) \in \textnormal{Range}(\mathbf{M}) = \textnormal{Range}(\mathbf{P}\mathbf{P}^{*})$ and $\bar{\mathbf{P}}\mathbf{P}^{\intercal}\check{\mathbf{r}}_{t}= \bar{\mathbf{P}}\mathbf{P}^{\intercal}{\mathbf{r}}_{t}$ from [\[rm:all_vectors_subspaces_CS\]](#rm:all_vectors_subspaces_CS){reference-type="ref" reference="rm:all_vectors_subspaces_CS"}, we obtain the desired result. 
◻ Just like [\[eq:P_lifting_t\]](#eq:P_lifting_t){reference-type="ref" reference="eq:P_lifting_t"}, a general lifting step at every iteration $1 \leq t \leq \tilde{g}$ can be defined as $$\begin{aligned} \label{eq:P_lifting_t_CS} \hat{\mathbf{x}}^{\natural}_t = {\mathbf{x}}_{t}- \frac{\langle \textnormal{Conj}(\check{\mathbf{r}}_{t}), {\mathbf{x}}_{t} \rangle}{\langle \textnormal{Conj}(\check{\mathbf{r}}_{t}), \textnormal{Conj}(\hat{\mathbf{r}}_{t}) \rangle} \textnormal{Conj}(\hat{\mathbf{r}}_{t}).\end{aligned}$$ Similar to [\[coro:pseudo_P\]](#coro:pseudo_P){reference-type="ref" reference="coro:pseudo_P"}, [\[coro:pseudo_P\_CS\]](#coro:pseudo_P_CS){reference-type="ref" reference="coro:pseudo_P_CS"} shows that such a lifting strategy gives the pseudo-inverse solution of the original unpreconditioned problem, i.e., $\mathbf{A}^{\dagger}\mathbf{b}$. [\[coro:pseudo_P\_CS\]]{#coro:pseudo_P_CS label="coro:pseudo_P_CS"} When $\textnormal{Range}(\bar{\mathbf{M}}) = \textnormal{Range}(\mathbf{A})$, we must have $\hat{\mathbf{r}}_{\tilde{g}}= \bm{0}$ and $\mathbf{x}_{\tilde{g}}= \mathbf{A}^{\dagger}\mathbf{b}$. *Proof.* From $\textnormal{Range}(\bar{\mathbf{M}}) = \textnormal{Range}(\mathbf{A})$, we obtain $\tilde{\mathbf{b}}\in \textnormal{Range}(\tilde{\mathbf{A}})$ and $\hat{\mathbf{r}}_{\tilde{g}}= \bar{\mathbf{S}}\tilde{\mathbf{r}}_{\tilde{g}}= \bm{0}$. By applying [@bernstein2009matrix Fact 6.4.10] twice and using [\[eq:Md_PPH,thm:lifting_CS\]](#eq:Md_PPH,thm:lifting_CS){reference-type="ref" reference="eq:Md_PPH,thm:lifting_CS"}, we obtain $$\begin{aligned} \mathbf{x}_{\tilde{g}}= \mathbf{S}{\tilde{\mathbf{A}}}^{\dagger}\tilde{\mathbf{b}}= \mathbf{S}\left[\mathbf{S}^\intercal\mathbf{A}\mathbf{S}\right]^{\dagger} \mathbf{S}^\intercal\mathbf{b}= \mathbf{P}\mathbf{P}^{*}\mathbf{A}^{\dagger}\bar{\mathbf{P}}\mathbf{P}^{\intercal}\mathbf{b}= \mathbf{A}^{\dagger}\mathbf{b}.
\end{aligned}$$ ◻ [\[rm:reoth_CS\]]{#rm:reoth_CS label="rm:reoth_CS"} The reorthogonalization strategy within [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} for complex-symmetric systems is done similarly to that in [\[rm:reoth\]](#rm:reoth){reference-type="ref" reference="rm:reoth"}, i.e., $$\begin{aligned} \mathbf{z}_{t}= \mathbf{z}_{t}- \mathbf{Y}_{t-1} \mathbf{z}_{t}, \quad {\mathbf{w}}_{t}= {\mathbf{w}}_{t}- \mathbf{Y}_{t-1}^{\intercal} {\mathbf{w}}_{t}, \end{aligned}$$ where $\mathbf{Y}_0 = \mathbf{0}$ and $\mathbf{Y}_{t} \triangleq [\frac{\mathbf{z}_1}{\beta_1}, \frac{\mathbf{z}_2}{\beta_2},\dots,\frac{\mathbf{z}_{t}}{\beta_{t}}] [\frac{\mathbf{w}_1}{\beta_1}, \frac{\mathbf{w}_2}{\beta_2},\dots,\frac{\mathbf{w}_{t}}{\beta_{t}}]^{\intercal} = \mathbf{Y}_{t-1} + \frac{1}{\beta_{t}^2} \mathbf{z}_{t} \mathbf{w}_{t}^{\intercal}$. We end this section by noting that, similar to Hermitian systems, in the case where $\mathbf{A}$ is complex-symmetric, [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} is analytically equivalent to [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"}. Hence, when $m \ll d$, the sub-preconditioned algorithm [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} can be regarded as a dimensionality-reduced version of [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"} and allows for significantly less storage and computation than what is required in [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"}.

# Numerical experiments {#sec:exp}

In this section, we provide several numerical examples illustrating our theoretical findings. We first verify our theoretical results in [4.1](#sec:exp:lifting){reference-type="ref" reference="sec:exp:lifting"} using synthetic data.
We subsequently explore a related application in image deblurring in [4.2](#sec:exp_image){reference-type="ref" reference="sec:exp_image"}. ## Pseudo-inverse Solutions by Lifting: [\[thm:MINRES_dagger,thm:CSMINRES_dagger\]](#thm:MINRES_dagger,thm:CSMINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger,thm:CSMINRES_dagger"} {#sec:exp:lifting} We first demonstrate [\[thm:MINRES_dagger,thm:CSMINRES_dagger\]](#thm:MINRES_dagger,thm:CSMINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger,thm:CSMINRES_dagger"} on some simple and small synthetic problems. We consider [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} with $d = 20$ for Hermitian as well as complex-symmetric synthetic matrices. For both matrices, the rank is set to $r = 15$. We also let $\mathbf{b}= \mathbf{1}$, i.e., the vector of all ones, which ensures that the right-hand side vector $\mathbf{b}$ does not lie entirely in the range of these matrices. In [\[fig:pseudo\]](#fig:pseudo){reference-type="ref" reference="fig:pseudo"}, we plot the relative error of the iterates of [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}, ${\mathbf{x}}_{t}$, as well as the lifted iterates $\hat{\mathbf{x}}_{t}^{\natural}$ using [\[eq:MINRES_lifting\]](#eq:MINRES_lifting){reference-type="ref" reference="eq:MINRES_lifting"} and [\[eq:CSMINRES_lifting\]](#eq:CSMINRES_lifting){reference-type="ref" reference="eq:CSMINRES_lifting"} to the true pseudo-inverse solution $\mathbf{x}^{+}= \mathbf{A}^{\dagger}\mathbf{b}$, namely $\| {\mathbf{x}}_{t}- \mathbf{x}^{+}\| / \| \mathbf{x}^{+}\|$ and $\| \hat{\mathbf{x}}_{t}^{\natural}- \mathbf{x}^{+}\| / \| \mathbf{x}^{+}\|$.
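To make the lifting step concrete, the following minimal `numpy` sketch uses the fact stated by the theorems above, namely that the final MINRES iterate differs from $\mathbf{A}^{\dagger}\mathbf{b}$ only along the final residual direction, and checks that the Hermitian lifting step removes exactly that component. The matrix construction, seed, and the contamination factor $0.7$ are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 20, 15

# Random real-symmetric rank-r matrix: eigenvalues in [1, 2] on a random eigenbasis.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q[:, :r] @ np.diag(np.linspace(1.0, 2.0, r)) @ Q[:, :r].T
b = np.ones(d)                            # generically not in Range(A)

x_dag = np.linalg.pinv(A) @ b             # true pseudo-inverse solution
res = b - A @ x_dag                       # final residual = component of b in Null(A)

# Stand-in for the final MINRES iterate: per the theory above, it differs from
# the pseudo-inverse solution only along the final residual (factor 0.7 is arbitrary).
x_T = x_dag + 0.7 * res

# Lifting step (Hermitian case): remove the null-space contamination.
x_lift = x_T - (res @ x_T) / (res @ res) * res

print(np.linalg.norm(x_T - x_dag))        # clearly nonzero
print(np.linalg.norm(x_lift - x_dag))     # near machine precision
```

The lifted vector is orthogonal to the null space of $\mathbf{A}$ by construction, which is exactly what characterizes the pseudo-inverse solution among all minimum-residual points.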
As can be seen from [\[fig:pseudo\]](#fig:pseudo){reference-type="ref" reference="fig:pseudo"}, in sharp contrast to the plain MINRES iterates, the final lifted iterate in each setting coincides with the respective pseudo-inverse solution, corroborating the results of [\[thm:MINRES_dagger,thm:CSMINRES_dagger\]](#thm:MINRES_dagger,thm:CSMINRES_dagger){reference-type="ref" reference="thm:MINRES_dagger,thm:CSMINRES_dagger"}. ## Image Deblurring by Lifting: [\[thm:MINRES_dagger,thm:lifting\]](#thm:MINRES_dagger,thm:lifting){reference-type="ref" reference="thm:MINRES_dagger,thm:lifting"} {#sec:exp_image} We now turn our attention to a more realistic setting in the context of image deblurring. We emphasize that our goal here is not to obtain a cutting-edge image deblurring technique that achieves the state-of-the-art, but rather to explore the potential of our lifting strategies in a real-world application. We consider the following image blurring model $$\begin{aligned} \mathbf{B}= \mathcal{A}(\mathbf{X}) + \mathbf{E},\end{aligned}$$ where $\mathbf{B}\in \mathbb{R}^{n \times n}$ is a noisy and blurred version of an image $\mathbf{X}\in \mathbb{R}^{n \times n}$, $\mathcal{A}: \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ represents the linear Gaussian smoothing/blurring operator [@shapiro2001computer] as a 2D convolutional operator, and $\mathbf{E}$ is some noise matrix. Simple deblurring can be done by solving the least-squares problem [\[eq:least_squares\]](#eq:least_squares){reference-type="ref" reference="eq:least_squares"} where $\mathbf{x}= \textnormal{vec}(\mathbf{X})\in \mathbb{R}^{n^{2}}$, $\mathbf{b}= \textnormal{vec}(\mathbf{B})\in \mathbb{R}^{n^{2}}$, and $\mathbf{A}\in \mathbb{R}^{n^{2} \times n^{2}}$ is the real symmetric matrix representation of the Gaussian blur operator $\mathcal{A}$. Though $\mathbf{A}$ typically has full rank, it often contains several small singular values, which make $\mathbf{A}$ numerically near-singular.
In lieu of employing any particular explicit regularization, we terminate the iterations early, before fitting to the "noisy subspaces" corresponding to the extremely small singular values. Our aim here is to investigate the effects of our lifting strategies, with or without preconditioning, for deblurring images. More specifically, we will investigate the application of [\[eq:lifting_t,eq:P_lifting_t\]](#eq:lifting_t,eq:P_lifting_t){reference-type="ref" reference="eq:lifting_t,eq:P_lifting_t"}. When $n$ is large, it is rather impractical to explicitly store the matrix $\mathbf{A}$. To remedy this, we generate a Toeplitz matrix $\mathbf{Z}\in \mathbb{R}^{n \times n}$, from which we can implicitly define the blurring matrix $\mathbf{A}$ as $\mathbf{A}= \mathbf{Z}\otimes \mathbf{Z}$, where $\otimes$ denotes the Kronecker product. For our experiments, we consider a color image with $n = 1,024$. We solve for each channel separately and then normalize our final result to regenerate the color image. The Gaussian blurring matrix $\mathbf{A}$ is generated with bandwidth $101$ and standard deviation $9$. For each color channel, the elements of the corresponding noise matrix are generated i.i.d. from the standard normal distribution. Finally, we note that, since $n\gg1$ in our experiments, we apply [\[eq:lifting_t\]](#eq:lifting_t){reference-type="ref" reference="eq:lifting_t"} with [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} as an equivalent alternative to applying [\[eq:P_lifting_t\]](#eq:P_lifting_t){reference-type="ref" reference="eq:P_lifting_t"} with [\[alg:pMINRES\]](#alg:pMINRES){reference-type="ref" reference="alg:pMINRES"}.
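The implicit representation $\mathbf{A}= \mathbf{Z}\otimes \mathbf{Z}$ rests on the Kronecker identity $(\mathbf{Z}\otimes \mathbf{Z})\,\textnormal{vec}(\mathbf{X}) = \textnormal{vec}(\mathbf{Z}\mathbf{X}\mathbf{Z}^{\intercal})$, which can be checked on a small instance; the $6\times 6$ size and banded entries below are illustrative only.

```python
import numpy as np
from scipy.linalg import toeplitz

n = 6
col = np.array([1.0, 0.5, 0.1, 0.0, 0.0, 0.0])  # small banded blur stencil (illustrative)
Z = toeplitz(col)                                # symmetric Toeplitz blur factor
A = np.kron(Z, Z)                                # explicit n^2 x n^2 blurring matrix

X = np.arange(n * n, dtype=float).reshape(n, n)  # a toy "image"

# Matrix-free matvec: (Z kron Z) vec(X) = vec(Z X Z^T), so A never needs to be stored.
lhs = A @ X.ravel()
rhs = (Z @ X @ Z.T).ravel()
print(np.allclose(lhs, rhs))                     # True
```

In the actual experiments only the right-hand side of this identity is evaluated, reducing an $n^2 \times n^2$ matvec to two $n \times n$ matrix products.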
### Deblurring with [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} and [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"} {#deblurring-with-algminres-and-proplifting_t .unnumbered} In our experiments, we first apply [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}, and evaluate the quality of the deblurred image from both the unlifted solution as well as the lifted one obtained from [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"}. As benchmarks, we compare with the solutions obtained from LSQR [@paige1982lsqr], which has been shown to enjoy favorable stability properties for solving large-scale least-squares problems [@wathen2022some], as well as the standard truncated SVD. Both these methods, though not the state-of-the-art, have long been used for image deblurring problems [@hansen1998; @hansen2006deblurring]. In our implementation, we set the maximum number of iterations of LSQR and [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} to $30$ and apply the lifting procedure from [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"} to eliminate the contributions from the noisy subspaces corresponding to extremely small singular values of $\mathbf{A}$. Note that every iteration of LSQR requires two matrix-vector multiplications, whereas every iteration of [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} involves only one such operation. To evaluate the quality of the reconstructed images, we use two common metrics, namely the peak-signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [@hore2010image; @wang2004image]. Deblurred images with a higher value in either metric are deemed to be of relatively better quality.
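For reference, PSNR is $10\log_{10}(\textnormal{MAX}^2/\textnormal{MSE})$, where MAX is the peak pixel value; a minimal sketch is below (SSIM, being window-based, is typically taken from a library such as scikit-image rather than hand-coded).

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio (dB) between two images with values in [0, peak]."""
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((8, 8))
degraded = np.full((8, 8), 0.5)   # constant error of 0.5, so MSE = 0.25
print(psnr(clean, degraded))      # 10*log10(1/0.25), about 6.02 dB
```
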
In [1](#fig:deblurring){reference-type="ref" reference="fig:deblurring"}, we see that the image returned from LSQR is of higher quality than the one directly obtained from [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}. However, after applying the lifting step from [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"} to the image returned from [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}, the quality of the image is drastically improved. In fact, even though the image from LSQR and that after lifting from [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"} exhibit similar PSNR and SSIM values, the lifted image appears sharper and is of better quality as measured by the proverbial "eye-norm". ![From the left to the right: the original image, the noisy blurred image, the image deblurred with LSQR, that obtained from [\[alg:MINRES\]](#alg:MINRES){reference-type="ref" reference="alg:MINRES"}, and the lifted image from [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"}. The "wall-clock" times, in seconds, for the iterative algorithms to complete $30$ iterations are also shown. [\[fig:deblurring\]]{#fig:deblurring label="fig:deblurring"}](figures/deblurring.png){#fig:deblurring width="100%"} At the same time, we also apply the truncated SVD [@hansen2006deblurring] as another benchmark. We can see from [2](#fig:Truncated_deblurring){reference-type="ref" reference="fig:Truncated_deblurring"} that the performance of the truncated SVD is reasonable when the singular value threshold $\epsilon$ is large. However, it is also clear that the method is highly sensitive to the choice of this threshold. ![Image deblurring with Truncated SVD. In our naming convention "Rank $a$%", "$a$%" represents the rank-ratio $r^2/n^2$, where $r$ is the rank used in the Truncated SVD method.
[\[fig:Truncated_deblurring\]]{#fig:Truncated_deblurring label="fig:Truncated_deblurring"}](figures/Truncated_deblurring.png){#fig:Truncated_deblurring width="100%"} ### Deblurring with [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} and [\[eq:P_lifting_t\]](#eq:P_lifting_t){reference-type="ref" reference="eq:P_lifting_t"} {#deblurring-with-algsub_pminres-and-eqp_lifting_t .unnumbered} We now consider incorporating preconditioning within our image deblurring problem. Recall that preconditioning has long been used in this context [@dell2017structure; @chen2016preconditioning; @hansen2006deblurring; @bianchi2019structure]. To fit our settings here, we consider appropriate singular preconditioners $\mathbf{M}\succeq \mathbf{0}$, and explore the deblurring quality of the unlifted solution of [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"}, as well as the lifted one obtained by applying [\[prop:lifting_t\]](#prop:lifting_t){reference-type="ref" reference="prop:lifting_t"} to the preconditioned system [\[eq:preq_MINRES\]](#eq:preq_MINRES){reference-type="ref" reference="eq:preq_MINRES"}. We construct two different types of sub-preconditioners. For $i =1,2$, we let $\mathbf{S}_i = \mathbf{C}_i \otimes \mathbf{C}_i \in \mathbb{R}^{n^2 \times r_i^2}$, where $\mathbf{C}_i = \mathbf{Q}_i \mathbf{\Sigma}_i \in \mathbb{R}^{n \times r_i}$, and the factors $\mathbf{\Sigma}_i$, $\mathbf{Q}_i$ and the corresponding ranks $r_i$ are generated as follows: 1. To construct $\mathbf{Q}_{i}$, we first generate an $n \times n$ matrix $\hat{\mathbf{C}}$ whose elements are drawn independently from the standard normal distribution.
We then choose $\mathbf{Q}_1 \in \mathbb{R}^{n \times r_1}$ and $\mathbf{Q}_2 \in \mathbb{R}^{n \times r_2}$ via the incomplete QR decomposition of $\mathbf{Z}\hat{\mathbf{C}}= \mathbf{Q}_1 \mathbf{R}_1$ and $\hat{\mathbf{C}}= \mathbf{Q}_2 \mathbf{R}_2$ with the corresponding ranks $r_1$ and $r_2$. 2. We construct the diagonal matrices $\mathbf{\Sigma}_i \in \mathbb{R}^{r_i \times r_i}, \; i =1,2$ with positive diagonal entries such that the smallest and the largest diagonal elements are, respectively $1$ and $2$, and other elements are linearly spaced values in-between. Based on our constructions, $\textnormal{Range}(\mathbf{S}_1) \subseteq \textnormal{Range}(\mathbf{A})$, while this is not true for $\mathbf{S}_2$. Considering $\mathbf{S}_2$ as an example, we use the following Kronecker product properties to avoid explicit storage of the matrix $\mathbf{A}$: $$\begin{aligned} \tilde{\mathbf{A}}\textnormal{vec}(\tilde{\mathbf{X}}) &= \mathbf{S}^{\intercal}_2 \mathbf{A}\mathbf{S}_2 \textnormal{vec}(\tilde{\mathbf{X}}) = \left(\mathbf{C}_2^{\intercal} \otimes \mathbf{C}_2^{\intercal}\right) \left(\mathbf{Z}\otimes \mathbf{Z}\right) \left(\mathbf{C}_2 \otimes \mathbf{C}_2\right) \textnormal{vec}(\tilde{\mathbf{X}}) \\ &= \left(\left(\mathbf{C}_2^{\intercal} \mathbf{Z}\mathbf{C}_2\right) \otimes \left(\mathbf{C}_2^{\intercal} \mathbf{Z}\mathbf{C}_2 \right)\right) \textnormal{vec}(\tilde{\mathbf{X}}) = \textnormal{vec}(\mathbf{C}_2^{\intercal} \mathbf{Z}\mathbf{C}_2 \tilde{\mathbf{X}}\mathbf{C}_2^{\intercal} \mathbf{Z}\mathbf{C}_2)\\ \tilde{\mathbf{b}}&= \mathbf{S}^{\intercal}_2 \textnormal{vec}(\mathbf{B}) = \left(\mathbf{C}_2^{\intercal} \otimes \mathbf{C}_2^{\intercal}\right) \textnormal{vec}(\mathbf{B}) = \textnormal{vec}(\mathbf{C}_2^{\intercal} \mathbf{B}\mathbf{C}_2) \end{aligned}$$ and $\mathbf{x}= \textnormal{vec}(\mathbf{X}) = \mathbf{S}_2 \textnormal{vec}(\tilde{\mathbf{X}}) = \left(\mathbf{C}_2 \otimes \mathbf{C}_2\right) \textnormal{vec}(\tilde{\mathbf{X}}) 
= \textnormal{vec}(\mathbf{C}_2 \tilde{\mathbf{X}}\mathbf{C}_2^{\intercal})$. ![Image deblurring using [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} with $\mathbf{S}_1 \in \mathbb{R}^{n^2 \times r_1^2}$. In our naming convention, "Rank $a$% ($b$/$c$ sec)", "$a$%" represents the rank-ratio $r_1^2/n^2$, "$c$" is the total wall-clock time of the full deblurring process (including the incomplete QR decomposition to compute $\mathbf{S}_1$), and "$b$" shows the time taken by [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} only. The figures in the bottom row are the lifted versions of the ones in the top row. [\[fig:PMR_range_deblurring\]]{#fig:PMR_range_deblurring label="fig:PMR_range_deblurring"}](figures/PMR_range_deblurring.png){#fig:PMR_range_deblurring width="100%"} ![Image deblurring using [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} with $\mathbf{S}_2 \in \mathbb{R}^{n^2 \times r_2^2}$. In our naming convention, "Rank $a$% ($b$/$c$ sec)", "$a$%" represents the rank-ratio $r_2^2/n^2$, "$c$" is the total wall-clock time of the full deblurring process (including the incomplete QR decomposition to compute $\mathbf{S}_2$), and "$b$" shows the time taken by [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} only. The figures in the bottom row are the lifted versions of the ones in the top row. [\[fig:PMR_deblurring\]]{#fig:PMR_deblurring label="fig:PMR_deblurring"}](figures/PMR_deblurring.png){#fig:PMR_deblurring width="100%"} [3](#fig:PMR_range_deblurring){reference-type="ref" reference="fig:PMR_range_deblurring"} depicts the reconstruction quality using the sub-preconditioner $\mathbf{S}_1$, where for rank-ratios larger than $1\%$ we obtain reconstructions of reasonable quality.
The effect of the lifting strategy from [\[thm:lifting\]](#thm:lifting){reference-type="ref" reference="thm:lifting"} is also clearly visible, in particular for higher values of the rank-ratio. Notably, the running time of [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} with $\mathbf{S}_1$ makes up a small portion of the total deblurring time, which includes the QR decomposition to construct $\mathbf{S}_1$. This is a direct consequence of the dimensionality reduction nature of [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"}, which employs the sub-preconditioner $\mathbf{S}_1$, as opposed to the full preconditioner $\mathbf{M}$. Also, from [3](#fig:PMR_range_deblurring){reference-type="ref" reference="fig:PMR_range_deblurring"} it is clear that [\[alg:sub_pMINRES\]](#alg:sub_pMINRES){reference-type="ref" reference="alg:sub_pMINRES"} exhibits a great degree of stability with respect to the different choices of the rank for the sub-preconditioner $\mathbf{S}_1$, as compared with the truncated SVD in [2](#fig:Truncated_deblurring){reference-type="ref" reference="fig:Truncated_deblurring"}. In [4](#fig:PMR_deblurring){reference-type="ref" reference="fig:PMR_deblurring"}, however, we see poor-quality reconstructions. This is entirely due to the fact that, by construction, $\textnormal{Range}(\mathbf{S}_2)$ does not align well with the range space of $\mathbf{A}$, and hence constitutes a poor sub-preconditioner. Hence, when the underlying matrix is (nearly) singular and $\mathbf{b}$ substantially aligns with the eigenspace corresponding to small eigenvalues of $\mathbf{A}$, one should apply preconditioners that can counteract the negative effects of such alignments.
This observation underlines the fact that, unlike the typical settings where the preconditioning matrix is assumed to have full rank, the situation can be drastically different with singular preconditioners, and one can face a multitude of challenges. In this respect, although singular preconditioners can be regarded as a way to reduce the dimensionality of the problem, further studies are needed to explore the effects and properties of such preconditioners in different contexts. # Conclusions {#sec:conclusion} We considered the minimum residual method (MINRES) for solving linear least-squares problems involving singular matrices. It is well-known that, unless the right-hand side vector lies entirely in the range space of the matrix, MINRES does not necessarily converge to the pseudo-inverse solution of the problem. We proposed a novel lifting strategy, which can readily recover such a pseudo-inverse solution from the final iterate of MINRES. We provided similar results in the context of preconditioned MINRES using positive semi-definite and singular preconditioners. We also showed that, when such singular preconditioners are formed from sub-preconditioners, we can equivalently reformulate the original preconditioned MINRES algorithm in lower dimensions, where it can be solved more efficiently. We provided several numerical examples to further shed light on our theoretical results and to explore their use cases in a real-world application. # Statements and Declarations {#statements-and-declarations .unnumbered} **Funding.** Y. Liu was supported by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA). Andre Milzarek was partly supported by the National Natural Science Foundation of China (NSFC) -- Foreign Young Scholar Research Fund Project (RFIS) under Grant No. 12150410304 and by the Shenzhen Science and Technology Program under Grant No. RCYX20221008093033010.
Fred Roosta was partially supported by the Australian Research Council through an Industrial Transformation Training Centre for Information Resilience (IC200100022) as well as a Discovery Early Career Researcher Award (DE180100923). [^1]: Mathematical Institute, University of Oxford. Email: `yang.liu@maths.ox.ac.uk` [^2]: School of Data Science, The Chinese University of Hong Kong, Shenzhen, (CUHK-Shenzhen), China. Email: `andremilzarek@cuhk.edu.cn` [^3]: School of Mathematics and Physics, University of Queensland, Australia, and International Computer Science Institute, Berkeley, USA. Email: `fred.roosta@uq.edu.au`
{ "id": "2309.17096", "title": "Obtaining Pseudo-inverse Solutions With MINRES", "authors": "Yang Liu, Andre Milzarek and Fred Roosta", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We investigate the joint convergence of independent random Toeplitz matrices with complex input entries that have a pair-correlation structure, along with deterministic Toeplitz matrices and the backward identity permutation matrix. We also extend the above results to generalized Toeplitz and related matrices. The limits depend only on the correlation structure but are universal otherwise, in that they do not depend on the underlying distributions of the entries. In particular, these results provide the joint convergence of asymmetric Hankel matrices. Earlier results in the literature on the joint convergence of random symmetric Toeplitz and symmetric Hankel matrices with real entries follow as special cases. author: - "Kartick Adhikari[^1]" - "Arup Bose[^2]" - "Shambhu Nath Maurya[^3]" title: Convergence of high dimensional Toeplitz and related matrices with correlated inputs --- **AMS Subject Classification**: Primary 15B05; Secondary 15B52, 60B20. # Introduction and Main results Let $A_n$ be an $n\times n$ random matrix with eigenvalues $\lambda_{1,n},\ldots, \lambda_{n,n}$. Then the *empirical spectral distribution* (ESD) of $A_n$ is defined as $$\begin{aligned} F^{A_n}(x,y)=n^{-1}\#\{k\;:\; \Re(\lambda_{k,n})\le x, \Im(\lambda_{k,n})\le y\},\;\;\;\mbox{ for $x,y\in \mathbb{R}$},\end{aligned}$$ where $\#$ denotes the cardinality of a set, and $\Im(x)$ and $\Re (x)$ denote the imaginary and real parts of $x$, respectively. While $F^{A_n}$ is a random distribution function, its expectation $\mbox{\bf E}[F^{A_n}(x,y)]$ (called the EESD) is a non-random distribution function. If, as $n \to \infty$, the ESD and/or the EESD converges weakly (a.s. or in probability for the ESD) to a distribution function $F_{\infty}$, then the limit(s) are commonly referred to as the *limiting spectral distribution* (LSD) of $A_n$.
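As a quick numerical illustration of the ESD (restricted, for simplicity, to a Hermitian matrix, so that the eigenvalues are real and $F^{A_n}$ reduces to a one-dimensional empirical distribution; the matrix below is an arbitrary toy example):

```python
import numpy as np

def esd(A, x):
    """Empirical spectral distribution F^A(x) = #{k : lambda_k <= x} / n
    for a Hermitian matrix A (so the eigenvalues are real)."""
    lam = np.linalg.eigvalsh(A)
    return np.mean(lam <= x)

A = np.diag([1.0, 2.0, 3.0])
print(esd(A, 2.5))   # 2/3: two of the three eigenvalues are <= 2.5
```
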
When we have one or more sequences of $n\times n$ (random) matrices, the appropriate notion of convergence is to view them as elements of the *$*$-probability spaces* of matrix algebras with the state as the average trace. In general, a $*$-probability space is a pair $(\mathcal A,\varphi)$ where $\mathcal A$ is a unital $*$-algebra (with unity ${\mathbf 1}_{\mathcal A}$) over the complex numbers, and $\varphi$ is a linear functional such that $\varphi({\mathbf 1}_{\mathcal A})=1$ and $\varphi(aa^*)\ge 0$ for all $a\in \mathcal A$. The state is called tracial if $\varphi(ab)=\varphi(ba)$ for all $a, b \in \mathcal{A}$. When dealing with matrices, the natural choice of the $*$-probability space is the matrix algebra $\mathcal{A}_n$ (the operation $*$ is the conjugate transpose), of $n\times n$ random matrices whose input entries have all moments finite, with the state $$\begin{aligned} \varphi_n(B_n)&:=\frac{1}{n}\mbox{\bf E}[{\mbox{Tr}}(B_n)], \;\mbox{where ${\mbox{Tr}}(B_n)=\sum_{i=1}^{n}b_{ii}$ when $B_n=((b_{ij}))_{n\times n}\in \mathcal A_n$}.\end{aligned}$$ Suppose $\{A_{i,n}, 1\leq i \leq p\}$ are $p$ sequences of $n\times n$ random matrices. We say that they *converge jointly in $*$-distribution* to some elements $\{a_i, 1\leq i \leq p\}\in (\mathcal A,\varphi)$ if for every choice of $k \geq 1$, $\epsilon_1,\epsilon_2,\ldots, \epsilon_k\in \{1, *\}$ and $i_1, \ldots, i_k \in \{1, \ldots , p\}$, we have $$\label{eq:star_conv} \lim_{n\to \infty}\varphi_n(A_{i_{1},n}^{\epsilon_1}\cdots A_{i_{k},n}^{\epsilon_k})=\varphi(a_{i_{1}}^{\epsilon_1}\cdots a_{i_{k}}^{\epsilon_k}).$$ Then we write $(A_{i,n}, 1\leq i\leq p)\stackrel{*\mbox{-dist}}{\longrightarrow} (a_i, 1\leq i \leq p)$. If the matrices are *real symmetric*, then taking $*$ is redundant. The quantities $\{\varphi_n(\cdot)\}$ and $\{\varphi(\cdot)\}$ are referred to as $*$-moments of the respective variables. A moment is said to be odd if $k$ is odd.
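The defining properties of the state are easy to check numerically; in the sketch below the expectation in $\varphi_n$ is dropped, since $\mbox{Tr}(AB)=\mbox{Tr}(BA)$ already holds pathwise (the size and seed are arbitrary):

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
phi = lambda B: np.trace(B) / n        # the (pathwise) state B -> Tr(B)/n

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

print(np.isclose(phi(np.eye(n)), 1.0))                 # phi(1) = 1: True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))    # tracial property: True
print(phi(A @ A.conj().T).real >= 0)                   # positivity phi(a a^*) >= 0: True
```
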
It is important to note that if the limit on the left of [\[eq:star_conv\]](#eq:star_conv){reference-type="eqref" reference="eq:star_conv"} exists for all choices, then these limits automatically define a $*$-probability space with $p$ indeterminates. The two notions of convergence are related. Suppose the $A_n$ are Hermitian and $A_n$ converges to $a$ in the above sense, where the moments $\varphi(a^k)$ determine a unique probability distribution $\mu$, say. Then the EESD of $A_n$ converges weakly to $\mu$. The joint convergence of Wigner matrices and the notion of free independence has been known since the work of [@Dan_wigner_JC]. The work on joint convergence that is especially relevant to our work is that of [@bose_saha_patter_JC_annals], who presented a unified approach for the joint convergence of the five symmetric random matrices: Wigner, symmetric Toeplitz, symmetric Hankel, reverse circulant and symmetric circulant. For an overview of joint convergence of other random matrices, see [@rayan_JC_limitnotfree], [@basu_bose_JC_patt], [@mingo_freeprob_book] and [@bose_freeprob]. The overriding assumption in these works is that the input variables used to populate these matrices are independent and real-valued. We look more closely at the Toeplitz and the Hankel matrices. We first introduce specific patterns of correlation in these matrices, allow the entries to be complex-valued, and study the joint convergence of independent copies of these matrices. It is well-known that the Hankel and the Toeplitz matrices are related to each other through certain deterministic matrices. Moreover, there are results on the (joint) convergence of deterministic Toeplitz matrices, and certain generalizations of them. This gives several different deterministic and random matrix models and we explore their joint convergence.
The $n\times n$ Toeplitz matrix $T_n$ with *input sequence* $\{a_{i,n}; i\in \mathbb{Z}\}$, is defined as $$\begin{aligned} T_n=\left(\begin{array}{ccccc} a_{0,n} & a_{-1,n} & a_{-2,n} & \cdots & a_{1-n,n}\\ a_{1,n} & a_{0,n} & a_{-1,n} & \cdots & a_{2-n,n}\\ a_{2,n} & a_{1,n} & a_{0,n} & \cdots & a_{3-n,n} \\ \vdots &\vdots &\vdots & \cdots & \vdots\\ a_{n-1,n}& a_{n-2,n}& a_{n-3,n}& \cdots & a_{0,n} \end{array} \right).\end{aligned}$$ We shall write $a_i$ for $a_{i,n}$. In short we can write $T_n=((a_{i-j}))_{i,j=1}^n$. Note that $T_n$ is not symmetric. We shall also work with its symmetric version, namely, $T_{n,s}=((a_{|i-j|}))_{i,j=1}^n$. Likewise, the general $n\times n$ Hankel matrix $H_n$ with input sequence $\{a_{i,n}; i\in \mathbb{Z}\}$ is $$\begin{aligned} \label{eqn:Hankel} H_n=\left(\begin{array}{cccccc} a_{2,n} & a_{-3,n} & a_{-4,n} & \cdots & a_{-n,n} & a_{-(n+1),n}\\ a_{3,n} & a_{4,n} & a_{-5,n } & \cdots & a_{-(n+1),n} & a_{-(n+2),n}\\ a_{4,n} & a_{5,n} & a_{6,n} &\cdots & a_{-(n+2),n} & a_{-(n+3),n}\\ \vdots & \vdots & \vdots & \cdots &\vdots& \vdots\\ %a_{n-1,n} & a_{n,n} & a_{n+1,n} & \cdots & a_{-(2n-3),n} &a_{-(2n-2),n}\\ a_{n,n} & a_{n+1,n} & a_{n+2,n} & \cdots&a_{2n-2,n} & a_{-(2n-1),n}\\ a_{n+1,n} & a_{n+2,n} & a_{n+3,n} & \cdots & a_{2n-1,n} & a_{2n,n} \end{array} \right).\end{aligned}$$ We shall write $a_i$ for $a_{i,n}$. In short we can write $H_n=((a_{(i+j){\mbox{sgn}}(i-j)}))_{i,j=1}^n$, where $$%\label{eqn:sgn_map} {\mbox{sgn}}(\ell)= \left\{\begin{array}{rll} 1 & \mbox{ if } & \ell \geq 0,\\ -1& \mbox{ if } & \ell<0. \end{array} \right.$$ Note that $H_n$ is not symmetric. Its symmetric version is $H_{n,s}=((a_{i+j}))_{i,j=1}^n$. First consider $T_{n,s}$ and $H_{n,s}$ with a *real* i.i.d. $\{a_j\}$ with finite variance. The a.s. convergence of the ESD of $n^{-1/2} T_{n,s}$ and $n^{-1/2} H_{n,s}$ have been proved by Hammond and Miller [@hammond_miller_05] and Bryc et al. [@bryc_lsd_06]. 
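All four patterned matrices defined above are convenient to build by indexing a single array of inputs; a small sketch (input values and size are arbitrary):

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)
# Input sequence a_k for k = -(2n-1), ..., 2n; a_k is stored at position k + off.
a = rng.standard_normal(4 * n)
off = 2 * n - 1

i, j = np.indices((n, n)) + 1              # 1-based row/column indices
sgn = np.where(i - j >= 0, 1, -1)          # sgn(l) = 1 if l >= 0, else -1

T  = a[(i - j) + off]                      # Toeplitz            T_n     = ((a_{i-j}))
H  = a[(i + j) * sgn + off]                # Hankel              H_n     = ((a_{(i+j)sgn(i-j)}))
Ts = a[np.abs(i - j) + off]                # symmetric Toeplitz  T_{n,s} = ((a_{|i-j|}))
Hs = a[(i + j) + off]                      # symmetric Hankel    H_{n,s} = ((a_{i+j}))

print(np.allclose(Ts, Ts.T), np.allclose(Hs, Hs.T))   # True True
```

As the check shows, the symmetric versions are indeed symmetric, while $T_n$ and $H_n$ are not.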
For LSD results on $T_{n,s}$ and $H_{n,s}$, when $\{a_j\}$ are independent but not necessarily identically distributed, see [@bose+saha+sen_pattern_RMTA_21] and the references therein. For LSD results when $\{a_j\}$ is a moving average, see [@bose_sen_LSD_EJP]. No LSD results are known for $T_n$ and $H_n$ in general. In [@bose+gango+sen_XX'_10], Bose et al. showed that the LSD of $n^{-1}T_nT_n^*$ and $n^{-1}H_nH_n^*$ exist if $\{a_j\}$ is *real* i.i.d. Bose and Sen [@bose+priyanka_XX'_22] considered the case where $\{a_j\}$ are real and independent, but not necessarily identically distributed. The joint convergence of i.i.d. copies of $n^{-1/2} T_{n,s}$ and $n^{-1/2} H_{n,s}$ was established in [@bose_saha_patter_JC_annals] when $\{a_j\}$ is i.i.d. and all its moments are finite. We now introduce a pair-correlation structure on the input sequence through the following assumptions. **Assumption 1**. Suppose $\{a_{j,n}= x_{j,n}+ \mathrm{i} y_{j,n}; j \in \mathbb{Z}\}$ are *complex* random variables with $\mbox{\bf E}(a_{j,n})=0$, and for every $n\geq 1$, $\{(x_{j,n}, x_{-j,n}, y_{j,n}, y_{-j,n})\}$ are independent across $j$. Further, for all $j\in \mathbb{Z}$, and $n \geq 1$, 1. $\mbox{\bf E}[x_{j,n}^2]=\sigma_x^2$, $\mbox{\bf E}[y_{j,n}^2]=\sigma_y^2$. 2. $\mbox{\bf E}[x_{j,n}y_{j,n}]=\rho_1$, $\mbox{\bf E}[x_{j,n}x_{-j,n}]=\rho_2$, $\mbox{\bf E}[x_{j,n} y_{-j,n}]=\rho_3$,\ $\mbox{\bf E}[x_{-j,n} y_{j,n}]= \rho_4, \ \mbox{\bf E}[y_{j,n} y_{-j,n}]= \rho_5, \ \mbox{\bf E}[x_{-j,n} y_{-j,n}]= \rho_6$. 3. $\displaystyle \sup_{j,n} \{\mbox{\bf E}(|x_{j,n}|^k), \mbox{\bf E}(|y_{j,n}|^k)\} \leq c_k < \infty, \ \ \text{for all}\ \ k\geq 1.$ For brevity, henceforth we write $a_{j}, x_{j}, y_{j}$ respectively for $a_{j,n}, x_{j,n}, y_{j,n}$. We must have $0 \leq \rho_i\leq 1$ for all $1\leq i \leq 6$, since we have an infinite sequence of equally correlated variables.
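One concrete way to generate inputs satisfying this pair-correlation structure is to draw $(x_j, x_{-j}, y_j, y_{-j})$ jointly Gaussian; the sketch below takes $\sigma_x=\sigma_y=1$ and $\rho_1=\cdots=\rho_6=0.2$ (illustrative values for which the covariance matrix is positive semi-definite) and checks two of the sample correlations:

```python
import numpy as np

rng = np.random.default_rng(3)
r1 = r2 = r3 = r4 = r5 = r6 = 0.2      # illustrative values of rho_1, ..., rho_6
# Covariance of (x_j, x_{-j}, y_j, y_{-j}) with sigma_x = sigma_y = 1.
C = np.array([[1.0, r2,  r1,  r3],
              [r2,  1.0, r4,  r6],
              [r1,  r4,  1.0, r5],
              [r3,  r6,  r5,  1.0]])

N = 200_000                             # independent copies (one per index j)
x_j, x_mj, y_j, y_mj = rng.multivariate_normal(np.zeros(4), C, size=N).T
a_j, a_mj = x_j + 1j * y_j, x_mj + 1j * y_mj   # the complex inputs a_j, a_{-j}

print(np.mean(x_j * y_j))               # close to rho_1 = 0.2
print(np.mean(x_j * x_mj))              # close to rho_2 = 0.2
```

The i.i.d. case in the remark that follows corresponds to degenerate choices of this joint distribution.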
Note that Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"} holds if $\{a_j\}$ is i.i.d. with $\mbox{\bf E}(a_j)=0$, $\mbox{\bf E}|a_j|^2=1$, and $\mbox{\bf E}|a_j|^k < \infty$ for all $k\geq 1$. The *Hermitian* Toeplitz and Hankel matrices, $T_{n,h}$ and $H_{n,h}$, are obtained by imposing $a^*_{i-j}=a_{j-i}$ in $T_n$ and $H_n$. The *real symmetric* Toeplitz and Hankel matrices, $T_{n,s}$ and $H_{n,s}$, are obtained by taking $y_j=0$ and $\rho_2=\sigma_x^2$. The *real asymmetric* Toeplitz and Hankel matrices are obtained by taking $y_j=0$. We now make a crucial observation on the relation between $H_{n,s}$ and $T_n$. Let $P_n$ be the $n\times n$ *backward identity* permutation matrix defined as $$P_n=\left[\begin{array}{cccccc} 0 & 0 & 0 & \ldots & 0 & 1 \\ 0 & 0 & 0 & \ldots & 1 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 1 & 0 & \ldots & 0 & 0 \\ 1 & 0 & 0 & \ldots & 0 & 0 \end{array}\right].$$ Then it is easy to check that $P_nT_n$ is a symmetric Hankel matrix. Conversely, for any $H_{n,s}$, $P_n H_{n,s}$ is a Toeplitz matrix. In this article, our symmetric Hankel matrices are always considered to be of the form $P_n T_{n}$. From this relation it is clear that the problem of joint convergence of symmetric Hankel matrices is connected to the problem of joint convergence of $P_n$ and $T_n$. The deterministic Toeplitz and the deterministic Hankel matrices are of the form $D_n=((d_{i-j}))_{i,j=1}^n$, and $P_nD_n$ respectively, where $\{d_k; k \in \mathbb{Z}\}$ is a sequence of complex numbers. For symmetric deterministic Toeplitz ($D_{n,s}$) matrices, assuming that $\{d_n\}$ is square summable, the famous theorem of Szegő [@szego_toeplitz_915] established that the LSD of $D_{n,s}$ exists as $n \to \infty$. The limit is the distribution of $g(U)$ where $g$ is the Fourier function of $\{d_n\}$ defined on the unit interval, and $U$ is uniformly distributed on this interval.
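Szegő's limit can be checked through moments: with $d_k=2^{-|k|}$, say, $\int_0^1 g(t)^2\,dt=\sum_k d_kd_{-k}=5/3$, and $n^{-1}\mbox{Tr}(D_{n,s}^2)=n^{-1}\sum_i\lambda_i^2$ should approach this value (the choice of $d_k$ and of $n$ below is illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

n = 400
D = toeplitz(0.5 ** np.arange(n))      # symmetric Toeplitz with d_k = 2^{-|k|}

moment2 = np.mean(np.linalg.eigvalsh(D) ** 2)   # n^{-1} sum_i lambda_i^2
limit = 5.0 / 3.0                      # sum_k d_k d_{-k} = int_0^1 g(t)^2 dt
print(moment2, limit)                  # close for large n
```
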
For further information on high dimensional deterministic Toeplitz matrices see for example, [@grenander_szego_book_84] and [@bottcher+Silbermann_book_90]. Joint convergence of these matrices and some of their interesting extensions were covered in [@bose+sreela+saha_Toe+JC_13]. In this article we work with deterministic Toeplitz matrices that satisfy the stronger summability condition. This is helpful for the technical arguments in the proof of our results. **Assumption 1**. Let $\{d_k; k \in \mathbb{Z}\}$ be a sequence of complex numbers which is absolutely summable, that is, $\sum_{k= -\infty}^{\infty} |d_k| < \infty$. Our first main result provides the joint convergence of $T_n$, $D_n$ and $P_n$. Its proof is given in Section [2](#sec:jc_tdp){reference-type="ref" reference="sec:jc_tdp"}. **Theorem 1**. *Let $A_{n,1}=n^{-1/2}T_n$ where $T_n$ is a random Toeplitz matrix. For $i=1,2, \ldots, m$, let $\{A_{n,1}^{(i)}\}$ be independent copies of $A_{n,1}$ and $A_{n,2}^{(i)}=D^{(i)}_n$ be $m$ deterministic Toeplitz matrices. Suppose the input entries of $T_n$ satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"} and the input entries of $D^{(i)}_n$ satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}. Then $\{P_n, A_{n,j}^{(i)}; 1\leq j \leq 2, 1\leq i\leq m\}$ converge jointly. In particular, $\{P_n, n^{-1/2}T_n, D_{n} \}$ converge jointly.* The following corollary follows from Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"}: **Corollary 1**. *Suppose $\{T_{n}^{(i)}; 1\leq i\leq m\}$ and $\{H_{n,s}^{(i)}; 1\leq i\leq m\}$ are $m$ independent copies respectively of random Toeplitz and symmetric Hankel matrices, where the input entries satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}. 
Also, let $\{D_{n}^{(i)}; 1\leq i\leq m\}$ and $\{\tilde{H}_{n,s}^{(i)}; 1\leq i\leq m\}$ respectively be deterministic Toeplitz and deterministic symmetric Hankel matrices, whose input entries satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}. Let $X^{(i)}_n$ and $Y^{(i)}_n$ be any of the matrices $n^{-1/2}T_{n}^{(i)}$, $n^{-1/2}H_{n,s}^{(i)}$, $D_{n}^{(i)}$, $\tilde{H}_{n,s}^{(i)}$ and $P_n$. Then $\{X^{(i)}_n, Y^{(i)}_n; 1 \leq i \leq m\}$ converge jointly.* As a special case of our main result, when the input entries are real, under Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, independent copies of $n^{-1/2}T_n$ and $n^{-1/2}H_{n,s}$ converge jointly, and the limit depends only on $\rho_2$. This implies the joint convergence results of [@bose_saha_patter_JC_annals] for the special case of symmetric Toeplitz and Hankel matrices with independent entries. The LSD results of [@hammond_miller_05] and [@bryc_lsd_06], where the input is i.i.d. and only finiteness of the second moment is assumed, also follow via a truncation argument.
Joint convergence results in the following matrix models follow from Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"}: (a) random Toeplitz matrices with complex input entries (see Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"} in Section [2.1](#subsec:jc_T){reference-type="ref" reference="subsec:jc_T"}); (b) random Hermitian and symmetric Toeplitz matrices (see Corollaries [Corollary 2](#cor:A2+3_TJc){reference-type="ref" reference="cor:A2+3_TJc"} and [Corollary 3](#cor:iid_jc_T){reference-type="ref" reference="cor:iid_jc_T"} in Section [2.1](#subsec:jc_T){reference-type="ref" reference="subsec:jc_T"}); (c) matrices $\{D_n, P_n\}$ (see Proposition [Proposition 2](#thm:jc_D+P){reference-type="ref" reference="thm:jc_D+P"} in Section [2.2](#subsec:dp){reference-type="ref" reference="subsec:dp"}); (d) matrices $\{D_n, T_n\}$ (see Proposition [Proposition 3](#thm:jc_T+D){reference-type="ref" reference="thm:jc_T+D"} in Section [2.3](#subsec:td){reference-type="ref" reference="subsec:td"}); (e) matrices $\{T_n, P_n\}$ (see Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"} in Section [2.4](#subsec:Tp){reference-type="ref" reference="subsec:Tp"}), and (f) symmetric Hankel matrices (see Remark [Remark 3](#cor:com_HJc){reference-type="ref" reference="cor:com_HJc"} in Section [2.4](#subsec:Tp){reference-type="ref" reference="subsec:Tp"}). Further, if we take any *symmetric* matrix polynomial of matrices that converge jointly, then its LSD will also exist. See Corollaries [Corollary 4](#cor:poly_Lsd_T){reference-type="ref" reference="cor:poly_Lsd_T"} and [Corollary 5](#cor:A2+3_HJc){reference-type="ref" reference="cor:A2+3_HJc"} respectively in Sections [2.1](#subsec:jc_T){reference-type="ref" reference="subsec:jc_T"} and [2.4](#subsec:Tp){reference-type="ref" reference="subsec:Tp"}. 
While the characteristic feature of the Toeplitz and the Hankel matrices is the repetition of the input entries over (parts of) each diagonal or anti-diagonal, let us consider a further generalization as follows: Define $T_{n,g}=((t_{i,j}))_{n\times n}$, where $$\label{def:T_gen} t_{i,j}=\left\{\begin{array}{ccl} a_{i-j} & \mbox{ if } & i+j\le n, \\ \\ b_{i-j} & \mbox{ if } & i+j\ge n+1. \end{array} \right.$$ We call it a *generalized Toeplitz matrix*. For example, $T_{5,g}$ is given by $$T_{5,g}=\left(\begin{array}{ccccc} a_0 & a_{-1} &a_{-2} & a_{-3} & b_{-4} \\ a_1 & a_{0} & a_{-1} & b_{-2} & b_{-3} \\ a_2 & a_{1} & b_0 & b_{-1} & b_{-2}\\ a_3 & b_2 & b_{1} & b_0 & b_{-1} \\ b_4 & b_3 & b_2 & b_{1} & b_0 \end{array} \right).$$ In other words, the $(i,j)$-th element of $T_{n,g}$ is given by $$t_{i,j}=a_{i-j}\chi_{[2,n]}(i+j)+b_{i-j}\chi_{[n+1,2n]}(i+j), \ \ \text{for} \ \ 1\le i,j\le n,$$ where, for integers $u\le v$, $\chi_{[u,v]}(z) =1$ if $z \in [u,v]$ and zero otherwise. Note that $P_nT_{n,g}$ is always a Hankel matrix. For example, $H_{5}= P_5T_{5,g}$ is given by $$H_{5}=\left(\begin{array}{ccccc} b_4 & b_3 & b_2 & b_{1} & b_0\\ a_3 & b_2 & b_{1} & b_0 & b_{-1} \\ a_2 & a_{1} & b_0 & b_{-1} & b_{-2}\\ a_1 & a_{0} & a_{-1} & b_{-2} & b_{-3} \\ a_0 & a_{-1} &a_{-2} & a_{-3} & b_{-4} \end{array} \right),$$ which is a Hankel matrix as given in ([\[eqn:Hankel\]](#eqn:Hankel){reference-type="ref" reference="eqn:Hankel"}). Since $H_n=P_nT_{n,g}$, the joint convergence of copies of $\{P_n, n^{-1/2}T_{n,g}\}$ implies the joint convergence of copies of $n^{-1/2}H_n$. Observe that for generalized Toeplitz matrices, we have used two labels $a$ and $b$. Analogously, the generalized deterministic Toeplitz matrix, denoted by $D_{n,g}$, is also defined via ([\[def:T_gen\]](#def:T_gen){reference-type="ref" reference="def:T_gen"}) using two sequences of complex numbers $\{d'_i\}_{i\in \mathbb{Z}}$ and $\{d''_i\}_{i\in \mathbb{Z}}$.
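The two-label structure of $T_{n,g}$ and the identity $H_n=P_nT_{n,g}$ can be checked directly; a small numpy sketch, with our own array-indexing conventions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# a[k + (n - 1)] stores a_k, and similarly for b; k = -(n-1), ..., n-1
a = rng.standard_normal(2 * n - 1)
b = rng.standard_normal(2 * n - 1)

def t_entry(i, j):
    """(i, j)-th entry of T_{n,g}, with 1-based indices as in the definition."""
    k = (i - j) + (n - 1)
    return a[k] if i + j <= n else b[k]

Tg = np.array([[t_entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)])
P = np.fliplr(np.eye(n))
H = P @ Tg

# H[i, j] (0-based) carries the subscript n-1-i-j, with label a in the strictly
# lower-triangular part and label b on or above the main diagonal, matching H_5 above
for i in range(n):
    for j in range(n):
        src = a if j < i else b
        assert np.isclose(H[i, j], src[(n - 1 - i - j) + (n - 1)])
```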
The corresponding deterministic Hankel matrix is defined as $P_nD_{n,g}$. We now state the following assumption on $T_{n,g}$. **Assumption 1**. Let $\{(a_{j,n}= x_{j,n} + \mathrm{i} y_{j,n}, b_{j,n}= x'_{j,n}+ \mathrm{i} y'_{j,n}); j \in \mathbb{Z}\}$ be independent *complex* random variables with mean zero and variance one, and suppose $\{(x_{j,n}, y_{j,n}, x'_{j,n}, y'_{j,n}); j\in \mathbb{Z}\}$ have the following correlation structure. 1. $\mbox{\bf E}[x_{j,n}y_{j,n}]=\rho_1$, $\mbox{\bf E}[x_{j,n}x'_{j,n}]=\rho_2$, $\mbox{\bf E}[x_{j,n} y'_{j,n}]=\rho_3$,\ $\mbox{\bf E}[x'_{j,n} y_{j,n}]= \rho_4, \ \mbox{\bf E}[x'_{j,n} y'_{j,n}]= \rho_5, \ \mbox{\bf E}[y_{j,n} y'_{j,n}]= \rho_6$. 2. $\displaystyle \sup_{j,n} \{\mbox{\bf E}(|x_{j,n}|^k), \mbox{\bf E}(|y_{j,n}|^k), \mbox{\bf E}(|x'_{j,n}|^k), \mbox{\bf E}(|y'_{j,n}|^k)\} \leq c_k < \infty, \ \ \text{for all}\ \ k\geq 1.$ Our second main result yields the joint convergence of $T_{n,g}$, $D_{n,g}$ and $P_n$, whose proof is given in Section [3](#sec:Jc_tdp_g){reference-type="ref" reference="sec:Jc_tdp_g"}. Some other references for the joint convergence of random matrices with a *pair-correlation* structure are [@bose+adhikari_Brownmeasure_19] and [@monika+bose+dey_JC_2022]. **Theorem 2**. *Let $B_{n,1}=n^{-1/2}T_{n,g}$ where $T_{n,g}$ is a random generalized Toeplitz matrix. For $i=1,2, \ldots, m$, let $\{B_{n,1}^{(i)}\}$ be independent copies of $B_{n,1}$ and $B_{n,2}^{(i)}= D^{(i)}_{n,g}$ be $m$ deterministic generalized Toeplitz matrices. Suppose the input sequences of $T_{n,g}$ satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"} and the input sequences of $D_{n,g}$ satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}. Then $\{P_n, B_{n,j}^{(i)}; 1\leq j \leq 2, 1\leq i\leq m \}$ converge jointly.
In particular, $\{P_n, n^{-1/2}T_{n,g}, D_{n,g}\}$ converge jointly.* Note that Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"} includes the case where $a_{j,n}$ and $a_{-j,n}$, as well as $b_{j,n}$ and $b_{-j,n}$, are not correlated, and moreover all $a_{j,n},b_{j,n}$ have variance one. In the cases where $a_{j,n}$ and $a_{-j,n}$ (and/or $b_{j,n}$ and $b_{-j,n}$) are correlated, and/or the variances of $a_{j,n}$ and $b_{j,n}$ are not equal, we can still derive convergence results using ideas similar to those in the proof of Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"}. However, the combinatorics will be very intricate. The joint convergence of Hankel matrices follows from Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"} (see Remark [Remark 4](#cor:jc_H_g){reference-type="ref" reference="cor:jc_H_g"} in Section [3.4](#subsec:tp_g){reference-type="ref" reference="subsec:tp_g"}). Observations as in Corollary [Corollary 1](#cor:jc_dtp_hankel){reference-type="ref" reference="cor:jc_dtp_hankel"} can also be made for generalized Toeplitz and related matrices. **Remark 1**. Suppose $T_{p\times n}$ and $H_{p \times n}$ are Toeplitz and Hankel matrices with *complex* input entries, defined similarly to $T_n$ and $H_n$ but of order $p\times n$, where $p\to \infty$ and $p/n \to y\in (0, \infty)$. Then it can be shown that, under Assumptions [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"} and [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}, the ESDs of $n^{-1}T_{p \times n}T_{p \times n}^{*}$ and $n^{-1}H_{p \times n}H_{p \times n}^{*}$ converge weakly a.s. There is also convergence in $*$-distribution for any finitely many independent copies of $T_{p \times n}T_{p \times n}^{*}$ and $H_{p \times n}H_{p \times n}^{*}$.
The limits depend only on the values of $\rho_1, \rho_2,\rho_3,\rho_4,\rho_5, \rho_6$ and $y$. This will be clear from the proofs of Propositions [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"} and [Proposition 8](#thm:jc_T+P_g){reference-type="ref" reference="thm:jc_T+P_g"}, given in Sections [2.1](#subsec:jc_T){reference-type="ref" reference="subsec:jc_T"} and [3.1](#sec:T_n,g){reference-type="ref" reference="sec:T_n,g"}, respectively. # Proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"}: joint convergence of $T_n$, $D_n$ and $P_n$ {#sec:jc_tdp} We introduce some notation that will be used throughout. We use the convention $0^0=1$. Consider the set $[n]=\{ 1,2 ,\ldots , n\}$. The set of all partitions $\pi=\{V_{1},\ldots,V_{r}\}$ of $[n]$ is denoted by $\mathcal{P}(n)$. The set of all pair partitions, i.e. when $|V_{j}|=2$ for $1\leq j \leq r$, is denoted by $\mathcal{P}_{2 }(n)$. Let $\pi=\{V_{1},\ldots,V_{k}\}$ be a pair partition of $[2k]$. Then we define a projection map as $$\label{eqn:pi'} \mbox{$\pi' (i):=j$ if $i$ belongs to the block $V_{j}$}.$$ Furthermore, for two elements $p,q$ of $[n]$ we write $p\thicksim_{\pi} q$ if $\pi' (p)=\pi' (q)$. We also define: $$\begin{aligned} \delta_{i,j} &=1\;\;\mbox{if $i=j$ and zero otherwise},\end{aligned}$$ and for $\epsilon_i \in \{1,*\}$, $$\begin{aligned} \label{eqn:epsilon'} \epsilon'_i :=\left\{\begin{array}{rll} 1 & \mbox{ if } & \epsilon_i =1,\\ -1& \mbox{ if } & \epsilon_i = *. \end{array} \right.\end{aligned}$$ The combinatorics involved in a direct proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"} would be somewhat intricate. Hence we break the proof into four steps.
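The pair-partition bookkeeping above is easy to enumerate programmatically; the following sketch (our own helper, not part of the paper, but useful for numerically checking the limit-moment formulae) generates $\mathcal{P}_2(2k)$ and the projection map $\pi'$:

```python
def pair_partitions(elems):
    """Yield all pair partitions of a list of distinct elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for idx in range(len(rest)):
        remaining = rest[:idx] + rest[idx + 1:]
        for sub in pair_partitions(remaining):
            yield [(first, rest[idx])] + sub

k = 3
parts = list(pair_partitions(list(range(1, 2 * k + 1))))
# |P_2(2k)| = (2k - 1)!!; for k = 3 this is 5 * 3 * 1 = 15
assert len(parts) == 15

# Projection map pi': i -> block number of i
pi = parts[0]
pi_prime = {i: t + 1 for t, block in enumerate(pi) for i in block}
r, s = pi[0]
assert pi_prime[r] == pi_prime[s] == 1   # r ~_pi s
```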
In Sections [2.1](#subsec:jc_T){reference-type="ref" reference="subsec:jc_T"}, [2.2](#subsec:dp){reference-type="ref" reference="subsec:dp"}, [2.3](#subsec:td){reference-type="ref" reference="subsec:td"} and [2.4](#subsec:Tp){reference-type="ref" reference="subsec:Tp"}, we establish the joint convergence of independent copies of $T_n$, $\{D_n,P_n\}$, $\{D_n,T_n\}$ and $\{T_n,P_n\}$, respectively. Finally, the above steps help us to conclude the proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"} in Section [2.5](#subsec:jc_tdp){reference-type="ref" reference="subsec:jc_tdp"}. At the heart of all the proofs are appropriate formulae that we develop for the traces of monomials of our matrices. ## Joint convergence of copies of $T_n$ {#subsec:jc_T} **Proposition 1**. *If $\{T_{n}^{(i)}; 1\leq i\leq m\}$ are $m$ independent copies of random Toeplitz matrices whose input entries satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, then $\{n^{-1/2}T_{n}^{(i)}; 1 \leq i \leq m\}$ converge jointly. Moreover the limit $*$-moments are as follows. For $\epsilon_1,\ldots, \epsilon_{p}\in \{1,*\}$ and $\tau_1,\ldots, \tau_{p}\in \{1,\ldots, m\}$, $$\begin{aligned} \label{eqn:lim_phi(T*tau)} &\lim_{n\to \infty} \varphi_n(\dfrac{T_n^{(\tau_1)\epsilon_1}}{n^{1/2}} \cdots \dfrac{T_n^{(\tau_{p})\epsilon_{p}}}{n^{1/2}}) \nonumber \\ &=\left\{\begin{array}{lll} \displaystyle \hskip-7pt \sum_{\pi \in {\mathcal P}_2(2k)} \prod_{(r,s)\in \pi} \hskip-5pt \delta_{\tau_r,\tau_s} \theta(r,s) \hskip-2pt \int\limits_{0}^1 \hskip-2pt \int\limits_{[-1,1]^k} \prod_{\ell=1}^{2k}\chi_{[0,1]}\big(z_0+\sum_{t=\ell}^{2k}\epsilon_{\pi}(t)z_{\pi'(t)}\big)\prod_{i=0}^kdz_i & \hskip-5pt \mbox{if } p=2k, \\ 0 & \hskip-9pt \mbox{if } p=2k+1. \end{array}\right. 
\end{aligned}$$ where $\pi'$, $\epsilon_{\pi}$ and $\epsilon'$ are as defined in [\[eqn:pi\'\]](#eqn:pi'){reference-type="eqref" reference="eqn:pi'"}, [\[eqn:epsilon_pi\]](#eqn:epsilon_pi){reference-type="eqref" reference="eqn:epsilon_pi"} and [\[eqn:epsilon\'\]](#eqn:epsilon'){reference-type="eqref" reference="eqn:epsilon'"}, respectively, and $$\begin{aligned} %\label{eqn:theta_1} \theta(r,s) = \big[\sigma_x^2 + \sigma_y^2\big]^{1-\delta_{\epsilon'_{r}, \epsilon'_{s} }} \big[(\rho_2-\rho_5) + \mathrm{i} \epsilon'_r(\rho_3+\rho_4)\big]^{\delta_{\epsilon'_r,\epsilon'_s}}. %\big[\E(x_1)^2 +\E(y_1)^2 \big]^{1-\d_{\e'_r\e'_s}} \end{aligned}$$ In particular, for $m=1$, $$\begin{aligned} \label{eqn:lim_mome_A1_TJc} &\lim_{n\to \infty}\varphi_n(\dfrac{T_n^{\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{\epsilon_{p}}}{n^{1/2}}) \nonumber \\ &=\left\{\begin{array}{lll} \displaystyle \sum_{\pi \in {\mathcal P}_2(2k)} \prod_{(r,s)\in \pi} \hspace{-0.1in}\theta(r,s) \hspace{-0.05in}\int_{0}^{1}\hspace{-0.05in}\int_{[-1,1]^{k}}\prod_{\ell=1}^{2k}\chi_{[0,1]}\big(z_0+\sum_{t=\ell}^{2k}\epsilon_{\pi}(t) z_{\pi'(t)}\big)\prod_{i=0}^k dz_i & \mbox{if } p=2k, \\ 0 & \hskip-9pt \mbox{if } p=2k+1, \end{array}\right. \end{aligned}$$ where $\theta(r,s)$ is as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}).* *Moreover if $\{a_j\}$ is real-valued, then $\theta(r,s)= (\sigma_x^2)^{1-\delta_{\epsilon'_{r}, \epsilon'_{s} }} \rho_{2}^{\delta_{\epsilon'_r,\epsilon'_s}}$.* The following corollaries follow from the above proposition. **Corollary 2**. *Suppose $\{T_{n,h}^{(i)}; 1\leq i\leq m\}$ are $m$ independent copies of random Hermitian Toeplitz matrices whose input entries satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}. 
Then $\{n^{-1/2}T^{(i)}_{n,h}; 1 \leq i \leq m\}$ converges in $*$-distribution, with the limit $*$-moments as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}) with $\theta(r,s)=(\sigma_x^2 + \sigma_y^2)$.* Specializing further, if the input entries are real-valued, then for $m$ independent copies of random symmetric Toeplitz matrices, $\{n^{-1/2}T^{(i)}_{n,s}; 1 \leq i \leq m\}$, the limit $*$-moments are as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}) with $\theta(r,s)=\sigma_x^2$. This is Proposition 1 of [@bose_saha_patter_JC_annals]. **Corollary 3**. *Let $T_{n}^{(1)},\ldots, T_{n}^{(m)}$ be independent Toeplitz matrices whose input entries are i.i.d complex random variables with mean zero, variance one and all moments finite. Then $\{n^{-1/2}T_{n}^{(1)},\ldots, n^{-1/2}T_{n}^{(m)}\}$ converge in $*$-distribution, with the limit $*$-moments as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}) with $\theta(r,s)=\big[(\rho_2-\rho_5) + \mathrm{i} \epsilon'_r(\rho_3+\rho_4)\big]^{\delta_{\epsilon'_r,\epsilon'_s}}$.* **Corollary 4**. *Suppose $T_{n}^{(1)},\ldots, T_{n}^{(m)}$ are independent Toeplitz matrices whose input entries satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, and $Q_n=Q(n^{-1/2} T_{n}^{(1)},\ldots, n^{-1/2}T_{n}^{(m)})$ is a self-adjoint polynomial in these matrices and their adjoints. Then, the Expected ESD of $Q_n$ converges weakly, the ESD converges to the same limit a.s., and the moments of the LSD are bounded by the moments of a Gaussian distribution.* *In particular, if $\{a_{j}\}$ satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, then the ESD of $n^{-1/2}T_{n,h}$ and $n^{-1/2}T_{n,s}$ converge a.s. 
to symmetric probability distributions whose moments are as in ([\[eqn:lim_mome_A1_TJc\]](#eqn:lim_mome_A1_TJc){reference-type="ref" reference="eqn:lim_mome_A1_TJc"}) with $\epsilon_1=\cdots = \epsilon_{2k}=1$, and $\theta(r,s)=(\sigma_x^2 + \sigma_y^2)$ and $\theta(r,s)=\sigma_x^2$, respectively.* We use the following notation: $$\begin{aligned} \label{eqn:i_k in -n to n} I_k&=\{(i_1,\ldots, i_k)\;:\; i_1,\ldots , i_k\in \{-(n-1),\ldots, -1, 0, 1,\ldots, n-1\}\}.\end{aligned}$$ Liu and Wang (2011) [@liu_wang2011] studied the convergence of the ESD of real band Toeplitz matrices using backward and forward shift matrices. We follow this approach since it helps us to compute the traces of matrices in a systematic manner. **Lemma 1**. *(a) Let $M_n=((a_{i-j}))_{n\times n}$ be an $n\times n$ Toeplitz matrix (random or non-random) with *complex* input entries. Then for $\epsilon_1,\ldots, \epsilon_k\in \{1,*\}$, $$\begin{aligned} {\mbox{Tr}}(M_n^{\epsilon_1}\cdots M_n^{\epsilon_k})=\sum_{j=1}^n\sum_{I_k} a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_k}_{i_k}\prod_{\ell=1}^k \chi_{[1,n]}(j+\sum_{t=\ell}^k\epsilon_t'i_t)\delta_{0,\sum_{t=1}^k\epsilon_t'i_t},\end{aligned}$$ where $I_k$ is as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}), $\epsilon_i'=1$ if $\epsilon_i=1$ and $\epsilon_i'=-1$ if $\epsilon_i=*$. (b) If $M_n^{(\tau)}=((a_{i-j}^{(\tau)}))_{n\times n}$ for $\tau=1,\ldots, m$, then for $\tau_1,\ldots, \tau_k\in \{1,\ldots, m\}$, $$\begin{aligned} {\mbox{Tr}}(M_n^{(\tau_1)\epsilon_1}\cdots M_n^{(\tau_{k})\epsilon_{k}})=\sum_{j=1}^n\sum_{I_{k}} a_{i_1}^{(\tau_1) \epsilon_1}\cdots a_{i_k}^{(\tau_k) \epsilon_k}\prod_{\ell=1}^{k} \chi_{[1,n]}(j+\sum_{t=\ell}^{k}\epsilon_t'i_t)\delta_{0,\sum_{t=1}^k \epsilon_t'i_t}.
\end{aligned}$$* *Proof.* For $i=1,\ldots, n$, let $e_i=(0,\ldots, 1,\ldots, 0)^t$, where $1$ is at the $i$-th place. Then $$\begin{aligned} \label{eqn:1} {\mbox{Tr}}(M_n^{\epsilon_1}\cdots M_n^{\epsilon_k}) =\sum_{j=1}^ne_j^t(M_n^{\epsilon_1}\cdots M_n^{\epsilon_k})e_j.\end{aligned}$$ Let $$B_n=((\delta_{i+1,j }))_{n\times n}\ \ \text{and}\ \ F_n=((\delta_{i,j+1}))_{n\times n}$$ be respectively the backward and the forward shift matrices. We abbreviate them as $B$ and $F$. Then $$\begin{aligned} M_n=\sum_{i=0}^{n-1}a_{-i}B^i+\sum_{i=1}^{n-1}a_iF^i \ \mbox{ and } \ M_n^*=\sum_{i=0}^{n-1}a^*_{-i}F^i+\sum_{i=1}^{n-1}a^*_iB^i.\end{aligned}$$ Therefore, for $j=1,\ldots, n,$ we have $$\begin{aligned} \label{eqn:2} M_ne_j =\sum_{i=0}^{n-1}a_{-i}B^ie_j+\sum_{i=1}^{n-1}a_iF^ie_j\notag &=\sum_{i=0}^{n-1}a_{-i}\chi_{[1,n]}(j-i)e_{j-i}+\sum_{i=1}^{n-1}a_i\chi_{[1,n]}(j+i)e_{j+i}\notag \\&=\sum_{i=-(n-1)}^{n-1}a_i\chi_{[1,n]}(j+i)e_{j+i}.\end{aligned}$$ Similarly, for $j=1,\ldots, n,$ we have $$\begin{aligned} \label{eqn:3} M_n^*e_j=\sum_{i=-(n-1)}^{n-1}a^*_i\chi_{[1,n]}(j-i)e_{j-i}.\end{aligned}$$ Thus, combining [\[eqn:2\]](#eqn:2){reference-type="eqref" reference="eqn:2"} and [\[eqn:3\]](#eqn:3){reference-type="eqref" reference="eqn:3"}, we have $$\begin{aligned} M_n^{\epsilon}e_j=\sum_{i=-(n-1)}^{n-1}a^\epsilon_i\chi_{[1,n]}(j+\epsilon' i)e_{j+\epsilon'i},\end{aligned}$$ where for $\epsilon\in \{1, *\}$, $\epsilon'=1$ if $\epsilon=1$ and $\epsilon'=-1$ if $\epsilon=*$.
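The shift-matrix decomposition just derived can be sanity-checked numerically; a minimal numpy sketch (our own indexing conventions, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# a[k + (n - 1)] stores a_k for k = -(n-1), ..., n-1 (complex entries)
a = rng.standard_normal(2 * n - 1) + 1j * rng.standard_normal(2 * n - 1)
get = lambda k: a[k + (n - 1)]

M = np.array([[get(i - j) for j in range(n)] for i in range(n)])

B = np.eye(n, k=1)    # backward shift: ((delta_{i+1,j}))
F = np.eye(n, k=-1)   # forward shift:  ((delta_{i,j+1}))

# M_n = sum_{i=0}^{n-1} a_{-i} B^i + sum_{i=1}^{n-1} a_i F^i
decomp = sum(get(-i) * np.linalg.matrix_power(B, i) for i in range(n)) \
       + sum(get(i) * np.linalg.matrix_power(F, i) for i in range(1, n))
assert np.allclose(M, decomp)

# M_n^* = sum_{i=0}^{n-1} a^*_{-i} F^i + sum_{i=1}^{n-1} a^*_i B^i
decomp_star = sum(np.conj(get(-i)) * np.linalg.matrix_power(F, i) for i in range(n)) \
            + sum(np.conj(get(i)) * np.linalg.matrix_power(B, i) for i in range(1, n))
assert np.allclose(M.conj().T, decomp_star)
```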
Using the above ideas, for $j=1,\ldots, n$, we have $$\begin{aligned} M_n^{\epsilon_{k-1}}M_n^{\epsilon_k}e_j&= M_n^{\epsilon_{k-1}}\sum_{i_k=-(n-1)}^{n-1} a^{\epsilon_k}_{i_k}\chi_{[1,n]}(j+\epsilon_k' i_k)e_{j+\epsilon_k'i_k} \\&=\sum_{i_k=-(n-1)}^{n-1} a^{\epsilon_k}_{i_k}\chi_{[1,n]}(j+\epsilon_k' i_k) M_n^{\epsilon_{k-1}}e_{j+\epsilon_k'i_k} \\&=\hspace{-5pt}\sum_{i_k, i_{k-1}}\hspace{-5pt} a^{\epsilon_{k-1}}_{i_{k-1}} a^{\epsilon_{k}}_{i_k} \chi_{[1,n]}(j+\epsilon_k' i_k)\chi_{[1,n]}(j+\epsilon_{k-1}'i_{k-1}+\epsilon_k' i_k)e_{j+\epsilon_{k-1}'i_{k-1}+\epsilon_k' i_k}.\end{aligned}$$ Continuing this process, and using [\[eqn:1\]](#eqn:1){reference-type="eqref" reference="eqn:1"}, we get Part (a). The proof of Part (b) is similar and we skip the details. ◻ Now using the above trace formula, we prove Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}. *Proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}.* For transparency, we first consider the case of a single matrix; that is, we prove ([\[eqn:lim_mome_A1_TJc\]](#eqn:lim_mome_A1_TJc){reference-type="ref" reference="eqn:lim_mome_A1_TJc"}). Let $p$ be a positive integer.
By Lemma [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"} we have $$\begin{aligned} \label{eqn:thm1} \varphi_n(\dfrac{T_n^{\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{\epsilon_p}}{n^{1/2}})&=\frac{1}{n^{\frac{p}{2}+1}}\mbox{\bf E}{\mbox{Tr}}[T_n^{\epsilon_1}\cdots T_n^{\epsilon_p}]\notag\\ &=\frac{1}{n^{\frac{p}{2}+1}}\sum_{j=1}^n\sum_{I_p}\mbox{\bf E}\big[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}\prod_{\ell=1}^p \chi_{[1,n]}(j+\sum_{t=\ell}^{p}\epsilon_t'i_t)\delta_{0,\sum_{t=1}^p\epsilon_t'i_t}\big]\notag \\&=\frac{1}{n^{\frac{p}{2}}}\sum_{I_p'}\mbox{\bf E}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}] \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^p \chi_{[1,n]}(j+\sum_{t=\ell}^p\epsilon_t'i_t),\end{aligned}$$ where $I_p$ is as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and $$\label{iprime} I_p'=\{(i_1,\ldots, i_p)\in I_p\; :\; \sum_{t=1}^p\epsilon_t'i_t=0\}.$$ Observe that, for all $n$, $$\begin{aligned} 0 \leq \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^p \chi_{[1,n]}(j+\sum_{t=\ell}^{p}\epsilon_t'i_t)\le 1.\end{aligned}$$ Let $\pi$ be any partition of $\{i_1,\ldots, i_p\}$. We write it as $\pi=\pi_1\cup\cdots \cup \pi_k$ such that the random variables in $\{a_i : i\in \pi_j\}$ are correlated for $1\le j\le k$, but $\{a_i : i\in \pi_u\}$ and $\{a_i : i\in \pi_v\}$ are uncorrelated if $1\le u\neq v\le k$. Let $I_p'(\pi)$ be the set of indices from $I_p'$ that obey this rule. 
Then, by Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, we have $$\begin{aligned} \sum_{I_p'}\mbox{\bf E}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}] =\sum_{\pi\in \mathcal P(p)}\sum_{I_p'(\pi)} \prod_{u=1}^{|\pi|} \mbox{\bf E}_{\pi_u}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}] \le \sum_{\pi\in \mathcal P(p)} n^{|\pi|} \prod_{u=1}^{|\pi|} c_{|\pi_u|},\end{aligned}$$ where $c_{|\pi_u|}$ are as in Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, $\mbox{\bf E}_{\pi_u}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}]=\mbox{\bf E}[a^{\epsilon_{d_1}}_{i_{d_1}}\cdots a^{\epsilon_{d_u}}_{i_{d_u}}]$ if $\pi_u=\{i_{d_1}, \ldots, i_{d_u}\}$, $|\pi|$ denotes the number of blocks in $\pi$ and $|\pi_u|$ denotes the cardinality of $\pi_u$. Note that the constant $\prod_{u=1}^{|\pi|} c_{|\pi_u|}$ depends only on $\pi$. Therefore $$\lim_{n \to \infty}\frac{1}{n^{\frac{p}{2}}}\sum_{I_p'(\pi)}\mbox{\bf E}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}]=0, \mbox{ if $|\pi|<\frac{p}{2}$}.$$ Thus, to obtain the limit of the expression in ([\[eqn:thm1\]](#eqn:thm1){reference-type="ref" reference="eqn:thm1"}), it is enough to consider only those $\pi$ for which $|\pi|\ge p/2$. On the other hand, as $\mbox{\bf E}[a_i]=0$, $\mbox{\bf E}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}]$ is non-zero only when $|\pi_i|\ge 2$ for $1\le i\le k$. In that case $|\pi|\leq p/2$. Hence, we are left to consider only those $\pi$ with $|\pi|=p/2$. In other words, $\pi$ is a *pair-partition*. As a consequence, if $p$ is odd, then there are no pair-partitions and hence $$\label{eqn:phi_To(1)odd} \lim_{n\to \infty}n^{-\frac{p}{2}}\sum_{I_p'}\mbox{\bf E}[a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_p}_{i_p}]=0.$$ Now suppose $p$ is even, say $p=2k$.
Then $$\begin{aligned} \varphi_n(\dfrac{T_n^{\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{\epsilon_{2k}}}{n^{1/2}})&=\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{\pi\in \mathcal P_{2}(2k)}\sum_{I_{2k}'(\pi)}\prod_{(r,s)\in \pi}\hspace{-6pt} \mbox{\bf E}[a^{\epsilon_{r}}_{i_r} a^{\epsilon_{s}}_{i_{s}}]\prod_{\ell=1}^{2k}\hspace{-2pt} \chi_{[1,n]}(j+\sum_{t=\ell}^{2k}\epsilon_t'i_t)+o(1).\end{aligned}$$ Clearly, if the number of free indices among the indices $\{i_1,\ldots, i_{2k}\}$ is strictly less than $k$, then the limit contribution is zero. On the other hand, by the independence property of the entries (Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}), we have that $\mbox{\bf E}[a^{\epsilon_{r}}_{i_r} a^{\epsilon_{s}}_{i_{s}}]$ is non-zero only if $i_{r}=i_{s}$ or $i_{r}=-i_{s}$. This implies that the number of free indices is at most $k$. Therefore we focus on the summands where the number of free indices is exactly $k$. In other words, $\{i_{r_1}, i_{s_1}\},\ldots, \{i_{r_k}, i_{s_k}\}$ are disjoint, where $\pi=(r_1,s_1)\cdots (r_k,s_k)\in \mathcal P_2(2k)$. Again, as $(i_1,\ldots, i_{2k})\in I_{2k}'(\pi)$, we have $$\begin{aligned} 0=\sum_{t=1}^{2k}\epsilon_t'i_t=\sum_{t=1}^k(\epsilon_{r_t}'i_{r_t}+\epsilon_{s_t}'i_{s_t}).\end{aligned}$$ This implies that the number of free indices will be $k$ only when $$\begin{aligned} \label{eqn:reduction} (\epsilon_{r_t}'i_{r_t}+\epsilon_{s_t}'i_{s_t})=0, \mbox{ for $t=1,\ldots, k$}.\end{aligned}$$ Otherwise, $\{i_{r_1}, i_{s_1}\},\ldots, \{i_{r_k}, i_{s_k}\}$ will satisfy a one-dimensional equation, which reduces the degrees of freedom by one.
Since $\epsilon_1',\ldots, \epsilon_{2k}'\in \{1,-1\}$, the above holds if and only if $$\begin{aligned} i_{r_t}= \left\{\begin{array}{lll} i_{s_t} & \mbox{ if } & \epsilon_{r_t}'\epsilon_{s_t}'=-1,\\ -i_{s_t} & \mbox{ if} & \epsilon_{r_t}'\epsilon_{s_t}'=1, \end{array} \right.\end{aligned}$$ for $t=1,\ldots, k$. In other words, we have a non-zero contribution in the limit iff $$\begin{aligned} \label{eqn:E[ars_epsilon]} \mbox{\bf E}[a^{\epsilon_{r_{t}}}_{i_{r_{t}}} a^{\epsilon_{s_{t}}}_{i_{s_{t}}}] &= \left\{\begin{array}{lll} \mbox{\bf E}|a_{i_{r_{t}}}|^2 & \mbox{ if} & i_{r_{t}}=i_{s_{t}},\\ (\rho_2-\rho_5) + \mathrm{i} \epsilon'_{r_t}(\rho_3+\rho_4) & \mbox{ if} & i_{r_{t}}=-i_{s_{t}}, \end{array}\right. \nonumber \\ &= \big[\sigma_x^2 + \sigma_y^2\big]^{1-\delta_{\epsilon'_{r_{t}}, \epsilon'_{s_{t}} }} \big[(\rho_2-\rho_5) + \mathrm{i} \epsilon'_{r_t}(\rho_3+\rho_4)\big]^{\delta_{\epsilon'_{r_{t}}, \epsilon'_{s_{t}} }} \nonumber \\ & = \theta(r_t,s_t), \mbox{ say}.\end{aligned}$$ Define $\pi'(r_t)=\pi'(s_t)=t$, and $\xi_\pi(r_t)=(-1)^{\delta_{\epsilon_{r_t}',\epsilon_{s_t}'}}$ and $\xi_{\pi}(s_t)=1$, for $t=1,\ldots, k$. Thus $\pi'$ gives the projection map where $\pi'(p)=\pi'(q)$ if $p,q$ are in the same block and $\pi'(p)\neq \pi'(q)$ if $p,q$ are not in a common block, and $\pi'$ gives $k$ variables $z_1,\ldots,z_k$ from the $k$ blocks. In other words, each block introduces one new variable.
Thus we can write $$\begin{aligned} \prod_{\ell=1}^{2k} \chi_{[1,n]}(j+\sum_{t=\ell}^{2k}\epsilon_t'i_t)=\prod_{\ell=1}^{2k} \chi_{[1,n]}(j+\sum_{t=\ell}^{2k}\epsilon_t'\xi_{\pi}(t)i_{\pi'(t)}).\end{aligned}$$ Now from [\[eqn:E\[ars_epsilon\]\]](#eqn:E[ars_epsilon]){reference-type="eqref" reference="eqn:E[ars_epsilon]"} and the above equation, we have $$\begin{aligned} \lim_{n\to \infty}\varphi_n(\dfrac{T_n^{\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{\epsilon_{2k}}}{n^{1/2}}) &=\sum_{\pi \in {\mathcal P}_2(2k)} \prod_{t=1}^k \theta(r_t,s_t) \lim_{n\to \infty}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I'_k}\prod_{\ell=1}^{2k} \chi_{[1,n]}(j+\sum_{t=\ell}^{2k}\epsilon_t'\xi_{\pi}(t)i_{\pi'(t)}) \\&=\sum_{\pi \in {\mathcal P}_2(2k)} \prod_{t=1}^k \theta(r_t,s_t) \lim_{n\to \infty}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I'_k}\prod_{\ell=1}^{2k} \chi_{[\frac{1}{n},1]}\big(\frac{j}{n}+\sum_{t=\ell}^{2k}\epsilon_t'\xi_{\pi}(t)\frac{i_{\pi'(t)}}{n}\big) \\&=\sum_{\pi \in {\mathcal P}_2(2k)} \prod_{t=1}^k \theta(r_t,s_t) \int_{0}^1\int_{[-1,1]^k}\prod_{\ell=1}^{2k}\chi_{[0,1]}\big(z_0+\sum_{t=\ell}^{2k}\epsilon_t'\xi_{\pi}(t)z_{\pi'(t)}\big)\prod_{i=0}^kdz_i,\end{aligned}$$ where the last equality follows from the convergence of Riemann sums of nice functions to their Riemann integrals. Note that $\epsilon'_{s_t}$ and $\epsilon'_{r_t}\xi_{\pi}(r_t)$ have opposite signs. Making the change of variables $\epsilon_{s_t}'z_t \mapsto z_t$, $1\leq t\leq k$, and appealing to symmetry, we get $$\begin{aligned} \lim_{n\to \infty}\varphi_n(\dfrac{T_n^{\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{\epsilon_{2k}}}{n^{1/2}}) =\sum_{\pi \in {\mathcal P}_2(2k)} \prod_{(r,s)\in \pi} \theta(r,s) \int_{0}^1\int_{[-1,1]^k}\prod_{\ell=1}^{2k}\chi_{[0,1]}\big(z_0+\sum_{t=\ell}^{2k}\epsilon_{\pi}(t)z_{\pi'(t)}\big)\prod_{i=0}^kdz_i,
\end{aligned}$$ where for any pair-partition $\pi=(r_1,s_1)\cdots (r_k,s_k) \ \text{with} \ \ r_d<s_d,$ $$\begin{aligned} \label{eqn:epsilon_pi} \epsilon_\pi(t):=\left\{\begin{array}{rll} 1 & \mbox{ if } & t=s_d,\\ -1& \mbox{ if } & t=r_d. \end{array} \right.\end{aligned}$$ This completes the proof for a single matrix. Now suppose we have $m$ matrices. Then from Lemma [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"}, for $\tau_i \in \{1,\ldots, m\}$, we have $$\begin{aligned} \varphi_n(\dfrac{T_n^{(\tau_1)\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{(\tau_{p})\epsilon_{p}}}{n^{1/2}})=\frac{1}{n^{\frac{p}{2}+1}}\sum_{j=1}^n\sum_{I_{p}'}\mbox{\bf E}\big[a_{i_1}^{(\tau_1)\epsilon_1}\cdots a_{i_p}^{(\tau_p)\epsilon_p}\prod_{\ell=1}^{p} \chi_{[1,n]}(j+\sum_{t=\ell}^{p}\epsilon_t'i_t)\big],\end{aligned}$$ where $I_p'$ is as in ([\[iprime\]](#iprime){reference-type="ref" reference="iprime"}). Using the same arguments as those used to obtain ([\[eqn:phi_To(1)odd\]](#eqn:phi_To(1)odd){reference-type="ref" reference="eqn:phi_To(1)odd"}), for $p$ odd, we have $$\begin{aligned} \lim_{n \to \infty}\varphi_n(\dfrac{T_n^{(\tau_1)\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{(\tau_p)\epsilon_p}}{n^{1/2}})=0.\end{aligned}$$ Let $p=2k$.
Then, using the previous arguments, we have $$\begin{aligned} &\lim_{n\to\infty}\varphi_n(\dfrac{T_n^{(\tau_1)\epsilon_1}}{n^{1/2}}\cdots \dfrac{T_n^{(\tau_{2k})\epsilon_{2k}}}{n^{1/2}}) \\&=\lim_{n\to\infty}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I_{2k}'}\sum_{\pi\in \mathcal P_2(2k)}\prod_{(r,s)\in \pi}\mbox{\bf E}[a_{i_r}^{(\tau_r)\epsilon_r} a_{i_s}^{(\tau_s)\epsilon_s}]\prod_{\ell=1}^{2k} \chi_{[1,n]}(j+\sum_{t=\ell}^{2k}\epsilon_t'i_t) \\&=\lim_{n\to\infty}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{\pi\in \mathcal P_2(2k)}\sum_{I_{2k}'(\pi)}\prod_{(r,s)\in \pi}\delta_{\tau_r,\tau_s} \theta(r,s) \prod_{\ell=1}^{2k} \chi_{[1,n]}(j+\sum_{t=\ell}^{2k}\epsilon_t'\xi_{\pi}(t)i_{\pi'(t)}) \\&=\sum_{\pi \in {\mathcal P}_2(2k)}\prod_{(r,s)\in \pi}\delta_{\tau_r,\tau_s} \theta(r,s) \int_{0}^1\int_{[-1,1]^k}\prod_{\ell=1}^{2k} \chi_{[0,1]}(z_0+\sum_{t=\ell}^{2k}\epsilon_{\pi}(t)z_{\pi'(t)})\prod_{i=0}^kdz_i.\end{aligned}$$ This completes the proof for $m$ matrices with complex entries. Now if the input entries of the matrices are *real*, then $\sigma_y^2=0$ and $\rho_3=\rho_4=\rho_5=0$. Thus from ([\[eqn:E\[ars_epsilon\]\]](#eqn:E[ars_epsilon]){reference-type="ref" reference="eqn:E[ars_epsilon]"}), the limit $*$-moments are as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}) with $\theta(r,s)=(\sigma_x^2)^{1-\delta_{\epsilon'_{r}, \epsilon'_{s} }} \rho_{2}^{\delta_{\epsilon'_r,\epsilon'_s}}$. This completes the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}.
◻ *Proof of Corollary [Corollary 2](#cor:A2+3_TJc){reference-type="ref" reference="cor:A2+3_TJc"}.* If the Toeplitz matrix is Hermitian, then $a_j^*=\overline{a}_j$, that is, $a_{-j}=\overline{a}_j$. Thus, $x_j=x_{-j}$ and $y_j=-y_{-j}$. So, in this case, we have $\rho_2=\sigma_x^2$, $\rho_3=-\rho_4$ and $\rho_5=-\sigma_y^2$. Hence from ([\[eqn:E\[ars_epsilon\]\]](#eqn:E[ars_epsilon]){reference-type="ref" reference="eqn:E[ars_epsilon]"}), the limit $*$-moments are as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}) with $\theta(r,s)=(\sigma_x^2 + \sigma_y^2)$. This gives the first part of the corollary. Now if the matrices are real symmetric Toeplitz, then $\sigma_y^2=0$, and thus the limit $*$-moments are as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}) with $\theta(r,s)=\sigma_x^2$. This gives the second part of the corollary, and hence the proof is complete. ◻ *Proof of Corollary [Corollary 4](#cor:poly_Lsd_T){reference-type="ref" reference="cor:poly_Lsd_T"}.* By Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}, the moments of the expected ESD, that is, $n^{-1}\mbox{\bf E}[{\mbox{Tr}}(Q_n^k)]$, converge for every $k\geq 1$. From the formula for the limiting moments, it is easy to see that the even limit moments are bounded by the even moments of an appropriate Gaussian distribution, and hence they satisfy Carleman's condition. So, the limit moments define a unique distribution, and thus weak convergence of the expected ESD holds. One can show that for every $k\geq 1$, $$\label{eq:m4} \mbox{\bf E}\big[n^{-1}{\mbox{Tr}}(Q_{n}^{k})-\mbox{\bf E}[n^{-1}{\mbox{Tr}}(Q_n^{k})]\big]^4=O(n^{-2}).$$ This can be shown by following the arguments in Bose (2018) [@bose_patterned], who proved ([\[eq:m4\]](#eq:m4){reference-type="ref" reference="eq:m4"}) when the input entries are real i.i.d. We omit the details. 
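The moment convergence just described can be illustrated informally for the simplest polynomial, $Q_n=n^{-1/2}T_n$ with real symmetric i.i.d. standard normal input, whose classical limit moments are $1$, $0$ and $8/3$ for the second, third and fourth moments (with $\sigma_x^2=1$). A minimal Monte Carlo sketch, not part of the proof; the matrix size and repetition count are arbitrary choices:

```python
import numpy as np

def sym_toeplitz(a):
    """Symmetric Toeplitz matrix with (i, j) entry a[|i - j|]."""
    n = len(a)
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return a[idx]

rng = np.random.default_rng(0)
n, reps = 300, 30                      # arbitrary illustrative choices
m2 = m3 = m4 = 0.0
for _ in range(reps):
    T = sym_toeplitz(rng.standard_normal(n)) / np.sqrt(n)
    m2 += np.trace(T @ T) / n / reps
    m3 += np.trace(T @ T @ T) / n / reps
    m4 += np.trace(np.linalg.matrix_power(T, 4)) / n / reps

print(m2, m3, m4)   # roughly 1, 0, 8/3
```

The odd moment averages out to zero, and the fourth moment settles near $8/3$, consistent with the limiting expressions.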
Now using the Borel-Cantelli lemma, it follows that the ESD of $Q_n$ converges a.s. to the same limit. ◻ ## Joint convergence of $D_n$ and $P_{n}$ {#subsec:dp} **Proposition 2**. *Suppose $\{D_{n}^{(\tau)}; 1\leq \tau \leq m\}$ are $m$ deterministic Toeplitz matrices whose input entries satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"} and $P_n$ is the backward identity permutation matrix. Then $\{P_n, D_{n}^{(\tau)}; 1 \leq \tau \leq m\}$ converge jointly. The limit $*$-moments are as given in ([\[eqn:lim_phi_P,D\*\]](#eqn:lim_phi_P,D*){reference-type="ref" reference="eqn:lim_phi_P,D*"}).* We first derive a trace formula for an arbitrary monomial in $P_n$ and Toeplitz matrices. **Lemma 2**. *Let $M^{(\tau)}_{n}$ be $m$ Toeplitz matrices (random or non-random) with input sequences $(a^{(\tau)}_i)_{i\in \mathbb{Z}}$ for $\tau=1,2, \ldots, m$, and let $P_n$ be the backward identity permutation matrix. Then for $\epsilon_i\in \{1,*\}$, $\tau_i \in \{1, \ldots,m\}$ and non-negative integers $k_1, \ldots, k_p$, we have $$\begin{aligned} \label{eqn:trace_Hs} &{\mbox{Tr}}\big[(P_nM_{n}^{(\tau_1)\epsilon_1} \cdots M_{n}^{(\tau_{k_1}) \epsilon_{k_1}}) \cdots (P_nM_{n}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots M_{n}^{(\tau_{k_p})\epsilon_{k_p}})\big] \nonumber \\ & = \left\{ \begin{array}{ll} \displaystyle{ \sum_{j=1}^n \sum_{I_{k_p}} \prod_{t=1}^{k_p} a^{(\tau_t)\epsilon_t}_{i_t} \prod_{e=1}^p m_{\chi, k_e}(j,i) \delta_{0,\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}} & \mbox{if $p$ is even}, \\ \displaystyle{\sum_{j=1}^n \sum_{I_{k_p}} \prod_{t=1}^{k_p} a^{(\tau_t)\epsilon_t}_{i_t} \prod_{e=1}^p m_{\chi, k_e}(j,i) \delta_{2j-1-n,\sum_{c=1}^p(-1)^{p-c} 
\sum_{t=k_{c-1}+1}^{k_{c}} \epsilon'_t i_t}} & \mbox{if $p$ is odd}, \end{array} \right. \end{aligned}$$ where $I_{k_p}$ is as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and for $e=1,2, \ldots,p$, $$\begin{aligned} \label{eqn:m_chi,t_Hn} m_{\chi, k_e}(j,i) = \prod_{\ell=k_{e-1}+1}^{k_e} \chi_{[1,n]}\big(j+ (-1)^{p-e} \sum_{t=\ell}^{k_e} \epsilon'_t i_t + \sum_{c=e+1}^{p} (-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_c} \epsilon'_t i_t \big), \end{aligned}$$ with $k_0=0$ and $\sum_{c=p+1}^{p}(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_c} \epsilon'_t i_t =0$.* *Proof.* For simplicity of notation, we prove this lemma only for a single matrix. The same argument will work if we have multiple collections. Note from Lemma [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"} (a) that $$\begin{aligned} (M_n^{\epsilon_{1}}\cdots M_n^{\epsilon_{k_1}})e_j = \sum_{I_{k_1}} a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_{k_1}}_{i_{k_1}}\prod_{\ell=1}^{k_1} \chi_{[1,n]}(j+\sum_{t=\ell}^{k_1}\epsilon_t'i_t) e_{j+\sum_{t=1}^{k_1} \epsilon_t'i_t}, \end{aligned}$$ where $I_{k_1}$ is as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and $\epsilon'$ is as in ([\[eqn:epsilon\'\]](#eqn:epsilon'){reference-type="ref" reference="eqn:epsilon'"}). Since $P_ne_j=e_{n+1-j}$, we have $$\begin{aligned} (P_n M_n^{\epsilon_{1}}\cdots M_n^{\epsilon_{k_1}})e_j &= \sum_{I_{k_1}} a^{\epsilon_1}_{i_1}\cdots a^{\epsilon_{k_1}}_{i_{k_1}}\prod_{\ell=1}^{k_1} \chi_{[1,n]}(j+\sum_{t=\ell}^{k_1}\epsilon_t'i_t) e_{n+1 -(j+\sum_{t=1}^{k_1} \epsilon_t'i_t)}. 
\end{aligned}$$ Now $$\begin{aligned} &(P_n M_n^{\epsilon_{k_{p-2}+1}}\cdots M_n^{\epsilon_{k_{p-1}}}) (P_n M_n^{\epsilon_{k_{p-1}+1}}\cdots M_n^{\epsilon_{k_p}}) e_j\\ & = \hskip-5pt \sum_{I_{k_p-k_{p-1}}} \prod_{\ell=k_{p-1}+1}^{k_p} \hskip-5pt a^{\epsilon_\ell}_{i_\ell} \prod_{\ell=k_{p-1}+1}^{k_p} \hskip-5pt \chi_{[1,n]}(j+\sum_{t=\ell}^{k_p}\epsilon_t'i_t) (P_n M_n^{\epsilon_{k_{p-2}+1}}\cdots M_n^{\epsilon_{k_{p-1}}}) e_{n+1 -(j+\sum_{t=k_{p-1}+1}^{k_p} \epsilon_t'i_t)} \\ &=\sum_{I_{k_p-k_{p-2}}} \prod_{\ell=k_{p-2}+1}^{k_p} a^{\epsilon_\ell}_{i_\ell} \prod_{\ell=k_{p-1}+1}^{k_p} \chi_{[1,n]}(j+\sum_{t=\ell}^{k_p}\epsilon_t'i_t) \\ & \qquad \times \prod_{\ell=k_{p-2}+1}^{k_{p-1}} \chi_{[1,n]}\big( j - \sum_{t=\ell}^{k_{p-1}}\epsilon_t'i_t +\sum_{t=k_{p-1}+1}^{k_p}\epsilon_t'i_t\big) e_{j+\sum_{t=k_{p-1}+1}^{k_p} \epsilon_t'i_t- \sum_{t=k_{p-2}+1}^{k_{p-1}} \epsilon_t'i_t}. \end{aligned}$$ Continuing the process, for $\epsilon_1,\ldots, \epsilon_{k_p}\in \{1,*\}$, we get $$\begin{aligned} & \big[(P_nM_{n}^{\epsilon_1} \cdots M_{n}^{\epsilon_{k_1}}) (P_nM_{n}^{\epsilon_{k_1+1}} \cdots M_{n}^{\epsilon_{k_2}}) \cdots (P_nM_{n}^{\epsilon_{k_{p-1}+1}} \cdots M_{n}^{\epsilon_{k_p}})\big] e_j\\ & = \left\{\begin{array}{lll} \displaystyle \sum_{I_{k_p}} \prod_{t=1}^{k_p} a^{\epsilon_t}_{i_t} \prod_{e=1}^p m_{\chi,k_e}(j,i) e_{j+\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t} & \mbox{ if $p$ is even}, \\ \displaystyle \sum_{I_{k_p}} \prod_{t=1}^{k_p} a^{\epsilon_t}_{i_t} \prod_{e=1}^p m_{\chi,k_e}(j,i) e_{n+1-j-\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t} & \mbox{ if $p$ is odd}, \end{array}\right. \end{aligned}$$ where $I_{k_p}$ and $m_{\chi,k_e}(j,i)$ are as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and ([\[eqn:m_chi,t_Hn\]](#eqn:m_chi,t_Hn){reference-type="ref" reference="eqn:m_chi,t_Hn"}), respectively. 
Now using the fact that ${\mbox{Tr}}(A_n)=\sum_{j=1}^ne_j^t(A_n)e_j$, we get ([\[eqn:trace_Hs\]](#eqn:trace_Hs){reference-type="ref" reference="eqn:trace_Hs"}). This completes the proof. ◻ Now using the above trace formula, we prove Proposition [Proposition 2](#thm:jc_D+P){reference-type="ref" reference="thm:jc_D+P"}. *Proof of Proposition [Proposition 2](#thm:jc_D+P){reference-type="ref" reference="thm:jc_D+P"}.* We first consider the monomial $D_n^p$. To find $\lim_{n\to \infty}\varphi_n( D_{n}^p)$, we first consider a "truncated matrix", a notion that will also be used later. Let $D_n=((d_{i-j}))$ be the deterministic Toeplitz matrix. For a fixed $m \in \mathbb{N}$, we define a new matrix $D_{n,m}$ whose $(i,j)$-th entry is $$\label{def:D_n,m} D_{n,m}(i,j) := d_{i-j} \chi_{[0,m]} (|i-j|).$$ We first look at the convergence of $D_{n,m}$. From Lemma [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"}, we have $$\begin{aligned} \varphi_n( (D_{n,m})^p) &= \sum_{i_1, \ldots, i_p = -(m-1)}^ {m-1} d_{i_1}\cdots d_{i_p} \times \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^p \chi_{[1,n]}(j+\sum_{t=\ell}^p i_t) \delta_{0,\sum_{t=1}^pi_t} \nonumber \\ & = \sum_{i_1, \ldots, i_p = -(m-1)}^ {m-1} f(i_1, \ldots, i_p) g(i_1, \ldots, i_p,n), \mbox{ say},\end{aligned}$$ where $f(i_1, \ldots, i_p)=d_{i_1}\cdots d_{i_p}$. Note that $|g(i_1, \ldots, i_p,n)| \leq 1$. Therefore, $$\begin{aligned} \label{eqn:lim_g_m} \lim_{n \to \infty} g(i_1, \ldots, i_p,n) =\left\{\begin{array}{lll} 1 & \mbox{if } \sum_{t=1}^pi_t=0, \\ 0 & \mbox{if } \sum_{t=1}^pi_t \neq 0. \end{array}\right.\end{aligned}$$ Thus, from the above observations, for fixed $m$, we have $$\begin{aligned} \label{eqn:lim_Dm} \lim_{n \to \infty} \varphi_n(( D_{n,m})^p) = \sum_{i_1, \ldots, i_p=-(m-1)}^{(m-1)}d_{i_1}\cdots d_{i_{p-1}} d_{i_p} \delta_{0,\sum_{t=1}^pi_t}. 
\end{aligned}$$ Now observe that if we take $m=n^\alpha$ with $\alpha \in (0,1)$, then ([\[eqn:lim_g\_m\]](#eqn:lim_g_m){reference-type="ref" reference="eqn:lim_g_m"}) is still true. Let $\{d_i\}$ satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}. Then using the dominated convergence theorem, we have $$\begin{aligned} \lim_{n \to \infty} \varphi_n(( D_{n,n^\alpha})^p ) = \sum_{i_1, \ldots, i_{p}=-\infty }^{\infty} d_{i_1}\cdots d_{i_{p-1}} d_{i_p} \delta_{0,\sum_{t=1}^pi_t}.\end{aligned}$$ Note that the existence of the above limit is guaranteed because $\sum_{t= -\infty}^{\infty} |d_t| < \infty$. Now we find the limit of $\varphi_n( D_{n}^p)$. Using the structures of the matrices $D_{n,n^\alpha}$ and $D_n$, it easily follows that $\lim_{n \to \infty} \varphi_n( D_{n}^p) = \lim_{n \to \infty} \varphi_n(( D_{n,n^\alpha})^p)$. Hence from the above expression, we have $$\begin{aligned} \label{eqn:lim_D_n^p} \lim_{n \to \infty} \varphi_n( D_{n}^p) = \lim_{n \to \infty} \varphi_n(( D_{n,n^\alpha})^p) = \sum_{i_1, \ldots, i_{p}=-\infty }^{\infty} d_{i_1}\cdots d_{i_{p-1}} d_{i_p} \delta_{0,\sum_{t=1}^pi_t}.\end{aligned}$$ Similarly, we can also show that for $\epsilon_i \in \{1,*\}$ and $\tau_i \in \{1, \ldots,m\}$, $$\begin{aligned} \label{eqn:lim_D*_n^p} \lim_{n \to \infty} \varphi_n( D^{(\tau_1)\epsilon_1}_{n} \cdots D^{(\tau_p)\epsilon_p}_{n}) = \sum_{i_1, \ldots, i_{p}=-\infty }^{\infty} d^{(\tau_1)\epsilon_1}_{i_1}\cdots d^{(\tau_p) \epsilon_{p}}_{i_{p}} \delta_{0,\sum_{t=1}^p \epsilon'_ti_t}.\end{aligned}$$ Now we deal with an arbitrary monomial from the collection $\{P_n, D_{n}^{(\tau)}; 1 \leq \tau \leq m\}$. 
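Before turning to that, the limit formula ([\[eqn:lim_D\_n\^p\]](#eqn:lim_D_n^p){reference-type="ref" reference="eqn:lim_D_n^p"}) lends itself to a direct numerical check. The sketch below uses the hypothetical summable symbol $d_i=r^{|i|}$ (our illustrative choice, not from the text) and compares $\mathrm{Tr}(D_n^p)/n$ with the truncated limiting sum for $p=2,3$:

```python
import numpy as np

r = 0.5                                  # geometric decay: d_i = r^{|i|}, summable
d = lambda k: r ** np.abs(k)

n = 400                                  # arbitrary matrix size
i = np.arange(n)
D = d(i[:, None] - i[None, :])           # Toeplitz matrix D[i, j] = d_{i-j}

# left-hand side: phi_n(D^p) = Tr(D^p)/n for p = 2, 3
lhs2 = np.trace(D @ D) / n
lhs3 = np.trace(D @ D @ D) / n

# right-hand side: the limiting sum, with i_p = -(i_1 + ... + i_{p-1}),
# truncated at |i| <= K (the tail is geometrically small)
K = 40
ii = np.arange(-K, K + 1)
rhs2 = np.sum(d(ii) * d(-ii))            # closed form: (1 + r^2)/(1 - r^2) = 5/3
I1, I2 = np.meshgrid(ii, ii)
rhs3 = np.sum(d(I1) * d(I2) * d(-(I1 + I2)))

print(lhs2, rhs2, lhs3, rhs3)
```

The finite-$n$ traces differ from the limiting sums only by the $O(1/n)$ boundary effect visible in the indicator product.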
Note that for $\epsilon_i \in \{ 1,*\}$ and $\tau_i \in \{1, \ldots, m\}$, an arbitrary monomial from this collection looks like $$\big[(P_nD_{n}^{(\tau_1)\epsilon_1} \cdots D_{n}^{(\tau_{k_1})\epsilon_{k_1}}) \cdots (P_nD_{n}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots D_{n}^{(\tau_{k_p})\epsilon_{k_p}})\big]= q_{k_1, \ldots, k_p}(\epsilon, \tau), \mbox{ say}.$$ First, let $p$ be an odd positive integer. Observe from Lemma [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"} that $$\begin{aligned} \varphi_n\big( q_{k_1, \ldots, k_p}(\epsilon, \tau)\big) &= \sum_{I_{k_p}} \prod_{t=1}^{k_p} d^{(\tau_{t})\epsilon_t}_{i_t} \times \frac{1}{n} \sum_{j=1}^n \prod_{e=1}^p m_{\chi,k_e}(j,i) \delta_{2j-1-n, \sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t},\end{aligned}$$ where $I_{k_p}$ and $m_{\chi,k_e}(j,i)$ are as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and ([\[eqn:m_chi,t_Hn\]](#eqn:m_chi,t_Hn){reference-type="ref" reference="eqn:m_chi,t_Hn"}), respectively. Now for fixed values of $i_1, \ldots, i_{k_p}$, note from ([\[eqn:m_chi,t_Hn\]](#eqn:m_chi,t_Hn){reference-type="ref" reference="eqn:m_chi,t_Hn"}) that $|m_{\chi,k_e}(j,i) | \leq 1$ and thus $$\begin{aligned} & |\frac{1}{n} \sum_{j=1}^n \prod_{e=1}^p m_{\chi,k_e}(j,i) \delta_{2j-1-n, \sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}| \\ &\leq \frac{1}{n}\sum_{j=1}^n \delta_{2j-1-n,\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t} = O(\frac{1}{n}). \end{aligned}$$ Also observe that $\sum_{I_{k_p}} \prod_{t=1}^{k_p} d^{(\tau_{t})\epsilon_t}_{i_t} < \infty$. Hence for odd values of $p$, $$\label{eqn:phi_PD_o(1)odd} \varphi_n\big(q_{k_1, \ldots, k_{p}}(\epsilon, \tau)\big) =o(1).$$ Now suppose $p$ is even. 
Then from Lemma [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"}, we have $$\begin{aligned} \varphi_n\big( q_{k_1, \ldots, k_{p}}(\epsilon, \tau)\big) &= \sum_{I_{k_p}} \prod_{t=1}^{k_p} d^{(\tau_{t})\epsilon_t}_{i_t} \times \frac{1}{n} \sum_{j=1}^n \prod_{e=1}^p m_{\chi,k_e}(j,i) \delta_{0, \sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}. \end{aligned}$$ Let $q'_{k_1, \ldots, k_{p}}(\epsilon, \tau)$ be the monomial obtained by replacing $D_n$ by $D_{n, n^\alpha}$ in $q_{k_1, \ldots, k_{p}}(\epsilon, \tau)$, where $D_{n, n^\alpha}$ is as in ([\[def:D_n,m\]](#def:D_n,m){reference-type="ref" reference="def:D_n,m"}) with $\alpha \in (0,1)$. Using arguments similar to those used while establishing ([\[eqn:lim_D\_n\^p\]](#eqn:lim_D_n^p){reference-type="ref" reference="eqn:lim_D_n^p"}), we have $$\begin{aligned} \lim_{n \to \infty} \varphi_n\big( q'_{k_1, \ldots, k_{p}}(\epsilon, \tau)\big) = \sum_{i_1, \ldots, i_{k_p}=-\infty }^{\infty} \prod_{t=1}^{k_p} d^{(\tau_{t})\epsilon_t}_{i_t} \delta_{0, \sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}.\end{aligned}$$ But $\lim_{n \to \infty} \varphi_n\big( q'_{k_1, \ldots, k_{p}}(\epsilon, \tau)\big) = \lim_{n \to \infty} \varphi_n\big( q_{k_1, \ldots, k_{p}}(\epsilon, \tau)\big)$. Hence $$\begin{aligned} \label{eqn:lim_phi_P,D*} & \lim_{n \to \infty} \varphi_n \big[(P_nD_{n}^{(\tau_1)\epsilon_1} \cdots D_{n}^{(\tau_{k_1})\epsilon_{k_1}}) \cdots (P_nD_{n}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots D_{n}^{(\tau_{k_p})\epsilon_{k_p}})\big] \nonumber \\ & = \left\{\begin{array}{lll} \displaystyle \sum_{i_1, \ldots, i_{k_p}=-\infty }^{\infty} \prod_{t=1}^{k_p} d^{(\tau_{t})\epsilon_t}_{i_t} \delta_{0, \sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t} & \mbox{ if $p$ is even}, \\ 0& \mbox{ if $p$ is odd}. 
\end{array}\right.\end{aligned}$$ This completes the proof of Proposition [Proposition 2](#thm:jc_D+P){reference-type="ref" reference="thm:jc_D+P"}. ◻ **Remark 2**. Let $\{M^{(i)}_n; 1 \leq i \leq \ell\}$ be any given $\ell$ matrices, where $M^{(i)}_n$ is $P_n$ or $n^{-1/2}T_n$. Let the input entries of $T_n$ and $D_n$ satisfy Assumptions [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"} and [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}, respectively. Suppose $Q(M_n,D_n)$ is any monomial in $\{M^{(i)}_n; 1 \leq i \leq \ell\}$ and $D_n$, and $Q(M_n,D_{n, n^\alpha})$ is the same monomial with $D_n$ replaced by $D_{n, n^\alpha}$. Then, by an argument similar to the one used to establish ([\[eqn:lim_D\_n\^p\]](#eqn:lim_D_n^p){reference-type="ref" reference="eqn:lim_D_n^p"}), we have $\lim_{n \to \infty} \varphi_n\big(Q(M_n,D_n)\big) = \lim_{n \to \infty} \varphi_n\big(Q(M_n,D_{n, n^\alpha})\big)$. ## Joint convergence of $T_n$ and $D_{n}$ {#subsec:td} **Proposition 3**. *Let $A_{n,1}=n^{-1/2}T_n$, where $T_n$ is a random Toeplitz matrix, and let $\{A_{n,1}^{(i)}\}$, $1\leq i \leq m$, be independent copies of $A_{n,1}$. Let $A_{n,2}^{(i)}=D^{(i)}_n$ be $m$ deterministic Toeplitz matrices. Suppose the input entries of $T_n$ and $D^{(i)}_n$ satisfy Assumptions [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"} and [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}, respectively. Then $\{A_{n,j}^{(i)}; 1\leq i\leq m, 1\leq j \leq 2\}$ converge jointly. The limit $*$-moments are given in ([\[eqn:lim_ph(Dm\*,T\*)\]](#eqn:lim_ph(Dm*,T*)){reference-type="ref" reference="eqn:lim_ph(Dm*,T*)"}).* *Proof.* First we deal with monomials of the form $D_{n}^p (n^{-1/2}T_n)^q$. Recall the definition of $D_{n,m}$ from ([\[def:D_n,m\]](#def:D_n,m){reference-type="ref" reference="def:D_n,m"}). 
Observe from Remark [Remark 2](#rem:obse_D,D_m){reference-type="ref" reference="rem:obse_D,D_m"} that to find the limit of $\varphi_n(D_{n}^p (n^{-1/2}T_n)^q)$, it is sufficient to find the limit of $\varphi_n(D_{n, n^\alpha}^p (n^{-1/2}T_n)^q)$ with $\alpha \in (0,1)$, as the two limits coincide. Note from Lemma [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"} that $$\begin{aligned} \label{eqn:ph(Dm,T)} &\varphi_n(D_{n, n^\alpha}^p (n^{-1/2}T_n)^q) \nonumber \\ &= \hskip-5pt \sum_{i_1, \ldots, i_p=-(n^\alpha-1)}^{n^\alpha-1} \prod_{t=1}^{p} d_{i_t} \times \frac{1}{n^{\frac{q}{2}}} \sum_{i_{p+1}, \ldots, i_{p+q}=-(n-1)}^{n-1} \hskip-5pt \mbox{\bf E}\big[ \prod_{t=p+1}^{p+q} a_{i_{t}}\big] \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^{p+q} \chi_{[1,n]}(j+\sum_{t=\ell}^{p+q} i_t) \delta_{0,\sum_{t=1}^{p+q} i_t}.\end{aligned}$$ Observe that $$|\frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^{p+q} \chi_{[1,n]}(j+\sum_{t=\ell}^{p+q} i_t)| \leq 1 \mbox{ and } \sum_{i_1, \ldots, i_p=-\infty}^{\infty} \prod_{t=1}^{p} d_{i_t} < \infty.$$ Thus $\varphi_n(D_{n, n^\alpha}^p (n^{-1/2}T_n)^q)$ is of the order $O(1)$ iff for fixed values of $i_1, \ldots, i_p$, $$\frac{1}{n^{\frac{q}{2}}} \sum_{i_{p+1}, \ldots, i_{p+q}=-(n-1)}^{n-1} \mbox{\bf E}\big[\prod_{t=p+1}^{p+q} a_{i_{t}}\big] \delta_{0,\sum_{t=1}^{p+q} i_t} =O(1).$$ Now, from the arguments used in the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}, the above expression holds iff $q$ is even and $(i_{p+1}, \ldots, i_{p+q})$ is pair-matched with $\sum_{t=p+1}^{p+q} i_t=0$. 
Thus ([\[eqn:ph(Dm,T)\]](#eqn:ph(Dm,T)){reference-type="ref" reference="eqn:ph(Dm,T)"}) becomes $$\begin{aligned} \varphi_n(D_{n, n^\alpha}^p (n^{-1/2}T_n)^q) &= \sum_{i_1, \ldots, i_p=-(n^\alpha-1)}^{n^\alpha-1} \prod_{t=1}^{p} d_{i_t} \delta_{0,\sum_{t=1}^{p} i_t} \nonumber \\ &\times \frac{1}{n^{\frac{q}{2}}} \sum_{i_{p+1}, \ldots, i_{p+q}=-(n-1)}^{n-1} \hskip-5pt \mbox{\bf E}\big[ \prod_{t=p+1}^{p+q} a_{i_{t}}\big] \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^{p+q} \chi_{[1,n]}(j+\sum_{t=\ell}^{p+q} i_t) \delta_{0,\sum_{t=p+1}^{p+q} i_t}.\end{aligned}$$ Since $i_1, \ldots, i_p \in (-(n^\alpha-1), (n^\alpha-1))$ with $\alpha \in (0,1)$, we have $$\label{eqn:chi_D_remove} \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^{p+q} \chi_{[1,n]}(j+\sum_{t=\ell}^{p+q} i_t) = \frac{1}{n}\sum_{j=1}^n\prod_{\ell=p+1}^{p+q} \chi_{[1,n]}(j+\sum_{t=\ell}^{p+q} i_t) +o(1).$$ Hence $$\begin{aligned} \label{eqn:lim_ph(Dm,T)} & \lim_{n \to \infty} \varphi_n(D_{n, n^\alpha}^p (n^{-1/2}T_n)^q) \nonumber \\ &= \sum_{i_1, \ldots, i_p=-\infty}^{\infty} \prod_{t=1}^{p} d_{i_t} \delta_{0,\sum_{t=1}^{p} i_t} \nonumber \\ & \qquad \times \lim_{n \to \infty} \frac{1}{n^{\frac{q}{2}}} \sum_{i_{p+1}, \ldots, i_{p+q}=-(n-1)}^{n-1} \mbox{\bf E}\big[ \prod_{t=p+1}^{p+q} a_{i_{t}}\big] \frac{1}{n}\sum_{j=1}^n\prod_{\ell=p+1}^{p+q} \chi_{[1,n]}(j+\sum_{t=\ell}^{p+q} i_t) \delta_{0,\sum_{t=p+1}^{p+q} i_t} \nonumber \\ & = \lim_{n \to \infty} \varphi_n( D_{n}^p ) \times \lim_{n \to \infty} \varphi_n( (n^{-1/2}T_n)^q),\end{aligned}$$ where the existence of the first limit is given in ([\[eqn:lim_D\_n\^p\]](#eqn:lim_D_n^p){reference-type="ref" reference="eqn:lim_D_n^p"}) and the second limit is guaranteed by Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"} with the limit value as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}). Now for $\epsilon_i \in \{1, *\}$ and $\tau_i \in \{1,2, \ldots, m\}$, we calculate the limit for the monomial $q_{ k}(\epsilon, \tau) = D_{n}^{(\tau_1) \epsilon_1} \cdots D_{n}^{(\tau_p) \epsilon_p} (n^{-1/2} T_{n}^{(\tau_{p+1}) \epsilon_{p+1}}) \cdots( n^{-1/2} T_{n}^{(\tau_k) \epsilon_k})$. Note from Remark [Remark 2](#rem:obse_D,D_m){reference-type="ref" reference="rem:obse_D,D_m"} that the limit of $\varphi_n(q_{ k}(\epsilon, \tau))$ is the same as that of $\varphi_n(q'_{ k}(\epsilon, \tau))$, where $q'_{k}(\epsilon, \tau)$ is the monomial obtained by replacing $D_{n}$ by $D_{n,n^\alpha}$ in $q_{ k}(\epsilon, \tau)$. From Lemma [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"}, we have $$\begin{aligned} \varphi_n(q'_{k}(\epsilon, \tau)) &= \sum_{i_{1}, \ldots, i_{p}=-(n^\alpha-1)}^{n^\alpha-1} \prod_{t=1}^{p} d^{(\tau_t) \epsilon_t}_{i_t} \nonumber \\ & \times \frac{1}{n^{\frac{k-p}{2}}} \sum_{i_{p+1}, \ldots, i_{k}=-(n-1)}^{n-1} \mbox{\bf E}\big[ \prod_{t=p+1}^{k} a^{(\tau_t) \epsilon_t}_{i_{t}}\big] \frac{1}{n}\sum_{j=1}^n\prod_{\ell=1}^{k} \chi_{[1,n]}(j+\sum_{t=\ell}^{k} \epsilon'_t i_t) \delta_{0,\sum_{t=1}^{k} \epsilon'_t i_t}.\end{aligned}$$ Following arguments similar to those used to establish ([\[eqn:lim_ph(Dm,T)\]](#eqn:lim_ph(Dm,T)){reference-type="ref" reference="eqn:lim_ph(Dm,T)"}), we get $$\begin{aligned} & \lim_{n \to \infty} \varphi_n(q_{k}(\epsilon, \tau)) \nonumber \\ & = \lim_{n \to \infty} \varphi_n(D_{n}^{(\tau_1)\epsilon_1} \cdots D_{n}^{(\tau_{p})\epsilon_{p}}) \times \lim_{n \to \infty} \varphi_n \big( (n^{-1/2}T^{(\tau_{p+1}) \epsilon_{p+1}}_n) \cdots (n^{-1/2}T^{(\tau_{k}) \epsilon_{k}}_n) \big).\end{aligned}$$ Here the existence of the first limit is given in ([\[eqn:lim_D\*\_n\^p\]](#eqn:lim_D*_n^p){reference-type="ref" reference="eqn:lim_D*_n^p"}) and the second limit is guaranteed by Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"} with the limit value as in ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}). 
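The asymptotic factorization in ([\[eqn:lim_ph(Dm,T)\]](#eqn:lim_ph(Dm,T)){reference-type="ref" reference="eqn:lim_ph(Dm,T)"}) can also be observed numerically. A rough sketch, assuming for concreteness real symmetric Gaussian input for $T_n$ and the summable symbol $d_i=r^{|i|}$ for $D_n$ (both are our illustrative choices, not from the text):

```python
import numpy as np

def sym_toeplitz(a):
    """Symmetric Toeplitz matrix with (i, j) entry a[|i - j|]."""
    idx = np.abs(np.arange(len(a))[:, None] - np.arange(len(a))[None, :])
    return a[idx]

rng = np.random.default_rng(1)
n, reps, r = 300, 30, 0.5                          # arbitrary illustrative choices

D = sym_toeplitz(r ** np.arange(n, dtype=float))   # deterministic, d_i = r^{|i|}
phiD2 = np.trace(D @ D) / n                        # phi_n(D^2), near (1+r^2)/(1-r^2)

mixed = phiT2 = 0.0
for _ in range(reps):
    S = sym_toeplitz(rng.standard_normal(n)) / np.sqrt(n)
    mixed += np.trace(D @ D @ S @ S) / n / reps    # phi_n(D^2 (T/sqrt(n))^2)
    phiT2 += np.trace(S @ S) / n / reps            # phi_n((T/sqrt(n))^2)

print(mixed, phiD2 * phiT2)   # approximately equal: the limits factorize
```

The mixed moment matches the product of the two separate moments up to finite-$n$ error, in line with the factorization above.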
Now for an arbitrary monomial $A_{n,j}^{(\tau_1) \epsilon_1} A_{n,j}^{(\tau_2) \epsilon_2} \cdots A_{n,j}^{(\tau_k) \epsilon_k}$ with $A_{n,1}=n^{-1/2}T_n$ and $A_{n,2}=D_n$, using calculations similar to those above, we have $$\begin{aligned} \label{eqn:lim_ph(Dm*,T*)} &\lim_{n \to \infty} \varphi_n \big( A_{n,j}^{(\tau_1) \epsilon_1} A_{n,j}^{(\tau_2) \epsilon_2} \cdots A_{n,j}^{(\tau_k) \epsilon_k} \big) \nonumber \\ & = \lim_{n \to \infty} \varphi_n \big( \prod_{t \in \{v_1, v_2, \ldots, v_p\}} D_{n}^{(\tau_t) \epsilon_t} \big) \times \lim_{n \to \infty} \varphi_n \big(\prod_{t \in \{[k] \setminus \{v_1, v_2, \ldots, v_p\} \}} \frac{T_{n}^{(\tau_t) \epsilon_t}}{\sqrt{n}} \big),\end{aligned}$$ where $\{v_1, v_2, \ldots, v_p\}$ are the indices corresponding to the positions of $D_n$ in the monomial $A_{n,j}^{(\tau_1) \epsilon_1} A_{n,j}^{(\tau_2) \epsilon_2} \cdots A_{n,j}^{(\tau_k) \epsilon_k}$. Here the first limit is of the form ([\[eqn:lim_D\*\_n\^p\]](#eqn:lim_D*_n^p){reference-type="ref" reference="eqn:lim_D*_n^p"}) and the second limit is of the form ([\[eqn:lim_phi(T\*tau)\]](#eqn:lim_phi(T*tau)){reference-type="ref" reference="eqn:lim_phi(T*tau)"}). This completes the proof of Proposition [Proposition 3](#thm:jc_T+D){reference-type="ref" reference="thm:jc_T+D"}. ◻ ## Joint convergence of $T_n$ and $P_{n}$ {#subsec:Tp} We now focus on the joint convergence of $T_n$ and $P_n$, and of symmetric Hankel matrices. **Proposition 4**. *If $\{T_{n}^{(i)}; 1\leq i\leq m\}$ are $m$ independent copies of random Toeplitz matrices whose input entries satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, then $\{P_n, n^{-1/2}T_{n}^{(i)}; 1 \leq i \leq m\}$ converge jointly. 
The limit $*$-moments are given in ([\[eqn:lim_phi(Te\*,P)\]](#eqn:lim_phi(Te*,P)){reference-type="ref" reference="eqn:lim_phi(Te*,P)"}).* *Proof of Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"}.* The main idea of the proof is similar to the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}. For simplicity of notation, we first consider a single matrix. Note that an arbitrary monomial in $\{P_n,T_n\}$ looks like $$(P_n \frac{T_{n}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{T_{n}^{\epsilon_{k_1}}}{\sqrt{n}}) (P_n \frac{T_{n}^{\epsilon_{k_1+1}}}{\sqrt{n}} \cdots \frac{T_{n}^{\epsilon_{k_2}}}{\sqrt{n}} ) \cdots (P_n \frac{T_{n}^{\epsilon_{k_{p-1}+1}}}{\sqrt{n}} \cdots \frac{T_{n}^{\epsilon_{k_p}}}{\sqrt{n}} )= q_{k_1, \ldots, k_p}(\epsilon), \mbox{ say}.$$ First suppose $p$ is odd. Then using arguments similar to those used while establishing ([\[eqn:phi_PD_o(1)odd\]](#eqn:phi_PD_o(1)odd){reference-type="ref" reference="eqn:phi_PD_o(1)odd"}), we get $\varphi_n(q_{k_1, \ldots, k_p}(\epsilon))= o(1)$. Now suppose $p$ is even. Again, from a similar argument that was used to establish ([\[eqn:phi_To(1)odd\]](#eqn:phi_To(1)odd){reference-type="ref" reference="eqn:phi_To(1)odd"}), we get $\varphi_n(q_{k_1, \ldots, k_p}(\epsilon))= o(1)$ when $k_p$ is odd. 
So let $k_p=2k$ and, for $I_{2k}$ as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}), let $$I_{2k}^{o}=\{(i_1,\ldots, i_{k_1}, i_{k_1+1}, \ldots, i_{k_2}, \ldots, i_{k_{p-1}+1}, \ldots, i_{2k})\in I_{2k}\; :\;\sum_{c=1}^{p}(-1)^c \sum_{t=k_{c-1}+1}^{k_c} \epsilon'_t i_t=0 \}.$$ From Lemma [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"}, we have $$\begin{aligned} \lim_{n \to \infty} \varphi_n\big(q_{k_1, \ldots, 2k}(\epsilon)\big) &=\lim_{n \to \infty}\frac{1}{n^{k+1}}\sum_{j=1}^{n}\sum_{I_{2k}^{o}} \mbox{\bf E}[\prod_{t=1}^{2k} a^{\epsilon_t}_{i_t}] \prod_{e=1}^{2k} m_{\chi,k_e}(j,i), \end{aligned}$$ where $m_{\chi,k_e}(j,i)$ is as in ([\[eqn:m_chi,t_Hn\]](#eqn:m_chi,t_Hn){reference-type="ref" reference="eqn:m_chi,t_Hn"}). As before, only the pair-partitions will contribute. Hence we have $$\begin{aligned} \label{eqn:EHs_2k} \lim_{n \to \infty} \varphi_n\big(q_{k_1, \ldots, 2k}(\epsilon)\big) &=\sum_{\pi\in \mathcal P_{2}(2k)}\lim_{n\to \infty}\frac{1}{n^{k+1}}\sum_{j=1}^{n}\sum_{I_{2k}^{o}(\pi)}\prod_{(r,s)\in \pi}\mbox{\bf E}[a^{\epsilon_r}_{i_r} a^{\epsilon_s}_{i_s}] \prod_{e=1}^{2k} m_{\chi,k_e}(j,i), \end{aligned}$$ where $I_{2k}^{o}(\pi)=\{(i_1,\ldots, i_{2k})\in I_{2k}^{o} \; :\; \prod_{(r,s)\in \pi}\mbox{\bf E}[a^{\epsilon_r}_{i_r} a^{\epsilon_s}_{i_s}]\neq 0\}$. For a block $(r,s)\in \pi$, let $r \in \{k_{c-1}, k_{c-1}+1, \ldots, k_{c}\}$ and $s \in \{k_{c'-1}, k_{c'-1}+1, \ldots, k_{c'}\}$ for some $c,c' \in \{1,2, \ldots,p\}$ with $k_0=1$ and $k_p=2k$. Observe that the constraint on the indices, $\sum_{c=1}^{p}(-1)^c \sum_{t=k_{c-1}+1}^{k_c} \epsilon'_t i_t=0$, can also be written as $\sum_{c=1}^{p} \nu_c \sum_{t=k_{c-1}+1}^{k_c} \epsilon'_t i_t=0$, where $$\begin{aligned} \label{eqn:nu_t_oddeven} \nu_c= \left\{\begin{array}{lll} 1 & \mbox{ if $c$ is even}, \\ -1& \mbox{ if $c$ is odd}. \end{array}\right. 
\end{aligned}$$ Thus, in terms of a pair-partition, the constraint on the indices will be $\sum_{(r,s)\in \pi}(\nu_c \epsilon'_ri_r+\nu_{c'} \epsilon'_si_s)=0$. Here $i_r$ matches with $i_s$. Observe that if $r,s \in \{k_d, k_d+1, \ldots, k_{d+1}\}$ for some $d=0,1, \ldots, (p-1)$, then the corresponding $\nu_c,\nu_{c'}$ have the same sign. Now, similar to [\[eqn:reduction\]](#eqn:reduction){reference-type="eqref" reference="eqn:reduction"}, the number of free indices for $(i_1,\ldots, i_{k_1}, i_{k_1+1}, \ldots, i_{k_2}, \ldots, i_{k_{p-1}+1}, \ldots, i_{2k})\in I_{2k}^{o}(\pi)$ will be $k$, if and only if for all blocks $(r,s)\in \pi$ $$\begin{aligned} \nu_c \epsilon'_r i_r+\nu_{c'} \epsilon'_s i_s=0 , \mbox{ equivalently } i_r= \left\{\begin{array}{rl} i_s & \mbox{ if } \nu_c\nu_{c'} \epsilon'_r\epsilon'_s =-1, \\ -i_s & \mbox{ if } \nu_c \nu_{c'} \epsilon'_r\epsilon'_s=1. \end{array} \right. \end{aligned}$$ Since the input entries of matrices satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, following computations similar to those used in establishing ([\[eqn:E\[ars_epsilon\]\]](#eqn:E[ars_epsilon]){reference-type="ref" reference="eqn:E[ars_epsilon]"}), we have a non-zero contribution in the limit iff $$\begin{aligned} \label{eqn:EHs_ars_epsilon} &\mbox{\bf E}[a^{\epsilon_{r}}_{i_{r}} a^{\epsilon_{s}}_{i_{s}}] \nonumber \\ &= \begin{cases} (\sigma_x^2+\sigma_y^2) \delta_{\nu_c, \nu_{c'}} (1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) + [\sigma_x^2-\sigma_y^2+\epsilon'_{r} 2\mathrm{i} \rho_1 ] (1-\delta_{\nu_c, \nu_{c'}}) \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=i_{s}>0, \\ (\sigma_x^2+\sigma_y^2) \delta_{\nu_c, \nu_{c'}} (1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) + [\sigma_x^2-\sigma_y^2+\epsilon'_{r} 2\mathrm{i} \rho_6 ] (1-\delta_{\nu_c, \nu_{c'}}) \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=i_{s}<0, \\ [\rho_2+\rho_5 + \epsilon'_{r} \mathrm{i}(\rho_4-\rho_3) ](1-\delta_{\nu_c, \nu_{c'}}) 
(1-\delta_{\epsilon'_{r}, \epsilon'_{s}}) \\ \quad + [\rho_2-\rho_5 + \epsilon'_{r} \mathrm{i}(\rho_4+\rho_3) ] \delta_{\nu_c, \nu_{c'}} \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=-i_{s}>0, \\ [\rho_2+\rho_5 + \epsilon'_{s} \mathrm{i}(\rho_4-\rho_3) ](1-\delta_{\nu_c, \nu_{c'}})(1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) \\ \quad + [\rho_2-\rho_5 + \epsilon'_{s} \mathrm{i}(\rho_4+\rho_3) ] \delta_{\nu_c, \nu_{c'}} \delta_{\epsilon'_{r}, \epsilon'_{s}} &\mbox{ if } i_{r}=-i_{s}<0, \end{cases} \nonumber \\ & = \theta^{(T,P)}_{r,s}(\epsilon,\rho,i), \mbox{ say}. \end{aligned}$$ Recall that, for $\pi=(r_1,s_1)\cdots (r_k,s_k)$, we have $\pi'(r_t)=\pi'(s_t)=t$. Define $\eta_{\pi}(r_t)=1$ and $\eta_{\pi}(s_t)=(-1)^{\delta_{\epsilon'_{r_t} \epsilon'_{s_t}, \nu_{c} \nu_{c'}}}$. Also, for a set of variables $z_0,z_1, \ldots, z_{2k}$, we define $\theta^{(T,P)}_{r,s}(\epsilon,\rho,z)$ as $\theta^{(T,P)}_{r,s}(\epsilon,\rho,i)$ with $i_r,i_s$ respectively replaced by $z_r,z_s$, and for $e=1,2, \ldots,p$, $$\begin{aligned} \label{eqn:m_chi,t,z_Hn} m_{\chi,k_e}(z) & = \prod_{\ell=k_{e-1}+1}^{k_e} \chi_{[0,1]}\big(z_0+ (-1)^{e} \sum_{t=\ell}^{k_e} \epsilon'_t \eta_{\pi}(t) z_{\pi'(t)} + \sum_{c=e+1}^{p} (-1)^{c} \sum_{t=k_{c-1}+1}^{k_c} \epsilon'_t \eta_{\pi}(t) z_{\pi'(t)} \big). \end{aligned}$$ Using [\[eqn:EHs_ars_epsilon\]](#eqn:EHs_ars_epsilon){reference-type="eqref" reference="eqn:EHs_ars_epsilon"} in ([\[eqn:EHs_2k\]](#eqn:EHs_2k){reference-type="ref" reference="eqn:EHs_2k"}), we get $$\begin{aligned} \label{eqn:lim_phi(T*,P)} \lim_{n \to \infty} \varphi_n \big(q_{k_1, \ldots, 2k}(\epsilon)\big) = \sum_{\pi\in \mathcal P_{2}(2k)} \int_0^1\int_{[-1,1]^k} \prod_{(r,s)\in \pi} \theta^{(T,P)}_{r,s}(\epsilon,\rho,z) \prod_{e=1}^{p} m_{\chi,k_e}(z) \prod_{i=0}^kdz_i. \end{aligned}$$ This establishes the limit for a single matrix. 
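The dichotomy established above — monomials with an odd number of $P_n$ blocks are negligible, while an even number can contribute — can be seen numerically. A small sketch, assuming real symmetric Gaussian Toeplitz input purely for illustration:

```python
import numpy as np

def sym_toeplitz(a):
    """Symmetric Toeplitz matrix with (i, j) entry a[|i - j|]."""
    idx = np.abs(np.arange(len(a))[:, None] - np.arange(len(a))[None, :])
    return a[idx]

rng = np.random.default_rng(2)
n, reps = 301, 30
P = np.fliplr(np.eye(n))                 # backward identity permutation matrix P_n

odd_p = even_p = 0.0
for _ in range(reps):
    S = sym_toeplitz(rng.standard_normal(n)) / np.sqrt(n)
    odd_p  += np.trace(P @ S @ S) / n / reps      # one P_n block (p = 1, odd)
    even_p += np.trace(P @ S @ P @ S) / n / reps  # two P_n blocks (p = 2, even)

print(odd_p, even_p)   # odd_p is close to 0; even_p is not
```

Here `even_p` is the second moment of the scaled symmetric Hankel matrix $P_nT_n/\sqrt{n}$ and stays close to $1$, while `odd_p` vanishes at rate $O(1/n)$.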
Now for $\tau=(\tau_1,\ldots,\tau_{k_p})$ and $\epsilon=(\epsilon_1, \ldots, \epsilon_{k_p})$ with $\tau_i\in \{1,\ldots, m\}$ and $\epsilon_i \in \{1,*\}$, let $$q_{k_1, \ldots, k_p}(\tau, \epsilon) :=\Big( (P_n \frac{T_{n}^{(\tau_1)\epsilon_1}}{\sqrt{n}} \cdots \frac{T_{n}^{(\tau_{k_1})\epsilon_{k_1}}}{\sqrt{n}}) \cdots (P_n \frac{T_{n}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}}}{\sqrt{n}} \cdots \frac{T_{n}^{(\tau_{k_p})\epsilon_{k_p}}}{\sqrt{n}} ) \Big).$$ Using Lemma [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"} and the above techniques, one can show that the limit of $\varphi_n \big( q_{k_1, \ldots, k_p}(\tau, \epsilon) \big)$ is zero if $p$ is odd or $k_p$ is odd. Let $p$ be even and $k_p=2k$. Now using arguments similar to those used while establishing ([\[eqn:EHs_2k\]](#eqn:EHs_2k){reference-type="ref" reference="eqn:EHs_2k"}), we have $$\begin{aligned} \lim_{n \to \infty} \varphi_n \big( q_{k_1, \ldots, k_p}(\tau, \epsilon) \big) & = \sum_{\pi\in \mathcal P_{2}(2k)} \lim_{n\to \infty} \frac{1}{n^{k+1}}\sum_{j=1}^{n}\sum_{I_{2k}^{o}(\pi)}\prod_{(r,s)\in \pi} \mbox{\bf E}[a^{(\tau_r) \epsilon_r}_{i_r} a^{(\tau_s) \epsilon_s}_{i_s}] \prod_{e=1}^{p} m_{\chi,k_e}(j,i), \end{aligned}$$ where $m_{\chi,k_e}(j,i)$ is as in ([\[eqn:EHs_2k\]](#eqn:EHs_2k){reference-type="ref" reference="eqn:EHs_2k"}). Following the arguments which were used to establish [\[eqn:EHs_ars_epsilon\]](#eqn:EHs_ars_epsilon){reference-type="eqref" reference="eqn:EHs_ars_epsilon"}, a contribution of order $O(1)$ from a summand on the right-hand side can occur only if $$\mbox{\bf E}[a^{(\tau_r) \epsilon_r}_{i_r} a^{(\tau_s) \epsilon_s}_{i_s}] =\delta_{\tau_r,\tau_s}\theta^{(T,P)}_{r,s}(\epsilon,\rho,i),$$ where $\theta^{(T,P)}_{r,s}(\epsilon,\rho,i)$ is as in [\[eqn:EHs_ars_epsilon\]](#eqn:EHs_ars_epsilon){reference-type="eqref" reference="eqn:EHs_ars_epsilon"}.
Thus, using steps similar to those used to obtain ([\[eqn:lim_phi(T\*,P)\]](#eqn:lim_phi(T*,P)){reference-type="ref" reference="eqn:lim_phi(T*,P)"}) for a single $T_{n}$, we get $$\begin{aligned} \label{eqn:lim_phi(Te*,P)} & \lim_{n \to \infty} \varphi_n\big( q_{k_1, \ldots, k_p}(\tau, \epsilon)\big) \nonumber \\ =& \left\{\begin{array}{lll} \displaystyle \hskip-5pt \sum_{\pi\in \mathcal P_{2}(2k)} \int_0^1 \hskip-3pt \int_{[-1,1]^k} \prod_{(r,s)\in \pi} \hskip-7pt \delta_{\tau_r,\tau_s} \theta^{(T,P)}_{r,s}(\epsilon,\rho,z) \prod_{e=1}^{p} m_{\chi,k_e}(z) \prod_{i=0}^kdz_i, & \mbox{ if $p, k_p$ are even}, \\ 0& \mbox{ otherwise}, \end{array}\right. \end{aligned}$$ where $k_p=2k$ and $\theta^{(T,P)}_{r,s}(\epsilon,\rho,z)$, $m_{\chi,k_e}(z)$ are as in [\[eqn:EHs_ars_epsilon\]](#eqn:EHs_ars_epsilon){reference-type="eqref" reference="eqn:EHs_ars_epsilon"}, ([\[eqn:m_chi,t,z_Hn\]](#eqn:m_chi,t,z_Hn){reference-type="ref" reference="eqn:m_chi,t,z_Hn"}), respectively. This completes the proof of Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"}. ◻ **Remark 3**. Recall that any symmetric Hankel matrix is of the form $H_{n,s}=P_nT_n$. While the joint convergence of $\{H_{n,s}^{(i)}; 1\leq i \leq m\}$ follows from Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"}, their limit $*$-moments cannot be easily deduced from ([\[eqn:lim_phi(Te\*,P)\]](#eqn:lim_phi(Te*,P)){reference-type="ref" reference="eqn:lim_phi(Te*,P)"}). This is because $H^*_{n,s}= T^*_nP_n$ and so, for the $*$-moment of a monomial, the positions of the $P_n$'s depend on the particular monomial and become crucial. That is, we have to know the values of $\epsilon_1, \epsilon_2, \ldots, \epsilon_p$. We now show how to calculate the moments directly.
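Before doing so, the identity $H_{n,s}=P_nT_n$ recalled in Remark 3 can be sanity-checked numerically. A minimal sketch (illustrative only), assuming the standard convention that $P_n$ is the backward-identity (flip) permutation matrix with entries $(P_n)_{jk}=\delta_{j,\,n+1-k}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
t = {i: rng.standard_normal() for i in range(-(n - 1), n)}  # input sequence (t_i)

# Toeplitz matrix T[j, k] = t_{j-k} and the flip matrix P (backward identity).
T = np.array([[t[j - k] for k in range(n)] for j in range(n)])
P = np.fliplr(np.eye(n))

H = P @ T  # H[j, k] = t_{n-1-j-k}: entries depend only on j + k.

assert np.allclose(H, H.T)          # symmetric
for d in range(2 * n - 1):          # constant anti-diagonals, i.e. Hankel
    vals = [H[j, d - j] for j in range(n) if 0 <= d - j < n]
    assert np.allclose(vals, vals[0])
print("P @ T is a symmetric Hankel matrix")
```

Since $(P_nT_n)_{jk}=t_{n+1-j-k}$ depends only on $j+k$, the product is symmetric Hankel, as the remark asserts.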
Suppose $\{H_{n,s}^{(i)}; 1\leq i \leq m \}$ are $m$ independent copies of symmetric Hankel matrices whose input entries satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}. Then for any $\epsilon_i\in \{1,*\}$ and $\tau_i\in \{1,\ldots, m\}$, $$\begin{aligned} \label{eqn:lim_mome_A1_HJc} &\lim_{n\to \infty}\varphi_n(\dfrac{H_{n,s}^{(\tau_1)\epsilon_1}}{n^{1/2}}\cdots \dfrac{H_{n,s}^{(\tau_{p})\epsilon_{p}}}{n^{1/2}}) \nonumber \\ & =\left\{\begin{array}{lll} \displaystyle \sum_{\pi\in \mathcal P_{2}(2k)} \int_0^1\int_{[-1,1]^k} \prod_{(r,s)\in \pi} \delta_{\tau_r,\tau_s} \theta^{(H)}_{r,s}(\epsilon,\rho,z) \prod_{t=1}^{2k} m_{\chi,t}(z) \prod_{i=0}^kdz_i & \mbox{ if } p=2k, \\ 0 & \mbox{ if } p=2k+1, \end{array}\right. \end{aligned}$$ where $\theta^{(H)}_{r,s}(\epsilon,\rho,z)$ is as in ([\[eqn:EHs_ars_epsilon_Hs\]](#eqn:EHs_ars_epsilon_Hs){reference-type="ref" reference="eqn:EHs_ars_epsilon_Hs"}) and for $t=1,2, \ldots,2k$, $m_{\chi,t}(z) = \chi_{[0,1]}\big(z_0+ \sum_{\ell=t}^{2k} (-1)^{\ell} \eta_{\pi}(\ell) z_{\pi'(\ell)}\big)$ with $\eta_{\pi}(\ell)$ as in ([\[eqn:eta_pi_Hns\]](#eqn:eta_pi_Hns){reference-type="ref" reference="eqn:eta_pi_Hns"}). 
To prove ([\[eqn:lim_mome_A1_HJc\]](#eqn:lim_mome_A1_HJc){reference-type="ref" reference="eqn:lim_mome_A1_HJc"}), first note the following trace formula for symmetric Hankel matrices: Let $H_{n,s}^{(1)}, \ldots, H_{n,s}^{(m)}$ be symmetric Hankel matrices with input sequences $(a_i^{(1)})_{i\in \mathbb{Z}}, \ldots, (a_i^{(m)})_{i\in \mathbb{Z}}$, respectively. Then for $\tau_1,\ldots, \tau_p\in \{1,\ldots, m\}$ and $\epsilon_1,\ldots, \epsilon_p\in \{1,*\}$, we have $$\begin{aligned} %\label{eqn:trace_Hs_Hs} & {\mbox{Tr}}\big(H_{n,s}^{(\tau_1)\epsilon_1}\cdots H_{n,s}^{(\tau_p)\epsilon_p}\big) \nonumber \\ & \quad = \left\{ \begin{array}{ll} \displaystyle{ \sum_{j=1}^n \sum_{I_p} \prod_{t=1}^p a^{(\tau_t)\epsilon_t}_{i_t} \prod_{t=1}^p \chi_{[1,n]}\big(j+ \sum_{\ell=t}^p (-1)^{p-\ell}i_\ell\big) \delta_{0,\sum_{t=1}^p(-1)^{p-t}i_t}} & \mbox{ if $p$ is even}, \\ \displaystyle{\sum_{j=1}^n \sum_{I_p} \prod_{t=1}^p a^{(\tau_t)\epsilon_t}_{i_t} \prod_{t=1}^p \chi_{[1,n]} \big(j+ \sum_{\ell=t}^p (-1)^{p-\ell}i_\ell\big) \delta_{2j-1-n,\sum_{t=1}^p(-1)^{p-t}i_t}} & \mbox{ if $p$ is odd}, \end{array} \right. \end{aligned}$$ where $I_p$ is as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}). We skip the proof of the above trace formula, which uses the same ideas as in the proofs of Lemmas [Lemma 1](#lem:tracetoeplitz){reference-type="ref" reference="lem:tracetoeplitz"} and [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"}. The proof of ([\[eqn:lim_mome_A1_HJc\]](#eqn:lim_mome_A1_HJc){reference-type="ref" reference="eqn:lim_mome_A1_HJc"}) is along the lines of the proof of Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"}. For simplicity of notation, we only consider a single matrix. The same idea also works for $m$ matrices. Note that the odd moments are zero. So let $p=2k$. As before, only the pair-partitions will contribute.
Hence from the above trace formula, we have $$\begin{aligned} \label{eqn:EHs_2k_Hs} \lim_{n \to \infty} \varphi_n( \frac{H_{n,s}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{H_{n,s}^{\epsilon_{2k}}}{\sqrt{n}}) &=\lim_{n \to \infty}\frac{1}{n^{k+1}}\sum_{j=1}^{n}\sum_{I_{2k}^{''}} \mbox{\bf E}[\prod_{t=1}^{2k} a^{\epsilon_t}_{i_t}] \prod_{t=1}^{2k} \chi_{[1,n]} \big(j+ \sum_{\ell=t}^{2k} (-1)^{\ell}i_\ell\big) \nonumber \\ &= \hskip-5pt \sum_{\pi\in \mathcal P_{2}(2k)}\lim_{n\to \infty}\frac{1}{n^{k+1}}\sum_{j=1}^{n}\sum_{I_{2k}^{''}(\pi)}\prod_{(r,s)\in \pi}\mbox{\bf E}[a^{\epsilon_r}_{i_r} a^{\epsilon_s}_{i_s}] \prod_{t=1}^{2k} \chi_{[1,n]}\big(j+ \sum_{\ell=t}^{2k} (-1)^{\ell}i_\ell\big), \end{aligned}$$ where $I^{''}_{2k}(\pi)=\{(i_1,\ldots, i_{2k})\in I_{2k}^{''} \; :\; \prod_{(r,s)\in \pi}\mbox{\bf E}[a^{\epsilon_r}_{i_r} a^{\epsilon_s}_{i_s}]\neq 0\}$ with $$\label{eqn:I'2k_Hs} I_{2k}^{''}=\{(i_1,\ldots, i_{2k})\in I_{2k}\; :\;\sum_{t=1}^{2k}(-1)^ti_t=0 \}.$$ Note that, similar to [\[eqn:reduction\]](#eqn:reduction){reference-type="eqref" reference="eqn:reduction"}, the number of free indices for $(i_1,\ldots, i_{2k})\in I_{2k}^{''}(\pi)$ will be $k$, if and only if for all $(r,s)\in \pi$ $$\begin{aligned} \nu_ri_r+\nu_si_s=0 , \mbox{ equivalently } i_r= \left\{\begin{array}{rl} i_s & \mbox{ if } \nu_r\nu_s=-1, \\ -i_s & \mbox{ if } \nu_r\nu_s=1, \end{array} \right. \end{aligned}$$ where $\nu_t$ is as defined in ([\[eqn:nu_t\_oddeven\]](#eqn:nu_t_oddeven){reference-type="ref" reference="eqn:nu_t_oddeven"}). 
Since the input entries of matrices satisfy Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, we have a non-zero contribution in the limit if and only if $$\begin{aligned} \label{eqn:EHs_ars_epsilon_Hs} & \mbox{\bf E}[a^{\epsilon_{r}}_{i_{r}} a^{\epsilon_{s}}_{i_{s}}] \nonumber \\ &= \begin{cases} (\sigma_x^2+\sigma_y^2) (1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) + [\sigma_x^2-\sigma_y^2+\epsilon'_{r} 2\mathrm{i} \rho_1 ] \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=i_{s}>0, \\ (\sigma_x^2+\sigma_y^2) (1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) + [\sigma_x^2-\sigma_y^2+\epsilon'_{r} 2\mathrm{i} \rho_6 ] \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=i_{s}<0, \\ [\rho_2+\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4-\rho_3) ](1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) + [\rho_2-\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4+\rho_3) ] \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=-i_{s}>0, \\ [\rho_2+\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4-\rho_3) ](1-\delta_{\epsilon'_{r}, \epsilon'_{s} }) + [\rho_2-\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4+\rho_3) ] \delta_{\epsilon'_{r}, \epsilon'_{s} } &\mbox{ if } i_{r}=-i_{s}<0, \\ \end{cases} \nonumber \\ %&= \big[(\rho_2-\rho_5) +i \e'_{r_t}(\rho_3+\rho_4)\big]^{\d_{\e'_{r_{t}} \e'_{s_{t}} }} \nonumber \\ & = \theta^{(H)}_{r,s}(\epsilon,\rho,i), \mbox{ say}. \end{aligned}$$ Recall, for $\pi=(r_1,s_1)\cdots (r_k,s_k)$, we have $\pi'(r_t)=\pi'(s_t)=t$. 
Define $$\label{eqn:eta_pi_Hns} \mbox{$\eta_{\pi}(r_t)=1$ and $\eta_{\pi}(s_t)=(-1)^{\delta_{\nu_{r_t}, \nu_{s_t}}}$}.$$ Also for $t=1,2, \ldots,2k$, define $m_{\chi,t}(z) = \chi_{[0,1]}\big(z_0+ \sum_{\ell=t}^{2k} (-1)^{\ell} \eta_{\pi}(\ell) z_{\pi'(\ell)}\big).$ Now using [\[eqn:EHs_ars_epsilon_Hs\]](#eqn:EHs_ars_epsilon_Hs){reference-type="eqref" reference="eqn:EHs_ars_epsilon_Hs"} in ([\[eqn:EHs_2k_Hs\]](#eqn:EHs_2k_Hs){reference-type="ref" reference="eqn:EHs_2k_Hs"}), we get $$\begin{aligned} \lim_{n \to \infty} \varphi_n( \frac{H_{n,s}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{H_{n,s}^{\epsilon_{2k}}}{\sqrt{n}}) = \sum_{\pi\in \mathcal P_{2}(2k)} \int_0^1\int_{[-1,1]^k} \prod_{(r,s)\in \pi} \theta^{(H)}_{r,s}(\epsilon,\rho,z) \prod_{t=1}^{2k} m_{\chi,t}(z) \prod_{i=0}^kdz_i. \end{aligned}$$ This establishes the limit for a single matrix. The following corollary can be concluded from Remark [Remark 3](#cor:com_HJc){reference-type="ref" reference="cor:com_HJc"}. **Corollary 5**. *(a) [@bose_saha_patter_JC_annals Proposition 1] Let $\{H^{(i)}_{n,s}; 1 \leq i \leq m\}$ be independent copies of random symmetric Hankel matrices with *real* input sequence $\{a^{(i)}_{j}\}$ which satisfies Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}. Then $\{n^{-1/2}H^{(i)}_{n,s}; 1 \leq i \leq m\}$ converges in $*$-distribution and the limit $*$-moments are as in ([\[eqn:lim_mome_A1_HJc\]](#eqn:lim_mome_A1_HJc){reference-type="ref" reference="eqn:lim_mome_A1_HJc"}) with $\theta^{(H)}_{r,s}(\epsilon,\rho,z)= \sigma_x^2 (1-\delta_{\nu_{r}, \nu_{s}}) + \rho_{2}\delta_{\nu_{r}, \nu_{s}}$. (b) A result similar to Corollary [Corollary 4](#cor:poly_Lsd_T){reference-type="ref" reference="cor:poly_Lsd_T"} also holds. In particular, if $\{a_{j}\}$ satisfies Assumption [Assumption 1](#assump:toeI){reference-type="ref" reference="assump:toeI"}, then the ESD of real symmetric $n^{-1/2}H_{n,s}$ converges a.s.
to a symmetric probability distribution whose moments are as in ([\[eqn:lim_mome_A1_HJc\]](#eqn:lim_mome_A1_HJc){reference-type="ref" reference="eqn:lim_mome_A1_HJc"}) with $\theta^{(H)}_{r,s}(\epsilon,\rho,z)= \sigma_x^2 (1-\delta_{\nu_{r}, \nu_{s}}) + \rho_{2}\delta_{\nu_{r}, \nu_{s}}$. The LSD results of [@bryc_lsd_06] and [@bose_sen_LSD_EJP] are then special cases via truncation arguments.* ## Final arguments in the proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"} {#subsec:jc_tdp} Having gained an insight through the study of the above special cases, it will be enough to explain the main steps in the proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"} when we consider all the matrices together. *Proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"}.* Let the input entries of $T^{(\tau)}_n$ and $D^{(\tau)}_n$ be $\{a^{(\tau)}_i\}$ and $\{d^{(\tau)}_i\}$, respectively. Note that an arbitrary monomial from the collection $\{P_n, n^{-1/2}T_{n}^{(i)}, D_{n}^{(i)}; 1 \leq i \leq m\}$ looks like the following: $$\begin{aligned} &(P_n A_{n,\mu}^{(\tau_1) \epsilon_1} \cdots A_{n,\mu}^{(\tau_{k_1}) \epsilon_{k_1}} ) (P_n A_{n,\mu}^{(\tau_{k_1+1}) \epsilon_{k_1+1}} \cdots A_{n,\mu}^{(\tau_{k_2}) \epsilon_{k_2}} ) \cdots (P_n A_{n,\mu}^{(\tau_{k_{p-1}+1}) \epsilon_{k_{p-1}+1}} \cdots A_{n,\mu}^{(\tau_{k_p}) \epsilon_{k_p}} ) \nonumber \\ & = q_{k_p}(P,D,T), \mbox{ say},\end{aligned}$$ where $\epsilon_i \in \{1, *\}$, $\tau_i \in \{1,2, \ldots, m\}$ and for $\mu \in \{1,2\}$, $A^{(\tau_i) \epsilon_i}_{n,1}=n^{-1/2}T^{(\tau_i) \epsilon_i}_n$, $A^{(\tau_i) \epsilon_i}_{n,2}=D^{(\tau_i) \epsilon_i}_n$. First note from the proof of Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"} that if $p$ is odd, then $\varphi_n \big(q_{k_p}(P,D,T)\big)=o(1)$. So let $p$ be even. 
Then from Lemma [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"}, we have $$\begin{aligned} \varphi_n \big(q_{k_p}(P,D,T)\big) = \sum_{I_{k_p}} \mbox{\bf E}\big[\prod_{t=1}^{k_p} z^{(\tau_t)\epsilon_t}_{i_t} \big] \frac{1}{n^{1+\frac{w_p}{2} }} \sum_{j=1}^n \prod_{e=1}^p m_{\chi,k_e}(j,i) \delta_{0,\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t},\end{aligned}$$ where $z^{(\tau_t)\epsilon_t}_{i_t}= a^{(\tau_t)\epsilon_t}_{i_t}$ or $d^{(\tau_t)\epsilon_t}_{i_t}$ depending on whether the matrix $A_{n,\mu}^{(\tau_t)\epsilon_t}$ is $T_n^{(\tau_t)\epsilon_t}$ or $D_n^{(\tau_t)\epsilon_t}$; $I_{k_p}$ and $m_{\chi,k_e}$ are as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and ([\[eqn:m_chi,t_Hn\]](#eqn:m_chi,t_Hn){reference-type="ref" reference="eqn:m_chi,t_Hn"}), respectively; $$\begin{aligned} \label{eqn:no of T in Q} w_p= \# \{ \mu : A_{n,\mu}^{(\tau_t) \epsilon_t} = A_{n,1}^{(\tau_t) \epsilon_t} \mbox{ in } q_{k_p}(P,D,T)\},\end{aligned}$$ which is the number of random Toeplitz matrices in $q_{k_p}(P,D,T)$. Observe that if $w_p$ is odd, then by using arguments similar to those used in establishing ([\[eqn:phi_To(1)odd\]](#eqn:phi_To(1)odd){reference-type="ref" reference="eqn:phi_To(1)odd"}) and Proposition [Proposition 3](#thm:jc_T+D){reference-type="ref" reference="thm:jc_T+D"}, we get $\varphi_n \big(q_{k_p}(P,D,T)\big)=o(1)$. Now for $c=1,2, \ldots, p$, let $u_{r_{c-1}}, u_{r_{c-1}+1}, \ldots, u_{r_{c}}$ be the positions of $D_n$ between $A_{n,\mu}^{(\tau_{k_{c-1}+1}) \epsilon_{k_{c-1}+1}}$ and $A_{n,\mu}^{(\tau_{k_c}) \epsilon_{k_c}}$ (including these two also) and let $v_{w_{c-1}}, v_{w_{c-1}+1}, \ldots, v_{w_{c}}$ be the positions of $T_n$ between $A_{n,\mu}^{(\tau_{k_{c-1}+1}) \epsilon_{k_{c-1}+1}}$ and $A_{n,\mu}^{(\tau_{k_c}) \epsilon_{k_c}}$. Here $r_0=k_0=w_0=1$ and $r_p+w_p=k_p$. 
Using arguments similar to those used while establishing ([\[eqn:chi_D\_remove\]](#eqn:chi_D_remove){reference-type="ref" reference="eqn:chi_D_remove"}), we have $$\begin{aligned} &\frac{1}{n} \sum_{j=1}^n m_{\chi, k_e}(j,i) \\ &= \frac{1}{n} \sum_{j=1}^n \prod_{\ell=k_{e-1}+1}^{k_e} \chi_{[1,n]}\big(j+ (-1)^{p-e} \sum_{t=\ell}^{k_e} \epsilon_t' i_t + \sum_{c=e+1}^{p} (-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_c} \epsilon_t' i_t \big) \\ &= \frac{1}{n} \sum_{j=1}^n \prod_{\ell={w_{e-1}}+1}^{{w_e}} \chi_{[1,n]}\big(j+ (-1)^{p-e} \sum_{t=\ell}^{{w_e}} \epsilon_{v_t}' i_{v_t} + \sum_{c=e+1}^{p} (-1)^{p-c} \sum_{t={w_{c-1}}+1}^{{w_c}} \epsilon_{v_t}' i_{v_t} \big) +o(1)\\ &= \frac{1}{n} \sum_{j=1}^n m_{\chi, w_e}(j,i) +o(1). %, \mbox{ say}.\end{aligned}$$ Therefore, using arguments similar to those used in the proof of Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"}, $$\begin{aligned} %\label{eqn:lim_phi(PTD)} & \lim_{n \to \infty} \varphi_n \big(q_{k_p}(P,D,T)\big) \nonumber\\ & = \sum_{i_{u_{r_0}}, \ldots, i_{u_{r_p}}=-\infty}^{\infty} \prod_{t=1}^{r_p} d^{(\tau_{u_t}) \epsilon_{u_t}}_{i_{u_t}} \delta_{0,\sum_{c=1}^{p}(-1)^c \sum_{t=r_{c-1}+1}^{r_c} \epsilon'_{u_t} i_{u_t}} \nonumber\\ & \times \lim_{n \to \infty} \frac{1}{n^{\frac{w_p}{2}}} \sum_{i_{v_{w_0}}, \ldots, i_{v_{w_p}}=-(n-1)}^{n-1} \mbox{\bf E}\big[ \prod_{t=1}^{w_p} a^{(\tau_{v_t}) \epsilon_{v_t}}_{i_{v_t}} \big] \frac{1}{n} \sum_{j=1}^n \prod_{\ell=1}^{p} m_{\chi,w_e}(j,i) \delta_{0,\sum_{c=1}^{p}(-1)^c \sum_{t=w_{c-1}+1}^{w_c} \epsilon'_{v_t} i_{v_t}} \nonumber \\ & = \lim_{n \to \infty} \varphi_n \big(q_{r_p}(P,D)\big) \times \lim_{n \to \infty} \varphi_n \big(q_{w_p}(P,T)\big),\end{aligned}$$ where $$\begin{aligned} q_{r_p}(P,D) = \Big( (P_n D_{n}^{(\tau_{u_1})\epsilon_{u_1}} \cdots D_{n}^{(\tau_{u_{r_1}})\epsilon_{u_{r_1}}}) \cdots (P_n D_{n}^{(\tau_{u_{r_{p-1}}+1})\epsilon_{u_{r_{p-1}}+1}} \cdots D_{n}^{(\tau_{u_{r_p}})\epsilon_{u_{r_p}}}) \Big), \\ q_{w_p}(P,T) = \Big( 
(P_n \frac{T_{n}^{(\tau_{v_1})\epsilon_{v_1}}}{\sqrt{n}} \cdots \frac{T_{n}^{(\tau_{v_{w_1}})\epsilon_{v_{w_1}}}}{\sqrt{n}}) \cdots (P_n \frac{T_{n}^{(\tau_{v_{w_{p-1}}+1})\epsilon_{v_{w_{p-1}}+1}}} {\sqrt{n}} \cdots \frac{T_{n}^{(\tau_{v_{w_p}})\epsilon_{v_{w_p}}}}{\sqrt{n}} ) \Big).\end{aligned}$$ Here $\lim_{n \to \infty} \varphi_n \big(q_{r_p}(P,D)\big)$ and $\lim_{n \to \infty} \varphi_n \big(q_{w_p}(P,T)\big)$ are as in ([\[eqn:lim_phi_P,D\*\]](#eqn:lim_phi_P,D*){reference-type="ref" reference="eqn:lim_phi_P,D*"}) and ([\[eqn:lim_phi(Te\*,P)\]](#eqn:lim_phi(Te*,P)){reference-type="ref" reference="eqn:lim_phi(Te*,P)"}), respectively. Hence $$\begin{aligned} %\label{eqn:lim_phi(PTD)_fin} \lim_{n \to \infty} \varphi_n \big(q_{k_p}(P,D,T)\big) \nonumber & = \left\{\begin{array}{lll} \displaystyle \lim_{n \to \infty} \varphi_n \big(q_{r_p}(P,D)\big) \times \lim_{n \to \infty} \varphi_n \big(q_{w_p}(P,T)\big) & \mbox{ if $p, w_p$ are even}, \\ 0& \mbox{ otherwise}, \end{array}\right.\end{aligned}$$ where $r_p=k_p-w_p$ with $w_p$ is as in ([\[eqn:no of T in Q\]](#eqn:no of T in Q){reference-type="ref" reference="eqn:no of T in Q"}). This completes the proof of Theorem [Theorem 1](#thm:JC_tdp){reference-type="ref" reference="thm:JC_tdp"}. ◻ # Proof of Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"} {#sec:Jc_tdp_g} As before, we break the proof into several steps. In Sections [3.1](#sec:T_n,g){reference-type="ref" reference="sec:T_n,g"} to [3.4](#subsec:tp_g){reference-type="ref" reference="subsec:tp_g"}, we establish the joint convergence of $T_{n,g}$, $\{D_{n,g},P_n\}$, $\{D_{n,g},T_{n,g}\}$ and $\{T_{n,g},P_n\}$, respectively. Finally, in Section [3.5](#subsec:Tdp,g){reference-type="ref" reference="subsec:Tdp,g"} we show how to conclude the joint convergence of $T_{n,g}, D_{n,g}$ and $P_n$ in Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"}. 
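Before turning to the generalized matrices, note that the splitting of $q_{k_p}(P,D,T)$ into $q_{r_p}(P,D)$ and $q_{w_p}(P,T)$ used in the proof of Theorem 1 above is elementary bookkeeping on words: each $P_n$ stays in both subwords, each $D_n$ goes to the $(P,D)$-subword, and each $T_n$ to the $(P,T)$-subword. A toy Python sketch (the word encoding is ours, purely illustrative):

```python
def split_monomial(word):
    """Split a word such as ['P','T','D','P','D','T','T'] into its
    (P,D)-subword and (P,T)-subword, keeping every P in both."""
    d_word, t_word = [], []
    for letter in word:
        if letter == 'P':
            d_word.append('P')
            t_word.append('P')
        elif letter == 'D':
            d_word.append('D')
        elif letter == 'T':
            t_word.append('T')
    return d_word, t_word

d_word, t_word = split_monomial(['P', 'T', 'D', 'P', 'D', 'T', 'T'])
print(d_word)  # -> ['P', 'D', 'P', 'D']
print(t_word)  # -> ['P', 'T', 'P', 'T', 'T']
```

In the example, $k_p=5$, $r_p=2$ and $w_p=3$, consistent with $r_p+w_p=k_p$.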
## Joint Convergence of $T_{n,g}$ {#sec:T_n,g} Recall the generalized Toeplitz matrix $T_{n,g}$ from ([\[def:T_gen\]](#def:T_gen){reference-type="ref" reference="def:T_gen"}). Then we have the following result. **Proposition 5**. *Suppose $\{T^{(i)}_{n,g}; 1 \leq i \leq m \}$ are $m$ independent copies of generalized Toeplitz matrices and the input sequences $(a_i)_{i\in \mathbb{Z}}, (b_i)_{i\in \mathbb{Z}}$ satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}. Then, for $\epsilon_1,\ldots, \epsilon_{p}\in \{1,*\}$ and $\tau_1,\ldots, \tau_{p}\in \{1,\ldots, m\}$, $$\begin{aligned} %\label{eqn:lim_mome_A2_Tgen_com} \lim_{n\to \infty}\varphi_n(\dfrac{T^{(\tau_1) \epsilon_1}_{n,g}}{n^{1/2}}\cdots \dfrac{T^{(\tau_p)\epsilon_p}_{n,g}}{n^{1/2}}) \nonumber %&= \sum_{\pi \in {\mathcal P}_2(2k)} \prod_{(r,s)\in \pi} \theta(r,s) \int_{0}^1\int_{[-1,1]^k}\prod_{\ell=1}^{2k}\chi_{[0,1]}\big(z_0+\sum_{t=\ell}^{2k}\e_{\pi}(t) z_{\pi'(t)}\big)\prod_{i=0}^k dz_i.%dx_0dx_1\cdots dx_k. & =\left\{\begin{array}{lll} \displaystyle \hskip-5pt \sum_{\pi\in \mathcal P_2(2k)}\int_{0}^1\int_{[-1,1]^k}\prod_{(r,s)\in \pi} \hskip-4pt \delta_{\tau_r,\tau_s} \mathcal E_{r,s}'({\underline z_{2k}}) \prod_{i=0}^k dz_i & \mbox{ if } p=2k, \\ 0 & \mbox{ if } p=2k+1, \end{array}\right. \end{aligned}$$ where $\mathcal E_{r,s}'({\underline z_{2k}})$ is as given in ([\[eqn:mathE\'(z2k)\_T\]](#eqn:mathE'(z2k)_T){reference-type="ref" reference="eqn:mathE'(z2k)_T"}).* Towards the proof of Proposition [Proposition 5](#thm:generalT_com){reference-type="ref" reference="thm:generalT_com"}, we first derive a trace formula for the product of generalized Toeplitz matrices. **Lemma 3**. *Suppose $\{T^{(\tau)}_{n,g}; 1 \leq \tau \leq m\}$ are $m$ copies of generalized Toeplitz matrices with input sequence $(a^{(\tau)}_i)_{i\in \mathbb{Z}}$ and $(b^{(\tau)}_i)_{i\in \mathbb{Z}}$. 
Then for $\epsilon_1,\ldots, \epsilon_{p}\in \{1,*\}$ and $\tau_1,\ldots, \tau_{p}\in \{1,\ldots, m\}$, $$\begin{aligned} {\mbox{Tr}}[T_{n,g}^{(\tau_1) \epsilon_1} \cdots T_{n,g}^{(\tau_p) \epsilon_p}] =\sum_{j=1}^n\sum_{I_p'} \prod_{t=1}^p(a^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}+b^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}) (m_t), \end{aligned}$$ where $I'_p$ is as in ([\[iprime\]](#iprime){reference-type="ref" reference="iprime"}); $\epsilon'_t$ is as in ([\[eqn:epsilon\'\]](#eqn:epsilon'){reference-type="ref" reference="eqn:epsilon'"}); for any $i \in \mathbb{Z}$, ${\mathcal A}_i, {\mathcal B}_i$ are as in ([\[eqn:AA_i,BB_i\]](#eqn:AA_i,BB_i){reference-type="ref" reference="eqn:AA_i,BB_i"}), and for $t=1,2, \ldots, (p-1),$ $$\label{eqn:m_t_toetrace} m_t= j+\sum_{\ell=t}^p\epsilon_\ell'i_\ell \mbox{ with } m_p=j.$$ In this section, for any given set $A$, ${\mathbf 1}_{A}$ denotes the indicator function, that is, ${\mathbf 1}_{A}(x) =1\;\;\mbox{if $x \in A$ and zero otherwise}$.* *Proof.* We prove the lemma for a single matrix $T_{n,g}$. The same argument will also work for several matrices. Note from [\[eqn:2\]](#eqn:2){reference-type="eqref" reference="eqn:2"} that $$\begin{aligned} T_{n,g}e_j=\sum_{i=-(n-1)}^{(n-1)}t_i{\mathbf 1}_{[1,n]}{(j+i)}e_{j+i}=\sum_{i=-(n-1)}^{(n-1)}t_i{\mathbf 1}_{A_i}{(j)}e_{j+i}, \end{aligned}$$ where $A_i=[\max\{1, 1-i\}, \min\{n-i,n\}]$. This means that $t_i$, for $-(n-1)\le i\le (n-1)$, appears in the $j$-th column, $1\le j\le n$, iff $i+j\in [1,n]$. Observe that $a_{i}$ appears in the $j$-th column if $$\begin{aligned} \max\{1, 1-i\}\le j\le {\frac{1}{2}(n-i)}.
\end{aligned}$$ Similarly, $b_i$ appears in $j$-th column if ${\frac{1}{2}(n-i)}< j\le \min\{n-i,n\}.$ Now if we define intervals $$\label{eqn:AA_i,BB_i} \mathcal A_i=[\max\{1,1-i\}, \ \frac{1}{2}(n-i)], \ \mathcal B_i=(\frac{1}{2}(n-i),\ \min\{n-i,n\}],$$ then we have $$\begin{aligned} T_{n,g}e_j=\sum_{i=-(n-1)}^{n-1}(a_i{\mathbf 1}_{{\mathcal A}_{i}}+b_i{\mathbf 1}_{{\mathcal B}_i})(j)e_{j+i}. \end{aligned}$$ Similarly, we have $T_{n,g}^*e_j=\sum_{i}(a^*_i{\mathbf 1}_{{\mathcal A}_{-i}}+b^*_i{\mathbf 1}_{{\mathcal B}_{-i}})(j)e_{j-i}.$ Recall that $\epsilon'=1$ if $\epsilon=1$ and $\epsilon'=-1$ if $\epsilon=*$. Therefore, for $\epsilon\in \{1,*\}$, we can write $$\begin{aligned} %\label{eqn:T^e_ge_j} T_{n,g}^{\epsilon}e_j=\sum_{i}(a^{\epsilon}_i{\mathbf 1}_{{\mathcal A}_{\epsilon'i}}+b^{\epsilon}_i{\mathbf 1}_{{\mathcal B}_{\epsilon'i}})(j)e_{j+\epsilon'i}. \end{aligned}$$ Thus $$\begin{aligned} (T_{n,g}^{\epsilon_{p-1}}T_{n,g}^{\epsilon_p})e_j &=\sum_{i_p}(a^{\epsilon_p}_{i_p}{\mathbf 1}_{{\mathcal A}_{\epsilon_p'i_p}}+b^{\epsilon_p}_{i_p}{\mathbf 1}_{{\mathcal B}_{\epsilon_p'i_p}})(j)T_{n}^{\epsilon_{p-1}}e_{j+\epsilon_p'i_p} \\&=\sum_{i_p, i_{p-1}}\prod_{t=p-1}^p (a^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}+b^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}) (j+\sum_{\ell=t}^p\epsilon_\ell'i_\ell)e_{j+\sum_{t=p-1}^p\epsilon_t'i_t}, \end{aligned}$$ with $j+\sum_{\ell=p}^p\epsilon_\ell'i_\ell = j$. Continuing the process, for $\epsilon_1,\ldots, \epsilon_p\in \{1,*\}$, we get $$\begin{aligned} (T_{n,g}^{\epsilon_1} \cdots T_{n,g}^{\epsilon_p})e_j =\sum_{I_p} \prod_{t=1}^p(a^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}+b^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}})(j+\sum_{\ell=t}^p\epsilon_\ell'i_\ell)e_{j+\sum_{t=1}^p\epsilon_t'i_t}, \end{aligned}$$ where $I_p=\{(i_1,\ldots, i_p)\; :\; -(n-1)\le i_1,\ldots, i_p\le n-1\}$. 
Thus we have $$\begin{aligned} {\mbox{Tr}}(T_{n,g}^{\epsilon_1} \cdots T_{n,g}^{\epsilon_p}) =\sum_{j=1}^n e_j^t (T_{n,g}^{\epsilon_1} \cdots T_{n,g}^{\epsilon_p}) e_j =\sum_{j=1}^n\sum_{I_p'}\prod_{t=1}^p(a^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}+b^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}})(m_t), \end{aligned}$$ where $I'_p$ is as in ([\[iprime\]](#iprime){reference-type="ref" reference="iprime"}) and $m_t$ is as in ([\[eqn:m_t\_toetrace\]](#eqn:m_t_toetrace){reference-type="ref" reference="eqn:m_t_toetrace"}). Hence the result. ◻ Now using the above trace formula, we prove Proposition [Proposition 5](#thm:generalT_com){reference-type="ref" reference="thm:generalT_com"}. *Proof of Proposition [Proposition 5](#thm:generalT_com){reference-type="ref" reference="thm:generalT_com"}.* For simplicity of notation, we first consider a single matrix. By Lemma [Lemma 3](#lem:tracegeneralT){reference-type="ref" reference="lem:tracegeneralT"} we have $$\begin{aligned} \varphi_n( \frac{T_{n,g}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{T_{n,g}^{\epsilon_p}}{\sqrt{n}}) &=\frac{1}{n^{\frac{p}{2}+1}}\sum_{j=1}^n\sum_{I_{p}'}\mbox{\bf E}[\prod_{t=1}^{p}(a^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}(m_t)+b^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}(m_t))], %\\&=\sum_{\pi\in \mathcal P_2(2k)}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I_{2k}'}\prod_{t=1}^{p}(a_{i_t}\one_{\AA_{\e_t'i_t}}(j+s_t)+b_{i_t}\one_{\BB_{\e_t'i_t}}(j+s_t)). \end{aligned}$$ where $I'_p$ is as in ([\[iprime\]](#iprime){reference-type="ref" reference="iprime"}) and $m_t$ is as in ([\[eqn:m_t\_toetrace\]](#eqn:m_t_toetrace){reference-type="ref" reference="eqn:m_t_toetrace"}). By arguments similar to those used in the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}, we conclude that only pair-partitions will contribute to the limit. In particular, the limit is zero when $p$ is odd. Let $p=2k$.
Then we have $$\begin{aligned} \varphi_n( \frac{T_{n,g}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{T_{n,g}^{\epsilon_{2k}}}{\sqrt{n}}) &=\sum_{\pi\in \mathcal P_2(2k)}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I_{2k}'(\pi)}\prod_{(r,s)\in \pi} \mathcal E_{r,s}(\underline i_{2k})+o(1), \end{aligned}$$ where $\underline i_{2k}=(j,i_1,\ldots, i_{2k})$, $I_{2k}'(\pi)=\{(i_1,\ldots, i_{2k})\in I_{2k}' \; :\; \prod_{(r,s)\in \pi} \mathcal E_{r,s}(\underline i_{2k}) \neq 0\}$ and $$\begin{aligned} \mathcal E_{r,s}(\underline i_{2k})=\mbox{\bf E}\left[\left(a^{\epsilon_r}_{i_{r}}{\mathbf 1}_{{\mathcal A}_{\epsilon_r'i_r}}(m_r)+b^{\epsilon_r}_{i_r}{\mathbf 1}_{{\mathcal B}_{\epsilon_r'i_r}}(m_r)\right)\left(a^{\epsilon_s}_{i_{s}}{\mathbf 1}_{{\mathcal A}_{\epsilon_s'i_s}}(m_s)+b^{\epsilon_s}_{i_s}{\mathbf 1}_{{\mathcal B}_{\epsilon_s'i_s}}(m_s)\right)\right]. \end{aligned}$$ Note that we have the constraint $\sum_{(r,s)\in \pi}(\epsilon_r'i_r+\epsilon_s'i_s)=0$ on the indices. Now using arguments similar to those used in the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}, we have a non-zero contribution if and only if $(\epsilon_r'i_r+\epsilon_s'i_s)=0$ for each pair $(r,s)\in \pi$, which is equivalent to the following: $$\begin{aligned} i_{r}= \left\{\begin{array}{lll} i_{s} & \mbox{ if } & \epsilon_{r}'\epsilon_{s}'=-1,\\ -i_{s} & \mbox{ if } & \epsilon_{r}'\epsilon'_{s}=1. \end{array} \right. \end{aligned}$$ Since the input entries $\{(a_{j}= x_{j} + \mathrm{i} y_{j}, b_{j}= x'_{j}+ \mathrm{i} y'_{j}); j \in \mathbb{Z}\}$ are independent, we have a loss of a degree of freedom if $i_r=-i_s$. Hence a non-zero contribution is possible only when $i_r=i_s$, that is, $\epsilon_{r}'\epsilon_{s}'=-1$.
Since the input entries satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}, we have $$\begin{aligned} \mbox{\bf E}[a^{\epsilon_r}_{i_r}a^{\epsilon_s}_{i_s}] &= 1-\delta_{\epsilon_r,\epsilon_s},\; \mbox{\bf E}[b^{\epsilon_r}_{i_r}b^{\epsilon_s}_{i_s}]=1-\delta_{\epsilon_r,\epsilon_s}, \ \mbox{\bf E}[a^{\epsilon_r}_{i_r}b^{\epsilon_s}_{i_s}] = (1-\delta_{\epsilon_r,\epsilon_s}) [\rho_2+\rho_6 - \mathrm{i} \epsilon'_r(\rho_3-\rho_4)], \\ \mbox{\bf E}[a^{\epsilon_s}_{i_s}b^{\epsilon_r}_{i_r}] &= (1-\delta_{\epsilon_r,\epsilon_s}) [\rho_2+\rho_6 - \mathrm{i} \epsilon'_s(\rho_3-\rho_4)]. \end{aligned}$$ Therefore $$\begin{aligned} \mathcal E_{r,s}(\underline i_{2k}) =(1-\delta_{\epsilon_r,\epsilon_s}) \big[(f_1+f_4) + [\rho_2+\rho_6 - \mathrm{i}\epsilon'_r(\rho_3-\rho_4)]f_2 + [\rho_2+\rho_6 - \mathrm{i}\epsilon'_s(\rho_3-\rho_4)]f_3 \big], %\red{ \d_{\eta(i_r),i_s}}, \end{aligned}$$ where $$\begin{aligned} &f_1={\mathbf 1}_{{\mathcal A}_{\epsilon_r'i_r}}(m_r){\mathbf 1}_{{\mathcal A}_{\epsilon_s'i_s}}(m_s),\; f_2={\mathbf 1}_{{\mathcal A}_{\epsilon_r'i_r}}(m_r){\mathbf 1}_{{\mathcal B}_{\epsilon_s'i_s}}(m_s),\\& f_3={\mathbf 1}_{{\mathcal A}_{\epsilon_s'i_s}}(m_s){\mathbf 1}_{{\mathcal B}_{\epsilon_r'i_r}}(m_r),\; f_4={\mathbf 1}_{{\mathcal B}_{\epsilon_r'i_r}}(m_r){\mathbf 1}_{{\mathcal B}_{\epsilon_s'i_s}}(m_s). \end{aligned}$$ Observe that, $\mathcal E_{r,s}(\underline i_{2k})$ implies that the number of free indices among $\{i_1,\ldots, i_{2k}\}$ is $k$, as the indices satisfy the relation $i_r=i_s$. Let $m_t'=\frac{1}{n}(j+\sum_{\ell=t}^{2k}\epsilon_{\ell}'i_\ell)$. 
Then we have $$\begin{aligned} &f_1={\mathbf 1}_{{\mathcal A}_{\epsilon_r'\frac{i_r}{n}}}(m_r'){\mathbf 1}_{{\mathcal A}_{\epsilon_s'\frac{i_s}{n}}}(m_s'),\; f_2={\mathbf 1}_{{\mathcal A}_{\epsilon_r'\frac{i_r}{n}}}(m_r'){\mathbf 1}_{{\mathcal B}_{\epsilon_s'\frac{i_s}{n}}}(m_s'),\\& f_3={\mathbf 1}_{{\mathcal A}_{\epsilon_s'\frac{i_s}{n}}}(m_s'){\mathbf 1}_{{\mathcal B}_{\epsilon_r'\frac{i_r}{n}}}(m_r'),\; f_4={\mathbf 1}_{{\mathcal B}_{\epsilon_r'\frac{i_r}{n}}}(m_r'){\mathbf 1}_{{\mathcal B}_{\epsilon_s'\frac{i_s}{n}}}(m_s'). \end{aligned}$$ Also let $$\label{eqn:A_x,B_x} \tilde {\mathcal A}_z=[\max\{0,-z\}, \frac{1}{2}(1-z)], \ \tilde{\mathcal B}_z=[\frac{1}{2}(1-z), \min\{1-z,1\}].$$ Recall, if $\pi=(r_1,s_1)\cdots (r_k,s_k)$, then we have $\pi'(r_t)=\pi'(s_t)=t$. Define $w_t=z_0+\sum_{\ell=t}^{2k}\epsilon_\ell' z_{\pi'(\ell)}$ and $$\begin{aligned} %\label{eqn:f'1234_T} &f_1'={\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_s'z_{\pi'(s)}}}(w_s), \; f_2'={\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_s'z_{\pi'(s)}}}(w_s),\\ & f_3'={\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_s'z_{\pi'(s)}}}(w_s), \; f_4'={\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_s'z_{\pi'(s)}}}(w_s). 
\nonumber \end{aligned}$$ Then by Riemann integration, we get $$\begin{aligned} \lim_{n \to \infty}\varphi_n( \frac{T_{n,g}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{T_{n,g}^{\epsilon_{2k}}}{\sqrt{n}}) &=\int_{z_0=0}^1 \sum_{\pi\in \mathcal P_2(2k)} \int_{[-1,1]^k}\prod_{(r,s)\in \pi} \mathcal E_{r,s}'({\underline z_{2k}})dz_0\cdots dz_k, \end{aligned}$$ where $\underline z_{2k}=(z_0,z_1,\ldots,z_{2k})$ and $$\begin{aligned} \label{eqn:mathE'(z2k)_T} \mathcal E_{r,s}'(\underline z_{2k}) = (1-\delta_{\epsilon_r,\epsilon_s}) \big[(f'_1+f'_4) + [\rho_2+\rho_6 - \mathrm{i}\epsilon'_r(\rho_3-\rho_4)]f'_2 + [\rho_2+\rho_6 - \mathrm{i}\epsilon'_s(\rho_3-\rho_4)]f'_3 \big]. \end{aligned}$$ Now we look at the convergence for $m$ independent matrices. Note from Lemma [Lemma 3](#lem:tracegeneralT){reference-type="ref" reference="lem:tracegeneralT"} that $$\begin{aligned} \varphi_n(\dfrac{T^{(\tau_1) \epsilon_1}_{n,g}}{n^{1/2}}\cdots \dfrac{T^{(\tau_p)\epsilon_p}_{n,g}}{n^{1/2}}) &=\frac{1}{n^{\frac{p}{2}+1}}\sum_{j=1}^n\sum_{I_{p}'}\mbox{\bf E}[\prod_{t=1}^{p}(a^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}(m_t) +b^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}(m_t))]. %\\&=\sum_{\pi\in \mathcal P_2(2k)}\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I_{2k}'}\prod_{t=1}^{p}(a_{i_t}\one_{\AA_{\e_t'i_t}}(j+s_t)+b_{i_t}\one_{\BB_{\e_t'i_t}}(j+s_t)).\end{aligned}$$ Also here, if $p$ is odd, then the limit will be zero and for even $p$, say $2k$, we have $$\lim_{n\to \infty}\varphi_n(\dfrac{T^{(\tau_1) \epsilon_1}_{n,g}}{n^{1/2}}\cdots \dfrac{T^{(\tau_{2k})\epsilon_{2k}}_{n,g}}{n^{1/2}}) = \int_{z_0=0}^1 \sum_{\pi\in \mathcal P_2(2k)} \int_{[-1,1]^k}\prod_{(r,s)\in \pi} \delta_{\tau_r,\tau_s} \mathcal E_{r,s}'({\underline z_{2k}}) \prod_{i=0}^k dz_i,$$ where $\mathcal E_{r,s}'({\underline z_{2k}})$ is as in ([\[eqn:mathE\'(z2k)\_T\]](#eqn:mathE'(z2k)_T){reference-type="ref" reference="eqn:mathE'(z2k)_T"}). This completes the proof. 
◻ ## Joint Convergence of $D_{n,g}$ and $P_{n}$ {#subsec:dp_g} **Proposition 6**. *Suppose $\{D_{n,g}^{(\tau)}; 1\leq \tau \leq m\}$ are $m$ deterministic generalized Toeplitz matrices with input sequences $(d^{'(\tau)}_i)_{i\in \mathbb{Z}}, (d^{''(\tau)}_i)_{i\in \mathbb{Z}}$ which satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}. Then $\{P_n, D_{n,g}^{(\tau)}; 1 \leq \tau \leq m\}$ converge jointly. The limit $*$-moments are given in ([\[eqn:lim_phi_P,D\*\_g\]](#eqn:lim_phi_P,D*_g){reference-type="ref" reference="eqn:lim_phi_P,D*_g"}).* First, we establish a trace formula for a monomial in $P_n$ and generalized Toeplitz matrices. **Lemma 4**. *For $\tau=1,2, \ldots, m$, let $M^{(\tau)}_{n,g}$ be generalized Toeplitz matrices (random or non-random) with input sequences $(a^{(\tau)}_i)_{i\in \mathbb{Z}}, (b^{(\tau)}_i)_{i\in \mathbb{Z}}$. Then for $\epsilon_i\in \{1,*\}$ and $\tau_i\in \{1, \ldots,m\}$, we have $$\begin{aligned} % \label{eqn:trace_Hs_g} %\Tr\big[(P_nM_{n,g}^{\e_1} \cdots M_{n}^{\e_{k_1}}) (P_nM_{n}^{\e_{k_1+1}} \cdots M_{n}^{\e_{k_2}}) \cdots (P_nM_{n}^{\e_{k_{p-1}+1}} \cdots M_{n}^{\e_{k_p}})\big] &{\mbox{Tr}}\big[(P_nM_{n,g}^{(\tau_1)\epsilon_1} \cdots M_{n,g}^{(\tau_{k_1}) \epsilon_{k_1}}) %(P_nM_{n}^{(\tau_{k_1+1})\e_{k_1+1}} \cdots M_{n}^{(\tau_{k_2})\e_{k_2}}) \cdots (P_nM_{n,g}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots M_{n,g}^{(\tau_{k_p})\epsilon_{k_p}})\big] \nonumber \\ & = \left\{\begin{array}{l} \displaystyle{\sum_{j=1}^n \sum_{I_{k_p}} \prod_{e=1}^p \prod_{t=k_{e-1}+1}^{k_e} (a^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}+b^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}) (m_{t,k_e}) \delta_{0,\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}} \\ \hfill{\mbox{ if $p$ is even,}} \\ \displaystyle{\hskip-5pt \sum_{j=1}^n \sum_{I_{k_p}} \prod_{e=1}^p \prod_{t=k_{e-1}+1}^{k_e} (a^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 
1}_{{\mathcal A}_{\epsilon_t'i_t}}+b^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}) (m_{t,k_e}) \delta_{2j-1-n,\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}} \\ \hfill\mbox{ if $p$ is odd,} \end{array} \right. \end{aligned}$$ where $I_{k_p}$ is as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and, for $e=1,2, \ldots,p$ and $k_{e-1}+1 \leq t \leq k_e$, $$\begin{aligned} \label{eqn:m_chi,t_Hn_g} m_{t,k_e} =j+ (-1)^{p-e} \sum_{\ell=t}^{k_e} \epsilon'_\ell i_\ell + \sum_{c=e+1}^{p} (-1)^{p-c} \sum_{\ell=k_{c-1}+1}^{k_c} \epsilon'_\ell i_\ell. \end{aligned}$$* We skip the proof of this lemma, but note that a proof can be fashioned out of the ideas used in the proofs of Lemmas [Lemma 2](#lem:hankel){reference-type="ref" reference="lem:hankel"} and [Lemma 3](#lem:tracegeneralT){reference-type="ref" reference="lem:tracegeneralT"}. *Proof of Proposition [Proposition 6](#thm:jc_D+P_g){reference-type="ref" reference="thm:jc_D+P_g"}.* We skip the detailed proof, but briefly justify the values of the limits; see also the ideas used in the proof of Proposition [Proposition 2](#thm:jc_D+P){reference-type="ref" reference="thm:jc_D+P"}.
From arguments similar to those used in the proof of ([\[eqn:lim_D\*\_n\^p\]](#eqn:lim_D*_n^p){reference-type="ref" reference="eqn:lim_D*_n^p"}), using the trace formula from Lemma [Lemma 3](#lem:tracegeneralT){reference-type="ref" reference="lem:tracegeneralT"}, we can show that for $\epsilon_i \in \{1,*\}$ and $\tau_i \in \{1, \ldots,m\}$, $$\begin{aligned} \label{eqn:lim_D*_n^p_g} &\lim_{n \to \infty} \varphi_n( D^{(\tau_1)\epsilon_1}_{n,g} \cdots D^{(\tau_p)\epsilon_p}_{n,g}) \nonumber\\ &= \sum_{i_1, \ldots, i_{p}=-\infty }^{\infty} \int_{z_0=0}^{1} \prod_{t=1}^p (d^{'(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{[0,1/2]}(z_0)+d^{''(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{(1/2,1]} (z_0)) \delta_{0,\sum_{t=1}^p \epsilon'_ti_t} dz_0 \nonumber\\ &= \sum_{i_1, \ldots, i_{p}=-\infty }^{\infty} \prod_{t=1}^p (\frac{1}{2} d^{'(\tau_t)\epsilon_t}_{i_t}+ \frac{1}{2} d^{''(\tau_t)\epsilon_t}_{i_t}) \delta_{0,\sum_{t=1}^p \epsilon'_ti_t}. \end{aligned}$$ Now from arguments similar to those used in establishing ([\[eqn:lim_phi_P,D\*\]](#eqn:lim_phi_P,D*){reference-type="ref" reference="eqn:lim_phi_P,D*"}), for an arbitrary monomial from the collection $\{P_n, D_{n,g}^{(\tau)}; 1 \leq \tau \leq m\}$, using Lemma [Lemma 4](#lem:hankel_g){reference-type="ref" reference="lem:hankel_g"}, we have $$\begin{aligned} \label{eqn:lim_phi_P,D*_g} & \lim_{n \to \infty} \varphi_n \big[(P_nD_{n,g}^{(\tau_1)\epsilon_1} \cdots D_{n,g}^{(\tau_{k_1})\epsilon_{k_1}}) \cdots (P_nD_{n,g}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots D_{n,g}^{(\tau_{k_p})\epsilon_{k_p}})\big] \nonumber \\ & = \left\{\begin{array}{lll} \displaystyle \sum_{i_1, \ldots, i_{k_p}=-\infty }^{\infty} \prod_{t=1}^{k_p} (\frac{1}{2} d^{'(\tau_t)\epsilon_t}_{i_t}+ \frac{1}{2} d^{''(\tau_t)\epsilon_t}_{i_t}) \delta_{0, \sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t} & \mbox{ if $p$ is even}, \\ 0& \mbox{ if $p$ is odd}. \end{array}\right. 
\end{aligned}$$ This completes the proof of Proposition [Proposition 6](#thm:jc_D+P_g){reference-type="ref" reference="thm:jc_D+P_g"}. ◻ ## Joint Convergence of $T_{n,g}$ and $D_{n,g}$ {#subsec:td_g} **Proposition 7**. *Let $B_{n,1}=n^{-1/2}T_{n,g}$ where $T_{n,g}$ is a random generalized Toeplitz matrix. For $i=1,2, \ldots, m$, let $\{B_{n,1}^{(i)}\}$ be independent copies of $B_{n,1}$ and $B_{n,2}^{(i)}=D^{(i)}_{n,g}$ be $m$ deterministic generalized Toeplitz matrices. Suppose the input entries $(a_j)_{j\in \mathbb{Z}}, (b_j)_{j\in \mathbb{Z}}$ of $T_{n,g}$ satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"} and the input entries $(d'_j)_{j\in \mathbb{Z}}, (d^{''}_j)_{j\in \mathbb{Z}}$ of $D_{n,g}$ satisfy Assumption [Assumption 1](#assump:determin){reference-type="ref" reference="assump:determin"}. Then $\{B_{n,j}^{(i)}; 1\leq i\leq m, 1\leq j \leq 2\}$ converge jointly. The limit $*$-moments are given in ([\[eqn:lim_ph(Dm\*,T\*)\_g\]](#eqn:lim_ph(Dm*,T*)_g){reference-type="ref" reference="eqn:lim_ph(Dm*,T*)_g"}).* *Proof.* Here again we outline the justification for the limit moments. For more details, see the proof of Proposition [Proposition 3](#thm:jc_T+D){reference-type="ref" reference="thm:jc_T+D"}. First note that the limit of $\varphi_n(D_{n, g}^p (n^{-1/2}T_{n,g})^q)$ will be zero if $q$ is odd. 
Let $q=2k$; then, from the proof of ([\[eqn:lim_ph(Dm,T)\]](#eqn:lim_ph(Dm,T)){reference-type="ref" reference="eqn:lim_ph(Dm,T)"}) and the idea of the proof of Proposition [Proposition 6](#thm:jc_D+P_g){reference-type="ref" reference="thm:jc_D+P_g"}, we have $$\begin{aligned} %\label{eqn:lim_ph(Dm,T)g} \lim_{n \to \infty} \varphi_n(D_{n, g}^p (n^{-1/2}T_{n,g})^q) &= \int_{z_0=0}^{1} \sum_{i_1, \ldots, i_{p}=-\infty }^{\infty} \prod_{t=1}^p (d^{'(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{[0,1/2]}(z_0)+d^{''(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{(1/2,1]} (z_0)) \nonumber \\ & \qquad \times \delta_{0,\sum_{t=1}^p \epsilon'_ti_t} \sum_{\pi\in \mathcal P_2(2k)} \int_{[-1,1]^k}\prod_{(r,s)\in \pi} \mathcal E_{r,s}'({\underline z_{2k}}) \prod_{i=0}^k dz_i, \end{aligned}$$ where the existence of the first limit is given in ([\[eqn:lim_D\*\_n\^p_g\]](#eqn:lim_D*_n^p_g){reference-type="ref" reference="eqn:lim_D*_n^p_g"}) and $\mathcal E_{r,s}'({\underline z_{2k}})$ is as in ([\[eqn:mathE\'(z2k)\_T\]](#eqn:mathE'(z2k)_T){reference-type="ref" reference="eqn:mathE'(z2k)_T"}) with $\epsilon_1 = \cdots= \epsilon_{2k}=1$. Similarly, consider an arbitrary monomial $B_{n,j}^{(\tau_1) \epsilon_1} B_{n,j}^{(\tau_2) \epsilon_2} \cdots B_{n,j}^{(\tau_q) \epsilon_q}$ with $B_{n,1}=n^{-1/2}T_{n,g}$ and $B_{n,2}=D_{n,g}$. Let $\mathcal{I}_p= (v_1, v_2, \ldots, v_p)$ be the indices corresponding to the positions of $D_{n,g}$ in the monomial and $R= [q] \setminus \mathcal{I}_p$. Note that if the cardinality of $R$ is odd, then the limit will be zero.
Let $\# R$ be even, say $\# R = 2k$. Then, similarly to the above expression, we can show that $$\begin{aligned} \label{eqn:lim_ph(Dm*,T*)_g} \lim_{n \to \infty} \varphi_n \big( B_{n,j}^{(\tau_1) \epsilon_1} B_{n,j}^{(\tau_2) \epsilon_2} \cdots B_{n,j}^{(\tau_q) \epsilon_q} \big) \nonumber & = \hskip-5pt \int_{z_0=0}^{1} \sum_{i_{v_1}, \ldots, i_{v_{p}}=-\infty }^{\infty} \prod_{t\in \mathcal{I}_p}(d^{'(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{[0,1/2]}(z_0)+d^{''(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{(1/2,1]} (z_0)) \nonumber \\ & \quad \times \delta_{0,\sum_{t=1}^p \epsilon'_{v_t}i_{v_t}} \sum_{\pi\in \mathcal P_2(R)} \int_{[-1,1]^k}\prod_{(r,s)\in \pi} \mathcal E_{r,s}'({\underline z_{2k}}) \prod_{i=0}^k dz_i, \end{aligned}$$ where $\mathcal P_2(R)$ denotes the set of all pair-partitions of the set $R$ and $\mathcal E_{r,s}'({\underline z_{2k}})$ is as in ([\[eqn:mathE\'(z2k)\_T\]](#eqn:mathE'(z2k)_T){reference-type="ref" reference="eqn:mathE'(z2k)_T"}) for pair-partitions of the set $R$. This completes the proof of the proposition. ◻ ## Joint Convergence of $T_{n,g}$ and $P_n$ {#subsec:tp_g} **Proposition 8**. *Suppose $\{T_{n,g}^{(i)}; 1\leq i\leq m\}$ are $m$ independent copies of generalized Toeplitz matrices whose input entries satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}. Then $\{P_n, n^{-1/2}T_{n,g}^{(i)}; 1 \leq i \leq m\}$ converge jointly. The limit $*$-moments are given in ([\[eqn:lim_phi(Te\*,P)\_g\]](#eqn:lim_phi(Te*,P)_g){reference-type="ref" reference="eqn:lim_phi(Te*,P)_g"}).* Note that an arbitrary monomial from the collection $\{P_n, T^{(i)}_{n,g}; 1 \leq i \leq m \}$ looks like the following: $(P_nT_{n,g}^{(\tau_1)\epsilon_1} \cdots T_{n,g}^{(\tau_{k_1})\epsilon_{k_1}}) \cdots (P_nT_{n,g}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots T_{n,g}^{(\tau_{k_p})\epsilon_{k_p}})$. Recall that $P_n T_{n,g}=H_n$.
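The identity $P_n T_{n,g}=H_n$ is the engine of this subsection. Assuming, as the notation suggests, that $P_n$ denotes the backward identity permutation matrix, the identity can be checked directly on an ordinary Toeplitz matrix (an illustrative sketch only; the generalized case replaces the single input sequence by two region-dependent ones):

```python
n = 4
t = {d: float(10 + d) for d in range(-(n - 1), n)}   # input sequence t_{i-j}

# Toeplitz matrix T[i][j] = t_{i-j} and the backward identity P_n (the flip matrix).
T = [[t[i - j] for j in range(n)] for i in range(n)]
P = [[1.0 if j == n - 1 - i else 0.0 for j in range(n)] for i in range(n)]

# Matrix product H = P * T.
H = [[sum(P[i][k] * T[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# H[i][j] = T[n-1-i][j] = t_{(n-1-i)-j}, which depends only on i + j: H is Hankel.
is_hankel = all(H[i][j] == H[i + 1][j - 1] for i in range(n - 1) for j in range(1, n))
print("P_n * T_n is a Hankel matrix:", is_hankel)  # True
```

Flipping the rows turns the dependence on $i-j$ into a dependence on $i+j$, which is why monomials in $P_n$ and Toeplitz matrices reduce to Hankel-type trace formulas below.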
For simplicity, we first provide the arguments for the monomial $H_{n}^{(\tau_1)\epsilon_1} \cdots H_{n}^{(\tau_{p})\epsilon_{p}}$. For the general case, similar arguments will work. Also see the proof of Proposition [Proposition 4](#thm:jc_T+P){reference-type="ref" reference="thm:jc_T+P"}. Now under Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}, the following remark provides the joint convergence of Hankel matrices. **Remark 4**. Let $\{H^{(i)}_{n}; 1 \leq i \leq m\}$ be $m$ independent copies of Hankel matrices with the input sequences $(a^{(i)}_j)_{j\in \mathbb{Z}}, (b^{(i)}_j)_{j\in \mathbb{Z}}$ which satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}. Then, for $\epsilon_1,\ldots, \epsilon_{p}\in \{1,*\}$, and $\tau_1,\ldots, \tau_{p}\in \{1,\ldots, m\}$, $$\begin{aligned} \label{eqn:lim_mome_A3_Hgen_com} \lim_{n\to \infty}\varphi_n(\dfrac{H^{(\tau_1) \epsilon_1}_{n}}{n^{1/2}}\cdots \dfrac{H^{(\tau_p)\epsilon_p}_{n}}{n^{1/2}}) & =\left\{\begin{array}{lll} \displaystyle \hskip-5pt \sum_{\pi\in \mathcal P_2(2k)}\int_{0}^1\int_{[-1,1]^k}\prod_{(r,s)\in \pi} \hskip-5pt \delta_{\tau_r,\tau_s} \mathcal E_{r,s}^{(H)}({\underline z_{2k}}) \prod_{i=0}^k dz_i &\mbox{ if } p=2k, \\ 0 & \hskip-10pt \mbox{ if } p=2k+1, \end{array}\right. \end{aligned}$$ where $\mathcal E_{r,s}^{(H)}({\underline z_{2k}})$ is as in ([\[eqn:mathE\'(z2k)\_H\]](#eqn:mathE'(z2k)_H){reference-type="ref" reference="eqn:mathE'(z2k)_H"}).
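All of the limit formulas in this section are sums over the pair-partitions $\mathcal P_2(2k)$. As an illustrative aside (not part of the original argument), a short Python routine can enumerate $\mathcal P_2(2k)$ recursively; the count $(2k-1)!!$ gives a quick sanity check on the number of terms appearing in such sums.

```python
def pair_partitions(elems):
    """Recursively enumerate all pair-partitions of the list `elems`.

    Each partition is returned as a list of pairs (r, s) with r < s,
    matching the notation pi = (r_1, s_1)...(r_k, s_k) used in the text.
    """
    if not elems:
        yield []
        return
    r = elems[0]
    for idx in range(1, len(elems)):
        s = elems[idx]
        rest = elems[1:idx] + elems[idx + 1:]
        for partial in pair_partitions(rest):
            yield [(r, s)] + partial

# The number of pair-partitions of {1, ..., 2k} is (2k-1)!! = 1 * 3 * 5 * ...
for k in (1, 2, 3):
    count = sum(1 for _ in pair_partitions(list(range(1, 2 * k + 1))))
    print(k, count)  # k=1 -> 1, k=2 -> 3, k=3 -> 15
```

For $k=2$ the three partitions are $(1,2)(3,4)$, $(1,3)(2,4)$, $(1,4)(2,3)$, exactly the index-matching patterns that can survive in the limit moments.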
We need the following trace formula for the proof of ([\[eqn:lim_mome_A3_Hgen_com\]](#eqn:lim_mome_A3_Hgen_com){reference-type="ref" reference="eqn:lim_mome_A3_Hgen_com"}): Suppose $\{H^{(\tau)}_{n}; 1 \leq \tau \leq m \}$ are the Hankel matrices with input sequences $(a^{(\tau)}_i)_{i\in \mathbb{Z}}$ and $(b^{(\tau)}_i)_{i\in \mathbb{Z}}$. Then we have $$\begin{aligned} %\label{eqn:trace_Hg} &{\mbox{Tr}}[H_{n}^{(\tau_1) \epsilon_1}\cdots H_{n}^{(\tau_p) \epsilon_p}] \nonumber \\ &= \left\{\begin{array}{lll} \displaystyle \sum_{j=1}^n \sum_{I_p''} \prod_{t=1}^p(a^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}} +b^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}})(m_t) \delta_{0,\sum_{t=1}^p(-1)^{p-t}i_t} & \mbox{ if $p$ is even}, \\ \displaystyle \sum_{j=1}^n \sum_{I_p''} \prod_{t=1}^p(a^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}} +b^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}})(m_t) \delta_{2j-1-n,\sum_{t=1}^p(-1)^{p-t}i_t} & \mbox{ if $p$ is odd}, \end{array} \right. \end{aligned}$$ where $I_p''$ is the natural analogue of ([\[eqn:I\'2k_Hs\]](#eqn:I'2k_Hs){reference-type="ref" reference="eqn:I'2k_Hs"}); for any $i \in \mathbb{Z}$, ${\mathcal A}_i, {\mathcal B}_i$ are as in ([\[eqn:AA_i,BB_i\]](#eqn:AA_i,BB_i){reference-type="ref" reference="eqn:AA_i,BB_i"}); and for $t=1,2, \ldots,(p-1)$, $$\begin{aligned} \label{eqn:m_t_Hn} m_t = j+\sum_{\ell=t}^p (-1)^{p-\ell} i_\ell \mbox{ with $m_p=j$}. \end{aligned}$$ We skip the proof of this trace formula and refer to the proof of Lemma [Lemma 3](#lem:tracegeneralT){reference-type="ref" reference="lem:tracegeneralT"}. Now we prove ([\[eqn:lim_mome_A3_Hgen_com\]](#eqn:lim_mome_A3_Hgen_com){reference-type="ref" reference="eqn:lim_mome_A3_Hgen_com"}). For simplicity of notation, we first consider a single matrix. The same idea also works for $m$ matrices. By arguments similar to those used in the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"}, we conclude that the limit of odd $*$-moments will be zero. Let $p=2k$.
Then using the above trace formula, we have $$\begin{aligned} \varphi_n( \frac{H_{n}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{H_{n}^{\epsilon_{2k}}}{\sqrt{n}}) &=\frac{1}{n^{k+1}}\sum_{j=1}^n\sum_{I_{2k}^{''}}\mbox{\bf E}[\prod_{t=1}^{2k}(a^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}(m_t)+b^{\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}(m_t))], \end{aligned}$$ where $I_{2k}^{''}$ is as in ([\[eqn:I\'2k_Hs\]](#eqn:I'2k_Hs){reference-type="ref" reference="eqn:I'2k_Hs"}) and $m_t$ is as in ([\[eqn:m_t\_Hn\]](#eqn:m_t_Hn){reference-type="ref" reference="eqn:m_t_Hn"}). Note that in the above expression only pair-partitions contribute in the limit; therefore $$\begin{aligned} \varphi_n( \frac{H_{n}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{H_{n}^{\epsilon_{2k}}}{\sqrt{n}}) &=\sum_{\pi\in \mathcal P_2(2k)}\frac{1}{n^{k+1}} \sum_{j=1}^n\sum_{I_{2k}^{''}(\pi)}\prod_{(r,s)\in \pi} \mathcal E_{r,s}(\underline i_{2k})+o(1), \end{aligned}$$ where $I_{2k}^{''}(\pi)=\{(i_1,\ldots, i_{2k})\in I_{2k}^{''} \; :\; \prod_{(r,s)\in \pi}\mbox{\bf E}[a^{\epsilon_r}_{i_r} a^{\epsilon_s}_{i_s}]\neq 0\}$ and, for $\underline i_{2k}=(j,i_1,\ldots, i_{2k})$, $$\begin{aligned} \mathcal E_{r,s}(\underline i_{2k})=\mbox{\bf E}\left[\left(a^{\epsilon_r}_{i_{r}}{\mathbf 1}_{{\mathcal A}_{\epsilon_r'i_r}}(m_r)+b^{\epsilon_r}_{i_r}{\mathbf 1}_{{\mathcal B}_{\epsilon_r'i_r}}(m_r)\right)\left(a^{\epsilon_s}_{i_{s}}{\mathbf 1}_{{\mathcal A}_{\epsilon_s'i_s}}(m_s)+b^{\epsilon_s}_{i_s}{\mathbf 1}_{{\mathcal B}_{\epsilon_s'i_s}}(m_s)\right)\right]. \end{aligned}$$ Observe that the constraint on the indices, $\sum_{t=1}^{2k}(-1)^{2k-t}i_t=0$, can also be written as $\sum_{t=1}^{2k}\nu_t i_t=0$, where $\nu_t$ is as in ([\[eqn:nu_t\_oddeven\]](#eqn:nu_t_oddeven){reference-type="ref" reference="eqn:nu_t_oddeven"}).
Thus, in terms of a pair-partition, the constraint on the indices will be $\sum_{(r,s)\in \pi}(\nu_ri_r+\nu_si_s)=0$. Now using arguments similar to those used in the proof of Proposition [Proposition 1](#thm:toeplitz){reference-type="ref" reference="thm:toeplitz"} and Remark [Remark 3](#cor:com_HJc){reference-type="ref" reference="cor:com_HJc"}, we have a non-zero contribution in the limit if and only if $(\nu_ri_r+\nu_si_s)=0$ for each pair $(r,s)\in \pi$, which is equivalent to the following $$\begin{aligned} i_{r}= \left\{\begin{array}{lll} i_{s} & \mbox{ if } & \nu_{r} \nu_{s}=-1,\\ -i_{s} & \mbox{ if} & \nu_{r} \nu_{s}=1. \end{array} \right. \end{aligned}$$ Note that $\{(a_{j,n}= x_{j,n} + \mathrm{i} y_{j,n}, b_{j,n}= x'_{j,n}+ \mathrm{i} y'_{j,n}); j \in \mathbb{Z}\}$ are independent and satisfy Assumption [Assumption 1](#assump:toe_gII){reference-type="ref" reference="assump:toe_gII"}, therefore $$\begin{aligned} \mbox{\bf E}[a^{\epsilon_r}_{i_r}a^{\epsilon_s}_{i_s}] &= \left\{\begin{array}{lll} (1-\delta_{\nu_{r}, \nu_{s}}) & \mbox{ if } & \epsilon_r \neq \epsilon_s,\\ 2i\rho_1(1-\delta_{\nu_{r}, \nu_{s}}) & \mbox{ if} & \epsilon_r = \epsilon_s=1, \\ -2i\rho_1(1-\delta_{\nu_{r}, \nu_{s}}) & \mbox{ if} & \epsilon_r = \epsilon_s=*, \\ \end{array} \right. \\ & = (1-\delta_{\nu_{r}, \nu_{s}}) \big[\epsilon'_{r} 2i\rho_1 \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }}, \end{aligned}$$ where $\epsilon'=1$ if $\epsilon=1$ and $\epsilon'=-1$ if $\epsilon=*$. Similarly, $$\begin{aligned} \mbox{\bf E}[b^{\epsilon_r}_{i_r}b^{\epsilon_s}_{i_s}] &= \left\{\begin{array}{lll} (1-\delta_{\nu_{r}, \nu_{s}}) & \mbox{ if } & \epsilon_r \neq \epsilon_s,\\ 2i\rho_5(1-\delta_{\nu_{r}, \nu_{s}}) & \mbox{ if} & \epsilon_r = \epsilon_s=1, \\ -2i\rho_5(1-\delta_{\nu_{r}, \nu_{s}}) & \mbox{ if} & \epsilon_r = \epsilon_s=*, \\ \end{array} \right. \\ & = (1-\delta_{\nu_{r}, \nu_{s}}) \big[\epsilon'_{r} 2i\rho_5 \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }}. 
\end{aligned}$$ Also, from similar calculation, we have $$\begin{aligned} \mbox{\bf E}[a^{\epsilon_r}_{i_r}b^{\epsilon_s}_{i_s}] & = (1-\delta_{\nu_{r}, \nu_{s}}) \big[\rho_2+\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4-\rho_3) \big]^{(1-\delta_{\epsilon'_{r}, \epsilon'_{s} })} \big[\rho_2-\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4+\rho_3) \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }}, \\ \mbox{\bf E}[a^{\epsilon_s}_{i_r}b^{\epsilon_r}_{i_s}] & = (1-\delta_{\nu_{r}, \nu_{s}}) \big[\rho_2+\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4-\rho_3) \big]^{(1-\delta_{\epsilon'_{r}, \epsilon'_{s} })} \big[\rho_2-\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4+\rho_3) \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }}.\end{aligned}$$ Thus $$\begin{aligned} \mathcal E_{r,s}(\underline i_{2k}) =&\Big[ \big[\epsilon'_{r} 2i\rho_1 \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }} f_1+ \big[\epsilon'_{r} 2i\rho_5 \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }}f_4 + \big[\rho_2+\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4-\rho_3) \big]^{(1-\delta_{\epsilon'_{r}, \epsilon'_{s} })} \\ & \quad \times \big[\rho_2-\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4+\rho_3) \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }} f_2 + \big[\rho_2+\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4-\rho_3) \big]^{(1-\delta_{\epsilon'_{r}, \epsilon'_{s} })} \\ & \quad \times \big[\rho_2-\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4+\rho_3) \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }} f_3 \Big] (1-\delta_{\nu_{r}, \nu_{s}}), \end{aligned}$$ where $$\begin{aligned} &f_1={\mathbf 1}_{{\mathcal A}_{\epsilon_r'i_r}}(m_r){\mathbf 1}_{{\mathcal A}_{\epsilon_s'i_s}}(m_s),\; f_2={\mathbf 1}_{{\mathcal A}_{\epsilon_r'i_r}}(m_r){\mathbf 1}_{{\mathcal B}_{\epsilon_s'i_s}}(m_s),\\& f_3={\mathbf 1}_{{\mathcal A}_{\epsilon_s'i_s}}(m_s){\mathbf 1}_{{\mathcal B}_{\epsilon_r'i_r}}(m_r),\; f_4={\mathbf 1}_{{\mathcal B}_{\epsilon_r'i_r}}(m_r){\mathbf 1}_{{\mathcal B}_{\epsilon_s'i_s}}(m_s), \end{aligned}$$ with $m_t$ as in 
([\[eqn:m_t\_Hn\]](#eqn:m_t_Hn){reference-type="ref" reference="eqn:m_t_Hn"}). Note that for even $p$, $m_t$ can also be written as follows: $$\begin{aligned} \label{eqn:m_t_nut_Hn} m_t = \displaystyle j+\sum_{\ell=t}^{2k} \nu_\ell i_\ell,\end{aligned}$$ where $\nu_t$ is as in ([\[eqn:nu_t\_oddeven\]](#eqn:nu_t_oddeven){reference-type="ref" reference="eqn:nu_t_oddeven"}). Observe that $\mathcal E_{r,s}(\underline i_{2k})$ implies that the number of free indices among $\{i_1,\ldots, i_{2k}\}$ is $k$, as the indices satisfy the relations $i_r=\pm i_s$ for each pair $(r,s)\in \pi$. Let $m_t'=\frac{m_t}{n}$, where $m_t$ is as in ([\[eqn:m_t\_nut_Hn\]](#eqn:m_t_nut_Hn){reference-type="ref" reference="eqn:m_t_nut_Hn"}). Then we have $$\begin{aligned} &f_1={\mathbf 1}_{{\mathcal A}_{\epsilon_r'\frac{i_r}{n}}}(m_r'){\mathbf 1}_{{\mathcal A}_{\epsilon_s'\frac{i_s}{n}}}(m_s'),\; f_2={\mathbf 1}_{{\mathcal A}_{\epsilon_r'\frac{i_r}{n}}}(m_r'){\mathbf 1}_{{\mathcal B}_{\epsilon_s'\frac{i_s}{n}}}(m_s'),\\& f_3={\mathbf 1}_{{\mathcal A}_{\epsilon_s'\frac{i_s}{n}}}(m_s'){\mathbf 1}_{{\mathcal B}_{\epsilon_r'\frac{i_r}{n}}}(m_r'),\; f_4={\mathbf 1}_{{\mathcal B}_{\epsilon_r'\frac{i_r}{n}}}(m_r'){\mathbf 1}_{{\mathcal B}_{\epsilon_s'\frac{i_s}{n}}}(m_s'). \end{aligned}$$ Let $\tilde {\mathcal A}_z$ and $\tilde {\mathcal B}_z$ be as in ([\[eqn:A_x,B_x\]](#eqn:A_x,B_x){reference-type="ref" reference="eqn:A_x,B_x"}). Recall that if $\pi=(r_1,s_1)\cdots (r_k,s_k)$, then $\pi'(r_t)=\pi'(s_t)=t$.
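To make the Riemann-integration step explicit (a routine limit computation, spelled out here for the reader's convenience): writing $z_0$ for the limiting value of $j/n$ and $z_{\pi'(\ell)}\in[-1,1]$ for the limiting value of the scaled free index attached to the pair containing $\ell$, each normalized sum is a Riemann sum, so that

$$\frac{1}{n^{k+1}} \sum_{j=1}^{n} \sum_{I_{2k}''(\pi)} F\Big(\frac{j}{n}, \frac{i_1}{n}, \ldots, \frac{i_{2k}}{n}\Big) \;\longrightarrow\; \int_{0}^{1} \int_{[-1,1]^k} F(z_0, z_1, \ldots, z_k)\, dz_0\, dz_1 \cdots dz_k, \qquad n \to \infty,$$

for any bounded, almost everywhere continuous $F$. In particular, $m_t'=\frac{j}{n}+\sum_{\ell=t}^{2k}\nu_\ell\frac{i_\ell}{n}$ converges to $z_0+\sum_{\ell=t}^{2k}\nu_\ell z_{\pi'(\ell)}$ along these sums.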
Define $w_d=z_0+\sum_{\ell=d}^{2k} \nu_\ell z_{\pi'(\ell)}$ and define $$\begin{aligned} %\label{eqn:f'1234_H} &f_1'={\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_s'z_{\pi'(s)}}}(w_s),\; f_2'={\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_s'z_{\pi'(s)}}}(w_s),\\ & f_3'={\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal A}_{\epsilon_s'z_{\pi'(s)}}}(w_s),\; f_4'={\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_r'z_{\pi'(r)}}}(w_r){\mathbf 1}_{\tilde{\mathcal B}_{\epsilon_s'z_{\pi'(s)}}}(w_s). \nonumber \end{aligned}$$ Then by Riemann integration, we get $$\begin{aligned} \lim_{n \to \infty}\varphi_n( \frac{H_{n}^{\epsilon_1}}{\sqrt{n}} \cdots \frac{H_{n}^{\epsilon_{2k}}}{\sqrt{n}}) &=\sum_{\pi\in \mathcal P_2(2k)}\int_{z_0=0}^1\int_{[-1,1]^k}\prod_{(r,s)\in \pi} \mathcal E_{r,s}^{(H)}({\underline z_{2k}})dz_0\cdots dz_k, \end{aligned}$$ where $\underline z_{2k}=(z_0,z_1,\ldots,z_{2k})$ and $$\begin{aligned} \label{eqn:mathE'(z2k)_H} \mathcal E_{r,s}^{(H)}(\underline z_{2k}) =&\Big[ \big[\epsilon'_{r} 2i\rho_1 \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }} f'_1+ \big[\epsilon'_{r} 2i\rho_5 \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }}f'_4 + \big[\rho_2+\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4-\rho_3) \big]^{(1-\delta_{\epsilon'_{r}, \epsilon'_{s} })} \nonumber\\ & \quad \times \big[\rho_2-\rho_6 + \epsilon'_{r} \mathrm{i}(\rho_4+\rho_3) \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }} f'_2 + \big[\rho_2+\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4-\rho_3) \big]^{(1-\delta_{\epsilon'_{r}, \epsilon'_{s} })} \nonumber\\ & \quad \times \big[\rho_2-\rho_6 + \epsilon'_{s} \mathrm{i}(\rho_4+\rho_3) \big]^{\delta_{\epsilon'_{r}, \epsilon'_{s} }} f'_3 \Big] (1-\delta_{\nu_{r}, \nu_{s}}). \end{aligned}$$ *Proof of Proposition [Proposition 8](#thm:jc_T+P_g){reference-type="ref" reference="thm:jc_T+P_g"}.* We only sketch the argument.
Using the idea of the proof of Proposition [Proposition 5](#thm:generalT_com){reference-type="ref" reference="thm:generalT_com"} and Remark [Remark 4](#cor:jc_H_g){reference-type="ref" reference="cor:jc_H_g"}, one can show that $$\begin{aligned} \label{eqn:lim_phi(Te*,P)_g} & \lim_{n \to \infty} \varphi_n\big((P_nT_{n,g}^{(\tau_1)\epsilon_1} \cdots T_{n,g}^{(\tau_{k_1})\epsilon_{k_1}}) \cdots (P_nT_{n,g}^{(\tau_{k_{p-1}+1})\epsilon_{k_{p-1}+1}} \cdots T_{n,g}^{(\tau_{k_p})\epsilon_{k_p}}) \big) \nonumber \\ =& \left\{\begin{array}{lll} \displaystyle \sum_{\pi\in \mathcal P_{2}(2k)} \int_{z_0=0}^1\int_{[-1,1]^k} \prod_{(r,s)\in \pi} \delta_{\tau_r,\tau_s} \mathcal E_{r,s}^{(T,P)}({\underline z_{2k}}) \prod_{i=0}^kdz_i, & \mbox{ if $p, k_p$ are even}, \\ 0& \mbox{ otherwise}, \end{array}\right.\end{aligned}$$ where $k_p=2k$ and $\mathcal E_{r,s}^{(T,P)}({\underline z_{2k}})$ is some function of the form ([\[eqn:mathE\'(z2k)\_H\]](#eqn:mathE'(z2k)_H){reference-type="ref" reference="eqn:mathE'(z2k)_H"}). ◻ ## Final arguments in the proof of Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"} {#subsec:Tdp,g} We use the ideas from the proofs of Propositions [Proposition 5](#thm:generalT_com){reference-type="ref" reference="thm:generalT_com"} to [Proposition 8](#thm:jc_T+P_g){reference-type="ref" reference="thm:jc_T+P_g"}. We mention only the main steps. *Proof of Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"}.* Let $(a^{(\tau)}_i)_{i\in \mathbb{Z}}, (b^{(\tau)}_i)_{i\in \mathbb{Z}}$ be the input sequences for $T^{(\tau)}_{n,g}$ and $(d^{'(\tau)}_i)_{i\in \mathbb{Z}}, (d^{''(\tau)}_i)_{i\in \mathbb{Z}}$ be the input sequences for $D^{(\tau)}_{n,g}$. 
Note that an arbitrary monomial from the collection $\{P_n, n^{-1/2}T_{n,g}^{(i)}, D_{n,g}^{(i)}; 1 \leq i \leq m\}$ looks like the following: $$\begin{aligned} &(P_n B_{n,\mu}^{(\tau_1) \epsilon_1} \cdots B_{n,\mu}^{(\tau_{k_1}) \epsilon_{k_1}} ) (P_n B_{n,\mu}^{(\tau_{k_1+1}) \epsilon_{k_1+1}} \cdots B_{n,\mu}^{(\tau_{k_2}) \epsilon_{k_2}} ) \cdots (P_n B_{n,\mu}^{(\tau_{k_{p-1}+1}) \epsilon_{k_{p-1}+1}} \cdots B_{n,\mu}^{(\tau_{k_p}) \epsilon_{k_p}} ) \nonumber \\ & = q_{k_p}(P,D_g,T_g), \mbox{ say}, \end{aligned}$$ where $\epsilon_i \in \{1, *\}$, $\tau_i \in \{1,2, \ldots, m\}$ and for $\mu \in \{1,2\}$, $B^{(\tau_i) \epsilon_i}_{n,1}=n^{-1/2}T^{(\tau_i) \epsilon_i}_{n,g}$, $B^{(\tau_i) \epsilon_i}_{n,2}=D^{(\tau_i) \epsilon_i}_{n,g}$. First note that if $p$ is odd, then $\varphi_n \big(q_{k_p}(P,D_g,T_g)\big)=o(1)$. Let $p$ be even. Then from Lemma [Lemma 4](#lem:hankel_g){reference-type="ref" reference="lem:hankel_g"}, we have $$\begin{aligned} & \varphi_n \big(q_{k_p}(P,D_g,T_g)\big) \nonumber \\ & = \frac{1}{n^{1+\frac{w_p}{2} }} \sum_{j=1}^n \sum_{I_{k_p}} \prod_{e=1}^p \prod_{t=k_{e-1}+1}^{k_e} (z^{(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal A}_{\epsilon_t'i_t}}+z^{'(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{{\mathcal B}_{\epsilon_t'i_t}}) (m_{t,k_e}) \delta_{0,\sum_{c=1}^p(-1)^{p-c} \sum_{t=k_{c-1}+1}^{k_{c}} \epsilon_t'i_t}, \end{aligned}$$ where $z^{(\tau_t) \epsilon_t}_{i_t}= a^{(\tau_t)\epsilon_t}_{i_t}$ or $d^{'(\tau_t)\epsilon_t}_{i_t}$, depending on whether the matrix $B_{n,\mu}^{(\tau_t)\epsilon_t}$ is $T_{n,g}^{(\tau_t)\epsilon_t}$ or $D_{n,g}^{(\tau_t)\epsilon_t}$; accordingly, $z^{'(\tau_t) \epsilon_t}_{i_t}= b^{(\tau_t)\epsilon_t}_{i_t}$ or $d^{''(\tau_t)\epsilon_t}_{i_t}$; $I_{k_p}$ and $m_{t,k_e}$ are as in ([\[eqn:i_k in -n to n\]](#eqn:i_k in -n to n){reference-type="ref" reference="eqn:i_k in -n to n"}) and
([\[eqn:m_chi,t_Hn_g\]](#eqn:m_chi,t_Hn_g){reference-type="ref" reference="eqn:m_chi,t_Hn_g"}), respectively; and $$\begin{aligned} \label{eqn:no of T in Q_g} w_p= \# \{ \mu : B_{n,\mu}^{(\tau_t) \epsilon_t} = B_{n,1}^{(\tau_t) \epsilon_t} \mbox{ in } q_{k_p}(P,D_g,T_g)\}. \end{aligned}$$ Note that if $w_p$ is odd, then $\varphi_n \big(q_{k_p}(P,D_g,T_g)\big)=o(1)$. Now for $c=1,2, \ldots, p$, let $u_{r_{c-1}}, u_{r_{c-1}+1}, \ldots, u_{r_{c}}$ be the positions of $D_{n,g}$ and $v_{w_{c-1}}, v_{w_{c-1}+1}, \ldots, v_{w_{c}}$ be the positions of $T_{n,g}$ between $B_{n,\mu}^{(\tau_{k_{c-1}+1}) \epsilon_{k_{c-1}+1}}$ and $B_{n,\mu}^{(\tau_{k_c}) \epsilon_{k_c}}$. Here $r_0=k_0=w_0=1$ and $r_p+w_p=k_p$. Suppose $$%\label{eqn:setR_Tg} R=([k_p] \setminus \cup_{c=1}^p \{ u_{r_{c-1}}, u_{r_{c-1}+1}, \ldots, u_{r_{c}}\}).$$ Note from ([\[eqn:no of T in Q_g\]](#eqn:no of T in Q_g){reference-type="ref" reference="eqn:no of T in Q_g"}) that $\# R=w_p$. Let $w_p=2k$. Then using arguments similar to those used while establishing ([\[eqn:lim_phi_P,D\*\_g\]](#eqn:lim_phi_P,D*_g){reference-type="ref" reference="eqn:lim_phi_P,D*_g"}) and ([\[eqn:lim_phi(Te\*,P)\_g\]](#eqn:lim_phi(Te*,P)_g){reference-type="ref" reference="eqn:lim_phi(Te*,P)_g"}), we have $$\begin{aligned} %\label{eqn:lim_phi(PTD)_fin_g} & \lim_{n \to \infty} \varphi_n \big(q_{k_p}(P,D_g,T_g)\big) \nonumber \\ & = \left\{\begin{array}{lll} \displaystyle \hskip-5pt \int_{z_0=0}^1 \sum_{i_{u_{r_0}}, \ldots, i_{u_{r_p}}=-\infty}^{\infty} \prod_{t=1}^{r_p} (d^{'(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{[0,1/2]}(z_0) +d^{''(\tau_t)\epsilon_t}_{i_t}{\mathbf 1}_{(1/2,1]} (z_0)) \\ \times \delta_{0,\sum_{c=1}^{p}(-1)^c \sum_{t=r_{c-1}+1}^{r_c} \epsilon'_{u_t} i_{u_t}} \displaystyle \hskip-3pt \sum_{\pi\in \mathcal P_{2}(R)} \int_{[-1,1]^k} \prod_{(r,s)\in \pi} \hskip-5pt \delta_{\tau_r,\tau_s} \mathcal E_{r,s}^{(T,P)}({\underline z_{2k}}) \prod_{i=0}^kdz_i & \mbox{ if $p, w_p$ are even}, \\ 0& \mbox{ otherwise}, 
\end{array}\right. \end{aligned}$$ where $\mathcal E_{r,s}^{(T,P)}({\underline z_{2k}})$ is as in ([\[eqn:lim_phi(Te\*,P)\_g\]](#eqn:lim_phi(Te*,P)_g){reference-type="ref" reference="eqn:lim_phi(Te*,P)_g"}) for pair-partitions of the set $R$. This completes the proof of Theorem [Theorem 2](#thm:gen_tdp_com){reference-type="ref" reference="thm:gen_tdp_com"}. ◻
[^1]: kartick\@iiserb.ac.in : Research supported by Inspire Faculty Fellowship, DST/INSPIRE/04/2020/000579. [^2]: bosearu\@gmail.com : Research supported by J. C. Bose National Fellowship, Department of Science and Technology, Government of India. [^3]: shambhumath4\@gmail.com : Research partially supported by NBHM Post-doctoral Fellowship, 0204/10/(25)/2023/R$\&$D-II/2803. Work partially done at IISER Bhopal, funded by DST/INSPIRE/04/2020/000579.
--- abstract: | In this paper we study a Dirichlet-type differential inclusion involving the Finsler-Laplace operator on a complete Finsler manifold. Depending on the positive parameter $\lambda$ of the inclusion, we establish non-existence, as well as existence and multiplicity results by applying non-smooth variational methods. The main difficulties are given by the problem's highly nonlinear nature due to the general Finslerian setting, as well as the non-smooth context. author: - - title: A Dirichlet inclusion problem on Finsler manifolds --- differential inclusion, Dirichlet problem, Finsler manifold, Finsler-Laplace operator # Introduction and main results Numerous geometric and physical phenomena can be addressed by examining certain elliptic PDEs. An important class of such problems is represented by the elliptic equation $$Lu(x) = g(u(x)), \ x \in \Omega, \eqno{({\mathcal P_1})}$$ where $\Omega \subset \mathbb R^n$ denotes an open set $(n\geq 2)$, $L$ represents an elliptic operator and $g: \mathbb R \to \mathbb R$ is a nonlinear function verifying certain regularity and growth conditions. Due to the recent advances in geometric analysis, several elliptic problems have been studied on non-Euclidean spaces such as Riemannian and Finsler manifolds, see e.g., Bonanno, Bisci and Rădulescu [@BonannoBisciRadulescu], Cheng and Yau [@Cheng], Farina, Sire and Valdinoci [@Farina], Farkas, Kristály and Varga [@FarkasKristalyVarga], Kristály, Mester and Mezei [@KristalyMester], Kristály and Rudas [@KristalyRudas], and references therein. Concerning physical applications, a significant extension of problem $({\mathcal P_1})$ addresses the case when the nonlinear function $g$ is not continuous.
This setting is handled by the replacement of $({\mathcal P_1})$ with the differential inclusion problem $$Lu(x) \in \partial G(u(x)), \ x \in \Omega, \eqno{({\mathcal P_2})}$$ where $G:\mathbb{R} \to \mathbb{R}$ is a locally Lipschitz function and $\partial G$ stands for the generalized gradient of $G$, see e.g., Carl and Le [@CarlLe], Kristály, Mezei and Szilák [@oscillatory; @KMSZ], and references therein. Motivated by these facts, we consider the following Dirichlet-type differential inclusion problem on a complete Finsler manifold $(M,F)$, namely $$\left\{ \begin{array}{lll} -\Delta_F u(x) \in \lambda \partial G(u(x)),& x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{array}\right. \eqno{({\mathcal DI})_\lambda}$$ where $\Omega \subset M$ is an open, bounded set with sufficiently smooth boundary, $\Delta_F$ denotes the Finsler-Laplace operator defined on $(M,F)$ and $\lambda > 0$ is a parameter. Furthermore, $G:\mathbb{R} \to \mathbb{R}$ is a locally Lipschitz function and $\partial G$ represents the generalized gradient of $G$ in the sense of Clarke [@Clarke]. In this case, an element $u \in W^{1,2}_{0,F}(\Omega)$ is said to be a weak solution of problem $({\mathcal DI})_\lambda$ if there exists a measurable mapping $x \mapsto \xi_x \in \partial G(u(x))$ on $\Omega$ such that for every test function $\varphi \in C_0^\infty(\Omega)$, the correspondence $x \mapsto \xi_x \varphi(x)$ belongs to $L^1(\Omega)$ and $$\label{weak_solution} \int_\Omega (D\varphi(x))\big(\nabla_F u(x)\big)\,\mathrm{d}v_F(x) = \lambda \int_\Omega \xi_x \varphi(x)\,\mathrm{d}v_F(x).$$ Considering the nonlinear term $G$, we further require the following conditions: 1. [\[A1\]]{#A1 label="A1"} $(\textbf{H}_1)$: $\displaystyle\lim_{t\to 0}\frac{\max\{|\xi|: \xi \in \partial G(t)\}}{t}=0$; 2. [\[A2\]]{#A2 label="A2"} $(\textbf{H}_2)$: $\displaystyle\lim_{t\to \infty}\frac{\max\{|\xi|: \xi \in \partial G(t)\}}{t}=0$; 3. [\[A3\]]{#A3 label="A3"} $(\textbf{H}_3)$: $\displaystyle G(0) = 0$.
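To make the hypotheses concrete, the following is a hedged numerical sketch with a model nonlinearity of our own choosing (not taken from the paper): $G(t) = \int_0^t \min(s^2,1)\,\mathrm{d}s$ for $t \geq 0$ and $G(t) = 0$ for $t < 0$, which is locally Lipschitz with $G(0) = 0$, and whose generalized gradient reduces to the classical derivative $G'(t) = \min(t^2,1)$. The sketch probes the limits in $(\textbf{H}_1)$ and $(\textbf{H}_2)$ by difference quotients.

```python
import numpy as np

def G(t):
    # model nonlinearity: G(t) = ∫_0^t min(s^2, 1) ds for t >= 0, else 0;
    # locally Lipschitz, G(0) = 0, classical derivative G'(t) = min(t^2, 1)
    t = np.asarray(t, dtype=float)
    inner = np.where(t <= 1.0, t ** 3 / 3.0, 1.0 / 3.0 + (t - 1.0))
    return np.where(t > 0.0, inner, 0.0)

def subgrad_bound(t, h=1e-6):
    # crude stand-in for max{|xi| : xi in dG(t)} via two-sided quotients
    fwd = abs((G(t + h) - G(t)) / h)
    bwd = abs((G(t) - G(t - h)) / h)
    return max(fwd, bwd)

# (H1): the ratio max{|xi|}/t should tend to 0 as t -> 0^+
small = [float(subgrad_bound(t) / t) for t in (1e-1, 1e-2, 1e-3)]
# (H2): the same ratio should tend to 0 as t -> infinity
large = [float(subgrad_bound(t) / t) for t in (1e1, 1e2, 1e3)]
print(small, large, float(G(0.0)))
```

For this smooth model the two-sided quotient is an adequate stand-in for the subdifferential bound; a genuinely non-smooth $G$ would need the Clarke construction from Section 3.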
The purpose of this paper is to prove non-existence and existence/multiplicity results for the inclusion problem $({\mathcal DI})_\lambda$ depending on the parameter $\lambda$. The ambient space is given by a complete Finsler manifold, on which we consider an open, bounded subset $\Omega$. Note that the boundedness of $\Omega$ guarantees the validity of certain continuous and compact Sobolev embeddings, as well as the fact that the uniformity constant $l_{F,\Omega}$ associated with $\Omega$ is nonzero. All of these are crucial in our arguments. Further ingredients of the proof involve variational methods, in particular the non-smooth mountain pass theorem. Our result can be stated as follows: **Theorem 1**. *Let $(M,F)$ be an $n$-dimensional complete Finsler manifold with $n \geq 3$, and let $\Omega \subset M$ be an open, bounded set with sufficiently smooth boundary. Let us consider the parameter-dependent Dirichlet inclusion problem $({\mathcal DI})_\lambda$, where $\lambda > 0$ and $G$ is a locally Lipschitz function verifying assumptions $(\textbf{H}_1) - (\textbf{H}_3)$.* *Then there exist numbers $0 < \lambda_1 < \lambda_2$ such that* 1. *(Non-existence): $({\mathcal DI})_\lambda$ has only the trivial solution if $0 < \lambda < \lambda_1$;* 2. *(Existence/multiplicity): $({\mathcal DI})_\lambda$ has two different nontrivial nonnegative weak solutions when $\lambda > \lambda_2$.* The organization of the paper is the following. Section [2](#Sec2){reference-type="ref" reference="Sec2"} provides a summary of preliminary notions and results concerning Finsler manifolds. In Section [3](#Sec3){reference-type="ref" reference="Sec3"}, we recall some fundamental definitions and theorems of non-smooth analysis that find application in our proof. Finally, Section [4](#Sec4){reference-type="ref" reference="Sec4"} contains the proof of Theorem [Theorem 1](#theorem1){reference-type="ref" reference="theorem1"}.
# Preliminaries on Finsler manifolds {#Sec2} In this section we briefly present the fundamental notions of Finsler manifolds, which are necessary for our further developments. For more details we refer to Bao, Chern and Shen [@BCS], Ohta and Sturm [@OhtaSturm], and Shen [@Shen01]. Let $M$ be a connected $n$-dimensional differentiable manifold. The collection of vectors tangent to $M$ is denoted as the tangent bundle of $M$, defined as $$TM=\cup_{x \in M}\{(x,v): v \in T_{x} M\},$$ where $T_{x} M$ represents the tangent space of $M$ at the point $x$. The pair $(M,F)$ is called a Finsler manifold if $F: TM \to [0,\infty)$ is a continuous function verifying the following conditions: 1. $F \in C^{\infty}(TM \setminus \{ 0 \})$; 2. $F(x,\lambda v) = \lambda F(x,v)$, for every $\lambda \geq 0$ and $(x,v) \in TM$; 3. the Hessian matrix $\left[\left( \frac{1}{2}F^{2}(x,v)\right) _{v^{i}v^{j}} \right]_{i,j=\overline{1,n}}$ is positive definite for every $(x,v)\in TM \setminus \{0\}.$ In this case we say that $F$ is a Finsler metric on $M$. If, in addition, $F(x,\lambda v) = |\lambda| F(x,v)$ holds for all $\lambda \in \mathbb{R}$ and $(x,v) \in TM$, then $F$ is said to be symmetric and the Finsler manifold is called reversible. Otherwise, $(M,F)$ is classified as nonreversible. The reversibility constant of $(M,F)$ is defined by the number $$r_{F} = \sup_{x\in M} ~ \sup_{\substack{ v\in T_{x} M \setminus \{0\}}} \frac{F(x,v)}{F(x,-v)} ~ \in [1, \infty],$$ and it measures how much the Finsler manifold deviates from being reversible, see Rademacher [@Rademacher]. In particular, $r_F = 1$ if and only if $(M,F)$ is a reversible Finsler manifold. The uniformity constant of $(M,F)$ is defined by the number $$l_{F} = \inf_{x\in M} ~ \inf_{v,w,z\in T_x M\setminus \{0\}}\frac{g_{(x,w)}(v,v)}{g_{(x,z)}(v,v)} ~ \in ~ [0, 1],$$ which measures how much $F$ deviates from being a Riemannian structure, see Egloff [@Egloff]. 
Here $g$ denotes the fundamental tensor of $(M,F)$, see Bao, Chern and Shen [@BCS]. We have $l_F = 1$ if and only if $(M,F)$ is a Riemannian manifold, see Ohta [@Ohta]. A $C^\infty$-differentiable curve $\gamma: [a,b] \to M$ is called a geodesic if its velocity field $\dot \gamma$ is parallel along the curve. The Finsler manifold is said to be complete if every geodesic segment $\gamma: [a,b] \to M$ can be extended to a geodesic defined on $\mathbb{R}$. The dual metric $F^*:T^*M \to [0,\infty)$ is called the polar transform of $F$, defined as $$F^*(x,\alpha) = \sup_{v \in T_xM \setminus \{0\}} ~ \frac{\alpha(v)}{F(x,v)},$$ where $T^*M = \bigcup_{x \in M}T^*_{x} M$ is the cotangent bundle of $M$ and $T^*_{x} M$ represents the dual space of $T_{x} M$. In local coordinates, the Legendre transform $J^*:T^*M \to TM$ is defined as $$J^*(x,\alpha) = \sum_{i=1}^n \frac{\partial}{\partial \alpha_i}\left(\frac{1}{2} F^{*2}(x,\alpha)\right)\frac{\partial}{\partial x^i}.$$ Note that if $J^*(x, \alpha) = (x,v)$, then one has $$F(x,v) = F^*(x,\alpha) \quad \text{and} \quad \alpha(v) = F^*(x,\alpha) F(x,v).$$ Furthermore, if $u: M \to \mathbb R$ is a function of class $C^1$, then the gradient of $u$ is defined as $$\nabla_F u(x) = J^*(x, Du(x)), \ \forall x \in M,$$ where $Du(x) \in T_x^*M$ denotes the differential of $u$ at the point $x$. In particular, one has that $$F(x, \nabla_F u(x)) = F^*(x, Du(x)), \ \forall x \in M.$$ It is important to note that, in general, $\nabla_F$ is a nonlinear operator. Also, the mean value theorem implies that $$\begin{aligned} \label{mean_theorem} &(Du(x) - Df(x))(\nabla_F u(x)-\nabla_F f(x)) \geq \nonumber \\ & \geq l_F F^{*2}(Du(x) - Df(x)),\end{aligned}$$ for every $u,f \in C^1(M)$. 
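As an illustration of the polar and Legendre transforms, the following sketch works with a Randers-type Minkowski norm on $\mathbb{R}^2$ (a standard nonreversible example; the perturbation vector $b$, the fixed point and the covector $\alpha$ are our own choices, not from the paper) and checks numerically the identities $F(x,v) = F^*(x,\alpha)$ and $\alpha(v) = F^*(x,\alpha)\,F(x,v)$ for $(x,v) = J^*(x,\alpha)$.

```python
import numpy as np

b = np.array([0.3, 0.1])      # Randers perturbation with |b| < 1 (our choice)

def F(v):
    # nonreversible Minkowski norm F(v) = |v| + <b, v> at a fixed point x
    return np.linalg.norm(v) + b @ v

# polar transform F*(alpha) = sup_{v != 0} alpha(v)/F(v); since F is positively
# 1-homogeneous, maximizing over a fine grid of unit directions suffices
theta = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)
F_dirs = np.linalg.norm(dirs, axis=1) + dirs @ b

def F_star(alpha):
    return float(np.max(dirs @ alpha / F_dirs))

def J_star(alpha, h=1e-3):
    # Legendre transform J*(alpha) = grad_alpha (F*^2 / 2), central differences
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (F_star(alpha + e) ** 2 - F_star(alpha - e) ** 2) / (4.0 * h)
    return g

alpha = np.array([0.7, -0.4])  # sample covector (our choice)
v = J_star(alpha)
# the identities F(v) = F*(alpha) and alpha(v) = F*(alpha) F(v) should match
print(F(v), F_star(alpha), alpha @ v, F_star(alpha) * F(v))
```

The grid maximum and finite differences are crude but sufficient here; note also that $F(v) \neq F(-v)$ for this norm, exhibiting the nonreversibility discussed above.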
For a function $u \in C^2(M)$, the Finsler-Laplace operator $\Delta_F$ is defined as $$\Delta_F u(x) = \mathrm{div}(\nabla_F u(x)),$$ where $$\mathrm{div} V(x) = \frac{1}{\sigma_F(x)} \sum_{i=1}^n \frac{\partial}{\partial x^i} \Big(\sigma_F(x) V^i(x) \Big)$$ for some vector field $V$ on $M$. Here, $\sigma_F$ represents the density function defined by $\sigma_F(x) = \frac{ \omega_n}{\mathrm{Vol}(B_x(1))},$ where $\omega_n$ and $\mathrm{Vol}(B_x(1))$ denote the Euclidean volume of the $n$-dimensional Euclidean unit ball and the set $$B_x(1) = \Big\{ (v^i) \in \mathbb{R}^n :~ F \Big(x, \sum_{i=1}^n v^i \frac{\partial}{\partial x^i} \Big) < 1 \Big\} \subset \mathbb{R}^n,$$ respectively. Again, the Finsler-Laplace operator $\Delta_F$ is generally nonlinear. The canonical Hausdorff volume form $\mathrm{d}v_F$ on $(M, F)$ is defined as $$\label{Hausdorff_measure} \mathrm{d}v_F(x) = \sigma_F(x) \mathrm{d} x^1 \land \dots \land \mathrm{d} x^n.$$ In the following, we may omit the parameter $x$ for the sake of brevity. The Finslerian volume of an open set $\Omega\subset M$ is given by $\mathrm{Vol}_F(\Omega) = \int_\Omega \mathrm{d}v_F(x)$. Let $\Omega \subset M$ be an open set. The Sobolev space $W^{1,2}_F(\Omega)$ associated with the Finsler structure $F$ and the canonical Hausdorff measure $\mathrm{d}v_F$ is defined as $$W^{1,2}_F(\Omega) = \left\{u\in W^{1,2}_\mathrm{loc}(\Omega): \int_\Omega {F^*}^2(x,Du(x))\mathrm{d}v_F < \infty \right\}.$$ It can be proved that $W^{1,2}_F(\Omega)$ is the closure of $C^\infty(\Omega)$ with respect to the (generally asymmetric) norm $$\label{Sobolev_norm1} \|u\|_{W^{1,2}_F(\Omega)}=\left(\int_\Omega {F^*}^2(x,Du(x))\,\mathrm{d}v_F + \int_\Omega |u(x)|^2\,\mathrm{d}v_F \right)^{\frac{1}{2}}.$$ The space $W_{0,F}^{1,2}(\Omega)$ is defined as the closure of $C_{0}^{\infty}(\Omega)$ with respect to the norm $\|\cdot\|_{W^{1,2}_F(\Omega)}$. 
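The density function $\sigma_F$ can also be approximated numerically. A Monte Carlo sketch for a Randers-type norm at a fixed point, in Euclidean coordinates (the perturbation $b$ is our own choice; we use the classical fact, stated here as an assumption for the check, that the unit ball of $F(v) = |v| + \langle b,v\rangle$ is an ellipse of Euclidean area $\pi(1-|b|^2)^{-3/2}$):

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([0.3, 0.1])       # Randers perturbation with |b| < 1 (our choice)

def F(v):
    # F(x, v) = |v| + <b, v> at a fixed point x, applied row-wise
    return np.linalg.norm(v, axis=-1) + v @ b

# Monte Carlo estimate of Vol(B_x(1)) = Leb{v in R^2 : F(x, v) < 1}; the unit
# F-ball lies in the Euclidean box [-2, 2]^2 since |v| <= 1/(1 - |b|) on it
N = 1_000_000
pts = rng.uniform(-2.0, 2.0, size=(N, 2))
vol = 16.0 * np.mean(F(pts) < 1.0)
omega_2 = np.pi                # Euclidean area of the unit disc (n = 2)
sigma = omega_2 / vol          # the density sigma_F at the fixed point
exact = np.pi * (1.0 - b @ b) ** (-1.5)
print(vol, exact, sigma)
```

For $b = 0$ the estimate recovers $\sigma_F = 1$, consistent with the Riemannian (here Euclidean) case.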
Since $F$ is not necessarily symmetric, the Sobolev spaces $W^{1,2}_F(\Omega)$ and $W^{1,2}_{0,F}(\Omega)$ are generally asymmetric normed spaces. However, if $(M, F)$ is a complete Finsler manifold and $\Omega$ is a bounded subset of $M$, then due to Farkas, Kristály and Varga [@FarkasKristalyVarga], we have that $W^{1,2}_F(\Omega)$ and $W^{1,2}_{0,F}(\Omega)$ are reflexive biBanach spaces (i.e., complete asymmetric normed spaces, see Cobzaş [@Cobzas]). Furthermore, when $\Omega \subset M$ is an open, bounded set with sufficiently smooth boundary and $n = \text{dim} M \geq 3$, the Sobolev space $W^{1,2}_{F}(\Omega)$ can be continuously embedded into the Lebesgue space $L^q(\Omega)$ (see Hebey [@Hebey Theorem 2.6]) for every $q \in [1,2^*]$, where $2^* = \frac{2n}{n-2}$. In particular, for every $q \in [1,2^*]$, there exists a constant $C_q > 0$ such that $$\label{cont_embedding} \left(\int_\Omega |u(x)|^q\,\mathrm{d}v_F\right)^\frac{1}{q} \leq C_q \left(\int_\Omega F^{*2}(x,Du(x))\,\mathrm{d}v_F\right)^\frac{1}{2},$$ for every $u \in W^{1,2}_{F}(\Omega)$. Moreover, the compact embeddings $W^{1,2}_{F}(\Omega) \hookrightarrow L^q(\Omega)$ are also valid for every $q \in [1,2^*)$, see Hebey [@Hebey Theorem 2.9]. The operators $\mathrm{div}$ and $\Delta_F$ can be defined in a distributional sense as well, see Ohta and Sturm [@OhtaSturm]. For instance, for a function $u \in W^{1,2}_F(\Omega)$, $\Delta_F u$ is defined as $$\label{divergence_theorem} \int_{\Omega} \varphi(x) \Delta_F u(x) ~ \mathrm{d}v_F = -\int_{\Omega} D\varphi(x)\big( \nabla_F u(x)\big) \mathrm{d}v_F,$$ for all $\varphi \in C^{\infty}_0(\Omega)$. # Elements of non-smooth analysis {#Sec3} In this segment, we review the fundamental characteristics of locally Lipschitz functions that find application in our proofs. For more details, see Clarke [@Clarke]. Let $X$ be a real Banach space equipped with the norm $\|\cdot\|$. 
A function $h:X \rightarrow\mathbb{R}$ is said to be locally Lipschitz if every point $u\in X$ has a neighborhood $N_u \subset X$ such that $$%\label{loc-lip} \vert h(u_{1})-h(u_{2})\vert\leq K\| u_{1}-u_{2}\|, \quad \forall u_{1}, u_{2} \in N_u,$$ where $K > 0$ is a constant depending on $N_u$. Now suppose that $h:X\rightarrow\mathbb{R}$ is a locally Lipschitz function. The generalized directional derivative of $h$ at $u\in X$ in the direction $v\in X$ is defined as $$h^{0}(u;v):=\limsup_{\scriptstyle {\it w\rightarrow u}\atop \scriptstyle \it t\searrow 0}\frac{h(w+tv)-h(w)}{t}.$$ Note that if $h$ is of class $C^1$ on $X$, then $h^{0}(u;v) = \langle h'(u), v\rangle$ for all $u, v \in X$. Hereafter, $\langle \cdot, \cdot \rangle$ and $\|\cdot\|_*$ denote the duality pairing between $X^*$ and $X$ and the norm of the dual space $X^*$, respectively. The Clarke subdifferential of the locally Lipschitz function $h:X\rightarrow\mathbb{R}$ at a point $u\in X$ is defined by the set $$\partial h(u):=\left\{\zeta\in X^{\ast}: \langle \zeta, v\rangle\leq h^{0}(u; v),\ \forall v\in X \right\}.$$ An element $u\in X$ is called a critical point of $h$ if $0\in \partial h(u)$, see Chang [@Chang Definition 2.1]. For a locally Lipschitz function, the following assertions are available. **Proposition 1**. *(Clarke [@Clarke]) *Let $h: X\rightarrow\mathbb{R}$ be a locally Lipschitz function. Then the following properties hold:** 1. *$($Lebourg's mean value theorem$)$ Let $U$ be an open subset of a Banach space $X$ and $u, v$ be two points of $U$ such that the line segment $[u,v] = \{(1-t)u+tv: t \in [0,1]\}$ is in $U$. If $h:U\rightarrow\mathbb{R}$ is a Lipschitz function, then there exist a point $w\in (u,v)$ and $\zeta\in \partial h(w)$ such that $h(v)-h(u) = \langle \zeta,v-u\rangle.$* 2. *If $j:X\to \mathbb R$, $j \in C^1(X)$, then $\partial (j+h)(u)=j'(u)+\partial h(u)$ and $(j+h)^0(u;v)=\langle j'(u),v\rangle +h^0(u;v)$ for every $u,v\in X.$* 3. *$(-h)^0(u;v)=h^0(u;-v)$ for every $u,v\in X.$* 4.
*$\partial (\alpha h)(u)=\alpha \partial h(u)$ for every $\alpha\in \mathbb R$ and $u\in X.$* # Proof of Theorem [Theorem 1](#theorem1){reference-type="ref" reference="theorem1"} {#Sec4} ## Proof of the non-existence result $(\textbf{R}_1)$ Let $u \in W^{1,2}_{0,F}(\Omega)$ be a weak solution of $(\mathcal DI)_\lambda$. By density arguments, we can choose $\varphi \coloneqq u$ in [\[weak_solution\]](#weak_solution){reference-type="eqref" reference="weak_solution"}, thus we obtain $$\label{egyik} \int_\Omega {F^*}^2(x,Du(x))\,\mathrm{d}v_F = \lambda \int_\Omega \xi_x u(x)\,\mathrm{d}v_F,$$ where $\xi_x \in \partial G(u(x))$ for every $x \in \Omega$ such that $x \mapsto \xi_x u(x)$ belongs to $L^1(\Omega)$. Based on the assumptions $(\textbf{H}_1)$ and $(\textbf{H}_2)$, for every $\varepsilon>0$, one can find the numbers $\delta_1, \delta_2>0$ such that $$\label{A1_and_A2_concl} |\xi|\leq \varepsilon t, ~ \forall \xi \in \partial G(t) \text{ and } \forall 0 < t < \delta_1 \textnormal{ or } t > \delta_2.$$ Then for every $\varepsilon>0$ we have that $$\label{masik} \int_\Omega \xi_x u(x)\,\mathrm{d}v_F \leq \varepsilon \int_\Omega |u(x)|^2\,\mathrm{d}v_F.$$ Consequently, from [\[egyik\]](#egyik){reference-type="eqref" reference="egyik"}, [\[masik\]](#masik){reference-type="eqref" reference="masik"} and the continuous embedding $W^{1,2}_{F}(\Omega) \subset L^2(\Omega)$ (see [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"}), it follows that $$(\lambda \varepsilon C_2^2 -1 )\int_\Omega {F^*}^2(x,Du(x))\,\mathrm{d}v_F \geq 0,$$ where $C_2$ denotes the embedding constant from [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"} in the case $q=2$. Accordingly, if $\lambda \varepsilon C_2^2 < 1$, i.e., when $$\lambda < \frac{1}{\varepsilon C_2^2} \eqqcolon \lambda_1,$$ then we necessarily have $u = 0$ a.e. on $\Omega$, which concludes the proof of $(\textbf{R}_1)$. 
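Before turning to the existence part, the non-smooth notions from Section 3 can be probed numerically; a minimal sketch for the model function $h(t) = |t|$ (our own example, not from the paper), for which $h^0(0;v) = |v|$ and hence $\partial h(0) = [-1,1]$:

```python
import numpy as np

def h(t):
    return abs(t)

def h0(u, v, n=200):
    # generalized directional derivative h^0(u; v): limsup over w -> u and
    # t -> 0+ of (h(w + t v) - h(w)) / t, probed on a small grid
    ws = u + np.linspace(-1e-3, 1e-3, n)
    ts = np.logspace(-6.0, -3.0, n)
    return max((h(w + t * v) - h(w)) / t for w in ws for t in ts)

# at the kink u = 0 one gets h^0(0; v) = |v|, so dh(0) = [-1, 1]
print(h0(0.0, 1.0), h0(0.0, -1.0))
```

Both printed values approach $1$, matching $h^0(0;\pm 1) = 1$; in particular $0 \in \partial h(0)$, so the kink is a critical point in the sense of Chang used throughout this section.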
## Proof of the existence/multiplicity result $(\textbf{R}_2)$ {#B} Having in mind the inclusion $(\mathcal DI)_\lambda$, let us construct the modified problem $$\left\{ \begin{array}{lll} -\Delta_F u(x) \in \lambda \partial \widetilde{G}(u(x)),& x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{array}\right. \eqno{(\widetilde{\mathcal DI})_\lambda}$$ where we define $\widetilde{G}: \mathbb R \to \mathbb R$ as $$\widetilde{G}(t) = \begin{cases} 0, & \text{ if } t < 0 \\ G(t), & \text{ if } t \geq 0. \end{cases}$$ Clearly, $\widetilde{G}$ is a locally Lipschitz function which also satisfies hypotheses $(\textbf{H}_1)-(\textbf{H}_3)$. We consider the energy functional associated with problem $(\widetilde{\mathcal DI})_\lambda$ for every $\lambda >0$, namely $$\mathcal{E}_\lambda: W^{1,2}_{0,F}(\Omega)\to \mathbb{R}, \quad \mathcal{E}_\lambda(u) = \frac{1}{2} \mathcal{N}(u) - \lambda \mathcal{G}(u),$$ where $\mathcal{N}, \mathcal{G}: W^{1,2}_{0,F}(\Omega)\to \mathbb{R}$, $$\mathcal{N}(u) = \int_\Omega{F^*}^2(x,Du(x))\,\mathrm{d}v_F$$ and $$\mathcal{G}(u)=\int_\Omega \widetilde{G}(u(x))\,\mathrm{d}v_F.$$ Let us observe that if $u \in W^{1,2}_{0,F}(\Omega)$ is a weak solution of $(\widetilde{\mathcal DI})_\lambda$, then $u$ is a nonnegative weak solution of the original problem $(\mathcal DI)_\lambda$. Indeed, suppose that there exists $u \in W^{1,2}_{0,F}(\Omega)$ a nontrivial solution of $(\widetilde{\mathcal DI})_\lambda$ for some $\lambda$. Let $u_-(x) = \min(0,u(x))$ and $\Omega_- = \{x \in \Omega: u(x) < 0\}$. Multiplying the first equation of $(\widetilde{\mathcal DI})_\lambda$ by $u_-$ and integrating over $\Omega$, we obtain that $$\int_{\Omega_-} F^{*2}(x, Du_-(x)) \mathrm{d}v_F = 0,$$ which in turn yields that $u_- = 0$, since $u = 0$ on $\partial \Omega$. Hence $u \geq 0$ on $\Omega$, which means that $\widetilde{G}(u(x)) = G(u(x))$ on $\Omega$. 
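The truncation $\widetilde{G}$ can be sanity-checked on a model nonlinearity (our own choice, picked only to be locally Lipschitz with $G(0)=0$; it need not satisfy $(\textbf{H}_1)$-$(\textbf{H}_2)$ for this purely structural check): since $G(0) = 0$, glueing the zero function at $t = 0$ keeps difference quotients across the origin bounded, so $\widetilde{G}$ remains locally Lipschitz.

```python
def G(t):
    # model locally Lipschitz nonlinearity with G(0) = 0 (our choice)
    return min(t * t, 2.0 * abs(t))

def G_tilde(t):
    # the truncation of the modified problem: 0 on (-inf, 0), G on [0, inf)
    return 0.0 if t < 0.0 else G(t)

# difference quotients of G~ across the glueing point t = 0 stay bounded,
# so the truncation preserves the locally Lipschitz property
quots = [abs(G_tilde(s) - G_tilde(-s)) / (2.0 * s) for s in (1e-1, 1e-3, 1e-6)]
print(quots)
```

For this model the quotients in fact tend to $0$, reflecting that $G$ vanishes quadratically at the origin.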
Therefore, it is enough to study the weak solutions of problem $(\widetilde{\mathcal DI})_\lambda$, which are given by the critical points of $\mathcal{E}_\lambda$. Accordingly, in the following subsections we shall study the properties of the energy functional $\mathcal{E}_\lambda$, namely: coercivity, boundedness from below and the validity of the non-smooth Palais-Smale condition. ## $\mathcal{E}_{\lambda}$ is coercive and bounded below {#Coercivity} In this subsection we prove that the energy functional $\mathcal E_{\lambda}$ is coercive and bounded below. Let $\delta >0$ be arbitrarily fixed, and let $S_1, S_2 \subset \Omega$ denote the following level sets for some function $u \in W^{1,2}_{0,F}(\Omega)$: $$\begin{aligned} {} S_1 &= \{x \in \Omega: u(x)\leq\delta \} ~ ~ \text{ and} \\ S_2 &= \{x \in \Omega: u(x)>\delta \}. \end{aligned}$$ Evidently, we have $$\mathcal{G}(u) = \int_{S_1} \widetilde{G}(u(x))\,\mathrm{d}v_F + \int_{S_2} \widetilde{G}(u(x))\,\mathrm{d}v_F .$$ For the first term, we can write $${} \int_{S_1} \widetilde{G}(u(x)) {\rm d}v_F \leq \max_{t \leq \delta} |\widetilde{G}(t)| \cdot \mathrm{Vol}_F(\Omega).$$ For the second term, we apply Lebourg's mean value theorem (see Proposition [Proposition 1](#prop-lok-lip-0){reference-type="ref" reference="prop-lok-lip-0"}, *1)*). 
Then, based on relation [\[A1_and_A2_concl\]](#A1_and_A2_concl){reference-type="eqref" reference="A1_and_A2_concl"} we conclude that for every $\varepsilon>0$ we can choose $\delta$ such that $$\begin{aligned} \int_{S_2} \widetilde{G}(u(x)) {\rm d}v_F &= \int_{S_2} \left[\widetilde{G}(u(x))-\widetilde{G}(\delta)\right] {\rm d}v_F + \int_{S_2} \widetilde{G}(\delta) {\rm d}v_F \nonumber\\ &= \int_{S_2} \langle \xi(x), u(x)-\delta\rangle {\rm d}v_F + \int_{S_2} \widetilde{G}(\delta) {\rm d}v_F \nonumber \\ &\leq \varepsilon \int_{\Omega} |u(x)|^2 {\rm d}v_F + \widetilde{G}(\delta)\mathrm{Vol}_F(\Omega).\nonumber \end{aligned}$$ Therefore, taking into account the continuous embedding $W^{1,2}_F(\Omega) \subset L^2(\Omega)$ (see [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"} in the case $q=2$), it results that $$\begin{aligned} \mathcal E_{\lambda}(u) &= \frac{1}{2}\mathcal{N}(u) - \lambda \mathcal{G}(u) \nonumber \\ &\geq \left(\frac{1}{2} - \lambda \varepsilon C_2^2\right)\mathcal{N}(u) \nonumber \\ &- \lambda \bigg(\max_{t \leq \delta} |\widetilde{G}(t)| + \widetilde{G}(\delta)\bigg)\mathrm{Vol}_F(\Omega). \nonumber \end{aligned}$$ Since $\Omega$ is bounded, by choosing $\varepsilon < \frac{1}{2C_2^2\lambda}$ sufficiently small, it follows that the energy functional $\mathcal E_{\lambda}$ is coercive and bounded from below. ## $\mathcal E_{\lambda}$ satisfies the non-smooth Palais-Smale condition {#PS} Let $(u_k)_k$ be a Palais-Smale sequence for $\mathcal E_{\lambda}$ in $W^{1,2}_{0,F}(\Omega)$, i.e., suppose that $(\mathcal{E}_{\lambda}(u_k))_k$ is bounded and $m(u_k) \to 0$ as $k \to \infty$, where $$m(u) = \min \{\|\xi\|_* : \xi \in \partial \mathcal{E}_{\lambda}(u)\}.$$ Due to the coercivity of $\mathcal{E}_{\lambda}$, it follows that the sequence $(u_k)_k$ is bounded in $W^{1,2}_{0,F}(\Omega)$.
As $W^{1,2}_{F}(\Omega)$ can be compactly embedded into $L^{q}(\Omega)$ for any $q \in [2,2^*)$ (see [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"}), there exists a function $u \in W^{1,2}_{0,F}(\Omega)$ such that, up to a subsequence, $u_k\rightharpoonup u$ weakly in $W^{1,2}_{0,F}(\Omega)$ and $u_k \to u$ strongly in $L^q(\Omega)$ for every $q \in [2,2^*)$. Based on Proposition [Proposition 1](#prop-lok-lip-0){reference-type="ref" reference="prop-lok-lip-0"}, we can write for the generalized directional derivative of $\mathcal E_{\lambda}$ that $$\mathcal E_{\lambda}^{0}(u;u_k-u) = \frac{1}{2}\langle \mathcal{N}^{'}(u), u_k-u \rangle + \lambda (-\mathcal{G})^{0}(u;u_k-u)$$ and $$\mathcal E_{\lambda}^{0}(u_k;u-u_k) = \frac{1}{2}\langle \mathcal{N}^{'}(u_k), u-u_k \rangle + \lambda (-\mathcal{G})^{0}(u_k;u-u_k).$$ Our aim is to prove that, up to a subsequence, $(u_k)_k$ strongly converges to $u$ in $W^{1,2}_{0,F}(\Omega)$. Therefore, we analyze the expression $$\begin{aligned} I_k &\coloneqq \int_{\Omega} (Du - Du_k)(\nabla_F u - \nabla_F u_k)\,\mathrm{d}v_F \\ % &= \int_{\Omega} Du(\nabla_F u - \nabla_F u_k)\,\mathrm{d}v_F \\ % &- \int_{\Omega} Du_k(\nabla_F u - \nabla_F u_k)\,\mathrm{d}v_F \\ &= \langle \mathcal{N}^{'}(u), u-u_k \rangle - \langle \mathcal{N}^{'}(u_k), u-u_k \rangle \\ &= 2 \cdot \big\{-\mathcal E_{\lambda}^{0}(u;u_k-u) + \lambda (-\mathcal{G})^{0}(u;u_k-u) \\ &-\mathcal E_{\lambda}^{0}(u_k;u-u_k) + \lambda (-\mathcal{G})^{0}(u_k;u-u_k)\big\}. \end{aligned}$$ First, since $(u_k)_k$ is a Palais-Smale sequence for $\mathcal E_{\lambda}$, we have that $$\mathcal E_{\lambda}^{0}(u;u_k-u) +\mathcal E_{\lambda}^{0}(u_k;u-u_k) \to 0 ~ \text{ as } ~ k\to \infty.$$ For the remaining terms, first we recall the fact that $\partial \widetilde{G}$ is upper semicontinuous. 
Therefore, applying the Weierstrass theorem with conditions $(\textbf{H}_1)$ and $(\textbf{H}_2)$ gives us the boundedness of the function $$\label{boundedness} t \mapsto \frac{\max \{ |\xi|: \xi \in \partial \widetilde{G}(t) \} }{t^{p-1}},$$ where $t >0$ and $p \in (2,2^*)$. Combining [\[A1_and_A2_concl\]](#A1_and_A2_concl){reference-type="eqref" reference="A1_and_A2_concl"} with the boundedness of [\[boundedness\]](#boundedness){reference-type="eqref" reference="boundedness"} on $[\delta_1, \delta_2]$ yields that for every $\varepsilon>0$, there exists $k_{\varepsilon} > 0$ such that $$\label{estimate_1} |\xi| \leq \varepsilon t + k_{\varepsilon}t^{p-1}, ~ \forall t \geq 0, ~ \forall \xi \in \partial \widetilde{G}(t).$$ Accordingly, it follows that $$\begin{aligned} S_k&\coloneqq \mathcal{G}^{0}(u;u-u_k) + \mathcal{G}^{0}(u_k;u_k-u) \\ &\leq \int_\Omega \left[\widetilde{G}^{0}(u;u-u_k) + \widetilde{G}^{0}(u_k;u_k-u) \right]{\rm d}v_F \\ &\leq \int_\Omega \left[\max_{\xi}\{\xi(u-u_k)\} + \max_{\eta_k}\{\eta_k(u_k-u)\} \right]{\rm d}v_F \\ &\leq 2 \varepsilon\left[\|u\|^2_{L^2(\Omega)}+\|u_k\|^2_{L^2(\Omega)}\right] \\ &+ k_\varepsilon \|u-u_k\|_{L^p(\Omega)} \left[\|u\|^{p-1}_{L^p(\Omega)}+\|u_k\|^{p-1}_{L^p(\Omega)}\right] , \end{aligned}$$ where $\xi \in \partial \widetilde{G}(u)$ and $\eta_k \in \partial \widetilde{G}(u_k)$. Since $u_k \to u$ strongly in $L^q(\Omega)$ for any $q \in [2,2^*)$, it follows that $\limsup_{k \to \infty} {S_k} \leq 0$. On the other hand, by using [\[mean_theorem\]](#mean_theorem){reference-type="eqref" reference="mean_theorem"} on $\Omega$, we obtain that $$\begin{aligned} I_k &= \int_\Omega (Du - Du_k)(\nabla_F u-\nabla_F u_k)\,{\rm d}v_F \geq \\ & \geq l_{F, \Omega} \int_\Omega F^{*2}(Du(x) - Du_k(x))\,{\rm d}v_F , \end{aligned}$$ where $$l_{F, \Omega} = \inf_{x\in \Omega} ~ \inf_{v,w,z\in T_x M\setminus \{0\}}\frac{g_{(x,w)}(v,v)}{g_{(x,z)}(v,v)}$$ is the uniformity constant associated with $\Omega$.
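The growth estimate $|\xi| \leq \varepsilon t + k_{\varepsilon}t^{p-1}$ established above can be checked numerically for a model bound on $\max\{|\xi| : \xi \in \partial \widetilde{G}(t)\}$ (our assumption: the bound $\min(t^2,1)$, which is superlinear near $0$ and sublinear at infinity); the sketch computes the smallest admissible $k_{\varepsilon}$ on a grid for $p = 3 \in (2,2^*)$, which is valid e.g. when $n = 3$.

```python
import numpy as np

def g_bound(t):
    # model bound for max{|xi| : xi in dG~(t)} (our assumption): it vanishes
    # superlinearly at 0 and is sublinear at infinity, as (H1)-(H2) require
    return np.minimum(t ** 2, 1.0)

p = 3.0                        # any p in (2, 2*); p = 3 works e.g. for n = 3
eps = 0.05
t = np.logspace(-4.0, 4.0, 400_000)
# smallest k_eps on the grid with g_bound(t) <= eps*t + k_eps * t^(p-1)
k_eps = float(np.max(np.maximum(g_bound(t) - eps * t, 0.0) / t ** (p - 1.0)))
print(k_eps)
```

The supremum is attained for $t$ of order $1$, i.e. on the compact set where neither the near-zero nor the near-infinity decay helps, which is exactly the role of the interval $[\delta_1,\delta_2]$ in the argument above.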
Since $\Omega \subset M$ is bounded, we can prove that $0 < l_{F, \Omega} \leq 1$, thus we obtain that $$\int_\Omega F^{*2}(Du(x) - Du_k(x))\,{\rm d}v_F \to 0$$ as $k \to \infty$. Therefore, $u_k \to u$ strongly in $W^{1,2}_{0,F}(\Omega)$, which proves the claim. ## First solution In this subsection we prove that there exists a local minimum for the energy functional $\mathcal{E}_{\lambda}$. By applying Lebourg's mean value theorem (see Proposition [Proposition 1](#prop-lok-lip-0){reference-type="ref" reference="prop-lok-lip-0"}, *1)*), estimate [\[estimate_1\]](#estimate_1){reference-type="eqref" reference="estimate_1"} together with assumption $(\textbf{H}_3)$ gives the inequality $$%\label{estimate_2} 0 \leq |\widetilde{G}(t)| \leq \varepsilon t^2 + k_{\varepsilon}|t|^p \textnormal{, } ~ \forall t \in \mathbb{R}.$$ Therefore, the continuous embeddings $W^{1,2}_{F}(\Omega) \subset L^q(\Omega)$ for every $q \in [2,2^*)$ yield that $$\begin{aligned} \label{G_estimate} 0&\leq|\mathcal{G}(u)| \leq \int_{\Omega} |\widetilde{G}(u(x))|\,{\rm d}v_{F} \nonumber\\ &\leq \varepsilon \int_{\Omega} u(x)^2\,{\rm d}v_F + k_{\varepsilon} \int_{\Omega} |u(x)|^p\,{\rm d}v_F \nonumber\\ &\leq \varepsilon C^2_2 \int_{\Omega} F^{*2}(x,Du(x)){\rm d}v_F \\ &+ k_{\varepsilon} C_p^p \left(\int_{\Omega} F^{*2}(x,Du(x)){\rm d}v_F\right)^\frac{p}{2}, \nonumber \end{aligned}$$ for all $u \in W^{1,2}_{0,F}(\Omega)$, where $C_2$ and $C_p$ denote the embedding constants from [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"}, with $p \in (2, 2^*)$.
Since $\varepsilon >0$ can be arbitrarily small and $p>2$, the previous estimate implies that for every $u \in W^{1,2}_{0,F}(\Omega) \setminus \{0\}$, we have $$\label{nullaban} \lim_{\mathcal{N}(u)\to 0}\frac{\mathcal{G}(u)}{\mathcal{N}(u)} = 0.$$ Similarly, it can be proven that $$\label{vegtelenben} \lim_{\mathcal{N}(u)\to \infty}\frac{\mathcal{G}(u)}{\mathcal{N}(u)} = 0.$$ Indeed, taking into account the boundedness of the function $$t \mapsto \frac{\max \{ |\xi|: \xi \in \partial \widetilde{G}(t) \} }{t^{1/2}}$$ on $[\delta_1, \delta_2]$ together with relation ([\[A1_and_A2_concl\]](#A1_and_A2_concl){reference-type="ref" reference="A1_and_A2_concl"}), for every $\varepsilon >0$ one can find an $l_{\varepsilon} > 0$ such that $$|\xi| \leq \varepsilon t + l_{\varepsilon}t^{1/2}, ~ \forall t \geq 0, ~ \forall \xi \in \partial \widetilde{G}(t).$$ In a similar fashion as before, applying Lebourg's mean value theorem and the continuous Sobolev embeddings, for all $u \in W^{1,2}_{0,F}(\Omega)$ we obtain the estimate $$\begin{aligned} 0 &\leq \mathcal{G}(u) \leq \int_{\Omega} |\widetilde{G}(u(x))| \mathrm{d}v_{F}\\ &\leq \varepsilon \int_{\Omega} u(x)^2 \mathrm{d}v_F + l_{\varepsilon} \int_{\Omega} |u(x)|^{\frac{3}{2}} \mathrm{d}v_F \\ &\leq \varepsilon C_2^2 \int_{\Omega} F^{*2}(x,Du(x)) \mathrm{d}v_F \\ &+ l_{\varepsilon} C_{3/2}^{\frac{3}{2}} \left(\int_{\Omega} F^{*2}(x,Du(x))\mathrm{d}v_F\right)^{\frac{3}{4}}, \end{aligned}$$ where $C_{3/2}$ and $C_2$ stand for the appropriate Sobolev embedding constants (see inequality [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"} when $q \in \{\frac{3}{2}, 2\}$). Since $\varepsilon>0$ can be arbitrarily small, the previous inequalities immediately imply [\[vegtelenben\]](#vegtelenben){reference-type="eqref" reference="vegtelenben"}. 
On account of [\[nullaban\]](#nullaban){reference-type="eqref" reference="nullaban"} and [\[vegtelenben\]](#vegtelenben){reference-type="eqref" reference="vegtelenben"}, we can conclude that $$%\label{boundness} 0 < \sup_{u \in W^{1,2}_{0,F}(\Omega)\setminus\{0\}}\frac{\mathcal{G}(u)}{\mathcal{N}(u)}<+\infty.$$ Therefore, one has $$0 < \lambda_2 \coloneqq \inf_{\substack{u\in W^{1,2}_{0,F}(\Omega) \\ \mathcal G(u)> 0}}\frac{\mathcal N(u)}{2\mathcal G(u)} < +\infty.$$ Now let us fix a parameter $\lambda > \lambda_2$. Then there exists a function $w_\lambda \in W^{1,2}_{0,F}(\Omega)$ such that $\mathcal{G}(w_\lambda)>0$ and $$\lambda > \frac{\mathcal{N}(w_\lambda)}{2\mathcal{G}(w_\lambda)} \geq \lambda_2.$$ This yields that $$\begin{aligned} C_\lambda^1 &\coloneqq \inf_{u \in W^{1,2}_{0,F}(\Omega)} \mathcal{E}_{\lambda}(u) \\ &\leq \mathcal{E}_{\lambda}(w_\lambda) = \frac{1}{2}\mathcal{N}(w_\lambda)-\lambda \mathcal{G}(w_\lambda)<0. \end{aligned}$$ Since the energy functional $\mathcal{E}_{\lambda}$ is bounded from below and verifies the non-smooth Palais-Smale condition (see Sections [4.3](#Coercivity){reference-type="ref" reference="Coercivity"} & [4.4](#PS){reference-type="ref" reference="PS"}), it follows by Chang [@Chang Theorem 3.5] that $C_\lambda^1$ is a critical value of $\mathcal{E}_{\lambda}$, thus there exists a function $u_{\lambda}^1 \in W^{1,2}_{0,F}(\Omega)$ such that $C_\lambda^1 = \mathcal{E}_{\lambda}(u_{\lambda}^1)$ and $0 \in \partial \mathcal{E}_{\lambda}(u_{\lambda}^1)$. In particular, we have $u_{\lambda}^1 \neq 0$, since $C_\lambda^1 = \mathcal{E}_{\lambda}(u_{\lambda}^1) < 0 = \mathcal{E}_{\lambda}(0)$, which also implies that $\lambda_2 > \lambda_1$. According to Section [4.2](#B){reference-type="ref" reference="B"}, $u_{\lambda}^1$ is a nontrivial nonnegative weak solution of $({\mathcal DI})_\lambda$ when $\lambda > \lambda_2 > \lambda_1$.
## Second solution

This section provides another, minimax-type critical point for the energy functional $\mathcal{E}_{\lambda}$. Let $\lambda >\lambda_2$ and $\varepsilon>0$ be arbitrarily fixed. Estimate [\[G_estimate\]](#G_estimate){reference-type="eqref" reference="G_estimate"} yields that for every $p \in (2,2^*)$ and $u \in W^{1,2}_{0,F}(\Omega)$, $$\begin{aligned} \label{E_estimate} \mathcal{E}_\lambda(u) &= \frac{1}{2}\mathcal{N}(u) - \lambda \mathcal{G}(u) \nonumber\\ &\geq \left( \frac{1}{2} -\lambda\varepsilon C_2^2\right) \int_{\Omega} F^{*2}(x,Du(x))\mathrm{d}v_F \\ &-\lambda k_{\varepsilon} C_p^p \left(\int_{\Omega} F^{*2}(x,Du(x))\mathrm{d}v_F\right)^\frac{p}{2}, \nonumber \end{aligned}$$ where $C_2$ and $C_p$ denote the Sobolev embedding constants from [\[cont_embedding\]](#cont_embedding){reference-type="eqref" reference="cont_embedding"} for every $p \in (2, 2^*)$. By choosing $\varepsilon$ small enough, namely $\varepsilon < \frac{1}{2\lambda C_2^2}$, we obtain that $$\rho_\lambda \coloneqq \left(\frac{\frac{1}{2}-\lambda\varepsilon C_2^2}{\lambda k_{\varepsilon}C_p^p}\right)^{\frac{1}{p-2}}>0.$$ Hence, for any fixed $\rho \in (0,\rho_{\lambda})$, relation [\[E_estimate\]](#E_estimate){reference-type="eqref" reference="E_estimate"} implies that $$\displaystyle\inf_{\substack{u\in W^{1,2}_{0,F}(\Omega) \\ \mathcal{N}(u) = \rho^2}}\mathcal{E}_{\lambda}(u) \geq \left(\frac{1}{2}-\lambda\varepsilon C_2^2\right)\rho^2 - \lambda k_{\varepsilon} C_p^p \rho^p > 0,$$ where the last inequality holds precisely because $\rho < \rho_\lambda$; moreover, we recall that $0 = \mathcal{E}_{\lambda}(0) > \mathcal{E}_{\lambda}(w_\lambda)$. Accordingly, $\mathcal{E}_{\lambda}$ has the mountain pass geometry.
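The sign analysis behind [\[E_estimate\]](#E_estimate){reference-type="eqref" reference="E_estimate"} on spheres can be sanity-checked numerically: restricted to $\mathcal{N}(u)=\rho^2$, the lower bound is $h(\rho)=(\tfrac{1}{2}-\lambda\varepsilon C_2^2)\rho^2-\lambda k_{\varepsilon}C_p^p\rho^p$, which vanishes exactly at $\rho_\lambda$ and is strictly positive on $(0,\rho_\lambda)$. A minimal sketch with illustrative constants (all values of $\lambda,\varepsilon,C_2,C_p,k_\varepsilon,p$ below are hypothetical):

```python
# Numerical sanity check (illustrative constants, not from the paper) of the
# lower bound  h(rho) = (1/2 - lam*eps*C2**2)*rho**2 - lam*k_eps*Cp**p * rho**p
# obtained by restricting E_estimate to the sphere N(u) = rho**2.
lam, eps, C2, Cp, k_eps, p = 3.0, 0.01, 1.0, 1.0, 1.0, 4.0

c = 0.5 - lam * eps * C2 ** 2
assert c > 0  # eps was chosen below 1/(2*lam*C2**2)

rho_lam = (c / (lam * k_eps * Cp ** p)) ** (1.0 / (p - 2))

def h(rho):
    return c * rho ** 2 - lam * k_eps * Cp ** p * rho ** p

assert abs(h(rho_lam)) < 1e-12  # the bound degenerates exactly at rho_lam
assert h(0.5 * rho_lam) > 0     # strictly positive inside (0, rho_lam)
assert h(2.0 * rho_lam) < 0     # and the bound fails beyond rho_lam
```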
Since $\mathcal{E}_{\lambda}$ also verifies the non-smooth Palais-Smale condition (see Section [4.4](#PS){reference-type="ref" reference="PS"}), we can apply the non-smooth mountain pass theorem (see e.g., Kristály, Motreanu and Varga [@KMV Theorem 2]), which implies that there exists a critical point $u_{\lambda}^2 \in W^{1,2}_{0,F}(\Omega)$, such that $0 \in \partial \mathcal E_{\lambda}(u_{\lambda}^2)$ and $$C^2_\lambda \coloneqq \mathcal E_{\lambda}(u_{\lambda}^2) = \inf_{\gamma \in \Gamma} \max_{t\in [0,1]}\mathcal{E}_{\lambda}(\gamma(t)),$$ where $$\Gamma = \bigg\{\gamma \in C\left([0,1];W^{1,2}_{0,F}(\Omega)\right): \gamma(0) = 0, \gamma(1) = w_\lambda \bigg\}.$$ Considering the fact that $$C^2_\lambda = \mathcal{E}_{\lambda}(u_{\lambda}^2)>0 = \mathcal{E}_{\lambda}(0) > \mathcal{E}_{\lambda}(u_{\lambda}^1),$$ we clearly have $u_{\lambda}^1 \neq u_{\lambda}^2$ and $u_{\lambda}^2 \neq 0$. Taking into account Section [4.2](#B){reference-type="ref" reference="B"}, it follows that $u_{\lambda}^2$ is the second nontrivial nonnegative weak solution of the inclusion problem $({\mathcal DI})_\lambda$, which concludes the proof of $(\textbf{R}_2)$.

# Acknowledgment

Á. Mester was supported by the ÚNKP-22-4 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund.

A. Ambrosetti and P. Rabinowitz, *Dual variational methods in critical point theory and applications*. J. Funct. Anal. 14, 349--381, 1973. D. Bao, S.-S. Chern and Z. Shen, *An introduction to Riemann-Finsler geometry*. Graduate Texts in Mathematics 200. Springer-Verlag, New York, 2000. G. Bonanno, G.M. Bisci and V.D. Rădulescu, *Nonlinear elliptic problems on Riemannian manifolds and applications to Emden--Fowler type equations*. Manuscripta Math. 142, 157--185, 2013. S. Carl and V.K. Le, *Multi-valued variational inequalities and inclusions*. Springer Monographs in Mathematics. Springer, Cham, 2021. K.C.
Chang, *Variational methods for nondifferentiable functionals and their applications to partial differential equations*. J. Math. Anal. Appl. 80, 102--129, 1981. S. Y. Cheng and S. T. Yau, *Differential equations on Riemannian manifolds and their geometric applications*. Comm. Pure Appl. Math. 28, no.3, 333--354, 1975. F.H. Clarke, *Optimization and Non-smooth Analysis*. John Wiley & Sons, New York, 1983. S. Cobzaş, *Functional Analysis in Asymmetric Normed Spaces*. Birkhäuser/Springer Basel, Basel, 2013. D. Egloff, *Uniform Finsler Hadamard manifolds*. Ann. Inst. H. Poincaré Phys. Théor. 66, no.3, 323--357, 1997. A. Farina, Y. Sire and E. Valdinoci, *Stable Solutions of Elliptic Equations on Riemannian Manifolds*. J. Geom. Anal. 23, 1158--1172, 2013. C. Farkas, A. Kristály and C. Varga, *Singular Poisson equations on Finsler-Hadamard manifolds*. Calc. Var. and PDE 54, no. 2, 1219--1241, 2015. E. Hebey, *Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequalities*. American Mathematical Society 5, 2000. A. Kristály, Á. Mester and I.I. Mezei, *Sharp Morrey-Sobolev inequalities and eigenvalue problems on Riemannian- Finsler manifolds with nonnegative Ricci curvature*, Commun. Contemp. Math., in press, 2022. DOI:10.1142/S0219199722500638. A. Kristály, I. I. Mezei and K. Szilák, *Differential inclusions involving oscillatory terms*. Nonlinear Analysis 197, 111834, 2020. A. Kristály, I. Mezei and K. Szilák, *Elliptic differential inclusions on non-compact Riemannian manifolds*. Nonlinear Analysis - Real World Applications 69, 103740, 2023. A. Kristály, V. Motreanu and C. Varga, *A minimax principle with general Palais-Smale conditions*. Comm. Appl. Anal. 9, no. 2, 285--299, 2005. A. Kristály and I. Rudas, *Elliptic problems on the ball endowed with Funk-type metrics*. Nonlinear Anal. 119, 199--208, 2015. H.-B. Rademacher, *A sphere theorem for non-reversible Finsler metrics*. Math. Ann. 328, no. 3, 373--387, 2004. S. 
Ohta, *Uniform convexity and smoothness, and their applications in Finsler geometry*. Math. Ann. 343, no. 3, 669--699, 2009. S. Ohta and K.-T. Sturm, *Heat flow on Finsler manifolds*. Comm. Pure Appl. Math. 62, no. 10, 1386--1433, 2009. Z. Shen, *Lectures on Finsler geometry*. World Scientific Publishing Co., Singapore, 2001.
arXiv:2309.05399, "A Dirichlet inclusion problem on Finsler manifolds", Ágnes Mester and Károly Szilák (math.AP).
--- abstract: | This article investigates the existence, non-existence, and multiplicity of weak solutions for a parameter-dependent nonlocal Schrödinger-Kirchhoff type problem on $\mathbb R^N$ involving singular non-linearity. By performing a nuanced analysis based on Nehari submanifolds and fibre maps, our goal is to show the problem has at least two positive solutions even if $\lambda$ lies beyond the extremal parameter $\lambda_\ast$. author: - | Deepak Kumar Mahanta$^{1}$, Tuhina Mukherjee$^{1}$ and Abhishek Sarkar$^{1,}$[^1]\ $^{1}$ Department of Mathematics, Indian Institute of Technology Jodhpur, Rajasthan 342030, India\ bibliography: - ref.bib title: On the extreme value of Nehari manifold for nonlocal singular Schrödinger-Kirchhoff equations in $\mathbb{R}^N$ --- ***Keywords---*** Nehari manifold, Schrödinger-Kirchhoff equations, fractional $p$-Laplacian, singular problems, extreme values of parameter, variational methods\ ***Mathematics Subject Classification---*** 35J50, 35J75, 35J60, 35R11

# Introduction

Given $0<s<1$, $p\in(1,\infty)$ and $sp<N$, we consider the following fractional $p$-Laplacian singular problem of Schrödinger-Kirchhoff type: $$\label{main problem} \begin{cases} M\big( \|u\|^p\big)\bigg[\big(-\Delta_p\big)^s u+V(x)u^{p-1}\bigg]=\lambda\alpha(x)u^{-\delta}+\beta(x)u^{\gamma-1}~~\text{in}~~\mathbb{R}^N,\\ ~~~~~~u>0~~\text{in}~~\mathbb{R}^N,~\displaystyle \int_{\mathbb{R}^N} V(x)u^p~\mathrm{d}x<\infty,~ u\in W^{s,p}(\mathbb{R}^N), \end{cases}\tag{$\mathcal{E}_\lambda$}$$ with $$\|u\|=\Bigg(\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y+\int_{\mathbb{R}^{N}}V(x)|u|^p~\mathrm{d}x\Bigg)^{\frac{1}{p}},$$ where $N\geq 2$, $0<\delta<1$, $\lambda>0$ is a real parameter, $M:\mathbb{R}^+\to\mathbb{R}^+$ (where $\mathbb{R}^+:=(0,\infty)$) is a Kirchhoff function, $V:\mathbb{R}^N\to \mathbb{R}^+$ is a potential function and $\big(-\Delta_p\big)^s$ is the fractional $p$-Laplacian operator which, up
to a normalization constant, is defined as $$\big(-\Delta_p\big)^s \phi(x)= 2\lim_{\epsilon\to 0^+}\int_{\mathbb{R}^N\setminus B_{\epsilon}(x)} \frac{|\phi(x)-\phi(y)|^{p-2}(\phi(x)-\phi(y))}{|x-y|^{N+sp}}~\mathrm{d}y,~~x\in \mathbb{R}^N,$$ for any $\phi\in\mathcal{C}_0^{\infty}(\mathbb{R}^N)$, where $B_{\epsilon}(x)=\{y\in \mathbb{R}^N: |x-y|<\epsilon\}$. More details on the fractional $p$-Laplacian operator can be found in [@goyal2015existence; @franzina2014fractional] and the references therein. Throughout the paper, we assume the following hypotheses:

- $(H1)$ $0<\delta<1<p<\gamma<p^*_{s}-1$, where $p^*_{s}=\frac{Np}{N-sp}$ is the fractional Sobolev critical exponent.

- $(H2)$ $\alpha>0$ a.e. in $\mathbb{R}^N$ and there exists $\xi>1$ such that $\alpha\in L^\xi(\mathbb{R}^N)\cap L^{\infty}(\mathbb{R}^N)$. Furthermore, there exists another constant $\tau>1$ with $p<\tau<p^*_{s}$ such that $\frac{1}{\xi}+\frac{1-\delta}{\tau}=1$.

- $(H3)$ $\beta$ can be sign changing a.e. in $\mathbb{R}^N$ with $\beta^+=\max ~\{\beta,0\}\neq 0$ and $\beta\in L^{\infty}(\mathbb{R}^N)$.

- $(H4)$ If $\beta(x)>0$ in $\mathbb{R}^N$, then $\bigg(\frac{\alpha(x)}{\beta(x)}\bigg)^\frac{1}{\gamma+\delta-1}\notin W^{s,p}_V(\mathbb{R}^N)$.

- $(H5)$ The Kirchhoff function $M:\mathbb{R}^+\to\mathbb{R}^+$ is given by $M(s)=as^m$ $(a,m>0)$ with $p(m+1)<\gamma$; it is continuous and monotonically increasing. Moreover, we define $$\widehat{M}(t)=\int_{0}^{t}M(s)~\mathrm{d}s,~\forall~t>0.$$

Furthermore, to avoid the lack of compactness of the embeddings of the solution space into Lebesgue spaces, we introduce the following conditions on $V:\mathbb{R}^N\to \mathbb{R}^+$:

- $(V1)$ $V\in\mathcal{C}(\mathbb{R}^N)$ and there exists $V_0>0$ such that $\displaystyle\inf_{\mathbb{R}^N} V\geq V_0$.

- $(V2)$ There exists $h>0$ such that $\displaystyle\lim_{|y|\to\infty} ~\mathrm{meas}\{x\in B_h(y):V(x)\leq c\}=0,$ for all $c>0$, where $B_R(x)$ denotes any open ball in $\mathbb{R}^N$ centered at $x$ and of radius $R>0$.
By $\mathrm{meas}(E)$, we denote the Lebesgue measure of $E \subset \mathbb{R}^N$. **Remark 1**. *The condition $\mathrm{(V2)}$ is weaker than the coercivity assumption, that is, $V(x)\to\infty~\text{as}~|x|\to\infty$.* The study of Kirchhoff-type problems, originally initiated by Kirchhoff in [@kirchhoff1897vorlesungen], has received a great deal of attention in recent years due to its applicability in a wide range of models of physical and biological systems. The Kirchhoff function $M$ is often represented by the expression $M(t)=a+bt^\theta$ for $t\geq 0$ and $\theta>0$, where $a,b\geq 0$ and $a+b>0$. Consequently, $M$ is considered a degenerate Kirchhoff function if and only if $a=0$, while it is nondegenerate if $a>0$. However, we note that the degenerate situation in Kirchhoff's theory is much more fascinating and delicate than the non-degenerate case. In this direction, we mention [@ambrosio2022kirchhoff; @chen2013multiple; @pucci2019existence; @pucci2015multiple] and references therein for interested readers. It is a well-established fact that if the Nehari submanifolds $\mathcal{M}^\pm_\lambda$ (defined in Section [3](#sec3){reference-type="ref" reference="sec3"}) are separated from $\mathcal{M}^0_\lambda$, in the sense that the boundaries of $\mathcal{M}^\pm_\lambda$ do not overlap with $\mathcal{M}^0_\lambda$, then, by minimizing the energy functional over $\mathcal{M}^\pm_\lambda$, one can easily show the existence of at least two nonnegative solutions to the corresponding problem. The main difficulties arise when the boundaries of these submanifolds overlap with each other. To avoid these difficulties, we introduce the extremal threshold value $\lambda^\ast$ (see [\[eq3.19\]](#eq3.19){reference-type="eqref" reference="eq3.19"}), in the sense that when $\lambda<\lambda^\ast$, $\mathcal{M}_\lambda$ is a $\mathcal{C}^1$-manifold, while if $\lambda\geq\lambda^\ast$, then $\mathcal{M}^0_\lambda\neq \emptyset$ and $\mathcal{M}_\lambda$ is not a manifold.
Consequently, whenever $\lambda\geq\lambda^\ast$, the standard minimizing techniques are of no help in finding the local minimizers for the energy functional over $\mathcal{M}^\pm_\lambda$ and the situation becomes more complicated. We need some topological estimates for the Nehari sets to overcome these technical issues and obtain local minimizers for the energy functional. In that context, we refer the reader to [@alves2022multiplicity; @ilyasov2017extreme; @ilyasov2018branches; @quoirin2023local; @quoirin2021nehari; @silva2018extremal; @de2020extreme] and references therein. In the nonlocal framework, many authors have extensively studied elliptic problems with singular nonlinearity involving the fractional $p$-Laplacian over bounded domains using the Nehari manifold technique and the fibre map analysis. For instance, we refer to [@mukherjee2016dirichlet; @wang2019existence; @goyal2017multiplicity; @goyal2016fractional] and references therein for further readings in this direction. For results concerning Kirchhoff-type problems, we refer to [@wang2021uniqueness; @xiang2020least; @xiang2016multiplicity; @fareh2023multiplicity; @do2019nehari]. It is worth mentioning that very few contributions are devoted to the study of nonlocal Kirchhoff problems with singular nonlinearity in bounded domains and the whole space $\mathbb{R}^N$. In this direction, we refer the reader to [@hsini2019multiplicity; @fiscella2019nehari; @wang2020combined]. Motivated by the works mentioned above, our objective in this paper is to generalize the results of [@alves2022multiplicity]. To the best of the authors' knowledge, the nonlocal problems involving the fractional $p$-Laplacian operator with singular nonlinearity and sign-changing weight function in $\mathbb{R}^N$, in the context of extremal parameters of the Nehari manifold, have not yet been studied. Our paper is a standard contribution to the existing literature.
However, we combine the known techniques due to the simultaneous appearance of the degenerate Kirchhoff function, the singular term, and the whole space $\mathbb R^N$ in [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. Each feature has its own characteristics that enhance the novelty of the situation, and we list them now. The singular term in [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} rules out the differentiability of the associated energy functional; the degenerate Kirchhoff function puts up many barriers to obtaining strong convergence from the weak convergence of minimizing sequences. To overcome the trouble caused by the degenerate Kirchhoff function, we apply an operator (see Lemma [Lemma 7](#lemma2.4){reference-type="ref" reference="lemma2.4"}) to a fixed weakly convergent sequence in order to establish compactness. To handle the lack of compactness caused by $\mathbb R^N$, the assumptions $(V1)$ and $(V2)$ come to our rescue. Last but not least, we study the existence and nonexistence of solutions to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} beyond the extremal value $\lambda_*$. Our efforts lie in combining these techniques efficiently to obtain the main results listed below. Before we state our main results, we define the weak solution and the energy functional associated with [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. **Definition 2** (Weak Solution). *We say that $u\in W^{s,p}_{V}(\mathbb{R}^N)$ is a weak solution of [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} if $\alpha(x)u^{-\delta}v\in L^1(\mathbb{R}^N)$, $u>0$ for a.e.
$x\in \mathbb{R}^N$ and it satisfies $$M\big(\|u\|^p\big)\bigg[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y+\int_{\mathbb{R}^{N}}V(x)u^{p-1}v~\mathrm{d}x\bigg]$$ $$-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u^{\gamma-1}v~\mathrm{d}x=0,~\text{for all}~v\in W^{s,p}_{V}(\mathbb{R}^N).$$* The energy functional $\Psi_\lambda:W^{s,p}_{V}(\mathbb{R}^N)\to\mathbb{R}$ associated with [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} is given by $$\label{eq2.11} \Psi_\lambda(u)=\frac{1}{p}\widehat{M}\big(\|u\|^p\big)-\frac{\lambda}{1-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-\frac{1}{\gamma}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x.$$ It is easy to see that $\Psi_\lambda$ is well defined and continuous in $W^{s,p}_{V}(\mathbb{R}^N)$. One can notice that, due to the presence of the singular term, the energy functional $\Psi_\lambda$ is neither $\mathcal{C}^1$ in $W^{s,p}_{V}(\mathbb{R}^N)$ nor bounded from below on $W^{s,p}_{V}(\mathbb{R}^N)$. Hence, we cannot apply classical critical point theory to verify the existence of weak solutions for [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. To overcome this, we work with a set of Nehari manifolds. The following theorem states our main result. **Theorem 3**. *Assume that $(H1)-(H5)$ and $(V1)-(V2)$ are satisfied. Then there exists $\lambda_*>0$ such that for all $\lambda\in(0,\lambda_*+\epsilon)$, where $\epsilon>0$ is sufficiently small, the problem [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} has at least two positive solutions $u_\lambda\in \mathcal{M}^+_\lambda$ and $w_\lambda\in \mathcal{M}^-_\lambda$, respectively.* The remainder of the article is structured as follows.
In Section [2](#sec2){reference-type="ref" reference="sec2"}, we provide the mathematical framework for our analysis. In Section [3](#sec3){reference-type="ref" reference="sec3"}, we introduce the setup of the Nehari manifold and the study of the fibre map for [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} and also describe the extremal parameter $\lambda_\ast$. Finally, in Sections [4](#sec4){reference-type="ref" reference="sec4"}--[6](#sec6){reference-type="ref" reference="sec6"}, we establish the existence and multiplicity of solutions to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} for $0<\lambda<\lambda_\ast$, $\lambda=\lambda_\ast$ and $\lambda>\lambda_\ast$, respectively, and we prove Theorem [Theorem 3](#thm2.6){reference-type="ref" reference="thm2.6"}.

# Preliminaries {#sec2}

In this section, we introduce some primary results and properties of the fractional Sobolev spaces and then provide some necessary lemmas that will be needed in the proof of our main results. Let $0<s<1<p<\infty$ and $sp<N$. The fractional Sobolev space $W^{s,p}(\mathbb{R}^N)$ is defined by $$W^{s,p}(\mathbb{R}^N)=\bigg\{u\in L^p(\mathbb{R}^N): \int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y<\infty\bigg\},$$ equipped with the norm $$\|u\|_{ W^{s,p}(\mathbb{R}^N)}=\bigg(\|u\|^p_ {L^{p}(\mathbb{R}^N)}+[u]^p_{s,p}\bigg)^\frac{1}{p},$$ where $[u]_{s,p}$ denotes the Gagliardo seminorm, defined by $$[u]_{s,p}=\bigg(\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y\bigg)^\frac{1}{p}.$$ It is well-known that $\bigg(W^{s,p}(\mathbb{R}^N),\|\cdot\|_{ W^{s,p}(\mathbb{R}^N)} \bigg)$ is a uniformly convex Banach space (see [@pucci2015multiple]). The fractional critical exponent is defined by $$p^*_{s}=\begin{cases} \frac{Np}{N-sp},~\text{if}~sp<N;\\ \infty~~~~,~\text{if}~sp\geq N.
\end{cases}$$ The space $W^{s,p}(\mathbb{R}^N)$ is continuously embedded in $L^q(\mathbb{R}^N)$ for any $q\in[p,p^*_{s}]$ but is compactly embedded in $L^q_{loc}(\mathbb{R}^N)$ for any $q\in[p,p^*_{s})$ (see [@kim2018multiplicity]). For more details about the space $W^{s,p}(\mathbb{R}^N)$, we refer to [@di2012hitchhiker] and the references therein. Moreover, for $1\leq p<\infty$, the space $L^p_{V}(\mathbb{R}^N)$ consists of all real-valued measurable functions $u$ with $V(x)|u|^p\in L^1(\mathbb{R}^N)$, and is endowed with the norm $$\|u\|_{L^p_{V}(\mathbb{R}^N)}=\bigg( \int_{\mathbb{R}^N}V(x)|u|^p~\mathrm{d}x\bigg)^\frac{1}{p},~\forall~u\in L^p_{V}(\mathbb{R}^N).$$ The space $\big(L^p_{V}(\mathbb{R}^N), \|\cdot\|_{L^p_{V}(\mathbb{R}^N)}\big)$ is also a uniformly convex Banach space thanks to $(V1)$ (see [@pucci2015multiple]). Now we define the weighted fractional Sobolev space $W^{s,p}_{V}(\mathbb{R}^N)$, which is a subspace of $W^{s,p}(\mathbb{R}^N)$ and is defined by $$W^{s,p}_{V}(\mathbb{R}^N)=\bigg\{u\in W^{s,p}(\mathbb{R}^N):\int_{\mathbb{R}^N}V(x)|u|^p~\mathrm{d}x<\infty\bigg\},$$ endowed with the norm $$\|u\|=\bigg(\|u\|^p_{L^p_{V}(\mathbb{R}^N)}+[u]^p_{s,p}\bigg)^\frac{1}{p}.$$ It is easy to see that the space $\big(W^{s,p}_{V}(\mathbb{R}^N),\|\cdot\|\big)$ is a uniformly convex Banach space (see [@pucci2015multiple]). **Lemma 4** (see [@pucci2015multiple]). *Let $(V1)$ hold. Then the embeddings $$W^{s,p}_{V}(\mathbb{R}^N)\hookrightarrow W^{s,p}(\mathbb{R}^N)\hookrightarrow L^\vartheta (\mathbb{R}^N)$$ are continuous for any $\vartheta\in [p,p^*_{s}]$. Consequently, we have $\min\{1,V_0\}\|u\|^p_{ W^{s,p}(\mathbb{R}^N)}\leq \|u\|^p$ for all $u\in W^{s,p}_{V}(\mathbb{R}^N)$. Also, when $\vartheta\in [1,p^*_{s})$, then the embedding $W^{s,p}_{V}(\mathbb{R}^N)\hookrightarrow L^\vartheta (B_R(0))$ is compact for any $R>0$.* **Lemma 5** (see [@pucci2015multiple]). *Suppose $(V1)$ and $(V2)$ hold.
Let $\theta\in [p,p^*_{s})$ and $\{v_k\}_{k\in \mathbb{N}}$ be a bounded sequence in $W^{s,p}_{V}(\mathbb{R}^N)$. Then there exists $v\in W^{s,p}_{V}(\mathbb{R}^N)\cap L^\theta (\mathbb{R}^N)$ such that up to a subsequence $v_k\to v$ in $L^\theta (\mathbb{R}^N)$ as $k\to\infty$.* **Lemma 6** (see [@liang2017multiplicity]). *Let $(V1)$ and $(V2)$ be satisfied. Then the embedding $W^{s,p}_{V}(\mathbb{R}^N)\hookrightarrow L^\tau (\mathbb{R}^N)$ is compact for any $\tau\in (p,p^*_{s})$.* Let $\big(W^{s,p}_{V}(\mathbb{R}^N)\big)^*$ be the continuous dual of $W^{s,p}_{V}(\mathbb{R}^N)$ and let $\langle{\cdot,\cdot}\rangle$ denote the duality pairing between $\big(W^{s,p}_{V}(\mathbb{R}^N)\big)^*$ and $W^{s,p}_{V}(\mathbb{R}^N)$. Now we define the operator $L: W^{s,p}_{V}(\mathbb{R}^N)\to \big(W^{s,p}_{V}(\mathbb{R}^N)\big)^*$ by $$\langle{L(u),v}\rangle =M(\|u\|^p)\langle{B(u),v}\rangle,~\text{for any}~v\in W^{s,p}_{V}(\mathbb{R}^{N}),$$ where $$\langle{B(u),v}\rangle=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y+\int_{\mathbb{R}^{N}}V(x)|u|^{p-2}uv~\mathrm{d}x.$$ **Lemma 7** (see [@akkoyunlu2019infinitely]). *If $(H5)$ is satisfied, then* - *$L: W^{s,p}_{V}(\mathbb{R}^N)\to \big(W^{s,p}_{V}(\mathbb{R}^N)\big)^*$ is continuous, bounded and strictly monotone.* - *$L$ is a map of type $(S_+)$, i.e., if $v_k\rightharpoonup v$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $k\to\infty$ and $\displaystyle\limsup_{k\to\infty}\langle{L(v_k)-L(v),v_k-v}\rangle\leq 0,$ then $v_k\to v$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $k\to\infty$.* - *$L: W^{s,p}_{V}(\mathbb{R}^N)\to \big(W^{s,p}_{V}(\mathbb{R}^N)\big)^*$ is a homeomorphism.*

# Review of the Nehari manifold set {#sec3}

This section provides technical results on the fibre maps and the Nehari manifold set.
Since we aim to study positive solutions to the problem [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}, therefore we restrict the energy functional $\Psi_\lambda$ to a cone of non-negative functions (say $\mathcal{K}$ ) of $W^{s,p}_{V}(\mathbb{R}^N)$ defined by $$\mathcal{K}=\big\{u\in W^{s,p}_{V}(\mathbb{R}^N)\setminus\{0\}:u\geq 0 \big\}.$$ For $u\in \mathcal{K}$, define the $\mathcal{C}^\infty$-fiber map $\mathcal{F}_{\lambda,u}: \mathbb{R}^+\to \mathbb{R}$ by $\mathcal{F}_{\lambda,u}(t)=\Psi_{\lambda}(tu)$ for all $t,\lambda>0$. We can easily derive that $$\hspace{-4cm}\mathcal{F}_{\lambda,u}(t)=\frac{1}{p}\widehat{M}\big(t^p\|u\|^p\big)-\frac{\lambda}{1-\delta}t^{1-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-\frac{t^\gamma}{\gamma}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x$$ $$\hspace{-1.7cm}=\frac{at^{p(m+1)}}{p(m+1)}\|u\|^{p(m+1)}-\frac{\lambda}{1-\delta}t^{1-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-\frac{t^\gamma}{\gamma}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x,$$ $$\hspace{-3.2cm}\mathcal{F}'_{\lambda,u}(t)=t^{p-1} M\big(t^p\|u\|^p\big)\|u\|^p-\lambda t^{-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-t^{\gamma-1}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x$$ $$\hspace{-1.8cm}=at^{p(m+1)-1}\|u\|^{p(m+1)}-\lambda t^{-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-t^{\gamma-1}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x$$ and $$\hspace{-5.8cm}\mathcal{F}''_{\lambda,u}(t)=(p-1)t^{p-2} M\big(t^p\|u\|^p\big)\|u\|^p +pt^{2p-2}M'\big(t^p\|u\|^p\big)\|u\|^{2p}$$ $$\hspace{2.5cm}+\lambda\delta t^{-\delta-1}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-(\gamma-1)t^{\gamma-2}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x$$ $$\hspace{1.5cm}=a(p(m+1)-1)t^{p(m+1)-2}\|u\|^{p(m+1)}+\lambda\delta 
t^{-\delta-1}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-(\gamma-1)t^{\gamma-2}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x.$$ Define the Nehari manifold set as $$\mathcal{M}_\lambda=\big\{u\in\mathcal{K}:\mathcal{F}'_{\lambda,u}(1)=0\big\}.$$ It is easy to verify that $\mathcal{F}'_{\lambda,u}(t)=0$ if and only if $tu\in \mathcal{M}_\lambda$. In particular, $\mathcal{F}'_{\lambda,u}(1)=0$ if and only if $u\in \mathcal{M}_\lambda$. It follows that every weak solution of [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} always belongs to $\mathcal{M}_\lambda$. Therefore, it is natural to split $\mathcal{M}_\lambda$ into three disjoint sets corresponding to local minima, local maxima, and points of inflexion, that is, $$\mathcal{M}^\pm_\lambda =\bigg\{u\in\mathcal{M}_\lambda:\mathcal{F}''_{\lambda,u}(1)\overset{>}{<} 0\bigg\}~\text{and}~\mathcal{M}^0_\lambda =\big\{u\in\mathcal{M}_\lambda:\mathcal{F}''_{\lambda,u}(1)=0\big\}.$$ **Theorem 8**. *The following results hold:* - *$\Psi_{\lambda}$ is weakly lower semicontinuous.* - *$\Psi_{\lambda}$ is coercive and bounded from below on $\mathcal{M}_\lambda$. In particular, $\Psi_{\lambda}$ is coercive and bounded from below on $\mathcal{M}^{+}_\lambda$ and $\mathcal{M}^{-}_\lambda$ respectively.* *Proof.* $(i)$ Let $\{u_n\}_{n\in \mathbb{N}}\subset W^{s,p}_{V}(\mathbb{R}^N)$ be such that $u_n\rightharpoonup u$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. From Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"}, we get $u_n\to u$ in $L^\tau(\mathbb{R}^N)$ and $L^\gamma(\mathbb{R}^N)$ respectively as $n\to\infty$, $u_n\to u$ a.e. in $\mathbb{R}^N$ and there exist two functions $g_\tau$ and $h_\gamma$ in $L^\tau(\mathbb{R}^N)$ and $L^\gamma(\mathbb{R}^N)$ such that $|u_n(x)|\leq g_\tau(x)$ a.e. in $\mathbb{R}^N$ and $|u_n(x)|\leq h_\gamma(x)$ a.e. in $\mathbb{R}^N$. 
This implies that $\big||u_n|^{1-\delta}-|u|^{1-\delta}\big|^{\frac{\tau}{1-\delta}}\to 0~\text{a.e. in} ~\mathbb{R}^N$ and $\big||u_n|^{1-\delta}-|u|^{1-\delta}\big|^{\frac{\tau}{1-\delta}}\leq 2^{\frac{\tau}{1-\delta}} (g_\tau(x))^\tau\in L^1(\mathbb{R}^N)$. Applying the Lebesgue dominated convergence theorem, we obtain $$\label{eq3.1} \int_{\mathbb{R}^N}\big||u_n|^{1-\delta}-|u|^{1-\delta}\big|^{\frac{\tau}{1-\delta}}~\mathrm{d}x\to 0~\text{as}~n\to\infty.$$ Since $\big||u_n|^{1-\delta}-|u|^{1-\delta}\big|\in L^{\frac{\tau}{1-\delta}}(\mathbb{R}^N)$, using $(H2)$, [\[eq3.1\]](#eq3.1){reference-type="eqref" reference="eq3.1"} and Hölder's inequality, we deduce that $$\bigg|\int_{\mathbb{R}^N}\alpha(x)\big(|u_n|^{1-\delta}-|u|^{1-\delta}\big)~\mathrm{d}x\bigg|\leq \|\alpha\|_{L^\xi(\mathbb{R}^N)}\||u_n|^{1-\delta}-|u|^{1-\delta}\|_{L^{\frac{\tau}{1-\delta}}(\mathbb{R}^N)}\to 0~\text{as}~n\to\infty.$$ It follows that $$\label{eq3.2} \lim_{n\to\infty}\int_{\mathbb{R}^N}\alpha(x)|u_n|^{1-\delta}~\mathrm{d}x=\int_{\mathbb{R}^N}\alpha(x)|u|^{1-\delta}~\mathrm{d}x.$$ Similarly, we can prove that $$\label{eq3.4} \lim_{n\to\infty}\int_{\mathbb{R}^N}\beta(x)|u_n|^{\gamma}~\mathrm{d}x=\int_{\mathbb{R}^N}\beta(x)|u|^{\gamma}~\mathrm{d}x.$$ Moreover, the functional $u\mapsto\widehat{M}\big(\|u\|^p\big)$ is weakly lower semicontinuous, i.e., when $u_n\rightharpoonup u$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$, then $$\label{eq3.5} \widehat{M}\big(\|u\|^p\big)\leq\displaystyle\liminf_{n\to\infty} \widehat{M}\big(\|u_n\|^p\big).$$ It follows from [\[eq3.2\]](#eq3.2){reference-type="eqref" reference="eq3.2"}, [\[eq3.4\]](#eq3.4){reference-type="eqref" reference="eq3.4"} and [\[eq3.5\]](#eq3.5){reference-type="eqref" reference="eq3.5"} that $\Psi_\lambda(u)\leq\displaystyle\liminf_{n\to\infty}\Psi_\lambda(u_n)$. Hence, $\Psi_\lambda$ is weakly lower semicontinuous. $(ii)$ Let $u\in\mathcal{M}_\lambda$ be such that $\|u\|\geq 1$.
This yields $\mathcal{F}'_{\lambda,u}(1)=0$ and hence we obtain $$\label{eq3.6} \int_{\mathbb{R}^N}\beta(x)|u|^\gamma~\mathrm{d}x=a\|u\|^{p(m+1)}-\lambda \int_{\mathbb{R}^N}\alpha(x)|u|^{1-\delta}~\mathrm{d}x.$$ Further, using Hölder's inequality and Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"}, we deduce that there exists a constant $c>0$ such that $$\label{eq3.7} \int_{\mathbb{R}^N}\alpha(x)|u|^{1-\delta}~\mathrm{d}x\leq c\|\alpha\|_{L^\xi(\mathbb{R}^N)}\|u\|^{1-\delta}.$$ Using [\[eq3.6\]](#eq3.6){reference-type="eqref" reference="eq3.6"} and [\[eq3.7\]](#eq3.7){reference-type="eqref" reference="eq3.7"} in [\[eq2.11\]](#eq2.11){reference-type="eqref" reference="eq2.11"}, we deduce that $$\label{eq9.9} \Psi_\lambda(u)\geq a\bigg(\frac{1}{p(m+1)}-\frac{1}{\gamma}\bigg)\|u\|^{p(m+1)}-c\lambda\|\alpha\|_{L^\xi(\mathbb{R}^N)}\bigg(\frac{1}{1-\delta}-\frac{1}{\gamma}\bigg)\|u\|^{1-\delta}$$ $$\to\infty~\text{as}~\|u\|\to\infty,~\text{since} ~0<1-\delta<1<p(m+1)<\gamma.$$ This shows that $\Psi_\lambda$ is coercive on $\mathcal{M}_\lambda$. Let us define a function $f:\mathbb{R}^+\to \mathbb{R}$ by $$f(t)=K_1 t^{p(m+1)}-K_2 t^{1-\delta},~\forall~ t>0,$$ where $$K_1=a\bigg(\frac{1}{p(m+1)}-\frac{1}{\gamma}\bigg)~\text{and}~K_2=c\lambda\|\alpha\|_{L^\xi(\mathbb{R}^N)}\bigg(\frac{1}{1-\delta}-\frac{1}{\gamma}\bigg).$$ Thanks to elementary calculus, at the point $t(K_1,K_2,p,m,\delta)=\bigg(\frac{(1-\delta)K_2}{p(m+1)K_1}\bigg)^{\frac{1}{p(m+1)+\delta-1}},$ the function $f$ has a unique global minimum. This yields that $\Psi_\lambda(u)\geq f(t(K_1,K_2,p,m,\delta))$ and hence $\Psi_\lambda$ is bounded from below on $\mathcal{M}_\lambda$. This completes the proof. ◻ **Lemma 9**. *Let $\lambda>0$. Then the following results hold:* - *$\sup\big\{\|u\|:u\in \mathcal{M}^+_\lambda\big\}<\infty$.
Furthermore, $0>\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}>-\infty$.* - *$\inf\big\{\|w\|:w\in \mathcal{M}^-_\lambda\big\}>0$ and $\sup\big\{\|w\|:w\in \mathcal{M}^-_\lambda,~\Psi_\lambda(w)\leq \jmath\big\}<\infty$ for any $\jmath>0$. Moreover, $\inf\big\{\Psi_\lambda(w):w\in \mathcal{M}^-_\lambda\big\}>-\infty$.* *Proof.* $(i)$ Let us assume that $u\in \mathcal{M}^+_\lambda$. From the definition of $\mathcal{M}^+_\lambda$, we have $$\label{eq3.10} \|u\|^{p(m+1)}<\frac{\lambda(\delta+\gamma-1)}{a(\gamma-p(m+1))}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x.$$ Using [\[eq3.7\]](#eq3.7){reference-type="eqref" reference="eq3.7"} in [\[eq3.10\]](#eq3.10){reference-type="eqref" reference="eq3.10"}, we get $$\|u\|<\bigg[ \frac{c\lambda\|\alpha\|_{L^\xi(\mathbb{R}^N)}(\delta+\gamma-1)}{a(\gamma-p(m+1))}\bigg]^\frac{1}{p(m+1)+\delta-1}=\hat{C}~\text{(say)}.$$ From the above inequality, we infer that $\sup\big\{\|u\|:u\in \mathcal{M}^+_\lambda\big\}<\infty$. Next, our aim is to show $0>\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}>-\infty$. To prove this, let us take a minimizing sequence $\{u_n\}_{n\in \mathbb{N}}$ in $\mathcal{M}^+_\lambda$ for $\Psi_\lambda$. Since $\Psi_\lambda$ is coercive on $\mathcal{M}^+_\lambda$, the sequence $\{u_n\}_{n\in \mathbb{N}}$ must be bounded and hence, up to a subsequence, $u_n\rightharpoonup u$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. It follows from Theorem [Theorem 8](#thm3.1){reference-type="ref" reference="thm3.1"}-$(i)$ that $-\infty<\Psi_\lambda(u)\leq\displaystyle\liminf_{n\to\infty}\Psi_\lambda(u_n)=\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}$. This shows that $\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}>-\infty$.
From [\[eq2.11\]](#eq2.11){reference-type="eqref" reference="eq2.11"} and [\[eq3.10\]](#eq3.10){reference-type="eqref" reference="eq3.10"}, we obtain $$\Psi_\lambda(u)=a\bigg(\frac{1}{p(m+1)}-\frac{1}{\gamma}\bigg)\|u\|^{p(m+1)}-\lambda\bigg(\frac{1}{1-\delta}-\frac{1}{\gamma}\bigg) \int_{\mathbb{R}^N}\alpha(x)|u|^{1-\delta}~\mathrm{d}x~(\text{as}~ u\in\mathcal{M}^+_\lambda)$$ $$<a\bigg(\frac{1}{p(m+1)}-\frac{1}{\gamma}\bigg)\|u\|^{p(m+1)}-\lambda\bigg(\frac{1}{1-\delta}-\frac{1}{\gamma}\bigg) \frac{a(\gamma-p(m+1))}{\lambda(\delta+\gamma-1)}\|u\|^{p(m+1)}$$ $$\hspace{-3cm}=a\bigg(\frac{\gamma-p(m+1)}{\gamma}\bigg)\bigg(\frac{1}{p(m+1)}-\frac{1}{1-\delta}\bigg)\|u\|^{p(m+1)}<0.$$ This shows that $\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}<0.$ $(ii)$ Let $w\in \mathcal{M}_\lambda^-$; then we have the inequality $$\label{eq3.14} \|w\|^{p(m+1)}<\frac{(\delta+\gamma-1)}{a(p(m+1)+\delta-1)}\int_{\mathbb{R}^{N}}\beta(x)|w|^{\gamma}~\mathrm{d}x.$$ By Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"}, there exists $C>0$ such that $$\label{eq3.15} \int_{\mathbb{R}^{N}}\beta(x)|w|^{\gamma}~\mathrm{d}x\leq C\|\beta\|_{L^\infty(\mathbb{R}^{N})}\|w\|^\gamma.$$ Combining [\[eq3.14\]](#eq3.14){reference-type="eqref" reference="eq3.14"} and [\[eq3.15\]](#eq3.15){reference-type="eqref" reference="eq3.15"}, we obtain $$\label{eq3.16} \|w\|>\bigg(\frac{a(p(m+1)+\delta-1)}{C\|\beta\|_{L^\infty(\mathbb{R}^{N})}(\delta+\gamma-1)}\bigg)^{\frac{1}{\gamma-p(m+1)}}=\hat{c}~\text{(say)}.$$ The above inequality infers that $\inf\big\{\|w\|:w\in \mathcal{M}^-_\lambda\big\}>0$. 
Now from [\[eq9.9\]](#eq9.9){reference-type="eqref" reference="eq9.9"}, we get $$\jmath\geq\Psi_\lambda(w)\geq a\bigg(\frac{1}{p(m+1)}-\frac{1}{\gamma}\bigg)\|w\|^{p(m+1)}-c\lambda\|\alpha\|_{L^\xi(\mathbb{R}^N)}\bigg(\frac{1}{1-\delta}-\frac{1}{\gamma}\bigg)\|w\|^{1-\delta}.$$ Since $1-\delta<1<p(m+1)$, we have $\sup\big\{\|w\|:w\in \mathcal{M}^-_\lambda,~\Psi_\lambda(w)\leq \jmath\big\}<\infty$. The proof of $\inf\big\{\Psi_\lambda(w):w\in \mathcal{M}^-_\lambda\big\}>-\infty$ is similar to the proof of $\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}>-\infty$ and hence we omit it. This completes the proof. ◻ **Corollary 10**. *If $u\in\mathcal{M}^0_\lambda$, then $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x>0$ and $\displaystyle\int_{\mathbb{R}^{N}}\alpha (x)|u|^{1-\delta}~\mathrm{d}x>0$. Moreover, the set $\mathcal{M}^0_\lambda$ is bounded with respect to the norm of $W^{s,p}_V(\mathbb{R}^N)$, for each $\lambda>0$.* *Proof.* The proof follows from the definition of $\mathcal{M}^0_\lambda$, Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"} and Hölder's inequality. ◻ ## Descriptive analysis of the extremal parameter {#subsec3} Define $\mathcal{G}^+=\bigg\{u\in\mathcal{K}:~\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x>0\bigg\}$. Now, we characterize the set $\mathcal{M}^0_\lambda$ in terms of the set $\mathcal{G}^+$. To this end, let $u\in\mathcal{G}^+$ be such that $tu\in\mathcal{M}^0_\lambda$. Hence, we have the system of equations $\mathcal{F}'_{\lambda,u}(t)=0$ and $\mathcal{F}''_{\lambda,u}(t)=0$. 
By solving these two equations, we obtain a unique pair $(t(u),\lambda(u))$ given by $$\label{eq3.18} \begin{cases} t(u)=\bigg[ a\bigg(\frac{p(m+1)+\delta-1}{\gamma+\delta-1}\bigg)\bigg]^{\frac{1}{\gamma-p(m+1)}}\Bigg[\cfrac{\|u\|^{p(m+1)}}{\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x}\Bigg]^{\frac{1}{\gamma-p(m+1)}};\\ \\ \lambda(u)=C(a,\gamma,\delta,m,p)\cfrac{\Big[\|u\|^{p(m+1)}\Big]^{\frac{\gamma+\delta-1}{\gamma-p(m+1)}}}{\bigg[\displaystyle\int_{\mathbb{R}^{N}}\alpha (x)|u|^{1-\delta}~\mathrm{d}x\bigg]\bigg[\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x\bigg]^{\frac{p(m+1)+\delta-1}{\gamma-p(m+1)}}}, \end{cases}$$ where $$C(a,\gamma,\delta,m,p)=a\bigg[\frac{\gamma-p(m+1)}{\gamma+\delta-1}\bigg] \bigg[ a\bigg(\frac{p(m+1)+\delta-1}{\gamma+\delta-1}\bigg)\bigg]^{\frac{p(m+1)+\delta-1}{\gamma-p(m+1)}}>0.$$ Following Y. Il'yasov (see [@ilyasov2017extreme; @ilyasov2018branches]), we define the extremal parameter $\lambda_*$ by $$\label{eq3.19} \lambda_* =\inf_{u\in\mathcal{G}^+}\lambda(u).$$ **Proposition 11**. *For any $u\in\mathcal{K}$, we have the following results:* - *If $\lambda>0$ and $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x\leq 0$, then $\mathcal{F}_{\lambda,u}$ has a unique critical point $t^+_\lambda(u)\in (0,\infty)$ such that $t^+_\lambda(u)u\in \mathcal{M}^+_\lambda$.* - *If $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x> 0$, then there are three possibilities:* - *if $0<\lambda<\lambda(u)$, then $\mathcal{F}_{\lambda,u}$ has exactly two critical points $0<t^+_\lambda(u)<t^-_\lambda(u)$ such that $t^+_\lambda(u)u\in \mathcal{M}^+_\lambda$ and $t^-_\lambda(u)u\in \mathcal{M}^-_\lambda$. 
Moreover, $\mathcal{F}_{\lambda,u}$ is decreasing over the intervals $(0,t^+_\lambda(u))$, $(t^-_\lambda(u),\infty)$ and increasing over the interval $(t^+_\lambda(u),t^-_\lambda(u))$.* - *if $\lambda=\lambda(u)$, then $\mathcal{F}_{\lambda,u}$ has a unique critical point $t^0_\lambda(u)>0$, which is a saddle point and $t^0_\lambda(u)u\in \mathcal{M}^0_\lambda$. Furthermore, $t^0_\lambda(u)=t(u)$ and $0<t^+_\lambda(u)<t(u)<t^-_\lambda(u)$ for all $\lambda\in(0,\lambda(u))$ and $\mathcal{F}_{\lambda,u}(t)$ decreases for $t>0$.* - *if $\lambda>\lambda(u)$, then $\mathcal{F}_{\lambda,u}$ is decreasing for $t>0$ and has no critical points.* *Proof.* Let us define the function $g_u:\mathbb{R}^+\to\mathbb{R}$ given by $$\label{eq3.97} g_u(t)= at^{p(m+1)+\delta-1}\|u\|^{p(m+1)}-t^{\gamma+\delta-1}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x~\text{for all}~t>0 .$$ It is easy to see that for all $t>0$, $tu\in\mathcal{M}_\lambda$ if and only if $g_u(t)=\lambda\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x$. Hence we deduce that $t^{2-\delta}g'_u(t)=\mathcal{F}''_{\lambda,tu}(1)$ holds true whenever $tu\in\mathcal{M}_\lambda$. $(I)$ Let $\lambda>0$ and $\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x\leq 0$, then $$t^{2-\delta}g'_u(t)=\mathcal{F}''_{\lambda,tu}(1)=a(p(m+1)-1)t^{p(m+1)}\|u\|^{p(m+1)}-(\gamma-1)t^\gamma\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x$$ $$+\lambda\delta t^{1-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x>0.$$ This shows that $g'_u(t)>0$, i.e., $g_u(t)$ is strictly increasing for $t>0$. Thus, there exists a unique positive real number $t^+_\lambda(u)\in (0,\infty)$ such that $t^+_\lambda(u)u\in \mathcal{M}^+_\lambda$. $(II)(a)$ By [\[eq3.97\]](#eq3.97){reference-type="eqref" reference="eq3.97"}, one can easily verify that $t=t(u)$ (see [\[eq3.18\]](#eq3.18){reference-type="eqref" reference="eq3.18"}) is a unique critical point for $g_u(t)$. 
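Indeed, differentiating [\[eq3.97\]](#eq3.97){reference-type="eqref" reference="eq3.97"} gives $$g'_u(t)=a\big(p(m+1)+\delta-1\big)t^{p(m+1)+\delta-2}\|u\|^{p(m+1)}-(\gamma+\delta-1)t^{\gamma+\delta-2}\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x,$$ and, since $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x>0$ in the present case, the equation $g'_u(t)=0$ has the unique positive root $$t(u)=\Bigg[\frac{a\big(p(m+1)+\delta-1\big)\|u\|^{p(m+1)}}{(\gamma+\delta-1)\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x}\Bigg]^{\frac{1}{\gamma-p(m+1)}}.$$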
In addition, we obtain $g'_u(t)\big|_{t=t(u)}=0$ and $g''_u(t)\big|_{t=t(u)}<0$. This implies that $g_u(t)$ has a global maximum at $t=t(u)$ and hence $g_u(t)$ is strictly increasing in the interval $(0,t(u))$ and strictly decreasing in the interval $(t(u),\infty)$. Since $t^{2-\delta}g'_u(t)=\mathcal{F}''_{\lambda,tu}(1)$, therefore $\mathcal{F}''_{\lambda,tu}(1)>0$ for all $t\in(0,t(u))$ and $\mathcal{F}''_{\lambda,tu}(1)<0$ for all $t\in(t(u),\infty)$. Choosing $0<t^+_\lambda(u)<t(u)<t^-_\lambda(u)$, it is clear from the strict monotonicity of the function $g_u(t)$ in the intervals $(0,t(u))$ and $(t(u),\infty)$ that $t^+_\lambda(u)$ and $t^-_\lambda(u)$ are unique positive real numbers such that $t^+_\lambda(u)u\in \mathcal{M}^+_\lambda$ and $t^-_\lambda(u)u\in \mathcal{M}^-_\lambda$. Thus, $\mathcal{F}''_{\lambda,t^+_\lambda(u)u}(1)>0$ and $\mathcal{F}''_{\lambda,t^-_\lambda(u)u}(1)<0$. This shows that $\mathcal{F}_{\lambda,u}$ decreases over the intervals $(0,t^+_\lambda(u))$, $(t^-_\lambda(u),\infty)$ and increases over the interval $(t^+_\lambda(u),t^-_\lambda(u))$. $(II)(b)$ Assume $\lambda=\lambda(u)$; then we get a unique point $t=t(u)$ such that $g'_u(t(u))=0$. This yields $\mathcal{F}''_{\lambda(u),u}(t(u))=0$ and hence $t(u)u\in\mathcal{M}^0_{\lambda(u)}$ with $t^+_\lambda(u)<t(u)<t^-_\lambda(u)$. It is easy to see that $\mathcal{F}'_{\lambda(u),u}(t)\leq 0$ for all $t>0$ (as $\mathcal{F}'_{\lambda(u),u}(t(u))=0$). Hence $\mathcal{F}_{\lambda(u),u}$ always decreases for $t>0$ with $t=t(u)$ as a saddle point. $(II)(c)$ Let $\lambda(u)<\lambda$; then using $(b)$ we get $\mathcal{F}'_{\lambda,u}(t)<\mathcal{F}'_{\lambda(u),u}(t)\leq 0$ for all $t>0$. This shows that $\mathcal{F}_{\lambda,u}$ is decreasing for $t>0$ and has no critical points. ◻ **Lemma 12**. *The function $\lambda(u)$ defined in [\[eq3.18\]](#eq3.18){reference-type="eqref" reference="eq3.18"} is continuous and $0$-homogeneous. 
Furthermore, the extremal parameter $\lambda_*>0$ and there exists $u\in\mathcal{G}^+$ such that $\lambda_*=\lambda(u)$.* *Proof.* It is obvious that $\lambda(u)$ is continuous. For $t>0$, the numerator and the denominator of $\lambda(tu)$ both scale as $t^{\frac{p(m+1)(\gamma+\delta-1)}{\gamma-p(m+1)}}$, so $\lambda(tu)=\lambda(u)$, that is, the function $\lambda(u)$ is $0$-homogeneous. Since $\lambda(u)$ is $0$-homogeneous, we can restrict $\lambda(u)$ to the set $\mathcal{G}^+\cap S$, where $S=\big\{u\in W^{s,p}_{V}(\mathbb{R}^N):\|u\|=1\big\}$. Now from [\[eq3.7\]](#eq3.7){reference-type="eqref" reference="eq3.7"} and [\[eq3.15\]](#eq3.15){reference-type="eqref" reference="eq3.15"}, we obtain $$\lambda_*=\inf_{\mathcal{G}^+\cap S} \lambda(u)\geq \frac{C(a,\gamma,\delta,m,p)}{c\|\alpha\|_{L^\xi(\mathbb{R}^N)}\big(C\|\beta\|_{L^\infty(\mathbb{R}^{N})}\big)^{\frac{p(m+1)+\delta-1}{\gamma-p(m+1)}}}>0.$$ Next, choose a sequence $\{u_n\}_{n\in \mathbb{N}}$ in $\mathcal{G}^+\cap S$ such that $\lambda(u_n)\to \lambda_*$ as $n\to\infty$. Since $\|u_n\|=1$ for all $n\in \mathbb{N}$, without loss of generality we may assume that $u_n\rightharpoonup u$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and hence $u_n\to u$ in $L^\tau(\mathbb{R}^N)$ and in $L^\gamma(\mathbb{R}^N)$ as $n\to\infty$. Notice that $u\neq 0$, otherwise $\lambda(u_n)\to \infty$ as $n\to\infty$. Note that $\frac{u}{\|u\|}\in \mathcal{G}^+\cap S$. Now we claim that $u_n\to u$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. If not, then by using the $0$-homogeneity property of $\lambda$ and the weak lower semicontinuity of the norm, we obtain $$\lambda\bigg(\frac{u}{\|u\|}\bigg)=\lambda(u)<\liminf_{n\to\infty}\lambda(u_n)=\lambda_*,$$ which is a contradiction. Thus, we must have $u_n\to u$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and therefore $u\in \mathcal{G}^+\cap S$. The continuity of $\lambda$ implies $\lambda(u)=\lambda_*$. This completes the proof. ◻ **Corollary 13**. 
*The following results hold:* - *The sets $\mathcal{M}^+_\lambda$ and $\mathcal{M}^-_\lambda$ are nonempty for $\lambda>0$.* - *$\mathcal{M}^0_\lambda=\emptyset$ for $\lambda\in(0,\lambda_*)$ and $\mathcal{M}^0_\lambda\neq\emptyset$ for $\lambda\in[\lambda_*,\infty)$.* - *$\mathcal{M}^0_{\lambda_\ast}=\bigg\{u\in \mathcal{M}_{\lambda_\ast}:~\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x>0~\text{and}~\lambda(u)=\lambda_*\bigg\}$.* *Proof.* The proof is a direct consequence of Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"} and Lemma [Lemma 12](#lem3.5){reference-type="ref" reference="lem3.5"}. ◻ Let $\lambda>0$; then we define the sets $$\hat{\mathcal{M}}_\lambda=\big\{u\in\mathcal{K}:~u\in\mathcal{G}^+~\text{and}~\lambda<\lambda(u)\big\} ~\text{and}~ \hat{\mathcal{M}}^+_\lambda=\bigg\{u\in\mathcal{K}:~\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x\leq 0\bigg\}.$$ Further, for $\lambda>0$, we define $\mathcal{I}^+_{\lambda}:\hat{\mathcal{M}}_\lambda\cup\hat{\mathcal{M}}^+_\lambda\to\mathbb{R}$ and $\mathcal{I}^-_{\lambda}:\hat{\mathcal{M}}_\lambda\to\mathbb{R}$ by $$\mathcal{I}^+_{\lambda}(u)=\Psi_\lambda(t^+_\lambda(u)u)~\text{and}~\mathcal{I}^-_{\lambda}(u)=\Psi_\lambda(t^-_\lambda(u)u).$$ Consider the following constrained minimization problems $$\Upsilon^+_\lambda=\inf\big\{\mathcal{I}^+_{\lambda}(u):u\in\mathcal{M}^+_\lambda\big\}~\text{and}~\Upsilon^-_\lambda=\inf\big\{\mathcal{I}^-_{\lambda}(u):u\in \mathcal{M}^-_\lambda\big\} .$$ **Lemma 14**. *Let $u\in\mathcal{K}$ and $I$ be an open interval in $\mathbb{R}$ such that $t^{\pm}_{\lambda}(u)$ are well defined for all $\lambda\in I$. Then the following results hold:* - *the functions $I\ni\lambda\mapsto t^{\pm}_{\lambda}(u)$ are $\mathcal{C}^\infty$. 
Furthermore, $I\ni\lambda\mapsto t^{+}_{\lambda}(u)$ is strictly increasing, whereas $I\ni\lambda\mapsto t^{-}_{\lambda}(u)$ is strictly decreasing.* - *the functions $I\ni\lambda\mapsto \mathcal{I}^{\pm}_{\lambda}(u)$ are $\mathcal{C}^\infty$ and strictly decreasing.* *Consequently, the above results hold for $I=(0,\lambda_\ast)$ as well as $I=(\lambda_\ast,\lambda_\ast+\epsilon)$, where $\epsilon>0$ is small enough.* *Proof.* The proof is similar to \[[@alves2022multiplicity], Lemma 2.7\], and we omit it. ◻ # Existence of solutions for $\lambda \in (0,\lambda_\ast)$ {#sec4} In this section, we show the existence of at least two positive solutions to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} for $0<\lambda<\lambda_\ast$. **Lemma 15**. *Let $0<\lambda<\lambda_\ast$. Then there exists $u_\lambda\in\mathcal{M}^+_\lambda$ such that $\Psi_\lambda(u_\lambda)=\mathcal{I}^+_\lambda(u_\lambda)=\Upsilon^+_\lambda$.* *Proof.* The functional $\Psi_\lambda$ is bounded from below on $\mathcal{M}_\lambda$ and hence on $\mathcal{M}^+_\lambda$. Thus, there exists a minimizing sequence $\{u_n\}_{n\in \mathbb{N}}\subset \mathcal{M}^+_\lambda$ for $\Psi_\lambda$, i.e., $$\lim_{n\to\infty}\Psi_\lambda(u_n)=\lim_{n\to\infty}\mathcal{I}^+_\lambda(u_n)=\Upsilon^+_\lambda=\inf\big\{\Psi_\lambda(u):u\in \mathcal{M}^+_\lambda\big\}.$$ The coercivity of $\Psi_\lambda$ (see Theorem [Theorem 8](#thm3.1){reference-type="ref" reference="thm3.1"}) implies that $\{u_n\}_{n\in \mathbb{N}}$ is bounded in $W^{s,p}_{V}(\mathbb{R}^N)$. Hence, up to a subsequence, $u_n\rightharpoonup u_\lambda$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. By Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"}, we have $u_n\to u_\lambda$ in $L^\tau(\mathbb{R}^N)$ and in $L^\gamma(\mathbb{R}^N)$ as $n\to\infty$, and $u_n\to u_\lambda$ a.e. in $\mathbb{R}^N$. 
Now, arguing as in the proofs of [\[eq3.2\]](#eq3.2){reference-type="eqref" reference="eq3.2"}, [\[eq3.4\]](#eq3.4){reference-type="eqref" reference="eq3.4"} and [\[eq3.5\]](#eq3.5){reference-type="eqref" reference="eq3.5"}, we obtain $$\label{eq4.2} \begin{cases} \displaystyle\lim_{n\to\infty}\int_{\mathbb{R}^N}\alpha(x)|u_n|^{1-\delta}~\mathrm{d}x=\int_{\mathbb{R}^N}\alpha(x)|u_\lambda|^{1-\delta}~\mathrm{d}x,\\ \displaystyle\lim_{n\to\infty}\int_{\mathbb{R}^N}\beta(x)|u_n|^{\gamma}~\mathrm{d}x=\int_{\mathbb{R}^N}\beta(x)|u_\lambda|^{\gamma}~\mathrm{d}x,\\ \widehat{M}\big(\|u_\lambda\|^p\big)\leq\displaystyle\liminf_{n\to\infty} \widehat{M}\big(\|u_n\|^p\big)~\text{and}~u_\lambda\geq 0. \end{cases}$$ By using [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"}, we get $$\label{eq4.3} \lim_{n\to\infty}\Psi_\lambda(u_n)\geq \Psi_\lambda(u_\lambda).$$ Now we show that $u_\lambda\neq 0$. Indeed, if not, then from [\[eq4.3\]](#eq4.3){reference-type="eqref" reference="eq4.3"} we obtain $\Upsilon^+_\lambda\geq 0$, which is a contradiction because $\Upsilon^+_\lambda<0$ (see Lemma [Lemma 9](#lem3.2){reference-type="ref" reference="lem3.2"}). This yields $u_\lambda\neq0$ and $u_\lambda\in\mathcal{K}$. Whether $\displaystyle\int_{\mathbb{R}^N}\beta(x)|u_\lambda|^{\gamma}~\mathrm{d}x>0$ or $\displaystyle\int_{\mathbb{R}^N}\beta(x)|u_\lambda|^{\gamma}~\mathrm{d}x\leq 0$, Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"} yields a unique positive real number $t^+_\lambda(u_\lambda)$ such that $t^+_\lambda(u_\lambda)u_\lambda\in\mathcal{M}^+_\lambda$, i.e., $\mathcal{F}'_{\lambda,u_\lambda}(t^+_\lambda(u_\lambda))=0$. The next step is to prove that $u_\lambda\in\mathcal{M}^+_\lambda$. For this, we claim that $u_n\to u_\lambda$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Suppose, on the contrary, that $u_n\not\to u_\lambda$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. 
Therefore, we have $\|u_\lambda\|^p<\displaystyle\liminf_{n\to\infty}\|u_n\|^p$ and $M\big(\|u_\lambda\|^p\big)<\displaystyle\liminf_{n\to\infty} M\big(\|u_n\|^p\big)$. From [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"}, we obtain $$\liminf_{n\to\infty}\mathcal{F}'_{\lambda,u_n}(t^+_\lambda(u_\lambda)) >\mathcal{F}'_{\lambda,u_\lambda}(t^+_\lambda(u_\lambda))=0.$$ It follows that $\mathcal{F}'_{\lambda,u_n}(t^+_\lambda(u_\lambda))>0$ for $n$ large enough. Since $u_n\in \mathcal{M}^+_\lambda$ for all $n\in \mathbb{N}$, the point $t=1$ is a local minimum point of $\mathcal{F}_{\lambda,u_n}$, with $\mathcal{F}'_{\lambda,u_n}(t)<0$ for all $t\in(0,1)$. This shows that $t^+_\lambda(u_\lambda)>1$. Using the fact that $\mathcal{F}_{\lambda,u_\lambda}$ is strictly decreasing in $(0,t^+_\lambda(u_\lambda))$ (as $t^+_\lambda(u_\lambda)u_\lambda\in\mathcal{M}^+_\lambda$) and [\[eq4.3\]](#eq4.3){reference-type="eqref" reference="eq4.3"}, we get $$\Psi_\lambda(t^+_\lambda(u_\lambda)u_\lambda)=\mathcal{F}_{\lambda,u_\lambda}(t^+_\lambda(u_\lambda))<\mathcal{F}_{\lambda,u_\lambda}(1)=\Psi_\lambda(u_\lambda)\leq \lim_{n\to\infty} \Psi_\lambda(u_n)=\Upsilon^+_\lambda,$$ which is absurd because $t^+_\lambda(u_\lambda)u_\lambda\in\mathcal{M}^+_\lambda$. Thus, $u_n\to u_\lambda$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. This implies that $\mathcal{F}'_{\lambda,u_\lambda}(1)=0$ and $\mathcal{F}''_{\lambda,u_\lambda}(1)\geq 0$. By Corollary [Corollary 13](#cor3.6){reference-type="ref" reference="cor3.6"}, $\mathcal{M}^0_\lambda =\emptyset$ for $0<\lambda<\lambda_\ast$, and therefore $u_\lambda\in\mathcal{M}^+_\lambda$ and $\Psi_\lambda(u_\lambda)=\mathcal{I}^+_\lambda(u_\lambda)=\Upsilon^+_\lambda$. This completes the proof. ◻ **Lemma 16**. *Let $0<\lambda<\lambda_\ast$. 
Then there exists $w_\lambda\in\mathcal{M}^-_\lambda$ such that $\Psi_\lambda(w_\lambda)=\mathcal{I}^-_\lambda(w_\lambda)=\Upsilon^-_\lambda$.* *Proof.* The functional $\Psi_\lambda$ is bounded from below on $\mathcal{M}_\lambda$ and therefore on $\mathcal{M}^-_\lambda$. As a result, there is a minimizing sequence $\{u_n\}_{n\in \mathbb{N}}\subset \mathcal{M}^-_\lambda$ for $\Psi_\lambda$, i.e., $$\lim_{n\to\infty}\Psi_\lambda(u_n)=\lim_{n\to\infty}\mathcal{I}^-_\lambda(u_n)=\Upsilon^-_\lambda=\inf\big\{\Psi_\lambda(w):w\in \mathcal{M}^-_\lambda\big\}.$$ Due to the coercivity of $\Psi_\lambda$ (see Theorem [Theorem 8](#thm3.1){reference-type="ref" reference="thm3.1"}), the sequence $\{u_n\}_{n\in \mathbb{N}}$ must be bounded in $W^{s,p}_{V}(\mathbb{R}^N)$ and hence we can assume that $u_n\rightharpoonup w_\lambda$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Now, replacing $u_\lambda$ with $w_\lambda$ in [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"}, one can observe that [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"} still holds. We claim that $w_\lambda\neq 0$. Indeed, if $w_\lambda=0$, then by Lemma [Lemma 9](#lem3.2){reference-type="ref" reference="lem3.2"} (see [\[eq3.14\]](#eq3.14){reference-type="eqref" reference="eq3.14"}) and [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"}, we get $$0=\|w_\lambda\|^{p(m+1)}<\liminf_{n\to\infty} \|u_n\|^{p(m+1)}\leq \frac{(\delta+\gamma-1)}{a(p(m+1)+\delta-1)}\liminf_{n\to\infty}\int_{\mathbb{R}^{N}}\beta(x)|u_n|^{\gamma}~\mathrm{d}x$$ $$=\frac{(\delta+\gamma-1)}{a(p(m+1)+\delta-1)}\int_{\mathbb{R}^{N}}\beta(x)|w_\lambda|^{\gamma}~\mathrm{d}x=0,$$ a contradiction; hence the claim. Repeating the above steps, one can easily deduce that $\displaystyle\int_{\mathbb{R}^{N}}\beta(x)|w_\lambda|^{\gamma}~\mathrm{d}x>0$ and $w_\lambda\in\mathcal{K}$. 
Now by Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"}, there exists $t^-_{\lambda}(w_\lambda)>0$ such that $t^-_{\lambda}(w_\lambda)w_\lambda\in \mathcal{M}^-_\lambda$. Next, we claim that $u_n\to w_\lambda$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Indeed, if not, then $u_n\not\to w_\lambda$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$, i.e., $u_n\rightharpoonup w_\lambda$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and also $t^-_{\lambda}(w_\lambda)u_n\rightharpoonup t^-_{\lambda}(w_\lambda) w_\lambda$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Therefore, we obtain $$\label{eq4.7} \begin{cases} \displaystyle\liminf_{n\to\infty}\Psi_\lambda(t^-_{\lambda}(w_\lambda)u_n)>\Psi_\lambda(t^-_{\lambda}(w_\lambda)w_\lambda)\\ \hspace{4cm}\text{and}\\ \displaystyle\liminf_{n\to\infty}\mathcal{F}'_{\lambda,u_n}(t^+_\lambda(w_\lambda)) > \mathcal{F}'_{\lambda,w_\lambda}(t^+_\lambda(w_\lambda))=0. \end{cases}$$ This shows that $\mathcal{F}'_{\lambda,u_n}(t^+_\lambda(w_\lambda))>0$ for $n$ large enough. As $u_n\in\mathcal{M}^-_\lambda$ for all $n\in \mathbb{N}$, the point $t=1$ is the global maximum point of $\mathcal{F}_{\lambda,u_n}$ and $\mathcal{F}'_{\lambda,u_n}(t)<0$ for all $t>1$. Hence we obtain $t^+_\lambda(w_\lambda)<1$. Now, using [\[eq4.7\]](#eq4.7){reference-type="eqref" reference="eq4.7"} and the fact that $1$ is the global maximum point for $\mathcal{F}_{\lambda,u_n}$ leads us to conclude that $$\Psi_\lambda(t^-_{\lambda}(w_\lambda)w_\lambda)< \liminf_{n\to\infty}\Psi_\lambda(t^-_{\lambda}(w_\lambda)u_n)\leq \liminf_{n\to\infty}\Psi_\lambda(u_n)=\Upsilon^-_\lambda,$$ which is a contradiction because $t^-_{\lambda}(w_\lambda)w_\lambda\in \mathcal{M}^-_\lambda$. This yields $u_n\to w_\lambda$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and hence $\mathcal{F}'_{\lambda,w_\lambda}(1)=0$ and $\mathcal{F}''_{\lambda,w_\lambda}(1)\leq 0$. 
From Corollary [Corollary 13](#cor3.6){reference-type="ref" reference="cor3.6"}, we have $\mathcal{M}^0_\lambda =\emptyset$ for $0<\lambda<\lambda_\ast$. Thus $w_\lambda\in\mathcal{M}^-_\lambda$ and $\Psi_\lambda(w_\lambda)=\mathcal{I}^-_\lambda(w_\lambda)=\Upsilon^-_\lambda$. This completes the proof. ◻ **Corollary 17**. *Let $u\in\mathcal{M}^\pm_\lambda$; then there exist $\rho>0$ and a continuous function $t:B_\rho(0)\to (0,\infty)$ such that $t(0)=1$ and $t(y)(u+y)\in\mathcal{M}^\pm_\lambda$ for all $y\in B_\rho(0)$, where $$B_\rho(0)=\big\{u\in W^{s,p}_{V}(\mathbb{R}^N):\|u\|<\rho\big\}.$$* *Proof.* The proof is quite similar to \[[@garain2023anisotropic], Lemma 2.14\] and hence omitted here. ◻ **Lemma 18**. *Let $0<\lambda<\lambda_\ast$ and suppose $u_\lambda$ and $w_\lambda$ are minimizers of $\Psi_\lambda$ over $\mathcal{M}^+_\lambda$ and $\mathcal{M}^-_\lambda$, respectively. Then for every $v\in\mathcal{K}$, we have* - *there exists an $\epsilon_0>0$ such that $\Psi_\lambda(u_\lambda+\epsilon v)\geq\Psi_\lambda(u_\lambda)$ for every $0\leq\epsilon\leq\epsilon_0$.* - *$t^-_\lambda(w_\lambda+\epsilon v)\to 1$ as $\epsilon\to 0^+$, where $t^-_\lambda(w_\lambda+\epsilon v)$ is the unique positive real number such that $t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda+\epsilon v)\in \mathcal{M}^-_\lambda$.* *Proof.* $(i)$ The proof is a consequence of Corollary [Corollary 17](#cor4.3){reference-type="ref" reference="cor4.3"}. 
$(ii)$ Define a $\mathcal{C}^\infty$-function $\mathcal{H}:(0,\infty)\times\mathbb{R}^3 \to\mathbb{R}$ by $\mathcal{H}(t,\textit{a},\textit{b},\textit{c})=t^{p(m+1)-1}\textit{a}-\lambda t^{-\delta}\textit{b}-t^{\gamma-1}\textit{c}$, where $$\textit{a}=M\big(\|w_\lambda\|^p\big)\|w_\lambda\|^p,~\textit{b}=\int_{\mathbb{R}^{N}}\alpha(x)|w_\lambda|^{1-\delta}~\mathrm{d}x~\text{and}~\textit{c}=\int_{\mathbb{R}^{N}}\beta (x)|w_\lambda|^\gamma~\mathrm{d}x.$$ It follows from $w_\lambda\in\mathcal{M}^-_\lambda$ that $$\mathcal{H}(1,\textit{a},\textit{b},\textit{c})=\mathcal{F}'_{\lambda,w_\lambda}(1)=0~\text{and}~\frac{\mathrm{d}\mathcal{H}(1,\textit{a},\textit{b},\textit{c})}{\mathrm{d}t}=\mathcal{F}''_{\lambda,w_\lambda}(1)<0.$$ By the hypothesis, we have $t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda+\epsilon v)\in \mathcal{M}^-_\lambda$, i.e., $\mathcal{F}'_{\lambda,w_\lambda+\epsilon v}(t^-_\lambda(w_\lambda+\epsilon v))=0$. This shows that $$\label{eq4.8} \mathcal{H}\bigg(t^-_\lambda(w_\lambda+\epsilon v),M\big(\|w_\lambda+\epsilon v\|^p\big)\|w_\lambda+\epsilon v\|^p,\int_{\mathbb{R}^{N}}\alpha(x)|w_\lambda+\epsilon v|^{1-\delta}~\mathrm{d}x,\int_{\mathbb{R}^{N}}\beta (x)|w_\lambda+\epsilon v|^\gamma~\mathrm{d}x\bigg)=0.$$ By the implicit function theorem, there exist open neighbourhoods $A\subset(0,\infty)$ and $B\subset \mathbb{R}^3$ containing $1$ and $\bigg(M\big(\|w_\lambda\|^p\big)\|w_\lambda\|^p,\displaystyle\int_{\mathbb{R}^{N}}\alpha(x)|w_\lambda|^{1-\delta}~\mathrm{d}x,\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|w_\lambda|^\gamma~\mathrm{d}x\bigg)$ respectively, such that for all $y\in B$, the equation $\mathcal{H}(t,y)=0$ has a unique solution $t=\mathcal{G}(y)$ with $\mathcal{G}:B\to A$ being a smooth function. 
Choosing $\epsilon>0$ small enough so that $$\bigg(M\big(\|w_\lambda+\epsilon v\|^p\big)\|w_\lambda+\epsilon v\|^p,\int_{\mathbb{R}^{N}}\alpha(x)|w_\lambda+\epsilon v|^{1-\delta}~\mathrm{d}x,\int_{\mathbb{R}^{N}}\beta (x)|w_\lambda+\epsilon v|^\gamma~\mathrm{d}x\bigg)\in B ,$$ we conclude that [\[eq4.8\]](#eq4.8){reference-type="eqref" reference="eq4.8"} has a unique solution $$\mathcal{G}\bigg(M\big(\|w_\lambda+\epsilon v\|^p\big)\|w_\lambda+\epsilon v\|^p,\int_{\mathbb{R}^{N}}\alpha(x)|w_\lambda+\epsilon v|^{1-\delta}~\mathrm{d}x,\int_{\mathbb{R}^{N}}\beta (x)|w_\lambda+\epsilon v|^\gamma~\mathrm{d}x\bigg)=t^-_\lambda(w_\lambda+\epsilon v) .$$ The continuity of $\mathcal{G}$ implies that $t^-_\lambda(w_\lambda+\epsilon v)\to 1$ as $\epsilon\to 0^+$. This completes the proof. ◻ **Lemma 19**. *Let $0<\lambda<\lambda_\ast$; then the following results hold:* - *if $u_\lambda$ is a minimizer of $\Psi_\lambda$ over $\mathcal{M}^+_\lambda$, then $u_\lambda>0$ a.e. in $\mathbb{R}^N$, $\alpha(x)u_\lambda^{-\delta}v\in L^1(\mathbb{R}^N)$ and it satisfies $$\label{eq4.997} \langle{L(u_\lambda),v}\rangle-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}v~\mathrm{d}x\geq 0~\text{for all}~v\in\mathcal{K}.$$* - *if $w_\lambda$ is a minimizer of $\Psi_\lambda$ over $\mathcal{M}^-_\lambda$, then $w_\lambda>0$ a.e. in $\mathbb{R}^N$, $\alpha(x)w_\lambda^{-\delta}v\in L^1(\mathbb{R}^N)$ and it satisfies $$\label{eq4.1000} \langle{L(w_\lambda),v}\rangle-\lambda \int_{\mathbb{R}^{N}} \alpha(x)w_\lambda^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)w_\lambda^{\gamma-1}v~\mathrm{d}x\geq 0~\text{for all}~v\in\mathcal{K}.$$* *Proof.* $(i)$ Since $u_\lambda$ is a minimizer of $\Psi_\lambda$ on $\mathcal{M}^+_\lambda$, we have $u_\lambda\geq 0$ a.e. in $\mathbb{R}^{N}$. We claim that $u_\lambda>0$ for a.a. $x\in\mathbb{R}^{N}$. 
Indeed, if not, then there exists a set $\mathbb{C}$ of positive measure such that $u_\lambda=0$ on $\mathbb{C}$. Choose $v\in W^{s,p}_{V}(\mathbb{R}^N)$, $v>0$ and $\epsilon\in(0,\epsilon_0)$ as in Lemma [Lemma 18](#lem4.4){reference-type="ref" reference="lem4.4"} such that $(u_\lambda+\epsilon v)^{1-\delta}>u^{1-\delta}_\lambda$ a.e. in $\mathbb{R}^N\setminus\mathbb{C}$. From Lemma [Lemma 18](#lem4.4){reference-type="ref" reference="lem4.4"}-$(i)$, we have $$\hspace{-11cm}0\leq \frac{\Psi_\lambda(u_\lambda+\epsilon v)-\Psi_\lambda(u_\lambda)}{\epsilon}$$ $$< \frac{1}{p\epsilon}\bigg[\widehat{M}\big(\|u_\lambda+\epsilon v\|^p\big)-\widehat{M}\big(\|u_\lambda\|^p\big)\bigg]-\frac{\lambda}{(1-\delta)\epsilon^\delta}\int_\mathbb{C}\alpha(x)v^{1-\delta}~\mathrm{d}x-\frac{1}{\gamma\epsilon}\int_{\mathbb{R}^N}\beta(x) \big((u_\lambda+\epsilon v)^{\gamma}-u^{\gamma}_\lambda\big)~\mathrm{d}x.$$ This yields $$0\leq \frac{\Psi_\lambda(u_\lambda+\epsilon v)-\Psi_\lambda(u_\lambda)}{\epsilon} \to-\infty~\text{as}~\epsilon\to 0^+,$$ which is a contradiction. Thus we have $u_\lambda>0$ a.e. in $\mathbb{R}^N$. Further, take $v\in\mathcal{K}$ and choose a decreasing sequence $\{\epsilon_n\}_{n\in \mathbb{N}}\subset (0,1]$ such that $\epsilon_n\to 0$ as $n\to\infty$. 
For each $n\in \mathbb{N}$, define a sequence of non-negative measurable functions $\{g_n(x)\}_{n\in \mathbb{N}}$ by $$g_n(x)=\alpha(x)\bigg(\frac{(u_\lambda+\epsilon_n v)^{1-\delta}-u^{1-\delta}_\lambda}{\epsilon_n}\bigg).$$ By the mean value theorem, we have $$\lim_{n\to\infty}g_n(x)=(1-\delta)\alpha(x)u^{-\delta}_\lambda v~\text{for a.e.}~x\in \mathbb{R}^N.$$ Consequently, by Fatou's lemma, we obtain $$\label{eq4.9} \int_{\mathbb{R}^N} \alpha(x)u^{-\delta}_\lambda v~\mathrm{d}x\leq \frac{1}{1-\delta}\liminf_{n\to\infty} \int_{\mathbb{R}^N} \alpha(x)\bigg(\frac{(u_\lambda+\epsilon_n v)^{1-\delta}-u^{1-\delta}_\lambda}{\epsilon_n}\bigg)~\mathrm{d}x.$$ Using Lemma [Lemma 18](#lem4.4){reference-type="ref" reference="lem4.4"}-$(i)$ for $n$ large enough, we have $$\label{eq4.666} 0\leq \frac{\Psi_\lambda(u_\lambda+\epsilon_n v)-\Psi_\lambda(u_\lambda)}{\epsilon_n}.$$ On simplifying [\[eq4.666\]](#eq4.666){reference-type="eqref" reference="eq4.666"}, we have $$\label{eq4.10} \begin{split} \frac{\lambda}{1-\delta}\int_{\mathbb{R}^N} \alpha(x)\bigg(\frac{(u_\lambda+\epsilon_n v)^{1-\delta}-u^{1-\delta}_\lambda}{\epsilon_n}\bigg)~\mathrm{d}x\leq \frac{1}{p}\Bigg[\frac{\widehat{M}\big(\|u_\lambda+\epsilon_n v\|^p\big)-\widehat{M}\big(\|u_\lambda\|^p\big)}{\epsilon_n}\Bigg]\\&\hspace{-8cm}-\frac{1}{\gamma}\int_{\mathbb{R}^N} \beta(x)\bigg(\frac{(u_\lambda+\epsilon_n v)^{\gamma}-u^{\gamma}_\lambda}{\epsilon_n}\bigg)~\mathrm{d}x. 
\end{split}$$ By the mean value theorem and the Lebesgue dominated convergence theorem, we have $$\label{eq4.1001} \frac{1}{p}\Bigg[\frac{\widehat{M}\big(\|u_\lambda+\epsilon_n v\|^p\big)-\widehat{M}\big(\|u_\lambda\|^p\big)}{\epsilon_n}\Bigg]\to \langle{L(u_\lambda),v}\rangle ~\text{as} ~n\to\infty$$ and $$\label{eq4.1002} \frac{1}{\gamma}\int_{\mathbb{R}^N} \beta(x)\bigg(\frac{(u_\lambda+\epsilon_n v)^{\gamma}-u^{\gamma}_\lambda}{\epsilon_n}\bigg)~\mathrm{d}x\to \int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}v~\mathrm{d}x~\text{as} ~n\to\infty.$$ Letting $n\to\infty$ on both sides of [\[eq4.10\]](#eq4.10){reference-type="eqref" reference="eq4.10"} and using [\[eq4.1001\]](#eq4.1001){reference-type="eqref" reference="eq4.1001"} and [\[eq4.1002\]](#eq4.1002){reference-type="eqref" reference="eq4.1002"}, we deduce from [\[eq4.9\]](#eq4.9){reference-type="eqref" reference="eq4.9"} that $\alpha(x)u_\lambda^{-\delta}v\in L^1(\mathbb{R}^N)$ and [\[eq4.997\]](#eq4.997){reference-type="eqref" reference="eq4.997"} holds. $(ii)$ According to Lemma [Lemma 18](#lem4.4){reference-type="ref" reference="lem4.4"} $(ii)$, we have $t^-_\lambda(w_\lambda+\epsilon v)\to 1$ as $\epsilon\to 0^+$, where $t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda+\epsilon v)\in \mathcal{M}^-_\lambda$ for $v\in\mathcal{K}$ and $0\leq\epsilon\leq\epsilon_0$. Since $w_\lambda$ is a minimizer of $\Psi_\lambda$ on $\mathcal{M}^-_\lambda$, we have $w_\lambda\geq 0$ a.e. in $\mathbb{R}^{N}$ and $$\label{eq4.11} \Psi_\lambda(w_\lambda)\leq \Psi_\lambda(t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda+\epsilon v))~\text{for all}~0\leq\epsilon\leq\epsilon_0.$$ Let us prove that $w_\lambda>0$ for a.a. $x\in\mathbb{R}^N$. Indeed, if not, then $w_\lambda=0$ on $\mathbb{C}$, where the measure of the set $\mathbb{C}$ is positive. 
Furthermore, choose $v\in W^{s,p}_{V}(\mathbb{R}^N)$ with $v>0$ and $\epsilon\in(0,\epsilon_0)$ small enough such that $\big(t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda+\epsilon v)\big)^{1-\delta}>\big(t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda)\big)^{1-\delta}$ a.e. in $\mathbb{R}^N\setminus\mathbb{C}$. From [\[eq4.11\]](#eq4.11){reference-type="eqref" reference="eq4.11"} and using the fact that $1$ is the global maximum point for $\mathcal{F}_{\lambda,w_\lambda}$, we have $$\label{eq4.12} \Psi_\lambda(w_\lambda)\geq \Psi_\lambda(t^-_\lambda(w_\lambda+\epsilon v)w_\lambda).$$ Combining [\[eq4.11\]](#eq4.11){reference-type="eqref" reference="eq4.11"} and [\[eq4.12\]](#eq4.12){reference-type="eqref" reference="eq4.12"}, we have $$\label{eq4.999} \Psi_\lambda(t^-_\lambda(w_\lambda+\epsilon v)(w_\lambda+\epsilon v))\geq \Psi_\lambda(t^-_\lambda(w_\lambda+\epsilon v)w_\lambda) ~\text{for} ~\epsilon\in(0,\epsilon_0).$$ Now, using [\[eq4.999\]](#eq4.999){reference-type="eqref" reference="eq4.999"} and applying a similar strategy as in case $(i)$, we can deduce that $w_\lambda>0$ a.e. in $\mathbb{R}^N$, $\alpha(x)w_\lambda^{-\delta}v\in L^1(\mathbb{R}^N)$ and [\[eq4.1000\]](#eq4.1000){reference-type="eqref" reference="eq4.1000"} holds. ◻ **Theorem 20**. *Let $0<\lambda<\lambda_*$; then the minimizers $u_\lambda$ and $w_\lambda$ of the functional $\Psi_\lambda$ on $\mathcal{M}^+_\lambda$ and $\mathcal{M}^-_\lambda$, respectively, are weak solutions of [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}.* *Proof.* To prove that $u_\lambda$ is a weak solution of [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}, choose $v\in W^{s,p}_{V}(\mathbb{R}^N)$ and define $\phi_\epsilon=u_\lambda+\epsilon v$; then, for each given $\epsilon>0$, $\phi^+_\epsilon\in\mathcal{K}$. 
Now by Lemma [Lemma 19](#lem4.5){reference-type="ref" reference="lem4.5"}-$(i)$, we have $$\langle{L(u_\lambda),\phi^+_\epsilon}\rangle-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{-\delta}\phi^+_\epsilon~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}\phi^+_\epsilon~\mathrm{d}x\geq 0 .$$ Replacing $\phi^+_\epsilon=\phi_\epsilon+\phi^-_\epsilon$ in the above inequality, we obtain $$\langle{L(u_\lambda),\phi_\epsilon+\phi^-_\epsilon}\rangle-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{-\delta}(\phi_\epsilon+\phi^-_\epsilon)~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}(\phi_\epsilon+\phi^-_\epsilon)~\mathrm{d}x\geq 0 .$$ Define $\mathcal{D}_{\epsilon}=\{x\in\mathbb{R}^{N}:~\phi_\epsilon(x)\leq 0\}$ and $\mathcal{D}^c_{\epsilon}=\{x\in\mathbb{R}^{N}:~\phi_\epsilon(x)>0\}$. After some straightforward calculations, we have $$\begin{aligned} \quad 0&\leq \Bigg[M\big(\|u_\lambda\|^p\big)\|u_\lambda\|^p-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{1-\delta}~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma}~\mathrm{d}x \Bigg]\\ &\qquad+\epsilon\Bigg[\langle{L(u_\lambda),v}\rangle-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}v~\mathrm{d}x \Bigg]\\ &\qquad+M\big(\|u_\lambda\|^p\big)\bigg[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(\phi^-_\epsilon(x)-\phi^-_\epsilon(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y-\int_{\mathcal{D}_{\epsilon}}V(x)u_\lambda^{p-1}(u_\lambda+\epsilon v)~\mathrm{d}x\bigg]\\ &\qquad+\lambda \int_{\mathcal{D}_{\epsilon}} \alpha(x)u_\lambda^{-\delta}(u_\lambda+\epsilon v)~\mathrm{d}x+\int_{\mathcal{D}_{\epsilon}}\beta(x)u_\lambda^{\gamma-1}(u_\lambda+\epsilon v)~\mathrm{d}x.\end{aligned}$$ This yields $$\label{eq4.13} \begin{split} 0\leq\epsilon\Bigg[\langle{L(u_\lambda),v}\rangle-\lambda \int_{\mathbb{R}^{N}} 
\alpha(x)u_\lambda^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}v~\mathrm{d}x \Bigg]\\ &\hspace{-8cm}+M\big(\|u_\lambda\|^p\big)\bigg[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(\phi^-_\epsilon(x)-\phi^-_\epsilon(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y-\epsilon\int_{\mathcal{D}_{\epsilon}}V(x)u_\lambda^{p-1}v~\mathrm{d}x\bigg]\\ &\hspace{-7.9cm}+\int_{\mathcal{D}_{\epsilon}}\beta(x)u_\lambda^{\gamma-1}(u_\lambda+\epsilon v)~\mathrm{d}x , \end{split}$$ where the above inequality is obtained by using the fact that $u_\lambda\in\mathcal{M}^+_\lambda$, i.e., $$M\big(\|u_\lambda\|^p\big)\|u_\lambda\|^p-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{1-\delta}~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma}~\mathrm{d}x =0.$$ Further, we also have used $$\int_{\mathcal{D}_{\epsilon}} \alpha(x)u_\lambda^{-\delta}(u_\lambda+\epsilon v)~\mathrm{d}x\leq 0~\text{and}~\int_{\mathcal{D}_{\epsilon}}V(x)u_\lambda^{p}~\mathrm{d}x\geq 0.$$ Define $$\mathcal{I_{\lambda,\epsilon}}=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(\phi^-_\epsilon(x)-\phi^-_\epsilon(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y,$$ then by the symmetry of the fractional kernel and after some simple computations, we obtain $$\begin{aligned} \mathcal{I_{\lambda,\epsilon}}&=\int_{\mathcal{D}_{\epsilon}}\int_{\mathcal{D}_{\epsilon}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(\phi^-_\epsilon(x)-\phi^-_\epsilon(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y\\ &\qquad+2 \int_{\mathcal{D}_{\epsilon}}\int_{\mathcal{D}^c_{\epsilon}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(\phi^-_\epsilon(x)-\phi^-_\epsilon(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y \\ &\leq 
-\epsilon\bigg[\int_{\mathcal{D}_{\epsilon}}\int_{\mathcal{D}_{\epsilon}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(v(x)-v(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y\\ &\qquad+2 \int_{\mathcal{D}_{\epsilon}}\int_{\mathcal{D}^c_{\epsilon}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-2}(u_\lambda(x)-u_\lambda(y))(v(x)-v(y))}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y \bigg] \\ &\leq 2\epsilon \int_{\mathcal{D}_{\epsilon}}\int_{\mathbb{R}^{N}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-1}|v(x)-v(y)|}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y.\end{aligned}$$ Now, using the fact that $u_\lambda$ is bounded in $W^{s,p}_{V}(\mathbb{R}^N)$ together with Hölder's inequality, we infer that there exists a constant $C>0$ such that $$\label{eq4.16} \int_{\mathcal{D}_{\epsilon}}\int_{\mathbb{R}^{N}}\frac{|u_\lambda(x)-u_\lambda(y)|^{p-1}|v(x)-v(y)|}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y\leq C\Bigg[\int_{\mathcal{D}_{\epsilon}}\int_{\mathbb{R}^{N}}\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y \Bigg] ^{\frac{1}{p}}.$$ From the above inequality and [\[eq4.16\]](#eq4.16){reference-type="eqref" reference="eq4.16"}, we obtain $$\label{eq4.17} \mathcal{I_{\lambda,\epsilon}}\leq 2C\epsilon \Bigg[\int_{\mathcal{D}_{\epsilon}}\int_{\mathbb{R}^{N}}\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y \Bigg] ^{\frac{1}{p}}.$$ Since $\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}\in L^1(\mathbb{R}^{2N})$, for any $\zeta>0$ there exists $R_\zeta$ large enough such that $$\int_{\textit{supp}(v)}\int_{\mathbb{R}^{N}\setminus B_{R_\zeta}(0)}\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y<\frac{\zeta}{2}.$$ It follows from $\mathcal{D}_{\epsilon}\subset \textit{supp}(v)$ and $\big|\mathcal{D}_{\epsilon}\times B_{R_\zeta}(0)\big|\to 0$ as $\epsilon\to 0^+$ that there exist $\rho_{\zeta}>0$ and $\epsilon_\zeta>0$ such that $$\big|\mathcal{D}_{\epsilon}\times 
B_{R_\zeta}(0)\big|<\rho_{\zeta}~\text{and}~\int_{\mathcal{D}_{\epsilon}}\int_{B_{R_\zeta}(0)}\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y<\frac{\zeta}{2} ~\text{for}~\epsilon\in(0,\epsilon_\zeta).$$ This shows that for any $\epsilon\in(0,\epsilon_\zeta)$, we have $$\int_{\mathcal{D}_{\epsilon}}\int_{\mathbb{R}^{N}}\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y <\zeta.$$ Hence $$\label{eq4.18} \lim_{\epsilon\to 0^+} \int_{\mathcal{D}_{\epsilon}}\int_{\mathbb{R}^{N}}\frac{|v(x)-v(y)|^p}{|x-y|^{N+sp}}~\mathrm{d}x\mathrm{d}y=0.$$ Observe that $u_\lambda\leq -\epsilon v$ on $\mathcal{D}_{\epsilon}$ and hence by Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"} there exists a constant $c>0$ such that $$\bigg|\int_{\mathcal{D}_{\epsilon}}\beta(x)u_\lambda^{\gamma-1}(u_\lambda+\epsilon v)~\mathrm{d}x\bigg|\leq \int_{\mathcal{D}_{\epsilon}}|\beta(x)u_\lambda^{\gamma}+\epsilon\beta(x) u_\lambda^{\gamma-1}v|~\mathrm{d}x\leq 2c\epsilon^\gamma\|\beta\|_{L^\infty(\mathbb{R}^{N})}\|v\|^\gamma.$$ This yields $$\label{eq4.19} \lim_{\epsilon\to 0^+}\frac{1}{\epsilon} \int_{\mathcal{D}_{\epsilon}}\beta(x)u_\lambda^{\gamma-1}(u_\lambda+\epsilon v)~\mathrm{d}x=0.$$ Dividing both sides of [\[eq4.13\]](#eq4.13){reference-type="eqref" reference="eq4.13"} by $\epsilon$ and using that $|\mathcal{D}_{\epsilon}|\to 0$ as $\epsilon\to 0^+$, together with [\[eq4.17\]](#eq4.17){reference-type="eqref" reference="eq4.17"}, [\[eq4.18\]](#eq4.18){reference-type="eqref" reference="eq4.18"} and [\[eq4.19\]](#eq4.19){reference-type="eqref" reference="eq4.19"}, we obtain $$\langle{L(u_\lambda),v}\rangle-\lambda \int_{\mathbb{R}^{N}} \alpha(x)u_\lambda^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u_\lambda^{\gamma-1}v~\mathrm{d}x\geq 0.$$ Due to the arbitrariness of $v$, we deduce that $u_\lambda$ is a weak solution to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. 
Similarly, we can prove that $w_\lambda$ is a weak solution to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. This completes the proof. ◻ # Existence of solutions for $\lambda=\lambda_\ast$ {#sec5} In this section, we study the existence of solutions to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} for $\lambda=\lambda_\ast$ and characterize the Nehari submanifold $\mathcal{M}^0_{\lambda_\ast}$, which is non-empty. **Lemma 21**. *Let $u\in \mathcal{M}^0_{\lambda_\ast}$. Then for every $v\in W^{s,p}_{V}(\mathbb{R}^N)$ we have $$\label{eq5.777} p(m+1)\langle{L(u),v}\rangle -\lambda_\ast \int_{\mathbb{R}^{N}} \alpha(x)u^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u^{\gamma-1}v~\mathrm{d}x= 0.$$ In particular, $(\mathcal{E}_{\lambda_\ast})$ has no solution in $\mathcal{M}^0_{\lambda_\ast}$.* *Proof.* Let us rewrite $\lambda(u)$ as $\lambda(u)=C(a,\gamma,\delta,m,p)g(u)h(u)$, where we define $g$ and $h$ by $$\label{eq5.99} g(u)=\frac{1}{\displaystyle\int_{\mathbb{R}^{N}}\alpha (x)|u|^{1-\delta}~\mathrm{d}x} ~\text{and}~h(u)=\frac{\Big[\|u\|^{p(m+1)}\Big]^{\frac{\gamma+\delta-1}{\gamma-p(m+1)}}}{\bigg[\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x\bigg]^{\frac{p(m+1)+\delta-1}{\gamma-p(m+1)}}}.$$ Let $u\in\mathcal{M}^0_{\lambda_\ast}$ and $v\in\mathcal{K}$. By Theorem [Theorem 8](#thm3.1){reference-type="ref" reference="thm3.1"} and Corollary [Corollary 10](#cor3.3){reference-type="ref" reference="cor3.3"}, the map $u\mapsto\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x$ is continuous and $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x>0$. It follows that $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u+tv|^\gamma~\mathrm{d}x>0$ for $t>0$ small enough and hence $h(u+tv)$ is well-defined for $t>0$ small enough. Thus $\langle{h'(u),v}\rangle$ exists and is finite. 
Now by Corollary [Corollary 13](#cor3.6){reference-type="ref" reference="cor3.6"}, we have $\lambda(u)=\lambda_\ast$. This shows that $\lambda(u+tv) -\lambda(u)=\lambda(u+tv)-\lambda_\ast\geq 0~\text{for all}~t>0 ~\text{small enough}.$ Hence $$\label{eq5.1} (h(u+tv)-h(u))g(u+tv)\geq -h(u)(g(u+tv)-g(u)).$$ Dividing both sides of [\[eq5.1\]](#eq5.1){reference-type="eqref" reference="eq5.1"} by $t$ and taking the limit inferior as $t\to 0^+$, we get $$\label{eq5.2} h(u)\bigg[\int_{\mathbb{R}^{N}}\alpha (x)|u|^{1-\delta}~\mathrm{d}x\bigg]^{-2}\liminf_{t\to 0^+}\int_{\mathbb{R}^{N}}\alpha(x)\bigg(\frac{|u+tv|^{1-\delta}-|u|^{1-\delta}}{t}\bigg)~\mathrm{d}x \leq \langle{h'(u),v}\rangle g(u)<\infty.$$ By the mean value theorem, there exists $\theta\in(0,1)$ such that $$\alpha(x)\bigg(\frac{|u+tv|^{1-\delta}-|u|^{1-\delta}}{t}\bigg)=(1-\delta)\alpha(x)(u+\theta tv)^{-\delta}v \geq 0$$ and $$\label{eq5.999} \lim_{t\to 0^+}\alpha(x)\bigg(\frac{|u+tv|^{1-\delta}-|u|^{1-\delta}}{t}\bigg)=H(x)~\text{(say)}=\begin{cases} 0~\text{if}~v=0,u>0;\\ (1-\delta)\alpha(x)u^{-\delta}v~\text{if}~v>0,u>0;\\ \infty~\text{if}~v>0,u=0. \end{cases}$$ By Fatou's lemma, we have $$\label{eq5.3} \int_{\mathbb{R}^{N}}H(x)~\mathrm{d}x\leq \liminf_{t\to 0^+}\int_{\mathbb{R}^{N}}\alpha(x)\bigg(\frac{|u+tv|^{1-\delta}-|u|^{1-\delta}}{t}\bigg)~\mathrm{d}x.$$ It is clear from [\[eq5.2\]](#eq5.2){reference-type="eqref" reference="eq5.2"} and [\[eq5.3\]](#eq5.3){reference-type="eqref" reference="eq5.3"} that $0\leq \displaystyle\int_{\mathbb{R}^{N}}H(x)~\mathrm{d}x<\infty$. Hence we must have $H(x)=(1-\delta)\alpha(x)u^{-\delta}v$ and $u>0$ a.e. in $\mathbb{R}^{N}$. Choosing $v>0$, we obtain $0<\displaystyle\int_{\mathbb{R}^{N}}\alpha(x)u^{-\delta}v~\mathrm{d}x<\infty$. 
It follows that $\langle{g'(u),v}\rangle$ exists and is given by $$\label{eq5.5} \langle{g'(u),v}\rangle=-(1-\delta)\bigg[\int_{\mathbb{R}^{N}}\alpha (x)|u|^{1-\delta}~\mathrm{d}x\bigg]^{-2}\int_{\mathbb{R}^{N}}\alpha(x)u^{-\delta}v~\mathrm{d}x.$$ Taking into account [\[eq5.99\]](#eq5.99){reference-type="eqref" reference="eq5.99"}, [\[eq5.2\]](#eq5.2){reference-type="eqref" reference="eq5.2"}, [\[eq5.3\]](#eq5.3){reference-type="eqref" reference="eq5.3"}, [\[eq5.5\]](#eq5.5){reference-type="eqref" reference="eq5.5"} and the fact that $u\in\mathcal{M}^0_{\lambda_\ast}$, we can easily obtain $$\label{eq5.6} p(m+1)\langle{L(u),v}\rangle -\lambda_\ast \int_{\mathbb{R}^{N}} \alpha(x)u^{-\delta}v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)u^{\gamma-1}v~\mathrm{d}x\geq 0,~\forall~v\in \mathcal{K}.$$ Choose $v\in W^{s,p}_{V}(\mathbb{R}^N)$ and define $\phi_\epsilon=u+\epsilon v$; then for each given $\epsilon>0$ we have $\phi^+_\epsilon\in\mathcal{K}$. Now, the result follows by substituting $\phi^+_\epsilon$ for $v$ in [\[eq5.6\]](#eq5.6){reference-type="eqref" reference="eq5.6"} and arguing as in Theorem [Theorem 20](#thm4.6){reference-type="ref" reference="thm4.6"}.\ Next, we show that $(\mathcal{E}_{\lambda_\ast})$ has no solution in $\mathcal{M}^0_{\lambda_\ast}$. Suppose, on the contrary, that it has a solution $u\in\mathcal{M}^0_{\lambda_\ast}$. Then, from the definition of weak solution and [\[eq5.777\]](#eq5.777){reference-type="eqref" reference="eq5.777"}, we deduce that $$\int_{\mathbb{R}^{N}}\big(\lambda_\ast(p(m+1)+\delta-1)\alpha(x)u^{-\delta}+(p(m+1)-\gamma)\beta(x)u^{\gamma-1}\big)v~\mathrm{d}x=0~\text{for all}~v\in W^{s,p}_{V}(\mathbb{R}^N).$$ This yields $$\lambda_\ast(p(m+1)+\delta-1)\alpha(x)u^{-\delta}= (\gamma-p(m+1))\beta(x)u^{\gamma-1}~\text{a.e. in} ~\mathbb{R}^{N}.$$ Due to the sign-changing behaviour of $\beta$, two situations may occur: either $\beta(x)\leq 0$ or $\beta(x)> 0$. 
If $\beta(x)\leq 0$ in some positive measure set $\mathcal{D}\subset \mathbb{R}^N$, then it is easy to see that $\lambda_\ast(p(m+1)+\delta-1)\alpha(x)u^{-\delta}\leq 0$ in $\mathcal{D}$, which is a contradiction. Further, if $\beta(x)>0$ a.e. in $\mathbb{R}^N$, then from $(H4)$ we obtain $$u=\Bigg[\frac{\lambda_\ast(p(m+1)+\delta-1)\alpha(x)}{(\gamma-p(m+1))\beta(x)}\Bigg]^{\frac{1}{\gamma+\delta-1}}\notin W^{s,p}_{V}(\mathbb{R}^N),$$ which is again a contradiction. This completes the proof. ◻ **Corollary 22**. *The Nehari submanifold set $\mathcal{M}^0_{\lambda_\ast}$ is compact.* *Proof.* Choose a sequence $\{u_n\}_{n\in \mathbb{N}}$ in $\mathcal{M}^0_{\lambda_\ast}$, then by Corollary [Corollary 10](#cor3.3){reference-type="ref" reference="cor3.3"}, $\{u_n\}_{n\in \mathbb{N}}$ is bounded. Hence up to a subsequence $u_n\rightharpoonup u$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Similar to Lemma [Lemma 16](#lem4.2){reference-type="ref" reference="lem4.2"}, we can obtain $u>0$ and $\displaystyle\int_{\mathbb{R}^{N}}\beta(x)|u|^{\gamma}~\mathrm{d}x>0$. Now we claim that $u_n\to u$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Indeed, if not, then we have $\lambda_\ast=\displaystyle\liminf_{n\to\infty}\lambda(u_n)>\lambda(u),$ which is a contradiction. Therefore, $u_n\to u$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and $\mathcal{F}'_{\lambda_*,u}(1)=\mathcal{F}''_{\lambda_*,u}(1)=0$. Thus, we get $u\in \mathcal{M}^0_{\lambda_\ast}$ and hence the result follows. ◻ Define $t_{\lambda_\ast}:\overline{\hat{\mathcal{M}}_{\lambda_\ast}}\setminus\{0\}\to \mathbb{R}$ and $s_{\lambda_\ast}:\overline{\hat{\mathcal{M}}_{\lambda_\ast}\cup\hat{\mathcal{M}}^+_{\lambda_\ast}}\to\mathbb{R}$ by $$t_{\lambda_\ast}(u)=\begin{cases} t^-_{\lambda_\ast}(u)~\text{if}~u\in\hat{\mathcal{M}}_{\lambda_\ast},\\ t^0_{\lambda_\ast}(u)~\text{otherwise}. 
\end{cases} ~\text{and}~s_{\lambda_\ast}(w)=\begin{cases} t^+_{\lambda_\ast}(w)~\text{if}~w\in\hat{\mathcal{M}}_{\lambda_\ast}\cup\hat{\mathcal{M}}^+_{\lambda_\ast},\\ t^0_{\lambda_\ast}(w)~\text{otherwise}. \end{cases} .$$ **Corollary 23**. *Let $u\notin \hat{\mathcal{M}}^+_{\lambda_\ast}$, then the following results hold:* - *$$\lim_{\lambda\uparrow{\lambda_\ast}}t^-_\lambda(u)=t_{\lambda_\ast}(u)~ \text{and} ~ \lim_{\lambda\uparrow {\lambda_\ast}}t^+_\lambda(u)=s_{\lambda_\ast}(u).$$* - *$$\lim_{\lambda\uparrow {\lambda_\ast}}\mathcal{I}^-_{\lambda}(u)=\Psi_{\lambda_\ast}(t_{\lambda_\ast}(u)u)~ \text{and} ~ \lim_{\lambda\uparrow {\lambda_\ast}}\mathcal{I}^+_{\lambda}(u)=\Psi_{\lambda_\ast}(s_{\lambda_\ast}(u)u).$$* *Proof.* It follows immediately from Lemma [Lemma 14](#lem3.7){reference-type="ref" reference="lem3.7"}. ◻ The results below, which may be directly obtained from [@silva2018local], are important to show the existence of solutions when $\lambda\geq \lambda_\ast$. **Proposition 24**. 
*The following results hold:* - *$\overline{\hat{\mathcal{M}}_{\lambda_\ast}\cup\hat{\mathcal{M}}^+_{\lambda_\ast}}=\overline{\hat{\mathcal{M}}_{\lambda_\ast}}\cup \overline{\hat{\mathcal{M}}^+_{\lambda_\ast}}\cup \{tu:t>0,~u\in\mathcal{M}^0_{\lambda_\ast}\}\cup\{0\}.$* - *$t_{\lambda_\ast}$ is continuous and the map $P^-:S\cap\overline{\hat{\mathcal{M}}_{\lambda_\ast}}\to \mathcal{M}^-_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}$ defined by $P^-(u)=t_{\lambda_\ast}(u)u$ is a homeomorphism.* - *$s_{\lambda_\ast}$ is continuous and the map $P^+:S\to \mathcal{M}^+_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}$ defined by $P^+(u)=s_{\lambda_\ast}(u)u$ is a homeomorphism.* - *$\mathcal{M}^0_{\lambda_\ast}$ has empty interior.* Define $$\hat{\Psi}^-_{\lambda_\ast}=\inf\{\Psi_{\lambda_\ast}(t_{\lambda_\ast}(u)u):~u\in \mathcal{M}^-_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}\}$$ and $$\hat{\Psi}^+_{\lambda_\ast}=\inf\{\Psi_{\lambda_\ast}(s_{\lambda_\ast}(u)u):~u\in \mathcal{M}^+_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}\}.$$ Then from Corollary [Corollary 23](#cor5.3){reference-type="ref" reference="cor5.3"} and Proposition [Proposition 24](#prop5.4){reference-type="ref" reference="prop5.4"}, we notice that $\hat{\Psi}^\pm_{\lambda_\ast}=\Upsilon^\pm_{\lambda_\ast}$ (see [@silva2018local]). **Proposition 25**. *The maps $(0,\lambda_\ast]\ni \lambda\mapsto \Upsilon^\pm_{\lambda}$ are decreasing and left continuous for $\lambda\in(0,\lambda_\ast)$. Moreover, $$\lim_{\lambda\uparrow{\lambda_\ast}}\Upsilon^\pm_{\lambda}=\Upsilon^\pm_{\lambda_\ast}.$$* *Proof.* Let $0<\lambda<\hat{\lambda}<\lambda_\ast$. By Lemma [Lemma 16](#lem4.2){reference-type="ref" reference="lem4.2"} and Lemma [Lemma 14](#lem3.7){reference-type="ref" reference="lem3.7"}, we have $\mathcal{I}^-_\lambda(w_\lambda)=\Upsilon^-_{\lambda}$ and the map $\lambda\mapsto \mathcal{I}^-_{\lambda}(u)$ is strictly decreasing. 
Using this information, we obtain $$\Upsilon^-_{\hat{\lambda}}\leq \mathcal{I}^-_{\hat{\lambda}}(w_\lambda)< \mathcal{I}^-_\lambda(w_\lambda)=\Upsilon^-_{\lambda}.$$ It follows that $\Upsilon^-_{\lambda}$ is strictly decreasing for $0<\lambda<\lambda_\ast$. If $0<\lambda<\lambda_\ast$, then by Corollary [Corollary 23](#cor5.3){reference-type="ref" reference="cor5.3"} and Lemma [Lemma 14](#lem3.7){reference-type="ref" reference="lem3.7"}, we have $$\Upsilon^-_{\lambda_\ast}\leq\Psi_{\lambda_\ast}(t_{\lambda_\ast}(u)u)=\lim_{\lambda\uparrow {\lambda_\ast}}\mathcal{I}^-_{\lambda}(u)=\mathcal{I}^-_{\lambda_\ast}(u)< \mathcal{I}^-_{\lambda}(u).$$ This yields $\Upsilon^-_{\lambda_\ast}\leq \Upsilon^-_{\lambda}$ and hence $\Upsilon^-_{\lambda}$ is decreasing for $0<\lambda\leq\lambda_\ast$. To prove the left continuity of $\Upsilon^-_{\lambda}$, choose a sequence $\{\lambda_n\}_{n\in \mathbb{N}}$ such that $\lambda_n\uparrow \lambda\in (0,\lambda_\ast)$ as $n\to\infty$. Since the map $(0,\lambda_\ast)\ni\lambda\mapsto \Upsilon^-_{\lambda}$ is strictly decreasing and $\lambda_n<\lambda$ for $n$ large enough, it follows that $\Upsilon^-_{\lambda}<\Upsilon^-_{\lambda_n}$ for $n$ large enough and hence $\Upsilon^-_{\lambda}\leq\displaystyle\lim_{n\to\infty}\Upsilon^-_{\lambda_n}$. This shows that $$\Upsilon^-_{\lambda}\leq\lim_{n\to\infty}\Upsilon^-_{\lambda_n}\leq\lim_{n\to\infty}\mathcal{I}^-_{\lambda_n}(w_\lambda)=\lim_{n\to\infty}\Psi_{\lambda_n}(t^-_{\lambda_n}(w_\lambda)w_\lambda)=\Psi_{\lambda}(t^-_{\lambda}(w_\lambda)w_\lambda) =\mathcal{I}^-_\lambda(w_\lambda)=\Upsilon^-_{\lambda}.$$ Hence $$\lim_{n\to\infty}\Upsilon^-_{\lambda_n}=\Upsilon^-_{\lambda},~\text{i.e.},~ \lim_{\lambda_n\uparrow \lambda}\Upsilon^-_{\lambda_n}=\Upsilon^-_{\lambda},~\text{for all}~\lambda\in (0,\lambda_\ast).$$ Therefore, the map $\Upsilon^-_{\lambda}$ is left continuous for $\lambda\in(0,\lambda_\ast)$. 
Next, our aim is to prove that $\Upsilon^-_{\lambda}$ is left continuous at $\lambda=\lambda_\ast$, that is, $\displaystyle\lim_{\lambda\uparrow{\lambda_\ast}}\Upsilon^-_{\lambda}=\Upsilon^-_{\lambda_\ast}.$ To this end, take a sequence $\{\lambda_n\}_{n\in \mathbb{N}}$ such that $\lambda_n\uparrow \lambda_\ast$ as $n\to\infty$; then $\lambda_n<\lambda_\ast$ for all $n$. Since the map $(0,\lambda_\ast]\ni \lambda\mapsto \Upsilon^-_{\lambda}$ is decreasing, we have $\Upsilon^-_{\lambda_\ast}\leq\Upsilon^-_{\lambda_n}$ for $n$ large enough and hence $\Upsilon^-_{\lambda_\ast}\leq\displaystyle\lim_{n\to\infty}\Upsilon^-_{\lambda_n}=\Upsilon$ (say). We claim that $\Upsilon=\Upsilon^-_{\lambda_\ast}$. Suppose not; then there exists $\rho>0$ such that $\Upsilon-\Upsilon^-_{\lambda_\ast}\geq \rho$. Choose $\rho'>0$ with $2\rho'<\rho$ and $w_{\rho'}\in \mathcal{M}^-_{\lambda_\ast}$ such that $\mathcal{I}^-_{\lambda_\ast}(w_{\rho'})\leq \Upsilon^-_{\lambda_\ast}+\rho'$. Now, from the continuity of the map $\lambda\mapsto \mathcal{I}^-_{\lambda}(u)$, we obtain $\mathcal{I}^-_{\lambda_n}(w_{\rho'})\to\mathcal{I}^-_{\lambda_\ast}(w_{\rho'})$ as $\lambda_n\uparrow \lambda_\ast$. Thus for $n$ large enough, we have $$0\leq \mathcal{I}^-_{\lambda_n}(w_{\rho'})-\mathcal{I}^-_{\lambda_\ast}(w_{\rho'})\leq \rho'.$$ It follows that $$\Upsilon^-_{\lambda_n} \leq \mathcal{I}^-_{\lambda_n}(w_{\rho'})\leq \mathcal{I}^-_{\lambda_\ast}(w_{\rho'})+\rho'\leq\Upsilon^-_{\lambda_\ast}+2\rho'\leq \Upsilon-\rho+2\rho'<\Upsilon.$$ Letting $n\to\infty$ in the above inequality, we have $\Upsilon\leq \Upsilon-\rho+2\rho'<\Upsilon$, which is a contradiction, and hence the proof is complete.\ Similarly, we can prove that all the above results hold for the map $\Upsilon^+_{\lambda}$. ◻ **Theorem 26**. *The problem $(\mathcal{E}_{\lambda_\ast})$ has at least two solutions $w_{\lambda_\ast}\in\mathcal{M}^-_{\lambda_\ast}$ and $u_{\lambda_\ast}\in\mathcal{M}^+_{\lambda_\ast}$. 
Moreover, $$\mathcal{I}^-_{\lambda_\ast}(w_{\lambda_\ast})=\Upsilon^-_{\lambda_\ast}~\text{and}~\mathcal{I}^+_{\lambda_\ast}(u_{\lambda_\ast})=\Upsilon^+_{\lambda_\ast}.$$* *Proof.* To prove that $w_{\lambda_\ast}\in\mathcal{M}^-_{\lambda_\ast}$ is a solution of $(\mathcal{E}_{\lambda_\ast})$, take $\lambda_n\uparrow \lambda_\ast$ as $n\to\infty$. Let $\{u_n\}_{n\in \mathbb{N}}$ be a sequence in $\mathcal{M}^-_{\lambda_n}$ such that $\Upsilon^-_{\lambda_n}=\mathcal{I}^-_{\lambda_n}(u_n)$ and $u_n$ is a solution to $(\mathcal{E}_{\lambda_n})$ for each $n\in \mathbb{N}$. We claim that the sequence $\{u_n\}_{n\in \mathbb{N}}$ is bounded in $W^{s,p}_{V}(\mathbb{R}^N)$. Indeed, arguing as in the proof of Lemma [Lemma 9](#lem3.2){reference-type="ref" reference="lem3.2"}-$(ii)$ (see [\[eq3.16\]](#eq3.16){reference-type="eqref" reference="eq3.16"}), we can deduce that $\{u_n\}_{n\in \mathbb{N}}$ is bounded from below. It remains to prove that $\{u_n\}_{n\in \mathbb{N}}$ is bounded from above. If not, let $\|u_n\|\to\infty$ as $n\to\infty$; then, using the fact that $u_n\in\mathcal{M}^-_{\lambda_n}$ and Proposition [Proposition 25](#prop5.5){reference-type="ref" reference="prop5.5"}, we have $$\Upsilon^-_{\lambda_\ast}=\lim_{n\to\infty} \mathcal{I}^-_{\lambda_n}(u_n)=\lim_{n\to\infty} \Psi_{\lambda_n}(u_n)\geq a\bigg(\frac{1}{p(m+1)}-\frac{1}{\gamma}\bigg)\|u_n\|^{p(m+1)}-c{\lambda_n}\|\alpha\|_{L^\xi(\mathbb{R}^N)}\bigg(\frac{1}{1-\delta}-\frac{1}{\gamma}\bigg)\|u_n\|^{1-\delta}$$ $$\hspace{3cm} \to\infty~\text{as}~n \to\infty,~\text{since}~0<1-\delta<1<p(m+1)<\gamma,$$ which is a contradiction. It follows that $\{u_n\}_{n\in \mathbb{N}}$ must be bounded and consequently, up to a subsequence, $u_n\rightharpoonup w_{\lambda_\ast}$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. 
By Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"}, we have $u_n\to w_{\lambda_\ast}$ in $L^\tau(\mathbb{R}^N)$ and in $L^\gamma(\mathbb{R}^N)$ as $n\to\infty$, $u_n\to w_{\lambda_\ast}$ a.e. in $\mathbb{R}^N$, and $w_{\lambda_\ast}\geq 0$. Note that [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"} still holds with $u_\lambda$ replaced by $w_{\lambda_\ast}$. The boundedness of $\{u_n\}_{n\in \mathbb{N}}$ in $W^{s,p}_{V}(\mathbb{R}^N)$ implies that $\bigg\{\frac{|u_n(x)-u_n(y)|^{p-2}(u_n(x)-u_n(y))}{|x-y|^{(N+sp)/p^{\prime}}}\bigg\}_{n\in \mathbb{N}} \text{and } \big\{V^{{\frac{1}{p^{\prime}}}}|u_n|^{p-2}u_n\big\}_{n\in \mathbb{N}}$ are bounded in $L^{p^{\prime}}(\mathbb{R}^{2N})$ and $L^{p^{\prime}}(\mathbb{R}^{N})$, respectively, where $p^{\prime}=\frac{p}{p-1}$ is the conjugate exponent of $p$. Moreover, we have $$\frac{|u_n(x)-u_n(y)|^{p-2}(u_n(x)-u_n(y))}{|x-y|^{(N+sp)/p^{\prime}}}\to \frac{|w_{\lambda_\ast}(x)-w_{\lambda_\ast}(y)|^{p-2}(w_{\lambda_\ast}(x)-w_{\lambda_\ast}(y))}{|x-y|^{(N+sp)/p^{\prime}}}~\text{a.e. in }~\mathbb{R}^{2N}$$ and $$V^{\frac{1}{p^{\prime}}}|u_n|^{p-2}u_n\to V^{\frac{1}{p^{\prime}}}|w_{\lambda_\ast}|^{p-2}w_{\lambda_\ast}~\text{a.e. 
in }~\mathbb{R}^{N}~\text{as}~n\to\infty.$$ It follows that $$\frac{|u_n(x)-u_n(y)|^{p-2}(u_n(x)-u_n(y))}{|x-y|^{(N+sp)/p^{\prime}}}\rightharpoonup \frac{|w_{\lambda_\ast}(x)-w_{\lambda_\ast}(y)|^{p-2}(w_{\lambda_\ast}(x)-w_{\lambda_\ast}(y))}{|x-y|^{(N+sp)/p^{\prime}}}~\text{weakly in }~L^{p^{\prime}}(\mathbb{R}^{2N})$$ and $$V^{\frac{1}{p^{\prime}}}|u_n|^{p-2}u_n\rightharpoonup V^{\frac{1}{p^{\prime}}}|w_{\lambda_\ast}|^{p-2}w_{\lambda_\ast}~\text{weakly in }~L^{p^{\prime}}(\mathbb{R}^{N})~\text{as}~n\to\infty.$$ Due to the weak convergence in Lebesgue spaces, for any $v\in W^{s,p}_{V}(\mathbb{R}^N)$, we obtain $$\label{eq5.10} \lim_{n\to\infty}\langle{B(u_n),v}\rangle=\langle{B(w_{\lambda_\ast}),v}\rangle.$$ Since $u_n\to w_{\lambda_\ast}$ in $L^\gamma(\mathbb{R}^N)$ as $n\to\infty$, by the Lebesgue dominated convergence theorem we have $|u_n|^{\gamma-2}u_n\to|w_{\lambda_\ast}|^{\gamma-2}w_{\lambda_\ast}$ in $L^{\frac{\gamma}{\gamma-1}}(\mathbb{R}^{N})$ as $n\to\infty$. 
Consequently, by Hölder's inequality, we have $$\bigg|\int_{\mathbb{R}^{N}}\beta(x)\big(|u_n|^{\gamma-2}u_n-|w_{\lambda_\ast}|^{\gamma-2}w_{\lambda_\ast}\big)v~\mathrm{d}x\bigg|\leq \|\beta\|_{L^\infty(\mathbb{R}^{N})}\bigg\||u_n|^{\gamma-2}u_n-|w_{\lambda_\ast}|^{\gamma-2}w_{\lambda_\ast}\bigg\|_{L^{\frac{\gamma}{\gamma-1}}(\mathbb{R}^{N})}\|v\|_{L^\gamma(\mathbb{R}^{N})}$$ $$\to 0~\text{as}~n\to\infty.$$ It follows that $$\label{eq5.11} \lim_{n\to\infty}\int_{\mathbb{R}^{N}}\beta(x)|u_n|^{\gamma-2}u_n v~\mathrm{d}x=\int_{\mathbb{R}^{N}}\beta(x)|w_{\lambda_\ast}|^{\gamma-2}w_{\lambda_\ast}v~\mathrm{d}x.$$ Using the fact that $u_n$ is a solution of the problem $(\mathcal{E}_{\lambda_n})$ for each $n\in \mathbb{N}$, we have $$\label{eq5.12} \langle{L(u_n),v}\rangle=\lambda_n \int_{\mathbb{R}^{N}}\alpha(x)u^{-\delta}_{n}v~\mathrm{d}x+\int_{\mathbb{R}^{N}}\beta(x)u_n^{\gamma-1} v~\mathrm{d}x.$$ Next, taking the limit inferior on both sides of [\[eq5.12\]](#eq5.12){reference-type="eqref" reference="eq5.12"} as $n\to\infty$ with $v\in\mathcal{K}$ and using [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"}, [\[eq5.10\]](#eq5.10){reference-type="eqref" reference="eq5.10"} and [\[eq5.11\]](#eq5.11){reference-type="eqref" reference="eq5.11"}, we get $$\label{eq5.13} \infty >\liminf_{n\to\infty}~ \langle{L(u_n),v}\rangle-\int_{\mathbb{R}^{N}}\beta(x)w_{\lambda_\ast}^{\gamma-1} v~\mathrm{d}x\geq\lambda_\ast\liminf_{n\to\infty} \int_{\mathbb{R}^{N}}\alpha(x)u^{-\delta}_{n}v~\mathrm{d}x.$$ Furthermore, by Fatou's lemma, we have $$\label{eq5.14} \liminf_{n\to\infty} \int_{\mathbb{R}^{N}}\alpha(x)u^{-\delta}_{n}v~\mathrm{d}x\geq \int_{\mathbb{R}^{N}}H(x)~\mathrm{d}x,$$ where $H(x)$ is defined as in [\[eq5.999\]](#eq5.999){reference-type="eqref" reference="eq5.999"}, with $u$ replaced by $w_{\lambda_\ast}$. 
From [\[eq5.13\]](#eq5.13){reference-type="eqref" reference="eq5.13"} and [\[eq5.14\]](#eq5.14){reference-type="eqref" reference="eq5.14"}, we obtain $0\leq \displaystyle\int_{\mathbb{R}^{N}}H(x)~\mathrm{d}x<\infty$ and hence $H(x)=\alpha(x)w^{-\delta}_{\lambda_\ast}v$, i.e., $w_{\lambda_\ast}>0$ a.e. in $\mathbb{R}^{N}$ and $\alpha(x)w^{-\delta}_{\lambda_\ast}v\in L^1(\mathbb{R}^{N})$. Consequently, from [\[eq4.2\]](#eq4.2){reference-type="eqref" reference="eq4.2"} and [\[eq5.11\]](#eq5.11){reference-type="eqref" reference="eq5.11"}--[\[eq5.14\]](#eq5.14){reference-type="eqref" reference="eq5.14"}, we have $$\limsup_{n\to\infty} \langle{L(u_n)-L(w_{\lambda_\ast}),u_n-w_{\lambda_\ast}}\rangle=\limsup_{n\to\infty} \langle{L(u_n),u_n-w_{\lambda_\ast}}\rangle\leq\limsup_{n\to\infty} \langle{L(u_n),u_n}\rangle-\liminf_{n\to\infty} \langle{L(u_n),w_{\lambda_\ast}}\rangle$$ $$\hspace{-5cm}=\limsup_{n\to\infty}M(\|u_n\|^p)\langle{B(u_n),u_n}\rangle-\liminf_{n\to\infty}M(\|u_n\|^p)\langle{B(u_n),w_{\lambda_\ast}}\rangle$$ $$\hspace{1cm}=\limsup_{n\to\infty}\bigg[\lambda_n \int_{\mathbb{R}^{N}}\alpha(x)u^{1-\delta}_{n}~\mathrm{d}x+\int_{\mathbb{R}^{N}}\beta(x)u_n^{\gamma} ~\mathrm{d}x\bigg]-\liminf_{n\to\infty}\bigg[\lambda_n \int_{\mathbb{R}^{N}}\alpha(x)u^{-\delta}_{n}w_{\lambda_\ast}~\mathrm{d}x+\int_{\mathbb{R}^{N}}\beta(x)u_n^{\gamma-1} w_{\lambda_\ast}~\mathrm{d}x\bigg]$$ $$\hspace{-1.2cm}\leq \bigg(\lambda_\ast \int_{\mathbb{R}^{N}}\alpha(x)w^{1-\delta}_{\lambda_\ast}~\mathrm{d}x+\int_{\mathbb{R}^{N}}\beta(x)w_{\lambda_\ast}^{\gamma} ~\mathrm{d}x\bigg)-\lambda_\ast \int_{\mathbb{R}^{N}}\alpha(x)w^{1-\delta}_{\lambda_\ast}~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)w_{\lambda_\ast}^{\gamma} ~\mathrm{d}x=0.$$ Thus, by Lemma [Lemma 7](#lemma2.4){reference-type="ref" reference="lemma2.4"}, we get $u_n\to w_{\lambda_\ast}$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and hence $M\big(\|u_n\|^p\big)\to M\big(\|w_{\lambda_\ast}\|^p\big)$ in $\mathbb{R}$ as $n\to\infty$. 
Now, taking the limit inferior in [\[eq5.12\]](#eq5.12){reference-type="eqref" reference="eq5.12"} and using Fatou's lemma, we obtain $$\label{eq5.15} \langle{L(w_{\lambda_\ast}),v}\rangle-\int_{\mathbb{R}^{N}}\beta(x)w_{\lambda_\ast}^{\gamma-1} v~\mathrm{d}x\geq\lambda_\ast \int_{\mathbb{R}^{N}}\alpha(x)w^{-\delta}_{\lambda_\ast}v~\mathrm{d}x,~\forall~v\in\mathcal{K}.$$ In addition, we have $$\mathcal{F}'_{\lambda_\ast,w_{\lambda_\ast}}(1)=0~\text{and}~\mathcal{F}''_{\lambda_\ast,w_{\lambda_\ast}}(1)\leq 0~\text{and hence}~\int_{\mathbb{R}^{N}}\beta(x)|w_{\lambda_\ast}|^{\gamma}~\mathrm{d}x >0.$$ It follows that $w_{\lambda_\ast}\in \mathcal{M}^-_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}$ and therefore, we obtain $$M\big(\|w_{\lambda_\ast}\|^p\big)\|w_{\lambda_\ast}\|^p-\lambda_\ast \int_{\mathbb{R}^{N}} \alpha(x)w_{\lambda_\ast}^{1-\delta}~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)w_{\lambda_\ast}^{\gamma}~\mathrm{d}x =0.$$ Let $v\in W^{s,p}_{V}(\mathbb{R}^N)$ and define $\phi_\epsilon= w_{\lambda_\ast}+\epsilon v$; then for each given $\epsilon>0$ we have $\phi^+_\epsilon\in\mathcal{K}$. Now, substituting $\phi^+_\epsilon$ for $v$ in [\[eq5.15\]](#eq5.15){reference-type="eqref" reference="eq5.15"} and adapting arguments similar to those of Theorem [Theorem 20](#thm4.6){reference-type="ref" reference="thm4.6"}, we can show that $w_{\lambda_\ast}$ is a solution of $(\mathcal{E}_{\lambda_\ast})$. By Lemma [Lemma 21](#lem5.1){reference-type="ref" reference="lem5.1"}, we get $w_{\lambda_\ast}\notin \mathcal{M}^0_{\lambda_\ast}$ and hence $w_{\lambda_\ast}\in \mathcal{M}^-_{\lambda_\ast}$. 
Moreover, we deduce from the strong convergence and Proposition [Proposition 25](#prop5.5){reference-type="ref" reference="prop5.5"} that $$\mathcal{I}^-_{\lambda_\ast}(w_{\lambda_\ast})=\Psi_{\lambda_\ast}(w_{\lambda_\ast})=\lim_{n\to\infty} \Psi_{\lambda_n}(u_n)=\lim_{n\to\infty} \mathcal{I}^-_{\lambda_n}(u_n)=\Upsilon^-_{\lambda_\ast}.$$ Similarly, we can prove that $u_{\lambda_\ast}\in\mathcal{M}^+_{\lambda_\ast}$ is a solution of $(\mathcal{E}_{\lambda_\ast})$ and $\mathcal{I}^+_{\lambda_\ast}(u_{\lambda_\ast})=\Upsilon^+_{\lambda_\ast}$. This completes the proof. ◻ For $0<\lambda\leq\lambda_\ast$, define the sets $$\mathcal{S}^+_{\lambda}=\big\{ u\in \mathcal{M}^+_{\lambda}:~\mathcal{I}^+_{\lambda}(u)=\Upsilon^+_{\lambda}\big\}~\text{and}~ \mathcal{S}^-_{\lambda}=\big\{ w\in \mathcal{M}^-_{\lambda}:~\mathcal{I}^-_{\lambda}(w)=\Upsilon^-_{\lambda}\big\}.$$ The following corollary follows directly from Lemma [Lemma 15](#lem4.1){reference-type="ref" reference="lem4.1"}, Lemma [Lemma 16](#lem4.2){reference-type="ref" reference="lem4.2"} and Theorem [Theorem 26](#thm5.6){reference-type="ref" reference="thm5.6"}. **Corollary 27**. *The following holds for $0<\lambda\leq\lambda_\ast$:* - *$\mathcal{S}^+_{\lambda}$ and $\mathcal{S}^-_{\lambda}$ are non-empty and compact; hence there exist $c_\lambda,C_\lambda>0$ such that $c_\lambda\leq \|u\|,\|w\|\leq C_\lambda$ for all $u\in \mathcal{S}^+_{\lambda}$ and $w\in \mathcal{S}^-_{\lambda}$.* - *If $u\in \mathcal{S}^+_{\lambda}\cup \mathcal{S}^-_{\lambda}$, then $u$ is a solution of [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}.* # Existence of solutions for $\lambda>\lambda_\ast$ {#sec6} In this section, we study the existence of solutions for the problem [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} when $\lambda$ crosses the extremal parameter $\lambda_\ast$. 
The aim is to study minimization problems on appropriate subsets of $\mathcal{M}^+_{\lambda_\ast}$ and $\mathcal{M}^-_{\lambda_\ast}$ that keep a suitable distance from the set $\mathcal{M}^0_{\lambda_\ast}$; the minimizers obtained on these sets can then be projected onto $\hat{\mathcal{M}}_{\lambda}$ and $\hat{\mathcal{M}}_{\lambda}\cup \hat{\mathcal{M}}^+_{\lambda}$ for $\lambda\in (\lambda_\ast,\lambda_\ast+\epsilon)$, where $\epsilon>0$ is small enough.\ Let $\lambda>0$; then, for all $w\in\mathcal{M}^{\mp}_\lambda\cup\mathcal{M}^0_\lambda$, we define $$\mathcal{J}^\mp_\lambda(w)=a(p(m+1)-1)\|w\|^{p(m+1)}+\lambda\delta \int_{\mathbb{R}^{N}}\alpha(x)|w|^{1-\delta}~\mathrm{d}x-(\gamma-1)\int_{\mathbb{R}^{N}}\beta (x)|w|^\gamma~\mathrm{d}x.$$ **Lemma 28**. *Suppose that $0<\hat{C}^+_1<\hat{C}^-_2$ and $\lambda_n\downarrow \lambda_\ast$ as $n\to\infty$. Assume also that $u_n\in \mathcal{M}^{\mp}_{\lambda_\ast}$ satisfies $\hat{C}^+_1\leq\|u_n\|\leq \hat{C}^-_2$ for each $n\in\mathbb{N}$ and $\mathcal{J}^{\mp}_{\lambda_n}(t^{\mp}_{\lambda_n}(u_n)u_n)\to 0$ as $n\to\infty$. Then $\mathrm{dist} (u_n,\mathcal{M}^0_{\lambda_\ast})\to 0$ as $n\to\infty$.* *Proof.* We give the proof for the case $u_n\in \mathcal{M}^-_{\lambda_\ast}$. Since $u_n\in \mathcal{M}^-_{\lambda_\ast}$, it follows by Lemma [Lemma 9](#lem3.2){reference-type="ref" reference="lem3.2"}-$(ii)$ (see [\[eq3.14\]](#eq3.14){reference-type="eqref" reference="eq3.14"}) that there exists $c_1>0$ such that $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u_n|^\gamma~\mathrm{d}x>c_1$. Further, since $t^-_{\lambda_n}(u_n)u_n\in\mathcal{M}^-_{\lambda_n}$, by Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"} there exists $t^+_{\lambda_n}(u_n)>0$ satisfying $t^+_{\lambda_n}(u_n)<t^-_{\lambda_n}(u_n)$ such that $t^+_{\lambda_n}(u_n)u_n\in\mathcal{M}^+_{\lambda_n}$. 
Setting $t^+_n=t^+_{\lambda_n}(u_n)$ and $t^-_n=t^-_{\lambda_n}(u_n)$, we have $$\label{eq6.1} \mathcal{F}'_{\lambda_n,u_n}(t^-_n)= \mathcal{F}'_{\lambda_n,u_n}(t^+_n)=0~\text{and}~ \mathcal{J}^-_{\lambda_n}(t^-_n u_n)=o(1)~\text{as}~n\to\infty.$$ Solving these equations, we obtain $$\label{eq6.2} \begin{split} a t^-_n\|u_n\|^{p(m+1)}\Bigg[\frac{\big(p(m+1)+\delta-1\big)\big(\frac{t^+_n}{t^-_n}\big)^{\gamma+\delta-1}-(\gamma+\delta-1)\big(\frac{t^+_n}{t^-_n}\big)^{p(m+1)+\delta-1}+\big(\gamma-p(m+1)\big)}{\big(\frac{t^+_n}{t^-_n}\big)^{\gamma+\delta-1}-1}\Bigg]=o(1) \end{split}$$ as $n\to\infty$. Using $\hat{C}^+_1\leq\|u_n\|\leq \hat{C}^-_2$ for each $n\in\mathbb{N}$ and Lemma [Lemma 9](#lem3.2){reference-type="ref" reference="lem3.2"}, we deduce that $t^-_n$ and $t^+_n$ are bounded. Hence, up to subsequences, $t^-_n\to\sigma$ and $t^+_n\to\eta$ as $n\to\infty$. Now from [\[eq6.2\]](#eq6.2){reference-type="eqref" reference="eq6.2"}, we obtain $$\big(p(m+1)+\delta-1\big)\bigg(\frac{\eta}{\sigma}\bigg)^{\gamma+\delta-1}-(\gamma+\delta-1)\bigg(\frac{\eta}{\sigma}\bigg)^{p(m+1)+\delta-1}+\big(\gamma-p(m+1)\big)=0,$$ which admits the unique positive root $\eta/\sigma=1$, that is, $\eta=\sigma$, and hence $t^-_n\to\eta$ and $t^+_n\to\eta$ as $n\to\infty$. Also, since $t^+_n u_n\in\mathcal{M}^+_{\lambda_n}$, there exists $c_2>0$ such that $\displaystyle\int_{\mathbb{R}^{N}}\alpha(x)|u_n|^{1-\delta}~\mathrm{d}x>c_2$. 
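The uniqueness of the root $\eta=\sigma$ invoked above can be checked directly; the following computation is a sketch, writing $q=p(m+1)$ and assuming $0<\delta<1$ and $\gamma>q$, consistent with the sign of the constant term above. For $t>0$ set $$h(t)=(q+\delta-1)\,t^{\gamma+\delta-1}-(\gamma+\delta-1)\,t^{q+\delta-1}+(\gamma-q).$$ Then $$h(1)=(q+\delta-1)-(\gamma+\delta-1)+(\gamma-q)=0 \quad\text{and}\quad h'(t)=(\gamma+\delta-1)(q+\delta-1)\big(t^{\gamma+\delta-2}-t^{q+\delta-2}\big),$$ so that $h'(t)<0$ on $(0,1)$ and $h'(t)>0$ on $(1,\infty)$. Hence $t=1$ is the strict global minimum of $h$ and its only positive zero, which forces $\eta/\sigma=1$. 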
From [\[eq6.1\]](#eq6.1){reference-type="eqref" reference="eq6.1"}, it follows that $$\mathcal{F}'_{\lambda_\ast,u_n}(\eta)=o(1)~\text{and}~\mathcal{J}^-_{\lambda_\ast}(\eta u_n)=o(1)~\text{as}~n\to\infty.$$ This yields $$\frac{a\big(\gamma-p(m+1)\big)\|\eta u_n\|^{p (m+1)}}{(\delta+\gamma-1)\displaystyle\int_{\mathbb{R}^{N}}\alpha(x)|\eta u_n|^{1-\delta}~\mathrm{d}x}=\lambda_\ast+o(1) ~\text{as}~n\to\infty$$ and $$\frac{a\big(p(m+1)+\delta-1\big)\|\eta u_n\|^{p (m+1)}}{(\delta+\gamma-1)\displaystyle\int_{\mathbb{R}^{N}}\beta(x)|\eta u_n|^{\gamma}~\mathrm{d}x}=1+ o(1)~\text{as}~n\to\infty.$$ Hence it follows from [\[eq3.18\]](#eq3.18){reference-type="eqref" reference="eq3.18"} and Lemma [Lemma 12](#lem3.5){reference-type="ref" reference="lem3.5"} that $$\lambda(u_n)=\lambda(\eta u_n)=(\lambda_\ast+o(1))(1+ o(1))^{\frac{p(m+1)+\delta-1}{\gamma-p(m+1)}}\to \lambda_\ast~\text{as}~n\to\infty.$$ This shows that $\{u_n\}_{n\in\mathbb{N}}$ is a bounded minimizing sequence for $\lambda_\ast$. Therefore, up to a subsequence, $u_n\rightharpoonup u$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Repeating a procedure similar to that in Corollary [Corollary 22](#cor5.2){reference-type="ref" reference="cor5.2"}, we get $u_n\to u$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Now, from the continuity of $\lambda(u)$, we infer that $\lambda(u)=\lambda_\ast$ and that $\displaystyle\int_{\mathbb{R}^{N}}\beta (x)|u|^\gamma~\mathrm{d}x>0$. It follows from strong convergence that $\mathcal{F}'_{\lambda_\ast,u}(1)=\displaystyle\lim_{n\to\infty} \mathcal{F}'_{\lambda_n,u_n}(1)=0,~\text{that is},~u\in\mathcal{M}_{\lambda_\ast}$, and hence by Corollary [Corollary 13](#cor3.6){reference-type="ref" reference="cor3.6"}-$(c)$, we deduce that $u\in\mathcal{M}^0_{\lambda_\ast}$. 
This implies that $$\lim_{n\to\infty} \mathrm{dist}(u_n,\mathcal{M}^0_{\lambda_\ast})=\mathrm{dist}(u,\mathcal{M}^0_{\lambda_\ast})= 0.$$ By arguing similarly as above, we can prove the case $u_n\in \mathcal{M}^+_{\lambda_\ast}$. This completes the proof. ◻ Let $\hat{C}^+_1,\hat{C}^-_2,d>0$ be given and define the sets $$\mathcal{M}^-_{\lambda_\ast,\mathrm{d}}=\big\{w\in \mathcal{M}^-_{\lambda_\ast}:~\mathrm{dist}(w,\mathcal{M}^0_{\lambda_\ast})>d,~\|w\|\leq \hat{C}^-_2\big\}$$ and $$\mathcal{M}^+_{\lambda_\ast,\mathrm{d}}=\big\{u\in \mathcal{M}^+_{\lambda_\ast}:~\mathrm{dist}(u,\mathcal{M}^0_{\lambda_\ast})>d,~\hat{C}^+_1\leq\|u\|\big\}.$$ Lemma [Lemma 28](#lem6.1){reference-type="ref" reference="lem6.1"} immediately yields the following corollary. **Corollary 29**. *Let $\hat{C}^+_1, \hat{C}^-_2,d>0$ be given as above, then there exists $\epsilon>0$ such that* - *there exists $\eta<0$ such that $\mathcal{J}^-_{\lambda}(t^-_{\lambda}(w)w)< \eta$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$ and $w\in\mathcal{M}^-_{\lambda_\ast,\mathrm{d}}$. Consequently, $t^-_{\lambda}(w)w\in\mathcal{M}^-_\lambda$ and $w\in\hat{\mathcal{M}}_\lambda$, for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$.* - *there exists $\eta>0$ such that $\mathcal{J}^+_{\lambda}(t^+_{\lambda}(u)u)> \eta$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$ and $u\in\mathcal{M}^+_{\lambda_\ast,\mathrm{d}}$. Consequently, $t^+_{\lambda}(u)u\in\mathcal{M}^+_\lambda$ and $u\in\hat{\mathcal{M}}_\lambda\cup\hat{\mathcal{M}}^+_\lambda$, for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$.* **Lemma 30**. *There holds $\mathrm{dist}\big(\mathcal{S}^\pm_{\lambda_\ast},\mathcal{M}^0_{\lambda_\ast}\big)>0.$* *Proof.* First, we show that $\mathrm{dist}\big(\mathcal{S}^-_{\lambda_\ast},\mathcal{M}^0_{\lambda_\ast}\big)>0$. Indeed, if not, then $\mathrm{dist}\big(\mathcal{S}^-_{\lambda_\ast},\mathcal{M}^0_{\lambda_\ast}\big)=0$. 
So, there exist two sequences $\{w_n\}_{n\in\mathbb{N}}\subset \mathcal{S}^-_{\lambda_\ast}$ and $\{\psi_n\}_{n\in\mathbb{N}}\subset\mathcal{M}^0_{\lambda_\ast}$ such that $\|w_n-\psi_n\|\to 0$ as $n\to\infty$. Since $w_n\in\mathcal{S}^-_{\lambda_\ast}$, by Corollary [Corollary 27](#cor5.7){reference-type="ref" reference="cor5.7"} we conclude that $w_n$ is a solution of $(\mathcal{E}_{\lambda_\ast})$. Therefore, we have $$\langle{L(w_n),v}\rangle-\lambda_\ast \int_{\mathbb{R}^{N}}\alpha(x)w^{-\delta}_n v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)w_n^{\gamma-1} v~\mathrm{d}x=0,~\text{for all}~v\in W^{s,p}_{V}(\mathbb{R}^N).$$ From Corollary [Corollary 22](#cor5.2){reference-type="ref" reference="cor5.2"}, we deduce that there exists $\psi\in\mathcal{M}^0_{\lambda_\ast}$ such that, up to a subsequence, $\psi_n\to\psi$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and hence $w_n\to\psi$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Now, following the proof of Theorem [Theorem 26](#thm5.6){reference-type="ref" reference="thm5.6"}, we obtain $$\label{eq6.5} \langle{L(\psi),v}\rangle-\lambda_\ast \int_{\mathbb{R}^{N}}\alpha(x)\psi^{-\delta} v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)\psi^{\gamma-1} v~\mathrm{d}x\geq 0,~\text{for all}~v\in\mathcal{K}.$$ Let $v\in W^{s,p}_{V}(\mathbb{R}^N)$ and define $\phi_\epsilon= \psi+\epsilon v$; then, for each $\epsilon>0$, $\phi^+_\epsilon\in\mathcal{K}$. Now, taking $v=\phi^+_\epsilon$ in [\[eq6.5\]](#eq6.5){reference-type="eqref" reference="eq6.5"} and following Theorem [Theorem 20](#thm4.6){reference-type="ref" reference="thm4.6"}, we can show that $\psi\in\mathcal{M}^0_{\lambda_\ast}$ is a solution to $(\mathcal{E}_{\lambda_\ast})$, which contradicts Lemma [Lemma 21](#lem5.1){reference-type="ref" reference="lem5.1"}. 
This shows that $\mathrm{dist}\big(\mathcal{S}^-_{\lambda_\ast},\mathcal{M}^0_{\lambda_\ast}\big)>0$.\ Similarly, we can prove $\mathrm{dist}\big(\mathcal{S}^+_{\lambda_\ast},\mathcal{M}^0_{\lambda_\ast}\big)>0$. This completes the proof. ◻ Define $\mathrm{d}_{\lambda_\ast,\pm}=\mathrm{dist}\big(\mathcal{S}^\pm_{\lambda_\ast},\mathcal{M}^0_{\lambda_\ast}\big)$. By Corollary [Corollary 27](#cor5.7){reference-type="ref" reference="cor5.7"}, there exist constants $\hat{C}^+_{1,\lambda_\ast},\hat{C}^-_{2,\lambda_\ast}>0$ such that $\|w\|\leq\hat{C}^-_{2,\lambda_\ast},~\forall~w\in \mathcal{S}^-_{\lambda_\ast}$ and $\hat{C}^+_{1,\lambda_\ast}\leq \|u\|,~\forall~u\in \mathcal{S}^+_{\lambda_\ast}$. Further, let $\mathrm{d}_{\pm}\in(0,\mathrm{d}_{\lambda_\ast,\pm})$, $\hat{C}^+_{1}<\hat{C}^+_{1,\lambda_\ast}$, $\hat{C}^-_{2,\lambda_\ast}<\hat{C}^-_{2}$ and $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$, where $\epsilon>0$ is as in Corollary [Corollary 29](#cor6.2){reference-type="ref" reference="cor6.2"}. Now consider the following constrained minimization problems: $$\label{eq6.6} \Upsilon^-_{\lambda,d_{-}}=\inf\big\{\mathcal{I}^-_{\lambda}(w):w\in\mathcal{M}^-_{\lambda_\ast,d_{-}}\big\}~\text{and}~\Upsilon^+_{\lambda,d_{+}}=\inf\big\{\mathcal{I}^+_{\lambda}(u):u\in\mathcal{M}^+_{\lambda_\ast,d_{+}}\big\}.$$ **Remark 31**. *It is also true that $\mathcal{S}^\pm_{\lambda_\ast}\subset \mathcal{M}^\pm_{\lambda_\ast,d_{\pm}}.$* **Proposition 32**. *The maps $(\lambda_\ast,\lambda_\ast+\epsilon)\ni\lambda\mapsto \Upsilon^\pm_{\lambda,d_{\pm}}$ are decreasing and there holds $$\lim_{\lambda\downarrow\lambda_\ast} \Upsilon^\pm_{\lambda,d_{\pm}}=\Upsilon^\pm_{\lambda_\ast}.$$* *Proof.* Let $w\in \mathcal{M}^-_{\lambda_\ast,d_{-}}$; then by Lemma [Lemma 14](#lem3.7){reference-type="ref" reference="lem3.7"}, we obtain $\Upsilon^-_{\lambda,d_{-}}\leq \mathcal{I}^-_{\lambda}(w)<\mathcal{I}^-_{\lambda'}(w)$ for $\lambda_\ast<\lambda'<\lambda<\lambda_\ast+\epsilon$. 
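The strict inequality $\mathcal{I}^-_{\lambda}(w)<\mathcal{I}^-_{\lambda'}(w)$ can be sketched explicitly. Assuming, as is standard for this type of energy, that $$\Psi_\mu(u)=\frac{a}{p(m+1)}\|u\|^{p(m+1)}-\frac{\mu}{1-\delta}\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x-\frac{1}{\gamma}\int_{\mathbb{R}^{N}}\beta(x)|u|^{\gamma}~\mathrm{d}x$$ and that $t^-_\mu(w)$ maximizes the fibering map $t\mapsto\Psi_\mu(tw)$ over the relevant range of $t$, one has, for $\lambda_\ast<\lambda'<\lambda$, $$\mathcal{I}^-_{\lambda}(w)=\Psi_{\lambda}\big(t^-_{\lambda}(w)w\big)<\Psi_{\lambda'}\big(t^-_{\lambda}(w)w\big)\leq \Psi_{\lambda'}\big(t^-_{\lambda'}(w)w\big)=\mathcal{I}^-_{\lambda'}(w),$$ where the first inequality is strict because $\mu\mapsto\Psi_\mu(u)$ is strictly decreasing whenever $\displaystyle\int_{\mathbb{R}^{N}}\alpha(x)|u|^{1-\delta}~\mathrm{d}x>0$. 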
It follows that $\Upsilon^-_{\lambda,d_{-}}\leq\Upsilon^-_{\lambda',d_{-}}$ and hence the map $\lambda\mapsto \Upsilon^-_{\lambda,d_{-}}$ is decreasing for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$. Further, if $w\in \mathcal{S}^-_{\lambda_\ast}$, then it is easy to see that $\Upsilon^-_{\lambda,d_{-}}<\Upsilon^-_{\lambda_\ast}$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$. Now we claim that $\displaystyle\lim_{\lambda\downarrow\lambda_\ast} \Upsilon^-_{\lambda,d_{-}}=\Upsilon^-_{\lambda_\ast}.$ Indeed, if not, then there exists $\lambda_n\downarrow\lambda_\ast$ as $n\to\infty$ such that $\displaystyle\lim_{n\to\infty} \Upsilon^-_{\lambda_n,d_{-}}=\Upsilon^-~(\text{say})<\Upsilon^-_{\lambda_\ast}$. By using [\[eq6.6\]](#eq6.6){reference-type="eqref" reference="eq6.6"}, we get a sequence $\{w_n\}_{n\in\mathbb{N}}\subset\mathcal{M}^-_{\lambda_\ast,d_{-}}$ such that $$\label{eq6.7} |\mathcal{I}^-_{\lambda_n}(w_n)-\Upsilon^-_{\lambda_n,d_{-}}|\to 0~\text{as}~n\to\infty.$$ Next, from the definition of $\mathcal{M}^{-}_{\lambda_\ast,d_{-}}$ and Corollary [Corollary 29](#cor6.2){reference-type="ref" reference="cor6.2"}, we deduce that $\{w_n\}_{n\in\mathbb{N}}$ is bounded and $t^-_{\lambda_n}(w_n)w_n\in\mathcal{M}^-_{\lambda_n}$ for $n$ large enough. Hence, up to a subsequence, $w_n\rightharpoonup w$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. By Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"}, $w_n\to w$ in $L^\tau(\mathbb{R}^N)$ and $L^\gamma(\mathbb{R}^N)$ respectively as $n\to\infty$, $w_n\to w$ a.e. in $\mathbb{R}^N$ and $w\geq 0$. Following the proof of Lemma [Lemma 16](#lem4.2){reference-type="ref" reference="lem4.2"}, we get $w\neq 0$ (since $w_n\in \mathcal{M}^-_{\lambda_\ast}$) and $\displaystyle\int_{\mathbb{R}^N}\beta(x)|w|^\gamma~\mathrm{d}x>0$. Further, we claim that $w_n\to w$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. 
Indeed, if not, then we have $$\liminf_{n\to\infty}\mathcal{F}'_{\lambda_n,w_n}(t_{\lambda_\ast}(w))> \mathcal{F}'_{\lambda_\ast,w}(t_{\lambda_\ast}(w))=0.$$ It follows that $\mathcal{F}'_{\lambda_n,w_n}(t_{\lambda_\ast}(w))>0$ for $n$ large enough. Since $t^-_{\lambda_n}(w_n)w_n\in\mathcal{M}^-_{\lambda_n}$ for $n$ large enough, by Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"} the map $\mathcal{F}_{\lambda_n,w_n}$ is strictly increasing in $(t^+_{\lambda_n}(w_n),t^-_{\lambda_n}(w_n))$ and hence $t^+_{\lambda_n}(w_n)<t_{\lambda_\ast}(w)<t^-_{\lambda_n}(w_n)$ for $n$ large enough. Therefore, we have $$\Psi_{\lambda_\ast}(t_{\lambda_\ast}(w)w)<\liminf_{n\to\infty}\Psi_{\lambda_n}(t_{\lambda_\ast}(w)w_n)=\liminf_{n\to\infty}\mathcal{F}_{\lambda_n,w_n}(t_{\lambda_\ast}(w))< \liminf_{n\to\infty}\Psi_{\lambda_n}(t^-_{\lambda_n}(w_n)w_n)$$ $$=\liminf_{n\to\infty}\mathcal{I}^-_{\lambda_n}(w_n)=\liminf_{n\to\infty}\Upsilon^-_{\lambda_n,d_{-}}=\Upsilon^-<\Upsilon^-_{\lambda_\ast},$$ which is a contradiction because, if we take $\lambda'_n\uparrow\lambda_\ast$ as $n\to\infty$, then by Corollary [Corollary 23](#cor5.3){reference-type="ref" reference="cor5.3"} and Proposition [Proposition 25](#prop5.5){reference-type="ref" reference="prop5.5"}, we get $$\Upsilon^-_{\lambda_\ast}=\lim_{\lambda'_n\uparrow{\lambda_\ast}}\Upsilon^-_{\lambda'_n}\leq \lim_{\lambda'_n\uparrow{\lambda_\ast}}\Psi_{\lambda'_n}(t^-_{\lambda'_n}(w)w)=\Psi_{\lambda_\ast}(t_{\lambda_\ast}(w)w).$$ This shows that $w_n\to w$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. 
Next, by continuity of the function $(\lambda_\ast,\lambda_\ast+\epsilon)\ni\lambda\mapsto t^-_\lambda(w)$ (see Lemma [Lemma 14](#lem3.7){reference-type="ref" reference="lem3.7"}-$(i)$), we get $$\label{eq6.9} |\mathcal{I}^-_{\lambda_\ast}(w_n)-\mathcal{I}^-_{\lambda_n}(w_n)|=|\Psi_{\lambda_\ast}(t^-_{\lambda_\ast}(w_n)w_n)-\Psi_{\lambda_n}(t^-_{\lambda_n}(w_n)w_n)|\to 0~\text{as}~n\to\infty.$$ From [\[eq6.7\]](#eq6.7){reference-type="eqref" reference="eq6.7"} and [\[eq6.9\]](#eq6.9){reference-type="eqref" reference="eq6.9"}, we obtain $$|\Upsilon^-_{\lambda_\ast}-\Upsilon^-_{\lambda_n,d_{-}}|\leq |\mathcal{I}^-_{\lambda_\ast}(w_n)-\Upsilon^-_{\lambda_n,d_{-}}|\leq |\mathcal{I}^-_{\lambda_\ast}(w_n)-\mathcal{I}^-_{\lambda_n}(w_n)|+|\mathcal{I}^-_{\lambda_n}(w_n)-\Upsilon^-_{\lambda_n,d_{-}}|\to 0~\text{as}~n\to\infty.$$ It follows that $\Upsilon^-_{\lambda_n,d_{-}}\to \Upsilon^-_{\lambda_\ast}~\text{as}~n\to\infty$, which is again a contradiction. Thus we must have $\displaystyle\lim_{\lambda\downarrow\lambda_\ast} \Upsilon^-_{\lambda,d_{-}}=\Upsilon^-_{\lambda_\ast}$.\ Similarly, we can prove that the map $\lambda\mapsto \Upsilon^+_{\lambda,d_{+}}$ is decreasing for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$ and $\displaystyle\lim_{\lambda\downarrow\lambda_\ast} \Upsilon^+_{\lambda,d_{+}}=\Upsilon^+_{\lambda_\ast}$. This completes the proof. ◻ **Lemma 33**. 
*Let us choose $\mathrm{d}_{-}\in(0,\mathrm{d}_{\lambda_\ast,-})$ and $\hat{C}^-_{2,\lambda_\ast}<\hat{C}^-_{2}$. Then there exists $\varepsilon^->0$ such that, for all $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$, $\Upsilon^-_{\lambda,d_{-}}$ has a minimizer $\overline{w}_\lambda\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$.* *Proof.* Let $\varepsilon^->0$ and, for each $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$, let $\{\overline{w}_n(\lambda)\}_{n\in\mathbb{N}}\subset\mathcal{M}^-_{\lambda_\ast,d_{-}}$ be a minimizing sequence for $\Upsilon^-_{\lambda,d_{-}}$, i.e., $$\label{eq6.10} \lim_{n\to\infty} \mathcal{I}^-_\lambda(\overline{w}_n(\lambda))=\Upsilon^-_{\lambda,d_{-}}.$$ It follows from the definition of $\mathcal{M}^-_{\lambda_\ast,d_{-}}$ that $\{\overline{w}_n(\lambda)\}_{n\in\mathbb{N}}$ is bounded and hence, up to a subsequence, $\overline{w}_n(\lambda)\rightharpoonup\overline{w}(\lambda)$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Since $\overline{w}_n(\lambda)\in\mathcal{M}^-_{\lambda_\ast}$, arguing as in Lemma [Lemma 16](#lem4.2){reference-type="ref" reference="lem4.2"}, we can obtain $\overline{w}(\lambda)\neq 0$. Next, we claim that $\overline{w}(\lambda)\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$. Indeed, if not, then there exists a sequence $\lambda_m\downarrow\lambda_\ast$ as $m\to\infty$ with $\overline{w}(\lambda_m)\notin\mathcal{M}^-_{\lambda_\ast,d_{-}}$; note that $\lambda_m\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$ for $m$ large enough. 
Therefore, from [\[eq6.10\]](#eq6.10){reference-type="eqref" reference="eq6.10"}, we have $$\label{eq6.11} |\Upsilon^-_{\lambda_m,d_{-}}-\mathcal{I}^-_{\lambda_m}(\overline{w}_n(\lambda_m))|\to 0 ~\text{as}~n,m\to\infty.$$ Now from Proposition [Proposition 32](#prop6.5){reference-type="ref" reference="prop6.5"} and [\[eq6.11\]](#eq6.11){reference-type="eqref" reference="eq6.11"}, we obtain $$\label{eq6.12} |\Upsilon^-_{\lambda_\ast}-\mathcal{I}^-_{\lambda_m}(\overline{w}_n(\lambda_m))|\to 0 ~\text{as}~n,m\to\infty.$$ Define $$w_n(\lambda)=t^-_{\lambda}(\overline{w}_n(\lambda))\overline{w}_n(\lambda)~\text{and}~w_{n,m}=w_n(\lambda_m)=t^-_{\lambda_m}(\overline{w}_n(\lambda_m))\overline{w}_n(\lambda_m).$$ By Corollary [Corollary 29](#cor6.2){reference-type="ref" reference="cor6.2"}, we have that $w_n(\lambda)\in\mathcal{M}^-_\lambda$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$ and hence $w_{n,m}\in \mathcal{M}^-_{\lambda_m}$ for $m$ large enough. The boundedness of $\overline{w}_n(\lambda_m)$, Lemma [Lemma 9](#lem3.2){reference-type="ref" reference="lem3.2"}-$(ii)$ and Lemma [Lemma 14](#lem3.7){reference-type="ref" reference="lem3.7"}-$(i)$ imply that there exists $a>0$ such that $a<t^-_{\lambda_m}(\overline{w}_n(\lambda_m))<t^-_{\lambda_\ast}(\overline{w}_n(\lambda_m))=1$ (since $\overline{w}_n(\lambda_m)\in\mathcal{M}^-_{\lambda_\ast}$ for $m$ large enough and for all $n\in \mathbb{N}$) for $n,m$ large enough. It follows that $\{w_{n,m}\}_{(n,m)\in\mathbb{N}^2}$ is bounded and hence, up to a subsequence, $w_{n,m}\rightharpoonup w\neq 0$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n,m\to\infty$. Our aim is to prove $w_{n,m}\to w\neq 0$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n,m\to\infty$. Indeed, if not, then we have $$\liminf_{n,m\to\infty}\mathcal{F}'_{\lambda_m,w_{n,m}}(t_{\lambda_\ast}(w))> \mathcal{F}'_{\lambda_\ast,w}(t_{\lambda_\ast}(w))=0.$$ It follows that $\mathcal{F}'_{\lambda_m,w_{n,m}}(t_{\lambda_\ast}(w))>0$ for $n,m$ large enough. 
Since $w_{n,m}\in\mathcal{M}^-_{\lambda_m}$ for $m$ large enough, by Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"} the map $\mathcal{F}_{\lambda_m,w_{n,m}}$ is strictly increasing in $(t^+_{\lambda_m}(w_{n,m}), t^-_{\lambda_m}(w_{n,m})=1)$ and hence $t^+_{\lambda_m}(w_{n,m})<t_{\lambda_\ast}(w)<t^-_{\lambda_m}(w_{n,m})=1$ for $n,m$ large enough. Now from [\[eq6.12\]](#eq6.12){reference-type="eqref" reference="eq6.12"}, we have $$\Psi_{\lambda_\ast}(t_{\lambda_\ast}(w)w)<\liminf_{n,m\to\infty}\Psi_{\lambda_m}(t_{\lambda_\ast}(w)w_{n,m})=\liminf_{n,m\to\infty}\mathcal{F}_{\lambda_m,w_{n,m}}(t_{\lambda_\ast}(w))$$ $$\hspace{2.5cm}< \liminf_{n,m\to\infty}\Psi_{\lambda_m}(t^-_{\lambda_m}(w_{n,m})w_{n,m})=\liminf_{n,m\to\infty}\mathcal{I}^-_{\lambda_m}(w_{n,m})=\Upsilon^-_{\lambda_\ast},$$ which is absurd because, if we take $\lambda'_n\uparrow\lambda_\ast$ as $n\to\infty$, then by Corollary [Corollary 23](#cor5.3){reference-type="ref" reference="cor5.3"} and Proposition [Proposition 25](#prop5.5){reference-type="ref" reference="prop5.5"}, we get $$\Upsilon^-_{\lambda_\ast}=\lim_{\lambda'_n\uparrow{\lambda_\ast}}\Upsilon^-_{\lambda'_n}\leq \lim_{\lambda'_n\uparrow{\lambda_\ast}}\Psi_{\lambda'_n}(t^-_{\lambda'_n}(w)w)=\Psi_{\lambda_\ast}(t_{\lambda_\ast}(w)w).$$ Therefore, we must have that $w_{n,m}\to w$ in $W^{s,p}_{V}(\mathbb{R}^N)$ and also, up to a subsequence, $t^-_{\lambda_m}(\overline{w}_n(\lambda_m))\to t\geq a$ as $n,m\to\infty$ (since $t^-_{\lambda_m}(\overline{w}_n(\lambda_m))$ is bounded). This implies that $\overline{w}_n(\lambda_m)\to\overline{w}\neq 0$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n,m\to\infty$. Hence for $m$ large enough, we have $$0\leq \|\overline{w}(\lambda_m)-\overline{w}\|\leq\liminf_{n\to\infty}\|\overline{w}_n(\lambda_m)-\overline{w}\|.$$ Letting $m\to\infty$ in the above inequality, we get $\overline{w}(\lambda_m)\to \overline{w}$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $m\to\infty$. 
Furthermore, from [\[eq6.12\]](#eq6.12){reference-type="eqref" reference="eq6.12"} and strong convergence, we get $\Upsilon^-_{\lambda_\ast}=\mathcal{I}^-_{\lambda_\ast}(\overline{w})$ and $\overline{w}\in \mathcal{M}^-_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}$. But from the definition of $\mathcal{M}^-_{\lambda_\ast,d_{-}}$, we have $\mathrm{dist}(\overline{w}_n(\lambda_m),\mathcal{M}^0_{\lambda_\ast})>d_{-}>0$ for $m$ large enough. It follows that $$\lim_{n,m\to\infty} \mathrm{dist}(\overline{w}_n(\lambda_m),\mathcal{M}^0_{\lambda_\ast})=\mathrm{dist}(\overline{w},\mathcal{M}^0_{\lambda_\ast})\geq d_{-}>0.$$ This yields $\overline{w}\notin\mathcal{M}^0_{\lambda_\ast}$ and hence $\overline{w}\in\mathcal{M}^-_{\lambda_\ast}$. Thus we have $\overline{w}\in\mathcal{S}^-_{\lambda_\ast}$ and $\overline{w}(\lambda_m)\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$ for $m$ large enough, which is again a contradiction. Therefore, there exists $\varepsilon^->0$ such that $\overline{w}(\lambda)\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$. Arguing as before we conclude that $\overline{w}_n(\lambda)\to\overline{w}(\lambda)$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and $\mathcal{I}^-_\lambda(\overline{w}(\lambda))=\Upsilon^-_{\lambda,d_{-}}$. By choosing $\overline{w}_\lambda=\overline{w}(\lambda)$, the proof is completed. ◻ **Lemma 34**. *Let us choose $\mathrm{d}_{+}\in(0,\mathrm{d}_{\lambda_\ast,+})$ and $\hat{C}^+_{1}<\hat{C}^+_{1,\lambda_\ast}$, then there exists $\varepsilon^+>0$ such that for all $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^+)$, $\Upsilon^+_{\lambda,d_{+}}$ has a minimizer $\overline{u}_\lambda\in\mathcal{M}^+_{\lambda_\ast,d_{+}}$.* *Proof.* The proof is similar to Lemma [Lemma 33](#lem6.6){reference-type="ref" reference="lem6.6"}. ◻ **Theorem 35**. 
*The problem [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} has at least two solutions $w_\lambda\in\mathcal{M}^-_\lambda$ and $u_\lambda\in\mathcal{M}^+_\lambda$ when $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$, where $\epsilon>0$ is small enough.* *Proof.* By Lemma [Lemma 33](#lem6.6){reference-type="ref" reference="lem6.6"}, $\overline{w}_\lambda\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$ is a minimizer for $\Upsilon^-_{\lambda,d_{-}}$ when $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$. Consequently, by Corollary [Corollary 29](#cor6.2){reference-type="ref" reference="cor6.2"} we obtain $t^-_\lambda(\overline{w}_\lambda)\overline{w}_\lambda\in \mathcal{M}^-_{\lambda}$ for $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$. Denote $w_\lambda=t^-_\lambda(\overline{w}_\lambda)\overline{w}_\lambda$; our aim is to show that $w_\lambda\in \mathcal{M}^-_{\lambda}$ is a solution for [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. For this, we first have to prove that $\overline{w}_\lambda$ is an interior point of $\mathcal{M}^-_{\lambda_\ast,d_{-}}$, i.e., to prove $$\|\overline{w}_\lambda\|<\hat{C}^-_2 ~\text{ for all}~ \lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-),$$ where the constant $\hat{C}^-_2 >0$ satisfies $0<C_{\lambda_\ast}<\hat{C}^-_2$, with $C_{\lambda_\ast}$ as in Corollary [Corollary 27](#cor5.7){reference-type="ref" reference="cor5.7"}. Next, consider a sequence $\{\lambda_n\}_{n\in\mathbb{N}}$ such that $\lambda_n\downarrow\lambda_\ast$ as $n\to\infty$. It follows that $\overline{w}_{\lambda_n}\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$ for $n$ large enough and $\{\overline{w}_{\lambda_n}\}_{n\in\mathbb{N}}$ is bounded. Therefore, without loss of generality, we can assume $\overline{w}_{\lambda_n}\rightharpoonup \overline{w}$ weakly in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. 
It follows from Lemma [Lemma 6](#lemma2.3){reference-type="ref" reference="lemma2.3"} that $\overline{w}_{\lambda_n}\to \overline{w}$ in $L^\tau(\mathbb{R}^N)$ and $L^\gamma(\mathbb{R}^N)$ respectively as $n\to\infty$, $\overline{w}_{\lambda_n}\to \overline{w}$ a.e. in $\mathbb{R}^N$ and $\overline{w}\geq 0$. Adopting a strategy similar to that in Lemma [Lemma 16](#lem4.2){reference-type="ref" reference="lem4.2"}, we obtain $\overline{w} \neq 0$ and $\displaystyle\int_{\mathbb{R}^N}\beta(x)|\overline{w}|^{\gamma}~\mathrm{d}x>0$. We claim that $\overline{w}_{\lambda_n}\to \overline{w}$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$. Indeed, if not, then we have $$\liminf_{n\to\infty}\mathcal{F}'_{\lambda_n,\overline{w}_{\lambda_n}}(t_{\lambda_\ast}(\overline{w}))> \mathcal{F}'_{\lambda_\ast,\overline{w}}(t_{\lambda_\ast}(\overline{w}))=0.$$ It follows that $\mathcal{F}'_{\lambda_n,\overline{w}_{\lambda_n}}(t_{\lambda_\ast}(\overline{w}))>0$ for $n$ large enough. By Corollary [Corollary 29](#cor6.2){reference-type="ref" reference="cor6.2"}, we have $t^-_{\lambda_n}(\overline{w}_{\lambda_n})\overline{w}_{\lambda_n}\in\mathcal{M}^-_{\lambda_n}$ for $n$ large enough; therefore, by Proposition [Proposition 11](#prop3.1){reference-type="ref" reference="prop3.1"}, the map $\mathcal{F}_{\lambda_n,\overline{w}_{\lambda_n}}$ is strictly increasing in $(t^+_{\lambda_n}(\overline{w}_{\lambda_n}),t^-_{\lambda_n}(\overline{w}_{\lambda_n}))$ and therefore $t^+_{\lambda_n}(\overline{w}_{\lambda_n})<t_{\lambda_\ast}(\overline{w})<t^-_{\lambda_n}(\overline{w}_{\lambda_n})$ for $n$ large enough. 
Therefore, by Proposition [Proposition 32](#prop6.5){reference-type="ref" reference="prop6.5"}, we have $$\Psi_{\lambda_\ast}(t_{\lambda_\ast}(\overline{w})\overline{w})<\liminf_{n\to\infty}\Psi_{\lambda_n}(t_{\lambda_\ast}(\overline{w})\overline{w}_{\lambda_n})=\liminf_{n\to\infty}\mathcal{F}_{\lambda_n,\overline{w}_{\lambda_n}}(t_{\lambda_\ast}(\overline{w}))$$ $$< \liminf_{n\to\infty}\Psi_{\lambda_n}(t^-_{\lambda_n}(\overline{w}_{\lambda_n})\overline{w}_{\lambda_n})=\liminf_{n\to\infty}\mathcal{I}^-_{\lambda_n}(\overline{w}_{\lambda_n})=\liminf_{n\to\infty}\Upsilon^-_{\lambda_n,d_{-}}=\Upsilon^-_{\lambda_\ast},$$ which is absurd because, if we take $\lambda'_n\uparrow\lambda_\ast$ as $n\to\infty$, then by Corollary [Corollary 23](#cor5.3){reference-type="ref" reference="cor5.3"} and Proposition [Proposition 25](#prop5.5){reference-type="ref" reference="prop5.5"}, we get $$\Upsilon^-_{\lambda_\ast}=\lim_{\lambda'_n\uparrow{\lambda_\ast}}\Upsilon^-_{\lambda'_n}\leq \lim_{\lambda'_n\uparrow{\lambda_\ast}}\Psi_{\lambda'_n}(t^-_{\lambda'_n}(\overline{w})\overline{w})=\Psi_{\lambda_\ast}(t_{\lambda_\ast}(\overline{w})\overline{w}).$$ Thus we conclude that $\overline{w}_{\lambda_n}\to \overline{w}$ in $W^{s,p}_{V}(\mathbb{R}^N)$ as $n\to\infty$ and hence $\mathcal{F}'_{\lambda_\ast,\overline{w}}(1)=0$ and $\mathcal{F}''_{\lambda_\ast,\overline{w}}(1)\leq 0$ (since $\overline{w}_{\lambda_n}\in\mathcal{M}^-_{\lambda_\ast}$ for $n$ large enough). It follows that $\overline{w}\in \mathcal{M}^-_{\lambda_\ast}\cup\mathcal{M}^0_{\lambda_\ast}$. But we have $$\lim_{n\to\infty} \mathrm{dist}(\overline{w}_{\lambda_n},\mathcal{M}^0_{\lambda_\ast})=\mathrm{dist}(\overline{w},\mathcal{M}^0_{\lambda_\ast})\geq d_{-}>0$$ and hence $\overline{w}\notin \mathcal{M}^0_{\lambda_\ast}$. This shows that we must have $\overline{w} \in \mathcal{M}^-_{\lambda_\ast}$. 
Next, it is easy to see that $t^-_{\lambda_n}(\overline{w}_{\lambda_n})$ is bounded and therefore, up to a subsequence, we can assume $t^-_{\lambda_n}(\overline{w}_{\lambda_n})\to t$ as $n\to\infty$. Using the strong convergence and the fact that $t^-_{\lambda_n}(\overline{w}_{\lambda_n})\overline{w}_{\lambda_n}\in\mathcal{M}^-_{\lambda_n}$ for $n$ large enough, we obtain $\mathcal{F}'_{\lambda_\ast,\overline{w}}(t)=0$ and $\mathcal{F}''_{\lambda_\ast,\overline{w}}(t)\leq 0$. Hence, from $\overline{w} \in \mathcal{M}^-_{\lambda_\ast}$ it follows that $t=1$. Now from Proposition [Proposition 32](#prop6.5){reference-type="ref" reference="prop6.5"}, we get $$\mathcal{I}^-_{\lambda_\ast}(\overline{w})=\Psi_{\lambda_\ast}(\overline{w})=\lim_{n\to\infty} \Psi_{\lambda_n}(t^-_{\lambda_n}(\overline{w}_{\lambda_n})\overline{w}_{\lambda_n})=\lim_{n\to\infty}\mathcal{I}^-_{\lambda_n}(\overline{w}_{\lambda_n})=\lim_{n\to\infty}\Upsilon^-_{\lambda_n,d_{-}}=\Upsilon^-_{\lambda_\ast}.$$ This shows that $\overline{w}\in\mathcal{S}^-_{\lambda_\ast}$ and hence, from strong convergence, we obtain $$\|\overline{w}\|=\lim_{n\to\infty} \|\overline{w}_{\lambda_n}\|\leq C_{\lambda_\ast}<\hat{C}^-_2 ~(\text{see Corollary \ref{cor5.7}}).$$ Since the sequence $\lambda_n\downarrow\lambda_\ast$ was arbitrary, we infer that $\limsup_{\lambda\downarrow\lambda_\ast}\|\overline{w}_{\lambda}\|\leq C_{\lambda_\ast}<\hat{C}^-_2$; hence, after possibly shrinking $\varepsilon^->0$, we have $\|\overline{w}_\lambda\|<\hat{C}^-_2 ~\text{ for all}~ \lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$. To prove that $w_\lambda\in \mathcal{M}^-_{\lambda}$ is a solution for [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}, let us choose $v\in\mathcal{K}$. It follows from $\overline{w}_\lambda\in\mathcal{M}^-_{\lambda_\ast,d_{-}}$ that $\overline{w}_\lambda\in\mathcal{M}^-_{\lambda_\ast}$ for all $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^-)$. 
Now choose $\boldsymbol{\sigma}_0>0$ small enough such that, for $0<\boldsymbol{\sigma}<\boldsymbol{\sigma}_0$, we have $t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\in \mathcal{M}^-_{\lambda_\ast}$. Therefore, by applying the implicit function theorem as in Lemma [Lemma 18](#lem4.4){reference-type="ref" reference="lem4.4"}-$(ii)$, we can easily deduce that $t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\to 1$ as $\boldsymbol{\sigma}\to 0^+$. In addition, we have $\|\overline{w}_\lambda\|<\hat{C}^-_2$ and $\mathrm{dist}(\overline{w}_\lambda,\mathcal{M}^0_{\lambda_\ast})>d_{-}$. This yields $$\|t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\|< \hat{C}^-_2$$ and $$\mathrm{dist}(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v),\mathcal{M}^0_{\lambda_\ast})>d_{-}~\text{for}~\boldsymbol{\sigma}>0~\text{small enough}.$$ Hence $$\label{eq6.15} t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\in\mathcal{M}^-_{\lambda_\ast,d_{-}}~\text{for}~\boldsymbol{\sigma}>0~\text{small enough}.$$ Consequently, by using Corollary [Corollary 29](#cor6.2){reference-type="ref" reference="cor6.2"}, we have $$\label{eq6.16} t^-_\lambda\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\in\mathcal{M}^-_\lambda$$ for $\boldsymbol{\sigma}>0$ small enough and $\lambda>\lambda_\ast$ in an appropriate range. 
Now, from [\[eq6.15\]](#eq6.15){reference-type="eqref" reference="eq6.15"} and Lemma [Lemma 33](#lem6.6){reference-type="ref" reference="lem6.6"}, we get $$\label{eq6.17} \begin{split} \Psi_\lambda\bigg(t^-_\lambda\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\bigg)\\& \hspace{-10cm}=\mathcal{I}^-_\lambda\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\geq \Upsilon^-_{\lambda,d_{-}}=\mathcal{I}^-_{\lambda}(\overline{w}_\lambda)=\Psi_\lambda(t^-_\lambda(\overline{w}_\lambda)\overline{w}_\lambda). \end{split}$$ But for $\boldsymbol{\sigma}>0$ sufficiently small, we have $$\label{eq6.18} \Psi_\lambda\bigg(t^-_\lambda\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\overline{w}_\lambda\big)\bigg)\simeq \Psi_\lambda(t^-_\lambda(\overline{w}_\lambda)\overline{w}_\lambda).$$ From [\[eq6.16\]](#eq6.16){reference-type="eqref" reference="eq6.16"} and using the fact that $t^-_\lambda(\overline{w}_\lambda)\overline{w}_\lambda\in \mathcal{M}^-_{\lambda}$, we can apply the implicit function theorem to the same function $\mathcal{H}$ as in Lemma [Lemma 18](#lem4.4){reference-type="ref" reference="lem4.4"}-$(ii)$ at the point $$\Bigg(t^-_\lambda(\overline{w}_\lambda),M(\|\overline{w}_\lambda\|^p)\|\overline{w}_\lambda\|^p,\int_{\mathbb{R}^{N}}\alpha(x)|\overline{w}_\lambda|^{1-\delta}~\mathrm{d}x,\int_{\mathbb{R}^{N}}\beta (x)|\overline{w}_\lambda|^\gamma~\mathrm{d}x\Bigg)$$ and easily verify $$t^-_\lambda\big(t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)\to t^-_\lambda(\overline{w}_\lambda)~\text{as}~\boldsymbol{\sigma}\to 0^+.$$ Denote 
$$\boldsymbol{\vartheta}=t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v).$$ Now from [\[eq6.17\]](#eq6.17){reference-type="eqref" reference="eq6.17"} and [\[eq6.18\]](#eq6.18){reference-type="eqref" reference="eq6.18"}, we obtain $$\frac {\Psi_\lambda\big(t^-_\lambda(\boldsymbol{\vartheta}) t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)-\Psi_\lambda\big(t^-_\lambda(\boldsymbol{\vartheta}) t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\overline{w}_\lambda\big)}{\boldsymbol{\sigma}}\geq 0.$$ On simplifying the above inequality, we get $$\hspace{-3cm}\big(t^-_\lambda(\boldsymbol{\vartheta}) t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)^{p(m+1)}\Bigg[\frac{\widehat{M}(\|\overline{w}_\lambda+\boldsymbol{\sigma}v\|^p)-\widehat{M}(\|\overline{w}_\lambda\|^p)}{p\boldsymbol{\sigma}}\Bigg]$$ $$\hspace{2cm}\geq \big(t^-_\lambda(\boldsymbol{\vartheta}) t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)^\gamma \int_{\mathbb{R}^N}\beta(x)\bigg(\frac{|\overline{w}_\lambda+\boldsymbol{\sigma}v|^\gamma-|\overline{w}_\lambda|^\gamma}{\gamma \boldsymbol{\sigma}}\bigg)~\mathrm{d}x$$ $$\hspace{4cm}+\lambda \big(t^-_\lambda(\boldsymbol{\vartheta}) t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\big)^{1-\delta}\int_{\mathbb{R}^N}\alpha(x)\bigg(\frac{|\overline{w}_\lambda+\boldsymbol{\sigma}v|^{1-\delta}-|\overline{w}_\lambda|^{1-\delta}}{(1-\delta) \boldsymbol{\sigma}}\bigg)~\mathrm{d}x.$$ Applying Fatou's Lemma to the above inequality and using the facts that $t^-_\lambda(\boldsymbol{\vartheta})\to t^-_\lambda(\overline{w}_\lambda)$, $t^-_{\lambda_\ast}(\overline{w}_\lambda+\boldsymbol{\sigma}v)\to 1$ as $\boldsymbol{\sigma}\to 0^+$ and $w_\lambda=t^-_\lambda(\overline{w}_\lambda)\overline{w}_\lambda$, we get $$\label{eq6.19} \langle{L(w_{\lambda}),v}\rangle-\lambda 
\int_{\mathbb{R}^{N}}\alpha(x)w_\lambda^{-\delta} v~\mathrm{d}x-\int_{\mathbb{R}^{N}}\beta(x)w_\lambda^{\gamma-1} v~\mathrm{d}x\geq 0,~\forall~v\in\mathcal{K}.$$ Choose $v\in W^{s,p}_{V}(\mathbb{R}^N)$ and define $\phi_\epsilon= w_\lambda+\epsilon v$; then, for each given $\epsilon>0$, we have $\phi^+_\epsilon\in\mathcal{K}$. Now, replacing $v$ by $\phi^+_\epsilon$ in [\[eq6.19\]](#eq6.19){reference-type="eqref" reference="eq6.19"} and following Theorem [Theorem 20](#thm4.6){reference-type="ref" reference="thm4.6"}, we can easily show that $w_\lambda\in\mathcal{M}^-_{\lambda}$ is a solution to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"}. Similarly, we can prove that $u_\lambda\in\mathcal{M}^+_{\lambda}$ is a solution to [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} for $\lambda\in(\lambda_\ast,\lambda_\ast+\varepsilon^+)$. Let $\epsilon=\min~\{\varepsilon^-,\varepsilon^+\}$; then, for $\lambda\in(\lambda_\ast,\lambda_\ast+\epsilon)$, problem [\[main problem\]](#main problem){reference-type="eqref" reference="main problem"} has at least two solutions $w_\lambda\in\mathcal{M}^-_\lambda$ and $u_\lambda\in\mathcal{M}^+_\lambda$. This completes the proof. ◻ We now complete the proof of Theorem [Theorem 3](#thm2.6){reference-type="ref" reference="thm2.6"}. *Proof of Theorem [Theorem 3](#thm2.6){reference-type="ref" reference="thm2.6"}.* It follows directly from Theorem [Theorem 20](#thm4.6){reference-type="ref" reference="thm4.6"}, Theorem [Theorem 26](#thm5.6){reference-type="ref" reference="thm5.6"} and Theorem [Theorem 35](#thm6.8){reference-type="ref" reference="thm6.8"}. ◻ # Conclusion and further remarks We believe that the results of this work remain valid for the following degenerate Kirchhoff singular problems, as well as for related singular problems under suitable assumptions. 
For example: - $$\begin{cases} M\big( \|u\|^p\big)\bigg[-\Delta_p u+V(x)u^{p-1}\bigg]=\lambda f(x)u^{-\delta}+g(x)u^{\gamma-1}~~\text{in}~~\mathbb{R}^N,\\ ~~~~~~u>0~~\text{in}~~\mathbb{R}^N,~ \displaystyle\int_{\mathbb{R}^N} V(x)u^p~\mathrm{d}x<\infty,~ u\in W^{1,p}(\mathbb{R}^N), \end{cases}$$ where $\|\cdot\|^p=\|\cdot\|^p_{W^{1,p}_V(\mathbb{R}^N)}$, $N\geq2$, $\Delta_p u=\text{div}(|\nabla u|^{p-2}\nabla u)$ is the $p$-Laplacian operator, $\lambda>0$ is a real parameter, $0<\delta<1<p<\gamma<p^\ast-1$, where $p^\ast=\frac{Np}{N-p}$ is the critical Sobolev exponent, $f>0$ a.e. in $\mathbb{R}^N$, $g\in L^\infty(\mathbb{R}^N)$ is sign changing in $\mathbb{R}^N$ with $g^+\neq 0$, $M:\mathbb{R}^+\to\mathbb{R}^+$ is a degenerate Kirchhoff function, $V:\mathbb{R}^N\to \mathbb{R}^+$ is a continuous potential function. - $$\begin{cases} M\big( \|u\|^p\big)\bigg[\mathcal{L}_1 (u)+V(x)u^{p-1}\bigg]=\lambda f(x)u^{-\delta}+\frac{\alpha}{\alpha+\beta} h(x)u^{\alpha-1}v^\beta~~\text{in}~~\mathbb{R}^N,\\ M\big( \|u\|^p\big)\bigg[\mathcal{L}_1 (v)+V(x)v^{p-1}\bigg]=\mu g(x)v^{-\delta}+\frac{\beta}{\alpha+\beta} h(x)u^{\alpha}v^{\beta-1}~~\text{in}~~\mathbb{R}^N,\\ ~~~~~~u,v>0~~\text{in}~~\mathbb{R}^N,~ \displaystyle\int_{\mathbb{R}^N} V(x)(u^p+v^p)~\mathrm{d}x<\infty,~ u,v\in \mathbf{X}, \end{cases}$$ where $\mathbf{X}=W^{1,p}(\mathbb{R}^N)~(\text{or}~W^{s,p}(\mathbb{R}^N))$ and we also define $\mathbf{Y}=W^{1,p}_V(\mathbb{R}^N)~(\text{or}~W^{s,p}_V(\mathbb{R}^N))$, with $\|\cdot\|^p=\|\cdot\|^p_{\mathbf{Y}}$, $N\geq2$, the operator $\mathcal{L}_1$ is defined by $\mathcal{L}_1(u)=-\Delta_p u=-\text{div}(|\nabla u|^{p-2}\nabla u)~(\text{or}~\big(-\Delta_p\big)^s)$, called the $p$-Laplacian (or fractional $p$-Laplacian) operator, $\alpha,\beta>1$, $0<\delta<1<p<\alpha+\beta<p^\ast$, where $p^\ast=\frac{Np}{N-p}$ (or $\frac{Np}{N-sp}$), called the critical Sobolev exponent, the real parameters $\lambda,\mu>0$, $f,g>0$ a.e. 
in $\mathbb{R}^N$, $h\in L^\infty(\mathbb{R}^N)$ is sign changing in $\mathbb{R}^N$ with $h^+\neq 0$, $M:\mathbb{R}^+\to\mathbb{R}^+$ is a degenerate Kirchhoff function, $V:\mathbb{R}^N\to \mathbb{R}^+$ is a continuous potential function. - $$\begin{cases} M\big( \|u\|^p\big)\bigg[\mathcal{L}_p (u)+V(x)u^{p-1}\bigg]+ M\big( \|u\|^q\big)\bigg[\mathcal{L}_q (u)+V(x)u^{q-1}\bigg]\\ \hspace{6.5cm} =\lambda f(x)u^{-\delta}+g(x)u^{\gamma-1}~~\text{in}~~\mathbb{R}^N,\\ ~u>0~~\text{in}~~\mathbb{R}^N,~ \displaystyle\int_{\mathbb{R}^N} V(x)u^p~\mathrm{d}x<\infty,~\displaystyle \int_{\mathbb{R}^N} V(x)u^q~\mathrm{d}x<\infty,~ u\in \mathbf{X}, \end{cases}$$ where $\mathbf{X}=W^{1,p}(\mathbb{R}^N)\cap W^{1,q}(\mathbb{R}^N)~(\text{or}~~W^{s,p}(\mathbb{R}^N)\cap W^{s,q}(\mathbb{R}^N))$ and we define $\mathbf{Y}=W^{1,m}_V(\mathbb{R}^N)$ $(\text{or}~W^{s,m}_V(\mathbb{R}^N))$, with $\|\cdot\|^m=\|\cdot\|^m_{\mathbf{Y}}$ for $m\in\{p,q\}$, $N\geq2$, $\mathcal{L}_m u=-\Delta_m u$ (or $\big(-\Delta_m\big)^s$) for $m\in\{p,q\}$, is the $m$-Laplacian (or fractional $m$-Laplacian) operator, $\lambda>0$ is a real parameter, $0<\delta<1<q<p<\gamma<q^\ast-1$, where $q^\ast=\frac{Nq}{N-q}$ (or $\frac{Nq}{N-sq}$) is the critical Sobolev exponent, $f>0$ a.e. in $\mathbb{R}^N$, $g\in L^\infty(\mathbb{R}^N)$ is sign changing in $\mathbb{R}^N$ with $g^+\neq 0$, $M:\mathbb{R}^+\to\mathbb{R}^+$ is a degenerate Kirchhoff function, $V:\mathbb{R}^N\to \mathbb{R}^+$ is a continuous potential function. - $$\label{eq7.6} \begin{cases} \mathfrak{L }^{p,q}_{a,V}(u) =\lambda f(x)u^{-\delta}+g(x)u^{\gamma-1}~~\text{in}~~\mathbb{R}^N,\\ ~u>0~~\text{in}~~\mathbb{R}^N,~ \displaystyle\int_{\mathbb{R}^N} V(x) (u^p+a(x)u^q)~\mathrm{d}x<\infty, u\in W^{1,\mathcal{H}}(\mathbb{R}^N), \end{cases}\tag{$\mathcal{P}_\lambda$}$$ where $N\geq2$, $V:\mathbb{R}^N\to \mathbb{R}^+$ is a positive continuous function and $\lambda>0$ is a parameter. 
The double phase type operator that appears in the problem [\[eq7.6\]](#eq7.6){reference-type="eqref" reference="eq7.6"} is defined as follows: $$\mathfrak{L }^{p,q}_{a,V}(u)=-\text{div}(|\nabla u|^{p-2}\nabla u+a(x)|\nabla u|^{q-2}\nabla u)+V(x)(|u|^{p-2}u+a(x)|u|^{q-2}u).$$ Also, we have $0\leq a(\cdot)\in L^1(\mathbb{R}^N)$, $0<\delta<1<p<q<N,~\frac{q}{p}<1+\frac{1}{N},~q<p^\ast$, where $p^\ast=\frac{Np}{N-p}$ is the critical Sobolev exponent, $f>0$ a.e. in $\mathbb{R}^N$, $g\in L^\infty(\mathbb{R}^N)$ is sign changing in $\mathbb{R}^N$ with $g^+\neq 0$. **Remark 36**. *The above problems remain of interest when $M\equiv 1$. Moreover, one can study such singular problems on bounded domains. We leave the detailed proofs of the above claims to interested readers.* # Acknowledgements {#acknowledgements .unnumbered} DKM sincerely thanks the DST INSPIRE Fellowship DST/INSPIRE/03/2019/000265 sponsored by the Govt. of India. TM acknowledges the support of the Start-up Research Grant from DST-SERB, sanction no. SRG/2022/000524. AS was supported by the DST-INSPIRE Grant DST/INSPIRE/04/2018/002208 sponsored by the Govt. of India. D.K. Mahanta, *E-mail address:* `mahanta.1@iitj.ac.in` T. Mukherjee, *E-mail address:* `tuhina@iitj.ac.in` A. Sarkar, *E-mail address:* `abhisheks@iitj.ac.in` [^1]: Corresponding author
{ "id": "2309.09539", "title": "On the extreme value of Nehari manifold for nonlocal singular\n Schr{\\\"o}dinger-Kirchhoff equations in $\\mathbb{R}^N$", "authors": "Deepak Kumar Mahanta, Tuhina Mukherjee and Abhishek Sarkar", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The aim of this paper is to formulate a *non--commutative geometrical* version of the Electromagnetic Theory in a non--commutative space--time by using the theory of quantum principal bundles and quantum principal connections. To accomplish this purpose, we will *dualize* the geometrical formulation of this theory. *MSC 2010:* 46L87, 58B99. \ *Keywords:* Quantum principal bundles, non--commutative $U(1)$ gauge theory. address: | Gustavo Amilcar Saldaña Moncada\ CIMAT, Unidad Guanajuato author: - Gustavo Amilcar Saldaña Moncada title: Electromagnetic Theory with Quantum Principal Bundles --- # Introduction The relationship between Geometry and Physics is well known, particularly when we deal with the Electromagnetic Theory [@na]. Indeed, one of the most general starting points of this theory in the vacuum is to consider it as a Yang--Mills theory for the trivial principal $U(1)$--bundle over the Minkowski space--time: $\mathbb{R}^4$ with the metric $\eta=\mathrm{diag}(-1,1,1,1)$ [@na]. In Appendix A we show a brief summary of this well--known development. In particular, we comment on how the (second) Bianchi identity (see Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"}) gives rise to the Gauss Law for magnetism (Equation [\[1.7\]](#1.7){reference-type="ref" reference="1.7"}) and the Faraday equation (see Equation [\[1.8\]](#1.8){reference-type="ref" reference="1.8"}); while critical points of the Yang--Mills functional (see Equation [\[1.3\]](#1.3){reference-type="ref" reference="1.3"}), a functional that measures the square norm of the curvature of a principal connection, give rise to the Gauss Law (Equation [\[1.9\]](#1.9){reference-type="ref" reference="1.9"}) and the Ampere equation (Equation [\[1.10\]](#1.10){reference-type="ref" reference="1.10"}). 
It is worth remarking that in Equation [\[1.3\]](#1.3){reference-type="ref" reference="1.3"} (or equivalently the Equations [\[1.9\]](#1.9){reference-type="ref" reference="1.9"}, [\[1.10\]](#1.10){reference-type="ref" reference="1.10"}) we are looking for critical points of the Yang--Mills functional, so in principle, not every principal connection (or equivalently, not every $1$--form potential) can satisfy it. Equations [\[1.9\]](#1.9){reference-type="ref" reference="1.9"}, [\[1.10\]](#1.10){reference-type="ref" reference="1.10"} are usually called *dynamical equations* because they govern the dynamics of the electric and the magnetic field. On the other hand, Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"} (or equivalently the Equations [\[1.7\]](#1.7){reference-type="ref" reference="1.7"}, [\[1.8\]](#1.8){reference-type="ref" reference="1.8"}) is always satisfied for every potential $1$--form. This equation can be thought of as a condition imposed on the electromagnetic field by the Minkowski space--time and the group $U(1)$; so Equations [\[1.7\]](#1.7){reference-type="ref" reference="1.7"}, [\[1.8\]](#1.8){reference-type="ref" reference="1.8"} are usually called *geometrical equations*, reflecting the fact that they come from the Bianchi equation, a geometrical equation. This paper aims at recreating the geometrical formulation of the Electromagnetic Theory presented in Appendix A by using quantum principal bundles in a concrete non--commutative space--time: the Moyal--Weyl algebra. Of course, throughout the text we will show the link between our formulation and the common results about this subject in the literature, for example [@obs; @05; @u(1); @elec; @gauge2; @twisted; @ruiz]. The importance of this paper lies in this link, as well as in the possibility of using quantum principal bundles in $SU(2)$ or $SU(3)$--non--commutative gauge theories. 
In particular, we are going to show the correct geometrical equation by using the non--commutative Bianchi identity, and the correct dynamical equation by identifying critical points of the non--commutative Yang--Mills functional, which is now a functional that measures the square norm of the curvature of a quantum principal connection. With that, we will present the correct non--commutative Maxwell Equations as well as the correct Lagrangian of the system and its quantization. An interesting result of this formulation is the fact that the four--potential produces electric charges and currents in the vacuum, and even *magnetic charges and currents in the vacuum*! This turns the photon field into a kind of *Dyon gauge field*. Moreover, in order to illustrate one more time the powerful tool that the theory of quantum principal bundles is in non--commutative gauge theories (the reader should remember the link between them in the *classical* case [@na]) and based on the fact that in the non--commutative case the photon field produces electric and magnetic charges/currents, we are going to develop a toy model of the Electromagnetic Theory in the Moyal--Weyl algebra with two gauge boson fields: one associated to electric charges and current densities, the photon field, and the other one associated to magnetic charges and current densities, a kind of *magnetic photon field*. In addition, having two gauge boson fields in the Electromagnetic Theory leads to full symmetry between the electric field and the magnetic field. The idea of using two gauge fields in the Electromagnetic Theory comes from the formalism of Cabibbo--Ferrari [@cf; @cv]. Nevertheless, in the *classical* context, this leads to a theory with $U(1)\times U(1)$ as gauge group. 
In the framework of quantum principal bundles it is possible to describe two gauge fields with only one $U(1)$ (by using a $2D$ differential calculus of $U(1)$), which seems to be the correct group of symmetries of the Electromagnetic Theory. This paper breaks down into $6$ sections. In the second one we are going to build the quantum principal $U(1)$--bundle used throughout the whole work; in particular, we are going to show the two differential calculi of $U(1)$ used for our two electromagnetic models. The third section is the heart of this paper. In this section we will present the non--commutative Maxwell equations for our two models by using only the geometry of the spaces. It is worth mentioning that in this section we will show a covariant formulation of both models, as well as the correct Lagrangian densities. Following the *classical* case, in the fourth section we are going to quantize our two models. The last section is devoted to some concluding comments and, as we have mentioned before, in Appendix $A$ there is a brief summary of the geometrical formulation of the Electromagnetic Theory. To accomplish our purpose, we are going to use M. Durdevich's theory of quantum principal bundles and quantum principal connections ([@micho1; @micho2; @micho3; @steve]) and the general formulation of Yang--Mills equations presented in [@sald1; @sald2] and tested in other quantum bundles with several exciting and interesting results [@sald3; @sald4]. It is worth remarking that the theory presented in [@sald1; @sald2] was formulated in the most general way; it was not created for the particular case of the quantum bundle used in this paper. Of course, we will continue with the notation presented in these papers. 
Finally it is worth mentioning that we have chosen Durdevich's framework to develop this paper instead of [@libro; @brz; @qvbH] because of its purely geometrical/algebraic formulation and because of its generality in terms of differential calculi and quantum connections: this theory allows one to work with almost every differential calculus on the quantum spaces and with any quantum principal connection. This generality in terms of differential calculi and connections yields (at least in the context of Yang--Mills equations, in which connections play the main role) a richer theory. In Non--Commutative Geometry it is common to use the word *quantum* as a synonym of *non--commutative*, and we will sometimes do so in this paper. On the other hand, in Physics it is common to use the word *non--commutative* to denote theories with non--abelian groups, which is not the case in this paper; so we expect that the reader will not be confused by these terms. # The Quantum Principal $U(1)$--Bundle In this section we are going to present the quantum bundle on which we will work. Since we are not interested in changing the topology of the spaces or of the bundle, we have to consider a trivial quantum principal $U(1)$--bundle. The general theory of this kind of bundles can be found in [@micho2; @steve]. ## A Non--Commutative Minkowski Space--Time Let us start by considering the Minkowski space--time $(\mathbb{R}^4,\eta=\mathrm{diag}(1,-1,-1,-1))$ and its space of complex--valued smooth functions $C^\infty_\mathbb{C}(\mathbb{R}^4)$. By choosing a $4\times 4$ antisymmetric matrix $(\theta^{\mu\nu})$ $\in$ $M_4(\mathbb{R})$, it is possible to take $C^\infty_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$, formal power series in $\theta^{\mu\nu}$ with coefficients in $C^\infty_\mathbb{C}(\mathbb{R}^4)$. 
Finally it is possible to apply a $\theta^{\mu\nu}$--twist on $C^\infty_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$ by defining a new product: for every $f$, $h$ $\in$ $C^\infty_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$ we define $$\label{2.1} f\cdot h:=m\circ \mathrm{exp}\left( \frac{i\,\theta^{\mu\nu}}{2} \frac{\partial}{\partial x^\mu}\otimes \frac{\partial}{\partial x^\nu} \right)(f\otimes h),$$ where $m$ denotes the usual product on $C^\infty_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$. Explicitly we have $$f\cdot h:= \left( fh + \sum_{\mu,\nu} \frac{i\,\theta^{\mu\nu}}{2} \frac{\partial f}{\partial x^\mu} \frac{\partial h}{\partial x^\nu}+ \sum_{\mu,\nu,\alpha,\beta} \frac{i^2\,\theta^{\mu\nu}\, \theta^{\alpha\beta}}{8} \frac{\partial^2 f}{\partial x^\alpha \partial x^\mu} \frac{\partial^2 h}{\partial x^\beta \partial x^\nu}+\cdots \right).$$ With this new product, $C^\infty_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$ is a non--commutative unital $\ast$--algebra ([@05]), where the unit is $\mathbbm{1}(x)=1$ for all $x$ and the $\ast$ operation is the complex conjugate. It is worth mentioning that $$\label{2.2} [x^\mu,x^\nu]:=x^\mu\cdot x^\nu-x^\nu \cdot x^\mu =i\,\theta^{\mu\nu}\,\mathbbm{1}.$$ This non--commutative unital $\ast$--algebra receives the name of Moyal--Weyl algebra ([@05]) and it will be our quantum space--time, which we are going to denote by $M$. The next step is to extend the Moyal product $\cdot$ to the space of differential forms [@05]. In fact, let us take $\Omega^\bullet_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$ the space of formal power series in $\theta^{\mu\nu}$ with coefficients in the algebra of complex--valued differential forms. Now Equation [\[2.1\]](#2.1){reference-type="ref" reference="2.1"} is easily extended to $\Omega^\bullet_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$ by considering the action of $\frac{\partial}{\partial x^\gamma}$ on forms by means of the Lie derivative. 
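As an aside, the commutation relation [\[2.2\]](#2.2){reference-type="ref" reference="2.2"} can be checked mechanically. The following sketch (a mere illustration, not part of the formalism; it assumes the Python library `sympy`) implements the Moyal product truncated at first order in $\theta^{\mu\nu}$, which is already exact on the coordinate functions, and recovers $[x^0,x^1]=i\,\theta^{01}\,\mathbbm{1}$.

```python
import sympy as sp

x0, x1, theta = sp.symbols('x0 x1 theta', real=True)

def moyal(f, h):
    # Moyal product of Eq. (2.1) truncated at first order in theta,
    # restricted to two variables with theta^{01} = -theta^{10} = theta.
    first_order = (sp.I * theta / 2) * (
        sp.diff(f, x0) * sp.diff(h, x1) - sp.diff(f, x1) * sp.diff(h, x0)
    )
    return sp.expand(f * h + first_order)

# On coordinate functions all higher-order terms of the exponential
# vanish, so the truncation is exact and the commutator is i*theta.
commutator = moyal(x0, x1) - moyal(x1, x0)
assert sp.simplify(commutator - sp.I * theta) == 0
```

Higher-degree polynomials would require the higher-order terms of the exponential, but the first-order truncation above already exhibits the non--commutativity of the product.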
We will denote by $\Omega^\bullet(M)$ the space $\Omega^\bullet_\mathbb{C}(\mathbb{R}^4)[[\theta^{\mu\nu}]]$ with the Moyal product $\cdot$ extended. This graded differential $\ast$--algebra is going to play the role of *quantum differential forms* on our quantum space--time $M$. In accordance with [@sald2], in order to apply its theory there are certain structures that we first have to define on $\Omega^\bullet(M)$. Let us define the following left quantum Pseudo--Riemannian metric on $M$ $$\label{2.3} \{\langle-,-\rangle^k_\mathrm{L}: \Omega^k(M)\times \Omega^k(M) \longrightarrow M\mid k=0,1,2,3,4\}$$ such that $\langle f ,h\rangle^0_\mathrm{L}= f\cdot h^\ast$ and for $k=1,2,3,4$ we extend the usual metric of the de Rham differential algebra of the Minkowski space--time. For example $$\displaystyle \langle \sum^3_{\mu=0}f_\mu\,dx^\mu ,\sum^3_{\nu=0}h_\nu\,dx^\nu\rangle^1_\mathrm{L}= \sum^3_{\mu,\nu=0}\eta^{\mu\,\nu} f_\mu\cdot h^\ast_\nu=f_0\cdot h^\ast_0-f_1\cdot h^\ast_1-f_2\cdot h^\ast_2-f_3\cdot h^\ast_3$$ and $$\langle f\,\mathrm{dvol},h\,\mathrm{dvol}\rangle^4_\mathrm{L}=f\cdot h^\ast,$$ where $$\label{2.4} \mathrm{dvol}:=dx^0\wedge dx^1\wedge dx^2\wedge dx^3$$ is the volume form. Furthermore, by postulating the orthogonality between quantum forms of different degrees, we can induce Pseudo--Riemannian structures in the whole graded space; so we will not use superscripts anymore. 
By taking the integral operator $$\int_{M}\mathrm{dvol}$$ we can define the left quantum Hodge pseudo inner product[^1] $$\label{2.5} \langle-|-\rangle_\mathrm{L}:= \int_{M} \langle-,-\rangle_\mathrm{L}\,\mathrm{dvol}.$$ Furthermore, and according to [@sald2], the left quantum Hodge operator is the antilinear $M$--isomorphism $$\label{2.6} \star_\mathrm{L}:\Omega^k(M)\longrightarrow \Omega^{4-k}(M)$$ that satisfies $$\hat{\mu}\cdot (\star_\mathrm{L}\mu) =\langle \hat{\mu},\mu\rangle_\mathrm{L}\,\mathrm{dvol}.$$ Explicitly, for our case in the canonical basis we have $$\label{2.7} \star_\mathrm{L}\mathbbm{1}=\mathrm{dvol}, \;\;\star_\mathrm{L}\mathrm{dvol}=-\mathbbm{1},$$ $$\label{2.8} \begin{aligned} \star_\mathrm{L}dx^0=dx^1\wedge dx^2\wedge dx^3, \quad \star_\mathrm{L}dx^1=dx^0\wedge dx^2\wedge dx^3,\\ \star_\mathrm{L}dx^2=-dx^0\wedge dx^1\wedge dx^3,\quad \star_\mathrm{L}dx^3=dx^0\wedge dx^1\wedge dx^2, \end{aligned}$$ $$\label{2.9} \begin{aligned} \star_\mathrm{L}dx^0 \wedge dx^1=-dx^2\wedge dx^3, \quad \star_\mathrm{L}dx^0 \wedge dx^2=dx^1\wedge dx^3, \quad \star_\mathrm{L}dx^0 \wedge dx^3=-dx^1\wedge dx^2,\\ \star_\mathrm{L}dx^1 \wedge dx^2=dx^0\wedge dx^3, \quad \star_\mathrm{L}dx^1 \wedge dx^3=-dx^0\wedge dx^2, \quad \star_\mathrm{L}dx^2 \wedge dx^3=dx^0\wedge dx^1, \end{aligned}$$ $$\label{2.10} \begin{aligned} \star_\mathrm{L}dx^1 \wedge dx^2\wedge dx^3 =dx^0, \quad \star_\mathrm{L}dx^0 \wedge dx^2 \wedge dx^3=dx^1,\\ \star_\mathrm{L}dx^0 \wedge dx^1 \wedge dx^3=-dx^2, \quad \star_\mathrm{L}dx^0 \wedge dx^1 \wedge dx^2=dx^3. \end{aligned}$$ Of course in all cases we have $\star^2_\mathrm{L}=(-1)^{k(4-k)+1}\,\mathrm{id}.$ Finally the left quantum codifferential is defined as the operator $$\label{2.11} d^{\star_\mathrm{L}}:=(-1)^{k+1}\,\star^{-1}_\mathrm{L}\,\circ\, d \,\circ\, \star_\mathrm{L}:\Omega^{k+1}(M)\longrightarrow \Omega^k(M),$$ and it is the formal adjoint operator of $d$ with respect to $\langle-|-\rangle_\mathrm{L}$ [@sald2]. 
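As a mechanical check (an illustration only; the tuple encoding below is ours, not part of the paper), the basis action of $\star_\mathrm{L}$ in Equations [\[2.7\]](#2.7){reference-type="ref" reference="2.7"}--[\[2.10\]](#2.10){reference-type="ref" reference="2.10"} can be tabulated and the identity $\star^2_\mathrm{L}=(-1)^{k(4-k)+1}\,\mathrm{id}$ verified on every basis form; since all tabulated coefficients are real, the antilinearity of $\star_\mathrm{L}$ plays no role in the check.

```python
# Basis action of star_L from Eqs. (2.7)-(2.10): each basis k-form is
# labelled by its sorted index tuple, and star maps it to a signed
# (4-k)-form, stored as a (coefficient, image) pair.
star = {
    (): (1, (0, 1, 2, 3)),        # star 1 = dvol
    (0, 1, 2, 3): (-1, ()),       # star dvol = -1
    (0,): (1, (1, 2, 3)),
    (1,): (1, (0, 2, 3)),
    (2,): (-1, (0, 1, 3)),
    (3,): (1, (0, 1, 2)),
    (0, 1): (-1, (2, 3)),
    (0, 2): (1, (1, 3)),
    (0, 3): (-1, (1, 2)),
    (1, 2): (1, (0, 3)),
    (1, 3): (-1, (0, 2)),
    (2, 3): (1, (0, 1)),
    (1, 2, 3): (1, (0,)),
    (0, 2, 3): (1, (1,)),
    (0, 1, 3): (-1, (2,)),
    (0, 1, 2): (1, (3,)),
}

# Check star_L^2 = (-1)^{k(4-k)+1} id on every basis form.
for basis, (c1, img) in star.items():
    k = len(basis)
    c2, back = star[img]
    assert back == basis and c1 * c2 == (-1) ** (k * (4 - k) + 1)
```

Every entry of the table satisfies the identity, confirming the internal consistency of the signs listed above.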
It is worth mentioning that by considering the right structure as $$\label{2.12} \langle \hat{\mu},\mu\rangle_\mathrm{R}:=\langle \hat{\mu}^\ast,\mu^\ast\rangle_\mathrm{L}$$ we get $\langle-|-\rangle_\mathrm{R}$, $\star_\mathrm{R}:=\ast \circ \star_\mathrm{L}\circ \ast$ and $d^{\star_\mathrm{R}}:=(-1)^{k+1}\,\star^{-1}_\mathrm{R}\,\circ\, d \,\circ\, \star_\mathrm{R}=\ast \circ d^{\star_\mathrm{L}} \circ \ast$, which is the formal adjoint operator of $d$ with respect to $\langle-|-\rangle_\mathrm{R}$ [@sald2]. ## The Quantum Group of $U(1)$ and its Differential Calculus Let us start this subsection by considering the $\ast$--Hopf algebra given by the Laurent polynomial algebra $(G=\mathbb{C}[z,z^{-1}],\phi,\epsilon,\kappa)$, where $z^{-1}=z^\ast$, $\phi$ is the coproduct, $\epsilon$ is the counit and $\kappa$ the coinverse. The space $G$ will play the role of the quantum structure group of our bundle. The next step is to find differential calculi on $G$ different from the well--known algebra of *classical* differential forms in order to create our two models. The reasons for these changes will be explored throughout the text. ### The Non--Standard $1D$--Differential Calculus In accordance with [@steve; @woro], a bicovariant $\ast$--First Order Differential Calculus ($\ast$--FODC) can be defined as an $\mathrm{Ad}$--invariant right ideal $\mathcal{R} \subseteq \mathrm{Ker}(\epsilon)$ such that $\kappa(\mathcal{R})^\ast\subseteq \mathcal{R}$. In this way, let us consider any $\ast$--FODC $$\label{2.13} (\Gamma,d)$$ such that the set of invariant elements or the *quantum (dual) Lie algebra* ${_\mathrm{inv}}\Gamma$ satisfies $$\mathrm{dim}_\mathbb{C}({_\mathrm{inv}}\Gamma)=1,\qquad z\,\pi(z)=-\pi(z)\,z,$$ where $\pi:\mathrm{Ker}(\epsilon)\longrightarrow {_\mathrm{inv}}\Gamma$ is the quantum germs map given by $\pi(g)=\kappa(g^{(1)})dg^{(2)}$ for all $g$ $\in$ $\mathrm{Ker}(\epsilon)$ with $\phi(g)=g^{(1)}\otimes g^{(2)}$ (in Sweedler's notation) [@steve; @woro]. 
Notice that $$\label{2.14} {_\mathrm{inv}}\Gamma=\mathrm{span}_\mathbb{C}\{\vartheta:=i\, \pi(z)\}.$$ Also we can calculate the adjoint (co)action of $G$ on ${_{\mathrm{inv}}}\Gamma$ $$\label{2.15} \begin{aligned} \mathrm{ad}: {_{\mathrm{inv}}}\Gamma &\longrightarrow {_{\mathrm{inv}}}\Gamma\otimes G\\ \vartheta &\longmapsto \vartheta\otimes \mathbbm{1} \end{aligned}$$ because of $$\mathrm{ad}(\pi(g))=((\pi\otimes \mathrm{id})\circ \mathrm{Ad})(g)=\pi(g)\otimes \mathbbm{1}$$ for all $g$ $\in$ $G$, where $\mathrm{Ad}:G\longrightarrow G\otimes G$ is the adjoint (right co)action on $G$ [@micho1; @micho2; @steve]. In Durdevich's framework of quantum principal bundles we have to take the universal differential envelope $\ast$--calculus [@micho1; @micho2; @steve] $$\label{2.16} (\Gamma^\wedge,d,\ast).$$ This graded differential $\ast$--algebra will play the role of *quantum differential forms* of $U(1)$ in the first model that we will present. This space has the particularity that $$\Gamma^{\wedge\,2}\cong G\otimes{_{\mathrm{inv}}}\Gamma^{\wedge\,2},$$ where $$\label{2.17} {_{\mathrm{inv}}}\Gamma^{\wedge\,2}=\mathrm{span}_\mathbb{C}\{ \vartheta\,\vartheta\};$$ so there are quantum differential forms of grade $2$. Moreover, there is no top grade, which is a big difference from the algebra of *classical* differential forms of $U(1)$. The main reason to use the universal differential envelope $\ast$--calculus instead of, for example, the universal differential calculus is the fact that $(\Gamma^\wedge,d,\ast)$ allows one to extend the structure of a $\ast$--Hopf algebra to any grade and it is maximal with this property, reflecting the *classical* fact that the tangent bundle of a Lie group is also a Lie group. The structure of graded differential $\ast$--Hopf algebra will be denoted by the same symbols. 
In this specific case we have ([@micho1; @micho2; @steve]) $$\label{2.18} \vartheta^\ast=\vartheta,\qquad d\vartheta=i\vartheta\,\vartheta.$$ ### A $2D$--differential calculus Now let $q$ $\in$ $\mathbb{R}-\{0,1\}$, $\mathcal{L}=\mathrm{span}_\mathbb{C}\{z,z^{-1}\}$ and its linear dual space $\hat{\mathcal{L}}:=\mathrm{span}_\mathbb{C}\{\theta_-,\theta_+\}$, where $\theta_-(z)=1$, $\theta_-(z^{-1})=0$, $\theta_+(z)=0$, $\theta_+(z^{-1})=1$. The map $$\varpi: \mathrm{Ker}(\epsilon)\longrightarrow \hat{\mathcal{L}},\qquad g\longmapsto \varpi(g),$$ where $\varpi(g)(x)={\mathcal{Q}}(g\otimes x)$ with ${\mathcal{Q}}$ such that ${\mathcal{Q}}(z^m\otimes z^n)=q^{mn}$ for all $m$, $n$ $\in$ $\mathbb{Z}$; defines $$\label{2.19} (\Gamma,d),$$ a $\ast$--First Order Differential Calculus ($\ast$--FODC) by means of its space of invariant elements, or equivalently, its (dual) quantum Lie algebra ([@libro; @woro; @steve]) $$\label{2.20} {_{\mathrm{inv}}}\Gamma:=\frac{\mathrm{Ker}(\epsilon)}{\mathrm{Ker}(\varpi)}.$$ It is worth remarking that $\mathrm{dim}( {_{\mathrm{inv}}}\Gamma)=2$, a big difference with the classical case in which $\displaystyle \mathrm{dim} \left( \frac{\mathrm{Ker}(\epsilon)}{\mathrm{Ker}^2(\epsilon)}\right)=\mathrm{dim}(\mathfrak{u(1)})=1$ and clearly a linear basis of the quantum Lie algebra is given by $$\label{2.21} \beta:=\{ \vartheta^e:=-i\,\theta_-,\,\vartheta^m:=-i\,\theta_+\}.$$ By considering the quantum germs map [@steve; @woro] $$\label{2.22} \pi:\mathrm{Ker}(\epsilon)\longrightarrow {_{\mathrm{inv}}}\Gamma$$ it is easy to look for relations, such as $$\label{2.23} \begin{aligned} \pi(z^n)&=i(q^{n}-1)\vartheta^e+i(q^{-n}-1)\vartheta^m,\\ \pi(z^{-n})&=i(q^{-n}-1)\vartheta^e+i(q^{n}-1)\vartheta^m. 
\end{aligned}$$ Also we can calculate the adjoint (co)action of $G$ on ${_{\mathrm{inv}}}\Gamma$ $$\label{2.24} \begin{aligned} \mathrm{ad}: {_{\mathrm{inv}}}\Gamma &\longrightarrow {_{\mathrm{inv}}}\Gamma\otimes G\\ \vartheta &\longmapsto \vartheta\otimes \mathbbm{1} \end{aligned}$$ given by $$\mathrm{ad}(\pi(g))=((\pi\otimes \mathrm{id})\circ \mathrm{Ad})(g)=\pi(g)\otimes \mathbbm{1}$$ for all $g$ $\in$ $G$. Now we have to take the universal differential envelope $\ast$--calculus [@micho1; @micho2; @steve] $$\label{2.25} (\Gamma^\wedge,d,\ast).$$ This graded differential $\ast$--algebra will play the role of *quantum differential forms* of $U(1)$ in the second model that we will present. To conclude this subsection, we are going to present some relations in this algebra: $$\label{2.26} \vartheta^{e\ast}=\vartheta^e,\quad \vartheta^{m\ast}=\vartheta^m$$ $$\label{2.27} d\pi(z)=d\pi(z^{-1})=-\frac{(q-1)^2}{q}(\vartheta^e\vartheta^m+\vartheta^m\vartheta^e),$$ $$\label{2.28} d\vartheta^e=d\vartheta^m=i(\vartheta^e\vartheta^m+\vartheta^m\vartheta^e).$$ ## The Quantum Bundle Finally we have all the ingredients to build the trivial quantum bundle that we will use in the rest of this paper. Let $$\label{2.29} \zeta=(GM:=M\otimes G, M, {_{GM}}\Phi:=\mathrm{id}\otimes \phi)$$ be the trivial quantum principal $U(1)$--bundle over $M$ [@micho2; @steve]. Now we can also take the trivial differential calculus on the bundle ([@micho2; @steve]): $$\label{2.30} \Omega^\bullet(GM)=\Omega^\bullet(M)\otimes \Gamma^\wedge, \;\;{_{\Omega}}\Psi:=\mathrm{id}\otimes \phi:\Omega^\bullet(GM)\longrightarrow \Omega^\bullet(M)\otimes \Gamma^\wedge$$ (here $\phi$ is the extension of the coproduct in $\Gamma^\wedge$). The graded differential $\ast$--algebra $\Omega^\bullet(GM)$ will play the role of *quantum differential forms* on the total quantum space $GM$. 
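Returning to the relations [\[2.23\]](#2.23){reference-type="ref" reference="2.23"}--[\[2.28\]](#2.28){reference-type="ref" reference="2.28"}: the coefficient in Equation [\[2.27\]](#2.27){reference-type="ref" reference="2.27"} is forced by the other two. Applying $d$ termwise to $\pi(z)=i(q-1)\vartheta^e+i(q^{-1}-1)\vartheta^m$ (Equation [\[2.23\]](#2.23){reference-type="ref" reference="2.23"} with $n=1$) and using $d\vartheta^e=d\vartheta^m=i(\vartheta^e\vartheta^m+\vartheta^m\vartheta^e)$ yields the scalar factor $-\frac{(q-1)^2}{q}$. The following sketch (an illustration only, assuming the Python library `sympy`) checks this scalar identity.

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# d(pi(z)) computed termwise from Eq. (2.23) with n = 1, using
# d(theta^e) = d(theta^m) = i*w with w := theta^e theta^m + theta^m theta^e
# from Eq. (2.28); "coeff" is the resulting scalar in front of w.
coeff = sp.I * (q - 1) * sp.I + sp.I * (q**-1 - 1) * sp.I

# Eq. (2.27) states d(pi(z)) = -((q-1)^2/q) * w, so the two agree.
assert sp.simplify(coeff + (q - 1)**2 / q) == 0
```

This is simply the algebraic identity $(q-1)+(q^{-1}-1)=\frac{(q-1)^2}{q}$, valid for every $q\in\mathbb{R}-\{0,1\}$.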
It is worth mentioning that $$\label{2.31} \mathrm{Hor}^\bullet GM:=\Omega^\bullet(M)\otimes G$$ and of course the graded differential $\ast$--subalgebra of forms on the base quantum space coincides with $\Omega^\bullet(M)$. In accordance with the general theory, the restriction of ${_{\Omega}}\Psi$ to $\mathrm{Hor}^\bullet GM$ is a (co)representation of $G$ on $\mathrm{Hor}^\bullet GM$ [@micho2; @steve]. This map will be denoted by ${_{\mathrm{H}}}\Phi$. In the light of [@micho2; @steve], the set of quantum principal connections (qpcs) $$\label{2.32} \{\omega: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^1(GM)\}$$ on trivial bundles is in bijection with the space of *non--commutative gauge potentials* $$\label{2.33} \{A^\omega: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^1(M)\mid A^\omega \mbox{ is linear} \}$$ by means of $$\omega(\theta)=A^\omega(\theta)\otimes \mathbbm{1}+\mathbbm{1}\otimes \theta$$ for all $\theta$ $\in$ ${_{\mathrm{inv}}}\Gamma$ (for both cases). For $A^\omega=0$ the corresponding qpc is called the trivial qpc and it is regular and multiplicative [@micho2; @steve]. **Proposition 1**. *The trivial qpc is the only regular connection.* *Proof.* Let us assume that $\omega$ is a regular qpc. Then according to [@micho2; @steve] its corresponding non--commutative gauge potential has to satisfy $$A^\omega(\theta \circ g)=\epsilon(g)A^\omega(\theta),$$ for all $\theta$ $\in$ ${_{\mathrm{inv}}}\Gamma$ and all $g$ $\in$ $G$. So it is enough to evaluate the last equality at $g=z$ to find that it is satisfied if and only if $A^\omega=0$. ◻ Despite the simplicity of the previous proof, the last proposition is quite important for our purpose because every (*classical*) principal connection is regular [@micho2]; so the last result tells us that, apart from the trivial qpc, there is no *classical* counterpart of any qpc, i.e., apart from the trivial qpc, all results in the rest of this paper will be completely *quantum*: they will have no *classical* analogues. 
In order to talk about the curvature of a qpc and develop the theory we need to consider the two different cases given by our two different calculi of $G$. ### The Non--Standard $1D$--Differential Calculus First of all, it is necessary to choose an embedded differential [@micho2; @steve]. In our case, the only embedded differential $$\label{2.34} \delta: {_{\mathrm{inv}}}\Gamma\longrightarrow {_{\mathrm{inv}}}\Gamma\otimes {_{\mathrm{inv}}}\Gamma$$ is given by $$\delta(\vartheta)=i\vartheta\otimes \vartheta.$$ With this, the space of curvatures of qpcs $$\label{2.35} \{R^\omega: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^2(GM)\}$$ is in bijection with the space of *non--commutative field strengths* $$\label{2.36} \{F^\omega:=dA^\omega-\langle A^\omega,A^\omega \rangle: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^2(M)\},$$ where $$\langle A^\omega,A^\omega \rangle=m\circ (A^\omega\otimes A^\omega)\circ \delta$$ and $m$ is the product map. The reader can compare this relation between qpcs and their curvatures in terms of potentials ($A^\omega \longleftrightarrow F^\omega=dA^\omega-\langle A^\omega,A^\omega \rangle$) with Equation [\[1.1\]](#1.1){reference-type="ref" reference="1.1"} ($A^\omega \longleftrightarrow F^\omega=dA^\omega$). The embedded differential map in the definition of the curvature of a qpc allows one to interpret the curvature as a gauge field in the general case [@micho2; @steve]. 
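For this calculus the quadratic term can be computed explicitly: since $\delta(\vartheta)=i\,\vartheta\otimes\vartheta$,

```latex
\langle A^\omega,A^\omega \rangle(\vartheta)
  = \big(m\circ (A^\omega\otimes A^\omega)\big)(i\,\vartheta\otimes\vartheta)
  = i\,A^\omega(\vartheta)\wedge A^\omega(\vartheta),
```

so for this calculus the field strength takes the form $F^\omega(\vartheta)=dA^\omega(\vartheta)-i\,A^\omega(\vartheta)\wedge A^\omega(\vartheta)$.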
Following the classical case, let us consider the non--commutative gauge potential $A^\omega$ given by $$\label{2.37} A^\omega(\vartheta)=\phi\, dx^0-A_1\,dx^1-A_2\,dx^2-A_3\,dx^3.$$ In this way, the non--commutative field strength is defined by $$\label{2.38} F^\omega(\vartheta)=dA^\omega(\vartheta)-i A^\omega(\vartheta)\wedge A^\omega(\vartheta)$$ and in terms of coordinates we have $$\label{2.39} F^\omega(\vartheta)=\sum_{0\leq \mu < \nu \leq 3}F_{\mu\nu}\, dx^\mu\wedge dx^\nu\;\; \mathrm{ where }\;\; (F_{\mu\nu})=\begin{pmatrix} 0 & D_1 & D_2 & D_3 \\ -D_1 & 0 & -H_3 & H_2 \\ -D_2 & H_3 & 0 & -H_1\\ -D_3 & -H_2 & H_1 & 0 \end{pmatrix},$$ where $$\label{2.40} {\bf D}=(D_1,D_2,D_3):={\bf E}+i [\phi,{\bf A}],\qquad {\bf E}:=-{\partial {\bf A}\over \partial x^0}-\nabla \phi$$ and $$\label{2.41} {\bf H}=(H_1,H_2,H_3):={\bf B}+ i {\bf A}\times {\bf A},\qquad {\bf B}:=\nabla \times {\bf A},$$ with ${\bf A}=(A_1,A_2,A_3)$. The definition of the commutators can be deduced from the context, and in the cross product the factors are always multiplied from top to bottom. The fields ${\bf D}$ and ${\bf H}$ will be considered as the non--commutative electric field and the non--commutative magnetic field, respectively. Also, $F_{\mu\nu}$ will be called the non--commutative electromagnetic tensor field. 
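As a sanity check on the matrix in Equation [\[2.39\]](#2.39){reference-type="ref" reference="2.39"}, a short symbolic computation (a sketch with placeholder symbols for the components of ${\bf D}$ and ${\bf H}$; the helper `tensor` is ours) confirms that $(F_{\mu\nu})$ is antisymmetric and traceless, as an electromagnetic tensor must be:

```python
# Antisymmetry check for the matrix (F_{mu nu}) of Equation (2.39),
# built from placeholder symbols for the non-commutative fields D and H.
import sympy as sp

D1, D2, D3 = sp.symbols('D1 D2 D3')
H1, H2, H3 = sp.symbols('H1 H2 H3')

def tensor(d1, d2, d3, h1, h2, h3):
    # the matrix of Equation (2.39): D fills the 0i slots, H the ij slots
    return sp.Matrix([[0,   d1,  d2,  d3],
                      [-d1,  0, -h3,  h2],
                      [-d2, h3,   0, -h1],
                      [-d3, -h2, h1,   0]])

F = tensor(D1, D2, D3, H1, H2, H3)
assert F == -F.T        # F_{mu nu} = -F_{nu mu}
assert F.trace() == 0   # no diagonal components
```

The same layout reappears (with $D\to D^e$, $H\to H^e$ and $D\to D^m$, $H\to H^m$, up to the electric/magnetic slot exchange) in the $2D$ case below.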
### The $2D$--Differential Calculus For this other case the only embedded differential $$\label{2.42} \delta: {_{\mathrm{inv}}}\Gamma\longrightarrow {_{\mathrm{inv}}}\Gamma\otimes {_{\mathrm{inv}}}\Gamma$$ is given by $$\delta(\vartheta^e)=\delta(\vartheta^m)= i(\vartheta^e\otimes \vartheta^m+\vartheta^m\otimes\vartheta^e).$$ With this, the space of curvatures of qpcs $$\label{2.43} \{R^\omega: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^2(GM)\}$$ is in bijection with the space of *non--commutative field strengths* $$\label{2.44} \{F^\omega:=dA^\omega-\langle A^\omega,A^\omega \rangle: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^2(M)\},$$ where $$\langle A^\omega,A^\omega \rangle=m\circ (A^\omega\otimes A^\omega)\circ \delta$$ and $m$ is the product map. The reader can compare this relation between qpcs and their curvatures in terms of potentials ($A^\omega \longleftrightarrow F^\omega=dA^\omega-\langle A^\omega,A^\omega \rangle$) with Equation [\[1.1\]](#1.1){reference-type="ref" reference="1.1"} ($A^\omega \longleftrightarrow F^\omega=dA^\omega$). 
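As in the $1D$ case, the quadratic term can be written out: from $\delta(\vartheta^e)=\delta(\vartheta^m)=i(\vartheta^e\otimes\vartheta^m+\vartheta^m\otimes\vartheta^e)$ one gets

```latex
\langle A^\omega,A^\omega\rangle(\vartheta^e)
  =\langle A^\omega,A^\omega\rangle(\vartheta^m)
  = i\big(A^\omega(\vartheta^e)\wedge A^\omega(\vartheta^m)
        + A^\omega(\vartheta^m)\wedge A^\omega(\vartheta^e)\big),
```

which is exactly the quadratic term appearing in the coordinate expressions of the next paragraph.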
In this way, let us calculate the non--commutative field strength $F^\omega$ of the non--commutative gauge potential $A^\omega$ given by $$\label{2.45} A^\omega(\vartheta^e)= \phi^e\, dx^0-A^e_1\, dx^1-A^e_2\, dx^2-A^e_3\, dx^3,$$ $$\label{2.46} A^\omega(\vartheta^m)= \phi^m\, dx^0-A^m_1\, dx^1-A^m_2\, dx^2-A^m_3\, dx^3.$$ Thus $$\label{2.47} F^\omega(\vartheta^e)=dA^\omega(\vartheta^e)-i A^\omega(\vartheta^e)\wedge A^\omega(\vartheta^m)-i A^\omega(\vartheta^m)\wedge A^\omega(\vartheta^e),$$ $$\label{2.48} F^\omega(\vartheta^m)=dA^\omega(\vartheta^m)-i A^\omega(\vartheta^e)\wedge A^\omega(\vartheta^m)-i A^\omega(\vartheta^m)\wedge A^\omega(\vartheta^e).$$ In terms of coordinates we get $$\label{2.49} F^\omega(\vartheta^e)=\sum_{0\leq \mu < \nu \leq 3}F^e_{\mu\nu}\, dx^\mu\wedge dx^\nu\;\; \mathrm{ where }\;\; (F^e_{\mu\nu})=\begin{pmatrix} 0 & D^e_1 & D^e_2 & D^e_3 \\ -D^e_1 & 0 & -H^e_3 & H^e_2 \\ -D^e_2 & H^e_3 & 0 & -H^e_1\\ -D^e_3 & -H^e_2 & H^e_1 & 0 \end{pmatrix},$$ $$\label{2.50} {\bf D}^e=(D^e_1,D^e_2,D^e_3):={\bf E}^e+i[\phi^m,{\bf A}^e]+i[\phi^e,{\bf A}^m],\;\quad {\bf E}^e:= -\frac{\partial{\bf A}^e}{\partial x^0}-\nabla \phi^e$$ and $$\label{2.51} {\bf H}^e=(H^e_1,H^e_2,H^e_3):={\bf B}^e+i {\bf A}^e\times {\bf A}^m + i{\bf A}^m \times {\bf A}^e, \;\quad {\bf B}^e:= \nabla \times {\bf A}^e,$$ where ${\bf A}^e=(A^e_1,A^e_2,A^e_3)$ and ${\bf A}^m=(A^m_1,A^m_2,A^m_3)$; while $$\label{2.52} F^\omega(\vartheta^m)=\sum_{0\leq \mu < \nu \leq 3}F^m_{\mu\nu}\, dx^\mu\wedge dx^\nu\;\; \mathrm{ where }\;\; (F^m_{\mu\nu})=\begin{pmatrix} 0 & H^m_1 & H^m_2 & H^m_3 \\ -H^m_1 & 0 & -D^m_3 & D^m_2 \\ -H^m_2 & D^m_3 & 0 & -D^m_1\\ -H^m_3 & -D^m_2 & D^m_1 & 0 \end{pmatrix},$$ $$\label{2.53} {\bf H}^m=(H^m_1,H^m_2,H^m_3):={\bf B}^m+i[\phi^m,{\bf A}^e]+i[\phi^e,{\bf A}^m],\;\quad {\bf B}^m:= -\frac{\partial{\bf A}^m}{\partial x^0}-\nabla \phi^m$$ and $$\label{2.54} {\bf D}^m=(D^m_1,D^m_2,D^m_3):={\bf E}^m+i{\bf A}^e\times {\bf A}^m +i {\bf A}^m \times {\bf A}^e, \;\quad {\bf 
E}^m:= \nabla \times {\bf A}^m.$$ The notation chosen is not a coincidence: we will consider that $A^\omega(\vartheta^e)$ is *the non--commutative electric potential $1$--form*; $(F^e_{\mu\nu})$, ${\bf D}^e$ and ${\bf H}^e$ are *the non--commutative electromagnetic tensor field, the non--commutative electric field and the non--commutative magnetic field* generated by $A^\omega(\vartheta^e)$, respectively; and ${\bf E}^e$ and ${\bf B}^e$ are their corresponding *classical* parts. In the same way, we will consider that $A^\omega(\vartheta^m)$ is *the non--commutative magnetic potential $1$--form*; $(F^m_{\mu\nu})$, ${\bf D}^m$ and ${\bf H}^m$ are *the non--commutative magnetoelectric tensor field, the non--commutative electric field and the non--commutative magnetic field* generated by $A^\omega(\vartheta^m)$, respectively; and ${\bf E}^m$ and ${\bf B}^m$ are their corresponding *classical* parts. # Non--Commutative Maxwell Equations As we explained at the beginning of this paper, the Maxwell equations in the vacuum come from the Bianchi identity and from critical points of the Yang--Mills functional. In this section we recreate that process in order to find their *quantum* counterparts on our bundle. ## Non--Commutative Geometrical Equations In accordance with [@micho2; @sald2], every qpc satisfies the *non--commutative Bianchi identity*, which is $$\label{3.1} (D^\omega-S^\omega)R^\omega=\langle \omega, \langle \omega,\omega\rangle\rangle-\langle \langle \omega,\omega\rangle,\omega\rangle,$$ where $D^\omega$ is the covariant derivative and the operator $S^\omega$ measures the *lack of regularity* of the qpc, in the sense that $S^\omega=0$ when $\omega$ is regular. Furthermore, when $\omega$ is multiplicative [@micho2; @steve], the right--hand side of the last equation is equal to $0$. 
In summary, when $\omega$ is regular and multiplicative, for example for *classical* principal connections, Equation [\[3.1\]](#3.1){reference-type="ref" reference="3.1"} turns into the well--known *classical* Bianchi identity $D^\omega R^\omega=0$. ### The Non--Standard $1D$--Differential Calculus For our quantum bundle, Equation [\[3.1\]](#3.1){reference-type="ref" reference="3.1"} becomes $$\label{3.2} (d-d^{S^\omega})F^\omega=0,$$ where $$\label{3.3} d^{S^\omega}\tau: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^{k+1}(M)$$ is given by $$d^{S^\omega}\tau(\vartheta)=i[A^\omega(\vartheta),\tau(\vartheta)]^\partial:= i(A^\omega(\vartheta)\wedge \tau(\vartheta)-(-1)^k \tau(\vartheta)\wedge A^\omega(\vartheta))\;\;\;\in\;\;\; \Omega^{k+1}(M)$$ for all linear maps $\tau: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^k(M).$ In this way, Equation [\[3.2\]](#3.2){reference-type="ref" reference="3.2"} is given by $$\label{3.4} dF^\omega(\vartheta)=i[A^\omega(\vartheta),F^\omega(\vartheta)]^\partial =i [A^\omega(\vartheta),dA^\omega(\vartheta)]^\partial,$$ which is the *non--commutative geometrical equation* for our trivial quantum bundle with the non--standard $1D$ differential calculus of $U(1)$. The reader can compare the last equation with Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"}. It is worth mentioning that, like in the *classical* case, Equation [\[3.4\]](#3.4){reference-type="ref" reference="3.4"} is satisfied by all qpcs! and, in general, $dF^\omega(\vartheta)\not=0$. 
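The second equality in Equation [\[3.4\]](#3.4){reference-type="ref" reference="3.4"} holds because the cubic term drops out of the graded bracket; a sketch of the step, writing $A:=A^\omega(\vartheta)$ and using $F^\omega(\vartheta)=dA-i\,A\wedge A$:

```latex
[A, A\wedge A]^\partial
  = A\wedge A\wedge A-(-1)^{2}\,A\wedge A\wedge A
  = 0,
\qquad\text{hence}\qquad
i[A, F^\omega(\vartheta)]^\partial = i[A, dA]^\partial .
```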
In the light of Equation [\[2.37\]](#2.37){reference-type="ref" reference="2.37"}, Equation [\[3.4\]](#3.4){reference-type="ref" reference="3.4"} becomes the *non--commutative Gauss Law for magnetism* and the *non--commutative Faraday equation* $$\label{3.5} \nabla\cdot {\bf H}=\rho^m,$$ $$\label{3.6} \nabla\times {\bf D}+\frac{\partial {\bf H}}{\partial x^0}=-{\bf j}^m.$$ For these equations we have defined the magnetic charge density as $$\label{3.7} \rho^m:=i[{\bf B},{\bf A}]$$ and the magnetic current density as $$\label{3.8} -{\bf j}^m:=i[\phi,{\bf B}]-i({\bf E}\times {\bf A}+{\bf A}\times {\bf E}).$$ Again, the definition of the commutators can be deduced from the context. The reader can compare Equations [\[3.5\]](#3.5){reference-type="ref" reference="3.5"}, [\[3.6\]](#3.6){reference-type="ref" reference="3.6"} with their *classical counterparts* in Equations [\[1.7\]](#1.7){reference-type="ref" reference="1.7"}, [\[1.8\]](#1.8){reference-type="ref" reference="1.8"}. It is worth remarking that Equation [\[3.5\]](#3.5){reference-type="ref" reference="3.5"} asserts the existence of *magnetic charges* for the non--commutative gauge potential $A^\omega(\vartheta)$; while Equation [\[3.6\]](#3.6){reference-type="ref" reference="3.6"} asserts the existence of *magnetic currents*, but both of them in the vacuum! As in the *classical case*, this is only a consequence of the geometry of our spaces. Notice that the magnetic charge and current depend on the interaction of $A^\omega(\vartheta)$ with the *classical* part of the non--commutative field strength $F^\omega(\vartheta)$. 
### The $2D$--Differential Calculus Now for this quantum bundle, Equation [\[3.1\]](#3.1){reference-type="ref" reference="3.1"} becomes $$\label{3.9} (d-d^{S^\omega})F^\omega=\langle A^\omega, \langle A^\omega,A^\omega\rangle\rangle-\langle \langle A^\omega,A^\omega\rangle,A^\omega\rangle,$$ where $$\label{3.10} d^{S^\omega}\tau: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^{k+1}(M)$$ is given by $$d^{S^\omega}\tau(\vartheta^e)=d^{S^\omega}\tau(\vartheta^m)=[A^\omega(\vartheta^m),\tau(\vartheta^e)]^\partial+[A^\omega(\vartheta^e),\tau(\vartheta^m)]^\partial\;\;\;\in\;\;\; \Omega^{k+1}(M)$$ for all linear maps $\tau: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^k(M)$, with $[-,-]^\partial$ the graded--commutator. In this way, Equation [\[3.9\]](#3.9){reference-type="ref" reference="3.9"} for $\vartheta^e$, $\vartheta^m$ is $$\begin{aligned} \label{3.11} dF^\omega(\vartheta^e)=dF^\omega(\vartheta^m) &=-i\,d(A^\omega(\vartheta^e)\wedge A^\omega(\vartheta^m))-i\,d(A^\omega(\vartheta^m)\wedge A^\omega(\vartheta^e))\\ &=i[A^\omega(\vartheta^e),dA^\omega(\vartheta^m)]^\partial+i[A^\omega(\vartheta^m),dA^\omega(\vartheta^e)]^\partial, \end{aligned}$$ which are the *non--commutative geometrical equations* for our trivial quantum bundle with the $2D$ differential calculus of $U(1)$. The reader can compare the last equation with Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"}. It is worth mentioning that, like in the *classical* case, Equation [\[3.11\]](#3.11){reference-type="ref" reference="3.11"} is satisfied by all qpcs! 
In the light of Equation [\[2.45\]](#2.45){reference-type="ref" reference="2.45"}, Equation [\[3.11\]](#3.11){reference-type="ref" reference="3.11"} becomes, for $A^\omega(\vartheta^e)$, $$\label{3.12} \nabla\cdot {\bf H}^e=\rho,$$ $$\label{3.13} \nabla\times {\bf D}^e+\frac{\partial {\bf H}^e}{\partial x^0}=-{\bf j}.$$ In the same way, by Equation [\[2.46\]](#2.46){reference-type="ref" reference="2.46"}, Equation [\[3.11\]](#3.11){reference-type="ref" reference="3.11"} for $A^\omega(\vartheta^m)$ becomes $$\label{3.14} \nabla\cdot {\bf D}^m=\rho,$$ $$\label{3.15} \nabla\times {\bf H}^m+\frac{\partial {\bf D}^m}{\partial x^0}=-{\bf j}.$$ For these equations we have defined the magnetic and electric charge density generated by $A^\omega(\vartheta^e)$ and $A^\omega(\vartheta^m)$ respectively, as $$\label{3.16} \rho=i[{\bf B}^e,{\bf A}^m]+i[{\bf E}^m,{\bf A}^e];$$ and the magnetic current density and the electric current density generated by $A^\omega(\vartheta^e)$ and $A^\omega(\vartheta^m)$ respectively, as $$\label{3.17} -{\bf j}=i[\phi^m,{\bf B}^e]+i[\phi^e,{\bf E}^m]-i({\bf E}^e\times{\bf A}^m+{\bf A}^m\times{\bf E}^e+{\bf B}^m\times{\bf A}^e+{\bf A}^e\times{\bf B}^m).$$ The reader can compare the set of Equations [\[3.12\]](#3.12){reference-type="ref" reference="3.12"}--[\[3.15\]](#3.15){reference-type="ref" reference="3.15"} with their *classical counterparts*: Equations [\[1.7\]](#1.7){reference-type="ref" reference="1.7"}, [\[1.8\]](#1.8){reference-type="ref" reference="1.8"}. At this point there are a couple of comments to make about the last equations. First of all, it is important to notice that Equation [\[3.12\]](#3.12){reference-type="ref" reference="3.12"} asserts the existence of *magnetic charges!* for the non--commutative electric potential $1$--form; while Equation [\[3.14\]](#3.14){reference-type="ref" reference="3.14"} asserts the existence of *electric charges* for the non--commutative magnetic potential $1$--form. 
As in the *classical case*, all of this is a consequence of the geometry of our spaces. Second, $\{{\bf H}^e, {\bf D}^m \}$ and $\{\{{\bf D}^e, {\bf H}^e \}, \{{\bf D}^m, {\bf H}^m \}\}$ satisfy the same equalities! and they depend in a non--trivial way on both non--commutative potentials and on the *classical* fields: there are electric/magnetic charges and electric/magnetic current densities due to self--interaction in the vacuum! ## Non--Commutative Dynamical Equations In accordance with [@sald2], a qpc $\omega$ is a Yang--Mills qpc, i.e., a critical point of the non--commutative Yang--Mills functional (a functional that measures the square norm of the curvature of a qpc), if and only if $$\label{3.18} \langle \Upsilon_\mathrm{ad}\circ \lambda\,|\,(d^{\nabla^{\omega}_{\mathrm{ad}}\star_\mathrm{L}}-d^{S^{\omega}\star_\mathrm{L}}) R^{\omega}\rangle_\mathrm{L}+\langle \widetilde{\Upsilon}_\mathrm{ad}\circ \widehat{\lambda}\,|\,(d^{\widehat{\nabla}^{\omega}_{\mathrm{ad}}\star_\mathrm{R}}-d^{\widehat{S}^{\omega}\star_\mathrm{R}}) \widehat{R}^{\omega}\rangle_\mathrm{R}=0$$ for all $\lambda$ $\in$ $\overrightarrow{\mathfrak{qpc}(\zeta)}$, where $d^{\nabla^{\omega}_{\mathrm{ad}}\star_\mathrm{L}}$ is the formal adjoint operator of the exterior derivative of the induced quantum linear connection on the left associated quantum vector bundle of $\mathrm{ad}$ (just like in the *classical* case); while $d^{S^{\omega}\star_\mathrm{L}}$ is the formal adjoint operator of $\Upsilon_\mathrm{ad}\circ S^{\omega} \circ \Upsilon^{-1}_\mathrm{ad}$ [@sald2]. The operators $d^{\nabla^{\omega}_{\mathrm{ad}}\star_\mathrm{R}}$ and $d^{S^{\omega}\star_\mathrm{R}}$ are defined in an analogous way for the right structures. 
Here we are considering that $(d^{\nabla^{\omega}_{\mathrm{ad}}\star_\mathrm{L}}-d^{S^{\omega}\star_\mathrm{L}}) R^{\omega}:= (d^{\nabla^{\omega}_{\mathrm{ad}}\star_\mathrm{L}}-d^{S^{\omega}\star_\mathrm{L}}) \circ \Upsilon_{\mathrm{ad}}\circ R^{\omega}$, $(d^{\widehat{\nabla}^{\omega}_{\mathrm{ad}}\star_\mathrm{R}}-d^{\widehat{S}^{\omega}\star_\mathrm{R}}) \widehat{R}^{\omega}:=(d^{\widehat{\nabla}^{\omega}_{\mathrm{ad}}\star_\mathrm{R}}-d^{\widehat{S}^{\omega}\star_\mathrm{R}}) \circ \widetilde{\Upsilon}_{\mathrm{ad}}\circ \widehat{R}^{\omega}$ and $\widehat{\lambda}=\ast \circ \lambda \circ \ast$, $\widehat{R}^{\omega}=\ast \circ R^{\omega}\circ \ast$. ### The Non--Standard $1D$--Differential Calculus For our quantum bundle, Equation [\[3.18\]](#3.18){reference-type="ref" reference="3.18"} reduces to $$\label{3.19} (d^{\star_\mathrm{L}}-d^{S^\omega\star_\mathrm{L}})F^\omega=0,$$ where $d^{\star_\mathrm{L}}$ is the left quantum codifferential (see Equation [\[2.11\]](#2.11){reference-type="ref" reference="2.11"}) and $d^{S^\omega\star_\mathrm{L}}$ is the formal adjoint operator of $d^{S^\omega}$ with respect to $\langle-|-\rangle_\mathrm{L}$. Concretely, $$\label{3.20} d^{S^\omega\star_\mathrm{L}}\tau: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^{k}(M)$$ is given by $$d^{S^\omega\star_\mathrm{L}}\tau(\vartheta)=(-1)^{k}\,i\,\star^{-1}_\mathrm{L}\left([A^\omega(\vartheta),\star_\mathrm{L}\tau(\vartheta)]^\partial\right),$$ for all linear maps $\tau:{_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^{k+1}(M)$. The reader can compare Equation [\[3.19\]](#3.19){reference-type="ref" reference="3.19"} with Equation [\[1.3\]](#1.3){reference-type="ref" reference="1.3"}. It is worth mentioning that, like in the *classical* case, not every qpc satisfies Equation [\[3.19\]](#3.19){reference-type="ref" reference="3.19"}. 
In the light of Equation [\[2.37\]](#2.37){reference-type="ref" reference="2.37"}, Equation [\[3.19\]](#3.19){reference-type="ref" reference="3.19"} becomes the *non--commutative Gauss Law* and the *non--commutative Ampère equation* $$\label{3.21} \nabla \cdot {\bf D}=\rho^e,$$ $$\label{3.22} \nabla\times {\bf H}-\frac{\partial {\bf D}}{\partial x^0}= {\bf j}^e.$$ Here we have defined the electric charge density and the electric current density by $$\label{3.23} \rho^e=i[{\bf D},{\bf A}^{\ast}],$$ and $$\label{3.24} {\bf j}^e=i [{\bf D},\phi^{\ast}]-i({\bf H}\times {\bf A}^{\ast}+{\bf A}^{\ast}\times {\bf H}),$$ where ${\bf A}^\ast=(A^\ast_1,A^\ast_2,A^\ast_3)$. In summary, Equations [\[3.5\]](#3.5){reference-type="ref" reference="3.5"}--[\[3.6\]](#3.6){reference-type="ref" reference="3.6"} and [\[3.21\]](#3.21){reference-type="ref" reference="3.21"}--[\[3.22\]](#3.22){reference-type="ref" reference="3.22"} are the *non--commutative Maxwell equations in the vacuum* associated to the non--commutative gauge potential $A^\omega$ in the quantum space--time $M$. It is worth remarking again that even in the vacuum there are electric/magnetic charges as well as magnetic/electric current densities, which are generated by non--trivial self--interactions on $M$. 
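The continuity equation hidden in the pair [\[3.21\]](#3.21){reference-type="ref" reference="3.21"}--[\[3.22\]](#3.22){reference-type="ref" reference="3.22"} can be made explicit exactly as in the classical derivation: since partial derivatives commute, $\nabla\cdot(\nabla\times{\bf H})=0$, so taking the divergence of Equation [\[3.22\]](#3.22){reference-type="ref" reference="3.22"} and substituting Equation [\[3.21\]](#3.21){reference-type="ref" reference="3.21"} gives

```latex
0=\nabla\cdot(\nabla\times{\bf H})
 =\nabla\cdot{\bf j}^e+\frac{\partial}{\partial x^0}\,\nabla\cdot{\bf D}
 =\nabla\cdot{\bf j}^e+\frac{\partial \rho^e}{\partial x^0},
```

that is, the electric charge is conserved; the magnetic case is analogous.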
Finally, it is possible to proceed exactly like in the *classical* case to find *conservation laws* for all cases $$\label{3.25} \nabla\cdot{\bf j}^e+\frac{\partial \rho^e}{\partial x^0}=0, \qquad \nabla\cdot{\bf j}^m+\frac{\partial \rho^m}{\partial x^0}=0.$$ ### The $2D$--Differential Calculus For our quantum bundle, Equation [\[3.18\]](#3.18){reference-type="ref" reference="3.18"} reduces to $$\label{3.26} (d^{\star_\mathrm{L}}-d^{S^\omega\star_\mathrm{L}})F^\omega=0,$$ where $d^{\star_\mathrm{L}}$ is the left quantum codifferential (see Equation [\[2.11\]](#2.11){reference-type="ref" reference="2.11"}) and $d^{S^\omega\star_\mathrm{L}}$ is the formal adjoint operator of $d^{S^\omega}$ with respect to $\langle-|-\rangle_\mathrm{L}$. Concretely, $$\label{3.27} d^{S^\omega\star_\mathrm{L}}\tau: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^{k}(M)$$ is given by $$d^{S^\omega\star_\mathrm{L}}\tau(\vartheta^e)=(-1)^{k}\,i\,\star^{-1}_\mathrm{L}\left([A^\omega(\vartheta^m),\star_\mathrm{L}\tau(\vartheta^m)]^\partial+[A^\omega(\vartheta^m),\star_\mathrm{L}\tau(\vartheta^e)]^\partial\right),$$ $$d^{S^\omega\star_\mathrm{L}}\tau(\vartheta^m)=(-1)^{k}\,i\,\star^{-1}_\mathrm{L}\left([A^\omega(\vartheta^e),\star_\mathrm{L}\tau(\vartheta^m)]^\partial+[A^\omega(\vartheta^e),\star_\mathrm{L}\tau(\vartheta^e)]^\partial\right),$$ for all linear maps $\tau:{_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^{k+1}(M)$. The reader can compare Equation [\[3.26\]](#3.26){reference-type="ref" reference="3.26"} with Equation [\[1.3\]](#1.3){reference-type="ref" reference="1.3"}. It is worth mentioning that, like in the *classical* case, not every qpc satisfies Equation [\[3.26\]](#3.26){reference-type="ref" reference="3.26"}. 
In the light of Equations [\[2.45\]](#2.45){reference-type="ref" reference="2.45"}--[\[2.46\]](#2.46){reference-type="ref" reference="2.46"}, Equation [\[3.26\]](#3.26){reference-type="ref" reference="3.26"} becomes the *non--commutative Gauss Law* and the *non--commutative Ampère equation* for $A^\omega(\vartheta^e)$ $$\label{3.28} \nabla \cdot {\bf D}^e=\rho^e,$$ $$\label{3.29} \nabla\times {\bf H}^e-\frac{\partial {\bf D}^e}{\partial x^0}= {\bf j}^e$$ and for $A^\omega(\vartheta^m)$ $$\label{3.30} \nabla \cdot {\bf H}^m=\rho^m,$$ $$\label{3.31} \nabla\times {\bf D}^m-\frac{\partial {\bf H}^m}{\partial x^0}={\bf j}^m,$$ where ${\bf A}^{e\ast}=(A^{e\ast}_1,A^{e\ast}_2,A^{e\ast}_3)$, ${\bf A}^{m\ast}=(A^{m\ast}_1,A^{m\ast}_2,A^{m\ast}_3)$. Here we have defined the electric and magnetic charge density generated by $A^\omega(\vartheta^e)$ and $A^\omega(\vartheta^m)$, respectively $$\label{3.32} \rho^e=i[{\bf D}^e+{\bf H}^m,{\bf A}^{m\ast}],\qquad \rho^m=i[{\bf D}^e+{\bf H}^m,{\bf A}^{e\ast}],$$ and the electric current density and the magnetic current density generated by $A^\omega(\vartheta^e)$ and $A^\omega(\vartheta^m)$, respectively $$\label{3.33} {\bf j}^e=i[{\bf D}^e+{\bf H}^m,\phi^{m\ast}]-i(({\bf H}^e+{\bf D}^m)\times {\bf A}^{m\ast}+{\bf A}^{m\ast}\times ({\bf H}^e+{\bf D}^m)),$$ $$\label{3.34} {\bf j}^m=i[{\bf D}^e+{\bf H}^m,\phi^{e\ast}]-i(({\bf H}^e+{\bf D}^m)\times {\bf A}^{e\ast}+{\bf A}^{e\ast}\times ({\bf H}^e+{\bf D}^m)).$$ The reader can compare Equations [\[3.28\]](#3.28){reference-type="ref" reference="3.28"}--[\[3.31\]](#3.31){reference-type="ref" reference="3.31"} with their *classical* counterparts: Equations [\[1.9\]](#1.9){reference-type="ref" reference="1.9"}, [\[1.10\]](#1.10){reference-type="ref" reference="1.10"}. It is worth remarking again that even in the vacuum there are electric/magnetic charges as well as magnetic/electric current densities, which are generated by non--trivial self--interactions on $M$. 
In summary, Equations [\[3.12\]](#3.12){reference-type="ref" reference="3.12"}--[\[3.15\]](#3.15){reference-type="ref" reference="3.15"} and [\[3.28\]](#3.28){reference-type="ref" reference="3.28"}--[\[3.31\]](#3.31){reference-type="ref" reference="3.31"} are the *non--commutative Maxwell equations in the vacuum* associated to the non--commutative gauge potential $A^\omega$ in the quantum space--time $M$: there are $4$ for the non--commutative electric potential $1$--form $A^\omega(\vartheta^e)$, and another $4$ for the non--commutative magnetic potential $1$--form $A^\omega(\vartheta^m)$. Since now we have $4$ new Maxwell equations and all of them are coupled, it is necessary to present a non--trivial solution; in particular, we are going to present a solution for which the non--commutative electric potential $1$--form produces a non--zero magnetic charge density (and hence, the non--commutative magnetic potential $1$--form produces a non--zero electric charge density). In fact, taking $\theta^{\mu\nu}=0$ for all $\mu$, $\nu$ except $\theta^{23}$, let us consider the non--commutative gauge potential $$\label{3.35} A^\omega: {_{\mathrm{inv}}}\Gamma\longrightarrow \Omega^1(M)$$ defined by $$A^\omega(\vartheta^e)=x^3\,dx^2$$ $$A^\omega(\vartheta^m)=x^1x^2\,dx^3.$$ In this way $${\bf E}^e=(0,0,0),\quad {\bf B}^e=(1,0,0),$$ $${\bf D}^e=(0,0,0),\quad {\bf H}^e=(1+\theta^{23} x^1,0,0),$$ and $${\bf B}^m=(0,0,0),\quad {\bf E}^m=(-x^1,x^2,0),$$ $${\bf H}^m=(0,0,0),\quad {\bf D}^m=(-(1-\theta^{23}) x^1,x^2,0).$$ Thus the non--commutative Maxwell equations are satisfied: $$\nabla \cdot {\bf H}^e=\theta^{23}, \qquad \nabla \cdot {\bf D}^e=0,$$ $$\nabla \times {\bf D}^e+\frac{\partial {\bf H}^e}{\partial x^0}=0,\qquad \nabla \times {\bf H}^e-\frac{\partial {\bf D}^e}{\partial x^0}=0;$$ and $$\nabla \cdot {\bf D}^m=\theta^{23}, \qquad \nabla \cdot {\bf H}^m=0,$$ $$\nabla \times {\bf H}^m+\frac{\partial {\bf D}^m}{\partial x^0}=0,\qquad \nabla \times {\bf D}^m-\frac{\partial {\bf H}^m}{\partial x^0}=0.$$ This solution is 
consistent with the zero slope limit of string theory: $\theta^{0j}=0$ for $j=1,2,3$ [@string]. **Remark 2**. *The general theory on which we are based ([@sald1], [@sald2]) works with qpcs in full generality. However, there is a special kind of qpcs called *real* qpcs, which satisfy $$\omega(\theta^\ast)=\omega(\theta)^\ast$$ for all $\theta$ $\in$ ${_\mathrm{inv}}\Gamma$. Such qpcs are the only ones considered in other papers, for example [@micho1; @micho2; @steve], and in the *classical* case, only real principal connections have physical meaning; so we could think that only real qpcs have physical meaning as well.* *For our quantum bundle this condition becomes $$A^\omega(\theta^\ast)=A^\omega(\theta)^\ast$$ for all $\theta$ $\in$ ${_\mathrm{inv}}\Gamma$ and, due to Equation [\[2.26\]](#2.26){reference-type="ref" reference="2.26"}, real qpcs fulfill $$\label{3.36} A^\omega(\vartheta^e)^\ast=A^\omega(\vartheta^e),\qquad A^\omega(\vartheta^m)^\ast=A^\omega(\vartheta^m).$$ It is worth mentioning that the explicit solution presented above comes from a real qpc and the charge densities are real constants. Of course, there are more solutions; this was only an example.* Finally, it is possible to proceed exactly like in the *classical* case to find *conservation laws* for all cases $$\label{3.37} \nabla\cdot{\bf j}^e+\frac{\partial \rho^e}{\partial x^0}=0, \qquad \nabla\cdot{\bf j}^m+\frac{\partial \rho^m}{\partial x^0}=0,\qquad \nabla\cdot{\bf j}+\frac{\partial \rho}{\partial x^0}=0.$$ ## Covariant Formulation In this section we will pass to the common physics index notation with the metric $\eta_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)$. 
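Before proceeding, the explicit solution presented in the previous subsection can be verified with a short symbolic computation. This is a sketch under the assumption that the product on $M$ is the Moyal star product; the first--order truncation used below is exact here because all the relevant second derivatives vanish:

```python
# Check of the explicit solution (3.35): with only theta^{23} != 0, the
# non-commutative fields and charge densities come out as stated.
import sympy as sp

x1, x2, x3, th = sp.symbols('x1 x2 x3 theta23', real=True)

def star(f, g):
    # Moyal product truncated at first order in theta^{23}
    # (exact for these potentials: the higher derivative terms vanish)
    return sp.expand(f*g + sp.I*th/2*(sp.diff(f, x2)*sp.diff(g, x3)
                                      - sp.diff(f, x3)*sp.diff(g, x2)))

def cross(a, b):
    # cross product with star multiplication, factors kept in order
    return [star(a[1], b[2]) - star(a[2], b[1]),
            star(a[2], b[0]) - star(a[0], b[2]),
            star(a[0], b[1]) - star(a[1], b[0])]

# A^w(v^e) = x^3 dx^2 and A^w(v^m) = x^1 x^2 dx^3 give, via (2.45)-(2.46),
Ae = [0, -x3, 0]
Am = [0, 0, -x1*x2]
Be = [1, 0, 0]       # B^e = curl A^e
Em = [-x1, x2, 0]    # E^m = curl A^m

# quadratic term i(A^e x A^m + A^m x A^e) shared by H^e (2.51) and D^m (2.54)
quad = [sp.expand(sp.I*(u + v)) for u, v in zip(cross(Ae, Am), cross(Am, Ae))]
He = [sp.simplify(b + c) for b, c in zip(Be, quad)]
Dm = [sp.simplify(e + c) for e, c in zip(Em, quad)]

assert [sp.simplify(h - r) for h, r in zip(He, [1 + th*x1, 0, 0])] == [0, 0, 0]
assert [sp.simplify(d - r) for d, r in zip(Dm, [-(1 - th)*x1, x2, 0])] == [0, 0, 0]

# charge densities: div H^e = div D^m = theta^{23}, as in the text
divHe = sum(sp.diff(f, v) for f, v in zip(He, (x1, x2, x3)))
divDm = sum(sp.diff(f, v) for f, v in zip(Dm, (x1, x2, x3)))
assert sp.simplify(divHe - th) == 0 and sp.simplify(divDm - th) == 0
```

Since $\phi^e=\phi^m=0$, the commutator terms in ${\bf D}^e$ and ${\bf H}^m$ vanish as well, recovering ${\bf D}^e={\bf H}^m=0$.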
### The Non--Standard $1D$--Differential Calculus Let us consider $$\label{3.38} A_\mu=(\phi,-{\bf A})$$ and then Equation [\[2.39\]](#2.39){reference-type="ref" reference="2.39"} is given by $$\label{3.39} F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu-i[A_\mu,A_\nu].$$ The geometrical equation (Equation [\[3.4\]](#3.4){reference-type="ref" reference="3.4"}) can be written as $$\label{3.40} \partial_\mu \widetilde{F}^{\mu\nu}=J^{m\,\nu} \qquad \mbox{ where }\qquad J^{m\,\nu}=(\rho^m,{\bf j}^m)=i[A_\mu,\widetilde{F}^{\mu\nu}]=i[A_\mu,\widetilde{F}^{\mu\nu}_{\mathrm{classical}}].$$ Here, we consider that $$\label{3.41} \widetilde{F}^{\mu\nu}={1\over 2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta},$$ is the dual electromagnetic tensor field with $\epsilon^{\mu\nu\alpha\beta}$ the Levi--Civita symbol, and $\widetilde{F}^{\mu\nu}_{\mathrm{classical}}$ is the *classical* part of $\widetilde{F}^{\mu\nu}$, i.e., the part of $\widetilde{F}^{\mu\nu}$ only with ${\bf E}$ and ${\bf B}$. The reader can compare Equation [\[3.40\]](#3.40){reference-type="ref" reference="3.40"} with its *classical* counterpart, Equation $A.14$. On the other hand, the dynamical equation (Equation [\[3.19\]](#3.19){reference-type="ref" reference="3.19"}) can be written as $$\label{3.42} \partial_\mu F^{\mu\nu}=J^{e\,\nu} \qquad \mbox{ where }\qquad J^{e\,\nu}=(\rho^e,{\bf j}^e)=i[A^\ast_\mu,F^{\mu\nu}]$$ with $A^\ast_\mu=(\phi^\ast,-{\bf A}^\ast)$. The reader can compare the last equation with its *classical* counterpart, Equation $A.15$. Notice that the form of both four--currents is like in *classical* non--abelian gauge theories. **Remark 3**. *In Equation [\[3.39\]](#3.39){reference-type="ref" reference="3.39"}, the term $$i[A_\mu,A_\nu]$$ comes from the general definition of the curvature for this quantum bundle with this specific differential calculus. 
In other papers, like [@ruiz; @elec], this term comes from the non--commutativity of $M$ in the definition of the *gauge curvature* $$i[D_\mu,D_\nu].$$* **Remark 4**. *If we consider the algebra of differential forms of $U(1)$, we would get $$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$$ even in $M$! because in this case the only embedded differential is $\delta=0$ and hence the curvature is given by $F^\omega=dA^\omega$ (see Equation [\[2.36\]](#2.36){reference-type="ref" reference="2.36"}). The presence of $\delta$ in the definition of the curvature is needed to see the curvature as a gauge field in the general theory [@micho1; @micho2; @steve].* *So, there is a discrepancy between the gauge curvature and the definition of the curvature of a principal connection in the non--commutative case. This discrepancy is only solved when we consider the non--standard $1D$ differential calculus of $U(1)$, so this calculus seems to be the correct mathematical environment in the non--commutative context. In other words, in papers like [@elec; @ruiz] the authors were working with the non--standard differential calculus without knowing it and hence with a non--zero $S^\omega$.* *Another reason to consider our formulation is [@twisted]. In this paper the authors take a variation on $A_\mu$ and obtain Equation [\[3.42\]](#3.42){reference-type="ref" reference="3.42"} for real qpcs (see Equation [\[3.36\]](#3.36){reference-type="ref" reference="3.36"}). We arrived at the same result in [@sald2] but in a general context: varying the qpc on the non--commutative Yang--Mills functional. 
Furthermore, Equation [\[3.40\]](#3.40){reference-type="ref" reference="3.40"} can be obtained by considering the Jacobi identity on $M$ $$0={1\over 2}\epsilon^{\mu\nu\alpha\beta}[D_\nu,[D_\alpha,D_\beta]].$$ All of this shows that our mathematical formulation is the correct one in the non--commutative case.* By the last Remark, the correct Yang--Mills Lagrangian density of the system is $$\label{3.43} \mathcal{L}_\mathrm{YM}=-{1\over 2}\left(F^{\mu\nu}\cdot F^{\ast}_{\mu\nu}\right)-A_\mu \cdot J^{e\,\ast\,\mu}-J^{e\;\mu}\cdot A^{\ast}_\mu$$ and of course, the Euler--Lagrange equations of the last Lagrangian reduce to Equation [\[3.42\]](#3.42){reference-type="ref" reference="3.42"}. In accordance with [@twisted], twisted gauge transformations are symmetries of the last Lagrangian. Furthermore, the covariant formulation allows us to check that our equations are covariant under the correct set of Lorentz transformations, for example, the ones that transform the matrix $(\theta^{\mu\nu})$ correctly [@lorentz]. On the other hand, solutions in terms of the four--potential in the Lorentz gauge $$\label{3.44} \partial_\mu A^{\mu}=0$$ are given by $$\label{3.45} \partial_\mu\partial^\mu A^{\nu}=i[A_\mu,\widetilde{F}^{\mu\nu}_{\mathrm{classical}}]+i[A^{\mu},\partial_\mu A^{\nu}];$$ while the conservation laws are $$\label{3.46} \partial_\mu J^{e\,\mu}=0,\qquad \partial_\mu J^{m\,\mu}=0.$$ **Remark 5**. 
*The whole classical Electromagnetic Theory (Appendix $A$) can be recovered by considering $$\theta^{\mu\nu}\longrightarrow 0,$$ even with the non--standard $1D$ differential calculus of $U(1)$, because by the graded commutativity of $\Omega^\bullet(M)$ we would have $F^\omega=dA^\omega$ (see Equation [\[2.36\]](#2.36){reference-type="ref" reference="2.36"}).* Finally, it is worth mentioning that there is not a complete symmetry between ${\bf D}$ and ${\bf H}$, in the sense that the electric four--current depends on the configuration of the four--potential, while the magnetic four--current does not: this four--current exists only by the presence of the electromagnetic field in the quantum space--time $M$. Furthermore, when we quantize the model (next section), there will be only one gauge field: $A_\mu$, which is typically identified with the photon field. In other words, the photon field in the non--commutative case turns into a kind of dyon gauge field, in the sense that it produces electric and magnetic charges and currents by itself in the vacuum.
### The $2D$--Differential Calculus Following the last subsection we get $$\label{3.47} A^{e}_\mu=(\phi^e, -{\bf A^e }),\qquad A^{m}_\mu=(\phi^m,-{\bf A^m }).$$ Thus, the non--commutative electromagnetic field is given by $$\label{3.48} F^e_{\mu\nu}=\partial_\mu A^e_\nu-\partial_\nu A^e_\mu-i[A^e_\mu,A^m_\nu]-i[A^m_\mu,A^e_\nu];$$ while the non--commutative magnetoelectric field is $$\label{3.49} F^m_{\mu\nu}=\partial_\mu A^m_\nu-\partial_\nu A^m_\mu-i[A^e_\mu,A^m_\nu]-i[A^m_\mu,A^e_\nu].$$ The non--commutative geometrical equations (Equation [\[3.11\]](#3.11){reference-type="ref" reference="3.11"}) can be expressed as $$\label{3.50} \partial_\mu \widetilde{F}^{e\,\mu\nu}=\partial_\mu \widetilde{F}^{m\,\nu\lambda}=J^{\nu},$$ where $$\label{3.51} \widetilde{F}^{e\,\mu\nu}={1\over 2} \epsilon^{\mu\nu\alpha\beta}F^e_{\alpha\beta},\qquad \widetilde{F}^{m\,\mu\nu}={1\over 2} \epsilon^{\mu\nu\alpha\beta}F^m_{\alpha\beta},$$ and $$J^{\nu}=(\rho,{\bf j}).$$ Notice that $$\label{3.52} J^{\nu}=i[A^e_\mu,\widetilde{F}^{m\,\mu\nu}_{\mathrm{classical}}]+i[A^m_\mu,\widetilde{F}^{e\,\mu\nu}_{\mathrm{classical}}],$$ where $\widetilde{F}^{m\,\mu\nu}_{\mathrm{classical}}$, $\widetilde{F}^{e\,\mu\nu}_{\mathrm{classical}}$ are the *classical* parts of the tensors $\widetilde{F}^{m\,\mu\nu}$, $\widetilde{F}^{e\,\mu\nu}$. The reader can compare Equation [\[3.50\]](#3.50){reference-type="ref" reference="3.50"} with its *classical* counterpart, Equation [\[1.11\]](#1.11){reference-type="ref" reference="1.11"}, and it is worth remembering that Equation [\[3.50\]](#3.50){reference-type="ref" reference="3.50"} is always satisfied, no matter the values of $A^e_\mu$, $A^m_\mu$.
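A short way to see why the derivative part of Equation [\[3.50\]](#3.50){reference-type="ref" reference="3.50"} vanishes identically (a check we add here for clarity, using only the antisymmetry of the Levi--Civita symbol) is $${1\over 2}\,\epsilon^{\mu\nu\alpha\beta}\partial_\mu\left(\partial_\alpha A^e_\beta-\partial_\beta A^e_\alpha\right)=\epsilon^{\mu\nu\alpha\beta}\partial_\mu\partial_\alpha A^e_\beta=0,$$ since $\partial_\mu\partial_\alpha$ is symmetric in $\mu\alpha$ while $\epsilon^{\mu\nu\alpha\beta}$ is antisymmetric; hence only the commutator terms can contribute to $J^\nu$, which is exactly the content of Equation [\[3.52\]](#3.52){reference-type="ref" reference="3.52"}.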
On the other hand, the non--commutative dynamical equations (Equation [\[3.26\]](#3.26){reference-type="ref" reference="3.26"}) are $$\label{3.53} \partial_\mu F^{e\;\mu\nu}=J^{e\;\nu},$$ $$\label{3.54} \partial_\mu F^{m\;\mu\nu}=J^{m\;\nu},$$ where $$J^{e\;\nu}=(\rho^e,{\bf j}^e)\quad \mbox{ and }\quad J^{m\;\nu}=(\rho^m,{\bf j}^m).$$ Notice that $$\label{3.55} J^{e\;\nu}=i[A^{m\,\ast}_\mu,F^{e\;\mu\nu}+F^{m\;\mu\nu}],\qquad J^{m\;\nu}=i[A^{e\,\ast}_\mu,F^{e\;\mu\nu}+F^{m\;\mu\nu}],$$ where $A^{e\,\ast}_\mu=(\phi^{e\ast}, -{\bf A^{e\ast} })$, $A^{m\,\ast}_\mu=(\phi^{m\ast},-{\bf A^{m\ast} })$. The reader can compare Equations [\[3.52\]](#3.52){reference-type="ref" reference="3.52"} and [\[3.53\]](#3.53){reference-type="ref" reference="3.53"} with their *classical* counterparts, Equation [\[1.12\]](#1.12){reference-type="ref" reference="1.12"}. On the other hand, the conservation laws are given by $$\label{3.56} \partial_\mu J^{e\;\mu}=0,\qquad \partial_\mu J^{m\;\mu}=0, \qquad \partial_\mu J^{\mu}=0.$$ Solutions in terms of potentials, in the Lorentz gauge $$\partial_\mu A^{e\,\mu}=0,\qquad \partial_\mu A^{m\,\mu}=0,$$ are given by $$\label{3.57} \partial_\mu \partial^\mu A^{e\,\nu}=i[A^{m\,\ast}_\mu,F^{e\;\mu\nu}+F^{m\;\mu\nu}]+i[A^{e\,\mu},\partial_\mu A^{m\,\nu}]+i[A^{m\,\mu},\partial_\mu A^{e\,\nu}],$$ $$\label{3.58} \partial_\mu \partial^\mu A^{m\,\nu}=i[A^{e\,\ast}_\mu,F^{e\;\mu\nu}+F^{m\;\mu\nu}]+i[A^{e\,\mu},\partial_\mu A^{m\,\nu}]+i[A^{m\,\mu},\partial_\mu A^{e\,\nu}].$$ Now we are going to analyze *non--commutative symmetries*.
First of all, it is worth mentioning that the non--commutative Maxwell equations are invariant under the following transformation $$\label{3.59} A^e_\mu\;\longleftrightarrow A^m_\mu.$$ In other words $${\bf D}^e\longrightarrow {\bf H}^m,\;\; {\bf E}^e\longrightarrow {\bf B}^m, \;\; \qquad {\bf H}^m\longrightarrow {\bf D}^e, \;\; {\bf B}^m\longrightarrow {\bf E}^e$$ $${\bf H}^e\longrightarrow {\bf D}^m, \;\; {\bf B}^e\longrightarrow {\bf E}^m, \;\;\qquad {\bf D}^m\longrightarrow {\bf H}^e, \;\; {\bf E}^m\longrightarrow {\bf B}^e,$$ $$\rho^e \longrightarrow \rho^m,\;\; {\bf J}^e \longrightarrow {\bf J}^m,\;\;\qquad \;\; \rho^m \longrightarrow \rho^e,\;\; {\bf J}^m \longrightarrow {\bf J}^e.$$ This symmetry produces a kind of *non--commutative* duality transformation, and it comes from interchanging the ordered basis $\beta=\{\vartheta^e,\vartheta^m\}\,\longleftrightarrow\, -\beta=\{\vartheta^m,\vartheta^e\}$; so it is a $\mathbb{Z}_2$--symmetry (see subsection $2.2$). In other words, there is full symmetry between the electric field and the magnetic field in the non--commutative Maxwell equations. We define the Yang--Mills Lagrangian density of the system as $$\label{5.60} \mathcal{L}_\mathrm{YM}=\mathcal{L}^e+\mathcal{L}^m,$$ where $$\label{5.61} \mathcal{L}^e=-{1\over 2}\left(F^{e\;\mu\nu}\cdot F^{e\,\ast}_{\mu\nu}\right)-A^{e}_\mu \cdot J^{e\,\ast\,\mu}-J^{e\;\mu}\cdot A^{e\,\ast}_\mu$$ and $$\label{5.62} \mathcal{L}^m={1\over 2}\left(F^{m\;\mu\nu}\cdot F^{m\,\ast}_{\mu\nu}\right)+A^{m}_\mu \cdot J^{m\,\ast\,\mu}+J^{m\;\mu}\cdot A^{m\,\ast}_\mu.$$ Of course, the Euler--Lagrange equations for $A^e_\mu$ and $A^m_\mu$ (or $A^{e\,\ast}_\mu$ and $A^{m\,\ast}_\mu$) reduce to Equations [\[3.53\]](#3.53){reference-type="ref" reference="3.53"}, [\[3.54\]](#3.54){reference-type="ref" reference="3.54"}.
The minus sign in $\mathcal{L}^m$ with respect to $\mathcal{L}^e$ breaks the symmetry between the electric field and the magnetic field; however, the Euler--Lagrange equations are still the correct ones, and we have: **Proposition 6**. *Let $\omega$ be a real qpc (see Equation [\[3.36\]](#3.36){reference-type="ref" reference="3.36"}). Then the Yang--Mills action $$\label{3.63} {\mathcal{S}}_\mathrm{YM}:=\int_M {\mathcal{L}}_\mathrm{YM}\, \mathrm{dvol}$$ possesses the *classical* gauge transformation symmetry (see Equation [\[1.13\]](#1.13){reference-type="ref" reference="1.13"}), i.e., ${\mathcal{S}}_\mathrm{YM}$ is invariant under the transformation $$\label{3.64} A^e_\mu\longrightarrow {A^e_\mu}':=A^e_\mu+\partial_\mu \chi, \qquad A^m_\mu\longrightarrow {A^m_\mu}':=A^m_\mu+\partial_\mu \chi$$ for any real--valued $\chi$ $\in$ $M$.* The proof of the previous proposition is a direct but lengthy calculation, taking into account that under Equation [\[3.64\]](#3.64){reference-type="ref" reference="3.64"} $${F^e_{\mu\nu}}'=\partial_\mu {A^e_\nu}'-\partial_\nu {A^e_\mu}'-i[{A^e_\mu}',{A^m_\nu}']-i[{A^m_\mu}',{A^e_\nu}'],$$ $${F^m_{\mu\nu}}'=\partial_\mu {A^m_\nu}'-\partial_\nu {A^m_\mu}'-i[{A^e_\mu}',{A^m_\nu}']-i[{A^m_\mu}',{A^e_\nu}']$$ and the cyclic property of the integral; so we omit it. The previous covariant formulation allows us to conclude that our equations are covariant under the correct set of Lorentz transformations, for example, those that transform the matrix $(\theta^{\mu\nu})$ correctly [@lorentz]. In accordance with [@twisted], twisted gauge transformations can leave the real action ${\mathcal{S}}_\mathrm{YM}$ invariant. **Remark 7**. *In the light of Remark [Remark 4](#rema3){reference-type="ref" reference="rema3"}, the only case in which $A^e_\mu$, $A^m_\mu$ could have any physical sense is when the Yang--Mills action is invariant under the classical gauge transformations of the electromagnetic theory.* **Remark 8**.
*The whole classical Electromagnetic Theory (Appendix $A$) can be recovered by considering $$\theta^{\mu\nu}\longrightarrow 0\qquad \mbox{ and } \qquad A^m_\mu \longrightarrow 0.$$* # Quantization ### The Non--Standard $1D$--Differential Calculus ### The $2D$--Differential Calculus # Concluding Comments In Differential Geometry the most general framework of the Electromagnetic Theory in the vacuum is given by the theory of principal bundles and principal connections; so it is natural to think that in the non--commutative case the correct mathematical framework of the Electromagnetic Theory is given by the theory of quantum principal bundles and quantum principal connections, and showing this is exactly the purpose of this paper. As the reader should have already noticed throughout the text, especially in Remark [Remark 4](#rema3){reference-type="ref" reference="rema3"}, the general theory of quantum principal bundles reproduces previous results of the non--commutative $U(1)$--gauge theory, as well as clarifies the correct equation of motion and the correct geometrical equation. On the other hand, our toy model with a $2D$ differential calculus of $U(1)$ presents the correct mathematical formulation of the Cabibbo--Ferrari idea, because it is the geometry of the spaces that tells us how both four--potentials are related, not us by imposing it. By the previous fact, this toy model is a perfect example of the powerful tool that quantum principal bundles are for describing non--commutative gauge theory [@sald1; @sald2]. This should not be a big surprise because of the intrinsically fruitful relationship between principal bundles and gauge theory in the *classical* case. All of these are reasons to keep the research going. In particular, it is possible to add matter fields to our framework. In addition, all of these motivate the research to develop a non--commutative $SU(2)$--gauge theory or a non--commutative $SU(3)$--gauge theory.
Moreover, the general formulation presented in [@sald1; @sald2] allows one to work with other quantum space--times, not only with the Moyal--Weyl algebra [@09]. The presence of the operator $S^\omega$ completely changes the mathematical formulation of these kinds of models and the equations of motion, and it is worth mentioning that this operator appears naturally in the general non--commutative Bianchi identity ([@micho2]) and also when varying the qpc in the non--commutative Yang--Mills functional ([@sald2]); so it is essential to consider it in non--commutative gauge theory, just as this paper shows. # Classical Maxwell Equations Consider a principal $U(1)$--bundle over the Minkowski space--time: $\mathbb{R}^4$ with the metric $\eta=\mathrm{diag}(1,-1,-1,-1)$. Since $\mathbb{R}^4$ is contractible, every bundle over it is trivializable, so without loss of generality, let us consider the trivial principal bundle $$\label{1.0} \mathrm{proy}: \mathbb{R}^4 \times U(1)\longrightarrow \mathbb{R}^4.$$ In this case, principal connections $$\omega: T(\mathbb{R}^4\times U(1))\longrightarrow \mathfrak{u}(1)$$ are in bijection with globally defined $\mathfrak{u}(1)$--valued differential $1$--forms: $i\,A^\omega$, where $i=\sqrt{-1}$ and $$A^\omega=\displaystyle \sum^3_{\mu=0} A_{\mu}\,dx^{\mu}\; \in \; \Omega^1(\mathbb{R}^4)$$ is usually called *the potential 1--form*; and the vector $(A_0,A_1,A_2,A_3)$ formed by the coefficients of $A^\omega$ receives the name of *four--vector potential*.
For this case the curvature form of $\omega$ $$R^\omega: \Omega^2(\mathbb{R}^4\times U(1))\longrightarrow \mathfrak{u}(1)$$ is related to the potential $1$--form by means of the (de Rham) differential, i.e., curvature forms are in bijection with $$\label{1.1} dA^\omega=:F^\omega \; \in \; \Omega^2(\mathbb{R}^4).$$ In general every principal connection has to satisfy the (second) Bianchi identity; however, due to the fact that $U(1)$ is an *abelian* group, in our case this identity reduces to ([@na]) $$\label{1.2} 0=dF^\omega=d^2 A^\omega,$$ which is a trivial relation taking into account the properties of $d$. Nevertheless, it is worth remarking again that the last equation arises from the relation between the curvature and the $1$--form potential, the Bianchi identity, and the commutativity of $U(1)$; all of this in the context of the graded--commutative de Rham algebra $\Omega^\bullet(\mathbb{R}^4)$. On the other hand, the Yang--Mills functional is a functional from the space of principal connections to $\mathbb{R}$ which measures the squared norm of the curvature of a principal connection, and the Yang--Mills equation comes from a variational principle in which we are looking for critical points. In our case, again by the relation between the curvature and the $1$--form potential and the commutativity of $U(1)$, the Yang--Mills equation is ([@na]) $$\label{1.3} 0=d^\star F^\omega=d^\star dA^\omega,$$ where $d^\star=(-1)^k \star^{-1}\circ\, d \,\circ \star$ is the codifferential, the formally adjoint[^2] operator of $d$, and $\star$ is the Hodge star operator [@na]. Now from Equations [\[1.2\]](#1.2){reference-type="ref" reference="1.2"}, [\[1.3\]](#1.3){reference-type="ref" reference="1.3"} it is possible to obtain the Maxwell equations [@na].
In fact, by choosing $(A_0,A_1,A_2,A_3)=(\phi,-{\bf A})$ with ${\bf A}$ a vector field in $\mathbb{R}^3$, Equation [\[1.1\]](#1.1){reference-type="ref" reference="1.1"} becomes $$\label{1.4} F^\omega=\sum_{0\leq \mu < \nu \leq 3}F_{\mu\nu}\, dx^\mu\wedge dx^\nu\quad \mathrm{ where }\quad (F_{\mu\nu})=\begin{pmatrix} 0 & E_1 & E_2 & E_3 \\ -E_1 & 0 & -B_3 & B_2 \\ -E_2 & B_3 & 0 & -B_1\\ -E_3 & -B_2 & B_1 & 0 \end{pmatrix},$$ where $$\label{1.5} {\bf E}=(E_1,E_2,E_3):=-\frac{\partial{\bf A}}{\partial x^0}-\nabla \phi$$ is the *electric field*, $$\label{1.6} {\bf B}=(B_1,B_2,B_3):=\nabla \times \bf A$$ is the *magnetic field*, and the tensor $(F_{\mu\nu})$ is called *the electromagnetic field tensor*. By substituting the value of $F^\omega$ in Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"} we get the Gauss Law for magnetism and the Faraday equation $$\label{1.7} \nabla\cdot {\bf B}=0,$$ $$\label{1.8} \nabla\times {\bf E}+\frac{\partial {\bf B}}{\partial x^0}=0;$$ while by substituting it in Equation [\[1.3\]](#1.3){reference-type="ref" reference="1.3"} we get the Gauss Law and the Ampère equation $$\label{1.9} \nabla\cdot {\bf E}=0$$ $$\label{1.10} \nabla\times {\bf B}-\frac{\partial {\bf E}}{\partial x^0}=0.$$ Equations [\[1.7\]](#1.7){reference-type="ref" reference="1.7"}--[\[1.10\]](#1.10){reference-type="ref" reference="1.10"} are the Maxwell equations in the vacuum [@na]. $\phi$ receives the name of *the electric potential* or *scalar potential*; while ${\bf A}$ receives the name of *the magnetic potential* or *vector potential*.
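As a quick illustrative check of this derivation (added here; it uses only the sign conventions of Equation [\[1.4\]](#1.4){reference-type="ref" reference="1.4"}), the purely spatial component of $dF^\omega$ already yields the Gauss Law for magnetism: $$dF^\omega\big|_{dx^1\wedge dx^2\wedge dx^3}=-\left(\partial_1 B_1+\partial_2 B_2+\partial_3 B_3\right)dx^1\wedge dx^2\wedge dx^3=-(\nabla\cdot {\bf B})\,dx^1\wedge dx^2\wedge dx^3,$$ so Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"} forces $\nabla\cdot {\bf B}=0$; this is consistent with Equation [\[1.6\]](#1.6){reference-type="ref" reference="1.6"}, since $\nabla\cdot(\nabla\times{\bf A})=0$ identically.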
In index notation we have $$A_\mu=(\phi,-{\bf A})$$ and $$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu.$$ The geometrical equation can be written as $$\label{1.11} \partial_\mu \widetilde{F}^{\mu\nu}=0,$$ and the dynamical equation can be written as $$\label{1.12} \partial_\mu F^{\mu\nu}=0,$$ where $$F^{\mu\nu}=\eta^{\mu\alpha}\eta^{\nu\beta}F_{\alpha\beta},\qquad \widetilde{F}^{\mu\nu}={1\over 2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta},$$ with $\epsilon^{\mu\nu\alpha\beta}$ the Levi--Civita symbol and $\eta_{\mu\nu}=\eta^{\mu\nu}= \mathrm{diag}(1,-1,-1,-1)$. The tensor $\widetilde{F}^{\mu\nu}$ receives the name of the dual electromagnetic field tensor. It is worth mentioning that Equation [\[1.2\]](#1.2){reference-type="ref" reference="1.2"} shows explicitly the gauge transformation symmetry: $$\label{1.13} A^\omega\longrightarrow {A^\omega}':=A^\omega+d\chi \quad \mbox{ or in index notation } \quad A'_\mu:=A_\mu+\partial_\mu\chi,$$ for any $\chi$ $\in$ $C^\infty(\mathbb{R}^4)$. [Nakahara, M. :]{.smallcaps} *Geometry, Topology and Physics, * [CRC Press, Boca Raton, 2nd Edition, 2003.]{.roman} [Yang, C., N. :]{.smallcaps} *Magnetic Monopoles, Fiber Bundles, and Gauge Fields, * [NATO Sci. Ser. B **352**, 55-65 (1996).]{.roman} [Dirac, P., A., M. :]{.smallcaps} [Proc. Roy. Soc. **A133**, 60 (1931).]{.roman} [Beggs, J., E. & Majid, S. :]{.smallcaps} *Quantum Riemannian Geometry, * [Springer, 2nd Edition, 2020.]{.roman} [Aschieri, P., Borowiec, A. & Pachol, A. :]{.smallcaps} *Observables and Dispersion Relations in $\kappa$--Minkowski Spacetime, * [J. High Energ. Phys. **2017**, 152 (2017).]{.roman} [Aschieri, P., Blohmann, C., Dimitrijević, M., Meyer, F., Schupp, P. & Wess, J. :]{.smallcaps} *A Gravity Theory on Noncommutative Spaces, * [Class. Quant. Grav. **22** (17) 3511 (2005).]{.roman} [Balasin, H., Blaschke, D., N., Gieres, F. & Schweda, M. :]{.smallcaps} *On the energy-momentum tensor in Moyal space, * [Eur. Phys. J.
C **75** 284 (2015).]{.roman} [Iorio, A. & Sýkora, T. :]{.smallcaps} *On the Space-Time Symmetries of Non-Commutative Gauge Theories, * [Int. J. Mod. Phys. A **17** (17) 2369 (2001).]{.roman} [Wallet J-C. :]{.smallcaps} *Noncommutative induced gauge theories on Moyal spaces, * [J. Phys.: Conf. Ser. **103** (2008).]{.roman} [Aschieri, A., Dimitrijevic, M., Meyer, F., Schraml S. & Wess, J :]{.smallcaps} *Twisted Gauge Theories, * [Lett. Math. Phys **78** 61--71 (2006).]{.roman} [Ruiz--Ruiz, F. :]{.smallcaps} *Gauge-fixing independence of IR divergences in non-commutative U(1), perturbative tachyonic instabilities and supersymmetry, * [Phys. Lett. B **502**, 274-278 (2008).]{.roman} [Cabibbo, N. & Ferrari, E. :]{.smallcaps} *Quantum electrodynamics with Dirac monopoles, * [Nuovo Cim. **23**, 1147-1154 (1962).]{.roman} [Singh, V. & Tripath, B., V.  :]{.smallcaps} *Topological Dyons, * [Int. J. Theor. Phys. **52**, 604--611 (2013).]{.roman} [Durdevich, M. :]{.smallcaps} *Geometry of Quantum Principal Bundles I, * [Commun. Math. Phys **175** (3), 457-521 (1996).]{.roman} [Durdevich, M. :]{.smallcaps} *Geometry of Quantum Principal Bundles II, * [Rev. Math. Phys. **9** (5), 531---607 (1997).]{.roman} [Durdevich, M. :]{.smallcaps} *Geometry of Quantum Principal Bundles III, * [Algebras, Groups & Geometries. **27**, 247--336 (2010).]{.roman} [Sontz, S, B. :]{.smallcaps} *Principal Bundles: The Quantum Case, * [Universitext, Springer, 2015.]{.roman} [Saldaña, M, G, A. :]{.smallcaps} *Geometry of Associated Quantum Vector Bundles and the Quantum Gauge Group. * [arXiv:2109.01550v3, 30 Apr 2022. Sent to J. Geom. Phys.]{.roman} [Saldaña, M, G, A. :]{.smallcaps} *Quantum Principal Bundles and Yang--Mills--Scalar--Matter Fields. * [arXiv:2109.01554v3, 30 Apr 2022. Sent to J. Geom. Phys.]{.roman} [Saldaña, M, G, A. :]{.smallcaps} *Yang--Mills-scalar-matter fields in the quantum Hopf fibration. * [Bol. Soc. Mat. Mex. **29** (37) 2003.]{.roman} [Saldaña, M, G, A. 
:]{.smallcaps} *Yang--Mills--Scalar--Matter Fields in the Two--Point Space. * [arXiv:2112.00647v1, 1 Dec 2021. Accepted in the review book "Scientific Legacy of Professor Zbigniew Oziewicz: Selected Papers from the International Conference Applied Category Theory Graph-Operad-Logic" (2021).]{.roman} [Brzezinski, T & Majid, S. :]{.smallcaps} *Quantum Group Gauge Theory on Quantum spaces, * [Commun. Math. Phys. **157**, 591--638 (1993). Erratum: Commun. Math. Phys. 167--235 (1995).]{.roman} [Hajac, P & Majid, S. :]{.smallcaps} *Projective Module Description of the $q$--monopole, * [arXiv:math/9803003v2, 30 Aug 1998.]{.roman} [Woronowicz, S, L. :]{.smallcaps} *Differential Calculus on Compact Matrix Pseudogroups (Quantum Groups), * [Commun. Math. Phys., **122**, 125-170 (1989).]{.roman} [Seiberg, N., Susskind, L. & Toumbas, N. ]{.smallcaps}:*Strings in background electric field, space / time noncommutativity and a new noncritical string theory,* [JHEP 06 021 (2000).]{.roman} [Gracia-Bondía, J., M. & Ruiz--Ruiz, F. ]{.smallcaps}:*Noncommutative spacetime symmetries: Twist versus covariance,* [Phys. Rev. D., **74** (2006).]{.roman} [Aschieri, P. & Castellani, L. ]{.smallcaps}:*Noncommutative Gravity Solutions,* [J. Geom. Phys. **60**, 375 (2010).]{.roman} [^1]: Whenever the integrals converge and taking into account the corresponding equivalence classes. [^2]: Whenever the integrals converge.
--- abstract: | By applying the linearly implicit conservative difference scheme proposed in \[D.-L. Wang, A.-G. Xiao, W. Yang. J. Comput. Phys. 2014;272:670-681\], the system of repulsive space fractional coupled nonlinear Schrödinger equations leads to a sequence of linear systems with complex symmetric and Toeplitz-plus-diagonal structure. In this paper, we propose the diagonal and normal with Toeplitz-block splitting iteration method to solve the above linear systems. The new iteration method is proved to converge unconditionally, and the optimal iteration parameter is derived. Naturally, this new iteration method leads to a diagonal and normal with circulant-block preconditioner which can be executed efficiently by fast algorithms. In theory, we provide sharp bounds for the eigenvalues of the discrete fractional Laplacian and its circulant approximation, and further analysis indicates that the spectral distribution of the preconditioned system matrix is tight. Numerical experiments show that the new preconditioner can significantly improve the computational efficiency of the Krylov subspace iteration methods. Moreover, the corresponding preconditioned GMRES method shows space mesh size independent and almost fractional order parameter insensitive convergence behaviors. **Key words.** circulant matrix, coupled nonlinear Schrödinger equations, fractional derivative, preconditioning, repulsive nonlinearity, Toeplitz matrix **MSC codes.** 65F08, 65F10, 65M06, 65M22 author: - "Fei-Yan Zhang and Xi Yang[^1]" title: "Diagonal and normal with Toeplitz-block splitting iteration method for space fractional coupled nonlinear Schrödinger equations with repulsive nonlinearities[^2]" --- # Introduction {#sec-Introduction} The system of Schrödinger equations is a basic physical model describing nonrelativistic quantum mechanical behaviors.
It is well known that Feynman and Hibbs derived the standard Schrödinger equation for the evolution of microscopic particles based on Brownian path integrals. By replacing Brownian motion with a Lévy-like process, the standard Schrödinger equation is generalized to the fractional Schrödinger equation [@Laskin01; @Laskin02]. Since the system of fractional Schrödinger equations plays important roles in physical applications [@GuoXu06; @MS95; @ZVB95], the related theories of fractional Schrödinger equations have been studied extensively, including the existence and uniqueness of the solution [@BGuo2008], the well-posedness of one dimensional (1D) and higher dimensional problems [@GuoHou2010; @GuoHou2013], the construction of the ground state solution [@Secchi2013; @LiZW2019], etc. Due to the nonlocal nature of the fractional derivative operator, it is difficult to obtain the closed-form exact solution of a system of fractional Schrödinger equations. In order to overcome this difficulty, a number of numerical methods have been established and studied, e.g., finite element methods [@LM2018JCP; @LM2017NUMA], spectral methods [@DSW2016CMA; @WY2019ANM; @FF2014SIAM], collocation methods [@Amore2010JMP; @Bhrawy2017ANM], and finite difference methods [@WDL2013JCP; @WDL2014JCP; @WPD2015JCP; @ZRP2019SCM; @ZX2014SISC], etc. In this paper, the following system is considered, i.e., the space fractional coupled nonlinear Schrödinger (CNLS) equations $$\begin{aligned} \label{equ1} \begin{cases} {\imath}u_{t}-\gamma(-\Delta)^{\frac{\alpha}{2}}u+\rho(|u|^2+\beta|v|^2)u =0,\\ {\imath}v_{t}-\gamma(-\Delta)^{\frac{\alpha}{2}}v+\rho(|v|^2+\beta|u|^2)v =0,\ \end{cases} x\in\mathbb{R},\ 0 < t \le \mbox{T}\end{aligned}$$ with the initial conditions $u(x,0)=u_{0}(x)$ and $v(x,0)=v_{0}(x)$. Here, $\imath=\sqrt{-1}$, $(-\Delta)^{\frac{\alpha}{2}}$ is the 1D fractional Laplacian with $1<\alpha \le 2$, and the parameters $\gamma>0, \beta \ge 0, \rho$ are real constants.
When $\alpha = 2$, the system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}) reduces to the classical CNLS equations [@nonlinearoptic2009]. When $\beta=0$, the system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}) reduces to the decoupled nonlinear Schrödinger (DNLS) equations [@BGuo2008]. When $\rho=0$, the system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}), without the nonlinear term, describes free particles [@Laskin2002; @Luchko2013]. We only consider the case of $\rho<0$ [@BWZ2012arXiv; @Carr2000PRA; @JS1999CPAM], i.e., the system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}) with repulsive nonlinearities, and leave the attractive nonlinearity case ($\rho>0$ [@BWZ2012arXiv; @BWZ2003SINA; @Saito2001PRL]) as future work. For solving the system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}) numerically, the infinite interval $\mathbb{R}$ is truncated into a bounded computational interval $\mbox{a} \le x \le \mbox{b}$, and the Dirichlet boundary conditions are adopted, i.e., $$\begin{gathered} u(\mbox{a},t)=u(\mbox{b},t)=0,\qquad v(\mbox{a},t)=v(\mbox{b},t)=0, \qquad 0 \le t \le \mbox{T}. \end{gathered}$$ Then, the linearly implicit conservative difference (LICD) scheme proposed in [@WDL2014JCP] based on the fractional centered difference formula [@riesz21] can be applied to the truncated system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}), which is stable and convergent, and leads to a sequence of complex linear systems. The coefficient matrices are dense, non-Hermitian, and of the form $\imath I+D-T$, where $D$ is diagonal and nonpositive, and $T$, the discrete fractional Laplacian, is Toeplitz and symmetric positive definite. Hence, these matrices can be treated as either Toeplitz-plus-diagonal matrices or complex symmetric matrices.
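As a minimal illustration of this structural claim (a sketch with synthetic data, not the actual LICD discretization), one can assemble a small matrix of the form $\imath I+D-T$ and check that it is complex symmetric but not Hermitian:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
col = rng.standard_normal(M)
idx = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
T = col[idx]                      # symmetric Toeplitz block: T[i, j] = col[|i - j|]
D = np.diag(-rng.random(M))       # real diagonal with nonpositive entries
A = 1j * np.eye(M) + D - T        # dense system matrix of the stated form

print(np.allclose(A, A.T))        # True: equals its (unconjugated) transpose
print(np.allclose(A, A.conj().T)) # False: not Hermitian, since i*I flips sign
```

The imaginary shift $\imath I$ is what destroys Hermitian positive definiteness while preserving complex symmetry, which is exactly why the two solver families discussed next are the relevant ones.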
When we treat $\imath I+D-T$ as a Toeplitz-plus-diagonal matrix, since there is no fast direct solver available for dense Toeplitz-plus-diagonal linear systems, the preconditioned Krylov subspace iteration methods [@GolubBook; @sad22] may be a reasonable class of options. Chan and Ng [@TPB12] proposed a banded preconditioner for the Toeplitz-plus-band matrix (the Toeplitz-plus-diagonal matrix is just a special case). The main drawback of this preconditioner is that the generating function of the corresponding Toeplitz matrix should be known in advance, but unfortunately, it is unavailable in general. Ng and Pan [@Cpd14] constructed the approximate inverse circulant-plus-diagonal (AICD) preconditioner for the Toeplitz-plus-diagonal matrix. The AICD preconditioner works effectively and efficiently if the elements of the Toeplitz matrix decay exponentially away from the main diagonal, the nonzero elements of the diagonal matrix vary mildly and smoothly, and the interpolation points with respect to the diagonals are appropriately selected. Bai et al. established a diagonal and circulant splitting (DCS) preconditioner for the Toeplitz-plus-diagonal matrix, which shows mesh size independent convergence behavior, and significantly accelerates the convergence rate of the preconditioned Krylov subspace iteration methods; see [@BaiLuPan2017] for the 1D case, and [@BaiLu2020] for the higher dimensional cases. The above preconditioners work well under the basic assumption that the system matrix is Hermitian positive definite. Nevertheless, $\imath I+D-T$ is obviously non-Hermitian, and thus does not satisfy this assumption. When we treat $\imath I+D-T$ as a complex symmetric matrix, we can consider two kinds of efficient methods for the corresponding complex symmetric linear system.
The first kind is the class of alternating-type iteration methods, including the Hermitian and skew-Hermitian splitting (HSS [@HSS10]) iteration method and the preconditioned HSS (PHSS [@BaiGP2004]) iteration method for a general non-Hermitian positive definite linear system, and the modified HSS (MHSS [@MHSS18]) iteration method and the preconditioned MHSS (PMHSS [@BaiBC2011; @BaiBCW2013]) iteration method for a complex symmetric linear system whose coefficient matrix has nonnegative definite real and imaginary parts, at least one of them being positive definite. The second kind is the C-to-R iteration method [@Axelsson2000], which reformulates $\imath I+D-T$ as a real block two-by-two matrix, performs a block Gaussian elimination, and finally solves a Schur-complement linear subsystem. Unfortunately, these methods cannot make full use of the Toeplitz structure of $\imath I+D-T$, and thus do not lead to convincing numerical behavior. To deal with the matrix $\imath I+D-T$, Ran et al. proposed the partially inexact HSS (PIHSS [@Pihss08]) iteration method, and the HSS-like iteration method [@HSS-like09]. Numerical results in [@Pihss08] and [@HSS-like09] show that the PIHSS iteration method outperforms the HSS iteration method in terms of computing time, and the HSS-like iteration method has better behavior in terms of both iteration counts and computing time compared with the PIHSS iteration method. In [@RanWW2017], Ran et al. came to the conclusion that the HSS-like preconditioner is more efficient than the HSS preconditioner and the AICD preconditioner when they are used in conjunction with the Krylov subspace iteration methods. By taking the Toeplitz-plus-diagonal structure into account, Wang et al. constructed an efficient variant of the PMHSS iteration method [@W-PMHSS11] which naturally leads to an efficient PMHSS preconditioner. Numerical results in [@W-PMHSS11] show that the PMHSS preconditioner outperforms the HSS-like preconditioner.
In short, the PMHSS preconditioner suggested by Wang et al. is the most efficient one available in the literature. In order to effectively utilize the structure of $\imath I+D-T$, we consider a real block two-by-two equivalent form and its diagonal and normal with Toeplitz-block (DNTB) splitting. This new splitting leads to the DNTB iteration method, which is a parameterized iteration method that converges unconditionally for any positive iteration parameter, and the optimal iteration parameter is derived. The DNTB iteration method naturally admits the DNTB preconditioner, and the implementation of this preconditioner requires solving a diagonal linear subsystem and a block normal linear subsystem with Toeplitz-block. In practice, an even more effective and efficient preconditioner can be constructed by approximating the Toeplitz-block with a circulant matrix. Then, we obtain the diagonal and normal with circulant-block (DNCB) preconditioner, which can be implemented efficiently based on the fast Fourier transform (FFT) [@GolubBook]. Theoretically, we provide sharp bounds for the eigenvalues of the discrete fractional Laplacian $T$ and its circulant approximation $C$, and further analysis indicates that the eigenvalues of the DNCB preconditioned system matrix are clustered around 1. Numerical experiments show that the DNCB preconditioner can significantly improve the computational efficiency of the Krylov subspace iteration methods, and outperforms a circulant improved version of the PMHSS preconditioner. In addition, the corresponding convergence behavior of the DNCB preconditioner is independent of the space mesh size and almost insensitive to the fractional order parameter. This paper is organized as follows. In Section 2, the complex linear system with the coefficient matrix $\imath I+D-T$ is derived by applying the LICD scheme to the system of repulsive space fractional CNLS equations.
In Section 3, we construct the DNTB iteration method and study its asymptotic convergence theory. The DNCB preconditioner is presented and analyzed in Section 4, and the implementation details and the computational complexities of the preconditioners involved in the experiments are described in Section 5. Numerical results are reported in Section 6, and finally, the paper is ended with some concluding remarks in Section 7. [\[sec-introduction\]]{#sec-introduction label="sec-introduction"} # Discretization of the space fractional CNLS equations In this section, we apply the LICD scheme [@WDL2014JCP] to discretize the space fractional CNLS equations ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}), and then derive the complex symmetric linear system on each time level. We denote by $M$ and $N$ the prescribed positive integers, and let $\tau=\mbox{T}/N$ and $h=(\mbox{b}-\mbox{a})/(M+1)$ be the temporal step size and the spatial step size, respectively. The time levels are $t_{n}=n\tau$ for $n=0,1,...,N$, and the spatial discrete points are $x_{j}=\mbox{a}+jh$ for $j=0,1,...,M+1$. The numerical solutions of ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}) are denoted by $u^{n}_{j}\approx u(x_{j},t_{n})$ and $v^{n}_{j}\approx v(x_{j},t_{n})$. The fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$ is equivalent to the Riesz fractional derivative in the 1D case [@fLapReviewJCP2020], and the latter can be discretized by the fractional centered difference [@riesz21; @MDuman2012] on the uniform spatial grids in the bounded interval $[\mbox{a},\mbox{b}]$ as $$\begin{gathered} (-\Delta)^\frac{\alpha}{2}u(x_{j},t)=\frac{1}{h^{\alpha}}\sum^M_{k=1}c_{j-k}u_{k}+\mathcal{O}(h^2),\end{gathered}$$ where the coefficients read $c_{k}=(-1)^{k}\Gamma(\alpha+1)/[\Gamma(\alpha/2-k+1) \Gamma(\alpha/2+k+1)]$, $\forall\ k\in\mathbb{Z}$, with $\Gamma(\cdot)$ the gamma function.
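For concreteness, the coefficients $c_k$ can be evaluated from the Gamma-function formula above; in practice a stable route is the ratio recurrence $c_{k+1}=c_k\,(k-\alpha/2)/(k+\alpha/2+1)$, which follows directly from that formula. A small sketch (the function name `fcd_coefficients` is ours, not from the paper):

```python
import numpy as np
from math import gamma

def fcd_coefficients(alpha, K):
    """Fractional centered difference coefficients c_0, ..., c_K for 1 < alpha <= 2."""
    c = np.empty(K + 1)
    c[0] = gamma(alpha + 1.0) / gamma(alpha / 2.0 + 1.0) ** 2
    for k in range(K):
        # c_{k+1}/c_k = (k - alpha/2) / (k + alpha/2 + 1), from the Gamma formula
        c[k + 1] = c[k] * (k - alpha / 2.0) / (k + alpha / 2.0 + 1.0)
    return c

c = fcd_coefficients(1.5, 4000)
print(c[0] >= 0.0 and np.all(c[1:] <= 0.0))   # c_0 >= 0 and c_k <= 0 for k >= 1
print(abs(c[0] + 2.0 * c[1:].sum()) < 1e-2)   # sum over k != 0 of |c_k| approaches c_0
```

The recurrence avoids evaluating $\Gamma(\alpha/2-k+1)$ at large negative arguments, where direct computation over- or underflows; by symmetry only $c_0,\ldots,c_{M-1}$ are needed to form the discrete operator.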
In addition, $c_{k}$ satisfies the following properties [@MDuman2012]: $$\begin{gathered} \label{propertiesOfCk} c_{0}\ge 0,\ c_{k}=c_{-k}\le0,\ \forall\ k\ge 1,\ \mbox{and}\ \sum^{+\infty}_{k=-\infty,k\ne0}|c_{k}|=c_{0}.\end{gathered}$$ By adopting the LICD scheme, the following discrete space fractional CNLS equations are obtained from the system ([\[equ1\]](#equ1){reference-type="ref" reference="equ1"}): $$\label{discretizedCNLS} \begin{cases} \imath\frac{u^{n+1}_j-u^{n-1}_j}{2\tau}-\frac{\gamma}{h^\alpha} \sum^M_{k=1}c_{j-k}\hat{u}^n_k+\rho(|u^n_j|^2+\beta|v^n_j|^2) \hat{u}^n_j=0, \\ \imath\frac{v^{n+1}_j-v^{n-1}_j}{2\tau}-\frac{\gamma}{h^\alpha} \sum^M_{k=1}c_{j-k}\hat{v}^n_k+\rho(|v^n_j|^2+\beta|u^n_j|^2) \hat{v}^n_j=0, \end{cases}$$ where $\hat{u}^n_j=(u^{n+1}_j+u^{n-1}_j)/2$, $\hat{v}^n_j=(v^{n+1}_j+v^{n-1}_j)/2$, $j=1,2,\ldots,M$, $n=1,2,\ldots,N-1$, and the initial and boundary conditions read $u^0_j=u_0(x_j)$, $v^0_j=v_0(x_j)$, $u^{n}_0=u^{n}_{M+1}=0$, and $v^{n}_0=v^{n}_{M+1}=0$. In addition, the discrete space fractional CNLS equations ([\[discretizedCNLS\]](#discretizedCNLS){reference-type="ref" reference="discretizedCNLS"}) can also be written in the following matrix-vector form: $$\begin{aligned} \label{discretizedCNLSMaxForm} \begin{cases} A^{n+1}_u\text{\bf u}^{n+1} =\text{\bf b}^{n+1}_u, \\ A^{n+1}_v\text{\bf v}^{n+1} =\text{\bf b}^{n+1}_v, \end{cases} &\forall \ n \ge 1,\end{aligned}$$ where the coefficient matrices $A^{n+1}_u$, $A^{n+1}_v \in \makebox{{\mbox{\bb C}}}^{M\times M}$ read $$\begin{aligned} \begin{cases} A^{n+1}_u=\imath I+ D^{n+1}_u-T, \\ A^{n+1}_v=\imath I+ D^{n+1}_v-T.
\end{cases}\end{aligned}$$ Here, $I \in \makebox{{\mbox{\bb R}}}^{M \times M}$ represents the identity matrix, the diagonal matrices $D^{n+1}_u$, $D^{n+1}_v\in\makebox{{\mbox{\bb R}}}^{M\times M}$ read $$\begin{aligned} \begin{cases} D^{n+1}_u={\rm diag}\{d^{n+1}_{u,1},d^{n+1}_{u,2},\ldots,d^{n+1}_{u,M}\}, \\ D^{n+1}_v={\rm diag}\{d^{n+1}_{v,1},d^{n+1}_{v,2},\ldots,d^{n+1}_{v,M}\}, \end{cases}\end{aligned}$$ where $d^{n+1}_{u,j}=\rho\tau(|u^n_j|^2+\beta |v^n_j|^2)$ and $d^{n+1}_{v,j}=\rho\tau(|v^n_j|^2+\beta |u^n_j|^2)$ for $j=1,2,\ldots,M$, and $T\in\makebox{{\mbox{\bb R}}}^{M\times M}$ represents a Toeplitz matrix of the form $$\begin{aligned} \label{equ5} T=\mu \begin{bmatrix} c_0 & c_{-1} & \cdots & c_{2-M} & c_{1-M} \\ c_1 & c_0 & \cdots & c_{3-M} & c_{2-M} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ c_{M-2} & c_{M-3} & \cdots & c_0 & c_{-1} \\ c_{M-1} & c_{M-2} & \cdots & c_1 & c_0 \\ \end{bmatrix}\end{aligned}$$ with $\mu=\frac{\gamma\tau}{h^\alpha}$. The solutions $\text{\bf u}^{n+1}$, $\text{\bf v}^{n+1}$ and the right-hand sides $\text{\bf b}^{n+1}_u$, $\text{\bf b}^{n+1}_v$ of ([\[discretizedCNLSMaxForm\]](#discretizedCNLSMaxForm){reference-type="ref" reference="discretizedCNLSMaxForm"}) read $$\begin{aligned} \begin{cases} \text{\bf u}^{n+1}={[u^{n+1}_1,...,u^{n+1}_M]}^{\top}, \\ \text{\bf v}^{n+1}={[v^{n+1}_1,...,v^{n+1}_M]}^{\top}, \end{cases} \quad \mbox{and} \quad \begin{cases} \text{\bf b}^{n+1}_u={[b^{n+1}_{u,1},...,b^{n+1}_{u,M}]}^{\top}, \\ \text{\bf b}^{n+1}_v={[b^{n+1}_{v,1},...,b^{n+1}_{v,M}]}^{\top}, \end{cases}\end{aligned}$$ where $$\begin{aligned} \begin{cases} b^{n+1}_{u,j} \ =\ \imath u^{n-1}_j+\mu \sum^M_{k=1}c_{j-k}u^{n-1}_k-d^{n+1}_{u,j}u^{n-1}_j, \\ b^{n+1}_{v,j} \ =\ \imath v^{n-1}_j+\mu \sum^M_{k=1}c_{j-k}v^{n-1}_k-d^{n+1}_{v,j}v^{n-1}_j, \end{cases} & \forall\ 1\le j\le M.\end{aligned}$$ Obviously, the coupled complex linear systems in ([\[discretizedCNLSMaxForm\]](#discretizedCNLSMaxForm){reference-type="ref"
reference="discretizedCNLSMaxForm"}) share the following unified form $$\begin{aligned} \label{equ3} A\ \text{\bf x} &= \text{\bf b},\end{aligned}$$ with a complex symmetric coefficient matrix $A=\imath I+D-T\in\makebox{{\mbox{\bb C}}}^{M\times M}$, since $D \in\makebox{{\mbox{\bb R}}}^{M \times M}$ is diagonal, and $T \in \makebox{{\mbox{\bb R}}}^{M \times M}$ is Toeplitz and symmetric positive definite. For $\rho\neq 0$, two cases of $A\in\makebox{{\mbox{\bb C}}}^{M\times M}$ can occur. Since $\gamma>0$ and $\beta\ge 0$, the following holds. - If $\rho<0$, then $D$ is negative semi-definite. Together with the fact that $T$ is positive definite, this implies that $A$ is negative definite. - If $\rho>0$, then $D$ is positive semi-definite. Together with the fact that $T$ is positive definite, this implies that $A$ is indefinite. In this paper, the case $\rho<0$ will be studied, and we leave the study of the case $\rho>0$ as future work. In the latter case, a sequence of complex symmetric, non-Hermitian, and indefinite linear systems ([\[discretizedCNLS\]](#discretizedCNLS){reference-type="ref" reference="discretizedCNLS"}) needs to be solved efficiently, which poses a great challenge in algorithm design. # The DNTB iteration method In this section, we construct a structured iteration method for solving the complex linear system ([\[equ3\]](#equ3){reference-type="ref" reference="equ3"}) and provide the related convergence theory. Denote by $\text{\bf x}=y+\imath z\in\makebox{{\mbox{\bb C}}}^{M}$ the solution, and by $\text{\bf b}=p+\imath q\in\makebox{{\mbox{\bb C}}}^{M}$ the right-hand side, where $y$, $z$, $p$, $q\in\makebox{{\mbox{\bb R}}}^{M}$ are real vectors.
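Before constructing the iteration method, the definiteness claims above can be sanity-checked numerically. The following numpy sketch (all parameter values are toy assumptions for illustration, not the experiment settings) assembles $A=\imath I+D-T$ for $\rho<0$ and verifies that its Hermitian part $D-T$ is negative definite:

```python
import numpy as np
from math import gamma

# Illustrative toy parameters (assumptions, NOT the paper's experiment settings).
alpha, gam, rho, beta, tau, h, M = 1.5, 1.0, -1.0, 0.5, 0.01, 0.1, 32

# Entries of the symmetric Toeplitz matrix T = mu * (c_{j-k}), with c_{-k} = c_k.
c = [(-1) ** k * gamma(alpha + 1) / (gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1))
     for k in range(M)]
mu = gam * tau / h ** alpha
T = mu * np.array([[c[abs(i - j)] for j in range(M)] for i in range(M)])

# Stand-ins for |u^n|, |v^n| on the grid, giving the diagonal matrix D.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(M), rng.standard_normal(M)
D = np.diag(rho * tau * (np.abs(u) ** 2 + beta * np.abs(v) ** 2))

A = 1j * np.eye(M) + D - T
H = (A + A.conj().T) / 2                    # Hermitian part of A, equal to D - T
assert np.linalg.eigvalsh(T).min() > 0.0    # T is symmetric positive definite
assert np.linalg.eigvalsh(H).max() < 0.0    # A is negative definite for rho < 0
```

Here "negative definite" for the non-Hermitian $A$ means ${\rm Re}(\text{\bf x}^{*}A\text{\bf x})<0$ for all $\text{\bf x}\neq 0$, which is exactly the negative definiteness of the Hermitian part checked above.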
It can be easily verified that the complex linear system $(\ref{equ3})$ is equivalent to the following real non-symmetric positive definite block linear system $$\begin{aligned} \label{positiveBlockForm} \mathcal{R}x\equiv \begin{bmatrix} T-D & -I\\ I & T-D\\ \end{bmatrix} \begin{bmatrix} z \\ y\\ \end{bmatrix} = \begin{bmatrix} -q \\ -p \\ \end{bmatrix} \equiv f,\end{aligned}$$ where $\mathcal{R} \in \makebox{{\mbox{\bb R}}}^{2M \times 2M}$, $x, f\in \makebox{{\mbox{\bb R}}}^{2M}$. The system matrix $\mathcal{R}$ admits a diagonal and normal with Toeplitz-block (DNTB) splitting of the form $$\begin{aligned} \label{rdt} \mathcal{R} = \mathcal{B} + \mathcal{H} \end{aligned}$$ with $$\label{equ4} \mathcal{B} = \begin{bmatrix} -D & 0 \\ 0 & -D \\ \end{bmatrix} \qquad \text{and} \qquad \mathcal{H} = \begin{bmatrix} T & -I \\ I & T \\ \end{bmatrix}.$$ Motivated by the DNTB splitting and the alternating direction implicit (ADI) iteration [@ADI1955; @ADI], we can construct the following DNTB iteration method for solving the block linear system ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}). **Method 1** (The DNTB iteration method). *Let $x^{(0)} \in\makebox{{\mbox{\bb R}}}^{2M}$ be an arbitrary initial guess.
For $k=0,1,2,...$ until the sequence of iterates $\{x^{(k)}\}_{k\ge 0}$ converges, compute the next iterate $x^{(k+1)}$ according to the following procedure: $$\label{equ12} \begin{cases} (\omega I+\mathcal{B})x^{(k+\frac{1}{2})}=(\omega I-\mathcal{H})x^{(k)}+f, \\ (\omega I+\mathcal{H})x^{(k+1)}=(\omega I-\mathcal{B})x^{(k+\frac{1}{2})}+f, \ \end{cases}$$ where $\omega$ is a given positive parameter.* The above alternating iteration can be reformulated into a one-step iteration scheme as follows $$\begin{aligned} x^{(k+1)}=\mathcal{L}_{\omega} x^{(k)}+\mathcal{F}^{-1}_{\omega} f,\end{aligned}$$ with $$\begin{aligned} \mathcal{L}_{\omega}=(\omega I+\mathcal{H})^{-1}(\omega I-\mathcal{B})(\omega I+\mathcal{B})^{-1}(\omega I-\mathcal{H}).\label{lw}\end{aligned}$$ This one-step iteration scheme can also be derived from the following splitting $$\mathcal{R} \ =\ \mathcal{F}_{\omega}-\mathcal{G}_{\omega},$$ with $$\begin{aligned} \label{FG} \begin{cases} \mathcal{F}_{\omega} \ =\ \frac{1}{2\omega} (\omega I+\mathcal{B})(\omega I+\mathcal{H}),\\ \mathcal{G}_{\omega} \ =\ \frac{1}{2\omega} (\omega I-\mathcal{B})(\omega I-\mathcal{H}). \end{cases}\end{aligned}$$ Hence, the iteration matrix reads $\mathcal{L}_{\omega}=\mathcal{F}_{\omega}^{-1}\mathcal{G}_{\omega}$. The convergence property of the DNTB iteration method is summarized in the following theorem. **Theorem 1**. *Let $\mathcal{R}\in\makebox{{\mbox{\bb R}}}^{2M\times 2M}$ be a real non-symmetric positive definite block matrix as defined in ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}). Let $\mathcal{B}, \mathcal{H}\in\makebox{{\mbox{\bb R}}}^{2M\times 2M}$ be the matrices in the DNTB splitting ([\[rdt\]](#rdt){reference-type="ref" reference="rdt"}). 
Let $\omega$ be a positive parameter, and let the iteration matrix $\mathcal{L}_{\omega}$ of the DNTB iteration scheme ([\[equ12\]](#equ12){reference-type="ref" reference="equ12"}) be given by ([\[lw\]](#lw){reference-type="ref" reference="lw"}). Then, the spectral radius of $\mathcal{L}_{\omega}$ is bounded as follows: $$\begin{aligned} \rho {(\mathcal{L}_{\omega})} \le \sigma(\omega) < 1, \end{aligned}$$ with $$\begin{aligned} \label{sigma} \sigma(\omega)= \underset{ \lambda_i \in \lambda{(D)}}{\mathop{\max}}\left\lvert{\frac{\omega+\lambda_i}{\omega-\lambda_i}}\right \rvert \underset{ \mu_i \in \lambda{(T)}}{\mathop{\max}}\sqrt {\frac {(\omega-\mu_i)^2+1}{(\omega+\mu_i)^2+1}}, \end{aligned}$$ where $\lambda{(D)}$ and $\lambda{(T)}$ are the spectral sets of $D$ and $T$, respectively.* *Proof.* Since $\mathcal{B}$ is positive semi-definite, $\mathcal{H}$ is positive definite, and $\omega>0$, the matrices $\omega I+\mathcal{B}$ and $\omega I+\mathcal{H}$ are invertible.
A similarity transformation of $\mathcal{L}_{\omega}$ yields $$\begin{aligned} \widetilde{\mathcal{L}}_{\omega}=(\omega I+\mathcal{H})\mathcal{L}_{\omega}(\omega I+\mathcal{H})^{-1} =\mathcal{U}_{\omega}\mathcal{V}_{\omega}\end{aligned}$$ with $$\begin{aligned} \mathcal{U}_{\omega}=(\omega I-\mathcal{B})(\omega I+\mathcal{B})^{-1} \quad \text{and} \quad \mathcal{V}_{\omega}=(\omega I-\mathcal{H})(\omega I+\mathcal{H})^{-1}.\end{aligned}$$ Since $\mathcal{B}$ is positive semi-definite, the norm of $\mathcal{U}_{\omega}$ can be estimated as $$\begin{aligned} \left \|\mathcal{U}_{\omega} \right \|_2 = \left \|(\omega I-\mathcal{B})(\omega I+\mathcal{B})^{-1}\right \|_2 =\underset{\lambda_i\in\lambda(D)}{\mathop{\max}}\left\lvert{\frac{\omega+\lambda_i}{\omega-\lambda_i}}\right \rvert \le 1.\end{aligned}$$ Due to the fact that $\mathcal{H}^\top\mathcal{H}=\mathcal{H}\mathcal{H}^\top$, it holds that $$\begin{aligned} \mathcal{V}_{\omega}^\top\mathcal{V}_{\omega} &=(\omega I+\mathcal{H}^\top)^{-1}(\omega I-\mathcal{H}^\top)(\omega I-\mathcal{H})(\omega I+\mathcal{H})^{-1}\\ &=(\omega I-\mathcal{H}^\top)(\omega I-\mathcal{H})(\omega I+\mathcal{H}^\top)^{-1}(\omega I+\mathcal{H})^{-1}\\ &= \begin{bmatrix} (\omega I-T)^2+I & 0 \\ 0 & (\omega I-T)^2+I \ \end{bmatrix} \begin{bmatrix} (\omega I+T)^2+I & 0 \\ 0 & (\omega I+T)^2+I \ \end{bmatrix}^{-1}.\end{aligned}$$ Since $T \in \makebox{{\mbox{\bb R}}}^{M\times M}$ is symmetric positive definite, it follows that $$\begin{aligned} \left \|\mathcal{V}_{\omega} \right \|_2 = \underset{\mu_i \in \lambda(T)}{\mathop{\max}} \sqrt {\frac {(\omega-\mu_i)^2+1}{(\omega+\mu_i)^2+1}} < 1.\end{aligned}$$ Therefore, the spectral radius of $\mathcal{L}_{\omega}$ is bounded as follows: $$\begin{gathered} \rho (\mathcal{L}_{\omega})=\rho (\widetilde{\mathcal{L}}_{\omega}) \le \left \|\mathcal{U}_{\omega} \right \|_2 \left \|\mathcal{V}_{\omega} \right \|_2 = \sigma(\omega)<1.\end{gathered}$$ $\hfill\square$ **Remark 1**.
*Here are some remarks on the upper bound $\sigma(\omega)$ provided by Theorem [Theorem 1](#rhoo){reference-type="ref" reference="rhoo"} and on the selection of the parameter $\omega$:* - *The convergence rate of Method [Method 1](#DNTBmethod){reference-type="ref" reference="DNTBmethod"} is bounded by $\sigma(\omega)$, which depends on the spectra of $D$ and $T$, but not on the spectrum of $\mathcal{R}$ or on the eigenvectors of any of these matrices.* - *The upper bound $\sigma(\omega)$ can be relaxed to a new bound $\widehat{\sigma}(\omega)$ as follows: $$\begin{aligned} \label{DNTBupperBoundHat} \sigma(\omega) \le \underset{\lambda_{\min} \le \lambda \le \lambda_{\max}}{\mathop{\max}}\left\lvert{\frac{\omega+\lambda}{\omega-\lambda}}\right \rvert \underset{\mu_{\min} \le \mu \le \mu_{\max}}{\mathop{\max}} \sqrt {\frac {(\omega-\mu)^2+1}{(\omega+\mu)^2+1}} \equiv \widehat{\sigma}(\omega), \end{aligned}$$ where $\lambda_{\min}\le\lambda_{\max}\le 0$ and $0<\mu_{\min}\le\mu_{\max}$ are the lower/upper bounds of the eigenvalues of $D$ and $T$, respectively. Obviously, the bound $\widehat{\sigma}(\omega)$ is the product of two factors, i.e., $$\begin{aligned} \widehat{\sigma}(\omega)=\sigma_1 (\omega)\sigma_2 (\omega) \end{aligned}$$ with $$\begin{aligned} \begin{cases} \sigma_1 (\omega)=\underset{\lambda_{\min} \le \lambda \le \lambda_{\max}}{\mathop{\max}}\left\lvert{\frac{\omega+\lambda}{\omega-\lambda}}\right \rvert,\\ \sigma_2 (\omega)=\underset{\mu_{\min} \le \mu \le \mu_{\max}}{\mathop{\max}} \sqrt {\frac {(\omega-\mu)^2+1}{(\omega+\mu)^2+1}}.
\end{cases} \end{aligned}$$ The factor $\sigma_1 (\omega)$ is minimized at $\omega^\star_1$ (Corollary 2.3 in [@HSS10]), i.e., $$\begin{aligned} \omega^\star_1=\arg\underset{\omega>0}{\mathop{\min}}\left \{\underset{\lambda_{\min} \le \lambda \le \lambda_{\max}}{\mathop{\max}}\left\lvert\frac{\omega+\lambda}{\omega-\lambda}\right\rvert\right \}=\sqrt{\lambda_{\min}\lambda_{\max}} \end{aligned}$$ and $$\begin{aligned} \sigma_1 (\omega^\star_1)=\frac{\sqrt {-\lambda_{\min}}-\sqrt {-\lambda_{\max}}}{\sqrt {-\lambda_{\min}}+\sqrt {-\lambda_{\max}}}. \end{aligned}$$ The factor $\sigma_2 (\omega)$ is minimized at $\omega^\star_2$ (Theorem 2.2 in [@On-suc17]), i.e., $$\begin{aligned} \omega^\star_2=\arg\underset{\omega>0}{\mathop{\min}} \left\{ \underset{\mu_{\min} \le \mu \le \mu_{\max}}{\mathop{\max}}\sqrt {\frac {(\omega-\mu)^2+1}{(\omega+\mu)^2+1}} \right\}= \begin{cases} \begin{aligned} \sqrt{\mu_{\min}\mu_{\max}-1}\mbox{, if $1 < \sqrt{\mu_{\min}\mu_{\max}}$},\\ \sqrt{\mu_{\min}^2+1}\mbox{, if $1 \ge \sqrt{\mu_{\min}\mu_{\max}}$}, \end{aligned} \end{cases} \end{aligned}$$ and $$\begin{aligned} \sigma_2(\omega^\star_2)= \begin{cases} \begin{aligned} \left(\frac{\mu_{\min}+\mu_{\max}-2\sqrt{\mu_{\min}\mu_{\max}-1}}{\mu_{\min}+\mu_{\max}+2\sqrt{\mu_{\min}\mu_{\max}-1}}\right)^{\frac{1}{2}} \mbox{, if $1 < \sqrt{\mu_{\min}\mu_{\max}}$},\\ \left(\frac{\sqrt{\mu_{\min}^{2}+1}-\mu_{\min}}{\sqrt{\mu_{\min}^{2}+1}+\mu_{\min}}\right)^{\frac{1}{2}}\mbox{, if $1 \ge \sqrt{\mu_{\min}\mu_{\max}}$}. \end{aligned} \end{cases} \end{aligned}$$* *Obviously, if the values $\omega^\star_1$ and $\omega^\star_2$ are known in advance, it is reasonable to search for the optimal value of $\omega$ minimizing $\widehat{\sigma}(\omega)$ between them.* The optimal iteration parameter $\omega_{opt}$ that minimizes the upper bound $\widehat{\sigma}(\omega)$ is determined in the following theorem. **Theorem 2**. *Let $\omega>0$ be an arbitrary constant.
Let $\widehat{\sigma}(\omega)$ be the upper bound of the convergence rate of the DNTB iteration method in ([\[DNTBupperBoundHat\]](#DNTBupperBoundHat){reference-type="ref" reference="DNTBupperBoundHat"}). Let $g(\lambda,\mu;\omega)$ be a function with respect to $\omega$ for given constants $\lambda \le 0$ and $\mu > 0$, i.e., $$\begin{aligned} g(\lambda,\mu;\omega) &= \frac{\omega+\lambda}{\omega-\lambda} \sqrt{\frac{(\omega-\mu)^2+1}{(\omega+\mu)^2+1}}.\end{aligned}$$ After introducing the constants $\lambda_\star=\sqrt{\lambda_{\min}\lambda_{\max}}$, $\widetilde{\mu}_\star=\sqrt{\mu_{\min}\mu_{\max}-1}$, $\widehat{\mu}_\star=\sqrt{\mu_{\min}^2+1}$, and defining the functions with respect to $\omega$ as $g_1(\omega)=g(\lambda_{\max},\mu_{\max};\omega)$, $g_2(\omega)=g(\lambda_{\max},\mu_{\min};\omega)$, $g_3(\omega)=-g(\lambda_{\min},\mu_{\min};\omega)$, the optimal value of $\omega$ minimizing the upper bound $\widehat{\sigma}(\omega)$ can be determined as follows.* - *When $1 < \mu_{\min} \mu_{\max}$:* - *if $\lambda_\star<\widetilde{\mu}_\star$, $\widetilde{\mu}_\star<\widehat{\mu}_\star$, it holds $\omega_{\text{\rm opt}} = \arg \min_{\omega\in\{\omega_1,\omega_2\}} \widehat{\sigma}(\omega)$, where $\omega_1 = \arg \min_{\omega\in[\lambda_\star,\widetilde{\mu}_\star]} g_1(\omega)$, and $\omega_2 = \arg \min_{\omega\in[\widetilde{\mu}_\star,\widehat{\mu}_\star]} g_2(\omega)$.* - *if $\lambda_\star<\widetilde{\mu}_\star$, $\widetilde{\mu}_\star \ge \widehat{\mu}_\star$, it holds $\omega_{\text{\rm opt}} = \arg \min_{\omega\in[\lambda_\star,\widetilde{\mu}_\star]} g_1(\omega)$.* - *if $\lambda_\star \ge \widetilde{\mu}_\star$, $\widetilde{\mu}_\star \ge \widehat{\mu}_\star$, it holds $\omega_{\text{\rm opt}} = \arg \min_{\omega\in[\widetilde{\mu}_\star,\lambda_\star]} g_3(\omega)$.* - *if $\lambda_\star \ge \widetilde{\mu}_\star$, $\widetilde{\mu}_\star<\widehat{\mu}_\star$, there are two cases:* - *if $\lambda_\star < \widehat{\mu}_\star$, it holds 
$\omega_{\text{\rm opt}} = \arg \min_{\omega\in[\lambda_\star,\widehat{\mu}_\star]} g_2(\omega).$* - *if $\lambda_\star \ge \widehat{\mu}_\star$, it holds $\omega_{\text{\rm opt}} = \arg \min_{\omega\in[\widehat{\mu}_\star,\lambda_\star]} g_3(\omega).$* - *When $1 \ge \mu_{\min} \mu_{\max}$:* - *if $\lambda_\star<\widehat{\mu}_\star$, it holds $\omega_{\text{\rm opt}} = \arg \min_{\omega\in[\lambda_\star,\widehat{\mu}_\star]} g_2(\omega).$* - *if $\lambda_\star \ge \widehat{\mu}_\star$, it holds $\omega_{\text{\rm opt}} = \arg \min_{\omega\in[\widehat{\mu}_\star,\lambda_\star]} g_3(\omega).$* *Proof.* For $\omega>0$, the monotonicity of $\left|\frac{\omega+\lambda}{\omega-\lambda}\right|$ with respect to $\lambda\le 0$ leads to $$\begin{aligned} \sigma_1(\omega) &= \max \left\{ \left| \frac{\omega+\lambda_{\min}}{\omega-\lambda_{\min}} \right|, \left| \frac{\omega+\lambda_{\max}}{\omega-\lambda_{\max}} \right| \right\},\end{aligned}$$ and the piecewise monotonicity of $\frac{(\omega-\mu)^2+1}{(\omega+\mu)^2+1}$ with respect to $\mu>0$ leads to $$\begin{aligned} \sigma_2(\omega) &= \begin{cases} \max \left\{ \sqrt{\frac {(\omega-\mu_{\min})^2+1}{(\omega+\mu_{\min})^2+1}}, \sqrt{\frac {(\omega-\mu_{\max})^2+1}{(\omega+\mu_{\max})^2+1}} \right\}, & \mbox{if } 1< \mu_{\min}\mu_{\max}, \\ \sqrt{\frac {(\omega-\mu_{\min})^2+1}{(\omega+\mu_{\min})^2+1}}, & \mbox{if } 1\ge \mu_{\min}\mu_{\max}. \end{cases}\end{aligned}$$ Hence, the optimal value $\omega_{\text{\rm opt}}$ of the positive parameter $\omega$ minimizing the upper bound $\widehat{\sigma}(\omega)$ in ([\[DNTBupperBoundHat\]](#DNTBupperBoundHat){reference-type="ref" reference="DNTBupperBoundHat"}) can be obtained by considering the piecewise form and the monotonicity of $\widehat{\sigma}(\omega)$ and the different cases stated in Theorem [Theorem 2](#DNTBoptimalParameter){reference-type="ref" reference="DNTBoptimalParameter"}. $\hfill\square$ **Remark 2**.
*The exact formulae of $\omega_{\text{\rm opt}}$ can be determined by finding the zeroes of the equation $$\begin{aligned} \frac{\text{d}}{\text{d}\omega} g(\lambda,\mu;\omega) & =0 \quad\text{with $\omega\in[\omega_{\text{\rm L}}, \omega_{\text{\rm U}}]\subset (0,+\infty)$}, \end{aligned}$$ where the interval $[\omega_{\text{\rm L}}, \omega_{\text{\rm U}}]$ represents any of the cases included in Theorem [Theorem 2](#DNTBoptimalParameter){reference-type="ref" reference="DNTBoptimalParameter"}. The above equation can be equivalently reformulated into a quartic equation that is quadratic in $\omega^2$, i.e., $$\begin{aligned} \label{quartic_equation} \Upsilon\ \omega^4 + \Theta\ \omega^2 + \Xi & = 0 \quad\text{with $\omega\in[\omega_{\text{\rm L}}, \omega_{\text{\rm U}}]\subset (0,+\infty)$}, \end{aligned}$$ where $\Upsilon = \mu - \lambda$, $\Theta = 2\lambda(\mu^2-1) - \mu(\mu^2+\lambda^2+1)$, and $\Xi = \lambda(\mu^2+1)(\lambda\mu-\mu^2-1)$. Denoting by $\Delta=\Theta^2-4\Upsilon\Xi$ the discriminant of ([\[quartic_equation\]](#quartic_equation){reference-type="ref" reference="quartic_equation"}) viewed as a quadratic in $\omega^2$, we can determine the optimal value $\omega_{\text{\rm opt}}$ of $\omega$ minimizing $g_i(\omega)$ ($i=1$, $2$, $3$) as follows:* - *When $\Delta<0$, the quartic equation ([\[quartic_equation\]](#quartic_equation){reference-type="ref" reference="quartic_equation"}) has no real positive zero.
Therefore, it holds $\omega_{\text{\rm opt}} =\arg \min_{\omega\in\{\omega_{\text{\rm L}}, \omega_{\text{\rm U}}\}} g_i(\omega)$.* - *When $\Delta\ge 0$, the following cases should be considered:* - *if the quartic equation ([\[quartic_equation\]](#quartic_equation){reference-type="ref" reference="quartic_equation"}) has no positive zero in the interval $[\omega_{\text{\rm L}}, \omega_{\text{\rm U}}]$, it holds $$\omega_{\text{\rm opt}} =\arg \min_{\omega\in\{\omega_{\text{\rm L}}, \omega_{\text{\rm U}}\}} g_i(\omega);$$* - *if the quartic equation ([\[quartic_equation\]](#quartic_equation){reference-type="ref" reference="quartic_equation"}) has one positive zero $\omega_0\in[\omega_{\text{\rm L}}, \omega_{\text{\rm U}}]$, it holds $$\omega_{\text{\rm opt}} =\arg \min_{\omega\in\{\omega_{\text{\rm L}}, \omega_0, \omega_{\text{\rm U}}\}} g_i(\omega);$$* - *if the quartic equation ([\[quartic_equation\]](#quartic_equation){reference-type="ref" reference="quartic_equation"}) has two positive zeroes $\widehat{\omega}_0$, $\widetilde{\omega}_0\in[\omega_{\text{\rm L}}, \omega_{\text{\rm U}}]$, it holds $$\omega_{\text{\rm opt}} =\arg \min_{\omega\in\{\omega_{\text{\rm L}}, \widehat{\omega}_0, \widetilde{\omega}_0, \omega_{\text{\rm U}}\}} g_i(\omega).$$* *Obviously, the above idea for determining $\omega_{\text{\rm opt}}$ is based on the constants $\Upsilon$, $\Theta$, $\Xi$, $\omega_{\text{\rm L}}$, $\omega_{\text{\rm U}}$, which can be derived from the values of $\lambda_{\min}$, $\lambda_{\max}$, $\mu_{\min}$, $\mu_{\max}$. 
In addition, $\lambda_{\min}$ and $\lambda_{\max}$ can be easily obtained since they are eigenvalues of the diagonal matrix $D$, while $\mu_{\min}$ and $\mu_{\max}$ are the extreme eigenvalues of the discrete fractional Laplacian $T$, which can be estimated according to the eigenvalue bounds stated in Lemma [Lemma 2](#lambda){reference-type="ref" reference="lambda"}.* # Preconditioning The matrix $\mathcal{F}_{\omega}$ in ([\[FG\]](#FG){reference-type="ref" reference="FG"}) can serve as a preconditioner of Krylov subspace iteration methods for solving the linear system ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}). We call $\mathcal{F}_{\omega}$ the DNTB preconditioner. The main computational workload for implementing the preconditioner $\mathcal{F}_{\omega}$ is to solve the related generalized residual (GR) equation, which further requires solving two linear subsystems, namely $(\omega I+\mathcal{B})x=r$ and $(\omega I+\mathcal{H})x=r$. The former can be solved directly in $\mathcal{O}(M)$ flops since $\mathcal{B}$ is diagonal. The latter requires solving a Schur complement linear subsystem with coefficient matrix of the form $(\omega I + T) + (\omega I + T)^{-1}\in \makebox{{\mbox{\bb R}}}^{M\times M}$, which cannot be handled by fast Toeplitz algorithms, since the inverse of a Toeplitz matrix is in general no longer Toeplitz. To remedy this drawback of $\mathcal{F}_{\omega}$, we consider adopting the circulant preconditioning technique to reduce the computational costs, i.e., replacing the discrete fractional Laplacian $T$ by a circulant approximation. Then, we obtain a new preconditioner as follows: $$\begin{aligned} \label{FA} \widetilde{\mathcal{F}}_{\omega}=\frac{1}{2\omega} (\omega I+\mathcal{B})(\omega I+\mathcal{C}),\end{aligned}$$ where $\mathcal{C}=\bigl[ \begin{smallmatrix} C & -I \\ I & C \end{smallmatrix}\bigr]$, and one can take $C\in \makebox{{\mbox{\bb R}}}^{M\times M}$ as any circulant approximation of $T$.
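The payoff of the circulant replacement is that the block subsystem $(\omega I+\mathcal{C})\bigl[\begin{smallmatrix} a \\ b \end{smallmatrix}\bigr]=\bigl[\begin{smallmatrix} r_1 \\ r_2 \end{smallmatrix}\bigr]$ decouples in the Fourier domain: writing $K=\omega I+C$, block elimination gives $(K^2+I)a=Kr_1+r_2$ and $b=Ka-r_1$, and both steps are diagonal after an FFT. A minimal numpy sketch (function names, toy sizes, and the wrap-around first column are illustrative assumptions), cross-checked against a dense solve:

```python
import numpy as np
from math import gamma

def strang_first_col(alpha, mu, M):
    # First column of a Strang-type circulant approximation of T (M even):
    # mu * [c_0, c_1, ..., c_{M/2-1}, 0, c_{M/2-1}, ..., c_1].
    c = [(-1) ** j * gamma(alpha + 1) / (gamma(alpha / 2 - j + 1) * gamma(alpha / 2 + j + 1))
         for j in range(M // 2)]
    return mu * np.array(c + [0.0] + c[:0:-1])

def solve_block_circulant(omega, col, r1, r2):
    """Solve (omega*I + [[C, -I], [I, C]]) [a; b] = [r1; r2], C circulant with
    first column `col`, via FFT diagonalization: (K^2 + I) a = K r1 + r2 and
    b = K a - r1, where K = omega*I + C has eigenvalues omega + fft(col)."""
    d = omega + np.fft.fft(col)
    f1, f2 = np.fft.fft(r1), np.fft.fft(r2)
    a_hat = (d * f1 + f2) / (d * d + 1.0)
    b_hat = d * a_hat - f1
    return np.fft.ifft(a_hat).real, np.fft.ifft(b_hat).real

M, omega = 16, 0.7
col = strang_first_col(1.5, 0.3, M)
rng = np.random.default_rng(1)
r1, r2 = rng.standard_normal(M), rng.standard_normal(M)
a, b = solve_block_circulant(omega, col, r1, r2)

# Dense cross-check of the 2M x 2M block system.
C = np.array([[col[(i - j) % M] for j in range(M)] for i in range(M)])
S = np.block([[omega * np.eye(M) + C, -np.eye(M)],
              [np.eye(M), omega * np.eye(M) + C]])
ref = np.linalg.solve(S, np.concatenate([r1, r2]))
assert np.allclose(np.concatenate([a, b]), ref)
```

Each application then costs $\mathcal{O}(M\log M)$, in contrast with the Toeplitz Schur complement of $\mathcal{F}_{\omega}$, whose inverse admits no such diagonalization.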
Obviously, $\widetilde{\mathcal{F}}_{\omega}$ is the product of a scalar $1/(2\omega)$, a diagonal matrix $\omega I+\mathcal{B}$, and a normal matrix $\omega I+\mathcal{C}$ with circulant blocks. Thus, $\widetilde{\mathcal{F}}_{\omega}$ is called the diagonal and normal with circulant-block (DNCB) preconditioner. In the sequel, the theoretical analysis of $\widetilde{\mathcal{F}}_{\omega}$ is based on Strang's circulant approximation [@Cir-pre20] of even order for the sake of simplicity; similar theories can also be developed for the odd-order case. When $M$ is even, it reads $$\label{equ6} C=\mu \begin{bmatrix} c_0 & c_{1} & \cdots & c_{M/2-1} & 0 & c_{M/2-1} & \cdots & c_{1} \\ c_{1} & c_0 & \ddots & \ddots & c_{M/2-1} & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & c_{M/2-1} \\ c_{M/2-1} & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0\\ 0 & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & c_{M/2-1}\\ c_{M/2-1} & 0 & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & c_0 & c_{1} \\ c_{1} & \cdots & c_{M/2-1} & 0 & c_{M/2-1} & \ddots & c_{1} & c_0 \\ \end{bmatrix}.$$ In order to obtain sharp eigenvalue bounds for the discrete fractional Laplacian $T$ and its Strang's circulant approximation $C$, we first prove the following lemma. **Lemma 1**. *Let $c_j=(-1)^j\Gamma(\alpha+1)/[\Gamma(\alpha/2-j+1)\Gamma(\alpha/2+j+1)]$, $k_0 \ge 3$, and $1<\alpha \le 2$. Then, it holds that $$\begin{gathered} \frac{(1-\frac{1+\alpha}{5+\alpha/2})^{5+\alpha/2} e^{\alpha+1} \Gamma(\alpha+1) \sin(\pi \alpha/2)}{\pi \alpha (k_0+1/2)^{\alpha}}< \sum_{j=k_0+1}^{\infty}|c_j| < \frac{\sqrt{2} e^{13/12} \Gamma(\alpha+1) \sin(\pi \alpha /2)}{\pi \alpha (k_0-1)^{\alpha}}.
\end{gathered}$$* *Proof.* According to the property of the Gamma function $\Gamma(s+1)=s\Gamma(s)$, $\forall s \in \makebox{{\mbox{\bb C}}}\backslash \{0,-1,-2,...\}$, it reads $$\begin{aligned} \Gamma(\alpha / 2-j+1)=\frac{\Gamma(\alpha / 2)}{\displaystyle\prod_{l=2}^j(\alpha / 2-l+1)}\end{aligned}$$ and $$\begin{aligned} \left \vert \displaystyle\prod_{l=2}^j(\alpha / 2-l+1) \right \vert=\displaystyle\prod_{l=2}^j(l-1-\alpha / 2)=\frac{\Gamma(j-\alpha / 2)}{\Gamma(1-\alpha / 2)},\end{aligned}$$ together with the fact $\Gamma(\alpha / 2)\Gamma(1-\alpha / 2)=\pi / \sin(\pi \alpha/2)$, it follows that $$\begin{aligned} \left \vert c_j \right \vert = \frac{\Gamma(\alpha+1)\sin(\pi \alpha / 2)}{\pi} \frac{\Gamma(j-\alpha / 2)}{\Gamma(j+1+\alpha / 2)}.\end{aligned}$$ Based on the following asymptotic expansion of the Gamma function $$\begin{aligned} \Gamma(s)=\sqrt{2\pi}s^{s-1/2}e^{-s}(1+\mathcal{O}(1 / s)) \quad \text{as}\ s \rightarrow +\infty,\end{aligned}$$ where $\mathcal{O}(1 / s)$ is positive and monotonically decreasing for $s > 0$, and $1+\mathcal{O}(1 / s)<e^{13/12}$ for $s \ge 2$ [@robbin23], it reads $$\begin{aligned} \frac{\Gamma(j-\alpha / 2)}{\Gamma(j+1+\alpha / 2)}=(1-\frac{1+\alpha}{j+1+\alpha / 2})^{j+1+\alpha / 2}(1-\frac{1+\alpha}{j+1+\alpha / 2})^{-1/2}(j-\alpha / 2)^{-(\alpha+1)}e^{\alpha+1}\frac{1+\mathcal{O}(\frac{1}{j-\alpha / 2})}{1+\mathcal{O}(\frac{1}{j+1+\alpha / 2})}.\end{aligned}$$ Due to the facts that $(1-\frac{\alpha+1}{s})^s$ is monotonically increasing and satisfies $(1-\frac{\alpha+1}{s})^s \le e^{-(\alpha+1)}$ for $s \ge \alpha+1$, that $1 < (1-\frac{1+\alpha}{j+1+\alpha/2})^{-1 / 2} \le \sqrt{2}$ for $j \ge 4$, and that $(j-1 / 2)^{-(\alpha+1)} < (j-\alpha / 2)^{-(\alpha+1)} \le (j-1)^{-(\alpha+1)}$ for $j > 1$, it holds that $$\begin{aligned} (1-\frac{1+\alpha}{5+\alpha/2})^{5+\alpha / 2} e^{\alpha+1} (j-1 / 2)^{-(\alpha+1)} < \frac{\Gamma(j-\alpha / 2)}{\Gamma(j+1+\alpha / 2)} < \sqrt{2} e^{13/12} (j-1)^{-(\alpha+1)}.\end{aligned}$$ Then, the estimate of $|c_j|$ for $j \ge 4$ reads $$\begin{aligned}
\frac{(1-\frac{1+\alpha}{5+\alpha/2})^{5+\alpha / 2} e^{\alpha+1} \Gamma(\alpha+1)\sin(\pi \alpha / 2)}{\pi (j-1 / 2)^{\alpha+1}}\le |c_j| \le \frac{\sqrt{2} e^{13/12}\Gamma(\alpha+1)\sin(\pi \alpha / 2)}{\pi (j-1)^{\alpha+1}}.\end{aligned}$$ The above estimate, together with the facts $k_0 \ge 3$, $\displaystyle \sum_{j=k_0+1}^{+\infty}(j-1 / 2)^{-(\alpha+1)} > \frac{1}{\alpha(k_0+1 / 2)^\alpha}$, and $\displaystyle \sum_{j=k_0+1}^{+\infty}(j-1)^{-(\alpha+1)} <\frac{1}{\alpha(k_0-1)^\alpha}$, results in the estimate of $\sum_{j=k_0+1}^{\infty}|c_j|$. $\hfill\square$ Based on Lemma [Lemma 1](#lemma1){reference-type="ref" reference="lemma1"}, we can derive the eigenvalue bounds of $T$ and $C$. **Lemma 2**. *Let $T$ be the Toeplitz matrix in ([\[equ5\]](#equ5){reference-type="ref" reference="equ5"}), and let $C$ be Strang's circulant approximation of $T$ in ([\[equ6\]](#equ6){reference-type="ref" reference="equ6"}). Let $M$ be even, and denote $$\begin{aligned} \theta = \frac{(1-\frac{1+\alpha}{5+\alpha/2})^{5+\alpha/2} e^{\alpha+1} \Gamma(\alpha+1) \sin(\pi \alpha /2)}{ \pi \alpha}. \end{aligned}$$ Then, it holds that $$\begin{aligned} \frac{2\gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha}} < \lambda_T < \frac{2 \gamma \tau}{h^{\alpha}} \left[\frac{\Gamma(\alpha+1)}{\Gamma(\alpha/2+1)^2}- \frac{\theta h^{\alpha}}{(\text{\rm b}-\text{\rm a})^{\alpha}}\right], \quad \text{for $M \ge 4$}, \end{aligned}$$ and $$\begin{aligned} \frac{2^{\alpha+1}\gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha} } < \lambda_C < \frac{2 \gamma \tau}{h^{\alpha}} \left[\frac{\Gamma(\alpha+1)}{\Gamma(\alpha/2+1)^2}- \frac{2^{\alpha} \theta h^{\alpha}}{(\text{\rm b}-\text{\rm a})^{\alpha}}\right], \quad \text{for $M \ge 8$}, \end{aligned}$$ where $\lambda_T$ and $\lambda_C$ are eigenvalues of $T$ and $C$, respectively.* *Proof.* According to the Gerschgorin disk theorem [@GolubBook], the properties of $c_k$, and Lemma [Lemma 1](#lemma1){reference-type="ref"
reference="lemma1"}, it holds that $$\begin{aligned} \mu(c_0-2\sum_{k=1}^{M-1}|c_k|) \le &\lambda_T \le \mu(c_0+2\sum_{k=1}^{M-1}|c_k|)\\ 2\mu\sum_{k=M}^{+\infty}|c_k| \le &\lambda_T \le 2 \mu(c_0-\sum_{k=M}^{+\infty}|c_k|)\\ \frac{2\gamma \tau \theta }{(\mbox{b}-\mbox{a})^{\alpha}} < &\lambda_T < 2 \frac{\gamma \tau}{h^{\alpha}} \left[\frac{\Gamma(\alpha+1)}{\Gamma(\alpha/2+1)^2}- \frac{\theta h^{\alpha}}{(\mbox{b}-\mbox{a})^{\alpha}}\right], \quad \text{for $M \ge 4$},\end{aligned}$$ and $$\begin{aligned} \mu(c_0-2\sum_{k=1}^{M/2-1}|c_k|) \le &\lambda_C \le \mu(c_0+2\sum_{k=1}^{M/2-1}|c_k|)\\ 2\mu\sum_{k=M/2}^{+\infty}|c_k| \le &\lambda_C \le 2 \mu(c_0-\sum_{k=M/2}^{+\infty}|c_k|)\\ \frac{2^{\alpha+1}\gamma \tau \theta }{(\mbox{b}-\mbox{a})^{\alpha} } < &\lambda_C < 2 \frac{\gamma \tau}{h^{\alpha}} \left[\frac{\Gamma(\alpha+1)}{\Gamma(\alpha/2+1)^2}- \frac{2^{\alpha} \theta h^{\alpha}}{(\mbox{b}-\mbox{a})^{\alpha}}\right], \quad \text{for $M \ge 8$}.\end{aligned}$$ $\hfill\square$ **Remark 3**. *According to Lemma [Lemma 2](#lambda){reference-type="ref" reference="lambda"}, the upper bounds of the spectral condition numbers of $T$ and $C$ read $$\begin{aligned} \kappa(T) \le \frac{(M+1)^{\alpha}\Gamma(\alpha+1)}{\theta\Gamma(\alpha/2+1)^2} - 1 \quad \text{and} \quad \kappa(C) \le \frac{(M+1)^{\alpha}\Gamma(\alpha+1)}{\theta 2^{\alpha} \Gamma(\alpha/2+1)^2} - 1. \end{aligned}$$ Obviously, the magnitudes of $\kappa(T)$ and $\kappa(C)$ are of the same order, say $\mathcal{O}((M+1)^\alpha)$. Larger values of the space fractional order parameter $\alpha$ may lead to larger values of the condition numbers $\kappa(T)$ and $\kappa(C)$, which results in more ill-conditioned matrices $T$ and $C$. 
Therefore, a discrete fractional Laplacian $T$ with larger $\alpha$ may bring more difficulties in solving the block linear system ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}), which is confirmed by the experiments in Section 6.* Since the DNCB preconditioned system matrix $\widetilde{\mathcal{F}}^{-1}_{\omega}\mathcal{R}$ can be factorized as $$\begin{aligned} \label{two} \widetilde{\mathcal{F}}^{-1}_{\omega}\mathcal{R} &=\left(\widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}\right) \left(\mathcal{F}_{\omega}^{-1} \mathcal{R}\right),\end{aligned}$$ the property of $\widetilde{\mathcal{F}}^{-1}_{\omega}\mathcal{R}$ can be derived from the properties of both $\widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}$ and the DNTB preconditioned system matrix $\mathcal{F}_{\omega}^{-1} \mathcal{R}$. First, the result on $\mathcal{F}^{-1}_{\omega}\mathcal{R}$ is stated in the following theorem. **Theorem 3**. *Let $\mathcal{R} \in \mathbb{R}^{2M \times 2M}$ be the real non-symmetric positive definite matrix in ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}), $\omega > 0$ be a prescribed parameter, and $\sigma(\omega)$ be defined by ([\[sigma\]](#sigma){reference-type="ref" reference="sigma"}). Then, the eigenvalues of the DNTB preconditioned system matrix $\mathcal{F}_{\omega}^{-1}\mathcal{R}$ are located in a disk of radius $\sigma(\omega)$ centered at $1$.* *Proof.* The result can be derived easily from the relation $\mathcal{L}_{\omega} = I-\mathcal{F}_{\omega}^{-1}\mathcal{R}$ and Theorem [Theorem 1](#rhoo){reference-type="ref" reference="rhoo"}. $\hfill\square$ Second, by taking advantage of Lemmas [Lemma 1](#lemma1){reference-type="ref" reference="lemma1"} and [Lemma 2](#lambda){reference-type="ref" reference="lambda"}, we can summarize the property of $\widetilde{\mathcal{F}}^{-1}_{\omega}\mathcal{F}_{\omega}$ as follows.
**Theorem 4**. *Let $1<\alpha<2$, and let $M \ge 8$ be even. Define the constants $$\begin{aligned} \theta = \frac{(1-\frac{1+\alpha}{5+\alpha/2})^{5+\alpha/2} e^{1+\alpha} \Gamma(\alpha+1) \sin(\pi \alpha/2)}{\pi \alpha } \quad \text{and} \quad \theta_{0} = \frac{\sqrt{2} e^{13/12} \Gamma(1+\alpha) \sin(\pi \alpha/2)}{ \pi \alpha}, \end{aligned}$$ let $\epsilon$ be a small positive constant satisfying $2^{\alpha} \mu \theta_0 / (M-2)^{\alpha}<\epsilon \le \mu \theta_{0}$, and set $k_0 = \lceil (\mu \theta_{0} / \epsilon)^{1/\alpha} \rceil +1$, where $\lceil \centerdot \rceil$ rounds a real number towards positive infinity. Then, there exist two matrices $\mathcal{E}_{\omega c} \in \mathbb{R}^{2M \times 2M}$ and $\mathcal{F}_{\omega c} \in \mathbb{R}^{2M \times 2M}$ satisfying ${\rm rank}(\mathcal{E}_{\omega c})=4k_0$ and $$\begin{gathered} \| \mathcal{E}_{\omega c} \|_2 \le \frac{M^{1/2} \mu}{\sqrt{1+\left[\omega+\frac{2^{\alpha+1} \gamma \tau \theta}{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]^2}}\left[\frac{c_0}{2}-\frac{\theta}{(M-1/2)^{\alpha}}\right], \quad \| \mathcal{F}_{\omega c} \|_2 \le \frac{M^{1/2} \epsilon}{\sqrt{1+\left[\omega+\frac{2^{\alpha+1} \gamma \tau \theta}{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]^2}}, \end{gathered}$$ such that $$\begin{gathered} \widetilde{\mathcal{F}}_{\omega}^{-1}\mathcal{F}_{\omega} = I+\mathcal{E}_{\omega c}+\mathcal{F}_{\omega c}.
\end{gathered}$$* *Proof.* According to the structure of $T$ and $C$, and the fact that $M$ is even, it reads $$\begin{aligned} T-C=\widehat{E}+\widehat{F}\end{aligned}$$ where $$\begin{gathered} \widehat{E}=\mu \begin{bmatrix} 0 & 0 & \widehat{E}_{13} \\ 0 & 0 & 0\\ \widehat{E}^{\top}_{13} & 0 & 0 \\ \end{bmatrix},\quad \widehat{F}=\mu \begin{bmatrix} 0 & \widehat{F}_{12} & 0 \\ \widehat{F}^{\top}_{12} & 0 & 0 \\ 0 & 0 & 0\\ \end{bmatrix}.\end{gathered}$$ and $$\begin{gathered} \widehat{F}_{12}= \begin{bmatrix} c_{M/2} & c_{M/2+1}-c_{M/2-1} & \cdots & c_{M-k_0-1}-c_{k_0+1}\\ 0 & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & c_{M/2+1}-c_{M/2-1} \\ \vdots & \ddots & \ddots & c_{M/2}\\ 0 & \cdots & \cdots & 0\\ \vdots & & & \vdots\\ 0 & \cdots & \cdots & 0\\ \end{bmatrix} \in \mathbb{R}^{(M/2)\times\left(M/2-k_0\right)},\end{gathered}$$ $$\begin{gathered} \widehat{E}_{13}= \begin{bmatrix} c_{M-k_0}-c_{k_0} & \cdots & \cdots & c_{M-1}-c_1\\ \vdots & \ddots & \ddots & \vdots\\ c_{M/2+1}-c_{M/2-1} & \ddots & \ddots & \vdots\\ c_{M/2} & \ddots & \ddots & \vdots\\ 0 & \ddots & \ddots & c_{M-k_0}-c_{k_0}\\ \vdots & \ddots & \ddots & c_{M/2+1}-c_{M/2-1}\\ 0 & \cdots & 0 & c_{M/2}\\ \end{bmatrix} \in \mathbb{R}^{(M/2)\times k_0}.\end{gathered}$$ with $2\le k_0 \le M/2$. Obviously, the form of $\widehat{E}$ guarantees ${\rm rank}(\widehat{E})=2k_0$. 
In addition, the structures of $\widehat{E}$ and $\widehat{F}$ together with Lemma [Lemma 1](#lemma1){reference-type="ref" reference="lemma1"} lead to the following $\ell_{\infty}$-norm estimates $$\begin{aligned} \| \widehat{E} \|_\infty &=\mu \max\left\{{ \| \widehat{E}_{13} \|_\infty, \| \widehat{E}^\top_{13} \|_\infty}\right\}=\mu \| \widehat{E}_{13} \|_\infty\\ & \le \mu \sum^{M-1}_{j=1}|c_j|=\mu\left(\frac{c_0}{2}-\sum_{j=M}^{\infty}|c_{j}|\right)\\ & < \mu\left[\frac{c_0}{2}-\frac{\theta}{({M-1/2)}^{\alpha}}\right],\end{aligned}$$ $$\begin{aligned} \| \widehat{F} \|_\infty &=\mu \max\left\{{ \| \widehat{F}_{12} \|_\infty, \| \widehat{F}^\top_{12} \|_\infty}\right\}=\mu \| \widehat{F}_{12} \|_\infty\\ & \le \mu \sum^{M-k_0-1}_{j=k_0+1}|c_j|<\mu \sum^\infty_{j=k_0+1}|c_j|\\ & < \frac{\mu \theta_{0}}{(k_0-1)^{\alpha}} < \epsilon.\end{aligned}$$ Straightforward computations lead to $$\begin{aligned} \widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}-I & = \begin{bmatrix} \omega I+C & -I\\ I & \omega I+C \end{bmatrix}^{-1} \begin{bmatrix} T-C & 0\\ 0 & T-C \end{bmatrix}\\ & = \mathcal{E}_{\omega c}+\mathcal{F}_{\omega c},\end{aligned}$$ where $$\begin{gathered} \mathcal{E}_{\omega c}= \begin{bmatrix} \omega I+C & -I\\ I & \omega I+C \end{bmatrix}^{-1} \begin{bmatrix} \widehat{E} & 0\\ 0 & \widehat{E} \end{bmatrix} \quad \text{and} \quad \mathcal{F}_{\omega c}= \begin{bmatrix} \omega I+C & -I\\ I & \omega I+C \end{bmatrix}^{-1} \begin{bmatrix} \widehat{F} & 0\\ 0 & \widehat{F} \end{bmatrix}.\end{gathered}$$ Due to the equivalence of vector norms and the $\ell_{\infty}$-norm estimates of $\widehat{E}$ and $\widehat{F}$, we can obtain the $\ell_2$-norm estimates of $\mathcal{E}_{\omega c}$ and $\mathcal{F}_{\omega c}$ as follows $$\begin{aligned} \left \|\mathcal{E}_{\omega c}\right \|_2 &\le \left \| \begin{bmatrix} \omega I+C & -I\\ I & \omega I+C \end{bmatrix}^{-1} \right \|_2 \left \|\widehat{E}\right \|_2 \\ &\le \left \| \begin{bmatrix} \omega I+C & 
-I\\ I & \omega I+C \end{bmatrix}^{-1} \right \|_2 M^{1/2} \| \widehat{E} \|_{\infty}\\ &\le \frac{M^{1/2} \mu}{\sqrt{1+\left[\omega+\frac{2^{\alpha+1} \gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]^2}}\left[\frac{c_0}{2}-\frac{\theta}{(M-1/2)^{\alpha}}\right],\end{aligned}$$ $$\begin{aligned} \left \|\mathcal{F}_{\omega c}\right \|_2 &\le \left \| \begin{bmatrix} \omega I+C & -I\\ I & \omega I+C \end{bmatrix}^{-1} \right \|_2 \left \|\widehat{F}\right \|_2 \\ &\le \left \| \begin{bmatrix} \omega I+C & -I\\ I & \omega I+C \end{bmatrix}^{-1} \right \|_2 M^{1/2} \| \widehat{F} \|_{\infty}\\ &< \frac{M^{1/2} \epsilon}{\sqrt{1+\left[\omega+\frac{2^{\alpha+1} \gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]^2}}.\end{aligned}$$ $\hfill\square$ **Remark 4**. *According to Theorem [Theorem 4](#ffief){reference-type="ref" reference="ffief"}, we know that $\widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}-I=\mathcal{E}_{\omega c}+\mathcal{F}_{\omega c}$, where $\mathcal{E}_{\omega c}$ is a low rank matrix, and $\mathcal{F}_{\omega c}$ is a small norm matrix. On the one hand, the matrix $I + \mathcal{F}_{\omega c}$ is a small perturbation of the identity matrix $I$. Specifically, the Bauer-Fike theorem [@BauerFike1960] implies that the eigenvalues of $I + \mathcal{F}_{\omega c}$ are clustered in a small disk centered at 1. On the other hand, since the matrix $\mathcal{E}_{\omega c}$ has bounded $\ell_2$-norm and low rank, the matrix $\widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}$ can be considered as a low rank correction of $I + \mathcal{F}_{\omega c}$. Then, one can expect that only a small number of the eigenvalues of $\widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}$ drift relatively further away from those of $I + \mathcal{F}_{\omega c}$.
Hence, most of the eigenvalues of $\widetilde{\mathcal{F}}^{-1}_{\omega} \mathcal{F}_{\omega}$ are clustered around $1$.* Finally, together with the relation ([\[two\]](#two){reference-type="ref" reference="two"}), and Theorems [Theorem 3](#FR){reference-type="ref" reference="FR"} and [Theorem 4](#ffief){reference-type="ref" reference="ffief"}, we turn to consider the eigenvalue clustering property of $\widetilde{\mathcal{F}}^{-1}_{\omega}\mathcal{R}$. **Theorem 5**. *Let $1<\alpha \le 2$, and $M \ge 8$ be even, and $$\begin{aligned} \xi=\frac{2M^{1/2}\sqrt{1+\left\{\omega+2\left[\frac{\Gamma(\alpha+1)\mu}{\Gamma(\alpha/2+1)^2}-\frac{\theta \gamma \tau}{ (\text{\rm b}-\text{\rm a})^{\alpha}}\right]\right\}^2}}{\sqrt{1+\left[\omega+\frac{2^{\alpha+1} \gamma \tau \theta}{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]^2}\sqrt{1+\left[\omega +\frac{2 \gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]^2}}. \end{aligned}$$ Let $\epsilon$ be a small positive constant satisfying $2^{\alpha} \mu \theta_0 / (M-2)^{\alpha}\le \epsilon \le \mu \theta_{0}$, and $k_0 = \lceil ( \mu \theta_{0} / \epsilon)^{1 /\alpha} \rceil +1$. Then, there exist two matrices $\mathcal{M}_{\omega} \in \mathbb{R}^{2M \times 2M}$ and $\mathcal{N}_{\omega} \in \mathbb{R}^{2M \times 2M}$ satisfying ${\rm rank}(\mathcal{M}_{\omega})=4k_0$, $\| \mathcal{M}_{\omega} \|_2 \le \mu \xi \bigl[\frac{c_0}{2}-\frac{\theta}{(M-1/2)^{\alpha}}\bigr]$, and $\| \mathcal{N}_{\omega} \|_2 \le \xi \epsilon$, such that $$\begin{gathered} \widetilde{\mathcal{F}}^{-1}_{\omega}\mathcal{R} = \mathcal{F}_{\omega}^{-1} \mathcal{R}+\mathcal{M}_{\omega}+\mathcal{N}_{\omega}. 
\end{gathered}$$* *Proof.* It can be easily verified that $$\begin{aligned} \mathcal{F}_{\omega}^{-1}\mathcal{R} &= I-\mathcal{F}_{\omega}^{-1}\mathcal{G}_{\omega}\\ & = (\omega I+\mathcal{H})^{-1}(I-\widetilde{\mathcal{L}}_{\omega})(\omega I+\mathcal{H})\end{aligned}$$ with $\widetilde{\mathcal{L}}_{\omega}=\mathcal{U}_{\omega} \mathcal{V}_{\omega}$. According to the proof of Theorem [Theorem 1](#rhoo){reference-type="ref" reference="rhoo"}, we know that $\| \widetilde{\mathcal{L}}_{\omega} \|_2 < 1$, it then holds $\|I- \widetilde{\mathcal{L}}_{\omega}\|_2 \le \| I \|_2 + \|\widetilde{\mathcal{L}}_{\omega}\|_2 < 2$. The lower/upper bound estimates of the eigenvalues of $T$ provided in Lemma [Lemma 2](#lambda){reference-type="ref" reference="lambda"} lead to the following fact $$\begin{aligned} \|(\omega I+\mathcal{H})^{-1}\|_2 \| \omega I+\mathcal{H} \|_2 & \le \frac{\displaystyle \max_{\lambda_{i} \in \lambda(T)} \sqrt{1+(\omega+\lambda_i)^2}}{\displaystyle \min_{\lambda_{i} \in \lambda(T)} \sqrt{1+(\omega+\lambda_i)^2}}\\ & \le \frac{\sqrt{1+\left\{\omega+2\left[\frac{\Gamma(\alpha+1)\mu}{\Gamma(\alpha / 2+1)^2}-\frac{\theta \gamma \tau }{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]\right\}^2}}{\sqrt{{1+\left[\omega+\frac{2\gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha}} \right]^2}}}.\end{aligned}$$ Then, the $\ell_2$-norm of $\mathcal{F}^{-1}_{\omega} \mathcal{R}$ is bounded as follows $$\begin{aligned} \left \| \mathcal{F}^{-1}_{\omega} \mathcal{R} \right \|_2 & \le \left \|(\omega I+\mathcal{H})^{-1}\right \|_2 \left \| \omega I+\mathcal{H} \right \|_2 \left \| I-\widetilde{\mathcal{L}}_{\omega} \right \|_2\\ & \le \frac{2 \sqrt{1+\left\{\omega+2\left[\frac{\Gamma(\alpha+1)\mu}{\Gamma(\alpha/2+1)^2}-\frac{\theta \gamma \tau }{(\text{\rm b}-\text{\rm a})^{\alpha}}\right]\right\}^2}}{\sqrt{{1+\left[\omega+\frac{2\gamma \tau \theta }{(\text{\rm b}-\text{\rm a})^{\alpha}} \right]^2}}}.\end{aligned}$$ According to the relation 
([\[two\]](#two){reference-type="ref" reference="two"}) and Theorem [Theorem 4](#ffief){reference-type="ref" reference="ffief"}, we have $$\begin{aligned} \widetilde{\mathcal{F}}_{\omega}^{-1}\mathcal{R} & =(I+\mathcal{E}_{\omega c}+\mathcal{F}_{\omega c}) \mathcal{F}_{\omega}^{-1} \mathcal{R}\\ & = \mathcal{F}_{\omega}^{-1} \mathcal{R} + \mathcal{M}_{\omega}+ \mathcal{N}_{\omega}\end{aligned}$$ with $\mathcal{M}_{\omega}=\mathcal{E}_{\omega c} \mathcal{F}_{\omega}^{-1} \mathcal{R}$ and $\mathcal{N}_{\omega} = \mathcal{F}_{\omega c} \mathcal{F}_{\omega}^{-1} \mathcal{R}$. Moreover, it holds that ${\rm rank}(\mathcal{M}_{\omega})=4k_0$, and the matrices $\mathcal{M}_{\omega}$ and $\mathcal{N}_{\omega}$ satisfy $$\begin{aligned} \left\| \mathcal{M}_{\omega} \right\|_2 & \le \left\| \mathcal{E}_{\omega c} \right\|_2 \left\| \mathcal{F}_{\omega}^{-1} \mathcal{R} \right\|_2 \\ & \le \mu \xi \left[\frac{c_0}{2}-\frac{\theta}{(M-1/2)^{\alpha}}\right]\end{aligned}$$ and $$\begin{aligned} \left\| \mathcal{N}_{\omega} \right\|_2 & \le \left\| \mathcal{F}_{\omega c} \right\|_2 \left\| \mathcal{F}_{\omega}^{-1} \mathcal{R} \right\|_2 \\ & \le \xi \epsilon.\end{aligned}$$ $\hfill\square$ **Remark 5**. *According to Theorem [Theorem 3](#FR){reference-type="ref" reference="FR"}, we know that the eigenvalues of $\mathcal{F}_{\omega}^{-1}\mathcal{R}$ are located in a disk of radius $\sigma(\omega)$ centered at 1. Theorem [Theorem 5](#eigDistrDNCBPrecSysMatrix){reference-type="ref" reference="eigDistrDNCBPrecSysMatrix"} shows that, on the one hand, $\mathcal{N}_{\omega}$ is a small norm matrix; thus, if $\mathcal{F}_{\omega}^{-1}\mathcal{R}$ is diagonalizable, the eigenvalues of $\mathcal{F}_{\omega}^{-1}\mathcal{R}+\mathcal{N}_{\omega}$ can be considered as small perturbations of the eigenvalues of $\mathcal{F}_{\omega}^{-1}\mathcal{R}$.
On the other hand, since $\mathcal{M}_{\omega}$ is a low rank matrix, we may conclude that most of the eigenvalues of $\widetilde{\mathcal{F}}_{\omega}^{-1}\mathcal{R}$ are distributed around the eigenvalues of $\mathcal{F}_{\omega}^{-1}\mathcal{R}+\mathcal{N}_{\omega}$. In summary, it may be expected that most of the eigenvalues of $\widetilde{\mathcal{F}}_{\omega}^{-1}\mathcal{R}$ are clustered around those of $\mathcal{F}_{\omega}^{-1}\mathcal{R}$. Therefore, the eigenvalues of $\widetilde{\mathcal{F}}_{\omega}^{-1}\mathcal{R}$ are relatively clustered.*

# Implementation and complexity

As stated in Section [1](#sec-Introduction){reference-type="ref" reference="sec-Introduction"}, it is reasonable to adopt preconditioned Krylov subspace iteration methods for solving the discrete space fractional CNLS equations, since they are efficient and competitive. Furthermore, a well-performing preconditioner is the efficient variant of the PMHSS preconditioner proposed in [@W-PMHSS11], which is specifically designed for the block equivalent form ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}) of the discrete space fractional CNLS equations. Therefore, the PMHSS-type and DNTB-type preconditioners will be tested numerically in Section [6](#sec-numerical-experiments){reference-type="ref" reference="sec-numerical-experiments"}, and the implementation details and computational complexities of these preconditioners are discussed here.
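The practicality of these Krylov solvers rests on a standard fact worth recalling: although $T$ is dense, a Toeplitz matrix–vector product can be formed in $\mathcal{O}(M\log M)$ flops by embedding $T$ in a circulant matrix of twice the size. The following numpy sketch is purely illustrative (the matrix data below is a random placeholder, not the discrete fractional Laplacian):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """Apply the M-by-M Toeplitz matrix with first column `col` and
    first row `row` to `x` via a 2M-point circulant embedding."""
    M = len(x)
    # First column of the circulant embedding; the "gap" entry is arbitrary.
    emb = np.concatenate([col, [0.0], row[1:][::-1]])
    # Circulant matvec = IFFT( FFT(first column) * FFT(zero-padded x) ).
    y = np.fft.ifft(np.fft.fft(emb) * np.fft.fft(x, 2 * M))
    return y[:M].real

rng = np.random.default_rng(0)
col = rng.standard_normal(8)   # placeholder data for a symmetric Toeplitz matrix
x = rng.standard_normal(8)
assert np.allclose(toeplitz_matvec(col, col, x), toeplitz(col) @ x)
```

Only the first column of $T$ ever enters the FFTs, so dense storage of $T$ is never needed; this is what keeps each matrix–vector product with $\mathcal{R}$ inside GMRES cheap.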
The efficient variant of the PMHSS preconditioner provided in [@W-PMHSS11] reads $$\begin{aligned} \label{preconditioner-PMHSS} \mathcal{F}_{\mbox{\tiny PMHSS}} & = \begin{bmatrix} I & I \\ -I & I \end{bmatrix}^{-1} \begin{bmatrix} \omega I + T & 0 \\ 0 & \omega I + T \end{bmatrix} \begin{bmatrix} \widehat{D} & 0 \\ 0 & \widehat{D} \end{bmatrix},\end{aligned}$$ where $\widehat{D}=(\omega I + D)^{-1}[(\omega+1)I + D]$ is diagonal, and the parameter $\omega$ satisfies $\omega >\|D\|_{\infty}$ so that $\widehat{D}$ is well defined. In an actual implementation, the matrix $\widehat{D}$ can be calculated and stored in advance. Applying $\mathcal{F}_{\mbox{\tiny PMHSS}}$ requires solving two linear subsystems with the dense Toeplitz matrix $\omega I + T$: direct methods are very expensive in flops and storage, while iteration methods lead to inner-outer schemes, and either choice degrades the computational efficiency for solving the block linear system ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}). Therefore, the implementation details of $\mathcal{F}_{\mbox{\tiny PMHSS}}$ are not included.
To reduce the implementation cost of $\mathcal{F}_{\mbox{\tiny PMHSS}}$ in preconditioned Krylov subspace iteration methods, a circulant approximation of the discrete fractional Laplacian $T$ can be applied, which leads to a more efficient preconditioner, namely the circulant improved PMHSS (CPMHSS) preconditioner $$\begin{aligned} \mathcal{F}_{\mbox{\tiny CPMHSS}} & = \begin{bmatrix} I & I \\ -I & I \end{bmatrix}^{-1} \begin{bmatrix} \omega I + C & 0 \\ 0 & \omega I + C \end{bmatrix} \begin{bmatrix} \widehat{D} & 0 \\ 0 & \widehat{D} \end{bmatrix} \\ \label{preconditioner-CPMHSS} & = \begin{bmatrix} I & I \\ -I & I \end{bmatrix}^{-1} \begin{bmatrix} F & 0 \\ 0 & F \end{bmatrix}^{-1} \begin{bmatrix} \widehat{\Lambda} & 0 \\ 0 & \widehat{\Lambda} \end{bmatrix} \begin{bmatrix} F & 0 \\ 0 & F \end{bmatrix} \begin{bmatrix} \widehat{D} & 0 \\ 0 & \widehat{D} \end{bmatrix} \end{aligned}$$ where $F \in \mathbb{C}^{M \times M}$ is the discrete Fourier transform (DFT) matrix, $\Lambda ={\rm diag}(Fc) \in \mathbb{C}^{M \times M}$ with $c \in \mathbb{C}^{M}$ being the first column of $C$, and $\widehat{\Lambda}=\omega I+\Lambda$ can be calculated in advance. By taking advantage of the fast Fourier transform (FFT) and inverse FFT (IFFT) algorithms, the action of $F$ or $F^{-1}$ on a vector can be accomplished in $\mathcal{O}(M\log M)$ flops. With the notations $x=(x_1^{\top},x_2^{\top})^{\top}\in\mathbb{R}^{2M}$ with $x_1$, $x_2\in\mathbb{R}^M$, and $r=(r_1^{\top},r_2^{\top})^{\top}\in\mathbb{R}^{2M}$ with $r_1$, $r_2\in\mathbb{R}^M$, we can list the preconditioning procedure with respect to $\mathcal{F}_{\mbox{\tiny CPMHSS}}$ in Algorithm [\[alg-CPMHSS\]](#alg-CPMHSS){reference-type="ref" reference="alg-CPMHSS"}.
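The heart of this procedure is the FFT-based solve with the circulant factor $\omega I + C$: transform, divide by the eigenvalues $\omega+\Lambda$, transform back. A small self-contained numpy sketch of that primitive (the first column `c` below is deliberately simple, diagonally dominant placeholder data, not an actual circulant approximation of $T$):

```python
import numpy as np
from scipy.linalg import circulant

M, omega = 8, 0.5
c = np.r_[4.0, -np.ones(M - 1) / M]   # placeholder first column of C
r = np.arange(1.0, M + 1)             # right-hand side

lam = np.fft.fft(c)                   # Lambda = diag(F c): eigenvalues of C
# FFT, diagonal solve against omega + Lambda, IFFT.
x = np.fft.ifft(np.fft.fft(r) / (omega + lam)).real

# The three FFT-based steps reproduce the dense solve with omega*I + C.
assert np.allclose((omega * np.eye(M) + circulant(c)) @ x, r)
```

Each such apply costs one FFT, one pointwise division, and one IFFT, i.e. $\mathcal{O}(M\log M)$ flops, which matches the operation count quoted for the algorithm.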
Obviously, this algorithm requires six $M$-vector operations (including diagonal linear subsystem solves and vector additions, which can be accomplished in $\mathcal{O}(M)$ flops), and four $M$-vector FFT/IFFT operations ($\mathcal{O}(M\log M)$ flops in total). ------------------------------------------------------------------------ [\[alg-CPMHSS\]]{#alg-CPMHSS label="alg-CPMHSS"} ------------------------------------------------------------------------ $x_1 = r_1 + r_2$ and $x_2 = -r_1 + r_2$; % ` Two M-vector operations` $x_j=\text{FFT}(x_j)$, $j=1$, $2$; % ` Two M-vector FFTs` Solve $\widehat{\Lambda}x_j = x_j$, $j=1$, $2$; % ` Two M-vector operations` $x_j=\text{IFFT}(x_j)$, $j=1$, $2$; % ` Two M-vector IFFTs` Solve $\widehat{D} x_j = x_j$, $j=1$, $2$. % ` Two M-vector operations` ------------------------------------------------------------------------ For the DNTB preconditioner $\mathcal{F}_{\omega}$, since a scalar factor does not essentially affect its properties, we simplify the form of $\mathcal{F}_{\omega}$ by ignoring the scalar factor ${1}/{2\omega}$ as follows $$\begin{aligned} \mathcal{F}_{\mbox{\tiny DNTB}} = \begin{bmatrix} \omega I - D & 0 \\ 0 & \omega I - D \end{bmatrix} \begin{bmatrix} \omega I + T & -I \\ I & \omega I + T \end{bmatrix}.\end{aligned}$$ Obviously, the main computational cost for implementing $\mathcal{F}_{\mbox{\tiny DNTB}}$ is to solve a block linear subsystem with $\bigl[\begin{smallmatrix} \omega I+T & -I \\ I & \omega I+T \end{smallmatrix}\bigr]$, for which both direct and iterative methods are expensive. Therefore, the implementation details of $\mathcal{F}_{\mbox{\tiny DNTB}}$ are not discussed. In a similar fashion, the DNCB preconditioner $\widetilde{\mathcal{F}}_{\omega}$ can be simplified by removing the scalar factor ${1}/{2\omega}$ while keeping its properties essentially the same.
Thus, a practical form of $\widetilde{\mathcal{F}}_{\omega}$ reads $$\begin{aligned} \mathcal{F}_{\mbox{\tiny DNCB}} &= \begin{bmatrix} \omega I - D & 0 \\ 0 & \omega I - D \end{bmatrix} \begin{bmatrix} \omega I+C & -I \\ I & \omega I+C \end{bmatrix} \\ \label{preconditioner-CBDNS} &=\begin{bmatrix} \widetilde{D} & 0 \\ 0 & \widetilde{D} \end{bmatrix} \begin{bmatrix} F & 0 \\ 0 & F \end{bmatrix}^{-1} \begin{bmatrix} I & 0 \\ L_{21} & I \end{bmatrix} \begin{bmatrix} U_{11} & -I \\ 0 & U_{22} \end{bmatrix} \begin{bmatrix} F & 0 \\ 0 & F \end{bmatrix}, \end{aligned}$$ where $\widetilde{D}=\omega I - D$, $L_{21}=(\omega I+\Lambda)^{-1}$, $U_{11}=\omega I+\Lambda$, $U_{22}=\omega I+\Lambda+(\omega I+\Lambda)^{-1} \in \mathbb{R}^{M \times M}$ are diagonal, and they can be calculated and stored before the iteration begins. With the same notations in Algorithm [\[alg-CPMHSS\]](#alg-CPMHSS){reference-type="ref" reference="alg-CPMHSS"}, we can describe the preconditioning procedure with respect to $\mathcal{F}_{\mbox{\tiny DNCB}}$ in Algorithm [\[alg-CBDNS\]](#alg-CBDNS){reference-type="ref" reference="alg-CBDNS"}. Obviously, this algorithm requires seven $M$-vector operations ($\mathcal{O}(M)$ flops in total), and four $M$-vector FFT/IFFT operations ($\mathcal{O}(M\log M)$ flops in total). ------------------------------------------------------------------------ [\[alg-CBDNS\]]{#alg-CBDNS label="alg-CBDNS"} ------------------------------------------------------------------------ Solve $\widetilde{D}x_j = r_j$, $j=1$, $2$; % ` Two M-vector operations` $x_j=\text{FFT}(x_j)$, $j=1$, $2$; % ` Two M-vector FFTs` $x_2 = x_2 - L_{21}x_1$; % ` Two M-vector operations` Solve $U_{22}x_2 = x_2$ and $U_{11}x_1 = x_1+x_2$. % ` Three M-vector operations` $x_j=\text{IFFT}(x_j)$, $j=1$, $2$. 
% ` Two M-vector IFFTs` ------------------------------------------------------------------------ According to the above discussions, the computational workloads for implementing $\mathcal{F}_{\mbox{\tiny CPMHSS}}$ and $\mathcal{F}_{\mbox{\tiny DNCB}}$ are both dominated by the four $M$-vector FFT/IFFT operations when $M$ is large, i.e., $\mathcal{O}(M\log M)$ flops are required at each iteration of the related preconditioned Krylov subspace iteration methods. Therefore, the computational efficiency of a preconditioned Krylov subspace iteration method in conjunction with $\mathcal{F}_{\mbox{\tiny CPMHSS}}$ or $\mathcal{F}_{\mbox{\tiny DNCB}}$ mainly depends on the corresponding convergence rate.

# Numerical experiments {#sec-numerical-experiments}

A large number of numerical experiments are presented to show the properties of the DNCB preconditioner, and the effectiveness and efficiency of the related preconditioned GMRES method for the discrete space fractional CNLS equations in the case of $\rho < 0$. Specifically, the LICD scheme applied to the space fractional CNLS equations requires initial values at the initial and the first time levels to start up; the former is given by the initial condition, and the latter can be obtained by a second order implicit conservative scheme, e.g., the Crank-Nicolson difference scheme [@WDL2013JCP]. In all the experiments, the related block linear systems ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}) at the second time level of the discrete fractional CNLS equations are tested in different settings, and the initial guess of the (preconditioned) GMRES method is the zero vector. The parameter $\alpha$ of the fractional Laplacian is selected to be $\alpha=1.1:0.1:2$, and the number of inner spatial discrete points includes $M=800$, $1600$, $3200$, $6400$, $12800$ and $25600$. The temporal step size $\tau$ is simply fixed to $\tau=0.01$.
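For readers who wish to set up experiments of this kind, the $\mathcal{O}(M\log M)$ preconditioner apply can be wrapped as a `LinearOperator` and handed to scipy's GMRES. The toy sketch below uses a placeholder symmetric positive definite Toeplitz system with a Strang-type circulant preconditioner; it illustrates the plumbing only and is not the actual discrete CNLS matrix or the authors' code:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

M = 64
# Placeholder SPD Toeplitz matrix (standing in for a block of R).
col = np.r_[2.0, -1.0 / (1.0 + np.arange(1, M)) ** 2]
A = toeplitz(col)

# Strang-type circulant approximation: wrap the leading diagonals around.
c = col.copy()
c[M // 2 + 1:] = col[1:M // 2][::-1]
lam = np.fft.fft(c)                   # eigenvalues of the circulant

def apply_prec(v):
    # One FFT, one diagonal solve, one IFFT per application.
    return np.fft.ifft(np.fft.fft(v) / lam).real

P = LinearOperator((M, M), matvec=apply_prec, dtype=float)
b = np.ones(M)
x, info = gmres(A, b, M=P)            # info == 0 signals convergence
assert info == 0
assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
```

The dense matrix `A` is formed here only so the residual can be checked; in a genuine run, the matrix–vector products would also be performed with FFTs, so that no dense matrix is ever stored.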
We denote by 'IT' the number of iterations, and by 'CPU' the computing time in seconds. In all the experiments, the (preconditioned) GMRES method without restart is tested, and it is terminated either when the relative residual of the (preconditioned) system of linear equations, measured in the spectral norm, drops below $10^{-6}$ or when the number of iterations exceeds $1000$. Before showing the numerical experiments, we provide some discussion of the discrete mass and energy conservation of the LICD scheme. Theoretically, Wang et al. [@WDL2014JCP] concluded that the discrete mass and energy are conserved for the LICD scheme in exact arithmetic. Floating point arithmetic in computers always involves rounding errors; nevertheless, Wang et al. [@WDL2014JCP] observed the conservation phenomenon of the LICD scheme in their experiments when the linear systems were solved by a direct method at each time level under finite precision. In other words, as long as we solve the linear systems by any convergent iteration method with sufficient precision, the discrete mass and energy conservation of the LICD scheme can still be maintained. Therefore, we do not repeat the experiments shown in [@WDL2014JCP] to verify the conservation laws. Instead, for the LICD scheme, we provide the plots of the numerical solution computed by the DNCB preconditioned GMRES method and its correspondingly small error with respect to the exact solution computed by Gaussian elimination, see Figures [8](#fig:singal-1.1){reference-type="ref" reference="fig:singal-1.1"}-[14](#fig:singal-2){reference-type="ref" reference="fig:singal-2"} for the decoupled case, and Figures [18](#fig:couple-alp=1.1-beta=1){reference-type="ref" reference="fig:couple-alp=1.1-beta=1"}-[24](#fig:couple-alp=2-beta=1){reference-type="ref" reference="fig:couple-alp=2-beta=1"} for the coupled case.
This error can be further reduced by adopting a smaller tolerance in the iteration method until the conservation of the discrete mass and energy can be observed under a given precision.

## The decoupled nonlinear case

Let $\gamma=1$, $\rho=-2$ and $1<\alpha\le 2$; we consider the following truncated space fractional DNLS equations $$\begin{aligned} \label{equ:couple} \imath u_t-\gamma(-\Delta)^{\frac{\alpha}{2}}u + \rho\vert u \vert^2u=0, \qquad -20\le x \le 20,\quad 0 < t \le 2,\end{aligned}$$ with the initial and boundary conditions $$\begin{aligned} \label{equ:couple initial} u(x,0)=\text{sech} (x) e^{2\imath x}, \quad u(-20,t)=u(20,t)=0.\end{aligned}$$ The LICD scheme applied to ([\[equ:couple\]](#equ:couple){reference-type="ref" reference="equ:couple"}) and ([\[equ:couple initial\]](#equ:couple initial){reference-type="ref" reference="equ:couple initial"}) leads to the discrete space fractional DNLS equations. On each time level $t_n$, $1 < n \le N$, a complex symmetric linear system of the form ([\[equ3\]](#equ3){reference-type="ref" reference="equ3"}) needs to be solved, which is equivalent to solving a block linear system of the form ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}). In the experiments, the properties of the DNTB/DNCB preconditioned system matrices are verified, and the GMRES method, the DNCB preconditioned GMRES method (DNCB-GMRES), and the CPMHSS preconditioned GMRES method (CPMHSS-GMRES) are implemented and compared. Figure [1](#fig:singal-WW-it){reference-type="ref" reference="fig:singal-WW-it"} depicts the curves of IT of DNCB-GMRES versus the parameter $\omega \in (0,3]$ when $M=6400$. Here, we take the different fractional orders $\alpha=1.1:0.2:1.9$.
Figure [1](#fig:singal-WW-it){reference-type="ref" reference="fig:singal-WW-it"} shows that IT increases rapidly as $\omega$ approaches zero. As $\omega$ increases, IT reaches its minimum value quickly and then grows slowly. The empirically optimal value of $\omega$ stays in $[0.05,0.50]$ for all cases of $\alpha$, which implies that DNCB-GMRES is not very sensitive to the preconditioning parameter $\omega$ as long as it is slightly away from 0. In addition, the curve related to a larger value of $\alpha$ is located at a higher position; in other words, the larger the value of $\alpha$, the more difficult it is to solve the related linear system. Figure [2](#fig:singal-alp-IT){reference-type="ref" reference="fig:singal-alp-IT"} depicts the curves of IT of DNCB-GMRES versus the number of the inner spatial discrete points $M$. Here, we take the different fractional orders $\alpha=1.1:0.2:1.9$, and adopt the empirically optimal value of $\omega$ for DNCB-GMRES. It shows that IT of DNCB-GMRES remains almost constant or changes very slowly as $M$ grows from 800 to 25600. This phenomenon verifies that the convergence of DNCB-GMRES is independent of $M$. As can be seen from Figure [2](#fig:singal-alp-IT){reference-type="ref" reference="fig:singal-alp-IT"}, the curve for $\alpha=1.1$ is at the bottom, and the one for $\alpha=1.9$ is at the top. Similarly to Figure [1](#fig:singal-WW-it){reference-type="ref" reference="fig:singal-WW-it"}, this indicates that a larger value of the fractional order $\alpha$ leads to a more difficult linear system.
Figures [4](#fig:eig-singal-1.3){reference-type="ref" reference="fig:eig-singal-1.3"}-[6](#fig:eig-singal-1.7){reference-type="ref" reference="fig:eig-singal-1.7"} depict the eigenvalue distribution of the system matrix $\mathcal{R}$, the DNTB/DNCB preconditioned system matrix $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, and the PMHSS/CPMHSS preconditioned system matrix $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$. Strang's circulant approximation is adopted in $\mathcal{F}_{\mbox{\tiny DNCB}}$ and $\mathcal{F}_{\mbox{\tiny CPMHSS}}$, and the fractional orders $\alpha=1.3, 1.7$ and the numbers of spatial discrete points $M=1600, 3200$ are selected. In each figure, the plots of the cases for $M=1600$ and $M=3200$ are on the left and right, respectively. As can be seen from Figures [4](#fig:eig-singal-1.3){reference-type="ref" reference="fig:eig-singal-1.3"}-[6](#fig:eig-singal-1.7){reference-type="ref" reference="fig:eig-singal-1.7"}, the real parts of the eigenvalues of $\mathcal{R}$ are distributed over a wide range of scales from $\mathcal{O}(10^{-3})$ to $\mathcal{O}(10^1)$, and the larger the value of $M$, the wider the range of scales. The real parts of the eigenvalues of $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$ are distributed over a range of scales from $\mathcal{O}(10^{-1})$ to $\mathcal{O}(10^{0})$, and the related imaginary parts stay in $[-0.5, 0.5]$. The real parts of the eigenvalues of $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$ are distributed around $\mathcal{O}(10^0)$, and the related imaginary parts also stay in $[-0.5, 0.5]$.
Therefore, it shows that the eigenvalues of $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$ and $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$ are more clustered than those of $\mathcal{R}$, and the eigenvalues of $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$ are clustered even more compactly than those of $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$. When $M$ increases from 1600 to 3200, the distribution of the eigenvalues of $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$ remains almost the same. This implies that the convergence of the preconditioned GMRES method associated with $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$/$\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$ may be independent of the space mesh size. Furthermore, most of the eigenvalues of $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$ are clustered around those of $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, which indicates that $\mathcal{F}_{\mbox{\tiny DNCB}}$ is a good approximation to $\mathcal{F}_{\mbox{\tiny DNTB}}$. Table [\[tab:BDNS-different-circulant-alp=1.5\]](#tab:BDNS-different-circulant-alp=1.5){reference-type="ref" reference="tab:BDNS-different-circulant-alp=1.5"} lists IT of DNCB-GMRES in conjunction with different circulant approximations. In these experiments, we take the empirically optimal value of the preconditioning parameter $\omega$ (searched for in $(0,3]$), the fractional order $\alpha=1.5$, and the number of the spatial discrete points $M=6400$. The results of several representative circulant approximations are listed, including Strang's circulant approximation, T. Chan's circulant approximation, R.
Chan's circulant approximation, circulant approximations constructed from some famous kernels (e.g., the modified Dirichlet kernel, the von Hann kernel, and the Hamming kernel), and the superoptimal circulant approximation [@CRH2007SIAMbook]. As shown in this table, IT of DNCB-GMRES equals $9$ for most of the circulant approximations (except for the superoptimal circulant approximation, which takes $32$ iterations), and the corresponding empirically optimal values of $\omega$ are almost the same, i.e., $[0.07,0.20]$ (except for T. Chan's circulant approximation, $[0.08,0.18]$, and the superoptimal circulant approximation, $[1.50,1.74]$). Therefore, DNCB-GMRES is easy to apply, since most of the circulant approximations are efficient. The relatively poor performance of the superoptimal circulant approximation may be due to the failure to find a good enough empirically optimal value of $\omega$ in $(0,3]$. Figures [8](#fig:singal-1.1){reference-type="ref" reference="fig:singal-1.1"}-[14](#fig:singal-2){reference-type="ref" reference="fig:singal-2"} depict the numerical solutions $u_{\text{\tiny DNCB}}$ and the related absolute errors $\text{ERR} = |u_{\text{\tiny DNCB}}-u_{\text{\tiny GE}}|$. Here, the exact solution $u_{\text{\tiny GE}}$ is obtained by solving the discrete space fractional DNLS equations with Gaussian elimination (GE) based on the LICD scheme. We take the fractional orders $\alpha=1.1:0.4:1.9$ and $\alpha=2$, and the number of the spatial discrete points $M=800$. In these figures, the numerical solutions $u_{\text{\tiny DNCB}}$ obtained by DNCB-GMRES are plotted on the left, and the errors between $u_{\text{\tiny DNCB}}$ and $u_{\text{\tiny GE}}$ are plotted on the right. As shown in these figures, the shape of $u_{\text{\tiny DNCB}}$ in height and width is affected by the value of the fractional order $\alpha$.
When $\alpha$ approaches $2$, the shape of $u_{\text{\tiny DNCB}}$ tends to that of the solution of the standard DNLS equation (the case of $\alpha=2$). Moreover, ERR stays as small as around $\mathcal{O}(10^{-4})$ over the whole computational space-time domain, which confirms that the numerical solution $u_{\text{\tiny DNCB}}$ is reliable. A more accurate numerical solution $u_{\text{\tiny DNCB}}$ can of course be computed by tightening the stopping criterion of DNCB-GMRES.

![The curves of IT of DNCB-GMRES versus the parameter $\omega\in (0,3]$ of $\mathcal{F}_{\mbox{\tiny DNCB}}$ in the DNLS case when $\alpha=1.1:0.2:1.9$ and $M=6400$: blue solid line with circle mark for $\alpha=1.1$, red solid line with diamond mark for $\alpha=1.3$, orange solid line with triangle mark for $\alpha=1.5$, purple solid line with square mark for $\alpha=1.7$, green solid line with pentagram mark for $\alpha=1.9$.](w-IT-decouple-eps-converted-to.pdf){#fig:singal-WW-it}
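The circulant-preconditioning idea underlying these experiments can be illustrated with a small self-contained numpy sketch. It is a sketch under stated assumptions, not the paper's actual DNCB construction: it builds the fractional centered-difference Toeplitz matrix for $\alpha=1.5$ at a toy size, forms Strang's circulant approximation of it, and checks numerically that the eigenvalues of the preconditioned matrix cluster around $1$, mirroring the clustering observed in the eigenvalue figures.

```python
import math
import numpy as np

def frac_centered_coeffs(alpha, n):
    """Coefficients g_k of the fractional centered difference for the 1D
    fractional Laplacian; the generating symbol is (2 - 2cos(theta))^(alpha/2)."""
    g = np.empty(n)
    g[0] = math.gamma(alpha + 1) / math.gamma(alpha / 2 + 1) ** 2
    for k in range(n - 1):
        g[k + 1] = g[k] * (k - alpha / 2) / (k + 1 + alpha / 2)
    return g

def strang_first_col(t):
    """Strang's circulant approximation: copy the central diagonals of the
    Toeplitz matrix and wrap them around periodically."""
    n = len(t)
    c = t.copy()
    for k in range(n // 2 + 1, n):
        c[k] = t[n - k]
    return c

def toeplitz_sym(t):
    """Dense symmetric Toeplitz matrix from its first column."""
    n = len(t)
    return np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

def circulant(c):
    """Dense circulant matrix from its first column."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

n, alpha = 128, 1.5
g = frac_centered_coeffs(alpha, n)
A = np.eye(n) + toeplitz_sym(g)                  # identity shift, as in time stepping
P = np.eye(n) + circulant(strang_first_col(g))   # circulant preconditioner

# Eigenvalues of the preconditioned matrix P^{-1} A cluster around 1.
mu = np.linalg.eigvals(np.linalg.solve(P, A)).real
frac_near_one = np.mean(np.abs(mu - 1.0) < 0.1)
print(f"{frac_near_one:.0%} of eigenvalues within 0.1 of 1")
```

The shifted matrix $I + T$ stands in for the Toeplitz-plus-diagonal systems of the LICD scheme; the qualitative conclusion (most preconditioned eigenvalues near $1$, few outliers) is the mechanism behind the fast GMRES convergence reported above.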
![The curves of IT of DNCB-GMRES versus the number of the inner spatial discrete points $M$ of the LICD scheme in the DNLS case when $\alpha=1.1:0.2:1.9$: blue solid line with circle mark for $\alpha=1.1$, red solid line with diamond mark for $\alpha=1.3$, orange solid line with triangle mark for $\alpha=1.5$, purple solid line with square mark for $\alpha=1.7$, green solid line with pentagram mark for $\alpha=1.9$.](decouple-IT-space-eps-converted-to.pdf){#fig:singal-alp-IT}
![The eigenvalue distribution of $\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$ and $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$ in the case of $\alpha=1.3$ for $M=1600$ (left), $3200$ (right): blue circle mark for $\mathcal{R}$, red square mark for $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, orange solid hexagram mark for $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, purple diamond mark for $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$, green solid triangle mark for $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$.](decouple-eig-1.3-1600-eps-converted-to.pdf){#fig:eig-singal-1.3} ![The eigenvalue distribution of $\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$ and $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$ in the case of $\alpha=1.3$ for $M=1600$ (left), $3200$ (right): blue circle mark for $\mathcal{R}$, red square mark for $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, orange solid hexagram mark for $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, purple diamond mark for $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$, green solid triangle mark for $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$.](decouple-eig-1.3-3200-eps-converted-to.pdf){#fig:eig-singal-1.3}
![ The eigenvalue distribution of $\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$ and $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$ in the case of $\alpha=1.7$ for $M=1600$ (left), $3200$ (right): blue circle mark for $\mathcal{R}$, red square mark for $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, orange solid hexagram mark for $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, purple diamond mark for $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$, green solid triangle mark for $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$.](decouple-eig-1.7-1600-eps-converted-to.pdf){#fig:eig-singal-1.7} ![ The eigenvalue distribution of $\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$ and $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$ in the case of $\alpha=1.7$ for $M=1600$ (left), $3200$ (right): blue circle mark for $\mathcal{R}$, red square mark for $\mathcal{F}_{\mbox{\tiny DNTB}}^{-1}\mathcal{R}$, orange solid hexagram mark for $\mathcal{F}_{\mbox{\tiny DNCB}}^{-1}\mathcal{R}$, purple diamond mark for $\mathcal{F}_{\mbox{\tiny PMHSS}}^{-1}\mathcal{R}$, green solid triangle mark for $\mathcal{F}_{\mbox{\tiny CPMHSS}}^{-1}\mathcal{R}$.](decouple-eig-1.7-3200-eps-converted-to.pdf){#fig:eig-singal-1.7}
![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=1.1$ and $M=800$.](decouple-alp=1.1-800-eps-converted-to.pdf){#fig:singal-1.1} ![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=1.1$ and $M=800$.](decouple-minus-alp=1.1-800-eps-converted-to.pdf){#fig:singal-1.1}

![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=1.5$ and $M=800$.](decouple-alp=1.5-800-eps-converted-to.pdf){#fig:singal-1.5} ![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=1.5$ and $M=800$.](decouple-minus-alp=1.5-800-eps-converted-to.pdf){#fig:singal-1.5}

![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=1.9$ and $M=800$.](decouple-alp=1.9-800-eps-converted-to.pdf){#fig:singal-1.9} ![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=1.9$ and $M=800$.](decouple-minus-alp=1.9-800-eps-converted-to.pdf){#fig:singal-1.9}

![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=2$ and $M=800$.](decouple-alp=2-800-eps-converted-to.pdf){#fig:singal-2} ![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the DNLS case when $\alpha=2$ and $M=800$.](decouple-minus-alp=2-800-eps-converted-to.pdf){#fig:singal-2}

## The coupled nonlinear case

Let $\gamma=1$, $\rho=-2$, $\beta=1$, and $1<\alpha\le 2$; we consider the truncated space fractional CNLS equations $$\begin{aligned} \label{couple equation} \begin{cases} \imath u_t-\gamma(-\Delta)^{\frac{\alpha}{2}}u+\rho(\vert u \vert^2+\beta\vert v \vert^2)u=0, \qquad \\ \imath v_t-\gamma(-\Delta)^{\frac{\alpha}{2}}v+\rho(\vert v \vert^2+\beta\vert u \vert^2)v=0, \qquad \end{cases} -20\le x \le 20,\quad 0 < t \le 2, \end{aligned}$$ with the initial and boundary conditions $$\label{couple condition} \begin{cases} u(x,0)=\text{sech}(x+1) e^{2\imath x}, \quad v(x,0)=\text{sech}(x-1) e^{-2\imath x},\\ u(-20,t)=u(20,t)=0, \quad v(-20,t)=v(20,t)=0.
\end{cases}$$ Applying the LICD scheme to ([\[couple equation\]](#couple equation){reference-type="ref" reference="couple equation"}) and ([\[couple condition\]](#couple condition){reference-type="ref" reference="couple condition"}) leads to the discrete space fractional CNLS equations. On each time level $t_n$, $1 < n \le N$, two coupled complex symmetric linear systems of the form ([\[equ3\]](#equ3){reference-type="ref" reference="equ3"}) need to be solved sequentially, which is equivalent to solving two coupled block linear systems of the form ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}). In the experiments, Gaussian elimination (GE) is used to solve the aforementioned complex symmetric linear systems, and GMRES, DNCB-GMRES and CPMHSS-GMRES are adopted to solve the corresponding coupled block linear systems. In this subsection, 'IT' denotes the total number of iterations of the preconditioned GMRES method for solving the two coupled block linear systems on time level $t_2$, and 'CPU' denotes the total computing time in seconds for solving the coupled (complex symmetric or block) linear systems. Figure [15](#fig:couple space-it){reference-type="ref" reference="fig:couple space-it"} depicts the curves of IT of DNCB-GMRES versus the number of the spatial discrete points $M$. Here, we take different fractional orders $\alpha=1.1:0.2:1.9$, and adopt the empirical optimal value of $\omega$ for DNCB-GMRES. The figure shows that the IT of DNCB-GMRES remains almost unchanged or increases very slowly when $M$ varies from $6400$ to $25600$, which verifies that the convergence of DNCB-GMRES is independent of the space mesh size. As shown in Figure [15](#fig:couple space-it){reference-type="ref" reference="fig:couple space-it"}, the curve for $\alpha=1.1$ stays at the bottom and the one for $\alpha=1.9$ at the top, meaning that smaller $\alpha$ makes the linear systems easier to solve.
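A key reason circulant preconditioners such as $\mathcal{F}_{\mbox{\tiny DNCB}}$ are attractive inside GMRES is that a circulant matrix is diagonalized by the discrete Fourier transform, so applying $C^{-1}$ costs only $\mathcal{O}(n\log n)$ per iteration. The following numpy sketch (with an illustrative circulant system, not one of the paper's matrices) demonstrates the FFT-based solve:

```python
import numpy as np

def apply_circulant_inverse(c, b):
    """Apply C^{-1} to b, where C is the circulant matrix with first
    column c. Since C = F* diag(fft(c)) F, the solve reduces to two FFTs
    and a pointwise division: cost O(n log n) per application."""
    lam = np.fft.fft(c)              # eigenvalues of C
    return np.fft.ifft(np.fft.fft(b) / lam)

# illustrative symmetric positive definite circulant system
rng = np.random.default_rng(0)
n = 1024
c = np.zeros(n)
c[0], c[1], c[-1] = 4.0, 1.0, 1.0    # eigenvalues 4 + 2cos(2*pi*k/n) > 0
b = rng.standard_normal(n)
x = apply_circulant_inverse(c, b).real

# verify: C x equals the circular convolution of c and x
Cx = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.max(np.abs(Cx - b)))        # residual near machine precision
```

Inside a Krylov method this operation is exactly the preconditioner application performed at every iteration, which is why the per-iteration cost of DNCB-GMRES stays low even for dense Toeplitz-plus-diagonal systems.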
For the case of $M=6400$, the curves of IT of DNCB-GMRES and CPMHSS-GMRES versus the fractional order ($\alpha=1.1:0.1:2$) are plotted in Figure [16](#fig:couple-alp-IT){reference-type="ref" reference="fig:couple-alp-IT"}. The blue dashed line represents DNCB-GMRES, and the red solid line represents CPMHSS-GMRES. As shown in Figure [16](#fig:couple-alp-IT){reference-type="ref" reference="fig:couple-alp-IT"}, IT grows approximately linearly as the fractional order $\alpha$ increases, and the slopes of both curves stay very low, which indicates that the convergence rates of DNCB-GMRES and CPMHSS-GMRES depend only weakly on the fractional order $\alpha$. In addition, the DNCB-GMRES curve stays below the CPMHSS-GMRES curve, and the gap between them widens as $\alpha$ tends to $2$, which means that DNCB-GMRES outperforms CPMHSS-GMRES in terms of IT, and that DNCB-GMRES is less sensitive to $\alpha$ than CPMHSS-GMRES. Tables [\[tab:alp=1.1-beta=1\]](#tab:alp=1.1-beta=1){reference-type="ref" reference="tab:alp=1.1-beta=1"}-[\[tab:alp=1.9-beta=1\]](#tab:alp=1.9-beta=1){reference-type="ref" reference="tab:alp=1.9-beta=1"} list the CPU of GE, and the IT and CPU of GMRES, DNCB-GMRES and CPMHSS-GMRES. Here, we take the fractional orders $\alpha=1.1:0.2:1.9$, and the number of the spatial discrete points $M=3200$, $6400$, $12800$ and $25600$. In these tables, "N/A" means that IT is not available for GE, and "--" means that GE fails to find the solution or that the related iteration method reaches the maximum number of iterations without convergence. The related empirical optimal parameters of DNCB-GMRES and CPMHSS-GMRES are listed in Tables [\[tab:omega-alp=1.1\]](#tab:omega-alp=1.1){reference-type="ref" reference="tab:omega-alp=1.1"}-[\[tab:omega-alp=1.9\]](#tab:omega-alp=1.9){reference-type="ref" reference="tab:omega-alp=1.9"}.
Specifically, the empirical optimal parameters are denoted by "$\omega_u$" and "$\omega_v$" when the preconditioned GMRES methods are used to solve the block linear systems ([\[positiveBlockForm\]](#positiveBlockForm){reference-type="ref" reference="positiveBlockForm"}) related to $u$ and $v$, respectively. From the data in Tables [\[tab:alp=1.1-beta=1\]](#tab:alp=1.1-beta=1){reference-type="ref" reference="tab:alp=1.1-beta=1"}-[\[tab:alp=1.9-beta=1\]](#tab:alp=1.9-beta=1){reference-type="ref" reference="tab:alp=1.9-beta=1"}, we can further compute the speed-up (SU) of DNCB-GMRES against CPMHSS-GMRES, listed in Table [\[tab:sp-3200\]](#tab:sp-3200){reference-type="ref" reference="tab:sp-3200"}, where SU is defined as $$\begin{aligned} \nonumber \mbox{SU} &= \frac{\mbox{CPU of CPMHSS-GMRES}}{\mbox{CPU of DNCB-GMRES}}.\end{aligned}$$ According to Tables [\[tab:alp=1.1-beta=1\]](#tab:alp=1.1-beta=1){reference-type="ref" reference="tab:alp=1.1-beta=1"}-[\[tab:alp=1.9-beta=1\]](#tab:alp=1.9-beta=1){reference-type="ref" reference="tab:alp=1.9-beta=1"}, when the number of the spatial discrete points increases to $M=12800$ and above, GE runs out of memory and fails to solve the linear system. For $M$ smaller than $12800$, GE is much more time consuming than all the tested iteration methods. This is because the storage requirements and computational costs of GE cannot be effectively reduced when the linear system is dense, even though it has Toeplitz-plus-diagonal structure. Hence, GE can only handle small dense linear systems when storage and computing resources are limited. Among the iteration methods, GMRES fails to converge in the case of $\alpha=1.9$ for $M=25600$, and it requires the largest IT and CPU in all the other tests.
Moreover, when the fractional order $\alpha$ and the number of spatial discrete points $M$ grow, the IT of GMRES increases very rapidly, which means that the discrete space fractional CNLS equations become more difficult to solve for larger values of $\alpha$ and $M$. Meanwhile, DNCB-GMRES and CPMHSS-GMRES converge successfully in all the tests, and the former outperforms the latter in terms of both IT and CPU. In addition, both DNCB-GMRES and CPMHSS-GMRES possess the favorable space mesh size independent convergence property. To further demonstrate the advantages of DNCB-GMRES, it is instructive to check the speed-up of DNCB-GMRES against CPMHSS-GMRES listed in Table [\[tab:sp-3200\]](#tab:sp-3200){reference-type="ref" reference="tab:sp-3200"}, which is at least $275.91\%$ and up to $452.76\%$. Therefore, the computational efficiency of DNCB-GMRES is significantly higher than that of CPMHSS-GMRES. In the case of $M=800$, $\alpha=1.1:0.4:1.9$ and $\alpha=2$, Figures [18](#fig:couple-alp=1.1-beta=1){reference-type="ref" reference="fig:couple-alp=1.1-beta=1"}-[24](#fig:couple-alp=2-beta=1){reference-type="ref" reference="fig:couple-alp=2-beta=1"} show the plots of the numerical solutions $u_{\text{\tiny DNCB}}$ and $v_{\text{\tiny DNCB}}$ of the space fractional CNLS equations ([\[couple equation\]](#couple equation){reference-type="ref" reference="couple equation"}) on the left, and their errors $\text{ERR}_{u} = |u_{\text{\tiny DNCB}}-u_{\text{\tiny GE}}|$ and $\text{ERR}_{v} = |v_{\text{\tiny DNCB}}-v_{\text{\tiny GE}}|$ on the right. Here, $u_{\text{\tiny DNCB}}$ and $v_{\text{\tiny DNCB}}$ are obtained by solving the discrete space fractional CNLS equations with DNCB-GMRES on each time level of the LICD scheme, and $u_{\text{\tiny GE}}$ and $v_{\text{\tiny GE}}$ are the exact solutions of the original LICD scheme computed by GE. It is observed that the shape of the numerical solution is affected by the fractional order $\alpha$.
When $\alpha$ approaches $2$, the shape of the solution of the space fractional CNLS equations ([\[couple equation\]](#couple equation){reference-type="ref" reference="couple equation"}) tends to that of the solution of the standard CNLS equations ($\alpha=2$). In addition, the error between the numerical solution of ([\[couple equation\]](#couple equation){reference-type="ref" reference="couple equation"}) and the exact solution of the original LICD scheme remains as small as around $\mathcal{O}(10^{-4})$ over the whole space-time domain, which shows that the numerical solution obtained by DNCB-GMRES is reliable. In fact, numerical solutions $u_{\text{\tiny DNCB}}$ and $v_{\text{\tiny DNCB}}$ with higher precision can be obtained by tightening the stopping criterion of DNCB-GMRES.

![ The curves of IT of DNCB-GMRES versus the number of the inner spatial discrete points $M$ of the LICD scheme in the CNLS case when $\alpha=1.1:0.2:1.9$: blue solid line with circle mark for $\alpha=1.1$, red solid line with diamond mark for $\alpha=1.3$, orange solid line with triangle mark for $\alpha=1.5$, purple solid line with square mark for $\alpha=1.7$, green solid line with pentagram mark for $\alpha=1.9$.
](IT-space-couple-eps-converted-to.pdf){#fig:couple space-it}

![ The curves of IT of DNCB-GMRES and CPMHSS-GMRES versus the fractional order $\alpha=1.1:0.1:2$ in the CNLS case when $M=6400$: blue dashed line for DNCB-GMRES, red solid line for CPMHSS-GMRES.](couple-alp-it-eps-converted-to.pdf){#fig:couple-alp-IT}

![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=1.1$ and $M=800$.](couple-alp=1.1-800-eps-converted-to.pdf){#fig:couple-alp=1.1-beta=1} ![The numerical solution (left)
and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=1.1$ and $M=800$.](couple-minus-alp=1.1-800-eps-converted-to.pdf){#fig:couple-alp=1.1-beta=1}

![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=1.5$ and $M=800$.](couple-alp=1.5-800-eps-converted-to.pdf){#fig:couple-alp=1.5-beta=1} ![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=1.5$ and $M=800$.](couple-minus-alp=1.5-800-eps-converted-to.pdf){#fig:couple-alp=1.5-beta=1}
![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=1.9$ and $M=800$.](couple-alp=1.9-800-eps-converted-to.pdf){#fig:couple-alp=1.9-beta=1} ![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=1.9$ and $M=800$.](couple-minus-alp=1.9-800-eps-converted-to.pdf){#fig:couple-alp=1.9-beta=1}

![The numerical solution (left) and its error (right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=2$ and $M=800$.](couple-alp=2-800-eps-converted-to.pdf){#fig:couple-alp=2-beta=1} ![The numerical solution (left) and its error
(right) with the exact solution of the LICD scheme in the CNLS case when $\alpha=2$ and $M=800$.](couple-minus-alp=2-800-eps-converted-to.pdf){#fig:couple-alp=2-beta=1} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ # Concluding remarks {#sec-conclusions} The DNCB preconditioned Krylov subspace iteration methods (e.g., DNCB-GMRES) are effective and efficient linear solvers for the complex linear system arising from the system of repulsive space fractional CNLS equations. The DNCB preconditioner is a circulant approximation of the DNTB preconditioner naturally derived from the DNTB iteration method. In theory, the DNTB iteration method converges unconditionally, and the optimal iteration parameter is deducted. Moreover, good clustering properties of the spectra of the DNCB preconditioned system matrix are proved based on the sharp bounds for the eigenvalues of the discrete fractional Laplacian and its circulant approximation. The above theoretical results are sufficiently verified by the numerical experiments based on 1D repulsive space fractional DNLS and CNLS equations. Now, we consider possible future works. Firstly, since the DNCB preconditioner is obtained by circulant approximation of the discrete fractional Laplacian $T$, we can also approximate $T$ by other techniques, for instance, the sine transform based preconditioning technique [@HuangLinNgSun2022], which may lead to even more efficient preconditioner. Secondly, higher dimensional problems are always interesting, thus extending the DNCB preconditioner to the higher dimensional cases deserves further study. 
Thirdly, variable coefficient problems and non-uniform spatial discretization schemes deprive the derived complex linear system of its explicit Toeplitz structure. Therefore, exploiting a possible implicit data-sparse structure and combining other approaches (e.g., the hierarchical-matrix approach [@BM2008SpringerBook; @HKL2016CPAM]) with our methodology to construct new preconditioners will be useful and challenging work. # Acknowledgments {#acknowledgments .unnumbered} This work was funded by the National Natural Science Foundation (No. 11101213 and No. 12071215), China. # References {#references .unnumbered} N. Laskin, Fractional quantum mechanics and Lévy path integrals, Phys. Lett. A 268 (2000) pp. 298--305. N. Laskin, Fractional quantum mechanics, Phys. Rev. E 62 (2000) pp. 3135. X.-Y. Guo, M.-Y. Xu, Some physical applications of fractional Schrödinger equation, J. Math. Phys. 47 (2006) pp. 082104. R.N. Mantegna, H.E. Stanley, Scaling behaviour in the dynamics of an economic index, Nature 376 (1995) pp. 46--49. G. Zimbardo, P. Veltri, G. Basile, et al., Anomalous diffusion and Lévy random walk of magnetic field lines in three dimensional turbulence, Phys. Plasmas 2 (1995) pp. 2653--2663. B.-L. Guo, Y.-Q. Han, J. Xin, Existence of the global smooth solution to the period boundary value problem of fractional nonlinear Schrödinger equation, Appl. Math. Comput. 204 (2008) pp. 468--477. B.-L. Guo, Z.-H. Huo, Global well-posedness for the fractional nonlinear Schrödinger equation, Commun. Part. Diff. Eq. 36 (2010) pp. 247--255. B.-L. Guo, Z.-H. Huo, Well-posedness for the nonlinear fractional Schrödinger equation and inviscid limit behavior of solution for the fractional Ginzburg-Landau equation, Fract. Calc. Appl. Anal. 16 (2013) pp. 226--242. S. Secchi, Ground state solutions for nonlinear fractional Schrödinger equations in $\mathbb{R}^n$, J. Math. Phys. 54 (2013) pp. 031501. Y. Li, D. Zhao, Q.-X. 
Wang, Ground state solution and nodal solution for fractional nonlinear Schrödinger equation with indefinite potential, J. Math. Phys. 60 (2019) pp. 041501. M. Li, X.-M. Gu, C.-M. Huang, et al., A fast linearized conservative finite element method for the strongly coupled nonlinear fractional Schrödinger equations, J. Comput. Phys. 358 (2018) pp. 256--282. M. Li, C.-M. Huang, P.-D. Wang, Galerkin finite element method for nonlinear fractional Schrödinger equations, Numer. Algorithms 74 (2017) pp. 499--525. S.-W. Duo, Y.-Z. Zhang, Mass-conservative Fourier spectral methods for solving the fractional nonlinear Schrödinger equation, Comput. Math. Appl. 71 (2016) pp. 2257--2271. Y. Wang, L.-Q. Mei, Q. Li, et al., Split-step spectral Galerkin method for the two-dimensional nonlinear space-fractional Schrödinger equation, Appl. Numer. Math. 136 (2019) pp. 257--278. F. Zeng, F. Liu, C. Li, et al., A Crank-Nicolson ADI spectral method for a two-dimensional Riesz space fractional nonlinear reaction-diffusion equation, SIAM J. Numer. Anal. 52 (2014) pp. 2599-2622. P. Amore, F.M. Fernández, C.P. Hofmann, et al., Collocation method for fractional quantum mechanics, J. Math. Phys. 51 (2010) pp. 122101. A.H. Bhrawy, M.A. Zaky, An improved collocation method for multi-dimensional space-time variable-order fractional Schrödinger equations, Appl. Numer. Math. 111 (2017) pp. 197--218. D.-L. Wang, A.-G. Xiao, W. Yang, Crank--Nicolson difference scheme for the coupled nonlinear Schrödinger equations with the Riesz space fractional derivative, J. Comput. Phys. 242 (2013) pp. 670--681. D.-L. Wang, A.-G. Xiao, W. Yang, A linearly implicit conservative difference scheme for the space fractional coupled nonlinear Schrödinger equations, J. Comput. Phys. 272 (2014) pp. 644--655. P.-D. Wang, C.-M. Huang, An energy conservative difference scheme for the nonlinear fractional Schrödinger equations, J. Comput. Phys. 293 (2015) pp. 238--251. R.-P. Zhang, Y. T. Zhang, Z. 
Wang, et al., A conservative numerical method for the fractional nonlinear Schrödinger equation in two dimensions, Sci. China Math. 62 (2019) pp. 1997--2014. X. Zhao, Z.-Z. Sun, Z.-P. Hao, A fourth-order compact ADI scheme for two-dimensional nonlinear space fractional Schrödinger equation, SIAM J. Sci. Comput. 36 (2014) pp. A2865--A2886. S. Leble, B. Reichel, Coupled nonlinear Schrödinger equations in optic fibers theory: from general to solitonic aspects, Eur. Phys. J. Spec. Top. 173 (2009) pp. 5--55. N. Laskin, Fractional Schrödinger equation, Phys. Rev. E 66 (2002) pp. 056108. Y. Luchko, Fractional Schrödinger equation for a particle moving in a potential well, J. Math. Phys. 54 (2013) pp. 012111. W.-Z. Bao, Y.-Y. Cai, Mathematical theory and numerical methods for Bose-Einstein condensation, arXiv preprint arXiv:1212.5341 (2012). L.D. Carr, C.W. Clark, W.P. Reinhardt, Stationary solutions of the one dimensional nonlinear Schrödinger equation I. Case of repulsive nonlinearity, Phys. Rev. A 62 (2000) pp. 063610. S. Jin, C.D. Levermore, D.W. McLaughlin, The semiclassical limit of the defocusing NLS hierarchy, Commun. Pur. Appl. Math. 52 (1999) pp. 613--654. W.-Z. Bao, D. Jaksch, An explicit unconditionally stable numerical method for solving damped nonlinear Schrödinger equations with a focusing nonlinearity, SIAM J. Numer. Anal. 41 (2003) pp. 1406--1426. H. Saito, M. Ueda, Intermittent implosion and pattern formation of trapped Bose-Einstein condensates with an attractive interaction, Phys. Rev. Lett. 86 (2001) pp. 1406. M.D. Ortigueira, Riesz potential operators and inverses via fractional centred derivatives, Int. J. Math. Math. Sci. 2006 (2006) pp. 048391. G.H. Golub, C.F. Van Loan, Matrix Computations, 4th edn. Johns Hopkins University Press, Baltimore, 2013. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edn. Society for Industrial and Applied Mathematics, Philadelphia, 2003. R.H. Chan, K.P. 
Ng, Fast iterative solvers for Toeplitz-plus-band systems, SIAM J. Sci. Comput. 14 (1993) pp. 1013--1019. M.K. Ng, J.-Y. Pan, Approximate inverse circulant-plus-diagonal preconditioners for Toeplitz-plus-diagonal matrices, SIAM J. Sci. Comput. 32 (2010) pp. 1442--1464. Z.-Z. Bai, K.-L. Lu, J.-Y. Pan, Diagonal and Toeplitz splitting iteration methods for diagonal-plus-Toeplitz linear systems from spatial fractional diffusion equations, Numer. Linear Algebra Appl. 24 (2017) pp. e2093. Z.-Z. Bai, K.-Y. Lu, Fast matrix splitting preconditioners for higher dimensional spatial fractional diffusion equations, J. Comput. Phys. 404 (2020) pp. 109117. Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) pp. 603--626. Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) pp. 1--32. Z.-Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) pp. 93--111. Z.-Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms 56 (2011) pp. 297--317. Z.-Z. Bai, M. Benzi, F. Chen, Z.-Q. Wang, Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal. 33 (2013) pp. 343--369. O. Axelsson, A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl. 7 (2000) pp. 197--218. Y.-H. Ran, J.-G. Wang, D.-L. Wang, On partially inexact HSS iteration methods for the complex symmetric linear systems in space fractional CNLS equations, J. Comput. Appl. Math. 317 (2017) pp. 128--136. Y.-H. Ran, J.-G. Wang, D.-L. 
Wang, On HSS-like iteration method for the space fractional coupled nonlinear Schrödinger equations, Appl. Math. Comput. 271 (2015) pp. 482--488. Y.-H. Ran, J.-G. Wang, D.-L. Wang, On preconditioners based on HSS for the space fractional CNLS equations, East Asian J. Appl. Math. 7 (2017) pp. 70--81. Z.-Q. Wang, J.-F. Yin, Q.-Y. Dou, Preconditioned modified Hermitian and skew-Hermitian splitting iteration methods for fractional nonlinear Schrödinger equations, J. Comput. Appl. Math. 367 (2020) pp. 112420. A. Lischke, G. Pang, M. Gulian, et al., What is the fractional Laplacian? A comparative review with new results, J. Comput. Phys. 404 (2020) pp. 109009. C. Çelik, M. Duman, Crank-Nicolson method for the fractional diffusion equation with the Riesz fractional derivative, J. Comput. Phys. 231 (2012) pp. 1743--1750. D.W. Peaceman, H.H. Rachford Jr., The numerical solution of parabolic and elliptic differential equations, J. Soc. Ind. Appl. Math. 3 (1955) pp. 28--41. J. Douglas, Alternating direction methods for three space variables, Numer. Math. 4 (1962) pp. 41--63. Z.-Z. Bai, G.H. Golub, M.K. Ng, On successive overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Numer. Linear Algebra Appl. 14 (2007) pp. 319--335. R.H. Chan, G. Strang, Toeplitz equations by conjugate gradients with circulant preconditioner, SIAM J. Sci. Stat. Comput. 10 (1989) pp. 104--119. H.E. Robbins, A remark on Stirling's formula, Am. Math. Mon. 62 (1955) pp. 26--29. F.L. Bauer, C.T. Fike, Norms and exclusion theorems, Numer. Math. 2 (1960) pp. 137--141. R.H. Chan, X.-Q. Jin, An Introduction to Iterative Toeplitz Solvers, Society for Industrial and Applied Mathematics, Philadelphia, 2007. X. Huang, X.-L. Lin, M.K. Ng, et al., Spectral analysis for preconditioning of multi-dimensional Riesz fractional diffusion equations, Numer. Math. Theor. Meth. Appl. 15 (2022) pp. 565--591. M. Bebendorf, Hierarchical Matrices, Springer-Verlag, Heidelberg, 2008. K.L. Ho, L. 
Ying, Hierarchical interpolative factorization for elliptic operators: differential equations, Commun. Pur. Appl. Math. 69 (2016) pp. 1415-1451. [^1]: College of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China. Key Laboratory of Mathematical Modelling and High Performance Computing of Air Vehicles (NUAA), MIIT, Nanjing 211106, China (yangxi\@nuaa.edu.cn, zhangfy\@nuaa.edu.cn). [^2]: This work was funded by the National Natural Science Foundation (No. 11101213 and No. 12071215), China. Corresponding author: Xi Yang.
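To illustrate the circulant-approximation idea discussed in the concluding remarks, the following NumPy sketch builds a Strang-type circulant approximation of a symmetric Toeplitz matrix and applies its inverse with FFTs. This is a generic textbook construction with a made-up decaying symbol `t`, not the paper's actual DNCB preconditioner; it only shows why circulant approximations are cheap to invert.

```python
import numpy as np

def toeplitz_sym(t):
    """Dense symmetric Toeplitz matrix with first column t."""
    n = len(t)
    return np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

def circulant(c):
    """Dense circulant matrix with first column c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def strang_column(t):
    """First column of the Strang circulant approximation of toeplitz_sym(t)."""
    n = len(t)
    return np.array([t[k] if k <= n // 2 else t[n - k] for k in range(n)])

def circulant_solve(c, b):
    """Solve C x = b, with C the circulant matrix of first column c, via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 64
t = 1.0 / (1.0 + np.arange(n)) ** 3   # made-up decaying Toeplitz symbol
c = strang_column(t)
C = circulant(c)

b = np.sin(0.1 * np.arange(n))
x = circulant_solve(c, b)             # O(n log n) application of C^{-1}
```

Because circulant matrices are diagonalized by the discrete Fourier transform, applying $C^{-1}$ costs only a few FFTs per iteration; the DNCB preconditioner of the paper, roughly speaking, replaces the Toeplitz blocks of the DNTB preconditioner by circulant approximations of this kind.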
--- abstract: | We obtain asymptotic resolvent expansions at the threshold of the essential spectrum for magnetic Schrödinger and Pauli operators in dimension three. These operators are treated as perturbations of the Laplace operator in $L^2(\mathbb{R}^3)$ and $L^2(\mathbb{R}^3;\mathbb{C}^2)$, respectively. The main novelty of our approach is to show that the relative perturbations, which are first order differential operators, can be factorized in suitably chosen auxiliary spaces. This allows us to derive the desired asymptotic expansions of the resolvents around zero. We then calculate their leading and sub-leading terms explicitly. Analogous factorization schemes for more general perturbations, including e.g. finite rank perturbations, are discussed as well. author: - "Arne Jensen[^1] and Hynek Kovařík[^2]" title: Resolvent expansions of 3D magnetic Schrödinger operators and Pauli operators --- 35Q40, 35P05, 35J10 # Introduction {#sec-intro} The purpose of this paper is to prove asymptotic expansions around the threshold zero of the resolvents of magnetic Schrödinger operators and Pauli operators in dimension three. Besides being of interest on their own, resolvent expansions are also important for treating the low energy part in the proof of dispersive estimates for the operators we consider. As far as we know, the results obtained here are the first to treat in detail all possible cases for the threshold zero. Using the notation $P=-i\nabla$, the magnetic Schrödinger operator is the operator $$H=(P-A)^2+V\quad\text{on $L^2(\mathbb{R}^3)$}.$$ Here $A\colon\mathbb{R}^3\to\mathbb{R}^3$ is the magnetic vector potential and $V\colon\mathbb{R}^3\to\mathbb{R}$ the electrostatic potential. We assume that both $V$ and $A$ are bounded and decay sufficiently fast. More precisely, in the latter case we assume that the magnetic field decays fast enough and show that a vector potential $A$ can be constructed in such a way as to satisfy the required decay conditions, cf. 
Lemma [Lemma 2](#A-choice){reference-type="ref" reference="A-choice"}. We consider the resolvent $R(z)=(H-zI)^{-1}$. It is convenient to change the variable in the resolvent from $z$ to $\kappa$: for $z\in\mathbb{C}\setminus[0,\infty)$ we take $\kappa=-i\sqrt{z}$, where $\mathop{\mathrm{Im}}\sqrt{z}>0$, such that $z=-\kappa^2$. Then we write $R(\kappa)=(H+\kappa^2I)^{-1}$. To analyze the behavior of $R(\kappa)$ around the threshold zero the first step is to analyze the structure of solutions to $Hu=0$ in the weighted Sobolev spaces $H^{1,-s}$, $\frac12<s\leq\frac32$. See Section [2](#set-up){reference-type="ref" reference="set-up"} for the definition of these spaces. The solutions can be classified as follows. Assume that there exist $N$ linearly independent solutions to $Hu=0$ in $H^{1,-s}$, $\frac12<s\leq\frac32$. These solutions can be chosen in such a way that at most one solution satisfies $u\notin L^2(\mathbb{R}^3)$. The remaining $N-1$ solutions are eigenfunctions of $H$. This is the same classification as in the case $A=0$, see [@JK]. The resolvent expansions are in the topology of the bounded operators from $H^{-1,s}$ to $H^{1,-s'}$, for admissible values of $s$, $s'$. In the generic case there are no non-zero solutions to $Hu=0$ in $H^{1,-s}$, $\frac12<s\leq\frac32$, and then zero is said to be a regular point for $H$. The leading part of the asymptotic expansion takes the form $$R(\kappa)=F_0+\kappa F_1+\mathcal{O}(\kappa^2)\quad\text{as $\kappa\to0$}$$ in the topology of bounded operators from $H^{-1,s}$ to $H^{1,-s'}$, $s,s'>\frac52$. If there exist non-zero solutions to $Hu=0$ in $H^{1,-s}$, $\frac12<s\leq\frac32$, zero is said to be an exceptional point. In this case the leading part of the asymptotic expansion takes the form $$R(\kappa)=\kappa^{-2}F_{-2}+\kappa^{-1}F_{-1}+\mathcal{O}(1) \quad\text{as $\kappa\to0$}$$ in the topology of bounded operators from $H^{-1,s}$ to $H^{1,-s'}$, $s,s'>\frac92$. More precisely, there are three exceptional cases. 
In the first exceptional case there exists only one (up to normalization) solution to $Hu=0$ in $H^{1,-s}$, $\frac12<s\leq\frac32$, such that $u\notin L^2(\mathbb{R}^3)$. This is the *zero resonance* case. In this case $F_{-2}=0$ and $F_{-1}=\lvert{\psi_c}\rangle\langle{\psi_c}\rvert$, where $\psi_c$ is a normalization of the non-zero solution $u$. In the second exceptional case all solutions to $Hu=0$ in $H^{1,-s}$, $\frac12<s\leq\frac32$, lie in $L^2(\mathbb{R}^3)$ and zero is an eigenvalue of $H$. In this case $F_{-2}=P_0$, the eigenprojection of eigenvalue zero of $H$. The operator $F_{-1}$ is of rank at most $3$. It is described more precisely in Theorem [Theorem 17](#thm53){reference-type="ref" reference="thm53"}. The third exceptional case is the one where one has both a zero resonance and at least one zero eigenvalue. With the right choice of zero resonance function $\psi_c$ we have $F_{-2}=P_0$, and $F_{-1}$ is the sum of the coefficients in the first and second exceptional cases. See Theorems [Theorem 14](#thm-regular){reference-type="ref" reference="thm-regular"} and [Theorem 17](#thm53){reference-type="ref" reference="thm53"} for the full statements of the results. Next we obtain similar resolvent expansions for the Pauli operator $$H_P=\bigl(\sigma\cdot(P-A)\bigr)^2+V\mathbf{1}_2 =(P-A)^2\mathbf{1}_2+\sigma\cdot B+V\mathbf{1}_2,$$ where $\sigma=(\sigma_1,\sigma_2,\sigma_3)$ denotes the Pauli matrices and $\mathbf{1}_2$ the $2\times2$ identity matrix. The operator is defined on $L^2(\mathbb{R}^3;\mathbb{C}^2)$. We decompose it as $$H_P=-\Delta\mathbf{1}_2+W_P\, ,$$ where $$W_P=\bigl(-P\cdot A -A\cdot P +\lvert{A}\rvert^2\bigr) \mathbf{1}_2 + V\mathbf{1}_2+\sigma\cdot B.$$ Then we can obtain a classification of the point zero in the spectrum of $H_P$ and resolvent expansions around the threshold zero. 
The results are essentially the same as for the magnetic Schrödinger operator, see Theorems [Theorem 21](#thm-regular-pauli){reference-type="ref" reference="thm-regular-pauli"} and [Theorem 23](#thm-ex-pauli){reference-type="ref" reference="thm-ex-pauli"}. The proofs of these results are obtained by taking the results on the resolvent expansion of $-\Delta$ from [@JK] and combining them with the factored resolvent technique from [@jn], adapted to the two cases considered here. The main point here is that we can write the magnetic Schrödinger operator as $$H=-\Delta+W\quad\text{with} \quad W= -P\cdot A-A\cdot P+A^2+V,$$ and we can factor the perturbation as $W=w^*Uw$. Let $\beta>0$ be the decay rate of the potentials. Then $w\colon H^{1,-\beta/2}(\mathbb{R}^3)\to\mathscr{K}$ ($\mathscr{K}$ an auxiliary space) and $U$ is a self-adjoint and unitary operator on $\mathscr{K}$. See Section [4](#sec-factored){reference-type="ref" reference="sec-factored"} for the details. A similar factorization holds for $W_P$ in the Pauli operator case. Once the factorization is in place, the scheme from [@jn] can be applied and leads to the resolvent expansion results. It should be noted that the factorization method developed in this paper can be applied not only to perturbations arising from magnetic Hamiltonians, but to all perturbations represented by self-adjoint first order differential operators, see Remark [Remark 6](#rem-first-order){reference-type="ref" reference="rem-first-order"} for more details. As applications of the resolvent expansions of $H$ and $H_P$ around zero we obtain some further results. First, we consider the case $V\geq0$ for the magnetic Schrödinger operator and show that the point zero is a regular point. See Corollary [Corollary 20](#cor-regular){reference-type="ref" reference="cor-regular"}. Second, for the Pauli operator we consider the case $V=0$ and show that there are no zero resonances, cf. 
Lemma [Lemma 24](#lem-no-resonance){reference-type="ref" reference="lem-no-resonance"}. Moreover, we establish the connection between our results and the criterion for zero eigenvalues obtained in [@be; @bel; @bvb], see Proposition [Proposition 25](#prop-zero-mode){reference-type="ref" reference="prop-zero-mode"}. Resolvent expansions have a long history. We will not give a full account, but limit ourselves to the following remarks. Results on Schrödinger operators in $L^2(\mathbb{R}^3)$ were obtained in [@JK]. In particular, the classification of the point zero used here was introduced in that paper. All dimensions and general perturbations, including first order differential operators, were considered in [@mu], but the resolvent expansions were less explicit than the ones obtained here. These two papers were followed by many others, obtaining resolvent expansions in a variety of different contexts. In the two-dimensional setting, resolvent expansions of magnetic Schrödinger operators, for the generic case, and of purely magnetic Pauli operators were established in [@kov1; @kov2]. However, in dimension three, very few papers have treated the case of magnetic Schrödinger operators, and none of them Pauli operators, as far as we know. Partial results in the generic case for magnetic Schrödinger operators were obtained in [@kk]. The behavior of the resolvent at the threshold, again in the generic case, was also studied in [@egs], where Strichartz estimates for magnetic Schrödinger operators are proved. The paper is organized as follows. In Section [2](#set-up){reference-type="ref" reference="set-up"} we introduce notation and the basic set-up for magnetic Schrödinger operators. In Section [3](#sec-free-exp){reference-type="ref" reference="sec-free-exp"} we recall some results on the free resolvent from [@JK]. Section [4](#sec-factored){reference-type="ref" reference="sec-factored"} is devoted to the factored resolvent equation. 
We derive a number of properties of the operators entering into this factorization. In Section [5](#sec-main){reference-type="ref" reference="sec-main"} we state the main results on resolvent expansions for magnetic Schrödinger operators. We limit the statements to the ones giving the singularity structure at threshold zero. In Section [6](#sec-pauli){reference-type="ref" reference="sec-pauli"} we state the results on the Pauli operator. In the final Section [7](#sec-general){reference-type="ref" reference="sec-general"} we briefly explain how to obtain a factorization of a general perturbation, thus allowing one to treat for example finite rank perturbations of a magnetic Schrödinger operator. # The set-up {#set-up} We will consider magnetic Schrödinger operators in $\mathbb{R}^3$. Let $B$ be a magnetic field in $\mathbb{R}^3$ and let $A\colon\mathbb{R}^3\to\mathbb{R}^3$ be an associated vector potential satisfying $\mathop{\mathrm{curl}}A=B$. Moreover, let $V\colon\mathbb{R}^3 \to \mathbb{R}$ be a scalar electric field. We consider the magnetic Schrödinger operator $$\label{schr-op} H = (P -A)^2 +V,\quad \text{where $P = -i\nabla$},$$ on $L^2(\mathbb{R}^3)$. Its resolvent is denoted by $$R(z)= (H-zI)^{-1}.$$ Our goal is to obtain asymptotic expansions of this resolvent around the threshold zero of $H$. These expansions are valid in the topology of bounded operators between weighted Sobolev spaces. We recall the definition of the weighted Sobolev spaces. Let $\langle x \rangle = (1+\lvert{x}\rvert^2)^{1/2}$. On the Schwartz space $\mathcal{S}(\mathbb{R}^3)$ define a norm $$\label{hms-norm} \lVert{u}\rVert_{H^{k,s}} = \lVert{\langle x \rangle ^s (1- \Delta)^{k/2} u}\rVert_{L^2(\mathbb{R}^3)}, \quad \text{$k\in\mathbb{R}$, $s\in\mathbb{R}$}.$$ The completion of $\mathcal{S}(\mathbb{R}^3)$ with this norm is the weighted Sobolev space, denoted by $H^{k,s}(\mathbb{R}^3)$. In the sequel we abbreviate this notation to $H^{k,s}$. 
The same holds for other spaces defined on $\mathbb{R}^3$. Obviously, $H^{0,0}=L^2(\mathbb{R}^3)$. The inner product $\langle{\cdot},{\cdot}\rangle$ on $L^2$ extends to a duality between $H^{k,s}$ and $H^{-k,-s}$. The bounded operators from $H^{k,s}$ to $H^{k',s'}$ are denoted by $$\mathscr{B}(k,s;k',s') = \mathscr{B}(H^{k,s}; H^{k',s'})$$ and this space is equipped with the operator norm. For later use we note the following property. Let $s_j\in\mathbb{R}$, $j=1,2$, with $s_1\leq s_2$ and $k\in\mathbb{R}$. Then we have the continuous embedding $$\label{embed} H^{k,s_2}\hookrightarrow H^{k,s_1}.$$ It is convenient to use the notation $$H^{k,s+0} = \bigcup_{s<r} H^{k,r} , \qquad H^{k,s-0} = \bigcap_{r<s} H^{ k,r}.$$ However, we do not introduce topologies on these spaces. They are considered only as algebraic vector spaces. Let us now state the assumptions on $B$ and $V$, and explain our choice of vector potential $A$. **Assumption 1**. Let $\beta>2$. Let $V\colon\mathbb{R}^3\to\mathbb{R}$ satisfy $$\lvert{V(x)}\rvert\lesssim\langle x \rangle ^{-\beta},\quad x\in\mathbb{R}^3.$$ Let $B\colon\mathbb{R}^3\to\mathbb{R}^3$ be continuously differentiable, such that $\nabla\cdot B=0$ and $$\label{B-decay-cond} \lvert{B(x)}\rvert\lesssim\langle x \rangle ^{-\beta-1},\quad x\in\mathbb{R}^3.$$ In the proof of the following lemma we explain our choice of gauge for $B$ satisfying the above assumption. **Lemma 2**. *There exists a vector potential $A\colon\mathbb{R}^3\to\mathbb{R}^3$ with $\mathop{\mathrm{curl}}A=B$ such that $$\label{A-decay} \lvert{A(x)}\rvert\lesssim\langle x \rangle ^{-\beta},\quad x\in\mathbb{R}^3.$$* *Proof.* Let $$A_p(x) = \int_0^1 B(t x)\, t\, dt \wedge x$$ denote the vector potential associated to $B$ via the Poincaré gauge. Moreover, let $$\label{LS-range} a_\ell(x) = \int_0^\infty B(t x)\, t\, dt \wedge x, \qquad a_s(x) = \int_1^\infty B(t x)\, t\, dt \wedge x$$ be the long and the short range components of $A_p$. 
Note that $a_\ell, a_s\colon\mathbb{R}^3\setminus\{0\}\to\mathbb{R}^3$, and that $A_p= a_\ell -a_s$. The crucial observation is that since $B$ is a magnetic field, we have $\nabla \cdot B=0$, and a short calculation gives $\nabla \wedge a_\ell =0$ in $\mathbb{R}^3\setminus\{0\}$. Since $\mathbb{R}^3\setminus\{0\}$ is simply connected, there exists $\widetilde\varphi \in C^2(\mathbb{R}^3\setminus\{0\})$ such that $\nabla \widetilde\varphi = a_\ell$. Note however that $$\lvert{a_\ell(x)}\rvert \sim \lvert{x}\rvert^{-1} \quad \text{as $\lvert{x}\rvert\to 0$},$$ by scaling. Hence in order to construct a vector potential $A$ which satisfies [\[A-decay\]](#A-decay){reference-type="eqref" reference="A-decay"} we have to modify $\widetilde\varphi$ in the vicinity of the origin. By Tietze's extension theorem there exists $\varphi \in C^2(\mathbb{R}^3)$ such that $\varphi(x)=\widetilde\varphi(x)$ for all $x$ with $\lvert{x}\rvert \geq 1$. Now we define $A\colon\mathbb{R}^3\to\mathbb{R}^3$ by $$\label{A-gauge-defin} A =A_p -\nabla \varphi .$$ Then $A\in C^1(\mathbb{R}^3)$. Moreover, since $\nabla\varphi=\nabla\widetilde\varphi=a_\ell$ for $\lvert{x}\rvert\geq 1$, we have $A=A_p-a_\ell=-a_s$ there, and hence for all $\lvert{x}\rvert\geq 1$ $$\begin{aligned} \lvert{A(x)}\rvert &\leq \lvert{x}\rvert \int_1^\infty t \lvert{B(tx)}\rvert dt = \lvert{x}\rvert^{-1} \int_{\lvert{x}\rvert}^\infty s \lvert{B(s\lvert{x}\rvert^{-1}x)}\rvert ds \\ &\leq C \lvert{x}\rvert^{-1} \int_{\lvert{x}\rvert}^\infty \langle s \rangle^{-\beta} ds \leq C \langle x \rangle^{-\beta},\end{aligned}$$ as required. ◻ **Remark 3**. The fact that for a given short range magnetic field in $\mathbb{R}^3$ it is always possible to construct a short range vector potential $A$, contrary to the case of dimension two, is well-known, cf. [@ya]. We consider the operator $H$ as a perturbation of $-\Delta$ and denote the perturbation by $W$, i.e. we define $$\label{W-def} W = H + \Delta= -P\cdot A -A\cdot P +\lvert{A}\rvert^2 +V.$$ Note that $W$ is a first order differential operator and thus a local operator. The following lemma is stated without proof. 
**Lemma 4**. *Let $B$ and $V$ satisfy Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} and let $A$ be chosen as in Lemma [Lemma 2](#A-choice){reference-type="ref" reference="A-choice"}. Then $W$ is a compact operator from $H^{1,s}$ to $H^{-1,s+\beta'}$ for any $s\in\mathbb{R}$ and $\beta'<\beta$.* # Properties of the free resolvent {#sec-free-exp} Let $R_0(z)=(-\Delta-zI)^{-1}$, $z\in\mathbb{C}\setminus[0,\infty)$. We recall some properties of this resolvent from [@JK; @jn]. We use the conventions from [@jn]. For $z\in\mathbb{C}\setminus[0,\infty)$ let $\kappa=-i\sqrt{z}$, where $\mathop{\mathrm{Im}}\sqrt{z}>0$, such that $z=-\kappa^2$. We write $R_0(\kappa)$ instead of $R_0(-\kappa^2)$ in the sequel. **Lemma 5** ([@JK Lemma 2.2]). *Assume $p\in\mathbb{N}_0$ and $s>p+\frac32$. Then $$R_0(\kappa)=\sum_{j=0}^p\kappa^jG_j + \mathcal{O}(\kappa^{p+1})$$ as $\kappa\to0$, $\mathop{\mathrm{Re}}\kappa>0$, in $\mathscr{B}(-1,s;1,-s)$. Here the operators $G_j$ are given by their integral kernels $$G_j(x,y)=(-1)^j\frac{\lvert{x-y}\rvert^{j-1}}{4\pi j!},\quad j\geq0.$$ We have $$\label{G0-map} G_0\in\mathscr{B}(-1,s;1,-s')\quad\text{for $s,s'>\tfrac12$ and $s+s'\geq2$},$$ and for $j\geq1$ $$\label{Gj-map} G_j\in\mathscr{B}(-1,s;1,-s')\quad\text{for $s,s'>j+\tfrac12$}.$$* # The factored resolvent equation {#sec-factored} We will treat the operator $H$ as a perturbation of $-\Delta$. Write $$A =(A_1,A_2, A_3) \quad \text{with $A_j = D_j C_j$},$$ where $$\label{BC-decay} \lvert{D_j(x)}\rvert \lesssim\langle x \rangle^{-\beta/2} ,\quad \lvert{C_j(x)}\rvert \lesssim \langle x\rangle^{-\beta/2}.$$ Now let $$\label{K-def} \mathscr{K}= L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3;\mathbb{C}^3) \oplus L^2(\mathbb{R}^3;\mathbb{C}^3) \oplus L^2(\mathbb{R}^3;\mathbb{C}^3) ,$$ and put $$\begin{aligned} v(x) &= \sqrt{\lvert{V(x)}\rvert}, \\ U(x) &=\begin{cases} -1, &\text{if  $V(x) <0$}, \\ \phantom{-}1, &\text{otherwise}. 
\end{cases} \end{aligned}$$ We define an operator matrix by $$\label{omega} w = \begin{bmatrix} v & A_1 & A_2 & A_3 & C_1 & C_2 & C_3 & D_1 P_1 & D_2 P_2 & D_3 P_3 \end{bmatrix}^T.$$ Under Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} and the choice [\[BC-decay\]](#BC-decay){reference-type="eqref" reference="BC-decay"} we have $$\label{w-map} w\in\mathscr{B}(H^{1,-s},\mathscr{K})\quad \text{and} \quad w^*\in\mathscr{B}(\mathscr{K},H^{-1,s})\quad \text{for $s\leq\beta/2$}.$$ Moreover, we define the block operator matrix $\mathcal{U}\colon\mathscr{K}\to\mathscr{K}$ by $$  \mathcal{U}= \begin{bmatrix} U & 0 &0 &0 \\ 0 & \mathbf{1}_3 &0 & 0 \\ 0 & 0 &0 & -\mathbf{1}_3 \\ 0 & 0 & -\mathbf{1}_3 & 0 \end{bmatrix}.$$ Here $\mathbf{1}_3$ denotes the $3\times 3$ unit matrix. Note that $\mathcal{U}$ is self-adjoint and that $\mathcal{U}^2=\mathbf{1}_{\mathscr{K}}$. The perturbation $W$ given by [\[W-def\]](#W-def){reference-type="eqref" reference="W-def"} then satisfies $$\label{factorisation} W = w^* \mathcal{U}w.$$ **Remark 6**. The same factorization method as above can be applied to any self-adjoint first order differential operator perturbation of $-\Delta$ of the form $$i (L\cdot \nabla +\nabla \cdot L) +V,$$ as long as the vector field $L\colon \mathbb{R}^3\to \mathbb{R}^3$ is sufficiently regular. Factorization of a more general class of perturbations is discussed in Section [7](#sec-general){reference-type="ref" reference="sec-general"}. To continue we define the operator $$M(\kappa)=\mathcal{U}+w R_0(\kappa)w^\ast$$ on $\mathscr{K}$. **Remark 7**. Note that for $-\kappa^2\notin\sigma(H)$ the operator $M(\kappa)$ is invertible. This follows from the relation $$M(\kappa)\bigl(\mathcal{U}-\mathcal{U}w(H+\kappa^2)^{-1}w^{\ast}\mathcal{U}\bigr) =\bigl(\mathcal{U}-\mathcal{U}w(H+\kappa^2)^{-1}w^{\ast}\mathcal{U}\bigr)M(\kappa)=I,$$ which is an immediate consequence of the second resolvent equation. 
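The identity in Remark 7 uses only $\mathcal{U}^2=\mathbf{1}_{\mathscr{K}}$ and the second resolvent equation, so it can be checked on a finite-dimensional toy model. The following NumPy sketch is purely illustrative: random matrices stand in for $-\Delta$, $w$ and $\mathcal{U}$, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 10                    # toy dimensions: "physical" space and auxiliary space K

# Stand-ins: T plays the role of -Delta (symmetric positive definite),
# w is the factor of the perturbation, and U is a self-adjoint involution.
G = rng.standard_normal((n, n))
T = G @ G.T + n * np.eye(n)
w = 0.3 * rng.standard_normal((m, n))
U = np.diag(rng.choice([-1.0, 1.0], size=m))

W = w.T @ U @ w                 # factorized perturbation W = w* U w
H = T + W
kappa = 0.3

R0 = np.linalg.inv(T + kappa**2 * np.eye(n))   # free resolvent R0(kappa)
R = np.linalg.inv(H + kappa**2 * np.eye(n))    # full resolvent R(kappa)
M = U + w @ R0 @ w.T                           # M(kappa) = U + w R0(kappa) w*

M_inv = U - U @ w @ R @ w.T @ U                # the inverse claimed in Remark 7
```

In this finite-dimensional setting the same computation also confirms the Woodbury-type factored resolvent equation $R(\kappa)=R_0(\kappa)-R_0(\kappa)w^{\ast}M(\kappa)^{-1}wR_0(\kappa)$.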
Lemma [Lemma 5](#free-exp){reference-type="ref" reference="free-exp"} leads to the following result. **Lemma 8**. *Let $p\in\mathbb{N}$. Assume $\beta>2p+3$. Then $$\label{M-exp} M(\kappa)=\sum_{j=0}^p\kappa^jM_j+\mathcal{O}(\kappa^{p+1})$$ as $\kappa\to0$, $\mathop{\mathrm{Re}}\kappa>0$, in $\mathscr{B}(\mathscr{K})$. Here $$\label{M0-3d} M_0=\mathcal{U}+wG_0w^\ast$$ and $$\label{Mj-eq} M_j=wG_jw^\ast, \quad j\geq1.$$* For all $-\kappa^2\notin\sigma(H)$ we have the factored resolvent equation $$\label{res-formula-2} R(\kappa)=R_0(\kappa)-R_0(\kappa)w^{\ast}M(\kappa)^{-1}wR_0(\kappa),$$ see e.g. [@jn]. It follows from [\[M-exp\]](#M-exp){reference-type="eqref" reference="M-exp"} that the operator $$\widetilde M_1(\kappa) =\frac 1\kappa \big(M(\kappa)-M_0\big)$$ is uniformly bounded as $\kappa\to 0$. The following inversion formula is needed for the expansion of $M(\kappa)^{-1}$ as $\kappa\to 0$. We state it in a form simplified to our setting. For its general form we refer to [@jn; @jn2]. **Lemma 9** ([@jn Corollary 2.2]). *Let $M(\kappa)$ be as above. Suppose that $0$ is an isolated point of the spectrum of $M_0$, and let $S$ be the corresponding Riesz projection. Then for sufficiently small $\kappa$ the operator $Q(\kappa)\colon S\mathscr{K}\to S\mathscr{K}$ defined by $$\begin{aligned} Q(\kappa) & = \frac 1\kappa \big(S-S(M(\kappa) +S)^{-1} S\big) = \sum_{j=0}^\infty (-\kappa)^j S \big[ \widetilde M_1(\kappa) (M_0+S)^{-1} \big]^{j+1} S\end{aligned}$$ is uniformly bounded as $\kappa\to 0$. Moreover, the operator $M(\kappa)$ has a bounded inverse in $\mathscr{K}$ if and only if $Q(\kappa)$ has a bounded inverse in $S\mathscr{K}$, and in this case $$M(\kappa)^{-1} = (M(\kappa) +S)^{-1} +\frac 1\kappa(M(\kappa) +S)^{-1} S Q(\kappa)^{-1} S (M(\kappa) +S)^{-1}.$$* Proposition [\[prop-M0\]](#prop-M0){reference-type="ref" reference="prop-M0"} below implies that the hypotheses of Lemma [Lemma 9](#lem-jn){reference-type="ref" reference="lem-jn"} are satisfied.
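For orientation, the kernels $G_j$ in Lemma [Lemma 5](#free-exp){reference-type="ref" reference="free-exp"} can be read off from the explicit kernel of the free resolvent in dimension three: for $\mathop{\mathrm{Re}}\kappa>0$ the operator $R_0(\kappa)=(-\Delta+\kappa^2)^{-1}$ has the integral kernel $e^{-\kappa\lvert{x-y}\rvert}/(4\pi\lvert{x-y}\rvert)$, and expanding the exponential at $\kappa=0$ gives $$\frac{e^{-\kappa\lvert{x-y}\rvert}}{4\pi\lvert{x-y}\rvert} =\sum_{j=0}^{\infty}\kappa^j\,(-1)^j\frac{\lvert{x-y}\rvert^{j-1}}{4\pi\, j!} =\sum_{j=0}^{\infty}\kappa^jG_j(x,y).$$ The growth of $\lvert{x-y}\rvert^{j-1}$ in $j$ explains why the mapping properties [\[Gj-map\]](#Gj-map){reference-type="eqref" reference="Gj-map"} require increasing weights.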
In view of equation [\[res-formula-2\]](#res-formula-2){reference-type="eqref" reference="res-formula-2"} the first step in obtaining an asymptotic expansion of $R(\kappa)$ as $\kappa\to0$ consists in analyzing $\ker M_0$. In the sequel we always assume at least $\beta>2$. Under this condition Lemma [Lemma 5](#free-exp){reference-type="ref" reference="free-exp"} implies that $G_0W\in\mathscr{B}(1,-s;1,-s)$ and $WG_0\in\mathscr{B}(-1,s;-1,s)$, provided $\frac12<s<\beta-\frac12$. We define $$\begin{aligned} M&\coloneqq\{u\in H^{1,-s}\mid (1+G_0W)u=0\},\\ N&\coloneqq\{u\in H^{-1,s}\mid (1+WG_0)u=0\}.\end{aligned}$$ It is shown in [@JK] that these spaces are independent of $s$ provided $\frac12<s<\beta-\frac12$. Furthermore, since $G_0W$ and $WG_0$ are compact (see Lemma [Lemma 4](#compact){reference-type="ref" reference="compact"}) we get by duality $$\label{NM-dim} \dim M= \dim N.$$ We need the following result from [@JK]. **Lemma 10** ([@JK Lemma 2.4]). * * - *$-\Delta G_0\, u = u$ for any $u\in H^{-1, \frac 12+0}$.* - *$G_0(-\Delta) u'= u'$ for any $u'\in H^{0, -\frac 32}$ such that $\Delta u' \in H^{-1, \frac 12+0}$.* The spaces $M$ and $\ker(M_0)$ are related to a generalized null space of $H$ which we define by $$\mathop{\mathrm{null}}(H)=\{u\in H^{1,-\frac12-0}\mid Hu=0\},$$ where $Hu$ is understood to be in the sense of distributions. **Lemma 11**. *Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied for some $\beta > 3$.* - *Let $f\in \ker(M_0)$, and define $u =-G_0w^* f$. Then $u\in M$, and $u\in \mathop{\mathrm{null}}(H)$.* - *Let $u\in M$. Then $u\in \mathop{\mathrm{null}}(H)$, and $f= \mathcal{U}w u$ satisfies $f\in \ker(M_0)$.* *Proof.* To prove part (1), assume $f\in\ker M_0$, i.e. $(\mathcal{U}+wG_0w^*)f=0$. Define $u=-G_0w^*f$. Since $w^*f\in H^{-1,\beta/2}\subset H^{-1,\frac32+0}$, we have $u\in H^{1,-\frac12-0}$ by [\[G0-map\]](#G0-map){reference-type="eqref" reference="G0-map"}. 
Lemma [Lemma 10](#lem-jk){reference-type="ref" reference="lem-jk"}(1) implies $H_0G_0w^*f=w^*f$ or $H_0u=-w^*f=w^*\mathcal{U}wG_0w^*f=-Wu$. Thus $u\in\mathop{\mathrm{null}}(H)$. To prove $u\in M$, note that $f\in\ker M_0$ implies $f=\mathcal{U}\mathcal{U}f =-\mathcal{U}wG_0w^*f=\mathcal{U}wu$. Hence $u=-G_0w^*f=-G_0w^*\mathcal{U}w u=-G_0Wu$, and $u\in M$ follows. To prove part (2), let $u\in M$. Then $u\in H^{1,-\frac12-0}$ and $Wu\in H^{-1,\beta-\frac12-0}\subset H^{-1,\frac12 + 0}$. Lemma [Lemma 10](#lem-jk){reference-type="ref" reference="lem-jk"}(1) implies $H_0u=-H_0G_0Wu=-Wu$ and $u\in\mathop{\mathrm{null}}(H)$ follows. Let $f=\mathcal{U}wu$. Then $f=-\mathcal{U}wG_0Wu=-\mathcal{U}w G_0 w^*\mathcal{U}wu =-\mathcal{U}w G_0 w^*f$, such that $f\in\ker M_0$ follows. ◻ Next we define the operators $T_1\colon \ker(M_0) \to M$ and $T_2\colon M \to \ker(M_0)$ by $$\label{T-12} T_1 = -G_0 w^*\big|_{\ker(M_0) } \quad \text{and} \quad T_2 = \mathcal{U}w\big|_M.$$ **Proposition 12**. * [\[prop-M0\]]{#prop-M0 label="prop-M0"} We have $$\label{M0-dim} \dim \ker(M_0) <\infty.$$ Moreover, $0$ is an isolated point of $\sigma(M_0)$.* *Proof.* From [\[T-12\]](#T-12){reference-type="eqref" reference="T-12"} we get $T_1 T_2 = -G_0 w^* \mathcal{U}w=-G_0 W$, which is the identity operator on $M$. On the other hand $T_2 T_1 = -\mathcal{U}w G_0 w^*$ is the identity operator on $\ker(M_0)$. Hence, in view of Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"} and [\[NM-dim\]](#NM-dim){reference-type="eqref" reference="NM-dim"} we have $\dim\ker(M_0) = \dim M = \dim N < \infty$. To prove the second part of the claim we argue by contradiction. Suppose that $0\in\sigma_{\textup{ess}}(M_0)$. Then there exists an orthonormal Weyl sequence $\{u_n\}$ in $\mathscr{K}$ such that $$\label{un-seq} \lVert{M_0 u_n}\rVert_\mathscr{K}\to 0 \quad \text{as $n\to \infty$}.$$ In particular, $\{u_n\}$ converges weakly to $0$ in $\mathscr{K}$. Let $X= \mathcal{U}w G_0 w^*$. 
Since $G_0 W$ is compact on $H^{1, -s}$, $\frac12<s<\beta-\frac12$, it follows that the operator $$X^2 = \mathcal{U}w G_0 W G_0 w^*$$ is compact on $\mathscr{K}$. Hence $X^2 u_n \to 0$ in $\mathscr{K}$. Since $(1+X) u_n = \mathcal{U}M_0 u_n\to 0$ in $\mathscr{K}$ as well (see [\[un-seq\]](#un-seq){reference-type="eqref" reference="un-seq"}) we deduce that $$X u_n= X(1+X) u_n -X^2 u_n \to 0 \qquad \text{in} \quad \mathscr{K},$$ which implies that $\lVert{u_n}\rVert_\mathscr{K}\to 0$. However, this is in contradiction with the fact that the sequence $\{u_n\}$ is orthonormal in $\mathscr{K}$. ◻ **Lemma 13**. *Let $u\in \mathop{\mathrm{null}}(H)$. Then $$\label{L2-condition} u \in L^2(\mathbb{R}^3) \quad \Leftrightarrow\quad \langle u, W 1 \rangle=0.$$* *Proof.* If $u\in \mathop{\mathrm{null}}(H)$, then $u\in H^{1, -\frac 12-0}$ by definition, and therefore $W u \in H^{-1, s}$ for any $s <\min \{\beta -\frac 12, \frac 52\}$. Lemma [Lemma 10](#lem-jk){reference-type="ref" reference="lem-jk"}(2) then says that $u = -G_0 W u$. Now assume that $\langle u, W 1 \rangle=0$. Then by [@JK Lemma 2.5] we have $u=-G_0 Wu \in H^{1, s-2}$. Hence $W u\in H^{-1, s +\delta}$ with $\delta =\beta -2>0$. Repeating this argument a sufficient number of times, we conclude that $W u\in H^{-1, \frac 52-0}$, and therefore $u\in H^{1, \frac 12-0}$. To prove the opposite implication, suppose that $u \in L^2(\mathbb{R}^3)$. Then $u_1= \Delta u= Wu \in H^{-1, \frac 32+0}$, which implies that $(1-\Delta)^{-\frac 12} u_1 \in L^1(\mathbb{R}^3)$. Hence $(1+\lvert{\,\cdot\,}\rvert^2)^{-\frac 12} \widehat{u_1}$ is continuous, and therefore so is $\widehat{u_1}$. Since $\widehat{u_1}(p)= -\lvert{p}\rvert^2 \hat u(p)$ and $\hat u\in L^2(\mathbb{R}^3)$, we must have $\widehat{u_1}(0)=0$. This gives $\langle{u},{W 1}\rangle=0$. ◻ Next we need to classify the point $0$ in the spectrum of $H$. The classification is the same as in [@JK; @jn1]. We recall it for completeness. 
Let $S$ denote the orthogonal projection onto $\ker M_0$ in $\mathscr{K}$, cf. Lemma [Lemma 9](#lem-jn){reference-type="ref" reference="lem-jn"}, and let $S_1$ denote the orthogonal projection on $\ker SM_1S$ in $\mathscr{K}$. By Proposition [\[prop-M0\]](#prop-M0){reference-type="ref" reference="prop-M0"} $\ker M_0$ is finite dimensional. Moreover, since $G_1$ has the constant integral kernel $-\frac{1}{4\pi}$, the definition of $M_1$ (see [\[Mj-eq\]](#Mj-eq){reference-type="eqref" reference="Mj-eq"}) gives $M_1=wG_1w^\ast=-\frac{1}{4\pi}\lvert{w1}\rangle\langle{w1}\rvert$, and hence $$\label{sm1s} SM_1S=-\frac{1}{4\pi}\lvert{Sw1}\rangle\langle{Sw1}\rvert.$$ It follows that $\mathop{\mathrm{rank}}S_1\geq \mathop{\mathrm{rank}}S -1$. Note that $f\in\ker SM_1S$ if and only if $\langle{f},{Sw1}\rangle=0$. The classification is then as follows (cf. [@jn1]):

1. The regular case: $S=0$. In this case $M(\kappa)$ is invertible.

2. The first exceptional case: $\mathop{\mathrm{rank}}S=1$ and $S_1=0$. In this case we have a threshold resonance.

3. The second exceptional case: $\mathop{\mathrm{rank}}S=\mathop{\mathrm{rank}}S_1\geq1$. In this case zero is an eigenvalue of multiplicity $\mathop{\mathrm{rank}}S$.

4. The third exceptional case: $\mathop{\mathrm{rank}}S\geq 2$, $\mathop{\mathrm{rank}}S_1=\mathop{\mathrm{rank}}S-1$. In this case we have a threshold resonance and zero is an eigenvalue with multiplicity $\mathop{\mathrm{rank}}S - 1$.

# Main results {#sec-main}

In this section we briefly state the leading terms in the resolvent expansions around zero in the four cases. We start with the regular case and give the proof for completeness. Note that we also give more precise mapping properties than in [@jn3]. **Theorem 14**. *Assume that zero is a regular point for $H$. Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied for some $\beta > 5$ and let $s>\frac 52$.
Then $$\label{exp-regular} R(\kappa) = F_0 +\kappa F_1 +\mathcal{O}(\kappa^{2})$$ in $\mathscr{B}(-1,s;1,-s)$, where $$\begin{aligned}   F_0 &= (I+G_0 W)^{-1} G_0\in\mathscr{B}(-1,s;1,-s),\quad s>1, \label{RF0} \\ F_1 &= (I+G_0 W)^{-1} G_1(I+ W G_0)^{-1}\in\mathscr{B}(-1,s;1,-s),\quad s>\tfrac32. \label{RF1}\end{aligned}$$* *Proof.* If $0$ is a regular point for $H$, then $\ker M_0 = \{0\}$. In view of Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"} we thus have $\ker(I+G_0 W)= \{0\}$. Since $G_0 W$ is compact in $H^{1,-s}(\mathbb{R}^3)$ for any $\frac 12 < s <\beta -\frac 12$, it follows that $(I+G_0 W)^{-1}$ exists and is bounded on $H^{1,-s}(\mathbb{R}^3)$. By duality, $(I+ W G_0)^{-1}$ is bounded on $H^{-1,s}(\mathbb{R}^3)$ for any $\frac 12 < s <\beta -\frac 12$. Using [\[G0-map\]](#G0-map){reference-type="eqref" reference="G0-map"}, [\[Gj-map\]](#Gj-map){reference-type="eqref" reference="Gj-map"}, and [\[w-map\]](#w-map){reference-type="eqref" reference="w-map"} the results [\[RF0\]](#RF0){reference-type="eqref" reference="RF0"} and [\[RF1\]](#RF1){reference-type="eqref" reference="RF1"} follow. The proof of [\[exp-regular\]](#exp-regular){reference-type="eqref" reference="exp-regular"} follows the line of arguments used in [@jn3 Section 3.4]. 
Since $M_0$ is invertible in $\mathscr{K}$ (see Proposition [\[prop-M0\]](#prop-M0){reference-type="ref" reference="prop-M0"}), the Neumann series in combination with equations [\[M-exp\]](#M-exp){reference-type="eqref" reference="M-exp"} and [\[Mj-eq\]](#Mj-eq){reference-type="eqref" reference="Mj-eq"} gives $$M(\kappa)^{-1} = M_0^{-1} -\kappa M_0^{-1} M_1 M_0^{-1} + \mathcal{O}(\kappa^{2}) = M_0^{-1} -\kappa M_0^{-1} w G_1 w^* M_0^{-1} + \mathcal{O}(\kappa^{2}) .$$ From [\[res-formula-2\]](#res-formula-2){reference-type="eqref" reference="res-formula-2"} we then get the expansion [\[exp-regular\]](#exp-regular){reference-type="eqref" reference="exp-regular"} with $$F_0 = G_0 -G_0w^* M_0^{-1} w G_0, \quad F_1= (I- G_0w^* M_0^{-1} w) G_1 (I-w^* M_0^{-1} w G_0).$$ It remains to note that, similarly to [@jn3 Section 3.4], $$\begin{aligned} I- G_0w^* M_0^{-1} w &= I-G_0w^* (\mathcal{U}+w G_0 w^*)^{-1} w = I-G_0w^* \mathcal{U}(I+w G_0 w^*\mathcal{U})^{-1} w\\ & = I-G_0w^* \mathcal{U}w (I+ G_0 w^*\mathcal{U}w)^{-1} = I-G_0 W(I+G_0 W)^{-1}\\ &= (I+G_0 W)^{-1} .\end{aligned}$$ Note that these equalities hold as operators in $\mathscr{B}(1,-s;1,-s)$, $\frac12<s<\beta-\frac12$. This result together with its adjoint implies equations [\[RF0\]](#RF0){reference-type="eqref" reference="RF0"} and [\[RF1\]](#RF1){reference-type="eqref" reference="RF1"}. ◻ **Remark 15**. The fact that $R(\kappa)$ remains uniformly bounded as $\kappa\to 0$ if zero is a regular point for $H$ was already proved in [@egs Sec. 3], see also [@kk Sec. 3.2]. In the first and third exceptional cases a threshold resonance occurs. We need to define a specific corresponding resonance function. Let $P_0$ denote the orthogonal projection in $L^2(\mathbb{R}^3)$ onto the eigenspace corresponding to eigenvalue zero of $H$. In the first exceptional case we take $P_0=0$. Take $f\in\ker M_0$ with $\lVert{f}\rVert_{\mathscr{K}}=1$ and $\langle{f},{w1}\rangle\neq0$.
Define $$\label{psi-c-def} \psi_c=\frac{\sqrt{4\pi}\langle{f},{w1}\rangle}{\lvert{\langle{f},{w1}\rangle}\rvert^2} \bigl( G_0w^*f-P_0WG_2w^*f \bigr).$$ We need the following lemma. **Lemma 16** ([@JK Lemma 2.6]). *Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied with $\beta>5$. Let $f_j\in\mathscr{K}$ with $\langle{f_j},{w1}\rangle=0$, $j=1,2$. Then $$\langle{w^*f_1},{G_2w^*f_2}\rangle=-\langle{G_0w^*f_1},{G_0w^*f_2}\rangle.$$* The results in the three exceptional cases are stated in the next theorem. **Theorem 17**. *Assume that zero is an exceptional point for $H$. Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied for $\beta>9$. Assume $s>\frac92$. Then $$R(\kappa)=\kappa^{-2}F_{-2}+\kappa^{-1}F_{-1}+\mathcal{O}(1)$$ as $\kappa\to0$ in $\mathscr{B}(-1,s;1,-s)$.* *If zero is an exceptional point of the first kind, we have $$F_{-2}=0,\quad F_{-1}=\lvert{\psi_c}\rangle\langle{\psi_c}\rvert.$$* *If zero is an exceptional point of the second kind, we have $$F_{-2}=P_0,\quad F_{-1}=P_0WG_3WP_0.$$* *If zero is an exceptional point of the third kind, we have $$F_{-2}=P_0,\quad F_{-1}=\lvert{\psi_c}\rangle\langle{\psi_c}\rvert+P_0WG_3WP_0.$$* We do not give details of the proof of this theorem. It uses the results stated in Section [4](#sec-factored){reference-type="ref" reference="sec-factored"} and the technique developed in [@jn; @jn2] and is analogous to the one given in [@jn1 Appendix] and in [@jn3]. ## Case $V\geq 0.$ {#ssec-mg-laplace} The goal of this section is to show that if $V\geq 0$, then zero is a regular point for $H$. We will need slightly stronger conditions on $B$ than those stated in Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"}. **Assumption 18**. 
Let $B$ satisfy Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} and suppose in addition that $$\lvert{\partial_{x_j} B(x)}\rvert\lesssim \langle x \rangle ^{-\beta-1}$$ for all $x\in\mathbb{R}^3$, $j=1,2,3$. We start with the magnetic Laplacian. **Lemma 19**. *Let $V=0$ and let $B$ satisfy Assumption [Assumption 18](#ass-B-2){reference-type="ref" reference="ass-B-2"} for some $\beta >2$. Then $\ker M_0 = \{0\}$.* *Proof.* Owing to Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"} it suffices to show that $\mathop{\mathrm{null}}((P-A)^2)=\{0\}$. So let $u\in H^{1,-\frac12-0}$ be such that $(i\nabla +A)^2u=-\Delta u + Wu =0$. We have $$\label{W-bis} W = 2 i A \cdot\nabla + i\mathop{\mathrm{div}}A +\lvert{A}\rvert^2 .$$ By equations [\[LS-range\]](#LS-range){reference-type="eqref" reference="LS-range"}, [\[A-gauge-defin\]](#A-gauge-defin){reference-type="eqref" reference="A-gauge-defin"} and Assumption [Assumption 18](#ass-B-2){reference-type="ref" reference="ass-B-2"}, for all $\lvert{x}\rvert\geq 1$ we have $$\lvert{\mathop{\mathrm{div}}A(x)}\rvert = \lvert{\mathop{\mathrm{div}}a_s(x)}\rvert = \Bigl\lvert x \cdot \Bigl (\int_1^\infty \nabla \wedge B(tx) \, t\, dt\Bigr) \Bigr\rvert \leq C \langle x \rangle^{-\beta}.$$ Hence from Lemma [Lemma 2](#A-choice){reference-type="ref" reference="A-choice"} and equation [\[W-bis\]](#W-bis){reference-type="eqref" reference="W-bis"} we deduce that $Wu\in H^{0, \beta -\frac 12-0}(\mathbb{R}^3)$ and therefore, by Hölder's inequality, $Wu\in L^{\frac 65}(\mathbb{R}^3)$. Moreover, $Wu \in L^2_{\textrm{loc}}(\mathbb{R}^3)$, so by elliptic regularity we have $u\in H^2_{\textrm{loc}}(\mathbb{R}^3)$.
By Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"}, $u=- G_0Wu$, hence $$u(x) = -\frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{(Wu)(y)}{\lvert{x-y}\rvert}\, dy .$$ In view of the regularity of $u$ we have $Wu \in H^1_{\textrm{loc}}(\mathbb{R}^3)$, and therefore $Wu \in L^6_{\textrm{loc}}(\mathbb{R}^3)$. Thus $$\lvert{\partial_{x_j} u(x)}\rvert \leq \frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{\lvert{(Wu)(y)}\rvert}{\lvert{x-y}\rvert^2}\, dy .$$ Since $Wu\in L^{\frac 65}(\mathbb{R}^3)$, the Hardy-Littlewood-Sobolev inequality, see e.g. [@LL Section 4.3], then implies, by duality, that $\lvert{\nabla u}\rvert\in L^2(\mathbb{R}^3)$, and therefore $\lvert{(i \nabla +A) u}\rvert \in L^2(\mathbb{R}^3)$. Now let $\chi_n\colon \mathbb{R}^3\to\mathbb{R}$, $2\leq n\in\mathbb{N}$, be given by $$\chi_n(x)= 1 \quad \text{if $\lvert{x}\rvert \leq 1$}, \qquad \chi_n(x)= \Bigl(1- \frac{\log \lvert{x}\rvert}{\log n}\Bigr)_+ \quad \text{otherwise}.$$ Then $$\begin{aligned} \label{zero-eq} 0&= \int_{\mathbb{R}^3} \chi_n \bar u\, (i \nabla +A)^2 u \, dx \notag\\ &= - \int_{\mathbb{R}^3} \chi_n \lvert{(i \nabla +A) u}\rvert^2 \, dx - \int_{\mathbb{R}^3}\bar u\, \nabla \chi_n \cdot (i \nabla +A) u \, dx.\end{aligned}$$ A short calculation gives $$\nabla \chi_n(x)= -\frac{x}{\lvert{x}\rvert^2 \log n}\ \quad \text{if $1\leq \lvert{x}\rvert \leq n$}, \qquad \nabla \chi_n(x)= 0 \quad \text{otherwise}.$$ Hence, for any $0<\varepsilon\leq \frac 12$, we get $$\begin{aligned} \Bigl\lvert\int_{\mathbb{R}^3}\bar u\, \nabla \chi_n \cdot (i \nabla +A) u \, dx\Bigr\rvert &\leq \frac{1}{\log n}\int_{1\leq \lvert{x}\rvert\leq n} \lvert{u}\rvert \langle x\rangle^{-\frac 12-\varepsilon} \frac{\langle x\rangle^{\frac 12+\varepsilon}}{\lvert{x}\rvert} \lvert{(i \nabla +A) u }\rvert\, dx\\[4pt] & \leq \frac{1}{\log n}\, \lVert{u}\rVert_{L^{2,-\frac 12-\varepsilon}} \lVert{(i \nabla +A) u}\rVert_{L^2} \ \to \ 0 \quad \text{as $n\to \infty$} .\end{aligned}$$ Since $\chi_n \to 1$ in
$L^\infty_{\textrm{loc}}(\mathbb{R}^3)$, the last estimate in combination with equation [\[zero-eq\]](#zero-eq){reference-type="eqref" reference="zero-eq"} gives $$\label{diamag-eq} (i \nabla +A) u = 0.$$ However, [\[diamag-eq\]](#diamag-eq){reference-type="eqref" reference="diamag-eq"} implies $\lvert{u}\rvert = \textrm{const}$, which in view of $u\in H^{1,-\frac12-0}$ means that $u=0$, see also [@si]. ◻ **Corollary 20**. *Suppose that $V$ and $B$ satisfy Assumptions [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} and [Assumption 18](#ass-B-2){reference-type="ref" reference="ass-B-2"}, respectively, for some $\beta >2$. If $V\geq 0$, then $\ker M_0 = \{0\}$.* *Proof.* Let $u\in \mathop{\mathrm{null}}\bigl((P-A)^2 +V\bigr)$. Following the arguments of the proof of Lemma [Lemma 19](#lem-mg-laplacian){reference-type="ref" reference="lem-mg-laplacian"} we deduce that $u$ must satisfy $$\int_{\mathbb{R}^3} \lvert{(i \nabla +A) u}\rvert^2 \, dx + \int_{\mathbb{R}^3} V\, \lvert{u}\rvert^2\, dx =0.$$ Note that $\sqrt{V} u\in L^2(\mathbb{R}^3)$, by hypothesis. Since $V\geq 0$, we conclude, as above, that $(i \nabla +A) u = 0$ and therefore $u=0$. ◻

# The Pauli operator {#sec-pauli}

We assume that $B\colon\mathbb{R}^3\to \mathbb{R}^3$ and $V\colon\mathbb{R}^3\to \mathbb{R}$ satisfy Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"}. In what follows we denote by $\mathbf{1}_n$ the $n\times n$ identity matrix.
We consider the Pauli operator in $L^2(\mathbb{R}^3;\mathbb{C}^2)$ given by $$%\label{pauli-def} H_P= \bigl(\sigma\cdot (P-A)\bigr)^2 + V\mathbf{1}_2 = (P-A)^2 \mathbf{1}_2 + \sigma\cdot B + V \mathbf{1}_2,$$ where $\sigma=(\sigma_1,\sigma_2,\sigma_3)$ is the set of Pauli matrices; $$\label{sigma-j} \sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \sigma_2 = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad \sigma_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$ and where $A$ is given by Lemma [Lemma 2](#A-choice){reference-type="ref" reference="A-choice"}. Here we adopt the usual notation $$\label{sigma-dot-B} \sigma\cdot B = \sum_{j=1}^3 \sigma_j B_j .$$ Hence $$\label{pauli-def-2} H_P= -\Delta\mathbf{1}_2 +W_P,$$ where $$\begin{aligned} \label{W-pauli} W_P &= W_1+W_2,\\ W_1&=\bigl(-P\cdot A -A\cdot P +\lvert{A}\rvert^2\bigr) \mathbf{1}_2 + V\mathbf{1}_2, \label{W1-def}\\ W_2&= \sigma\cdot B.\end{aligned}$$ Our aim is to factor the perturbation $W_P$ in a way similar to the scalar case. To this end note that $W_1=W\mathbf{1}_2$ with $W$ defined in [\[W-def\]](#W-def){reference-type="eqref" reference="W-def"}. We take as the intermediate space $$\mathscr{K}_1=\mathscr{K}\oplus\mathscr{K},$$ where the space $\mathscr{K}$ is defined in [\[K-def\]](#K-def){reference-type="eqref" reference="K-def"}. Thus the factorization of $W$ given in [\[factorisation\]](#factorisation){reference-type="eqref" reference="factorisation"} immediately gives a factorization of $W_1$. We can write it as $W_1={w_1}^{\!\!\ast\,}\mathcal{U}_1w_1$, where $w_1=w\oplus w$ and $\mathcal{U}_1=\mathcal{U}\oplus\mathcal{U}$. To factor $W_2$ we take as the intermediate space $$\mathscr{K}_2=L^2(\mathbb{R}^3;\mathbb{C}^2)\oplus L^2(\mathbb{R}^3;\mathbb{C}^2)\oplus L^2(\mathbb{R}^3;\mathbb{C}^2).$$ Then let $$U^B_j(x) = \begin{cases} -1, &\text{if $B_j(x) <0$}, \\ \phantom{-}1, & \text{otherwise}, \end{cases}$$ for $j=1,2,3$. 
Define the block-diagonal matrix operator $$\mathcal{U}_2=\mathop{\mathrm{diag}} \begin{bmatrix} U^B_1\sigma_1 & U^B_2\sigma_2 & U^B_3\sigma_3 \end{bmatrix}.$$ Let $b_j(x)=\lvert{B_j(x)}\rvert^{\frac12}$ and then define the block-matrix operator $$w_2=\begin{bmatrix} b_1\mathbf{1}_2 & b_2\mathbf{1}_2 & b_3\mathbf{1}_2 \end{bmatrix}^T.$$ Then we have the factorization $W_2={w_2}^{\!\!\ast\,}\mathcal{U}_2w_2$. We can now put the two factorizations together. The intermediate space is $$\mathscr{K}_P=\mathscr{K}_1\oplus\mathscr{K}_2,$$ and the factorization $$\label{W-factorization-pauli} W_P={w_P}^{\!\!\ast\,}\mathcal{U}_Pw_P$$ is obtained by taking $$w_P=w_1\oplus w_2\quad\text{and}\quad \mathcal{U}_P=\mathcal{U}_1\oplus\mathcal{U}_2.$$ In the sequel, given a self-adjoint operator $T$ in $L^2(\mathbb{R}^3)$ we will use, with a slight abuse of notation, the same symbol $T$ for its matrix-valued counterpart in $L^2(\mathbb{R}^3;\mathbb{C}^n)$ acting as $T\, \mathbf{1}_n$. Hence the operators $G_j$ introduced in Section [3](#sec-free-exp){reference-type="ref" reference="sec-free-exp"} have the properties stated in Lemma [Lemma 5](#free-exp){reference-type="ref" reference="free-exp"} in the space of bounded operators from $H^{-1,s}(\mathbb{R}^3;\mathbb{C}^2)$ to $H^{1,-s'}(\mathbb{R}^3;\mathbb{C}^2)$, which we continue to denote by $\mathscr{B}(-1,s;1,-s')$. Accordingly, by $\langle{\cdot},{\cdot}\rangle$ we denote the inner product in $L^2(\mathbb{R}^3; \mathbb{C}^2)$. With equation [\[W-factorization-pauli\]](#W-factorization-pauli){reference-type="eqref" reference="W-factorization-pauli"} at hand we can thus write the resolvent of the Pauli operator $$R_P(\kappa) = (H_P+\kappa^2)^{-1}$$ in the factorized form as in [\[res-formula-2\]](#res-formula-2){reference-type="eqref" reference="res-formula-2"}.
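As in the scalar case, the factorization $W_2={w_2}^{\!\!\ast\,}\mathcal{U}_2w_2$ can be checked directly: since $U^B_j\lvert{B_j}\rvert=B_j$, $${w_2}^{\!\!\ast\,}\mathcal{U}_2w_2 =\sum_{j=1}^3 b_jU^B_j\sigma_jb_j =\sum_{j=1}^3 U^B_j\lvert{B_j}\rvert\,\sigma_j =\sum_{j=1}^3 B_j\sigma_j =\sigma\cdot B,$$ in accordance with [\[sigma-dot-B\]](#sigma-dot-B){reference-type="eqref" reference="sigma-dot-B"}.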
By carrying over the analysis of Section [4](#sec-factored){reference-type="ref" reference="sec-factored"} to the setting of operators defined on $L^2(\mathbb{R}^3;\mathbb{C}^2)$ we obtain the 'matrix versions' of Lemmas [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"}, [Lemma 13](#lem-orthogonal){reference-type="ref" reference="lem-orthogonal"} and of Proposition [\[prop-M0\]](#prop-M0){reference-type="ref" reference="prop-M0"}. The classification of the point zero given in Section [4](#sec-factored){reference-type="ref" reference="sec-factored"} carries over unchanged to the Pauli operators. As in Section [5](#sec-main){reference-type="ref" reference="sec-main"} we thus arrive at the following results. **Theorem 21**. *Assume that zero is a regular point for $H_P$. Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied for some $\beta > 5$ and let $s>\frac 52$. Then $$\label{exp-regular-pauli} R_P(\kappa) = F_0 +\kappa F_1 +\mathcal{O}(\kappa^{2})$$ in $\mathscr{B}(-1,s;1,-s)$, where $$\begin{aligned}   F_0 &= (I+G_0 W_P)^{-1} G_0\in\mathscr{B}(-1,s;1,-s),\quad s>1, \label{RF0-pauli} \\ F_1 &= (I+G_0 W_P)^{-1} G_1(I+ W_P G_0)^{-1}\in\mathscr{B}(-1,s;1,-s),\quad s>\tfrac32. \label{RF1-pauli}\end{aligned}$$* For later purposes we state also a simplified version of [\[exp-regular-pauli\]](#exp-regular-pauli){reference-type="eqref" reference="exp-regular-pauli"}. **Corollary 22**. *Assume that zero is a regular point for $H_P$. Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied for some $\beta > 3$. Assume $s,s'>\tfrac 12$ and $s+s' \geq 2$. 
Then $\lim_{\kappa\to0}R_P(\kappa)$ exists in $\mathscr{B}(-1,s;1,-s')$ and $$R_P(0) = (I+G_0 W_P)^{-1} G_0\in\mathscr{B}(-1,s;1,-s').$$* *Proof.* Note that it suffices to prove the result for $s,s'$ small and satisfying the conditions in the corollary, due to the embedding property [\[embed\]](#embed){reference-type="eqref" reference="embed"}. The claim then follows from the mapping properties of $G_0$, see equation [\[G0-map\]](#G0-map){reference-type="eqref" reference="G0-map"}, and from the fact that $(I+G_0 W_P)^{-1}$ exists and is bounded on $H^{1,-s}(\mathbb{R}^3;\mathbb{C}^2)$ for any $\frac 12 < s <\beta -\frac 12$, see the proof of Theorem [Theorem 14](#thm-regular){reference-type="ref" reference="thm-regular"}. ◻ In the exceptional case we continue to denote by $\psi_c$ the specific resonance function defined in equation [\[psi-c-def\]](#psi-c-def){reference-type="eqref" reference="psi-c-def"}. We then have **Theorem 23**. *Assume that zero is an exceptional point for $H_P$. Let Assumption [Assumption 1](#ass-BV){reference-type="ref" reference="ass-BV"} be satisfied for $\beta>9$. Assume $s>\frac92$. Then $$R_P(\kappa)=\kappa^{-2}F_{-2}+\kappa^{-1}F_{-1}+\mathcal{O}(1)$$ as $\kappa\to0$ in $\mathscr{B}(-1,s;1,-s)$.* *If zero is an exceptional point of the first kind, we have $$F_{-2}=0,\quad F_{-1}=\lvert{\psi_c}\rangle\langle{\psi_c}\rvert.$$* *If zero is an exceptional point of the second kind, we have $$F_{-2}=P_0,\quad F_{-1}=P_0W_PG_3W_PP_0.$$* *If zero is an exceptional point of the third kind, we have $$F_{-2}=P_0,\quad F_{-1}=\lvert{\psi_c}\rangle\langle{\psi_c}\rvert+P_0W_PG_3W_PP_0.$$* Recall that the classification of exceptional points is determined by the values $\mathop{\mathrm{rank}}S$ and $\mathop{\mathrm{rank}}S_1$, see the end of Section [4](#sec-factored){reference-type="ref" reference="sec-factored"}. ## The case $V=0$. 
In this section we will analyze in more detail the purely magnetic Pauli operator $$\label{pauli-def} H_P= \bigl(\sigma\cdot (P-A)\bigr)^2 = (P-A)^2 +\sigma\cdot B.$$ Notice that in view of the assumptions on $A$, the operator $\bigl(\sigma\cdot (P-A)\bigr)^2$ is self-adjoint on $H^2(\mathbb{R}^3;\mathbb{C}^2)$. It is well known that, contrary to purely magnetic Schrödinger operators, zero might be an exceptional point of $\bigl(\sigma\cdot (P-A)\bigr)^2$, see [@ly; @elt]. Our next result shows that, under suitable conditions on $B$, in such a case zero must be an eigenvalue of $\bigl(\sigma\cdot (P-A)\bigr)^2$ and that there is no threshold resonance. Our proof is based on an analogous, and more general, result for the Dirac operator obtained in [@fl]. **Lemma 24**. *Let $V=0$ and let $B$ satisfy Assumption [Assumption 18](#ass-B-2){reference-type="ref" reference="ass-B-2"} for some $\beta >2$. Then $\mathop{\mathrm{rank}}S=\mathop{\mathrm{rank}}S_1$.* Recall that $S$ and $S_1$ are orthogonal projections onto $\ker M_0$ and $\ker SM_1S$ in $\mathscr{K}$, respectively. *Proof.* It suffices to consider the case $S\neq 0$. Let $f\in \ker M_0$. By Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"}(i), $u=-G_0 {w_P}^{\!\ast\,} f\in \mathop{\mathrm{null}}(H_P)$. Our goal is to show that $u\in L^2(\mathbb{R}^3; \mathbb{C}^2)$. Since $u\in \ker (I+G_0 W_P)$, see Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"}(i), we have $$u(x) = -\frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{(W_Pu)(y)}{\lvert{x-y}\rvert}\, dy .$$ The Hardy-Littlewood-Sobolev inequality then implies $u\in L^6(\mathbb{R}^3; \mathbb{C}^2)$. Moreover, by a straightforward modification of the proof of Lemma [Lemma 19](#lem-mg-laplacian){reference-type="ref" reference="lem-mg-laplacian"} we deduce that $u$ must satisfy $$\sigma\cdot (P-A) u = 0.$$ From [@fl Theorem 2.1] we thus conclude that $u\in L^2(\mathbb{R}^3; \mathbb{C}^2)$, as desired.
Now Lemma [Lemma 13](#lem-orthogonal){reference-type="ref" reference="lem-orthogonal"} gives $$0= \langle u, W_P 1 \rangle= - \langle G_0{w_P}^{\!\ast\,} f, W_P 1 \rangle = - \langle \, \mathcal{U}w_P G_0 {w_P}^{\!\ast\,} f, w_P 1 \rangle.$$ However, since $f\in \ker M_0$ we have $\mathcal{U}\, w_P G_0 {w_P}^{\!\ast\,} f= -f$. Hence $\langle f, w_P 1 \rangle=0$, which implies $f\in \ker S M_1 S$, cf. equation [\[sm1s\]](#sm1s){reference-type="eqref" reference="sm1s"}. Consequently, $\ker S M_1 S=\ker M_0$. ◻ The question of existence of zero modes (or zero energy eigenfunctions) of Pauli operators in dimension three is of current interest, see [@be; @bel; @bvb; @fl; @fl2]. We will explore the connection to the results obtained here. To do so we need to recall the set-up from [@be; @bel; @bvb] in some detail, in order to define the quantity $\delta(B)$, see [\[delta-B\]](#delta-B){reference-type="eqref" reference="delta-B"} below. Let $B$ satisfy Assumption [Assumption 18](#ass-B-2){reference-type="ref" reference="ass-B-2"}. We consider the operators $$H_P=(P-A)^2+\sigma\cdot B\quad\text{and}\quad \widetilde{H}_P=(P-A)^2+\sigma\cdot B+\lvert{B}\rvert=H_P+\lvert{B}\rvert.$$ They are obtained as the Friedrichs extensions of the corresponding forms with common form domain $\mathcal{Q}(H_P)=\mathcal{Q}(\widetilde{H}_P) =H^{1,0}(\mathbb{R}^3;\mathbb{C}^2)$. In the quadratic form sense we have $\widetilde{H}_P\geq (P-A)^2$, since $\sigma\cdot B+\lvert{B}\rvert\geq0$ as quadratic forms. It follows from Lemma [Lemma 19](#lem-mg-laplacian){reference-type="ref" reference="lem-mg-laplacian"} that zero is a regular point of $(P-A)^2$. As a consequence (cf. Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"}) $\mathop{\mathrm{ran}}\widetilde{H}_P$ is dense in $\mathcal{H}$.
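The form inequality $\sigma\cdot B+\lvert{B}\rvert\geq0$ invoked above is a pointwise matrix fact. The anticommutation relations $\sigma_j\sigma_k+\sigma_k\sigma_j=2\delta_{jk}\mathbf{1}_2$ give $$(\sigma\cdot B)^2=\sum_{j,k=1}^3B_jB_k\,\sigma_j\sigma_k=\lvert{B}\rvert^2\,\mathbf{1}_2,$$ so at each $x$ the Hermitian matrix $\sigma\cdot B(x)$ has eigenvalues $\pm\lvert{B(x)}\rvert$, and $\sigma\cdot B+\lvert{B}\rvert\geq0$ follows.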
Let $\widetilde{\mathcal{H}}$ be the completion of $\mathcal{Q}(H_P)$ under the norm $$\lVert{u}\rVert_{\widetilde{\mathcal{H}}}^2=\langle{u},{\widetilde{H}_Pu}\rangle.$$ The operators $\widetilde{H}_P^{\alpha}$ are defined for $\alpha=\pm\frac12$ and $\alpha=-1$ via the functional calculus, as self-adjoint operators on $\mathcal{H}$. In particular, the operator $$\widetilde{H}_P^{-\frac12}\colon\mathop{\mathrm{ran}}{\widetilde{H}_P^{\frac12}} \to \widetilde{\mathcal{H}}$$ preserves norms. Since $\mathop{\mathrm{ran}}\widetilde{H}_P^{-\frac12}=\mathop{\mathrm{dom}}\widetilde{H}_P^{\frac12}=\mathcal{Q}(H_P)$ is dense in $\widetilde{\mathcal{H}}$, the operator $\widetilde{H}_P^{-\frac12}$ extends to a unitary operator $\mathsf{U}\colon\mathcal{H}\to\widetilde{\mathcal{H}}$. Due to the assumption on $B$ the multiplication by $\lvert{B}\rvert^{\frac12}$ is bounded from $\widetilde{\mathcal{H}}$ to $\mathcal{H}$. Thus we can define $\mathcal{S}=\lvert{B}\rvert^{\frac12}\mathsf{U}\colon \mathcal{H}\to\mathcal{H}$, with the property $$\mathcal{S}u=\lvert{B}\rvert^{\frac12}\widetilde{H}_P^{-\frac12}u\quad \text{for $u\in\mathop{\mathrm{ran}}\widetilde{H}_P^{\frac12}$}.$$ Then we define (see [@bvb Equation (1)]) $$\label{delta-B} \delta(B)=\inf\{\lVert{(I-\mathcal{S}^*\mathcal{S})f}\rVert\mid \lVert{f}\rVert=1,\; \mathsf{U}f\in\mathcal{H}\}.$$ We recall some of the recent results on zero modes for $H_P$. Assuming that $\lvert{B}\rvert\in L^{3/2}(\mathbb{R}^3)$, Balinsky, Evans and Lewis proved in [@bel] that if the operator $H_P$ has a zero eigenfunction, then $\delta(B) =0$. Later, Benguria and Van den Bosch proved the converse implication under the additional condition that $B$ satisfy equation [\[B-decay-cond\]](#B-decay-cond){reference-type="eqref" reference="B-decay-cond"} for some $\beta>1$, cf. [@bvb Theorem 1.1]. Finally, in [@fl Theorem 2.2], Frank and Loss showed that the additional decay condition on $B$ introduced in [@bvb] is not necessary. 
It is illustrative to verify that, under somewhat stronger assumptions on $B$, the identity $\delta(B) =0$ is equivalent to zero being an exceptional point for $H_P$. **Proposition 25**. *Let $B$ satisfy Assumption [Assumption 18](#ass-B-2){reference-type="ref" reference="ass-B-2"} for some $\beta >3$. Then $\delta(B) =0$ if and only if zero is an exceptional point for $H_P$. In the affirmative case the exceptional point is of the second kind.* *Proof.* Using Corollary [Corollary 22](#cor-regular-pauli){reference-type="ref" reference="cor-regular-pauli"} and $\lvert{B(x)}\rvert^{\frac12}\lesssim \langle{x}\rangle^{-\beta/2}$ with $\beta>3$ we get that $$\lim_{\eta\downarrow0}\lvert{B}\rvert^{\frac12} (\widetilde{H}_P+\eta I)^{-1}\lvert{B}\rvert^{\frac12}=\lvert{B}\rvert^{\frac12} \widetilde{H}_P^{-1}\lvert{B}\rvert^{\frac12}=\mathcal{S}\mathcal{S}^*,$$ with convergence in operator norm. We note that as a consequence the operator $\mathcal{S}\mathcal{S}^*$ is compact. But then $\mathcal{S}^*\mathcal{S}$ is also compact. Assume that zero is an exceptional point for $H_P$. Then due to Lemmas [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"} and [Lemma 24](#lem-no-resonance){reference-type="ref" reference="lem-no-resonance"}, there exists $u\in L^2(\mathbb{R}^3;\mathbb{C}^2)\cap H^{1, -s}(\mathbb{R}^3;\mathbb{C}^2)$ with $\tfrac 12 < s < \beta -\tfrac 12$, such that $(I+ G_0 W_P)u=0$, where $W_P= W_1 +\sigma\cdot B$, cf. [\[W1-def\]](#W1-def){reference-type="eqref" reference="W1-def"}. Let $\widetilde{W} \coloneqq W_P +\lvert{B}\rvert$. Hence $$(I+ G_0 \widetilde{W})u= G_0 \lvert{B}\rvert u .$$ Since $(I+G_0 \widetilde W )^{-1}$ exists and is bounded on $H^{1,-s}(\mathbb{R}^3;\mathbb{C}^2)$ for any $\tfrac 12 < s < \beta -\tfrac 12$, it follows that $$\label{eq-u} u = (I+ G_0 \widetilde W )^{-1} G_0 \lvert{B}\rvert u .$$ Let $f =\lvert{B}\rvert^{\tfrac 12} u$. 
Then $f\in L^2(\mathbb{R}^3;\mathbb{C}^2)$, and, in view of Corollary [Corollary 22](#cor-regular-pauli){reference-type="ref" reference="cor-regular-pauli"}, $$\begin{aligned} \mathcal{S}\mathcal{S}^* f &= \lvert{B}\rvert^{\tfrac 12} \bigl[ \bigl(\sigma\cdot (P-A)\bigr)^2 + \lvert{B}\rvert\bigr]^{-1} \lvert{B}\rvert^{\tfrac 12} f = \lvert{B}\rvert^{\tfrac 12} (I+ G_0 \widetilde W )^{-1} G_0 \lvert{B}\rvert u \\ &= \lvert{B}\rvert^{\tfrac 12} u = f.\end{aligned}$$ This shows that $\mathcal{S}\mathcal{S}^*$ has eigenvalue $1$, and therefore so does $\mathcal{S}^* \mathcal{S}$. Hence $\delta(B)=0$, see [\[delta-B\]](#delta-B){reference-type="eqref" reference="delta-B"}. Conversely, assume that $\delta(B)=0$. Then we can find a sequence $g_n\in\mathcal{H}$ with $\lVert{g_n}\rVert=1$ such that $$\lim_{n\to\infty}\lVert{(I-\mathcal{S}^*\mathcal{S})g_n}\rVert=0.$$ It follows from Weyl's criterion that $1$ is in the spectrum of $\mathcal{S}^*\mathcal{S}$. Since this operator is compact, $1$ is an eigenvalue of $\mathcal{S}^*\mathcal{S}$, hence also an eigenvalue of $\mathcal{S}\mathcal{S}^*$. Thus there exists $f\in L^2(\mathbb{R}^3;\mathbb{C}^2)$, $\lVert{f}\rVert=1$, such that $$\label{f-ef} \lvert{B}\rvert^{\tfrac 12} \widetilde{H}_P^{-1} \lvert{B}\rvert^{\tfrac 12} f = f.$$ Since $\lvert{B}\rvert^{\tfrac 12} f\in H^{-1, \frac 32 +0}(\mathbb{R}^3;\mathbb{C}^2)$, it follows from Corollary [Corollary 22](#cor-regular-pauli){reference-type="ref" reference="cor-regular-pauli"} that $$u \coloneqq \widetilde{H}_P^{-1} \lvert{B}\rvert^{\tfrac 12} f = (I+ G_0 \widetilde{W})^{-1} G_0 \lvert{B}\rvert^{\tfrac 12} f \in H^{1, -\frac 12 -0}(\mathbb{R}^3;\mathbb{C}^2).$$ Moreover, from [\[f-ef\]](#f-ef){reference-type="eqref" reference="f-ef"} we deduce the identity $$\begin{aligned} \lvert{B}\rvert^{\tfrac 12} f &= \lvert{B}\rvert \widetilde{H}_P^{-1} \lvert{B}\rvert^{\tfrac 12} f = \lvert{B}\rvert^{\tfrac 12} f - H_P u, \end{aligned}$$ which implies $H_P u =0$. 
This shows that $u\in \mathop{\mathrm{null}}H_P$ and therefore $\ker M_0 \neq \{0\}$, cf. Lemma [Lemma 11](#lem-ker-M0){reference-type="ref" reference="lem-ker-M0"}. The last statement follows from Lemma [Lemma 24](#lem-no-resonance){reference-type="ref" reference="lem-no-resonance"}. ◻ **Remark 26**. Sharp conditions for the nonexistence of zero energy eigenfunctions of $\sigma\cdot (P-A)$ in terms of $L^p$-norms of $B$ and $A$ were recently established in [@fl; @fl2]. # General perturbations {#sec-general} The set-up used here applies to a much larger class of perturbations of $-\Delta$ than those defined in [\[W-def\]](#W-def){reference-type="eqref" reference="W-def"}, and leads to resolvent expansions like those obtained in previous sections. The idea is to combine the factorization scheme in [@jn] with some of the estimates from [@JK], extending what was done above. See also the comment at the bottom of page 588 in [@JK]. In the sequel we use the notation $\mathcal{H}=L^2(\mathbb{R}^3)$ and $H_0=-\Delta$, with domain $H^{2,0}$. **Assumption 27**. Let $\mathsf{W}$ be a symmetric $H_0$-form-compact operator on $\mathcal{H}$. Let $\beta>0$. Assume that $\mathsf{W}$ defines a compact operator in $\mathscr{B}(1,-\beta/2;-1,\beta/2)$, also denoted by $\mathsf{W}$. We have the following result. The proof is a variant of the proof of [@IJ Proposition A.1]. **Lemma 28**. *Let $\mathsf{W}$ satisfy Assumption [Assumption 27](#gen-V){reference-type="ref" reference="gen-V"} for some $\beta>0$. Let $\mathscr{K}=\ell^2(\mathbb{N})$ if $\mathop{\mathrm{rank}}\mathsf{W}=\infty$. Otherwise, let $\mathscr{K}=\mathbb{C}^{\mathop{\mathrm{rank}}\mathsf{W}}$. Then there exist a bounded operator $\mathsf{w}\colon H^{1,-\beta/2}\to\mathscr{K}$ and a self-adjoint and unitary operator $\mathsf{U}$ on $\mathscr{K}$ such that $$\label{gen-fac} \mathsf{W}=\mathsf{w}^*\mathsf{U}\mathsf{w}.$$* *Proof.* We assume $\mathsf{W}\neq 0$.
Define $$\widetilde{\mathsf{W}}=\langle{x}\rangle^{\beta/2}\langle{P}\rangle^{-1}\mathsf{W}\langle{P}\rangle^{-1}\langle{x}\rangle^{\beta/2}.$$ By assumption this operator is compact and self-adjoint on $\mathcal{H}$. Let $N=\mathop{\mathrm{rank}}\mathsf{W}$, and let $\{u_j\mid j=1,2,\ldots,N\}$ be an orthonormal sequence in $\mathcal{H}$ such that $$\widetilde{\mathsf{W}}=\sum_{j=1}^{N}\lambda_j\lvert{u_j}\rangle\langle{u_j}\rvert.$$ Here $\{\lambda_j\}$ denotes the *non-zero* eigenvalues of $\widetilde{\mathsf{W}}$, repeated with multiplicity. Define $\mathsf{U}$ on $\mathscr{K}$ as a matrix by $$(\mathsf{U})_{mn}=\begin{cases} \mathop{\mathrm{sgn}}(\lambda_m), & \text{for $m=n$, $1\leq m\leq N$},\\ 0, & \text{otherwise}. \end{cases}$$ Then $\mathsf{U}$ is self-adjoint and unitary. For $j=1,2,\ldots,N$, define $$\eta_j=\langle{P}\rangle\langle{x}\rangle^{-\beta/2}u_j.$$ Then define $\mathsf{w}\colon H^{1,-\beta/2}\to\mathscr{K}$ by $$(\mathsf{w}f)_j=\begin{cases} \lvert{\lambda_j}\rvert^{1/2}\langle{\eta_j},{f}\rangle, & j=1,2,\ldots, N,\\ 0, & j>N, \end{cases}$$ for $f\in H^{1,-\beta/2}$. With these definitions the factorization [\[gen-fac\]](#gen-fac){reference-type="eqref" reference="gen-fac"} follows. ◻ **Remark 29**. In explicit cases, e.g. the magnetic perturbation $W$ in [\[W-def\]](#W-def){reference-type="eqref" reference="W-def"}, there are other factorizations that are 'natural'. The same holds for a multiplicative perturbation. On the other hand, in the case of a self-adjoint finite rank perturbation Lemma [Lemma 28](#lem-gen-fac){reference-type="ref" reference="lem-gen-fac"} gives a natural factorization. In any case, due to the uniqueness of coefficients in an asymptotic expansion, the choice of factorization does not matter. However, it may be difficult to see explicitly in concrete examples that two coefficient expressions are equal. **Remark 30**. Recall that factored perturbations are additive in the following sense.
Let $\mathsf{W}_j$, $j=1,2$, be perturbations satisfying Assumption [Assumption 27](#gen-V){reference-type="ref" reference="gen-V"}. Let $\mathsf{W}_j=\mathsf{w}_j^*\mathsf{U}_j\mathsf{w}_j$, $j=1,2$, be factorizations with intermediate Hilbert spaces $\mathscr{K}_j$, $j=1,2$, and with the mapping properties stated in Lemma [Lemma 28](#lem-gen-fac){reference-type="ref" reference="lem-gen-fac"}. Let $\mathsf{W}=\mathsf{W}_1+\mathsf{W}_2$ and $\mathscr{K}=\mathscr{K}_1\oplus\mathscr{K}_2$. Define $$\mathsf{w}=\begin{bmatrix} \mathsf{w}_1 \\ \mathsf{w}_2 \end{bmatrix} \quad\text{and}\quad \mathsf{U}=\begin{bmatrix} \mathsf{U}_1 & 0 \\ 0 & \mathsf{U}_2 \end{bmatrix}.$$ Here we use matrix notation for operators on $\mathscr{K}=\mathscr{K}_1\oplus\mathscr{K}_2$. Then it is straightforward to verify that $\mathsf{W}$ satisfies Assumption [Assumption 27](#gen-V){reference-type="ref" reference="gen-V"} and that we have the factorization $\mathsf{W}=\mathsf{w}^*\mathsf{U}\mathsf{w}$, with $\mathsf{w}$ and $\mathsf{U}$ having the mapping properties stated in Lemma [Lemma 28](#lem-gen-fac){reference-type="ref" reference="lem-gen-fac"}. #### Acknowledgments. {#acknowledgments. .unnumbered} AJ was partially supported by grant 8021--00084B from Independent Research Fund Denmark | Natural Sciences. 99 A. Balinsky and W. D. Evans: On the zero modes of Pauli operators. *J. Funct. Anal.* **179** (2001) 120--135. A. Balinsky, W. D. Evans and R. T. Lewis: Sobolev, Hardy and CLR inequalities associated with Pauli operators in $\mathbb{R}^3$. *J. Phys. A* **34** (2001) L19. R. Benguria and H. Van Den Bosch: A criterion for the existence of zero modes for the Pauli operator with fastly decaying fields. *J. Math. Phys.* **56** (2015) 052104. M. B. Erdoğan, M. Goldberg and W. Schlag: Strichartz and smoothing estimates for Schrödinger operators with large magnetic potentials in $\mathbb{R}^3$. *J. Eur. Math. Soc.* **10** (2008) 507--531. R. L. Frank and M.
Loss: Which magnetic fields support a zero mode? *J. Reine Angew. Math.* **788** (2022) 1--36. R. L. Frank and M. Loss: A sharp criterion for zero modes of the Dirac equation. *J. Eur. Math. Soc.*, to appear. D. M. Elton: The local structure of zero mode producing magnetic potentials. *Comm. Math. Phys.* **229** (2002) 121--139. K. Ito and A. Jensen: Resolvent expansion for the Schrödinger operator on a graph with infinite rays. *J. Math. Anal. Appl.* **464** (2018) 616--661. A. Jensen and T. Kato: Spectral properties of Schrödinger operators and time-decay of the wave functions. *Duke Math. J.* **46** (1979) 583--611. A. Jensen and G. Nenciu: A unified approach to resolvent expansions at thresholds. *Rev. Math. Phys.* **13** (2001) 717--754. A. Jensen and G. Nenciu: Erratum: "A unified approach to resolvent expansions at thresholds". *Rev. Math. Phys.* **16** (2004) 675--677. A. Jensen and G. Nenciu: Schrödinger operators on the half line: Resolvent expansions and the Fermi golden rule at thresholds. *Proc. Indian Acad. Sci. (Math. Sci.)* **116** (2006) 375--392. A. Jensen and G. Nenciu: The Fermi Golden Rule and its form at thresholds in odd dimensions. *Comm. Math. Phys.* **261** (2006) 693--727. A. I. Komech and E. A. Kopylova: Dispersive decay for the magnetic Schrödinger equation. *J. Funct. Anal.* **264** (2013) 735--751. H. Kovařík: Resolvent expansion and time decay of the wave functions for two-dimensional magnetic Schrödinger operators. *Comm. Math. Phys.* **337** (2015) 681--726. H. Kovařík: Spectral properties and time decay of the wave functions of Pauli and Dirac operators in dimension two. *Adv. Math.* **398** (2022), 108244. E. H. Lieb and M. Loss: *Analysis. Second edition*. Graduate Studies in Mathematics, 14. American Mathematical Society, Providence, RI, 2001. xxii+346 pp. M. Loss and H.-T. Yau: Stability of Coulomb systems with magnetic fields. III. Zero energy bound states of the Pauli operator. *Comm. Math. Phys.* **104** (1986) 282--290. M.
Murata: Asymptotic expansions in time for solutions of Schrödinger-type equations. *J. Funct. Anal.* **49** (1982), 10--56. B. Simon: Kato's inequality and the comparison of semigroups. *J. Funct. Anal.* **32** (1979), 97--101. D. R. Yafaev: Scattering by magnetic fields. *St. Petersburg Math. J.* **17** (2006) 875--895. [^1]: Department of Mathematical Sciences, Aalborg University, Skjernvej 4A, DK-9220 Aalborg Ø, Denmark, `matarne@math.aau.dk` [^2]: DICATAM, Sezione di Matematica, Università degli studi di Brescia, Via Branze 38, Brescia, 25123, Italy, `hynek.kovarik@unibs.it`
--- abstract: | We prove the well posedness of a class of non linear and non local mixed hyperbolic--parabolic systems in bounded domains, with Dirichlet boundary conditions. In view of control problems, stability estimates on the dependence of solutions on data and parameters are also provided. These equations appear in models devoted to population dynamics or to epidemiology, for instance. *2020 Mathematics Subject Classification:* 35M30, 35L04, 35K20 *Keywords:* Mixed Hyperbolic--Parabolic Initial Boundary Value Problems; Hyperbolic--Parabolic Problems with Dirichlet Boundary Conditions. author: - Rinaldo M. Colombo - Elena Rossi bibliography: - dirichlet.bib title: | Non Linear Hyperbolic--Parabolic Systems\ with Dirichlet Boundary Conditions --- # Introduction {#sec:Intro} We consider the following non linear system on a bounded domain $\Omega \subset {\mathbb{R}}^n$ $$\label{eq:1} \left\{ \begin{array}{l} \partial_t u + \mathinner{\nabla\cdot}\left(u \,v(t,w) \right) = \alpha (t,x,w) \, u + a(t,x) \,, \\ \partial_t w - \mu \, \Delta w = \beta (t,x,u,w) \, w + b (t,x)\,, \end{array} \right. \quad (t,x) \in [0,T] \times \Omega \,.$$ Systems of this form arise, for instance, in predator--prey models [@parahyp] and can be used in the control of parasites, see [@SAPM2021; @Pfab2018489]. A similar mixed hyperbolic--parabolic system is considered, in one space dimension, in [@MR4273477], where the Euler equations replace the balance law in [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. Motivated by these applications, terms in [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} may well contain non local functions of the unknowns.
Typically, whenever $u$ is a *predator* and $w$ a *prey*, the speed $v$ governing the movement of $u$, when computed at a point $x$, i.e., $\left(v \left(t,w\right)\right) (x)$, depends on $w$ through integrals of the form $\int_{{\left\|x-\xi\right\|} \leq \rho} f \left(t,x, \xi, w (t,\xi)\right)\mathinner{\mathrm{d}{\xi}}$, where $\rho$ is the *horizon* at which the predator feels the prey. Under standard assumptions on the functions defining [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}, we provide the analytical framework in which the existence and the uniqueness of solutions to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}--[\[eq:49\]](#eq:49){reference-type="eqref" reference="eq:49"} are established. Moreover, we obtain a full set of *a priori* and stability estimates, in view of the interest in control problems based on these equations, see [@MR4456851]. To this aim, we equip [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} with homogeneous Dirichlet boundary conditions and initial data: $$\label{eq:49} \left\{ \begin{array}{rcl} u (t,\xi) & = & 0 \\ w (t, \xi) & = & 0 \end{array} \right. \quad(t,\xi) \in [0,T] \times \partial\Omega \qquad \mbox{ and } \qquad \left\{ \begin{array}{rcl} u (0,x) & = & u_o (x) \\ w (0,x) & = & w_o (x) \end{array} \right. \quad x \in \Omega \,.$$ We stress that the whole construction is settled in $\mathbf{L}^1$, a usual choice for balance laws but less common in the case of the parabolic equation. This choice is motivated by the clear physical meaning of *total population* attached to this norm, whenever solutions are positive -- a standard situation in the motivating models. As is well known, in parabolic equations, $\mathbf{L}^2$ or ${\mathbf{W}^{k,2}}$ are more standard choices, also thanks to the further properties of reflexive spaces, see for instance the recent papers [@Colli; @HanSongYu].
The introduction of a boundary, with the corresponding boundary conditions, affects the whole analytical structure, differently in the two equations. Indeed, as is well known, the hyperbolic equation for $u$ may well lead to problems that are locally overdetermined, resulting in the boundary condition being simply neglected, see [@MR542510; @siam2018; @MR2322018; @MR1884231]. On the contrary, the solution to the parabolic equation attains the prescribed value along the boundary, for all positive times, see [@FriedmanBook; @MR0241822; @QuittnerSouplet]. We stress that in the hyperbolic case, different definitions of solutions are available, see [@MR542510; @MR2322018; @MR3819847; @MR1884231]. Here, we provide Definition [Definition 12](#def:Hyp){reference-type="ref" reference="def:Hyp"}, which unifies different approaches, also allowing us to prove an intrinsic uniqueness of solutions, i.e., a uniqueness independent of the way solutions are constructed. Particularly relevant are the estimates on the dependence of $(u,w)$ on the terms $a,b$ in [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}, which typically play the role of controls. Indeed, in the applications of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} to biological problems, $a$ and $b$ typically measure the deployment of parasitoids or chemicals that hinder the propagation or reproduction of harmful parasites, see [@SAPM2021; @Pfab2018489]. It is with reference to this context that we care to ensure the positivity of solutions, whenever the data and the controls are positive. The next section, after the necessary introduction of the notation, presents the main result. Proofs and further technical details are deferred to Section [3](#sec:proofs){reference-type="ref" reference="sec:proofs"}, where different paragraphs refer to the parabolic problem, to the hyperbolic one and to the coupling.
$\mathbb{R}_+ = [0, +\infty\mathclose[$, $\mathbb{R}_- =\mathopen]-\infty,0]$. If $A \subseteq \mathbb{R}^n$, the characteristic function ${\chi_{\strut A}}$ is defined by ${\chi_{\strut A}} (x) = 1$ iff $x \in A$ and ${\chi_{\strut A}} (x) = 0$ iff $x \in \mathbb{R}^n\setminus A$. For $x_o \in \mathbb{R}^n$ and $r>0$, $B (x_o,r)$ is the open ball centered at $x_o$ with radius $r$. We fix a time $T > 0$ and impose the following condition on the spatial domain $\Omega$: 1. [\[it:omega\]]{#it:omega label="it:omega"} $\Omega$ is a non empty, bounded and connected open subset of ${\mathbb{R}}^n$, with $\mathbf{C}^{2,\gamma}$ boundary, for a $\gamma \in \mathopen]0,1\mathclose]$. This condition is mainly motivated by the treatment of the parabolic part. Here we mostly use the framework in [@QuittnerSouplet Appendix B, § 48]. Other possible regularity assumptions on $\partial\Omega$ are in [@MR0241822 Chapter 4, § 4, p. 294]. We impose the following assumptions on the functions appearing in problem [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}: 1.
[\[it:v\]]{#it:v label="it:v"} $v: [0,T] \times \mathbf{L}^\infty (\Omega; \mathbb{R}) \to (\mathbf{C}^{2} \cap {\mathbf{W}^{1,\infty}}) (\Omega; \mathbb{R}^n)$ is such that for a constant $K_v>0$ and for a map $C_v \in {\mathbf{L}_{\mathbf{loc}}^{\infty }}([0,T]\times{\mathbb{R}}_+; {\mathbb{R}}_+)$ non decreasing in each argument, for all $t,t_1,t_2 \in [0,T]$ and $w,w_1, w_2 \in \mathbf{L}^\infty (\Omega;{\mathbb{R}})$, $$\begin{aligned} {\left\|v (t,w)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R}^n)} \leq \ & K_v \, {\left\|w\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \\ {\left\|D_x v (t,w)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R}^n)} \leq \ & K_v\, {\left\|w\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \\ {\left\|v (t_1,w_1) - v (t_2,w_2)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \leq \ & K_v \left( {\left|t_2 - t_1\right|} + {\left\|w_2 - w_1\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}\right) \\ {\left\|D^2_x v (t,w)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R}^{n\times n})} \leq \ & C_v \left(t,{\left\|w\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})}\right) \, {\left\|w\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \\ {\left\|\mathinner{\nabla\cdot}\left(v (t_1,w_1) - v (t_2,w_2)\right)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \leq \ & C_v \left(t,\max_{i=1,2}{\left\|w_i\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})}\right) \, {\left\|w_1 - w_2\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \,. \end{aligned}$$ ```{=html} <!-- --> ``` 1. [\[it:alpha\]]{#it:alpha label="it:alpha"} $\alpha: [0,T] \times \Omega \times \mathbb{R}\to \mathbb{R}$ admits a constant $K_\alpha> 0$ such that, for a.e. 
$t \in [0,T]$ and all $w, w_1,w_2 \in \mathbb{R}$ $$\begin{aligned} \sup_{x \in \Omega} {\left|\alpha (t,x,w_1) - \alpha (t,x,w_2)\right|} \le \ & K_\alpha \; {\left|w_1-w_2\right|} \\ \sup_{(x,w)\in\Omega\times\mathbb{R}} \alpha (t,x,w) \le \ & K_\alpha \left(1+w\right) \end{aligned}$$ and for all $w \in\mathbf{BV}(\Omega; \mathbb{R})$ $$\mathinner{\rm TV}\left(\alpha\left(t,\cdot,w (t,\cdot)\right)\right) \le K_\alpha \left(1+{\left\|w\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + \mathinner{\rm TV}(w)\right) \,.$$ ```{=html} <!-- --> ``` 1. [\[it:a\]]{#it:a label="it:a"} $a \in \mathbf{L}^1 \left([0,T]; \mathbf{L}^\infty(\Omega; \mathbb{R})\right)$ and for all $t \in [0,T]$, $a (t) \in \mathbf{BV}(\Omega;\mathbb{R})$. ```{=html} <!-- --> ``` 1. [\[it:betaFinal\]]{#it:betaFinal label="it:betaFinal"} $\beta: [0,T] \times \Omega \times \mathbb{R}\times \mathbb{R}\to \mathbb{R}$ admits a constant $K_\beta > 0$ such that, for a.e. $t \in [0,T]$ and all $u, u_1,u_2,w, w_1,w_2 \in \mathbb{R}$ $$\begin{aligned} \sup_{x \in \Omega} {\left|\beta (t,x,u_1,w_1) - \beta (t,x,u_2,w_2)\right|} \le \ & K_\beta \left({\left|u_1-u_2\right|} + {\left|w_1-w_2\right|}\right) \\ \sup_{(x,u,w)\in\Omega\times\mathbb{R}\times \mathbb{R}} \beta (t,x,u,w) \le \ & K_\beta \,. \end{aligned}$$ ```{=html} <!-- --> ``` 1. [\[it:b\]]{#it:b label="it:b"} $b \in \mathbf{L}^1 ([0,T]; \mathbf{L}^\infty( \Omega;\mathbb{R}))$ and for all $t \in [0,T]$, $b (t) \in \mathbf{BV}(\Omega;\mathbb{R}_+)$. Note in particular that [\[it:v\]](#it:v){reference-type="ref" reference="it:v"} requires bounding $\mathbf{L}^\infty$ norms by means of $\mathbf{L}^1$ norms, a feature typical of non local operators. In fact, referring to predator--prey applications, it is in general reasonable to assume that the $u$ (predator) population moves according to *averages* of the $w$ (prey) population density or of its gradient.
This justifies our requiring $v$ in [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} to be a *non local* function of $w$. Since we deal with the bounded domain $\Omega$, these averages need to be computed only inside $\Omega$. To this aim, the modified convolution introduced in [@siam2018 § 3], which reads $$\label{eq:58} (\rho \mathop{\smash[b]{\,*_{{\strut\Omega}\!}}}\eta) (x) = \dfrac{\int_\Omega \rho (y) \; \eta (x-y) \mathinner{\mathrm{d}{y}}}{\int_{\Omega} \eta (x-y) \mathinner{\mathrm{d}{y}}}$$ is of help. The quantity $(\rho \mathop{\smash[b]{\,*_{{\strut\Omega}\!}}}\eta) (x)$ is an average of the crowd density $\rho$ in $\Omega$ around $x$ as soon as the kernel $\eta$ satisfies ($\boldsymbol{\eta}$) : $\eta (x) = \tilde\eta ({\left\|x\right\|})$, where $\tilde\eta \in \mathbf{C}^{2} ({\mathbb{R}}_+; {\mathbb{R}})$, $\mathop{\rm spt}\tilde\eta = [0, \ell_\eta]$, $\ell_\eta > 0$, $\tilde\eta' \leq 0$, $\tilde\eta' (0) = \tilde\eta'' (0) = 0$ and $\int_{{\mathbb{R}}^n} \eta (\xi) \mathinner{\mathrm{d}{\xi }}= 1$. In those models where it is reasonable to assume that $u$ moves directed towards the areas with higher/lower density of $w$, i.e., $v$ is parallel to the average gradient of $w$ in $\Omega$, we select: $$\label{eq:61} v (t,w) \quad\big/\!\!\big/\quad \dfrac{\mathinner{\nabla}(w \mathop{\smash[b]{\,*_{{\strut\Omega}\!}}}\eta)}{\sqrt{1 + {\left\|\mathinner{\nabla}(w \mathop{\smash[b]{\,*_{{\strut\Omega}\!}}}\eta)\right\|}^2}}$$ where as kernel $\eta$ we choose for instance $\eta (x) = \overline{\ell} \left(\ell^4 - {\left\|x\right\|}^4\right)^4$. Here, $\ell$ has the clear physical meaning of the distance, or *horizon*, at which individuals of the $u$ population *feel* the presence of the $w$ population. The normalization parameter $\overline\ell$ is chosen so that $\int_{\mathbb{R}^2} \eta (x) \mathinner{\mathrm{d}{x}} = 1$.
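The point of the renormalization in [\[eq:58\]](#eq:58){reference-type="eqref" reference="eq:58"} is that the quotient reproduces constants exactly up to $\partial\Omega$, while a plain convolution loses mass near the boundary. A minimal 1D numerical sketch (the interval $\Omega = (0,1)$, the grid and the value of $\ell$ are arbitrary stand-ins; the normalization constant $\overline{\ell}$ cancels in the quotient and is omitted):

```python
import numpy as np

# 1D sketch of the renormalized convolution (rho *_Omega eta) on (0,1),
# with the quartic bump kernel of the text clamped to its support.
ell = 0.2
x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]

def eta(z):
    return np.where(np.abs(z) < ell, (ell**4 - np.abs(z) ** 4) ** 4, 0.0)

def conv_omega(rho):
    num = np.array([np.sum(rho * eta(xi - x)) * dx for xi in x])
    den = np.array([np.sum(eta(xi - x)) * dx for xi in x])
    return num / den

# The quotient reproduces constants exactly, even near the boundary ...
assert np.allclose(conv_omega(np.full_like(x, 3.0)), 3.0)

# ... while a plain convolution loses roughly half the mass at x = 0.
plain = np.array([np.sum(np.full_like(x, 3.0) * eta(xi - x)) * dx for xi in x])
assert plain[0] < 0.6 * plain[len(x) // 2]
```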
A choice like [\[eq:61\]](#eq:61){reference-type="eqref" reference="eq:61"} is consistent with the requirements [\[it:v\]](#it:v){reference-type="ref" reference="it:v"}, as proved in [@siam2018 Lemma 3.2]. To state what we mean by a *solution* to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}, we resort to the standard definitions of solutions, separately, to the hyperbolic and to the parabolic problems constituting [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. In the former case, we refer to [@MR2322018; @MR3819847; @MR1884231] and in the latter to the classical [@QuittnerSouplet]. **Definition 1**. *A pair $(u,w) \in \mathbf{C}^{0} \left([0,T]; \mathbf{L}^1 (\Omega;{\mathbb{R}}^2)\right)$ is a solution to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}-- [\[eq:49\]](#eq:49){reference-type="eqref" reference="eq:49"} if, setting $$% \label{eq:44} \begin{array}{rcl} c (t,x) & = & v (t,w) (x) \\ A (t,x) & = & \alpha \left(t,x,w (t,x)\right) \end{array} \qquad\qquad B (t,x) = \beta \left(t,x,u (t,x),w (t,x)\right) \,,$$ the function $u$, according to Definition [Definition 12](#def:Hyp){reference-type="ref" reference="def:Hyp"}, solves $$% \label{eq:28} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t u + \mathinner{\nabla\cdot}\left(u \, c(t,x) \right) = A (t,x)\, u + a(t,x) & (t,x) & \in & [0,T] \times \Omega \\ u (t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial \Omega \\ u (0,x) = u_o (x) & x & \in & \Omega \end{array} \right.$$ and the function $w$, according to Definition [Definition 3](#def:Para){reference-type="ref" reference="def:Para"}, solves $$% \label{eq:43} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t w - \mu \, \Delta w = B (t,x) \, w + b (t,x) & (t,x) & \in & [0,T] \times \Omega \\ w (t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial \Omega \\ w (0,x) = w_o (x) & x & \in & \Omega \,. 
\end{array} \right.$$* In the present framework, we also verify that, under suitable conditions on the initial data, the solution $(u,w)$ enjoys the following regularity: $\left(u (t), w (t)\right) \in (\mathbf{BV}\cap \mathbf{L}^\infty) (\Omega; {\mathbb{R}}_+^2)$ for all $t \in [0,T]$. We are now ready to state the main result of this work. **Theorem 2**. *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"}--[\[it:v\]](#it:v){reference-type="ref" reference="it:v"}--[\[it:alpha\]](#it:alpha){reference-type="ref" reference="it:alpha"}--[\[it:a\]](#it:a){reference-type="ref" reference="it:a"}--[\[it:betaFinal\]](#it:betaFinal){reference-type="ref" reference="it:betaFinal"}--[\[it:b\]](#it:b){reference-type="ref" reference="it:b"} hold. For any initial datum $(u_o,w_o)$ in $(\mathbf{L}^\infty \cap \mathbf{BV}) (\Omega;{\mathbb{R}}^2)$, problem [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} admits a unique solution on $[0,T]$ in the sense of Definition [Definition 1](#def:sol){reference-type="ref" reference="def:sol"}. Moreover, the following properties hold:* #### A priori bounds: *There exists a constant $C$ depending only on $\Omega$, $K_\alpha,K_\beta,K_v$ such that for all $t \in [0,T]$ and for all initial data $$\begin{aligned} \label{eq:51} {\left\|w (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \ & e^{C \, t} \left( {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \right) \\ \label{eq:52} {\left\|w (t)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \le \ & e^{C \, t} \left( {\left\|w_o\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty(\Omega;{\mathbb{R}}))} \right) \,.
\end{aligned}$$ Let $C_w (t)$ denote the maximum of the two right hand sides in [\[eq:51\]](#eq:51){reference-type="eqref" reference="eq:51"} and in [\[eq:52\]](#eq:52){reference-type="eqref" reference="eq:52"}; then, $$\begin{aligned} \label{eq:54} {\left\|u (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \ & \left({\left\|u_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})}\right) \exp\left(C \, t \left( 1 + C_w (t) \right) \right) \\ \label{eq:55} {\left\|u (t)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le \ & \left({\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega; \mathbb{R}))}\right) \exp\left(C \, t \left( 1 + 2\, C_w (t) \right) \right) \,. \end{aligned}$$* #### Lipschitz continuity in the initial data: *Let $(\tilde u_o, \tilde w_o) \in (\mathbf{L}^\infty \cap \mathbf{BV}) (\Omega;{\mathbb{R}}^2)$ and call $(\tilde u, \tilde w)$ the corresponding solution to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. Then, for all $t \in [0,T]$, $${\left\|u(t)- \tilde u (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w(t)- \tilde w (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \mathcal{C}(t) \left( {\left\|u_o - \tilde u_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w_o - \tilde w_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}\right),$$ where $\mathcal{C} \in \mathbf{L}^\infty ([0,T];\mathbb{R}_+)$ depends on $\Omega$, $K_\alpha$, $K_\beta$, $K_v$, on the map $C_v$, on norms and total variation of the functions $a$ and $b$ and of the initial data.* #### Stability with respect to the controls: *Let $\tilde a$ satisfy [\[it:a\]](#it:a){reference-type="ref" reference="it:a"}, $\tilde b$ satisfy [\[it:b\]](#it:b){reference-type="ref" reference="it:b"} and call $(\tilde u, \tilde w)$ the corresponding solution to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. 
Then, for all $t \in [0,T]$, $${\left\|u(t)- \tilde u (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w(t)- \tilde w (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \mathcal{C}(t)\! \left( {\left\|a - \tilde a\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \!+\! {\left\|b - \tilde b\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \right),$$ where $\mathcal{C} \in \mathbf{L}^\infty ([0,T];\mathbb{R}_+)$ depends on $\Omega$, $K_\alpha$, $K_\beta$, $K_v$, on the map $C_v$, on norms and total variation of the functions $a, \tilde a$ and $b, \tilde b$ and of the initial data.* #### Positivity: *If for all $(t,x) \in [0,T] \times \Omega$, $a (t,x) \geq 0$ and $b (t,x) \geq 0$, then for every initial datum $(u_o, w_o)$ with $u_o (x) \geq 0$ and $w_o (x) \geq 0$ for all $x \in \Omega$, the solution $(u, w)$ is such that $u (t,x) \geq 0$ and $w (t,x) \geq 0$ for all $(t,x) \in [0,T] \times \Omega$.* The proof is deferred to Section [3](#sec:proofs){reference-type="ref" reference="sec:proofs"}. The lower semicontinuity of the total variation with respect to the $\mathbf{L}^1$ distance ensures moreover that bounds on the total variation of the solution can be obtained by means of [\[eq:w_TV\]](#eq:w_TV){reference-type="eqref" reference="eq:w_TV"} and [\[eq:u_TV\]](#eq:u_TV){reference-type="eqref" reference="eq:u_TV"}. # Proofs {#sec:proofs} In the proofs below, we provide all details wherever necessary and precise references for those parts that differ only slightly from the cases under consideration. ## Parabolic Estimates {#sec:parabolicDirichlet} Fix $T,\mu >0$ and let $\Omega$ satisfy [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"}.
This paragraph is devoted to the IBVP $$\label{eq:2} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t w = \mu \, \Delta w + B (t,x) \, w + b (t,x) & (t,x) & \in & [0,T] \times \Omega \\ w (t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial\Omega \\ w (0,x) = w_o (x) & x & \in & \Omega\,. \end{array} \right.$$ The following definition is adapted from [@QuittnerSouplet], see Remark [Remark 4](#rem:l1delta){reference-type="ref" reference="rem:l1delta"}. **Definition 3**. *A map $w \in \mathbf{C}^{0} ([0,T]; \mathbf{L}^1 (\Omega{;{\mathbb{R}}}))$ is a solution to [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} if $w (0) = w_o$ and for all test functions $\varphi\in \mathbf{C}^{2} ([0,T]\times\overline{\Omega};{\mathbb{R}})$ such that $\varphi(T,x) =0$ for all $x \in \Omega$ and $\varphi(t,\xi) = 0$ for all $(t, \xi) \in [0,T] \times \partial\Omega$: $$\label{eq:4} \begin{array}{@{}r@{}} \displaystyle \int_0^T \int_\Omega \left( w (t,x) \, \partial_t \varphi(t,x) + \mu \, w (t,x) \, \Delta\varphi(t,x) + \left(B (t,x) \, w (t,x) + b (t,x)\right) \varphi(t,x) \right) \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{t }} \\ \displaystyle + \int_\Omega w_o (x) \, \varphi(0,x) \mathinner{\mathrm{d}{x}} = 0 \,. \end{array}$$* **Remark 4**. *Let $d (x, \partial\Omega) = \inf_{y \in \partial\Omega} {\left\|x-y\right\|}$. Recall ${\left\|w\right\|}_{\mathbf{L}^1_\delta (\Omega;{\mathbb{R}})} = \int_\Omega {\left|u (x)\right|} \, d (x,\partial\Omega) \mathinner{\mathrm{d}{x}}$ from [@QuittnerSouplet Appendix B]. Since ${\left\|w\right\|}_{\mathbf{L}^1_\delta (\Omega;{\mathbb{R}})} \leq \mathcal{O}(1){\left\|w\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})}$, a solution in the sense of Definition [Definition 3](#def:Para){reference-type="ref" reference="def:Para"} is also a *weak $\mathbf{L}^1_\delta$ solution* in the sense of [@QuittnerSouplet Definition 48.8, Appendix B].* **Remark 5**. 
*In  it is sufficient to consider test functions $\varphi\in \mathbf{C}^{1} ([0,T]\times\overline{\Omega}; {\mathbb{R}})$ such that for all $t \in [0,T]$, the map $x \mapsto \varphi(t,x)$ is of class $\mathbf{C}^{2} (\overline{\Omega}; {\mathbb{R}})$ and moreover $\varphi(T,x) =0$ for all $x \in \Omega$ and $\varphi(t,\xi) = 0$ for all $(t,\xi) \in [0,T]\times \partial\Omega$. This is proved through a standard regularization by means of a convolution with a mollifier supported in ${\mathbb{R}}_-$.* For $\mu>0$, the heat kernel is denoted $H_\mu (t,x) = (4 \, \pi \, \mu \, t)^{-n/2} \; \exp \left(-{\left\|x\right\|}^2 \middle/(4 \, \mu \, t)\right)$, where $t > 0$, $x \in {\mathbb{R}}^n$. As it is well known, ${\left\|H_\mu (t)\right\|}_{\mathbf{L}^1 ({\mathbb{R}}^n; {\mathbb{R}})} = 1$. **Proposition 6**. *Let $\Omega$ satisfy [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"} and fix $\mu>0$. Then, there exists a Green function $$G \in \mathbf{C}^{\infty}(\mathopen]0, +\infty\mathclose[ \times \mathopen]0, +\infty\mathclose[ \times \Omega \times \Omega; {\mathbb{R}}_+) \cap \mathbf{C}^{0}(\mathopen]0, +\infty\mathclose[ \times \mathopen]0, +\infty\mathclose[ \times \overline{\Omega} \times \overline{\Omega}; {\mathbb{R}}_+)$$ such that:* 1. *[\[item:K2\]]{#item:K2 label="item:K2"} For all $t,\tau \in {\mathbb{R}}_+$ and $x,y \in \Omega$, $G (t,\tau,x,y) = G (t,\tau,y,x)$.* 2. *[\[item:K3\]]{#item:K3 label="item:K3"} For all $t \in {\mathbb{R}}_+$, $\xi \in \partial\Omega$ and $y \in \Omega$, $G (t,\tau,\xi,y)=0$.* 3. 
*[\[item:K4\]]{#item:K4 label="item:K4"} There exist positive constants $C,c$ such that for all $t \in {\mathbb{R}}_+$ and for all $x,y \in \Omega$, $$\begin{aligned} 0 \le G (t,\tau,x,y) \le \ & H_\mu (t-\tau,x-y) \\ {\left|\partial_t G (t,\tau,x,y)\right|} \le \ & c \, (t-\tau)^{-(n+2)/2} \exp\left(-C \, {\left\|x-y\right\|}^2 \middle/ (t-\tau)\right) \\ {\left\|\nabla_x G (t,\tau,x,y)\right\|} \le \ & c \, (t-\tau)^{-(n+1)/2} \exp\left(-C \, {\left\|x-y\right\|}^2 \middle/ (t-\tau)\right) \,. \end{aligned}$$* 4. *[\[item:K5\]]{#item:K5 label="item:K5"} For all $b \in \mathbf{L}^1 ([0,T]\times\Omega; {\mathbb{R}})$ and all $w_o \in \mathbf{L}^1 (\Omega;\mathbb{R})$, the IBVP $$\label{eq:3} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t w = \mu \, \Delta w + b (t,x) & (t,x) & \in & [0,T] \times \Omega \\ w (t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial\Omega \\ w (0,x) = w_o (x) & x & \in & \Omega \end{array} \right.$$ admits a unique solution in the sense of , which is $$\label{eq:7} w (t,x) = \int_\Omega G (t,0,x,y) \, w_o (y) \mathinner{\mathrm{d}{y}} + \int_0^t \int_\Omega G (t,\tau,x,y) \, b (\tau,y) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}}\,.$$* The Green function depends both on $\mu$ and on $\Omega$ but, for simplicity, we omit this dependence. Condition [\[item:K2\]](#item:K2){reference-type="ref" reference="item:K2"} follows from [@QuittnerSouplet Appendix B, § 48.2]. Property [\[item:K3\]](#item:K3){reference-type="ref" reference="item:K3"} comes from [@MR0241822 Chapter IV, § 16, (16.7)--(16.8) p. 408]. The first bound in [\[item:K4\]](#item:K4){reference-type="ref" reference="item:K4"} follows from [@QuittnerSouplet Formula (48.4), p.440], the second and the third one from [@MR0241822 Chapter IV, § 16, Theorem 16.3, p. 413]. 
To prove [\[item:K5\]](#item:K5){reference-type="ref" reference="item:K5"}, use Remark [Remark 4](#rem:l1delta){reference-type="ref" reference="rem:l1delta"} and [@QuittnerSouplet Proposition 48.9, Appendix B], [@QuittnerSouplet Corollary 48.10, Appendix B] and the Maximum Principle [@QuittnerSouplet Proposition 52.7, Appendix B], which ensure the equivalence between [\[eq:4\]](#eq:4){reference-type="eqref" reference="eq:4"} and [\[eq:7\]](#eq:7){reference-type="eqref" reference="eq:7"} as soon as either $w_o \geq 0$, $b \geq 0$ or $w_o \leq 0$, $b \leq 0$. The linearity of [\[eq:3\]](#eq:3){reference-type="eqref" reference="eq:3"} allows us to complete the proof. **Proposition 7**. *Let $\Omega$ satisfy [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"}, fix $\mu>0$ and let* 1. *[\[it:P1\]]{#it:P1 label="it:P1"} $w_o \in \mathbf{L}^\infty (\Omega;{\mathbb{R}})$,* 2. *[\[it:P2\]]{#it:P2 label="it:P2"} $B \in \mathbf{L}^\infty ([0,T] \times \Omega;{\mathbb{R}})$,* 3. *[\[it:P3\]]{#it:P3 label="it:P3"} $b \in \mathbf{L}^1 ([0,T]; \mathbf{L}^\infty( \Omega;{\mathbb{R}}))$.* *Then:* 1. *[\[it:P_existence\]]{#it:P_existence label="it:P_existence"} Problem [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} admits a unique solution in the sense of .* 2. *[\[it:P_formula\]]{#it:P_formula label="it:P_formula"} The solution to [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} is implicitly given by $$\label{eq:5} \begin{array}{rcl} \displaystyle w (t,x) & = & \displaystyle \int_\Omega G (t,0,x,y) \, w_o (y) \mathinner{\mathrm{d}{y}} \\ & & \displaystyle \qquad + \int_0^t \int_\Omega G (t,\tau,x,y) \left(B (\tau,y) \, w (\tau,y) + b (\tau,y)\right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}} \end{array}$$ where $G$, independent of $b$ and $B$, is defined in .* 3.
*[\[it:P_apriori\]]{#it:P_apriori label="it:P_apriori"} The following *a priori* bounds hold for all $t \in [0,T]$ $$\begin{aligned} \label{eq:11} {\left\|w (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq & \left( {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t]\times \Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}}, \\ \label{eq:32} {\left\|w (t)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \leq & \left( {\left\|w_o\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}))} \right) \exp \int_0^t {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau }}\,. \end{aligned}$$* 4. *[\[it:P_stability\]]{#it:P_stability label="it:P_stability"} If $w_1$, $w_2$ solve [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} with data $w_o^1$, $w_o^2$ satisfying [\[it:P1\]](#it:P1){reference-type="ref" reference="it:P1"}, functions $B_1$, $B_2$ satisfying [\[it:P2\]](#it:P2){reference-type="ref" reference="it:P2"} and functions $b_1$, $b_2$ satisfying [\[it:P3\]](#it:P3){reference-type="ref" reference="it:P3"}, then $$\label{eq:10} \begin{array}{rcl} & & \displaystyle {\left\|w_1 (t) -w_2 (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \\ & \leq & \displaystyle \left( {\left\|w_o^1- w_o^2\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b_1 - b_2\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B_1(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} \\ & & \displaystyle + {\left\|B_1 - B_2\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \left( {\left\|w_o^2\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|b_2\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}))} \right) \\ & & 
\displaystyle \qquad \times \exp \int_0^t \left( {\left\|B_1(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|B_2(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})}\right) \mathinner{\mathrm{d}{\tau }}\,. \end{array}$$* 5. *[\[it:P_positivity\]]{#it:P_positivity label="it:P_positivity"} Positivity: if $b \ge 0$ and $w_o \ge 0$ then $w \ge 0$.* 6. *[\[it:P_tv\]]{#it:P_tv label="it:P_tv"} If $w_o\in \mathbf{BV}(\Omega; \mathbb{R})$ and $b(t) \in \mathbf{BV}(\Omega; \mathbb{R})$ for all $t \in [0,T]$, then for all $t \in [0,T]$ the following estimate holds. $$\begin{aligned} \nonumber & \mathinner{\rm TV}\left(w(t)\right) \\ \label{eq:37} \le \ & \mathinner{\rm TV}(w_o) + \int_0^t \mathinner{\rm TV}\left(b (\tau)\right) \mathinner{\mathrm{d}{\tau}} \\ \nonumber & + \mathcal{O} (1) \, \sqrt{t}\, {\left\|B\right\|}_{\mathbf{L}^\infty ([0,t]\times \Omega; \mathbb{R})} \left( {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} {+} {\left\|b\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega; \mathbb{R})} \right) \exp \int_0^t {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \mathinner{\mathrm{d}{\tau}}. \end{aligned}$$* We note, for completeness, that in the setting of Proposition [Proposition 7](#prop:superStimePara){reference-type="ref" reference="prop:superStimePara"} the following regularity results -- not of use in the sequel -- can also be obtained: \(7\) : If $w_o\in \mathbf{C}_c^{1} (\Omega; \mathbb{R})$ then the solution $w$ is such that $w(t) \in \mathbf{C}^{1} (\Omega; \mathbb{R})$ for all $t \in [0,T]$. \(8\) : The solution $w$ is Hölder continuous in time. We split the proof into a few steps. #### Claim 1: Problem [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} admits at most one solution in the sense of .
Observe that if $w_1$, $w_2$ solve [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} in the sense of , then their difference satisfies $\int_0^T \int_\Omega (w_2-w_1) \, (\partial_t \varphi+ \mu \, \Delta \varphi+ B \, \varphi) \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{t}} = 0$, for all $\varphi$ as regular as specified in . By [@QuittnerSouplet (ii) in Theorem 48.2, Appendix B], we choose as $\varphi$ the strong solution to $$\left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t \varphi+ \mu \, \Delta\varphi+ B (t,x) \, \varphi= f & (t,x) & \in & [0,T] \times \Omega \\ \varphi(t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial\Omega \\ \varphi(T,x) = 0 & x & \in & \Omega \end{array} \right.$$ where $f \in \mathbf{C}^{0} ([0,T] \times \overline{\Omega};{\mathbb{R}})$. We thus have $\int_0^T \int_\Omega (w_2-w_1) \, f \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{t}} = 0$, so that, by the arbitrariness of $f$, $w_1=w_2$. #### Claim 2: If $w \in \mathbf{L}^\infty ([0,T]; \mathbf{L}^1 (\Omega;{\mathbb{R}}))$ satisfies [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}, then [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"} and [\[eq:32\]](#eq:32){reference-type="eqref" reference="eq:32"} hold. Consider first [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}. 
By [\[item:K4\]](#item:K4){reference-type="ref" reference="item:K4"}, recalling ${\left\|H_\mu (t)\right\|}_{\mathbf{L}^1 (\Omega; {\mathbb{R}})} \leq 1$, we have $$\begin{aligned} {\left\|w (t)\right\|}_{\mathbf{L}^1 (\Omega; {\mathbb{R}})} \le \ & \int_\Omega \int_\Omega G(t,0,x,y) {\left|w_o(y)\right|}\mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \\ & + \int_\Omega \int_0^t \int_\Omega G(t,\tau,x,y) {\left|B (\tau,y) \, w (\tau,y) + b (\tau,y)\right|} \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau }}\mathinner{\mathrm{d}{x}} \\ \le \ & \int_\Omega \int_\Omega H_\mu(t,x-y) {\left|w_o(y)\right|} \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \\ & + \int_\Omega \int_0^t \int_\Omega H_\mu(t-\tau,x-y) {\left|B (\tau,y) \, w (\tau,y) + b (\tau,y)\right|} \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau }}\mathinner{\mathrm{d}{x}} \\ \le \ & {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + \int_0^t {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \, {\left\|w (\tau)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega;{\mathbb{R}})} \,.\end{aligned}$$ An application of Gronwall Lemma [@bressan-piccoli Lemma 3.1] yields [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}. The proof of [\[eq:32\]](#eq:32){reference-type="eqref" reference="eq:32"} is entirely similar. #### Claim 3: If $w_1, w_2 \in \mathbf{L}^\infty ([0,T]; \mathbf{L}^1 (\Omega;{\mathbb{R}}))$ satisfy [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}, then [\[eq:10\]](#eq:10){reference-type="eqref" reference="eq:10"} holds. 
Note that $$\begin{aligned} w_1 (t,x) - w_2 (t,x) =\ & \int_\Omega G (t,0,x,y) \, \left(w_o^1 (y)- w_o^2 (y) \right)\mathinner{\mathrm{d}{y}} \\ & \qquad + \int_0^t \int_\Omega G (t,\tau,x,y) \left(B_1 (\tau,y) \, w_1 (\tau,y) - B_2 (\tau,y) \, w_2 (\tau,y) \right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}} \\ & \qquad + \int_0^t \int_\Omega G (t,\tau,x,y) \left(b_1 (\tau,y) - b_2 (\tau,y) \right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}} \\ = \ & \int_\Omega G (t,0,x,y) \, \left(w_o^1 (y)- w_o^2 (y) \right)\mathinner{\mathrm{d}{y}} \\ & \qquad + \int_0^t \int_\Omega G (t,\tau,x,y) B_1 (\tau,y) \, \left(w_1 (\tau,y) - w_2 (\tau,y) \right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}} \\ & \qquad + \int_0^t \int_\Omega G (t,\tau,x,y) \, \tilde b (\tau,y) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}},\end{aligned}$$ where $\tilde b (t,x) = \left(B_1 (t,x) - B_2 (t,x)\right) \, w_2 (t,x) + b_1 (t,x) - b_2 (t,x)$. Proceeding as in the proof of **Claim 2** and exploiting [\[eq:32\]](#eq:32){reference-type="eqref" reference="eq:32"}, we obtain $$\begin{aligned} & {\left\|w_1 (t) - w_2 (t)\right\|}_{\mathbf{L}^1 (\Omega; {\mathbb{R}})} \\ \leq \ & \left( {\left\|w_o^1- w_o^2\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|\tilde b\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B_1(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} \\ \leq \ & \left( {\left\|w_o^1- w_o^2\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|B_1 - B_2\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} {\left\|w_2\right\|}_{\mathbf{L}^\infty ([0,t]\times\Omega;{\mathbb{R}})} + {\left\|b_1 - b_2\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \right) \\ & \quad \times \exp \int_0^t {\left\|B_1(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} \\ \leq \ & \left( {\left\|w_o^1- w_o^2\right\|}_{\mathbf{L}^1
(\Omega;{\mathbb{R}})} + {\left\|b_1 - b_2\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B_1(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} \\ & \quad + {\left\|B_1 - B_2\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \left( {\left\|w_o^2\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|b_2\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}))} \right) \\ & \qquad \times \exp \int_0^t \left( {\left\|B_1(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|B_2(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})}\right) \mathinner{\mathrm{d}{\tau }}\,.\end{aligned}$$ #### Claim 4: If $w \in \mathbf{L}^\infty ([0,T]; \mathbf{L}^1 (\Omega;{\mathbb{R}}))$ satisfies [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}, then $w \in \mathbf{C}^{0}([0,T]; \mathbf{L}^1 (\Omega;{\mathbb{R}}))$. Introduce the abbreviation $\tilde{b} (t,x) = B (t,x) \, w (t,x) + b (t,x)$ so that, using [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}, $$\begin{aligned} {\left\|\tilde{b} (t)\right\|}_{\mathbf{L}^1(\Omega; {\mathbb{R}})} \leq \ & {\left\|B (t)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \, {\left\|w (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \\ \leq \ & {\left\|B (t)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \, \left( {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} \\ & + {\left\|b (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})}\end{aligned}$$ and ${\left\|\tilde{b}\right\|}_{\mathbf{L}^\infty ([0,t]; \mathbf{L}^1 (\Omega;{\mathbb{R}}))} \leq \mathcal{O}(1)$.
Compute, using [\[item:K4\]](#item:K4){reference-type="ref" reference="item:K4"}, for $t_2 > t_1 >0$ and $s,\sigma \in\mathopen]t_1, t_2\mathclose[$, $$\begin{aligned} & {\left\|w (t_2)-w (t_1)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \\ \le \ & \int_\Omega \int_\Omega {\left|G (t_2,0,x,y) - G (t_1,0,x,y)\right|} {\left|w_o (y)\right|} \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \\ & + \int_0^{t_1} \int_{\Omega} \int_{\Omega} {\left|G (t_2,\tau,x,y) - G (t_1,\tau,x,y)\right|} {\left|\tilde{b} (\tau,y)\right|} \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{\tau}} \\ & + \int_{t_1}^{t_2} \int_{\Omega} \int_{\Omega} {\left|G (t_2,\tau,x,y)\right|} {\left|\tilde{b} (\tau,y)\right|} \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{\tau}} \\ \le \ & \dfrac{c\, {\left|t_2-t_1\right|}}{s^{1+n/2}} \int_\Omega \int_\Omega {\left|w_o (y)\right|} \exp\left(-C \, {\left\|x-y\right\|}^2 \middle/ s\right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \\ & + \dfrac{c\, {\left|t_2-t_1\right|}}{\sigma^{1+n/2}} \int_{0}^{t_1} \int_\Omega \int_\Omega {\left|\tilde{b} (\tau,y)\right|} \, \exp\left(-C \, {\left\|x-y\right\|}^2 \middle/ \sigma\right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{\tau}} \\ & + c \int_{t_1}^{t_2} (t_2-\tau)^{-n/2} \int_\Omega \int_\Omega {\left|\tilde{b} (\tau,y)\right|} \, \exp\left(-C \, {\left\|x-y\right\|}^2 \middle/ (t_2-\tau)\right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{\tau}} \\ \le \ & \dfrac{c\, {\left|t_2-t_1\right|}}{s^{1+n/2}} \, {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \int_\Omega \exp\left(-C \, {\left\|x\right\|}^2 \middle/ s\right) \mathinner{\mathrm{d}{x}} \\ & + \dfrac{c\, {\left|t_2-t_1\right|}}{\sigma^{1+n/2}} \, t_1 \, {\left\|\tilde{b}\right\|}_{\mathbf{L}^\infty ([0,t_2]; \mathbf{L}^1(\Omega; {\mathbb{R}}))} \int_\Omega \exp\left(-C \, {\left\|x\right\|}^2 \middle/ \sigma\right)
\mathinner{\mathrm{d}{x}} \\ & + c \, {\left\|\tilde{b}\right\|}_{\mathbf{L}^\infty ([0,t_2]; \mathbf{L}^1(\Omega; {\mathbb{R}}))} \int_{t_1}^{t_2} (t_2-\tau)^{-n/2} \int_\Omega \exp\left(-C \, {\left\|x\right\|}^2 \middle/ (t_2-\tau)\right) \mathinner{\mathrm{d}{x}} \mathinner{\mathrm{d}{\tau}} \\ \le \ & \dfrac{c\, {\left|t_2-t_1\right|}}{C^{n/2} \, s} \, {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \, \int_{{\mathbb{R}}^n} \exp\left(- {\left\|x\right\|}^2 \right) \mathinner{\mathrm{d}{x}} \\ &+ \dfrac{c\, {\left|t_2-t_1\right|}}{C^{n/2}} \, {\left\|\tilde{b}\right\|}_{\mathbf{L}^\infty([0,t_2];\mathbf{L}^1(\Omega; {\mathbb{R}}))} \, \int_{{\mathbb{R}}^n} \exp\left(- {\left\|x\right\|}^2 \right) \mathinner{\mathrm{d}{x}} \\ & + \dfrac{c\, {\left|t_2-t_1\right|}}{C^{n/2}} \, {\left\|\tilde{b}\right\|}_{\mathbf{L}^\infty ([0,t_2]; \mathbf{L}^1(\Omega; {\mathbb{R}}))} \, \int_{{\mathbb{R}}^n} \exp\left(- {\left\|x\right\|}^2 \right) \mathinner{\mathrm{d}{x}} \\ \le \ & \mathcal{O}(1)\left(1+\dfrac{1}{s}\right) \, {\left|t_2-t_1\right|} \,.\end{aligned}$$ Since $(t_2-t_1)/s \leq t_2/t_1 - 1 < \delta$ as soon as $t_2 < (1+\delta) \, t_1$, the $\mathbf{L}^1 (\Omega;{\mathbb{R}})$ continuity of $w$ is proved. #### Claim 5: There exists a solution to [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} in the sense of  satisfying [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}. Assume first that $w_o \in \mathbf{C}^{0} (\overline{\Omega}; {\mathbb{R}})$ with $w_o = 0$ on $\partial\Omega$, $B \in \mathbf{C}^{0} ([0,T]\times\overline{\Omega}; {\mathbb{R}})$ and $b \in \mathbf{C}^{0} ([0,T]\times \overline{\Omega}; {\mathbb{R}})$. From [@MR0241822 Chapter IV, § 16] we know that [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} admits a classical solution, say $w$.
Define $\tilde{b} (t,x) = B (t,x) \, w (t,x) + b (t,x)$, so that $w$ satisfies [\[eq:7\]](#eq:7){reference-type="eqref" reference="eq:7"} by [\[item:K5\]](#item:K5){reference-type="ref" reference="item:K5"} in . Hence, $w$ also satisfies [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}. Under the weaker regularity [\[it:P1\]](#it:P1){reference-type="ref" reference="it:P1"}, [\[it:P2\]](#it:P2){reference-type="ref" reference="it:P2"} and [\[it:P3\]](#it:P3){reference-type="ref" reference="it:P3"}, introduce sequences $w_o^\nu \in \mathbf{C}^{0} (\overline{\Omega}; {\mathbb{R}})$ and $B^\nu,b^\nu \in \mathbf{C}^{0} ([0,T]\times \overline{\Omega}; {\mathbb{R}})$ converging to $w_o$ in $\mathbf{L}^1 (\Omega;{\mathbb{R}})$ and to $B,b$ in $\mathbf{L}^1 ([0,T] \times \Omega; {\mathbb{R}})$. Call $w^\nu$ the corresponding classical solution to [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} which, by the paragraph above, exists and satisfies $$\label{eq:8} w^\nu (t,x) =\!\! \int_\Omega\! G (t,0,x,y) w^\nu_o (y) \mathinner{\mathrm{d}{y}} + \!\!\int_0^t\!\! \int_\Omega \!G (t,\tau,x,y)\! 
\left(B^\nu \!(\tau,y) w^\nu (\tau,y) + b^\nu (\tau,y)\right)\!\mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\tau}}.$$ Hence, by **Claim 3**, $w^\nu$ and $w^{\nu+1}$ satisfy [\[eq:10\]](#eq:10){reference-type="eqref" reference="eq:10"}, so that $$\begin{aligned} & {\left\|w^{\nu+1} (t) -w^\nu (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \\ \le \ & \left( {\left\|w_o^{\nu+1} - w_o^\nu\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b^{\nu+1} - b^\nu \right\|}_{\mathbf{L}^1 ([0,t] \times \Omega;{\mathbb{R}})} \right) \\ & \times \exp\left(\int_0^t {\left\|B^{\nu} (\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}} \right) \\ & + {\left\|B^{\nu+1} - B^\nu\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \left( {\left\|w_o^{\nu+1}\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|b^{\nu+1}\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}))} \right) \\ & \qquad \times \exp \int_0^t \left( {\left\|B^\nu(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|B^{\nu+1}(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})}\right) \mathinner{\mathrm{d}{\tau }}\,.\end{aligned}$$ By the hypotheses on the sequences $w_o^\nu$, $B^\nu$ and $b^\nu$, $w^\nu$ is a Cauchy sequence in $\mathbf{L}^1 ([0,T] \times \Omega; {\mathbb{R}})$ converging to a function $w$ in $\mathbf{L}^1 ([0,T] \times \Omega; {\mathbb{R}})$.
Moreover, since, by [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}, $${\left\|w^\nu (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq \left( {\left\|w_o^\nu\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b^\nu\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B^\nu(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}}$$ letting $\nu \to +\infty$, we also have $$\label{eq:56} {\left\|w (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq \left( {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega;{\mathbb{R}})} \right) \exp \int_0^t {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \mathinner{\mathrm{d}{\tau}}$$ and hence $w \in \mathbf{L}^\infty ([0,T]; \mathbf{L}^1 (\Omega;{\mathbb{R}}))$. Passing to the limit in [\[eq:8\]](#eq:8){reference-type="eqref" reference="eq:8"}, by the Dominated Convergence Theorem, we get that $w$ satisfies [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"} for a.e. $x \in \Omega$. Moreover, for any $\varphi\in \mathbf{C}^{2} ([0,T] \times \overline{\Omega}; {\mathbb{R}})$, a further application of the Dominated Convergence Theorem allows us to pass to the limit $\nu \to +\infty$ in [\[eq:4\]](#eq:4){reference-type="eqref" reference="eq:4"}, proving that $w$ also satisfies [\[eq:4\]](#eq:4){reference-type="eqref" reference="eq:4"}. The $\mathbf{C}^{0}$ in time -- $\mathbf{L}^1$ in space continuity required by  is proved in **Claim 4**. This completes the proof of [\[it:P_existence\]](#it:P_existence){reference-type="ref" reference="it:P_existence"} and proves [\[it:P_formula\]](#it:P_formula){reference-type="ref" reference="it:P_formula"}.
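For illustration only, the fixed point structure of [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"} can be explored numerically. The Python sketch below iterates the implicit Duhamel formula on the model interval $\Omega = \mathopen]0,\pi\mathclose[$ with homogeneous Dirichlet conditions, the Green function being a truncated sine series; the values of $\mu$, $B$, $b$ and $w_o$ are arbitrary illustrative choices, not data coming from [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}.

```python
import numpy as np

# Illustrative sketch (our construction, not the paper's regularization):
# Picard iteration for the implicit Duhamel formula (eq. (5)) on the model
# interval Omega = (0, pi), homogeneous Dirichlet conditions, with the Green
# function given by the truncated sine series
#   G(t, tau, x, y) = (2/pi) sum_k exp(-mu k^2 (t - tau)) sin(kx) sin(ky).
# All parameter values below are arbitrary choices made for this example.

mu = 0.5
K, nx, nt = 40, 64, 40
x = np.linspace(0.0, np.pi, nx)
t = np.linspace(0.0, 0.5, nt)
dx, dt = x[1] - x[0], t[1] - t[0]
k = np.arange(1, K + 1)
S = np.sin(np.outer(x, k))                     # S[i, m] = sin(k_m x_i)

def propagate(h):
    """Return the heat semigroup e^{s mu Delta} h at every lag s = t_j."""
    coef = (2.0 / np.pi) * (S.T @ h) * dx      # sine coefficients of h
    return (np.exp(-mu * np.outer(t, k ** 2)) * coef) @ S.T   # (nt, nx)

w_o = np.sin(x) + 0.3 * np.sin(3 * x)          # datum, vanishing at 0 and pi
B = 0.8 * np.ones((nt, nx))                    # bounded zero-order coefficient
b = 0.2 * np.ones((nt, 1)) * np.sin(2 * x)[None, :]   # source term

free = propagate(w_o)                          # int G(t,0,x,y) w_o(y) dy
w = free.copy()
for _ in range(30):                            # Picard iteration on eq. (5)
    rhs = B * w + b
    prop = [propagate(rhs[i]) for i in range(nt)]
    w_new = free.copy()
    for j in range(1, nt):                     # Riemann sum of the Duhamel term
        w_new[j] = free[j] + dt * sum(prop[i][j - i] for i in range(j))
    gap = np.max(np.abs(w_new - w))            # distance between iterates
    w = w_new
```

On this short time window ${\left\|B\right\|}_{\mathbf{L}^\infty} \, t \le 0.4 < 1$, so successive iterates contract, in line with the stability estimate [\[eq:10\]](#eq:10){reference-type="eqref" reference="eq:10"}, and the computed solution stays within the a priori bound [\[eq:32\]](#eq:32){reference-type="eqref" reference="eq:32"}.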
Then, [\[it:P_apriori\]](#it:P_apriori){reference-type="ref" reference="it:P_apriori"} follows from [\[eq:56\]](#eq:56){reference-type="eqref" reference="eq:56"} and [\[it:P_stability\]](#it:P_stability){reference-type="ref" reference="it:P_stability"} is proved similarly, as in **Claim 3**. #### Claim 6: Positivity. As above, consider a more regular and non negative datum $w_o \in \mathbf{C}^{0} (\overline{\Omega}; {\mathbb{R}}_+)$ with $w_o = 0$ on $\partial\Omega$, $B \in \mathbf{C}^{0} ([0,T]\times\overline{\Omega}; {\mathbb{R}})$ and a non negative $b \in \mathbf{C}^{0} ([0,T]\times \overline{\Omega}; {\mathbb{R}}_+)$. From [@MR0241822 Chapter IV, § 16] we know that [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} admits a classical solution, say $w$. By [@MR0241822 Chapter I, § 2, Theorem 2.1], we also know that $w \geq 0$. Continue as in the proof of Claim 5 to obtain that in the general case the solution is the pointwise almost everywhere limit of non negative classical solutions, completing the proof of [\[it:P_positivity\]](#it:P_positivity){reference-type="ref" reference="it:P_positivity"}. #### Claim 7: $\mathbf{BV}$-bound. We follow the idea of [@SAPM2021 Proposition 2].
First, regularize the initial datum $w_o$ and the function $b$ appearing in the source term as follows: there exist sequences $w_o^h\in \mathbf{C}^{\infty }(\Omega; \mathbb{R})$ and $b_h (t)\in \mathbf{C}^{\infty }(\Omega;\mathbb{R})$, for all $t \in [0,T]$, such that $$\begin{aligned} \lim_{h \to +\infty} {\left\|w_o^h - w_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} = \ & 0, & {\left\|w_o^h\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le \ & {\left\|w_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})}, & \mathinner{\rm TV}(w_o^h) \le\ & \mathinner{\rm TV}(w_o),\end{aligned}$$ and for all $t \in [0,T]$ $$\begin{aligned} \lim_{h \to +\infty} {\left\|b_h (t) - b (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} = \ & 0, & {\left\|b_h (t)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le \ & {\left\|b (t)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})}, & \mathinner{\rm TV}(b_h (t)) \le\ & \mathinner{\rm TV}(b(t)).\end{aligned}$$ According to [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}, define the sequence $w_h$ corresponding to the sequences $w_o^h$ and $b_h$. By construction and due to the regularity of the Green function $G$, $w_h (t) \in \mathbf{C}^{\infty }(\Omega; \mathbb{R})$ for all $t \in [0,T]$. Moreover, exploiting [\[eq:10\]](#eq:10){reference-type="eqref" reference="eq:10"}, it follows immediately that $w_h (t) \to w(t)$ in $\mathbf{L}^1 (\Omega; \mathbb{R})$ as $h \to + \infty$ for a.e. $t \in [0,T]$. 
Compute $\nabla w_h$, using [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}, the symmetry property of the Green function $G$, see [\[item:K2\]](#item:K2){reference-type="ref" reference="item:K2"} in , integration by parts and [\[item:K3\]](#item:K3){reference-type="ref" reference="item:K3"} in : $$\begin{aligned} \nabla w_h(t,x) = \ & \int_\Omega \nabla_x G (t,0,x,y) \, w_o^h(y) \mathinner{\mathrm{d}{y}} + \int_0^t\int_\Omega \nabla_x G (t,\tau,x,y) \, B (\tau,y) \, w_h (\tau,y) \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} \\ & + \int_0^t\int_\Omega \nabla_x G (t,\tau,x,y) \, b_h(\tau,y) \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} \\ = \ & \int_\Omega \nabla_y G (t,0,y,x) \, w_o^h(y) \mathinner{\mathrm{d}{y}} + \int_0^t\int_\Omega \nabla_x G (t,\tau,x,y) \, B (\tau,y) \, w_h (\tau,y) \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} \\ & + \int_0^t\int_\Omega \nabla_y G (t,\tau,y,x) \, b_h(\tau,y) \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} \\ = \ & - \int_\Omega G (t,0,y,x) \, \nabla w_o^h(y) \mathinner{\mathrm{d}{y}} + \int_0^t\int_\Omega \nabla_x G (t,\tau,x,y) \, B (\tau,y) \, w_h (\tau,y) \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} \\ & - \int_0^t\int_\Omega G (t,\tau,y,x) \, \nabla b_h(\tau,y) \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}}.\end{aligned}$$ Pass now to the $\mathbf{L}^1$-norm, exploiting [\[item:K4\]](#item:K4){reference-type="ref" reference="item:K4"} in  and [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}: $$\begin{aligned} & {\left\|\nabla w_h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \le \ & {\left\|\nabla w_o^h\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + \int_0^t {\left\|\nabla b_h (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}\mathinner{\mathrm{d}{\tau}} \\ & + \int_0^t \frac{c}{(t-\tau)^{(n+1)/2}} {\left\|w_h(\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} {\left\|B(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \int_\Omega
\exp\left(-C{\left\|x-y\right\|}^2\middle/ (t-\tau)\right)\mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} \\ \le \ & {\left\|\nabla w_o^h\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + \int_0^t {\left\|\nabla b_h (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}\mathinner{\mathrm{d}{\tau}} \\ & + \mathcal{O} (1) \sqrt{t} \left({\left\|w_o^h\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|b_h\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega; \mathbb{R})} \right) {\left\|B\right\|}_{\mathbf{L}^\infty ([0,t]\times\Omega; \mathbb{R})} \exp\int_0^t {\left\|B (\tau)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})}\mathinner{\mathrm{d}{\tau}}.\end{aligned}$$ By the lower semicontinuity of the total variation and the hypotheses on the regularizing sequences $w_o^h$ and $b_h$, passing to the limit $h \to +\infty$ yields [\[eq:37\]](#eq:37){reference-type="eqref" reference="eq:37"}, proving [\[it:P_tv\]](#it:P_tv){reference-type="ref" reference="it:P_tv"}. ## Hyperbolic Estimates {#sec:hyperbolic} Fix $T >0$. This paragraph is devoted to the IBVP $$\label{eq:12} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t u + \mathinner{\nabla\cdot}\left(c (t,x) \, u\right) = A (t,x) \, u + a (t,x) & (t,x) & \in & [0,T] \times \Omega \\ u (t,\xi) = 0 & (t,\xi) & \in & \mathopen]0,T] \times \partial \Omega \\ u (0,x) = u_o (x) & x & \in & \Omega \,. \end{array} \right.$$ We assume throughout the following conditions: 1. [\[item:H0\]]{#item:H0 label="item:H0"} $u_o \in \left(\mathbf{L}^\infty \cap \mathbf{BV}\right) (\Omega;{\mathbb{R}})$ 2. [\[it:H1\]]{#it:H1 label="it:H1"} $c \in \left(\mathbf{C}^{0} \cap \mathbf{L}^\infty\right) ([0,T] \times \Omega;{\mathbb{R}}^n)$, $c(t) \in \mathbf{C}^{1} (\Omega; \mathbb{R}^n)$ for all $t \in [0,T]$, $D_x c \in \mathbf{L}^\infty ([0,T] \times \Omega; \mathbb{R}^{n\times n})$. 3. 
[\[it:H2\]]{#it:H2 label="it:H2"} $A \in \mathbf{L}^\infty ([0,T] \times \Omega;{\mathbb{R}})$ and for all $t \in [0,T]$, $A (t) \in \mathbf{BV}(\Omega;{\mathbb{R}})$. 4. [\[it:H3\]]{#it:H3 label="it:H3"} $a \in \mathbf{L}^1 \left([0,T]; \mathbf{L}^\infty(\Omega; \mathbb{R})\right)$ and for all $t \in [0,T]$, $a (t) \in \mathbf{BV}(\Omega;{\mathbb{R}})$. Note that [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} ensures that $c (t) \in \mathbf{C}^{0,1} (\overline{\Omega};{\mathbb{R}}^n)$ for any $t \in[0,T]$. For $(t_o,x_o) \in [0,T] \times \overline{\Omega}$ introduce the map $$\label{eq:caratt} \begin{array}{lccl} X(\,\cdot\,; t_o,x_o) : & I(t_o,x_o) & \to & \overline{\Omega} \\ & t & \mapsto & X(t; t_o,x_o) \end{array} \quad \text{ solving } \quad \begin{cases} \dot x = c(t,x), \\ x(t_o) = x_o, \end{cases}$$ $I(t_o,x_o)$ being the maximal interval where a solution to the Cauchy problem in [\[eq:caratt\]](#eq:caratt){reference-type="eqref" reference="eq:caratt"} is defined (with values in $\overline{\Omega}$). For $t \in [0,T]$ and for $x \in \Omega$ define $$\label{eq:13} \mathcal{E} (\tau,t,x) = \exp \left( \int_\tau^t\left( A \left(s, X(s;t,x)\right) - \mathinner{\nabla\cdot}c\left(s, X (s;t,x)\right) \right)\mathinner{\mathrm{d}{s}} \right)$$ and for all $(t,x) \in \mathopen]0,T] \times \Omega$, if $x \in X(t; [0,t\mathclose[, \partial\Omega) \cap \Omega$, set $$\label{eq:14} T(t,x) = \inf\{s \in [0,t] \colon X(s;t,x) \in \Omega\},$$ which is well defined by [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} and the Cauchy–Lipschitz theorem.
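Purely as a numerical illustration (not part of the arguments in this paper), the characteristic map $X(\,\cdot\,;t_o,x_o)$ in [\[eq:caratt\]](#eq:caratt){reference-type="eqref" reference="eq:caratt"} and the entry time $T(t,x)$ in [\[eq:14\]](#eq:14){reference-type="eqref" reference="eq:14"} can be approximated by a standard ODE integrator combined with bisection. The sketch below assumes a hypothetical one-dimensional field $c(t,x)=1+x$ on $\Omega=\mathopen]0,1[$, for which both quantities are explicit, namely $X(t;t_o,x_o)=(1+x_o)e^{t-t_o}-1$ and $T(t,x)=t-\log(1+x)$ whenever the latter is nonnegative, so that they serve as a cross-check.

```python
import math

# Hypothetical 1-d example: c(t,x) = 1 + x on Omega = ]0,1[, so that
# X(t; t_o, x_o) = (1 + x_o) * exp(t - t_o) - 1 in closed form.

def flow(t, t_o, x_o, steps=2_000):
    """Approximate X(t; t_o, x_o), i.e. integrate dx/ds = c(s,x) = 1 + x
    from s = t_o to s = t with classical RK4 (the step h may be negative)."""
    h = (t - t_o) / steps
    x = x_o
    for _ in range(steps):
        k1 = 1 + x
        k2 = 1 + (x + 0.5 * h * k1)
        k3 = 1 + (x + 0.5 * h * k2)
        k4 = 1 + (x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def entry_time(t, x, tol=1e-10):
    """Approximate T(t,x) = inf{s in [0,t] : X(s;t,x) in Omega} by bisection:
    with c = 1 + x the backward characteristic is increasing in s and can
    only enter Omega = ]0,1[ through the boundary point xi = 0."""
    if flow(0.0, t, x) > 0.0:
        return 0.0  # characteristic stems from the initial datum; T plays no role
    lo, hi = 0.0, t  # invariant: X(lo;t,x) <= 0 < X(hi;t,x) = x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flow(mid, t, x) <= 0.0:
            lo = mid
        else:
            hi = mid
    return hi

# cross-check against the closed-form expressions above
assert abs(flow(1.0, 0.0, 0.2) - ((1 + 0.2) * math.e - 1)) < 1e-9
assert abs(entry_time(2.0, 0.5) - (2.0 - math.log(1.5))) < 1e-6
```

Here RK4 and bisection are generic choices; any convergent integrator and root finder would do, since this $c$ is smooth and the backward characteristic crosses $\partial\Omega$ transversally.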
Note that the well-posedness of the Cauchy problem [\[eq:caratt\]](#eq:caratt){reference-type="eqref" reference="eq:caratt"}, ensured by [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"}, implies that for all $t \in \mathopen]0, T]$ $$\label{eq:38} \Omega \subseteq X (t;0,\Omega) \cup X (t; [0,t\mathclose[, \partial\Omega) \subseteq \overline{\Omega} \quad \mbox{ and } \quad X (t;0,\Omega) \cap X (t; [0,t\mathclose[, \partial\Omega) = \emptyset \,.$$ As is well known, integrating [\[eq:12\]](#eq:12){reference-type="eqref" reference="eq:12"} along characteristics leads, for $(t,x) \in [0,T] \times \Omega$, to $$\label{eq:9} u(t,x) = \begin{cases} \displaystyle u_o\left(X (0;t,x)\right) \mathcal{E} (0,t,x) {+} \int_0^ta\left(\tau,X(\tau;t,x)\right)\mathcal{E} (\tau,t,x) \mathinner{\mathrm{d}{\tau }}& x \in X(t;0,\Omega) \\ \displaystyle \int_{T(t,x)}^t a\left(\tau,X(\tau;t,x)\right)\mathcal{E} (\tau,t,x) \mathinner{\mathrm{d}{\tau }}& x \in X(t;[0,t\mathclose[,\partial\Omega) \end{cases}$$ The following relation will be of use below, see for instance [@bressan-piccoli Chapter 3] for a proof: $$\begin{aligned} \label{eq:pxoX} D_{x_o} X (t; t_o,x_o) = \ & M(t), \text{ where } M \text{ solves } \left\{ \begin{array}{l} \dot M = D_x c\left(t,X (t;t_o,x_o)\right) M \\ M(t_o) = \mathinner{\mathrm{Id}}. \end{array} \right.\end{aligned}$$ We first particularize classical estimates to the present case. **Lemma 8** ([@teo Lemma 4.2]). *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"} and [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} hold.* 1. *[\[item:3\]]{#item:3 label="item:3"} Assume $u_o \in \mathbf{L}^1 (\Omega;{\mathbb{R}})$, $A \in \mathbf{L}^\infty ([0,T]\times\Omega; {\mathbb{R}})$ and $a \in \mathbf{L}^1 ([0,T]\times\Omega; {\mathbb{R}})$.
Then, the map $u$ defined in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} satisfies for all $t \in [0,T]$ $${\left\|u (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq \left( {\left\|u_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega; {\mathbb{R}})} \right) \, \exp\left({\left\|A\right\|}_{\mathbf{L}^\infty ([0,t]\times\Omega;{\mathbb{R}})} \, t\right) .$$* 2. *[\[item:4\]]{#item:4 label="item:4"} Assume $u_o \in \mathbf{L}^\infty (\Omega;{\mathbb{R}})$, $A \in \mathbf{L}^1 ([0,T];\mathbf{L}^\infty(\Omega;{\mathbb{R}}))$ and $a \in \mathbf{L}^1 \left([0,T]; \mathbf{L}^\infty(\Omega; {\mathbb{R}})\right)$. Then, the map $u$ defined in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} satisfies for all $t \in [0,T]$ $$\begin{aligned} {\left\|u (t)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \le \ & \left( {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t];\mathbf{L}^\infty(\Omega; {\mathbb{R}}))} \right) \, \\ & \times \exp\left( {\left\|A\right\|}_{\mathbf{L}^1 ([0,t];\mathbf{L}^\infty(\Omega;{\mathbb{R}}))} + {\left\|\mathinner{\nabla\cdot}c\right\|}_{\mathbf{L}^1 ([0,t];\mathbf{L}^\infty (\Omega;{\mathbb{R}}))} \right) . \end{aligned}$$* **Lemma 9**. *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"} and [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} hold. Assume $u_o \in \mathbf{L}^\infty (\Omega;{\mathbb{R}})$ and $a \in \mathbf{L}^1 \left([0,T]; \mathbf{L}^\infty(\Omega; {\mathbb{R}})\right)$. Fix $A_1,A_2 \in \mathbf{L}^\infty ([0,T]\times\Omega;{\mathbb{R}})$. 
Then, the maps $u_1,u_2$ defined in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} satisfy for all $t \in [0,T]$ $$\begin{aligned} {\left\|u_2 (t) - u_1 (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \le \ & \exp \left( t \max \left\{ {\left\|A_1\right\|}_{\mathbf{L}^\infty ([0,t]\times\Omega;{\mathbb{R}})} \,,\; {\left\|A_2\right\|}_{\mathbf{L}^\infty ([0,t]\times\Omega;{\mathbb{R}})} \right\}\right) \\ & \times \left( {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty(\Omega;{\mathbb{R}}))} \right) {\left\|A_2-A_1\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;{\mathbb{R}})} \,. \end{aligned}$$* The proof is a straightforward adaptation from [@teo Lemma 4.3]. The $\mathinner{\rm TV}$ bound obtained in the next lemma will be crucial in the sequel. **Lemma 10**. *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"}--[\[item:H0\]](#item:H0){reference-type="ref" reference="item:H0"}--[\[it:H3\]](#it:H3){reference-type="ref" reference="it:H3"} hold. Assume, moreover, that $A \in \mathbf{L}^1 ([0,T]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}))$ and for all $t \in [0,T]$, $A(t) \in \mathbf{BV}(\Omega;{\mathbb{R}})$. Let $c$ satisfy [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} and, moreover, $c (t) \in \mathbf{C}^{2} (\Omega;{\mathbb{R}}^n)$ for all $t \in [0,T]$ and $\mathinner{\nabla}\mathinner{\nabla\cdot}c \in \mathbf{L}^1 ([0,T]\times \Omega; \mathbb{R}^n)$. Then, the map $u$ defined in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} satisfies for all $t \in [0,T]$. 
$$\begin{aligned} & \mathinner{\rm TV}(u (t);\Omega) \\ \le \ & \exp\left( {\left\|A\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}))} + {\left\|D_x c\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \right) \\ & \times \Biggl( \mathinner{\rm TV}(u_o) + \mathcal{O}(1) {\left\|u_o\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})} + \int_0^t \mathinner{\rm TV}\left( a (\tau)\right) \mathinner{\mathrm{d}{\tau}} \\ & \qquad + \left({\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty(\Omega; \mathbb{R}))}\right) \int_0^t \left(\mathinner{\rm TV}\left(A(\tau)\right) + {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c(\tau) \right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)}\right) \mathinner{\mathrm{d}{\tau}} \Biggr). \end{aligned}$$* ***Proof.*** The proof extends that of [@siam2018 Lemma 4.4], where a linear conservation law, i.e., with no source term, on a bounded domain is considered. We first regularize the initial datum $u_o$ and the functions $A$ and $a$ appearing in the source term. In particular, we use the approximation of the initial datum constructed in [@siam2018 Lemma 4.3], yielding a sequence $u_o^h \in \mathbf{C}^{3} (\Omega; \mathbb{R})$ such that $$\label{eq:27} \begin{aligned} \lim_{h \to +\infty} {\left\|u_o^h - u_o\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} = \ & 0, & u_o^h (\xi) = \ & 0 \text{ for all } \xi \in \partial\Omega, \\ {\left\|u_o^h\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \le \ & {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})}, & \mathinner{\rm TV}(u_o^h) \le \ & \mathcal{O}(1) \, {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + \mathinner{\rm TV}(u_o) \,. \end{aligned}$$ Then, using [@Giusti Formula (1.8) and Theorem 1.17], we regularize the functions $A$ and $a$ as follows. 
For all $t \in [0,T]$ and $h\in \mathbb{N}\setminus \{0\}$, there exist sequences $A_h(t), a_h(t) \in \mathbf{C}^{\infty }(\Omega; \mathbb{R})$ such that, for all $t \in [0,T]$, $$\label{eq:29} \begin{aligned} \lim_{h\to+\infty} & {\left\|A_h (t) - A(t)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} = \ 0, & \lim_{h\to+\infty} &{\left\|a_h (t) - a(t)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} = \ 0, \\ &{\left\|A_h (t)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \le \ {\left\|A(t)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})}, && {\left\|a_h (t)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \le \ {\left\|a(t)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})}, \\ \lim_{h\to+\infty} & \mathinner{\rm TV}\left(A_h(t)\right) = \ \mathinner{\rm TV}\left(A(t) \right), & \lim_{h\to+\infty} & \mathinner{\rm TV}\left(a_h(t)\right) = \ \mathinner{\rm TV}\left(a(t) \right). \end{aligned}$$ According to [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}, define the sequence $u_h$ corresponding to the sequences $u_o^h$, $A_h$, $a_h$, where the map $\mathcal{E}$ in [\[eq:13\]](#eq:13){reference-type="eqref" reference="eq:13"} is substituted by $\mathcal{E}_h$, defined accordingly exploiting $A_h$. By construction, $u_h (t)\in \mathbf{C}^{1} (\Omega; \mathbb{R})$ for all $t \in [0,T]$, thus we can differentiate it. In particular, we are interested in the $\mathbf{L}^1$--norm of $\nabla u_h(t)$. By [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"}, the following decomposition holds: $$\label{eq:17} {\left\|\nabla u_h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} = {\left\|\nabla u_h(t)\right\|}_{\mathbf{L}^1 (X(t; 0, \Omega); \mathbb{R})} + {\left\|\nabla u_h(t)\right\|}_{\mathbf{L}^1 (X(t; [0,t\mathclose[, \partial\Omega); \mathbb{R})}.$$ The two terms on the right hand side of [\[eq:17\]](#eq:17){reference-type="eqref" reference="eq:17"} are treated separately. 
Focus on the first term: if $x \in X(t;0,\Omega)$, by [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} $$\begin{aligned} & \mathinner{\nabla}u_h (t,x) \\= \ & \mathcal{E}_h (0,t,x) \Bigl( \mathinner{\nabla}u_o^h\left(X(0;t,x)\right) D_x X (0;t,x) \\ & + u_o^h\left( X (0;t,x)\right) \int_0^t \left(\mathinner{\nabla}A_h\left(s, X (s;t,x)\right) - \mathinner{\nabla}\mathinner{\nabla\cdot}c\left(s, X (s;t,x)\right)\right) D_x X (s;t,x) \mathinner{\mathrm{d}{s}} \Bigr) \\ & + \int_0^t \mathcal{E}_h(\tau,t,x) \Bigl( \mathinner{\nabla}a_h\left(\tau,X(\tau;t,x)\right) D_x X (\tau;t,x) \\ & + a_h\left(\tau,X (\tau;t,x)\right) \int_\tau^t \left(\mathinner{\nabla}A_h\left(s, X (s;t,x)\right) - \mathinner{\nabla}\mathinner{\nabla\cdot}c\left(s, X (s;t,x)\right)\right) D_x X (s;t,x) \mathinner{\mathrm{d}{s}} \Bigr) \mathinner{\mathrm{d}{\tau }}\,. \end{aligned}$$ Use the change of variables $y = X(0; t,x)$ in the first two lines above involving $u_o^h$, the change of variables $y = X(\tau; t,x)$ in the latter two lines and the bound $$\label{eq:36} {\left\|D_x X (\tau;t,x)\right\|} \le \exp \left(\int_\tau^t {\left\|D_x c(s)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n})} \mathinner{\mathrm{d}{s}}\right),$$ that holds for every $t \in [0,T]$ by [\[eq:pxoX\]](#eq:pxoX){reference-type="eqref" reference="eq:pxoX"}. 
We thus obtain $$\begin{aligned} \nonumber & {\left\|\mathinner{\nabla}u_h (t)\right\|}_{\mathbf{L}^1(X(t; 0, \Omega);\mathbb{R}^n)} = \ \int_{X(t; 0, \Omega)} {\left\|\mathinner{\nabla}u_h (t,x)\right\|}\mathinner{\mathrm{d}{x}} \\ \label{eq:tv1} \le \ & \exp\left( \int_0^t \left({\left\|A_h(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|D_x c(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n})}\right) \mathinner{\mathrm{d}{\tau}} \right) \\\nonumber & \times \Biggl( {\left\|\mathinner{\nabla}u_o^h\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R}^n)} + {\left\|u_o^h\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \int_0^t \left({\left\|\mathinner{\nabla}A_h(\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)} + {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c(\tau) \right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)}\right) \mathinner{\mathrm{d}{\tau}} \\\nonumber & \qquad + \int_0^t {\left\|\mathinner{\nabla}a_h (\tau)\right\|}_{\mathbf{L}^1 (X(\tau; 0, \Omega); \mathbb{R}^n)} \mathinner{\mathrm{d}{\tau}} + \int_0^t {\left\| a_h (\tau)\right\|}_{\mathbf{L}^\infty (X(\tau; 0, \Omega); \mathbb{R})} \\ \nonumber & \qquad \qquad \qquad \times \left(\int_\tau^t \left({\left\|\mathinner{\nabla}A_h(s)\right\|}_{\mathbf{L}^1 (X(s; 0, \Omega); \mathbb{R}^n)} + {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c(s) \right\|}_{\mathbf{L}^1 (X(s; 0, \Omega); \mathbb{R}^n)}\right)\mathinner{\mathrm{d}{s}}\right) \mathinner{\mathrm{d}{\tau}} \Biggr). 
\end{aligned}$$ Pass to the second term on the right in [\[eq:17\]](#eq:17){reference-type="eqref" reference="eq:17"}: if $x \in X (t; [0,t\mathclose[, \partial \Omega)$, by [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} and [\[eq:36\]](#eq:36){reference-type="eqref" reference="eq:36"} $$\begin{aligned} & \mathinner{\nabla}u_h (t,x) \\ = \ & \int_{T(t,x)}^t \mathcal{E}_h(\tau,t,x) \Bigl( \mathinner{\nabla}a_h\left(\tau,X(\tau;t,x)\right) D_x X (\tau;t,x) \\ & + a_h\left(\tau,X (\tau;t,x)\right) \int_\tau^t \left(\mathinner{\nabla}A_h\left(s, X (s;t,x)\right) - \mathinner{\nabla}\mathinner{\nabla\cdot}c\left(s, X (s;t,x)\right)\right) D_x X (s;t,x) \mathinner{\mathrm{d}{s}} \Bigr) \mathinner{\mathrm{d}{\tau}}. \end{aligned}$$ For every $t \in [0,T]$, proceed as above using the change of variables $y=X(\tau;t,x)$: $$\begin{aligned} \nonumber & {\left\|\mathinner{\nabla}u_h (t)\right\|}_{\mathbf{L}^1( X (t; [0,t\mathclose[, \partial \Omega);\mathbb{R}^n)} = \int_{\Omega \setminus X(t; 0,\Omega)} {\left\|\mathinner{\nabla}u_h (t,x)\right\|}\mathinner{\mathrm{d}{x}} \\ \label{eq:tv2} \le\ & \exp\left( \int_0^t \left({\left\|A_h(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|D_x c(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n})}\right) \mathinner{\mathrm{d}{\tau}} \right) \\\nonumber & \times \Biggl( \int_0^t \int_{\Omega \setminus X (\tau;0,\Omega)} {\left\|\mathinner{\nabla}a_h (\tau, y)\right\|}\mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{\tau}} + \int_0^t {\left\| a_h (\tau)\right\|}_{\mathbf{L}^\infty (\Omega \setminus X (\tau;0,\Omega); \mathbb{R})} \mathinner{\mathrm{d}{\tau}} \\ \nonumber & \qquad\times \left( {\left\|\mathinner{\nabla}A_h\right\|}_{\mathbf{L}^1 (\Omega \setminus X ([0,t];0,\Omega); \mathbb{R}^n)} + {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c\right\|}_{\mathbf{L}^1 (\Omega \setminus X ([0,t];0,\Omega); \mathbb{R}^n)} \right) \Biggr).
\end{aligned}$$ Inserting the estimates [\[eq:tv1\]](#eq:tv1){reference-type="eqref" reference="eq:tv1"} and [\[eq:tv2\]](#eq:tv2){reference-type="eqref" reference="eq:tv2"} into [\[eq:17\]](#eq:17){reference-type="eqref" reference="eq:17"}, we thus obtain $$\begin{aligned} & {\left\|\mathinner{\nabla}u_h(t)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R}^n)} \\ \le \ & \exp \left( {\left\|A_h\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}))} + {\left\|D_x c\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \right) \\ & \times \Biggl( {\left\|\mathinner{\nabla}u_o^h\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R}^n)} + \int_0^t {\left\|\mathinner{\nabla}a_h (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{\tau}} \\ & \quad + \left(\!{\left\|u_o^h\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \!\! +\! \int_0^t {\left\| a_h (\tau)\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{\tau}}\!\right) \!\! \left({\left\|\mathinner{\nabla}A_h\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R}^n)} \!+\! {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c \right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R}^n)}\right) \!\!\Biggr). \end{aligned}$$ Since $u_h(t) \to u(t)$ in $\mathbf{L}^1 (\Omega;\mathbb{R})$, by the lower semicontinuity of the total variation and the hypotheses [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"}--[\[eq:29\]](#eq:29){reference-type="eqref" reference="eq:29"} on the regularizing sequences $u_o^h$, $A_h$ and $a_h$, passing to the limit $h \to +\infty$, we complete the proof. It is on the basis of the next proposition that we give a definition of solution to [\[eq:12\]](#eq:12){reference-type="eqref" reference="eq:12"}. **Proposition 11**. *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"} and [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} hold.
Assume $u_o \in \mathbf{L}^\infty (\Omega;{\mathbb{R}})$, $A \in \mathbf{L}^\infty ([0,T] \times \Omega; \mathbb{R})$ and $a \in \mathbf{L}^1 \left([0,T]; \mathbf{L}^\infty (\Omega; \mathbb{R})\right)$. Then, the following statements are equivalent:* 1. *[\[it:1\]]{#it:1 label="it:1"} $u$ is defined by [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}, i.e., through integration along characteristics.* 2. *[\[it:2\]]{#it:2 label="it:2"} $u \in \mathbf{L}^\infty ([0,T] \times \Omega; \mathbb{R})$ is such that for any test function $\varphi\in \mathbf{C}_c^{1} (\mathopen] -\infty, T\mathclose[ \times \Omega; \mathbb{R})$, $$\label{eq:41} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \begin{array}{@{}r@{}} \displaystyle \int_0^T \int_\Omega \left(u (t,x) \left(\partial_t \varphi(t,x) + c (t,x) \cdot \nabla \varphi(t,x) \right) + \left(A (t,x) \, u(t,x) + a(t,x) \right)\varphi(t,x) \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} \\ \displaystyle + \int_\Omega u_o (x) \, \varphi(0,x) \mathinner{\mathrm{d}{x}} = 0. \end{array}$$* 3. *[\[it:5\]]{#it:5 label="it:5"} $u \in \mathbf{L}^\infty ([0,T] \times \Omega; \mathbb{R})$ is such that for any test function $\varphi\in {\mathbf{W}^{1,\infty }}(\mathopen] -\infty, T\mathclose[ \times \Omega; \mathbb{R})$, equality [\[eq:41\]](#eq:41){reference-type="eqref" reference="eq:41"} holds.* ***Proof.*** #### [\[it:1\]](#it:1){reference-type="ref" reference="it:1"}$\implies$[\[it:2\]](#it:2){reference-type="ref" reference="it:2"} The proof exploits arguments similar to [@CHM2011 Lemma 5.1], see also [@parahyp Lemma 2.7]. Indeed, $u$ defined as in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} is bounded by Item [\[item:4\]](#item:4){reference-type="ref" reference="item:4"} in Lemma [Lemma 8](#lem:L1){reference-type="ref" reference="lem:L1"}. Let $\varphi\in \mathbf{C}_c^{1} (\mathopen]-\infty, T\mathclose[ \times \Omega; \mathbb{R})$. 
We prove that the equality [\[eq:41\]](#eq:41){reference-type="eqref" reference="eq:41"} holds with $u$ defined as in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}. Notice that, for a fixed time $t \in [0,T]$, by [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"} the domain $\Omega$ is contained in the disjoint union of $X(t; 0, \Omega)$ and $X(t; [0,t\mathclose[ , \partial\Omega)$. The first set accounts for the characteristics emanating from the initial datum, the second one for those coming from the boundary. Therefore, to prove that the integral equality [\[eq:41\]](#eq:41){reference-type="eqref" reference="eq:41"} holds it is sufficient to verify that both the following integral equalities hold: $$\begin{aligned} \label{eq:datoIn} & \int_0^T \int_{X (t; 0, \Omega)} \left(u \left( \partial_t \varphi+ c \cdot \nabla \varphi+ A \, \varphi\right) + a \, \varphi \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} + \int_\Omega u_o (x) \, \varphi(0,x) \mathinner{\mathrm{d}{x}} = 0, \\ \label{eq:bordo} & \int_0^T \int_{X(t; [0,t\mathclose[, \partial\Omega)} \left(u \left(\partial_t \varphi+ c \cdot \nabla \varphi+ A \, \varphi\right) + a \, \varphi \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} = 0. 
\end{aligned}$$ In order to prove [\[eq:datoIn\]](#eq:datoIn){reference-type="eqref" reference="eq:datoIn"}, exploiting the change of variables $y = X(0;t,x)$, the first line in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} can be rewritten for $x \in X (t,0,\Omega)$ as $$u(t,x) = \left(u_o(y) + \mathcal{A}(t,y)\right) \frac{\mathscr{A}(t,y)}{J(t,y)} \quad \mbox{ where } \quad y = X(0;t,x)$$ with $$\begin{aligned} \mathscr{A} (t,y) = \ & \exp\left(\int_0^t A\left(\tau, X (\tau; 0,y) \right)\mathinner{\mathrm{d}{\tau}}\right), & J (t,y) = \ & \exp\left(\int_0^t \mathinner{\nabla\cdot}c \left(\tau, X (\tau; 0,y) \right)\mathinner{\mathrm{d}{\tau}}\right), \\ \mathcal{A}(t,y) = \ & \int_0^t a\left(\tau,X (\tau;0,y)\right) \frac{J(\tau,y)}{\mathscr{A} (\tau,y)}\mathinner{\mathrm{d}{\tau}}. \end{aligned}$$ Therefore, the left hand side of [\[eq:datoIn\]](#eq:datoIn){reference-type="eqref" reference="eq:datoIn"} now reads $$\begin{aligned} \int_0^T\int_\Omega & \bigl[ \left(u_o(y) + \mathcal{A}(t,y)\right) \frac{\mathscr{A}(t,y)}{J(t,y)} \left( \partial_t \varphi\left(t,X(t;0,y)\right) \right. \\ & \quad \left. 
+ c \left(t,X(t;0,y)\right) \cdot \nabla \varphi\left(t,X(t;0,y)\right) + A \left(t,X(t;0,y)\right) \varphi\left(t,X(t;0,y)\right) \right) J(t,y) \\ & + a\left(t,X(t;0,y)\right) \varphi\left(t,X(t;0,y)\right) J(t,y) \bigr] \mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{t}} + \int_\Omega u_o(x) \varphi(0,x) \mathinner{\mathrm{d}{x}} \\ = \ & \int_0^T\int_\Omega \frac{\mathinner{\mathrm{d}{}}}{\mathinner{\mathrm{d}{t}}} \left[ \left( u_o(y) + \mathcal{A}(t,y) \right) \mathscr{A}(t,y) \, \varphi\left(t,X(t;0,y)\right) \right]\mathinner{\mathrm{d}{y}}\mathinner{\mathrm{d}{t}} + \int_\Omega u_o(x) \varphi(0,x) \mathinner{\mathrm{d}{x}} \\ =\ & - \int_\Omega u_o(y) \varphi(0,y) \mathinner{\mathrm{d}{y}}+ \int_\Omega u_o(x) \varphi(0,x) \mathinner{\mathrm{d}{x}} - \int_\Omega \mathcal{A} (0,y) \mathscr{A}(0,y) \varphi(0,y) \mathinner{\mathrm{d}{y}} \\ = \ & 0, \end{aligned}$$ since, for all $y \in \Omega$, $\varphi(T,y) = 0$ and, by definition, $\mathcal{A}(0,y) = 0$. Pass now to [\[eq:bordo\]](#eq:bordo){reference-type="eqref" reference="eq:bordo"}. Here, for all $t \in [0,T]$, we use the change of variables $$\begin{array}{ccc} \Omega_{\tau,x}^t & \to & \Omega_{\sigma,y}^t \\ (\tau,x) & \mapsto & (\sigma,y) \end{array} \quad \mbox{ where }\quad \begin{array}{rcl} \Omega_{\tau,x}^t & = & \left\{ (\tau,x) \colon \tau \in [T (t,x), t] \mbox{ and } x \in X (t; [0,t\mathclose[,\partial\Omega) \right\} \\ \Omega_{\sigma,y}^t & = & \left\{ (\sigma,y) \colon \sigma \in [0, t] \mbox{ and } y \in X (\sigma; [0,\sigma\mathclose[,\partial\Omega) \right\} \\ \sigma & = & \tau \\ y & = & X (\tau;t,x) \end{array}$$ whose Jacobian, which also depends on $t$, is $\left.
H (t,\sigma,y) \middle/ H (\sigma,\sigma,y) \right.$ where we set $$H (t,\sigma,y) = \exp \int_0^t \mathinner{\nabla\cdot}c \left(s,X (s;\sigma,y)\right) \mathinner{\mathrm{d}{s}} \quad \mbox{ and }\quad \widehat{A} (t,\sigma,y) = \exp \int_0^t A\left(s, X (s; \sigma,y) \right)\mathinner{\mathrm{d}{s}} \,.$$ Using [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}, we compute now the left hand side in [\[eq:bordo\]](#eq:bordo){reference-type="eqref" reference="eq:bordo"} as follows: $$\begin{aligned} &\int_0^T \int_{X(t; [0,t\mathclose[, \partial\Omega)} u \left(\partial_t \varphi+ c \cdot \nabla \varphi+ A \, \varphi\right) (t,x) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} + \int_0^T \int_{X(t; [0,t\mathclose[, \partial\Omega)} a \, \varphi \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} \\ = \ & \int_0^T \int_{X(t; [0,t\mathclose[, \partial\Omega)} \int_{T(t,x)}^t a\left(\tau,X(\tau;t,x)\right)\mathcal{E} (\tau,t,x) \mathinner{\mathrm{d}{\tau}} \left(\partial_t \varphi+ c \cdot \nabla \varphi+ A \, \varphi\right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} \\ & \qquad + \int_0^T \int_{X(t; [0,t\mathclose[, \partial\Omega)} a (t,x) \, \varphi(t,x) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} \\ = \ & \int_0^T \int_0^t \int_{X (\sigma,[0,\sigma\mathclose[,\partial\Omega)} a (\sigma,y) \, \dfrac{\widehat{A} (t,\sigma,y)}{\widehat{A} (\sigma,\sigma,y)} \\ & \qquad \qquad \qquad \times \left( \frac{\mathinner{\mathrm{d}{\varphi\left(t, X (t;\sigma,y) \right)}}}{\mathinner{\mathrm{d}{t}}} + A\left(t, X (t;\sigma,y)\right) \, \varphi\left(t, X (t;\sigma,y)\right) \right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\sigma}} \mathinner{\mathrm{d}{t}} \\ & \qquad + \int_0^T \int_{X(t; [0,t\mathclose[, \partial\Omega)} a (t,x) \, \varphi(t,x) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} \\ = \ & \int_0^T \dfrac{\mathinner{\mathrm{d}{}}}{\mathinner{\mathrm{d}{t}}} \left( \int_0^t \int_{X (\sigma,[0,\sigma\mathclose[,\partial\Omega)} a (\sigma,y) \,
\dfrac{\widehat{A} (t,\sigma,y)}{\widehat{A} (\sigma,\sigma,y)} \, \varphi\left(t, X (t;\sigma,y) \right) \mathinner{\mathrm{d}{y}} \mathinner{\mathrm{d}{\sigma}}\right) \mathinner{\mathrm{d}{t}} \\ = \ & 0, \end{aligned}$$ since $\varphi(T, \cdot) \equiv 0$. #### [\[it:2\]](#it:2){reference-type="ref" reference="it:2"}$\implies$[\[it:5\]](#it:5){reference-type="ref" reference="it:5"} Fix $\varphi\in {\mathbf{W}^{1,\infty }}(\mathopen]-\infty,T \mathclose[ \times \Omega; {\mathbb{R}})$. A standard construction, see [@Giusti § 1.14], ensures the existence of a sequence of functions $\varphi_h \in \mathbf{C}_c^{\infty}({\mathbb{R}}^{n+1}; {\mathbb{R}}_+)$ such that $\varphi_h \underset{h\to+\infty}{\longrightarrow} \varphi$, $\partial_t \varphi_h \underset{h\to+\infty}{\longrightarrow} \partial_t \varphi$ and $\nabla_x \varphi_h \underset{h\to+\infty}{\longrightarrow} \nabla_x \varphi$ in ${\mathbf{L}_{\mathbf{loc}}^{1}} (\mathopen]-\infty,T \mathclose[ \times \Omega; {\mathbb{R}})$ and ${\mathbf{L}_{\mathbf{loc}}^{1}} (\mathopen]-\infty,T \mathclose[ \times \Omega; {\mathbb{R}}^n)$. Call $\chi_h$ a function in $\mathbf{C}_c^{\infty }({\mathbb{R}}^n; {\mathbb{R}})$ such that $\chi_h (x) = 1$ for all $x \in \Omega$ such that $B(x,1/h) \subseteq \Omega$ and ${\left\|\nabla_x \chi_h\right\|} \leq 2\sqrt{n}\, h$ for all $x \in {\mathbb{R}}^n$. Then, we have $\varphi_h \, \chi_h \in \mathbf{C}_c^{1} (\mathopen]-\infty, T \mathclose[ \times \Omega; {\mathbb{R}})$. Moreover, $\varphi_h \, \chi_h \underset{h\to+\infty}{\longrightarrow} \varphi$ and $\partial_t (\varphi_h \, \chi_h) \underset{h\to+\infty}{\longrightarrow} \partial_t \varphi$ in ${\mathbf{L}_{\mathbf{loc}}^{1}} (\mathopen]-\infty,T \mathclose[ \times \Omega; {\mathbb{R}})$. 
Concerning the space gradient, we have $$\nabla_x (\varphi_h \, \chi_h) = \nabla_x \varphi_h \; \chi_h + \varphi_h \; \nabla_x \chi_h \quad \mbox{ and } \begin{array}{r@{\,}c@{\,}l@{\qquad}l} \nabla_x \varphi_h \; \chi_h & \underset{h\to+\infty}{\longrightarrow} & \nabla_x \varphi & \mbox{in } {\mathbf{L}_{\mathbf{loc}}^{1}} (\mathopen]-\infty,T \mathclose[ \times \Omega; {\mathbb{R}}^n) \,; \\ \varphi_h \; \nabla_x \chi_h & \underset{h\to+\infty}{\longrightarrow} & 0 & \mbox{a.e. in } \mathopen]-\infty,T \mathclose[ \times \Omega \,. \end{array}$$ Therefore, for all $h$, by [\[it:2\]](#it:2){reference-type="ref" reference="it:2"} we have $$\begin{aligned} 0 = \ & \int_0^T \int_\Omega \left(u \left(\partial_t (\varphi_h \, \chi_h) + c \cdot \nabla (\varphi_h \, \chi_h) \right) + \left(A \, u(t,x) + a(t,x) \right)(\varphi_h \, \chi_h) \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} \\ & + \int_\Omega u_o (x) \, (\varphi_h \, \chi_h) (0,x) \mathinner{\mathrm{d}{x}}\end{aligned}$$ and, by the Dominated Convergence Theorem, [\[it:5\]](#it:5){reference-type="ref" reference="it:5"} follows. #### [\[it:5\]](#it:5){reference-type="ref" reference="it:5"}$\implies$[\[it:1\]](#it:1){reference-type="ref" reference="it:1"} Inspired by [@CHM2011 Lemma 5.1], we first consider the case $A \in (\mathbf{C}^{1} \cap {\mathbf{W}^{1,\infty}}) ([0,T]\times\Omega;{\mathbb{R}})$. Assume $u$ satisfies [\[it:5\]](#it:5){reference-type="ref" reference="it:5"} and call $u_*$ the function defined in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}.
Then, by the above implications [\[it:1\]](#it:1){reference-type="ref" reference="it:1"}$\Rightarrow$[\[it:2\]](#it:2){reference-type="ref" reference="it:2"}$\Rightarrow$[\[it:5\]](#it:5){reference-type="ref" reference="it:5"}, the difference $U = u-u_*$ satisfies for all test functions $\tilde\varphi\in {\mathbf{W}^{1,\infty }}(\mathopen]-\infty, T\mathclose[ \times \Omega;{\mathbb{R}})$ the integral equality $$\label{eq:25} \int_0^T \int_\Omega U \left(\partial_t \tilde\varphi+ c \cdot \nabla \tilde\varphi + A \, \tilde\varphi \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} = 0 \,.$$ Proceed now exactly as in [@CHM2011 Lemma 5.1], choosing $\tau \in \mathopen]0, T\mathclose]$, a sequence $\chi_h \in \mathbf{C}_c^{1} ({\mathbb{R}}; {\mathbb{R}}_+)$ with $\chi_h (t) = 1$ for all $t \in [1/h, \tau-1/h]$ and ${\left|\chi'_h\right|} \leq 2h$. If $\varphi\in {\mathbf{W}^{1,\infty }}(\mathopen]-\infty, T\mathclose[ \times \Omega;{\mathbb{R}})$ then, $(\varphi\, \chi_h) \in {\mathbf{W}^{1,\infty }}(\mathopen]-\infty, T\mathclose[ \times \Omega;{\mathbb{R}})$. Choosing $\varphi\, \chi_h$ as $\tilde \varphi$ in [\[eq:25\]](#eq:25){reference-type="eqref" reference="eq:25"}, and passing to the limit $h \to +\infty$ via the Dominated Convergence Theorem, we get $$\label{eq:26} \int_0^\tau \int_\Omega U \left( \partial_t \varphi+ c \cdot \nabla \varphi + A \, \varphi \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} - \int_\Omega U (\tau,x) \, \varphi(\tau,x) \mathinner{\mathrm{d}{x}} = 0 \,.$$ Fix an arbitrary $\eta \in \mathbf{C}_c^{1} (\Omega;{\mathbb{R}})$ and let $\varphi$ solve $$\left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t \varphi+ c \cdot \nabla \varphi+ A\, \varphi= 0 & (t,x) & \in & \Omega \\ \varphi(t,\xi) = 0 & (t,\xi) & \in & \partial\Omega \\ \varphi(\tau,x) = \eta (x) & (\tau,x) & \in & \Omega\,. 
\end{array} \right.$$ Note that $\varphi$ can be computed through integration along (backward) characteristics and hence $\varphi\in {\mathbf{W}^{1,\infty }}(\mathopen]-\infty, T\mathclose[ \times \Omega;{\mathbb{R}})$. With this choice, [\[eq:26\]](#eq:26){reference-type="eqref" reference="eq:26"} yields $\int_\Omega U (\tau,x) \, \eta (x) \mathinner{\mathrm{d}{x}} = 0$ for all $\eta \in \mathbf{C}_c^{1} (\Omega;{\mathbb{R}})$, so that $U (\tau,x) =0$ for a.e. $x \in \Omega$. By the arbitrariness of $\tau$, we have $U \equiv 0$, hence $u = u_*$. Let now $A \in \mathbf{L}^\infty ([0,T] \times \Omega;{\mathbb{R}})$, call $u_*$ the function constructed in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} and assume there is a function $u$ satisfying [\[it:5\]](#it:5){reference-type="ref" reference="it:5"}. Construct a sequence $A_h \in (\mathbf{C}^{1} \cap {\mathbf{W}^{1,\infty}}) ([0,T]\times\Omega;{\mathbb{R}})$ such that $A_h \underset{h\to+\infty}{\longrightarrow} A$ in $\mathbf{L}^1 ([0,T]\times\Omega;{\mathbb{R}})$. Call $u_h$ the function constructed as in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} with $A_h$ in place of $A$. For any $t \in [0,T]$, we have $u_h (t) \underset{h\to+\infty}{\longrightarrow} u_* (t)$ in $\mathbf{L}^1 (\Omega; {\mathbb{R}})$, by Lemma [Lemma 9](#lem:A){reference-type="ref" reference="lem:A"}.
Moreover, for all $\varphi\in {\mathbf{W}^{1,\infty }}(\mathopen]-\infty,T\mathclose[ \times \Omega;{\mathbb{R}})$, $$\begin{aligned} 0 = \ & \int_0^T \int_\Omega \left(u \left(\partial_t \varphi+ c \cdot \nabla \varphi\right) + \left(A \, u + a \right)\varphi \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} + \int_\Omega u_o (x) \, \varphi(0,x) \mathinner{\mathrm{d}{x}} \\ &- \int_0^T \int_\Omega \left(u_h \left(\partial_t \varphi+ c \cdot \nabla \varphi\right) + \left(A_h \, u_h + a \right)\varphi \right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} - \int_\Omega u_o (x) \, \varphi(0,x) \mathinner{\mathrm{d}{x}} \\ = \ & \int_0^T \int_\Omega (u - u_h) \left(\partial_t \varphi+ c \cdot \nabla \varphi+A_h \, \varphi\right) \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}} + \int_0^T \int_\Omega (A- A_h) \, u \, \varphi\mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{t}}.\end{aligned}$$ The latter summand vanishes since $A_h \to A$ in $\mathbf{L}^1$. The former summand, thanks to the regularity of $A_h$, can be treated by the procedure above, obtaining, for all $\eta \in \mathbf{C}_c^{1} (\Omega;{\mathbb{R}})$ and for a sequence of real numbers $\varepsilon_h$ converging to $0$, $$0 = \int_\Omega \left(u (\tau,x) - u_h (\tau,x) \right) \eta (x) \mathinner{\mathrm{d}{x}} + \varepsilon_h \,.$$ The above relation ensures that $u_h (\tau,x) \to u (\tau,x)$ for a.e. $x \in \Omega$ as $h \to +\infty$. Therefore, for all $t \in \mathopen]0,T\mathclose]$, $${\left\|u_* (t) - u (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq {\left\|u_* (t) - u_h (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|u_h (t) - u (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \underset{h\to+\infty}{\longrightarrow} 0 \,,$$ completing the proof. **Definition 12**. 
*A map $u \in \mathbf{L}^\infty ([0,T]\times\Omega; {\mathbb{R}})$ is a solution to [\[eq:12\]](#eq:12){reference-type="eqref" reference="eq:12"} if it satisfies any of the requirements [\[it:1\]](#it:1){reference-type="ref" reference="it:1"}, [\[it:2\]](#it:2){reference-type="ref" reference="it:2"} or [\[it:5\]](#it:5){reference-type="ref" reference="it:5"} in .* By techniques similar to those in [@parahyp], one can verify that a solution to [\[eq:12\]](#eq:12){reference-type="eqref" reference="eq:12"} in the sense of  is a *weak entropy solution* also in any of the senses [@MR2322018; @MR1884231], or [@MR542510] in the $\mathbf{BV}$ case, see [@MR3819847] for a comparison. Here, as is well known, the linearity of the convective part in [\[eq:12\]](#eq:12){reference-type="eqref" reference="eq:12"} allows one to avoid introducing any entropy condition, as also remarked in [@KPS2018bounded]. **Lemma 13**. *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"}--[\[item:H0\]](#item:H0){reference-type="ref" reference="item:H0"}--[\[it:H2\]](#it:H2){reference-type="ref" reference="it:H2"}--[\[it:H3\]](#it:H3){reference-type="ref" reference="it:H3"} hold. Fix $c_1, c_2$ satisfying [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} and moreover, for $i=1,2$, $c_i (t) \in \mathbf{C}^{2} (\Omega;{\mathbb{R}}^n)$ for all $t \in [0,T]$ and $\mathinner{\nabla}\mathinner{\nabla\cdot}c_i \in \mathbf{L}^1 ([0,T]\times \Omega; \mathbb{R}^n)$.
Then, the maps $u_1,u_2$ defined in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} satisfy $${\left\|u_2 (t) - u_1 (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \le \mathcal{O}(1) \left( {\left\|c_1 - c_2\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}^n))} + {\left\|\mathinner{\nabla\cdot}(c_1 - c_2)\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega;{\mathbb{R}}))} \right)$$ and a precise expression for the constant $\mathcal{O}(1)$ is provided in the proof.* ***Proof.*** Following the proof of , we first regularize the initial datum $u_o$ as in [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} and the functions $A$ and $a$ appearing in the source term as in [\[eq:29\]](#eq:29){reference-type="eqref" reference="eq:29"}, obtaining $u_1^h$ and $u_2^h$ by [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}. By , the difference $u_2^h - u_1^h$ solves $$\left\{ \begin{array}{l} \partial_t (u_2^h - u_1^h) + \mathinner{\nabla\cdot}\left(c_2 (u_2^h-u_1^h)\right) = A_h (u_2^h-u_1^h) + \alpha_h \\ (u_2^h - u_1^h) (0) = 0 \end{array} \right. \mbox{ where } \alpha_h = - \mathinner{\nabla\cdot}\left((c_2 - c_1) u_1^h\right)$$ in the sense of . Apply  in  to get $$\label{eq:20} {\left\|u_2^h(t) - u_1^h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \leq {\left\|\mathinner{\nabla\cdot}\left((c_2 - c_1) u_1^h\right)\right\|}_{\mathbf{L}^1 ([0,t]\times \Omega; \mathbb{R})} \exp\left({\left\|A\right\|}_{\mathbf{L}^\infty ([0,t]\times\Omega; \mathbb{R})}t\right),$$ where we use the estimate ${\left\|A_h(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le {\left\|A(\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})}$ for all $\tau \in [0,T]$.
Observe that $$\begin{aligned} & {\left\|\mathinner{\nabla\cdot}\left(\left(c_2 (\tau) - c_1 (\tau) \right)u_1^h (\tau) \right)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \\ \le \ & {\left\|u_1^h (\tau)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} {\left\|\mathinner{\nabla\cdot}\left(c_2 (\tau) - c_1 (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + {\left\|c_2 (\tau) -c_1 (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} {\left\|\nabla u_1^h (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \le \ & \left( {\left\|u_o^h\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|a_h\right\|}_{\mathbf{L}^1([0,\tau] \times \Omega; \mathbb{R})}\right) \exp\left({\left\|A_h\right\|}_{\mathbf{L}^\infty ([0,\tau] \times \Omega; \mathbb{R})} t\right) {\left\|\mathinner{\nabla\cdot}\left(c_2 (\tau) - c_1 (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \\ & + {\left\|c_2 (\tau) -c_1 (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \exp \left( {\left\|A_h\right\|}_{\mathbf{L}^1([0,\tau];\mathbf{L}^\infty (\Omega; \mathbb{R}^n))} + {\left\|D_x c_1\right\|}_{\mathbf{L}^1([0,\tau];\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \right) \\ & \times \Biggl( {\left\|\mathinner{\nabla}u_o^h\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} + \int_0^\tau {\left\|\mathinner{\nabla}a_h (s)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{s}} + \left({\left\|u_o^h\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \!+\! 
\int_0^\tau {\left\| a_h (s)\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{s}}\right) \\ & \qquad\quad \times \left({\left\|\mathinner{\nabla}A_h\right\|}_{\mathbf{L}^1 ([0,\tau] \times \Omega; \mathbb{R}^n)} + {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c_1 \right\|}_{\mathbf{L}^1 ([0,\tau] \times \Omega; \mathbb{R}^n)}\right) \Biggr) \\ \le \ & \left( {\left\|u_o^h\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|a_h\right\|}_{\mathbf{L}^1([0,\tau] \times \Omega; \mathbb{R})}\right) \exp\left({\left\|A\right\|}_{\mathbf{L}^\infty ([0,\tau] \times \Omega; \mathbb{R})} t\right) {\left\|\mathinner{\nabla\cdot}\left(c_2 (\tau) - c_1 (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \\ & + {\left\|c_2 (\tau) -c_1 (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \exp \left( {\left\|A\right\|}_{\mathbf{L}^1([0,\tau];\mathbf{L}^\infty (\Omega; \mathbb{R}^n))} + {\left\|D_x c_1\right\|}_{\mathbf{L}^1([0,\tau];\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \right) \\ & \times \Biggl( \mathinner{\rm TV}(u_o) + \mathcal{O}(1){\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + \int_0^\tau {\left\|\mathinner{\nabla}a_h (s)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{s}} \\ & \quad \!+\! \left({\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \!+\! \int_0^\tau {\left\| a (s)\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{s}}\right)\!\! \left({\left\|\mathinner{\nabla}A_h\right\|}_{\mathbf{L}^1 ([0,\tau] \times \Omega; \mathbb{R}^n)} \!+\! {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c_1 \right\|}_{\mathbf{L}^1 ([0,\tau] \times \Omega; \mathbb{R}^n)}\right)\!\! \Biggr), \end{aligned}$$ where we used  in , and the hypotheses [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"}--[\[eq:29\]](#eq:29){reference-type="eqref" reference="eq:29"} on the regularizing sequences $u_o^h$, $A_h$ and $a_h$. 
By the triangular inequality and the above computations, $$\begin{aligned} \nonumber & {\left\|u_2(t) - u_1(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \nonumber \le\ & {\left\|u_2(t) - u_2^h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|u_2^h(t) - u_1^h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|u_1(t) - u_1^h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \label{eq:21} \le\ & {\left\|u_2(t) - u_2^h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\\label{eq:22} & \begin{aligned} & + \left( {\left\|u_o^h\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|a_h\right\|}_{\mathbf{L}^1([0,t] \times \Omega; \mathbb{R})}\right) \\ & \quad \times \exp\left({\left\|A\right\|}_{\mathbf{L}^\infty ([0,t] \times \Omega; \mathbb{R})} t\right) \int_0^t {\left\|\mathinner{\nabla\cdot}\left(c_2 (\tau) - c_1 (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \mathinner{\mathrm{d}{\tau}} \end{aligned} \\ \nonumber & + \int_0^t {\left\|c_2 (\tau) -c_1 (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{\tau}} \, \exp \left( {\left\|A\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}^n))} + {\left\|D_x c_1\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \right) \\\label{eq:23} & \quad \times \Biggl( \mathinner{\rm TV}(u_o) + \mathcal{O}(1){\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + \int_0^t {\left\|\mathinner{\nabla}a_h (s)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{s}} \\\nonumber & \quad\, +\! \left({\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \!+\! \int_0^t\! {\left\| a (s)\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{s}}\right)\!\! \left({\left\|\mathinner{\nabla}A_h\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R}^n)} \!+\! {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c_1 \right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R}^n)}\right)\!\! 
\Biggr) \\\label{eq:24} & + {\left\|u_1(t) - u_1^h(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}, \end{aligned}$$ and in the limit $h \to +\infty$ we treat each term separately. By construction, [\[eq:21\]](#eq:21){reference-type="eqref" reference="eq:21"} and [\[eq:24\]](#eq:24){reference-type="eqref" reference="eq:24"} converge to zero as $h \to + \infty$. By the hypotheses [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} on $u_o^h$ and [\[eq:29\]](#eq:29){reference-type="eqref" reference="eq:29"} on $A_h$ and $a_h$, in the limit we thus obtain $$\begin{aligned} & {\left\|u_2(t) - u_1(t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \le \ & \left( {\left\|u_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1([0,t] \times \Omega; \mathbb{R})}\right) \exp\left({\left\|A\right\|}_{\mathbf{L}^\infty ([0,t] \times \Omega; \mathbb{R})} t\right) \int_0^t {\left\|\mathinner{\nabla\cdot}\left(c_2 (\tau) - c_1 (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \mathinner{\mathrm{d}{\tau}} \\ & + \int_0^t {\left\|c_2 (\tau) -c_1 (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{\tau}} \, \exp \left( {\left\|A\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}^n))} + {\left\|D_x c_1\right\|}_{\mathbf{L}^1([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \right) \\ & \times \Biggl( \mathinner{\rm TV}(u_o) + \mathcal{O}(1){\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + \int_0^t \mathinner{\rm TV}\left( a (s)\right) \mathinner{\mathrm{d}{s}} \\ & \quad + \left({\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \!+\! \int_0^t {\left\| a (s)\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{s}}\right)\!\! \left( \int_0^t\mathinner{\rm TV}\left( A (s)\right)\mathinner{\mathrm{d}{s}} + {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c_1 \right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R}^n)}\right)\!\! 
\Biggr), \end{aligned}$$ concluding the proof. **Lemma 14**. *Let [\[it:omega\]](#it:omega){reference-type="ref" reference="it:omega"}--[\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} hold. Assume moreover that $u_o \in \mathbf{L}^\infty (\Omega;{\mathbb{R}})$, with $u_o\geq 0$, $A \in \mathbf{L}^\infty ([0,T]\times\Omega; {\mathbb{R}})$ and $a \in \mathbf{L}^1 \left([0,T]; \mathbf{L}^\infty(\Omega; {\mathbb{R}}) \right)$, with $a \geq 0$. Then, the solution $u$ is positive.* The proof is an immediate consequence of the representation [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}. **Lemma 15**. *Let [\[item:H0\]](#item:H0){reference-type="ref" reference="item:H0"}--[\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"}--[\[it:H2\]](#it:H2){reference-type="ref" reference="it:H2"}--[\[it:H3\]](#it:H3){reference-type="ref" reference="it:H3"} hold. Assume, moreover, that $c (t) \in \mathbf{C}^{2} (\Omega;{\mathbb{R}}^n)$ for all $t \in [0,T]$ and $\mathinner{\nabla}\mathinner{\nabla\cdot}c \in \mathbf{L}^1 ([0,T]\times \Omega; \mathbb{R}^n)$. If $u \in \mathbf{L}^\infty ([0,T]\times \Omega; {\mathbb{R}})$ is as in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}, then $u$ is $\mathbf{L}^1$--Lipschitz continuous in time: for all $t_1,t_2 \in [0,T]$, with $t_1< t_2$ $$\label{eq:42} {\left\|u (t_2) - u (t_1)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq \mathcal{O}(1)(t_2 - t_1)$$ where $\mathcal{O}(1)$ depends on norms of $c,A,a$ on the interval $[0, t_2]$ and of $u_o$.* ***Proof.*** By [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"}, the following decomposition holds $$\label{eq:45} \begin{array}{rcl} {\left\|u (t_2) - u (t_1)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} & = & {\left\|u (t_2) - u (t_1)\right\|}_{\mathbf{L}^1 (X(t_2;t_1,\Omega);{\mathbb{R}})} \\ & & + {\left\|u (t_2) - u (t_1)\right\|}_{\mathbf{L}^1 (X(t_2;[t_1,t_2[,\partial\Omega);{\mathbb{R}})} \,. 
\end{array}$$ Estimate the two latter summands in [\[eq:45\]](#eq:45){reference-type="eqref" reference="eq:45"} separately. By [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} $$\begin{aligned} \nonumber & {\left\|u (t_2) - u (t_1)\right\|}_{\mathbf{L}^1 (X(t_2;t_1,\Omega);{\mathbb{R}})} \\ \nonumber \le\ & \int_{X(t_2;t_1,\Omega)} {\left| u\left(t_1,X(t_1;t_2,x)\right) \, \mathcal{E} (t_1,t_2,x) - u (t_1,x) \right|} \mathinner{\mathrm{d}{x}} \\ \nonumber & \qquad + \int_{X(t_2;t_1,\Omega)} \int_{t_1}^{t_2} {\left|a\left(\tau,X (\tau;t_2,x)\right) \, \mathcal{E} (\tau,t_2,x)\right|} \mathinner{\mathrm{d}{\tau }}\mathinner{\mathrm{d}{x}} \\ \label{eq:46} \le \ & \int_{X(t_2;t_1,\Omega)} {\left| u\left(t_1,X(t_1;t_2,x)\right) - u (t_1,x) \right|} \, \mathcal{E} (t_1,t_2,x) \mathinner{\mathrm{d}{x}} \\ \label{eq:47} & + \int_{X(t_2;t_1,\Omega)} {\left| u (t_1,x) \right|} \; {\left|\mathcal{E} (t_1,t_2,x)-1\right|}\mathinner{\mathrm{d}{x}} \\ \label{eq:48} & + \int_{X(t_2;t_1,\Omega)} \int_{t_1}^{t_2} {\left|a\left(\tau,X (\tau;t_2,x)\right) \, \mathcal{E} (\tau,t_2,x)\right|} \mathinner{\mathrm{d}{\tau }}\mathinner{\mathrm{d}{x}} \,. 
\end{aligned}$$ To estimate [\[eq:46\]](#eq:46){reference-type="eqref" reference="eq:46"}, we use [@MauroMatthew Lemma 5.1] so that we obtain $$\begin{aligned} & \int_{X(t_2;t_1,\Omega)} {\left| u\left(t_1,X(t_1;t_2,x)\right) - u (t_1,x) \right|} \, \mathcal{E} (t_1,t_2,x) \mathinner{\mathrm{d}{x}} \\ \le \ & \dfrac{{\left\|c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega;{\mathbb{R}}^n)}}{{\left\|D_x c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega;{\mathbb{R}}^{n\times n})}} \left( e^{{\left\|D_x c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega;{\mathbb{R}}^{n\times n})} (t_2-t_1)} -1\right) \mathinner{\rm TV}\left(u (t_1)\right) \\ \le \ & {\left\|c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega;{\mathbb{R}}^n)} e^{{\left\|D_x c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega;{\mathbb{R}}^{n\times n})} (t_2-t_1)} \mathinner{\rm TV}\left(u (t_1)\right) (t_2 - t_1), \end{aligned}$$ and the total variation of $u$ might be estimated thanks to . The bounds for [\[eq:47\]](#eq:47){reference-type="eqref" reference="eq:47"} and [\[eq:48\]](#eq:48){reference-type="eqref" reference="eq:48"} follow from the definition [\[eq:13\]](#eq:13){reference-type="eqref" reference="eq:13"} of $\mathcal{E}$: $$\begin{aligned} & \int_{X(t_2;t_1,\Omega)} {\left| u (t_1,x) \right|} \; {\left|\mathcal{E} (t_1,t_2,x)-1\right|}\mathinner{\mathrm{d}{x}} \\ \le \ & {\left\|u(t_1)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} (t_2-t_1) \left( {\left\|A\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega; \mathbb{R})} +{\left\|\mathinner{\nabla\cdot}c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega; \mathbb{R})} \right) \\ & \times \exp\left(\left( {\left\|A\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega; \mathbb{R})} +{\left\|\mathinner{\nabla\cdot}c\right\|}_{\mathbf{L}^\infty ([t_1,t_2]\times\Omega; \mathbb{R})} \right) (t_2-t_1)\right) \,; \\ & \int_{X(t_2;t_1,\Omega)} \int_{t_1}^{t_2} {\left|a\left(\tau,X (\tau;t_2,x)\right) \, \mathcal{E} (\tau,t_2,x)\right|} 
\mathinner{\mathrm{d}{\tau }}\mathinner{\mathrm{d}{x}} \\ \le \ & (t_2-t_1) \, {\left\|a\right\|}_{\mathbf{L}^\infty ([t_1,t_2]; \mathbf{L}^1 (\Omega; \mathbb{R}))} \exp\left(\int_{t_1}^{t_2}{\left\|A (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|\mathinner{\nabla\cdot}c\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})}\mathinner{\mathrm{d}{\tau}}\right) \,. \end{aligned}$$ Consider now the second summand in [\[eq:45\]](#eq:45){reference-type="eqref" reference="eq:45"}. Introduce $T_{t_1}(t_2,x) = \inf \{s \in [t_1,t_2] \colon$ $X(s;t_2,x) \in \Omega\}$ and compute $$\begin{aligned} & {\left\|u (t_2) - u (t_1)\right\|}_{\mathbf{L}^1 (X(t_2;[t_1,t_2[,\partial\Omega);{\mathbb{R}})} \\ \le \ & \int_{X(t_2;[t_1,t_2[,\partial\Omega)} {\left| \int_{T_{t_1} (t_2,x)}^{t_2} a\left(\tau,X (\tau;t_2,x)\right) \, \mathcal{E} (\tau,t_2,x) \mathinner{\mathrm{d}{\tau}} \right|} \mathinner{\mathrm{d}{x}}. \end{aligned}$$ The same procedure used to bound [\[eq:48\]](#eq:48){reference-type="eqref" reference="eq:48"} applies, completing the proof. ## Coupling {#sec:coupling} Fix $T>0$. Define $u_0 (t,x) = u_o (x)$ and $w_0(t,x) = w_o (x)$ for all $(t,x) \in [0,T] \times \Omega$. 
For $i \in {\mathbb{N}}$, define recursively $u_{i+1}$ and $w_{i+1}$ as solutions to $$\label{eq:28} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t u_{i+1} + \mathinner{\nabla\cdot}\left(u_{i+1} \, c_i(t,x) \right) = A_i (t,x)\, u_{i+1} + a(t,x) & (t,x) & \in & [0,T] \times \Omega \\ u (t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial \Omega \\ u (0,x) = u_o (x) & x & \in & \Omega \end{array} \right.$$ $$\label{eq:30} \left\{ \begin{array}{l@{\qquad}r@{\,}c@{\,}l} \partial_t w_{i+1} - \mu \, \Delta w_{i+1} = B_i (t,x) \, w_{i+1} + b (t,x) & (t,x) & \in & [0,T] \times \Omega \\ w (t,\xi) = 0 & (t,\xi) & \in & [0,T] \times \partial \Omega \\ w (0,x) = w_o (x) & x & \in & \Omega \end{array} \right.$$ where $$\label{eq:31} \begin{array}{rcl} c_i (t,x) & = & v (t,w_i) (x) \\ A_i (t,x) & = & \alpha \left(t,x,w_i (t,x)\right) \end{array} \qquad\qquad B_i (t,x) = \beta \left(t,x,u_i (t,x),w_i (t,x)\right) \,.$$ We aim to prove that $(u_i,w_i)$ is a Cauchy sequence with respect to the $\mathbf{L}^\infty ([0,T] ; \mathbf{L}^1 (\Omega;{\mathbb{R}}^2))$ distance as soon as $T$ is sufficiently small. Observe first that problem [\[eq:30\]](#eq:30){reference-type="eqref" reference="eq:30"} fits into the framework of , while problem [\[eq:28\]](#eq:28){reference-type="eqref" reference="eq:28"} fits into the framework of . Consider the $w$ component. applies, ensuring the existence of a solution to [\[eq:30\]](#eq:30){reference-type="eqref" reference="eq:30"} for all $i\in\mathbb{N}$. Moreover, if $b \geq 0$ and the initial datum $w_o$ is positive, the solution $w_i$ is positive. 
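The iterative scheme [\[eq:28\]](#eq:28){reference-type="eqref" reference="eq:28"}--[\[eq:30\]](#eq:30){reference-type="eqref" reference="eq:30"} can be sketched in a space-independent caricature, where transport and diffusion are dropped and only the coupling through the frozen coefficients survives. The sketch below is purely illustrative: `alpha`, `beta`, the sources and the horizon are hypothetical stand-ins for the data in [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, chosen to satisfy the required Lipschitz bounds.

```python
import numpy as np

# Space-independent caricature of the scheme (28)-(30): at each step the
# coefficients are frozen along the previous iterate, so each Cauchy
# problem is *linear*; here it is solved by explicit Euler steps.
T, N = 0.25, 500
dt = T / N
a, b = 0.1, 0.2                         # illustrative constant sources
alpha = lambda w: -0.5 * np.tanh(w)     # stand-in for A_i = alpha(., w_i)
beta = lambda u, w: -np.tanh(u + w)     # stand-in for B_i = beta(., u_i, w_i)

u = np.full(N + 1, 1.0)                 # u_0(t) = u_o, frozen in time
w = np.full(N + 1, 1.0)                 # w_0(t) = w_o, frozen in time
for _ in range(40):
    A, B = alpha(w), beta(u, w)         # coefficients frozen along (u_i, w_i)
    u_next, w_next = np.empty(N + 1), np.empty(N + 1)
    u_next[0] = w_next[0] = 1.0
    for k in range(N):                  # explicit Euler for two linear ODEs
        u_next[k + 1] = u_next[k] + dt * (A[k] * u_next[k] + a)
        w_next[k + 1] = w_next[k] + dt * (B[k] * w_next[k] + b)
    gap = np.max(np.abs(u_next - u)) + np.max(np.abs(w_next - w))
    u, w = u_next, w_next

assert gap < 1e-8                       # the iterates form a Cauchy sequence
```

On a short horizon the gap between consecutive iterates contracts geometrically, mirroring the smallness condition on $T$ announced above.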
By [\[it:betaFinal\]](#it:betaFinal){reference-type="ref" reference="it:betaFinal"} and [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, for all $i\in \mathbb{N}$, $B_i$ satisfies [\[it:P2\]](#it:P2){reference-type="ref" reference="it:P2"} and for all $\tau \in [0,T]$ $$\label{eq:B_Linf} {\left\|B_{i} (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le K_\beta \,,$$ while by [\[it:b\]](#it:b){reference-type="ref" reference="it:b"} the function $b$ satisfies [\[it:P3\]](#it:P3){reference-type="ref" reference="it:P3"}. The following uniform bounds on $w_i$ hold for every $i \in \mathbb{N}$: by [\[it:P_apriori\]](#it:P_apriori){reference-type="ref" reference="it:P_apriori"} and [\[it:P_tv\]](#it:P_tv){reference-type="ref" reference="it:P_tv"} in , exploiting also [\[eq:B_Linf\]](#eq:B_Linf){reference-type="eqref" reference="eq:B_Linf"}, for all $\tau \in [0,T]$, $$\begin{aligned} \label{eq:w_L1} {\left\|w_i (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \ & e^{K_\beta \, \tau} \left( {\left\|w_o\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,\tau]\times\Omega;{\mathbb{R}})} \right) = \colon C_{w,1}(\tau), \\ \label{eq:w_Linf} {\left\|w_i (\tau)\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} \le \ & e^{K_\beta \, \tau} \left( {\left\|w_o\right\|}_{\mathbf{L}^\infty (\Omega;{\mathbb{R}})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,\tau]; \mathbf{L}^\infty(\Omega;{\mathbb{R}}))} \right) = \colon C_{w,\infty} (\tau), \\ \label{eq:w_TV} \mathinner{\rm TV}\left(w_i (\tau, \cdot)\right) \le \ & \mathinner{\rm TV}(w_o) + \int_0^\tau \mathinner{\rm TV}\left(b (s)\right)\mathinner{\mathrm{d}{s}} + \mathcal{O} (1) \sqrt{\tau}\, K_\beta \, {\left\|w_i (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} = \colon C_w^{\mathinner{\rm TV}} (\tau). 
\end{aligned}$$ By [\[it:P_stability\]](#it:P_stability){reference-type="ref" reference="it:P_stability"} in  we get $$\begin{aligned} \nonumber {\left\|w_{i+1} (t) - w_i (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq \ & {\left\|B_i-B_{i-1}\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \left({\left\|w_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|b\right\|}_{\mathbf{L}^1 ([0,t];\mathbf{L}^\infty (\Omega; \mathbb{R}))}\right) \\ \label{eq:33} & \times \exp\int_0^t\left( {\left\|B_i (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|B_{i-1} (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \right)\mathinner{\mathrm{d}{\tau}}. \end{aligned}$$ By [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, exploiting the hypothesis [\[it:betaFinal\]](#it:betaFinal){reference-type="ref" reference="it:betaFinal"} we obtain $$\begin{aligned} & {\left\|B_i-B_{i-1}\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \\ = \ & \int_0^t\int_\Omega {\left|\beta \left(\tau,x,u_i (\tau,x), w_i (\tau,x)\right) -\beta \left(\tau,x,u_{i-1} (\tau,x), w_{i-1} (\tau,x)\right)\right|} \mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{\tau}} \\ \le \ & K_\beta \left( {\left\|u_i - u_{i-1}\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} + {\left\|w_i - w_{i-1}\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \right). 
\end{aligned}$$ Therefore, using also [\[eq:B_Linf\]](#eq:B_Linf){reference-type="eqref" reference="eq:B_Linf"} and the notation introduced in [\[eq:w_Linf\]](#eq:w_Linf){reference-type="eqref" reference="eq:w_Linf"}, [\[eq:33\]](#eq:33){reference-type="eqref" reference="eq:33"} becomes $$\label{eq:diff_wi} \begin{aligned} {\left\|w_{i+1} (t) - w_i (t)\right\|}_{\mathbf{L}^1 (\Omega;{\mathbb{R}})} \leq \ & K_\beta \, e^{t \, K_\beta} C_{w, \infty} (t) \\ & \times \left( {\left\|u_i - u_{i-1}\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} + {\left\|w_i - w_{i-1}\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \right). \end{aligned}$$ Pass now to the $u$ component. The results of  apply, ensuring the existence of a solution to [\[eq:28\]](#eq:28){reference-type="eqref" reference="eq:28"} for all $i\in\mathbb{N}$. Moreover, if $a \geq 0$ and the initial datum $u_o$ is positive, the solution $u_i$ is positive, see .
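The positivity assertion can be read off directly from the characteristics representation [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}: the weight $\mathcal{E}$ is an exponential, hence strictly positive, so every term of the representation is non-negative whenever $u_o \geq 0$ and $a \geq 0$. A minimal numerical sketch, posed on the whole line for simplicity (the boundary exit times in [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} are ignored, and all coefficients are illustrative choices):

```python
import math

# Evaluate the representation (9) by tracing each characteristic backward;
# u_o >= 0 and a >= 0, so u(t,x) is a sum of non-negative terms.
c = lambda x: 0.5 * math.sin(2 * math.pi * x)          # velocity field
div_c = lambda x: math.pi * math.cos(2 * math.pi * x)  # its spatial derivative
A = lambda x: -1.0 + 0.3 * math.cos(x)                 # zeroth-order coefficient
a = lambda x: 0.1 * (1.0 + math.sin(x)) ** 2           # non-negative source
u_o = lambda x: max(0.0, 1.0 - 4.0 * (x - 0.5) ** 2)   # non-negative datum

def u_at(t, x, n=200):
    """u(t,x) from the representation (9), by backward Euler along X."""
    ds = t / n
    X, E, val = x, 1.0, 0.0
    for _ in range(n):
        val += ds * a(X) * E                  # source weighted by E > 0
        X -= ds * c(X)                        # one step back along X
        E *= math.exp(ds * (A(X) - div_c(X)))  # E stays strictly positive
    return u_o(X) * E + val                   # sum of non-negative terms

samples = [u_at(0.7, 0.05 * k) for k in range(1, 20)]
assert all(v >= 0.0 for v in samples)  # positivity, as claimed above
```

The assertion holds by construction, independently of the discretization error: positivity is structural, not numerical.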
By [\[it:alpha\]](#it:alpha){reference-type="ref" reference="it:alpha"} and [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, for every $i\in \mathbb{N}$ we have that $A_i$ satisfies [\[it:H2\]](#it:H2){reference-type="ref" reference="it:H2"} and for all $\tau \in [0,T]$, exploiting [\[eq:w_Linf\]](#eq:w_Linf){reference-type="eqref" reference="eq:w_Linf"} and [\[eq:w_TV\]](#eq:w_TV){reference-type="eqref" reference="eq:w_TV"}, $$\begin{aligned} \label{eq:A_Linf} {\left\|A_{i} (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le \ & K_\alpha \left(1+C_{w,\infty} (\tau)\right), \\ \nonumber \mathinner{\rm TV}\left(A_i (\tau, \cdot)\right) = \ & \mathinner{\rm TV}\alpha\left(\tau, \cdot, w_i (\tau, \cdot)\right) \\ \nonumber \le \ & K_\alpha \left(1 + C_{w, \infty} (\tau) + \mathinner{\rm TV}\left(w_i (\tau,\cdot)\right)\right) \\ \label{eq:A_tv} \le \ & K_\alpha \left(1 + C_{w, \infty} (\tau) + C_w^{\mathinner{\rm TV}} (\tau) \right), \end{aligned}$$ while by [\[it:a\]](#it:a){reference-type="ref" reference="it:a"} the function $a$ satisfies [\[it:H3\]](#it:H3){reference-type="ref" reference="it:H3"}. By [\[it:v\]](#it:v){reference-type="ref" reference="it:v"}, for every $i \in \mathbb{N}$ the function $c_i$ satisfies [\[it:H1\]](#it:H1){reference-type="ref" reference="it:H1"} and, moreover, $c_i (t) \in \mathbf{C}^{2} (\Omega; \mathbb{R}^n)$ for all $t \in [0,T]$ and $\nabla \mathinner{\nabla\cdot}c_i \in \mathbf{L}^1 ([0,T]\times\Omega; \mathbb{R}^n)$. 
In particular, thanks to [\[it:v\]](#it:v){reference-type="ref" reference="it:v"} and [\[eq:w_L1\]](#eq:w_L1){reference-type="eqref" reference="eq:w_L1"}, the following bounds hold for every $i \in \mathbb{N}$ and $t \in [0,T]$: $$\begin{aligned} \label{eq:v_divL1} {\left\|\mathinner{\nabla\cdot}c_i\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega; \mathbb{R}))} \le \ & K_v {\left\|w_i\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \le \ K_v \, t \, C_{w,1} (t), \\ \label{eq:v_DxL1} {\left\|D_x c_i\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty (\Omega; \mathbb{R}^{n\times n}))} \le \ & K_v {\left\|w_i\right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})} \le \ K_v \, t \, C_{w,1} (t), \\ \label{eq:v_graddivL1} {\left\|\mathinner{\nabla}\mathinner{\nabla\cdot}c_i (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R}^n)} \le \ & C_v\left(t, {\left\|w_i (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}\right) {\left\|w_i (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \ C_v \left(t, C_{w,1} (t)\right) C_{w,1} (t).
\end{aligned}$$ The following uniform bounds on $u_i$ hold for every $i \in \mathbb{N}$: by  and , exploiting also [\[eq:w_L1\]](#eq:w_L1){reference-type="eqref" reference="eq:w_L1"}--[\[eq:w_TV\]](#eq:w_TV){reference-type="eqref" reference="eq:w_TV"} and [\[eq:A_Linf\]](#eq:A_Linf){reference-type="eqref" reference="eq:A_Linf"}--[\[eq:v_graddivL1\]](#eq:v_graddivL1){reference-type="eqref" reference="eq:v_graddivL1"}, for all $\tau \in [0,T]$, $$\begin{aligned} \nonumber {\left\|u_i (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \le \ & \left({\left\|u_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,\tau] \times \Omega; \mathbb{R})}\right) \exp\left(K_\alpha \, \tau \left(1+C_{w,\infty} (\tau)\right) \right) \\ \label{eq:u_L1} = \colon \ & C_{u,1} (\tau), \\ \nonumber {\left\|u_i (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} \le \ & \left({\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,\tau]; \mathbf{L}^\infty (\Omega; \mathbb{R}))}\right) \\ \nonumber & \qquad \times \exp\left(K_\alpha \, \tau \left(1+C_{w,\infty} (\tau)\right) + K_v \, \tau \, C_{w,1} (\tau) \right) \\ \label{eq:u_Linf} = \colon \ & C_{u,\infty} (\tau), \\ \nonumber \mathinner{\rm TV}\left(u_i (\tau, \cdot)\right) \le \ & \exp\left( K_\alpha \, \tau \left(1+C_{w,\infty} (\tau)\right) + K_v \, \tau \, C_{w,1} (\tau) \right) \\ \nonumber & \times \Biggl( \mathinner{\rm TV}(u_o) + \mathcal{O}(1) {\left\|u_o\right\|}_{\mathbf{L}^\infty(\Omega; \mathbb{R})} + \int_0^\tau \mathinner{\rm TV}\left( a (s)\right) \mathinner{\mathrm{d}{s}} \Biggr) \\ \nonumber & + C_{u, \infty} (\tau) \Biggl( K_\alpha \tau \left(1 + C_{w, \infty} (\tau) + C_w^{\mathinner{\rm TV}} (\tau)\right) + \tau \, C_v \left(\tau, C_{w,1} (\tau)\right) C_{w,1} (\tau) \Biggr) \\ \label{eq:u_TV} = \colon \ &
C_u^{\mathinner{\rm TV}} (\tau). \end{aligned}$$ By  and , exploiting [\[eq:A_Linf\]](#eq:A_Linf){reference-type="eqref" reference="eq:A_Linf"}, [\[eq:u_L1\]](#eq:u_L1){reference-type="eqref" reference="eq:u_L1"} and [\[eq:u_TV\]](#eq:u_TV){reference-type="eqref" reference="eq:u_TV"}, we get $$\begin{aligned} \nonumber {\left\|u_{i+1} (t) - u_i (t)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \le \ & C_{u,1} (t) \int_0^t {\left\|\mathinner{\nabla\cdot}\left(c_i (\tau) - c_{i-1} (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \mathinner{\mathrm{d}{\tau}} \\\nonumber & + C_u^{\mathinner{\rm TV}} (t) \int_0^t {\left\|c_i (\tau) -c_{i-1} (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \mathinner{\mathrm{d}{\tau}} \\ \label{eq:35} & + \exp \left( t \, K_\alpha \left(1+C_{w,\infty} (t)\right) \right) \\ \nonumber & \times \left( {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty(\Omega;\mathbb{R}))} \right) {\left\|A_i-A_{i-1}\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;\mathbb{R})} .
\end{aligned}$$ By [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, exploiting the hypothesis [\[it:alpha\]](#it:alpha){reference-type="ref" reference="it:alpha"} we obtain $$\label{eq:34} \begin{aligned} {\left\|A_i-A_{i-1}\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;\mathbb{R})} = \ & \int_0^t \int_\Omega {\left| \alpha\left(\tau,x,w_i (\tau,x)\right) - \alpha\left(\tau,x,w_{i-1} (\tau,x)\right) \right|}\mathinner{\mathrm{d}{x}}\mathinner{\mathrm{d}{\tau}} \\ \le\ & K_\alpha \, {\left\|w_i - w_{i-1}\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega;\mathbb{R})}. \end{aligned}$$ By [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, exploiting the hypothesis [\[it:v\]](#it:v){reference-type="ref" reference="it:v"} and [\[eq:w_L1\]](#eq:w_L1){reference-type="eqref" reference="eq:w_L1"} we obtain $$\begin{aligned} \label{eq:39} {\left\|\mathinner{\nabla\cdot}\left(c_i (\tau) - c_{i-1} (\tau)\right)\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} \le \ & C_v \left( t , C_{w,1} (t) \right) {\left\|w_i (\tau) - w_{i-1} (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}, \\ \label{eq:40} {\left\|c_i (\tau) -c_{i-1} (\tau)\right\|}_{\mathbf{L}^\infty (\Omega; \mathbb{R}^n)} \le \ & K_v {\left\|w_i (\tau) - w_{i-1} (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}. \end{aligned}$$ Hence, inserting [\[eq:34\]](#eq:34){reference-type="eqref" reference="eq:34"}, [\[eq:39\]](#eq:39){reference-type="eqref" reference="eq:39"} and [\[eq:40\]](#eq:40){reference-type="eqref" reference="eq:40"} into [\[eq:35\]](#eq:35){reference-type="eqref" reference="eq:35"} yields $$\begin{aligned} \nonumber & {\left\|u_{i+1} (t) - u_i (t)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} \\ \nonumber \le \ & \Bigl( C_{u,1} (t) \, C_v \! 
\left( t , C_{w,1} (t)\right) + C_u^{\mathinner{\rm TV}} (t) \\ \label{eq:diff_ui} & +K_\alpha \, \exp \left( t \, K_\alpha \left(1+C_{w,\infty} (t)\right) \right) \left( {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty(\Omega;\mathbb{R}))} \right) \Bigr) \\ \nonumber & \times {\left\|w_i - w_{i-1} \right\|}_{\mathbf{L}^1 ([0,t] \times \Omega; \mathbb{R})}. \end{aligned}$$ Collecting together [\[eq:diff_wi\]](#eq:diff_wi){reference-type="eqref" reference="eq:diff_wi"} and [\[eq:diff_ui\]](#eq:diff_ui){reference-type="eqref" reference="eq:diff_ui"} we obtain $$\begin{aligned} & {\left\|w_{i+1} - w_i\right\|}_{\mathbf{L}^\infty([0,t];\mathbf{L}^1 (\Omega; \mathbb{R}))} + {\left\|u_{i+1} - u_i\right\|}_{\mathbf{L}^\infty([0,t];\mathbf{L}^1 (\Omega; \mathbb{R}))} \\ \le \ & C_{u,w}(t) \, t \left( {\left\|w_{i} - w_{i-1}\right\|}_{\mathbf{L}^\infty ([0,t]; \mathbf{L}^1 (\Omega; \mathbb{R}))} + {\left\|u_i - u_{i-1}\right\|}_{\mathbf{L}^\infty ([0,t]; \mathbf{L}^1 (\Omega; \mathbb{R}))} \right), \end{aligned}$$ where $$\begin{aligned} C_{u,w}(t) = \ & K_\beta \, e^{t \, K_\beta} C_{w, \infty} (t) + \Bigl( C_{u,1} (t) \, C_v \! \left( t , C_{w,1} (t)\right) + C_u^{\mathinner{\rm TV}} (t) \\ & +K_\alpha \, \exp \left( t \, K_\alpha \left(1+C_{w,\infty} (t)\right) \right) \left( {\left\|u_o\right\|}_{\mathbf{L}^\infty (\Omega;\mathbb{R})} + {\left\|a\right\|}_{\mathbf{L}^1 ([0,t]; \mathbf{L}^\infty(\Omega;\mathbb{R}))} \right) \Bigr). \end{aligned}$$ Choosing a sufficiently small $t_*>0$, we ensure that $(u_i,w_i)$ is a Cauchy sequence in the complete metric space $\mathbf{L}^\infty \left([0,t_*]; \mathbf{L}^1 (\Omega; \mathbb{R}^2)\right)$. Call $(u_*,w_*)$ its limit.
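The smallness condition on $t_*$ is precisely a contraction condition, and the Cauchy property follows from the geometric-series argument behind Banach's fixed point theorem. A scalar illustration, where the factor $q$ is a hypothetical stand-in for $C_{u,w}(t_*)\, t_* < 1$:

```python
import math

q = 0.4                                  # stand-in for C_{u,w}(t_*) * t_* < 1
F = lambda x: q * math.sin(x) + 1.0      # |F'| <= q: a q-contraction on R

xs = [0.0]
for _ in range(25):
    xs.append(F(xs[-1]))
gaps = [abs(b - a) for a, b in zip(xs, xs[1:])]

# successive gaps decay at least geometrically with ratio q ...
assert all(g2 <= q * g1 + 1e-15 for g1, g2 in zip(gaps, gaps[1:]))
# ... so the iterates are Cauchy, with the standard a priori tail bound
x_star = xs[-1]
assert all(abs(x - x_star) <= q**i / (1 - q) * gaps[0] + 1e-12
           for i, x in enumerate(xs))
```

The same two estimates, with the $\mathbf{L}^\infty([0,t_*];\mathbf{L}^1)$ norm in place of the absolute value, are what the preceding computation establishes for $(u_i,w_i)$.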
Then, the bounds [\[eq:51\]](#eq:51){reference-type="eqref" reference="eq:51"} and [\[eq:52\]](#eq:52){reference-type="eqref" reference="eq:52"} directly follow from [\[eq:w_L1\]](#eq:w_L1){reference-type="eqref" reference="eq:w_L1"} and [\[eq:w_Linf\]](#eq:w_Linf){reference-type="eqref" reference="eq:w_Linf"} by the lower semicontinuity of the $\mathbf{L}^\infty$ norm with respect to the $\mathbf{L}^1$ distance. The same procedure applies to get [\[eq:54\]](#eq:54){reference-type="eqref" reference="eq:54"} and [\[eq:55\]](#eq:55){reference-type="eqref" reference="eq:55"} from [\[eq:u_L1\]](#eq:u_L1){reference-type="eqref" reference="eq:u_L1"} and [\[eq:u_Linf\]](#eq:u_Linf){reference-type="eqref" reference="eq:u_Linf"}. If $a \geq 0$, $b\geq 0$ and both components of the initial datum $(u_o, w_o)$ are positive, then the components of $(u_*, w_*)$ are also positive. We now prove that $(u_*,w_*)$ solves [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} in the sense of . Note that by [\[it:v\]](#it:v){reference-type="ref" reference="it:v"}, the sequence $v (\cdot, w_i)$ converges to $v (\cdot,w_*)$ in $\mathbf{L}^\infty\left([0,t_*]; \mathbf{L}^1 (\Omega;{\mathbb{R}})\right)$. Similarly, by [\[it:alpha\]](#it:alpha){reference-type="ref" reference="it:alpha"} and [\[it:betaFinal\]](#it:betaFinal){reference-type="ref" reference="it:betaFinal"}, $\alpha (\cdot,\cdot,w_i)$ and $\beta (\cdot, \cdot, u_i, w_i)$ converge to $\alpha (\cdot,\cdot,w_*)$ and $\beta (\cdot, \cdot, u_*, w_*)$. Two applications of the Dominated Convergence Theorem ensure that the integral equalities [\[eq:41\]](#eq:41){reference-type="eqref" reference="eq:41"} for the hyperbolic problem and [\[eq:4\]](#eq:4){reference-type="eqref" reference="eq:4"} for the parabolic problem hold. By [\[eq:u_Linf\]](#eq:u_Linf){reference-type="eqref" reference="eq:u_Linf"}, we also have $u_* \in \mathbf{L}^\infty ([0,t_*]\times \Omega; {\mathbb{R}})$.
Moreover, Lemma [Lemma 15](#lem:tcont){reference-type="ref" reference="lem:tcont"} ensures that $u_* \in \mathbf{C}^{0} \left([0,t_*]; \mathbf{L}^1(\Omega; {\mathbb{R}})\right)$, using also [\[eq:A_Linf\]](#eq:A_Linf){reference-type="eqref" reference="eq:A_Linf"}--[\[eq:v_graddivL1\]](#eq:v_graddivL1){reference-type="eqref" reference="eq:v_graddivL1"}. By construction, we have $w_* \in \mathbf{C}^{0} \left([0,t_*]; \mathbf{L}^1 (\Omega;{\mathbb{R}})\right)$. Indeed, the uniform bound [\[eq:w_Linf\]](#eq:w_Linf){reference-type="eqref" reference="eq:w_Linf"} shows that $w_* \in \mathbf{L}^\infty ([0,t_*]\times \Omega; {\mathbb{R}}) \subseteq \mathbf{L}^\infty \left([0,t_*]; \mathbf{L}^1 (\Omega;{\mathbb{R}})\right)$. Moreover, a further application of the Dominated Convergence Theorem shows that $w_*$ satisfies [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"}. Hence, proceeding as in Claim 4 in the proof of , we have that $w_* \in \mathbf{C}^{0} \left([0,t_*]; \mathbf{L}^1 (\Omega;{\mathbb{R}})\right)$. Thus, $(u_*,w_*)$ satisfies the requirements in . Moreover, this solution $(u_*, w_*)$ can be uniquely extended to all of $[0,T]$. The proof is identical to [@SAPM2021 Theorem 2.2, Step 6]. Following the same techniques used in [@SAPM2021 Theorem 2.2, Step 7], we can also prove the Lipschitz continuous dependence of the solution to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} on the initial data. Let $(u_o, w_o)$ and $(\tilde u_o, \tilde w_o)$ be two sets of initial data. Call $(u,w)$ and $(\tilde u, \tilde w)$ the corresponding solutions to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} in the sense of Definition [Definition 1](#def:sol){reference-type="ref" reference="def:sol"}.
The proof is based on [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}, [\[eq:B_Linf\]](#eq:B_Linf){reference-type="eqref" reference="eq:B_Linf"}, [\[eq:A_Linf\]](#eq:A_Linf){reference-type="eqref" reference="eq:A_Linf"} in ; computations analogous to those leading to [\[eq:diff_wi\]](#eq:diff_wi){reference-type="eqref" reference="eq:diff_wi"} and [\[eq:diff_ui\]](#eq:diff_ui){reference-type="eqref" reference="eq:diff_ui"} now yield $$\begin{aligned} & {\left\|u(t)- \tilde u (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w(t)- \tilde w (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \le \ & {\left\|u_o - \tilde u_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \exp\left(K_\alpha \,t \left(1 + K_{w,\infty} (t)\right)\right) + {\left\|w_o - \tilde w_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} e^{K_\beta \,t} \\ & + K_\beta \, e^{t \, K_\beta} K_{w, \infty} (t) \left( \int_0^t {\left\|u (\tau) - \tilde u (\tau)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} + {\left\|w(\tau) - \tilde w (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \mathinner{\mathrm{d}{\tau}} \right) \\ & + K_1(t) \int_0^t {\left\|w (\tau) -\tilde w (\tau)\right\|}_{\mathbf{L}^1(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{\tau}}, \end{aligned}$$ where $$\begin{aligned} \nonumber K_{w, \infty} (t) = \ & \min \left\{C_{w, \infty} (t), C_{\tilde w, \infty} (t)\right\}, \\ \label{eq:57} K_1 (t) = \ & \min \left\{ C_{u,1} (t) \, C_v \! \left( t , C_{w,1} (t)\right) + C_u^{\mathinner{\rm TV}} (t) +K_\alpha \, C_{u,\infty} (t), \right. \\ \nonumber & \qquad\quad\left. C_{\tilde u,1} (t) \, C_v \!
\left( t , C_{\tilde w,1} (t)\right) + C_{\tilde u}^{\mathinner{\rm TV}} (t) +K_\alpha \, C_{\tilde u ,\infty} (t) \right\}, \end{aligned}$$ and $C_{\tilde w, 1}$, $C_{\tilde w, \infty}$, $C_{\tilde w}^{\mathinner{\rm TV}}$, $C_{\tilde u, 1}$, $C_{\tilde u, \infty}$, $C_{\tilde u}^{\mathinner{\rm TV}}$ are defined according to [\[eq:w_L1\]](#eq:w_L1){reference-type="eqref" reference="eq:w_L1"}, [\[eq:w_Linf\]](#eq:w_Linf){reference-type="eqref" reference="eq:w_Linf"}, [\[eq:w_TV\]](#eq:w_TV){reference-type="eqref" reference="eq:w_TV"}, [\[eq:u_L1\]](#eq:u_L1){reference-type="eqref" reference="eq:u_L1"}, [\[eq:u_Linf\]](#eq:u_Linf){reference-type="eqref" reference="eq:u_Linf"}, [\[eq:u_TV\]](#eq:u_TV){reference-type="eqref" reference="eq:u_TV"}, corresponding to the initial datum $(\tilde u_o, \tilde w_o)$. Then, Gronwall Lemma [@bressan-piccoli Lemma 3.1] yields $$\begin{aligned} & {\left\|u(t)- \tilde u (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w(t)- \tilde w (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \le \ & \left( {\left\|u_o - \tilde u_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w_o - \tilde w_o\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})}\right) \int_0^t \mathcal{K}_o (\tau) \exp\left(\int_\tau^t \mathcal{K} (s) \mathinner{\mathrm{d}{s}}\right) \mathinner{\mathrm{d}{\tau}}, \end{aligned}$$ with $$\begin{aligned} \mathcal{K}_o (\tau) = \ & \exp\left( \max\left\{ \left(K_\alpha \, \tau \left(1 + K_{w,\infty} (\tau)\right)\right), K_\beta \, \tau \right\} \right), & \mathcal{K} (\tau) = \ & K_\beta \, e^{\tau \, K_\beta} K_{w, \infty} (\tau) + K_1 (\tau). \end{aligned}$$ Uniqueness of the solution readily follows. Focus now on the stability of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} with respect to the controls $a$ and $b$. Let $a, \tilde a$ satisfy [\[it:a\]](#it:a){reference-type="ref" reference="it:a"}, $b, \tilde b$ satisfy [\[it:b\]](#it:b){reference-type="ref" reference="it:b"}.
Call $(u,w)$ and $(\tilde u, \tilde w)$ the solutions to [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} corresponding to the functions $a,b$ and $\tilde a, \tilde b$ respectively. Similarly to the previous step, by [\[eq:11\]](#eq:11){reference-type="eqref" reference="eq:11"}, [\[eq:B_Linf\]](#eq:B_Linf){reference-type="eqref" reference="eq:B_Linf"}, [\[eq:A_Linf\]](#eq:A_Linf){reference-type="eqref" reference="eq:A_Linf"}, in  and computations analogous to those leading to [\[eq:diff_wi\]](#eq:diff_wi){reference-type="eqref" reference="eq:diff_wi"} and [\[eq:diff_ui\]](#eq:diff_ui){reference-type="eqref" reference="eq:diff_ui"}, we obtain $$\begin{aligned} & {\left\|u(t)- \tilde u (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} + {\left\|w(t)- \tilde w (t)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \\ \le \ & {\left\|a-\tilde a\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega; \mathbb{R})} \exp\left(K_\alpha \, t \left(1+ K_{w, \infty} (t)\right) \right) + {\left\|b- \tilde b\right\|}_{\mathbf{L}^1 ([0,t]\times\Omega; \mathbb{R})} e^{K_\beta \, t} \\ & + K_\beta \, e^{t \, K_\beta} K_{w, \infty} (t) \left( \int_0^t {\left\|u (\tau) - \tilde u (\tau)\right\|}_{\mathbf{L}^1 (\Omega;\mathbb{R})} + {\left\|w(\tau) - \tilde w (\tau)\right\|}_{\mathbf{L}^1 (\Omega; \mathbb{R})} \mathinner{\mathrm{d}{\tau}} \right) \\ & + K_1(t) \int_0^t {\left\|w (\tau) -\tilde w (\tau)\right\|}_{\mathbf{L}^1(\Omega; \mathbb{R})}\mathinner{\mathrm{d}{\tau}}, \end{aligned}$$ where $K_{w, \infty} (t)$ and $K_1(t)$ are defined as in [\[eq:57\]](#eq:57){reference-type="eqref" reference="eq:57"}, with the main difference that the *tilde*-versions of $C_{*,*}$ now correspond to the functions $\tilde a$ and $\tilde b$. An application of Gronwall Lemma [@bressan-piccoli Lemma 3.1] yields the desired estimate. #### Acknowledgement: The authors were partly supported by the GNAMPA 2022 project *Evolution Equations: Well Posedness, Control and Applications*.
--- abstract: | We investigate the homotopy type of a certain homogeneous space for a simple complex algebraic group. We calculate some of its classical topological invariants and introduce a new one. We also propose several conjectures about its topological rigidity. address: - Department of Mathematics, University of Warwick, Coventry, CV4 7AL, UK - Department of Mathematics, University of Warwick, Coventry, CV4 7AL, UK author: - Dylan Johnston - Dmitriy Rumynin bibliography: - refs_TOP.bib date: September 13, 2023 title: "Topological rigidity of ${\\mathrm{SL}}_2$-quotients" --- Let $G$ be a simple simply-connected complex algebraic group. By the Jacobson-Morozov Theorem, its unipotent conjugacy classes are in one-to-one correspondence with conjugacy classes of homomorphisms: $$\varphi_u : {\mathrm{SL}}_2 ({\mathbb C})\rightarrow G \qquad \longleftrightarrow \qquad u = \varphi_u \begin{pmatrix} 1&1\\0&1\end{pmatrix}\, .$$ For each unipotent element $u\in G$ we consider the corresponding quotient topological space $X_u = X_u(G) \coloneqq G/H$ where $H= \varphi_u ({\mathrm{SL}}_2 ({\mathbb C}))$. This space is an affine algebraic variety: it is well-known that $G/K$ is affine if and only if $K$ is reductive. In the present paper we investigate its homotopy type. The following is our first main result: **Theorem 1**. *Consider two simple simply-connected reductive groups $G$, $K$ and their unipotent elements $u\in G$, $v\in K$. If $X_u(G)$ and $X_v(K)$ are homotopy equivalent, then $G\cong K$.* To prove this, we compute some of their classical topological invariants: rational homotopy type and some of the homotopy groups. Our second main result is related to this calculation: $\pi_2 (X_u)$ depends on whether $u$ is quite even or not. All quite even orbits are described in Section [3](#VeryEven){reference-type="ref" reference="VeryEven"}.
Our second main result is Theorem [Theorem 10](#QuiteEvenClassical){reference-type="ref" reference="QuiteEvenClassical"}: it describes quite even orbits in classical types in terms of partitions. Our third main result is Theorem [Theorem 18](#2nd_thm){reference-type="ref" reference="2nd_thm"}: we show that a certain ideal $I_u\lhd {\mathbb Z}[x]$ is a topological invariant of $X_u$. This invariant originates from topological K-theory but is much easier to use. In light of Theorem [Theorem 1](#main_theorem){reference-type="ref" reference="main_theorem"}, the homotopy classification of the spaces $X_u(G)$ reduces to the case of spaces with the same group $G$. We have collected enough evidence to propose the following conjecture. **Conjecture 2**. *[(]{.roman}Weak topological rigidity) Let $u,v\in G$ be unipotent elements, $H_u$ and $H_v$ -- their corresponding subgroups. The spaces $X_u(G)$ and $X_v(G)$ are homotopy equivalent if and only if there exists an automorphism $\psi : G \rightarrow G$ such that $\psi (H_u) = H_v$.* Such an automorphism $\psi$ could be inner: then $u$ and $v$ belong to the same conjugacy class. However, if the Dynkin diagram has diagram automorphisms, then not all automorphisms of $G$ are inner and different conjugacy classes can be related by $\psi$. See Section [7](#Example2){reference-type="ref" reference="Example2"} for a more detailed discussion. Let us now put forward two further, quite bold conjectures. While there is little evidence to support them, we feel that they are interesting questions: proving or disproving them would be remarkable. Notice that we need to exclude $G=X_1(G)$ since the inverse element map $x\mapsto x^{-1}$ is a homotopy equivalence, not homotopic to an equivariant map. **Conjecture 3**. *[(]{.roman}Strong topological rigidity) Suppose $u,v\in G$, $v\neq 1\neq u$ and $\phi: X_u(G) \rightarrow X_v(G)$ is a continuous map that is a homotopy equivalence.
Then $\phi$ is homotopic to a $G$-equivariant map.* All $G$-equivariant maps $X_u\rightarrow X_u$ are easy to describe: once the coset $H$ is mapped to a coset $zH$, the whole map has the form $xH \mapsto xzH$. To be well-defined, $z$ needs to belong to the normaliser ${\mathrm{N}_G}(H)$. Hence, we have a group homomorphism $\eta: {\mathrm{N}_G}(H) \rightarrow {\mathrm{Auh}\,}(X_u)$, to the group of homotopy self-equivalences. Its component group $\pi_0 ({\mathrm{N}_G}(H)) = {\mathrm{N}_G}(H)/{\mathrm{N}_G}(H)_0$ is the same as the component groups of the centralisers $\pi_0 ({\mathrm{C}_G}(H))$ and $\pi_0 ({\mathrm{C}_G}(u))$. It also coincides with the fundamental group of the conjugacy class $\pi_1 ({\mathrm{Cl}_G}(u))$. Our final conjecture is about this group. **Conjecture 4**. *The group homomorphism $\pi_0 ( \eta ): \pi_0 ({\mathrm{N}_G}(H)) \rightarrow \pi_0 ({\mathrm{Auh}\,}(X_u))$ is an isomorphism.* Note that Conjecture [Conjecture 3](#st_rigid){reference-type="ref" reference="st_rigid"} implies surjectivity of $\pi_0 ( \eta )$. Let us now provide a detailed description of the content of the present paper. We start by reminding the reader of the basic facts about $X_u$ and its homotopy groups $\pi_n (X_u)$ in Section [1](#intro){reference-type="ref" reference="intro"}. The next three sections are devoted to calculations of homotopy groups. In Section [2](#RaHoT){reference-type="ref" reference="RaHoT"}, we compute the rational homotopy type of $X_u$ and all the groups $\pi_n (X_u)\otimes {\mathbb Q}$. In Section [3](#VeryEven){reference-type="ref" reference="VeryEven"} we describe all the quite even orbits, essentially computing all $\pi_2 (X_u)$. We compute some of the higher homotopy groups $\pi_n (X_u)$ for $n \leq 6$ in Section [4](#HiHoGr){reference-type="ref" reference="HiHoGr"}: this completes our proof of Theorem [Theorem 1](#main_theorem){reference-type="ref" reference="main_theorem"}.
We turn our attention to K-theory in Section [5](#KTheory){reference-type="ref" reference="KTheory"}: it can be computed using the Hodgkin spectral sequence and a Koszul resolution. This yields $I_u$, a new homotopy invariant of $X_u$. In the penultimate Section [6](#Example){reference-type="ref" reference="Example"}, we employ our two techniques to tackle Conjecture [Conjecture 2](#wk_rigid){reference-type="ref" reference="wk_rigid"} in dimension 75. This is an interesting example because these homogeneous spaces come from three different groups of dimension 78: $B_6$, $C_6$ and $E_6$. We fall slightly short: cf. Proposition [Proposition 19](#finalprop){reference-type="ref" reference="finalprop"}. In the final Section [7](#Example2){reference-type="ref" reference="Example2"}, we clarify the effect of the Dynkin diagram automorphisms. The authors would like to thank Michael Albanese, John Greenlees, John Jones and Miles Reid for insightful information and/or inspirational conversations. # Introduction {#intro} A thoughtful reader will notice that the same questions can be asked for compact groups. Indeed, these languages are equivalent (cf. [@JoRuTh]). More precisely, $G\simeq G_c$, i.e., $G$ is homotopy equivalent to its maximal compact subgroup, while $X_u \simeq G_c/H_c$ [@RuTa Lemma 2.5]. The latter is a compact oriented manifold, hence its dimension is a homotopy invariant. **Proposition 1**. *If $X_u \simeq X_v$, then $\dim (X_u)=\dim (X_v)$.* For the rest of the section, assume that $u\neq 1$. Let $X=X_u(G)=G/H$, where $H$ is the image of $\varphi_u$. The quotient map $G \rightarrow X$ is a Serre fibration with path-connected $X$ and fibre $H$.
This yields a long exact sequence in homotopy $$\label{long_es} \dots \rightarrow \pi_i(H) \rightarrow \pi_i(G) \rightarrow \pi_i(X) \rightarrow \pi_{i-1}(H) \rightarrow \dots \rightarrow \pi_0(G) \rightarrow \pi_0(X) \rightarrow 0.$$ Since we know some of the low homotopy groups $$\label{homot_G} \pi_2(G)=\pi_2(H)=\pi_1(G)=0, \ \pi_3(G)=\pi_3(H)={\mathbb Z},$$ the end of the long exact sequence [\[long_es\]](#long_es){reference-type="eqref" reference="long_es"} looks like $$\label{long_es2} \dots \rightarrow {\mathbb Z}\xrightarrow{\pi_3 (\varphi_u)} {\mathbb Z}\rightarrow \pi_3(X) \rightarrow 0 \rightarrow 0 \rightarrow \pi_2(X) \rightarrow \pi_1(H) \rightarrow 0.$$ The integer $\nu (u) \coloneqq \pi_3 (\varphi_u)$ is the Dynkin index of the Lie algebra homomorphism $d\varphi_u: \mathfrak{sl}_2 \rightarrow \mathfrak g$ [@onishchik1994topology Theorem 17.2.2]. These values are known for all $u$ [@Dynkin1952; @PANYUSHEV201515]. Note that $H$ is either ${\mathrm{PSL}}_2 ({\mathbb C})$ and $\pi_1 (H) = {\mathbb Z}/2$, or ${\mathrm{SL}}_2 ({\mathbb C})$ and $\pi_1 (H) = 0$. Let us call the unipotent $u$ *quite even* if $H={\mathrm{PSL}}_2 ({\mathbb C})$. This is reminiscent of the notion of an even unipotent element: $u$ is even if and only if the image of ${\mathrm{SL}}_2 ({\mathbb C})$ in the adjoint group $G_{ad}=G/Z(G)$ is ${\mathrm{PSL}}_2 ({\mathbb C})$. Coupled with this information, the long exact sequence [\[long_es2\]](#long_es2){reference-type="eqref" reference="long_es2"} yields the three lowest homotopy groups. **Proposition 2**. *If $u\neq 1$ and $X_u = X_u(G)$, then $$\pi_1(X_u)=0, \ \pi_3(X_u)={\mathbb Z}/\nu (u), \ \pi_2(X_u) = \begin{cases} {\mathbb Z}/2, & \text{if } u \text{ is quite even}, \\ 0,& \text{otherwise}. \end{cases}$$* Note that if $u=1$, then the homotopy groups of $X_1=G$ are given in [\[homot_G\]](#homot_G){reference-type="eqref" reference="homot_G"}.
This is somewhat consistent with Proposition [Proposition 2](#low_pi){reference-type="ref" reference="low_pi"}: $\nu (1)=0$ because $d\varphi_1 =0$ has Dynkin index $0$. It remains to agree that $1$ is *not quite even*, although this is inconsistent with the results of Section [3](#VeryEven){reference-type="ref" reference="VeryEven"}. The maximal compact subgroup of ${\mathrm{SL}}_2 ({\mathbb C})$ is ${\mathrm{SU}}_2$, geometrically a 3-sphere $S^3$. Thus, ${\mathrm{SL}}_2 ({\mathbb C})\simeq S^3$ and every $\varphi_u$ yields an element of $\pi_3 (G)={\mathbb Z}$. This is another role of the Dynkin index. Indeed, the generator of $\pi_3 (G)= {\mathbb Z}$ is $\varphi_m$ where $m\in G$ is the minimal unipotent (the exponential of the long root vector $e_\alpha \in \mathfrak g$). Let $u\in G$ be a unipotent with Dynkin index $\nu (u)=d$. Then the following maps are homotopic: $$\label{dynkin_hom} \varphi_u \simeq \varphi_m \circ \mu_d \, : \, {\mathrm{SL}}_2 ({\mathbb C})\xrightarrow{\mu_d : x\mapsto x^{d}} {\mathrm{SL}}_2 ({\mathbb C}) \xrightarrow{\varphi_m }G\, .$$ Thus, the Dynkin index determines all the key maps in the long exact sequence [\[long_es2\]](#long_es2){reference-type="eqref" reference="long_es2"}: **Proposition 3**. *If $u\in G$ is a unipotent element with Dynkin index $\nu (u)=d$, then for all $k\geq 2$ $$\pi_k(\varphi_u) = \pi_k(\varphi_m \circ \mu_d) = \pi_k(\varphi_m) \circ \pi_k(\mu_d) = d \cdot \pi_k(\varphi_m).$$* Suppose $u,v\in G$ are non-conjugate unipotent elements, with the same Dynkin index. Then $G\rightarrow X_u$ and $G\rightarrow X_v$ are two (potentially distinct) fibrations with the same fibre. In Section [6](#Example){reference-type="ref" reference="Example"}, the reader will see many examples of these with $X_u\not\simeq X_v$. # Rational Homotopy Type {#RaHoT} Given a topological space $X$, we define the rational homotopy groups to be $\pi_n(X,{\mathbb Q}) = \pi_n(X) \otimes {\mathbb Q}$ for $n\geq 2$.
The rational homotopy category is the localisation of the category of simply-connected topological spaces in which the maps $f$ such that all $\pi_n(f,{\mathbb Q})$, $n\geq 2$, are invertible become isomorphisms. The spaces $X_u(G)$ are simply-connected and their rational homotopy types (i.e., isomorphism classes in the rational homotopy category) can be explicitly described. Let $W$ be the Weyl group of $G$. Let $$d_1 = 2 < d_2 \leq d_3 \leq \ldots \leq d_r$$ be its fundamental degrees: these are the degrees of the generators of the invariant algebra ${\mathbb C}[\mathfrak h]^W$ where $\mathfrak h$ is a Cartan subalgebra in the Lie algebra of $G$. The following result is due to Serre. **Proposition 4**. *[@felix2012rational Ch. 15(f)] In the rational homotopy category, a simply-connected simple group $G$ is isomorphic to the product of odd-dimensional spheres $$G \stackrel{{\mathbb Q}}{\simeq} \prod_{i=1}^{r} S^{2d_i - 1} \, .$$* Intuitively, in the rational homotopy category, all non-trivial homomorphisms $\varphi_u$ are the first component embedding $S^3 \rightarrow \prod_{i} S^{2d_i - 1}$, while the quotient map $G\rightarrow X_u(G)$ is the projection along the first component $\prod_{i} S^{2d_i - 1} \rightarrow \prod_{i>1} S^{2d_i - 1}$. We give a rigorous proof of a weaker version of this statement, sufficient for our purposes. **Proposition 5**. *Let $1\neq u\in G$ be a non-trivial unipotent. In the rational homotopy category, $X_u(G)\stackrel{{\mathbb Q}}{\simeq} \prod_{i=2}^{r} S^{2d_i - 1}$.* *Proof.* We will need the semifree DG-algebras $$\Lambda \langle x_1 , \ldots , x_n \rangle\coloneqq {\mathbb Q}\langle x_1 , \ldots , x_n \rangle / (x_i x_j - (-1)^{a_i a_j} x_jx_i)\, , \ a_i = \deg (x_i) .$$ These are positively graded algebras with a differential $D$ of degree $-1$. A Sullivan model for $X_u(G)$ is known: [@felix2012rational Prop.
15.16] $$\Lambda_u (G) \coloneqq \Lambda \langle x_1 , \ldots , x_r, y \rangle, \deg (x_i) = 2d_i - 1 , \deg (y) = 2, D(y)=0, D(x_i) = \alpha_i y^{d_i-1}.$$ The coefficients $\alpha_i$ are obtained by choosing the fundamental invariants for $G$ and ${\mathrm{SL}}_2$, defined over ${\mathbb Q}$ $${\mathbb Q}[\mathfrak h_{{\mathbb Q}}]^W = {\mathbb Q}[ \theta_1, \ldots , \theta_r ], \ \deg(\theta_i) = d_i, \ {\mathbb Q}[\mathfrak h_{{\mathbb Q}}({\mathrm{SL}}_2)]^{W({\mathrm{SL}}_2)} = {\mathbb Q}[ \eta ],$$ then restricting them under $d\varphi_u : \mathfrak h_{{\mathbb Q}}({\mathrm{SL}}_2) \rightarrow \mathfrak h_{{\mathbb Q}}$ $$d\varphi_u^\ast (\theta_i) = \alpha_i \eta^{d_i/2}.$$ Note that $\alpha_i=0$ for odd $d_i$ and $\alpha_1 \neq 0$. Hence, the homomorphism of the DG-algebras $$\Lambda_u (G) \rightarrow ( \Lambda \langle x_2 , \ldots , x_r \rangle, D=0) , \ x_1 \mapsto 0, y \mapsto 0, x_i \mapsto x_i, i\geq 2$$ is a quasi-isomorphism and the latter is the minimal Sullivan algebra for $\prod_{i=2}^{r} S^{2d_i - 1}$. ◻ If $X_u(G){\simeq} X_v(H)$, then $X_u(G)\stackrel{{\mathbb Q}}{\simeq} X_v(H)$. Hence, in this case, $G$ and $H$ must have the same fundamental degrees. Non-isomorphic $G$ and $H$ have the same fundamental degrees only if they are of type $B_n$ and $C_n$. Thus, Propositions [Proposition 4](#typeG){reference-type="ref" reference="typeG"} and [Proposition 5](#typeGH){reference-type="ref" reference="typeGH"} yield: **Corollary 6**. *The spaces $X_u(G)$ and $X_v(H)$ have the same rational homotopy type only in the following four cases:* 1. *$G\cong H$ and $u=1$ and $v=1$,* 2. *$G\cong H$ and $u\neq 1$ and $v \neq 1$,* 3. *$G$, $H$ have types $B_n$, $C_n$ and $u=1$ and $v=1$,* 4. *$G$, $H$ have types $B_n$, $C_n$ and $u\neq 1$ and $v \neq 1$.* # Quite Even Orbits {#VeryEven} Now we describe all quite even orbits in all simple $G$. We assume that $u\neq 1$ throughout this section.
Note that the results of this section indicate that $u=1$ should be called quite even. This disagrees with Proposition [Proposition 2](#low_pi){reference-type="ref" reference="low_pi"}: quite even $u$ correspond to the spaces $X_u(G)$ with non-trivial $\pi_2(X_u(G))$. Recall that it is usual to talk about even nilpotent elements $e\in \mathfrak g$ [@collingwood1993nilpotent Ch. 3.8]. We transfer this terminology to unipotent elements: $u=\mbox{Exp}(e)$ is even if and only if $e$ is even. Being even boils down to the parity of the dimensions of constituent $\mathfrak{sl}_2$-modules. Indeed, odd-dimensional non-trivial irreducible $\mathfrak{sl}_2$-modules are faithful representations of ${\mathrm{PSL}}_2 ({\mathbb C})$, while even-dimensional irreducible $\mathfrak{sl}_2$-modules are faithful representations of ${\mathrm{SL}}_2 ({\mathbb C})$. Thus $u$ is even if and only if the $\mathfrak{sl}_2$-module $\varphi_u^* (\mathfrak g)$ (restriction of the adjoint representation) is a direct sum of odd-dimensional irreducible $\mathfrak{sl}_2$-modules. The main task in this section is to ascertain parities of irreducible constituents of $\varphi_u^* (V)$ for some key $\mathfrak g$-modules. **Proposition 7**. *Every quite even unipotent is even. If $G$ is of type $A_{2k}$, $G_2$, $F_4$, $E_6$ or $E_8$, all non-trivial even unipotent elements are quite even.* *Proof.* Consider the surjection to the adjoint group $\psi : G \rightarrow G_{ad} = G/Z(G)$. If $u$ is quite even, then $\varphi_u ({\mathrm{SL}}_2({\mathbb C})) = {\mathrm{PSL}}_2({\mathbb C})$. Hence, $\psi(\varphi_u ({\mathrm{SL}}_2({\mathbb C})))={\mathrm{PSL}}_2({\mathbb C})$, which means $u$ is even. For the second statement, consider a non-trivial even, not quite even $u$. Then $\varphi_u ({\mathrm{SL}}_2({\mathbb C})) = {\mathrm{SL}}_2({\mathbb C})$ and $\psi(\varphi_u ({\mathrm{SL}}_2({\mathbb C})))={\mathrm{PSL}}_2({\mathbb C})$. This means that the centre $Z(G)$ contains $\varphi_u(-I_2)$, an element of order $2$.
But in all these cases, the order of $Z(G)$ is odd, so no such $u$ exists. ◻ Let $\alpha_1, \ldots , \alpha_r$ be the simple roots. Consider the vector $[u] \coloneqq (\alpha_i (h))$ where $h$ is the semisimple element in the $\mathfrak{sl}_2$-triple $e,h,f$ with $u= \mbox{Exp} (e)$. Note that the coefficients of $[u]$ are precisely the weights of the weighted Dynkin diagram of $u$ [@collingwood1993nilpotent Ch. 3.3]. In particular, $\alpha_i (h)\in \{0,1,2\}$ and $u$ is even if and only if all $\alpha_i (h)$ are even. **Proposition 8**. *Let $C$ be the Cartan matrix of $G$. Let $1\neq u \in G$ be a unipotent element. Then $u$ is quite even if and only if all the coefficients of $C^{-1}[u]$ are even.* *Proof.* The inverse Cartan matrix expresses the fundamental weights $\varpi_i$ in terms of the simple roots $\alpha_i$. Thus, $C^{-1}[u]=(\varpi_i (h))$. This vector is even if and only if all the weights of the restriction $\varphi_u^{\ast}(V)$ for any finite-dimensional $G$-module $V$ are even. This is equivalent to ${\mathrm{im}}(\varphi_u)={\mathrm{PSL}}_2({\mathbb C})$ and $u$ being quite even. ◻ Proposition [Proposition 8](#cirterion){reference-type="ref" reference="cirterion"} can be applied to any particular $u$. The next corollary is verified by this direct calculation for each $u$. We follow the standard labels for the unipotent classes [@Carter93; @collingwood1993nilpotent]. **Corollary 9**. *Let $G$ be of type $E_7$. These are its quite even classes: $$A_2, 2A_2, D_4(a_1), D_4, A_4+A_2, E_6(a_3), D_5, A_6, E_6(a_1), E_6.$$ These are its non-trivial even, but not quite even classes: $$(3A_1)^{\prime\prime}, A_2+3A_1, (A_3+A_1)^{\prime\prime}, A_3+A_2+A_1, (A_3+A_1)^{\prime\prime}, (A_5)^{\prime\prime},$$ $$D_5(a_1)+A_1, E_7(a_5), E_7(a_4),E_7(a_3),E_7(a_2),E_7(a_1),E_7.$$* We finish by describing quite even (and also even) elements in the classical types in terms of partitions.
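Before passing to partitions, note that the test of Proposition [Proposition 8](#cirterion){reference-type="ref" reference="cirterion"} is immediately machine-checkable: solve $Cx=[u]$ in exact rational arithmetic and check that $x$ has even integer entries. A minimal sketch (the helper names are ours; the weighted Dynkin diagrams fed to it must be taken from the standard tables):

```python
from fractions import Fraction

def solve(C, b):
    """Solve C x = b exactly by Gauss-Jordan elimination over the rationals."""
    n = len(C)
    M = [[Fraction(C[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def quite_even(cartan, weighted_diagram):
    """Proposition 8: u is quite even iff every entry of C^{-1}[u] is an even integer."""
    return all(v.denominator == 1 and v.numerator % 2 == 0
               for v in solve(cartan, weighted_diagram))

# Type A_2 (SL_3): the regular unipotent has weighted diagram (2, 2); here
# C^{-1}[u] = (2, 2), so it is quite even, matching the odd partition (3).
A2 = [[2, -1], [-1, 2]]
# Type A_1 (SL_2): the regular unipotent has weighted diagram (2), but
# C^{-1}[u] = (1), so it is even yet not quite even (partition (2)).
A1 = [[2]]
```

The same routine reproduces, class by class, the $E_7$ lists of Corollary 9 once the weighted diagrams are supplied.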
Suppose $G$ is one of the groups ${\mathrm{SL}}_n ({\mathbb C})$, ${\mathrm{Spin}}_n ({\mathbb C})$ or ${\mathrm{Sp}}_n ({\mathbb C})$. Restrict the natural representation of $G$ on $V={\mathbb C}^n$ to ${\mathrm{SL}}_2({\mathbb C})$ along the map $\varphi_u$, and record the dimensions of irreducible constituents $(d_1\geq d_2 \geq \ldots)$. This is the partition $p(u) \in {\mathcal P}(n)$, associated to $u$ [@collingwood1993nilpotent 5.1.7]. Recall the natural restrictions on the parities of $d_i$ in $p(u)$. They come from the fact that odd-dimensional representations of ${\mathrm{SL}}_2 ({\mathbb C})$ are orthogonal, while even-dimensional representations are symplectic. In type $C_r$, $V$ carries a symplectic form, whose restrictions to odd-dimensional constituents of $\varphi_u^{\ast} (V)$ must be zero. Thus, such constituents must come in dual isotropic pairs of spaces. Thus, all odd $d_i$ must appear an even number of times. Similarly, in types $B_r$ and $D_r$, $V$ carries an orthogonal form, whose restrictions to even-dimensional constituents of $\varphi_u^{\ast} (V)$ must be zero. Thus, such constituents must come in dual isotropic pairs of spaces and all even $d_i$ must appear an even number of times. **Theorem 10**. *Consider $G$ of classical type and a unipotent element $u\in G$, $u\neq 1$ with the corresponding partition $p(u)=(d_i)\in {\mathcal P}(n)$.* 1. *$u$ is even if and only if all $d_i$ have the same parity (all even or all odd).* 2. *If $G$ is of type $A_r$ or $C_r$, then $u$ is quite even if and only if all $d_i$ are odd.* 3. *If $G$ is of type $B_r$ or $D_r$, then $u$ is quite even if and only if the product $\prod_i d_i \equiv \pm 1 \mod 8$.* *Proof.* The central element $-I_2\in {\mathrm{SL}}_2 ({\mathbb C})$ acts as $(-1)^{d+1}$ on the $d$-dimensional irreducible $\mathfrak{sl}_2$-module. Thus, all the questions of $u$ being (quite) even are reduced to a certain $\mathfrak{sl}_2$-module having only odd-dimensional constituents.
For instance, Statement (2) is immediate since the standard representation $V={\mathbb C}^n$ is a faithful $G$-module in the types $A_r$ or $C_r$. In all the cases the adjoint $G$-module is a constituent of $V\otimes V^\ast$. This proves the "if" part of Statement (1): $-I_{2}$ acts as $1$ on $V\otimes V^\ast$. The "only if" part of Statement (1) is clear too. The adjoint $G$-module can be recovered from $V$ as $$\mathfrak g_{ad} = \frac{V\otimes V^\ast}{ {\mathbb C}I_V} \ (\mbox{type } A), \ \mathfrak g_{ad} = \Lambda^2 V \ (\mbox{types } B, D), \ \mathfrak g_{ad} = S^2 V \ (\mbox{type } C).$$ Suppose $u$ is even so that $-I_2$ acts on $\mathfrak g_{ad}$ as $1$. Consider two irreducible constituents $U_1, U_2$ of dimensions $d$ and $d^\prime$ of $\varphi_u^{\ast}V$. If $dd^\prime>1$, some constituents of $U_1\otimes U_2 \cong U_1\otimes U_2^\ast$ inevitably appear in $\mathfrak g_{ad}$. But there $-I_2$ acts as $(-1)^{d+d'+2}$, proving that $d$ and $d'$ have the same parity. In the extreme case of $d=d'=1$, we can take any other constituent $U_3$ of dimension $\tilde{d}$. The argument above shows that $\tilde{d}$ is odd so that all $d_i$ are odd. Statement (3) requires the image of $-I_2$ in a faithful representation of $G$. In type $B_r$, the image in the spinor representation is computed in Lemma [Lemma 12](#typeB){reference-type="ref" reference="typeB"}. In type $D_r$, the image in the sum of the semispinor representations is computed in Lemma [Lemma 13](#typeD){reference-type="ref" reference="typeD"}. These two lemmas complete the proof. ◻ In types $A_r$ and $D_r$, there are intermediate groups $\psi : G \rightarrow G/A$, $A\leq Z(G)$, which are neither simply-connected, nor adjoint. For completeness, let us quickly address the conditions when the image of ${\mathrm{SL}}_2 ({\mathbb C})$ in these groups is ${\mathrm{PSL}}_2 ({\mathbb C})$. Clearly, $u$ being quite even is sufficient and $u$ being even is necessary. This leaves the following cases to address.
**Corollary 11**. *The following statements hold for an even, not quite even unipotent element $u\in G$, $u\neq 1$ with the corresponding partition $p(u)=(d_i)\in {\mathcal P}(n)$.*

1. *Let $G$ be of type $A_{2k-1}$, $A= \langle e^{2\pi i /s} I_{2k} \rangle$ where $s$ divides $2k$. Then $\psi (\varphi_u ({\mathrm{SL}}_2 ({\mathbb C}))) = {\mathrm{PSL}}_2 ({\mathbb C})$ if and only if $m=2k/s$ is even.*

2. *Let $G$ be of type $D_{r}$, $G/A= {\mathrm{SO}}_{2r} ({\mathbb C})$. Then $\psi (\varphi_u ({\mathrm{SL}}_2 ({\mathbb C}))) = {\mathrm{PSL}}_2 ({\mathbb C})$ if and only if all $d_i$ are odd.*

3. *Let $G$ be of type $D_{2k}$, $G/A= {\mathrm{SSpin}}_{4k} ({\mathbb C})$, one of the semispin groups. Then $\psi (\varphi_u ({\mathrm{SL}}_2 ({\mathbb C}))) = {\mathrm{PSL}}_2 ({\mathbb C})$ if and only if the product $\prod_i d_i \equiv \pm 1 \mod 8$.*

*Proof.* Let $\varpi_1, \ldots, \varpi_r$ be the fundamental weights, so that $V=V(\varpi_1)$. Under our assumption on $u$, $-I_2$ acts as $-1$ on $V$. A faithful representation of ${\mathrm{SL}}_{2k}({\mathbb C})/A$ is $V(\varpi_m) = \Lambda^m (V)$. Statement (1) immediately follows because $-I_2$ acts as $(-1)^m$ on $\Lambda^m (V)$. Statement (2) holds because the $d_i$ are the dimensions of the irreducible constituents of $\varphi_u^{\ast}(V)$, which is a faithful representation of ${\mathrm{SO}}_{2r} ({\mathbb C})$. Statement (3) is essentially Lemma [Lemma 13](#typeD){reference-type="ref" reference="typeD"} below. ◻

To finish the proof in type $B_r$, we need to compute the action of $-I_2\in{\mathrm{SL}}_2({\mathbb C})$ on the spin representation. We briefly recall its construction, following [@fulton2013representation Section 20.1]. Let $V={\mathbb C}^n$ where $n=2r+1$.
Following [@fulton2013representation Section 18.1], we fix a basis $e_1,\ldots,e_{n}$ of $V$ and define a bilinear form $Q$ via $$Q(e_i,e_{r+i}) = Q(e_{r+i},e_{i}) =1,\hspace{3ex} Q(e_i,e_j) = 0 \text{ otherwise}.$$ The corresponding Clifford algebra (with respect to $Q$) $$C(Q) = C(V,Q) := T^{\bullet}(V)/\left(a\otimes b + b\otimes a - Q(a,b)\right)$$ admits a $\mathbb{Z}/2\mathbb{Z}$ grading $C(Q) = C(Q)_{0} \oplus C(Q)_{1}$. We have an embedding of $\mathfrak{so}(V)$ into $C(Q)_{0}$ via: $$\begin{gathered} \mathfrak{so}(V) \cong \Lambda^2(V) \longrightarrow C(Q)_{0} \\ 2(E_{i,j} - E_{r+j,r+i}) \longmapsto e_i \wedge e_{r+j} \longmapsto \left(e_ie_{r+j} - \delta_{i,j}\right) \end{gathered}$$ where $E_{i,j}$ denotes the matrix with a $1$ in position $(i,j)$ and $0$ elsewhere. Note that we write $e_ie_{r+j} \in C(Q)_{0}$ instead of $e_i \otimes e_{r+j}+(a\otimes b + \ldots)$. Now write $V = W \oplus W' \oplus U$ where $W = \langle e_1,\ldots,e_r \rangle$, $W' = \langle e_{r+1},\ldots,e_{2r}\rangle$ and $U = \langle e_{2r+1} \rangle$. Observe that $W$ and $W'$ are isotropic subspaces with respect to $Q$. Moreover, we have an identification of $W'$ and $W^*$ via $e_{r+i} \mapsto e_{i}^* = \frac{1}{2}Q(e_{r+i},-)$. Now, one can show that we have an isomorphism [@fulton2013representation Eq 20.18] $$\begin{gathered} C(Q)_{0} \cong \text{End}(\Lambda^\bullet W) \\ e_i \longmapsto L_{e_i}: (w_1 \wedge \ldots \wedge w_s \longmapsto e_i \wedge w_1 \wedge \ldots \wedge w_s) \\ e_{r+i} \longmapsto D_{e_{i}^*}: \left(w_1 \wedge \ldots \wedge w_s \longmapsto \sum_j (-1)^{j-1}\,2e_i^*(w_j)\, w_1 \wedge \ldots \wedge \widehat{w_j} \wedge \ldots \wedge w_s\right) \\ e_{2r+1} \longmapsto \big(w_1 \wedge \ldots \wedge w_s \longmapsto (-w_1) \wedge \ldots \wedge (- w_s)\big).\end{gathered}$$ This, alongside our embedding $\mathfrak{so}(V) \subset C(Q)_{0}$, gives a representation of $\mathfrak{so}(V)$ on $\Lambda^\bullet W$, known as the spin representation.
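To see these operators concretely, here is a small illustrative sketch (ours, not the code linked in the footnotes) realising the wedge operator $L_{e_i}$ and the contraction $D_{e_i^*}$ on the subset basis $e_I$ of $\Lambda^\bullet W$. It checks the Clifford-type relation $L_iD_j + D_jL_i = \delta_{ij}\,\mathrm{Id}$ behind the isomorphism $C(Q)_{0} \cong \text{End}(\Lambda^\bullet W)$, together with the fact that $2L_iD_i - \mathrm{Id}$ acts on $e_I$ as $+1$ if $i \in I$ and as $-1$ otherwise.

```python
from itertools import combinations

# Basis of the exterior algebra of W = <e_1, ..., e_r>: subsets I of
# {1, ..., r}.  A vector is a dict {frozenset(I): coefficient}.

def add(u, v):
    out = dict(u)
    for I, c in v.items():
        out[I] = out.get(I, 0) + c
    return {I: c for I, c in out.items() if c != 0}

def scale(a, v):
    return {I: a * c for I, c in v.items() if a * c != 0}

def wedge(i, v):
    """L_{e_i}: exterior multiplication by e_i, with the Koszul sign."""
    out = {}
    for I, c in v.items():
        if i in I:
            continue  # e_i wedge e_i = 0
        sign = (-1) ** sum(1 for j in I if j < i)
        out[frozenset(I | {i})] = sign * c
    return out

def contract(i, v):
    """D_{e_i^*}: contraction against e_i^*, removing e_i with its sign."""
    out = {}
    for I, c in v.items():
        if i not in I:
            continue
        sign = (-1) ** sum(1 for j in I if j < i)
        out[frozenset(I - {i})] = sign * c
    return out

r = 3
subsets = [frozenset(s) for k in range(r + 1)
           for s in combinations(range(1, r + 1), k)]
for I in subsets:
    e_I = {I: 1}
    for i in range(1, r + 1):
        # Clifford-type relation: L_i D_j + D_j L_i = delta_{ij} Id.
        for j in range(1, r + 1):
            anti = add(wedge(i, contract(j, e_I)),
                       contract(j, wedge(i, e_I)))
            assert anti == (e_I if i == j else {})
        # 2 L_i D_i - Id acts on e_I as +1 if i in I, and as -1 otherwise.
        op = add(scale(2, wedge(i, contract(i, e_I))), scale(-1, e_I))
        assert op == scale(1 if i in I else -1, e_I)
```

Here the contraction is normalised so that $2e_i^*(e_i) = 1$; under the embedding above, $h$ then acts on each $e_I$ through sums of these $\pm 1$ operators.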
We are ready to calculate the image of $h \in \mathfrak{sl}(2)$ (from the corresponding $\mathfrak{sl}_2$-triple) inside $\text{End}(\Lambda^\bullet W)$.

**Lemma 12**. *Let $p(u)=(d_i)\in {\mathcal P}(n)$ be a partition of $n=2r+1$, with all $d_i$ odd and the corresponding map $d\varphi_u: \mathfrak{sl}(2) \longrightarrow \mathfrak{so}(n)$. Then the weights appearing in $\varphi_u^{\ast} \Lambda^\bullet W$ are all even or all odd. Moreover, the weights are all even if and only if $\displaystyle\,\prod_{i=1}^k d_i \equiv \pm 1 \mod 8.$*

*Proof.* Let $h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\in\mathfrak{sl}_2$. Write our partition $[d_1,\dots,d_k]$ as $[2c_1+1,\dots,2c_k+1]$. Consider the diagonal matrix $$D = \text{diag}(2c_1, 2c_1 -2 , \ldots,4,2,2c_2, \ldots,2c_k,\ldots, 2,\underbrace{0,\ldots,0}_{\frac{k-1}{2} \text{ zeroes}}) \in M_{r\times r}({\mathbb C}).$$ Then $\varphi_u (h) = \text{diag}(D,-D,0)$ [@collingwood1993nilpotent 5.2.4]. Tracing through the maps $$\mathfrak{so}(V) \cong \Lambda^2 V \longrightarrow C(Q)_{0} \longrightarrow \text{End}(\Lambda^\bullet W)$$ associates to $h$ the endomorphism $$\sum_{i=1}^{k} \sum_{j=1}^{c_i} (c_i +1-j)\left(2L_{e_{c_1+\dots+c_{i-1}+j}} \circ D_{e_{c_1+\dots+c_{i-1}+j}^*} -\textup{Id}\right) \in \text{End}(\Lambda^\bullet W).$$ Now we have for $e_I := e_{i_1} \wedge \dots \wedge e_{i_s} \in \Lambda^\bullet W$ [@fulton2013representation within proof of 20.15]: $$\left(2L_{e_i} \circ D_{e_{i}^*} - \textup{Id}\right)(e_I) = \begin{cases} e_I & i \in I \\ -e_I & i \not\in I \end{cases}.$$ It follows that each $e_I$, $I \subseteq \{1,\dots,r\}$, is a weight vector.
The weights of $h$ correspond to the $2^r$ ways to assign signs to the sum $$\pm c_1 \pm (c_1 - 1) \pm \dots \pm 1 \pm c_2 \pm \dots \pm 1 \pm \dots \pm c_k \pm \dots \pm 1 \underbrace{\pm 0 \pm \dots \pm 0}_{\frac{k-1}{2} \text{ zeroes}}.$$ In particular, they are all even or all odd: changing a single sign results in adding or subtracting twice a single summand. To determine whether they are even or odd, we can choose all signs to be $+$. Then $$\begin{aligned} \text{All weights are even} &\iff \sum_{i=1}^k \frac{c_i(c_i+1)}{2} \equiv 0 \mod 2 \\ &\iff \left|\left\{c_i : \frac{c_i(c_i+1)}{2} \equiv 1 \mod 2 \right\}\right| \text{ is even}\\ &\iff \left|\left\{c_i : c_i \equiv 1,2 \mod 4 \right\}\right| \text{ is even} \\ &\iff \left|\left\{d_i : d_i \equiv 3,5 \mod 8 \right\}\right| \text{ is even} \\ &\iff \prod_{i=1}^k d_i \equiv \pm 1 \mod 8 \end{aligned}$$ ◻

Let us finish the proof in type $D_r$. Similarly to type $B_r$, we need to compute the action of $h$ on the half-spin representation $\Lambda^{2\bullet} W$ or $\Lambda^{2\bullet +1}W$, where $W={\mathbb C}^r$ is a Lagrangian subspace in $V={\mathbb C}^{2r}$.

**Lemma 13**. *Let $p(u)=(d_i)\in {\mathcal P}(n)$ be a partition of $n=2r$, with all $d_i$ odd and corresponding map $d\varphi_u: \mathfrak{sl}(2) \longrightarrow \mathfrak{so}(n)$. Then the weights appearing in $\varphi_u^{\ast} \Lambda^{2\bullet} W$ (or in $\varphi_u^{\ast} \Lambda^{2\bullet+1} W$) are all even or all odd. Moreover, the weights are all even if and only if $\displaystyle\,\prod_{i=1}^k d_i \equiv \pm 1 \mod 8.$*

*Proof.* The proof goes verbatim to the type $B_r$ proof. The key difference in this case is that $$C(Q)_{0} \cong \text{End}(\Lambda^{2\bullet}W) \oplus \text{End}(\Lambda^{2\bullet +1}W). \ \ \mbox{\cite[20.13]{fulton2013representation}}$$ This yields the two semispin representations of $\mathfrak{so}(V)$.
Following in the footsteps of the type $B_r$ proof, we observe that the weights of $h$ on $\varphi_u^{\ast}\Lambda^{2\bullet} W$ and $\varphi_u^{\ast} \Lambda^{2\bullet+1} W$ are the $2^{r-1}$ ways to assign signs to the sum $$\pm c_1 \pm (c_1 - 1) \pm \dots \pm 1 \pm c_2 \pm \dots \pm 1 \pm \dots \pm c_k \pm \dots \pm 1 \underbrace{\pm 0 \pm \dots \pm 0}_{\frac{k}{2} \text{ zeroes}}$$ such that an even (resp. odd) number of the signs are $'+'$. In particular, the finale of the type $B_r$ proof works here too. ◻

# Higher Homotopy Groups {#HiHoGr}

As seen in Corollary [Corollary 6](#RatHomTypeCorollary){reference-type="ref" reference="RatHomTypeCorollary"}, rational homotopy theory does not differentiate between the spaces $X_u({\mathrm{Spin}}_{2r+1})$ and $X_v({\mathrm{Sp}}_{2r})$ -- we skip ${\mathbb C}$ in this section. However, these spaces can be distinguished by considering several other homotopy groups. Recall that, by Bott periodicity, we know the following homotopy groups: $$\begin{aligned} \pi_3(H) = {\mathbb Z}, \pi_4(H) = {\mathbb Z}/2, \ & \pi_5(H) = {\mathbb Z}/2, \pi_6(H) = {\mathbb Z}/12, \\ \pi_3({\mathrm{Spin}}_{2r+1} ) = {\mathbb Z}, \pi_4({\mathrm{Spin}}_{2r+1} ) = 0, \ & \pi_5({\mathrm{Spin}}_{2r+1} ) = 0, \pi_6({\mathrm{Spin}}_{2r+1} ) = 0, \\ \pi_3({\mathrm{Sp}}_{2r} ) = {\mathbb Z}, \pi_4({\mathrm{Sp}}_{2r} ) = {\mathbb Z}/2, \ & \pi_5({\mathrm{Sp}}_{2r} ) = {\mathbb Z}/2, \pi_6({\mathrm{Sp}}_{2r} ) = 0.\end{aligned}$$ where, as before, $H$ is ${\mathrm{SL}}_2$ or ${\mathrm{PSL}}_2$.
Knowing these, the long exact sequence [\[long_es\]](#long_es){reference-type="eqref" reference="long_es"} yields some information about the homotopy groups of $X_u(G)$, summarised in the following table.

|         | ${\mathrm{Spin}}_{2r+1}/H$ | ${\mathrm{Sp}}_{2r}/H$ |
|---------|----------------------------|------------------------|
| $\pi_3$ | ${\mathbb Z}/d{\mathbb Z}$ | ${\mathbb Z}/d{\mathbb Z}$ |
| $\pi_4$ | $0$ | ${\mathbb Z}/2{\mathbb Z}$ if $b_4 = 0$; $\ 0$ if $b_4 = 1$ |
| $\pi_5$ | ${\mathbb Z}/2{\mathbb Z}$ | ${\mathbb Z}/2{\mathbb Z}\oplus {\mathbb Z}/2{\mathbb Z}$ or ${\mathbb Z}/4{\mathbb Z}$ if $(b_4,b_5) = (1,1)$; $\ {\mathbb Z}/2{\mathbb Z}$ if $(b_4,b_5) = (1,0)$ or $(0,1)$; $\ 0$ if $(b_4,b_5) = (0,0)$ |
| $\pi_6$ | ${\mathbb Z}/2{\mathbb Z}$ | ${\mathbb Z}/2{\mathbb Z}$ if $b_5 = 0$; $\ 0$ if $b_5 = 1$ |

Here the values $d$, $b_4$ and $b_5$ are given as follows:

- $d$ is the Dynkin index, i.e., the map $d: \pi_3(H) ={\mathbb Z}\rightarrow \pi_3(G) ={\mathbb Z}$,

- $b_4$ represents the map $b_4: \pi_4(H) = {\mathbb Z}/2{\mathbb Z}\rightarrow \pi_4({\mathrm{Sp}}_{2r} ) = {\mathbb Z}/2{\mathbb Z}$,

- $b_5$ represents the map $b_5: \pi_5(H) = {\mathbb Z}/2{\mathbb Z}\rightarrow \pi_5({\mathrm{Sp}}_{2r} ) = {\mathbb Z}/2{\mathbb Z}$.

By Proposition [Proposition 3](#Dynkin_in){reference-type="ref" reference="Dynkin_in"}, these three values are not independent. We can say a bit more if we can compute $\pi_\ast (\varphi_m)$. The proof of the next result was explained to us by Michael Albanese [@stackex5].

**Proposition 14**. *If $G$ is of type $C_{r}$, $r\geq 1$, then $\pi_k(\varphi_m) = 1$ for $k \leq 5$.*

*Proof.* We begin by noting that ${\mathrm{SL}}_2 \simeq {\mathrm{Sp}}(1)$ and $G\simeq {\mathrm{Sp}}(r)$ (these are the quaternionic unitary groups).
Hence, the induced map $\pi_*({\mathrm{SL}}_2) \xrightarrow{\varphi_m} \pi_*({\mathrm{Sp}}_{2r} )$ may be thought of as a map $\pi_*({\mathrm{Sp}}(1)) \rightarrow \pi_*({\mathrm{Sp}}(r))$. Also, note that since $\pi_4({\mathrm{Sp}}_{2r} ) = \pi_5({\mathrm{Sp}}_{2r} ) = {\mathbb Z}/2$, showing that $\pi_4(\varphi_m) = \pi_5(\varphi_m) = 1$ is equivalent to showing that these maps are isomorphisms. Now, the quaternionic unitary group ${\mathrm{Sp}}(r)$ acts transitively on the sphere $S^{4r-1} \subset {\mathbb H}^r$, with stabiliser ${\mathrm{Sp}}(r-1)$. Choosing a base point $x = (0,0,\ldots,0,1)^T \in S^{4r-1} \subset {\mathbb H}^r$ gives us the maps $$\rho: {\mathrm{Sp}}(r) \rightarrow S^{4r-1}, \ \rho(A) = Ax, \ \iota_r: {\mathrm{Sp}}(r-1) \rightarrow {\mathrm{Sp}}(r), \ \iota_r(A) = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}$$ that form a fibre bundle $${\mathrm{Sp}}(r-1) \rightarrow {\mathrm{Sp}}(r) \xrightarrow{\rho} S^{4r-1} \, ,$$ which, in its turn, induces a long exact sequence in homotopy groups $$\ldots \to \pi_{k+1}(S^{4r-1}) \to \pi_k({\mathrm{Sp}}(r-1)) \xrightarrow{(\iota_r)_*} \pi_k({\mathrm{Sp}}(r)) \xrightarrow{\rho_*} \pi_k(S^{4r-1}) \to \ldots$$ Notice that $\pi_{k}(S^{4r-1}) = \pi_{k+1}(S^{4r-1}) = 0$ for $k \leq 4r-3$ and so it immediately follows that ${\pi_k (\iota_r): \pi_k({\mathrm{Sp}}(r-1)) \rightarrow \pi_k({\mathrm{Sp}}(r))}$ is an isomorphism for $k \leq 4r - 3$. Now define $\iota \coloneqq \iota_r \circ \ldots \circ \iota_2$. We observe that $$\iota : {\mathrm{Sp}}(1) \rightarrow {\mathrm{Sp}}(r), \ \iota (A) = \begin{pmatrix} A & 0 \\ 0 & I_{r-1} \end{pmatrix}, \ \pi_k (\iota) = \pi_k (\iota_r) \circ \ldots \circ \pi_k (\iota_2)\, .$$ In particular, $\iota \simeq \varphi_m$.
Observe that for $m \geq 2$ we have $4 \leq 5 \leq 4m - 3$, and so $$\pi_4 (\iota_m): \pi_4({\mathrm{Sp}}(m-1)) \rightarrow \pi_4({\mathrm{Sp}}(m)) \mbox{ and } \pi_5 (\iota_m): \pi_5({\mathrm{Sp}}(m-1)) \rightarrow \pi_5({\mathrm{Sp}}(m))$$ are isomorphisms. Since a composition of isomorphisms is also an isomorphism, we have that $$\pi_4 (\varphi_m) : \pi_4({\mathrm{Sp}}(1)) \rightarrow \pi_4({\mathrm{Sp}}(r)) \text{ and } \pi_5 (\varphi_m) : \pi_5({\mathrm{Sp}}(1)) \rightarrow \pi_5({\mathrm{Sp}}(r))$$ are isomorphisms, as required. ◻

**Corollary 15**. *For all unipotent elements $u \in {\mathrm{Spin}}_{2r+1}$, $v \in {\mathrm{Sp}}_{2r}$, $n \geq 3$, we have $X_u({\mathrm{Spin}}_{2r+1} ) \not\simeq X_v({\mathrm{Sp}}_{2r} )$.*

*Proof.* Looking at the table above, we have $b_4 \equiv b_5 \equiv d \,(\text{mod } 2)$ by the proposition. It follows that $\pi_5\left(X_u({\mathrm{Spin}}_{2r+1} )\right) \neq \pi_5\left(X_v({\mathrm{Sp}}_{2r} )\right)$ and $X_u({\mathrm{Spin}}_{2r+1} ) \not\simeq X_v({\mathrm{Sp}}_{2r} ).$ ◻

Notice that Corollaries [Corollary 15](#BC_separate){reference-type="ref" reference="BC_separate"} and [Corollary 6](#RatHomTypeCorollary){reference-type="ref" reference="RatHomTypeCorollary"} together prove Theorem [Theorem 1](#main_theorem){reference-type="ref" reference="main_theorem"}.

# K-Theory {#KTheory}

Recall that if $X\simeq Y$ then we have an isomorphism of the topological K-theories $K(X) \cong K(Y)$ (as graded $\lambda$-rings). Topological K-theory is a powerful invariant for working with the spaces $X_u=X_u(G)$. Let us describe it. Recall that the representation ring of a simple simply-connected group $G$ of rank $r$ is polynomial: $$R(G) = \mathbb{Z}[y_1,\dots,y_r] , \ \mbox{ where } \ y_i = [V(\varpi_i)]$$ and $\varpi_i$ is the $i$-th fundamental weight.
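The restriction classes $\overline{y_i}$ appearing below are computed by exactly this kind of character arithmetic. As a small illustrative sketch (ours, not the code linked in the footnotes): the class of the $d$-dimensional irreducible ${\mathrm{SL}}_2({\mathbb C})$-module is an integer polynomial $p_d$ in the class $x$ of the $2$-dimensional module, via the Clebsch--Gordan rule $V(\varpi_1)\otimes V((d-1)\varpi_1) \cong V(d\varpi_1) \oplus V((d-2)\varpi_1)$, i.e., the recursion $p_{d+1} = x\,p_d - p_{d-1}$ with $p_1 = 1$, $p_2 = x$.

```python
def irrep_class(d):
    """Class of the d-dimensional irreducible SL_2(C)-module in Z[x],
    as a coefficient list [c_0, c_1, ...], where x is the class of the
    2-dimensional module.  Uses the recursion p_{d+1} = x*p_d - p_{d-1}."""
    p_prev, p = [1], [0, 1]  # p_1 = 1 and p_2 = x
    if d == 1:
        return p_prev
    for _ in range(d - 2):
        p_next = [0] + p  # multiply by x, i.e. shift coefficients up
        for k, c in enumerate(p_prev):
            p_next[k] -= c
        p_prev, p = p, p_next
    return p

def evaluate(poly, t):
    return sum(c * t ** k for k, c in enumerate(poly))

# The 3-dimensional module gives the class x^2 - 1, and evaluating at
# x = 2 (the dimension of the 2-dimensional module) returns d.
assert irrep_class(3) == [-1, 0, 1]
assert all(evaluate(irrep_class(d), 2) == d for d in range(1, 12))
```

Summing such polynomials over the constituents of $\varphi_u^{\ast} V(\varpi_i)$ expresses each restriction class as an element of $\mathbb{Z}[x]$.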
Similarly, $$R({\mathrm{SL}}_2(\mathbb{C})) = \mathbb{Z}[x] \supseteq R({\mathrm{PSL}}_2 ({\mathbb C})) = \mathbb{Z}[x'], \ \mbox{ where } \ x'=x^2-1$$ so that $x = [V(\varpi_1)]$ is the class of the 2-dimensional simple representation and $x' = [V(2\varpi_1)]$ is that of the 3-dimensional one. We will suppress the prime and just write $R(H) =\mathbb{Z}[x]$ for the representation ring of $H$ in both cases, with $d\in\{2,3\}$ being the dimension of the module represented by $x$. It is well known that the $G$-equivariant K-theory of a homogeneous space $G/H$ is related to the representation ring of $H$; namely, $K_{G}^0\left( X_u \right) = R(H)$ [@segal1968equivariant Ex. 2.ii]. The full K-theory of $X_u$ is attainable via the Hodgkin spectral sequence, a version of the Künneth formula in equivariant K-theory [@hodgkin1968equivariant] (cf. [@minami1975k]). The multiplicative 2-periodic spectral sequence $$E_2^{*,0} \coloneqq {\mathrm{Tor}}_{R(G)}^{*}\left(K_G^0(G),K_G^0\left(X_u \right)\right) = {\mathrm{Tor}}_{R(G)}^{*}\left({\mathbb Z},R(H)\right) \implies K^* ( X_u )$$ collapses, that is, all differentials ${\mathbf d}_k$ vanish for $k \geq 2$. The $R(G)$-module structures are given by $$\label{maps} {\mathbb Z}[y_1, \ldots, y_r] \rightarrow {\mathbb Z}[x], \ y_i \mapsto \overline{y_i} \ \ \mbox{ and } \ \ {\mathbb Z}[y_1, \ldots, y_r] \rightarrow {\mathbb Z}, \ y_i \mapsto d_i$$ where $d_i = \dim (V(\varpi_i))$ and $\overline{y_i}= [\varphi_u^{\ast} V(\varpi_i)]$. The collapse of the multiplicative spectral sequence yields the next result.

**Proposition 16**.
*There is an isomorphism of rings $$K^0 (X_u) \cong {\mathrm{Tor}}_{R(G)}^{2\bullet }\left({\mathbb Z},R(H)\right) = \bigoplus_{k\geq 0} {\mathrm{Tor}}_{{\mathbb Z}[y_1, \cdots, y_r]}^{2k}\left({\mathbb Z},{\mathbb Z}[x]\right)$$ and a compatible isomorphism of $K^0 (X_u)$-modules $$K^1 (X_u) \cong {\mathrm{Tor}}_{R(G)}^{1+2\bullet}\left({\mathbb Z},R(H) \right) = \bigoplus_{k\geq 0} {\mathrm{Tor}}_{{\mathbb Z}[y_1, \cdots, y_r]}^{2k+1}\left({\mathbb Z},{\mathbb Z}[x]\right) \, .$$*

Notice that Proposition [Proposition 16](#KT_presentation){reference-type="ref" reference="KT_presentation"} equips $K^0(X_u)$ with a structure of a graded ${\mathbb Z}[x]$-algebra. Let us note its degree $0$ piece $$\label{Iu_def} K^0(X_u)_0 \cong R(H) \otimes_{R(G)} {\mathbb Z}= {\mathbb Z}[x] / I_u \ \mbox{ where } \ I_u = (\overline{y_1}-d_1, \ldots , \overline{y_r}-d_r ) \, .$$ If $X_u\simeq X_v$, we get two graded algebra structures on the same ring. Yet we can relate the zero-degree pieces (cf. Theorem [Theorem 18](#2nd_thm){reference-type="ref" reference="2nd_thm"})! To compute the Tor groups in Proposition [Proposition 16](#KT_presentation){reference-type="ref" reference="KT_presentation"}, we need a DG-algebra resolution.
Since ${\mathbb Z}\cong {\mathbb Z}[\underline{y}] /(y_1-d_1,\dots,y_r-d_r)$ as $R(G)$-modules, a convenient resolution is given by the Koszul complex[^1] $\; {\mathbb K}_\bullet (y_1-d_1, \ldots, y_r-d_r)$ $$0 \longrightarrow \bigwedge^r \mathbb{Z}[\underline{y}]^r \longrightarrow \bigwedge^{r-1} \mathbb{Z}[\underline{y}]^r\longrightarrow \dots \longrightarrow \mathbb{Z}[\underline{y}]^r \xrightarrow{(y_i-d_i)_{i=1}^r}\mathbb{Z}[\underline{y}] \longrightarrow 0.$$ Tensoring with ${\mathbb Z}[x]$ gives us a Koszul complex over ${\mathbb Z}[x]$ $${\mathbb K}_\bullet (y_1-d_1, \ldots , y_r-d_r) \otimes_{\mathbb{Z}[\underline{y}]} \mathbb{Z}[x] = {\mathbb K}_\bullet ( \overline{y_1}-d_1, \ldots , \overline{y_r}-d_r)$$ that, in turn, yields the master formula $$\label{computeTor} {\mathrm{Tor}}_{R(G)}^{l}\left({\mathbb Z},R(H)\right) = H_l \left( {\mathbb K}_\bullet ( \overline{y_1}-d_1, \ldots , \overline{y_r}-d_r) \right) \ \mbox{ for all } l \, .$$ The homology of the Koszul complex over ${\mathbb Z}[x]$ is computable, although not in the closed form of the next lemma. A careful reader may observe that the lemma also works (after careful adjustment) over a Dedekind domain.

**Lemma 17**. *Suppose $R$ is a PID. Let ${\mathbb K}_\bullet = {\mathbb K}_\bullet (a_1,\dots,a_m)$, $a_i \in R$, denote the Koszul complex over $R$ $$0 \rightarrow R \xrightarrow{{\mathbf d}_m} R^m \xrightarrow{{\mathbf d}_{m-1}} R^{\binom{m}{m-2}} \xrightarrow{{\mathbf d}_{m-2}} \dots \longrightarrow R^{\binom{m}{2}} \xrightarrow{{\mathbf d}_2} R^m \xrightarrow{{\mathbf d}_1} R \rightarrow 0$$ Setting ${\mathbf d}_0 = {\mathbf d}_{m+1} = 0$ and $a=\gcd(a_1,\ldots,a_m)$, we have $$H_i ({\mathbb K}_\bullet) = \left(\faktor{R}{\left(a\right)}\right)^{\binom{m-1}{i}} \text{ for all }0 \leq i \leq m.$$*

*Proof.* Pick a prime $p\in R$.
Consider the localised Koszul complex $${\mathbb K}_{(p)} (a_1,\dots,a_m) = {\mathbb K}(a_1,\dots,a_m) \otimes_R R_{(p)} \, .$$ For any units $u_i \in R_{(p)}$, the complexes ${\mathbb K}_{(p)} (a_1,\ldots,a_m)$ and ${\mathbb K}_{(p)} (u_1a_1,\dots,u_ma_m)$ are quasi-isomorphic via the obvious chain map. In particular, they share the same homology. Thus, by choosing the $u_i$ such that $u_ia_i = p^{k_i}$ for some $k_i$, and assuming (without loss of generality) that $k_1 \leq \dots \leq k_m$, we conclude that $$H_\bullet\big( {\mathbb K}_{(p)} (a_1,\ldots,a_m) \big) = H_\bullet\big({\mathbb K}_{(p)} ( p^{k_1},\ldots,p^{k_m})\big) \, .$$ By inspection of the differentials ${\mathbf d}_i$ of ${\mathbb K}_{(p)} ( p^{k_1},\ldots,p^{k_m})$, we see that ${\mathrm{im}}({\mathbf d}_{i+1})$ is given by $R_{(p)}$-linear combinations of those columns $c$ that contain a $p^{k_1}$ term. Moreover, since $p^{k_1}$ divides each $p^{k_j}$, $j \geq 1$, we have ${p^{-k_1}}\cdot c \in {R_{(p)}}^{\binom{m}{i}}$ for each such column $c$. In particular, we have ${p^{-k_1}}\cdot c \in \ker({\mathbf d}_i)$ for all such $c$. Furthermore, any element of the kernel is a linear combination of these scaled columns. Now, there are precisely $\binom{m-1}{i}$ of these columns, namely the number of ways to choose exactly $i$ elements without replacement from the set $\{p^{k_2},\dots,p^{k_m}\}$. Combining both observations, we conclude that $$H_i ({\mathbb K}_{(p)}) = \bigoplus_{j=1}^{\binom{m-1}{i}}\, \faktor{R_{(p)}}{p^{k_1}R_{(p)}} = \left( \faktor{R_{(p)}}{\left(\gcd (p^{k_1},\dots,p^{k_m})\right)} \right)^{\binom{m-1}{i}} .$$ Let $P$ be the set of all primes $p\in R$ such that $p$ divides one of the $a_i$. If $p\not\in P$, then $k_1=0$ and the localisation at such $p$ "kills" the homology of ${\mathbb K}_\bullet$. A direct sum of localisations at $p\in P$ gives us the required result.
◻

Recall that the ideal $I_u$ is defined in [\[Iu_def\]](#Iu_def){reference-type="eqref" reference="Iu_def"}. We are ready for the main result of the section.

**Theorem 18**. *If $X_u\simeq X_v$, then $I_u=I_v$.*

*Proof.* Notice that $u$ and $v$ are both quite even or both not quite even. This means we can use the same $d\in\{2,3\}$ for the two realisations of $K^0(X_u)$ as the homology of the Koszul complexes $$K^0(X_u) \cong H_{2\bullet} ({\mathbb K}(u)) \cong H_{2\bullet} ({\mathbb K}(v))\, .$$ Moreover, we know that $$H_{0} ({\mathbb K}(u)) = {\mathbb Z}[x] / I_u, \ H_{0} ({\mathbb K}(v)) = {\mathbb Z}[x] / I_v \ \mbox{ and } \ \sqrt{I_u} = \sqrt{I_v} = (x-d).$$ The latter follows from the fact that $x-d$ is a nilpotent element in $K(X_u)$. Indeed, $x-d$ belongs to the reduced K-theory $\widetilde{K} (X_u)$, the kernel of the map $K(X_u) \rightarrow {\mathbb Z}$. As observed before Proposition [Proposition 1](#Xu_dim){reference-type="ref" reference="Xu_dim"}, $X_u\simeq G_c/H_c$ and the latter is compact, so it can be covered by $m$ open contractible subsets. It is a standard topological fact that $\widetilde{K} (X_u)$ is $m$-nilpotent [@HatcherVB Ex. 2.13]. Let ${\mathbb F}={\mathbb Z}/ (p)$. In ${\mathbb F}[x]$ the reductions $\overline{I_u}$ and $\overline{I_v}$ are still contained in $(x-d)$; hence $\overline{I_u}=(x-d)^a$ and $\overline{I_v}=(x-d)^b$ for some positive integers $a$ and $b$. Now compare the homology of the Koszul complexes modulo a prime $p$.
By Lemma [Lemma 17](#DedkindKoszullemma){reference-type="ref" reference="DedkindKoszullemma"}, $$H_{2\bullet} ({\mathbb K}(u)\otimes_{{\mathbb Z}} {\mathbb F}) \cong \Lambda^{2\bullet} \left(\dfrac{{\mathbb F}[x]}{(x-d)^a}\right)^{r-1}, \ H_{2\bullet} ({\mathbb K}(v)\otimes_{{\mathbb Z}} {\mathbb F}) \cong \Lambda^{2\bullet} \left(\dfrac{{\mathbb F}[x]}{(x-d)^b}\right)^{r-1} .$$ By the universal coefficients theorem, both reduced Koszul complexes fit into the same short exact sequence of vector spaces $$0 \rightarrow H_{2\bullet} ({\mathbb K})\otimes_{{\mathbb Z}} {\mathbb F}\rightarrow H_{2\bullet} ({\mathbb K}\otimes_{{\mathbb Z}} {\mathbb F}) \rightarrow \mbox{Tor}_1^{{\mathbb Z}}(H_{-1+2\bullet} ({\mathbb K}) , {\mathbb F}) \rightarrow 0$$ that can be rewritten with $c\in \{a,b\}$ as $$0 \rightarrow K^0 (X_u)\otimes {\mathbb F}\rightarrow \Lambda^{2\bullet} \left(\dfrac{{\mathbb F}[x]}{(x-d)^c}\right)^{r-1} \rightarrow p\mbox{\,-Tor} ( K^1 (X_u)) \rightarrow 0\, .$$ Dimension counting proves that $a=b$. Thus, $\overline{I_u} = \overline{I_v}$ for every prime $p$. It remains to show that this implies $I_u = I_v$. Consider the quotient $Q_u \coloneqq (I_u+I_v)/I_u$. We will show that $Q_u = 0$. Analogously, $(I_u+I_v)/I_v=0$ and $I_u = I_v$, as required. Since $\overline{I_u} = \overline{I_v}$ for each prime $p$, we have $Q_u \otimes_{{\mathbb Z}} \mathbb{Z}/p\mathbb{Z} = 0$ for all $p$. Furthermore, $I_u + I_v\subseteq {\mathbb Z}[x]$ has no $p$-torsion. Looking at the map ${\mathrm{Tor}}_1(I_u+I_v,\mathbb{Z}/p\mathbb{Z}) \rightarrow {\mathrm{Tor}}_1(Q_u,\mathbb{Z}/p\mathbb{Z})$, we conclude that ${\mathrm{Tor}}_1(Q_u,\mathbb{Z}/p\mathbb{Z}) = 0$. Tensoring a projective resolution of $\mathbb{Z}/p\mathbb{Z}$ with $Q_u$ gives us that the sequence $$0 \longrightarrow Q_u \otimes \mathbb{Z} \xrightarrow{p} Q_u \otimes \mathbb{Z} \longrightarrow \underbrace{Q_u \otimes \mathbb{Z}/p\mathbb{Z}}_{0}$$ is exact. 
Therefore, multiplication by $p$ is an automorphism of $Q_u$ for each prime $p$, or, in other words, $Q_u$ has the structure of a ${\mathbb Q}$-vector space. This turns $Q_u$ into a ${\mathbb Q}[x]$-module. Finally, suppose $Q_u\neq 0$. Choose a maximal submodule $\mathfrak{m}\subseteq Q_u$ and consider $Q_u / \mathfrak{m}$. It is a simple ${\mathbb Q}[x]$-module, so it is a number field. Now observe that $I_u$, $I_v$, $Q_u$ and $Q_u / \mathfrak{m}$ are all finitely generated ${\mathbb Z}[x]$-modules. This is a contradiction, since a number field is not a finitely generated ${\mathbb Z}[x]$-module. ◻

# Example: Homogeneous spaces of dimension $75$ {#Example}

In light of Proposition [Proposition 1](#Xu_dim){reference-type="ref" reference="Xu_dim"}, it is interesting to consider the spaces dimension by dimension. Suppose $\dim X_u(G)=75$. There are no simple groups of dimension 75, but there are three groups of dimension 78: types $B_6$, $C_6$ and $E_6$. It follows that $u\neq 1$: there are 93 potentially different spaces (the same as the number of non-trivial unipotent classes in these groups). Theorem [Theorem 1](#main_theorem){reference-type="ref" reference="main_theorem"} splits the spaces into three buckets: spaces coming from different groups are not homotopy equivalent. Then we can make use of Dynkin indices, cf. Proposition [Proposition 2](#low_pi){reference-type="ref" reference="low_pi"}. In the case of $G$ of type $E_6$, all Dynkin indices are distinct, cf. [@Dynkin1952 Table 18]. Thus, all 20 non-trivial unipotent classes give (homotopically) distinct spaces $X_u$. In the cases of $B_6$ and $C_6$, the numbers of orbits are 34 and 39, respectively. The Dynkin index does not distinguish all spaces completely, leaving only a small number of cases to consider. Note that some of them are distinguished using $\pi_2 (X_u)$, cf. Proposition [Proposition 2](#low_pi){reference-type="ref" reference="low_pi"}.
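The "Dynkin index" and "Quite even?" columns of the tables below can be recomputed from the partitions alone. Here is a small sketch (ours, not the code linked in the footnotes); it assumes the standard facts that a $d$-dimensional $\mathfrak{sl}_2$-constituent contributes $h$-weights $d-1, d-3, \ldots, -(d-1)$, and that the natural modules of ${\mathrm{Spin}}_{13} ({\mathbb C})$ and ${\mathrm{Sp}}_{12} ({\mathbb C})$ have Dynkin index $2$ and $1$, respectively.

```python
def dynkin_index(partition, natural_index):
    """Dynkin index of the sl_2-subalgebra with the given partition:
    the sum of squared h-weights on the natural module, divided by twice
    the Dynkin index of that module.  A d-dimensional constituent has
    weights d-1, d-3, ..., -(d-1), whose squares sum to d*(d*d - 1)/3."""
    weight_sum = sum(d * (d * d - 1) // 3 for d in partition)
    index, rem = divmod(weight_sum, 2 * natural_index)
    assert rem == 0  # sanity check on the input partition
    return index

def is_quite_even(partition, g_type):
    """The criterion of Theorem 10 (2)-(3)."""
    if g_type in ("A", "C"):
        return all(d % 2 == 1 for d in partition)
    prod = 1
    for d in partition:
        prod = prod * d % 8
    return prod in (1, 7)  # product = +-1 mod 8 in types B, D

# Spot checks against the type B_6 table (natural module of index 2):
assert dynkin_index([5, 5, 1, 1, 1], 2) == 20
assert dynkin_index([5, 3, 1, 1, 1, 1, 1], 2) == 12
assert is_quite_even([5, 5, 1, 1, 1], "B") and not is_quite_even([5, 4, 4], "B")

# Spot checks against the type C_6 table (natural module of index 1):
assert dynkin_index([4, 2, 1, 1, 1, 1, 1, 1], 1) == 11
assert dynkin_index([3, 3, 2, 2, 2], 1) == 11
```

Running it over all rows reproduces both columns of the two tables.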
- Remaining orbits in type $B_6$ and their Dynkin indices:

| Partition of $13$ | Dynkin diagram | Dynkin index | Quite even? (Y/N) |
|---|---|---|---|
| $[5^2, 1^3]$ | $\triangle(0, 2, 0, 2, 0, 0)$ | $20$ | Y |
| $[5, 4^2]$ | $\triangle(1, 0, 1, 1, 0, 1)$ | $20$ | N |
| $[5, 3, 1^5]$ | $\triangle(2, 0, 2, 0, 0, 0)$ | $12$ | Y |
| $[5, 2^4]$ | $\triangle(2, 1, 0, 0, 0, 1)$ | $12$ | N |
| $[4^2, 3, 1^2]$ | $\triangle(0, 1, 1, 0, 1, 0)$ | $12$ | N |
| $[5, 2^2, 1^4]$ | $\triangle(2, 1, 0, 1, 0, 0)$ | $11$ | N |
| $[4^2, 2^2, 1]$ | $\triangle(0, 2, 0, 0, 0, 1)$ | $11$ | N |
| $[5, 1^8]$ | $\triangle(2, 2, 0, 0, 0, 0)$ | $10$ | N |
| $[4^2, 1^5]$ | $\triangle(0, 2, 0, 1, 0, 0)$ | $10$ | N |
| $[3^2, 1^7]$ | $\triangle(0, 2, 0, 0, 0, 0)$ | $4$ | Y |
| $[3, 2^4, 1^2]$ | $\triangle(1, 0, 0, 0, 1, 0)$ | $4$ | N |
| $[3, 2^2, 1^6]$ | $\triangle(1, 0, 1, 0, 0, 0)$ | $3$ | N |
| $[2^6, 1]$ | $\triangle(0, 0, 0, 0, 0, 1)$ | $3$ | N |
| $[3, 1^{10}]$ | $\triangle(2, 0, 0, 0, 0, 0)$ | $2$ | N |
| $[2^4, 1^5]$ | $\triangle(0, 0, 0, 1, 0, 0)$ | $2$ | N |

- Remaining orbits in type $C_6$ and their Dynkin indices:

| Partition of $12$ | Dynkin diagram | Dynkin index | Quite even? (Y/N) |
|---|---|---|---|
| $[4, 2, 1^6]$ | $\triangle(2, 0, 1, 0, 0, 0)$ | $11$ | N |
| $[3^2, 2^3]$ | $\triangle(0, 1, 0, 0, 1, 0)$ | $11$ | N |
| $[4, 1^8]$ | $\triangle(2, 1, 0, 0, 0, 0)$ | $10$ | N |
| $[3^2, 2^2, 1^2]$ | $\triangle(0, 1, 0, 1, 0, 0)$ | $10$ | N |

Finally, we can deploy Theorem [Theorem 18](#2nd_thm){reference-type="ref" reference="2nd_thm"} to help distinguish the remaining cases. We pick a prime $p$ and compute[^2] $\gcd(\overline{y_1}-d_1,\ldots, \overline{y_r}-d_r) \in {\mathbb F}[x]$, the generator of the ideal $\overline{I_u}$. This does not help in type $C_6$ and in one of the cases in type $B_6$.
| Partition of $13$ | Dynkin index | char($\mathbb{F}$) | $\overline{I_u}$ |
|---|---|---|---|
| $[5, 2^4]$ | $12$ | $3$ | $\left((x-2)^3\right)$ |
| $[4^2, 3, 1^2]$ | $12$ | $3$ | $\left((x-2)^2\right)$ |
| $[5, 2^2, 1^4]$ | $11$ | $-$ | no suitable $p$ found |
| $[4^2, 2^2, 1]$ | $11$ | $-$ | no suitable $p$ found |
| $[5, 1^8]$ | $10$ | $2$ | $\left(x^2\right)$ |
| $[4^2, 1^5]$ | $10$ | $2$ | $\left(x^4\right)$ |
| $[3, 2^2, 1^6]$ | $3$ | $3$ | $\left((x-2)^2\right)$ |
| $[2^6, 1]$ | $3$ | $3$ | $\left((x-2)^3\right)$ |
| $[3, 1^{10}]$ | $2$ | $2$ | $\left(x^2\right)$ |
| $[2^4, 1^5]$ | $2$ | $2$ | $\left(x^4\right)$ |

Therefore, in the case of homogeneous spaces of dimension $75$ we have

**Proposition 19**. *Let $G, K$ be simply-connected groups of types $B_6$, $C_6$ or $E_6$. Suppose $u \in G$, $v \in K$ are unipotent elements from different classes. Then $X_u(G) \not\simeq X_{v}(K)$ in all but the three cases, where it is undetermined:*

1. *$G=K={\mathrm{Spin}}_{13}({\mathbb C})$, $u,v$ correspond to partitions $[5,2^2,1^4]$ and $[4^2,2^2,1]$,*

2. *$G=K={\mathrm{Sp}}_{12} ({\mathbb C})$, $u,v$ correspond to partitions $[4,2,1^6]$ and $[3^2,2^3]$,*

3. *$G=K={\mathrm{Sp}}_{12} ({\mathbb C})$, $u,v$ correspond to partitions $[4,1^8]$ and $[3^2,2^2,1^2]$.*

**Question 20**. *What techniques can be used to determine whether or not the three undetermined pairs of spaces in Proposition [Proposition 19](#finalprop){reference-type="ref" reference="finalprop"} are homotopy equivalent?*

# Example: The effect of Dynkin diagram automorphisms {#Example2}

Now we describe the effect of diagram automorphisms on Conjecture [Conjecture 2](#wk_rigid){reference-type="ref" reference="wk_rigid"}: cf. [@collingwood1993nilpotent] for all the relevant information. In types $E_6$ and $A_r$, $r\geq 2$, the automorphism group of the diagram is $C_2$.
However, there is no effect on Conjecture [Conjecture 2](#wk_rigid){reference-type="ref" reference="wk_rigid"}: distinct conjugacy classes are not conjugate under outer automorphisms. In type $D_r$, $r\geq 5$, the automorphism group of the diagram is also $C_2$, but there exist distinct conjugacy classes conjugate under outer automorphisms. Each *very even partition* (those with only even parts) corresponds to two distinct classes, conjugate under an outer automorphism. Thus, Conjecture [Conjecture 2](#wk_rigid){reference-type="ref" reference="wk_rigid"} proposes a bijection between homotopy types of spaces $X_u$ and partitions in ${\mathcal P}(2r)$ in which every even part $d_i$ appears an even number of times. In type $D_4$, we have triality: the automorphism group of the diagram is $S_3$. There are two very even partitions $[2^4]$ and $[4^2]$, each giving a pair of classes in ${\mathrm{Spin}}_8 ({\mathbb C})$. However, each pair is conjugate to a third class under the triality: the classes with partitions $[3,1^5]$ and $[2^4]$ are conjugate, ditto for $[5,1^3]$ and $[4^2]$. Thus, Conjecture [Conjecture 2](#wk_rigid){reference-type="ref" reference="wk_rigid"} proposes a bijection between homotopy types of spaces $X_u$ and partitions in ${\mathcal P}(8)$ that have at least one odd part and in which every even part appears an even number of times. In fact, the conjecture holds in this case. There are 6 non-trivial partitions left to consider: $[2^2,1^4]$, $[3,1^5]$, $[3^2,1^2]$, $[3,2^2,1]$, $[5,1^3]$ and $[5,3]$. Their Dynkin indices are nearly all distinct (computed by [@PANYUSHEV201515 Th. 2.1]): 1, 2, 4, 4, 10 and 14. Of the two partitions of index 4, $[3^2,1^2]$ is quite even and $[3,2^2,1]$ is not (by Theorem [Theorem 10](#QuiteEvenClassical){reference-type="ref" reference="QuiteEvenClassical"}).
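The bookkeeping above is easy to verify mechanically. The following sketch (ours, not the code linked in the footnotes) enumerates the partitions labelling type-$D$ unipotent classes — every even part must occur an even number of times — recovers the two very even partitions of $8$, and applies the type-$D$ criterion of Theorem 10 to the two partitions of index $4$.

```python
def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def is_type_D(part):
    """Partitions labelling unipotent classes in type D:
    every even part occurs an even number of times."""
    return all(part.count(d) % 2 == 0 for d in set(part) if d % 2 == 0)

def is_quite_even_BD(part):
    """Theorem 10 (3): quite even in types B, D iff prod d_i = +-1 mod 8."""
    prod = 1
    for d in part:
        prod = prod * d % 8
    return prod in (1, 7)

d4_classes = [p for p in partitions(8) if is_type_D(p)]
very_even = [p for p in d4_classes if all(d % 2 == 0 for d in p)]
assert very_even == [(4, 4), (2, 2, 2, 2)]

# Of the two index-4 partitions: [3^2,1^2] is quite even, [3,2^2,1] is not.
assert is_quite_even_BD((3, 3, 1, 1))
assert not is_quite_even_BD((3, 2, 2, 1))
```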
Let us examine the aforementioned trios of unipotent elements $u_1,u_2,u_3 \in {\mathrm{Spin}}_8 ({\mathbb C})$, related by triality, in light of Corollary [Corollary 11](#remaining_groups){reference-type="ref" reference="remaining_groups"}. By Theorem [Theorem 10](#QuiteEvenClassical){reference-type="ref" reference="QuiteEvenClassical"}, all the $u_i$ are even, not quite even elements. Thus, in ${\mathrm{Spin}}_8 ({\mathbb C})$ they correspond to three subgroups ${\mathrm{SL}}_2 ({\mathbb C})$ that are related via the outer automorphisms. Their elements $-I_2$ are the three different non-trivial elements of $Z(G) \cong K_4$. In each of the three quotients ${\mathrm{Spin}}_8 ({\mathbb C})/C_2$, one of the subgroups turns into ${\mathrm{PSL}}_2 ({\mathbb C})$, while the other two stay as ${\mathrm{SL}}_2 ({\mathbb C})$.

[^1]: Here is code for computing its differentials: [https://github.com/DylanJohnston/GmodSL2_complementary_code](https://github.com/DylanJohnston/GmodSL2_complementary_code)

[^2]: Our code is here: [https://github.com/DylanJohnston/GmodSL2_complementary_code](https://github.com/DylanJohnston/GmodSL2_complementary_code)
arXiv:2309.07238, "Topological rigidity of ${\mathrm{SL}}_2$-quotients", Dylan Johnston and Dmitriy Rumynin (math.AT, math.RT).
--- abstract: | The existence of periodic solutions is proven for some neuroscience models with a small parameter. Moreover, the stability of such solutions is investigated as well. The results are based on a theoretical study of the functional differential equation with parameters $$\dot{x}(t)=L(\tau) x_t + \varepsilon f(t, x_t),$$ where $L: \mathbb{R}_+\rightarrow \mathcal{L}(C; \mathbb{R})$ and $f: \mathbb{R} \times C \rightarrow \mathbb{R}$ are, respectively, linear and nonlinear operators, and $\varepsilon>0$ is a small enough parameter. The theoretical results are applied to a Parkinson's disease model, where the obtained conclusions are illustrated by numerical simulations. address: $^a$Departamento de Matemática, Facultad de Ciencias, Universidad del Bío-Bío, Casilla 5-C, Concepción, VIII-Región, Chile author: - José Oyarce ^$\MakeLowercase{a}$^ title: Bifurcation and periodic solutions to neuroscience models with a small parameter --- # Introduction {#Introduccion} In recent decades, scalar nonlinear differential equations with several delays have become of interest in many fields of science. In particular, in the context of medical treatment, the neuroscience field has received significant attention (see, e.g., [@Book1; @Book2]). Based on time series analysis with nonlinear delay differential equations, Lainscsek *et al.* noticed that several models in medicine (see, e.g., [@Lainscsek1; @Lainscsek3; @Lainscsek2; @Lainscsek4; @Lainscsek5; @Lainscsek6]) can be investigated by the following general class of differential equations with $n$ delays $$\begin{gathered} \label{1.1} \begin{aligned} \dot{x}(t) &= a_1 x_{\tau_1}+a_2 x_{\tau_2} +a_3 x_{\tau_3}+ \ldots + a_{i-1}x_{\tau_n}+\ldots \\ & \ \ + a_i x^2_{\tau_1}+a_{i+1} x_{\tau_1}x_{\tau_2}+a_{i+2} x_{\tau_1}x_{\tau_3}+ \ldots \\ & \ \ + a_{j-1}x^2_{\tau_n}+a_j x^3_{\tau_1}+a_{j+1}x^2_{\tau_1}x_{\tau_2}+ \ldots \\ & \ \ \ \vdots \\ & \ \ + \ldots + a_l x^m_{\tau_n}. 
\end{aligned}\end{gathered}$$ Here $a_i \in \mathbb{R}$ and $x_{\tau_i}=x(t-\tau_i)$ where $\tau_i\geq 0 \ (i=1, \ldots , n)$. In particular, Lainscsek *et al.* [@Lainscsek2] proposed an algorithm to find a scalar delay differential equation for the classification of the finger tapping movement in Parkinson's disease (PD) patients and, as a consequence, the following equation with 19 terms was considered as a general class of models $$\begin{gathered} \label{1.2} \begin{aligned} \dot{x}(t) &= a_1 x_{\tau_1}+a_2x_{\tau_2} +a_3 x_{\tau_3}+ a_4 x^2_{\tau_1}+ a_5 x_{\tau_1}x_{\tau_2} \\ & +a_6x_{\tau_1}x_{\tau_3} + a_7x^2_{\tau_2} +a_8 x_{\tau_2}x_{\tau_3}+a_9x^2_{\tau_3} + a_{10}x^3_{\tau_1} \\ & +a_{11}x^2_{\tau_1}x_{\tau_2}+a_{12}x^2_{\tau_1}x_{\tau_3} +a_{13}x_{\tau_1}x^2_{\tau_2} \\ & + a_{14}x_{\tau_1}x_{\tau_2}x_{\tau_3}+a_{15}x_{\tau_1}x^2_{\tau_3}+a_{16}x^3_{\tau_2} \\ & + a_{17} x^2_{\tau_2}x_{\tau_3}+a_{18}x_{\tau_2}x^2_{\tau_3} +a_{19} x^3_{\tau_3}. \end{aligned}\end{gathered}$$ The above model is useful for assessing the progression of the disease and, in particular, the coefficients in [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} can be used to assess its severity. Although the above-mentioned models are very important in medical treatment, to the best of our knowledge, models with several delays have not yet been sufficiently investigated. Depending on the data type [@Lainscsek4], some of the coefficients of the model [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} could be zero or could be considered as small parameters; therefore, the results obtained in this paper are applied to find sufficient conditions for the existence of periodic solutions to a particular case of [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} with a small parameter. Averaging is a classical method in the analysis of nonlinear differential equations with small parameters. 
For ordinary differential equations, many theorems deal with the following initial value problem $$\label{1.3} \dot{x}(t)=\varepsilon F(t,x) + \varepsilon^2 R(t,x,\varepsilon), \qquad x(0)=x_0,$$ where $\varepsilon>0$ is a small parameter and $F, R$ are $T$-periodic in the first argument. According to periodic averaging theorems (see, e.g., [@Hale1; @Verhulst; @Verhulst2]) much dynamical information about the problem [\[1.3\]](#1.3){reference-type="eqref" reference="1.3"} can be obtained from the analysis of the following autonomous differential equation $$\label{1.4} \dot{y}(t)= \frac{1}{T} \int_0^T F(t, y)dt, \qquad y(0)=x_0,$$ and, in particular, the existence of $T$-periodic solutions to [\[1.3\]](#1.3){reference-type="eqref" reference="1.3"} can be investigated from the analysis of equilibria of the equation [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"}. For more general settings, for instance, in [@Mesquita2; @Lunel; @Lakrib2; @Lakrib; @Mesquita; @Halanay; @Halesmallparameter] the authors studied the following functional differential equation $$\label{1.5} \dot{x}(t)=\varepsilon f(t,x_t),$$ where $x_t(\theta)=x(t+\theta)$ for $\theta\in[-r,0]$ and, generally speaking, $f: \mathbb{R}\times C \rightarrow \mathbb{R}$ is a nonlinear function. As for ordinary differential equations, good approximations of solutions to [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} can be obtained from solutions of an averaged autonomous functional differential equation. In this paper, based on some of the ideas and results in [@Faria; @Hale2; @Hale1; @Hsmith; @Halesmallparameter; @Verhulst], we study the existence and stability of periodic solutions to the following functional differential equation $$\label{1.6} \dot{x}(t)= L(\tau)x_t + \varepsilon f(t, x_t)$$ when $\varepsilon$ varies. 
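The averaging principle behind [\[1.3\]](#1.3){reference-type="eqref" reference="1.3"}--[\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} can be illustrated numerically. The following sketch (with the hypothetical choice $F(t,x)=\sin^2(t)\,x$, $R\equiv 0$ and $\varepsilon=0.05$) integrates the original and the averaged equations and compares them on a time scale of order $1/\varepsilon$:

```python
import math

def rk4(f, x0, t0, t1, steps):
    # classical fourth-order Runge-Kutta for dx/dt = f(t, x)
    h = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

eps = 0.05
full = lambda t, x: eps * math.sin(t) ** 2 * x  # eps*F(t,x) with F 2*pi-periodic
avgd = lambda t, x: eps * x / 2                 # averaged field: mean of sin^2 is 1/2

x_full = rk4(full, 1.0, 0.0, 1.0 / eps, 4000)
x_avgd = rk4(avgd, 1.0, 0.0, 1.0 / eps, 4000)
print(abs(x_full - x_avgd))  # O(eps) on the time scale 1/eps
```

Here the averaged solution is $x_0e^{\varepsilon t/2}$, and the two trajectories differ by $O(\varepsilon)$ on $[0,1/\varepsilon]$, as the averaging theorems predict.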
Here $L: \mathbb{R}_+ \rightarrow \mathcal{L}(C; \mathbb{R})$ is a scalar linear functional, $f: \mathbb{R} \times C \rightarrow \mathbb{R}$ is a nonautonomous nonlinear operator that is $T$-periodic in its first argument and has a continuous second Fréchet derivative with respect to its second argument, and $\varepsilon>0$ is a small enough parameter. First, in the case that the nonperturbed equation $$\label{1.7} \dot{x}(t)=L(\tau)x_t,$$ undergoes a Hopf bifurcation at the critical value $\tau=\tau_0$, we associate to the equation [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"} an abstract ordinary differential equation of the form [\[1.3\]](#1.3){reference-type="eqref" reference="1.3"}. Secondly, by using classical averaging techniques for ordinary differential equations (see, e.g., [@Verhulst]), we state sufficient conditions on the parameters to establish the existence of periodic solutions for the abstract ordinary differential equation with a small enough parameter $\varepsilon>0$. The stability of such solutions is investigated as well. The above procedure allows us to adequately describe the dynamics of [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"} in a finite-dimensional space by restricting the spectrum of the infinitesimal generator of the nonperturbed equation [\[1.7\]](#1.7){reference-type="eqref" reference="1.7"}. As a consequence of our theoretical results, we investigate a delayed model with a small parameter motivated by the Parkinson's disease model [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} where, in particular, the linear operator $L$ takes the form $L(\tau_2)x_t=-a_1x(t-\tau_1)-a_2x(t-\tau_2)$. The paper is structured as follows. 
In Section [2](#s2){reference-type="ref" reference="s2"}, in the case that the equation [\[1.7\]](#1.7){reference-type="eqref" reference="1.7"} undergoes a Hopf bifurcation at some critical parameter, we present the construction of an abstract ordinary differential equation that describes the dynamics of [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"} and, by applying the classical averaging theory for ordinary differential equations, we prove the existence of periodic solutions for the equation [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"} when $\varepsilon$ is small enough. In Section [3](#s3){reference-type="ref" reference="s3"}, by applying the results of Section [2](#s2){reference-type="ref" reference="s2"}, we prove the existence of periodic solutions for a Parkinson's disease model with a small parameter motivated by [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}. To verify the existence of the periodic solution, we present numerical simulations using the MATLAB solver ddesd. The last section is devoted to a discussion of the results. # Abstract ordinary differential equation and averaging theory {#s2} In this section, by applying the general theory in [@Faria; @Hale2; @Hale1; @Hsmith; @Halesmallparameter], we introduce a convenient coordinate system to obtain an abstract ordinary differential equation which is equivalent to the general equation [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"} in the case that [\[1.7\]](#1.7){reference-type="eqref" reference="1.7"} undergoes a Hopf bifurcation. Then, by applying the averaging theory for ordinary differential equations in [@Verhulst], we investigate the existence and stability of periodic solutions for such an ordinary differential equation with a small enough parameter $\varepsilon>0$ and, as a consequence, we obtain the main result about the existence and stability of periodic solutions for the equation [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"}. 
Let $r>0$ and define the phase space $C\stackrel{def}{=} C([-r,0]; \mathbb{R})$ equipped with the sup norm, and consider the following Banach space $$BC\stackrel{def}{=} \lbrace \phi:[-r, 0] \rightarrow \mathbb{R}: \phi \ \textit{is continuous on } [-r, 0[, \exists \lim_{\theta \rightarrow 0^-} \phi (\theta) \in \mathbb{R}\rbrace$$ to define the scalar linear functional $L: \mathbb{R}_+ \rightarrow \mathcal{L}(C; \mathbb{R})$ on $BC$ as $$L(\tau)\phi \stackrel{def}{=} \int_{-\tau}^0 \phi (\theta) d\eta(\theta),$$ where $\eta$ is a bounded variation function. Consider the linear autonomous equation $$\label{2.1} \dot{x}(t)=L(\tau)x_t$$ and the scalar perturbed equation with a small parameter $\varepsilon>0$ $$\label{2.2} \dot{x}(t)=L(\tau)x_t+\varepsilon f(t,x_t).$$ We now formulate some of the assumptions on $f$ to state the main results. (H.1) $f$ transforms $D \subset \mathbb{R}\times C$ into $\mathbb{R}$ and is a $T$-periodic function in $t$ uniformly with respect to $\phi$ (in some subset of $C$). (H.2) $f$ is a continuous and uniformly bounded function in $(t, \phi) \in D$ and has a continuous second Fréchet derivative with respect to $\phi$. Since our purpose is to study the equation [\[2.2\]](#2.2){reference-type="eqref" reference="2.2"} in the case that [\[2.1\]](#2.1){reference-type="eqref" reference="2.1"} undergoes a Hopf bifurcation at some critical value $\tau=\tau_0$, for the characteristic equation $$\label{2.3} h(\lambda,\tau) \stackrel{def}{=} \lambda-\int_{-\tau}^0 e^{\lambda \theta} d\eta(\theta)=0$$ we shall assume the following hypotheses: (H.3) There exists a pair of simple characteristic roots $\mu(\tau)\pm i\omega(\tau)$ of [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"} such that $$\mu(\tau_0)=0, \ \ \omega (\tau_0) \stackrel{def}{=} \omega^*>0, \ \ \mu'(\tau_0)\neq 0.$$ (H.4) The characteristic equation $h(\lambda, \tau_0)=0$ has no other roots with zero real parts. 
In view of the above-mentioned hypotheses, throughout this section, the following equation will be considered $$\label{2.4} \dot{x}(t)=L(\tau_0)x_t+\varepsilon f(t,x_t).$$ Let $C^*\stackrel{def}{=}C([0, r]; \mathbb{R})$ and, together with the equation [\[2.1\]](#2.1){reference-type="eqref" reference="2.1"}, consider the adjoint equation $$\dot{v}(s)=-\int_{-\tau}^0 v(s-\theta)d\eta(\theta)$$ with respect to the bilinear form $(\cdot, \cdot)$ in $C^* \times C$ defined by $$\label{2.5} (\psi, \varphi)= \psi(0)\varphi(0)-\int _{-\tau}^0 \int_0^{\theta} \psi(\xi-\theta)d\eta (\theta) \varphi(\xi)d\xi, \qquad (\psi, \varphi) \in C^*\times C.$$ Let $L_0=L(\tau_0), ~ \Lambda=\lbrace i\omega^*, -i\omega^* \rbrace$ and $P$ be the center space of the equation $$\label{2.6} \dot{y}(t)=L_0y_t.$$ Decomposing $C$ by $\Lambda$ as $C=P\oplus Q$, and in contrast to the normal form theory for delay differential equations (see, e.g., [@Hale2 Chapt. 8]), where it is useful to consider complex coordinates, since we are interested in real solutions to [\[2.2\]](#2.2){reference-type="eqref" reference="2.2"}, throughout this paper we choose real bases $\Phi, \Psi$ for $P$ and its dual $P^*$, respectively, as $$\begin{aligned} \label{2.7} \begin{array}{lcl} P = \mbox{span} \Phi, && \Phi(\theta)= (\sin(\omega^*\theta),~ \cos(\omega^*\theta)), \ \ - \tau_0 \leq \theta \leq 0, \\ P^*=\mbox{span} \Psi, && \Psi(s) = \textit{col} \ (\alpha_1 \sin(\omega^*s)+\beta_1\cos(\omega^*s),~\alpha_2\sin(\omega^*s)+\beta_2\cos(\omega^*s)), \\ && \ \ \ \ \ \ \ \ \ \ 0\leq s \leq \tau_0 \end{array} \end{aligned}$$ where, in order to simplify the transformations, the parameters $(\alpha_1, \beta_1, \alpha_2, \beta_2)\in \mathbb{R}^4$, which could depend on $(\omega^*, \tau_0)$, can be chosen such that $(\Psi, \Phi)=I$ (the identity matrix). 
Further, the basis $\Phi$ defined as earlier satisfies $$\label{2.8} \Phi(\theta)=\Phi(0)e^{B\theta}, \qquad B= \left( \begin{array}{cc} 0 & -\omega^* \\ \omega^* & 0 \end{array} \right).$$ We are now in a position to introduce a coordinate system in $C$ which allows us to obtain the abstract ordinary differential equation associated to [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"} (see, e.g., [@Faria; @Hale2; @Hale1]). In fact, for the set of eigenvalues $\Lambda = \lbrace i \omega^*, -i \omega^* \rbrace$, it is possible to decompose the elements of $C$ in a unique way as $$\label{2.9} \phi = \phi^P+\phi^Q, \qquad (\phi^P, \phi^Q)\in P\times Q$$ where $Q=\lbrace \phi \in C: (\Psi, \phi)=0 \rbrace$ and $\phi^P=\Phi b, ~b=\textit{col}~(b_1,b_2)=(\Psi, \phi)$ where $$b_i= \beta_i \phi(0)-\int_{-\tau_0}^0\int_{0}^{\theta} \left[ \alpha_i \sin(\omega^*(\xi-\theta))+ \beta_i \cos(\omega^*(\xi-\theta)) \right] d\eta(\theta)\phi(\xi)d\xi \qquad ( i=1,2).$$ In the sequel, we consider the equation [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"} subject to the following non-negative initial condition and positive initial value $$\label{2.10} x(t)=\varphi(t), \ \ \varphi(t)\geq 0, \ \ -\tau_0 \leq t <0, \ \ x(0)=x_0>0.$$ Let $T(t), \ t\geq 0$ be the semigroup on $C$ generated by the solutions of [\[2.6\]](#2.6){reference-type="eqref" reference="2.6"}, then any solution of the problem [\[2.2\]](#2.2){reference-type="eqref" reference="2.2"}, [\[2.10\]](#2.10){reference-type="eqref" reference="2.10"} satisfies the so-called *variation of constants formula* $$\label{2.11} x_t=T(t)\varphi + \varepsilon \int_{0}^t T(t-s)X_0f(s,x_s)ds,$$ where $$X_0(\theta)= \left\lbrace \begin{array}{lcc} 1, & & \theta =0 \\ \\ 0, & & -\tau_0\leq \theta < 0 \end{array} \right.$$ Further, in view of [\[2.11\]](#2.11){reference-type="eqref" reference="2.11"} and [\[2.9\]](#2.9){reference-type="eqref" reference="2.9"}, we have that $x_t=\Phi y(t)+x_t^Q$ satisfies $$\begin{aligned} \label{2.12} 
\left\lbrace \begin{array}{lcl} \dot{y}(t) &=& By(t)+ \varepsilon \Psi (0) f(t, \Phi y(t)+ x_t^Q), \ \ y(0)=(\Psi, \varphi), \\ x_t^Q &=& \displaystyle{T(t)\varphi ^Q+\varepsilon \int_0^t T(t-s)X_0^Q f(s, \Phi y(s)+x_s^Q) ds} \end{array} \right.\end{aligned}$$ where $B$ is defined in [\[2.8\]](#2.8){reference-type="eqref" reference="2.8"}, $~ y(t)=(\Psi, x_t), ~ X_0=X_0^P+X_0^Q$ where $~ X_0^P=\Phi\Psi(0)$, and the initial condition satisfies $\varphi=\Phi(\Psi, \varphi)+\varphi^Q$. The integral in [\[2.12\]](#2.12){reference-type="eqref" reference="2.12"} is interpreted as a regular integral for $x_t^Q(\theta)$ and each $\theta \in [-\tau_0, 0]$. **Remark 1**. It is well known that for any $\varphi^Q \in Q$, there exist positive constants $K, \alpha$ such that $$|| T(t)\varphi ^Q || \leq Ke^{-\alpha t}|| \varphi ^Q||, \qquad t \geq 0.$$ Thus, any bounded solution of system [\[2.12\]](#2.12){reference-type="eqref" reference="2.12"} satisfies $x_t^Q=O(\varepsilon)$ as $\varepsilon \rightarrow 0$ and hence, since we are interested in $\varepsilon$ sufficiently small, we study only the terms of order $\varepsilon$. According to Remark [Remark 1](#Remark2.1){reference-type="ref" reference="Remark2.1"}, the investigation now focuses on the initial value problem $$\label{2.13} \dot{y}(t)=By(t)+\varepsilon\Psi (0)f(t, \Phi y(t)), \qquad y(0)=(\Psi, \varphi).$$ In order to study the existence of periodic orbits to [\[2.13\]](#2.13){reference-type="eqref" reference="2.13"} using the averaging theory, in the sequel, we proceed by introducing convenient polar coordinates and applying averaging techniques via Taylor expansions around $\varepsilon=0$. As a first step, we present the *first-order theorem* from averaging theory, which we need to prove our principal results (see, e.g., [@Verhulst Thm. 11.5-11.6, pp. 158-159]). 
Consider the ordinary differential system $$\label{2.14} \dot{x}(t)=\varepsilon F(t,x)+ \varepsilon ^2 R (t,x,\varepsilon),$$ where $\displaystyle{F: \mathbb{R}\times D \rightarrow \mathbb{R}^n, R: \mathbb{R}\times D \times [0, \varepsilon_{0} ] \rightarrow \mathbb{R}^{n}}$ with $t \geq 0$ and $D$ is an open subset of $\mathbb{R}^{n}$. Suppose that (a) the vector functions $\displaystyle{F, R, D_{x}F, D^{2}_{x}F, D_{x}R}$ are continuous and bounded by a constant $M$ (independent of $\varepsilon$) in $[0, \infty [ \times D$, $\varepsilon \in [0, \varepsilon_{0}]$; (b) $F$ and $R$ are $T$-periodic functions in $t$ ($T$ independent of $\varepsilon$). The averaged system associated to the system [\[2.14\]](#2.14){reference-type="eqref" reference="2.14"} is defined by $$\label{2.15} \dot{z}(t)= \varepsilon F^0(z),$$ where $$\label{2.16} F^0(z)= \frac{1}{T} \int_{0}^{T} F (s,z)ds.$$ **Theorem 1** (cf. [@Verhulst]). *Assume that the previous hypotheses $(a)$ and $(b)$ hold and also that the equation [\[2.15\]](#2.15){reference-type="eqref" reference="2.15"} has an equilibrium solution $z^*$ such that $\det D_{z}F^0(z^*) \neq 0$, where $D_{z}F^0 (\cdot)$ is the Jacobian matrix of $F^0$. Then* 1. *For $\varepsilon$ sufficiently small, there exists a $T$-periodic solution $g(t, \varepsilon)$ of the system [\[2.14\]](#2.14){reference-type="eqref" reference="2.14"} such that $\displaystyle{\lim_{\varepsilon \rightarrow 0} g (t, \varepsilon)=z^*}$.* 2. *If the eigenvalues of the critical point $z=z^*$ of the equation [\[2.15\]](#2.15){reference-type="eqref" reference="2.15"} all have negative real parts, the corresponding periodic solution $g(t, \varepsilon)$ of [\[2.14\]](#2.14){reference-type="eqref" reference="2.14"} is asymptotically stable for $\varepsilon$ sufficiently small. 
If one of the eigenvalues has positive real part, $g(t, \varepsilon)$ is unstable.* From [\[2.13\]](#2.13){reference-type="eqref" reference="2.13"}, we will study the equation [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"} through the analysis of the initial value problem $$\begin{aligned} \label{2.17} \left\lbrace \begin{array}{lcl} \dot{y}_1(t)&=& - \omega^* y_2(t)+ \varepsilon \beta_1 f(t, \Phi y(t)), \ \ \ y_1(0)=y_{10}, \\ \dot{y}_2(t)&=& \ \ \omega^* y_1(t)+ \varepsilon \beta_2 f(t, \Phi y(t)), \ \ \ y_2(0)=y_{20}. \end{array} \right.\end{aligned}$$ Here $\Phi y(t)=y_1(t) \sin(\omega^* \theta)+y_2(t)\cos(\omega^* \theta), ~ -\tau_0\leq \theta \leq 0$ and the initial conditions satisfy $$\label{2.18} y_{i0}= \beta_i \varphi(0)-\int_{-\tau_0}^0\int_{0}^{\theta} \left[ \alpha_i \sin(\omega^*(\xi-\theta))+ \beta_i \cos(\omega^*(\xi-\theta)) \right] d\eta(\theta)\varphi(\xi)d\xi \qquad (i=1,2).$$ We introduce the polar coordinates $(\rho, \xi) \in \mathbb{R}^+\times \mathbb{S}^1$ by the change of variables $$\label{2.19} y_1(t)= \rho(t) \sin(\omega^* \xi(t)), \ \ \ \ y_2(t)=-\rho(t) \cos(\omega^* \xi(t)),$$ whence [\[2.17\]](#2.17){reference-type="eqref" reference="2.17"} becomes $$\begin{aligned} \label{2.20} \left\lbrace \begin{array}{lcl} \dot{\rho}(t)&=& \varepsilon f(t,\Phi y(t)) (\beta_1 \sin(\omega^* \xi(t))-\beta_2 \cos(\omega^* \xi(t))), \qquad \qquad \rho(0)=\rho_0, \\ \dot{\xi}(t) &=& \displaystyle{1+ \varepsilon f(t, \Phi y(t)) \left(\frac{\beta_1 \cos(\omega^* \xi(t))+\beta_2 \sin(\omega^* \xi(t))}{ \omega^* \rho(t)}\right)}, \ \ \ \xi(0)=\xi_0, \end{array} \right. \end{aligned}$$ where $\Phi y(t)= \rho(t) (\sin(\omega^* \xi(t)) \sin(\omega^* \theta)-\cos(\omega^* \xi(t))\cos(\omega^* \theta)), ~ -\tau_0\leq \theta \leq 0$, and the above initial conditions satisfy $$y_{10}=\rho_0\sin(\omega^*\xi_0), \ \ y_{20}=-\rho_0 \cos(\omega^*\xi_0),$$ where $y_{10}$ and $y_{20}$ are defined in [\[2.18\]](#2.18){reference-type="eqref" reference="2.18"}. 
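The passage from [\[2.17\]](#2.17){reference-type="eqref" reference="2.17"} to [\[2.20\]](#2.20){reference-type="eqref" reference="2.20"} is a routine chain-rule computation, which can be sanity-checked pointwise by treating the value of $f$ at the given state as a free scalar; all numbers below are hypothetical:

```python
import math

# verify the polar form (2.20) at one state point
w, eps, b1, b2 = 1.7, 0.01, 0.3, 0.8    # hypothetical omega^*, epsilon, beta_1, beta_2
rho, xi, fval = 1.2, 0.5, 0.9           # hypothetical state and value of f there

y1, y2 = rho * math.sin(w * xi), -rho * math.cos(w * xi)
dy1 = -w * y2 + eps * b1 * fval          # Cartesian system (2.17)
dy2 = w * y1 + eps * b2 * fval

drho = (y1 * dy1 + y2 * dy2) / rho               # chain rule for rho = sqrt(y1^2 + y2^2)
dxi = (y1 * dy2 - y2 * dy1) / (w * rho * rho)    # chain rule for the angular variable

drho_polar = eps * fval * (b1 * math.sin(w * xi) - b2 * math.cos(w * xi))
dxi_polar = 1 + eps * fval * (b1 * math.cos(w * xi) + b2 * math.sin(w * xi)) / (w * rho)
print(drho - drho_polar, dxi - dxi_polar)  # both differences vanish up to roundoff
```

The two discrepancies vanish identically, confirming the right-hand sides of [\[2.20\]](#2.20){reference-type="eqref" reference="2.20"}.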
**Remark 2**. Note that the derivatives in [\[2.20\]](#2.20){reference-type="eqref" reference="2.20"} are with respect to time $t$ but the system is not periodic in $t$. If, instead of $t$, we take the variable $\xi$ as the new independent variable of the system, we obtain the periodicity necessary to apply Theorem [Theorem 1](#teorema2){reference-type="ref" reference="teorema2"}. For $\varepsilon >0$ sufficiently small we have $\dot{\xi}\neq 0$ in a neighborhood of $\rho=0$; in such a neighborhood we take $\xi$ as the new independent variable and denote by a prime the derivative with respect to $\xi$. Expanding in a Taylor series around $\varepsilon=0$, we obtain the differential equation $$\label{2.21} \rho '(\xi) = \varepsilon F(\xi, \rho(\xi)) + O(\varepsilon^2), \qquad \rho(0)=\rho_0,$$ where $$\begin{aligned} \begin{array}{lcl} &&F(\xi, \rho(\xi)) \stackrel{def}{=} (\beta_1\sin(\omega^* \xi)-\beta_2 \cos(\omega^* \xi))~ f(\xi, \rho (\xi) (\sin(\omega^* \xi)\sin(\omega^* \theta)-\cos(\omega^* \xi)\cos(\omega^* \theta))), \\ && \qquad \qquad \qquad -\tau_0 \leq \theta \leq 0. \end{array} \end{aligned}$$ In what follows, we assume that $f$ is a $T=2\pi/\omega^*$-periodic function in $\xi$. Then, according to [\[2.15\]](#2.15){reference-type="eqref" reference="2.15"}-[\[2.16\]](#2.16){reference-type="eqref" reference="2.16"}, the averaged equation associated to [\[2.21\]](#2.21){reference-type="eqref" reference="2.21"} is $$\label{2.22} \rho '(\xi) = \varepsilon F^0(\rho),$$ where $$F^0(\rho) \stackrel{def}{=} \frac{\omega^*}{2\pi} \int_{0}^{2\pi / \omega^*} F(\xi, \rho)d\xi .$$ **Remark 3**. If $f$ satisfies the above-mentioned conditions (H.1)-(H.2) for $T=2\pi/\omega^*$, then the assumptions of Theorem [Theorem 1](#teorema2){reference-type="ref" reference="teorema2"} are fulfilled for the equation [\[2.21\]](#2.21){reference-type="eqref" reference="2.21"}. 
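The average in [\[2.22\]](#2.22){reference-type="eqref" reference="2.22"} can be computed by quadrature. As a sketch, for the hypothetical nonlinearity $f(\xi,\phi)=\phi(0)^3$, so that on the center space $f$ is evaluated at $-\rho\cos(\omega^*\xi)$, the average works out to $F^0(\rho)=\tfrac{3}{8}\beta_2\rho^3$ (using $\langle\cos^4\rangle=3/8$ and $\langle\sin\cos^3\rangle=0$); the values of $\omega^*,\beta_1,\beta_2$ below are placeholders:

```python
import math

w, b1, b2 = 1.3, 0.4, 0.9       # placeholder values of omega^*, beta_1, beta_2
rho = 0.7

def F(xi):
    u = -rho * math.cos(w * xi)                     # phi(0) on the center space
    return (b1 * math.sin(w * xi) - b2 * math.cos(w * xi)) * u ** 3

T = 2 * math.pi / w
n = 100000
F0_num = sum(F(k * T / n) for k in range(n)) / n    # average over one period
F0_exact = 3 * b2 * rho ** 3 / 8                    # closed form for this sample f
print(F0_num, F0_exact)
```

The rectangle rule on a smooth periodic integrand is highly accurate, so the two values agree to many digits.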
On the other hand, it is easy to check that $G: [0, +\infty[ \times [0, 2\pi[ \rightarrow \mathbb{R}^2$ defined by $G(\rho, \xi)= (\rho \sin(\omega^*\xi), -\rho \cos(\omega^*\xi))$ is a $C^1$-diffeomorphism and, since the system [\[2.20\]](#2.20){reference-type="eqref" reference="2.20"} is the system [\[2.17\]](#2.17){reference-type="eqref" reference="2.17"} written in the coordinates [\[2.19\]](#2.19){reference-type="eqref" reference="2.19"}, for a small parameter $\varepsilon>0$ the study of the existence and stability of $T$-periodic solutions for the ordinary differential equation [\[2.21\]](#2.21){reference-type="eqref" reference="2.21"} will give us equivalent information about $T$-periodic solutions to [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"}, [\[2.10\]](#2.10){reference-type="eqref" reference="2.10"}. According to this last remark, and as a consequence of Theorem [Theorem 1](#teorema2){reference-type="ref" reference="teorema2"}, we now state the principal result of this section. **Theorem 2**. *If there exists an equilibrium $\rho^*>0$ of the equation [\[2.22\]](#2.22){reference-type="eqref" reference="2.22"} such that $F^0(\rho^*)=0$ and $D_{\rho}F^0(\rho^*) \neq 0$, then, for $\varepsilon$ sufficiently small, there exists a $2\pi/\omega^*$-periodic solution $x(t)=g(t,\varepsilon)$ of the problem [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"}, [\[2.10\]](#2.10){reference-type="eqref" reference="2.10"} such that $g(t,0)=\rho^*$. Moreover, such a periodic solution is asymptotically stable (resp. 
unstable).* # A two delayed equation and applications {#s3} In this section, we first present two of the main results of Piotrowska [@Piotrowska] about the existence of Hopf bifurcations in the following particular case of the equation [\[2.1\]](#2.1){reference-type="eqref" reference="2.1"} $$\label{3.1} \dot{x}(t)=-a_1x(t-\tau_1)-a_2x(t-\tau_2).$$ Here $a_1,a_2\in \mathbb{R}$, and the nonnegative delays are independent of each other. Secondly, we apply the above-mentioned results and the results of Section [2](#s2){reference-type="ref" reference="s2"} to prove the existence of periodic solutions to a particular case of the Parkinson's disease model [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}. The results are illustrated by numerical simulations. Li *et al.* [@LiRuan] investigated the stability of the zero equilibrium of [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} and the existence of Hopf bifurcations considering $\tau_2$ as a bifurcation parameter, where suitable sets of $a_1, a_2$ and $\tau_1$ were established to prove their main results. Piotrowska [@Piotrowska] presented some remarks on and improvements of the results in [@LiRuan]. Furthermore, some additional cases beyond those presented in [@LiRuan] were treated in [@Piotrowska]. In the sequel, we keep in mind that $\tau_2$ will be the bifurcation parameter. If $a_1a_2=0, \tau_1\tau_2=0$ or $\tau_1=\tau_2$, then the equation [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} reduces to an equation with a single delay; therefore, these latter cases are of no interest here. 
The characteristic equation associated with [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} is $$\label{3.2} h(z)=z+a_1 e^{-z\tau_1}+a_2e^{-z\tau_2}=0.$$ Assuming $a_1\neq 0, a_2>0, \lambda=\dfrac{z}{|a_1|}, \tau_1=\dfrac{r_1}{|a_1|}, \tau_2= \dfrac{r_2}{|a_1|}$ and $a=\dfrac{a_2}{|a_1|},$ then for $a_1>0$ the characteristic equation becomes $$\label{3.3} \lambda=-e^{-\lambda r_1}-ae^{-\lambda r_2},$$ and for $a_1<0$ it becomes $$\label{3.4} \lambda =e ^{-\lambda r_1} -a e^{-\lambda r_2}.$$ If $a_1+a_2=0,$ then $z=0$ is a solution of [\[3.2\]](#3.2){reference-type="eqref" reference="3.2"} for all nonnegative delays $\tau_1, \tau_2$ and, therefore, we do not consider that case. Also, from [@HaleHuang Prop. 1], if $a_1+a_2<0,$ then the zero solution of [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} is unstable for all delays $\tau_1, \tau_2$. Thus, throughout this section we assume $a_1+a_2>0$. It is easy to check that if $\tau_1=\tau_2=0$ and $a_1+a_2>0$ are fulfilled, then the zero solution of [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} is asymptotically stable and, by continuity, it will be asymptotically stable for sufficiently small delays $\tau_1, \tau_2>0$. We are interested in the existence of purely imaginary roots for [\[3.3\]](#3.3){reference-type="eqref" reference="3.3"} ($a_1>0$) and [\[3.4\]](#3.4){reference-type="eqref" reference="3.4"} ($a_1<0$), hence we first set $\lambda=i\omega$ ($\omega>0$) in both equations. 
Secondly, by separating real and imaginary parts, we have that the equations [\[3.3\]](#3.3){reference-type="eqref" reference="3.3"} and [\[3.4\]](#3.4){reference-type="eqref" reference="3.4"} have purely imaginary roots if the following systems $$\begin{aligned} \label{3.5} \begin{array}{lcl} && \cos(\omega r_1)=-a\cos(\omega r_2), \\ & & \omega - \sin (\omega r_1)= a\sin (\omega r_2), \end{array}\end{aligned}$$ and $$\begin{aligned} \label{3.6} \begin{array}{lcl} && \cos (\omega r_1)= a\cos(\omega r_2), \\ && \omega + \sin (\omega r_1)= a\sin (\omega r_2), \end{array}\end{aligned}$$ are satisfied, respectively. Squaring both sides of the equations in [\[3.5\]](#3.5){reference-type="eqref" reference="3.5"} and [\[3.6\]](#3.6){reference-type="eqref" reference="3.6"} and adding, we obtain $$\label{3.7} \sin (\omega r_1) = \dfrac{\omega^2+1-a^2}{2\omega},$$ and $$\label{3.8} \sin (\omega r_1) = \dfrac{a^2-\omega^2-1}{2\omega},$$ respectively. For $\omega \in ]0, + \infty[$ the geometrical properties of the functions on the right side of [\[3.7\]](#3.7){reference-type="eqref" reference="3.7"} and [\[3.8\]](#3.8){reference-type="eqref" reference="3.8"} were considered in [@Piotrowska] to investigate the existence of simple purely imaginary roots to [\[3.2\]](#3.2){reference-type="eqref" reference="3.2"}. In what follows, we present the critical values of the delay $r_2$ at which [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} undergoes a Hopf bifurcation around the zero solution. We refer to [@Piotrowska] for the details and notation involved. 
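For a concrete pair $(a,r_1)$, the roots of [\[3.7\]](#3.7){reference-type="eqref" reference="3.7"} can be located numerically, and the corresponding smallest delay $r_2>0$ recovered from the first equation of [\[3.5\]](#3.5){reference-type="eqref" reference="3.5"}. A sketch with the hypothetical values $a=2$, $r_1=1$:

```python
import math

a, r1 = 2.0, 1.0   # hypothetical sample values with a = a2/|a1| > 1

# (3.7): sin(w r1) = (w^2 + 1 - a^2)/(2 w); bracket a root and bisect
g = lambda w: math.sin(w * r1) - (w * w + 1 - a * a) / (2 * w)
lo, hi = 1.0, 3.0                      # g(lo) > 0 > g(hi)
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
w = (lo + hi) / 2

# smallest r2 > 0 with cos(w r1) = -a cos(w r2)
r2 = math.acos(-math.cos(w * r1) / a) / w

# both equations of system (3.5) should now hold
print(w, r2)
print(math.cos(w * r1) + a * math.cos(w * r2))       # ~ 0
print(w - math.sin(w * r1) - a * math.sin(w * r2))   # ~ 0
```

Both equations of [\[3.5\]](#3.5){reference-type="eqref" reference="3.5"} then hold up to the bisection tolerance, as the squaring argument above guarantees.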
For any $r_1>0$ the equations [\[3.7\]](#3.7){reference-type="eqref" reference="3.7"} and [\[3.8\]](#3.8){reference-type="eqref" reference="3.8"} have a finite number $m\in \mathbb{N}$ of solutions $\omega_k$ ($k=1, \ldots, m$); then, for $r_1>0$ fixed and for each $\omega_k$, there is an infinite number of delays $r_2>0$ such that $$\label{3.9} \cos(\omega_k r_1) =- a\cos(\omega_kr_2)$$ and $$\label{3.10} \cos(\omega_k r_1)= a \cos (\omega_k r_2),$$ respectively. Let us consider $$r_2^{k,+}=\min \lbrace r_2\in \mathbb{R}_+: \eqref{3.9} \ \mbox{is fulfilled} \rbrace$$ and $$r_2^{k,-}= \min \lbrace r_2\in \mathbb{R}_+ : \eqref{3.10} \ \mbox{is fulfilled} \rbrace ,$$ to define $$\label{3.11} r_2^{0,+} = \min \lbrace r_2^{k,+}: k=1, \ldots , m \rbrace$$ and $$\label{3.12} r_2^{0,-}=\min \lbrace r_2^{k,-} : k=1, \ldots, m \rbrace .$$ In both of the above-mentioned cases ($a_1>0$ and $a_1<0$), for some $k=1, \ldots ,m$ we have $r_2^{k, \pm }=r_2^{0, \pm}$; for that $k$ we set $\omega ^*=\omega_k$. On the other hand, define $$\Omega \stackrel{def}{=} \bigcup_{l \in \mathbb{N}\cup \lbrace 0 \rbrace} [2l\pi , \frac{\pi}{2}+2l\pi].$$ Now we present two of the most important results in [@Piotrowska] about the existence of Hopf bifurcations for [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"}, for $a_1>0$ and $a_1<0$ respectively. **Theorem 3** (cf. [@Piotrowska]). *Let $0<a_1<a_2$ and $\tau_2^{0,+}= \dfrac{r_2^{0,+}}{a_1}$, where $r_2^{0,+}$ is defined by [\[3.11\]](#3.11){reference-type="eqref" reference="3.11"}, then* 1. *For $\tau_2 \in [0, \tau_2^{0,+}[$ the trivial solution to [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} is asymptotically stable.* 2. *If $\omega^*\tau_2^{0,+} a_1 \in \Omega$, then the equation [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} undergoes a Hopf bifurcation when $\tau_2=\tau_2^{0,+}$.* **Theorem 4** (cf. [@Piotrowska]). 
*Let $a_1<0, a_2>|a_1|$ and $\tau_2^{0,-}=\dfrac{r_2^{0,-}}{|a_1|}$, where $r_2^{0,-}$ is defined by [\[3.12\]](#3.12){reference-type="eqref" reference="3.12"}, then* 1. *For $\tau_2 \in [0, \tau_2^{0,-}[$ the trivial solution to [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} is asymptotically stable.* 2. *If $\omega^*\tau_2^{0,-}|a_1|\in \Omega$, then the equation [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} undergoes a Hopf bifurcation when $\tau_2=\tau_2^{0,-}$.* In the sequel, in view of the notation and conclusions of Section [2](#s2){reference-type="ref" reference="s2"}, we consider that the equation [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} satisfies the hypotheses of Theorems [Theorem 3](#Theorem3.1){reference-type="ref" reference="Theorem3.1"} or [Theorem 4](#Theorem3.2){reference-type="ref" reference="Theorem3.2"}, i.e., for $\tau_2=\tau_2^0 \in \lbrace \tau_2^{0,+}, \tau_2^{0,-} \rbrace$ the equation [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} satisfies the conditions (H.3) and (H.4) mentioned in Section [2](#s2){reference-type="ref" reference="s2"}. Without loss of generality we assume that $\tau_1<\tau_2^0$ to define the linear functional $$L(\tau_2)\phi = \int_{-\tau_2^0}^0 \phi(\theta)d\eta (\theta),$$ where $$\begin{aligned} \eta (\theta) = \left\lbrace \begin{array}{lcl} 0 &\mbox{if}& \qquad \ \ \ \ \theta=-\tau_2^0, \\ -a_2 &\mbox{if}& \ -\tau_2^0<\theta \leq -\tau_1 , \\ -(a_1+a_2) &\mbox{if}& \ -\tau_1< \theta \leq 0 . 
\end{aligned}$$ Furthermore, the bilinear form [\[2.5\]](#2.5){reference-type="eqref" reference="2.5"} is $$(\psi, \varphi)=\psi(0)\varphi(0)-\int_{-\tau_2^0}^0 [a_1\psi(\xi+\tau_1)+a_2\psi(\xi+\tau_2^0)]\varphi(\xi)d\xi$$ and the coefficients $\alpha_1, \beta_1, \alpha_2, \beta_2$ presented in [\[2.7\]](#2.7){reference-type="eqref" reference="2.7"} are $$\begin{gathered} \begin{aligned} \alpha_1 & \stackrel{def}{=} \dfrac{4-2\sin^2(\omega^*\tau_2^0)}{\omega^{*^2}\tau_2^{0^2}+\sin^2(\omega^*\tau_2^0)} , \\ \beta_1 & \stackrel{def}{=} \dfrac{2\omega^*\tau_2^0+\sin(2\omega^*\tau_2^0)}{\omega^{*^2}\tau_2^{0^2}+\sin^2(\omega^*\tau_2^0)} ,\\ \alpha_2 & \stackrel{def}{=} \dfrac{\sin(2\omega^*\tau_2^0)-2\omega^*\tau_2^0}{\omega^{*^2}\tau_2^{0^2}+\sin^2(\omega^*\tau_2^0)} , \\ \beta_2&\stackrel{def}{=}\dfrac{2\sin^2(\omega^*\tau_2^0)}{\omega^{*^2}\tau_2^{0^2}+\sin^2(\omega^*\tau_2^0)}. \end{aligned}\end{gathered}$$ These coefficients are chosen so that $(\Psi, \Phi)=I$, where $\Phi$ and $\Psi$ are the bases for the center space $P$ and its dual $P^*$, respectively. In what follows we apply the results of Section [2](#s2){reference-type="ref" reference="s2"} and the results of [@Piotrowska] presented above to investigate a particular case of [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}. Most of the terms in [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} can be set to zero depending on the data type [@Lainscsek4]; therefore, we now investigate the following slightly modified Parkinson's disease model with negative feedback, three delays and a small parameter $$\label{3.13} \dot{x}(t)=-a_1x(t-\tau_1)-a_2x(t-\tau_2)+ \varepsilon (a_4x^3(t-\tau_3)-a_3 x (t-\tau_3)).$$ Here the real parameters $a_1, a_2, \tau_1$ and $\tau_2$ satisfy the conditions and conclusions of Theorems [Theorem 3](#Theorem3.1){reference-type="ref" reference="Theorem3.1"} or [Theorem 4](#Theorem3.2){reference-type="ref" reference="Theorem3.2"}.
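At the Hopf threshold of Theorems 3 and 4, the linearization $\dot{x}(t)=-a_1x(t-\tau_1)-a_2x(t-\tau_2)$ has a purely imaginary characteristic root $\lambda=i\omega^*$, i.e., $i\omega^* + a_1e^{-i\omega^*\tau_1}+a_2e^{-i\omega^*\tau_2^0}=0$. A minimal numerical check of this condition (using the values $a_1=2$, $a_2=3$, $\tau_1=0.113279$, $\omega^*=3$, $\tau_2^0=0.750157$ reported later in the text; the tolerance is our own choice):

```python
import math

a1, a2 = 2.0, 3.0
tau1, omega, tau2 = 0.113279, 3.0, 0.750157  # values reported later in the text

# Characteristic equation at lambda = i*omega:
#   i*omega + a1*exp(-i*omega*tau1) + a2*exp(-i*omega*tau2) = 0.
# Split into real and imaginary parts:
re = a1 * math.cos(omega * tau1) + a2 * math.cos(omega * tau2)
im = omega - a1 * math.sin(omega * tau1) - a2 * math.sin(omega * tau2)
```

Both residuals come out on the order of $10^{-5}$, confirming that the reported $(\omega^*, \tau_2^0)$ indeed sit on the Hopf boundary.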
It is worth mentioning here that, in view of [@Lainscsek1], the coefficients $a_1, a_2, a_3, a_4$ can be positive or negative depending on the data type, and the delays can be chosen to separate PD on or off medication from controls (individuals without the disease), and to separate PD on from PD off medication. Let $\tau_2=\tau_2^0 \in \lbrace \tau_2^{0,+}, \tau_2^{0,-} \rbrace$, then in view of [\[2.19\]](#2.19){reference-type="eqref" reference="2.19"} we have $$\begin{aligned} x(t-\tau_1) &=& \Phi(-\tau_1)y(t)= -\rho (\sin(\omega^*\tau_1)\sin(\omega^*\xi)+\cos(\omega^*\tau_1)\cos(\omega^*\xi)), \\ x(t-\tau_2^0) &=& \Phi(-\tau_2^0)y(t)= -\rho (\sin(\omega^*\tau_2^0)\sin(\omega^*\xi)+\cos(\omega^*\tau_2^0)\cos(\omega^*\xi)), \\ x(t-\tau_3) &=& \Phi(-\tau_3)y(t)= -\rho (\sin(\omega^*\tau_3)\sin(\omega^*\xi)+\cos(\omega^*\tau_3)\cos(\omega^*\xi)).\end{aligned}$$ Thus, the averaged equation [\[2.22\]](#2.22){reference-type="eqref" reference="2.22"} is $$\rho'(\xi)= \frac{1}{8} \rho (3a_4 \rho^2 -4a_3)(\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3)).$$ Assume that $a_3 a_4>0$ and that the delay $\tau_3>0$ is such that $\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3)\neq 0$. Then the averaged equation has a positive equilibrium $\rho^*=\sqrt{(4a_3)/(3a_4)}$ with $D_{\rho}F^0(\rho^*)=a_3(\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3))$. The equilibrium point is asymptotically stable if $a_3(\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3))<0$ and unstable if $a_3(\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3))>0$. Consequently, according to Theorem [Theorem 2](#promediorho){reference-type="ref" reference="promediorho"}, if $a_3a_4>0$, then for $\varepsilon$ sufficiently small there exists a $2\pi/\omega^*$-periodic solution $g(t, \varepsilon)$ to [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"} such that $g(t,0)=\rho^*$. Further, such a periodic solution is asymptotically stable (resp.
unstable) on the center manifold if $a_3(\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3))<0$ (resp. $a_3(\beta_2\cos(\omega^*\tau_3)-\beta_1\sin(\omega^*\tau_3))>0$). Let us consider a particular case of [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"}, namely the equation $$\label{3.14} \dot{x}(t)=-2x(t-\tau_1)-3x(t-\tau_2)+\varepsilon(x^3(t-\tau_3)-x(t-\tau_3)).$$ As usual, by a solution of the equation [\[3.14\]](#3.14){reference-type="eqref" reference="3.14"} we understand a continuously differentiable function $x$ which satisfies the problem [\[3.14\]](#3.14){reference-type="eqref" reference="3.14"}, [\[2.10\]](#2.10){reference-type="eqref" reference="2.10"} for $t\geq 0$. According to Theorem [Theorem 2](#promediorho){reference-type="ref" reference="promediorho"}, for $\tau_2=\tau_2^0$ and $\varepsilon$ sufficiently small, a periodic solution arises around the zero equilibrium. In particular, for $\tau_1=0.113279$ we obtain $\omega^*=3$ and $\tau_2^0=0.750157$. For $\tau_3=1.2$, in Figure [\[F2\]](#F2){reference-type="ref" reference="F2"} (resp. Figure [\[F3\]](#F3){reference-type="ref" reference="F3"}) we present a numerical simulation for $a_3=a_4=1$ (resp. $a_3=a_4=-1$), where if $a_3=a_4=\pm 1$ then $D_{\rho}F^0(\rho^*)=\pm 0.08362$, i.e., if $a_3=a_4=1$ (resp. $a_3=a_4=-1$) the periodic solution is unstable (resp. asymptotically stable) on the center manifold. # Discussion In this paper, we study a general class of scalar delay differential equations with a small parameter. By applying the Hopf bifurcation theorem and the averaging theory for ordinary differential equations, we prove the existence of periodic solutions around the zero solution of perturbed models when the perturbation is sufficiently small. Our theoretical results are applied to an equation arising from a Parkinson's disease model, and numerical simulations illustrate the investigation.
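A simulation of equation [\[3.14\]](#3.14){reference-type="eqref" reference="3.14"} of the kind referred to above can be sketched with a simple fixed-step scheme (a minimal sketch for the asymptotically stable case $a_3=a_4=-1$ of [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"}; the values of $\varepsilon$, the step size, the horizon, and the constant initial history are our own choices, not taken from the paper):

```python
import math

# Delays reported in the text: tau1, tau2 = tau2^0 (Hopf threshold), tau3.
a3, a4 = -1.0, -1.0                       # the asymptotically stable case
tau1, tau2, tau3 = 0.113279, 0.750157, 1.2
eps, dt, T = 0.05, 0.001, 60.0            # our own choices

d1, d2, d3 = (int(round(tau / dt)) for tau in (tau1, tau2, tau3))

def rhs(x, n):
    # Right-hand side at grid index n; it involves only delayed values,
    # so the trapezoid step below never needs the unknown x[n+1].
    u = x[n - d3]
    return -2.0 * x[n - d1] - 3.0 * x[n - d2] + eps * (a4 * u ** 3 - a3 * u)

x = [0.1] * (max(d1, d2, d3) + 1)         # constant initial history phi = 0.1
for _ in range(int(T / dt)):
    n = len(x) - 1
    x.append(x[n] + 0.5 * dt * (rhs(x, n) + rhs(x, n + 1)))  # trapezoid rule

# Peak amplitude over the last 10 time units; the averaged dynamics predict
# convergence toward the stable periodic orbit over times of order 1/eps.
amplitude = max(abs(v) for v in x[-int(10.0 / dt):])
```

The trapezoid rule is used instead of plain Euler because, with all terms delayed, the corrector stage is explicit and the scheme introduces far less artificial growth near the neutrally stable Hopf threshold.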
Our results can be applied to a wide class of neuroscience models with several delays, as long as the conditions of our main results are satisfied. To the best of the author's knowledge, neuroscience models with several delays have not been thoroughly investigated so far; therefore this paper contributes to the investigation of the dynamical behaviour of such models. # Acknowledgements {#acknowledgements .unnumbered} J. Oyarce acknowledges support from the Chilean National Agency for Research and Development (PhD. 2018-21180824). T. Faria, L. T. Magalhães, Normal forms for retarded functional differential equations with parameters and applications to Hopf bifurcation, J. Differential Equations 122 (1995) 181-200. A. Halanay, On the method of averaging for differential equations with retarded arguments, J. Math. Anal. Appl. 14 (1966) 70-76. J.K. Hale, Averaging methods for differential equations with retarded arguments and a small parameter, J. Differential Equations 2 (1966) 57-73. J.K. Hale, W. Huang, Global geometry of the stable regions for two delay differential equations, J. Math. Anal. Appl. 178 (1993) 344-362. J.K. Hale, L.T. Magalhães, W.M. Oliva, Dynamics in infinite dimensions, 2nd edition, Springer, New York, 2002. J.K. Hale, S.M. Verduyn Lunel, Averaging in infinite dimensions, J. Integral Equations Appl. 2 (1990) 463-494. J.K. Hale, S.M. Verduyn Lunel, Introduction to functional differential equations, Appl. Math. Sci., Vol. 99, Springer, New York, 1993. V. In, P. Longhini, A. Palacios (Eds.), Applications of nonlinear dynamics, Understanding Complex Systems, Springer-Verlag, Berlin Heidelberg, 2009. V. In, P. Longhini, A. Palacios (Eds.), International conference on theory and application in nonlinear dynamics (ICAND 2012), Understanding Complex Systems, Springer, Switzerland, 2014. C. Lainscsek, L. Schettino, P. Rowat, E. van Erp, D. Song, H. Poizner, Nonlinear DDE analysis of repetitive hand movements in Parkinson's disease, pp.
421-427 in Applications of nonlinear dynamics, Understanding Complex Systems (V. In, P. Longhini, and A. Palacios, eds.), Springer-Verlag, Berlin Heidelberg, 2009. C. Lainscsek, P. Rowat, L. Schettino, D. Lee, D. Song, C. Letellier, H. Poizner, Finger tapping movements of Parkinson's disease patients automatically rated using nonlinear delay differential equations, Chaos 22 (2012) 013119. C. Lainscsek, T. J. Sejnowski, Electrocardiogram classification using delay differential equations, Chaos 23 (2013) 023132. C. Lainscsek, A. L. Sampson, R. Kim, M. L. Thomas, K. Man, X. Lainscsek, *et al.*, Nonlinear dynamics underlying sensory processing dysfunction in schizophrenia, Proc. Natl. Acad. Sci. 116 (2019) 3847-3852. C. Lainscsek, V. Messager, A. Portman, J.F. Muir, T.J. Sejnowski, C. Letellier, Automatic sleep scoring from a single electrode using delay differential equations, pp. 371-382 in Applied non-linear dynamical systems (J. Awrejcewicz, ed.), Springer, New York, 2014. C. Lainscsek, M.E. Hernandez, J. Weyhenmeyer, T.J. Sejnowski, H. Poizner, Non-linear dynamical analysis of EEG time series distinguishes patients with Parkinson's disease from healthy individuals, Frontiers in Neurology 4 (2013) 1-8. M. Lakrib, On the averaging method for differential equations with delay, Electron. J. Differential Equations 65 (2002) 1-16. M. Lakrib, T. Sari, Averaging results for functional differential equations, Sib. Math. J. 45 (2004) 311-320. X. Li, S. Ruan, J. Wei, Stability and bifurcation in delay-differential equations with two delays, J. Math. Anal. Appl. 236 (1999) 254-280. J. G. Mesquita, A. Slavík, Periodic averaging theorems for various types of equations, J. Math. Anal. Appl. 387 (2012) 862-877. J.G. Mesquita, M. Federson, Averaging for retarded functional differential equations, J. Math. Anal. Appl. 382 (2011) 77-85. M. J. Piotrowska, A remark on the ODE with two discrete delays, J. Math. Anal. Appl. 329 (2007) 664-676. J.A. Sanders, F. Verhulst, J.
Murdock, Averaging methods in nonlinear dynamical systems, 2nd edition, Springer, New York, 2007. H. Smith, An introduction to delay differential equations with applications to the life sciences, Springer, New York, 2011. F. Verhulst, Nonlinear differential equations and dynamical systems, 2nd edition, Springer, New York, 2000.
--- abstract: | We show that the following properties of unital ${\rm C^*}$-algebras in a class $\Omega$ are preserved by unital simple ${\rm C^*}$-algebras in the class ${\rm WTA}\Omega$: $(1)$ uniform property $\Gamma$, $(2)$ a certain type of tracial nuclear dimension at most $n$, and $(3)$ weak $(m, n)$-divisibility. address: - | Qingzhai Fan\ Department of Mathematics\ Shanghai Maritime University\ Shanghai\ China\ 201306 - | Jiahui Wang\ Department of Mathematics\ Shanghai Maritime University\ Shanghai\ China\ 201306 author: - Qingzhai Fan and Jiahui Wang title: "Certain properties of generalized tracially approximated ${\\rm \\textbf{C}^*}$-algebras" --- [^1] [^2] # Introduction The Elliott program for the classification of amenable ${\rm C^*}$-algebras might be said to have begun with the ${\rm K}$-theoretical classification of AF algebras in [@E1]. A major next step was the classification of simple AH algebras without dimension growth (in the real rank zero case see [@E6], and in the general case see [@GL]). This led eventually to the classification of simple separable amenable ${\rm C^*}$-algebras with finite nuclear dimension in the UCT class (see [@KP], [@PP2], [@EZ5], [@GLN1], [@GLN2], [@TWW1], [@EGLN1], [@GL2], and [@GL3]). A crucial intermediate step was Lin's axiomatization of Elliott-Gong's decomposition theorem for simple AH algebras of real rank zero (classified by Elliott-Gong in [@E6]) and Gong's decomposition theorem ([@G1]) for simple AH algebras (classified by Elliott-Gong-Li in [@GL]). For this purpose, Lin introduced the concepts of TAF and TAI ([@L0] and [@L1]). Instead of assuming inductive limit structure, Lin started with a certain abstract (tracial) approximation property. Elliott and Niu in [@EZ] considered this notion of tracial approximation with respect to other classes of unital ${\rm C^*}$-algebras than the finite-dimensional ones for TAF and the interval algebras for TAI.
Large and centrally large subalgebras were introduced in [@P3] and [@AN] by Phillips and Archey as abstractions of Putnam's orbit breaking subalgebra of the crossed product algebra ${\rm C}^*(X,\mathbb{Z},\sigma)$ of the Cantor set by a minimal homeomorphism in [@P]. Inspired by centrally large subalgebras and tracial approximation ${\rm C^*}$-algebras, Elliott, Fan and Fang introduced a class of unital weakly tracially approximated ${\rm C^*}$-algebras in [@EFF]. The notion generalizes both Archey and Phillips's centrally large subalgebras and Lin's notion of tracial approximation. Let $\Omega$ be a class of ${\rm C^*}$-algebras. Elliott, Fan, and Fang in [@EFF] introduced the class of ${\rm C^*}$-algebras which can be weakly tracially approximated by ${\rm C^*}$-algebras in $\Omega$, and denote this class by ${\rm WTA}\Omega$ (see Definition [Definition 4](#def:2.6){reference-type="ref" reference="def:2.6"}). Let $\Omega$ be a class of finite-dimensional ${\rm C^*}$-algebras. Let $A$ be an infinite-dimensional unital simple $\rm C^{*}$-algebra and let $B\subseteq A$ be a centrally large subalgebra of $A$ such that $B$ has tracial topological rank zero. In [@FFW1], Fan, Fang and Wang show that $A\in {\rm WTA}\Omega$. In [@EFF], Elliott, Fan, and Fang show that the following properties of unital ${\rm C^*}$-algebras in a class $\Omega$ are inherited by unital simple ${\rm C^*}$-algebras in the class ${\rm WTA}\Omega$: $(1)$ tracial $\mathcal{Z}$-absorption, $(2)$ tracial nuclear dimension at most $n$, $(3)$ the property $\rm SP$, and $(4)$ $m$-almost divisibility. In this paper, we shall prove the following results: Let $\Omega$ be a class of unital ${\rm C^*}$-algebras which have uniform property $\Gamma$ (see Definition [Definition 7](#def:2.9){reference-type="ref" reference="def:2.9"}).
Then $A$ has uniform property $\Gamma$ for any simple unital stably finite ${\rm C^*}$-algebra $A\in{\rm WTA}\Omega$ (Theorem [Theorem 9](#thm:3.1){reference-type="ref" reference="thm:3.1"}). Let $\Omega$ be a class of unital nuclear ${\rm C^*}$-algebras which have a type of tracial nuclear dimension at most $n$ (see Definition [\[def:2.8\]](#def:2.8){reference-type="ref" reference="def:2.8"}). Then $A$ has the type of tracial nuclear dimension at most $n$ for any simple unital ${\rm C^*}$-algebra $A\in{\rm WTA}\Omega$ (Theorem [Theorem 12](#thm:3.4){reference-type="ref" reference="thm:3.4"}). Let $\Omega$ be a class of unital ${\rm C^*}$-algebras which are weakly $(m, n)$-divisible (see Definition [\[def:2.2\]](#def:2.2){reference-type="ref" reference="def:2.2"}). Let $A\in {\rm WTA}\Omega$ be a simple unital stably finite $C^{*}$-algebra such that for any integer $n\in \mathbb{N}$ the ${\rm C^*}$-algebra ${\rm M}_{n}(A)$ belongs to the class ${\rm WTA}\Omega$. Then $A$ is secondly weakly $(m, n)$-divisible (see Definition [Definition 3](#def:2.3){reference-type="ref" reference="def:2.3"}) (Theorem [Theorem 15](#thm:3.7){reference-type="ref" reference="thm:3.7"}). As applications, the following known results follow from these results. Let $A$ be an infinite-dimensional stably finite unital simple separable ${\rm C^*}$-algebra. Let $B\subseteq A$ be a centrally large subalgebra in $A$ such that $B$ has uniform property $\Gamma$. Then $A$ has uniform property $\Gamma$. This result was obtained by Fan and Zhang in [@FZ]. Let $\Omega$ be a class of stably finite unital ${\rm C^*}$-algebras such that for any $B\in \Omega$, $B$ has uniform property $\Gamma$. Then $A$ has uniform property $\Gamma$ for any simple unital ${\rm C^*}$-algebra $A\in \rm{TA}\Omega$. This result was obtained by Fan and Zhang in [@FZ]. Let $A$ be a unital simple stably finite separable ${\rm C^*}$-algebra.
Let $B\subseteq A$ be a centrally large subalgebra of $A$ such that $B$ is weakly $(m, n)$-divisible. Then $A$ is secondly weakly $(m, n)$-divisible. This result was obtained by Fan, Fang, and Zhao in [@FFZ]. # Preliminaries and definitions Let $A$ be a ${\rm C^*}$-algebra, and let ${\rm M}_n(A)$ denote the ${\rm C^*}$-algebra of $n\times n$ matrices with entries in $A$. Let ${\rm M}_{\infty}(A)$ denote the algebraic inductive limit of the sequence $({\rm M}_n(A),\phi_n),$ where $\phi_n:{\rm M}_n(A)\to {\rm M}_{n+1}(A)$ is the canonical embedding as the upper left-hand corner block. Let ${\rm M}_{\infty}(A)_+$ (respectively, ${\rm M}_{n}(A)_+$) denote the positive elements of ${\rm M}_{\infty}(A)$ (respectively, ${\rm M}_{n}(A)$). Given $a, b\in {\rm M}_{\infty}(A)_+,$ one says that $a$ is Cuntz subequivalent to $b$ (written $a\precsim b$) if there is a sequence $(v_n)_{n=1}^\infty$ of elements of ${\rm M}_{\infty}(A)$ such that $$\lim_{n\to \infty}\|v_nbv_n^*-a\|=0.$$ One says that $a$ and $b$ are Cuntz equivalent (written $a\sim b$) if $a\precsim b$ and $b\precsim a$. We shall write $\langle a\rangle$ for the Cuntz equivalence class of $a$. The object ${\rm Cu}(A):=(A\otimes \mathcal{K})_+/\sim$ will be called the Cuntz semigroup of $A$. (See [@CEI].) Observe that any $a, b\in {\rm M}_{\infty}(A)_+$ are Cuntz equivalent to orthogonal elements $a', b'\in {\rm M}_{\infty}(A)_+$ (i.e., $a'b'=0$), and so ${\rm Cu}(A)$ becomes an ordered semigroup when equipped with the addition operation $$\langle a\rangle+\langle b\rangle=\langle a+ b\rangle$$ whenever $ab=0$, and the order relation $$\langle a\rangle\leq \langle b\rangle\Leftrightarrow a\precsim b.$$ Let $A$ be a unital ${\rm C^*}$-algebra. Recall that a positive element $a\in A$ is called purely positive if $a$ is not Cuntz equivalent to a projection. Let $A$ be a stably finite ${\rm C^*}$-algebra and let $a\in A$ be a positive element.
Then either $a$ is a purely positive element or $a$ is Cuntz equivalent to a projection. Given $a$ in $A_+$ and $\varepsilon>0,$ we denote by $(a-\varepsilon)_+$ the element of ${\rm C^*}(a)$ corresponding (via the functional calculus) to the function $f(t)={\max (0, t-\varepsilon)}$, $t\in \sigma(a)$. The following facts are well known. **Theorem 1**. *([@PPT], [@HO], [@P3], [@RW].) [\[thm:2.1\]]{#thm:2.1 label="thm:2.1"} Let $A$ be a ${\rm C^*}$-algebra.* *$(1)$ Let $a, b\in A_+$ and any $\varepsilon>0$ be such that $\|a-b\|<\varepsilon$. Then $(a-\varepsilon)_+\precsim b$.* *$(2)$ Let $a, p$ be positive elements in $A$ with $p$ a projection. If $p\precsim a$ and $p$ is not equivalent to $a$, then there is a nonzero element $b$ in $A$ such that $bp=0$ and $b+p\precsim a$.* *$(3)$ Let $a$ be a positive element of a ${\rm C^*}$-algebra $A$ which is not Cuntz equivalent to a projection. Let $\delta>0$, and let $g\in C_0(0,1]$ be a non-negative function with $g=0$ on $(\delta,1),$ $g>0$ on $(0,\delta)$, and $\|g\|=1$. Then $g(a)\neq 0$ and $(a-\delta)_++g(a)\precsim a.$* The property of weak $(m, n)$-divisibility was introduced by Kirchberg and Rørdam in [@KM]. **Definition 2**. *([@KM].)[\[def:2.2\]]{#def:2.2 label="def:2.2"} Let $A$ be a unital ${\rm C^*}$-algebra. Let $m, n\geq 1$ be two integers. $A$ is said to be weakly $(m, n)$-divisible, if for every $a\in {\rm M}_{\infty}(A)_{+}$ and any $\varepsilon>0$, there exist elements $x_{1}, x_{2}, \cdots, x_{n}\in {\rm M}_{\infty}(A)_{+}$, such that $m\left\langle x_{j}\right\rangle \leq \left\langle a \right\rangle$, for all $j=1, 2, \cdots, n$ and $\left\langle (a-\varepsilon)_{+} \right\rangle\leq\left\langle x_{1} \right\rangle+\left\langle x_{2} \right\rangle+\cdots+\left\langle x_{n} \right\rangle$.* **Definition 3**. *Let $A$ be a unital ${\rm C^*}$-algebra. Let $m, n\geq 1$ be two integers.
$A$ is said to be secondly weakly $(m, n)$-divisible, if for every $a\in {\rm M}_{\infty}(A)_{+}$ and any $\varepsilon>0$, there exist elements $x_{1}, x_{2}, \cdots, x_{n}\in {\rm M}_{\infty}(A)_{+}$ such that $m\left\langle x_{j}\right\rangle \leq \left\langle a \right\rangle +\left\langle a \right\rangle$ for all $j=1, 2, \cdots, n$, and $\left\langle (a-\varepsilon)_{+} \right\rangle\leq\left\langle x_{1} \right\rangle+\left\langle x_{2} \right\rangle+\cdots+\left\langle x_{n} \right\rangle$.* Let $\Omega$ be a class of unital ${\rm C^*}$-algebras. Elliott, Fan, and Fang defined as follows the class of ${\rm C^*}$-algebras which can be weakly tracially approximated by ${\rm C^*}$-algebras in $\Omega$, and denoted this class by ${\rm WTA}\Omega$ in [@EFF]. **Definition 4**. *([@EFF].) A simple unital ${\rm C^*}$-algebra $A$ is said to belong to the class ${\rm WTA}\Omega$ if, for any $\varepsilon>0$, any finite subset $F\subseteq A$, and any non-zero element $a\geq 0$, there exist a projection $p\in A$, an element $g\in A$ with $0\leq g\leq 1$, and a unital ${\rm C^*}$-subalgebra $B$ of $A$ with $g\in B, ~1_B=p$, and $B\in \Omega$, such that* *$(1)$ $(p-g)x\in_{\varepsilon} B, ~ x(p-g)\in_{\varepsilon} B$ for all $x\in F$,* *$(2)$ $\|(p-g)x-x(p-g)\|<\varepsilon$ for all $x\in F$,* *$(3)$ $1-(p-g)\precsim a$, and* *$(4)$ $\|(p-g)a(p-g)\|\geq \|a\|-\varepsilon$.* Let $\Omega$ be a class of unital ${\rm C^*}$-algebras. Denote by ${\rm TA}\Omega$ the class of simple unital separable ${\rm C^*}$-algebras which can be tracially approximated by ${\rm C^*}$-algebras in $\Omega$. It follows from the definitions and by the proof of Theorem 4.1 of [@EZ] that if $A$ is a simple unital ${\rm C^*}$-algebra and $A\in {\rm TA}\Omega$, then $A\in {\rm WTA}\Omega$. Furthermore, if $\Omega=\{B\}$, and $B\subseteq A$ is a centrally large subalgebra of $A$, then $A\in {\rm WTA}\Omega$.
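For positive matrices, the cut-down $(a-\varepsilon)_+$ appearing in Theorem 1 and Definitions 2-3 can be computed directly from the spectral decomposition, and Cuntz subequivalence of positive elements of ${\rm M}_k(\mathbb{C})$ reduces to comparison of ranks. A minimal numerical sketch (the matrices below are our own illustrative choices):

```python
import numpy as np

def cut_down(a, eps):
    # (a - eps)_+ : apply f(t) = max(0, t - eps) to a via functional calculus
    lam, u = np.linalg.eigh(a)
    return (u * np.maximum(lam - eps, 0.0)) @ u.conj().T

b = np.diag([1.0, 0.5, 0.0])      # positive element of M_3, rank 2
a = b + 0.05 * np.eye(3)          # ||a - b|| = 0.05 < eps = 0.1
c = cut_down(a, 0.1)              # eigenvalues 0.95, 0.45, 0

# Theorem 1 (1): ||a - b|| < eps implies (a - eps)_+ Cuntz subequivalent to b;
# for positive matrices this amounts to rank((a - eps)_+) <= rank(b).
```

Here the cut-down removes the eigenvalue of $a$ lying below $\varepsilon$, which is exactly how Theorem 1 (1) trades a norm estimate for a comparison in the Cuntz semigroup.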
Winter and Zacharias introduced the notion of nuclear dimension for ${\rm C^*}$-algebras in [@WW3]. **Definition 5**. *([@WW3].)[\[def:2.7\]]{#def:2.7 label="def:2.7"} A ${\rm C^*}$-algebra $A$ has nuclear dimension at most $n$, denoted by ${\rm dim_{nuc}}(A)\leq n$, if there exists a net $(F_\lambda,\psi_\lambda, \varphi_\lambda)_{\lambda\in \Lambda}$ such that the $F_\lambda$ are finite-dimensional ${\rm C^*}$-algebras, and such that $\psi_\lambda:A\to F_\lambda$ and $\varphi_\lambda:F_\lambda\to A$ are completely positive maps satisfying* *$(1)$ $\varphi_\lambda\psi_\lambda(a)\to a$ uniformly on finite subsets of $A$,* *$(2)$ $\|\psi_\lambda\|\leq 1$,* *$(3)$ for each $\lambda$, $F_\lambda$ decomposes into $n+1$ ideals $F_\lambda={F_\lambda}^0+\cdots+{F_\lambda}^n$ such that $\varphi_\lambda|_{{F_\lambda}^i}$ is a completely positive contractive order zero map (i.e., it preserves orthogonality: $\varphi_\lambda(e)\varphi_\lambda(f)=0$ for all $e,~ f\in {{F_\lambda}^i}$ with $ef=0$) for $i=0,~\cdots,~n$.* Inspired by Hirshberg and Orovitz's tracial $\mathcal{Z}$-absorption in [@HO], Fu introduced a notion of tracial nuclear dimension in his doctoral dissertation [@FU] (see also [@FL]). **Definition 6**. *([@FU].)[\[def:2.8\]]{#def:2.8 label="def:2.8"} Let $A$ be a $\rm C^{*}$-algebra. Let $n\in \mathbb{N}$ be an integer.
$A$ is said to have second type of tracial nuclear dimension at most $n$, denoted by ${\rm {T^2dim_{nuc}}}(A)\leq n$, if for any finite positive subset $\mathcal{F}\subseteq A$, for any $\varepsilon>0$ and for any nonzero positive element $a\in A$, there exist a finite dimensional $\rm C^{*}$-algebra $F=F_{0}\oplus\cdots\oplus F_{n}$ and completely positive maps $\psi:A\rightarrow F$, $\varphi:F\rightarrow A$ such that* *$(1)$ for any $x\in F$, there exists $x'\in A_{+}$, such that $x'\precsim a$ and $\|x-x'-\varphi\psi(x)\|<\varepsilon$,* *$(2)$ $\|\psi\|\leq1$, and* *$(3)$ $\varphi|_{F_{i}}$ is a contractive completely positive order zero map for $i=1, \cdots, n$.* Inspired by Fu's second type of tracial nuclear dimension in [@FU], we shall introduce a new type of tracial nuclear dimension for unital $\rm C^{*}$-algebras. **Definition 7**. *Let $A$ be a unital $\rm C^{*}$-algebra. Let $n\in \mathbb{N}$ be an integer. $A$ is said to have a new type of tracial nuclear dimension at most $n$, if for any finite positive subset $\mathcal{F}\subseteq A$, for any $\varepsilon>0$ and for any nonzero positive element $a\in A_{+}$, there exist a finite dimensional $\rm C^{*}$-algebra $F=F_{0}\oplus\cdots\oplus F_{n}$ and completely positive maps $\psi:A\rightarrow F$, $\varphi:F\rightarrow A$ such that* *$(1)$ for any $x\in F$, there exists $x'\in A_{+}$, such that $\|x-x'-\varphi\psi(x)\|<\varepsilon$,* *$(2)$ $(1_{A}-\varphi\psi(1_{A})-\varepsilon)_+\precsim a$,* *$(3)$ $\|\psi\|\leq1$, and* *$(4)$ $\varphi|_{F_{i}}$ is a contractive completely positive order zero map for $i=1, \cdots, n$.* Let $A$ be a unital $\rm C^{*}$-algebra. It is easy to see that ${\rm {T^2dim_{nuc}}}(A)\leq n$ implies that $A$ has the new type of tracial nuclear dimension at most $n$. Uniform property $\Gamma$ was introduced by J. Castillejos, S. Evington, A. Tikuisis, S. White, and W. Winter, and was used to prove that $\mathcal{Z}$-stability implies finite nuclear dimension in [@CETWW].
Examples of separable nuclear ${\rm C^*}$-algebras with uniform property $\Gamma$ are by now abundant. Kerr and Szabó established uniform property $\Gamma$ for crossed product ${\rm C^*}$-algebras arising from a free action with the small boundary property of an infinite amenable group on a compact metrisable space (see Theorem 9.4 in [@KS]). We recall the equivalent local refinement of uniform property $\Gamma$ from Proposition 2.4 of [@CETWW1]. **Theorem 8**. *([@CETWW1].) Let $A$ be a separable ${\rm C^*}$-algebra with $T(A)$ nonempty and compact. Then the following are equivalent:* *$(1)$ $A$ has uniform property $\Gamma$.* *$(2)$ For any finite subset $F\subseteq A$, any $\varepsilon>0$, and any integer $n\in \mathbb{N}$, there exist pairwise orthogonal positive contractions $e_1, \cdots, e_n\in A$ such that for $i=1, \cdots, n$, and $a\in F$, we have $\|e_ia-ae_i\|<\varepsilon$ and $$\sup_{\tau\in T(A)}\|\tau(ae_i)-\frac{1}{n}\tau(a)\|<\varepsilon.$$* # The main results **Theorem 9**. *Let $\Omega$ be a class of unital separable $\rm C^{*}$-algebras such that $T(B)$ is nonempty and compact and $B$ has uniform property $\Gamma$ for any $B\in \Omega$. Then $A$ has uniform property $\Gamma$ for any simple infinite-dimensional separable unital stably finite $\rm C^{*}$-algebra $A\in {\rm WTA}\Omega$.* *Proof.* Since $A$ is a stably finite $\rm C^{*}$-algebra, $T(A)$ is nonempty. This together with the unitality of $A$ implies that $T(A)$ is compact. By Theorem [Theorem 8](#thm:2.11){reference-type="ref" reference="thm:2.11"}: (2), we need to show that for a fixed finite subset $F=\{a_1, a_2, \cdots, a_k\}$ of $A$ (we may assume that $\|a_j\|\leq 1$ for all $j=1, \cdots, k$), any $\varepsilon>0$, and any integer $n\in \mathbb{N}$, there exist pairwise orthogonal positive contractions $e_1, \cdots, e_n\in A$ such that $\|e_ia_j-a_je_i\|<\varepsilon$ and $$\sup_{\tau\in T(A)}\|\tau(a_je_i)-\frac{1}{n}\tau(a_j)\|<\varepsilon,$$ for $i=1, \cdots, n,$ and $j=1, \cdots, k$.
For $\varepsilon>0$, we choose $0<\delta<\varepsilon$ with $X=[0,1], f(t)=t^{1/2}, g(t)=(1-t)^{1/2}$ according to Lemma 2.5.11 in [@L2]. Since $A$ is an infinite-dimensional unital simple separable ${\rm C^*}$-algebra, by Corollary 2.5 in [@P3], there exists a nonzero positive element $a\in A$ such that $\delta>d_\tau(a) =\lim_{n\to\infty}\tau(a^{1/n})$ for any $\tau\in T(A)$. For $F=\{a_1, a_2, \cdots, a_k\}$, any $\delta>0$, and nonzero $a\in A_+$, since $A\in {\rm WTA}\Omega$, there exist a projection $p\in A$, an element $g\in A$ with $\|g\|\leq1$ and a unital $\rm C^{*}$-subalgebra $B$ of $A$ with $g\in B, 1_{B}=p$ and $B\in\Omega$ such that $(1)$ $(p-g)x\in_{\delta}B, ~x(p-g)\in_{\delta}B$ for any $x\in F$, $(2)$ $\|(p-g)x-x(p-g)\|<\delta$ for any $x\in F$, and $(3)$ $1_{A}-(p-g)\precsim a$. By $(2)$, we have $(1)'$ $\|(1_{A}-(p-g))a_j-a_j(1_{A}-(p-g))\|<\delta$ for any $j=1, \cdots, k$. By $(3)$ and by Proposition 1.19 of [@P3], we have $$d_\tau(1_{A}-(p-g))\leq d_\tau(a)$$ for any $\tau\in T(A)$. Since $d_\tau(a)<\delta$, and $\tau(1_{A}-(p-g))\leq d_\tau(1_{A}-(p-g))$, we have $$\tau(1_{A}-(p-g))<\delta$$ for any $\tau\in T(A)$. By the choice of $\delta$, by $(2)$, $(1)'$ and by Lemma 2.5.11 in [@L2], one has $\|(1_{A}-(p-g))^{1/2}a_j-a_j(1_{A}-(p-g))^{1/2}\|<\varepsilon,$ and $\|(p-g)^{1/2}a_j-a_j(p-g)^{1/2}\|<\varepsilon.$ By $(1)$, there exists $a_j'\in B$ such that $(2)'$ $\|(p-g)a_j-a_j'\|<\delta.$ By $(1)'$ and $(2)'$, one has $\|a_j-a_j'-(1_{A}-(p-g))^{1/2}a_j(1_{A}-(p-g))^{1/2}\|$ $=\|(1_{A}-(p-g))a_j+(p-g)a_j-a_j'- (1_{A}-(p-g))^{1/2}a_j(1_{A}-(p-g))^{1/2}\|$ $\leq\|(1_{A}-(p-g))a_j-(1_{A}-(p-g))^{1/2}a_j(1_{A}-(p-g))^{1/2}\|+ \|(p-g)a_j-a_j'\|$$<\varepsilon+\delta<2\varepsilon,$ and $\|(p-g)^{1/2}a_j(p-g)^{1/2}-a_j'\|\leq\|(p-g)^{1/2}a_j(p-g)^{1/2}- (p-g)a_j\|$$+\|(p-g)a_j-a_j'\|$ $<\varepsilon+\delta<2\varepsilon$ for all $1\leq j\leq k$. 
For $\varepsilon>0$, and any integer $n\in \mathbb{N}$, we choose $\delta''=\delta''(\varepsilon,n)$ (with $\delta''<\varepsilon$) sufficiently small so that Lemma 2.5.12 in [@L2] applies. For $\delta''/2>0$, the finite subset $\{a_1', a_2', \cdots, a_k', (p-g), (p-g)^{1/2}\}$ of $B$ and $n\in \mathbb{N}$, since $B$ has uniform property $\Gamma$, there exist pairwise orthogonal positive contractions $e_1', \cdots, e_n'\in B$ such that for $i=1, \cdots, n, j=1, \cdots, k$, one has $$\|e_i'a_j'-a_j'e_i'\|<\delta''/2, \|e_i'(p-g)-(p-g)e_i'\|<\delta''/2,$$$$\|e_i'(p-g)^{1/2}-(p-g)^{1/2}e_i'\|<\delta''/2,$$ and $$\sup_{\tau\in T(B)}\|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')\|<\delta''/2.$$ Since $\|(p-g)^{1/2}e_i'-e_i'(p-g)^{1/2}\|<\delta''$ and $e_i'e_j'=0$ for $i\neq j$, we have $\|(p-g)^{1/2}e_i'(p-g)^{1/2}(p-g)^{1/2}e_j'(p-g)^{1/2}\|$ $\leq\|(p-g)^{1/2}e_i'(p-g)^{1/2}(p-g)^{1/2}e_j'(p-g)^{1/2}- (p-g)e_i'(p-g)^{1/2}e_j'(p-g)^{1/2}\|$ $+\|(p-g)e_i'(p-g)^{1/2}e_j'(p-g)^{1/2}- (p-g)e_i'e_j'(p-g)\|$ $<\delta''/2+\delta''/2=\delta''$. By the choice of $\delta$, since $(p-g)^{1/2}e_i'(p-g)^{1/2}$ is a contraction, by the proof of Lemma 2.5.12 in [@L2], one can find pairwise orthogonal positive contractions $e_i$ $(i=1,\cdots, n)$, such that $$\|(p-g)^{1/2}e_i'(p-g)^{1/2}-e_i\|<\varepsilon.$$ We have $\|a_je_i-a_j'e_i'\|$$\leq\| a_je_i-a_j(p-g)^{1/2}e_i'(p-g) ^{1/2}\|$ $+\|a_j(p-g)^{1/2}e_i'(p-g)^{1/2}-a_j(p-g)^{1/2}(p-g)^{1/2}e_i'\|$ $+\|a_j(p-g)^{1/2}(p-g)^{1/2}e_i'-(p-g)^{1/2}a_j(p-g)^{1/2}e_i'\|$ $+\|(p-g)^{1/2}a_j(p-g)^{1/2}e_i'-a_j'e_i'\|\leq 4\varepsilon$. With the same argument, one has $$\|e_ia_j-e_i'a_j'\|<4\varepsilon$$ for $i=1, \cdots, n, j=1, \cdots, k$. Since $\|e_i'a_j'-a_j'e_i'\|<\delta''/2$, one has $\|a_je_i-e_ia_j\|$ $\leq\|a_je_i-a_j'e_i'\|+\|a_j'e_i'-e_i'a_j'\|+\|e_i'a_j'-e_ia_j\|$ $<4\varepsilon+4\varepsilon+\delta''/2<9\varepsilon,$ for $i=1,\cdots, n, j=1, \cdots, k$.
Since $\|a_je_i-a_j'e_i'\|<4\varepsilon$, for any $\tau\in T(A)$ one has $$|\tau(a_je_i)-\tau(a_j'e_i') |<4\varepsilon.$$ Since $\|a_j-a_j'-(1_{A}-(p-g))^{1/2}a_j(1_{A}-(p-g))^{1/2}\|<2\varepsilon,$ we have $$|\tau(a_j)-\tau(a_j')-\tau((1_{A}-(p-g))^{1/2}a_j(1_{A}-(p-g))^{1/2})|<2\varepsilon.$$ Therefore, we have $|\tau(a_je_i)-\frac{1}{n}\tau(a_j)|$ $\leq|\tau(a_je_i)-\tau(a_j'e_i')|+|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')| +|\frac{1}{n}\tau(a_j')-\frac{1}{n}\tau(a_j)|$ $\leq4\varepsilon+|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')|+\frac{1}{n} (2\varepsilon+\tau((1-(p-g))^{1/2}a_j(1-(p-g))^{1/2}))$ $\leq4\varepsilon+|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')|+\frac{1}{n} (2\varepsilon+\tau(1-(p-g)))$ $\leq5\varepsilon+|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')|$. Therefore, one has $\sup_{\tau\in T(A)}|\tau(a_je_i)-\frac{1}{n}\tau(a_j)|$ $\leq \sup_{\tau\in T(A)}|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')|+5\varepsilon$ $\leq \frac{1}{\tau(p)}\sup_{\tau\in T(B)}|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')|+5\varepsilon$ $\leq \frac{1}{1-\delta}\sup_{\tau\in T(B)}|\tau(a_j'e_i')-\frac{1}{n}\tau(a_j')|+5\varepsilon$ $<5\varepsilon +\frac{\delta''}{1-\delta}$ $<6\varepsilon.$ By Theorem [Theorem 8](#thm:2.11){reference-type="ref" reference="thm:2.11"}: (2), $A$ has uniform property $\Gamma$. ◻ The following two corollaries were obtained by Fan and Zhang in [@FZ]. **Corollary 10**. *([@FZ]) [\[cor:3.2\]]{#cor:3.2 label="cor:3.2"}Let $A$ be an infinite-dimensional stably finite unital simple ${\rm C^*}$-algebra. Let $B\subseteq A$ be a centrally large subalgebra of $A$ such that $B$ has uniform property $\Gamma$. Then $A$ has uniform property $\Gamma$.* **Corollary 11**. *([@FZ]) [\[cor:3.3\]]{#cor:3.3 label="cor:3.3"} Let $\Omega$ be a class of stably finite unital ${\rm C^*}$-algebras such that for any $B\in \Omega$, $B$ has uniform property $\Gamma$. Then $A$ has uniform property $\Gamma$ for any simple unital ${\rm C^*}$-algebra $A\in \rm{TA}\Omega$.* **Theorem 12**.
*Let $\Omega$ be a class of unital nuclear ${\rm C^*}$-algebras which have the new type of tracial nuclear dimension at most $n$ (in the sense of Definition [Definition 7](#def:2.9){reference-type="ref" reference="def:2.9"}). Then $A$ has the new type of tracial nuclear dimension at most $n$ for any simple unital ${\rm C^*}$-algebra $A\in{\rm WTA}\Omega$.* *Proof.* We must show that for any finite subset $\mathcal{F}=\{a_1, a_2,$ $\cdots, a_k\}$ $\subseteq A$ of positive elements, for any $\varepsilon>0$, and for any nonzero positive element $b\in A$, there exist a finite dimensional $\rm C^{*}$-algebra $F=F_{0}\oplus\cdots\oplus F_{n}$ and completely positive maps $\psi:A\rightarrow F$,  $\varphi:F\rightarrow A$ such that $(1)$ for any $x\in \mathcal{F}$, there exists $\overline{x}\in A_+$, such that $\|x-\overline{x}-\varphi\psi(x)\|<\varepsilon$, $(2)$ $(1_{A}-\varphi\psi(1_{A})-\varepsilon)_+\precsim b$, $(3)$ $\|\psi\|\leq1$, and $(4)$ $\varphi|_{F_{i}}$ is a completely positive contractive order zero map for $i=0, 1, \cdots, n$. By Lemma 2.3 of [@L2], there exist positive elements $b_{1}, b_{2}\in A$ of norm one such that $b_{1}b_{2}=0, b_{1}\sim b_{2}$ and $b_{1}+b_{2}\precsim b$. Given $\varepsilon'>0$, for $H=\mathcal{F}\cup\{b\}$, since $A\in {\rm WTA}\Omega$, there exist a projection $p\in A$, an element $g\in A$ with $\|g\|\leq1$ and a unital $\rm C^{*}$-subalgebra $B$ of $A$ with $g\in B, 1_{B}=p$ and $B\in\Omega$ such that $(1)'$ $(p-g)x\in_{\varepsilon'}B$ and $x(p-g)\in_{\varepsilon'}B$ for any $x\in H$, $(2)'$ $\|(p-g)x-x(p-g)\|<\varepsilon'$ for any $x\in H$, $(3)'$ $1_{A}-(p-g)\precsim b_{1}\sim b_{2}$, and $(4)'$ $\|(p-g)b(p-g)\|\geq 1-\varepsilon'$. By $(2)'$ and Lemma 2.5.11 of [@L2], with sufficiently small $\varepsilon'$, we can get $(5)'$ $\|(p-g)^{\frac{1}{2}}x-x(p-g)^{\frac{1}{2}}\|<\varepsilon$ for any $x\in H$, and $(6)'$ $\|(1_{A}-(p-g))^{\frac{1}{2}}x-x(1_{A}-(p-g))^{\frac{1}{2}}\|<\varepsilon$ for any $x\in H$. 
By $(1)'$ and $(5)'$, with sufficiently small $\varepsilon'$, there exist positive elements $a'_{1}, \cdots, a'_{k}\in B$ and a positive element $b_{2}'\in B$ such that $\|(p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}}-a'_{i}\|<\varepsilon$ for $1\leq i\leq k$, and $\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-b_{2}'\|<\varepsilon$. Therefore, one has $\|a_{i}-a'_{i}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}\|$ $\leq\|a_{i}-(p-g)a_{i}-(1_{A}-(p-g))a_{i}\|+\|(p-g)a_{i}-(p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}}\|$ $+\|(p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}}-a'_{i}\|$ $+\|(1_{A}-(p-g))a_{i}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}\|$ $<0+\varepsilon+\varepsilon+\varepsilon=3\varepsilon$ for $1\leq i\leq k$, where the first term vanishes and the remaining three terms are controlled by $(5)'$, the choice of $a'_{i}$, and $(6)'$. Since $\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-b_{2}'\|<\varepsilon$, by (1) of Theorem [\[thm:2.1\]](#thm:2.1){reference-type="ref" reference="thm:2.1"}, we have $$(b_{2}'-3\varepsilon)_{+}\precsim((p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-2\varepsilon)_{+}. ~~~ \hspace{0.4cm}\textbf{(3.4.1)}$$ By $(4)'$, one has $$\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}\| \geq\|(p-g)b_{2}(p-g)\|\geq1-\varepsilon.$$ Therefore, we have $\|(b_{2}'-3\varepsilon)_{+}\|\geq\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}\|-4\varepsilon\geq1-5\varepsilon$; hence, with $0<\varepsilon'<\varepsilon<\frac{1}{5}$, $(b_{2}'-3\varepsilon)_{+}\neq0$. Define a completely positive contractive map $\varphi'':A\rightarrow A$ by $\varphi''(a)=(1_{A}-(p-g))^{\frac{1}{2}}a(1_{A}-(p-g))^{\frac{1}{2}}$. Since $B$ is a nuclear $\rm C^{*}$-algebra, by Theorem 2.3.13 of [@L2], there exists a contractive completely positive map $\psi'':A\rightarrow B$ such that $\|\psi''(p-g)-(p-g)\|<\varepsilon$, and $\|\psi''(a'_{i})-a'_{i}\|<\varepsilon$ for all $1\leq i\leq k$. 
Since $B\in\Omega$, $B$ has the new type of tracial nuclear dimension at most $n$, so there exist a finite dimensional $\rm C^{*}$-algebra $F=F_{0}\oplus\cdots\oplus F_{n}$ and completely positive maps $\psi':B\rightarrow F$, $\varphi':F\rightarrow B$ such that $(1)''$ for any $a_{i}'$ ($1\leq i\leq k$), there exists $\overline{\overline{a_{i}'}}\in B_+$, such that $\|a_{i}'-\overline{\overline{a_{i}'}}-\varphi'\psi'(a_{i}')\|<\varepsilon$, and for $g\in B_+$, there exists $\overline{\overline{g}}\in B_+$, such that $\|g-\overline{\overline{g}}-\varphi'\psi'(g)\|<\varepsilon, ~~~~~~\hspace{0.4cm}\textbf{(3.4.2)}$ $(2)''$ $(p-\varphi'\psi'(p)-\varepsilon)_+\precsim (b_{2}'-3\varepsilon)_{+}$, $(3)''$ $\|\psi'\|\leq1$, and $(4)''$ $\varphi'|_{F_{i}}$ is a completely positive contractive order zero map for $i=0, 1, \cdots, n$. Define $\varphi:F\rightarrow A$ by $\varphi(a)=\varphi'(a)$ and $\psi:A\rightarrow F$ by $\psi(a)=\psi'\psi''((p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}})$ and $\overline{a_{i}}=\varphi''(a_{i})+\overline{\overline{a_{i}'}}\in A_+$ for $1\leq i\leq k$. Then, one has $(1_{A}-\varphi\psi(1_{A})-4\varepsilon)_+$ $= (1_{A}-\varphi'\psi'\psi''(p-g)-4\varepsilon)_+$ $\precsim 1_{A}-(\varphi'\psi'(p)-2\varepsilon)_++\varphi'\psi'(g)$ $\precsim (1_{A}-p)+(p-\varphi'\psi'(p)-\varepsilon)_++\varphi'\psi'(g)$ $\precsim (1_{A}-p)+(p-\varphi'\psi'(p)-\varepsilon)_++\varphi'\psi'(g)+(\overline{\overline{g}}-\varepsilon)_+$ $\precsim (1_{A}-p)+ g+(p-\varphi'\psi'(p)-\varepsilon)_+$   (by **(3.4.2)**) $\precsim b_{1}\oplus(b_{2}'-3\varepsilon)_{+}$  (by $(3)'$ and $(2)''$) $\precsim b_{1}\oplus((p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-2\varepsilon)_{+}$  (by **(3.4.1)**) $\precsim b_{1}+b_{2}\precsim b$. 
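For orientation, the subequivalences established above combine into the single chain $$(1_{A}-\varphi\psi(1_{A})-4\varepsilon)_+\precsim b_{1}\oplus(b_{2}'-3\varepsilon)_{+}\precsim b_{1}\oplus((p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-2\varepsilon)_{+}\precsim b_{1}+b_{2}\precsim b,$$ which is condition $(2)$ with $4\varepsilon$ in place of $\varepsilon$.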
One also has $\|a_{i}-\overline{a_{i}}-\varphi\psi(a_{i})\|=\|a_{i}-\varphi''(a_{i})- \overline{\overline{a_{i}'}}-\varphi'\psi'\psi''((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|$ $\leq\|a_{i}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}- \overline{\overline{a_{i}'}}-\varphi'\psi'\psi''((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|$ $\leq\|a_{i}-a'_{i}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}\|+ \|a'_{i}-\overline{\overline{a_{i}'}}-\varphi'\psi'\psi''((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|$ $\leq3\varepsilon+ \|a'_{i}-\overline{\overline{a_{i}'}}-\varphi'\psi'(a_{i}')\|$ $+\|\varphi'\psi'(a_{i}')- \varphi'\psi'\psi''(a_{i}')\|+\| \varphi'\psi'\psi''(a_{i}')-\varphi'\psi'\psi''((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|$ $<3\varepsilon+2\varepsilon+2\varepsilon+2\varepsilon=9\varepsilon$. Since $\varphi'', \varphi', \psi', \psi''$ are completely positive contractive maps, $\varphi$ and $\psi$ are completely positive maps. By $(4)''$, $\varphi'|_{F_{i}}$ is a completely positive contractive order zero map for $i=0, 1, \cdots, n$; since $\varphi(a)=\varphi'(a)$, $\varphi|_{F_{i}}$ is also a completely positive contractive order zero map for $i=0, 1,\cdots, n$. For any $x\in A$, $\|\psi(x)\|=\|\psi'\psi''((p-g)^{\frac{1}{2}}x(p-g)^{\frac{1}{2}})\|\leq\|\psi'\|\|\psi''\|\|x\|$, so $\|\psi\|\leq\|\psi''\|\|\psi'\|\leq1$. Therefore, $A$ has the new type of tracial nuclear dimension at most $n$. ◻ **Corollary 13**. *Let $A$ be a unital simple ${\rm C^*}$-algebra. Let $B\subseteq A$ be a centrally large nuclear subalgebra of $A$ such that $B$ has the new type of tracial nuclear dimension at most $n$. Then $A$ has the new type of tracial nuclear dimension at most $n$.* **Corollary 14**. *Let $\Omega$ be a class of unital nuclear ${\rm C^*}$-algebras such that for any $B\in \Omega$, $B$ has the new type of tracial nuclear dimension at most $n$. 
Then $A$ has the new type of tracial nuclear dimension at most $n$ for any simple unital ${\rm C^*}$-algebra $A\in \rm{TA}\Omega$.* **Theorem 15**. *Let $\Omega$ be a class of unital $\rm C^{*}$-algebras such that $B$ is weakly $(m, n)$-divisible (with $n\neq m$) (see Definition [\[def:2.2\]](#def:2.2){reference-type="ref" reference="def:2.2"}) for any $B\in \Omega$. Let $A\in {\rm WTA}\Omega$ be a simple unital stably finite $\rm C^{*}$-algebra such that for any integer $k\in \mathbb{N}$ the $\rm C^{*}$-algebra ${\rm M}_{k}(A)$ belongs to the class ${\rm WTA}\Omega$. Then $A$ is secondly weakly $(m, n)$-divisible (with $n\neq m$) (see Definition [Definition 3](#def:2.3){reference-type="ref" reference="def:2.3"}).* *Proof.* For given $a\in {\rm M}_{\infty}(A)_{+}$ and $\varepsilon>0$, we may assume that $a\in A_+$ and $\|a\|=1$ (replacing $A$ by the matrix algebra over $A$ that contains the given $a$). We must show that there are $x_{1}, x_{2}, \cdots, x_{n}\in {\rm M}_{\infty}(A)_{+}$ such that $m\langle x_{j}\rangle \leq \langle a \rangle +\langle a \rangle$ for all $j=1, 2, \cdots, n$, and $\langle (a-\varepsilon)_{+} \rangle\leq\langle x_{1} \rangle+\langle x_{2} \rangle+\cdots+\langle x_{n} \rangle$. For any $\delta_{1}>0$, since $A\in {\rm WTA}\Omega$, there exist a projection $p\in A$, an element $g\in A$ with $0\leq g \leq1$, and a $\rm C^{*}$-subalgebra $B$ of $A$ with $g\in B$, $1_{B}=p$, and $B\in \Omega$ such that \(1\) $(p-g)a\in_{\delta_{1}}B$, and \(2\) $\|(p-g)a-a(p-g)\|<\delta_{1}$. By (2), with sufficiently small $\delta_{1}$, by Lemma 2.5.11 (1) of [@L2], we have \(3\) $\|(p-g)^{\frac{1}{2}}a-a(p-g)^{\frac{1}{2}}\|<\varepsilon/3$, and \(4\) $\|(1-(p-g))^{\frac{1}{2}}a-a(1-(p-g))^{\frac{1}{2}}\|<\varepsilon/3$. By (1) and (2), with sufficiently small $\delta_{1}$, there exists a positive element $a^{'}\in B$ such that \(5\) $\|(p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}}-a^{'}\|<\varepsilon/3$. 
By (3), (4) and (5), $\|a-a^{'}-(1-(p-g))^{\frac{1}{2}}a(1-(p-g))^{\frac{1}{2}}\|$ $\leq\|a-(p-g)a-(1-(p-g))a\|+\|(p-g)a-(p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}}\|$ $+\|(1-(p-g))a-(1-(p-g))^{\frac{1}{2}}a(1-(p-g))^{\frac{1}{2}}\|$ $+\|(p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}}-a^{'}\|$ $<0+\varepsilon/3+\varepsilon/3+\varepsilon/3=\varepsilon$, where the first term vanishes. Since $B$ is weakly $(m, n)$-divisible, and $(a^{'}-2\varepsilon)_{+}\in B_+$, there exist $x_{1}^{'}, x_{2}^{'},$ $\cdots, x_{n}^{'}\in B$, such that $\langle x_{j}^{'}\rangle +\langle x_{j}^{'}\rangle +\cdots+\langle x_{j}^{'}\rangle \leq\langle(a^{'}-2\varepsilon)_{+} \rangle$, and $\langle(a^{'}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{'}\rangle$, where $\langle x_{j}^{'}\rangle$ repeats $m$ times. Since $B$ is weakly $(m, n)$-divisible, and $(a^{'}-\varepsilon)_{+}\in B_+$, there exist $y_{1}^{'}, y_{2}^{'},$ $\cdots, y_{n}^{'}\in B$, such that $\langle y_{j}^{'}\rangle +\langle y_{j}^{'}\rangle +\cdots+\langle y_{j}^{'}\rangle \leq\langle(a^{'}-\varepsilon)_{+} \rangle$, and $\langle(a^{'}-2\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle y_{i}^{'}\rangle$, where $\langle y_{j}^{'}\rangle$ repeats $m$ times. Write $a^{''}=(1-(p-g))^{\frac{1}{2}}a(1-(p-g))^{\frac{1}{2}}$. We divide the proof into two cases. **Case (1)** We assume that $(a^{'}-2\varepsilon)_{+}$ is Cuntz equivalent to a projection. **(1.1)** We assume that $(a^{'}-3\varepsilon)_{+}$ is Cuntz equivalent to a projection. **(1.1.1)** We assume that $(a^{'}-2\varepsilon)_{+}$ is Cuntz equivalent to $(a^{'}-3\varepsilon)_{+}$. 
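Each of the parts below follows the same template: using $A\in {\rm WTA}\Omega$, one localizes $a^{''}$ into a subalgebra $D\in \Omega$ whose complement $1-(p'-g')$ is Cuntz dominated by a suitable spare element, applies the weak $(m, n)$-divisibility of $D$, and combines the resulting elements with those already produced in $B$; the parts differ only in how the spare element is obtained. Schematically (the notation here is only a sketch, with $k, k'\in\{3,4\}$ and the elements $\tilde{a}, p', g'$ varying from part to part), each part ends with an estimate of the form $$\langle (a-10\varepsilon)_{+}\rangle\leq\langle (a^{'}-k\varepsilon)_{+}\rangle+\langle (\tilde{a}-k'\varepsilon)_{+}\rangle+\langle 1-(p'-g')\rangle\leq\sum_{i=1}^{n}\langle x_{i}\rangle.$$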
**(1.1.1.1)** If $x_{1}^{'}, x_{2}^{'}, \cdots, x_{n}^{'}\in B$ are all Cuntz equivalent to projections, and $\langle(a^{'}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{'}\rangle$, then, by Theorem 2.1 (2), there exist an integer $j$ and a nonzero projection $d$ such that $\langle x_{j}^{'}+d\rangle +\langle x_{j}^{'}+d\rangle+\cdots+\langle x_{j}^{'}+d\rangle \leq\langle(a^{'}-2\varepsilon)_{+} \rangle$, where $\langle x_{j}^{'}+d\rangle$ repeats $m$ times; otherwise we would contradict the stable finiteness of $A$ (since $m\neq n$ and the $\rm C^{*}$-algebra $A$ is stably finite). For any $\delta_{2}>0$, since $A\in {\rm WTA}\Omega$, there exist a projection $p_1\in A$, an element $g_{1}\in A$ with $0\leq g_{1} \leq1$, and a $\rm C^{*}$-subalgebra $D_1$ of $A$ with $g_{1}\in D_1$, $1_{D{_1}}=p_1$, and $D_1\in \Omega$ such that ($1^{'}$) $(p_1-g_{1})a^{''}\in_{\delta_{2}}D_1$, ($2^{'}$) $\|(p_1-g_{1})a^{''}-a^{''}(p_1-g_{1})\|<\delta_{2}$, and ($3^{'}$) $1-(p_1-g_{1})\preceq d$. By ($1^{'}$) and ($2^{'}$), with sufficiently small $\delta_{2}$, as above, via the analogues of (3), (4), and (5) for $a^{''},~ p_1,$ and $g_{1}$, there exists a positive element $a^{'''}\in D_1$ such that $\|(p_1-g_{1})^{\frac{1}{2}}a^{''}(p_1-g_{1})^{\frac{1}{2}}-a^{'''}\|<\varepsilon/3,$ and $\|a^{''}-a^{'''}-(1-(p_1-g_{1}))^{\frac{1}{2}}a^{''}(1-(p_1-g_{1}))^{\frac{1}{2}}\|<\varepsilon.$ Since $D_1$ is weakly $(m, n)$-divisible, and $(a^{'''}-2\varepsilon)_{+}\in D_1$, there exist positive elements $x_{1}^{''}, x_{2}^{''}, \cdots, x_{n}^{''}\in D_1$, such that $\langle x_{j}^{''}\rangle +\langle x_{j}^{''}\rangle +\cdots+\langle x_{j}^{''}\rangle \leq\langle(a^{'''}-2\varepsilon)_{+} \rangle$, and $\langle(a^{'''}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{''}\rangle$, where $\langle x_{j}^{''}\rangle$ repeats $m$ times. 
Since $a^{'}\leq a^{'}+a^{''},$ we have $\langle (a^{'}-\varepsilon)_{+}\rangle\leq \langle (a^{'}+a^{''}-\varepsilon)_{+}\rangle$, and $\|a-a^{'}-a^{''}\|<\varepsilon$, so one has $\langle (a^{'}+a^{''}-\varepsilon)_{+}\rangle\leq\langle a\rangle$. Therefore, one has $$\langle (a^{'}-2\varepsilon)_{+}\rangle\leq\langle (a^{'}-\varepsilon)_{+}\rangle\leq\langle a\rangle.$$ Write $x=(1-(p_1-g_{1}))^{\frac{1}{2}}a^{''}(1-(p_1-g_{1}))^{\frac{1}{2}}$. Since $a^{'''}\leq a^{'''}+x$, we have $\langle (a^{'''}-2\varepsilon)_{+}\rangle\leq\langle (a^{'''}+x-\varepsilon)_{+}\rangle$, and $\|a^{''}-a^{'''}-x\|<\varepsilon$, which implies that $$\langle (a^{'''}-2\varepsilon)_{+}\rangle\leq\langle a^{''}\rangle \leq\langle a\rangle.$$ Therefore, we have $\langle (x_{j}^{'}\oplus d)\oplus x_{j}^{''}\rangle +\langle (x_{j}^{'}\oplus d)\oplus x_{j}^{''}\rangle +\cdots+\langle (x_{j}^{'}\oplus d)\oplus x_{j}^{''}\rangle$ $\leq\langle (a^{'}-2\varepsilon)_{+}\rangle+\langle (a^{'''}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle (x_{j}^{'}\oplus d)\oplus x_{j}^{''}\rangle$ repeats $m$ times, and $\langle x_{i}^{'}\oplus x_{i}^{''}\rangle +\langle x_{i}^{'}\oplus x_{i}^{''}\rangle +\cdots+\langle x_{i}^{'}\oplus x_{i}^{''}\rangle$ $\leq\langle (a^{'}-2\varepsilon)_{+}\rangle+\langle (a^{'''}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle x_{i}^{'}\oplus x_{i}^{''}\rangle$ repeats $m$ times for $1\leq i\leq n$ and $i\neq j$. 
We also have $\langle (a-10\varepsilon)_{+}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{'''}-3\varepsilon)_{+}\rangle+\langle (1-(p_1-g_{1}))^{\frac{1}{2}}a^{''}(1-(p_1-g_{1}))^{\frac{1}{2}}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{'''}-3\varepsilon)_{+}\rangle+\langle (1-(p_1-g_{1}))\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{'''}-3\varepsilon)_{+}\rangle+\langle d\rangle$ $\leq\sum_{i=1, i\neq j} ^n\langle x_{i}^{'}\oplus x_{i}^{''}\rangle+\langle (x_{j}^{'}\oplus d)\oplus x_{j}^{''}\rangle.$ These are the desired inequalities, with $\langle x_{i}^{'}\oplus x_{i}^{''}\rangle$ (and $\langle (x_{j}^{'}\oplus d)\oplus x_{j}^{''}\rangle$ for $i=j$) in place of $\langle x_{i}\rangle$, and $10\varepsilon$ in place of $\varepsilon$. **(1.1.1.2)** If $x_{1}^{'}, x_{2}^{'}, \cdots, x_{n}^{'}\in B$ are projections, and $\langle(a^{'}-3\varepsilon)_{+} \rangle <\sum_{i=1}^{n}\langle x_{i}^{'}\rangle$, then, by Theorem 2.1 (2), there exists a nonzero projection $e$ such that $\langle(a^{'}-3\varepsilon)_{+} \rangle+\langle e\rangle \leq\sum_{i=1}^{n}\langle x_{i}^{'}\rangle$. 
As in part (1.1.1.1), since $A\in {\rm WTA}\Omega$, there exist a projection $p_2\in A$, an element $g_{2}\in A$ with $0\leq g_{2} \leq1$, and a $\rm C^{*}$-subalgebra $D_2$ of $A$ with $g_{2}\in D_2$, $1_{D_{2}}=p_2$, and $D_2\in \Omega$; by $(1)'$, there exists a positive element $a^{4}\in D_2$ such that $1-(p_2-g_{2})\preceq e,$ $\|(p_2-g_{2})^{\frac{1}{2}}a^{''}(p_2-g_{2})^{\frac{1}{2}}-a^{4}\|<\varepsilon/3,$ and $\|a^{''}-a^{4}-(1-(p_2-g_{2}))^{\frac{1}{2}}a^{''}(1-(p_2-g_{2}))^{\frac{1}{2}}\|<\varepsilon.$ Also as in part (1.1.1.1), we have $\langle (a^{4}-2\varepsilon)_{+}\rangle\leq \langle a\rangle.$ Since $D_2$ is weakly $(m, n)$-divisible, $(a^{4}-2\varepsilon)_{+}\in D_2$, there exist $x_{1}^{4}, x_{2}^{4},$ $\cdots, x_{n}^{4}\in D_2$, such that $\langle x_{j}^{4}\rangle +\langle x_{j}^{4}\rangle +\cdots+\langle x_{j}^{4}\rangle \leq\langle(a^{4}-2\varepsilon)_{+} \rangle$, and $\langle(a^{4}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{4}\rangle$, where $\langle x_{j}^{4}\rangle$ repeats $m$ times. Therefore, we have $\langle x_{j}^{'}\oplus x_{j}^{4}\rangle +\langle x_{j}^{'}\oplus x_{j}^{4}\rangle +\cdots+\langle x_{j}^{'}\oplus x_{j}^{4}\rangle$ $\leq\langle (a^{'}-2\varepsilon)_{+}\rangle+\langle (a^{4}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle x_{j}^{'}\oplus x_{j}^{4}\rangle$ repeats $m$ times, for each $1\leq j\leq n$. 
We also have $\langle (a-10\varepsilon)_{+}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{4}-3\varepsilon)_{+}\rangle+\langle (1-(p_{2}-g_{2}))^{\frac{1}{2}}a^{''}(1-(p_{2}-g_{2}))^{\frac{1}{2}}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{4}-3\varepsilon)_{+}\rangle+\langle (1-(p_{2}-g_{2}))\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{4}-3\varepsilon)_{+}\rangle+\langle e\rangle$ $\leq\sum_{i=1}^{n}\langle x_{i}^{'}\oplus x_{i}^{4}\rangle.$ These are the desired inequalities, with $\langle x_{i}^{'}\oplus x_{i}^{4}\rangle$ in place of $\langle x_{i}\rangle$, and $10\varepsilon$ in place of $\varepsilon$. **(1.1.1.3)** We assume that one of $x_{1}^{'}, \cdots, x_{n}^{'}$ is purely positive; without loss of generality, $x_{1}^{'}$ is a purely positive element. As $\langle(a^{'}-3\varepsilon)_{+} \rangle \leq\sum_{i=1}^{n}\langle x_{i}^{'}\rangle$, for any $\varepsilon>0$, there exists $\delta>0$ such that $\langle(a^{'}-4\varepsilon)_{+} \rangle\leq \langle (x_{1}^{'}-\delta)_{+}\rangle \oplus\sum_{i=2}^{n}\langle x_{i}^{'}\rangle$. Since $x_{1}^{'}$ is a purely positive element, by Theorem 2.1 (3), there exists a nonzero positive element $s$ such that $\langle(x_{1}^{'}-\delta)_{+} \rangle +\langle s\rangle\leq\langle x_{1}^{'}\rangle$. 
As in part (1.1.1.1), since $A\in {\rm WTA}\Omega$, there exist a projection $p_3\in A$, an element $g_{3}\in A$ with $0\leq g_{3} \leq1$, and a $\rm C^{*}$-subalgebra $D_3$ of $A$ with $g_{3}\in D_3$, $1_{D_{3}}=p_3$, and $D_3\in \Omega$; by $(1)'$, there exists a positive element $a^{5}\in D_3$ such that $1-(p_3-g_{3})\preceq s,$ $\|(p_3-g_{3})^{\frac{1}{2}}a^{''}(p_3-g_{3})^{\frac{1}{2}}-a^{5}\|<\varepsilon/3,$ and $\|a^{''}-a^{5}-(1-(p_3-g_{3}))^{\frac{1}{2}}a^{''}(1-(p_3-g_{3}))^{\frac{1}{2}}\|<\varepsilon.$ Also as in part (1.1.1.1), we have $\langle (a^{5}-2\varepsilon)_{+}\rangle\leq \langle a\rangle.$ Since $D_3$ is weakly $(m, n)$-divisible, $(a^{5}-2\varepsilon)_{+}\in D_3$, there exist $x_{1}^{5}, x_{2}^{5},$ $\cdots, x_{n}^{5}\in D_3$, such that $\langle x_{j}^{5}\rangle +\langle x_{j}^{5}\rangle +\cdots+\langle x_{j}^{5}\rangle \leq\langle(a^{5}-2\varepsilon)_{+} \rangle$, and $\langle(a^{5}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{5}\rangle$, where $\langle x_{j}^{5}\rangle$ repeats $m$ times. Therefore, we have $\langle x_{j}^{'}\oplus x_{j}^{5}\rangle +\langle x_{j}^{'}\oplus x_{j}^{5}\rangle +\cdots+\langle x_{j}^{'}\oplus x_{j}^{5}\rangle$ $\leq\langle (a^{'}-2\varepsilon)_{+}\rangle+\langle (a^{5}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle x_{j}^{'}\oplus x_{j}^{5}\rangle$ repeats $m$ times, for each $1\leq j\leq n$. 
We also have $\langle (a-10\varepsilon)_{+}\rangle$ $\leq\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle (a^{5}-3\varepsilon)_{+}\rangle+\langle (1-(p_3-g_{3}))^{\frac{1}{2}}a^{''}(1-(p_3-g_{3}))^{\frac{1}{2}}\rangle$ $\leq\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle (a^{5}-3\varepsilon)_{+}\rangle+\langle (1-(p_3-g_{3}))\rangle$ $\leq\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle (a^{5}-3\varepsilon)_{+}\rangle+\langle s\rangle$ $\leq\sum_{i=1}^{n}\langle x_{i}^{'}\oplus x_{i}^{5}\rangle.$ These are the desired inequalities, with $\langle x_{i}^{'}\oplus x_{i}^{5}\rangle$ in place of $\langle x_{i}\rangle$, and $10\varepsilon$ in place of $\varepsilon$. **(1.1.2)** We assume that there exists a nonzero projection $r$ such that $\langle (a^{'}-3\varepsilon)_{+}\rangle +\langle r\rangle \leq\langle (a^{'}-2\varepsilon)_{+} \rangle$. As in part (1.1.1.1), since $A\in {\rm WTA}\Omega$, there exist a projection $p_4\in A$, an element $g_{4}\in A$ with $0\leq g_{4} \leq1$, and a $\rm C^{*}$-subalgebra $D_4$ of $A$ with $g_{4}\in D_4$, $1_{D_{4}}=p_4$, and $D_4\in \Omega$; by $(1)'$, there exists a positive element $a^{6}\in D_4$ such that $1-(p_4-g_{4})\preceq r,$ $\|(p_4-g_{4})^{\frac{1}{2}}a^{''}(p_4-g_{4})^{\frac{1}{2}}-a^{6}\|<\varepsilon/3,$ and $\|a^{''}-a^{6}-(1-(p_4-g_{4}))^{\frac{1}{2}}a^{''}(1-(p_4-g_{4}))^{\frac{1}{2}}\|<\varepsilon.$ Also as in part (1.1.1.1), we have $\langle (a^{6}-2\varepsilon)_{+}\rangle\leq \langle a\rangle.$ Since $D_4$ is weakly $(m, n)$-divisible, $(a^{6}-2\varepsilon)_{+}\in D_4$, there exist $x_{1}^{6}, x_{2}^{6},$ $\cdots, x_{n}^{6}\in D_4$, such that $\langle x_{j}^{6}\rangle +\langle x_{j}^{6}\rangle +\cdots+\langle x_{j}^{6}\rangle \leq\langle(a^{6}-2\varepsilon)_{+} \rangle$, and $\langle(a^{6}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{6}\rangle$, where $\langle x_{j}^{6}\rangle$ repeats $m$ times. 
Therefore, we have $\langle y_{i}^{'}\oplus x_{i}^{6}\rangle +\langle y_{i}^{'}\oplus x_{i}^{6}\rangle +\cdots+\langle y_{i}^{'}\oplus x_{i}^{6}\rangle$ $\leq\langle (a^{'}-\varepsilon)_{+}\rangle+\langle (a^{6}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle y_{i}^{'}\oplus x_{i}^{6}\rangle$ repeats $m$ times for $1\leq i\leq n$. We also have $\langle (a-10\varepsilon)_{+}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{6}-4\varepsilon)_{+}\rangle+\langle (1-(p_4-g_{4}))^{\frac{1}{2}}a^{''}(1-(p_4-g_{4}))^{\frac{1}{2}}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{6}-4\varepsilon)_{+}\rangle+\langle (1-(p_4-g_{4}))\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{6}-4\varepsilon)_{+}\rangle+\langle r\rangle$ $\leq\sum_{i=1}^{n}\langle y_{i}^{'}\oplus x_{i}^{6}\rangle.$ These are the desired inequalities, with $\langle y_{i}^{'}\oplus x_{i}^{6}\rangle$ in place of $\langle x_{i}\rangle$, and $10\varepsilon$ in place of $\varepsilon$. **(1.2)** If $(a^{'}-3\varepsilon)_{+}$ is not Cuntz equivalent to a projection, then, by Theorem 2.1 (3), there is a nonzero positive element $d$ such that $\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle d\rangle \leq\langle (a^{'}-3\varepsilon)_{+}\rangle$. 
As in part (1.1.1.1), since $A\in {\rm WTA}\Omega$, there exist a projection $p_5\in A$, an element $g_{5}\in A$ with $0\leq g_{5} \leq1$, and a $\rm C^{*}$-subalgebra $D_5$ of $A$ with $g_{5}\in D_5$, $1_{D_{5}}=p_5$, and $D_5\in \Omega$; by $(1)'$, there exists a positive element $a^{7}\in D_5$ such that $1-(p_5-g_{5})\preceq d,$ $\|(p_5-g_{5})^{\frac{1}{2}}a^{''}(p_5-g_{5})^{\frac{1}{2}}-a^{7}\|<\varepsilon/3,$ and $\|a^{''}-a^{7}-(1-(p_5-g_{5}))^{\frac{1}{2}}a^{''}(1-(p_5-g_{5}))^{\frac{1}{2}}\|<\varepsilon.$ Also as in part (1.1.1.1), we have $\langle (a^{7}-2\varepsilon)_{+}\rangle\leq \langle a\rangle.$ Since $D_5$ is weakly $(m, n)$-divisible, $(a^{7}-2\varepsilon)_{+}\in D_5$, there exist $x_{1}^{7}, x_{2}^{7},$ $\cdots, x_{n}^{7}\in D_5$, such that $\langle x_{j}^{7}\rangle +\langle x_{j}^{7}\rangle +\cdots+\langle x_{j}^{7}\rangle \leq\langle(a^{7}-2\varepsilon)_{+}\rangle$, and $\langle(a^{7}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{7}\rangle$, where $\langle x_{j}^{7}\rangle$ repeats $m$ times. Therefore, we have $\langle x_{j}^{'}\oplus x_{j}^{7}\rangle +\langle x_{j}^{'}\oplus x_{j}^{7}\rangle +\cdots+\langle x_{j}^{'}\oplus x_{j}^{7}\rangle$ $\leq\langle (a^{'}-2\varepsilon)_{+}\rangle+\langle (a^{7}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle x_{j}^{'}\oplus x_{j}^{7}\rangle$ repeats $m$ times, for each $1\leq j\leq n$. 
We also have $\langle (a-10\varepsilon)_{+}\rangle$ $\leq\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle (a^{7}-4\varepsilon)_{+}\rangle+\langle (1-(p_5-g_{5}))^{\frac{1}{2}}a^{''}(1-(p_5-g_{5}))^{\frac{1}{2}}\rangle$ $\leq\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle (a^{7}-4\varepsilon)_{+}\rangle+\langle (1-(p_5-g_{5}))\rangle$ $\leq\langle (a^{'}-4\varepsilon)_{+}\rangle+\langle (a^{7}-4\varepsilon)_{+}\rangle+\langle d\rangle$ $\leq\sum_{i=1}^{n}\langle x_{i}^{'}\oplus x_{i}^{7}\rangle.$ These are the desired inequalities, with $\langle x_{i}^{'}\oplus x_{i}^{7}\rangle$ in place of $\langle x_{i}\rangle$, and $10\varepsilon$ in place of $\varepsilon$. **Case (2)** If $(a^{'}-2\varepsilon)_{+}$ is not Cuntz equivalent to a projection, by Theorem 2.1 (3), there is a nonzero positive element $d$ such that $\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle d\rangle \leq\langle (a^{'}-2\varepsilon)_{+}\rangle$. As in part (1.1.1.1), since $A\in {\rm WTA}\Omega$, there exist a projection $p_6\in A$, an element $g_{6}\in A$ with $0\leq g_{6} \leq1$, and a $\rm C^{*}$-subalgebra $D_6$ of $A$ with $g_{6}\in D_6$, $1_{D_{6}}=p_6$, and $D_6\in \Omega$; by $(1)'$, there exists a positive element $a^{8}\in D_6$ such that $1-(p_6-g_{6})\preceq d,$ $\|(p_6-g_{6})^{\frac{1}{2}}a^{''}(p_6-g_{6})^{\frac{1}{2}}-a^{8}\|<\varepsilon/3,$ and $\|a^{''}-a^{8}-(1-(p_6-g_{6}))^{\frac{1}{2}}a^{''}(1-(p_6-g_{6}))^{\frac{1}{2}}\|<\varepsilon.$ Also as in part (1.1.1.1), we have $\langle (a^{8}-2\varepsilon)_{+}\rangle\leq \langle a\rangle.$ Since $D_6$ is weakly $(m, n)$-divisible, $(a^{8}-2\varepsilon)_{+}\in D_6$, there exist $x_{1}^{8}, x_{2}^{8},$ $\cdots, x_{n}^{8}\in D_6$, such that $\langle x_{j}^{8}\rangle +\langle x_{j}^{8}\rangle +\cdots+\langle x_{j}^{8}\rangle \leq\langle(a^{8}-2\varepsilon)_{+} \rangle$, and $\langle(a^{8}-3\varepsilon)_{+} \rangle\leq\sum_{i=1}^{n}\langle x_{i}^{8}\rangle$, where $\langle x_{j}^{8}\rangle$ repeats $m$ times. 
Therefore, we have $\langle y_{i}^{'}\oplus x_{i}^{8}\rangle +\langle y_{i}^{'}\oplus x_{i}^{8}\rangle +\cdots+\langle y_{i}^{'}\oplus x_{i}^{8}\rangle$ $\leq\langle (a^{'}-\varepsilon)_{+}\rangle+\langle (a^{8}-2\varepsilon)_{+}\rangle\leq \langle a\rangle+\langle a\rangle$, where $\langle y_{i}^{'}\oplus x_{i}^{8}\rangle$ repeats $m$ times for $1\leq i\leq n$. We also have $\langle (a-10\varepsilon)_{+}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{8}-4\varepsilon)_{+}\rangle+\langle (1-(p_6-g_{6}))^{\frac{1}{2}}a^{''}(1-(p_6-g_{6}))^{\frac{1}{2}}\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{8}-4\varepsilon)_{+}\rangle+\langle (1-(p_6-g_{6}))\rangle$ $\leq\langle (a^{'}-3\varepsilon)_{+}\rangle+\langle (a^{8}-4\varepsilon)_{+}\rangle+\langle d\rangle$ $\leq\sum_{i=1}^{n}\langle y_{i}^{'}\oplus x_{i}^{8}\rangle.$ These are the desired inequalities, with $\langle y_{i}^{'}\oplus x_{i}^{8}\rangle$ in place of $\langle x_{i}\rangle$, and $10\varepsilon$ in place of $\varepsilon$. ◻ The following corollary was obtained by Fan, Fang, and Zhao in [@FFZ]. **Corollary 16**. *Let $A$ be a unital simple stably finite separable ${\rm C^*}$-algebra. Let $B\subseteq A$ be a centrally large subalgebra of $A$ such that $B$ is weakly $(m, n)$-divisible. Then $A$ is secondly weakly $(m, n)$-divisible.* R. Antoine, F. Perera, and H. Thiel. *Tensor products and regularity properties of Cuntz semigroups*. Mem. Amer. Math. Soc. **251** (2018), no. 1199; 199 pages. R. Antoine, F. Perera, L. Robert, and H. Thiel. *$C^*$-algebras of stable rank one and their Cuntz semigroups*. Duke Math. J. **171** (2022), 33--99. P. Ara, F. Perera, and A. S. Toms, *$K$-theory for operator algebras. Classification of ${C^*}$-algebras*, Aspects of Operator Algebras and Applications. Contemp. Math. **534**, American Mathematical Society, Providence, RI, 2011, pp. 1--71. D. Archey, J. Buck, and N. C. 
Phillips, *Centrally large subalgebra and tracial $\mathcal{Z}$-absorbing*, Int. Math. Res. Not. IMRN, **6** (2018), 1857--1877. D. Archey and N. C. Phillips, *Permanence of stable rank one for centrally large subalgebras and crossed products by minimal homeomorphisms*, J. Operator Theory, **83** (2020), 353--389. J. Castillejos, S. Evington, A. Tikuisis, S. White, and W. Winter, *Nuclear dimension of simple ${C^*}$-algebras*, Invent. Math., **224** (2021), 245--290. J. Castillejos, S. Evington, A. Tikuisis, and S. White, *Uniform property Gamma*, Int. Math. Res. Not. IMRN, **13** (2022), 9864--9908. K. T. Coward, G. A. Elliott, and C. Ivanescu, *The Cuntz semigroup as an invariant for ${C^*}$-algebras*, J. Reine Angew. Math., **623** (2008), 161--193. G. A. Elliott, *On the classification of inductive limits of sequences of semisimple finite dimensional algebras*, J. Algebra, **38** (1976), 29--44. G. A. Elliott, Q. Fan, and X. Fang, *Certain properties of tracial approximation ${C^*}$-algebras*, C. R. Math. Acad. Sci. Soc. R. Can., **40** (2018), 104--133. G. A. Elliott, Q. Fan, and X. Fang, *Generalized tracially approximated ${C^*}$-algebras*, C. R. Math. Rep. Acad. Sci. Canada, **45** (2) (2023), 13--36. G. A. Elliott and G. Gong, *On the classification of ${C^*}$-algebras of real rank zero, II*, Ann. of Math., **144** (1996), 497--610. G. A. Elliott, G. Gong, and L. Li, *On the classification of simple inductive limit ${C^*}$-algebras II: The isomorphism theorem*, Invent. Math., **168** (2007), 249--320. G. A. Elliott, G. Gong, H. Lin, and Z. Niu, *The classification of simple separable $KK$-contractible ${C^*}$-algebras with finite nuclear dimension*, J. Geom. Phys., **158** (2020), Article 103861, 51 pages. G. A. Elliott, G. Gong, H. Lin, and Z. Niu, *On the classification of simple amenable ${C^*}$-algebras with finite decomposition rank, II*, arXiv:1507.03437. G. A. Elliott and Z. Niu, *On tracial approximation*, J. Funct. 
Anal., **254** (2008), 396--440. Q. Fan, X. Fang, and X. Zhao, *Inheritance of divisibility from a large subalgebra*, Acta Math. Scientia, **41** (2021), 85--96. Q. Fan, X. Fang, and J. Wang, *The inheritance of tracial topological rank zero for centrally large subalgebras*, In preparation. Q. Fan and S. Zhang, *Uniform property Gamma for certain $C^*$-algebras*, Canad. Math. Bull., **65** (2022), 1063--1070. D. Kerr and G. Szabó, *Almost finiteness and the small boundary property*, Comm. Math. Phys., **374**, 1--31. X. Fu, *Tracial nuclear dimension of ${C^*}$-algebras*, Ph.D. thesis, East China Normal University, 2018. X. Fu and H. Lin, *Tracial approximation in simple $C^*$-algebras*, Canad. J. Math., First View, 26 February 2021, pp. 1--63. G. Gong, *On the classification of simple inductive limit ${C^*}$-algebras, I. The reduction theorem*, Doc. Math., **7** (2002), 255--461. G. Gong and H. Lin, *On classification of non-unital simple amenable ${C^*}$-algebras, II*, J. Geom. Phys., **158** (2020), Article 103865, 102 pages. G. Gong and H. Lin, *On classification of non-unital simple amenable ${C^*}$-algebras III: the range and the reduction*, arXiv:2010.01788v4. G. Gong, H. Lin, and Z. Niu, *Classification of finite simple amenable $\mathcal{Z}$-stable ${C^*}$-algebras, I: ${C^*}$-algebra with generalized tracial rank one*, C. R. Math. Acad. Sci. Soc. R. Can., **42** (2020), 63--450. G. Gong, H. Lin, and Z. Niu, *Classification of finite simple amenable $\mathcal{Z}$-stable ${C^*}$-algebras, II: ${C^*}$-algebras with rational generalized tracial rank one*, C. R. Math. Acad. Sci. Soc. R. Can., **42** (2020), 451--539. I. Hirshberg and J. Orovitz, *Tracially $\mathcal{Z}$-absorbing ${C^*}$-algebras*, J. Funct. Anal., **265** (2013), 765--785. E. Kirchberg and N. C. 
Phillips, *Embedding of exact ${C^*}$-algebras in the Cuntz algebra $O_2$*, J. Reine Angew. Math., **525** (2000), 17--53. E. Kirchberg and M. Rørdam, *Divisibility properties for ${C^*}$-algebras*, Proc. London Math. Soc., **106** (2013), 1330--1370. H. Lin, *Tracially AF ${C^*}$-algebras*, Trans. Amer. Math. Soc., **353** (2001), 683--722. H. Lin, *The tracial topological rank of ${C^*}$-algebras*, Proc. London Math. Soc., **83** (2001), 199--234. H. Lin, *An Introduction to the Classification of Amenable ${C^*}$-Algebras*, World Scientific, New Jersey, London, Singapore, Hong Kong, 2001. Z. Niu, *Comparison radius and mean topological dimension: Rokhlin property, comparison of open sets, and subhomogeneous ${C^*}$-algebras*, J. Analyse Math., 2022, https://doi.org/10.1007/s11854-022-0205-8. N. C. Phillips, *A classification theorem for nuclear purely infinite ${C^*}$-algebras*, Doc. Math., **5** (2000), 49--114. N. C. Phillips, *Large subalgebras*, arXiv:1408.5546v2. I. F. Putnam, *The ${\rm C^*}$-algebras associated with minimal homeomorphisms of the Cantor set*, Pacific J. Math., **136** (1989), 1483--1518. L. Robert and A. Tikuisis, *Nuclear dimension and ${\mathcal{Z}}$-stability of non-simple ${C^*}$-algebras*, Trans. Amer. Math. Soc., **369** (2017), 4631--4670. M. Rørdam and W. Winter, *The Jiang-Su algebra revisited*, J. Reine Angew. Math., **642** (2010), 129--155. A. Tikuisis, S. White, and W. Winter, *Quasidiagonality of nuclear ${C^*}$-algebras*, Ann. of Math. (2), **185** (2017), 229--284. W. Winter and J. Zacharias, *The nuclear dimension of a ${C^*}$-algebra*, Adv. Math., **224** (2010), 461--498. [^1]: **Key words** ${\rm C^*}$-algebras, tracial approximation, Cuntz semigroup. [^2]: 2000 *Mathematics Subject Classification:* 46L35, 46L05, 46L80
--- abstract: | Motivated by the study of greedy algorithms for graph coloring, Bernshteyn and Lee introduced a generalization of graph degeneracy, which is called weak degeneracy. In this paper, we show that the weak degeneracy of every $d$-regular graph is at least $\lfloor d/2\rfloor +1$, and that this lower bound is tight. This result refutes the conjecture of Bernshteyn and Lee on this lower bound. address: Department of Science, Beijing University of Posts and Telecommunications, Beijing, China author: - Yuxuan Yang title: weak degeneracy of regular graphs --- # Introduction {#sec1} In this paper, all graphs are finite and simple. Let $G$ be a graph. We denote by $V(G)$ and $E(G)$ the vertex set and the edge set of the graph $G$, respectively. Let $S$ be a subset of $V(G)$. Denote by $G-S$ the subgraph obtained from $G$ by deleting the vertices in $S$ together with their incident edges. In particular, when $S=\{v\}$ we may write $G-v$ instead of $G-\{v\}$. Denote by $G[S]$ the subgraph of $G$ induced by the vertex set $S$. Denote by $\deg_G(v)$ the degree of a vertex $v$ in a graph $G$. If $\deg_G(v)=d$ for each vertex $v$ in the graph $G$, we call $G$ a $d$-regular graph. Recall that for a graph $G$, $\chi (G)$ denotes its chromatic number, that is, the minimum number of colors necessary to color the vertices of $G$ so that adjacent vertices are colored differently. A well-studied generalization of graph coloring is list coloring, which was introduced independently by Vizing [@vizing] and Erdős et al. [@er79]. There are further generalizations, such as the DP-chromatic number [@dp18] and the DP-paint number [@KKLZ20]. As we shall not discuss these parameters, other than saying that they are bounded by the weak degeneracy defined below, we omit their definitions and refer the reader to [@KKLZ20] for the definitions and a discussion of these parameters. 
For a graph $G$, the greedy coloring algorithm colors vertices one by one in order $v_1, v_2,\dots, v_n$, assigning $v_i$ the least-indexed color not used on its colored neighbors. An upper bound for the number of colors used in such a coloring is captured in the notion of graph degeneracy. This greedy algorithm works for all variations of chromatic numbers mentioned above, which makes graph degeneracy an important graph parameter. Frequently, there is a big gap between the degeneracy and the chromatic number. For example, the degeneracy of a $d$-regular graph is exactly $d$, while its chromatic number varies from 2 to $d+1$. It is therefore interesting to see if we can modify the greedy coloring procedure to "save" some of the colors and get a better bound. For example, when we color a vertex under the greedy algorithm, we can, if possible, choose a color that one of its neighbors cannot use. Motivated by this, Bernshteyn and Lee [@bl23] introduced a new graph parameter called **weak degeneracy**, a variation of degeneracy which shares many of its nice properties. Since then, weak degeneracy has received a lot of attention (see for example [@Han2023; @Wang2023; @Wang2021; @Zhou2023]). Here is the definition and notation. **Definition 1** (Delete Operation). *Let $G$ be a graph and let $f:V(G)\rightarrow \mathbb{N}$ be a function. For a vertex $u\in V(G)$, the operation $$\mathop{\mathrm{Delete}}(G,f,u)$$ outputs the graph $G^{\prime}=G-u$ and the function $f^{\prime} :V(G^{\prime})\rightarrow \mathbb{Z}$ given by $$\label{eq1.1} f^{\prime}(v)= \begin{cases} f(v)-1, & \mbox{if } uv\in E(G);\\ f(v), & \mbox{otherwise}. \end{cases}$$ An application of the operation $\mathop{\mathrm{Delete}}$ is **legal** if the resulting function $f^{\prime}$ is nonnegative.* **Definition 2** (DeleteSave Operation). *Let $G$ be a graph and let $f:V(G)\rightarrow \mathbb{N}$ be a function. 
For a pair of adjacent vertices $u,w\in V(G)$, the operation $$\mathop{\mathrm{DeleteSave}}(G,f,u, w)$$ outputs the graph $G^{\prime}=G-u$ and the function $f^{\prime} :V(G^{\prime})\rightarrow \mathbb{Z}$ given by $$\label{eq1.2} f^{\prime}(v)= \begin{cases} f(v)-1, & \mbox{if } uv\in E(G) \mbox{ and }v\neq w;\\ f(v), & \mbox{otherwise}. \end{cases}$$ An application of the operation $\mathop{\mathrm{DeleteSave}}$ is **legal** if $f(u)>f(w)$ and the resulting function $f^{\prime}$ is nonnegative.* A graph $G$ is **f-degenerate** if it is possible to remove all vertices from $G$ by a sequence of legal applications of the operations $\mathop{\mathrm{Delete}}$. A graph $G$ is **weakly f-degenerate** if it is possible to remove all vertices from $G$ by a sequence of legal applications of the operations $\mathop{\mathrm{DeleteSave}}$ and $\mathop{\mathrm{Delete}}$. Given $d\in \mathbb{N}$, we say that $G$ is **d-degenerate** if it is f-degenerate with respect to the constant function of value $d$. Also, we say that $G$ is **weakly d-degenerate** if it is weakly f-degenerate with respect to the constant function of value $d$. The **degeneracy** of $G$, denoted by $\mathop{\mathrm{d}}(G)$, is the minimum integer $d$ such that $G$ is d-degenerate. The **weak degeneracy** of $G$, denoted by $\mathop{\mathrm{wd}}(G)$, is the minimum integer $d$ such that $G$ is weakly d-degenerate. Bernshteyn and Lee [@bl23] gave the following inequalities. **Proposition 1**. *[@bl23] [\[prop1\]]{#prop1 label="prop1"} For any graph $G$, we always have $$\chi (G)\leqslant\chi _{l} (G)\leqslant\chi _{DP} (G)\leqslant\chi_{DPP}(G)\leqslant\mathop{\mathrm{wd}}(G)+1\leqslant\mathop{\mathrm{d}}(G)+1,$$* where $\chi_{DP}(G)$ is the DP-chromatic number of $G$, and $\chi_{DPP}(G)$ is the DP-paint number of $G$. From Proposition [\[prop1\]](#prop1){reference-type="ref" reference="prop1"}, $\mathop{\mathrm{wd}}(G) + 1$ is an upper bound for many graph coloring parameters. 
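To make the two operations concrete, here is a small brute-force Python sketch (our illustration, not from the paper) that decides weak $f$-degeneracy by exhaustively searching over all legal sequences of $\mathop{\mathrm{Delete}}$ and $\mathop{\mathrm{DeleteSave}}$ applications; the search is exponential-time and therefore only usable on very small graphs.

```python
from itertools import count

def weakly_f_degenerate(adj, f):
    """Can all vertices be removed by legal Delete/DeleteSave operations?

    adj: dict mapping each vertex to the set of its neighbors.
    f:   dict mapping each vertex to its current (nonnegative) value.
    Exhaustive search over all operation sequences -- small graphs only.
    """
    if not adj:
        return True
    for u in adj:
        # Delete(G, f, u): legal iff every neighbor of u stays nonnegative.
        if all(f[v] >= 1 for v in adj[u]):
            adj2 = {v: nbrs - {u} for v, nbrs in adj.items() if v != u}
            f2 = {v: f[v] - (v in adj[u]) for v in adj2}
            if weakly_f_degenerate(adj2, f2):
                return True
        # DeleteSave(G, f, u, w): legal iff f(u) > f(w) and the other
        # neighbors of u stay nonnegative; w's value is not decremented.
        for w in adj[u]:
            if f[u] > f[w] and all(f[v] >= 1 for v in adj[u] if v != w):
                adj2 = {v: nbrs - {u} for v, nbrs in adj.items() if v != u}
                f2 = {v: f[v] - (v in adj[u] and v != w) for v in adj2}
                if weakly_f_degenerate(adj2, f2):
                    return True
    return False

def wd(adj):
    """Weak degeneracy: the least d such that G is weakly d-degenerate."""
    return next(d for d in count(0)
                if weakly_f_degenerate(adj, {v: d for v in adj}))

def graph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

K4 = graph([(i, j) for i in range(4) for j in range(i + 1, 4)])
C5 = graph([(i, (i + 1) % 5) for i in range(5)])
print(wd(K4), wd(C5))  # → 3 2
```

For instance, the search reports $\mathop{\mathrm{wd}}(K_4)=3$ (the complete graph attains the trivial upper bound $d$) and $\mathop{\mathrm{wd}}(C_5)=2$.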
This is actually a direct result of the greedy coloring algorithm. It is therefore interesting to determine the weak degeneracy of a graph. As far as lower bounds on weak degeneracy are concerned, by a double counting argument, Bernshteyn and Lee [@bl23] gave the following result. **Proposition 2**. *[@bl23] [\[thm0\]]{#thm0 label="thm0"} Let $G$ be a $d$-regular graph with $n\geqslant 2$ vertices. Then $\mathop{\mathrm{wd}}(G)\geqslant d-\sqrt{2n}$.* In particular, if $n= O(d)$, then $\mathop{\mathrm{wd}}(G)\geqslant d-O(\sqrt{d})$. Note that the upper bound $d$ is trivial and can be reached by the complete graph. This suggests that the weak degeneracy of a $d$-regular graph is close to $d$, and they proposed the following conjecture. **Conjecture 1**. *[@bl23] [\[conj\]]{#conj label="conj"} Every $d$-regular graph $G$ satisfies $\mathop{\mathrm{wd}}(G)\geqslant d-O(\sqrt{d})$.* In this paper, we refute this conjecture by constructing a $d$-regular graph with weak degeneracy $\lfloor d/2\rfloor +1$. **Theorem 1**. *There exists a $d$-regular graph with weak degeneracy $\lfloor d/2\rfloor +1$.* We also show that this is best possible by proving that $\lfloor d/2\rfloor +1$ is a lower bound for every $d$-regular graph. **Theorem 2**. *Every $d$-regular graph $G$ satisfies $\mathop{\mathrm{wd}}(G)\geqslant\lfloor d/2\rfloor +1$.* Since the degeneracy of a $d$-regular graph is exactly $d$ and the weak degeneracy can be significantly smaller, in some cases the weak degeneracy of regular graphs gives a better bound for the graph coloring parameters in Proposition [\[prop1\]](#prop1){reference-type="ref" reference="prop1"}. # lower bound of weak degeneracy By a counting argument, we prove Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"} first. *Proof of Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}.* Suppose, for the sake of contradiction, that $G$ is a $d$-regular graph with $\mathop{\mathrm{wd}}(G)\leqslant\lfloor d/2 \rfloor$. Let $k=\mathop{\mathrm{wd}}(G)$. 
By the definition of weak degeneracy, if we start with the constant function $f_1$ of value $k$ on $G$, there exists a sequence of legal applications of the operations $\mathop{\mathrm{Delete}}$ and $\mathop{\mathrm{DeleteSave}}$ which removes all vertices from $G$. We label the vertices $$v_1,v_2,\dots , v_n$$ in the removal order, and let $$f_1, f_2,\dots , f_n \quad \mbox{ and } \quad G_1,G_2,\dots, G_n$$ denote the corresponding functions and graphs during this procedure. To be specific, $f_i$ is the function after removing $v_1,v_2,\dots, v_{i-1}$, and it is a function on $G_i=G[\{v_{i},v_{i+1},\dots, v_n\}]$. We are interested in the total sum of each function $f_i$, denoted by $$s_i:=\sum_{v\in G_i}f_i(v)=\sum_{j=i}^n f_i(v_{j}).$$ At the same time, the operation $\mathop{\mathrm{DeleteSave}}$ plays an important role in this procedure, and we need two sequences to capture its effect. Let $$x_i= \begin{cases} 1, & \mbox{if the operation of removing } v_i \mbox{ is }\mathop{\mathrm{DeleteSave}};\\ 0, & \mbox{if the operation of removing } v_i \mbox{ is }\mathop{\mathrm{Delete}}. \end{cases}$$ Let $y_j$ denote the number of indices $i$ for which we used the operation $\mathop{\mathrm{DeleteSave}}(G_i,f_i,v_i,v_j)$. It is not hard to see that $$\sum_{i=1}^n x_i= \sum_{i=1}^n y_i,$$ since both sides count the total number of $\mathop{\mathrm{DeleteSave}}$ operations we used. Now, we consider $f_i(v_i)$. After $i-1$ operations, the function value decreased from $f_1(v_i)=k$ to $f_i(v_i)$. It means that $k-f_i(v_i)$ neighbors of $v_i$ were removed before $v_i$ in a way that decreased the function value of $v_i$. Additionally, there are $y_i$ neighbors that were removed before $v_i$ without changing the function value of $v_i$. In total, $k-f_i(v_i)+y_i$ neighbors of $v_i$ were removed, so there are $d-(k-f_i(v_i)+y_i)$ remaining neighbors of $v_i$ in $G_i$. 
From [\[eq1.1\]](#eq1.1){reference-type="eqref" reference="eq1.1"} and [\[eq1.2\]](#eq1.2){reference-type="eqref" reference="eq1.2"}, we know that the removal of $v_i$ decreases the total function value of $v_i$'s neighbors in $G_i$ by $d-(k-f_i(v_i)+y_i)-x_i$. Comparing $s_i$ and $s_{i+1}$, we have $$s_i-s_{i+1}=f_i(v_i)+d-(k-f_i(v_i)+y_i)-x_i.$$ Observe that $f_i(v_i)\geqslant x_i$, since a legal $\mathop{\mathrm{DeleteSave}}$ requires $f_i(v_i)>f_i(w)\geqslant 0$ for the saved vertex $w$. Since $k\leqslant\lfloor d/2 \rfloor\leqslant d/2$, we know $$\label{eq2.1} s_i-s_{i+1}\geqslant d/2+x_i-y_i.$$ Note that [\[eq2.1\]](#eq2.1){reference-type="eqref" reference="eq2.1"} works for $i\in \{1,2,\dots, n-1\}$. For $i=n$, it also works in principle, but we haven't defined $s_{n+1}$. Actually, it is natural to let $s_{n+1}=0$ since it is the total sum of a function with an empty domain. To avoid confusion, we investigate the case $i=n$ in detail. We have a single-vertex graph $G_n$ and we use the $\mathop{\mathrm{Delete}}$ operation for $v_n$, so $x_n=0$ and $f_n(v_n)\geqslant 0$. Also, $k-f_n(v_n)+y_n$ must equal $d$, since $v_n$ has no neighbors in $G_n$. From $y_n=d-k+f_n(v_n)\geqslant d/2$, we have $$\label{eq2.2} s_n=f_n(v_n)\geqslant 0 \geqslant d/2+x_n-y_n.$$ Combining [\[eq2.1\]](#eq2.1){reference-type="eqref" reference="eq2.1"} and $\eqref{eq2.2}$, we know $$s_1\geqslant nd/2+\sum_{i=1}^n x_i-\sum_{i=1}^n y_i=nd/2.$$ On the other hand, $s_1=kn\leqslant nd/2$. This means that all the inequalities above hold with equality. In particular, we have $f_i(v_i)=x_i$ for all $i$. However, $f_1$ is a constant function, so we cannot use $\mathop{\mathrm{DeleteSave}}$ for $v_1$. Then $$f_1(v_1)=k\neq 0=x_1$$ gives a contradiction. ◻ # regular graphs with low weak degeneracy To prove Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}, we start with the odd case. For each positive integer $k$, we construct a $(2k+1)$-regular graph $G$ such that $\mathop{\mathrm{wd}}(G)\leqslant k+1$. 
Let $s>2k$ be a large integer to be determined later and let $V_1$ be the disjoint union of $$\begin{aligned} &A=\{a_1,a_2,\dots,a_s\},\\ &B=\{b_1,b_2,\dots,b_s\},\\ &C=\{c_1,c_2,\dots,c_s\}. \end{aligned}$$ Here, $V_1=A\cup B\cup C$ is a subset of $V(G)$ and we assign the edges on $G[V_1]$ first. For each $i,j\in\{1,2,\dots,s\}$, $$\label{edgev1} \begin{aligned} &a_i, a_j \mbox{ are adjacent} \quad \Leftrightarrow \quad |i-j|\in[1,k-1],\\ &b_i, b_j \mbox{ are adjacent} \quad \Leftrightarrow \quad |i-j|\in[1,k],\\ &c_i, c_j \mbox{ are adjacent} \quad \Leftrightarrow \quad |i-j|\in[1,k],\\ &a_i, b_i \mbox{ are adjacent} , \quad a_i, c_i \mbox{ are adjacent}. \end{aligned}$$ ![The main structure of $G[V_1]$](abcstr.png){#fig1 width="0.99\\linewidth"} All the edges of $G[V_1]$ are given by [\[edgev1\]](#edgev1){reference-type="eqref" reference="edgev1"} and we denote this edge set by $E_1$. Figure [1](#fig1){reference-type="ref" reference="fig1"} shows the main structure of $G[V_1]$ by illustrating the neighbors of $a_i,b_i,c_i$. In this structure, $a_i,b_i$ and $c_i$ have $2k, 2k+1, 2k+1$ neighbors, respectively. However, this main structure only works for $i\in [k+1,s-k]$, and the structure is fragmentary when $i\leqslant k$ or $i\geqslant s-k+1$. We collect all these abnormal vertices as $$D:=\{a_i|i\notin [k+1,s-k]\}\cup\{b_i|i\notin [k+1,s-k]\}\cup\{c_i|i\notin [k+1,s-k]\},$$ and add pendant edges to them so that each has degree $2k+1$. In other words, we have a new vertex set $V_2$ of size $$|V_2|=\sum_{v\in D} (2k+1-\deg_{G[V_1]}(v)),$$ and each $v\in D$ has $2k+1-\deg_{G[V_1]}(v)$ distinct neighbors in $V_2$. The edge set between $D$ and $V_2$ is denoted by $E_2$, which is shown in Figure [2](#fig2){reference-type="ref" reference="fig2"}. ![The edges between $D$ and $V_2$](v12.png){#fig2 width="0.4\\linewidth"} The last step to construct a $(2k+1)$-regular graph is to add edges between $V_2$ and $V_1\backslash D$. 
By checking the current degree of each vertex in $V_1\cup V_2$, we can see that each $v\in V_2$ needs $2k$ more neighbors and $a_i$ needs one more neighbor for each $i\in[k+1,s-k]$. We simply choose a proper $s$ such that their demands match. A simple computation gives $$\begin{aligned} &|V_2|=6\cdot (1+2+\dots+k)=3k(k+1),\\ &s=2k+2k|V_2|=6k^3+6k^2+2k. \end{aligned}$$ To be specific, we have a map $$\phi: \{a_i|i\in[k+1,s-k]\}\rightarrow V_2$$ such that each vertex in $V_2$ has a size-$2k$ preimage, and we assign edges according to $\phi$, which means $v$ is adjacent to $\phi(v)$ for each $v$ in the domain. We denote the set of these edges by $E_3$. The construction of $G$ is done with $V(G)=V_1\cup V_2$ and $E(G)=E_1\cup E_2\cup E_3$. It is easy to check that $G$ is $(2k+1)$-regular. We need to show that $\mathop{\mathrm{wd}}(G)$ is at most $k+1$, so we start with a constant function of value $k+1$. We use $\mathop{\mathrm{Delete}}$ or $\mathop{\mathrm{DeleteSave}}$ in the following order on the vertex set: $$a_1,b_1,c_1,a_2,b_2,c_2,a_3,b_3,c_3,\dots, a_s,b_s,c_s, V_2.$$ During this deletion process, we use $\mathop{\mathrm{DeleteSave}}(G,f,a_i,\phi(a_i))$ for $a_i\in V_1\backslash D$ when it is legal, and use $\mathop{\mathrm{Delete}}$ for all the other vertices. Note that $f$ represents the function on the vertex set at the current step of deletion, and it keeps changing during the process. We have the following observations. **Lemma 1**. *Under the deletion order above, for each $v\in V_1$, at the step of deleting $v$, we have $f(v)\geqslant 0$.* *Proof.* It suffices to show that at most $k+1$ neighbors of $v$ have been deleted by the step of deleting $v$. This is true since the deletions so far only happen in $V_1$, and the construction of $E_1$ in [\[edgev1\]](#edgev1){reference-type="eqref" reference="edgev1"} ensures this fact. ◻ Lemma [Lemma 1](#lm2){reference-type="ref" reference="lm2"} shows that the deletions of $V_1$ are all legal. **Lemma 2**. 
*Under the deletion order above, for each $a_i\in V_1\backslash D$, at the step of deleting $a_i$, we have $f(a_i)=2$.* *Proof.* Since $a_i\notin V_2$, we never use $a_i$ as the second vertex of a $\mathop{\mathrm{DeleteSave}}$ operation. The neighbors $$a_{i-k+1},a_{i-k+2},\dots,a_{i-1}$$ of $a_i$ have been deleted and its other neighbors have not. It means that $$f(a_i)=(k+1)-(k-1)=2$$ at the current step. ◻ From Lemma [Lemma 2](#lm1){reference-type="ref" reference="lm1"}, we know that we only use $\mathop{\mathrm{DeleteSave}}(G,f,a_i,\phi(a_i))$ for $a_i\in V_1\backslash D$ if $f(\phi(a_i))\leqslant 1$ at the step of deleting $a_i$. It remains to investigate the function values on $V_2$. Note that $V_2$ is an independent set of $G$. **Lemma 3**. *Under the assumptions above, after the deletion of $V_1$, $f(v)\geqslant 0$ for all $v\in V_2$.* *Proof.* For each vertex $v\in V_2$, one neighbor $u$ is in $D$ and the remaining neighbors form the preimage $\phi^{-1}(v)$ of size $2k$. From Lemma [Lemma 2](#lm1){reference-type="ref" reference="lm1"}, the deletions of the vertices in $\phi^{-1}(v)$ can never reduce the value $f(v)$ to $0$, because $\mathop{\mathrm{DeleteSave}}$ is used whenever $f(v)\leqslant 1$. Taking the deletion of $u$ into account, the value $f(v)$ is still non-negative. ◻ This completes the proof of $\mathop{\mathrm{wd}}(G)\leqslant k+1$. From Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}, we know $\mathop{\mathrm{wd}}(G)\geqslant k+1$ since $G$ is $(2k+1)$-regular. As a result, the construction of $G$ establishes the odd case of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}. **Theorem 3**. *There exists a $(2k+1)$-regular graph with weak degeneracy $k+1$.* It is not hard to generalize this construction to the even case, but actually the even case is a direct consequence of the odd case by the following argument. **Theorem 4**. 
*If $G$ is $d$-regular, then there exists a $(d+1)$-regular graph $G^{\prime}$ such that $\mathop{\mathrm{wd}}(G^{\prime})\leqslant\mathop{\mathrm{wd}}(G)+1$.* *Proof.* We give the construction of $G^{\prime}$ explicitly. We collect $d+1$ copies of $G$ and add a common neighbor for the $d+1$ copies of each vertex in $G$. See Figure [3](#fig3){reference-type="ref" reference="fig3"}. ![The construction of $G^{\prime}$](Gp.png){#fig3 width="0.4\\linewidth"} To be specific, let $$V(G^{\prime}):=V(G)\times \{0,1,2,\dots, d+1\}.$$ For $i\in\{1,2,\dots, d+1\}$, $(u,i)$ and $(v,i)$ are adjacent if $uv\in E(G)$. Also, $(v,i)$ and $(v,0)$ are adjacent for each $i\in\{1,2,\dots, d+1\}$ and $v\in V(G)$. These are all the edges of $G^{\prime}$. It is easy to check that $G^{\prime}$ is $(d+1)$-regular. We need to show $\mathop{\mathrm{wd}}(G^{\prime})\leqslant\mathop{\mathrm{wd}}(G)+1$, so we start with a constant function of value $\mathop{\mathrm{wd}}(G)+1$. We first use $\mathop{\mathrm{Delete}}$ operations for each $(v,0)\in V(G^{\prime})$. These vertices form an independent set, and these deletion operations are legal. The remaining graph consists of $d+1$ copies of $G$, and the function value is a constant $\mathop{\mathrm{wd}}(G)$. By the definition of weak degeneracy, we can legally remove each copy of $G$ by $\mathop{\mathrm{Delete}}$ and $\mathop{\mathrm{DeleteSave}}$. As a result, we have $\mathop{\mathrm{wd}}(G^{\prime})\leqslant\mathop{\mathrm{wd}}(G)+1$. ◻ We end with a proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}. *Proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}.* When $d$ is an odd number, the statement holds because of Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}. When $d$ is an even number, from Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}, we have a $(d-1)$-regular graph with weak degeneracy $d/2$. 
By Theorem [Theorem 4](#thm4){reference-type="ref" reference="thm4"}, we have a $d$-regular graph with weak degeneracy at most $d/2+1$. By Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}, its weak degeneracy is at least $d/2+1$. Therefore, we have a $d$-regular graph with weak degeneracy $d/2+1$. ◻ # Acknowledgements {#acknowledgements .unnumbered} Research supported by "the Fundamental Research Funds for the Central Universities" in China. A. Bernshteyn and E. Lee, Weak degeneracy of graphs, J. Graph Theory 103 (2023), 607--634. Z. Dvořák and L. Postle, Correspondence coloring and its application to list-coloring planar graphs without cycles of lengths 4 to 8, J. Combin. Theory, Ser. B 129 (2018), 38--54. P. Erdős, A. L. Rubin, and H. Taylor, Choosability in graphs, Proc. West Coast Conf. on Combinatorics, Graph Theory and Computing, Congressus Numerantium XXVI, 26 (1979), 125--157. H. Han, T. Wang, J. Wu, H. Zhou, and X. Zhu, Weak degeneracy of planar graphs and locally planar graphs, arXiv:2303.07901 (2023). S. J. Kim, A. Kostochka, X. Li, and X. Zhu, On-line DP-coloring of graphs, Discret. Appl. Math. 285 (2020), 443--453. V. G. Vizing, Vertex colorings with given colors (in Russian), Metody Diskret. Analiz. 29 (1976), 3--10. T. Wang, Weak degeneracy of planar graphs without 4- and 6-cycles, Discret. Appl. Math. 334 (2023), 110--118. Q. Wang, T. Wang, and X. Yang, Variable degeneracy of graphs with restricted structures, arXiv:2112.09334 (2021). H. Zhou, J. Zhu, and X. Zhu, Weak$^*$ degeneracy and weak$^*$ $k$-truncated-degree-degenerate graphs, arXiv:2308.15853 (2023).
--- abstract: | The interpolative decomposition (ID) aims to construct a low-rank approximation formed by a basis consisting of row/column skeletons in the original matrix and a corresponding interpolation matrix. This work explores fast and accurate ID algorithms from five essential perspectives for empirical performance: skeleton complexity that measures the minimum possible ID rank for a given low-rank approximation error, asymptotic complexity in FLOPs, parallelizability of the computational bottleneck as matrix-matrix multiplications, error-revealing property that enables automatic rank detection for given error tolerances without prior knowledge of target ranks, and ID-revealing property that ensures efficient construction of the optimal interpolation matrix after selecting the skeletons. While a broad spectrum of algorithms has been developed to optimize parts of the aforementioned perspectives, practical ID algorithms proficient in all perspectives remain absent. To fill in the gap, we introduce *robust blockwise random pivoting (RBRP)* that is parallelizable, error-revealing, and exact-ID-revealing, with comparable skeleton and asymptotic complexities to the best existing ID algorithms in practice. Through extensive numerical experiments on various synthetic and natural datasets, we empirically demonstrate the appealing performance of RBRP from the five perspectives above, as well as the robustness of RBRP to adversarial inputs. author: - "Yijun Dong[^1]" - "Chao Chen[^2]" - "Per-Gunnar Martinsson[^3]" - "Katherine Pearce[^4]" bibliography: - ref.bib title: | Robust Blockwise Random Pivoting:\ Fast and Accurate Adaptive Interpolative Decomposition --- # Introduction Interpolative decomposition (ID) is a special type of low-rank approximation where the basis of the row/column space consists of the rows/columns in the original matrix. This allows ID to capture the essential structures and relationships within the data. 
ID has found applications in various fields, including numerical analysis, data compression, and machine learning, where preserving the integrity of the original data is of paramount importance. Given a data matrix $\mathbf{X}= \left[\mathbf{x}_1, \cdots, \mathbf{x}_n\right]^T \in \mathbb{R}^{n \times d}$ consisting of $n$ data points $\left\{\mathbf{x}_i \in \mathbb{R}^d\right\}_{i \in [n]}$, along with a constant $\epsilon> 0$ and a target rank $r \in \mathbb{N}$, we aim to construct a $(r,\epsilon)$-ID of $\mathbf{X}$: $$\begin{aligned} \label{eq:id} \underset{n \times d}{\mathbf{X}} \approx \underset{n \times k}{\mathbf{W}}\ \underset{k \times d}{\mathbf{X}_S} \quad \emph{s.t.} \quad \left\|\mathbf{X}- \mathbf{W}\mathbf{X}_S\right\|_F^2 \le (1+\epsilon) \left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2\end{aligned}$$ where $k$ is the (unknown) target rank; ${\mathbf{X}}_{\langle r\rangle}$ is the optimal rank-$r$ approximation of $\mathbf{X}$ (given by the singular value decomposition (SVD)); $S = \left\{s_1,\cdots,s_k\right\} \subset [n]$ are the skeleton indices corresponding to the *row skeleton* submatrix $\mathbf{X}_S = [\mathbf{x}_{s_1},\cdots,\mathbf{x}_{s_k} ]^\top \in \mathbb{R}^{k \times d}$; and $\mathbf{W}\in \mathbb{R}^{n \times k}$ is an *interpolation matrix* with (nearly) optimal error $\left\|\mathbf{X}- \mathbf{W}\mathbf{X}_S\right\|_F^2$ for the given $\mathbf{X}_S$. The construction of an ID can be divided into two stages: 1. selecting a skeleton subset $S \subset [n]$ that achieves a small *skeletonization error* $$\begin{aligned} \label{eq:skeletonization_error} \mathcal{E}_{\mathbf{X}}\left(S\right) \triangleq\left\|\mathbf{X}- \mathbf{X}\mathbf{X}_S^\dagger \mathbf{X}_S\right\|_F^2 = \min_{\mathbf{W}\in \mathbb{R}^{n \times \left\vert S\right\vert}} \left\|\mathbf{X}- \mathbf{W}\mathbf{X}_S\right\|_F^2; \end{aligned}$$ 2. 
given $S$, computing an interpolation matrix $\mathbf{W}\in \mathbb{R}^{n \times \left\vert S\right\vert}$ efficiently[^5] that (approximately) minimizes the *interpolation error*: $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) \triangleq\left\|\mathbf{X}- \mathbf{W}\mathbf{X}_S\right\|_F^2$. In this work, we explore fast and accurate ID algorithms from both the skeleton subset and the interpolation matrix perspectives, starting by posing the main question: *How to construct a $(r,\epsilon)$-ID efficiently with as few skeletons $S$ as possible,\ without prior knowledge of the target rank $k$?* ## What are Fast and Accurate ID Algorithms? To formalize the concept of "fast and accurate" ID algorithms, we dissect the above main question into the following properties, together providing systematic performance measurements for ID algorithms from both the efficiency and accuracy perspectives. 1. **Skeleton complexity** measures the minimum number of skeletons (*i.e.*, the minimum possible rank of ID) that an algorithm must select before $\mathcal{E}_{\mathbf{X}}\left(S\right) \le (1+\epsilon) \left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2$ (*i.e.*, where there exists a $(r,\epsilon)$-ID associated with $S$). 2. **Asymptotic complexity** measures the number of floating point operations (FLOPs) in the skeleton selection process of the algorithm asymptotically. 3. **Parallelizability** (in the context of this work) refers to the implementation advantage that the dominant cost for skeleton selection in the algorithm appears as matrix-matrix, instead of matrix-vector, multiplications with $\mathbf{X}$. Thanks to the widely available high-performance implementation (*e.g.*, Level 3 BLAS [@goto2008high]), matrix-matrix multiplications are much faster than matrix-vector multiplications in practice under the same asymptotic complexity. 4. 
**Error-revealing property** refers to the ability of an algorithm to evaluate the skeletonization error efficiently on the fly so that the target rank $k$ does not need to be given a priori. **Definition 1** (Error-revealing). *An ID algorithm is error-revealing if after selecting a skeleton subset $S$, it can evaluate $\mathcal{E}_{\mathbf{X}}\left(S\right)$ efficiently with at most $O\left(n\right)$ operations.* That is, given any relative error tolerance $\tau \in (0,1)$, an error-revealing ID algorithm can determine an appropriate rank $k$ with negligible additional cost (*i.e.*, no more than $O(nk)$) while selecting a skeleton subset $S$ ($\left\vert S\right\vert=k$) such that $\mathcal{E}_{\mathbf{X}}\left(S\right) \le \tau \left\|\mathbf{X}\right\|_F^2$. For the sake of analysis, we define a useful constant concerning $\mathbf{X}$ throughout the paper: $$\begin{aligned} \label{eq:relative_tail} \eta_r \triangleq{\left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2}/{\left\|\mathbf{X}\right\|_F^2}, \end{aligned}$$ which quantifies the relative tail weight of $\mathbf{X}$ with respect to the rank $r$ and represents the relative optimal rank-$r$ approximation error of $\mathbf{X}$. Notice that when $\tau < (1+\epsilon)\eta_r$, the algorithm outputs an $(r,\epsilon)$-ID. 5. **ID-revealing property** characterizes whether the skeleton selection stage of an ID algorithm generates sufficient information for efficient interpolation matrix construction. **Definition 2** (Exact/inexact/non-ID-revealing). *We call a skeleton selection algorithm exact-ID-revealing if it contains sufficient information in addition to $S$ such that the optimal interpolation matrix $\mathbf{W}= \mathbf{X}\mathbf{X}_S^\dagger$ can be evaluated exactly and efficiently in $O(nk^2)$ time. 
Otherwise, if a suboptimal interpolation matrix $\mathbf{W}\approx \mathbf{X}\mathbf{X}_S^\dagger$ can be constructed in $O(nk^2)$ time, we say that the skeleton selection algorithm is inexact-ID-revealing. If neither $\mathbf{X}\mathbf{X}_S^\dagger$ nor its approximations can be constructed in $O(nk^2)$ time, the skeleton selection algorithm is non-ID-revealing.* ## How to Combine Adaptiveness and Randomness for ID? Adaptiveness and randomness are two critical algorithmic properties that can be leveraged to characterize a skeleton selection algorithm. On one hand, with adaptiveness, the skeleton selection in each step is aware of the selected skeletons from the previous steps so that redundant skeleton selection can be better avoided. On the other hand, randomness diversifies the skeleton selection, thereby improving the robustness of algorithms to scarce adversarial inputs and providing strong statistical guarantees for skeleton complexities. To ground the notions of adaptiveness and randomness, we synopsize some representative existing skeleton selection algorithms below, as well as in , with a focus on the aforementioned properties for performance measurement. 1. **Greedy pivoting** is a classical strategy in numerical linear algebra that involves *only adaptiveness*. As an example of greedy pivoting, column-pivoted QR (CPQR) [@golub2013matrix Section 5.4.2] picks the point in the residual with the maximum norm in each step and adaptively updates the residual via projection onto the orthogonal complement of the skeletons. Despite the error-revealing and exact-ID-revealing abilities, along with the remarkable empirical success, of CPQR, deterministic greedy pivoting algorithms like CPQR are inherently sequential and vulnerable to adversarial inputs where the skeleton complexity approaches the problem size $n$ [@kahan1966numerical; @chen2022randomly]. 2. **Sampling** is a widely studied set of skeleton selection methods that involve *only randomness*. 
Some common examples related to ID include squared-norm sampling [@deshpande2006matrix], leverage score sampling [@mahoney2009cur], and DPP sampling [@belabbas2009spectral; @kulesza2011k; @derezinski2021determinantal]. As a trade-off between the skeleton complexity and efficiency, the fast ($O(nd)$-time) sampling methods like squared-norm sampling tend to suffer from high skeleton complexities that depend heavily on the matrix [@deshpande2006matrix; @chen2022randomly]; whereas constructions of the more sophisticated distributions like leverage score and DPP are usually expensive [@drineas2012fast; @hough2006determinantal; @derezinski2019fast]. Moreover, for ID, sampling methods generally fail to be error-revealing or ID-revealing. 3. **Random pivoting** combines adaptiveness and randomness by replacing the greedy selection of maximum norm in CPQR with random sampling according to the squared-norm distribution associated with the residual [@deshpande2006matrix; @deshpande2006adaptive; @chen2022randomly]. Closely related to the ID problem considered in this work, the idea of random pivoting is revitalized by the inspiring recent work [@chen2022randomly] in the context of column Nyström approximation with a nearly optimal skeleton complexity guarantee [@chen2022randomly Theorem 3.1]. However, analogous to CPQR, although random pivoting enjoys the desired error-revealing and exact-ID-revealing properties, the sequential nature of random pivoting compromises its efficiency in practice. 4. **Sketchy pivoting** is an alternative combination of adaptiveness and randomness that randomly embeds the row space of $\mathbf{X}$ via sketching [@halko2011finding; @woodruff2014sketching], on the top of which skeletons are selected via greedy pivoting [@voronin2017efficient; @dong2021simpler]. In contrast to random pivoting, the cost of sketchy pivoting is dominated by the embarrassingly parallelizable sketching process. 
As a trade-off for the superior empirical efficiency, sketchy pivoting sacrifices the error-revealing property since sketching requires prior knowledge of the target rank $k$ (or an overestimate of it)[^6]. Furthermore, sketchy pivoting is inexact-ID-revealing due to the loss of information during sketching. As a simple and effective remedy for the loss of accuracy, we observe that multiplicative oversampling can remarkably reduce the gap between the interpolation and skeletonization errors, $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) - \mathcal{E}_{\mathbf{X}}\left(S\right)$; we refer to the resulting method as *oversampled sketchy ID (OSID)*.

| Skeleton Selection | Skeleton Complexity | Dominant Cost | Error-revealing | ID-revealing |
|---|---|---|---|---|
| CPQR ([@golub2013matrix Section 5.4.2]) | $k \ge \left(1-\left(1+\epsilon\right)\eta_r\right)n$ | $O(ndk)$, m-v | **Yes** | **Exact** |
| Squared-norm sampling ([@deshpande2006matrix]) | $k \ge \frac{r-1}{\epsilon\eta_r} + \frac{1}{\epsilon}$ | $O(nd)$, **m-m** | No | Non |
| Random pivoting ([@chen2022randomly]) | $k \ge \frac{r}{\epsilon} + r \min\left\{\log\left(\frac{1}{\epsilon\eta_r}\right), 1 + \log\left(\frac{2^r}{\epsilon}\right)\right\}$ | $O(ndk)$, m-v | **Yes** | **Exact** |
| Sketchy pivoting ([@voronin2017efficient; @dong2021simpler]) | $\left(k \gtrsim k_\text{RP}\right)^*$ | $O(ndk)$, **m-m** | No | Inexact |
| RBRP (this work) | $\left(k \gtrsim k_\text{RP}\right)^*$ | $O(ndk)$, **m-m** | **Yes** | **Exact** |

: Summary of performance for skeleton selection methods that leverage adaptiveness and/or randomness. Recall from that $\eta_r \triangleq{\left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2}/{\left\|\mathbf{X}\right\|_F^2}$. The skeleton complexities in $(\cdot)^*$ are conjectured based on extensive empirical evidence and intuitive rationale but without formal proofs.
In the "Dominant Cost" column (*i.e.*, showing both the asymptotic complexity and parallelizability), "m-v" stands for matrix-vector multiplications, which are significantly less efficient than "m-m"---matrix-matrix multiplications---due to the lack of parallelizability [@goto2008high]. To fill in a critical missing piece in the family of existing methods in that leverage adaptiveness and randomness, in this work, we introduce **robust blockwise random pivoting (RBRP, )**---an ID algorithm that is *$O(ndk)$-time, parallelizable, error-revealing, and exact-ID-revealing*, with comparable skeleton complexities to the nearly optimal random pivoting [@chen2022randomly] in practice. In particular, the plain blockwise random pivoting (*e.g.*, an extension of the blocked RPCholesky algorithm introduced in [@chen2022randomly Algorithm 2.2]) tends to suffer from unnecessarily large skeleton complexity (*cf.* (left)) under adversarial inputs (*e.g.*, ) due to the lack of local adaptiveness within each block. As a remedy, RBRP leverages *robust blockwise filtering* ()---applying CPQR to every small sampled block locally and discarding the potentially redundant points through a truncation on the relative residual of the CPQR. By choosing a reasonable block size, such robust blockwise filtering effectively resolves the inefficiency in skeleton complexity encountered by the plain blockwise random pivoting (*cf.* (right)), with negligible additional cost. ## Notations and Roadmap For any fixed data matrix $\mathbf{X}\in \mathbb{R}^{n \times d}$, let $\mathbf{X}= \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top = \sum_{i=1}^{\min(n,d)} \sigma_i \mathbf{u}_i \mathbf{v}_i^\top$ be the singular value decomposition of $\mathbf{X}$ with $\sigma_1 \ge \cdots \ge \sigma_{\min(n,d)} \ge 0$. 
Given any $r \in [\min(n,d)]$, we denote $\mathbf{U}_r = \left[\mathbf{u}_1,\cdots,\mathbf{u}_r\right]$, $\boldsymbol{\Sigma}_r = \mathop{\mathrm{diag}}\left(\sigma_1,\cdots,\sigma_r\right)$, $\mathbf{V}_r = \left[\mathbf{v}_1,\cdots,\mathbf{v}_r\right]$ such that ${\mathbf{X}}_{\langle r\rangle} = \mathbf{U}_r \boldsymbol{\Sigma}_r \mathbf{V}_r^\top$. We follow the MATLAB notation for vector and matrix indexing throughout this work. For any $n \in \mathbb{N}$, let $[n] = \left\{1,\cdots,n\right\}$ and $\Delta_n = \left\{\mathbf{p}\in [0,1]^n ~\middle\vert~ \left\|\mathbf{p}\right\|_1 = 1 \right\}$ be the probability simplex of dimension $n$. Also, let $\mathfrak{S}_n$ be the set of all permutations of $[n]$. For any distribution $\mathbf{p}\in \Delta_n$ and $k \in \mathbb{N}$, $\mathbf{p}^k$ represents the joint distribution over $\Delta_n^k = \Delta_n \times \cdots \times \Delta_n$ such that $S = \left\{s_j \sim \mathbf{p}\right\}_{j \in [k]} \sim \mathbf{p}^k$ is a set of $k$ i.i.d. samples. For any $n \in \mathbb{N}$, let $\mathbf{e}_i$ ($i \in [n]$) be the $i$-th canonical basis vector of $\mathbb{R}^n$. For any $m \in \mathbb{N}$, let $\textbf{1}_{m} \in \mathbb{R}^m$ and $\textbf{0}_{m} \in \mathbb{R}^m$ be the vectors with all entries equal to one and to zero, respectively.

As a brief roadmap, we review the adaptiveness-only (greedy pivoting) and randomness-only (sampling) skeleton selection algorithms in , along with the two different combinations of adaptiveness and randomness (random and sketchy pivoting) in . Then, we introduce the RBRP algorithm formally in . In , we discuss the efficient construction of the interpolation matrix for exact- and inexact-ID-revealing algorithms. Finally in , we present numerical experiments to demonstrate the strong empirical performance of RBRP in various settings. The code for numerical experiments is available at <https://github.com/dyjdongyijun/Robust_Blockwise_Random_Pivoting>.

# Adaptiveness vs. Randomness {#sec:adaptive_or_random}

## Adaptiveness via Greedy Squared-norm Pivoting

Greedy pivoting is a classical way of incorporating adaptiveness in skeleton selection [@gu1996efficient; @sorensen2016deim; @voronin2017efficient]. Column-pivoted QR (CPQR) [@golub2013matrix Section 5.4.2] is one of the most commonly used greedy pivoting methods. The pivoting strategy and the adaptive update are the two key components that characterize a greedy pivoting method. In particular, CPQR involves greedy squared-norm pivoting and adaptive QR updates, as synopsized below. Starting with $\mathbf{X}^{(0)} \gets \mathbf{X}$, given an active data submatrix $\mathbf{X}^{(t)} \in \mathbb{R}^{n \times d}$ with at most $n-t$ nonzero rows, CPQR on $\mathbf{X}^\top$ greedily pivots the row in $\mathbf{X}^{(t)}$ with the maximum $\ell_2$-norm (*i.e.*, squared-norm pivoting): $$\begin{aligned} \label{eq:squared_norm_pivot} s_{t+1} \gets \mathop{\mathrm{argmax}}_{i \in [n]} \left\|\mathbf{X}^{(t)}\left(i,:\right)\right\|_2^2\end{aligned}$$ and adaptively updates the active submatrix by projecting it onto the orthogonal complement of the selected skeleton[^7]: $$\begin{aligned} \label{eq:qr_update} \mathbf{X}^{(t+1)} \gets \mathbf{X}^{(t)} - \mathbf{X}^{(t)} \frac{\mathbf{X}^{(t)}(s_{t+1},:)^\top \mathbf{X}^{(t)}(s_{t+1},:)}{\left\|\mathbf{X}^{(t)}(s_{t+1},:)\right\|_2^2}.\end{aligned}$$ The (weak) rank-revealing property of CPQR [@gu1996efficient Theorem 7.2] implies that the first $k$ pivots $S = \left\{s_1,\cdots,s_k\right\}$ form a reasonable skeleton subset with $$\begin{aligned} \label{eq:cpqr_weak_rank_revealing} \mathcal{E}_{\mathbf{X}}\left(S\right) \le 4^{k} \left(n-k\right) \left\|\mathbf{X}- {\mathbf{X}}_{\langle k\rangle}\right\|_F^2.\end{aligned}$$ The exponential dependence on $k$ in is tight under adversarial inputs (*e.g.*, Kahan matrices [@kahan1966numerical][@gu1996efficient Example 1]).
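For concreteness, the two updates above admit a short numpy sketch; the function name and the explicit rank-one deflation are ours (a production CPQR would instead accumulate Householder reflectors):

```python
import numpy as np

def greedy_squared_norm_pivoting(X, k):
    """CPQR-style skeleton selection on the rows of X: pick the residual
    row with the largest squared norm, then project all rows onto the
    orthogonal complement of the pivot."""
    R = X.astype(float).copy()               # residual X^(t)
    S = []
    for _ in range(k):
        s = int(np.argmax((R * R).sum(axis=1)))
        S.append(s)
        v = R[s] / np.linalg.norm(R[s])      # unit vector along the pivot row
        R = R - np.outer(R @ v, v)           # rank-one deflation (QR update)
    return S, R                              # ||R||_F^2 is the skeletonization error

X = np.array([[3.0, 0.0], [0.0, 2.0], [3.0, 0.1]])
S, R = greedy_squared_norm_pivoting(X, 2)    # rows have rank 2: residual vanishes
```

On this rank-$2$ toy input, two pivots drive the residual, and hence the skeletonization error $\mathcal{E}_{\mathbf{X}}(S)$, to zero.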
In the worst case, CPQR can take almost all of the $n$ points as skeletons before finding a $(r,\epsilon)$-ID. Concretely, [@chen2022randomly Theorem 4.1] implies that for CPQR, $$\begin{aligned} \label{eq:cpqr_complexity} \mathcal{E}_{\mathbf{X}}\left(S\right) \le \frac{1}{\eta_r} \cdot \left(1-\frac{k}{n}\right) \left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2.\end{aligned}$$ That is, CPQR provides an $(r,\epsilon)$-ID when $k \ge \left(1 - (1+\epsilon)\eta_r\right)n$. Moreover, for any $\left\vert S\right\vert = k \le \min\left(1 - (1+\epsilon)\eta_r, \eta_r\right)n-1$, there exists an $\mathbf{X}$ where CPQR fails to form an $(r,\epsilon)$-ID. **Remark 1** (Greedy squared-norm pivoting is vulnerable but empirically successful). *The vulnerability to adversaries is a common and critical drawback of squared-norm pivoting in CPQR (and greedy pivoting methods in general). Nevertheless, it has been widely observed and conjectured that such adversarial inputs are extremely scarce in practice [@trefethen1990average]. Therefore, CPQR serves as a good skeleton selection method for most data matrices empirically [@voronin2017efficient; @dong2021simpler].* **Remark 2** (CPQR is inherently sequential but error-revealing). *From the computational efficiency perspective, a major drawback of CPQR is that the adaptive updates are inherently sequential (*i.e.*, only half of the computation is cast in terms of matrix-matrix multiplication in the best-known algorithm that underlies LAPACK's routine geqp3 [@quintana1998blas]). As a result, CPQR tends to be observably slower than simple alternatives with the same asymptotic complexity like Gaussian elimination with partial pivoting [@golub2013matrix Section 3.4]---another common greedy pivoting method that retains parallelizability at a cost of the rank-revealing guarantee.* *Nevertheless, the adaptive nature grants CPQR an appealing bonus---the error-revealing ability. 
Specifically, it is known that the skeletonization error of CPQR, i.e., $\mathcal{E}_{\mathbf{X}}\left(S\right) = \left\|\mathbf{X}^{\left(\left\vert S\right\vert\right)}\right\|_F^2$, can be downdated efficiently at the end of each update.* ## Randomness via Squared-norm Sampling Sampling is a widely used approach for random skeleton selection (also referred to as column subset selection, *e.g.*, in [@derezinski2020improved; @cortinovis2020low]). To identify a nearly optimal (smallest possible) skeleton subset $S$ for a $(r,\epsilon)$-ID, the intuitive goal is to construct a distribution $\mathbf{p}\in \Delta_n$ over the $n$ data points $\left\{\mathbf{x}_i\right\}_{i \in [n]}$ that weights each $\mathbf{x}_i$ according to its relative "importance". Analogous to the squared-norm pivoting in CPQR (, ), sampling according to the squared-norm distribution is a natural choice for such "importance" sampling. Concretely, adapting results from [@deshpande2006matrix] and [@chen2022randomly Theorem 4.2] leads to the following expected skeleton complexity for squared-norm sampling. **Proposition 1** (Squared-norm sampling). *Consider a size-$k$ skeleton subset $S=\left\{s_1,\cdots,s_k\right\}$ where each $s_j \in S$ is sampled from the squared-norm distribution $s_j \sim \mathbf{p}= \left(p_1,\cdots,p_n\right)$, $p_i = \frac{\left\|\mathbf{x}_i\right\|_2^2}{\left\|\mathbf{X}\right\|_F^2}$ independently with replacement. Then $$\begin{aligned} \mathbb{E}_{S \sim \mathbf{p}^k}\left[\mathcal{E}_{\mathbf{X}}\left(S\right)\right] \le \left(1 + \frac{r-1}{k \eta_r} + \frac{1}{k}\right) \left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2. 
\end{aligned}$$ That is, squared-norm sampling provides a $(r,\epsilon)$-ID when $k \ge \frac{r-1}{\epsilon\eta_r} + \frac{1}{\epsilon}$.* Compared to the skeleton complexity guarantee of squared-norm pivoting (CPQR) in , the squared-norm sampling in provides a potentially weaker guarantee in expectation (instead of worst-case) but brings a better skeleton complexity that scales proportionally to $r$ (instead of $n$). Despite the improved skeleton complexity in expectation compared to CPQR, like CPQR (), squared-norm sampling can also be vulnerable to adversarial inputs by selecting redundant skeletons with higher probabilities. Such potential inefficiency can be reflected by the linear dependence of the skeleton complexity $k \ge \frac{r-1}{\epsilon\eta_r} + \frac{1}{\epsilon}$ on $\frac{r}{\eta_r}$, as instantiated in . **Example 1** (Adversarial input for squared-norm sampling). *For $n \in \mathbb{N}$ divisible by $k \in \mathbb{N}$, $r \in \mathbb{N}$ such that $k = (1+\beta) r < n$ for some $\beta \in \mathbb{N}$, letting $\alpha_i = \sqrt{\alpha} \gg \beta \ge 1$ for all $1 \le i \le r$, and $\alpha_i = 1$ for all $r+1 \le i \le k$, we consider the following data matrix $$\begin{aligned} \mathbf{X}^\top = \left[\alpha_1 \mathbf{e}_1 \textbf{1}_{n/k}^\top, \cdots, \alpha_r \mathbf{e}_r \textbf{1}_{n/k}^\top, \alpha_{r+1} \mathbf{e}_{r+1} \textbf{1}_{n/k}^\top, \cdots, \alpha_k \mathbf{e}_k \textbf{1}_{n/k}^\top\right] \in \mathbb{R}^{d \times n} \end{aligned}$$ with SVD $$\begin{aligned} \mathbf{X}= \underbrace{\begin{bmatrix} \sqrt{\frac{k}{n}} \textbf{1}_{n/k} & \cdots & \textbf{0}_{n/k} \\ \vdots & \ddots & \vdots \\ \textbf{0}_{n/k} & \cdots & \sqrt{\frac{k}{n}} \textbf{1}_{n/k} \end{bmatrix}}_{\mathbf{U}\in \mathbb{R}^{n \times k}} \underbrace{\begin{bmatrix} \sqrt{\frac{n}{k}} \alpha_1 && \\ & \ddots & \\ && \sqrt{\frac{n}{k}} \alpha_k \end{bmatrix}}_{\boldsymbol{\Sigma}\in \mathbb{R}^{k \times k}} \underbrace{\begin{bmatrix} \mathbf{e}_1^\top \\ \vdots 
\\ \mathbf{e}_k^\top \end{bmatrix}}_{\mathbf{V}^\top \in \mathbb{R}^{k \times d}}. \end{aligned}$$* *Observe that a skeleton subset of size $k$ with the $k$ scaled canonical bases $\left\{\alpha_i \mathbf{e}_i \middle\vert i \in [k] \right\}$ is sufficient to form an ID that exactly recovers $\mathbf{X}$. That is, the uniform distribution over the $k$ scaled canonical bases is preferred.* *However, with $\alpha \gg 1$, the squared-norm distribution is heavily skewed to those rows in $\mathbf{X}$ pointing toward $\left\{\mathbf{e}_i ~\middle\vert~ 1 \le i \le r \right\}$, which leads to redundant samples along those directions: $$\begin{aligned} p_i = \begin{cases} \frac{\alpha}{\alpha r + k -r} = \frac{\alpha}{\left(\alpha + \beta\right) r} \quad &\forall~ 1 \le i \le r \\ \frac{1}{\alpha r + k -r} = \frac{1}{\left(\alpha + \beta\right) r} \quad &\forall~ r+1 \le i \le k \end{cases}. \end{aligned}$$* *Such inefficiency is well reflected in the skeleton complexity $k \ge \frac{r-1}{\epsilon\eta_r} + \frac{1}{\epsilon}$ through a small $\eta_r$: $$\begin{aligned} \eta_r = \frac{k-r}{\alpha r + k - r} = \frac{\beta}{\alpha+\beta} \ll 1. \end{aligned}$$ For example, with $\beta = 1$ and $\alpha \gg k$, the squared-norm sampling suffers from a skeleton complexity $\frac{k}{\epsilon} + \frac{\alpha (r-1) - r}{\epsilon} \gg k$ far greater than necessary.* *It is worth highlighting that adaptiveness is an effective remedy for such inefficiency of squared-norm sampling. 
For instance, after picking $\alpha_i \mathbf{e}_i$ as the first skeleton, the adaptive update (*e.g.*, ) eliminates the remaining $\frac{n}{k}-1$ rows of $\mathbf{X}$ in the $\mathbf{e}_i$ direction and therefore helps exclude the possible redundant samples.* As well-studied alternatives to squared-norm sampling, determinantal point process (DPP sampling) [@belabbas2009spectral; @derezinski2021determinantal] and leverage score sampling [@mahoney2009cur] provide better skeleton complexity guarantees but with higher cost of constructing the corresponding distributions as a trade-off. For example, the k-DPP sampling [@kulesza2011k] is known for its *nearly optimal* skeleton complexity in expectation [@belabbas2009spectral; @guruswami2012optimal; @chen2022randomly]: $$\begin{aligned} \label{eq:dpp_skeleton_complexity} k = \left\vert S\right\vert \ge \frac{r}{\epsilon} + r - 1 ~\Rightarrow~ \mathbb{E}_{S \sim \text{k-DPP}(\mathbf{X})}\left[\mathcal{E}_{\mathbf{X}}\left(S\right)\right] \le (1+\epsilon) \left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2.\end{aligned}$$ As a trade-off, the classical SVD-based algorithm for k-DPP$(\mathbf{X})$ [@hough2006determinantal] takes $O\left(nd^2\right)$ to construct the distribution and $O\left(nk^2\right)$ to draw $k$ samples. In contrast to sampling from sophisticated distributions, uniform sampling works well for incoherent matrices (whose singular vectors distribute evenly along all canonical bases) but easily fails on coherent ones [@cohen2015uniform]. Although an in-depth comparison of different sampling methods is beyond the scope of this work, we refer the interested reader to the following enlightening references: [@deshpande2006matrix; @cohen2015uniform; @derezinski2021determinantal; @chen2022randomly]. 
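For illustration, squared-norm sampling and its skew on a small instance of the adversarial example above can be sketched in numpy as follows (the function name and the concrete sizes are ours):

```python
import numpy as np

def squared_norm_sampling(X, k, rng=None):
    """Draw k skeleton indices i.i.d. with replacement from the
    squared-norm distribution p_i = ||x_i||_2^2 / ||X||_F^2."""
    rng = np.random.default_rng(rng)
    p = (X * X).sum(axis=1)
    p = p / p.sum()
    return rng.choice(len(p), size=k, replace=True, p=p), p

# A small instance of the adversarial example above: r = 2 heavy directions
# (alpha_i = sqrt(alpha)) and k - r = 2 light ones, each repeated m times.
alpha, r, k, m = 100.0, 2, 4, 3
E = np.eye(k)
X = np.vstack([np.sqrt(alpha) * E[:r].repeat(m, axis=0),
               E[r:].repeat(m, axis=0)])     # n = k m = 12 points in R^4
S, p = squared_norm_sampling(X, k, rng=0)    # p is heavily skewed to the first 6 rows
```

Computing $\eta_r$ from the singular values of this $\mathbf{X}$ recovers $\eta_2 = \beta/(\alpha+\beta) = 1/101$, matching the example with $\beta = 1$.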
# Combining Adaptiveness and Randomness {#sec:adaptive_and_random} We recall that the vulnerability of greedy pivoting () comes from the scarce adversarial inputs which can be negligible in expectation under randomization. Meanwhile, the vulnerability of squared-norm sampling () can be effectively circumvented by incorporating adaptive updates. Therefore, a natural question is *how to combine adaptiveness and randomness effectively for better skeleton selection*. In this section, we review two existing ideas that involve both adaptiveness and randomness in different ways, while comparing their respective advantages and drawbacks. ## Random Pivoting Random pivoting [@deshpande2006matrix; @deshpande2006adaptive; @arthur2007k; @chen2022randomly] is arguably the most intuitive skeleton selection method that combines adaptiveness and randomness. It can be simply described as "replacing the greedy squared-norm pivoting in CPQR with squared-norm sampling" or "adaptively updating the dataset via QR after each step of squared-norm sampling", as formalized in . 
**Input:** (a) Data matrix $\mathbf{X}= \left[\mathbf{x}_1,\cdots,\mathbf{x}_n\right]^\top \in \mathbb{R}^{n \times d}$; (b) relative error tolerance $\tau = \left(1+\epsilon\right)\eta_r \in (0,1)$ (or target rank $k \in \mathbb{N}$)

**Output:** (a) Skeleton indices $S = \left\{s_1,\cdots,s_k\right\} \subset [n]$; (b) $\mathbf{L}\in \mathbb{R}^{n \times k}$; (c) $\boldsymbol{\pi}\in \mathfrak{S}_n$

**Initialize:** $\mathbf{L}^{(0)} \gets \textbf{0}_{n \times 0}$, $\mathbf{Q}^{(0)} \gets \textbf{0}_{d \times 0}$, $\boldsymbol{\pi}\gets [1,\cdots,n]$, $S \gets \emptyset$, $t = 0$; $\mathbf{d}^{(0)} \in \mathbb{R}_{\ge 0}^n$ with $\mathbf{d}^{(0)}(i) = \left\|\mathbf{x}_i\right\|_2^2 ~\forall~ i \in [n]$

**Repeat:**

1. $t \gets t + 1$
2. $s_t \sim \mathbf{d}^{(t-1)} / \left\|\mathbf{d}^{(t-1)}\right\|_1$
3. $S \gets S \cup \left\{s_t\right\}$, $\boldsymbol{\pi}\gets \mathtt{swap}\left(\boldsymbol{\pi}, t, s_t\right)$
4. $\mathbf{a}^{(t)} \gets \textbf{0}_{n}$, $\boldsymbol{\pi}_a \gets \boldsymbol{\pi}(t:n)$
5. $\mathbf{v}\gets \mathbf{x}_{s_t} - \mathbf{Q}^{(t-1)} \left(\left(\mathbf{Q}^{(t-1)}\right)^\top \mathbf{x}_{s_t}\right) \in \mathbb{R}^d$
6. $\mathbf{Q}^{(t)} \gets \left[\mathbf{Q}^{(t-1)}, \mathbf{v}/ \left\|\mathbf{v}\right\|_2\right] \in \mathbb{R}^{d \times t}$
7. $\mathbf{a}^{(t)}\left(\boldsymbol{\pi}_a\right) \gets \mathbf{X}\left(\boldsymbol{\pi}_a,:\right) \mathbf{v}$
8. $\mathbf{L}^{(t)} \gets \left[\mathbf{L}^{(t-1)}, \mathbf{a}^{(t)} / \sqrt{\mathbf{a}^{(t)}\left(s_t\right)}\right]$
9. $\mathbf{d}^{(t)}(i) \gets 0 ~\forall~ i \in S$, $\mathbf{d}^{(t)}(i) \gets \mathbf{d}^{(t-1)}(i) - \frac{\left(\mathbf{a}^{(t)}(i)\right)^2}{\mathbf{a}^{(t)}\left(s_t\right)} ~\forall~ i \notin S$

**Finalize:** $k \gets \left\vert S\right\vert$, $\mathbf{L}\gets \mathbf{L}^{(t)}$

The algorithm has an asymptotic complexity of $O\left(ndk + dk^2\right)$, with $k$ inherently sequential passes through $\mathbf{X}$ (or its submatrices), while the storage of $\mathbf{L}^{(t)}$ and $\mathbf{Q}^{(t)}$ requires $O(nk)$ and $O(dk)$ memory, respectively.
Meanwhile, analogous to greedy pivoting (), the algorithm is error-revealing thanks to its adaptive nature. The idea of random pivoting in is ubiquitous in various related problems, *e.g.*, the combination of volume sampling and adaptive sampling for low-rank approximation [@deshpande2006adaptive], the Randomly Pivoted Cholesky (RPCholesky) [@chen2022randomly] for kernel column Nyström approximation [@martinsson2020randomized 19.2], and the $D^2$-sampling for `k-means++` clustering [@arthur2007k]. Specifically, applying RPCholesky [@chen2022randomly Algorithm 2.1] to the kernel matrix $\mathbf{X}\mathbf{X}^\top \in \mathbb{R}^{n \times n}$ can be reformulated as , which is equivalent to in exact arithmetic (). Compared to , the skeleton selection stage of has an asymptotic complexity of $O\left(ndk + nk^2\right)$ (*cf.* $O\left(ndk + dk^2\right)$ for ), while requiring only $O(nk)$ memory for the storage of $\mathbf{L}^{(t)}$.

**Initialize:** $\mathbf{L}^{(0)} \gets \textbf{0}_{n \times 0}$, $\boldsymbol{\pi}\gets [1,\cdots,n]$, $S \gets \emptyset$, $t = 0$; $\mathbf{d}^{(0)} \in \mathbb{R}_{\ge 0}^n$ with $\mathbf{d}^{(0)}(i) = \left\|\mathbf{x}_i\right\|_2^2 ~\forall~ i \in [n]$

**Repeat:**

1. $t \gets t + 1$
2. $s_t \sim \mathbf{d}^{(t-1)} / \left\|\mathbf{d}^{(t-1)}\right\|_1$
3. $S \gets S \cup \left\{s_t\right\}$, $\boldsymbol{\pi}\gets \mathtt{swap}\left(\boldsymbol{\pi}, t, s_t\right)$
4. $\mathbf{a}^{(t)} \gets \textbf{0}_{n}$, $\boldsymbol{\pi}_a \gets \boldsymbol{\pi}(t:n)$
5. $\mathbf{a}^{(t)}\left(\boldsymbol{\pi}_a\right) \gets \mathbf{X}\left(\boldsymbol{\pi}_a,:\right) \mathbf{x}_{s_t} - \mathbf{L}^{(t-1)}\left(\boldsymbol{\pi}_a,1:t-1\right) {\mathbf{L}^{(t-1)}\left(s_t, 1:t-1\right)}^\top$
6. $\mathbf{L}^{(t)} \gets \left[\mathbf{L}^{(t-1)}, \mathbf{a}^{(t)} / \sqrt{\mathbf{a}^{(t)}\left(s_t\right)}\right]$
7. $\mathbf{d}^{(t)}(i) \gets 0 ~\forall~ i \in S$, $\mathbf{d}^{(t)}(i) \gets \mathbf{d}^{(t-1)}(i) - \frac{\left(\mathbf{a}^{(t)}(i)\right)^2}{\mathbf{a}^{(t)}\left(s_t\right)} ~\forall~ i \notin S$
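A minimal numpy sketch of sequential random pivoting, following the QR-based listing above, reads as follows; we keep a normalized basis $\mathbf{Q}$ and drop the permutation bookkeeping, which is equivalent in exact arithmetic, and the function name is ours (we assume $k$ does not exceed the rank of $\mathbf{X}$):

```python
import numpy as np

def sequential_random_pivoting(X, k, rng=None):
    """Pick k skeletons by sampling from the squared-norm distribution of
    the residual, then deflating the residual (QR update)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    Q = np.zeros((d, 0))                 # orthonormal basis of the skeletons
    d2 = (X * X).sum(axis=1)             # d^(t): squared residual norms
    S = []
    for _ in range(k):
        s = int(rng.choice(n, p=d2 / d2.sum()))
        S.append(s)
        v = X[s] - Q @ (Q.T @ X[s])      # project pivot out of span(Q)
        v = v / np.linalg.norm(v)
        Q = np.column_stack([Q, v])
        a = X @ v                        # one matrix-vector product per step
        d2 = np.maximum(d2 - a * a, 0.0) # downdate squared residual norms
        d2[S] = 0.0                      # selected points are never resampled
    return S, Q
```

The bottleneck is the $k$ inherently sequential matrix-vector products $\mathbf{X}\mathbf{v}$, matching the $O(ndk)$ term discussed above.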
**Remark 3** (Relation between QR with sequential random pivoting and RPCholesky). *Sequential random pivoting () on $\mathbf{X}$ is equivalent to RPCholesky ( [@chen2022randomly Algorithm 2.1]) on $\mathbf{X}\mathbf{X}^\top$ in the exact arithmetic.* *Rationale for .* To show the equivalence in the exact arithmetic, it is sufficient to observe that yield the same $\mathbf{a}^{(t)}$ and $\mathbf{d}^{(t)}$ in each iteration $t = 1,2,\cdots$. By induction, when $t=1$, both lead to $\mathbf{a}^{(1)} = \mathbf{X}\mathbf{x}_{s_1}$ and $$\begin{aligned} \mathbf{d}^{(1)}(i) = \left\|\mathbf{x}_i\right\|_2^2 - \frac{\left(\mathbf{x}_i^\top \mathbf{x}_{s_1}\right)^2}{\left\|\mathbf{x}_{s_1}\right\|_2^2} = \left\|\left(\mathbf{I}_d - \mathbf{Q}^{(1)} \left(\mathbf{Q}^{(1)}\right)^\top\right) \mathbf{x}_i\right\|_2^2 \quad \forall~ i \in [n] \setminus \left\{s_1\right\}, \end{aligned}$$ while $\mathbf{d}^{(1)}(s_1) = 0$. We also observe that $\mathbf{L}^{(1)} = \mathbf{X}\mathbf{Q}^{(1)}$ since $\mathbf{a}^{(1)}\left(s_1\right) = \left\|\mathbf{x}_{s_1}\right\|_2^2$. Suppose that in the $(t-1)$-th iteration, produce equivalent $\mathbf{a}^{(t-1)}$ and $\mathbf{d}^{(t-1)}$ under the exact arithmetic, while $\mathbf{d}^{(t-1)}\left(i\right) = \left\|\left(\mathbf{I}_d - \mathbf{Q}^{(t-1)} \left(\mathbf{Q}^{(t-1)}\right)^\top\right) \mathbf{x}_i\right\|_2^2$ for all $i \in [n]$ and $\mathbf{L}^{(t-1)} = \mathbf{X}\mathbf{Q}^{(t-1)}$. 
Then in the $t$-th iteration, both compute $$\begin{aligned} \mathbf{a}^{(t)} = \mathbf{X}\mathbf{x}_{s_t} - \mathbf{X}\mathbf{Q}^{(t-1)} \left(\mathbf{Q}^{(t-1)}\right)^\top \mathbf{x}_{s_t} = \mathbf{X}\left(\mathbf{I}_d - \mathbf{Q}^{(t-1)} \left(\mathbf{Q}^{(t-1)}\right)^\top\right) \mathbf{x}_{s_t}, \end{aligned}$$ $\mathbf{L}^{(t)} = \mathbf{X}\mathbf{Q}^{(t)}$, and for all $i \in [n]$, $$\begin{aligned} \mathbf{d}^{(t)}(i) = &\left\|\left(\mathbf{I}_d - \mathbf{Q}^{(t-1)} \left(\mathbf{Q}^{(t-1)}\right)^\top\right) \mathbf{x}_i\right\|_2^2 - \frac{\left(\mathbf{x}_i^\top \left(\mathbf{I}_d - \mathbf{Q}^{(t-1)} \left(\mathbf{Q}^{(t-1)}\right)^\top\right) \mathbf{x}_{s_t}\right)^2}{\mathbf{x}_{s_t}^\top \left(\mathbf{I}_d - \mathbf{Q}^{(t-1)} \left(\mathbf{Q}^{(t-1)}\right)^\top\right) \mathbf{x}_{s_t}} \\ = &\left\|\left(\mathbf{I}_d - \mathbf{Q}^{(t)} \left(\mathbf{Q}^{(t)}\right)^\top\right) \mathbf{x}_i\right\|_2^2. \end{aligned}$$ ◻ In light of the equivalence between , [@chen2022randomly Theorem 3.1] provides the following skeleton complexity guarantee for sequential random pivoting. **Proposition 2** (Sequential random pivoting). *For any $\epsilon> 0$ and $r \in [n]$, the skeleton subset $S$ selected by provides $\mathbb{E}\left[\mathcal{E}_{\mathbf{X}}\left(S\right)\right] \le \left(1 + \epsilon\right) \left\|\mathbf{X}- {\mathbf{X}}_{\langle r\rangle}\right\|_F^2$ (*i.e.*, a $(r,\epsilon)$-ID in expectation) when[^8] $$\begin{aligned} k = \left\vert S\right\vert \ge \frac{r}{\epsilon} + r \cdot \min\left\{\log\left(\frac{1}{\epsilon\eta_r}\right), 1 + \log\left(\frac{2^r}{\epsilon}\right)\right\}. \end{aligned}$$* From , we notice that sequential random pivoting almost matches the nearly optimal skeleton complexity guarantee of DPP () up to some logarithmic factors $\log\left(\frac{1}{\epsilon\eta_r}\right)$ on $r$. 
Further, the appealing skeleton complexity of random pivoting is empirically demonstrated in the form of RPCholesky for column Nyström approximation in [@chen2022randomly]. Although random pivoting leverages the strength of both adaptiveness and randomness and achieves an appealing skeleton complexity with an affordable asymptotic complexity, its lack of parallelizability due to its sequential nature becomes a critical concern for empirical runtime.

**Remark 4** (Inefficiency of sequential updates). *Given a sequence of matrix operations, it is well known that an appropriate implementation using Level-3 BLAS, or matrix-matrix, operations will run more efficiently on modern processors than an optimal implementation using Level-2 or Level-1 BLAS [@blackford2002updated]. This is largely due to the greater potential for the Level-3 BLAS to make more efficient use of memory caching in the processor. Notice that the computational bottleneck of either algorithm above, with asymptotic complexity $O\left(ndk\right)$, consists of $k$ matrix-vector multiplications (Level-2 BLAS) with $\mathbf{X}$. This leads to a considerable slowdown in practice.*

## Sketchy Pivoting

As an alternative to random pivoting, randomness can be combined with adaptiveness through randomized linear embedding (also known as sketching), which leads to the sketchy pivoting methods [@martinsson2017householder; @voronin2017efficient; @dong2021simpler]. As given in , the general framework of sketchy pivoting [@dong2021simpler Algorithm 1] consists of two stages.
**Input:** (a) $\mathbf{X}= \left[\mathbf{x}_1,\cdots,\mathbf{x}_n\right]^\top \in \mathbb{R}^{n \times d}$; (b) target rank $k \in \mathbb{N}$; (c) sample size $l \ge k$; (d) pivoting strategy: LUPP/CPQR

**Output:** (a) $S = \left\{s_1,\cdots,s_k\right\} \subset [n]$; (b) $\mathbf{L}\in \mathbb{R}^{n \times k}$; (c) $\boldsymbol{\pi}\in \mathfrak{S}_n$

1. Draw a randomized linear embedding $\boldsymbol{\Omega}\in \mathbb{R}^{d \times l}$
2. $\mathbf{Y}\gets \mathbf{X}\boldsymbol{\Omega}\in \mathbb{R}^{n \times l}$
3. (LUPP) $\mathbf{L}, \sim, \boldsymbol{\pi}\gets \mathtt{lu}\left(\mathbf{Y}\left(:,1:k\right), \text{``vector''}\right)$ [@dong2021simpler]; or (CPQR) $\sim, \mathbf{L}^\top, \boldsymbol{\pi}\gets \mathtt{qr}\left(\mathbf{Y}\left(:,1:k\right)^\top,~ \text{``econ''},~ \text{``vector''}\right)$ [@voronin2017efficient]
4. $S \gets \boldsymbol{\pi}\left(1:k\right)$

In the first stage, *randomness is incorporated through sketching*, which serves two purposes.

- From the computational efficiency perspective, sketching reduces the data dimension from $d$ to $l = O(k)$[^9], while transferring the $O(ndk)$ computational bottleneck from the sequential greedy pivoting to the (input-sparsity-time and) parallelizable matrix-matrix multiplications.

- From the skeleton complexity perspective, applying randomized embedding to the row space of $\mathbf{X}$ via sketching improves the empirical robustness of the subsequent greedy pivoting [@trefethen1990average; @dong2021simpler]. Intuitively, this is because the scarce adversarial inputs for greedy pivoting () are effectively negligible under randomization.
Common choices of randomized linear embeddings in the sketching stage include Gaussian embedding [@indyk1998approximate] (with asymptotic complexity $O\left(ndk\right)$), subsampled randomized trigonometric transforms [@woolfe2008fast; @rokhlin2008fast; @tropp2011improved; @boutsidis2013improved] (with asymptotic complexity $O\left(nd \log(k)\right)$), and sparse embeddings [@meng2013low; @nelson2013osnap; @clarkson2017low; @tropp2017fixed] (with asymptotic complexity $O\left(nd\right)$). We refer the interested readers to [@halko2011finding; @woodruff2014sketching; @martinsson2020randomized] for a general review of sketching. In the second stage, *adaptiveness is utilized as greedy pivoting on the sketched sample matrix* $\mathbf{Y}= \mathbf{X}\boldsymbol{\Omega}\in \mathbb{R}^{n \times l}$ to select skeletons. Both LUPP [@golub2013matrix Section 3.4] and CPQR [@golub2013matrix Section 5.4.2] on $\mathbf{Y}$ cost $O(nk^2)$ asymptotically, whereas LUPP is remarkably faster in practice due to its superior parallelizability [@dong2021simpler]. Intuitively, applying greedy pivoting on top of sketching can be viewed as an analog of squared-norm sampling with a slightly different squared-norm-based distribution. For example, for a Gaussian random matrix $\boldsymbol{\Omega}$ with i.i.d. entries from $\mathcal{N}\left(0, 1/l\right)$, each row $\mathbf{y}_i = \boldsymbol{\Omega}^\top \mathbf{x}_i$ of $\mathbf{Y}$ is a Gaussian random vector distributed as $\mathcal{N}\left(\textbf{0}, \frac{\left\|\mathbf{x}_i\right\|_2^2}{l} \mathbf{I}_l\right)$[^10]. When applying greedy squared-norm pivoting (CPQR) in the second stage, the first pivot is selected according to $s_1 \gets \mathop{\mathrm{argmax}}_{i \in [n]} \left\|\mathbf{x}_i\right\|_2^2 Z_i$, where each $Z_i = \frac{l}{\left\|\mathbf{x}_i\right\|_2^2} \left\|\mathbf{y}_i\right\|_2^2$ is a $\chi^2_l$ random variable but $\left\{Z_i ~\middle\vert~ i \in [n] \right\}$ are dependent.
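The two-stage framework with a Gaussian embedding admits a compact numpy sketch; for readability we replace the LUPP/CPQR library call with an explicit greedy squared-norm pivoting loop on the sketch, and the function name and the default oversampling $l = 2k$ are ours:

```python
import numpy as np

def sketchy_pivoting(X, k, l=None, rng=None):
    """Two-stage sketchy pivoting: (1) Gaussian sketch Y = X @ Omega,
    (2) greedy squared-norm pivoting on the n x l sketch instead of X."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    l = 2 * k if l is None else l            # mild oversampling, l = O(k)
    Omega = rng.standard_normal((d, l)) / np.sqrt(l)
    Y = X @ Omega                            # the only O(n d l) step
    S = []
    for _ in range(k):                       # pivoting now costs O(n l k)
        s = int(np.argmax((Y * Y).sum(axis=1)))
        S.append(s)
        v = Y[s] / np.linalg.norm(Y[s])
        Y = Y - np.outer(Y @ v, v)           # deflate the sketch, not X
    return S
```

Since only the sketch $\mathbf{Y}$ is deflated, the dominant cost is the single matrix-matrix product $\mathbf{X}\boldsymbol{\Omega}$, which is embarrassingly parallelizable.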
The intuition behind sketchy pivoting is similar to random pivoting, but the randomness arises from the random variables $Z_i$, instead of sampling, which leads to the following conjecture. **Conjecture 3**. *There exists a distribution $P: \mathbb{R}\to [0,1]$ satisfying the universality assumptions (*e.g.*, $P$ is symmetric and zero-mean with bounded variance[^11]) such that the following holds. For the $l$ independent random vectors $\left\{\mathbf{Y}\left(:,j\right) = \left[Y_{1j},\cdots,Y_{nj}\right]^\top ~\middle\vert~ j \in [l] \right\}$ such that $$\begin{aligned} Y_{ij} \sim P,\quad \mathbb{E}\left[\mathbf{Y}\left(:,j\right)\right] = \mathbf{0},\quad \mathrm{Cov}\left(\mathbf{Y}\left(:,j\right)\right) = \mathbf{X}\mathbf{D}^{-1} \mathbf{X}^\top \quad \forall~ j \in [l], \end{aligned}$$ where $\mathbf{D}= \mathop{\mathrm{diag}}\left(\left\|\mathbf{x}_1\right\|_2^2, \cdots, \left\|\mathbf{x}_n\right\|_2^2\right)$, the induced discrete random variable $$\begin{aligned} s \gets \mathop{\mathrm{argmax}}_{i \in [n]} \left\|\mathbf{x}_i\right\|_2^2 Z_i, \quad Z_i = \sum_{j=1}^l Y_{ij}^2 \quad \forall~ i \in [n] \end{aligned}$$ follows the squared-norm distribution $p(s) \approx {\left\|\mathbf{x}_s\right\|_2^2}/{\left\|\mathbf{X}\right\|_F^2}$.* In the extreme scenarios, when the variance of $\left\{Z_i\right\}_{i \in [n]}$ is negligible in comparison to the squared norms $\left\{\left\|\mathbf{x}_i\right\|_2^2\right\}_{i \in [n]}$, sketchy pivoting behaves similarly to greedy pivoting. Meanwhile, when the variance of $\left\{Z_i\right\}_{i \in [n]}$ is much larger than differences among the squared norms, sketchy pivoting tends to behave like uniform sampling. Therefore intuitively, there exist some appropriate choices of variance interpolating the two extreme cases such that sketchy pivoting mimics the behavior of squared-norm sampling. 
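As an illustrative sanity check of this intuition (not a proof of the conjecture), one can estimate the distribution of the first sketchy pivot under a Gaussian embedding by Monte Carlo; the helper below and the toy input are ours:

```python
import numpy as np

def first_pivot_distribution(X, l, trials=2000, rng=None):
    """Monte Carlo estimate of P(s_1 = i) for the first sketchy pivot
    s_1 = argmax_i ||Omega^T x_i||_2^2 under a Gaussian embedding."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(len(X))
    for _ in range(trials):
        Y = X @ rng.standard_normal((X.shape[1], l))
        counts[int(np.argmax((Y * Y).sum(axis=1)))] += 1
    return counts / trials

# Squared norms (4, 1, 1) give the squared-norm distribution (2/3, 1/6, 1/6);
# with a Gaussian sketch the empirical first-pivot frequencies are skewed
# even further toward the largest norm (the greedy-like extreme).
X = np.diag([2.0, 1.0, 1.0])
freq = first_pivot_distribution(X, l=8, rng=0)
```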
Suppose the conjecture holds for some distribution $P$; then sketchy pivoting (*e.g.*, SkCPQR in ) with a randomized linear embedding $\boldsymbol{\Omega}$ consisting of i.i.d. entries drawn from $P$ is equivalent to sequential random pivoting (), which enjoys the nearly optimal skeleton complexity guarantee in . In practice, sketchy pivoting () with Gaussian embedding provides high-quality skeleton selection with error $\mathcal{E}_{\mathbf{X}}\left(S\right)$ comparable to that of random pivoting (, *cf.*), but much more efficiently thanks to the parallelizability of sketching [@dong2021simpler]. However, as a major empirical limitation, sketchy pivoting is not error-revealing and requires prior knowledge of the target rank $k$.

# Robust Blockwise Random Pivoting {#sec:rbas}

In light of the existing skeleton selection methods summarized in that leverage/combine randomness and adaptiveness, a *parallelizable, error-revealing, and exact-ID-revealing* skeleton selection algorithm that attains skeleton and asymptotic complexities similar to those of sequential random pivoting is a desirable missing piece. Blockwise random pivoting is a natural extension of its sequential variant that consists of matrix-matrix multiplications and is therefore parallelizable. In the kernel formulation (with an input kernel matrix $\mathbf{X}\mathbf{X}^\top$), [@chen2022randomly Algorithm 2.2] introduced a blocked version of RPCholesky that can be generalized as blockwise random pivoting (BRP--- with $\tau_b = 0$). However, for a large block size $b \in \mathbb{N}$ that improves parallelizability, such plain BRP can pick up to $b$ times more skeletons than necessary, due to a pitfall similar to that of squared-norm sampling instantiated in (*e.g.*, ). Moreover, in practice, the numerical instability issue tends to be exacerbated as the block size increases.
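The robust blockwise filtering at the heart of RBRP (a local CPQR over each sampled block, truncated on the relative residual) can be sketched in numpy as follows; the function name is ours, and we emulate the pivoted QR with an explicit greedy loop over the block columns:

```python
import numpy as np

def robust_blockwise_filter(V, tau_b):
    """Keep only the leading pivots of a sampled block V (d x b): run a
    greedy (CPQR-style) sweep over the block columns and stop once the
    remaining residual falls below tau_b * ||V||_F^2."""
    d, b = V.shape
    R = V.astype(float).copy()
    total = (V * V).sum()
    keep = []                                # block-local indices; b' = len(keep)
    for _ in range(b):
        r2 = (R * R).sum()
        if r2 <= tau_b * total:              # truncation on the relative residual
            break
        j = int(np.argmax((R * R).sum(axis=0)))
        keep.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)           # deflate the remaining columns
    return keep

# Two near-duplicate directions in one sampled block: only one survives.
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1e-8, 2.0]])
keep = robust_blockwise_filter(V, tau_b=1e-6)
```

With a tolerance around $\tau_b = 1/b$, near-duplicate directions sampled into the same block are discarded locally rather than admitted as redundant skeletons.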
**Algorithm (RBRP).**

*Input:* (a) $\mathbf{X}= \left[\mathbf{x}_1,\cdots,\mathbf{x}_n\right]^\top \in \mathbb{R}^{n \times d}$; (b) $\tau = \left(1+\epsilon\right) \eta_r \in (0,1)$ (or $k \in \mathbb{N}$); (c) block size $b \in \mathbb{N}$; (d) $\tau_b \in [0,1)$, the tolerance for blockwise filtering (we choose $\tau_b = \frac{1}{b}$).

*Output:* (a) $S = \left\{s_1,\cdots,s_k\right\} \subset [n]$; (b) $\mathbf{L}\in \mathbb{R}^{n \times k}$; (c) $\boldsymbol{\pi}\in \mathfrak{S}_n$.

1. Initialize $\mathbf{L}^{(0)} \gets \textbf{0}_{n \times 0}$, $\mathbf{Q}^{(0)} \gets \textbf{0}_{d \times 0}$, $\boldsymbol{\pi}\gets [1,\cdots,n]$, $S \gets \emptyset$, $t = 0$, and $\mathbf{d}^{(0)} \in \mathbb{R}_{\ge 0}^n$ with $\mathbf{d}^{(0)}(i) = \left\|\mathbf{x}_i\right\|_2^2 ~\forall~ i \in [n]$.
2. $t \gets t + 1$, $b \gets \min(b, k-\left\vert S\right\vert)$.
3. Sample $S_t = \left(s_{\left\vert S\right\vert+1}, \cdots, s_{\left\vert S\right\vert+b}\right) \sim \mathbf{d}^{(t-1)} / \left\|\mathbf{d}^{(t-1)}\right\|_1$ without replacement.
4. $\mathbf{V}\gets \mathbf{X}\left(S_t,:\right)^\top - \mathbf{Q}^{(t-1)} \left(\left(\mathbf{Q}^{(t-1)}\right)^\top \mathbf{X}\left(S_t,:\right)^\top\right) \in \mathbb{R}^{d \times b}$.
5. $\mathbf{Q}_\mathbf{V}, \mathbf{R}_\mathbf{V}, \boldsymbol{\pi}_\mathbf{V}\gets \mathtt{qr}\left(\mathbf{V},~ \text{``econ''},~ \text{``vector''}\right)$.
6. $b' \gets \max\left\{i \in [b] ~\middle\vert~ \left\|\mathbf{R}_\mathbf{V}\left(i:b, i:b\right)\right\|_F^2 \ge \tau_b \cdot \left\|\mathbf{R}_\mathbf{V}\right\|_F^2 \right\}$.
7. $S'_t \gets S_t\left(\boldsymbol{\pi}_\mathbf{V}\left(1:b'\right)\right)$, $\mathbf{Q}'_\mathbf{V}\gets \mathbf{Q}_\mathbf{V}\left(:,1:b'\right)$.
8. $\mathbf{Q}^{(t)} \gets \left[\mathbf{Q}^{(t-1)}, \mathbf{Q}'_\mathbf{V}\right] \in \mathbb{R}^{d \times \left(\left\vert S\right\vert + b'\right)}$.
9. $\boldsymbol{\pi}\gets \mathtt{swap}\left(\boldsymbol{\pi}, \left\vert S\right\vert+1:\left\vert S\right\vert+b', S'_t\right)$, $\boldsymbol{\pi}_a \gets \boldsymbol{\pi}(\left\vert S\right\vert+1:n)$.
10. $\widehat{\mathbf{L}}^{(t)} \gets \textbf{0}_{n \times b'}$, $\widehat{\mathbf{L}}^{(t)}\left(\boldsymbol{\pi}_a,:\right) \gets \mathbf{X}\left(\boldsymbol{\pi}_a,:\right) \mathbf{Q}_\mathbf{V}$.
11. $S \gets S \cup S'_t$, $\mathbf{L}^{(t)} \gets \left[\mathbf{L}^{(t-1)}, \widehat{\mathbf{L}}^{(t)}\right] \in \mathbb{R}^{n \times \left\vert S\right\vert}$.
12. $\mathbf{d}^{(t)}(i) \gets 0 ~\forall~ i \in S$, $\mathbf{d}^{(t)}(i) \gets \mathbf{d}^{(t-1)}(i) - \left\|\widehat{\mathbf{L}}^{(t)}\left(i,:\right)\right\|_2^2 ~\forall~ i \notin S$; repeat Steps 2--12 until the stopping criterion (on $\tau$ or $k$ from the input) is met.
13. $k \gets \left\vert S\right\vert$, $\mathbf{L}\gets \mathbf{L}^{(t)}$.

To exemplify the pitfall of plain BRP, we present an adversarial input in based on a tailored Gaussian mixture model (GMM).

**Example 2** (Pitfall of plain blockwise random pivoting). *For $n, d, k \in \mathbb{N}$ such that $n/k = m \in \mathbb{N}$, we draw $n$ points from a GMM with means $\mathcal{C}= \left\{\boldsymbol{\mu}_j\right\}_{j \in [k]}$, covariance $\mathbf{I}_d$, and cluster size $m$---$\mathcal{X}= \left\{\mathbf{x}_i \in \mathbb{R}^d\right\}_{i \in [n]} = \bigcup_{j=1}^{k} \mathcal{X}_j$ where $$\begin{aligned} \mathcal{X}_j = \left\{\mathbf{x}_{m(j-1)+\iota} = \boldsymbol{\mu}_j + \boldsymbol{\xi}_\iota ~\middle\vert~ \boldsymbol{\mu}_j = 10 j \cdot \mathbf{e}_j,~ \boldsymbol{\xi}_\iota \sim \mathcal{N}\left(\textbf 0_d, \mathbf{I}_d\right)~\text{i.i.d.}~\forall~\iota \in [m] \right\}. \end{aligned}$$ Consider a GMM data matrix $\mathbf{X}= \left[\mathbf{x}_1,\cdots,\mathbf{x}_n\right]^\top \in \mathbb{R}^{n \times d}$. Since the means in $\mathcal{C}$ have distinct norms whose discrepancies dominate the covariance $\mathbf{I}_d$, $\mathcal{X}$ can be partitioned into $k$ clusters $\left\{\mathcal{X}_j\right\}_{j \in [k]}$, each containing $m$ points with distinct norms.*

*The best size-$k$ skeleton subset consists of exactly one point from each cluster $\mathcal{X}_j$, while multiple points from the same cluster are redundant.
However, plain blockwise adaptiveness (random (BRP) or greedy (BGP) pivoting) tends to pick multiple points from the same cluster. For instance, with block size $b$, BGP can pick up to $\min\left(b,m\right)$ points in the same cluster. A similar inefficiency also appears in BRP but is generally alleviated thanks to the randomness ( (left)).*

To resolve the challenges illustrated by , we introduce a *robust blockwise random pivoting (RBRP)* algorithm---that empirically achieves skeleton complexities comparable to the sequential adaptive methods (SRP and CPQR), as demonstrated in (right), while exploiting the parallelizability of matrix-matrix multiplications.

**Remark 5** (Robust blockwise filtering). *The key step in that improves the robustness of BRP is the robust blockwise filtering with tolerance $\tau_b \in [0,1]$---applying the truncated CPQR locally on the small residuals of the selected candidates $\mathbf{V}\in \mathbb{R}^{d \times b}$: $$\begin{aligned} \mathbf{V}\left(:,\boldsymbol{\pi}_{\mathbf{V}}\right) = \mathbf{Q}_{\mathbf{V}} \mathbf{R}_{\mathbf{V}} \approx \mathbf{Q}_{\mathbf{V}}\left(:,1:b'\right) \mathbf{R}_{\mathbf{V}}\left(1:b',:\right) \end{aligned}$$ where the relative truncation error is upper bounded by $\tau_b$ $$\begin{aligned} \left\|\mathbf{V}\left(:,\boldsymbol{\pi}_{\mathbf{V}}\right)-\mathbf{Q}_{\mathbf{V}}\left(:,1:b'\right) \mathbf{R}_{\mathbf{V}}\left(1:b',:\right)\right\|_F^2 = \left\|\mathbf{R}_{\mathbf{V}}\left(b'+1:b,b'+1:b\right)\right\|_F^2 < \tau_b \left\|\mathbf{V}\right\|_F^2.
\end{aligned}$$ With small constant block sizes, the robust blockwise filtering based on local CPQR can be computed at a negligible additional cost of $O(db^2)$, despite the sequential nature of CPQR.*

In general, takes $O\left(ndk + bdk^2\right)$ time asymptotically (with a slightly larger lower-order term, *cf.* $O\left(ndk + dk^2\right)$ for the sequential version in ), where the dominant cost $O(ndk)$ consists of $k$ parallelizable passes through $\mathbf{X}$. Analogous to , the storage of $\mathbf{L}^{(t)}$ and $\mathbf{Q}^{(t)}$ requires $O(nk)$ and $O(dk)$ memory, respectively. Meanwhile, the adaptiveness grants the blockwise error-revealing ability.

| Abbreviation | Skeleton selection method | Algorithm |
|--------------|---------------------------|-----------|
| CPQR | Sequential greedy pivoting | [@golub2013matrix Section 5.4.2] |
| SqNorm | Squared-norm sampling | [@deshpande2006matrix] |
| DPP | DPP sampling | [@belabbas2009spectral Equation 8] [@kulesza2011k; @GPBV19] |
| SRP | Sequential random pivoting | [@chen2022randomly] |
| SkCPQR | Sketchy pivoting with CPQR | [@voronin2017efficient] |
| SkLUPP | Sketchy pivoting with LUPP | [@dong2021simpler] |
| BGP | Blockwise greedy pivoting with $\tau_b = 0$ | |
| BRP | Blockwise random pivoting with $\tau_b = 0$ | |
| RBGP | Robust blockwise greedy pivoting with $\tau_b = \frac{1}{b}$ | |
| **RBRP** | Robust blockwise random pivoting with $\tau_b = \frac{1}{b}$ | |

: Summary of abbreviations for the skeleton selection methods in the experiments.

**Remark 6** (Blockwise greedy pivoting). *For completeness of comparison, in the experiments we consider a variation of (robust) blockwise random pivoting ((R)BRP)---(robust) blockwise greedy pivoting ((R)BGP).
In , (R)BGP replaces the sampling step,*

*sample $S_t = \left(s_{\left\vert S\right\vert+1}, \cdots, s_{\left\vert S\right\vert+b}\right) \sim \mathbf{d}^{(t-1)} / \left\|\mathbf{d}^{(t-1)}\right\|_1$ without replacement*

*with greedily choosing the points corresponding to the top-$b$ probabilities:*

*select $S_t = \left(s_{\left\vert S\right\vert+1}, \cdots, s_{\left\vert S\right\vert+b}\right) \gets$ indices of the top-$b$ entries in $\mathbf{d}^{(t-1)}$.*

*With $\tau_b = 0$, BGP reduces to CPQR. For adversarial inputs like the one in , BGP tends to suffer from far worse skeleton complexities than those of BRP ( (left)).*

![ The skeletonization error $\mathcal{E}_{\mathbf{X}}\left(S\right)$ of the algorithms in on a GMM-based adversarial input as described in , where adaptiveness is critical for achieving good skeleton complexity/accuracy. **(Left)**: Sequential random pivoting (SRP) and greedy pivoting (CPQR) enjoy the best skeleton complexities. Plain blockwise random pivoting (BRP) and greedy pivoting (BGP) tend to suffer from observably higher skeleton complexities than the sequential adaptive methods, and greedy selection tends to severely exacerbate this inefficiency. Squared-norm sampling (SqNorm) can be viewed as plain BRP with block size $b=k$, which suffers from a similar inefficiency as BRP/BGP. DPP sampling attains relatively good accuracy even without adaptiveness, thanks to its nearly optimal skeleton complexity [@belabbas2009spectral; @guruswami2012optimal; @chen2022randomly], but with a much higher runtime in practice as a trade-off (*e.g.*, with [@GPBV19], k-DPP takes about $10^3 \times$ longer than the other algorithms in ). **(Right)**: With robust blockwise filtering (), robust blockwise random pivoting (RBRP) and greedy pivoting (RBGP), as well as the sketchy pivoting algorithms, achieve comparable, nearly optimal skeleton complexities to the sequential adaptive methods.
It is worth highlighting that despite the comparable skeleton complexities of RBRP and RBGP, RBGP is generally much slower than RBRP (*cf.*). This is because, without randomness, blockwise greedy pivoting tends to pick more redundant points in each block, which are later filtered out. ](figs/id__gmm__b30__os2.0__rpgp.png){#fig:id_gmm_rpgp width="\\textwidth"}

By comparing (left) and (right), we observe that the robust blockwise filtering () improves both blockwise random and greedy pivoting. In particular, with $\tau_b = \frac{1}{b}$, RBRP (and RBGP) empirically attains skeleton complexities comparable to those of sequential random pivoting on the GMM adversarial input (). The same observation extends to all the natural and synthetic data matrices tested in our experiments (), which leads to the following conjecture:

**Conjecture 4** (Skeleton complexity of RBRP). *RBRP () with $\tau_b = \frac{1}{b}$ shares a similar skeleton complexity as sequential random pivoting () in expectation.*

*Rationale for .* Consider the $t$-th step, with $\left\vert S\right\vert$ skeletons selected in the previous steps, $\left\vert S_t\right\vert = b$ candidates drawn from $\frac{\mathbf{d}^{(t-1)}}{\left\|\mathbf{d}^{(t-1)}\right\|_1}$, and $\left\vert S'_t\right\vert = b'$ remaining skeletons in $S_t$ that pass through the robust blockwise filtering (). Now suppose we switch to sequential random pivoting after the $(t-1)$-th step and draw $b'$ skeletons adaptively, denoted as $S''_t$. Then, the key rationale for is that with $\tau_b = \frac{1}{b}$, $\mathcal{E}_{\mathbf{X}}\left(S \cup S'_t\right) \approx \mathcal{E}_{\mathbf{X}}\left(S \cup S''_t\right)$ with (reasonably) high probability.
To see this, we observe the close connection between the selection scheme of $S'_t$ and that of $S''_t$:

- The robust blockwise filtering for $S'_t$ in RBRP adaptively selects the point with the maximum residual norm in each step (*i.e.*, CPQR on the residual matrix $\mathbf{V}$ in ), within a small subset $S_t$ of $b$ points sampled according to the squared-norm distribution over $[n] \setminus S$.

- The random pivoting for $S''_t$ in SRP adaptively selects each point according to the squared-norm distribution of the residual over the complement of the skeleton subset.

Intuitively, every selection made for $S'_t$ and $S''_t$ brings a similar decay in the skeletonization error if the squared norms of the residual matrix corresponding to the remaining points in $S_t$ are around the maxima over the complement of the skeleton subset, so that these remaining points in $S_t$ would also be selected by SRP with high probability[^12]. Instead of computing the squared norms of the entire residual matrix explicitly for each selection as SRP does (which would lead to undesired sequential updates), RBRP leverages the threshold $\tau_b = \frac{1}{b}$ to encourage the first $b'$ points selected by the robust blockwise filtering in each block to have such top squared norms in the corresponding residual matrices. Notice that with $\tau_b = \frac{1}{b}$, we enforce the trailing residual at each accepted pivot to be at least the average squared norm of the original block: $\left\|\mathbf{R}_\mathbf{V}\left(i:b, i:b\right)\right\|_F^2 \ge \frac{1}{b} \left\|\mathbf{V}\right\|_F^2$. Since the squared norms of residuals are non-increasing under QR updates, $\frac{1}{b} \left\|\mathbf{V}\right\|_F^2$ serves as an overestimate for the squared norms of redundant points sampled in $S_t$.
As toy examples, if $\mathbf{V}$ is rank deficient (*i.e.*, $\mathop{\mathrm{rank}}(\mathbf{V}) = b'' < b$), then there exists $b' \le b''$ such that $\left\|\mathbf{R}_\mathbf{V}\left(b':b, b':b\right)\right\|_F^2 \ge \frac{1}{b} \left\|\mathbf{V}\right\|_F^2$ but $$\begin{aligned} 0 = \left\|\mathbf{R}_\mathbf{V}\left(b''+1:b, b''+1:b\right)\right\|_F^2 \le \left\|\mathbf{R}_\mathbf{V}\left(b'+1:b, b'+1:b\right)\right\|_F^2 < \frac{1}{b} \left\|\mathbf{V}\right\|_F^2. \end{aligned}$$ Meanwhile, if $\mathbf{V}\in \mathbb{R}^{d \times b}$ contains orthogonal columns with the same squared norm, then $\left\|\mathbf{R}_\mathbf{V}\left(i:b, i:b\right)\right\|_F^2 \ge \left\vert\mathbf{R}_\mathbf{V}\left(b, b\right)\right\vert^2 = \frac{1}{b} \left\|\mathbf{V}\right\|_F^2$ for all $i \in [b]$, and therefore $b'=b$. ◻

# Interpolation Matrix Construction {#sec:id_construction}

In this section, we consider the construction of the interpolation matrix $\mathbf{W}$ after skeleton selection. In particular, we are interested in exploiting information from the $O\left(ndk\right)$ skeleton selection algorithms in , *i.e.*, the matrix $\mathbf{L}\in \mathbb{R}^{n \times k}$ in , to construct $\mathbf{W}$ efficiently in $O(nk^2)$ time. We begin by noticing that the optimal interpolation matrix solves the least-squares problem $$\begin{aligned} \label{eq:stable_id} \min_{\mathbf{W}\in \mathbb{R}^{n \times k}} \left\|\mathbf{X}- \mathbf{W}\mathbf{X}_S\right\|_F^2 \quad\Rightarrow\quad \mathbf{W}= \mathbf{X}\mathbf{X}_S^\dagger= \mathbf{X}\mathbf{X}_S^\top \left(\mathbf{X}_S \mathbf{X}_S^\top\right)^{-1},\end{aligned}$$ so that we can compute such optimal $\mathbf{W}$ exactly in $O\left(ndk + dk^2 + nk^2\right)$ time via QR decomposition on $\mathbf{X}_S$. However, the dominant cost $O(ndk)$ requires additional passes through $\mathbf{X}$ and can be prohibitive after the skeleton selection stage.
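As a quick numerical sanity check of the closed form above, with illustrative sizes and a random skeleton subset (so that $\mathbf{X}_S$ has full row rank almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 120, 60, 10
X = rng.standard_normal((n, d))
S = rng.choice(n, size=k, replace=False)
X_S = X[S]                        # k x d skeleton rows (full row rank a.s.)

# optimal interpolation matrix W = X X_S^+ via a single least-squares solve:
# min_W ||X - W X_S||_F  <=>  X_S^T W^T ~= X^T
W = np.linalg.lstsq(X_S.T, X.T, rcond=None)[0].T

# agrees with the explicit pseudoinverse formula
W_pinv = X @ np.linalg.pinv(X_S)
```

Since $\mathbf{X}_S$ has full row rank, $\mathbf{W}\mathbf{X}_S$ restricted to the skeleton rows recovers them exactly, *i.e.*, $\mathbf{W}(S,:) = \mathbf{I}_k$.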
## Exact-ID-revealing Algorithms

With the output matrix $\mathbf{L}\in \mathbb{R}^{n \times k}$ from , the corresponding optimal interpolation matrix $\mathbf{X}\mathbf{X}_S^\dagger$ can be computed in $O(nk^2)$ time following .

*Input:* (a) $\boldsymbol{\pi}\in \mathfrak{S}_n$ (such that $S = \boldsymbol{\pi}\left(1:k\right)$); (b) $\mathbf{L}\in \mathbb{R}^{n \times k}$. *Output:* (a) $\mathbf{W}\in \mathbb{R}^{n \times k}$.

1. $\mathbf{L}_1 \gets \mathbf{L}\left(\boldsymbol{\pi}\left(1:k\right),:\right) \in \mathbb{R}^{k \times k}$, $\mathbf{L}_2 \gets \mathbf{L}\left(\boldsymbol{\pi}\left(k+1:n\right),:\right) \in \mathbb{R}^{(n-k) \times k}$.
2. $\mathbf{W}\gets \textbf{0}_{n \times k}$, $\mathbf{W}\left(\boldsymbol{\pi}\left(1:k\right),:\right) = \mathbf{I}_k$.
3. $\mathbf{W}\left(\boldsymbol{\pi}\left(k+1:n\right),:\right) = \mathbf{L}_2 \mathbf{L}_1^{-1}$.

**Proposition 5**. *The sequential () and blockwise () random pivoting, as well as their greedy pivoting variations, are exact-ID-revealing ().*

*Proof of .* We first observe that the difference between sequential/blockwise random versus greedy pivoting lies only in the skeleton subset $S$. That is, assuming the same skeleton selection $S$, the resulting $\mathbf{L}$'s from random and greedy pivoting would be the same. Therefore, it is sufficient to show that holds for random pivoting in . For both , recall that $\mathbf{L}\gets \mathbf{L}^{(t)}$. We observe that $\mathbf{L}\mathbf{Q}^{(t)\top}$ provides a rank-$k$ approximation for $\mathbf{X}$: $\left\|\mathbf{X}- \mathbf{L}\mathbf{Q}^{(t)\top}\right\|_F^2 = \left\|\mathbf{d}^{(t)}\right\|_1 = \mathcal{E}_{\mathbf{X}}\left(S\right) < \tau \left\|\mathbf{X}\right\|_F^2$. We will show that $\mathbf{L}\mathbf{Q}^{(t)\top} = \mathbf{X}\mathbf{X}_S^\dagger \mathbf{X}_S$ in exact arithmetic.
Then, with $\mathbf{L}_1 \gets \mathbf{L}\left(\boldsymbol{\pi}\left(1:k\right),:\right)$ and $\mathbf{L}_2 \gets \mathbf{L}\left(\boldsymbol{\pi}\left(k+1:n\right),:\right)$ as in , by construction, $\mathbf{L}_2 \mathbf{Q}^{(t)\top} = \mathbf{X}\left(\boldsymbol{\pi}\left(k+1:n\right),:\right)$ and $\mathbf{L}_1 \mathbf{Q}^{(t)\top} = \mathbf{X}_S = \mathbf{X}\left(\boldsymbol{\pi}\left(1:k\right),:\right)$, where $\mathbf{L}_1 \in \mathbb{R}^{k \times k}$ is invertible as $\mathbf{X}_S$ consists of linearly independent rows. Therefore, $\mathbf{X}_S^\dagger = \mathbf{Q}^{(t)} \mathbf{L}_1^{-1}$, and the optimal interpolation matrix $\mathbf{W}= \mathbf{X}\mathbf{X}_S^\dagger$ can be expressed up to permutation as $$\begin{aligned} \mathbf{W}\left(\boldsymbol{\pi},:\right) = \mathbf{X}\left(\boldsymbol{\pi},:\right)\mathbf{X}_S^\dagger = \begin{bmatrix} \mathbf{L}_1 \mathbf{Q}^{(t)\top} \\ \mathbf{L}_2 \mathbf{Q}^{(t)\top} \end{bmatrix} \mathbf{Q}^{(t)} \mathbf{L}_1^{-1} = \begin{bmatrix} \mathbf{L}_1 \\ \mathbf{L}_2 \end{bmatrix} \mathbf{L}_1^{-1} = \begin{bmatrix} \mathbf{I}_k \\ \mathbf{L}_2 \mathbf{L}_1^{-1} \end{bmatrix}. \end{aligned}$$ The computation of $\mathbf{L}_2 \mathbf{L}_1^{-1}$ involves solving a small $k \times k$ system $(n-k)$ times, which takes $O(nk^2)$ time in general. Overall, given $\mathbf{L}$ from , constructs the optimal interpolation matrix $\mathbf{W}= \mathbf{X}\mathbf{X}_S^\dagger = \mathbf{L}\mathbf{L}_1^{-1}$. ◻

Moreover, the special structure of $\mathbf{L}_1$ may be leveraged to further accelerate the construction of $\mathbf{W}$. For instance, $\mathbf{L}_1$ from the sequential algorithm in is lower triangular, so that $\mathbf{L}_2 \mathbf{L}_1^{-1}$ can be evaluated via backward substitution in $O\left((n-k)k^2\right)$ time.
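The identity $\mathbf{W}= \mathbf{L}\mathbf{L}_1^{-1} = \mathbf{X}\mathbf{X}_S^\dagger$ can be verified numerically. In the sketch below, $\mathbf{Q}$ is constructed directly as an orthonormal basis for the span of the selected rows rather than by running the pivoting algorithms, which yields the same $\mathbf{L}$ up to the choice of basis; the sizes and the skeleton subset are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 40, 8
X = rng.standard_normal((n, d))
X_S = X[:k]                      # pretend the first k rows were selected

# Q: an orthonormal basis for the span of the selected rows;
# L = X Q mirrors the factor produced by the pivoting algorithms
Q = np.linalg.qr(X_S.T)[0]       # d x k
L = X @ Q                        # n x k
L1 = L[:k]                       # k x k; invertible since X_S has full row rank

W = L @ np.linalg.inv(L1)        # exact-ID-revealing construction W = L L1^{-1}
W_opt = X @ np.linalg.pinv(X_S)  # optimal interpolation matrix X X_S^+
```

Here `W` and `W_opt` coincide up to rounding, and the skeleton rows of `W` form the identity, matching the permuted block form above.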
However, for the blockwise algorithm in with a reasonably large block size $b$, $\mathbf{L}_1$ tends to be ill-conditioned in practice due to numerical error, where truncated SVD is usually necessary for numerical stability, as elaborated in . **Remark 7** (Evaluation of $\mathbf{L}_2\mathbf{L}_1^{-1}$ for (R)BRP). *In , $\widehat{\mathbf{L}}^{(t)}\left(\boldsymbol{\pi}_a,:\right) \gets \mathbf{X}\left(\boldsymbol{\pi}_a,:\right) \mathbf{Q}_\mathbf{V}$ leads to a $b$-block lower triangular (instead of lower triangular) $\mathbf{L}_1$, where $\mathbf{L}_2 \mathbf{L}_1^{-1}$ can be computed via QR or (truncated) SVD of $\mathbf{L}_1$ in $O\left(nk^2\right)$ time. Specifically, when $\mathbf{L}_1$ is ill-conditioned, we evaluate $\mathbf{L}_1^{-1}$ via truncated SVD with small singular value clipping for numerical stability, which takes $O\left(nk^2 + k^3\right)$ time[^13].* *Alternatively, one can manually triangularize $\mathbf{L}_1$ via an additional QR when computing $\widehat{\mathbf{L}}^{(t)}\left(\boldsymbol{\pi}_a,:\right)$ in each block: $$\begin{aligned} \label{eq:block_cholid_triangularization} \begin{split} &\sim, \widehat{\mathbf{L}}^{(t)}\left(\boldsymbol{\pi}_a,:\right)^\top \gets \mathtt{qr}\left(\left(\mathbf{X}\left(\boldsymbol{\pi}_a,:\right) \mathbf{Q}_\mathbf{V}\right)^\top, \text{``econ''}\right), \end{split} \end{aligned}$$ with an additional cost of $O\left(nkb\right)$ in total. When such lower triangularized $\mathbf{L}_1$ is well-conditioned, $\mathbf{L}_2 \mathbf{L}_1^{-1}$ can be evaluated efficiently and stably via backward substitution. 
However, with a reasonably large block size $b$ for the sake of parallelizability in practice, the resulting triangular matrix $\mathbf{L}_1$ tends to be ill-conditioned or even numerically singular due to numerical error, in which case truncated SVD is nevertheless necessary for the stable construction of $\mathbf{L}_2 \mathbf{L}_1^{-1}$.*

## Inexact-ID-revealing Algorithms

In contrast to , $\mathbf{L}$ from the sketchy pivoting algorithms in is insufficient for computing the optimal interpolation matrix $\mathbf{X}\mathbf{X}_S^\dagger$ exactly, intuitively due to the loss of information in sketching (*i.e.*, randomized dimension reduction).

**Proposition 6**. *The sketchy pivoting algorithms () are inexact-ID-revealing.*

follows directly from the observation that applying on $\mathbf{L}$ from (*i.e.*, $\mathbf{W}= \mathbf{L}\mathbf{L}_1^{-1}$) solves a sketched version of : $$\begin{aligned} \label{eq:sketched_least_square} \min_{\mathbf{W}\in \mathbb{R}^{n \times k}} \left\|\mathbf{X}\widehat{\boldsymbol{\Omega}}- \mathbf{W}\mathbf{X}_S\widehat{\boldsymbol{\Omega}}\right\|_F^2 \quad\Rightarrow\quad \mathbf{W}= \mathbf{L}\mathbf{L}_1^{-1} = \mathbf{X}\widehat{\boldsymbol{\Omega}}\left(\mathbf{X}_S \widehat{\boldsymbol{\Omega}}\right)^\dagger,\end{aligned}$$ where $\widehat{\boldsymbol{\Omega}}= \boldsymbol{\Omega}\left(:,1:k\right) \in \mathbb{R}^{d \times k}$ consists of the first $k$ columns in the randomized linear embedding $\boldsymbol{\Omega}\in \mathbb{R}^{d \times l}$ drawn in . Unfortunately, such sketched least-squares estimation [@woodruff2014sketching; @raskutti2016statistical; @dobriban2019asymptotics] is known to be suboptimal as long as $l < d$ [@pilanci2016iterative].
Specifically for the interpolation matrix construction, the sketched estimation with a sample size as small as $k$ (*i.e.*, without oversampling) in leads to a large interpolation error $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) \gg \mathcal{E}_{\mathbf{X}}\left(S\right)$, as numerically illustrated in (*e.g.*, comparing $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right)$ as SkCPQR/SkLUPP-ID in and $\mathcal{E}_{\mathbf{X}}\left(S\right)$ as SkCPQR/SkLUPP in (right)).

*Input:* (a) $\boldsymbol{\pi}\in \mathfrak{S}_n$ ($S = \boldsymbol{\pi}\left(1:k\right)$); (b) sample size $l \ge k$ (or pre-computed $\mathbf{Y}\in \mathbb{R}^{n \times l}$). *Output:* (a) $\mathbf{W}\in \mathbb{R}^{n \times k}$.

1. (Draw a randomized linear embedding $\boldsymbol{\Omega}\in \mathbb{R}^{d \times l}$.)
2. ($\mathbf{Y}\gets \mathbf{X}\boldsymbol{\Omega}\in \mathbb{R}^{n \times l}$.)
3. $\mathbf{Q}, \mathbf{R}\gets \mathtt{qr}\left(\mathbf{Y}\left(\boldsymbol{\pi}\left(1:k\right),:\right)^\top, \text{``econ''}\right)$.
4. $\mathbf{W}\gets \textbf{0}_{n \times k}$, $\mathbf{W}\left(\boldsymbol{\pi}\left(1:k\right),:\right) = \mathbf{I}_k$.
5. $\mathbf{W}\left(\boldsymbol{\pi}\left(k+1:n\right),:\right) \gets \left(\mathbf{Y}\left(\boldsymbol{\pi}\left(k+1:n\right),:\right) \mathbf{Q}\right) \mathbf{R}^{- \top}$.

As an intuitive but effective remedy, the oversampled sketchy ID (OSID) in generalizes by increasing the sample size beyond $k$ (*i.e.*, oversampling), which is shown to considerably alleviate such suboptimality of in the interpolation error $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right)$ (*cf.*, SkCPQR/SkLUPP-ID versus -OSID in ).
Concretely, with a moderate multiplicative oversampling $l = O(k)$ (*e.g.*, $l = 2k$ in is usually sufficient in practice), can be expressed as $$\begin{aligned} \label{eq:fsid} \text{OSID}: \qquad \begin{split} &\underset{k \times l}{\mathbf{Y}_S} = \mathbf{Y}\left(S,:\right), \qquad \underset{l \times k}{\mathbf{Y}_S^\top} = \underset{l \times k}{\mathbf{Q}}\ \underset{k \times k}{\mathbf{R}} \\ &\mathbf{W}= \mathbf{Y}\mathbf{Y}_S^\dagger = \left(\mathbf{Y}\mathbf{Q}\right) \mathbf{R}^{-\top} = \mathbf{X}\left(\boldsymbol{\Omega}\boldsymbol{\Omega}^\top\right) \mathbf{X}_S^\dagger \end{split}\end{aligned}$$ where $\mathbb{E}_{\boldsymbol{\Omega}}\left[\mathbf{W}\right] = \mathbf{X}\mathbf{X}_S^\dagger$ is an unbiased estimate for the optimal interpolation matrix when the embedding $\boldsymbol{\Omega}$ is isotropic, *i.e.*, $\mathbb{E}_{\boldsymbol{\Omega}}\left[\boldsymbol{\Omega}\boldsymbol{\Omega}^\top\right] = \mathbf{I}_d$. Leveraging the existing theory of randomized linear embeddings, [@woodruff2014sketching Theorems 9 and 23] implies that with , the gap between the interpolation and skeletonization errors converges as $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right)-\mathcal{E}_{\mathbf{X}}\left(S\right) = O\left({k^2}/{l}\right)$ when $\boldsymbol{\Omega}$ is a sparse embedding and $l \gg k^2$, whereas such convergence can be further improved to $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right)-\mathcal{E}_{\mathbf{X}}\left(S\right) = O\left({k}/{l}\right)$ with Gaussian embeddings (*e.g.*, by [@woodruff2014sketching Theorem 6]). Asymptotically, takes $O(nlk + nk^2 + lk^2) = O(nk^2 + k^3)$ time. Compared to without oversampling, the dominant cost $O(nk^2)$ increases proportionally to the oversampling factor $l/k$. Nevertheless, such additional cost is affordable in practice (*cf.* (right)) thanks to the parallelizability of matrix-matrix multiplications [@goto2008high].
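A minimal sketch of the OSID construction above, using a Gaussian embedding with $l = 2k$; the input below is taken to be exactly rank $k$ so that the recovery is exact and easy to check (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, l = 100, 40, 10, 20          # multiplicative oversampling l = 2k
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))  # rank-k input
S = rng.choice(n, size=k, replace=False)

Omega = rng.standard_normal((d, l))   # isotropic Gaussian embedding
Y = X @ Omega                         # sketch, n x l
Q, R = np.linalg.qr(Y[S].T)           # Y_S^T = Q R (reduced QR)
W = (Y @ Q) @ np.linalg.inv(R).T      # W = Y Y_S^+ = (Y Q) R^{-T}
```

Since $\mathbf{Y}_S$ has full row rank, the skeleton rows of `W` form the identity, and for a rank-$k$ input `W @ X[S]` reconstructs `X` exactly up to rounding.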
# Experiments {#sec:experiments}

## Data Matrices {#subsec:data_matrices}

In the experiments, we compare the accuracy and efficiency of various ID algorithms on several types of data matrices, including the matrix forms of natural image datasets and synthetic random matrices with varied spectra, outlined as follows.

1. We draw a synthetic random data matrix $\mathbf{X}\in \mathbb{R}^{2000 \times 500}$ from the adversarial GMM described in with $k=100$ cluster centers, denoted as `GMM`.

2. Recall that the MNIST training set [@lecun1998mnist] consists of $60,000$ images of hand-written digits from $0$ to $9$. We denote `MNIST` as a data matrix consisting of $n = 1000$ random images sampled uniformly from the MNIST training set, where each row contains a flattened and *normalized* $28 \times 28$ image such that $d = 784$. Nonzero entries make up approximately $20\%$ of the matrix.

3. The CIFAR-10 training set [@krizhevsky2009learning] consists of $60,000$ colored images of size $32 \times 32 \times 3$. We denote `CIFAR-10` as a data matrix consisting of $n = 1000$ random images sampled uniformly from the CIFAR-10 training set, where each row contains a flattened and *normalized* image such that $d = 3072$.

4. Let `Gaussian-exp` be a random dense data matrix of size $1000 \times 1000$ with exponential spectral decay. We construct `Gaussian-exp` from its SVD $\mathbf{X}= \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$, where $\mathbf{U}, \mathbf{V}\in \mathbb{R}^{1000 \times 1000}$ are random orthogonal matrices drawn from the Haar measure, and $\boldsymbol{\Sigma}= \mathop{\mathrm{diag}}\left(\sigma_1,\cdots,\sigma_{1000}\right)$ where $\sigma_i = 1$ for all $i \le 100$ and $\sigma_i = 0.8^{i-100}$ for all $i > 100$.

5.
Let `SNN` be a random sparse non-negative (SNN) matrix [@sorensen2016deim] of size $1000 \times 1000$ such that $$\label{eq:snn-def} \mathbf{X}= \sum_{i=1}^{100} \frac{10}{i} \mathbf{u}_i \mathbf{v}_i^T + \sum_{i=101}^{1000} \frac{1}{i} \mathbf{u}_i \mathbf{v}_i^T$$ where $\mathbf{U}= \left[\mathbf{u}_1,\cdots,\mathbf{u}_{1000}\right]$ and $\mathbf{V}= \left[\mathbf{v}_1,\cdots,\mathbf{v}_{1000}\right]$ are $1000 \times 1000$ random sparse matrices with non-negative entries and sparsity $0.1$.

## Accuracy and Efficiency of ID {#subsec:id_accuracy_efficiency}

In , we compare the empirical accuracy (left) and efficiency (right) of different ID algorithms on the data matrices described in , with a focus on those demonstrating robust skeleton complexities on the adversarial input (*i.e.*, algorithms in (right)). Taking an integrated view of ID including skeleton selection and interpolation matrix construction, we consider the following relative ID error[^14] and runtime measurements:

1. For error-revealing algorithms (*i.e.*, except for sketchy pivoting and sampling)[^15], we plot the residual errors (denoted as **residual**) from their on-the-fly skeletonization error estimates and the runtimes for skeleton selection (*e.g.*, $\left\|\mathbf{d}^{(t)}\right\|_1$ in ) only.

2. For exact-ID-revealing algorithms (*i.e.*, except for sketchy pivoting and sampling), we plot the runtimes and interpolation errors $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right)$ with $\mathbf{W}$ from (**ID**), which are equal to the corresponding skeletonization errors $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) = \mathcal{E}_{\mathbf{X}}\left(S\right)$.

3.
For inexact-ID-revealing algorithms (*i.e.*, sketchy pivoting), we plot the runtimes and interpolation errors $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right)$ with $\mathbf{W}$ from both (**ID**) and (**OSID**) with small multiplicative oversampling $l = 2k$, where in both cases $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) > \mathcal{E}_{\mathbf{X}}\left(S\right)$, and the latter tends to be more expensive.

4. For non-ID-revealing algorithms (*i.e.*, sampling), without explicitly specifying in the legends, we plot the runtimes and interpolation errors $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) = \mathcal{E}_{\mathbf{X}}\left(S\right)$ with the optimal $\mathbf{W}$ from solving the exact least-squares problem in .

The legends in are formatted in terms of "skeleton selection algorithm"-"ID error". The abbreviations of the different skeleton selection algorithms are summarized in , whereas the ID error measurements are described as above.

**Remark 8** (Choice of block size). *The choice of block size $b$ controls the trade-off between parallelizability and the error-revealing property. Precisely, an unsuitably small $b$ tends to fail to exploit the efficiency of matrix-matrix multiplications, whereas a larger $b$ leads to greater overestimation of the target rank. In the experiments, we use $b=30$ or $40$ to exploit the parallelizability of matrix-matrix multiplications without considerably compromising the error-revealing property.*

![ Relative interpolation error and runtime of ID algorithms with robust skeleton complexities on the `GMM` adversarial input (). Recall from that sampling methods, especially squared-norm sampling, suffer from significantly higher skeleton complexities than sequential/robust blockwise random pivoting and sketchy pivoting on `GMM`. For demonstration, we omit squared-norm sampling from the plot.
](figs/id__gmm__b30__os2.0__rp.png){#fig:id_gmm width="\\textwidth"}

For the `GMM` adversarial input in , it is worth pointing out that the sharp transition of RBGP at $\left\vert S\right\vert \approx 100$ occurs because `GMM` consists of $100$ clusters, each with distinct squared norms. Recall that blockwise greedy pivoting picks points with the maximum norms without randomness. When $\left\vert S\right\vert \le 100$, each block drains points from the clusters with the maximum residual norms first, most of which are redundant and are later excluded by the robust blockwise filtering. This leads to the inefficiency of RBGP when $\left\vert S\right\vert \le 100$. Once $\left\vert S\right\vert > 100$, $S$ contains at least one skeleton from each of the $100$ clusters; the residuals are then approximately i.i.d. Gaussian noise, so RBGP works similarly to RBRP. Such inefficiency of RBGP in comparison to RBRP again illustrates the superiority of random pivoting over greedy pivoting, *i.e.*, the merit of randomness.

![Relative interpolation error and runtime of ID algorithms on `MNIST`.](figs/id__MNIST__b30__os2.0__rp.png){#fig:id_mnist width="\\textwidth"}

![Relative interpolation error and runtime of ID algorithms on `CIFAR-10`.](figs/id__cifar10__b40__os2.0__rp.png){#fig:id_cifar10 width="\\textwidth"}

From experiments on the (non-adversarial) natural and synthetic datasets in , we observe the following (by algorithm):

- The **robust blockwise random pivoting (RBRP)** enjoys (one of) the best skeleton complexities, with the skeletonization error estimates aligning perfectly with the interpolation errors (left), demonstrating the error-revealing and exact-ID-revealing abilities of RBRP. The skeleton selection process of RBRP tends to take slightly longer than that of sketchy pivoting (SkLUPP/SkCPQR) with ID (which is not error-revealing and suffers from much larger interpolation errors than RBRP).
However, RBRP is generally faster than sketchy pivoting (SkLUPP/SkCPQR) with small multiplicative oversampling (OSID), while enjoying lower interpolation error. Overall, RBRP is shown to provide the best combination of accuracy and efficiency when the explicit construction of the ID (in contrast to the skeleton subset only) is required. - The sketchy pivoting algorithms (**SkLUPP/SkCPQR**) also enjoy (one of) the best skeleton complexities, despite lacking the error-revealing ability and being only inexact-ID-revealing (*i.e.*, suffering from large interpolation errors). That is, for skeleton selection only (without asking for the interpolation matrix), sketchy pivoting provides the most efficient selection of close-to-the-best skeleton subsets in practice (*cf.*). Moreover, our experiments verify the conclusion in [@dong2021simpler] that SkLUPP tends to be more efficient than SkCPQR in selecting skeletons without compromising the skeletonization error. - The inherently sequential algorithms, including **SRP** and **CPQR**, are highly competitive in terms of accuracy while being considerably slower than the parallelizable ones. - With appropriately normalized data, the $O(nd)$-time squared-norm sampling (**SqNorm**) [^16] also provides highly competitive skeleton complexities, although constructing the interpolation matrix by solving explicitly can slow down the overall ID algorithm significantly, to the extent that the runtimes are similar to those of RBRP. However, we recall that sampling methods lack error-revealing and ID-revealing abilities. Moreover, SqNorm is vulnerable to adversarial inputs like , as shown in . 
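As a minimal sketch (our illustration, not the paper's implementation) of the $O(nd)$-time squared-norm sampling just discussed, together with the relative interpolation error under the optimal least-squares $\mathbf{W}$, assuming `numpy`:

```python
import numpy as np

def sqnorm_sample(X, k, seed=None):
    """Sample k skeleton row indices i.i.d. with replacement,
    with probability proportional to squared row norms (O(nd) time)."""
    rng = np.random.default_rng(seed)
    p = (X * X).sum(axis=1)
    return rng.choice(X.shape[0], size=k, replace=True, p=p / p.sum())

def interpolation_error(X, S):
    """Relative Frobenius interpolation error ||X - W X_S||_F / ||X||_F
    with the optimal W solving the least-squares problem W X_S ~ X."""
    XS = X[S]                                        # k x d skeleton rows
    Wt, *_ = np.linalg.lstsq(XS.T, X.T, rcond=None)  # W^T, shape k x n
    return np.linalg.norm(X - Wt.T @ XS) / np.linalg.norm(X)
```

On an exactly low-rank matrix, any sampled skeleton whose rows span the row space drives the error to machine precision; on clustered adversarial inputs like `GMM`, this non-adaptive scheme can require far more skeletons, as observed above.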
![Relative interpolation error and runtime of ID algorithms on `Gaussian-exp`.](figs/id__Gaussian-exp__b40__os2.0__rp.png){#fig:id_Gaussian_exp width="\\textwidth"} ![Relative interpolation error and runtime of ID algorithms on `SNN`.](figs/id__SNN_m1000n1000r100a10__b30__os2.0__rp.png){#fig:snn_a10 width="\\textwidth"} # Discussions and Future Directions In this work, we focus on fast and accurate ID algorithms from five perspectives that measure the empirical performance systematically: skeleton complexity, asymptotic complexity, parallelizability, error-revealing property, and ID-revealing property. With a careful exploration of various existing ID algorithms through the lens of adaptiveness and/or randomness, we reemphasize the effectiveness of combining adaptiveness and randomness in solving ID problems, while unveiling a critical missing piece in the family of existing algorithms that masters all five aforementioned perspectives. To close the gap, we propose *robust blockwise random pivoting (RBRP)*---a parallelizable, error-revealing, and exact-ID-revealing algorithm that enjoys among-the-best skeleton and asymptotic complexities in practice compared to the existing algorithms and demonstrates promising robustness to adversarial inputs. With numerical experiments, we illustrate the empirical accuracy, efficiency, and robustness of RBRP on a broad spectrum of natural and synthetic datasets. As an appealing future direction, rigorous proofs of and [Conjecture 4](#conj:rbas_skeleton_complexity){reference-type="ref" reference="conj:rbas_skeleton_complexity"} could bring insights beyond formalizing the empirical successes of . Specifically concerning , greedy algorithms like pivoted LU and QR commonly suffer from scarce adversarial inputs (*e.g.*, Kahan-type matrices [@kahan1966numerical]) that severely compromise the worst-case performance. 
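To make the Kahan-type adversarial inputs concrete, here is a standard construction of such a matrix (our illustration, not taken from the paper; `numpy` assumed, and $\theta = 1.2$ is just a conventional parameter choice):

```python
import numpy as np

def kahan(n, theta=1.2):
    """Kahan-type upper-triangular matrix: diag(1, s, ..., s^(n-1))
    times (I - c * strictly_upper_ones), with s = sin(theta), c = cos(theta).
    A classic adversarial input on which greedily pivoted QR severely
    underestimates the numerical rank."""
    s, c = np.sin(theta), np.cos(theta)
    U = np.eye(n) - c * np.triu(np.ones((n, n)), 1)
    return np.diag(s ** np.arange(n)) @ U
```

Standard column pivoting leaves the columns of this matrix in their original order, so the trailing diagonal entries of the pivoted factor badly overestimate the smallest singular values.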
For ID, despite the competitive empirical skeleton complexity of greedy pivoting (CPQR), the worst-case skeleton complexity guarantee of CPQR scales proportionally to the problem size $n$, in contrast to the rank $r \ll n$ for other methods involving randomness (*cf.*). Probabilistically circumventing the adversarial inputs via random perturbation, randomness (*e.g.*, random matrices [@trefethen1990average] and additive Gaussian noise in smoothed analysis [@sankar2006smoothed; @spielman2009smoothed]) is widely leveraged to quantify the scarceness of these adversarial inputs and explain the effectiveness of greedy methods in practice. Analogously for randomization via sketching, the proof of would provide theoretical justification for the robustness of greedy pivoting coupled with sketching by establishing an equivalence with random pivoting. # Acknowledgments {#acknowledgments .unnumbered} YD was supported in part by the Office of Naval Research N00014-18-1-2354, NSF DMS-1952735, DOE ASCR DE-SC0022251, UT Austin Graduate School Summer Fellowship, and NYU Courant Instructorship. CC was supported in part by startup funds of North Carolina State University. KP was supported in part by the Peter O'Donnell Jr. Postdoctoral Fellowship at the Oden Institute. The authors wish to thank Ethan Epperly, Kevin Miller, Christopher Musco, Rachel Ward, and Robert Webber for enlightening discussions. [^1]: Courant Institute, New York University (`yd1319@nyu.edu`). [^2]: Department of Mathematics, North Carolina State University (`cchen49@ncsu.edu`). [^3]: Department of Mathematics & Oden Institute, University of Texas at Austin (`pgm@oden.utexas.edu`). [^4]: Oden Institute, University of Texas at Austin (`katherine.pearce@austin.utexas.edu`). 
[^5]: It is worth clarifying that although directly provides $\mathbf{W}= \mathbf{X}\mathbf{X}_S^\dagger$ as the minimizer of $\min_{\mathbf{W}}\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) = \mathcal{E}_{\mathbf{X}}\left(S\right)$, the explicit computation of $\mathbf{X}\mathbf{X}_S^\dagger$ takes $O\left(ndk + dk^2 + nk^2\right)$ time (*e.g.*, via ), where the dominant cost $O(ndk)$ is undesired. Therefore, whenever available, information from the skeleton selection process is usually leveraged to evaluate/approximate $\mathbf{W}$ efficiently in $o(ndk)$ time (as elaborated in ), which may lead to the gap $\mathcal{E}_{\mathbf{X}}\left(\mathbf{W} \mid S\right) - \mathcal{E}_{\mathbf{X}}\left(S\right) > 0$. [^6]: It is worth mentioning that practical adaptive algorithms with "approximately" error-revealing properties are currently in active development. [^7]: For illustration, we consider the exact arithmetic here and express the QR update as a Gram-Schmidt process, whereas, in practice, QR updates are usually conducted via Householder reflection [@golub2013matrix Section 5.1.2] for numerical stability. [^8]: *Without strictly specifying, we generally assume that $\epsilon> 0$ is reasonably small such that the $(r,\epsilon)$-ID is a non-trivial approximation, *i.e.*, $\epsilon< \min\left\{1/\eta_r, 2^r\right\}$ such that both logarithmic terms are positive. Otherwise, the skeleton complexity guarantee still holds by replacing any negative logarithmic terms with zeros.* [^9]: For applications like low-rank approximations and skeleton selection, a small constant oversampling like $k+5 \le l \le k+10$ is usually sufficient in practice. [^10]: It is worth highlighting that although rows of $\mathbf{Y}$ consist of $\emph{i.i.d.}\xspace$ entries, entries in each column of $\mathbf{Y}$ are dependent. 
In particular, for all $j \in [l]$, $\mathrm{Cov}\left(\mathbf{Y}\left(:,j\right)\right) = \mathbf{X}\mathbf{D}^{-1} \mathbf{X}^\top$ where $\mathbf{D}= \mathop{\mathrm{diag}}\left(\left\|\mathbf{x}_1\right\|_2^2, \cdots, \left\|\mathbf{x}_n\right\|_2^2\right)$. [^11]: *[@oymak2018universality Model 2.1, Theorem 9.1] implies that when $P$ is symmetric with mean zero and variance bounded, a random matrix $\boldsymbol{\Omega}\in \mathbb{R}^{d \times l}$ ($l \le d$) with bounded $\emph{i.i.d.}\xspace$ entries from $P$ serves as a randomized linear embedding.* [^12]: It is worth mentioning that, in contrast to SRP, RBRP does not recompute the entire residual matrix after each skeleton selection. Instead, within each block, RBRP computes the residual matrix locally for $\mathbf{X}\left(S_t,:\right)$ only, while updating the entire residual matrix at the end of each block via matrix-matrix multiplication. [^13]: *In the common low-rank approximation scenario with $k \ll n$, the lower order term $k^3 \ll nk^2$, and therefore constructing the interpolation matrix via truncated SVD maintains the $O(nk^2)$ complexity asymptotically.* [^14]: All error measurements are normalized with $\left\|\mathbf{X}\right\|_F^2$ to relative errors. [^15]: In , we also omit the residual errors of the sequential error-revealing algorithms SRP and CPQR (which overlap with the corresponding interpolation errors) for conciseness of demonstration. [^16]: Although we follow and show results by sampling $k~\emph{i.i.d.}\xspace$ skeletons with *replacement*, in practice, we observe that sampling *without replacement* tends to provide better skeleton complexity, especially on the adversarial input . Intuitively, sampling without replacement can be viewed as a weaker version of adaptiveness where only the selected point itself is excluded from future selection.
arxiv_math
{ "id": "2309.16002", "title": "Robust Blockwise Random Pivoting: Fast and Accurate Adaptive\n Interpolative Decomposition", "authors": "Yijun Dong, Chao Chen, Per-Gunnar Martinsson, Katherine Pearce", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We introduce the concept of matching powers of monomial ideals. Let $I$ be a monomial ideal of $S=K[x_1,\dots,x_n]$, with $K$ a field. The $k$th matching power of $I$ is the monomial ideal $I^{[k]}$ generated by the products $u_1\cdots u_k$ where $u_1,\dots,u_k$ is a monomial regular sequence contained in $I$. This concept naturally generalizes that of squarefree powers of squarefree monomial ideals. We study depth and regularity functions of matching powers of monomial ideals and edge ideals of weighted oriented graphs. We show that the last nonvanishing power of a quadratic monomial ideal is always polymatroidal and thus has a linear resolution. When $I$ is a non-quadratic edge ideal of a weighted oriented forest, we characterize when $I^{[k]}$ has a linear resolution. address: - Nursel Erey, Gebze Technical University, Department of Mathematics, 41400 Gebze, Kocaeli, Turkey - Antonino Ficarra, Department of mathematics and computer sciences, physics and earth sciences, University of Messina, Viale Ferdinando Stagno d'Alcontres 31, 98166 Messina, Italy author: - Nursel Erey, Antonino Ficarra title: Matching powers of monomial ideals and edge ideals of weighted oriented graphs --- # Introduction {#introduction .unnumbered} Let $S=K[x_1,\dots,x_n]$ be the polynomial ring over a field $K$. Recall that the edge ideal of a finite simple graph $G$ with vertices $x_1,\dots ,x_n$ is generated by all the monomials $x_ix_j$ such that $\{x_i,x_j\}$ is an edge of $G$. The study of minimal free resolutions of edge ideals and their powers produced a great deal of interaction between combinatorics and commutative algebra. One of the most natural problems in this regard is to understand when those ideals, or more generally monomial ideals, have linear resolutions. Although edge ideals with linear resolutions are combinatorially characterized by a famous result of Fröberg [@F], it is unknown in general when powers of edge ideals have linear resolutions. 
Herzog, Hibi and Zheng [@HHZ2] showed that if an edge ideal has a linear resolution, then so does every power of it. It is their result that served as a starting point for the close examination of linear resolutions of powers of edge ideals by many researchers, resulting in several interesting results and conjectures. For any squarefree monomial ideal $I$ of $S$, the $k$th squarefree power of $I$, denoted by $I^{[k]}$, is the monomial ideal generated by all squarefree monomials in $I^k$. Recently, squarefree powers of edge ideals were studied in [@BHZN18; @CFL; @EH2021; @EHHM2022a; @EHHM2022b; @FHH23; @SASF2022; @SASF2023]. Determining the linearity of minimal free resolutions of squarefree powers, or computing their invariants, is as challenging as for ordinary powers, although squarefree and ordinary powers behave quite differently. In the case that $I$ is considered as the edge ideal of a hypergraph $\mathcal{H}$, the minimal monomial generators of $I^{[k]}$ correspond to matchings of $\mathcal{H}$ of size $k$, which makes the combinatorial aspect of squarefree powers interesting as well. This paper aims at presenting a wider framework for the study of squarefree powers by introducing a more general concept which we call matching powers. If $I$ is a monomial ideal of $S$, then the *$k^{\text{th}}$ matching power* $I^{[k]}$ of $I$ is generated by the products $u_1\cdots u_k$ where $u_1,\dots,u_k$ is a monomial regular sequence contained in $I$, or equivalently, $u_1,\dots,u_k$ is a sequence of monomials with pairwise disjoint support. Indeed, if $I$ is a squarefree monomial ideal, then the $k$th squarefree power of $I$ is the same as the $k$th matching power of $I$. With this new concept, since we are no longer restricted to squarefree monomial ideals, we can consider not only edge ideals of simple graphs but also edge ideals of weighted oriented graphs. We now discuss how the paper is organized. 
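Before turning to the outline, the definition of the matching power can be made concrete computationally: one multiplies $k$ minimal generators with pairwise disjoint supports. The following is a small illustrative sketch (ours, not the authors'), with monomials encoded as dictionaries mapping a variable index to its exponent; it returns a (possibly non-minimal) generating set of $I^{[k]}$:

```python
from itertools import combinations

def matching_power(gens, k):
    """Generators of the k-th matching power I^[k], obtained by
    multiplying k minimal generators with pairwise disjoint supports.
    Each monomial is a dict {variable index: exponent}; each product
    is returned as a hashable frozenset of (variable, exponent) pairs."""
    out = set()
    for combo in combinations(gens, k):
        supports = [set(m) for m in combo]
        # pairwise disjoint supports <=> no variable repeats across factors
        if sum(map(len, supports)) == len(set().union(*supports)):
            prod = {}
            for m in combo:
                prod.update(m)  # supports are disjoint, so no clashes
            out.add(frozenset(prod.items()))
    return out
```

For instance, for $I=(x_1^2,\,x_2^2,\,x_3^2,\,x_3x_4,\,x_5^5)$ this yields nine generators for $I^{[2]}$ and two for $I^{[4]}$; for a squarefree quadratic ideal (an edge ideal), the generators of $I^{[k]}$ correspond to $k$-matchings of the graph.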
In Section [1](#sec:1-EreyFic){reference-type="ref" reference="sec:1-EreyFic"}, we summarize basic facts of the theory of matching powers. We define the normalized depth function $g_I$ of a monomial ideal $I$ in Definition [Definition 5](#def:gI-MononomialCase){reference-type="ref" reference="def:gI-MononomialCase"}. This function generalizes the normalized depth function introduced in [@EHHM2022b] for squarefree monomial ideals. It was conjectured in [@EHHM2022b] that $g_I$ is a non-increasing function for any squarefree monomial ideal. We show in Proposition [Proposition 8](#Prop:Conj-Equivalent){reference-type="ref" reference="Prop:Conj-Equivalent"} that this conjecture can be equivalently stated for monomial ideals. Hence, the normalized depth functions of squarefree monomial ideals comprise all normalized depth functions. In Theorem [Theorem 10](#Thm:I^[nu(I)]Polymatroidal){reference-type="ref" reference="Thm:I^[nu(I)]Polymatroidal"} we show that if $I$ is a quadratic monomial ideal, then the highest nonvanishing matching power of $I$ (namely $I^{[\nu(I)]}$, where $\nu(I)$ is the monomial grade of $I$) is polymatroidal. Since polymatroidal ideals have linear quotients, Theorem [Theorem 10](#Thm:I^[nu(I)]Polymatroidal){reference-type="ref" reference="Thm:I^[nu(I)]Polymatroidal"} provides a stronger result than [@BHZN18 Theorem 4.1]. In Section [2](#sec:2-EreyFic){reference-type="ref" reference="sec:2-EreyFic"}, we turn our attention to edge ideals of weighted oriented graphs. We make comparisons between homological invariants of matching powers $I(\mathcal{D})^{[k]}$ and $I(G)^{[k]}$, where $G$ is the underlying graph of a weighted oriented graph $\mathcal{D}$. We provide a lower bound for the regularity of $I(\mathcal{D})^{[k]}$ when $k$ does not exceed the induced matching number of the underlying graph of $\mathcal{D}$. In Section [3](#sec:3-EreyFic){reference-type="ref" reference="sec:3-EreyFic"}, we are interested in linearly related matching powers. 
The main result of this section is Theorem [Theorem 25](#thm: linearly related only in matching power){reference-type="ref" reference="thm: linearly related only in matching power"} which characterizes when $I(\mathcal{D})^{[k]}$ has a linear resolution or is linearly related, provided that the underlying graph $G$ of $\mathcal{D}$ has the property that every subgraph of $G$ has at most one perfect matching and $I(\mathcal{D})\neq I(G)$. In particular, this result combined with [@EH2021 Theorem 41] gives a complete classification of the weighted oriented forests $\mathcal{D}$ such that $I(\mathcal{D})^{[k]}$ has a linear resolution. The last section is devoted to demonstrating how one can recursively construct the weighted oriented forests described in Theorem [Theorem 25](#thm: linearly related only in matching power){reference-type="ref" reference="thm: linearly related only in matching power"}. # Matching Powers {#sec:1-EreyFic} Let $S=K[x_1,\dots,x_n]$ be the standard graded polynomial ring with coefficients in a field $K$. Recall that $f_1,\dots,f_m$ is a *regular sequence* (on $S$) if $f_i$ is a nonzerodivisor on $S/(f_1,\dots,f_{i-1})$ for $i=1,\dots,m$. Let $I\subset S$ be a monomial ideal. We denote by $G(I)$ its unique minimal monomial generating set, and by $M(I)$ the set of monomials belonging to $I$. The *$k$th matching power* of $I\subset S$ is the monomial ideal defined as $$I^{[k]}\ =\ (f_1\cdots f_k\ :\ f_i\in M(I), f_1,\dots,f_k\ \textit{is a regular sequence}).$$ If $u$ is a monomial, we set $\supp(u)=\{i:x_i\ \textup{divides}\ u\}$. It is easy to check when a sequence of monomials is a (monomial) regular sequence. Indeed, **Lemma 1**. *Let $v_1,\dots,v_r$ be monomials of $S$. Then $v_1,\dots,v_r$ is a regular sequence [(]{.upright}for any ordering[)]{.upright} if and only if $\supp(v_i)\cap\supp(v_j)=\emptyset$ for all $1\le i<j\le r$.* Let $I\subset S$ be a monomial ideal. 
The set $\supp(I)=\bigcup_{u\in G(I)}\supp(u)$ is called the *support* of $I$. We say that $I$ is *fully supported* if $\supp(I)=\{1,2,\dots,n\}$. From now on, we tacitly assume that all monomial ideals we consider are fully supported. We denote by $\nu(I)$ the *monomial grade* of $I$, that is, the maximal length of a monomial regular sequence contained in $I$. In the next proposition, we collect some basic facts about matching powers. **Proposition 2**. *Let $I\subset S$ be a monomial ideal. Then, the following hold.* 1. *$I^{[k]}=(u_1\cdots u_k\ :\ u_i\in G(I),\supp(u_i)\cap\supp(u_j)=\emptyset, 1\le i<j\le k)$.* 2. *$I^{[k]}\ne0$ if and only if $1\le k\le\nu(I)$.* 3. *$I^{[k]}$ is a monomial ideal.* *Proof.* Statements (b) and (c) follow from statement (a). To prove (a), we set $J=(u_1\cdots u_k:u_i\in G(I),\supp(u_i)\cap\supp(u_j)=\emptyset, 1\le i<j\le k)$ and show that $J=I^{[k]}$. By Lemma [Lemma 1](#Lemma:regSeqMon){reference-type="ref" reference="Lemma:regSeqMon"}, it is clear that $J\subseteq I^{[k]}$. Conversely, let $v_1,\dots,v_k$ be a monomial regular sequence contained in $I$. Then $v_1\cdots v_k\in I^{[k]}$ and by Lemma [Lemma 1](#Lemma:regSeqMon){reference-type="ref" reference="Lemma:regSeqMon"}, we have $\supp(v_i)\cap\supp(v_j)=\emptyset$ for $i\ne j$. Since $I$ is a monomial ideal, $v_i=f_iu_i$ where $f_i$ is a monomial of $S$ and $u_i\in G(I)$, for all $i$. Hence, $\supp(u_i)\cap\supp(u_j)=\emptyset$ for all $i\ne j$, as well. Thus $u_1\cdots u_k\in J$ divides $v_1\cdots v_k$, and this implies that $I^{[k]}\subseteq J$. ◻ **Example 3**. 1. *Suppose that $I$ is a squarefree monomial ideal. Then, a product $u_1\cdots u_k$ with $u_i\in G(I)$ is in $I^{[k]}$ if and only if $u_1\cdots u_k$ is squarefree. Thus, in this case $I^{[k]}$ is just the usual *$k$th squarefree power* of $I$ introduced in [@BHZN18].* 2. *Let $I$ be a complete intersection monomial ideal generated by $u_1,\dots,u_m$. 
Then $I^{[k]}=(u_{i_1}\cdots u_{i_k}:1\le i_1<\dots<i_k\le m)$ and $\nu(I)=m$.* 3. *Let $I=(x_1^2,\,x_2^2,\,x_3^2,\,x_3x_4,\,x_5^5)$. Then $\nu(I)=4$ and $$\begin{aligned} \phantom{aaaaa}I^{[2]}\ &=\ (x_1^2x_2^2,\,x_1^2x_3^2,\,x_1^2x_3x_4,\,x_1^2x_5^5,\,x_2^2x_3^2,\,x_2^2x_3x_4,\,x_2^2x_5^5,\,x_3^2x_5^5,\,x_3x_4x_5^5)\\ I^{[3]}\ &=\ (x_1^2x_2^2x_3^2,\,x_1^2x_2^2x_3x_4,\,x_1^2x_2^2x_5^5,\,x_1^2x_3^2x_5^5,\,x_1^2x_3x_4x_5^5,\,x_2^2x_3^2x_5^5,\,x_2^2x_3x_4x_5^5),\\ I^{[4]}\ &=\ (x_1^2x_2^2x_3^2x_5^5,\,x_1^2x_2^2x_3x_4x_5^5). \end{aligned}$$* ## Normalized depth function {#normalized-depth-function .unnumbered} For a monomial $u\in S$, $u\ne1$, the *$x_i$-degree* of $u$ is defined as the integer $$\deg_{x_i}(u)\ =\ \max\{j\ge0:x_i^j\ \textup{divides}\ u\}.$$ Let $I\subset S$ be a monomial ideal. The *initial degree* of $I$, denoted by $\textup{indeg}(I)$, is the smallest degree of a monomial belonging to $I$. Following [@F2], we define the *bounding multidegree* of $I$ to be the vector $${\bf deg}(I)\ =\ (\deg_{x_1}(I),\dots,\deg_{x_n}(I)),$$ with $$\deg_{x_i}(I)\ =\ \max_{u\in G(I)}\deg_{x_i}(u),\ \ \textup{for all}\ \ \ 1\le i\le n.$$ We provide a lower bound for the depth of $S/I^{[k]}$ in terms of the initial degree of $I^{[k]}$ and the bounding multidegree of $I$ as follows: **Theorem 4**. *Let $I\subset S$ be a monomial ideal. Then, for all $1\le k\le\nu(I)$, we have $$\depth(S/I^{[k]})\ \ge\ \textup{indeg}(I^{[k]})-1+(n-|{\bf deg}(I)|).$$* *Proof.* We divide the proof into three steps.\ **(Step 1).** Let $J\subset S$ be a monomial ideal. We claim that $$\textup{pd}(J)\le|{\bf deg}(J)|-\textup{indeg}(J).$$ To prove the assertion, we use the Taylor resolution. Let $\beta_{i,j}(J)$ be a nonzero graded Betti number with $i=\textup{pd}(J)$. Then $j\ge\textup{indeg}(J)+\textup{pd}(J)$. It follows from the Taylor resolution that the highest shift in the minimal resolution of $J$ is at most $|{\bf deg}(J)|$, see [@F2 Theorem 1.3]. Thus, $|{\bf deg}(J)|\ge j$. 
Altogether, we obtain $|{\bf deg}(J)|\ge j\ge\textup{indeg}(J)+\textup{pd}(J)$ and the assertion follows.\ **(Step 2).** We claim that $|{\bf deg}(I^{[k]})|\le|{\bf deg}(I)|$ for all $1\le k\le\nu(I)$. Indeed, we even show that $\deg_{x_\ell}(I^{[k]})\le\deg_{x_\ell}(I)$ for all $\ell$. A set of generators of $I^{[k]}$ is $$\Omega\ =\ \{u_1\cdots u_k\ :\ u_i\in G(I),\supp(u_i)\cap\supp(u_j)=\emptyset,1\le i<j\le k\}.$$ Thus, $G(I^{[k]})$ is a subset of $\Omega$. Hence, if $v\in G(I^{[k]})$, then $v=u_1\cdots u_k\in\Omega$. Let $x_\ell$ be a variable dividing $v$, then $x_\ell$ divides at most one monomial $u_i$, say $u_{i_{\ell}}$. Therefore, $\deg_{x_\ell}(v)\le\deg_{x_\ell}(u_{i_\ell})\le\deg_{x_\ell}(I)$ and the assertion follows.\ **(Step 3).** By Steps 1 and 2 we have $$\textup{pd}(S/I^{[k]})\ \le\ |{\bf deg}(I^{[k]})|-\textup{indeg}(I^{[k]})+1\ \le\ |{\bf deg}(I)|-\textup{indeg}(I^{[k]})+1.$$ The asserted inequality follows from the Auslander--Buchsbaum formula. ◻ As a consequence of Theorem [Theorem 4](#Thm:ineqDepthIMon){reference-type="ref" reference="Thm:ineqDepthIMon"}, we can give the next definition: **Definition 5**. *Let $I\subset S$ be a monomial ideal. For all $1\le k\le\nu(I)$, we set $$g_I(k)\ =\ \depth(S/I^{[k]})+|{\bf deg}(I)|-n-(\textup{indeg}(I^{[k]})-1),$$ and call $g_I$ the *normalized depth function* of $I$.* By Theorem [Theorem 4](#Thm:ineqDepthIMon){reference-type="ref" reference="Thm:ineqDepthIMon"} we have $g_I(k)\ge0$ for all $1\le k\le\nu(I)$. If $I\subset S$ is a squarefree monomial ideal, then ${\bf deg}(I)={\bf 1}=(1,\dots,1)$ and so $$g_{I}(k)=\depth(S/I^{[k]})-(\textup{indeg}(I^{[k]})-1)$$ is the normalized depth function of $I$ introduced in [@EHHM2022b]. It is expected that the following is true. **Conjecture 6**. *[(Erey--Herzog--Hibi--Madani [@EHHM2022b]).]{.upright} Let $I\subset S$ be a squarefree monomial ideal. 
Then $g_{I}$ is a nonincreasing function.* Since the concept of the normalized depth function is extended from squarefree monomial ideals to all monomial ideals, it is natural to expect that the following more general statement is true. **Conjecture 7**. *Let $I\subset S$ be a monomial ideal. Then $g_I$ is nonincreasing.* It is clear that Conjecture [Conjecture 7](#Conj:new-g_I){reference-type="ref" reference="Conj:new-g_I"} implies Conjecture [Conjecture 6](#Conj:old-g_I){reference-type="ref" reference="Conj:old-g_I"}. Surprisingly, we show that the converse also holds. **Proposition 8**. *Conjectures [Conjecture 6](#Conj:old-g_I){reference-type="ref" reference="Conj:old-g_I"} and [Conjecture 7](#Conj:new-g_I){reference-type="ref" reference="Conj:new-g_I"} are equivalent.* To prove this result, we use the *polarization* technique. Let $u=x_1^{b_1}\cdots x_n^{b_n}\in S$ be a monomial. Then, the *polarization* of $u$ is the monomial $$u^\wp=\prod_{i=1}^n(\prod_{j=1}^{b_i}x_{i,j})=\prod_{\substack{1\le i\le n\\ b_i>0}}x_{i,1}x_{i,2}\cdots x_{i,b_i}$$ in the polynomial ring $K[x_{i,j}:1\le i\le n,1\le j\le b_i]$. Let $I\subset S$ be a monomial ideal. Then, the *polarization* of $I$ is defined to be the squarefree monomial ideal $I^\wp$ of $S^\wp=K[x_{i,j}:1\le i\le n,1\le j\le\deg_{x_i}(I)]$ with minimal generating set $G(I^\wp)=\{u^\wp:u\in G(I)\}$. *Proof of Proposition [Proposition 8](#Prop:Conj-Equivalent){reference-type="ref" reference="Prop:Conj-Equivalent"}..* Suppose that Conjecture [Conjecture 6](#Conj:old-g_I){reference-type="ref" reference="Conj:old-g_I"} holds, and let $I\subset S$ be a monomial ideal. We claim that $$\label{eq:exchange-wp-[k]} (I^{[k]})^\wp\ =\ (I^\wp)^{[k]},\ \ \textup{for all}\ \ 1\le k\le\nu(I).$$ Indeed, let $v_1,\dots,v_k\in G(I^\wp)$ with $\supp(v_i)\cap\supp(v_j)=\emptyset$ for all $1\le i<j\le k$. Then $v_1\cdots v_k\in (I^\wp)^{[k]}$. Since $G(I^\wp)=\{u^\wp:u\in G(I)\}$, we see that $v_i=u_i^\wp$ with $u_i\in G(I)$ for all $i$. 
It is clear that the condition $\supp(u_i^\wp)\cap\supp(u_j^\wp)=\emptyset$ is verified if and only if $\supp(u_i)\cap\supp(u_j)=\emptyset$. By this discussion, we have $$\begin{aligned} (I^\wp)^{[k]}\ &=\ (u_1^\wp\cdots u_k^\wp:u_i^\wp\in G(I^\wp),\supp(u_i^\wp)\cap\supp(u_j^\wp)=\emptyset,1\le i<j\le k)\\ &=\ (u_1^\wp\cdots u_k^\wp:u_i\in G(I),\supp(u_i)\cap\supp(u_j)=\emptyset,1\le i<j\le k)\\ &=\ ((u_1\cdots u_k)^\wp:u_i\in G(I),\supp(u_i)\cap\supp(u_j)=\emptyset,1\le i<j\le k)\\ &=\ (I^{[k]})^\wp. \end{aligned}$$ In the third equality we used the equation $u_1^\wp\cdots u_k^\wp=(u_1\cdots u_k)^\wp$, which holds because the monomials $u_1,\dots,u_k$ are in pairwise disjoint sets of variables. Hence, equation ([\[eq:exchange-wp-\[k\]\]](#eq:exchange-wp-[k]){reference-type="ref" reference="eq:exchange-wp-[k]"}) follows. By [@HHBook2011 Corollary 1.6.3(d)] and equation ([\[eq:exchange-wp-\[k\]\]](#eq:exchange-wp-[k]){reference-type="ref" reference="eq:exchange-wp-[k]"}) it follows that $$\textup{pd}(S/I^{[k]})\ =\ \textup{pd}(S^\wp/(I^{[k]})^\wp)\ =\ \textup{pd}(S^\wp/(I^\wp)^{[k]}).$$ Taking into account that $S^\wp$ is a polynomial ring in $|{\bf deg}(I)|$ variables, applying the Auslander--Buchsbaum formula we get $$\depth(S/I^{[k]})+|{\bf deg}(I)|-n=\depth(S^\wp/(I^\wp)^{[k]}).$$ Since $\textup{indeg}(I^{[k]})=\textup{indeg}((I^\wp)^{[k]})$, subtracting $\textup{indeg}(I^{[k]})-1$ from both sides of the above equation, we obtain $$g_{I}(k)\ =\ g_{I^\wp}(k), \ \ \textup{for all}\ \ 1\le k\le\nu(I).$$ By our assumption, $g_{I^\wp}$ is nonincreasing, because $I^\wp$ is a squarefree monomial ideal. Hence, $g_I$ is nonincreasing, as well. ◻ In the course of the proof, we have shown: **Corollary 9**. *Let $I\subset S$ be a monomial ideal. Then, the following hold.* 1. *$g_I=g_{I^\wp}$ and $\nu(I)=\nu(I^\wp)$.* 2. *$(I^{[k]})^\wp=(I^\wp)^{[k]}$ for all $1\le k\le\nu(I)$.* 3. 
*$\depth(S/I^{[k]})=\depth(S^\wp/(I^{\wp})^{[k]})-|{\bf deg}(I)|+n$, for all $1\le k\le\nu(I)$.* ## Highest nonvanishing matching power of a quadratic monomial ideal {#highest-nonvanishing-matching-power-of-a-quadratic-monomial-ideal .unnumbered} A monomial ideal $I\subset S$ generated in a single degree is called *polymatroidal* if the *exchange property* holds: for all $u,v\in G(I)$ and all $i$ with $\deg_{x_i}(u)>\deg_{x_i}(v)$ there exists $j$ such that $\deg_{x_j}(u)<\deg_{x_j}(v)$ and $x_j(u/x_i)\in G(I)$. A squarefree polymatroidal ideal is called *matroidal*. A polymatroidal ideal has linear quotients with respect to the lexicographic order induced by any ordering of the variables. Indeed, a polymatroidal ideal is weakly polymatroidal and the above claim follows from [@HHBook2011 Proof of Theorem 12.7.2]. Our next main result states that the highest nonvanishing matching power of a quadratic monomial ideal is polymatroidal and thus it has linear quotients. **Theorem 10**. *Let $I\subset S$ be a monomial ideal generated in degree two. Then $I^{[\nu(I)]}$ is a polymatroidal ideal.* We postpone the proof of Theorem [Theorem 10](#Thm:I^[nu(I)]Polymatroidal){reference-type="ref" reference="Thm:I^[nu(I)]Polymatroidal"} until the end of this section because it is based upon the squarefree version of the theorem which we will prove first. We will use the technique of polarization to pass from the squarefree case to the non-squarefree case. If $I$ is a polymatroidal ideal, then $I^\wp$ is not necessarily polymatroidal. For instance, the ideal $I=(x_1^2, x_1x_2, x_2^2)$ is polymatroidal but $I^\wp$ is not. On the other hand, we have **Lemma 11**. *Let $I\subset S$ be a monomial ideal. If $I^\wp$ is polymatroidal, then so is $I$.* *Proof.* Let $u,v\in G(I)$ with $p=\deg_{x_i}(u) >\deg_{x_i}(v)$. Then $x_{i,p}$ divides $u^\wp$ but not $v^\wp$. 
In fact, $$\deg_{x_{i,p}}(u^\wp)=1>0=\deg_{x_{i,p}}(v^\wp).$$ Since $I^\wp$ is polymatroidal, there exists $x_{j,k}$ with $j\neq i$ such that $$\deg_{x_{j,k}}(v^\wp)=1>0=\deg_{x_{j,k}}(u^\wp)$$ and $x_{j,k}(u^\wp/x_{i,p})\in G(I^\wp)$. This implies $\deg_{x_j}(u)=k-1$ and $\deg_{x_j}(v)\geq k$. Then $$(x_ju/x_i)^\wp = x_{j,k}(u^\wp/x_{i,p}) \in G(I^\wp)$$ and thus $x_ju/x_i\in G(I)$. ◻ Now, let us recall some definitions and fix some notation. Hereafter, for an integer $n\ge1$, we set $[n]=\{1,2,\dots,n\}$. If $F\subseteq[n]$ is nonempty, we set ${\bf x}_F=\prod_{i\in F}x_i$. Let $G$ be a finite simple graph on vertex set $V(G)=[n]$ and with edge set $E(G)$. The *edge ideal* of $G$ is the ideal $I(G)=(x_ix_j:\{i,j\}\in E(G))$ of $S=K[x_1,\dots,x_n]$. A *matching* of $G$ is a set of edges of $G$ which are pairwise disjoint. If $M$ is a matching, then we denote by $V(M)$ the set of vertices $\bigcup_{e\in M}e$. We denote by $\nu(G)$ the *matching number* of $G$, which is the maximum size of a matching of $G$. Then one can verify that $\nu(I(G))=\nu(G)$. Bigdeli et al. showed in [@BHZN18 Theorem 4.1] that $I(G)^{[\nu(G)]}$ has linear quotients for any finite simple graph $G$. We strengthen their result as follows: **Theorem 12**. *Let $G$ be a finite simple graph. Then $I(G)^{[\nu(G)]}$ is polymatroidal.* *Proof.* Set $k=\nu(G)$, and let $u,v\in G(I(G)^{[k]})$ and $i$ be such that $\deg_{x_i}(u)>\deg_{x_i}(v)$. Our job is to find $j$ such that $\deg_{x_j}(u)<\deg_{x_j}(v)$ and $x_j(u/x_i)\in G(I(G)^{[k]})$. Since $\nu(G)=\nu(I(G))$, we have $$u={\bf x}_{e_1}\cdots{\bf x}_{e_k}\ \ \ \textup{and}\ \ \ v={\bf x}_{f_1}\cdots{\bf x}_{f_k},$$ where $M_u=\{e_1,\dots,e_k\}$ and $M_v=\{f_1,\dots,f_k\}$ are $k$-matchings of $G$. Up to relabelling, we have $e_1=\{i,h\}$ for some $h\in[n]$. Since $\deg_{x_i}(u)>\deg_{x_i}(v)$ and $u$ and $v$ are squarefree, it follows that $i\notin V(M_v)$. 
Thus $h\in V(M_v)$, otherwise $\{e_1,f_1,\dots,f_k\}$ would be a $(k+1)$-matching of $G$, against the fact that $k=\nu(G)$. Thus, we may assume that $f_1=\{h,i_1\}$ for some vertex $i_1\ne h$. Suppose that $i_1\notin V(M_u)$. Then we have $\deg_{x_{i_1}}(u)<\deg_{x_{i_1}}(v)$ and $$x_{i_1}(u/x_i)=(x_{h}x_{i_1}){\bf x}_{e_2}\cdots {\bf x}_{e_k}\in G(I(G)^{[k]}).$$ The exchange property holds in this case. Otherwise, if $i_1\in V(M_u)$, then we may assume that $e_2=\{i_1,j_1\}$ for some vertex $j_1\notin\{i,h\}$. Then, $j_1$ must be in $V(M_{v})$, otherwise $\{\{i,h\},\{i_1,j_1\},f_2,\dots,f_k\}$ would be a $(k+1)$-matching of $G$, which is absurd. Hence, we may assume that $f_2=\{j_1,i_2\}$ for some $i_2\notin\{i,h,i_1,j_1\}$. Now, we distinguish two more cases. Suppose that $i_2\notin V(M_u)$. Then we have $\deg_{x_{i_2}}(u)<\deg_{x_{i_2}}(v)$ and $$x_{i_2}(u/x_i)=(x_{h}x_{i_1})(x_{j_1}x_{i_2}){\bf x}_{e_3}\cdots {\bf x}_{e_k}\in G(I(G)^{[k]}).$$ Thus, we are finished in this case. Otherwise, if $i_2\in V(M_u)$, then we may assume that $e_3=\{i_2,j_2\}$ for some vertex $j_2\notin\{i,h,i_1,j_1,i_2\}$. Arguing as before, we obtain that $j_2\in V(M_v)$, and we can assume that $f_3=\{j_2,i_3\}$ for some vertex $i_3\notin\{i,h,i_1,j_1,i_2\}$. Iterating this argument, we obtain at the $p$th step that 1. $e_1=\{i,h\}$, $e_2=\{i_1,j_1\}$, $\dots$, $e_{p}=\{i_{p-1},j_{p-1}\}$ and 2. $f_1=\{h,i_1\}$, $f_2=\{j_1,i_2\}$, $\dots$, $f_p=\{j_{p-1},i_p\}$. Thus, if $i_p\notin V(M_u)$, then $\deg_{x_{i_p}}(u)<\deg_{x_{i_p}}(v)$ and $$x_{i_p}(u/x_i)={\bf x}_{f_1}\cdots {\bf x}_{f_p}{\bf x}_{e_{p+1}}\cdots{\bf x}_{e_k}\in G(I(G)^{[k]}).$$ The exchange property holds in such a case. Otherwise, if $i_p\in V(M_u)$, then $e_{p+1}=\{i_p,j_p\}$ for some vertex $j_p$ different from all vertices $i,h,i_1,j_1,\dots,i_{p-1},j_{p-1},i_p$, and $f_{p+1}=\{j_p,i_{p+1}\}$ for some vertex $i_{p+1}$. It is clear that the process described in (i)--(ii) terminates at most after $k$ steps. 
If we reach the $k$th step, then $\deg_{x_{i_k}}(u)<\deg_{x_{i_k}}(v)$ and $$x_{i_k}(u/x_i)={\bf x}_{f_1}\cdots {\bf x}_{f_k}=v\in G(I(G)^{[k]}).$$ Thus, the exchange property holds in any case and $I(G)^{[k]}$ is polymatroidal. ◻ We are now ready for the proof of Theorem [Theorem 10](#Thm:I^[nu(I)]Polymatroidal){reference-type="ref" reference="Thm:I^[nu(I)]Polymatroidal"}. *Proof of Theorem [Theorem 10](#Thm:I^[nu(I)]Polymatroidal){reference-type="ref" reference="Thm:I^[nu(I)]Polymatroidal"}.* Let $k=\nu(I)$. By Corollary [Corollary 9](#Cor:g_Ipolarization){reference-type="ref" reference="Cor:g_Ipolarization"}(b), $(I^{[k]})^\wp=(I^\wp)^{[k]}$. Moreover, $I^\wp$ is an edge ideal and $\nu(I)=\nu(I^\wp)$ by Corollary [Corollary 9](#Cor:g_Ipolarization){reference-type="ref" reference="Cor:g_Ipolarization"}(a). Then Theorem [Theorem 12](#Thm:theoremI(G)[nu(G)]Polymatroidal){reference-type="ref" reference="Thm:theoremI(G)[nu(G)]Polymatroidal"} implies that $(I^{[k]})^\wp$ is polymatroidal. Finally, Lemma [Lemma 11](#Lemma:IwpPolym=>IPolym){reference-type="ref" reference="Lemma:IwpPolym=>IPolym"} implies that $I^{[k]}$ is polymatroidal as well. ◻ In [@EHHM2022b Corollary 3.5] it was proved that $g_{I(G)}(\nu(G))=0$ for any fully supported edge ideal $I(G)$. As an interesting consequence of Theorem [Theorem 12](#Thm:theoremI(G)[nu(G)]Polymatroidal){reference-type="ref" reference="Thm:theoremI(G)[nu(G)]Polymatroidal"}, we extend this result to quadratic monomial ideals. **Corollary 13**. *Let $I\subset S$ be a monomial ideal generated in degree two. Then $g_I(\nu(I))=0$ and $\reg(I^{[\nu(I)]})=2\nu(I)$.* *Proof.* By Theorem [Theorem 12](#Thm:theoremI(G)[nu(G)]Polymatroidal){reference-type="ref" reference="Thm:theoremI(G)[nu(G)]Polymatroidal"}, $(I^\wp)^{[\nu(I)]}$ is matroidal, being a squarefree polymatroidal ideal. Hence [@EHHM2022b Theorem 1.6] yields that $\depth(S^\wp/(I^\wp)^{[\nu(I^\wp)]})=\textup{indeg}((I^\wp)^{[\nu(I^\wp)]})-1$ and $g_{I^\wp}(\nu(I^\wp))=0$.
Corollary [Corollary 9](#Cor:g_Ipolarization){reference-type="ref" reference="Cor:g_Ipolarization"} implies that $g_{I}(\nu(I))=g_{I^\wp}(\nu(I^\wp))=0$. Since $I^{[\nu(I)]}$ is a polymatroidal ideal generated in degree $2\nu(I)$, $I^{[\nu(I)]}$ has a linear resolution. Hence $\reg(I^{[\nu(I)]})=2\nu(I)$. ◻ The above result is no longer valid for monomial ideals generated in a single degree greater than two. For instance, for the ideal $I=(x_1x_2^2, x_2x_3^2, x_3x_4^2, x_4x_1^2)$ of $S=K[x_1,\dots,x_4]$ we have $\nu(I)=2$ but $I^{[2]}$ does not have a linear resolution and $g_I(2)=1\neq 0$.

# Edge ideals of weighted oriented graphs {#sec:2-EreyFic}

In this section, we focus our attention on matching powers of edge ideals of weighted oriented graphs. The interest in these ideals stems from their relevance in coding theory, in particular in the study of Reed-Muller type codes [@MPV]. Recently, these ideals have been the subject of many research papers in combinatorial commutative algebra, e.g. [@BCDMS; @BDS23; @CK; @HLMRV; @KBLO; @PRT]. Hereafter, by a graph $G$ we mean a finite simple undirected graph without isolated vertices. A (*vertex*)-*weighted oriented graph* $\mathcal{D}=(V(\mathcal{D}),E(\mathcal{D}),w)$ consists of an underlying graph $G$, with $V(\mathcal{D})=V(G)$, in which each edge is given an orientation, and which is equipped with a *weight function* $w:V(G)\rightarrow\mathbb{Z}_{\ge1}$. The *weight* of a vertex $i\in V(G)$, denoted by $w_i$, is the value $w(i)$ of the weight function at $i$. The directed edges of $\mathcal{D}$ are denoted by pairs $(i,j)\in E(\mathcal{D})$ to reflect the orientation; hence $(i,j)$ represents an edge directed from $i$ to $j$. The *edge ideal* of $\mathcal{D}$ is defined as the ideal $$I(\mathcal{D})\ =\ (x_ix_j^{w_j}\ :\ (i,j)\in E(\mathcal{D}))$$ of the polynomial ring $S=K[x_i:i\in V(G)]$. If $w_i=1$ for all $i\in V(G)$, then $I(\mathcal{D})=I(G)$ is the usual edge ideal of $G$. **Remark 14**.
*If $i\in V(G)$ is a *source*, that is, a vertex such that $(j,i)\notin E(\mathcal{D})$ for all $j$, then $\deg_{x_i}(I(\mathcal{D}))=1$. Therefore, hereafter we assume that $w_i=1$ for all sources $i\in V(G)$.* By Proposition [Proposition 2](#Prop:gensMatchPower){reference-type="ref" reference="Prop:gensMatchPower"}(a), $I(\mathcal{D})^{[k]}$ is generated by the products $u=u_1\cdots u_k$ where $u_p=x_{i_p}x_{j_p}^{w_{j_p}}\in G(I(\mathcal{D}))$ and $\supp(u_p)\cap\supp(u_q)=\emptyset$ for all $p\ne q$. Thus $u\in I(\mathcal{D})^{[k]}$ if and only if $M=\{\{i_1,j_1\},\dots,\{i_k,j_k\}\}$ is a $k$-matching of $G$. This observation justifies the choice to name $I^{[k]}$ the $k$th matching power of $I$. First, we establish the homological comparison between the matching powers $I(\mathcal{D})^{[k]}$ and $I(G)^{[k]}$, where $G$ is the underlying graph of $\mathcal{D}$. The assumption in Remark [Remark 14](#remark: assumption on sources){reference-type="ref" reference="remark: assumption on sources"} is crucial for statement (e) of Theorem [Theorem 15](#Thm:comparison){reference-type="ref" reference="Thm:comparison"}. **Theorem 15**. *Let $\mathcal{D}$ be a weighted oriented graph with underlying graph $G$. Then, the following statements hold.*

(a) *$\nu(I(\mathcal{D}))=\nu(I(G))=\nu(G)$.*

(b) *$\textup{pd}(I(G)^{[k]})\le\textup{pd}(I(\mathcal{D})^{[k]})$, for all $1\le k\le\nu(G)$.*

(c) *$\reg(I(G)^{[k]})\le\reg(I(\mathcal{D})^{[k]})$, for all $1\le k\le\nu(G)$.*

(d) *$\beta_i(I(G)^{[k]})\le\beta_i(I(\mathcal{D})^{[k]})$, for all $1\le k\le\nu(G)$ and $i$.*

(e) *$g_{I(\mathcal{D})}(k)\le g_{I(G)}(k)+\sum\limits_{i\in V(G)}w_i-|V(G)|$, for all $1\le k\le\nu(G)$.*

For the proof we recall a few basic facts. Let $I\subset S$ be a monomial ideal.

(i) We have $\beta_{i,j}(I)=\beta_{i,j}(I^\wp)$ for all $i$ and $j$ [@HHBook2011 Corollary 1.6.3].

(ii) For a monomial $u\in S$, we set $\sqrt{u}=\prod_{i\in\supp(u)}x_i$.
If $G(I)=\{u_1,\dots,u_m\}$, then [@HHBook2011 Proposition 1.2.4] gives $$\sqrt{I}=(\sqrt{u_1},\dots,\sqrt{u_m}).$$

(iii) Let $P$ be a monomial prime ideal of $S$. Let $S(P)$ be the polynomial ring in the variables which generate $P$. The *monomial localization* of $I$ at $P$ is the monomial ideal $I(P)$ of $S(P)$ which is obtained from $I$ by the substitution $x_i\mapsto1$ for all $x_i\notin P$. The monomial localization can also be described as the saturation $I:(\prod_{x_i\notin P}x_i)^\infty$. If ${\mathbb F}$ is the minimal (multi)graded free $S$-resolution of $I$, one can construct, starting from ${\mathbb F}$, a possibly non-minimal (multi)graded free $S(P)$-resolution of $I(P)$ [@HMRZ021a Lemma 1.12]. It follows from this construction that $\beta_{i}(I(P))\le\beta_{i}(I)$ for all $i$. Moreover, $\textup{pd}(I(P))\le\textup{pd}(I)$ and $\reg(I(P))\le\reg(I)$.

*Proof.* Statement (a) is clear. To prove (b), (c) and (d), set $J=I(\mathcal{D})^{[k]}$. We may assume that $I(\mathcal{D})$ is a fully supported ideal of $S=K[x_1,\dots, x_n]$. Let $P=(x_{1,1},\dots,x_{n,1})$. Identifying $x_{i,1}$ with $x_i$ for all $i$, by applying (ii), $J^\wp(P)$ can be identified with $\sqrt{J}$. Then by (i) and (iii) we obtain $$\beta_{i}(\sqrt{J})=\beta_{i}(J^\wp(P))\leq \beta_{i}(J^\wp)=\beta_{i}(J)$$ for all $i$. To complete the proof, we will show that $\sqrt{J}=I(G)^{[k]}$. To this end, let $v\in G(J)$. Then $v=(x_{i_1}x_{j_1}^{w_{j_1}})\cdots(x_{i_k}x_{j_k}^{w_{j_k}})$ with $(i_1,j_1),\dots,(i_k,j_k)\in E(\mathcal{D})$ and the corresponding undirected edges form a $k$-matching of $G$. Thus $\sqrt{v}=(x_{i_1}x_{j_1})\cdots(x_{i_k}x_{j_k})\in I(G)^{[k]}$ and consequently $\sqrt{J}\subseteq I(G)^{[k]}$. Conversely, let $u=(x_{i_1}x_{j_1})\cdots(x_{i_k}x_{j_k})\in G(I(G)^{[k]})$ with $\{\{i_1,j_1\},\dots,\{i_k,j_k\}\}$ a $k$-matching of $G$. Then $(i_1,j_1),\dots,(i_k,j_k)\in E(\mathcal{D})$ up to relabelling.
So $v=(x_{i_1}x_{j_1}^{w_{j_1}})\cdots(x_{i_k}x_{j_k}^{w_{j_k}})\in J$ and $\sqrt{v}=u\in\sqrt{J}$. This shows that $I(G)^{[k]}\subseteq\sqrt{J}$. Equality follows. It remains to prove (e). Let $L$ be a monomial ideal of $S$. By the Auslander--Buchsbaum formula we have $\depth(S/L)=n-1-\textup{pd}(L)$. Hence, for all $1\le k\le\nu(L)$ we can rewrite $g_L(k)$ as $$g_L(k)=|{\bf deg}(L)|-\textup{pd}(L^{[k]})-\textup{indeg}(L^{[k]}).$$ By (b) we have $\textup{pd}(I(G)^{[k]})\le\textup{pd}(I(\mathcal{D})^{[k]})$ for all $k$. It is clear that $|{\bf deg}(I(G))|=n$ and $\textup{indeg}(I(G)^{[k]})=2k\le\textup{indeg}(I(\mathcal{D})^{[k]})$ for all $1\le k\le\nu(G)$. Therefore, $$\begin{aligned} g_{I(\mathcal{D})}(k)&=&|{\bf deg}(I(\mathcal{D}))|-\textup{pd}(I(\mathcal{D})^{[k]})-\textup{indeg}(I(\mathcal{D})^{[k]})\\ &\le&|{\bf deg}(I(\mathcal{D}))|-\textup{pd}(I(G)^{[k]})-\textup{indeg}(I(G)^{[k]})\\ &=& n-\textup{pd}(I(G)^{[k]})-\textup{indeg}(I(G)^{[k]})+|{\bf deg}(I(\mathcal{D}))|-n\\ &=& g_{I(G)}(k)+|{\bf deg}(I(\mathcal{D}))|-n. \end{aligned}$$ Since $\deg_{x_i}(I(\mathcal{D}))=w_i$ for all $i$, we have $|{\bf deg}(I(\mathcal{D}))|=\sum_{i=1}^nw_i$, as wanted. ◻ The inequalities in (b), (c), (d) and (e) need not be equalities. **Example 16**. *Let $\mathcal{D}$ be the oriented 4-cycle with all vertices having weight 2 and with edge set $E(\mathcal{D})=\{(a,b),(b,c),(c,d),(d,a)\}$. Then $I(G)^{[2]}=(abcd)$, while $I(\mathcal{D})^{[2]}=(ab^2cd^2,a^2bc^2d)$. By using *Macaulay2* [@GDS] and the package [@FPack2], we checked that $\textup{pd}(I(G)^{[2]})=1<2=\textup{pd}(I(\mathcal{D})^{[2]})$, $\reg(I(G)^{[2]})=4<7=\reg(I(\mathcal{D})^{[2]})$, $\beta_1(I(G)^{[2]})=0<1=\beta_1(I(\mathcal{D})^{[2]})$, and $g_{I(\mathcal{D})}(2)=1<5=g_{I(G)}(2)+\sum_{i=1}^4w_i-4$.* Hereafter, we concentrate our attention on edge ideals of vertex-weighted oriented graphs. Let $\mathcal{D}'$ and $\mathcal{D}$ be weighted oriented graphs with underlying graphs $G'$ and $G$ respectively.
We say that $\mathcal{D}'$ is a *weighted oriented subgraph* of $\mathcal{D}$ if the vertex and edge sets of $\mathcal{D}'$ are contained in those of $\mathcal{D}$, respectively, and the weight functions coincide on $V(\mathcal{D}')$. A weighted oriented subgraph $\mathcal{D}'$ of $\mathcal{D}$ is called an *induced weighted oriented subgraph* of $\mathcal{D}$ if $G'$ is an induced subgraph of $G$. First, we turn to the problem of bounding the regularity of matching powers of edge ideals. We begin with the so-called restriction lemma. **Lemma 17**. *Let $\mathcal{D}'$ be an induced weighted oriented subgraph of $\mathcal{D}$. Then*

1. *$\beta_{i,{\bf a}}(I(\mathcal{D}')^{[k]}) \leq \beta_{i,{\bf a}}(I(\mathcal{D})^{[k]})$ for all $i$ and ${\bf a}\in \mathbb{Z}^n$.*

2. *$\reg(I(\mathcal{D}')^{[k]}) \leq \reg(I(\mathcal{D})^{[k]})$.*

*Proof.* This follows from [@EHHM2022a Lemma 1.2]. ◻ Let $\mathop{\mathrm{im}}(G)$ denote the *induced matching number* of $G$. For any weighted oriented graph $\mathcal{D}$ with underlying graph $G$, let $\mathop{\mathrm{wim}}(\mathcal{D})$ denote the *weighted induced matching number* of $\mathcal{D}$. That is, $$\begin{aligned} \mathop{\mathrm{wim}}(\mathcal{D})=\max\big\{\!\sum_{i=1}^{m}w(y_i)\ :\ \ &\{\{x_1,y_1\},\dots,\{x_m,y_m\}\}\ \text{is an}\\&\ \text{induced matching of}\ G,\ \text{and}\ (x_i,y_i)\in E(\mathcal{D})\big\}. \end{aligned}$$ Notice that if $w_i=1$ for every $i\in V(\mathcal{D})$, then $\mathop{\mathrm{wim}}(\mathcal{D})=\mathop{\mathrm{im}}(G)$. In general, we have the inequality $\mathop{\mathrm{wim}}(\mathcal{D})\geq \mathop{\mathrm{im}}(G)$. We extend the regularity lower bound given in [@BDS23 Theorem 3.8] as follows. **Proposition 18**. *Let $\mathcal{D}$ be a weighted oriented graph with underlying graph $G$. Then $$\reg(I(\mathcal{D})^{[k]})\geq \mathop{\mathrm{wim}}(\mathcal{D})+k$$ for all $1\leq k \leq \mathop{\mathrm{im}}(G)$.* *Proof.* The proof is similar to [@EHHM2022a Theorem 2.1].
We include the details for the sake of completeness. Let $\{\{x_1,y_1\},\dots ,\{x_r,y_r\}\}$ be an induced matching of $G$. We may choose it so that $(x_i,y_i)\in E(\mathcal{D})$ with $w(y_i)=t_i$ and $\sum_{i=1}^{r}t_i=\mathop{\mathrm{wim}}(\mathcal{D})$. Let $\mathcal{D}'$ be the induced weighted oriented subgraph of $\mathcal{D}$ on the vertices $x_1,\dots,x_r,y_1,\dots,y_r$. Then by Lemma [Lemma 17](#lem:induced subgraph){reference-type="ref" reference="lem:induced subgraph"} it suffices to show that $$\reg(I(\mathcal{D}')^{[k]})\geq \mathop{\mathrm{wim}}(\mathcal{D})+k.$$ To this end, we set $I=I(\mathcal{D}')$ and we claim that $$\beta_{r-k,\mathop{\mathrm{wim}}(\mathcal{D})+r}(I^{[k]})\neq 0.$$ This nonvanishing yields the desired bound, since $(\mathop{\mathrm{wim}}(\mathcal{D})+r)-(r-k)=\mathop{\mathrm{wim}}(\mathcal{D})+k$. Let $J = (z_1, \dots, z_r)$, where $z_1, \dots , z_r$ are new variables. Then $J^{[k]}$ is a squarefree strongly stable ideal in the polynomial ring $R = K[z_1, \dots, z_r]$. It was proved in [@EHHM2022a Theorem 2.1] that $\beta_{r-k, r}(J^{[k]})\neq 0$. Define the map $\phi: R \rightarrow S = K[x_1, \dots , x_r, y_1, \dots , y_r]$ by $z_i \mapsto x_iy_i^{t_i}$ for $i = 1, \dots , r$. Since $x_1y_1^{t_1}, \dots , x_ry_r^{t_r}$ is a regular sequence on $S$, the $K$-algebra homomorphism $\phi$ is flat. If $\mathbb{F}$ is the minimal free resolution of $J^{[k]}$ over $R$, then $\mathbb{G}=\mathbb{F}\otimes_R S$ is the minimal free resolution of $I^{[k]}$ over $S$. It follows that $$\beta_{i,(a_1,\dots ,a_r)}(J^{[k]})=\beta_{i,(a_1,\dots ,a_r,t_1a_1,\dots ,t_ra_r)}(I^{[k]})$$ for any $i$ and $(a_1,\dots ,a_r)\in \mathbb{Z}^r$. Then, $$0\neq\beta_{r-k,r}(J^{[k]})=\beta_{r-k,(1,\dots ,1)}(J^{[k]})=\beta_{r-k,(1,\dots ,1,t_1,\dots ,t_r)}(I^{[k]})$$ and $\beta_{r-k,\mathop{\mathrm{wim}}(\mathcal{D})+r}(I^{[k]})\neq 0$ as desired. ◻ We close this section by providing a lower bound for the projective dimension of matching powers of edge ideals. Let $P_n$ be the *path of length* $n$. That is, $V(P_n)=[n]$ and $E(P_n)=\{\{1,2\},\{2,3\},\dots,\{n-1,n\}\}$.
We denote by $\mathcal{P}_n$ a weighted oriented path of length $n$, that is, a weighted oriented graph whose underlying graph is $P_n$. It is well-known that $\nu(P_n)=\lfloor\frac{n}{2}\rfloor$. For a weighted oriented graph $\mathcal{D}$ with underlying graph $G$, we denote by $\ell(\mathcal{D})$ the maximal length of an induced path of $G$. **Proposition 19**. *Let $\mathcal{D}$ be a weighted oriented graph. Then $\nu(I(\mathcal{D}))\ge\lfloor\frac{\ell(\mathcal{D})}{2}\rfloor$ and $$\textup{pd}(I(\mathcal{D})^{[k]})\ \ge\ \begin{cases} \ell(\mathcal{D})-\lceil\frac{\ell(\mathcal{D})}{3}\rceil-k&\text{if}\ 1\le k\le\lceil\frac{\ell(\mathcal{D})}{3}\rceil,\\ \ell(\mathcal{D})-2k&\text{if}\ \lceil\frac{\ell(\mathcal{D})}{3}\rceil+1\le k\le\lfloor\frac{\ell(\mathcal{D})}{2}\rfloor. \end{cases}$$* *Proof.* Let $\ell=\ell(\mathcal{D})$. There exists a subset $W$ of $V(\mathcal{D})$ such that the induced subgraph of $\mathcal{D}$ on $W$ is a weighted oriented path $\mathcal{P}_\ell$. Theorem [Theorem 15](#Thm:comparison){reference-type="ref" reference="Thm:comparison"}(b) combined with Lemma [Lemma 17](#lem:induced subgraph){reference-type="ref" reference="lem:induced subgraph"} implies that $\textup{pd}(I(P_\ell)^{[k]})\le\textup{pd}(I(\mathcal{P}_\ell)^{[k]})\le\textup{pd}(I(\mathcal{D})^{[k]})$. It was shown in [@CFL Theorem 3.1] that $$g_{I(P_\ell)}(k)\ =\ \begin{cases} \lceil\frac{\ell}{3}\rceil-k&\text{if}\ 1\le k\le\lceil\frac{\ell}{3}\rceil,\\ \hfil0&\text{if}\ \lceil\frac{\ell}{3}\rceil+1\le k\le\lfloor\frac{\ell}{2}\rfloor. \end{cases}$$ For a squarefree monomial ideal $I\subset S$, we have $g_I(k)=n-\textup{pd}(I^{[k]})-\textup{indeg}(I^{[k]})$. Hence, the assertion follows from the above formula. ◻ Although we only considered weighted oriented graphs in this section, our methods can be used to prove analogous results for matching powers of edge ideals of edge-weighted graphs.
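The small computations underlying statements such as $\nu(P_n)=\lfloor\frac{n}{2}\rfloor$ are easy to reproduce by brute force. The following Python sketch is our own illustration (none of the names below come from the paper): monomials are encoded as dictionaries mapping variables to exponents, $I^{[k]}$ is generated, as in Proposition 2(a), by products of $k$ minimal generators with pairwise disjoint supports, pruned to the divisibility-minimal ones, and $\nu(I)$ is found by exhaustion.

```python
from itertools import combinations

def support(m):                      # m: dict mapping variable -> exponent
    return set(m)

def times(ms):                       # product of a tuple of monomials
    out = {}
    for m in ms:
        for v, e in m.items():
            out[v] = out.get(v, 0) + e
    return out

def divides(a, b):                   # does monomial a divide monomial b?
    return all(b.get(v, 0) >= e for v, e in a.items())

def matching_power(gens, k):
    """Minimal generators of I^{[k]}: products of k generators of I with
    pairwise disjoint supports, pruned to the divisibility-minimal ones."""
    prods = []
    for combo in combinations(gens, k):
        supps = [support(m) for m in combo]
        if all(supps[i].isdisjoint(supps[j])
               for i in range(k) for j in range(i + 1, k)):
            prods.append(times(combo))
    minimal, seen = [], set()
    for p in prods:
        key = tuple(sorted(p.items()))
        if key in seen:
            continue
        if not any(divides(q, p) and q != p for q in prods):
            seen.add(key)
            minimal.append(p)
    return minimal

def nu(gens):                        # nu(I): largest k with I^{[k]} != 0
    k = 0
    while matching_power(gens, k + 1):
        k += 1
    return k

# Edge ideal of the path P_5 on vertices 1..5
P5 = [{i: 1, i + 1: 1} for i in range(1, 5)]
print(nu(P5))                        # 2
print(len(matching_power(P5, 2)))    # 3: the 2-matchings of P_5
```

This is only feasible for small inputs, since it enumerates all $k$-subsets of generators; for serious experiments the *Macaulay2* computations mentioned in Example 16 are the appropriate tool.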
An *edge-weighted graph* $G_w=(V(G_w),E(G_w),w)$ consists of an underlying graph $G$, with $V(G_w)=V(G)$ and $E(G_w)=E(G)$, equipped with a *weight function* $w:E(G)\rightarrow\mathbb{Z}_{\ge1}:\{i,j\}\in E(G)\mapsto w(\{i,j\})=w_{i,j}$. The *edge ideal* of $G_w$ is defined as the ideal $$I(G_w)\ =\ ((x_ix_j)^{w_{i,j}}\ :\ \{i,j\}\in E(G))$$ of $S=K[x_i:i\in V(G)]$, see [@PS13]. Notice that if the weight of every edge is $1$, then the edge ideal of $G_w$ coincides with that of $G$.

# Linearly related matching powers {#sec:3-EreyFic}

Let $I\subset S$ be a graded ideal generated in a single degree. We say that $I$ is *linearly related* if the first syzygy module of $I$ is generated by linear relations. In this section, we want to discuss which matching powers of the edge ideal $I(\mathcal{D})$ of a vertex-weighted oriented graph $\mathcal{D}$ are linearly related. Let $I$ be a monomial ideal of $S$ generated in degree $d$. Let $G_I$ denote the graph with vertex set $G(I)$ and edge set $$E(G_I)=\{\{u,v\}: u,v\in G(I)~\text{with}~\deg (\lcm(u,v))=d+1 \}.$$ For all $u, v\in G(I)$ let $G^{(u,v)}_I$ be the induced subgraph of $G_I$ whose vertex set is $$V(G^{(u,v)}_I)=\{w\in G(I)\colon \text{$w$ divides $\lcm(u, v)$}\}.$$ The following theorem provides a criterion, in terms of the graphs defined above, for deciding whether a monomial ideal is linearly related. **Theorem 20**. *[@BHZN18 Corollary 2.2] Let $I$ be a monomial ideal generated in degree $d$. Then $I$ is linearly related if and only if for all $u,v\in G(I)$ there is a path in $G^{(u,v)}_I$ connecting $u$ and $v$.* **Lemma 21**. *Let $I$ be a monomial ideal and let $1\leq k < \nu(I)$. Suppose that $I^{[k]}$ is generated in a single degree. Then, there is an integer $d$ such that*

- *$I^{[k]}$ is generated in degree $dk$.*

- *$I^{[k+1]}$ is generated in degree $d(k+1)$.
Moreover, if $u=u_1\dots u_{k+1}\in G(I^{[k+1]})$, with each $u_i\in G(I)$ and $\supp(u_i)\cap \supp(u_j) =\emptyset$ for $i\neq j$, then $\deg(u_i)=d$ for each $i$.*

*Proof.* Let $u=u_1\cdots u_{k+1}\in G(I^{[k+1]})$ with each $u_i\in G(I)$ and $\supp(u_i)\cap \supp(u_j)=\emptyset$ for $i\neq j$. Observe that $u/u_\ell\in G(I^{[k]})$ for any $\ell=1,\dots ,k+1$. First, we show that $\deg(u_i)=\deg(u_j)$ for each $i\neq j$. Without loss of generality, assume for a contradiction that $\deg(u_1)\neq \deg(u_2)$. Then $u_2u_3\cdots u_{k+1}$ and $u_1u_3\cdots u_{k+1}$ are minimal monomial generators of $I^{[k]}$ of different degrees, which is a contradiction. It follows that $u_1,\dots ,u_{k+1}$ are all of degree $d$ for some $d$. Now, suppose that $v=v_1\cdots v_{k+1}\in G(I^{[k+1]})$ with each $v_i\in G(I)$ and $\supp(v_i)\cap \supp(v_j)=\emptyset$ for $i\neq j$. By the above argument, each $v_i$ is of the same degree, say $d'$. Then $u_1\cdots u_k$ is a minimal monomial generator of $I^{[k]}$ of degree $dk$ whereas $v_1\cdots v_k$ is a minimal monomial generator of $I^{[k]}$ of degree $d'k$. Therefore $d=d'$ and $u$ and $v$ have the same degree. ◻ In [@BHZN18 Theorem 3.1] it was proved that $I(G)^s$ is linearly related for some $s\geq 1$ if and only if $I(G)^k$ is linearly related for all $k\geq 1$. In contrast with the ordinary powers of edge ideals, if some squarefree power of $I(G)$ is linearly related, it does not follow that all squarefree powers of $I(G)$ are linearly related. On the other hand, it was proved in [@EHHM2022a Theorem 3.1] that if $I(G)^{[k]}$ is linearly related for some $k\geq 1$, then $I(G)^{[k+1]}$ is linearly related as well. We extend [@EHHM2022a Theorem 3.1] to monomial ideals, under some additional assumptions. **Theorem 22**. *Let $I$ be a monomial ideal such that $|\supp(w)|=2$ for every $w\in G(I)$. Suppose that $I^{[k]}$ is linearly related for some $1\leq k <\nu(I)$.
If $\supp(u)\neq \supp(v)$ for every $u,v\in G(I^{[k+1]})$ with $u\neq v$, then $I^{[k+1]}$ is linearly related.* *Proof.* Suppose that $\supp(u)\neq \supp(v)$ for every $u,v\in G(I^{[k+1]})$ with $u\neq v$. By the previous lemma, $I^{[k]}$ is generated in degree $dk$, and $I^{[k+1]}$ is generated in degree $d(k+1)$. Let $u,v \in G(I^{[k+1]})$ with $u\neq v$. By Theorem [Theorem 20](#connected criterion){reference-type="ref" reference="connected criterion"} and Lemma [Lemma 21](#lem:generated in single degree){reference-type="ref" reference="lem:generated in single degree"}, it suffices to find a path in $G_{I^{[k+1]}}^{(u,v)}$ connecting $u$ to $v$. Let $u=u_1\cdots u_{k+1}$ and let $v=v_1\cdots v_{k+1}$ where $u_i,v_i \in G(I)$ for each $i=1,\dots ,k+1$ and $$\supp(u_p)\cap \supp(u_q)=\emptyset=\supp(v_p)\cap \supp(v_q)$$ for every distinct $p,q\in \{1,\dots, k+1\}$. By Lemma [Lemma 21](#lem:generated in single degree){reference-type="ref" reference="lem:generated in single degree"}, we have that $\deg(u_i)=\deg(v_i)=d$ for every $i=1,\dots ,k+1$. Since $\supp(u)\neq \supp(v)$ and both supports have cardinality $2(k+1)$, there exists $\ell \in \supp(u)\setminus \supp(v)$. Without loss of generality, we may assume that $x_\ell$ divides $u_1$. Let $\supp(u_1)=\{\ell, m\}$. By definition of matching power, there exists at most one $j$ such that $x_m$ divides $v_j$. Again, without loss of generality, we may assume that $x_m$ does not divide $v_i$ for $i=2,\dots ,k+1$. Now, we have $$\supp(u_1)\cap \supp(v_p)=\emptyset\ \ \text{ for all }\ \ p=2,3,\dots ,k+1.$$ Let $u'=u_2\dots u_{k+1}$ and $v'=v_2\dots v_{k+1}$. Since $u', v'\in G(I^{[k]})$ there exists a path $u'=z_0, z_1, z_2, \dots , z_t, v'=z_{t+1}$ in $G_{I^{[k]}}^{(u',v')}$ connecting $u'$ to $v'$. We claim that $$P: u, u_1z_1, u_1z_2, \dots ,u_1z_t, u_1v'$$ is a path in $G_{I^{[k+1]}}^{(u,u_1v')}$.
To prove the claim, we must show that

(i) $u_1z_i\in G(I^{[k+1]})$ for all $i=1,\dots, t+1$,

(ii) $u_1z_i$ divides $\lcm(u,u_1v')$ for all $i=1,\dots ,t$, and

(iii) $\deg(\lcm(u_1z_i, u_1z_{i+1}))=d(k+1)+1$ for all $i=0,\dots ,t$.

Since $\supp(u_1)\cap \supp(\lcm(u', v'))=\emptyset$, the monomial $u_1z_i$ belongs to $I^{[k+1]}$ for all $i=1,\dots ,t+1$. Moreover, since $u_1z_i$ is of degree $d(k+1)$, it follows that $u_1z_i \in G(I^{[k+1]})$, which proves (i). To see that (ii) holds, observe that $$\lcm(u,u_1v')=\lcm(u_1z_0, u_1z_{t+1})=u_1\lcm(z_0, z_{t+1}).$$ Lastly, (iii) holds because for all $i=0,\dots ,t$ we have $$\deg(\lcm(u_1z_i, u_1z_{i+1}))=\deg(u_1)+\deg(\lcm(z_i, z_{i+1}))=d+(dk+1)=d(k+1)+1.$$ Now, let $w=u_1v_2\dots v_k$ and $w'=v_1v_2\dots v_k$. Since $w, w'\in G(I^{[k]})$ there exists a path $w, y_1, y_2, \dots , y_s, w'$ in $G_{I^{[k]}}^{(w,w')}$ connecting $w$ to $w'$. As before, we can then form a path $P'$ $$P': wv_{k+1}, y_1v_{k+1}, y_2v_{k+1}, \dots , y_sv_{k+1}, w'v_{k+1}=v$$ in $G_{I^{[k+1]}}^{(u_1v', v )}$. Connecting $P$ and $P'$ we get the required path, as $u_1v'=wv_{k+1}$. ◻ We will now observe that the assumption of the previous theorem is satisfied for edge ideals of some weighted oriented graphs, including those whose underlying graphs are forests. Hereafter, to simplify the notation, we identify each vertex $i\in V(\mathcal{D})$ with the variable $x_i$. Hence, we will often write $x_i$ to denote $i$. **Lemma 23**. *Let $\mathcal{D}$ be a weighted oriented graph whose underlying graph is $G$. Suppose that every subgraph of $G$ has at most one perfect matching. Let $1\leq k\leq \nu(I(\mathcal{D}))$ and $u,v\in G(I(\mathcal{D})^{[k]})$. If $\supp (u)= \supp (v)$, then $u=v$.* *Proof.* Let $u=x_1y_1^{w(y_1)}\dots x_ky_k^{w(y_k)}$ where $(x_i,y_i)\in E(\mathcal{D})$ for each $i$ and $M_1=\{\{x_i,y_i\} : i=1,\dots ,k\}$ is a matching in $G$.
Let $v=z_1t_1^{w(t_1)}\dots z_kt_k^{w(t_k)}$ where $(z_i,t_i)\in E(\mathcal{D})$ for each $i$ and $M_2=\{\{z_i,t_i\} : i=1,\dots ,k\}$ is a matching in $G$. Suppose that $\supp (u) =\supp (v)$. Then we can set $$W:= \{x_1,\dots, x_k,y_1,\dots ,y_k\}=\{z_1,\dots,z_k,t_1,\dots, t_k\}.$$ Since the induced subgraph of $G$ on $W$ has at most one perfect matching, it follows that $M_1=M_2$ and therefore $u=v$. ◻ Combining Theorem [Theorem 22](#thm:linearly related consecutive powers){reference-type="ref" reference="thm:linearly related consecutive powers"} and Lemma [Lemma 23](#lem: distinct support){reference-type="ref" reference="lem: distinct support"}, we get the following immediate corollary. **Corollary 24**. *Let $\mathcal{D}$ be a weighted oriented graph such that every subgraph of its underlying graph has at most one perfect matching [(]{.upright}e.g., a forest[)]{.upright}. If $I(\mathcal{D})^{[k]}$ is linearly related for some $1\leq k <\nu(I(\mathcal{D}))$, then $I(\mathcal{D})^{[k+1]}$ is linearly related as well.* Let $G$ be the underlying graph of $\mathcal{D}$. If every subgraph of $G$ has at most one perfect matching (e.g., $G$ is a forest, or an odd cycle), and $I(\mathcal{D})\ne I(G)$, then even more is true. **Theorem 25**. *Let $\mathcal{D}$ be a weighted oriented graph with underlying graph $G$. Suppose that every subgraph of $G$ has at most one perfect matching, and that $I(\mathcal{D})\neq I(G)$. Let $1\leq k\leq \nu(G)$. If $I(\mathcal{D})^{[k]}$ is linearly related, then $k=\nu(I(\mathcal{D}))$.* The next example shows that we cannot drop the hypothesis that every subgraph of $G$ has at most one perfect matching. **Example 26**.
*Let $\mathcal{D}$ be the oriented graph on vertex set $[6]$, with weights $w(1)=2$ and $w(i)=1$ for $i\in[6]\setminus\{1\}$, and with edge set $$E(\mathcal{D})=\{(2,1),(1,3),(1,4),(1,5),(1,6)\}\cup\{(i,j):2\le i<j\le 6\}.$$ Then the underlying graph $G$ has several perfect matchings, and $$I(\mathcal{D})=(x_1^2x_2,x_1x_3,x_1x_4,x_1x_5,x_1x_6,x_2x_3,x_2x_4,x_2x_5,x_2x_6,\dots,x_4x_5,x_4x_6,x_5x_6).$$ We have $\nu(I(\mathcal{D}))=3$. However, $I(\mathcal{D})^{[2]}=I(G)^{[2]}$ and $I(\mathcal{D})^{[3]}=I(G)^{[3]}$ are linearly related; indeed, they even have a linear resolution.* Before we can prove Theorem [Theorem 25](#thm: linearly related only in matching power){reference-type="ref" reference="thm: linearly related only in matching power"}, we need some preliminary lemmas. Hereafter, by abuse of notation, for a monomial $u$, we denote by $\supp(u)$ also the set of variables dividing $u$. **Lemma 27**. *Let $\mathcal{D}$ be a weighted oriented graph and let $1\leq k\leq \nu(I(\mathcal{D}))$.*

(a) *Suppose that every subgraph of the underlying graph $G$ of $\mathcal{D}$ has at most one perfect matching. Then, $u\in G(I(\mathcal{D})^{[k]})$ if and only if $u=x_1y_1^{w(y_1)}\dots x_ky_k^{w(y_k)}$ for some $(x_i,y_i)\in E(\mathcal{D})$ with $\{\{x_i,y_i\} : i=1,\dots ,k\}$ a matching in $G$.*

(b) *Let $u, v\in G(I(\mathcal{D})^{[k]})$ such that $\supp(u)\neq \supp(v)$ and $$\deg (\lcm(u,v))=\deg (u)+1=\deg(v)+1.$$ Then there exist variables $z_1\notin \supp(u)$, $z_2\notin\supp(v)$ such that $v=uz_1/z_2$, $\deg_{z_1}(v)=1$ and $\deg_{z_2}(u)=1$.*

*Proof.* (a) The "only if" part of the statement holds by the definition of matching power. The "if" part follows from Lemma [Lemma 23](#lem: distinct support){reference-type="ref" reference="lem: distinct support"} and the fact that every minimal monomial generator of $I(\mathcal{D})^{[k]}$ has a support of size $2k$.
(b) Since both $u$ and $v$ have support of size $2k$ and $\supp(u)\neq \supp(v)$, there exists a variable $z_1\in \supp(v)\setminus \supp(u)$ and $z_2\in \supp(u)\setminus \supp(v)$. Since $\deg (\lcm(u,v))=\deg (u)+1$, we get $\supp(v)\setminus \supp(u)=\{z_1\}$ and $\deg_{z_1}(v)=1$. Similarly, since $\deg (\lcm(u,v))=\deg (v)+1$, we get $\supp(u)\setminus \supp(v)=\{z_2\}$ and $\deg_{z_2}(u)=1$. Then for every $t\in \supp(u)\cap \supp(v)$, we get $\deg_t(u)=\deg_t(v)$ and the result follows. ◻ **Lemma 28**. *Let $\mathcal{D}$ be a weighted oriented graph with underlying graph $G$. Suppose that every subgraph of $G$ has at most one perfect matching. Suppose that $I(\mathcal{D})^{[k]}$ is linearly related. Let $u\in G(I(\mathcal{D})^{[k]})$ and let $x$ be a variable such that $\deg_{x}(u)=r>1$. Then $\deg_{x}(v)=r$ for every $v\in G(I(\mathcal{D})^{[k]})$.* *Proof.* Let $v\in G(I(\mathcal{D})^{[k]})$ with $v\neq u$. By Theorem [Theorem 20](#connected criterion){reference-type="ref" reference="connected criterion"} there is a path $u_0=u, u_1, u_2, \dots ,u_s=v$ in the graph $H:=G^{(u,v)}_{I(\mathcal{D})^{[k]}}$. Since $\{u_0,u_1\}\in E(H)$, by Lemma [Lemma 23](#lem: distinct support){reference-type="ref" reference="lem: distinct support"} and Lemma [Lemma 27](#lem: if deg lcm(u,v)=d+1){reference-type="ref" reference="lem: if deg lcm(u,v)=d+1"}(b) it follows that $\deg_{x}(u_1)=r$. Similarly, since $\{u_1,u_2\}\in E(H)$ it follows that $\deg_{x}(u_2)=r$. Continuing this way, we obtain $\deg_{x}(u_s)=r$. ◻ *Proof of Theorem [Theorem 25](#thm: linearly related only in matching power){reference-type="ref" reference="thm: linearly related only in matching power"}.* We assume for a contradiction that $I(\mathcal{D})^{[k]}$ is linearly related but $k<\nu(I(\mathcal{D}))$. Let $M=\{\{a_i,b_i\} : i=1,\dots ,k+1\}$ be a matching with $(a_i,b_i)\in E(\mathcal{D})$. We claim that all the $b_i$'s have the same weight, say $q$.
To see this, we let $z=(a_1b_1^{w(b_1)})\cdots(a_kb_k^{w(b_k)})(a_{k+1}b_{k+1}^{w(b_{k+1})})$. Then by Lemma [Lemma 27](#lem: if deg lcm(u,v)=d+1){reference-type="ref" reference="lem: if deg lcm(u,v)=d+1"}(a) we see that $z/(a_ib_i^{w(b_i)}) \in G(I(\mathcal{D})^{[k]})$ for each $i=1,\dots,k+1$. Since $I(\mathcal{D})^{[k]}$ is generated in a single degree, it follows that $w(b_i)=w(b_j)$ for all $i,j$. Since $I(\mathcal{D})\neq I(G)$ there is an edge $(c,d)\in E(\mathcal{D})$ with $w(d)=r>1$. We will show that $r=q$. Without loss of generality, we may assume that $$\{c,d\}\cap \{a_3,a_4, \dots ,a_{k+1}, b_3, b_4, \dots ,b_{k+1}\}=\emptyset.$$ Then $\{\{c,d\}, \{a_3,b_3\}, \dots ,\{a_{k+1},b_{k+1}\}\}$ is a matching. On the other hand, by Lemma [Lemma 27](#lem: if deg lcm(u,v)=d+1){reference-type="ref" reference="lem: if deg lcm(u,v)=d+1"}(a) $$(cd^r)(a_3b_3^q)\cdots(a_{k+1}b_{k+1}^q) \in G(I(\mathcal{D})^{[k]}) \text{ and } (a_2b_2^q)(a_3b_3^q)\cdots(a_{k+1}b_{k+1}^q) \in G(I(\mathcal{D})^{[k]}).$$ Since $I(\mathcal{D})^{[k]}$ is generated in a single degree, it follows that $r=q>1$. Let $u=(a_1b_1^r)(a_2b_2^r)\cdots(a_kb_k^r)$ and $v=(a_2b_2^r)(a_3b_3^r)\cdots(a_{k+1}b_{k+1}^r)$. Since $\deg_{b_1}(u)=r>1$, Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} implies $\deg_{b_1}(v)=r$, which is a contradiction. ◻ We can now characterize when $I(\mathcal{D})^{[k]}$ has a linear resolution or is linearly related, provided that every subgraph of $G$ has at most one perfect matching. **Theorem 29**. *Let $\mathcal{D}$ be a weighted oriented graph with underlying graph $G$. Suppose that every subgraph of $G$ has at most one perfect matching. Suppose that $I(\mathcal{D})\neq I(G)$ and $1\leq k \leq \nu(G)$. Then the following statements are equivalent.*

(a) *$I(\mathcal{D})^{[k]}$ is linearly related.*

(b) *$I(\mathcal{D})^{[k]}$ is polymatroidal.*

(c)
*$I(\mathcal{D})^{[k]}$ has a linear resolution.* *Proof.* A polymatroidal ideal has linear quotients [@HHBook2011 Theorem 12.6.2] and therefore it has a linear resolution [@HHBook2011 Proposition 8.2.1]. We will only show that (a) $\Rightarrow$ (b), because (b) $\Rightarrow$ (c) $\Rightarrow$ (a) is known. Suppose that $I(\mathcal{D})^{[k]}$ is linearly related. For the rest of the proof, keep in mind that by Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"}, for any $m_1,m_2\in G(I(\mathcal{D})^{[k]})$ we have $$\label{eq: keep in mind} \deg_{t}(m_1)=\deg_{t}(m_2) \text{ for every } t\in \supp(m_1)\cap \supp(m_2).$$ Let $u,v\in G(I(\mathcal{D})^{[k]})$. Let $\{e_1,\dots,e_k\}$ be the underlying matching (of undirected edges) for $u$, that is, $\bigcup_{i=1}^ke_i=\supp(u)$. Similarly, let $\{f_1,\dots,f_k\}$ be the underlying matching (of undirected edges) for $v$, that is, $\bigcup_{i=1}^kf_i=\supp(v)$. Let $M_{e_i}$ be the monomial factor of $u$ corresponding to $e_i$. More precisely, we define $$M_{e_i}=\prod_{t\in e_i}t^{\deg_{t}(u)}\ \ \text{ and }\ \ M_{f_i}=\prod_{t\in f_i}t^{\deg_{t}(v)}$$ for every $i=1,\dots ,k$ so that $u=M_{e_1}\dots M_{e_k}$ and $v=M_{f_1}\dots M_{f_k}$. We know from Theorem [Theorem 25](#thm: linearly related only in matching power){reference-type="ref" reference="thm: linearly related only in matching power"} that $k=\nu(G)$. Therefore, $\supp(v)\cap e_i\neq \emptyset$ for every $i=1,\dots,k$. Suppose that $z_0$ is a variable which divides $u$ but not $v$. Then by Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} we must have $\deg_{z_0}(u)=1$. We may assume that $z_0\in e_1=\{z_0,y_1\}$. Then $y_1$ divides $v$ because $\supp(v)\cap e_1\neq \emptyset$.
We may assume that $y_1\in f_1$. **(Step 1).** Let $f_1=\{y_1,z_1\}$. Assume for a moment that $z_1$ does not divide $u$. Then by Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} we must have $\deg_{z_1}(v)=1$. If $(y_1,z_1)\in E(\mathcal{D})$, then $w:=(y_1z_1)M_{e_2}\dots M_{e_k}\in G(I(\mathcal{D})^{[k]})$ by Lemma [Lemma 27](#lem: if deg lcm(u,v)=d+1){reference-type="ref" reference="lem: if deg lcm(u,v)=d+1"}(a) and $\deg_{y_1}(u)=1$ by [\[eq: keep in mind\]](#eq: keep in mind){reference-type="eqref" reference="eq: keep in mind"}. In that case, the exchange property is satisfied because $w=z_1u/z_0$. On the other hand, if $(z_1,y_1)\in E(\mathcal{D})$, then similarly the exchange property is satisfied because $w:=(z_1y_1^{w(y_1)})M_{e_2}\dots M_{e_k}\in G(I(\mathcal{D})^{[k]})$. We may assume that $z_1$ divides $u$ and $z_1\in e_2$. Let $e_2=\{y_2,z_1\}$. Then $y_2$ divides $v$ since otherwise $\nu(G)>k$. **(Step 2).** Let $f_2=\{y_2,z_2\}$. Assume for a moment that $z_2$ does not divide $u$. Then by Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} we must have $\deg_{z_2}(v)=1$. If $(y_2,z_2)\in E(\mathcal{D})$, then $w:=(y_2z_2)M_{f_1}M_{e_3}\dots M_{e_k}\in G(I(\mathcal{D})^{[k]})$ by Lemma [Lemma 27](#lem: if deg lcm(u,v)=d+1){reference-type="ref" reference="lem: if deg lcm(u,v)=d+1"} and $\deg_{y_2}(u)=1$ by [\[eq: keep in mind\]](#eq: keep in mind){reference-type="eqref" reference="eq: keep in mind"}. In that case, the exchange property is satisfied because $w=z_2u/z_0$ by [\[eq: keep in mind\]](#eq: keep in mind){reference-type="eqref" reference="eq: keep in mind"}. On the other hand, if $(z_2,y_2)\in E(\mathcal{D})$, then similarly the exchange property is satisfied because $w:=(z_2y_2^{w(y_2)})M_{f_1}M_{e_3}\dots M_{e_k}\in G(I(\mathcal{D})^{[k]})$. 
We may assume that $z_2$ divides $u$ and $z_2\in e_3$. Let $e_3=\{y_3,z_2\}$. Then $y_3$ divides $v$ since otherwise $\nu(G)>k$. If this process stops at some point, then we are done. Suppose that it continues until the last step: **(Step $\mathbf{k-1}$).** At this point, we have $e_i=\{y_i,z_{i-1}\}$ for all $1\leq i\leq k$ and $f_j=\{y_j,z_j\}$ for all $1\leq j\leq k-1$. First, observe that $y_k\in f_k$ since otherwise $\{e_1,e_2,\dots ,e_k\}\cup \{f_k\}$ is a matching in $G$, which is not possible because $\nu(G)=k$. Now, let $f_k=\{y_k, z_k\}$. Then by Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} we must have $\deg_{z_k}(v)=1$. If $(y_k,z_k)\in E(\mathcal{D})$, then $w:=(y_kz_k)M_{f_1}M_{f_2}\dots M_{f_{k-1}}\in G(I(\mathcal{D})^{[k]})$ by Lemma [Lemma 27](#lem: if deg lcm(u,v)=d+1){reference-type="ref" reference="lem: if deg lcm(u,v)=d+1"}. By [\[eq: keep in mind\]](#eq: keep in mind){reference-type="eqref" reference="eq: keep in mind"} we get $w=z_ku/z_0$. On the other hand, if $(z_k,y_k)\in E(\mathcal{D})$, then by a similar argument $w:=(z_ky_k^{w(y_k)})M_{f_1}M_{f_2}\dots M_{f_{k-1}}\in G(I(\mathcal{D})^{[k]})$ and $w=z_ku/z_0$. ◻ **Example 30**. *Let $\mathcal{D}$ be a weighted oriented graph whose underlying graph $G$ is an odd cycle, say $C_{2k+1}$ with $V(C_{2k+1})=[2k+1]$ and edge set $$E(C_{2k+1})=\{\{1,2\},\{2,3\},\dots,\{2k,2k+1\},\{2k+1,1\}\}.$$ It is well-known that $\nu(G)=k$. We claim that $I(\mathcal{D})^{[\nu(G)]}$ is linearly related if and only if $I(\mathcal{D})=I(G)$. Indeed, suppose that this is not the case but that $I(\mathcal{D})^{[\nu(G)]}$ is linearly related. Then there exists a vertex $i\in V(G)$ which is not a source such that $w(i)>1$. Up to relabeling, we may assume that $i=1$ and $(2,1)\in E(\mathcal{D})$. Hence, there is a generator of $I(\mathcal{D})^{[\nu(G)]}$ whose $x_1$-degree is $w(1)>1$.
Then, Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} would imply that all generators of $I(\mathcal{D})^{[\nu(G)]}$ have $x_1$-degree bigger than 1. However, if we consider the $k$-matching $M=\{\{2,3\},\{4,5\},\dots,\{2k,2k+1\}\}$ of undirected edges of $G$, then there is a unique generator $v$ of $I(\mathcal{D})^{[\nu(G)]}$ whose support is $V(M)$ and so $\deg_{x_1}(v)=0$, which is absurd. Thus, we must have $I(\mathcal{D})=I(G)$ and by Theorem [Theorem 12](#Thm:theoremI(G)[nu(G)]Polymatroidal){reference-type="ref" reference="Thm:theoremI(G)[nu(G)]Polymatroidal"} $I(\mathcal{D})^{[\nu(G)]}$ is linearly related, indeed it even has a linear resolution.* **Example 31**. *In the above Theorem [Theorem 29](#Thm:I(D)linRel){reference-type="ref" reference="Thm:I(D)linRel"}, the condition that every subgraph of $G$ has at most one perfect matching is crucial. For example, let $\mathcal{D}$ be a weighted oriented graph with $I(\mathcal{D})=(x_1x_2^2, x_2x_3^2, x_2x_4^2, x_3x_1^2, x_3x_4^2, x_4x_1^2)$. Then $I(\mathcal{D})^{[2]}$ has a linear resolution but it is not polymatroidal. On the other hand, we do not know the answer to the following question:* **Question 32**. *Let $\mathcal{D}$ be a weighted oriented graph with $I(\mathcal{D})\neq I(G)$ where $G$ is the underlying graph. Suppose that $I(\mathcal{D})^{[k]}$ is linearly related. Then, does $I(\mathcal{D})^{[k]}$ have a linear resolution?* If $\mathcal{D}$ is a connected weighted oriented graph with $I(\mathcal{D})\neq I(G)$, then the above question has a positive answer for $k=1$ by [@BDS23 Theorem 3.5]. # Forests whose last matching power is polymatroidal {#sec:4-EreyFic} In this section, we combinatorially classify the weighted oriented forests $\mathcal{D}$ whose last matching power $I(\mathcal{D})^{[\nu(I(\mathcal{D}))]}$ is polymatroidal. To state the classification, we recall some concepts. 
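As a computational aside, the failure of the polymatroidal property claimed in Example 31 can be checked by brute force: the generators of $I(\mathcal{D})^{[2]}$ are the products of the edge monomials over the $2$-matchings of the underlying graph (here $K_4$), and the exchange property can be tested directly on exponent vectors. The following sketch does exactly this (the encoding and helper names are ours; verifying the linear-resolution claim itself would require an actual homological computation, e.g. in Macaulay2):

```python
from itertools import combinations

# Example 31: I(D) = (x1*x2^2, x2*x3^2, x2*x4^2, x3*x1^2, x3*x4^2, x4*x1^2).
# Each generator x_a * x_b^{w(b)} is stored as an exponent vector over
# (x1, x2, x3, x4), keyed by its underlying undirected edge {a, b}.
edges = {
    frozenset({1, 2}): (1, 2, 0, 0),  # x1 * x2^2
    frozenset({2, 3}): (0, 1, 2, 0),  # x2 * x3^2
    frozenset({2, 4}): (0, 1, 0, 2),  # x2 * x4^2
    frozenset({1, 3}): (2, 0, 1, 0),  # x3 * x1^2
    frozenset({3, 4}): (0, 0, 1, 2),  # x3 * x4^2
    frozenset({1, 4}): (2, 0, 0, 1),  # x4 * x1^2
}

def matching_power_gens(k):
    """Generators of the k-th matching power: products of the edge
    monomials taken over all k-matchings of the underlying graph."""
    gens = set()
    for combo in combinations(edges, k):
        if len(frozenset().union(*combo)) == 2 * k:  # edges pairwise disjoint
            vecs = [edges[e] for e in combo]
            gens.add(tuple(sum(col) for col in zip(*vecs)))
    return gens

def is_polymatroidal(gens):
    """Exchange property: for all u, v and every i with deg_i(u) > deg_i(v),
    some j with deg_j(u) < deg_j(v) has x_j * u / x_i among the generators."""
    for u in gens:
        for v in gens:
            for i in range(4):
                if u[i] > v[i] and not any(
                    u[j] < v[j]
                    and tuple(u[t] - (t == i) + (t == j) for t in range(4)) in gens
                    for j in range(4)
                ):
                    return False
    return True

gens2 = matching_power_gens(2)
print(sorted(gens2))            # three generators, all of total degree 6
print(is_polymatroidal(gens2))  # False
```

The check confirms that $I(\mathcal{D})^{[2]}$ has exactly three generators, one for each perfect matching of $K_4$, all of degree $6$, and that the exchange property fails (for instance, for the pair $x_1x_2^2x_3x_4^2$ and $x_1^2x_2x_3^2x_4$ in the $x_4$-direction).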
A *leaf* $v$ of a graph $G$ is a vertex incident to only one edge. Any tree with at least one edge possesses at least two leaves. Let $a\in V(G)$ be a leaf and $b$ be the unique neighbor of $a$. Following [@EH2021], we say that $a$ is a *distant leaf* if at most one of the neighbors of $b$ is not a leaf. In this case, we say that $\{a,b\}$ is a *distant edge*. It is proved in [@J2004 Proposition 9.1.1] (see, also, [@EH2021 Lemma 4.2] or [@CFL Proposition 2.2]) that any forest with at least one edge has a distant leaf. We say that an edge $\{a,b\}$ of a graph $G$ is a *separated edge* if $a$ and $b$ are leaves. In this case $I(G)=I(G\setminus\{a,b\})+(ab)$. Suppose that $G$ is a forest not all of whose edges are separated. Then, the above result [@J2004 Proposition 9.1.1] implies that we can find vertices $a_1,\dots,a_t,b,c$, with $t\ge1$, such that $a_1,\dots,a_t$ are distant leaves and $\{a_1,b\},\dots,\{a_t,b\},\{b,c\}\in E(G)$. In this case we say that $(a_1,\dots,a_t\ |\ b,c)$ is a *distant configuration* of the forest $G$. Figure [\[fig:1\]](#fig:1){reference-type="ref" reference="fig:1"} displays this situation. Let $\mathcal{D}$ be a weighted oriented graph with underlying graph $G$. If $W\subset V(\mathcal{D})$, we denote by $\mathcal{D}\setminus W$ the induced weighted oriented subgraph of $\mathcal{D}$ on the vertex set $V(\mathcal{D})\setminus W$. For any edge $\{a,b\}\in E(G)$, we set $${\bf x}_{\{a,b\}}^{(\mathcal{D})}\ =\ \begin{cases} x_ax_b^{w(b)}&\textit{if}\ (a,b)\in E(\mathcal{D}),\\ x_bx_a^{w(a)}&\textit{if}\ (b,a)\in E(\mathcal{D}). \end{cases}$$ We say that $\{a,b\}\in E(G)$ is a *strong edge* if $\{a,b\}$ belongs to all matchings of $G$ having maximal size $\nu(G)$. In such a case, $I(\mathcal{D})^{[\nu(G)]}={\bf x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}$. It is clear that a separated edge is a strong edge. **Lemma 33**. *Let $G$ be a forest with distant configuration $(a_1,\dots,a_t\ |\ b,c)$ and with $\nu(G)\ge2$.
Then $\{a_i,b\}$ is a strong edge of $G$, for some $i$, if and only if, $t=1$ and $c\in V(M)$ for all $(\nu(G)-1)$-matchings $M$ of $G\setminus\{b\}$.* *Proof.* Suppose that $\{a_i,b\}$ is a strong edge for some $i$. Then $t=1$. Indeed, let $M$ be a matching of $G$ of size $\nu(G)$. Then $\{a_i,b\}\in M$. But, if $t>1$ then for some $j\ne i$, $(M\setminus\{\{a_i,b\}\})\cup\{\{a_j,b\}\}$ would also be a matching of $G$ of maximal size not containing $\{a_i,b\}$, which is absurd. Thus $t=1$. Now, suppose that there exists a $(\nu(G)-1)$-matching $M$ of $G\setminus\{b\}$ with $c\notin V(M)$. Then $M\cup\{\{b,c\}\}$ would be a maximum matching of $G$ not containing $\{a_i,b\}$, which is absurd. Conversely, assume that $(a\ |\ b,c)$ is a distant configuration of $G$ and that $c\in V(M)$, for all $(\nu(G)-1)$-matchings of $G\setminus\{b\}$. Note that every matching $N$ of $G$ of size $\nu(G)$ contains either $\{b,c\}$ or $\{a,b\}$: indeed, if $N$ contained neither, then $N$ would not cover $b$, and removing from $N$ the edge containing $c$ (or any edge, if $c\notin V(N)$) would produce a $(\nu(G)-1)$-matching of $G\setminus\{b\}$ avoiding $c$, against our assumption. But if $N$ contains $\{b,c\}$, then $N\setminus\{\{b,c\}\}$ would be a $(\nu(G)-1)$-matching of $G\setminus\{b\}$ whose vertex set does not contain $c$, against our assumption. The conclusion follows. ◻ **Theorem 34**. *Let $\mathcal{D}$ be a weighted oriented graph whose underlying graph $G$ is a forest, with $\nu(G)\ge2$. Suppose that $I(\mathcal{D})\ne I(G)$. Then, the following conditions are equivalent.* 1. *$I(\mathcal{D})^{[\nu(G)]}$ is linearly related.* 2. *$I(\mathcal{D})^{[\nu(G)]}$ is polymatroidal.* 3. *$I(\mathcal{D})^{[\nu(G)]}$ has a linear resolution.* 4. *One of the following conditions holds:* ```{=html} <!-- --> ``` 1. *$G$ has a separated edge $\{a,b\}$ such that $I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}$ is polymatroidal, and $$I(\mathcal{D})^{[\nu(G)]}={\bf x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}.$$* 2.
*$G$ has a distant configuration $(a\ |\ b,c)$ with $\{a,b\}\in E(G)$ a strong edge of $G$, $I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}$ is polymatroidal, and $$I(\mathcal{D})^{[\nu(G)]}={\bf x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}.$$* 3. *$G$ has a distant configuration $(a_1,\dots,a_t\ |\ b,c)$, $w(a_1)=\dots=w(a_t)=1$, and $I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$ is polymatroidal. Moreover the following statements hold.* - *If $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=0$, then ${\bf x}_{\{a_i,b\}}^{(\mathcal{D})}=x_{a_i}x_b^{\delta}$ with $\delta\in\{1,w(b)\}$ for all $i$, and $$\label{equation d-3-i} I(\mathcal{D})^{[\nu(G)]}=x_{b}^{\delta}[(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}].$$* - *Otherwise, $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\ne0$ is polymatroidal, $\delta=w(b)$, $w(c)=1$ and $$\label{equation d-3-ii} I(\mathcal{D})^{[\nu(G)]}=x_{b}^{w(b)}[(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}].$$* *Proof.* From Theorem [Theorem 29](#Thm:I(D)linRel){reference-type="ref" reference="Thm:I(D)linRel"} and Theorem [Theorem 25](#thm: linearly related only in matching power){reference-type="ref" reference="thm: linearly related only in matching power"} it follows that (a) $\Leftrightarrow$ (b) $\Leftrightarrow$ (c). To conclude the proof, we show that (b) $\Leftrightarrow$ (d). Firstly, we show that (b) $\Rightarrow$ (d). Suppose that $I(\mathcal{D})^{[\nu(G)]}$ is polymatroidal. If $G$ has a separated edge, then the statement (d-1) holds. Let us assume that $G$ has no separated edge. Then $G$ contains a distant configuration $(a_1,\dots,a_t\ |\ b,c)$. Suppose that $\{a_i,b\}$ is a strong edge for some $i$.
Then, Lemma [Lemma 33](#Lemma:StrongNursel){reference-type="ref" reference="Lemma:StrongNursel"} implies $t=1$; write $a=a_1$. Since $\{a,b\}$ is a strong edge, $I(\mathcal{D})^{[\nu(G)]}={\bf x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}$, and hence $I(\mathcal{D})^{[\nu(G)]}$ is polymatroidal if and only if $I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}$ is polymatroidal, so (d-2) follows. Suppose that $\{a_i,b\}$ is not a strong edge for all $i$. Every matching of $G$ of size $\nu(G)$ contains either $\{b,c\}$ or $\{a_i,b\}$ for some $i=1,\dots ,t$. Therefore $$\label{eq:I(D)Decomp} I(\mathcal{D})^{[\nu(G)]}= \ \sum_{i=1}^t{\bf x}_{\{a_i,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]} +\ {\bf x}_{\{b,c\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}.$$ We claim that (i) $w(a_i)=1$ for all $i=1,\dots,t$; (ii) there exists $\delta\in\{1,w(b)\}$ such that ${\bf x}_{\{a_i,b\}}^{(\mathcal{D})}=x_{a_i}x_b^\delta$ for all $i=1,\dots,t$; (iii) if $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\ne0$, then ${\bf x}_{\{b,c\}}^{(\mathcal{D})}=x_cx_b^{w(b)}$ and $\delta=w(b)$. Once we have proved these facts, if $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\ne0$, equation ([\[eq:I(D)Decomp\]](#eq:I(D)Decomp){reference-type="ref" reference="eq:I(D)Decomp"}) combined with (i), (ii) and (iii) implies that $$I(\mathcal{D})^{[\nu(G)]}=x_{b}^{w(b)}\Big[(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\Big].$$ Since $I(\mathcal{D})^{[\nu(G)]}$ is polymatroidal by assumption, by Lemma [Lemma 17](#lem:induced subgraph){reference-type="ref" reference="lem:induced subgraph"} applied to the graph $\mathcal{D}\setminus\{a_1,\dots,a_t\}$, it follows that $x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$ has a linear resolution.
By applying [@BH2013 Theorem 1.1], we obtain that $$\begin{aligned} (I(\mathcal{D})^{[\nu(G)]}:x_{a_1}\cdots x_{a_t})\ &=\ x_{b}^{w(b)}\Big[I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\Big]\\ &=\ x_b^{w(b)}I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]} \end{aligned}$$ has a linear resolution. Now, Theorem [Theorem 29](#Thm:I(D)linRel){reference-type="ref" reference="Thm:I(D)linRel"} implies that both $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$ and $I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$ are polymatroidal, and so (d-3-ii) follows. Otherwise, if $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=0$, then equation ([\[eq:I(D)Decomp\]](#eq:I(D)Decomp){reference-type="ref" reference="eq:I(D)Decomp"}) combined with (i) and (ii) implies that $$I(\mathcal{D})^{[\nu(G)]}=x_{b}^{\delta}[(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}].$$ By a similar argument as before, $I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$ has a linear resolution. Then, Theorem [Theorem 29](#Thm:I(D)linRel){reference-type="ref" reference="Thm:I(D)linRel"} implies that it is polymatroidal and thus (d-3-i) follows. Next, we prove (i), (ii) and (iii). *Proof of* (i): By Remark [Remark 14](#remark: assumption on sources){reference-type="ref" reference="remark: assumption on sources"} if $a_i$ is a source, then we assume $w(a_i)=1$. Assume for a contradiction that $(b, a_i)\in E(\mathcal{D})$ but $w(a_i)>1$ for some $i$. Since $\{b,c\}$ is not a strong edge, equation ([\[eq:I(D)Decomp\]](#eq:I(D)Decomp){reference-type="ref" reference="eq:I(D)Decomp"}) implies that we can find a generator $u$ of $I(\mathcal{D})^{[\nu(G)]}$ with $\deg_{x_{a_i}}(u)=w(a_i)>1$. Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} implies that all generators of $I(\mathcal{D})^{[\nu(G)]}$ must have $x_{a_i}$-degree equal to $w(a_i)$.
Then this implies that $\{b, a_i\}$ is a strong edge, which is against our assumption. So, $w(a_i)=1$ for all $i$.$\square$ *Proof of* (ii): Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} and the definition of $I(\mathcal{D})$ imply that there exists $\delta\in\{1,w(b)\}$ such that ${\bf x}_{\{a_i,b\}}^{(\mathcal{D})}=x_{a_i}x_b^{\delta}$ for all $i=1,\dots,t$.$\square$ *Proof of* (iii): Suppose that $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$ is non-zero. Let $u$ be a minimal monomial generator of $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$. Then ${\bf x}_{ \{a_i,b\}}^{(\mathcal{D})}u$ is a minimal monomial generator of $I(\mathcal{D})^{[\nu(G)]}$ whose $x_c$-degree is zero. Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"}, the assumption that $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$ is non-zero and equation ([\[eq:I(D)Decomp\]](#eq:I(D)Decomp){reference-type="ref" reference="eq:I(D)Decomp"}), imply that $\deg_{x_c}({\bf x}_{\{b,c\}}^{(\mathcal{D})})=1$. Next, we claim that $\delta=w(b)$. If $b$ is a source then $w(b)=1$ and there is nothing to prove. If $w(b)=1$, there is also nothing to prove. Suppose that $b$ is not a source and $w(b)>1$. Then there is a vertex $d\in\{a_1,\dots,a_t,c\}$ with $(d,b)\in E(\mathcal{D})$ and ${\bf x}_{\{b,d\}}^{(\mathcal{D})}=x_dx_b^{w(b)}$. Equation ([\[eq:I(D)Decomp\]](#eq:I(D)Decomp){reference-type="ref" reference="eq:I(D)Decomp"}) then implies the existence of a generator of $I(\mathcal{D})^{[\nu(G)]}$ whose $x_b$-degree is $w(b)>1$. Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"} implies that all generators of $I(\mathcal{D})^{[\nu(G)]}$ have $x_b$-degree equal to $w(b)$.
Therefore $\delta=w(b)$ and ${\bf x}_{\{b,c\}}^{(\mathcal{D})}=x_cx_b^{w(b)}$.$\square$ We now prove that (d) $\Rightarrow$ (b). If (d-1) or (d-2) holds, then (b) follows from the following fact: if $I$ is a polymatroidal ideal and $u\in S$ is a monomial, then $uI$ is again polymatroidal. Suppose that (d-3) holds. If $I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=0$, then by equation [\[equation d-3-i\]](#equation d-3-i){reference-type="eqref" reference="equation d-3-i"}, the ideal $I(\mathcal{D})^{[\nu(G)]}$ is a product of polymatroidal ideals. Therefore it is polymatroidal as well by [@HHBook2011 Theorem 12.6.3]. Now, suppose that (d-3-ii) holds. Then $I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$ and $x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$ are polymatroidal ideals. Hence, $(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$ has a linear resolution, as it is the product of monomial ideals with linear resolution in pairwise disjoint sets of variables. Therefore [@FHT2009 Corollary 2.4] implies that [\[equation d-3-ii\]](#equation d-3-ii){reference-type="eqref" reference="equation d-3-ii"} is a Betti splitting. Now, since $$x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\subset I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\subset I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$$ and $x_{a_i}$ does not divide any generator of $x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}$ and $I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}$, for any $1\le i\le t$, we obtain that $$(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\cap x_cI(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=x_c(x_{a_1},\dots,x_{a_t})I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]},$$ and this ideal has a linear resolution. Thus [@CFts1 Proposition 1.8] implies that $I(\mathcal{D})^{[\nu(G)]}$ has a linear resolution.
By Theorem [Theorem 29](#Thm:I(D)linRel){reference-type="ref" reference="Thm:I(D)linRel"} it follows that $I(\mathcal{D})^{[\nu(G)]}$ is polymatroidal and (b) follows. ◻ Inspecting the proof of Theorem [Theorem 34](#Thm:ForestPolym){reference-type="ref" reference="Thm:ForestPolym"}, we see how to construct, recursively, all weighted oriented forests $\mathcal{D}$, with a given matching number, whose last matching power $I(\mathcal{D})^{[\nu(I(\mathcal{D}))]}$ is polymatroidal. Indeed, suppose that we have constructed all weighted oriented forests $\mathcal{D}$ with $\nu(I(\mathcal{D}))=k$ and $I(\mathcal{D})^{[k]}$ polymatroidal; then, according to the three possible cases (d-1), (d-2), (d-3), we can construct all weighted oriented forests $\mathcal{H}$ with $I(\mathcal{H})^{[\nu(I(\mathcal{H}))]}$ polymatroidal and with matching number $k+1$. Let $\mathcal{D}$ be a weighted oriented graph, whose underlying graph $G$ is a forest, such that $I(\mathcal{D})\ne I(G)$. We illustrate the above procedure. If $\nu(G)=1$, then $G$ is a star graph, with, say, $V(G)=[m]$ and $E(G)=\{\{i,m\}:1\le i\le m-1\}$. If $I(\mathcal{D})^{[1]}=I(\mathcal{D})$ is polymatroidal, then $w_1=\dots=w_{m-1}=1$ by Lemma [Lemma 28](#lem:degree of variable bigger than 1){reference-type="ref" reference="lem:degree of variable bigger than 1"}. Since $I(\mathcal{D})\ne I(G)$, we have $w_m>1$ and $E(\mathcal{D})=\{(i,m):1\le i\le m-1\}$. Thus, $I(\mathcal{D})=(x_1x_m^{w_m},x_2x_m^{w_m},\dots,x_{m-1}x_m^{w_m})=x_m^{w_m}(x_1,\dots,x_{m-1})$ is polymatroidal, for it is the product of polymatroidal ideals. Now, let $\nu(G)=2$, and suppose that $I(\mathcal{D})^{[2]}$ is polymatroidal. By Theorem [Theorem 34](#Thm:ForestPolym){reference-type="ref" reference="Thm:ForestPolym"}, only one of the possibilities (d-1), (d-2), (d-3) occurs.
Exploiting these three possibilities, one can see that the only weighted oriented forests $\mathcal{D}$ such that $I(\mathcal{D})^{[2]}$ is polymatroidal are the following ones:

[\[fig:2\]]{#fig:2 label="fig:2"}

In the second graph displayed above, the edge connecting the two bottom vertices can have an arbitrary orientation.

# References

S. Bandari, J. Herzog, *Monomial localizations and polymatroidal ideals*, Eur. J. Combin. **34** (2013), 752--763.
A. Banerjee, B. Chakraborty, K. K. Das, M. Mandal, S. Selvaraja, *Equality of ordinary and symbolic powers of edge ideals of weighted oriented graphs*, Comm. Algebra **51** (2023), no. 4, 1575--1580.
A. Banerjee, K. K. Das, S. Selvaraja, *Powers of edge ideals of weighted oriented graphs with linear resolutions*, J. Algebra Appl. **22** (2023), no. 7, Paper No. 2350148.
M. Bigdeli, J. Herzog, R. Zaare-Nahandi, *On the index of powers of edge ideals*, Comm. Algebra **46** (2018), 1080--1095.
B. Casiday, S. Kara, *Betti numbers of weighted oriented graphs*, Electron. J. Combin. **28** (2021), no. 2, Paper No. 2.33, 20 pp.
M. Crupi, A. Ficarra, *Linear resolutions of t-spread lexsegment ideals via Betti splittings*, J. Algebra Appl., doi 10.1142/S0219498824500725.
M. Crupi, A. Ficarra, E. Lax, *Matchings, squarefree powers and Betti splittings*, 2023, available at <https://arxiv.org/abs/2304.00255>.
N. Erey, T. Hibi, *Squarefree powers of edge ideals of forests*, Electron. J. Combin. **28** (2021), no. 2, P2.32.
N. Erey, J. Herzog, T. Hibi, S. Saeedi Madani, *Matchings and squarefree powers of edge ideals*, J. Combin. Theory Ser. A **188** (2022).
N. Erey, J. Herzog, T. Hibi, S. Saeedi Madani, *The normalized depth function of squarefree powers*, Collect. Math. (2023), doi 10.1007/s13348-023-00392-x.
A. Ficarra, *Homological shifts of polymatroidal ideals*, 2022, available at <https://arxiv.org/abs/2205.04163>.
A. Ficarra.
*HomologicalShiftIdeals, Macaulay2 Package*, 2023, preprint [arXiv:2309.09271](arXiv:2309.09271). A. Ficarra, *MatchingPowers, Macaulay2 Package* (2023). A. Ficarra, J. Herzog, T. Hibi, *Behaviour of the normalized depth function*, Electron. J. Comb., **30** (2), (2023) P2.31. C. A. Francisco, H. T. Hà, A. Van Tuyl, *Splittings of monomial ideals*, Proc. Amer. Math. Soc., **137** (10) (2009), 3271-3282. R. Fröberg, *On Stanley-Reisner rings*, Topics in algebra, Banach Center Publications, **26** (2) (1990), 57--70. D. R. Grayson, M. E. Stillman. *Macaulay2, a software system for research in algebraic geometry*. Available at <http://www.math.uiuc.edu/Macaulay2>. H.T. Hà, K.N. Lin, S. Morey, E. Reyes, R.H. Villarreal, *Edge ideals of oriented graphs*, International Journal of Algebra and Computation, 29 (03), (2019), 535--559. J. Herzog, T. Hibi. *Monomial ideals*, Graduate texts in Mathematics **260**, Springer, 2011. J. Herzog, S. Moradi, M. Rahimbeigi, G. Zhu, *Homological shift ideals*, Collect. Math. **72** (2021), 157--174. J. Herzog, T. Hibi, X. Zheng, *Monomial ideals whose powers have a linear resolution*, Math. Scand., **95** (2004), 23--32. S. Jacques, *Betti numbers of graph ideals*, 2004, Ph.D. thesis, University of Sheffield, Great Britain. S. Kara, J. Biermann, K.N. Lin, A. O'Keefe, *Algebraic invariants of weighted oriented graphs*, J. Algebraic Combin. **55** (2022), no.2, 461--491. J. Martínez-Bernal, Y. Pitones and R. H. Villarreal, *Minimum distance functions of graded ideals and Reed-Muller-type codes*, J. Pure Appl. Algebra **221** (2017), 251--275. C. Paulsen and S. Sather-Wagstaff, *Edge ideals of weighted graphs*, J. Algebra Appl., 12 (2013), 1250223-1-24. Y. Pitones, E. Reyes, J. Toledo, *Monomial ideals of weighted oriented graphs*, Electron. J. Combin., **26** (2019), no. 3, Research Paper P3.44. S. A. 
Seyed Fakhari, *On the Castelnuovo-Mumford regularity of squarefree powers of edge ideals*, 2022, available at <https://arxiv.org/abs/2303.02791>.
S. A. Seyed Fakhari, *On the regularity of squarefree part of symbolic powers of edge ideals*, 2023, available at <https://arxiv.org/abs/2207.08559>.
--- abstract: | In this paper, we establish the existence of positive non-decreasing radial solutions for a nonlinear mixed local and nonlocal Neumann problem in the ball. No growth assumption on the nonlinearity is required. We also provide a criterion for the existence of non-constant solutions provided the problem possesses a trivial constant solution. address: $^1$ School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada. author: - David Amundsen$^1$, Abbas Moameni$^1$ and Remi Yvant Temgoua$^1$ title: Radial positive solutions for mixed local and nonlocal supercritical Neumann problem --- *Keywords.* Mixed local and nonlocal Neumann problem, Variational principle, Radial solutions, Supercritical nonlinearity.\ # Introduction and main results {#section:introduction} Given $s>\frac{1}{2}$, $2\leq p, q$ with $p\neq q$, we consider the following mixed local and nonlocal Neumann problem $$\label{e2} \left\{\begin{aligned} {\mathcal L}u+u&=a(|x|)|u|^{p-2}u-b(|x|)|u|^{q-2}u~~~\text{in}~~B_1 \\ {\mathcal N}_su&=0 \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial u}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{on}~~\partial B_1, \end{aligned} \right.$$ where $B_1$ is the unit ball in $\mathbb{R}^N,~N\geq2$. We study the existence of radial solutions of [\[e2\]](#e2){reference-type="eqref" reference="e2"} under the assumptions that the non-negative radial functions $a$ and $b$ are non-decreasing and non-increasing, respectively. Here, ${\mathcal L}:=-\Delta +(-\Delta)^s$ is the so-called mixed local and nonlocal operator.
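Since both $-\Delta$ and $(-\Delta)^s$ annihilate constant functions, and the Neumann conditions hold trivially for constants, a constant $u\equiv c>0$ solves [\[e2\]](#e2){reference-type="eqref" reference="e2"} with constant coefficients $a,b$ precisely when $c=a\,c^{p-1}-b\,c^{q-1}$; this observation reappears below when non-constant solutions are discussed. A minimal numerical sketch with hypothetical parameter values $a=b=1$, $p=4$, $q=3$ (chosen only for illustration, so that $c$ is the positive root of $c^{2}-c-1=0$):

```python
# Constant solutions of the equation in (e2) for constant coefficients:
# u ≡ c > 0 solves it iff  c = a*c^(p-1) - b*c^(q-1).
# Hypothetical parameter values, chosen only for illustration:
a, b, p, q = 1.0, 1.0, 4, 3

def residual(c):
    # residual of the constant-solution equation
    return a * c ** (p - 1) - b * c ** (q - 1) - c

# bisection on [1, 2]: residual(1) = -1 < 0 and residual(2) = 2 > 0
lo, hi = 1.0, 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = 0.5 * (lo + hi)
print(c)  # ~1.618034, the positive root of c^2 - c - 1 = 0
```

For non-constant radial weights $a$ and $b$, no such constant solution is available in general, which is one reason the existence results below concern genuinely radial profiles.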
Recall that $-\Delta$ denotes the classical Laplace operator and $(-\Delta)^s$ the standard fractional Laplacian defined for every sufficiently regular function $u: \mathbb{R}^N\rightarrow\mathbb{R}$ by $$(-\Delta)^su(x)=c_{N,s}P.V.\int_{\mathbb{R}^N}\frac{u(x)-u(y)}{|x-y|^{N+2s}}\ dy,~~~x\in\mathbb{R}^N,$$ where $c_{N,s}$ is a normalization constant and "$P.V.$" stands for the Cauchy principal value. Moreover, $\nu$ is the exterior normal to $B_1$ and ${\mathcal N}_s$ is the nonlocal Neumann condition introduced in [@dipierro2017nonlocal] defined by $${\mathcal N}_su(x)=\int_{B_1}\frac{u(x)-u(y)}{|x-y|^{N+2s}}\ dy\quad\quad\text{for all}~~x\in\mathbb{R}^N\setminus\overline{B}_1.$$ Neumann problems involving mixed local and nonlocal operators have been explored only to a very limited extent in the literature. In this regard, we can reference [@dipierro2022linear; @dipierro2022non; @mugnai2022mixed]. In [@dipierro2022linear], the authors discussed the spectral properties of a weighted eigenvalue problem and presented a global bound for subsolutions. In [@dipierro2022non] a logistic equation is investigated. Very recently, Mugnai and Lippi [@mugnai2022mixed] obtained an existence result for a nonlinear problem governed by a mixed local and nonlocal operator with Neumann conditions; there, the nonlinearity is of subcritical type. The aim of this paper is to investigate a nonlinear problem for ${\mathcal L}=-\Delta+(-\Delta)^s$ with Neumann conditions. The nonlinearity that we consider has no growth condition, allowing for supercritical growth. To the best of our knowledge, such types of problems have not yet been treated in the literature. Our motivation comes from the study of a nonlinear problem of the form $$\label{sample} {\mathcal P}u+u={\mathcal F}(u)~~\text{in}~~\Omega,$$ with some Neumann conditions on $u$.
Here, $\Omega$ is a domain in $\mathbb{R}^N$, ${\mathcal P}$ is some operator (local/nonlocal/mixed) and ${\mathcal F}$ is a nonlinear function with supercritical growth. One of the main questions addressed in [\[sample\]](#sample){reference-type="eqref" reference="sample"} is the existence of solutions. We briefly present a literature survey of problems of the type [\[sample\]](#sample){reference-type="eqref" reference="sample"}. - Local case, i.e., ${\mathcal P}=-\Delta$: Serra and Tilli consider in [@serra2011monotonicity] the Neumann problem $$\label{classical-neumann-problem} -\Delta u+u= a(|x|)f(u),~~~~~u>0~~~~~\text{in}~~B,\quad\quad\frac{\partial u}{\partial\nu}=0~~\text{on}~~\partial B$$ and they use a monotonicity constraint to show that, under some assumptions on $a$ and $f$, [\[classical-neumann-problem\]](#classical-neumann-problem){reference-type="eqref" reference="classical-neumann-problem"} has at least one radially increasing solution. A similar result has been obtained in [@barutello2008note] by applying a shooting method when $a(|x|)f(u)$ is replaced by $|x|^{\alpha}|u|^p$ for every $p>1$ and $\alpha>0$. In [@bonheure2012increasing] the authors apply topological and variational arguments to prove that problem [\[classical-neumann-problem\]](#classical-neumann-problem){reference-type="eqref" reference="classical-neumann-problem"} admits at least one non-decreasing radial solution without any growth assumption on $f$. They also obtained the existence of a non-constant solution in the case that the problem has nontrivial constant solutions. These results were extended to the $p$-Laplace operator in [@colasuonno2016p; @secchi2012increasing]. 
Very recently, the authors of [@moameni2019existence] considered [\[classical-neumann-problem\]](#classical-neumann-problem){reference-type="eqref" reference="classical-neumann-problem"} when $a(|x|)f(u)$ is replaced by $a(|x|)|u|^{p-2}u-b(|x|)|u|^{q-2}u$, and obtained the existence of a radially non-decreasing solution under some assumptions on $a$ and $b$, without imposing any growth assumption on $p$ and $q$. The method they use relies on a new variational principle recently developed in [@moameni2017variational; @moameni2018variational] that consists of restricting the Euler-Lagrange functional to a convex set. Other interesting papers in which the Neumann problem of type [\[classical-neumann-problem\]](#classical-neumann-problem){reference-type="eqref" reference="classical-neumann-problem"} is considered are (but not limited to) [@yadava1993conjecture; @yadava1991existence; @cowan2016supercritical; @bonheure2016multiple; @faraci2003multiplicity].   \ - Nonlocal case, i.e., ${\mathcal P}=(-\Delta)^s$, for $s\in (0,1)$: Cinti and Colasuonno studied in [@cinti2020nonlocal] the nonlocal Neumann problem $$\label{nonlocal-neumann-problem} (-\Delta)^su+u=f(u),~~~~~u\geq0~~~~\text{in}~~~\Omega,~~~~~{\mathcal N}_su=0~~~~\text{in}~~~\mathbb{R}^N\setminus\overline{\Omega}$$ where $s\in(\frac{1}{2},1)$, $\Omega\subset\mathbb{R}^N$ is either a ball or an annulus and the nonlinearity $f$ is assumed to have supercritical growth. Using a truncation argument, they showed that problem [\[nonlocal-neumann-problem\]](#nonlocal-neumann-problem){reference-type="eqref" reference="nonlocal-neumann-problem"} possesses a positive non-decreasing radial solution. In the case when the problem admits a nontrivial constant solution, they also addressed the existence of a non-constant solution as in [@bonheure2012increasing].
\ - Mixed operator case, i.e., ${\mathcal P}={\mathcal L}=-\Delta+(-\Delta)^s$: As stated above, very few references investigating the Neumann problem governed by mixed local and nonlocal operators ${\mathcal L}$ can be found in the literature apart from [@dipierro2022linear; @dipierro2022non; @mugnai2022mixed]. As we will see in the rest of the paper, an adaptation can be made to study the Neumann problem of type [\[sample\]](#sample){reference-type="eqref" reference="sample"} with ${\mathcal P}={\mathcal L}$ and ${\mathcal F}$ being a supercritical nonlinearity. We will focus on the model case ${\mathcal F}(u)=a(|x|)|u|^{p-2}u-b(|x|)|u|^{q-2}u$ as in [@moameni2019existence]. The consequent lack of compactness can be circumvented by working in a convex set of non-negative and non-decreasing radial functions. Although our argument is closely related to the one in [@moameni2019existence], it involves some technical issues in many regards due to the presence of the nonlocal term. For instance, to the best of our knowledge, no higher regularity is available for the operator ${\mathcal L}$ with Neumann conditions. Moreover, the action of ${\mathcal L}$ on radial functions does not give rise to an ODE in contrast with the Laplace operator $-\Delta$.   \ The main motivation for considering problem [\[e2\]](#e2){reference-type="eqref" reference="e2"} in this paper comes from the above discussion. To the best of our knowledge, there is no result involving a mixed local and nonlocal operator ${\mathcal L}$ with a supercritical nonlinearity, and with Neumann conditions. We aim to bridge this gap in the present paper. Notice that the case of Dirichlet condition has been considered very recently in [@amundsen2023mixed].    \ \ To state our main results, we introduce the following assumptions on $a$ and $b$: - $a\in L^1(0,1)$ is non-decreasing and $a(r)>0$ for a.e. 
$r\in [0,1]$;
- $b\in L^1(0,1)$ is non-negative and non-increasing in $[0,1]$.

Our first main result is concerned with the case when $p>q$. It reads as follows. **Theorem 1**. *Assume that $(A)$ and $(B)$ hold, and $p>q$. Then, problem [\[e2\]](#e2){reference-type="eqref" reference="e2"} admits at least one positive non-decreasing radial solution.* Notice that assumptions $(A)$ and $(B)$ are satisfied by $a=|x|^{\alpha}$ and $b=\frac{\mu}{|x|^{\beta}}$, where $\alpha, \mu\geq0$ and $0\leq\beta<N$. Therefore, Theorem [Theorem 1](#first-main-result){reference-type="ref" reference="first-main-result"} applies to the following problem $$\label{e2-2} \left\{\begin{aligned} {\mathcal L}u+u&=|x|^{\alpha}|u|^{p-2}u-\frac{\mu}{|x|^{\beta}}|u|^{q-2}u\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_su&=0 \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial u}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{on}~~\partial B_1. \end{aligned} \right.$$ In particular, when $\mu=0$ and $\alpha>0$, [\[e2-2\]](#e2-2){reference-type="eqref" reference="e2-2"} is the so-called Neumann mixed local and nonlocal Hénon problem.\
Note that when the functions $a$ and $b$ are constants in problem [\[e2\]](#e2){reference-type="eqref" reference="e2"}, this problem always possesses a constant solution. In our next result we show that the solution obtained in Theorem [Theorem 1](#first-main-result){reference-type="ref" reference="first-main-result"} is non-constant. **Theorem 2**. *Suppose that $2<q<p$.
Assume also that $a\equiv 1,~ b>0$ and $$\label{key-condition-for-non-constancy-of-solutions} b(q-2)<p-2.$$ Then, problem [\[e2\]](#e2){reference-type="eqref" reference="e2"} admits at least one non-constant non-decreasing radial solution.* In Theorem [Theorem 12](#second-main-result){reference-type="ref" reference="second-main-result"} we address another key result, concerning the case $p<q$. In this case, under the integrability assumption $$\Big(\frac{a^q}{b^p}\Big)^{\frac{1}{q-p}}\in L^1(0,1),$$ we prove the existence of a positive non-decreasing radial solution.\
The paper is organized as follows. Section [2](#section:proof-of-main-result){reference-type="ref" reference="section:proof-of-main-result"} is devoted to the proof of the existence results for both cases $p<q$ and $p>q$. In Section [3](#section:non-constency){reference-type="ref" reference="section:non-constency"} we study the non-constancy of solutions.\
# Existence of solutions via a variational method {#section:proof-of-main-result}
We begin this section by introducing the functional space in which we work. For all $s\in(0,1)$, we first recall the fractional Sobolev space $H^s_{B_1}$ introduced in [@dipierro2017nonlocal] and defined as $$H^s_{B_1}:=\Big\{u: \mathbb{R}^N\to\mathbb{R}: u|_{B_1}\in L^2(B_1)~~\text{and}~~\iint_{{\mathcal Q}}\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\ dxdy<+\infty\Big\},$$ where ${\mathcal Q}=\mathbb{R}^{2N}\setminus(B_1^c)^2$.
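The double integral over ${\mathcal Q}$ just introduced can also be sketched numerically. The snippet below (an illustrative numerical sanity check, not part of the analysis; the parameters `s`, `L` and `n` are arbitrary choices) approximates the ${\mathcal Q}$-restricted Gagliardo energy in dimension $N=1$, with $B_1=(-1,1)$, by a midpoint rule; the diagonal cells are excluded, which is harmless for a Lipschitz $u$ since the integrand then vanishes at the diagonal when $s<1$.

```python
import numpy as np

def gagliardo_Q(u, s=0.25, L=8.0, n=600):
    """Midpoint-rule estimate of the double integral of
    |u(x)-u(y)|^2 / |x-y|^(1+2s) over Q, for N = 1 and B_1 = (-1, 1),
    where Q is the set of pairs (x, y) with at least one of x, y in B_1.
    The integration window is truncated to (-L, L), so u should vanish
    well inside that window."""
    h = 2.0 * L / n
    x = -L + h * (np.arange(n) + 0.5)              # midpoint grid
    X, Y = np.meshgrid(x, x, indexing="ij")
    in_Q = (np.abs(X) < 1.0) | (np.abs(Y) < 1.0)   # membership in Q
    mask = in_Q & (np.abs(X - Y) > 1e-12)          # drop the diagonal cells
    den = np.where(mask, np.abs(X - Y), 1.0) ** (1.0 + 2.0 * s)
    vals = (u(X) - u(Y)) ** 2 / den
    return float(np.sum(vals[mask]) * h * h)

hat = lambda t: np.maximum(0.0, 1.0 - np.abs(t))   # Lipschitz, supported in B_1
energy = gagliardo_Q(hat)                          # finite for every s in (0, 1)
```

The hat function belongs to $H^1(B_1)$ and is supported in $B_1$, so its energy is finite; a function vanishing identically has zero energy, as expected.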
Now, we define $$\mathbb{X}=\mathbb{X}(B_1)=H^1(B_1)\cap H^s_{B_1}.$$ Then $\mathbb{X}$ is a Hilbert space with scalar product $$\langle u, v\rangle_{\mathbb{X}}:=\int_{B_1}uv\ dx+\int_{B_1}\nabla u\cdot \nabla v\ dx+\iint_{{\mathcal Q}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\ dxdy\quad\text{for all}~~u, v\in \mathbb{X}.$$ We also introduce the seminorm $[\cdot]_{\mathbb{X}}$ defined for every $u\in \mathbb{X}$ by $$[u]^2_{\mathbb{X}}=\int_{B_1}|\nabla u|^2\ dx+\iint_{{\mathcal Q}}\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\ dxdy.$$ Then $\mathbb{X}$ is endowed with the norm $$\|u\|^2_{\mathbb{X}}=\|u\|^2_{L^2(B_1)}+[u]^2_{\mathbb{X}}.$$ Let us also recall the classical fractional Sobolev space $H^s(B_1)$ defined by $$H^s(B_1):=\{u\in L^2(B_1):|u|_{H^s(B_1)}<\infty\}$$ equipped with the norm $$\|u\|_{H^s(B_1)}=\|u\|_{L^2(B_1)}+|u|_{H^s(B_1)},$$ where $$|u|^2_{H^s(B_1)}=\int_{B_1}\int_{B_1}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\ dxdy.$$ Let $\mathbb{X}_{rad}$ be the space of radial functions in $\mathbb{X}$. Denote by $L^p_a(B_1)$ (resp. $L^q_b(B_1)$) the weighted $L^p$ (resp. $L^q$) space defined by $$L^p_a(B_1):=\Big\{u: \int_{B_1}a(|x|)|u|^p\ dx<\infty\Big\}\quad\quad \text{resp.}\quad\quad L^q_b(B_1):=\Big\{u: \int_{B_1}b(|x|)|u|^q\ dx<\infty\Big\}.$$ Now, we consider the Banach space $V$ defined by $$V=\mathbb{X}_{rad}\cap L^p_a(B_1)\cap L^q_b(B_1),$$ equipped with the norm $\|\cdot\|_{V}$ defined by $$\|u\|_{V}:=\|u\|_{\mathbb{X}}+\|u\|_{L^p_a(B_1)}+\|u\|_{L^q_b(B_1)},$$ where $$\|u\|_{L^p_a(B_1)}:=\Big(\int_{B_1}a(|x|)|u|^p\ dx\Big)^{\frac{1}{p}}\quad\quad\text{and}\quad\quad \|u\|_{L^q_b(B_1)}:=\Big(\int_{B_1}b(|x|)|u|^q\ dx\Big)^{\frac{1}{q}}.$$ Notice that throughout this section, the functions $a$ and $b$ are assumed to satisfy assumptions $(A)$ and $(B)$. Let $I$ be the Euler-Lagrange functional corresponding to [\[e2\]](#e2){reference-type="eqref" reference="e2"}.
Then, $$I(u)=\frac{1}{2}\int_{B_1}(|\nabla u|^2+|u|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\ dxdy+\frac{1}{q}\int_{B_1}b(|x|)|u|^q\ dx-\frac{1}{p}\int_{B_1}a(|x|)|u|^p\ dx.$$ We now define $\Psi: V\to\mathbb{R}$ and $\Phi: V\to\mathbb{R}$ by $$\Psi(u):=\frac{1}{2}\int_{B_1}(|\nabla u|^2+|u|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\ dxdy+\frac{1}{q}\int_{B_1}b(|x|)|u|^q\ dx$$ and $$\Phi(u):=\frac{1}{p}\int_{B_1}a(|x|)|u|^p\ dx$$ so that $$I:=\Psi-\Phi.$$ Clearly, $\Phi\in C^1(V,\mathbb{R})$ and $\Psi$ is a proper, convex, and lower semi-continuous function. Moreover, $\Psi$ is Gâteaux differentiable with $$\begin{aligned} D\Psi(u)(v)&=\int_{B_1}(\nabla u\cdot\nabla v+uv)\ dx+\iint_{{\mathcal Q}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\ dxdy+\int_{B_1}b(|x|)|u|^{q-2}uv\ dx\\ &=\langle u, v\rangle_{\mathbb{X}}+\int_{B_1}b(|x|)|u|^{q-2}uv\ dx.\end{aligned}$$ Next, we consider the convex set consisting of non-negative, radial, non-decreasing functions $$\label{convex-set} K:=\Bigg\{u\in V: \begin{aligned} & u~\text{is radial and}~ u\geq0~\text{in}~\mathbb{R}^N,\\ & u(r_1)\leq u(r_2)~~\text{for all}~~0<r_1\leq r_2<1 \end{aligned}\Bigg\}.$$ We then denote by $$\label{restricted-euler-lagrange-function} I_K=\Psi_K-\Phi$$ the restriction of $I$ to $K$, where $$\label{psi-k} \Psi_K(u)=\left\{\begin{aligned} &\Psi(u)~~~\text{if}~~u\in K,\\ &+\infty~~~\text{otherwise}. \end{aligned} \right.$$ We now recall the following definition of a critical point for lower semi-continuous functions, due to Szulkin [@szulkin1986minimax]. **Definition 3**. Let $V$ be a real Banach space, $\Phi\in C^1(V,\mathbb{R})$ and $\psi: V\to (-\infty,+\infty]$ be proper, convex and lower semi-continuous.
A point $u\in V$ is a critical point of $$\label{energy-functional} F=\psi-\Phi$$ if $u\in Dom(\psi)$ and it satisfies the inequality $$\label{critical-point} \langle D\Phi(u), u-v\rangle+\psi(v)-\psi(u)\geq0,~~\text{for all}~~v\in V.$$ We also have the following **Definition 4**. Let $F=\psi-\Phi$ and $c\in\mathbb{R}$. We say that $F$ satisfies the Palais-Smale compactness condition **(PS)** if every sequence $\{u_k\}$ such that $F(u_k)\to c$ and $$\langle D\Phi(u_k), u_k-v\rangle+\psi(v)-\psi(u_k)\geq-\varepsilon_k\|u_k-v\|_{V},~~~\text{for all}~~v\in V,$$ where $\varepsilon_k\to0$, possesses a convergent subsequence. The next two results are due to Szulkin [@szulkin1986minimax]. **Theorem 5**. *(Mountain pass theorem)[\[mountain-pass-theorem\]]{#mountain-pass-theorem label="mountain-pass-theorem"} Suppose that $F:V\to (-\infty,+\infty]$ is as in [\[energy-functional\]](#energy-functional){reference-type="eqref" reference="energy-functional"} and satisfies **(PS)** and suppose moreover that $F$ satisfies mountain pass geometry **(MPG)*** - *$F(0)=0$* - *there exists $e\in V$ such that $F(e)\leq 0$* - *there exists some $\eta$ such that $0<\eta<\|e\|$ and for every $u\in V$ with $\|u\|=\eta$ one has $F(u)>0$.* *Then $F$ has a critical value $c$ which is defined by $$\label{critical-value} c=\inf_{\gamma\in\Gamma}\sup_{t\in [0,1]}F(\gamma(t)),$$ where $\Gamma=\{\gamma\in C([0,1],V): \gamma(0)=0,~ \gamma(1)=e\}$.* **Theorem 6**. *Suppose that $F: V\to (-\infty,+\infty]$ is as in [\[energy-functional\]](#energy-functional){reference-type="eqref" reference="energy-functional"} and satisfies the Palais-Smale condition. If $F$ is bounded from below, then $c=\inf_{u\in V}F(u)$ is a critical value.* Inspired by a variational principle in [@moameni2018variational], we have the following result. **Theorem 7**. 
*Let $V=\mathbb{X}_{rad}\cap L^p_a(B_1)\cap L^q_b(B_1)$, and let $K$ be the convex and weakly closed subset of $V$ defined in [\[convex-set\]](#convex-set){reference-type="eqref" reference="convex-set"}. Suppose the following two assertions hold:*
- *The functional $I_K$ has a critical point $\overline{u}\in V$ in the sense of Definition [Definition 3](#def3){reference-type="ref" reference="def3"};*
- *there exists $\overline{v}\in K$ with ${\mathcal N}_s\overline{v}=0$ in $\mathbb{R}^N\setminus\overline{B}_1$ and $\frac{\partial\overline{v}}{\partial\nu}=0$ on $\partial B_1$ such that $$\label{v-tilde-equation} {\mathcal L}\overline{v}+\overline{v}+b(|x|)|\overline{v}|^{q-2}\overline{v}=D\Phi(\overline{u})=a(|x|)|\overline{u}|^{p-2}\overline{u},$$ in the weak sense, namely, $$\begin{aligned} \label{v-tilde-weak-formulation} \nonumber\int_{B_1}\nabla\overline{v}\cdot\nabla\varphi\ dx+\int_{B_1}\overline{v}\varphi\ dx+\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ &dxdy+\int_{B_1}b(|x|)|\overline{v}|^{q-2}\overline{v}\varphi\ dx\\ &=\int_{B_1}D\Phi(\overline{u})\varphi\ dx, \quad\forall\varphi\in V.
\end{aligned}$$* *Then $\overline{u}\in K$ is a weak solution of the equation $$\label{u1} {\mathcal L}u+u=a(|x|)|u|^{p-2}u-b(|x|)|u|^{q-2}u$$ with Neumann conditions.* *Proof.* Since, by assumption $(i)$, $\overline{u}$ is a critical point of $I_K$, Definition [Definition 3](#def3){reference-type="ref" reference="def3"} gives $$\begin{aligned} &\frac{1}{2}\int_{B_1}(|\nabla\varphi|^2+|\varphi|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\varphi(x)-\varphi(y))^2}{|x-y|^{N+2s}}\ dxdy+\frac{1}{q}\int_{B_1}b(|x|)|\varphi|^q\ dx\\ &-\frac{1}{2}\int_{B_1}(|\nabla\overline{u}|^2+|\overline{u}|^2)\ dx-\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{u}(x)-\overline{u}(y))^2}{|x-y|^{N+2s}}\ dxdy-\frac{1}{q}\int_{B_1}b(|x|)|\overline{u}|^q\ dx\\ &\geq \langle D\Phi(\overline{u}),\varphi-\overline{u}\rangle=\int_{B_1}D\Phi(\overline{u})(\varphi-\overline{u})\ dx, \quad\quad\forall\varphi\in K. \end{aligned}$$ By taking in particular $\varphi=\overline{v}$, the above inequality becomes $$\begin{aligned} \label{l1} \nonumber&\frac{1}{2}\int_{B_1}(|\nabla\overline{v}|^2+|\overline{v}|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))^2}{|x-y|^{N+2s}}\ dxdy+\frac{1}{q}\int_{B_1}b(|x|)|\overline{v}|^q\ dx\\ \nonumber&-\frac{1}{2}\int_{B_1}(|\nabla\overline{u}|^2+|\overline{u}|^2)\ dx-\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{u}(x)-\overline{u}(y))^2}{|x-y|^{N+2s}}\ dxdy-\frac{1}{q}\int_{B_1}b(|x|)|\overline{u}|^q\ dx\\ &\geq\int_{B_1}D\Phi(\overline{u})(\overline{v}-\overline{u})\ dx.
\end{aligned}$$ We now use $\varphi=\overline{v}-\overline{u}$ as a test function in [\[v-tilde-weak-formulation\]](#v-tilde-weak-formulation){reference-type="eqref" reference="v-tilde-weak-formulation"} to get $$\begin{aligned} \label{l2} \nonumber\int_{B_1}\nabla\overline{v}\cdot\nabla(\overline{v}-\overline{u})\ dx+\int_{B_1}\overline{v}(\overline{v}-\overline{u})\ dx&+\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))((\overline{v}-\overline{u})(x)-(\overline{v}-\overline{u})(y))}{|x-y|^{N+2s}}\ dxdy\\ &+\int_{B_1}b(|x|)|\overline{v}|^{q-2}\overline{v}(\overline{v}-\overline{u})\ dx=\int_{B_1}D\Phi(\overline{u})(\overline{v}-\overline{u})\ dx. \end{aligned}$$ Plugging [\[l2\]](#l2){reference-type="eqref" reference="l2"} into [\[l1\]](#l1){reference-type="eqref" reference="l1"}, we get $$\begin{aligned} \label{l3} \nonumber&\frac{1}{2}\int_{B_1}(|\nabla\overline{v}|^2+|\overline{v}|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))^2}{|x-y|^{N+2s}}\ dxdy+\frac{1}{q}\int_{B_1}b(|x|)|\overline{v}|^q\ dx\\ \nonumber&-\frac{1}{2}\int_{B_1}(|\nabla\overline{u}|^2+|\overline{u}|^2)\ dx-\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{u}(x)-\overline{u}(y))^2}{|x-y|^{N+2s}}\ dxdy-\frac{1}{q}\int_{B_1}b(|x|)|\overline{u}|^q\ dx\\ \nonumber&\geq\int_{B_1}\nabla\overline{v}\cdot\nabla(\overline{v}-\overline{u})\ dx+\int_{B_1}\overline{v}(\overline{v}-\overline{u})\ dx+\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))((\overline{v}-\overline{u})(x)-(\overline{v}-\overline{u})(y))}{|x-y|^{N+2s}}\ dxdy\\ &+\int_{B_1}b(|x|)|\overline{v}|^{q-2}\overline{v}(\overline{v}-\overline{u})\ dx. \end{aligned}$$ Note that $t\mapsto f(t)=\frac{1}{q}|t|^q$ is convex.
Therefore, for all $t_1, t_2\in\mathbb{R}$, $$\label{property-of-covex-functions} f(t_2)-f(t_1)\geq f'(t_1)(t_2-t_1)=|t_1|^{q-2}t_1(t_2-t_1).$$ By substituting $t_1=\overline{v}$ and $t_2=\overline{u}$ in [\[property-of-covex-functions\]](#property-of-covex-functions){reference-type="eqref" reference="property-of-covex-functions"}, we have $$\frac{1}{q}|\overline{u}|^q-\frac{1}{q}|\overline{v}|^q\geq |\overline{v}|^{q-2}\overline{v}(\overline{u}-\overline{v}).$$ Then, multiplying the above inequality by $b(|x|)$ and integrating over $B_1$, one gets $$\frac{1}{q}\int_{B_1}b(|x|)|\overline{u}|^q\ dx-\frac{1}{q}\int_{B_1}b(|x|)|\overline{v}|^q\ dx\geq \int_{B_1}b(|x|)|\overline{v}|^{q-2}\overline{v}(\overline{u}-\overline{v})\ dx.$$ Taking this into account, it follows from [\[l3\]](#l3){reference-type="eqref" reference="l3"} that $$\begin{aligned} \label{l3'} \nonumber&\frac{1}{2}\int_{B_1}(|\nabla\overline{v}|^2+|\overline{v}|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))^2}{|x-y|^{N+2s}}\ dxdy\\ \nonumber&-\frac{1}{2}\int_{B_1}(|\nabla\overline{u}|^2+|\overline{u}|^2)\ dx-\frac{1}{2}\iint_{{\mathcal Q}}\frac{(\overline{u}(x)-\overline{u}(y))^2}{|x-y|^{N+2s}}\ dxdy\\ &\geq\int_{B_1}\nabla\overline{v}\cdot\nabla(\overline{v}-\overline{u})\ dx+\int_{B_1}\overline{v}(\overline{v}-\overline{u})\ dx+\iint_{{\mathcal Q}}\frac{(\overline{v}(x)-\overline{v}(y))((\overline{v}-\overline{u})(x)-(\overline{v}-\overline{u})(y))}{|x-y|^{N+2s}}\ dxdy.
\end{aligned}$$ Using the elementary identity $$\begin{aligned} ((\overline{v}-\overline{u})(x)-(\overline{v}-\overline{u})(y))^2&=-(\overline{v}(x)-\overline{v}(y))^2+(\overline{u}(x)-\overline{u}(y))^2\\ &+2(\overline{v}(x)-\overline{v}(y))((\overline{v}-\overline{u})(x)-(\overline{v}-\overline{u})(y)), \end{aligned}$$ it follows from [\[l3\'\]](#l3'){reference-type="eqref" reference="l3'"} that $$\frac{1}{2}\int_{B_1}|\nabla(\overline{v}-\overline{u})|^2\ dx+\frac{1}{2}\int_{B_1}|\overline{v}-\overline{u}|^2\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{((\overline{v}-\overline{u})(x)-(\overline{v}-\overline{u})(y))^2}{|x-y|^{N+2s}}\ dxdy\leq0$$ and therefore $\overline{v}-\overline{u}=0$, i.e., $\overline{v}=\overline{u}$ a.e. in $\mathbb{R}^N$. We then deduce from [\[v-tilde-equation\]](#v-tilde-equation){reference-type="eqref" reference="v-tilde-equation"} that $\overline{u}$ satisfies equation [\[u1\]](#u1){reference-type="eqref" reference="u1"} in the weak sense. Moreover, since $\overline{v}$ satisfies Neumann conditions, so does $\overline{u}$. The proof is therefore finished. ◻ We now collect a series of lemmas that will play a key role in the proof of our main results. Let us start with the following. **Lemma 8**. *The following assertions hold:*
- *There exists a constant $C_*>0$ such that $$\|u\|_{L^{\infty}(B_1)}\leq C_{*}\|u\|_{\mathbb{X}},\quad\quad\text{for all}~~u\in K;$$*
- *There exists a constant $C>0$ such that $$\|u\|_{\mathbb{X}}\leq\|u\|_{V}\leq C\|u\|_{\mathbb{X}},\quad\quad\text{for all}~~ u\in K.$$*

*Proof.* We start by proving assertion $(i)$. We follow the lines of the proof of [@brasco2016global Lemma 4.3]. Let $r_0<\rho<1$. Using the fact that $u$ is radial, $s>\frac{1}{2}$, and the trace inequality for $H^s(B_{\rho}\setminus B_{r_0})$ (see e.g.
[@serra2011monotonicity Section 3.3.3]), it follows that for every $x\in\partial B_{\rho}$, $$\begin{aligned} |u(x)|^2&=\frac{\rho^{1-N}}{N\omega_N}\int_{\partial B_{\rho}}|u|^2\ d{\mathcal H}^{N-1}\\ &\leq c\frac{\rho^{1-N}}{N\omega_N}\rho^{2s-1}\Big(|u|^2_{H^s(B_{\rho}\setminus B_{r_0})}+\frac{1}{\rho^{2s}}\|u\|^2_{L^2(B_{\rho}\setminus B_{r_0})}\Big), \end{aligned}$$ where $\omega_N$ is the volume of the unit ball in $\mathbb{R}^N$ and $d{\mathcal H}^{N-1}$ denotes the $(N-1)$-dimensional Hausdorff measure. The above inequality reduces to $$\begin{aligned} |u(x)|&\leq c\rho^{-s}|x|^{\frac{2s-N}{2}}\|u\|_{H^s(B_{\rho}\setminus B_{r_0})}\quad\quad\text{if}~~\rho=|x|\\ &\leq c(1+r_0^{-s})r_0^{\frac{2s-N}{2}}\|u\|_{H^s(B_1\setminus B_{r_0})}. \end{aligned}$$ Hence, $$\label{u2} \|u\|_{L^{\infty}(B_1\setminus B_{r_0})}\leq c_{r_0}\|u\|_{H^s(B_1\setminus B_{r_0})}\leq c_{r_0}\|u\|_{H^s(B_1)}\leq c_{r_0}\|u\|_{\mathbb{X}},$$ where $c_{r_0}:=c(1+r_0^{-s})r_0^{\frac{2s-N}{2}}$. Now, since $u$ is radially non-decreasing, we have $\|u\|_{L^{\infty}(B_1)}=\|u\|_{L^{\infty}(B_1\setminus B_{\frac{1}{2}})}$. Taking this into account, we substitute $r_0=\frac{1}{2}$ in [\[u2\]](#u2){reference-type="eqref" reference="u2"} to get that $$\|u\|_{L^{\infty}(B_1)}\leq C_*\|u\|_{\mathbb{X}},$$ where $C_*=c_{\frac{1}{2}}$. This completes the proof of assertion $(i)$. Regarding the proof of $(ii)$, the inequality $\|u\|_{\mathbb{X}}\leq \|u\|_{V}$ is trivial by definition of the norm $\|\cdot\|_{V}$. Now, to obtain the second inequality, we use $(i)$ to see that $$\begin{aligned} \|u\|_{V}&=\|u\|_{\mathbb{X}}+\|u\|_{L^p_a(B_1)}+\|u\|_{L^q_b(B_1)}\\ &\leq\|u\|_{\mathbb{X}}+\|u\|_{L^{\infty}(B_1)}\Big(\|a\|^{\frac{1}{p}}_{L^1(B_1)}+\|b\|^{\frac{1}{q}}_{L^1(B_1)}\Big)\\ &\leq C\|u\|_{\mathbb{X}}, \end{aligned}$$ where $C=1+C_{*}\Big(\|a\|^{\frac{1}{p}}_{L^1(B_1)}+\|b\|^{\frac{1}{q}}_{L^1(B_1)}\Big)$, as desired. ◻ **Lemma 9**.
*The functional $I_K$ defined in [\[restricted-euler-lagrange-function\]](#restricted-euler-lagrange-function){reference-type="eqref" reference="restricted-euler-lagrange-function"} satisfies the **(PS)** condition if either of the following assertions holds:*
- *$p>q$;*
- *$p<q$ and $\Big(\frac{a^q}{b^p}\Big)^{\frac{1}{q-p}}\in L^1(0,1)$.*

*Proof.* We follow the lines of the proof of [@moameni2019existence Lemma 3.4]. Let $\{u_k\}\subset K$ be a sequence such that $I_K(u_k)\to c\in \mathbb{R}$ and $$\label{u7} \langle D\Phi(u_k), u_k-v\rangle+\Psi_K(v)-\Psi_K(u_k)\geq-\varepsilon_k\|u_k-v\|_{V},~~~\forall v\in V.$$ We shall show that $\{u_k\}$ possesses a convergent subsequence in $V$. For that, we first show that $\{u_k\}$ is bounded in $V$. We distinguish the cases $p>q$ and $p<q$. **Case 1.** $p>q$. Since $I_K(u_k)\to c$, for $k$ sufficiently large one has $$\label{u8} \frac{1}{2}\|u_k\|^2_{\mathbb{X}}+\frac{1}{q}\|u_k\|^q_{L^q_b(B_1)}-\frac{1}{p}\int_{B_1}a(|x|)|u_k|^p\ dx\leq c+1.$$ We now introduce the function $g(t)=t^q-p(t-1)-1$ for $t\in (1,+\infty)$. Since $g(1)=0$, $g'(1)=q-p<0$ and $g$ is convex, we have $g(t)<0$ for every $t\in(1,\overline t\,]$, where $\overline t=(\frac{p}{q})^{\frac{1}{q-1}}>1$ is the point at which $g$ attains its minimum. Hence, choosing any such $t$, we have $t>1$ and $t^{q}-1<p(t-1)$.
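The admissibility of the constants entering this step can be checked numerically; the exponents below are illustrative sample values with $p>q\geq2$, not data taken from the paper.

```python
# Sanity check of the constants used in Case 1 of the (PS) argument:
# pick t in (1, tbar] with g(t) = t^q - p(t-1) - 1 < 0, then beta strictly
# between 1/(p(t-1)) and 1/(t^q - 1); the three coefficients appearing on
# the left-hand side of the resulting inequality must all be positive.
p, q = 5.0, 3.0                     # illustrative exponents, p > q >= 2

g = lambda t: t**q - p * (t - 1) - 1
tbar = (p / q) ** (1.0 / (q - 1))   # minimum point of the convex function g
t = 0.5 * (1.0 + tbar)              # any t in (1, tbar] works
assert g(t) < 0                     # equivalently, t^q - 1 < p*(t - 1)

beta = 0.5 * (1.0 / (p * (t - 1)) + 1.0 / (t**q - 1))
c1 = 1.0 + beta * (1.0 - t**2)      # coefficient of ||u_k||_X^2 / 2
c2 = 1.0 + beta * (1.0 - t**q)      # coefficient of ||u_k||_{L^q_b}^q / q
c3 = beta * (t - 1.0) - 1.0 / p     # coefficient of the a-weighted term
assert c1 > 0 and c2 > 0 and c3 > 0
```

The positivity of `c1` follows because $q\geq2$ gives $t^q-1\geq t^2-1$, so $\beta<1/(t^q-1)\leq1/(t^2-1)$.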
Now, substituting $v=tu_k$ in [\[u7\]](#u7){reference-type="eqref" reference="u7"} and recalling that $\langle D\Phi(u_k), u_k\rangle=\int_{B_1}a(|x|)|u_k|^p\ dx$, we have that $$\label{u9-1} \frac{(1-t^2)}{2}\|u_k\|^2_{\mathbb{X}}+\frac{(1-t^q)}{q}\|u_k\|^q_{L^q_b(B_1)}+(t-1)\int_{B_1}a(|x|)|u_k|^p\ dx\leq \varepsilon_k (t-1)\|u_k\|_{V}\leq C\|u_k\|_{V}.$$ Recalling that $t^{q}-1<p(t-1)$, we may choose $\beta>0$ so that $$\frac{1}{p(t-1)}<\beta<\frac{1}{t^q-1}.$$ Multiplying [\[u9-1\]](#u9-1){reference-type="eqref" reference="u9-1"} by $\beta$ and adding it to [\[u8\]](#u8){reference-type="eqref" reference="u8"}, we have $$\frac{1+\beta(1-t^2)}{2}\|u_k\|^2_{\mathbb{X}}+\frac{1+\beta(1-t^q)}{q}\|u_k\|^q_{L^q_b(B_1)}+\Big(\beta(t-1)-\frac{1}{p}\Big)\int_{B_1}a(|x|)|u_k|^p\ dx\leq c+1+\beta C\|u_k\|_{V}.$$ By the choice of $\beta$ and the fact that $t>1$ and $q\geq2$, the constants on the left-hand side of the inequality above are positive. Therefore, $$\|u_k\|^2_{\mathbb{X}}+C_1\|u_k\|^q_{L^q_b(B_1)}+C_2\int_{B_1}a(|x|)|u_k|^p\ dx\leq C_3+C_4\|u_k\|_{V}$$ for suitable constants $C_i>0,~i\in\{1,2,3,4\}$. From assertion $(ii)$ of Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"}, we have $$\|u_k\|^2_{\mathbb{X}}\leq C_3+C_4\|u_k\|_{V}\leq C_3+CC_4\|u_k\|_{\mathbb{X}}$$ and thus $\{u_k\}$ is bounded in $\mathbb{X}$. We then conclude that $\{u_k\}$ is also bounded in $V$. **Case 2.** $p<q$. Using the Hölder inequality with exponents $\frac{q}{q-p}$ and $\frac{q}{p}$, we have $$\begin{aligned} \int_{B_1}a(|x|)|u_k|^p\ dx&=\int_{B_1}a(|x|)b(|x|)^{-\frac{p}{q}} b(|x|)^{\frac{p}{q}}|u_k|^p\ dx\\ &\leq \|a\cdot b^{-\frac{p}{q}}\|_{L^{\frac{q}{q-p}}(B_1)}\Big(\int_{B_1}b(|x|)|u_k|^q\ dx\Big)^{\frac{p}{q}}.
\end{aligned}$$ Hence, $$\begin{aligned} \label{k1} \nonumber I_K(u_k)&\geq \frac{1}{2}\int_{B_1}(|\nabla u_k|^2+|u_k|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(u_k(x)-u_k(y))^2}{|x-y|^{N+2s}}\ dxdy+\frac{1}{q}\int_{B_1}b(|x|)|u_k|^q\ dx\\ \nonumber&~~~~~~~~~~~~~~~~~~~~~~~~~~-\frac{1}{p}\|a\cdot b^{-\frac{p}{q}}\|_{L^{\frac{q}{q-p}}(B_1)}\Big(\int_{B_1}b(|x|)|u_k|^q\ dx\Big)^{\frac{p}{q}}\\ &=\frac{1}{2}\|u_k\|^2_{\mathbb{X}}+\frac{1}{q}\|u_k\|^q_{L^{q}_b(B_1)}-\frac{1}{p}\|a\cdot b^{-\frac{p}{q}}\|_{L^{\frac{q}{q-p}}(B_1)}\|u_k\|^p_{L^q_b(B_1)}. \end{aligned}$$ Since by assumption $p<q$ and $\|a\cdot b^{-\frac{p}{q}}\|_{L^{\frac{q}{q-p}}(B_1)}<\infty$, we deduce from the inequality above that $I_K$ is bounded from below and coercive in $V$, thanks to Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"} $(ii)$. Therefore, $\{u_k\}$ is bounded in $V$, as desired. So, the sequence $\{u_k\}$ is bounded in $V$ in both cases. Hence, up to a subsequence, there exists $\overline{u}\in V$ such that $u_k\rightharpoonup \overline{u}$ weakly in $\mathbb{X}$ and $u_k\to \overline{u}$ strongly in $L^2(B_1)$, thanks to the compact embedding $\mathbb{X}\hookrightarrow L^2(B_1)$. In particular, $u_k\to \overline{u}$ a.e. in $B_1$. We also have $u_k\rightharpoonup \overline{u}$ weakly in $L^p_a(B_1)$ (resp. $L^q_b(B_1)$). Notice that from Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"} $(i)$, the sequence $\{u_k\}$ is also bounded in $L^{\infty}(B_1)$. Moreover, since each $u_k$ is radial, $\overline{u}$ is radial as well. Thus, $\overline{u}\in K$. We now wish to prove that $$\label{u9} u_k\to \overline{u}\quad\text{strongly in}~~V.$$ For that, we mention first that, by weak lower semi-continuity of the norms, one has $$\begin{aligned} \label{u10} \|\overline{u}\|_{\mathbb{X}}\leq \liminf_{k\to\infty}\|u_k\|_{\mathbb{X}}\quad\quad\text{and}\quad\quad \|\overline{u}\|_{L^q_b(B_1)}\leq \liminf_{k\to\infty}\|u_k\|_{L^q_b(B_1)}.
\end{aligned}$$ Now, taking $v=\overline{u}$ in [\[u7\]](#u7){reference-type="eqref" reference="u7"} gives $$\begin{aligned} \label{u11} \frac{1}{2}(\|\overline{u}\|^2_{\mathbb{X}}-\|u_k\|^2_{\mathbb{X}})+\frac{1}{q}(\|\overline{u}\|^q_{L^q_b(B_1)}-\|u_k\|^q_{L^q_b(B_1)})+\int_{B_1}a(|x|)|u_k|^{p-2}u_k(u_k-\overline{u})\ dx\geq-\varepsilon_k\|u_k-\overline{u}\|_{V}. \end{aligned}$$ Since $\|u_k\|_{L^{\infty}(B_1)}$ is bounded, we have $$\left\{\begin{aligned} a(|x|)|u_k|^{p-1}|u_k-\overline{u}|&\leq a(|x|)\|u_k\|^{p-1}_{L^{\infty}(B_1)}(\|u_k\|_{L^{\infty}(B_1)}+\|\overline{u}\|_{L^{\infty}(B_1)})\leq C a(|x|),~~\text{and}\\ a(|x|)|u_k|^p&\leq a(|x|)\|u_k\|^p_{L^{\infty}(B_1)}\leq Ca(|x|) \end{aligned} \right.$$ for some $C>0$. Since $a\in L^1(0,1)$, so that $a(|\cdot|)\in L^1(B_1)$, the dominated convergence theorem yields $$\begin{aligned} \label{u12} \lim_{k\to\infty} \int_{B_1}a(|x|)|u_k|^{p-2}u_k(u_k-\overline{u})\ dx=0~~~\text{and}~~~\lim_{k\to\infty}\int_{B_1}a(|x|)|u_k|^p\ dx=\int_{B_1}a(|x|)|\overline{u}|^p\ dx. \end{aligned}$$ From [\[u11\]](#u11){reference-type="eqref" reference="u11"} and [\[u12\]](#u12){reference-type="eqref" reference="u12"}, it follows that $$\label{u13} \frac{1}{2}\Big(\limsup_{k\to\infty}\|u_k\|^2_{\mathbb{X}}-\|\overline{u}\|^2_{\mathbb{X}}\Big)+\frac{1}{q}\Big(\limsup_{k\to\infty}\|u_k\|^q_{L^q_b(B_1)}-\|\overline{u}\|^q_{L^q_b(B_1)}\Big)\leq0.$$ Combining [\[u13\]](#u13){reference-type="eqref" reference="u13"} together with [\[u10\]](#u10){reference-type="eqref" reference="u10"}, we deduce that $$\label{u14} \lim_{k\to\infty}\|u_k\|^2_{\mathbb{X}}=\|\overline{u}\|^2_{\mathbb{X}}\quad\quad\text{and}\quad\quad \lim_{k\to\infty}\|u_k\|^q_{L^q_b(B_1)}=\|\overline{u}\|^q_{L^q_b(B_1)}.$$ Finally, from [\[u14\]](#u14){reference-type="eqref" reference="u14"} and [\[u12\]](#u12){reference-type="eqref" reference="u12"} we obtain [\[u9\]](#u9){reference-type="eqref" reference="u9"}, as desired.
◻ The next lemma plays a key role in establishing assertion $(ii)$ in Theorem [Theorem 7](#t1){reference-type="ref" reference="t1"}. **Lemma 10**. *Let $g\in L^1(0,1)$ be a non-negative monotone function. Then there exists a sequence of smooth monotone functions $\{g_m\}$ with the property that $0\leq g_m\leq g$ and $g_m\to g$ strongly in $L^1(0,1)$.* *Proof.* The proof of this lemma can be found in [@cowan2017existence]. We sketch it here for the sake of completeness. Without loss of generality, we focus on the case when $g$ is monotone non-decreasing. The same argument works if $g$ is monotone non-increasing. For large $m$, we define $[0,\infty)\ni r\mapsto q_m(r)=\min\{g(r),m\}$. Then, $q_m$ is non-decreasing in $(0,1)$ for large $m$. We then extend $q_m$ by setting $q_m(r)=q_m(1)$ for $r>1$ and $q_m(r)=0$ for $r<0$. Let $\eta\geq0$ be a smooth function with $\eta=0$ on $(-\infty,-1)\cup (0,\infty)$ and $\eta>0$ on $(-1,0)$. Assume also that $\int_{-1}^{0}\eta(t)\ dt=1$. For $\varepsilon>0$, we set $\eta_{\varepsilon}(r)=\frac{1}{\varepsilon}\eta(\frac{r}{\varepsilon})$ and define $$q^{\varepsilon}_m(r):=\int_{-\varepsilon}^{0}\eta_{\varepsilon}(t)q_m(r+t)\ dt.$$ Since $q_m$ is non-decreasing, it follows that for every fixed small $\varepsilon>0$, $q^{\varepsilon}_m$ is non-decreasing in $r$. Moreover, notice that $$0\leq q^{\varepsilon}_m(r)=\int_{-\varepsilon}^{0}\eta_{\varepsilon}(t)q_m(r+t)\ dt\leq q_m(r)\int_{-\varepsilon}^{0}\eta_{\varepsilon}(t)\ dt=q_m(r)\leq g(r).$$ We now let $\varepsilon_m\searrow0$ and define $g_m(r):=q_m^{\varepsilon_m}(r)$. Thus, from the inequality above, we have $0\leq g_m(r)\leq g(r)$ for all $m$. Moreover, $r\mapsto g_m(r)$ is non-decreasing. Finally, one can check that $g_m\to g$ in $L^1(0,1)$. ◻ **Lemma 11**. *Suppose that $\overline{u}\in K$.
Then there exists $v\in K$ solving $$\label{e4} \left\{\begin{aligned} {\mathcal L}v+v+b(|x|)|v|^{q-2}v&=a(|x|)|\overline{u}|^{p-2}\overline{u}\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_sv&=0 \quad\quad\quad\quad\quad\quad\quad~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial v}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad~\text{on}~~\partial B_1 \end{aligned} \right.$$ in the weak sense.* This lemma is the mixed local and nonlocal version of [@moameni2019existence Lemma 3.6]. As we will show below, to get the monotonicity property of $v$, we use a different strategy from that of [@moameni2019existence Lemma 3.6]. In fact, in [@moameni2019existence Lemma 3.6], the authors take advantage of the fact that the regularity of $v$ can be upgraded to $H^3(B_1)$, so that the equation can be differentiated with respect to the radial variable. To this end, they also used the fact that the equation can be transformed into an ODE. However, we do not have such properties in the present case. *Proof.* According to Lemma [Lemma 10](#lllm){reference-type="ref" reference="lllm"}, there exists a sequence $\{a_m\}$ (resp. $\{b_m\}$) of smooth functions such that $0\leq a_m\leq a$ (resp. $0\leq b_m\leq b$), every $a_m$ is non-decreasing (resp. every $b_m$ is non-increasing) on $(0,1)$, and $a_m\to a$ strongly in $L^1(0,1)$ (resp. $b_m\to b$ strongly in $L^1(0,1)$). We now consider the equation $$\label{e4-1} \left\{\begin{aligned} {\mathcal L}v+v+b_m(|x|)|v|^{q-2}v&=a_m(|x|)|\overline{u}|^{p-2}\overline{u}\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_sv&=0 \quad\quad\quad\quad\quad\quad\quad~~~~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial v}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad~~~~\text{on}~~\partial B_1.
\end{aligned} \right.$$ Let $J: \mathbb{X}_{rad}\to \mathbb{R}$ be the functional defined by $$\begin{aligned} J(v)&:=\frac{1}{2}\int_{B_1}(|\nabla v|^2+|v|^2)\ dx+\frac{1}{2}\iint_{{\mathcal Q}}\frac{(v(x)-v(y))^2}{|x-y|^{N+2s}}\ dxdy\\ &~~~~~~~~~~~+\frac{1}{q}\int_{B_1}b_m(|x|)|v|^q\ dx-\int_{B_1}f_m(\overline{u})v\ dx, \end{aligned}$$ where $f_m(\overline u)=a_m(|x|)|\overline{u}|^{p-2}\overline{u}$. Then, $J$ is convex, lower semi-continuous and $$\lim_{\|v\|_{\mathbb{X}}\to \infty}J(v)=\infty.$$ Hence, $J$ achieves its minimum at some $v_m\in \mathbb{X}_{rad}$. Moreover, $v_m$ satisfies the Neumann conditions $$\label{nc} {\mathcal N}_s v_m=0~~~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\quad\quad\text{and}\quad\quad \frac{\partial v_m}{\partial\nu}=0~~~\text{on}~~\partial B_1.$$ In fact, since $v_m$ is the minimum of $J$ in $\mathbb{X}_{rad}$, it is also a critical point of $J$ in $\mathbb{X}_{rad}$. Therefore, it satisfies $$\begin{aligned} \label{z1} \nonumber \int_{B_1}\nabla v_m\cdot \nabla\varphi\ dx&+\int_{B_1}v_m\varphi\ dx+\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy\\ &+\int_{B_1}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx=\int_{B_1}f_m(\overline{u})\varphi\ dx,\quad\quad \forall\varphi\in \mathbb{X}. \end{aligned}$$ We now take $\varphi\equiv0$ in $\overline{B}_1$ and from [\[z1\]](#z1){reference-type="eqref" reference="z1"}, we have that $$\begin{aligned} 0=\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy&=2\int_{\mathbb{R}^N\setminus \overline{B}_1}\Bigg(\int_{B_1}\frac{v_m(x)-v_m(y)}{|x-y|^{N+2s}}\ dy\Bigg)\varphi(x)\ dx\\ &=2\int_{\mathbb{R}^N\setminus\overline{B}_1}{\mathcal N}_s v_m(x)\varphi(x)\ dx,\quad\quad\forall\varphi. \end{aligned}$$ In particular, ${\mathcal N}_sv_m=0$ a.e. in $\mathbb{R}^N\setminus\overline{B}_1$. 
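The computation above uses the nonlocal normal derivative ${\mathcal N}_s u(x)=\int_{B_1}\frac{u(x)-u(y)}{|x-y|^{N+2s}}\ dy$ for $x\in\mathbb{R}^N\setminus\overline{B}_1$; since the integrand is nonsingular for such $x$, it can be evaluated by straightforward quadrature. The following one-dimensional sketch (the values of $s$, the grid size and the sample point are arbitrary illustrative choices) mirrors the conclusion just obtained: a function that is constant across $B_1$ and at the exterior point has vanishing nonlocal normal derivative there.

```python
import numpy as np

def nonlocal_normal_derivative(u, x, s=0.6, n=4000):
    """Midpoint-rule value of N_s u(x) = int over B_1 of
    (u(x) - u(y)) / |x-y|^(1+2s) dy, for N = 1, B_1 = (-1, 1) and a
    point x with |x| > 1 (so there is no singularity)."""
    assert abs(x) > 1.0
    h = 2.0 / n
    y = -1.0 + h * (np.arange(n) + 0.5)
    return float(np.sum((u(x) - u(y)) / np.abs(x - y) ** (1.0 + 2.0 * s)) * h)

const = lambda t: 3.0 + 0.0 * np.asarray(t, dtype=float)   # constant function
# const gives N_s u = 0 at any exterior point, while u(t) = t, which is
# strictly below u(1.5) on B_1, gives a strictly positive value.
```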
On the other hand, we have the integration by parts formulas $$\label{integration-by-parts} \left\{\begin{aligned} \iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy&=\int_{B_1}(-\Delta)^sv_m\varphi\ dx+\int_{\mathbb{R}^N\setminus B_1}{\mathcal N}_s v_m\varphi\ dx;\\ \int_{B_1}\nabla v_m\cdot\nabla\varphi\ dx&=\int_{B_1}(-\Delta)v_m\varphi\ dx+\int_{\partial B_1}\frac{\partial v_m}{\partial\nu}\varphi\ d\sigma,\quad\quad\forall\varphi. \end{aligned} \right.$$ It then follows from [\[z1\]](#z1){reference-type="eqref" reference="z1"} that $$\label{d-1} \int_{B_1}({\mathcal L}v_m+v_m+b_{m}(|x|)|v_m|^{q-2}v_m-f_m(\overline{u}))\varphi\ dx+\int_{\mathbb{R}^N\setminus B_1}{\mathcal N}_sv_m\varphi\ dx+\int_{\partial B_1}\frac{\partial v_m}{\partial\nu}\varphi\ d\sigma=0\quad\quad\forall\varphi.$$ In particular, by taking $\varphi\equiv0$ in $\mathbb{R}^N\setminus\overline{B}_1$, it follows from [\[d-1\]](#d-1){reference-type="eqref" reference="d-1"} that $$\int_{B_1}({\mathcal L}v_m+v_m+b_{m}(|x|)|v_m|^{q-2}v_m-f_m(\overline{u}))\varphi\ dx=0$$ and thus $$\label{d-2} {\mathcal L}v_m+v_m+b_{m}(|x|)|v_m|^{q-2}v_m=f_m(\overline{u})~~\text{a.e. in}~~B_1.$$ Now, substituting [\[d-2\]](#d-2){reference-type="eqref" reference="d-2"}, the integration by parts formula [\[integration-by-parts\]](#integration-by-parts){reference-type="eqref" reference="integration-by-parts"}, and the fact that ${\mathcal N}_s v_m=0$ in $\mathbb{R}^N\setminus \overline{B}_1$ into [\[z1\]](#z1){reference-type="eqref" reference="z1"}, we get that $$\int_{\partial B_1}\frac{\partial v_m}{\partial\nu}\varphi\ d\sigma=0\quad\quad\forall\varphi,$$ and therefore, $\frac{\partial v_m}{\partial\nu}=0$ a.e. on $\partial B_1$. We now wish to show that $v_m\in K$.
Notice first that $v_m\in\mathbb{X}\cap L^q_{b_m}(B_1)$ satisfies $$\begin{aligned} \label{z2} \nonumber &\int_{B_1}\nabla v_m\cdot \nabla\varphi\ dx+\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy\\ &~~~+\int_{B_1}v_m\varphi\ dx+\int_{B_1}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx=\int_{B_1}f_m(\overline{u})\varphi\ dx,\quad\forall\varphi\in \mathbb{X}\cap L^q_{b_m}(B_1). \end{aligned}$$ Since $a_m$ is smooth and $\overline{u}\in K$, then by the first part of Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"}, $f_m(\overline{u})\in L^{\infty}(B_1)$. Notice also that $f_m(\overline{u})\geq0$. We now claim that $v_m\geq0$. In fact, to see this, we use $\varphi=v_m^-$ as a test function in [\[z2\]](#z2){reference-type="eqref" reference="z2"} to get $$\begin{aligned} \label{u4} \nonumber&\int_{B_1}\nabla v_m\cdot\nabla v_m^-\ dx+\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(v_m^-(x)-v_m^-(y))}{|x-y|^{N+2s}}\ dxdy\\ &~~~~~~~~~~~~~~~~~+\int_{B_1}v_mv_m^-\ dx+\int_{B_1}b_m(|x|)|v_m|^{q-2}v_mv_m^-\ dx=\int_{B_1}f_m(\overline{u})v_m^-\ dx. \end{aligned}$$ Since $$\left\{\begin{aligned} & \nabla v_m\cdot\nabla v_m^-=-|\nabla v_m^-|^2,~~ v_mv_m^-=-|v_m^-|^2~~\text{and}\\ & (v_m(x)-v_m(y))(v_m^-(x)-v_m^-(y))\leq -(v_m^-(x)-v_m^-(y))^2, \end{aligned} \right.$$ then it follows from [\[u4\]](#u4){reference-type="eqref" reference="u4"} that $$\begin{aligned} \label{u5} \nonumber&\int_{B_1}|\nabla v_m^-|^2\ dx+\iint_{{\mathcal Q}}\frac{(v_m^-(x)-v_m^-(y))^2}{|x-y|^{N+2s}}\ dxdy\\ &~~~~~~~~~~~~~~~~~+\int_{B_1}|v_m^-|^2\ dx+\int_{B_1}b_m(|x|)|v_m^-|^{q}\ dx\leq-\int_{B_1}f_m(\overline{u})v_m^-\ dx. \end{aligned}$$ Since $f_m(\overline{u})\geq0$ and $b_m$ is non-negative, we deduce from [\[u5\]](#u5){reference-type="eqref" reference="u5"} that $v_m^-\equiv0$ a.e., that is, $v_m\geq0$ a.e. in $B_1$. Since $v_m$ is radial, to have $v_m\in K$ it suffices to show that $v_m$ is non-decreasing with respect to the radial variable.
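The pointwise identities for the negative part displayed above are elementary; the following short script is a numerical sanity check of the two purely algebraic ones on randomly sampled real numbers (an illustration only, not part of the proof).

```python
import random

# Sanity check (on sample real numbers) of the elementary pointwise identities
# used above for the negative part v^- = max(-v, 0):
#   v * v^- = -|v^-|^2   and
#   (v(x) - v(y)) * (v^-(x) - v^-(y)) <= -(v^-(x) - v^-(y))^2

def neg(t):
    """Negative part: t^- = max(-t, 0), so that t = t^+ - t^-."""
    return max(-t, 0.0)

random.seed(0)
checked = 0
for _ in range(1000):
    vx, vy = random.uniform(-5, 5), random.uniform(-5, 5)
    # v * v^- = -(v^-)^2: both sides vanish when v >= 0
    assert abs(vx * neg(vx) + neg(vx) ** 2) < 1e-12
    # the difference estimate
    lhs = (vx - vy) * (neg(vx) - neg(vy))
    rhs = -(neg(vx) - neg(vy)) ** 2
    assert lhs <= rhs + 1e-12
    checked += 1
```

The gradient identity $\nabla v\cdot\nabla v^-=-|\nabla v^-|^2$ is of the same nature, applied a.e. to the a.e.-differentiable function $v$.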
For that, we arbitrarily fix $\overline r\in (0,1)$. Then to prove that $v_m$ is non-decreasing, it is enough[^1] to show that for every $r\in(\overline r,1)$, one of the following cases occurs: - $(i)$ $v_m(t)\leq v_m(r)$ for all $t\in (\overline r,r)$; - $(ii)$ $v_m(t)\geq v_m(r)$ for all $t\in (r,1)$. Now, if $f_m(\overline{u}(r))\leq v_m(r)+b_m(r)|v_m(r)|^{q-2}v_m(r)$, then we use $$\varphi(x)=\left\{\begin{aligned} & (v_m(|x|)-v_m(r))^+~~~\quad\text{if}~~\overline r<|x|\leq r\\ & 0\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{otherwise} \end{aligned} \right.$$ as a test function in [\[z2\]](#z2){reference-type="eqref" reference="z2"} to see that $$\begin{aligned} \label{z3} \nonumber &\int_{B_r\setminus B_{\overline r}}\nabla v_m\cdot \nabla\varphi\ dx+\iint_{\mathbb{R}^{2N}\setminus ((B_r\setminus B_{\overline r})^c)^2}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy\\ &~~~~~~~~~~~~+\int_{B_r\setminus B_{\overline r}}v_m\varphi\ dx+\int_{B_r\setminus B_{\overline r}}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx=\int_{B_r\setminus B_{\overline r}}f_m(\overline{u})\varphi\ dx.
\end{aligned}$$ Now using that $\nabla v_m\cdot\nabla\varphi=\nabla (v_m-v_m(r))\cdot\nabla\varphi=|\nabla\varphi|^2$, $v_m\varphi=(v_m-v_m(r))\varphi+v_m(r)\varphi=|\varphi|^2+v_m(r)\varphi$ and from the elementary inequality $$(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))=((v_m-v_m(r))(x)-(v_m-v_m(r))(y))(\varphi(x)-\varphi(y))\geq (\varphi(x)-\varphi(y))^2,$$ there holds $$\begin{aligned} &\iint_{\mathbb{R}^{2N}\setminus ((B_r\setminus B_{\overline r})^c)^2}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy\geq \iint_{\mathbb{R}^{2N}\setminus ((B_r\setminus B_{\overline r})^c)^2}\frac{(\varphi(x)-\varphi(y))^2}{|x-y|^{N+2s}}\ dxdy,\label{i1}\\ &\int_{B_r\setminus B_{\overline r}}\nabla v_m\cdot\nabla\varphi\ dx=\int_{B_r\setminus B_{\overline r}}|\nabla\varphi|^2\ dx,\label{i2}\\ &\int_{B_r\setminus B_{\overline r}}v_m\varphi\ dx=\int_{B_r\setminus B_{\overline r}}|\varphi|^2\ dx+\int_{B_r\setminus B_{\overline r}}v_m(r)\varphi\ dx\label{i3}.\end{aligned}$$ Next, using that $b_m$ is non-increasing, we get $$\begin{aligned} \label{z4} \nonumber \int_{B_r\setminus B_{\overline r}}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx&\geq b_m(r)\int_{B_r\setminus B_{\overline r}}|v_m|^{q-2}v_m\varphi\ dx\\ &=b_m(r)\int_{B_r\setminus B_{\overline r}}(|v_m|^{q-2}v_m-|v_m(r)|^{q-2}v_m(r))\varphi\ dx\\ \nonumber &+\int_{B_r\setminus B_{\overline r}}b_m(r)|v_m(r)|^{q-2}v_m(r)\varphi\ dx.\end{aligned}$$ Let $\eta:[0,1]\to\mathbb{R}$ be given by $$\eta(t)=|tv_m(x)+(1-t)v_m(r)|^{q-2}(tv_m(x)+(1-t)v_m(r)).$$ Then, $$|v_m|^{q-2}v_m-|v_m(r)|^{q-2}v_m(r)=\eta(1)-\eta(0)=\int_{0}^{1}\eta'(t)\ dt.$$ Now, a direct calculation shows that $$\eta'(t)=(q-1)|tv_m(x)+(1-t)v_m(r)|^{q-2}(v_m(x)-v_m(r)).$$ So, $$\begin{aligned} \label{p} \nonumber &\int_{B_r\setminus B_{\overline r}}(|v_m|^{q-2}v_m-|v_m(r)|^{q-2}v_m(r))\varphi\ dx\\ \nonumber &=(q-1)\int_{B_r\setminus B_{\overline r}}\Bigg(\int_{0}^{1}|tv_m(x)+(1-t)v_m(r)|^{q-2}\ dt\Bigg)(v_m(x)-v_m(r))\varphi\ dx\\ &=(q-1)\int_{B_r\setminus B_{\overline
r}}\Bigg(\int_{0}^{1}|tv_m(x)+(1-t)v_m(r)|^{q-2}\ dt\Bigg)|\varphi|^2\ dx.\end{aligned}$$ Plugging [\[p\]](#p){reference-type="eqref" reference="p"} into [\[z4\]](#z4){reference-type="eqref" reference="z4"}, we get $$\begin{aligned} \label{z5} \nonumber \int_{B_r\setminus B_{\overline r}}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx&\geq\int_{B_r\setminus B_{\overline r}}b_m(r)|v_m(r)|^{q-2}v_m(r)\varphi\ dx\\ \nonumber &+(q-1)b_m(r)\int_{B_r\setminus B_{\overline r}}\Bigg(\int_{0}^{1}|tv_m(x)+(1-t)v_m(r)|^{q-2}\ dt\Bigg)|\varphi|^2\ dx\\ &\geq \int_{B_r\setminus B_{\overline r}}b_m(r)|v_m(r)|^{q-2}v_m(r)\varphi\ dx.\end{aligned}$$ Now, from [\[z5\]](#z5){reference-type="eqref" reference="z5"}, [\[i3\]](#i3){reference-type="eqref" reference="i3"}, [\[i2\]](#i2){reference-type="eqref" reference="i2"} and [\[i1\]](#i1){reference-type="eqref" reference="i1"}, together with the fact that $a_m$ and $\overline{u}$ are non-decreasing, it follows from [\[z3\]](#z3){reference-type="eqref" reference="z3"} that $$\begin{aligned} \int_{B_r\setminus B_{\overline r}}|\nabla\varphi|^2\ dx&+\iint_{\mathbb{R}^{2N}\setminus ((B_r\setminus B_{\overline r})^c)^2}\frac{(\varphi(x)-\varphi(y))^2}{|x-y|^{N+2s}}\ dxdy+\int_{B_r\setminus B_{\overline r}}|\varphi|^2\ dx\\ &\leq \int_{B_r\setminus B_{\overline r}}f_m(\overline{u})\varphi\ dx-\int_{B_r\setminus B_{\overline r}}v_m(r)\varphi\ dx-\int_{B_r\setminus B_{\overline r}}b_m(r)|v_m(r)|^{q-2}v_m(r)\varphi\ dx\\ &\leq \int_{B_r\setminus B_{\overline r}}f_m(\overline{u}(r))\varphi\ dx-\int_{B_r\setminus B_{\overline r}}v_m(r)\varphi\ dx-\int_{B_r\setminus B_{\overline r}}b_m(r)|v_m(r)|^{q-2}v_m(r)\varphi\ dx\\ &=\int_{B_r\setminus B_{\overline r}}\Big(f_m(\overline{u}(r))-v_m(r)-b_m(r)|v_m(r)|^{q-2}v_m(r)\Big)\varphi\ dx.\end{aligned}$$ Since the latter is non-positive by assumption, we conclude that $\varphi\equiv0$, and thus $(i)$ holds.
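The mean-value identity behind [\[p\]](#p){reference-type="eqref" reference="p"}, namely $|v|^{q-2}v-|w|^{q-2}w=(q-1)(v-w)\int_0^1|tv+(1-t)w|^{q-2}\ dt$, can be sanity-checked numerically. The exponent $q=3.5$ and the midpoint-rule quadrature below are illustrative choices of this sketch, not values from the paper.

```python
import random

# Numerical check of the mean-value identity used above:
#   |v|^{q-2} v - |w|^{q-2} w = (q-1)(v - w) * int_0^1 |t v + (1-t) w|^{q-2} dt

def phi_q(t, q):
    """The map t -> |t|^{q-2} t."""
    return abs(t) ** (q - 2) * t

random.seed(1)
q = 3.5  # illustrative exponent with q > 2
max_err = 0.0
for _ in range(50):
    v, w = random.uniform(-4, 4), random.uniform(-4, 4)
    n = 20000  # midpoint rule for the integral in t
    integral = sum(
        abs(((k + 0.5) / n) * v + (1 - (k + 0.5) / n) * w) ** (q - 2)
        for k in range(n)
    ) / n
    lhs = phi_q(v, q) - phi_q(w, q)
    rhs = (q - 1) * (v - w) * integral
    max_err = max(max_err, abs(lhs - rhs) / (1.0 + abs(lhs)))
assert max_err < 1e-3
```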
In the same manner, when $f_m(\overline{u}(r)) > v_m(r)+b_m(r)|v_m(r)|^{q-2}v_m(r)$, we use as a test function $$\varphi(x)=\left\{\begin{aligned} & 0\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{if}~~\overline r<|x|\leq r\\ &(v_m(|x|)-v_m(r))^-\quad\quad~\text{otherwise} \end{aligned} \right.$$ to show that case $(ii)$ holds. We then conclude that $v_m$ is non-decreasing and thus $v_m\in K$. Next, we show that $\{v_m\}$ is bounded in $V$. Substituting $\varphi=v_m$ in [\[z2\]](#z2){reference-type="eqref" reference="z2"}, we have $$\begin{aligned} \label{z6} \nonumber\int_{B_1}|\nabla v_m|^2\ dx+\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))^2}{|x-y|^{N+2s}}\ dxdy +\int_{B_1}|v_m|^2\ dx&+\int_{B_1}b_m(|x|)|v_m|^{q}\ dx\\ &=\int_{B_1}f_m(\overline{u})v_m\ dx. \end{aligned}$$ Now using Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"} together with the fact that $a_m\leq a$, $a\in L^1(0,1)$ and $\overline{u}, v_m\in K$, we have $$\begin{aligned} \nonumber \int_{B_1}f_m(\overline{u})v_m\ dx&=\int_{B_1}a_m(|x|)|\overline{u}|^{p-2}\overline{u}v_m\ dx\leq \int_{B_1}a(|x|)|\overline{u}|^{p-2}\overline{u}v_m\ dx\\ &\leq\|\overline{u}\|^{p-1}_{L^{\infty}(B_1)}\|a\|_{L^1(B_1)}\|v_m\|_{L^{\infty}(B_1)}\leq C\|v_m\|_{\mathbb{X}} \end{aligned}$$ where $C>0$ is a constant independent of $m$. Plugging this into [\[z6\]](#z6){reference-type="eqref" reference="z6"}, it follows that $$\|v_m\|^2_{\mathbb{X}}+\int_{B_1}b_m(|x|)|v_m|^{q}\ dx\leq C\|v_m\|_{\mathbb{X}}.$$ In particular, $\{v_m\}$ is bounded in $\mathbb{X}$, and $\int_{B_1}b_m(|x|)|v_m|^{q}\ dx$ is bounded uniformly in $m$. Moreover, by Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"}, we deduce that $\|v_m\|_{L^{\infty}(B_1)}$ is also bounded. Hence, $\{v_m\}$ is bounded in $V$. Therefore, up to a subsequence, there exists $0\leq v\in \mathbb{X}_{rad}$ such that $v_m\rightharpoonup v$ weakly in $\mathbb{X}$ and $v_m\to v$ strongly in $L^2(B_1)$ (thanks to the compact embedding $\mathbb{X}\hookrightarrow L^2(B_1)$).
In particular, $v_m\to v$ a.e. in $B_1$. Since $v_m$ is non-decreasing for every $m$, then $v$ is also non-decreasing in the radial variable. Hence, by Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"}, $v\in L^{\infty}(B_1)$ from which we obtain in particular that $v\in K$. To complete the proof, it remains to show that $v$ solves the problem [\[e4\]](#e4){reference-type="eqref" reference="e4"}. Let $\varphi\in \mathbb{X}\cap L^{\infty}(B_1)$. Then from [\[z2\]](#z2){reference-type="eqref" reference="z2"}, it holds that $$\begin{aligned} \label{z7} \nonumber &\int_{B_1}\nabla v_m\cdot \nabla\varphi\ dx+\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy\\ &~~~+\int_{B_1}v_m\varphi\ dx+\int_{B_1}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx=\int_{B_1}f_m(\overline{u})\varphi\ dx. \end{aligned}$$ By weak convergence, we have $$\begin{aligned} \label{z8} \nonumber \int_{B_1}\nabla v_m\cdot \nabla\varphi\ dx+\iint_{{\mathcal Q}}\frac{(v_m(x)-v_m(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy+\int_{B_1}v_m\varphi\ dx\\ \to \int_{B_1}\nabla v\cdot \nabla\varphi\ dx+\iint_{{\mathcal Q}}\frac{(v(x)-v(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy+\int_{B_1}v\varphi\ dx. 
\end{aligned}$$ On the other hand, from the boundedness of $\|v_m\|_{L^{\infty}(B_1)}$, we have $$|b_m(|x|)|v_m|^{q-2}v_m\varphi|\leq b(|x|)\|v_m\|^{q-1}_{L^{\infty}(B_1)}\|\varphi\|_{L^{\infty}(B_1)}\leq Cb(|x|)\in L^1(B_1).$$ Thus, the dominated convergence theorem yields $$\label{z9} \int_{B_1}b_m(|x|)|v_m|^{q-2}v_m\varphi\ dx\to \int_{B_1}b(|x|)|v|^{q-2}v\varphi\ dx.$$ Then, passing to the limit in [\[z7\]](#z7){reference-type="eqref" reference="z7"} and recalling [\[z8\]](#z8){reference-type="eqref" reference="z8"} and [\[z9\]](#z9){reference-type="eqref" reference="z9"}, we have that $$\begin{aligned} \label{z10} \nonumber &\int_{B_1}\nabla v\cdot \nabla\varphi\ dx+\iint_{{\mathcal Q}}\frac{(v(x)-v(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\ dxdy\\ &~~~+\int_{B_1}v\varphi\ dx+\int_{B_1}b(|x|)|v|^{q-2}v\varphi\ dx=\int_{B_1}a(|x|)|\overline{u}|^{p-2}\overline{u}\varphi\ dx~~~\forall\varphi\in\mathbb{X}\cap L^{\infty}(B_1). \end{aligned}$$ By density, the latter holds for all $\varphi\in\mathbb{X}\cap L^1_b(B_1)$. From this, we deduce that $v$ is a weak solution of [\[e4\]](#e4){reference-type="eqref" reference="e4"}. The proof is therefore finished. ◻ Having the above preliminary results, we now prove our main results. *Proof of Theorem [Theorem 1](#first-main-result){reference-type="ref" reference="first-main-result"}.* By Lemma [Lemma 9](#mpg){reference-type="ref" reference="mpg"}, the functional $I_K$ satisfies the **(PS)** condition. We now prove that $I_K$ satisfies the mountain pass geometry. We first notice that $I_K(0)=0$. Let $e\in K$ be such that $\int_{B_1}a(|x|)|e|^p\ dx>0$. We have $$\begin{aligned} I_K(te)&=\frac{t^2}{2}\int_{B_1}(|\nabla e|^2+|e|^2)\ dx+\frac{t^2}{2}\iint_{{\mathcal Q}}\frac{(e(x)-e(y))^2}{|x-y|^{N+2s}}\ dxdy\\ &+\frac{t^q}{q}\int_{B_1}b(|x|)|e|^q\ dx-\frac{t^p}{p}\int_{B_1}a(|x|)|e|^p\ dx. \end{aligned}$$ For $t$ sufficiently large, recalling that $p>q\geq2$, we deduce from the expression above that $I_K(te)$ is negative. Now, let $u\in K$ with $\|u\|_{V}=\rho>0$.
From Lemma [Lemma 8](#llm){reference-type="ref" reference="llm"}, there exists a positive constant $C>0$ such that $$\label{z11} \|u\|_{\mathbb{X}}\leq \|u\|_{V}\leq C\|u\|_{\mathbb{X}}.$$ By definition, we have $$\label{z12} \int_{B_1}a(|x|)|u|^p\ dx\leq \|u\|^p_{V}.$$ Hence, from [\[z11\]](#z11){reference-type="eqref" reference="z11"} and [\[z12\]](#z12){reference-type="eqref" reference="z12"}, it follows that $$\begin{aligned} I_{K}(u)&=\frac{1}{2}\|u\|^2_{\mathbb{X}}+\frac{1}{q}\|u\|^q_{L^q_b(B_1)}-\frac{1}{p}\int_{B_1}a(|x|)|u|^p\ dx\\ &\geq \frac{1}{2}\|u\|^2_{\mathbb{X}}-\frac{1}{p}\int_{B_1}a(|x|)|u|^p\ dx\\ &\geq \frac{1}{2C^2}\|u\|^2_{V}-\frac{1}{p}\|u\|^p_{V}=\frac{1}{2C^2}\rho^2-\frac{1}{p}\rho^p>0, \end{aligned}$$ provided that $\rho>0$ is sufficiently small since $p>2$. Now, if $u\notin K$, then we immediately have that $I_K(u)>0$, thanks to the definition of $\Psi_K$, see [\[psi-k\]](#psi-k){reference-type="eqref" reference="psi-k"}. So, $I_K$ satisfies the mountain pass geometry. Hence, by Theorem [\[mountain-pass-theorem\]](#mountain-pass-theorem){reference-type="ref" reference="mountain-pass-theorem"}, the functional $I_K$ admits a nontrivial critical point $\overline{u}\in K$. Now by Lemma [Lemma 11](#llllm){reference-type="ref" reference="llllm"}, there exists $v\in K$ satisfying in the weak sense the problem $$\label{e4-44} \left\{\begin{aligned} {\mathcal L}v+v+b(|x|)|v|^{q-2}v&=a(|x|)|\overline{u}|^{p-2}\overline{u}\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_sv&=0 \quad\quad\quad\quad\quad\quad\quad~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial v}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad~\text{on}~~\partial B_1. \end{aligned} \right.$$ We now deduce from Theorem [Theorem 7](#t1){reference-type="ref" reference="t1"} that $\overline{u}$ is a solution of [\[e2\]](#e2){reference-type="eqref" reference="e2"} with $I_K(\overline{u})>0$. This completes the proof. ◻ We shall also address the case $p<q$. 
In this case we need the following assumption: - $(C)$ $\Big(\frac{a^q}{b^p}\Big)^{\frac{1}{q-p}}\in L^1(0,1)$ and there exists $e\in K$ such that $I_K(e)<0$. Here is our result for the case $p<q$. **Theorem 12**. *Assume that $(A)$, $(B)$, and $(C)$ hold. If $p<q$, then problem [\[e2\]](#e2){reference-type="eqref" reference="e2"} admits at least one positive non-decreasing radial solution.* *Proof of Theorem [Theorem 12](#second-main-result){reference-type="ref" reference="second-main-result"}.* We set $\mu=\inf_{u\in V}I_K(u)$. From [\[k1\]](#k1){reference-type="eqref" reference="k1"}, we see that $I_K$ is bounded from below. Moreover, $I_K$ satisfies the **(PS)** compactness condition by Lemma [Lemma 9](#mpg){reference-type="ref" reference="mpg"}. Thus, it follows from Theorem [Theorem 6](#y1){reference-type="ref" reference="y1"} that the infimum $\mu$ is achieved at some $\overline{u}\in K$. In particular, $\overline{u}$ is a critical point of $I_K$. It now follows from Lemma [Lemma 11](#llllm){reference-type="ref" reference="llllm"} that there exists $v\in K$, a weak solution of $$\label{e4-444} \left\{\begin{aligned} {\mathcal L}v+v+b(|x|)|v|^{q-2}v&=a(|x|)|\overline{u}|^{p-2}\overline{u}\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_sv&=0 \quad\quad\quad\quad\quad\quad\quad~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial v}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad~\text{on}~~\partial B_1. \end{aligned} \right.$$ We now deduce from Theorem [Theorem 7](#t1){reference-type="ref" reference="t1"} that $\overline{u}$ is a solution of [\[e2\]](#e2){reference-type="eqref" reference="e2"}. Finally, notice that $$I_K(\overline{u})=\min_{u\in V}I_K(u)\leq I_K(e)<0,$$ and thus, $\overline{u}$ is a nontrivial solution.
◻ # Non-constancy of solutions {#section:non-constency} In this section, we show that when $a, b>0$ are two constants and $2\leq q<p$, then the nontrivial solution of $$\label{e4-444-1} \left\{\begin{aligned} {\mathcal L}u+u&=a|u|^{p-2}u-b|u|^{q-2}u\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_su&=0 \quad\quad\quad\quad\quad\quad\quad\quad\quad~~~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial u}{\partial\nu}&=0 \quad\quad\quad\quad\quad\quad\quad\quad\quad~~~\text{on}~~\partial B_1 \end{aligned} \right.$$ obtained in Theorem [Theorem 1](#first-main-result){reference-type="ref" reference="first-main-result"} is non-constant. We shall see that this is the case under certain conditions on $p$ and $q$. Now we set $$G(u)=a|u|^{p-2}u-b|u|^{q-2}u-u=(a|u|^{p-2}-b|u|^{q-2}-1)u.$$ Observe that every nontrivial constant function $u$ satisfying $G(u)=0$ is also a nontrivial solution. In the sequel, we denote by $\lambda_2^{rad}$ the second radial eigenvalue of ${\mathcal L}+1$ in $B_1$ with Neumann boundary conditions. We have the following. **Lemma 13**. *Let $v$ be an eigenfunction associated with $\lambda_2^{rad}$, namely $v$ is nontrivial and satisfies $$\label{e4-4444} \left\{\begin{aligned} {\mathcal L}v+v&=\lambda_2^{rad}v\quad\quad~\text{in}~~B_1 \\ {\mathcal N}_sv&=0 \quad\quad\quad~~~\text{in}~~\mathbb{R}^N\setminus\overline{B}_1\\ \frac{\partial v}{\partial\nu}&=0 \quad\quad\quad\quad\text{on}~~\partial B_1. \end{aligned} \right.$$ Then $\lambda_2^{rad}>1$, and $v$ is radial and unique up to a multiplicative factor. Moreover, $\int_{B_1}v\ dx=0$.* *Proof.* Let $\lambda_1$ be the first Neumann eigenvalue of ${\mathcal L}+1$ in $B_1$. Then $$\label{f} \lambda_1=1.$$ Indeed, by definition, $$\begin{aligned} \label{first-eigenvalue-of-L+1} \lambda_1=\inf_{0\neq u\in \mathbb{X}}\frac{\int_{B_1}|\nabla u|^2\ dx+\iint_{{\mathcal Q}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\ dxdy+\int_{B_1}|u|^2\ dx}{\int_{B_1}|u|^2\ dx}.
\end{aligned}$$ Then, in particular, $\lambda_1\geq 1$. Now, using the constant function $u\equiv 1$ as an admissible test function in [\[first-eigenvalue-of-L+1\]](#first-eigenvalue-of-L+1){reference-type="eqref" reference="first-eigenvalue-of-L+1"} we get that $\lambda_1\leq 1$ and thus, [\[f\]](#f){reference-type="eqref" reference="f"} follows. Notice that $\lambda_1=1$ is achieved by the constant function $e_1\equiv1$. Now, let $\lambda_2$ be the second eigenvalue of ${\mathcal L}+1$ in $B_1$ with Neumann boundary conditions. We claim that $$\label{lambda-2} \lambda_2>1.$$ To see this, we first notice that $\lambda_2\geq \lambda_1$, that is, either $\lambda_2=\lambda_1$ or $\lambda_2>\lambda_1$. Let $0\neq w\in\mathbb{X}$ be an eigenfunction associated with $\lambda_2$. Then by definition, $$\label{orthoganility-property} 0=\int_{B_1}we_1\ dx=\int_{B_1}w\ dx.$$ Now, if $\lambda_2$ were equal to $\lambda_1$, then $w$ would necessarily be a nonzero multiple of $e_1$, and this would violate [\[orthoganility-property\]](#orthoganility-property){reference-type="eqref" reference="orthoganility-property"}. Thus $\lambda_2$ must be strictly greater than $\lambda_1$ and therefore, [\[lambda-2\]](#lambda-2){reference-type="eqref" reference="lambda-2"} follows. By the inclusion $\mathbb{X}_{rad}\subset\mathbb{X}$ we have that $\lambda_2^{rad}\geq \lambda_2$ and thus, $\lambda_2^{rad}>1$, thanks to [\[lambda-2\]](#lambda-2){reference-type="eqref" reference="lambda-2"}. Moreover, by the direct method of the calculus of variations, $\lambda_2^{rad}$ is attained at some $v\in \mathbb{X}_{rad}$ with $\int_{B_1}v\ dx=0$. ◻ Before proving Theorem [Theorem 2](#non-consistancy-theorem){reference-type="ref" reference="non-consistancy-theorem"}, let us first notice that under condition [\[key-condition-for-non-constancy-of-solutions\]](#key-condition-for-non-constancy-of-solutions){reference-type="eqref" reference="key-condition-for-non-constancy-of-solutions"}, $G$ admits a unique nonzero root. Indeed, set $g(t)=t^{p-2}-bt^{q-2}-1$.
Then $g(1)=-b<0$ and $\lim_{t\to\infty}g(t)=+\infty$. Thus, by the intermediate value theorem, $g$ has a root in $(1,+\infty)$. Next, $g'(t)=(p-2)t^{p-3}-b(q-2)t^{q-3}=t^{q-3}((p-2)t^{p-q}-b(q-2))>0$ for all $t\in (1, +\infty)$ thanks to [\[key-condition-for-non-constancy-of-solutions\]](#key-condition-for-non-constancy-of-solutions){reference-type="eqref" reference="key-condition-for-non-constancy-of-solutions"}. Hence $g$ is strictly increasing in $(1,+\infty)$. We then deduce that $g$ has a unique nonzero root in $(1,+\infty)$. On the other hand, for every $t\in [0,1]$, $g(t)<0$. In conclusion, $G$ has a unique nonzero root. Thus, [\[e4-444-1\]](#e4-444-1){reference-type="eqref" reference="e4-444-1"} admits a unique constant solution. Denote it by $u_0\in (1,+\infty)$. With the above remark in hand, we now give the proof of Theorem [Theorem 2](#non-consistancy-theorem){reference-type="ref" reference="non-consistancy-theorem"}. *Proof of Theorem [Theorem 2](#non-consistancy-theorem){reference-type="ref" reference="non-consistancy-theorem"}.* The proof is similar to the one of [@moameni2019existence Theorem 4.2]. We report it here for the sake of completeness. The goal of the proof is to show that the solution $\overline{u}$ of [\[e2\]](#e2){reference-type="eqref" reference="e2"} obtained in Theorem [Theorem 1](#first-main-result){reference-type="ref" reference="first-main-result"} is different from $u_0$. Notice that $I(\overline{u})=I_K(\overline{u})=c$ where $c$ is defined (see [\[critical-value\]](#critical-value){reference-type="eqref" reference="critical-value"}) by $$\label{critical-value-2} c=\inf_{\gamma\in\Gamma}\sup_{t\in [0,1]}I(\gamma(t)),$$ where $\Gamma=\{\gamma\in C([0,1],V): \gamma(0)=0, \gamma(1)=e\}$. So, in order to prove that $\overline{u}\not\equiv u_0$, it suffices to show that $c<I(u_0)$.
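Before turning to the proof, the root-counting remark above can be illustrated numerically. The parameter values $p=5$, $q=3$, $b=2$ below (chosen so that $b(q-2)\leq p-2$) are assumptions of this sketch only.

```python
# Numerical illustration (sample parameters) of the remark above: with
# b(q-2) <= p-2 and 2 <= q < p, the function g(t) = t^(p-2) - b t^(q-2) - 1
# is negative on [0, 1] and strictly increasing on (1, +inf), hence it has
# exactly one root there.
p, q, b = 5.0, 3.0, 2.0  # b(q-2) = 2 <= p-2 = 3

def g(t):
    return t ** (p - 2) - b * t ** (q - 2) - 1

# g stays negative up to t = 1 ...
assert all(g(k / 100) < 0 for k in range(101))
# ... and is strictly increasing on a sample grid of (1, 10]
grid = [g(1 + k * 0.1) for k in range(91)]
assert all(x < y for x, y in zip(grid, grid[1:]))

# locate the unique root by bisection
lo, hi = 1.0, 10.0
assert g(lo) < 0 < g(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
u0 = 0.5 * (lo + hi)
assert u0 > 1 and abs(g(u0)) < 1e-9
```

For these sample values $g(t)=t^3-2t-1=(t+1)(t^2-t-1)$, so the located root is the golden ratio $(1+\sqrt 5)/2$.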
Let $v$ be as in Lemma [Lemma 13](#radial-eigenvalue-problem){reference-type="ref" reference="radial-eigenvalue-problem"} and take $\tau>0$ such that $\tau<\frac{\|u_0\|_{L^{\infty}(B_1)}}{\|v\|_{L^{\infty}(B_1)}}$. Then, $u_0+\tau v\in K$. For $r\in\mathbb{R}$, using that $v$ is an eigenfunction of ${\mathcal L}+1$ with $\int_{B_1}v\ dx=0$, we have $$\begin{aligned} I((u_0+\tau v)r)=\frac{r^2}{2}\int_{B_1}\Big(u_0+\tau\sqrt{\lambda_2^{rad}}\,v\Big)^2\ dx+\frac{r^q}{q}b\int_{B_1}(u_0+\tau v)^q\ dx-\frac{r^p}{p}\int_{B_1}(u_0+\tau v)^p\ dx.\end{aligned}$$ Since $q<p$, there is $r>1$ such that $I((u_0+\tau v)r)\leq0$. We introduce the function $\gamma_{\tau}$ defined by $\gamma_{\tau}(t)=t(u_0+\tau v)r$. Notice that $\gamma_{\tau}\in \Gamma$. Let $\psi:\mathbb{R}^2\to\mathbb{R}$ be given by $$\psi(\tau,t)=I'(t(u_0+\tau v)r)[(u_0+\tau v)r].$$ Since $I$ is of class $C^2$, $\psi$ is of class $C^1$ with $\psi(0,\frac{1}{r})=0$. Thus, $$\begin{aligned} \frac{d}{dt}\Big|_{(0,\frac{1}{r})}\psi(\tau, t)&=I''(u_0)(ru_0,ru_0)\\ &=r^2\int_{B_1}\Big(1+b(q-1)u_0^{q-2}-(p-1)u_0^{p-2}\Big) u_0^2\ dx\\ &=r^2\int_{B_1}u_0^{p}-bu_0^{q}+b(q-1)u_0^{q}-(p-1)u_0^{p}\ dx\\ &=r^2\int_{B_1}b(q-2)u_0^{q}-(p-2)u_0^{p}\ dx<0.\end{aligned}$$ In the latter, we have used $G(u_0)=0$, the inequality $b(q-2)\leq p-2$, and the fact that $u_0>1$. Thus, the implicit function theorem guarantees the existence of $\varepsilon_1, \varepsilon_2>0$ and a $C^1$ function $h:(-\varepsilon_1, \varepsilon_1)\to (\frac{1}{r}-\varepsilon_2, \frac{1}{r}+\varepsilon_2)$ such that $h(0)=\frac{1}{r}$ and for $(\tau, t)\in (-\varepsilon_1, \varepsilon_1)\times (\frac{1}{r}-\varepsilon_2, \frac{1}{r}+\varepsilon_2)$ one has $\psi(\tau, t)=0$ if and only if $t=h(\tau)$.
On the other hand, $$\begin{aligned} \frac{d}{d\tau}\Big|_{(0,\frac{1}{r})}\psi(\tau,t)&=I''(u_0)(ru_0,v)+I'(u_0)rv\\ &=r\int_{B_1}\Big(1+b(q-1)u_0^{q-2}-(p-1)u_0^{p-2}\Big)u_0v\ dx\\ &=r\Big(1+b(q-1)u_0^{q-2}-(p-1)u_0^{p-2}\Big)u_0\int_{B_1}v\ dx=0.\end{aligned}$$ Note that we have used the fact that $I'(u_0)=0$ and $\int_{B_1}v\ dx=0$. Thus, $h'(0)=0$. Next, we claim that, after possibly shrinking $\varepsilon_1$, $$\label{lab} I(h(\tau)(u_0+\tau v)r)< I(u_0),\quad\quad\text{for all}~~0<|\tau|<\varepsilon_1.$$ To see this, we first notice that, since $h'(0)=0$, for all $\tau\in (-\varepsilon_1, \varepsilon_1)$ we have $h(\tau)=\frac{1}{r}+o(\tau)$, so that $$h(\tau)(u_0+\tau v)r-u_0=(rh(\tau)-1)u_0+\tau rh(\tau)v=\tau v+o(\tau).$$ Now recalling that $I'(u_0)=0$, Taylor expansion yields $$\begin{aligned} &I(h(\tau)(u_0+\tau v)r)-I(u_0)\\ &=\frac{1}{2}I''(u_0)[h(\tau)(u_0+\tau v)r-u_0, h(\tau)(u_0+\tau v)r-u_0]+o(\tau^2)\\ &=\frac{1}{2}I''(u_0)(\tau v+o(\tau), \tau v+o(\tau))+o(\tau^2)=\frac{\tau^2}{2}I''(u_0)(v,v)+o(\tau^2)\\ &=\frac{\tau^2}{2}\Bigg(\int_{B_1}|\nabla v|^2\ dx+\iint_{{\mathcal Q}}\frac{(v(x)-v(y))^2}{|x-y|^{N+2s}}\ dxdy+\int_{B_1}|v|^2\ dx\Bigg)\\ &~~~~~~~+\frac{\tau^2}{2}\Bigg(\int_{B_1}\Big(b(q-1)u_0^{q-2}v^2-(p-1)u_0^{p-2}v^2\Big)\ dx\Bigg)+o(\tau^2)\\ &=\frac{\tau^2}{2}\int_{B_1}\Big(\lambda_2^{rad}v^2+b(q-1)u_0^{q-2}v^2-(p-1)u_0^{p-2}v^2\Big)\ dx+o(\tau^2)\\ &=\frac{\tau^2}{2u_0^2}\int_{B_1}\Big(\lambda_2^{rad}u_0^2+b(q-1)u_0^q-(p-1)u_0^p\Big)v^2\ dx+o(\tau^2)\\ &=\frac{\tau^2}{2u_0^2}\int_{B_1}\Big(\lambda_2^{rad}u_0^p-\lambda_2^{rad}bu_0^q+b(q-1)u_0^q-(p-1)u_0^p\Big)v^2\ dx+o(\tau^2).\end{aligned}$$ In the latter, we have used that $G(u_0)=0$.
Hence, $$\begin{aligned} \label{lab1} I(h(\tau)(u_0+\tau v)r)-I(u_0)&=\frac{\tau^2}{2u_0^2}\int_{B_1}\Big(b\big((q-1)-\lambda_2^{rad}\big)u_0^q-\big((p-1)-\lambda_2^{rad}\big)u_0^p\Big)v^2\ dx+o(\tau^2).\end{aligned}$$ Now, using [\[key-condition-for-non-constancy-of-solutions\]](#key-condition-for-non-constancy-of-solutions){reference-type="eqref" reference="key-condition-for-non-constancy-of-solutions"} and the fact that $\lambda_2^{rad}>1$ (see Lemma [Lemma 13](#radial-eigenvalue-problem){reference-type="ref" reference="radial-eigenvalue-problem"}), we have $$b\leq \frac{p-2}{q-2}<\frac{p-1-\lambda_2^{rad}}{q-1-\lambda_2^{rad}}.$$ Taking this into account, the right hand side in [\[lab1\]](#lab1){reference-type="eqref" reference="lab1"} is strictly negative for $\tau\neq0$ small, and thus [\[lab\]](#lab){reference-type="eqref" reference="lab"} follows. On the other hand, the function $t\mapsto I(\gamma_0(t))$ attains its unique maximum at $t=\frac{1}{r}$. Indeed, $$\begin{aligned} \frac{d}{dt}I(\gamma_0(t))&=\frac{d}{dt}I(tu_0r)=I'(tu_0r)ru_0\\ &=|B_1|\Big(tu_0^2r^2+bt^{q-1}u_0^qr^q-t^{p-1}u_0^pr^p\Big)\\ &=-|B_1|tu_0^2r^2g(tu_0r),\end{aligned}$$ where $|B_1|$ is the measure of $B_1$ and $g(t)=t^{p-2}-bt^{q-2}-1$. Thus, $\frac{d}{dt}I(\gamma_0(t))>0$ if $t<\frac{1}{r}$ and $\frac{d}{dt}I(\gamma_0(t))<0$ if $t>\frac{1}{r}$. It therefore follows that $t\mapsto I(\gamma_0(t))$ has a unique maximum point at $t=\frac{1}{r}$. Since $(\tau, t)\mapsto I(\gamma_{\tau}(t))$ is a continuous function, we can choose $0<\tau_0<\varepsilon_1$ sufficiently small such that the maximum of the function $t\mapsto I(\gamma_{\tau_0}(t))$ lies in $(\frac{1}{r}-\varepsilon_2, \frac{1}{r}+\varepsilon_2)$. Let $t_0\in (\frac{1}{r}-\varepsilon_2, \frac{1}{r}+\varepsilon_2)$ be such a maximum point. Then $$0=\frac{d}{dt}I(\gamma_{\tau_0}(t))\Big|_{t=t_0}=I'(t_0(u_0+\tau_0v)r)(u_0+\tau_0v)r,$$ and therefore $t_0=h(\tau_0)$.
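As a numerical sanity check of the comparison $\frac{p-2}{q-2}<\frac{p-1-\lambda_2^{rad}}{q-1-\lambda_2^{rad}}$ invoked above: the sample exponents below, and the restriction of the eigenvalue parameter to $(1,q-1)$ so that both sides are ratios of positive numbers, are illustrative assumptions of this sketch.

```python
# Sanity check (illustrative parameters) of the inequality used above:
# for p > q > 2 and 1 < lam < q - 1, (p-2)/(q-2) < (p-1-lam)/(q-1-lam).
p, q = 6.0, 4.0
violations = 0
for k in range(1, 100):
    lam = 1.0 + k * (q - 2.0) / 100.0  # lam sweeps the interval (1, q-1)
    if not (p - 2) / (q - 2) < (p - 1 - lam) / (q - 1 - lam):
        violations += 1
assert violations == 0
```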
From [\[lab\]](#lab){reference-type="eqref" reference="lab"} it follows that $$I(t_0(u_0+\tau_0 v)r)<I(u_0).$$ Hence, $$I(t(u_0+\tau_0 v)r)<I(u_0),\quad\quad \forall t\in [0,1].$$ Thus, $\max_{t\in [0,1]}I(\gamma_{\tau_0}(t))< I(u_0)$. This yields $$c\leq \max_{t\in [0,1]}I(\gamma_{\tau_0}(t))< I(u_0),$$ as desired. The proof is finished. ◻ # Data availability statement {#data-availability-statement .unnumbered} Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. # Declaration of competing interest {#declaration-of-competing-interest .unnumbered} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. # Authors contributions {#authors-contributions .unnumbered} All authors contributed equally.\ **Acknowledgements:** D.A. and A.M. are pleased to acknowledge the support of the Natural Sciences and Engineering Research Council of Canada. R.Y.T. is supported by the Fields Institute. Adimurthi and S. L. Yadava, *Existence and nonexistence of positive radial solutions of Neumann problems with critical Sobolev exponents.* Archive for Rational Mechanics and Analysis 115.3 (1991): 275-296. Adimurthi and S. L. Yadava, *On a conjecture of Lin-Ni for a semilinear Neumann problem.* Transactions of the American Mathematical Society 336.2 (1993): 631-637. D. Amundsen, A. Moameni, and R. Y. Temgoua, *A mixed local and nonlocal supercritical Dirichlet problems.* arXiv preprint arXiv:2303.03273 (2023). V. Barutello, S. Secchi, and E. Serra, *A note on the radial solutions for the supercritical Hénon equation.* Journal of Mathematical Analysis and Applications 341.1 (2008): 720-728. D. Bonheure, C. Grumiau, and C. Troestler, *Multiple radial positive solutions of semilinear elliptic problems with Neumann boundary conditions.* Nonlinear Analysis: Theory, Methods & Applications 147 (2016): 236-273. D. Bonheure, B.
Noris, and T. Weth, *Increasing radial solutions for Neumann problems without growth restrictions.* Annales de l'IHP Analyse non linéaire. Vol. 29. No. 4. 2012. L. Brasco, M. Squassina, and Y. Yang, *Global compactness results for nonlocal problems.* Discrete & Continuous Dynamical Systems - Series S 11.3 (2018): 391-424. E. Cinti and F. Colasuonno, *A nonlocal supercritical Neumann problem.* Journal of Differential Equations 268.5 (2020): 2246-2279. F. Colasuonno and B. Noris, *A $p$-Laplacian supercritical Neumann problem.* Discrete and Continuous Dynamical Systems 37.6 (2017): 3025-3057. C. Cowan and A. Moameni, *Supercritical Neumann problems on non-radial domains.* Transactions of the American Mathematical Society. C. Cowan, A. Moameni, and L. Salimi, *Existence of solutions to supercritical Neumann problems via a new variational principle.* Electronic Journal of Differential Equations 2017.213 (2017): 1-19. S. Dipierro, E. P. Lippi, and E. Valdinoci, *Linear theory for a mixed operator with Neumann conditions.* Asymptotic Analysis 128.4 (2022): 571-594. S. Dipierro, E. P. Lippi, and E. Valdinoci, *(Non)local logistic equations with Neumann conditions.* Annales de l'Institut Henri Poincaré C (2022). S. Dipierro, X. Ros-Oton, and E. Valdinoci, *Nonlocal problems with Neumann boundary conditions.* Revista Matemática Iberoamericana 33.2 (2017): 377-416. I. Ekeland and R. Temam, *Convex Analysis and Variational Problems.* American Elsevier Publishing Co., Inc., New York, 1976. F. Faraci, *Multiplicity results for a Neumann problem involving the $p$-Laplacian.* Journal of Mathematical Analysis and Applications 277.1 (2003): 180-189. A. Moameni, *A variational principle for problems with a hint of convexity.* Comptes Rendus Mathematique 355.12 (2017): 1236-1241. A. Moameni, *Critical point theory on convex subsets with applications in differential equations and analysis.* J. Math. Pures Appl. (9) 141 (2020), 266-315. A. Moameni and L.
Salimi, *Existence results for a supercritical Neumann problem with a convex-concave non-linearity.* Annali di Matematica Pura ed Applicata (1923-) 198 (2019): 1165-1184. D. Mugnai and E. P. Lippi, *On mixed local-nonlocal operators with $(\alpha, \beta)$-Neumann conditions.* Rendiconti del Circolo Matematico di Palermo Series 2 71.3 (2022): 1035-1048. S. Secchi, *Increasing variational solutions for a nonlinear $p$-Laplace equation without growth conditions.* Annali di Matematica Pura ed Applicata 191 (2012): 469-485. E. Serra and P. Tilli, *Monotonicity constraints and supercritical Neumann problems.* Annales de l'Institut Henri Poincaré C 28.1 (2011): 63-74. A. Szulkin, *Minimax principles for lower semicontinuous functions and applications to nonlinear boundary value problems.* Annales de l'Institut Henri Poincaré C, Analyse non linéaire. Vol. 3. No. 2., 1986. [^1]: Indeed, if $v_m(t_0)>v_m(r)$ for some $\overline r< t_0<r$, then by continuity of $v_m$, there exists $t\in (t_0,r)$ for which $v_m(t_0)>v_m(t)>v_m(r)$. This violates both $(i)$ and $(ii)$.
--- abstract: | Tomita-Takesaki theory associates a positive operator called the "modular operator" with a von Neumann algebra and a cyclic-separating vector. Tomita's theorem says that the unitary flow generated by the modular operator leaves the algebra invariant. I give a new, short proof of this theorem which only uses the analytic structure of unitary flows, and which avoids operator-valued Fourier transforms (as in van Daele's proof) and operator-valued Mellin transforms (as in Zsidó's and Woronowicz's proofs). The proof is similar to one given by Bratteli and Robinson in the special case that the modular operator is bounded. author: - Jonathan Sorce bibliography: - bibliography.bib title: A short proof of Tomita's theorem --- # Introduction Let $\mathcal{H}$ be a Hilbert space, $\mathcal{A}$ a von Neumann algebra, and $\mathcal{A}'$ its commutant. A vector $\Omega \in \mathcal{H}$ is said to be cyclic and separating for $\mathcal{A}$ if the subspaces $\mathcal{A}\Omega$ and $\mathcal{A}' \Omega$ are both dense in $\mathcal{H}$. Given such a vector, one may define an antilinear operator $S_0$ with domain $\mathcal{A}\Omega$ by $$S_0 (\mathrm{a} \Omega) = \mathrm{a}^* \Omega, \qquad \mathrm{a} \in \mathcal{A}.$$ The first basic result of the Tomita-Takesaki theory developed in [@takesaki2006tomita] --- see also [@takesaki-book; @struatilua2019lectures] for textbook treatments --- is that $S_0$ is a preclosed operator. Its closure is denoted $S$ and is called the *Tomita operator*. The polar decomposition of $S$ is written $$S = J \Delta^{1/2}.$$ $J$ is an antiunitary operator called the *modular conjugation*, and $\Delta$ is an invertible, positive, self-adjoint operator called the *modular operator*. 
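These objects can be made completely explicit in finite dimensions, which is useful for orienting oneself before the general argument. The following numerical sketch is an illustration only (the two-qubit model, the density matrix $\rho,$ and the vectorization conventions are assumptions of the example, not constructions from this paper): taking $\mathcal{A}= M_2(\mathbb{C}) \otimes I$ acting on $\mathbb{C}^2 \otimes \mathbb{C}^2$ and $\Omega = \operatorname{vec}(\sqrt{\rho})$ with $\rho$ a full-rank density matrix, the modular data are the standard $\Delta \operatorname{vec}(M) = \operatorname{vec}(\rho M \rho^{-1})$ and $J \operatorname{vec}(M) = \operatorname{vec}(M^{*}).$

```python
import numpy as np

# Finite-dimensional illustration (assumed model, not from the paper):
# A = M_2(C) ⊗ I acting on C^2 ⊗ C^2, with cyclic-separating vector
# Omega = vec(sqrt(rho)) for a full-rank density matrix rho.  With the
# row-major convention vec(M)_{2i+j} = M_{ij}, one has
#   (x ⊗ I) vec(M) = vec(x M),
#   Delta vec(M)   = vec(rho M rho^{-1}),
#   J vec(M)       = vec(M^*).
rng = np.random.default_rng(0)
p = np.array([0.7, 0.3])                  # eigenvalues of rho
rho = np.diag(p)
I2 = np.eye(2)
Omega = np.sqrt(rho).reshape(-1)          # vec(sqrt(rho))

Delta = np.kron(rho, np.linalg.inv(rho))  # diagonal, positive, invertible
P = np.eye(4)[[0, 2, 1, 3]]               # permutation: vec(M) -> vec(M^T)

def J(v):
    """Modular conjugation: vec(M) -> vec(M^*) (antilinear)."""
    return P @ np.conj(v)

x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
a = np.kron(x, I2)                        # an element of A
astar = np.kron(x.conj().T, I2)

# S = J Delta^{1/2} sends a Omega to a^* Omega.
sqrt_Delta = np.diag(np.sqrt(np.diag(Delta)))
S_of_a_Omega = J(sqrt_Delta @ (a @ Omega))

# Modular flow Delta^{-it} a Delta^{it} stays inside A: it commutes
# with every element I ⊗ y of the commutant A'.
t = 0.37
D_it = np.diag(np.diag(Delta).astype(complex) ** (1j * t))
flowed = np.conj(D_it) @ a @ D_it
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
bprime = np.kron(I2, y)
```

One can check that `S_of_a_Omega` coincides with `astar @ Omega` (so $S = J \Delta^{1/2}$ acts as $\mathrm{a} \Omega \mapsto \mathrm{a}^{*} \Omega$), and that `flowed` commutes with `bprime`; in this model the flow is simply $\Delta^{-it} (x \otimes I) \Delta^{it} = (\rho^{-it} x \rho^{it}) \otimes I \in \mathcal{A}.$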
One can also show that the Tomita operator of $\mathcal{A}'$ for $\Omega$ is $S^{*},$ with polar decomposition $$S^* = J \Delta^{-1/2}.$$ Since the modular operator is invertible, it generates a unitary group of operators $\Delta^{-it}.$ The map from $\mathcal{B}(\mathcal{H}) \times \mathbb{R}$ to $\mathcal{B}(\mathcal{H})$ given by $$g(\mathrm{x}, t) = \Delta^{-it} \mathrm{x} \Delta^{it}$$ is known as *modular flow*. The fundamental theorem of Tomita, which is the starting point for the rest of the Tomita-Takesaki theory, is that modular flow maps the algebra $\mathcal{A}$ to itself. The first complete proof of this theorem was given by Takesaki in [@takesaki2006tomita]. Other general proofs were given in [@van-daele-proof; @zsido-proof; @woronowicz-proof], and a simplified proof in the case that $\mathcal{A}$ is hyperfinite was given in [@longo-proof]. Tomita-Takesaki theory has been extremely useful for the study of von Neumann algebras, especially the ones of type III. It is the basic tool underlying Connes' classification of type III factors [@connes1973classification], Takesaki's duality theorem for crossed products [@takesaki1973duality], and also several interesting developments in quantum field theory [@Borchers:2000pv]. This paper presents a proof of Tomita's theorem that is, to the best of my knowledge, new to the literature. It is similar to Zsidó's proof from [@zsido-proof] in that it works by constructing a dense subset of $\mathcal{A}$ for which modular flow admits an entire analytic extension. The main difference is that for the dense subset constructed here, the analytic extension of modular flow has norm bounded by an exponential function at infinity, so that Carlson's theorem can be used to constrain modular flow by evaluating the analytic extension on the integers in the complex plane. 
This lets us show directly that all commutators of the form $$[\Delta^{-it} \mathrm{a} \Delta^{it}, \mathrm{b}'], \qquad \mathrm{b}' \in \mathcal{A}',$$ vanish, which implies via von Neumann's bicommutant theorem the desired result $$\Delta^{-it} \mathrm{a} \Delta^{it} \in \mathcal{A}.$$ By contrast, Zsidó's proof proceeds using the theory of analytic generators [@cioranescu1976analytic], which requires studying certain Mellin transforms of operator-valued functions. (See also [@woronowicz-proof; @Zsido2012].) Note also that the idea of using Carlson's theorem to constrain modular flow has appeared previously in [@bratteli2012operator] in the special case that the modular operator $\Delta$ is bounded. Section [2](#sec:background){reference-type="ref" reference="sec:background"} states some results from previous work that go into the new proof of Tomita's theorem. Proofs are not given, but sources are provided. Section [3](#sec:proof){reference-type="ref" reference="sec:proof"} presents a proof of Tomita's theorem. A longer version of this proof with different exposition is presented in the companion article [@sorce-long-proof] for an audience of physicists. # Background material {#sec:background} **Remark 1**. The first important lemma tells us how to think of the domain of the Tomita operator $S$. The domain of $S_0$ is expressed in terms of operators in $\mathcal{A}$ acting on $\Omega.$ The following lemma tells us that the domain of $S$ can be expressed in terms of operators affiliated to $\mathcal{A}$ acting on $\Omega.$ (Recall that a closed, unbounded operator $\mathrm{T}$ is said to be affiliated to $\mathcal{A}$ if it commutes with every operator $\mathrm{a}' \in \mathcal{A}'$ on all vectors where both $\mathrm{T} \mathrm{a}'$ and $\mathrm{a}' \mathrm{T}$ are defined.) **Lemma 2**. *Let $\Omega$ be a cyclic-separating vector for a von Neumann algebra $\mathcal{A},$ and let $S$ be the Tomita operator.
The domain of $S$, which is the same as the domain of $\Delta^{1/2},$ consists of all vectors of the form $\mathrm{T} \Omega,$ where $\mathrm{T}$ is a closed operator affiliated with $\mathcal{A}$, having $\mathcal{A}' \Omega$ as a core, and for which $\Omega$ is in the domain of both $\mathrm{T}$ and $\mathrm{T}^{*}.$ The operator $S$ acts as $$S (\mathrm{T} \Omega) = \mathrm{T}^{*} \Omega.$$* *Proof.* See e.g. [@takesaki-book lemma 1.10] or [@jones2003neumann theorem 13.1.3]. ◻ **Remark 3**. The next lemma tells us when the unitary group generated by a positive operator, thought of as a flow acting on bounded operators by conjugation, can be analytically continued away from the imaginary axis in the complex plane. **Lemma 4**. *Let $P$ be a positive, invertible, self-adjoint operator on the Hilbert space $\mathcal{H}.$ Let $w$ be a complex number with $\text{Re}(w) > 0.$ Fix a bounded operator $\mathrm{x}.$* *If the operator $P^{-w} \mathrm{x} P^{w}$ is defined and bounded on a core for $P^{w}$, then for every $z$ in the strip $0 \leq \text{Re}(z) \leq \text{Re}(w),$ the operator $P^{-z} \mathrm{x} P^{z}$ is bounded on its domain, so that it is preclosed with bounded closure. The bounded-operator-valued function $$z \mapsto \overline{P^{-z} \mathrm{x} P^{z}}$$ is analytic in the strip with respect to the norm topology, and continuous on the boundaries of the strip with respect to the strong operator topology.* *Proof.* See e.g. [@struatilua2019lectures section 9.24]. ◻ **Remark 5**. The next lemma is due to Takesaki, and appears in all general proofs of Tomita's theorem except the one by Woronowicz. The reason for needing this lemma in the proof of section [3](#sec:proof){reference-type="ref" reference="sec:proof"} is that we will want to study a special class of operators $\mathrm{a} \in \mathcal{A}$ for which the modular operator $\Delta$ "looks bounded." 
Concretely, this will mean that the vector $\mathrm{a} \Omega$ lies in a spectral subspace of $\Delta$ with bounded spectral range. To produce such vectors, we will start with other vectors in $\mathcal{H}$ and act on them with "mollifying operators" that truncate the large-spectral-value spectral subspaces of $\Delta$. Takesaki's lemma tells us what happens when the resolvent of the modular operator is used as a mollifier; other mollifying operators can be constructed from the resolvent using contour integrals. **Lemma 6**. *Let $\Omega$ be a cyclic-separating vector for a von Neumann algebra $\mathcal{A}$ with commutant $\mathcal{A}'.$ Let $\Delta$ be the associated modular operator. Let $z$ be in the resolvent set of $\Delta,$ so that $(z-\Delta)$ is invertible as a bounded operator. Fix $\mathrm{a}' \in \mathcal{A}'.$* *Then there exists a unique operator $\mathrm{a} \in \mathcal{A}$ satisfying $$\mathrm{a} \Omega = (z - \Delta)^{-1} \mathrm{a}' \Omega,$$ and it satisfies the bound $$\lVert \mathrm{a} \rVert \leq \frac{\lVert \mathrm{a}' \rVert}{\sqrt{2 (|z| - \text{Re}(z))}}.$$* **Remark 7**. The final lemma we will need lets us express analytic functions of bounded operators as residue integrals. **Lemma 8**. *Let $\mathrm{x}$ be a bounded, self-adjoint operator, and let $f$ be a function analytic in a neighborhood of the spectrum of $\mathrm{x}.$ Then the operator $f(\mathrm{x})$, defined via the functional calculus using the spectral theorem, can be written in terms of the norm-convergent Bochner integral $$f(\mathrm{x}) = \frac{1}{2 \pi i} \int_{\gamma} dz\, f(z) (z - \mathrm{x})^{-1},$$ where $\gamma$ is any simple, counterclockwise, closed contour in the domain of $f$ encircling the spectrum of $\mathrm{x}.$* *Proof.* See e.g. [@struatilua2019lectures sections 2.25 and 2.29].
◻ # A proof of Tomita's theorem {#sec:proof} Let $\Theta$ be the Heaviside theta function, defined by $$\Theta(x) = \begin{cases} 1 & x > 0 \\ \frac{1}{2} & x = 0 \\ 0 & x < 0 \end{cases}.$$ The main idea of this section is to produce operators in $\mathcal{A}$ for which the modular operator "looks bounded" by starting with a vector $\mathrm{a}' \Omega$ and acting on it with the operator $\Theta(\lambda - \Delta)$ for some $\lambda > 0.$ We will be able to study the vector $\Theta(\lambda - \Delta) \mathrm{a}'\Omega$ by approximating the function $\Theta(\lambda - x)$ with a sequence of sigmoid functions, $$f_k(x) = \frac{1}{1+e^{k (x - \lambda)}}.$$ Since $f_k(z)$ is analytic in a neighborhood of the positive real axis (its poles lie off the real axis, at the points $\lambda + i \pi (2m+1)/k$), the operator $f_k(\Delta)$ (defined via the functional calculus from the spectral theorem) can be studied using a contour integral of $f_k(z)$ multiplied by the resolvent $(z - \Delta)^{-1}.$ Once the resolvent has entered our construction, we will be able to apply lemma [Lemma 6](#lem:resolvent-lemma){reference-type="ref" reference="lem:resolvent-lemma"}. ![A sketch of the contour used in proposition [Proposition 9](#prop:contour-prop){reference-type="ref" reference="prop:contour-prop"} and theorem [Theorem 11](#thm:bounded-construction){reference-type="ref" reference="thm:bounded-construction"}. The black dot denotes the origin of the complex plane, and the jagged line is the positive real axis.](specific-contour.pdf){#fig:specific-contour} **Proposition 9**. *Fix $\lambda > 0,$ and let $f_k : \mathbb{C}\to \mathbb{C}$ be the sigmoid function $$f_k(z) = \frac{1}{1 + e^{k(z-\lambda)}}.$$ Let $\Delta$ be an invertible, self-adjoint, positive operator. Let $\gamma$ be the counterclockwise contour in the complex plane surrounding the positive real axis, given by combining the half-lines $\{t \pm 2 \pi i, \quad t \geq 0\}$ with the half-circle of radius $2 \pi$ centered at the origin.
(See figure [1](#fig:specific-contour){reference-type="ref" reference="fig:specific-contour"}.)* *Then for any nonnegative integer $n,$ and any $\psi \in \mathcal{H},$ we have $$\Delta^n f_k(\Delta) \psi = \frac{1}{2 \pi i} \int_{\gamma} dz\, z^n f_k(z) (z - \Delta)^{-1} \psi,$$ where this integral converges in the sense of the Bochner integral on Hilbert space.* *Proof.* For each fixed integer $m,$ let $v_m$ be the vertical segment passing through the real axis at $m+1/2$, oriented in the positive imaginary direction, and with endpoints on the contour $\gamma$ from figure [1](#fig:specific-contour){reference-type="ref" reference="fig:specific-contour"}. Let $\gamma_m$ be the portion of the contour $\gamma$ lying to the left of this vertical segment. Let $\Pi_m$ be the spectral projection of $\Delta$ in the range $[0, m],$ and denote $\Delta_{(m)} \equiv \Delta \Pi_m$. Now, consider the vector $\Pi_m \psi.$ We have $$\begin{aligned} \begin{split} \Delta^n f_k(\Delta) \Pi_m \psi = \Delta_{(m)}^n f_k(\Delta_{(m)}) \Pi_m \psi. \end{split} \end{aligned}$$ Each $\Delta_{(m)}$ is bounded, so by lemma [Lemma 8](#lem:residue-lemma){reference-type="ref" reference="lem:residue-lemma"}, we may express this equation in terms of a norm-convergent Bochner integral as $$\begin{aligned} \begin{split} \Delta^n f_k(\Delta) \Pi_m \psi = \frac{1}{2 \pi i} \int_{\gamma_m + v_m} dz\, z^n f_k(z) (z - \Delta_{(m)})^{-1} \Pi_m \psi. \end{split} \end{aligned}$$ In fact, since the spectrum of $\Delta_{(m)}$ lies in the range $[0, m],$ we may write this integral for any $m' \geq m$ as $$\begin{aligned} \begin{split} \Delta^n f_k(\Delta) \Pi_m \psi = \frac{1}{2 \pi i} \int_{\gamma_{m'} + v_{m'}} dz\, z^n f_k(z) (z - \Delta_{(m)})^{-1} \Pi_m \psi.
\end{split} \end{aligned}$$ The integral over $v_{m'}$ is easily seen to vanish in the limit $m' \to \infty,$ which gives the identity $$\begin{aligned} \begin{split} \Delta^n f_k(\Delta) \Pi_m \psi & = \frac{1}{2 \pi i} \int_{\gamma} dz\, z^n f_k(z) (z - \Delta_{(m)})^{-1} \Pi_m \psi \\ & = \frac{1}{2 \pi i} \int_{\gamma} dz\, z^n f_k(z) (z - \Delta)^{-1} \Pi_m \psi. \\ \end{split} \end{aligned}$$ So far we have shown that the proposition holds for any vector of the form $\Pi_m \psi.$ But by the spectral theorem, the sequence $\Pi_m$ converges strongly to the identity operator. Taking the limit $m \to \infty$ in the above expression gives $$\begin{aligned} \begin{split} \Delta^n f_k(\Delta) \psi & = \frac{1}{2 \pi i} \lim_{m \to \infty} \int_{\gamma} dz\, z^n f_k(z) (z - \Delta)^{-1} \Pi_m \psi. \\ \end{split} \end{aligned}$$ Applying the dominated convergence theorem lets us move the limit inside the integral, and proves the proposition. ◻ **Proposition 10**. *Fix $\lambda > 0,$ and let $f_k : \mathbb{C}\to \mathbb{C}$ be the sigmoid function $$f_k(z) = \frac{1}{1 + e^{k(z-\lambda)}}.$$ Let $\Delta$ be an invertible, self-adjoint, positive operator. Then for any $\psi \in \mathcal{H}$ and any nonnegative integer $n,$ the vector sequence $\Delta^n f_k(\Delta) \psi$ converges to $\Delta^n \Theta(\lambda - \Delta) \psi$ in the limit $k \to \infty.$* *Proof.* We aim to show the identity $$\lim_{k \to \infty} \lVert \left(\Delta^n f_k(\Delta) - \Delta^n \Theta(\lambda - \Delta) \right) \psi\rVert = 0.$$ From the spectral theorem for $\Delta$, there exists a finite positive measure $\mu_{\psi}$ on $[0, \infty)$ satisfying $$\lVert \left(\Delta^n f_k(\Delta) - \Delta^n \Theta(\lambda - \Delta) \right) \psi\rVert^2 = \int d\mu_{\psi}(t)\, |t^n f_k(t) - t^n \Theta(\lambda - t)|^2.$$ This can be seen to converge to zero by a standard application of the dominated convergence theorem. ◻ **Theorem 11**.
*Fix $\lambda > 0.$ Let $\Delta$ be the modular operator of a cyclic-separating vector $\Omega$ for a von Neumann algebra $\mathcal{A}$ with commutant $\mathcal{A}'.$ Let $n$ be a nonnegative integer, and fix $\mathrm{a}' \in \mathcal{A}'.$* *There exists a bounded operator $\mathrm{a}_{\lambda, n} \in \mathcal{A}$ satisfying $$\label{eq:theta-switch} \mathrm{a}_{\lambda, n} \Omega = \Delta^n \Theta(\lambda - \Delta) \mathrm{a}' \Omega.$$ Furthermore, there exist $n$-independent constants $\alpha_\lambda, \beta_\lambda > 0$ with $\lVert \mathrm{a}_{\lambda,n} \rVert \leq \alpha_\lambda e^{\beta_\lambda n}.$ In particular, one can show the concrete bound $$\begin{aligned} \label{eq:tidy-bound} \begin{split} \lVert \mathrm{a}_{\lambda, n} \rVert & \leq \frac{\lVert \mathrm{a}' \rVert}{2 \pi} \left(2 \lambda \frac{(\lambda^2 + 4 \pi^2)^{n/2}}{\sqrt{2 ((\lambda^2 + 4 \pi^2)^{1/2} - \lambda)}} + \frac{(2\pi)^{n+1} \pi}{\sqrt{4 \pi}} \right). \end{split}\end{aligned}$$* *Proof.* Since $\Delta^{n+1/2} \Theta(\lambda - \Delta)$ is a bounded operator, the vector $\Delta^n \Theta(\lambda - \Delta) \mathrm{a'} \Omega$ is in the domain of $\Delta^{1/2}.$ So by lemma [Lemma 2](#lem:modular-domain){reference-type="ref" reference="lem:modular-domain"}, there exists a closed operator $\mathrm{a}_{\lambda, n}$ affiliated to $\mathcal{A},$ with $\mathcal{A}' \Omega$ as a core, satisfying equation [\[eq:theta-switch\]](#eq:theta-switch){reference-type="eqref" reference="eq:theta-switch"}. 
The goal is to show that $\mathrm{a}_{\lambda, n}$ is bounded, and in fact that its norm is bounded by an exponential function of $n.$ Since $\mathcal{A}' \Omega$ is a core for $\mathrm{a}_{\lambda, n},$ it suffices to show that $\mathrm{a}_{\lambda, n}$ has bounded action on vectors of the form $\mathrm{b}' \Omega$ for $\mathrm{b}' \in \mathcal{A}'.$ Combining propositions [Proposition 9](#prop:contour-prop){reference-type="ref" reference="prop:contour-prop"} and [Proposition 10](#prop:sigmoid-theta-prop){reference-type="ref" reference="prop:sigmoid-theta-prop"}, and once again using $f_k$ to denote the sigmoid function from those propositions, we have $$\begin{aligned} \label{eq:first-integral-manipulations} \begin{split} \mathrm{a}_{\lambda,n} \mathrm{b}' \Omega & = \mathrm{b}' \mathrm{a}_{\lambda,n} \Omega \\ & = \mathrm{b}' \Delta^n \Theta(\lambda - \Delta) \mathrm{a}' \Omega \\ & = \mathrm{b}' \lim_{k \to \infty} \Delta^n f_k(\Delta) \mathrm{a}' \Omega \\ & = \frac{1}{2\pi i} \mathrm{b}' \lim_{k \to \infty} \int_{\gamma} dz\, z^n f_k(z) (z - \Delta)^{-1} \mathrm{a}' \Omega, \end{split} \end{aligned}$$ where $\gamma$ is the contour from figure [1](#fig:specific-contour){reference-type="ref" reference="fig:specific-contour"}.
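The next step applies lemma [Lemma 6](#lem:resolvent-lemma){reference-type="ref" reference="lem:resolvent-lemma"}, whose norm bound can be sanity-checked numerically in a finite-dimensional model. The sketch below is an illustration only, not part of the proof; the two-qubit model $\mathcal{A}= M_2(\mathbb{C}) \otimes I$ with $\Omega = \operatorname{vec}(\sqrt{\rho}),$ $\Delta = \rho \otimes \rho^{-1},$ and the choice $z = -1$ are assumptions of the example.

```python
import numpy as np

# Assumed finite-dimensional model: A = M_2(C) ⊗ I on C^2 ⊗ C^2,
# Omega = vec(sqrt(rho)) (row-major vec), Delta = rho ⊗ rho^{-1}.
rng = np.random.default_rng(1)
p = np.array([0.7, 0.3])
rho = np.diag(p)
I2 = np.eye(2)
Omega = np.sqrt(rho).reshape(-1)
Delta = np.kron(rho, np.linalg.inv(rho))

z = -1.0                                  # a point in the resolvent set of Delta
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
aprime = np.kron(I2, y)                   # an element of A'

# Solve (z - Delta) w = a' Omega, then recover a in A with a Omega = w:
# since (x ⊗ I) Omega = vec(x sqrt(rho)), take x = unvec(w) rho^{-1/2}.
w = np.linalg.solve(z * np.eye(4) - Delta, aprime @ Omega)
x = w.reshape(2, 2) @ np.diag(1.0 / np.sqrt(p))
a = np.kron(x, I2)

# Norm bound asserted by lemma 6: ||a|| <= ||a'|| / sqrt(2(|z| - Re z)).
bound = np.linalg.norm(y, 2) / np.sqrt(2.0 * (abs(z) - z.real))
```

The computed $\mathrm{a}$ reproduces $(z - \Delta)^{-1} \mathrm{a}' \Omega$ on $\Omega,$ and its operator norm respects the bound; here $\lVert \mathrm{a}' \rVert = \lVert y \rVert$ and, for $z = -1,$ the right-hand side is $\lVert \mathrm{a}' \rVert / 2.$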
By lemma [Lemma 6](#lem:resolvent-lemma){reference-type="ref" reference="lem:resolvent-lemma"}, there exist operators $\mathrm{a}_z \in \mathcal{A}$ satisfying $$(z - \Delta)^{-1} \mathrm{a}' \Omega = \mathrm{a}_z \Omega$$ and $$\label{eq:az-bound} \lVert \mathrm{a}_z \rVert \leq \frac{\lVert \mathrm{a}' \rVert}{\sqrt{2(|z| - \text{Re}(z))}}.$$ Since $\mathrm{b}'$ is a bounded operator, and since the integral in equation [\[eq:first-integral-manipulations\]](#eq:first-integral-manipulations){reference-type="eqref" reference="eq:first-integral-manipulations"} converges as a Bochner integral, we may move $\mathrm{b}'$ through the limit and through the integral symbol to write $$\begin{aligned} \begin{split} \mathrm{a}_{\lambda, n} \mathrm{b}' \Omega & = \lim_{k \to \infty} \frac{1}{2 \pi i} \int_{\gamma} dz\, z^n f_k(z) \mathrm{a}_z \mathrm{b}' \Omega. \end{split} \end{aligned}$$ Taking norms on either side of the equation gives $$\begin{aligned} \begin{split} \lVert \mathrm{a}_{\lambda, n} \mathrm{b}' \Omega \rVert & \leq \frac{1}{2 \pi} \lVert \mathrm{b}' \Omega \rVert \limsup_{k \to \infty} \int_{\gamma} ds\, |z|^n |f_k(z)| \lVert \mathrm{a}_z \rVert, \end{split} \end{aligned}$$ where $s$ is an arclength parameter for the contour $\gamma.$ Using the bound [\[eq:az-bound\]](#eq:az-bound){reference-type="eqref" reference="eq:az-bound"}, we may write the inequality $$\begin{aligned} \label{eq:penultimate-inequality} \begin{split} \lVert \mathrm{a}_{\lambda, n} \mathrm{b}' \Omega \rVert & \leq \frac{\lVert \mathrm{a}' \rVert}{2 \pi} \lVert \mathrm{b}' \Omega \rVert \limsup_{k \to \infty} \int_{\gamma} ds\, \frac{|z|^n |f_k(z)|}{\sqrt{2 (|z| - \text{Re}(z))}}. 
\end{split} \end{aligned}$$ On either half-line $\{ t \pm 2 \pi i, \quad t \geq 0\},$ we have (using that $k$ is an integer, so that $e^{\pm 2 \pi i k} = 1$) $$f_k(t \pm 2 \pi i) = \frac{1}{1 + e^{k(t - \lambda)}}.$$ This converges pointwise, in the limit $k \to \infty,$ to the Heaviside function $\Theta(\lambda - t).$ An application of the dominated convergence theorem then gives $$\begin{aligned} \begin{split} \limsup_{k \to \infty} \int_{\text{half-line}} ds\, \frac{|z|^n |f_k(z)|}{\sqrt{2 (|z| - \text{Re}(z))}} & = \int_{0}^{\lambda} dt\, \frac{(t^2 + 4 \pi^2)^{n/2}}{\sqrt{2 ((t^2 + 4 \pi^2)^{1/2} - t)}}. \end{split} \end{aligned}$$ The integrand is monotonically increasing in $t,$ so the integral can be upper bounded by $\lambda$ times the value of the integrand at $t=\lambda,$ giving $$\begin{aligned} \label{eq:half-line-bound} \begin{split} \limsup_{k \to \infty} \int_{\text{half-line}} ds\, \frac{|z|^n |f_k(z)|}{\sqrt{2 (|z| - \text{Re}(z))}} & \leq \lambda \frac{(\lambda^2 + 4 \pi^2)^{n/2}}{\sqrt{2 ((\lambda^2 + 4 \pi^2)^{1/2} - \lambda)}}.
\end{split} \end{aligned}$$ The other contribution to the contour integral in [\[eq:penultimate-inequality\]](#eq:penultimate-inequality){reference-type="eqref" reference="eq:penultimate-inequality"} is an integral over a half-circle, and may be written as $$\int_{\text{half-circle}} ds\, \frac{|z|^n |f_k(z)|}{\sqrt{2 (|z| - \text{Re}(z))}} = \int_{\pi/2}^{3 \pi/2} d\theta \frac{(2 \pi)^{n+1} |f_k(2 \pi e^{i \theta})|}{\sqrt{4 \pi (1 - \cos(\theta))}}.$$ On this half-circle, the functions $f_k$ converge pointwise to $1,$ and another application of the dominated convergence theorem gives $$\limsup_{k \to \infty} \int_{\text{half-circle}} ds\, \frac{|z|^n |f_k(z)|}{\sqrt{2 (|z| - \text{Re}(z))}} = \int_{\pi/2}^{3 \pi/2} d\theta \frac{(2 \pi)^{n+1}}{\sqrt{4 \pi (1 - \cos(\theta))}}.$$ On the half-circle, the denominator is lower-bounded by $\sqrt{4 \pi}$, which gives the simple bound $$\label{eq:half-circle-bound} \limsup_{k \to \infty} \int_{\text{half-circle}} ds\, \frac{|z|^n |f_k(z)|}{\sqrt{2 (|z| - \text{Re}(z))}} \leq \frac{(2\pi)^{n+1} \pi}{\sqrt{4 \pi}}.$$ Combining expressions [\[eq:half-circle-bound\]](#eq:half-circle-bound){reference-type="eqref" reference="eq:half-circle-bound"} and [\[eq:half-line-bound\]](#eq:half-line-bound){reference-type="eqref" reference="eq:half-line-bound"} with the expression [\[eq:penultimate-inequality\]](#eq:penultimate-inequality){reference-type="eqref" reference="eq:penultimate-inequality"}, and using the density of $\mathcal{A}' \Omega$ in $\mathcal{H},$ we may bound each operator $\mathrm{a}_{\lambda, n}$ by $$\begin{aligned} \begin{split} \lVert \mathrm{a}_{\lambda, n} \rVert & \leq \frac{\lVert \mathrm{a}' \rVert}{2 \pi} \left(2 \lambda \frac{(\lambda^2 + 4 \pi^2)^{n/2}}{\sqrt{2 ((\lambda^2 + 4 \pi^2)^{1/2} - \lambda)}} + \frac{(2\pi)^{n+1} \pi}{\sqrt{4 \pi}} \right). \end{split} \end{aligned}$$ ◻ **Remark 12**.
A completely symmetric argument to the one given above, with the substitutions $\Delta \leftrightarrow \Delta^{-1}$ and $\mathcal{A}\leftrightarrow \mathcal{A}',$ shows that for $\lambda > 0$ and $n \leq 0,$ and for $\mathrm{a} \in \mathcal{A},$ there exists an operator $\mathrm{a}'_{\lambda, n} \in \mathcal{A}'$ satisfying $$\mathrm{a}'_{\lambda, n} \Omega = \Delta^n \Theta\left( \lambda^{-1} - \Delta^{-1} \right) \mathrm{a} \Omega = \Delta^{n} \Theta(\Delta - \lambda) \mathrm{a} \Omega,$$ and that the norm of $\mathrm{a}'_{\lambda, n}$ is bounded by an exponential function of $-n.$ Specifically, it is bounded by $$\begin{aligned} \begin{split} \lVert \mathrm{a}'_{\lambda, n} \rVert & \leq \frac{\lVert \mathrm{a} \rVert}{2 \pi} \left(2 \lambda^{-1} \frac{(\lambda^{-2} + 4 \pi^2)^{-n/2}}{\sqrt{2 ((\lambda^{-2} + 4 \pi^2)^{1/2} - \lambda^{-1})}} + \frac{(2\pi)^{-n+1} \pi}{\sqrt{4 \pi}} \right). \end{split} \end{aligned}$$ Combining this observation with theorem [Theorem 11](#thm:bounded-construction){reference-type="ref" reference="thm:bounded-construction"}, we may conclude that for any integer $n$ and any real numbers $\lambda_1, \lambda_2$ satisfying $0 < \lambda_1 < \lambda_2,$ there are operators $\mathrm{a}_{[\lambda_1, \lambda_2], n} \in \mathcal{A}$ and $\mathrm{a}_{[\lambda_1, \lambda_2], n}' \in \mathcal{A}'$ satisfying $$\label{eq:tidy-equation} \mathrm{a}_{[\lambda_1, \lambda_2], n} \Omega = \mathrm{a}'_{[\lambda_1, \lambda_2], n} \Omega = \Delta^n \Theta(\lambda_2 - \Delta) \Theta(\Delta - \lambda_1) \mathrm{a} \Omega.$$ Furthermore, the norm of either family of operators (the ones in $\mathcal{A}$ or the ones in $\mathcal{A}'$) is bounded by an exponential function of $|n|.$ **Definition 13**. *The space of operators $\mathrm{a}_{[\lambda_1, \lambda_2], 0} \in \mathcal{A}$ obtained as in the preceding remark will be called the space of **tidy operators** in $\mathcal{A},$ denoted $\mathcal{A}_{\text{tidy}}.$* **Definition 14**. 
*If $\mathrm{a}$ is a tidy operator in $\mathcal{A}$, then we denote by $\mathrm{a}'$ the operator in $\mathcal{A}'$ satisfying $$\mathrm{a} \Omega = \mathrm{a}' \Omega.$$ For any integer $n,$ we denote by $\mathrm{a}_n$ and $\mathrm{a}'_n$ the operators in $\mathcal{A}$ and $\mathcal{A}'$ satisfying $$\mathrm{a}_n \Omega = \mathrm{a}'_n \Omega = \Delta^n \mathrm{a} \Omega = \Delta^n \mathrm{a}'\Omega.$$* **Remark 15**. The next lemma tells us how to think about the adjoints of tidy operators. **Lemma 16**. *Let $\mathrm{a}$ be a tidy operator in $\mathcal{A},$ and $n$ an integer. We have $$(\mathrm{a}'_{n+1})^{*} \Omega = (\mathrm{a}_{n})^* \Omega.$$* *Proof.* Fix $\mathrm{b} \in \mathcal{A}.$ We have $$\begin{aligned} \begin{split} \langle (\mathrm{a}_{n+1}')^* \Omega, \mathrm{b} \Omega \rangle & = \langle \mathrm{b}^* \Omega, \mathrm{a}_{n+1}' \Omega \rangle \\ & = \langle \mathrm{b}^* \Omega, \Delta \mathrm{a}_{n} \Omega \rangle \\ & = \langle \mathrm{b}^* \Omega, S^* S \mathrm{a}_n \Omega \rangle \\ & = \langle S \mathrm{a}_n \Omega, S \mathrm{b}^* \Omega \rangle \\ & = \langle (\mathrm{a}_n)^{*} \Omega, \mathrm{b} \Omega \rangle. \end{split} \end{aligned}$$ Since $\mathcal{A}\Omega$ is dense in $\mathcal{H},$ this proves the lemma. ◻ **Proposition 17**. *For any integer $n$ and any tidy operator $\mathrm{a} \in \mathcal{A}_{\text{tidy}},$ the operator $\Delta^{n} \mathrm{a} \Delta^{-n}$ is defined and bounded on $\mathcal{A}_{\text{tidy}} \Omega,$ and on that subspace it is equal to $\mathrm{a}_{n}.$* *Proof.* Fix $\mathrm{b} \in \mathcal{A}_{\text{tidy}}.$ By construction of $\mathcal{A}_{\text{tidy}},$ the vector $\mathrm{b} \Omega$ is in the domain of $\Delta^{-n}.$ Note that from the expression $$\begin{aligned} \begin{split} \mathrm{a} \Delta^{-n} \mathrm{b} \Omega & = \mathrm{a} \mathrm{b}_{-n} \Omega, \end{split} \end{aligned}$$ we see that $\mathrm{a} \Delta^{-n} \mathrm{b} \Omega$ is in the domain of the Tomita operator $S$.
Applying lemma [Lemma 16](#lem:dagger-ladder){reference-type="ref" reference="lem:dagger-ladder"} gives $$\begin{aligned} \begin{split} S \mathrm{a} \Delta^{-n} \mathrm{b}\Omega & = S \mathrm{a} \mathrm{b}_{-n} \Omega \\ & = \mathrm{b}_{-n}^* \mathrm{a}^* \Omega \\ & = \mathrm{b}_{-n}^* (\mathrm{a}_{1}')^* \Omega \\ & = (\mathrm{a}_1')^* \mathrm{b}_{-n}^* \Omega \\ & = (\mathrm{a}_1')^* (\mathrm{b}_{-(n-1)}')^* \Omega. \end{split} \end{aligned}$$ So $S \mathrm{a} \Delta^{-n} \mathrm{b} \Omega$ is in the domain of the Tomita operator for $\mathcal{A}',$ which is the adjoint $S^*.$ This tells us that $\mathrm{a} \Delta^{-n} \mathrm{b} \Omega$ is in the domain of $S^* S = \Delta,$ and gives the identity $$\begin{aligned} \begin{split} \Delta \mathrm{a} \Delta^{-n} \mathrm{b}\Omega & = S^{*} S \mathrm{a} \Delta^{-n} \mathrm{b}\Omega \\ & = S^{*} (\mathrm{a}_1')^* (\mathrm{b}_{-(n-1)}')^* \Omega \\ & = \mathrm{b}_{-(n-1)}' \mathrm{a}_{1}' \Omega \\ & = \mathrm{b}_{-(n-1)}' \mathrm{a}_1 \Omega \\ & = \mathrm{a}_1 \mathrm{b}_{-(n-1)}' \Omega \\ & = \mathrm{a}_1 \mathrm{b}_{-(n-1)} \Omega. \end{split} \end{aligned}$$ Iterating this process $n$ times tells us that $\mathrm{b} \Omega$ is in the domain of $\Delta^n \mathrm{a} \Delta^{-n},$ and gives the identity $$\begin{aligned} \begin{split} \Delta^n \mathrm{a} \Delta^{-n} \mathrm{b}\Omega & = \mathrm{a}_{n} \mathrm{b} \Omega. \end{split} \end{aligned}$$ ◻ **Proposition 18**. *For any real number $x,$ the space $\mathcal{A}_{\text{tidy}} \Omega$ is a core for $\Delta^{x}.$* *Proof.* To show that $\mathcal{A}_{\text{tidy}} \Omega$ is a core for $\Delta^{x},$ we must show that vectors of the form $\mathrm{a} \Omega \oplus \Delta^x \mathrm{a} \Omega,$ for $\mathrm{a}\in \mathcal{A}_{\text{tidy}},$ are dense in the graph of $\Delta^{x}.$ Suppose that $\psi$ is a vector in the domain of $\Delta^{x}$ such that $\psi \oplus \Delta^{x} \psi$ is orthogonal to all such vectors. 
I.e., suppose that for all $\mathrm{a} \in \mathcal{A}_{\text{tidy}},$ we have $$\begin{aligned} \begin{split} 0 & = \langle \psi \oplus \Delta^{x} \psi, \mathrm{a} \Omega \oplus \Delta^x \mathrm{a} \Omega \rangle \\ & = \langle \psi, \mathrm{a} \Omega \rangle + \langle \Delta^{x} \psi, \Delta^x \mathrm{a} \Omega \rangle \\ & = \langle \psi, (1 + \Delta^{2x}) \mathrm{a} \Omega \rangle. \end{split} \end{aligned}$$ Our goal is to show that whenever this expression is satisfied, $\psi$ vanishes. It suffices to show that the space $(1 + \Delta^{2x}) \mathcal{A}_{\text{tidy}} \Omega$ is dense in $\mathcal{H}.$ To see this, note that per the construction of $\mathcal{A}_{\text{tidy}}$ from remark [Remark 12](#rem:tidy-construction){reference-type="ref" reference="rem:tidy-construction"}, each $\mathrm{a} \in \mathcal{A}_{\text{tidy}}$ satisfies an equation like $$\mathrm{a} \Omega = \Theta(\lambda_2 - \Delta) \Theta(\Delta - \lambda_1) \mathrm{b} \Omega$$ for some $0 < \lambda_1 < \lambda_2$ and some $\mathrm{b} \in \mathcal{A}.$ The operator $$(1 + \Delta^{2x}) \Theta(\lambda_2 - \Delta) \Theta(\Delta - \lambda_1)$$ is bounded, injective, and invertible on the spectral subspace of $\Delta$ corresponding to the range $[\lambda_1, \lambda_2].$ It therefore maps any dense subset of $\mathcal{H}$ into a dense subspace of that spectral subspace; in particular, the space $$(1 + \Delta^{2x}) \Theta(\lambda_2 - \Delta) \Theta(\Delta - \lambda_1) \mathcal{A} \Omega$$ is dense in the spectral subspace of $\Delta$ for the range $[\lambda_1, \lambda_2].$ For any $0 < \lambda_1 < \lambda_2,$ this is a subspace of $(1 + \Delta^{2x}) \mathcal{A}_{\text{tidy}} \Omega.$ Taking $\lambda_1 \to 0$ and $\lambda_2 \to \infty$ shows that $(1 + \Delta^{2x})\mathcal{A}_{\text{tidy}} \Omega$ is dense in the spectral subspace of $\Delta$ for the range $(0, \infty)$, which by invertibility of $\Delta$ is equal to all of $\mathcal{H}.$ ◻ **Theorem 19** (Tomita's theorem for tidy operators). 
*For any $\mathrm{a} \in \mathcal{A}_{\text{tidy}}$ and any $t \in \mathbb{R},$ the operator $\Delta^{-it} \mathrm{a} \Delta^{it}$ is in $\mathcal{A}.$* *Proof.* Fix $\mathrm{b}' \in \mathcal{A}'.$ By von Neumann's bicommutant theorem, it suffices to show $$[\Delta^{-it} \mathrm{a} \Delta^{it}, \mathrm{b}'] = 0.$$ Fix any integer $n \geq 0.$ By proposition [Proposition 17](#prop:powers){reference-type="ref" reference="prop:powers"}, the operator $\Delta^{-n} \mathrm{a} \Delta^{n}$ is defined on the dense subspace $\mathcal{A}_{\text{tidy}} \Omega,$ and is equal to $\mathrm{a}_{-n}$ on that subspace. By proposition [Proposition 18](#prop:core){reference-type="ref" reference="prop:core"}, $\mathcal{A}_{\text{tidy}} \Omega$ is a core for $\Delta^{n}.$ Combining these observations with lemma [Lemma 4](#lem:operator-continuation){reference-type="ref" reference="lem:operator-continuation"}, we see that for any complex $z$ in the strip with $0 \leq \text{Re}(z) \leq n,$ the operator $\Delta^{-z} \mathrm{a} \Delta^{z}$ is bounded on its domain, and the map $$F_{\mathrm{a}}(z) = \overline{\Delta^{-z} \mathrm{a} \Delta^z}$$ is holomorphic in the interior of the strip and strongly continuous on the strip's boundary. Since this is true for any nonnegative integer $n$, it is true as a statement about the entire right half-plane.
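In finite dimensions the analytic family just described can be exhibited in closed form. The sketch below is an illustration only (the two-qubit model $\mathcal{A}= M_2(\mathbb{C}) \otimes I$ with $\Delta = \rho \otimes \rho^{-1}$ is an assumption; since this $\Delta$ is bounded with bounded inverse, it is the special situation of [@bratteli2012operator] rather than the general case): here $z \mapsto \Delta^{-z} \mathrm{a} \Delta^{z}$ is entire, takes values in $\mathcal{A}$ for every complex $z,$ and grows at most exponentially along the real axis.

```python
import numpy as np

# Assumed two-qubit model: A = M_2(C) ⊗ I, Delta = rho ⊗ rho^{-1} with
# rho = diag(p).  Here Delta is bounded with bounded inverse, so the
# analytic continuation of modular flow exists for every complex z.
rng = np.random.default_rng(2)
p = np.array([0.7, 0.3])
I2 = np.eye(2)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
a = np.kron(x, I2)

def rho_pow(z):
    """rho^z for diagonal rho = diag(p) and complex z."""
    return np.diag(p.astype(complex) ** z)

def F(z):
    """Delta^{-z} a Delta^{z}, using Delta^{z} = rho^{z} ⊗ rho^{-z}."""
    return np.kron(rho_pow(-z), rho_pow(z)) @ a @ np.kron(rho_pow(z), rho_pow(-z))

# F(z) lies in A for every complex z: it commutes with all of A' = I ⊗ M_2.
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
bprime = np.kron(I2, y)
Fz = F(1.5 + 0.7j)

# Along the real axis, ||F(n)|| <= (p_max/p_min)^n ||a||: exactly the kind
# of exponential growth that Carlson's theorem tolerates.
growth = [np.linalg.norm(F(n), 2) for n in range(5)]
```

In this model $F(z) = (\rho^{-z} x \rho^{z}) \otimes I,$ so membership in $\mathcal{A}$ holds for every complex $z$; in the general unbounded case, the argument below can only control the family on the right half-plane and at the integers.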
The function $$F_{\mathrm{a} \mathrm{b}'}(z) = [\overline{\Delta^{-z} \mathrm{a} \Delta^z}, \mathrm{b}']$$ is holomorphic in the right half-plane and strongly continuous on the imaginary axis, with norm bounded by $$\label{eq:last-bound} \lVert F_{\mathrm{a} \mathrm{b}'}(z) \rVert \leq 2 \lVert \overline{\Delta^{-\text{Re}(z)} \mathrm{a} \Delta^{\text{Re}(z)}} \rVert \lVert \mathrm{b}' \rVert.$$ Recall also that for any integer $n,$ we have $$\overline{\Delta^{-n} \mathrm{a} \Delta^n} = \mathrm{a}_{-n}.$$ From this we observe that $F_{\mathrm{a} \mathrm{b}'}(n)$ vanishes for any nonnegative integer $n.$ We also know by the Phragmén-Lindelöf principle, inequality [\[eq:last-bound\]](#eq:last-bound){reference-type="eqref" reference="eq:last-bound"}, and remark [Remark 12](#rem:tidy-construction){reference-type="ref" reference="rem:tidy-construction"} that the norm of $F_{\mathrm{a} \mathrm{b}'}$ is bounded in imaginary directions, and bounded by an exponential function along the real axis. Applying Carlson's theorem gives $F_{\mathrm{a} \mathrm{b}'}(z) = 0,$ and putting $z=it$ proves the theorem. ◻ **Proposition 20**. *We have $\mathcal{A}_{\text{tidy}}'' = \mathcal{A}.$* *Proof.* The inclusion $\mathcal{A}_{\text{tidy}} \subseteq \mathcal{A}$ and the double commutant theorem $\mathcal{A}= \mathcal{A}''$ give the inclusion $\mathcal{A}_{\text{tidy}}'' \subseteq \mathcal{A}.$ We must show the reverse inclusion. For this it suffices to show $\mathcal{A}_{\text{tidy}}' \subseteq \mathcal{A}'.$ To see this, recall that by proposition [Proposition 18](#prop:core){reference-type="ref" reference="prop:core"}, $\mathcal{A}_{\text{tidy}} \Omega$ is a core for $\Delta^{1/2},$ and hence for the Tomita operator $S$. 
Consequently, for any $\mathrm{a} \in \mathcal{A},$ there exists a sequence of tidy operators $\mathrm{a}_n$ satisfying both $$\lim_{n \to \infty} \mathrm{a}_n \Omega = \mathrm{a} \Omega$$ and $$\lim_{n \to \infty} \mathrm{a}_n^{*} \Omega = \mathrm{a}^{*} \Omega.$$ Clearly, we also have for any $\mathrm{b}' \in \mathcal{A}'$ the limits $$\lim_{n \to \infty} \mathrm{a}_n \mathrm{b}' \Omega = \mathrm{a} \mathrm{b}' \Omega$$ and $$\lim_{n \to \infty} \mathrm{a}_n^{*} \mathrm{b}' \Omega = \mathrm{a}^{*} \mathrm{b}' \Omega.$$ Now, suppose $O$ is an operator in $\mathcal{A}_{\text{tidy}}',$ $\mathrm{a}$ is an operator in $\mathcal{A},$ and $\mathrm{a}_n$ is a sequence of tidy operators converging as in the above equations. Fix $\mathrm{b}', \mathrm{c}' \in \mathcal{A}'.$ We have $$\begin{aligned} \begin{split} \langle [O, \mathrm{a}] \mathrm{b}' \Omega, \mathrm{c}' \Omega \rangle & = \langle \mathrm{a} \mathrm{b}' \Omega, O^{*} \mathrm{c}' \Omega \rangle - \langle O \mathrm{b}' \Omega, \mathrm{a}^* \mathrm{c}' \Omega \rangle \\ & = \lim_{n \to \infty} \left( \langle \mathrm{a}_n \mathrm{b}' \Omega, O^{*} \mathrm{c}' \Omega \rangle - \langle O \mathrm{b}' \Omega, \mathrm{a}_n^* \mathrm{c}' \Omega \rangle \right) \\ & = \lim_{n \to \infty} \langle [O, \mathrm{a}_n] \mathrm{b}' \Omega, \mathrm{c}' \Omega \rangle \\ & = 0. \end{split} \end{aligned}$$ Since $\mathcal{A}' \Omega$ is dense in $\mathcal{H},$ this implies that the commutator $[O, \mathrm{a}]$ vanishes. So every element of $\mathcal{A}_{\text{tidy}}'$ commutes with every element of $\mathcal{A},$ as desired. ◻ **Corollary 21** (Tomita's theorem). 
*For any $\mathrm{a} \in \mathcal{A}$ and any $t \in \mathbb{R}$, the operator $\Delta^{-it} \mathrm{a} \Delta^{it}$ is in $\mathcal{A}.$* *Proof.* Let $\mathrm{b}'$ be a tidy operator for the commutant algebra $\mathcal{A}',$ and fix arbitrary $\psi, \xi \in \mathcal{H}.$ Consider the commutator $$\begin{aligned} \begin{split} \langle [\Delta^{-it} \mathrm{a} \Delta^{it}, \mathrm{b}'] \psi, \xi \rangle & = \langle \Delta^{-it} \mathrm{a} \Delta^{it} \mathrm{b}' \psi, \xi \rangle - \langle \mathrm{b}' \Delta^{-it} \mathrm{a} \Delta^{it} \psi, \xi \rangle \end{split} \end{aligned}$$ By theorem [Theorem 19](#thm:tomita){reference-type="ref" reference="thm:tomita"} applied to tidy operators of $\mathcal{A}',$ the operator $$\mathrm{b}'(t) \equiv \Delta^{it} \mathrm{b}' \Delta^{-it}$$ is in $\mathcal{A}'.$ This gives $$\begin{aligned} \begin{split} \langle [\Delta^{-it} \mathrm{a} \Delta^{it}, \mathrm{b}'] \psi, \xi \rangle & = \langle \mathrm{a} \mathrm{b}'(t) \Delta^{it} \psi, \Delta^{it} \xi \rangle - \langle \mathrm{b}'(t) \mathrm{a} \Delta^{it} \psi, \Delta^{it} \xi \rangle \\ & = \langle [\mathrm{a}, \mathrm{b}'(t)] \Delta^{it} \psi, \Delta^{it} \xi \rangle \\ & = 0. \end{split} \end{aligned}$$ So $\Delta^{-it} \mathrm{a} \Delta^{it}$ commutes with every tidy operator in $\mathcal{A}'.$ By proposition [Proposition 20](#prop:density){reference-type="ref" reference="prop:density"}, it is in $\mathcal{A}.$ ◻ # Acknowledgments {#acknowledgments .unnumbered} I am grateful to Brent Nelson for comments on the first draft of this paper, and in particular for noticing an error that led to the addition of proposition [Proposition 20](#prop:density){reference-type="ref" reference="prop:density"}. This work was supported by the AFOSR under award number FA9550-19-1-0360, by the DOE Early Career Award, and by the Templeton Foundation via the Black Hole Initiative.
{ "id": "2309.16762", "title": "A short proof of Tomita's theorem", "authors": "Jonathan Sorce", "categories": "math.OA math-ph math.FA math.MP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We study the zeta-regularized spectral determinant of the Friedrichs Laplacians on the singular spheres obtained by cutting and gluing copies of a constant curvature (hyperbolic, spherical, or flat) double triangle. The determinant is explicitly expressed in terms of the corresponding Belyi functions and the determinant of the Friedrichs Laplacian on the double triangle. The latter determinant was found in a closed explicit form in ArXiv:2112.02771 [@KalvinLast]. In examples we consider the cyclic, dihedral, tetrahedral, octahedral, and icosahedral triangulations, and find the determinant for the corresponding spherical, Euclidean, and hyperbolic Platonic surfaces. These surfaces correspond to stationary points of the determinant. author: - "Victor Kalvin [^1]" title: Triangulations of singular constant curvature spheres via Belyi functions and determinants of Laplacians --- # Introduction {#Intro} We study the zeta-regularized spectral determinant of the Friedrichs scalar Laplacian on the surfaces constructed by cutting and gluing copies of a constant curvature $S^2$-like double triangle. We call these surfaces glued or triangulated. The geometric (combinatorial) cutting and gluing scheme is described in terms of the corresponding Belyi function [@Belyi], or, equivalently, in terms of *dessins d'enfants* [@Gr]. The Belyi function maps the (source) Riemann surface to the (target) Riemann sphere. Any constant curvature double triangle is isometric to the target Riemann sphere with an explicitly constructed conformal metric with three conical singularities [@KalvinLast]. The glued surface is isometric to the source Riemann surface equipped with the pullback of the explicit conformal metric by the Belyi function. This provides us with a geodesic triangulation and an explicit uniformization of the glued surface. In particular, in the case of the flat right isosceles $S^2$-like double triangle, we get the square-tiled flat surfaces, see e.g.
[@Sh; @Zorich] and references therein. In other words, for studying the spectral determinant we suggest using the natural decomposition via triangulation based on the celebrated results of Belyi, Grothendieck, Shabat and Voevodskii. The price for this is that we first need to study the spectral determinant on surfaces with conical singularities [@KalvinLast; @KalvinCCM; @KalvinJFA; @KalvinJGA]. The main idea is to explicitly express the spectral determinant of the glued surface in terms of the corresponding Belyi function and the spectral determinant of the constant curvature $S^2$-like double triangle. The latter determinant was found in a closed explicit form in [@KalvinLast]. Let us recall that not all Riemann surfaces can be glued/triangulated in this manner. But for the most interesting surfaces, like the Fermat curve, the Bolza curve, the Hurwitz surfaces, including Klein's quartic, the Platonic surfaces, etc., it is certainly possible, thanks to the famous Belyi theorem [@Belyi; @KW; @ShVo]. Due to the highest possible number of automorphisms [@ShVo; @harts], some of the triangulated surfaces (equipped with the natural smooth hyperbolic metric) are expected to be stationary points of the spectral determinant. To the best of our knowledge, no closed explicit expression for the spectral determinant of any of these surfaces is known yet. In this paper, we study the spectral determinant of the genus zero triangulated surfaces. For the triangulations of singular constant curvature (hyperbolic, spherical, or flat) spheres via Belyi functions, we explicitly express the spectral determinant in terms of the Belyi maps, and the determinant of the constant curvature $S^2$-like double triangle.
In particular, with each bicolored plane tree [@BZ; @Bishop] we can naturally associate an infinite family of constant curvature singular spheres, and then the corresponding infinite family of explicit spectral invariants, that is the family of the corresponding spectral determinants. For some closely related geometric constructions and invariants see e.g. [@HogRiv; @Riv1; @Riv2; @Sprin; @thurston]. Let us stress that due to conical singularities on the cuts, none of the presently known BFK-type gluing formulae [@BFK] can be used. We rely on a completely different approach relating the determinants of Laplacians on the target Riemann sphere and on the ramified covering via anomaly formulae for the determinants [@KalvinLast; @KalvinJFA]. As a byproduct, our approach allows for explicit evaluation of the Liouville action [@CMS; @Z-Z; @T-Z], which can be of independent interest, cf. [@P-T-T; @ParkTeo; @T-T]. It would be interesting to check if the explicit expressions can be reproduced by using conformal blocks [@Z-Z]. As is known [@T-Z], the Liouville action generates the famous accessory parameters as their common antiderivative. We explicitly express the accessory parameters of the triangulated singular constant curvature spheres in terms of the Belyi functions and the orders of conical singularities. We also find the derivative of the Liouville action with respect to the order of a conical singularity in the spirit of [@Z-Z]. As one can expect, this allows us to show that the five Platonic solids are special also in the context of this paper: their surfaces correspond to stationary points of the determinant. Recall that for smooth metrics on Riemann surfaces extrema of spectral determinants were studied in a series of papers by Osgood, Phillips, and Sarnak [@OPS; @OPS.5; @OPS1]; see also [@Kim] for an extension of their results, and [@EM] for recent closely related results in the four-dimensional case. 
We illustrate the main results of this paper with a number of examples: We consider the cyclic, dihedral, tetrahedral, octahedral, and icosahedral triangulations [@Klein], and find explicit expressions for the spectral determinant of the corresponding spherical, Euclidean, and hyperbolic Platonic surfaces. In particular, we explicitly evaluate the determinants of the regular hyperbolic octahedron with vertices of angle $\pi$ (a double cover is the Bolza curve) and the regular hyperbolic icosahedron corresponding to the tessellation by $(2,3,7)$-triangles (related to Klein's quartic [@SSW; @KW; @ShVo]); see Example [Example 17](#EOctahedron){reference-type="ref" reference="EOctahedron"} and Example [Example 28](#EIcosahedron){reference-type="ref" reference="EIcosahedron"}. As the angles of the conical points of a Platonic surface go to zero (and the area remains fixed), the determinant grows without bound, cf. Fig. [3](#Platonic){reference-type="ref" reference="Platonic"}. In the limit, the conical points turn into cusps and we obtain an ideal polyhedron [@Judge; @Judge2; @KimWilkin]. The spectrum of the corresponding Laplacians is no longer discrete. This paper is organized as follows. Section [2](#S2){reference-type="ref" reference="S2"} contains preliminaries and the main results of the paper (Theorem [Theorem 1](#main){reference-type="ref" reference="main"}). Namely, after giving some preliminaries on constant curvature $S^2$-like double triangles and their spectral determinants in Subsection [\[SS2.1\]](#SS2.1){reference-type="ref" reference="SS2.1"}, we describe the geometric setting of this paper and formulate the main results in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} of Subsection [2.2](#SS2.2){reference-type="ref" reference="SS2.2"}.
Section [3](#PFmain){reference-type="ref" reference="PFmain"} is entirely devoted to the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}: In Subsection [3.1](#SS3.1.){reference-type="ref" reference="SS3.1."} we prove Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}, which is a preliminary version of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. In Subsection [3.2](#BFET){reference-type="ref" reference="BFET"} we refine the result of Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"} by using the natural Euclidean equilateral triangulation [@ShVo; @VoSh]. This completes the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. Section [4](#Unif){reference-type="ref" reference="Unif"} is devoted to the uniformization, the accessory parameters, the Liouville action, and stationary points of the spectral determinant. Finally, Section [5](#ExAppl){reference-type="ref" reference="ExAppl"} contains illustrating examples and applications. # Preliminaries and main results {#S2} ## Double triangles {#PrelimTriangulation} [\[SS2.1\]]{#SS2.1 label="SS2.1"} Consider the involutional tessellation of the standard round sphere $x_1^2+x_2^2+x_3^2=1$ in $\Bbb R^3$ by two congruent $(1,1,1)$-triangles; as usual, by a $(k,\ell,m)$-triangle we mean a geodesic triangle (spherical, Euclidean, or hyperbolic) with internal angles $\pi/k$, $\pi/\ell$, and $\pi/m$. In particular, the $(1,1,1)$-triangles are hemispheres. Let the standard sphere in $\Bbb R^3$ be identified with the Riemann sphere $\overline{\Bbb C}_z=\Bbb C\cup\infty$ by means of the stereographic projection. Let the $(1,1,1)$-triangles correspond to the upper $\Im z\geqslant 0$ and lower $\Im z\leqslant 0$ half-planes. Without loss of generality, we can assume that three marked points on the sphere (the vertices of the congruent $(1,1,1)$-triangles) have the coordinates $z=0$, $z=1$, and $z=\infty$. 
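A side remark that may help the reader keep track of the three geometries (our own bookkeeping sketch, not part of the paper): for the double of a $(k,\ell,m)$-triangle the cone orders introduced below satisfy $\beta_j+1=1/k,1/\ell,1/m$, so the sign of $1/k+1/\ell+1/m-1=|\pmb\beta|+2$ decides between the spherical, Euclidean, and hyperbolic cases:

```python
from fractions import Fraction

def geometry(k, l, m):
    # For the double of a (k,l,m)-triangle the cone orders satisfy
    # beta_j + 1 = 1/k, 1/l, 1/m, so the sign of
    # s = 1/k + 1/l + 1/m - 1 = |beta| + 2 decides the curvature sign.
    s = Fraction(1, k) + Fraction(1, l) + Fraction(1, m) - 1
    if s > 0:
        return "spherical"
    if s == 0:
        return "euclidean"
    return "hyperbolic"

assert geometry(1, 1, 1) == "spherical"    # hemispheres of the round sphere
assert geometry(2, 3, 5) == "spherical"    # icosahedral tessellation
assert geometry(2, 3, 6) == "euclidean"
assert geometry(2, 3, 7) == "hyperbolic"   # tiling related to Klein's quartic
```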
Let us endow the Riemann sphere with the unique unit area constant curvature metric having conical singularities of order $\beta_0$, $\beta_1$, and $\beta_\infty$ at the marked points $z=0$, $z=1$, and $z=\infty$ correspondingly. For simplicity, we assume that the orders $\beta_j$ are in the interval $(-1,0]$, or, equivalently, that the cone angles $2\pi(\beta_j+1)$ are positive and do not exceed $2\pi$; here $j\in\{0,1,\infty\}$. We denote the metric by $m_{\pmb\beta}$ and say that it represents the divisor $$\pmb\beta=\beta_0\cdot 0+\beta_1\cdot 1+\beta_\infty\cdot \infty$$ of degree $|\pmb \beta|=\beta_0+\beta_1+\beta_\infty$. The metric $m_{\pmb\beta}$ can be explicitly constructed as the pullback, by an appropriately normalized Schwarz triangle function $z\mapsto w=w_{\pmb\beta}(z)$, of the model metric $$\label{mm} \frac{4|dw|^2}{(1+2\pi(|\pmb\beta|+2)|w|^2)^2}$$ of Gaussian curvature $2\pi(|\pmb\beta|+2)$, see [@KalvinLast Appendix]. Note that the conditions $\beta_j -|\pmb \beta|/2>0$ are necessary and sufficient for the existence of the metric. The Schwarz triangle function $w_{\pmb\beta}$ maps the upper half-plane $\Im z>0$ into a geodesic (light-coloured) triangle with internal angle $\pi(\beta_0+1)$ at the vertex $w_{\pmb\beta}(0)=0$, $\pi(\beta_1+1)$ at the vertex $w_{\pmb\beta}(1)\in\Bbb R$, and $\pi(\beta_\infty+1)$ at the vertex $w_{\pmb\beta}(i\infty)$. The analytic continuation of $w_{\pmb\beta}$ (from the upper half-plane $\Im z>0$ through the interval $(0,1)$ of the real axis) maps the lower half-plane $\Im z<0$ into the (dark-coloured) reflection of the geodesic triangle in the side joining the points $w_{\pmb\beta}(0)$ and $w_{\pmb\beta}(1)$. Thus we obtain a bicolored double triangle, geodesic with respect to the model metric [\[mm\]](#mm){reference-type="eqref" reference="mm"}, which is hyperbolic for $|\pmb\beta|<-2$ (cf. Fig. [\[TemplateHyp\]](#TemplateHyp){reference-type="ref" reference="TemplateHyp"}), spherical for $|\pmb\beta|>-2$ (cf. Fig.
[\[TemplateSph\]](#TemplateSph){reference-type="ref" reference="TemplateSph"}), and Euclidean for $|\pmb\beta|=-2$ (cf. Fig. [\[TemplateEuc\]](#TemplateEuc){reference-type="ref" reference="TemplateEuc"}). To make it $S^2$-like, the double triangle is folded along the interval $[w_{\pmb\beta}(0), w_{\pmb\beta}(1)]\subset \Bbb R$, and then the corresponding sides of the light- and dark-coloured triangles are glued pairwise. Namely, the side joining $w_{\pmb\beta}(0)$ and $w_{\pmb\beta}(i\infty)$ is glued to the side joining $w_{\pmb\beta}(0)$ and $w_{\pmb\beta}(-i\infty)$, and then the side joining $w_{\pmb\beta}(1)$ and $w_{\pmb\beta}(i\infty)$ is glued to the side joining $w_{\pmb\beta}(1)$ and $w_{\pmb\beta}(-i\infty)$. The resulting $S^2$-like bicolored double triangle is isometric to the genus zero constant curvature surface $(\overline {\Bbb C}_z, m_{\pmb\beta})$ with three conical singularities. The isometry is given by the Schwarz triangle function $w_{\pmb\beta}$. The potential $\phi$ of the conformal metric $m_{\pmb\beta}=e^{2\phi}|dz|^2$ satisfies the Liouville equation $$\label{Liouv} e^{-2\phi}(-4\partial_z\partial_{\bar z}\phi)={2\pi(|\pmb\beta|+2)},\quad z\in\Bbb C\setminus\{0,1\},$$ and obeys the asymptotics $$\label{asympt_phi} \begin{aligned} \phi(z)&=\beta_0\log|z|+\phi_0+o(1), \quad z\to 0,\quad \phi_0=\Psi(\beta_0,\beta_1,\beta_\infty), \\ \phi(z)&=\beta_1\log|z-1|+\phi_1+o(1), \quad z\to 1,\quad \phi_1=\Psi(\beta_1,\beta_0,\beta_\infty), \\ \phi(z)&=-(\beta_\infty+2)\log|z|+\phi_\infty+ o(1), \quad z\to\infty,\quad \phi_\infty= \Psi(\beta_\infty,\beta_1,\beta_0). \end{aligned}$$ Here $\Psi(\beta_1,\beta_2,\beta_3)=\Phi(\beta_1,\beta_2,\beta_3)+ \log 2$ with the explicit function $\Phi$ from [@KalvinLast Prop. A.2].
The Liouville equation indicates that outside of the marked points the Gaussian curvature of the metric $m_{\pmb\beta}$ is $2\pi(|\pmb\beta|+2)$, while the asymptotics [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"} indicate that at the marked points $z=0$, $z=1$, and $z=\infty$ the metric has conical singularities of order $\beta_0$, $\beta_1$, and $\beta_\infty$ correspondingly [@Troyanov]. On the singular constant curvature surface $(\overline{\Bbb C}_z, m_{\pmb\beta})$ we consider the Laplace-Beltrami operator $\Delta_{\pmb\beta}=-e^{-2\phi}4\partial_z\partial_{\bar z}$ as an unbounded operator in the usual $L^2$-space. The operator is initially defined on the smooth functions supported outside of the conical singularities, and is not essentially selfadjoint. We take the Friedrichs selfadjoint extension, which we still denote by $\Delta_{\pmb\beta}$ and call the Friedrichs Laplacian or simply Laplacian for short. The spectrum of $\Delta_{\pmb\beta}$ consists of non-negative isolated eigenvalues of finite multiplicity, and the zeta-regularized spectral determinant $\det \Delta_{\pmb\beta}$ of $\Delta_{\pmb\beta}$ can be introduced in the standard way. In what follows it is important that the zeta-regularized spectral determinant of the Friedrichs Laplacian $\Delta_{\pmb\beta}$ on the constant curvature surface $(\overline{\Bbb C}_z, m_{\pmb\beta})$ with three conical singularities, or, equivalently, on the isometric $S^2$-like bicolored double triangle, is an explicit function $$\label{CalcVar} (-1,0]^3\ni(\beta_0,\beta_1,\beta_\infty)\mapsto \det \Delta_{\pmb\beta}$$ found in [@KalvinLast Corollary 1.3]. ## Triangulations and determinants of Laplacians {#SS2.2} Recall that a (non-constant) meromorphic function $f: X\to \overline{\Bbb C}_z$ on a compact Riemann surface $X$ is called a Belyi function if it is ramified at most above three points [@Belyi].
Any Belyi function can alternatively be described via the corresponding *dessin d'enfant*, which is usually defined as the graph formed on $X$ by the preimages $f^{-1}([0,1])$ of the real line segment $[0,1]$ with black points placed at the preimages $f^{-1}(0)$ of zero and white points placed at the preimages $f^{-1}(1)$ of $1$, e.g. [@CIW; @Gr; @LZ; @Sch; @Wolfart]. Any meromorphic function on the Riemann sphere is a rational function. If, in addition, $f$ has only a single pole that is at infinity, then $f$ is a polynomial [@BZ; @Bishop; @LZ]. Consider a Belyi function $f: \overline {\Bbb C}_x\to \overline {\Bbb C}_z$ ramified at most above the marked points $z=0$, $z=1$, and $z=\infty$. This defines a ramified (branched) covering and a bicolored triangulation of the Riemann sphere $\overline {\Bbb C}_x$, e.g. [@VoSh; @LZ]. Namely, the function $f$ maps: *i*) the sides of the bicolored triangles on $\overline {\Bbb C}_x$ to the line segments $(-\infty,0)$, $(0,1)$, and $(1,\infty)$ of the real axis; *ii*) the vertices of the triangles to the marked points $0$, $1$, and $\infty$; *iii*) the light-colored triangles to the upper half-plane $\Im z>0$, and the dark-colored triangles to the lower half-plane $\Im z <0$. The number of bicolored double triangles on $\overline {\Bbb C}_x$ is exactly the degree $\deg f=\max\{\deg P,\deg Q\}$ of $f$, where $f(x)=P(x)/Q(x)$ with coprime polynomials $P$ and $Q$. For example, the Belyi function $$\label{BFIcos} f(x)=1728 \frac{x^5(x^{10}-11 x^5-1)^5} {(x^{20}+228(x^{15}-x^{5})+494 x^{10}+1)^3}, \quad \deg f =60,$$ defines the icosahedral triangulation [@Klein1], which corresponds to the tessellation of the standard round sphere with bicolored spherical $(2,3,5)$-triangles in Fig. [1](#Icos Triang Pic){reference-type="ref" reference="Icos Triang Pic"}, Fig. [\[TemplateSph\]](#TemplateSph){reference-type="ref" reference="TemplateSph"}. 
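As a sanity check (our own, not from the paper), the combinatorial structure of [\[BFIcos\]](#BFIcos){reference-type="eqref" reference="BFIcos"} — 12 quintuple zeros (the icosahedron vertices, counting $x=\infty$), 30 double points above $z=1$ (the edge midpoints), and 20 triple poles (the face centers) — together with the total ramification $2\cdot 60-2$ can be verified with sympy; the expected multiplicity pattern stated in the comments is an assumption that the script itself checks:

```python
import sympy as sp

x = sp.symbols('x')

# Klein's icosahedral Belyi function f = P/Q of degree 60, eq. (BFIcos).
P = sp.Poly(sp.expand(1728 * x**5 * (x**10 - 11*x**5 - 1)**5), x)
Q = sp.Poly(sp.expand((x**20 + 228*(x**15 - x**5) + 494*x**10 + 1)**3), x)
assert P.degree() == 55 and Q.degree() == 60   # deg f = max(55, 60) = 60

# Above z = 0: P = 1728 * x^5 * q^5 with q squarefree; since deg Q - deg P = 5,
# x = infinity is a fifth-order zero as well: 12 quintuple zeros in total.
G = sp.gcd(P, P.diff(x))          # = x^4 * q^4 up to a constant factor
assert G.degree() == 44

# Above z = 1: every root of P - Q is a double root (30 edge midpoints).
R = P - Q
T = sp.gcd(R, R.diff(x))          # the degree-30 "edge" polynomial
assert T.degree() == 30
assert sp.rem(R, T**2).is_zero    # P - Q is a constant multiple of T^2

# Total ramification: 12 vertices of order 4, 30 edge midpoints of order 1,
# and 20 face centers of order 2 sum to 2*deg f - 2 = 118.
assert 12*4 + 30*1 + 20*2 == 2*60 - 2
```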
As usual, we identify the Riemann sphere $\overline {\Bbb C}_x$ with the standard round sphere in $\Bbb R^3$ by means of the stereographic projection. ![Icosahedral triangulation](Icosahedral.png){#Icos Triang Pic} Depending on the sign of $|\pmb\beta|+2$, the composition $w_{\pmb\beta}\circ f$ (of the Schwarz triangle function $w_{\pmb\beta}$ with a Belyi function $f: \overline {\Bbb C}_x\to \overline {\Bbb C}_z$) maps the light-coloured (resp. the dark-coloured) triangles defined by $f$ on $\overline {\Bbb C}_x$ to the light-coloured (resp. the dark-coloured) hyperbolic, spherical, or Euclidean triangle in Fig. [\[TemplateHyp\]](#TemplateHyp){reference-type="ref" reference="TemplateHyp"}, Fig. [\[TemplateSph\]](#TemplateSph){reference-type="ref" reference="TemplateSph"}, and Fig. [\[TemplateEuc\]](#TemplateEuc){reference-type="ref" reference="TemplateEuc"}. The pullback of the model metric [\[mm\]](#mm){reference-type="eqref" reference="mm"} by this composite function, or, equivalently, the pullback $f^*m_{\pmb\beta}$ of the metric $m_{\pmb \beta}$ by $f$, is a singular metric of Gaussian curvature $2\pi(|\pmb\beta|+2)$ on $\overline{\Bbb C}_x$. The triangulated surface $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ is isometric to the one obtained by cutting and gluing $\deg f$ copies of the $S^2$-like bicolored double triangle in accordance with a combinatorial scheme prescribed by $f$. In particular, Fuchsian triangle groups are included into consideration [@CIW]. Let us note that the above construction also works in the opposite (constructive) direction: the combinatorial cutting and gluing schemes of bicolored double triangles can be described with the help of a (fixed) base star in $\overline{\Bbb C}_z$ with the terminal vertices at $z=0,1,\infty$ and the constellations. For any constellation there exists a Belyi function defining the cutting and gluing scheme, e.g. [@LZ]. For instance, one can preserve the gluing scheme of bicolored double triangles in Fig. 
[1](#Icos Triang Pic){reference-type="ref" reference="Icos Triang Pic"}, and replace each light-coloured (resp. dark-coloured) spherical $(2,3,5)$-triangle with the image of the upper (resp. lower) complex half-plane under the Schwarz triangle function $w_{\pmb\beta}$ in Fig. [\[TemplateHyp\]](#TemplateHyp){reference-type="ref" reference="TemplateHyp"}, Fig. [\[TemplateSph\]](#TemplateSph){reference-type="ref" reference="TemplateSph"}, or Fig. [\[TemplateEuc\]](#TemplateEuc){reference-type="ref" reference="TemplateEuc"}. As a result, we obtain a hyperbolic, Euclidean, or spherical singular surface isometric to $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$, where $f$ is the Belyi function [\[BFIcos\]](#BFIcos){reference-type="eqref" reference="BFIcos"}. However, in general, it is not easy to find a Belyi function corresponding to a particular gluing scheme, even though for the regular dihedra and the surfaces of the five Platonic solids they were first found by Schwarz and Klein [@Klein], see also [@LZ; @Margot; @Zvonkin]. The main result of this paper is a closed explicit formula for the zeta-regularized spectral determinant of the Friedrichs Laplacian on the triangulated singular constant curvature spheres $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$. We express the determinant in terms of the Belyi function $f$ and the determinant [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"} of the corresponding $S^2$-like bicolored double triangle. Recall that the latter determinant is explicitly evaluated in [@KalvinLast]. Let us list the preimages of the marked points $\{0,1,\infty\}\subset\overline{\Bbb C}_z$ under $f$ as the marked points $x_1,x_2,\dots,x_n$ on the Riemann sphere $\overline{\Bbb C}_x$; here $n=\deg f +2$. We shall always assume that $x_k=\infty$ for some $k\leqslant n$: this can always be achieved by replacing $f$ with the equivalent Belyi function $f\circ \mu$, where $\mu$ is a Möbius transformation satisfying $\mu(\infty)=x_k$.
The surfaces $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ and $(\overline{\Bbb C}_x, (f\circ\mu)^*m_{\pmb\beta})$ are isometric, and, as a consequence, the corresponding spectral determinants are equal. Introduce the ramification divisor $$\pmb f :=\sum_{k=1}^n \operatorname{ord}_k f \cdot x_k,$$ where $\operatorname{ord}_k f$ is the ramification order of the Belyi function $f$ at $x_k$. If $x_k$ is a pole of $f$, then its multiplicity coincides with $\operatorname{ord}_k f +1$. If $f'(x_k)=0$, then $\operatorname{ord}_k f+1$ is the order of zero at $x=x_k$ of the function $x\mapsto f(x)- f(x_k)$. If $x_k$ is not a pole of $f$ and $f'(x_k)\neq 0$, then the ramification order $\operatorname{ord}_k f$ is zero. One can also interpret $x_k$ as a vertex in the *dessin d'enfant* corresponding to $f$; then the number $\operatorname{ord}_k f +1$ is its graph-theoretic degree, or, equivalently, the number of edges emanating from the vertex $x_k$. By the Riemann-Hurwitz formula, the degree $|\pmb f|:=\sum_{k=1}^n \operatorname{ord}_k f$ of the divisor $\pmb f$ satisfies $|\pmb f|=2\deg f-2$.
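To make these definitions concrete, here is a small worked example with a toy cubic Belyi map $f(x)=3x^2-2x^3$ (our own illustration, not from the paper): its only critical values are $0$, $1$, and $\infty$, the ramification divisor is supported at $n=\deg f+2=5$ marked points, and its degree is $2\deg f-2=4$:

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**2 - 2*x**3   # a degree-3 Belyi map: critical values 0, 1, infinity only
deg_f = 3

# Finite preimages of 0 and 1, with ord_k f = (multiplicity of root) - 1:
# f     = x^2 (3 - 2x)          -> roots 0 (double), 3/2 (simple)
# f - 1 = -(x - 1)^2 (2x + 1)   -> roots 1 (double), -1/2 (simple)
orders = {}
for target in (0, 1):
    for root, mult in sp.roots(sp.Poly(f - target, x)).items():
        orders[root] = mult - 1
orders[sp.oo] = 2   # the only pole is x = infinity, of multiplicity 3

assert orders == {0: 1, sp.Rational(3, 2): 0,
                  1: 1, sp.Rational(-1, 2): 0, sp.oo: 2}
assert sum(orders.values()) == 2*deg_f - 2   # Riemann-Hurwitz: |f| = 2 deg f - 2
assert len(orders) == deg_f + 2              # n = deg f + 2 marked points
```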
From the Liouville equation [\[Liouv\]](#Liouv){reference-type="eqref" reference="Liouv"} and the asymptotics [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"} it follows that the potential $$f^*\phi:=\phi\circ f+\log |f'|$$ of the pullback metric $f^*m_{\pmb\beta}=e^{2 f^*\phi}|dx|^2$ satisfies the Liouville equation $$\label{LUx} e^{-2f^*\phi}\bigl(-4\partial_x\partial_{\bar x} (f^*\phi)\bigr)= {2\pi(|\pmb\beta|+2)},\quad x\in \Bbb C\setminus \{x_1,\dots, x_n\},$$ and obeys the asymptotics $$\label{asfphi} \begin{aligned} (f^*\phi)(x)&=(f^*\pmb\beta)_k\log |x-x_k|+O(1), \quad x\to x_k\neq \infty,\\ (f^*\phi)(x)&=-((f^*\pmb\beta)_k+2)\log |x|+O(1),\quad x \to x_k=\infty, \end{aligned}$$ where $$\label{orders*} \left(f^*\pmb\beta\right)_k:=(\operatorname{ord}_k f+1)(\beta_{f(x_k)}+1) -1.$$ Geometrically this means that outside of the points $x_1,\dots,x_n$ the metric $f^*m_{\pmb\beta}$ is a regular metric of constant Gaussian curvature $2\pi(|\pmb\beta|+2)$, while at each point $x_k$ the metric has a conical singularity of order $(f^*\pmb\beta)_k$, or, equivalently, of angle $2\pi((f^*\pmb\beta)_k+1)$, see e.g. [@Troyanov]. That is because $\operatorname{ord}_k f+1$ vertices of bicolored double triangles meet together at $x_k$ to form the conical singularity. Each of the vertices contributes the angle $2\pi(\beta_{f(x_k)}+1)$ to the angle of the conical singularity, cf. [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"}. Note that it may happen that $(f^*\pmb\beta)_k=0$ for some point $x_k$ (for example, this is the case if $\operatorname{ord}_k f=1$ and $\beta_{f(x_k)}=-1/2$); then $x_k$ is a regular point of the metric $f^*m_{\pmb\beta}$. Clearly, $\beta_{f(x_k)}=\beta_\infty$ if $x_k$ is a pole of $f$, $\beta_{f(x_k)}=\beta_0$ if $x_k$ is a zero of $f$, and $\beta_{f(x_k)}=\beta_1$ if $f(x_k)=1$, where $\beta_j$ is the same as in [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}.
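For instance (a worked example of our own), combining [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"} with the icosahedral Belyi function [\[BFIcos\]](#BFIcos){reference-type="eqref" reference="BFIcos"} and the spherical $(2,3,5)$ double triangle, i.e. $\beta_0+1=\tfrac15$, $\beta_1+1=\tfrac12$, $\beta_\infty+1=\tfrac13$, every pullback order vanishes:

```latex
(f^*\pmb\beta)_k + 1 = (\operatorname{ord}_k f + 1)(\beta_{f(x_k)} + 1) =
\begin{cases}
5 \cdot \tfrac15 = 1, & x_k \text{ a vertex } (f(x_k) = 0),\\[2pt]
2 \cdot \tfrac12 = 1, & x_k \text{ an edge midpoint } (f(x_k) = 1),\\[2pt]
3 \cdot \tfrac13 = 1, & x_k \text{ a face center } (f(x_k) = \infty).
\end{cases}
```

So all cone angles of $f^*m_{\pmb\beta}$ equal $2\pi$, and the pullback is the smooth round metric of area $\deg f=60$, in agreement with the icosahedral tessellation of the standard sphere.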
Now we are in a position to formulate the main results of this paper. **Theorem 1** (Spectral determinant of triangulated spheres). *Let $\det \Delta_{\pmb\beta}$ stand for the explicit function [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"}, whose value at a point is the zeta-regularized spectral determinant of the Friedrichs selfadjoint extension of the Laplace-Beltrami operator on the unit area $S^2$-like double triangle, or, equivalently, on $(\overline {\Bbb C}_z,m_{\pmb\beta})$; see Sec. [2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"} and [@KalvinLast].* *Let $f: \overline {\Bbb C}_x \to \overline {\Bbb C}_z$ be a Belyi function ramified at most above the points of the set $\{0,1,\infty\}$ and such that $f(\infty)\in\{0,1,\infty\}$. Denote by $\operatorname{ord}_k f$ the ramification order of $f$ at $x_k$, where $x_1,x_2,\dots,x_n\in \overline {\Bbb C}_x$ are the preimages of the points $0$, $1$, and $\infty$ under $f$.* *Consider the surface $(\overline {\Bbb C}_x, f^*m_{\pmb\beta})$ isometric to the one glued from $\deg f$ copies of the double triangle in accordance with a pattern defined by $f$.
Then for the zeta-regularized spectral determinant $\det\Delta_{f^*\pmb\beta}$ of the Friedrichs selfadjoint extension of the Laplace-Beltrami operator on $(\overline {\Bbb C}_x, f^*m_{\pmb\beta})$ we have $$\label{S20STMNT} \begin{aligned} \log\frac {\det\Delta_{f^*\pmb\beta}}{\deg f} =& \deg f \cdot\log{\det\Delta_{\pmb\beta}} +\frac 1 6 \sum_{k} \left(\operatorname{ord}_k f+1-\frac 1 {\operatorname{ord}_k f+1}\right)\frac {\phi_{f(x_k)}}{\beta_{f(x_k)}+1} \\ & -\frac 1 6 \sum_{k} \left( (f^*\pmb\beta)_k+1+\frac 1 { (f^*\pmb\beta)_k+1} \right)\log (\operatorname{ord}_k f+1) \\ & - \sum_{k}\Bigl(\mathcal C\left(\left(f^*\pmb\beta\right)_k\right) -(\operatorname{ord}_k f+1) \mathcal C(\beta_{f(x_k)})\Bigr) - (\deg f-1){\bf C} +C_f, \end{aligned}$$ where $\phi_j$ and $\beta_j$ with $j\in \{0,1,\infty\}$ is the (explicit) uniformization data in [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}, and $(f^*\pmb\beta)_k$ is the same as in [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"}. The real-analytic function $(-\infty,0]\ni\beta\mapsto\mathcal C(\beta)$ is defined by the equality $$\label{cbeta} \mathcal C(\beta)=2\zeta'_B(0;\beta+1,1,1)-2\zeta_R'(-1)-\frac {\beta^2}{6(\beta+1)}\log 2 -\frac \beta {12}+\frac 1 2 \log(\beta+1),$$ where $\zeta'_B$ and $\zeta_R'$ stand for the derivatives with respect to $s$ of the Barnes double zeta function $\zeta_B(s;\beta+1,1,1)$ and the Riemann zeta function $\zeta_R(s)$ respectively. For the constant ${\bf C}$ in [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} we have $$\label{bfC} {\bf C}=\frac 1 {6}-\frac 4 3 \log 2 -4\zeta_R'(-1) -\log\pi.$$ Finally, the constant $C_f$ in [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} depends only on the Belyi function $f$. 
It is given by $$\label{C_F} \begin{aligned} C_f & =\frac 1 {18} \sum_{ k:x_k\neq \infty}\sum_{\ \ \ell:x_k\neq x_\ell\neq \infty}\frac{(\operatorname{ord}_k f-2)(\operatorname{ord}_\ell f -2)}{\operatorname{ord}_k f+1} \log|x_k-x_\ell| \\ &\qquad\qquad +\frac 1 6 \sum_{k} \left( \frac{\operatorname{ord}_k f +1} 3 +\frac 3 { \operatorname{ord}_k f+1} \right)\log (\operatorname{ord}_k f+1) \\ &\qquad\qquad +\frac 1 {6} \left( \deg f - \sum_{k} \frac 3 { \operatorname{ord}_k f+1} \right)\log A_f, \end{aligned}$$ where $$\label{Exp_A} A_f=\frac{|f(x)|^{-2/3}|f(x)-1|^{-2/3}|f'(x)|}{\prod_{k: x_k\neq\infty}|x-x_k|^{\frac 1 3 (\operatorname{ord}_k f-2)}}.$$ Note that $A_f$ is a scaling coefficient that does not depend on $x$ and can be easily evaluated for any particular Belyi function $f$.* As we show in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, the expression [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} for $C_f$ comes from the Euclidean equilateral triangulation defined on $\overline {\Bbb C}_x$ by $f$, cf. [@ShVo; @VoSh]. The readers whose interests are more in the results and applications than in the proof may wish to skip Section [3](#PFmain){reference-type="ref" reference="PFmain"} and proceed directly to Sections [4](#Unif){reference-type="ref" reference="Unif"} and [5](#ExAppl){reference-type="ref" reference="ExAppl"}. # Proof of the main results {#PFmain} ## Equality [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} is valid with a constant $C_f$ {#SS3.1.} In this subsection we prove that the representation for the spectral determinant [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} is valid with some constant $C_f$ that does not depend on the metric $m_{\pmb\beta}$ on the target Riemann sphere $\overline{\Bbb C}_z$. As a byproduct we obtain an explicit formula for $C_f$ that is different from the one in [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"}. 
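Theorem 1 notes that $A_f$ is independent of $x$ and easy to evaluate; before turning to the proof, this can be observed numerically. The sketch below (illustrative only, not part of the proof) uses the Belyi function $f(x)=x^2$, a degree-2 example of our own choosing: its finite preimages of $\{0,1,\infty\}$ are $x=0$ with $\operatorname{ord}f=1$ and $x=\pm1$ with $\operatorname{ord}f=0$; the function name `A_f` and the sample points are ours.

```python
# Numerical check that the expression for A_f does not depend on x,
# for the sample Belyi function f(x) = x**2.

def A_f(x):
    f, df = x**2, 2*x
    numerator = abs(f)**(-2/3) * abs(f - 1)**(-2/3) * abs(df)
    # product over the finite preimages x_k, exponents (ord_k f - 2)/3
    denominator = (abs(x)**((1 - 2)/3)
                   * abs(x - 1)**((0 - 2)/3)
                   * abs(x + 1)**((0 - 2)/3))
    return numerator / denominator

samples = [0.3 + 0.7j, -1.2 + 0.4j, 2.5 - 1.1j]
values = [A_f(x) for x in samples]
# all values coincide: A_f = 2, consistent with Remark 6, since at
# x_k = infinity one has ord f = 1 and c_k = 1, so (ord + 1)|c_k|^{-1/3} = 2
assert max(values) - min(values) < 1e-9
```

The cancellation is exact here: $|f|^{-2/3}|f-1|^{-2/3}|f'| = 2|x|^{-1/3}|x-1|^{-2/3}|x+1|^{-2/3}$, so the quotient equals $2$ identically.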
**Proposition 2**. *The equality [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is valid with a constant $C_f$ that depends only on the Belyi function $f$. Moreover, $$\label{C_f} \begin{aligned} C_f= \frac 1 6 \sum_{k: x_k\neq\infty, f(x_k)\neq\infty} \left(\frac{\operatorname{ord}_k f}{\operatorname{ord}_k f+1} \log|c_k| + (\operatorname{ord}_k f+2) \log (\operatorname{ord}_k f+1)\right) \\ + \frac 1 6 \sum_{k: x_k\neq\infty, f(x_k)=\infty} \left( \frac{\operatorname{ord}_k f+2}{\operatorname{ord}_k f+1} \log|c_k|-\operatorname{ord}_k f \log (\operatorname{ord}_k f+1) \right) \\ + \left. \frac 1 6 \left(- \frac{\operatorname{ord}_k f}{\operatorname{ord}_k f+1} \log|c_k| - (\operatorname{ord}_k f+2)\log (\operatorname{ord}_k f+1) \right)\right|_{k:x_k=\infty, f(\infty)=\infty} \\ +\left. \frac 1 6 \left( -\frac{\operatorname{ord}_k f+2}{\operatorname{ord}_k f+1} \log|c_k| + \operatorname{ord}_k f\log (\operatorname{ord}_k f+1) \right)\right |_{k: x_k=\infty, f(\infty)\neq\infty}, \end{aligned}$$ where* - *$c_k$ stands for the first nonzero coefficient in the Taylor series of $f(x)-f(x_k)$ at $x_k$, if $x_k$ is not a pole of $f$;* - *$c_k$ stands for the first nonzero coefficient of the Laurent series of $f$ at $x_k$, if $x_k$ is a pole of $f$.* The proof of Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"} is preceded by Lemma [Lemma 3](#INT){reference-type="ref" reference="INT"} below. 
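For concreteness, the data $(\operatorname{ord}_k f, c_k)$ entering Proposition 2 is easy to extract symbolically. A minimal sympy sketch (illustrative only) for the sample function $f(x)=x^2$, our own choice, at the ramification point $x_k=0$:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2                       # sample Belyi function (our choice)
x_k = 0                        # a ramification point, f(x_k) = 0, not a pole
expansion = sp.Poly(sp.expand(f - f.subs(x, x_k)), x)
# lowest-order monomial of the Taylor series of f(x) - f(x_k) at x_k:
# its exponent equals ord_k f + 1, its coefficient is c_k
low = min(sum(m) for m in expansion.monoms())
c_k = expansion.coeff_monomial(x**low)
assert low == 2 and c_k == 1   # ord_0 f = 1 and c_0 = 1
```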
To formulate the lemma we need the following refined version of the asymptotics [\[asfphi\]](#asfphi){reference-type="eqref" reference="asfphi"} for the metric potential $f^*\phi$: $$\label{asympPB1} \begin{aligned} (f^*\phi) (x)= \left(f^*\pmb\beta\right)_k \log|x-x_k| +(f^*\phi)_k +o(1),\quad x\to x_k\neq\infty, \\ (f^*\phi) (x)= - \bigl( \left(f^*\pmb\beta\right)_k +2\bigr) \log|x| +(f^*\phi)_k +o(1) , \quad x\to x_k=\infty, \end{aligned}$$ where $\left(f^*\pmb\beta\right)_k$ is the same as in [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"}. Since $f^*\phi=\phi\circ f+\log |f'|$, it is not hard to see that the coefficients $(f^*\phi)_k$ in [\[asympPB1\]](#asympPB1){reference-type="eqref" reference="asympPB1"} satisfy $$\label{1} \begin{aligned} (f^*\phi)_k=\phi_{f(x_k)}+(\beta_{f(x_k)}+1)\log|c_k| +\log(\operatorname{ord}_k f+1), \quad \text{ if }\quad f(x_k)\neq\infty, \\ (f^*\phi)_k=\phi_{f(x_k)}-(\beta_{f(x_k)}+1)\log|c_k| +\log(\operatorname{ord}_k f+1), \quad \text{ if }\quad f(x_k)=\infty. \end{aligned}$$ Here $\beta_j$ and $\phi_j$ with $j\in\{0,1,\infty\}$ are the same as in [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}, and $c_k$ is the same as in Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}. **Lemma 3**. *Recall that $m_{\pmb\beta}=e^{2\phi}|dz|^2$ and $f^*m_{\pmb\beta}=e^{f^*\phi}|dx|^2$. We have $$\begin{aligned} (|\pmb\beta|+2)\int_{\Bbb C}(f^*\phi) e^{2f^*\phi}\frac {dx\wedge d\bar x}{-2i}=(|\pmb\beta|+2)\deg f\int_{\Bbb C} \phi e^{2\phi}\frac {dz\wedge d\bar z}{-2i} \\ -\sum_{k: f(x_k)\neq \infty}(f^*\phi)_k \operatorname{ord}_k f + \sum_{k: f(x_k)=\infty} (f^*\phi)_k\operatorname{ord}_k f + \sum_{k}(f^*\pmb\beta)_k \log\left |(\operatorname{ord}_k f+1)c_k\right| \\ +2 \sum_{k: x_k\neq\infty, f(x_k)=\infty}(f^*\phi)_k+\left. 
2 \log\left |(\operatorname{ord}_k f+1)c_k\right|\right|_{k:x_k=\infty} -\left.2(f^*\phi)_k\right|_{k:x_k=\infty, f(\infty)\neq\infty}, \end{aligned}$$ where $\left(f^*\pmb\beta\right)_k$ and $(f^*\phi)_k$ are the coefficients in the asymptotics [\[asympPB1\]](#asympPB1){reference-type="eqref" reference="asympPB1"} satisfying the equalities [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"} and [\[1\]](#1){reference-type="eqref" reference="1"} respectively.* *Proof.* We begin with the equality $$\int_{\Bbb C} (f^*\phi) e^{2f^*\phi}\frac {dx\wedge d\bar x}{-2i}=\deg f \cdot\int_{\Bbb C} \phi e^{2\phi}\frac {dz\wedge d\bar z}{-2i}+\int_{\Bbb C} (\log |f'|) e^{2f^*\phi} \frac {dx\wedge d\bar x}{-2i}$$ that easily follows from the relation $f^*\phi=\phi\circ f+\log |f'|$. Thus, in order to prove the lemma, we only need to evaluate the last integral. Thanks to the Liouville equation [\[LUx\]](#LUx){reference-type="eqref" reference="LUx"} we have $$\label{Int log f} \begin{aligned} (|\pmb\beta|+2)\int_{\Bbb C} (\log |f'|) e^{2 f^*\phi} \frac {dx\wedge d\bar x}{-2i}=\lim_{\epsilon\to 0} \frac 1 {2\pi}\int_{\Bbb C^\epsilon} (\log |f'|)\bigl (-4\partial_x\partial_{\bar x} (f^*\phi)\bigr) \frac {dx\wedge d\bar x}{-2i} \\ =\lim_{\epsilon\to 0} \frac 1 {2\pi}\oint_{\partial \Bbb C^\epsilon} \Bigl((f^*\phi) \partial_{\vec n }(\log|f'|) -(\log|f'|) \partial_{\vec n } (f^*\phi) \Bigr) \, |d x|, \end{aligned}$$ where $\Bbb C^\epsilon$ stands for the large disk $|x|<1/\epsilon$ with the small disks $|x-x_k|<\epsilon$ encircling the marked points $x_k$ removed, and ${\vec n}$ is the unit outward normal. The last equality in [\[Int log f\]](#Int log f){reference-type="eqref" reference="Int log f"} is valid because $x\mapsto \log|f'(x)|$ is a harmonic function on $\Bbb C^\epsilon$.
Notice that if $x_k$ is not a pole of $f$, then we have $$\label{asymp_log_f1} \begin{aligned} \log|f'(x)|=\operatorname{ord}_k f\log |x-x_k|+\log\left |(\operatorname{ord}_k f+1)c_k\right|+o(1),\ x\to x_k\neq\infty, \\ \log|f'(x)|=-(\operatorname{ord}_k f +2)\log|x| +\log\left |(\operatorname{ord}_k f+1)c_k\right|+o(1) , \ x\to x_k=\infty. \end{aligned}$$ Besides, if $x_k$ is a pole of $f$, we have $$\label{asymp_log_f2} \begin{aligned} \log|f'(x)|=& -(\operatorname{ord}_k f +2)\log|x-x_k| \\ &\qquad \qquad+\log |(\operatorname{ord}_k f+1)c_k|+o(1),\quad x\to x_k\neq \infty, \\ \log|f'(x)|=& \operatorname{ord}_k f \log|x| +\log |(\operatorname{ord}_k f+1)c_k| +o(1),\quad x\to x_k= \infty. \end{aligned}$$ Below, for the derivatives along the unit outward normal vector ${\vec n}$ we use the equalities $$\partial_{\vec n}=-\partial_{|x-x_j|}\quad \text{on}\quad |x-x_j|=\epsilon;\quad \partial_{\vec n}=\partial_{|x|}\quad \text{on}\quad |x|=1/\epsilon.$$ We calculate the integral in the right-hand side of [\[Int log f\]](#Int log f){reference-type="eqref" reference="Int log f"} relying on [\[asympPB1\]](#asympPB1){reference-type="eqref" reference="asympPB1"} together with [\[asymp_log_f1\]](#asymp_log_f1){reference-type="eqref" reference="asymp_log_f1"} and [\[asymp_log_f2\]](#asymp_log_f2){reference-type="eqref" reference="asymp_log_f2"}.
As a result, we obtain $$\begin{aligned} \lim_{\epsilon\to 0}\frac 1 {2\pi}\oint_{\partial \Bbb C^\epsilon} \Bigl((f^*\phi) \partial_{\vec n }(\log|f'|) -(\log|f'|) \partial_{\vec n } (f^*\phi) \Bigr)\,|d x| \\ =\sum_{j=1}^n \frac 1 {2\pi}\lim_{\epsilon\to 0}\oint_{ |x-x_j|=\epsilon}\Bigl((f^*\phi) \partial_{\vec n }(\log|f'|) -(\log|f'|) \partial_{\vec n } (f^*\phi) \Bigr)\,|d x| \\ =\sum_{k: x_k\neq\infty, f(x_k)\neq \infty}\Bigl( -(f^*\phi)_k \operatorname{ord}_k f+(f^*\pmb\beta)_k \log\left |(\operatorname{ord}_k f+1)c_k\right|\Bigr) \\ + \sum_{k: x_k\neq\infty, f(x_k)=\infty}\Bigl ( (f^*\phi)_k(\operatorname{ord}_k f +2)+(f^*\pmb\beta)_k \log\left |(\operatorname{ord}_k f+1)c_k \right| \Bigr) \\ +\Bigl( -(f^*\phi)_k (\operatorname{ord}_k f +2)+(\log\left |(\operatorname{ord}_k f+1)c_k\right|) \bigl( (f^*\pmb\beta)_k +2\bigr) \Bigr)\Bigr|_{k: x_k=\infty, f(\infty)\neq \infty} \\ +\Bigl( (f^*\phi)_k \operatorname{ord}_k f + ((f^*\pmb\beta)_k +2) \log\left |(\operatorname{ord}_k f+1)c_k\right|\Bigr)\Bigr|_{k: x_k=\infty, f(\infty)=\infty}. \end{aligned}$$ After regrouping the terms in the right hand side, this implies the assertion of Lemma [Lemma 3](#INT){reference-type="ref" reference="INT"}. ◻ **Lemma 4**. *For the determinant $\det\Delta_{f^*\pmb\beta}$ of the Friedrichs Laplacian on $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ the anomaly formula $$\label{S20Anom} \begin{aligned} \log \frac {\det\Delta_{f^*\pmb\beta}}{\deg f} = -& \frac{|\pmb\beta|+2}{6}\int_{\Bbb C} f^*\phi e^{2f^*\phi}\,\frac {dx\wedge d\bar x} {-2i} +\frac 1 6 \sum_{k: x_k\neq \infty}\frac {\left(f^*\pmb\beta\right)_k}{\left(f^*\pmb\beta\right)_k+1} (f^*\phi)_k \\& -\left. \frac 1 6 \frac{\left(f^*\pmb\beta\right)_k+2} {\left(f^*\pmb\beta\right)_k+1}(f^*\phi)_k\right |_{k:x_k=\infty} -\sum_{k}\mathcal C\bigl(\left(f^*\pmb\beta\right)_k\bigr) + {\bf C} \end{aligned}$$ is valid.
Here $\mathcal C(\beta)$ is the function defined in [\[cbeta\]](#cbeta){reference-type="eqref" reference="cbeta"}, and $\bf C$ stands for the constant [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}.* *Proof.* The main argument of the proof is similar to the one in [@KalvinLast Proof of Prop. 2.1]. For this reason, we skip the details that can be easily restored from the references [@KalvinLast; @KalvinCCM; @KalvinJFA]. Let us first obtain an asymptotics for the determinant of the Friedrichs Dirichlet Laplacian $\Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\leqslant 1/\epsilon}$ on the disk $|x|\leqslant 1/\epsilon$ endowed with the metric $f^*m_{\pmb\beta}$ as $\epsilon\to 0^+$. Denote by $\Delta^D_0 \!\!\restriction_{|x|\leqslant 1/\epsilon}$ the selfadjoint Dirichlet Laplacian on the disk $|x|\leqslant 1/\epsilon$ equipped with the flat background metric $|dx|^2$. As is known [@Weisberger], for the determinant of the Dirichlet Laplacian on the flat disk one has $$\log \det\Delta^D_0 \!\!\restriction_{|x|\leqslant 1/\epsilon}=\frac 1 3 \log\epsilon +\frac 1 3 \log 2-\frac 1 2\log 2 \pi -\frac 5 {12} -2\zeta_R'(-1).$$ On the other hand, the Polyakov-Alvarez type formula from [@KalvinJFA Theorem 1.1.2] reads $$\label{PAdisk} \begin{aligned} \log \frac {\det \Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\leqslant 1/\epsilon}}{\det \Delta^D_0 \!\!\restriction_{|x|\leqslant 1/\epsilon}} =& -\frac {|f^*\pmb\beta|+2}{6\deg f } \int_{\Bbb C} (f^*\phi) e^{2f^*\phi}\frac {dx\wedge d\bar x}{-2i} \\ & -\frac 1 {12\pi}\oint_{|x|=1/\epsilon}(f^*\phi)\partial_{\vec n}(f^*\phi)\,|dx|-\frac \epsilon {6\pi}\oint_{|x|=1/\epsilon} (f^*\phi)\, |dx| \\ & -\frac 1 {4\pi} \oint_{|x|=1/\epsilon} \partial_{\vec n}(f^*\phi)\, |dx|+\frac 1 6 \sum_{k: x_k\neq\infty} \frac {\left(f^*\pmb\beta\right)_k}{\left(f^*\pmb\beta\right)_k+1} (f^*\phi)_k \\ &-\sum_{k: x_k\neq \infty} \mathcal C\bigl(\left(f^*\pmb\beta\right)_k\bigr), \end{aligned}$$ where $|f^*\pmb\beta|=\sum_k 
\left(f^*\pmb\beta\right)_k$ is the degree of the divisor $f^*\pmb\beta=\sum_k \left(f^*\pmb\beta\right)_k\cdot x_k$. By the Gauss-Bonnet theorem [@Troyanov], the product of the (regularized) Gaussian curvature by the total area of the singular sphere $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ equals $2\pi(|f^*\pmb\beta|+2)$. Since the (regularized) Gaussian curvature equals that of $m_{\pmb\beta}$, and the total area equals $\deg f$ (the base sphere has unit area), we conclude that $$\frac{|f^*\pmb\beta|+2}{ \deg f}=|\pmb\beta|+2.$$ In [\[PAdisk\]](#PAdisk){reference-type="eqref" reference="PAdisk"} we also replace the integrals along the circle $|x|=1/\epsilon$ with their asymptotics. As a result, the Polyakov-Alvarez type formula [\[PAdisk\]](#PAdisk){reference-type="eqref" reference="PAdisk"} implies $$\begin{aligned} \log {\det \Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\leqslant 1/\epsilon}}=-\frac {|\pmb\beta|+2}{6} \int_{\Bbb C}(f^*\phi) e^{2f^*\phi}\frac {dx\wedge d\bar x}{-2i} +\frac 1 6 \sum_{k: x_k\neq\infty} \frac {\left(f^*\pmb\beta\right)_k}{\left(f^*\pmb\beta\right)_k+1} (f^*\phi)_k \\ -\sum_{k:x_k\neq \infty} \mathcal C\bigl( \left(f^*\pmb\beta\right)_k\bigr ) +\frac 1 6 \left(f^*\pmb\beta\right)_k \left((f^*\phi)_k+3\right)|_{k:x_k=\infty} +\frac 1 3 \log 2-\frac 1 2\log 2 \pi +\frac 7 {12} \\ -2\zeta_R'(-1) + \frac 1 6\left( \left(\left(f^*\pmb\beta\right)_k+2\right)^2 -2\left(\left(f^*\pmb\beta\right)_k+1\right)\right)|_{k:x_k=\infty} \log \epsilon + o(1),\ \epsilon\to 0^+.
\end{aligned}$$ In a similar way, one can also obtain the asymptotics for the determinant of the Friedrichs Dirichlet Laplacian on the cap $|x|\geqslant 1/\epsilon$ of the singular sphere $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$: $$\begin{aligned} \log \det\Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\geqslant 1/\epsilon}=-\frac 1 6 ((\left(f^*\pmb\beta\right)_k+1)^2+1)\log \epsilon-\frac 1 6\left(\left(f^*\pmb\beta\right)_k+1+\frac 1 {\left(f^*\pmb\beta\right)_k+1}\right)(f^*\phi)_k \\ -\mathcal C\bigl(\left(f^*\pmb\beta\right)_k\bigr) -2\zeta_R'(-1)-\frac 5 {12} +\frac 1 3 \log 2 -\frac 1 2 \log2\pi -\frac {\left(f^*\pmb\beta\right)_k} 2 +o(1),\quad \epsilon\to 0^+, \end{aligned}$$ where $k$ is such that $x_k=\infty$. In total, we have $$\label{FinalAsympt} \begin{aligned} \log {\det \Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\leqslant 1/\epsilon}}+ \log \det\Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\geqslant 1/\epsilon} =-\frac {|\pmb\beta|+2}{6} \int_{\Bbb C}(f^*\phi) e^{2f^*\phi}\frac {dx\wedge d\bar x}{-2i} \\ +\frac 1 6 \sum_{k: x_k\neq\infty} \frac {\left(f^*\pmb\beta\right)_k}{\left(f^*\pmb\beta\right)_k+1} (f^*\phi)_k -\frac 1 6\left(1+\frac 1 {\left(f^*\pmb\beta\right)_k+1}\right) (f^*\phi)_k\Big|_{k:x_k=\infty} \\ -\sum_{k} \mathcal C\bigl(\left(f^*\pmb\beta\right)_k\bigr)+{\bf C}+\log 2 + o(1),\quad \epsilon\to0^+. \end{aligned}$$ Now we cut the singular sphere $\left(\overline{\Bbb C}_x, f^*m_{\pmb\beta}\right)$ along the circle $|x|=1/\epsilon$ with the help of the BFK formula [@BFK Theorem B$^*$] to obtain $$\label{BFKf} \begin{aligned} \log \det \Delta_{f^*\pmb\beta}=&\log \deg f -\log 2 \\&+ \lim_{\epsilon\to 0^+}\left(\log \det \Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\leqslant 1/\epsilon}+\log \det \Delta^D_{f^*\pmb\beta}\!\!\restriction_{|x|\geqslant 1/\epsilon} \right).
\end{aligned}$$ Here $\deg f$ is the total area of the singular sphere, and the number $-\log 2$ is the value of the difference $$\log \det \mathcal N(\epsilon)-\log \oint_{|x|=1/\epsilon}e^{f^*\phi}\,|dx|,$$ where $\det \mathcal N(\epsilon)$ is the determinant of the Neumann jump operator on the cut $|x|=1/\epsilon$, see [@KalvinLast Proof of Prop. 2.1] or [@KalvinCCM]. As demonstrated in [@KalvinJFA], the BFK formula [\[BFKf\]](#BFKf){reference-type="eqref" reference="BFKf"} remains valid in spite of the fact that the metric $f^*m_{\pmb\beta}$ is singular. In the limit, the asymptotics [\[FinalAsympt\]](#FinalAsympt){reference-type="eqref" reference="FinalAsympt"} together with the BFK formula [\[BFKf\]](#BFKf){reference-type="eqref" reference="BFKf"} implies the anomaly formula [\[S20Anom\]](#S20Anom){reference-type="eqref" reference="S20Anom"}. This completes the proof of Lemma [Lemma 4](#ANOMALY){reference-type="ref" reference="ANOMALY"}. ◻ *Proof of Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}.* Thanks to Lemma [Lemma 3](#INT){reference-type="ref" reference="INT"}, the integral in the right-hand side of the anomaly formula in Lemma [Lemma 4](#ANOMALY){reference-type="ref" reference="ANOMALY"} reduces to an integral on the target sphere $(\overline{\Bbb C}_z,m_{\pmb\beta})$. Our next purpose is to express that integral in terms of the determinant of the Laplacian $\Delta_{\pmb\beta}$. With this aim in mind, we write down the anomaly formula $$\label{DetDelta} \begin{aligned} \log {\det\Delta_{\pmb\beta}} = -\frac{|\pmb\beta|+2}{6}\int_{\Bbb C} \phi e^{2\phi}\,\frac {dz\wedge d\bar z} {-2i} +&\frac 1 6 \sum_{j=0,1}\frac {\beta_j}{\beta_j+1}\phi_j \\ & -\frac 1 6\frac {\beta_\infty+2} {\beta_\infty+1}\phi_\infty -\sum_{j\in\{0,1,\infty\}}\mathcal C(\beta_j) +{\bf C}, \end{aligned}$$ which is [\[S20Anom\]](#S20Anom){reference-type="eqref" reference="S20Anom"} in the particular case of the identity map $f=\operatorname{id}$ (so that $\deg f=1$).
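The degree relation $(|f^*\pmb\beta|+2)/\deg f=|\pmb\beta|+2$ obtained above from the Gauss-Bonnet theorem can also be checked combinatorially via Riemann-Hurwitz. The sympy sketch below (illustrative only) does this for the sample function $f(x)=x^2$ with symbolic weights, taking $(f^*\pmb\beta)_k=(\operatorname{ord}_k f+1)(\beta_{f(x_k)}+1)-1$, the standard cone-angle pull-back rule, as the content of [orders\*]:

```python
import sympy as sp

b0, b1, binf = sp.symbols('beta0 beta1 beta_inf')
deg = 2
# preimage data for f(x) = x**2: pairs (ord_k f, weight at the image point)
preimages = [(1, b0),           # x = 0        -> 0,        ord 1
             (0, b1), (0, b1),  # x = +1, -1   -> 1,        ord 0
             (1, binf)]         # x = infinity -> infinity, ord 1
# pulled-back orders (f*beta)_k from the cone-angle rule
pullback = [(m + 1)*(b + 1) - 1 for m, b in preimages]
lhs = sum(pullback) + 2                 # |f*beta| + 2
rhs = deg * (b0 + b1 + binf + 2)        # deg f * (|beta| + 2)
assert sp.simplify(lhs - rhs) == 0
```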
Now we change the index of summation from $j\in\{0,1,\infty\}$ to $k=1,2,\dots,n$ as follows: $$\label{S20.2} \begin{aligned} \log {\det\Delta_{\pmb\beta}} =-\frac{|\pmb\beta|+2}{6}\int_{\Bbb C} \phi e^{2\phi}\,\frac {dz\wedge d\bar z} {-2i} +\frac 1 6 \sum_{k: f(x_k)\neq \infty} \frac {\operatorname{ord}_k f+1}{\deg f} \frac {\beta_{f(x_k)}}{\beta_{f(x_k)}+1}\phi_{f(x_k)} \\-\sum_{k} \frac {\operatorname{ord}_k f+1}{\deg f}\mathcal C(\beta_{f(x_k)}) -\frac 1 6 \sum_{k:f(x_k)=\infty} \frac {\operatorname{ord}_k f+1}{\deg f} \frac {\beta_{f(x_k)}+2} {\beta_{f(x_k)}+1}\phi_{f(x_k)} +{\bf C}. \end{aligned}$$ Here we rely on the obvious equalities $$\label{OEQ} \sum_{k: f(x_k)=j} (\operatorname{ord}_k f+1)=\deg f\ \text{ for any fixed } j\in\{0,1,\infty\}.$$ The equality [\[S20.2\]](#S20.2){reference-type="eqref" reference="S20.2"} together with the result of Lemma [Lemma 3](#INT){reference-type="ref" reference="INT"} allows us to express the integral in the anomaly formula [\[S20Anom\]](#S20Anom){reference-type="eqref" reference="S20Anom"} in terms of $\det\Delta_{\pmb\beta}$ and the explicit uniformization data in [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}. In the remaining part of the proof, we show that as a result, we arrive at the equality [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} with $C_f$ given by [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"}. It is easy to see that proceeding as discussed above, we get the terms $$\label{EasyTerms} \deg f \cdot\log\det\Delta_{\pmb \beta}- \sum_{k}\left(\mathcal C\left(\left(f^*\pmb\beta\right)_k\right) -(\operatorname{ord}_k f+1) \mathcal C(\beta_{f(x_k)})\right) - (\deg f-1) {\bf C}$$ in the right hand side of [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"}; we omit the details.
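The equalities [OEQ] just used say that every fiber of $f$ has total multiplicity $\deg f$. A minimal sympy sanity check (illustrative only) for a degree-3 example of our choosing, $f(x)=x^3$:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3                        # sample Belyi function, ramified over 0 and infinity
deg = int(sp.degree(f, x))

def fiber_multiplicity(poly):
    # sum of (ord_k f + 1) over the finite points of a fiber:
    # a root of multiplicity m contributes ord_k f + 1 = m
    return sum(sp.roots(poly, x).values())

assert fiber_multiplicity(f) == deg       # fiber over 0: the triple root x = 0
assert fiber_multiplicity(f - 1) == deg   # fiber over 1: three simple cube roots of 1
# the fiber over infinity is the single pole x = infinity with
# ramification order deg - 1, also contributing ord + 1 = deg
```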
It is considerably harder to show that we also obtain the other terms in the right-hand sides of [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} and [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"}. With this aim in mind, we separately consider the following four possibilities: - $x_k\neq\infty$ is not a pole of $f$, - $x_k\neq\infty$ is a pole of $f$, - $x_k=\infty$ is not a pole of $f$, and - $x_k=\infty$ is a pole of $f$. If $x_k\neq\infty$ is not a pole of $f$, then, in addition to the terms listed in [\[EasyTerms\]](#EasyTerms){reference-type="eqref" reference="EasyTerms"}, we get $\frac 1 6$ times $$\begin{aligned} \frac {\left(f^*\pmb\beta\right)_k}{\left(f^*\pmb\beta\right)_k+1} (f^*\phi)_k -\Bigl( -(f^*\phi)_k \operatorname{ord}_k f+(f^*\pmb\beta)_k \log\left |(\operatorname{ord}_k f+1)c_k\right|\Bigr)\\-(\operatorname{ord}_k f+1) \frac {\beta_{f(x_k)}}{\beta_{f(x_k)}+1}\phi_{f(x_k)} \\ =\left(\operatorname{ord}_k f+1-\frac 1 {\operatorname{ord}_k f +1}\right)\frac{\phi_{f(x_k)}}{\beta_{f(x_k)}+1} - \left( (f^*\pmb\beta)_k+1+\frac 1 { (f^*\pmb\beta)_k+1} \right)\log (\operatorname{ord}_k f+1) \\ + \left( \frac{\operatorname{ord}_k f}{\operatorname{ord}_k f+1} \log|c_k| + (\operatorname{ord}_k f+2)\log (\operatorname{ord}_k f+1)\right). \end{aligned}$$ Here the first two terms in the right-hand side contribute to the first and the second sums in the right-hand side of [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} respectively. The last term goes to $C_f$, cf. [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"}.
If $x_k \neq \infty$ is a pole of $f$, then we get the terms listed in [\[EasyTerms\]](#EasyTerms){reference-type="eqref" reference="EasyTerms"} and also $\frac 1 6$ times $$\begin{aligned} \frac {\left(f^*\pmb\beta\right)_k}{\left(f^*\pmb\beta\right)_k+1} (f^*\phi)_k -\Bigl ( (f^*\phi)_k(\operatorname{ord}_k f +2)+(f^*\pmb\beta)_k \log\left |(\operatorname{ord}_k f+1)c_k\right|\Bigr) \\ +(\operatorname{ord}_k f+1) \frac {\beta_{f(x_k)}+2} {\beta_{f(x_k)}+1}\phi_{f(x_k)} \\ =\left(\operatorname{ord}_k f+1-\frac 1 {\operatorname{ord}_k f +1}\right)\frac{\phi_{f(x_k)}}{\beta_{f(x_k)}+1} - \left( (f^*\pmb\beta)_k+1+\frac 1 { (f^*\pmb\beta)_k+1} \right)\log (\operatorname{ord}_k f+1) \\ + \left( \frac{\operatorname{ord}_k f+2}{\operatorname{ord}_k f+1} \log|c_k|- \operatorname{ord}_k f \log (\operatorname{ord}_k f+1) \right). \end{aligned}$$ Here again, the first two terms in the right-hand side contribute to the first and the second sums in the right-hand side of [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"}. The last term goes to $C_f$, cf. [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"}. 
Similarly, if $x_k = \infty$ is not a pole of $f$, we get the terms listed in [\[EasyTerms\]](#EasyTerms){reference-type="eqref" reference="EasyTerms"} and $\frac 1 6$ times $$\begin{aligned} -\frac{\left(f^*\pmb\beta\right)_k+2} {\left(f^*\pmb\beta\right)_k+1}(f^*\phi)_k - \Bigl( -(f^*\phi)_k (\operatorname{ord}_k f +2)+ ((f^*\pmb\beta)_k+2) (\log\left |(\operatorname{ord}_k f+1)c_k\right|) \Bigr) \\ -(\operatorname{ord}_k f+1) \frac {\beta_{f(x_k)}}{\beta_{f(x_k)}+1}\phi_{f(x_k)} \\ =\left( \operatorname{ord}_k f+1-\frac 1 {\operatorname{ord}_k f +1}\right)\frac {\phi_{f(x_k)}}{\beta_{f(x_k)}+1} \qquad\qquad \qquad \qquad\qquad \qquad \qquad\qquad \\ - \left( (f^*\pmb\beta)_k+1+\frac 1 { (f^*\pmb\beta)_k+1} \right)\log (\operatorname{ord}_k f+1) \qquad\qquad \qquad\qquad \\ +\left( - \frac{\operatorname{ord}_k f+2}{\operatorname{ord}_k f+1} \log|c_k|+ \operatorname{ord}_k f\log (\operatorname{ord}_k f+1)\right) . \end{aligned}$$ Here, as before, the first two terms in the right-hand side are in agreement with [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"}, and the last term goes to $C_f$, cf. [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"}.
Finally, if $x_k = \infty$ is a pole of $f$, we get the terms listed in [\[EasyTerms\]](#EasyTerms){reference-type="eqref" reference="EasyTerms"} and $\frac 1 6$ times $$\begin{aligned} -\frac{\left(f^*\pmb\beta\right)_k+2} {\left(f^*\pmb\beta\right)_k+1}(f^*\phi)_k - \Bigl( (f^*\phi)_k \operatorname{ord}_k f +\bigl( (f^*\pmb\beta)_k +2\bigr) \log\left |(\operatorname{ord}_k f+1)c_k\right|\Bigr)\\+(\operatorname{ord}_k f+1) \frac {\beta_{f(x_k)}+2} {\beta_{f(x_k)}+1}\phi_{f(x_k)} \\ =\left(\operatorname{ord}_k f+1-\frac 1 {\operatorname{ord}_k f +1}\right)\frac{\phi_{f(x_k)}}{\beta_{f(x_k)}+1} - \left( (f^*\pmb\beta)_k+1+\frac 1 { (f^*\pmb\beta)_k+1} \right)\log (\operatorname{ord}_k f+1) \\ +\left(- \frac{\operatorname{ord}_k f}{\operatorname{ord}_k f+1} \log|c_k|- (\operatorname{ord}_k f+2)\log (\operatorname{ord}_k f+1) \right), \end{aligned}$$ which is in agreement with [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} and [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"}. This completes the proof of Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}. ◻ ## Euclidean equilateral triangulation {#BFET} By Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"} the constant $C_f$ does not depend on the metric $m_{\pmb\beta}$. In this section, we make a particular choice of $m_{\pmb\beta}$ that significantly simplifies the calculation of the constant $C_f$. As we show in the proof of Proposition [Proposition 5](#B_C_f){reference-type="ref" reference="B_C_f"} below, it makes sense to consider the Euclidean equilateral triangulation naturally associated with the Belyi function $f$, cf. [@ShVo; @VoSh]. **Proposition 5**.
*The constant $C_f$ in [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"} satisfies [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} with $A_f$ from [\[Exp_A\]](#Exp_A){reference-type="eqref" reference="Exp_A"}; see also Remark [Remark 6](#AvsCn){reference-type="ref" reference="AvsCn"} at the end of this subsection.* *Proof.* Here we consider the pull-back by $f$ of the flat singular metric $$m_{\pmb \beta}=c^2 |z|^{-4/3}|z-1|^{-4/3}|dz|^2,\quad \pmb\beta=\left(-\frac 2 3 \right)\cdot 0+\left(-\frac 2 3 \right)\cdot 1+\left(-\frac 2 3 \right)\cdot\infty,$$ where $c>0$ is a scaling coefficient that normalizes the area of the metric to one. Note that the surface $(\overline{\Bbb C}_z,m_{\pmb \beta})$ is isometric to two congruent Euclidean equilateral triangles glued together along their sides [@Troyanov; @KalvinJGA]. In terms of [@ShVo], the corresponding bicolored double triangle in Fig. [\[TemplateEuc\]](#TemplateEuc){reference-type="ref" reference="TemplateEuc"} (with $\beta_0=\beta_1=\beta_\infty=-2/3$) is a "butterfly" that puts the wings together to become $S^2$-like, see also [@VoSh]. Relying on anomaly formulae we can obtain expressions for the determinants of Laplacians on the base $(\overline{\Bbb C}_z, m_{\pmb \beta})$ and on the ramified covering $(\overline{\Bbb C}_x, f^*m_{\pmb \beta})$. These determinants satisfy the relation [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} from Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}. As we show below, this leads to the equalities [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} and [\[Exp_A\]](#Exp_A){reference-type="eqref" reference="Exp_A"} for $C_f$ and $A_f$ respectively, and thus proves the assertion.
The potential $\phi$ of the metric $m_{\pmb\beta}=e^{2\phi}|dz|^2$ has the asymptotics $$\phi(z)=-\frac 2 3 \log|z|+\log c+o(1),\quad z\to 0;\quad \phi(z)=-\frac 2 3 \log|z-1|+\log c +o(1),\quad z\to 1;$$ $$\phi(z)=-\frac 4 3\log|z|+\log c +o(1),\quad |z|\to \infty.$$ This asymptotics is particularly simple, because for the coefficients $\phi_j$ we have $\phi_j=\log c$ (instead of a cumbersome expression involving Gamma functions for $\phi_j$ in [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}, see [\[eqn_Psi\]](#eqn_Psi){reference-type="eqref" reference="eqn_Psi"}), and the weights $\beta_j$ of the marked points are $-2/3$. Moreover, thanks to our special choice of $m_{\pmb\beta}$, for the Laplacian $\Delta_{\pmb\beta}$ on $(\overline{\Bbb C}_z, m_{\pmb \beta})$ the right hand side of the anomaly formula [\[DetDelta\]](#DetDelta){reference-type="eqref" reference="DetDelta"} takes the following particularly simple form: $$\log {\det\Delta_{\pmb\beta}}=-\frac 4 3 \log c -3\mathcal C\left(-\frac 2 3\right)+{\bf C}.$$ This together with the relation [\[S20STMNT\]](#S20STMNT){reference-type="eqref" reference="S20STMNT"} proved in Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"} gives $$\label{auxFeb26} \begin{aligned} \log\frac {\det\Delta_{f^*\pmb\beta}}{\deg f } = \deg f\cdot \left(-\frac 4 3\log c +{\bf C}\right)+\frac 1 2 \left( 3\deg f - \sum_{k}\frac 1 {\operatorname{ord}_k f+1}\right) {\log c} \\ -\frac 1 6 \sum_{k} \left( \frac{\operatorname{ord}_k f +1} 3 +\frac 3 { \operatorname{ord}_k f+1} \right)\log (\operatorname{ord}_k f+1) \\ - \sum_{k}\mathcal C\left(\frac{\operatorname{ord}_k f -2} 3\right)- (\deg f-1){\bf C}+C_f. \end{aligned}$$ One can think of the surface $(\overline{\Bbb C}_x,f^*m_{\pmb \beta})$ as $\deg f$ copies of the flat bicolored double triangle glued along the edges in a way prescribed by $f$.
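The collapse of [DetDelta] to $-\frac 4 3 \log c -3\mathcal C(-\frac 2 3)+{\bf C}$ above is elementary: the area integral drops out because $|\pmb\beta|+2=0$, and the two boundary coefficients combine. A one-line sympy check of the metric-dependent part (illustrative only; the symbol `logc` stands for $\log c$):

```python
import sympy as sp

logc = sp.symbols('logc')            # placeholder for log c
b = sp.Rational(-2, 3)               # beta_0 = beta_1 = beta_inf = -2/3, phi_j = log c
# terms of the anomaly formula that involve the potential coefficients phi_j:
phi_part = (sp.Rational(1, 6) * 2 * (b / (b + 1)) * logc        # j = 0 and j = 1
            - sp.Rational(1, 6) * ((b + 2) / (b + 1)) * logc)   # j = infinity
# they sum to -(4/3) log c, as claimed
assert sp.simplify(phi_part - sp.Rational(-4, 3) * logc) == 0
```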
For the pull-back metric we obtain $$\label{pb1} f^*m_{\pmb\beta}=c^2 |f(x)|^{-4/3}|f(x)-1|^{-4/3}|f'(x)|^2 \, |dx|^2.$$ As is well known [@Troyanov], the flat metric $f^*m_{\pmb\beta}$ can equivalently be written in the standard form $$\label{pb2} f^*m_{\pmb\beta}=c^2 A_f^2 \prod_{k: x_k\neq \infty }|x-x_k|^{\frac 2 3 (\operatorname{ord}_k f-2)}\,|dx|^2.$$ Now the representation [\[Exp_A\]](#Exp_A){reference-type="eqref" reference="Exp_A"} for the scaling coefficient $A_f$ is an immediate consequence of the equalities [\[pb1\]](#pb1){reference-type="eqref" reference="pb1"} and [\[pb2\]](#pb2){reference-type="eqref" reference="pb2"}. Clearly, the metric potential $f^*\phi$ of $f^*m_{\pmb\beta}=e^{2f^*\phi}|dx|^2$ obeys the asymptotics $$\label{a_s_m} \begin{aligned} (f^*\phi)(x)= & \frac{\operatorname{ord}_k f-2} 3 \log|x-x_k|+ \log cA_f \\ & + \sum_{\ell: x_k\neq x_\ell\neq\infty} \frac {\operatorname{ord}_\ell f-2} 3 \log |x_k-x_\ell|+o(1), \quad x\to x_k\neq \infty, \end{aligned}$$ $$\label{asympA} (f^*\phi)(x)= -\left( \frac {\operatorname{ord}_k f-2}{3} +2\right) \log|x|+ \log cA_f +o(1), \quad x\to x_k=\infty.$$ Therefore, for the Laplacian $\widetilde \Delta_{f^*\pmb\beta}$ induced by the scaled flat metric $$(cA_f)^{-2} \cdot f^*m_{\pmb\beta}=\prod_{k: x_k\neq \infty}|x-x_k|^{\frac 2 3 (\operatorname{ord}_k f-2)}\,|dx|^2$$ of total area $\deg f\cdot (cA_f)^{-2}$, the anomaly formula from [@KalvinJFA Prop. 3.3] gives $$\label{fp-lA} \begin{aligned} \log\frac {\det\widetilde \Delta_{f^*\pmb\beta}}{\deg f \cdot (cA_f)^{-2}} =\frac 1 {18}\sum_{ k:x_k\neq \infty}\sum_{\ \ \ell:x_k\neq x_\ell\neq \infty}\frac{(\operatorname{ord}_k f-2)(\operatorname{ord}_\ell f -2)}{\operatorname{ord}_k f+1} \log|x_k-x_\ell| \\ -\sum_{k=1}^n\mathcal C\left( \frac{\operatorname{ord}_k f-2} 3\right)+{\bf C}.
\end{aligned}$$ The standard rescaling property of the determinant reads $$\log \frac{\det\Delta_{f^*\pmb\beta}}{\deg f }=\log\frac { \det\widetilde \Delta_{f^*\pmb\beta}}{\deg f \cdot (cA_f)^{-2}}-2(\zeta(0)+1)\cdot \log cA_f,$$ where $$\zeta(0)=-\frac 1 {12}\sum_{k} \left( \frac{\operatorname{ord}_k f +1} 3 -\frac 3 { \operatorname{ord}_k f+1} \right)-1$$ is the value of the spectral zeta function of $\widetilde \Delta_{f^*\pmb\beta}$ at zero; for details we refer to [@KalvinJFA Section 1.2]. This together with [\[OEQ\]](#OEQ){reference-type="eqref" reference="OEQ"} allows one to rewrite the equality [\[fp-lA\]](#fp-lA){reference-type="eqref" reference="fp-lA"} in the form $$\begin{aligned} \log\frac {\det\Delta_{f^*\pmb\beta}}{\deg f } =\frac 1 {18}\sum_{ k:x_k\neq \infty}\sum_{\ \ \ell:x_k\neq x_\ell\neq \infty}\frac{(\operatorname{ord}_k f-2)(\operatorname{ord}_\ell f -2)}{\operatorname{ord}_k f+1} \log|x_k-x_\ell| \\ +\frac 1 {6}\left( \deg f- \sum_{k=1}^n \frac 3 { \operatorname{ord}_k f+1} \right)\log cA_f -\sum_{k=1}^n\mathcal C\left( \frac{\operatorname{ord}_k f-2} 3\right)+{\bf C}. \end{aligned}$$ Substituting this into the left-hand side of [\[auxFeb26\]](#auxFeb26){reference-type="eqref" reference="auxFeb26"}, we finally arrive at [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"}. This completes the proof. ◻ **Remark 6**. *Let $c_k$ be the coefficient of the Taylor or Laurent series from Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}. Let $A_f$ be the constant defined in [\[Exp_A\]](#Exp_A){reference-type="eqref" reference="Exp_A"}.
Then the asymptotics [\[asympA\]](#asympA){reference-type="eqref" reference="asympA"} together with [\[asympPB1\]](#asympPB1){reference-type="eqref" reference="asympPB1"} and [\[1\]](#1){reference-type="eqref" reference="1"} imply $$A_f=\left\{ \begin{array}{cc} (\operatorname{ord}_k f+1)|c_k|^{1/3}, & \text{ where } k \text{ is such that } x_k=\infty \text{ and } f(\infty)\neq\infty, \\ (\operatorname{ord}_k f+1)|c_k|^{-1/3}, & \text{ where } k \text{ is such that } x_k=\infty \text{ and } f(\infty)=\infty. \end{array} \right.$$* # Uniformization, Liouville action, and stationary points of the determinant {#Unif} Consider a constant curvature sphere $(\overline{\Bbb C}_x,e^{2\varphi} |dx|^2)$ with conical singularities of order $\beta_k$ located at $x_k\in \overline{\Bbb C}_x$. The parameters $x_1,\dots,x_n$ are called moduli. By the Gauss-Bonnet theorem [@Troyanov], the (regularized) Gaussian curvature $K$ of the singular sphere $(\overline{\Bbb C}_x,e^{2\varphi} |dx|^2)$ satisfies the equality $K=2\pi(|\pmb\beta|+2)/S_\varphi$, where $|\pmb\beta|=\sum_k \beta_k$ is the degree of the divisor $\pmb\beta=\sum \beta_k\cdot x_k$, and $$\label{Area} S_\varphi=\int_{\Bbb C} e^{2\varphi}\frac {dx\wedge d\bar x }{-2i}$$ is the total surface area of $(\overline{\Bbb C}_x,e^{2\varphi} |dx|^2)$. The potential $\varphi$ of the metric $e^{2\varphi} |dx|^2$ is a solution to the Liouville equation $$\label{LiouvilleEq} e^{-2\varphi}(-4\partial_x\partial_{\bar x} \varphi)=K, \quad x\in\Bbb C\setminus\{x_1,x_2,\dots,x_n\},$$ having the following asymptotics $$\label{ascoeff_k} \begin{aligned} \varphi(x)=\beta_k\log |x-x_k|+\varphi_k+o(1), \quad x\to x_k\neq\infty, \\ \varphi(x)=-(\beta_k+2)\log |x|+\varphi_k+o(1), \quad x\to x_k=\infty. \end{aligned}$$ The metric potential $\varphi$ and the coefficients $\varphi_k$ in the asymptotics depend on the divisor $\pmb\beta$, i.e. on the moduli $x_k$ and the orders $\beta_k$ of the conical singularities.
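As a quick sanity check of the sign and normalization conventions in [\[LiouvilleEq\]](#LiouvilleEq){reference-type="eqref" reference="LiouvilleEq"}, one can verify symbolically that the round-sphere potential $\varphi=\log 2-\log(1+|x|^2)$ (the smooth case $K=1$, with no conical points) solves the equation. A minimal sympy sketch, not part of the original text:

```python
# Symbolic sanity check of the Liouville equation e^{-2 phi}(-4 d_x d_xbar phi) = K
# for the smooth round sphere: phi = log 2 - log(1 + |x|^2) gives K = 1.
# In real coordinates x = x1 + i*x2 one has 4 d_x d_xbar = d_{x1 x1} + d_{x2 x2}.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
phi = sp.log(2) - sp.log(1 + x1**2 + x2**2)

lap_phi = sp.diff(phi, x1, 2) + sp.diff(phi, x2, 2)
inv_conf = (1 + x1**2 + x2**2)**2 / 4   # e^{-2 phi}, written out by the log rules
curvature = sp.simplify(inv_conf * (-lap_phi))
print(curvature)  # -> 1
```

Conical points only add log-singular terms to $\varphi$ away from which the same computation applies.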
Introduce the classical stress-energy tensor $T_{\varphi}:=2(\partial_x^2\varphi- (\partial_x \varphi)^2)$ of the Liouville field theory. The stress-energy tensor is a meromorphic function on $\overline{\Bbb C}$ satisfying $$\label{SET} \begin{aligned} T_\varphi(x)=& \sum_{k: x_k\neq \infty}\left(\frac {{ s}_k}{2(x-x_k)^2} +\frac { h_k}{x-x_k} \right),\\ T_\varphi(x)=& \frac {{s}_k}{2x^2}+\frac { h_k}{x^3}+O(x^{-4})\text{ as } x\to x_k=\infty. \end{aligned}$$ Here $s_k=-\beta_k(2+\beta_k)$ are the weights of the second order poles, and $h_k$ are the famous accessory parameters, e.g. [@He2; @Kra; @T-Z] and references therein. Note that the meromorphic quadratic differential $T_\varphi dx^2$ is a uniformizing projective connection compatible with the divisor $\pmb\beta$ [@Troyanov; @TroyanovSp]. Recall that one of the approaches to the uniformization consists of finding appropriate values of the accessory parameters $h_k$, and two appropriately normalized linearly independent solutions $u_1$ and $u_2$ to the Fuchsian differential equation $$\partial_x^2 u+\frac 1 2 T_{\varphi} u=0.$$ Then the metric potential $\varphi$ can be found in the form $$\varphi=\log 2 +\log |\partial_xw|-\log (1+K|w|^2),$$ where $w=u_1/u_2$ is a function, analytic in $\Bbb C\setminus\{x_1,\dots,x_n\}$, called the developing map. It satisfies the Schwarzian differential equation $\{w,x\}=T_\varphi(x)$, where $\{w,x\}=\frac{2w' w'''-3w''^2}{2w'^2}$ is the Schwarzian derivative. However, the accessory parameters can be determined in some special cases only, and, in general, they remain elusive. It turns out that in the geometric setting of this paper, the accessory parameters can be found explicitly in terms of the Belyi function $f$ and the orders $\beta_0$, $\beta_1$, and $\beta_\infty$ of the conical singularities of the metric $m_{\pmb\beta}=e^{2\phi}|dz|^2$. Indeed, consider the constant curvature unit area singular sphere $(\overline{\Bbb C}_z, m_{\pmb\beta})$, see Sec.
[2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"}. For the corresponding stress-energy tensor we have $$T_\phi(z)=\frac {\mathfrak s_0}{2z^2}+\frac { \mathfrak h_0}{z}+\frac {\mathfrak s_1}{2(z-1)^2}+\frac {\mathfrak h_1}{z-1},\quad T_\phi(z)=\frac {\mathfrak s_\infty}{2z^2}+\frac{\mathfrak h_\infty}{z^3}+O(z^{-4})\quad\text{as}\quad z\to\infty,$$ where $$\label{weight_s} \mathfrak s_k=-\beta_k(2+\beta_k), \quad k\in\{0,1,\infty\}.$$ The accessory parameters $$\label{S_AP} \mathfrak h_0=-\mathfrak h_1=\frac{{\mathfrak s}_0+{\mathfrak s}_1-{\mathfrak s}_\infty}{2}, \quad \mathfrak h_\infty=\frac{{\mathfrak s}_1+{\mathfrak s}_\infty-{\mathfrak s}_0}{2}$$ were first found by Schwarz. The stress-energy tensors satisfy the relation $$\label{relation1727} T_{f^*\phi}=(T_\phi\circ f) (f')^2+\{f,x\}.$$ As a consequence, we obtain the following simple result: **Lemma 7** (Accessory parameters). *Let $f: \overline {\Bbb C}_x \to \overline {\Bbb C}_z$ be a Belyi function unramified outside of the set $\{0,1,\infty\}$ and such that $f(\infty)\in\{0,1,\infty\}$. Then the stress-energy tensor $T_{f^*\phi}$ of the pull-back metric $f^*m_{\pmb\beta}=e^{2f^*\phi}|dx|^2$ of $m_{\pmb\beta}$ by $f$ satisfies the relations [\[SET\]](#SET){reference-type="eqref" reference="SET"}, where $x_1,\dots,x_n$ with $n=\deg f +2$ are the preimages of the points $\{0,1,\infty\}\subset\overline{\Bbb C}_z$ under $f$, the weights of the second order poles in [\[SET\]](#SET){reference-type="eqref" reference="SET"} are given by the equalities $$s_k=-(f^*\pmb\beta)_k ((f^*\pmb\beta)_k +2)$$ with orders $(f^*\pmb\beta)_k$ from [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"}, and the accessory parameters $h_k$ can be found as follows:* *1.
If $x_k$ is not a pole of $f$, then the accessory parameter $h_k$ satisfies $$\label{AP1} h_k= \left\{ \begin{array}{cc} \frac {d_k}{c_k}{\mathfrak s}_{f(x_k)}+ c_k \mathfrak h_{f(x_k)}& \text{ for }\quad \operatorname{ord}_k f =0, \\ - \frac {d_k}{c_k} \frac { (f^*\beta)_k ((f^*\beta)_k+2) }{ \operatorname{ord}_k f+1} & \text{ for }\quad \operatorname{ord}_k f >0, \end{array} \right.$$ where $\mathfrak s_k$ and $\mathfrak h_k$ are the same as in [\[weight_s\]](#weight_s){reference-type="eqref" reference="weight_s"} and [\[S_AP\]](#S_AP){reference-type="eqref" reference="S_AP"}, and the coefficients $c_k$ and $d_k$ are those from the first or the second expansion $$\begin{aligned} f(x)-f(x_k) & =c_k x^{-\operatorname{ord}_k f-1}\left(1+\frac{d_k}{c_k} \frac 1 x + O(x^{-2})\right), \quad x\to x_k=\infty, \\ f(x)-f(x_k) & =c_k(x-x_k)^{\operatorname{ord}_k f+1}\left(1+\frac {d_k}{c_k}(x-x_k)+O\left( (x-x_k)^{2}\right)\right),\quad x\to x_k\neq \infty, \end{aligned}$$ depending on whether $x_k=\infty$ or $x_k\neq\infty$,* *2. 
If $x_k$ is a pole of $f$, then the accessory parameter $h_k$ satisfies $$\label{AP2} h_k= \left\{ \begin{array}{cc} -\frac {d_k}{c_k}{\mathfrak s}_{\infty}+\frac{1}{ c_k} \mathfrak h_{\infty} & \text{ for }\quad \operatorname{ord}_k f =0, \\ \frac {d_k}{c_k} \frac { (f^*\beta)_k ((f^*\beta)_k+2) }{ \operatorname{ord}_k f+1} & \text{ for }\quad \operatorname{ord}_k f >0, \end{array} \right.$$ where $\mathfrak s_\infty$ and $\mathfrak h_\infty$ are the same as in [\[weight_s\]](#weight_s){reference-type="eqref" reference="weight_s"} and [\[S_AP\]](#S_AP){reference-type="eqref" reference="S_AP"}, and the coefficients $c_k$ and $d_k$ are those from the first or the second expansion $$\begin{aligned} f(x)& =c_k x^{\operatorname{ord}_k f+1}\left(1+\frac {d_k}{c_k} \frac 1 x+ O(x^{-2})\right), \quad x\to x_k=\infty, \\ f(x)&={c_k}{(x-x_k)^{-\operatorname{ord}_k f-1}}\left(1+\frac {d_k}{c_k} (x-x_k)+O((x-x_k)^{2})\right),\quad x\to x_k\neq \infty, \end{aligned}$$ depending on whether $x_k=\infty$ or $x_k\neq\infty$.* *Proof.* If $f(x_k)\neq \infty$ and $x_k= \infty$, then $$f(x)-f(x_k)=c_k x^{-\operatorname{ord}_k f-1} \left(1+\frac {d_k}{c_k} \frac 1 x + O(x^{ -2})\right), \quad x\to x_k=\infty,$$ and the asymptotics can be differentiated.
As a consequence, for the contributions into $$(T_\phi\circ f) (f')^2 (x)=\left(\frac {-\beta_{f(x_k)}(2+\beta_{f(x_k)})}{2(f(x)-f(x_k))^2}+\frac {\mathfrak h_{f(x_k)}}{f(x)-f(x_k)}+O(1)\right)(f'(x))^2$$ we obtain $$\frac {(f'(x))^2}{(f(x)-f(x_k))^2}=\frac {(\operatorname{ord}_k f+1)^2} {x^{2}}+2\frac {d_k}{c_k} \frac {\operatorname{ord}_k f+1} {x^3} +O(x^{-4}),$$ $$\frac {\mathfrak h_{f(x_k)}}{f(x)-f(x_k)}(f'(x))^2=\mathfrak h_{f(x_k)}{c_k} (\operatorname{ord}_k f+1)^2x^{-\operatorname{ord}_k f -3}+O(x^{-4}).$$ Besides, for the Schwarzian derivative, we get $$\{f,x\}= - \frac 1 2 (\operatorname{ord}_k f) (\operatorname{ord}_k f+2)\left( \frac {1}{x^{2} } + \frac {2d_k}{c_k( \operatorname{ord}_k f+1)} \frac {1} {x^3} +O(x^{-4}) \right),\quad x\to x_k=\infty.$$ These together with [\[relation1727\]](#relation1727){reference-type="eqref" reference="relation1727"} imply $$T_{f^*\phi}(x)=-\frac { (f^*\beta)_k( (f^*\beta)_k+2)}{2x^2}+\frac {h_k}{x^3}+O(x^{-4})\quad\text{as}\quad x\to x_k=\infty,$$ where the accessory parameter $h_k$ satisfies [\[AP1\]](#AP1){reference-type="eqref" reference="AP1"}. Similarly, if $f(x_k)=\infty$ and $x_k= \infty$, then starting with the asymptotics $$f(x)=c_k x^{\operatorname{ord}_k f+1}\left(1+\frac {d_k}{c_k} \frac 1 x+ O(x^{-2})\right), \quad x\to x_k=\infty,$$ we arrive at $$T_{f^*\phi}(x)=-\frac { (f^*\beta)_k( (f^*\beta)_k+2)}{2x^2}+\frac {h_k}{x^3}+O(x^{-4})\quad\text{as}\quad x\to x_k=\infty,$$ where the accessory parameter $h_k$ satisfies [\[AP2\]](#AP2){reference-type="eqref" reference="AP2"}. The cases $f(x_k)=\infty$ with $x_k\neq \infty$, and $f(x_k)\neq \infty$ with $x_k\neq \infty$ are similar; we omit the details. ◻ **Remark 8**. *In particular, by using Lemma [Lemma 7](#AccPar){reference-type="ref" reference="AccPar"} with $\pmb \beta=\beta_0\cdot 0+\beta_1\cdot 1+\left(-\frac 2 3 \right)\cdot \infty$ and $f(x)=x^3$, we recover the values of the accessory parameters found in [@Kra Sec.
4.13] for a constant curvature Hyperbolic or Euclidean metric with four conical singularities located at the vertices of a regular tetrahedron.* Introduce the Liouville action $$\label{LA} \mathcal S_{\pmb \beta}[\varphi]=2\pi(|\pmb\beta|+2) \left (\frac 1 {S_\varphi} \int_{\Bbb C} \varphi e^{2\varphi}\frac {dx\wedge d\bar x}{-2 i} -1 \right) +2\pi\sum_{k}\beta_k\varphi_k+4\pi \varphi_k|_{k: x_k=\infty},$$ where $S_\varphi$ is the total area of the singular sphere $(\overline{\Bbb C}_x, e^{2\varphi}|dx|^2)$, and $\varphi_k$ are the coefficients in the asymptotics [\[ascoeff_k\]](#ascoeff_k){reference-type="eqref" reference="ascoeff_k"}. As is demonstrated in [@KalvinLast], this new definition of the Liouville action is in agreement, for instance, with that in [@CMS; @Z-Z; @T-Z]. It is not hard to show that the Liouville equation [\[LiouvilleEq\]](#LiouvilleEq){reference-type="eqref" reference="LiouvilleEq"} is the Euler-Lagrange equation for the Liouville action functional $\psi\mapsto \mathcal S_{\pmb \beta}[\psi]$. **Remark 9**. *In the geometric setting of this paper we have $\varphi=f^*\phi$, and the anomaly formula from Lemma [Lemma 4](#ANOMALY){reference-type="ref" reference="ANOMALY"} can equivalently be written as follows: $$\log \frac {\det\Delta_{f^*\pmb\beta}}{\deg f} =\frac {|f^*\pmb\beta|+2}{6} -\frac 1 {12 \pi}\left(\mathcal S_{f^*\pmb\beta} [f^*\phi]-\pi\log \mathcal H_{f^*\pmb\beta} [f^*\phi]\right)-\sum_{k} \mathcal C((f^*\beta)_k) +{\bf C}.$$ Here the functional $\mathcal H_{f^*\pmb\beta} [f^*\phi]$ is defined explicitly via the equality $$\mathcal H_{f^*\pmb\beta} [f^*\phi]= \exp\left(2\sum_k \frac {(f^*\beta)_k((f^*\beta)_k+2)}{(f^*\beta)_k+1} (f^*\phi)_k\right)$$ with $(f^*\beta)_k$ from [\[orders\*\]](#orders*){reference-type="eqref" reference="orders*"} and $(f^*\phi)_k$ from [\[1\]](#1){reference-type="eqref" reference="1"}. 
Thus, as a consequence of the explicit expression for the determinant of the Laplacian in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} and the anomaly formula in Lemma [Lemma 4](#ANOMALY){reference-type="ref" reference="ANOMALY"}, one also obtains an explicit expression for the Liouville action $\mathcal S_{f^*\pmb\beta} [f^*\phi]$, cf. [@P-T-T; @ParkTeo; @T-T]. It would be interesting to check if this result can be reproduced by using conformal blocks [@Z-Z].* In the remaining part of this section, for simplicity, we assume that the orders $\beta_k$ of the conical singularities meet the condition $|\pmb\beta|\leqslant-2$, i.e. we exclude from consideration the spherical metrics. This allows us to differentiate the hyperbolic metric potential and the corresponding Liouville action with respect to $x_k$ and $\beta_k$ relying on known regularity results, e.g. [@KimWilkin; @Kra; @SchuTrap1; @SchuTrap2; @T-Z]. In the Euclidean case, we have $|\pmb\beta|=-2$, and the metrics can be written explicitly [@TroyanovSSC], which immediately justifies the differentiation. Let us also note that there are good grounds to believe [@Judge; @Judge2; @KalvinLast; @KimWilkin; @TroyanovSp] that the potential $\varphi$ of a constant curvature metric is necessarily a real-analytic function of the orders of conical singularities on the existence and uniqueness set $$\{\beta_k\in(0,1): \beta_k-|\pmb\beta|/2>0, k=1,\dots,n\},$$ and the results below remain valid on that set. Next, we show that the Liouville action $\mathcal S_{\pmb \beta}[\varphi]$ generates the accessory parameters $h_k$ as their common antiderivative. **Lemma 10** (After P. Zograf and L. Takhtajan). *Assume that $\beta_k\in(0,1)$, $k=1,\dots, n$, and $|\pmb\beta|\leqslant-2$.
Let $\varphi$ be a (unique) solution to the Liouville equation [\[LiouvilleEq\]](#LiouvilleEq){reference-type="eqref" reference="LiouvilleEq"} satisfying the area condition [\[Area\]](#Area){reference-type="eqref" reference="Area"} with some fixed $S_\varphi>0$, and having the asymptotics [\[ascoeff_k\]](#ascoeff_k){reference-type="eqref" reference="ascoeff_k"}. Then the Liouville action [\[LA\]](#LA){reference-type="eqref" reference="LA"} meets the equalities $$\label{TZeqn} -\frac 1 {2\pi}{\partial_{x_k} \mathcal S_{\pmb \beta}[\varphi]}=h_k, \quad k=1,\dots, n,$$ where $x_1,x_2,\dots, x_n$ are the moduli, and $h_1,h_2,\dots,h_n$ are the accessory parameters.* *Note that in the geometric setting of this paper, we have $\varphi=f^*\phi$. Thus the moduli $x_k$, $k=1,\dots,\deg f +2$, are the preimages of the points $\{0,1,\infty\}$ under $f$, and the accessory parameters $h_k$ are those found in Lemma [Lemma 7](#AccPar){reference-type="ref" reference="AccPar"}.* *Proof.* As is shown in [@KalvinLast], the expression in the right hand side of [\[LA\]](#LA){reference-type="eqref" reference="LA"} is an equivalent regularization of the Liouville action introduced in [@CMS; @Z-Z; @T-Z]. Hence, in the case of $|\pmb\beta|<-2$ and $K=-1$ the assertion of the lemma is just a reformulation of the result proven in [@CMS; @T-Z], see also [@T-Z1] for the first proof of Polyakov's conjecture [\[TZeqn\]](#TZeqn){reference-type="eqref" reference="TZeqn"}. Next we show that in the (hyperbolic) case $|\pmb\beta|<-2$ the assertion remains valid for any $S_\varphi>0$ and $K=2\pi(|\pmb\beta|+2)/S_\varphi<0$. Consider a (unique) metric potential $\varphi$ such that $|\pmb\beta|<-2$ and $K=-1$. 
Clearly, $S_\varphi=-2\pi(|\pmb\beta|+2)$, and for any $C>0$ we have the following transformation laws for the surface area, the Liouville action, and the stress-energy tensor: $$S_{\varphi+\log C}=C^2 S_\varphi, \quad \mathcal S_{\pmb \beta}[\varphi+\log C]=\mathcal S_{\pmb \beta}[\varphi] +4\pi(|\pmb\beta|+2)\log C,\quad T_{\varphi+\log C}=T_\varphi.$$ This implies that the rescaling $\varphi\mapsto \varphi+\log C$ multiplies the total area by $C^2$, but does not affect the equalities [\[TZeqn\]](#TZeqn){reference-type="eqref" reference="TZeqn"}. Thus, in the case $|\pmb\beta|<-2$, the assertion of the lemma is valid for any fixed $S_\varphi>0$. In the case $|\pmb\beta|=-2$ the integral term in [\[LA\]](#LA){reference-type="eqref" reference="LA"} disappears and the metric $e^{2\varphi}|dx|^2$ is flat. As is known [@TroyanovSSC], up to a rescaling $\varphi\mapsto \varphi+\log C$, the potential $\varphi$ can be written explicitly in the form $$\label{ECSC} \varphi(\pmb\beta)=\sum_{k: x_k\neq \infty}\beta_k\log|x-x_k|,\quad |\pmb\beta|=-2.$$ As a result, the equality [\[TZeqn\]](#TZeqn){reference-type="eqref" reference="TZeqn"} follows from a simple direct computation. ◻ It may also be possible to prove Polyakov's conjecture [\[TZeqn\]](#TZeqn){reference-type="eqref" reference="TZeqn"} for the spherical case $|\pmb\beta|>-2$ along the lines of [@CMS; @T-Z], however this goes beyond the scope of this paper. **Lemma 11** (After A. Zamolodchikov and Al. Zamolodchikov). *Assume that $\beta_k\in (-1,0)$ and $|\pmb\beta|\leqslant-2$. Let $\varphi$ stand for a (unique) solution to the Liouville equation [\[LiouvilleEq\]](#LiouvilleEq){reference-type="eqref" reference="LiouvilleEq"} satisfying the area condition [\[Area\]](#Area){reference-type="eqref" reference="Area"} with a fixed $S_\varphi>0$, and having the asymptotics [\[ascoeff_k\]](#ascoeff_k){reference-type="eqref" reference="ascoeff_k"}.
Then for any fixed configuration $x_1,\dots,x_n$ the Liouville action [\[LA\]](#LA){reference-type="eqref" reference="LA"} satisfies $$\label{ZZeqn} -\frac 1 {2\pi} {\partial_{\beta_k} \mathcal S_{\pmb \beta}[\varphi]}=1-2 \varphi_k,$$ where $\varphi_k$ is the coefficient in the corresponding asymptotics [\[ascoeff_k\]](#ascoeff_k){reference-type="eqref" reference="ascoeff_k"}.* *In the geometric setting of this paper, we have $\varphi=f^*\phi$ and $\varphi_k=(f^*\phi)_k$, see [\[1\]](#1){reference-type="eqref" reference="1"}.* *Proof.* In the case $|\pmb\beta|<-2$, the proof essentially repeats the one in [@KalvinLast Proof of Lemma 3.1], where the differentiation with respect to $\beta_k$ is now justified by the results [@KimWilkin; @SchuTrap1; @SchuTrap2] on the regularity of $\beta_k\mapsto \varphi(\pmb\beta)$ for the hyperbolic metric $e^{2\varphi}|dx|^2$ on the Riemann sphere. Indeed, one need only notice that the index $j=k$ in [@KalvinLast Proof of Lemma 3.1] now runs from $1$ to $n$, and the region $\Bbb C_R$ is defined as follows: $$\Bbb C_R:=\{x\in\Bbb C: |x|\leqslant R,\ |x-x_k|\geqslant 1/R,\ k=1,\dots n \}.$$ In the case $|\pmb\beta|=-2$ the integral term in [\[LA\]](#LA){reference-type="eqref" reference="LA"} disappears, and the equality [\[ZZeqn\]](#ZZeqn){reference-type="eqref" reference="ZZeqn"} can be verified by a direct computation, cf. [\[ECSC\]](#ECSC){reference-type="eqref" reference="ECSC"}. We omit the details. ◻ Our choice of examples in Section [5](#ExAppl){reference-type="ref" reference="ExAppl"} is partially motivated by the following result: **Theorem 12**. 
*The (hyperbolic or flat) surfaces of the five Platonic solids and the regular constant curvature dihedra are critical points of the spectral determinant on the conical metrics of fixed area and fixed Gaussian curvature.* *More precisely: Consider the divisors $\pmb\beta=\sum_{k} \beta_k\cdot x_k$ of degree $|\pmb\beta|\leqslant-2$ with distinct marked points $x_1,\dots,x_n$ and weights $\beta_k\in(-1,0)$. Then for any fixed $S>0$ and any divisor $\pmb\beta$ there exists a unique metric $e^{2\varphi}|dx|^2$ on $\overline{\Bbb C}$ of total area $S$, Gaussian curvature $K=2\pi(|\pmb\beta|+2)/S$, and representing the divisor $\pmb\beta$. Consider the spectral determinant $\det\Delta_{\pmb\beta}$ of the surface $(\overline{\Bbb C}_x,e^{2\varphi}|dx|^2)$ as a function on the configuration space $$\mathcal Z_n(S,K)=\left\{\pmb\beta =\sum_{k\leqslant n} \beta_k\cdot x_k: x_j\neq x_k\in \overline{\Bbb C}\text{ for } j\neq k, \beta_k\in(-1,0), 2\pi (|\pmb\beta|+2)=S K \right\}$$ with some fixed values $S>0$, $K\leqslant 0$, and $n\geqslant 3$.* 1. *If $\pmb\beta_0\in \mathcal Z_n(S,K)$ is a divisor such that the corresponding surface $(\overline{\Bbb C}_x,e^{2\varphi}|dx|^2)$ is isometric to that of a Platonic solid, then $\pmb\beta_0$ is a stationary point of the function $$\label{ext_det} \mathcal Z_n(S,K)\ni\pmb\beta\mapsto \det\Delta_{\pmb\beta},$$ where $n$ is the number of vertices of the Platonic solid.* 2.
*If $\pmb\beta_0\in \mathcal Z_n(S,K)$ is a divisor such that the corresponding surface $(\overline{\Bbb C}_x,e^{2\varphi}|dx|^2)$ is isometric to the regular dihedron with $n$ vertices, then $\pmb\beta_0$ is a stationary point of the function [\[ext_det\]](#ext_det){reference-type="eqref" reference="ext_det"} with the corresponding value of $n$.* *Proof.* Consider the potential $\varphi(x;\beta_1,\beta_2,\beta_3,\beta_4)$ of a (unique) unit area constant curvature metric $e^{2\varphi}|d x|^2$ representing the divisor $$\pmb\beta=\beta_1\cdot 0 +\beta_2\cdot (-1)+\beta_3\cdot e^{i\frac {\pi} 3}+\beta_4\cdot e^{-i\frac{\pi} 3}.$$ Recall that the Gauss-Bonnet theorem [@Troyanov] implies that the (regularized) Gaussian curvature of the surface $(\overline{\Bbb C}_x, e^{2\varphi}|dx|^2)$ equals $2\pi(|\pmb\beta|+2)$. The four marked points in the divisor $\pmb\beta$ are in an equi-anharmonic position. In particular, if the orders of the conical singularities satisfy $\beta_k=|\pmb\beta|/4$, then the surface $(\overline{\Bbb C}_x, e^{2\varphi}|dx|^2)$ is isometric to that of a unit area regular tetrahedron of Gaussian curvature $2\pi(|\pmb\beta|+2)\leqslant 0$.
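The equi-anharmonic position of the four marked points $0$, $-1$, $e^{i\pi/3}$, $e^{-i\pi/3}$ can be confirmed directly: their cross-ratio $\lambda$ satisfies $\lambda^2-\lambda+1=0$, the condition singling out the equi-anharmonic values $\frac{1\pm i\sqrt 3}{2}$. A minimal sympy check, not part of the original argument:

```python
# Cross-ratio of the four marked points 0, -1, e^{i*pi/3}, e^{-i*pi/3}:
# equi-anharmonic position <=> lam^2 - lam + 1 = 0, i.e. lam = (1 +- i*sqrt(3))/2.
import sympy as sp

s = sp.sqrt(3)/2
p1, p2 = sp.Integer(0), sp.Integer(-1)
p3 = sp.Rational(1, 2) + sp.I*s        # e^{i*pi/3}
p4 = sp.Rational(1, 2) - sp.I*s        # e^{-i*pi/3}

lam = sp.simplify((p1 - p3)*(p2 - p4) / ((p1 - p4)*(p2 - p3)))
print(sp.simplify(lam**2 - lam + 1))   # -> 0
```

Any of the six cross-ratios obtained by permuting the points works here, since for an equi-anharmonic configuration all of them are roots of the same quadratic.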
Notice that $\varphi(\bar x;\beta_1,\beta_2,\beta_3,\beta_4)$ is the potential of a (unique) unit area Gaussian curvature $2\pi(|\pmb\beta|+2)$ metric representing the divisor $$\pmb\beta=\beta_1\cdot 0 +\beta_2\cdot (-1)+\beta_4\cdot e^{i\frac {\pi} 3}+\beta_3\cdot e^{-i\frac{\pi} 3}.$$ Similarly, the potential $\varphi(e^{i\frac{2\pi}3} x;\beta_1,\beta_2,\beta_3,\beta_4)$ corresponds to the divisor $$\pmb\beta=\beta_1\cdot 0 +\beta_4\cdot (-1)+\beta_2\cdot e^{i\frac{\pi} 3}+\beta_3\cdot e^{-i\frac{\pi} 3};$$ $\varphi(e^{-i\frac{2\pi}3} x;\beta_1,\beta_2,\beta_3,\beta_4)$ corresponds to the divisor $$\pmb\beta=\beta_1\cdot 0+\beta_3\cdot (-1) +\beta_4\cdot e^{i\frac \pi 3 }+\beta_2\cdot e^{-i\frac{\pi} 3};$$ and $\varphi(\frac {x+1}{2x-1};\beta_1,\beta_2,\beta_3,\beta_4)+\log3-2\log|2x-1|$ corresponds to the divisor $$\pmb\beta=\beta_2\cdot 0+\beta_1\cdot (-1) +\beta_4\cdot e^{i\frac{\pi} 3} +\beta_3\cdot e^{-i\frac {\pi} 3}.$$ As a consequence of these symmetries, we have $$\varphi(x;\beta_1,\beta_2,\beta_3,\beta_4)=\varphi(\bar x;\beta_1,\beta_2,\beta_4,\beta_3) =\varphi(e^{i\frac{2\pi}3} x;\beta_1,\beta_3,\beta_4,\beta_2)=\varphi(e^{-i\frac{2\pi}3} x;\beta_1,\beta_4,\beta_2,\beta_3)$$ $$=\varphi\left(\frac {x+1}{2x-1};\beta_2,\beta_1,\beta_4,\beta_3\right)+\log3-2\log|2x-1|.$$ For the coefficients $\varphi_k=\varphi_k(\beta_1,\beta_2,\beta_3,\beta_4)$ in the asymptotics [\[ascoeff_k\]](#ascoeff_k){reference-type="eqref" reference="ascoeff_k"} the latter equalities imply $$\label{STAR} \begin{aligned} (\varphi_1-\varphi_\ell)|_{\beta_k=|\pmb\beta|/4}=(|\pmb\beta|/4+1)\log 3,\quad \ell=2,3,4, \\ \sum_{j=1}^4 ( \partial_{\beta_1}\varphi_j -\partial_{\beta_\ell}\varphi_j )|_{\beta_k=|\pmb\beta|/4}=\log3,\quad\ell=2,3,4. \end{aligned}$$ Denote the Friedrichs Laplacian on $(\overline{\Bbb C}_x,e^{2\varphi}|dx|^2)$ by $\Delta_{\pmb\beta}$. As is proven in [@KalvinLast Sec. 
2], the spectral determinant $\det \Delta_{\pmb\beta}$ satisfies the anomaly formula $$\label{AnFla} \log \det\Delta_{\pmb\beta}=-\frac{|\pmb\beta|+1} 6 -\frac 1 {12\pi}\left(\mathcal S_{\pmb\beta}[\varphi]-\pi\log \mathcal H_{\pmb\beta}[\varphi]\right) -\sum_{k=1}^4\mathcal C(\beta_k)+\bf C,$$ where $\mathcal S_{\pmb\beta}[\varphi]$ is the Liouville action [\[LA\]](#LA){reference-type="eqref" reference="LA"} and $$\label{F_H} \mathcal H_{\pmb\beta} [\varphi]= \exp\left\{2\sum_{k=1}^4 \left(\beta_k+1-\frac 1 {\beta_k+1} \right)\varphi_k\right\}.$$ Since $|\pmb\beta|$ is fixed, we can set, for example, $\beta_1=|\pmb\beta|-\sum_{k>1}\beta_k$, and consider the determinant $\det\Delta_{\pmb\beta}$ as a function of $(\beta_2,\beta_3,\beta_4)$. Then, thanks to the anomaly formula [\[AnFla\]](#AnFla){reference-type="eqref" reference="AnFla"}, Lemma [Lemma 11](#Z^2){reference-type="ref" reference="Z^2"}, and the equality [\[F_H\]](#F_H){reference-type="eqref" reference="F_H"}, we have $$\begin{aligned} \partial_{\beta_\ell} \left( \log \det\Delta_{\pmb\beta}|_{\beta_1=|\pmb\beta|-\sum_{k>1}\beta_k}\right)|_{\beta_k=\frac{|\pmb\beta|}4}& =\frac 1 3 (\varphi_1-\varphi_\ell)|_{\beta_k=\frac{|\pmb\beta|}4} \\ & - \frac 1 6 \left( 1+\left(\frac{|\pmb\beta|}4+1\right)^{-2} \right)(\varphi_1-\varphi_\ell)|_{\beta_k=\frac{|\pmb\beta|}4} \\ & - \frac 1 6 \left(\frac {|\pmb\beta|}4+1-\frac 1 {\frac{|\pmb\beta|}4+1}\right) \sum_{j=1}^4 ( \partial_{\beta_1}\varphi_j -\partial_{\beta_\ell}\varphi_j ) |_{\beta_k=\frac {|\pmb\beta|}4}. \end{aligned}$$ Here the right-hand side is equal to zero because of [\[STAR\]](#STAR){reference-type="eqref" reference="STAR"}. Now we are in a position to study the determinant under a small perturbation of the coordinate of a vertex. 
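Before doing so, note that the vanishing of the right-hand side above is elementary algebra: with the abbreviation $t=|\pmb\beta|/4+1$ (our notation, not from the source), the relations [\[STAR\]](#STAR){reference-type="eqref" reference="STAR"} read $(\varphi_1-\varphi_\ell)=t\log 3$ and $\sum_j(\partial_{\beta_1}\varphi_j-\partial_{\beta_\ell}\varphi_j)=\log 3$, and the terms cancel. A short sympy confirmation:

```python
# With t = |beta|/4 + 1, the relations (STAR) give
# (phi_1 - phi_ell) = t*log 3 and sum_j (d_{beta_1} phi_j - d_{beta_ell} phi_j) = log 3.
# The beta-derivative of log det computed in the proof then cancels term by term.
import sympy as sp

t = sp.symbols('t', positive=True)
L3 = sp.log(3)
diff_phi = t*L3      # (phi_1 - phi_ell) at beta_k = |beta|/4
sum_dphi = L3        # sum of the derivative differences

expr = (sp.Rational(1, 3)*diff_phi
        - sp.Rational(1, 6)*(1 + t**(-2))*diff_phi
        - sp.Rational(1, 6)*(t - 1/t)*sum_dphi)
print(sp.simplify(expr))  # -> 0
```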
Let us consider the potential $\varphi(x)$ of a (unique) unit area constant curvature metric $e^{2\varphi}|d x|^2$ representing the divisor $$\pmb\beta=\frac{|\pmb\beta|} 4 \cdot h +\frac{|\pmb\beta|} 4\cdot (-1)+\frac{|\pmb\beta|} 4\cdot e^{i\frac {\pi} 3}+\frac{|\pmb\beta|} 4\cdot e^{-i\frac{\pi} 3}.$$ Here $h$ is a small complex number. In the case $h= 0$ the surface $(\overline{\Bbb C}_x, e^{2\varphi}|dx|^2)$ is isometric to that of a unit area constant curvature regular tetrahedron. Consider, for example, the rotation $x\mapsto e^{i\frac{2\pi}3} x$. Notice that $\chi(x) :=\varphi( e^{i\frac{2\pi}3} x)$ is the potential of a (unique) unit area Gaussian curvature $2\pi(|\pmb\beta|+2)$ metric representing the divisor $${\pmb\gamma}=\frac{|\pmb\beta|} 4 \cdot (e^{-i\frac{2\pi}3}h) +\frac{|\pmb\beta|} 4\cdot (-1)+\frac{|\pmb\beta|} 4\cdot e^{i\frac {\pi} 3}+\frac{|\pmb\beta|} 4\cdot e^{-i\frac{\pi} 3}.$$ The surfaces $({\overline {\Bbb C}}_x, e^{2\varphi}|dx|^2)$ and $({\overline {\Bbb C}}_x, e^{2 \chi}|dx|^2)$ are isometric; the isometry is given by the rotation. As a consequence, $\det \Delta_{\pmb\beta}=\det \Delta_{\pmb\gamma}$.
Equating the directional derivative of $\det \Delta_{\pmb\beta}$ along $h$ with the one along $e^{i\frac{2\pi}3}h$, we immediately conclude that $$\partial_{\Re h}\det\Delta_{\pmb\beta}=\partial_{\Im h} \det\Delta_{\pmb\beta}=0.$$ It remains to note that the determinants of the Laplacians $\Delta_{\pmb\beta}$ and $\Delta_{\pmb\beta}^{S_\varphi}=\frac 1 {S_\varphi} \Delta_{\pmb\beta}$ satisfy the standard rescaling property $$\log\det\Delta_{\pmb\beta}^{S_\varphi}=\log \det \Delta_{\pmb\beta} +\zeta_{\pmb\beta} (0) \log S_{\varphi},$$ where the value $\zeta_{\pmb\beta} (0)$ of the spectral zeta function at zero [@KalvinJFA] does not depend on the moduli $x_1,\dots,x_4$ and satisfies $$\zeta_{\pmb\beta} (0) =\frac{|\pmb\beta|+2} 6-\frac 1 {12}\sum_{k}\left(\beta_k+1-\frac{1}{\beta_k+1}\right)-1, \quad \partial_{\beta_\ell} \left(\zeta_{\pmb\beta}(0)|_{\beta_1=|\pmb\beta|-\sum_{k>1}\beta_k}\right)|_{\beta_k=\frac{|\pmb\beta|}4}=0.$$ Due to the invariance of the spectral determinant under Möbius transformations, this completes the proof of the first assertion. For the octahedron, cube, dodecahedron, icosahedron, and dihedra there are more symmetries to consider, but the idea and the steps of the proof remain exactly the same. We omit the details. The case of constant curvature (flat, spherical, or hyperbolic) metrics with three conical singularities is studied in [@KalvinLast]. ◻ As is well-known, starting from four punctures on the $2$-sphere, explicit construction of the general uniformization map is a long-standing open problem. In this paper, we rely on the uniformization via Belyi functions. There is another straightforward special case that deserves to be mentioned. **Remark 13**.
*In the case of a divisor $$\pmb\lambda=\left(-1/ 2 \right)\cdot 0+\left(-1/ 2 \right)\cdot 1 +\left(-1/ 2 \right)\cdot\lambda +\left(-1/ 2 \right)\cdot\infty, \quad \lambda\in\Bbb C,$$ the corresponding constant curvature unit area metric $m_{\pmb\lambda}$ with three or four conical singularities of angle $\pi$ can be written explicitly, e.g. [@BE; @Kra; @TroyanovSSC].* *Recall that by using a suitable Möbius transformation we can always normalize the marked points so that any three of them are at $0,1,\infty$. As we permute the marked points $0,1,\lambda,\infty$ by Möbius transformations so that three of them are still $0,1,\infty$, the fourth point is one of the following six: $$\label{37.1} \lambda,\quad \frac 1 \lambda,\quad 1-\lambda,\quad \frac 1 {1-\lambda},\quad \frac \lambda{\lambda-1},\quad \frac{\lambda-1}{\lambda}.$$ In general, these six points are distinct. The exceptions are the following three cases:* - *$\lambda=0$ or $\lambda=1$ or $\lambda=\infty$. In this case, the set [\[37.1\]](#37.1){reference-type="eqref" reference="37.1"} contains only three distinct numbers: $0,1,\infty$. This case is studied in [@KalvinLast], we do not discuss it here.* - *Harmonic position of four points: $\lambda=-1$ or $\lambda=1/2$ or $\lambda=2$. The set [\[37.1\]](#37.1){reference-type="eqref" reference="37.1"} contains only three distinct numbers: $-1,1/2,2$. The surface $(\overline{\Bbb C}_x, m_{\pmb\lambda})$ is isometric to a unit area flat regular dihedron with four conical singularities of angle $\pi$.* - *Equi-anharmonic position of four points: $\lambda=\frac{1+i\sqrt{3}}{2}$ or $\lambda=\frac{1-i\sqrt{3}}{2}$. The set [\[37.1\]](#37.1){reference-type="eqref" reference="37.1"} contains only the numbers $\frac{1\pm i\sqrt{3}}{2}$. 
The surface $(\overline{\Bbb C}_x, m_{\pmb\lambda})$ is isometric to the surface of a unit area regular Euclidean tetrahedron.* *In general, for $\lambda\neq 0,1,\infty$, the metric is flat, and we have $$m_{\pmb\lambda}=c^2_{\lambda}|x|^{-1}|x-1|^{-1}|x-\lambda|^{-1} |dx|^2,$$ where $c^2_{\lambda}$ is a scaling factor that guarantees that the surface $(\overline{\Bbb C}_x,m_{\pmb\lambda})$ is of unit area, see [@TroyanovSSC]. For the spectral determinant of the Friedrichs Laplacian $\Delta_{\lambda}$ on the flat surface $(\overline{\Bbb C}_x,m_{\pmb\lambda})$ the anomaly formula [\[AnFla\]](#AnFla){reference-type="eqref" reference="AnFla"} gives $$\log\det\Delta_\lambda=- \log c_{\lambda} +\frac 1 6 (\log |\lambda|+\log|\lambda-1|) -4\mathcal C(-1/2)+{\bf C},$$ where ${\bf C}$ is the same as in [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}. Besides, thanks to [@KalvinJFA Appendix], we have $$\label{C(-1/2)} \mathcal C\left(-\frac 1 2\right)=-\zeta_R'(-1)-\frac 1 6 \log 2 +\frac 1 {24}.$$ As is well-known, e.g. [@Clem Sec. 2.9], the scaling factor $c^2_\lambda$ satisfies $$c^{-2}_{\lambda}=\int_{\Bbb C} \frac{dx\wedge d\bar x}{-2i |x|\cdot|x-1|\cdot|x-\lambda|} =8|k|\left(K' \overline{K} +\overline{K'}K\right), \quad \lambda=\frac{(k+1)^2}{4k},$$ where $K=K(k)$ is the complete elliptic integral of the first kind, and $K'=K(\sqrt{1-k^2})$.* *In total, in terms of $\tau=i K'/K$, we get $$K=\frac \pi 2 \vartheta_3^2(0|\tau),\quad K'=-i\tau K,\quad k=\frac{\vartheta_2^2(0|\tau)}{\vartheta_3^2(0|\tau)},$$ $$\det\Delta_\lambda= \frac {2^{2/3}} \pi |1-k^2|^{1/3} |k|^{1/6}\sqrt{\Im\tau} |K|=\sqrt{\Im\tau}|\eta(\tau/2)|^2.$$ Here $\eta(\tau/2)$ is the Dedekind eta function. 
The last equality is a consequence of the identities $$2\eta^3(\tau)=\vartheta_2(0|\tau) \vartheta_3(0|\tau) \vartheta_4(0|\tau),\quad \eta^2(\tau/2)=\vartheta_4(0|\tau)\eta(\tau),\quad \vartheta^4_3(0|\tau)= \vartheta^4_2(0|\tau)+ \vartheta^4_4(0|\tau).$$ By analyzing the expression $\sqrt{\Im\tau}|\eta(\tau/2)|^2$, it is not hard to see that there are only two stationary points: $\tau=2i$ is a saddle point, and $\tau=2e^{2\pi i/3}$ is the unique absolute maximum of the determinant $\Bbb C\setminus\{0,1\}\ni\lambda\mapsto \det\Delta_{\lambda}$, cf. [@OPS Sec. 4]. The case $\tau=2i$ (resp. $\tau=2e^{2\pi i/3}$) corresponds to a harmonic (resp. to an equi-anharmonic) position of four points in the divisor $\pmb\lambda$, cf. Theorem [Theorem 12](#ExtDet){reference-type="ref" reference="ExtDet"}.* *The reader may also find it interesting that in [@KK06 Sec. 3.5.2] the authors demonstrate that the determinant $\det\Delta_\lambda$ is maximal for some (neither equi-anharmonic nor harmonic) positions of four points on the unit circle.* # Examples and applications {#ExAppl} ## Determinant for triangulations by plane trees By Riemann's existence theorem, the planar bicolored trees are in one-to-one correspondence with the (equivalence classes of) Shabat polynomials [@BZ; @LZ], see also [@Bishop]. Recall that a Shabat polynomial, also known as a generalized Chebyshev polynomial, is a polynomial with at most two critical values. Thanks to Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, to each bicolored plane tree we can associate a family of spectral invariants $\det \Delta_{f^*\pmb\beta}$. Indeed, a Belyi function $f: \overline{\Bbb C}_x\to \overline{\Bbb C}_z$ (in this case it is a Shabat polynomial) only prescribes a certain gluing scheme of the bicolored double triangles.
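Returning briefly to Remark 13: the theta-eta identities quoted there, and the resulting closed form for $\det\Delta_\lambda$, are easy to confirm numerically. A sketch with mpmath (the sample point $\tau$ and the truncation depth of the $\eta$ product are arbitrary choices of ours; mpmath's `jtheta` takes the nome $q=e^{i\pi\tau}$):

```python
# Numerical check of 2*eta(tau)^3 = t2*t3*t4, eta(tau/2)^2 = t4*eta(tau),
# t3^4 = t2^4 + t4^4, and of the equality
# (2^(2/3)/pi)|1-k^2|^(1/3)|k|^(1/6)*sqrt(Im tau)*|K| = sqrt(Im tau)*|eta(tau/2)|^2.
from mpmath import mp, jtheta, exp, sqrt, fabs, im, mpc, pi

mp.dps = 30

def eta(tau):
    # Dedekind eta via its q-product, q = exp(2*pi*i*tau)
    q = exp(2j*pi*tau)
    prod = mpc(1)
    for n in range(1, 200):          # |q| < 1, so the truncated tail is negligible
        prod *= 1 - q**n
    return exp(1j*pi*tau/12)*prod

tau = mpc('0.3', '1.1')              # an arbitrary point in the upper half-plane
q = exp(1j*pi*tau)                   # the nome used by mpmath's jtheta
t2, t3, t4 = (jtheta(n, 0, q) for n in (2, 3, 4))

r1 = fabs(2*eta(tau)**3 - t2*t3*t4)
r2 = fabs(eta(tau/2)**2 - t4*eta(tau))
r3 = fabs(t3**4 - t2**4 - t4**4)

K, k = pi/2*t3**2, t2**2/t3**2       # K = K(k) and the modulus k in theta terms
third = mp.mpf(1)/3
lhs = 2**(2*third)/pi*fabs(1 - k**2)**third*fabs(k)**(third/2)*sqrt(im(tau))*fabs(K)
r4 = fabs(lhs - sqrt(im(tau))*fabs(eta(tau/2))**2)
print(r1, r2, r3, r4)                # all four residuals vanish to working precision
```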
We can still make any suitable choice of the angles of those triangles, or, equivalently, of the orders $\beta_0$, $\beta_1$, and $\beta_\infty$ of three conical singularities of the constant curvature metric $e^{2\phi}|dz|^2$ on the target Riemann sphere $\overline{\Bbb C}_z$. As an example, consider the Shabat polynomial $$f(x)=x^\ell,\quad \ell\in\Bbb N.$$ The ramification divisor is $$\pmb f=(\ell-1)\cdot 0+(\ell-1)\cdot \infty,\quad |\pmb f|=2\ell-2,$$ where $x=0$ is the only point with $f'(x)=0$, and $x=\infty$ is the only pole of $f$. The corresponding bicolored tree is the inverse image of the line segment $[0,1]$ under $f$, see Fig. [\[Snowflake\]](#Snowflake){reference-type="ref" reference="Snowflake"}. The black colored point is the preimage of the point $z=0$, and the white colored points are the $\ell$ preimages $x=\sqrt[\ell]{1}$ of $z=1$. This describes the cyclic triangulation of the Riemann sphere, or, equivalently, the tessellation of the standard round sphere with $(\ell,\infty,\ell)$ bicolored double triangles, see Fig. [2](#Cyclic triangulation){reference-type="ref" reference="Cyclic triangulation"}. Clearly, the first non-zero coefficient in the Taylor expansion of $f-f(0)$ at zero is $c_1=1$, and the first non-zero coefficient in the Laurent expansion of $f$ at infinity is $c_2=1$. Hence, the equality [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"} immediately implies $$\label{calcCf} C_f=\frac 1 6 \left(\frac {\ell-1}{\ell}\log|c_1|+(\ell+1)\log\ell\right) +\frac 1 6 \left(-\frac {\ell-1}\ell\log |c_2| -(\ell+1)\log\ell\right)=0,$$ where $C_f$ is the constant from Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}. The pullback of the divisor $\pmb \beta= \beta_0\cdot 0+\beta_1\cdot 1+\beta_\infty \cdot \infty$ by $f(x)=x^\ell$ is the divisor $f^*{\pmb \beta}=\sum_k \left(f^*\pmb\beta\right)_k\cdot x_k$.
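The cancellation behind [\[calcCf\]](#calcCf){reference-type="eqref" reference="calcCf"} is elementary but worth a one-line check; a `sympy` sketch (ours) with $|c_1|=|c_2|=1$, as computed above:

```python
import sympy as sp

l = sp.symbols('ell', positive=True)
c1 = c2 = 1   # first non-zero Taylor/Laurent coefficients for f(x) = x**l
Cf = (sp.Rational(1, 6) * ((l - 1)/l * sp.log(c1) + (l + 1)*sp.log(l))
      + sp.Rational(1, 6) * (-(l - 1)/l * sp.log(c2) - (l + 1)*sp.log(l)))
assert sp.simplify(Cf) == 0   # the two log(ell) terms cancel, log(1) = 0
```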
For the latter one we have $$f^*{\pmb \beta}=(\ell(\beta_0+1)-1)\cdot0 +\beta_1\cdot\{\sqrt[\ell]{1}\}+(\ell(\beta_\infty+1)-1)\cdot\infty,$$ where $\{\sqrt[\ell]{1}\}$ stands for the set of $\ell$ radicals $\sqrt[\ell]{1}$ in $\Bbb C_x$ (those are the white colored points of the "snowflake" in Fig. [\[Snowflake\]](#Snowflake){reference-type="ref" reference="Snowflake"}). The notation $\beta_1\cdot\{\sqrt[\ell]{1}\}$ means that each element of the set $\{\sqrt[\ell]{1}\}$ is a marked point of weight $\beta_1$. ![Cyclic triangulation](Cyclic.png){#Cyclic triangulation} **Theorem 14** (Cyclic triangulation). *Let $m_{\pmb\beta}$ be the unit area Gaussian curvature $2\pi(|\pmb\beta|+2)$ metric of $S^2$-like double triangle, see Section [2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"}. Let $f(x)=x^\ell$ with $\ell\in\Bbb N$, cf. Fig [2](#Cyclic triangulation){reference-type="ref" reference="Cyclic triangulation"}. Then for the zeta-regularized spectral determinant of the Friedrichs Laplacian $\Delta_{f^*\pmb\beta}$ corresponding to the area $\ell$ pullback metric $f^*m_{\pmb\beta}=e^{2f^*\phi}|dx|^2$ we have $$\begin{aligned} \log\frac { \det \Delta_{f^*\pmb\beta}} \ell = & \ell \Bigl(\log \det \Delta_{\pmb\beta} + \mathcal C(\beta_0) + \mathcal C(\beta_\infty)-{\bf C}\Bigr) \\ &+\frac 1 6 \left(\ell-\frac 1 {\ell}\right)\left(\frac {\Psi(\beta_0,\beta_1,\beta_\infty)}{\beta_0+1} +\frac {\Psi(\beta_\infty,\beta_1,\beta_0)}{\beta_\infty+1} \right) \\ & -\frac 1 6 \left( \ell(\beta_0+\beta_\infty+2)+\frac 1 { \ell(\beta_0+1)} +\frac 1 {\ell(\beta_\infty+1)} \right)\log \ell \\ & - \mathcal C\left( \ell(\beta_0+1)-1 \right)-\mathcal C\left( \ell(\beta_\infty+1)-1\right)+ {\bf C}, \end{aligned}$$ where the right hand side is an explicit function of $\ell\in\Bbb N$ and $(\beta_0,\beta_1,\beta_\infty)\in(-1,0]^3$.* *Here $\beta\mapsto \mathcal C(\beta)$ is the function [\[cbeta\]](#cbeta){reference-type="eqref" reference="cbeta"}, $\bf C$ is
the constant introduced in [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}, and the function $$\label{eqn_Psi} \begin{aligned} \Psi(\beta_0,\beta_1,\beta_\infty)& = \log \frac{\Gamma(-\beta_0)}{\Gamma(1+\beta_0)} \\ &+\frac 1 2 \log \frac{ \Gamma\left(2+\frac {|\pmb \beta|} 2\right) \Gamma\left(\beta_0-\frac {|\pmb \beta|} 2\right) \Gamma\left(1+\frac{|\pmb \beta|} 2-\beta_1\right) \Gamma\left(1+\frac {|\pmb \beta|} 2-\beta_\infty\right) }{ \pi \Gamma\left(-\frac {|\pmb \beta|} 2\right) \Gamma\left(1+\frac {|\pmb \beta|} 2-\beta_0\right) \Gamma\left(\beta_1-\frac {|\pmb \beta|} 2\right) \Gamma\left(\beta_\infty-\frac {|\pmb \beta|} 2\right) } \end{aligned}$$ is the one from [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}. Recall that $\det \Delta_{\pmb\beta}$ stands for an explicit function [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"}, whose values are the determinants of the unit area $S^2$-like double triangles of Gaussian curvature $2\pi(|\pmb\beta|+2)$.* *Proof.* Recall that $\Psi(\beta_1,\beta_2,\beta_3)=\Phi(\beta_1,\beta_2,\beta_3)+ \log 2$ with the explicit function $\Phi$ from [@KalvinLast Prop. A.2]. This implies [\[eqn_Psi\]](#eqn_Psi){reference-type="eqref" reference="eqn_Psi"}, where $\Gamma$ stands for the Gamma function. Now the assertion is an immediate consequence of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} together with the asymptotics [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}, and the calculation of $C_f$ in [\[calcCf\]](#calcCf){reference-type="eqref" reference="calcCf"}. ◻ **Example 15** (Dihedra).
*For the determinant of the Gaussian curvature $2\pi(\beta+2/\ell)$ area $\ell$ regular dihedron with $\ell$ conical singularities of order $\beta$ we obtain $$\label{DDIH} \begin{aligned} \log { \det \Delta^\ell_{Dihedron}} = & \ell\log \det \Delta_{\pmb\beta} + 2\ell\mathcal C\left(\frac 1 \ell -1\right) \\ &+\frac {\ell^2- 1 } 3 \Psi\left(\frac 1 \ell -1,\beta,\frac 1 \ell -1\right) +\frac 1 3 \log \ell+ (1-\ell){\bf C}. \end{aligned}$$ This is a direct consequence of Theorem [Theorem 14](#CTr){reference-type="ref" reference="CTr"}, where we take $$\pmb\beta=\left(\frac 1 \ell -1 \right)\cdot 0+\beta\cdot 1+ \left(\frac 1 \ell -1 \right)\cdot \infty.$$* *In particular, when $\beta=-2/\ell$, we obtain the determinant of the flat regular dihedron of area $\ell$. In the case $\beta=0$, we obtain a surface isometric to the round sphere in $\Bbb R^3$ of area $\ell$ and its determinant. Finally, as $\beta\to -1^+$ the determinant increases without any bound in accordance with the asymptotics $$\begin{aligned} \log { \det \Delta^\ell_{Dihedron}} =\frac \ell {12} \left( -2\log(\beta+1) +\log \left(1-\frac 2 \ell\right)+\log 2\pi -2 +24\zeta_R'(-1) \right)\frac 1 {\beta+1} \\ -\frac \ell 2 \log(\beta+1) +O(1) \end{aligned}$$ of the right-hand side in [\[DDIH\]](#DDIH){reference-type="eqref" reference="DDIH"}. In the limit $\beta=-1$ we obtain a surface of Gaussian curvature $2\pi(2/\ell-1)$ with $\ell$ cusps. The spectrum of the corresponding Laplacian is no longer discrete [@Judge; @Judge2].* **Example 16** (Tetrahedron). *Here we find the spectral determinant of Laplacian for a constant curvature regular tetrahedron of total area $4\pi$ with (four) conical singularities of order $\beta$. Or, equivalently, the spectral determinant of the Platonic surface of Gaussian curvature $2\beta+1$ with four faces. 
Up to a rescaling, this is a particular case of Theorem [Theorem 14](#CTr){reference-type="ref" reference="CTr"}, that corresponds to the choice $\ell=3$ and $$\pmb\beta=\frac {\beta -2}3\cdot 0+\beta \cdot 1 + \left (-\frac 2 3\right) \cdot\infty.$$ Here and in the remaining part of this section we use the standard rescaling property [@KalvinJFA Sec. 1.2] of the determinant $$\log \det \Delta^{4\pi}_{f^*\pmb\beta}=\log \det \Delta_{f^*\pmb\beta}-\zeta_{f^*\pmb\beta}(0) \log \frac {4\pi}{\deg f}$$ in order to normalize the total area to $4\pi$, where $$\zeta_{f^*\pmb\beta}(0)=\frac {|f^*\pmb\beta|+2}{6}-\frac 1{12}\sum_k \left((f^*\pmb\beta)_k+1-\frac 1 {(f^*\pmb\beta)_k+1}\right)-1.$$ As a result, in the case $\beta=0$, when all conical singularities disappear, we obtain a surface isometric to the standard unit sphere $x_1^2+x_2^2+x_3^2=1$ in $\Bbb R^3$ and its determinant $$\log { \det \Delta} =1/2-4\zeta'_R(-1).$$* *In general, for the constant curvature regular tetrahedron of total area $4\pi$ with conical singularities of order $\beta$, Theorem [Theorem 14](#CTr){reference-type="ref" reference="CTr"} gives $$\label{DetTetr1} \begin{aligned} \log { \det \Delta^{4\pi}_{Tetrahedron}} & = 3 \left(\log \det \Delta_{\pmb\beta} + \mathcal C\left(\frac{\beta-2}3\right) +\mathcal C\left(-\frac 2 3\right) \right) \\& +\frac 4 3 \left(\frac 1 {\beta+1} \Psi\left(\frac{\beta-2}3,\beta,-\frac 2 3\right) +{\Psi\left(-\frac 2 3,\beta,\frac{\beta-2}3\right)} \right) \\& -\frac 1 3 \left( \beta-3+\frac 1 { \beta+1} \right)\left(\log (4\pi) -\frac 1 2 \log 3\right) - \mathcal C\left( \beta \right) -2 {\bf C}, \end{aligned}$$ where the right hand side is an explicit function of $\beta$. A graph of this function is depicted in Fig. 
[3](#Platonic){reference-type="ref" reference="Platonic"} as a solid line.* ![Graphs of the logarithm of the spectral determinant of Laplacian on the surfaces of Platonic solids of area $4\pi$ as a function of the order $\beta\in(-1,0]$ of the conical singularities: **a.** Regular Tetrahedron of Gaussian curvature $2\beta+1$ (solid line), **b.** Regular Octahedron of Gaussian curvature $3\beta+1$ (dotted line), **c.** Regular Cube of Gaussian curvature $4\beta+1$ (dashed line), **d.** Regular Icosahedron of Gaussian curvature $6\beta+1$ (dash-dotted line), **e.** Regular Dodecahedron of Gaussian curvature $10\beta+1$ (long-dashed line). The point $(0, 1/2-4\zeta'_R(-1))\approx (0,1.16)$ on the graphs corresponds to the logarithm of the spectral determinant of the unit round sphere in $\Bbb R^3$. As $\beta\to -1^+$, the determinants increase without any bound in accordance with the asymptotics [\[IdealTetrahedron\]](#IdealTetrahedron){reference-type="eqref" reference="IdealTetrahedron"}, [\[IdealOctahedron\]](#IdealOctahedron){reference-type="eqref" reference="IdealOctahedron"}, [\[IdealCube\]](#IdealCube){reference-type="eqref" reference="IdealCube"}, [\[IdealIcosahedron\]](#IdealIcosahedron){reference-type="eqref" reference="IdealIcosahedron"}, and [\[IdealDodecahedron\]](#IdealDodecahedron){reference-type="eqref" reference="IdealDodecahedron"}. In the limit $\beta=-1$, the conical singularities turn into cusps, and one obtains the ideal Platonic surfaces; the spectrum of the corresponding Laplacians is no longer discrete. ](Platonic.pdf){#Platonic} *In particular, in the case $\beta=-1/2$ we obtain a surface $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ isometric to the surface of a Euclidean regular tetrahedron. Note that $\mathcal C(-1/2)$ can be evaluated as in [\[C(-1/2)\]](#C(-1/2)){reference-type="eqref" reference="C(-1/2)"}. 
As a result, the formula [\[DetTetr1\]](#DetTetr1){reference-type="eqref" reference="DetTetr1"} for the determinant reduces to $$\log { \det \Delta^{4\pi}_{Tetrahedron}}|_{\beta=-1/2}= \log \frac 4 3 - 3\log \Gamma\left(\frac 2 3\right) +\frac 3 2 \log \pi.$$ Alternatively, the latter expression for the determinant can be obtained by applying the partially heuristic Aurell-Salomonson formula [@AS2] to the explicitly evaluated pullback of the flat metric $$m_{\pmb\beta}=c^2 |z|^{-1}|z-1|^{-1} {|dz|^2}$$ by $f(x)=x^3$, where $c$ is a scaling coefficient that normalizes the total area of $m_{\pmb\beta}$ to $4\pi/3$; for a rigorous proof of the Aurell-Salomonson formula we refer to [@KalvinJFA Sec. 3.2].* *Finally, as $\beta\to -1^+$, the determinant grows without any bound in accordance with the asymptotics $$\label{IdealTetrahedron} \log { \det \Delta^{4\pi}_{Tetrahedron}} =\left ( -\frac 2 3 \log (\beta+1) -\frac 2 3 +8 \zeta_R'(-1) \right)\frac 1 {\beta+1} -2 \log (\beta+1)+ O(1)$$ of the right hand side in [\[DetTetr1\]](#DetTetr1){reference-type="eqref" reference="DetTetr1"}, cf. Fig [3](#Platonic){reference-type="ref" reference="Platonic"}. In the limit $\beta=-1$ we get a surface isometric to an ideal tetrahedron: a surface of Gaussian curvature $-2$ with four cusps; the spectrum of the Laplacian on an ideal tetrahedron is not discrete [@Judge; @Judge2].* **Example 17** (Octahedron). *Here we find the determinant of Laplacian for a constant curvature regular octahedron of area $4\pi$ with (six) conical singularities of order $\beta$. Or, equivalently, the spectral determinant of the Platonic surface of Gaussian curvature $3\beta+1$ with eight faces. 
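As a consistency check on the constant curvature values quoted for the Platonic families (tetrahedron $2\beta+1$, octahedron $3\beta+1$, cube $4\beta+1$, icosahedron $6\beta+1$, dodecahedron $10\beta+1$, as in the caption of Fig. 3), each metric of area $4\pi$ satisfies the Gauss--Bonnet relation with conical singularities, $K\cdot\mathrm{Area}-2\pi\sum_k\beta_k=2\pi\chi(S^2)=4\pi$; a `sympy` sketch of this verification (ours, not part of the original derivation):

```python
import sympy as sp

beta = sp.symbols('beta')
area = 4 * sp.pi
# (constant Gaussian curvature, number of conical points of order beta)
families = [
    (2*beta + 1, 4),    # tetrahedron
    (3*beta + 1, 6),    # octahedron
    (4*beta + 1, 8),    # cube
    (6*beta + 1, 12),   # icosahedron
    (10*beta + 1, 20),  # dodecahedron
]
for K, n in families:
    # Gauss-Bonnet with conical singularities on the sphere
    assert sp.simplify(K * area - 2 * sp.pi * n * beta - 4 * sp.pi) == 0
```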
In Theorem [Theorem 14](#CTr){reference-type="ref" reference="CTr"} we substitute $\ell=4$ and $$\label{ocDiv} \pmb\beta=\frac {\beta-3} 4\cdot 0+\beta \cdot 1 +\frac {\beta-3} 4 \cdot\infty.$$ As a result, after an appropriate rescaling, we obtain $$\label{DetOct1} \begin{aligned} \log \det \Delta^{4\pi}_{Octahedron} = 4\log \det \Delta_{\pmb\beta} - \left( \beta+1+\frac 1 {\beta+1} \right)\left(\frac 2 3 \log 2 +\frac 1 2 \log \pi\right) \\ +\frac 5 3 \log \pi+\frac 5 {\beta +1} \Psi\left(\frac {\beta-3} 4,\beta,\frac {\beta-3} 4\right) +2\log 2+8 \mathcal C\left(\frac {\beta-3} 4\right) - 2\mathcal C\left( \beta \right)-3{\bf C}, \end{aligned}$$ where the right hand side is an explicit function of $\beta$. A graph of this function is depicted in Fig. [3](#Platonic){reference-type="ref" reference="Platonic"} as a dotted line.* *In the case $\beta=0$ we obtain a surface isometric to the standard unit sphere in $\Bbb R^3$ and a representation for its determinant in terms of the determinant of the spherical $(4,\infty,4)$ double triangle, see also Example [Example 18](#Spindles){reference-type="ref" reference="Spindles"} below.* *In the case $\beta=-1/3$ we obtain a surface $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ isometric to the (flat) surface of a Euclidean regular octahedron. The formula [\[DetOct1\]](#DetOct1){reference-type="eqref" reference="DetOct1"} for the determinant reduces to $$\log \det \Delta^{4\pi}_{Octahedron} |_{\beta=-1/3}=6\zeta_R'(-1) + \frac {35} {24}\log \frac 4 3 - \frac {13} 2 \log \Gamma\left (\frac 2 3\right) + \frac {13} 4 \log \pi.$$* *Let us also note that in the case $\beta=-1/2$ we get the tessellation of the singular sphere $(\overline{\Bbb C}_x, f^*m_{\pmb\beta}|_{\beta=-1/2})$ by the hyperbolic $(2,3,8)$-triangle.
The surface $(\overline{\Bbb C}_x, f^*m_{\pmb\beta}|_{\beta=-1/2})$, where $\pmb\beta$ is the divisor [\[ocDiv\]](#ocDiv){reference-type="eqref" reference="ocDiv"}, is isometric to a regular hyperbolic octahedron with conical singularities of angle $\pi$. This is remarkable, as a double of $(\overline{\Bbb C}_x, f^*m_{\pmb\beta}|_{\beta=-1/2})$ is the Bolza curve, known as the most symmetrical genus two smooth hyperbolic surface, see e.g. [@KW]. To the best of our knowledge, the exact value of the spectral determinant of the Bolza curve, endowed with the smooth Gaussian curvature $-1$ metric, is not yet known. For a numerical study see [@StrUs].* *Finally, as $\beta\to -1^+$, the determinant grows without any bound in accordance with the asymptotics $$\label{IdealOctahedron} \log \det \Delta^{4\pi}_{Octahedron}= \left(-\log(\beta+1) +\frac 1 2 \log 2 -1+12\zeta_R'(-1) \right)\frac 1 {\beta+1}-3\log (\beta+1)+O(1)$$ of the right-hand side in [\[DetOct1\]](#DetOct1){reference-type="eqref" reference="DetOct1"}. In the limit $\beta=-1$ we get a surface isometric to an ideal octahedron: a surface of Gaussian curvature $-2$ with six cusps, cf. [@Judge; @Judge2]. The spectrum of the corresponding Laplacian is no longer discrete.* **Example 18** (Spindles). *Let $m_{\pmb\beta}^S=e^{2\phi}|dz|^2$ be the metric of a spindle with two antipodal singularities [@TroyanovSp], where $$\phi(z)=\beta\log |z|+ \log 2+\log (\beta+1)-\log (1+|z|^{2\beta+2} ).$$ The (regularized) Gaussian curvature of $m_{\pmb\beta}^S$ is $1$, and the total area is $S= 4\pi(\beta+1)$. The metric represents the divisor $\pmb\beta=\beta_0\cdot 0+\beta_1\cdot 1+\beta_\infty\cdot\infty$ with $\beta_0=\beta_\infty=:\beta\in(-1,\infty)$ and $\beta_1=0$. The spindle $(\overline{\Bbb C}_z,m_{\pmb\beta}^S )$ is isometric to the spherical double triangle glued from two copies of a spherical triangle (a lune) with internal angles $\bigl(\pi(\beta+1), \pi, \pi(\beta+1)\bigr)$, cf. Fig. 
[2](#Cyclic triangulation){reference-type="ref" reference="Cyclic triangulation"}.* *For the asymptotics of the metric potential we have $$\begin{aligned} \phi(z)&=\beta\log |z|+\phi_0 +o(1),\quad z\to 0,\quad \phi_0=\log2(\beta+1), \\ \phi(z)&=-(\beta+2)\log|z| +\phi_\infty +o(1), \quad z\to \infty,\quad \phi_\infty=\log 2(\beta+1). \end{aligned}$$ Clearly, for the pullback of $m_{\pmb\beta}^S$ by the Shabat polynomial $f(x)=x^\ell$ we have $$f^*m_{\pmb\beta}^S= \frac{4 \ell^2(\beta+1)^2|x|^{2(\ell(\beta+1)-1)}|d x|^2}{(1+|x|^{2\ell( \beta +1) } )^2},\quad f^*\pmb\beta= (\ell(\beta+1)-1)\cdot 0+ (\ell(\beta+1)-1)\cdot\infty.$$* *The surface $(\overline{\Bbb C}_x,f^*m_{\pmb\beta}^S)$ is isometric to the surface glued from $\ell$ copies of the spindle $(\overline{\Bbb C}_z, m^S_{\pmb\beta})$ with a cut from the conical point at $z=0$ to the conical point at $z=\infty$. Or, equivalently, $(\overline{\Bbb C}_x,f^*m_{\pmb\beta}^S)$ is triangulated by $2\ell$ bicolored copies of a spherical triangle with internal angles $\bigl(\pi(\beta+1), \pi, \pi(\beta+1)\bigr)$ and unit Gaussian curvature, cf. Fig. [2](#Cyclic triangulation){reference-type="ref" reference="Cyclic triangulation"}. 
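The displayed pullback formula can be double-checked symbolically; in the sketch below (our own, in the radial variable $r=|x|$) we spot-check at exact rational sample values that $e^{2\phi(x^\ell)}\,|\ell x^{\ell-1}|^2$ matches the claimed density, and that the result is again a spindle density, now of order $\ell(\beta+1)-1$:

```python
import sympy as sp

r, b, l = sp.symbols('r beta ell')

def spindle_density(rr, bb):
    # e^{2*phi} with |z| = rr, read off from
    # phi(z) = beta*log|z| + log 2 + log(beta+1) - log(1 + |z|^(2*beta+2))
    return 4 * (bb + 1)**2 * rr**(2*bb) / (1 + rr**(2*bb + 2))**2

# pullback under z = x^ell (radially, r = |x|): |dz| = ell * r^(ell-1) * |dx|
pullback = spindle_density(r**l, b) * (l * r**(l - 1))**2
claimed = 4 * l**2 * (b + 1)**2 * r**(2*(l*(b + 1) - 1)) / (1 + r**(2*l*(b + 1)))**2
again_spindle = spindle_density(r, l*(b + 1) - 1)   # spindle of order ell*(beta+1)-1

# high-precision spot check at exact sample values of r, beta, ell
vals = {r: sp.Rational(7, 5), b: sp.Rational(-1, 3), l: 4}
for expr in (claimed, again_spindle):
    assert abs((pullback - expr).evalf(40, subs=vals)) < 1e-30
```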
The surface $(\overline{\Bbb C}_x,f^*m_{\pmb\beta}^S)$ is again a spindle with two antipodal conical singularities: the metric $m_{\pmb\beta}^S$ coincides with $f^*m_{\pmb\beta}^S$ after the replacement of $\beta$ by $\ell(\beta+1)-1$ (and $z$ by $x$).* *As is known [@KalvinLast; @KalvinJFA; @Klevtsov; @SpreaficoZerbini], the spectral determinant of the Friedrichs Laplacian $\Delta^S_{\pmb\beta}$ on the spindle $(\overline{\Bbb C}_z,m^S_{\pmb\beta})$ satisfies $$\label{spindledet} \log\frac {\det\Delta^S_{\pmb\beta}}{S}=\frac {\beta+1} 3-\frac 1 3 \left( \beta+1 +\frac 1 {\beta+1} \right)\log 2(\beta+1) -2\mathcal C(\beta) +{\bf C},\quad \beta>-1.$$* *For the spectral determinant of the spindle $(\overline{\Bbb C}_x,f^*m_{\pmb\beta}^S)$ Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"} (after an appropriate rescaling) gives $$\label{cyclicsphere} \begin{aligned} \log\frac {\det\Delta^{\ell S}_{f^*\pmb\beta}}{\ell S}=\ell \Bigl(\log\frac {\det\Delta^{S}_{\pmb\beta}}{4\pi(\beta+1)} +2\mathcal C(\beta)-{\bf C}\Bigr) +\frac 1 3\left(\ell-\frac 1 {\ell}\right)\frac {\log2 (\beta+1)}{\beta+1} \\ -\frac 1 6 \left( 2\ell(\beta+1)+\frac 2 { \ell(\beta+1)} \right)\log \ell -2\mathcal C\left(\ell(\beta+1)-1\right) +{\bf C}. \end{aligned}$$ As expected, after the substitution [\[spindledet\]](#spindledet){reference-type="eqref" reference="spindledet"} and the replacement of $\ell(\beta+1)-1$ by $\beta$, the equality [\[cyclicsphere\]](#cyclicsphere){reference-type="eqref" reference="cyclicsphere"} reduces to the one in [\[spindledet\]](#spindledet){reference-type="eqref" reference="spindledet"}.* *In particular, for $\beta=\frac 1 \ell -1$ the pullback of the spindle metric $m_{\pmb\beta}^S$ by $f$ is the metric of the standard round sphere of total area $\ell S=4\pi$.
In this case, the equality [\[cyclicsphere\]](#cyclicsphere){reference-type="eqref" reference="cyclicsphere"} expresses the determinant of Laplacian of the standard sphere $$\det\Delta^{4\pi}_{f^*\pmb\beta}=\exp(1/2-4\zeta'_R(-1))$$ in terms of the determinants $\det\Delta^{4\pi/\ell}_{\pmb\beta}$ of the spherical double triangle (or, equivalently, of the bicolored double lune) corresponding to the cyclic triangulation of the sphere via the Shabat polynomial $f(x)=x^\ell$, cf. Fig. [2](#Cyclic triangulation){reference-type="ref" reference="Cyclic triangulation"}.* ## Determinant for dihedral triangulation Consider the Belyi function $$\label{BelyiDihedral} f(x)=1-\left(\frac {1-x^\ell}{1+x^\ell}\right)^2,\quad \ell\in\Bbb N.$$ This is a ramified covering of degree $2\ell$. For the ramification divisor of $f$ we have $${\pmb {f}} = (\ell-1)\cdot 0 + 1 \cdot \{\sqrt[\ell]{1} \} + 1 \cdot \{\sqrt[\ell]{-1} \} +(\ell-1)\cdot\infty,\quad |{\pmb {f}}| =4\ell-2.$$ This Belyi function defines a tessellation of the standard round sphere with $(2,2,\ell)$-triangles, cf. Fig [4](#DihTriang){reference-type="ref" reference="DihTriang"}. As we show in the proof of Theorem [Theorem 19](#Jan20){reference-type="ref" reference="Jan20"} below, to this Belyi map there corresponds the constant $$\label{CfDT} C_f=\frac 2 3 \left(\ell-\frac 1 \ell\right)\log 2.$$ The Belyi map $f$ sends the marked points listed in the ramification divisor $\pmb f$ to the points $0,1,\infty$ as follows: $$f(0)=0,\quad f(\infty)=0, \quad f( \sqrt[\ell]{-1 })=\infty, \quad f( \sqrt[\ell]{1 })=1.$$ For the pullback divisor of $\pmb \beta= \beta_0\cdot 0+\beta_1\cdot 1+\beta_\infty \cdot \infty$ by $f$ we obtain $$f^*\pmb\beta= (\ell(\beta_0+1)-1)\cdot 0 + ( 2\beta_1+1) \cdot \{\sqrt[\ell]{1} \} + (2\beta_\infty+1) \cdot \{\sqrt[\ell]{-1} \} + (\ell(\beta_0+1)-1)\cdot\infty.$$ ![Dihedral triangulation](Dihedral.png){#DihTriang} **Theorem 19** (Dihedral triangulation). 
*Let $m_{\pmb\beta}$ be the unit area Gaussian curvature $2\pi(|\pmb\beta|+2)$ metric of $S^2$-like double triangle, see Section [2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"}. For the spectral zeta-regularized determinant of the Friedrichs Laplacian $\Delta_{f^*\pmb\beta}$ corresponding to the area $2\ell$ pullback metric $f^*m_{\pmb\beta}$ on the Riemann sphere $\overline{\Bbb C}_x$, where $f$ is the dihedral Belyi map [\[BelyiDihedral\]](#BelyiDihedral){reference-type="eqref" reference="BelyiDihedral"}, we have the following explicit expression: $$\begin{aligned} \log & \frac { \det\Delta_{f^*\pmb\beta}} {2\ell} = 2\ell\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} -{\bf C}\Bigr)+C_f \\ &+\frac 1 3 \left(\ell-\frac 1 {\ell}\right)\frac {\Psi(\beta_0,\beta_1,\beta_\infty)}{\beta_0+1} +\frac \ell 4\left(\frac {\Psi(\beta_1,\beta_0,\beta_\infty)}{\beta_1+1} +\frac { \Psi(\beta_\infty,\beta_1,\beta_0)}{\beta_\infty+1} \right) \\ & -\frac 1 3 \left( \ell(\beta_0+1)+\frac 1 { \ell(\beta_0+1)} \right)\log \ell -\frac \ell 3 \left( \beta_1+\beta_\infty+2+\frac 1{4\beta_1+4}+\frac 1{4\beta_\infty+4}\right) \log 2 \\ & -2 \mathcal C\left( \ell(\beta_0+1)-1 \right)-\ell\Bigl(\mathcal C\left( 2\beta_1+1\right)+\mathcal C\left( 2\beta_\infty+1\right)\Bigr) +{\bf C}, \end{aligned}$$ where $\mathcal C_{\pmb\beta} =\mathcal C(\beta_0)+\mathcal C(\beta_1)+\mathcal C(\beta_\infty)$ and $C_f$ is the same as in [\[CfDT\]](#CfDT){reference-type="eqref" reference="CfDT"}.* *Recall that $\det \Delta_{\pmb\beta}$ stands for an explicit function [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"}, whose values are the determinants of the unit area $S^2$-like double triangles of Gaussian curvature $2\pi(|\pmb\beta|+2)$ tessellating the singular sphere $(\overline{\Bbb C}_x,f^*m_{\pmb\beta})$.
The function $\beta\mapsto \mathcal C(\beta)$ is defined in [\[cbeta\]](#cbeta){reference-type="eqref" reference="cbeta"}, the function $\Psi$ is the same as in [\[eqn_Psi\]](#eqn_Psi){reference-type="eqref" reference="eqn_Psi"}, and $\bf C$ is the constant introduced in [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}.* *Proof.* For the derivative of the Belyi function [\[BelyiDihedral\]](#BelyiDihedral){reference-type="eqref" reference="BelyiDihedral"} we have $$f'(x)=\frac{4\ell x^{\ell-1}(1-x^\ell)}{(x^\ell+1)^3}.$$ As a result, we immediately obtain the asymptotics in vicinities of the critical points of the form [\[asymp_log_f1\]](#asymp_log_f1){reference-type="eqref" reference="asymp_log_f1"}. The first non-zero coefficients $c_k$ in the Taylor expansions (cf. Proposition [Proposition 2](#PBdet){reference-type="ref" reference="PBdet"}) satisfy $$|c_k|=\left\{ \begin{array}{cc} 4 , & x_k=0, \\ \ell^2/4, & x_k\in\{ \sqrt[\ell]{1 }\}, \\ 4, & x_k=\infty,\quad k=n=2\ell+2. \end{array} \right.$$ Similarly, we obtain the asymptotics in vicinities of the poles of the form [\[asymp_log_f2\]](#asymp_log_f2){reference-type="eqref" reference="asymp_log_f2"} with the first non-zero Laurent coefficients $c_k$ satisfying $$|c_k|=\ell^2/4, \quad x_k\in\{\sqrt[\ell]{- 1 }\}.$$ Now the expression [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"} for $C_f$ implies [\[CfDT\]](#CfDT){reference-type="eqref" reference="CfDT"}. As a result, the assertion is an immediate consequence of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} together with the asymptotics [\[asympt_phi\]](#asympt_phi){reference-type="eqref" reference="asympt_phi"}. ◻ **Example 20** (Dihedra). 
*Here we deduce an alternative formula for the determinant of the regular dihedron of Gaussian curvature $K=2\pi(\beta+1/\ell)$ and area $2\ell$ with $2\ell$ conical singularities of order $\beta$: In Theorem [Theorem 19](#Jan20){reference-type="ref" reference="Jan20"} we take $$\pmb\beta=\left(\frac 1 \ell -1 \right)\cdot 0+\frac{\beta-1} 2 \cdot 1+\frac{\beta-1} 2\cdot \infty,$$ and obtain $$\begin{aligned} \log & \,{ \det\Delta^{2\ell}_{Dihedron}} = 2\ell\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} -\mathcal C\left( \beta\right) \Bigr) + \frac 1 3 \log \ell +(1-2\ell){\bf C} \\ &+\frac {\ell^2-1 } 3 {\Psi\left ( \frac 1 \ell -1,\frac{\beta-1} 2,\frac{\beta-1} 2\right)} +\frac \ell { \beta+1}{ \Psi\left (\frac{\beta-1} 2,\frac{\beta-1} 2,\frac 1 \ell -1\right)} \\ & +\left(\frac 2 3 \left(\ell-\frac 1 \ell\right) -\frac \ell 3 \left( \beta+1+\frac 1{\beta+1}\right) +1 \right) \log 2. \end{aligned}$$* **Example 21** (Octahedron). * We obtain an alternative formula for the determinant of Laplacian on a regular octahedron of Gaussian curvature $3\beta+1$ with (six) conical singularities of order $\beta$: In Theorem [Theorem 19](#Jan20){reference-type="ref" reference="Jan20"} we take $\ell=2$ and $$\pmb\beta=\frac{\beta-1} 2\cdot 0+\frac{\beta-1} 2\cdot 1 +\frac{\beta-1} 2\cdot\infty.$$ This together with the standard rescaling property of determinants implies $$\begin{aligned} \log\, & { \det\Delta^{4\pi}_{Octahedron}} = 4\log \det \Delta_{\pmb\beta} - \left( \beta+1+\frac 1 { \beta +1} \right)\left(\log 2+\frac 1 2 \log \pi \right) +\frac 5 3 \log \pi \\ &+\frac 3 {{\beta+1} } {\Psi\left(\frac{\beta-1} 2,\frac{\beta-1} 2,\frac{\beta-1} 2\right)} +3\log 2+12\mathcal C\left(\frac{\beta-1} 2\right)-6\mathcal C\left( \beta\right) -3{\bf C}, \end{aligned}$$ where the right hand side is an explicit function of $\beta$; this expression is equivalent to the one in [\[DetOct1\]](#DetOct1){reference-type="eqref" reference="DetOct1"}. 
A graph of this function is depicted in Fig. [3](#Platonic){reference-type="ref" reference="Platonic"} as a dotted line. * ## Determinant for tetrahedral triangulation The tetrahedral Belyi map is given by the function $$\label{BelyiTetrahedral} f(x)=-64\frac{(x^3+1)^3}{(x^3-8)^3x^3},\quad \deg f=12.$$ For the ramification divisor, we obtain $$\pmb f= 2\cdot\left\{0,\sqrt[3]{8}\right\}+1\cdot\left\{ \sqrt[3]{-10\pm6\sqrt 3}\right\} + 2\cdot\left\{\sqrt[3]{-1},\infty\right\} ,\quad |\pmb f|=22.$$ The Belyi map sends the marked points listed in the divisor $\pmb f$ to the points $0,1,\infty$ of the target Riemann sphere $\overline{\Bbb C}_z$ as follows: $$f(0)=\infty, \quad f(\sqrt[3]{-1})=0,\quad f\left(\sqrt[3]{-10\pm6\sqrt 3}\right)=1, \quad f(\sqrt[3]{8})=\infty, \quad f(\infty)=0.$$ The solutions of $f(x)=1$ are the edge midpoints of a regular tetrahedron. The poles of $f$ correspond to its vertices. The zeros of $f$, i.e. the roots of the numerator together with $x=\infty$, are the centers of the faces. This defines a tessellation of the standard round sphere with spherical $(2,3,3)$-triangles, cf. Fig [5](#tetrahedral){reference-type="ref" reference="tetrahedral"}. A picture of the corresponding *dessin d'enfant* can be found e.g. in [@Margot; @Zvonkin Fig. 2]. In the proof of Theorem [Theorem 22](#T_t){reference-type="ref" reference="T_t"} below we show that to the Belyi function [\[BelyiTetrahedral\]](#BelyiTetrahedral){reference-type="eqref" reference="BelyiTetrahedral"} there corresponds the constant $$\label{CfTT} C_f=\log 2 +\frac 9 4 \log 3.$$ For the pullback of the divisor $\pmb \beta= \beta_0\cdot 0+\beta_1\cdot 1+\beta_\infty \cdot \infty$ by $f$ we obtain $$f^*\pmb\beta= (3\beta_0+2)\cdot\left\{\sqrt[3]{-1},\infty\right\}+(2\beta_1+1)\cdot\left\{ \sqrt[3]{-10\pm6\sqrt 3}\right\}+(3\beta_\infty+2)\cdot \left\{0,\sqrt[3]{8}\right\}.$$ ![Tetrahedral triangulation](Tetrahedral.png){#tetrahedral} **Theorem 22** (Tetrahedral triangulation).
*Let $m_{\pmb\beta}$ be the unit area Gaussian curvature $2\pi(|\pmb\beta|+2)$ metric of $S^2$-like double triangle, see Section [2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"}. For the determinant of the Friedrichs Laplacian $\det \Delta_{f^*\pmb\beta}$ corresponding to the pullback metric $f^*m_{\pmb\beta}$ of area $12$, where $f$ is the tetrahedral Belyi map [\[BelyiTetrahedral\]](#BelyiTetrahedral){reference-type="eqref" reference="BelyiTetrahedral"}, we have $$\begin{aligned} \log & \frac { \det \Delta_{f^*\pmb\beta}} {12} = 12\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} -{\bf C}\Bigr)+ C_f \\ &+ \frac {16} {9}\frac {\Psi(\beta_0,\beta_1,\beta_\infty)}{\beta_0+1} + \frac3 2 \frac {\Psi(\beta_1,\beta_0,\beta_\infty)}{\beta_1+1} + \frac {16} {9}\frac {\Psi(\beta_\infty,\beta_1,\beta_0)}{\beta_\infty+1} \\ & -\frac 2 3 \left(3(\beta_0+\beta_\infty+2)+\frac 1 { 3(\beta_\infty+1)} +\frac 1 { 3(\beta_0+1)} \right)\log 3 - \left( 2(\beta_1+1)+\frac 1{2(\beta_1+1)}\right) \log 2 \\ & -4\mathcal C\left( 3\beta_0+2 \right) -6\mathcal C\left( 2\beta_1+1\right)-4\mathcal C\left( 3\beta_\infty+2\right) +{\bf C}, \end{aligned}$$ where $\mathcal C_{\pmb\beta} =\mathcal C(\beta_0)+\mathcal C(\beta_1)+\mathcal C(\beta_\infty)$ and $C_f$ is the same as in [\[CfTT\]](#CfTT){reference-type="eqref" reference="CfTT"}.* *Recall that $\det \Delta_{\pmb\beta}$ stands for an explicit function [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"}, whose values are the determinants of the unit area $S^2$-like double triangles of Gaussian curvature $2\pi(|\pmb\beta|+2)$ tessellating the singular sphere $(\overline{\Bbb C}_x,f^*m_{\pmb\beta})$.
The function $\beta\mapsto \mathcal C(\beta)$ is defined in [\[cbeta\]](#cbeta){reference-type="eqref" reference="cbeta"}, the function $\Psi$ is the same as in [\[eqn_Psi\]](#eqn_Psi){reference-type="eqref" reference="eqn_Psi"}, and $\bf C$ is the constant introduced in [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}.* *Proof.* For the derivative of the Belyi function in [\[BelyiTetrahedral\]](#BelyiTetrahedral){reference-type="eqref" reference="BelyiTetrahedral"} we have $$f'(x)=\frac{192(x^3+1)^2(x^6+20x^3-8)}{x^4(x^3-8)^4}.$$ In exactly the same way as in the proof of Theorem [Theorem 19](#Jan20){reference-type="ref" reference="Jan20"} we obtain $$|c_k|=\left\{ \begin{array}{cc} 2^6/3^3, & x_k\in\{\sqrt[3]{-1}\}, \\ 2\sqrt 3 \pm 3, & x_k\in\left\{\sqrt[3]{-10\pm 6\sqrt 3}\right\}, \\ 2^6, & x_k=\infty,\ k=n,\\ 2^3, & x_k=0,\\ 2^3/3^3, & x_k\in \left\{\sqrt[3]8\right\}. \end{array} \right.$$ This together with the expression [\[C_f\]](#C_f){reference-type="eqref" reference="C_f"} for $C_f$ implies the value stated in [\[CfTT\]](#CfTT){reference-type="eqref" reference="CfTT"}. The remaining part of the assertion is a direct consequence of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. ◻ **Example 23** (Tetrahedron). 
*Here we deduce an alternative formula for the spectral determinant of a regular tetrahedron of Gaussian curvature $K=2\beta+1$: in Theorem [Theorem 22](#T_t){reference-type="ref" reference="T_t"} we substitute $$\pmb\beta=\left(-\frac 2 3 \right)\cdot 0+\left(-\frac 1 2 \right)\cdot 1+\frac{\beta-2}{3}\cdot\infty.$$ As a result, after some rescaling we obtain $$\begin{aligned} \log & \det \Delta^{4\pi}_{Tetrahedron} = 12\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} \Bigr)+\log 2 +\frac {7}{12}\log 3 +\frac 4 3 \log\pi \\ &+ \frac {16} {3} {\Psi\left(-\frac 2 3,-\frac 1 2,\frac{\beta-2}{3}\right) + 3 \Psi\left(-\frac 1 2,-\frac 2 3,\frac{\beta-2}{3}\right)} + \frac {16} {3 (\beta+1) } \Psi\left(\frac{\beta-2}{3},-\frac 1 2,-\frac 2 3\right) \\ & -\frac 1 3 \left(\beta+1+\frac 1 { \beta+1} \right) \log 3\pi -4\mathcal C\left( \beta\right) -11{\bf C}, \end{aligned}$$ where the right hand side is an explicit function of $\beta$. A graph of this function is depicted in Fig. [3](#Platonic){reference-type="ref" reference="Platonic"} as a solid line.* *In the case $\beta=0$ the surface $(\overline{\Bbb C}_x, f^*m_{\pmb\beta})$ is isometric to a unit sphere in $\Bbb R^3$. 
The sphere $(\overline{\Bbb C}_x, f^*m_{\pmb\beta}|_{\beta=0})$ is tessellated by the double of the $(2,3,3)$-triangle, and the above formula for the determinant is a representation for the determinant $\det \Delta^{4\pi}_{Tetrahedron}|_{\beta=0}$ of the unit sphere in terms of the determinant of the double of the spherical $(2,3,3)$-triangle.* ## Determinant for octahedral triangulation To the octahedral triangulation, there corresponds the Belyi function $$\label{cube} f(x)=-2^2 \cdot3^3\frac{(x^4+1)^4 x^4}{(x^8-14 x^4+1)^3},\quad \deg f=24.$$ In particular, thanks to the identification of the Riemann sphere $\overline{\Bbb C}_x$ with the standard round sphere in $\Bbb R^3$ via the stereographic projection, $f$ describes the tessellation of the standard round sphere with spherical $(2,3,4)$-triangles, cf. Fig. [6](#Oct Triang){reference-type="ref" reference="Oct Triang"}. The ramification divisor of $f$ is $$\label{divisorF} \pmb f=3\cdot\left\{0,\infty,\sqrt[4]{-1}\right\}+1\cdot\left\{\sqrt[4]{1}, \sqrt[4]{-17\pm 3\cdot 2 ^{5/2}}\right\}+2\cdot\left\{\sqrt[4]{(2\pm\sqrt 3)^2}\right\},\quad |\pmb f|=46.$$ Here $\{\sqrt[4]{1}, \sqrt[4]{-17\pm 3\cdot 2 ^{5/2}}\}$ are the edge midpoints of a cube; for any $x$ in this set we have $f(x)=1$. The poles $\sqrt[4]{(2\pm\sqrt 3)^2}$ of $f$ correspond to the vertices of the cube. The zeros $\{0,\infty,\sqrt[4]{-1}\}$ of $f$ are the centers of the cube faces. For the octahedron dual to the cube: the points $\{\sqrt[4]{1}, \sqrt[4]{-17\pm 3\cdot 2 ^{5/2}}\}$ correspond to the edge midpoints, the poles of $f$ correspond to the centers of the faces, and the zeros of $f$ are the vertices. A picture of the corresponding *dessin d'enfant* can be found e.g. in [@Margot; @Zvonkin Fig. 3].
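The stated properties of the octahedral Belyi map [\[cube\]](#cube){reference-type="eqref" reference="cube"} can be sanity-checked by direct computation. The snippet below is an illustrative sketch (Python standard library only; the helper name `f_oct` is ours, not from the paper): it verifies numerically that $f=1$ at the edge-midpoint parameters, that the denominator vanishes at the cube vertices, and that the local degrees and the ramification divisor are consistent with $\deg f=24$ and Riemann-Hurwitz.

```python
from math import sqrt, isclose

def f_oct(x):
    # Octahedral Belyi map: f(x) = -2^2*3^3 (x^4+1)^4 x^4 / (x^8 - 14 x^4 + 1)^3
    return -108 * (x**4 + 1)**4 * x**4 / (x**8 - 14 * x**4 + 1)**3

# Edge midpoints solve f(x) = 1; here x^4 = 1 and x^4 = -17 + 3*2^(5/2)
assert isclose(f_oct(1.0), 1.0, rel_tol=1e-9)
t = -17 + 12 * sqrt(2)        # 3*2^(5/2) = 12*sqrt(2)
x = complex(t) ** 0.25        # a fourth root of t (t < 0, so complex)
assert abs(f_oct(x) - 1) < 1e-8

# Cube vertices are poles: x^8 - 14 x^4 + 1 vanishes at x^4 = (2 + sqrt 3)^2
u = (2 + sqrt(3)) ** 2
assert abs(u**2 - 14 * u + 1) < 1e-10

# Local degrees over 0, 1, infinity each sum to deg f = 24:
# 6 zeros of local degree 4, 12 points of local degree 2 over 1, 8 poles of local degree 3
assert 6 * 4 == 12 * 2 == 8 * 3 == 24
# |f| = sum of ramification orders = 2*deg f - 2 (Riemann-Hurwitz)
assert 3 * 6 + 1 * 12 + 2 * 8 == 2 * 24 - 2 == 46
```

All assertions pass, in agreement with the divisor [\[divisorF\]](#divisorF){reference-type="eqref" reference="divisorF"}.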
In the proof of Theorem [Theorem 24](#TOT){reference-type="ref" reference="TOT"} below we show that to the Belyi function [\[cube\]](#cube){reference-type="eqref" reference="cube"} there corresponds the constant $$\label{CfOT} C_f=\frac 9 4 \log 3 +\frac{119}{18}\log 2.$$ For the pullback of the divisor $\pmb \beta= \beta_0\cdot 0+\beta_1\cdot 1+\beta_\infty \cdot \infty$ by $f$ we get $$\begin{aligned} f^*\pmb\beta=(4\beta_0+3)\cdot\left\{0,\infty,\sqrt[4]{-1}\right\}+& (2\beta_1+1)\cdot\left\{\sqrt[4]{1}, \sqrt[4]{-17\pm 3\cdot 2 ^{5/2}}\right\} \\ &\qquad\qquad + (3\beta_\infty+2)\cdot\left\{\sqrt[4]{(2\pm\sqrt 3)^2}\right\}. \end{aligned}$$ ![Octahedral triangulation](Octahedral.png){#Oct Triang} **Theorem 24** (Octahedral triangulation). *Let $m_{\pmb\beta}$ be the unit area Gaussian curvature $2\pi(|\pmb\beta|+1)$ metric of $S^2$-like double triangle, see Section [2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"}. For the determinant of Laplacian corresponding to the pullback metric $f^*m_{\pmb\beta}$ of area $24$, where $f$ is the Belyi map [\[cube\]](#cube){reference-type="eqref" reference="cube"}, we have $$\begin{aligned} \log & \frac { \det \Delta_{f^*\pmb\beta}} {24} = 24\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} -{\bf C}\Bigr)+ C_f \\ &+ \frac {15} {4}\frac {\Psi(\beta_0,\beta_1,\beta_\infty)}{\beta_0+1} + 3\frac {\Psi(\beta_1,\beta_0,\beta_\infty)}{\beta_1+1} +\frac {32} 9 \frac {\Psi(\beta_\infty,\beta_1,\beta_0)}{\beta_\infty+1} \\ & - \left(4(2\beta_0+\beta_1+3)+\frac 1 {2(\beta_0+1)} +\frac 1 {\beta_1+1}\right)\log 2 -4\left(\beta_\infty+1+\frac 1 {9(\beta_\infty+1)}\right)\log 3 \\& -8\mathcal C\left( 3\beta_\infty+2 \right) -6\mathcal C\left( 4\beta_0+3\right)-12\mathcal C\left( 2\beta_1+1\right) +{\bf C}, \end{aligned}$$ where $\mathcal C_{\pmb\beta} =\mathcal C(\beta_0)+\mathcal C(\beta_1)+\mathcal C(\beta_\infty)$ and $C_f$ is the same as in [\[CfOT\]](#CfOT){reference-type="eqref" 
reference="CfOT"}.* *Recall that $\det \Delta_{\pmb\beta}$ stands for an explicit function [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"}, whose values are the determinants of the unit area $S^2$-like double triangles of Gaussian curvature $2\pi(|\pmb\beta|+2)$ tessellating the singular sphere $(\overline{\Bbb C}_x,f^*m_{\pmb\beta})$. The function $\beta\mapsto \mathcal C(\beta)$ is defined in [\[cbeta\]](#cbeta){reference-type="eqref" reference="cbeta"}, the function $\Psi$ is the same as in [\[eqn_Psi\]](#eqn_Psi){reference-type="eqref" reference="eqn_Psi"}, and $\bf C$ is the constant introduced in [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}.* *Proof.* Here we find $C_f$ by using Proposition [Proposition 5](#B_C_f){reference-type="ref" reference="B_C_f"}. Namely, we first write the potential of the pullback of $m_{\pmb\beta}=c|z|^{-4/3}|z-1|^{-4/3}|dz|^2$ by $f$ in the form $$\bigl(f^*\phi\bigr)(x)=\frac 1 3 \log |x|+\frac 1 3 \log |x^4+1| -\frac 1 3 \log |x^4-1|-\frac 1 3 \log |(x^4+17)^2-9\cdot 2^5|+\log cA,$$ cf. [\[pb2\]](#pb2){reference-type="eqref" reference="pb2"}. It is easy to see that as $x\to x_k\in\{\sqrt[4]{-1}\}$ the metric potential $f^*\phi$ satisfies the asymptotics $$\begin{aligned} \bigl(f^*\phi\bigr)(x)& = \frac 1 3 \log |x-x_k| +\frac 1 3 \log\left |(x^4+1)'\restriction_{x=\sqrt[4]{-1}}\right | -\frac 1 3 \log |-2| \\ & \qquad-\frac 1 3 \log |(-1+17)^2-9\cdot 2^5|+\log cA+o(1) \\ & =\frac 1 3 \log |x-x_k| +\log cA-\frac 4 3 \log 2 +o(1). 
\end{aligned}$$ This together with [\[a_s\_m\]](#a_s_m){reference-type="eqref" reference="a_s_m"} implies that for any $k$, such that $x_k\in\{\sqrt[4]{-1}\}$, we have $$\sum_{\ell: k\neq \ell<n} \frac {\operatorname{ord}_\ell f-2} 3 \log |x_k-x_\ell|=-\frac 4 3 \log 2.$$ Similarly, we obtain $$\sum_{\ell: k\neq \ell<n} \frac {\operatorname{ord}_\ell f-2} 3 \log |x_k-x_\ell|= \left\{ \begin{array}{cc} 0 ,& x_k=0, \\ -\log2 -\frac 2 3 \log 3, & x_k\in\{\sqrt[4]{1}\}, \\ \frac 1 3 \log \frac {3\pm2\sqrt 2}{144}, & x_k\in\left\{\sqrt[4]{-17\pm3\cdot 2^{5/2}}\right\}. \end{array} \right.$$ As a consequence, by using the expression [\[divisorF\]](#divisorF){reference-type="eqref" reference="divisorF"} for the ramification divisor $\pmb f$, for the first line in [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} we obtain $$\label{eqy1} \begin{aligned} \frac 1 {18} \sum_{ k:k<n}\sum_{\ell: k\neq \ell<n}\frac{(\operatorname{ord}_k f-2)(\operatorname{ord}_\ell f -2)}{\operatorname{ord}_k f+1} \log|x_k-x_\ell|= \log2 +\frac 2 3 \log 3. 
\end{aligned}$$ Thanks to [\[divisorF\]](#divisorF){reference-type="eqref" reference="divisorF"}, it is also easy to verify that for the second line in [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} one has $$\label{eqy2} \frac 1 6 \sum_{k\leqslant n} \left( \frac{\operatorname{ord}_k f +1} 3 +\frac 3 { \operatorname{ord}_k f+1} \right)\log (\operatorname{ord}_k f+1)= \frac{17} {2}\log 2+\frac 8 3 \log 3.$$ Finally, as the scaling constant $A$ satisfies [\[Exp_A\]](#Exp_A){reference-type="eqref" reference="Exp_A"}, we get $$A=\frac{|f(x)|^{-2/3}|f(x)-1|^{-2/3}|f'(x)|}{|x|^{1/3}|x^4+1|^{1/3}|x^4-1|^{-1/3}|(x^4+17)^2-9\cdot 2^5|^{-1/3}}=12\cdot 2^{2/3}.$$ This allows us to calculate the value of the last line in [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"}: $$\label{eqy3} \frac 1 {6} \left( n-2 - \sum_{k\leqslant n} \frac 3 { \operatorname{ord}_k f+1} \right)\log A= - \frac {13} {12} \log (12\cdot 2^{2/3}).$$ Adding the values [\[eqy1\]](#eqy1){reference-type="eqref" reference="eqy1"}, [\[eqy2\]](#eqy2){reference-type="eqref" reference="eqy2"}, [\[eqy3\]](#eqy3){reference-type="eqref" reference="eqy3"} of the lines in [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} together, we arrive at [\[CfOT\]](#CfOT){reference-type="eqref" reference="CfOT"}. ◻ **Example 25** (Cube). 
*Here we find the spectral determinant of a cube of Gaussian curvature $4\beta+1$ with (eight) conical singularities of order $\beta$: in Theorem [Theorem 24](#TOT){reference-type="ref" reference="TOT"} we substitute $$\pmb\beta=\left(-\frac 3 4\right)\cdot 0+ \left(-\frac 1 2\right)\cdot 1+ \frac {\beta-2} 3\cdot \infty.$$ After rescaling, for the determinant of a cube of (regularized) Gaussian curvature $4\beta+1$ with eight conical singularities of order $\beta$ we obtain $$\label{DetCube} \begin{aligned} \log &\, { \det \Delta^{4\pi}_{Cube}} = 24\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} \Bigr)+ \frac {5} 4 \log 3 -\frac{7}{18}\log 2+2\log \pi \\ &+15 {\Psi\left(-\frac 3 4,-\frac 1 2, \frac {\beta-2} 3\right)} + 6 {\Psi\left(-\frac 1 2,-\frac 3 4, \frac {\beta-2} 3\right)} +\frac {32} {3(\beta+1)} {\Psi\left(\ \frac {\beta-2} 3,-\frac 1 2,-\frac 3 4\right)} \\ & -\frac 2 3\left( \beta+1+\frac 1 { \beta+1}\right) \log \frac {3\pi} 2 -8\mathcal C\left( \beta \right) -23{\bf C}, \end{aligned}$$ where the right hand side is an explicit function of $\beta$. A graph of this function is depicted in Fig. [3](#Platonic){reference-type="ref" reference="Platonic"} as a dashed line. As $\beta\to -1^+$, the determinant of Laplacian grows without any bound in accordance with the asymptotics $$\label{IdealCube} \log { \det \Delta^{4\pi}_{Cube}} =\left(-\frac 4 3 \log(\beta+1) +\frac 2 3 \log 3 -\frac 4 3 +16\zeta'_R(-1) \right)\frac 1 {\beta+1} -4\log (\beta+1) + O(1)$$ of the right hand side in [\[DetCube\]](#DetCube){reference-type="eqref" reference="DetCube"}. In the limit $\beta=-1$ we get a surface isometric to an ideal cube: a surface of Gaussian curvature $-3$ with eight cusps, cf. [@Judge; @Judge2]. The spectrum of the corresponding Laplacian is no longer discrete.* *To the case of a Euclidean cube there corresponds the value $\beta=-1/4$. 
The formula [\[DetCube\]](#DetCube){reference-type="eqref" reference="DetCube"} for the determinant reduces to $$\begin{aligned} \log { \det \Delta^{4\pi}_{Cube}}|_{\beta=-1/4} =& \frac {32} 3 \zeta_R'(-1) - \frac {37}{18}\log 2 + \frac {25}{12}\log 3 \\ & + \frac {16} 3 \log \Gamma\left(\frac 2 3\right) - \frac {86} 9 \log \Gamma\left (\frac 3 4\right) + \frac{19} 9 \log \pi. \end{aligned}$$* **Example 26** (Octahedron). *Now we can deduce yet another formula for the determinant of Laplacian on a regular octahedron of Gaussian curvature $3\beta+1$ with (six) conical singularities of order $\beta$. In Theorem [Theorem 24](#TOT){reference-type="ref" reference="TOT"} we take $$\pmb\beta=\frac {\beta-3} 4 \cdot 0+ \left(-\frac 1 2\right)\cdot 1+ \left(-\frac 2 3\right)\cdot \infty.$$ As a result, after an appropriate rescaling we obtain $$\begin{aligned} \log & \,{ \det \Delta^{4\pi}_{Octahedron}} = 24\Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} \Bigr)- \frac {13}{12} \log 3 +\frac{71}{18}\log 2 \\ &+ \frac {15}{\beta+1}\Psi\left(\frac {\beta-3} 4,-\frac 1 2,-\frac 2 3\right ) + 6{\Psi\left(-\frac 1 2,\frac {\beta-3} 4,-\frac 2 3\right)} +\frac {32} 3 \Psi\left(-\frac 2 3,-\frac 1 2,\frac {\beta-3} 4\right) \\ & - \frac 1 2 \left(\beta+1+\frac 1 {\beta+1} \right) \log \frac {8\pi} 3 -6\mathcal C\left( \beta\right) -23{\bf C}+\frac 5 3 \log \pi. \end{aligned}$$ * ## Determinant for icosahedral triangulation The icosahedral Belyi function is given by $$\label{BMid} f(x)=1728 \frac{ x^5(x^{10}-11 x^5-1)^5} {(x^{20}+228(x^{15}-x^{5})+494 x^{10}+1)^3},\quad \deg f =60.$$ The ramification divisor of $f$ is $$\pmb f= 2\cdot \left\{20 \text{ poles of } f\right\} + 4\cdot\left\{12 \text{ zeros of } f\right\}+1\cdot\left\{30 \text{ zeros of } f-1\right\},\quad|\pmb f|=118.$$ This defines a tessellation of the standard round sphere by bicolored spherical $(2,3,5)$ double triangles, cf. Fig. [1](#Icos Triang Pic){reference-type="ref" reference="Icos Triang Pic"}.
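As a quick consistency check of the icosahedral map [\[BMid\]](#BMid){reference-type="eqref" reference="BMid"} (illustrative only; Python standard library, helper name `f_ico` is ours), one can verify the counting in the ramification divisor against $\deg f=60$ and Riemann-Hurwitz, and confirm numerically that $f$ has a zero of order $5$ at $x=\infty$, i.e. $x^5 f(x)\to 1728$:

```python
from math import isclose

def f_ico(x):
    # Icosahedral Belyi map of degree 60
    num = 1728 * x**5 * (x**10 - 11 * x**5 - 1)**5
    den = (x**20 + 228 * (x**15 - x**5) + 494 * x**10 + 1)**3
    return num / den

# Local degrees over 0, 1, infinity each sum to deg f = 60:
# 12 zeros of local degree 5, 30 simple critical points over 1,
# 20 poles, each of local degree 3
assert 12 * 5 == 30 * 2 == 20 * 3 == 60
# |f| = sum of ramification orders = 2*deg f - 2 (Riemann-Hurwitz)
assert 4 * 12 + 1 * 30 + 2 * 20 == 2 * 60 - 2 == 118
# x = 0 and x = infinity are among the 12 zeros (vertices of the icosahedron);
# the numerator has degree 55, so x^5 * f(x) -> 1728 as x -> infinity
assert f_ico(0.0) == 0.0
assert isclose(1e3**5 * f_ico(1e3), 1728.0, rel_tol=1e-9)
```
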
The $20$ poles of $f$ are the coordinates of the centers of the faces, the $30$ solutions to $f(x)=1$ are the edge midpoints, and the $12$ zeros of $f$ ($x=\infty$ is also a zero of $f$) are the vertices of a regular icosahedron inscribed into the sphere. In terms of the dodecahedron that is dual to the icosahedron: The poles of $f$ are the coordinates of the $20$ vertices, the $12$ zeros of $f$ are the centers of its faces, the $30$ solutions to the equation $f(x)=1$ correspond to the edge midpoints. A picture of the corresponding *dessin d'enfant* can be found e.g. in [@Margot; @Zvonkin Fig. 1]. For the icosahedral Belyi function evaluation of the right hand side in [\[C_F\]](#C_F){reference-type="eqref" reference="C_F"} gives $$\label{CfIT} C_f=\frac{139} {15} \log 2 + \frac {63} {10}\log 3 + \frac{125}{36}\log 5.$$ For the pullback of the divisor $\pmb\beta$ by $f$ we have $$f^*\pmb\beta= (5\beta_0+4)\cdot\left\{12 \text{ zeros of } f\right\}+(2\beta_1+1)\cdot\left\{30 \text{ zeros of } f-1\right\}+(3\beta_\infty+2)\cdot \left\{20 \text{ poles of } f\right\}.$$ **Theorem 27** (Icosahedral triangulation). *Let $m_{\pmb\beta}$ be the unit area Gaussian curvature $2\pi(|\pmb\beta|+1)$ metric of $S^2$-like double triangle, see Section [2.1](#PrelimTriangulation){reference-type="ref" reference="PrelimTriangulation"}. 
Then for the determinant of Laplacian corresponding to the area $60$ pullback metric $f^*m_{\pmb\beta}$, where $f$ is the icosahedral Belyi map [\[BMid\]](#BMid){reference-type="eqref" reference="BMid"}, we have $$\begin{aligned} \log & \frac { \det \Delta_{f^*\pmb\beta}} {60} = 60 \Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} -{\bf C}\Bigr)+ C_f \\ &+\frac {48} 5 \frac {\Psi(\beta_0,\beta_1,\beta_\infty)}{\beta_0+1} + \frac {15} 2 \frac {\Psi(\beta_1,\beta_0,\beta_\infty)}{\beta_1+1} + \frac {80} 9 \frac {\Psi(\beta_\infty ,\beta_1,\beta_0)}{\beta_\infty+1} \\ & -2 \left(5\beta_0+5+\frac 1 {5\beta_0+5}\right)\log 5 -5 \left(2\beta_1+2+\frac 1 {2\beta_1+2}\right)\log 2 \\& -\frac {10} 3 \left(3\beta_\infty+3+\frac 1 {3\beta_\infty+3}\right)\log 3 \\& -12\mathcal C\left( 5\beta_0+4\right)-30\mathcal C\left( 2\beta_1+1\right) -20\mathcal C\left( 3\beta_\infty+2 \right) +{\bf C}, \end{aligned}$$ where $\mathcal C_{\pmb\beta} =\mathcal C(\beta_0)+\mathcal C(\beta_1)+\mathcal C(\beta_\infty)$. Recall that $\det \Delta_{\pmb\beta}$ stands for an explicit function [\[CalcVar\]](#CalcVar){reference-type="eqref" reference="CalcVar"}, whose values are the determinants of the unit area $S^2$-like double triangles of Gaussian curvature $2\pi(|\pmb\beta|+2)$. The function $\beta\mapsto \mathcal C(\beta)$ is defined in [\[cbeta\]](#cbeta){reference-type="eqref" reference="cbeta"}, the function $\Psi$ is the same as in [\[eqn_Psi\]](#eqn_Psi){reference-type="eqref" reference="eqn_Psi"}, and $\bf C$ is the constant introduced in [\[bfC\]](#bfC){reference-type="eqref" reference="bfC"}.* *Proof.* For this tessellation we find the value [\[CfIT\]](#CfIT){reference-type="eqref" reference="CfIT"} of $C_f$ in exactly the same way as for the octahedral one in the proof of Theorem [Theorem 24](#TOT){reference-type="ref" reference="TOT"}. The calculations are a bit tedious but straightforward. We omit the details. ◻ **Example 28** (Icosahedron). 
*Here we find an explicit expression for the spectral determinant of a regular icosahedron of area $4\pi$ and Gaussian curvature $6\beta +1$ with $12$ conical points of order $\beta$. In Theorem [Theorem 27](#IcoTr){reference-type="ref" reference="IcoTr"} we take $$\label{divIcos} \pmb\beta=\frac{\beta-4}5 \cdot 0 +\left(-\frac 1 2\right)\cdot 1 +\left(-\frac 2 3 \right) \cdot \infty.$$ In particular, in the case $\beta=0$ all conical singularities disappear and we obtain a surface isometric to the standard round sphere $x_1^2+x_2^2+x_3^2=1$ in $\Bbb R^3$.* *As a consequence of Theorem [Theorem 27](#IcoTr){reference-type="ref" reference="IcoTr"} and the standard rescaling property, for the divisor [\[divIcos\]](#divIcos){reference-type="eqref" reference="divIcos"} we obtain $$\label{DetIcos} \begin{aligned} \log\, & { \det \Delta^{4\pi}_{Icosahedron}} = 60 \Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} \Bigr)+ \frac{19} {15} \log 2 - \frac {61} {30}\log 3 + \frac{65}{36}\log 5 \\ &+\frac {48} {\beta+1} {\Psi\left( \frac{\beta-4}5,-\frac 1 2,-\frac 2 3\right)}+ 15 {\Psi\left(-\frac 1 2,\frac{\beta-4}5,-\frac 2 3\right)} + \frac {80} 3 {\Psi\left(-\frac 2 3,-\frac 1 2,\frac{\beta-4}5\right)} \\ & - \left(\beta+1+\frac 1 {\beta+1}\right)\log \frac {5\pi}{3} -12\mathcal C\left( \beta\right) -59{\bf C} +\frac 8 3 \log \pi, \end{aligned}$$ where the right-hand side is an explicit function of $\beta$. A graph of this function is depicted in Fig. [3](#Platonic){reference-type="ref" reference="Platonic"} as a dash-dotted line.* *As $\beta\to -1^+$ the determinant increases without any bound in accordance with the asymptotics $$\label{IdealIcosahedron} \log { \det \Delta^{4\pi}_{Icosahedron}} =\Bigl( -2\log(\beta+1)+\log 5 -2+24\zeta'_R(-1) \Bigr)\frac 1 {\beta+1}-6\log(\beta+1)+O(1)$$ of the right-hand side in [\[DetIcos\]](#DetIcos){reference-type="eqref" reference="DetIcos"}. In the limit $\beta=-1$ we obtain an ideal icosahedron, cf. 
[@Judge; @Judge2; @KimWilkin]. The spectrum of the corresponding Laplacian is no longer discrete.* *In the case $\beta=-1/6$ we obtain a flat regular icosahedron: the pullback of the flat metric $m_{\pmb\beta}^S=|z|^{-4/3}|z-1|^{-1}|dz|^2$ by $f$ is the metric $$f^*m_{\pmb\beta}^S= 3\cdot 2^2\cdot 5^2 | x(x^{10}-11 x^5-1)|^{-1/3}|dx|^2$$ of a flat regular icosahedron. The equality [\[DetIcos\]](#DetIcos){reference-type="eqref" reference="DetIcos"} reduces to $$\log { \det \Delta^{4\pi}_{Icosahedron}} |_{\beta=-1/6}= \frac {96}{5} \zeta_R'(-1)+ \frac {18} 5 \log(\sqrt5-1) + \frac 6 5\log( \sqrt 5+1 )+\frac {23} 2 \log \pi$$ $$+\frac {214}{45} \log 2 -\frac {917}{60} \log 3+\frac {251}{36}\log 5 +\frac{72} 5 \log \Gamma\left(\frac 4 5\right)-\frac {211}{5} \log \Gamma\left(\frac 2 3\right)+\frac {24}{5} \log \Gamma\left(\frac 3 5 \right).$$* *Let us also note that for Klein's tessellation by $(2,3,7)$-triangles, one should take $\beta=-2/7$ in [\[divIcos\]](#divIcos){reference-type="eqref" reference="divIcos"} and [\[DetIcos\]](#DetIcos){reference-type="eqref" reference="DetIcos"}. This is related to Klein's quartic [@SSW; @KW; @ShVo]: the genus $3$ surface with the highest possible number of automorphisms [@harts]; it is also the lowest genus Hurwitz surface. It is a longstanding open problem to find the exact value of the spectral determinant of Klein's quartic; for a numerical study see [@StrUs]. Klein's quartic is also conjectured to be a stationary point of the spectral determinant, cf. Theorem [Theorem 12](#ExtDet){reference-type="ref" reference="ExtDet"}.* **Example 29** (Dodecahedron). *Here we find an explicit expression for the spectral determinant of a regular dodecahedron of area $4\pi$ and Gaussian curvature $K= 10\beta+1$ with twenty conical singularities of order $\beta$.
With this aim in mind, in Theorem [Theorem 27](#IcoTr){reference-type="ref" reference="IcoTr"} we take $$\label{divDod} \pmb\beta= \left(-\frac 4 {5}\right)\cdot 0 +\left(-\frac 1 2\right)\cdot 1 +\frac {\beta-2} 3 \cdot \infty.$$ Then after an appropriate rescaling we obtain $$\label{DODEXPL} \begin{aligned} \log\, & { \det \Delta^{4\pi}_{Dodecahedron}} = 60 \Bigl( \log \det \Delta_{\pmb\beta} +\mathcal C_{\pmb\beta} \Bigr)+ \frac{19} {15} \log 2 + \frac {33} {10}\log 3 - \frac{127}{36}\log 5 \\ &+ {48} {\Psi\left(-\frac 4 {5},-\frac 1 2,\frac {\beta-2} 3\right)} + {15} {\Psi\left(-\frac 1 2,-\frac 4 {5},\frac {\beta-2} 3\right)} + \frac {80} {3(\beta+1)} {\Psi\left(\frac {\beta-2} 3,-\frac 1 2,-\frac 4 {5}\right)} \\ & -\frac {5} 3 \left(\beta+1+\frac 1 {\beta+1}\right)\log \frac {3\pi}{5} -20\mathcal C\left( \beta \right)-59{\bf C} +4\log {\pi}, \end{aligned}$$ where the right-hand side is an explicit function of $\beta$. A graph of this function is depicted in Fig. [3](#Platonic){reference-type="ref" reference="Platonic"} as a long-dashed line.* *As $\beta\to -1^+$, the determinant increases without any bound in accordance with the asymptotics $$\label{IdealDodecahedron} \begin{aligned} \log { \det \Delta^{4\pi}_{Dodecahedron}}=\Bigl( - \frac {10} 3 (\log (\beta+1) - \log 3+1) + & 40\zeta'_R(-1) \Bigr)\frac 1 {\beta+1} \\ & -10 \log(\beta+1)+O(1) \end{aligned}$$ of the right-hand side in [\[DODEXPL\]](#DODEXPL){reference-type="eqref" reference="DODEXPL"}. 
In the limit $\beta=-1$ the surface $(\overline{\Bbb C}_x,f^*m_{\pmb\beta})$ becomes isometric to an ideal dodecahedron [@Judge; @Judge2].* *In the case of a Euclidean dodecahedron, the equality [\[DODEXPL\]](#DODEXPL){reference-type="eqref" reference="DODEXPL"} reduces to $$\begin{aligned} \log\, { \det \Delta^{4\pi}_{Dodecahedron}}|_{\beta=-1/10}=\frac{83}{180} \log 2 - \frac 7{135} \log 3 - \frac {19}{72} \log5 + \frac {19}{108}\log( \sqrt 5-1 ) \\ -\frac {19}{27}\log \Gamma\left(\frac 7{10}\right) -\frac{19} {27}\log\Gamma\left(\frac 4 5 \right)+\frac {19}{27} \log \pi+\frac 1 6- 4\zeta_R'(-1)-20\mathcal C\left(-\frac 1{10}\right), \end{aligned}$$ where $\mathcal C\left(-\frac 1{10}\right)$ can be expressed in terms of Riemann zeta and gamma functions [@KalvinJFA].* We end this section with a remark that the spectral determinants of the surfaces of Archimedean solids can be calculated in the same way as above thanks to the Belyi maps found in [@Margot; @Zvonkin]. Let us also notice that it would be interesting to study the cone to cusp degeneration [@Judge; @Judge2; @KimWilkin] and obtain, as a consequence of our results, some explicit formulae for the relative spectral determinant [@Mueller] of hyperbolic surfaces with cusps. We hope to address this elsewhere. I would like to thank Andrea Malchiodi for correspondence, Iosif Polterovich for discussions of spectral invariants, and Bin Xu for correspondence and drawing my attention to the reference [@KimWilkin]. E. Aurell, P. Salomonson, On functional determinants of Laplacians in polygons and simplicial complexes. Comm. Math. Phys. 165 (1994), no. 2, 233--259. E. Aurell, P. Salomonson, Further results on Functional Determinants of Laplacians in Simplicial Complexes, Preprint May 1994, arXiv:hep-th/9405140 G.V. Belyi, On Galois extensions of a maximal cyclotomic field, Math. USSR Izv. 14, 247--256 (1980) J. Bétréma, A. Zvonkin, Plane trees and Shabat polynomials, Disc. Math. 153 (1996), 47--58 C. J.
Bishop, True trees are dense, Invent. Math., 197 (2014), pp. 433--452 M. Bonk, A. Eremenko, Canonical embeddings of pairs of arcs, Comput. Methods Funct. Theory 21 (2021), 825--830 D. Burghelea, L. Friedlander, and T. Kappeler, Meyer-Vietoris type formula for determinants of elliptic differential operators, J. Funct. Anal. 107 (1992), 34--65 L. Cantini, P. Menotti, D. Seminara, Proof of Polyakov conjecture for general elliptic singularities, Phys. Lett. B 517 (2001), arXiv:hep-th/0105081 C.H. Clemens, A scrapbook of complex curve theory. Grad. Stud. Math. 55, Amer. Math. Soc., Providence, RI, 2003. P.B. Cohen, C. Itzykson, J. Wolfart, Fuchsian triangle groups and Grothendieck dessins. Variations on a theme of Belyi. Commun. Math. Phys. 163, 605--627 (1994) P. Esposito, A. Malchiodi, Critical metrics for Log-Determinant functionals in conformal geometry, J. Diff. Geom. to appear, arXiv:1906.08188 A. Grothendieck, Esquisse d'un programme. Geometric Galois actions, London Math. Soc. Lecture Note Ser., vol. 242, Cambridge Univ. Press, Cambridge, 1997. R.S. Hartshorne, Algebraic geometry, Springer-Verlag, 1977 C.D. Hodgson, I. Rivin, A characterization of compact convex polyhedra in hyperbolic 3-space. Invent. Math. 111 (1993), 77--111 D.A. Hejhal, Monodromy groups and Poincaré series, Bull. Amer. Math. Soc. 84 (1978), 339--376 C. Judge, Conformally converting cusps to cones. Conform. Geom. Dyn. 2 (1998), 107--113 C. Judge, On the existence of Maass cusp forms on hyperbolic surfaces with cone points. J. Amer. Math. Soc. 8 (1995), 715--759. V. Kalvin, Determinants of Laplacians for constant curvature metrics with three conical singularities on the $2$-sphere, Calc. Var. PDE 62 (2023), Paper No. 59, 35 pp., arXiv:2112.02771 V. Kalvin, Determinant of Friedrichs Dirichlet Laplacians on 2-dimensional hyperbolic cones, Commun. Contemp. Math. (2022), Paper No. 2150107, 14 pp., arXiv:2011.05407 V.
Kalvin, Polyakov-Alvarez type comparison formulas for determinants of Laplacians on Riemann surfaces with conical singularities, J. Funct. Anal. 280 (2021), no. 7, Paper No. 108866, arXiv:1910.00104 V. Kalvin, Spectral determinant on Euclidean isosceles triangle envelopes, J. Geom. Anal. 29 (2019), pp. 785--798 H. Karcher, M. Weber, The Geometry of Klein's Riemann Surface. The eightfold way, MSRI Publ. 35 (1999), 9--49, Cambridge Univ. Press. Y.-H. Kim, Surfaces with boundary: their uniformizations, determinants of Laplacians, and isospectrality. Duke Math. J. 144 (2008), 73--107, arXiv:math/0609085 S. Kim, G. Wilkin, Analytic convergence of harmonic metrics for parabolic Higgs bundles, J. Geom. Phys. 127 (2018), 55--67 F. Klein, Vorlesungen über die Entwicklung der Mathematik im $19$. Jahrhundert Teil 1, Berlin Verlag von Julius Springer 1926 F. Klein, Vorlesungen über das Ikosaeder und die Auflösung der Gleichungen vom fünften Grade. Leipzig, Druck und Verlag von B.G. Teubner, 1884. S. Klevtsov, Lowest Landau level on a cone and zeta determinants, J. Phys. A: Math. Theor. 50 (2017), 234003 A. Kokotov, D. Korotkin, Bergman tau-function: from random matrices and Frobenius manifolds to spaces of quadratic differentials. J. Phys. A 39 (2006), 8997--9013 I. Kra, Accessory parameters for punctured spheres, Trans. Amer. Math. Soc. 313 (1989), 589--617 S.K. Lando, A.K. Zvonkin, Graphs on surfaces and their applications. With an appendix by Don B. Zagier. Encyclopedia of Mathematical Sciences, 141. Low-Dimensional Topology, II. Springer-Verlag, Berlin, 2004. N. Magot, A. Zvonkin, Belyi functions for Archimedean solids, Discrete Mathematics 217 (2000), 249--271 W. Müller, Relative zeta functions, relative determinants and scattering theory, Comm. Math. Phys. 192 (1998), 309--347 B. Osgood, R. Phillips, P. Sarnak, Extremals of Determinants of Laplacians. J. Funct. Anal. 80 (1988), 148--211 B. Osgood, R. Phillips, P. Sarnak, Compact isospectral sets of surfaces. J. Funct. Anal.
80 (1988), 212--234 B. Osgood, R. Phillips, P. Sarnak, Moduli space, heights and isospectral sets of plane domains, Ann. of Math. 129 (1989), 293--362 J. Park, L.A. Takhtajan, L.-P. Teo, Potentials and Chern forms for Weil-Petersson and Takhtajan-Zograf metrics on moduli spaces, Adv. Math. 305 (2017), 856--894 J. Park, L.-P. Teo, Liouville action and holography on quasi-Fuchsian deformation spaces, Comm. Math. Phys. 362 (2018), 717--758 I. Rivin, Euclidean structures on simplicial surfaces and hyperbolic volume, Annals of Math. 139 (1994), 553--580 I. Rivin, A characterization of ideal polyhedra in hyperbolic 3-space, Annals of Math. 143 (1996), 51--70 L. Schneps, Dessins d'enfants on the Riemann sphere, London Math. Soc. Lecture Notes 200, Cambridge Univ. Press, 1994 P. Scholl, A. Schürmann, J.M. Wills, Polyhedral models of Felix Klein's group. The Mathematical Intelligencer 24, 37--42 (2002). G. Schumacher, S. Trapani, Variation of cone metrics on Riemann surfaces, J. Math. Anal. Appl. 311 (2005), 218--230 G. Schumacher, S. Trapani, Weil-Petersson geometry for families of hyperbolic conical Riemann surfaces, Michigan Math. J. 60 (2011), 3--33 G.B. Shabat, Square-tiled surfaces and curves over number fields, Preprint 2022, arXiv:2212.07755 \[math.AG\] G.B. Shabat, V.A. Voevodsky, Drawing curves over number fields. In: The Grothendieck Festschrift, Vol III, Basel, Boston: Birkhäuser, 1990, pp. 199--227 M. Spreafico, S. Zerbini, Spectral analysis and zeta determinant on the deformed spheres, Commun. Math. Phys. 273 (2007), 677--704 B. Springborn, Ideal hyperbolic polyhedra and discrete uniformization, Discrete & Computational Geometry (2020), 64:63--108 A. Strohmaier, V. Uski, An algorithm for the computation of eigenvalues, spectral zeta functions and zeta-determinants on hyperbolic surfaces. Comm. Math. Phys. 317 (2013), no. 3, 827--869; Correction: Comm. Math. Phys. 359 (2018), no. 1, 427. L.A. Takhtajan, L.-P.
Teo, Liouville action and Weil-Petersson Metric on deformation spaces, global Kleinian reciprocity and holography, Comm. Math. Phys. 239 (2003), 183--240 L. Takhtajan, P. Zograf, Hyperbolic 2-spheres with conical singularities, accessory parameters and Kähler metrics on $\mathcal M_{0,n}$, Trans. Amer. Math. Soc. 355 (2003), no. 5, 1857--1867 W.P. Thurston, Shapes of polyhedra and triangulations of the sphere, Geometry and Topology Monographs, V 1 (1998), pp. 511--549 M. Troyanov, Les surfaces euclidiennes à singularités coniques, L'Enseignement Mathématique 32 (1986), 79--94. M. Troyanov, Metrics of constant curvature on a sphere with two conical singularities, Lecture Notes in Math. 1410 (1989), 296--306 M. Troyanov, Prescribing curvature on compact surfaces with conical singularities, Trans. Amer. Math. Soc. 134 (1991), 792--821 V.A. Voevodsky, G.B. Shabat, Equilateral triangulations of Riemann surfaces, and curves over algebraic number fields. Soviet. Dokl. Math. 39, 38--41 (1989) W. Weisberger, Conformal invariants for determinants of Laplacians on Riemann surfaces, Commun. Math. Phys. 112 (1987), 633--638 J. Wolfart, ABC for polynomials, dessins d'enfants, and uniformization --- a survey, Schr. Wiss. Ges. Johann Wolfgang Goethe Univ. Frankfurt am Main, 20, Franz Steiner Verlag Stuttgart, Stuttgart, 2006, 313--345. A. Zorich, Flat surfaces. Frontiers in number theory, physics, and geometry. I, 437--583. Springer-Verlag, Berlin, 2006 A. Zamolodchikov, Al. Zamolodchikov, Conformal bootstrap in Liouville field theory, Nuclear Physics B 477 (1996), 577--605 P. Zograf, L. Takhtajan, On the Liouville equation, accessory parameters and the geometry of the Teichmüller space of the Riemann surfaces of genus $0$, Mat. Sb. 132 (1987), 147--166 (Russian); Engl. transl. in: Math. USSR Sb. 60(1988), 143--161 [^1]: Department of Mathematics, Dawson College, 3040 Sherbrooke St. W, Montreal, Quebec H3Z 1A4, Canada; E-mail: vkalvin\@gmail.com
--- abstract: | In this paper, we prove bilinear sparse domination bounds for a wide class of Fourier integral operators of general rank, as well as oscillatory integral operators associated to Hörmander symbol classes $S^m_{\rho,\delta}$ for all $0\leq\rho\leq 1$ and $0\leq\delta< 1$; a notable example is the Schrödinger operator. As a consequence, one obtains weak $(1,1)$ estimates, vector-valued estimates, and a wide range of weighted norm inequalities for these classes of operators. address: | Tobias Mattsson\ Department of Mathematics, Uppsala University\ S-751 06 Uppsala, Sweden author: - Tobias Mattsson bibliography: - references.bib title: Bilinear Sparse Domination for Oscillatory Integral Operators --- # Introduction In this paper we study the bilinear sparse domination of oscillatory integral operators of the form: $$T_a^\varphi f(x) := \int_{\mathbb{R}^{n}} e^{i\varphi(x,\xi)}\,a(x,\xi)\,\widehat f (\xi) \,\mathrm{d}\xi$$ where $a$ is an amplitude of Hörmander type, and the phase function class will satisfy regularity conditions in $\xi$ such that the operator is either a Fourier integral operator (FIO) or an oscillatory integral operator (OIO). For the definitions of these two classes, see sections [2.3](#sec:fio){reference-type="ref" reference="sec:fio"} and [2.4](#sec:oio){reference-type="ref" reference="sec:oio"} below.\ Recently, a significant advancement in pointwise estimates has been made with the development of the theory of sparse domination. This theory introduces the concept of sparse domination for operators $T$ on function spaces, characterized by the following inequalities: $$\begin{aligned} |Tf(x)| \lesssim {\Lambda}_{\mathcal{S}, q}f(x) \quad \text{and} \quad |\langle{Tf,g}\rangle| \lesssim {\Lambda}_{{\mathcal{S}}, q, p'}(f, g).\end{aligned}$$ We call the first inequality *sparse domination* and the second one *bilinear sparse domination*.
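To fix ideas, the bilinear sparse form used throughout the sparse domination literature is $\Lambda_{\mathcal{S},q,p'}(f,g)=\sum_{Q\in\mathcal{S}}|Q|\,\langle f\rangle_{q,Q}\,\langle g\rangle_{p',Q}$, where $\langle h\rangle_{q,Q}:=\big(|Q|^{-1}\int_Q|h|^q\big)^{1/q}$; it is an assumption here that this matches the normalization fixed in Section 2.2. A minimal one-dimensional discretization (hypothetical helper names, purely illustrative):

```python
def lq_average(samples, q):
    # Discrete L^q average <h>_{q,Q} = (|Q|^{-1} int_Q |h|^q)^{1/q},
    # approximated over uniform sample points of the cube Q
    return (sum(abs(v) ** q for v in samples) / len(samples)) ** (1.0 / q)

def bilinear_sparse_form(f, g, cubes, q, p_prime):
    # Lambda_{S,q,p'}(f,g) = sum over Q in S of |Q| <f>_{q,Q} <g>_{p',Q};
    # f, g are sampled on a uniform grid, each cube is (start index, stop index, |Q|)
    return sum(length * lq_average(f[i:j], q) * lq_average(g[i:j], p_prime)
               for (i, j, length) in cubes)

# With f = g = 1 on [0, 1) and the single cube Q = [0, 1), every average is 1,
# so the sparse form equals |Q| = 1
f = g = [1.0] * 100
assert abs(bilinear_sparse_form(f, g, [(0, 100, 1.0)], 2, 2) - 1.0) < 1e-12
```
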
See section [2.2](#sparse domination theory){reference-type="ref" reference="sparse domination theory"} for the definition of ${\Lambda}_{ \mathcal{S} , q}$ and ${\Lambda}_{{\mathcal{S}},q,p'}$.\ The significance of proving sparse bounds extends well beyond their relationship to $L^p$ bounds. These bounds are useful for establishing weighted and vector-valued inequalities. This approach has been applied in many contexts, such as Bochner-Riesz multipliers [@LMRNC], singular integrals [@LSS; @grafakos], various Hilbert transforms, multipliers and pseudodifferential operators [@BC; @yamamoto]. Sparse bounds were also employed in the proof of the $A_2$ conjecture [@Lacey; @Lerner_simple_2012].\ In [@BC], D. Beltran and L. Cladek proved a sharp (up to the endpoint) bilinear sparse domination estimate for classical pseudodifferential operators with amplitudes in $S^m_{\rho,\delta}(\mathbb{R}^{n})$, $0<\delta\leq \rho< 1$. Building upon this result, we investigate sparse domination for Fourier integral operators and other oscillatory integral operators with nonlinear phase functions associated to partial differential equations, such as the Schrödinger equation. In the former case we focus on phase functions belonging to the Dos Santos Ferreira-Staubach class $\Phi^2$, while in the latter case we investigate the sparse domination of oscillatory integral operators with phase functions in the class $\textart{F}^k$, which was initially introduced by A. J. Castro, A. Israelsson, W. Staubach, and M. Yerlanov [@CISY].\ The sparse bounds established in our study yield a range of consequential results, including weighted and vector-valued inequalities for FIOs and OIOs. While some of the weighted norm inequalities have previously been established in the case of Fourier (see D. Dos Santos Ferreira and W. Staubach [@DS]) and oscillatory integral operators (see A. Bergfeldt and W.
Staubach [@BS]), our work unveils new weighted norm inequalities, and gives quantitative control of the weighted operator norm. These new results, along with the previously known ones, are discussed in detail in Section [3](#Sec:weightedConsequences){reference-type="ref" reference="Sec:weightedConsequences"}. We choose to only discuss our main results about sparse domination in the introduction, beginning with the Fourier integral operators.\ To this end, we introduce the following notation: set $$\label{decay_FIO} m_\rho(q,p):=n(1-\rho)\Big|\frac{1}{q}-\frac{1}{p}\Big|,$$ and $$\zeta:=\min\Big\{0,\frac{n}{2}(\rho-\delta)\Big\}.$$ This latter number is the sharp regularity exponent in the $L^2$ boundedness of FIOs and also OIOs. **Theorem 1** (Fourier Sparse Domination). *Assume that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$ for $0\leq \rho\leq 1$ and $0\leq \delta< 1$, and $\varphi(x,\xi)$ is in the class $\Phi^2$ with rank $0\leq\kappa\leq n-1$ and is *SND*. Then for any compactly supported bounded functions $f, g$ on $\mathbb{R}^n$, there exist sparse collections $\mathcal{S}$ and $\widetilde{\mathcal{S}}$ of dyadic cubes such that $$\begin{aligned} \begin{cases} |\left<T_a^\varphi f, g\right>|\le C(m, q, p)\Lambda_{\mathcal{S}, q, p'}(f, g), \text{ and }\\ |\left<T_a^\varphi f, g\right>|\le C(m, q, p)\Lambda_{\widetilde{\mathcal{S}}, p', q}(f, g), \end{cases} \end{aligned}$$ for all pairs $(q, p')$ and $(p', q)$ such that $$\begin{aligned} \label{FIO_ranges} \begin{cases} m<-m_{\rho}(q,2)-\kappa\rho\Big(\frac{1}{p}-\frac{1}{2}\Big)-\zeta, & 1\le q\le p \le 2\\ m<-m_{\rho}(q,p)-\zeta\big(\frac{2}{q}-1\big), & 1\le q\le 2 \le p \le q'. \end{cases} \end{aligned}$$ (See figure [\[figure numerology\]](#figure numerology){reference-type="ref" reference="figure numerology"})* If the phase $\varphi$ is linear (or $\kappa=0$), the FIOs reduce to pseudodifferential operators; this case was investigated by Beltran and Cladek in [@BC].
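Concretely, in this pseudodifferential case ($\kappa=0$), and with $\rho\geq\delta$ so that $\zeta=0$, the first condition in [\[FIO_ranges\]](#FIO_ranges){reference-type="eqref" reference="FIO_ranges"} reads

$$m<-n(1-\rho)\Big(\frac{1}{q}-\frac{1}{2}\Big),\qquad 1\leq q\leq p\leq 2,$$

which is precisely the Beltran-Cladek range recalled below.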
Their sparse domination result suggests that the bilinear sparse domination estimates for pseudodifferential operators could be extended in two directions: first, pseudodifferential operators are examples of Fourier integral operators with phase function of rank $0$, so one may pass to general rank; second, one may vary the order and type of the amplitude, i.e. $m$, $\rho$ and $\delta$. The motivation for investigating Fourier integral operators with amplitude types different from $\rho=1$ and $\delta=0$, and ranks different from $n-1$, comes from the theory of partial differential equations, scattering theory, inverse problems, and tomography, to name a few. In this paper, we investigate and obtain optimal results in all of these directions.\ We now turn to the oscillatory integral operators. Let $\varkappa=\min(\rho,1-k)$ and set $$\label{decay_OIO} m_\varkappa(q,p):=n(1-\varkappa)\Big|\frac{1}{q}-\frac{1}{p}\Big|.$$ We also extend the class of phase functions to include nonlinear phase functions of the type $x\cdot\xi+|\xi|^k$. This second class includes various oscillatory integral operators described in more detail below. **Theorem 2** (Oscillatory Sparse Domination). *Let $n\geq 1$, $0<k<\infty$. Assume that $\varphi\in \textart F^k$ is *SND*, satisfies the *LF*$(\mu)$-condition for some $0<\mu\leq 1$, and the $L^2$-condition [\[eq:L2 condition_old\]](#eq:L2 condition_old){reference-type="eqref" reference="eq:L2 condition_old"}. Assume also that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$ for $0\leq \rho\leq 1$ and $0\leq \delta< 1$.
Then for any compactly supported bounded functions $f, g$ on $\mathbb{R}^n$, there exist sparse collections $\mathcal{S}$ and $\widetilde{\mathcal{S}}$ of dyadic cubes such that $$\begin{aligned} \begin{cases} |\left<T_a^\varphi f, g\right>|\le C(m, q, p)\Lambda_{\mathcal{S}, q, p'}(f, g), \text{ and }\\ |\left<T_a^\varphi f, g\right>|\le C(m, q, p)\Lambda_{\widetilde{\mathcal{S}}, p', q}(f, g), \end{cases} \end{aligned}$$ for all pairs $(q, p')$ and $(p', q)$ such that $$\begin{aligned} \label{OIO_ranges} \begin{cases} m<-m_\varkappa(q,2)-\zeta, & 1\le q\le p \le 2\\ m<-m_\varkappa(q,p)-\zeta\big(\frac{2}{q}-1\big), & 1\le q\le 2 \le p \le q'. \end{cases} \end{aligned}$$ (See figure [\[figure numerology\]](#figure numerology){reference-type="ref" reference="figure numerology"})* The motivation behind investigating the OIOs in this paper stems from the theory of partial differential equations, specifically from the study of dispersive equations. These dispersive equations often involve phase functions $\varphi(x, \xi) = x \cdot \xi + \phi(\xi)$, where $\xi$ represents the wave variable and $x$ denotes the spatial variable. Different choices of $\phi(\xi)$ lead to various important equations. For instance, when $\phi(\xi) = |\xi|^{1/2}$, it corresponds to the water-wave equation. When $\phi(\xi) = |\xi|^{2}$, it relates to the Schrödinger equation. Furthermore, in dimension one, $\phi(\xi) = |\xi|^{3}$ and $\phi(\xi) = \xi|\xi|$ correspond to the Airy and Benjamin-Ono equations, respectively.\ This paper, and the method of obtaining sparse form bounds, is inspired by the works of Beltran and Cladek [@BC], M. T. Lacey and S. Spencer [@LSS], and M. T. Lacey, D. Mena, and M. C. Reguera [@LMRNC]. The essential ingredients of this method are geometrically decaying *$L^p$-improving estimates* (i.e.
$L^q\to L^p$ estimates for $p>q$) on spatially and frequency localised pieces of the operator, as well as optimal $L^p$ estimates on frequency localized pieces.\ For $m<-n(1-\rho)(1/q-1/2)$ in the range $1\leq q\leq p\leq 2$, and $m<-n(1-\rho)(1/q-1/p)$ whenever $1\leq q\leq 2\leq p\leq q'$, Beltran and Cladek obtained sharp (up to the endpoint) bilinear sparse domination estimates for the pseudodifferential operators with symbols in the Hörmander classes $S^m_{\rho,\delta}$ for $0<\delta\leq\rho<1$. In the corresponding ranges of $p,q$ we obtain [\[FIO_ranges\]](#FIO_ranges){reference-type="eqref" reference="FIO_ranges"} for the Fourier integral operator with amplitudes in the Hörmander classes $S^m_{\rho,\delta}$ for $0\leq\delta<1$ and $0\leq\rho\leq 1$. Thus, since Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} reduces to the sharp sparse domination result obtained for the pseudodifferential case in [@BC] whenever $\rho\geq \delta$ and $\kappa=0$, our result is also sharp in that range.\ [\[figure numerology\]]{#figure numerology label="figure numerology"} We shall briefly describe the trapezoid associated with the admissible $(1/q,1/p')$-points corresponding to the case of OIOs for $m<0$. A similar description applies to the numerology in Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"}. Let $\textbf{T}$ be the trapezoid with vertices $$\begin{aligned} \textbf{T}: \begin{cases} \nu_1=\big(1,0\big),& \nu_2=\big(1,\frac{-m-\zeta}{n(1-\varkappa)}\big),\\ \nu_1'=\big(0,1\big), & \nu_2'=\big(\frac{-m-\zeta}{n(1-\varkappa)},1\big), \end{cases}\end{aligned}$$ where $-n(1-\varkappa)\leq m+\zeta\leq-\frac{n(1-\varkappa)}{2}$.
If instead $-\frac{n(1-\varkappa)}{2}-\zeta<m<0$, then we obtain the trapezoid $$\begin{aligned} \textbf{T}': \begin{cases} \mu_1=\big(1/2-\frac{m+\zeta}{n(1-\varkappa)},1/2+\frac{m+\zeta}{n(1-\varkappa)}\big),& \mu_2=\big(1/2-\frac{m+\zeta}{n(1-\varkappa)},1/2\big),\\ \mu_1'=\big(1/2+\frac{m+\zeta}{n(1-\varkappa)},1/2-\frac{m+\zeta}{n(1-\varkappa)}\big), & \mu_2'=\big(1/2,1/2-\frac{m+\zeta}{n(1-\varkappa)}\big). \end{cases}\end{aligned}$$ See figure [\[figure numerology\]](#figure numerology){reference-type="ref" reference="figure numerology"} for a visualization of the trapezoid $\textbf{T}$. One may obtain similar trapezoids for the Fourier integral operators, e.g. one may substitute $\rho$ for $\varkappa$ in $\textbf{T}$ to obtain the corresponding trapezoid for the FIOs. For instance, for the Schrödinger-type phase ($k=2$) one has $\varkappa=\min(\rho,1-k)=-1$, so $n(1-\varkappa)=2n$ and, in the regime $-2n\leq m+\zeta\leq -n$, the vertex $\nu_2$ becomes $\big(1,\frac{-m-\zeta}{2n}\big)$.\ ## Organization and notation {#organization-and-notation .unnumbered} In the first section of the paper (section [2](#Prelim){reference-type="ref" reference="Prelim"}), we introduce the necessary fundamental concepts from microlocal analysis, weighted $A_p$-theory, and sparse domination theory. We also give the definitions of the various classes of amplitudes and phase functions regarding oscillatory integral operators and Fourier integral operators.\ In section [3](#Sec:weightedConsequences){reference-type="ref" reference="Sec:weightedConsequences"} we derive all the corollaries of Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} and Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}; these include weighted norm inequalities, weak $(1,1)$ estimates, and vector-valued inequalities.
We also discuss how these estimates are new in relation to previously established results.\ In section [4](#sec:decomp){reference-type="ref" reference="sec:decomp"} we provide the various decompositions that we employ in each case, along with the tools that we need in order to obtain the geometrically decaying $L^q$-improving estimates in section [5](#sec:proof_oio){reference-type="ref" reference="sec:proof_oio"}. That section is concerned with the proof of $L^q$-improving estimates for oscillatory integral operators. This process is divided into two main steps. The first step is to use the various tools developed in the first section to obtain $L^q$-improving estimates which decay geometrically. In the second step we use the optimal global $L^2$-boundedness of OIOs shown in [@IMS2], along with the geometric decay of the first step, to obtain an $L^2$ estimate for the principal term of the decomposition. We carry out a similar procedure to obtain the $L^1$ and $L^1\to L^\infty$ estimates for the localized pieces and the principal term. By an interpolation procedure we then obtain the desired $L^q$-improving estimates.\ In section [6](#sec:proof_fio){reference-type="ref" reference="sec:proof_fio"} we turn to the proof of the $L^q$-improving estimates pertaining to FIOs. We employ a similar strategy to the one in section [5](#sec:proof_oio){reference-type="ref" reference="sec:proof_oio"}; the resulting analysis differs in the details more so than in the overall strategy.\ Finally, in section [7](#sec:proof){reference-type="ref" reference="sec:proof"} we give the proof of the main results.\ As for notation, in the subsequent analysis we will adopt the customary practice of denoting positive constants in inequalities by the symbol $C$. The specific value of $C$ is not crucial to the problem at hand, but can be determined from known parameters, such as $n$, $p$, and $q$, which are relevant in the given context.
For instance, these parameters may be associated with the seminorms of various amplitudes or phase functions. While the value of $C$ may vary from line to line, it can be estimated if required. In this paper, we refer to $C$ as the "hidden constant".\ Additionally, we adopt the notation $c_1 \lesssim c_2$ as a shorthand for $c_1 \leq Cc_2$. Moreover, we use the notation $c_1 \sim c_2$ when both $c_1 \lesssim c_2$ and $c_2 \lesssim c_1$. ## Acknowledgements. {#acknowledgements. .unnumbered} The author is supported by the Knut and Alice Wallenberg Foundation. The author is also grateful to Andreas Strömbergsson for his support and encouragement. # Preliminaries {#Prelim} We start by recalling the definition of the Littlewood-Paley partition of unity, which is the most basic tool in the frequency decomposition of the operators at hand.\ Let $\psi_0 \in \mathcal{C}_c^\infty(\mathbb{R}^n)$ be equal to $1$ on $B(0,1)$ and have its support in $B(0,2)$. Then let $$\psi_j(\xi) := \psi_0 \left (2^{-j}\xi \right )-\psi_0 \left (2^{-(j-1)}\xi \right ),$$ where $j\geq 1$ is an integer, and set $\psi(\xi) := \psi_1(\xi)$. Then $\psi_j(\xi) = \psi\left (2^{-(j-1)}\xi \right )$, and one has the following Littlewood-Paley partition of unity $$\sum_{j=0}^\infty \psi_j(\xi) = 1,$$ for all $\xi\in\mathbb{R}^n$. We also define the fattened Littlewood-Paley cut-offs: set $$\Psi_j := \psi_{j+1}+\psi_j+\psi_{j-1},$$ with $\psi_{-1}:=\psi_{0}$. Define the Littlewood-Paley operators by $$\psi_j(D) f(x) = \int_{\mathbb{R}^n} \psi_j(\xi)\widehat{f}(\xi)e^{ix\cdot\xi} \,\mathrm{d}\xi.$$ In connection to the $L^q\to L^p$ estimates for FIOs and OIOs below, we will encounter the well-known $L^r$-maximal function $$\label{HLmax} \mathcal{M}_r f(x):=\sup _{B\ni x}\Big(\frac{1}{|B|} \int_{B}|f(y)|^r \,\mathrm{d}y\Big)^{1/r},$$ where the supremum is taken over all balls $B$ containing $x$.
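Note that $\mathcal{M}_r$ is simply a power of the classical Hardy-Littlewood maximal function $\mathcal{M}:=\mathcal{M}_1$; namely

$$\mathcal{M}_r f=\big(\mathcal{M}(|f|^{r})\big)^{1/r},$$

so bounds for $\mathcal{M}_r$ reduce to bounds for $\mathcal{M}$ applied to $|f|^{r}$.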
It is a classical result of Hardy and Littlewood that the $L^r$-maximal function $\mathcal{M}_rf(x)$ is bounded on $L^p$ for $p>r$.\ The class of amplitudes considered in this paper was first introduced by L. Hörmander in [@Hor1]. **Definition 3**. *Let $m\in \mathbb{R}$ and $0\leq \rho, \delta \leq 1$. An amplitude $a(x,\xi)$ in the class $S^m_{\rho,\delta}(\mathbb{R}^{n})$ is a function $a\in \mathcal{C}^\infty (\mathbb{R}^{n}\times \mathbb{R}^{n})$ for which $$\left|\partial_{\xi}^\alpha \partial_{x}^\beta a(x,\xi) \right| \lesssim \langle\xi\rangle ^{m-\rho|\alpha|+\delta|\beta|},$$ for all multi-indices $\alpha$ and $\beta$ and $(x,\xi)\in \mathbb{R}^{n}\times \mathbb{R}^{n}$, where $\langle\xi\rangle:= (1+|\xi|^2)^{1/2}.$* In our subsequent discussion, we will refer to $m$ as the order of the amplitude, while using $\rho$ and $\delta$ to denote its type. Specifically, we will refer to the class $S_{\rho,\delta}^m(\mathbb{R}^n)$, where $0<\rho\leq 1$ and $0\leq \delta<1$, as the classical class. Similarly, the class $S_{0,\delta}^m(\mathbb{R}^n)$ with $0\leq \delta<1$ will be referred to as the exotic class, and we will denote the class $S_{\rho,1}^m(\mathbb{R}^n)$, where $0\leq \rho\leq 1$, as the forbidden class of amplitudes. ## Tools from weighted function spaces In deriving the weighted consequences of Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} and Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}, we will make use of the classical Muckenhoupt class $A_p$ and the reverse Hölder class $RH_p$. We recall the definitions of these below for the convenience of the reader. **Definition 4**. *Let $1<p< \infty$. A locally integrable and a.e. positive function $w$ is called a weight.
A weight is called a Muckenhoupt $A_p$ weight if $$[w]_{A_p}=\sup_{Q\text{ cube}} \langle w \rangle_{Q,1}\langle w^{1-p'} \rangle_{Q,1}^{p-1}<\infty,$$ while a weight is called a reverse Hölder $RH_p$ weight if $$[w]_{RH_p}=\sup_{Q\text{ cube}} \langle w \rangle_{Q,1}^{-1}\langle w \rangle_{Q,p}<\infty.$$* The intersection of the $A_s$ and the reverse Hölder classes can be characterized by the equivalence: $$\label{equivalenceApRH} w \in A_{s/q} \cap RH_{(p/s)'} \Longleftrightarrow w^{(p/s)'} \in A_{(p/s)'(s/q-1 )+1}.$$ **Definition 5**. *Let $s \in {\mathbb{R}}$, $1\leq p \leq\infty$, $1\leq q \leq\infty$ and $w\in A_p$. The weighted Triebel-Lizorkin space is defined by $$F^s_{p,q}(w) := \Big\{ f \in {\mathscr{S}'}(\mathbb{R}^n) \,:\, \|f\|_{F^s_{p,q}(w)} := \|(2^{js}\psi_j(D) f)_{j\geq 0}\|_{L^p_w(\ell^q)}<\infty \Big\},$$ where $\mathscr{S}'(\mathbb{R}^n)$ denotes the space of tempered distributions.* Later, in the discussion on vector-valued inequalities, we shall need the following classical result due to J. L. Rubio de Francia [@RUbio]. **Definition 6**. *An operator $T$ defined on $L^p$ is called linearizable if for any $f_0\in L^p$, there exists a linear operator $\mathcal{L}$ such that $$\begin{aligned} &|Tf_0|=|\mathcal{L}f_0|; &|\mathcal{L}f|\leq|Tf|,\quad\forall f\in L^p. \end{aligned}$$* **Theorem 7**. *Let $T_j$ be a sequence of linearizable operators and suppose that for some fixed $r\geq 1$ and all weights $w\in A_r$ one has $\|T_j f\|_{L^r_w}\lesssim \|f\|_{L^r_w}$, uniformly in $j$. Then for $1 < p < \infty$ and $w\in A_p$ one has $$\|T_j f_j\|_{L^p_w(\ell^r)}\lesssim_w \|f_j\|_{L^p_w(\ell^r)}.$$* ## Background on Sparse Forms {#sparse domination theory} Now, let us delve into some facts concerning sparse bounds. **Definition 8**. *Let $0<\eta <1$. A collection $\mathcal{S}$ of cubes in $\mathbb{R}^{n}$ is called an $\eta$-sparse family if there are pairwise disjoint subsets ${\{ E_Q\}}_{Q \in \mathcal{S} }$ such that* 1. *$E_{Q} \subset Q$* 2.
*and $|E_{Q}|>\eta |Q|$.* If there is no confusion, we usually say *sparse* instead of *$\eta$-sparse*. By a cube $Q$ in $\mathbb{R}^{n}$ we mean a half open cube such as $$Q=[x_1,x_1+l(Q))\times\cdots\times[x_n,x_n+l(Q)),$$ where $l(Q)$ is the side length of the cube and $x(Q)=(x_1,\dots,x_n)$ is a corner of the cube. Define a dyadic cube to be a cube $Q$ with $l(Q)=2^k$ for some $k\in\mathbb{Z}$. Let $v$ be a vector in $\{0,1,2\}^n$ and define the shifted dyadic grid $\mathcal{D}_v$ as the set $$\{Q:l(Q)=2^k,\ x(Q)=2^k\big(\overline{m}+(-1)^kv/3\big),\ k\in\mathbb{Z},\,\overline{m}\in\mathbb{Z}^n\}.$$ **Definition 9**. *For any cube $Q$ and $1\leq p <\infty$, we define ${\langle f \rangle }_{p,Q} := {|Q|}^{-\frac{1}{p}} {||f||}_{ L^p(Q)}$. Let $\mathcal{S}$ be an $\eta$-sparse family and $1\leq q <\infty$, then the $(q,p)$-sparse form ${\Lambda}_{\mathcal{S} , q,p}$ and the $q$-sparse operator ${\Lambda}_{ \mathcal{S} ,q}$ are defined by $$\begin{aligned} {\Lambda}_{ \mathcal{S} ,q}f(x) :=\sum_{Q \in \mathcal{S}} {\langle f \rangle }_{q,Q} 1_Q (x) \ \ , \ \ {\Lambda}_{\mathcal{S} , q,p}(f,g) := \sum_{Q \in \mathcal{S}} |Q| {\langle f \rangle }_{q,Q}{\langle g \rangle }_{p,Q} \end{aligned}$$ for all $f,g \in L^{1}_{loc}$.* If $q<s<p$, we have $$\begin{aligned} {\Lambda}_{\mathcal{S} , q,p'}(f,g) \lesssim {||f||}_{s} {||g||}_{s'}. \end{aligned}$$ This inequality follows from the disjointness of the sets $E_Q$ and the boundedness of the $L^r$-maximal operator $\mathcal{M}_r$: since $\langle f\rangle_{q,Q}\lesssim \mathcal{M}_qf(x)$ and $\langle g\rangle_{p',Q}\lesssim \mathcal{M}_{p'}g(x)$ for $x\in E_Q\subset Q$, one has $$ {\Lambda}_{\mathcal{S},q,p'}(f,g)\lesssim \sum_{Q\in\mathcal{S}}\int_{E_Q}\mathcal{M}_qf\,\mathcal{M}_{p'}g \,\mathrm{d}x \leq \int_{\mathbb{R}^n}\mathcal{M}_qf\,\mathcal{M}_{p'}g \,\mathrm{d}x \leq \|\mathcal{M}_qf\|_{L^s}\|\mathcal{M}_{p'}g\|_{L^{s'}}\lesssim \|f\|_{s}\|g\|_{s'}, $$ using Hölder's inequality and that $q<s$ and $p'<s'$. In section [7](#sec:proof){reference-type="ref" reference="sec:proof"} we shall also make use of the following useful fact. **Lemma 10**. *Let $1\leq p,q\leq\infty$.
For bounded, compactly supported functions $f,g$, there exists a sparse form ${\Lambda}_{\mathcal{S}_0 , q,p'}$ such that $${\Lambda}_{\mathcal{S} , q,p'}(f,g)\lesssim {\Lambda}_{\mathcal{S}_0 , q,p'}(f,g).$$ (the hidden constant can be taken to only depend on the dimension $n$)* Obtaining $(1,1)$-sparse bounds is the optimal scenario, while the essentially trivial $(p,p')$-sparse bounds are the weakest.\ The following result, due to J. M. Conde-Alonso, A. Culiuc, F. Di Plinio and Y. Ou [@domin], shows that sparse form bounds are stronger than weak $(1,1)$ estimates. **Lemma 11**. *Suppose that a sublinear operator $T$ has the following property: there exist $C > 0$ and $1 \leq q < \infty$ such that for every $f, g$ bounded with compact support there exists a sparse collection $\mathcal{S}$ such that $$| \left< T f, g \right>| \leq C \Lambda_{\mathcal{S},1,q}(f,g).$$ Then $T : L^1(\mathbb{R}^{n}) \to L^{1,\infty}(\mathbb{R}^{n})$.* ## Basic definitions and results related to the Fourier integral operators {#sec:fio} \ The phase functions that we shall use to define Fourier integral operators were first introduced in [@DS] by D. Dos Santos Ferreira and W. Staubach. **Definition 12**. *A *phase function* $\varphi(x,\xi)$ in the class $\Phi^k$, $k\in\mathbb{Z}_{>0}$, is a function $\varphi(x,\xi)\in \mathcal{C}^{\infty}(\mathbb{R}^n \times\mathbb{R}^n \setminus\{0\})$, positively homogeneous of degree one in the frequency variable $\xi$, satisfying the following estimate* *$$\label{C_alpha} \sup_{(x,\,\xi) \in \mathbb{R}^n \times\mathbb{R}^n \setminus\{0\}} |\xi| ^{-1+\vert \alpha\vert}\left | \partial_{\xi}^{\alpha}\partial_{x}^{\beta}\varphi(x,\xi)\right | \leq C_{\alpha , \beta},$$ for every pair of multi-indices $\alpha$ and $\beta$ with $|\alpha|+|\beta|\geq k.$* **Definition 13**.
*One says that the phase function $\varphi(x,\xi)$ satisfies the strong non-degeneracy condition *(*or $\varphi$ is $\mathrm{SND}$ for short*)* if $$\label{eq:SND} \big |\det (\partial^{2}_{x_{j}\xi_{k}}\varphi(x,\xi)) \big | \geq \delta,\qquad \mbox{for some $\delta>0$ and all $(x,\xi)\in \mathbb{R}^{n} \times \mathbb{R}^{n}\setminus\{0\}$}.$$* Now, having the definitions of the amplitudes and the phase functions at hand, we define the FIOs as follows. **Definition 14**. *A Fourier integral operator $T_a^\varphi$ with amplitude $a$ and phase function $\varphi$, is an operator defined *(*once again a priori on $\mathscr{S}(\mathbb{R}^n)$*)* by $$\label{eq:FIO} T_a^\varphi f(x) := \int_{\mathbb{R}^{n}} e^{i\varphi(x,\xi)}\,a(x,\xi)\,\widehat f (\xi) \,\mathrm{d}\xi,$$ where $\varphi$ is a member of $\Phi^k$ for some $k\geq 1$. We say that $\varphi$ is of rank $\kappa$ if it satisfies $\mathrm{rank}\,\partial^{2}_{\xi\xi} \varphi(x, \xi) = \kappa$ for all $(x,\xi) \in \mathbb{R}^{n}\times \mathbb{R}^{n}\setminus \{0\}$.* We will refer to a Fourier integral operator with amplitude in $S^m_{\rho,\delta}$ and phase function in $\Phi^2$ as a *Fourier integral operator* (or FIO for short). Taking $\varphi(x,\xi)=x\cdot\xi$, one obtains the class of pseudodifferential operators associated to Hörmander's symbol classes. ## Basic facts related to oscillatory integral operators {#sec:oio} In this section, we aim to provide a concise overview of fundamental concepts about oscillatory integral operators that will serve as the foundation for the subsequent sections of our paper. We begin by introducing the definition of an oscillatory integral operator and exploring its essential properties.\ When dealing with oscillatory integral operators, it is crucial to focus on the phase functions, as they play a central role in the classification of these operators.
We consider nonlinear phase functions that go beyond the Fourier integral operators; in particular we are interested in phase functions of the following type, originally defined in [@CISY]: **Definition 15**. *For $0<k<\infty$, we say that a real-valued *phase function* $\varphi(x,\xi)$ belongs to the class $\textart F^k$, if $\varphi(x,\xi)\in \mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n}\setminus\{0\})$ and satisfies the following estimates (depending on the range of $k$):* - *for $k \geq 1$, $$|\partial^\alpha_\xi (\varphi (x,\xi)-x\cdot\xi) |\leq c_{\alpha} |\xi|^{k-1}, \quad |\alpha| \geq 1 ,$$* - *for $0<k<1$, $$|\partial^\alpha_\xi \partial^{\beta}_x (\varphi (x,\xi)-x\cdot\xi) |\leq c_{\alpha,\beta} |\xi|^{k-|\alpha|}, \quad |\alpha + \beta | \geq 1 ,$$* *for all $x\in \mathbb{R}^{n}$ and $|\xi|\geq 1$.* A widely recognized and representative example of a phase in the class $\textart F^k$ is given by $|\xi|^k + x \cdot \xi$. This particular type of phase is closely associated with the operator $e^{i(-\Delta)^{k/2}}$. Having the definitions of the amplitudes and the phase functions at hand, one has the following definition. **Definition 16**. *An oscillatory integral operator $T_a^\varphi$ with amplitude $a$ in $S^{m}_{\rho, \delta}(\mathbb{R}^n)$ and a real-valued phase function $\varphi$ is defined, initially on $\mathscr{S}(\mathbb{R}^n)$, as follows: $$\label{eq:OIO} T_a^\varphi f(x) := \int_{\mathbb{R}^n} e^{i\varphi(x,\xi)} a(x,\xi) \widehat{f}(\xi) \,\mathrm{d}\xi,$$ where $\varphi$ is a member of $\textart{F}^k$ for some $k>0$. We call $k$ the order of the oscillatory integral operator.* In order to ensure the global $L^2$-boundedness of our operators, an additional condition must be imposed on the phase, which we will henceforth refer to as the $L^2$-condition for the sake of simplicity. This condition plays a crucial role in controlling the $L^2$ behavior of the oscillatory integral operators under consideration.
**Definition 17**. *One says that the phase function $\varphi(x,\xi)\in \mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n})$ satisfies the $L^2$-condition if $$\label{eq:L2 condition_old} |\partial_{x}^{\alpha} \partial_{\xi}^{\beta} \varphi(x,\xi )| \leq C_{\alpha,\beta},$$ for $|\alpha|\geq 1$, $|\beta|\geq 1,$ all $x \in\mathbb{R}^{n}$ and $|\xi|\geq 1$.* The following $\mathrm{LF}(\mu)$-condition is a mild requirement that, from the perspective of applications to PDEs, is typically satisfied and does not introduce any loss of generality. This condition is essential for analyzing the low-frequency components of the operators. **Definition 18**. *Let $\varphi(x,\xi)\in \mathcal{C}^{\infty}(\mathbb{R}^n \times \mathbb{R}^n \setminus \{0\})$ be a real-valued function, and consider $0<\mu\leq 1$. We say that $\varphi$ satisfies the low-frequency phase condition of order $\mu$, abbreviated as the $\mathrm{LF}(\mu)$-condition, if $$\label{eq:LFmu} |\partial^{\alpha}_{\xi}\partial_{x}^{\beta} (\varphi(x,\xi)-x\cdot \xi) |\leq c_{\alpha,\beta} |\xi|^{\mu-|\alpha|},$$ for all $x\in \mathbb{R}^n$, $0<|\xi| \leq 2$, and all multi-indices $\alpha$ and $\beta$.* For instance, the model phase $\varphi(x,\xi)=x\cdot\xi+|\xi|^{k}$ is SND (since $\partial^{2}_{x_{j}\xi_{l}}\varphi=\delta_{jl}$), satisfies the $L^2$-condition (the mixed $x$-$\xi$ derivatives of $|\xi|^{k}$ vanish), and satisfies the $\mathrm{LF}(\mu)$-condition with $\mu=\min(k,1)$, since $|\partial^{\alpha}_{\xi}|\xi|^{k}|\lesssim|\xi|^{k-|\alpha|}\lesssim|\xi|^{\mu-|\alpha|}$ for $0<|\xi|\leq 2$. # Consequences of the main results {#Sec:weightedConsequences} Sparse domination offers a notable advantage over classical $L^p$ estimates by yielding weighted norm inequalities. In the subsequent subsections, we will delve into the discussion of several of these weighted estimates, as well as implications to weak $(1,1)$ estimates and vector-valued inequalities.
## Weighted norm inequalities {#sec_weight} In this section we derive bounds for $T_a^\varphi$ of the form $$\label{weighted_boundedness} \|T_a^\varphi f\|_{L^p_w}\leq C\|f\|_{L^p_w}$$ with $w\in A_q$, for some $1<q,p<\infty$, and where $$\|f\|_{L^p_w}:=\Big(\int_{\mathbb{R}^n} |f(x)|^p\,w(x)\,\mathrm{d}x\Big)^{1/p}.$$ For weighted norm inequalities for FIOs the reader is referred to the work of D. Dos Santos Ferreira and W. Staubach [@DS]. For weighted norm inequalities for oscillatory integral operators that go beyond FIOs the reader is referred to A. Bergfeldt, S. Rodriguez Lopez, and W. Staubach [@BS]. Note that in these papers the authors consider weighted norm inequalities for weights that belong to all $A_p$-classes, for $1<p<\infty$, i.e. $q=p$ in equation [\[weighted_boundedness\]](#weighted_boundedness){reference-type="eqref" reference="weighted_boundedness"}. In both cases they disregard the quantitative control on $C$.\ Our corollary recovers all the previously known weighted estimates of Beltran and Cladek in [@BC]. In the case of FIOs our result extends the weighted norm estimates in [@DS] and [@BS] to phase functions with arbitrary rank $0\leq \kappa\leq n-1$ (see item (2) below). In the cases of both FIOs and OIOs we obtain control of $C$; moreover, we obtain new weighted norm inequalities for weights in the more general class $A_{q/q_0} \cap RH_{(p_0/q)'}$, i.e. we extend to the case when $p$ and $q$ are not necessarily equal.\ The following corollary follows from [@Bernicot1 Proposition 6.4] (due to F. Bernicot, D. Frey and S. Petermichl) and Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} in the case of FIOs and Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"} in the case of OIOs. **Corollary 19**.
*Let $a\in S^{m}_{\rho,\delta}$ be an amplitude with $m\leq 0$ and $0\leq\rho\leq 1$, $0\leq\delta< 1$. Let the phase functions $\varphi$ be SND and satisfy either* 1. *$\varphi\in \Phi^2$, or* 2. *$\varphi\in \textart{F}^k$, obeys the $L^2$-condition and is $LF(\mu)$ for some $0< \mu\leq 1$.* *If (1) holds, take $m$ and all pairs $(q_0, p_0')$, $(p_0', q_0)$ such that [\[FIO_ranges\]](#FIO_ranges){reference-type="eqref" reference="FIO_ranges"} is satisfied. While if (2) holds, take $(q_0, p_0')$, $(p_0', q_0)$ and $m$ such that [\[OIO_ranges\]](#OIO_ranges){reference-type="eqref" reference="OIO_ranges"} is satisfied. For $w \in A_{q/q_0} \cap RH_{(p_0/q)'}$ and all $q_0<p<p_0$, there exists a constant $c_{p}$ such that $$\|T_a^\varphi\|_{L^p_w\to L^p_w}\leq c_p\Big([w]_{A_{q/q_0}}[w]_{RH_{(p_0/q)'}}\Big)^{\alpha},$$ where $$\alpha=\max\Big\{\frac{1}{p-q_0},\frac{p_0-1}{p_0-p}\Big\}.$$* We define the class-dependent number $D$ as follows. Let $$D=D(n,\varkappa)=n(1-\varkappa)$$ if $T_a^\varphi$ is an OIO, as in Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}, and let $$D=D(n,\rho)=n(1-\rho)$$ if $T_a^\varphi$ is an FIO, as in Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"}. **Remark 20**. *Observe that using the *open property* of $A_p$ weights (see [@hytonen] or [@Stein]) and the fact that the $A_p$ classes increase in $p$, one can show the following two special cases of Corollary [Corollary 19](#weightedConsequencesSmooth){reference-type="ref" reference="weightedConsequencesSmooth"} for OIOs. For more details on proving this we refer the reader to [@BC].* 1. *Let $-D-\zeta<m\leq -D/2-\zeta$. Then $T_a^\varphi$ is bounded on $L^p_w$ for $w \in A_{-\frac{p(m+\zeta)}{D}}$ and all $-\frac{D}{m+\zeta}<p<\infty$.* 2. *Let $m=-D/2-\zeta$.
Then $T_a^\varphi$ is bounded on $L^p_w$ for $w \in A_{p/2}$ and all $2<p<\infty$.* *These results constitute only some of the weighted estimates implied by Corollary [Corollary 19](#weightedConsequencesSmooth){reference-type="ref" reference="weightedConsequencesSmooth"}.* ## Weak-type estimates For weak-type estimates for local FIOs the reader is referred to the work of T. Tao [@Weak_tao]. Note that in that paper the author considers FIOs with amplitudes in $S^m_{1,0}$ ($m=-(n-1)/2$) and compact support in the spatial variable $x$, as well as phase functions which are smooth away from $|\xi|=0$, satisfy the SND condition, and are homogeneous of degree $1$. Our corollary extends this result to global FIOs, with phase functions that are in $\Phi^2$ and amplitudes in $S^m_{\rho,\delta}$ for all $0\leq\rho\leq 1$ and $0\leq\delta<1$, up to the endpoint. In particular, when the phase has full rank, we obtain $m<-(n-1)/2$ for $a\in S^m_{1,0}$, recovering the result in [@Weak_tao] up to the endpoint.\ For global oscillatory integral operators our corollary is entirely new; in particular, the numerology in Corollary [Corollary 22](#weak_oio){reference-type="ref" reference="weak_oio"} agrees with the $L^p$ estimates obtained in [@IMS2] by A. Israelsson, W. Staubach and the author.\ Combining Lemma [Lemma 11](#Weaksparse){reference-type="ref" reference="Weaksparse"} with Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} and Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"} respectively, we obtain the following two corollaries. **Corollary 21**. *Assume that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$ for $0\leq \rho\leq 1$ and $0\leq \delta< 1$, and $\varphi(x,\xi)$ is in the class $\Phi^2$ with rank $0\leq\kappa\leq n-1$ and is *SND*.
Then, for $$m< \begin{cases} -n(1-\rho)-\zeta, & 0\leq\rho< \frac{n}{n+\kappa},\\ -\frac{n(1-\rho)+\kappa\rho}{2}-\zeta, & \frac{n}{n+\kappa}\leq\rho\leq 1, \end{cases}$$ it holds that $$\|T_a^\varphi f\|_{L^{1,\infty}(\mathbb{R}^{n})}\lesssim \|f\|_{L^{1}(\mathbb{R}^{n})}.$$* **Corollary 22**. *Let $n\geq 1$, $0<k<\infty$. Assume that $\varphi\in \textart F^k$ is *SND*, satisfies the *LF*$(\mu)$-condition for some $0<\mu\leq 1$, and the $L^2$-condition [\[eq:L2 condition_old\]](#eq:L2 condition_old){reference-type="eqref" reference="eq:L2 condition_old"}. Assume also that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n}),$ for $0\leq \rho\leq 1$ and $0\leq \delta< 1$. Then, for $m<-\frac{n(1-\varkappa)}{2}-\zeta$ it holds that $$\|T_a^\varphi f\|_{L^{1,\infty}(\mathbb{R}^{n})}\lesssim \|f\|_{L^{1}(\mathbb{R}^{n})}.$$* ## Vector-valued inequalities By leveraging the $A_p$ weighted estimates obtained earlier and Definition [Definition 6](#linearizable){reference-type="ref" reference="linearizable"}, we can derive new vector-valued estimates for OIOs and FIOs. Define $\|\cdot\|_{L^q(\ell^p)}$ to be the quasi-norm $$\begin{aligned} \|(f_k)\|_{L^q(\ell^p)}&:=\Big\|\Big(\sum_{k} |f_k(\cdot)|^p\Big)^{1/p}\Big\|_{L^q(\mathbb{R}^{n})}.\end{aligned}$$ Define the space $L^q(\ell^p)$ as the space of sequences $(f_k)$ for which $\|(f_k)\|_{L^q(\ell^p)}<\infty$. As mentioned in the introduction, there are important applications for FIOs and OIOs with simplified phase functions of the form $x\cdot\xi+\phi(\xi)$. The following result is a weighted vector-valued estimate for FIOs and OIOs with simple phase functions. For weighted Triebel-Lizorkin estimates we refer the reader to [@DS]. Since the proof of the result below follows essentially in the same way as theirs, we omit the details and refer the reader to that paper. **Theorem 23**. *Let $a\in S^{m}_{\rho,\delta}$ be an amplitude with $m\leq 0$ and $0\leq\rho\leq 1$, $0\leq\delta< 1$. Let the phase function $\varphi$ be SND and satisfy either* 1.
*$\varphi\in \Phi^2$, or* 2. *$\varphi\in \textart{F}^k$, and $\varphi$ is $LF(\mu)$ for some $0< \mu\leq 1$.* *If (1) holds, take $m=-n(1-\rho)-\zeta$, while if (2) holds take $m=-n(1-\varkappa)-\zeta$. For all $1<q,p<\infty$, and $(f_j)_{j\geq 0}\in L^q(\ell^p)$ it holds that $$\begin{aligned} \label{Rubioeq} \|\Big(\sum_{j=0}^\infty |T_{a}^{\varphi} f_j|^p\Big)^{1/p}\|_{L^q_w(\mathbb{R}^n)}\lesssim \|\Big(\sum_{j=0}^\infty |f_j|^p\Big)^{1/p}\|_{L^q_w(\mathbb{R}^n)}. \end{aligned}$$ Moreover, for $s'\geq s$ it holds that $$\begin{aligned} \label{Rubioeq1} \|T_{a}^{\varphi}\|_{F^{s'}_{p,q}(w)\to F^s_{p,q}(w)}\lesssim 1, \end{aligned}$$ whenever the phase is simple (see Definition [Definition 5](#def:TLspace){reference-type="ref" reference="def:TLspace"} for details on the space $F^s_{p,q}(w)$).* In the above result, estimate [\[Rubioeq\]](#Rubioeq){reference-type="eqref" reference="Rubioeq"} is a direct consequence of Theorem [Theorem 7](#Rubio){reference-type="ref" reference="Rubio"} and item (2) in Remark [Remark 20](#weighted_remark){reference-type="ref" reference="weighted_remark"} applied to $T_{a}^{\varphi}$. For the reader interested in the proof of estimate [\[Rubioeq1\]](#Rubioeq1){reference-type="eqref" reference="Rubioeq1"} we refer to [@DS]. The argument is rather short and is essentially an application of a variant of [\[Rubioeq\]](#Rubioeq){reference-type="eqref" reference="Rubioeq"}, together with the fact that $[\psi_j,T_{a}^{\varphi}]=0$ for simple phase functions. # Decompositions and Kernel Estimates {#sec:decomp} In this section we provide the necessary tools used in the proofs of Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} and Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}.
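All of the decompositions below start from a dyadic (Littlewood-Paley) partition of unity $\{\psi_j\}_{j\geq 0}$, with $\sum_{j\geq 0}\psi_j(\xi)=1$ and each $\psi_j$, $j\geq 1$, supported in the shell $2^{j-1}\leq|\xi|\leq 2^{j+1}$. As a quick illustrative sanity check (not part of the argument; the particular cutoff `bump` and the sampling grid below are our own choices), such a family can be built by telescoping a smooth radial cutoff, and the pointwise identity $\sum_j\psi_j=1$ can be verified numerically:

```python
import numpy as np

def bump(t):
    """Smooth radial cutoff: 1 for t <= 1, 0 for t >= 2, smooth in between."""
    t = np.asarray(t, dtype=float)
    out = np.ones_like(t)
    mid = (t > 1) & (t < 2)
    s = t[mid] - 1.0                      # transition parameter in (0, 1)
    f = lambda x: np.exp(-1.0 / x)        # standard C^infinity gluing function
    out[mid] = f(1.0 - s) / (f(1.0 - s) + f(s))
    out[t >= 2] = 0.0
    return out

def psi(j, xi):
    """Littlewood-Paley piece: psi_0 covers |xi| <= 2, and for j >= 1
    psi_j = bump(2^{-j}|xi|) - bump(2^{-j+1}|xi|), supported in 2^{j-1} <= |xi| <= 2^{j+1}."""
    r = np.abs(xi)
    if j == 0:
        return bump(r)
    return bump(2.0 ** (-j) * r) - bump(2.0 ** (-j + 1) * r)

# The partial sums telescope to bump(2^{-J}|xi|), which equals 1 as soon as
# 2^J dominates the frequency range under consideration.
xi = np.linspace(-50.0, 50.0, 2001)
total = sum(psi(j, xi) for j in range(10))    # 2^9 = 512 >> 50
assert np.allclose(total, 1.0)
```

Since the pieces telescope, on any bounded frequency range only finitely many $\psi_j$ are nonzero, which is what makes the frequency-localised operators $T_j=T_a^\varphi\psi_j(D)$ below a genuine decomposition of $T_a^\varphi$.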
## Decomposition of Fourier integral operators {#decomposition-of-fourier-integral-operators .unnumbered} In this section we introduce the spatial and frequency decompositions related to the FIOs; we refer the reader to [@IMS] for the origin of this decomposition and for details on the proofs of Lemma [Lemma 25](#lem:chijnu){reference-type="ref" reference="lem:chijnu"} and Lemma [Lemma 26](#lem:h){reference-type="ref" reference="lem:h"}.\ To start, one considers an FIO $T^{\varphi}_{a}$ with amplitude $a(x, \xi)\in S^{m}_{\rho, \delta}(\mathbb{R}^n)$, $0<\rho\leq 1$, $0\leq \delta<1$, $m\in\mathbb{R}$, and a phase $\varphi\in\Phi^2$ satisfying $\mathrm{rank}\,\partial^{2}_{\xi\xi} \varphi(x, \xi) = \kappa$ on the support of $a(x, \xi)$ for some $0 \leq \kappa\leq n-1$, together with its Littlewood-Paley decomposition $$\label{eq:LPdecomp} T^{\varphi}_{a}= \sum_{j=0}^\infty T^{\varphi}_{a}\psi_j(D) =: \sum_{j=0}^\infty T_j,$$ where the kernel $K_j$ of $T_j$ is given by $$\begin{aligned} &K_j(x,y):= \int_{\mathbb{R}^{n}} e^{i\varphi(x,\xi)-iy\cdot\xi}\,\psi_j(\xi)\,a(x,\xi)\, \,\mathrm{d}\xi.\end{aligned}$$ Here each $\psi_j$ (for $j\geq 1$) is supported in the dyadic shell $A_j:=\left \{2^{j-1}\leq \vert \xi\vert\leq 2^{j+1}\right \}$.\ The shells $A_j$ will in turn be decomposed into truncated cones using the following construction. Since $\varphi$ has constant rank $\kappa$ on the support of $a(x, \xi)$ we may assume that there exists some $\kappa$-dimensional submanifold $S_\kappa(x)$ of $\mathbb S^{n-1}\cap \Gamma$ for some sufficiently narrow cone $\Gamma$, such that $\mathbb S^{n-1}\cap \Gamma$ is parameterised by $\overline{\xi}=\overline{\xi}_x(u,v)$, for $(u,v)$ in a bounded open set $U\times V$ near $(0,0)\in \mathbb{R}^{\kappa}\times\mathbb{R}^{n-\kappa-1}$, and such that $\overline{\xi}_x(u,v)\in S_\kappa(x)$ if and only if $v=0$, and $\nabla_\xi\varphi(x,\overline{\xi}_x(u,v))=\nabla_\xi\varphi(x,\overline{\xi}_x(u,0))$. **Definition 24**.
*For each $j\in\mathbb Z_{>0}$, let $\{u^{\nu}_{j} \}$ be a collection of points in $U$ such that* 1. *$| u^{\nu}_{j}-u^{\nu'}_{j} |\geq 2^{-j/2},$ whenever $\nu\neq \nu '$.* 2. *If $u\in U$, then there exists a $u^{\nu}_{j}$ so that $\vert u -u^{\nu}_{j} \vert <2^{-j/2}$.* *Moreover we set $\xi^{\nu}_{j}=\overline{\xi}_x(u^{\nu}_{j},0).$ One may obtain such a sequence by choosing a maximal collection $\{u_{j}^{\nu}\}$ for which $(i)$ holds; then $(ii)$ follows by maximality. We denote the number of cones needed by $\mathscr N_j$.* Let $\Gamma^{\nu}_{j}$ denote the cone in $\xi$-space given by $$\label{eq:gammajnu} \Gamma^{\nu}_{j} := \{ \xi \in \mathbb{R}^n ;\, \xi=s\overline{\xi}_x(u,v), | u - u^{\nu}_{j} | <2^{-j/2}, v\in V, s>0\}.$$ We decompose each $A_j$ into truncated cones $\Xi^{\nu}_{j}:=\Gamma^{\nu}_{j}\cap A_j.$ One also defines the partition of unity $\{\chi_j^\nu(u)\}_{j,\nu}$ on $\Gamma$ (a construction can be found in [@IMS]). This homogeneous partition of unity $\chi_j^\nu$ is defined in such a way that $\chi_j^\nu (s\overline{\xi}_x(u,v))=\tilde\chi_j^\nu (u),$ for $v\in V$ and $s>0$. Using this decomposition, we make a second dyadic decomposition of $T_j$ as $$T_j^\nu f(x) := \int_{\mathbb{R}^{n}}K_j^\nu(x,y)\, f(y)\,\mathrm{d}y,$$ where $$\begin{aligned} K_j^\nu(x,y):= \int_{\mathbb{R}^{n}} e^{i\varphi(x,\xi)-iy\cdot\xi}\,\psi_j(\xi)\,\chi_j^\nu(\xi)\,a(x,\xi)\,\mathrm{d}\xi .\end{aligned}$$ Observe that $T_j = \sum_{\nu=1}^{\mathscr N_j} T_j^\nu.$ We choose the coordinate axes in $\xi$-space such that $\xi''\in (T_{\xi^{\nu}_j}S_\kappa(x))^\perp$ and $\xi'\in T_{\xi^{\nu}_j}S_\kappa(x)$, so that $\xi=(\xi'',\xi')$. **Lemma 25**. *The functions $\chi_j^\nu$ belong to $\mathcal C^\infty(\mathbb{R}^n\setminus \{0\} )$ and are supported in the cones $\Gamma_j^\nu$.
They satisfy $$\begin{aligned} |\partial_\xi^\alpha \chi_j^\nu (\xi) |\lesssim 2^{j|\alpha|/2} |\xi|^{-|\alpha|} \end{aligned}$$ for all multi-indices $\alpha$.* We will also make use of the following slightly modified version of the partition of unity $$\label{chialt} \tilde{\chi}_j^\nu := \chi_j^\nu \Big(\sum_\nu \big(\chi_j^\nu\big)^2\Big)^{-\frac{1}{2}},$$ for which $$\sum_\nu \tilde{\chi}_j^\nu(\xi)^2 = 1,\quad \text{ for all } j\text { and } \xi\neq 0.$$ This partition of unity also satisfies the estimates in Lemma [Lemma 25](#lem:chijnu){reference-type="ref" reference="lem:chijnu"}. In connection with this partition of unity we also define the fattened version of the second dyadic decomposition. We define these pieces by $\mathcal{X}_j^{\nu}(D):=\tilde{\chi}_j^\nu(D)\Psi_j(D)$. Observe that these have the nice property that they satisfy the equation $$\label{fatsecond} T_j^\nu f(x)=T_j^\nu f_j^\nu (x),$$ where $f_j^\nu (x):=\mathcal{X}_j^{\nu}(D) \Psi_j(D) f(x)$. Moreover, these pieces satisfy the following estimate $$\label{eq:high} \|(\mathcal X_{j}^{\nu})^\vee\|_{L^p(\mathbb{R}^n)} \lesssim 2^{j\left (\frac{n+1}{2}-\frac{n+1}{2p}\right )},$$ the proof of which follows by using that $\tilde{\chi}_j^\nu$ satisfies the estimates of Lemma [Lemma 25](#lem:chijnu){reference-type="ref" reference="lem:chijnu"} and partial integration.\ We will split the phase $\varphi(x,\xi)-y\cdot\xi$ into two different pieces, $(\nabla_\xi\varphi(x,\xi_j^\nu)-y)\cdot \xi$ (which is linear in $\xi$), and $\varphi(x,\xi) - \nabla_\xi \varphi(x,\xi_j^\nu) \cdot \xi$. The following lemma yields an estimate for the nonlinear second piece. For $j\in \mathbb{Z}_{\geq0}$, $1\leq\nu \leq \mathscr N_j,$ define $$h_j^\nu(x,\xi):= \varphi(x,\xi) -\nabla_\xi \varphi(x,\xi_j^\nu)\cdot \xi.$$ **Lemma 26**. *Let $\varphi(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^n \times \mathbb{R}^n\setminus\{0\})$ be positively homogeneous of degree one in $\xi$.
Then for $\xi$ in $\Xi^{\nu}_{j}$, one has for $\alpha\in \mathbb Z_{\geq 0}^{\kappa}$ with $|\alpha|\geq 1$ that $$|{\partial_{\xi'}^\alpha} h_j^\nu(x,\xi)| \lesssim \begin{cases} |\xi'| |\xi|^{-1} & \text{ if } |\alpha|= 1 \\ |\xi| ^{1-|\alpha|} & \text{ if } |\alpha| \geq 2 \end{cases} \,\lesssim 2^{-j/2}.$$ Moreover, for $\beta\in \mathbb Z_{\geq 0}^{n-\kappa}$ with $|\beta|\geq 1$ one has that $$|\partial_{\xi''}^\beta h_j^\nu(x,\xi)|\lesssim |\xi'|^2 |\xi| ^{-|\beta|-1}\lesssim 2^{-j}.$$ Finally, one has the estimate $$\label{eq:trunkerad0} |\nabla_{\xi} h^{\nu}_j(x,\xi)|\lesssim 2^{-j/2},\quad \xi\in \Xi^{\nu}_j.$$* Define $\widetilde{\psi}_{j,l}^{\nu}(x,y)$ as the characteristic function of the set $$\{(x,y)\in\mathbb{R}^{2n}: 2^l\leq (1+2^{2j}|\nabla_{\xi''} \varphi(x,\xi_j^\nu)-y''|^2)(1+2^{j\rho}|\nabla_{\xi'} \varphi(x,\xi_j^\nu)-y'|^2)< 2^{l+1}\}.$$ We further decompose each $T_j$ into spatially localised operators $T_{j, l}^\nu$ defined by $$\begin{aligned} T_{j, l}^\nu f(x)&=\int_{\mathbb{R}^{n}}f(y)\widetilde{\psi}_{j,l}^{\nu}(x,y) K_j^\nu(x,y)\,\mathrm{d}y\end{aligned}$$ for $l \in \mathbb{Z}$ and $j>0$, and in an analogous manner for $T_{0,l}$, so that $$\begin{aligned} T_a^\varphi=\sum_{j\ge 0}T_j=\sum_{l\in\mathbb{Z}}\sum_{j\ge 0}\sum_{\nu}T_{j, l}^\nu.\end{aligned}$$ We further group the pieces $T_{j,l}$ according to their spatial scale. Fix some $\epsilon>0$ and write $$\begin{aligned} T_j^\nu=\sum_{l\le j\epsilon}T_{j, l}^\nu+\sum_{l>j\epsilon}T_{j, l}^\nu.\end{aligned}$$ In the next lemma we estimate the regularity of the kernel $$K_{j,l}^\nu(x,y):=\widetilde{\psi}_{j,l}^{\nu}(x,y) K_{j}^\nu(x,y)$$ of $T_{j,l}^\nu$. **Lemma 27**. *Let $0\leq \rho,\delta\leq 1$, $\rho\neq 0$, $n\geq 1$. Assume that $\varphi\in \Phi^2$.
Then if $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$, we have for $1\leq p\leq\infty$ that $$\label{kernel estimate_FIO} \|w_j^{\nu}(x,y)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{p}_y(\mathbb{R}^{n})} \lesssim 2^{j(m+n/p'-1/p)} 2^{-lL},$$ for all $j\geq 0$ and all $N,L\geq 0$, and where $$\label{metric definition} w_j^{\nu}(x,y)=(1+2^{2j}|\nabla_{\xi''} \varphi(x,\xi_j^\nu)-y''|^2)(1+2^{j\rho}|\nabla_{\xi'} \varphi(x,\xi_j^\nu)-y'|^2).$$* *Proof.* For any $j\geq 0$ we have $$\begin{aligned} K_{j,l}^\nu(x,y) & = \widetilde{\psi}_{j,l}^{\nu}(x,y) \int_{\mathbb{R}^{n}} \psi_j(\xi)\,\chi_j^\nu(\xi)\, e^{i(\varphi(x,\xi )-y\cdot \xi)} \,a(x,\xi) \,\mathrm{d}\xi \\ & = \widetilde{\psi}_{j,l}^{\nu}(x,y) \int_{\mathbb{R}^{n}} e^{i(\varphi(x,\xi )-y\cdot \xi)}\, \sigma_j^{\nu}(x,\xi) \,\mathrm{d}\xi,\end{aligned}$$ where $$\sigma_{j}^{\nu}(x,\xi) := \psi_j(\xi)\,\chi_j^\nu(\xi) \, a(x,\xi).$$ Observe that by Lemma [Lemma 25](#lem:chijnu){reference-type="ref" reference="lem:chijnu"} we have $$\begin{aligned} |\partial_\xi^\alpha \chi_j^\nu (\xi) |\lesssim 2^{j(\rho/2-1)|\alpha|}, \end{aligned}$$ on the support of $\psi_j$. Using this and that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$ we can calculate that for any multi-index $\gamma$, any $j\geq 0$ and any $\nu$ one has $$\label{amplitude derivative estim_FIO} |\partial^{\gamma}_\xi \sigma_j^{\nu}(x,\xi)|\lesssim 2^{j(m-\min\{\rho, 1-\rho/2\}|\gamma|)}\lesssim 2^{jm}.$$ Recalling that $h_j^\nu(x,\xi)=\varphi(x,\xi)-\xi \cdot \nabla_\xi \varphi(x,\xi_j^\nu)$, we can write $$\begin{aligned} K_{j,l}^\nu(x,y) &= \widetilde{\psi}_{j,l}^{\nu}(x,y) \int_{\mathbb{R}^{n}} e^{i(\nabla_\xi\varphi(x,\xi_j^\nu )-y)\cdot \xi}\,e^{ih_j^\nu(x,\xi)}\,\sigma_{j}^{\nu}(x,\xi) \,\mathrm{d}\xi.\end{aligned}$$ Now we estimate the derivatives of $h_j^\nu$ in $\xi$ on the support of $\sigma_{j}^{\nu}(x,\xi)$.
To this end, Lemma [Lemma 26](#lem:h){reference-type="ref" reference="lem:h"} yields $$|\nabla_{\xi} h^{\nu}_j(x,\xi)|\lesssim 2^{-j\rho /2}\lesssim 1,\quad \xi\in \Xi^{\nu}_j,$$ and $$\begin{aligned} \label{nonlinear_FIO} |\partial_{\xi}^{\alpha}h_j^\nu(x,\xi)| = |\partial_{\xi}^{\alpha}\varphi(x,\xi)|\lesssim 1,\quad \text{ for all } |\alpha| \geq 2.\end{aligned}$$ Next we introduce the operator $L$ defined by $$\begin{aligned} L=(I-2^{2j}\Delta_{\xi''})(I-2^{j\rho}\Delta_{\xi'}).\end{aligned}$$ Then, using the estimates [\[amplitude derivative estim_FIO\]](#amplitude derivative estim_FIO){reference-type="eqref" reference="amplitude derivative estim_FIO"} and [\[nonlinear_FIO\]](#nonlinear_FIO){reference-type="eqref" reference="nonlinear_FIO"}, we obtain $$\begin{aligned} |L^N(e^{ih_j^\nu(x,\xi)}\,\sigma_{j}^{\nu}(x,\xi))|\lesssim 2^{jm}.\end{aligned}$$ This estimate can be improved in terms of obtaining a decaying factor depending on $\rho$; however, for our purposes this is not necessary, since we are looking for geometric decay, which will be achieved in a different manner below.
We also have that $$\begin{aligned} e^{i(\nabla_\xi\varphi(x,\xi_j^\nu )-y)\cdot \xi}=\frac{L^N(e^{i(\nabla_\xi\varphi(x,\xi_j^\nu )-y)\cdot \xi})}{w_j^{\nu}(x,y)^N}. \end{aligned}$$ Therefore, partial integration and the trivial estimate $|\Xi^{\nu}_{j}|\lesssim 2^{jn}$ yield $$\begin{aligned} |K_{j,l}^\nu(x,y) | &\lesssim\frac { \widetilde{\psi}_{j,l}^{\nu}(x,y) 2^{j(m+n)}}{w_j^{\nu}(x,y)^N}.\end{aligned}$$ The observation that $$w_j^{\nu}(x,y)^N\gtrsim 2^{lN}$$ on the support of $\widetilde{\psi}_{j,l}^\nu$ gives us for any $N,L\geq 0$ and $1\leq p\leq\infty$ that $$\begin{aligned} &\|w_j^{\nu}(x,y)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{p}_y(\mathbb{R}^{n})}2^{-j(m+n)}\\ &\lesssim \Big(\int_{\mathbb{R}^{n}} \frac{\widetilde{\psi}_{j,l}^{\nu}(x,y)w_j^{\nu}(x,y)^{-pL}}{w_j^{\nu}(x,y)^{-p(N+L-M)}}\,\mathrm{d}y \Big)^{1/p}\\ &\lesssim 2^{-lL}\Big(\int_{\mathbb{R}^{n}} w_j^{\nu}(x,y)^{p(N+L-M)}\,\mathrm{d}y \Big)^{1/p}\\ &\lesssim 2^{-lL}2^{-j(n+1)/p},\end{aligned}$$ for $M>n+N+L$. Therefore we obtain $$\begin{aligned} \|w_j^{\nu}(x,y)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{p}_y(\mathbb{R}^{n})} \lesssim 2^{j(m+n/p'-1/p)} 2^{-lL}.\end{aligned}$$ ◻ ## Decomposition of oscillatory integral operators {#decomposition-of-oscillatory-integral-operators .unnumbered} In this section we introduce the spatial and frequency decompositions related to the OIOs; we refer the reader to [@IMS2] for the origin of this decomposition. It is an adaptation of the classical second dyadic decomposition introduced by C. Fefferman to the case of nonlinear phases of the type $\textart {F}^k$.\ Let $k>0$. We make the following decomposition of the integral kernel $$K(x,y)= \int_{\mathbb{R}^{n}} a(x,\xi)\,e^{i\varphi(x,\xi)-iy\cdot \xi}\, \,\mathrm{d}\xi.$$ Then for every $j\geq 0$ we cover $\mathop{\mathrm{supp}}\psi_j$ with open balls $C_j^\nu$ with radii $2^{j(1-k)}$ and centres $\xi_j^\nu$, where $\nu$ runs from $1$ to $\mathscr{N}_j:=O(2^{njk})$.
Observe that $|C_j^\nu|\lesssim 2^{n j(1-k)}$, uniformly in $\nu$. Now take $u\in \mathcal{C}_c^{\infty}(\mathbb{R}^{n})$, with $0\leq u\leq 1$ and supported in $B(0,2)$ with $u=1$ on $\overline{B(0,1)}.$ Next set $$\chi_j^\nu(\xi) := \frac{u(2^{-(1-k) j}(\xi -\xi_{j}^{\nu}))} {\sum_{\nu'=1}^{\mathscr{N}_j} u(2^{-(1-k) j}(\xi-\xi_{j}^{\nu'}))},$$ and note that $$\sum_{j=0}^\infty \sum_{\nu=1}^{\mathscr N_j} \chi_j^\nu(\xi)\,\psi_j(\xi) =1.$$ Now we define the second order frequency localized pieces of the kernel above as $$K_{j}^\nu (x,y) := \int_{\mathbb{R}^{n}} \psi_j(\xi)\,\chi_j^\nu(\xi)\,e^{i\varphi(x,\xi)-iy\cdot \xi}\,a(x,\xi) \,\mathrm{d}\xi,$$ and define $$\widetilde{\psi}_{j,l}^{\nu}(x,y):=\widetilde{\psi}(2^{-l+j\varkappa}(\nabla_\xi \varphi(x,\xi_j^\nu)-y)).$$ We further decompose each $T_j$ into spatially localised operators $T_{j, l}^\nu$ defined by $$\begin{aligned} T_{j, l}^\nu f(x)&=\int_{\mathbb{R}^{n}}f(y)\widetilde{\psi}_{j,l}^{\nu}(x,y) K_j^\nu(x,y)\,\mathrm{d}y\end{aligned}$$ for $l \in \mathbb{Z}$ and $j>0$, and in an analogous manner for $T_{0,l}$, so that $$\begin{aligned} T_a^\varphi=\sum_{j\ge 0}T_j=\sum_{l\in\mathbb{Z}}\sum_{j\ge 0}\sum_{\nu}T_{j, l}^\nu.\end{aligned}$$ We further group the pieces $T_{j,l}$ according to their spatial scale. Fix some $\epsilon>0$ and write $$\begin{aligned} T_j^\nu=\sum_{l\le j\epsilon}T_{j, l}^\nu+\sum_{l>j\epsilon}T_{j, l}^\nu.\end{aligned}$$ In the next lemma we estimate the kernel $$K_{j,l}^\nu(x,y):=\widetilde{\psi}_{j,l}^{\nu}(x,y) K_{j}^\nu(x,y)$$ of $T_{j,l}^\nu$. To this end, define the notation $$w(k,\rho)= \begin{cases} \min\{\rho,1-k\} & 0<k<1,\\ k-1 & k\geq1.\\ \end{cases}$$ **Lemma 28**. *Let $0\leq \rho,\delta\leq 1$, $n\geq 1$, and $0<k<\infty$. Assume that $\varphi\in \textart F^k$.
Then if $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$, we have for $1\leq p\leq\infty$ that $$\label{kernel estimate} \|(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{p}_y(\mathbb{R}^{n})} \lesssim 2^{-lL+j\varkappa L}2^{-jw(k,\rho)(n/p+L)},$$ for all $j\geq 0$ and all $N,L\geq 0$.* *Proof.* From Lemma 3.2 in [@IMS2] we obtain the following inequality $$| K_{j,l}^\nu (x,y)| \lesssim \widetilde{\psi}_{j,l}^{\nu}(x,y)\frac {2^{jm} 2^{jn(1-k)}}{\langle 2^{jw(k,\rho)}(\nabla_\xi \varphi(x,\xi_j^\nu)-y) \rangle^{M}},$$ for all $j\geq 0$ and $M\geq 0$. The observation that $$|\nabla_\xi \varphi(x,\xi_j^\nu)-y|^{-M}\lesssim 2^{-lM+j\varkappa M}$$ on the support of $\widetilde{\psi}_{j,l}^{\nu}$ gives us for any $N,L\geq 0$ and $1\leq p\leq\infty$ that $$\begin{aligned} &\|(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{p}_y(\mathbb{R}^{n})}\\ &\lesssim \Big(\int_{\mathbb{R}^{n}} \frac{\widetilde{\psi}_{j,l}^{\nu}(x,y)(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{-pL}}{(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{-p(N+L-M)}}\,\mathrm{d}y \Big)^{1/p}\\ &\lesssim 2^{-lL+j\varkappa L}2^{-jw(k,\rho)L}\Big(\int_{\mathbb{R}^{n}} (1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{p(N+L-M)}\,\mathrm{d}y \Big)^{1/p}\\ &\lesssim 2^{-lL+j\varkappa L}2^{-jw(k,\rho)(n/p+L)},\end{aligned}$$ for $M>n+N+L$. ◻ ## A low frequency sparse domination result for Fourier and oscillatory integral operators In this section we prove a low frequency sparse form bound, which yields the necessary boundedness for all the low frequency parts considered in this paper. Therefore, from now on we only consider the high frequency portions of the operators at hand. **Theorem 29**. *Let $a\in S^m_{\rho,\delta}$ with $m\in \mathbb{R}$ and $0\leq\rho,\delta\leq 1$ and let the phase function $\varphi$ be SND and satisfy either* 1. *$\varphi\in \Phi^2$ or* 2.
*$\varphi\in \textart{F}^k$, obeys the $L^2$-condition and $\varphi$ is $LF(\mu)$ for some $0< \mu\leq 1$.* *Let $\chi\in C_c^\infty$ be supported near the origin, and define the operator $$T_\chi f(x):=\int_{\mathbb{R}^{n}} \chi(\xi) a(x,\xi)\,e^{i\varphi(x,\xi)}\, \widehat{f}(\xi) \,\mathrm{d}\xi.$$ Then for any compactly supported bounded functions $f, g$ on $\mathbb{R}^n$, there exist sparse collections $\mathcal{S}$ and $\widetilde{\mathcal{S}}$ such that $$\begin{aligned} \begin{cases} |\left<T_\chi f, g\right>|\le C(m, q, p)\Lambda_{\mathcal{S}, q, p'}(f, g), \text{ and }\\ |\left<T_\chi f, g\right>|\le C(m, q, p)\Lambda_{\widetilde{\mathcal{S}}, p', q}(f, g), \end{cases} \end{aligned}$$ for all pairs $(q, p')$ and $(p', q)$ such that $$\begin{aligned} 1\leq q\le p \le 2,\quad \text{and}\quad 1\le q\le 2 \le p \le q'. \end{aligned}$$* *Proof.* We begin by considering the case of FIOs. Fix a $\xi_0\in S^{n-1}$, and define $$\widetilde{\psi}_{l}(x,y):=\widetilde{\psi}(2^{-l}(\nabla_\xi \varphi(x,\xi_0)-y)).$$ We decompose $T_\chi$ into spatially localised operators $T_{\chi, l}$ defined by $$\begin{aligned} T_{\chi, l} f(x)&=\int_{\mathbb{R}^{n}}f(y)K_l(x,y)\,\mathrm{d}y \end{aligned}$$ with $K_l(x,y)= \widetilde{\psi}_{l}(x,y) K(x,y)$. Setting $b(x,\xi)=e^{i(\varphi(x,\xi)-\nabla_\xi\varphi(x,\xi_0)\cdot \xi)}a(x,\xi)\chi(\xi)$, we may write $$\begin{aligned} T_\chi f(x)=\int_{\mathbb{R}^{n}} b(x,\xi)\,e^{i\nabla_\xi\varphi(x,\xi_0)\cdot \xi}\, \widehat{f}(\xi) \,\mathrm{d}\xi. \end{aligned}$$ Observe that for all $\mu\in[0,1)$, $$\begin{aligned} |K_l(x,y)| &\lesssim \widetilde{\psi}_{l}(x,y) (1+|(\nabla_\xi \varphi(x,\xi_0)-y)''|^2)^{-n-\mu}(1+|(\nabla_\xi \varphi(x,\xi_0)-y)'|^2)^{-n-\mu}.
\end{aligned}$$ The proof of this estimate follows from the same arguments as Theorem 1.2.11 in [@DS], except that one uses integration by parts with the differential operator given by $L=(I-\Delta_{\xi''})(I-\Delta_{\xi'})$ instead of just $L=\partial_\xi$. The rest of the proof for FIOs follows in a very similar manner to that of Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"}.\ *Handling the OIOs:* In the case of the OIOs, define $$\widetilde{\psi}_{l}(x,y):=\widetilde{\psi}(2^{-l}(x-y)).$$ We decompose $T_\chi$ into spatially localised operators $T_{\chi, l}$ defined by $$\begin{aligned} T_{\chi, l} f(x)&=\int_{\mathbb{R}^{n}}f(y)K_l(x,y)\,\mathrm{d}y \end{aligned}$$ with $K_l(x,y)= \widetilde{\psi}_{l}(x,y) K(x,y)$. Observe that when $\varphi\in \textart{F}^k$ satisfies the $LF(\mu)$-condition, then Lemma 4.3 in [@CISY] yields $$\begin{aligned} |K_l(x,y)| &\lesssim \widetilde{\psi}_{l}(x,y) \langle x-y \rangle^{-(n+\epsilon_1\mu)}, \end{aligned}$$ for any $0\leq\epsilon_1<1$. Thus, by continuing in the same way as was done for the FIOs above, one obtains the desired sparse form bound. ◻ # $L^q$-improving estimates for oscillatory integral operators {#sec:proof_oio} This section is devoted to showing the $L^q\to L^p$ boundedness of OIOs. Recall that because of the low frequency sparse bound in Theorem [Theorem 29](#low-freq-sparse){reference-type="ref" reference="low-freq-sparse"} it suffices to consider the high frequency part in this section; therefore, we do not need the $LF(\mu)$-condition here, which is needed in Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}. **Lemma 30**. *Let $n\geq 1$, $0<k<\infty$. Assume that $\varphi\in \textart F^k$ is *SND* and satisfies the $L^2$-condition [\[eq:L2 condition_old\]](#eq:L2 condition_old){reference-type="eqref" reference="eq:L2 condition_old"}. Assume also that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n}),$ for $0\leq \rho\leq 1$ and $0\leq \delta< 1$.
Then for $1\leq q\leq p\leq 2$ $$\begin{aligned} \begin{cases} \|\sum_{l\le j\epsilon}T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{j(m+\frac{n(1-\varkappa)}{2}\big(\frac{2}{p}-1\big)+\zeta)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},\\ \|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},& \text{for all } l>j\epsilon. \end{cases}\end{aligned}$$ Moreover, for $1\leq q \leq 2 \leq p \leq q'$ we have $$\begin{aligned} \begin{cases} \|\sum_{l\le j\epsilon}T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon}2^{j(m-n\big(\frac{1}{p}-\frac{1}{q}\big)+\zeta\big(\frac{2}{q}-1\big))}\\ \|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},& \text{for all }l>j\epsilon. \end{cases}\end{aligned}$$* *Proof.* We begin by proving the geometrically decaying $L^q\to L^p$ estimates for $T_{j, l}^\nu$ when $1\leq p \leq 2$. Lemma [Lemma 28](#kernel lemma){reference-type="ref" reference="kernel lemma"} gives us that $$\begin{aligned} \label{pt-est_oio} |T_{j, l}^\nu f(x)| & \lesssim \|(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{r'}_y(\mathbb{R}^{n})}\\ &\nonumber\qquad\times\Big(\int_{\mathbb{R}^{n}} \frac{|f(y)|^r}{(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{rN}} \,dy\Big)^{1/r}\\ &\nonumber\lesssim 2^{-lL(C,\epsilon)+j\varkappa L(C,\epsilon)}2^{-jw(k,\rho)(n/r'+L(C,\epsilon))} 2^{-njw(k,\rho)/r} \mathcal{M}_r f(\nabla_\xi \varphi(x,\xi_j^\nu))\\ &\nonumber\lesssim 2^{-lL(C,\epsilon)+j\varkappa L(C,\epsilon)}2^{-jw(k,\rho)(n+L(C,\epsilon))} \mathcal{M}_r f(\nabla_\xi \varphi(x,\xi_j^\nu))\\ &\nonumber\lesssim 2^{-jnk}2^{-C(j+l)} \mathcal{M}_r f(\nabla_\xi \varphi(x,\xi_j^\nu))\end{aligned}$$ for $N>n/r$, $l>j\epsilon$, and $L(C,\epsilon)>\max\{C,\frac{\varkappa-w(k,\rho)n+C+C\epsilon+nk}{w(k,\rho)+\epsilon}\}.$ By the SND condition and the Hardy-Littlewood maximal theorem we have that $$\begin{aligned} \Vert T_{j, l} f\Vert_{L^p(\mathbb{R}^{n})} \lesssim
\sum_{\nu=1}^{\mathcal{N}_j} 2^{-jnk} 2^{-C(j+l)}\Vert \mathcal{M}_r f(\nabla_\xi \varphi(\cdot,\xi_j^\nu))\Vert_{L^p(\mathbb{R}^{n})} \lesssim 2^{-C(j+l)}\Vert f\Vert_{L^p(\mathbb{R}^{n})} \end{aligned}$$ for $1<r<p\leq \infty$. If $p=1$ then Young's inequality for integral operators and Lemma [Lemma 28](#kernel lemma){reference-type="ref" reference="kernel lemma"} yields $$\begin{aligned} &\Vert T_{j, l}^\nu f\Vert_{L^1(\mathbb{R}^{n})} \lesssim 2^{-lL(C,\epsilon)+j\varkappa L(C,\epsilon)}2^{-jw(k,\rho)(n+L(C,\epsilon))}\Vert f\Vert_{L^1(\mathbb{R}^{n})} \lesssim 2^{-jnk}2^{-C(j+l)}\Vert f\Vert_{L^1(\mathbb{R}^{n})}.\end{aligned}$$ Thus summing this in $\nu$, we have for all $1\leq p\leq 2$ that $$\begin{aligned} \label{Decay_Lp_oio} \Vert T_{j, l} f\Vert_{L^p(\mathbb{R}^{n})} \lesssim 2^{-C(j+l)}\Vert f\Vert_{L^p(\mathbb{R}^{n})}.\end{aligned}$$ We now lift the $L^p$ estimate above to geometrically decaying $L^q\to L^p$ bounds of $T_{j, l}^\nu$ in the range $1\leq q\leq p\leq 2$. To this end, we use Bernstein's inequality for $1\leq q\leq p\leq 2$ and estimate [\[Decay_Lp_oio\]](#Decay_Lp_oio){reference-type="eqref" reference="Decay_Lp_oio"}: $$\begin{aligned} \label{geometricBernstein_oio} \Vert T_{j, l}^\nu f\Vert_{L^p(\mathbb{R}^{n})}& \lesssim \Vert T_{j, l}^\nu \Vert_{L^p\to L^p} \Vert \Psi_j(D) f \Vert_{L^p(\mathbb{R}^{n})}\\ &\nonumber\lesssim 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)} \Vert f \Vert_{L^q(\mathbb{R}^{n})}, \end{aligned}$$ where $\Psi_j$ is a fattened Littlewood-Paley piece.\ The next step is to obtain geometrically decaying $L^q\to L^p$ bounds of $T_{j, l}^\nu$ in the range $1\leq q\leq 2\leq p<q'$.
Observe that the pointwise estimate [\[pt-est_oio\]](#pt-est_oio){reference-type="eqref" reference="pt-est_oio"} yields that $$\begin{aligned} |T_{j, l} f(x)| & \lesssim \sum_{\nu=1}^{\mathcal{N}_j}\|(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{\infty}_y(\mathbb{R}^{n})}\|f\|_{L^1(\mathbb{R}^{n})}\\ & \nonumber\lesssim 2^{-C(j+l)}\|f\|_{L^1(\mathbb{R}^{n})}.\end{aligned}$$ By interpolating this $L^1\to L^\infty$ estimate with the $L^2\to L^2$ estimate of $T_{j, l}$, we obtain for $1\leq p\leq 2$ that $$\begin{aligned} \Vert T_{j, l} f\Vert_{L^{p'}(\mathbb{R}^{n})}&\lesssim 2^{-C(j+l)} \Vert f \Vert_{L^{p}(\mathbb{R}^{n})}.\end{aligned}$$ Now, interpolating this with [\[geometricBernstein_oio\]](#geometricBernstein_oio){reference-type="eqref" reference="geometricBernstein_oio"} for $p=2$ and using Bernstein's inequality again, we obtain $$\begin{aligned} \label{geometricBernstein2_oio} \Vert T_{j, l} f\Vert_{L^p(\mathbb{R}^{n})} \lesssim 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)} \Vert f \Vert_{L^q(\mathbb{R}^{n})} \end{aligned}$$ for $1\leq q\leq 2\leq p\leq q'$.\ *The case of $\sum_{l\le j\epsilon} T_{j, l}$:\ * In this case we leverage the geometrically decaying bounds on $T_{j, l}^\nu$ and the $L^2$ boundedness of $T_j$ established in [@IMS2].
For a fixed $j$, consider the rescaled operator $2^{-jm}T_j = \sum_l 2^{-jm}T_{j, l}$, which corresponds to an OIO associated with an amplitude in the class $S_{\rho, \delta}^{0}$. By the $L^2$-boundedness result in [@IMS2] we then have $$\left\Vert 2^{-jm}T_j\right\Vert_{L^2\to L^2}\lesssim 2^{j\zeta},$$ that is, $$\|\sum_l T_{j, l}\|_{L^2\to L^2} \lesssim 2^{j(m+\zeta)}.$$ Using the decomposition $$\|\sum_{l\le j\epsilon} T_{j, l}\|_{L^2\to L^2} \leq\| \sum_l T_{j, l}\|_{L^2\to L^2} + \|\sum_{l>j\epsilon} T_{j, l}\|_{L^2\to L^2},$$ and the geometrically decaying estimate [\[Decay_Lp_oio\]](#Decay_Lp_oio){reference-type="eqref" reference="Decay_Lp_oio"} we obtain $$\label{L2oio} \|\sum_{l\le j\epsilon} T_{j, l}\|_{L^2\to L^2}\lesssim 2^{j(m+\zeta)}.$$ Similarly for $L^1\to L^1$ we obtain $$\|\sum_{l\le j\epsilon} T_{j, l}\|_{L^1\to L^1} \lesssim 2^{j(m+n(1-\varkappa)/2+\zeta)}.$$ Interpolating these two bounds we obtain for $1\leq p\leq 2$ that $$\|\sum_{l\le j\epsilon} T_{j, l}\|_{L^p\to L^p} \lesssim 2^{j(m+\frac{n(1-\varkappa)}{2}\big(\frac{2}{p}-1\big)+\zeta)}.$$ As demonstrated in the previous case, this and Bernstein's inequality for $1\leq q\leq p\leq 2$ yield $$\begin{aligned} \Vert \sum_{l\le j\epsilon} T_{j, l} f\Vert_{L^p(\mathbb{R}^{n})}&\lesssim 2^{j(m+\frac{n(1-\varkappa)}{2}\big(\frac{2}{p}-1\big)+\zeta)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)} \Vert f \Vert_{L^q(\mathbb{R}^{n})} .\end{aligned}$$ We obtain the $L^\infty$ bound as before, by taking out the supremum of the kernel and summing over $\nu$; this yields the estimate $$\begin{aligned} \|T_{j} f\|_{L^{\infty}(\mathbb{R}^{n})} & \lesssim \sum_{\nu=1}^{\mathcal{N}_j}\sup_{x,y\in \mathbb{R}^{n}}|K_{j}^{\nu} (x,y)|\|f\|_{L^1(\mathbb{R}^{n})} \nonumber\lesssim 2^{j(m+n)}\|f\|_{L^1(\mathbb{R}^{n})},\end{aligned}$$ which combined with the corresponding estimates for $T_{j, l}$ and the rapid decay yields that $$\begin{aligned} \|\sum_{l\le j\epsilon} T_{j,l}
f\|_{L^{\infty}(\mathbb{R}^{n})} \lesssim 2^{j(m+n)}\|f\|_{L^1(\mathbb{R}^{n})}.\end{aligned}$$ By interpolating the $L^1\to L^\infty$ estimate and the $L^2\to L^2$ estimate of $\sum_{l\le j\epsilon} T_{j, l}$, we obtain for $1\leq p\leq 2$ that $$\begin{aligned} \Vert \sum_{l\le j\epsilon} T_{j, l} f\Vert_{L^{p'}(\mathbb{R}^{n})}&\lesssim 2^{j(m+\zeta+n\big(\frac{2}{p}-1\big)+\zeta\big(1-\frac{2}{p}\big))} \Vert f \Vert_{L^{p}(\mathbb{R}^{n})}.\end{aligned}$$ Moreover, [\[L2oio\]](#L2oio){reference-type="eqref" reference="L2oio"} and Bernstein's inequality yield $$\begin{aligned} \Vert \sum_{l\le j\epsilon} T_{j, l} f\Vert_{L^{2}(\mathbb{R}^{n})}&\lesssim 2^{j(m+\zeta-n\big(\frac{1}{2}-\frac{1}{p}\big))} \Vert f \Vert_{L^{p}(\mathbb{R}^{n})}.\end{aligned}$$ Now interpolating these last two estimates we obtain $$\begin{aligned} \Vert \sum_{l\le j\epsilon} T_{j, l} f\Vert_{L^p(\mathbb{R}^{n})} \lesssim 2^{j(m-n\big(\frac{1}{p}-\frac{1}{q}\big)+\zeta\big(\frac{2}{q}-1\big))} \Vert f \Vert_{L^q(\mathbb{R}^{n})} \end{aligned}$$ for $1\leq q\leq 2\leq p\leq q'$. ◻ # $L^q$-improving estimates for Fourier integral operators {#sec:proof_fio} This section is devoted to showing the $L^q\to L^p$ boundedness of Fourier integral operators. **Lemma 31**. *Assume that $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{R}^{n})$ for $0< \rho\leq 1$ and $0\leq \delta< 1$, and $\varphi(x,\xi)$ is in the class $\Phi^2$ and is *SND*. Then for $1\leq q\leq p\leq 2$ $$\begin{aligned} \begin{cases} \|\sum_{l\le j\epsilon}T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{j(m+(\kappa+(n-\kappa)(1-\rho))\big(\frac{1}{p}-\frac{1}{2}\big)+\zeta)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},\\ \|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},&\text{for all } l>j\epsilon.
\end{cases}\end{aligned}$$ Moreover, for $1\leq q \leq 2 \leq p \leq q'$ we have $$\begin{aligned} \begin{cases} \|\sum_{l\le j\epsilon}T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{j(m-n\big(\frac{1}{p}-\frac{1}{q}\big)+\zeta\big(\frac{2}{q}-1\big))}\\ \|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},& \text{for all }l>j\epsilon. \end{cases}\end{aligned}$$* *Proof.* The proof for the $\sum_{l\le j\epsilon} T_{j, l}^\nu$ case is similar to the one above in Lemma [Lemma 30](#lplq_oio){reference-type="ref" reference="lplq_oio"}, corresponding to the OIOs, with a few modifications. Mainly, one replaces the use of the sharp $L^2$-boundedness result for OIOs in [@IMS2] with a corresponding optimal $L^2$-boundedness result in [@DS Theorem 2.7]. The rest of the changes to the argument involve simple numerical modifications.\ We turn to proving the geometrically decaying $L^q\to L^p$ estimates for $T_{j, l}^\nu$. Define $\mathcal{M}''$ as the Hardy-Littlewood maximal function acting on the function in the $x''$ variable, i.e., $$\mathcal{M}''f(x):= \sup_{\delta>0} \frac{1}{|B(x'',\delta)|} \int_{B(x'',\delta)} |f(y'',x')| \,\mathrm{d}y'',$$ where $x=(x'',x')$. We define $\mathcal{M}'$ in a similar manner.\ Using [\[fatsecond\]](#fatsecond){reference-type="eqref" reference="fatsecond"} we can write $T^\nu_{j,l}f(x)=T^\nu_{j,l}f^\nu_j(x)$, where $f^\nu_j(x)=\chi_{j}^\nu(D)\Psi_j(D)f(x).$ Now using Lemma 2.22 in [@IRS], take $k=\nu$, $n'=\kappa$ and $r_1=r_2 = \frac 1{2N} <p$ and note that the spectrum of $f^\nu_j$ is $$\mathop{\mathrm{supp}}\widehat {f_j^\nu} \subset \left \{ (\xi'',\xi')\in \mathbb{R}^{\kappa} \times \mathbb{R}^{n-\kappa}:\ |\xi''|\leq 2^{j}, |\xi'|\leq 2^{\frac {j}{2}} \right \}.$$ Moreover, take $c_{j,\nu}=2^{j}$ and $d_{j,\nu}=2^{\frac {j}{2}}$.
Then the conditions of Lemma 2.22 in [@IRS] and Lemma [Lemma 28](#kernel lemma){reference-type="ref" reference="kernel lemma"} all hold for $f_j^\nu$, and therefore we have $$\begin{aligned} \label{pt-est_fio} &|T_{j, l}^\nu f^\nu_j(x)| \\ & \nonumber\lesssim \|w_j^{\nu}(x,\cdot)^{N}K_j^\nu(x,\cdot) \|_{L^{1}_y(\mathbb{R}^{n})}\sup_{y\in\mathbb{R}^n} \frac{|f^\nu_j(y)|}{w_j^{\nu}(x,y)^{N}}\\ &\nonumber\lesssim 2^{j(m-1)} 2^{-lL} [\mathcal{M}''(\mathcal{M}' |f^\nu_j|^{r_1})^{r_2/r_1}]^{1/r_2}(\nabla_\xi \varphi(x,\xi_j^\nu))\\ &\nonumber\lesssim 2^{-j(n-1)/2}2^{-j\left (\frac{n+1}{2}-\frac{n+1}{2p}\right )}2^{-C(j+l)} [\mathcal{M}''(\mathcal{M}' |f^\nu_j|^{r_1})^{r_2/r_1}]^{1/r_2}(\nabla_\xi \varphi(x,\xi_j^\nu))\end{aligned}$$ for $l>j\epsilon$, $L-N> n$ and $L=L(C,\epsilon,n)$ sufficiently large.\ By the SND condition, the Hardy-Littlewood maximal theorem, and [\[eq:high\]](#eq:high){reference-type="eqref" reference="eq:high"} we have $$\begin{aligned} \label{Decay_Lp_fio} &\Vert T_{j, l} f\Vert_{L^p(\mathbb{R}^{n})} \\ &\lesssim \nonumber\sum_{\nu=1}^{\mathcal{N}_j} 2^{-j(n-1)/2} 2^{-j\left (\frac{n+1}{2}-\frac{n+1}{2p}\right )} 2^{-C(j+l)}\Vert \mathcal{M}''(\mathcal{M}' |f^\nu_j|^{r_1})^{r_2/r_1} \Vert_{L^{p/r_2}(\mathbb{R}^{n})}^{1/r_2} \\ &\lesssim \nonumber2^{-C(j+l)}2^{-j\left (\frac{n+1}{2}-\frac{n+1}{2p}\right )}\Vert \mathcal{M}'(|f^\nu_j|^{r_1}) \Vert_{L^{p/r_1}(\mathbb{R}^{n})}^{1/r_1} \\ &\lesssim \nonumber2^{-C(j+l)}2^{-j\left (\frac{n+1}{2}-\frac{n+1}{2p}\right )}\Vert f^\nu_j \Vert_{L^p(\mathbb{R}^{n})} \\ &\lesssim \nonumber2^{-C(j+l)}\Vert f \Vert_{L^p(\mathbb{R}^{n})} \end{aligned}$$ for $1\leq p\leq \infty$. We also used that there are roughly $2^{j(n-1)/2}$ terms in the sum over $\nu$.\ We now lift the $L^p$ estimate above to geometrically decaying $L^q\to L^p$ bounds of $T_{j, l}^\nu$ in the range $1\leq q\leq p<\infty$.
To this end, we use Bernstein's inequality for $1\leq q\leq p\leq 2$ and estimate [\[Decay_Lp_fio\]](#Decay_Lp_fio){reference-type="eqref" reference="Decay_Lp_fio"}, $$\begin{aligned} \label{geometricBernstein_fio} \Vert T_{j, l}^\nu f\Vert_{L^p(\mathbb{R}^{n})}& \lesssim \Vert T_{j, l}^\nu \Vert_{L^p\to L^p} \Vert \Psi_j(D) f \Vert_{L^p(\mathbb{R}^{n})}\\ &\nonumber\lesssim 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)} \Vert f \Vert_{L^q(\mathbb{R}^{n})} \end{aligned}$$ where $\Psi_j$ is a fattened Littlewood-Paley piece.\ The next step is to obtain geometrically decaying $L^q\to L^p$ bounds of $T_{j, l}^\nu$ in the range $1\leq q\leq 2\leq p\leq q'$. Observe that $$\begin{aligned} |T_{j, l} f(x)| & \lesssim \sum_{\nu=1}^{\mathcal{N}_j}\|(1+2^{jw(k,\rho)}|\nabla_\xi \varphi(x,\xi_j^\nu)-y|)^{N} K_{j,l}^{\nu} (x,y)\|_{L^{\infty}_y(\mathbb{R}^{n})}\|f\|_{L^1(\mathbb{R}^{n})}\\ & \nonumber\lesssim 2^{-C(j+l)}\|f\|_{L^1(\mathbb{R}^{n})}. \end{aligned}$$ By interpolating this $L^1\to L^\infty$ estimate and the $L^2\to L^2$ estimate of $T_{j, l}$ we obtain for $1\leq p\leq 2$ that $$\begin{aligned} \Vert T_{j, l} f\Vert_{L^{p'}(\mathbb{R}^{n})}&\lesssim 2^{-C(j+l)} \Vert f \Vert_{L^{p}(\mathbb{R}^{n})}.\end{aligned}$$ Now interpolating this with [\[geometricBernstein_fio\]](#geometricBernstein_fio){reference-type="eqref" reference="geometricBernstein_fio"} for $p=2$ and using Bernstein's inequality again, we obtain $$\begin{aligned} \label{geometricBernstein_fio2} \Vert T_{j, l}^\nu f\Vert_{L^p(\mathbb{R}^{n})} \lesssim 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)} \Vert f \Vert_{L^q(\mathbb{R}^{n})} \end{aligned}$$ for $1\leq q\leq 2\leq p\leq q'$.
◻ # Proof of the main results {#sec:proof} In this section we give a brief description of the proofs of Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"} and Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}.\ We distil the idea of [@BC], [@LMRNC], and [@LSS] of using geometrically decaying $L^q\to L^p$ estimates of pieces of an operator to prove sparse form bounds. The following lemma is due to Beltran and Cladek [@BC], building on earlier work of Lacey and Spencer [@LSS] and of Lacey, Mena, and Reguera [@LMRNC]. Their argument combines the one-third trick with shifted dyadic grids; although their paper does not state the result as a lemma, we formalize it as such for our purposes. **Lemma 32** ([@BC]). *Fix $\epsilon>0$ and $\delta> \epsilon$. Then for a sequence of operators $(T_{j,l})_{j,l\geq 0}$ we have $$\begin{aligned} \Big<\sum_{j\ge 0}\sum_{l \leq j \epsilon} T_{j,l} f, g\Big> & \lesssim_{\epsilon}\sum_{j\ge 0}\sum_{\substack{Q\text{ dyadic}: \\ \ell(Q)= 2^{\lfloor{-j\delta+j\epsilon+10\rfloor}}}} \!\!\!\! |Q|^{1/r+1/s'-1}\Big\|\sum_{l\le j\epsilon}T_{j, l}\Big\|_{L^r\to L^s}|Q| \left<f\right>_{r, Q}\left<g\right>_{s', Q},\end{aligned}$$ and $$\begin{aligned} \Big< \sum_{j \geq 0} \sum_{l > j\epsilon} T_{j,l}f, g \Big> & \lesssim_{\epsilon} \sum_{j \geq 0} \sum_{l > j\epsilon} \sum_{\substack{Q \text{ dyadic}: \\ \ell(Q)=2^{\lfloor -j\delta+l+10 \rfloor}}} \!\!\!\! |Q|^{1/r+1/s'-1} \|T_{j,l}\|_{L^r \to L^s} |Q| \left< f \right>_{r,Q} \left< g \right>_{s',Q},\end{aligned}$$ where the sums are taken over $Q$ in some shifted dyadic grid $\mathcal{D}_v$.* **Proposition 33**. *Let $1\leq q, p\leq \infty$, $\delta>\epsilon>0$ and $m<(\delta-\epsilon)n(1/q-1/p)$ be given.
Suppose that for all $l,j\geq 0$ and all $C\gg 1$ (not depending on $l,j$), $$\label{lifing est} \begin{cases} \|T_{j,l}\|_{L^q \to L^p}\lesssim 2^{-C(j+l)}, \text{ and}&\\ \|\sum_{l\le j\epsilon}T_{j,l}\|_{L^q \to L^p}\lesssim 2^{jm}, \end{cases}$$ holds. Then for any compactly supported bounded functions $f, g$ on $\mathbb{R}^n$, there exist sparse collections $\mathcal{S}$ and $\widetilde{\mathcal{S}}$ of dyadic cubes such that $$\begin{cases} |\left<T f, g\right>|\le C(m, q, p)\Lambda_{\mathcal{S}, q, p'}(f, g), \text{ and }\\ |\left<T f, g\right>|\le C(m, q, p)\Lambda_{\widetilde{\mathcal{S}}, p', q}(f, g). \end{cases}$$* *Proof.* Observe that estimate $|\left<T f, g\right>|\le C(m, q, p)\Lambda_{\widetilde{\mathcal{S}}, p', q}(f, g)$ follows by duality from $|\left<T f, g\right>|\le C(m, q, p)\Lambda_{\mathcal{S}, q, p'}(f, g)$. By Lemma [Lemma 32](#sparse domination lift_lemma){reference-type="ref" reference="sparse domination lift_lemma"} and the first estimate in [\[lifing est\]](#lifing est){reference-type="eqref" reference="lifing est"}, we have that $$\begin{aligned} &\Big<\sum_{j\ge 0}\sum_{l > j \epsilon} T_{j,l} f, g\Big>\\ & \lesssim_{\epsilon}\sum_{j\ge 0}\sum_{l > j \epsilon}\sum_{\substack{Q\text{ dyadic}: \\ \ell(Q)= 2^{\lfloor{-j\delta+l+10\rfloor}}}} \!\!\!\! |Q|^{1/q+1/p'-1}\Big\|T_{j, l}\Big\|_{L^q\to L^p}|Q| \left<f\right>_{q, Q}\left<g\right>_{p', Q}\\ & \lesssim_{\epsilon}\sum_{j\ge 0}\sum_{l > j \epsilon} 2^{-C(j+l)}2^{(-j\delta+l)n(1/q-1/p)} \sum_{\substack{Q\text{ dyadic}: \\ \ell(Q)= 2^{\lfloor{-j\delta+l+10\rfloor}}}} \!\!\!\! |Q| \left<f\right>_{q, Q}\left<g\right>_{p', Q}. \end{aligned}$$ Both sums in $j$ and $l$ converge for sufficiently large $C$. The bound for $\sum_{l \leq j \epsilon} T_{j,l}$ follows from essentially the same arguments, except that convergence now relies on the assumption $m<(\delta-\epsilon)n(1/q-1/p)$.
Finally Lemma [Lemma 10](#universal sparse form){reference-type="ref" reference="universal sparse form"} gives us the sparse form bound we desire. ◻ We now turn to the proof of our main results. *Proof of Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"}.* Note that for $1\leq q\leq p\leq 2$ we have by Lemma [Lemma 30](#lplq_oio){reference-type="ref" reference="lplq_oio"} that $$\begin{aligned} \begin{cases} \|\sum_{l\le j\epsilon}T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{j(m+\frac{n(1-\varkappa)}{2}\big(\frac{2}{p}-1\big)+\zeta)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},\\ \|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},& l>j\epsilon. \end{cases}\end{aligned}$$ By taking $C$ large enough we have $$\|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)},\quad l>j\epsilon$$ for all $1\leq q\leq p\leq 2$. Moreover, to satisfy the hypothesis of Proposition [Proposition 33](#lifting result){reference-type="ref" reference="lifting result"} we need that $$\begin{aligned} m&<n\big(\frac{1}{p}-\frac{1}{q}\big)+\varkappa n\big(\frac{1}{q}-\frac{1}{p}\big)-\frac{n(1-\varkappa)}{2}\big(\frac{2}{p}-1\big)-\zeta\\ &=-n(1-\varkappa)\big(\frac{1}{q}-\frac{1}{2}\big)-\zeta, \end{aligned}$$ since we can take $\epsilon>0$ arbitrarily small. Under this assumption on $m$, the case when $1\leq q\leq p\leq 2$ in Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"} follows from Proposition [Proposition 33](#lifting result){reference-type="ref" reference="lifting result"}. Moreover, for $1\leq q \leq 2 \leq p \leq q'$ we have $$\begin{aligned} \begin{cases} \|\sum_{l\le j\epsilon}T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{j(m-n\big(\frac{1}{p}-\frac{1}{q}\big)+\zeta\big(\frac{2}{q}-1\big))}\\ \|T_{j, l}\|_{L^q\to L^p}\lesssim_{\epsilon} 2^{-C(j+l)}2^{-jn\big(\frac{1}{p}-\frac{1}{q}\big)},& l>j\epsilon. \end{cases}\end{aligned}$$ The second inequality can be handled as above. 
For the first one we need that $$\begin{aligned} m&<n\big(\frac{1}{p}-\frac{1}{q}\big)+\varkappa n\big(\frac{1}{q}-\frac{1}{p}\big)-\zeta\big(\frac{2}{q}-1\big)\\ &=-n(1-\varkappa)\big(\frac{1}{q}-\frac{1}{p}\big)-\zeta\big(\frac{2}{q}-1\big), \end{aligned}$$ since we can take $\epsilon>0$ arbitrarily small. Under this assumption we then obtain Theorem [Theorem 2](#main_OIO){reference-type="ref" reference="main_OIO"} in the range $1\leq q \leq 2 \leq p \leq q'$. ◻ *Proof of Theorem [Theorem 1](#main_FIO){reference-type="ref" reference="main_FIO"}.* The proof follows in a very similar manner to the one above; the main modification is the following requirements: $$\begin{aligned} m&<-n(1-\rho)\big(\frac{1}{q}-\frac{1}{2}\big)-\kappa\rho\big(\frac{1}{p}-\frac{1}{2}\big)-\zeta,\quad 1\leq q\leq p\leq 2;\\ m&<-n(1-\rho)\big(\frac{1}{q}-\frac{1}{p}\big)-\zeta\big(\frac{2}{q}-1\big),\quad 1\leq q \leq 2 \leq p \leq q'. \end{aligned}$$ ◻
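As a sanity check of the exponent algebra in the proof of Theorem 2 above (case $1\leq q\leq p\leq 2$), the simplification of the admissible range of $m$ can be verified exactly with rational arithmetic. The snippet below is purely a verification aid (not part of the argument), and the values $n=10$, $\varkappa=1/3$, $\zeta=1/7$ are arbitrary test parameters.

```python
from fractions import Fraction as F

n, kappa, zeta = 10, F(1, 3), F(1, 7)  # arbitrary rational test values

def lhs(p, q):
    # n(1/p - 1/q) + kappa*n(1/q - 1/p) - (n(1-kappa)/2)(2/p - 1) - zeta
    return (n * (1 / p - 1 / q) + kappa * n * (1 / q - 1 / p)
            - n * (1 - kappa) / 2 * (2 / p - 1) - zeta)

def rhs(p, q):
    # -n(1-kappa)(1/q - 1/2) - zeta
    return -n * (1 - kappa) * (1 / q - F(1, 2)) - zeta

# The identity holds for every admissible pair (p, q), not just these samples.
for p, q in [(F(3, 2), F(5, 4)), (F(2), F(1)), (F(7, 4), F(6, 5))]:
    assert lhs(p, q) == rhs(p, q)
```

Since all quantities are exact `Fraction`s, equality here is exact rather than up to floating-point error.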
--- abstract: | We employ random matrix theory to establish consistency of generalized cross validation (GCV) for estimating prediction risks of sketched ridge regression ensembles, enabling efficient and consistent tuning of regularization and sketching parameters. Our results hold for a broad class of asymptotically free sketches under very mild data assumptions. For squared prediction risk, we provide a decomposition into an unsketched equivalent implicit ridge bias and a sketching-based variance, and prove that the risk can be globally optimized by only tuning sketch size in infinite ensembles. For general subquadratic prediction risk functionals, we extend GCV to construct consistent risk estimators, and thereby obtain distributional convergence of the GCV-corrected predictions in Wasserstein-2 metric. This in particular allows construction of prediction intervals with asymptotically correct coverage conditional on the training data. We also propose an "ensemble trick" whereby the risk for unsketched ridge regression can be efficiently estimated via GCV using small sketched ridge ensembles. We empirically validate our theoretical results using both synthetic and real large-scale datasets with practical sketches including CountSketch and subsampled randomized discrete cosine transforms. author: - | Pratik Patil [^1]\ <pratikpatil@berkeley.edu> - | Daniel LeJeune [^2]\ <daniel@dlej.net> bibliography: - references.bib title: | Asymptotically free sketched ridge ensembles:\ Risks, cross-validation, and tuning --- # Introduction {#sec:introduction} *Random sketching* is a powerful tool for reducing the computational complexity associated with large-scale datasets by projecting them to a lower-dimensional space for efficient computations. 
Sketching has been a remarkable success both in practical applications and from a theoretical standpoint: it has enabled application of statistical techniques to problem scales that were formerly unimaginable [@pmlr-v80-aghazadeh18a; @murray2023randomized], while enjoying rigorous technical guarantees that ensure the underlying learning problem essentially remains unchanged provided the sketch dimension is not too small (e.g., above the rank of the full data matrix) [@tropp2011hadamard; @woodruff2014sketching]. However, real-world data scenarios often deviate from these ideal conditions for the problem to remain unchanged. For one, real data often has a tail of non-vanishing eigenvalues and is not truly low rank. For another, our available resources may impose constraints on sketch sizes, forcing them to fall below the critical threshold. When the sketch size is critically low, the learning problem can change significantly. In particular, when reducing the dimensionality below the threshold to solve the original problem, the problem becomes *implicitly regularized* [@mahoney2011randomized; @thanei_heinze_meinshausen_2017]. Recent work has precisely characterized this problem change in linear regression [@lejeune2022asymptotics], being exactly equal to ridge regression in an infinite ensemble of sketched predictors [@lejeune2020implicit], with the size of the sketch acting as an additional hyperparameter that affects the implicit regularization. If the underlying problem changes with sketching, a key question arises: *can we reliably and efficiently tune hyperparameters of sketched prediction models, such as the sketch size?* While cross-validation (CV) is the classical way to tune hyperparameters, standard $k$-fold CV (with small or moderate $k$ values, such as $5$ or $10$) is not statistically consistent for high-dimensional data [@xu_maleki_rad_2019], and leave-one-out CV (LOOCV) is often computationally infeasible. 
Generalized cross-validation (GCV), on the other hand, is an extremely efficient method for estimating generalization error using only training data [@craven_wahba_1979; @friedman_hastie_tibshirani_2009], providing asymptotically exact error estimators in high dimensions with similar computational cost to fitting the model [@patil2021uniform; @wei_hu_steinhardt]. However, since the consistency of GCV is due to certain concentration of measure phenomena of data, it is unclear whether GCV should also provide a consistent error estimator for predictors with sketched data, in particular when combining several sketched predictors in an ensemble, such as in distributed optimization settings. In this work, we prove that efficient consistent tuning of hyperparameters of sketched ridge regression ensembles is achievable with GCV (see Figure [1](#fig:gcv_paths){reference-type="ref" reference="fig:gcv_paths"} for an illustration). Furthermore, we state our results for a very broad class of *asymptotically free* sketching matrices, a notion from free probability theory [@voiculescu1997free; @mingo2017free] generalizing rotational invariance. ## Summary of results and outline Below we present a summary of our main results in this paper and provide an outline of the paper. ![ **GCV provides consistent risk estimation for sketched ridge regression.** We show squared risk (solid) and GCV estimates (dashed) for sketched regression ensembles of $K = 5$ predictors on synthetic data with $n = 500$ observations and $p = 600$ features. **Left:** Each sketch induces its own risk curve in regularization strength $\lambda$, but across all sketches GCV is consistent. **Middle:** Minimizers and minimum values can vary by sketching type. **Right:** Each sketch also induces a risk curve in sketch size $\alpha = q/p$, so sketch size can be tuned to optimize risk. Error bars denote standard error of the mean over 100 trials. 
](figures/gcv_paths.pdf){#fig:gcv_paths width="99%"} - **Squared risk asymptotics.** We provide precise asymptotics of squared risk and its GCV estimator for sketched ridge ensembles in for the class of asymptotically free sketches applied to features. We give this result in terms of an exact bias--variance decomposition into an equivalent implicit unsketched ridge regression risk and an inflation term due to randomness of the sketch that is controlled by ensemble size. - **Distributional and functional consistencies.** We prove consistency of GCV risk estimators for a broad class of subquadratic risk functionals in . To the best of our knowledge, this is the first extension of GCV beyond residual-based risk functionals in any setting. In doing so, we also prove the consistency of estimating the joint response--prediction distribution using GCV in Wasserstein $W_2$ metric in , enabling the use of GCV for also evaluating classification error and constructing prediction intervals with valid asymptotic conditional coverage. - **Tuning applications.** Exploiting the special form of the risk decomposition, we propose a method in the form of an "ensemble trick" to tune unsketched ridge regression using only sketched ensembles. We also prove that large unregularized sketched ensembles with tuned sketch size can achieve the optimal unsketched ridge regression risk in . Throughout all of our results, we impose very weak assumptions: we require no model on the relationship between response variables and features; we allow for arbitrary feature covariance with random matrix structure; we allow any sketch that satisfies asymptotic freeness, which we empirically verify for CountSketch [@charikar2002frequent] and subsampled randomized discrete cosine transforms (SRDCT); and we allow for the consideration of zero or even negative regularization. 
All proofs and details of experiments and additional numerical illustrations are deferred to appendices, which also contain relevant backgrounds on asymptotic freeness and asymptotic equivalents. The source code for generating all of our experimental figures in this paper is available at <https://github.com/dlej/sketched-ridge>. ## Related work For context, we briefly discuss related work on sketching, ridge regression, and cross-validation. *Sketching and implicit regularization.* The implicit regularization effect of sketching has been known for some time [@mahoney2011randomized; @thanei_heinze_meinshausen_2017]. This effect is strongly related to *inversion bias*, and has been precisely characterized in a number of settings in recent years [@pmlr-v108-mutny20a; @derezinski2021newtonless; @pmlr-v134-derezinski21a]. Most recently, [@lejeune2022asymptotics] showed that sketched matrix inversions are asymptotically equivalent to unsketched implicitly regularized inversions. Notably, this holds not only for i.i.d. random sketches but also for asymptotically free sketches. This result is a crucial component of our bias--variance decomposition of GCV risk. By accommodating free sketches, we can apply our results to many sketches used in practice with limited prior theoretical understanding. We offer further comments and comparisons in . *High-dimensional ridge and sketching.* Ridge regression, particularly its "ridgeless" variant where the regularization parameter approaches zero, has attracted significant attention in the last few years. This growing interest stems from the phenomenon that in the overparameterized regime, where the number of features exceeds the number of observations, the ridgeless estimator interpolates the training data and exhibits a peculiar generalization behaviour [@belkin_hsu_xu_2020; @bartlett_long_lugosi_tsigler_2020; @hastie2022surprises].
Different sketching variants and their risks for a single sketched ridge estimator under positive regularization are analyzed in [@liu_dobriban_2019]. Very recently, [@bach2023high] consider the effect of random sketching in a setting that includes ridgeless regression. Our work broadens the scope of these prior works by considering all asymptotically free sketched ensembles and accommodating zero and negative regularization. Complementary to feature sketching, there is an emerging interest in investigating subsampling, and more broadly observation sketching. The statistical properties of subsampled ridge predictors have recently been analyzed in several works under different data settings: [@lejeune2020implicit; @patil2022bagging; @du2023subsample; @patil2023generalized; @chen2023sketched; @ando2023high]. At a high level, this work can informally be thought of as "dual" to this evolving literature. While there are definite parallels between the two, there are some crucial differences as well. We discuss more on this aspect in .
Consistency properties of GCV have been investigated: for ridge regression under various scenarios [@adlam_pennington_2020neural; @patil2021uniform; @patil2022estimating; @wei_hu_steinhardt], for LASSO [@bayati2011amp; @celentano2020lasso], and for general regularized $M$-estimators [@bellec2020out; @bellec2022derivatives], among others. Our work adds to this body of work by analyzing GCV for freely sketched ridge ensembles and establishing its consistency across a broad class of risk functionals. # Sketched ensembles {#sec:preliminaries} Let $\smash{((\mathbf{x}_i, y_i))_{i=1}^{n}}$ be $n$ i.i.d. observations in $\mathbb R^{p} \times \mathbb R$. We denote by $\mathbf{X}\in \mathbb R^{n \times p}$ the data matrix whose $i$-th row contains $\smash{\mathbf{x}_i^\top}$ and by $\mathbf{y}\in \mathbb R^n$ the associated response vector whose $i$-th entry contains $y_i$. #### Sketched ensembles and risk functionals. Consider a collection of $K$ independent sketching matrices $\mathbf{S}_k \in \mathbb R^{p \times q}$ for $k \in [K]$. We consider sketched ridge regression where we apply the sketching matrix $\mathbf{S}_k$ to the features (columns) of the data $\mathbf{X}$ only. We denote the sketching solution as $$\label{eq:beta-hat} \widehat{{\bm{\beta}}}_\lambda^k =\mathbf{S}_k \widehat{{\bm{\beta}}}^{\mathbf{S}_k}_\lambda \quad \text{for} \quad \widehat{{\bm{\beta}}}^{\mathbf{S}_k}_\lambda =\mathop{\mathrm{arg\,min}}_{{\bm{\beta}}\in \mathbb R^{q}} \tfrac{1}{n} \big\Vert \mathbf{y}- \mathbf{X}\mathbf{S}_k {\bm{\beta}}\big\Vert_2^2 + \lambda \ensuremath{{\left\|{\bm{\beta}}\right\|}_{2}}^2,$$ where $\lambda$ is the ridge regularization level. The estimator $\widehat{{\bm{\beta}}}_\lambda^k$ admits a closed form expression shown below in [\[eq:ensemble-estimator\]](#eq:ensemble-estimator){reference-type="eqref" reference="eq:ensemble-estimator"}. 
Note that we express the solution $\widehat{{\bm{\beta}}}_\lambda^k$ in the feature space as the sketching matrix $\mathbf{S}_k$ times a solution $\widehat{{\bm{\beta}}}^{\mathbf{S}_k}_\lambda$ in the sketched data space. When we use this solution on a new data point $\mathbf{x}_0$, the predicted response is given by $\mathbf{x}_0^\top\widehat{{\bm{\beta}}}_\lambda^k = \mathbf{x}_0^\top\mathbf{S}_k \widehat{{\bm{\beta}}}^{\mathbf{S}_k}_\lambda$. This is simply the application of $\widehat{{\bm{\beta}}}^{\mathbf{S}_k}_\lambda$ to the sketched data point $\mathbf{S}_k^\top\mathbf{x}_0$. The primary advantage of representing $\widehat{{\bm{\beta}}}_\lambda^k$ in the feature space $\mathbb R^p$, rather than in the sketched data space $\mathbb R^q$, is that we can now perform a direct comparison with other estimators within the feature space $\mathbb R^p$. We obtain the final ensemble estimator as a simple unweighted average of $K$ independently sketched predictors, each of which admits a simple expression in terms of a regularized pseudoinverse of the sketched data: $$\label{eq:ensemble-estimator} \widehat{{\bm{\beta}}}^{\mathrm{ens}}_\lambda = \frac{1}{K} \sum_{k=1}^K \widehat{{\bm{\beta}}}^{k}_\lambda, \quad \text{where} \quad \widehat{{\bm{\beta}}}_\lambda^k = \tfrac{1}{n} \mathbf{S}_k \big( \tfrac{1}{n} \mathbf{S}_k^\top\mathbf{X}^\top\mathbf{X}\mathbf{S}_k + \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}_k^\top\mathbf{X}^\top\mathbf{y}.$$ Note that we allow for $\lambda$ to be possibly negative in when writing [\[eq:ensemble-estimator\]](#eq:ensemble-estimator){reference-type="eqref" reference="eq:ensemble-estimator"} (see for details). Let $(\mathbf{x}_0, y_0)$ be a test point drawn independently from the same distribution as the training data. Risk functionals of the ensemble estimator are properties of the joint distribution of $\smash{(y_0, \mathbf{x}_0^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens})}$.
Letting $\smash{P_\lambda^\mathrm{ens}}$ denote this distribution, we are interested in estimating linear functionals of $\smash{P_\lambda^\mathrm{ens}}$. That is, let $t : \mathbb{R}^2 \to \mathbb{R}$ be an error function. Define the corresponding conditional prediction risk functional as $$\label{eq:func_def} T(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}) = \int t(y, z) \, \mathrm{d}P^\mathrm{ens}_\lambda(y, z) = \mathbb{E}_{\mathbf{x}_0, y_0} \Big[ t(y_0, \mathbf{x}_0^\top \widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}) \bigm| \mathbf{X}, \mathbf{y}, ( \mathbf{S}_k)_{k=1}^K \Big].$$ A special case of a risk functional is the squared risk when $\smash{t(y, z) = (y - z)^2}$. We denote the risk functional in this case by $R(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens})$, which is the classical mean squared prediction risk. #### Proposed GCV plug-in estimators. Note that each individual estimator $\smash{\widehat{{\bm{\beta}}}_\lambda^k}$ of the ensemble is a linear smoother with smoothing matrix $$\begin{aligned} \mathbf{L}_\lambda^k = \tfrac{1}{n} \mathbf{X}\mathbf{S}_k (\tfrac{1}{n} \mathbf{S}_k^\top\mathbf{X}^\top\mathbf{X}\mathbf{S}_k + \lambda \mathbf{I}_q)^{-1} \mathbf{S}_k^\top\mathbf{X}^\top,\end{aligned}$$ in the sense that the training data predictions are given by $\smash{\mathbf{X}\widehat{{\bm{\beta}}}_\lambda^k = \mathbf{L}_\lambda^k \mathbf{y}}$. This motivates our consideration of estimators based on generalized cross-validation (GCV) [@friedman_hastie_tibshirani_2009 Chapter 7]. Given any linear smoother of the responses with smoothing matrix $\mathbf{L}$, the GCV estimator of the squared prediction risk is $\smash{{\tfrac{1}{n} \| \mathbf{y}- \mathbf{L}\mathbf{y}\|_2^2}/{(1 - \tfrac{1}{n} {\rm tr}(\mathbf{L}))^2}}$.
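The smoother identity and the GCV formula above can be illustrated with a few lines of numpy. This is our own minimal sketch, not code from our experiments; a Gaussian sketching matrix is used purely for convenience (any sketch satisfying our assumptions could be substituted).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, lam = 50, 30, 20, 0.1

X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
S = rng.standard_normal((p, q)) / np.sqrt(q)  # illustrative Gaussian sketch

# Single sketched predictor via the regularized pseudoinverse:
# beta_k = (1/n) S (S^T X^T X S / n + lam I)^{-1} S^T X^T y.
XS = X @ S
G = np.linalg.inv(XS.T @ XS / n + lam * np.eye(q))
beta_k = S @ (G @ (XS.T @ y)) / n

# The corresponding smoothing matrix, so that X beta_k = L_k y.
L_k = XS @ G @ XS.T / n
assert np.allclose(X @ beta_k, L_k @ y)

# Classical GCV estimate of the squared prediction risk of this smoother.
gcv = np.mean((y - L_k @ y) ** 2) / (1 - np.trace(L_k) / n) ** 2
```

Here $\operatorname{tr}(\mathbf{L}_\lambda^k)/n < 1$ whenever $\lambda > 0$ and $q < n$, so the GCV denominator is well defined in this toy setting.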
GCV enjoys certain consistency properties in the fixed-$\mathbf{X}$ setting [@li_1985; @li_1986] and has recently been shown to also be consistent under various random-$\mathbf{X}$ settings for ridge regression [@patil2021uniform; @wei_hu_steinhardt; @han2023distribution]. We extend the GCV estimator to general functionals by considering GCV as a plug-in estimator of squared risk of the form $\smash{\tfrac{1}{n} \sum_{i=1}^n (y_i - z_i)^2}$. Determining the $z_i$ that correspond to GCV, we obtain the empirical distribution of GCV-corrected predictions as follows: $$\label{eq:gcv-dual-empirical-dist} {\widehat{P}}^\mathrm{ens}_\lambda = \frac{1}{n} \sum_{i = 1}^{n} \delta \bigg\{ \Big( y_i, \frac{\mathbf{x}_i^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}- \tfrac{1}{n} {\rm tr}[\mathbf{L}_\lambda^\mathrm{ens}] y_i}{1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_\lambda^\mathrm{ens}]} \Big) \bigg\}, \quad \text{where} \quad \mathbf{L}^\mathrm{ens}_\lambda = \frac{1}{K} \sum_{k=1}^{K} \mathbf{L}_\lambda^k.$$ Here $\delta\{\mathbf{a}\}$ denotes a Dirac measure located at an atom $\mathbf{a}\in \mathbb R^2$. To give some intuition as to why this is a reasonable choice, consider that when fitting a model, the predictions on training points will be excessively correlated with the training responses. In order to match the test distribution, we need to cancel this increased correlation, which we accomplish by subtracting an appropriately scaled $y_i$.
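The correction can also be checked mechanically: since $y_i - z_i = (y_i - \hat{y}_i)/(1 - \tfrac{1}{n}{\rm tr}[\mathbf{L}_\lambda^\mathrm{ens}])$ for the corrected predictions $z_i$, plugging squared error into these atoms recovers the classical GCV formula. The following is our own minimal numpy check, with Gaussian sketches standing in for the practical sketches used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, lam, K = 40, 30, 15, 0.5, 3

X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Ensemble smoothing matrix L^ens: average of K independently sketched smoothers.
L_ens = np.zeros((n, n))
for _ in range(K):
    S = rng.standard_normal((p, q)) / np.sqrt(q)  # illustrative Gaussian sketch
    XS = X @ S
    L_ens += XS @ np.linalg.solve(XS.T @ XS / n + lam * np.eye(q), XS.T) / (n * K)

yhat = L_ens @ y
t = np.trace(L_ens) / n

# GCV-corrected predictions: the atoms of the empirical distribution above.
z = (yhat - t * y) / (1 - t)

# Squared-error plug-in on the corrected predictions equals the classical
# GCV estimator (1/n)||y - L y||^2 / (1 - tr(L)/n)^2.
assert np.isclose(np.mean((y - z) ** 2), np.mean((y - yhat) ** 2) / (1 - t) ** 2)
```

The same atoms can then be fed into any error function $t(y, z)$, which is exactly how the general plug-in estimators below are formed.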
Using this empirical distribution, we form the plug-in GCV risk functional estimators $$\label{eq:def-gcv} \widehat{T}(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}) = \frac{1}{n} \sum_{i=1}^{n} t\Big(y_i, \frac{\mathbf{x}_i^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}- \tfrac{1}{n} {\rm tr}[\mathbf{L}_\lambda^\mathrm{ens}] y_i}{1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_\lambda^\mathrm{ens}]} \Big) \quad \text{and} \quad {\widehat{R}}(\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda) = \frac {\tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda \|_2^2} {( 1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}^\mathrm{ens}_\lambda] )^2}.$$ In the case where $\lambda \to 0^+$ but ridgeless regression is well-defined, the denominator may tend to zero. However, the numerator will also tend to zero, and therefore one should interpret this quantity as its analytic continuation, which is also well-defined. In practice, if so desired, one can choose very small (positive and negative) $\lambda$ near zero and interpolate for a first-order approximation. We emphasize that the GCV-corrected predictions are a "free lunch" in most circumstances. For example, when tuning over $\lambda$, it is common to precompute a decomposition of $\mathbf{X}\mathbf{S}_k$ such that subsequent matrix inversions for each $\lambda$ are very inexpensive, and the same decomposition can be used to evaluate $\smash{\tfrac{1}{n} {\rm tr}[\mathbf{L}_\lambda^\mathrm{ens}]}$ exactly. Otherwise, Monte-Carlo trace estimation is a common strategy for GCV [@girard1989montecarlo; @hutchinson1989trace] that yields consistent estimators using very few (even single) samples, such that the additional computational cost is essentially the same as fitting the model. # Squared risk asymptotics and consistency {#sec:squared-risk} We now derive the asymptotics of squared risk and its GCV estimator for the finite ensemble sketched estimator.
The special structure of the squared risk allows us to obtain explicit forms of the asymptotics that shed light on the dependence of both the ensemble risk and GCV on $K$, the size of the ensemble. We then show consistency of GCV for squared risk using these asymptotics. We express our asymptotic results using the asymptotic equivalence notation $\mathbf{A}_n \simeq\mathbf{B}_n$, which means that for any sequence of ${\bm{\Theta}}_n$ having $\smash{\ensuremath{{\left\|{\bm{\Theta}}_n\right\|}_{{\rm tr}}} = {\rm tr}\left[ ({\bm{\Theta}}_n {\bm{\Theta}}_n^\top)^{1/2} \right]}$ uniformly bounded in $n$, $\smash{\lim_{n \to \infty} {\rm tr}\left[ {\bm{\Theta}}_n (\mathbf{A}_n - \mathbf{B}_n) \right] = 0}$ almost surely. In the case that $\mathbf{A}_n$ and $\mathbf{B}_n$ are scalars $a_n$ and $b_n$ such as risk estimators, this reduces to $\lim_{n \to \infty} (a_n - b_n) = 0$. Our forthcoming results apply to a sequence of problems of increasing dimensionality proportional to $n$, and we omit the explicit dependence on $n$ in our statements. ## Asymptotically free sketching {#sec:asymptotically-free-sketching} For our theoretical analysis, we need our sketching matrix $\mathbf{S}$ to have favorable properties. The sketch should preserve much of the essential structure of the data, even through (regularized) matrix inversion. A sufficient yet quite general condition for this is *freeness* [@voiculescu1997free; @mingo2017free]. [\[cond:sketch\]]{#cond:sketch label="cond:sketch"} Let $\mathbf{S}\mathbf{S}^\top$ and $\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}$ converge almost surely to bounded operators infinitesimally free with respect to $\smash{(\tfrac{1}{p}{\rm tr}[\cdot], {\rm tr}[{\bm{\Theta}}(\cdot)])}$ for any ${\bm{\Theta}}$ independent of $\mathbf{S}$ with $\ensuremath{{\left\|{\bm{\Theta}}\right\|}_{{\rm tr}}}$ uniformly bounded, and let $\mathbf{S}\mathbf{S}^\top$ have limiting S-transform $\smash{\mathscr S_{\mathbf{S}\mathbf{S}^\top}}$ analytic on $\mathbb C^{-}$. 
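As a toy numpy illustration (not from the paper): for matrices with randomly rotated eigenvectors, mixed normalized traces factorize approximately, $\tfrac1p{\rm tr}[\mathbf{A}\mathbf{B}] \approx \tfrac1p{\rm tr}[\mathbf{A}]\cdot\tfrac1p{\rm tr}[\mathbf{B}]$, which is a necessary first-order signature of freeness, while aligned eigenvectors break the factorization.

```python
import numpy as np

rng = np.random.default_rng(5)
p = 500

# A has uniformly random eigenvectors; B is diagonal (fixed eigenvectors).
d = rng.uniform(0.0, 2.0, size=p)
U = np.linalg.qr(rng.standard_normal((p, p)))[0]  # Haar-distributed orthogonal
A = (U * d) @ U.T                                  # A = U diag(d) U^T
B = np.diag(rng.uniform(0.0, 2.0, size=p))

# Mixed trace factorizes for incoherent eigenvectors.
mixed = np.trace(A @ B) / p
factored = (np.trace(A) / p) * (np.trace(B) / p)

# Perfectly aligned eigenvectors (A with itself) do not factorize:
# (1/p) tr[A^2] = mean(d^2) > mean(d)^2 whenever d is non-constant.
aligned = np.trace(A @ A) / p
aligned_factored = (np.trace(A) / p) ** 2
```

Here the sizes and the uniform eigenvalue distribution are arbitrary choices for the demonstration.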
We give a background on freeness, including infinitesimal freeness [@shlyakhtenko2018infinitesimal], in . Intuitively, freeness of a pair of operators $\mathbf{A}$ and $\mathbf{B}$ means that the eigenvectors of one are completely unaligned or incoherent with the eigenvectors of the other. For example, if $\smash{\mathbf{A}= \mathbf{U}\mathbf{D}\mathbf{U}^\top}$ for a uniformly random unitary matrix $\mathbf{U}$ drawn independently of positive semidefinite $\mathbf{B}$ and $\mathbf{D}$, then $\mathbf{A}$ and $\mathbf{B}$ are almost surely asymptotically infinitesimally free [@cebron2022freeness].[^3] For this reason, we expect any sketch that is *rotationally invariant* (a desirable property of sketches in practice, as we do not wish the sketch to prefer any particular dimensions of our data) to satisfy Assumption [\[cond:sketch\]](#cond:sketch){reference-type="ref" reference="cond:sketch"}. We refer readers to Chapter 2.4 of [@tulino2004random] for some instances of asymptotic freeness. For further details on infinitesimal freeness, see . The property that the sketch preserves the structure of the data is captured in the notion of subordination and conditional expectation in free probability [@biane1998], closely related to the *deterministic equivalents* [@dobriban_sheng_2020; @dobriban_sheng_2021] used in random matrix theory. The work in [@lejeune2022asymptotics] recently extended such results to infinitesimally free operators in the context of sketching, which will form the basis of our analysis.[^4] For the statement to follow, define $\smash{\widehat{{\bm{\Sigma}}}=\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}}$ and $\smash{\lambda_0 =-\liminf_{p \to \infty}{\lambda_{\min}^{+}(\mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S})}}$. Here $\smash{\lambda_{\min}^{+}(\mathbf{A})}$ denotes the minimum nonzero eigenvalue of a symmetric matrix $\mathbf{A}$.
[\[thm:sketched-pseudoinverse\]]{#thm:sketched-pseudoinverse label="thm:sketched-pseudoinverse"} Under , for all $\lambda > \lambda_0$, $$\begin{aligned} \mathbf{S}\big( \mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}^\top \simeq\big( \widehat{{\bm{\Sigma}}}+ \mu \mathbf{I}_p \big)^{-1}, \label{eq:cond-sketch} \end{aligned}$$ where $\smash{\mu > -\lambda_{\min}^{+}(\widehat{{\bm{\Sigma}}})}$ is increasing in $\lambda > \lambda_0$ and satisfies $$\begin{aligned} \label{eq:dual-fp-main} \mu \simeq \lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top} \big( -\tfrac{1}{p} {\rm tr}\big[ \mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}\big( \mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \big] \big) \simeq \lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top} \big( -\tfrac{1}{p} {\rm tr}\big[ \widehat{{\bm{\Sigma}}}\big( \widehat{{\bm{\Sigma}}}+ \mu \mathbf{I}_p \big)^{-1} \big] \big). \end{aligned}$$ Put another way, when we sketch $\smash{\widehat{{\bm{\Sigma}}}}$ and compute a regularized inverse, it is (in a first-order sense) as if we had computed an unsketched regularized inverse of $\smash{\widehat{{\bm{\Sigma}}}}$, potentially with a different "implicit" regularization strength $\mu$ instead of $\lambda$. Since the result holds for free sketching matrices, we expect this to include fast practical sketches such as CountSketch [@charikar2002frequent] and subsampled randomized Fourier and Hadamard transforms (SRFT/SRHT) [@tropp2011hadamard; @lacotte2020optimal], which were demonstrated empirically to satisfy the same relationship in [@lejeune2022asymptotics], and for which we also provide further empirical support in this work in . 
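One can probe this equivalence numerically without knowing $\smash{\mathscr S_{\mathbf{S}\mathbf{S}^\top}}$: fit the implicit $\mu$ by matching the normalized traces of the two resolvents, then compare them on independent test functionals. A rough numpy check under an assumed i.i.d. Gaussian sketch (all sizes and the value of $\lambda$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, lam = 1500, 500, 250, 1.0

X = rng.standard_normal((n, p))
Sigma_hat = X.T @ X / n
S = rng.standard_normal((p, q)) / np.sqrt(q)  # assumed free (Gaussian) sketch

# Left-hand side: sketched regularized inverse.
lhs = S @ np.linalg.solve(S.T @ Sigma_hat @ S + lam * np.eye(q), S.T)

# Fit mu by bisection: (1/p) tr[(Sigma_hat + mu I)^{-1}] decreases in mu.
target = np.trace(lhs) / p
evals = np.linalg.eigvalsh(Sigma_hat)
lo, hi = 1e-8, 1e3
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.mean(1.0 / (evals + mid)) > target:
        lo = mid  # trace still too large: need more regularization
    else:
        hi = mid
mu = 0.5 * (lo + hi)
rhs = np.linalg.inv(Sigma_hat + mu * np.eye(p))

# Diagonal entries are the test functionals u = e_i; they should agree
# up to o(1) terms as p grows.
rel_err = np.mean(np.abs(np.diag(lhs) - np.diag(rhs))) / np.mean(np.diag(rhs))
```

The trace matching is exact by construction; the nontrivial check is the agreement of the other functionals, which is what the theorem asserts.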
While the form of the relationship between the original and implicit regularization parameters $\lambda$ and $\mu$ in may seem complicated, the remarkable fact is that our GCV consistency results in the next section are agnostic to the specific form of any of the quantities involved (such as $\smash{\mathscr S_{\mathbf{S}\mathbf{S}^\top}}$ and $\mu$). That is, GCV is able to make the appropriate correction in a way that adapts to the specific choice of sketch, so the statistician need not worry. Nevertheless, for the interested reader, we provide a listing of known examples of sketches satisfying and their corresponding S-transforms in , parameterized by $\alpha = q/p$. ## Asymptotic decompositions and consistency We first state a result on the decomposition of squared risk and the GCV estimator. Here we let $\smash{\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}}$ denote the ridge estimator fit on unsketched data at the implicit regularization parameter $\mu$. With a slight overloading of notation, let us now define $\lambda_0 = - \liminf_{p \to \infty} \min_{k \in [K]} \lambda_{\min}^{+}(\mathbf{S}_k^\top\widehat{{\bm{\Sigma}}}\mathbf{S}_k)$ (since both quantities match for $K = 1$, this is a harmless overloading). In addition, define ${\bm{\Sigma}}= \mathbb{E}[\mathbf{x}_0 \mathbf{x}_0^\top]$. [\[thm:risk-gcv-asymptotics\]]{#thm:risk-gcv-asymptotics label="thm:risk-gcv-asymptotics"} Suppose holds, and that the operator norm of ${\bm{\Sigma}}$ and the second moment of $y_0$ are uniformly bounded in $p$.
Then, for $\lambda > \lambda_0$ and all $K$, $$\begin{gathered} \label{eq:risk-gcv-decomp} R \big( \widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq R \big( \widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge} \big) + \frac{\mu' \Delta}{K} \quad \text{and} \quad {\widehat{R}}\big( \widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq {\widehat{R}}\big( \widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge} \big) + \frac{\mu'' \Delta}{K}, \end{gathered}$$ where $\mu$ is as given in , $\Delta = \tfrac{1}{n} \mathbf{y}^\top(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \mu \mathbf{I}_n)^{-2} \mathbf{y}\geq 0$, and $\mu' \geq 0$ is a certain non-negative inflation factor in the risk that only depends on $\smash{\mathscr S_{\mathbf{S}\mathbf{S}^\top}}$, $\smash{\widehat{{\bm{\Sigma}}}}$, and ${\bm{\Sigma}}$, while $\mu'' \geq 0$ is a certain non-negative inflation factor in the risk estimator that only depends on $\smash{\mathscr S_{\mathbf{S}\mathbf{S}^\top}}$ and $\smash{\widehat{{\bm{\Sigma}}}}$. In other words, this result gives *bias--variance* decompositions for both squared risk and its GCV estimator for sketched ensembles. The result says that the risk of the sketched predictor is equal to the risk of the unsketched equivalent implicit ridge regressor (bias) plus a term due to the randomness of the sketching that depends on the inflation factor $\mu'$ or $\mu''$ (variance), which is controlled by the ensemble size at a rate of $1/K$ (see for a numerical verification of this rate). It is worth mentioning that holds true even when the distribution of $(\mathbf{x}_0, y_0)$ differs from the training data. That is, the asymptotic decompositions given in [\[eq:risk-gcv-decomp\]](#eq:risk-gcv-decomp){reference-type="eqref" reference="eq:risk-gcv-decomp"} apply even to out-of-distribution (OOD) risks, regardless of the consistency of GCV that we will state shortly.
Additionally, we have not made any distributional assumptions on the design matrix $\mathbf{X}$ and the response vector $\mathbf{y}$ beyond the norm boundedness. The core of the statement is driven by asymptotic freeness between the sketching and data matrices. We refer the reader to in for precise expressions for $\mu'$ and $\mu''$, and to [@lejeune2022asymptotics] for illustrations of the relationship of these parameters with $\alpha$ and $\lambda$ in the case of i.i.d. sketching. For expressions of limiting non-sketched risk and GCV for ridge regression, we also refer to [@patil2021uniform], which could be combined with [\[eq:risk-gcv-decomp\]](#eq:risk-gcv-decomp){reference-type="eqref" reference="eq:risk-gcv-decomp"} to obtain exact formulas for asymptotic risk and GCV for sketched ridge regression, or to [@bach2023high] for exact squared risk expressions in the i.i.d. sketching case for $K=1$. For our consistency result, we impose certain mild random matrix assumptions on the feature vectors and assume a mild bounded moment condition on the response variable. Notably, we do not require any specific model assumption on how the response variable $y$ relates to the feature vector $\mathbf{x}$. Thus, all of our results are applicable in a model-free setting. [\[asm:data\]]{#asm:data label="asm:data"} The feature vector decomposes as $\smash{\mathbf{x}= {\bm{\Sigma}}^{1/2}\mathbf{z}}$, where $\mathbf{z}\in\mathbb{R}^{p}$ contains i.i.d. entries with mean $0$, variance $1$, bounded moments of order $4+\delta$ for some $\delta > 0$, and ${\bm{\Sigma}}\in \mathbb{R}^{p \times p}$ is a symmetric matrix with eigenvalues uniformly bounded between $r_{\min}>0$ and $r_{\max}<\infty$. The response $y$ has mean $0$ and bounded moment of order $4+\delta$ for some $\delta>0$. The assumption of zero mean in the features and response is made only for mathematical simplicity.
To deal with non-zero mean, one can add an (unregularized) intercept to the predictor, and all of our results can be suitably adapted. We apply such an intercept in our experiments on real-world data. It has recently been shown that GCV for unsketched ridge regression is an asymptotically consistent estimator of risk [@patil2021uniform] under , so given our bias--variance decomposition in [\[eq:risk-gcv-decomp\]](#eq:risk-gcv-decomp){reference-type="eqref" reference="eq:risk-gcv-decomp"}, the only question is whether the variance term from GCV is a consistent estimator of the variance term of risk. This indeed turns out to be the case, as we state in the following theorem for squared risk. [\[thm:gcv-consistency\]]{#thm:gcv-consistency label="thm:gcv-consistency"} Under , for $\lambda > \lambda_0$ and all $K$, $$\label{eq:gcv-consistency} \mu' \simeq \mu'', \quad \text{and therefore} \quad {\widehat{R}}(\widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens}) \simeq R(\widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens}).$$ The remarkable aspect of this result is its generality: we have made no assumption on a particular choice of sketching matrix (see ) or the size $K$ of the ensemble. We also make no assumption other than boundedness on the covariance ${\bm{\Sigma}}$, and we do not require any model on the relation of the response to the data. Furthermore, this result is not marginal but rather conditional on $\smash{\mathbf{X}, \mathbf{y}, (\mathbf{S}_k)_{k=1}^K}$, meaning that we can trust GCV to be consistent for tuning on a single learning problem. We also emphasize that our result holds for positive, zero, and even negative $\lambda$. This is important, as negative regularization can be optimal in ridge regression in certain circumstances [@kobak_lomond_sanchez_2020; @wu_xu_2020] and even more commonly in sketched ridge ensembles [@lejeune2022asymptotics], as we demonstrate in .
An astute reader will observe that for the case of $K = 1$, that is, sketched ridge regression, one can absorb the sketching matrix $\mathbf{S}$ into the data matrix $\mathbf{X}$ such that the transformed data satisfies . We therefore directly obtain the consistency of GCV in this case using results of [@patil2021uniform]. The novel aspect of is thus that the consistency of GCV holds for ensembles of any $K$, which is not obvious, due to the interactions across predictors in squared error. The subtlety is easy to miss: one may wonder whether GCV is always consistent under any sketching setting. However, as we discuss later in , when sketching observations, GCV fails to be consistent, and so we cannot blindly assert that sketching and GCV are always compatible. ![image](figures/real-data.pdf){width="99%"} # General functional consistency {#sec:functional-consistency} In the previous section, we obtained an elegant decomposition for squared risk and the GCV estimator that cleanly captures the effect of ensembling as controlling the variance from an equivalent unsketched implicit ridge regression risk at a rate of $1/K$. However, we are also interested in using GCV for evaluating other risk functionals, which do not yield bias--variance decompositions that we can manipulate in the same way. Fortunately, we can leverage the close connection between GCV and LOOCV to prove consistency for a broad class of *subquadratic* risk functionals. As a result, we also certify that the *distribution* of the GCV-corrected predictions converges to the test distribution. We show convergence for all error functions $t$ in [\[eq:func_def\]](#eq:func_def){reference-type="eqref" reference="eq:func_def"} satisfying the following subquadratic growth condition, commonly used in the approximate message passing (AMP) literature (see, e.g., [@bayati2011amp]).
[\[asm:test-error\]]{#asm:test-error label="asm:test-error"} The error function $t \colon \mathbb R^2 \to \mathbb R$ is pseudo-Lipschitz of order 2. That is, there exists a constant $L > 0$ such that for all $\mathbf{u}, \mathbf{v}\in \mathbb R^2$, the following bound holds true: $|t(\mathbf{u}) - t(\mathbf{v})| \leq L (1 + \ensuremath{{\left\|\mathbf{u}\right\|}_{2}} + \ensuremath{{\|\mathbf{v}\|}_{2}}) \ensuremath{{\|\mathbf{u}- \mathbf{v}\|}_{2}}.$ The growth condition on $t$ in the assumption above is ultimately tied to our assumption of bounded moments of order $4 + \delta$ for some $\delta > 0$ on the entries of the feature vector and the response variable. By imposing stronger moment assumptions, one can generalize these results to error functions with higher growth rates at the expense of less data generality. We remark that this extends the class of functionals previously shown to be consistent for GCV in ridge regression [@patil2022estimating], which were of the residual form $t(y - z)$. While the tools needed for this extension are not drastically different, it is nonetheless a conceptually important extension. In particular, this is useful for classification problems where metrics do not have a residual structure and for adaptive prediction interval construction. We now state our main consistency result. [\[thm:functional_consistency\]]{#thm:functional_consistency label="thm:functional_consistency"} Under , for $\lambda > \lambda_0$ and all $K$, $$\label{eq:functional_consistency} \widehat{T}(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}) \simeq T(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}).$$ Since $\smash{t(y, z) = (y - z)^2}$ satisfies , this result is a strict generalization of . This class of risk functionals is very broad: it includes, for example, robust risks such as the mean absolute error or Huber loss, and even classification risks such as hinge loss and logistic loss.
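As a hedged illustration of the plug-in functional estimator $\widehat{T}$ for a non-squared error function, the following numpy sketch estimates the mean absolute error (Lipschitz, hence pseudo-Lipschitz of order 2) of a Gaussian-sketched ensemble. The setup is an assumption for illustration, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q, K, lam = 1000, 400, 200, 3, 0.5

beta_star = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)
Sigma_hat, b = X.T @ X / n, X.T @ y / n

# Assumed feature-sketched ridge ensemble with Gaussian sketches.
beta_ens, tr_L = np.zeros(p), 0.0
for _ in range(K):
    S = rng.standard_normal((p, q)) / np.sqrt(q)
    M = np.linalg.solve(S.T @ Sigma_hat @ S + lam * np.eye(q), S.T)
    beta_ens += S @ (M @ b) / K
    tr_L += np.trace(M @ Sigma_hat @ S) / K

t_bar = tr_L / n
z = (X @ beta_ens - t_bar * y) / (1 - t_bar)  # GCV-corrected predictions

t_abs = lambda y_, z_: np.abs(y_ - z_)  # pseudo-Lipschitz error function
T_hat = np.mean(t_abs(y, z))            # GCV plug-in functional estimate

# Held-out mean absolute error for comparison.
X_te = rng.standard_normal((4000, p))
y_te = X_te @ beta_star + rng.standard_normal(4000)
T_test = np.mean(t_abs(y_te, X_te @ beta_ens))
```

Swapping `t_abs` for a Huber or logistic loss gives the analogous estimators for those functionals.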
Furthermore, this class of error functions is sufficiently rich to guarantee that not only do risk functionals converge, but in fact the GCV-corrected predictions also converge in distribution to the predictions of test data. This simple corollary captures the fact that empirical convergence of pseudo-Lipschitz functionals of order 2, being equivalent to weak convergence plus convergence in second moment, is equivalent to Wasserstein convergence [@villani2009optimal Chapter 6]. ![image](figures/confidence-intervals.pdf){width="99%"} [\[cor:distributional-consistency\]]{#cor:distributional-consistency label="cor:distributional-consistency"} Under , for $\lambda > \lambda_0$ and all $K$, $\smash{\widehat{P}_{\lambda}^\mathrm{ens}\overset{2}{\Rightarrow}P_{\lambda}^{\mathrm{ens}}}$, where $\overset{2}{\Rightarrow}$ denotes convergence in the Wasserstein $W_2$ metric. Distributional convergence further enriches the set of consistent estimators that we can construct with GCV, in that we can now construct estimators of sets and their probabilities. One example is classification error $\smash{\mathbb{E}[{\mathds 1}\{y_0 \neq \mathrm{sign}(\mathbf{x}_0^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens})\}]}$, which can be expressed in terms of conditional probability over discrete $y_0$. In our real data experiments in , we also compute classification error using GCV and find that it yields highly consistent estimates, which is useful as squared error (and hence ridge) is known to be a competitive loss function for classification [@hui2021evaluation]. Of statistical interest, we can also construct prediction intervals using the GCV-corrected empirical distribution.
For example, for $\tau \in (0, 1)$, consider the level-$\tau$ quantile $\widehat{Q}(\tau) = \inf\{z : \widehat{F}(z) \geq \tau\}$ and the prediction interval $$\begin{aligned} \mathcal{I}= \big[ \mathbf{x}_0^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}+ \widehat{Q}(\tau_l), \, \mathbf{x}_0^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}+ \widehat{Q}(\tau_u) \big],\end{aligned}$$ where $\widehat{F}$ is the cumulative distribution function (CDF) of the GCV residuals $\smash{(y - z) \colon (y, z) \sim \widehat{P}_\lambda^\mathrm{ens}}$. Then $\mathcal{I}$ is a prediction interval for $y_0$ built only from training data that has the right coverage $\tau_u - \tau_l$, conditional on the training data, asymptotically almost surely. Furthermore, we can tune our model based on prediction interval metrics such as interval width. We demonstrate this idea in the experiment in . This idea could be further extended to produce tighter *locally adaptive* prediction intervals by leveraging the entire joint distribution ${\widehat{P}}_\lambda^\mathrm{ens}$ rather than only the residuals. # Tuning applications and theoretical implications {#sec:tuning} The obvious implication of the consistency results for GCV stated above is that we can also consistently tune sketched ridge regression: for any finite collection of hyperparameters ($\lambda$, $\alpha$, sketching family, $K$) over which we tune, consistency at each individual choice of hyperparameters implies that optimization over the hyperparameter set is also consistent. Thus, if the predictor that we want to fit to our data is a sketched ridge regression ensemble, direct GCV enables us to efficiently tune it. However, suppose we have the computational budget to fit a single large predictor, such as unsketched ridge regression or a large ensemble. Due to the large cost of refitting, tuning this predictor directly might be infeasible.
Fortunately, thanks to the bias--variance decomposition in , we can use small sketched ridge ensembles to tune such large predictors. The key idea is to recall that asymptotically, the sketched risk is simply a linear combination of the equivalent ridge risk and a variance term, and that we can control the mixing of these terms through the choice of the ensemble size $K$. This means that by choosing multiple distinct values of $K$, we can solve for the equivalent ridge risk. As a concrete example, suppose we have an ensemble of size $K=2$ with corresponding risk $R_2 = R(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens})$, and let $R_1$ be the risk corresponding to the individual members of the ensemble. Then we can eliminate the variance term and obtain the equivalent limiting risk as $$\begin{aligned} \label{eq:ensemble-trick} R(\widehat{{\bm{\beta}}}_\mu^\mathrm{ridge}) \simeq 2 R_2 - R_1.\end{aligned}$$ Subsequently using the subordination relation $$\begin{aligned} \mu \simeq\lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top}\Big(-\tfrac{1}{p} {\rm tr}[\mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}(\mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}+ \lambda \mathbf{I}_q)^{-1}]\Big)\end{aligned}$$ from [\[eq:dual-fp-main\]](#eq:dual-fp-main){reference-type="eqref" reference="eq:dual-fp-main"} in , we can map our choice of $\lambda$ and $\mathbf{S}$ to the equivalent $\mu$. By , we can use the GCV risk estimators for $R_1$ and $R_2$ and obtain a consistent estimator for ridge risk at $\mu$. In this way, we obtain a consistent estimator of risk that can be computed entirely using only the $q$-dimensional sketched data rather than the full $p$-dimensional data, in less time and with a smaller memory footprint. We demonstrate this "ensemble trick" for estimating ridge risk in , which is accurate even where the variance component of sketched ridge risk is large.
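The ensemble trick can be checked numerically: with GCV standing in for the risks, $2\widehat{R}_2 - \widehat{R}_1$ should agree with the GCV estimate for a large ensemble, whose variance term is negligible. A small numpy sketch under assumed Gaussian sketching (sizes and $\lambda$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q, lam = 800, 400, 200, 0.5

beta_star = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)
Sigma_hat, b = X.T @ X / n, X.T @ y / n

def gcv(sketches):
    """GCV squared-risk estimate for the ensemble built on `sketches`."""
    beta, tr_L = np.zeros(p), 0.0
    for S in sketches:
        M = np.linalg.solve(S.T @ Sigma_hat @ S + lam * np.eye(q), S.T)
        beta += S @ (M @ b) / len(sketches)
        tr_L += np.trace(M @ Sigma_hat @ S) / len(sketches)
    t_bar = tr_L / n
    return np.mean((y - X @ beta) ** 2) / (1 - t_bar) ** 2

draw = lambda: rng.standard_normal((p, q)) / np.sqrt(q)
pool = [draw() for _ in range(8)]

# Average GCV over singletons (K = 1) and disjoint pairs (K = 2).
R1 = np.mean([gcv([S]) for S in pool])
R2 = np.mean([gcv(pool[i:i + 2]) for i in range(0, 8, 2)])
ridge_est = 2 * R2 - R1               # variance term cancels

# Large ensemble: variance term shrinks like 1/K.
R_big = gcv([draw() for _ in range(40)])
```

Averaging over several singletons and pairs from the same sketch pool reduces the Monte Carlo noise in the combination $2\widehat{R}_2 - \widehat{R}_1$.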
Furthermore, even though GCV is not consistent for sketched observations instead of features (see ), the ensemble trick still provides a consistent estimator for ridge risk since the bias term is unchanged. One limitation of this method when considering a fixed sketch $\mathbf{S}$, varying only $\lambda$, is that it limits the minimum value of $\mu$ that can be considered (see discussion by [@lejeune2022asymptotics]). A solution to this is to consider varying sketch sizes, allowing the full range of $\mu > 0$, as captured by the following result. [\[cor:ridge-equivalence\]]{#cor:ridge-equivalence label="cor:ridge-equivalence"} Under , if $\mathbf{S}_k^\top\mathbf{S}_k$ is invertible, then for any $\mu > 0$, if $\lambda = 0$ and $K \to \infty$, $${\widehat{R}}(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}) \simeq R(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}) \simeq R(\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}) \quad\text{for}\quad \alpha = \tfrac{1}{p}{\rm tr}[\widehat{{\bm{\Sigma}}}(\widehat{{\bm{\Sigma}}}+ \mu \mathbf{I}_p)^{-1}].$$ That is, for any desired level of equivalent regularization $\mu$, we can obtain a sketched ridge regressor with the same bias (equivalently, the same large ensemble risk as $K \to \infty$) by changing only the sketch size and fixing $\lambda = 0$. A narrower result was shown for subsampled ensembles in [@lejeune2020implicit], but this result significantly generalizes it, as it provides equivalences for all $\mu > 0$ rather than only the optimal $\mu$, and it holds for any full-rank sketching matrix. Another interpretation of this result is that freely sketched predictors span the same class of predictors as ridge regression. ![ **GCV combined with sketching yields a fast method for tuning ridge.** We fit SRDCT ensembles on synthetic data ($n = 600$, $p = 800$), sketching features (**left** and **right**) or observations (**middle**).
GCV (dashed) provides consistent estimates of test risk (solid) for feature sketching but not for observation sketching. However, the ensemble trick in [\[eq:ensemble-trick\]](#eq:ensemble-trick){reference-type="eqref" reference="eq:ensemble-trick"} does not depend on the variance and thus works for both. For $\lambda = 0$, each equivalent $\mu > 0$ can be achieved by an appropriate choice of $\alpha$. Error bars denote standard deviation over 50 trials. ](figures/ensemble_trick.pdf){#fig:tuning-applications width="99%"} # Discussion {#sec:discussion} This paper establishes the consistency of GCV-based estimators of risk functionals. We show that GCV provides a method for consistent fast tuning of sketched ridge ensemble parameters. However, taking a step back, given the connection between the sketched pseudoinverse and implicit ridge regularization in the unsketched inverse () and the fact that GCV "works" for ridge regression [@patil2021uniform; @wei_hu_steinhardt], one might wonder whether the results in this paper were to be "expected". The introduction of the ensemble required additional analysis, of course, but perhaps the results seem intuitively natural. Surprisingly (even to the authors), if one changes the strategy from sketching features to sketching observations, we no longer have GCV consistency for finite ensembles! Consider a formulation where we now sketch observations with $K$ independent sketching matrices $\mathbf{T}_k \in \mathbb{R}^{n \times m}$ for $k \in [K]$.
We denote the $k$-th observation-sketched ridge estimator at regularization level $\lambda$ as: $$\label{eq:primal-sketch-estimator} \widetilde{{\bm{\beta}}}_\lambda^k =\mathop{\mathrm{arg\,min}}_{{\bm{\beta}}\in \mathbb{R}^{p}} \tfrac{1}{n} \big\Vert \mathbf{T}_k^\top(\mathbf{y}- \mathbf{X}{\bm{\beta}}) \big\Vert_2^2 + \lambda \ensuremath{{\left\|{\bm{\beta}}\right\|}_{2}}^2.$$ Note that the solution of [\[eq:primal-sketch-estimator\]](#eq:primal-sketch-estimator){reference-type="eqref" reference="eq:primal-sketch-estimator"} already lies in the feature space $\mathbb{R}^p$. As with feature sketching, the estimator $\widetilde{{\bm{\beta}}}_\lambda^k$ admits a closed-form expression displayed below in [\[eq:ensemble-estimator-primal\]](#eq:ensemble-estimator-primal){reference-type="eqref" reference="eq:ensemble-estimator-primal"}. Let the final ensemble estimator $\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda$ be defined analogously to [\[eq:ensemble-estimator\]](#eq:ensemble-estimator){reference-type="eqref" reference="eq:ensemble-estimator"} as a simple unweighted average of the $K$ component sketched estimators: $$\label{eq:ensemble-estimator-primal} \widetilde{{\bm{\beta}}}^{\mathrm{ens}}_\lambda = \frac{1}{K} \sum_{k=1}^K \widetilde{{\bm{\beta}}}^{k}_\lambda, \quad \text{where} \quad \widetilde{{\bm{\beta}}}_\lambda^k = \tfrac{1}{n} \big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}_k \mathbf{T}_k^\top\mathbf{X}+ \lambda \mathbf{I}_p \big)^{-1} \mathbf{X}^\top\mathbf{T}_k \mathbf{T}_k^\top\mathbf{y}.$$ Note again that the ensemble estimator $\smash{\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}}$ is a linear smoother with the smoothing matrix: $$\widetilde{\mathbf{L}}_{\lambda}^{\mathrm{ens}} = \frac{1}{K} \sum_{k = 1}^K \widetilde{\mathbf{L}}_{\lambda}^k, \quad \text{where} \quad \widetilde{\mathbf{L}}_{\lambda}^{k} = \tfrac{1}{n} \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top\mathbf{T}_k \mathbf{T}_k^\top\mathbf{X}+ \lambda \mathbf{I}_p)^{-1} \mathbf{X}^\top\mathbf{T}_{k}
\mathbf{T}_{k}^{\top}.$$ We can then define the GCV estimator ${\widetilde{R}(\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda)}$ of the squared risk in a similar fashion to [\[eq:def-gcv\]](#eq:def-gcv){reference-type="eqref" reference="eq:def-gcv"}: $$\label{eq:def-gcv-primal} \widetilde{R}(\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda) = \frac {\tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda \|_2^2} {( 1 - \tfrac{1}{n} {\rm tr}[\widetilde{\mathbf{L}}^\mathrm{ens}_\lambda] )^2}.$$ The following result shows that $\widetilde{R}(\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda)$ is *inconsistent* for any $K$. In preparation for the forthcoming statement, define $\widetilde{\lambda}_0 = - \liminf_{p \to \infty} \min_{k \in [K]} \lambda_{\min}^{+}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{T}_k \mathbf{T}_k^\top \mathbf{X})$. [\[prop:gcv-primal-fails\]]{#prop:gcv-primal-fails label="prop:gcv-primal-fails"} Suppose holds for $\mathbf{T}\mathbf{T}^\top$, and that the operator norm of ${\bm{\Sigma}}$ and second moment of $y_0$ are uniformly bounded in $p$. 
Then, for $\lambda > \widetilde{\lambda}_0$ and any $K$, $$\begin{gathered} \label{eq:primal-asymptotics-decompositions} R \big( \widetilde{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq R \big( \widetilde{{\bm{\beta}}}_{\nu}^\mathrm{ridge} \big) + \frac{\nu' \widetilde{\Delta}}{K} \quad \text{and} \quad \widetilde{R} \big( \widetilde{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq \widetilde{R} \big( \widetilde{{\bm{\beta}}}_{\nu}^\mathrm{ridge} \big) + \frac{\nu'' \widetilde{\Delta}}{K}, \end{gathered}$$ where $\nu > - \lambda_{\min}^{+}(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top)$ is increasing in $\lambda > \widetilde{\lambda}_0$ and satisfies $$\label{eq:mu-main-primal} \nu = \lambda \mathscr S_{\mathbf{T}\mathbf{T}^\top}\big(-\tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big),$$ $\widetilde{\Delta} = \tfrac{1}{n} \mathbf{y}^\top(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n)^{-2} \mathbf{y}\geq 0$, and $\nu' \ge 0$ is a certain non-negative inflation factor in the risk that only depends on $\mathscr S_{\mathbf{T}\mathbf{T}^\top}$, $\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top$, and ${\bm{\Sigma}}$, while $\nu'' \ge 0$ is a certain non-negative inflation factor in the risk estimator that only depends on $\mathscr S_{\mathbf{T}\mathbf{T}^\top}$ and $\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top$. Furthermore, under , in general we have $\nu' \not \simeq\nu''$, and therefore $\smash{\widetilde{R}(\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \not \simeq R(\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}})}$. is the dual analogue of . For precise expressions of $\nu'$ and $\nu''$, we refer readers to in .
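As a quick sanity check on the closed form for $\widetilde{{\bm{\beta}}}_\lambda^k$, one can verify numerically that it satisfies the first-order condition of the observation-sketched objective; the Gaussian sketch and the sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, m, lam = 300, 100, 150, 0.1

X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) / np.sqrt(p) + rng.standard_normal(n)
T = rng.standard_normal((n, m)) / np.sqrt(m)  # assumed observation sketch

# Closed form: beta = ((1/n) X^T T T^T X + lam I)^{-1} (1/n) X^T T T^T y.
G = X.T @ T @ (T.T @ X) / n
beta = np.linalg.solve(G + lam * np.eye(p), X.T @ T @ (T.T @ y) / n)

# Gradient of (1/n) ||T^T (y - X beta)||^2 + lam ||beta||^2 at the solution:
# grad = -(2/n) X^T T T^T (y - X beta) + 2 lam beta, which should vanish.
resid = y - X @ beta
grad = -(2 / n) * (X.T @ (T @ (T.T @ resid))) + 2 * lam * beta
```

The vanishing gradient confirms that the displayed closed form is indeed the minimizer of the observation-sketched ridge objective.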
Note that as $K \to \infty$, the variance terms in [\[eq:primal-asymptotics-decompositions\]](#eq:primal-asymptotics-decompositions){reference-type="eqref" reference="eq:primal-asymptotics-decompositions"} vanish and we get back consistency; for this reason, the "ensemble trick" in [\[eq:ensemble-trick\]](#eq:ensemble-trick){reference-type="eqref" reference="eq:ensemble-trick"} still works. This negative result highlights the subtleties in the results of this paper: GCV consistency for sketched ensembles of finite $K$ is far from obvious and requires careful analysis. It is still possible to correct GCV in this case, as we detail in , but it requires the use of the unsketched data as well. While our results are quite general in terms of being applicable to a wide variety of data and sketches, they are limited in that they apply only to ridge regression with isotropic regularization. However, we believe that the tools used in this work are useful in extending GCV consistency and the understanding of sketching to many other linear learning settings. It is straightforward to extend our results beyond isotropic ridge regularization. In real-world scenarios, we may wish to apply generalized anisotropic ridge regularization: generalized ridge achieves Bayes-optimal regression when the ground-truth coefficients in a linear model come from an anisotropic prior. We can cover this case with a simple extension of our results; see . Going beyond ridge regression, we anticipate that GCV for sketched ensembles should also be consistent for generalized linear models with arbitrary convex regularizers, as was recently shown in the unsketched setting for Gaussian data [@bellec2020out]. The key difficulty in applying the analysis based on to the general setting is that we can only characterize the effect of sketching as additional ridge regularization.
One promising path forward is to view the optimization as iteratively reweighted least squares (IRLS). On the regularization side, IRLS can achieve many types of structure-promoting regularizers (see [@lejeune2021flipside] and references therein) via successive generalized ridge problems, and so we might expect GCV to also be consistent in this case. Furthermore, for general training losses, we believe that GCV can be extended appropriately to handle reweighting of observations and leverage the classical connection between IRLS and maximum likelihood estimation in generalized linear models. Lastly, to slightly relax data assumptions, we can extend GCV to the closely related approximate leave-one-out (ALO) risk estimation [@xu_maleki_rad_2019; @rad_maleki_2020], which relies on fewer concentration assumptions for consistency. # Acknowledgements {#acknowledgements .unnumbered} We are grateful to Ryan J. Tibshirani for helpful feedback on this work. We warmly thank Benson Au, Roland Speicher, and Dimitri Shlyakhtenko for insightful discussions related to free probability theory and infinitesimal freeness. We also warmly thank Arun Kumar Kuchibhotla, Alessandro Rinaldo, Yuting Wei, Jin-Hong Du, and Alex Wei for many useful discussions regarding the "dual" aspects of observation subsampling in the context of risk monotonization. As is the nature of direction-reversing and side-flipping dualities in general, the insights and perspectives gained from that side are naturally "mirrored" and "transposed" onto this side (with some important caveats)! This collaboration was partially supported by Office of Naval Research MURI grant N00014-20-1-2787. DL was supported by Army Research Office grant 2003514594. This serves as a supplement to the paper "Asymptotically free sketched ridge ensembles: Risks, cross-validation, and tuning." Below we first provide an outline for the supplement in .
Then we list some of the general and specific notations used throughout the main paper and the supplement in , respectively.

## Outline {#outline .unnumbered}

| **Appendix** | **Content** |
|---|---|
| | Background on asymptotic freeness and empirical support for sketching freeness |
| | Asymptotic equivalents for freely sketched resolvents used in the proofs throughout |
| | Proofs of and (from ) |
| | Proofs of and (from ) |
| | Proof of (from ) |
| | Proof of and statements and other details for anisotropic sketching, generalized ridge regression, and observation sketch (from ) |
| | Additional experimental illustrations and setup details for |

## General notation {#general-notation .unnumbered}

| **Notation** | **Description** |
|---|---|
| Non-bold | Denotes scalars, functions, distributions, etc. (e.g., $k$, $f$, $P$) |
| Lowercase bold | Denotes vectors (e.g., $\mathbf{x}$, $\mathbf{y}$, ${\bm{\beta}}$) |
| Uppercase bold | Denotes matrices (e.g., $\mathbf{X}$, $\mathbf{S}$, ${\bm{\Sigma}}$) |
| $\mathbb{R}$, $\mathbb{R}_{\ge 0}$ | Set of real and non-negative real numbers |
| $\mathbb{C}$, $\mathbb{C}^{+}$, $\mathbb{C}^{-}$ | Set of complex numbers, and upper and lower complex half-planes |
| $[n]$ | Set $\{1, \dots, n\}$ for a natural number $n$ |
| ${\mathds 1}\{A\}$ | Indicator random variable associated with an event $A$ |
| $\lVert \mathbf{u}\rVert_{p}$, $\lVert f\rVert_{L_p}$ | The $\ell_p$ norm of a vector $\mathbf{u}$ and the $L_p$ norm of a function $f$ for $p \ge 1$ |
| $\lVert \mathbf{X}\rVert_{\mathrm{op}}$, $\lVert \mathbf{X}\rVert_{\mathrm{tr}}$ | Operator (or spectral) and trace (or nuclear) norm of a rectangular matrix $\mathbf{X}\in \mathbb{R}^{n \times p}$ |
| ${\rm tr}[\mathbf{A}]$, $\mathbf{A}^{-1}$ | Trace and inverse (if invertible) of a square matrix $\mathbf{A}\in \mathbb{R}^{p \times p}$ |
| $\mathrm{rank}(\mathbf{B})$, $\mathbf{B}^\top$, $\mathbf{B}^{\dagger}$ | Rank, transpose, and Moore-Penrose inverse of a rectangular matrix $\mathbf{B}\in \mathbb{R}^{n \times p}$ |
| $\mathbf{C}^{1/2}$ | Principal square root of a positive semidefinite matrix $\mathbf{C}\in \mathbb{R}^{p \times p}$ |
| $\mathbf{I}_n$ or $\mathbf{I}$ | The $n \times n$ identity matrix |
| $\mathcal{O}$, $o$ | Deterministic big-O and little-o notation |
| $\mathbf{u}\le \mathbf{v}$ | Lexicographic ordering for real vectors $\mathbf{u}$ and $\mathbf{v}$ |
| $\mathbf{A}\preceq \mathbf{B}$ | Loewner ordering for symmetric matrices $\mathbf{A}$ and $\mathbf{B}$ |
| $\mathcal{O}_p$, $o_p$ | Probabilistic big-O and little-o notation |
| $\mathbf{A}\simeq\mathbf{B}$ | Asymptotic equivalence of matrices $\mathbf{A}$ and $\mathbf{B}$ (see for details) |
| $\xrightarrow{\text{a.s.}}$, $\xrightarrow{\text{p}}$, $\xrightarrow{\text{d}}$ | Almost sure convergence, convergence in probability, and weak convergence |
| $\overset{2}{\Rightarrow}$ | Convergence in Wasserstein $W_2$ metric |

## Specific notation {#specific-notation .unnumbered}

| **Symbol** | **Meaning** |
|---|---|
| $((\mathbf{x}_i, y_i))_{i=1}^n$ | Train dataset containing $n$ i.i.d. observations in $\mathbb{R}^{p} \times \mathbb{R}$ |
| ($\mathbf{X}$, $\mathbf{y}$) | Train data matrix $(\mathbf{x}_1, \ldots, \mathbf{x}_n)^\top$ in $\mathbb R^{n \times p}$ and response vector $\mathbf{y}= (y_1, \ldots, y_n)$ in $\mathbb R^{n}$ |
| $(\mathbf{x}_0, y_0)$ | Test point in $\mathbb{R}^{p} \times \mathbb{R}$ drawn independently from the train data distribution |
| ${\bm{\Sigma}}$ | Population covariance matrix in $\mathbb{R}^{p \times p}$: $\mathbb{E}[\mathbf{x}_0 \mathbf{x}_0^\top]$ |
| ${\bm{\beta}}_0$ | Coefficients of population linear projection of $y_0$ onto $\mathbf{x}_0$ in $\mathbb{R}^{p}$: ${\bm{\Sigma}}^{-1} \mathbb{E}[\mathbf{x}_0 y_0]$ |
| $\widehat{{\bm{\Sigma}}}$ | Sample covariance matrix in $\mathbb{R}^{p \times p}$: $\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}$ |
| $\widehat{{\bm{\beta}}}_\lambda^\mathrm{ridge}$ | Ridge estimator on full data $(\mathbf{X}, \mathbf{y})$ at regularization level $\lambda$: $(\widehat{{\bm{\Sigma}}}+ \lambda \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{y}$ |
| $K$ | Ensemble size |
| $( \mathbf{S}_k )_{k = 1}^{K}$ | Sketching matrices in $\mathbb{R}^{p \times q}$ (for feature sketch) |
| $\alpha$ | Sketching aspect ratio $\alpha = \frac{q}{p}$ |
| $\widehat{{\bm{\beta}}}^{\mathbf{S}_k}_\lambda$ | $k$-th component estimator in the sketched ensemble in $\mathbb{R}^{q}$ (sketch space) at regularization level $\lambda$ |
| $\widehat{{\bm{\beta}}}_\lambda^k$ | $k$-th component estimator in the sketched ensemble in $\mathbb{R}^{p}$ (feature space) |
| $\mathbf{L}^k_\lambda$ | Smoothing matrix of the $k$-th component estimator in the sketched ensemble in $\mathbb{R}^{n \times n}$ |
| $\widehat{{\bm{\beta}}}^{\mathrm{ens}}_\lambda$ | Final sketched ensemble estimator in $\mathbb{R}^{p}$: $\tfrac{1}{K} \sum_{k=1}^K \widehat{{\bm{\beta}}}^k_\lambda$ |
| $\mathbf{L}^\mathrm{ens}_\lambda$ | Smoothing matrix of the sketched ensemble estimator in $\mathbb{R}^{n \times n}$: $\tfrac{1}{K} \sum_{k=1}^{K} \mathbf{L}^k_\lambda$ |
| $P^\mathrm{ens}_\lambda$ | Joint distribution of test response and test predicted values of the sketched ensemble estimator (for feature sketch) at regularization level $\lambda$ |
| $R(\widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens})$ | Squared risk of the sketched ensemble estimator |
| $T(\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens})$ | General linear risk functional of the sketched ensemble estimator |
| $\widehat{P}^\mathrm{ens}_\lambda$ | Estimated joint distribution of test response and test predicted values of the sketched ensemble (for feature sketch) at regularization level $\lambda$ using GCV residuals |
| ${\widehat{R}}(\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda)$ | Estimated squared risk of the sketched ensemble estimator |
| $\widehat{T}(\widehat{{\bm{\beta}}}^\mathrm{ens}_{\lambda})$ | Estimated general linear functional of the sketched ensemble estimator |
| $\mu$ | Effective induced regularization level of the sketched ensemble estimator (for feature sketch) with original regularization level $\lambda$ |
| $\mu'$ | Inflation factor in the squared risk decomposition of the sketched ensemble estimator |
| $\mu''$ | Inflation factor in the GCV decomposition for the sketched ensemble estimator |
| $( \mathbf{T}_k )_{k=1}^{K}$ | Sketching matrices in $\mathbb R^{n \times m}$ (for observation sketch) |
| $\eta$ | Sketching aspect ratio $\eta = \frac{m}{n}$ |
| $\widetilde{{\bm{\beta}}}^k_\lambda$ | $k$-th component estimator in the sketched ensemble in $\mathbb{R}^p$ (feature space) at regularization level $\lambda$ |
| $\widetilde{\mathbf{L}}^k_\lambda$ | Smoothing matrix of the $k$-th component estimator in the sketched ensemble in $\mathbb{R}^{n \times n}$ |
| $\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda$ | Final sketched ensemble estimator in $\mathbb{R}^{p}$: $\frac{1}{K} \sum_{k=1}^K \widetilde{{\bm{\beta}}}^k_\lambda$ |
| $\widetilde{\mathbf{L}}^\mathrm{ens}_\lambda$ | Smoothing matrix of the sketched ensemble estimator in $\mathbb{R}^{n \times n}$: $\tfrac{1}{K} \sum_{k=1}^{K} \widetilde{\mathbf{L}}^k_\lambda$ |
| $R(\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda)$ | Squared risk of the sketched ensemble estimator (for observation sketch) at regularization level $\lambda$ |
| $\widetilde{R}(\widetilde{{\bm{\beta}}}^\mathrm{ens}_\lambda)$ | Estimated squared risk of the sketched ensemble estimator using GCV |
| $\nu$ | Effective induced regularization level of the sketched ensemble estimator (for observation sketch) with original regularization level $\lambda$ |
| $\nu'$ | Inflation factor in the squared risk decomposition of the sketched ensemble estimator |
| $\nu''$ | Inflation factor in the GCV decomposition for the sketched ensemble estimator |
| $\mathscr S_{\mathbf{S}\mathbf{S}^\top}$ | S-transform of the spectrum of $\mathbf{S}\mathbf{S}^\top \in \mathbb{R}^{p \times p}$ (for feature sketch) |
| $\mathscr S_{\mathbf{T}\mathbf{T}^\top}$ | S-transform of the spectrum of $\mathbf{T}\mathbf{T}^\top \in \mathbb{R}^{n \times n}$ (for observation sketch) |

# Background on asymptotic freeness and free sketching support {#sec:background-freesketching}

Free probability [@voiculescu1997free] is a mathematical
framework that deals with non-commutative random variables. One of the key concepts in free probability is asymptotic freeness, which describes the behavior of random matrices in the limit as their dimension tends to infinity. This notion enables us to understand how independent random matrices become uncorrelated and behave as if they were freely independent in the high-dimensional limit. Good full-length references on free probability theory include [@mingo2017free] and [@bose2021random]. Chapter 2.4 of [@tulino2004random] and Chapter 2.5 of [@tao2023topics] are enjoyable introductions. ## Free probability theory {#sec:free-probability-theory} We begin with a few definitions from [@mingo2017free]. A pair $(\mathcal A, \varphi)$ is called a non-commutative *$C^*$-probability space* if $\mathcal A$ is a unital $C^*$-algebra and the linear functional $\varphi \colon \mathcal A\to \mathbb C$ is a unital *state*: i.e., $\varphi(1) = 1$ and $\varphi(a^* a) \geq 0$ for all $a \in \mathcal A$. [\[def:freeness\]]{#def:freeness label="def:freeness"} Let $(\mathcal A, \varphi)$ be a $C^*$-probability space and let $(\mathcal A_1, \ldots, \mathcal A_s)$ be unital subalgebras of $\mathcal A$. Then $(\mathcal A_1, \ldots, \mathcal A_s)$ are *free* with respect to $\varphi$ if, for any $r \geq 2$ and $a_1, \ldots, a_r \in \mathcal A$ such that $\varphi(a_i) = 0$ for all $1 \leq i \leq r$, $a_i \in \mathcal A_{j_i}$ for all $1 \leq i \leq r$, and $j_i \neq j_{i+1}$ for all $1 \leq i \leq r - 1$, we have $\varphi(a_1 \cdots a_r) = 0$. Furthermore, we say that elements $a_1, \ldots, a_s \in \mathcal A$ are *free* with respect to $\varphi$ if the corresponding generated unital algebras $\mathcal A_1, \ldots, \mathcal A_s$ are free. That is, we say that elements of the algebra are free if any alternating product of centered polynomials is also centered.
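As a quick illustration of the definition above, the simplest alternating product ($r = 2$) already yields the factorization of mixed moments of free elements; the following short derivation uses only the freeness condition together with linearity of $\varphi$ and $\varphi(1) = 1$:

```latex
% For free elements a and b, apply the freeness condition with r = 2 to the
% centered elements a - \varphi(a)1 and b - \varphi(b)1:
\varphi\big( (a - \varphi(a) 1)(b - \varphi(b) 1) \big) = 0 .
% Expanding by linearity of \varphi and using \varphi(1) = 1:
\varphi(ab) - \varphi(a)\varphi(b) - \varphi(a)\varphi(b) + \varphi(a)\varphi(b)
  = \varphi(ab) - \varphi(a)\varphi(b) = 0 ,
% so mixed moments of a free pair factorize:
\varphi(ab) = \varphi(a)\,\varphi(b) .
```

Higher alternating moments such as $\varphi(abab)$ are likewise determined by the individual moments of $a$ and $b$, though they no longer factorize term by term.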
In this work, we will consider $\varphi$ to be the normalized trace---that is, the generalization of $\tfrac{1}{p} {\rm tr}[\mathbf{A}]$ for $\mathbf{A}\in \mathbb C^{p \times p}$ to elements of a $C^*$-algebra $\mathcal A$. Specifically, for any self-adjoint $a \in \mathcal A$ and any polynomial $p$, $$\varphi(p(a)) = \int p(z) \, \mathrm{d} \mu_a(z),$$ where $\mu_a$ is the probability measure characterizing the spectral distribution of $a$. Let $(\mathcal A, \varphi)$ be a $C^*$-probability space. We say that $\mathbf{A}_1, \ldots, \mathbf{A}_m \in \mathbb C^{p \times p}$ *converge in spectral distribution* to elements $a_1, \ldots, a_m \in \mathcal A$ if for all $1 \leq \ell < \infty$ and $1 \leq i_{j} \leq m$ for $1 \leq j \leq \ell$, we have $$\tfrac{1}{p}{\rm tr}[\mathbf{A}_{i_1} \cdots \mathbf{A}_{i_\ell}] \to \varphi(a_{i_1} \cdots a_{i_\ell}).$$ One limitation of standard free probability theory is that it does not allow us to consider general expressions of the form ${\rm tr}[{\bm{\Theta}}\mathbf{A}]$ when ${\bm{\Theta}}$ has bounded trace norm, as this would require us to use an unbounded operator $\widetilde{{\bm{\Theta}}} = p {\bm{\Theta}}$ to evaluate $\smash{\tfrac{1}{p} {\rm tr}[\widetilde{{\bm{\Theta}}} \mathbf{A}]}$, but such an unbounded $\widetilde{{\bm{\Theta}}}$ cannot be an element of a $C^*$-algebra. However, evaluation of such expressions is possible with an extension called *infinitesimal* free probability [@shlyakhtenko2018infinitesimal], which is used in from [@lejeune2022asymptotics] that our results build upon. Unital subalgebras $\mathcal A_1, \mathcal A_2 \subseteq \mathcal A$ are *infinitesimally free* with respect to $(\varphi, \varphi')$ if, for any $r \geq 2$ and $a_1, \ldots, a_r \in \mathcal A$ where $a_i \in \mathcal A_{j_i}$ for $j_i \neq j_{i+1}$ for all $1 \leq i \leq r - 1$, we have $$\varphi_t((a_1 - \varphi_t(a_1)) \cdots (a_r - \varphi_t(a_r))) = o(t),$$ where $\varphi_t = \varphi + t \varphi'$. 
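The normalized-trace state above can be sanity-checked numerically (a standalone illustration, not code from the paper): for a real symmetric matrix, $\tfrac{1}{p} {\rm tr}[\mathrm{poly}(\mathbf{A})]$ equals the integral of the polynomial against the empirical spectral distribution, i.e., its average over the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 300
G = rng.standard_normal((p, p))
A = (G + G.T) / np.sqrt(2 * p)        # symmetric Wigner-type matrix

# phi(poly(A)) for poly(x) = x^3 - 2x + 1, computed as a normalized trace
# of the matrix polynomial ...
lhs = np.trace(A @ A @ A - 2 * A + np.eye(p)) / p
# ... equals the integral of poly against the spectral distribution mu_A,
# i.e., the average of poly over the eigenvalues of A.
evals = np.linalg.eigvalsh(A)
rhs = np.mean(evals**3 - 2 * evals + 1)
assert abs(lhs - rhs) < 1e-8
```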
We lastly introduce a series of invertible transformations for an element $a$ of a $C^*$-probability space: $$\begin{aligned} G_a(z) = \varphi \big( \big( z - a \big)^{-1} \big) \; \longleftrightarrow \; M_a(z) = \frac{1}{z} G_a\left( \frac{1}{z} \right) - 1 \; \longleftrightarrow \; \mathscr S_a(z) = \frac{1 + z}{z} M_a^{\langle -1\rangle}(z), \nonumber\end{aligned}$$ which are the Cauchy transform (negative of the Stieltjes transform), moment generating series $M_a(z) = \sum_{k=1}^\infty \varphi(a^k) z^k$, and S-transform of $a$, respectively. Here $M_a^{\langle -1\rangle}$ denotes inverse under composition of $M_a$. ## Asymptotic freeness {#sec:asymptotic-freeness} Freeness is characterized by a certain non-commutative centered alternating product condition (see ) with respect to a state function. With some slight abuse of notation, we consider the state function $\smash{\tfrac{1}{p} {\rm tr}[\cdot]}$. Then two matrices $\mathbf{A}, \mathbf{B}\in \mathbb R^{p \times p}$ would be said to be free if $$\begin{aligned} \tfrac{1}{p} {\rm tr}\left[ \prod_{\ell=1}^L \mathrm{poly}_\ell^\mathbf{A}(\mathbf{A}) \mathrm{poly}_\ell^\mathbf{B}(\mathbf{B}) \right] = 0,\end{aligned}$$ for all $L \geq 1$ and all centered polynomials---i.e., $\smash{\tfrac{1}{p} {\rm tr}[\mathrm{poly}_\ell^\mathbf{A}(\mathbf{A})] = 0}$. The reason this is an abuse of notation is that finite matrices cannot satisfy this condition; however, they can satisfy it asymptotically as $p \to \infty$, and in this case we say that $\mathbf{A}$ and $\mathbf{B}$ are *asymptotically free*. 
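Before the sketching-specific experiments, here is a minimal standalone illustration of this condition (our own sketch, not from the paper): in place of sketching matrices, we use a classical asymptotically free pair, namely a deterministic diagonal matrix and an independently rotated one, and check that the normalized trace of an alternating product of centered polynomials is small.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 800
A = np.diag(np.linspace(0.5, 1.5, p))              # deterministic diagonal matrix
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))   # random orthogonal matrix
B = Q @ np.diag(rng.uniform(0.0, 2.0, p)) @ Q.T    # independently rotated matrix

def centered(M, r):
    """poly_r(M) = M^r - (1/p) tr[M^r] I, a centered polynomial of M."""
    Mr = np.linalg.matrix_power(M, r)
    return Mr - (np.trace(Mr) / p) * np.eye(p)

# Normalized trace of an alternating product of centered polynomials;
# for an asymptotically free pair this is O(1/p), tending to zero.
val = np.trace(centered(A, 1) @ centered(B, 2) @ centered(A, 2) @ centered(B, 3)) / p
# val is small relative to the O(1) scale of the individual factors.
```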
We test this property for CountSketch and SRDCT for polynomials of the form $$\begin{aligned} \mathrm{poly}_r(\mathbf{A}) = \mathbf{A}^r - \tfrac{1}{p} {\rm tr}[\mathbf{A}^r] \mathbf{I}_p.\end{aligned}$$ Specifically, we arbitrarily pick two choices $$\mathrm{poly}(\mathbf{A},\mathbf{B}) = \mathrm{poly}_1(\mathbf{A}) \mathrm{poly}_2(\mathbf{B}) \mathrm{poly}_2 (\mathbf{A}) \mathrm{poly}_3(\mathbf{B})$$ and $$\mathrm{poly}(\mathbf{A},\mathbf{B}) = \mathrm{poly}_3(\mathbf{A}) \mathrm{poly}_1(\mathbf{B}) \mathrm{poly}_4 (\mathbf{A}) \mathrm{poly}_2(\mathbf{B})$$ and evaluate $\smash{\tfrac{1}{p} {\rm tr}[\mathrm{poly}(\mathbf{A}, \mathbf{S}\mathbf{S}^\top)]}$ for increasing $p$ over 10 trials, where $\mathbf{A}$ is a diagonal matrix with values linearly interpolating between $0.5$ and $1.5$ along the diagonal. As we see in (left), for both sketches, this normalized trace is quite small and tending to zero. This strongly supports the assumption that CountSketch and SRDCT are both asymptotically free from diagonal matrices, and we expect the same to hold if $\mathbf{A}$ is rotated to be non-diagonal independently of the sampling of the $\mathbf{S}$. ![ **Empirical support for asymptotic freeness and subordination relation**. **Left:** We plot the absolute value of the average of the normalized traces of polynomials, which converge to zero. We also plot best fit lines on the log--log scale (dashed). Error bars denote one standard deviation over 10 trials, collected over both polynomials. **Right:** We numerically compute $\mu$ and plot the empirical subordination relation, which are decreasing continuous functions that closely match the theoretical S-transforms of Gaussian (dashed) for CountSketch ($\times$) and orthogonal (dash--dot) for SRDCT ($\circ$). Each mark in the scatter plots corresponds to a single $(\mathbf{A}, \lambda)$ pair, and we solve for the corresponding $\mu$. 
](figures/free_sketching_investigation.pdf){#fig:free_sketching_investigation width="99%"} ## Empirical subordination relations {#sec:empirical-subordination-relations} Suppose $\widehat{{\bm{\Sigma}}}$ and $\mathbf{S}\mathbf{S}^\top$ are free and holds. This means that a subordination relation via $\mathscr S_{\mathbf{S}\mathbf{S}^\top}$ should characterize the implicit regularization, so we test this implication empirically as well. Specifically, without using any known form for $\mathscr S_{\mathbf{S}\mathbf{S}^\top}$, we empirically verify that this mapping does not depend on $\mathbf{X}$ and compare it to known S-transforms. As in the previous section, we simplify our tests by considering a diagonal $\mathbf{A}$ instead of $\widehat{{\bm{\Sigma}}}$. We generate a family of $\mathbf{A}= \mathrm{diag}\left(\mathbf{a}\right)$ parameterized by $a_0, s_0 > 0$, and $t_0 \in [0, 1]$ as $$\begin{aligned} a_i = \frac{a_0}{1 + e^{-(t_i - t_0)/s_0}}, \quad \text{where} \quad t_i = \frac{i - 1}{p - 1}.\end{aligned}$$ This family spans a variety of spectral distributions and provides a rich class of matrices over which must hold simultaneously. For a fixed $700 \times 585$ sketching matrix $\mathbf{S}$ that we sample for CountSketch and for SRDCT, we sample $\mathbf{A}$ over a $5 \times 5 \times 5$ grid of $a_0$ and $s_0$ logarithmically spaced between $0.1$ and $10$ and $t_0$ linearly spaced between $0$ and $1$. For each $\mathbf{A}$, we use numerical root finding to determine $\mu$ such that $$\begin{aligned} \tfrac{1}{p} {\rm tr}\left[ \mathbf{A}\mathbf{S}\big( \mathbf{S}^\top\mathbf{A}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}^\top \right] = \tfrac{1}{p} {\rm tr}\left[ \mathbf{A}\big( \mathbf{A}+ \mu \mathbf{I}_p \big)^{-1} \right]\end{aligned}$$ for each $\lambda \in \left\{0.01, 0.1, 1, 10, 100\right\}$.
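The root-finding step above can be sketched as follows (a minimal version with illustrative choices, not the paper's code: a Gaussian sketch with i.i.d. $N(0, 1/q)$ entries standing in for CountSketch/SRDCT, a single diagonal $\mathbf{A}$ with a linear ramp, and simple bisection, which suffices since the right-hand side is decreasing in $\mu$):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, lam = 700, 585, 1.0
A = np.diag(np.linspace(0.5, 1.5, p))            # diagonal test matrix
S = rng.standard_normal((p, q)) / np.sqrt(q)     # normalized so that S S^T ~ I_p

# Left-hand side: (1/p) tr[A S (S^T A S + lam I_q)^{-1} S^T]
inner = np.linalg.solve(S.T @ A @ S + lam * np.eye(q), S.T)
lhs = np.trace(A @ S @ inner) / p

a = np.diag(A)
def rhs(mu):                                     # (1/p) tr[A (A + mu I_p)^{-1}]
    return np.mean(a / (a + mu))

lo, hi = lam, lam                                # rhs is decreasing in mu
while rhs(hi) > lhs:                             # expand bracket upward
    hi *= 2.0
while rhs(lo) < lhs:                             # expand bracket downward
    lo /= 2.0
for _ in range(100):                             # bisection for mu
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rhs(mid) > lhs else (lo, mid)
mu = 0.5 * (lo + hi)
```

At the solution, $\mu \ge \lambda$, reflecting that feature sketching induces additional ridge regularization.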
Then we construct a scatter plot of $\mu/\lambda$ and $\smash{\mu \tfrac{1}{p} {\rm tr}[(\mathbf{A}+ \mu \mathbf{I}_p)^{-1}]}$ in (right), which should match $\mathscr S_{\mathbf{S}\mathbf{S}^\top}$ and be a decreasing continuous function. We see that this is indeed the case, and furthermore by also plotting the known S-transform for Gaussian and orthogonal sketches from , we see that CountSketch matches the Gaussian function and SRDCT matches the orthogonal function. ## Known S-transforms {#app:S-transforms} We state some known S-transforms in the following table, where we let $\alpha = q/p$. We also assume that $\mathbf{S}$ is normalized such that $\mathbf{S}\mathbf{S}^\top\simeq\mathbf{I}_p$, following [@lejeune2022asymptotics]. For the i.i.d. sketch, this is simply the S-transform of the Marchenko--Pastur distribution, and for the orthogonal sketch, it is the S-transform of a binary distribution on $\left\{0, \tfrac{1}{\alpha}\right\}$. The identity sketch refers to simply $\mathbf{S}= \mathbf{I}_p$.

| Sketching family | IID | Orthogonal, SRFT | Identity |
|---|---|---|---|
| $\mathscr S_{\mathbf{S}\mathbf{S}^\top}(w)$ | $\frac{\alpha}{\alpha + w}$ | $\frac{\alpha (1 + w)}{\alpha + w}$ | $1$ |

[\[tab:S-functions\]]{#tab:S-functions label="tab:S-functions"}

# Asymptotic equivalents for freely sketched resolvents {#app:useful-asymp-equi}

In this section, we provide a brief background on the language of asymptotic equivalents used in the proofs throughout the paper. We will state the definition of asymptotic equivalents and point to useful calculus rules. For more details, see [@dobriban_wager_2018; @dobriban_sheng_2021; @lejeune2022asymptotics]. We use the language of asymptotic equivalents throughout the paper, defined formally as follows.
[\[def:deterministic-equivalence\]]{#def:deterministic-equivalence label="def:deterministic-equivalence"} Consider sequences $( \mathbf{A}_p )_{p \ge 1}$ and $( \mathbf{B}_p )_{p \ge 1}$ of (random or deterministic) matrices of growing dimension. We say that $\mathbf{A}_p$ and $\mathbf{B}_p$ are equivalent, and write $\mathbf{A}_p \simeq\mathbf{B}_p$, if $\lim_{p \to \infty} | {\rm tr}[\mathbf{C}_p (\mathbf{A}_p - \mathbf{B}_p)] | = 0$ almost surely for any sequence $( \mathbf{C}_p )_{p \ge 1}$ of matrices with bounded trace norm, i.e., such that $\limsup_{p \to \infty} \| \mathbf{C}_p \|_{\mathrm{tr}} < \infty$. The notion of deterministic equivalents obeys various calculus rules, such as sum, product, differentiation, conditioning, and substitution, among others. We refer readers to [@patil2022bagging] for a comprehensive list of these calculus rules, their proofs, and other related details. ## Known asymptotic equivalents for ordinary ridge resolvents {#app:detequi-standard-ridge} In this section, we collect known asymptotic equivalents for the first- and second-order ordinary ridge resolvents. See [@hastie2022surprises; @patil2022bagging] for more details. [\[lem:standard-ridge-detequi\]]{#lem:standard-ridge-detequi label="lem:standard-ridge-detequi"} Suppose holds. Then, for $\mu > - \lambda_{\min}^{+}(\tfrac{1}{n} \mathbf{X}^\top\mathbf{X})$, the following statements hold: 1. First-order ordinary ridge resolvent: $$\label{eq:detequi-ridge-firstorder} \mu ({\widehat{{{\bm{\Sigma}}}}}+ \mu \mathbf{I}_p)^{-1} \simeq (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1},$$ where $v^{-1}$ is the most positive solution to $$\label{eq:v-fixed-point} \mu = v^{-1} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}].$$ 2.
Second-order ordinary ridge resolvent (population version): $$\label{eq:detequi-ridge-var} \mu^2 ({\widehat{{{\bm{\Sigma}}}}}+ \mu \mathbf{I}_p)^{-1} {\bm{\Sigma}}({\widehat{{{\bm{\Sigma}}}}}+ \mu \mathbf{I}_p)^{-1} \simeq (1 + \widetilde{v}_b) (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1},$$ where $v$ is as defined in [\[eq:v-fixed-point\]](#eq:v-fixed-point){reference-type="eqref" reference="eq:v-fixed-point"}, and $\widetilde{v}_b$ is defined in terms of $v$ by the equation $$\label{eq:def-tvb-ridge} \widetilde{v}_b = \frac{\displaystyle \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}] }{\displaystyle v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}] }.$$ 3. Second-order ordinary ridge resolvent (empirical version): $$\label{eq:detequi-ridge-bias} ({\widehat{{{\bm{\Sigma}}}}}+ \mu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}} ({\widehat{{{\bm{\Sigma}}}}}+ \mu \mathbf{I}_p)^{-1} \simeq \widetilde{v}_v (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1},$$ where $v$ is as defined in [\[eq:v-fixed-point\]](#eq:v-fixed-point){reference-type="eqref" reference="eq:v-fixed-point"}, and $\widetilde{v}_v$ is defined in terms of $v$ by the equation $$\label{eq:def-tvv-ridge} \widetilde{v}_v^{-1} = v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}].$$ ## New asymptotic equivalents for freely sketched ridge resolvents In this section, we derive first- and second-order equivalences for (both feature and observation) sketched resolvents. Their proofs are provided just after the statements. [\[lem:general-first-order\]]{#lem:general-first-order label="lem:general-first-order"} The following statements hold: 1. First-order sketched ridge resolvent (for feature sketch): Suppose $\Cref{cond:sketch}$ holds for $\mathbf{S}\mathbf{S}^\top$.
Then, for all $\lambda > \lambda_0$, $$\mathbf{S} (\mathbf{S}^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X}\mathbf{S} + \lambda \mathbf{I}_q)^{-1} \mathbf{S}^\top \simeq (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1},$$ where $\mu$ solves $$\label{eq:fp-dual-sketch} \mu = \lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top}\big(-\tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big]\big).$$ 2. First-order sketched ridge resolvent (for observation sketch): Suppose $\Cref{cond:sketch}$ holds for $\mathbf{T}\mathbf{T}^\top$. Then, for all $\lambda > \widetilde{\lambda}_0$, $$(\tfrac{1}{n} \mathbf{X}^\top \mathbf{T}\mathbf{T}^\top \mathbf{X}+ \lambda \mathbf{I}_p)^{-1} \mathbf{X}^\top \mathbf{T}\mathbf{T}^\top \simeq \mathbf{X}^\top (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1},$$ where $\nu$ solves $$\label{eq:fp-primal-sketch} \nu = \lambda \mathscr S_{\mathbf{T}\mathbf{T}^\top}\big(-\tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big).$$ [\[lem:general-second-order\]]{#lem:general-second-order label="lem:general-second-order"} Under the settings of , for any positive semidefinite ${\bm{\Psi}}$ with uniformly bounded operator norm, the following statements hold: 1. 
Second-order sketched ridge resolvent (for feature sketch): For all $\lambda > \lambda_0$, $$\begin{gathered} \mathbf{S}\big( \mathbf{S}^\top\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}^\top{\bm{\Psi}}\mathbf{S}\big( \mathbf{S}^\top\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}^\top\relax \simeq \big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1} ({\bm{\Psi}}+ \mu_{{\bm{\Psi}}}' \mathbf{I}_p) \big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}, \label{eq:general-second-order} \end{gathered}$$ where $\mu'_{{\bm{\Psi}}} \ge 0$ is given by: $$\begin{aligned} \label{eq:mu'_mPsi} \mu'_{{\bm{\Psi}}} = - \frac{\partial \mu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{S}\mathbf{S}^\top}'\big(- \tfrac{1}{p}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[{\bm{\Psi}}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big]. \end{aligned}$$ 2. 
Second-order sketched ridge resolvent (for observation sketch): For all $\lambda > \widetilde{\lambda}_0$, $$\begin{gathered} \tfrac{1}{n} \mathbf{T}\mathbf{T}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\mathbf{X}+ \lambda \mathbf{I}_p \big)^{-1} {\bm{\Psi}}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\mathbf{X}+ \lambda \mathbf{I}_p \big)^{-1} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\relax \simeq \big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n \big)^{-1} (\tfrac{1}{n} \mathbf{X}{\bm{\Psi}}\mathbf{X}^\top + \nu_{{\bm{\Psi}}}' \mathbf{I}_n) \big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}, \label{eq:general-second-order-primal} \end{gathered}$$ where $\nu_{{\bm{\Psi}}}' \ge 0$ is given by: $$\begin{aligned} \hspace{-1.5em} \nu'_{{\bm{\Psi}}} = {- \frac{\partial \nu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{T}\mathbf{T}^\top}'\big(- \tfrac{1}{n}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big)} \tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}{\bm{\Psi}}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-2}\big]. \label{eq:tmu'_mPsi} \end{aligned}$$ *Proof of .* There are two cases to show. 1. The statement for the feature sketch follows from . 2. For the statement for the observation sketch, we use the Woodbury matrix identity to write $$\begin{aligned} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{T}\mathbf{T}^\top \mathbf{X}+ \lambda \mathbf{I}_p)^{-1} \mathbf{X}^\top \mathbf{T}\mathbf{T}^\top &= \mathbf{X}^\top\mathbf{T}(\tfrac{1}{n} \mathbf{T}^\top\mathbf{X}\mathbf{X}^\top\mathbf{T}+ \lambda \mathbf{I}_m)^{-1} \mathbf{T}^\top. \end{aligned}$$ Now we can use the result for the feature sketch with $\mathbf{T}$ playing the role of $\mathbf{S}$ and $\mathbf{X}^\top$ playing the role of $\mathbf{X}$. This completes the two cases and finishes the proof.
◻ *Proof of .* There are again two cases to prove. 1. We begin with the feature sketch. Let $\mathbf{A}_z = \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ z {\bm{\Psi}}$ with corresponding $\mu_z$ from . Following the same strategy as the proof of [@lejeune2022asymptotics Theorem 4.8], the two sides of [\[eq:general-second-order\]](#eq:general-second-order){reference-type="eqref" reference="eq:general-second-order"} are equal to $$\begin{aligned} -\frac{\partial}{\partial z} \mathbf{S}\big( \mathbf{S}^\top\mathbf{A}_z \mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}^\top \simeq -\frac{\partial}{\partial z} \big( \mathbf{A}_z + \mu_z \mathbf{I}_p \big)^{-1} \end{aligned}$$ at $z = 0$, and therefore $\mu' = \partial \mu_z / \partial z$ at $z=0$. Letting $$\mathscr S' = \mathscr S_{\mathbf{S}\mathbf{S}^\top}'(- \tfrac{1}{p}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big])$$ for brevity, and noting that $$- \mathbf{A}_z \big( \mathbf{A}_z + \mu_z \mathbf{I}_p \big)^{-1} = \mu_z \big( \mathbf{A}_z + \mu_z \mathbf{I}_p \big)^{-1} - \mathbf{I}_p,$$ we differentiate [\[eq:cond-sketch\]](#eq:cond-sketch){reference-type="eqref" reference="eq:cond-sketch"} to obtain for $z = 0$ $$\begin{aligned} \mu' = \lambda \mathscr S' \cdot \left( \mu' \tfrac{1}{p}{\rm tr}\big[\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big] -\mu \tfrac{1}{p} {\rm tr}\big[\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2} ({\bm{\Psi}}+ \mu' \mathbf{I}_p)\big] \right).
\end{aligned}$$ Solving for $\mu'$, we get $$\mu' = \frac {-\lambda \mu \mathscr S' \tfrac{1}{p}{\rm tr}\big[{\bm{\Psi}}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big]} { \lambda \mu \mathscr S' \tfrac{1}{p} {\rm tr}\big[\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big] - \lambda \mathscr S' \tfrac{1}{p} {\rm tr}\big[\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big] + 1}.$$ Meanwhile, if we take partial derivatives with respect to $\lambda$ (after dividing by $\lambda$ on both sides), $$\begin{aligned} \frac{\partial \mu}{\partial \lambda} \frac{1}{\lambda} - \frac{\mu}{\lambda^2} = \mathscr S' \cdot \left( \tfrac{1}{p}{\rm tr}\big[\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big] -\mu \tfrac{1}{p} {\rm tr}\big[ \big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big] \right) \frac{\partial \mu}{\partial \lambda}. \end{aligned}$$ Combining these two equations gives the stated result for the feature sketch. 2. For the observation sketch, we once again use the Woodbury matrix identity to write $$\begin{aligned} & \tfrac{1}{n} \mathbf{T}\mathbf{T}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\mathbf{X}+ \lambda \mathbf{I}_p \big)^{-1} {\bm{\Psi}}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\mathbf{X}+ \lambda \mathbf{I}_p \big)^{-1} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\relax &= \tfrac{1}{n} \mathbf{T}\big( \tfrac{1}{n} \mathbf{T}^\top\mathbf{X}\mathbf{X}^\top\mathbf{T}+ \lambda \mathbf{I}_m \big)^{-1} \mathbf{T}^\top\mathbf{X} {\bm{\Psi}}\mathbf{X}^\top\mathbf{T}\big( \tfrac{1}{n} \mathbf{T}^\top\mathbf{X}\mathbf{X}^\top\mathbf{T}+ \lambda \mathbf{I}_m \big)^{-1} \mathbf{T}^\top. 
\end{aligned}$$ The equivalence in [\[eq:general-second-order-primal\]](#eq:general-second-order-primal){reference-type="eqref" reference="eq:general-second-order-primal"} and the inflation parameter in [\[eq:tmu\'\_mPsi\]](#eq:tmu'_mPsi){reference-type="eqref" reference="eq:tmu'_mPsi"} now follow from the second-order result for the feature sketch by substituting $\mathbf{T}$ for $\mathbf{S}$, $\mathbf{X}$ for $\mathbf{X}^\top$, and $\tfrac{1}{n} \mathbf{X}{\bm{\Psi}}\mathbf{X}^\top$ for ${\bm{\Psi}}$ in [\[eq:general-second-order\]](#eq:general-second-order){reference-type="eqref" reference="eq:general-second-order"}. This finishes the two cases and concludes the proof. ◻ # Proofs in {#app:proofs-sec:sqaured-risk} ## Proof of {#app:proof-thm:risk-gcv-asymptotics} Below we first provide the complete statement of , which includes the expressions for $\mu'$ and $\mu''$ that are omitted from the main paper. [\[thm:risk-gcv-asymptotics-appendix\]]{#thm:risk-gcv-asymptotics-appendix label="thm:risk-gcv-asymptotics-appendix"} Suppose hold.
Then, for all $\lambda > \lambda_0 =-\liminf_{p \to \infty} \min_{k \in [K]}{\lambda_{\min}^{+}(\mathbf{S}^\top_k \widehat{{\bm{\Sigma}}}\mathbf{S}_k)}$ and all $K$, $$\begin{gathered} \label{eq:risk-gcv-decomp-appendix} R \big( \widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq R \big( \widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge} \big) + \frac{\mu' \Delta}{K} \quad \text{and} \quad {\widehat{R}}\big( \widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq {\widehat{R}}\big( \widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge} \big) + \frac{\mu'' \Delta}{K}, \end{gathered}$$ where $\mu$ is an implicit regularization parameter that solves [\[eq:fp-dual-sketch\]](#eq:fp-dual-sketch){reference-type="eqref" reference="eq:fp-dual-sketch"}, $\Delta = \tfrac{1}{n} \mathbf{y}^\top(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \mu \mathbf{I}_n)^{-2} \mathbf{y}$, and $\mu' \geq 0$ is an inflation factor in the risk decomposition given by: $$\label{eq:mu'-proof} \mu' = - \frac{\partial \mu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{S}\mathbf{S}^\top}'\big(- \tfrac{1}{p}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[{\bm{\Sigma}}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big],$$ while $\mu'' \geq 0$ is an inflation factor in the GCV decomposition given by: $$\label{eq:mu''-proof} \mu'' = \frac{\displaystyle - \frac{\partial \mu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{S}\mathbf{S}^\top}'\big(- \tfrac{1}{p}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big]}{\displaystyle \left(1 - \tfrac{1}{n} {\rm tr}[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu 
\mathbf{I}_p)^{-1}] \right)^2}.$$ *Proof.* The core component of the proof is . We shall break down the proof into two parts: risk asymptotics and GCV asymptotics. Before proceeding, let us first introduce some essential notation. *Notation*: We decompose the unknown response $y_0$ into its linear predictor and residual. Specifically, let ${\bm{\beta}}_0$ be the optimal projection parameter given by ${\bm{\beta}}_0 = {\bm{\Sigma}}^{-1} \mathbb{E}[\mathbf{x}_0 y_0]$. Then, we can express the response as a sum of its best linear predictor, $\mathbf{x}_0^\top {\bm{\beta}}_0$, and the residual, $y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0$. Denote the variance of this residual by $\sigma^2 = \mathbb{E}[(y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0)^2]$. #### Part 1: Risk asymptotics. It is easy to see that the risk decomposes as follows: $$R(\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda) = \mathbb{E}\big[ (y_0 - \mathbf{x}_0^\top \widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens})^2 \mid \mathbf{X}, \mathbf{y}, (\mathbf{S}_k)_{k=1}^{K}\big] = (\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda - {\bm{\beta}}_0)^\top {\bm{\Sigma}} (\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda - {\bm{\beta}}_0) + \sigma^2.$$ Here, we used the fact that $(y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0)$ is uncorrelated with $\mathbf{x}_0$, that is, $\mathbb{E}[\mathbf{x}_0 (y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0)] = \bm{0}_p$. We note that $\| {\bm{\beta}}_0 \|_2 < \infty$ and ${\bm{\Sigma}}$ has uniformly bounded operator norm from .
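As a quick numerical sanity check of this risk decomposition, one can compare a Monte Carlo estimate of the out-of-sample squared error against the closed form $(\widehat{{\bm{\beta}}} - {\bm{\beta}}_0)^\top {\bm{\Sigma}} (\widehat{{\bm{\beta}}} - {\bm{\beta}}_0) + \sigma^2$. The sketch below uses a hypothetical covariance, projection parameter, and estimate; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5

# Hypothetical covariance, projection parameter, and residual variance.
A = rng.standard_normal((p, p))
Sigma = A @ A.T / p + np.eye(p)
beta0 = rng.standard_normal(p)
sigma2 = 0.5

# Any fixed estimate (here, a perturbation of beta0).
beta_hat = beta0 + 0.3 * rng.standard_normal(p)

# Monte Carlo estimate of the conditional prediction risk.
n_mc = 200_000
Lchol = np.linalg.cholesky(Sigma)
x0 = rng.standard_normal((n_mc, p)) @ Lchol.T          # rows ~ N(0, Sigma)
y0 = x0 @ beta0 + np.sqrt(sigma2) * rng.standard_normal(n_mc)
risk_mc = np.mean((y0 - x0 @ beta_hat) ** 2)

# Closed form: (beta_hat - beta0)' Sigma (beta_hat - beta0) + sigma^2.
diff = beta_hat - beta0
risk_exact = diff @ Sigma @ diff + sigma2

assert abs(risk_mc - risk_exact) / risk_exact < 0.02
```

The agreement relies only on the residual $y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0$ being uncorrelated with $\mathbf{x}_0$, exactly as used in the proof.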
Applying then yields: $$\begin{aligned} R(\widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens}) = (\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda - {\bm{\beta}}_0)^\top {\bm{\Sigma}} (\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda - {\bm{\beta}}_0) + \sigma^2 &\simeq (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0)^\top {\bm{\Sigma}} (\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}- {\bm{\beta}}_0) + \sigma^2 + \frac{\mu' \Delta}{K} \\ &= R(\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}) + \frac{\mu' \Delta}{K},\end{aligned}$$ where $\mu'$ is as defined in [\[eq:mu\'-proof\]](#eq:mu'-proof){reference-type="eqref" reference="eq:mu'-proof"}. This completes the first part of the proof for the risk asymptotics decomposition. #### Part 2: GCV asymptotics. We will work on the numerator and denominator asymptotics separately, and combine them to get the final expression for the GCV asymptotics. *Numerator:* We start with the numerator. A decomposition similar to that of the risk yields $$\begin{aligned} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens}\|_2^2 &= \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) + (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) \|_2^2 \\ &= \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \|_2^2 + \tfrac{1}{n} \| (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) \|_2^2 + \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}).\end{aligned}$$ From , note that $$\tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \simeq \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}).$$ Next we expand $$\begin{aligned} \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \|_2^2 &=
(\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0)^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0) \\ &\simeq (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0)^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0) + \frac{\mu''_{\mathrm{num}} \Delta}{K},\end{aligned}$$ where $\mu''_{\mathrm{num}}$ is given by: $$\mu''_{\mathrm{num}} = - \frac{\partial \mu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{S}\mathbf{S}^\top}'\big(- \tfrac{1}{p}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big].$$ Now appealing to , we have $$\begin{aligned} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} \|_2^2 &\simeq (\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}- {\bm{\beta}}_0)^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}- {\bm{\beta}}_0) + \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}) \nonumber \\ &\quad + \tfrac{1}{n} \| (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) \|_2^2 + \frac{\mu''_{\mathrm{num}} \Delta}{K}.
\label{eq:ensemble-squared-decomp-dual-1}\end{aligned}$$ Note also that $$\begin{aligned} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_\mu^\mathrm{ridge}\|_2^2 &= \tfrac{1}{n} \| (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) + \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}) \|_2^2 \nonumber \\ &= \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}{\bm{\beta}}_0 \|_2^2 + \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}) + \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}) \|_2^2. \label{eq:ensemble-squared-decomp-dual-2}\end{aligned}$$ Combining [\[eq:ensemble-squared-decomp-dual-1\]](#eq:ensemble-squared-decomp-dual-1){reference-type="eqref" reference="eq:ensemble-squared-decomp-dual-1"} and [\[eq:ensemble-squared-decomp-dual-2\]](#eq:ensemble-squared-decomp-dual-2){reference-type="eqref" reference="eq:ensemble-squared-decomp-dual-2"}, we arrive at the following asymptotic decomposition for the numerator: $$\label{eq:gcv-numerator-asympequi} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} \|_2^2 \simeq \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} \|_2^2 + \frac{\mu''_{\mathrm{num}} \Delta}{K}.$$ *Denominator:* Next we work on the denominator. For the ensemble smoothing matrix, observe that $$\begin{aligned} \mathbf{L}^\mathrm{ens}_{\lambda} = \frac{1}{K} \sum_{k=1}^{K} \tfrac{1}{n} \mathbf{X}\mathbf{S}_k (\tfrac{1}{n} \mathbf{S}_k^\top\mathbf{X}^\top\mathbf{X}\mathbf{S}_k + \lambda \mathbf{I}_q)^{-1} \mathbf{S}_k^\top\mathbf{X}^\top \simeq\tfrac{1}{n} \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p)^{-1} \mathbf{X}^\top,\end{aligned}$$ where we used to write the asymptotic equivalence in the last line.
Thus, we have $$\label{eq:gcv-denominator-asympequi} {\rm tr}[\mathbf{L}^\mathrm{ens}_{\lambda}] \simeq{\rm tr}[(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{X}] = {\rm tr}[\mathbf{L}^\mathrm{ridge}_{\mu}].$$ Therefore, combining [\[eq:gcv-numerator-asympequi\]](#eq:gcv-numerator-asympequi){reference-type="eqref" reference="eq:gcv-numerator-asympequi"} and [\[eq:gcv-denominator-asympequi\]](#eq:gcv-denominator-asympequi){reference-type="eqref" reference="eq:gcv-denominator-asympequi"}, for the GCV estimator, we obtain $$\begin{aligned} {\widehat{R}}(\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) = \frac{\tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} \|_2^2}{(1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_{\lambda}^{\mathrm{ens}}])^2} \simeq \frac{\displaystyle \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} \|_2^2 + \frac{\mu''_{\mathrm{num}} \Delta}{K} }{\displaystyle (1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_{\lambda}^{\mathrm{ens}}])^2} &\simeq \frac{\displaystyle \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} \|_2^2 + \frac{\mu''_{\mathrm{num}} \Delta}{K} }{\displaystyle (1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_{\mu}^{\mathrm{ridge}}])^2} \\ &= {\widehat{R}}(\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}) + \frac{\mu'' \Delta}{K},\end{aligned}$$ where $\mu''$ is as defined in [\[eq:mu\'\'-proof\]](#eq:mu''-proof){reference-type="eqref" reference="eq:mu''-proof"}. This finishes the second part of the proof for the GCV asymptotics decomposition and concludes the proof. ◻ ### Helper lemma for the proof of {#helper-lemma-for-the-proof-of .unnumbered} [\[lem:quad-risk-ensemble\]]{#lem:quad-risk-ensemble label="lem:quad-risk-ensemble"} Assume the conditions of . Let ${\bm{\Psi}}$ be any positive semidefinite matrix with uniformly bounded operator norm, that is independent of $( \mathbf{S}_k )_{k=1}^{K}$.
Let ${\bm{\beta}}_0 \in \mathbb{R}^{p}$ be any vector with uniformly bounded Euclidean norm, that is independent of $( \mathbf{S}_k )_{k=1}^{K}$. Then, under , for $\lambda > \lambda_0$ and all $K$, $$(\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0) \simeq (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0) + \frac{\mu_{{\bm{\Psi}}}' \Delta}{K},$$ where $\mu$ is as defined in [\[eq:fp-dual-sketch\]](#eq:fp-dual-sketch){reference-type="eqref" reference="eq:fp-dual-sketch"}, $\mu_{{\bm{\Psi}}}'$ is as defined in [\[eq:mu\'\_mPsi\]](#eq:mu'_mPsi){reference-type="eqref" reference="eq:mu'_mPsi"}, and $\Delta$ is as defined in . *Proof.* We start with a decomposition. Observe that $$\begin{aligned} &(\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda - {\bm{\beta}}_0) \\ &= \bigg( \frac{1}{K} \sum_{k=1}^{K} \widehat{{\bm{\beta}}}_\lambda^k - {\bm{\beta}}_0 \bigg)^\top {\bm{\Psi}} \bigg( \frac{1}{K} \sum_{k=1}^{K} \widehat{{\bm{\beta}}}_\lambda^k - {\bm{\beta}}_0 \bigg) \\ &= \frac{1}{K^2} \sum_{k, \ell=1}^K (\widehat{{\bm{\beta}}}_\lambda^{k})^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda^{\ell} - \frac{2}{K} \sum_{k=1}^{K} {{\bm{\beta}}_0}^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda^{k} + {{\bm{\beta}}_0}^\top{\bm{\Psi}}{\bm{\beta}}_0 \\ &= \frac{1}{K^2} \sum_{k, \ell=1}^K (\widehat{{\bm{\beta}}}_\lambda^{k})^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda^{\ell} - (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}})^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge} + (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}})^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}- \frac{2}{K} \sum_{k=1}^{K} {{\bm{\beta}}_0}^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda^{k}+ {{\bm{\beta}}_0}^\top{\bm{\Psi}}{\bm{\beta}}_0.\end{aligned}$$ By , note that $$\frac{1}{K} \sum_{k = 1}^{K} \widehat{{\bm{\beta}}}_{\lambda}^{k} \simeq\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}}.$$ Similarly, by two applications of , we know that $(\widehat{{\bm{\beta}}}_\lambda^{k})^{\top} {\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda^{\ell} - (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}})^{\top} {\bm{\Psi}}\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}\xrightarrow{\text{a.s.}}0$ when $k \neq \ell$ since $\mathbf{S}_k$ and $\mathbf{S}_\ell$ are independent. The remaining $K$ terms where $k = \ell$ converge identically, so $$\begin{aligned} (\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widehat{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0) - \left( (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}} - {\bm{\beta}}_0) + \frac{1}{K} \big( \widehat{{\bm{\beta}}}_\lambda^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda - (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}})^{\top} {\bm{\Psi}}\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge} \big) \right) \xrightarrow{\text{a.s.}}0.\end{aligned}$$ Thus, it suffices to evaluate the difference $\widehat{{\bm{\beta}}}_\lambda^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda - (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}})^{\top} {\bm{\Psi}}\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}$, which we will do next.
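Before doing so, it is worth noting that the opening expansion of the ensemble quadratic form is a finite-$K$ algebraic identity, which can be verified numerically before taking any limits. The sketch below uses arbitrary stand-in vectors for the constituent estimates and a hypothetical positive semidefinite ${\bm{\Psi}}$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, K = 6, 4

# Hypothetical PSD weighting matrix and reference vector.
B = rng.standard_normal((p, p))
Psi = B @ B.T
beta0 = rng.standard_normal(p)
betas = rng.standard_normal((K, p))   # stand-ins for the K constituent estimates

# Left-hand side: quadratic form of the averaged (ensemble) estimate.
bbar = betas.mean(axis=0)
lhs = (bbar - beta0) @ Psi @ (bbar - beta0)

# Right-hand side: (1/K^2) sum_{k,l} b_k' Psi b_l
#                  - (2/K) sum_k beta0' Psi b_k + beta0' Psi beta0.
G = betas @ Psi @ betas.T             # Gram matrix of cross quadratic forms
rhs = G.mean() - 2.0 * (betas @ Psi @ beta0).mean() + beta0 @ Psi @ beta0

assert np.isclose(lhs, rhs)
```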
By linearity of the trace, we have $$\begin{gathered} \widehat{{\bm{\beta}}}_\lambda^\top{\bm{\Psi}}\widehat{{\bm{\beta}}}_\lambda - (\widehat{{\bm{\beta}}}_{\mu}^{\mathrm{ridge}})^{\top} {\bm{\Psi}}\widehat{{\bm{\beta}}}_{\mu}^\mathrm{ridge}= \tfrac{1}{n} {\rm tr}\left[ \big( \mathbf{X}_\lambda^{\ddagger\top} {\bm{\Psi}}\mathbf{X}_\lambda^\ddagger - \mathbf{X}_\lambda^{\dagger\top} {\bm{\Psi}}\mathbf{X}_\lambda^\dagger \big) \mathbf{y}\mathbf{y}^\top \right],\end{gathered}$$ where $$\mathbf{X}_\lambda^\ddagger = \tfrac{1}{\sqrt{n}} \mathbf{S}\big( \tfrac{1}{n} \mathbf{S}^\top\mathbf{X}^\top\mathbf{X}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \mathbf{S}^\top\mathbf{X}^\top \text{ and } \mathbf{X}_{\lambda}^\dagger =\tfrac{1}{\sqrt{n}} \left( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \right)^{-1} \mathbf{X}^\top.$$ The result now follows by evaluating the second-order asymptotic equivalences from . This concludes the proof. ◻ ## Proof of {#proof-of} The main ingredient of the proof will be . Comparing the expressions of $\mu'$ and $\mu''$, it suffices to show the following equivalence: $$\tfrac{1}{p} {\rm tr}\big[{\bm{\Sigma}}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big] \simeq \frac{\tfrac{1}{p} {\rm tr}\big[ \tfrac{1}{n} \mathbf{X}^\top \mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p \big)^{-2}\big]}{\left(1 - \tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1}\big] \right)^2}.$$ We show this equivalence in to finish the proof. A side remark worth stressing about the proof of : the inflations in both the test and train errors are such that the same GCV denominator cancels them appropriately!
Thus, while one may expect GCV for the infinite ensemble to work given the equivalence to ridge regression, the fact that GCV works for a single sketch instance is (even to the authors) quite remarkable! ### Helper lemma for the proof of {#helper-lemma-for-the-proof-of-1 .unnumbered} [\[lem:dual-mu\'-mu\'\'-matching\]]{#lem:dual-mu'-mu''-matching label="lem:dual-mu'-mu''-matching"} Under , $$\label{eq:dual-mu'-mu''-matching-proof} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} {\bm{\Sigma}} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \simeq \frac{\displaystyle (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} }{\displaystyle \left(1 - \tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1}\big] \right)^2 }.$$ *Proof.* We will first derive asymptotic equivalents for both the left-hand and right-hand sides of [\[eq:dual-mu\'-mu\'\'-matching-proof\]](#eq:dual-mu'-mu''-matching-proof){reference-type="eqref" reference="eq:dual-mu'-mu''-matching-proof"}. We will then show that the asymptotic equivalents match appropriately. #### Asymptotic equivalent for left-hand side.
From the second part of , we have $$\mu^2 (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} {\bm{\Sigma}} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \simeq (1 + \widetilde{v}_b) (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1},$$ where the parameter $\widetilde{v}_b$ from [\[eq:def-tvb-ridge\]](#eq:def-tvb-ridge){reference-type="eqref" reference="eq:def-tvb-ridge"} is given by: $$\widetilde{v}_b = \frac{\displaystyle \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}] }{\displaystyle v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}] }.$$ Now, note that $$1 + \widetilde{v}_b = \frac{v^{-2}}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]},$$ which leads to $$\frac{1 + \widetilde{v}_b}{\mu^2} = \frac{1}{\mu^2} \frac{v^{-2}}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]}.$$ Thus, we have that $$\begin{aligned} &(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} {\bm{\Sigma}} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \nonumber \\ &\simeq \frac{1}{\mu^2} \frac{v^{-2}}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}. \label{eq:dual-matching-lhs-asympequi} \end{aligned}$$ #### Asymptotic equivalent for right-hand side.
From the third part of , we have $$\label{eq:dual-matching-rhs-numerator} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \simeq \widetilde{v}_v (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1},$$ where the parameter $\widetilde{v}_v$ from [\[eq:def-tvv-ridge\]](#eq:def-tvv-ridge){reference-type="eqref" reference="eq:def-tvv-ridge"} is given by: $$\label{eq:dual-matching-rhs-numerator-fp} \widetilde{v}_v = \frac{1}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]}.$$ From the first part of , we have $$\begin{aligned} 1 - \tfrac{1}{n} {\rm tr}[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1}] &= 1 - \tfrac{1}{n} {\rm tr}[\mathbf{I}_p - \mu (\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p)^{-1}] \nonumber \\ &= 1 - \gamma \big(1 - \tfrac{1}{p} \mu {\rm tr}[(\tfrac{1}{n} \mathbf{X}^\top\mathbf{X}+ \mu \mathbf{I}_p)^{-1}]\big) \nonumber \\ &\simeq 1 - \gamma \big(1 - \tfrac{1}{p} {\rm tr}[(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}]\big) \nonumber \\ &= 1 - \gamma + \gamma \tfrac{1}{p} {\rm tr}[(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}].
\label{eq:dual-matching-rhs-denominator} \end{aligned}$$ Combining [\[eq:dual-matching-rhs-numerator\]](#eq:dual-matching-rhs-numerator){reference-type="eqref" reference="eq:dual-matching-rhs-numerator"}, [\[eq:dual-matching-rhs-numerator-fp\]](#eq:dual-matching-rhs-numerator-fp){reference-type="eqref" reference="eq:dual-matching-rhs-numerator-fp"}, and [\[eq:dual-matching-rhs-denominator\]](#eq:dual-matching-rhs-denominator){reference-type="eqref" reference="eq:dual-matching-rhs-denominator"}, we obtain $$\begin{aligned} &\frac{\displaystyle (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} }{\displaystyle \left(1 - \tfrac{1}{n} {\rm tr}[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1}] \right)^2 } \nonumber \\ &\simeq \frac{1}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]} \cdot \frac{\displaystyle (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} }{\displaystyle \big(1 - \gamma + \gamma \tfrac{1}{p} {\rm tr}[(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}]\big)^2}. \label{eq:dual-matching-rhs-asympequi} \end{aligned}$$ #### Matching of asymptotic equivalents.
Note from the fixed-point equation [\[eq:v-fixed-point\]](#eq:v-fixed-point){reference-type="eqref" reference="eq:v-fixed-point"} that $$\mu = v^{-1} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}].$$ Multiplying by $v$ on both sides yields $$\label{eq:dual-matching-fp-relation} \mu v = 1 - \gamma \tfrac{1}{p} {\rm tr}[v {\bm{\Sigma}}(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}] = 1 - \gamma + \gamma \tfrac{1}{p} {\rm tr}[(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}].$$ Thus, combining [\[eq:dual-matching-lhs-asympequi\]](#eq:dual-matching-lhs-asympequi){reference-type="eqref" reference="eq:dual-matching-lhs-asympequi"}, [\[eq:dual-matching-fp-relation\]](#eq:dual-matching-fp-relation){reference-type="eqref" reference="eq:dual-matching-fp-relation"}, and [\[eq:dual-matching-rhs-asympequi\]](#eq:dual-matching-rhs-asympequi){reference-type="eqref" reference="eq:dual-matching-rhs-asympequi"}, we have $$\begin{aligned} &(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} {\bm{\Sigma}} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \\ &\simeq \frac{1}{\mu^2} \frac{v^{-2}}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} \\ &= \frac{1}{(\mu v)^2} \frac{1}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} \\ &= \frac{1}{(1 - \gamma + \gamma \tfrac{1}{p} {\rm tr}[(v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1}])^2} \frac{1}{v^{-2} - \gamma \tfrac{1}{p} {\rm tr}[{\bm{\Sigma}}^2 (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-2}]} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} {\bm{\Sigma}} (v {\bm{\Sigma}}+ \mathbf{I}_p)^{-1} \\ &\simeq \frac{\displaystyle (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\tfrac{1}{n}
\mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1} }{\displaystyle \left(1 - \tfrac{1}{n} {\rm tr}[\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \mu \mathbf{I}_p)^{-1}] \right)^2 }. \end{aligned}$$ In the chain above, the first equivalence follows from [\[eq:dual-matching-lhs-asympequi\]](#eq:dual-matching-lhs-asympequi){reference-type="eqref" reference="eq:dual-matching-lhs-asympequi"}, the penultimate equality follows from [\[eq:dual-matching-fp-relation\]](#eq:dual-matching-fp-relation){reference-type="eqref" reference="eq:dual-matching-fp-relation"}, and the final equivalence follows from [\[eq:dual-matching-rhs-asympequi\]](#eq:dual-matching-rhs-asympequi){reference-type="eqref" reference="eq:dual-matching-rhs-asympequi"}. This finishes the proof. ◻ # Proofs in {#app:proofs-sec:functional-consistency} ## Proof of {#proof-of-1} As mentioned in the main paper, the idea of the proof is to exploit the close connection between GCV and LOOCV to prove the consistency for general functionals. In particular, we will consider an intermediate functional, constructed based on LOO-reweighted residuals. We will then connect the functional constructed based on GCV-reweighted residuals to that based on LOO-reweighted residuals. #### Step 1: LOOCV consistency. Let $\mathbf{X}_{-i}$ denote the feature matrix obtained by removing the $i$-th row from $\mathbf{X}$, and $\mathbf{y}_{-i}$ the response vector obtained by removing the $i$-th entry from $\mathbf{y}$. Let $\widehat{{\bm{\beta}}}^\mathrm{ens}_{-i,\lambda}$ denote the LOO ensemble estimator.
It is defined using $K$ constituent sketched LOO estimators $\widehat{{\bm{\beta}}}_{-i,\lambda}^k$ for $k \in [K]$ as follows: $$\widehat{{\bm{\beta}}}^\mathrm{ens}_{-i,\lambda} = \frac{1}{K} \sum_{k=1}^{K} \widehat{{\bm{\beta}}}^k_{-i, \lambda}, \quad \text{where} \quad \widehat{{\bm{\beta}}}^k_{-i,\lambda} = \tfrac{1}{n} \mathbf{S}_k (\tfrac{1}{n} \mathbf{S}_k^\top \mathbf{X}_{-i}^\top \mathbf{X}_{-i} \mathbf{S}_k + \lambda \mathbf{I}_q)^{-1} \mathbf{S}_k^\top \mathbf{X}_{-i}^\top \mathbf{y}_{-i}.$$ The LOOCV functional is then defined using the predictions of the LOO ensemble estimator as: $$\label{eq:loo-functional-original} {\widehat{T}}^{\mathrm{loo}}_{\lambda} = \frac{1}{n} \sum_{i=1}^{n} t(y_i, \mathbf{x}_i^\top \widehat{{\bm{\beta}}}^{\mathrm{ens}}_{-i,\lambda}).$$ It follows from the proof of Theorem 3 of [@patil2022estimating] that $| R(\widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens}) - {\widehat{T}}^{\mathrm{loo}}_{\lambda} | \xrightarrow{\text{a.s.}}0.$ Next we will show that $| {\widehat{T}}^{\mathrm{loo}}_{\lambda} - {\widehat{T}}_{\lambda} | \xrightarrow{\text{a.s.}}0,$ where ${\widehat{T}}_{\lambda}$ is the GCV functional. #### Step 2: From LOOCV to GCV consistency. To go from LOOCV to GCV, we first rewrite the LOO errors. From the Woodbury matrix identity, observe that $$y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}^k_{-i,\lambda} = \frac{y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}^k_{\lambda}} {1 - [\mathbf{L}_{\lambda}^k]_{ii}}.$$ This is the so-called exact shortcut for LOO errors for ridge regression, which also works for sketched ridge regression.
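The exactness of this shortcut for sketched ridge regression is easy to check numerically: refitting with the $i$-th row deleted (keeping the fixed $1/n$ scaling, as in the LOO estimator above) reproduces the reweighted residual to machine precision. The sketch below is a minimal illustration with a hypothetical Gaussian sketch; it is not tied to the paper's assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q, lam = 30, 20, 10, 0.7

X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
S = rng.standard_normal((p, q)) / np.sqrt(q)   # hypothetical Gaussian sketch

def sketched_ridge(Xmat, yvec, S, lam, n):
    """Sketched ridge estimate; the 1/n scaling uses the full n even when
    a row has been deleted, matching the LOO estimator above."""
    Z = Xmat @ S
    q = S.shape[1]
    return S @ np.linalg.solve(Z.T @ Z / n + lam * np.eye(q), Z.T @ yvec / n)

beta = sketched_ridge(X, y, S, lam, n)
Z = X @ S
# Smoothing matrix L = (1/n) X S (S'X'XS/n + lam I)^{-1} S'X'.
Lmat = Z @ np.linalg.solve(Z.T @ Z / n + lam * np.eye(q), Z.T) / n

for i in range(n):
    keep = np.arange(n) != i
    beta_loo = sketched_ridge(X[keep], y[keep], S, lam, n)
    shortcut = (y[i] - X[i] @ beta) / (1.0 - Lmat[i, i])
    assert np.isclose(y[i] - X[i] @ beta_loo, shortcut)
```

The identity holds because sketched ridge is ordinary ridge in the sketched features $\mathbf{Z} = \mathbf{X}\mathbf{S}$, so the classical Sherman–Morrison argument for ridge LOO residuals applies verbatim.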
In other words, we have $$\mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{-i,\lambda}^{k} = y_i - \frac{y_i - \mathbf{x}_{i}^\top \widehat{{\bm{\beta}}}_{\lambda}^k}{1 - [\mathbf{L}_{\lambda}^k]_{ii}}.$$ This implies that $$\mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{-i,\lambda}^{\mathrm{ens}} = y_i - \frac{1}{K} \sum_{k=1}^{K} \frac{y_i - \mathbf{x}_{i}^\top \widehat{{\bm{\beta}}}_{\lambda}^k}{1 - [\mathbf{L}_{\lambda}^k]_{ii}}.$$ Thus, we can write the LOO functional [\[eq:loo-functional-original\]](#eq:loo-functional-original){reference-type="eqref" reference="eq:loo-functional-original"} in the shortcut form as follows: $${\widehat{T}}^\mathrm{loo}_{\lambda} = \frac{1}{n} \sum_{i=1}^{n} t\left(y_i, y_i - \frac{1}{K} \sum_{k=1}^{K} \frac{y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{\lambda}^k}{1 - [\mathbf{L}_{\lambda}^k]_{ii}}\right).$$ We will now bound the difference: $$\begin{aligned} |{\widehat{T}}^{\mathrm{loo}}_\lambda - {\widehat{T}}_\lambda| &= \frac{1}{n} \sum_{i=1}^{n} \left| t\left(y_i, y_i - \frac{1}{K} \sum_{k=1}^K \frac{y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{\lambda}^k}{1 - [\mathbf{L}_{\lambda}^k]_{ii}}\right) - t\left(y_i, y_i - \frac{y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{\lambda}^\mathrm{ens}}{1 - \tfrac{1}{n}{\rm tr}[\mathbf{L}_{\lambda}^\mathrm{ens}]}\right) \right| \\ &= \frac{1}{n} \sum_{i=1}^{n} \left| t\left(y_i, y_i - \frac{1}{K} \sum_{k=1}^K \frac{y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{\lambda}^k}{1 - [\mathbf{L}_{\lambda}^k]_{ii}}\right) - t\left(y_i, y_i - \frac{1}{K} \sum_{k=1}^{K} \frac{y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{\lambda}^k}{1 - \tfrac{1}{n}{\rm tr}[\mathbf{L}_{\lambda}^\mathrm{ens}]}\right) \right|.\end{aligned}$$ The final equality follows from using the definition of the ensemble estimator [\[eq:ensemble-estimator\]](#eq:ensemble-estimator){reference-type="eqref" reference="eq:ensemble-estimator"}.
For notational simplicity, denote by: - $r_i^k = y_i - \mathbf{x}_i^\top \widehat{{\bm{\beta}}}_{\lambda}^k$ for $k \in [K]$ and $i \in [n]$; - $d_i^k = 1 - [\mathbf{L}_\lambda^k]_{ii}$ for $k \in [K]$ and $i \in [n]$, and $\bar{d} = 1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}^\mathrm{ens}_{\lambda}]$; - $\mathbf{u}_i = (y_i, y_i - \tfrac{1}{K} \sum_{k=1}^{K} \tfrac{r_i^k}{d_i^k})$, $\mathbf{v}_i = (y_i, y_i - \tfrac{1}{K} \sum_{k=1}^{K} \tfrac{r_i^k}{\bar{d}})$. Now, consider the desired difference: $$\begin{aligned} |{\widehat{T}}^{\mathrm{loo}}_\lambda - {\widehat{T}}_\lambda| = \frac{1}{n} \sum_{i=1}^{n} \left| t(\mathbf{u}_i) - t(\mathbf{v}_i) \right| &\le \frac{1}{n} \sum_{i=1}^{n} L (1 + \| \mathbf{u}_i \|_2 + \| \mathbf{v}_i \|_2) (\| \mathbf{u}_i - \mathbf{v}_i \|_2) \nonumber \\ &= \frac{1}{n} \sum_{i=1}^{n} L (1 + \| \mathbf{u}_i \|_2 + \| \mathbf{v}_i \|_2) \left| \frac{1}{K} \sum_{k=1}^{K} r_i^k \left( \frac{1}{d_i^k} - \frac{1}{\bar{d}} \right) \right|. \label{eq:loo-bound-1}\end{aligned}$$ Above, we used in the inequality. Using the Hölder inequality, we can bound $$\begin{aligned} \left| \frac{1}{K} \sum_{k=1}^{K} r_i^k \left( \frac{1}{d_i^k} - \frac{1}{\bar{d}} \right) \right| &\le \frac{1}{K} \sum_{k=1}^{K} |r_i^k| \cdot \max_{1 \le k \le K} \left| \frac{1}{d_i^k} - \frac{1}{\bar{d}} \right| \nonumber \\ &\le \frac{1}{\sqrt{K}} \| \mathbf{r}_i \|_2 \cdot \max_{1 \le k \le K} \left| \frac{1}{d_i^k} - \frac{1}{\bar{d}} \right|, \label{eq:loo-bound-2}\end{aligned}$$ where we denote by $\mathbf{r}_i = (r_i^1, \dots, r_i^K)$ for $i \in [n]$. In the second inequality, we used the fact that $\| \mathbf{r}_i \|_{1} \le \sqrt{K} \| \mathbf{r}_i \|_2$.
Combining [\[eq:loo-bound-1\]](#eq:loo-bound-1){reference-type="eqref" reference="eq:loo-bound-1"} with [\[eq:loo-bound-2\]](#eq:loo-bound-2){reference-type="eqref" reference="eq:loo-bound-2"} yields $$\begin{aligned} |{\widehat{T}}^{\mathrm{loo}}_{\lambda} - {\widehat{T}}_{\lambda}| &\le \frac{1}{n} \sum_{i=1}^{n} \bigg\{ L (1 + \| \mathbf{u}_i \|_2 + \| \mathbf{v}_i \|_2) \Big(\tfrac{1}{\sqrt{K}} \| \mathbf{r}_i \|_2\Big) \bigg\} \bigg\{ \max_{1 \le k \le K} \left| \frac{1}{d_i^k} - \frac{1}{\bar{d}} \right| \bigg\} \\ &\le L \bigg\{ \frac{1}{n} \sum_{i=1}^{n} (1 + \| \mathbf{u}_i \|_2 + \| \mathbf{v}_i \|_2) \Big(\tfrac{1}{\sqrt{K}} \| \mathbf{r}_i \|_2\Big) \bigg\} \bigg\{ \max_{1 \le i \le n} \max_{1 \le k \le K} \left| \frac{1}{d_i^k} - \frac{1}{\bar{d}} \right| \bigg\}.\end{aligned}$$ Further denote by: - $\mathbf{u}= (\| \mathbf{u}_1 \|_2, \dots, \| \mathbf{u}_n \|_2)$; - $\mathbf{v}= (\| \mathbf{v}_1 \|_2, \dots, \| \mathbf{v}_n \|_2)$; - $\mathbf{r}= \tfrac{1}{\sqrt{K}} (\| \mathbf{r}_1 \|_2, \dots, \| \mathbf{r}_n \|_2)$; - $\delta_i = \max_{1 \le k \le K} |1/d_i^k - 1/\bar{d}|$ for $i \in [n]$, and ${\bm{\delta}}= (\delta_1, \dots, \delta_n)$. Continuing from above, using the Cauchy-Schwarz inequality on the first term and the triangle inequality, we obtain $$\begin{aligned} |{\widehat{T}}^{\mathrm{loo}}_{\lambda} - {\widehat{T}}_{\lambda}| \le L \cdot \left(1 + \frac{\| \mathbf{u}\|_2}{\sqrt{n}} + \frac{\| \mathbf{v}\|_2}{\sqrt{n}}\right) \cdot \frac{\| \mathbf{r}\|_2}{\sqrt{n}} \cdot \| {\bm{\delta}}\|_{\infty} \le C \| {\bm{\delta}}\|_{\infty}.\end{aligned}$$ From a short calculation, it follows that $\tfrac{1}{\sqrt{n}}\| \mathbf{u}\|_2$, $\tfrac{1}{\sqrt{n}} \| \mathbf{v}\|_2$, $\tfrac{1}{\sqrt{n}} \| \mathbf{r}\|_2$ are all eventually almost surely bounded. We will next show that $\| {\bm{\delta}}\|_{\infty} \xrightarrow{\text{a.s.}}0$.
*Sup-norm concentration*: We will show that $$\max_{1 \le i \le n} \left| \frac{1}{K} \sum_{k=1}^{K} \frac{1}{1 - [\mathbf{L}_{\lambda}^k]_{ii}} - \frac{1}{1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}^\mathrm{ens}_{\lambda}]} \right| \xrightarrow{\text{a.s.}}0.$$ From , observe that $$\begin{aligned} \frac{1}{K} \sum_{k=1}^{K} \frac{1}{1 - [\mathbf{L}_{\lambda}^k]_{ii}} &= \frac{1}{K} \sum_{k=1}^{K} \frac{1}{1 - [\tfrac{1}{n} \mathbf{X}\mathbf{S}_k (\tfrac{1}{n} \mathbf{S}_k^\top\mathbf{X}^\top\mathbf{X}\mathbf{S}_k + \lambda \mathbf{I}_q)^{-1} \mathbf{S}_k^\top\mathbf{X}^\top]_{ii}} \relax &\simeq \frac{1}{K} \sum_{k=1}^{K} \frac{1}{1 -[\mathbf{L}_{\mu}^{\mathrm{ridge}}]_{ii}} = \frac{1}{1 -[\mathbf{L}_{\mu}^{\mathrm{ridge}}]_{ii}}.\end{aligned}$$ Similarly, using again, we also have $$\begin{aligned} \frac{1}{1 - \frac{1}{n} {\rm tr}[\mathbf{L}_{\lambda}^{\mathrm{ens}}]} = \frac{1}{1 - \tfrac{1}{n} \tfrac{1}{K} \sum_{k=1}^K {\rm tr}[\mathbf{L}_{\lambda}^k]} &= \frac{1}{1 - \tfrac{1}{n} \tfrac{1}{K} \sum_{k=1}^K {\rm tr}[\tfrac{1}{n} \mathbf{X}\mathbf{S}_k (\tfrac{1}{n} \mathbf{S}_k^\top\mathbf{X}^\top\mathbf{X}\mathbf{S}_k + \lambda \mathbf{I}_q)^{-1} \mathbf{S}_k^\top\mathbf{X}^\top]} \relax &\simeq \frac{1}{1 - \tfrac{1}{n} \tfrac{1}{K} \sum_{k=1}^{K} {\rm tr}[\mathbf{L}_{\mu}^{\mathrm{ridge}}]} = \frac{1}{1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_{\mu}^{\mathrm{ridge}}]}.\end{aligned}$$ Thus, the desired difference satisfies $$\max_{1 \le i \le n} \left| \frac{1}{K} \sum_{k=1}^{K} \frac{1}{1 - [\mathbf{L}_{\lambda}^k]_{ii}} - \frac{1}{1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}^\mathrm{ens}_{\lambda}]} \right| \simeq \max_{1 \le i \le n} \left| \frac{1}{1 - [\mathbf{L}_{\mu}^{\mathrm{ridge}}]_{ii}} - \frac{1}{1 - \tfrac{1}{n} {\rm tr}[\mathbf{L}_{\mu}^\mathrm{ridge}]} \right|.$$ From the proof of Theorem 3 of [@patil2022estimating], the right-hand side of the display above almost surely vanishes. This completes the proof. 
## Proof of {#proof-of-2} This is a simple consequence of the definition of convergence in the Wasserstein $W_2$ metric. See, e.g., Chapter 6 of [@villani2009optimal]. # Proofs in {#app:proofs-sec:tuning} ## Proof of {#proof-of-3 .unnumbered} We begin by noting that when $\lambda = 0$, it suffices to prove the result for i.i.d. Gaussian sketches only. Consider first any sketch $\mathbf{S}= \mathbf{U}\mathbf{D}\mathbf{V}^\top$ where $\mathbf{U}$ is a uniformly distributed unitary matrix and $\mathbf{D}$ is invertible. Then $$\mathbf{S}\big( \mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S} \big)^{-1} \mathbf{S}^\top= \mathbf{U}\big( \mathbf{U}^\top\widehat{{\bm{\Sigma}}}\mathbf{U} \big)^{-1} \mathbf{U}^\top,$$ and so the result does not depend on $\mathbf{D}$ and $\mathbf{V}$, and we can choose them to have the same distribution as in i.i.d. Gaussian sketching. For general free sketching, $\mathbf{S}$ may not have $\mathbf{U}$ uniformly distributed in finite dimensions, but the subordination relations in depend only on the spectrum of $\mathbf{S}\mathbf{S}^\top$, so we can without loss of generality assume that $\mathbf{U}$ is uniformly distributed and obtain the exact same equivalence relationship. The subordination relation $$\mu \simeq\lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top} \big( -\tfrac{1}{p} {\rm tr}\big[ \widehat{{\bm{\Sigma}}}\big( \widehat{{\bm{\Sigma}}}+ \mu \mathbf{I}_p \big)^{-1} \big] \big)$$ for $\mu > 0$ and $\lambda = 0$ requires that the S-transform go to $\infty$. From , we know $$\mathscr S_{\mathbf{S}\mathbf{S}^\top}(w) = \frac{\alpha}{\alpha + w}$$ for i.i.d. sketching, which means that we must send the denominator to 0. Thus we obtain the condition $$\begin{aligned} \alpha = \tfrac{1}{p} {\rm tr}\left[ \widehat{{\bm{\Sigma}}}\big( \widehat{{\bm{\Sigma}}}+ \mu \mathbf{I}_p \big)^{-1} \right].\end{aligned}$$ By letting $K \to \infty$, the variance term vanishes, and only the bias term remains, proving the result. 
# Proofs in {#app:proofs-sec:discussion} ## Proof of {#proof-of-4} We provide below the complete statement of , which includes expressions for $\nu'$ and $\nu''$. These are omitted from the main paper. [\[prop:gcv-primal-fails-appendix\]]{#prop:gcv-primal-fails-appendix label="prop:gcv-primal-fails-appendix"} Under , for $\lambda > \widetilde{\lambda}_0$ and any $K$, $$\begin{gathered} \label{eq:risk-gcv-decomp-primal-proof} R \big( \widetilde{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq R \big( \widetilde{{\bm{\beta}}}_{\nu}^\mathrm{ridge} \big) + \frac{\nu' \widetilde{\Delta}}{K} \quad \text{and} \quad \widetilde{R} \big( \widetilde{{\bm{\beta}}}_{\lambda}^\mathrm{ens} \big) \simeq \widetilde{R} \big( \widetilde{{\bm{\beta}}}_{\nu}^\mathrm{ridge} \big) + \frac{\nu'' \widetilde{\Delta}}{K}, \end{gathered}$$ where $\nu$ is an implicit regularization parameter that solves [\[eq:fp-primal-sketch\]](#eq:fp-primal-sketch){reference-type="eqref" reference="eq:fp-primal-sketch"}, $\widetilde{\Delta} = \tfrac{1}{n} \mathbf{y}^\top(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n)^{-2} \mathbf{y}$, and $\nu' \ge 0$ is an inflation factor in the risk decomposition given by: $$\begin{aligned} &\nu' = \nonumber \relax & - \frac{\partial \nu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{T}\mathbf{T}^\top}'\big(-\tfrac{1}{n}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}{\bm{\Sigma}}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-2}\big], \label{eq:tmu'-proof} \end{aligned}$$ while $\nu'' \geq 0$ is an inflation factor in the GCV decomposition given by: $$\begin{aligned} &\nu''= \nonumber\relax & \frac{\displaystyle -\frac{\partial \nu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{T}\mathbf{T}^\top}'\big(-\tfrac{1}{n}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}) \mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-2}\big]}{\displaystyle \big(1 - \tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n)^{-1}\big]\big)^2}. \label{eq:tmu''-proof} \end{aligned}$$ Furthermore, in general we have $\nu' \not \simeq\nu'',$ and therefore $\widetilde{R}(\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \not \simeq R(\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}).$ *Proof.* The proof for the decomposition in [\[eq:risk-gcv-decomp-primal-proof\]](#eq:risk-gcv-decomp-primal-proof){reference-type="eqref" reference="eq:risk-gcv-decomp-primal-proof"} is similar to the proof of . Our main workhorse will be . *Notation*: We will use the same strategy and notation as in the proof of . We will decompose the unknown response $y_0$ into the linear predictor corresponding to the best linear projection parameter and the residual. Let ${\bm{\beta}}_0$ denote the best projection parameter: ${\bm{\beta}}_0 = {\bm{\Sigma}}^{-1} \mathbb{E}[\mathbf{x}_0 y_0].$ We decompose the response into the best linear predictor $\mathbf{x}_0^\top {\bm{\beta}}_0$ and the residual error $y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0$. We denote $\sigma^2 = \mathbb{E}[(y_0 - \mathbf{x}_0^\top {\bm{\beta}}_0)^2]$. #### Part 1: Risk asymptotics. 
As done in the proof of , we have $$\begin{aligned} R(\widetilde{{\bm{\beta}}}_\lambda^\mathrm{ens}) = (\widetilde{{\bm{\beta}}}_\lambda^\mathrm{ens}- {\bm{\beta}}_0)^\top {\bm{\Sigma}}(\widetilde{{\bm{\beta}}}_\lambda^\mathrm{ens}- {\bm{\beta}}_0) + \sigma^2 \simeq (\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge}- {\bm{\beta}}_0)^\top {\bm{\Sigma}}(\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge}- {\bm{\beta}}_0) + \sigma^2 + \frac{\nu' \widetilde{\Delta}}{K}, \end{aligned}$$ where $\nu'$ is as defined in [\[eq:tmu\'-proof\]](#eq:tmu'-proof){reference-type="eqref" reference="eq:tmu'-proof"}. In the second step, we instead used to obtain the desired equivalence. This completes the proof for the risk asymptotics decomposition. #### Part 2: GCV asymptotics. We will obtain asymptotic equivalents for the numerator and denominator of GCV in [\[eq:def-gcv-primal\]](#eq:def-gcv-primal){reference-type="eqref" reference="eq:def-gcv-primal"} separately below. *Numerator*: Similar to the proof of , we first decompose $$\begin{aligned} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_\lambda^\mathrm{ens}\|_2^2 = \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \|_2^2 + \tfrac{1}{n} \| (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) \|_2^2 + \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}). 
\end{aligned}$$ An application of yields $$\tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \simeq \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}}).$$ Notice that $$\begin{aligned} \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) \|_2^2 = (\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0)^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0) \simeq (\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}} - {\bm{\beta}}_0)^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}} - {\bm{\beta}}_0) + \frac{\nu''_{\mathrm{num}} \widetilde{\Delta}}{K}, \end{aligned}$$ where $\nu''_{\mathrm{num}}$ is expressed as: $$\nu''_{\mathrm{num}} = - \frac{\partial \nu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{T}\mathbf{T}^\top}'\big(- \tfrac{1}{n}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big) \tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}) \mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-2}\big].$$ Using , we get $$\begin{aligned} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} \|_2^2 &\simeq (\widetilde{{\bm{\beta}}}_{\nu}^\mathrm{ridge}- {\bm{\beta}}_0)^\top \tfrac{1}{n} \mathbf{X}^\top \mathbf{X} (\widetilde{{\bm{\beta}}}_{\nu}^\mathrm{ridge}- {\bm{\beta}}_0) + \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}}) \nonumber \relax &\quad + \tfrac{1}{n} \| (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) \|_2^2 + \frac{\nu''_{\mathrm{num}} \widetilde{\Delta}}{K}. 
\label{eq:ensemble-squared-decomp-primal-1} \end{aligned}$$ On the other hand, we also have $$\begin{aligned} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge}\|_2^2 &= \tfrac{1}{n} \| (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0) + \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}}) \|_2^2 \nonumber \relax &= \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}{\bm{\beta}}_0 \|_2^2 + \tfrac{2}{n} (\mathbf{y}- \mathbf{X}{\bm{\beta}}_0)^\top \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}}) + \tfrac{1}{n} \| \mathbf{X}({\bm{\beta}}_0 - \widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}}) \|_2^2. \label{eq:ensemble-squared-decomp-primal-2} \end{aligned}$$ Combining [\[eq:ensemble-squared-decomp-primal-1\]](#eq:ensemble-squared-decomp-primal-1){reference-type="eqref" reference="eq:ensemble-squared-decomp-primal-1"} and [\[eq:ensemble-squared-decomp-primal-2\]](#eq:ensemble-squared-decomp-primal-2){reference-type="eqref" reference="eq:ensemble-squared-decomp-primal-2"}, we deduce that $$\label{eq:gcv-numerator-asympequi-primal} \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_\lambda^\mathrm{ens}\|_2^2 \simeq\tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge}\|_2^2 + \frac{\nu''_{\mathrm{num}} \widetilde{\Delta}}{K}.$$ *Denominator*: We will now derive an asymptotic equivalent for the GCV denominator. 
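The derivation below repeatedly uses the push-through form of the Woodbury identity: for conformable matrices $\mathbf{A}\in \mathbb{R}^{a \times b}$, $\mathbf{B}\in \mathbb{R}^{b \times a}$ and $\lambda > 0$, $$\mathbf{A}(\mathbf{B}\mathbf{A}+ \lambda \mathbf{I}_b)^{-1} = (\mathbf{A}\mathbf{B}+ \lambda \mathbf{I}_a)^{-1} \mathbf{A},$$ which follows by multiplying $(\mathbf{A}\mathbf{B}+ \lambda \mathbf{I}_a) \mathbf{A}= \mathbf{A}(\mathbf{B}\mathbf{A}+ \lambda \mathbf{I}_b)$ on the left and right by the respective inverses.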
By repeated applications of the Woodbury matrix identity along with the first-order equivalence for observation sketch from , we get $$\begin{aligned} \widetilde{\mathbf{L}}_\lambda^\mathrm{ens} &= \frac{1}{K} \sum_{k=1}^{K} \tfrac{1}{n} \mathbf{X} (\tfrac{1}{n} \mathbf{X}^\top\mathbf{T}_k \mathbf{T}_k^\top\mathbf{X}+ \lambda \mathbf{I}_p)^{-1} \mathbf{X}^\top\mathbf{T}_k \mathbf{T}_k^\top\relax &= \frac{1}{K} \sum_{k=1}^{K} \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top \mathbf{T}_k (\tfrac{1}{n} \mathbf{T}_k^\top\mathbf{X}\mathbf{X}^\top\mathbf{T}_k + \lambda \mathbf{I}_m)^{-1} \mathbf{T}_k^\top\relax &\simeq \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n)^{-1} \relax &= \tfrac{1}{n} \mathbf{X} (\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \nu \mathbf{I}_p)^{-1} \mathbf{X}^\top. \end{aligned}$$ In the chain above, we used the Woodbury matrix identity for the equalities in the second and fourth lines, and for the equivalence in the third line. Hence, we get $$\label{eq:gcv-denominator-asympequi-primal} {\rm tr}[\widetilde{\mathbf{L}}^\mathrm{ens}_{\lambda}] \simeq {\rm tr}[\tfrac{1}{n} \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \nu \mathbf{I}_p)^{-1} \mathbf{X}^\top] = {\rm tr}[(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}+ \nu \mathbf{I}_p)^{-1} \tfrac{1}{n} \mathbf{X}^\top \mathbf{X}] = {\rm tr}[\widetilde{\mathbf{L}}^\mathrm{ridge}_{\nu}].$$ Therefore, combining [\[eq:gcv-numerator-asympequi-primal\]](#eq:gcv-numerator-asympequi-primal){reference-type="eqref" reference="eq:gcv-numerator-asympequi-primal"} and [\[eq:gcv-denominator-asympequi-primal\]](#eq:gcv-denominator-asympequi-primal){reference-type="eqref" reference="eq:gcv-denominator-asympequi-primal"}, for the GCV estimator, we obtain $$\begin{aligned} \widetilde{R}(\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}}) = \frac{\displaystyle \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} \|_2^2}{\displaystyle (1 - \tfrac{1}{n} {\rm 
tr}[\widetilde{\mathbf{L}}_{\lambda}^{\mathrm{ens}}])^2} &\simeq \frac{\displaystyle \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}} \|_2^2 + \frac{\nu''_{\mathrm{num}} \widetilde{\Delta}}{K} }{\displaystyle (1 - \tfrac{1}{n} {\rm tr}[\widetilde{\mathbf{L}}_{\lambda}^{\mathrm{ens}}])^2} \relax &\simeq \frac{\displaystyle \tfrac{1}{n} \| \mathbf{y}- \mathbf{X}\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}} \|_2^2 + \frac{\nu''_{\mathrm{num}} \widetilde{\Delta}}{K} }{\displaystyle (1 - \tfrac{1}{n} {\rm tr}[\widetilde{\mathbf{L}}_{\nu}^{\mathrm{ridge}}])^2} \relax &= \widetilde{R}(\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}}) + \frac{\nu'' \widetilde{\Delta}}{K}, \end{aligned}$$ where $\nu''$ is as defined in [\[eq:tmu\'\'-proof\]](#eq:tmu''-proof){reference-type="eqref" reference="eq:tmu''-proof"}. This finishes the proof of the GCV asymptotics decomposition. #### Part 3: GCV inconsistency. The inconsistency follows from the asymptotic mismatch of $\nu'$ and $\nu''$. To show the mismatch, it suffices to show that $$\tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}{\bm{\Sigma}}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-2}\big] \not \simeq \frac{\displaystyle \tfrac{1}{p} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}(\tfrac{1}{n} \mathbf{X}^\top \mathbf{X}) \mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-2}\big]}{\displaystyle \big(1 - \tfrac{1}{n} {\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n)^{-1}\big]\big)^2}.$$ This is shown in . This concludes all three parts and finishes the proof. ◻ ### Helper lemmas for the proof of {#helper-lemmas-for-the-proof-of .unnumbered} [\[lem:quad-risk-ensemble-primal\]]{#lem:quad-risk-ensemble-primal label="lem:quad-risk-ensemble-primal"} Assume the conditions of . 
Let ${\bm{\Psi}}$ be any positive semidefinite matrix with uniformly bounded operator norm, that is independent of $( \mathbf{T}_k )_{k=1}^{K}$. Let ${\bm{\beta}}_0 \in \mathbb{R}^{p}$ be any vector with uniformly bounded Euclidean norm, that is independent of $( \mathbf{T}_k )_{k=1}^{K}$. Consider the ensemble estimator obtained with observation sketch as defined in [\[eq:primal-sketch-estimator\]](#eq:primal-sketch-estimator){reference-type="eqref" reference="eq:primal-sketch-estimator"}. Then, under , for $\lambda > \widetilde{\lambda}_0$ and all $K$, $$(\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widetilde{{\bm{\beta}}}_{\lambda}^{\mathrm{ens}} - {\bm{\beta}}_0) \simeq (\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}} - {\bm{\beta}}_0)^\top {\bm{\Psi}} (\widetilde{{\bm{\beta}}}_{\nu}^{\mathrm{ridge}} - {\bm{\beta}}_0) + \frac{\nu_{{\bm{\Psi}}}' \widetilde{\Delta}}{K},$$ where $\nu$ is as defined in [\[eq:fp-primal-sketch\]](#eq:fp-primal-sketch){reference-type="eqref" reference="eq:fp-primal-sketch"}, $\nu_{{\bm{\Psi}}}'$ is as defined in [\[eq:tmu\'\_mPsi\]](#eq:tmu'_mPsi){reference-type="eqref" reference="eq:tmu'_mPsi"}, $\widetilde{\Delta}$ is as defined in . *Proof.* The proof follows analogously to the proof of , except now we use the first- and second-order equivalences from for observation sketch (instead of feature sketch). We omit the details. 
◻ [\[lem:primal-mu\'-mu\'\'-mismatching\]]{#lem:primal-mu'-mu''-mismatching label="lem:primal-mu'-mu''-mismatching"} Under , $$\begin{aligned} & \tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\bm{\Sigma}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] \nonumber \relax &\not \simeq \frac{ \tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n}\mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\widehat{{{\bm{\Sigma}}}}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] }{ \big(1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2 }. \label{eq:primal-matching-target-proof} \end{aligned}$$ *Proof.* We will first derive asymptotic equivalents for both the left- and right-hand sides of [\[eq:primal-matching-target-proof\]](#eq:primal-matching-target-proof){reference-type="eqref" reference="eq:primal-matching-target-proof"}. Then we will show that the difference in their asymptotic equivalents is non-zero. #### Asymptotic equivalent for left-hand side. 
Using the Woodbury matrix identity, we can write $$\begin{aligned} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\bm{\Sigma}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] &= {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\bm{\Sigma}}] \relax &= {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\bm{\Sigma}} (\mathbf{I}_p - \nu ({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1})] \relax &= {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\bm{\Sigma}}] - \nu {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\bm{\Sigma}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}].\end{aligned}$$ #### Asymptotic equivalent for right-hand side. Similarly, the numerator of GCV can be expressed as $$\begin{aligned} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\widehat{{{\bm{\Sigma}}}}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] &= {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}] \relax &= {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}] - \nu {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}].\end{aligned}$$ From , we have that $$\nu {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\bm{\Sigma}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}] \simeq \frac{\displaystyle \nu {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]}{\displaystyle \big(1 - \tfrac{1}{n} {\rm 
tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2}.$$ #### Mismatching of asymptotic equivalents. Observe that $$\begin{aligned} & \tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\bm{\Sigma}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] \relax &- \frac{\tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\widehat{{{\bm{\Sigma}}}}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] }{ \big(1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2 } \relax &\simeq \tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\bm{\Sigma}}] - \frac{\tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}]} { \big(1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2 } \relax &\simeq \frac{1}{ 1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]} - 1 - \frac{\tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}]} { \big(1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2 } \relax &= - \left( \frac {\tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}]} {1 - \tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}]} \right)^2,\end{aligned}$$ where the last equality follows by combining the first two terms over a common denominator and simplifying. The last line is in general not equal to $0$, proving the desired asymptotic mismatch. This finishes the proof. 
◻ ## Correction using ensemble trick for GCV with observation sketch {#sec:primal-correction-ensemble-trick} Below we outline a method that corrects GCV for the sketched ensemble estimator with observation sketch. The idea of the method is to estimate the error term in the mismatch in using a combination of the ensemble trick and our second-order sketched equivalences. The correction takes a complicated form involving both the unsketched and sketched data. We are not aware of any method that uses only sketched data. 1. Estimate $\nu$ from the data and sketch using the subordination relation [\[eq:fp-primal-sketch\]](#eq:fp-primal-sketch){reference-type="eqref" reference="eq:fp-primal-sketch"}. 2. Estimate the following two quantities that appear in the inflation of the GCV decomposition: $$\widetilde{\Delta} = \tfrac{1}{n} \mathbf{y}^\top (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-2} \mathbf{y} \text{ and } C_1 = \frac{\tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\widehat{{{\bm{\Sigma}}}}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] }{ \big(1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2 }.$$ 3. Use the ensemble trick as explained in on $\widetilde{R}(\widetilde{{\bm{\beta}}}_\lambda^\mathrm{ens}) \simeq \widetilde{R}(\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge}) + \tfrac{\nu'' \widetilde{\Delta}}{K}$ with $K = 1$ and $K = 2$ to estimate $\widetilde{R}(\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge})$ first and then estimate the following component: $$C = \nu'' \widetilde{\Delta} = -\frac{\partial \nu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{T}\mathbf{T}^\top}'\big(- \tfrac{1}{n}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big) C_1 \widetilde{\Delta}.$$ 4. 
Eliminate $\widetilde{\Delta}$ from $C$ to get an estimate for the following component: $$C_2 = -\frac{\partial \nu}{\partial \lambda} \lambda^2 \mathscr S_{\mathbf{T}\mathbf{T}^\top}'\big(- \tfrac{1}{n}{\rm tr}\big[\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top\big( \tfrac{1}{n} \mathbf{X}\mathbf{X}^\top+ \nu \mathbf{I}_n \big)^{-1}\big]\big).$$ 5. Then use the following equivalence to estimate: $$\begin{aligned} C_1' &= \tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\bm{\Sigma}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] \relax &\simeq \frac{ \tfrac{1}{n} {\rm tr}\big[(\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1} (\tfrac{1}{n} \mathbf{X}{\widehat{{{\bm{\Sigma}}}}}\mathbf{X}^\top) (\tfrac{1}{n} \mathbf{X}\mathbf{X}^\top + \nu \mathbf{I}_n)^{-1}\big] }{ \big(1 - \tfrac{1}{n} {\rm tr}[{\widehat{{{\bm{\Sigma}}}}}({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1}]\big)^2 } \relax &\qquad - \left( \frac {\tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}]} {1 - \tfrac{1}{n} {\rm tr}[({\widehat{{{\bm{\Sigma}}}}}+ \nu \mathbf{I}_p)^{-1} {\widehat{{{\bm{\Sigma}}}}}]} \right)^2. \end{aligned}$$ 6. Finally, obtain the corrected estimate for risk using the GCV asymptotics decomposition from : $$\widetilde{R}(\widetilde{{\bm{\beta}}}_\nu^\mathrm{ridge}) + \frac{C_2 C_1' \widetilde{\Delta}}{K}.$$ ## Anisotropic sketching and generalized ridge regression {#sec:generalized-ridge} Using structural equivalences, one can extend our results to anisotropic sketching matrices and generalized ridge regression. Specifically, let $\mathbf{R}\in \mathbb{R}^{p \times p}$ be an invertible positive semidefinite matrix with bounded operator norm. 
Consider generalized ridge regression with anisotropic sketching matrices $\widetilde{\mathbf{S}}_k = \mathbf{R}^{1/2} \mathbf{S}_k$ for $k \in [K]$: $$\label{eq:beta-hat-gridge-appendix} \widehat{{\bm{\beta}}}_\lambda^k =\widetilde{\mathbf{S}}_k \widehat{{\bm{\beta}}}^{\widetilde{\mathbf{S}}_k}_\lambda, \quad \text{where} \quad \widehat{{\bm{\beta}}}^{\widetilde{\mathbf{S}}_k}_\lambda =\mathop{\mathrm{arg\,min}}_{{\bm{\beta}}\in \mathbb R^{q}} \tfrac{1}{n} \big\Vert \mathbf{y}- \mathbf{X}\widetilde{\mathbf{S}}_k {\bm{\beta}}\big\Vert_2^2 + \lambda \big\Vert \mathbf{G}^{1/2} {\bm{\beta}}\big\Vert_2^2,$$ where $\mathbf{G}\in \mathbb{R}^{q \times q}$ is a positive definite matrix with bounded operator norm. Let $\widehat{{\bm{\beta}}}^\mathrm{ens}_\lambda$ be the ensemble estimator defined analogously as in [\[eq:ensemble-estimator\]](#eq:ensemble-estimator){reference-type="eqref" reference="eq:ensemble-estimator"} and GCV defined analogously as in [\[eq:def-gcv\]](#eq:def-gcv){reference-type="eqref" reference="eq:def-gcv"}. Using Corollary 7.1 of [@lejeune2022asymptotics], all of our results carry over to this case in a straightforward manner. # Experimental details {#sec:experimental-details} All experiments were run in less than 1 hour on a MacBook Air (M1, 2020) and coded in Python using standard scientific computing packages. CountSketch [@charikar2002frequent] is implemented by generating a sparse matrix corresponding to the hash function, and due to rounding of the size parameters to match theoretical rates, we cannot choose arbitrary sketch sizes and are often restricted to non-standard sequences. 
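As an illustrative sketch of this construction (our minimal example, not necessarily matching the repository implementation), a CountSketch matrix can be generated as a sparse matrix encoding a random hash function $h : [p] \to [q]$ and random signs:

```python
import numpy as np
from scipy import sparse

def countsketch(p, q, rng):
    """Return a p x q CountSketch matrix as a sparse CSR matrix.

    Row i has a single nonzero entry, a random sign, in column h(i)
    for a uniformly random hash h: [p] -> [q], so applying the sketch
    to a vector costs O(p) time.
    """
    rows = np.arange(p)
    cols = rng.integers(0, q, size=p)        # hash function h
    signs = rng.choice([-1.0, 1.0], size=p)  # random signs
    return sparse.csr_matrix((signs, (rows, cols)), shape=(p, q))

rng = np.random.default_rng(0)
S = countsketch(1000, 200, rng)
x = rng.standard_normal(1000)
z = S.T @ x  # sketched vector; E[||z||_2^2] = ||x||_2^2
```

Each row of $\mathbf{S}$ has unit norm, so $\mathbf{S}\mathbf{S}^\top$ has unit diagonal and no further normalization is needed for $\mathbf{S}\mathbf{S}^\top\simeq \mathbf{I}_p$.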
Instead of the SRHT, for which a fast, platform-independent implementation of the Walsh--Hadamard transform is not readily available in Python (and which also suffers statistically from zero-padding issues as described by [@lejeune2022asymptotics]), we use a subsampled randomized discrete cosine transform (SRDCT), which is fast, widely available, and does not suffer from these statistical drawbacks. All sketches are normalized such that $\mathbf{S}\mathbf{S}^\top\simeq\mathbf{I}_p$ and therefore $\tfrac{1}{p}\ensuremath{{\|\mathbf{S}^\top\mathbf{x}\|}_{2}}^2 \simeq \tfrac{1}{p}\ensuremath{{\left\|\mathbf{x}\right\|}_{2}}^2$. We refer the reader to our code repository for implementation details: <https://github.com/dlej/sketched-ridge>. ## GCV paths in {#sec:experimental-details-fig:gcv_paths} For this experiment, over 100 trials, we sampled $\mathbf{X}$ with each row $\mathbf{x}\sim {\mathcal N}({\bf 0}, \mathbf{I}_p)$ and generated $y = \mathbf{x}^\top{\bm{\beta}}+ \xi$ for ${\bm{\beta}}\sim {\mathcal N}({\bf 0}, \tfrac{1}{p} \mathbf{I}_p)$ and independent noise $\xi \sim {\mathcal N}(0, 1)$. We have $n = 500$ and $p = 600$, and our sketching ensembles have $K = 5$. For the left plot, we fix $q = 441$, which is an allowed sketch size for CountSketch. For the right plot, we fix $\lambda = 0.2$ and sweep through the sketch sizes allowed by CountSketch, namely $q \in \left\{63, 126, 189, 252, 315, 378, 441, 504, 567\right\}$. ## Real data in {#sec:experimental-details-fig:real-data} For both real datasets, we fit our sketched ridge regressors on centered sketched data and responses and then added the mean of the training responses to any outputs. For both datasets, we sketched using CountSketch, which is among the most computationally efficient sketches, especially for sparse data as in RCV1. We plot risk on a `symlog` scale of $\lambda$ with linear region from $-10$ to $10$. 
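For reference, the SRDCT described earlier in this section can be sketched as follows (our minimal illustration using `scipy.fft.dct`; the repository implementation may differ in details): random sign flips, an orthonormal DCT, then coordinate subsampling with rescaling.

```python
import numpy as np
from scipy.fft import dct

def srdct(X, q, rng):
    """Apply an SRDCT sketch to the feature dimension of X (n x p).

    Equivalent to X S for an implicit p x q sketching matrix S:
    random sign flips, an orthonormal type-II DCT, then uniform
    subsampling of q coordinates, rescaled by sqrt(p / q) so that
    squared norms are preserved in expectation.
    """
    p = X.shape[1]
    signs = rng.choice([-1.0, 1.0], size=p)
    Y = dct(X * signs, axis=1, norm="ortho")  # orthonormal transform
    keep = rng.choice(p, size=q, replace=False)
    return Y[:, keep] * np.sqrt(p / q)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 64))
# With q = p, the sketch is orthogonal and row norms are preserved exactly.
Z = srdct(X, 64, rng)
```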
For RCV1 [@lewis2004rcv1], we downloaded the data from `scikit-learn` [@sklearn_api]. We discarded all labels except for `GCAT` and `CCAT` and then discarded all examples that did not uniquely fall into one of these categories. These became our binary class labels. We then randomly subsampled 20000 training points and 5000 test points, and discarded any features that took value 0 for all train and test points. This left 30617 features, and we used $q=515$ for CountSketch. We normalized each data vector $\mathbf{x}$ such that $\ensuremath{{\left\|\mathbf{x}\right\|}_{2}} = \sqrt{p}$, preserving sparsity. We then fit ensembles of size $K=5$, reporting error over 10 random trials. For RNA-Seq [@weinstein2013tcga], we downloaded the data from the UCI Machine Learning repository [@uci_repo] at: <https://archive.ics.uci.edu/ml/datasets/gene+expression+cancer+RNA-Seq>. We discarded all examples that were labeled neither `BRCA` nor `KIRC`, the most common classes, leaving 446 observations, which were split into a training set of 356 and test set of 90. We then z-scored each of the 20223 features using the training data statistics. We fit ensembles of size $K=5$, reporting error over 10 random trials. ## Prediction intervals in {#sec:experimental-details-fig:confidence-intervals} For this experiment, we use SRDCT sketches. 
For each choice of $$q \in \left\{80, 180, 280, 380, 480, 580, 680, 780, 880, 980\right\},$$ we generated data over 30 trials in a similar manner to the experiment in : for $n = 1500$ and $p=1000$ we sampled $\mathbf{X}$ with each row $\mathbf{x}\sim {\mathcal N}({\bf 0}, \mathbf{I}_p)$, but we generated $y = g(\mathbf{x}^\top{\bm{\beta}})$ for ${\bm{\beta}}\sim {\mathcal N}({\bf 0}, \tfrac{1}{p} \mathbf{I}_p)$ and $g$ the soft-thresholding operator: $$\begin{aligned} g(u) = \begin{cases} u - 1 & \text{if } u > 1 \relax 0 & \text{if } -1 \leq u \leq 1 \relax u + 1 & \text{if } u < -1 \end{cases}.\end{aligned}$$ We compute the 95% and 99% prediction intervals by identifying the 2.5% and 0.5% tail intervals of the GCV-corrected residuals $(y - z) \colon (y, z) \sim {\widehat{P}}_\lambda^\mathrm{ens}$ and evaluate coverage on $1500$ test residuals $y_0 - \mathbf{x}_0^\top\widehat{{\bm{\beta}}}_\lambda^\mathrm{ens}$. We plot 2D histograms of $P_\lambda^\mathrm{ens}$ (empirical using test points) and ${\widehat{P}}_\lambda^\mathrm{ens}$ (using training points) on a logarithmic color scale. ## Details for {#sec:experimental-details-fig:tuning-applications} For this experiment, we use SRDCT sketches. For $n = 600$ and $p = 800$, for each trial, we generate Gaussian data with ${\bm{\Sigma}}= \mathrm{diag}(\mathbf{a})$, where $a_i = 2 / (1 + 30 t_i)$ and $t_i$ are $p$ linearly spaced values from $0$ to $1$. We generate $y = \mathbf{x}^\top{\bm{\beta}}+ \xi$ for ${\bm{\beta}}_{1:80} \sim {\mathcal N}({\bf 0}, \tfrac{1}{80} \mathbf{I}_{80})$ and ${\bm{\beta}}_{81:} = {\bf 0}$ and $\xi \sim {\mathcal N}(0, 4)$. 
We evaluate the mapping $\mu \mapsto \lambda$ for feature sketching by inverting the subordination relation $$\mu = \lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top} \big( -\tfrac{1}{p} {\rm tr}\big[ \mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}\big( \mathbf{S}^\top\widehat{{\bm{\Sigma}}}\mathbf{S}+ \lambda \mathbf{I}_q \big)^{-1} \big] \big)$$ for a single random generation of data and sketch. For observation sketching, we do the same but use the relation $$\mu = \lambda \mathscr S_{\mathbf{S}\mathbf{S}^\top} \big( -\tfrac{1}{n} {\rm tr}\big[ \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\mathbf{X}\big( \tfrac{1}{n} \mathbf{X}^\top\mathbf{T}\mathbf{T}^\top\mathbf{X}+ \lambda \mathbf{I}_p \big)^{-1} \big] \big).$$ We evaluate the mapping $\mu \mapsto \alpha$ using the same method as in feature sketching, except we take $q$ as 20 values logarithmically spaced between $1$ and $800$, rounded down to the nearest integer. For computing the curves where we vary $\alpha$, we pre-sketch using $q=p$ and then subsample and normalize to obtain the sketched data for each desired $q$.

## Verification of convergence rate for sketched ensembles {#sec:rate_plot}

We demonstrate that both GCV and risk for sketched ensembles converge at rate $1/K$ to the equivalent ridge for sketched ensembles in . For $n = 140$ and $p = 200$, for a single trial, we generate Gaussian data with ${\bm{\Sigma}}= \mathbf{I}_p$ and $y = \mathbf{x}^\top{\bm{\beta}}+ \xi$ for ${\bm{\beta}}\sim {\mathcal N}({\bf 0}, \tfrac{1}{p} \mathbf{I}_p)$ and $\xi \sim {\mathcal N}(0, 1)$. We fit $1000$ sketched predictors for $\lambda = 0.1$ for each sketch using $q=156$, and then successively build ensembles of size $K$ by taking the first $K$ predictors. We then subtract the risk of the unsketched predictor at the equivalent $\mu$, determined numerically to be $0.283$ for Gaussian sketching, $0.157$ for orthogonal sketching, $0.281$ for CountSketch, and $0.157$ for SRDCT.
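Numerically, inverting each subordination relation above reduces to one-dimensional root-finding in $\lambda$. The sketch below uses plain bisection, with a toy monotone map standing in for the trace functional (in the actual experiment, the map would be evaluated on the sampled data and sketch):

```python
def invert_monotone(f, target, lo, hi, iters=200):
    """Solve f(x) = target for an increasing f on [lo, hi] by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in for the map lambda -> mu (strictly increasing in lambda).
def mu_of_lambda(lam):
    return lam * (1.0 + 1.0 / (1.0 + lam))

# Recover the lambda corresponding to a target mu.
lam_star = invert_monotone(mu_of_lambda, target=1.0, lo=0.0, hi=10.0)
```

Any monotone root-finder (e.g. Brent's method) would serve equally well; bisection is shown only because it is dependency-free.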
![ **Both GCV and risk converge at rate $1/K$ to the equivalent ridge for sketched ensembles.** See for the setup details. ](figures/ensemble_convergence.pdf){#fig:rate-plot-risk-gcv width="99%"} [^1]: Department of Statistics, University of California, Berkeley, CA 94720, USA. [^2]: Department of Statistics, Stanford University, Stanford, CA 94305, USA. [^3]: Note that this includes the two sketches most commonly studied analytically: those with i.i.d. Gaussian entries, and random orthogonal projections. [^4]: The original theorem in [@lejeune2022asymptotics] was given for complex $\lambda$ and $\mu$, but the stated version follows by analytic continuation to the real line.
{ "id": "2310.04357", "title": "Asymptotically free sketched ridge ensembles: Risks, cross-validation,\n and tuning", "authors": "Pratik Patil, Daniel LeJeune", "categories": "math.ST stat.ML stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this work, we show that, for any simply-connected elliptic space $S$ admitting a pure minimal Sullivan model with a differential of constant length, we have ${\rm{TC}\hskip 1pt}_0(S)\leq 2{\rm{cat}\hskip 1pt}_0(S)+\chi_{\pi}(S)$ where $\chi_{\pi}(S)$ is the homotopy characteristic. This is a consequence of a structure theorem for this type of models, which is actually our main result. address: - My Ismail University of Meknès, Department of Mathematics, B. P. 11 201 Zitoune, Meknès, Morocco. - Centro de Matemática, Universidade do Minho, Campus de Gualtar, 4710-057 Braga, Portugal. author: - Said Hamoun - Youssef Rami - Lucile Vandembroucq title: An upper bound for the rational topological complexity of a family of elliptic spaces --- # Introduction Let $S$ be a path-connected topological space. In his work [@FM], M. Farber introduced the notion of topological complexity of $S$ denoted by ${\rm{TC}\hskip 1pt}(S)$. This is a homotopy invariant defined as the least integer $m$ for which the map $ev_{0,1}: S^{[0,1]}\rightarrow S\times S$, $\lambda \rightarrow (\lambda (0),\lambda (1))$ admits $m+1$ local continuous sections $s_i: U_i \rightarrow S ^{[0,1]}$ where $\{U_i\}_{i=0,\cdots,m}$ is a family of open subsets covering $S\times S$. If $S$ is a simply-connected space of finite type and $S_{\!{0}}$ is its rationalization, then the rational topological complexity of $S$, denoted and defined by ${\rm{TC}\hskip 1pt}_0(S):={\rm{TC}\hskip 1pt}(S_{\!{0}})$, provides a lower bound for ${\rm{TC}\hskip 1pt}(S)$. Through rational homotopy techniques, ${\rm{TC}\hskip 1pt}_0$ can be expressed in terms of Sullivan models ([@C], [@CKV]) in the same spirit as ${\rm{cat}\hskip 1pt}_0$, the rational Lusternik--Schnirelmann category, was characterized by Félix and Halperin [@FH]. Recall that a Sullivan model of $S$ (model for short) is a commutative differential graded algebra $(\Lambda V,d)$ which contains all the information on the rational homotopy type of $S$. 
In particular, $H^*(S;\mathbb{Q})=H^*(\Lambda V,d)$ and if the model is minimal, that is, $dV\subset \Lambda^{\geq 2} V$, then we have $V\cong \pi_*(S)\otimes \mathbb{Q}$. The standard reference is [@FHT]. When there exists an integer $l\geq 2$ such that $dV\subset \Lambda^{l} V$, we say that $d$ is of constant length $l$. In particular, when $l=2$, $(\Lambda V,d)$ is said to be *coformal*. In this article, we establish the following result, which is an improvement of our Theorem B in [@HRV]. **Theorem 1**. *Let $S$ be an elliptic space admitting a pure minimal Sullivan model $(\Lambda V,d)$ where $d$ is of constant length. Then $${\rm{TC}\hskip 1pt}_0(S)\leq 2{\rm{cat}\hskip 1pt}_0(S) +\chi_{\pi}(S)$$ where $\chi_{\pi}(S)$ denotes the homotopy characteristic of $S$.* Recall that $S$ (or equivalently its minimal model $(\Lambda V,d)$) is *elliptic* if both $\dim \pi_*(S)\otimes \mathbb{Q}=\dim V$ and $\dim H^*(S; \mathbb{Q})= \dim H^*(\Lambda V,d)$ are finite. The model $(\Lambda V,d)$ is said to be *pure* if $dV^{even}=0$ and $dV^{odd}\subset \Lambda V^{even}$. We also recall that the homotopy characteristic of $S$ is $\chi_{\pi}(S)=\dim \pi_{even}(S)\otimes \mathbb{Q}-\dim \pi_{odd}(S)\otimes \mathbb{Q}=\dim V^{even}-\dim V^{odd}$. When $S$ is elliptic, we always have $\chi_{\pi}(S)\leq 0$, that is, $\dim V^{odd}\geq \dim V^{even}$. Moreover, if $\chi_{\pi}(\Lambda V):=\chi_{\pi}(S)=0$, then the elliptic model $(\Lambda V,d)$ is said to be an $F_0$-model. Given a pure elliptic model $(\Lambda V,d)$, we will call a relative Sullivan model of the form $(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$, where $Z$ is a graded subspace of $V$ and the pure model $(\Lambda Z,d)$ is an $F_0$-model, an *$F_0$-basis extension*. Such an $F_0$-basis extension need not exist; see for instance Example [Example 2](#ex){reference-type="ref" reference="ex"} below.
In [@HRV Theorem B], we obtained the same upper bound as in Theorem [Theorem 1](#th1.1.2){reference-type="ref" reference="th1.1.2"} assuming that the differential $d$ has constant length and, in addition, that there exists an $F_0$-basis extension $(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$ such that $Z^{even}=V^{even}$. Here, we will see that this latter additional hypothesis can be relaxed. This will follow from the following structure theorem which, in comparison to [@H Lemma 8], may be of independent interest. **Theorem 2**. *Let $(\Lambda V,d)$ be a pure elliptic minimal model where $d$ is a differential of constant length. Then there exists an $F_0$-basis extension $$(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$$ where $Z^{even}=V^{even}$.* Note that this means that $(\Lambda V,d)$ is the model of the total space of a fibration over an $F_0$-space with fibre a product of odd-dimensional spheres. We prove Theorem [Theorem 2](#th1.2.2){reference-type="ref" reference="th1.2.2"} in Section [2](#S22){reference-type="ref" reference="S22"} and derive its applications to rational topological complexity in Section [3](#S32){reference-type="ref" reference="S32"}.

# Structure theorem {#S22}

In the sequel, we assume that $S$ is a simply-connected CW-complex of finite type admitting a pure minimal Sullivan model $(\Lambda V,d)$. We suppose that $\dim V$ is finite and use the notations $X=V^{even}$ and $Y=V^{odd}$. If $\mathcal{B}=\{x_1,\cdots,x_n\}$ is a basis of $X$, then $(\Lambda V,d)$ is elliptic if and only if for any $x_i \in \mathcal{B}$ there exists $N_i \in \mathbb{N}$ such that $[x_i^{N_i}]=0$ in $H^*(\Lambda V,d)$. It is then easy to see that, given a surjective morphism $\varphi :(\Lambda V,d)\rightarrow (\Lambda W,d)$, if $(\Lambda V,d)$ is pure, minimal and elliptic, then so is $(\Lambda W,d)$. Let $\alpha_1, \cdots,\alpha_p$ be a family of elements in $\Lambda^+X$.
The family $\alpha_1, \cdots,\alpha_p$ is said to be a *regular sequence* in $\Lambda^+X$ if it satisfies the following two conditions:

- $\alpha_1$ is not a zero divisor in $\Lambda^+X$;

- for all $i=2,\cdots,p$, $\alpha_i$ is not a zero divisor in $\Lambda^+X/( \alpha_1,\cdots , \alpha_{i-1} )$, where $( \alpha_1,\cdots , \alpha_{i-1} )$ is the ideal of $\Lambda^+X$ generated by $\alpha_1,\cdots,\alpha_{i-1}$.

Note that, since we are considering $X=V^{even}$, the first condition is automatically satisfied as soon as $\alpha_1\neq 0$. We recall the following result due to Halperin. **Theorem 1**. *([@H Lemma 8], see also [@F Prop 5.4.5]) [\[th2.1.2\]]{#th2.1.2 label="th2.1.2"} Let $(\Lambda V,d)= (\Lambda (X\oplus Y),d)$ be a pure elliptic Sullivan model. There exists a basis (not necessarily homogeneous) $u_1, \cdots, u_m$ of $Y$ such that $du_1,\cdots, du_n$ is a regular sequence in $\Lambda X$ with $n=\dim X$.* Recall that a pure model $(\Lambda Z,d)$ such that $\dim Z<\infty$ and $\chi_{\pi}(\Lambda Z)=0$ is an $F_0$-model if and only if there exists a (homogeneous) basis $z_1,\dots, z_n$ of $Z^{odd}$ such that $dz_1,\dots, dz_n$ is a regular sequence in $\Lambda Z^{even}$ ([@FHT Prop. 32.10]). Given a pure elliptic model $(\Lambda V,d)$, the obvious intuition coming from Theorem [\[th2.1.2\]](#th2.1.2){reference-type="ref" reference="th2.1.2"} to obtain an $F_0$-basis extension $(\Lambda Z,d)\hookrightarrow (\Lambda V,d)$ with $Z^{even}=V^{even}$ would be to consider $(\Lambda Z,d)=(\Lambda(x_1,\cdots, x_n,u_1,\cdots,u_n),d).$ Unfortunately, since the elements $u_1,\cdots,u_n$ are not necessarily homogeneous, this does not in general produce a well-defined graded differential algebra. We point out that, in the result above, Halperin used commutative algebra arguments which do not take into account the homogeneity of the elements with respect to the degree.
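As a toy illustration of this definition (our own example, not taken from the paper): in $\Lambda X=\mathbb{Q}[x_1,x_2]$ the pair $(x_1^2,\,x_2^3)$ is regular, whereas $(x_1^2,\,x_1x_2)$ is not, since $x_1\cdot(x_1x_2)=x_2\cdot x_1^2\in(x_1^2)$ while $x_1\notin(x_1^2)$, so $x_1x_2$ is a zero divisor in $\Lambda^+X/(x_1^2)$. The witnessing identity can be checked with exact polynomial arithmetic:

```python
# Polynomials in Q[x1, x2] encoded as {(i, j): coeff} for the monomial x1^i x2^j.
def pmul(f, g):
    """Exact product of two polynomials in this sparse encoding."""
    out = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            key = (i + k, j + l)
            out[key] = out.get(key, 0) + a * b
    return {m: c for m, c in out.items() if c != 0}

x1 = {(1, 0): 1}
x2 = {(0, 1): 1}
alpha1 = pmul(x1, x1)  # x1^2, the first element of the sequence
beta = pmul(x1, x2)    # x1*x2, the candidate second element
# Witness: x1 * beta equals x2 * alpha1, hence lies in the ideal (alpha1).
lhs = pmul(x1, beta)
rhs = pmul(x2, alpha1)
```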
To remove any ambiguity about this point, we consider the following example, taken from [@AJ]. **Example 2**. *Let $(\Lambda V,d)=(\Lambda (X\oplus Y),d)=(\Lambda (x_1,x_2,y_1,y_2,y_3 ),d)$ where $|x_1|=6$, $|x_2|=8$, $dy_1=x_1(x_1^4+x_2^3)$, $dy_2=x_2(x_1^4+x_2^3)$ and $dy_3=x_1^3x_2^2$. We will see that there is no $F_0$-basis extension $(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$ with $Z^{even}=X=\langle x_1, x_2 \rangle=\mathbb{Q}x_1\oplus \mathbb{Q} x_2$. We note that $|y_1|=29, |y_2|=31$ and $|y_3|=33$. If there were such an extension, then $Z^{odd}$ should be a graded subspace of $Y$. This means that we should be able to find a (homogeneous) basis $\{u_1,u_2,u_3\}$ of $Y$ such that $Z^{odd}=\langle u_1,u_2 \rangle$. For degree reasons we can suppose that, up to a scalar, $u_1\in \{y_1,y_2,y_3\}$ and $u_2\in \{y_1,y_2,y_3\}\setminus \{u_1\}$. Since $(\Lambda Z,d)$ is an $F_0$-model, $\{du_1,du_2\}$ must be a regular sequence in $\Lambda(x_1,x_2)$. If $u_1=y_1$ then $du_1=x_1(x_1^4+x_2^3)$ is clearly not a zero divisor in $\Lambda(x_1,x_2)$.\
As shown in the following table, which considers the possible values of $u_2$, we can see that $du_2$ is always a zero divisor in the quotient $\Lambda(x_1,x_2)/( du_1)$.*

  *$u_2$*                           *$y_2$*         *$y_3$*
  --------------------------------- --------------- -------------------------
  *In $\Lambda(x_1,x_2)/( du_1)$*   *$x_1dy_2=0$*   *$(x_1^4+x_2^3)dy_3=0$*

*We can then conclude that there is no regular sequence $\{du_1,du_2\}$ where $u_1=y_1$. Similarly, we can verify that if either $u_1=y_2$ or $u_1=y_3$ then we cannot find $u_2$ such that $\{du_1,du_2\}$ is a regular sequence in $\Lambda(x_1,x_2)$. Therefore there is no $F_0$-basis extension $(\Lambda Z,d)\hookrightarrow (\Lambda V,d)$ with $Z^{even}=V^{even}$ and any basis $\{u_1,u_2,u_3\}$ provided by Theorem [\[th2.1.2\]](#th2.1.2){reference-type="ref" reference="th2.1.2"} is necessarily non-homogeneous.
For instance, we can check that $\{u_1=y_3, u_2=y_1+y_2, u_3=y_1\}$ is a basis of $Y$ such that $\{du_1, du_2\}$ is a regular sequence in $\Lambda(x_1,x_2)$ and the element $u_2$ is not homogeneous since $|y_1|\neq|y_2|$.* Note that the differential in the example above has non-constant length. In this work, we consider $(\Lambda V,d)$ a pure elliptic model and, as stated in Theorem [Theorem 2](#th1.2.2){reference-type="ref" reference="th1.2.2"}, we will prove that there exists an $F_0$-basis extension $(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$ with $Z^{even}=V^{even}$ whenever $d$ is of constant length. In other words, our result ensures the existence of a homogeneous basis in Theorem [\[th2.1.2\]](#th2.1.2){reference-type="ref" reference="th2.1.2"} provided that $d$ is of constant length. We first fix some notation and prove a special case which will be crucial in the proof of the general case. Suppose that ${\mathcal B}=\{x_1, \cdots, x_n\}$ is a basis of $X$ satisfying $|x_1|\leq \cdots \leq |x_n|$ and that $\{y_1, \cdots , y_m\}$ is a basis of $Y$. Let $X_1:= \langle x_k: |x_k|= |x_1| \rangle$ be the vector subspace of $X$ generated by the elements $x_k$ for which $|x_k|=|x_1|$. Similarly, let $Y_1:= \langle y_k: |y_k| \leq l |x_1| -1 \rangle$ be the vector subspace of $Y$ generated by the elements $y_k$ satisfying $|y_k| \leq l|x_1|-1$. Notice that if $y_k\in Y_1$ and $dy_k\neq 0$, then $|y_k|=l|x_1| -1$. For $V_1= X_1\oplus Y_1$ we have $dY_1 \subset \Lambda X_1$ and $(\Lambda V_1 ,d)$ is a pure commutative differential graded algebra, called hereafter the first stage of $(\Lambda V,d)$. **Lemma 3**. *Let $(\Lambda V,d)$ be a pure elliptic model where $d$ is a differential of constant length $l$ and let $(\Lambda V_1,d)$ be the first stage of $(\Lambda V,d)$.
Then*

- *$(\Lambda V_1,d)$ is pure elliptic.*

- *There exists an $F_0$-basis extension $(\Lambda E,d) \hookrightarrow (\Lambda V_1,d)$ with $E^{even}=V_1^{even}$.*

*Proof.* (i) Let $x_k \in {\mathcal B}$ such that $|x_k|=|x_1|$. Since $(\Lambda V,d)$ is an elliptic model, there exists $N_k \in \mathbb{N}\setminus\{0\}$ satisfying $x_k^{N_k}=dP_k$ for some $P_k \in \Lambda V$. Furthermore, since $(\Lambda V,d)$ is pure and $d$ is of constant length $l$, $P_k$ can be written as $\sum \limits _j m_j\cdot y_j$ where $m_j \in \Lambda ^{N_k-l} X$ and $dy_j\neq 0$ for each $j$. As $|x_1|$ is the lowest degree, we have $|m_j| \geq (N_k-l)|x_1|$ and $|d y_j | \geq l|x_1|$. Since, on the one hand, $|dP_k|=N_k|x_k|=N_k |x_1|$ and, on the other hand, $|dP_k|=|m_jdy_j|$ for any $j$, we must have $|m_j|= (N_k -l)|x_1|$ and $|dy_j|=l |x_1|$. Therefore $m_j\in \Lambda X_1$ and $y_j \in Y_1$ for any $j$, hence $P_k \in \Lambda V_1$. This shows that $[x_k^{N_k}]=0$ in $H^*(\Lambda V_1,d)$ and consequently $(\Lambda V_1,d)$ is elliptic.\
(ii) We consider $$\begin{cases} R= \langle y_k \in Y_1: dy_k\neq 0 \rangle, \\ T= \langle y_k \in Y_1: dy_k= 0\rangle. \end{cases}$$ We clearly see that

- $(\Lambda X_1 \otimes \Lambda R,d)$ is pure,

- the elements of $R$ are of the same degree.

Moreover, $(\Lambda V_1,d)= (\Lambda(X_1\oplus R),d)\otimes (\Lambda T,0)$. From $(i)$ we know that $(\Lambda V_1,d)$ is elliptic, hence so is $(\Lambda (X_1\oplus R),d)$. By Theorem [\[th2.1.2\]](#th2.1.2){reference-type="ref" reference="th2.1.2"} applied to $(\Lambda (X_1 \oplus R),d)$, there exists a basis $u_1, \cdots , u_p, \cdots , u_q$ of $R$ (a priori not necessarily homogeneous) such that $du_1, \cdots , du_p$ is a regular sequence in $\Lambda X_1$ and $p=\dim X_1$. Since all the elements of $R$ have the same degree, we can assert that this basis is necessarily homogeneous.
In other words, we can decompose $R$ as $$R=R_1\oplus R_2$$ where $R_1$ and $R_2$ are two vector subspaces of $R$ such that $\dim R_1=\dim X_1$ and $(\Lambda(X_1\oplus R_1),d)$ is an $F_0$-model. Setting $E=X_1\oplus R_1$ we obtain an $F_0$-basis extension $(\Lambda E,d)\hookrightarrow (\Lambda V_1,d)$ with $E^{even}=V_1^{even}$. ◻ **Remark 4**. *If $(\Lambda V,d)$ is an elliptic pure minimal model with $V^{even}$ concentrated in a single degree, then Jessup [@J Lemma 3.3] proved that there always exists an $F_0$-basis extension $(\Lambda Z,d)\hookrightarrow (\Lambda V,d)$ with $V^{even}=Z^{even}$. Our Lemma [Lemma 3](#lm){reference-type="ref" reference="lm"} above recovers this result in the particular case where $d$ is a differential of constant length.* We are now ready to prove our structure theorem, namely Theorem [Theorem 2](#th1.2.2){reference-type="ref" reference="th1.2.2"} from the introduction. *Proof of Theorem [Theorem 2](#th1.2.2){reference-type="ref" reference="th1.2.2"}.* We proceed by induction on $n=\dim V^{even}$. For $n=1$, the result is obvious. We suppose, by induction, that for any pure elliptic model $(\Lambda V,d)$ with $\dim V^{even} \leq n-1$ and $d$ a differential of constant length $l$, there exists an $F_0$-basis extension $$(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$$ satisfying $Z^{even}= V^{even}$. Let $(\Lambda V,d)=(\Lambda(x_1,\cdots, x_{n}, y_1,\cdots, y_m),d)$ be a pure elliptic model with $d$ a differential of constant length $l$ and $\dim V^{even}=n$. By Lemma [Lemma 3](#lm){reference-type="ref" reference="lm"} there exists an extension $$(\Lambda E,d) \hookrightarrow (\Lambda V,d)$$ where $(\Lambda E,d)$ is an $F_0$-model and $\dim E>0$. Here, without loss of generality, we may suppose that $(\Lambda E,d)$ has the form $(\Lambda E,d)=(\Lambda (x_1,\cdots, x_p,y_1,\cdots, y_p),d)$ where $p\geq 1$.
We now consider the following fibration $$(\Lambda E,d) \rightarrow(\Lambda V,d)\rightarrow (\Lambda W,\bar{d}):=(\Lambda (x_{p+1},\cdots, x_n,y_{p+1},\cdots, y_m), \bar{d}).$$ As $(\Lambda V,d)\rightarrow (\Lambda W, \bar{d})$ is a surjective morphism and $(\Lambda V,d)$ is a pure elliptic minimal model with differential of constant length $l$, so is $(\Lambda W, \bar{d})$. Since $\dim W^{even}<n$, we next use the induction hypothesis on $(\Lambda W, \bar{d})$ to ensure the existence of an $F_0$-basis extension $$(\Lambda (x_{p+1},\cdots, x_n,u_{p+1},\cdots,u_{n}), \bar{d}) \hookrightarrow (\Lambda W, \bar{d})=(\Lambda(x_{p+1},\cdots, x_n,y_{p+1},\cdots, y_{m}), \bar{d})$$ where $\langle u_{p+1},\cdots,u_{n}\rangle$, the vector space generated by $u_{p+1},\cdots,u_{n}$, is a graded subspace of $\langle y_{p+1},\cdots,y_{m}\rangle$. Let $U=\langle y_1,\cdots, y_p,u_{p+1}, \cdots, u_n \rangle \subset Y$ be the vector subspace of $Y$ generated by $\{y_1,\cdots, y_p,u_{p+1}, \cdots, u_n \}$ and let $(\Lambda Z,d):=(\Lambda(X\oplus U),d)\subset (\Lambda V,d)$. It is clear that we have an extension $(\Lambda Z,d)\hookrightarrow (\Lambda V,d)$ where $Z^{even}=V^{even}$ and $\chi_{\pi}(\Lambda Z)=0$. In order to prove that this is an $F_0$-basis extension, it remains to show that $(\Lambda Z,d)$ is elliptic. Since $(\Lambda E,d)$ is an elliptic subalgebra of $(\Lambda Z,d)$, we already know that, for $1\leq i\leq p$, there exist $M_i\in \mathbb{N}$ and $\xi_i\in \Lambda Z$ such that $d\xi_i=x_i^{M_i}$. We will now see that the same is true for any $i\in \{p+1,\cdots, n\}$. Let us fix $i\in\{ p+1,\cdots, n\}$.
It follows from the ellipticity and pureness of $$(\Lambda (x_{p+1},\cdots, x_n,u_{p+1},\cdots,u _{n}), \bar{d})$$ that there exists an integer $N_i \in \mathbb{N}$ satisfying $$\bar{d}(v_i)=x_i^{N_i} ,\quad \text{ for some } \quad v_i \in \Lambda (x_{p+1},\cdots, x_n)\otimes \Lambda ^1(u_{p+1},\cdots,u _{n}).$$ As $\Lambda (x_{p+1},\cdots, x_n)\otimes \Lambda ^1(u_{p+1},\cdots,u _{n})\subset \Lambda X\otimes \Lambda^1Y$, we may look at $v_i$ as an element of $\Lambda X\otimes \Lambda^1 Y$ so that we have in the algebra $(\Lambda V,d)$ $$\label{gamma} dv_i=x_i^{N_i}+\gamma_i\quad \text{ where } \gamma_i \in \Lambda^+(x_1,\cdots, x_p) \otimes \Lambda (x_{p+1},\cdots, x_n).$$ In what follows we express the element $\gamma_i$ from ([\[gamma\]](#gamma){reference-type="ref" reference="gamma"}) as an element of $$\Lambda ^+(x_1,\cdots,x_p)\otimes \Lambda(x_i)\otimes \Lambda(x_{p+1}, \cdots,\hat{x}_i,\cdots ,x_n).$$ As usual the notation " $\hat{}$ " means that the corresponding component is omitted. Explicitly we write $$\gamma_i= \sum \limits_{(K,k)} \alpha^k _Kx_i ^k\cdot x ^K_{\langle \mathbf{p};i\rangle}$$ where $K=(k_{p+1},\cdots,{k}_{i-1}, {k}_{i+1},\cdots ,k_n) \in \mathbb{N}^{n-p-1}$, $k\geq 0$, $$x ^K_{\langle \mathbf{p};i \rangle}= x_{p+1}^{k_{p+1}} \cdot x_{p+2}^{k_{p+2}} \cdots {x} _{i-1}^{k_{i-1}}\cdot {x} _{i+1}^{k_{i+1}}\cdots x_{n}^{k_{n}}$$ and $\alpha ^k_K \in \Lambda^{\geq 1}(x_1,\cdots, x_p)$ is the coefficient of the monomial $x_i ^k\cdot x ^K_{\langle \mathbf{p};i\rangle}$. In the notation $x ^K_{\langle \mathbf{p};i \rangle}$, the subscript $\langle \mathbf{p};i\rangle$ means that the factors $x_1,\dots,x_p$ and $x_i$ are omitted. 
Formula ([\[gamma\]](#gamma){reference-type="ref" reference="gamma"}) can then be written as follows: $$dv_i=x_i^{N_i}+\sum \limits_{(K,k)} \alpha^k _Kx_i ^{k}\cdot x ^K_{\langle \mathbf{p} ;i\rangle}.$$ Note that, for degree reasons, there are only a finite number of pairs $(K,k)$ for which $\alpha_K^k\neq 0.$ For any integer $m_i\in \mathbb{N}$, we then have $$d(x_i^{m_i}v_i)=x_i^{N_i+m_i}+\sum \limits_{(K,k)} \alpha^k _{K}x_i ^{k+m_i}\cdot x ^K_{\langle \mathbf{p}; i \rangle}.$$ From this calculation, we will use the following iterative process. In the first step, we consider the elements $\alpha^k_Kx_i ^{k+m_i}\cdot x ^K_{\langle \mathbf{p};i \rangle}$. Assuming that $m_i$ is sufficiently large (here $m_i\geq N_i$), we have $$d(\alpha^k _Kx_i ^{k+m_i-N_i}\cdot x ^K_{\langle \mathbf{p}; i \rangle} v_i)= \alpha^k _Kx_i ^{k+m_i}\cdot x ^K_{\langle \mathbf{p};i\rangle} + \sum \limits_{(K',k')} \alpha^k _K\alpha^{k'} _{K'}x_i ^{k+m_i-N_i+k'}\cdot x ^{K+K'}_{\langle \mathbf{p};i \rangle}$$ where as before $K'\in \mathbb{N}^{n-p-1}$ and $K+K'$ is the usual component by component sum. Therefore $$d(x_i^{m_i}v_i-\sum_{(K,k)}\alpha^k _{K}x_i ^{k+m_i-N_i}\cdot x ^K_{\langle \mathbf{p};i \rangle} v_i)=x_i^{N_i+m_i}-\sum \limits_{(K,k)}\sum \limits_{(K',k')} \alpha^k _K\alpha^{k'} _{K'}x_i ^{k+m_i-N_i+k'}\cdot x ^{K+K'}_{\langle \mathbf{p} ;i\rangle}.$$ Remark that in this first step, we have $\alpha^k _{K}\alpha^{k'} _{K'}\in \Lambda ^{\geq 2}(x_1,\cdots,x_p)$. As a second step we consider the elements $\alpha^k _{K}\alpha^{k'} _{K'}x_i ^{k+m_i-N_i+k'}\cdot x ^{K+K'}_{\langle \mathbf{p};i \rangle}$. 
Again, assuming that $m_i$ is sufficiently large (which is possible because there exist only a finite number of relevant sequences $K,K'$), we can do the following second iteration: $$\begin{aligned} d\left(\alpha^k _{K}\alpha^{k'} _{K'}x_i ^{k+m_i-2N_i+k'}\cdot x ^{K+K'}_{\langle \mathbf{p} ;i\rangle}v_i\right)&=& \alpha^k _{K}\alpha^{k'} _{K'}x_i ^{k+m_i-N_i+k'}\cdot x ^{K+K'}_{\langle \mathbf{p};i \rangle}v_i\\ &+& \sum \limits_{(K'',k'')} \alpha^k _{K}\alpha^{k'} _{K'}\alpha^{k''} _{K''}x_i ^{k+m_i-2N_i+k'+k''}\cdot x ^{K+K'+K''}_{\langle \mathbf{p};i \rangle}. \end{aligned}$$ We thus have $$\begin{gathered} \!\!\!\!d\left(x_i^{m_i}v_i-\!\sum\limits _{(K,k)}\!\alpha^k _{K}x_i ^{k+m_i-N_i}\cdot x ^K_{\langle \mathbf{p};i \rangle} v_i + \!\sum \limits_{(K,k)}\!\sum \limits_{(K',k')}\!\alpha^k _{K}\alpha^{k'} _{K'}x_i ^{k+m_i-2N_i+k'}\cdot x ^{K+K'}_{\langle \mathbf{p};i \rangle}v_i\right)= x_i^{N_i+m_i}\\ +\sum \limits_{(K,k)}\sum \limits_{(K',k')}\sum \limits_{(K'',k'')} \alpha^k _{K}\alpha^{k'} _{K'}\alpha^{k''} _{K''}x_i ^{k+m_i-2N_i+k'+k''}\cdot x ^{K+K'+K''}_{\langle \mathbf{p};i \rangle}. \end{gathered}$$ Now, in this second iteration we have $\alpha^k _{K}\alpha^{k'} _{K'}\alpha^{k''} _{K''} \in \Lambda ^{\geq 3}(x_1,\cdots, x_p)$ and with $m_i$ sufficiently large we can reiterate the same process as many times as we want. After $s$ iterations, we can reformulate the obtained expression as $$d\left(x_i^{m_i}v_i+\sum _{(J,j)} \tilde{\alpha}_{J}^j x_i^j\cdot x_{\langle \mathbf{p};i \rangle}^Jv_i \right) = x_i^{N_i+m_i}+\sum_{(H,h)} \tilde{\beta}_H^h x_i^h\cdot x_{\langle \mathbf{p};i\rangle}^H \label{eq}$$ where $J,H\in \mathbb{N}^{n-p-1}$, $j,h\geq 0$, $\tilde{\alpha}_J^j\in \Lambda(x_1,\cdots, x_p)$, and $\tilde{\beta}_H^h\in \Lambda^{>s}(x_{1},\cdots, x_p)$. Since $(\Lambda E,d)=(\Lambda(x_1,\cdots, x_p,y_1,\cdots,y_p),d)$ is elliptic, we choose $s\geq f$ where $f$ is its formal dimension. 
We then have $[\tilde{\beta}_H^h]=0$ in $H(\Lambda E,d)$, that is, $\tilde{\beta}_H^h=db_H^h$ with $b_H^h\in\Lambda E$. Consequently the equation ([\[eq\]](#eq){reference-type="ref" reference="eq"}) implies $$x_i^{N_i+m_i}=d\xi_i$$ where $$\xi_i=x_i^{m_i}v_i+\sum _{(J,j)} \tilde{\alpha}_J^j x_i^j\cdot x_{\langle \mathbf{p};i \rangle}^Jv_i-\sum_{(H,h)} b_H^h x_i^h\cdot x_{\langle \mathbf{p};i \rangle}^H\in \Lambda(x_1,\cdots\!, x_n,y_1,\cdots\!, y_p, u_{p+1},\cdots\!,u _{n}).$$ As we can do the process above for any $i\in\{p+1,\dots, n\}$, we conclude that, for any $i\in\{p+1,\dots, n\}$, there exist $M_i \in \mathbb{N}$ and $\xi_i\in \Lambda Z$ such that $x_i^{M_i}=d\xi_i$, which completes the proof. ◻

# Application to rational topological complexity {#S32}

We now use Theorem [Theorem 2](#th1.2.2){reference-type="ref" reference="th1.2.2"} to obtain an upper bound for the rational topological complexity of certain elliptic spaces. We will use the notation ${\rm{TC}\hskip 1pt}(\Lambda V)$ instead of ${\rm{TC}\hskip 1pt}_0(S)$ and ${\rm{cat}\hskip 1pt}(\Lambda V)$ instead of ${\rm{cat}\hskip 1pt}_0(S)$, where $(\Lambda V,d)$ is a minimal Sullivan model of $S$. With these notations, Theorem [Theorem 1](#th1.1.2){reference-type="ref" reference="th1.1.2"} from the introduction can be written as **Theorem 5** (Theorem [Theorem 1](#th1.1.2){reference-type="ref" reference="th1.1.2"}). *Let $(\Lambda V,d)$ be a pure elliptic model with $d$ a differential of constant length. Then $${\rm{TC}\hskip 1pt}(\Lambda V) \leq 2{\rm{cat}\hskip 1pt}(\Lambda V)+\chi_{\pi}(\Lambda V).$$* *Proof.* This follows from Theorem [Theorem 2](#th1.2.2){reference-type="ref" reference="th1.2.2"} and [@HRV Th. 4.2] (which is the version in terms of models of [@HRV Th. B]). ◻ In particular, if $(\Lambda V,d)$ is coformal, then we have the following corollary: **Corollary 6**. *Let $(\Lambda V,d)$ be a pure elliptic coformal minimal model.
Then $${\rm{TC}\hskip 1pt}(\Lambda V) \leq \dim V.$$* *Proof.* The result follows directly from the previous theorem and the well-known fact, due to Félix and Halperin, that the rational LS-category of an elliptic coformal minimal model $(\Lambda V,d)$ satisfies ${\rm{cat}\hskip 1pt}(\Lambda V)=\dim V^{odd}$ [@FH]. ◻ We may extend the result obtained above to a particular case of non-pure elliptic minimal models. More precisely: **Theorem 7**. *Let $(\Lambda V,d)$ be an elliptic minimal model with $d$ a differential of constant length. If there exists an extension $(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$ where $(\Lambda Z,d)$ is a pure elliptic algebra satisfying $Z^{even}=V^{even}$, then $${\rm{TC}\hskip 1pt}( \Lambda V) \leq 2{\rm{cat}\hskip 1pt}(\Lambda V)+\chi_{\pi}(\Lambda V).$$* *Proof.* Under the conditions of the theorem, we can suppose that $\Lambda V=\Lambda (Z\oplus U)$ where $U$ is a vector subspace of $V^{odd}$. Then, by [@HRV Cor. 3.4], we have $$\begin{aligned} {\rm{TC}\hskip 1pt}( \Lambda V)&=& {\rm{TC}\hskip 1pt}( \Lambda (Z\oplus U))\\ &\leq & {\rm{TC}\hskip 1pt}( \Lambda Z) + \dim U. \end{aligned}$$ As $(\Lambda Z,d)$ is an elliptic pure minimal model with differential $d$ of constant length, Theorem [Theorem 5](#th2){reference-type="ref" reference="th2"} yields ${\rm{TC}\hskip 1pt}(\Lambda Z)\leq 2{\rm{cat}\hskip 1pt}(\Lambda Z)+\chi_{\pi}(\Lambda Z)$ and consequently $$\begin{aligned} {\rm{TC}\hskip 1pt}( \Lambda V)&\leq& 2{\rm{cat}\hskip 1pt}(\Lambda Z)+\chi_{\pi}(\Lambda Z)+ \dim U. \end{aligned}$$ On the other hand, from the Lechuga-Murillo formula established in [@LM] (see also [@L]), we have ${\rm{cat}\hskip 1pt}(\Lambda Z)=\dim Z^{odd}+\dim Z^{even}(l-2)$ where $l$ is the length of the differential.
As $\chi_{\pi}(\Lambda Z)= \dim Z^{even}-\dim Z^{odd}$, we then have: $$\begin{aligned} {\rm{TC}\hskip 1pt}( \Lambda V)&\leq& 2{\rm{cat}\hskip 1pt}(\Lambda Z)+\chi_{\pi}(\Lambda Z)+ \dim U\\ &\leq& 2(\dim Z^{odd}+\dim Z^{even}(l-2)) +\dim Z^{even}-\dim Z^{odd} + \dim U\\ &\leq& 2(\dim U+\dim Z^{odd}+\dim Z^{even}(l-2)) +\dim Z^{even}-\dim Z^{odd}\\ & +& \dim U - 2\dim U. \end{aligned}$$ Moreover, we have $\dim V^{odd}=\dim Z^{odd}+\dim U$ and $\dim Z^{even}=\dim V^{even}$. We hence have: $$\begin{aligned} {\rm{TC}\hskip 1pt}( \Lambda V)&\leq& 2(\dim V^{odd}+ \dim V^{even}(l-2)) + \dim V^{even}-\dim Z^{odd}-\dim U\\ &\leq& 2(\dim V^{odd}+ \dim V^{even}(l-2)) + \dim V^{even}-\dim V^{odd}. \end{aligned}$$ As we have ${\rm{cat}\hskip 1pt}(\Lambda V)=\dim V^{odd}+ \dim V^{even}(l-2)$ and $\chi_{\pi}( \Lambda V)=\dim V^{even}-\dim V^{odd},$ we finally obtain ${\rm{TC}\hskip 1pt}( \Lambda V) \leq 2{\rm{cat}\hskip 1pt}(\Lambda V)+\chi_{\pi}(\Lambda V).$ ◻ In particular, if $(\Lambda V,d)$ is an elliptic coformal model, we have the following corollary: **Corollary 8**. *Let $(\Lambda V,d)$ be an elliptic coformal minimal model such that there exists an extension $(\Lambda Z,d) \hookrightarrow (\Lambda V,d)$ where $(\Lambda Z,d)$ is a pure elliptic algebra satisfying $Z^{even}= V^{even}$. Then $${\rm{TC}\hskip 1pt}( \Lambda V) \leq \dim V.$$* *Proof.* In this case, we have ${\rm{cat}\hskip 1pt}(\Lambda V) =\dim V^{odd}$ (see [@FH]) and by the previous theorem $$\begin{aligned} {\rm{TC}\hskip 1pt}( \Lambda V)&\leq & 2{\rm{cat}\hskip 1pt}(\Lambda V)+\chi_{\pi} (\Lambda V)\\ &\leq & 2\dim V^{odd}+ \dim V^{even}- \dim V^{odd}\\ &\leq & \dim V. \end{aligned}$$ ◻

# Acknowledgements {#acknowledgements .unnumbered}

This work has been partially supported by Portuguese Funds through FCT -- Fundação para a Ciência e a Tecnologia, within the projects UIDB/00013/2020 and UIDP/00013/2020.
S.H. would like to thank the Moroccan center CNRST (Centre National pour la Recherche Scientifique et Technique) for providing him with a research scholarship, grant number 7UMI2020.
{ "id": "2310.00449", "title": "An upper bound for the rational topological complexity of a family of\n elliptic spaces", "authors": "Said Hamoun, Youssef Rami, Lucile Vandembroucq", "categories": "math.AT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Motivated by modern-day applications such as Attended Home Delivery and Preference-based Group Scheduling, where decision makers wish to steer a large number of customers toward choosing the exact same alternative, we introduce a novel class of assortment optimization problems, referred to as *Maximum Load Assortment Optimization*. In such settings, given a universe of substitutable products, we are facing a stream of customers, each choosing between either selecting a product out of an offered assortment or opting to leave without making a selection. Assuming that these decisions are governed by the Multinomial Logit choice model, we define the random *load* of any underlying product as the total number of customers who select it. Our objective is to offer an assortment of products to each customer so that the expected maximum load across all products is maximized. We consider both static and dynamic formulations of the maximum load assortment optimization problem. In the static setting, a single offer set is carried throughout the entire process of customer arrivals, whereas in the dynamic setting, the decision maker offers a personalized assortment to each customer, based on the entire information available at that time. As can only be expected, both formulations present a wide range of computational challenges and analytical questions. The main contribution of this paper resides in proposing efficient algorithmic approaches for computing near-optimal static and dynamic assortment policies. In particular, we develop a polynomial-time approximation scheme (PTAS) for the static problem formulation. Additionally, we demonstrate that an elegant policy utilizing weight-ordered assortments yields a $1/2$-approximation. Concurrently, we prove that such policies are sufficiently strong to provide a $1/4$-approximation with respect to the dynamic formulation, establishing a constant-factor bound on its adaptivity gap. 
Finally, we design an adaptive policy whose expected maximum load is within factor $1-\epsilon$ of optimal, admitting a quasi-polynomial time implementation. author: - "Omar El Housni[^1]" - Marouane Ibn Brahim - "Danny Segev[^2]" bibliography: - BIB-Max-Load.bib title: | Maximum Load Assortment Optimization:\ Approximation Algorithms and Adaptivity Gaps --- # Introduction Assortment optimization forms one of the most fundamental problems in revenue management, arising in a wide spectrum of application domains such as retailing and online advertising. At a high level, in such settings, the decision maker wishes to decide on a subset of products, picked out of a given universe, that will be offered to arriving customers in order to optimize a certain objective function. At least traditionally, each customer either chooses a single product from the offered assortment or decides to leave without making any purchase, with choice probabilities that are captured by a *discrete choice model*. The vast majority of assortment optimization models are guided by having either revenue maximization or sales maximization as their objective function. In the former case, each underlying product is associated with a fixed selling price, and the goal is to identify an assortment that maximizes the expected revenue due to a single representative customer, where the price of each product within this assortment is weighted by its corresponding choice probability. Problems of this form arise, for instance, when an online retailer displays a subset of products from a large universe in order to maximize the expected revenue. On the other hand, in sales maximization, our goal is to determine an assortment that maximizes the expected market share, given by the probability that a customer would purchase a product from the offered set. 
For instance, publishers such as Google Ads or Microsoft Ads may wish to select a subset of online ads to display, aiming to maximize the probability that customers will click on one of these ads. For a comprehensive overview of classical assortment optimization problems and their applications, we refer the reader to related surveys and books [@kok2009assortment; @phillips2021pricing; @gallego2019revenue]. #### Informal model description. In this paper, we introduce and study a new class of assortment optimization problems where, informally speaking, our goal is to identify, either statically or adaptively, assortments that would steer a large number of customers towards choosing the exact same product. Deferring the formal model formulation to be discussed in Section [2](#sec:form){reference-type="ref" reference="sec:form"}, given a universe of substitutable products, we are facing a stream of customers, each choosing between either selecting a product out of an offered assortment or opting to leave without making a selection. Assuming that these decisions are governed by the Multinomial Logit (MNL) choice model, we define the random *load* of any underlying product as the total number of customers who select it along the arrival sequence. This way, the number of customers who choose the most selected product corresponds to the maximum load across all products. Our objective is to offer an assortment of products to each customer so that the expected maximum load across all products is maximized. We refer to problem formulations along these lines as *Maximum Load Assortment Optimization*. 
Specifically, we consider both the static formulation ([\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}), where a single offer set should be kept unchanged for the entire sequence of customer arrivals, and the dynamic setting ([\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}), in which the decision maker offers a personalized assortment to each customer, based on the entire information available at that time, taking into account the choices of all previously-arriving customers. As we proceed to show next, the above-mentioned objective function is motivated by real-life applications in e-commerce such as Attended Home Delivery (AHD), where customers have the option to select their delivery slot, as well as in scheduling platforms, where users select a common time slot to meet. #### Attended Home Delivery. Online supermarket chains such as WholeFoods, FreshDirect, and AmazonFresh provide customers with various delivery time slots to choose from, based on their individual preferences. Similarly, e-retailers such as Amazon, Wayfair, and Walmart allow their customers to select an appealing delivery time among the available options. The dominant business model in the grocery delivery sector is known as Attended Home Delivery (AHD) [@manerba2018attended]. This model entails the customer's presence during the delivery process, necessitating an agreement on a specific time slot between the e-grocer and the customer. To optimize delivery costs, e-commerce platforms are interested in *packing* as many customers as possible from the same geographical area into the same time slot. In their survey on this topic, @wassmuth2023demand [Sec. 2.2] highlight the importance of managing customer demand: "*Demand management aims to manage the resulting trade-offs between captured demand (revenue) and assembly and delivery efficiency (costs)*". In the context of AHD, there are two primary strategies for managing customer demand: offering and pricing. 
In the offering strategy, decision makers determine which delivery time slots will be presented to customers and which slots will be hidden [@casazza2016optimizing; @truden2022computational; @van2022machine; @wassmuth2023demand]. On the other hand, in the pricing strategy, prices are assigned to each delivery slot in order to influence customers' choices [@campbell2006incentive; @mackert2019choice]. In this paper, we focus on exploiting the offering strategy to steer customers towards selecting the same time slot. To this end, while booking their delivery times, the platform can guide customers by strategically determining the assortment of time slots to offer. This objective seamlessly aligns with our framework, where each delivery time slot can be viewed as a product, meaning that the "load" of each product represents the number of customers who select its corresponding time slot. Consequently, our aim is to determine an assortment of time slots that maximizes the expected maximum load across all available time slots. #### Preference-based Group Scheduling. When scheduling a group meeting, the overarching goal is to identify the most suitable time slot from a given set of options, i.e., one that accommodates the maximum number of attendees. Platforms such as Doodle and When2meet often rely on users selecting a preferred time slot from the available choices. However, the individual choice made by each user is influenced by the available options, due to substitution effects. Therefore, to maximize the likelihood of users selecting the same time slot, decision makers can carefully curate the assortment of offered time slots. This approach is known as preference-based group scheduling [@brzozowski2006grouptime; @berry2007preference]. That said, to our knowledge, previous studies have not approached such questions from the perspective of assortment optimization, nor have they utilized discrete choice models to deal with customer preferences.
Our framework effectively captures this scenario, by viewing each time slot as a product, again implying that the load of each product represents the number of users who select that time slot. Thus, our objective would be to determine an assortment of time slots that maximizes the expected maximum load among all available options. ## Fundamental challenges As readers would quickly find out by examining our model formulations, whether one considers static or adaptive settings, coming up with efficient algorithmic approaches that can be rigorously analyzed appears to be a very challenging goal. To better understand where some hurdles are emerging from, we should bear in mind the conceptual trade-off between offering an extensive set of products versus a more focused set, due to two competing effects. On one hand, providing a wide array of products grants customers more choices, reducing their likelihood to leave the market without making a selection, and potentially increasing the maximum load. On the other hand, offering too many products may disperse customer demand across all available choices. As our objective is to guide customers towards selecting the same product, this dispersion can potentially diminish the maximum load. Let us proceed by briefly highlighting some fundamental challenges in addressing both problem formulations. In the static setting, the first and foremost challenge revolves around the highly non-linear nature of the objective function. Unlike revenue or sales maximization, we are considering a novel objective function, appearing to be very different from classical settings in the assortment optimization literature. Among other missing pieces, we are not aware of any integer programming formulations or linear relaxations for the problem in question. In fact, even computing the expected maximum load for a given static assortment is very much unclear at first glance. 
In the dynamic setting, the state space of every conceivable dynamic program describing this problem is exponential in size. Therefore, by directly solving natural dynamic programs, we would not end up with efficient algorithmic approaches. Moreover, as we explain in the sequel, the Bellman equations associated with such dynamic programs include optimizing over the seemingly-unstructured collection of all relevant assortments, which generally poses a complex challenge by itself. On top of these obstacles, we will discuss additional challenges in subsequent sections, as soon as they can be better digested. ## Main contributions The primary contribution of this paper resides in developing a unified optimization framework with provably near-optimal performance guarantees for both formulations of the maximum load assortment optimization problem. In the static setting, we first present a polynomial-time evaluation oracle to compute the expected maximum load of a given assortment. Then, by uncovering well-hidden structural properties of the objective function, we provide an elegant constant-factor approximation for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}. As our main result for the static setting, we present a polynomial-time approximation scheme (PTAS). For the dynamic formulation, by developing novel coupling arguments in this context, we first establish a constant-factor bound on its adaptivity gap. Moreover, we devise a $(1-\epsilon)$-approximate adaptive policy that can be computed in quasi-polynomial time. We proceed by providing refined details on our main contributions. #### Static maximum load assortment optimization. - *Polynomial-time evaluation oracle for the expected maximum load.* The first challenge in the static formulation consists of the seemingly-simple question of evaluating the objective function of [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"} for a given assortment, i.e., computing its expected maximum load. 
In fact, even though the latter admits a closed-form expression, it requires summing over exponentially-many terms that arise from the Multinomial distribution. Our first contribution is to design a polynomial-time evaluation oracle for computing the expected maximum load of a given static assortment. Our algorithm, whose specifics are given in Section [3.1](#compute){reference-type="ref" reference="compute"}, builds on the work of [@frey2009algorithm] who designed polynomial-time procedures to evaluate *rectangular probabilities* for the Multinomial distribution. In essence, we show that the expected maximum load function can be computed through polynomially-many external calls to evaluate rectangular probabilities. - *$1/2$-approximation via preference-weight-ordered assortments.* Prior to presenting our main result regarding [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}, we propose in Section [3.3](#subsec:halfapprox){reference-type="ref" reference="subsec:halfapprox"} an elegant and easy-to-implement way to obtain a $1/2$-approximation, utilizing *preference-weight-ordered assortments*. In a nutshell, such assortments prioritize products with higher preference weights. Specifically, when a product is included in a preference-weight-ordered assortment, all products with higher preference weights are included as well. Interestingly, we prove that there exists a preference-weight-ordered assortment whose expected maximum load is within a factor $1/2$ of the optimum. Our policy then examines all such assortments, of which there are only linearly-many, picking the best via our previously-mentioned evaluation oracle for their expected maximum load. As a side note, avid readers may be familiar with the notion of "revenue-ordered" assortments, which has been explored and exploited in early literature. Most notably, in revenue maximization under the MNL model, optimal assortments are known to be revenue-ordered [@talluri2004revenue]. 
That said, beyond the natural resemblance through a certain parametric order, the analysis of preference-weight-ordered assortments turns out to be entirely different and requires new analytical ideas. - *Polynomial-time approximation scheme.* Our main technical contribution with respect to the static formulation resides in developing a polynomial-time approximation scheme (PTAS) for this setting, whose specifics are provided in Section [3.4](#subsec:PTAS){reference-type="ref" reference="subsec:PTAS"}. Namely, for any fixed $\epsilon>0$, our algorithm constructs in polynomial time an assortment whose expected maximum load is within factor $1-\epsilon$ of the optimum. To derive this result, we prove the existence of a polynomially-sized family of highly-structured assortments via efficient enumeration ideas. We refer to these assortments as being block-based, showing that at least one such assortment yields a $(1-\epsilon)$-approximation. Finally, using our polynomial-time evaluation oracle, we enumerate over all block-based assortments, and pick the best one. We should note that, despite our best efforts in studying the computational complexity of [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}, we still do not know whether this problem is NP-hard or not. This difficulty mainly arises due to the nature of our objective function, as we are unaware of NP-hard problems with similar structure that would serve as candidates for potential reductions. Hence, attaining complexity lower bounds that will match our algorithmic guarantees remains an intriguing open question, further discussed in Section [6](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}. #### Dynamic maximum load assortment optimization. - *Adaptivity gap.* Our first line of investigation examines questions related to the adaptivity gap of the maximum load assortment optimization problem. 
In this context, the adaptivity gap is defined as the maximal ratio between the objective values of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} and [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"} over all possible instances. This measure quantifies the value of introducing adaptivity, capturing the improvement gained by employing a dynamic policy instead of a static one. In Section [4](#sec:adaptivitygap){reference-type="ref" reference="sec:adaptivitygap"}, we prove the existence of a static policy, utilizing preference-weight-ordered assortments, whose expected maximum load is within factor $1/4$ of the adaptive optimum, implying that the adaptivity gap is surprisingly bounded by $4$. This result immediately translates to a polynomial-time $1/4$-approximation for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. Moreover, when all products have the same preference weight, we improve the adaptivity gap to $2$. In the opposite direction, we present a family of instances demonstrating that the adaptivity gap of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} is at least $4/3$. - *$(1-\epsilon)$-approximate adaptive policy in quasi-polynomial time.* Our cornerstone technical contribution in relation to [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} resides in devising a quasi-polynomial time adaptive policy whose expected maximum load is within factor $1-\epsilon$ of the optimum. This policy, whose finer details are discussed in Section [5](#sec:dynamic){reference-type="ref" reference="sec:dynamic"}, builds upon two key ideas. Firstly, rather than attempting to solve an exponentially-sized natural dynamic program, we demonstrate that its state space can be shrunk to a quasi-polynomial scale while only sacrificing an $O(\epsilon)$-factor in optimality.
More specifically, we observe that once a sufficiently large load is attained, the expected marginal gain from offering any further assortments becomes negligible in comparison to the already-attained maximum load. Consequently, we can effectively terminate the arrival process (i.e., offer the empty assortment from this point on), which significantly reduces the state space size. Secondly, to compute an optimal action for the resulting recursive equations, an assortment-like optimization problem needs to be solved at each stage. We argue that this problem can be reformulated as an unconstrained revenue maximization question under the Multinomial Logit model, which can indeed be solved in polynomial time. ## Related literature In what follows, we discuss three lines of research that are directly relevant to our work. Firstly, we discuss the Multinomial Logit model, which stands as one of the most widespread choice models in both theoretical and practical domains. Secondly, we provide a concise overview of the assortment optimization literature, emphasizing how our model fits within this body of research. Lastly, we discuss several classic balls and bins problems, highlighting the relevant connections and similarities between these problems and our own setting. #### The Multinomial Logit model. The Multinomial Logit model (MNL) is widely regarded as the predominant choice model employed by the revenue management community to capture customer behavior when selecting from a given assortment. This model was initially introduced by [@luce1959individual], with subsequent works by [@mcfadden1973conditional] and [@hausman1984specification] further refining its specification. Informally, the MNL model assigns a preference weight to each product. Then, each product is chosen with probability proportional to its preference weight, thereby capturing the substitution effect that occurs between various alternatives within any given assortment.
The model's simplicity in calculating choice probabilities, its predictive power, and computational tractability have all contributed to its widespread adoption and extensive study in various domains. Some of these directions are evidenced by research works such as those of [@mahajan2001stocking], [@talluri2004revenue], [@rusmevichientong2014assortment], [@sumida2021revenue], [@gao2021assortment], [@bai2022coordinated], and [@el2021joint] to mention a few. For a comprehensive understanding and further references, we refer the reader to Chapter 4 of the book by [@gallego2019revenue]. #### Assortment optimization. Assortment optimization represents a long-standing research domain within revenue management, seeking to address fundamental questions regarding the selection of offer sets for customers under various choice models. Here, the typical goal is to optimize performance metrics such as revenue, market share, and engagement. Over the past decades, this field has witnessed substantial growth, resulting in an extensive literature encompassing different algorithmic developments under various choice models, such as Multinomial Logit [@talluri2004revenue; @rusmevichientong2014assortment; @aouad2021assortment], Markov Chain [@blanchet2016markov; @feldman2017revenue], Nested Logit [@davis2014assortment; @gallego2014constrained], and non-parametric choice models [@farias2013nonparametric; @aouad2018approximability]. For a comprehensive study and further references, we refer the reader to related surveys and books [@kok2009assortment; @phillips2021pricing; @gallego2019revenue]. As previously mentioned, it is important to note that our work diverges from the classic assortment optimization literature in terms of the objective function we optimize. To the best of our knowledge, this paper is the first study to investigate the maximum load objective function from an assortment optimization perspective. #### Balls and bins. 
In its most general setting, the literature on balls and bins explores the outcomes of randomly placing $m$ balls into $n$ bins. This topic finds numerous applications, with load balancing and hashing being arguably the most commonly known ones [@mitzenmacher2017probability; @mirrokni2018consistent]. Relating such questions to our setting, each customer can be viewed as a ball, whereas each product can be represented by a bin. The probability of a particular ball falling into a specific bin corresponds to the likelihood of a particular customer selecting a particular product. In their seminal work, [@raab1998balls] provide a comprehensive analysis of the maximum number of balls in any bin, offering precise upper and lower bounds that hold asymptotically. Specifically, for $n$ balls and $n$ bins with equal probabilities, the maximum load is $(1+o(1)) \cdot \frac{\log n}{\log\log n}$ with high probability. In a different direction, [@azar1994balanced] proved a significant drop in the maximum load to $\frac{\log \log n}{\log 2}+O(1)$ with high probability, when each ball is placed in the least loaded out of two randomly chosen bins. It is important to point out that the literature on balls and bins primarily focuses on load balancing rather than on load maximizing applications, where one actually wishes to over-pack bins by actively selecting which bins to make use of, given the constant presence of an outside option. That being said, we find that some well-known results still offer preliminary insights. However, to our knowledge, none of these results are directly relevant to statically or adaptively making assortment decisions in order to optimize the maximum load. In other words, our paper is the first to study balls-and-bins-like problems within the context of choice modeling and assortment optimization. # Problem Formulation {#sec:form} #### The MNL choice model.
In what follows, we begin by explaining how the Multinomial Logit choice model is formally defined. To this end, let $\mathcal{N}= \{1, 2, \ldots, n\}$ be the universe of products at our disposal, where each product $i\in \mathcal{N}$ is associated with a *preference weight* $v_i>0$. In addition, the option of not selecting any of these products will be symbolically represented as product $0$, referred to as the no-purchase or no-selection option, with a preference weight of $v_0 = 1$. While the precise meaning of these parameters will be explained below, we mention in passing that the preference weight assigned to each product reflects its level of attractiveness, meaning that higher preference weights would indicate a greater level of popularity. With these conventions, an assortment (or an offer set) is simply a subset of products $S \subseteq \mathcal{N}$. For convenience, we make use of $S_+=S \cup \{0\}$ to denote the inclusion of the no-purchase option within this assortment. Now, when any given assortment $S \subseteq \mathcal{N}$ is offered to an arriving customer, the MNL model prescribes a probability of $\phi_i(S) = \frac{ v_i }{ v(S_+) } = \frac{v_i}{1+\sum_{j\in S} v_j}$ for picking product $i \in S$ as the one to be purchased. Alternatively, this customer may decide to avoid selecting any of these products (i.e., picking the no-purchase option), which happens with the complementary probability, $\phi_0(S) = \frac{1}{1+\sum_{j\in S}v_j}$. #### Stream of customers. Next, we move on to introduce the static formulation of the maximum load assortment problem, followed by discussing how its dynamic counterpart is defined. In both formulations, we will be facing a stream of $T$ customers, arriving one after the other, and we therefore refer to these customers by their arrival indices, $1, \ldots, T$.
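In code, the MNL choice probabilities above take only a few lines to compute; a minimal sketch (the helper name `mnl_probs` and the toy weights are our own):

```python
def mnl_probs(weights, S):
    """MNL choice probabilities phi_i(S) = v_i / (1 + sum_{j in S} v_j).

    `weights` maps each product i to its preference weight v_i; the
    no-purchase option has weight v_0 = 1 and is returned under key 0.
    """
    denom = 1.0 + sum(weights[i] for i in S)
    probs = {i: weights[i] / denom for i in S}
    probs[0] = 1.0 / denom  # no-purchase probability phi_0(S)
    return probs

# Example: two products with equal preference weights v_1 = v_2 = 1.
p = mnl_probs({1: 1.0, 2: 1.0}, {1, 2})
# phi_1(S) = phi_2(S) = phi_0(S) = 1/3
```

Offering both of two unit-weight products thus splits the probability mass evenly between the two products and the no-purchase option.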
We assume that the choice of each customer among any offered assortment is governed by the above-mentioned Multinomial Logit model, meaning in particular that their purchasing decisions are mutually independent. ## Static Maximum Load Assortment Optimization ([\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}) {#subsec:SMLA} In the static setting, we will be operating under the restriction that all customers should be offered the exact same assortment of products throughout the arrival sequence. Specifically, consider an assortment $S\subseteq \mathcal{N}$. For any product $i\in S_+$ and for any customer $t \in [T]$, we define a Bernoulli random variable $X_{it}(S)$ to indicate whether customer $t$ selects product $i$ or not. Since customer $t$ chooses this product with probability $\phi_i(S)$, we have $\mathbb{P}(X_{it}(S)=1) = \phi_i(S)$. As such, $\sum_{i\in S_+}X_{it}(S)=1$, reflecting the fact that each customer chooses exactly one product from the assortment $S$ or decides not to select any product at all. In addition, since customers' decisions are independent, the indicators $\{ X_{it}(S) \}_{t \in [T]}$ of different customers are mutually independent. Given an offered assortment $S$, we define the *load* of product $i\in\mathcal{N}$ as the total number of customers who select this product. This random quantity will be designated by $L_i(S)$, noting that it can be expressed as $L_i(S) = \sum_{t=1}^TX_{it}(S)$. We use $L_0(S)=\sum_{t=1}^TX_{0t}(S)$ to denote the no-purchase load upon offering the assortment $S$, i.e., the number of customers who did not select any products. Finally, $M(S)$ will stand for the maximum load over all products, i.e., $M(S) = \max_{i\in S}L_i(S)$. Hence, the number of customers who choose the most selected product corresponds to the maximum load across all products. Our optimization problem consists of computing an assortment that maximizes the expected maximum load.
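The load process just defined is straightforward to simulate, which provides a useful sanity check for exact calculations; a Monte Carlo sketch (the function name, trial count, and seed are our own choices):

```python
import random

def simulate_max_load(weights, S, T, trials=20000, seed=0):
    """Monte Carlo estimate of E[M(S)]: each of T independent customers
    selects product i in S with probability phi_i(S), and otherwise
    leaves without a selection (the no-purchase option)."""
    rng = random.Random(seed)
    items = sorted(S)
    denom = 1.0 + sum(weights[i] for i in S)
    probs = [weights[i] / denom for i in items]  # phi_i(S); the rest is phi_0(S)
    total = 0
    for _ in range(trials):
        loads = dict.fromkeys(items, 0)
        for _ in range(T):
            u, acc = rng.random(), 0.0
            for i, p in zip(items, probs):
                acc += p
                if u < acc:
                    loads[i] += 1
                    break  # falling through the loop means no-purchase
        total += max(loads.values())
    return total / trials

# Toy instance: T = 2 customers, two unit-weight products; the exact
# value of E[M({1, 2})] works out to 10/9, and the estimate is close.
est = simulate_max_load({1: 1.0, 2: 1.0}, {1, 2}, T=2)
```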
We refer to this problem as *Static Maximum Load Assortment* ([\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}), compactly formulated as follows: $$\label{SMLA} \max_{S\subseteq\mathcal{N}} \;\mathbb{E}\left( M(S) \right). \tag{\sf{Static-MLA}}$$ #### Closed-form expression for the maximum load. Given this formulation, we first observe that efficiently computing the expected maximum load $\mathbb{E}(M(S))$ of a given assortment $S$ is a non-trivial task. Prior to developing an efficient algorithm for this purpose, let us start by deriving a seemingly straightforward closed-form expression. Consider an assortment $S \subseteq \mathcal{N}$ and suppose without loss of generality that $S = \{1,\ldots,k\}$ for some $k\leq n$. For each product $i \in S_+$, its corresponding random load $L_i(S)$ clearly follows a Binomial distribution with $T$ trials and success probability $\phi_i(S)$. However, these random variables are correlated, since $\sum_{i \in S_+} L_i(S)=T$. In fact, the load vector ${\mathbf L}(S)\coloneqq(L_0(S),\ldots,L_k(S))$ is a random vector that follows a Multinomial distribution. In particular, for every $\boldsymbol{\ell}\coloneqq(\ell_0,\ldots,\ell_{k})\in\mathbb{N}^{|S_+|}$ with $\sum_{i\in S_+}\ell_i= T$, we have $$\mathbb{P}\left({\mathbf L}(S)=\boldsymbol{\ell}\right)= \binom{ T }{ \ell_0,\ldots,\ell_k } \cdot \prod_{i\in S_+}\left(\phi_i(S)\right)^{\ell_i},$$ where $\binom{ T }{ \ell_0,\ldots,\ell_k }\coloneqq\frac{T!}{\ell_0!\cdots \ell_k!}$ is the Multinomial coefficient. As such, a direct expression for the expected maximum load is given by: $$\label{eq:multinomial} \mathbb{E}\left( M(S) \right)=\sum_{ \genfrac{}{}{0pt}{}{{\boldsymbol{\ell}}\in\mathbb{N}^{k+1} :}{\sum_{i\in S_+}\ell_i=T}}\mathbb{P}\left({\mathbf L}(S)={\boldsymbol{\ell}}\right)\cdot \max_{i\in S}\ell_i.$$ However, in this representation, we sum over an exponential number of terms, $\binom{ T+k}{ k}$, which makes this computation intractable.
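Although the sum above has exponentially many terms, it can still be evaluated directly when $T$ and $|S|$ are tiny, which also illustrates why this route is intractable in general; a brute-force sketch (our own helper, not the polynomial-time oracle developed later in the paper):

```python
from math import factorial

def exact_expected_max_load(weights, S, T):
    """Evaluate E[M(S)] by summing over every multinomial outcome
    (l_0, ..., l_k) with l_0 + ... + l_k = T; there are
    binomial(T + k, k) terms, so this is viable only for tiny T, |S|."""
    items = sorted(S)
    k = len(items)
    denom = 1.0 + sum(weights[i] for i in S)
    # phi_0(S) first, then phi_i(S) for each product i in S.
    phis = [1.0 / denom] + [weights[i] / denom for i in items]

    def compositions(n, parts):
        """All tuples of `parts` nonnegative integers summing to n."""
        if parts == 1:
            yield (n,)
            return
        for first in range(n + 1):
            for rest in compositions(n - first, parts - 1):
                yield (first,) + rest

    total = 0.0
    for ell in compositions(T, k + 1):
        coef = factorial(T)
        for l in ell:
            coef //= factorial(l)  # multinomial coefficient T!/(l_0!...l_k!)
        prob = float(coef)
        for l, p in zip(ell, phis):
            prob *= p ** l
        total += prob * max(ell[1:])  # maximum over products, excluding l_0
    return total

# Toy check: T = 2 customers, v_1 = v_2 = 1, S = {1, 2} gives 10/9.
val = exact_expected_max_load({1: 1.0, 2: 1.0}, {1, 2}, 2)
```

Running this routine on the toy instances analyzed below reproduces the values $10/9$, $4/3$, and $32/25$.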
In Section [3.1](#compute){reference-type="ref" reference="compute"}, we provide a polynomial-time algorithm to compute the expected maximum load for any given assortment. #### Numerical example. To provide some preliminary understanding and basic intuition, we proceed by presenting a toy example, considering a very simple scenario with two products and two customers. We examine two sets of preference weights and observe how they yield very distinct optimal assortments. This demonstration highlights the impact of varying product preferences on the optimal assortment selection. Specifically, consider an instance where we have two customers ($T=2$) and two products ($n=2$) with equal preference weights $v_1=v_2=1$. Since the two products are identical, there are only two types of non-empty assortments to examine: either offering a single product or offering both products. Let us calculate the expected maximum load for each of these options. If we offer, say, product ${1}$, the MNL choice probability of selecting product $1$ is ${\phi_1(\{1\}) = 1/2}$. In this case, the maximum load is simply the load of product $1$. Therefore, the expected maximum load is the mean of a Binomial random variable, which is given by $\mathbb{E}(M(\{1\})) = 2\cdot 1/2 = 1$. Now, let us analyze the option of offering both products. In this case, the MNL choice probability for each of the two products is ${\phi_1(\{1,2\})=\phi_2(\{1,2\}) = 1/3}$. The maximum load can take three possible values -- $0$, $1$, or $2$ -- and using simple calculations, we find that $\mathbb{P}(M(\{1,2\})=0) = 1/9$, $\mathbb{P}(M(\{1,2\})=1) = 6/9$, and $\mathbb{P}(M(\{1,2\})=2) = 2/9$. Therefore, $\mathbb{E}(M(\{1,2\})) = 10/9> \mathbb{E}(M(\{1\}))$. Thus, for this instance, it is optimal to offer the two products rather than a single product. Now, let us change the preference weights to $v_1=v_2=2$.
Applying the same logic, the expected maximum load of offering a single product is $4/3$, while offering both products gives an expected maximum load of $32/25=1.28$. Hence, it is optimal to offer the assortment consisting of a single product. Beyond providing insights into the combinatorial aspects involved, this example highlights a crucial structural tension: when high-weight options are available, we expect optimal assortments to comprise fewer products. Conversely, when product weights are considerably smaller compared to the no-purchase weight, we anticipate the optimal assortment to encompass a larger number of products. However, the structure of optimal assortments remains unclear for general instances. ## Dynamic Maximum Load Assortment Optimization ([\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}) {#subsec:DMLA} In the dynamic setting, customers arrive one after the other, allowing the decision maker to tailor the assortment offered to each customer based on the choices observed for previously-arriving customers. In particular, at each time period, we have access to the current load vector, which provides the number of customers who have selected each product up to that point. Based on this information, we wish to determine a personalized assortment that will be offered to the next arriving customer. As such, the solution concept in this setting corresponds to an adaptive policy, captured by a function that takes as input the current system state (i.e., the number of customers remaining and the current load vector), and returns an assortment to offer to the next customer. The objective is to propose an adaptive policy that maximizes the expected maximum load over all products upon termination of the arrival stream. #### Dynamic programming representation.
To formalize the dynamic setting, we take the view of a dynamic program that determines the actions taken by an optimal policy, i.e., the personalized assortments that will be offered to arriving customers. For this purpose, we consider a planning horizon consisting of $T$ periods, each with a single customer arrival. To describe the system state at the beginning of any time period, we introduce the state variable $\boldsymbol{\ell}= (\ell_i : i \in \mathcal{N})$, where $\ell_i$ represents the number of customers who have selected product $i$ up to that point. For each time period $t = 1, \ldots, T$, we use ${\cal M}_t(\boldsymbol{\ell})$ to denote the optimal expected maximum load when there are $t$ customers remaining in the planning horizon, and the system's state at the beginning of this period is characterized by the load vector $\boldsymbol{\ell}$. By employing ${\bf e}_i \in \mathbb{R}_+^n$ to represent the $i$-th unit vector, we can compute the value functions $\{ {\cal M}_t \}_{ t \in [T]}$ via the following dynamic program: $$\begin{array}{ll} \label{DMLA}\tag{\sf{Dynamic-MLA}} {\cal M}_{t}(\boldsymbol{\ell})=\displaystyle\max_{S\subseteq \mathcal{N}}\left({\cal M}_{t-1}( \boldsymbol{\ell})\cdot\phi_0(S)+\sum_{i\in S}{\cal M}_{t-1}(\boldsymbol{\ell}+\mathbf{e}_i)\cdot \phi_i(S)\right) \end{array}$$ with the boundary condition ${\cal M}_0(\boldsymbol{\ell})=\max_{i \in [n]}\ell_i$. To better understand the recursive equation above, note that when the current load vector is $\boldsymbol{\ell}$ and we offer the assortment $S$, the first possible outcome is that the currently arriving customer will choose the no-selection option, with probability $\phi_0(S)$, in which case the load vector remains unchanged. The second outcome corresponds to choosing one of the products $i \in S$, with probability $\phi_i(S)$; here, the load vector $\boldsymbol{\ell}$ is updated to $\boldsymbol{\ell}+\mathbf{e}_i$.
Clearly, the optimal expected maximum load over the entire horizon is given by ${\cal M}_{T}(\bf 0)$. It is imperative to mention that this formulation should be viewed as an explicit characterization of optimal adaptive policies rather than as an efficiently-implementable algorithm, due to being defined over a state space of exponential size, $\Omega( T^n )$. Moreover, a careful inspection of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} shows that each of the value functions ${\cal M}_t(\cdot)$ is recursively obtained by only considering ${\cal M}_{t-1}(\cdot)$-related terms. However, each of these equations still requires solving an inner assortment-like optimization problem. Quite surprisingly, in Section [5.4](#subsec:proofquasitime){reference-type="ref" reference="subsec:proofquasitime"}, we show that this inner problem can be reformulated as an unconstrained revenue maximization question under the Multinomial Logit model, which can be solved in polynomial time.

#### Numerical example.

Again, to enhance our understanding and develop additional intuition, we proceed by presenting a simple toy example. Here, we consider a scenario with two customers ($T=2$) and three products ($n=3$), solving the dynamic program over the entire planning horizon. Assuming identical preference weights, $v_1=v_2=v_3=1$, let us solve [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} for this instance using a bottom-up approach:

- $t=0$: In this case, there are no customers left, and the objective is simply the maximum load attained, which corresponds to the boundary condition mentioned above.

- $t=1$: In this case, there is only one customer remaining, meaning that the potential load vector realizations are: $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, or $(0,0,1)$. Therefore, at this step, we calculate ${\cal M}_1(\boldsymbol{\ell})$ for each of the aforementioned load vector realizations, as summarized in Table [1](#tab:egdynamic1){reference-type="ref" reference="tab:egdynamic1"}.
The rows of this table represent the possible load vectors at $t=1$, while its columns represent the feasible assortments that can be offered at this step. Each entry specifies the expected maximum load for each pair of load vector and offered assortment. In each row, we highlight in bold the optimal expected maximum load. As one would expect, the optimal policy here operates as follows: if the first customer selected some product $i$, then simply offer the assortment $\{i\}$; otherwise, offer $\{1,2,3\}$.

- $t=2$: This step determines the optimal assortment to offer to the first customer. At the beginning, our load vector is $\boldsymbol{\ell}= (0,0,0)$, and we should simply compute ${\cal M}_2((0,0,0))$. Given Table [1](#tab:egdynamic1){reference-type="ref" reference="tab:egdynamic1"}, we find that ${\cal M}_2((0,0,0))=21/16$, by offering all products to the first customer. Table [2](#tab:egdynamic2){reference-type="ref" reference="tab:egdynamic2"} shows the objective value for offering each of the possible assortments.

              $\{1\}$          $\{2\}$          $\{3\}$          $\{1,2\}$   $\{1,3\}$   $\{2,3\}$   $\{1,2,3\}$
  ----------- ---------------- ---------------- ---------------- ----------- ----------- ----------- ----------------
  $(0,0,0)$   $1/2$            $1/2$            $1/2$            $2/3$       $2/3$       $2/3$       $\mathbf{3/4}$
  $(1,0,0)$   $\mathbf{3/2}$   $1$              $1$              $4/3$       $4/3$       $1$         $5/4$
  $(0,1,0)$   $1$              $\mathbf{3/2}$   $1$              $4/3$       $1$         $4/3$       $5/4$
  $(0,0,1)$   $1$              $1$              $\mathbf{3/2}$   $1$         $4/3$       $4/3$       $5/4$

  : The expected maximum load for different values of the state variable $\boldsymbol{\ell}$ and different feasible assortments at $t=1$.

  Offered assortment      $\{1\}$   $\{2\}$   $\{3\}$   $\{1,2\}$   $\{1,3\}$   $\{2,3\}$   $\{1,2,3\}$
  ----------------------- --------- --------- --------- ----------- ----------- ----------- ------------------
  Expected maximum load   $9/8$     $9/8$     $9/8$     $5/4$       $5/4$       $5/4$       $\mathbf{21/16}$

  : The expected maximum load for different offered assortments to customer $1$.
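The bottom-up computation above can be reproduced with a short brute-force script. The following sketch (our own illustration, not the paper's implementation) evaluates the value functions of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} by recursing over all non-empty assortments with exact rational arithmetic, recovering the boldfaced entries of the tables above.

```python
from fractions import Fraction
from itertools import chain, combinations

# Toy instance from the example: T = 2 customers, n = 3 products,
# identical preference weights v_i = 1; the no-purchase weight is 1.
n, T = 3, 2
v = [Fraction(1)] * n

def assortments():
    """All non-empty subsets of the product universe {0, ..., n-1}."""
    return chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1))

def M(t, load):
    """Value function of the dynamic program: the optimal expected maximum
    load with t customers remaining and current load vector `load`."""
    if t == 0:
        return max(load)  # boundary condition: M_0(l) = max_i l_i
    best = Fraction(0)
    for S in assortments():
        denom = 1 + sum(v[i] for i in S)     # total weight, incl. no-purchase
        value = M(t - 1, load) / denom       # no-purchase, probability 1/denom
        for i in S:
            bumped = tuple(l + 1 if j == i else l for j, l in enumerate(load))
            value += v[i] / denom * M(t - 1, bumped)
        best = max(best, value)
    return best

print(M(1, (1, 0, 0)))   # 3/2, matching Table 1
print(M(2, (0, 0, 0)))   # 21/16, matching Table 2
```

Since this recursion enumerates all $2^n-1$ assortments at every state, it is only meant for tiny instances such as the toy example.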
Let us now summarize the whole policy in words, along with a very natural interpretation. When the first customer arrives, we offer the whole universe $\{1,2,3\}$. This decision makes sense since all products are identically weighted; note, however, that offering the full universe is generally suboptimal for arbitrary instances. Now, if the first customer selects the no-purchase option, for the same reasons, we offer the whole universe of products again. However, if she selects some product $i\in\{1,2,3\}$, we simply offer the assortment $\{i\}$ to the next customer. This policy makes sense since the only outcome that can increase the maximum load is for the second customer to select product $i$ again, and the assortment $\{i\}$ maximizes the probability of this outcome.

# The Static Setting: Approximation Algorithms {#sec:SMLA}

In this section, we present our main algorithmic results for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}, eventually showing that this setting can be efficiently approximated within any degree of accuracy. Toward this objective, in Section [3.1](#compute){reference-type="ref" reference="compute"}, we first provide a polynomial-time evaluation oracle for computing the expected maximum load of a given assortment. In Section [3.2](#subsec:PTASlemmas){reference-type="ref" reference="subsec:PTASlemmas"}, we establish a number of structural lemmas that will be useful in analyzing our algorithmic framework. Using these claims, we show in Section [3.3](#subsec:halfapprox){reference-type="ref" reference="subsec:halfapprox"} that an elegant policy based on preference-weight-ordered assortments yields a $1/2$-approximation for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}. Finally, in Section [3.4](#subsec:PTAS){reference-type="ref" reference="subsec:PTAS"}, we present our main contribution for the static formulation, showing that it admits a polynomial-time approximation scheme (PTAS).
## Polynomial-time evaluation oracle {#compute} As previously mentioned, one of the basic challenges in addressing [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"} resides in simply evaluating the objective function of a given assortment. As discussed in Section [2.1](#subsec:SMLA){reference-type="ref" reference="subsec:SMLA"}, computing the expected maximum load via representation [\[eq:multinomial\]](#eq:multinomial){reference-type="eqref" reference="eq:multinomial"} requires summing over exponentially-many terms, which is clearly not a tractable approach. Our first contribution is to provide a polynomial time algorithm for computing the expected maximum load function. **Theorem 1** (Evaluation oracle). *The expected maximum load of any given assortment can be computed in $O(n^2T^3)$ time.* In a nutshell, our algorithm builds upon the work of [@frey2009algorithm] and [@lebrun2013efficient], who designed polynomial-time procedures to evaluate *rectangular probabilities* for the Multinomial distribution. Specifically, let $\mathbf L$ be a random vector that follows a Multinomial distribution. A rectangular probability is the probability of a so-called *rectangular event*, of the form $\{ \mathbf{a} \leq \mathbf{L} \leq \mathbf{b} \}$, where $\mathbf a$ and $\mathbf b$ are integer vectors. To design our evaluation oracle, we will exploit the work of [@frey2009algorithm], showing how to compute in polynomial time the probability $\mathbb{P}( \mathbf{a}\leq\mathbf{L}\leq\mathbf{b})$ for any pair of integer vectors $\mathbf{a}$ and $\mathbf{b}$. #### Preliminaries. Consider an arbitrarily-structured assortment $S\subseteq\mathcal{N}$ and suppose without loss of generality that $S=\{1,\ldots,k\}$. We remind the reader that the random variable $L_i(S)$ stands for the load of product $i\in S$, with $\mathbf{L}(S)=(L_1(S),\ldots,L_k(S))$ being the overall load vector. 
Additionally, $M(S)$ is the random variable that refers to the maximum load across the products in $S$, i.e., $M(S) = \max_{i \in S} L_i(S)$. As argued in Section [2.1](#subsec:SMLA){reference-type="ref" reference="subsec:SMLA"}, the load vector $\mathbf{L}(S)$ follows a Multinomial distribution. In what follows, we explain how to compute $\mathbb{E}(M(S))$ using only a polynomial number of external calls to evaluate rectangular probabilities. To this end, noting that $$\label{eq:mean} \mathbb{E}\left(M(S)\right) = \sum\limits_{\ell=1}^{T}\ell\cdot \mathbb{P}(M(S) = \ell),$$ it suffices to show how to efficiently compute each of the terms $\mathbb{P}(M(S) = \ell)$. In turn, we write each event $\{M(S)=\ell\}$ as a partition into $O(n)$ rectangular events with respect to the random vector ${\mathbf L}(S)$. Specifically, for $1\leq \ell\leq T$ and $1\leq j\leq k$, we define the event $F_{\ell j}(S)$ as: $$\begin{aligned} \label{eq:rect} \nonumber F_{\ell j}(S) =& \left[\bigwedge_{i=1}^{j-1}\left\{L_i(S)<\ell\right\}\right]\;\boldsymbol\land\; \left\{L_j(S)=\ell\right\} \;\boldsymbol\land\;\left[\bigwedge_{i=j+1}^k\left\{L_i(S)\leq \ell\right\}\right] \\ =&\left\{\mathbf{a}_{\ell j}\leq\mathbf L(S)\leq \mathbf{b}_{\ell j}\right\}, \end{aligned}$$ where ${\mathbf{a}_{\ell j}=\ell\cdot \mathbf{e}_j}$ and ${\mathbf{b}_{\ell j}=(\ell-1)\cdot\sum_{i=1}^{j-1}\mathbf{e}_i+\ell\cdot\sum_{i=j}^{k}\mathbf{e}_i}$. Here, $F_{\ell j}(S)$ corresponds to the event where the maximum load is equal to $\ell$, and product $j$ is the minimal-index product that attains this load. The above expression implies that $F_{\ell j}(S)$ is a rectangular event, meaning that its probability can be computed using Frey's algorithm.

#### Computing $\boldsymbol{\mathbb{E}(M(S))}$.

The next lemma shows how to utilize these rectangular events to compute $\mathbb{E}(M(S))$ for any assortment $S\subseteq\mathcal{N}$.

**Lemma 2**.
*For any assortment $S=\{1,\ldots,k\}\subseteq \mathcal{N}$, we have $$\mathbb{E}(M(S)) = \sum\limits_{\ell=1}^T\left[\ell\cdot\sum\limits_{j=1}^k \mathbb{P}\left(F_{\ell j}(S)\right)\right].$$*

*Proof.* For convenience, we denote the random variables $M(S)$ and $L_j(S)$ simply by $M$ and $L_j$; similarly, the events $F_{\ell j}(S)$ will be replaced by $F_{\ell j}$. Fixing some $\ell=1,\ldots,T$, we will show that $(F_{\ell j})_{j=1,\ldots ,k}$ is a partition of the event $\{M=\ell\}$, i.e., the union of the events $\left(F_{\ell j}\right)_{j=1,\ldots ,k}$ is precisely $\{M=\ell\}$ and these events are mutually exclusive. Consequently, ${\mathbb{P}\left(M=\ell\right)=\sum_{j=1}^k\mathbb{P}(F_{\ell j})}$, and replacing this expression in Equation [\[eq:mean\]](#eq:mean){reference-type="eqref" reference="eq:mean"} yields the desired result. First, we show that the events $(F_{\ell j})_{j=1,\ldots ,k}$ are mutually exclusive, i.e., for all $j_1 \neq j_2$, we have $F_{\ell j_1} \cap F_{\ell j_2}= \emptyset$. To verify this claim, suppose without loss of generality that $j_1<j_2$. In the event $F_{\ell j_2}$, we have by definition $L_{j_1}<\ell$ since $j_1<j_2$. In particular, $L_{j_1} \neq \ell$, meaning that the event $F_{\ell j_1}$ cannot occur. Hence, $F_{\ell j_1} \cap F_{\ell j_2}= \emptyset$. Second, let us show that $\bigvee_{j=1}^k F_{\ell j} = \{M=\ell\}$. On the one hand, by definition, the maximum load in any event $F_{\ell j}$ is exactly $\ell$, and therefore $\bigvee_{j=1}^k F_{\ell j} \subseteq \{M=\ell\}$. In the opposite direction, suppose that $M=\ell$. Then, at least one product has a load of $\ell$ and all other products have a load of at most $\ell$. Let $j_{\min}$ be the lowest-index product with a load of exactly $\ell$, i.e., $j_{\min} = \min\{i=1,\ldots,k \mid L_i=\ell\}$. By definition of $j_{\min}$, we have $L_i<\ell$ for all $i=1,\ldots,j_{\min}-1$ and $L_{j_{\min}} = \ell$.
Also, since $M=\ell$ by supposition, $L_i\leq \ell$ for all $i=j_{\min}+1,\ldots,k$, meaning that this outcome lies in $F_{\ell j_{\min}}\subseteq \bigvee_{j=1}^k F_{\ell j}$; hence, $\{M=\ell\}\subseteq \bigvee_{j=1}^k F_{\ell j}$. ◻

#### Concluding the proof of Theorem [Theorem 1](#thm:oracle){reference-type="ref" reference="thm:oracle"}. {#concluding-the-proof-of-theorem-thmoracle.}

We consider the running time incurred by computing $\mathbb{E}(M(S))$ via Lemma [Lemma 2](#lem:partition){reference-type="ref" reference="lem:partition"}. For each of the events $\{M(S)=\ell\}$, we have to compute $O(n)$ rectangular probabilities. Since Frey's algorithm admits an $O(nT^2)$-time implementation, we arrive at $O(n^2T^2)$ operations per event. In turn, Equation [\[eq:mean\]](#eq:mean){reference-type="eqref" reference="eq:mean"} involves $O(T)$ such events, amounting to an overall running time of $O(n^2T^3)$, precisely as stated in Theorem [Theorem 1](#thm:oracle){reference-type="ref" reference="thm:oracle"}.

## Structural lemmas {#subsec:PTASlemmas}

In what follows, we shed light on a number of structural claims that will be useful in presenting our algorithmic framework. In Lemmas [Lemma 3](#lem:merge){reference-type="ref" reference="lem:merge"} and [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}, we introduce two operations, referred to as *Merge* and *Transfer*, showing that their application to any assortment does not decrease the expected maximum load. In Lemmas [Lemma 5](#lem:probs){reference-type="ref" reference="lem:probs"} and [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"}, we show that minor alterations of the instance parameters (choice probabilities or preference weights) yield a correspondingly small change in the expected maximum load. Before stating these lemmas, let us introduce the following definition. We say that a product $i$ is *lighter* (resp. *heavier*) than a product $j$, if $v_i\leq v_j$ (resp.
$v_i\geq v_j$), emphasizing that we have weak inequalities in both definitions.

#### Operation 1: Merge.

Consider an assortment $S$ and let $i$ and $j$ be two products in this assortment, with respective preference weights $v_i$ and $v_j$. The operation of merging products $i$ and $j$ consists of replacing both products with a single new product, whose preference weight is $v_i+v_j$. In the next lemma, whose proof is presented in Appendix [7.1](#apx:merge){reference-type="ref" reference="apx:merge"}, we show that the merge operation cannot decrease the expected maximum load.

**Lemma 3**. *Consider an assortment $S\subseteq \mathcal{N}$ and let $\widetilde{S}$ be the assortment resulting from merging any two products of $S$. Then, $$\mathbb{E}(M(\widetilde{S})) \geq \mathbb{E}(M(S)) .$$*

Roughly speaking, the main idea behind this result is that, when we merge products $i$ and $j$, simple coupling arguments show that the maximum load of $\widetilde{S}$ is stochastically larger than that of $S$. In fact, the load of the merged product is distributed as the sum of the loads of products $i$ and $j$. Moreover, since the total sum of MNL preference weights remains unchanged after a merge, the choice probabilities of all remaining products in the assortment $S$ are also unchanged, and consequently, their loads have the same distribution as their pre-merge counterparts. Hence, merging can only increase the maximum load across all products.

#### Operation 2: Transfer.

Consider an assortment $S$ and let $i$ and $j$ be two products in this assortment with respective preference weights $v_i\geq v_j$. For any $\delta \in [0, v_j]$, the operation of $\delta$-weight transfer from product $j$ to product $i$ consists of: (1) Replacing product $i$ with a new product of preference weight $v_i+\delta$; and (2) replacing product $j$ with a new product of preference weight $v_j-\delta$. Notably, we always transfer weight from a lighter product to a heavier product.
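As a quick numerical sanity check of both operations (a brute-force sketch of our own, not the evaluation oracle of Section [3.1](#compute){reference-type="ref" reference="compute"}), consider the two-customer instance with two unit-weight products from the earlier static example: merging the two products, or transferring $\delta=1/2$ of weight between them, can only increase the expected maximum load.

```python
from fractions import Fraction
from itertools import product

def expected_max_load(weights, T):
    """Exact E(M(S)) under MNL for an assortment with the given preference
    weights, by enumerating all (|S|+1)^T choice sequences. Exponential in T:
    a sanity-check tool only, not the paper's polynomial-time oracle."""
    denom = 1 + sum(weights)                           # no-purchase weight is 1
    probs = [Fraction(1) / denom] + [w / denom for w in weights]
    total = Fraction(0)
    for seq in product(range(len(probs)), repeat=T):   # 0 encodes no-purchase
        p = Fraction(1)
        for choice in seq:
            p *= probs[choice]
        loads = [seq.count(i + 1) for i in range(len(weights))]
        total += p * max(loads)
    return total

T = 2
base    = [Fraction(1), Fraction(1)]        # two unit-weight products
merged  = [Fraction(2)]                     # Merge applied to the two products
shifted = [Fraction(3, 2), Fraction(1, 2)]  # Transfer with delta = 1/2
print(expected_max_load(base, T))      # 10/9
print(expected_max_load(merged, T))    # 4/3, at least 10/9
print(expected_max_load(shifted, T))   # 7/6, at least 10/9
```

Both operations strictly improve the objective here, in line with Lemmas [Lemma 3](#lem:merge){reference-type="ref" reference="lem:merge"} and [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}.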
In the next lemma, whose proof is provided in Appendix [7.2](#apx:charging){reference-type="ref" reference="apx:charging"}, we show that the transfer operation cannot decrease the expected maximum load.

**Lemma 4**. *Consider an assortment $S\subseteq \mathcal{N}$ and let $\widetilde{S}_{\delta}$ be the assortment resulting from a $\delta$-weight transfer. Then, $$\mathbb{E}(M(\widetilde{S}_{\delta})) \geq \mathbb{E}(M(S)).$$*

In the proof of this result, we analytically study the function $\delta \mapsto \mathbb{E}(M(\widetilde S_{\delta}))$, showing that it is non-decreasing. In fact, when $\delta$ increases, the choice probability of product $i$ increases, the choice probability of product $j$ decreases, and all other choice probabilities remain unchanged. Therefore, at least intuitively, by increasing $\delta$, we virtually transfer part of the load of product $j$ to product $i$. Since product $i$ is heavier than product $j$, it is more likely that product $i$ has a higher load, and therefore, transferring part of the load of product $j$ to product $i$ can only help increase the expected maximum load.

#### Sensitivity of the expected maximum load function.

In the following, we study the effect of small changes in the instance parameters (choice probabilities or preference weights) on the expected maximum load of a given assortment. The next lemma shows that, given a Multinomial vector, where $0$ is the no-selection option and $1,\ldots,m$ are the product options, slight changes in the choice probabilities translate to small changes in the expected maximum load.

**Lemma 5**. *Let ${\bf Y}=(Y_0,Y_1,\ldots,Y_m)$ and ${\bf W}=(W_0,W_1,\ldots,W_m)$ be Multinomial vectors, with parameters $(T,p^Y_0,\ldots,p^Y_m)$ and $(T,p^W_0,\ldots,p^W_m)$, respectively.
Then, when $p^W_i\geq (1-\epsilon)p^Y_i$ for all $i \in \{1,\ldots,m\}$, we have $$\mathbb{E}\left(\max_{i=1,\ldots,m}W_i\right) \geq (1-\epsilon) \cdot \mathbb{E}\left(\max_{i=1,\ldots,m}Y_i\right).$$*

To prove this result, we construct a coupling between the random vectors $\mathbf{Y}$ and $\mathbf{W}$, where every customer that selects some option $i$ under $\mathbf{Y}$ also selects the same option under $\mathbf{W}$ with probability at least $1-\epsilon$. Consequently, when $i$ is a deterministic option, it is relatively straightforward to claim that the load of this option only suffers an $\epsilon$-fraction loss (in expectation). However, in our case, the option $i$ itself is a random variable, corresponding to the product that attains the maximum load. Based on an elegant conditioning argument, we show that a claim of similar spirit can be extended to the latter setting. The full proof of this lemma appears in Appendix [7.4](#apx:probs){reference-type="ref" reference="apx:probs"}. Similarly, in Lemma [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"}, we show that with respect to any assortment, small changes in the preference weights of its products translate to small changes in the expected maximum load. The proof of this result can be found in Appendix [7.5](#apx:weights){reference-type="ref" reference="apx:weights"}.

**Lemma 6**. *Let $S^+=\{1^+,\ldots,m^+\}$ and $S^-=\{1^-,\ldots,m^-\}$ be a pair of assortments, and let $v_i^+$ (resp. $v_i^-$) be the preference weight of product $i^+$ (resp. $i^-$), for all $i\in [m]$. When $(1-\epsilon)v_{i}^+\leq v_{i}^-\leq v_{i}^+$ for all $i \in [m]$, we have $$\mathbb{E}\left(M(S^-)\right) \geq (1-\epsilon)\cdot\mathbb{E}\left(M(S^+)\right).$$*

## $\boldsymbol{1/2}$-approximation via preference-weight-ordered assortments {#subsec:halfapprox}

#### Main result.

Roughly speaking, preference-weight-ordered assortments prioritize products with higher preference weight.
Formally, assuming without loss of generality that $v_1\geq \cdots \geq v_n$, we say that an assortment $S$ is preference-weight-ordered when it forms a prefix of this sequence, i.e., $S=\{1,2,\ldots,j\}$ for some $1 \leq j \leq n$. In what follows, we show that there exists a weight-ordered assortment whose expected maximum load is within factor $2$ of optimal. Since there are only $n$ such assortments, and since we can compute the expected maximum load for each of these assortments by employing our evaluation oracle (see Section [3.1](#compute){reference-type="ref" reference="compute"}), the latter claim yields a polynomial-time $1/2$-approximation for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}, as stated in the following theorem.

**Theorem 7**. *There is a preference-weight-ordered assortment that forms a $1/2$-approximation to [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}. Moreover, we can compute such an assortment in polynomial time.*

In order to establish this result, the overall idea is to consider an optimal assortment $S^*$ for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}, which may not necessarily be preference-weight-ordered. We will then sequentially modify $S^*$ to obtain a weight-ordered assortment. We prove that the loss incurred due to these modifications does not exceed half of the objective value. In other words, letting $S$ be the resulting preference-weight-ordered assortment, we claim that $\mathbb{E}(M(S))\geq \mathbb{E}(M(S^*))/2$.

#### Outline of analysis.
To prove Theorem [Theorem 7](#thm:halfapprox){reference-type="ref" reference="thm:halfapprox"}, using the Merge and Transfer operations presented in Section [3.2](#subsec:PTASlemmas){reference-type="ref" reference="subsec:PTASlemmas"}, we first show that any sufficiently heavy assortment can be replaced by a preference-weight-ordered assortment, plus a so-called virtual product (i.e., not present in the universe $\mathcal{N}$), without decreasing the expected maximum load. Moreover, we argue that the preference weight of the latter product is upper-bounded by every preference weight in the weight-ordered assortment. **Lemma 8**. *Let $S\subseteq \mathcal{N}$ be an assortment with $v(S)\geq v_1$. Then, there exists a weight-ordered assortment $\widetilde S\subseteq \mathcal{N}$ and a virtual product $k$ whose preference weight is at most $\min_{i\in \widetilde S}v_i$ such that $$\mathbb{E}\left(M\left(\widetilde S\cup\{k\}\right)\right) \geq \mathbb{E}(M(S)).$$* To arrive at Lemma [Lemma 8](#lem:virtual){reference-type="ref" reference="lem:virtual"}, we begin with an assortment $S$ and execute a sequence of Merge and Transfer operations to generate the assortment $\widetilde S$ along with the virtual product $k$, both with the desired structure. The proof of this claim can be found in Appendix [7.6](#apx:virtual){reference-type="ref" reference="apx:virtual"}. Recalling that product $k$ is not part of the universe $\mathcal{N}$, the next lemma shows that this virtual product can be removed, while losing a factor of at most $1/(|S|+1)$ in the objective function. **Lemma 9**. *Let $S\subseteq \mathcal{N}$ be any non-empty assortment, and let $k\notin S$ be a product with $v_k \leq \min_{i\in S}v_i$. 
Then, $$\mathbb{E}\left(M\left(S\right)\right) \geq \frac{|S|}{|S|+1} \cdot \mathbb{E}\left(M\left(S\cup \{k\}\right)\right).$$*

In particular, when $S \neq \emptyset$, the lemma above shows that, by removing the virtual product $k$ from $S\cup \{k\}$, we lose a factor of at most $1/2$ in the objective function. The proof of this result is based on our structural results regarding the sensitivity of the expected maximum load function, along the lines of Lemma [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"}. Its complete details are given in Appendix [7.7](#apx:drop){reference-type="ref" reference="apx:drop"}.

#### Concluding the proof of Theorem [Theorem 7](#thm:halfapprox){reference-type="ref" reference="thm:halfapprox"}. {#concluding-the-proof-of-theorem-thmhalfapprox.}

Let $S^*$ be an optimal assortment for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}. First, we observe that $v(S^*)\geq v_1$. Indeed, suppose by contradiction that $v(S^*)< v_1$. In this case, on one hand, when offering the assortment $S^*$, the total number of customers who select an option in $S^*$ (i.e., do not select the no-purchase option) is a Binomial random variable with $T$ trials and success probability $v(S^*)/(1+v(S^*))$. Since the maximum load when offering $S^*$ is trivially upper-bounded by the total number of purchases, it follows that $\mathbb{E}(M(S^*))\leq Tv(S^*)/(1+v(S^*))$. On the other hand, when offering the single-product assortment $\{1\}$, the expected maximum load is given by $\mathbb{E}(M(\{1\})) = Tv_1/(1+v_1)$. Since the function $x\mapsto x/(1+x)$ is increasing over $[0,+\infty)$, and since $v(S^*)<v_1$, we have $\mathbb{E}(M(S^*))<\mathbb{E}(M(\{1\}))$, contradicting the optimality of $S^*$.
Now, given that $v(S^*)\geq v_1$, the conditions of Lemma [Lemma 8](#lem:virtual){reference-type="ref" reference="lem:virtual"} are met, and therefore, there exists a weight-ordered assortment $\widetilde S$ and a virtual product $k$ whose preference weight is at most $\min_{i\in \widetilde S}v_i$, such that $\mathbb{E}(M(\widetilde S\cup\{k\})) \geq \mathbb{E}(M(S^*))$. By definition, weight-ordered assortments are non-empty, meaning that according to Lemma [Lemma 9](#lem:drop){reference-type="ref" reference="lem:drop"}, we have $\mathbb{E}(M(\widetilde S)) \geq \frac{ 1 }{ 2 } \cdot \mathbb{E}(M(\widetilde S\cup\{k\}))$. Putting both inequalities together, $$\mathbb{E}(M(\widetilde S))\geq \frac12\cdot \mathbb{E}(M(\widetilde S\cup \{k\}))\geq \frac12\cdot\mathbb{E}(M(S^*)).$$

## Polynomial-time approximation scheme {#subsec:PTAS}

Our main technical contribution for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"} consists of designing a polynomial-time approximation scheme (PTAS). In other words, for any fixed $\epsilon\in (0,1)$, we propose a polynomial-time algorithm for identifying an assortment whose expected maximum load is within factor $1 - \epsilon$ of optimal. This result is formalized in the next theorem.

**Theorem 10**. *For any $\epsilon\in (0,1)$, [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"} can be approximated within a factor of $1-\epsilon$ of optimal. The running time of our algorithm is $O(T^{O(1)} \cdot n^{O(\frac 1\epsilon\log \frac 1\epsilon)})$.*

#### Block-based assortments.

In what follows, we introduce a family of highly-structured assortments, which will be referred to as "block-based". As explained below, these assortments are defined in three steps, starting from the block of products with the highest preference weight and gradually moving to blocks with lower weights.
Without loss of generality, we assume that $1/\epsilon$ takes an integer value, and that products are indexed in non-increasing order of preference weights, i.e., $v_1 \geq \cdots \geq v_n$. With these conventions, an assortment $S \subseteq \mathcal{N}$ is said to be block-based either if its cardinality is at most $1/\epsilon$, or when it can be written as $S = S_1 \cup S_2 \cup S_3$, where the latter sets are structured as follows:

- *Block 1:* The first set, $S_1$, is an arbitrary collection of $1/\epsilon$ products. These products will form the subset of heaviest products in this assortment.

- *Block 2:* Let $j$ be the highest-index product in $S_1$. The second subset of products in our assortment, $S_2$, is a contiguous block of products, starting from $j+1$. In other words, $S_2=\{j+1,j+2,\ldots,k\}$, for some $k\leq n$.

- *Further blocks:* Let $h=k+1$, where $k$ is the highest-index product in $S_2$. We create a multiplicative grid across $[\epsilon\cdot v_h, v_h]$ as follows. The class $C_1$ consists of all products whose weight falls within $[(1-\epsilon) \cdot v_h, v_h]$. Then, the class $C_2$ consists of all products with weights in $[(1-\epsilon)^2\cdot v_h,(1-\epsilon)\cdot v_h)$. And so on, until we hit the lower bound $\epsilon\cdot v_h$. Letting $C_1, \ldots, C_L$ be the resulting classes, one can easily verify that the number of classes is $L = O( \frac{ 1 }{ \epsilon} \log \frac{ 1 }{ \epsilon} )$. Now, for each class $C_\ell$, we select a number $N_\ell$ of products to be included in the assortment, and then simply include the $N_\ell$ products with largest indices from this class. For example, if a certain class is $\{8,9,10,11\}$, then we include $\emptyset$ when $N_{\ell} = 0$, $\{11\}$ when $N_{\ell} = 1$, up to $\{11,10,9,8\}$ when $N_{\ell} = 4$. We will refer to the union of these sets over all classes as $S_3$.
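To make the grid construction concrete, the multiplicative classes underlying $S_3$ can be sketched as follows; the function name and boundary conventions (closed upper class, capping the last class at $C_L$) are our own reading of the construction above.

```python
import math

def weight_classes(weights, v_h, eps):
    """Partition product weights (all assumed to lie in [eps*v_h, v_h]) into the
    multiplicative classes C_1, ..., C_L: C_1 = [(1-eps)v_h, v_h], and
    C_r = [(1-eps)^r v_h, (1-eps)^(r-1) v_h) for r >= 2, capping at C_L.
    Illustrative sketch only."""
    L = math.ceil(math.log(eps) / math.log(1 - eps))  # L = O((1/eps) log(1/eps))
    classes = [[] for _ in range(L)]
    for w in weights:
        # smallest r >= 1 with w >= (1 - eps)^r * v_h, capped at L
        r = 1
        while w < (1 - eps) ** r * v_h and r < L:
            r += 1
        classes[r - 1].append(w)
    return classes

# With eps = 1/4 and v_h = 1, there are L = 5 classes over [1/4, 1]:
print(weight_classes([0.8, 0.7, 0.5, 0.3], 1.0, 0.25))
# [[0.8], [0.7], [0.5], [], [0.3]]
```

Enumerating block-based assortments then amounts to choosing one count $N_\ell$ per class, which is the source of the $O(n^L)$ term discussed next.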
We proceed by explaining how to explicitly construct the entire family of block-based assortments in $O(n^{O( \frac1\epsilon\log( \frac1\epsilon) )})$ time. First, there are $O(n^{O(1/\epsilon)})$ options to construct an assortment whose cardinality is strictly smaller than $1/\epsilon$. Let us now construct the assortments whose cardinality is at least $1/\epsilon$. In order to create the first block, $S_1$, it is easy to see that there are $O(n^{ O(1/\epsilon)})$ options. Since the second block $S_2$ is contiguous, there are at most $O(n)$ options here. For the remaining blocks, we create $L$ classes, and for each of these classes, we simply choose the number of products $N_{\ell}$ to be included. Therefore, there are $O(n^L) = O(n^{ O( \frac{ 1 }{ \epsilon} \log \frac{ 1 }{ \epsilon} ) })$ options to construct $S_3$.

#### The performance of block-based assortments.

Our algorithmic approach consists of enumerating over all block-based assortments. Since the evaluation oracle provided in Section [3.1](#compute){reference-type="ref" reference="compute"} can be implemented in $O(n^2T^3)$ time, our overall algorithm indeed has a running time of $O(T^{O(1)} \cdot n^{ O( \frac{ 1 }{ \epsilon} \log \frac{ 1 }{ \epsilon} )})$, as stated in Theorem [Theorem 10](#thm:algo){reference-type="ref" reference="thm:algo"}. The next result proves that at least one of the assortments we are enumerating over yields a $(1-\epsilon)$-approximation to [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}.

**Theorem 11**. *Letting $S^*$ be an optimal assortment for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}, there exists a block-based assortment $S$ for which $$\mathbb{E}(M(S))\geq (1-\epsilon)\cdot \mathbb{E}(M(S^*)).$$*

#### Definitions and notation.
In order to establish this theorem, we begin by introducing a number of useful definitions and their surrounding notation:

- For any assortment $S$, and for any $j=1,\ldots,|S|$, we define ${I}_{j}(S)$ as the index of the $j$-th heaviest product in $S$. That is, if $S=\{p_1,p_2, \ldots, p_{|S|} \}$ where $v_{p_1}\geq \cdots \geq v_{p_{|S|}}$, then ${I}_{j}(S)= p_j$.

- For any assortment $S$ with $|S|> 1/\epsilon$, we define its *$\epsilon$-hole* as the product with index $h_{\epsilon}(S)$ where $$h_{\epsilon}(S) = \min\{j \notin S : j \geq {I}_{1/\epsilon}(S)\}.$$ Put simply, the $\epsilon$-hole refers to the heaviest product in the universe $\mathcal{N}$ that is not part of the assortment $S$, but is not heavier than the $(1/\epsilon)$-th heaviest product of $S$.

- Finally, we say that an assortment $S$ is *$\epsilon$-restricted* either when $|S|\leq 1/\epsilon$, or when $|S|> 1/\epsilon$ and the weight of each product in $S$ is at least a fraction $\epsilon$ of the weight of its $\epsilon$-hole, i.e., ${v_i\geq \epsilon\cdot v_{h_{\epsilon}(S)}}$ for every $i \in S$.

It is worth noting that, by definition, all block-based assortments are $\epsilon$-restricted.

#### Analysis.

Given these definitions, the proof of Theorem [Theorem 11](#thm:block_based_good){reference-type="ref" reference="thm:block_based_good"} consists of two steps. In Lemma [Lemma 12](#lem:thmstep1){reference-type="ref" reference="lem:thmstep1"}, we prove that for any sufficiently large assortment $S$, there exists an $\epsilon$-restricted assortment $\widehat S$ and a virtual product $k$, such that the expected maximum load of $\widehat S \cup \{ k \}$ is at least as large as that of $S$. The proof of this lemma makes use of the Merge and Transfer operations introduced in Section [3.2](#subsec:PTASlemmas){reference-type="ref" reference="subsec:PTASlemmas"}, transforming the assortment $S$ into the union of an $\epsilon$-restricted assortment and a virtual product.
The detailed proof is included in Appendix [7.8](#apx:thmstep1){reference-type="ref" reference="apx:thmstep1"}. **Lemma 12**. *Let $S\subseteq \mathcal{N}$ be an assortment with $|S| > 1/\epsilon$. Then, there exists an $\epsilon$-restricted assortment $\widehat S\subseteq \mathcal{N}$ with $|\widehat S| > 1/\epsilon$, and a virtual product $k$ with $v_k \leq v_{h_{\epsilon}(\widehat S)}$, such that $$\mathbb{E}\left(M\left(\widehat S\cup\{k\}\right)\right) \geq \mathbb{E}\left(M(S)\right).$$* In the following lemma, whose proof is presented in Appendix [7.9](#apx:thmstep2){reference-type="ref" reference="apx:thmstep2"}, we show that the assortment $\widehat S\cup\{k\}$ obtained in Lemma [Lemma 12](#lem:thmstep1){reference-type="ref" reference="lem:thmstep1"} can be transformed into a block-based assortment, losing at most a factor $\epsilon$ in its objective value. **Lemma 13**. *Let $\widehat S\subseteq \mathcal{N}$ be an $\epsilon$-restricted assortment with $|\widehat S|> 1/\epsilon$, and let $k$ be a virtual product with $v_k \leq v_{h_{\epsilon}(\widehat S)}$. Then, there exists a block-based assortment $\widetilde S\subseteq \mathcal{N}$ such that $$\mathbb{E}(M(\widetilde S)) \geq (1-\epsilon)\cdot \mathbb{E}\left(M\left(\widehat S\cup\{k\}\right)\right).$$* To conclude the proof of Theorem [Theorem 11](#thm:block_based_good){reference-type="ref" reference="thm:block_based_good"}, let $S^*$ be an optimal assortment for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}. When $|S^*|\leq 1/\epsilon$, we know that $S^*$ is a block-based assortment by definition, and it remains to consider the opposite case, where $|S^*|> 1/\epsilon$. By Lemma [Lemma 12](#lem:thmstep1){reference-type="ref" reference="lem:thmstep1"}, there exists an $\epsilon$-restricted assortment $\widehat S$ with $|\widehat S| > 1/\epsilon$ and a virtual product $k$ with $v_k \leq v_{h_{\epsilon}(\widehat S)}$ such that $\mathbb{E}(M(\widehat S\cup \{k\}))\geq \mathbb{E}(M(S^*))$. 
In turn, since $\widehat S$ and $k$ satisfy the conditions of Lemma [Lemma 13](#lem:thmstep2){reference-type="ref" reference="lem:thmstep2"}, there exists a block-based assortment $\widetilde S$ such that $\mathbb{E}(M(\widetilde S)) \geq (1-\epsilon)\cdot \mathbb{E}(M(\widehat S\cup\{k\})).$ Combining these two inequalities yields $\mathbb{E}(M(\widetilde S)) \geq (1-\epsilon)\cdot \mathbb{E}(M(S^*))$, as desired. # The Dynamic Setting: Constant-Factor Adaptivity Gaps {#sec:adaptivitygap} In this section, we examine how well an optimal static assortment performs in comparison to an optimal adaptive policy. Specifically, we study the adaptivity gap of maximum load optimization, namely, the worst-possible ratio between the expected maximum load of an optimal adaptive policy and that of an optimal static policy, over all problem instances. Quite surprisingly, we establish an adaptivity gap of at most $4$, showing that statically offering a weight-ordered assortment guarantees a $1/4$-approximation to [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. Moreover, we show that this gap reduces to at most $2$ when all products have identical preference weights. #### Outline. In Section [4.1](#subsec:notationresult){reference-type="ref" reference="subsec:notationresult"}, we provide some useful notation and describe our main adaptivity gap results in greater detail. In Section [4.2](#subsec:auxlemmas){reference-type="ref" reference="subsec:auxlemmas"}, we present several auxiliary claims that will be helpful in subsequent analysis. Then, we prove an adaptivity gap of at most $4$ for general instances in Section [4.3](#subsec:proofadaptivity){reference-type="ref" reference="subsec:proofadaptivity"}, deferring the improved finding for the identical-weight setting to Appendix [8.5](#apx:eqv){reference-type="ref" reference="apx:eqv"}.
Finally, we construct a [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} instance, demonstrating that the adaptivity gap of this problem is at least $4/3$. ## Notation and main results {#subsec:notationresult} #### Notation. Let us start by introducing some helpful notation and definitions. In what follows, we generalize the notion of preference-weight-ordered assortments to any universe of products $U\subseteq\mathcal{N}$, still prioritizing those with higher preference weights. Formally, suppose that $U=\{i_1,\ldots,i_k\}\subseteq \mathcal{N}$ and that $v_{i_1}\geq \cdots\geq v_{i_k}$. We say that the assortment $S\subseteq U$ is preference-weight-ordered in $U$ when $S = \{i_1,\ldots,i_m\}$ for some $1 \leq m\leq k$. With this definition, let ${\sf OPT}^{\mathrm{WO}}(U)$ be the optimal expected maximum load achievable by a static preference-weight-ordered assortment in $U$. In other words, $${\sf OPT}^{\mathrm{WO}}(U) = \max_{m=1,\ldots,k} \mathbb{E}\left(M\left(\{i_1,\ldots,i_m\}\right)\right).$$ In addition, we define ${\sf OPT}^{\mathrm{DP}}(U)$ as the expected maximum load of an optimal dynamic policy, using only products in $U$. #### Main results. Quite surprisingly, we show that by statically offering a weight-ordered assortment, one can attain an expected maximum load of at least $1/4$ of the optimal expected maximum load of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. The proof of this result appears in Section [4.3](#subsec:proofadaptivity){reference-type="ref" reference="subsec:proofadaptivity"}. **Theorem 14**. 
*There exists a static weight-ordered assortment that provides a $1/4$-approximation to [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}, i.e., $${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq \frac{1}{4} \cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N}).$$* It is worth noting that ${\sf OPT}^{\mathrm{DP}}(\mathcal{N})$ represents the expected maximum load of an optimal dynamic policy for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}, where all products in $\mathcal{N}$ are considered. Additionally, ${\sf OPT}^{\mathrm{WO}}(\mathcal{N})$ denotes the expected maximum load of an optimal weight-ordered static assortment in $\mathcal{N}$, which is clearly upper-bounded by the expected maximum load of an optimal assortment for [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"}. Therefore, the adaptivity gap of this setting is at most $4$. Moreover, as explained in Section [3.1](#compute){reference-type="ref" reference="compute"}, we can compute ${\sf OPT}^{\mathrm{WO}}(\mathcal{N})$ in polynomial time, meaning that the above theorem yields a $1/4$-approximation for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. When all products are associated with identical preference weights, we derive an improved adaptivity gap of $2$, as stated in the next theorem, whose proof is provided in Appendix [8.5](#apx:eqv){reference-type="ref" reference="apx:eqv"}. **Theorem 15**. *Suppose that all products have identical preference weights. Then, there exists a static weight-ordered assortment that provides a $1/2$-approximation to [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}, i.e., $${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq \frac{1}{2} \cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N}) .$$* ## Auxiliary claims {#subsec:auxlemmas} #### Upper-bounding $\boldsymbol{{\sf OPT}^{\mathrm{DP}}(U)}$. 
For a fixed universe of products $U \subseteq \mathcal{N}$, recall that ${\sf OPT}^{\mathrm{DP}}(U)$ represents the expected maximum load attained by an optimal dynamic policy with respect to the universe $U$. In Lemma [Lemma 16](#lem:multinom){reference-type="ref" reference="lem:multinom"} below, whose proof is given in Appendix [8.1](#apx:multinom){reference-type="ref" reference="apx:multinom"}, we provide an upper bound on ${\sf OPT}^{\mathrm{DP}}(U)$ which will serve as an initial step towards bounding the expected maximum load of an optimal dynamic policy. Specifically, we consider a random Multinomial vector $(L_1,\ldots,L_k)$, and establish a condition on its vector of probabilities and on the preference weights of products in $U$, ensuring that the expected maximal component of this Multinomial vector exceeds ${\sf OPT}^{\mathrm{DP}}(U)$. **Lemma 16**. *Let $(L_1,\ldots,L_k)$ be a random Multinomial vector with $T$ trials and probability vector $(p_1,\ldots,p_k)$. Let $U\subseteq\mathcal{N}$ be a set of products with $$\label{eq:multinom} \min_{i=1,\ldots,k}p_i\geq \max_{i\in U} \frac{v_i}{1+v_i}.$$ Then, $$\mathbb{E}(\max(L_1,\ldots,L_k)) \geq {\sf OPT}^{\mathrm{DP}}(U).$$* To interpret the condition in Equation [\[eq:multinom\]](#eq:multinom){reference-type="eqref" reference="eq:multinom"}, consider an optimal dynamic policy for the maximum load assortment problem with respect to the universe $U$. Whenever this policy offers an assortment $S$ to some customer $t \in [T]$, the MNL choice probability of each product $i\in S$ is $\frac{ v_i }{ 1+v(S) } \leq \frac{ v_i }{ 1+v_i }$. Therefore, Equation [\[eq:multinom\]](#eq:multinom){reference-type="eqref" reference="eq:multinom"} can be viewed as a condition where, regardless of the offered assortment, the MNL choice probabilities of all products in $U$ are upper-bounded by $\min_{i=1,\ldots,k}p_i$. 
Under this condition, we prove that the expected maximal component of $(L_1,\ldots,L_k)$ is an upper bound on the expected maximum load of an optimal dynamic policy. #### Consequences of offering larger subsets. Consider an adaptive policy $A$ for the maximum load assortment problem with respect to the universe $U$. We denote by ${\cal E}^A(U)$ the expected maximum load achieved by this policy. Additionally, for each $t=1,\ldots,T$, we make use of $S^A_t$ to designate the subset of $U$ offered by $A$ to customer $t$. This subset is clearly random, since it depends on the random selections made by previously arriving customers. Now, consider two adaptive policies, $A$ and $B$, such that for any $t \in [T]$, the assortment offered by policy $A$ to customer $t$ is almost surely a subset of the one offered by policy $B$ to this customer. In addition, we assume that the difference in total preference weight between these two assortments is almost surely upper-bounded by some $\epsilon \geq 0$. The next lemma, whose proof is included in Appendix [8.3](#apx:augment){reference-type="ref" reference="apx:augment"}, gives a lower bound on the ratio between the expected maximum loads of the two policies as a function of $\epsilon$. **Lemma 17**. *Let $A$ and $B$ be two adaptive policies for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} with respect to the universe $U$. For every $t \in [T]$, suppose that $S^A_t\subseteq S^B_t$ and $v( S^B_t \setminus S^A_t) \leq \epsilon$ almost surely. Then, $${\cal E}^B(U)\geq \frac1{1+\epsilon}\cdot{\cal E}^A(U).$$* #### Subadditivity. Finally, we prove that ${\sf OPT}^{\mathrm{DP}}(\cdot)$ is a subadditive function, as formally stated below. **Lemma 18** (Subadditivity).
*For any $U_1,U_2\subseteq \mathcal{N}$, we have $${\sf OPT}^{\mathrm{DP}}(U_1\cup U_2) \leq {\sf OPT}^{\mathrm{DP}}(U_1) + {\sf OPT}^{\mathrm{DP}}(U_2).$$* The proof of this result relies on a coupling argument involving three dynamic policies: (1) A policy that offers only products from the universe $U_1$; (2) A policy offering only products from $U_2$; and (3) An optimal policy that offers products from the combined universe, $U_1\cup U_2$. Within this coupling, we demonstrate that for at least one of the policies (1) and (2), its maximum load is almost surely at least as large as that of policy (3) with respect to the constructed coupling. The complete proof is provided in Appendix [8.4](#apx:subadd){reference-type="ref" reference="apx:subadd"}. ## Proof of Theorem [Theorem 14](#thm:adaptivitygap){reference-type="ref" reference="thm:adaptivitygap"} {#subsec:proofadaptivity} #### The easy regime: $\boldsymbol{v(\mathcal{N})< 1}$. In this case, recalling that $v_1\geq \cdots\geq v_n$, we simply make use of the static policy $A$ where all products in $\mathcal{N}$ are offered to every customer $t \in [T]$. We will show that the expected maximum load of this policy is at least ${\sf OPT}^{\mathrm{DP}}(\mathcal{N})/2$. To this end, focusing on an optimal dynamic policy $A^*$, let $S_t^{ A^* }$ be the random assortment it offers to customer $t$. We have $S_t^{ A^* } \subseteq \mathcal{N}$ and $v(\mathcal{N}\setminus S_t^{ A^* } )\leq 1$, since $v(\mathcal{N})<1$ by the case hypothesis. Therefore, Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"} implies that the expected maximum load ${\cal E}^A(\mathcal{N})$ of our static policy is at least ${\cal E}^{A^*}(\mathcal{N}) / 2$. In other words, $\mathbb{E}(M(\mathcal{N})) \geq {\sf OPT}^{\mathrm{DP}}(\mathcal{N})/2$. Clearly, $\mathcal{N}$ is also preference-weight ordered, meaning that ${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq {\sf OPT}^{\mathrm{DP}}(\mathcal{N})/2$. 
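The analysis that follows repeatedly compares expected maximum loads against the expected maximal component of a Multinomial vector, as in Lemma [Lemma 16](#lem:multinom){reference-type="ref" reference="lem:multinom"}. As a sanity-check aid, this quantity can be estimated by simulation; the following is a minimal Python sketch (hypothetical helper, not part of the algorithm, standard library only):

```python
import random

def expected_max_multinomial(T, probs, trials=20000, seed=0):
    """Monte Carlo estimate of E(max(L_1,...,L_k)), where (L_1,...,L_k) is
    Multinomial with T trials and probability vector probs. The probabilities
    may sum to less than 1; any leftover mass is treated as a discarded
    no-purchase option, as in the MNL model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        loads = [0] * len(probs)
        for _ in range(T):
            u, acc = rng.random(), 0.0
            for i, p in enumerate(probs):
                acc += p
                if u < acc:        # this trial lands in component i
                    loads[i] += 1
                    break
        total += max(loads)
    return total / trials
```

For example, with $T=2$ trials and probabilities $(1/2,1/2)$, the maximum is $2$ with probability $1/2$ and $1$ otherwise, so the estimate should be close to $3/2$.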
#### Overview of the difficult regime: $\boldsymbol{v(\mathcal{N}) \geq 1}$. Let $k$ be the minimal integer for which $\sum_{i=1}^kv_i\geq 1$, and consider the assortment $U=\{1,\ldots,k\}$. In the following, we argue that by statically offering this assortment to all customers, the expected maximum load is at least ${\sf OPT}^{\mathrm{DP}}(\mathcal{N})/4$. In other words, $\mathbb{E}(M(U))\geq {\sf OPT}^{\mathrm{DP}}(\mathcal{N})/4$. Since $U$ is weight-ordered, the latter bound would imply that ${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq {\sf OPT}^{\mathrm{DP}}(\mathcal{N})/4.$ For this purpose, by Lemma [Lemma 18](#lem:subadd){reference-type="ref" reference="lem:subadd"}, we know that ${\sf OPT}^{\mathrm{DP}}(\cdot)$ is a subadditive function, meaning in particular that $$\label{eq:subadd} {\sf OPT}^{\mathrm{DP}}(\mathcal{N}) \leq {\sf OPT}^{\mathrm{DP}}(U)+{\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U).$$ Now, let $(\widehat L_1,\ldots,\widehat L_k)$ be a Multinomial vector with $T$ trials and probability vector $( p_1,\ldots,p_k)$, where $p_i = \frac{ v_i }{ v(U) }$ for every $i=1,\ldots,k$. In the next two lemmas, whose proofs appear in the sequel, we show that both ${\sf OPT}^{\mathrm{DP}}(U)$ and ${\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U)$ are upper bounded by $\mathbb{E}(\widehat M)$, where $\widehat M = \max_{i=1,\ldots,k}\widehat L_i$. **Lemma 19**. *${\sf OPT}^{\mathrm{DP}}(U) \leq \mathbb{E}(\widehat M)$.* **Lemma 20**. *${\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U) \leq \mathbb{E}(\widehat M)$.* On the other hand, we argue that the expected maximum load achieved by the static assortment $U$ is at least $\mathbb{E}(\widehat M)/2$. **Lemma 21**. *$\mathbb{E}(M(U)) \geq \mathbb{E}(\widehat M)/2$.* We are now ready to complete the proof of Theorem [Theorem 14](#thm:adaptivitygap){reference-type="ref" reference="thm:adaptivitygap"}. 
To this end, noting that the assortment $U$ is preference-weight-ordered, we know by Lemma [Lemma 21](#lem:3){reference-type="ref" reference="lem:3"} that ${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq \mathbb{E}(\widehat M)/2$. Consequently, $${\sf OPT}^{\mathrm{WO}}(\mathcal{N}) \geq \frac{ 1 }{ 2 } \cdot \mathbb{E}(\widehat M) \geq \frac{ 1 }{ 4 } \cdot \left( {\sf OPT}^{\mathrm{DP}}(U) + {\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U) \right) \geq \frac{ 1 }{ 4 } \cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N}),$$ where the second inequality follows from Lemmas [Lemma 19](#lem:1){reference-type="ref" reference="lem:1"} and [Lemma 20](#lem:2){reference-type="ref" reference="lem:2"}, and the third inequality is precisely [\[eq:subadd\]](#eq:subadd){reference-type="eqref" reference="eq:subadd"}. #### Proof of Lemma [Lemma 19](#lem:1){reference-type="ref" reference="lem:1"}. {#proof-of-lemma-lem1.} Let $(\widetilde L_1,\ldots,\widetilde L_k)$ be the random loads of the products when employing an optimal adaptive policy for the universe $U$, and let $\widetilde M=\max_{i\in U}\widetilde L_i$. By definition, ${\sf OPT}^{\mathrm{DP}}(U) = \mathbb{E}(\widetilde M)$, and our objective is to prove that $\mathbb{E}(\widetilde M) \leq \mathbb{E}(\widehat M)$. For this purpose, we begin by observing that, for every $i\in [k]$, $$\label{eq:obs} \frac{v_i}{v(U)} > \frac{v_i}{1+v_k}\geq \frac{v_i}{1+v_i},$$ where the first and second inequalities hold respectively since $\sum_{j=1}^{k-1} v_j < 1$, by definition of $k$, and since $v_1 \geq \cdots\geq v_k$. Looking into Equation [\[eq:obs\]](#eq:obs){reference-type="eqref" reference="eq:obs"}, its left hand side is exactly the probability of component $i$, in the process of generating the random Multinomial vector $(\widehat L_1,\ldots,\widehat L_k)$. 
The right hand side is the choice probability of product $i$ when the single-product assortment $\{i\}$ is offered, which is an upper bound on the choice probability of product $i$ at any step of generating $(\widetilde L_1,\ldots,\widetilde L_k)$. In other words, letting $S_t^{ A^* } \subseteq U$ be the random assortment offered by the optimal dynamic policy $A^*$ to customer $t \in [T]$, for every product $i \in [k]$, we have $$\label{eq:obs2} \phi_i(S_t^{ A^* }) \leq \frac{v_i}{1+v_i} \leq \frac{v_i}{v(U)}.$$ This observation implies the existence of a simple way to couple ${(\widetilde L_1,\ldots,\widetilde L_k)}$ and ${(\widehat L_1,\ldots,\widehat L_k)}$ such that ${\widetilde L_i\leq \widehat L_i}$, for every product $i \in [k]$. As a result, $\mathbb{E}(\widetilde M) \leq \mathbb{E}(\widehat M)$. #### Proof of Lemma [Lemma 20](#lem:2){reference-type="ref" reference="lem:2"}. {#proof-of-lemma-lem2.} The key idea is to notice that, for every pair of products $1 \leq i \leq k$ and $k+1 \leq j \leq n$, $$\frac{v_i}{v(U)} \geq \frac{v_i}{1+v_i} \geq \frac{v_j}{1+v_j}.$$ Here, the first inequality is precisely Equation [\[eq:obs\]](#eq:obs){reference-type="eqref" reference="eq:obs"}, and the second inequality holds since $v_1 \geq \cdots\geq v_n$. Therefore, $$\min_{i=1,\ldots,k}\frac{v_i}{v(U)}\geq\max_{j=k+1,\ldots,n} \frac{v_j}{1+v_j}.$$ Recall that $v_i/v(U)$ is the probability of picking option $i$ in the Multinomial vector ${(\widehat L_1,\ldots,\widehat L_k)}$. Therefore, by applying Lemma [Lemma 16](#lem:multinom){reference-type="ref" reference="lem:multinom"}, we have $$\mathbb{E}(\max{(\widehat L_1,\ldots,\widehat L_k)})\geq {\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U).$$ Finally, by definition of $\widehat M$, $$\mathbb{E}(\widehat M)\geq {\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U).$$ #### Proof of Lemma [Lemma 21](#lem:3){reference-type="ref" reference="lem:3"}.
{#proof-of-lemma-lem3.} Consider the static policy where we offer the weight-ordered assortment $U$ to every customer, and let $(L_0, L_1, \ldots, L_k)$ be the load vector associated with this policy. The latter vector follows a Multinomial distribution, where the choice probability of each product $i\in U$ is $\frac{ v_i }{ 1+v(U) }$ and the no-purchase option has probability $\frac{ 1 }{ 1+v(U) }$. On the other hand, consider the random vector $(\widehat L_1,\ldots,\widehat L_k)$, recalling that it has been defined as being Multinomial with $T$ trials and probabilities $( p_1,\ldots,p_k)$, where $p_i = \frac{ v_i }{ v(U) }$ for every $i=1,\ldots,k$. We complement this vector with a component $\widehat L_0$ whose associated probability is $0$. We proceed by applying Lemma [Lemma 5](#lem:probs){reference-type="ref" reference="lem:probs"} to the Multinomial vectors $(L_0, L_1, \ldots, L_k)$ and $(\widehat L_0, \widehat L_1,\ldots,\widehat L_k)$. Specifically, since $v(U)\geq 1$, we have $$\frac{v_i}{1+v(U)}\geq \frac12\cdot\frac{v_i}{v(U)},$$ implying that the conditions of this lemma are met with $\epsilon= 1/2$. It follows that ${\mathbb{E}(M(U))\geq \mathbb{E}(\widehat M)/2}$. ## Lower bound on the adaptivity gap {#subsec:lowbd} While Theorem [Theorem 14](#thm:adaptivitygap){reference-type="ref" reference="thm:adaptivitygap"} establishes an upper bound of $4$ on the adaptivity gap, we proceed to consider the opposite direction and provide a lower bound on this measure. In particular, we construct an instance of the maximum load assortment problem, demonstrating that the adaptivity gap of this setting is at least $4/3$. **Lemma 22**. *The adaptivity gap of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} is at least $4/3$.* In what follows, we consider an instance defined over a universe of $n$ products, each with a preference weight of $1$. In addition, the number of customers is $T=2$.
Let us compare the optimal adaptive policy against the performance of an optimal static assortment. #### The optimal dynamic policy. Since products have identical preference weights, it is easy to verify that the optimal dynamic policy starts by offering the whole universe of products to the first customer. With probability $n/(1+n)$, she selects some product, say $i$. In this event, the optimal policy will offer the assortment $\{i\}$ to the second customer, as adding any other product can only cannibalize product $i$. In the complementary event, the first customer selects the no-purchase option, and in this case, the optimal policy offers the whole universe of products to the second customer. Therefore, by conditioning on the choice of the first customer, the expected maximum load of the optimal dynamic policy is given by $$\label{eqn:bad_inst_opt} {\sf OPT}^{\mathrm{DP}}(\mathcal{N}) = \frac{n}{1+n}\cdot \left(1+\frac{1}{2}\right) + \frac{1}{1+n}\cdot \frac{n}{1+n} = \frac{ 3 }{ 2 } \cdot \left( 1 - \frac{ 1 }{ n+1 } \right) + \frac{ n }{ (1+n)^2 } .$$ #### The optimal static assortment. For every $k=1,\ldots,n$, we compute the expected maximum load achieved by statically offering an assortment $S_k$ consisting of $k$ products. For this assortment, the maximum load is $2$ if and only if the same product is selected by both customers, which happens with probability $\frac{ k }{(1+k)^2}$. Similarly, the maximum load is $0$ if and only if the no-purchase option is selected by both customers, which happens with probability $\frac{1} { (1+k)^2 }$. 
It follows that the maximum load is $1$ with probability $\frac{ k^2 + k }{ (1+k)^2 }$, and a simple calculation shows that $$\label{eqn:bad_inst_static} \mathbb{E}(M(S_k)) = \frac{k^2 + 3k}{(1+k)^2}.$$ To bound the latter expectation, elementary calculus arguments show that the function $x \mapsto \frac{x^2 + 3x}{(1+x)^2}$ attains its maximum value over $[0,\infty)$ at $x = 3$, and therefore $\max_{k \in [n]} \mathbb{E}(M(S_k)) \leq 9/8$. #### Lower bound on the adaptivity gap. By combining equations [\[eqn:bad_inst_opt\]](#eqn:bad_inst_opt){reference-type="eqref" reference="eqn:bad_inst_opt"} and [\[eqn:bad_inst_static\]](#eqn:bad_inst_static){reference-type="eqref" reference="eqn:bad_inst_static"}, we obtain an adaptivity gap of at least $$\lim_{n \to \infty} \frac{ {\sf OPT}^{\mathrm{DP}}(\mathcal{N}) }{ \max_{k \in [n]} \mathbb{E}(M(S_k)) } \geq \frac{ 8 }{ 9 } \cdot \lim_{n \to \infty} \left( \frac{ 3 }{ 2 } \cdot \left( 1 - \frac{ 1 }{ n+1 } \right) + \frac{ n }{ (1+n)^2 } \right) = \frac{ 4 }{ 3 } .$$ As a side note, due to considering the case of identical preference weights, by Theorem [Theorem 15](#thm:eqv){reference-type="ref" reference="thm:eqv"}, we know that the adaptivity gap in this case is upper bounded by $2$. It is important to note that the choice of the instance presented above is not arbitrary. Rather, it results from numerically optimizing the number of customers $T$ and their (uniform) preference weights to obtain the highest lower bound on the adaptivity gap. # The Dynamic Setting: Quasi-Polynomial $\boldsymbol{(1-\epsilon)}$-Approximate Policy {#sec:dynamic} In this section, we shift our focus towards designing a truly near-optimal policy for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. Specifically, for any $\epsilon> 0$, we propose a $(1-\epsilon)$-approximate adaptive policy, admitting a quasi-polynomial time implementation.
Our approach involves exploring a carefully selected class of policies with distinct properties, allowing one to dramatically reduce the search space of seemingly-intractable dynamic programming ideas. We start by presenting our main result in Section [5.1](#subsec:maindynamic){reference-type="ref" reference="subsec:maindynamic"}, stating the existence of a near-optimal dynamic policy that can be implemented in quasi-polynomial time. Subsequently, Section [5.2](#subsec:highr){reference-type="ref" reference="subsec:highr"} will provide a number of auxiliary lemmas and observations. In Sections [5.3](#subsec:policy){reference-type="ref" reference="subsec:policy"} and [5.4](#subsec:proofquasitime){reference-type="ref" reference="subsec:proofquasitime"}, we present the specifics of our algorithmic approach and establish its performance guarantees. ## Main result {#subsec:maindynamic} #### Main result. As previously mentioned, our primary result consists of a quasi-polynomial time approximation scheme (QPTAS) for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. The next theorem describes this finding in greater detail, noting that $O_{\epsilon}( \cdot )$ simply hides polynomial dependencies in $1/\epsilon$. **Theorem 23**. *For any $\epsilon> 0$, we can compute an adaptive policy for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} whose expected maximum load is within factor $1 - \epsilon$ of optimal. This policy can be implemented in $O( n^{ O_{\epsilon}( \log^3 n) } )$ time.* It is worth mentioning that the term "implementation" in this specific context encompasses two important aspects. Firstly, it includes any preprocessing steps undertaken prior to the beginning of the customer arrival process. Secondly, it contains the additional procedures required to compute a personalized assortment that will be offered to each arriving customer. #### Technical overview.
The fundamental challenge in addressing the dynamic setting arises from the exponential size of its dynamic programming state space (see Section [2.2](#subsec:DMLA){reference-type="ref" reference="subsec:DMLA"}). Thus, our initial focus lies in modifying the original instance, with the intent of arriving at a dramatically scaled-down state space. This alteration involves transforming the original universe of products $\mathcal{N}$ into a modified universe, where product weights are slightly altered. Additionally, we confine our exploration to a specific class of policies that truncates the arrival sequence once a predetermined threshold on the maximum load is reached, incurring at most an $\epsilon$-fraction loss in the objective function. While the idea of altering the space of products $\mathcal{N}$ helps mitigate the search space issue, it introduces a new source of complexity, due to the dissimilarity between the products in the new universe and our initial universe. Consequently, our second step consists of recovering a policy with respect to the original universe, while essentially preserving the expected maximum load. ## Useful claims {#subsec:highr} In this section, we introduce several auxiliary claims that will be helpful in designing our near-optimal policy as well as in its analysis. For convenience, we assume without loss of generality that $T\geq 2$ and $n\geq 2$. Indeed, when $T = 1$, it is optimal to offer the whole universe of products. Similarly, the setting of $n=1$ corresponds to having a single product, in which case it is optimal to offer this product to every customer. #### Two parametric regimes. Let us start by explaining how any given instance can be classified into two possible regimes, referred to as high-weight and low-weight. For this purpose, let $\alpha = v_{\max}/(1+v_{\max})$, which is precisely the choice probability of the heaviest product by a single customer, when it is the only one offered. 
Consequently, $T\alpha$ represents the expected load of this product when it is being statically offered to all customers. Then, the high-weight regime captures problem instances where $T\alpha\geq 12\ln(nT)/\epsilon^3$ or $v_{\max}\geq 1/\epsilon$, in which case we will show that a very simple static policy achieves a $(1-\epsilon)$-approximation. As explained later on, the core difficulty lies within the low-weight regime, where $T\alpha< 12\ln(nT)/\epsilon^3$ and $v_{\max}< 1/\epsilon$, which will be the main focus of this section. #### High weight regime: $\boldsymbol{T\alpha\geq 12\ln(nT)/\epsilon^3}$ or $\boldsymbol{v_{\max}\geq 1/\epsilon}$. In this case, we prove that statically offering the heaviest product to all customers provides a $(1-\epsilon)$-approximation to [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}, as formally stated below. The proof of this result appears in Appendix [9.1](#apx:high){reference-type="ref" reference="apx:high"}. **Lemma 24**. *When $T\alpha \geq \frac{12\ln(nT)}{\epsilon^3}$ or $v_{\max}\geq 1/\epsilon$, the static policy that offers the heaviest product guarantees a $(1-\epsilon)$-approximation for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}.* Roughly speaking, the high-weight regime allows us to efficiently employ concentration bounds. These bounds demonstrate that, for any given policy, the event where its (random) maximum load exceeds $T\alpha$ by a non-negligible factor is highly improbable. Consequently, with the right choice of parameters, we will show that the optimal expected maximum load is upper-bounded by $(1+\epsilon)\cdot T\alpha$. As such, Lemma [Lemma 24](#lem:high){reference-type="ref" reference="lem:high"} will follow by recalling that $T\alpha$ represents the expected maximum load of statically offering the heaviest product to all customers. #### Low weight regime: $\boldsymbol{T\alpha< 12\ln(nT)/\epsilon^3}$ and $\boldsymbol{v_{\max}< 1/\epsilon}$.
In this case, we establish a polylogarithmic upper bound on the optimal expected maximum load, which will be utilized in Section [5.3](#subsec:policy){reference-type="ref" reference="subsec:policy"} to prove the near-optimality of a specific class of policies. Intuitively, under the low weight regime, products are not associated with high enough weights to prompt frequent selections of the same product. By formalizing this intuition, we show that within the low weight regime, the expected maximum load is polylogarithmically bounded. The proof of this result is included in Appendix [9.2](#apx:low){reference-type="ref" reference="apx:low"}. **Lemma 25**. *When $T\alpha< \frac{12\ln(nT)}{\epsilon^3}$ and $v_{\max}< 1/\epsilon$, we have ${\sf OPT}^{\mathrm{DP}}(\mathcal{N}) \leq \frac{300\ln^2(nT)}{\epsilon^6}$.* #### Stability of policies with respect to weight alterations. Here, we showcase the possibility of altering a universe of products by performing slight weight modifications, while still controlling the extent to which the expected maximum load is affected. For any universe of products $U \subseteq \mathcal{N}$, a dynamic policy that limits its offered assortments to products from $U$ will be referred to as a $U$-policy. Let us introduce an auxiliary universe $\widetilde U$, with a one-to-one correspondence to $U$, assuming without loss of generality that $U=\{1,\ldots,k\}$ and $\widetilde U=\{\widetilde 1,\ldots,\widetilde k\}$. We proceed by considering a technical condition on this pair of universes, stipulating that for every $i\in U$, the choice probabilities of the products $i$ and $\widetilde i$ are within factor $1-\epsilon$ of each other, with respect to any assortment. To formalize this condition, for any assortment $S\subseteq U$, we denote by $\widetilde S = \{\widetilde i \in \widetilde U \,\mid\,i\in S\}$ its corresponding assortment in $\widetilde U$. 
Given $\delta \in [0,1)$, we say that the universes $U$ and $\widetilde U$ satisfy the $\delta$-tightness condition if, for every assortment $S\subseteq U$ and for every product $i\in S$, we have $$\label{eq:condbound} \phi_{i}(S)\geq (1-\delta)\cdot\phi_{\widetilde i}(\widetilde S).$$ In Lemma [Lemma 26](#lem:alteruniverse){reference-type="ref" reference="lem:alteruniverse"}, we show that this condition is sufficient to prove that, for any $\widetilde U$-policy $\widetilde P$, there exists an analogous $U$-policy $P$ whose expected maximum load deviates only slightly from that of $\widetilde P$. For ease of notation, we designate the expected maximum loads of these policies by ${\cal E}^P$ and ${\cal E}^{\widetilde P}$. Interestingly, along the proof of this claim, whose details are provided in Appendix [9.3](#apx:altering){reference-type="ref" reference="apx:altering"}, the implementation time of $P$ will be shown to match that of $\widetilde P$, up to factors that are polynomial in $n$ and $T$. **Lemma 26**. *Suppose that $U$ and $\widetilde U$ satisfy the $\delta$-tightness condition. Then, for every $\widetilde U$-policy $\widetilde P$, there exists a $U$-policy $P$ such that ${\cal E}^P\geq (1-\delta)\cdot {\cal E}^{\widetilde P}$.* At a high level, our proof shows that given the policy $\widetilde P$, we can determine specific assortments of products from the universe $U$ to be offered at each step to the arriving customer, given the choices of all previous customers, thereby defining a new $U$-policy $P$. Using this elaborate form of simulation, we show that the achieved expected maximum load of the $U$-policy $P$ is at least $1-\delta$ times that of the $\widetilde U$-policy $\widetilde P$. 
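As a concrete illustration, the $\delta$-tightness condition can be verified by brute force on small universes. The sketch below (hypothetical helper names, not part of the paper's algorithm) uses the standard MNL choice probability $\phi_i(S)=v_i/(1+v(S))$ and checks the displayed condition over all assortments:

```python
from itertools import combinations

def phi(i, S, v):
    """MNL choice probability of product i within assortment S,
    phi_i(S) = v_i / (1 + v(S))."""
    return v[i] / (1 + sum(v[j] for j in S))

def is_delta_tight(v, v_tilde, delta):
    """Brute-force check of delta-tightness for small universes: for every
    assortment S and every i in S, phi_i(S) >= (1 - delta) * phi_~i(~S).
    v and v_tilde map the same product labels to original/altered weights."""
    products = list(v)
    for r in range(1, len(products) + 1):
        for S in combinations(products, r):
            for i in S:
                if phi(i, S, v) < (1 - delta) * phi(i, S, v_tilde):
                    return False
    return True
```

For instance, shrinking every weight by a common factor of at most $1+\delta$ produces a $\delta$-tight pair of universes, whereas inflating a weight in $\widetilde U$ can break the condition.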
## Constructing our policy {#subsec:policy}

According to Lemma [Lemma 24](#lem:high){reference-type="ref" reference="lem:high"}, statically offering the heaviest product to all customers achieves a $(1-\epsilon)$-approximation of the optimum under the high-weight regime. Therefore, in the remainder of this section, we focus on the low-weight regime. For convenience, let $${\cal B}= \frac{300\ln^2(nT)}{\epsilon^6}$$ be the upper bound we obtained in Lemma [Lemma 25](#lem:low){reference-type="ref" reference="lem:low"} on the expected maximum load ${\sf OPT}^{\mathrm{DP}}(\mathcal{N})$ of an optimal dynamic policy. The primary idea behind our adaptive policy is to only explore policies that:

- Stop offering products as soon as the maximum load reaches a value of ${\cal B}/\epsilon$, assuming without loss of generality that the latter term is an integer. Namely, the empty assortment is offered to all remaining customers once we hit this threshold.

- Avoid offering products of "tiny" preference weight, upper-bounded by $\epsilon^2 v_{\max}/n$.

We refer to such adaptive policies as *truncated* policies. The motivation behind this restriction is that it allows us to considerably shrink the search space, and in particular, to compute a near-optimal policy in quasi-polynomial time. The key remaining question concerns the performance guarantee of such policies. Our analysis will argue that, based on truncated policies, we indeed construct a near-optimal dynamic policy.

#### Step 1: Dropping light products.

We start by considering a new universe of products $U$, where we drop all products whose preference weight is at most $\epsilon^2\cdot v_{\max}/n$, i.e., $U= \{i\in \mathcal{N}\,\mid\, v_i>\epsilon^2\cdot v_{\max}/n\}$. Clearly, any $U$-policy is also an $\mathcal{N}$-policy, with the restriction of not offering any products whose preference weight is at most $\epsilon^2\cdot v_{\max}/n$.
We assume without loss of generality that $U=\{1,\ldots,k\}$ for some $k\leq n$, and let $v_{\min}$ be the smallest weight among the products in $U$. By construction, $v_{\max}/v_{\min} \leq n/\epsilon^2$.

#### Step 2: Creating weight classes.

We create a new universe of products $\widetilde U$ by modifying $U$ as follows. First, we partition the interval $[v_{\min},v_{\max}]$ geometrically by powers of $1+\epsilon$, into a collection of buckets $I_0,I_1,\ldots,I_J$, where $J = \lfloor\frac{\log(v_{\max}/v_{\min})}{\log(1+\epsilon)}\rfloor = O_{\epsilon}(\log n)$. Formally, $$I_j = \left[ v_{\min}\cdot(1+\epsilon)^j,v_{\min}\cdot(1+\epsilon)^{j+1} \right),$$ for $j=0,\ldots,J$. Now, we associate with each product $i\in U$ a product $\widetilde i$ whose weight is the left endpoint of the bucket containing $v_i$. In other words, the universe of products $\widetilde U=\{\widetilde 1,\ldots,\widetilde k\}$ is created such that, for every product $i=1,\ldots,k$, we determine the interval $I_j$ where $v_i$ resides, and then set ${v}_{\widetilde i} = (1+\epsilon)^j \cdot v_{\min}$. Consequently, the weights of the products in $\widetilde U$ take only $J+1 = O_{\epsilon}(\log n)$ possible values.

#### Step 3: Solving a reduced dynamic program.

We proceed by explaining how to compute an optimal truncated policy $\widetilde A$ with respect to the universe $\widetilde U$. To this end, let us define *constrained* load vectors $\boldsymbol{\ell}=(\ell_1,\ldots,\ell_k)$ as those whose maximal component is at most our threshold, i.e., $\max_{i=1,\ldots,k}\ell_i \leq {\cal B}/\epsilon$. We denote by ${\cal L }$ the collection of all such vectors. While at first glance, the number of constrained vectors is exponential in $n$, we present in Section [5.4](#subsec:proofquasitime){reference-type="ref" reference="subsec:proofquasitime"} an efficient representation of these vectors that effectively reduces their number to a quasi-polynomial magnitude.
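Before turning to the dynamic program itself, Steps 1 and 2 can be summarized in a few lines of code. The sketch below (with assumed toy weights) drops light products and rounds each surviving weight down to the left endpoint of its geometric bucket, so that every rounded weight $\widetilde v$ satisfies $\widetilde v \leq v < (1+\epsilon)\cdot \widetilde v$.

```python
import math

def prune_and_bucket(weights, eps, n):
    """Steps 1-2 (sketch): drop products with weight <= eps^2 * v_max / n,
    then round each surviving weight down to the left endpoint of its
    geometric bucket [v_min * (1+eps)^j, v_min * (1+eps)^(j+1))."""
    v_max = max(weights.values())
    U = {i: w for i, w in weights.items() if w > eps ** 2 * v_max / n}
    v_min = min(U.values())
    rounded = {i: v_min * (1 + eps) ** math.floor(
                   math.log(w / v_min) / math.log(1 + eps))
               for i, w in U.items()}
    return U, rounded

U, rounded = prune_and_bucket({1: 2.0, 2: 1.05, 3: 1.0, 4: 1e-6}, 0.1, 4)
# Product 4 is dropped; every rounded weight satisfies
# rounded[i] <= v_i < (1 + eps) * rounded[i].
```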
Now, in order to compute an optimal truncated policy $\widetilde A$ with respect to $\widetilde U$, we solve the so-called reduced dynamic program, for every load vector $\boldsymbol{\ell}\in {\cal L }$, given through the following recursive equations: $$\label{eq:reducedDP} M_{t}(\boldsymbol{\ell})=\max_{S\subseteq \widetilde U}\left(M_{t-1}(\boldsymbol{\ell})\cdot\phi_0(S)+\sum\limits_{i\in S}M_{t-1}(\boldsymbol{\ell}+\mathbf{e}_i)\cdot \phi_i(S)\right).$$ However, we modify the boundary conditions of this program, such that $M_t(\boldsymbol{\ell}) = {\cal B}/\epsilon$ for every vector $\boldsymbol{\ell}$ with $\max_{i=1,\ldots,k}\ell_i={\cal B}/\epsilon$. Note that the recursive equations here are identical to those characterizing [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} in Section [2.2](#subsec:DMLA){reference-type="ref" reference="subsec:DMLA"}, as these two programs only differ in their boundary condition.

#### Step 4: Recovering the approximate policy.

Our final step consists of exploiting Lemma [Lemma 26](#lem:alteruniverse){reference-type="ref" reference="lem:alteruniverse"}, for the purpose of converting the $\widetilde U$-policy $\widetilde A$ from Step 3 into an approximate policy $A$ with respect to $U$. To this end, we observe that these two universes satisfy the $\delta$-tightness condition [\[eq:condbound\]](#eq:condbound){reference-type="eqref" reference="eq:condbound"} with $\delta = \epsilon$. Indeed, for every assortment $S\subseteq U$ and for every product $i\in S$, we have $\phi_i(S) = \frac{v_i}{1+\sum_{j\in S}v_j}$. On the other hand, $v_{\widetilde j}\leq v_j\leq \frac{1}{1-\epsilon}\cdot v_{\widetilde j}$ for all $j\in U$, by construction.
Therefore, $$\phi_i(S) \geq \frac{v_{\widetilde i}}{1+\frac{1}{1-\epsilon}\cdot \sum_{j\in S}v_{\widetilde j}}\geq (1-\epsilon)\cdot \frac{v_{\widetilde i}}{1+\sum_{\widetilde j\in\widetilde S}v_{\widetilde j}}=(1-\epsilon)\cdot \phi_{\widetilde i}(\widetilde S).$$ Consequently, Lemma [Lemma 26](#lem:alteruniverse){reference-type="ref" reference="lem:alteruniverse"} enables us to recover a policy $A$ whose expected maximum load is $$\label{eq:lemmaequation} {\cal E}^{A}\geq (1-\epsilon)\cdot {\cal E}^{\widetilde A}.$$ ## Concluding the analysis {#subsec:proofquasitime} In order to prove Theorem [Theorem 23](#thm:quasitime){reference-type="ref" reference="thm:quasitime"}, we first propose an efficient representation of constrained vectors, allowing us to implement our overall approach in $O( n^{ O_{\epsilon}( \log^3 n) } )$ time. Subsequently, we prove that the expected maximum load obtained upon utilizing the policy $A$, constructed in Section [5.3](#subsec:policy){reference-type="ref" reference="subsec:policy"}, is within factor $1-\epsilon$ of the optimal expected maximum load. #### Implementation and running time analysis. In what follows, we will be assuming that, under the low-weight regime, the number of arriving customers $T$ is polynomial in $n$ and $1/\epsilon$. This is a consequence of the next claim, which we prove in Appendix [9.5](#apx:polywlog){reference-type="ref" reference="apx:polywlog"}. **Claim 27**. *Under the low-weight regime, when $T\geq 576 n^3/\epsilon^8$, by offering the whole universe of products to every customer we attain an expected maximum load of at least $(1-\epsilon) \cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$.* Let $\cal S$ be the collection of states considered by the reduced dynamic program in Step $3$. Each such state corresponds to a pair $(t,\boldsymbol{\ell})$, where $t$ is the remaining number of customers, and $\boldsymbol{\ell}\in {\cal L }$ is our current load vector. 
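Before compressing the state space, it may help to spell out a direct, uncompressed implementation of the reduced dynamic program from Step 3, including its truncation boundary condition (together with the natural base case $M_0(\boldsymbol{\ell})=\max_i \ell_i$, which we assume is inherited from the original program). The Python sketch below enumerates all assortments explicitly and is therefore only meant for toy instances; the weights and threshold are illustrative assumptions.

```python
from functools import lru_cache
from itertools import combinations

def reduced_dp(weights, T, cap):
    """M_t(loads): optimal expected maximum load over truncated policies,
    truncating once any load reaches `cap` (the threshold B/eps)."""
    k = len(weights)
    subsets = [c for r in range(k + 1) for c in combinations(range(k), r)]

    @lru_cache(maxsize=None)
    def M(t, loads):
        if max(loads) >= cap:            # truncation boundary condition
            return float(cap)
        if t == 0:
            return float(max(loads))
        best = 0.0
        for S in subsets:
            denom = 1.0 + sum(weights[i] for i in S)
            value = M(t - 1, loads) / denom          # no-purchase term
            for i in S:
                bumped = loads[:i] + (loads[i] + 1,) + loads[i + 1:]
                value += M(t - 1, bumped) * weights[i] / denom
            best = max(best, value)
        return best

    return M(T, (0,) * k)

print(reduced_dp((1.0,), 2, 10))   # single product, v=1: E[Binomial(2, 1/2)] = 1.0
```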
We remind the reader that ${\cal L }$ stands for the collection of constrained load vectors, namely, those where each product has a load of at most ${\cal B}/\epsilon$. We start by providing an efficient representation of each state $(t,\boldsymbol{\ell})\in \cal S$. To this end, for every $j\in \{0,\ldots, J\}$ and $m\in \{0,\ldots,{\cal B}/\epsilon\}$, let $N_{j,m}(\boldsymbol{\ell})$ be the number of products with weight $v_{\min}\cdot (1+\epsilon)^j$, whose load with respect to $\boldsymbol{\ell}$ is precisely $m$, i.e., $$N_{j,m}(\boldsymbol{\ell}) = \left| \{\widetilde i\in \widetilde U\mid v_{\widetilde i} = v_{\min}\cdot (1+\epsilon)^j\text{ and } \ell_i = m\} \right| .$$ Given this notation, we represent each vector $\boldsymbol{\ell}\in{\cal L }$ by its corresponding vector $N(\boldsymbol{\ell})= (N_{j,m}(\boldsymbol{\ell})\,\mid\,j\in \{0,\ldots, J\}\text{ and } m\in \{0,\ldots,{\cal B}/\epsilon\})$. Clearly, the collection $\{N(\boldsymbol{\ell}) \,\mid\, \boldsymbol{\ell}\in{\cal L }\}$ consists of only $O(n^{O(J{\cal B}/\epsilon)})$ vectors. Therefore, by representing each state $(t,\boldsymbol{\ell})\in \cal S$ by its corresponding vector $(t,N(\boldsymbol{\ell}))$, and recalling that ${\cal B}= O_{\epsilon}( \log^2(nT) )$, $J = O_{\epsilon}(\log n)$, and $T< 576 n^3/\epsilon^8$, our search space size becomes $$O(T n^{O_{\epsilon}(J{\cal B})} ) = O( n^{ O_{\epsilon}( \log^3 n ) } ).$$ As a side note, we remark that this representation is not injective. However, it encompasses all the information needed to solve our reduced dynamic program. Indeed, since all products with the same weight and load are interchangeable, it is easy to see that for each pair of constrained vectors $\boldsymbol{\ell}^1$ and $\boldsymbol{\ell}^2$ that share the same representation, we can transform $\boldsymbol{\ell}^1$ into $\boldsymbol{\ell}^2$ by performing a finite number of permutations on the names of the products that share the same weight and load. 
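The count-based representation $N(\boldsymbol{\ell})$ can be sketched as follows; the weight-class labels below are assumptions for illustration. Two load vectors that differ only by permuting products of the same weight class and load are mapped to the same representation, which is precisely the interchangeability argument above.

```python
from collections import Counter

def compact_state(loads, weight_class):
    """Represent a load vector by the counts N_{j,m}: how many products of
    weight class j currently carry load m (the product identities are forgotten)."""
    return Counter((weight_class[i], load) for i, load in loads.items())

# Products 1 and 2 sit in the same weight class; product 3 is in another.
weight_class = {1: 0, 2: 0, 3: 2}
loads_a = {1: 2, 2: 3, 3: 0}
loads_b = {1: 3, 2: 2, 3: 0}   # products 1 and 2 swapped their loads
print(compact_state(loads_a, weight_class) == compact_state(loads_b, weight_class))  # -> True
```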
To conclude that Steps 1-4 can be performed in $O( n^{ O_{\epsilon}( \log^3 n ) } )$ overall time, it remains to argue that the optimization problem in Equation [\[eq:reducedDP\]](#eq:reducedDP){reference-type="eqref" reference="eq:reducedDP"} can be efficiently solved, as stated in the next claim. This result will be established via a reduction to an appropriately constructed revenue maximization problem under the MNL model, which is solvable in polynomial time (see, e.g., [@talluri2004revenue]). The details of this proof are included in Appendix [9.6](#apx:revenuemax){reference-type="ref" reference="apx:revenuemax"}. **Lemma 28**. *Problem [\[eq:reducedDP\]](#eq:reducedDP){reference-type="eqref" reference="eq:reducedDP"} can be solved to optimality in $O(n)$ time.* #### Approximation guarantee. In the remainder of this section, we show that the policy $A$, as introduced in Section [5.3](#subsec:policy){reference-type="ref" reference="subsec:policy"}, yields the desired performance guarantee. In other words, we argue that ${\cal E}^A\geq (1-4 \epsilon)\cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$. To this end, we establish the following sequence of claims: - *Loss due to dropping tiny-weight products.* We remind the reader that the universe of products $U$ was created in Step 1 by eliminating all products whose preference weight is at most $\epsilon^2v_{\max}/n$. Our first claim is that this alteration leads to losing at most an $\epsilon$-fraction of the optimal expected maximum load. The proof of the next lemma is included in Appendix [9.7](#apx:quasi1){reference-type="ref" reference="apx:quasi1"}. **Lemma 29**. *${\sf OPT}^{\mathrm{DP}}(U)\geq (1-\epsilon)\cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$.* - *Loss due to altering product weights.* Recall that in Step 2, each product $i \in U$ was replaced by a corresponding product $\widetilde{i} \in \widetilde U$ whose weight is the left endpoint of the bucket containing $v_i$. 
In Lemma [Lemma 30](#lem:quasi2){reference-type="ref" reference="lem:quasi2"}, whose proof appears in Appendix [9.8](#apx:quasi2){reference-type="ref" reference="apx:quasi2"}, we show that the optimal expected maximum loads of $U$ and $\widetilde U$ are within factor $1-\epsilon$ of one another.

  **Lemma 30**. *$(1-\epsilon)\cdot {\sf OPT}^{\mathrm{DP}}(U) \leq{\sf OPT}^{\mathrm{DP}}(\widetilde U) \leq \frac{1}{1-\epsilon}\cdot {\sf OPT}^{\mathrm{DP}}(U)$.*

- *Loss due to considering truncated policies.* In Step 3, we make use of our reduced dynamic program to compute an optimal truncated $\widetilde U$-policy, $\widetilde A$. In the following lemma, we prove that $\widetilde A$ is in fact a $(1-\epsilon)$-approximate $\widetilde U$-policy. The proof of this result is deferred to Appendix [9.9](#apx:quasi3){reference-type="ref" reference="apx:quasi3"}.

  **Lemma 31**. *${\cal E}^{\widetilde A}\geq (1-\epsilon)\cdot {\sf OPT}^{\mathrm{DP}}(\widetilde U)$.*

In conclusion, it follows that the expected maximum load of the policy $A$ is $$\begin{aligned} {\cal E}^A &\geq & (1-\epsilon)\cdot {\cal E}^{\widetilde A} \\ & \geq & (1-\epsilon)^2\cdot {\sf OPT}^{\mathrm{DP}}(\widetilde U) \\ & \geq & (1-\epsilon)^3\cdot {\sf OPT}^{\mathrm{DP}}(U) \\ & \geq & (1-\epsilon)^4\cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N}) \\ & \geq & (1-4\epsilon)\cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N}).\end{aligned}$$ Here, the first inequality is simply a restatement of Equation [\[eq:lemmaequation\]](#eq:lemmaequation){reference-type="eqref" reference="eq:lemmaequation"}. The second inequality follows from Lemma [Lemma 31](#lem:quasi3){reference-type="ref" reference="lem:quasi3"}. In the third inequality, we plug in the result of Lemma [Lemma 30](#lem:quasi2){reference-type="ref" reference="lem:quasi2"}. The fourth inequality is a consequence of Lemma [Lemma 29](#lem:quasi1){reference-type="ref" reference="lem:quasi1"}. Finally, the last inequality follows from Bernoulli's inequality.
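The final step uses Bernoulli's inequality, $(1-\epsilon)^m\geq 1-m\epsilon$ for $\epsilon\in[0,1]$; a quick numeric check of the $m=4$ case used above:

```python
def bernoulli_holds(eps, m=4):
    """Bernoulli's inequality: (1 - eps)^m >= 1 - m*eps for eps in [0, 1]."""
    return (1 - eps) ** m >= 1 - m * eps

print(all(bernoulli_holds(e / 100.0) for e in range(101)))   # -> True
```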
# Concluding Remarks {#sec:conclusion} In conclusion, this paper introduces the novel concept of Maximum Load Assortment Optimization, developing innovative algorithmic techniques and analytical tools for both static and dynamic formulations. Quite surprisingly, in the static case, we devised a polynomial-time approximation scheme, allowing us to approach optimal expected loads within any degree of accuracy. Concurrently, we demonstrated that highly structured policies, based on weight-ordered assortments, deliver a viable $1/2$-approximation. In the more complex dynamic setting, the latter family of policies was shown to provide a $1/4$-approximation, thereby bounding the adaptivity gap of this problem within a factor of $4$. Moreover, we have crafted an adaptive policy that, remarkably, attains an expected maximum load within factor $1-\epsilon$ of optimal, while enabling a quasi-polynomial time implementation. This comprehensive study elucidates the potential of assortment optimization in manipulating customer choices towards maximum product selection, offering rigorous methods to address contemporary applications such as Attended Home Delivery and Preference-based Group Scheduling. We believe that our work lays solid foundations for Maximum Load Assortment Optimization, potentially being the onset of further exploration. In what follows, we discuss several intriguing open questions, along with particularly appealing extensions of our modeling approach. #### Hardness of the static formulation? Despite our best efforts, the computational complexity of the [\[SMLA\]](#SMLA){reference-type="ref" reference="SMLA"} problem remains an open question. Specifically, we still do not know whether this setting is NP-hard or whether optimal static assortments can be computed in polynomial time. 
This question is particularly challenging due to the unique problem structure, appearing to require either innovative optimization techniques or hardness proofs that are very different from what one typically encounters in assortment optimization.

#### Dynamic formulation: Improved bounds?

In the dynamic setting, devising a polynomial-time ($1-\epsilon$)-approximate policy poses a great technical challenge due to the inherent high-dimensional nature of this problem. Through new algorithmic techniques, we have been successful at attaining quasi-polynomial running times; however, further progress seems to necessitate yet-uncharted ideas. On a different front, even though we have established a lower bound of $4/3$ on the adaptivity gap of [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}, there is still meaningful room for improved constructions in this context.

#### Practical applications.

Our work's practical implications bring forth captivating questions. In future research, it would be interesting to conduct data-driven case studies, examining the applicability of maximum load assortment optimization in real-world settings, thereby bridging the gap between theory and practice. Furthermore, exploring new domains and industries beyond Attended Home Delivery and Preference-based Group Scheduling could uncover novel challenges and untapped practical impact.

#### Extensions.

Along the above-mentioned lines, extending our problem formulation to additional families of choice models, such as the Markov Chain model [@blanchet2016markov; @feldman2017revenue] or the non-parametric ranking-based model [@farias2013nonparametric; @aouad2018approximability], is an interesting direction for future research, potentially showing that our algorithmic framework is applicable in a broad spectrum of contexts. Yet another fundamental question is that of exploring a wide array of constraints on the offered assortments, such as cardinality, capacity, and matroid constraints.
Finally, it would be interesting to investigate an extended formulation, where our goal is to optimize the expected summation of $k$-highest loads rather than solely focusing on the maximum load. At present time, this objective function appears to be significantly more challenging to deal with. # Proofs from Section [3](#sec:SMLA){reference-type="ref" reference="sec:SMLA"} {#apx:static} ## Proof of Lemma [Lemma 3](#lem:merge){reference-type="ref" reference="lem:merge"} {#apx:merge} Assume without loss of generality that $S=\{1,\ldots,k\}$, and that we merge products $1$ and $2$ to obtain the assortment $\widetilde S$. Recall that $\mathbf L(S)=(L_1(S),L_2(S),\ldots,L_k(S))$ is the random variable denoting the load vector when offering the assortment $S$. With the same notation for $\widetilde S$, it is easy to verify that $\mathbf L(\widetilde S)$ is equal in distribution to $(L_1(S)+L_2(S),L_3(S),\ldots,L_k(S))$. Clearly, $$\max(L_1(S)+L_2(S),L_3(S),\ldots,L_k(S))\geq \max(L_1(S),L_2(S),\ldots,L_k(S)),$$ and by taking expectations we get $$\begin{aligned} \mathbb{E}(M(\widetilde S)) & = & \mathbb{E}( \max(L_1(S)+L_2(S),L_3(S),\ldots,L_k(S)) ) \\ & \geq & \mathbb{E}(\max (L_1(S), L_2(S),\ldots,L_k(S))) \\ & = & \mathbb{E}(M(S)).\end{aligned}$$ ## Proof of Lemma [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"} {#apx:charging} Assume without loss of generality that $S=\{1,\ldots,k\}$, and that we perform a $\delta$-weight transfer from product $2$ to product $1$, where $v_2 \leq v_1$ and $0 \leq \delta \leq v_2$, obtaining the assortment $\widetilde S_\delta$. Let $v=(v_1+v_2)/2$. For any $\omega \in[0,v]$, we define $1_\omega$ and $2_\omega$ as virtual products with respective preference weights $v+\omega$ and $v-\omega$. Let $S_\omega$ be the assortment that results from $S$, after replacing products $1$ and $2$ with the virtual products $1_\omega$ and $2_\omega$, i.e., $S_\omega = \{1_\omega, 2_\omega \}\cup \{3, \ldots,k\}$. 
In order to prove Lemma [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}, we establish the following claim in Appendix [7.3](#app:proof_claim_apx_mono){reference-type="ref" reference="app:proof_claim_apx_mono"}.

**Claim 32**. *The function $\omega\mapsto \mathbb{E}(M(S_\omega))$ is monotonically non-decreasing across the interval $[0,v]$.*

Let us show how this claim implies the result stated in Lemma [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}. Let $\omega_1=(v_1-v_2)/2$ and $\omega_2= \omega_1+\delta$, noting that $0 \leq \omega_1 \leq \omega_2 \leq v$. Hence, Claim [Claim 32](#claim:apx:mono){reference-type="ref" reference="claim:apx:mono"} implies that $\mathbb{E}(M( S_{\omega_2}))\geq \mathbb{E}(M(S_{\omega_1}))$. However, $v+\omega_1= v_1$ and $v-\omega_1=v_2$, which means that $S_{\omega_1} = S$. On the other hand, $v+\omega_2= v_1+\delta$ and $v-\omega_2=v_2-\delta$, which means that $S_{\omega_2} = \widetilde S_\delta$. Therefore, $\mathbb{E}(M(\widetilde S_\delta))\geq \mathbb{E}(M(S))$.

## Proof of Claim [Claim 32](#claim:apx:mono){reference-type="ref" reference="claim:apx:mono"} {#app:proof_claim_apx_mono}

To facilitate our analysis, let us define $V=\sum_{i\in S}v_i$, which represents the total preference weight of all products in the assortment $S$. It is important to note that, for any value of $\omega$, the total preference weight of all products in $S_\omega$ is equal to $V$ as well. Additionally, we define $p = v/(1+V)$ and $p_i=v_i/(1+V)$ for $i\in S$. Instead of working directly with the variable $\omega$, we perform the change of variables ${q=\omega/(1+V)}$. As such, by defining the function $f(q) = \mathbb{E}(M(S_{(1+V)\cdot q}))$, it suffices to prove that $f$ is monotonically non-decreasing across the interval $[0,p]$.
To this end, for any $\omega \in [0,v]$, when we offer assortment $S_{\omega}$, the MNL choice probability of product $1_{\omega}$ is given by $\frac{v+\omega}{1+V}=p+q$. Similarly, the MNL choice probability of product $2_{\omega}$ is given by $\frac{v-\omega}{1+V}=p-q$. For any other product $i \in \{3, \ldots, k \}$, its choice probability is $\frac{v_i}{1+V}=p_i$. Therefore, according to the closed-form expression of the maximum load in Equation [\[eq:multinomial\]](#eq:multinomial){reference-type="eqref" reference="eq:multinomial"}, $$f(q)=\sum_{\mathbf{x}\in\Delta_T}h(\mathbf{x},T) \cdot (p+q)^{x_1}\cdot(p-q)^{x_2}\cdot \left(\prod_{i=3}^k p_i^{x_i}\right)\cdot p_0^{T-\sum_{i=1}^kx_i}\cdot \max_{i=1,\ldots,k} x_i,$$ where $h(\mathbf x,T)$ denotes the multinomial coefficient and $\Delta_T$ is the support set of $\mathbf x$, i.e., $$h(\mathbf x,T)\coloneqq \binom{T}{ x_1,\ldots,x_k,T-\sum_{i=1}^k x_i} \qquad \text{ and } \qquad \Delta_T\coloneqq\left\{{{\mathbf x}\in\mathbb{N}^k\,\bigg\vert\,\sum_{i=1}^k x_i\leq T }\right\} .$$ Since $f$ is a polynomial function of $q$, it is differentiable with respect to $q$. Therefore, by differentiating, we obtain $\frac{d}{dq}f(q)=T \cdot (Q_1(q)- Q_2(q))$, where $$\begin{aligned} Q_1(q)=\sum_{\mathbf{x}\in\Delta_T,x_1 \geq 1 }h(\mathbf{x}-\mathbf{e}_1,T-1)\cdot (p+q)^{x_1-1} \cdot (p-q)^{x_2}\cdot \left(\prod_{i=3 }^k p_i^{x_i}\right) \cdot p_0^{T-\sum_{i=1}^kx_i}\cdot\max\limits_{i=1,\ldots,k} x_i, \\ Q_2(q)=\sum_{\mathbf{x}\in\Delta_T,x_2 \geq 1} h(\mathbf{x}-\mathbf{e}_2,T-1) \cdot (p+q)^{x_1} \cdot (p-q)^{x_2-1} \cdot \left(\prod_{i=3 }^k p_i^{x_i}\right) \cdot p_0^{T-\sum_{i=1}^kx_i}\cdot\max\limits_{i=1,\ldots,k} x_i. \end{aligned}$$ By examining $Q_1(q)$, we observe that it corresponds to the expected maximum load when we offer the assortment $S_{\omega}$, conditional on customer $T$ selecting product $1_{\omega} \in S_{\omega}$.
Similarly, $Q_2(q)$ corresponds to the expected maximum load when we offer the assortment $S_{\omega}$, conditional on customer $T$ selecting product $2_{\omega}\in S_{\omega}$. In other words, $$Q_1(q)= \mathbb{E}\left(M( S_{\omega})\,\vert \,X_{1_{\omega},T}(S_{\omega})=1\right) \qquad \text{and} \qquad Q_2(q)=\mathbb{E}\left(M(S_{\omega})\,\vert \,X_{2_{\omega},T}(S_{\omega})=1\right),$$ where $\{X_{1_{\omega},T}(S_{\omega})=1\}$ is the event in which customer $T$ selects product $1_{\omega}$, and similarly $\{ X_{2_{\omega},T}(S_{\omega})=1\}$ corresponds to customer $T$ selecting product $2_{\omega}$. In order to prove that $f$ is monotonically non-decreasing, it suffices to show that $Q_1(q)\geq Q_2(q)$. Let us define $Q(q)$ as the expected maximum load when we offer the assortment $S_{\omega}$, conditional on customer $T$ selecting the no-purchase option, i.e., $$Q(q)= \mathbb{E}\left(M( S_{\omega})\,\vert \,X_{0,T}(S_{\omega})=1\right) .$$ It is sufficient to show that $Q_1(q)-Q(q)\geq Q_2(q)-Q(q)$. By examining the difference $\{ M( S_{\omega})\,\vert \,X_{1_{\omega},T}(S_{\omega})=1 \}- \{ M( S_{\omega})\,\vert \,X_{0,T}(S_{\omega})=1 \}$ in the same probability space, we observe that this difference is $1$ if product $1_{\omega}$ has the highest load after $T-1$ customers; otherwise, the difference is $0$. Therefore, $Q_1(q)-Q(q)$ is exactly the probability that product $1_{\omega}$ has the highest load given $T-1$ customers. Similarly, $Q_2(q)-Q(q)$ is exactly the probability that product $2_{\omega}$ has the highest load given $T-1$ customers. However, the choice probability of product $1_{\omega}$ is $p+q$, which is greater than the choice probability of product $2_{\omega}$, given by $p-q$. Hence, a straightforward coupling argument on the customers $1, \ldots, T-1$ implies that $Q_1(q)-Q(q)\geq Q_2(q)-Q(q)$.
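As a computational sanity check of the Merge and Transfer arguments above, expected maximum loads can be evaluated exactly on toy instances by enumerating all choice sequences; the weights and horizon below are illustrative assumptions.

```python
from itertools import product

def expected_max_load(weights, T):
    """Exact E[max_i load_i] when the assortment with the given MNL weights
    is offered to each of T customers, by enumerating all choice sequences
    (option 0 is the no-purchase alternative)."""
    denom = 1.0 + sum(weights)
    probs = [1.0 / denom] + [w / denom for w in weights]
    total = 0.0
    for seq in product(range(len(weights) + 1), repeat=T):
        p = 1.0
        for c in seq:
            p *= probs[c]
        loads = [seq.count(i) for i in range(1, len(weights) + 1)]
        total += p * max(loads)
    return total

# Merge (Lemma 3): combining two products never hurts.
print(expected_max_load([3.0, 1.5], 3) >= expected_max_load([1.0, 2.0, 1.5], 3))  # -> True
# Transfer (Lemma 4): shifting weight toward the heavier product never hurts.
print(expected_max_load([2.5, 0.5], 3) >= expected_max_load([2.0, 1.0], 3))       # -> True
```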
## Proof of Lemma [Lemma 5](#lem:probs){reference-type="ref" reference="lem:probs"} {#apx:probs} Let us start by defining an intermediate Multinomial vector $\mathbf Z$ with parameters $(T,p_0^Z,\ldots,p_m^Z)$ where $p_i^Z = \min(p_i^Y,p_i^W)$ for all $i\in\{1,\ldots,m\}$, and $p_0^Z = 1-\sum_{i=1}^m p_i^Z$. By this definition, $p^W_i\geq p^Z_i$ for all $i\in\{1,\ldots,m\}$, and we can therefore easily couple $\mathbf W$ and $\mathbf Z$ such that $W_i\geq Z_i$ for all $i\in \{1,\ldots,m\}$, which implies that $$\label{eq:couple1} \mathbb{E}\left(\max_{i=1,\ldots,m}W_i\right) \geq \mathbb{E}\left(\max_{i=1,\ldots,m}Z_i\right).$$ Next, we introduce a coupling between $\mathbf Y$ and $\mathbf Z$. For every $i\in\{1,\ldots,m\}$ and for every $t\in[T]$, let $B_{i,t}$ be a Bernoulli random variable, with success probability $p^Z_i/p^Y_i$. These Bernoulli random variables are independent. Given the random variable $\mathbf Y$, we construct a new random vector $\mathbf{\widetilde Z}=(\widetilde Z_0,\ldots,\widetilde Z_m)$ as follows. If a trial $t \in [T]$ is classified in $0$ for $\mathbf{Y}$, then it is also classified in $0$ for $\mathbf{\widetilde Z}$. Otherwise, if the trial is classified in some $i\in\{1,\ldots,m\}$ for $\mathbf Y$, we distinguish between two cases: When $B_{i,t} = 1$, the trial is classified in $i$ for $\mathbf{\widetilde Z}$; when $B_{i,t} = 0$, this trial is classified in $0$. The first key idea to notice is that $\mathbf{\widetilde Z}$ is equal in distribution to $\mathbf{Z}$. Indeed, a trial $t$ is classified in $0$ for $\mathbf{\widetilde Z}$ if one of the following happens: (i) It was classified in $0$ for $\mathbf Y$; or (ii) It was classified in some option $i\in\{1,\ldots,m\}$ for $\mathbf Y$ and $B_{i,t}=0$. 
The probability of one of the two events happening is given by $p^Y_0+\sum_{i=1}^m p^Y_i\cdot (1-p_i^Z/p_i^Y)=p_0^Z.$ Similarly, a trial $t$ is classified in some option $i\in\{1,\ldots,m\}$ in $\mathbf{\widetilde Z}$ if both of the following happen: (i) The trial $t$ was classified in $i$ for $\mathbf{Y}$; and (ii) $B_{i,t}=1$. The probability of both these events happening is given by $p^Y_i\cdot p_i^Z/p_i^Y = p_i^Z$. Now, letting $I$ be the random index of the highest-load option among $Y_1,Y_2,\ldots,Y_m$, i.e., $I= \mathop{\mathrm{arg\,max}}_{i=1,\ldots,m}Y_i$, breaking ties by taking the smallest index, we have $$\mathbb{E}\left( \left. \max_{i=1,\ldots,m} \widetilde Z_i\,\right|\, \mathbf Y\right) \geq \mathbb{E}\left( \left. \widetilde Z_{I}\,\right|\, \mathbf Y\right) =\frac{p^Z_{I}}{p^Y_{I}} \cdot Y_{I}.$$ Here, the latter equality follows from the construction of $\mathbf{\widetilde Z}$, since every trial that is classified in $I$ for $\mathbf{Y}$ is classified in $I$ for $\widetilde{ \mathbf{Z}}$ with probability ${p^Z_{I}/p^Y_{I}}$. By the lemma's assumption, ${p^Z_{I}/p^Y_{I}}\geq 1-\epsilon$, and therefore $$\mathbb{E}\left( \left. \max_{i=1,\ldots,m} \widetilde Z_i\,\right|\, \mathbf Y\right)\geq (1-\epsilon) \cdot Y_{I} = (1-\epsilon)\cdot \mathbb{E}\left( \left. 
\max_{i=1,\ldots,m} Y_i\,\right|\, \mathbf Y\right).$$ Now by taking expectations in the previous inequality with respect to $\mathbf Y$ and applying the law of total expectation, we get $$\mathbb{E}\left(\max_{i=1,\ldots,m} \widetilde Z_i\right)\geq (1-\epsilon)\cdot \mathbb{E}\left(\max_{i=1,\ldots,m} Y_i\right).$$ Since $\mathbf{\widetilde Z}$ is equal in distribution to $\mathbf Z$, $$\label{eq:couple2} \mathbb{E}\left(\max_{i=1,\ldots,m} Z_i\right)\geq (1-\epsilon)\cdot \mathbb{E}\left(\max_{i=1,\ldots,m} Y_i\right).$$ Finally, combining Equations [\[eq:couple1\]](#eq:couple1){reference-type="eqref" reference="eq:couple1"} and [\[eq:couple2\]](#eq:couple2){reference-type="eqref" reference="eq:couple2"} gives the desired result, i.e., $$\mathbb{E}\left(\max_{i=1,\ldots,m}W_i\right)\geq (1-\epsilon)\cdot \mathbb{E}\left(\max_{i=1,\ldots,m} Y_i\right).$$

## Proof of Lemma [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"} {#apx:weights}

To establish the desired claim, we will apply Lemma [Lemma 5](#lem:probs){reference-type="ref" reference="lem:probs"}. Using the notation of the latter claim, let ${\bf Y}=(Y_0,Y_1,\ldots,Y_m)$ and ${\bf W}=(W_0,W_1,\ldots,W_m)$ be Multinomial vectors, with parameters $(T,p^Y_0,\ldots,p^Y_m)$ and $(T,p^W_0,\ldots,p^W_m)$, respectively, where for all $i\in\{1,\ldots,m\}$: $$p_i^Y= \frac{ v^+_i }{ 1+\sum_{j=1}^m v^+_j} \qquad \text{and} \qquad p_i^W= \frac{ v^-_i }{ 1+\sum_{j=1}^m v^-_j }.$$ We also define ${p_0^Y = 1-\sum_{j=1}^mp^Y_j}$ and ${p_0^W = 1-\sum_{j=1}^mp^W_j}$. Therefore, $\mathbb{E}(M(S^-))= \mathbb{E}(\max_{i=1,\ldots,m}W_i)$ and $\mathbb{E}(\max_{i=1,\ldots,m} Y_i)= \mathbb{E}(M(S^+))$.
Moreover, for all $i\in\{1,\ldots,m\}$, we have $$p_i^W = \frac{v^-_i}{1+\sum_{j=1}^m v^-_j} \geq \frac{v^-_i}{1+\sum_{j=1}^m v^+_j} \geq (1-\epsilon) \cdot \frac{v^+_i}{1+\sum_{j=1}^m v^+_j} = (1-\epsilon) \cdot p_i^Y,$$ where the first inequality is a consequence of the condition $v^-_j\leq v^+_j$ for every $j$, and the second inequality holds since $v^-_i\geq (1-\epsilon)\cdot v^+_i$. Therefore, applying Lemma [Lemma 5](#lem:probs){reference-type="ref" reference="lem:probs"} yields the desired result.

## Proof of Lemma [Lemma 8](#lem:virtual){reference-type="ref" reference="lem:virtual"} {#apx:virtual}

In what follows, we define a virtual assortment as a pair $(S,k)$, where $S\subseteq \mathcal{N}$ and $k$ is a virtual product with weight $v_k \leq \min_{i \in S} v_i$. In addition, we define the FILL operation as one that takes as input a virtual assortment $(S,k)$ and applies the following steps. First, if $S$ is preference-weight-ordered, then FILL simply returns $(S,k)$. Otherwise, let $h$ be the heaviest product in $\mathcal{N}\setminus S$, i.e., $h=\mathop{\mathrm{arg\,max}}_{i \in \mathcal{N}\setminus S} v_i$, where $\mathop{\mathrm{arg\,max}}$ breaks ties by selecting the product with lowest index. In addition, let ${\cal T} = \{i \in S : i > h \}$, to which we refer as the collection of *tail products*. Since $S$ is not preference-weight-ordered, ${\cal T} \neq \emptyset$. Note that $\{1,\ldots, h-1\}$ is the largest preference-weight-ordered assortment included in $S$ and that $S=\{1,\ldots, h-1\} \cup {\cal T}$. The FILL operation proceeds by considering two cases.

#### Case 1: $\boldsymbol{v_k+\sum_{i\in {\cal T}}v_i \leq v_h}$.

Here, the total weight of the tail products plus the virtual product $k$ is at most the weight of product $h$. In this case, we remove the products in ${\cal T}$ from $S$ and let $\widetilde S$ be the resulting assortment, i.e., $\widetilde S = S\setminus \{i \in S : i >h \}$.
Then, we merge the tail products along with the virtual product $k$ into a single virtual product, denoted by $\widetilde k$, whose weight is given by $v_{\widetilde k} = v_k+\sum_{i\in {\cal T}}v_i$. FILL returns the virtual assortment $(\widetilde S,\widetilde k)$. Clearly, the assortment $\widetilde S$ is preference-weight-ordered, since all tail products were removed. In addition, by the case hypothesis, ${v_{\widetilde k}\leq v_h \leq \min_{i\in \widetilde S}v_i}$.

#### Case 2: $\boldsymbol{v_k+\sum_{i\in {\cal T}}v_i > v_h}$.

In this case, we will use a subset of $\cal T$ and the virtual product $k$ to create a replica of the missing product $h$. Formally, suppose that ${\cal T}=\{p_1,\ldots,p_m\}$, where without loss of generality $v_{p_1}\leq \cdots \leq v_{p_m}$. Recall that $v_k$ is upper-bounded by the weight of every product in $S$, and in particular by the weight of every tail product; since each tail product is lighter than $h$, it follows that $v_k \leq v_h$. On the other hand, we have $v_k+\sum_{i\in \cal T}v_i > v_h$ by the case hypothesis. Let $j$ be the unique index for which ${v_k+\sum_{i=1}^{j-1}v_{p_i} \leq v_h < v_k+\sum_{i=1}^{j}v_{p_i}}$. The FILL operation starts by merging the products $p_1,\ldots,p_{j-1}$ and $k$, creating a virtual product $\widehat k$ with weight $v_{\widehat k} = v_k+ \sum_{i=1}^{j-1}v_{p_i}$. We proceed by considering two cases:

- *When $v_{\widehat k}>v_{p_j}$:* We perform a $\delta$-weight transfer from product $p_j$ to the virtual product $\widehat k$, with $\delta = v_h-v_{\widehat k}$. This transfer is well defined since $v_{p_j} \geq \delta \geq 0$, by definition of $j$. We have therefore created a replica of product $h$, as well as a virtual product $\widetilde k$ with weight $v_{\widetilde k} = v_{p_j}-\delta$. Finally, the FILL operation returns the virtual assortment $(\widetilde S, \widetilde k)$, where $\widetilde S = (S\cup \{h\})\setminus \{p_1,\ldots, p_j\}$.
It is important to note that the virtual product $\widetilde k$ satisfies $v_{\widetilde k} \leq v_{p_j}\leq \min_{i\in \widetilde S}v_i$, and therefore $(\widetilde S, \widetilde k)$ is a virtual assortment.

- *When $v_{\widehat k} \leq v_{p_j}$:* We perform a $\delta$-weight transfer from the virtual product $\widehat k$ to product $p_j$ with $\delta = v_h-v_{p_j}$. This transfer is well defined since $v_{\widehat k} \geq \delta \geq 0$, where the first inequality follows from the definition of $j$ and the second inequality holds since $p_j$ is a tail product, and thus lighter than $h$. We have therefore created a replica of product $h$, as well as a virtual product $\widetilde k$ with preference weight $v_{\widetilde k} = v_{\widehat k} -\delta$. The FILL operation returns the virtual assortment $(\widetilde S, \widetilde k)$, where $\widetilde S = (S\cup \{h\})\setminus \{p_1,\ldots,p_j\}$. Again, the virtual product $\widetilde k$ satisfies $v_{\widetilde k}\leq v_{\widehat k}\leq v_{p_j} \leq \min_{i\in \widetilde S}v_i$, meaning that $(\widetilde S, \widetilde k)$ is a virtual assortment.

Given these definitions, with respect to any assortment $S \subseteq \mathcal{N}$, we apply the FILL operation to the virtual assortment $(S,k)$, where initially $v_k=0$. If $1\notin S$, then the condition $v(S)\geq v_1$ guarantees that product $1$ is included in the resulting assortment. Otherwise, this product is already in the resulting assortment. We then repeat this operation until it returns a virtual assortment $(\widetilde{S}, \widetilde{k})$, where $\widetilde{S}$ is preference-weight-ordered. Such an assortment will eventually be obtained since, at each step, if $\widetilde{S}$ is not preference-weight-ordered, the FILL operation increases the size of the largest preference-weight-ordered assortment included in $\widetilde{S}$ by at least one product, as discussed in Case 2.
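To make the iteration concrete, here is a minimal Python sketch of the FILL procedure. It is our own illustration, not part of the paper: products $1,\ldots,n$ are assumed to be indexed by non-increasing weight, the virtual product is represented only by its weight, and the two weight-transfer sub-cases of Case 2 collapse into a single arithmetic step, since only the resulting weights matter here.

```python
def fill_step(S, vk, v):
    """One FILL step on the virtual assortment (S, vk); v[i] is the weight
    of product i, with products 1..n indexed by non-increasing weight, so
    S is preference-weight-ordered exactly when S = {1, ..., |S|}."""
    if S == set(range(1, len(S) + 1)):
        return S, vk
    h = min(i for i in v if i not in S)            # heaviest missing product
    tail = sorted((i for i in S if i > h), key=lambda i: v[i])
    if vk + sum(v[i] for i in tail) <= v[h]:       # Case 1: merge the whole tail
        return S - set(tail), vk + sum(v[i] for i in tail)
    acc, j = vk, 0                                 # Case 2: rebuild product h
    while acc + v[tail[j]] <= v[h]:
        acc += v[tail[j]]                          # merge p_1, ..., p_{j-1} into k
        j += 1
    leftover = acc + v[tail[j]] - v[h]             # weight left after replicating h
    return (S | {h}) - set(tail[: j + 1]), leftover

def fill(S, v):
    """Iterate FILL until the assortment is preference-weight-ordered."""
    S, vk = set(S), 0.0
    while S != set(range(1, len(S) + 1)):
        S, vk = fill_step(S, vk, v)
    return S, vk
```

Note that total weight is conserved throughout: the weight of the returned assortment plus the leftover virtual weight always equals the weight of the input assortment.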
Finally, since the FILL operation is a composition of a sequence of Merge and Transfer operations, as stated in Lemmas [Lemma 3](#lem:merge){reference-type="ref" reference="lem:merge"} and [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}, we know that the expected maximum load of the resulting assortment $\widetilde{S} \cup \{\widetilde{k}\}$ is lower-bounded by that of the initial assortment $S$.

## Proof of Lemma [Lemma 9](#lem:drop){reference-type="ref" reference="lem:drop"} {#apx:drop}

Since $v_k\leq \min_{i\in S}v_i$, we can perform a weight transfer from product $k$ to any product in $S$ without decreasing the objective function. In the proof of this lemma, we start by successively performing a $\delta$-weight transfer from the virtual product $k$ to each of the products in $S$, with $\delta = v_k/|S|$. Eventually, the weight of product $k$ becomes $0$, whereas the weight of any product in $S$ increases by $\delta$. We therefore obtain an assortment $S^+ = \{ i^+\,|\, i\in S\}$, where each product $i^+$ has weight $v_{i^+} = v_i+ v_k/|S|$. Moreover, since these weight transfers cannot decrease the expected maximum load, according to Lemma [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}, we have $$\label{eq:firsteq} \mathbb{E}\left(M\left(S^+\right)\right)\geq\mathbb{E}\left(M\left(S\cup\{k\}\right)\right).$$ We proceed to show that the objective values of $S^+$ and $S$ are within factor $|S|/(|S|+1)$ of each other, using Lemma [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"}. Indeed, for all $i\in S$, we have $v_{i^+} = v_i+\delta \geq v_i$. Moreover, we have $$v_{i^+} = v_i+\frac{v_k}{|S|} \leq v_i+\frac{v_i}{|S|} = \frac{|S|+1}{|S|}\cdot v_i,$$ where the inequality above holds since $v_k\leq \min_{j\in S}v_j$.
As a result, $v_i\geq \frac{|S|}{|S|+1}\cdot v_{i^+}$, and according to Lemma [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"}, we have $$\mathbb{E}\left(M\left(S\right)\right)\geq \frac{|S|}{|S|+1}\cdot \mathbb{E}\left(M\left(S^+\right)\right) \geq \frac{|S|}{|S|+1}\cdot \mathbb{E}\left(M\left(S\cup \{k\}\right)\right),$$ where the last inequality follows from [\[eq:firsteq\]](#eq:firsteq){reference-type="eqref" reference="eq:firsteq"}.

## Proof of Lemma [Lemma 12](#lem:thmstep1){reference-type="ref" reference="lem:thmstep1"} {#apx:thmstep1}

The proof of this lemma is similar in spirit to that of Lemma [Lemma 8](#lem:virtual){reference-type="ref" reference="lem:virtual"}. We remind the reader that, as defined in Section [3.4](#subsec:PTAS){reference-type="ref" reference="subsec:PTAS"}, the $\epsilon$-hole of an assortment $S$ is given by $h_{\epsilon}(S) = \min\{j : j \geq {I}_{1/\epsilon}(S) \text{ and } j\notin S\}$, where $I_{1/\epsilon}(S)$ is the $(1/\epsilon)$-th heaviest product in $S$. Also, a virtual assortment was defined in Appendix [7.6](#apx:virtual){reference-type="ref" reference="apx:virtual"} as a pair $(S,k)$, where $S\subseteq \mathcal{N}$ and $k$ is a virtual product whose weight satisfies $v_k\leq \min_{i\in S} v_i$. In addition, we define the $\epsilon$-FILL operation as one that takes as input a virtual assortment $(S,k)$ with $|S|\geq 1/\epsilon$, and returns a pair $(\widetilde S, \widetilde k)$ where $\widetilde S\subseteq \mathcal{N}$ and $\widetilde k$ is a virtual product. We proceed to explain how the latter operation is performed. Consider a virtual assortment $(S,k)$ with $|S|\geq 1/\epsilon$. Let $h$ be the $\epsilon$-hole of $S$, and let $S_0$ be the subset of $S$ consisting of all products whose weights are smaller than $\epsilon\cdot v_h$, i.e., $S_0 = \{i\in S\,\mid\, v_i < \epsilon\cdot v_h\}$. When $S$ is $\epsilon$-restricted, $\epsilon$-FILL simply outputs $(S,k)$.
Otherwise, we consider the next two cases:

#### Case 1: $\boldsymbol S$ **is not** $\boldsymbol{\epsilon}$**-restricted and** $\boldsymbol{v_k+\sum_{i\in S_0}v_i<v_{h}}$.

In this case, we merge product $k$ and all products in $S_0$, creating a new virtual product $\widetilde k$ whose weight is $v_{\widetilde k} = v_k+\sum_{i\in S_0}v_i$. The $\epsilon$-FILL operation then outputs the pair $(\widetilde S,\widetilde k)$, where $\widetilde S = S\setminus S_0$. This pair satisfies two important properties. First, since the $\epsilon$-hole of $\widetilde S$ is identical to that of $S$ and all products of $S_0$ were removed, $\widetilde S$ is $\epsilon$-restricted. Second, since $v_k+\sum_{i\in S_0}v_i<v_{h}$, product $\widetilde k$ is lighter than the $\epsilon$-hole of $\widetilde S$, i.e., $v_{\widetilde k}\leq v_{h}=v_{h_\epsilon(\widetilde S)}.$

#### Case 2: $\boldsymbol S$ **is not** $\boldsymbol{\epsilon}$**-restricted and** $\boldsymbol{v_k+\sum_{i\in S_0}v_i\geq v_{h}}$.

Since our input $(S,k)$ is a virtual assortment, we have $v_k\leq \min_{i\in S}v_i$. Also, since $S$ is not $\epsilon$-restricted, $S_0\neq \emptyset$. Therefore, $\min_{i\in S}v_i < \epsilon\cdot v_h\leq v_h$, implying that $v_k\leq v_h$. In what follows, we employ the Merge and Transfer operations to create a replica of the $\epsilon$-hole of $S$. Suppose that $S_0=\{p_1,\ldots,p_m\}$, with $v_{p_1}\leq\cdots\leq v_{p_m}$, and let $j$ be the unique index for which ${v_k+\sum_{i=1}^{j-1}v_{p_i}\leq v_h< v_k+\sum_{i=1}^{j}v_{p_i}}$, which is well defined since $v_k\leq v_h$ and $v_k+\sum_{i\in S_0}v_i\geq v_h$. Then, the $\epsilon$-FILL operation starts by merging products $p_1,\ldots,p_{j-1}$ and $k$, thereby creating a virtual product $\widehat k$ with weight $v_{\widehat k} = v_k+\sum_{i=1}^{j-1}v_{p_i}$. We then have two possibilities:

1. *When $v_{\widehat k}>v_{p_j}$*: We perform a $\delta$-weight transfer from product $p_j$ to the virtual product $\widehat k$, with $\delta = v_h-v_{\widehat k}$.
Since $v_h\geq v_k+\sum_{i=1}^{j-1}v_{p_i}$, we know that $\delta\geq 0$. Also, by the inequality $v_h< v_k+\sum_{i=1}^j v_{p_i}$, it is easy to see that $\delta \leq v_{p_j}$. We have therefore created a copy of product $h$, as well as a virtual product $\widetilde k$ with preference weight $v_{\widetilde k} = v_{p_j}-\delta$. Finally, the $\epsilon$-FILL operation returns the pair $(\widetilde S, \widetilde k)$, where $\widetilde S = (S\cup \{h\})\setminus\{p_1,\ldots,p_j\}$. Note that the virtual product $\widetilde k$ satisfies $v_{\widetilde k} \leq v_{p_j}\leq \min_{i\in \widetilde S} v_i$, meaning that $(\widetilde S, \widetilde k)$ is a virtual assortment.

2. *When $v_{\widehat k}\leq v_{p_j}$*: We perform a $\delta$-weight transfer from the virtual product $\widehat k$ to product $p_j$, with $\delta = v_h-v_{p_j}$. Since $v_{p_j} < \epsilon\cdot v_h$ by definition of $S_0$, we know that $\delta\geq 0$. Also, by the inequality $v_h<v_k+\sum_{i=1}^{j}v_{p_i}$, it is easy to see that $\delta\leq v_{\widehat k}$. We have therefore created a copy of the $\epsilon$-hole $h$, as well as a virtual product $\widetilde k$ with preference weight $v_{\widetilde k} = v_{\widehat k}-\delta$. Finally, the $\epsilon$-FILL operation returns the pair $(\widetilde S, \widetilde k)$, where $\widetilde S = (S\cup \{h\})\setminus\{p_1,\ldots,p_j\}$. Note that the virtual product $\widetilde k$ satisfies $v_{\widetilde k} \leq v_{\widehat k} \leq v_{p_j}\leq \min_{i\in \widetilde S} v_i$, implying that $(\widetilde S, \widetilde k)$ is a virtual assortment.

To complete the proof, consider an assortment $S$ with $|S|\geq 1/\epsilon$. We apply the $\epsilon$-FILL operation to the virtual assortment $(S,k)$, where $k$ is a virtual product with weight $0$. If $S$ is $\epsilon$-restricted, $\epsilon$-FILL simply outputs $(\widetilde S, \widetilde k)=(S,k)$, which satisfies the conditions of our lemma.
Otherwise, if we are in Case 1, then $\epsilon$-FILL outputs a pair $(\widetilde S, \widetilde k)$ such that $\widetilde S$ is $\epsilon$-restricted and $v_{\widetilde k}\leq v_{h_{\epsilon}(\widetilde S)}$, again satisfying the required conditions. Finally, if we are in Case 2, then $\epsilon$-FILL outputs a virtual assortment $(\widetilde S,\widetilde k)$. Here, $\epsilon$-FILL is reapplied to the pair $(\widetilde S,\widetilde k)$, and so on, as long as we encounter Case 2. The main observation is that, in each such iteration, the $\epsilon$-hole of the assortment $S$ is included in $\widetilde S$, and is never removed in subsequent steps. Therefore, there are at most $n$ iterations of Case 2. Finally, since the $\epsilon$-FILL operation is a composition of a sequence of Merge and Transfer operations, as stated in Lemma [Lemma 3](#lem:merge){reference-type="ref" reference="lem:merge"} and Lemma [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}, we know that the expected maximum load of the resulting assortment $\widetilde{S} \cup \{\widetilde{k}\}$ is lower-bounded by that of the initial assortment $S$.

## Proof of Lemma [Lemma 13](#lem:thmstep2){reference-type="ref" reference="lem:thmstep2"} {#apx:thmstep2}

Let $\widehat S\subseteq \mathcal{N}$ be an assortment with $|\widehat S|> 1/\epsilon$ and let $k$ be a virtual product with preference weight $v_k \leq v_{h_{\epsilon}(\widehat S)}$. Recall that, for an assortment $S$, its $\epsilon$-hole $h_\epsilon(S)$ is given by $h_{\epsilon}(S) = \min\{j : j \geq {I}_{1/\epsilon}(S) \text{ and } j\notin S\}$, where $I_{1/\epsilon}(S)$ is the $(1/\epsilon)$-th heaviest product in $S$. Therefore, product $k$ is lighter than each of the $1/\epsilon$ heaviest products in $\widehat S$. Let the subset of these $1/\epsilon$ products be $\{p_1,\ldots,p_{1/\epsilon}\}$. We proceed by performing a weight transfer from product $k$ to each of the products $p_1,\ldots,p_{1/\epsilon}$.
Specifically, for every $j=1,\ldots,1/\epsilon$, we successively perform a $\delta$-weight transfer from product $k$ to product $p_j$, with $\delta = \epsilon\cdot v_k$. At the end of these transfers, the virtual product $k$ is removed, whereas each product $p_j$ is replaced by a virtual product $p_j^{\epsilon}$ whose weight is $v_{p_j^{\epsilon}} = v_{p_j}+ \epsilon\cdot v_k$. Let $\widehat S_\epsilon$ be the resulting assortment, i.e., ${\widehat S_\epsilon= (\widehat S \cup \{p_1^\epsilon,\ldots,p_{1/\epsilon}^\epsilon\} )\setminus \{p_1,\ldots,p_{1/\epsilon}\}}$. According to Lemma [Lemma 4](#lem:charging){reference-type="ref" reference="lem:charging"}, the transfer operation cannot decrease the expected maximum load, and therefore $$\label{eq:morocco1} \mathbb{E}\left(M\left(\widehat S_\epsilon\right)\right) \geq \mathbb{E}\left(M\left(\widehat S\cup \{k\}\right)\right).$$ Moreover, for any $j=1,\ldots,1/\epsilon$, since $v_k\leq v_{p_j}$ and $v_{p_j^{\epsilon}}= v_{p_j}+ \epsilon\cdot v_k$, we get $v_{p_j^{\epsilon}} \leq (1+\epsilon)\cdot v_{p_j}$, and hence $v_{p_j^{\epsilon}} \geq v_{p_j}\geq \frac{v_{p_j^{\epsilon}}}{1+\epsilon}\geq (1-\epsilon)\cdot v_{p_j^{\epsilon}}$. Note that currently $\widehat S_{\epsilon}$ may not be block-based, as it contains virtual products. Hence, for each product in $\widehat S_\epsilon$, we define its counterpart in a block-based assortment as follows:

- For each product $p_j^\epsilon$, its counterpart is simply the product $p_j$. Let us denote this subset of products by $S_1=\{p_1,\ldots,p_{1/\epsilon}\}$.

- For every product $i$ with $p_{1/\epsilon} < i < h_{\epsilon}(\widehat S)$, its counterpart is the product itself. We refer to this subset of products as $S_2$.

- Every remaining product is contained in one of the classes $C_1,\ldots, C_L$ that were introduced in Section [3.4](#subsec:PTAS){reference-type="ref" reference="subsec:PTAS"} to define block-based assortments. Indeed, since $\widehat S$ is $\epsilon$-restricted, the weight of any product in $\widehat S$ is at least $\epsilon\cdot v_{h_\epsilon(\widehat S)}$.
Therefore, each product of $\widehat S_\epsilon$ that is lighter than product $h_\epsilon({\widehat S})$ is contained in one of the classes $C_1,\ldots, C_L$, since their union contains every product whose weight resides within $[\epsilon \cdot v_{h_{\epsilon}(\widehat S)} ,v_{h_{\epsilon}(\widehat S)})$. For every class $C_i$, let $\widehat R_i$ be the subset of $\widehat S_\epsilon$ comprised of the products in the class $C_i$, and let $N_i$ be the number of these products. In other words, $\widehat R_i = \widehat S_{\epsilon}\cap C_i$ and $N_i=|\widehat R_i|$. Then, the counterparts of the products of $\widehat R_i$ are the $N_i$ lightest products of $C_i$. We denote the latter subset by $R_i$. It is important to note that, by definition of $C_1,\ldots,C_L$, the weights of any two products in the same class are within factor $1-\epsilon$ of each other. Let $S_3=\bigcup_{i=1}^L R_i$. To summarize, for each product in $\widehat S_\epsilon$, we have defined a counterpart whose weight is within factor $1-\epsilon$. Therefore, letting $\widetilde S = S_1\cup S_2\cup S_3$ be our resulting assortment, by Lemma [Lemma 6](#lem:weights){reference-type="ref" reference="lem:weights"}, we get $$\label{eq:morocco2} \mathbb{E}\left(M\left(\widetilde S\right)\right) \geq (1-\epsilon)\cdot \mathbb{E}\left(M\left(\widehat S_\epsilon\right)\right) \geq (1-\epsilon)\cdot \mathbb{E}\left(M\left(\widehat S\cup \{k\}\right)\right),$$ where the last inequality follows from [\[eq:morocco1\]](#eq:morocco1){reference-type="eqref" reference="eq:morocco1"}. By construction, $\widetilde S$ is a block-based assortment.
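The two elementary facts used in this argument — the transfers conserve total weight, and each original weight is within factor $1-\epsilon$ of its inflated counterpart — can be sanity-checked numerically. The helper below is our own sketch, not the paper's notation, and assumes $1/\epsilon$ is integral.

```python
import random

def transfer_to_heaviest(weights, v_k, eps):
    """Split a virtual product of weight v_k evenly across the 1/eps
    heaviest products, adding eps * v_k to each; assumes v_k is at most
    the weight of each of these products and that 1/eps is integral."""
    m = round(1 / eps)
    ws = sorted(weights, reverse=True)
    return [w + eps * v_k for w in ws[:m]] + ws[m:]

random.seed(0)
eps = 0.25                                   # so 1/eps = 4 transfers
for _ in range(100):
    ws = sorted((random.uniform(1.0, 10.0) for _ in range(8)), reverse=True)
    v_k = random.uniform(0.0, ws[3])         # lighter than the 4 heaviest products
    new_ws = transfer_to_heaviest(ws, v_k, eps)
    # Total weight is conserved: the virtual product is fully absorbed.
    assert abs(sum(new_ws) - (sum(ws) + v_k)) < 1e-9
    # Each original weight is within factor (1 - eps) of its counterpart,
    # since w <= w + eps * v_k <= (1 + eps) * w and (1 - eps)(1 + eps) <= 1.
    assert all(nw >= w >= (1 - eps) * nw for w, nw in zip(ws[:4], new_ws[:4]))
```

The second assertion is exactly the pointwise bound that lets Lemma 6 convert the weight perturbation into the $(1-\epsilon)$ loss in the objective.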
# Proofs from Section [4](#sec:adaptivitygap){reference-type="ref" reference="sec:adaptivitygap"} {#apx:adapt_gap}

## Proof of Lemma [Lemma 16](#lem:multinom){reference-type="ref" reference="lem:multinom"} {#apx:multinom}

To establish the desired claim, we construct a coupling between the Multinomial vector $(L_1,\ldots,L_k)$ and the load vector of an optimal dynamic policy with respect to the universe of products $U$. In the process of generating $(L_1,\ldots,L_k)$, we view the $T$ trials as if they occur sequentially, in the order $1, \ldots, T$, letting $L_{i,t}$ be the random value of component $i$ after $t$ trials. In addition, let $L_{i,t}^{\mathrm{DP}}$ be the random load of product $i$ after $t$ customers in a fixed optimal dynamic policy. By convention, $L_{i,t}=0$ and $p_i = 0$ for $i>k$, and $L_{j,t}^{\mathrm{DP}}=0$ for $j \notin U$. Let us initialize both $(L_{1,0},\ldots,L_{n,0})$ and $(L_{1,0}^{\mathrm{DP}}, \ldots,L_{n,0}^{\mathrm{DP}})$ to be the zero vector.

#### Sampling the Multinomial vector.

For $t \in [T]$, we sample the component for the $t$-th trial of the Multinomial vector as follows. Let $(\varphi_{t-1}(1) ,\ldots, \varphi_{t-1}(n))$ be the permutation of $\{1,\ldots,n\}$ for which $L_{\varphi_{t-1}(1),{t-1}} \geq\cdots\geq L_{\varphi_{t-1}(n),{t-1}}$, breaking ties by order of increasing indices. We partition $(0,1]$ into a collection of pairwise-disjoint intervals $\{ I_{\varphi_{t-1}(i),t} \}_{i \in [n]}$, where $$I_{\varphi_{t-1}(i),t} = \left(\sum_{j=1}^{i-1}p_{\varphi_{t-1}(j)}, \sum_{j=1}^{i}p_{\varphi_{t-1}(j)}\right].$$ We now sample a uniform random variable $U_t$ in $(0,1]$, and increment component $i$ by one if and only if $U_t \in I_{i,t}$. In other terms, we have $L_{i,t} = L_{i,t-1} + \mathbbm{1}(U_t\in I_{i,t})$.

#### Sampling the load vector.

For $t \in [T]$, we generate the choice of customer $t$ with respect to the optimal dynamic policy as follows.
Let $(\psi_{t-1}(1), \ldots,\psi_{t-1}(n))$ be the permutation of $\{1,\ldots,n\}$ for which $L^{\mathrm{DP}}_{\psi_{t-1}(1),{t-1}} \geq\cdots\geq L^{\mathrm{DP}}_{\psi_{t-1}(n),{t-1}}$, again breaking ties in order of increasing product indices. Let $S_t$ be the assortment offered by the optimal adaptive policy to customer $t$. Note that this assortment is a priori random, as it depends on the choices of customers $1,\ldots,t-1$; however, it is deterministic, conditional on the choices of customers $1,\ldots,t-1$. Let $p_{i,t}^{\mathrm{DP}}=\phi_i(S_t)$ be the MNL choice probability of product $i$ with respect to this assortment; in particular, $p_{i,t}^{\mathrm{DP}}=0$ when $i\notin S_t$. As before, we define the collection of pairwise-disjoint intervals $\{ J_{\psi_{t-1}(i),t} \}_{i \in [n]}$, where $$J_{\psi_{t-1}(i),t}=\left(\sum_{j=1}^{i-1}p_{\psi_{t-1}(j),t}^{\mathrm{DP}}, \sum_{j=1}^{i}p_{\psi_{t-1}(j),t}^{\mathrm{DP}}\right].$$ To generate the choice of customer $t$, we make use of exactly the same uniform random variable $U_t$ that was previously sampled, when generating the Multinomial vector. Specifically, customer $t$ selects the product $i$ for which $U_t \in J_{i,t}$. When none of the intervals $\{ J_{\psi_{t-1}(i),t} \}_{i \in [n]}$ contains $U_t$, customer $t$ selects the no-purchase option. Formally, $L^{\mathrm{DP}}_{i,t} = L^{\mathrm{DP}}_{i,t-1} + \mathbbm{1}(U_t\in J_{i,t})$. It is worth emphasizing again that the same random variable $U_t$ is utilized to simulate the $t$-th Multinomial trial as well as the choice of customer $t$, consequently coupling the two vectors.

#### Analysis.

Moving forward, for $i \in [n]$ and $t\in [T]$, let $C_{i,t}$ be the cumulative sum of the $i$ highest components of the Multinomial vector after $t$ trials, i.e., $C_{i,t} = \sum_{j=1}^iL_{\varphi_t(j),t}$.
Similarly, let $C^{\mathrm{DP}}_{i,t}$ be the cumulative sum of the $i$ highest loads of the load vector after $t$ customers have made their choice, i.e., $C^{\mathrm{DP}}_{i,t} = \sum_{j=1}^iL^{\mathrm{DP}}_{\psi_t(j),t}$. In both definitions, the cumulative sums are taken at the end of step $t$. The crux of our analysis resides in establishing the next relation between these two cumulative sums. The proof of this result is provided in Appendix [8.2](#apx:invariant){reference-type="ref" reference="apx:invariant"}.

**Claim 33**. *For every $i \in [n]$ and $t\in [T]$, we have $C_{i,t}\geq C^{\mathrm{DP}}_{i,t}$.*

We conclude the proof by arguing that the latter claim indeed implies $\mathbb{E}(\max(L_1,\ldots,L_k)) \geq {\sf OPT}^{\mathrm{DP}}(U)$. To this end, note that, almost surely, $$\max(L_1,\ldots,L_k) = L_{\varphi_{T}(1),T} = C_{1,T}\geq C^{\mathrm{DP}}_{1,T} =L_{\psi_{T}(1),T}^{\mathrm{DP}} = \max_{i\in U}L^{\mathrm{DP}}_{i,T},$$ where the inequality above is obtained by instantiating Claim [Claim 33](#cl:invariant){reference-type="ref" reference="cl:invariant"} with $i=1$ and $t=T$. By taking expectations, we indeed get $\mathbb{E}(\max(L_1,\ldots,L_k))\geq\mathbb{E}(\max_{i\in U}L^{\mathrm{DP}}_{i,T}) = {\sf OPT}^{\mathrm{DP}}(U)$.

## Proof of Claim [Claim 33](#cl:invariant){reference-type="ref" reference="cl:invariant"} {#apx:invariant}

Our proof works by induction on $t$. Indeed, the result is trivial for $t=0$ since $C_{i,t} = C^{\mathrm{DP}}_{i,t} = 0$ for every $i \in [n]$. Now, for $t \geq 0$, suppose by induction that $C_{i,t}\geq C^{\mathrm{DP}}_{i,t}$ for every $i\in[n]$, and let us prove that $C_{i,t+1}\geq C^{\mathrm{DP}}_{i,t+1}$ by considering two cases.

#### Case 1: $\boldsymbol{C_{i,t}> C^{\mathrm{DP}}_{i,t}}$.

Here, the sum of the $i$ highest loads in the dynamic policy is strictly smaller than the sum of the $i$ highest components of the Multinomial vector at step $t$. As only one customer arrives at each time step, both sums can increase by at most $1$.
Therefore, we clearly have $C_{i,t+1}\geq C^{\mathrm{DP}}_{i,t+1}$.

#### Case 2: $\boldsymbol{C_{i,t}= C^{\mathrm{DP}}_{i,t}}$.

In this case, the sum of the $i$ highest loads in the dynamic policy is equal to the sum of the $i$ highest components of the Multinomial vector. First, if customer $t+1$ chooses the no-purchase option, then the invariant is trivially maintained, since we would have $C^{\mathrm{DP}}_{i,t+1} = C^{\mathrm{DP}}_{i,t}$ and $C_{i,t+1}\geq C_{i,t}$. Otherwise, suppose that $U_{t+1}\in I_{\varphi_{t}(a),t+1}$ and $U_{t+1}\in J_{\psi_{t}(b), t+1}$, for some $a$ and $b$, i.e., the $a$-th highest component of the Multinomial vector after $t$ trials is assigned to the $(t+1)$-th trial, and the $b$-th most loaded product after $t$ customers is selected by customer $t+1$. We first observe that $a\leq b$, advising the reader to consult Figure [\[fig:multinomcoupling\]](#fig:multinomcoupling){reference-type="ref" reference="fig:multinomcoupling"} to better understand our next explanation. The key idea is to exploit the inequality ${\max_{i\in U} \frac{ v_i }{ 1+v_i }\leq \min_{i=1,\ldots,k}p_i}$, stating that any choice probability in the optimal dynamic policy is upper-bounded by the selection probability of each of the $k$ components in our Multinomial vector. As demonstrated in Figure [\[fig:multinomcoupling\]](#fig:multinomcoupling){reference-type="ref" reference="fig:multinomcoupling"}, the length of each interval $J_{j,t+1}$ is upper-bounded by the length of each interval $I_{i,t+1}$ for all $i,j\in\{1,\ldots,n\}$. Therefore, when the $(t+1)$-th trial is assigned to the $a$-th highest component, and the $(t+1)$-th customer selects the $b$-th most loaded product, we must have $a\leq b$. Our proof proceeds by considering two subcases, depending on the relation between $i$ and $b$.

#### Case 2a: $\boldsymbol{b\leq i}$.
Here, the $b$-th highest load in the load vector was increased by $1$, and since $a\leq b\leq i$, the sum of the $i$ highest components in the Multinomial vector was increased by $1$ as well, i.e., $C_{i,t+1} = C_{i,t}+1$. Therefore, $$C_{i,t+1} = C_{i,t}+1 \geq C^{\mathrm{DP}}_{i,t}+1 \geq C^{\mathrm{DP}}_{i,t+1} \ ,$$ where the first inequality follows from the induction hypothesis, and the second inequality holds since the sum of the $i$ highest loads can increase by at most $1$ in a single step.

#### Case 2b: $\boldsymbol{b>i}$.

In particular, according to the definition of $\psi_{t}$, we have $L_{\psi_{t}(b),t}^{\mathrm{DP}}\leq L_{\psi_{t}(i),t}^{\mathrm{DP}}$. We consider two cases, depending on whether the latter inequality is strict or not:

- When $L_{\psi_{t}(b),t}^{\mathrm{DP}}<L_{\psi_{t}(i),t}^{\mathrm{DP}}$: Then, when customer $t+1$ selects product $\psi_{t}(b)$, the load of that product is increased by exactly $1$, and therefore does not surpass $L_{\psi_{t}(i),t}^{\mathrm{DP}}$. As such, the sum of the $i$ highest loads remains unchanged at step $t+1$. Consequently, $$C_{i,t+1}\geq C_{i,t} \geq C^{\mathrm{DP}}_{i,t}=C^{\mathrm{DP}}_{i,t+1},$$ where the second inequality holds by the induction hypothesis.

- When $L_{\psi_{t}(b),t}^{\mathrm{DP}} = L_{\psi_{t}(i),t}^{\mathrm{DP}}$: Here, the loads of the $b$-th and $i$-th most loaded products are equal. It follows that all products in between have the same loads, i.e., $$\label{eq:equality} L_{\psi_{t}(i),t}^{\mathrm{DP}}=L_{\psi_{t}(i+1),t}^{\mathrm{DP}}=\cdots=L_{\psi_{t}(b),t}^{\mathrm{DP}}.$$ Therefore, when customer $t+1$ selects product $\psi_t(b)$, product $\psi_t(b)$ becomes more loaded than $\psi_t(i), \psi_t(i+1),\ldots,\psi_t(b-1)$. Consequently, the sum of the $i$ highest loads increases by $1$, i.e., $$C^{\mathrm{DP}}_{i,t+1}=\sum_{j=1}^i L_{\psi_{t+1}(j),t+1}^{\mathrm{DP}}=\sum_{j=1}^i L_{\psi_{t}(j),t}^{\mathrm{DP}}+1 = C^{\mathrm{DP}}_{i,t}+1.$$ It remains to show that the sum of the $i$ highest components of the Multinomial vector also increases by $1$.
First, if $a\leq i$, this claim is trivial, as the $a$-th highest component is increased by $1$, and therefore the sum of the $i$ highest components is also increased by $1$. Now suppose that $a>i$. We prove in the next paragraph that, for all $c\in \{i,i+1,\ldots,b\}$, we have $$\label{eq:claim} L_{\psi_{t}(c),t}^{\mathrm{DP}} = L_{\varphi_{t}(c),t}.$$ As a result, according to Equation [\[eq:equality\]](#eq:equality){reference-type="eqref" reference="eq:equality"}, $$L_{\varphi_{t}(i),t} = L_{\varphi_{t}(i+1),t}=\cdots =L_{\varphi_{t}(b),t}.$$ Therefore, since $i<a\leq b$, when the $a$-th highest component is increased by $1$, it becomes strictly greater than each of the components $\varphi_t(i),\varphi_t(i+1),\ldots,\varphi_t(a-1)$, meaning that the sum of the $i$ highest components also increases by $1$.

#### Leftover: Proof of Equation [\[eq:claim\]](#eq:claim){reference-type="eqref" reference="eq:claim"}.

By the induction hypothesis, $C_{i-1,t}\geq C^{\mathrm{DP}}_{i-1,t}$, and according to the case hypothesis, $C_{i,t}= C^{\mathrm{DP}}_{i,t}$. Therefore, $C^{\mathrm{DP}}_{i,t}-C^{\mathrm{DP}}_{i-1,t}\geq C_{i,t}-C_{i-1,t}$, and it follows that $L_{\psi_{t}(i),t}^{\mathrm{DP}} \geq L_{\varphi_{t}(i),t}$. Suppose by contradiction that $L_{\psi_{t}(i),t}^{\mathrm{DP}} > L_{\varphi_{t}(i),t}$. Then, $$L_{\psi_{t}(i+1),t}^{\mathrm{DP}} = L_{\psi_{t}(i),t}^{\mathrm{DP}} > L_{\varphi_{t}(i),t} \geq L_{\varphi_{t}(i+1),t}.$$ Recalling from the case hypothesis that $C_{i,t}= C^{\mathrm{DP}}_{i,t}$, the latter inequality implies $$C_{i+1,t}=\sum_{j=1}^{i+1} L_{\varphi_{t}(j),t} < \sum_{j=1}^{i+1} L_{\psi_{t}(j),t}^{\mathrm{DP}} = C^{\mathrm{DP}}_{i+1,t},$$ which contradicts the induction hypothesis. Therefore, $L_{\psi_{t}(i),t}^{\mathrm{DP}} = L_{\varphi_{t}(i),t}$. Finally, we replicate the same argument for $c=i+1,\ldots,b$, which proves Equation ([\[eq:claim\]](#eq:claim){reference-type="ref" reference="eq:claim"}).
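The coupling underlying Claim 33 can also be simulated directly. In the sketch below (our own, with illustrative weights), the optimal dynamic policy is replaced by an arbitrary adaptive policy that offers a random assortment at each step; this suffices for the invariant, since the mechanics of the proof only use the fact that every MNL choice probability $v_i/(1+v(S_t))$ is at most $v_i/(1+v_i)$, and hence at most each Multinomial probability $p$.

```python
import random

def increment(loads, prob_of, u):
    """One coupled step: rank components by decreasing load (ties broken
    by lower index), lay their probabilities side by side on (0, 1], and
    increment the component whose interval contains u (possibly none)."""
    order = sorted(range(len(loads)), key=lambda i: (-loads[i], i))
    acc = 0.0
    for i in order:
        if acc < u <= acc + prob_of(i):
            loads[i] += 1
            return
        acc += prob_of(i)

random.seed(1)
v = [0.5, 0.4, 0.3, 0.2]                     # illustrative MNL preference weights
n, k, T = len(v), 3, 40
p = max(w / (1 + w) for w in v)              # each Multinomial probability
mult, dp = [0] * k, [0] * n
for t in range(T):
    u = random.random()
    increment(mult, lambda i: p, u)          # t-th Multinomial trial
    S = set(random.sample(range(n), 2))      # an arbitrary adaptive policy
    denom = 1 + sum(v[i] for i in S)
    increment(dp, lambda i: v[i] / denom if i in S else 0.0, u)
    # Invariant of Claim 33: every prefix sum of the sorted Multinomial
    # vector dominates the matching prefix sum of the sorted load vector.
    ms = sorted(mult, reverse=True) + [0] * (n - k)
    ds = sorted(dp, reverse=True)
    assert all(sum(ms[:i + 1]) >= sum(ds[:i + 1]) for i in range(n))
```

Note how both vectors consume the same uniform draw `u` at every step; this shared randomness is the entire content of the coupling.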
## Proof of Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"} {#apx:augment}

Let $A$ and $B$ be two adaptive policies for [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"}. We will make use of $1^A,\ldots,T^A$ and $1^B,\ldots,T^B$ to denote the sequences of customers that will encounter these policies, respectively. We couple their choices as follows.

#### Sampling the policies.

For any stage $t\in[T]$, we explain how to sample the choices of customers $t^A$ and $t^B$. First, the choice of customer $t^A$ is sampled according to policy $A$. In particular, if $S^A_t$ is the assortment offered to this customer, each product $i\in U \cup \{0\}$ is chosen with the MNL probability $\phi_i(S^A_t)$. The choice of customer $t^B$ is sampled in a coupled manner:

1. When customer $t^A$ selects the no-purchase option: In this case, the choice of customer $t^B$ is sampled using an MNL choice model, with respect to the assortment $S_t^B\setminus S_t^A$, i.e., each product $i$ is selected with probability $\phi_i(S_t^B\setminus S_t^A)$.

2. When customer $t^A$ selects some product $i\in S^A_t$: Here, with probability $\phi_i(S^B_t)/\phi_i(S^A_t)$, customer $t^B$ is assigned to product $i$. With probability $1-\phi_i(S^B_t)/\phi_i(S^A_t)$, as in item 1, the choice of customer $t^B$ is sampled using an MNL choice model with respect to the assortment $S_t^B\setminus S_t^A$, i.e., each product $j$ is selected with probability $\phi_j(S_t^B\setminus S_t^A)$.

As a side note, we indeed have $\phi_i(S^B_t)/\phi_i(S^A_t) \leq 1$, since $S^A_t \subseteq S^B_t$.

#### Equivalence with offering $\boldsymbol{S_t^B}$.

We will show that sampling the choice of customer $t^B$ as described above is equivalent to using an MNL choice model with respect to the assortment $S^B_t$.
Indeed, for every $i\in S_t^A$, the probability that this product is selected by customer $t^B$ is given by $$\label{eq:prob} \phi_i(S_t^A)\cdot \frac{\phi_i(S^B_t)}{\phi_i(S^A_t)} = \phi_i(S^B_t).$$ Now, for a product $i\in S_t^B\setminus S_t^A$ to be selected, the choice must first be deferred to the MNL model with respect to the assortment $S_t^B\setminus S_t^A$, which happens with probability $$\phi_0(S_t^A) + \sum_{j \in S_t^A } \phi_j(S_t^A) \left( 1 - \frac{\phi_j(S^B_t)}{\phi_j(S^A_t)} \right) =1-\sum_{j\in S_t^A}\phi_j(S_t^B)= 1- \frac{v(S_t^A)}{1+v(S_t^B)} = \frac{1+v(S_t^B) - v(S_t^A)}{1+v(S_t^B)}.$$ Conditional on this event, customer $t^B$ selects product $i\in S_t^B\setminus S_t^A$ with probability $\phi_i(S_t^B\setminus S_t^A)$, and we indeed get $$\frac{1+v(S_t^B) - v(S_t^A)}{1+v(S_t^B)} \cdot \phi_i(S_t^B\setminus S_t^A) = \frac{v_i}{1+v(S_t^B)} = \phi_i(S^B _t).$$

#### Concluding the proof of Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"}. {#concluding-the-proof-of-lemma-lemaugment.}

For each product $i\in U$, let $L_i^A$ and $L_i^B$ be the (random) loads of product $i$ with respect to the policies $A$ and $B$. Based on the above-mentioned coupling, we will establish the next auxiliary claim, whose proof is deferred to the end of this section.

**Claim 34**. *$\mathbb{E}(L_i^B\,\mid\, L_i^A)\geq \frac{1}{1+\epsilon}\cdot L_i^A$, for every product $i\in U$.*

To derive Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"}, let $I$ be the random product where the maximum load of policy $A$ is attained. In the case of ties, the product with smallest index is selected, i.e., $I = \min\{i\,\mid\, L_i^A = \max_{j\in U}L_j^A\}$. Then, by Claim [Claim 34](#cl:conditional){reference-type="ref" reference="cl:conditional"}, $$\mathbb{E}\left(L^B_I\,\left|\,(L_i^A)_{i\in U} \right. \right)\geq \frac{1}{1+\epsilon}\cdot L_I^A = \frac1{1+\epsilon}\cdot \mathbb{E}\left(L^A_I\,\left|\,(L_i^A)_{i\in U} \right.
\right).$$ Therefore, $$\mathbb{E}\left( \left. \max_{i\in U}L^B_i\,\right|\,(L_i^A)_{i\in U}\right)\geq \frac{1}{1+\epsilon}\cdot \mathbb{E}\left(L^A_I\,\left|\,(L_i^A)_{i\in U} \right. \right) = \frac{1}{1+\epsilon}\cdot\mathbb{E}\left( \left. \max_{i\in U}L^A_i\,\right|\,(L_i^A)_{i\in U}\right).$$ The desired result now follows by introducing the expectation over $(L_i^A)_{i\in U}$ and using the tower property.

#### Proof of Claim [Claim 34](#cl:conditional){reference-type="ref" reference="cl:conditional"}. {#proof-of-claim-clconditional.}

We will show that $\mathbb{E}(L_i^B\,\mid\, L_i^A=\ell)\geq \frac{\ell}{1+\epsilon}$ for all $\ell\in\{0,\ldots,T\}$. To this end, it suffices to prove that, for each customer $t^A$ of the $\ell$ customers who selected product $i$ in the arrival sequence of policy $A$, its corresponding customer $t^B$ in the sequence of policy $B$ selects this product with probability at least $1/(1+\epsilon)$. In other words, $\mathbb{P}(t^B\text{ selects }i\,\mid\,t^A\text{ selects }i) \geq \frac{1}{1+\epsilon}$. Suppose that customer $t^A$ selects product $i$. In particular, $i\in S_t^A$, and we therefore have $$\begin{aligned} \mathbb{P}(t^B\text{ selects }i\,\mid\,t^A\text{ selects }i) & = & \frac{\mathbb{P}(t^B\text{ selects }i\text{ and }t^A\text{ selects }i)}{\mathbb{P}(t^A\text{ selects }i)} \\ & = & \frac{\mathbb{P}(t^B\text{ selects }i)}{\mathbb{P}(t^A\text{ selects }i)} \\ & = & \frac{\phi_{i}(S_t^B)}{\phi_{i}(S_t^A)} \ ,\end{aligned}$$ where the second equality holds since, by case 2 of our coupling, given that customer $t^B$ was assigned to product $i\in S_t^A$, we know that customer $t^A$ selected this product as well. We conclude the proof by noting that $$\phi_i(S^B_t)=\frac{v_i}{1+v(S^B_t)}\geq \frac{v_i}{ 1+\epsilon+ v(S^A_t)} \geq \frac1{1+\epsilon}\cdot\phi_i(S^A_t),$$ where the first inequality is obtained by recalling that $v(S_t^B)-v(S_t^A)\leq \epsilon$.
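The final inequality — each choice probability shrinks by at most a $1/(1+\epsilon)$ factor when the assortment grows by total weight at most $\epsilon$ — is easy to verify numerically. The sketch below uses illustrative weights of our own; `phi` implements the standard MNL choice probability.

```python
import random

def phi(i, S, v):
    """MNL choice probability of product i under assortment S."""
    return v[i] / (1 + sum(v[j] for j in S)) if i in S else 0.0

random.seed(2)
eps = 0.2
for _ in range(200):
    v = [random.uniform(0.1, 2.0) for _ in range(5)]
    A = set(random.sample(range(5), 3))
    B, budget = set(A), eps                  # grow A by total weight at most eps
    for j in range(5):
        if j not in B and v[j] <= budget:
            B.add(j)
            budget -= v[j]
    # phi_i(B) >= phi_i(A) / (1 + eps), since v(B) <= v(A) + eps.
    assert all(phi(i, B, v) >= phi(i, A, v) / (1 + eps) for i in A)
```

The assertion mirrors the displayed chain of inequalities: enlarging the denominator from $1+v(S^A_t)$ to at most $1+\epsilon+v(S^A_t)$ costs at most a factor $1+\epsilon$.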
## Proof of Lemma [Lemma 18](#lem:subadd){reference-type="ref" reference="lem:subadd"} {#apx:subadd} Letting $U_0=U_1\cup U_2$, we consider three sequences of $T$ customers each. Specifically, we denote the first sequence by ${\cal T}_0$, which consists of the customers $1^{(0)},\ldots, T^{(0)}$. Similarly, we refer to the second and third sequences as ${\cal T}_1$ and ${\cal T}_2$, with customers $1^{(1)},\ldots, T^{(1)}$ and $1^{(2)},\ldots,T^{(2)}$, respectively. On one hand, the sequence ${\cal T}_0$ will encounter an optimal dynamic policy for the universe of products $U_0$. On the other hand, ${\cal T}_1$ and ${\cal T}_2$ will respectively encounter dynamic policies for $U_1$ and $U_2$. The latter two policies will not necessarily be optimal. In the following, we first describe our policies for ${\cal T}_1$ and ${\cal T}_2$, and then sample the choices of these sequences in a coupled fashion. #### Describing the policies. Let ${\cal P}_0$ be an optimal dynamic policy for $U_0$. In this proof, we use the notation $S^{{\cal P}_0}_t$ to denote the random assortment offered by the policy ${\cal P}_0$ to customer $t^{(0)}$. As such, we define the policy ${\cal P}_1$, offering the assortment $S^{{\cal P}_1}_t = S^{{\cal P}_0}_t\cap U_1$ to its $t$-th customer. By definition, this policy only offers products from the universe $U_1$. We denote the expected maximum load of this policy by ${\cal E}^1$. Similarly, we define ${\cal P}_2$ as the policy that offers the assortment $S^{{\cal P}_2}_t = S^{{\cal P}_0}_t\cap U_2$ to its $t$-th customer, noting that only products from $U_2$ are offered. We denote the expected maximum load of this policy by ${\cal E}^2$. #### Sampling customer choices. Let $t\in \{1,\ldots,T\}$. First, we sample the choice of customer $t^{(0)}$ using an MNL choice model with respect to the assortment $S^{{\cal P}_0}_t$. Let us show how to sample the choices of customers $t^{(1)}$ and $t^{(2)}$ in a coupled fashion.
For customer $t^{(1)}$, if customer $t^{(0)}$ selected some product $i\in S^{{\cal P}_1}_t\cup\{0\}$, then customer $t^{(1)}$ is also assigned to product $i$. Otherwise, $i\in S^{{\cal P}_0}_t\setminus S^{{\cal P}_1}_t$, and the choice of customer $t^{(1)}$ is decided according to an MNL choice model with respect to $S^{{\cal P}_1}_t$. Similarly, for customer $t^{(2)}$, if $t^{(0)}$ selected some product $i\in S^{{\cal P}_2}_t\cup\{0\}$, then $t^{(2)}$ is also assigned to product $i$. Otherwise, the choice of customer $t^{(2)}$ is decided according to an MNL choice model with respect to $S^{{\cal P}_2}_t$. Next, we show that sampling the choices of customers $t^{(1)}$ and $t^{(2)}$ as described above is equivalent to simply offering the assortments $S^{{\cal P}_1}_t$ and $S^{{\cal P}_2}_t$, respectively. We explain why this is true for $t^{(1)}$, noting that the argument for $t^{(2)}$ is symmetric. First, customer $t^{(1)}$ can only select a product in $S^{{\cal P}_1}_t\cup \{0\}$. For every $i\in S^{{\cal P}_1}_t$, product $i$ is selected by customer $t^{(1)}$ if and only if one of the next two disjoint events occurs: - Product $i$ is selected by customer $t^{(0)}$. This happens with probability $v_i/(1+v(S^{{\cal P}_0}_t))$. - Customer $t^{(0)}$ selected some product in $S^{{\cal P}_0}_t\setminus S^{{\cal P}_1}_t$ and then product $i$ was selected by the MNL choice model when $S^{{\cal P}_1}_t$ was offered. This happens with probability $$\frac{v(S^{{\cal P}_0}_t)-v(S^{{\cal P}_1}_t)}{1+v(S^{{\cal P}_0}_t)} \cdot \frac{v_i}{1+v(S^{{\cal P}_1}_t)}.$$ Therefore, the overall probability that customer $t^{(1)}$ selects product $i$ is given by $$\frac{v_i}{1+v(S^{{\cal P}_0}_t)}+\frac{v(S^{{\cal P}_0}_t)-v(S^{{\cal P}_1}_t)}{1+v(S^{{\cal P}_0}_t)}\cdot\frac{v_i}{1+v(S^{{\cal P}_1}_t)} = \frac{v_i}{1+v(S^{{\cal P}_1}_t)} = \phi_i(S^{{\cal P}_1}_t).$$ #### Concluding the proof.
Let $(L_i^0\,\mid\,i\in U_0)$ be the load vector attained by applying the policy ${\cal P}_0$ for the arrival sequence ${\cal T}_0$. Similarly, $(L_i^1\,\mid\,i\in U_1)$ and $(L_i^2\,\mid\,i\in U_2)$ will be the load vectors corresponding to ${\cal P}_1$ and ${\cal P}_2$, applied for ${\cal T}_1$ and ${\cal T}_2$, respectively. The key observation is that, for every $t\in[T]$, when customer $t^{(0)}$ selects some product $i\in U_1$, then customer $t^{(1)}$ also selects this product. Therefore, for every $i\in U_1$, we have $L^0_i\leq L^1_i$. By analogy, for every $i\in U_2$, we have $L^0_i\leq L^2_i$. As a result, $$\begin{aligned} \max_{i\in U_0} L^0_i &= & \max\left(\max_{i\in U_1} L_i^0, \max_{i\in U_2}L_i^0\right)\nonumber\\ &\leq & \max\left(\max_{i\in U_1} L_i^1, \max_{i\in U_2}L_i^2\right)\nonumber\\ &\leq & \max_{i\in U_1} L_i^1+ \max_{i\in U_2}L_i^2.\label{eq:lasteq}\end{aligned}$$ Therefore, $$\begin{aligned} {\sf OPT}^{\mathrm{DP}}(U_0) &= &\mathbb{E}\left(\max_{i\in U_0} L^0_i\right) \\ &\leq& \mathbb{E}\left(\max_{i\in U_1} L_i^1\right) + \mathbb{E}\left(\max_{i\in U_2} L_i^2\right) \\ &=& {\cal E}^1+{\cal E}^2 \\ &\leq& {\sf OPT}^{\mathrm{DP}}(U_1)+{\sf OPT}^{\mathrm{DP}}(U_2). \end{aligned}$$ Here, the first equality follows by recalling that $(L^0_i\,\mid\,i\in U_0)$ is the load vector of an optimal dynamic policy for $U_0$. The next inequality is a consequence of Equation [\[eq:lasteq\]](#eq:lasteq){reference-type="eqref" reference="eq:lasteq"}. The following equality follows from the definition of ${\cal E}^1$ and ${\cal E}^2$. The last inequality is obtained by noting that ${\cal P}_1$ and ${\cal P}_2$ are feasible dynamic policies with respect to the universes $U_1$ and $U_2$, respectively. 
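The two-stage sampling identity at the heart of this coupling can also be replayed on concrete numbers. The following sketch, with hypothetical weights and assortments of our own choosing (`coupled_prob` is not a quantity from the proof), checks exactly, without simulation, that direct selection by customer $t^{(0)}$ plus resampling over $S^{{\cal P}_1}_t$ reproduces $\phi_i(S^{{\cal P}_1}_t)$:

```python
# Exact check of the coupling identity for customer t^(1).
# Weights and assortments below are made-up examples.

def v_sum(weights, S):
    return sum(weights[j] for j in S)

weights = {1: 0.5, 2: 0.3, 3: 0.2}
S0 = {1, 2, 3}   # assortment offered by the policy P0
S1 = {1, 2}      # S0 restricted to the universe U1

def coupled_prob(i):
    # direct selection by t0, plus (t0 fell outside S1) * (resample over S1)
    direct = weights[i] / (1.0 + v_sum(weights, S0))
    outside = (v_sum(weights, S0) - v_sum(weights, S1)) / (1.0 + v_sum(weights, S0))
    resample = weights[i] / (1.0 + v_sum(weights, S1))
    return direct + outside * resample

for i in S1:
    assert abs(coupled_prob(i) - weights[i] / (1.0 + v_sum(weights, S1))) < 1e-12
```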
## Proof of Theorem [Theorem 15](#thm:eqv){reference-type="ref" reference="thm:eqv"} {#apx:eqv} In what follows, we consider [\[DMLA\]](#DMLA){reference-type="ref" reference="DMLA"} instances where all products have the same preference weight, which will be denoted by $v$. Our approach proceeds by distinguishing between three cases, depending on the magnitude of this parameter. #### **Case 1:** $\boldsymbol{v\geq 1}$. In this case, we statically offer the same single product at each time step. The choice probability of this product is $v/(1+v)$, and its load is a Binomial random variable with $T$ trials and success probability $v/(1+v)$. This yields an expected maximum load of $Tv/(1+v)$. Therefore, $${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq T\cdot\frac{v}{1+v}\geq \frac T2\geq\frac{{\sf OPT}^{\mathrm{DP}}(\mathcal{N})}2,$$ where the second inequality holds since $v\geq 1$, and the third inequality follows by noting that the maximum load is always upper bounded by the total number of customers $T$. #### **Case 2:** $\boldsymbol{v<1/n}$. Here, we statically offer all products in the universe $\mathcal{N}$ to each customer, arguing that this policy guarantees a $1/2$-approximation. Using the notation of Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"}, let $A$ be an optimal adaptive policy for the universe $\mathcal{N}$, and let $B$ be the static policy that offers the whole universe of products. In this case, the condition $S_t^A\subseteq S_t^B$ is trivially satisfied since $S_t^B=\mathcal{N}$. Moreover, $v(\mathcal{N}) < 1$ since $v < 1/n$, meaning that $v(S_t^B\setminus S_t^A) < 1$, for all customers $t \in [T]$. Therefore, by employing Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"} with $\epsilon=1$, we have $${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq \mathbb E (M(\mathcal{N})) \geq\frac{{\sf OPT}^{\mathrm{DP}}(\mathcal{N})}2.$$ #### **Case 3:** $\boldsymbol{1/n\leq v<1}$. 
In this case, let $2 \leq k\leq n$ be the unique integer for which $\frac1k\leq v< \frac{1}{k-1}$. In order to prove that ${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq \frac{1}{2} \cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$, we argue that a $1/2$-approximation can be attained by statically offering the same set of $k$ products to all customers, say $S = \{ 1, \ldots, k \}$. To analyze the exact guarantee of this policy, let $\widetilde S = \{\Tilde 1,\ldots, \Tilde k\}$ be a collection of $k$ virtual products, where each product $\Tilde{i}\in \widetilde S$ has a preference weight of $1/k$. Also, let $(\widehat L_1, \ldots, \widehat L_k)$ be a Multinomial vector with $T$ trials and probabilities $1/k$ for each outcome, with $\widehat M = \max_{i=1,\ldots,k}\widehat L_i$. Our analysis is based on proving the next three claims. **Lemma 35**. *$\mathbb{E}(M(S)) \geq \mathbb{E}(M(\widetilde S))$.* **Lemma 36**. *$\mathbb{E}(M(\widetilde S)) \geq \mathbb{E}(\widehat M)/2$.* **Lemma 37**. *$\mathbb{E}(\widehat M) \geq {\sf OPT}^\mathrm{DP}(\mathcal{N})$.* Consequently, by combining Lemmas [Lemma 35](#lem:eqv1){reference-type="ref" reference="lem:eqv1"}-[Lemma 37](#lem:eqv3){reference-type="ref" reference="lem:eqv3"}, it follows that $${\sf OPT}^{\mathrm{WO}}(\mathcal{N})\geq \mathbb{E}(M(S)) \geq \mathbb{E}(M(\widetilde S))\geq \frac{ \mathbb{E}(\widehat M) }{ 2 } \geq \frac{ {\sf OPT}^\mathrm{DP}(\mathcal{N}) }{ 2 }.$$ #### Proof of Lemma [Lemma 35](#lem:eqv1){reference-type="ref" reference="lem:eqv1"}. {#proof-of-lemma-lemeqv1.} When statically offering $S$, the choice probability of each product is $p = v/(1+kv)$. Similarly, when statically offering $\widetilde S$, the choice probability of each product is $\Tilde p = 1/(2k)$. The key idea is to notice that $$p = \frac{v}{1+kv} = \frac{1}{1/v + k}\geq \frac{1}{2k} = \Tilde p,$$ where the inequality above holds since $v\geq 1/k$.
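As an aside, the inequality $p \geq \tilde p$ can be spot-checked over a grid of values; the snippet below is a purely illustrative sketch of ours (the grid of $(k, v)$ pairs is arbitrary, with a small margin above $1/k$ to sidestep floating-point edge cases), confirming $p = v/(1+kv) \geq 1/(2k) = \tilde p$:

```python
# Grid check of p = v/(1+kv) >= 1/(2k) for v >= 1/k; values are arbitrary.

def p_real(v, k):
    return v / (1.0 + k * v)

def p_virtual(k):
    return 1.0 / (2 * k)

for k in range(2, 51):
    for step in range(100):
        v = 1.0 / k + 1e-6 + step * 0.01   # any v >= 1/k
        assert p_real(v, k) >= p_virtual(k)
```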
Hence, the choice probability of each product when offering $S$ is at least that of any product when offering $\widetilde S$. It is therefore easy to construct a coupling where $M(S) \geq M(\widetilde S)$, implying that $\mathbb{E}(M(S)) \geq \mathbb{E}(M(\widetilde S))$. #### Proof of Lemma [Lemma 36](#lem:eqv2){reference-type="ref" reference="lem:eqv2"}. {#proof-of-lemma-lemeqv2.} Let us first notice that, when statically offering $\widetilde S$, the choice probability of every product is $1/(2k)$, whereas the probability of every component of the Multinomial vector $(\widehat L_1,\ldots,\widehat L_k)$ is $1/k$. To couple between $M(\widetilde S)$ and $\widehat M$, for each customer $t\in [T]$, we select a component $I_t\in \{1,\ldots,k\}$ uniformly at random; the $t$-th trial of the Multinomial vector $(\widehat L_1, \ldots,\widehat L_k)$ is then assigned to component $I_t$. In order to simulate the selection of customer $t$ with respect to the load vector $\mathbf L(\widetilde S)$, we sample a Bernoulli random variable $Z_t$ with success probability $1/2$. If $Z_t=1$, then customer $t$ is assigned to product $I_t$. Otherwise, this customer is assigned to the no-purchase option. It is easy to verify that the constructed load vector is equal in distribution to $\mathbf{L}(\widetilde S)$. To complete the proof, let $I$ be the random variable specifying the index of the maximum component of the Multinomial vector $(\widehat L_1, \ldots,\widehat L_k)$. In the case of ties, we take the one with lowest index, i.e., $I=\min\{i : \widehat L_i = \widehat M\}$. Conditioning on the outcome of $(\widehat L_1,\ldots,\widehat L_k)$, we have $$\begin{aligned} \mathbb{E}(M(\widetilde S)\,\mid\,\widehat L_1,\ldots,\widehat L_k) &\geq &\mathbb{E}(L_I(\widetilde S)\,\mid\,\widehat L_1,\ldots,\widehat L_k)\\ &=&\frac12\cdot\mathbb{E}(\widehat L_I\,\mid\,\widehat L_1,\ldots,\widehat L_k)\\ &=&\frac{1}{2}\cdot\mathbb{E}(\widehat M\,\mid\,\widehat L_1,\ldots,\widehat L_k). 
\end{aligned}$$ Here, the first inequality comes from the fact that $M(\widetilde S)$ is by definition greater than or equal to all the loads of the vector $\mathbf{L}(\widetilde S)$. Finally, by taking expectations on both sides of this inequality and applying the tower property, it follows that $\mathbb{E}(M(\widetilde S)) \geq \mathbb{E}(\widehat M)/2$. #### Proof of Lemma [Lemma 37](#lem:eqv3){reference-type="ref" reference="lem:eqv3"}. {#proof-of-lemma-lemeqv3.} We argue that this claim is a direct implication of Lemma [Lemma 16](#lem:multinom){reference-type="ref" reference="lem:multinom"}. Let us show that the conditions of the latter lemma are met. First, the probability of every component of the Multinomial vector $(\widehat L_1,\ldots,\widehat L_k)$ is exactly $1/k$. Second, since all products have the same preference weight, $v$, the right hand side of Equation [\[eq:multinom\]](#eq:multinom){reference-type="eqref" reference="eq:multinom"} is simply $v/(1+v)$. In addition, since $v < \frac{ 1 }{k-1}$, we indeed have $\frac{ 1 }{ k } \geq \frac{ v }{ 1 + v }$. Therefore, by applying Lemma [Lemma 16](#lem:multinom){reference-type="ref" reference="lem:multinom"}, we conclude that $\mathbb{E}(\widehat M) \geq {\sf OPT}^\mathrm{DP}(\mathcal{N})$. # Proofs from Section [5](#sec:dynamic){reference-type="ref" reference="sec:dynamic"} {#proofs-from-section-secdynamic} ## Proof of Lemma [Lemma 24](#lem:high){reference-type="ref" reference="lem:high"} {#apx:high} First, when $v_{\max} \geq 1/\epsilon$, it is easy to verify that statically offering the heaviest product achieves an expected maximum load of at least $\frac{ v_{\max} }{ 1 + v_{\max} } \cdot T \geq (1-\epsilon) \cdot T$, and therefore yields the desired $(1-\epsilon)$-approximation. In the remainder of this proof, we consider the case where $T\alpha\geq 12\ln(nT)/\epsilon^3$. 
Focusing on a fixed optimal dynamic policy, let $(L_1,\ldots,L_n)$ be its random load vector, and let $M = \max_{i\in \mathcal{N}}L_i$. In particular, we have $\mathbb{E}(M) = {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$. First, at every step, we observe that the choice probability of any product is at most $\alpha$, since for every assortment $S\subseteq \mathcal{N}$ and every product $i\in S$, $$\phi_i(S) = \frac{v_i}{1+\sum_{j\in S}v_j}\leq \frac{v_i}{1+v_i}\leq \frac{v_{\max}}{1+v_{\max}}=\alpha.$$ Therefore, we can couple each random load $L_i$ with a Binomial random variable $Z_i\sim B(T,\alpha)$, such that $Z_i\geq L_i$ almost surely. As a result, $$\begin{aligned} \mathbb{P}\left(L_i\geq \left(1+\frac \epsilon2\right)\cdot T\alpha\right) & \leq & \mathbb{P}\left(Z_i\geq\left(1+\frac\epsilon2\right)\cdot \mathbb{E}(Z_i)\right) \\ & \leq & \exp\left(-\frac{\epsilon^2}{12}T\alpha\right) \\ & \leq & \exp\left(-\frac{\ln(nT)}{\epsilon}\right) \\ & = &\left(\frac{1}{nT}\right)^{\frac{1}{\epsilon}}. \end{aligned}$$ Here, the second inequality comes from the following Chernoff bound [@doerr2020probabilistic Sec. 1.10.1], stating that when $X$ is a Binomial random variable and $\delta <1$, $$\mathbb{P}(X\geq (1+\delta) \cdot \mathbb{E}(X)) \leq \exp\left( -\frac{ \delta^2\mathbb{E}(X)}{3} \right).$$ The third inequality follows from the case hypothesis, $T\alpha \geq \frac{12\ln(nT)}{\epsilon^3}$. 
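The Chernoff bound just invoked can be compared against the exact Binomial tail for concrete parameters. The following sketch uses arbitrary illustrative values of $T$, $p$, and $\delta$ (none of them taken from the analysis):

```python
# Compare the exact Binomial upper tail P(X >= (1+delta) E[X]) with the
# Chernoff bound exp(-delta^2 E[X] / 3), valid for delta < 1.
# T, p, delta below are hypothetical.

from math import ceil, comb, exp

def binom_tail(T, p, threshold):
    """Exact P(X >= threshold) for X ~ Binomial(T, p)."""
    k0 = ceil(threshold)
    return sum(comb(T, k) * p**k * (1 - p)**(T - k) for k in range(k0, T + 1))

T, p, delta = 200, 0.1, 0.5
mean = T * p                                  # E[X] = 20
tail = binom_tail(T, p, (1 + delta) * mean)   # P(X >= 30)
bound = exp(-delta**2 * mean / 3)

assert tail <= bound
```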
Using a union bound, we get $$\label{eq:imoback} \mathbb{P}\left(M\geq \left(1+\frac\epsilon 2\right) \cdot T\alpha\right)\leq \sum_{i=1}^n\mathbb{P}\left(L_i\geq \left(1+\frac\epsilon2\right) \cdot T\alpha\right) \leq n\cdot \left(\frac1{nT}\right)^{\frac1\epsilon} \leq \left(\frac1{nT}\right)^{\frac1\epsilon- 1}.$$ By conditioning on the event $\{M\geq \left(1+\frac\epsilon 2\right) \cdot T\alpha\}$ and on its complement, we have $$\begin{aligned} \mathbb{E}(M) &\leq \mathbb{P}\left(M\geq \left(1+\frac\epsilon 2\right)T\alpha\right)\cdot T+\left(1+\frac\epsilon2\right) \cdot T\alpha\\ &\leq \left(\frac{1}{nT}\right)^{\frac{1}{\epsilon}-1}+\left(1+\frac\epsilon2\right) \cdot T\alpha\\ &\leq (1+\epsilon)\cdot T\alpha.\end{aligned}$$ The first inequality holds since $\mathbb{E}(M | M\geq \left(1+\frac\epsilon 2\right)T\alpha)$ is trivially bounded by the number of customers $T$ and since $\mathbb{P}(M < \left(1+\frac\epsilon 2\right)T\alpha) \leq 1$. In the second inequality, we substitute Equation [\[eq:imoback\]](#eq:imoback){reference-type="eqref" reference="eq:imoback"}. The last inequality follows from Claim [Claim 38](#cl:ineq){reference-type="ref" reference="cl:ineq"} below. Consequently, since $\frac{1}{1+\epsilon}\geq 1-\epsilon$, we have just shown that $T \alpha \geq (1-\epsilon) \cdot \mathbb{E}(M)$. Thus, by statically offering the heaviest product to all customers, we secure at least a $(1-\epsilon)$-fraction of the optimal expected maximum load. **Claim 38**. *$(\frac1{nT})^{\frac1\epsilon-1}\leq \frac{\epsilon}{2}T\alpha$.* *Proof.* Since $n\geq 2$ and $T\geq 2$, as argued in the beginning of Section [5.2](#subsec:highr){reference-type="ref" reference="subsec:highr"}, we have $$\frac{\epsilon}{2}T\alpha\geq \frac{\epsilon}{2}\cdot\frac{12\ln(nT)}{\epsilon^3}\geq \frac{6\ln(4)}{\epsilon^2} \geq \frac{4}{\epsilon^2},$$ where the first inequality holds by the case hypothesis, $T\alpha\geq 12\ln(nT)/\epsilon^3$. On the other hand, $(\frac1{nT})^{\frac1\epsilon-1}\leq (\frac1{4})^{\frac1\epsilon-1}$.
Therefore, it suffices to show that $\frac{4}{\epsilon^2}\geq \left(\frac1{4}\right)^{\frac1\epsilon-1}$, which is equivalent to $\frac{\epsilon^2}{4^{1/\epsilon}}\leq 1$. This inequality holds since the function $x\mapsto \frac{x^2}{4^{1/x}}$ is nondecreasing on $(0,1]$, reaching its maximum at $x=1$. Hence, $\frac{\epsilon^2}{4^{1/\epsilon}}\leq 1/4\leq 1$. ◻ ## Proof of Lemma [Lemma 25](#lem:low){reference-type="ref" reference="lem:low"} {#apx:low} By recycling the notation of Appendix [9.1](#apx:high){reference-type="ref" reference="apx:high"}, for a fixed optimal dynamic policy, let $(L_1,\ldots,L_n)$ be its random load vector, and let $M = \max_{i\in \mathcal{N}}L_i$. We first argue that $\mathbb{P}(L_i=k) \leq \frac{1}{nT^2}$, for every product $i \in \mathcal{N}$ and for every integer $k \in [(\frac{12\ln(nT)}{\epsilon^3})^2, T]$. To this end, since $\alpha = v_{\max}/(1+v_{\max})$ is an upper bound on the choice probability of any product with respect to any assortment, we have $$\label{eqn:light_mass_Li} \mathbb{P}(L_i=k) \leq \binom{T}{k}\alpha^k (1 - \alpha)^{T-k} \leq \left(\frac{eT\alpha}{k}\right)^k \leq \left(\frac{e}{\sqrt{k}}\right)^k \leq \frac{1}{nT^2}.$$ Here, the second and third inequalities hold since $\binom{T}{k}\leq (eT/k)^k$ and since $k \geq (\frac{12\ln(nT)}{\epsilon^3})^2 \geq (T\alpha)^2$, by the lemma's hypothesis. The final inequality is stated as the next claim, whose proof appears at the end of this section. **Claim 39**.
*$(\frac{e}{\sqrt{k}})^k\leq \frac{1}{nT^2}$.* By combining inequality [\[eqn:light_mass_Li\]](#eqn:light_mass_Li){reference-type="eqref" reference="eqn:light_mass_Li"} and the union bound, $$\label{eq:equation1000} \mathbb{P}\left(M\geq \left(\frac{12\ln(nT)}{\epsilon^3}\right)^2\right) \leq \sum_{i=1}^n \sum_{k \geq (\frac{12\ln(nT)}{\epsilon^3})^2} \mathbb{P}\left(L_i = k\right) \leq \frac{ 1 }{ T }.$$ We are now ready to derive the desired upper bound on ${\sf OPT}^{\mathrm{DP}}(\mathcal{N}) = \mathbb{E}(M)$. Specifically, by conditioning on the event $\{M\geq (\frac{12\ln(nT)}{\epsilon^3})^2\}$ and on its complement, $$\begin{aligned} \mathbb{E}(M)&\leq& T\cdot \mathbb{P}\left(M\geq \left(\frac{12\ln(nT)}{\epsilon^3}\right)^2\right) + \left(\frac{12\ln(nT)}{\epsilon^3}\right)^2\cdot \mathbb{P}\left(M < \left(\frac{12\ln(nT)}{\epsilon^3}\right)^2\right)\\ &\leq & 1 + \left(\frac{12\ln(nT)}{\epsilon^3}\right)^2\\ & \leq & 2\cdot \left(\frac{12\ln(nT)}{\epsilon^3}\right)^2\\ &\leq& \frac{300\ln^2(nT)}{\epsilon^6},\end{aligned}$$ where the second inequality follows from [\[eq:equation1000\]](#eq:equation1000){reference-type="eqref" reference="eq:equation1000"} and the third inequality holds since $T\geq 2$ and $n\geq 2$. #### Proof of Claim [Claim 39](#cl:bound){reference-type="ref" reference="cl:bound"}. {#proof-of-claim-clbound.} To obtain the desired inequality, note that $$\begin{aligned} \left(\frac{e}{\sqrt{k}}\right)^k&\leq&\left(\frac{e}{12\ln(nT)}\right)^{k}\\ &\leq & \left(\frac{e}{12\ln(4)}\right)^k\\ &\leq & \left(\frac{1}{e}\right)^k\\ &\leq & \left(\frac{1}{e}\right)^{144\ln^2(nT)}\\ &\leq& \frac{1}{nT^2}.\end{aligned}$$ Here, the first and fourth inequalities hold since $k\geq (\frac{12\ln(nT)}{\epsilon^3})^2\geq 144\ln^2(nT)$. The second inequality is obtained by recalling that $n\geq 2$ and $T\geq 2$, as assumed without loss of generality in Section [5.2](#subsec:highr){reference-type="ref" reference="subsec:highr"}.
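The chain of inequalities in Claim [Claim 39](#cl:bound){reference-type="ref" reference="cl:bound"} can be spot-checked in log-space (to avoid floating-point underflow of the astronomically small left-hand side). The parameters $n$, $T$, $\epsilon$ below are hypothetical values of our choosing:

```python
# Log-space check of (e / sqrt(k))^k <= 1/(n T^2) for k above the
# threshold (12 ln(nT) / eps^3)^2. The instance parameters are made up.

from math import ceil, log

n, T, eps = 5, 100, 0.5
k_min = ceil((12 * log(n * T) / eps**3) ** 2)

for k in range(k_min, k_min + 50):
    # log of (e/sqrt(k))^k is k * (1 - 0.5 ln k); compare with log of 1/(n T^2)
    assert k * (1.0 - 0.5 * log(k)) <= -log(n * T**2)
```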
## Proof of Lemma [Lemma 26](#lem:alteruniverse){reference-type="ref" reference="lem:alteruniverse"} {#apx:altering} Let $(1^{ P}, \ldots, T^{ P})$ and $(1^{\widetilde P}, \ldots, T^{\widetilde P})$ be two sequences of customers. While the sequence $(1^P,\ldots,T^P)$ encounters the policy $P$ that will be designed below, we make use of the second sequence $(1^{\widetilde P},\ldots,T^{\widetilde P})$ to sample outcomes of the policy $\widetilde P$. In what follows, the load of each product $\widetilde i\in \widetilde U$ will refer to the number of customers from the sequence $(1^{\widetilde P},\ldots,T^{\widetilde P})$ who selected this product. #### Describing the policy $\boldsymbol{P}$. In order to construct our policy $P$, at each time step $t=1,\ldots,T$, let us describe the assortment offered to customer $t^P$, given the choice outcomes of all previously-arriving customers. To this end, suppose that the choices of customers $1^P,\ldots,(t-1)^P$ and customers $1^{\widetilde P},\ldots,(t-1)^{\widetilde P}$ are already known. Let $\widetilde S_t$ be the assortment offered by the policy $\widetilde P$ to customer $t^{\widetilde P}$. Note that, conditional on the known choices of customers $1^{\widetilde P},\ldots,(t-1)^{\widetilde P}$, this assortment is deterministic. Then, the policy $P$ offers the assortment $S_t = \{i\,\mid\, \widetilde i\in \widetilde S_t\}$. #### Simulating the outcome of $\boldsymbol{\widetilde P}$. After offering the assortment $S_t$ to customer $t^P$, we proceed to observe her choice according to the MNL model. Subsequently, in a coupled manner with the choice of customer $t^P$, we simulate the choice of customer $t^{\widetilde P}$, when offered $\widetilde S_t$. Specifically, let $S_t^\uparrow = \{i\in S_t\cup \{0\}\,\mid\,\phi_{\widetilde i}(\widetilde S_t)\geq \phi_i(S_t) \}$ and let $S_t^\downarrow = \{i\in S_t\cup \{0\}\,\mid\,\phi_{\widetilde i}(\widetilde S_t) < \phi_i(S_t) \}$.
In addition, let $\alpha_t = \sum_{i\in S_t^\downarrow} ( \phi_{i}(S_t) - \phi_{\widetilde i}(\widetilde S_t)) \geq 0$. Now, suppose that product $i \in S_t \cup \{ 0 \}$ is the one selected by customer $t^P$. If $i\in S_t^\uparrow$, then customer $t^{\widetilde P}$ selects product $\widetilde i$. Otherwise, $i\in S_t^\downarrow$, implying that $\alpha_t>0$. In this case, we proceed as follows: - With probability $\phi_{\widetilde i}(\widetilde S_t)/\phi_i(S_t)$, customer $t^{\widetilde P}$ selects product $\widetilde i$. - With probability $1-\phi_{\widetilde i}(\widetilde S_t)/\phi_i(S_t)$, customer $t^{\widetilde P}$ randomly selects one of the products $\{ {\widetilde j} | j\in S_t^\uparrow \}$, where each product $\widetilde j$ is selected with probability $p_j = \frac{\phi_{\widetilde j}(\widetilde S_t)-\phi_j(S_t)}{\alpha_t} \geq 0$. It is worth noting that these terms indeed add up to $1$, since $$\begin{aligned} \sum_{j\in S_t^\uparrow}p_j & = & \frac{1}{\alpha_t} \cdot \sum_{j\in S_t^\uparrow} \left( \phi_{\widetilde j}(\widetilde S_t)- \phi_j(S_t)\right) \\ & = & \frac{1}{\alpha_t} \cdot \left( \left( 1-\sum_{j\in S_t^\downarrow}\phi_{\widetilde j}(\widetilde S_t) \right) - \left( 1-\sum_{j\in S_t^\downarrow}\phi_j(S_t) \right)\right) \\ & = & 1. \end{aligned}$$ #### Correctness of the simulation. In what follows, we show that for each product $\widetilde j\in \widetilde S_t\cup \{0\}$, the probability for customer $t^{\widetilde P}$ to select this product, via the simulation process described above, is exactly $\phi_{\widetilde j}(\widetilde S_t)$. For this purpose, we consider two cases: - When $j\in S_t^\downarrow$: In this case, if customer $t^P$ selected some product different from $j$, then customer $t^{\widetilde P}$ cannot select product $\widetilde j$.
If customer $t^P$ selected product $j$, which happens with probability $\phi_{ j}(S_t)$, then customer $t^{\widetilde P}$ selects product $\widetilde j$ with probability $\phi_{\widetilde j}(\widetilde S_t)/\phi_{j}(S_t)$. Therefore, the overall probability for customer $t^{\widetilde P}$ to select product $\widetilde j$ is $\phi_{\widetilde j}(\widetilde S_t)$. - When $j\in S_t^\uparrow$: There are three cases to examine: 1. If customer $t^P$ selected product $j$, with probability $\phi_{j}(S_t)$, then customer $t^{\widetilde P}$ selects $\widetilde j$ with probability $1$. 2. If customer $t^P$ selected product $i\in S_t^\uparrow\setminus\{j\}$, then customer $t^{\widetilde P}$ cannot select product $\widetilde j$ according to the described process. 3. If customer $t^P$ selected some product $i\in S_t^\downarrow$, which happens with probability $\phi_{i}(S_t)$, then with probability $1-\phi_{\widetilde i}(\widetilde S_t)/\phi_i(S_t)$, some random product from $\{ {\widetilde j} | j\in S_t^\uparrow \}$ will be selected, and it will be product $\widetilde j$ with probability $p_j$. Therefore, the overall probability that customer $t^{\widetilde P}$ selects product $\widetilde j$ is given by $$\phi_{j}(S_t) + \sum_{i\in S_t^\downarrow}\phi_{i}(S_t)\cdot \left(1-\frac{\phi_{\widetilde i}(\widetilde S_t)}{\phi_i(S_t)}\right)\cdot p_j = \phi_{ j}( S_t) + \alpha_t\cdot p_j =\phi_{\widetilde j}(\widetilde S_t),$$ where the last equality holds since $p_j = \frac{\phi_{\widetilde j}(\widetilde S_t)-\phi_j(S_t)}{\alpha_t}$. #### Approximation guarantee of $\boldsymbol{P}$. In the remainder of this proof, we show that ${\cal E}^P\geq (1-\delta)\cdot {\cal E}^{\widetilde P}$, i.e., the expected maximum load attained by the policy $P$ is at least $1-\delta$ times that of $\widetilde P$. 
Similarly to the notation introduced in Section [2.1](#subsec:SMLA){reference-type="ref" reference="subsec:SMLA"} for the static formulation, let $X_{it}$ be a Bernoulli random variable, indicating whether customer $t^P$ selects product $i$. This way, the load of each product $i\in U$ with respect to the policy $P$ is $L_i= \sum_{t=1}^TX_{it}$. Similarly, let $X_{\widetilde it}$ be a Bernoulli random variable, indicating whether customer $t^{\widetilde P}$ selects product $\widetilde i$. As such, the load of each product $\widetilde i\in\widetilde U$ with respect to the policy $\widetilde P$ is $L_{\widetilde i} = \sum_{t=1}^TX_{\widetilde it}$. Finally, let $\widetilde I$ be the random index of the most loaded product in $\widetilde U$, namely, $\widetilde{I} = \mathop{\mathrm{arg\,max}}_{\widetilde{i} \in \widetilde{U}} L_{\widetilde{i}}$, breaking ties by taking the smallest index. The crucial invariant we establish is captured by the next claim, whose proof is provided in Appendix [9.4](#apx:proofcomplement){reference-type="ref" reference="apx:proofcomplement"}. **Claim 40**. *$\mathbb{E}(X_{It})\geq (1-\delta)\cdot\mathbb{E}(X_{\widetilde It})$, for all $t = 1,\ldots,T$.* Given this result, we conclude the proof by observing that $$\begin{aligned} {\cal E}^P &= &\mathbb{E}\left(\max_{i\in U}L_i\right)\\ &\geq & \mathbb{E}\left(L_I\right)\\ &=& \sum_{t=1}^T\mathbb{E}\left(X_{It}\right)\\ &\geq& (1-\delta)\cdot \sum_{t=1}^T\mathbb{E}\left(X_{\widetilde It}\right)\\ &=& (1-\delta)\cdot \mathbb{E}\left(L_{\widetilde I}\right)\\ &=& (1-\delta)\cdot{\cal E}^{\widetilde P},\end{aligned}$$ where the second inequality is a direct application of Claim [Claim 40](#cl:proofcomplement){reference-type="ref" reference="cl:proofcomplement"}.
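The correctness of the redistribution rule used in this simulation can also be replayed on concrete numbers. The sketch below uses made-up choice probabilities (each side summing to one); it verifies exactly that thinning the products in $S_t^\downarrow$ and redistributing the excess mass $\alpha_t$ over $S_t^\uparrow$ proportionally to $p_j$ reproduces the target distribution:

```python
# Exact check that the simulated selection probabilities match phi_tilde.
# The probability vectors below are hypothetical (index 0 = no-purchase).

phi = {0: 0.25, 1: 0.40, 2: 0.20, 3: 0.15}        # probs under S_t (sum to 1)
phi_tilde = {0: 0.30, 1: 0.25, 2: 0.25, 3: 0.20}  # probs under tilde S_t

S_up = [i for i in phi if phi_tilde[i] >= phi[i]]
S_down = [i for i in phi if phi_tilde[i] < phi[i]]
alpha = sum(phi[i] - phi_tilde[i] for i in S_down)

p = {j: (phi_tilde[j] - phi[j]) / alpha for j in S_up}
assert abs(sum(p.values()) - 1.0) < 1e-12          # the p_j add up to 1

def simulated_prob(j):
    if j in S_down:                                # thinned: keeps phi_tilde[j]
        return phi[j] * (phi_tilde[j] / phi[j])
    excess = sum(phi[i] * (1 - phi_tilde[i] / phi[i]) for i in S_down)
    return phi[j] + excess * p[j]                  # own mass + redistributed mass

for j in phi:
    assert abs(simulated_prob(j) - phi_tilde[j]) < 1e-12
```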
## Proof of Claim [Claim 40](#cl:proofcomplement){reference-type="ref" reference="cl:proofcomplement"} {#apx:proofcomplement} Instead of directly working with $(X_{it})_{i\in U,t\in[T]}$ and $(X_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}$, we propose a new construction of these random variables, $({\cal X}_{it})_{i\in U,t\in[T]}$ and $({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}$, such that $$\left((X_{it})_{i\in U,t\in[T]},(X_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}\right) \stackrel{d}{=} \left(({\cal X}_{it})_{i\in U,t\in[T]},({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}\right).$$ However, in this construction, the choices of customers $1^P,\ldots,T^P$ do not affect those of customers $1^{\widetilde P}, \ldots, T^{\widetilde P}$. In particular, we will first sample $({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}$, and only then sample $({\cal X}_{it})_{i\in U,t\in[T]}$ in a coupled manner. #### Stage 1: Constructing $\boldsymbol{({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}}$. First, in order to construct the policy $\widetilde P$, and hence the choices $({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}$ of customers $1^{\widetilde P},\ldots, T^{\widetilde P}$, upon the arrival of each customer $t^{\widetilde P}$, we observe the choices of all previous customers, and use the policy $\widetilde P$ to determine the assortment $\widetilde S_t$ that will be offered to this customer. Then, we sample the choice $({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U}$ of customer $t^{\widetilde P}$ according to the MNL choice model, where each product $\widetilde{i} \in {\widetilde S}_t$ has a probability of $\phi_{\widetilde{i}}( {\widetilde S}_t )$ to be the one selected. #### Stage 2: Constructing $\boldsymbol{({\cal X}_{it})_{i\in U,t\in[T]}}$. 
Once $({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}$ have already been determined, let us describe how to construct the choices $({\cal X}_{it})_{i\in U,t\in[T]}$ of customers $1^P,\ldots, T^P$ according to the policy $P$. To this end, upon the arrival of each customer $t^{P}$, we determine her choice $({\cal X}_{it})_{i\in U}$ in a coupled fashion. As in Appendix [9.3](#apx:altering){reference-type="ref" reference="apx:altering"}, we will make use of $S_t^\uparrow = \{i\in S_t\cup \{0\}\,\mid\,\phi_{\widetilde i}(\widetilde S_t)\geq \phi_i(S_t) \}$, $S_t^\downarrow = \{i\in S_t\cup \{0\}\,\mid\,\phi_{\widetilde i}(\widetilde S_t) < \phi_i(S_t) \}$, and $\alpha_t = \sum_{i\in S_t^\downarrow} ( \phi_{i}(S_t) - \phi_{\widetilde i}(\widetilde S_t)) \geq 0$. Now, suppose that product $\widetilde i \in \widetilde S_t \cup \{ 0 \}$ is the one selected by customer $t^{\widetilde P}$, meaning that ${\cal X}_{\widetilde it} = 1$. If $i\in S_t^\downarrow$, then customer $t^{P}$ selects product $i$. Otherwise, $i\in S_t^\uparrow$, and we proceed as follows: - With probability $\phi_i(S_t)/\phi_{\widetilde i}(\widetilde S_t)$, customer $t^{P}$ selects product $i$, i.e., ${\cal X}_{it} = 1$. - With probability $1-\phi_i(S_t)/\phi_{\widetilde i}(\widetilde S_t)$, customer $t^{P}$ selects one of the products in $S_t^\downarrow$, where each product $j$ is selected with probability $q_j = \frac{\phi_j(S_t)-\phi_{\widetilde j}(\widetilde S_t)}{\alpha_t} \geq 0$. Similarly to Appendix [9.3](#apx:altering){reference-type="ref" reference="apx:altering"}, it is easy to verify that these terms add up to $1$. #### Proving equality in distribution. We proceed by showing that $(({\cal X}_{it})_{i\in U,t\in[T]},({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]})$ and $((X_{it})_{i\in U,t\in[T]},(X_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]})$ are indeed equal in distribution.
In particular, we show that at each step, the joint choice probabilities of the customers $t^P$ and $t^{\widetilde P}$ are identical for both constructions. Formally, we argue that for all $i\in U$, $\widetilde j\in \widetilde U$, and $t\in[T]$, $$\label{eq:equation219} \mathbb{P}({\cal X}_{it}=1,{\cal X}_{\widetilde jt}=1) = \mathbb{P}(X_{it}=1,X_{\widetilde jt}=1).$$ Note that we do not need to consider events of the form $X_{it}=0$, since they can be written as a disjoint union of the events $X_{jt}=1$ for $j\neq i$, i.e., $\{X_{it}=0\} = \bigvee_{j\neq i}\{X_{jt}=1\}$. We prove Equation [\[eq:equation219\]](#eq:equation219){reference-type="eqref" reference="eq:equation219"} via the following case analysis. - Case 1: $i=j$: - If $i\in S_t^{\uparrow}$: Then $X_{it}=1$ implies $X_{\widetilde it}=1$. Therefore, $$\mathbb{P}(X_{it}=1,X_{\widetilde it}=1)=\mathbb{P}(X_{it}=1)=\phi_i(S_t).$$ On the other hand, $$\mathbb{P}({\cal X}_{it}=1,{\cal X}_{\widetilde it}=1) = \mathbb{P}({\cal X}_{\widetilde it}=1)\cdot \mathbb{P}({\cal X}_{it}=1|{\cal X}_{\widetilde it}=1) = \phi_{\widetilde i}(\widetilde S_t)\cdot \frac{\phi_{i}(S_t)}{\phi_{\widetilde i}(\widetilde S_t)} = \phi_i(S_t).$$ - If $i\in S_t^{\downarrow}$: Then, $$\mathbb{P}(X_{it}=1,X_{\widetilde it}=1)= \mathbb{P}(X_{it}=1)\cdot\mathbb{P}(X_{\widetilde it}=1|X_{it}=1) = \phi_i(S_t)\cdot \frac{\phi_{\widetilde i}(\widetilde S_t)}{\phi_i(S_t)} = \phi_{\widetilde i}(\widetilde S_t).$$ On the other hand, if ${\cal X}_{\widetilde it}=1$ then ${\cal X}_{it} =1$, and therefore $$\mathbb{P}({\cal X}_{it}=1,{\cal X}_{\widetilde it}=1) =\mathbb{P}({\cal X}_{\widetilde it}=1) = \phi_{\widetilde i}(\widetilde S_t).$$ - Case 2: $i\neq j$: - If $i\in S_t^{\uparrow}$, then $X_{it}=1$ implies $X_{\widetilde it}=1$, and ${\cal X}_{it}=1$ implies ${\cal X}_{\widetilde it}=1$. 
Therefore, since $i \neq j$, $$\mathbb{P}(X_{it}=1,X_{\widetilde jt}=1)=\mathbb{P}({\cal X}_{it}=1,{\cal X}_{\widetilde jt}=1)=0.$$ - If $j\in S_t^{\downarrow}$, then for similar reasons, $$\mathbb{P}(X_{it}=1,X_{\widetilde jt}=1)=\mathbb{P}({\cal X}_{it}=1,{\cal X}_{\widetilde jt}=1)=0.$$ - If $i\in S_t^{\downarrow}$ and $j\in S_t^{\uparrow}$, then $$\begin{aligned} \mathbb{P}(X_{it}=1,X_{\widetilde jt}=1)&= &\mathbb{P}(X_{it}=1)\cdot\mathbb{P}(X_{\widetilde jt}=1|X_{it}=1)\\ &=&\phi_i(S_t)\cdot\left(1-\frac{\phi_{\widetilde i}(\widetilde S_t)}{\phi_i(S_t)}\right)\cdot p_j\\ &=&\frac{1}{\alpha_t}\cdot (\phi_i(S_t) - \phi_{\widetilde i}(\widetilde S_t))\cdot(\phi_{\widetilde j}(\widetilde S_t)-\phi_j(S_t)). \end{aligned}$$ On the other hand, $$\begin{aligned} \mathbb{P}({\cal X}_{it}=1,{\cal X}_{\widetilde jt}=1)&= &\mathbb{P}({\cal X}_{\widetilde jt}=1)\cdot\mathbb{P}({\cal X}_{it}=1|{\cal X}_{\widetilde jt}=1)\\ &=&\phi_{\widetilde j}(\widetilde S_t)\cdot\left(1-\frac{\phi_j(S_t)}{\phi_{\widetilde j}(\widetilde S_t)}\right)\cdot q_i\\ &=&\frac{1}{\alpha_t}\cdot (\phi_i(S_t) - \phi_{\widetilde i}(\widetilde S_t))\cdot(\phi_{\widetilde j}(\widetilde S_t)-\phi_j(S_t)). \end{aligned}$$ #### Concluding the proof. In the construction we have just described, the choices of customers $1^P,\ldots,T^P$ in stage 2 obviously do not affect the policy $\widetilde P$ in stage 1. Thus, we can initially sample the choices $\widetilde {\cal X}= ({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]}$ of customers $1^{\widetilde P}, \ldots, T^{\widetilde P}$, and then use this realization to sample the choices $({\cal X}_{it})_{i\in U,t\in[T]}$ of customers $1^P,\ldots, T^P$.
The important observation is that, for every possible realization $\widetilde x$ of $\widetilde {\cal X}$, we have $$\begin{aligned} \mathbb{E}({\cal X}_{I t}\,|\,\widetilde {\cal X}= \widetilde x) &=& \mathbb{P}({\cal X}_{I t}=1\,|\,\widetilde {\cal X}= \widetilde x) \nonumber\\ &=&\mathbb{P}({\cal X}_{I t}=1\,|\,{\cal X}_{\widetilde I t}=1,\widetilde {\cal X}= \widetilde x)\cdot \mathbb{P}({\cal X}_{\widetilde I t}=1\,|\,\widetilde {\cal X}= \widetilde x)\nonumber\\ && \mbox{}+ \mathbb{P}({\cal X}_{I t}=1\,|\,{\cal X}_{\widetilde I t}=0,\widetilde {\cal X}= \widetilde x)\cdot \mathbb{P}({\cal X}_{\widetilde I t}=0\,|\,\widetilde {\cal X}= \widetilde x)\nonumber\\ & \geq &\mathbb{P}({\cal X}_{I t}=1\,|\,{\cal X}_{\widetilde I t}=1,\widetilde {\cal X}= \widetilde x)\cdot \mathbb{P}({\cal X}_{\widetilde I t}=1\,|\,\widetilde {\cal X}= \widetilde x) \nonumber\\ &\geq &(1-\delta)\cdot \mathbb{P}({\cal X}_{\widetilde I t}=1\,|\,\widetilde {\cal X}= \widetilde x)\nonumber\\ &=&(1-\delta)\cdot \mathbb{E}({\cal X}_{\widetilde I t}\,|\,\widetilde {\cal X}= \widetilde x). \label{eqn:cond_sample_path}\end{aligned}$$ Here, the second inequality holds since $\mathbb{P}({\cal X}_{I t}=1\,|\,{\cal X}_{\widetilde I t}=1,\widetilde {\cal X}= \widetilde x) \geq 1-\delta$. This claim is a direct consequence of our simulation process. Indeed, conditional on $\widetilde {\cal X}= \widetilde x$, the random index $\widetilde{I}$ of the most loaded product with respect to $\widetilde{P}$ is clearly deterministic. As such, given that customer $t^{\widetilde{P}}$ selects product $\widetilde{I}$ (i.e., ${\cal X}_{\widetilde I t}=1$), there are two cases: Either $I \in S_t^\downarrow$, in which case ${\cal X}_{It}=1$ almost surely, or $I\in S_t^{\uparrow}$, in which case ${\cal X}_{It}=1$ with probability $\phi_{I}(S_t)/\phi_{\widetilde I}(\widetilde S_t) \geq 1-\delta$, where the last inequality follows from our $\delta$-tightness condition.
As a consequence, by summing inequality [\[eqn:cond_sample_path\]](#eqn:cond_sample_path){reference-type="eqref" reference="eqn:cond_sample_path"} over all possible sample paths $\widetilde x$, weighted by their probability, we have $\mathbb{E}({\cal X}_{It}) \geq (1-\delta)\cdot \mathbb{E}({\cal X}_{\widetilde I t})$. Finally, since $(({\cal X}_{it})_{i\in U,t\in[T]},({\cal X}_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]})$ and $((X_{it})_{i\in U,t\in[T]},(X_{\widetilde it})_{\widetilde i\in \widetilde U,t\in[T]})$ are equal in distribution, we deduce that $\mathbb{E}(X_{It}) \geq (1-\delta)\cdot \mathbb{E}(X_{\widetilde I t})$. ## Proof of Claim [Claim 27](#cl:polywlog){reference-type="ref" reference="cl:polywlog"} {#apx:polywlog} Let us first notice that $$\alpha\leq\frac{12\ln(nT)}{T\epsilon^3}\leq \frac{12\sqrt{nT}}{T\epsilon^3} \leq \frac{\epsilon}{2n}\leq \frac{\epsilon/n}{1+\epsilon/n}.$$ Here, the first inequality holds since $T\alpha \leq 12\ln(nT)/\epsilon^3$, due to being in the low-weight regime. The second inequality follows since $\ln x\leq \sqrt{x}$ for all $x>0$. Finally, the third inequality is obtained by recalling that $T\geq 576 n^3/\epsilon^8$, according to the claim's hypothesis. Consequently, since $\alpha = v_{\max}/(1+v_{\max})$, we must have $v_{\max}\leq \epsilon/n$, implying in turn that $v(\mathcal{N})\leq \epsilon$. Now, circling back to Lemma [Lemma 17](#lem:augment){reference-type="ref" reference="lem:augment"}, let $A$ be an optimal dynamic policy for the universe $\mathcal{N}$, and let $B$ be the policy that statically offers the whole universe to each arriving customer. Then, we almost surely have $S^A_t\subseteq \mathcal{N}= S^B_t$ and $v( S^B_t \setminus S^A_t) \leq v(\mathcal{N}) \leq \epsilon$, as shown above.
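The chain of inequalities at the beginning of this proof can be sanity-checked numerically; the sketch below uses illustrative parameter values (any $n$, $\epsilon$ with $T$ above the threshold $576\,n^3/\epsilon^8$ should pass), not values from the paper.

```python
import math

# Illustrative parameters (assumptions): T is taken comfortably above
# the hypothesis T >= 576 * n**3 / eps**8.
n, eps = 2, 0.5
T = 2 * math.ceil(576 * n**3 / eps**8)

bound = 12 * math.log(n * T) / (T * eps**3)      # upper bound on alpha
assert math.log(n * T) <= math.sqrt(n * T)       # ln x <= sqrt(x)
assert bound <= 12 * math.sqrt(n * T) / (T * eps**3) <= eps / (2 * n)
assert eps / (2 * n) <= (eps / n) / (1 + eps / n)

# alpha = v_max/(1 + v_max) <= (eps/n)/(1 + eps/n) forces v_max <= eps/n.
v_max = bound / (1 - bound)
assert v_max <= eps / n
```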
It follows that both conditions of this lemma are satisfied, and therefore ${\cal E}^B \geq \frac1{1+\epsilon}\cdot{\cal E}^A \geq (1 - \epsilon) \cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$, as desired. ## Proof of Lemma [Lemma 28](#cl:revenuemax){reference-type="ref" reference="cl:revenuemax"} {#apx:revenuemax} Noting that $\phi_0(S) = 1-\sum_{i\in S}\phi_i(S)$ for any $S\subseteq \widetilde U$, the optimization problem [\[eq:reducedDP\]](#eq:reducedDP){reference-type="eqref" reference="eq:reducedDP"} can be reformulated as: $$M_{t}(\boldsymbol{\ell})=M_{t-1}(\boldsymbol{\ell})+\max_{S\subseteq \widetilde U}\left(\sum\limits_{i\in S}\left(M_{t-1}(\boldsymbol{\ell}+\mathbf{e}_i)-M_{t-1}(\boldsymbol{\ell})\right)\cdot \phi_i(S)\right).$$ Therefore, letting $r_i = M_{t-1}(\boldsymbol{\ell}+\mathbf{e}_i)-M_{t-1}(\boldsymbol{\ell}) \geq 0$ be the so-called price of each product $i\in \widetilde U$, we are left with computing an optimal solution to $\max_{S\subseteq \widetilde U}(\sum_{i\in S}r_i\cdot \phi_i(S))$. We have therefore obtained an instance of the revenue maximization problem under the Multinomial Logit model, which is well-known to be solvable in polynomial time (see, e.g., [@talluri2004revenue]). ## Proof of Lemma [Lemma 29](#lem:quasi1){reference-type="ref" reference="lem:quasi1"} {#apx:quasi1} According to Lemma [Lemma 18](#lem:subadd){reference-type="ref" reference="lem:subadd"}, we know that ${\sf OPT}^{\mathrm{DP}}(\cdot)$ is a subadditive function, implying in particular that $$\label{eq:quasisubadd} {\sf OPT}^\mathrm{DP}(\mathcal{N})\leq{\sf OPT}^{\mathrm{DP}}(U) + {\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U),$$ and therefore, it suffices to show that ${\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U) \leq \epsilon\cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N})$.
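The MNL revenue maximization subproblem arising in the proof of Lemma 28 above admits a classical polynomial-time solution: some revenue-ordered assortment (a prefix of the products sorted by decreasing price) is optimal. The sketch below illustrates this with hypothetical prices and preference weights, cross-checked against brute force; it is an illustration of the cited result, not code from the paper.

```python
from fractions import Fraction as F
from itertools import combinations

def mnl_revenue(r, v, S):
    """Expected revenue of assortment S under MNL:
    sum_{i in S} r_i * v_i / (1 + sum_{j in S} v_j)."""
    return sum(F(r[i]) * v[i] for i in S) / (1 + sum(v[i] for i in S))

def revenue_ordered_opt(r, v):
    """Evaluate only the n+1 revenue-ordered candidates: a prefix of the
    price-sorted product list is optimal for MNL revenue maximization."""
    order = sorted(r, key=r.get, reverse=True)
    candidates = [tuple(order[:k]) for k in range(len(order) + 1)]
    return max((mnl_revenue(r, v, S), S) for S in candidates)

def brute_force_opt(r, v):
    """Exhaustive search over all 2^n assortments, for comparison."""
    items = list(r)
    return max(mnl_revenue(r, v, S)
               for k in range(len(items) + 1)
               for S in combinations(items, k))

# Hypothetical prices and preference weights (illustrative only).
r = {1: 5, 2: 3, 3: 1}
v = {1: F(1), 2: F(2), 3: F(4)}
```

On this instance both routines agree, while the revenue-ordered search inspects only four candidate assortments instead of eight.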
For this purpose, by definition of $U$, every product in $\mathcal{N}\setminus U$ has a preference weight of at most $\epsilon^2\cdot v_{\max}/n$. Therefore, the random number of purchases across all products in $\mathcal{N}\setminus U$ is stochastically smaller than a Binomial random variable with $T$ trials and success probability $\frac{ v(\mathcal{N}\setminus U) }{ 1 + v(\mathcal{N}\setminus U) }$. Additionally, the maximum load of any $\mathcal{N}\setminus U$-policy is upper-bounded by the total number of purchases. Consequently, $$\begin{aligned} {\sf OPT}^{\mathrm{DP}}(\mathcal{N}\setminus U) & \leq & T\cdot \frac{ v(\mathcal{N}\setminus U) }{ 1 + v(\mathcal{N}\setminus U) } \\ & \leq & T\cdot \frac{\epsilon^2\cdot v_{\max}}{1+\epsilon^2\cdot v_{\max}} \\ & \leq & \epsilon\cdot T\cdot \frac{v_{\max}}{1+v_{\max}} \\ & \leq & \epsilon\cdot {\sf OPT}^{\mathrm{DP}}(\mathcal{N}).\end{aligned}$$ Here, the second inequality holds since $v(\mathcal{N}\setminus U) \leq \epsilon^2v_{\max}$, as the weight of each product in $\mathcal{N}\setminus U$ is at most $\epsilon^2v_{\max}/n$ and there are at most $n$ such products. For the third inequality, it is easy to verify that $\frac{\epsilon}{1+\epsilon^2\cdot v_{\max}} \leq \frac{1}{1+v_{\max}}$ when $\epsilon< 1$ and $v_{\max}\leq 1/\epsilon$. In the last inequality, we bound $Tv_{\max}/(1+v_{\max})$ by ${\sf OPT}^{\mathrm{DP}}(\mathcal{N})$, since $Tv_{\max}/(1+v_{\max})$ corresponds to the expected maximum load when statically offering only the heaviest product in $\mathcal{N}$. The latter is obviously dominated by the expected maximum load attained by an optimal dynamic policy. ## Proof of Lemma [Lemma 30](#lem:quasi2){reference-type="ref" reference="lem:quasi2"} {#apx:quasi2} We argue that this result is a direct consequence of Lemma [Lemma 26](#lem:alteruniverse){reference-type="ref" reference="lem:alteruniverse"}. 
We start by proving the first inequality, $(1-\epsilon)\cdot {\sf OPT}^{\mathrm{DP}}(U) \leq{\sf OPT}^{\mathrm{DP}}(\widetilde U)$. Let $P$ be an optimal policy for the universe of products $U$, meaning in particular that ${\cal E}^{P}={\sf OPT}^{\mathrm{DP}}(U)$. By definition of $\widetilde U$, we have $(1-\epsilon) \cdot v_{i}\leq v_{\widetilde i}\leq v_i$. Therefore, for any assortment $S\subseteq U$ and for any product $i\in S$, $$\phi_{\widetilde i}(\widetilde S) = \frac{v_{\widetilde i}}{1+\sum_{\widetilde j\in \widetilde S}v_{\widetilde j}} \geq \frac{(1-\epsilon)\cdot v_{i}}{1+\sum_{j\in S}v_j} = (1-\epsilon)\cdot \phi_i(S),$$ where $\widetilde S = \{\widetilde j \in \widetilde U \mid j\in S\}$. Consequently, according to Lemma [Lemma 26](#lem:alteruniverse){reference-type="ref" reference="lem:alteruniverse"}, there exists a policy $\widetilde P$ for the universe $\widetilde U$ such that ${\cal E}^{\widetilde P}\geq (1-\epsilon)\cdot {\cal E}^{P}$. In turn, $${{\sf OPT}^{\mathrm{DP}}(\widetilde U)\geq {\cal E}^{\widetilde P} \geq (1-\epsilon)\cdot {\cal E}^P= (1-\epsilon) \cdot {\sf OPT}^{\mathrm{DP}}(U)}.$$ To derive the second inequality, ${\sf OPT}^{\mathrm{DP}}(\widetilde U) \leq \frac{1}{1-\epsilon}\cdot {\sf OPT}^{\mathrm{DP}}(U)$, note that for every assortment $\widetilde S\subseteq \widetilde U$ and for every product $\widetilde i\in \widetilde S$, we have $$\phi_i(S) = \frac{v_i}{1+\sum_{j\in S}v_j}\geq \frac{v_{\widetilde i}}{1+\frac{1}{1-\epsilon}\cdot\sum_{\widetilde j\in \widetilde S}v_{\widetilde j}} \geq (1-\epsilon)\cdot \phi_{\widetilde i}(\widetilde S),$$ where $S = \{j \in U \mid\widetilde j\in \widetilde S\}$. Therefore, the exact same argument as in the first inequality proves that ${{\sf OPT}^{\mathrm{DP}}( U) \geq (1-\epsilon)\cdot{\sf OPT}^{\mathrm{DP}}( \widetilde U)}$.
## Proof of Lemma [Lemma 31](#lem:quasi3){reference-type="ref" reference="lem:quasi3"} {#apx:quasi3} Let $\widetilde P$ be an optimal $\widetilde U$-policy; in particular, ${\cal E}^{\widetilde P} = {\sf OPT}^{\mathrm{DP}}(\widetilde U)$. Let $\widetilde B$ be the policy obtained by truncating the optimal dynamic policy $\widetilde P$, namely, the one that offers precisely the same assortments as $\widetilde P$ while the maximum load is smaller than $\beta = {\cal B}/\epsilon$. Once we hit this threshold, the empty set will be offered to all remaining customers. Since $\widetilde B$ is a truncated policy and $\widetilde A$ is an optimal truncated policy, we trivially have ${\cal E}^{\widetilde A}\geq {\cal E}^{\widetilde B}$, meaning that it suffices to show that ${\cal E}^{\widetilde B} \geq (1 - \epsilon) \cdot {\cal E}^{\widetilde P}$. For this purpose, let $\widetilde M$ be the random variable specifying the maximum load attained by employing the policy $\widetilde P$; in particular, we have ${\cal E}^{\widetilde P} = \mathbb{E}(\widetilde M)$. Also, let $\widetilde M^{-} = \min(\beta, \widetilde M)$, noting that the latter random variable is exactly the maximum load attained by employing $\widetilde B$, meaning that ${\cal E}^{\widetilde B} = \mathbb{E}(\widetilde M^{-})$. Our analysis will be based on the following auxiliary claims, whose proofs appear at the end of this section. **Claim 41**. *$\mathbb{E}(\widetilde M) \leq (\beta+\mathbb{E}(\widetilde M))\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta)$.* **Claim 42**. *$\mathbb{E}(\widetilde M^{-}) \geq \frac{\beta}{\beta+\mathbb{E}(\widetilde M)}\cdot( (\beta+\mathbb{E}(\widetilde M))\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta) )$.* **Claim 43**. 
*$\frac{\beta}{\beta+\mathbb{E}(\widetilde M)} \geq 1-\epsilon$.* We conclude by observing that these claims suffice to show that ${\cal E}^{\widetilde B} \geq (1 - \epsilon) \cdot {\cal E}^{\widetilde P}$, since $${\cal E}^{\widetilde B} = \mathbb{E}(\widetilde M^{-})\geq \frac{\beta}{\beta+\mathbb{E}(\widetilde M)}\cdot \mathbb{E}(\widetilde M) \geq (1 - \epsilon) \cdot \mathbb{E}(\widetilde M) = (1 - \epsilon) \cdot {\cal E}^{\widetilde P},$$ where the first inequality follows by combining Claims [Claim 41](#cl:upbd){reference-type="ref" reference="cl:upbd"} and [Claim 42](#cl:lowbd){reference-type="ref" reference="cl:lowbd"}, and the second inequality is obtained by plugging in Claim [Claim 43](#clm:beta_frac_eps){reference-type="ref" reference="clm:beta_frac_eps"}. #### Proof of Claim [Claim 41](#cl:upbd){reference-type="ref" reference="cl:upbd"}. {#proof-of-claim-clupbd.} Our proof begins by arguing that $$\label{eq:claim2} \mathbb{E}(\widetilde M\mid \widetilde M\geq \beta)-\beta\leq \mathbb{E}(\widetilde M),$$ noting that $\mathbb{E}(\widetilde M\mid \widetilde M\geq \beta)-\beta$ represents the expected increase in the maximum load, starting from the point where the threshold $\beta$ is reached, all with respect to the policy $\widetilde P$. To bound the latter quantity, let ${\cal T}$ be the (random) index of the customer following the one for which a load of $\beta$ is attained. In other words, ${\cal T}$ is the minimal stage index for which the current maximum load is at least $\beta$. 
As such, $\mathbb{E}(\widetilde M\mid \widetilde M\geq \beta, {\cal T})-\beta$ is upper-bounded by the expected maximum load considering only customers ${\cal T}+1,\ldots,T$, which is trivially bounded by the optimal expected maximum load considering customers $1,\ldots,T$, i.e., $$\mathbb{E}(\widetilde M\mid \widetilde M\geq \beta, {\cal T})-\beta\leq {\sf OPT}^{\mathrm{DP}}(\widetilde U).$$ By recalling that ${\sf OPT}^{\mathrm{DP}}(\widetilde U) = \mathbb{E}(\widetilde M)$, inequality [\[eq:claim2\]](#eq:claim2){reference-type="eqref" reference="eq:claim2"} follows by introducing the expectation over ${\cal T}$ and using the tower property. Consequently, $$\begin{aligned} \mathbb{E}(\widetilde M) & = & \mathbb{E}(\widetilde M\mid \widetilde M\geq \beta)\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta) \\ & \leq & (\beta+\mathbb{E}(\widetilde M))\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta) . \end{aligned}$$ #### Proof of Claim [Claim 42](#cl:lowbd){reference-type="ref" reference="cl:lowbd"}. 
{#proof-of-claim-cllowbd.} The desired claim is obtained by noting that $$\begin{aligned} \mathbb{E}(\widetilde M^{-}) & = & \mathbb{E}(\widetilde M^-\mid\widetilde M\geq \beta)\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M^-\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta) \\ & = & \beta\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta)\\ &\geq & \frac{\beta}{\beta+\mathbb{E}(\widetilde M)}\cdot\left( (\beta+\mathbb{E}(\widetilde M))\cdot \mathbb{P}(\widetilde M\geq \beta)+ \mathbb{E}(\widetilde M\mid \widetilde M< \beta)\cdot \mathbb{P}(\widetilde M< \beta) \right),\end{aligned}$$ where the second equality holds since $\mathbb{E}(\widetilde M^-\mid\widetilde M\geq \beta) = \beta$ and $\mathbb{E}(\widetilde M^-\mid\widetilde M< \beta) = \mathbb{E}(\widetilde M \mid\widetilde M< \beta)$, by definition of $\widetilde M^-$. #### Proof of Claim [Claim 43](#clm:beta_frac_eps){reference-type="ref" reference="clm:beta_frac_eps"}. {#proof-of-claim-clmbeta_frac_eps.} We begin by observing that $$\frac{\beta}{\beta+\mathbb{E}(\widetilde M)} = \frac{{\cal B}/\epsilon}{{\cal B}/\epsilon+{\sf OPT}^{\mathrm{DP}}(\widetilde U)} = \frac{1}{1+\epsilon\cdot \frac{{\sf OPT}^{\mathrm{DP}}(\widetilde U)}{{\cal B}}},$$ and it therefore remains to show that ${\sf OPT}^{\mathrm{DP}}(\widetilde U)\leq \frac{{\cal B}}{1-\epsilon}$. For this purpose, note that $${\sf OPT}^{\mathrm{DP}}(\widetilde U)\leq \frac{1}{1-\epsilon} \cdot {\sf OPT}^{\mathrm{DP}}(U)\leq \frac{1}{1-\epsilon} \cdot{\sf OPT}^{\mathrm{DP}}(\mathcal{N})\leq \frac{{\cal B}}{1-\epsilon},$$ where the first and third inequalities follow from Lemmas [Lemma 30](#lem:quasi2){reference-type="ref" reference="lem:quasi2"} and [Lemma 25](#lem:low){reference-type="ref" reference="lem:low"}, respectively. [^1]: School of Operations Research and Information Engineering, Cornell Tech, Cornell University, NY, USA.
Email: `{oe46,mi262}@cornell.edu`. [^2]: Department of Statistics and Operations Research, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel. Email: `segevdanny@tauex.tau.ac.il`. Supported by Israel Science Foundation grant 1407/20.
--- abstract: | We deduce continuity properties of classes of Fourier multipliers, pseudo-differential and Fourier integral operators when acting on Orlicz spaces. Especially we show that classical results like Hörmander's improvement of Mihlin's Fourier multiplier theorem are extendable to the framework of Orlicz spaces. We also show how some properties of the Young functions $\Phi$ of the Orlicz spaces are linked to properties of certain Lebesgue exponents $p_\Phi$ and $q_\Phi$ emerging from $\Phi$. address: - Dipartimento di Matematica "G. Peano", Università degli Studi di Torino - Dipartimento di Matematica "G. Peano", Università degli Studi di Torino - Department of Mathematics, Linnæus University, Sweden - Department of Mathematics, Linnæus University, Sweden author: - Matteo Bonino - Sandro Coriasco - Albin Petersson - Joachim Toft title: Fourier type operators on Orlicz spaces and the role of Orlicz Lebesgue exponents --- # Introduction {#sec0} Orlicz spaces, introduced by W. Orlicz in 1932 [@Orl], are Banach spaces which generalize the classical $L^p$ spaces (see Section [2](#sec1){reference-type="ref" reference="sec1"} for notations). Orlicz spaces are denoted by $L^\Phi$ where $\Phi$ is a Young function, and we obtain the usual $L^p$ spaces, $1\leqslant p < \infty$, by choosing $\Phi(t) = t^p$. For more facts on Orlicz spaces, see [@Rao]. An advantage of Orlicz spaces is that they are suitable when solving certain problems where $L^p$ spaces are insufficient. As an example, consider the entropy of a probability density function $f$ given by $$E(f) = -\int f(\xi) \log f(\xi) \, d \xi.$$ In this case, it may be more suitable to work with an Orlicz norm estimate, for instance with $\Phi(t) = t \log(1+t)$, as opposed to $L^1$ norm estimates. The literature on Orlicz spaces is rich, see e.g. [@ApKaZa; @MajLab1; @MajLab2; @HaH; @Mil; @OsaOzt] and the references therein.
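The Orlicz norm alluded to here is the Luxemburg norm $\|f\|_{L^\Phi}=\inf\{\lambda>0 : \int\Phi(|f|/\lambda)\,dx\leqslant 1\}$, made precise in Section 2. As a side illustration (not part of the paper's argument), for a simple function this infimum can be computed by bisection, since $\lambda\mapsto\int\Phi(|f|/\lambda)\,dx$ is non-increasing; the step function and the choice $\Phi(t)=t^2$ below are illustrative, and in that case the norm reduces to the usual $L^2$ norm $c\,|E|^{1/2}$ of $f=c\,\chi_E$.

```python
def luxemburg_norm(Phi, pieces, tol=1e-10):
    """Luxemburg norm of the simple function f = sum_k c_k * chi_{E_k},
    encoded as pieces = [(c_k, |E_k|), ...], via bisection on lambda."""
    def G(lam):
        return sum(m * Phi(abs(c) / lam) for c, m in pieces)
    lo, hi = tol, 1.0
    while G(hi) > 1:                 # grow until the constraint is met
        hi *= 2
    while hi - lo > tol:             # G is non-increasing in lambda
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if G(mid) > 1 else (lo, mid)
    return hi

# f = 3 * chi_E with |E| = 1/4 and Phi(t) = t^2: the norm is 3 * (1/4)^(1/2).
norm = luxemburg_norm(lambda t: t * t, [(3.0, 0.25)])
```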
Recent investigations also put pseudo-differential operators in the framework of Orlicz modulation spaces (cf [@TofUst], see also [@SchF; @ToUsNaOz] for further properties on Orlicz modulation spaces). In this paper, we deal with pseudo-differential operators as well as Fourier multipliers in Orlicz spaces. Results pertaining to continuity properties on $L^p$-spaces are well-established. Our approach is to utilize a Marcinkiewicz interpolation-type theorem by Liu and Wang in [@Liu] to extend such continuity properties to also hold on Orlicz spaces. As an initial example, the methods described in the subsequent sections allow us to obtain the following extension of Mihlin's Fourier multiplier theorem (see [@Mic] for the original theorem). **Theorem 1** (Mihlin). *Let $\Phi$ be a strict Young function and $a\in L^\infty (\mathbf R^{d} \setminus \{ 0\})$ be such that $$\sup_{\xi\neq 0} \left( |\xi|^{|\alpha|} \left| \partial^\alpha a(\xi) \right| \right)$$ is finite for every $\alpha\in \mathbf N^{d}$ with $|\alpha|\leqslant [\frac{d}{2}]+1$. Then $a(D)$ is continuous on $L^\Phi(\mathbf R^{d})$.* In fact, we also obtain Hörmander's improvement of Mihlin's Fourier multiplier theorem (cf [@Ho0]) in the context of Orlicz spaces. This result can be found in Section [4](#sec3){reference-type="ref" reference="sec3"} (Theorem [Theorem 24](#Thm:OrlContHormMult){reference-type="ref" reference="Thm:OrlContHormMult"}). In a similar manner, we obtain continuity results for pseudo-differential operators of order $0$ in Orlicz spaces as well, see Theorem [Theorem 23](#Thm:OrlContPseudo){reference-type="ref" reference="Thm:OrlContPseudo"}. Finally, we show a continuity result for a broad class of Fourier integral operators, under a condition on the order of the amplitude (that is, a loss of derivatives and decay), see Theorem [Theorem 25](#Thm:OrlContSGFIOs){reference-type="ref" reference="Thm:OrlContSGFIOs"}. 
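The Fourier multipliers in Theorem 1 act by $a(D)f = \mathscr F^{-1}(a\,\widehat f\,)$. As a side illustration (not part of the paper's argument), this action can be sketched on a periodic grid with the FFT; the multiplier $-i\,\mathrm{sign}(\xi)$, the symbol of the Hilbert transform, is a standard example satisfying Mihlin's hypotheses away from the origin, and the grid size below is an arbitrary choice.

```python
import numpy as np

def multiplier_op(a, f_vals):
    """Apply the Fourier multiplier a(D) to samples of a 2*pi-periodic
    function: transform, multiply by a(xi) on integer frequencies,
    transform back."""
    N = len(f_vals)
    xi = np.fft.fftfreq(N, d=1.0 / N)          # frequencies 0, 1, ..., -1
    return np.fft.ifft(a(xi) * np.fft.fft(f_vals))

N = 256
x = 2 * np.pi * np.arange(N) / N
hilbert_symbol = lambda xi: -1j * np.sign(xi)  # Mihlin-type multiplier
Hf = multiplier_op(hilbert_symbol, np.cos(x))  # Hilbert transform of cos
```

Since $\cos = (e^{ix}+e^{-ix})/2$, applying $-i\,\mathrm{sign}(\xi)$ turns it into $\sin$, which the discrete computation reproduces to machine precision.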
Section [2](#sec1){reference-type="ref" reference="sec1"} also includes investigations of Lebesgue exponents $p_\Phi$ and $q_\Phi$ constructed from the Young function $\Phi$, which are important for the interpolation theorem. These parameters were described in [@Liu], where it was claimed that $$\begin{aligned} p_\Phi < \infty &\iff \Phi \text{ fulfills the $\Delta_2$ condition} \label{eq:pPhi} \intertext{and} q_\Phi > 1 &\iff \Phi \text{ is strictly convex.}\label{eq:qPhi}\end{aligned}$$ In Section [2](#sec1){reference-type="ref" reference="sec1"}, we confirm that [\[eq:pPhi\]](#eq:pPhi){reference-type="eqref" reference="eq:pPhi"} is correct, but that neither logical implication of [\[eq:qPhi\]](#eq:qPhi){reference-type="eqref" reference="eq:qPhi"} is correct. Instead, other conditions on $\Phi$ are found which characterize $q_\Phi > 1$ (see Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"}). At the same time, we deduce a weaker form of the equivalence [\[eq:qPhi\]](#eq:qPhi){reference-type="eqref" reference="eq:qPhi"} and show that if $q_\Phi > 1$, then there is an equivalent Young function to $\Phi$ which is strictly convex (see Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"}). # Preliminaries {#sec1} In this section we recall some facts on Orlicz spaces and pseudo-differential operators. Especially we recall Lebesgue exponents given in e. g. [@Liu] and explain some of their features. ## Orlicz Spaces {#subsec1.1} In this subsection we provide an overview of some basic definitions and state some technical results that will be needed. First, we recall the definition of weak $L^p$ spaces. **Definition 2**. Let $p\in (0,\infty ]$.
The *weak $L^p$ space* $wL^p(\mathbf{R}^{d})$ consists of all Lebesgue measurable functions $f : \mathbf R^{d} \to \mathbf C$ for which $$\| f \|_{wL^p} \equiv \sup_{t>0} t \left ( \mu _f(t) \right )^{\frac{1}{p}}$$ is finite. Here $\mu _f(t)$ is the Lebesgue measure of the set $\{ \, x\in \mathbf R^{d}\, ;\, |f(x)|>t\, \}$. *Remark 3*. Notice that the $wL^p$-norm is not a true norm, since the triangle inequality fails. Nevertheless, one has that $\| f \|_{wL^p} \leqslant \| f \|_{L^p}$. In particular, $L^p(\mathbf R^{d})$ is continuously embedded in $wL^p (\mathbf R^{d})$. Next, we recall some facts concerning Young functions and Orlicz spaces. (See [@Rao; @HaH].) **Definition 4**. A function $\Phi :\mathbf R \rightarrow \mathbf R \cup \{ \infty\}$ is called *convex* if $$\Phi (s_1 t_1+ s_2 t_2) \leqslant s_1 \Phi (t_1)+s_2\Phi (t_2)$$ when $s_j,t_j\in \mathbf{R}$ satisfy $s_j \geqslant 0$ and $s_1 + s_2 = 1,\ j=1,2$. We observe that $\Phi$ might not be continuous, because we permit $\infty$ as function value. For example, $$\Phi (t)= \begin{cases} c,&\text{when}\ t \leqslant a \\[1ex] \infty ,&\text{when}\ t>a \end{cases}$$ is convex but discontinuous at $t=a$. **Definition 5**. Let $\Phi$ be a function from $[0,\infty)$ to $[0,\infty]$. Then $\Phi$ is called a *Young function* if 1. $\Phi$ is convex, 2. $\Phi (0)=0$, 3. $\lim \limits_{t\rightarrow\infty} \Phi (t)=+\infty$. It is clear that $\Phi$ in Definition [Definition 5](#Def:YoungFunc){reference-type="ref" reference="Def:YoungFunc"} is non-decreasing, because if $0\leqslant t_1\leqslant t_2$ and $s\in [0,1]$ is chosen such that $t_1=st_2$, then $$\Phi (t_1)=\Phi (st_2+(1-s)0) % \\[1ex] \leqslant s\Phi (t_2)+(1-s)\Phi (0) % \\[1ex] \leqslant \Phi (t_2),$$ since $\Phi (0) =0$ and $s\in [0,1]$.
The Young functions $\Phi _1$ and $\Phi _2$ are called *equivalent*, if there is a constant $C\geqslant 1$ such that $$\begin{aligned} {2} C^{-1}\Phi _2(t)&\leqslant \Phi _1(t)\leqslant C\Phi _2(t), & \quad t&\in [0,\infty ]. \notag \intertext{We recall that a Young function is said to fulfill the \emph{$\Delta _2$-condition} if there is a constant $C\geqslant 1$ such that} % \Phi (2t) &\leqslant C\Phi (t),&\qquad t&\in [0,\infty ]. \notag \intertext{ \indent We also introduce the following condition. A Young function is said to fulfill the \emph{$\Lambda$-condition} if there is a $p>1$ such that} % \Phi(ct) &\leqslant c^p \Phi(t), &\qquad t &\in [0,\infty ],\ c\in (0,1]. \label{qphiCond}\end{aligned}$$ The following characterization of Young functions fulfilling the $\Delta _2$-condition follows from the fact that any Young function is increasing. The verifications are left for the reader. **Proposition 6**. *Let $\Phi$ be a Young function. Then the following conditions are equivalent:* 1. *$\Phi$ satisfies the $\Delta _2$-condition;* 2. *for every constant $c>0$, the Young function $t\mapsto \Phi (ct)$ is equivalent to $\Phi$;* 3. *for some constant $c>0$ with $c\neq 1$, the Young function $t\mapsto \Phi (ct)$ is equivalent to $\Phi$.* The *upper* and *lower Lebesgue exponents* for a Young function $\Phi$ are defined by $$\begin{aligned} p_{\Phi} &\equiv \sup_{t>0} \left (\frac{t \Phi _+'(t)}{\Phi(t)}\right ) = \sup_{t>0} \left (\frac{t \Phi _-'(t)}{\Phi(t)}\right ) \label{Eq:StrictYoungFunc2} \intertext{and} q_{\Phi} &\equiv \inf _{t>0} \left (\frac{t \Phi _+'(t)}{\Phi(t)}\right ) = \inf _{t>0} \left (\frac{t \Phi _-'(t)}{\Phi(t)}\right ), \label{Eq:StrictYoungFunc1}\end{aligned}$$ respectively. We recall that these exponents are essential in the analysis in [@Liu].
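The ratio $t\Phi'(t)/\Phi(t)$ behind $p_\Phi$ and $q_\Phi$ can be explored numerically; the sketch below (a side illustration, not part of the paper) approximates the sup and inf on a finite grid, so the extremes are only approached. The two Young functions are illustrative: for $\Phi(t)=t^3$ the ratio is identically $3$, while for $\Phi(t)=t\log(1+t)$ it equals $1+t/((1+t)\log(1+t))$, which decreases from $2$ to $1$, so that $p_\Phi=2$ and $q_\Phi=1$.

```python
import math

def lebesgue_exponents(Phi, dPhi, ts):
    """Approximate p_Phi = sup_t t*Phi'(t)/Phi(t) and
    q_Phi = inf_t t*Phi'(t)/Phi(t) over a grid of sample points ts."""
    ratios = [t * dPhi(t) / Phi(t) for t in ts]
    return max(ratios), min(ratios)

ts = [10 ** (k / 10) for k in range(-40, 41)]   # grid on [1e-4, 1e4]

# Phi(t) = t^3: both exponents equal 3 (the ratio is constant).
p3, q3 = lebesgue_exponents(lambda t: t**3, lambda t: 3 * t**2, ts)

# Phi(t) = t*log(1+t): Phi'(t) = log(1+t) + t/(1+t).
pl, ql = lebesgue_exponents(lambda t: t * math.log1p(t),
                            lambda t: math.log1p(t) + t / (1 + t), ts)
```

Note that $q_\Phi=1$ for $\Phi(t)=t\log(1+t)$, so this $\Phi$ fails the $\Lambda$-condition even though it satisfies $\Delta_2$.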
We observe that for any $r_1,r_2>0$, $$\begin{aligned} {3} t^{p_\Phi}&\lesssim \Phi (t)\lesssim t^{q_\Phi} & \quad &\text{when} &\quad t&\leqslant r_1 \label{Eq:Squeezing1} \intertext{and} t^{q_\Phi}&\lesssim \Phi (t)\lesssim t^{p_\Phi} & \quad &\text{when} &\quad t&\geqslant r_2. \label{Eq:Squeezing2}\end{aligned}$$ In order to shed some light on this, as well as to demonstrate arguments used in the next section, we prove these relations here. By [\[Eq:StrictYoungFunc2\]](#Eq:StrictYoungFunc2){reference-type="eqref" reference="Eq:StrictYoungFunc2"} we obtain $$\frac {t\Phi '_+(t)}{\Phi (t)}-p_\Phi \leqslant 0 \quad \Leftrightarrow \quad \left ( \frac {\Phi (t)}{t^{p_\Phi}} \right ) '_+\leqslant 0.$$ Hence $\Phi (t) = t^{p_\Phi}h(t)$ for some decreasing function $h(t)>0$. This gives $$\Phi (t) = t^{p_\Phi}h(t)\geqslant t^{p_\Phi}h(r_1)\gtrsim t^{p_\Phi}$$ for $t\leqslant r_1$ and $$\Phi (t) = t^{p_\Phi}h(t)\leqslant t^{p_\Phi}h(r_2)\lesssim t^{p_\Phi}$$ for $t\geqslant r_2$. This shows the relations between $t^{p_\Phi}$ and $\Phi (t)$ in [\[Eq:Squeezing1\]](#Eq:Squeezing1){reference-type="eqref" reference="Eq:Squeezing1"} and [\[Eq:Squeezing2\]](#Eq:Squeezing2){reference-type="eqref" reference="Eq:Squeezing2"}. The remaining relations follow in similar ways. In our investigations we need to assume that our Young functions are *strict* in the following sense. **Definition 7**. The Young function $\Phi$ from $[0,\infty)$ to $[0,\infty]$ is called *strict* or a *strict Young function*, if 1. $\Phi (t)<\infty$ for every $t\in [0,\infty )$, 2. $\Phi$ satisfies the $\Delta _2$-condition, 3. $\Phi$ satisfies the $\Lambda$-condition. In Section [3](#sec2){reference-type="ref" reference="sec2"} we give various kinds of characterizations of the conditions (2) and (3) in Definition [Definition 7](#Def:StrictYoungFunc){reference-type="ref" reference="Def:StrictYoungFunc"}.
In particular we show that (2) and (3) in Definition [Definition 7](#Def:StrictYoungFunc){reference-type="ref" reference="Def:StrictYoungFunc"} are equivalent to $p_\Phi <\infty$ and $q_\Phi >1$, respectively. (See Proposition [Proposition 18](#Prop:qPhiEquivCond){reference-type="ref" reference="Prop:qPhiEquivCond"}.) It will also be useful to rely on regular Young functions, which is possible due to the following proposition. **Proposition 8**. *Let $\Phi$ be a Young function which satisfies the $\Delta _2$ condition. Then there is a Young function $\Psi$ such that the following is true:* 1. *$\Psi$ is equivalent to $\Phi$ and $\Psi \leqslant \Phi$;* 2. *$\Psi$ is smooth on $\mathbf R_+$;* 3. *$\Psi _+'(0)=\Phi _+'(0)$.* *Proof.* Let $\phi \in C_0^\infty [0,1]$ be such that $\phi \geqslant 0$ and $\int _0^1 \phi (s)\, ds=1$. Put $$\Psi (t) = \int _0^1 \Phi (t-\textstyle{\frac 12}st)\phi (s)\, ds.$$ Then using this formula and $$\Psi (t) = \int _{t/2}^t \Phi (s)\phi \left ( 2-\frac {2s}t \right )\frac 2t\, ds,$$ we reach the result. ◻ It follows that $\Psi$ in Proposition [Proposition 8](#Prop:SmoothEquivYoungFunc){reference-type="ref" reference="Prop:SmoothEquivYoungFunc"} fulfills the $\Delta _2$ condition, because $\Phi$ satisfies that condition and $\Psi$ is equivalent to $\Phi$. **Definition 9**. Let $\Phi$ be a Young function. The *Orlicz space* $L^{\Phi }(\mathbf R^{d})$ consists of all Lebesgue measurable functions $f:\mathbf R^{d} \rightarrow \mathbf C$ such that $$\Vert f\Vert_{L^{\Phi}} \equiv \inf \left \{ \, \lambda>0\, ;\, \int_{\mathbf R^{d}} \Phi \left ( \frac{|f(x)|}{\lambda} \right ) dx\leqslant 1\, \right \}$$ is finite. **Definition 10**. Let $\Phi$ be a Young function.
The *weak Orlicz space* $wL^{\Phi }(\mathbf R^{d})$ consists of all Lebesgue measurable functions $f:\mathbf R^{d} \rightarrow \mathbf C$ such that $$\Vert f\Vert _{wL^{\Phi}} \equiv \inf \left \{ \, \lambda >0\, ;\, \sup_{t>0} \left ( \Phi \left(\frac{t}{\lambda}\right) \mu _f(t) \right ) \leqslant 1\, \right \}$$ is finite. Here $\mu _f(t)$ is the Lebesgue measure of the set $\{ \, x\in \mathbf R^{d}\, ;\, |f(x)|>t\, \}$. In accordance with the usual Lebesgue spaces, $f,g\in wL^{\Phi}(\mathbf R^{d})$ are equivalent whenever $f=g$ a. e. ## Pseudo-differential operators {#subsec1.2} Let $\mathbf{M}(d,\Omega )$ be the set of all $d\times d$-matrices with entries in the set $\Omega$, and let $a\in \mathscr S (\mathbf R^{2d})$ and $A\in \mathbf{M}(d,\mathbf R)$ be fixed. Then the pseudo-differential operator $\operatorname{Op}_A(a)$ is the linear and continuous operator on $\mathscr S(\mathbf R^{d})$, given by $$\label{e0.5} (\operatorname{Op}_A(a)f)(x) = (2\pi ) ^{-d}\iint a(x-A(x-y),\xi )f(y) e^{i\langle x-y,\xi \rangle}\, dyd\xi ,$$ when $f\in \mathscr S(\mathbf R^{d})$. For general $a\in \mathscr S'(\mathbf R^{2d})$, the pseudo-differential operator $\operatorname{Op}_A(a)$ is defined as the linear and continuous operator from $\mathscr S(\mathbf R^{d})$ to $\mathscr S'(\mathbf R^{d})$ with distribution kernel given by $$\label{atkernel} K_{a,A}(x,y)=(2\pi )^{-d/2}(\mathscr F_2^{-1}a)(x-A(x-y),x-y).$$ Here $\mathscr F_2F$ is the partial Fourier transform of $F(x,y)\in \mathscr S'(\mathbf R^{2d})$ with respect to the $y$ variable. This definition makes sense, since the mappings $$\label{homeoF2tmap} \mathscr F_2\quad \text{and}\quad F(x,y)\mapsto F(x-A(x-y),x-y)$$ are homeomorphisms on $\mathscr S'(\mathbf R^{2d})$. In particular, the map $a\mapsto K_{a,A}$ is a homeomorphism on $\mathscr S'(\mathbf R^{2d})$. An important special case appears when $A=t\cdot I$, with $t\in \mathbf R$.
Here and in what follows, $I\in \mathbf{M}(d,\mathbf R)$ denotes the $d\times d$ identity matrix. In this case we set $$\operatorname{Op}_t(a) = \operatorname{Op}_{t\cdot I}(a).$$ The normal or Kohn-Nirenberg representation, $a(x,D)$, is obtained when $t=0$, and the Weyl quantization, $\operatorname{Op}^w(a)$, is obtained when $t=\frac 12$. That is, $$a(x,D) = \operatorname{Op}_0(a) \quad \text{and}\quad \operatorname{Op}^w(a) = \operatorname{Op}_{1/2}(a).$$ For any $K\in \mathscr S'(\mathbf R^{d_1+d_2})$, we let $T_K$ be the linear and continuous mapping from $\mathscr S(\mathbf R^{d_1})$ to $\mathscr S'(\mathbf R^{d_2})$, defined by the formula $$\label{pre(A.1)} (T_Kf,g)_{L^2(\mathbf R^{d_2})} = (K,g\otimes \overline f )_{L^2(\mathbf R^{d_1+d_2})}.$$ It is well-known that if $A\in \mathbf{M}(d,\mathbf R)$, then it follows from the Schwartz kernel theorem that $K\mapsto T_K$ and $a\mapsto \operatorname{Op}_A(a)$ are bijective mappings from $\mathscr S'(\mathbf R^{2d})$ to the set of linear and continuous mappings from $\mathscr S(\mathbf R^{d})$ to $\mathscr S'(\mathbf R^{d})$ (cf. e. g. [@Ho1]). In particular, for every $a_1\in \mathscr S'(\mathbf R^{2d})$ and $A_1,A_2\in \mathbf{M}(d,\mathbf R)$, there is a unique $a_2\in \mathscr S'(\mathbf R^{2d})$ such that $\operatorname{Op}_{A_1}(a_1) = \operatorname{Op}_{A_2} (a_2)$. The following result explains the relations between $a_1$ and $a_2$. **Proposition 11**. *Let $a_1,a_2\in \mathscr S'(\mathbf R^{2d})$ and $A_1,A_2\in \mathbf{M}(d,\mathbf R)$. Then $$%\label{calculitransform} \operatorname{Op}_{A_1}(a_1) = \operatorname{Op}_{A_2}(a_2) \quad \Leftrightarrow \quad e^{i\langle A_2D_\xi,D_x \rangle}a_2(x,\xi )=e^{i\langle A_1D_\xi,D_x \rangle}a_1(x,\xi ).$$* In [@Toft15], a proof of the previous proposition is given, which is similar to the proof of the case $A=t\cdot I$ in [@Ho1; @Sh; @Tr]. Let $r, \rho ,\delta \in \mathbf R$ be such that $0\leqslant \delta \leqslant \rho \leqslant 1$ and $\delta <1$.
Then we recall that the Hörmander class $S^r_{\rho ,\delta}(\mathbf R^{2d})$ consists of all $a\in C^\infty (\mathbf R^{2d})$ such that $$\sum _{|\alpha |,|\beta |\leqslant N}\sup _{x,\xi \in \mathbf R^{d}} \left ( \langle \xi \rangle^{-r+\rho |\alpha |-\delta |\beta |}|D_\xi ^\alpha D_x^\beta a(x,\xi )| \right )$$ is finite for every integer $N\geqslant 0$. We recall the following continuity property for pseudo-differential operators acting on $L^p$-spaces (see e. g. [@Wong]). **Proposition 12**. *Let $p\in (1,\infty )$, $A\in \mathbf{M}(d,\mathbf R)$ and $a\in S^0_{1,0}(\mathbf R^{2d})$. Then $\operatorname{Op}_A(a)$ is continuous on $L^p(\mathbf R^{d})$.* In the next proposition we essentially recall Hörmander's improvement of Mihlin's Fourier multiplier theorem. **Proposition 13**. *Let $p\in (1,\infty )$ and $a\in L^\infty (\mathbf R^{d}\setminus 0)$ be such that $$\sup _{R>0} \left ( R^{-d+2|\alpha |}\int _{A_R} |\partial ^\alpha a(\xi )|^2\, d\xi \right )$$ is finite for every $\alpha \in \mathbf N^{d}$ with $|\alpha |\leqslant [\frac d2]+1$, where $A_R$ is the annulus $\{ \, \xi \in \mathbf R^{d}\, ;\, R<|\xi |<2R\, \}$. Then $a(D)$ is continuous on $L^p(\mathbf R^{d})$.* ## Fourier integral operators of $SG$ type We recall that the so-called $SG$-symbol class $S^{m,\mu}(\mathbf R^{2d})$, $m,\mu \in\mathbf R$, consists of all $a\in C^\infty (\mathbf R^{2d})$ such that $$\sum _{|\alpha |,|\beta |\leqslant N}\sup _{x,\xi \in \mathbf R^{d}} \left ( \langle x\rangle ^{-m+|\alpha|} \langle \xi \rangle^{-\mu + |\beta |}|D_x ^\alpha D_\xi^\beta a(x,\xi )| \right )$$ is finite for every integer $N\geqslant 0$.
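The $SG$ estimates above lend themselves to quick numerical spot checks. The sketch below is our illustration, not part of the original text: it takes the model symbol $a(x,\xi )=\langle x\rangle ^m\langle \xi \rangle ^{\mu}$ in dimension $d=1$ (an arbitrary choice) and verifies the estimate for $\alpha =1$, $\beta =0$ with the explicit constant $|m|$, using $|x|\leqslant \langle x\rangle$.

```python
import math

def bracket(t):
    """Japanese bracket <t> = (1 + t^2)^(1/2)."""
    return math.sqrt(1.0 + t * t)

def dx_a(x, xi, m, mu):
    """Exact x-derivative of the model symbol a(x, xi) = <x>^m <xi>^mu."""
    return m * x * bracket(x) ** (m - 2.0) * bracket(xi) ** mu

# Spot-check |D_x a(x, xi)| <= |m| <x>^(m - 1) <xi>^mu on a grid,
# which is the SG estimate for alpha = 1, beta = 0 (d = 1).
m, mu = -1.5, 0.5
worst = 0.0
for x in (k * 0.37 for k in range(-40, 41)):
    for xi in (k * 0.53 for k in range(-40, 41)):
        lhs = abs(dx_a(x, xi, m, mu))
        rhs = abs(m) * bracket(x) ** (m - 1.0) * bracket(xi) ** mu
        worst = max(worst, lhs / rhs)

print(worst)  # never exceeds 1, and approaches 1 for large |x|
```

The printed ratio tends to $1$ for large $|x|$, indicating that the constant $|m|$ is sharp for this particular symbol and derivative.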
Following [@CR2], we say that $\varphi\in C^\infty(\mathbf R^{d}\times(\mathbf R^{d}\setminus 0))$ is a phase-function if it is real-valued, positively $1$-homogeneous with respect to $\xi$, that is, $\varphi(x,\tau\xi)=\tau\varphi(x,\xi)$ for all $\tau>0$, $x,\xi\in\mathbf R^{d}$, $\xi\not=0$, and satisfies, for all $x,\xi\in\mathbf R^{d}$, $\xi\not=0$, $$\label{eq:phf} \begin{aligned} |\det\partial_x\partial_\xi\varphi(x,\xi)|\geq C>0, \quad \partial^\alpha_x\varphi(x,\xi) \prec \langle x\rangle^{1-|\alpha|} |\xi| \textrm{ for all } \alpha\in \mathbf N^{d},\\ \; \langle \varphi^\prime_\xi(x,\xi)\rangle\sim\langle x\rangle, \; \langle \varphi^\prime_x(x,\xi)\rangle\sim\langle \xi\rangle. \end{aligned}$$ In the sequel, we will denote the set of all such phase-functions by $\mathfrak{P}_r^\mathrm{hom}$. For any $a\in S^{m,\mu}(\mathbf R^{2d})$ and $\varphi\in\mathfrak{P}_r^\mathrm{hom}$, the Fourier integral operator $\operatorname{Op}_{\varphi}(a)$ is the linear and continuous operator from $\mathscr S(\mathbf R^{d})$ to $\mathscr S'(\mathbf R^{d})$, given by $$\label{eq:fio2} (\operatorname{Op}_\varphi(a)f)(x)=\int_{\mathbf R^{d}} e^{i\varphi(x,\xi)} a(x,\xi) \widehat{f}(\xi)\, d\xi, \quad f\in\mathscr{S}(\mathbf R^{d}).$$ We recall the following (global on $\mathbf R^{d}$) $L^p$-boundedness result, proved in [@CR2]. **Theorem 14**. *Let $p\in (1,\infty)$, $m,\mu\in \mathbf R$ be such that $$\label{eq:sharp-thresholds} m\le -(d-1)\left|\frac{1}{p}-\frac12\right| \textrm{ and } \mu\le -(d-1)\left|\frac{1}{p}-\frac12\right| ,$$ and suppose that $a\in S^{m,\mu}(\mathbf R^{2d})$ is such that $|\xi|\geq\varepsilon$, for some $\varepsilon>0$, on the support of $a$. Then $\operatorname{Op}_{\varphi}(a)$ from $\mathscr S(\mathbf R^{d})$ to $\mathscr S'(\mathbf R^{d})$ extends uniquely to a continuous operator on $L^p(\mathbf R^{d})$.* *Remark 15*. 
As is well known, in the presence of a phase function $\varphi\in\mathfrak{P}_r^\mathrm{hom}$ different from $\varphi(x,\xi)=x\cdot\xi$ (for which [\[eq:fio2\]](#eq:fio2){reference-type="eqref" reference="eq:fio2"} actually becomes a pseudo-differential operator), the uniform boundedness of the amplitude $a$ is, in general, not enough to guarantee that $\operatorname{Op}_\varphi(a)$ continuously maps $L^p$ into itself, even if the support of $f$ is compact (see the celebrated paper [@SSS91]), except when $p=2$. This is, of course, in strong contrast with Proposition [Proposition 12](#Prop:LpCont){reference-type="ref" reference="Prop:LpCont"}. Notice, in [\[eq:sharp-thresholds\]](#eq:sharp-thresholds){reference-type="eqref" reference="eq:sharp-thresholds"}, the *loss of decay* (that is, the condition on the $x$-order $m$ of the amplitude), together with the well-known *loss of smoothness* (that is, the condition on the $\xi$-order $\mu$ of the amplitude). Notice also that no compactness condition on the support of $f$ is needed in Theorem [Theorem 14](#Thm:LpSGFIOcont){reference-type="ref" reference="Thm:LpSGFIOcont"} (see [@CR2] and the references quoted therein for more details). # The role of upper and lower Lebesgue exponents for Young functions {#sec2} In this section we investigate the Orlicz Lebesgue exponents $p_\Phi$ and $q_\Phi$ and link conditions on these exponents to various properties of the corresponding Young functions $\Phi$. In particular, we show that both implications in [\[eq:qPhi\]](#eq:qPhi){reference-type="eqref" reference="eq:qPhi"} involving $q_\Phi$ are wrong (see Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"}).
Instead we deduce other conditions on $\Phi$ which characterize $q_\Phi >1$ (see Propositions [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"} and [Proposition 18](#Prop:qPhiEquivCond){reference-type="ref" reference="Prop:qPhiEquivCond"}). In the following proposition we list some basic relations between Young functions and their upper and lower Lebesgue exponents. **Proposition 16**. *Let $\Phi$ be a Young function which is non-zero outside the origin, and let $q_{\Phi}$ and $p_{\Phi}$ be as in [\[Eq:StrictYoungFunc1\]](#Eq:StrictYoungFunc1){reference-type="eqref" reference="Eq:StrictYoungFunc1"} and [\[Eq:StrictYoungFunc2\]](#Eq:StrictYoungFunc2){reference-type="eqref" reference="Eq:StrictYoungFunc2"}. Then the following is true:* 1. *$1\leqslant q_\Phi \leqslant p_\Phi$;* 2. *$p_\Phi =1$, if and only if $\Phi$ is a linear map;* 3. *$p_{\Phi} < \infty$, if and only if $\Phi$ fulfills the $\Delta_2$-condition;* 4. *$q_{\Phi} > 1$, if and only if there is a $p>1$ such that $\frac{\Phi(t)}{t^p}$ increases.* *Remark 17*. Taking into account that $\Phi$ in Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"} is a Young function, we find that (4) is equivalent to 1. *$q_{\Phi} > 1$, if and only if there is a $p>1$ such that $\frac{\Phi(t)}{t^p}$ increases, $$\lim _{t\to 0+} \frac{\Phi(t)}{t^p} = 0 \quad \text{and}\quad \lim _{t\to \infty} \frac{\Phi(t)}{t^p} = \infty .$$* Most of Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"} and Remark [Remark 17](#Rem:YounFuncEquivCond){reference-type="ref" reference="Rem:YounFuncEquivCond"} are well known. In order to be self-contained, we present a proof here.
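Before turning to the proof, it may help to see the exponents at work on concrete examples. The following sketch is ours and purely illustrative (the sample functions and the logarithmic grid are ad hoc choices): it approximates $q_\Phi$ and $p_\Phi$ by the infimum and supremum of $t\Phi _+'(t)/\Phi (t)$ over a finite grid.

```python
import math

def lebesgue_exponents(phi, dphi, ts):
    """Approximate q_Phi (infimum) and p_Phi (supremum) of t*phi'(t)/phi(t)
    over the finite grid ts."""
    ratios = [t * dphi(t) / phi(t) for t in ts]
    return min(ratios), max(ratios)

ts = [10.0 ** (k / 50.0) for k in range(-200, 201)]  # t from 1e-4 to 1e4

# Phi(t) = t^p: the ratio is identically p, so q_Phi = p_Phi = p.
p = 2.5
q1, p1 = lebesgue_exponents(lambda t: t ** p, lambda t: p * t ** (p - 1.0), ts)

# Phi(t) = t*ln(1 + t): the ratio equals 1 + t/((1 + t)*ln(1 + t)), which
# decreases from 2 (t -> 0) to 1 (t -> infinity), so q_Phi = 1, p_Phi = 2.
q2, p2 = lebesgue_exponents(
    lambda t: t * math.log1p(t),
    lambda t: math.log1p(t) + t / (1.0 + t),
    ts,
)
print(q1, p1)  # both equal 2.5 up to rounding
print(q2, p2)  # infimum slowly decreasing toward 1; supremum just below 2
```

For $\Phi (t)=t^p$ both exponents equal $p$, while for $\Phi (t)=t\ln (1+t)$ (which reappears in Proposition 19 (3)) the grid values reflect $q_\Phi =1$ and $p_\Phi =2$, the infimum approaching $1$ only slowly as the grid extends to larger $t$.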
*Proof of Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"}.* Since $\Phi$ and its left and right derivatives are increasing, the mean-value theorem gives that for some $c=c_t\in [0,1]$, we have $$\Phi (t) =\Phi (t)-\Phi (0) \leqslant t\Phi _+'(ct)\leqslant t\Phi _+'(t).$$ This gives (1). If $\Phi$ is linear, then $\frac {t\Phi '(t)}{\Phi (t)}=1$, giving that $q_\Phi =p_\Phi =1$. Suppose instead that $p_\Phi =1$. Then $$\frac {t\Phi '(t)}{\Phi (t)}=1,$$ in view of (1) and its proof. This implies that $\Phi (t)=Ct$ for some constant $C$, and (2) follows. In order to prove (3), we first suppose that $p_{\Phi} < \infty$. Then $$\frac{t \Phi _+'(t)}{\Phi(t)}\leqslant R \quad \Leftrightarrow \quad t \Phi _+'(t)-R\Phi(t) \leqslant 0,$$ for some $R>0$. Since $\Phi (0)=0$, we obtain $$\Phi (t) = t^Rh(t),\quad t>0,$$ for some positive decreasing function $h(t)$, $t>0$. This gives $$\Phi (2t) = (2t)^Rh(2t) \leqslant 2^R t^Rh(t)=2^R\Phi (t),$$ and it follows that $\Phi$ satisfies the $\Delta _2$-condition when $p_{\Phi} < \infty$. Suppose instead that $\Phi$ satisfies the $\Delta _2$-condition. By the mean-value theorem and the fact that $\Phi '_+(t)$ is increasing we obtain $$\Phi '_+(t)t \leqslant \Phi (2t)-\Phi (t)\leqslant \Phi (2t)\leqslant C\Phi (t),$$ for some constant $C>0$. Here the last inequality follows from the fact that $\Phi$ satisfies the $\Delta _2$-condition. This gives $$\frac{t \Phi _+'(t)}{\Phi(t)} \leqslant C,$$ giving that $p_\Phi \leqslant C<\infty$, and we have proved (3). Next we prove (4). Suppose that $q_\Phi>1$. Then there is a $p>1$ such that $$\frac{t \Phi_\pm'(t)}{\Phi(t)} > p$$ for all $t>0$, which gives $$t \Phi_\pm'(t) - p \Phi(t) > 0.$$ Hence $$\frac{t^p \Phi_\pm'(t) - p t^{p-1}\Phi(t)}{t^{2p}}> 0,$$ or equivalently $$\left(\frac{\Phi(t)}{t^p} \right)_\pm' > 0.$$ Hence, the result now holds.
If we instead suppose that $\frac{\Phi(t)}{t^p}$ is increasing for some $p>1$, then applying the arguments above in reverse order now yields $q_\Phi\geqslant p > 1$. ◻ For the equivalence in (4) of Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"} we note the following refinement. **Proposition 18**. *Let $\Phi$ be a Young function which is non-zero outside the origin, and let $q_{\Phi}$ be as in [\[Eq:StrictYoungFunc1\]](#Eq:StrictYoungFunc1){reference-type="eqref" reference="Eq:StrictYoungFunc1"}. Then the following conditions are equivalent:* 1. *$q_\Phi >1$;* 2. *there is a $p>1$ such that $\frac{\Phi(t)}{t^p}$ increases;* 3. *there are $p,q>1$ such that $\frac{\Phi(t)}{t^{p}}$ increases near the origin and $\frac{\Phi(t)}{t^{q}}$ increases at infinity;* 4. *there is a $p>1$ such that for every $t>0$ and every $c\in (0,1]$, $\Phi(ct) \leqslant c^p \Phi(t)$.* *Proof.* The equivalence of (1) and (2) was established in Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"}. Trivially, (2) implies (3). Moreover, $\frac{\Phi(t)}{t^p}$ increases if and only if for any $t>0$ and any $c\in (0,1]$, $$\frac{\Phi(ct)}{(ct)^p} \leqslant \frac{\Phi(t)}{t^p},$$ which is equivalent to (4), hence (2) is equivalent to (4). We will now show that (3) implies (1), yielding the result. Suppose that (3) holds. Then there are $p,q>1$ and $R_1,R_2>0$ such that $\frac{\Phi(t)}{t^{p}}$ is increasing on $(0,R_1)$ and $\frac{\Phi(t)}{t^{q}}$ is increasing on $(R_2,\infty)$, whence $$q_1 = \inf_{t\in(0,R_1)} \left ( \frac{t \Phi_+'(t)}{\Phi(t)} \right ) \geqslant p>1\quad\text{and}\quad q_3 = \inf_{t\in(R_2,\infty)} \left ( \frac{t \Phi_+'(t)}{\Phi(t)} \right ) \geqslant q>1.$$ Let $q_2 = \inf_{t\in[R_1,R_2]} \frac{t \Phi_+'(t)}{\Phi(t)}.$ We want to show that $q_2>1$, which will in turn yield $q_\Phi = \min \{ q_1,q_2,q_3\} > 1$, completing the proof.
Let $\varphi_1(t) = k_1 t - m_1$ and $\varphi_2(t) = k_2 t - m_2$, with $k_j = \Phi_+'(R_j)$ and $m_j$ chosen so that $\varphi_j(R_j) = \Phi(R_j)$, $j=1,2$. Given that $\Phi$ is a convex Young function fulfilling (3), it is clear that $k_1 \leqslant k_2$, $m_1 \leqslant m_2$ and $m_j > 0$ for $j=1,2$. We now approximate $\Phi(t)$ with linear segments forming polygonal chains for $R_1 \leqslant t \leqslant R_2$. Pick points $R_1 = t_0 < t_1 < \dots < t_n = R_2$ and define functions $f_j(t) =a_j t - b_j$ such that $f_j(t_j) = \Phi(t_j)$ and $f_j(t_{j+1})=\Phi(t_{j+1})$. Let $\Phi_n(t)$ be the polygonal chain on $[R_1,R_2]$ formed by connecting the functions $f_j$, meaning $\Phi_n(t) = f_j(t)$ whenever $t\in [t_j,t_{j+1}]$. Since $\Phi$ is convex and increasing, we have $k_1 \leqslant a_j \leqslant k_2$ and $m_1 \leqslant b_j \leqslant m_2$ for all $j=0,\dots, n-1$. Hence, for any $j = 0, \dots, n-1$, $$\inf_{t\in[t_j,t_{j+1}]} \left( \frac{t (f_j)_+'(t)}{f_j(t)} \right) = \inf_{t\in[t_j,t_{j+1}]} \left( 1 + \frac{b_j}{a_j t - b_j} \right) \geqslant 1 + \frac{m_1}{\Phi(R_2)},$$ where the last inequality follows from the facts that $b_j \geqslant m_1$ and $a_j t - b_j = f_j(t) \leqslant \Phi(t_{j+1}) \leqslant \Phi(R_2)$ for $t\in[t_j,t_{j+1}]$. From this, it is clear that $$q_{\Phi_n} = \inf_{t\in[R_1,R_2]} \left ( \frac{t (\Phi_n)_+'(t)}{\Phi_n(t)} \right ) \geqslant 1 + \frac{m_1}{\Phi(R_2)},$$ independently of the choice of $n$ and of the points $t_j$, $j=1,\dots, n-1$, and therefore $$q_2 = \lim_{n\rightarrow \infty} q_{\Phi_n} \geqslant 1 + \frac{m_1}{\Phi(R_2)} > 1,$$ since $m_1>0$. This gives (1), completing the proof. ◻ The following proposition shows that the condition $q_\Phi >1$ cannot be linked to strict convexity for the Young function $\Phi$. **Proposition 19**. *Let $\Phi$ be a Young function which is non-zero outside the origin, and let $q_{\Phi}$ be as in [\[Eq:StrictYoungFunc1\]](#Eq:StrictYoungFunc1){reference-type="eqref" reference="Eq:StrictYoungFunc1"}. Then the following is true:* 1.
*if $q_{\Phi} > 1$, then there is an equivalent Young function to $\Phi$ which is strictly convex;* 2. *$\Phi$ can be chosen such that $q_\Phi >1$ but $\Phi$ is not strictly convex;* 3. *$\Phi$ can be chosen such that $q_\Phi =1$ but $\Phi$ is strictly convex.* *Remark 20*. In [@Liu] it is stated that (1) in Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"} can be replaced by (1)$'$ *$q_\Phi >1$, if and only if $\Phi$ is strictly convex.* This is equivalent to the following two conditions holding: (2)$'$ *if $q_\Phi >1$, then $\Phi$ is strictly convex;* (3)$'$ *if $\Phi$ is strictly convex, then $q_\Phi >1$.* (See the remark after (1.1) in [@Liu].) Evidently, the assertion in [@Liu] is (strictly) stronger than Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"} (1). On the other hand, Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"} (2) shows that (2)$'$ cannot be true and Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"} (3) shows that (3)$'$ cannot be true. Consequently, both implications in (1)$'$ are false. *Proof of Proposition [Proposition 19](#Prop:YounFuncEquivCond2){reference-type="ref" reference="Prop:YounFuncEquivCond2"}.* We begin by proving (1). Assume therefore that $q_\Phi>1$. Suppose that $\Phi$ fails to be strictly convex in $(0,\varepsilon)$, for some $\varepsilon>0$. Then $\Phi ''(t)=0$ when $t\in (0,\varepsilon)$. This implies that $\Phi (t)=ct$ when $t\in (0,\varepsilon)$, for some $c\geqslant 0$, which in turn gives $q_\Phi=1$, violating the condition $q_\Phi>1$. Hence $\Phi$ must be strictly convex in $(0,\varepsilon)$, for some choice of $\varepsilon>0$.
Let $$\Psi (t) = \int _0^t\Phi (t-s)e^{-s}\, ds.$$ Then $$\Psi ''(t) = \Phi '(0)e^{-t}+\int _0^t\Phi ''(t-s)e^{-s} \, ds \ge \int _{\max (0,\, t-\varepsilon )}^t\Phi ''(t-s)e^{-s}\, ds >0,$$ since $\Phi ''(t-s)>0$ when $t-s\in (0,\varepsilon)$. This shows that $\Psi$ is a strictly convex Young function. Since $\Phi$ is increasing we also have $$\Psi (t)\leqslant \Phi (t),$$ because $$\Psi (t) = \int _0^t\Phi (t-s)e^{-s}\, ds \le \Phi (t)\int _0^te^{-s}\, ds \le \Phi (t)\int _0^\infty e^{-s}\, ds = \Phi (t).$$ This implies that $$\Phi _1(t)\equiv \Phi (t)+\Psi (t)$$ is equivalent to $\Phi (t)$. Since $\Psi$ is strictly convex, it follows that $\Phi _1$ is strictly convex as well. Consequently, $\Phi _1$ fulfills the required conditions, and (1) follows. In order to prove (2), we choose $$\Phi (t)= \begin{cases} 2t^2,&\text{when}\ t \leqslant 1 \\[1ex] 4t-2 ,&\text{when}\ 1 \leqslant t \leqslant 2 \\[1ex] t^2+2,& \text{when}\ t \geqslant 2 \end{cases}$$ which is not strictly convex. Then $$\begin{aligned} q_{\Phi} &= \inf _{t>0} \left ( \frac{t \Phi'(t)}{\Phi(t)} \right ) \\[1ex] &= \min \left \{ \inf_{t \leqslant 1} \left (\frac{4t^2}{2t^2}\right ), \inf_{1 \leqslant t \leqslant 2} \left (\frac{4t}{4t-2}\right ), \inf_{t \geqslant 2} \left (\frac{2t^2}{t^2+2}\right ) \right \} = \frac{4}{3} >1,\end{aligned}$$ which shows that $\Phi$ has all the required properties. This gives (2). Next we prove (3). Let $$\Phi (t) = t\ln (1+t),\quad t\geqslant 0.$$ Then $\Phi$ is a Young function, and it follows by straightforward computations that $q_{\Phi} = 1$. We also have $\Phi ''(t) >0$, giving that $\Phi$ is strictly convex. Consequently, $\Phi$ has all the required properties, and (3) follows. This gives the result.
◻ # Continuity for pseudo-differential operators, Fourier multipliers, and Fourier integral operators on Orlicz spaces {#sec3} In this section we extend $L^p$ continuity properties for various types of Fourier type operators to continuity on Orlicz spaces. In particular, we perform such extensions for Hörmander's improvement of Mihlin's Fourier multiplier theorem (see Theorem [Theorem 24](#Thm:OrlContHormMult){reference-type="ref" reference="Thm:OrlContHormMult"}). We also deduce Orlicz space continuity for suitable classes of pseudo-differential and Fourier integral operators (see Theorems [Theorem 23](#Thm:OrlContPseudo){reference-type="ref" reference="Thm:OrlContPseudo"} and [Theorem 25](#Thm:OrlContSGFIOs){reference-type="ref" reference="Thm:OrlContSGFIOs"}). Our investigations are based on a Marcinkiewicz type interpolation theorem for Orlicz spaces, deduced in [@Liu]. More precisely, we recall the following interpolation result, which is a special case of [@Liu Theorem 5.1]. **Proposition 21**. *Let $\Phi$ be a strict Young function and let $p_0,p_1 \in (0,\infty ]$ be such that $p_0<q_{\Phi} \leqslant p_{\Phi}<p_1$, where $q_{\Phi}$ and $p_{\Phi}$ are defined in [\[Eq:StrictYoungFunc1\]](#Eq:StrictYoungFunc1){reference-type="eqref" reference="Eq:StrictYoungFunc1"} and [\[Eq:StrictYoungFunc2\]](#Eq:StrictYoungFunc2){reference-type="eqref" reference="Eq:StrictYoungFunc2"}. Also let $$\label{Eq:OrliczIntPol} T : L^{p_0}(\mathbf R^{d}) + L^{p_1}(\mathbf R^{d}) \to wL^{p_0}(\mathbf R^{d}) + wL^{p_1}(\mathbf R^{d})$$ be a linear and continuous map which restricts to linear and continuous mappings $$\begin{aligned} {5} T &:&\, L^{p_0}(\mathbf R^{d}) &\to \, wL^{p_0}(\mathbf R^{d}) & \qquad &\text{and} &\qquad T &:& L^{p_1}(\mathbf R^{d}) &\to wL^{p_1}(\mathbf R^{d}).
\end{aligned}$$ *Then [\[Eq:OrliczIntPol\]](#Eq:OrliczIntPol){reference-type="eqref" reference="Eq:OrliczIntPol"} restricts to linear and continuous mappings $$\begin{aligned} {2} T &:& \ L^{\Phi}(\mathbf R^{d}) &\to L^{\Phi}(\mathbf R^{d})& \qquad &\text{and} &\qquad T &:& \ wL^{\Phi}(\mathbf R^{d}) &\to wL^{\Phi}(\mathbf R^{d}). \label{Eq:InterpolCont}\end{aligned}$$* *Remark 22*. Let $\Phi$ and $T$ be the same as in Proposition [Proposition 21](#Prop:OrliczIntPol){reference-type="ref" reference="Prop:OrliczIntPol"}. Then the continuity of the mappings in [\[Eq:InterpolCont\]](#Eq:InterpolCont){reference-type="eqref" reference="Eq:InterpolCont"} means $$\begin{aligned} {2} \Vert Tf\Vert _{L^{\Phi}} &\lesssim \Vert f\Vert _{L^{\Phi}}, & \quad f &\in L^{\Phi}(\mathbf R^{d}) \intertext{and} \Vert Tf\Vert _{w L^{\Phi}} &\lesssim \Vert f\Vert _{w L^{\Phi}}, & \quad f &\in w L^{\Phi}(\mathbf R^{d}).\end{aligned}$$ A combination of Propositions [Proposition 12](#Prop:LpCont){reference-type="ref" reference="Prop:LpCont"} and [Proposition 21](#Prop:OrliczIntPol){reference-type="ref" reference="Prop:OrliczIntPol"} gives the following result on continuity properties for pseudo-differential operators on $L^{\Phi}$-spaces. **Theorem 23**. *Let $\Phi$ be a strict Young function, $A \in \mathbf{M}(d,\mathbf R )$ and $a\in S^0_{1,0}(\mathbf R^{2d})$. Then $$\operatorname{Op}_A(a) : L^{\Phi}(\mathbf R^{d}) \to L^{\Phi}(\mathbf R^{d}) \quad \text{and}\quad \operatorname{Op}_A(a) : wL^{\Phi}(\mathbf R^{d}) \to wL^{\Phi}(\mathbf R^{d})$$ are continuous.* *Proof.* By Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"} it follows that $q_{\Phi}>1$ and $p_{\Phi}< \infty$. Choose $p_0,p_1\in (1,\infty )$ such that $p_0 <q_{\Phi}$ and $p_1 >p_{\Phi}$.
In view of Remark [Remark 3](#Rem:WeakLp){reference-type="ref" reference="Rem:WeakLp"} and Proposition [Proposition 12](#Prop:LpCont){reference-type="ref" reference="Prop:LpCont"}, $$\|\operatorname{Op}_A(a) f\|_{wL^{p_j}} \leqslant \|\operatorname{Op}_A(a) f\|_{L^{p_j}} \leqslant C\|f\|_{L^{p_j}}, \quad f \in L^{p_j}(\mathbf R^{d}), \ j=0,1.$$ Then it follows that $\operatorname{Op}_A(a)$ extends uniquely to a continuous map from $L^{p_0}(\mathbf R^{d})+L^{p_1}(\mathbf R^{d})$ to $wL^{p_0}(\mathbf R^{d})+wL^{p_1}(\mathbf R^{d})$ (see e. g. [@BerLof]). Hence the conditions of Proposition [Proposition 21](#Prop:OrliczIntPol){reference-type="ref" reference="Prop:OrliczIntPol"} are fulfilled and the result follows. ◻ By using Proposition [Proposition 13](#Prop:HormMult){reference-type="ref" reference="Prop:HormMult"} instead of Proposition [Proposition 12](#Prop:LpCont){reference-type="ref" reference="Prop:LpCont"} in the previous proof we obtain the following extension of Hörmander's improvement of Mihlin's Fourier multiplier theorem. The details are left to the reader. **Theorem 24**. *Let $\Phi$ be a strict Young function and $a\in L^\infty (\mathbf R^{d}\setminus 0)$ be such that $$\sup _{R>0} \left ( R^{-d+2|\alpha |}\int _{A_R} |\partial ^\alpha a(\xi )|^2\, d\xi \right )$$ is finite for every $\alpha \in \mathbf N^{d}$ with $|\alpha |\leqslant [\frac d2]+1$, where $A_R$ is the annulus $\{ \, \xi \in \mathbf R^{d}\, ;\, R<|\xi |<2R\, \}$. Then $a(D)$ is continuous on $L^\Phi (\mathbf R^{d})$ and on $wL^\Phi (\mathbf R^{d})$.* Finally, employing Theorem [Theorem 14](#Thm:LpSGFIOcont){reference-type="ref" reference="Thm:LpSGFIOcont"}, we prove the following continuity result for Fourier integral operators on $L^\Phi$-spaces. **Theorem 25**.
*Let $\Phi$ be a strict Young function, $\varphi\in\mathfrak{P}_r^\mathrm{hom}$ a phase function, and $a\in S^{m,\mu}(\mathbf R^{2d})$ an amplitude such that $$\label{eq:orderOrlcont} m<\mathfrak{T}_{d,\Phi} \textrm{ and } \mu<\mathfrak{T}_{d,\Phi},$$ where $$\mathfrak{T}_{d,\Phi}=-(d-1)\max\left\{ \left|\frac{1}{p_\Phi}-\frac12\right| , \left|\frac{1}{q_\Phi}-\frac12\right| \right\}.$$ Moreover, assume that $|\xi|\ge\varepsilon$ on the support of $a$, for some $\varepsilon>0$. Then, $$\operatorname{Op}_\varphi(a)\colon L^\Phi(\mathbf R^{d})\to L^\Phi(\mathbf R^{d}) \quad \text{and}\quad \operatorname{Op}_\varphi(a) : wL^{\Phi}(\mathbf R^{d}) \to wL^{\Phi}(\mathbf R^{d})$$ are continuous.* *Remark 26*. Notice the strict inequality in [\[eq:orderOrlcont\]](#eq:orderOrlcont){reference-type="eqref" reference="eq:orderOrlcont"}, in contrast with condition [\[eq:sharp-thresholds\]](#eq:sharp-thresholds){reference-type="eqref" reference="eq:sharp-thresholds"} in Theorem [Theorem 14](#Thm:LpSGFIOcont){reference-type="ref" reference="Thm:LpSGFIOcont"} for the $L^p$-boundedness of the Fourier integral operators in [\[eq:fio2\]](#eq:fio2){reference-type="eqref" reference="eq:fio2"}. The sharpness of condition [\[eq:orderOrlcont\]](#eq:orderOrlcont){reference-type="eqref" reference="eq:orderOrlcont"} will be investigated elsewhere. *Proof.* As above, by Proposition [Proposition 16](#Prop:YounFuncEquivCond){reference-type="ref" reference="Prop:YounFuncEquivCond"} it follows that $q_{\Phi}>1$ and $p_{\Phi}< \infty$.
Choose $p_0,p_1\in (1,\infty )$ such that $p_0 <q_{\Phi}$ and $p_1 >p_{\Phi}$ and, as is possible by continuity and hypothesis [\[eq:orderOrlcont\]](#eq:orderOrlcont){reference-type="eqref" reference="eq:orderOrlcont"}, such that $$m < -(d-1) \left|\frac{1}{p_j}-\frac12\right| \textrm{ and } \mu< -(d-1) \left|\frac{1}{p_j}-\frac12\right|, \quad j=0,1.$$ In view of Remark [Remark 3](#Rem:WeakLp){reference-type="ref" reference="Rem:WeakLp"} and Theorem [Theorem 14](#Thm:LpSGFIOcont){reference-type="ref" reference="Thm:LpSGFIOcont"}, $$\|\operatorname{Op}_\varphi(a) f\|_{wL^{p_j}} \leqslant \|\operatorname{Op}_\varphi(a) f\|_{L^{p_j}} \leqslant C\|f\|_{L^{p_j}}, \quad f \in L^{p_j}(\mathbf R^{d}), \ j=0,1.$$ By Proposition [Proposition 21](#Prop:OrliczIntPol){reference-type="ref" reference="Prop:OrliczIntPol"}, the claim follows, arguing as in the final step of the proof of Theorem [Theorem 23](#Thm:OrlContPseudo){reference-type="ref" reference="Thm:OrlContPseudo"}. ◻ J. Appell, A. Kalitvin, P. Zabreiko, *Partial integral operators in Orlicz spaces with mixed norm*, Colloquium Mathematicum **78** (1998), 293--306. J. Bergh, J. Löfström, *Interpolation Spaces, An Introduction*, Springer-Verlag, Berlin Heidelberg New York, 1976. S. Coriasco, M. Ruzhansky, *Global $L^p$-continuity of Fourier Integral Operators*, Trans. Amer. Math. Soc. **366** (2014), 2575--2596. P. Harjulehto, P. Hästö, *Orlicz Spaces and Generalized Orlicz Spaces*, Springer, 2019. L. Hörmander, *Estimates for translation invariant operators in $L^p$ spaces*, Acta Math. **104** (1960), 93--140. L. Hörmander, *The Analysis of Linear Partial Differential Operators*, vol. I--III, Springer-Verlag, Berlin Heidelberg New York Tokyo, 1983, 1985. PeiDe Liu, MaoFa Wang, *Weak Orlicz spaces: some basic properties and their applications to harmonic analysis*, Sci. China Math. **56** (2013), 789--802. W. A. Majewski, L. E.
Labuschagne, *On applications of Orlicz spaces to statistical physics*, Ann. Henri Poincaré **15** (2014), 1197--1221. W. A. Majewski, L. E. Labuschagne, *On entropy for general quantum systems*, Adv. Theor. Math. Phys. **24** (2020), 491--526. S. G. Michlin, *Fourier integrals and multiple singular integrals* (Russian), Vestnik Leningrad. Univ. Ser. Mat. Meh. Astronom. **12** (1957), 143--155. M. Milman, *A note on $L(p,\, q)$ spaces and Orlicz spaces with mixed norms*, Proc. Amer. Math. Soc. **83** (1981), 743--746. W. Orlicz, *Über eine gewisse Klasse von Räumen vom Typus B* (German), Bull. Int. Acad. Polon. Sci. A (1932), 207--220. A. Osançliol, S. Öztop, *Weighted Orlicz algebras on locally compact groups*, J. Aust. Math. Soc. **99** (2015), 399--414. M. M. Rao, Z. D. Ren, *Theory of Orlicz Spaces*, Marcel Dekker, New York, 1991. C. Schnackers, H. Führ, *Orlicz modulation spaces*, Proceedings of the 10th International Conference on Sampling Theory and Applications. A. Seeger, C. D. Sogge, E. M. Stein, *Regularity properties of Fourier integral operators*, Ann. of Math. **134** (1991), 231--251. M. Shubin, *Pseudodifferential Operators and Spectral Theory*, Springer Series in Soviet Mathematics, Springer-Verlag, Berlin, 1987. J. Toft, *Gabor analysis for a broad class of quasi-Banach modulation spaces*, in: S. Pilipović, J. Toft (eds), Pseudo-Differential Operators and Generalized Functions, Operator Theory: Advances and Applications **245**, Birkhäuser, 2015, 249--278. J. Toft, R. Üster, *Pseudo-differential operators on Orlicz modulation spaces*, J. Pseudo-Differ. Oper. Appl. **14** (2023), Paper no. 6. J. Toft, R. Üster, E. Nabizadeh, S. Öztop, *Continuity and Bargmann mapping properties of quasi-Banach Orlicz modulation spaces*, Forum Math. **34** (2022), 1205--1232. G. Tranquilli, *Global normal forms and global properties in function spaces for second order Shubin type operators*, PhD Thesis, 2013. M. W. Wong, *An introduction to pseudo-differential operators*, Ser.
Anal. Appl. Comput. **6**, World Scientific, Hackensack, 2014.
**Aryabhata and the Construction of the First Trigonometric Table** **Vijay A. Singh**[^1]\ **Visiting Professor, Physics Department, Centre for Excellence in Basic Sciences, Kalina, Santa-Cruz East, Mumbai-400098, India** **Aneesh Kumar**\ **Dhirubhai Ambani International School, BKC, Bandra East, Mumbai-400098, India**\ Few among us would know that the first mention of the sine and the enumeration of the first sine table are to be credited to Aryabhata. The method to generate this relies on the sine difference formula, which is derived using ingenious arguments based on similar triangles. We describe how this was done. In order to make our presentation pedagogical we take a unit circle and radians instead of the (now) archaic notation in the *Aryabhatiya* and its commentators. We suggest a couple of exercise problems and invite the enterprising student to try their hand. We also point out that his sine and second sine difference identities are related to the finite difference calculus we now routinely use to calculate derivatives and second derivatives. An understanding of these trigonometric identities and the preparation of the sine table will enable a student to appreciate the path-breaking work of Aryabhata. **Keywords:** Aryabhata, Sine, Sine Table, Finite Difference Calculus # Introduction We routinely look up a table of sines or punch a few keys on our calculator to obtain the value of a trigonometric function for a given angle. Little do we realize that a debt of gratitude is owed to the fifth century Indian savant Aryabhata, who first showed the way. Aryabhata enumerated the table of sines for closely spaced angles. His methods were based on general trigonometric identities and lend themselves to extensions. The first mention of the sine function is to be found in his (one and only) seminal work, the *Aryabhatiya* (499 CE). Aryabhata describes it in poetic terms as the half bow-string or *Ardha-Jya*. This is illustrated in Fig. 1.
The arrow or *saar* is related to the cosine function. This is not the only example of poetry intruding into his mathematics. To describe the fact, heretical and revolutionary for those times and for a long time afterwards, that the earth is rotating and the sun is stationary, Aryabhata evokes the tranquil metaphor of a boat floating down the river while the stationary land mass seems to move backwards. The *Aryabhatiya* is in verse form, with over 100 verses written fully respecting the norms of grammar and metre. Aryabhata was a poet as well. The *Aryabhatiya* consists of 121 cryptic verses, dense and laden with meaning [@kern; @shukla]. The work is divided into 4 parts or *padas*: the *Gitikapada* (13 verses), the mathematics or *Ganitapada* (33 verses), the *Kalkriyapada* (25 verses) and the astronomy or *Golapada* (50 verses). The astronomy is better known. There are two verses in the mathematics *Ganitapada* describing the solution of the linear Diophantine equation. This has received due recognition. Our focus here is on the trigonometry part of the *Ganitapada*, which in our opinion has suffered neglect and is a pioneering achievement of this savant. Virtually every major Indian mathematician has commented on the *Aryabhatiya* (499 CE). Often it is in terms of a formal *Bhasya* (Commentary). Table I lists some of these. Notable among them is the voluminous work (*Maha Bhashya*) of the 15th century mathematician Nilakantha Somaiyaji (1444 CE - 1544 CE), who was part of the Kerala school which, beginning with Madhava (1350 CE - 1420 CE), founded the calculus of trigonometric functions. Our presentation relies on Somaiyaji's commentary [@soma] and the works of Kripa Shankar Shukla and K. V. Sarma [@shukla] as well as of P. P. Divakaran [@ppd]. In this article we describe the trigonometric identities used by Aryabhata to obtain the table of sines.
This entails taking the difference of the sines of two closely spaced angles and then taking the second sine difference. We next show that these identities are the same as the finite difference calculus one uses nowadays to numerically obtain the first and second derivatives of the sine function. We follow this up with a brief discussion. The Indian mathematical tradition is largely word based. Results are mentioned and derivations are omitted. The *Aryabhatiya* (499 CE), with a little over a hundred cryptic, super-compressed verses of dense mathematics, is a prime example. Our approach is pedagogical and one which will help the student and teacher to appreciate this pioneering work. Hence we shall take some liberties and describe the great master's work in terms of the unit circle and radians instead of degrees and minutes. A couple of exercises at the end will help one get a more hands-on understanding.

  Bhaskara I            629 CE         Sanskrit   Valabhi, Gujarat
  --------------------- -------------- ---------- -----------------------
  Suryadeva Yajvan      born 1191 CE   Sanskrit   Gangaikonda-Colapuram
  Parameshvara          1400 CE        Sanskrit   Allathiyur, Kerala
  Nilakantha Somayaji   1500 CE        Sanskrit   Trikandiyur, Kerala
  Kondadarma            unknown        Telugu     Andhra
  Abul Hasan Ahwazi     800 CE         Arabic     Ahwaz, Iran

  : A host of eminent mathematicians have commented on the *Aryabhatiya*, written 499 CE. Some, like Brahmagupta (600 CE) or Bhaskara II (1100 CE), have not written a specific commentary but dwelt extensively on it. The above is an abbreviated list of specific commentaries and the dates are approximate. Our main source for this is the work of K. S. Shukla and K. V. Sarma [@shukla], which cites around 20 commentaries.

[\[table:1\]]{#table:1 label="table:1"}

# The *Ardha-Jya* or Sine Function

As mentioned earlier the *Aryabhatiya* has some 121 verses out of which 33 verses belong to the mathematics section (*Ganitapada*). Aryabhata works with, for the first time in the history of mathematics, the sine function.
It is the half-chord $AP$ of the unit circle in Fig. 1. $$\begin{aligned} sin(\theta) &=& \dfrac{AP}{OA} \\ &=& AP \,\,\,\,\,\,\,\, (OA = 1) \end{aligned}$$ The circle may be large or small; correspondingly $AP$ and $OA$ may be large or small, but the l.h.s. is a function of $\theta$ and is **invariant**. Further, all metrical properties related to the circle can be derived using trigonometric functions and the Pythagorean theorem (also described as the Baudhayana or Diagonal theorem [@ppd]). For example the geometric properties of a triangle can be related to the arcs of the circumscribing circle using the sine and cosine functions. Or the diagonals of an inscribed quadrilateral can be related to its sides. (If a recent proof of this Pythagorean theorem using the law of sines holds up to scrutiny then *all* metrical properties of a circle can be obtained by trigonometry alone [@shirali].) By emphasizing the role of this half-chord Aryabhata endowed circle geometry with metrical properties. This alone may qualify him as the founder of trigonometry. But he did more.

![The bow superposed on the unit circle. Half the bow string or half chord $AP$ is $sin(\theta)$ as defined by Aryabhata. $OP$ is $cos(\theta)$ while $PX$ = 1-$cos(\theta)$ is called the *saar*. See text for comments.](ardhya-jya-2.png)

It may also help to note that the length of the half-chord $AP$ is very close to the length of the arc $AX$ only when the angle is small (e.g. sin($\theta$) $\approx \theta$ if $\theta$ is small and in radians). This was known to Aryabhata in all likelihood by inspection. Similarly the sine of 90$^{\degree}$ is 1, since then the half-chord is the same as the radius. In the 9th verse of the *Ganitapada* he uses the property of an equilateral triangle and obtains the sine of 30$^{\degree}$ as 1/2. We pause to note that Aryabhata also states the value of $\pi$ as 62832/20000 in the 10th verse.
This value is 3.1416 and he is careful enough to state that this is proximate (*Asanna*), which means we can obtain better and truer values for $\pi$, presumably with more effort[^2].

# The Difference Formula for Sine and Cosine

The 12th verse of the *Ganitapada* plays a central role in the tabulation of the sine function. It is cryptic and to unravel its meaning we first need to obtain the difference formula for the sine. The presentation below relies on a number of sources: (i) the commentary of Nilakantha Somaiyaji [@soma]; (ii) the treatment of Shukla and Sarma [@shukla]; (iii) and for the sake of ease of understanding we follow Divakaran [@ppd] and take a unit circle as opposed to the radius of 3438 used by earlier workers[^3]. Figure 2 depicts a quadrant of the unit circle where $OX$ = $OY$ = 1. The arcs $XA$, $XB$ and $XC$ trace angles $\theta$, $\theta + \phi$ and $\theta - \phi$ respectively. The half-chords $AP$, $BQ$ and $CR$ are the corresponding sine functions. We drop a perpendicular $CS$ from the circumference onto the half-chord $BQ$ as shown. According to his commentator Nilakantha Somaiyaji [@soma], Aryabhata obtained the relationship between the differences of the trigonometric functions by demonstrating that the two triangles $BSC$ and $OPA$ are similar and then exploiting this ingeniously. We trace his line of reasoning in the Appendix, where we derive the following relations $$\begin{aligned} sin(\theta + \phi) - sin(\theta - \phi) & = & 2 sin(\phi) cos(\theta) \label{sindiff1} \end{aligned}$$ The difference in the sines is proportional to the cosine of the mean angle. $$\begin{aligned} cos(\theta + \phi) - cos(\theta - \phi) & = & - 2 sin(\phi) sin(\theta) \label{cosdiff1}\end{aligned}$$ The difference in the cosines is proportional to the (negative of the) sine of the mean angle.
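Both relations are exact trigonometric identities and can be checked numerically. The following Python sketch is our own illustration, not part of the original article; the function names are ours:

```python
import math

def sine_difference(theta, phi):
    # l.h.s. of the sine difference relation
    return math.sin(theta + phi) - math.sin(theta - phi)

def cosine_difference(theta, phi):
    # l.h.s. of the cosine difference relation
    return math.cos(theta + phi) - math.cos(theta - phi)

# The relations hold exactly for any theta and phi, not only for small phi.
theta, phi = 0.7, 0.3
assert abs(sine_difference(theta, phi) - 2 * math.sin(phi) * math.cos(theta)) < 1e-12
assert abs(cosine_difference(theta, phi) + 2 * math.sin(phi) * math.sin(theta)) < 1e-12
```

Since the identities are exact, the residuals are at the level of floating-point rounding for any choice of angles.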
# The Sine Table

Aryabhata obtained the values of the sines at fixed angles between 0 and $\pi /2$, thus generating the sine table for $\pi/48$ = 3.75 degrees, 7.5 degrees, up to 90 degrees. This table is stated in verse 12 of the first chapter, the *Gitikapada*. The table has been used by Indian astronomers (and astrologers) in some form or another since 499 CE up to the present. We shall see how the table was generated. Let us take $\phi = \epsilon/2$ where $\epsilon$ is small. We take $\theta = (n-1/2) \epsilon$ where $n$ is a positive integer from 1 to $N$. To fix our ideas $\epsilon = \pi$/48 = 3.75$^{\degree}$ = 225' and $N$ = 24. We re-state the sine and cosine difference formulae (Eqs. ([\[sindiff1\]](#sindiff1){reference-type="ref" reference="sindiff1"}) and ([\[cosdiff1\]](#cosdiff1){reference-type="ref" reference="cosdiff1"})) from the previous section $$\begin{aligned} \delta s_{n} &=& s_{n} - s_{n-1} = 2 s_{1/2} c_{(n-1/2)} \label{finite1} \\ \delta c_n &=& c_{n} - c_{n-1} = -2 s_{1/2} s_{(n-1/2)} \label{finite2} \end{aligned}$$ where the symbol $s_n$ stands for $sin(n \epsilon)$, $c_n$ stands for $cos(n \epsilon)$ and $s_{1/2}$ for $sin(\epsilon/2)$. The above is a pair of coupled equations and it was Aryabhata's insight to take the second difference, namely $$\begin{aligned} \delta^2 s_n &=& \delta s_{n} - \delta s_{n-1} = 2 s_{1/2} (c_{(n-1/2)}-c_{(n-3/2)}) \nonumber \\ & =& - 4 s_{1/2}^2 s_{n-1} \mbox{\,\,\,\,\,\,\, on using Eq.\,(\ref{finite2}) } \label{finite3} \end{aligned}$$ Thus the second difference of the sines is proportional to the sine itself. The next step is to represent the r.h.s. in terms of a recursion. We observe that $s_{n-1}$ on the r.h.s. of Eq. ([\[finite3\]](#finite3){reference-type="ref" reference="finite3"}) may be written as $s_{n-1} = s_{n-1} - s_{n-2} + s_{n-2} - s_{n-3} + \cdots$ = $\delta s_{n-1} + \delta s_{n-2} + \cdots + \delta s_1$. Thus $$\begin{aligned} \delta s_{n} - \delta s_{n-1} &=& - 4 s_{1/2}^2 \sum_{m=1} ^{n-1} \delta s_m \label{recur1} \end{aligned}$$ Thus we get a recursion relation where the second difference of the sines is expressed in terms of all previously obtained first sine differences. To initiate the recursion we need $\delta s_1$, which is $s_1 - s_0 = sin(\epsilon) - sin(0) \approx \epsilon$ since for small angles the half-chord and the corresponding arc are nearly equal, as stated in the previous section. Using the recursion relation for the sine, we can generate the celebrated sine table of Aryabhata, taking $\pi$ = 3.1416 and sin($\epsilon$) = $\epsilon$ = 0.0654 (= 225'). Table [2](#table:3){reference-type="ref" reference="table:3"} depicts some typical values of the sine function as well as the value of the sine multiplied by 3438 (the so-called '*Rsine*' of Aryabhata). We can see that this matches Aryabhata's celebrated sine table up to $\pm$ 1 minute. For example, $\theta$ = $\pi$/6 gives 1719 minutes. For comparison, we also show the modern values of the sine up to four decimal places. Note that Aryabhata takes angles only up to $\pi$/2 and seemed aware of the fact that going further was unnecessary given the periodic nature of the sine function.
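The recursion is easy to run on a computer. The sketch below is our own illustration, not from the article: the function name is ours, and we use the seed $\delta s_1 \approx \epsilon$ and the approximation $4\,sin^2(\epsilon/2) \approx \epsilon^2$, as suggested by the text.

```python
import math

def aryabhata_sine_table(n_values=24, eps=math.pi / 48):
    """Generate sin(n*eps), n = 1..n_values, via Aryabhata's recursion.

    Seeds delta_s_1 ~ eps (half-chord ~ arc for small angles) and
    approximates 4*sin(eps/2)**2 by eps**2.
    """
    sines = [0.0]          # s_0 = sin(0) = 0
    delta = eps            # delta_s_1 = s_1 - s_0 ~ eps
    k = eps * eps          # 4*sin(eps/2)**2 ~ eps**2
    for _ in range(n_values):
        sines.append(sines[-1] + delta)
        # second difference: delta_s_{n+1} = delta_s_n - k * s_n
        delta -= k * sines[-1]
    return sines[1:]       # s_1, ..., s_{n_values}

table = aryabhata_sine_table()
# sin(8 * pi/48) = sin(30 degrees) should come out close to 1/2
assert abs(table[7] - 0.5) < 2e-3
```

Running this reproduces the table to within a few parts in a thousand, including the characteristic slight overshoot near 90 degrees caused by the accumulated small-angle approximations.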
  $\theta$     sin($\theta$) Aryabhata   sin($\theta$) (minutes)   sin($\theta$) Modern
  ------------ ------------------------- ------------------------- ----------------------
  $\pi$/48     0.0654                    225                       0.0654
  $2\pi$/48    0.1305                    449                       0.1305
  $3\pi$/48    0.1951                    671                       0.1951
  $4\pi$/48    0.2588                    890                       0.2588
  $5\pi$/48    0.3214                    1105                      0.3214
  $6\pi$/48    0.3827                    1315                      0.3827
  $7\pi$/48    0.4423                    1520                      0.4423
  $8\pi$/48    0.5000                    1719                      0.5000
  $9\pi$/48    0.5556                    1910                      0.5556
  $10\pi$/48   0.6088                    2093                      0.6088
  $11\pi$/48   0.6594                    2267                      0.6593
  $12\pi$/48   0.7072                    2431                      0.7071
  $13\pi$/48   0.7519                    2585                      0.7518
  $14\pi$/48   0.7935                    2728                      0.7934
  $15\pi$/48   0.8316                    2859                      0.8315
  $16\pi$/48   0.8662                    2978                      0.8660
  $17\pi$/48   0.8971                    3084                      0.8969
  $18\pi$/48   0.9241                    3177                      0.9239
  $19\pi$/48   0.9472                    3256                      0.9469
  $20\pi$/48   0.9662                    3322                      0.9659
  $21\pi$/48   0.9812                    3373                      0.9808
  $22\pi$/48   0.9919                    3410                      0.9914
  $23\pi$/48   0.9983                    3432                      0.9979
  $24\pi$/48   1.0005                    3439                      1.0000

  : Table of values of the sine using the Aryabhatan method, taking $\epsilon$ = $\pi$/48 (= 3.75$^{\degree}$ = 225') and $\pi$ = 3.1416, and comparison with the modern-day values. In column 3 we also quote values in minutes, as done in Verse 12, *Gitika* chapter of the *Aryabhatiya* [@kern; @shukla].

[\[table:3\]]{#table:3 label="table:3"}

# Finite Difference Calculus

Of greater relevance is the fact that the sine (or cosine) difference formulae foreshadow finite difference calculus, a popular numerical technique in this age of computation. Rewriting Eqs. ([\[sindiff1\]](#sindiff1){reference-type="ref" reference="sindiff1"}) and ([\[cosdiff1\]](#cosdiff1){reference-type="ref" reference="cosdiff1"}) with $\phi = \epsilon$, $$\begin{aligned} \dfrac{sin(\theta + \epsilon) - sin(\theta - \epsilon)}{2 sin(\epsilon)} & = & cos(\theta) \label{fdc1} \\ \dfrac{cos(\theta + \epsilon) - cos(\theta - \epsilon)}{2 sin(\epsilon)} & = & - sin(\theta) \label{fdc2} \end{aligned}$$ Aryabhata took $\epsilon$ to be $\pi/48$. But he also stated that its value is *yateshtani* or as per our wish (Verse 11, *Ganitapada*).
Some took it to be $\pi/96$ and others, like Brahmagupta, took it as $\pi/12$ or 15$^{\degree}$. If we take $\epsilon$ to be sufficiently small we have our classic formula for finite difference calculus. Noting that 2 $sin(\epsilon) \approx 2\epsilon$, we have the finite difference version of the derivative of the sine $$\dfrac{\delta sin(\theta)}{\delta \theta} = cos(\theta)$$ and similarly for the cosine. $$\dfrac{\delta cos(\theta)}{\delta \theta} = -sin(\theta)$$ Let us understand this with an example. We know that sin(37$^{\degree}$) is close to 0.6 and sin(30$^{\degree}$) is 0.5. The difference in angle is 7$^{\degree}$, which in radians is 0.122. Thus the derivative of the sine at the median angle 33.5$^{\degree}$ from Eq. ([\[fdc1\]](#fdc1){reference-type="ref" reference="fdc1"}) is $$\begin{aligned} \delta sin(\theta) /\delta \theta &=& (0.6 - 0.5)/0.122 = 0.82 \end{aligned}$$ Looking up the sine table or the calculator yields cos(33.5$^{\degree}$) = 0.83. Similarly Eq. ([\[finite3\]](#finite3){reference-type="ref" reference="finite3"}) (and its cosine analogue) yields the second derivatives, namely $$\begin{aligned} \delta^2 sin(\theta) / \delta \theta^2 & \approx & - sin(\theta) \\ \delta^2 cos(\theta) / \delta \theta^2 & \approx & - cos(\theta) \end{aligned}$$ The above are now called central difference approximations to the derivative and the second derivative. Aryabhata does not mention the term finite difference calculus (let alone calculus). But similar methods are now used to numerically solve our differential equations. A student can readily recognize the above as a standard discretization of the classical simple harmonic oscillator. Note also that Newton's second law and the famous Schrödinger equation of quantum mechanics are both second order differential equations.

# Discussion

One can discern a continuity in Indian mathematics, howsoever tenuous, from pre-Vedic times ($<$ 1000 BCE) up until the 1800s.
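The worked example above can be reproduced with exact sine values in place of the rounded 0.6 and 0.5. This short sketch is ours, not from the article:

```python
import math

def central_difference(f, x, h):
    # central-difference estimate of f'(x) with half-width h
    return (f(x + h) - f(x - h)) / (2 * h)

median = math.radians(33.5)   # the median angle of the 37/30 degree example
h = math.radians(3.5)         # half of the 7-degree spread
estimate = central_difference(math.sin, median, h)

# The estimate should agree with cos(33.5 degrees) to about one part
# in a thousand, since the central-difference error is O(h^2).
assert abs(estimate - math.cos(median)) < 1e-3
```

With the rounded values 0.6 and 0.5 of the text one gets 0.82, against the tabulated cos(33.5$^{\degree}$) = 0.83; with exact sines the agreement is much tighter.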
A striking example is the influence of the *Aryabhatiya* (499 CE) on major Indian mathematicians who followed him, including Madhava (1350 CE) who founded calculus [@ppd]; as also the influence on Aryabhata of the mathematics which preceded him [@datta; @amma]. Aryabhata describes the sine function "poetically" as the *Ardha-Jya* or half bow-string (see Fig. 1). To reiterate, he seemed aware that the sine of (i) zero is zero; (ii) a small angle is the angle itself, or the small arc is almost equal to the half-chord; (iii) 30 degrees is 1/2 (*Ganitapada* verse 9); (iv) and 90 degrees is unity. Further, the sine function is periodic, so he prudently stops the enumeration of the sine at 90 degrees. Then, in a remarkably insightful way, he laid down the recursion relation for sine differences which enables one to generate the sine table. It is this work, more than his solution to the linear Diophantine equation (verses 31 and 32 of the *Ganitapada*), which establishes him as a genius and one of the brightest stars in the firmament of world mathematics. The sine table can also be generated using the half-angle formula. This was demonstrated in the *Panchasiddhantika*, a text written barely 50 years after the appearance of the *Aryabhatiya* [@vara]. As pointed out, a feature of Aryabhata's difference relation is how contemporary it is. It can be easily recognized as finite difference calculus. It also led to the development of the calculus of trigonometric functions by Madhava (1350 CE) and his disciples along the banks of the Nila river in Kerala. This school is variously called the Nila school [@ppd] and even the Aryabhata school [@amma]. Another aspect to note is that Bhaskara II (1100 CE) used the canonical 2 $\pi$ /96 division of the great circle to carry out discrete integration and obtain the (correct) expressions for the surface area and volume of the sphere.
Jyesthadeva of the Nila (or Aryabhata) school in his work *Yuktibhasa* derived the same results using calculus (circa 1500 CE). Aryabhata can legitimately be called the founder of trigonometry. To sum up, the *Aryabhatiya* exercised a tremendous influence over Indian mathematicians for over a thousand years. For a book with just over a hundred pithy verses, its legacy remains unparalleled in the scientific world. We hope that our article will give our young audience an introduction to his work and will serve as an inspiration.

#### Acknowledgement:

One of the authors (VAS) would like to place on record the many useful discussions he had with Prof. P. P. Divakaran.

# Appendix: The Derivation of the sine and cosine difference formula {#appendix-the-derivation-of-the-sine-and-cosine-difference-formula .unnumbered}

As stated in the text, Figure 2 depicts a quadrant of the unit circle where $OX$ = $OY$ = 1. The arcs $XA$, $XB$ and $XC$ trace angles $\theta$, $\theta + \phi$ and $\theta - \phi$ respectively. The half-chords $AP$, $BQ$ and $CR$ are the corresponding sine functions. We drop a perpendicular $CS$ from the circumference onto the half-chord $BQ$ as shown. We show that the two triangles $BSC$ and $OPA$ are similar. By construction $\angle{BSC}$ and $\angle{OPA}$ are each 90 degrees. Note $OB$ = $OC$ = 1 (unit radius) and hence $\triangle OBC$ is isosceles. This implies that $\angle{OBC}$ = 90$^{\circ} - \phi$. Also $\angle{OBQ}$ = 90$^{\circ} - \phi - \theta$. Hence $\angle{SBC}$=$\angle{OBC}$-$\angle{OBQ}$=$\theta$. Therefore, $\angle{SBC}$=$\angle{POA}$=$\theta$. This establishes the similarity of the two triangles by the angle-angle test. $$\begin{aligned} \dfrac{BS}{OP} & = & \dfrac{BC}{OA}\end{aligned}$$ Now $OA$ = 1 (unit circle), $OP$ = $cos(\theta)$ and $BC$ = 2 $sin(\phi)$. By inspection $BS$ = $BQ$ - $CR$ = $sin(\theta + \phi)$ - $sin(\theta - \phi)$. This yields the sine difference formula (Eq.
([\[sindiff1\]](#sindiff1){reference-type="ref" reference="sindiff1"})) $$\begin{aligned} sin(\theta + \phi) - sin(\theta - \phi) & = & 2 sin(\phi) cos(\theta) \end{aligned}$$ Thus the difference in the sines is proportional to the cosine of the mean angle. We can also obtain the cosine difference formula (Eq. ([\[cosdiff1\]](#cosdiff1){reference-type="ref" reference="cosdiff1"})) by noting that $$\begin{aligned} \dfrac{CS}{AP} & = & \dfrac{BC}{OA}\end{aligned}$$ Note $AP$ = $sin(\theta)$ and $CS$ = $OR$ - $OQ$ = $cos(\theta - \phi)$ - $cos(\theta + \phi)$. Hence $$\begin{aligned} cos(\theta + \phi) - cos(\theta - \phi) & = & - 2 sin(\phi) sin(\theta) \end{aligned}$$ The difference in the cosines is proportional to the (negative of the) sine of the mean angle. We pause to note that *prima facie* the two triangles we considered appear unrelated. A hallmark of Indian mathematics is strong geometric intuition, and this dates back to the *Sulbasutra*, *circa* 800 BCE. Another is the reliance on the "rule of three" (*trirasikam*) - here we employ a simple version of it, namely: if $a/b = c$ then $a = b \times c$.

**EXERCISES**

1. We can generate the sine table as per Aryabhata's suggestion, though not using exactly the same value for $\epsilon$ he used. We choose $\epsilon = \pi/80 \approx 0.0393$, which is the same as 2.25$^{\degree}$. We take $sin(\epsilon) \approx \epsilon$. If you have a simple calculator, generate all values of the sine from 2.25 to 18 degrees in equal steps using Eq. ([\[recur1\]](#recur1){reference-type="ref" reference="recur1"}). Alternatively, if you have a programmable calculator or a computer, generate all values of the sine from 2.25 to 90 degrees. Compare with the results your calculator will otherwise yield.

2. In the last section reference is made to the text *Panchasiddhantika* wherein the half-angle formula is mentioned, viz.
$$cos(2 \theta) = 1 - 2 sin^2(\theta)$$ How would you (i) derive this by a geometrical construction; (ii) employ this to generate the sine table?

"*Aryabhatiya*", Walter E. Clark, University of Chicago Press (1930).

"*Aryabhatiya*", Kripashanker Shukla and K. V. Sarma, Vols. I and II, Indian National Science Academy Publication (1976). Section III has been shaped by a critical reading of certain verses from the *Ganitapada* by these authors.

"*Aryabhatiya*" with the *Bhasya* (Commentary) of Nilakantha Somaiyaji, Parts I and II, ed. Sambasiva Sastri, Government of Travancore, Trivandrum (1930, 1931); Part III, ed. Suranad Kunjan Pillai, University of Kerala, Trivandrum (1957). This work is in Sanskrit and there is no English (or Hindi) translation of this seminal text to the best of our knowledge. The other works mentioned herein are in English.

"The Mathematics of India", P. P. Divakaran, Hindustan Book Agency (2018). The book has shaped this article in ways covert and overt.

"*Two New Proofs of the Pythagorean Theorem - Part I*", Shailesh Shirali, At Right Angles, Issue 16, pages 7-17 (July 2023).

"The History of Hindu Mathematics", Bibhutibhusan Datta and Avadhesh Narayan Singh, Vols. I and II, Asia Publishing House, Delhi (1935 and 1938). A pioneering book on Indian mathematics written in pre-Independence India.

"Geometry in Ancient and Medieval India", Sarasvati Amma, Motilal Banarsidas, Delhi (1999). It has detailed discussions worth looking at. It also uncovers the element of continuity in the Indian mathematical traditions from ancient times to the pre-British era.

"*Panchasiddhantika*" of Varahamihira, with translation and notes by T. S. Kuppanna Sastry, P.P.S.T. Foundation, Madras (1993).

*Prof. Vijay A. Singh has been faculty at IIT Kanpur (1984-2014) and HBCSE, Tata Institute for Fundamental Research (2005-2015), where he was the National Coordinator of both the Science Olympiads and the National Initiative on Undergraduate Science for a decade.
He is a Fellow of the National Academy of Sciences, India, and was President of the Indian Association of Physics Teachers (2019-21). He is currently a Visiting Professor at CEBS, Mumbai. (email: physics.sutra\@gmail.com)*

*Aneesh Kumar is a Standard XII student at the Dhirubhai Ambani International School, Mumbai, who is keenly interested in Mathematics and Physics. (email: aneesh.kumar11235\@gmail.com)*

[^1]: email: physics.sutra\@gmail.com

[^2]: The word *Asanna* or proximate is to be distinguished from *Sthula*, which is approximate or roughly equal.

[^3]: One radian is 3438 minutes, and we remind the reader that $\pi$ radians is 180 degrees and 1 degree is 60 minutes.
--- abstract: | For a unital spectral triple $(\mathcal{A}, H,D)$, we study when its spectral truncations converge to the original triple. The spectral truncation is obtained by using the spectral projection $P_{\Lambda}$ of $D$ onto $[-\Lambda,\Lambda]$ to deal with the case where only a finite range of energy levels of a physical system is available. By restricting the operators in $\mathcal{A}$ and $D$ to $P_{\Lambda}H$, we obtain a sequence of operator system spectral triples $\{(P_{\Lambda}\mathcal{A}P_{\Lambda},P_{\Lambda}H,P_{\Lambda}DP_{\Lambda})\}_{\Lambda}$. We prove that if the spectral triple is the one constructed from a discrete group with polynomial growth, then the sequence of operator systems $\{P_{\Lambda}\mathcal{A}P_{\Lambda}\}_{\Lambda}$ converges to $\mathcal{A}$ in the sense of quantum Gromov-Hausdorff convergence with respect to the Lip-norm coming from high order derivatives. author: - "Ryo Toyota [^1]" bibliography: - main.bib date: title: Quantum Gromov-Hausdorff convergence of spectral truncations for groups with polynomial growth ---

# Introduction

In this paper, we study quantum Gromov-Hausdorff convergence of operator systems given by spectral truncations. The study of spectral truncations was initiated by A. Connes and W. D. van Suijlekom in [@connes2021spectral] to approximate the original spectral triple from partial data on the spectrum of the Dirac operator (or Hamiltonian). More precisely, assume that we are given a unital spectral triple $(\mathcal{A},H,D)$ and, for each positive number $\Lambda$, let $P_{\Lambda}\in B(H)$ be the projection onto the space spanned by eigenvectors of $D$ corresponding to eigenvalues in $[-\Lambda,\Lambda]$. Spectral truncation is the procedure of restricting the operators in $\mathcal{A}$ and $D$ to $P_{\Lambda}H$ to obtain an operator system version $(P_{\Lambda}\mathcal{A}P_{\Lambda},P_{\Lambda}H,P_{\Lambda}DP_{\Lambda})$ of a spectral triple, as in [@connes2021spectral].
The main purpose of this article is to study when the truncated operator systems $P_{\Lambda}\mathcal{A}P_{\Lambda}$ approximate the original algebra $\mathcal{A}$ in the language of compact quantum metric spaces, defined by Marc A. Rieffel in [@rieffel1999metrics] and [@rieffel2004gromov]. For the spectral triple $(C^{\infty}(M),L^2(S_M),D_M)$ of a compact spin Riemannian manifold $M$ with spinor bundle $S_M$ and the Dirac operator $D_M$, we can recover the distance on $M$ from the spectral data by the Connes distance formula (Formula 1 in Section 6.1 of [@connes1994noncommutative]) $$\label{diatance} d(x,y)=\sup\{|f(x)-f(y)|:f \in C^{\infty}(M),\|[D_M,f]\|\leq 1\},$$ for $x,y \in M$. If we regard the evaluation of a function $f \in C^{\infty}(M)$ at $x \in M$ as the evaluation of $f$ at the delta function $\delta_x$ supported at $x$, we can extend the formula [\[diatance\]](#diatance){reference-type="eqref" reference="diatance"} to a distance on state spaces. Namely, for two states $\phi$ and $\psi$ on $C^{\infty}(M)$, we define the distance between them by $$\label{stateconnesdistance} d(\phi,\psi)=\sup\{|\phi(f)-\psi(f)|:f \in C^{\infty}(M),\|[D_M,f]\|\leq 1\}.$$ The idea of a compact quantum metric space is to replace a compact metric space $(X,d)$ by a pair $(A,L)$ consisting of an operator system $A$ (or, more generally, an Archimedean ordered unit vector space) and a seminorm $L$ on $A$, a so-called Lip-norm, to define a distance on the state space of $A$ by the formula $$\label{lipdistance} d(\phi,\psi)=\sup\{|\phi(f)-\psi(f)|:f \in A,L(f)\leq 1\},$$ for two states $\phi$ and $\psi$ on $A$. We call such a pair $(A,L)$ a compact quantum metric space. Philosophically, a Lip-norm $L$ is the norm of a commutator with a derivative (or Dirac operator), but we are more flexible, and the actual requirements for a seminorm on $A$ to be a Lip-norm are given in Definition 2.1.
The most important one is that the distance defined by [\[lipdistance\]](#lipdistance){reference-type="eqref" reference="lipdistance"} has to induce the weak-$*$ topology on the state space. Since we are working on operator systems, we cannot impose the Leibniz inequality on $L$ (for example, the norm of a high order commutator with the Dirac operator can be a Lip-norm in some cases [@antonescu2004metrics]). For two given compact quantum metric spaces $(A,L_A)$ and $(B,L_B)$, we define the quantum Gromov-Hausdorff distance between them as a certain infimum of Hausdorff distances between their state spaces. We apply this framework to the operator systems obtained by spectral truncations. In [@leimbach2023gromov], Gromov-Hausdorff convergence of the state spaces of the truncated operator systems $P_{\Lambda}\mathcal{A}P_{\Lambda}$ to the state space of the algebra $\mathcal{A}$ was proved when the spectral triple is that of the torus, $(\mathcal{A},H,D)=(C^{\infty}(\mathbb{T}^d),L^2(S_{\mathbb{T}^d}),D_{\mathbb{T}^d})$ for $d=1,2,3$, using the metric defined in [\[stateconnesdistance\]](#stateconnesdistance){reference-type="eqref" reference="stateconnesdistance"}. In this paper, we deal with spectral triples $(\mathbb{C}[G],\ell^2(G),D)$ consisting of the group algebra $\mathbb{C}[G]$ of a finitely generated group $G$ with a length function $\ell$, the Hilbert space $\ell^2(G)$ and the Dirac operator $D$ given by multiplication by the length function: $$\begin{aligned} D:\ell^2(G)\rightarrow \ell^2(G): \text{ }\delta_x \mapsto \ell(x)\delta_x. \end{aligned}$$ Note that in this case, for each positive $\Lambda$, the spectral projection $P_{\Lambda}$ is the orthogonal projection onto the space spanned by vectors supported in the closed ball $B_{\Lambda}$ centered at the unit $e$ with radius $\Lambda$.
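To make the truncation concrete, consider the simplest example $G=\mathbb{Z}$ with $S=\{\pm 1\}$ and $\ell(n)=|n|$. The following Python sketch is our own illustration, not from the paper: it builds the matrix $(\ell(x-y)^s\,a_{x-y})_{x,y\in B_{\Lambda}}$, which for $s=0$ is the truncation of $\lambda(f)$ and for $s\geq 1$ is the matrix whose operator norm gives the Lip-norm $L_{s,\Lambda}$ introduced below, and estimates that norm by power iteration.

```python
import math

def truncated_toeplitz(f, lam, s=0):
    """Matrix (|x-y|**s * f(x-y)) for x, y in {-lam, ..., lam}, G = Z.

    f is a dict giving the finitely many coefficients of an element
    of the group algebra C[Z]; s = 0 gives the plain truncation.
    """
    pts = range(-lam, lam + 1)
    return [[(abs(x - y) ** s) * f.get(x - y, 0.0) for y in pts] for x in pts]

def operator_norm(mat, iters=200):
    """Estimate the spectral norm of mat by power iteration on A^T A."""
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]  # A v
        u = [sum(mat[i][j] * w[i] for i in range(n)) for j in range(n)]  # A^T A v
        norm = math.sqrt(sum(x * x for x in u))
        v = [x / norm for x in u]
    w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
    return math.sqrt(sum(x * x for x in w))

# f = delta_1 + delta_{-1}, so lambda(f) acts as 2*cos; truncating at
# lam = 2 gives a 5x5 tridiagonal matrix with norm 2*cos(pi/6) = sqrt(3).
f = {1: 1.0, -1: 1.0}
a = truncated_toeplitz(f, lam=2)
assert abs(operator_norm(a) - math.sqrt(3)) < 1e-6
```

Since $\ell(\pm 1)^s = 1$, for this particular $f$ the Lip-norm matrix with $s=1$ coincides with the truncation itself; for elements supported farther from the identity the weight $\ell(xy^{-1})^s$ genuinely changes the norm.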
The truncated operator system consists of operators in the form of Toeplitz-type matrices: $$\label{toeplitz} P_{\Lambda}\mathbb{C}[G]P_{\Lambda}:=\{(a_{xy^{-1}})_{x,y\in B_{\Lambda}}:\text{ }a_z\in \mathbb{C} \text{ for all }z\in B_{2\Lambda}\}.$$ We use Lip-norms coming from high order derivatives. The use of high order derivatives in compact quantum metric spaces was initiated in [@antonescu2004metrics], and it produces new examples of compact quantum metric spaces. But our Lip-norm is slightly different from the one introduced in [@antonescu2004metrics] (the reason is given in Remark 3.1). For each positive integer $s$, we define Lip-norms on group algebras and truncated operator systems by $$\begin{aligned} L_s(f)&:=\|\sum \ell(x)^s f(x)\delta_x\|_{B(\ell^2(G))}\\ L_{s,\Lambda}(A)&:=\|(\ell(xy^{-1})^sa_{xy^{-1}})_{x,y\in B_{\Lambda}}\|_{B(P_{\Lambda}\ell^2(G))}\end{aligned}$$ for $f \in \mathbb{C}[G]$ and $A=(a_{xy^{-1}})_{x,y \in B_{\Lambda}}\in P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$. In Lemma 3.1, we verify that these seminorms satisfy the requirements of Definition 2.1 for being a Lip-norm. Now we can state our main theorem.

**Theorem 1**. * For each group $G$ with polynomial growth, there exists $s>0$ such that the sequence of compact quantum metric spaces $\{(P_{\Lambda}\mathbb{C}[G]P_{\Lambda},L_{s,\Lambda})\}_{\Lambda}$ converges to $(\mathbb{C}[G],L_s)$ in the quantum Gromov-Hausdorff distance. *

# Preliminaries about compact quantum metric spaces

In this section, we recall the notion of compact quantum metric spaces and the quantum analogue of Gromov-Hausdorff convergence introduced by Marc A. Rieffel in [@rieffel1999metrics] and [@rieffel2004gromov]. We recall some definitions and properties introduced in [@rieffel2023convergence].

**Definition 1**. *Let $(A,e)$ be a pair of an operator system $A$ with its unit $e$.
Then a seminorm $L$ on $A$, which may take the value $\infty$, is called a Lip-norm if $L$ satisfies the following conditions:* - *For $a \in A$, we have $L(a)=L(a^*)$, and $L(a)=0$ if and only if $a \in \mathbb{C}e$.* - *$\{a \in A;\text{ }L(a)< \infty\}$ is dense in $A$.* - *Define a distance $d^L$ on the state space $S(A)$ by $$\begin{aligned} d^L(\phi,\psi):=\sup\{|\phi(a)-\psi(a)|:\text{ }a\in A \text{ and }L(a)\leq 1 \} \end{aligned}$$ for any states $\phi$ and $\psi$ on $A$. We require that the topology induced on $S(A)$ by the distance $d^L$ agrees with the weak-$*$ topology.* - *$L$ is lower semi-continuous with respect to the operator norm.* *We call such a pair $(A,L)$ of an operator system $A$ with a Lip-norm $L$ a compact quantum metric space.* In the next definition, we recall the definition of the quantum Gromov-Hausdorff distance between two compact quantum metric spaces $(A,L_A)$ and $(B,L_B)$. We denote the coordinate quotient maps by $$\begin{aligned} q_A:A\oplus B \rightarrow A\text{ and } q_B:A \oplus B \rightarrow B. \end{aligned}$$ Each Lip-norm $L$ on $A\oplus B$ induces a seminorm $q_A^*(L)$ (resp. $q_B^*(L)$) on $A$ (resp. on $B$) by $$\begin{aligned} q_A^*(L)(a):=\inf \{L(a,b):b\in B\} \text{ (resp. } q_B^*(L)(b):=\inf \{L(a,b):a\in A\}\text{)}.\end{aligned}$$ There are embeddings of state spaces $$\begin{aligned} q_A^*:S(A)\hookrightarrow S(A\oplus B) \text{ and } q_B^*:S(B) \hookrightarrow S(A \oplus B)\end{aligned}$$ defined by the pullbacks along $q_A$ and $q_B$. For a Lip-norm $L$ on $A\oplus B$, we denote the Hausdorff distance between $q_A^*S(A)$ and $q_B^*S(B)$ in the compact metric space $(S(A\oplus B),d^L)$ by $$\begin{aligned} \text{dist}_H^{d^L}(q_A^*S(A),q_B^*S(B)).\end{aligned}$$ **Definition 2**. * Let $\mathcal{M}(L_A,L_B)$ be the set of all Lip-norms on $A\oplus B$ which induce $L_A$ and $L_B$.
Then the quantum Gromov-Hausdorff distance between $(A,L_A)$ and $(B,L_B)$ is defined to be $$\begin{aligned} \text{dist}_q((A,L_A),(B,L_B)):=\inf \{ \text{dist}_H^{d^L}(S(A),S(B)): \text{ }L\in \mathcal{M}(L_A,L_B) \}. \end{aligned}$$ * We finish this section by stating a criterion for quantum Gromov-Hausdorff convergence, which is used in [@rieffel2004gromov] to show the quantum Gromov-Hausdorff convergence of matrix algebras to the $2$-sphere and is explicitly proved in [@van2021gromov] as a sufficient condition for Gromov-Hausdorff convergence of state spaces.

**Lemma 1**. * Let $(A,L_A)$ and $(B,L_B)$ be compact quantum metric spaces. Assume that there are unital positive maps $$\begin{aligned} r:A \rightarrow B \text{ and } s:B \rightarrow A, \end{aligned}$$ contractive with respect to the Lip-norms, such that $$\begin{aligned} \|s \circ r(a)-a\| &\leq \epsilon L_A(a) \\ \|r \circ s(b)-b\| &\leq \epsilon L_{B}(b). \end{aligned}$$ Then the quantum Gromov-Hausdorff distance between $(A,L_A)$ and $(B,L_B)$ is at most $2\epsilon$. *

The above criterion should be well known among experts, but the author was not able to find a precise reference. For the convenience of the reader, we give a proof of the above lemma. We use the next concept and result introduced in [@rieffel2004gromov] (the second half of the book, which consists of two papers).

**Definition 3** (Definition 5.1 of [@rieffel2004gromov]). *Let $(A,L_A)$ and $(B,L_B)$ be compact quantum metric spaces. By a bridge between $(A,L_A)$ and $(B,L_B)$ we mean a seminorm, $N$, on $A\oplus B$ such that* 1\) : *$N$ is continuous for the norm on $A\oplus B$.* 2\) : *$N(e_A,e_B)=0$ but $N(e_A,0)\neq 0$.* 3\) : *For any $a\in A$ and $\delta>0$ there is $b \in B$ such that $$\begin{aligned} L_B(b) \lor N(a,b) \leq L_A(a) +\delta, \end{aligned}$$ and similarly for $A$ and $B$ interchanged. Here the symbol $\lor$ means "maximum".*

**Theorem 2** (Theorem 5.2 of [@rieffel2004gromov]).
* Let $N$ be a bridge between compact quantum metric spaces $(A,L_A)$ and $(B,L_B)$. Define $L$ on $A\oplus B$ by $$\begin{aligned} L(a,b)=L_A(a)\lor L_B(b) \lor N(a,b). \end{aligned}$$ Then $L$ is a Lip-norm which induces $L_A$ and $L_B$. * **Lemma 2**. * Under the assumption of Lemma 2.1, we define a seminorm on $A \oplus B$ by $$\begin{aligned} N(a,b):=\frac{1}{\epsilon}\|a-s(b)\|. \end{aligned}$$ Then $N$ is a bridge between $(A,L_A)$ and $(B,L_B)$. * **Proof 1**. * We only need to prove the third condition in Definition 2.3. For $a \in A$, we choose $b=r(a)$; then $$\begin{aligned} L_B(b) \lor N(a,b)&=L_B(r(a)) \lor \frac{1}{\epsilon}\|a-s\circ r(a)\| \\ &\leq L_A(a) \lor \frac{1}{\epsilon}\cdot (\epsilon L_A(a))=L_A(a). \end{aligned}$$ We verify the same condition for $A$ and $B$ interchanged. Fix any $b\in B$. We choose $a=s(b)$. Then we have $$\begin{aligned} L_A(a) \lor N(a,b) = L_A(s(b)) \lor \frac{1}{\epsilon} \|s(b)-s(b)\|\leq L_B(b). \end{aligned}$$ ◻* *Proof of Lemma 2.1.* Let $L$ be the Lip-norm defined in Theorem 2.1 for the bridge defined in Lemma 2.2. It suffices to show that the Hausdorff distance $\text{dist}_H^{d^L}(q_A^*S(A),q_B^*S(B))$ is smaller than $2\epsilon$. First take any $\mu \in S(A)$. We show $$\begin{aligned} d^L(q_A^*(\mu), q_B^*s^*(\mu)) \leq \epsilon.\end{aligned}$$ For any $(a,b)\in A\oplus B$ with $L(a,b)\leq 1$, we have $$\begin{aligned} |q_A^*(\mu)(a,b)-q_B^*s^*(\mu)(a,b)|=|\mu(a)-\mu(s(b))|\leq \|\mu\|\cdot \|a-s(b)\|\leq \epsilon,\end{aligned}$$ because of the definition of $N$. This means $q_A^*S(A)$ is contained in the $\epsilon$-neighborhood of $q_B^*S(B)$. For the other direction, we take any $\nu\in S(B)$. 
We show $$\begin{aligned} d^L(q_A^*r^*(\nu), q_B^*(\nu)) \leq 2\epsilon.\end{aligned}$$ For any $(a,b)\in A\oplus B$ with $L(a,b)\leq 1$, we have $$\begin{aligned} |q_A^*r^*(\nu)(a,b)- q_B^*(\nu)(a,b)|&=|\nu(r(a))-\nu(b)|\\ &\leq |\nu(r(a))-\nu(r\circ s(b))|+|\nu(r\circ s(b))-\nu(b)|\\ &\leq \|\nu\|\cdot \|a-s(b)\|+ \|\nu\|\cdot \|r\circ s(b)-b\| \leq 2\epsilon.\end{aligned}$$ This means $q_B^*S(B)$ is contained in the $2\epsilon$-neighborhood of $q_A^*S(A)$.  ◻ # Group $C^*$-algebras as compact quantum metric spaces In this section, we introduce quantum metric structures on group $C^*$-algebras and their spectral truncations. For a discrete group $G$, there is a natural Dirac operator on $\ell^2(G)$ introduced by A. Connes in [@connes1989compact]. The seminorm defined to be the norm of the commutator with the Dirac operator has been studied in [@ozawa2005hyperbolic], [@antonescu2004metrics] and [@christ2017nilpotent]. But as shown in Remark 3.1, this definition is not a Lip-norm on truncated operator systems in the sense of Definition 2.1. Therefore we introduce a different definition of a Lip-norm on group $C^*$-algebras. We first introduce some notation for discrete groups which will be used later. Let $G$ be a group generated by a finite symmetric subset $S=S^{-1} \subset G$. Then this generating set $S$ induces a word length $\ell:G\rightarrow\mathbb{R}_{+}$ and a distance $d$ on $G$ defined by $d(x,y)=\ell(x^{-1}y)$ for all $x,y \in G$. The closed ball of radius $\Lambda$ centered at $x \in G$ is denoted by $B_{\Lambda}(x)$, and when $x=e$ it is simply denoted by $B_{\Lambda}$. We denote the left regular representation of the group algebra $\mathbb{C}[G]$ on $\ell^2(G)$ by $\lambda$. The following example provides an interesting class of compact quantum metric spaces. **Example 1**. * Let $G$ be a finitely generated group with a word length $\ell$. 
We define a Dirac operator on $\ell^2(G)$ by $$\begin{aligned} D:\ell^2(G) \rightarrow \ell^2(G); \text{ } \sum f(x)\delta_x \mapsto \sum \ell(x)f(x)\delta_x. \end{aligned}$$ Then we define a seminorm on the group algebra $\mathbb{C}[G]$ by $$\begin{aligned} L'(f):=\|[D,\lambda(f)]\| <\infty \end{aligned}$$ for $f \in \mathbb{C}[G]$. It is shown that if $G$ is hyperbolic ([@ozawa2005hyperbolic]) or of polynomial growth ([@christ2017nilpotent]), then $L'$ is a Lip-norm. * For a spectral triple $(\mathcal{A},H,D)$, the natural choice of Lip-norm is $L'(a):=\|[D,a]\|$ (if it satisfies the conditions in Definition 2.1) because in the commutative case this distance coincides with the distance of the base space by the Connes distance formula [\[diatance\]](#diatance){reference-type="eqref" reference="diatance"}. But we will not use this Lip-norm in this paper. The reason is that it does not give a Lip-norm for the truncated operator systems, as we see in the next remark. **Remark 1**. * Let $(\mathcal{A},H,D)=(\mathbb{C}[G],\ell^2(G),D)$ be the spectral triple of a discrete group $G$, where the Dirac operator $D$ is the one defined in Example 3.1. Then we can form an operator system spectral triple $(P_{\Lambda}\mathcal{A}P_{\Lambda},P_{\Lambda}H,P_{\Lambda}DP_{\Lambda})$. In this case, the operator system $P_{\Lambda}\mathcal{A}P_{\Lambda}$ is specified in [\[toeplitz\]](#toeplitz){reference-type="eqref" reference="toeplitz"}. We define a seminorm $L'_{\Lambda}$ on $P_{\Lambda}\mathcal{A}P_{\Lambda}$ by $$\begin{aligned} L'_{\Lambda}(A):=\|[P_{\Lambda}DP_{\Lambda},A]\| \end{aligned}$$ for $A=(a_{xy^{-1}})_{x,y \in B_{\Lambda}}$. Let us say $G=\mathbb{Z}$ and define a matrix $A\in P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$ by $a_{2\Lambda}=1$ and $a_n=0$ for $n=-2\Lambda,-2\Lambda+1, \cdots, 2\Lambda-1$. Then $$\begin{aligned} [P_{\Lambda}DP_{\Lambda},A]=((\ell(x)-\ell(y))\cdot a_{x-y})_{x,y=-\Lambda}^{\Lambda}. 
\end{aligned}$$ By construction, the $(i,j)$-entry of the above matrix is $0$ if $(i,j)\neq (\Lambda,-\Lambda)$, and for the $(\Lambda,-\Lambda)$-entry, we have $\ell(\Lambda)-\ell(-\Lambda)=0$. So $[P_{\Lambda}DP_{\Lambda},A]=0$, and the kernel of $L'_{\Lambda}$ contains non-scalar matrices. * We introduce other seminorms, which might be a more natural analogue of the Lipschitz norm in the commutative case. We fix the notation in the next definition throughout this paper. **Definition 4**. * We define derivatives on the group algebra $\mathbb{C}[G]$ and its truncated operator systems $P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$ by $$\begin{aligned} d : \mathbb{C}[G] \rightarrow \mathbb{C}[G] &;\text{ }\sum f(x)\delta_x \mapsto \sum \ell(x)f(x)\delta_x \\ d_{\Lambda}:P_{\Lambda}\mathbb{C}[G]P_{\Lambda} \rightarrow P_{\Lambda}\mathbb{C}[G]P_{\Lambda}&;\text{ } (a_{xy^{-1}})_{x,y \in B_{\Lambda}} \mapsto (\ell(xy^{-1})a_{xy^{-1}})_{x,y \in B_{\Lambda}}. \end{aligned}$$ For each positive integer $s$, we define seminorms by $L_s(f):=\|d^sf\|_{B(\ell^2(G))}$ and $L_{s,\Lambda}(T):=\|d_{\Lambda}^sT\|_{B(P_{\Lambda}\ell^2(G))}$ for $f\in \mathbb{C}[G]$ and $T\in P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$. * **Lemma 3**. * For any finitely generated group $G$ and any positive integer $s$, the seminorm $L_s$ is lower semicontinuous with respect to the reduced $C^*$-norm. * **Proof 2**. *Assume $\{f_j\}_j$ is a sequence in $\mathbb{C}[G]$ that converges to $f \in \mathbb{C}[G]$ with respect to the operator norm and $L_s(f_j)\leq 1$ for all $j$. For any subset $F \subset G$, let $P_F\in B(\ell^2(G))$ be the projection onto the space $\ell^2(F)\subset \ell^2(G)$. 
Then for any finite subset $F \subset G$, we have $$\begin{aligned} \left\|P_F (\lambda(\sum_x \ell(x)^sf(x)\delta_x))P_F \right\|_{B(\ell^2(G))} =\lim_j \left\|P_F \lambda(d^sf_j) P_F\right\|_{B(\ell^2(G))} \leq 1. \end{aligned}$$ Since the operator norm of $\lambda(\sum_x \ell(x)^sf(x)\delta_x)$ is the supremum of the left hand side of the above formula over all finite subsets $F \subset G$, we have shown that $L_s(f)\leq 1$. ◻* We need to show that the seminorm defined above is a Lip-norm. The next lemma is an analogue of Theorem 2.6 of [@antonescu2004metrics] for our seminorms, and the same proof works for our case. First we recall the definition of the rapid decay property of discrete groups, which appears in the assumption of the next lemma. **Definition 5** ([@valette2002introduction] Chapter 8). * Let $G$ be a finitely generated group with the length function $\ell$. For $s>0$, we define a weighted $\ell^2$-norm by $$\begin{aligned} \|f\|_{H^s}:=\left (\sum_{x \in G} |f(x)|^2(1+\ell(x))^{2s} \right )^{\frac{1}{2}} \end{aligned}$$ for $f \in \mathbb{C}[G]$. We say that the group $G$ has the rapid decay property if there exist $C>0$ and $s>0$ such that $$\begin{aligned} \|\lambda(f)\|_{C^*_r(G)} \leq C\|f\|_{H^s} \end{aligned}$$ for all $f \in \mathbb{C}[G]$. * **Lemma 4**. * If $G$ is a group with the rapid decay property, then there exists a positive integer $s_0$ such that $L_s$ is a Lip-norm on $\mathbb{C}[G]$ for every $s \geq s_0$. * **Proof 3**. *We show the third condition in Definition 2.1. 
Since $G$ satisfies the Haagerup inequality, there exist constants $C$ and $s_0$ such that $$\begin{aligned} \|\lambda(f)\|^2_{B(\ell^2(G))} \leq C \sum_x (1+\ell(x))^{2s_0} |f(x)|^2 \end{aligned}$$ for all $f\in\mathbb{C}[G]$. By Proposition 1.3 of [@ozawa2005hyperbolic], it suffices to show that the set $$\begin{aligned} \mathcal{D}_s:=\{f \in \mathbb{C}[G]: \text{ }L_s(f)\leq 1 \text{, } f(e)=0\} \end{aligned}$$ is totally bounded with respect to the operator norm for sufficiently large $s$ depending on $G$. Let $\epsilon>0$ be given. We show the following claim.* ***Claim 1**. * For any $s \geq s_0+1$, there exists $N$ such that for any $f \in \mathcal{D}_s$ we have $$\begin{aligned} \left \| \lambda\left (\sum_{\ell(x)\geq N} f(x)\delta_x\right ) \right \|_{B(\ell^2(G))} \leq \epsilon . \end{aligned}$$ ** **Proof of Claim.* For $f \in \mathcal{D}_s$, we have $$\begin{aligned} \left \| \lambda\left (\sum_{\ell(x)\geq N} f(x)\delta_x \right) \right \|^2_{B(\ell^2(G))}&\leq C\sum _{\ell(x)\geq N} (1+\ell(x))^{2s_{0}} |f(x)|^2 \\ & \leq C\cdot 2^{2s_0} \sum _{\ell(x)\geq N} \ell(x)^{2s_0} |f(x)|^2 \\ & = C\cdot 2^{2s_0} \sum _{\ell(x)\geq N} \ell(x)^{2s_0-2s} \ell(x)^{2s} |f(x)|^2 \\ & \leq C\cdot 2^{2s_0} \cdot \frac{1}{N^{2(s-s_0)}} \sum _{\ell(x)\geq N} \ell(x)^{2s} |f(x)|^2 \\ & \leq C\cdot 2^{2s_0} \cdot \frac{1}{N^{2(s-s_0)}} \sum _{x \in G} \ell(x)^{2s} |f(x)|^2 \\ & = C\cdot 2^{2s_0} \cdot \frac{1}{N^{2(s-s_0)}} \left \| \lambda\left (d^s f\right ) (\delta_e) \right \|^2_{\ell^2(G)}\\ & \leq C\cdot 2^{2s_0} \frac{1}{N^{2(s-s_0)}} \left \| \lambda\left (d^s f\right ) \right \|^2_{B(\ell^2(G))}\\ & \leq \frac{ C\cdot 2^{2s_0} }{N^{2(s-s_0)}}. \end{aligned}$$ Since $2(s-s_0)\geq 2>0$, by taking $N$ such that $\frac{ C\cdot 2^{2s_0} }{N^{2(s-s_0)}}<\epsilon^2$, the claim follows. ◻* *Since $G$ is finitely generated, $\{x\in G: \text{ }\ell(x)<N\}$ is a finite set, so $$\begin{aligned} \{f \in \mathcal{D}_s: \text{ }\text{supp}(f)\subset B_N\} \end{aligned}$$ is totally bounded. 
Therefore, $\mathcal{D}_s$ is totally bounded. ◻* Since the kernel of $L_{s,\Lambda}$ consists only of scalars and the truncated operator system $P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$ is finite-dimensional, we have the following. **Lemma 5**. * For every finitely generated group $G$ and for every positive integer $s$, the seminorm $L_{s,\Lambda}$ is a Lip-norm for $P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$. * In the last section, we work on groups with polynomial growth. Here we recall a theorem of R. Tessera on an asymptotic estimate of the volume of the boundary of balls. **Theorem 3** (R. Tessera [@tessera2007volume]). * Let $G$ be a compactly generated group of polynomial growth, with Haar measure $\mu$. Then there exist a constant $C$ and $\beta>0$ such that $$\begin{aligned} \frac{\mu( B_{\Lambda+1}\setminus B_{\Lambda})}{\mu (B_{\Lambda})} \leq C\Lambda^{-\beta}. \end{aligned}$$ * # Noncommutative Fejér kernel In this section, we develop a theory of the noncommutative Fejér kernel for discrete groups to prepare for some estimates we need in the last section. For a subset $F \subset G$, its cardinality is denoted by $\#F$. For a positive number $\Lambda$, we define a finitely supported function $F_{\Lambda}\in \mathbb{C}[G]$ by $$\begin{aligned} F_{\Lambda}(x):=\frac{\#(B_{\Lambda}\cap B_{\Lambda}(x))}{\#B_{\Lambda}}.\end{aligned}$$ This gives a pointwise multiplication operator on the group algebra $\mathbb{C}[G]$, which is denoted by $$\begin{aligned} \mathcal{F}_{\Lambda} : \mathbb{C}[G] \rightarrow \mathbb{C}[G] ; \text{ } \sum f(x)\delta_x \mapsto \sum F_{\Lambda}(x)f(x)\delta_x.\end{aligned}$$ We regard $\mathcal{F}_{\Lambda}$ as a noncommutative analogue of the Fejér kernel, introduced for $G=\mathbb{Z}$ to approximate the delta function by Laurent polynomials on $S^1$ with finite degrees. 
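As a concrete sanity check (the choice $G=\mathbb{Z}$ with $S=\{\pm 1\}$ and the brute-force set computation below are only for illustration, not part of the text), the weight $F_{\Lambda}$ on $\mathbb{Z}$ reproduces the classical Fejér coefficients $\max(0,\,1-|n|/(2\Lambda+1))$:

```python
# Illustrative sketch: for G = Z with S = {+1,-1}, B_Lambda = {-Lambda,...,Lambda},
# so #(B_Lambda \cap B_Lambda(n)) = max(0, 2*Lambda + 1 - |n|), and F_Lambda matches
# the classical Fejer kernel coefficients.

def fejer_weight(lam, n):
    b0 = set(range(-lam, lam + 1))          # B_Lambda
    bn = set(range(n - lam, n + lam + 1))   # B_Lambda(n) = n + B_Lambda
    return len(b0 & bn) / len(b0)

lam = 7
for n in range(-20, 21):
    classical = max(0.0, 1.0 - abs(n) / (2 * lam + 1))
    assert abs(fejer_weight(lam, n) - classical) < 1e-12
print("F_Lambda on Z agrees with the classical Fejer coefficients")
```

This is the sense in which $\mathcal{F}_{\Lambda}$ generalizes Fejér summation: the weight decays linearly in the word length until it vanishes outside $B_{2\Lambda}$.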
In particular, it has the following approximation property (Lemma 10 of [@van2021gromov]):\ \"there exists a sequence $\{\epsilon_{\Lambda}\}_{\Lambda}$ which converges to $0$ as $\Lambda$ goes to infinity such that $$\begin{aligned} \|f-\mathcal{F}_{\Lambda}(f)\|_{\infty}\leq \epsilon_{\Lambda}\|f'\|_{\infty}\end{aligned}$$ for all $f\in C^{\infty}(S^1)\subset C(S^1)\cong C^*_r(\mathbb{Z})$.\" However, in [@leimbach2023gromov] it is pointed out that this approximation property cannot be generalized to higher-dimensional tori. If we use a higher-order derivative instead, we can show that this approximation holds for all groups with polynomial growth. Here we remark that for finitely generated discrete groups, having polynomial growth is equivalent to being amenable and having the rapid decay property ([@valette2002introduction] Chapter 8). Before stating the main estimate of this section, we need the following lemma. **Lemma 6**. * Let $G$ be a discrete group generated by a finite symmetric subset $S$. If the sequence $\{B_{\Lambda}\}_{\Lambda}$ is a Følner sequence, then there exists a sequence $\{\epsilon_{\Lambda}\}_{\Lambda}$ of positive numbers which converges to $0$ as $\Lambda$ goes to infinity such that $$\begin{aligned} \|f - \mathcal{F}_{\Lambda}(f)\|_{\ell^2(G)}\leq \epsilon_{\Lambda} \|Df\|_{\ell^2(G)} \end{aligned}$$ for all $f \in \mathbb{C}[G]$. * **Proof 4**. * Assume $\{B_{\Lambda}\}_{\Lambda}$ is a Følner sequence. Then there exists a sequence $\{\epsilon_{\Lambda}\}_{\Lambda}$ of positive numbers which converges to $0$ as $\Lambda$ goes to infinity such that $$\begin{aligned} \frac{\#(B_{\Lambda}\setminus B_{\Lambda}(x))}{\#B_{\Lambda}} \leq \epsilon_{\Lambda} \end{aligned}$$ for any $x \in S$. 
Therefore, since for any $x=x_1 x_2 \cdots x_n \in G$ with $x_i \in S$ and $n=\ell(x)$, $$\begin{aligned} B_{\Lambda}\setminus B_{\Lambda}(x) \subset (B_{\Lambda}\setminus B_{\Lambda}(x_1)) &\cup (B_{\Lambda}(x_1)\setminus B_{\Lambda}(x_{1}x_{2})) \cup \\\cdots &\cup (B_{\Lambda}(x_1\cdots x_{n-1})\setminus B_{\Lambda}(x)), \end{aligned}$$ and each set on the right hand side is a left translate of $B_{\Lambda}\setminus B_{\Lambda}(x_k)$ for some $x_k \in S$, we have $$\label{boundary} \frac{\#(B_{\Lambda}\setminus B_{\Lambda}(x))}{\#B_{\Lambda}} \leq \ell(x) \epsilon_{\Lambda}.$$ So, for $f \in \mathbb{C}[G]$ we have the following estimate $$\begin{aligned} &\|f-\mathcal{F}_{\Lambda}(f)\|_{\ell^2(G)}^2\\ =& \sum_{\ell(x)^2\leq \frac{1}{{\epsilon_{\Lambda}}}}\left( 1- \frac{\#(B_{\Lambda}\cap B_{\Lambda}(x))}{\#B_{\Lambda}} \right)^2|f(x)|^2+ \sum_{\ell(x)^2 > \frac{1}{{\epsilon_{\Lambda}}}}\left( 1- \frac{\#(B_{\Lambda}\cap B_{\Lambda}(x))}{\#B_{\Lambda}} \right)^2|f(x)|^2\\ \leq& \sum_{0< \ell(x)^2\leq \frac{1}{{\epsilon_{\Lambda}}}}\left( \sqrt{\frac{1}{\epsilon_{\Lambda}}} \cdot \epsilon_{\Lambda}\right)^2|f(x)|^2 + \sum_{\ell(x)^2 > \frac{1}{{\epsilon_{\Lambda}}}}|f(x)|^2 \\ \leq& \epsilon_{\Lambda} \sum_{x \neq e} \ell(x)^2|f(x)|^2. \end{aligned}$$ Since $\sum_{x \neq e} \ell(x)^2|f(x)|^2=\|Df\|^2_{\ell^2(G)}$, the lemma follows after renaming $\epsilon_{\Lambda}^{1/2}$ as $\epsilon_{\Lambda}$. ◻ * **Theorem 4**. * If $G$ is a discrete group with polynomial growth, then there exist a positive integer $s$ and a sequence $\{\epsilon_{\Lambda}\}_{\Lambda}$ of positive numbers which converges to $0$ as $\Lambda$ goes to infinity such that $$\begin{aligned} \|f-\mathcal{F}_{\Lambda}(f)\|_{B(\ell^2(G))}\leq \epsilon_{\Lambda} \|d^s(f)\|_{B(\ell^2(G))} \end{aligned}$$ for any $f \in \mathbb{C}[G]$. * **Proof 5**. * Since $G$ has polynomial growth, the sequence $\{B_{\Lambda}\}_{\Lambda}$ is a Følner sequence. 
By applying Lemma 4.1 to $(1+D)^sf$, for any positive integer $s$ (to be fixed later), there exists a sequence $\{\epsilon_{\Lambda}\}_{\Lambda}$ of positive numbers which converges to $0$ as $\Lambda$ goes to infinity such that $$\begin{split} \|(1+D)^{s}(f-\mathcal{F}_{\Lambda}(f))\|^2_{\ell^2(G)} & = \|(1+D)^sf-\mathcal{F}_{\Lambda}((1+D)^sf)\|^2_{\ell^2(G)}\\ &\leq \epsilon_{\Lambda}^2 \|D(1+D)^{s}f\|^2_{\ell^2(G)}\\ & = \epsilon_{\Lambda}^2 \sum_{x \neq e} \ell(x)^2(1+\ell(x))^{2s}|f(x)|^2 \\ \label{sobolev} &\leq 2^{2s}\epsilon_{\Lambda}^2 \sum_x \ell(x)^{2s+2}|f(x)|^2 \\ & = 2^{2s}\epsilon_{\Lambda}^2 \|\lambda(d^{s+1}f)\delta_e\|_{\ell^2(G)}^2 \\ & \leq 2^{2s}\epsilon_{\Lambda}^2 \|\lambda(d^{s+1}f)\|_{B(\ell^2(G))}^2. \end{split}$$ Note that in the first equality, we use the fact that $D$ and $\mathcal{F}_{\Lambda}$ commute, since both of them are pointwise multiplication operators. Since $G$ satisfies the rapid decay property (Example 8.5 of [@valette2002introduction]), the left hand side dominates $\|f-\mathcal{F}_{\Lambda}(f)\|_{C^*_r(G)}$ up to some constant which is independent of $\Lambda$ and $f$ for sufficiently large $s$. This proves the theorem. ◻ * # Proof of the main theorem In this section, we prove our main theorem. **Theorem 5**. * If $G$ is a finitely generated group with polynomial growth, then there exists $s$ such that the truncated operator systems converge to the original algebra in terms of the quantum Gromov-Hausdorff distance defined by $s$-Lip-norms; i.e. $$\begin{aligned} \text{dist}_q((\mathbb{C}[G],L_s),(P_{\Lambda}\mathbb{C}[G]P_{\Lambda},L_{s,\Lambda}))\rightarrow 0. \end{aligned}$$ * Recall from Section 3 that the $s$-Lip-norm is the Lip-norm induced by the $s$-th derivative. That is, $$\begin{aligned} L_s(f)&:=\|\lambda(d^sf)\|_{B(\ell^2(G))}\\ L_{s,\Lambda}(T)&:=\|d_{\Lambda}^sT\|_{B(P_{\Lambda}\ell^2(G))} \end{aligned}$$ for $f \in \mathbb{C}[G]$ and for $T\in P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$. **Definition 6**. 
* We define a unital linear map $$\begin{aligned} r_{\Lambda}: P_{\Lambda}\mathbb{C}[G]P_{\Lambda} &\rightarrow \mathbb{C}[G] \\ (f(xy^{-1}))_{x,y \in B_{\Lambda}} &\mapsto \sum_{x \in B_{2\Lambda}}f(x)\frac{\#(B_{\Lambda}\cap B_{\Lambda}(x))}{\#B_{\Lambda}} \delta_x. \end{aligned}$$ * For $G =\mathbb{Z}^d$, the positivity of $r_{\Lambda}$ was shown by harmonic analytical techniques in [@leimbach2023gromov]. Here we give an operator theoretic proof for the general case. **Lemma 7**. * The map $r_{\Lambda}$ is unital and completely positive. * **Proof 6**. * We only need to show the complete positivity. For each $\alpha \in G$ we define an operator $$\begin{aligned} U_{\alpha}:H \rightarrow H \end{aligned}$$ on $H:=\ell^2(G)$ by $(U_{\alpha}\xi)(x)=\xi(x\alpha^{-1})$ for $\xi \in H$ and $x \in G$. Then for any $A=(f(xy^{-1}))_{x,y \in B_{\Lambda}} \in P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$ and $\xi \in H$, we have $$\begin{aligned} \sum_{\alpha \in G} \langle P_{\Lambda}U_{\alpha}\xi,AP_{\Lambda}U_{\alpha}\xi \rangle = \sum_{\alpha \in G} & \sum_{x \in B_{\Lambda}} \overline{\xi(x\alpha^{-1})} \left ( \sum_{y \in B_{\Lambda}} f(xy^{-1})\xi(y\alpha^{-1}) \right )\\ = \sum_{\alpha \in G} &\sum_{x,y \in B_{\Lambda}} \overline{\xi(x\alpha^{-1})} f((x\alpha^{-1})(y\alpha^{-1})^{-1}) \xi(y\alpha^{-1})\\ = \sum_{x',y'\in G} & \left (\overline{\xi(x')}f(x'(y')^{-1})\xi(y') \cdot \right . \\ & \left . \#\{(x,y,\alpha)\in B_{\Lambda}^2\times G;x\alpha^{-1}=x'\text{ and }y\alpha^{-1}=y'\} \right )\\ = \sum_{x',y'\in G} & \left (\overline{\xi(x')}f(x'(y')^{-1})\xi(y') \cdot \#(B_{\Lambda}(x')\cap B_{\Lambda}(y')) \right )\\ = \#B_{\Lambda}\cdot & \langle \xi, r_{\Lambda}(A) \xi \rangle . \end{aligned}$$ Therefore $r_{\Lambda}$ is completely positive. ◻ * **Remark 2**. 
* The two unital completely positive maps $q_{\Lambda}$ and $r_{\Lambda}$ commute with the derivatives in the sense that $$\begin{aligned} q_{\Lambda}(df)=d_{\Lambda}(q_{\Lambda}f) \text{ and }r_{\Lambda}(d_{\Lambda}T)=d(r_{\Lambda}T) \end{aligned}$$ for all $f \in \mathbb{C}[G]$ and $T \in P_{\Lambda}\mathbb{C}[G]P_{\Lambda}$. In particular, these two maps are contractive with respect to the Lip-norms $L_s$ and $L_{s,\Lambda}$. * Note that $$\begin{aligned} f-r_{\Lambda}\circ q_{\Lambda}(f)&=f-\mathcal{F}_{\Lambda}(f)\\ T-q_{\Lambda}\circ r_{\Lambda}(T)&=(T_{xy^{-1}})_{x,y\in B_{\Lambda}}-(F_{\Lambda}(xy^{-1})T_{xy^{-1}})_{x,y\in B_{\Lambda}}.\end{aligned}$$ In Theorem 3.1, if $G$ has polynomial growth, we have already established the estimate $$\label{C^*-estimate} \|f-r_{\Lambda}\circ q_{\Lambda}(f)\|_{C^*_r(G)}=\|f-\mathcal{F}_{\Lambda}(f)\|_{C^*_r(G)} \leq \epsilon_{\Lambda} L_s (f)$$ with a positive integer $s>0$ and a sequence of positive numbers $\{\epsilon_{\Lambda}\}$ which converges to $0$. The other inequality is much more delicate, because we do not have the Haagerup inequality for truncations in the following sense. Even if $G$ satisfies the Haagerup inequality for constants $C$ and $s$ (i.e. $\|\lambda(f)\|_{C^*_r(G)} \leq C\|f\|_{H^s}$ for all $f\in \mathbb{C}[G]$), we cannot conclude $$\begin{aligned} \|P_{\Lambda}\lambda(f)P_{\Lambda}\|_{B(P_{\Lambda }\ell^2(G))}\leq C\|P_{\Lambda}(1+D_{\Lambda})^sf\|_{P_{\Lambda}\ell^2(G)}\end{aligned}$$ in general. For example, if $f=\delta _x$ for $x \in G$ with $\Lambda <\ell(x) \leq 2\Lambda$, then the right hand side is $0$ but $P_{\Lambda}\lambda(f)P_{\Lambda}$ is a nonzero operator. But actually, we can prove the following lemma. **Lemma 8**. 
* For any group $G$ with polynomial growth, there exist a positive integer $s>0$ and a sequence $\{\epsilon_{\Lambda}\}_{\Lambda}$ of positive numbers which converges to $0$ as $\Lambda$ goes to infinity such that $$\begin{aligned} \|T-q_{\Lambda}\circ r_{\Lambda}(T)\|_{B(P_{\Lambda}\ell^2(G))} \leq \epsilon_{\Lambda} L_{s,\Lambda}(T). \end{aligned}$$ * **Proof 7**. * Since $G$ has polynomial growth, by Lemma 3.4, there exist constants $C$ and $\beta>0$ such that $\frac{\#(\partial B_{\Lambda})}{\# B_{\Lambda}}\leq \frac{C}{\Lambda^{\beta}}$. We can take a constant $d$ such that $(\Lambda^{\beta})^d$ dominates $\#B_{\Lambda}$. Since groups with polynomial growth have the rapid decay property, there exist $s>0$ and $C'>0$ such that every $f\in \mathbb{C}[G]$ satisfies the Haagerup inequality; $$\begin{aligned} \|\lambda(f)\|^2_{B(\ell^2(G))} \leq C'\sum_{x\in G} (1+\ell(x))^{2s}|f(x)|^2. \end{aligned}$$ Let $f\in \mathbb{C}[G]$ be such that $\text{supp}(f)\subset B_{2\Lambda}$. For $x \in B_{\Lambda}$ we have $$\begin{aligned} (d_{\Lambda}^{s+d+1}P_{\Lambda}\lambda(f)P_{\Lambda}) \delta_x =\sum_{y\in B_{\Lambda}} \ell(yx^{-1})^{s+d+1}f(yx^{-1})\delta_{y}. 
\end{aligned}$$ Therefore, by denoting $A:=P_{\Lambda}\lambda(f)P_{\Lambda}$ we have $$\label{evaluation} L_{s+d+1,\Lambda}(A)^2= \|d_{\Lambda}^{s+d+1}(A)\|^2\geq \sum_{y \in B_{\Lambda}(x^{-1})} \ell(y)^{2s+2d+2}|f(y)|^2 .$$ By taking the sum of [\[evaluation\]](#evaluation){reference-type="eqref" reference="evaluation"} for all $x \in B_{\Lambda}$, $$\label{mainestimate} \begin{split} \#B_{\Lambda} \cdot L_{s+d+1,\Lambda}(A)^2\geq & \sum_{x \in B_{\Lambda}} \sum_{y \in B_{\Lambda}(x^{-1})} \ell(y)^{2s+2d+2}|f(y)|^2\\ =&\sum_{y \in B_{2\Lambda}} \#(B_{\Lambda}\cap B_{\Lambda}(y)) \cdot \ell(y)^{2s+2d+2}|f(y)|^2\\ =& \sum_{\ell(y)\leq {\Lambda}^{\frac{1}{2}\beta}} \#(B_{\Lambda}\cap B_{\Lambda}(y)) \cdot \ell(y)^{2s+2d+2}|f(y)|^2\\ &+ \sum_{{\Lambda}^{\frac{1}{2}\beta}<\ell(y)\leq 2\Lambda} \#(B_{\Lambda}\cap B_{\Lambda}(y)) \cdot \ell(y)^{2s+2d+2}|f(y)|^2 \\ \geq& \sum_{\ell(y)\leq {\Lambda}^{\frac{1}{2}\beta}} \#(B_{\Lambda})(1-C\frac{1}{\Lambda^{\beta}}\cdot {\Lambda}^{\frac{1}{2}\beta}) \ell(y)^{2s+2d+2}|f(y)|^2\\ &+ \sum_{{\Lambda}^{\frac{1}{2}\beta}<\ell(y)\leq 2\Lambda} \ell(y)^{2s+2d+2}|f(y)|^2 \\ \geq & \min\{\#(B_{\Lambda})(1-C\frac{1}{\Lambda^{\beta}}\cdot {\Lambda}^{\frac{1}{2}\beta}), (\Lambda^{\frac{1}{2}\beta})^{2d}\} \sum_{y\in B_{2\Lambda}} \ell(y)^{2s+2}|f(y)|^2\\ \geq & \min\{\#(B_{\Lambda})(1-C\frac{1}{\Lambda^{\frac{1}{2}\beta}}), (\Lambda^{d\beta})\} \sum_{y\in B_{2\Lambda}} \ell(y)^{2s+2}|f(y)|^2. \end{split}$$ For the second inequality, we use the formula [\[boundary\]](#boundary){reference-type="eqref" reference="boundary"} and the estimate for the volume of the boundary. By the choice of $d$, both terms in the $\min$ are larger than $\#B_{\Lambda}$ up to a constant depending only on $G$. 
Since we know from the estimate [\[sobolev\]](#sobolev){reference-type="eqref" reference="sobolev"} that there exists a sequence $\{\epsilon_{\Lambda}\}$ converging to $0$ such that $$\begin{aligned} \|q_{\Lambda}\lambda(f)&-q_{\Lambda}\circ r_{\Lambda}\circ q_{\Lambda}(\lambda(f))\|_{B(P_{\Lambda}\ell^2(G))}\\ &\leq \|\lambda(f)-r_{\Lambda}\circ q_{\Lambda}(\lambda(f))\|_{B(\ell^2(G))}\leq \epsilon_{\Lambda} \Big( \sum_{y\in B_{2\Lambda}} \ell(y)^{2s+2}|f(y)|^2 \Big)^{1/2} \end{aligned}$$ for any $f\in \mathbb{C}[G]$ supported in $B_{2\Lambda}$, by combining this inequality with [\[mainestimate\]](#mainestimate){reference-type="eqref" reference="mainestimate"}, we have the desired estimate. ◻ * Applying Lemma 5.1, Remark 5.1, [\[C\^\*-estimate\]](#C^*-estimate){reference-type="eqref" reference="C^*-estimate"} and Lemma 5.2 to Lemma 2.1, we have proven Theorem 5.1. Finally, we study the natural question of whether $(P_{\Lambda}\mathbb{C}[G]P_{\Lambda})$ converges to the (reduced) group $C^*$-algebra $C^*_r(G)$ or not in the same setting as Theorem 5.1. To answer it, we have to make clear how to extend the Lip-norm $L_s$ on $\mathbb{C}[G]$ to its completion $C^*_r(G)$, following Section 4 of the paper of Rieffel [@rieffel1999metrics]. Let $\mathcal{L}_1$ be the ball with respect to the Lip-norm $L_s$, i.e., $$\begin{aligned} \mathcal{L}_1:=\{f \in \mathbb{C}[G]:L_s(f)\leq 1\}.\end{aligned}$$ We denote the norm closure of $\mathcal{L}_1$ in $C^*_r(G)$ (rather than in $\mathbb{C}[G]$) by $\overline{\mathcal{L}}_1$. Then we define a (densely defined) seminorm $\overline{L}_s$ on $C^*_r(G)$ by $$\begin{aligned} \overline{L}_s(f)=\inf \{r\in \mathbb{R}_+:\text{ }f \in r \overline{\mathcal{L}}_1\}.\end{aligned}$$ Then it is shown in Proposition 4.4 of [@rieffel1999metrics] that if $L_s$ is a Lip-norm, then $\overline{L}_s$ is a Lip-norm on $C^*_r(G)$ in the sense of Definition 2.1 which extends $L_s$. Using $\overline{L}_s$, we can show the $C^*$-algebra version of Theorem 5.1. 
**Corollary 1**. * If $G$ is a finitely generated group with polynomial growth, then there exists $s$ such that we have the quantum Gromov-Hausdorff convergence $$\begin{aligned} \text{dist}_q((C^*_r(G),\overline{L}_s),(P_{\Lambda}\mathbb{C}[G]P_{\Lambda},L_{s,\Lambda}))\rightarrow 0. \end{aligned}$$ * **Proof 8**. * We only need to show that $$\begin{aligned} \|f-r_{\Lambda}\circ q_{\Lambda}(f)\| \leq \epsilon_{\Lambda} \overline{L}_s(f) \end{aligned}$$ for all $f\in C^*_r(G)$, where $\{\epsilon_{\Lambda}\}$ is the sequence of positive numbers that appeared in [\[C\^\*-estimate\]](#C^*-estimate){reference-type="eqref" reference="C^*-estimate"}. Let us assume $R:=\overline{L}_s(f)< \infty$. Then there exists a sequence $\{f_j\}$ in $R\mathcal{L}_1$ such that $\|f-f_j\|_{C^*_r(G)}\rightarrow 0$. Therefore $$\begin{aligned} \|f-r_{\Lambda}\circ q_{\Lambda}(f)\|&=\lim_j \|f_j-r_{\Lambda}\circ q_{\Lambda}(f_j)\|\\ & \leq \limsup_j \epsilon_{\Lambda} L_s(f_j) \leq \epsilon_{\Lambda} R=\epsilon_{\Lambda} \overline{L}_s(f). \end{aligned}$$ ◻ * # Acknowledgement {#acknowledgement .unnumbered} The author would like to thank his advisor Professor Guoliang Yu for his encouragement and discussions. He is also grateful to Jinmin Wang for discussing technical difficulties. [^1]: Department of Mathematics, Texas A&M University, College Station, TX 77843, USA
--- abstract: | We consider the 3D incompressible Euler equations in the following three-scale hierarchical situation: a large-scale vortex stretches a middle-scale one and, at the same time, the middle-scale vortex stretches a small-scale one. In this situation, we show that the stretching effect of the middle-scale flow is weakened by the large-scale one. In other words, the vortices being stretched could have the corresponding stress tensor being weakened. address: - Department of Mathematics and RIM, Seoul National University. - Department of Mathematics, Brown University. - Graduate School of Economics, Hitotsubashi University. author: - In-Jee Jeong - Jungkyoung Na - Tsuyoshi Yoneda title: Weakened vortex stretching effect in three scale hierarchy for the 3D Euler equations --- # Introduction Recent direct numerical simulations [@Goto-2008; @GSK; @Motoori-2019; @Motoori-2021] of the 3D Navier--Stokes turbulence at high Reynolds numbers have shown that there exists a *hierarchy* of scale-local vortex stretching dynamics. In particular, Goto--Saito--Kawahara [@GSK] discovered that turbulence at sufficiently high Reynolds numbers in a periodic cube is composed of a self-similar hierarchy of anti-parallel pairs of vortex tubes, which is sustained by creation of smaller-scale vortices due to stretching in larger-scale strain fields. This discovery has been further investigated by Y.--Goto--Tsuruhashi [@YGT] (see also [@TGOY]). From these previous results, we could conclude physically that the most important feature of the Navier--Stokes turbulence could be scale-local vortex stretching, which does not seem to be random (see also [@JY1; @JY2; @JY3] for the related mathematical results). Therefore, as a sequel to these studies, our next aim is to clarify the locality of the vortex stretching dynamics precisely, and in this paper we consider it within a three-scale hierarchical structure (cf. Shimizu--Y. [@SY] for a geometric approach to this locality). 
In this paper, we shall consider solutions to the 3D incompressible Euler equations in $\mathbb T^3$ $$\label{eq:3D-Euler} \left\{ \begin{aligned} \partial_t u + u \cdot \nabla u + \nabla p = 0 , \\ \nabla\cdot u = 0, \end{aligned} \right.$$ in which the velocity has the following hierarchical structure: $u^\mathcal L+ u^\mathcal I+ u^\mathcal S$, where the letters $\mathcal L,\mathcal I,\mathcal S$ stand for large, intermediate, and small-scale, respectively. They will be arranged in a way that the corresponding vorticities, denoted by $\omega^\mathcal L, \omega^\mathcal I, \omega^\mathcal S$, are mutually almost orthogonal. This is natural since the rate of vortex stretching interaction is maximized when the vortex lines are orthogonal to each other. Indeed, this orthogonality was confirmed in direct numerical simulations [@Goto-2008; @GSK; @Motoori-2019; @Motoori-2021] by statistical means. The goal of this paper is to understand the dynamics of vortex stretching under this hierarchical structure. Before we describe our results, let us give details of our flow configuration. We take the length scale $L$ of the torus $\mathbb{T}^3:=[-L,L)^3$ to be large ($L=100$ suffices) and fix the large-scale velocity to be the linear strain $$\label{eq:u-L} \begin{split} u^\mathcal L(x) := (Mx_1, -Mx_2, 0), \end{split}$$ in the region $|x|\le 10$, for some large $M\gg 1$. This velocity field corresponds to a large-scale antiparallel columnar vortex parallel to the $x_{3}$-axis supported away from the origin. In the following, we shall implicitly assume that $u^\mathcal L$ is a steady solution to 3D Euler with some smooth forcing $f^\mathcal L$ which is supported in the region $|x|>10$. 
To motivate our choice of smaller scale vorticities, consider the *linearized* Euler dynamics around $u^\mathcal L$ in vorticity form: $$\label{eq:3D-lin} \begin{split} \partial_t\omega+ u^\mathcal L\cdot\nabla\omega= \nabla u^\mathcal L\omega \end{split}$$ and since $$\begin{split} \nabla u^\mathcal L= \begin{pmatrix} M & 0 & 0 \\ 0 & -M & 0 \\ 0 & 0& 0 \end{pmatrix}, \end{split}$$ we see that, with respect to the $L^{\infty}$-norm, the solution to [\[eq:3D-lin\]](#eq:3D-lin){reference-type="eqref" reference="eq:3D-lin"} expands and contracts exponentially in time, at rates $e^{Mt}$ and $e^{-Mt}$, along the $x_1$- and $x_2$-axes, respectively. This raises the following question: if we arrange the intermediate and small-scale vorticities to be respectively parallel to the $x_1$- and $x_2$-axes, is it possible that the intermediate vortex (being exponentially stretched by the large-scale) significantly stretches the small-scale, by dominating the decay effect of the large-scale? It turns out that, interestingly, the answer is no: the small-scale vortex, even in the presence of the exponentially stretched intermediate-scale, still decays exponentially in time at the same rate $e^{-Mt}$. While this is not too hard to see for the linearized Euler equations, we establish this in the full nonlinear Euler equations: consider the system $$\label{eq:3D-vort} \left\{ \begin{aligned} \partial_t \omega+ (u + u^{\mathcal L})\cdot \nabla\omega= \nabla(u + u^\mathcal L) \omega, \\ u = \nabla\times (-\Delta)^{-1}\omega, \end{aligned} \right.$$ where we shall assume that $\omega= \omega^\mathcal I+ \omega^\mathcal S$ is supported in the region $|x|\le 10$, so that [\[eq:u-L\]](#eq:u-L){reference-type="eqref" reference="eq:u-L"} applies. We write $u^\mathcal I= \nabla\times (-\Delta)^{-1}\omega^\mathcal I$ and $u^\mathcal S= \nabla\times (-\Delta)^{-1}\omega^\mathcal S$. 
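The exponential stretching and contraction along the large-scale strain can be checked directly; the following sketch is purely illustrative (the value $M=5$ and the RK4 integrator are our choices, not from the paper). It integrates $\frac{d\omega}{dt}=\nabla u^{\mathcal L}\,\omega$ along a trajectory over the timescale $T_M=M^{-1}\log(1+M)$ and recovers the factors $e^{\pm MT_M}=(1+M)^{\pm 1}$:

```python
# Minimal sketch: along a particle trajectory, the linearized vorticity solves
# d(omega)/dt = grad(u^L) omega with grad(u^L) = diag(M, -M, 0), so omega_1 grows
# like e^{Mt}, omega_2 decays like e^{-Mt}, and omega_3 is conserved.
import math

def rk4_vorticity(omega0, M, T, steps):
    dt = T / steps
    rhs = lambda w: (M * w[0], -M * w[1], 0.0)  # grad(u^L) acting on omega
    w = omega0
    for _ in range(steps):
        k1 = rhs(w)
        k2 = rhs(tuple(w[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = rhs(tuple(w[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = rhs(tuple(w[i] + dt * k3[i] for i in range(3)))
        w = tuple(w[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return w

M = 5.0
T = math.log(1.0 + M) / M          # T_M = M^{-1} log(1+M)
w = rk4_vorticity((1.0, 1.0, 1.0), M, T, 4000)
print(w[0] / math.exp(M * T), w[1] * math.exp(M * T), w[2])  # all close to 1
```

Within the window $[0,T_M]$ the stretched component has grown exactly by the factor $1+M$, which is the mechanism the hierarchical construction below exploits.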
More precisely, we take $$\label{def: iw_0} \begin{split} \omega^{\mathcal{I}}_0(x) := (\omega^{\mathcal{I}}_{1,0}(x_2,x_3),0,0),\qquad \omega^{\mathcal{I}}_{1,0}(x_2,x_3) := \sum_{i,j \in \{0,1\} } (-1)^{i+j} \varphi( (-1)^i x_2, (-1)^j x_3), \end{split}$$ where $\varphi\in C^{\infty}_c$ is a non-negative function supported in $\left\{(x_2,x_3)\in \mathbb{T}^2: 1\le x_2,x_3\le 2 \right\}$. Note that $\omega^{\mathcal{I}}_0$ is parallel to the $x_1$-axis and has odd symmetry in both $x_2$ and $x_3$. These odd symmetries are imposed to maximize the stretching effects of $\omega^{\mathcal{I}}$ near the origin and have been inspired by several works [@KS; @Z; @EM]. Finally, with some sufficiently small $0< \varepsilon, \ell \ll 1$, we fix some divergence-free vector field $\tilde{\psi} \in C^{\infty}_{c}(B_{0}(2\ell))$ and require $$\label{def: sw_0} \omega^{\mathcal{S}}_0(x) = \varepsilon\tilde{\psi}(x), \qquad \tilde{\psi}(x)=\left(0,\psi\left(\ell^{-1}x\right),0\right), \qquad |x|\le \ell$$ for some smooth bump function $\psi\ge0$. We consider the flow map $\Phi$ generated by $u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}$: namely, $\Phi(0,x)=x$ and $$\label{def: flow Phi} \frac{d\Phi}{dt}(t,x) = (u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}})(t,\Phi(t,x)).$$ The solution to [\[eq:3D-vort\]](#eq:3D-vort){reference-type="eqref" reference="eq:3D-vort"} satisfies the Cauchy formula $\omega(t,\Phi)=\nabla\Phi \omega_0$, and naturally, we define the evolution of the intermediate and small-scale vortices by $\omega^{\mathcal{I}}(t,\Phi(t,x)) = \nabla\Phi(t,x) \omega^{\mathcal{I}}_{0}(x)$ and $\omega^{\mathcal{S}}(t,\Phi(t,x)) = \nabla\Phi(t,x) \omega^{\mathcal{S}}_{0}(x)$. As long as the solution remains smooth, the supports of $\omega^{\mathcal{I}}$ and $\omega^{\mathcal{S}}$ remain disjoint. We are now ready to state our main theorem. For $0<r<L$, $B_0(r)$ denotes the ball $\{ |x| < r \}$. **Theorem 1**. 
*Consider the solution to [\[eq:3D-vort\]](#eq:3D-vort){reference-type="eqref" reference="eq:3D-vort"} with initial data $\omega_{0} = \omega^{\mathcal{I}}_0 + \omega^{\mathcal{S}}_0 \in C^\infty(\mathbb T^3)$ with $\mathbb T^3:=[-L,L)^3$. The solution remains smooth in the time interval $[0,T_{M}]$, where $T_M = M^{-1}\log(1+M)$. On this time interval, we have $$\label{eq:nb-u-i-decay} \begin{split} \left\Vert{ \nabla u^{\mathcal{I}}(t,\cdot) }\right\Vert_{L^\infty(B_{0}(\ell))} \le C\exp(-C^{-1}Mt) {\Vert\nabla u^{\mathcal{I}}_{0}\Vert_{L^\infty(\mathbb T^{3})} } \end{split}$$ and $$\label{eq:omega-s-decay} \begin{split} \left\Vert{ \omega^{\mathcal{S}}(t,\cdot) }\right\Vert_{L^{\infty}\left( { B_{0}( \tilde{\ell}) } \right)} \le C\exp(-Mt) {\Vert\omega^{\mathcal{S}}_{0}\Vert_{L^\infty(\mathbb T^{3})} }, \qquad \tilde{\ell} = (M^{-1}\ell)^{C} \end{split}$$ uniformly for all $M\gg1$, $0<\ell\le M^{-C}$, and $0<\varepsilon\le \ell^{2s-3}\exp{(-M^C)}$ with $s>5/2$, where $C>1$ is a constant depending only on $L$, $\psi$ and $\varphi$.* *Remark 1*. While $T_M\to0$ as $M\to\infty$, we have that $MT_M\to\infty$, so that within the timescale of $T_M$, the exponential terms on the right hand sides of [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"} and [\[eq:omega-s-decay\]](#eq:omega-s-decay){reference-type="eqref" reference="eq:omega-s-decay"} decay to zero. Furthermore, [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"} should be contrasted with the exponential growth of $\omega^{\mathcal{I}}$: $\Vert\omega^{\mathcal{I}}(t,\cdot)\Vert_{L^\infty(\mathbb T^3)} \gtrsim \exp(Mt)\Vert\omega^{\mathcal{I}}_0\Vert_{L^\infty(\mathbb T^3)}$ in the same time interval. In other words, a vortex that is being stretched may nevertheless have its associated strain (stress) tensor weakened. **Interpretation of the result**. 
It is important to compare the above with the case when the large-scale strain field is absent: in this case, the small-scale vortex gets stretched *at least exponentially in time* by the intermediate scale (cf. [@EM]). Therefore, we see that the vortex stretching effect of the intermediate scale is weakened by the large scale, and the resulting small-scale vortex dynamics is not really different from the case when the intermediate vortex is absent. Our result suggests that the vortex stretching of "adjacent" scales is the one most likely to occur, and thus possible blow-up solutions to the three-dimensional Navier--Stokes and/or Euler equations may not possess multi-scale vortex stretching motion. This insight is consistent with the numerical result by Kang--Yun--Protas [@KYP]. Based on solving a suitable optimization problem numerically, they investigated the largest possible growth of the vorticity in finite time in three-dimensional Navier--Stokes flows. Their findings revealed that the flows maximizing the vortex growth exhibit the form of three perpendicular pairs of anti-parallel vortex tubes of the same size (see Figure 11 in [@KYP]). Furthermore, the flow evolution resulting from such an initial vorticity is accompanied by reconnection events. We can at least see that this flow does not possess multi-scale vortex stretching motion. *Remark 2*. One could similarly investigate the situation in which the directions of the intermediate and small-scale vorticities are switched. In this case, both the large- and intermediate-scale vortices stretch the small-scale vortex. However, in this case, the length scale of the intermediate vortex grows at the same time and escapes the $O(1)$-region around the origin. **Ideas of the proof**. Let us briefly explain the main steps of the proof. 
To begin with, when $\omega^{\mathcal{S}}$ is completely absent, one can simply study the nonlinear evolution of $\omega^{\mathcal{I}}$ and obtain the bound [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"}. This is already non-trivial since one needs to understand the nonlinear self-interaction of $\omega^{\mathcal{I}}$. Then, one can introduce $\omega^{\mathcal{S}}$ and formally analyze the vortex stretching equation of $\omega^{\mathcal{S}}$, which is $D_t(\omega^{\mathcal{S}}) \simeq (\nabla u^\mathcal L+ \nabla u^\mathcal I) \omega^{\mathcal{S}}$ if one neglects the nonlinear self-interaction. Applying the bound [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"} and integrating, one obtains the desired estimate [\[eq:omega-s-decay\]](#eq:omega-s-decay){reference-type="eqref" reference="eq:omega-s-decay"}. However, in this case, one needs not only to handle the self-interaction term, but also to deal with the linear feedback of $\omega^{\mathcal{S}}$ onto the intermediate scale $\omega^{\mathcal{I}}$. Indeed, as soon as $\omega^{\mathcal{S}}$ is introduced, $\omega^{\mathcal{S}}$ and $\omega^{\mathcal{I}}$ satisfy a coupled system of PDEs, even at the linearized level. In particular, it becomes tricky to obtain the bound [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"} in the first place. Therefore, our proof consists of a two-step comparison procedure: we introduce the "pseudo-solution" pair $(\omega^{\mathcal I,P}, \omega^{\mathcal S,P})$ where $\omega^{\mathcal I,P}$ corresponds to the intermediate scale in the absence of the small scale. Then, we compare $\omega^{\mathcal{S}}$ with $\omega^{\mathcal S,P}$, which is in turn compared with the linearized dynamics around $\omega^{\mathcal I,P}$. 
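In formulas, the heuristic for the second step reads as follows; this is only a sketch, which pretends that $\omega^{\mathcal{S}}$ stays aligned with the $x_2$-axis (along which $\nabla u^{\mathcal{L}}$ contributes the factor $-M$) and neglects the self-interaction. Along a flow trajectory, $$\frac{d}{dt}\left|\omega^{\mathcal{S}}\right| \le \left(-M + {\left\Vert \nabla u^{\mathcal{I}}(t,\cdot) \right\Vert}_{L^{\infty}}\right)\left|\omega^{\mathcal{S}}\right|,$$ and Grönwall's inequality combined with [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"} gives $$\left|\omega^{\mathcal{S}}(t)\right| \le \exp\left(-Mt + C{\left\Vert \nabla u^{\mathcal{I}}_0 \right\Vert}_{L^{\infty}}\int_0^t e^{-C^{-1}M\tau}\,d\tau\right)\left|\omega^{\mathcal{S}}_0\right| \lesssim e^{-Mt}\left|\omega^{\mathcal{S}}_0\right|,$$ since the time integral is bounded by $C^2M^{-1}$ uniformly in $t$, while $\omega^{\mathcal{I}}_0$ is fixed and $M\gg1$. The actual proof justifies this picture through the pseudo-solution comparison described above.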
## Notation {#notation .unnumbered} We employ the letters $C,\,C_1,\,C_2,\cdots$ to denote any constants which may change from line to line in a given computation. In particular, the constants depend only on $L$, $\psi$ and $\varphi$. We sometimes use $A\approx B$ and $A\lesssim B$, which mean $A=CB$ and $A\le CB$, respectively, for some constant $C$. # Preliminaries The aim of this section is to establish two principles for comparing perturbed solutions of 3D Euler equations in $\mathbb T^3$. Let $\Bar{u}\in L^{\infty}([0,\bar{T}];H^{s+2}(\mathbb T^3))$ with $s>\frac{5}{2}$ be a solution to the following Cauchy problem for the 3D incompressible Euler equations: $$\label{eq:euler bu} \left\{ \begin{aligned} \partial_{t} \Bar{u}+ \Bar{u}\cdot \nabla\Bar{u}+\nabla\bar{p} &= 0, \\ \nabla\cdot \Bar{u}&=0, \\ \Bar{u}(t=0)&=\Bar{u}_0, \end{aligned} \right.$$ where $\Bar{u}_0$ is a function in $H^{s+2}(\mathbb T^3)$. Then we consider a perturbation problem of [\[eq:euler bu\]](#eq:euler bu){reference-type="eqref" reference="eq:euler bu"}. To be precise, let $u$ be the solution of the following problem: $$\label{eq:euler just u} \left\{ \begin{aligned} \partial_{t} u + u \cdot \nabla u +\nabla p &= 0, \\ \nabla\cdot u &=0, \\ u(t=0)&=\Bar{u}_0 + \varepsilon \tilde{u}_0, \end{aligned} \right.$$ where $\varepsilon>0$ and $\tilde{u}_0$ is a function belonging to $H^{s+1}(\mathbb T^3)$. Now we introduce and prove our two principles. ## Principle 1 Defining $\tilde{u}=u-\Bar{u}$ and $\tilde{p}=p-\bar{p}$, we have $$\label{eq:euler tu} \left\{ \begin{aligned} \partial_{t} \tilde{u}+ \Bar{u}\cdot \nabla\tilde{u}+ \tilde{u}\cdot\nabla\Bar{u}+ \tilde{u}\cdot \nabla\tilde{u}+\nabla\tilde{p} &= 0, \\ \nabla\cdot \tilde{u}&=0, \\ \tilde{u}(t=0)&=\varepsilon \tilde{u}_0. 
\end{aligned} \right.$$ Next, we consider the linearization around $\Bar{u}$: writing $u^{Lin}= \Bar{u}+ \Tilde{u}^{Lin}$, and dropping quadratic terms in the perturbation, we arrive at $$\label{eq:euler ltu} \left\{ \begin{aligned} \partial_{t} \Tilde{u}^{Lin}+ \Bar{u}\cdot \nabla\Tilde{u}^{Lin}+ \Tilde{u}^{Lin}\cdot\nabla\Bar{u}+\nabla\tilde{p}^{Lin} &= 0, \\ \nabla\cdot \Tilde{u}^{Lin}&=0, \\ \Tilde{u}^{Lin}(t=0)&=\varepsilon \tilde{u}_0. \end{aligned} \right.$$ Then we estimate ${\left\Vert \omega(t,\cdot)-\omega^{Lin}(t,\cdot) \right\Vert}_{L^{\infty}\left (\mathbb{T}^3\right)}$ on $[0,\bar{T}]$ for sufficiently small $\varepsilon>0$, where $\omega = \nabla\times u$ and $\omega^{Lin} = \nabla\times u^{Lin}$ are the corresponding vorticities: **Proposition 3**. *Under the above setting, there exists a constant $C>0$ such that if $\varepsilon >0$ satisfies $$\label{condi: eps} \varepsilon \le \frac{1}{C{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}(\mathbb{T}^3)} \Bar{T} }\exp\left(-C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau,\cdot) \right\Vert}_{H^{s+2}(\mathbb{T}^3)}d\tau \right) ,$$ then on $[0,\Bar{T}]$, we have $$\label{eq:perturbation1} {\left\Vert \omega(t,\cdot)-\omega^{Lin}(t,\cdot) \right\Vert}_{L^{\infty}\left (\mathbb{T}^3\right)} \le \varepsilon^2{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}(\mathbb{T}^3)}^2C\Bar{T}\exp\left(C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau,\cdot) \right\Vert}_{H^{s+2}(\mathbb{T}^3)}d\tau \right).$$* *Remark 4*. The point is that the right hand side of [\[eq:perturbation1\]](#eq:perturbation1){reference-type="eqref" reference="eq:perturbation1"} is $O(\varepsilon^2)$, while a priori the left hand side is only $O(\varepsilon)$. Furthermore, the right hand side depends only on $\tilde{u}_0$ and $\Bar{u}$. 
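The smallness threshold [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"} can be traced back to a Riccati-type differential inequality appearing in the proof below; the following elementary computation makes the mechanism explicit. If $y(t)\ge 0$ satisfies $y'\le Ay^2$ on $[0,\Bar{T}]$ for some $A>0$, then integrating $(-1/y)'\le A$ gives $$y(t) \le \frac{y(0)}{1-A\,y(0)\,t}, \qquad 0\le t< (A\,y(0))^{-1},$$ so that $y(t)\le 2y(0)$ on $[0,\Bar{T}]$ as soon as $y(0)\le (2A\Bar{T})^{-1}$. With $y(0)=\varepsilon{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}$ and $A= C\exp\left(C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}d\tau \right)$, this is, up to constants, exactly the condition [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"}.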
*Proof.* From the elementary estimate $${\left\Vert \omega(t,\cdot)-\omega^{Lin}(t,\cdot) \right\Vert}_{L^{\infty}\left (\mathbb{T}^3\right)} \lesssim {\left\Vert \omega(t,\cdot)-\omega^{Lin}(t,\cdot) \right\Vert}_{H^{s-1}(\mathbb{T}^3)} \lesssim {\left\Vert u(t,\cdot)- { u^{Lin}} (t,\cdot) \right\Vert}_{H^{s}(\mathbb{T}^3)},$$ it suffices to prove $${\left\Vert u(t,\cdot)-u^{Lin}(t,\cdot) \right\Vert}_{H^{s}(\mathbb{T}^3)} \le \varepsilon^2{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}(\mathbb{T}^3)}^2C\Bar{T}\exp\left(C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau,\cdot) \right\Vert}_{H^{s+2}(\mathbb{T}^3)}d\tau \right)$$ for $\varepsilon>0$ satisfying [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"}. Our first step is to estimate ${\left\Vert \tilde{u}(t,\cdot) \right\Vert}_{H^{s+1}}$ on $[0,\bar{T}]$. Denoting $J=(I-\Delta)^{\frac{1}{2}}$, [\[eq:euler tu\]](#eq:euler tu){reference-type="eqref" reference="eq:euler tu"} gives $$\begin{split} \frac12 \frac{d}{dt} {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^2& = -\int J^{s+1}(\Bar{u}\cdot\nabla\tilde{u})\cdot J^{s+1}\tilde{u} -\int J^{s+1}(\tilde{u}\cdot\nabla\Bar{u})\cdot J^{s+1}\tilde{u}\\ &\quad -\int J^{s+1}(\tilde{u}\cdot\nabla\tilde{u})\cdot J^{s+1}\tilde{u}-\int J^{s+1}\nabla\tilde{p}\cdot J^{s+1}\tilde{u}\\ &=\mathrm{I}+ \mathrm{II}+ \mathrm{III}+ \mathrm{IV}. \end{split}$$ Since $\tilde{u}$ is divergence-free, $\mathrm{IV}=0$. 
Using the fact that $H^{s+1}{(\mathbb{T}^3)}$ is a Banach algebra, we have $$\mathrm{II}\lesssim {\left\Vert \tilde{u}\cdot\nabla\Bar{u} \right\Vert}_{H^{s+1}}{\left\Vert \tilde{u} \right\Vert}_{H^{s+1}} \lesssim {\left\Vert \Bar{u} \right\Vert}_{H^{s+2}}{\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^2.$$ For $\mathrm{I}$, we note that $\nabla\cdot \tilde{u}=0$ yields $\int \left(\Bar{u}\cdot \nabla J^{s+1}\tilde{u}\right) \cdot J^{s+1} \tilde{u}=0,$ so that $$\mathrm{I}= -\int \left(\left[J^{s+1}, \Bar{u}\cdot\right] \nabla\tilde{u}\right) \cdot J^{s+1}\tilde{u}.$$ Recalling the Kato-Ponce commutator estimate ([@KP88]): $$\label{est:K-P} {\left\Vert \left[J^{s'},f\right]g \right\Vert}_{L^2\left (\mathbb{T}^3\right)} \lesssim \left(\Vert f\Vert_{H^{\frac{5}{2}+\varepsilon'}(\mathbb T^3)}{\left\Vert J^{s'-1}g \right\Vert}_{L^2\left (\mathbb{T}^3\right)} + {\left\Vert J^{s'}f \right\Vert}_{L^2\left (\mathbb{T}^3\right)}\Vert g\Vert_{H^{\frac{3}{2}+\varepsilon'}(\mathbb T^3)}\right)$$ for any $\varepsilon'>0$ and $s'>0$, we obtain $$\mathrm{I}\lesssim {\left\Vert \left[J^{s+1}, \Bar{u}\cdot\right] \nabla\tilde{u} \right\Vert}_{L^2}{\left\Vert J^{s+1}\tilde{u} \right\Vert}_{L^2} \lesssim {\left\Vert \Bar{u} \right\Vert}_{H^{s+1}}{\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^2.$$ Replacing $\Bar{u}$ with $\tilde{u}$ and proceeding in the same way, we can also estimate $\mathrm{III}\lesssim {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^3.$ Combining all, we arrive at $$\frac{d}{dt} {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}} \le C \left({\left\Vert \Bar{u} \right\Vert}_{H^{s+2}}{\left\Vert \tilde{u} \right\Vert}_{H^{s+1}} + {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^2\right).$$ Introducing the quantity $y(t) = {\left\Vert \tilde{u}(t) \right\Vert}_{H^{s+1}}\exp(-C\int_0^t {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}d\tau ),$ we have $$\left\{ \begin{aligned} &\frac{d}{dt}y(t) \le C \exp\left(C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}d\tau 
\right) y^2(t), \\ & { y(0) } =\varepsilon {\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}} \end{aligned} \right.$$ on $[0,\Bar{T}]$. Then for $\varepsilon>0$ satisfying [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"} (by adjusting $C$ if necessary), we have $$y(t) \le \frac{\varepsilon{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}}{1-\varepsilon{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}Ct\exp\left(C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}d\tau \right)} \le 2\varepsilon {\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}},$$ and consequently on $[0,\Bar{T}]$, we have $$\label{est:tu} {\left\Vert \tilde{u}(t) \right\Vert}_{H^{s+1}} \le 2\varepsilon {\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}\exp\left(C\int_0^{\bar{T}} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}d\tau \right).$$ Next, we consider the equation of $\Tilde{u}^{D}= \tilde{u}-\Tilde{u}^{Lin}$: $$\left\{\begin{split} \partial_t \Tilde{u}^{D}+ \Bar{u}\cdot \nabla\Tilde{u}^{D}+ \Tilde{u}^{D}\cdot \nabla\Bar{u}+ \tilde{u}\cdot \nabla\tilde{u}+ \nabla(\tilde{p}-\Tilde{p}^{Lin})&=0, \\ \nabla\cdot \Tilde{u}^{D}&=0,\\ \Tilde{u}^{D}(t=0)&=0, \end{split} \right.$$ which gives $$\begin{split} \frac12 \frac{d}{dt} {\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s}^2& = -\int J^{s}(\Bar{u}\cdot\nabla\Tilde{u}^{D})\cdot J^{s}\Tilde{u}^{D} -\int J^{s}(\Tilde{u}^{D}\cdot\nabla\Bar{u})\cdot J^{s}\Tilde{u}^{D}\\ & \qquad -\int J^{s}(\tilde{u}\cdot\nabla\tilde{u})\cdot J^{s}\Tilde{u}^{D}-\int J^{s}\nabla(\tilde{p}-\Tilde{p}^{Lin})\cdot J^{s}\Tilde{u}^{D}\\ &=\mathrm{V}+ \mathrm{VI}+ \mathrm{VII}+ \mathrm{VIII}. 
\end{split}$$ To estimate $\mathrm{V}$, $\mathrm{VI}$, and $\mathrm{VIII}$, proceeding in the same way as $\mathrm{I}$, $\mathrm{II}$, and $\mathrm{IV}$, respectively, we have $$\mathrm{V}\lesssim {\left\Vert \Bar{u} \right\Vert}_{H^s}{\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s}^2, \qquad \mathrm{VI}\lesssim {\left\Vert \Bar{u} \right\Vert}_{H^{s+1}}{\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s}^2,\qquad \mathrm{VIII}=0.$$ For $\mathrm{VII}$, the fact that $H^s{(\mathbb{T}^3)}$ is a Banach algebra implies $$\mathrm{VII}\lesssim {\left\Vert \tilde{u}\cdot\nabla\tilde{u} \right\Vert}_{H^s} {\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s} \lesssim {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^2{\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s}.$$ Combining all, we obtain $$\frac{d}{dt} {\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s} \le C \left({\left\Vert \Bar{u} \right\Vert}_{H^{s+1}}{\left\Vert \Tilde{u}^{D} \right\Vert}_{H^s} + {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}^2\right).$$ Thus, for $\varepsilon>0$ satisfying [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"}, Grönwall's inequality and [\[est:tu\]](#est:tu){reference-type="eqref" reference="est:tu"} give us $$\begin{split} {\left\Vert \Tilde{u}^{D}(t) \right\Vert}_{H^s} &\le \exp \left(C\int_0^{t} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+1}}d\tau \right) C \int_0^t {\left\Vert \tilde{u}(\tau) \right\Vert}_{H^{s+1}}^2 d\tau \le 4\varepsilon^2{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}^2C\Bar{T}\exp\left(3C\int_0^{\Bar{T}} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}d\tau \right) \end{split}$$ on $[0,\Bar{T}]$. Since $\Tilde{u}^{D}=\tilde{u}-\Tilde{u}^{Lin}=u-u^{Lin}$, we are done. ◻ ## Principle 2 Let $u$ be the solution of [\[eq:euler just u\]](#eq:euler just u){reference-type="eqref" reference="eq:euler just u"}. 
We consider the following PDE: $$\label{eq:euler busu} \left\{ \begin{aligned} \partial_{t} \Bar{u}^*+ u \cdot \nabla\Bar{u}^*+\nabla\bar{p}^*-\nabla\Bar{u}^*\cdot (u-\Bar{u}^*) &= 0, \\ \nabla\cdot \Bar{u}^*&=0, \\ \Bar{u}^*(t=0)&=\Bar{u}_0, \end{aligned} \right.$$ where $(\nabla\Bar{u}^*\cdot (u-\Bar{u}^*))_{i}:=\sum_{j=1}^3\partial_{i}\Bar{u}^*_{j} (u-\Bar{u}^*)_{j}$ with $i=1,2,3$. (This is different from $(u-\Bar{u}^*)\cdot \nabla\Bar{u}^*$.) This time, we compare $\Bar{u}^*$ with $\Bar{u}$ of [\[eq:euler bu\]](#eq:euler bu){reference-type="eqref" reference="eq:euler bu"}. Note that [\[eq:euler bu\]](#eq:euler bu){reference-type="eqref" reference="eq:euler bu"} and [\[eq:euler busu\]](#eq:euler busu){reference-type="eqref" reference="eq:euler busu"} share the same initial data. We have the following: **Proposition 5**. *Under the above setting, there exists a constant $C>0$ such that for all $t\in[0,\Bar{T}]$ and $\varepsilon>0$ satisfying [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"}, we have $${\left\Vert \Bar{u}^*(t,\cdot)-\Bar{u}(t,\cdot) \right\Vert}_{H^{s}(\mathbb{T}^3)} \le \varepsilon{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}C\bar{T}\left(\sup_{t\in[0,\bar{T}]}{\left\Vert \Bar{u}(t) \right\Vert}_{H^{s+1}(\mathbb{T}^3)}\right)\exp\left(C\int_0^{\Bar{T}}\left(1+ {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}(\mathbb{T}^3)}\right)d\tau \right).$$* *Proof.* Denoting $\tilde{u}=u-\Bar{u}$ and $\bar{u}^{D*}=\Bar{u}^*-\Bar{u}$, we obtain the equation for $\bar{u}^{D*}$: $$\left\{\begin{split} &\partial_t \bar{u}^{D*} +\tilde{u}\cdot \nabla\bar{u}^{D*}+ \Bar{u}\cdot \nabla\bar{u}^{D*}+ \tilde{u}\cdot \nabla\Bar{u}-\nabla\bar{u}^{D*}\cdot \tilde{u}+ \nabla\bar{u}^{D*}\cdot \bar{u}^{D*}- \nabla\Bar{u}\cdot \tilde{u}+ \nabla\Bar{u}\cdot \bar{u}^{D*}+ \nabla(\bar{p}^*-\bar{p})=0, \\ &\nabla\cdot \bar{u}^{D*}=0,\\ &\bar{u}^{D*}(t=0)=0. 
\end{split} \right.$$ This gives $$\begin{split} \frac12 \frac{d}{dt} {\left\Vert \bar{u}^{D*} \right\Vert}_{H^s}^2& = -\int J^{s}(\tilde{u}\cdot\nabla\bar{u}^{D*})\cdot J^{s}\bar{u}^{D*} -\int J^{s}(\Bar{u}\cdot \nabla\bar{u}^{D*})\cdot J^{s}\bar{u}^{D*}-\int J^{s}(\tilde{u}\cdot \nabla\Bar{u})\cdot J^{s}\bar{u}^{D*}\\ &\quad +\int J^{s}(\nabla\bar{u}^{D*}\cdot \tilde{u})\cdot J^{s}\bar{u}^{D*} -\int J^{s}(\nabla\bar{u}^{D*}\cdot \bar{u}^{D*})\cdot J^{s}\bar{u}^{D*} +\int J^{s}(\nabla\Bar{u}\cdot \tilde{u})\cdot J^{s}\bar{u}^{D*}\\ &\quad -\int J^{s}(\nabla\Bar{u}\cdot \bar{u}^{D*})\cdot J^{s}\bar{u}^{D*} -\int J^{s}\nabla(\bar{p}^*-\bar{p})\cdot J^{s}\bar{u}^{D*}\\ &=\mathrm{I}+\mathrm{II}+\mathrm{III}+\mathrm{IV}+\mathrm{V}+ \mathrm{VI}+ \mathrm{VII}+ \mathrm{VIII}. \end{split}$$ For $\mathrm{I}$, $\mathrm{II}$, and $\mathrm{VIII}$, we proceed in the same way as the proof of Proposition [Proposition 3](#prop: perturbation){reference-type="ref" reference="prop: perturbation"} to obtain $$\mathrm{I}\lesssim {\left\Vert \tilde{u} \right\Vert}_{H^s}{\left\Vert \bar{u}^{D*} \right\Vert}_{H^s}^2, \qquad \mathrm{II}\lesssim {\left\Vert \Bar{u} \right\Vert}_{H^s}{\left\Vert \bar{u}^{D*} \right\Vert}_{H^s}^2, \qquad \mathrm{VIII}=0.$$ For $\mathrm{III}+\mathrm{VI}+\mathrm{VII}$, we use the fact that $H^{s+1}{(\mathbb{T}^3)}$ is a Banach algebra to have $$\begin{split} \mathrm{III}+\mathrm{VI}+\mathrm{VII} & \lesssim \left({\left\Vert \tilde{u} \right\Vert}_{H^s} + {\left\Vert \bar{u}^{D*} \right\Vert}_{H^s}\right){\left\Vert \Bar{u} \right\Vert}_{H^{s+1}}{\left\Vert \bar{u}^{D*} \right\Vert}_{H^s}. 
\end{split}$$ For $\mathrm{IV}$ and $\mathrm{V}$, $\nabla\cdot\bar{u}^{D*}=0$ and integration by parts give us $$\mathrm{IV}= -\int J^{s}( \bar{u}^{D*}_j \partial_i \tilde{u}_j) J^{s}\bar{u}^{D*}_i \lesssim {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}}{\left\Vert \bar{u}^{D*} \right\Vert}_{H^s}^2, \qquad \mathrm{V}= -\frac12\int J^{s}\partial_i |\bar{u}^{D*}|^2 J^{s}\bar{u}^{D*}_i=0.$$ Combining all, we arrive at $$\frac{d}{dt} {\left\Vert \bar{u}^{D*} \right\Vert}_{H^s} \le C \left({\left\Vert \tilde{u} \right\Vert}_{H^{s+1}} + {\left\Vert \Bar{u} \right\Vert}_{H^{s+1}} \right){\left\Vert \bar{u}^{D*} \right\Vert}_{H^s} + {\left\Vert \tilde{u} \right\Vert}_{H^{s+1}} {\left\Vert \Bar{u} \right\Vert}_{H^{s+1}}.$$ Thus, for $\varepsilon>0$ satisfying [\[condi: eps\]](#condi: eps){reference-type="eqref" reference="condi: eps"}, Grönwall's inequality and [\[est:tu\]](#est:tu){reference-type="eqref" reference="est:tu"} yield $$\begin{split} {\left\Vert \bar{u}^{D*}(t) \right\Vert}_{H^s} &\le \exp \left(C\int_0^{t} \left({\left\Vert \tilde{u}(\tau) \right\Vert}_{H^{s+1}} + {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+1}} \right)d\tau \right) C \int_0^t {\left\Vert \tilde{u}(\tau) \right\Vert}_{H^{s+1}} {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+1}} d\tau\\ &\le 2\varepsilon{\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}}C\bar{T}\left(\sup_{t\in[0,\bar{T}]}{\left\Vert \Bar{u}(t) \right\Vert}_{H^{s+1}}\right)\exp\left(2C\int_0^{\Bar{T}}\left(1+ {\left\Vert \Bar{u}(\tau) \right\Vert}_{H^{s+2}}\right)d\tau \right) \end{split}$$ on $[0,\Bar{T}]$. ◻ # Proof of the main result The aim of this section is to prove Theorem [Theorem 1](#thm: presence){reference-type="ref" reference="thm: presence"}. 
In the proof, we shall always assume that initial data $\omega^{\mathcal{I}}_0$ and $\omega^{\mathcal{S}}_0$ satisfy [\[def: iw_0\]](#def: iw_0){reference-type="eqref" reference="def: iw_0"} and [\[def: sw_0\]](#def: sw_0){reference-type="eqref" reference="def: sw_0"}, respectively. We note that by our definitions of $\omega^{\mathcal{I}}$ and $\omega^{\mathcal{S}}$, they solve the following coupled system: $(\omega^{\mathcal{I}},\omega^{\mathcal{S}})(t=0) = (\omega^{\mathcal{I}}_0,\omega^{\mathcal{S}}_0)$ and $$\label{eq:euler-l-vorticity-original} \left\{ \begin{aligned} \partial_{t} \omega^{\mathcal{I}}+ (u^{\mathcal{L}}+ u^{\mathcal{I}}+u^{\mathcal{S}}) \cdot \nabla\omega^{\mathcal{I}}&= \nabla(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}) \omega^{\mathcal{I}}, \\ \partial_{t} \omega^{\mathcal{S}}+ (u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}) \cdot \nabla\omega^{\mathcal{S}}&= \nabla(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}) \omega^{\mathcal{S}}, \\ (u^{\mathcal{I}},u^{\mathcal{S}}) &=\nabla\times (-\Delta)^{-1}(\omega^{\mathcal{I}},\omega^{\mathcal{S}}).\\ \end{aligned} \right.$$ Recalling the definition of $u^{\mathcal{L}}$ in [\[eq:u-L\]](#eq:u-L){reference-type="eqref" reference="eq:u-L"}, we have $\omega^{\mathcal{L}}:=\nabla\times u^{\mathcal{L}}=0$ in $|x|\le 10$. 
Thus, setting $\Bar{u}^*:=u^{\mathcal{L}}+u^{\mathcal{I}}$ and $u:=u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}$, and noticing $$\begin{split} \nabla\times \left( u \cdot \nabla\Bar{u}^*-\nabla\Bar{u}^*\cdot (u-\Bar{u}^*)\right) &= \nabla\times \left( \Bar{u}^*\cdot \nabla\Bar{u}^*+ (u-\Bar{u}^*) \cdot \nabla\Bar{u}^*-\nabla\Bar{u}^*\cdot (u-\Bar{u}^*)\right) \\ &=\nabla\times \left( (\bar{\omega}^* \times \Bar{u}^*) +\frac{1}{2}\nabla |\Bar{u}^*|^2 + \bar{\omega}^* \times (u-\Bar{u}^*)\right) \qquad (\bar{\omega}^*:= \nabla\times \Bar{u}^*) \\ &= \Bar{u}^*\cdot \nabla\bar{\omega}^*- \nabla\Bar{u}^*\bar{\omega}^* + (u-\Bar{u}^*) \cdot \nabla\bar{\omega}^*-\nabla(u-\Bar{u}^*) \bar{\omega}^* =u\cdot\nabla\bar{\omega}^*-\nabla u \,\bar{\omega}^*, \end{split}$$ we can check that $\Bar{u}^*$ and $u$ solve [\[eq:euler busu\]](#eq:euler busu){reference-type="eqref" reference="eq:euler busu"} and [\[eq:euler just u\]](#eq:euler just u){reference-type="eqref" reference="eq:euler just u"}, respectively. 
On the other hand, we shall introduce pseudo-solutions $(\omega^{\mathcal{I},P},\omega^{\mathcal{S},P})$ as the solutions of $$\label{eq:euler-l-vorticity-i} \left\{ \begin{aligned} \partial_{t} \omega^{\mathcal{I},P}+ (u^{\mathcal{L}}+ u^{\mathcal{I},P}) \cdot \nabla\omega^{\mathcal{I},P}&= \nabla(u^{\mathcal{L}}+u^{\mathcal{I},P}) \omega^{\mathcal{I},P}, \\ u^{\mathcal{I},P}&=\nabla\times (-\Delta)^{-1}\omega^{\mathcal{I},P}, \end{aligned} \right.$$ and $$\label{eq:euler-l-vorticity-s} \left\{ \begin{aligned} \partial_{t} \omega^{\mathcal{S},P}+ (u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}) &\cdot \nabla\omega^{\mathcal{S},P}= \nabla(u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}) \omega^{\mathcal{S},P}+ \nabla u^{\mathcal{S},P}\omega^{\mathcal{I},P}-u^{\mathcal{S},P}\cdot \nabla\omega^{\mathcal{I},P}, \\ u^{\mathcal{S},P}&=\nabla\times (-\Delta)^{-1}\omega^{\mathcal{S},P}, \end{aligned} \right.$$ with initial data $\omega^{\mathcal{I},P}(t=0) = \omega^{\mathcal{I}}_0$ and $\omega^{\mathcal{S},P}(t=0) = \omega^{\mathcal{S}}_0$. Then we can check that $\Bar{u}:=u^{\mathcal{L}}+u^{\mathcal{I},P}$ and $u:=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}$ are solutions of [\[eq:euler bu\]](#eq:euler bu){reference-type="eqref" reference="eq:euler bu"} and [\[eq:euler just u\]](#eq:euler just u){reference-type="eqref" reference="eq:euler just u"}, respectively. Note that $u=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}=u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}$, which implies $u^{\mathcal{I},P}-u^{\mathcal{I}}=u^{\mathcal{S}}-u^{\mathcal{S},P}$. Our strategy is first to analyze pseudo-solutions $(\omega^{\mathcal{I},P},\omega^{\mathcal{S},P})$, and then to compare them with real solutions $(\omega^{\mathcal{I}},\omega^{\mathcal{S}})$ using the principles introduced in the previous section. 
## Behavior of $\omega^{\mathcal{I},P}$ and $\nabla u^{\mathcal{I},P}$ Abusing the notation for simplicity, we denote pseudo-solutions $\omega^{\mathcal{I},P}$ and $u^{\mathcal{I},P}$ by $\omega^{\mathcal{I}}$ and $u^{\mathcal{I}}$, respectively. We note that the symmetry $\omega^{\mathcal{I}}(t,x_1,x_2,x_3)=(\omega^{\mathcal{I}}_{1}(t,x_2,x_3),0,0)$ propagates in time for the solutions to [\[eq:euler-l-vorticity-i\]](#eq:euler-l-vorticity-i){reference-type="eqref" reference="eq:euler-l-vorticity-i"}. Namely, the assumptions $$\label{condi: iw} \partial_1 \omega^{\mathcal{I}}_1 \equiv 0,\quad \omega^{\mathcal{I}}_2\equiv 0, \quad \omega^{\mathcal{I}}_3 \equiv 0$$ hold for all times if they are valid at $t=0$. This is not trivial since $u^{\mathcal{L}}$ depends on $x_1$. To see this, first note that [\[condi: iw\]](#condi: iw){reference-type="eqref" reference="condi: iw"} implies $$\label{condi: iu} u^{\mathcal{I}}_1 \equiv 0,\quad \partial_1u^{\mathcal{I}}_2\equiv 0, \quad \partial_1u^{\mathcal{I}}_3 \equiv 0$$ by the Biot-Savart law. Together these imply $\nabla u^{\mathcal{I}}\omega^{\mathcal{I}}\equiv 0$. Denoting $D_t=\partial_t + (u^{\mathcal{L}}+ u^{\mathcal{I}})\cdot \nabla$, from $$D_t(\omega^{\mathcal{I}}_2)=(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_2 - M\omega^{\mathcal{I}}_2, \quad D_t(\omega^{\mathcal{I}}_3)=(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_3,$$ we see that $(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})\equiv 0$ is consistent with $\omega^{\mathcal{I}}_2,\omega^{\mathcal{I}}_3$ being zero for all times. Lastly, $$\begin{aligned} D_t(&\omega^{\mathcal{I}}_1)=(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_1 + M\omega^{\mathcal{I}}_1,\qquad D_t(\partial_1\omega^{\mathcal{I}}_1)=\partial_1((\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_1), \end{aligned}$$ which shows that $\partial_1\omega^{\mathcal{I}}_1 \equiv 0$ propagates in time. 
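For completeness, here is the one-line verification that [\[condi: iw\]](#condi: iw){reference-type="eqref" reference="condi: iw"} and [\[condi: iu\]](#condi: iu){reference-type="eqref" reference="condi: iu"} together force the stretching term to vanish. With the convention $(\nabla u\,\omega)_k=\sum_{j}\omega_j\partial_j u_k$, we compute $$(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_k = \sum_{j=1}^3 \omega^{\mathcal{I}}_j\,\partial_j u^{\mathcal{I}}_k = \omega^{\mathcal{I}}_1\,\partial_1 u^{\mathcal{I}}_k = 0, \qquad k=1,2,3,$$ since $\omega^{\mathcal{I}}_2=\omega^{\mathcal{I}}_3=0$, while $\partial_1 u^{\mathcal{I}}_k=0$ for $k=2,3$ and $u^{\mathcal{I}}_1\equiv 0$.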
This shows that the equation for $\omega^{\mathcal{I}}_1$ is given by $$\label{eq:reduced iw} \partial_t \omega^{\mathcal{I}}_1 + (u^{\mathcal{I}}_2-Mx_2)\partial_2\omega^{\mathcal{I}}_1 +u^{\mathcal{I}}_3\partial_3\omega^{\mathcal{I}}_1 = M\omega^{\mathcal{I}}_1.$$ Comparing [\[eq:reduced iw\]](#eq:reduced iw){reference-type="eqref" reference="eq:reduced iw"} with the 2D Euler equation, which is globally well-posed for smooth solutions, we can check that there exists a unique global smooth solution $\omega^{\mathcal{I}}_1$ of [\[eq:reduced iw\]](#eq:reduced iw){reference-type="eqref" reference="eq:reduced iw"} with initial data given in [\[def: iw_0\]](#def: iw_0){reference-type="eqref" reference="def: iw_0"}. Furthermore, we can also observe that $\omega^{\mathcal{I}}_1$ retains the odd symmetry in both $x_2$ and $x_3$. To begin with, we record the temporal behavior of ${\left\Vert \omega^{\mathcal{I}}(t,\cdot) \right\Vert}_{L^{p}}$ for $p\in[1,\infty]$. **Lemma 6**. *For $t\in[0,\infty)$, $\omega^{\mathcal{I}}(t,\cdot)=(\omega^{\mathcal{I}}_1(t,\cdot),0,0)$ satisfies $$\label{est: iw} {\left\Vert \omega^{\mathcal{I}}_1(t,\cdot) \right\Vert}_{L^{p}} { = {\left\Vert \omega^{\mathcal{I}}_{0} \right\Vert}_{L^{p}}}e^{\frac{M(p-1)}{p}t}\;(1\le p <\infty)\quad \text{and} \quad {\left\Vert \omega^{\mathcal{I}}_1(t,\cdot) \right\Vert}_{L^{\infty}} {={\left\Vert \omega^{\mathcal{I}}_{0} \right\Vert}_{L^{\infty}}} e^{Mt}.$$* *Proof.* Taking the $L^2$ inner product of [\[eq:reduced iw\]](#eq:reduced iw){reference-type="eqref" reference="eq:reduced iw"} with $\omega^{\mathcal{I}}_1|\omega^{\mathcal{I}}_1|^{p-2}$, we have $$\begin{split} \frac{1}{p}\frac{d}{dt}{\left\Vert \omega^{\mathcal{I}}_1 \right\Vert}_{L^{p}}^p &= -\int (u^{\mathcal{I}}_2\partial_2\omega^{\mathcal{I}}_1 + u^{\mathcal{I}}_3\partial_3\omega^{\mathcal{I}}_1)\omega^{\mathcal{I}}_1|\omega^{\mathcal{I}}_1|^{p-2} +\int Mx_2\partial_2\omega^{\mathcal{I}}_1\omega^{\mathcal{I}}_1|\omega^{\mathcal{I}}_1|^{p-2} + 
M{\left\Vert \omega^{\mathcal{I}}_1 \right\Vert}_{L^{p}}^p. \end{split}$$ After integration by parts, the first integral vanishes since $u^{\mathcal{I}}_1=0$ and $\nabla\cdot u^{\mathcal{I}}=0$, and the second integral is equal to $-\frac{M}{p}{\left\Vert \omega^{\mathcal{I}}_1 \right\Vert}_{L^{p}}^p$. Thus we have $$\begin{split} \frac{d}{dt}{\left\Vert \omega^{\mathcal{I}}_1 \right\Vert}_{L^{p}}^p &= M(p-1){\left\Vert \omega^{\mathcal{I}}_1 \right\Vert}_{L^{p}}^p, \quad \mbox{ which implies} \qquad {\left\Vert \omega^{\mathcal{I}}_1(t,\cdot) \right\Vert}_{L^{p}} = e^{\frac{M(p-1)}{p}t}{\left\Vert \omega^{\mathcal{I}}_{1,0} \right\Vert}_{L^{p}}. \end{split}$$ Letting $p\rightarrow \infty$, we also obtain the $L^{\infty}$ estimate. ◻ Next, using [\[est: iw\]](#est: iw){reference-type="eqref" reference="est: iw"}, we estimate $\left\Vert{u^{\mathcal{I}}(t,\cdot)}\right\Vert_{H^s}$ with $s>\frac{5}{2}$ for $t\in[0,T_M]$, where $T_M=\frac{\log(M+1)}{M}$. This estimate will be used in Section [3.3](#comparison){reference-type="ref" reference="comparison"}. **Lemma 7**. *For $s>\frac52$ and $t\in[0,T_M]$, ${\left\Vert u^{\mathcal{I}}(t,\cdot) \right\Vert}_{H^{s}(\mathbb{T}^3)}\lesssim e^{ {C}Mt}$.* *Proof.* Fix $s>\frac{5}{2}$. Noticing that $u^{\mathcal{I}}$ solves $$\left\{ \begin{aligned} \partial_t u^{\mathcal{I}}+ (u^{\mathcal{L}}+ u^{\mathcal{I}}) \cdot \nabla u^{\mathcal{I}}+ \nabla p^{\mathcal{I}} = 0 , \\ \nabla\cdot u^{\mathcal{I}}= 0. \end{aligned} \right.$$ and denoting $J=(I-\Delta)^{\frac{1}{2}}$, we have $$\begin{split} \frac12 \frac{d}{dt} {\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s}^2& = -\int J^{s}(u^{\mathcal{L}}\cdot\nabla u^{\mathcal{I}})\cdot J^{s}u^{\mathcal{I}} -\int J^{s}(u^{\mathcal{I}}\cdot\nabla u^{\mathcal{I}})\cdot J^{s}u^{\mathcal{I}}-\int J^{s}\nabla p^{\mathcal{I}}\cdot J^{s}u^{\mathcal{I}}=\mathrm{I}+ \mathrm{II}+ \mathrm{III}. \end{split}$$ $\mathrm{III}=0$ follows from $\nabla\cdot u^{\mathcal{I}}=0$. 
Noticing $\int \left(u^{\mathcal{L}}\cdot \nabla J^{s}u^{\mathcal{I}}\right) \cdot J^{s} u^{\mathcal{I}}= \int \left(u^{\mathcal{I}}\cdot \nabla J^{s}u^{\mathcal{I}}\right) \cdot J^{s} u^{\mathcal{I}}=0,$ we obtain $$\mathrm{I}= -\int \left(\left[J^{s}, u^{\mathcal{L}}\cdot\right] \nabla u^{\mathcal{I}}\right) \cdot J^{s}u^{\mathcal{I}} \quad \text{and} \quad \mathrm{II}= -\int \left(\left[J^{s}, u^{\mathcal{I}}\cdot\right] \nabla u^{\mathcal{I}}\right) \cdot J^{s}u^{\mathcal{I}},$$ where we recall that $[\cdot, \cdot]$ denotes the commutator. Using $\left\Vert{u^{\mathcal{L}}}\right\Vert_{H^{s}(\mathbb T^3)}\lesssim M$ and the Sobolev embedding $H^s(\mathbb T^3) \hookrightarrow W^{1,\infty}(\mathbb T^3)$, we obtain $$\mathrm{I}\lesssim {\left\Vert \left[J^{s}, u^{\mathcal{L}}\cdot\right] \nabla u^{\mathcal{I}} \right\Vert}_{L^2}{\left\Vert J^{s}u^{\mathcal{I}} \right\Vert}_{L^2} \lesssim M{\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s}^2, \quad \mathrm{II}\lesssim {\left\Vert \left[J^{s}, u^{\mathcal{I}}\cdot\right] \nabla u^{\mathcal{I}} \right\Vert}_{L^2}{\left\Vert J^{s}u^{\mathcal{I}} \right\Vert}_{L^2} \lesssim {\left\Vert \nabla u^{\mathcal{I}} \right\Vert}_{L^{\infty}}{\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s}^2,$$ which lead to $$\frac{d}{dt} {\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s} \lesssim \left(M+ {\left\Vert \nabla u^{\mathcal{I}} \right\Vert}_{L^{\infty}}\right){\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s}.$$ According to Calderón-Zygmund theory, we have $${\left\Vert \nabla u^{\mathcal{I}} \right\Vert}_{L^{\infty}\left (\mathbb{T}^3\right)} \lesssim {\left\Vert \omega^{\mathcal{I}} \right\Vert}_{L^{\infty}\left (\mathbb{T}^3\right)}\log \left(10 + \frac{\left\Vert{u^{\mathcal{I}}}\right\Vert_{H^s(\mathbb T^3)}}{{\left\Vert \omega^{\mathcal{I}} \right\Vert}_{L^{\infty}\left (\mathbb{T}^3\right)}} \right)\lesssim e^{Mt}\log \left(10 + {\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s} \right),$$ where we used [\[est: 
iw){reference-type="eqref" reference="est: iw"} in the last inequality. This gives $$\frac{d}{dt}\log \left(10 + {\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s}\right) \lesssim M+ e^{Mt}\log \left(10 + {\left\Vert u^{\mathcal{I}} \right\Vert}_{H^s}\right).$$ Using Grönwall's inequality, we obtain the desired estimate on $[0,T_M]$. ◻ Our next aim is to estimate the maximum of $\nabla u^{\mathcal{I}}(t,\cdot)$ in a small region near the origin up to time $T_M=\frac{\log(M+1)}{M}$. The velocity $u^{\mathcal{I}}=(0,u^{\mathcal{I}}_2,u^{\mathcal{I}}_3)$ corresponding to $\omega^{\mathcal{I}}=(\omega^{\mathcal{I}}_1,0,0)$ has the explicit formula $$\label{formula: iu} \begin{split} u^{\mathcal{I}}_2(t,x_2,x_3)=\frac{1}{2\pi}\sum_{n=(n_2,n_3)\in \mathbb{Z}^2}\iint_{[-L,L)^2} \frac{-x_{3}+y_3+2Ln_3}{|x-y-2Ln|^2}\omega^{\mathcal{I}}_1(t,y_2,y_3)dy_2dy_3, \\ u^{\mathcal{I}}_3(t,x_2,x_3)=\frac{1}{2\pi}\sum_{n=(n_2,n_3)\in \mathbb{Z}^2}\iint_{[-L,L)^2} \frac{x_2-y_2-2Ln_2}{|x-y-2Ln|^2}\omega^{\mathcal{I}}_1(t,y_2,y_3)dy_2dy_3. \end{split}$$ Using this, we can prove a log-Lipschitz estimate for $u^{\mathcal{I}}$. **Lemma 8**. *Let $x,x'\in \mathbb{T}^2:=[-L,L)^2$. Then we have $$\label{est: iu} \left|u^{\mathcal{I}}(t,x)-u^{\mathcal{I}}(t,x')\right| \lesssim e^{tM}|x-x'|\left(1+\log \frac{3L}{|x-x'|} \right).$$* Note that the argument of the logarithm in [\[est: iu\]](#est: iu){reference-type="eqref" reference="est: iu"} is always greater than 1 because $|x-x'|\le 2\sqrt{2}L$. We omit the proof since it follows directly from the standard log-Lipschitz estimate for 2D Euler (see for instance [@MB]). 
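Although we omit the proof of Lemma 8, we record the standard splitting behind it as a sketch with non-optimized constants: writing $r:=|x-x'|$, the kernel in [\[formula: iu\]](#formula: iu){reference-type="eqref" reference="formula: iu"} is integrated directly near the singularity and via the mean value theorem in the far region, so that

```latex
% Sketch of the log-Lipschitz estimate, with r := |x - x'| (constants not optimized):
\left|u^{\mathcal{I}}(t,x)-u^{\mathcal{I}}(t,x')\right|
 \lesssim \left\Vert \omega^{\mathcal{I}}_1(t,\cdot) \right\Vert_{L^{\infty}}
 \left( \int_{|y-x|\le 3r} \frac{dy}{|y-x|}
      + \int_{|y-x'|\le 3r} \frac{dy}{|y-x'|}
      + r \int_{3r\le |y-x| \lesssim L} \frac{dy}{|y-x|^{2}} \right)
 \lesssim \left\Vert \omega^{\mathcal{I}}_1(t,\cdot) \right\Vert_{L^{\infty}}
   \, r \left( 1 + \log\frac{3L}{r} \right).
```

Combined with $\left\Vert \omega^{\mathcal{I}}_1(t,\cdot)\right\Vert_{L^{\infty}}\lesssim e^{Mt}$ from [\[est: iw\]](#est: iw){reference-type="eqref" reference="est: iw"}, this gives [\[est: iu\]](#est: iu){reference-type="eqref" reference="est: iu"}.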
Now we consider a characteristic curve $A^{\mathcal{I}}(t,a_2,a_3)=\left(A^{\mathcal{I}}_2(t,a_2,a_3),A^{\mathcal{I}}_3(t,a_2,a_3)\right):[0,\infty)\times \mathbb{T}^2\rightarrow \mathbb{T}^2$ of [\[eq:reduced iw\]](#eq:reduced iw){reference-type="eqref" reference="eq:reduced iw"} defined by $A^{\mathcal{I}}(0,a_2,a_3)=(a_2,a_3)$ and $$\label{def: iA} \left\{ \begin{aligned} &\frac{d}{dt}A^{\mathcal{I}}_2(t,a_2,a_3)=u^{\mathcal{I}}_2(t,A^{\mathcal{I}}(t,a_2,a_3))-MA^{\mathcal{I}}_2(t,a_2,a_3),\\ &\frac{d}{dt}A^{\mathcal{I}}_3(t,a_2,a_3)=u^{\mathcal{I}}_3(t,A^{\mathcal{I}}(t,a_2,a_3)). \end{aligned} \right.$$ Evaluating along this curve, we have from [\[eq:reduced iw\]](#eq:reduced iw){reference-type="eqref" reference="eq:reduced iw"} $$\label{value: iw} \omega^{\mathcal{I}}_1(t,A^{\mathcal{I}}_2(t,a),A^{\mathcal{I}}_3(t,a))=e^{Mt}\omega^{\mathcal{I}}_{1,0}(a_2,a_3).$$ We need the following two lemmas for $A^{\mathcal{I}}$. **Lemma 9**. *The determinant of $\nabla_aA^{\mathcal{I}}$ with $a=(a_2,a_3)$ satisfies $$\label{eq:detA} \det (\nabla_aA^{\mathcal{I}}) = e^{-Mt}.$$* *Proof.* Using $u^{\mathcal{I}}_1=0$ and $\nabla\cdot u^{\mathcal{I}}=0$, we compute $$\begin{aligned} \frac{d}{dt}\det (\nabla_aA^{\mathcal{I}})(t,a)&=(\partial_2 u^{\mathcal{I}}_2(t,A^{\mathcal{I}}(t,a)) -M + \partial_3u^{\mathcal{I}}_3(t,A^{\mathcal{I}}(t,a)))\det (\nabla_aA^{\mathcal{I}})(t,a) = -M\det (\nabla_aA^{\mathcal{I}})(t,a). \end{aligned}$$ Since $\det (\nabla_aA^{\mathcal{I}})(0,a)=1$, we obtain [\[eq:detA\]](#eq:detA){reference-type="eqref" reference="eq:detA"}. ◻ **Lemma 10**. *Let $1\le a_2,a_3\le 2$. 
Then for $0\le t\le T_M$, there exist constants $C_1,\,C_2 >0$, $C_3\ge1$ independent of $M$ such that $$\label{est:A_2} C_1 e^{-C_3 Mt}\le A^{\mathcal{I}}_2(t,a_2,a_3) \le C_2 e^{-C_3^{-1} Mt},$$ and $$\label{est:A_3} C_1 \le A^{\mathcal{I}}_3(t,a_2,a_3) \le C_2.$$* *Proof.* Note that $u^{\mathcal{I}}_2(t,0,A^{\mathcal{I}}_3(t,a_2,a_3))=0$ for all $t$ by odd symmetry of $\omega^{\mathcal{I}}_1$, which gives $$\label{equality: iA_2} \frac{d}{dt}A^{\mathcal{I}}_2(t,a_2,a_3)=u^{\mathcal{I}}_2(t,A^{\mathcal{I}}(t,a_2,a_3))-u^{\mathcal{I}}_2(t,0,A_3^{\mathcal{I}}(t,a_2,a_3))-MA^{\mathcal{I}}_2(t,a_2,a_3).$$ To begin with, we show the upper bound of $A^{\mathcal{I}}_2(t,a_2,a_3)$ in [\[est:A_2\]](#est:A_2){reference-type="eqref" reference="est:A_2"}. Applying [\[est: iu\]](#est: iu){reference-type="eqref" reference="est: iu"} and noticing $A^{\mathcal{I}}_2(t,a_2,a_3)>0$, we have $$\frac{d}{dt}A^{\mathcal{I}}_2(t,a_2,a_3) \le C e^{tM} A_2^{\mathcal{I}}(t,a_2,a_3)\left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_2,a_3)}\right)-MA^{\mathcal{I}}_2(t,a_2,a_3),$$ which leads to $$\frac{d}{dt}\left(\log A^{\mathcal{I}}_2(t,a_2,a_3)-(1+\log 3L)\right)\le -C e^{tM}\left(\log A^{\mathcal{I}}_2(t,a_2,a_3)-(1+\log 3L)\right)-M.$$ From $$\frac{d}{dt}\left(\left(\log A^{\mathcal{I}}_2(t,a_2,a_3)-(1+\log 3L)\right)\exp{\left( \int_{0}^{t} Ce^{sM} ds \right)} \right) \le -M\exp{\left( \int_{0}^{t} Ce^{sM} ds \right)} \le -M,$$ we see that for $0\le t \le T_M$, $$\label{est: A_2 sample} \begin{split} \log A^{\mathcal{I}}_2(t,a_2,a_3) &\le 1+\log 3L + \exp\left(-\int_{0}^{t}Ce^{sM}ds\right) \left(\log a_2 -(1+\log 3L) -Mt\right) \\ &= 1+\log 3L+ \exp \left(\frac{C(1-e^{Mt})}{M}\right)\left(\log a_2 -(1+\log 3L)-Mt \right) \\ &\le 1+\log 3L+ \exp(-C)\left(\log a_2 -(1+\log3L)-Mt \right), \end{split}$$ where we used $\log a_2 -(1+\log 3L)-Mt<0$ and $e^{Mt}-1\le M$ for $t\le T_M=\frac{\log(M+1)}{M}$. 
Since $a_2\le 2$, we arrive at $$A^{\mathcal{I}}_2(t,a_2,a_3) \lesssim \exp \left(\exp(-C)(\log 2 -(1+\log 3L) -Mt)\right)$$ for $t\in[0,T_M]$. We take $C_3:=\exp(C)$, which satisfies $C_3\ge1$. Now we prove the lower bound of $A^{\mathcal{I}}_2(t,a_2,a_3)$ in [\[est:A_2\]](#est:A_2){reference-type="eqref" reference="est:A_2"}. Recalling [\[equality: iA_2\]](#equality: iA_2){reference-type="eqref" reference="equality: iA_2"} and [\[est: iu\]](#est: iu){reference-type="eqref" reference="est: iu"}, we have $$\begin{split} -\frac{d}{dt}A^{\mathcal{I}}_2(t,a_2,a_3) &=-u^{\mathcal{I}}_2(t,A^{\mathcal{I}}(t,a_2,a_3))+u^{\mathcal{I}}_2(t,0,A_3^{\mathcal{I}}(t,a_2,a_3))+MA^{\mathcal{I}}_2(t,a_2,a_3) \\ &\le C e^{tM} A_2^{\mathcal{I}}(t,a_2,a_3)\left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_2,a_3)}\right)+MA^{\mathcal{I}}_2(t,a_2,a_3), \end{split}$$ which yields $$\frac{d}{dt}\left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_2,a_3)}\right) \le C e^{tM} \left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_2,a_3)}\right)+M.$$ Noticing $$\frac{d}{dt}\left(\left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_2,a_3)}\right)\exp{\left(- \int_{0}^{t} C e^{sM}ds \right)} \right) \le M\exp{\left( -\int_{0}^{t} C e^{sM} ds \right)} \le M,$$ we see that for $0\le t \le T_M$, $$\label{est: A_2 opposite sample} \begin{split} \log \frac{3L}{A_2^{\mathcal{I}}(t,a_2,a_3)}&\le -1+ \exp\left(\int_{0}^{t} C e^{sM}ds\right)\left(1+\log \frac{3L}{a_2} + Mt \right) \\ &= -1+ \exp \left(\frac{C(e^{Mt}-1)}{M}\right)\left(1+\log \frac{3L}{a_2}+Mt \right) \\ &\le -1+ C_3\left(1+\log \frac{3L}{a_2}+Mt \right), \end{split}$$ where we again used $e^{Mt}-1\le M$ for $t\le T_M$. 
Since $a_2\ge1$, we have $A^{\mathcal{I}}_2(t,a_2,a_3) \ge 3L\exp \left(1-C_3(1+\log 3L +Mt) \right).$ To obtain the bounds of $A^{\mathcal{I}}_3(t,a_2,a_3)$ in [\[est:A_3\]](#est:A_3){reference-type="eqref" reference="est:A_3"}, we note that $u^{\mathcal{I}}_3(t,A^{\mathcal{I}}_2(t,a_2,a_3),0)=0$ (by the odd symmetry of $\omega^{\mathcal{I}}_1$) and $A^{\mathcal{I}}_3(t,a_2,a_3)>0$ for all $t>0$, and proceed as in the case of $A^{\mathcal{I}}_2(t,a_2,a_3)$. ◻ Henceforth, let $C_i$ $(i=1,2,3)$ denote the constants in Lemma [Lemma 10](#lemma: iA){reference-type="ref" reference="lemma: iA"}. Now we are ready to estimate ${\left\Vert \nabla u^{\mathcal{I}}(t,\cdot) \right\Vert}_{L^{\infty}}$ near the origin. **Lemma 11**. *Let $\delta:=\frac{C_1}{10(M+1)^{C_3}}$. Then for $t\in [0,T_M]$, we have $$\label{est: nabla iu} \left\Vert{\nabla u^{\mathcal{I}}(t,\cdot)}\right\Vert_{L^{\infty}(B_0(\delta))} \lesssim e^{-C_3^{-1}Mt}.$$* *Remark 12*. We note that $\delta\le \frac{C_1 e ^{-C_3Mt}}{10}$ for $0\le t \le T_M$ by the definitions of $\delta$ and $T_M$. *Proof.* Let $x=(x_2,x_3)\in B_0(\delta)$. We recall the explicit formulas $$\label{formula: partial2 iu2} \begin{split} &\partial_2 u^{\mathcal{I}}_2 (t,x_2,x_3)= -\partial_3 u^{\mathcal{I}}_3(t,x_2,x_3)\\ &\qquad=\frac{1}{\pi}\sum_{n=(n_2,n_3)\in \mathbb{Z}^2}p.v.\iint_{[-L,L)^2} \frac{(x_2-y_2-2Ln_2)(x_3-y_3-2Ln_3)}{|x-y-2Ln|^4}\omega^{\mathcal{I}}_1(t,y_2,y_3)dy_2dy_3. \end{split}$$ Moreover, since $\omega^{\mathcal{I}}_1=0$ in $[0,T_M]\times B_0(\delta)$ by Lemma [Lemma 10](#lemma: iA){reference-type="ref" reference="lemma: iA"}, $$\label{formula: partial3 iu2} \begin{split} &\partial_2 u^{\mathcal{I}}_3 (t,x_2,x_3) = -\partial_3 u^{\mathcal{I}}_2(t,x_2,x_3) \\ &\qquad=\frac{1}{2\pi}\sum_{n=(n_2,n_3)\in \mathbb{Z}^2}p.v.\iint_{[-L,L)^2} \frac{(x_2-y_2-2Ln_2)^2-(x_3-y_3-2Ln_3)^2}{|x-y-2Ln|^4}\omega^{\mathcal{I}}_1(t,y_2,y_3)dy_2dy_3 \end{split}$$ in $[0,T_M]\times B_0(\delta)$. (See e.g. 
[@MB] for derivations of [\[formula: partial2 iu2\]](#formula: partial2 iu2){reference-type="eqref" reference="formula: partial2 iu2"} and [\[formula: partial3 iu2\]](#formula: partial3 iu2){reference-type="eqref" reference="formula: partial3 iu2"}.) To begin with, we estimate $\left\Vert{\partial_2u^{\mathcal{I}}_2}\right\Vert_{L^{\infty}(B_0(\delta))} =\left\Vert{\partial_3u^{\mathcal{I}}_3}\right\Vert_{L^{\infty}(B_0(\delta))}$. By the odd symmetry of $\omega^{\mathcal{I}}_1$ in both $x_2$ and $x_3$, [\[formula: partial2 iu2\]](#formula: partial2 iu2){reference-type="eqref" reference="formula: partial2 iu2"} yields $$\begin{split} &\partial_2 u^{\mathcal{I}}_2 (t,x_2,x_3) \\ &\approx \sum_{n \in \mathbb{Z}^2}\iint_{\left\{0\le y_2,y_3\le L\right\}} \left[\frac{(x_2-y_2-2Ln_2)(x_3-y_3-2Ln_3)}{|x-y-2Ln|^4} -\frac{(x_2+y_2-2Ln_2)(x_3-y_3-2Ln_3)}{\left((x_2+y_2-2Ln_2)^2 + (x_3-y_3-2Ln_3)^2 \right)^2} \right. \\ &\left.\quad -\frac{(x_2-y_2-2Ln_2)(x_3+y_3-2Ln_3)}{\left((x_2-y_2-2Ln_2)^2 + (x_3+y_3-2Ln_3)^2 \right)^2}+\frac{(x_2+y_2-2Ln_2)(x_3+y_3-2Ln_3)}{\left((x_2+y_2-2Ln_2)^2 + (x_3+y_3-2Ln_3)^2 \right)^2} \right]\omega^{\mathcal{I}}_1(t,y_2,y_3)dy_2dy_3 \\ &= \mathrm{I}+ \mathrm{II}+ \mathrm{III}+ \mathrm{IV}. \end{split}$$ To estimate $\mathrm{I}$, we divide it into $\mathrm{I}_0$ and $\mathrm{I}_{\neq 0}$, which correspond to the term with $n=(0,0)$ and the sum of all other terms with $n\neq (0,0)$, respectively. 
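The four terms $\mathrm{I}$--$\mathrm{IV}$ above arise from folding each periodized integral onto the quarter period. Schematically, for a kernel $K$ and $\omega^{\mathcal{I}}_1$ odd in both variables,

```latex
% Folding [-L,L)^2 onto the quarter period via the odd symmetry of \omega^I_1:
\iint_{[-L,L)^2} K(x_2-y_2,\,x_3-y_3)\,\omega^{\mathcal{I}}_1(t,y_2,y_3)\,dy_2dy_3
 = \iint_{\{0\le y_2,y_3\le L\}}
   \big[ K(x_2-y_2,\,x_3-y_3) - K(x_2+y_2,\,x_3-y_3) \\
 \qquad\qquad - K(x_2-y_2,\,x_3+y_3) + K(x_2+y_2,\,x_3+y_3) \big]\,
   \omega^{\mathcal{I}}_1(t,y_2,y_3)\,dy_2dy_3,
```

which follows from the substitutions $y_2\mapsto -y_2$ and $y_3\mapsto -y_3$ together with $\omega^{\mathcal{I}}_1(t,-y_2,y_3)=-\omega^{\mathcal{I}}_1(t,y_2,y_3)$ and $\omega^{\mathcal{I}}_1(t,y_2,-y_3)=-\omega^{\mathcal{I}}_1(t,y_2,y_3)$.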
For $\mathrm{I}_0$, we make a change of variables $(y_2,y_3)\mapsto (a_2,a_3)$ by $y=A^{\mathcal{I}}(t,a)$ and use [\[eq:detA\]](#eq:detA){reference-type="eqref" reference="eq:detA"} to have $${\mathrm{I}_0} \approx \iint_{\left\{0\le a_2,a_3\le L\right\}} \frac{(x_2-A^{\mathcal{I}}_2(t,a))(x_3-A^{\mathcal{I}}_3(t,a))}{|x-A^{\mathcal{I}}(t,a)|^4}\omega^{\mathcal{I}}_1(t,A^{\mathcal{I}}_2(t,a),A^{\mathcal{I}}_3(t,a))e^{-Mt}da_2da_3.$$ By [\[value: iw\]](#value: iw){reference-type="eqref" reference="value: iw"} and the assumption that $\omega^{\mathcal{I}}_{1,0}$ is supported on $\left\{1\le a_2,a_3\le 2 \right\}$ (see [\[def: iw_0\]](#def: iw_0){reference-type="eqref" reference="def: iw_0"}), we obtain $${\mathrm{I}_0} \le {\left\Vert \omega^{\mathcal{I}}_{1,0} \right\Vert}_{L^{\infty}} \iint_{\left\{1\le a_2,a_3\le 2\right\}} \frac{\left|(x_2-A^{\mathcal{I}}_2(t,a))(x_3-A^{\mathcal{I}}_3(t,a))\right|}{|x-A^{\mathcal{I}}(t,a)|^4}da_2da_3.$$ Since $x\in B_0(\delta)$, Lemma [Lemma 10](#lemma: iA){reference-type="ref" reference="lemma: iA"} and Remark [Remark 12](#rmk: dlt){reference-type="ref" reference="rmk: dlt"} imply $$\label{est: x_2,x_3-iA_2,3} \left\{ \begin{aligned} \frac{9C_1e^{-C_3Mt}}{10} \le A^{\mathcal{I}}_2(t,a)-|x_2| &\le A^{\mathcal{I}}_2(t,a)-x_2 \le A^{\mathcal{I}}_2(t,a)+ |x_2| \le \frac{11C_2e^{-C_3^{-1}Mt}}{10}, \\ \frac{9C_1}{10}\le A^{\mathcal{I}}_3(t,a)-|x_3| &\le A^{\mathcal{I}}_3(t,a)-x_3 \le A^{\mathcal{I}}_3(t,a)+|x_3| \le \frac{11C_2}{10}. \end{aligned} \right.$$ Consequently, $$\frac{\left|(x_2-A^{\mathcal{I}}_2(t,a))(x_3-A^{\mathcal{I}}_3(t,a))\right|}{|x-A^{\mathcal{I}}(t,a)|^4} \le \frac{\left|(x_2-A^{\mathcal{I}}_2(t,a))(x_3-A^{\mathcal{I}}_3(t,a))\right|}{(x_3-A^{\mathcal{I}}_3(t,a))^4} \lesssim e^{-C_3^{-1}Mt},$$ which gives ${\mathrm{I}_0} \lesssim e^{-C_3^{-1}Mt}$. 
To estimate $\mathrm{I}_{\neq 0}$, denoting $\tilde{n}:=(-n_2,n_3)$, we compute $$\begin{split} &\frac{(x_2-y_2-2Ln_2)(x_3-y_3-2Ln_3)}{|x-y-2Ln|^4} +\frac{(x_2-y_2+2Ln_2)(x_3-y_3-2Ln_3)}{|x-y-2L\tilde{n}|^4} \\ &=\frac{(x_3-y_3-2Ln_3)\left((x_2-y_2)(|x-y-2L\tilde{n}|^4+|x-y-2Ln|^4)-2Ln_2(|x-y-2L\tilde{n}|^4-|x-y-2Ln|^4)\right)}{|x-y-2Ln|^4|x-y-2L\tilde{n}|^4}. \end{split}$$ Since $|x-y-2L\tilde{n}|^4-|x-y-2Ln|^4=16Ln_2(x_2-y_2)\left((x_2-y_2)^2+4L^2n_2^2 + (x_3-y_3-2Ln_3)^2\right)$, and since $|x-y|\le |x|+|y|\le \delta+ \sqrt{2}L$ implies $$\label{lowerbound: x-y-2Ln} |x-y-2Ln|,\,|x-y-2L\tilde{n}|\gtrsim |n|,$$ we have $$\left|\frac{(x_2-y_2-2Ln_2)(x_3-y_3-2Ln_3)}{|x-y-2Ln|^4} +\frac{(x_2-y_2+2Ln_2)(x_3-y_3-2Ln_3)}{|x-y-2L\tilde{n}|^4}\right| \lesssim\frac{|x_2-y_2|}{|n|^3}.$$ Hence, making again the change of variables $(y_2,y_3)\mapsto (a_2,a_3)$ by $y=A^{\mathcal{I}}(t,a)$ and using [\[est: x_2,x_3-iA_2,3\]](#est: x_2,x_3-iA_2,3){reference-type="eqref" reference="est: x_2,x_3-iA_2,3"}, we proceed by the same argument to obtain ${\mathrm{I}_{\neq 0}} \lesssim e^{-C_3^{-1}Mt}.$ In a similar manner, we can also show that all of $\mathrm{II},\, \mathrm{III},$ and $\mathrm{IV}$ have the same upper bound, which implies $\left\Vert{\partial_2u^{\mathcal{I}}_2}\right\Vert_{L^{\infty}(B_0(\delta))} \lesssim e^{-C_3^{-1}Mt}.$ Next, we estimate $\left\Vert{\partial_3u^{\mathcal{I}}_2}\right\Vert_{L^{\infty}(B_0(\delta))}\left(=\left\Vert{\partial_2u^{\mathcal{I}}_3}\right\Vert_{L^{\infty}(B_0(\delta))}\right)$. 
By the odd symmetry of $\omega^{\mathcal{I}}_1$ in both $x_2$ and $x_3$, [\[formula: partial3 iu2\]](#formula: partial3 iu2){reference-type="eqref" reference="formula: partial3 iu2"} yields $$\begin{split} &\partial_3 u^{\mathcal{I}}_2 (t,x_2,x_3)\\ &\approx\sum_{n \in \mathbb{Z}^2}\iint_{\left\{0\le y_2,y_3\le L\right\}} \left[\frac{(x_2-y_2-2Ln_2)^2-(x_3-y_3-2Ln_3)^2}{|x-y-2Ln|^4} -\frac{(x_2+y_2-2Ln_2)^2-(x_3-y_3-2Ln_3)^2}{\left((x_2+y_2-2Ln_2)^2 + (x_3-y_3-2Ln_3)^2 \right)^2} \right. \\ &\left.\quad -\frac{(x_2-y_2-2Ln_2)^2-(x_3+y_3-2Ln_3)^2}{\left((x_2-y_2-2Ln_2)^2 + (x_3+y_3-2Ln_3)^2 \right)^2}+\frac{(x_2+y_2-2Ln_2)^2-(x_3+y_3-2Ln_3)^2}{\left((x_2+y_2-2Ln_2)^2 + (x_3+y_3-2Ln_3)^2 \right)^2} \right]\omega^{\mathcal{I}}_1(t,y_2,y_3)dy_2dy_3 \\ &= \mathrm{I}+ \mathrm{II}+ \mathrm{III}+ \mathrm{IV} \end{split}$$ for $(t,x)\in [0,T_M] \times B_0(\delta)$. To bound $\mathrm{I}+\mathrm{II}$, we again divide it into $(\mathrm{I}+\mathrm{II})_0$ and $(\mathrm{I}+\mathrm{II})_{\neq 0}$, which correspond to the term with $n=(0,0)$ and the sum of all other terms with $n\neq (0,0)$, respectively. For $(\mathrm{I}+\mathrm{II})_0$, we again make a change of variables $(y_2,y_3)\mapsto (a_2,a_3)$ by $y=A^{\mathcal{I}}(t,a)$ and use [\[eq:detA\]](#eq:detA){reference-type="eqref" reference="eq:detA"} to have $$\begin{split} {(\mathrm{I}+\mathrm{II})_0}= \frac{1}{2\pi} \iint_{\left\{0\le a_2,a_3\le L\right\}} K_{x_2,x_3}(A^{\mathcal{I}}_2(t,a),A^{\mathcal{I}}_3(t,a))\,\omega^{\mathcal{I}}_1(t,A^{\mathcal{I}}_2(t,a),A^{\mathcal{I}}_3(t,a))e^{-Mt}da_2da_3, \end{split}$$ where $$K_{x_2,x_3}(z_2,z_3) = \frac{\left((x_2-z_2)^2-(x_3-z_3)^2\right)\left((x_2+z_2)^2 + (x_3-z_3)^2 \right)^2-\left( (x_2+z_2)^2-(x_3-z_3)^2\right)|x-z|^4}{|x-z|^4\left((x_2+z_2)^2 + (x_3-z_3)^2 \right)^2}$$ for $z=(z_2,z_3)$. 
From [\[value: iw\]](#value: iw){reference-type="eqref" reference="value: iw"} and the assumption that $\omega^{\mathcal{I}}_{1,0}$ is supported on $\left\{1\le a_2,a_3\le 2 \right\}$, we have $${(\mathrm{I}+\mathrm{II})_0} \le {\left\Vert \omega^{\mathcal{I}}_{1,0} \right\Vert}_{L^{\infty}} \iint_{\left\{1\le a_2,a_3\le 2\right\}} \left| K_{x_2,x_3}(A^{\mathcal{I}}_2(t,a),A^{\mathcal{I}}_3(t,a)) \right| da_2da_3.$$ Note that the numerator of $K_{x_2,x_3}(z_2,z_3)$ can be written as $$\label{est: kernel K} \sum_{\substack{\alpha_1,\alpha_2,\alpha_4\ge0,\ \alpha_3\ge1 \\ \alpha_1+\alpha_2+\alpha_3+\alpha_4=6}} C_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4)} x_2^{\alpha_1}x_3^{\alpha_2}z_2^{\alpha_3}z_3^{\alpha_4}$$ for some constants $C_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4)}$. Moreover, the denominator of $K_{x_2,x_3}(z_2,z_3)$ is bounded below by $(x_3-z_3)^8$. Thus, using Lemma [Lemma 10](#lemma: iA){reference-type="ref" reference="lemma: iA"} and Remark [Remark 12](#rmk: dlt){reference-type="ref" reference="rmk: dlt"}, we have $$\begin{split} \left|K_{x_2,x_3}(A^{\mathcal{I}}_2(t,a),A^{\mathcal{I}}_3(t,a))\right| &\lesssim \sum_{\substack{\alpha_1,\alpha_2,\alpha_4\ge0,\ \alpha_3\ge1 \\ \alpha_1+\alpha_2+\alpha_3+\alpha_4=6}} \frac{\left|x_2^{\alpha_1}x_3^{\alpha_2}(A^{\mathcal{I}}_2)^{\alpha_3}(A^{\mathcal{I}}_3)^{\alpha_4}\right|}{(x_3-A^{\mathcal{I}}_3(t,a))^8} \\ &\lesssim \sum_{\substack{\alpha_1,\alpha_2,\alpha_4\ge0,\ \alpha_3\ge1 \\ \alpha_1+\alpha_2+\alpha_3+\alpha_4=6}} \left(e^{-C_3Mt}\right)^{\alpha_1}\left(e^{-C_3Mt}\right)^{\alpha_2} \left(e^{-C_3^{-1}Mt}\right)^{\alpha_3} \lesssim e^{-C_3^{-1}Mt} \end{split}$$ for $t\in[0,T_M]$. In the last line, we used $\alpha_3\ge1$. This gives ${(\mathrm{I}+\mathrm{II})_0} \lesssim e^{-C_3^{-1}Mt}$. 
For $(\mathrm{I}+\mathrm{II})_{\neq 0}$, we recall [\[est: kernel K\]](#est: kernel K){reference-type="eqref" reference="est: kernel K"} and [\[lowerbound: x-y-2Ln\]](#lowerbound: x-y-2Ln){reference-type="eqref" reference="lowerbound: x-y-2Ln"}, which give $$\begin{split} \left|K_{x_2-2Ln_2,x_3-2Ln_3}(y_2,y_3)\right| &\lesssim \sum_{\substack{\alpha_1,\alpha_2,\alpha_4\ge0,\ \alpha_3\ge1 \\ \alpha_1+\alpha_2+\alpha_3+\alpha_4=6}} \frac{\left|(x_2-2Ln_2)^{\alpha_1}(x_3-2Ln_3)^{\alpha_2}y_2^{\alpha_3}y_3^{\alpha_4}\right|}{|n|^8} \lesssim \frac{y_2}{|n|^3}, \end{split}$$ where in the last inequality, we used $\alpha_1+\alpha_2\le 5$ and $\alpha_3 \ge 1$. Hence proceeding as before, we obtain $(\mathrm{I}+\mathrm{II})_{\neq 0}\lesssim e^{-C_3^{-1}Mt}.$ In the same way, we can also show that $\mathrm{III}+ \mathrm{IV}\lesssim e^{-C_3^{-1}Mt}$, and consequently, we obtain $\left\Vert{\partial_3u^{\mathcal{I}}_2}\right\Vert_{L^{\infty}(B_0(\delta))} \lesssim e^{-C_3^{-1}Mt}.$  ◻ ## Linearization of the equation for $\omega^{\mathcal{S},P}$ {#section: linearization} Abusing notation as in the previous section, we denote the pseudo-solutions $\omega^{\mathcal{S},P}$ and $u^{\mathcal{S},P}$ by $\omega^{\mathcal{S}}$ and $u^{\mathcal{S}}$, respectively. 
Dropping nonlinear terms in [\[eq:euler-l-vorticity-s\]](#eq:euler-l-vorticity-s){reference-type="eqref" reference="eq:euler-l-vorticity-s"}, we obtain the following linearized equation of $\omega^{\mathcal{S}}$: $$\label{eq: linearized sw with iw} \partial_t\omega^{\mathcal{S}}+ (u^{\mathcal{L}}+ u^{\mathcal{I}})\cdot \nabla\omega^{\mathcal{S}}= \nabla(u^{\mathcal{L}}+ u^{\mathcal{I}}) \omega^{\mathcal{S}}+ \nabla u^{\mathcal{S}}\omega^{\mathcal{I}}-u^{\mathcal{S}}\cdot \nabla\omega^{\mathcal{I}}.$$ Now recalling [\[def: iA\]](#def: iA){reference-type="eqref" reference="def: iA"} and abusing the notation, we denote a characteristic curve $$A^{\mathcal{I}}(t,a_1,a_2,a_3)=\left(A^{\mathcal{I}}_1(t,a_1,a_2,a_3),A^{\mathcal{I}}_2(t,a_1,a_2,a_3),A^{\mathcal{I}}_3(t,a_1,a_2,a_3)\right):[0,\infty)\times \mathbb{T}^3\rightarrow \mathbb{T}^3$$ defined by $A^{\mathcal{I}}(0,a_1,a_2,a_3)=(a_1,a_2,a_3)$ and $$\label{def: iA-extended} \left\{ \begin{aligned} &\frac{d}{dt}A^{\mathcal{I}}_1(t,a_1,a_2,a_3)=MA^{\mathcal{I}}_1(t,a_1,a_2,a_3),\\ &\frac{d}{dt}A^{\mathcal{I}}_2(t,a_1,a_2,a_3)=u^{\mathcal{I}}_2(t,A_2^{\mathcal{I}}(t,a_1,a_2,a_3),A_3^{\mathcal{I}}(t,a_1,a_2,a_3))-MA^{\mathcal{I}}_2(t,a_1,a_2,a_3),\\ &\frac{d}{dt}A^{\mathcal{I}}_3(t,a_1,a_2,a_3)=u^{\mathcal{I}}_3(t,A_2^{\mathcal{I}}(t,a_1,a_2,a_3),A_3^{\mathcal{I}}(t,a_1,a_2,a_3)). \end{aligned} \right.$$ Then we can show the following. **Lemma 13**. *Let $\delta=\frac{C_1}{10(M+1)^{C_3}}$ as in Lemma [Lemma 11](#lemma: nb iu){reference-type="ref" reference="lemma: nb iu"}. 
Then for $a=(a_1,a_2,a_3)\in \mathbb{T}^3$ and $\ell \le \min\left\{\frac{\delta}{M+1}, (3Le)^{1-C_3}\delta^{C_3}\right\}$, the following statements hold:* - *if $a\in B_{0}(\ell)$, then $A^{\mathcal{I}}(t,a)\in B_{0}(\delta)$ for $t\in[0,T_M]$,* - *if $a\in \mathbb{T}^3\backslash B_{0}(\ell)$, then $A^{\mathcal{I}}(t,a)\in \mathbb{T}^3\backslash B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)$ for $t\in[0,T_M]$, where $C_4>0$ is a constant.* *Proof.* Let us prove the first statement. Suppose that $a\in B_{0}(\ell)$. Then $|a_1|\le \ell$ implies that $\left|A^{\mathcal{I}}_1(t,a_1,a_2,a_3)\right| = |a_1| e^{Mt}\le \ell(M+1) \le \delta$ for $t\in[0,T_M]$. For $A^{\mathcal{I}}_2(t,a)$ and $A^{\mathcal{I}}_3(t,a)$, we only need to consider the case when $a_2,a_3\ge 0$ by the odd symmetry of $\omega^{\mathcal{I}}$ in both $x_2$ and $x_3$. We claim that if $0\le a_2,a_3\le \ell$, then $$\label{est:A_2-small} 0\le A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \le \delta e^{-C_3^{-1} Mt},$$ and $$\label{est:A_3-small} 0\le A^{\mathcal{I}}_3(t,a_1,a_2,a_3) \le \delta.$$ Recalling [\[equality: iA_2\]](#equality: iA_2){reference-type="eqref" reference="equality: iA_2"}, [\[est: iu\]](#est: iu){reference-type="eqref" reference="est: iu"}, and $A^{\mathcal{I}}_2(t,a_1,a_2,a_3)\ge0$, we have $$\frac{d}{dt}A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \le C e^{tM} A_2^{\mathcal{I}}(t,a_1,a_2,a_3)\left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_1,a_2,a_3)}\right)-MA^{\mathcal{I}}_2(t,a_1,a_2,a_3).$$ Proceeding as we did to derive [\[est: A_2 sample\]](#est: A_2 sample){reference-type="eqref" reference="est: A_2 sample"}, we see that for $0\le t \le T_M$, $$\log A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \le 1+\log 3L+ C_3^{-1}\left(\log a_2 -(1+\log3L)-Mt \right),$$ which is equivalent to $$A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \le 3Le \left(\frac{a_2}{3Le}\right)^{C_3^{-1}} e^{-C_3^{-1}Mt}.$$ But since we assumed $a_2\le \ell \le (3Le)^{1-C_3}\delta^{C_3}$, 
[\[est:A_2-small\]](#est:A_2-small){reference-type="eqref" reference="est:A_2-small"} holds. Then, [\[est:A_3-small\]](#est:A_3-small){reference-type="eqref" reference="est:A_3-small"} can be handled by a parallel argument. Next, we prove our second statement. Suppose that $a\in \mathbb{T}^3\backslash B_{0}(\ell)$. Then $|a_1|\ge \ell$ implies that $\left|A^{\mathcal{I}}_1(t,a_1,a_2,a_3)\right| = |a_1| e^{Mt}\ge \ell$ for all $t$. For $A^{\mathcal{I}}_2(t,a)$ and $A^{\mathcal{I}}_3(t,a)$, we only need to consider the case when $a_2,a_3\ge 0$ by the odd symmetry of $\omega^{\mathcal{I}}$ in both $x_2$ and $x_3$. We claim that if $a_2,a_3\ge \ell$, then $$\label{est:A_2-small-1} A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \ge (3Le)^{1-C_3}\ell^{C_3}e^{-C_3Mt},$$ and $$\label{est:A_3-small-1} A^{\mathcal{I}}_3(t,a_1,a_2,a_3) \ge (3Le)^{1-C_3}\ell^{C_3}.$$ From [\[equality: iA_2\]](#equality: iA_2){reference-type="eqref" reference="equality: iA_2"}, [\[est: iu\]](#est: iu){reference-type="eqref" reference="est: iu"}, and $A^{\mathcal{I}}_2(t,a_1,a_2,a_3)\ge0$, we have $$-\frac{d}{dt}A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \le C e^{tM} A_2^{\mathcal{I}}(t,a_1,a_2,a_3)\left(1+\log \frac{3L}{A_2^{\mathcal{I}}(t,a_1,a_2,a_3)}\right)+MA^{\mathcal{I}}_2(t,a_1,a_2,a_3).$$ With the same argument as the derivation of [\[est: A_2 opposite sample\]](#est: A_2 opposite sample){reference-type="eqref" reference="est: A_2 opposite sample"}, we obtain for $0\le t \le T_M$, $$\log \frac{3L}{A_2^{\mathcal{I}}(t,a_1,a_2,a_3)}\le -1+ C_3\left(1+\log \frac{3L}{a_2}+Mt \right),$$ so that $$A^{\mathcal{I}}_2(t,a_1,a_2,a_3) \ge 3Le\left(\frac{a_2}{3Le}\right)^{C_3} e^{-C_3Mt}\ge (3Le)^{1-C_3}\ell^{C_3}e^{-C_3Mt},$$ where we used the assumption $a_2\ge \ell$ in the last inequality. Similarly, we can show [\[est:A_3-small-1\]](#est:A_3-small-1){reference-type="eqref" reference="est:A_3-small-1"}. Noticing $T_M=\frac{\log(M+1)}{M}$, our second statement follows. 
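For the reader's convenience, we spell out this last step, taking (one admissible choice) $C_4:=(3Le)^{1-C_3}$: for $0\le t\le T_M$,

```latex
% Identifying C_4 in the second statement; C_4 := (3Le)^{1-C_3} is one admissible choice.
A^{\mathcal{I}}_2(t,a_1,a_2,a_3)
 \;\ge\; (3Le)^{1-C_3}\,\ell^{C_3}\, e^{-C_3 M t}
 \;\ge\; (3Le)^{1-C_3}\,\ell^{C_3}\, e^{-C_3 M T_M}
 \;=\; (3Le)^{1-C_3}\left(\frac{\ell}{M+1}\right)^{C_3},
```

since $e^{MT_M}=M+1$; the bound for $A^{\mathcal{I}}_3$ is even stronger, as it carries no factor $e^{-C_3Mt}$.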
◻ Now we are ready to estimate $\left\Vert{\omega^{\mathcal{S}}(t,\cdot)}\right\Vert_{L^{\infty}}$ near the origin. **Lemma 14**. *Let $\ell$ in [\[def: sw_0\]](#def: sw_0){reference-type="eqref" reference="def: sw_0"} satisfy $\ell \le \min\left\{\frac{\delta}{M+1}, (3Le)^{1-C_3}\delta^{C_3}\right\}$. Then for $0\le t \le T_M$, we have $$\left\Vert{\omega^{\mathcal{S}}(t,\cdot)}\right\Vert_{L^{\infty}\left(B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)\right)}\lesssim \varepsilon e^{-Mt}.$$* *Proof.* Recalling that $\omega^{\mathcal{I}}\equiv 0$ in $[0,T_M]\times B_{0}(\delta)$, [\[eq: linearized sw with iw\]](#eq: linearized sw with iw){reference-type="eqref" reference="eq: linearized sw with iw"} reduces to $$\label{eq: linearized sw with iw-1} \partial_t\omega^{\mathcal{S}}+ (u^{\mathcal{L}}+ u^{\mathcal{I}})\cdot \nabla\omega^{\mathcal{S}}= \nabla(u^{\mathcal{L}}+ u^{\mathcal{I}}) \omega^{\mathcal{S}}$$ in $[0,T_M]\times B_{0}(\delta)$. First of all, we claim that $\omega^{\mathcal{S}}_1 = 0$ in $[0,T_M]\times B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)$. Indeed, noticing $u^{\mathcal{I}}_1=0$ (see [\[condi: iu\]](#condi: iu){reference-type="eqref" reference="condi: iu"}), [\[eq: linearized sw with iw-1\]](#eq: linearized sw with iw-1){reference-type="eqref" reference="eq: linearized sw with iw-1"} gives $$\partial_t \omega^{\mathcal{S}}_1 + Mx_1 \partial_1 \omega^{\mathcal{S}}_1 + (-Mx_2 + u^{\mathcal{I}}_2)\partial_2 \omega^{\mathcal{S}}_1 + u^{\mathcal{I}}_3\partial_3 \omega^{\mathcal{S}}_1 = M \omega^{\mathcal{S}}_1.$$ Evaluating along the characteristic $A^{\mathcal{I}}$, Lemma [Lemma 13](#lemma: iA-small){reference-type="ref" reference="lemma: iA-small"} implies $\omega^{\mathcal{S}}_1 = 0$ in $[0,T_M]\times B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)$ because $\omega^{\mathcal{S}}_{1,0} =0$ in $B_0(\ell)$. 
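In more detail, the transport equation for $\omega^{\mathcal{S}}_1$ solves along the characteristics [\[def: iA-extended\]](#def: iA-extended){reference-type="eqref" reference="def: iA-extended"} as

```latex
% Solving the transport equation for \omega^S_1 along the characteristics A^I:
\frac{d}{dt}\,\omega^{\mathcal{S}}_1\!\left(t, A^{\mathcal{I}}(t,a)\right)
 = M\,\omega^{\mathcal{S}}_1\!\left(t, A^{\mathcal{I}}(t,a)\right),
 \qquad \text{so that} \qquad
\omega^{\mathcal{S}}_1\!\left(t, A^{\mathcal{I}}(t,a)\right)
 = e^{Mt}\,\omega^{\mathcal{S}}_{1,0}(a),
```

and by the second statement of Lemma [Lemma 13](#lemma: iA-small){reference-type="ref" reference="lemma: iA-small"}, every point of $B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)$ at time $t\in[0,T_M]$ has the form $A^{\mathcal{I}}(t,a)$ with $a\in B_{0}(\ell)$, where $\omega^{\mathcal{S}}_{1,0}$ vanishes.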
Next, [\[condi: iu\]](#condi: iu){reference-type="eqref" reference="condi: iu"} reduces the equations for $\omega^{\mathcal{S}}_2$ and $\omega^{\mathcal{S}}_3$ to $$\label{eq: reduced sw linearized} \left\{ \begin{aligned} &\partial_t \omega^{\mathcal{S}}_2 + Mx_1 \partial_1 \omega^{\mathcal{S}}_2 + (-Mx_2 + u^{\mathcal{I}}_2)\partial_2 \omega^{\mathcal{S}}_2 + u^{\mathcal{I}}_3\partial_3 \omega^{\mathcal{S}}_2 = (-M+\partial_2 u^{\mathcal{I}}_2) \omega^{\mathcal{S}}_2 + \partial_3u^{\mathcal{I}}_2\omega^{\mathcal{S}}_3, \\ &\partial_t \omega^{\mathcal{S}}_3 + Mx_1 \partial_1 \omega^{\mathcal{S}}_3 + (-Mx_2 + u^{\mathcal{I}}_2)\partial_2 \omega^{\mathcal{S}}_3 + u^{\mathcal{I}}_3\partial_3 \omega^{\mathcal{S}}_3 = \partial_2 u^{\mathcal{I}}_3 \omega^{\mathcal{S}}_2 + \partial_3u^{\mathcal{I}}_3\omega^{\mathcal{S}}_3, \end{aligned} \right.$$ in $[0,T_M]\times B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)$. Hence using [\[est: nabla iu\]](#est: nabla iu){reference-type="eqref" reference="est: nabla iu"} and Lemma [Lemma 13](#lemma: iA-small){reference-type="ref" reference="lemma: iA-small"}, we derive $$\label{est: spw2} \begin{split} \partial_t |\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))| &=\frac{\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a)) \partial_t \left(\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))\right) }{|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|} \\ &\le \left(-M+Ce^{-C_3^{-1}Mt}\right)|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))| + Ce^{-C_3^{-1}Mt}|\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))| \end{split}$$ and similarly $$\label{est: spw3} \partial_t |\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))| \le Ce^{-C_3^{-1}Mt} \left(|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|+|\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))| \right).$$ Thus, from $$\partial_t \left(|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|+ |\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))|\right) \lesssim e^{-C_3^{-1}Mt} 
\left(|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|+|\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))|\right),$$ we have $$|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|+ |\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))| \lesssim \left|\omega^{\mathcal{S}}_{2,0}(a)\right| + \left|\omega^{\mathcal{S}}_{3,0}(a)\right|\lesssim \varepsilon,$$ where we used [\[def: sw_0\]](#def: sw_0){reference-type="eqref" reference="def: sw_0"} in the last inequality. Inserting $|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|\lesssim \varepsilon$ into [\[est: spw3\]](#est: spw3){reference-type="eqref" reference="est: spw3"}, we obtain $$\partial_t |\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))| \le Ce^{-C_3^{-1}Mt} |\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))|+C\varepsilon e^{-C_3^{-1}Mt}.$$ Noticing $\omega^{\mathcal{S}}_{3,0}(a)=0$ in $B_0(\ell)$, Grönwall's inequality gives $$\begin{split} |\omega^{\mathcal{S}}_3(t,A^{\mathcal{I}}(t,a))| &\le \exp{\left(\int_0^t C e^{-C_3^{-1}Ms}ds\right)}\int_0^t C\varepsilon e^{-C_3^{-1}Ms}ds \\ &= \exp\left( \frac{C\left(1- e^{-C_3^{-1}Mt}\right)}{C_3^{-1}M}\right)\frac{C\varepsilon\left(1- e^{-C_3^{-1}Mt}\right)}{C_3^{-1}M} \lesssim \frac{\varepsilon}{M}\lesssim \varepsilon e^{-Mt} \end{split}$$ for $t\in[0,T_M]$. Inserting this into [\[est: spw2\]](#est: spw2){reference-type="eqref" reference="est: spw2"}, we obtain $$\partial_t |\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))| \le \left(-M+Ce^{-C_3^{-1}Mt}\right)|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))| + C\varepsilon e^{-(C_3^{-1}+1)Mt},$$ which yields $$\begin{split} \partial_t \left(|\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))|\exp \left(\int_0^t M-Ce^{-C_3^{-1}Ms} ds\right)\right) &\le C\varepsilon e^{-(C_3^{-1}+1)Mt} \exp \left(\int_0^t M-Ce^{-C_3^{-1}Ms} ds\right)\lesssim \varepsilon e^{-C_3^{-1}Mt}. 
\end{split}$$ Integrating from $0$ to $t$, we obtain $$\begin{split} |\omega^{\mathcal{S}}_2(t,A^{\mathcal{I}}(t,a))| &\le \exp \left(\int_0^t -M+Ce^{-C_3^{-1}Ms} ds\right) \left(\left|\omega^{\mathcal{S}}_{2,0}(a)\right| + \int_{0}^t C\varepsilon e^{-C_3^{-1}Ms} ds \right) \lesssim \varepsilon e^{-Mt}. \qedhere \end{split}$$ ◻ ## Comparison between solutions {#comparison} In this section, we compare our pseudo-solutions with real solutions, as mentioned at the beginning of this section. In order to distinguish solutions of [\[eq:euler-l-vorticity-s\]](#eq:euler-l-vorticity-s){reference-type="eqref" reference="eq:euler-l-vorticity-s"} and [\[eq: linearized sw with iw\]](#eq: linearized sw with iw){reference-type="eqref" reference="eq: linearized sw with iw"}, we denote the solution of the linearized equation [\[eq: linearized sw with iw\]](#eq: linearized sw with iw){reference-type="eqref" reference="eq: linearized sw with iw"} by $(\omega^{\mathcal{S},P,Lin},u^{\mathcal{S},P,Lin})$. We fix $s>\frac52$ and set $\bar{u}=u^{\mathcal{L}}+u^{\mathcal{I},P}$ in [\[eq:euler bu\]](#eq:euler bu){reference-type="eqref" reference="eq:euler bu"}, $u=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}$ in [\[eq:euler just u\]](#eq:euler just u){reference-type="eqref" reference="eq:euler just u"}, $\tilde{u}=u-\Bar{u}$ in [\[eq:euler tu\]](#eq:euler tu){reference-type="eqref" reference="eq:euler tu"}, and $\tilde{u}^{Lin}=u^{\mathcal{S},P,Lin}$ in [\[eq:euler ltu\]](#eq:euler ltu){reference-type="eqref" reference="eq:euler ltu"} to employ Proposition [Proposition 3](#prop: perturbation){reference-type="ref" reference="prop: perturbation"}. 
Equation [\[def: sw_0\]](#def: sw_0){reference-type="eqref" reference="def: sw_0"} implies that $\tilde{u}_0$ in [\[eq:euler tu\]](#eq:euler tu){reference-type="eqref" reference="eq:euler tu"} satisfies $$\label{tpsi-hsps} {\left\Vert \tilde{u}_0 \right\Vert}_{H^{s+1}(\mathbb{T}^3)} \lesssim {\left\Vert \tilde{\psi} \right\Vert}_{H^{s}(\mathbb{T}^3)}=\ell^{-s+\frac{3}{2}}{\left\Vert \psi \right\Vert}_{H^{s}(\mathbb{T}^3)},$$ and Lemma [Lemma 7](#lem: iu-Hs-bound){reference-type="ref" reference="lem: iu-Hs-bound"} gives $$\label{est:baru int} \int_0^{T_M} {\left\Vert (u^{\mathcal{L}}+u^{\mathcal{I},P})(\tau,\cdot) \right\Vert}_{H^{s+2}}d\tau \lesssim MT_M + \frac{e^{ {C}MT_M}-1}{ {C}M} \lesssim {M^C},$$ by adjusting the value of the absolute constant $C>1$ from one inequality to another. Therefore, Proposition [Proposition 3](#prop: perturbation){reference-type="ref" reference="prop: perturbation"} implies $${\left\Vert \omega^{\mathcal{S},P}(t,\cdot)-\omega^{\mathcal{S},P,Lin}(t,\cdot) \right\Vert}_{L^{\infty}} \lesssim \varepsilon^2 \ell^{-2s+3} e^{CM^C}$$ on $[0,T_M]$, whenever $\varepsilon>0$ satisfies $$\label{eps condi'} \varepsilon \le \ell^{s-\frac32} e^{-CM^C}.$$ Hence, it follows from Lemma [Lemma 14](#lemma sw-l^infty-linear){reference-type="ref" reference="lemma sw-l^infty-linear"} that $$\label{est: sw in small ball} \left\Vert{\omega^{\mathcal{S},P}(t,\cdot)}\right\Vert_{L^{\infty}\left(B_{0}\left(C_4\left(\frac{\ell}{M+1}\right)^{C_3}\right)\right)}\lesssim \varepsilon e^{-Mt}$$ on $[0,T_M]$ if we pick $\ell$ and $\varepsilon$ satisfying $$\label{condi: eps, ell} \ell \le \min\left\{\frac{\delta}{M+1}, (3Le)^{1-C_3}\delta^{C_3}\right\},\qquad \varepsilon\le \ell^{2s-3} e^{-M^C}$$ with $C$ adjusted. (Here, we have used that $M \gg 1$.) Next, we set $\Bar{u}^*:=u^{\mathcal{L}}+u^{\mathcal{I}}$, so that $\Bar{u}^*$ solves [\[eq:euler busu\]](#eq:euler busu){reference-type="eqref" reference="eq:euler busu"}.
Thus, using [\[est: nabla iu\]](#est: nabla iu){reference-type="eqref" reference="est: nabla iu"}, [\[tpsi-hsps\]](#tpsi-hsps){reference-type="eqref" reference="tpsi-hsps"}, and [\[est:baru int\]](#est:baru int){reference-type="eqref" reference="est:baru int"}, Proposition [Proposition 5](#prop: perturbation 2){reference-type="ref" reference="prop: perturbation 2"} gives us [\[eq:nb-u-i-decay\]](#eq:nb-u-i-decay){reference-type="eqref" reference="eq:nb-u-i-decay"} for $\varepsilon,\,\ell$ satisfying [\[condi: eps, ell\]](#condi: eps, ell){reference-type="eqref" reference="condi: eps, ell"}, with $C$ adjusted if necessary. To derive [\[eq:omega-s-decay\]](#eq:omega-s-decay){reference-type="eqref" reference="eq:omega-s-decay"}, noticing $u=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}=u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}$ and recalling [\[est:tu\]](#est:tu){reference-type="eqref" reference="est:tu"}, [\[tpsi-hsps\]](#tpsi-hsps){reference-type="eqref" reference="tpsi-hsps"}, and [\[est:baru int\]](#est:baru int){reference-type="eqref" reference="est:baru int"}, there exists a constant $C>0$ such that $$\frac{d}{dt}\left|\Phi(t,a)-A^{\mathcal{I}}(t,a) \right| \le {\left\Vert u^{\mathcal{S},P} \right\Vert}_{L^{\infty}} {\lesssim {\left\Vert u^{\mathcal{S},P} \right\Vert}_{H^{s+1}}} \lesssim \varepsilon\ell^{-s+\frac32} e^{CM^C}$$ for $\varepsilon>0$ satisfying [\[eps condi\'\]](#eps condi'){reference-type="eqref" reference="eps condi'"}, where $\Phi, A^{\mathcal{I}}$ are from [\[def: flow Phi\]](#def: flow Phi){reference-type="eqref" reference="def: flow Phi"} and [\[def: iA-extended\]](#def: iA-extended){reference-type="eqref" reference="def: iA-extended"}, respectively.
Thus, if $\varepsilon,\, \ell$ satisfy [\[condi: eps, ell\]](#condi: eps, ell){reference-type="eqref" reference="condi: eps, ell"}, with $C$ adjusted if necessary, then the estimate $$\left|\Phi(t,a)-A^{\mathcal{I}}(t,a) \right| \lesssim \varepsilon\ell^{-s+\frac32} e^{CM^C}t \lesssim \sqrt{\varepsilon}$$ on $[0,T_M]$ and Lemma [Lemma 13](#lemma: iA-small){reference-type="ref" reference="lemma: iA-small"} imply $$\left|\Phi(t,a)\right|\le \left|A^{\mathcal{I}}(t,a) \right|+\left|\Phi(t,a)-A^{\mathcal{I}}(t,a) \right|\le \delta+ \sqrt{\varepsilon} \le 2\delta$$ for $(t,a)\in [0,T_M] \times B_0(\ell)$, while $$\left|\Phi(t,a)\right|\ge \left|A^{\mathcal{I}}(t,a) \right|-\left|\Phi(t,a)-A^{\mathcal{I}}(t,a) \right|\ge C_4\left( \frac{\ell}{M+1}\right)^{C_3}- \sqrt{\varepsilon} \ge C_5 \left( \frac{\ell}{M+1}\right)^{C_3}$$ for $(t,a)\in [0,T_M] \times \mathbb{T}^3\setminus B_0(\ell)$ and some $C_5>0$. Recall that $\omega^{\mathcal{I},P}=0$ in $B_0(2\delta)\times [0,T_M]$ (see Lemma [Lemma 10](#lemma: iA){reference-type="ref" reference="lemma: iA"} and Remark [Remark 12](#rmk: dlt){reference-type="ref" reference="rmk: dlt"}). Hence by [\[eq:euler-l-vorticity-s\]](#eq:euler-l-vorticity-s){reference-type="eqref" reference="eq:euler-l-vorticity-s"}, $\omega^{\mathcal{S},P}$ solves $$\partial_{t} \left(\omega^{\mathcal{S},P}(t,\Phi(t,a))\right) = \nabla u (t,\Phi(t,a)) \omega^{\mathcal{S},P}(t,\Phi(t,a))$$ for $(t,a)\in [0,T_M] \times B_0(\ell)$. Since $\omega^{\mathcal{S}}$ also solves $$\partial_{t} \left(\omega^{\mathcal{S}}(t,\Phi(t,a))\right) = \nabla u (t,\Phi(t,a)) \omega^{\mathcal{S}}(t,\Phi(t,a)),$$ we have $$\partial_{t} \left|\omega^{\mathcal{S},P}(t,\Phi(t,a))-\omega^{\mathcal{S}}(t,\Phi(t,a)) \right| \le {\left\Vert \nabla u \right\Vert}_{L^{\infty}} \left|\omega^{\mathcal{S},P}(t,\Phi(t,a))-\omega^{\mathcal{S}}(t,\Phi(t,a)) \right|$$ for $(t,a)\in[0,T_M] \times B_0(\ell)$.
But ${\left\Vert \nabla u \right\Vert}_{L^{\infty}} < \infty$ up to time $T_M$, and $\omega^{\mathcal{S},P}$ and $\omega^{\mathcal{S}}$ have the same initial data $\omega^{\mathcal{S}}_0$, so that $\omega^{\mathcal{S},P}(t,\Phi(t,a))=\omega^{\mathcal{S}}(t,\Phi(t,a))$ for $(t,a)\in[0,T_M] \times B_0(\ell)$. This implies $\omega^{\mathcal{S},P}(t,x)=\omega^{\mathcal{S}}(t,x)$ for $(t,x)\in[0,T_M] \times B_0\left(C_5 \left( \frac{\ell}{M+1}\right)^{C_3} \right)$, and therefore [\[eq:omega-s-decay\]](#eq:omega-s-decay){reference-type="eqref" reference="eq:omega-s-decay"} follows from [\[est: sw in small ball\]](#est: sw in small ball){reference-type="eqref" reference="est: sw in small ball"}. This completes our proof of Theorem [Theorem 1](#thm: presence){reference-type="ref" reference="thm: presence"}. $\Box$ **Acknowledgments**. Research of TY was partially supported by Grant-in-Aid for Scientific Research B (20H01819), Japan Society for the Promotion of Science (JSPS). IJ has been supported by the National Research Foundation of Korea (NRF) grant No. 2022R1C1C1011051.
--- abstract: | Let $n\geq 1$ be an integer and let $p$, $q$ be distinct odd primes. Let ${G}$, $N$ be two groups of order $p^nq$ whose Sylow-$p$-subgroups are cyclic. We enumerate the Hopf-Galois structures on a Galois ${G}$-extension of type $N$. This also computes the number of skew braces with additive group isomorphic to $G$ and multiplicative group isomorphic to $N$. Further, when $q<p$, we give a complete classification of the Hopf-Galois structures on Galois $G$-extensions. address: - The Institute of Mathematical Sciences, 4th Cross St, CIT Campus, Tharamani, Chennai, Tamil Nadu 600113, India - Harish-Chandra Research Institute- Main Building, Chhatnag Road, Jhusi, Uttar Pradesh 211019, India author: - Namrata Arvind - Saikat Panja bibliography: - ref.bib title: "Hopf Galois structures, skew braces for groups of size $p^nq$: The cyclic Sylow subgroup case" --- # Introduction {#sec:introduction} The study of Hopf-Galois structures comes under the realm of group theory and number theory. These structures were first studied by S. Chase and M. Sweedler in $1969$, in their work [@ChSw69]. Subsequently, in [@GrPa87], C. Greither and B. Pareigis defined Hopf-Galois structures for separable extensions. In recent times, algebraic objects called skew braces were introduced in the PhD thesis of D. Bachiller. They have been studied by various mathematicians such as W. Rump, D. Bachiller, and F. Cedó in [@CeJeOk14], [@BaCeJe16], among others. Skew braces are known to give set-theoretic solutions to the Yang-Baxter equation. Later, A. Smoktunowicz and L. Vendramin noticed a connection between the study of skew braces and that of Hopf-Galois structures in their work [@SmVe18]. For more details and background on this topic, and for the interplay between skew braces and Hopf-Galois structures, one can refer to the book [@Ch00book] of L. N. Childs and the Ph.D. thesis of K. N. Zenouz [@Ze18]. A Hopf-Galois structure on a finite field extension is defined in the following way.
Assume $K/F$ is a finite Galois field extension. An $F$-Hopf algebra $\mathcal{H}$, with an action on $K$ such that $K$ is an $\mathcal{H}$-module algebra and the action makes $K$ into an $\mathcal{H}$-Galois extension, will be called a *Hopf-Galois structure* on $K/F$. A *left skew brace* is a triple $(\Gamma,+,\times)$, where $(\Gamma,+)$ and $(\Gamma,\times)$ are groups satisfying $a\times(b+c)=(a\times b)-a+(a\times c)$ for all $a,b,c\in\Gamma$; here $-a$ denotes the inverse of $a$ in $(\Gamma,+)$. Given a group $G$, the *holomorph* of $G$ is defined as the semidirect product $G\rtimes \textup{Aut}(G)$ with respect to the natural action of $\textup{Aut}(G)$ on $G$. It is denoted by $\textup{Hol}(G)$. Let $G$ and $N$ be two finite groups of the same order. By $e(G,N)$ we mean the number of Hopf-Galois structures on a finite Galois field extension $L/K$ with Galois group isomorphic to $G$ and type isomorphic to $N$. In [@GrPa87], the authors gave a bijection between Hopf-Galois structures on a finite Galois extension with Galois group $G$ and regular subgroups of $\mathrm{Perm}(G)$ which are normalised by $\lambda(G)$. Further, in [@By96], N. Byott showed that $$\label{Byotts-translation} e(G,N)= \dfrac{|\textup{Aut}(G)|}{|\textup{Aut}(N)|}\cdot e'(G,N),$$ where $e'(G,N)$ is the number of regular subgroups of $\textup{Hol}(N)$ isomorphic to $G$. Here a subgroup $\Gamma$ of $\textup{Hol}(N)$ with $|\Gamma|=|N|$ is called regular if the only element of $\Gamma$ of the form $(e_N,\zeta)$ is the one with $\zeta=I$, the identity automorphism. We will use this condition to check regular embeddings of the concerned groups in the article. It turns out that $e'(G, N)$ also gives the number of skew braces with the additive group isomorphic to $N$ and the multiplicative group isomorphic to $G$. The number $e(G,N)$ has been computed for several groups. For example, N. Byott determined $e(G,N)$ when $G$ is isomorphic to a cyclic group [@By13]; C. Tsang determined $e(G,N)$ when $G$ is a quasisimple group [@Ts21a]; N. K. Zenouz considered the groups of order $p^3$ [@Ze18] to determine $e(G,N)$; T.
Kohl determined $e(G,N)$ when $G$ is a dihedral group [@Ko20]. Previously in [@ArPa22], the authors computed $e(G,N)$ whenever $G$ and $N$ are isomorphic to $\mathbb{Z}_n\rtimes \mathbb{Z}_2$, where $n$ is odd and its radical is a Burnside number. Groups of order $p^2q$ with cyclic Sylow subgroups have been considered in [@CaCaDe20]. We can show that any group of order $p^nq$ with cyclic Sylow subgroups, when $p$ and $q$ are distinct primes, is a semidirect product of two cyclic groups (see [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}). In this article, we compute $e(G,N)$ (and $e'(G,N)$), whenever $G$ and $N$ are groups of order $p^n q$ with cyclic Sylow-$p$ subgroup, where $p$ and $q$ are distinct odd primes. We do this by looking at the number of regular subgroups of $\textup{Hol}(N)$ which are isomorphic to $G$. Finally whenever $q < p$ we give a necessary and sufficient condition on when the pair $(G,N)$ is realizable. We now fix some notations. For a ring $R$, we will use $R^{\times}$ to denote the set of multiplicative units of $R$. For a group $G$, the identity element will be sometimes denoted by $e_G$ and mostly by $1$, when the context is clear. The automorphism group of a group $G$ will be denoted by $\textup{Aut}(G)$, and the holomorph $G\rtimes_{\mathrm{id}}\textup{Aut}(G)$ will be denoted by $\textup{Hol}(G)$. The binomial coefficients will be denoted by ${l \choose m}$. The Euler totient function will be denoted by $\varphi$. We will use $\mathbb{Z}_m$ to denote the cyclic group of order $m.$ We will use $\mathbb{Z}_m$ as a group as well as a ring, which will be clear from the context. Now, we state the two main results of this article. To state the second result we use notations from [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}. **Theorem 1**. *Let $p>q$ be odd primes and $q|p-1$. Let $G$ denote the nonabelian group of the form $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q$ and $C$ denote the cyclic group of order $p^nq$. 
Then the following are true:* 1. *$e'(G,G)=e(G,G)=2+2p^n(q-2)$,* 2. *$e'(G,C)=q-1$, and $e(G,C)=p^n$,* 3. *$e'(C,G)=p^{2n-1}$, and $e(C,G)=2p^{n-1}(q-1)$.* **Theorem 2**. *Let $p<q$ be odd primes and $p^a||q-1$. For $1\leq b\leq\mathrm{min}\{n,a\}$, let $G_b$ denote the unique nonabelian group of the form $\mathbb{Z}_{q}\rtimes\mathbb{Z}_{p^n}$ determined by $b$, and let $C$ denote the cyclic group of order $p^nq$. Then the following results hold:* 1. *$e'(G_b,G_b)=e(G_b,G_b)=2\left(p^{n-b}+q\left(\varphi(p^n)-p^{n-b}\right)\right)$,* 2. *$e'(G_{b_1},G_{b_2})=2qp^{n+b_1-b_2-1}(p-1)$, and $e(G_{b_1},G_{b_2})=2qp^{n-1}(p-1)$ for $b_1\neq b_2$,* 3. *$e'(C,G_b)=2p^{n-b}q$, and $e(C,G_b)=2(p-1)p^{n-1}$,* 4. *$e'(G_b,C)=p^{n+b-2}(p-1)$, and $e(G_b,C)=p^{n-1}b$.* The rest of the article is organised as follows. In [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we give a detailed description of the groups under consideration and determine their automorphism groups. Next, in [3](#sec:main-results-1){reference-type="ref" reference="sec:main-results-1"} and [4](#sec:main-results-2){reference-type="ref" reference="sec:main-results-2"} we will prove [Theorem 1](#theorem-p->-q){reference-type="ref" reference="theorem-p->-q"} and [Theorem 2](#theorem-p-<-q){reference-type="ref" reference="theorem-p-<-q"} respectively. Lastly, in [5](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} we discuss the realizability problem and solve it for some of the groups mentioned in this article. # Preliminaries {#sec:prelim} ## The groups under consideration In this subsection we will describe the groups under consideration and fix some notations. Let $p$ and $q$ be distinct odd primes. We look at groups of order $p^nq$ whose Sylow-$p$-subgroups are cyclic. These fall into two families, depending on whether $p>q$ or $p<q$. In case $p>q$, the groups are isomorphic to $\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_{q}$, since $\mathbb{Z}_{p^n}$ is normal.
Indeed, every such group $G$ fits into the short exact sequence $1\longrightarrow \mathbb{Z}_{p^n}\longrightarrow G\longrightarrow \mathbb{Z}_q\longrightarrow 1$. Thus, by the well-known Schur--Zassenhaus theorem, $G$ is isomorphic to $\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_{q}$. Since $\textup{Aut}(\mathbb{Z}_{p^n})$ is cyclic, the semidirect product is either trivial (in this case the group is cyclic) or uniquely nontrivial. Let $G \cong \mathbb{Z}_{p^n} \rtimes \mathbb{Z}_q$. If $q\nmid p-1$ then $G$ is cyclic. In case $q|p-1$, let $\phi : \mathbb{Z}_q \rightarrow \textup{Aut}(\mathbb{Z}_{p^n})$ be the homomorphism defined as $\phi(1)=k$. Here $k$ is an element of $\textup{Aut}(\mathbb{Z}_{p^n})$ of order $q$. Hereafter, we denote $\mathbb{Z}_{p^n} \rtimes_{\phi} \mathbb{Z}_q$ by $\mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q$. Let $$\langle x,y \mid x^{p^n}=y^q=1 , yxy^{-1} = x^k\rangle$$ be a presentation of $\mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q$. Note that since $e(G,G)$ is already known whenever $G$ is cyclic, we will assume $q|p-1$ for our calculations. Now if $p < q$, we need to use a result of W. Burnside from [@Bu05], which states that for a finite group $G$, all the Sylow subgroups are cyclic if and only if $G$ is a semidirect product of two cyclic groups of coprime order. Applying this to our situation, we get that $G$ is either a cyclic group or a non-trivial semidirect product of the form $\mathbb{Z}_q\rtimes\mathbb{Z}_{p^n}$. Next, we elaborate on the different possible semidirect products of the form $\mathbb{Z}_q\rtimes\mathbb{Z}_{p^n}$. Once again, in this case we assume that $p|q-1$. Let $p^a||q-1$ and, for $b\leq \mathrm{min}\{n,a\}$, fix $\psi_{b}:\mathbb{Z}_{p^n}\longrightarrow\textup{Aut}(\mathbb{Z}_q)$ to be a homomorphism such that $|\mathrm{Im}~\psi_{b}|=p^b$. Take ${G}_{b}=\mathbb{Z}_q\rtimes_{\psi_{b}}\mathbb{Z}_{p^n}$. The group ${G}_{b}$ is unique up to isomorphism.
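Concretely, a twist $k$ generating $\mathrm{Im}~\psi_b$ can be located by brute force. The following is a minimal sketch, with sample parameters $p=3$, $q=7$ (so $3^1\,||\,7-1$ and $a=1$) chosen purely for illustration; the helper names are ours:

```python
# List the units k of order exactly p^b modulo the prime q; any such k
# determines psi_b and hence G_b = Z_q x| Z_{p^n}.  The parameters below
# (p = 3, q = 7) are illustrative only, not taken from the paper.

def mult_order(k, q):
    """Multiplicative order of k in (Z/q)^x for prime q."""
    order, acc = 1, k % q
    while acc != 1:
        acc = acc * k % q
        order += 1
    return order

def order_pb_elements(p, b, q):
    """All units modulo q of order exactly p**b."""
    return sorted(k for k in range(1, q) if mult_order(k, q) == p**b)

p, q = 3, 7
print(order_pb_elements(p, 0, q))   # b = 0: only k = 1 (the abelian case)
print(order_pb_elements(p, 1, q))   # b = 1: the phi(3) = 2 elements of order 3
```

Any two choices of $k$ of order $p^b$ yield isomorphic groups, in line with the uniqueness of $G_b$ stated above.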
The presentation of this group can be taken to be $$\langle x,y|x^{p^n}=1,y^q=1,xyx^{-1}=y^k\rangle,$$ where $k$ is an element of order $p^b$ in $\textup{Aut}(\mathbb{Z}_q)=\mathbb{Z}_q^\times$. From now on we denote $\mathbb{Z}_q\rtimes_{\psi_{b}}\mathbb{Z}_{p^n}$ by $\mathbb{Z}_q\rtimes_{k}\mathbb{Z}_{p^n}$. ## The basic lemmas In this subsection we note down the basic group-theoretic results which will be used throughout the article. **Lemma 3**. *Let $p$ be an odd prime. Take $a=bp^c$, where $p\nmid b$. Then, for every integer $c\geq 0$, we have $(1+p)^{a}\equiv 1+dp^{c+1}\pmod{p^{c+2}}$ for some $d$ with $p\nmid d$.* *Proof.* We prove it by induction on $c$. If $c=0$, then $(1+p)^a=1+ap+a'p^2$, for some $a'\in\mathbb{Z}$. Hence $(1+p)^{a}\equiv 1+ap\pmod{p^2}$ with $d=a$. Next, assume it to be true for all $l\leq c$ and in particular for $l=c$. Hence $(1+p)^{bp^c}=1+dp^{c+1}+d'p^{c+2}$ for some $d'\in\mathbb{Z}$. Then we have $$\begin{aligned} (1+p)^{bp^{c+1}}=\left(1+dp^{c+1}+d'p^{c+2}\right)^p=(1+d''p^{c+1})^p, \end{aligned}$$ for some $d''\in \mathbb{Z}$ with $(d'',p)=1$. Hence it follows that $(1+p)^{bp^{c+1}}\equiv 1+ d'' p^{c+2}\pmod{p^{c+3}}$, which also finishes the induction, and hence the proof. ◻ **Lemma 4**. *Let $G$ be the non-abelian group isomorphic to $\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_{q}$. We have $\textup{Aut}(G) \cong \textup{Hol}(\mathbb{Z}_{p^n})$.* *Proof.* We first embed $G$ as a normal subgroup of $\textup{Hol}(\mathbb{Z}_{p^n})$. Take the homomorphism $\psi$ defined as $$\begin{aligned} \psi(x)&=\begin{pmatrix}1&k\\0&1\end{pmatrix},~ \psi(y)=\begin{pmatrix}k&1\\0&1\end{pmatrix}.\end{aligned}$$ This embedding can be shown to be injective.
Now consider the map $$\begin{aligned} \Phi: \textup{Hol}(\mathbb{Z}_{p^n}) \longrightarrow \textup{Aut}(G) \text{ defined as }\Phi(z)(w)=zwz^{-1} \end{aligned}$$ for all $z\in \textup{Hol}(\mathbb{Z}_{p^n})$ and $w\in G$. This is an injective group homomorphism, since $\ker\Phi$ consists only of the identity matrix. From [@Walls1986 Theorem B] we have $|\textup{Aut}(G)|= |\textup{Hol}(\mathbb{Z}_{p^n})|$. Thus $\Phi$ is an isomorphism. ◻ **Lemma 5**. *Let $p,q$ be primes such that $(p,q)=1$ and $q|p-1$. Let $k$ be a multiplicative unit in $\mathbb{Z}_{p^n}$, of multiplicative order $q$. Then $k-1$ is a multiplicative unit in $\mathbb{Z}_{p^n}$.* *Proof.* Suppose $k-1$ is not a unit in $\mathbb{Z}_{p^n}$; then $k-1 = mp$ for some $m\in \mathbb{Z}_{p^n}$. Since $k^q\equiv 1\pmod{p^n}$, we get $$\begin{aligned} (mp+1)^q = 1+ {q \choose 1}mp + {q \choose 2}(mp)^2+ \cdots +(mp)^q \equiv 1\pmod{p^n},\end{aligned}$$ which in turn implies that $$\begin{aligned} mp\cdot\left(q+ {q \choose 2}mp + {q \choose 3}(mp)^2+ \cdots +(mp)^{q-1}\right) \equiv 0 \pmod{p^n}.\end{aligned}$$ We note that $$\begin{aligned} t=q+ {q \choose 2}mp + {q \choose 3}(mp)^2+ \cdots +(mp)^{q-1} \end{aligned}$$ is a unit, since $q$ is a unit and $t-q$ is a nilpotent element. Thus $mp \equiv 0 \pmod{p^n},$ which implies $k-1 \equiv 0 \pmod{p^n}$. This is a contradiction since $k$ is an element of order $q$. ◻ **Lemma 6**. *Let $G_b \cong \mathbb{Z}_q\rtimes_{k}\mathbb{Z}_{p^n}$, where $k$ is an element of order $p^b$ in $\mathbb{Z}_{q}^{\times}$. Assume $p|q-1$; then for $b>0$, we have that $\textup{Aut}({G}_{b})\cong\mathbb{Z}_{p^{n-b}}\times\textup{Hol}(\mathbb{Z}_q)$.* *Proof.* The proof will be divided into two steps. First, we calculate the size of the automorphism group. In the next step, we will determine the group's description in terms of generators and relations, from which the result will follow. Let us take an automorphism $\Psi$ of ${G}_b$.
Since an automorphism is determined by its values on the generators, assume that $\Psi(x)=y^\alpha x^\gamma$ and $\Psi(y)=y^{\beta}x^{\delta}$, where $0\leq \alpha,\beta\leq q-1$ and $0\leq \gamma,\delta\leq p^n-1$. Note that we have $\Psi(y)^q=y^{\beta(1+k^\delta+k^{2\delta}+\cdots+k^{(q-1)\delta})}x^{q\delta}$. Since $\Psi(y)^q=1$ and $q$ is a unit modulo $p^n$, we must have $\delta=0$. Thus $\beta$ should be a unit in $\mathbb{Z}_q$. Now consider the equation $\Psi(x)\Psi(y)=\Psi(y)^k\Psi(x)$. This imposes the condition that $y^{\alpha+\beta k^{\gamma}}x^{\gamma}=y^{\beta k+\alpha}x^{\gamma}$. Hence we should have $\beta k^{\gamma}\equiv\beta k\pmod{q}$, whence $k^{\gamma-1}\equiv 1\pmod{q}$, as $\beta$ is a unit in $\mathbb{Z}_q$. Since $k$ is an element of order $p^b$, we get that $\gamma\in\{Rp^b+1:0\leq R<p^{n-b}\}$. Next, considering the equation $\Psi(x)^{p^n}=1$, we have that $y^{\alpha(1+k^{\gamma}+k^{2\gamma}+\ldots+k^{(p^{n}-1)\gamma})}x^{p^n\gamma}=1$. Since $x^{p^n\gamma}=1$, we have that $\alpha(1+k^{\gamma}+k^{2\gamma}+\ldots+k^{(p^{n}-1)\gamma})\equiv 0\pmod{q}$. Regardless of the value of $k$, any $0\leq\alpha\leq q-1$ satisfies the last congruence. Hence the group is of order $p^{n-b}q(q-1)$. Hereafter we denote $\Psi$ by $(\gamma,\beta,\alpha)$. Consider the following three elements of the group given by $$\begin{aligned} \Psi_1=\left((1+p)^{p^{b-1}},1,0\right), \Psi_2=\left(1,t,0\right), \Psi_3=(1,1,1),\end{aligned}$$ where $1\leq t\leq q-1$ satisfies $\mathbb{Z}_q^\times=\langle\overline{t}\rangle$. Since $\overline{(1+p)}\in\mathbb{Z}_{p^n}^\times$ is of order $p^{n-1}$, we get that $\Psi_1$ is an element of order $p^{n-b}$. Given that $\overline{t}$ is an element of order $q-1$, the element $\Psi_2$ is of order $q-1$. Lastly, $\Psi_3$ is an element of order $q$. The identity $\Psi_1\Psi_2=\Psi_2\Psi_1$ follows from an easy calculation. Now, $\Psi_1\Psi_3(x)=yx^{(1+p)^{p^{b-1}}}$.
Further, we have $$\begin{aligned} \Psi_3\Psi_1(x)=(yx)^{(1+p)^{p^{b-1}}}=y^{1+k+\cdots+k^{(1+p)^{p^{b-1}}-1}}x^{(1+p)^{p^{b-1}}}=y^{\frac{k^{(1+p)^{p^{b-1}}}-1}{k-1}}x^{(1+p)^{p^{b-1}}}.\end{aligned}$$ Since $(1+p)^{p^{b-1}}\equiv 1\pmod{p^b}$ and $\overline{k-1}$ is a unit in $\mathbb{Z}_q$, we conclude that $\Psi_1\Psi_3(x)=\Psi_3\Psi_1(x)$. Since $\Psi_1 \Psi_3(y)=\Psi_3\Psi_1(y)$, we conclude that $\Psi_1\Psi_3=\Psi_3\Psi_1$. We now take the subgroup generated by $\Psi_2$ and $\Psi_3$. In this group $\langle\Psi_3\rangle$ is normal, as $\Psi_2\Psi_3\Psi_2^{-1}=\Psi_3^t\in\langle\Psi_3\rangle$. Also $\langle\Psi_2\rangle\cap\langle\Psi_3\rangle$ contains only the identity. Hence $|\langle\Psi_2,\Psi_3\rangle|=q(q-1)$. Take the map $T:\langle \Psi_2,\Psi_3\rangle\longrightarrow\textup{Hol}(\mathbb{Z}_q)$, defined as $$\begin{aligned} T(\Psi_2)=\begin{pmatrix} t&0\\ 0&1 \end{pmatrix}\text{ and } T(\Psi_3)=\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}.\end{aligned}$$ This determines a homomorphism since $T(\Psi_2)T(\Psi_3)T(\Psi_2)^{-1}=T(\Psi_3)^t$. For any $\begin{pmatrix} u & v\\ 0 &1 \end{pmatrix}\in\textup{Hol}(\mathbb{Z}_q)$, we have that $T(\Psi_2^{w_1}\Psi_3^{w_2})=\begin{pmatrix} u & v \\ 0 & 1 \end{pmatrix}$, where $w_1$ satisfies $t^{w_1}=u$ and $w_2=v/u$. Since the orders of the groups are the same, we conclude that $\langle\Psi_2,\Psi_3 \rangle\cong \textup{Hol}(\mathbb{Z}_q)$. Now we will show that $\langle\Psi_1\rangle\cap\langle \Psi_2,\Psi_3\rangle$ has only the identity element. Indeed, if $\Psi_1^d= \Psi_2^e\Psi_3^f$ (for some $0\leq d< p^{n-b}$, $0\leq e<q-1$ and $0\leq f<q$), then $e=0$, by comparing the values of both maps at $y$. Finally, if we consider $\Psi_1^d(x)=\Psi_3^f(x)$, we get that $x^{p'^d}=y^fx$ where $p'=(1+p)^{p^{b-1}}$. This forces us to have $f=0$, and consequently $d=0$.
Thus $\langle \Psi_1,\Psi_2,\Psi_3\rangle\cong\langle\Psi_1\rangle\times\langle\Psi_2,\Psi_3\rangle$ and is of order $p^{n-b}q(q-1)$. Hence we have proved that $\textup{Aut}({G}_b)\cong\mathbb{Z}_{p^{n-b}}\times\textup{Hol}(\mathbb{Z}_q)$. ◻ We denote the elements of $\textup{Aut}(G_b)$ by $\left(\gamma,\begin{pmatrix} \beta & \alpha \\ 0 & 1 \end{pmatrix}\right)\in \mathbb{Z}_{p^{n}}^{\times}\times \textup{Hol}(\mathbb{Z}_q)$, such that $\gamma^{p^{n-b}}=1$. **Remark 7**. We note down the action of the automorphism group of ${G}_b$ on the group ${G}_b$, by means of generators. This will be necessary for counting the Hopf-Galois structures concerning the groups ${G}_b$. For $b>0$, the action is as follows. $$\begin{aligned} \left(\gamma,\begin{pmatrix} \beta & \alpha\\ 0 & 1 \end{pmatrix}\right)\cdot x= y^{\alpha}x^{\gamma}\text{ and } \left(\gamma,\begin{pmatrix} \beta & \alpha\\ 0 & 1 \end{pmatrix}\right)\cdot y= y^{\beta}. \end{aligned}$$ **Remark 8**. For $b=0$, the group ${G}_b\cong\mathbb{Z}_{p^n}\times \mathbb{Z}_q$. Since $(p,q)=1$ and both factors are abelian groups, it follows from [@BiCuMc06 Theorem 3.2] that $\textup{Aut}({G}_b)\cong\mathbb{Z}_{p^{n-1}(p-1)}\times\mathbb{Z}_{q-1}$ in this case. The action is defined to be component-wise. # The case $p>q$ {#sec:main-results-1} This section is devoted to the proof of [Theorem 1](#theorem-p->-q){reference-type="ref" reference="theorem-p->-q"}. As discussed in [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, up to isomorphism there are precisely two groups of order $p^nq$ with cyclic Sylow subgroups. Counting the number of skew braces with multiplicative group $G$ and additive group $N$ is equivalent to (up to multiplication by a constant; see [@ArPa22 Proof of Proposition 3.2]) counting the number of regular embeddings of $G$ in $\textup{Hol}(N)$.
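Before counting embeddings, the automorphism count $|\textup{Aut}(G)|=|\textup{Hol}(\mathbb{Z}_{p^n})|=p^n\varphi(p^n)$ from Lemma 4 can be sanity-checked by brute force in the smallest nonabelian case. The sketch below (element encoding and enumeration strategy are ours; parameters $p=7$, $n=1$, $q=3$, $k=2$ are illustrative) enumerates all pairs of generator images satisfying the defining relations:

```python
# Brute-force |Aut(G)| for G = Z_{p^n} x|_k Z_q with p^n = 7, q = 3, k = 2
# (2 has order 3 mod 7), to compare with |Hol(Z_7)| = 7 * 6 = 42.
# Elements x^i y^j are stored as pairs (i, j); the relation y x y^{-1} = x^k
# gives y^j x = x^{k^j} y^j, which fixes the multiplication rule below.

pn, q, k = 7, 3, 2
E = (0, 0)                                # identity element

def mul(g, h):
    (i1, j1), (i2, j2) = g, h
    return ((i1 + i2 * pow(k, j1, pn)) % pn, (j1 + j2) % q)

def power(g, m):
    r = E
    for _ in range(m):
        r = mul(r, g)
    return r

elems = [(i, j) for i in range(pn) for j in range(q)]

def extends_to_automorphism(X, Y):
    # x -> X, y -> Y extends to a homomorphism iff the defining relations
    # x^{p^n} = y^q = 1 and y x y^{-1} = x^k hold for the images;
    # bijectivity is then checked on all p^n * q images.
    if power(X, pn) != E or power(Y, q) != E:
        return False
    if mul(mul(Y, X), power(Y, q - 1)) != power(X, k):
        return False
    image = {mul(power(X, i), power(Y, j)) for i in range(pn) for j in range(q)}
    return len(image) == pn * q

aut_order = sum(extends_to_automorphism(X, Y) for X in elems for Y in elems)
print(aut_order)    # expected to match |Hol(Z_7)| = 42
```

The same enumeration, with the multiplication rule adapted, also checks Lemma 6's count $p^{n-b}q(q-1)$ for small $G_b$.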
Then, using [\[Byotts-translation\]](#Byotts-translation){reference-type="ref" reference="Byotts-translation"}, we can determine the number of Hopf-Galois structures on $G$-extensions of type $N$. We will use the regularity criterion given in [1](#sec:introduction){reference-type="ref" reference="sec:introduction"}. This section will be divided into three subsections, depending on the isomorphism types of $G$ and $N$. From [Lemma 4](#lem:holaut){reference-type="ref" reference="lem:holaut"}, we have that $\textup{Aut}(\mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q) \cong \textup{Hol}(\mathbb{Z}_{p^n})$, where the action is given by $$\begin{aligned} \begin{pmatrix} \beta & \alpha\\ 0 & 1 \end{pmatrix} \cdot x^iy^j = x^{\beta i +\alpha k^{-1}-\alpha k^{j-1}}y^j.\end{aligned}$$ ## Embedding of $\mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q$ into $\textup{Hol}(\mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q)$ {#subsec:eGG} Let $\Phi : \mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q \longrightarrow \textup{Hol}(\mathbb{Z}_{p^n} \rtimes_k \mathbb{Z}_q)$ be a regular embedding.
Let $$\begin{aligned} \Phi(x)= \left(x^{i_1}y^{j_1}, \begin{pmatrix} \beta_1 & \alpha_1\\ 0 & 1 \end{pmatrix}\right), \Phi(y)= \left(x^{i_2}y^{j_2}, \begin{pmatrix} \beta_2 & \alpha_2\\ 0 & 1 \end{pmatrix}\right).\end{aligned}$$ From $(\Phi(x))^{p^n}=1$ we get $$\begin{aligned} j_1\equiv 0 \pmod{q},\label{e1} \end{aligned}$$ since $p^nj_1\equiv 0\pmod{q}$ and $(p,q)=1$, $$\begin{aligned} \beta_1^{p^n}\equiv 1&\pmod{p^n},\label{e2}\\ i_1(1+\beta_1+\beta_1^2+\ldots +\beta_1^{p^n-1})\equiv 0 &\pmod{p^n},\label{e3}\\ \alpha_1(1+\beta_1+\beta_1^2+\ldots +\beta_1^{p^n-1})\equiv 0 &\pmod{p^n}.\label{e4}\end{aligned}$$ Similarly from $\Phi(yxy^{-1})= \Phi(x^k)$ we get $$\begin{aligned} \beta_1^{k-1} &\equiv 1 \pmod{p^n},\label{f1}\end{aligned}$$ which implies $\beta_1=1$ from [\[e2\]](#e2){reference-type="ref" reference="e2"}, [\[f1\]](#f1){reference-type="ref" reference="f1"} and using [Lemma 5](#prop:unit){reference-type="ref" reference="prop:unit"}; furthermore, $$\begin{aligned} \beta_2\alpha_1 + \alpha_2 &\equiv \beta_1^k\alpha_2 + \alpha_1 \pmod{p^n},\label{f2}\\ ki_1\left(k^{j_2-1}\beta_2-1\right)&\equiv \alpha_1\left(1-k^{j_2}\right)\pmod{p^n}.\label{f3}\end{aligned}$$ Further taking $\beta_1 = 1$ in [\[f2\]](#f2){reference-type="ref" reference="f2"} and [\[f3\]](#f3){reference-type="ref" reference="f3"} we get that $$\begin{aligned} \alpha_1 \cdot(k-\beta_2) &\equiv 0 \pmod{p^n},\label{f4}\\ ki_1\cdot(k^{j_2-1}\beta_2-1) &\equiv \alpha_1\cdot (1-k^{j_2}) \pmod{p^n}.\label{f5}\end{aligned}$$ We note that in general,
$$\begin{aligned} \Phi(y)^k=\left(x^{\ell_k} y^{kj_2},\begin{pmatrix} \beta_2^k& \alpha_2(1+\beta_2+\beta_2^2+\cdots+\beta_2^{k-1})\\ 0 & 1 \end{pmatrix}\right),\end{aligned}$$ where $$\begin{aligned} \ell_k = i_2\left(\sum\limits_{t=0}^{k-1}\left(\beta_2k^{j_2}\right)^t\right)+\left(\alpha_2k^{j_2-1}-\alpha_2k^{2j_2-1}\right) \left(1+\sum\limits_{u=1}^{k-2} \left(\sum\limits_{v=0}^u\beta_2^v\right)k^{uj_2}\right)\label{g}.\end{aligned}$$ Using $\Phi(y)^q =1$ we get $$\begin{aligned} \beta_2^q &\equiv 1 \pmod{p^n},\label{g1}\\ \alpha_2(1+\beta_2+\beta_2^2+\ldots +\beta_2^{q-1})&\equiv 0 \pmod{p^n}&&j_2\neq 0,\label{g2}\\ \ell_q &\equiv 0 \pmod{p^n}.\label{g3} \end{aligned}$$ From [\[g1\]](#g1){reference-type="ref" reference="g1"} we get $\beta_2 = k^a$, for some $0\leq a\leq q-1$, since $\mathbb{Z}_{p^n}^\times$ has a unique subgroup of order $q$, which is generated by $k$. First, let us show that in any regular embedding $j_2\neq 0$. If possible, let $j_2=0$. Then we get that $\beta_2=k$. This forces that for any $0\leq \omega_1\leq p^n-1$ and $0\leq \omega_2\leq q-1$ $$\begin{aligned} \Phi(x)^{\omega_1}\Phi(y)^{\omega_2} =\left(x^{\omega_1i_1+i_2\left(1+k+\cdots+k^{\omega_2-1}\right)},\begin{pmatrix} k^{\omega_2}& \star\\ 0 & 1 \end{pmatrix}\right).\label{disc:regularity}\end{aligned}$$ Since $i_1$ is a unit, making a suitable choice of $\omega_1$ and $\omega_2$ we get that this embedding will not be regular. Indeed, note that $1-k$ and $1-k^{\omega_2}$ are both units and so is $1+k+\cdots+k^{\omega_2-1}$. We now divide the possibilities of $a$ into three cases. ### **Case I: $a=0$** Using [\[f3\]](#f3){reference-type="ref" reference="f3"} and [\[f4\]](#f4){reference-type="ref" reference="f4"}, we conclude that $\alpha_1 \equiv 0\pmod{p^n}$, $j_2 \equiv 1 \pmod{q}$ and $\alpha_2\equiv 0\pmod{p^n}$. Since $i_1$ is a unit in $\mathbb{Z}_{p^n}$ and $i_2\in\mathbb{Z}_{p^n}$ can take any value, the total number of embeddings in this case is given by $p^n\varphi(p^n)$.
Moreover, all of these embeddings are regular. We remark that all the above embeddings correspond to the canonical Hopf-Galois structure. ### **Case II: $a=1$** Note that using [\[f5\]](#f5){reference-type="ref" reference="f5"} we get that $ki_1\equiv -\alpha_1\pmod{p^n}$. We deal with this in two subcases, depending on the value of $j_2$. First, we consider the case $j_2=q-1$. In this case, using $\ell_q=0$, we get that $i_2$ is determined by the value of $\alpha_2$, since $\left(\sum\limits_{t=0}^{q-1}\left(\beta_2k^{j_2}\right)^t\right)=q$ is a unit in $\mathbb{Z}_{p^n}$. Hence the number of embeddings in this subcase is given by $p^n\varphi(p^n)$. For the other case, since the element $k^{j_2}(1-k^a)$ is a unit and $j_2+a\not\equiv 0 \pmod{q}$, we get $$\begin{aligned} 1+\sum\limits_{s=1}^{q-2}\left(\sum\limits_{t=0}^{s}k^{ta}\right)k^{sj_2} =\dfrac{1}{k^{j_2}(1-k^a)}\left\{\sum\limits_{t=1}^{q-1}\left(1-k^{ta}\right)k^{tj_2}\right\} =\dfrac{1}{k^{j_2}(1-k^a)}\cdot (1-1) =0.\end{aligned}$$ Thus $\Phi(y)^q=1$ does not impose any conditions on $i_2$ and $\alpha_2$. Hence, in this subcase, the total number of possibilities is $p^{2n}\varphi(p^n)(q-2)$. Since $j_2\neq 0$, we conclude that all the embeddings are regular. ### **Case III: $a\geq 2$** These conditions, together with [\[f4\]](#f4){reference-type="ref" reference="f4"} and [\[f5\]](#f5){reference-type="ref" reference="f5"}, imply that $\alpha_1=0$ and $j_2\equiv 1-a\pmod{q}$. Since $a+j_2\not\equiv 0\pmod{q}$, arguing mutatis mutandis as in Case II gives that $i_2$ and $\alpha_2$ can be chosen independently, whence each of them has $p^n$ possibilities. Thus, in this case, the total number of possibilities is given by $p^{2n}\varphi(p^n)(q-2)$. Similar to the previous case, all the embeddings are regular.
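The vanishing of the sum in Case II can also be confirmed numerically. The sketch below (plain Python; the parameter triples $(p,q,n)$ are arbitrary illustrative choices with $q\mid p-1$) brute-forces a unit $k$ of multiplicative order $q$ modulo $p^n$ and checks that the sum is $0$ modulo $p^n$ whenever $a$, $j_2$, and $a+j_2$ are nonzero modulo $q$:

```python
def find_order_q_unit(pn, q):
    """Brute-force a unit of multiplicative order exactly q modulo p^n."""
    for k in range(2, pn):
        if pow(k, q, pn) == 1 and all(pow(k, d, pn) != 1 for d in range(1, q)):
            return k
    raise ValueError("no element of order q modulo p^n")

def case_ii_sum(k, a, j2, q, pn):
    """1 + sum_{s=1}^{q-2} (sum_{t=0}^{s} k^{t a}) * k^{s j2}, reduced mod p^n."""
    total = 1
    for s in range(1, q - 1):
        inner = sum(pow(k, t * a, pn) for t in range(s + 1))
        total += inner * pow(k, s * j2, pn)
    return total % pn

for p, q, n in [(7, 3, 2), (11, 5, 1), (31, 5, 2)]:
    pn = p ** n
    k = find_order_q_unit(pn, q)
    for a in range(1, q):
        for j2 in range(1, q):
            if (a + j2) % q != 0:  # the hypothesis j2 + a not = 0 mod q
                assert case_ii_sum(k, a, j2, q, pn) == 0
print("the Case II sum vanishes mod p^n in all tested cases")
```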
Summarizing the above cases we get the following result. **Lemma 9**. *The total number of regular embeddings of $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_{q}$ inside $\textup{Hol}(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_{q})$ is given by $2p^n\varphi(p^n)+2p^{2n}\varphi(p^n)(q-2)$.* **Proposition 10**. *Let ${G}$ be a non-abelian group of the form $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q$, where $p$ and $q$ are primes satisfying $q|p-1$. Then $e({G},{G})$ is given by $2+2p^n(q-2)$.* *Proof.* From [Lemma 9](#prop:regular-embeddings){reference-type="ref" reference="prop:regular-embeddings"} we get the total number of regular embeddings. Dividing this number by the order of the automorphism group of $G$ gives the total number of Hopf-Galois structures. ◻ ## Embedding of $G=\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$ in $\textup{Hol}(\mathbb{Z}_{p^n}\times \mathbb{Z}_q)$ {#subsec:eGC} Next we consider the case of a regular embedding of $G=\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$ in $\textup{Hol}(\mathbb{Z}_{p^n}\times \mathbb{Z}_q)$. Let us fix the presentation of $C=\mathbb{Z}_{p^n}\times \mathbb{Z}_q$ to be $\langle r,s|r^{p^n}=s^q=1,rs=sr\rangle.$ Then it can be shown that $\textup{Hol}(C)\cong\textup{Hol}(\mathbb{Z}_{p^n})\times\textup{Hol}(\mathbb{Z}_q)$. We take a typical element of $\textup{Hol}(C)$ to be $\left(\begin{pmatrix} b&a\\0&1 \end{pmatrix},\begin{pmatrix} d&c\\0&1 \end{pmatrix}\right)$, where $a$, $c$ are elements of $\mathbb{Z}_{p^n}$, $\mathbb{Z}_q$ respectively, and $b$, $d$ are elements of $\mathbb{Z}_{p^n}^\times$, $\mathbb{Z}_q^\times$ respectively.
We start with an embedding $\Phi$ of $G$ inside $\textup{Hol}(C)$ and assume that $$\begin{aligned} \Phi(x)=\left(\begin{pmatrix} b_1&a_1\\0&1 \end{pmatrix},\begin{pmatrix} d_1&c_1\\0&1 \end{pmatrix}\right), \Phi(y)=\left(\begin{pmatrix} b_2&a_2\\0&1 \end{pmatrix},\begin{pmatrix} d_2&c_2\\0&1 \end{pmatrix}\right).\end{aligned}$$ From $\Phi(x)^{p^n}=e_{\textup{Hol}(C)}$ we get the equations $$\begin{aligned} b_1^{p^n}&\equiv 1\pmod{p^n},\label{gc1}\\ a_1\left(1+b_1+\cdots+b_1^{p^n-1}\right)&\equiv 0\pmod{p^n}\label{gc2},\\ d_1^{p^n}&\equiv 1\pmod{q},\label{gc3}\\ c_1\left(1+d_1+\cdots+d_1^{p^n-1}\right)&\equiv 0\pmod{q}\label{gc4}.\end{aligned}$$ Note that $d_1^{q-1}\equiv 1\pmod{q}$ and $(q-1,p^n)=1$. Combining this with [\[gc3\]](#gc3){reference-type="ref" reference="gc3"}, we get that $d_1=1$. Then plugging $d_1=1$ into [\[gc4\]](#gc4){reference-type="ref" reference="gc4"}, we conclude that $c_1=0$. To ensure regularity, we need $a_1$ to be a unit in $\mathbb{Z}_{p^n}$. Using the equation $\Phi(y)^q=1$ we get the equations $$\begin{aligned} b_2^{q}&\equiv1\pmod{p^n},\label{gcy1}\\ a_2\left(1+b_2+\cdots+b_2^{q-1}\right)&\equiv 0\pmod{p^n}\label{gcy2},\\ d_2^{q}&\equiv 1\pmod{q},\label{gcy3}\\ c_2\left(1+d_2+\cdots+d_2^{q-1}\right)&\equiv 0\pmod{q} \label{gcy4}.\end{aligned}$$ Since the order of $d_2$ divides $q-1$, we get $d_2=1$ from [\[gcy3\]](#gcy3){reference-type="ref" reference="gcy3"}. Finally, comparing both sides of the equation $\Phi(x)^k\Phi(y)=\Phi(y)\Phi(x)$ we get (using the conclusions of the preceding discussion) $$\begin{aligned} b_1^{k-1}\equiv 1&\pmod{p^n}\label{gcxy1}\\ b_2a_1+a_2\equiv b_1^ka_2+a_1\left(1+b_1+\cdots+b_1^{k-1}\right) &\pmod{p^n}\label{gcxy2}.\end{aligned}$$ Using [Lemma 5](#prop:unit){reference-type="ref" reference="prop:unit"}, [\[gc1\]](#gc1){reference-type="ref" reference="gc1"} and [\[gcxy1\]](#gcxy1){reference-type="ref" reference="gcxy1"} we conclude that $b_1=1$.
Putting the value of $b_1$ in [\[gcxy2\]](#gcxy2){reference-type="ref" reference="gcxy2"} we get that $b_2=k$. Further, to ensure regularity we need to impose $c_2\neq 0$ (using an argument similar to the discussion after [\[disc:regularity\]](#disc:regularity){reference-type="ref" reference="disc:regularity"}). Thus the total number of regular embeddings in this case is given by $\varphi(p^n)p^n(q-1)$. **Proposition 11**. *Let ${C}$ be the cyclic group of order $p^nq$ and ${G}$ be the nonabelian group isomorphic to $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q$, where $p$ and $q$ are primes. Then $e({G},{C})=p^n$ and $e'({G},{C})=q-1$.* ## Embedding of ${C}=\mathbb{Z}_{p^n}\times \mathbb{Z}_q$ in $\textup{Hol}(\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q)$ {#subsec:eCG} Recall the description of $\textup{Hol}(G)$ from [3.1](#subsec:eGG){reference-type="ref" reference="subsec:eGG"} and the presentation for $C$ from [3.2](#subsec:eGC){reference-type="ref" reference="subsec:eGC"}. Consider a homomorphism $\Phi:{C}\longrightarrow\textup{Hol}(G)$ determined by $$\begin{aligned} \Phi(r)=\left(x^{i_1}y^{j_1},\begin{pmatrix} \beta_1&\alpha_1\\ 0 & 1 \end{pmatrix}\right), \Phi(s)=\left(x^{i_2}y^{j_2},\begin{pmatrix} \beta_2&\alpha_2\\ 0 & 1 \end{pmatrix}\right). \end{aligned}$$ Given that $\Phi(r)$ has to be an element of order $p^n$ and the embedding is regular, using a similar argument as in [3.1](#subsec:eGG){reference-type="ref" reference="subsec:eGG"} we conclude that $j_1=0$, $i_1$ is a unit in $\mathbb{Z}_{p^n}$, and $j_2$ is a unit in $\mathbb{Z}_q$. From $\Phi(r)^{p^n}=1$, we get that $$\begin{aligned} i_1\left(1+\beta_1+\cdots+\beta_1^{p^n-1}\right)&\equiv 0\pmod{p^n},\\ \alpha_1\left(1+\beta_1+\cdots+\beta_1^{p^n-1}\right)&\equiv 0\pmod{p^n},\\ \beta_1^{p^n}&\equiv 1\pmod{p^n}.\end{aligned}$$ From the last equation above and [@ArPa22 Corollary 2.2] we get that $\beta_1\equiv 1\pmod{p}$.
Hence the first two equations will always be satisfied irrespective of the choices of $i_1$ and $\alpha_1$. From the equation $\Phi(s)^q=1$, we get $$\begin{aligned} \beta_2^q &\equiv 1 \pmod{p^n},\label{ecg1}\\ \alpha_2(1+\beta_2+\beta_2^2+\ldots +\beta_2^{q-1})&\equiv 0 \pmod{p^n},\label{ecg2}\\ \ell_q &\equiv 0 \pmod{p^n},\label{ecg3}\end{aligned}$$ where $\ell_q$ is as defined in [3.1](#subsec:eGG){reference-type="ref" reference="subsec:eGG"}. Furthermore, $\Phi(r)\Phi(s)=\Phi(s)\Phi(r)$ gives that $$\begin{aligned} \alpha_2(\beta_1-1)&\equiv \alpha_1(\beta_2-1)\pmod{p^n},\label{ecg4} \\ i_1+\beta_1i_2+\alpha_1k^{-1}\left(1-k^{j_2}\right) &\equiv i_2+k^{j_2}\beta_2i_1\pmod{p^n}\label{ecg5}.\end{aligned}$$ Let $\beta_2=k^a$ for some $a\geq 0$. We divide this into two cases: $a=0$ and $a\neq 0$. ### Case I: $a=0$ In this case we get $\alpha_2=0$ from [\[ecg2\]](#ecg2){reference-type="ref" reference="ecg2"}. Hence [\[ecg4\]](#ecg4){reference-type="ref" reference="ecg4"} is always satisfied. Note that [\[ecg3\]](#ecg3){reference-type="ref" reference="ecg3"} holds true, since $j_2+a\not\equiv 0\pmod{q}$, by arguments similar to those of [3.1](#subsec:eGG){reference-type="ref" reference="subsec:eGG"}. Putting $\beta_2=1$ in [\[ecg5\]](#ecg5){reference-type="ref" reference="ecg5"} we get $\left(i_1+\alpha_1k^{-1}\right)\left(1-k^{j_2}\right)\equiv i_2\left(1-\beta_1\right)\pmod{p^n}$. Hence the choice of $\alpha_1$ is determined by those of $i_1$, $i_2$, $\beta_1$, and $j_2$. Hence the total number of embeddings in this case becomes $\varphi(p^n)p^{2n-1}(q-1)$.
### Case II: $a\neq 0$ From [\[ecg4\]](#ecg4){reference-type="ref" reference="ecg4"}, substituting $\alpha_1= \alpha_2(\beta_1-1)(k^a-1)^{-1}$ in [\[ecg5\]](#ecg5){reference-type="ref" reference="ecg5"} we get $$\begin{aligned} i_1\left(k^a-1\right)\left(1-k^{j_2+a}\right)\equiv \left(1-\beta_1\right)\left(i_2\left(k^a-1\right)+\alpha_2k^{-1}\left(1-k^{j_2}\right)\right)\pmod{p^n}.\label{ecg6}\end{aligned}$$ We claim that $j_2+a=q$. Indeed, if $j_2+a\neq q$, then the LHS of [\[ecg6\]](#ecg6){reference-type="ref" reference="ecg6"} is a unit in $\mathbb{Z}_{p^n}$, whereas $(1-\beta_1)$ is never a unit (since $\beta_1\equiv 1\pmod{p}$). Next, putting $j_2+a=q$ in [\[ecg6\]](#ecg6){reference-type="ref" reference="ecg6"}, the LHS becomes $0$. Substituting $j_2+a=q$ in [\[ecg3\]](#ecg3){reference-type="ref" reference="ecg3"} we get $$\begin{aligned} i_2\equiv-\alpha_2k^{-1}(1-k^{j_2})k^{j_2}q^{-1}\left(1+(1+k^a)k^{j_2}+\cdots+(1+k^a+\cdots+k^{(q-2)a})k^{(q-2)j_2}\right)\pmod{p^n}.\end{aligned}$$ Further substituting this value of $i_2$ into [\[ecg6\]](#ecg6){reference-type="ref" reference="ecg6"}, we get that both sides of the equation become zero. Hence in this case the total number of regular embeddings of ${C}$ in $\textup{Hol}(G)$ is given by $\varphi(p^n)p^{2n-1}(q-1)$. **Proposition 12**. *Let ${C}$ be the cyclic group of order $p^nq$ and ${G}$ be the nonabelian group isomorphic to $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q$. Then $e({C},{G})=2p^{n-1}(q-1)$ and $e'({C},{G})=p^{2n-1}$.* Now [Theorem 1](#theorem-p->-q){reference-type="ref" reference="theorem-p->-q"} follows from [Proposition 10](#thm:total-isomorphic){reference-type="ref" reference="thm:total-isomorphic"}, [Proposition 11](#thm:G-Cyclic){reference-type="ref" reference="thm:G-Cyclic"}, and [Proposition 12](#thm:G-Cyclic-number){reference-type="ref" reference="thm:G-Cyclic-number"}.
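Before turning to the case $p<q$, the unit-group facts used repeatedly in this section — the unique subgroup of order $q$ in $\mathbb{Z}_{p^n}^{*}$, the vanishing of the full geometric sum over that subgroup, and the forcing of elements like $b_1$ to equal $1$ — admit a quick numerical sanity check. A minimal sketch, with the sample values $p=7$, $q=3$, $n=2$ chosen arbitrarily:

```python
from math import gcd

p, q, n = 7, 3, 2            # sample primes with q | p - 1
pn = p ** n                  # 49
units = [x for x in range(1, pn) if gcd(x, pn) == 1]

# (i) x^q = 1 has exactly q solutions, forming a single cyclic subgroup <k>
roots = [x for x in units if pow(x, q, pn) == 1]
assert len(roots) == q
k = next(x for x in roots if x != 1)
assert sorted(pow(k, t, pn) for t in range(q)) == sorted(roots)

# (ii) the full geometric sum over <k> vanishes mod p^n
assert sum(pow(k, t, pn) for t in range(q)) % pn == 0

# (iii) x^{p^n} = 1 together with x^{k-1} = 1 forces x = 1
sylow = [x for x in units if pow(x, pn, pn) == 1]
assert [x for x in sylow if pow(x, k - 1, pn) == 1] == [1]
print("unit-group facts verified for (p, q, n) =", (p, q, n))
```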
# The case $p < q$ {#sec:main-results-2} In this section, we prove [Theorem 2](#theorem-p-<-q){reference-type="ref" reference="theorem-p-<-q"}. We use the methods described at the beginning of [3](#sec:main-results-1){reference-type="ref" reference="sec:main-results-1"}. In this case, there are exactly $b+1$ types of groups up to isomorphism, where $b=\mathrm{min}\{a,n\}$ with $p^a||q-1$. This section will be divided into four subsections, depending on the isomorphism types of $G=G_{b_1}$ and $N=G_{b_2}$, where $0\leq b_1,b_2\leq n$. ## Isomorphic type {#subsub:isom} First, we consider the isomorphic case. Let $G= \mathbb{Z}_q \rtimes_k \mathbb{Z}_{p^n}$, where $k$ is an element of order $p^b$. We are looking at $e(G, G)$. ### The case $b=0$ In this case, the groups are cyclic and $e'(G,G)$, $e(G,G)$ have been enumerated in [@By13 Theorem 2]. ### The case when $0<b \leq n$ Let us take a group homomorphism $\Phi:G_b\longrightarrow\textup{Hol}(G_b)$ defined by $$\begin{aligned} \Phi(x)=\left(y^{j_1}x^{i_1},\left(\gamma_1,\begin{pmatrix} \beta_1 & \alpha_1\\ 0 & 1 \end{pmatrix}\right)\right), \text{ and }\Phi(y)=\left(y^{j_2}x^{i_2},\left(\gamma_2,\begin{pmatrix} \beta_2 & \alpha_2\\ 0 & 1 \end{pmatrix}\right)\right).\end{aligned}$$ From $\Phi(y)^q=1$ and from $\Phi(xy)= \Phi(y^kx)$, we get the relations $i_2=0$, $\beta_2=1$, $\gamma_2=1$ and $$\begin{aligned} \alpha_2(k-\beta_1)\equiv0&\pmod{q},\label{iso1}\\ j_2(k^{i_1-1}\beta_1-1)\equiv \alpha_2(1+k+k^2+\cdots+ k^{i_1-1})&\pmod{q}.\label{iso2}\end{aligned}$$ Thus if $\alpha_2=0$, then $\beta_1= k^{1-i_1}$. If $\alpha_2\neq 0$, then $\beta_1=k$ and $\alpha_2= j_2(k-1)$. From $\Phi(x)^{p^n}=1$, we get the following equivalences in $\mathbb{Z}_q$.
$$\begin{aligned} {\beta_1}^{p^n}= 1\label{xpower1}\\ \alpha_1(1+\beta_1+{\beta_1}^2+ \cdots +\beta_1^{p^n-1})= 0\label{xpower2}.\end{aligned}$$ By explicit calculations, we can show that the exponent of $y$ in $\Phi(x)^{p^n}$ is given by $$\begin{aligned} \mathrm{Exp}_y\left(\Phi(x)^{p^n}\right)=j_1\left(\sum\limits_{u=0}^{p^n-1}m^u\right)+\dfrac{\alpha_1}{m(k^{\gamma_1}-1)}\left\{\sum\limits_{v=1}^{p^n-1}m^{p^n-v}\left(k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)}-k^{i_1}\right)\right\},\end{aligned}$$ where $m=\beta_1 k^{i_1}$. Using [\[iso1\]](#iso1){reference-type="ref" reference="iso1"} and [\[iso2\]](#iso2){reference-type="ref" reference="iso2"}, we can show that $m\in\{k,k^{i_1+1}\}$. First, let us take $m=k$. Then $\sum\limits_{u=0}^{p^n-1}m^{u}\equiv 0\pmod{q}$. We aim to show that the other summand is also zero in $\mathbb{Z}_q$. We have in $\mathbb{Z}_q$ $$\begin{aligned} \sum\limits_{v=1}^{p^n-1}m^{p^n-v}\left(k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)}-k^{i_1}\right) =\sum\limits_{v=1}^{p^n}k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)-v}.\end{aligned}$$ Note that here $i_1$ and $\gamma_1$ are fixed. Denote by $\Gamma(v)=i_1(1+\gamma_1+\ldots+\gamma_1^{v-1})-v\pmod{p^n}$. Suppose for $1\leq v_1\neq v_2\leq p^n$ we have $\Gamma(v_1)\equiv\Gamma(v_2)\pmod{p^n}$. Then we have $i_1(\gamma_1^{v_1}-\gamma_1^{v_2})\equiv(v_1-v_2)(\gamma_1-1)\pmod{p^n}$. The Sylow-$p$ subgroup of $\mathbb{Z}_{p^n}^\times$ is generated by $1+p$, and $\gamma_1$ is an element of $p$-power order, say of order $p^g$; then $p^{n-g}||\gamma_1-1$. Thus $v_1-v_2\equiv0\pmod{p^g}$, using [Lemma 3](#lem-power-of-p){reference-type="ref" reference="lem-power-of-p"}. Conversely, if $v_1-v_2\equiv 0\pmod{\mathrm{ord}(\gamma_1)}$, then $i_1(\gamma_1^{v_1}-\gamma_1^{v_2})\equiv (v_1-v_2)(\gamma_1-1)\pmod{p^n}$.
Thus $\Gamma$ gives rise to a function from $\mathbb{Z}_{p^n}$ to the subset $\{p^g,2p^g,3p^g,\ldots,p^n\}$. Thus the sum is reduced to $p^g\sum\limits_{t=1}^{p^{n-g}}k^{tp^g}$. If $k^{p^g}=1$, we get the sum to be zero. Otherwise this sum is $p^g\dfrac{k^{p^n}-1}{k^{p^g}-1}=0$. This finishes the proof. Now, take the case when $m=k^{i_1+1}$. Then again the coefficient of $j_1$ is zero in $\mathbb{Z}_q$. We claim that the other summand is also zero in the above expression. We have in this case $$\begin{aligned} &\sum\limits_{v=1}^{p^n-1}m^{p^n-v}\left(k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)}-k^{i_1}\right) =\sum\limits_{v=1}^{p^n}k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}-v\right)-v}-\sum\limits_{v=1}^{p^n}k^{i_1(1-v)-v}\\ =&\begin{cases} \sum\limits _{v=1}^{p^n}k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)-\left(i_1+1\right)v} &\text{ when }i_1+1\not\equiv 0\pmod{p}\\ \sum\limits _{v=1}^{p^n}k^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}-v\right)-v}-\sum\limits_{v=1}^{p^n}k^{i_1(1-v)-v}&\text{otherwise}. \end{cases}\end{aligned}$$ We start by considering the first subcase, i.e., $i_1+1$ being a unit in $\mathbb{Z}_{p^n}$. Again denote by $\Gamma(v)=i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)-(i_1+1)v$. Then $\Gamma(v_1)\equiv\Gamma(v_2)\pmod{p^n}$ implies that $i_1(\gamma_1^{v_1}-\gamma_1^{v_2})\equiv (i_1+1)(\gamma_1-1)(v_1-v_2)\pmod{p^n}$. Then proceeding as before, we get the result. Next, consider the second subcase. In this case, we show that both of the sums are zero. Take $\Gamma_1(v)=i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}-v\right)-v$ and $\Gamma_2(v)=i_1(1-v)-v$. Assume $p^h||i_1+1$; then $\Gamma_2(v')=\Gamma_2(v'')$ iff $v'\equiv v''\pmod{p^{n-h}}$, using [Lemma 3](#lem-power-of-p){reference-type="ref" reference="lem-power-of-p"}.
Thus $\Gamma_2$ determines a function to the subset $\{p^{n-h},2p^{n-h},\ldots, p^n\}$ and hence the second term of the expression above vanishes. An argument similar to the previous cases of $\Gamma(v)$ shows that the first term is $0$ as well in $\mathbb{Z}_q$. Thus we have proved the following lemma. **Lemma 13**. *In $\mathrm{Exp}_y\left(\Phi(x)^{p^n}\right)$, if the coefficient of $j_1$ is zero in $\mathbb{Z}_q$, then so is the coefficient of $\alpha_1$.* We claim that $i_1$ is a unit. Suppose $i_1$ is not a unit. We note that $\Phi(x)^{p^{n-1}}=\left(y^{J},\left(1,\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\right)\right)$, for some $J$. Note that if $\beta_1=1$ then $\alpha_1=0$, and otherwise $1+\beta_1+ \ldots+\beta_1^{p^n-1}=0$, whence the matrix entry is justified. Now, if $J=0$ then this map is not regular. Otherwise, when $J\neq 0$, we get that $J$ is a unit in $\mathbb{Z}_q$. Since $p$ is a unit in $\mathbb{Z}_q$, we get that $\Phi(x)^{p^n}$ is not the identity element. This proves the claim. Now we are ready to count the number of Hopf-Galois structures on extensions whose group is of the form ${G}_b$ for some $0<b\leq n$. This will be divided into four cases. Before proceeding, we note that none of the cases imposes any condition on $j_2$ and $\gamma_1$. *Case $1$: The case $\beta_1=1$.* This implies $\alpha_2=0$, since if $\alpha_2\neq 0$ then $\beta_1= k\neq1$. From $\alpha_2= 0$ we get that $i_1\equiv 1 \pmod{p^b}$, from which we get that $i_1$ has $p^{n-b}$ possibilities. Further, $\alpha_1=0$ from [\[xpower2\]](#xpower2){reference-type="ref" reference="xpower2"}. In this case, $j_1$ has $q$ possibilities since $m\neq 1$, using [Lemma 13](#lem-simulatenous-zero){reference-type="ref" reference="lem-simulatenous-zero"}. Thus in this case we get $\varphi(q)q p^{2(n-b)}$ embeddings. *Case $2$: The case $\beta_1\neq1$ and $\alpha_2=0$.* Note that $\alpha_2=0$ implies that $\beta_1=k^{1-i_1}$.
Also, $\beta_1\neq 1$ imposes the condition that $i_1$ has $\varphi(p^n)-p^{n-b}$ possibilities. In this case, $j_1$ and $\alpha_1$ have $q$ possibilities each. Thus in this case we have $\varphi(q)\left(\varphi(p^n)-p^{n-b}\right) q^2 p^{n-b}$ embeddings. *Case $3$: The case $\beta_1\neq1$, $\alpha_2\neq 0$, and $1+i_1\equiv 0\pmod{p^b}$.* Since $1+i_1\equiv 0\pmod{p^b}$, we get $m=1$. Hence the value of $j_1$ gets fixed. Thus in this case, we have $\varphi(q)q p^{2(n-b)}$ embeddings. *Case $4$: The case $\beta_1\neq1$, $\alpha_2\neq 0$, and $1+i_1\not\equiv 0\pmod{p^b}$.* In this case $i_1$ has $\varphi(p^n)-p^{n-b}$ possibilities. Similar to Case $2$, $j_1$ has $q$ possible values. Thus in this case, we have $\varphi(q)\left(\varphi(p^n)-p^{n-b}\right) q^2 p^{n-b}$ embeddings. In all of the cases above, the embeddings are regular, which is guaranteed by the conditions that $i_1$ and $j_2$ are units. In conclusion, we have proved the following result. **Proposition 14**. *Let ${G}_b=\mathbb{Z}_q\rtimes_k \mathbb{Z}_{p^n}$, where $k\in\mathbb{Z}_q$ is of order $p^b$ for some $0<b\leq n$. Then $e'\left({G}_b,{G}_b\right)=e\left({G}_b,{G}_b\right)=2\left(p^{n-b}+q\left(\varphi(p^n)-p^{n-b}\right)\right)$.* ## Non-isomorphic type This case will be divided into three subcases, depending on the values of $b_1$ and $b_2$. ### The case $1\leq b_1\neq b_2\leq n$ We will need a variation of [Lemma 13](#lem-simulatenous-zero){reference-type="ref" reference="lem-simulatenous-zero"} for dealing with this case. We start with a presentation of these two groups. For $t=1$ and $2$, let us fix $$\begin{aligned} {G}_{b_t}=\left\langle x_t,y_t\middle\vert x_t^{p^n}=y_t^{q}=1, x_ty_tx_t^{-1}=y_t^{k_t}\right\rangle,\end{aligned}$$ where $k_t$ is an element of order $p^{b_t}$.
Now let $\Phi:{G}_{b_1}\longrightarrow \textup{Hol}\left({G}_{b_2}\right)$ be a regular embedding with $\Phi(x_1)=\left(y_2^{j_1}x_2^{i_1},\left(\gamma_1, \begin{pmatrix} \beta_1 & \alpha_1\\ 0 & 1 \end{pmatrix}\right)\right)$; then it can be proved that $$\begin{aligned} \mathrm{Exp}_y\left(\Phi(x_1)^{p^n}\right)=j_1\left(\sum\limits_{u=0}^{p^n-1}m^u\right)+\dfrac{\alpha_1}{m(k_2^{\gamma_1}-1)}\left\{\sum\limits_{v=1}^{p^n-1}m^{p^n-v}\left(k_2^{i_1\left(1+\gamma_1+\ldots+\gamma_1^{v-1}\right)}-k_2^{i_1}\right)\right\},\end{aligned}$$ where $m=\beta_1 k_2^{i_1}$. It can be shown that $m\in\left\{k_1,k_1k_2^{i_1} \right\}$ modulo $q$, using [\[iso1\]](#iso1){reference-type="ref" reference="iso1"} and [\[iso2\]](#iso2){reference-type="ref" reference="iso2"}. Note that in any of the cases $b_1<b_2$ or $b_2<b_1$, $m$ is purely a power of $k_1$ or $k_2$, since $\mathbb{Z}_{q}^\times$ is cyclic. Then a variation of the argument before [Lemma 13](#lem-simulatenous-zero){reference-type="ref" reference="lem-simulatenous-zero"} proves the following result. **Lemma 15**. *In $\mathrm{Exp}_y\left(\Phi(x_1)^{p^n}\right)$, if the coefficient of $j_1$ is $0$ in $\mathbb{Z}_q$, then so is the coefficient of $\alpha_1$.* Hoping that the reader is now familiar with the flow of arguments, without loss of generality in this case we will assume that the embedding is given by $$\begin{aligned} \Phi(x_1) = \left(y_2^{j_1}x_2^{i_1},\left(\gamma_1,\begin{pmatrix} \beta_1 & \alpha_1 \\ 0 & 1 \end{pmatrix}\right)\right), \Phi(y_1) = \left(y_2^{j_2},\left(1,\begin{pmatrix} 1 & \alpha_2 \\ 0 & 1 \end{pmatrix}\right)\right),\end{aligned}$$ where $i_1$ is a unit in $\mathbb{Z}_{p^n}$ (using the same argument as in [4.1](#subsub:isom){reference-type="ref" reference="subsub:isom"}), $\gamma_1$ is a unit in $\mathbb{Z}_{p^n}$ satisfying $\gamma_1^{p^{n-b_2}}=1$, and $j_2$ is a unit in $\mathbb{Z}_q$.
Comparing both sides of the equation $\Phi(x_1)\Phi(y_1)=\Phi(y_1)^{k_1}\Phi(x_1)$, we get $$\begin{aligned} \alpha_2(k_1-\beta_1)\equiv 0&\pmod{q},\label{noniso1}\\ k_2^{i_1}\beta_1j_2\equiv j_2k_1+\alpha_2\left(1+k_2+\ldots+k_2^{i_1-1}\right)&\pmod{q}.\label{noniso2}\end{aligned}$$ From [\[noniso1\]](#noniso1){reference-type="ref" reference="noniso1"} either $\alpha_2=0$ or $\beta_1=k_1$. Irrespective of the cases, $\beta_1k_2^{i_1}\neq 1$. Thus from [Lemma 15](#lem-simul-non-iso){reference-type="ref" reference="lem-simul-non-iso"} $j_1$ can take any value from $\mathbb{Z}_{q}$. Now, in the first case, $\beta_1=k_1k_2^{-i_1}$ (from [\[noniso2\]](#noniso2){reference-type="ref" reference="noniso2"}). Also $\gamma_1$ and $\alpha_1$ have $p^{n-b_2}$ and $q$ many choices respectively. Thus the total number of embeddings in this case is given by $\varphi(q)\varphi(p^n)q^2p^{n-b_2}$. In the second case, $\alpha_2=(k_2-1)j_2$ and $\gamma_1$, $\alpha_1$ have $p^{n-b_2}$, $q$ many choices respectively. Thus the total number of embeddings arising from this case is given by $\varphi(q)\varphi(p^n)q^2p^{n-b_2}$. Given that $i_1$ and $j_2$ are units, we get that the constructed map is regular. We now have the following result. **Proposition 16**. *Let ${G}_{b_t}= \mathbb{Z}_q\rtimes _{k_t}\mathbb{Z}_{p^n}$, where $k_t$ is an element of $\mathbb{Z}_q$ of order $p^{b_t}$, for $t=1$, $2$. Let $0<b_1\neq b_2\leq n$. Then $$\begin{aligned} e'\left({G}_{b_1},{G}_{b_2}\right)= 2qp^{n+b_1-b_2-1}(p-1),~e\left({G}_{b_1},{G}_{b_2}\right)=2qp^{n-1}(p-1). 
\end{aligned}$$* ### The case $0= b_1<b_2\leq n$ In this case ${G}_{b_1}$ is cyclic and hence the presentations of the groups ${G}_{b_1}$ and ${G}_{b_2}$ are chosen to be $$\begin{aligned} {G}_{b_1}=\left\langle x_1,y_1\middle\vert x_1^{p^n}=y_1^{q}=1, x_1y_1x_1^{-1}=y_1\right\rangle, {G}_{b_2}=\left\langle x_2,y_2\middle\vert x_2^{p^n}=y_2^{q}=1, x_2y_2x_2^{-1}=y_2^{k_2}\right\rangle,\end{aligned}$$ with $k_2\in\mathbb{Z}_{q}$ being an element of multiplicative order $p^{b_2}$. Fix a homomorphism $\Phi:{G}_{b_1}\longrightarrow \textup{Hol}({G}_{b_2})$ given by $$\begin{aligned} \Phi(x_1)=\left(y_2^{j_1}x_2^{i_1},\left(\gamma_1,\begin{pmatrix} \beta_1 & \alpha_1 \\ 0 & 1 \end{pmatrix}\right)\right), \Phi(y_1)=\left(y_2^{j_2}x_2^{i_2},\left(\gamma_2,\begin{pmatrix} \beta_2 & \alpha_2 \\ 0 & 1 \end{pmatrix}\right)\right).\end{aligned}$$ From the condition $\Phi(y_1)^q=1$, we get that $i_2=0$, $\gamma_2=1$ and $\beta_2=1$. To ensure the regularity of the map, we will need $i_1$ and $j_2$ to be units in $\mathbb{Z}_{p^n}$ and $\mathbb{Z}_{q}$ respectively (see [4.1](#subsub:isom){reference-type="ref" reference="subsub:isom"}). Equating the two sides of the equality $\Phi(x_1)\Phi(y_1)=\Phi(y_1)\Phi(x_1)$, we get that $$\begin{aligned} \alpha_2(1-\beta_1)\equiv 0 & \pmod{q},\label{nonisocyc1}\\ \beta_1k_2^{i_1}j_2\equiv j_2+\alpha_2\left(1+k_2+\ldots+k_2^{i_1-1}\right)&\pmod{q}.\label{nonisocyc2}\end{aligned}$$ Hence from [\[nonisocyc1\]](#nonisocyc1){reference-type="ref" reference="nonisocyc1"} we have either $\alpha_2=0$ or $\beta_1=1$. In case $\alpha_2=0$, plugging this value into [\[nonisocyc2\]](#nonisocyc2){reference-type="ref" reference="nonisocyc2"} we get that $\beta_1k_2^{i_1}=1$, whence $j_1$ has a fixed choice once $\alpha_1$ is fixed. Furthermore, $\alpha_1$ and $\gamma_1$ have $q$ and $p^{n-b_2}$ choices respectively.
In the case $\beta_1=1$, from [\[nonisocyc2\]](#nonisocyc2){reference-type="ref" reference="nonisocyc2"} we get that $\alpha_2=j_2(k_2-1)$ and $\beta_1 k_2^{i_1}\neq 1$. Hence [Lemma 15](#lem-simul-non-iso){reference-type="ref" reference="lem-simul-non-iso"} applies. Thus $j_1$ and $\gamma_1$ have $q$ and $p^{n-b_2}$ possibilities respectively. We conclude that in both cases the number of regular embeddings of the cyclic group of order $p^nq$ in $\textup{Hol}\left({G}_{b_2}\right)$ is given by $q\varphi(q)p^{n-b_2}\varphi(p^n)$. We have the following result. **Proposition 17**. *Let ${C}$ denote the cyclic group of order $p^nq$ and ${G}_b\cong\mathbb{Z}_q\rtimes_{k_b}\mathbb{Z}_{p^n}$, where $k_b\in\mathbb{Z}_q$ is an element of multiplicative order $p^{b}$. Then $$\begin{aligned} e'({C},{G}_b)=2p^{n-b}q, \text{ and }e({C},{G}_b)=2(p-1) p^{n-1}. \end{aligned}$$* ### The case $0=b_2< b_1\leq n$ Here we count the number $e'({G}_{b_1},{G}_{b_2})$ (equivalently $e({G}_{b_1},{G}_{b_2})$). Here ${G}_{b_2}$ is a cyclic group of order $p^nq$. In this case, we have $$\begin{aligned} \textup{Hol}({G}_{b_2})\cong \left\{\left(y_2^jx_2^{i},(\omega,\delta)\right)\middle\vert \substack{ (j,i)\in\mathbb{Z}_q\times\mathbb{Z}_{p^n}\\ (\omega,\delta)\in\mathbb{Z}_q^{\times}\times \mathbb{Z}_{p^n}^\times}\right\}.\end{aligned}$$ We fix an embedding $\Phi:{G}_{b_1}\longrightarrow \textup{Hol}\left({G}_{b_2}\right)$ determined by $$\begin{aligned} \Phi(x_1)=\left(y_2^{j_1}x_2^{i_1},\left(\omega_1,\delta_1\right)\right),~ \Phi(y_1)=\left(y_2^{j_2}x_2^{i_2},\left(\omega_2,\delta_2\right)\right).\end{aligned}$$ From $\Phi(y_1)^q=1$, we get $\omega_2=1,\delta_2=1$ and $i_2=0$.
Considering $\Phi(x_1)^{p^n}=1$ we get that $\omega_1^{p^n}=1$, $\delta_1^{p^n}=1$, and $$\begin{aligned} j_1\left(1+\omega_1+\ldots+\omega_1^{p^n-1}\right)\equiv 0 &\pmod{q},\label{intocyc1}\\ i_1\left(1+\delta_1+\ldots+\delta_1^{p^n-1}\right)\equiv 0 &\pmod{p^n}.\label{intocyc2}\end{aligned}$$ Finally, comparing both sides of the equation $\Phi(x_1)\Phi(y_1)=\Phi(y_1)^{k_1}\Phi(x_1)$, we get that $\omega_1=k_1$, and hence [\[intocyc1\]](#intocyc1){reference-type="ref" reference="intocyc1"} is satisfied automatically. To ensure that the embedding is regular, we will need that $i_1$ and $j_2$ are units. Any choice of $\delta _1$ satisfies [\[intocyc2\]](#intocyc2){reference-type="ref" reference="intocyc2"}. Thus $j_1$, $j_2$, $i_1$, and $\delta_1$ have $q$, $\varphi(q)$, $\varphi(p^n)$, and $p^{n-1}$ possibilities respectively. We conclude with the following result. **Proposition 18**. *Let ${G}_b\cong\mathbb{Z}_q\rtimes_{k_b}\mathbb{Z}_{p^n}$, where $k_b$ is an element of $\mathbb{Z}_q$ of order $p^b$, $1\leq b\leq n$, and let ${C}$ denote the cyclic group of order $p^nq$. Then we have $$\begin{aligned} e'\left({G}_b,{C}\right)=p^{n+b-2}(p-1),~ e\left({G}_b,{C}\right)=p^{n-1}q. \end{aligned}$$* [Theorem 2](#theorem-p-<-q){reference-type="ref" reference="theorem-p-<-q"} now follows from [Proposition 14](#thm:isomorphic-2){reference-type="ref" reference="thm:isomorphic-2"}, [Proposition 16](#thm:non-iso-noncyclic){reference-type="ref" reference="thm:non-iso-noncyclic"}, [Proposition 17](#thm:cyc-nontrivial){reference-type="ref" reference="thm:cyc-nontrivial"}, and [Proposition 18](#thm:into-cyc){reference-type="ref" reference="thm:into-cyc"}. # Realizable pair of groups {#sec:conclusion} Given two finite groups $G$ and $N$ of the same order, we say that the pair $(G,N)$ is *realizable* if there exists a Hopf-Galois structure on a Galois $G$-extension of type $N$. In other words, a pair $(G,N)$ is realizable if $e(G,N) \neq 0$. 
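The structural facts about $\mathbb{Z}_{p^n}^\times$ used throughout this section — that its Sylow-$p$ subgroup is generated by $1+p$, and that an element $\gamma$ of order $p^g$ satisfies $p^{n-g}\,||\,\gamma-1$ — can be verified computationally for small parameters. A sketch for the illustrative choice $p=3$, $n=3$ (any odd prime power would do):

```python
p, n = 3, 3                  # sample odd prime power
pn = p ** n                  # 27

def mult_order(x, m):
    """Multiplicative order of x modulo m (x must be a unit mod m)."""
    o, y = 1, x % m
    while y != 1:
        y = y * x % m
        o += 1
    return o

# the Sylow-p subgroup of Z_{p^n}^x is generated by 1+p, of order p^{n-1}
assert mult_order(1 + p, pn) == p ** (n - 1)

# every gamma != 1 in <1+p>, of order p^g, satisfies p^{n-g} || gamma - 1
for t in range(1, p ** (n - 1)):
    gamma = pow(1 + p, t, pn)
    o = mult_order(gamma, pn)      # a power of p
    g = 0
    while p ** g != o:
        g += 1
    d = gamma - 1
    assert d % p ** (n - g) == 0 and d % p ** (n - g + 1) != 0
print("Sylow-p facts verified for p =", p, ", n =", n)
```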
This is equivalent to saying that there exists a skew brace with multiplicative group isomorphic to $G$ and additive group isomorphic to $N$. This problem has not been studied in full generality, since for a given integer $n$ the classification of all groups of order $n$ is not known. However, realizability has been studied for a variety of groups. When $G$ is a cyclic group of odd order and the pair $(G,N)$ is realizable, the author of [@By13] showed that if $N$ is abelian then it is cyclic. If $N$ is a non-abelian simple group and $G$ is a solvable group with the pair $(G,N)$ being realizable, then such $N$ were completely classified in [@Ts23a]. When $N$ or $G$ is isomorphic to $\mathbb{Z}_n\rtimes \mathbb{Z}_2$ for odd $n$, realizability was studied in [@ArPa23]. Among the few available techniques, the notion of a bijective crossed homomorphism for studying realizability problems for a pair of groups of the same order was introduced by Tsang in [@Ts19]. Given an element $\mathfrak{f}\in\mathrm{Hom}(G,\textup{Aut}(N))$, a map $\mathfrak{g}\in \mathrm{Map}(G,N)$ is said to be a *crossed homomorphism* with respect to $\mathfrak{f}$ if $\mathfrak{g}(ab)=\mathfrak{g}(a)\mathfrak{f}(a)(\mathfrak{g}(b))\text{ for all }a,b\in G$. Setting $Z_{\mathfrak{f}}^1(G,N)=\{\mathfrak{g}:\mathfrak{g}$ is a bijective crossed homomorphism with respect to $\mathfrak{f}\}$, we have the following two results. **Proposition 19**. *[@Ts19 Proposition 2.1][\[p002\]]{#p002 label="p002"} The regular subgroups of $\textup{Hol}(N)$ which are isomorphic to $G$ are precisely the subsets of $\textup{Hol}(N)$ of the form $\{(\mathfrak{g}(a),\mathfrak{f}(a)):a\in G\},$ where $\mathfrak{f}\in\mathrm{Hom}(G,\textup{Aut}(N)),\mathfrak{g}\in Z_{\mathfrak{f}}^1(G,N)$.* **Proposition 20**. *[@TsQi20 Proposition 3.3] Let $G,N$ be two groups such that $|G|=|N|$. Let $\mathfrak{f}\in\mathrm{Hom}(G,\textup{Aut}(N))$ and $\mathfrak{g}\in Z_\mathfrak{f}^1(G,N)$ be a bijective crossed homomorphism (i.e. 
$(G,N)$ is realizable). Then if $M$ is a characteristic subgroup of $N$ and $H=\mathfrak{g}^{-1}(M)$, we have that the pair $(H,M)$ is realizable.* We will need the following two results, in which the realizability of cyclic groups has been characterized. We will use modifications of these characterizations towards proving the realizability of groups of the form $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_{q}$. **Proposition 21**. *[@Ts22 Theorem 3.1][\[p003\]]{#p003 label="p003"} Let $N$ be a group of odd order $n$ such that the pair $(\mathbb{Z}_n,N)$ is realizable. Then $N$ is a $C$-group (i.e., all of its Sylow subgroups are cyclic).* **Proposition 22**. *[@Ru19 Theorem 1][\[p004\]]{#p004 label="p004"} Let $G$ be a group of order $n$ such that $(G,\mathbb{Z}_n)$ is realizable. Then $G$ is solvable and almost Sylow-cyclic (i.e., its Sylow subgroups of odd order are cyclic, and every Sylow-$2$ subgroup of $G$ has a cyclic subgroup of index at most $2$).* **Theorem 23**. *Let ${N}$ be a group of order $qp^n$, where $p$ and $q$ are primes with $q< p$. Then the pair $(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q,{N})$ (or $(N,\mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q)$) is realizable if and only if ${N}\cong \mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$.* *Proof.* Let $(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q,{N})$ be realizable. By [\[p002\]](#p002){reference-type="ref" reference="p002"} there exists a bijective crossed homomorphism $\mathfrak{g}\in Z^1_{\mathfrak{f}}(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q,N)$ for some $\mathfrak{f}\in \mathrm{Hom}(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q,\textup{Aut}(N))$. Let $H_p$ be the Sylow-$p$ subgroup of $N$ (it is unique since $q < p$). Then using [Proposition 20](#t002){reference-type="ref" reference="t002"} the pair $(\mathfrak{g}^{-1}(H_{p}),H_{p})$ is realizable. Note that $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q$ has a unique subgroup of order $p^n$, which is cyclic. This implies that $(\mathbb{Z}_{p^n},H_p)$ is realizable. 
Hence by [\[p003\]](#p003){reference-type="ref" reference="p003"} we get that $H_p$ is isomorphic to $\mathbb{Z}_{p^n}$ and therefore $N \cong \mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$. Conversely if $N \cong \mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$ then the pair $(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q,{N})$ is realizable since $e(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q,{N})$ is non-zero from [3](#sec:main-results-1){reference-type="ref" reference="sec:main-results-1"}. Now if the pair $(N,\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q)$ is realizable, by [\[p002\]](#p002){reference-type="ref" reference="p002"} there exists a bijective crossed homomorphism $\mathfrak{g}\in Z^1_{\mathfrak{f}}(N,\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q)$ for some $\mathfrak{f}\in \mathrm{Hom}(N,\textup{Aut}(\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q))$. Since $\mathbb{Z}_{p^n}$ is a characteristic subgroup of $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_{q}$, we get that $\mathfrak{g}^{-1}(\mathbb{Z}_{p^n})$ is a subgroup of $N$ and $(\mathfrak{g}^{-1}(\mathbb{Z}_{p^n}),\mathbb{Z}_{p^n})$ is realizable. Then by [\[p004\]](#p004){reference-type="ref" reference="p004"}, we have that $\mathfrak{g}^{-1}(\mathbb{Z}_{p^n})$ is almost Sylow-cyclic and therefore isomorphic to $\mathbb{Z}_{p^n}$. Hence $N \cong \mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$. Conversely if $N \cong \mathbb{Z}_{p^n}\rtimes \mathbb{Z}_q$, then by [3](#sec:main-results-1){reference-type="ref" reference="sec:main-results-1"} we have the pair $(N,\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q)$ is realizable. ◻
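The smallest case covered by Theorem 23 is $p=3$, $n=1$, $q=2$, where the nonabelian choice of $\mathbb{Z}_{p^n}\rtimes\mathbb{Z}_q$ is $S_3$. As a computational sanity check (this sketch is ours, not part of the paper; the permutation encoding of $S_3$ and the brute-force subgroup search are illustrative choices), one can enumerate the regular subgroups of $\textup{Hol}(S_3)$ directly: by Proposition 19 their isomorphism types are exactly the groups $G$ for which the pair $(G,S_3)$ is realizable. Both groups of order $6$ occur, and each is a semidirect product $\mathbb{Z}_3\rtimes\mathbb{Z}_2$ (the cyclic group arising from the trivial action), in line with the characterization above.

```python
from itertools import permutations

# The group N = S3, encoded as the six permutations of {0,1,2}.
N = list(permutations(range(3)))
ID = (0, 1, 2)

def mul(a, b):                        # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, x in enumerate(a):
        r[x] = i
    return tuple(r)

# Aut(S3) = Inn(S3): the six conjugation maps x -> g x g^{-1}.
auts = []
for g in N:
    phi = {x: mul(mul(g, x), inv(g)) for x in N}
    if phi not in auts:
        auts.append(phi)

# Hol(N): pairs (n, i) acting on N by x -> n * auts[i](x).
hol = [(n, i) for n in N for i in range(len(auts))]
idx = {h: k for k, h in enumerate(hol)}

def hmul(h1, h2):                     # (n1,p1)(n2,p2) = (n1*p1(n2), p1∘p2)
    (n1, i1), (n2, i2) = h1, h2
    comp = {x: auts[i1][auts[i2][x]] for x in N}
    return (mul(n1, auts[i1][n2]), auts.index(comp))

TAB = [[idx[hmul(a, b)] for b in hol] for a in hol]   # multiplication table
E = idx[(ID, 0)]                      # identity element (auts[0] is the identity map)

def closure(gens):                    # subgroup generated by gens (as indices)
    S = set(gens)
    while True:
        new = {TAB[a][b] for a in S for b in S} - S
        if not new:
            return frozenset(S)
        S |= new

def regular(H):                       # orbit of the identity of N is all of N
    return len({hol[h][0] for h in H}) == len(N)

def elem_order(h):
    k, p = 1, h
    while p != E:
        p, k = TAB[p][h], k + 1
    return k

# Enumerate all regular subgroups of order 6, classified up to isomorphism.
found = {}
for a in range(len(hol)):
    for b in range(len(hol)):
        H = closure([a, b])
        if len(H) == 6 and regular(H) and H not in found:
            found[H] = "C6" if any(elem_order(h) == 6 for h in H) else "S3"

types = set(found.values())
print(len(found), "regular subgroups of Hol(S3); isomorphism types:", sorted(types))
```

The search reports the regular subgroups together with their isomorphism types; both $\mathbb{Z}_6$ and $S_3$ appear (the left regular representation always contributes a copy of $S_3$).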
arxiv_math
{ "id": "2309.06848", "title": "Hopf Galois structures, skew braces for groups of size $p^nq$: The\n cyclic Sylow subgroup case", "authors": "Namrata Arvind, Saikat Panja", "categories": "math.GR math.RA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The *order of appearance* $z(n)$ of a positive integer $n$ in the Fibonacci sequence is defined as the smallest positive integer $j$ such that $n$ divides the $j$-th Fibonacci number. A *fixed point* arises when, for a positive integer $n$, the $n \textsuperscript{th}$ Fibonacci number is the smallest Fibonacci number that $n$ divides. In other words, $z(n) = n$. In 2012, Marques proved that fixed points occur only when $n$ is of the form $5^{k}$ or $12\cdot5^{k}$ for some non-negative integer $k$. It immediately follows that there are infinitely many fixed points in the Fibonacci sequence. We prove that there are infinitely many integers that iterate to a fixed point in exactly $k$ steps. In addition, we construct infinite families of integers that go to each fixed point of the form $12 \cdot 5^{k}$. We conclude by providing an alternate proof that all positive integers $n$ reach a fixed point after a finite number of iterations. address: - Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267 - Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267 - Department of Mathematics, Trinity College, Hartford, CT 06106 author: - Molly FitzGibbons - Steven J. Miller - Amanda Verga title: Dynamics of the Fibonacci Order of Appearance Map --- # Introduction In $1202$, the Italian mathematician Leonardo Fibonacci introduced the Fibonacci sequence $\{F_{n}\}_{n=0}^{\infty}$, defined recursively as $F_{n} = F_{n-1} + F_{n-2}$ with initial conditions $F_{0} = 0$ and $F_{1} = 1$. By reducing $\{F_{n}\}_{n=0}^{\infty}$ modulo $m$, we obtain a periodic sequence $\{F_{n} \mod m \}_{n=0}^{\infty}$. This new sequence and its divisibility properties have been extensively studied, see for example [@M1; @M5].
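This periodicity is easy to observe computationally. The following Python sketch (ours, for illustration only) finds the period of $\{F_{n} \bmod m\}_{n=0}^{\infty}$, often called the Pisano period, by running the recurrence until the initial pair $(0,1)$ recurs.

```python
def pisano_period(m):
    """Length of the period of the Fibonacci sequence modulo m (m >= 2)."""
    a, b, k = 0, 1, 0            # (a, b) = (F_k, F_{k+1}) mod m
    while True:
        a, b = b, (a + b) % m
        k += 1
        if (a, b) == (0, 1):     # the starting pair recurs, so the cycle restarts
            return k

# For example, the final digits of the Fibonacci numbers repeat with period 60.
print([pisano_period(m) for m in (2, 3, 5, 10)])
```

Since the pair map $(a,b)\mapsto(b,a+b)$ is invertible modulo $m$, the first repeated pair is the initial one, so the loop really returns the period.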
To see why the reduced sequence is periodic, note that, by the pigeonhole principle, among any $n^{2} + 1$ pairs $(F_{k}, F_{k-1})$ at least two are identical (mod $n$), and the recurrence relation then generates the same subsequent terms. **Definition 1**. [The]{.nodecor} *order (or rank) of appearance* $z(n)$ [for a natural number $n$ in the Fibonacci sequence is the smallest positive integer $\ell$ such that $n \mid F_{\ell}$.]{.nodecor} Observe that the function $z(n)$ is well defined for all $n$ since the Fibonacci sequence begins with $0, 1, \ldots$ and, when reduced modulo $n$, a $0$ will appear again in the periodic sequence. Thus, there will always be a Fibonacci number that is congruent to $0 \mod n$ for each choice of $n$. The upper bound of $n^2 + 1$ on $z(n)$ is improved in [@S], which states $z(n) \leq 2n$ for all $n \geq 1$. This is the sharpest upper bound on $z(n)$. In [@M2], sharper upper bounds for $z(n)$ are provided for some positive integers $n$. Additional results on $z(n)$ include explicit formulae for the order of appearance of some $n$ relating to sums containing Fibonacci numbers [@M3] and products of Fibonacci numbers [@M4]. We study repeated applications of $z$ on $n$ and denote the $k \textsuperscript{th}$ application of $z$ on $n$ as $z^{k}(n)$. We are interested in the following quantity. **Definition 2**. [The]{.nodecor} *fixed point order* [for a natural number $n$ is the smallest positive integer $k$ such that $z^{k}(n)$ is a fixed point. If $n$ is a fixed point, then we say the *fixed point order* of $n$ is 0.]{.nodecor} Table [1](#table:zkiterations){reference-type="ref" reference="table:zkiterations"} shows which values occur after repeated iterations of $z$ on the first $12$ positive integers. We further the study of repeated iterations of $z$ on $n$. In Section [2](#auxresults){reference-type="ref" reference="auxresults"}, we provide some useful properties of the order of appearance in Fibonacci numbers.
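The order of appearance and its iterates are straightforward to experiment with. The Python sketch below (our illustration, not part of the paper) evaluates $z(n)$ directly from the Fibonacci recurrence modulo $n$ and then iterates $z$ until a fixed point is reached; it reproduces, for instance, the trajectory $10 \to 15 \to 20 \to 30 \to 60$ and Sallé's bound $z(n) \leq 2n$.

```python
def z(n):
    """Order of appearance: the least j >= 1 with n | F_j."""
    a, b, j = 0, 1 % n, 0        # a = F_j mod n, b = F_{j+1} mod n
    while True:
        a, b = b, (a + b) % n
        j += 1
        if a == 0:               # F_j is divisible by n
            return j

def trajectory(n):
    """Iterate z starting from n until a fixed point z(m) = m appears."""
    traj = [n]
    while z(traj[-1]) != traj[-1]:
        traj.append(z(traj[-1]))
    return traj

print(trajectory(10))            # the trajectory of n = 10 under iteration of z
```

The fixed point order of $n$ is then `len(trajectory(n)) - 1`, and scanning a range of $n$ confirms that $z(n) = 2n$ occurs exactly at $n = 6\cdot 5^{k}$ in that range.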
In the remaining sections, we prove our main results, found below. **Theorem 3**. *For all positive integers $k$, there exist infinitely many $n$ with fixed point order $k$.* **Theorem 4**. *Infinitely many integers $n$ iterate to each fixed point of the form $12 \cdot 5^{k}$.* **Theorem 5**. *All positive integers $n$ have finite fixed point order.* Theorem [Theorem 5](#allngotofp){reference-type="ref" reference="allngotofp"} was proved first in [@LT] by showing that for some finite $k$, $z^k(n) = 2^a3^b5^c$ where $a,b,c \in \mathbb{Z}_{\geq 0}$, and then proving that $2^a3^b5^c$ iterates to a fixed point in a finite number of steps. It was later proved in [@Ta1] using a relationship between the Pisano period of $n$ and $z(n)$. We provide an alternate proof using a minimal counterexample argument.

| $n \setminus k$ | 1      | 2      | 3      | 4      | 5      |
|-----------------|--------|--------|--------|--------|--------|
| 1               | **1**  |        |        |        |        |
| 2               | 3      | 4      | 6      | **12** |        |
| 3               | 4      | 6      | **12** |        |        |
| 4               | 6      | **12** |        |        |        |
| 5               | **5**  |        |        |        |        |
| 6               | **12** |        |        |        |        |
| 7               | 8      | 6      | **12** |        |        |
| 8               | 6      | **12** |        |        |        |
| 9               | **12** |        |        |        |        |
| 10              | 15     | 20     | 30     | **60** |        |
| 11              | 10     | 15     | 20     | 30     | **60** |
| 12              | **12** |        |        |        |        |

: Iterations of $z$ on $n$, numbers in bold are fixed points.

# Auxiliary Results {#auxresults} Here we include some needed results from previous papers. **Lemma 6**. *Let $n$ be a positive integer. Then $z(n) = n$ if and only if $n = 5^{k}$ or $n = 12 \cdot 5^{k}$ for some $k \geq 0$.* A proof of Lemma [Lemma 6](#allfixedpointsofform){reference-type="ref" reference="allfixedpointsofform"} can be found in [@M1; @SM]. **Lemma 7**. *For all $a \in \mathbb{Z}, a \geq 3$, $z\left(2^{a}\right) = 2^{a-2} \cdot 3$. For all $b \in \mathbb{Z}^{+}, z(3^b) = 4 \cdot 3^{b-1}$.* Lemma [Lemma 7](#pow2equals){reference-type="ref" reference="pow2equals"} is Theorem 1.1 of [@M2]. **Lemma 8**. *Let $n \geq 2$ be an integer with prime factorization $n = p_{1}^{e_{1}}p_{2}^{e_{2}} \cdots p_{m}^{e_{m}}$ where $p_{1}, p_{2}, \ldots, p_{m}$ are prime and $e_{1}, e_{2}, \ldots, e_{m}$ are positive integers.
Then $$z(n) \ = \ z(p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{m}^{e_{m}}) \ = \ \textnormal{lcm}\left( z(p_{1}^{e_{1}}), z(p_{2}^{e_{2}}), \ldots, z(p_{m}^{e_{m}})\right).$$* A proof of Lemma [Lemma 8](#z(n)=lcm(p^e's)){reference-type="ref" reference="z(n)=lcm(p^e's)"} can be found in Theorem $3.3$ of [@R]. Lemma [Lemma 8](#z(n)=lcm(p^e's)){reference-type="ref" reference="z(n)=lcm(p^e's)"} has been generalized as follows. **Lemma 9**. *Let $m_{1}, m_{2}, \ldots, m_{n}$ be positive integers. Then $$z \left(\textnormal{lcm}( m_{1},m_{2},\ldots, m_{n}) \right) \ = \ \textnormal{lcm}\left(z(m_{1}), z(m_{2}), \ldots, z(m_{n})\right).$$* A proof of Lemma [Lemma 9](#general z(n)=lcm(p^e's)){reference-type="ref" reference="general z(n)=lcm(p^e's)"} can be found in Lemma $4$ of [@Ty]. **Lemma 10**. *For all primes $p$, $z(p)\leq p+1$.* A proof of Lemma [Lemma 10](#z(p)leqp+1){reference-type="ref" reference="z(p)leqp+1"} can be found in Lemma 2.3 of [@M1]. **Lemma 11**. *For all positive integers $n$, $z(n) \leq 2n$, with equality if and only if $n=6\cdot 5^k$ for some $k \in \mathbb{Z}_{\geq 0}$.* Lemma [Lemma 11](#z(n)leq2n){reference-type="ref" reference="z(n)leq2n"} is proven in [@S]. **Lemma 12**. *For all primes $p \neq 5$, we have that $\gcd\left(p, z(p)\right) = 1$.* Lemma [Lemma 12](#z(p)relprimetop){reference-type="ref" reference="z(p)relprimetop"} is proven in Lemma 2.3 of [@M1]. **Lemma 13**. *If $n \vert F_m$, where $F_m$ is the $m\textsuperscript{th}$ number in the Fibonacci sequence, then $z(n) \vert m$.* Lemma [Lemma 13](#ifndividesF_m){reference-type="ref" reference="ifndividesF_m"} is Lemma 2.2 of [@M1]. **Lemma 14**. *For all odd primes $p$, we have $z\left(p^{e}\right) = p^{\max (e - a, 0)}z(p)$ where $a$ is the number of times that $p$ divides $F_{z(p)}$, $a \geq 1$.
In particular, $z\left(p^{e}\right) = p^{r}z(p)$ for some $0 \leq r \leq e-1$.* For a proof of Lemma [Lemma 14](#z(p^e)=z(p^r)z(p)){reference-type="ref" reference="z(p^e)=z(p^r)z(p)"}, see Theorem 2.4 of [@FM]. # Infinitely many integers take a given number of iterations to reach a fixed point {#infnreachfpink} In this section we first prove Lemma [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"}, which helps us show that when $z^{i}(n)$ is written as the product of a constant relatively prime to $5$ and a power of $5$, then $z^{i}\left(n \cdot 5^{a}\right)$ can be written as the product of that same constant and another power of $5$. Table [2](#table2){reference-type="ref" reference="table2"} lists the smallest $n$ that takes exactly $k$ iterations to reach a fixed point for positive integers $k$ up to 10.

| $k$ | $n$   | FP |
|-----|-------|----|
| 1   | 1     | 1  |
| 2   | 4     | 12 |
| 3   | 3     | 12 |
| 4   | 2     | 12 |
| 5   | 11    | 60 |
| 6   | 89    | 60 |
| 7   | 1069  | 60 |
| 8   | 2137  | 60 |
| 9   | 4273  | 60 |
| 10  | 59833 | 60 |

: First $n$ that takes $k$ iterations to reach a fixed point.

**Lemma 15**. *Let $z^{i}(5^{a}\cdot n) = c_{(i,a, n)}5^{a_i}$, where $c_{(i, a, n)}$ is a constant that is relatively prime to $5$ and depends on $i$ and $n$, and $a_i \in \mathbb{Z}_{\geq 0}$. Fix $i, n \in \mathbb{Z}_{\geq 0}$. Then $c_{(i, a, n)}$ remains the same for all choices of $a$.* *Proof.* Let the prime factorization of an integer $n$ be $n = 5^{e_1}p_{2}^{e_2} \cdots p_{m}^{e_m}$ where $e_{1} \geq 0$ and each of the $e_{2},e_{3},\ldots,e_{m} \geq 1$. We proceed by induction on the number of iterations of $z$. First suppose $i = 1$ and let the prime factorization of $\textnormal{lcm}\left( z(p_{2}^{e_2}), \ldots, z(p_{m}^{e_m}) \right)$ equal $5^{f_1}q_{2}^{f_2} \cdots q_{r}^{f_r}$ where $f_1 \geq 0$ and each of the $f_2,\ldots,f_r \geq 1$.
Observe that $$\begin{aligned} z(5^{a}\cdot n) & \ = \ \textnormal{lcm}\left(z(5^{e_1 +a}), z(p_{2}^{e_2}), \ldots, z(p_{m}^{e_m})\right) \nonumber\\ & \ = \ \textnormal{lcm}\left(z\left(5^{e_1 +a}\right), \textnormal{lcm}\left(z(p_{2}^{e_2}), \ldots, z(p_{m}^{e_m}) \right) \right) \nonumber \\ & \ = \ \textnormal{lcm}\left(5^{e_1+a}, 5^{f_1}q_{2}^{f_2}\cdots q_{r}^{f_r}\right) \nonumber\\ & \ = \ q_{2}^{f_2} \cdots q_{r}^{f_r} \cdot 5^{\text{max}(e_1+a, f_1)}.\end{aligned}$$ Thus, $c_{(1, a, n)} = q_{2}^{f_2} \cdots q_{r}^{f_{r}}$ for any non-negative integer $a$ when $n$ is not a power of $5$ or when $\textnormal{lcm}\left( z(p_2^{e_2}), \ldots, z(p_m^{e_m}) \right)$ is not a power of $5$, and $c_{(1, a, n)} = 1$ otherwise. Next, assume that for some $i, n \in \mathbb{Z}^{+}$, we have $z^{i}(5^{a}\cdot n) = c_{(i, a, n)}5^{a_i}$ where $c_{(i,a,n)}$ is the same for all choices of $a \in \mathbb{Z}_{\geq 0}$. First suppose $c_{(i,a, n)}=1$. Then $$\begin{aligned} z^{i+1}(5^a \cdot n) & \ = \ z(z^{i}(5^{a}\cdot n)) \nonumber\\ & \ = \ z\left((c_{(i,a,n)}) \cdot 5^{a_i} \right) \quad\quad \text{where $ a_i \in \mathbb{Z}_{\geq0} $} \nonumber\\ & \ = \ \textnormal{lcm}\left(z(1), z(5^{a_i})\right) \nonumber\\ & \ = \ 5^{a_i}. \end{aligned}$$ Therefore, for any choice of $a$, $c_{(i+1,a,n)}=1$. Now suppose $c_{(i,a,n)} \neq 1$. Then let the prime factorization of $c_{(i,a,n)}$ be $q_{1}^{f_{1}} \cdots q_{r}^{f_{r}}$, where $q_{1},\ldots,q_{r} \neq 5$ since $\gcd \left(c_{(i,a,n)},5\right) = 1$. Let $\textnormal{lcm}\left( z(q_{1}^{f_1}),\ldots,z(q_{r}^{f_r})\right) = 5^{g_1}h_{2}^{g_2}\cdots h_{j}^{g_j}$ where $h_{2}, \ldots, h_{j}$ are primes not equal to 5.
Then $$\begin{aligned} z^{i+1}(5^a \cdot n) & \ = \ z(z^{i}(5^{a}\cdot n)) \nonumber\\ & \ = \ z\left((c_{(i,a,n)}) \cdot 5^{a_i} \right) \quad\quad \text{where $ a_i \in \mathbb{Z}_{\geq0} $} \nonumber\\ & \ = \ \textnormal{lcm}\left( z(q_{1}^{f_1}),\ldots,z(q_{r}^{f_r}), z(5^{a_i})\right) \nonumber\\ & \ = \ \textnormal{lcm}\left( \textnormal{lcm}\left(z(q_{1}^{f_1}),\ldots,z(q_{r}^{f_r})\right), z(5^{a_i})\right) \nonumber\\ & \ = \ \textnormal{lcm}\left(5^{g_1}h_{2}^{g_2} \cdots h_{j}^{g_j}, 5^{a_i}\right) \nonumber\\ & \ = \ h_{2}^{g_2} \cdots h_{j}^{g_j} \cdot 5^{\text{max}(g_1, a_i)} \nonumber\\ & \ = \ c_{(i+1,a, n)} \cdot 5^{\text{max}\left(g_1,a_i\right)}.\end{aligned}$$ Since $h_{2}^{g_2} \cdots h_{j}^{g_j}$ is determined by $c_{(i,a,n)}$ alone, which by the inductive hypothesis is the same for every choice of $a$, the constant $c_{(i+1,a,n)}$ is also independent of $a$. ◻ We use Lemma [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"} in our proof of Theorem [Theorem 3](#infntakek){reference-type="ref" reference="infntakek"} to show that if there exists an integer $n$ that takes exactly $k$ iterations of $z$ to reach a fixed point, then there are infinitely many integers that take exactly $k$ iterations of $z$ to reach a fixed point. The following lemma provides us with information on the $k\textsuperscript{th}$ iteration of $z$ on powers of $10$, enabling us to find integers that require exactly $k$ iterations of $z$ to reach a fixed point for any positive integer $k$. **Lemma 16**. *For all $k, m \in \mathbb{Z}, k\geq 1, m\geq 4$ and $2k+2\leq m$, $z^{k}(10^{m}) = 3\cdot5^m\cdot2^{m-2k}$.* *Proof.* We proceed by induction on the number of iterations of $z$. Observe that when $k = 1$, $$\begin{aligned} z\left(10^{m}\right) & \ = \ \textnormal{lcm}\left(z\left(2^{m}\right),z\left(5^{m}\right)\right) \nonumber\\ & \ = \ \textnormal{lcm}\left(3\cdot 2^{m-2}, 5^m\right) \nonumber\\ & \ = \ 3 \cdot5^{m} \cdot 2^{m-2}.\end{aligned}$$ Now suppose that $z^{k}(10^{m}) \ = \ 3\cdot5^m\cdot2^{m-2k}$ for some positive integer $k$.
Then we have $$\begin{aligned} z^{k+1}(10^{m}) & \ = \ z\left(z^{k}\left(10^{m}\right)\right) \nonumber\\ & \ = \ z\left( 3\cdot5^m\cdot2^{m-2k}\right) \nonumber\\ & \ = \ \textnormal{lcm}\left(z\left(3\right),z\left(5^{m}\right),z(2^{m-2k}) \right) \nonumber\\ & \ = \ \textnormal{lcm}\left(4, 5^m,2^{m-2k-2} \cdot 3\right). \end{aligned}$$ By assumption, $m \geq 2(k+1)+2 = 2k+4$, thus we have $z^{k+1}(10^{m}) \ = \ 3 \cdot 5^{m} \cdot 2^{m-2(k+1)}$. ◻ Using Lemmas [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"} and [Lemma 16](#10mgoestoink){reference-type="ref" reference="10mgoestoink"}, we now prove Theorem [Theorem 3](#infntakek){reference-type="ref" reference="infntakek"}:\ *For all $k \in \mathbb{Z}_{\geq 0}$, there exist infinitely many $n$ with fixed point order $k$.* *Proof of Theorem [Theorem 3](#infntakek){reference-type="ref" reference="infntakek"}.* Let $g, h \in \mathbb{Z}^{+}, g>h$. Then $g=h+\ell$ for some $\ell \in \mathbb{Z}^{+}$. Suppose that $z^h(n)$ is a fixed point. Then $z^g(n) = z^{\ell}\left(z^{h}(n) \right)$, so $z^{g}(n)$ is also a fixed point. Similarly, if $z^{g}(n)$ is not a fixed point, then $z^{h}(n)$ cannot be a fixed point for any $h<g$. Note that by Lemma [Lemma 16](#10mgoestoink){reference-type="ref" reference="10mgoestoink"} $$z^{k}(10^{2k+2}) \ = \ 3 \cdot 5^{2k+2} \cdot 2^{(2k+2)-2k} \ = \ 12 \cdot 5^{2k+2}$$ and $$z^{k-1}(10^{2k+2}) \ = \ 3 \cdot 5^{2k+2} \cdot 2^{(2k+2)-2(k-1)} \ = \ 12 \cdot 5^{2k+2} \cdot 2^{2}.$$ Thus, $10^{2k+2}$ takes exactly $k$ iterations of $z$ to reach a fixed point, as $z^{k-1}(10^{2k+2})$ is not a fixed point. We prove that we can find infinitely many integers that take exactly $k$ iterations to reach a fixed point once one such integer is identified (which we have just done). We first consider the case where an integer $n$ goes to a fixed point of the form $12 \cdot 5^{a'}$, where $a' \in \mathbb{Z}_{\geq 0}$, in exactly $k$ iterations of $z$. 
Thus, $z^{k}(n) = 12\cdot5^{a'}$ and $z^{k-1}(n) = c\cdot5^{b'}$ for some non-negative integer $b'$ and positive integer $c \neq 1,12$. Let $r$ be an arbitrary positive integer. By Lemma [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"}, we have $z^{k}\left(5^{r}\cdot n\right) = 12 \cdot 5^{a''}$ and $z^{k-1}\left(5^{r}\cdot n\right) = c \cdot 5^{b''}$ for non-negative integers $a''$ and $b''$. Thus, $5^{r}\cdot n$ requires exactly $k$ iterations to reach a fixed point. Next we consider the case where $n$ goes to a fixed point of the form $5^{a'}$ in exactly $k$ steps. Then, $z^{k}(n) = 5^{a'}$ and $z^{k-1}(n) = c \cdot 5^{b'}$ for some non-negative integer $b'$ and positive integer $c \neq 1,12$. Let $r$ be an arbitrary positive integer. By Lemma [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"}, we have $z^{k}\left(5^{r}\cdot n\right) = 5^{a''}$ and $z^{k-1}\left(5^{r}\cdot n\right) = c \cdot 5^{b''}$ for non-negative integers $a''$ and $b''$. Thus, $5^{r}\cdot n$ requires exactly $k$ iterations to reach a fixed point. As $r$ is arbitrary, there are infinitely many integers with fixed point order $k$ for any positive integer $k$. ◻ # Infinitely many integers go to each fixed point We begin this section with a proof about the $k \textsuperscript{th}$ iteration of $z$ on powers of $2$. **Lemma 17**. 
*For all $k,a \in \mathbb{Z}$ such that $2\leq k$ and $4\leq a$, $z^{k}(2^{a}) = \textnormal{lcm}\left( 2^{a-2k} \cdot 3, 4 \right)$.* *Proof.* We induct on $k$; we use Lemma [Lemma 7](#pow2equals){reference-type="ref" reference="pow2equals"} to note that $z(2^{a})=2^{a-2}\cdot 3$ (valid as $a \geq 3$) with base case $k=2$: $$\begin{aligned} z^{2}(2^{a}) & \ = \ z\left(z(2^{a})\right) \nonumber\\ & \ = \ z(2^{a-2} \cdot 3) \nonumber\\ & \ = \ \textnormal{lcm}\left( z(2^{a-2}), z(3) \right) \nonumber\\ & \ = \ \textnormal{lcm}\left(2^{a-4}\cdot 3, 4 \right).\end{aligned}$$ For the inductive step, assume that $z^{k}(2^{a}) = \textnormal{lcm}\left( 2^{a-2k} \cdot 3, 4 \right)$ for some $k$. We show that $z^{k+1}(2^{a}) \ = \ \textnormal{lcm}\left( 2^{a-2(k+1)} \cdot 3, 4 \right)$. First suppose that $a > 2k+2$. Then $$\begin{aligned} z^{k+1}(2^{a}) & \ = \ z\left(z^{k}(2^{a})\right) \nonumber\\ & \ = \ z\left(\textnormal{lcm}\left( 2^{a-2k} \cdot 3, 4 \right)\right) \nonumber\\ & \ = \ z\left(2^{a-2k} \cdot 3\right)\nonumber \\ & \ = \ \textnormal{lcm}\left(z(2^{a-2k}), z(3)\right)\nonumber \\ & \ = \ \textnormal{lcm}\left( 2^{a-2k-2} \cdot 3, 4 \right) \nonumber\\ & \ = \ \textnormal{lcm}\left( 2^{a-2(k+1)} \cdot 3, 4 \right).\end{aligned}$$ Now suppose that $a \leq 2k+2$. Then $$\begin{aligned} z^{k+1}(2^{a}) & \ = \ z\left(z^{k}\left(2^{a}\right)\right) \nonumber\\ & \ = \ z\left(\textnormal{lcm}\left( 2^{a-2k} \cdot 3, 4 \right)\right) \nonumber\\ & \ = \ z\left(12\right) \nonumber\\ & \ = \ 12 \nonumber\\ & \ = \ \textnormal{lcm}\left( 2^{a-2(k+1)} \cdot 3, 4 \right).\end{aligned}$$ ◻ We now use Lemma [Lemma 17](#z^i(2^a)equals){reference-type="ref" reference="z^i(2^a)equals"} in our proof of Lemma [Lemma 18](#all2to12){reference-type="ref" reference="all2to12"}, which proves that all powers of 2 go to the fixed point $12$ and determines how many iterations of $z$ it takes for a power of 2 to reach $12$. **Lemma 18**.
*For all $a \in \mathbb{Z}^{+}$, $2^{a}$ reaches the fixed point $12$ in finitely many iterations of $z$. For $a \geq 4$, exactly $\lceil \frac{a}{2}\rceil -1$ iterations of $z$ are required to reach $12$.* *Proof.* When $a\leq 4$, the claim follows from straightforward computation. Notice that $z^{4}(2) = 12, z^{2}(2^{2}) = 12, z^{2}(2^{3}) = 12, z(2^{4}) = 12$. We prove for $a>4$ using Lemma [Lemma 17](#z^i(2^a)equals){reference-type="ref" reference="z^i(2^a)equals"}. Note that if $a$ is even, then $\lceil\frac{a}{2}\rceil = \frac{a}{2}$. Thus, in the case where $a$ is even, $$z^{\lceil\frac{a}{2}\rceil-1}(2^{a}) \ = \ \textnormal{lcm}(2^{a-2(\frac{a}{2}-1)} \cdot 3, 4) \ = \ \textnormal{lcm}(2^{a-a+2} \cdot 3,4) \ = \ \textnormal{lcm}(2^{2} \cdot 3,4) \ = \ 12.$$ So $2^a$ takes at most $\left\lceil \frac{a}{2} \right\rceil -1$ iterations of $z$ to reach a fixed point when $a$ is even. We next show that it takes exactly $\lceil \frac{a}{2}\rceil -1$ by showing that $z^{(\lceil \frac{a}{2}\rceil-1)-1}(2^a)$ is not a fixed point: $$z^{\left(\left\lceil\frac{a}{2}\right\rceil-1\right)-1}(2^{a}) = \textnormal{lcm}(2^{a-2(\frac{a}{2}-2)} \cdot 3, 4) \ = \ \textnormal{lcm}(2^{a-a+4} \cdot 3,4) = \textnormal{lcm}(2^{4} \cdot 3,4) = 12\cdot2^{2},$$ which is not a fixed point. When $a$ is odd, $\left\lceil \frac{a}{2} \right\rceil -1 = \frac{a-1}{2}$, giving us $$z^{\left\lceil \frac{a}{2} \right\rceil-1}(2^{a}) \ = \ \textnormal{lcm}(2^{a-2(\frac{a-1}{2})} \cdot 3, 4) \ = \ \textnormal{lcm}(2^{a-a+1} \cdot 3,4) \ = \ \textnormal{lcm}(2 \cdot 3,4) \ = \ 12.$$ However $$z^{(\lceil \frac{a}{2}\rceil-1)-1}(2^{a}) \ = \ \textnormal{lcm}(2^{a-2(\frac{a-1}{2}-1)} \cdot 3, 4) \ = \ \textnormal{lcm}(2^{a-a+1+2} \cdot 3,4) \ = \ \textnormal{lcm}(2^{3} \cdot 3,4) \ = \ 12\cdot2,$$ which is not a fixed point. 
◻ Lemma [Lemma 18](#all2to12){reference-type="ref" reference="all2to12"} and Lemma [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"} now yield Theorem [Theorem 4](#infngotoeachFP){reference-type="ref" reference="infngotoeachFP"}: *Infinitely many integers $n$ go to each fixed point of the form $12 \cdot 5^{k}$.* *Proof of Theorem [Theorem 4](#infngotoeachFP){reference-type="ref" reference="infngotoeachFP"}.* Using Lemma [Lemma 18](#all2to12){reference-type="ref" reference="all2to12"}, we know that $z^{\lceil\frac{a}{2}\rceil-1}(2^{a} \cdot 5^{0}) = 12$. Thus, by Lemma [Lemma 15](#coefconst){reference-type="ref" reference="coefconst"}, $z^{\lceil\frac{a}{2}\rceil-1}(2^{a} \cdot 5^{b}) = 12 \cdot 5^{b'}$ for some nonnegative integer $b'$. We show that $b = b'$ by inducting on $t$ to show that $z^{t}(2^a \cdot 5^{b}) = 2^{a'} \cdot 3 \cdot 5^{b}, a'\in\mathbb{Z}^{+}$, for all $a > t, a > 2$. When $t = 1$, $$\begin{aligned} z(2^{a}\cdot 5^{b}) & \ = \ \textnormal{lcm}\left( z(2^{a}), z(5^{b})\right) \nonumber\\ & \ = \ \textnormal{lcm}\left( 2^{a-2} \cdot 3, 5^{b} \right) \nonumber\\ & \ = \ 2^{a-2} \cdot 3 \cdot 5^{b}.\end{aligned}$$ Now suppose that $z^{t}(2^a \cdot 5^{b}) = 2^{a'} \cdot 3 \cdot 5^{b}$ for some positive integer $a'$. Then $$\begin{aligned} z^{t+1}(2^{a}\cdot 5^{b}) & \ = \ z\left(z^{t}(2^{a}\cdot 5^{b})\right) \nonumber\\ & \ = \ z\left( 2^{a'} \cdot 3 \cdot 5^{b} \right) \nonumber\\ & \ = \ \textnormal{lcm}\left( z(2^{a'}), z(3), z(5^{b})\right).\end{aligned}$$ If $a' \leq 3$, then $\textnormal{lcm}\left( z(2^{a'}), z(3), z(5^{b})\right) = 2^{2} \cdot 3 \cdot 5^{b}$. If $a' > 3$, then $$\begin{aligned} \textnormal{lcm}\left( z(2^{a'}), z(3), z(5^{b})\right) & \ = \ \textnormal{lcm}\left( 2^{a'-2} \cdot 3, 4, 5^{b} \right) \nonumber\\ & \ = \ 2^{a'-2} \cdot 3 \cdot 5^{b}. 
\end{aligned}$$ A straightforward calculation shows that $2 \cdot 5^{b}$ and $2^{2} \cdot 5^{b}$ iterate to the fixed point $12 \cdot 5^{b}$ (see Appendix [\[2,4timespow5iterateto12\]](#2,4timespow5iterateto12){reference-type="ref" reference="2,4timespow5iterateto12"} for a proof). Therefore $2^{a}\cdot 5^{b}$ iterates to the fixed point $12 \cdot 5^{b}$ for all $a \in \mathbb{Z}^{+}$. ◻ # All integers have finite fixed point order We now prove that when $a,b$ are relatively prime, $z^k(ab) = \textnormal{lcm}(z^k(a),z^k(b))$. We will use this in the proof of Theorem [Theorem 5](#allngotofp){reference-type="ref" reference="allngotofp"}. **Lemma 19**. *Let $n = ab$ where $\gcd (a,b) = 1$. Then $z^{k}(n) = \textnormal{lcm}(z^{k}(a),z^{k}(b))$.* *Proof.* We first consider the case where $n$ has only one prime in its prime factorization. Without loss of generality, suppose $a = 1$ and $b = n$; then $z^{k}(n) = \textnormal{lcm}\left(z^{k}(1), z^{k}(n)\right) = \textnormal{lcm}(1, z^{k}(n))$. If $n = 1$, then $a = b = 1$ and $z^k\left(1\right) = 1 = \textnormal{lcm}\left(z^k(1), z^k(1)\right)$. Next consider when $n$ has at least two distinct primes in its prime factorization. Let the prime factorization of $n$ be $n = p_{1}^{e_1}\cdots p_{m}^{e_m}$ and let $a = p_{1}^{e_1}\cdots p_{r}^{e_r}$, $b = p_{r+1}^{e_{r+1}}\cdots p_{m}^{e_m}$ where $1 \leq r < m$. Note that the primes are not necessarily in increasing order. We proceed by induction.
For the base case $k = 1$, using Lemma [Lemma 8](#z(n)=lcm(p^e's)){reference-type="ref" reference="z(n)=lcm(p^e's)"} we have: $$\begin{aligned} z(n) & \ = \ \textnormal{lcm}\left( z \left(p_{1}^{e_1}\right), \ldots, z(p_{r}^{e_r}), z(p_{r+1}^{e_{r+1}}),\ldots, z(p_{m}^{e_m}) \right) \nonumber \\ & \ = \ \textnormal{lcm}\left( \textnormal{lcm}\left(z(p_{1}^{e_1}), \ldots, z(p_{r}^{e_r}) \right), \textnormal{lcm}\left(z\left(p_{r+1}^{e_{r+1}}\right),\ldots, z(p_{m}^{e_m}) \right) \right) \nonumber \\ & \ = \ \textnormal{lcm}\left( z(p_{1}^{e_1}\cdots p_{r}^{e_r}), z(p_{r+1}^{e_{r+1}}\cdots p_{m}^{e_m}) \right) \nonumber \\ &\ = \ \textnormal{lcm}\left( z(a), z(b) \right).\end{aligned}$$ For the inductive step, assume that for some $k \geq 1$, $z^{k}(n) = \textnormal{lcm}\left(z^{k}(a),z^{k}(b)\right)$. We show that $z^{k+1}(n) = \textnormal{lcm}\left(z^{k+1}(a),z^{k+1}(b)\right)$. We have $$\begin{aligned} z^{k+1}(n) & \ = \ z\left(z^k(n) \right) \nonumber \\ & \ = \ z\left(\textnormal{lcm}(z^{k}(a),z^{k}(b)) \right) \nonumber \\ & \ = \ \textnormal{lcm}\left(z(z^{k}(a)),z(z^{k}(b)) \right) \quad\quad \text{by Lemma 9} \nonumber\\ & \ = \ \textnormal{lcm}\left( z^{k+1}(a),z^{k+1}(b) \right).\end{aligned}$$ ◻ Now we are ready to prove Theorem [Theorem 5](#allngotofp){reference-type="ref" reference="allngotofp"}. *Proof of Theorem [Theorem 5](#allngotofp){reference-type="ref" reference="allngotofp"}.* Suppose that $n$ is the smallest positive integer with undefined fixed point order. We first treat the case where $n=ab$ with $\gcd(a,b)=1$ and $a,b \geq 2$, and then the cases where $n$ is a prime power. *Case 1:* First suppose that $n$ has at least two distinct primes in its prime factorization, so $n$ can be written $n = ab$ where $\gcd(a,b) = 1$ and $a,b >1$. Since $a,b < n$, we know that $a,b$ have finite fixed point order. Suppose the fixed point order of $a$ is $c$ and the fixed point order of $b$ is $d$. Let $k = \text{max}(c,d)$.
Then by Lemma [Lemma 19](#z^k(n)=lcm(z^k(a),z^k(b))){reference-type="ref" reference="z^k(n)=lcm(z^k(a),z^k(b))"} we have $z^k(n) = \textnormal{lcm}\left(z^k(a), z^k(b) \right)$, telling us $z^k(n)$ is the least common multiple of two fixed points and hence, by Lemma [Lemma 9](#general z(n)=lcm(p^e's)){reference-type="ref" reference="general z(n)=lcm(p^e's)"}, itself a fixed point, which is a contradiction. *Case 2:* Now suppose that $n = p$ for some prime $p$. Notice that $p \neq 2$ since we prove in Lemma [Lemma 18](#all2to12){reference-type="ref" reference="all2to12"} that powers of $2$ reach the fixed point 12 in finitely many iterations of $z$. By Lemma [Lemma 10](#z(p)leqp+1){reference-type="ref" reference="z(p)leqp+1"}, $z(p)\leq p+1$. Note that $z(p) \neq p$, or else $p$ would have fixed point order of $0$. Thus, $z(p) = p+1$ or $z(p) < p$. Since we are assuming $p$ does not iterate to a fixed point, neither does $z(p)$. Thus $z(p)$ is not a power of $2$ since powers of $2$ iterate to a fixed point by Lemma [Lemma 18](#all2to12){reference-type="ref" reference="all2to12"}. Thus if $z(p) = p+1$ then $z(p) = 2^r \cdot t$ where $r \in \mathbb{Z}^+$ (since $p+1$ is even) and $t \in \mathbb{Z}, t \geq 3$, $\gcd(2,t)=1$ and $2^r, t < p$. So by the same argument as in Case 1, $z(p)$ has finite fixed point order, so $p$ also has finite fixed point order since it reaches a fixed point after one more iteration of $z$ than $z(p)$. If $z(p) < p$ then $z(p)$ has finite fixed point order by the assumption that $p$ is the smallest integer with undefined fixed point order. Thus $p$ also has finite fixed point order. *Case 3:* Now suppose $n = p^e$ where $p$ is prime and $e \geq 2$. From Lemmas [Lemma 14](#z(p^e)=z(p^r)z(p)){reference-type="ref" reference="z(p^e)=z(p^r)z(p)"} and [Lemma 11](#z(n)leq2n){reference-type="ref" reference="z(n)leq2n"} we know that for some $r \in \mathbb{Z}_{\geq 0}, r<e$, we have $z(p^e) = p^r z(p)$. As $e>1$, we have $z(p) \leq p+1 < p^e$ by Lemma [Lemma 10](#z(p)leqp+1){reference-type="ref" reference="z(p)leqp+1"}. Thus, $z(p)$ has finite fixed point order.
Notice that $p^r<p^e$, so $p^r$ also has finite fixed point order. Let $h = \max (\text{fixed point order of } z(p), \text{fixed point order of } p^r)$. Note that $\gcd (z(p),p^r)=1$ since $z(p)$ is relatively prime to $p$ by Lemma [Lemma 12](#z(p)relprimetop){reference-type="ref" reference="z(p)relprimetop"}. Then by Lemma [Lemma 19](#z^k(n)=lcm(z^k(a),z^k(b))){reference-type="ref" reference="z^k(n)=lcm(z^k(a),z^k(b))"}, $$z^{h+1}(p^e) \ = \ z^{h}\left(z\left(p^{e}\right)\right) \ = \ z^h \left( p^r z(p)\right) \ = \ \textnormal{lcm}\left( z^h(p^r), z^h(z(p)) \right).$$ Thus, $p^e$ iterates to a fixed point within $h+1$ iterations of $z$, so $p^e$ has finite fixed point order. ◻ # Acknowledgements The authors were supported by the National Science Foundation under Grants No. DMS-2241623 and DMS-1947438 while in residence at Williams College in Williamstown, MA. # Appendix {#appendix .unnumbered} 1. [\[2,4timespow5iterateto12\]]{#2,4timespow5iterateto12 label="2,4timespow5iterateto12"} We first prove that $2^{2} \cdot 5^{b}$ iterates to the fixed point $12 \cdot 5^{b}$. Observe: $$\begin{aligned} z^{2} \left(4 \cdot 5^{b} \right) & \ = \ z \left( z\left( 4 \cdot 5^{b} \right) \right) \nonumber\\ & \ = \ z \left( \textnormal{lcm}\left( z(4), z(5^{b}) \right) \right) \nonumber\\ & \ = \ z \left( \textnormal{lcm}\left( 6, 5^{b} \right) \right) \nonumber\\ & \ = \ z \left( 6 \cdot 5^{b} \right) \nonumber\\ & \ = \ \textnormal{lcm}\left( z(2), z(3), z(5^{b}) \right) \nonumber\\ & \ = \ \textnormal{lcm}\left( 3, 4, 5^{b} \right) \nonumber\\ & \ = \ 12 \cdot 5^{b}. \end{aligned}$$ 2. Next we prove that $2 \cdot 5^{b}$ iterates to the fixed point $12 \cdot 5^{b}$.
Observe: $$\begin{aligned} z^{4} (2 \cdot 5^{b}) & \ = \ z^{3}\left(z(2 \cdot 5^{b})\right) \nonumber\\ & \ = \ z^{3}\left( \textnormal{lcm}\left( z(2), z(5^{b}) \right) \right) \nonumber\\ & \ = \ z^{3}\left( \textnormal{lcm}\left( 3, 5^{b} \right) \right) \nonumber\\ & \ = \ z^{3} \left( 3 \cdot 5^{b} \right) \nonumber\\ & \ = \ z^{2} \left( z\left( 3 \cdot 5^{b} \right) \right) \nonumber\\ & \ = \ z^{2} \left( \textnormal{lcm}\left( z(3), z(5^{b}) \right) \right) \nonumber\\ & \ = \ z^{2} \left( \textnormal{lcm}\left(4, 5^{b} \right) \right) \nonumber\\ & \ = \ z^{2} \left(4 \cdot 5^{b} \right) \nonumber\\ & \ = \ 12 \cdot 5^{b}. \end{aligned}$$ John D. Fulton and William L. Morris. "On arithmetical functions related to the Fibonacci numbers". In: *Acta Arithmetica* 2.16 (1969), pp. 105--110. Jirí Klaška. "Donald Dines Wall's Conjecture". In: *Fibonacci Quarterly* 56.1 (Feb. 2018), pp. 43--51. <https://www.fq.math.ca/Papers/56-1/Klaska10917.pdf>. Edouard Lucas. "Théorie des fonctions numériques simplement périodiques". In: *American Journal of Mathematics* (1878), pp. 289--321. doi: [10.2307/2369373](https://www.jstor.org/stable/2369373?origin=crossref). <https://doi.org/10.2307/2369373>. Florian Luca and Emanuele Tron. "The Distribution of Self-Fibonacci Divisors". In: Alaca, A., Alaca, S., Williams, K. (eds.) *Advances in the Theory of Numbers*. Fields Institute Communications, vol 77. Springer, New York, NY. <https://doi.org/10.1007/978-1-4939-3201-6_6> Diego Marques. "Fixed points of the order of appearance in the Fibonacci sequence". In: *Fibonacci Quarterly* 50.4 (Nov. 2012), pp. 346--352. <https://www.mathstat.dal.ca/FQ/Papers1/50-4/MarquesFixedPoint.pdf>. Diego Marques. "Sharper upper bounds for the order of appearance in the Fibonacci sequence". In: *Fibonacci Quarterly* 51.3 (Aug. 2013), pp. 233--238. <https://www.fq.math.ca/Papers1/51-3/MarquesSharperUpperBnds.pdf>. Diego Marques.
MSC2010: 60B10, 11B39 (primary) 65Q30 (secondary)
--- abstract: | We discuss Poisson structures on a weighted polynomial algebra $A:=\Bbbk[x, y, z]$ defined by a homogeneous element $\Omega\in A$, called a *potential*. We start by classifying potentials $\Omega$ of degree $\deg(x)+\deg(y)+\deg(z)$ with any positive weight $(\deg(x),\deg(y),\deg(z))$ and list all of those with an isolated singularity. Based on the classification, we study the rigidity of $A$ in terms of graded twistings and classify Poisson fraction fields of $A/(\Omega)$ for irreducible potentials. Using Poisson valuations, we characterize the Poisson automorphism group of $A$ when $\Omega$ has an isolated singularity, extending a nice result of Makar-Limanov-Turusbekova-Umirbaev. Finally, Poisson cohomology groups are computed for new classes of Poisson polynomial algebras. address: - "Huang: Department of Mathematics, Rice University, Houston, TX 77005, USA" - "Tang: Department of Mathematics & Computer Science, Fayetteville State University, Fayetteville, NC 28301, USA" - "Wang: Department of Mathematics, Louisiana State University, Baton Rouge, Louisiana 70803, USA" - "Zhang: Department of Mathematics, Box 354350, University of Washington, Seattle, Washington 98195, USA" author: - Hongdi Huang, Xin Tang, Xingting Wang, and James J. Zhang title: Weighted Poisson polynomial rings in dimension three --- # Introduction {#introduction .unnumbered} Poisson algebras are used in classical mechanics to describe the evolution of observables in Hamiltonian systems. They have been widely studied concerning topics such as (twisted) Poincaré duality and modular derivations [@LuWW1; @LvWZ2; @Wa], Poisson Dixmier-Moeglin equivalence [@BLSM; @Go1; @GLa; @LaS; @LuWW2], Poisson enveloping algebras [@Ba1; @Ba2; @Ba3; @LvWZ1; @LvWZ3], noncommutative discriminant [@BY1; @BY2; @LY1; @NTY] and so on. They have also been utilized to study the representation theory of PI Sklyanin algebras [@WWY1; @WWY2].
Additionally, Poisson algebras have been investigated in the context of the isomorphism problem, invariant theory, and the cancellation problem [@GVW; @GW; @GWY; @Ma]. Let ${\Bbbk}$ be an algebraically closed base field of characteristic zero throughout. Quadratic Poisson structures with $\deg(x_i)=1$ for all $i$ on $\Bbbk[x_1,\ldots, x_n]$ have been applied in various fields, as discussed in the papers [@Bo; @Go2; @GLe; @LX; @Py] and their references. Note that the deformation quantization of such a Poisson structure is the homogeneous coordinate ring of a quantum ${\mathbb P}^{n-1}$. It is Calabi-Yau if the corresponding formal Poisson bracket on $\Bbbk[x_1,\ldots,x_n]$ is unimodular [@Do]. The modular derivations can be considered a Poisson analog of the Nakayama automorphisms of skew Calabi-Yau algebras. For more information on skew Calabi-Yau algebras, see [@RRZ1; @RRZ2] and related references. A notable family of quadratic Poisson structures is the elliptic Poisson algebras. These were independently introduced by Feigin and Odesskii [@FO98] and Polishchuk [@Po]. Elliptic Poisson algebras can be viewed as semi-classical limits of the elliptic Sklyanin algebras studied by Feigin and Odesskii [@FO89]. A unimodular Poisson structure on $\Bbbk[x, y, z]$ is determined by a potential $\Omega \in \Bbbk[x, y, z]$. Elliptic Poisson algebras in $3$ variables are the particular case defined by a homogeneous potential $\Omega$ of degree $3$ with an isolated singularity at the origin. Van den Bergh earlier considered these elliptic Poisson algebras in his work on Hochschild homology of 3-dimensional Sklyanin algebras [@VdB]. Makar-Limanov-Turusbekova-Umirbaev computed the Poisson automorphism groups of these algebras [@MTU] when all the generators have degree one. Recently, it has been proved that every connected graded Poisson polynomial algebra is a twist of an unimodular Poisson polynomial algebra [@TWZ].
In associative algebra, Stephenson [@St1; @St2] has classified and studied the weighted version of connected graded Artin-Schelter regular (or skew Calabi-Yau) algebras of global dimension three. However, there is little knowledge about the Poisson analog of these algebras. ## General setup {#zzsec0.1} In this paper, the graded Poisson polynomial algebras $A$ in dimension three are given by a weighted homogeneous potential $\Omega$. We relax two assumptions made in the elliptic case above: (a) the generators being in degree one and (b) the potential $\Omega$ having isolated singularities. We study those $A$ that exhibit Poisson cohomological behavior similar to that of elliptic Poisson algebras. Additionally, we are interested in the Poisson automorphism groups of $A$ and some of its quotients $A/(\Omega-\xi)$ where $\xi\in \Bbbk$. Set $\mathbb N=\{0, 1, 2,\cdots\}$ and $\mathbb N_+=\{1, 2, 3,\cdots\}$. An algebra $A$ is said to be *connected graded* if $A=\bigoplus_{i\ge 0} A_i$ is $\mathbb N$-graded and $A_0={\Bbbk}$. If so, we use $|f|$ to denote the degree of a homogeneous element $f\in A$. We say $A$ is a *connected $w$-graded Poisson algebra* (for $w\in \mathbb Z$) if $A=\bigoplus_{i\ge 0} A_i$ is a connected graded algebra where the Poisson bracket of $A$ satisfies $\{A_i,A_j\}\subseteq A_{i+j-w}$ for all $i,j\ge 0$. If $w=0$, we simply say $A$ is a connected graded Poisson algebra. Below is a general setup for some of the main objects in this paper. **Hypothesis 1**. 1. *Let $A:={\Bbbk}[x, y, z]$ be a weighted polynomial algebra with $\deg(x)=a, \deg(y)=b, \deg (z)=c$ for $a,b,c\in \mathbb N_+$.* 2. *Let $\Omega\in A$ be a nonzero homogeneous element of degree $n>0$. We call $\Omega$ a potential of $A$ and let $w:=n-a-b-c$.* 3. *\[\] Assuming parts (1) and (2), let $A_{\Omega}$ (or simply $A$) denote the connected $w$-graded unimodular Poisson algebra.
Its Poisson bracket is determined by the weighted homogeneous potential $\Omega$ such that $$\notag \{f,g\}~=~\det \begin{pmatrix} f_{x} & f_{y} & f_{z}\\ g_{x} & g_{y} & g_{z}\\ \Omega_{x} & \Omega_{y} & \Omega_{z} \end{pmatrix}\quad \text{for all $f,g\in A_\Omega$}.$$* 4. *Throughout most of the paper, except , we assume $n=a+b+c$.* **Remark 2**. To save space, the following will be implemented. 1. In most mathematical statements, including theorems and propositions, it is assumed that $a \leq b \leq c$ and $\gcd(a,b,c) = 1$. 2. We will use tables, such as , at the end of the Introduction and in the to present results concisely. 3. Some computations will not be shown but are available from the authors. 4. When analyzing arguments divided into cases, authors typically provide detailed analysis for one case and skip details for others if the proofs are similar. All details can be provided if necessary. ## Classification {#zzsec0.2} We classify all potentials $\Omega$ in ${\Bbbk}[x,y,z]$ of degree $a+b+c$ (refer to for details). The classification for $(a, b, c)=(1, 1, 1)$ is well-known [@BM; @DH; @DML; @KM; @LX]. We characterize all possible weights $(a, b, c) \in \mathbb{N}_{+}^{3}$ on $\Bbbk[x, y, z]$ that guarantee the existence of potentials $\Omega$ of degree $a+b+c$ with isolated singularities \[\]. Together with , we identify all three possible parametric families of such potentials. **Theorem 3** (). *Assume . 
Below is a complete list of potentials with isolated singularities of degree $a+b+c$ (up to graded automorphisms):* - *$\Omega_1=x^3+y^3+z^3+\lambda xyz$ for $(-\lambda)^3\neq 3\cdot 3\cdot 3$ and $(a,b,c)=(1,1,1)$.* - *$\Omega_2=x^4+y^4+z^2+\lambda xyz$ for $(-\lambda)^4\neq 4\cdot 4\cdot 2^2$ and $(a,b,c)=(1,1,2)$.* - *$\Omega_3=x^6+y^3+z^2+\lambda xyz$ for $(-\lambda)^6\neq 6\cdot 3^2\cdot 2^3$ and $(a,b,c)=(1,2,3)$.* Note that these potentials correspond to the homogeneous coordinate rings of Veronese embeddings of elliptic curves in a weighted projective plane. The embeddings are defined by a divisor $D=kP$ with a marked point $P\in E$ and $k=3,2,1$. Our result will hopefully have independent interest in weighted projective spaces. ## Rigidities {#zzsec0.3} In [@TWZ], the Poisson version of graded twists of graded associative algebras introduced by [@Zh] was used to define a numerical invariant $rgt(A)$ \[\] for any $\mathbb{Z}$-graded Poisson algebra $A$. This invariant measures the size of the vector space of graded twists of $A$. If $rgt(A)=0$, then all graded twists of $A$ are isomorphic to $A$, and we call $A$ rigid. When $A={\Bbbk}[x,y,z]$ is a polynomial Poisson algebra generated in degree one, it was shown in [@TWZ Corollary 6.7] that any connected graded unimodular Poisson structure on $A$ is rigid if and only if the associated potential $\Omega$ is irreducible. We generalize this equivalence to the weighted case. **Theorem 4** (). *Assume . Then $rgt(A_{\Omega})=0$ if and only if $\Omega$ is irreducible.* In Subsections 4.2 and 4.3, we will briefly discuss two other types of rigidities. ## Automorphism problem {#zzsec0.4} One of the aims of this paper is to present universal methods for studying Poisson algebras on essential subjects such as Poisson automorphism groups \[\] and Poisson cohomologies \[\].
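For concreteness, the unimodular bracket of Hypothesis 1(3) can be verified symbolically for the potential $\Omega_3$ listed in Theorem 3. The SymPy sketch below (ours, not from the paper) checks that $\{x,y\}=\Omega_z$, $\{y,z\}=\Omega_x$, that $\Omega$ is Poisson central, and that the Jacobi identity holds on the generators.

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')
Omega = x**6 + y**3 + z**2 + lam*x*y*z   # Omega_3 with weights (a,b,c) = (1,2,3)

def bracket(f, g):
    # {f,g} = det of the Jacobian matrix of (f, g, Omega) in (x, y, z)
    return sp.Matrix([[sp.diff(h, v) for v in (x, y, z)]
                      for h in (f, g, Omega)]).det()

# {x,y} = Omega_z and {y,z} = Omega_x
assert sp.expand(bracket(x, y) - sp.diff(Omega, z)) == 0
assert sp.expand(bracket(y, z) - sp.diff(Omega, x)) == 0
# Omega is Poisson central (two rows of the determinant coincide)
assert sp.expand(bracket(Omega, x)) == 0
# Jacobi identity on the generators
jacobi = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
assert sp.expand(jacobi) == 0
```

The same check works verbatim for $\Omega_1$ and $\Omega_2$; only the potential changes.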
In [@HTWZ1], Poisson valuations were introduced and used to solve rigidity, automorphism, isomorphism, and embedding problems for various Poisson algebras/fields. We use them to determine Poisson automorphism groups for $A_{\Omega}$ and its quotient $A_{\Omega}/(\Omega-\xi)$ when $\Omega$ is a potential of degree $a+b+c$ with isolated singularity. Our approach offers an alternative method to determine the automorphism groups of three-dimensional elliptic Poisson algebras when the Poisson algebra $A_{\Omega}$ is generated in degree one, differing from [@MTU]. **Theorem 5** (). *Assume . Suppose $\Omega$ has an isolated singularity. Denote by $P_{\Omega-\xi}=A_{\Omega}/(\Omega-\xi)$ for $\xi\in {\Bbbk}$ and write $P_{\Omega-0}$ simply as $P_{\Omega}$.* - *Every Poisson automorphism of $P_{\Omega}$ is graded and every Poisson automorphism of $P_{\Omega-\xi}$ is linear when $\xi\ne0$.* - *Every Poisson automorphism of $A_{\Omega}$ is graded.* *Moreover, the explicit automorphism groups of $P_{\Omega-\xi}$ and $A_{\Omega}$ are listed in Lemmas [Lemma 29](#zzlem3.11){reference-type="ref" reference="zzlem3.11"}--[Lemma 32](#zzlem3.14){reference-type="ref" reference="zzlem3.14"}.* If $\Omega$ has no isolated singularity, finding the Poisson automorphism group of $A_\Omega$ becomes challenging. According to , the Poisson fraction fields $Q=Q(P_\Omega)$ can be divided into three families. For convenience, we call $\Omega$ *Weyl type* if $Q$ is isomorphic to the Poisson Weyl field $K_{Weyl}:={\Bbbk}(x,y)$ with $\{x,y\}=1$, and we call $\Omega$ *quantum type* if $Q$ is isomorphic to the Poisson quantum field $K_q:={\Bbbk}(x,y)$ with $\{x,y\}=qxy$ for some $q\in {\Bbbk}^\times$. When $\Omega$ is of Weyl type, we can construct many ungraded Poisson automorphisms of $A$ (see ). However, we cannot construct any ungraded Poisson automorphisms of $A$ if $\Omega$ is of quantum type. 
We are curious whether the Poisson automorphisms of their Poisson algebras are similar to those of elliptic Poisson algebras, which are all graded (see ). ## Poisson cohomologies {#zzsec0.5} Poisson cohomologies can be notoriously difficult to calculate. We characterize Poisson cohomology groups for various Poisson algebras in dimension three. Inspired by $PH^1$-minimality from [@TWZ], we introduce the concept of $uPH^2$-vacancy in to control the second Poisson cohomology. The property of being $uPH^2$-vacant was implicitly used by Pichereau in [@Pi1 Remark 3.9] for $\Omega$ having an isolated singularity. In the theorem below, we generalize [@TWZ Theorem 0.6] to the weighted case. We call an irreducible potential $\Omega$ in ${\Bbbk}[x, y, z]$ *balanced* if $\Omega_x \Omega_y\Omega_z\ne 0$ for any choice of graded generators $(x, y, z)$; otherwise, we call it *non-balanced*. **Theorem 6**. *Let $A:={\Bbbk}[x, y, z]$ be a connected graded Poisson polynomial algebra satisfying (1 and 4). Denote by $Z$ the Poisson center of $A$. Then, the following statements are equivalent.* 1. *$rgt(A)=0$ and any homogeneous Poisson derivation of $A$ with negative degree is zero.* 2. *Any graded twist of $A$ is isomorphic to $A$, and any homogeneous Poisson derivation of $A$ with negative degree is zero.* 3. *The Hilbert series of the graded vector space of Poisson derivations of $A$ is $\frac{1}{(1-t^a)(1-t^b)(1-t^c)}$.* 4. *$h_{PH^1(A)}(t)$ is $\frac{1}{1-t^n}$.* 5. *$h_{PH^1(A)}(t)$ is equal to $h_Z(t)$.* 6. *Every Poisson derivation $\phi$ of $A$ has a decomposition $\phi=zE+H_a$, where $z\in Z$ and $a\in A$. Here, $z$ is unique, and $a$ is unique up to a Poisson central element.* 7. *Every Poisson derivation of $A$ that vanishes on $Z$ is Hamiltonian.* 8. *$A$ is an unimodular Poisson algebra determined by an irreducible potential $\Omega$ that is balanced.* 9. *$h_{PH^3(A)}(t)-h_{PH^2(A)}(t)=t^{-n}$.* 10.
*$A$ is unimodular and $h_{PH^2(A)}(t)=\frac{1}{t^n}\left(\frac{(1-t^{a+b})(1-t^{a+c}) (1-t^{b+c})}{(1-t^n)(1-t^{a})(1-t^{b})(1-t^{c})}-1\right)$.* 11. *$A$ is unimodular and $h_{PH^3(A)}(t)=\frac{(1-t^{a+b})(1-t^{a+c})(1-t^{b+c})} {t^n(1-t^n)(1-t^{a})(1-t^{b})(1-t^{c})}$.* 12. *$A$ is $uPH^2$-vacant.* It is important to note that Van den Bergh [@VdB] already computed the Poisson (co)homology of $A$ for the case where $A$ is generated in degree one and $\Omega$ is a cubic polynomial with isolated singularity. It was later computed by Pichereau [@Pi1] for an arbitrary weighted homogeneous $\Omega$ with isolated singularities. Additional computations can be found in [@Pe1; @Pe2]. As stated in , the calculation of Poisson cohomology for $A_{\Omega}$ is possible for both quantum and balanced Weyl types of $\Omega$, regardless of whether or not isolated singularities exist. In the diagram below, all potentials of degree $a+b+c$ listed in are divided into irreducible potentials in the black circle and reducible ones in the complement, where irreducible potentials are subdivided into three types: isolated singularity, quantum, and (balanced and non-balanced) Weyl. (Figure legend: isolated singularity; quantum type; non-balanced Weyl type; balanced Weyl type; Weyl type = non-balanced + balanced; reducible.) We conclude the introduction with a table summarizing the main results for each type of $\Omega$. Indeed, the table provides information concerning the smoothness of the projective curve $\Omega=0$ (see ), the Gelfand-Kirillov dimension (GKdim) of $A_{sing}\colon=A/(\Omega_{x}, \Omega_{y}, \Omega_{z})$, the rigidity of $A$ (see ), the Poisson automorphism group of $A$ (see and ), the Poisson fraction field of $A/\Omega$ (see ), the $uPH^2$-vacancy (see ), and the $K_1$-sealedness (see ).

| $\Omega$-type | Proj. curve $\Omega=0$ | GKdim of $A_{sing}$ | $rgt(A)$ | $\mathop{\mathrm{Aut}}(A)$ graded? | $Q(P_{\Omega})$ | $uPH^2$-vacant | $K_1$-sealed | $h_{PH^i(A)}$ computed? |
|---|---|---|---|---|---|---|---|---|
| Isolated singularity | smooth | 0 | 0 | yes | $S_{\zeta, \lambda}$ | yes | yes | yes |
| Quantum type | nodal singularity | 1 | 0 | ? | $K_q$ | yes | ? | yes |
| Balanced Weyl type | cusp singularity | 1 | 0 | no | $K_{Weyl}$ | yes | ? | yes |
| Non-balanced Weyl type | cusp singularity | 1 | 0 | no | $K_{Weyl}$ | no | no | no |
| Reducible | reducible | $\{1,2\}$ | $\leq -1$ | some | undefined | no | no | some |

: [\[t:potential\]]{#t:potential label="t:potential"} Potential $\Omega$

This paper is divided into six sections. Section 1 provides basic notations and results for Poisson algebras and briefly describes Poisson valuations. In Section 2, we classify all homogeneous polynomials $\Omega$ in ${\Bbbk}[x,y,z]$ such that $|\Omega|=|x|+|y|+|z|$ and prove Theorem 0.3. In Section 3, we prove Theorem 0.5; in Section 4, we prove results about several different rigidities, including Theorem 0.4. We study $K_1$-sealedness and $uPH^2$-vacancy in Section 5, which will be useful for the following section. In Section 6, we establish the results on Poisson cohomology for Poisson algebras ${\Bbbk}[x,y,z]$ with irreducible potentials $\Omega$ of degree $|x|+|y|+|z|$ as summarized in Theorem 0.6. # Preliminaries {#zzsec1} ## Terminology {#zzsec1.1} Let $A={\Bbbk}[x_1,\ldots,x_n]$ be a polynomial algebra. We denote by $\mathfrak X^\bullet(A)=\oplus_{i=0}^{\infty} \mathfrak X^{i}(A)$ the set of skew-symmetric multi-derivations of $A$.
For $P\in \mathfrak X^p(A)$ and $Q\in \mathfrak X^q(A)$, their *wedge product* $P\wedge Q\in \mathfrak X^{p+q}(A)$ is the skew-symmetric $(p+q)$-derivation of $A$, defined by $$\begin{aligned} (P\wedge Q)&(a_1,\ldots,a_{p+q})~:=~\sum_{\sigma\in \mathbb S_{p,q}} {\rm sgn}(\sigma) P(a_{\sigma(1)},\cdots,a_{\sigma(p)}) \,Q(a_{\sigma(p+1)},\ldots, a_{\sigma(p+q)})\end{aligned}$$ for all $a_1,\ldots,a_{p+q}\in A$, where $\mathbb S_{p,q}\subset \mathbb S_{p+q}$ is the set of all $(p,q)$-shuffles. Note that $(\mathfrak X^\bullet(A),\wedge)$ is a graded commutative algebra [@LPV Proposition 3.1]. Recall that the *Schouten bracket* on $\mathfrak X^\bullet(A)$ is given by $$[\cdot \, , \,\cdot]_S: \mathfrak X^p(A)\times \mathfrak X^q(A)\to \mathfrak X^{p+q-1}(A)$$ such that $$\begin{aligned} [P,Q]_S(a_1,&\ldots,a_{p+q-1}) ~=~\sum_{\sigma\in \mathbb S_{q,p-1}}{\rm sgn}(\sigma)P \left(Q(a_{\sigma(1)},\ldots, a_{\sigma(q)}), a_{\sigma(q+1)},\ldots,a_{\sigma(q+p-1)}\right)\\ &-(-1)^{(p-1)(q-1)}\sum_{\sigma\in \mathbb S_{p,q-1}}{\rm sgn} (\sigma)Q\left(P(a_{\sigma(1)},\ldots, a_{\sigma(p)}), a_{\sigma(p+1)},\ldots,a_{\sigma(p+q-1)}\right) \end{aligned}$$ for any $P\in \mathfrak X^p(A), Q\in \mathfrak X^q(A)$ and $p,q\in \mathbb N$. Note that $(\mathfrak X^\bullet(A),\wedge,[-,-]_S)$ is a Gerstenhaber algebra [@LPV Proposition 3.7]. Let $\Omega^1(A)$ be the module of Kähler differentials over $A$ and $\Omega^p(A)=\wedge^p_A \Omega^1(A)$ for $p\ge 2$. The differential $d: A\to \Omega^1(A)$ extends to a well-defined differential of the complex $\Omega^\bullet (A)$ and the complex $(\Omega^\bullet ,d)$ is called the algebraic de Rham complex of $A$.
For every $P\in \mathfrak {X}^p(A)$, the *internal product* with respect to $P$, denoted by $\iota_P$, is an $A$-module map $$\iota_P: \Omega^{\bullet}(A)\to \Omega^{\bullet-p}(A)$$ which is determined by $$\notag \iota_P(dF_1\wedge dF_2 \wedge \cdots \wedge dF_k)~=~ \begin{cases} 0& k<p,\\ \sum\limits_{\sigma\in {\mathbb S}_{p,k-p}} {\rm sgn}(\sigma) P(F_{\sigma(1)},\ldots, F_{\sigma(p)}) \\ \qquad\qquad dF_{\sigma(p+1)} \wedge \cdots \wedge dF_{\sigma(k)} \in \Omega^{k-p}(A) &k\geq p \end{cases}$$ for all $dF_1\wedge dF_2 \wedge \cdots \wedge dF_k \in \Omega^k(A)$. Then the *Lie derivative* with respect to $P$ is defined to be $$\notag {\mathcal L}_{P}~=~[\iota_P,d]: \Omega^{\bullet}(A) \to \Omega^{\bullet-p+1}(A),$$ see [@LPV (3.49)]. Let $\delta\in \mathfrak X^1(A)$ be a derivation of $A$. The *divergence* of $\delta$, denoted by ${\rm div}(\delta)$, is an element in $A$ defined by the equation $$\notag \mathcal L_\delta (\nu) ~= ~{\rm div}(\delta)\nu,$$ where $\nu\in \Omega^n(A)$ is a fixed volume form for $A$. In particular, if we choose $\nu=dx_1\wedge \cdots \wedge dx_n$, from [@TWZ Lemma 1.2(1)] we get $$\begin{aligned} {\rm div}(\delta)~=~\sum_{1\leq i\leq n} \frac{\partial \delta(x_i)}{\partial x_i}.\end{aligned}$$ Let $(A,\pi)$ be a Poisson algebra with $\pi\in \mathfrak X^2(A)$ satisfying $[\pi,\pi]_S=0$. We usually write the corresponding Poisson bracket on $A$ as $\{-,-\}=\pi(-,-)$. A derivation $\delta$ on $A$ is called a *Poisson derivation* if $[\delta,\pi]_S=0$, or equivalently $\delta(\{f,g\})=\{\delta(f),g\} +\{f,\delta(g)\}$ for all $f,g\in A$. There is a special class of Poisson derivations on $A$ called *Hamiltonian derivations*, which is given by $H_f:=\{f,-\}$ for any $f\in A$. The *modular derivation* of $A$ is defined by $$\begin{aligned} \mathfrak m(f)~:=~-\mathop{\mathrm{div}}(H_f)\end{aligned}$$ for all $f\in A$. We call $A$ *unimodular* if $\mathfrak m=0$.
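For potential brackets on ${\Bbbk}[x,y,z]$ the modular derivation vanishes identically. The following SymPy sketch (ours, not from the paper; the potential and test elements are arbitrary choices) checks that $\mathop{\mathrm{div}}(H_f)=0$ for several $f$, so the Jacobian structure is indeed unimodular.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Omega = x**3 + y**3 + z**3 + 5*x*y*z   # a sample cubic potential (lambda = 5)

def bracket(f, g):
    # Jacobian bracket {f,g} = det of the Jacobian of (f, g, Omega)
    return sp.Matrix([[sp.diff(h, v) for v in (x, y, z)]
                      for h in (f, g, Omega)]).det()

def div_hamiltonian(f):
    # div(H_f), with H_f = {f,-} and volume form dx ^ dy ^ dz
    return sum(sp.diff(bracket(f, v), v) for v in (x, y, z))

# the modular derivation m(f) = -div(H_f) vanishes identically
for f in (x, y**2, x*y*z + z**3, x**2 + y - 3*z):
    assert sp.expand(div_hamiltonian(f)) == 0
```

The cancellation behind this is visible by hand: $\{f,x\}=-f_y\Omega_z+f_z\Omega_y$ and cyclic analogues, so the mixed partials in $\mathop{\mathrm{div}}(H_f)$ cancel pairwise.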
When $A$ has a unimodular Poisson structure $\pi$, a duality exists between its Poisson homology and Poisson cohomology. For each $q\ge 0$, the *$q$-th Poisson cohomology* of $A$ is defined to be the $q$-th cohomology of the cochain complex $(\mathfrak X^\bullet(A),d_\pi^\bullet)$ with differential $d_\pi=-[-,\pi]_S$. In particular, for any $f\in \mathfrak X^q(A)$, $d_\pi^q(f)\in \mathfrak X^{q+1}(A)$ is determined by $$\begin{aligned} \label{E1.0.1}\tag{E1.0.1} d_{\pi}^q(f)(a_0,\ldots,a_q) ~=~&\sum_{i=0}^q (-1)^i \{ a_i, f(a_0,\ldots,\widehat{a_i},\ldots,a_q)\}\\ &+\sum_{0\leq i< j\leq q} (-1)^{i+j} f(\{a_i,a_j\}, a_0,\ldots, \widehat{a_i},\ldots,\widehat{a_j},\ldots,a_q)\notag\end{aligned}$$ for any $a_0,a_1,\ldots,a_q\in A$. We denote by $$\begin{aligned} PH^q(A)~:=~\ker(d_\pi^q)/\text{\upshape im}(d_\pi^{q-1}). \end{aligned}$$ Let $Pd(A)$ be the Lie algebra of all Poisson derivations of $A$ and let $Hd(A)$ be the Lie ideal of $Pd(A)$ consisting of all Hamiltonian derivations. We also denote by $Z_P(A)$ the Poisson center of $A$. In particular, $$\begin{aligned} PH^0(A)~=~Z_P(A),\quad PH^1(A)~=~Pd(A)/Hd(A).\end{aligned}$$ On the other hand, for each $q\ge 0$, the *$q$-th Poisson homology* of $A$ is defined to be the $q$-th homology of the chain complex $(\Omega^\bullet(A),\partial^\pi)$, where the differentials are given by $\partial^\pi_q=\mathcal L_\pi=[\iota_{\pi}, d]: \Omega^q(A)\to \Omega ^{q-1}(A)$. We denote by $$\begin{aligned} PH_q(A)~:=~\ker(\partial^\pi_q)/\text{\upshape im}(\partial^\pi_{q+1}). \end{aligned}$$ Let us review the concepts of $H$-ozoneness and $PH^1$-minimality for a connected graded Poisson algebra. **Definition 7**. [@TWZ Definition 7.1] [\[zzdef1.1\]]{#zzdef1.1 label="zzdef1.1"} Let $A={\Bbbk}[x_1,\ldots,x_n]$ be a connected graded Poisson algebra with its Poisson center denoted by $Z$. - $\delta\in Pd(A)$ is called *ozone* if $\delta(Z)=0$. - Let $Od(A)$ denote the Lie algebra of all ozone Poisson derivations of $A$.
- We say $A$ is $H$-ozone if $Od(A)=Hd(A)$, namely, any ozone derivation is Hamiltonian. - We say $A$ is $PH^1$-minimal if $PH^1(A)\cong ZE$ as graded $Z$-modules where $E$ is the Euler derivation [\[E1.1.1\]](#E1.1.1){reference-type="eqref" reference="E1.1.1"} below. ## Twists of graded Poisson algebras {#zzsec1.2} Let $A={\Bbbk}[x_1,\ldots,x_n]$ be a graded Poisson polynomial algebra with Poisson bracket $\pi=\{-,-\}$. In [@TWZ §2], the notion of graded twists of $A$ was introduced. For any homogeneous element $f\in A$, we use $|f|$ to denote its degree in $A$. Define the *Euler derivation* $E$ of $A$ by $$\label{E1.1.1}\tag{E1.1.1} E(f)~:=~|f|\, f$$ for all homogeneous elements $f\in A$. We point out that $E$ is a Poisson derivation and $\mathop{\mathrm{div}}(E)=\sum_{i=1}^n \deg(x_i)$. Recall that a derivation $\delta$ on $A$ is said to be a *semi-Poisson derivation* if $$[E\wedge \delta,\pi]_S~=~E\wedge [\delta,\pi]_S~=~0.$$ The set of all graded semi-Poisson derivations (resp. graded Poisson derivations) of $A$ is denoted by $Gspd(A)$ (resp. $Gpd(A)$). When $A$ is a $\mathbb{Z}$-graded Poisson algebra, $Gspd(A)$ is a $\Bbbk$-vector space. For any $\delta\in Gspd(A)$, we can define a new Poisson algebra $A^\delta:=(A,\pi_{new})$, called *a graded twist* of $A$, with $$\label{E1.1.2}\tag{E1.1.2} \pi_{new}~:=~\pi\,+\, E\wedge \delta$$ or namely, $\{f,g\}_{new}=\{f,g\}+E(f)\delta(g)-\delta(f)E(g)$ for all homogeneous elements $f,g\in A$. **Definition 8**. [@TWZ Definition 4.3] [\[zzdef1.2\]]{#zzdef1.2 label="zzdef1.2"} Let $A={\Bbbk}[x_1,\ldots,x_n]$ be a $\mathbb{Z}$-graded Poisson algebra. The *rigidity* of $A$ is defined to be $$\begin{aligned} rgt(A)~:=~1-\dim_{\Bbbk}Gspd(A).\end{aligned}$$ In particular, we say $A$ is *rigid* if $rgt(A)=0$. Let $(A, \pi)$ be a Poisson algebra with Poisson bracket $\pi$. Let $\xi$ be any nonzero scalar. We define a new Poisson bracket $\pi_\xi:=\xi\pi$ or $\{-,-\}_\xi := \xi\{-,-\}$ on $A$. 
Then, it is easy to see that $A':=(A, \pi_\xi)$ is a Poisson algebra. The following lemma shows how Poisson structures and their Poisson cohomologies behave when we replace $A$ (resp. $\pi$) by $A'$ (resp. $\pi':=\pi_{\xi}$). **Lemma 9**. *[@TWZ Lemma 1.5] [\[zzlem1.3\]]{#zzlem1.3 label="zzlem1.3"} Retain the notations as above with $\xi\in {\Bbbk}^\times$. Let $d_{\pi}^q$ (resp. $d_{\pi'}^q$) be the differential of ${\mathfrak X}^{\bullet}(A)$ (resp. ${\mathfrak X}^{\bullet}(A')$) as defined in [\[E1.0.1\]](#E1.0.1){reference-type="eqref" reference="E1.0.1"}. The following is true.* 1. *$d_{\pi'}^q=\xi d_{\pi}^q$ for all $q$.* 2. *$\ker(d_{\pi'}^q)=\ker(d_{\pi}^q)$ for all $q$.* 3. *$\text{\upshape im}(d_{\pi'}^q)=\text{\upshape im}(d_{\pi}^q)$ for all $q$.* 4. *$PH^q(A)=PH^q(A')$ for all $q$.* 5. *$rgt(A_\Omega)=rgt(A_{\xi\Omega})$.* 6. *$A_\Omega$ is $H$-ozone if and only if $A_{\xi\Omega}$ is $H$-ozone.* 7. *$A_\Omega$ is $PH^1$-minimal if and only if $A_{\xi\Omega}$ is $PH^1$-minimal.* ## Notations for Poisson (co)homology in dimension three {#zzsec1.3} We consider the polynomial algebra $A={\Bbbk}[x,y,z]$ with grading $(\deg(x),\deg(y),\deg(z))=(a,b,c)\in (\mathbb N_+)^3$. Note that a connected $w$-graded unimodular Poisson structure $\pi$ on $A$ is determined by a homogeneous polynomial $\Omega$ of degree $n$ (not necessarily equal to $a+b+c$). We write $(A,\pi_\Omega)$ for the corresponding Poisson algebra where the Poisson bracket on $A$ is homogeneous of degree $w:=n-a-b-c$. 
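When $n=a+b+c$ (so $w=0$), the Euler derivation $E$ of [\[E1.1.1\]](#E1.1.1){reference-type="eqref" reference="E1.1.1"} is itself a Poisson derivation, and $\mathop{\mathrm{div}}(E)=a+b+c$. A SymPy check (ours, not from the paper; the weights and potential are those of $\Omega_3$ with $\lambda=7$ chosen arbitrarily):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c = 1, 2, 3                       # weights, as for Omega_3 in Theorem 3
Omega = x**6 + y**3 + z**2 + 7*x*y*z    # homogeneous of degree a + b + c = 6

def bracket(f, g):
    # unimodular bracket {f,g} = det of the Jacobian of (f, g, Omega)
    return sp.Matrix([[sp.diff(h, v) for v in (x, y, z)]
                      for h in (f, g, Omega)]).det()

def E(f):
    # Euler derivation for weights (a, b, c): E(f) = |f| f on homogeneous f
    return a*x*sp.diff(f, x) + b*y*sp.diff(f, y) + c*z*sp.diff(f, z)

f, g = x*y + z, y**2 + x*z
# E is a Poisson derivation since the bracket is homogeneous of degree w = 0
assert sp.expand(E(bracket(f, g)) - bracket(E(f), g) - bracket(f, E(g))) == 0
# div(E) = a + b + c
div_E = sum(sp.diff(d, v) for d, v in zip((a*x, b*y, c*z), (x, y, z)))
assert div_E == a + b + c
```

For $w\neq 0$ the same computation instead gives $E(\{f,g\})=\{E(f),g\}+\{f,E(g)\}-w\{f,g\}$ on homogeneous elements, which is why the graded setting with $n=a+b+c$ is the natural one here.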
So the cochain complex $(\mathfrak X^\bullet(A), d_\pi^\bullet)$, consisting of graded vector spaces of skew-symmetric multi-derivations, is given by $$\label{E1.3.1}\tag{E1.3.1} 0\xlongrightarrow{} \mathfrak X^0(A) \xlongrightarrow{d_\pi^0}\mathfrak X^1(A)[w] \xlongrightarrow{d_\pi^1}\mathfrak X^2(A)[2w] \xlongrightarrow{d_\pi^2}\mathfrak X^3(A)[3w] \xlongrightarrow{}0.$$ Here, we choose the natural isomorphisms of graded vector spaces as follows $$\label{E1.3.2}\tag{E1.3.2} \left \{\begin{aligned} &\mathfrak X^1(A)\xrightarrow{\sim} A[a]\oplus A[b]\oplus A[c] \quad && V\mapsto (V(x),V(y), V(z))\\ &\mathfrak X^2(A)\xrightarrow{\sim} A[b+c] \oplus A[a+c]\oplus A[a+b]\quad && V\mapsto (V(y,z),V(z,x),V(x,y))\\ &\mathfrak X^3(A)\xrightarrow{\sim} A[a+b+c] \quad && V\mapsto (V(x,y,z)).\\ \end{aligned}\right.$$ Using these isomorphisms, it becomes convenient to compute the associated Hilbert series. The elements of $A^{\oplus3}$ are viewed as vector-valued functions on $A$, and we denote such an element by $\overrightarrow{f}\in A^{\oplus3}$. Let $\cdot,\times$ denote the usual inner and cross products, respectively, while $\overrightarrow{\nabla}, \overrightarrow{\nabla}\times$ and ${\rm Div}$ denote respectively the gradient, the curl and the divergence operators.
Therefore, the cochain complex [\[E1.3.1\]](#E1.3.1){reference-type="eqref" reference="E1.3.1"} can be identified as $$\label{E1.3.3}\tag{E1.3.3} \centering \begin{tabular}{ccccc} &$A[w+a]$ & &$A[2w+b+c]$& \vspace*{-2.1mm}\\ $0\xlongrightarrow{} A \xlongrightarrow{\delta^0_\Omega}$ & \hspace*{-2.5mm}$\oplus A[w+b]$ &\hspace*{-2.5mm}$\xlongrightarrow{\delta^1_\Omega}$ & \hspace*{-2.5mm}$ \oplus A[2w+a+c]$ &\hspace*{-2.5mm}$ \xlongrightarrow{\delta^2_\Omega}A[3w+a+b+c]\xlongrightarrow{}0 $\\ &\hspace*{-2.5mm}$\oplus A[w+c]$& &\hspace*{-2.5mm}$\oplus A[2w+a+b]$& \end{tabular}$$ where the differential $\delta_\Omega$ can be written in a compact form $$\begin{aligned} \delta_\Omega^0(f)&~=~ \overrightarrow{\nabla} f\times \overrightarrow{\nabla} \Omega,\quad \text{for}\ f\in A \xrightarrow{\sim}\mathfrak X^0(A),\label{E1.3.4}\tag{E1.3.4}\\ \delta_\Omega^1(\overrightarrow{f})&~=~ -\overrightarrow{\nabla} (\overrightarrow{f}\cdot \overrightarrow{\nabla} \Omega) +{\rm Div}(\overrightarrow{f})\overrightarrow{\nabla} \Omega, \quad \text{for}\ \overrightarrow{f}\in A^{\oplus3}\xrightarrow{\sim} \mathfrak X^1(A),\label{E1.3.5}\tag{E1.3.5}\\ \delta_\Omega^2(\overrightarrow{f})&~=~ -\overrightarrow{\nabla} \Omega\cdot( \overrightarrow{\nabla} \times \overrightarrow{f}) =-{\rm Div}(\overrightarrow{f}\times\overrightarrow{\nabla} \Omega), \quad \text{for}\ \overrightarrow{f}\in A^{\oplus3}\xrightarrow{\sim} \mathfrak X^2(A).\label{E1.3.6}\tag{E1.3.6}\end{aligned}$$ For any graded vector space $M=\oplus_{i\in \mathbb Z} M_i$ that is locally finite, we use $$h_M(t)~=~\sum_{i\in \mathbb Z} \dim_{\Bbbk}(M_i)\, t^i$$ to denote the *Hilbert series* of $M$. Note that the Hilbert series of $A$ is given by $$h_A(t)~=~\frac{1}{(1-t^a)(1-t^b)(1-t^c)}.$$ As a consequence of [\[E1.3.3\]](#E1.3.3){reference-type="eqref" reference="E1.3.3"}, the Poisson cohomology $PH^{\bullet}(A)$ (resp.
$PH_\bullet(A)$) are graded vector spaces, and we denote their Hilbert series by $h_{PH^\bullet(A)}(t)$ (resp. $h_{PH_\bullet(A)}(t)$). By additivity of the Hilbert series of [\[E1.3.1\]](#E1.3.1){reference-type="eqref" reference="E1.3.1"}-[\[E1.3.3\]](#E1.3.3){reference-type="eqref" reference="E1.3.3"}, we have $$\label{E1.3.7}\tag{E1.3.7} \sum_{i=0}^3 (-t^{-w})^{i} h_{PH^i(A)}(t)= -\frac{1}{t^{3w+a+b+c}}\frac{(1-t^{w+a})(1-t^{w+b})(1-t^{w+c})} {(1-t^a)(1-t^b)(1-t^c)}.$$ ## Poisson valuations and filtrations {#zzsec1.4} In [@HTWZ1], the notion of Poisson valuations was introduced to solve rigidity, automorphism, isomorphism, and embedding problems for various classes of Poisson algebras/fields. In this subsection, we recall some basics of Poisson valuations. Let $w$ be an integer. **Definition 10**. [@HTWZ1 Definition 1.1] [\[zzdef1.4\]]{#zzdef1.4 label="zzdef1.4"} Let $K$ be a Poisson algebra (or a Poisson field) over $\Bbbk$. A *$w$-valuation* on $K$ is a map $$\nu: K \to {\mathbb Z}\cup\{\infty\}$$ which satisfies the following properties: for all $f,g\in K$, 1. $\nu(f)=\infty$ if and only if $f=0$, 2. $\nu(f)=0$ for all $f\in \Bbbk^{\times}:=\Bbbk\setminus \{0\}$, 3. $\nu(fg)=\nu(f)+\nu(g)$ (assuming $n+\infty=\infty$ when $n\in {\mathbb Z}\cup\{\infty\}$), 4. $\nu(f+g)\geq \min\{\nu(f),\nu(g)\}$, with equality if $\nu(f) \neq \nu(g)$. 5. $\nu(\{f,g\})\geq \nu(f)+\nu(g)-w$. Note that conditions (1)-(4) mean $\nu$ is an ordinary valuation on $K$. Next, we state the definition of $w$-filtration closely related to the Poisson $w$-valuation on $K$. **Definition 11**. [@HTWZ1 Definition 2.2] [\[zzdef1.5\]]{#zzdef1.5 label="zzdef1.5"} Let $A$ be a Poisson algebra. Let $\mathbb F=\{F_i\,|\, i\in \mathbb Z\}$ be a chain of ${\Bbbk}$-subspaces of $A$.
We say $\mathbb F$ is a *$w$-filtration* of $A$ if it satisfies - $F_i\supseteq F_{i+1}$ for all $i\in \mathbb Z$ and $1\in F_0\setminus F_{1}$; - $F_iF_j\subseteq F_{i+j}$ for all $i,j\in \mathbb Z$; - $\cap_{i\in \mathbb Z} F_i=\{0\}$; - $\cup_{i\in \mathbb Z} F_i=A$; - $\{ F_i, F_j\}\subseteq F_{i+j-w}$ for all $i,j\in \mathbb Z$. Let $\mathbb F=\{ F_i\,|\,i\in \mathbb Z\}$ be a $w$-filtration of the Poisson algebra $A$. The *associated graded algebra* of the $w$-filtration $\mathbb F$ of $A$ is defined to be $${\rm gr}_\mathbb FA:=\bigoplus_{i\in \mathbb Z}\, F_i/F_{i+1}.$$ For any nonzero element $f\in F_i$, we denote by $\overline{f}$ the element $f+F_{i+1}$ in the $i$th degree component $({\rm gr}_\mathbb FA)_i:=F_i/F_{i+1}$. It is clear that ${\rm gr}_\mathbb FA$ is a graded algebra. Moreover, by [@HTWZ1 Lemma 2.3], ${\rm gr}_\mathbb FA$ is a $w$-graded Poisson algebra with the induced homogeneous Poisson bracket of degree $-w$ such that $$\{F_i/ F_{i+1}, F_{j}/ F_{j+1}\}\subseteq F_{i+j-w}/ F_{i+j+1-w},$$ namely, $\{({\rm gr}_\mathbb FA)_i, ({\rm gr}_\mathbb FA)_j\} \subseteq ({\rm gr}_\mathbb FA)_{i+j-w}$ for all $i,j\in \mathbb Z$. We call $\mathbb F$ a *good* filtration if ${\rm gr}_\mathbb FA$ is a domain. Given a good $w$-filtration $\mathbb F$ on $A$, we define the notion of a *degree* function, denoted by $\deg: \; A\to \mathbb Z \cup \{\infty\}$, via $$\begin{aligned} \notag \deg(f):=i\ \text{if $f\in F_i\setminus F_{i+1}$\ and \ $\deg(0)=+\infty$.}\end{aligned}$$ One can see that $\deg$ is a $w$-valuation on $A$. Conversely, given a $w$-valuation $\nu$ on $A$, we can define a filtration $\mathbb F^\nu:=\{F^\nu_i\,|\, i\in \mathbb Z\}$ of $A$ (associated to $\nu$) by $$\begin{aligned} \notag F_i^\nu:=\{f\in A\,|\, \nu(f)\ge i\}.\end{aligned}$$ The corresponding associated graded algebra of $A$ is denoted by ${\rm gr}_{\nu}A.$ **Proposition 12**. *[@HTWZ1 Lemma 2.6] [\[zzpro1.6\]]{#zzpro1.6 label="zzpro1.6"} Let $A$ be a Poisson algebra.
There is a one-to-one correspondence between the set of good $w$-filtrations of $A$ and the set of $w$-valuations on $A$ via the above constructions.* In this paper, we will mainly focus on the following special class of Poisson valuations. **Definition 13**. [@HTWZ1 Definition 3.1] [\[zzdef1.7\]]{#zzdef1.7 label="zzdef1.7"} Let $K$ be a Poisson field over ${\Bbbk}$. A $w$-valuation $\nu$ on $K$ is called a *faithful $w$-valuation* if the following hold. 1. The image of $\nu$ is $\mathbb Z\cup \{\infty\}$. 2. $\mathop{\mathrm{GKdim}}(\mathop{\mathrm{gr}}_\nu K)=\mathop{\mathrm{GKdim}}(K)$. 3. The $w$-graded Poisson bracket on $\mathop{\mathrm{gr}}_\nu K$ is nonzero. A Poisson $w$-valuation on a Poisson domain $A$ is faithful if its natural extension to the Poisson fraction field $Q(A)$ is faithful. Note that the above conditions (2) and (3) are different from the original ones [@HTWZ1 Definition 3.1(1)], but it is easy to see that they are equivalent (see [@HTWZ2 Definitions 0.2 and 3.1]). # Classification of potentials $\Omega$ in $\Bbbk[x,y,z]$ {#zzsec2} In this section, we first classify all possible homogeneous polynomials $\Omega$ satisfying , up to some graded automorphism of $A$. We will use the following definition. **Definition 14**. We define the *Jacobian quotient algebra* of $A$ with respect to $\Omega$ to be $$A_{sing}:={\Bbbk}[x,y,z]/(\Omega_x,\Omega_y,\Omega_z).$$ It is clear that $A_{sing}$ is independent of the choice of graded generators $(x,y,z)$. **Lemma 15**. *Assume (1,2,4).* 1. *The following are equivalent.* 1. *There is an irreducible homogeneous potential $\Omega$ in ${\Bbbk}[x,y]$.* 2. *$2<a<b<c=\frac{ab}{\gcd(a,b)}-a-b$.* *In this case, $\Bbbk[x,y]_{c}=0$, and, up to a graded automorphism of $A$, $\Omega=x^{\frac{b}{\gcd(a,b)}}+y^{\frac{a}{\gcd(a,b)}}$.* 2. *Let $\Omega$ denote a nonzero homogeneous polynomial in ${\Bbbk}[x,y]$. The following are equivalent.* 1.
*$A$ has a nonzero graded derivation $\delta$ of degree $0$ satisfying $\mathop{\mathrm{div}}(\delta)=\delta(\Omega)=0$.* 2. *$\Omega$ is reducible.* *Proof.* (1) $(1b)\Rightarrow (1a)$: This is clear by taking $\Omega=x^{\frac{b}{\gcd(a,b)}}+y^{\frac{a}{\gcd(a,b)}}$. $(1a)\Rightarrow (1b)$: Let $\Omega=h(x,y)\in {\Bbbk}[x, y]$ be an irreducible polynomial of degree $a+b+c$. We have $\deg(h(x,y))=a+b+c=ka+lb$ for some $k, l\in \mathbb N$. As a result, $c=(k-1)a+(l-1)b$. If needed, we can divide the degrees of $x$ and $y$ by ${\rm gcd}(a,b)$ and thus assume that ${\rm gcd}(a, b)=1$. If at most one of the $x^{\frac{a+b+c}{a}}$ and $y^{\frac{a+b+c}{b}}$ terms appears in $h(x,y)$, then $h(x,y)=xf(x,y)$ or $h(x,y)=yf(x,y)$ for some non-constant polynomial $f(x,y)\in \Bbbk[x,y]$. In this case, $\Omega=h(x,y)$ is reducible. Next we consider the case where $h(x,y)$ contains both $x^{\frac{a+b+c}{a}}$ and $y^{\frac{a+b+c}{b}}$ terms. We have $a\mid b+c$ and $b\mid a+c$, which implies that $a\mid l$ and $b\mid k$. Say $l=ma$ and $k=nb$ for some $m,n \in \mathbb N$. Thus $a+b+c=(m+n)ab$. So we have $$h(x,y)=\lambda_{m+n}\,x^{(m+n)b}+\lambda_{m+n-1}\,x^{(m+n-1)b}y^a +\cdots+\lambda_1\,x^by^{(m+n-1)a}+\lambda_0\,y^{(m+n)a}$$ for some $\lambda_0,\ldots,\lambda_{m+n}\in {\Bbbk}$. Set $p=x^b$ and $q=y^a$. Then $$h(x,y)=h(p,q)=\lambda_{m+n}\,p^{m+n}+\lambda_{m+n-1}\, p^{m+n-1}q+\cdots+\lambda_0\,q^{m+n}$$ with $\deg(p)=\deg(q)=ab$. If $m+n>1$, then $h(p,q)$ is always reducible. Since we assume $\Omega$ is irreducible, we obtain that $m+n=1$. Then, after a linear transformation, we can assume that $h(x,y)=x^b+y^a$ (which is irreducible since $\gcd(a, b)=1$). Since $1\leq a\leq b\leq c$ and ${\rm gcd}(a,b)=1$, we have $1\leq a<b$. Since $2b< a+b+c=ab$, we have $a>2$. Note that $c=ab-a-b=(a-1)b-a$. Thus, we have $c>b$. Thus we obtain (1b) when ${\rm gcd}(a,b)=1$. Therefore (1b) holds by lifting to the general case when ${\rm gcd}(a,b)> 1$.
One can easily show that the conditions in (1b) imply that ${\Bbbk}[x,y]_c=0$. \(2\) $(2b)\Rightarrow (2a)$: By assumption, $\Omega$ is reducible. By the proof of part (1), $\deg(\Omega)=ka+lb$ for some $k, l\in \mathbb N$. If $k, l\ge 1$, we can let $\delta=x^{k-1}y^{l-1}\frac{\partial}{\partial z}$, which has degree $(k-1)a+(l-1)b-c=0$. Otherwise, we may assume $k=0$ and $\Omega=y^l$ (as $\Omega$ is reducible). Then we let $\delta=z\frac{\partial}{\partial z}-x\frac{\partial}{\partial x}$. Then (2a) holds. $(2a)\Rightarrow (2b)$: Assume to the contrary that $\Omega$ is irreducible. Without loss of generality, let $\Omega=x^b+y^a$ with $\gcd(a,b)=1$ as in part (1). By (1b), we have $m+n=1$ and $a<b<c=ab-a-b$. This (together with ${\Bbbk}[x,y]_c=0$) implies that any graded derivation $\delta$ of $A$ of degree zero must have the form $\delta(x)=\alpha x$, $\delta(y)=\beta y$ and $\delta(z)=\gamma z$ for some $\alpha,\beta,\gamma\in {\Bbbk}$. So $\mathop{\mathrm{div}}(\delta)=\delta(\Omega)=0$ yields that $\delta=0$. This finishes the proof. ◻ **Proposition 16**. *Let $A={\Bbbk}[x, y, z]$ be a weighted polynomial algebra with $\deg x=1, \deg y=1, \deg z=2$. Then the nonzero homogeneous degree $4$ polynomials $\Omega\in A$ can be classified in . In particular, $\Omega$ has an isolated singularity if and only if $\Omega=z^2+xy^3+\lambda x^2y^2+x^3y$ with $\lambda\ne \pm 2$ up to graded isomorphisms of $A$.* *Proof.* Since $\deg(\Omega)=4$, we have $\Omega=l_1z^2+l_2zg(x,y)+h(x,y)$, where $\deg g(x,y)=2$, $\deg h(x,y)=4$ and $l_1, l_2\in {\Bbbk}$. If $l_1\ne 0$, then after a linear transformation of $z$, we can assume that $\Omega=z^2+h(x,y)$. If $l_1=0$ and $l_2\ne 0$, we can assume that $g(x,y)=x^2$ or $xy$ after a further linear transformation of $x$ and $y$. So, we only need to consider the following cases. **Case 1:** $\Omega= z^2+h(x,y)$. If $h(x,y)=0$, then $\Omega=z^2$. If $0\neq h(x,y)$ has a root of multiplicity $4$ in $\mathbb P^1$, then without loss of generality, we can assume that $h(x,y)=x^4$.
If $h(x,y)$ has a root of multiplicity $3$, then we can assume that $h(x,y)=x^3y$ due to the symmetry between $x$ and $y$. If $h(x,y)$ has a root of multiplicity $2$, then we can assume that $h(x,y)=x^2y(y+\lambda x)$ for some $\lambda\in\{0,1\}$. If $h(x,y)$ has no repeated root, then we can assume that $h(x,y)=xy(y+x)(y+kx)$ for some $k\in {\Bbbk}\setminus\{0,1\}$. Then $h(x,y)=xy^3+(k+1)x^2y^2+kx^3y$. By a suitable re-scaling of $x$ and $y$, we obtain $h(x,y)=xy^3+\lambda x^2y^2+x^3y$ for some $\lambda\in{\Bbbk}$. **Case 2:** $\Omega=x^2z+h(x,y)$. After a linear transformation of $z$ and re-scaling of $x,y$ and $z$ if necessary, we can assume that $\Omega=x^2z+\lambda_1xy^3+\lambda_2y^4$ for some $\lambda_1, \lambda_2\in \{0,1\}$. **Case 3:** $\Omega=xyz+h(x,y)$. After a linear transformation of $z$ and re-scaling of $x,y$ and $z$ if necessary, we can have $\Omega=xyz+\lambda_1x^4+\lambda_2y^4$ for some $\lambda_1, \lambda_2\in \{0,1\}$. **Case 4:** $\Omega=h(x,y)$. By the same argument as in Case 1, we can show that $\Omega$ is one of the following forms: $$x^4, \quad x^3y, \quad x^2y^2, \quad x^2y^2+ x^3y, \quad xy^3+\lambda x^2y^2+x^3y\,\,\, \text{for some} \,\lambda\in {\Bbbk}.$$ By direct computation, we can verify that $z^2+xy^3+\lambda x^2y^2+x^3y$ with $\lambda \ne \pm 2$ has an isolated singularity. ◻ **Proposition 17**. *Let $A={\Bbbk}[x, y, z]$ be a weighted polynomial algebra with $\deg x=1, \deg y=2, \deg z=3$. Then the nonzero homogeneous degree $6$ polynomials $\Omega\in A$ can be classified in . In particular, $\Omega$ has an isolated singularity if and only if $\Omega=z^2+y^3+\lambda x^2y^2+x^4y$ with $\lambda\ne \pm2$ up to graded isomorphisms of $A$.* *Proof.* Since $\deg (\Omega)=6$, we have $\Omega=l_1z^2+l_2zg(x,y)+h(x,y)$, where $g(x,y)\in\Bbbk[x,y]$ and $h(x,y)\in\Bbbk[x,y]$ have degrees $3$ and $6$, respectively, and $l_1, l_2\in {\Bbbk}$. If $l_1\ne0$, then by a linear transformation of $z$, we can assume that $\Omega=z^2+h(x,y)$.
If $l_1=0$ and $l_2\ne 0$, by a possible linear transformation of $x$ and $y$, we can have $g(x,y)=x^3$ or $g(x,y)=xy$. We write, in general, $$h(x,y)=w_1y^3+w_2x^2y^2+w_3x^4y+w_4x^6,$$ where $w_i\in {\Bbbk}$ for $1\leq i\leq 4$. So, we only need to consider the following cases. **Case 1:** $\Omega=z^2+h(x,y)$. If $w_1\ne 0$, then we can write $h(x,y)=(y+\alpha x^2)(y+\beta x^2)(y+\gamma x^2)$ for some $\alpha, \beta, \gamma\in {\Bbbk}$. After a linear transformation of $y$, we can assume $h(x,y)=y(y+\alpha x^2)(y+\beta x^2)$. By a possible re-scaling of $x,y$ and $z$, we can assume that $\Omega$ is one of the following forms: $$z^2+y^3,\quad z^2+y^3+x^2y^2, \quad z^2+y^3 +\lambda x^2y^2+x^4y \,\,\, \text{for some} \, \lambda \in {\Bbbk}.$$ If $w_1=0$ and $w_2\neq 0$, then, similarly, we can assume that $h(x,y)=x^2(y+\alpha x^2)(y+\beta x^2)$ for some $\alpha,\beta\in {\Bbbk}$. A further linear transformation of $x,y$ and $z$ yields $\Omega=z^2+x^2y^2$ or $\Omega=z^2+x^2y^2+x^4y$. If $w_1=w_2=0$ and $w_3\neq 0$, then we can assume that $h(x,y)=x^4(y+\alpha x^2)$ for some $\alpha\in {\Bbbk}$. A linear transformation of $y$ yields $\Omega=z^2+x^4y$. If $w_1=w_2=w_3=0$ and $w_4\neq 0$, then by a re-scaling of $x$, we get $\Omega=z^2+x^6$. Finally, if $w_1=w_2=w_3=w_4=0$, then we have $\Omega=z^2$. **Case 2:** $\Omega=x^3z+h(x,y)$. After a linear transformation of $z$, we can assume that $\Omega=x^3z+\lambda_1x^2y^2+\lambda_2y^3$ for some $\lambda_1, \lambda_2\in {\Bbbk}$. It is easy to check that $\Omega$ is one of the following forms: $$x^3z, \quad x^3z+y^3, \quad x^3z+x^2y^2, \quad x^3z+x^2y^2+y^3.$$ **Case 3:** $\Omega=xyz+h(x,y)$. Again, via a linear transformation of $z$, one can assume that $$\Omega=xyz+\lambda_1x^6+\lambda_2y^3$$ for some $\lambda_1, \lambda_2\in {\Bbbk}$.
After re-scaling $x$ and $y$ as needed, we can assume that $\Omega$ is of one of the following forms: $$xyz, \quad xyz+x^6, \quad xyz+y^3, \quad xyz+x^6+y^3.$$ **Case 4:** $\Omega=h(x,y)$. By the same argument as in Case 1, $\Omega$ can be assumed to be one of the following forms: $$y^3, \quad y^3+x^2y^2,\quad y^3+\lambda x^2y^2+x^4y,\quad x^4y, \quad x^2y^2, \quad x^2y^2+x^4y,\quad x^6$$ where $\lambda\in{\Bbbk}$. By a direct computation, we can further verify that $\Omega=z^2+y^3+\lambda x^2y^2+x^4y$ has an isolated singularity if and only if $\lambda\ne \pm2$. ◻ **Theorem 18**. *Let $A={\Bbbk}[x, y, z]$ be a weighted polynomial algebra with $\deg(x)=a, \deg (y)=b, \deg(z)=c$ for $1\leq a\leq b\leq c$. Let $\Omega$ be a nonzero homogeneous polynomial of degree $a+b+c$. Then, up to a graded automorphism of $A$, we have the following:* 1. *[@BM; @DH; @DML; @KM; @LX] If $a=b=c$, then $\Omega$ is one of the forms listed in .* 2. *If $a=b<c$, then $\Omega$ is one of the forms listed in and .* 3. *If $a<b=c$, then every $\Omega$ is reducible and is one of the forms listed in .* 4. *If $a<b<c$, then $\Omega$ is one of the forms listed in and .* *Proof.* $(1)$ Since $a=b=c$, we can reduce the classification of $\Omega$ to the case where the degrees of $x, y$ and $z$ are all equal to $1$. In this case, the classification of $\Omega$ is well-known. See also [@TWZ Corollary 6.7]. $(2)$ Since $a=b<c$, we have $\deg (\Omega)=2a+c<3c$. So we can write $\Omega= z^2f(x,y)+zg(x,y)+h(x,y)$, where $\deg(f(x,y))=a+b-c<a$, $\deg(g(x,y))=2a$ and $\deg(h(x,y))=2a+c$. If $a\nmid c$, then in particular, $c\ne a+b$, whence $f(x,y)=0$. If $h(x,y)\neq 0$, then $a\mid \deg (h(x, y))=2a+c$, so $a\mid c$, a contradiction. So $\Omega=zg(x,y)$ is reducible. By a linear transformation of $x,y$, we can assume that $\Omega=xyz$ or $x^2z$. If $c=ka$ for some integer $k\geq 2$, then $(a, b, c)=(a, a, ka)$.
If $k=2$, then the result is given by Proposition [Proposition 16](#zzpro2.3){reference-type="ref" reference="zzpro2.3"}. Now assume $k>2$. Then we have $\Omega=zg(x,y)+h(x,y)$. We can assume $g(x,y)=x^2$ or $xy$ after a linear transformation of $x,y$. If $\Omega=x^2z+h(x,y)$, after a necessary linear transformation of $z$, we can write $\Omega=x^2z+\lambda_1xy^{k+1}+\lambda_2y^{k+2}$ for $\lambda_1, \lambda_2\in \{0,1\}$. If $\Omega=xyz+h(x,y)$, then similarly, we can write $\Omega=xyz+\lambda_1x^{k+2}+\lambda_2y^{k+2}$ for some $\lambda_1, \lambda_2\in \{0,1\}$. $(3)$ Since $a<b=c$, we have that $\deg (\Omega)=a+2b<3b$. If $a\mid b$, then we have $\Omega=\lambda x^{1+2\frac{b}{a}}+ x^{1+\frac{b}{a}}g(y,z)+xf(y,z)$, where $\lambda\in {\Bbbk}, \deg (f(y,z))=2b$ and $\deg (g(y,z))=b$. Thus $\Omega$ is reducible. If $a\nmid b$, then $\Omega=xf (y, z)$, which is again reducible; after a linear transformation of $y, z$, we can assume that $\Omega=xyz$ or $xy^2$. If $a\mid b$, we can assume that $\Omega=x\Omega_1$ where $\Omega_1=\lambda u^2+ug(y,z)+f(y,z)$ with $u=x^{\frac{b}{a}}$ for some $\lambda \in {\Bbbk}$. We can rewrite $\Omega_{1}$ as $$\Omega_1=k_1z^2+k_2zh_1(y,u)+h_2(y,u)$$ for some $k_1, k_2\in {\Bbbk}$ and $h_1(y,u), h_2(y,u)\in {\Bbbk}[y,u]$. If $k_1\ne 0$, then we can assume that $\Omega$ is one of the following forms: $$xz^2, \quad xz^2+xy^2, \quad xz^2+x^{1+\frac{2b}{a}},\quad xz^2+xy^2+x^{1+\frac{2b}{a}}, \quad x^{1+\frac{b}{a}}y+xz^2.$$ If $k_1=k_2=0$, then $\Omega$ is one of $xy^2$, $x^{1+\frac{2b}{a}}$, $x^{1+\frac{b}{a}}y$, and $xy^2+x^{1+\frac{2b}{a}}$. If $k_1=0$ and $k_2\ne 0$, then $\Omega$ is one of $x^{1+\frac{b}{a}}z$, $x^{1+\frac{b}{a}}z+xy^2$, $xyz$, and $xyz+x^{1+\frac{2b}{a}}$. $(4)$ If $\Omega\in {\Bbbk}[x,y]$, by Lemma 15, the irreducible ones are given by $\Omega=x^{b/d}+y^{a/d}$ where $d=\gcd(a, b)$ and $2<a<b<c=ab/d-a-b$. Moreover, such an irreducible $\Omega$ does not occur unless $c=ma+nb$ for some $m, n\in \mathbb{Z}$ with $c\neq a+b$ and $a\nmid b$.
Let us assume $\Omega\not\in {\Bbbk}[x,y]$ in the remaining argument. Since $a<b<c$, we have that $\deg (\Omega)=a+b+c<3c$. Thus we can assume that $\Omega=z^2f(x,y)+zg(x,y)+h(x,y)$, where $\deg(f(x,y))=a+b-c<a$, $\deg(g(x,y))=a+b$ and $\deg(h(x,y))=a+b+c$. We divide the argument into two cases. **Case 1:** $c=ma+nb$ for some integers $m$ and $n$. **Subcase 1:** If $c=a+b$, then we have $\deg (\Omega)=2c$ and $f(x,y)\in {\Bbbk}$. As a result, we can assume that $\Omega=z^2+h(x,y)$ or $zg(x,y)+h(x,y)$. Since $\deg(h(x,y))=2a+2b<4b$, we can write $h(x,y)=h_3(x)y^3+h_2(x)y^2+h_1(x)y+h_0(x)$ where $h_i(x)$ is a monomial in $x$ of degree $2a+(2-i)b$ for $0\leq i\leq 3$. Let us further assume that $a\nmid b$. After a linear transformation of $x,y$, we can have $g(x,y)=xy$ if $g(x, y)\neq 0$. If $a\nmid 2b$, then $h(x,y)=\lambda x^2y^2$ for some $\lambda \in {\Bbbk}$. Hence, by a linear transformation of $z$, $\Omega$ can be one of the following possible forms: $$z^2, \quad z^2+x^2y^2,\quad xyz.$$ If $a\mid 2b$, then $h(x,y)$ can be one of the following forms: $$x^2y^2,\quad x^{2+\frac{2b}{a}}, \quad x^2y^2+ x^{2+\frac{2b}{a}}.$$ By a linear transformation of $z$, $\Omega$ can be one of the following possible forms: $$z^2, \quad z^2+x^2y^2, \quad z^2+ x^{2+\frac{2b}{a}},\quad z^2+x^2y^2+x^{2+\frac{2b}{a}}, \quad xyz, \quad xyz+x^{2+\frac{2b}{a}}.$$ Now we assume that $a\mid b$. If $b=2a$, then $c=3a$, and this case reduces to the case $(a,b,c)=(1,2,3)$ treated in Proposition 17. We next consider the case $b\neq 2a$. Since $a+b+c=2a+2b<4b$ and $b\neq 2a$, we can write $h(x,y)=\lambda_1x^2y^2+\lambda_2x^{2+\frac{b}{a}}y +\lambda_3x^{2+2\frac{b}{a}}$ for some $\lambda_1, \lambda_2, \lambda_3\in{\Bbbk}$.
If $\Omega=z^2+h(x,y)$, then after a linear transformation of $x, y$, $\Omega$ is one of the following forms: $$z^2, \quad z^2+x^2y^2, \quad z^2+x^{2+\frac{b}{a}}y, \quad z^2+x^{2+\frac{2b}{a}}, \quad z^2+x^2y^2+x^{2+\frac{b}{a}}y.$$ If $\Omega=zg(x,y)+h(x,y)$, then after a linear transformation of $x,y$, we can have $g(x,y)=xy$ or $x^{1+\frac{b}{a}}$. By a linear transformation of $z$, $\Omega$ is one of the following: $$xyz,\quad xyz+x^{2+\frac{2b}{a}}, \quad x^{1+\frac{b}{a}}z,\quad x^{1+\frac{b}{a}}z+x^2y^2.$$ **Subcase 2:** If $c\neq a+b$, then we can assume that $\Omega=g(x,y)z+h(x,y)$ where $\deg(g(x,y))=a+b$ and $\deg(h(x,y))=(m+1)a+(n+1)b$. Set $k=(n+1)b/a$ and $l=(m+1)a/b$ whenever these are integers. Note that $kl=(m+1)(n+1)$. If $a\nmid b$, then we can assume that $g(x,y)=xy$ if $g(x,y)\neq 0$. If $a\mid b$, then we have $g(x,y)=xy$ or $x^{1+\frac{b}{a}}$ if $g(x,y)\neq 0$. By a linear transformation of $z$, $\Omega$ can be reduced to one of the following forms: $$xyz, \quad xyz+x^{m+1+k}, \quad xyz+y^{n+1+l}, \quad xyz+x^{m+1+k}+y^{n+1+l},\quad x^{1+\frac{b}{a}}z$$ and $$x^{1+\frac{b}{a}}z+x^{\frac{b}{a}}y^{n+l},\quad x^{1+\frac{b}{a}}z+x^{\frac{b}{a}}y^{n+l}+ y^{1+n+l}, \quad x^{1+\frac{b}{a}}z+ y^{1+n+l}$$ where the forms involving $k$ (resp. $l$) occur only when $k$ (resp. $l$) is an integer. **Case 2:** If $c\ne ma+nb$ for any $m,n\in \mathbb Z$, then $f(x,y)=0$. Suppose $h(x,y) \ne 0$. Then $a+b+c=\deg (h(x,y))=sa+lb$ for some $s,l\in \mathbb N$. Hence $c=(s-1)a+(l-1)b$, a contradiction. So $h(x,y)=0$. Then $\Omega=zg(x,y)$ is reducible. After a linear transformation of $x, y$, we can assume that $\Omega=xyz$ if $a\nmid b$, and $\Omega= xyz$ or $x^{1+\frac{b}{a}}z$ if $a\mid b$. This completes the proof. ◻ # Poisson fraction fields and automorphism groups {#zzsec3} In this section, we discuss Poisson fraction fields and Poisson automorphism groups related to unimodular Poisson algebras in dimension three. We always assume . **Definition 19**.
We define a map $\pi: A\to \mathfrak X^2(A)$ such that, for each $h\in A$, $\pi_{h}\colon =\pi(h)$ is defined by $$\label{E3.1.1}\tag{E3.1.1} \pi_h(f,g)=\det \begin{pmatrix} f_{x} & f_{y} & f_{z}\\ g_{x} & g_{y} & g_{z}\\ h_{x} & h_{y} & h_{z} \end{pmatrix} (= \det\left(\frac{\partial\left(f,\,g,\,h\right)} {\partial\left(x,\,y,\,z\right)}\right)=\{f,g\}_h) \quad \text{for all $f,g\in A:={\Bbbk}[x,y,z]$}.$$ Note that the definition of $\pi_h$ depends on the generating set $\{x,y,z\}$. It is clear that if a different set of graded generators is used, then the corresponding $\pi_{h}$ will be a scalar multiple of the original $\pi_{h}$. One can check that $[\pi_h,\pi_h]_S=0$, where $[-,-]_S$ denotes the Schouten bracket. In particular, $\pi_\Omega$ is a connected graded Poisson bracket on $A$ if $\Omega$ satisfies . To keep things simple, we introduce the following notation. **Notation 20**. * * - *$A_\Omega:=(A,\pi_\Omega)$: the unimodular Poisson algebra with potential $\Omega$.* - *${\rm Aut}_{gr}(A)$: the group of all graded algebra automorphisms of $A$.* - *${\rm Aut}_{P}(A)$ (resp. ${\rm Aut}_{grP}(A)$): the group of all (resp. graded) Poisson automorphisms of $A=A_\Omega$.* - *$P_{\Omega-\xi}:=A/(\Omega-\xi)$ ($\xi\in {\Bbbk}$): the quotient Poisson algebra.* - *${\rm Aut}_{P}(P_\Omega)$ (resp. ${\rm Aut}_{grP}(P_\Omega)$): the group of all (resp. graded) Poisson automorphisms of $P_\Omega$.* - *$Q(P_{\Omega-\xi})$: the Poisson fraction field of $P_{\Omega-\xi}$ whenever it is an integral domain.* Note that any graded unimodular Poisson structure on $A$ is determined by such a potential $\Omega$. If $\Omega$ is homogeneous of degree $|x|+|y|+|z|$, then $P_{\Omega-\xi}\cong P_{\Omega-1}$ whenever $\xi\neq 0$. In this case, we can always assume $\xi$ to be either $0$ or $1$. **Lemma 21**. *Let $\Omega$ and $\Omega'$ be two homogeneous potentials for $A={\Bbbk}[x,y,z]$ of degrees $>\max\{|x|,|y|,|z|\}$.
Then the two Poisson algebras $A_\Omega$ and $A_{\Omega'}$ are isomorphic if and only if there is an algebra automorphism $\phi$ of $A$ such that $\Omega'=\phi(\Omega)/\det(\phi)$ where $\det(\phi)=\det\left(\frac{\partial\left(\phi(x),\,\phi(y), \,\phi(z)\right)}{\partial\left(x,\,y,\,z\right)}\right)$. As a consequence, $$\begin{aligned} {\rm Aut}_{P}(A_\Omega)~=~\{\phi\in {\rm Aut}(A)\,| \, \phi(\Omega)=\det(\phi)\Omega\},\end{aligned}$$ and ${\rm Aut}_P(A_\Omega)={\rm Aut}_{P}(A_{\lambda\Omega})$ for any $\lambda\in {\Bbbk}^\times$.* *Proof.* Suppose there is a Poisson algebra isomorphism $\phi: A_\Omega\to A_{\Omega'}$. For any polynomials $f, g\in A$, $\phi(f)$ and $\phi(g)$ can be regarded as polynomials of variables $\phi(x), \phi(y)$ and $\phi(z)$ in $\phi(A)=A$. Therefore, according to [\[E3.1.1\]](#E3.1.1){reference-type="eqref" reference="E3.1.1"}, we have $$\begin{aligned} \{\phi(f),\phi(g)\}_{\Omega'} &=\phi(\{f,g\}_{\Omega})= \phi\left(\det(\frac{\partial(f,\,g,\,\Omega)}{\partial(x,y,z)})\right)= \det\left(\frac{\partial\left(\phi(f),\,\phi(g),\, \phi(\Omega)\right)}{\partial\left(\phi(x),\,\phi(y),\,\phi(z)\right)}\right) \\ &=\det\left(\frac{\partial \left(x,\,y,\,z\right)} {\partial\left(\phi(x),\,\phi(y),\, \phi(z)\right)}\right) \det\left(\frac{\partial\left(\phi(f),\,\phi(g),\, \phi(\Omega)\right)}{\partial\left(x,\,y,\,z\right)}\right)\\ &=\det(\phi)^{-1}\{\phi(f),\phi(g)\}_{\phi(\Omega)}=\{\phi(f),\phi(g)\}_{\det(\phi)^{-1}\phi(\Omega)}. \end{aligned}$$ This implies that $$\{-,-\}_{\Omega'}=\{-,-\}_{\det(\phi)^{-1}\phi(\Omega)}$$ which yields the same Poisson bracket on $A$. Thus $\phi(\Omega)=\det(\phi)\Omega'+\lambda$ for some scalar $\lambda \in{\Bbbk}$. Since $\Omega$ and $\Omega'$ are homogeneous of degrees $>\max\{|x|,|y|,|z|\}$, their partial derivatives are all homogeneous of positive degree. 
Therefore, one can check that $A/(\det(\phi)\Omega'+\lambda)$ is regular for $\lambda \neq 0$ while $A/(\Omega)$ is not regular by the Jacobian criterion. As a consequence, the induced algebra isomorphism $\phi: A/(\Omega)\to A/(\phi(\Omega))=A/(\det(\phi)\Omega'+\lambda)$ implies that $\lambda=0$. So $\Omega'=\phi(\Omega)/\det(\phi)$. Conversely, suppose there is an algebra automorphism $\phi$ of $A$ such that $\Omega'=\phi(\Omega)/\det(\phi)$. Then we have $$\begin{aligned} \phi(\{f,g\}_{\Omega})&= \phi\left(\det(\frac{\partial(f,\,g,\,\Omega)}{\partial(x,y,z)})\right)= \det\left(\frac{\partial\left(\phi(f),\,\phi(g),\, \phi(\Omega)\right)}{\partial\left(\phi(x),\,\phi(y),\,\phi(z)\right)}\right)= \det(\phi)\det\left(\frac{\partial\left(\phi(f),\,\phi(g),\,\Omega'\right)}{\partial\left(\phi(x),\,\phi(y),\,\phi(z)\right)}\right)\\ &=\det(\phi)\det\left(\frac{\partial \left(x,\,y,\,z\right)} {\partial\left(\phi(x),\,\phi(y),\, \phi(z)\right)}\right) \det\left(\frac{\partial\left(\phi(f),\,\phi(g),\, \Omega'\right)}{\partial\left(x,\,y,\,z\right)}\right)\\ &=\det(\phi)\det(\phi)^{-1}\{\phi(f),\phi(g)\}_{\Omega'}=\{\phi(f),\phi(g)\}_{\Omega'}.\end{aligned}$$ So, $\phi$ is indeed a Poisson isomorphism. Finally, the consequences follow immediately. ◻ Now, we can classify all connected graded unimodular Poisson algebras in three dimensions. **Theorem 22**. *Any connected graded unimodular Poisson algebra $A$ is isomorphic to some $A_{\lambda \Omega}$, where $\Omega$ is listed in and $\lambda\in {\Bbbk}^\times$.* *Proof.* By [@Pr Theorem 5], any unimodular Poisson structure on $A={\Bbbk}[x,y,z]$ is given by $\pi_\Omega$ for some potential $\Omega$. Moreover, $\pi_\Omega$ being graded implies $|\Omega|=|x|+|y|+|z|$. Hence our classification follows from , , and . ◻ **Example 23**. We introduce some examples of Poisson fields of transcendence degree two over the field ${\Bbbk}$.
- We define $K_{Weyl}:={\Bbbk}(x,y)$ to be the Poisson field ${\Bbbk}(x, y)$ with the Poisson bracket $\{x,y\}=1$. - We define $K_q:={\Bbbk}(x,y)$ to be the Poisson field ${\Bbbk}(x, y)$ with the Poisson bracket $\{x,y\}=qxy$ for some $q\in {\Bbbk}^\times$. It is shown in [@GLa Corollary 5.4] that $K_q\cong K_{p}$ as Poisson fields if and only if $p=\pm q$. Note that $K_{Weyl}$ is not isomorphic to any $K_{q}$ as Poisson fields by [@GLa Corollary 5.3]. - Consider the irreducible cubic polynomial $$\Omega_{\zeta,\lambda}: =\zeta\left(x^3+y^3+z^3\right)+\lambda xyz$$ with two parameters $\zeta,\lambda\in {\Bbbk}$ such that $\zeta\neq 0,\lambda^3\neq -3^3\zeta^3$. We denote the corresponding graded unimodular Poisson algebra by $A_{\Omega_{\zeta,\lambda}}$, where the Poisson structure on ${\Bbbk}[x,y,z]$ is defined by $$\{x,y\}=3\zeta z^2+\lambda xy,\ \{y,z\} =3\zeta x^2+\lambda yz,\ \{z,x\}=3\zeta y^2+\lambda xz$$ with $\deg(x)=\deg(y)=\deg(z)=1$. Let $S_{\zeta,\lambda}:=Q(P_{\Omega_{\zeta,\lambda}})$ be the Poisson fraction field. By [@HTWZ1 Corollary 6.4], $S_{\zeta, \lambda}$ is not isomorphic to $K_{Weyl}$ or $K_q$. However, $S_{\zeta,\lambda}=F_{\zeta,\lambda}(t)$ as fields, where $F_{\zeta,\lambda}$ is the function field of the elliptic curve $\Omega_{\zeta,\lambda}=0$. Suppose $S_{\zeta,\lambda}\cong S_{\zeta',\lambda'}$ as Poisson fields, whence they are isomorphic as function fields. By [@De Theorem 2], we have $F_{\zeta,\lambda}\cong F_{\zeta',\lambda'}$ as function fields. As a result, the two elliptic curves $\Omega_{\zeta,\lambda}=0$ and $\Omega_{\zeta',\lambda'}=0$ are birationally equivalent, and hence have the same $j$-invariant $-\frac{(\lambda/\zeta)^3 ((\lambda/\zeta)^3-216)^3} {((\lambda/\zeta)^3+27)^3}=-\frac{(\lambda'/\zeta')^3 ((\lambda'/\zeta')^3-216)^3}{((\lambda'/\zeta')^3+27)^3}$ (for the $j$-invariant of a Hesse form of a smooth elliptic curve, see [@Frium Theorem 2.11]).
It is unclear if two elliptic curves with equations $\Omega_{\zeta,\lambda}=0$ and $\Omega_{\zeta',\lambda'}=0$ being birationally equivalent necessarily implies an isomorphism between their corresponding Poisson fields $S_{\zeta,\lambda}$ and $S_{\zeta',\lambda'}$. **Theorem 24**. *Let $A_\Omega$ be a connected graded unimodular Poisson algebra defined by some homogeneous irreducible potential $\Omega$. Let $Q=Q(P_\Omega)$ be the Poisson fraction field of $P_{\Omega}$. Then the following hold:* - *If $\Omega$ does not have an isolated singularity, then $Q$ is isomorphic to either $K_{Weyl}$ or $K_q$ for some $q \in {\Bbbk}^\times$.* - *If $\Omega$ has an isolated singularity, then $Q$ is isomorphic to $S_{\zeta,\lambda}$ for some $\zeta,\lambda\in {\Bbbk}$ with $\zeta \neq 0$ and $\lambda^3\neq -3^3\zeta^3$.* *Moreover, we label those $\Omega$ with the corresponding $Q$ that are isomorphic to $S_{\zeta,\lambda}$ by and those that are isomorphic to $K_q$ by and those that are isomorphic to $K_{Weyl}$ by in --.* *Proof.* We conduct a case-by-case verification for those irreducible potentials $\Omega$ listed in --. Then the result follows from . \(1\) Suppose $\Omega$ does not have an isolated singularity. As an illustration, we provide some details when $\Omega=z^2+y^3+2x^2y^2+x^4y$ ($\deg x=1, \deg y =2$ and $\deg z=3$) in and $\Omega=z^2+x^2y^2+x^{2+\frac{2b}{a}}$ with $a\nmid b$ ($\deg x=a>2, \deg y =b$ and $\deg z=c$) in . We also check for $\Omega=z^2+x^{2+\frac{2b}{a}}$ when $c=a+b$ and $a\nmid b$ in . Let $Q=Q(P_{\lambda\Omega})$ for some $\lambda\in {\Bbbk}^\times$. If $\Omega=z^2+y^3+2x^2y^2+x^4y$, then we have $z^2+y(y+x^2)^2=0$ in $P_\Omega$. Let $w=\frac{z}{y+x^2}$. Then $y=-w^2$ and $Q={\Bbbk}(x,w)$. Consider $\{x, w\}=\{x, \frac{z}{y+x^2}\} =-\lambda(y+x^2)=\lambda(w^2-x^2)$. After a linear transformation of $x$ and $w$, we have $\{x, w\}=pxw$ for some $p\in {\Bbbk}^{\times}$. In this case, $Q(P_{\lambda\Omega})\cong K_{p}$. 
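The computation in case (1) above can be checked symbolically. The following sketch is our own verification (not part of the source proof): the helper `bracket` implements the unimodular bracket $\{f,g\}=\lambda\det\left(\partial(f,g,\Omega)/\partial(x,y,z)\right)$, and the claimed identity $\{x,w\}=\lambda(w^2-x^2)$ is checked modulo the relation $\Omega=0$ by polynomial division with respect to $z$, in which $\Omega$ is monic:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')

# case (1) potential: Omega = z^2 + y^3 + 2x^2y^2 + x^4y = z^2 + y*(y + x^2)^2
Omega = z**2 + y**3 + 2*x**2*y**2 + x**4*y

def bracket(f, g):
    # unimodular bracket {f, g} = lam * det d(f, g, Omega)/d(x, y, z)
    J = sp.Matrix([[sp.diff(h, v) for v in (x, y, z)] for h in (f, g, Omega)])
    return lam * J.det()

# sanity check: {x, y} = lam * dOmega/dz = 2*lam*z
assert sp.simplify(bracket(x, y) - 2*lam*z) == 0

w = z / (y + x**2)  # the generator w from the proof; w^2 = -y modulo Omega

# {x, w} - lam*(w^2 - x^2) must vanish modulo the relation Omega = 0
num, _ = sp.fraction(sp.cancel(bracket(x, w) - lam*(w**2 - x**2)))
q, r = sp.div(sp.expand(num), Omega, z)
assert r == 0
```

The division in the last step is exact, reflecting that the difference is a multiple of $\Omega$, i.e., that the identity holds in $P_{\lambda\Omega}$.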
If $\Omega=z^2+x^2y^2+x^{2+\frac{2b}{a}}$, then $\frac{2b}{a}$ is odd since $a\nmid b$, say $2t+1$. In $Q$, we have $x=-(\frac{z}{x^{1+t}})^2-(\frac{y}{x^{t}})^2$. Set $u=\frac{z}{x^{1+t}}$ and $v=\frac{y}{x^t}$. Then $Q={\Bbbk}(u,v)$. One can check that $\{u,v\}=\lambda(u^2+v^2)$. After a linear transformation of $u$ and $v$, we can obtain $\{u,v\}=puv$ for some $p\in {\Bbbk}^{\times}$. If $\Omega=z^2+x^{2+\frac{2b}{a}}$ with $c=a+b$ and $a\nmid b$, then we have $2+\frac{2b}{a}$ is odd say $2l+1$. So $Q={\Bbbk}(t,y)$ with $z=t^{2l+1}$ and $x=-t^2$. It can be verified that $\{t,y\}=\{(-1)^l\frac{z}{x^l},y\}=-t^{2l}$. After setting $v=y/\lambda$ and $u=\frac{-1}{(1-2l)t^{2l-1}}$, we get $Q={\Bbbk}(u,v)$ with $\{u,v\}=1$. \(2\) Suppose $\Omega$ has an isolated singularity. By , we need to consider the following three cases. Up to a scalar multiple, we can assume that 1. $\Omega_1=z^2x+yx^2+\lambda xy^2+y^3$ with $(a,b,c)=(1,1,1)$ and $\lambda\ne \pm 2$; 2. $\Omega_2=z^2+x^3y+\lambda x^2y^2+xy^3$ with $(a, b, c)=(1,1,2)$ and $\lambda\ne \pm 2$; and 3. $\Omega_3=z^2+y^3+\lambda x^2y^2+x^4y$ with $(a, b, c)=(1,2,3)$ and $\lambda\ne \pm 2$. We use an alternative form in (a) following [@DML p. 255] instead of the Hessian normal form. We consider $Q_i=Q(P_{\mu\Omega_i})$ for some $\mu\in {\Bbbk}^\times$ for $i=1,2,3$. **Case (a):** In $Q_1$, we have $$\{x,y\}= 2\mu zx, \quad \quad \{y,z\}= \mu(z^2+2xy+\lambda y^2), \quad {\rm and} \quad \{z,x\}= \mu(x^2+2\lambda xy+3y^2)$$ Denote $u=\frac{z}{x}$, $v=\frac{y}{x}$ and $w=x$. 
We have $$0=\frac{\Omega_1}{x^3}=\left(\frac{z}{x}\right)^2+\frac{y}{x} +\lambda\left(\frac{y}{x}\right)^2+\left(\frac{y}{x}\right)^3 =u^2+v+\lambda v^2+v^3.$$ A direct computation yields that $$\{u,w\}=\mu(w+2\lambda wv+3wv^2),\quad \{v,w\}=-2\mu wu, \quad \{u,v\}=0.$$ **Case (b):** In $Q_2$, we have $$\{x,y\}= 2\mu z, \,\, \{y,z\}=\mu( 3x^2y+2\lambda xy^2+y^3), \,\, {\rm and}\,\, \{z,x\}= \mu(x^3+2\lambda x^2y+3xy^2).$$ Denote $u=\frac{z}{x^2}$, $v=\frac{y}{x}$ and $w=x$. We have $$0=\frac{\Omega_2}{x^4}=\left(\frac{z}{x^2}\right)^2+\frac{y}{x} +\lambda\left(\frac{y}{x}\right)^2+\left(\frac{y}{x}\right)^3 =u^2+v+\lambda v^2+v^3.$$ We can easily verify that $$\{u,w\}=\mu(w+2\lambda wv+3wv^2),\,\, \{v,w\}=-2\mu wu,\,\, \{u,v\}=0.$$ **Case (c):** In $Q_3$, we have $$\{x,y\}= 2\mu z, \,\, \{y,z\}= \mu(2\lambda xy^2+4x^3y),\,\, \{z,x\}= \mu(3y^2+2\lambda x^2y+x^4).$$ Denote $u=\frac{z}{x^3}$, $v=\frac{y}{x^2}$ and $w=x$. Then $$0=\frac{\Omega_3}{x^6}=\left(\frac{z}{x^3}\right)^2+ \left(\frac{y}{x^2}\right)^3+\lambda\left(\frac{y}{x^2}\right)^2 +\left(\frac{y}{x^2}\right)=u^2+v^3+\lambda v^2+v.$$ It is routine to check that $$\{u,w\}=\mu(w+2\lambda wv+3wv^2),\,\, \{v,w\}=-2\mu wu, \, \,\{u,v\}=0.$$ Therefore, $Q_1\cong Q_2\cong Q_3$ as Poisson fraction fields. Finally, we reuse the Hesse form for (1) and note that any re-scaling of $\Omega$ can be written as $\Omega=\zeta(x^3+y^3+z^3)+\lambda xyz$ for some $\zeta,\lambda\in {\Bbbk}$ such that $\zeta\neq 0, \lambda^3\neq -3^3\zeta^3$. The result follows from (3). ◻ **Remark 25**. Let $\Omega$ be irreducible and denote the Poisson fraction field by $Q=Q(P_\Omega)$. Let $X$ be the projective curve in the weighted projective space $\mathbb P(a,b,c)$ determined by such $\Omega$. Then the statement of can be refined as: if $X$ is smooth, then $Q\cong S_{\zeta,\lambda}$; if $X$ has a nodal singularity, then $Q\cong K_q$ for some $q\in \Bbbk^{\times}$; and if $X$ has a cusp singularity, then $Q\cong K_{Weyl}$. **Remark 26**.
We note that the isomorphisms between the fields $Q(P_\Omega)$ depend not on the grading but on the projective curve $\Omega=0$ when $\Omega$ is homogeneous of degree $|x|+|y|+|z|$. The geometry of elliptic curves seems to reflect Poisson algebra properties. We aim to investigate the relationship between the Poisson automorphism group of $A_{\Omega}$ and the type of $\Omega$. We determine the automorphism group of every connected graded unimodular Poisson algebra $A_\Omega$ where the potential $\Omega$ has an isolated singularity. **Theorem 27**. *Let $A_\Omega$ be a connected graded unimodular Poisson algebra. If $\Omega$ has an isolated singularity, then every Poisson automorphism of $A_\Omega$ is graded. As a consequence, $${\rm Aut}_P(A_{\Omega})\cong {\rm Aut}_{grP}(A_{\Omega})\cong {\rm Aut}_{grP}(P_\Omega)\cong {\rm Aut}_{P}(P_\Omega).$$* We will establish a few lemmas before proving . We will use the Hessian normal forms of these elliptic curves in the weighted projective space $\mathbb P(a,b,c)$ to prove the lemmas ahead. These forms can be obtained from the $\Omega$ listed in part (2) of by a linear transformation, as stated in the following lemma. **Lemma 28**. *Let $A={\Bbbk}[x, y, z]$ be a weighted polynomial algebra with $\deg(x)=a, \deg(y)=b, \deg (z)=c$ for some $1\leq a\leq b\leq c$. If $\Omega$ is a homogeneous polynomial of degree $a+b+c$ that has an isolated singularity, then, up to a linear transformation, $\Omega$ is one of the following forms:* - *$\Omega_1=x^3+y^3+z^3+\lambda xyz$ for $(-\lambda)^3\neq 27$ and $(a,b,c)=(1,1,1)$.* - *$\Omega_2=x^4+y^4+z^2+\lambda xyz$ for $(-\lambda)^4\neq 64$ and $(a,b,c)=(1,1,2)$.* - *$\Omega_3=x^6+y^3+z^2+\lambda xyz$ for $(-\lambda)^6\neq 432$ and $(a,b,c)=(1,2,3)$.* *Proof.* We will show case (2) and obtain the other two cases similarly.
Replacing $z+\frac{1}{2}\lambda xy$ by $z$, one can rewrite $\Omega_2$ as $z^2+x^4+y^4-\frac{1}{4}\lambda^2 x^2y^2$, whence it becomes $$z^2+(x^2-u y^2)(x^2-v y^2)=z^2+(x+\sqrt{u}y)(x-\sqrt{u}y) (x+\sqrt{v}y)(x-\sqrt{v}y)$$ with $uv=1$ and $u+v=\frac{1}{4}\lambda^2$. Since $(-\lambda)^4\ne 64$, we have $u\ne v$, so the four linear factors above are pairwise distinct. Lastly, after a linear transformation of $x$ and $y$, we can obtain that $\Omega_2=z^2+xy^3+x^3y+kx^2y^2$ for some $k\in {\Bbbk}$. Since $\Omega_2$ has an isolated singularity, this forces $k\ne \pm 2$, as desired. ◻ Recall that $P_{\Omega-\xi}=A/(\Omega-\xi)$ for any $\xi\in {\Bbbk}$. Let $P=P_\Omega$. Since $P=\oplus_{i\ge 0}P_i$ is graded, $P$ has two natural $0$-filtrations denoted by $\mathbb F^{Id}=\{F_i^{Id}\,|\, i\in \mathbb Z\}$ and $\mathbb F^{-Id}=\{F_i^{-Id}\,|\, i\in \mathbb Z\}$ respectively such that $$\begin{aligned} \label{E3.10.1}\tag{E3.10.1} F_i^{Id}=\bigoplus_{n\ge i} P_n\ \text{and} \ F_i^{-Id}=\bigoplus_{n\leq -i} P_n.\end{aligned}$$ Since $\mathop{\mathrm{gr}}_{\mathbb F^{\pm Id}} P\cong P$ (with grading flipped $i\leftrightarrow -i$ for $\mathbb F^{-Id}$), $P$ has two canonical faithful $0$-valuations. We denote them by $\nu^{Id}$ and $\nu^{-Id}$ respectively. In particular, we have $$\begin{aligned} \label{E3.10.2}\tag{E3.10.2} \nu^{Id}(f)=n\ \text{and}\ \nu^{-Id}(f)=-m\end{aligned}$$ for any $f=\sum_{i=n}^{m}f_i$ with $n\le m$, $f_i\in P_i$ and $f_n,f_m\neq 0$. Now consider $P_{\Omega-\xi}=A/(\Omega-\xi)$ for any $\xi\in {\Bbbk}^\times$. We define a filtration $\mathbb F^{-Id}=\{F_i^{-Id}\,|\, i\in \mathbb Z\}$ of $P_{\Omega-\xi}$ by $$F_{-i}^{-Id}P_{\Omega-\xi}=\{\sum a_jf_j \,|\, a_j\in {\Bbbk},\ \text{$f_j$ are monomials of degree $\le i$}\}.$$ One can check that $\mathop{\mathrm{gr}}_{\mathbb F^{-Id}}P_{\Omega-\xi}\cong P$, and the corresponding faithful $0$-valuation is given by $\nu^{-Id}$ via $\nu^{-Id}(x)=-a$, $\nu^{-Id}(y)=-b$ and $\nu^{-Id}(z)=-c$.
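The completion-of-square step in the proof above can be double-checked symbolically. Below is a minimal sympy sketch (illustrative only; `lam` plays the role of $\lambda$):

```python
import sympy as sp

x, y, z, lam, u, v = sp.symbols('x y z lam u v')

# Omega_2 = x^4 + y^4 + z^2 + lam*x*y*z; replacing z + (lam/2)*x*y by z
# amounts to the substitution z -> z - (lam/2)*x*y:
Omega2 = x**4 + y**4 + z**2 + lam*x*y*z
shifted = sp.expand(Omega2.subs(z, z - lam*x*y/2))
assert sp.simplify(shifted - (z**2 + x**4 + y**4 - lam**2*x**2*y**2/4)) == 0

# The quartic part factors as (x^2 - u*y^2)(x^2 - v*y^2) exactly when
# u*v = 1 and u + v = lam^2/4:
quartic = sp.expand((x**2 - u*y**2)*(x**2 - v*y**2))
assert sp.simplify(quartic - (x**4 - (u + v)*x**2*y**2 + u*v*y**4)) == 0
```

Note that $u=v$ together with $uv=1$ forces $u+v=\pm 2$, hence $\lambda^4=64$, which is exactly the excluded case.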
We say a Poisson algebra automorphism $\sigma$ of $P_{\Omega-\xi}$ is *linear* if it preserves the specific $0$-filtration $\mathbb F^{-Id}$ on $P_{\Omega-\xi}$, that is, $\sigma(F^{-Id}_i)\subseteq F^{-Id}_i$ for all $i$. The Poisson valuations directly apply to the Poisson automorphism groups of $P_{\Omega-\xi}$ when the homogeneous potential $\Omega$ has an isolated singularity. **Lemma 29**. *Let $A_\Omega$ be a connected graded Poisson algebra. Suppose the potential $\Omega$ has an isolated singularity. Then the following hold.* - *Every Poisson automorphism of $P_\Omega$ is graded.* - *If $\xi\neq 0$, then every Poisson automorphism of $P_{\Omega-\xi}$ is linear.* *Proof.* We only check for $\Omega_2$ (with $(\deg(x),\deg(y),\deg(z))=(1,1,2)$); the proofs for the other cases are similar. For simplicity, we write $\Omega=\Omega_2$, $P=P_{\Omega_2}$ and $P_\xi=P_{\Omega_2-\xi}$. \(1\) By (2), the fraction ring $Q=Q(P)$ is isomorphic to $Q(P_{\Omega_1})$ as Poisson fields. By [@HTWZ1 Theorem 3.8], $Q$ has exactly two faithful $0$-valuations, namely $\{\nu^{\pm Id}\}$ as discussed above. Let $\phi$ be any Poisson automorphism of $P$. We extend $\phi$ to a Poisson field automorphism of $Q$, which we still denote by $\phi$. It is clear that $\nu^{\pm Id}\circ \phi$ are two distinct faithful $0$-valuations of $Q$. So $\{\nu^{\pm Id}\}=\{\nu^{\pm Id}\circ \phi\}$. By [\[E3.10.2\]](#E3.10.2){reference-type="eqref" reference="E3.10.2"}, an element $f\in P$ is homogeneous if and only if $\nu^{Id}(f)+\nu^{-Id}(f)=0$. Let $f$ be any homogeneous element of $P$. So we have $$0=\nu^{Id}(f)+\nu^{-Id}(f)=(\nu^{Id}\circ \phi)(f) +(\nu^{-Id}\circ \phi)(f)=\nu^{Id}(\phi(f))+\nu^{-Id}(\phi(f)).$$ This implies that $\phi(f)$ is again homogeneous. In particular, $\phi(x),\phi(y),\phi(z)$ are homogeneous and nonzero. Recycling the letters $a,b,c$, we assume that they have degrees $a,b,c$, respectively. Hence, $\phi(\Omega)=\phi(x)^4+\phi(y)^4+\phi(z)^2 +\lambda\phi(x)\phi(y)\phi(z)$ is homogeneous.
So we have $4a=4b=2c=a+b+c$. Since $P$ is generated by $\phi(x),\phi(y), \phi(z)$, we must have $a=b=1$ and $c=2$. Hence, $\phi$ is graded. \(2\) Let $Q=Q(P_{\xi})$ for $\xi\in {\Bbbk}^\times$. We can use an argument similar to that of [@HTWZ1 Theorem 3.11] to show that $Q$ has only one faithful $0$-valuation, namely $\nu^{-Id}$ as discussed above. So for any Poisson automorphism $\phi$ of $P_\xi$, we have $\nu^{-Id}=\nu^{-Id}\circ \phi$. Let $f\in P_\xi$. By definition, we have $f\in F^{-Id}_i$ if and only if $\nu^{-Id}(f)\leq -i$, if and only if $(\nu^{-Id}\circ \phi)(f) =\nu^{-Id}(\phi(f))\leq -i$. Hence $\phi$ preserves the $0$-filtration $\mathbb F^{-Id}$ of $P_\xi$ and so is linear. ◻ Next, we explicitly compute the Poisson automorphism groups of $P_{\Omega-\xi}$. **Lemma 30**. *Let $\Omega=x^3+y^3+z^3+\lambda xyz$ for $(-\lambda)^3\neq 27$ with $\deg(x)=1, \deg(y)=1,\deg(z)=1$.* - *There is a short exact sequence of groups: $$1\to G\to {\rm Aut}_P(P_\Omega)\to C_3\to 1,$$ where $G=\{(\alpha_1,\alpha_2,\alpha_3)\,|\, \alpha_1^3 =\alpha_2^3=\alpha_3^3=\alpha_1\alpha_2\alpha_3\}\subset {\Bbbk}^\times\times {\Bbbk}^\times\times {\Bbbk}^\times$ and ${\rm Aut}_P(P_\Omega)\cong C_3\ltimes G$.* - *There is a short exact sequence of groups: $$1\to G'\to {\rm Aut}_P(P_{\Omega-1})\to C_3\to 1,$$ where $G'=\{(\alpha_1,\alpha_2,\alpha_3)\,|\, \alpha_1^3 =\alpha_2^3=\alpha_3^3=\alpha_1\alpha_2\alpha_3=1\}\subset {\Bbbk}^\times\times {\Bbbk}^\times\times {\Bbbk}^\times$ and ${\rm Aut}_P(P_{\Omega-1})\cong C_3\ltimes G'$.* *Proof.* (1) Note that the argument of [@MTU Theorem 1] works for ${\rm Aut}_{grP}(P_\Omega)$ as well. So ${\rm Aut}_{grP}(P_\Omega)$ is generated by all possible diagonal actions $G=\{(\alpha_1,\alpha_2,\alpha_3)\,|\,\alpha_1^3=\alpha_2^3 =\alpha_3^3=\alpha_1\alpha_2\alpha_3\}$ and the permutation $\tau(x,y,z)=(y,z,x)$. So our result follows by . \(2\) By , every automorphism $\phi$ of $P_{\Omega-1}$ is linear.
Note that for $0\le i\le 2$, we will identify $A_i$ with $(P_{\Omega-1})_i$ as vector spaces. So we can write $$\begin{aligned} \phi(x)=f_1+f_0,\ \phi(y)=g_1+g_0,\ \phi(z)=h_1+h_0\end{aligned}$$ where $f_i,g_i,h_i$ are homogeneous polynomials of degree $i$ in $A_i=(P_{\Omega-1})_i$ for all possible $0\le i\le 1$. Note that $(f_{0}, g_{0}, h_{0})$ is a point on the surface defined by $\Omega=0$. For simplicity, we will denote the images of $x, y, z$ in $P_{\Omega-1}$ by $x, y, z$. We have $$\{x,y\}=3z^2+\lambda xy,\ \{y,z\}=3x^2+\lambda yz,\ \{z,x\}=3y^2+\lambda xz$$ in $P_{\Omega-1}$. We apply $\phi$ to each of the above three equations. After comparing the constant terms on both sides of the resulting equations, we obtain $$3f_0^2+\lambda g_0h_0= 3g_0^2+\lambda f_0h_0 =3h_0^2+\lambda f_0g_0=0.$$ So $(f_0, g_0,h_0)$ is a singular point on the surface $\Omega=0$. Hence $(f_0,g_0,h_0)=(0, 0, 0)$ since $\Omega$ only has an isolated singularity at the origin. So $\phi$ maps ${\Bbbk}x+{\Bbbk}y+{\Bbbk}z$ to itself. This implies that $\mathop{\mathrm{gr}}(\phi): \mathop{\mathrm{gr}}(P_{\Omega-1})\to \mathop{\mathrm{gr}}(P_{\Omega-1})$ is a graded automorphism of $P_{\Omega}(\cong \mathop{\mathrm{gr}}(P_{\Omega-1}))$ which has been described in part (1). The rest of the proof follows from a direct computation. ◻ **Lemma 31**. 
*Let $\Omega=x^4+y^4+z^2+\lambda xyz$ for $(-\lambda)^4\neq 64$ with $\deg(x)=\deg(y)=1$ and $\deg(z)=2$.* - *There is a short exact sequence of groups: $$1\to G\to {\rm Aut}_P(P_\Omega)\to C_2\to 1$$ where $G=\{(\alpha_1,\alpha_2)\,|\, \alpha_1^2=\alpha_2^2\} \subset {\Bbbk}^\times\times {\Bbbk}^\times$ and ${\rm Aut}_P(P_\Omega)\cong C_2\ltimes G$.* - *There is a short exact sequence of groups: $$1\to G'\to {\rm Aut}_P(P_{\Omega-1})\to C_2\to 1$$ where $G'=\{(\alpha_1,\alpha_2)\,|\, \alpha_1^4=\alpha^4_2=1, \alpha_1^2=\alpha_2^2\}\subset {\Bbbk}^\times\times {\Bbbk}^\times$ and ${\rm Aut}_P(P_{\Omega-1})\cong C_2\ltimes G'$.* *Proof.* (1) By , every Poisson automorphism $\phi$ of $P_\Omega$ is graded. For the convenience of this proof, we write $(x,y,z)=(x_1,x_2,x_3)$. In general, we can assume that $\phi$ is given by $$\left\{ \begin{aligned} \phi(x_1)&=\alpha_1x_1+\alpha_2x_2\\ \phi(x_2)&=\beta_1x_1+\beta_2x_2\\ \phi(x_3)&=\gamma x_3+h(x_1, x_2), \end{aligned}\right.$$ where $\gamma$ and $\alpha_1\beta_2-\alpha_2\beta_1$ are not zero in ${\Bbbk}$ and $h(x_1,x_2)$ is a quadratic polynomial in ${\Bbbk}[x_1,x_2]$, as $\det(\phi)=\gamma(\alpha_1\beta_2-\alpha_2\beta_1)\ne 0$. After a proper re-scaling of variables, we first assume that $\alpha_1\beta_2-\alpha_2\beta_1=1$. Applying $\phi$ to the equation $\{x_1,x_2\}=2x_3+\lambda x_1x_2$, we get $\gamma=1$ and $h(x_1, x_2)=\frac{\lambda}{2} (x_1x_2-\phi(x_1)\phi(x_2))$. 
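The step just performed — reading off $\gamma=1$ and the formula for $h$ from $\{x_1,x_2\}=2x_3+\lambda x_1x_2$ — can be confirmed with a short sympy computation. In the sketch below (an illustration only), $(x,y,z)$ stands for $(x_1,x_2,x_3)$, and the normalization $\alpha_1\beta_2-\alpha_2\beta_1=1$ is imposed by substitution, assuming $\alpha_1\neq 0$:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')
a1, a2, b1, b2 = sp.symbols('alpha1 alpha2 beta1 beta2')
Omega = x**4 + y**4 + z**2 + lam*x*y*z

def br(f, g):
    # Jacobian bracket {f,g} = det J(f,g,Omega), so {x,y} = Omega_z, etc.
    J = sp.Matrix([[sp.diff(e, w) for w in (x, y, z)] for e in (f, g, Omega)])
    return sp.expand(J.det())

phi_x = a1*x + a2*y
phi_y = b1*x + b2*y
h     = lam/2*(x*y - phi_x*phi_y)
phi_z = z + h                        # gamma = 1

# phi applied to {x,y} = 2z + lam*x*y; both sides agree once the
# normalization alpha1*beta2 - alpha2*beta1 = 1 is imposed:
lhs = br(phi_x, phi_y)
rhs = 2*phi_z + lam*phi_x*phi_y
assert sp.simplify((lhs - rhs).subs(b2, (1 + a2*b1)/a1)) == 0
```

Without the normalization, the difference of the two sides equals $(\alpha_1\beta_2-\alpha_2\beta_1-1)(2z+\lambda xy)$, which is why $\gamma=1$ is forced.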
Applying $\phi$ to the other two equations $\{x_2,x_3\}=4x_1^3+\lambda x_2x_3$ and $\{x_3,x_1\}=4x_2^3+\lambda x_1x_3$, we further obtain the following relations: $$\left\{\begin{aligned} 24\alpha_1^2\alpha_2 &=\lambda^2(\beta_1+\alpha_2\beta_1^2+2\beta_1\beta_2\alpha_1) =3\lambda^2\alpha_1\beta_1\beta_2,\\ 24\alpha_1\alpha_2^2 &=\lambda^2(-\beta_2+\alpha_1\beta_2^2+2\beta_1\beta_2\alpha_2) =3\lambda^2\alpha_2\beta_1\beta_2,\\ 24\beta_1^2\beta_2 &=\lambda^2(-\alpha_1+\alpha_1^2\beta_2+2\alpha_1\alpha_2\beta_1) =3\lambda^2\alpha_1\alpha_2\beta_1,\\ 24\beta_1\beta_2^2 &=\lambda^2(\alpha_2+\alpha_2^2\beta_1+2\alpha_1\alpha_2\beta_2) =3\lambda^2\alpha_1\alpha_2\beta_2. \end{aligned}\right.$$ Since $\alpha_1\beta_2-\alpha_2\beta_1=1$, $\alpha_1$ and $\alpha_2$ cannot both be zero, and the same holds for $\beta_1$ and $\beta_2$. After simplifying the above equations, we have $8\alpha_1\alpha_2=\lambda^2\beta_1\beta_2$ and $8\beta_1\beta_2=\lambda^2\alpha_1\alpha_2$ and so $(64-\lambda^4)\alpha_1\alpha_2=(64-\lambda^4)\beta_1\beta_2=0$. Since $\lambda^4\neq 64$, we get $\alpha_1\alpha_2=\beta_1\beta_2=0$. Thus, we have either $\alpha_2=\beta_1=0$ or $\alpha_1=\beta_2=0$. Now, taking the earlier re-scaling of variables into account, we can find a permutation $\sigma\in S_2$ and write $\phi$ as $$\begin{aligned} \label{E3.13.1}\tag{E3.13.1} \phi(x_1)=\alpha_1x_{\sigma(1)},\,\, \phi(x_2)=\alpha_2x_{\sigma(2)},\,\, \phi(x_3)=\alpha_3x_3+h,\end{aligned}$$ for some nonzero scalars $\alpha_1,\alpha_2,\alpha_3$ and some quadratic polynomial $h(x_1,x_2)$. Then, it is routine to check that $\phi$ is a Poisson automorphism of $P_\Omega$ if and only if $$\alpha_3={\rm sgn}(\sigma)\alpha_1\alpha_2,\ \alpha_1^2=\alpha_2^2,\ h=\frac{\lambda}{2}({\rm sgn}(\sigma)-1)\alpha_1\alpha_2x_1x_2.$$ Consider the normal subgroup $G$ of ${\rm Aut}_{grP}(P_\Omega)$ given by $\phi(x)=\alpha_1x, \phi(y)=\alpha_2y, \phi(z)=\alpha_1\alpha_2z$ satisfying $\alpha_1^2=\alpha_2^2$.
Finally, it is clear that ${\rm Aut}_{grP}(P_\Omega)/G\cong S_2$. \(2\) By , every automorphism $\phi$ of $P_{\Omega-1}$ is linear. So we can write $$\begin{aligned} \phi(x)=f_1+f_0,\,\, \phi(y)=g_1+g_0,\,\, \phi(z)=h_2+h_1+h_0\end{aligned}$$ where $f_i,g_i,h_i$ are homogeneous polynomials of degree $i$ in $A_i=(P_{\Omega-1})_i$ for all possible $0\le i\le 2$. We apply $\phi$ to the following Poisson brackets: $$\{x,y\}=2z+\lambda xy,\ \{y,z\}=4x^3+\lambda yz,\ \{z,x\}=4y^3+\lambda xz.$$ Examining the degree-zero parts of these equations, we get $$2h_0+\lambda f_0g_0=4f_0^3+\lambda g_0h_0 =4g_0^3+\lambda f_0h_0=0.$$ This implies that $(f_0,g_0,h_0)$ is a singular point of $\Omega=0$. So $f_0=g_0=h_0=0$ since $\Omega$ only has an isolated singularity at the origin. Now since $\phi(\Omega)=1$ in $P_{\Omega-1}$, we get $$f_1^4+g_1^4+(h_2+h_1)^2+\lambda f_1g_1(h_2+h_1)=1.$$ The degree-two part of the above equation yields $h_1^2=0$ and so $h_1=0$. Hence, $\phi$ maps ${\Bbbk}x+{\Bbbk}y$ (resp. ${\Bbbk}z$) to itself, and we can write $\phi$ as in [\[E3.13.1\]](#E3.13.1){reference-type="eqref" reference="E3.13.1"}. Similarly to (1), we indeed have $$\begin{aligned} \phi(x_1)=\alpha_1x_{\sigma(1)},\,\, \phi(x_2)=\alpha_2x_{\sigma(2)},\,\, \phi(x_3)=\alpha_3x_3+\frac{\lambda}{2} ({\rm sgn}(\sigma)-1)\alpha_1\alpha_2x_1x_2,\end{aligned}$$ for some $\sigma\in S_2$ and $\alpha_1^4=\alpha_2^4=1$ and $\alpha_1^2=\alpha_2^2$ since $\phi(\Omega)=1$. So the result follows. ◻ **Lemma 32**. *Let $\Omega=x^6+y^3+z^2+\lambda xyz$ for $(-\lambda)^6\neq 432$ with $\deg(x)=1, \deg(y)=2,\deg(z)=3$.* - *Every Poisson automorphism of $P_\Omega$ is of the form $$x\mapsto \zeta x,\ y\mapsto \zeta^2y,\ z\mapsto \zeta^3z$$ for some $\zeta\in {\Bbbk}^\times$.* - *Every Poisson automorphism of $P_{\Omega-1}$ is of the form $$x\mapsto \zeta x,\ y\mapsto \zeta^2y,\ z\mapsto \zeta^3z,$$ where $\zeta^6=1$.* *Proof.* (1) By , every automorphism $\phi$ of $P_\Omega$ is graded.
So we can write $$\label{E3.14.1}\tag{E3.14.1} \phi(x)=\alpha_1x,\ \phi(y)=\beta_1y+\beta_0x^2,\ \phi(z)=\gamma_2z+\gamma_1xy+\gamma_0x^3$$ for some $\alpha_1,\beta_1,\gamma_2\in {\Bbbk}^\times$ and $\beta_0,\gamma_1,\gamma_0\in {\Bbbk}$. Applying $\phi$ to $\{z,x\}=3y^2+\lambda xz$, we obtain that $\beta_0=\gamma_0 =\gamma_1=0$. So $\phi$ acts on each generator by a scalar. Finally, from $\{\phi(y),\phi(z)\}=\phi(6x^5+\lambda yz)$, $\{ \phi(x), \phi(y)\}=\phi(2z+\lambda xy)$ and $\{\phi(z), \phi(x)\}=\phi(3y^2+\lambda xz)$, we obtain $\beta_1\gamma_2=\alpha_1^5$, $\alpha_1\beta_1=\gamma_2$, and $\gamma_2\alpha_1=\beta_1^2$. Let $\zeta=\gamma_2/\beta_1$. We have $\alpha_1=\zeta,\beta_1=\zeta^2,\gamma_2=\zeta^3$. So the result follows. \(2\) By , every automorphism $\phi$ of $P_{\Omega-1}$ is linear. Note that $A_i=(P_{\Omega-1})_i$ for $0\le i\le 5$. So we can write $$\begin{aligned} \phi(x)=f_1+f_0,\,\, \phi(y)=g_2+g_1+g_0,\,\, \phi(z)=h_3+h_2+h_1+h_0\end{aligned}$$ where $f_i,g_i,h_i$ are homogeneous polynomials of degree $i$ in $A_i=(P_{\Omega-1})_i$ for all possible $0\le i\le 3$. We apply $\phi$ to the following Poisson brackets: $$\{x,y\}=2z+\lambda xy,\ \{y,z\}=6x^5+\lambda yz,\ \{z,x\} =3y^2+\lambda xz$$ and examine the resulting equations degree by degree. First of all, the degree-zero part yields $$2h_0+\lambda f_0g_0=6f_0^5+\lambda g_0h_0 =3g_0^2+\lambda f_0h_0=0.$$ This says precisely that $(f_0,g_0,h_0)$ is a singular point of $\Omega=0$. So $f_0=g_0=h_0=0$ since $\Omega$ only has an isolated singularity at the origin. Then, the degree-one part further implies that $g_1=0$. Now since $\phi(\Omega)=1$ in $P_{\Omega-1}$, we get $$f_1^6+g_2^3+(h_3+h_2+h_1)^2+\lambda f_1g_2(h_3+h_2+h_1)=1.$$ The degree-two part of the above equation yields $h_1^2=0$, and so $h_1=0$. Furthermore, the degree-four part implies $h_2^2=0$, and so $h_2=0$.
Hence, we have $\phi$ as in [\[E3.14.1\]](#E3.14.1){reference-type="eqref" reference="E3.14.1"}. Similarly to the proof of part (1), we indeed have $\phi(x)=\zeta x,\phi(y)=\zeta^2 y,\phi(z)=\zeta^3 z$ for some $\zeta\in {\Bbbk}^\times$. Finally, $\phi(\Omega)=1$ implies that $\zeta^6=1$. ◻ *Proof of .* It suffices to prove the result when $\Omega$ is as in case (2) or case (3), since case (1) was shown in [@MTU]. Though our argument works for all cases, here we only provide the details for case (2) as an illustration. Let $\phi$ be a Poisson algebra automorphism of $A_\Omega$. By the same argument as in , the restriction of $\phi$ to the Poisson center ${\Bbbk}[\Omega]$ is given by $\phi(\Omega)=\alpha \Omega$ with $\alpha\in {\Bbbk}^\times$. So $\phi$ preserves the principal ideal generated by $\Omega$. Let $\phi'$ denote the induced Poisson algebra automorphism for $P_\Omega$. By , $\phi'$ is graded. Moreover, since the equations $\{x,y\}=\Omega_z,\{y,z\}=\Omega_x, \{z,x\}=\Omega_y$ are homogeneous of degree $<\deg(\Omega)$, we can lift $\phi'$ to a unique graded Poisson automorphism of $A_\Omega$, denoted by $\sigma$. It is clear that $\sigma'=\phi'$. Let $\varphi=\phi\circ \sigma^{-1}$. Then it satisfies the equation $\varphi'={\rm id}_{P_\Omega}$. It remains to show that $\varphi={\rm id}_{A_\Omega}$. Note that we can write $$\begin{aligned} \varphi(x)=x+\Omega f,\,\, \varphi(y)=y+\Omega g,\,\, \varphi(z)=z+\Omega h\end{aligned}$$ for some polynomials $f,g,h\in A_\Omega$. An easy computation yields that $\varphi(\Omega)=\Omega+\Omega\,\alpha(f,g,h)$, where $\alpha(f,g,h)\in (A_\Omega)_{\ge 1}$. Since $\varphi$ induces an algebra automorphism of the Poisson center ${\Bbbk}[\Omega]$ of $A_\Omega$, we must have $\alpha=0$ and $\varphi(\Omega)=\Omega$. Now we consider a ${\Bbbk}$-linear basis $\mathbb B: =\{1,x,y,z\}\cup \{b_s\}$ of the algebra $P_\Omega$ consisting of all possible monomials $\{x^{s_1}y^{s_2}z^{s_3}\,|\, s_1,s_2\ge 0, 0\le s_3\le 1\}$.
We also treat $\mathbb B$ as a fixed subset of monomials in $A_\Omega$ by lifting. We claim that every polynomial $f$ in $A_\Omega$ can be written in the form $$\begin{aligned} \label{E3.14.2}\tag{E3.14.2} f=1f^1(\Omega)+xf^x(\Omega)+yf^y(\Omega)+zf^z(\Omega) +\sum_{b_s} b_sf^{b_s}(\Omega)\end{aligned}$$ where each $f^*(\Omega)\in {\Bbbk}[\Omega]$. We prove this claim by induction on $\deg(f)$. It is trivial for $\deg(f)=0$. Suppose our claim holds for $\deg(f)\leq m$. For any polynomial $f$ of degree $m+1$, we can write $$f=1f^1+xf^x+yf^y+zf^z+\sum_{b_s}b_sf^{b_s}+g\Omega$$ for some scalars $f^*\in {\Bbbk}$ and some $g$ with $\deg(g)=m-3$, by looking at the image of $f$ in $P_\Omega$. So, by the induction hypothesis, we can write $g$ in the required form. The claim follows by substituting this expression for $g$ above. Therefore, we can write $$\begin{aligned} \label{E3.14.3}\tag{E3.14.3} \varphi(x)=1f^1(\Omega)+xf^x(\Omega)+yf^y(\Omega)+zf^z(\Omega) +\sum_{b_s}b_sf^{b_s}(\Omega)\end{aligned}$$ for some $f^*(\Omega)\in {\Bbbk}[\Omega]$. Now, for each scalar $\xi\neq 0$, let $\pi_\xi: A_\Omega\to P_{\Omega-\xi}$ be the quotient map, and let $\varphi_\xi'$ be the automorphism of $P_{\Omega-\xi}$ induced by $\varphi$, which is well defined since $\varphi(\Omega)=\Omega$. Note that the image of $\mathbb B$ via $\pi_\xi$ is a ${\Bbbk}$-basis of $P_{\Omega-\xi}$, which we continue to denote by $b_s$ etc. So we have $$\begin{aligned} \label{E3.14.4}\tag{E3.14.4} \varphi_\xi'(x)=1f^1(\xi)+xf^x(\xi)+yf^y(\xi)+zf^z(\xi) +\sum_{b_s}b_sf^{b_s}(\xi).\end{aligned}$$ By , $\varphi_\xi'$ is linear. Thus, $f^{b_s}(\xi)=0$ for all $\xi\neq 0$ and all $b_{s}$. Hence $f^{b_s}(\Omega)=0$ and [\[E3.14.3\]](#E3.14.3){reference-type="eqref" reference="E3.14.3"} reduces to $$\begin{aligned} \varphi(x)=1f^1(\Omega)+xf^x(\Omega)+yf^y(\Omega)+zf^z(\Omega).\end{aligned}$$ Moreover, since $\varphi'(x)=x$ in $P_{\Omega}$, we have $f^x(\Omega)=1+p\Omega \neq 0$ for some $p\in {\Bbbk}[\Omega]$.
If $f^y(\Omega)\neq 0$, we can choose some nonzero $\xi_0$ such that $f^x(\xi_0),f^y(\xi_0)\neq 0$. But in this case, $\varphi_{\xi_0}'(x)$ in [\[E3.14.4\]](#E3.14.4){reference-type="eqref" reference="E3.14.4"} contains terms in both $x$ and $y$. This contradicts the description of ${\rm Aut}_{grP}(P_{\Omega-\xi_0})$ by . So $f^y(\Omega)=0$ and we get $f^1(\Omega)=f^z(\Omega)=0$ in the same fashion. This implies that $\varphi(x)=x(1+p\Omega\,)$. Similarly, we get $\varphi^{-1}(x)=x(1+\Omega\,h)$ for some $h\in {\Bbbk}[\Omega]$. By using $\varphi(\Omega)=\Omega$, we obtain that $$x=\varphi(\varphi^{-1}(x))=\varphi(x(1+\Omega\,h)) =x(1+\Omega\, p)(1+\Omega\,h ),$$ which implies that $p=h=0$. Therefore, $\varphi(x)=x$. We further get $\varphi(y)=y$ and $\varphi(z)=z$ by the same argument. Hence, $\varphi$ is the identity as required. ◻ **Example 33**. Let $\Omega=z^2+x^3y$ in with $\deg x=\deg y=1$ and $\deg z=2$. Then the Poisson structure of $A_{\Omega}$ is determined by $$\{x,y\}=2z, \quad \{y,z\}=3x^2 y, \quad \{z,x\}=x^3.$$ Consider the algebra automorphism of the polynomial ring $\Bbbk[x,y,z]$ defined by $$\phi(x)=x,\,\, \phi(y)=y-x^3-2z, \,\,\,\textnormal{and} \,\,\, \phi(z)=z+x^3.$$ It is straightforward to check that $\phi$ is an ungraded Poisson automorphism of $A_\Omega$. We can prove that if $\Omega$ belongs to the Weyl type, meaning $Q(P_\Omega)\cong K_{Weyl}$, then $A_\Omega$ possesses ungraded Poisson automorphisms. However, we have not discovered any ungraded automorphisms for other irreducible $\Omega$, i.e., those satisfying $Q(P_\Omega)\cong K_q$. Therefore, we pose the following question. **Question 34**. Let $\Omega$ be a homogeneous polynomial of degree $|x|+|y|+|z|$. If $Q(P_\Omega)\cong K_q$ as Poisson fields, is every Poisson automorphism of $A_\Omega$ and $P_\Omega$ graded? # Rigidities {#zzsec4} In this section, we discuss several different rigidities about the Poisson algebras related to $\Omega$.
We intend to provide general methods for these rigidities and omit some details. ## Rigidity of graded twistings {#zzsec4.1} Note that the *rigidity of graded twistings*, denoted by $rgt$, was defined in . The following lemma provides an easy way to compute Poisson derivations and $rgt$ of $A_\Omega$. **Lemma 35**. *Let $A$ be a connected graded unimodular Poisson algebra $A_\Omega$ given in . For any derivation $\delta$ of $A$, we have the following:* 1. *$\delta$ is a Poisson derivation if and only if $\mathop{\mathrm{div}}(\delta)\pi_{\Omega}=\pi_{\delta(\Omega)}$ in $\mathfrak X^2(A)$.* 2. *Suppose $\mathop{\mathrm{div}}(\delta)=0$. Then $\delta$ is a Poisson derivation if and only if $\delta(\Omega)=0$.* 3. *We have $$\begin{aligned} rgt(A)&=1-\dim_{\Bbbk}Gspd(A)\\ &=1-\dim_{\Bbbk}Gpd(A)\\ &=1-\dim_{\Bbbk}(PH^1(A))_0\\ &=-\dim_{\Bbbk}\{\delta\in Gspd(A)\,|\, \mathop{\mathrm{div}}(\delta)=0\}\\ &=-\dim_{\Bbbk}\{\delta\in Gpd(A)\,|\, \mathop{\mathrm{div}}(\delta)=0\}\\ &=-\dim_{\Bbbk}\{\delta\in (Der(A))_0\,|\, \mathop{\mathrm{div}}(\delta)=\delta(\Omega)=0\}.\end{aligned}$$* *Proof.* Note that $\delta$ is a Poisson derivation if and only if $d_\pi^1(\delta)=0$. Recall that the cochain complexes [\[E1.3.1\]](#E1.3.1){reference-type="eqref" reference="E1.3.1"} and [\[E1.3.3\]](#E1.3.3){reference-type="eqref" reference="E1.3.3"} are isomorphic to each other. So $(d_{\pi}^1(\delta)(y,z), d_{\pi}^1(\delta)(z,x), d_{\pi}^1(\delta)(x,y))= \delta_\Omega^1(\delta(x), \delta(y), \delta(z))$ as vectors. By [\[E1.3.5\]](#E1.3.5){reference-type="eqref" reference="E1.3.5"} and [\[E3.1.1\]](#E3.1.1){reference-type="eqref" reference="E3.1.1"}, we conclude that $$\label{E4.1.1}\tag{E4.1.1} d_\pi^1(\delta)=~\mathop{\mathrm{div}}(\delta)\,\pi_{\Omega}-\pi_{\delta(\Omega)}$$ in $\mathfrak X^2(A)$. Thus (1) follows immediately. For (2), suppose $\delta(\Omega)=0$. Then by [\[E4.1.1\]](#E4.1.1){reference-type="eqref" reference="E4.1.1"}, $\delta$ is a Poisson derivation. 
Conversely, suppose $\delta$ is a Poisson derivation. By [\[E4.1.1\]](#E4.1.1){reference-type="eqref" reference="E4.1.1"}, $\delta(\Omega)\in {\Bbbk}=A_0$. We write $\delta=\sum \delta_i$, where each $\delta_i$ is a homogeneous derivation of degree $i$. Since $\deg(\Omega)=n$, we get $\delta_i(\Omega)=0$ for $i\neq -n$. Moreover, $\delta_{-n}(x)$, if not zero, has degree $a-n<0$. So $\delta_{-n}(x)=0$ as $A$ is $\mathbb N$-graded. Similarly, we get $\delta_{-n}(y)=\delta_{-n}(z)=0$. Therefore, $\delta_{-n}(\Omega)=0$ and so $\delta(\Omega)=0$, as desired. For (3), consider the subspace $Gpd_0(A)=\{\delta\in Gpd(A)\,|\, \mathop{\mathrm{div}}(\delta)=0\}$ of $Gpd(A)$. Let $\delta\in Gpd(A)$. By [@TWZ Lemma 1.2(2)], $\mathop{\mathrm{div}}(\delta)\in {\Bbbk}$. In particular, $\mathop{\mathrm{div}}(E)=a+b+c\in {\Bbbk}^\times$ for the Euler derivation $E$ of $A$. Hence $$\label{E4.1.2}\tag{E4.1.2} \delta'=\delta-\frac{\mathop{\mathrm{div}}(\delta)}{\mathop{\mathrm{div}}(E)}\,E\in Gpd_0(A).$$ So $Gpd(A)=Gpd_0(A)\oplus {\Bbbk}\, E$. Thus, the formulas for $rgt(A)$ can be derived from [@TWZ Lemma 4.1], where the last equality follows from (2). ◻ When $A$ is generated in degree one, it was shown in [@TWZ Corollary 6.7] that any graded unimodular structure $\pi_\Omega$ on $A$ is rigid (namely, $rgt(A_\Omega)=0$) if and only if $\Omega$ is irreducible. We now generalize this to the weighted case. **Theorem 36**. *Let $A_\Omega$ be a connected graded unimodular Poisson algebra given in . Then $A_\Omega$ is rigid if and only if the potential $\Omega$ is irreducible. In in the appendix, we will list all $rgt(A)$ and $GKdim(A_{sing})$ of these $\Omega$ from to .* *Proof.* We apply (also see ) and . Indeed, our method is a case-by-case verification. First of all, when $a=b<c$ and $\Omega=h(x,y)$ is reducible, we have $rgt(A)\neq 0$ according to .
For the rest of the proof, we only provide the details for the following two cases for illustration: irreducible $\Omega_1=z^2+xy^3+\lambda x^2y^2+x^3y$ and reducible $\Omega_2=x^2z+xy^3$ in (with $\deg(x)=\deg(y)=1$ and $\deg(z)=2$). Let $\phi$ be any graded Poisson derivation of degree zero. Replacing $\phi$ by $\phi+cE$ for some suitable scalar $c\in {\Bbbk}$ if needed, we can always assume ${\rm div}(\phi)=0$ by [\[E4.1.2\]](#E4.1.2){reference-type="eqref" reference="E4.1.2"}. Thus, we can write $\phi(x)=\alpha_1 x+\alpha_2y$, $\phi(y)=\alpha_3x+\alpha_4y$ and $\phi(z)=(-\alpha_1-\alpha_4)z+\alpha_5x^2+\alpha_6xy+\alpha_7y^2$ for some $\alpha_i\in {\Bbbk}$. By (2), we have $\phi(\Omega)=0$ for $\Omega=\Omega_1$ or $\Omega_2$. From $\phi(\Omega_1)=0$, one can show that $\alpha_i=0$ for $i=1, \cdots, 7$, which implies $\phi=0$. As a result, $rgt(A_{\Omega_1})=0$. From $\phi(\Omega_2)=0$, we obtain $\alpha_7+3\alpha_3=0$ and $\alpha_i=0$ for any $i\neq 3, 7$. This implies that $Gpd(A_{\Omega_2})={\rm span}\{E,\phi\}$, where $\phi(x)=0$, $\phi(y)=x$, $\phi(z)=-3y^2$. Hence we get $rgt(A_{\Omega_2})=-1$. Finally, a standard Gröbner basis computation yields all possible $\mathop{\mathrm{GKdim}}(A_{sing})$. We skip the details here. ◻ ## Rigidity of gradings {#zzsec4.2} In this and the following subsections, we use Poisson valuations to establish some results about the rigidity of gradings and filtrations. We believe that these kinds of rigidities deserve more attention. **Theorem 37**. *Let $A_\Omega$ be a connected graded Poisson algebra with $\Omega$ having an isolated singularity. Then $P_\Omega$ has a unique connected grading such that it is Poisson graded with nonzero degree one part.* *Proof.* We only prove the result for $\Omega:=\Omega_1$ in . The argument for $\Omega_2$ and $\Omega_3$ is analogous. By [@HTWZ1 Theorem 3.8], $Q(P_\Omega)$ only has two faithful $0$-valuations, which are denoted by $\{\nu^{\pm Id}\}$ as before. 
They correspond to connected gradings on $P_\Omega$ with $x,y,z$ in degree 1 according to [\[E3.10.1\]](#E3.10.1){reference-type="eqref" reference="E3.10.1"}. Let $\mathbb G^{\pm Id}$ be the $0$-filtrations on $Q(P_\Omega)$ associated with any new grading, and denote by $\mu^{\pm Id}$ the corresponding faithful $0$-valuations. It is clear that $\mu^{-Id}(f)<0$ for some $f\in P_\Omega$. So we have $\mu^{-Id}=\nu^{-Id}$ and $\mu^{-Id}(x)=\mu^{-Id}(y)=\mu^{-Id}(z)=-1$. So we can write $x=x_0+f$, $y=y_0+g$, and $z=z_0+h$, where $x_0,y_0,z_0$ are homogeneous of new degree $1$ and $f,g,h\in {\Bbbk}$. Since every non-trivial linear combination of $x,y,z$ has $\mu^{-Id}$-value $-1$ (as $\mu^{-Id}=\nu^{-Id}$), $x_0,y_0,z_0$ are linearly independent. Expanding the relation $\Omega=x^3+y^3+z^3+\lambda xyz=0$ and extracting its degree-one part in the new grading, we get $$(3f^2+\lambda gh)x_0+(3g^2+\lambda fh)y_0+(3h^2+\lambda fg)z_0=0.$$ This implies that $3f^2+\lambda gh=3g^2+\lambda fh=3h^2+\lambda fg=0$, or equivalently $(f,g,h)$ is a singular point of $\Omega=0$. Since $\Omega$ has an isolated singularity at the origin, we get $f=g=h=0$, so $x=x_0$, $y=y_0$, $z=z_0$ are homogeneous of degree $1$ in this new grading. Since $P_\Omega$ is generated by $x,y,z$, the new grading agrees with the given grading. ◻ ## Rigidity of filtrations {#zzsec4.3} **Theorem 38**. *Suppose and assume that $\Omega$ has an isolated singularity. Then $P_{\Omega-\xi}$, with $\xi\neq 0$, has a unique filtration $\mathbb F$ such that the associated graded ring $\mathop{\mathrm{gr}}_\mathbb F(P_{\Omega-\xi})$ is a connected graded Poisson domain with nonzero degree one part.* *Proof.* Again, we only prove the result for $\Omega:=\Omega_1$ in . The argument for $\Omega_2$ and $\Omega_3$ is analogous. The result follows from [@HTWZ1 Theorem 3.11] that $Q(P_{\Omega-\xi})$ has only one faithful $0$-valuation $\nu^{-Id}$ and .
◻ By , we have the following $$\label{E4.4.1}\tag{E4.4.1} {\text{$\Omega$ being irreducible}} \Leftrightarrow {\text{$rgt(A_{\Omega})=0$.}}$$ Now and can be summarized as $$\label{E4.4.2}\tag{E4.4.2} {\text{$P_{\Omega}$ having a unique grading}} \Leftarrow {\text{$\Omega$ having isolated singularity}} \Rightarrow {\text{$P_{\Omega-1}$ having a unique filtration.}}$$ There is another diagram for balanced irreducible potentials $\Omega$, see [\[E5.11.1\]](#E5.11.1){reference-type="eqref" reference="E5.11.1"}. # $K_1$-sealedness and $uPH^2$-vacancy {#zzsec5} In this section, we introduce two technical concepts -- $K_1$-sealedness \[(2)\] and $uPH^2$-vacancy \[(3)\]. Together with $H$-ozoneness \[(3)\], they will play an important role in computing Poisson cohomology in the next section. Note that the $uPH^2$-vacancy of $A_{\Omega}$ is independent of the choices of the graded generators $(x,y,z)$; however, the $K_1$-sealedness of $\Omega$ may be dependent on the choices of the graded generators $(x,y,z)$. ## $K_1$-sealedness {#zzsec5.1} Let $\Omega$ be a homogeneous element of degree $n>0$ in the weighted polynomial ring $A:=\Bbbk[x,y,z]$. 
Recall that the *Koszul complex* $K_\bullet(\overrightarrow{\nabla}\Omega)$ given by the sequence $\overrightarrow{\nabla}\Omega: =(\Omega_x,\Omega_y,\Omega_z)$ in $A$ is: $$\label{E5.0.1}\tag{E5.0.1} \begin{tabular}{ccccc} &\,$A[-2n+b+c]$ & &$A[a-n]$& \vspace*{-2mm}\\ $0\to A[-3n+(a+b+c)]\xrightarrow{\overrightarrow{\nabla}\Omega}$ & \hspace*{-2.5mm}$\oplus A[-2n+a+c]$ &\hspace*{-2.5mm}$\xrightarrow{\overrightarrow{\nabla}\Omega\times }$ & \hspace*{-2.5mm}$ \oplus A[b-n]$ &\hspace*{-2.5mm}$\xrightarrow{\overrightarrow{\nabla}\Omega\cdot }A \to A/(\Omega_x,\Omega_y,\Omega_z)\hspace*{-1mm}\to \hspace*{-.8mm}0$.\\ &\hspace*{-2.5mm}$\oplus A[-2n+a+b]$& & \hspace*{-2.5mm}$\oplus A[c-n]$ & \end{tabular}$$ Note that [\[E5.0.1\]](#E5.0.1){reference-type="eqref" reference="E5.0.1"} is a complex of graded vector spaces, where the differentials are graded maps of degree zero. A 1-cycle $\overrightarrow{f}\in\ker(\overrightarrow{\nabla}\Omega \cdot)$ is called *sealed* if $\overrightarrow{\nabla}\cdot \overrightarrow{f}=0$ when further considered as an element in $A_{sing}$. Let $s_1(\Omega)$ be the subspace of $\ker(\overrightarrow{\nabla}\Omega \cdot)$ consisting of all sealed 1-cycles in the above complex. The following lemma follows from standard commutative algebra. **Lemma 39**. *Retain the above notation.* 1. *If ${\rm gcd}(\Omega_x,\Omega_y,\Omega_z)=1$, then the Koszul complex [\[E5.0.1\]](#E5.0.1){reference-type="eqref" reference="E5.0.1"} is exact everywhere except possibly for the position at $K_1(\overrightarrow{\nabla}\Omega)$.* 2. *If $\mathop{\mathrm{GKdim}}A_{sing}\leq 1$, then ${\rm gcd}(\Omega_x,\Omega_y,\Omega_z)=1$.* 3. *If $\Omega$ is irreducible (and weighted homogeneous), then ${\rm gcd}(\Omega_x,\Omega_y,\Omega_z)=1$.* 4. *$\text{\upshape im}(\overrightarrow{\nabla} \Omega\times)\subseteq s_1(\Omega) \subseteq \ker(\overrightarrow{\nabla} \Omega\cdot)$.* *Proof.* (1) It follows from [@Pi1 Remarks 3.6 & 3.7]. (2,3) These are clear.
\(4\) For any $\overrightarrow{g}\in A^{\oplus 3}$, by a computation, $\overrightarrow{\nabla}\cdot( \overrightarrow{\nabla}\Omega\times \overrightarrow{g})=0$ in $A_{sing}$. The second inclusion follows from the definition. ◻ **Definition 40**. Let $\Omega\in A$ be a potential, and we consider the Koszul complex [\[E5.0.1\]](#E5.0.1){reference-type="eqref" reference="E5.0.1"}. 1. The *sealed first Koszul homology* of $(A, \Omega)$ is defined to be $$sK_1(A,\Omega):=s_1(\Omega)/\text{\upshape im}(\overrightarrow{\nabla} \Omega\times).$$ 2. We say $\Omega$ is *$K_1$-sealed* if $sK_1(A,\Omega)=0$. That is, for any $\overrightarrow{f}\in A^{\oplus 3}$, if $\overrightarrow{f}\cdot \overrightarrow{\nabla}\Omega=0$ in $A$ and $\overrightarrow{\nabla}\cdot \overrightarrow{f}=0$ when considered as an element in $A_{sing}$, then $\overrightarrow{f} =\overrightarrow{\nabla}\Omega\times \overrightarrow{g}$ for some $\overrightarrow{g}\in A^{\oplus 3}$. The property of being $K_1$-sealed was implicitly used by Luo in her thesis [@Luo]. It involves computing the Poisson homology using the first homology of the corresponding Koszul complex in certain special cases. It is unclear if the $K_1$-sealedness of $\Omega$ depends on the choice of graded generators $(x,y,z)$. We assume a fixed set of $(x,y,z)$ when discussing $K_1$-sealedness. By definition, the $K_1$-sealedness for $\Omega$ can be reflected via the homology of the Koszul complex $K_\bullet(\overrightarrow{\nabla}\Omega)$ in the following way: for any 1-cycle $\overrightarrow{f}\in Z_1 (K_\bullet(\overrightarrow{\nabla}\Omega))$, if $\overrightarrow{\nabla}\cdot \overrightarrow{f}=0$ in $A_{sing}$, then $\overrightarrow{f}$ belongs to the 1-boundary, namely $\overrightarrow{f}=0$ in $H_1(K_\bullet(\overrightarrow{\nabla}\Omega))$. If $\Omega$ has an isolated singularity at the origin, then $H_1(K_\bullet(\overrightarrow{\nabla}\Omega))=0$ [@Pi1 Proposition 3.5]. Hence, such an $\Omega$ is always $K_1$-sealed. 
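The computation behind Lemma 39(4) is the divergence identity $\overrightarrow{\nabla}\cdot(\overrightarrow{F}\times\overrightarrow{G})=(\overrightarrow{\nabla}\times\overrightarrow{F})\cdot\overrightarrow{G}-\overrightarrow{F}\cdot(\overrightarrow{\nabla}\times\overrightarrow{G})$: since $\overrightarrow{\nabla}\Omega$ is curl-free, $\overrightarrow{\nabla}\cdot(\overrightarrow{\nabla}\Omega\times\overrightarrow{g})=-\overrightarrow{\nabla}\Omega\cdot(\overrightarrow{\nabla}\times\overrightarrow{g})$, which visibly lies in the ideal $(\Omega_x,\Omega_y,\Omega_z)$ and hence vanishes in $A_{sing}$. A minimal sympy spot-check of this identity; the potential $\Omega$ and the vector $\overrightarrow{g}$ below are illustrative choices, not taken from the text:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

Omega = x*y*z + x**4 + y**3                 # illustrative potential
g = sp.Matrix([x**2*y, y*z, x + z**3])      # illustrative element of A^{oplus 3}

# div(grad(Omega) x g) equals -grad(Omega) . curl(g), an A-linear
# combination of Omega_x, Omega_y, Omega_z, hence zero in A_sing
lhs = sp.expand(div(grad(Omega).cross(g)))
rhs = sp.expand(-grad(Omega).dot(curl(g)))
print(sp.simplify(lhs - rhs) == 0)
```

Because the right-hand side is an $A$-linear combination of $\Omega_x,\Omega_y,\Omega_z$, every 1-boundary is sealed, which is the first inclusion of Lemma 39(4).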
In the rest of this subsection, we will show that $K_1$-sealedness holds for some other families of $\Omega$ that do not have an isolated singularity. **Lemma 41**. *Let $\Omega=xyz+g(x,y)$ for some $g(x,y)\in {\Bbbk}[x,y]$ satisfying the following conditions.* - *$g(x,y)$ is homogeneous with respect to some new grading $\deg_{new}(x)=a'$ and $\deg_{new}(y)=b'$ for $a',b'\ge 3$.* - *$g(x,y)$ contains both terms $x^{b'}$ and $y^{a'}$.* - *$xy\mid g_xg_y$.* *Then $\Omega$ is $K_1$-sealed. In this case, by choosing such a new grading together with $\deg_{new}(z)=a'b'-a'-b'=:c'$, we have $h_{H_1(K_\bullet)}(t)=\frac{t^{c'+a'b'}}{1-t^{c'}}$.* In applications/examples in the next section, we have $(a,b,c)=(a'/g, b'/g, c'/g)$ for $g=\gcd(a',b',c')$. *Proof.* We assign a new grading, denoted by $\deg_{new}$, on $A$ by choosing ${\rm deg}_{new}(x):=a'$, ${\rm deg}_{new}(y):=b'$, and ${\rm deg}_{new}(z):=a'b'-a'-b'$. It is obvious that $\Omega$ becomes a homogeneous potential of degree $n':=a'b'$ under this new grading. It is easy to check that $\mathop{\mathrm{GKdim}}A_{sing}=1$; a ${\Bbbk}$-basis of $A_{sing}$ will be constructed explicitly later on. By Lemma [Lemma 39](#zzlem5.1){reference-type="ref" reference="zzlem5.1"}, the Koszul complex $K_\bullet(\Omega_x,\Omega_y,\Omega_z)$ given in [\[E5.0.1\]](#E5.0.1){reference-type="eqref" reference="E5.0.1"} is exact everywhere except at $K_1$. We claim that $H_1(K_\bullet)$ is spanned by the images of the elements $$\overrightarrow{\varphi_l}: =z^l\left(xz,\, -g_x,\, \frac{g_xg_y}{xy}-z^2\right)$$ in $\left(A[a'-n']\oplus A[b'-n']\oplus A[c'-n']\right)_{c'(l+1)+n'}$ for all $l\ge 0$. It is easy to check that $\overrightarrow{\varphi_l}\cdot \overrightarrow{\nabla}\Omega=0$. Suppose we have $\overrightarrow{\varphi_l} =\overrightarrow{f}\times \overrightarrow{\nabla}\Omega$ for some $\overrightarrow{f}=(f_1,f_2,f_3)\in \left(A[-a'-n']\oplus A[-b'-n']\oplus A[-c'-n']\right)_{c'(l+1)+n'}$.
Then the third component of $\overrightarrow{\varphi_l} =\overrightarrow{f}\times \overrightarrow{\nabla}\Omega$ is equal to $$f_1(xz+g_y)-f_2(yz+g_x)=\frac{g_xg_y}{xy}\,z^l-z^{l+2}.$$ However, this is impossible since the monomial $z^{l+2}$ cannot appear on the left-hand side. Since each homogeneous element $\overrightarrow{\varphi_l}$ has a different degree for distinct $l$, their images in $H_1(K_\bullet)$ must be linearly independent. To prove our claim, it suffices to match the Hilbert series of $H_1(K_\bullet)$ with $\frac{t^{c'+n'}}{1-t^{c'}}$, which is the one associated with the ${\Bbbk}$-subspace spanned by $\{\overrightarrow{\varphi_l} \mid l\geq 0\}$. By our assumption (2), we can apply the diamond lemma to obtain a ${\Bbbk}$-basis of $A_{sing}=A/(\Omega_x,\Omega_y,\Omega_z)={\Bbbk}[x,y,z]/(xy,xz+g_y,yz+g_x)$ given by $$\begin{aligned} \label{E5.3.1}\tag{E5.3.1} \{z^i\,|\, i\ge 0\}\ \cup\ \{x,\ldots,x^{b'-1}\}\ \cup\ \{y,\ldots,y^{a'-1}\}.\end{aligned}$$ Thus, $A_{sing}$ has the Hilbert series $$h_{A_{sing}}(t)=\frac{1}{1-t^{c'}} +\frac{t^{a'}-t^{a'b'}}{1-t^{a'}} +\frac{t^{b'}-t^{b'a'}}{1-t^{b'}}.$$ A calculation with the Hilbert series of the terms in [\[E5.0.1\]](#E5.0.1){reference-type="eqref" reference="E5.0.1"} yields $$h_{H_1(K_\bullet)}(t)=h_A(t)(t^{a'+b'}-1)(t^{b'+c'}-1) (t^{a'+c'}-1)+h_{A_{sing}}(t)=\frac{t^{c'+n'}}{1-t^{c'}}.$$ Therefore, we have proved the claim. Next, we show that $\Omega$ is $K_1$-sealed. Taking an arbitrary homogeneous element $\overrightarrow{f}\in Z_1(K_\bullet(\overrightarrow{\nabla}\Omega))$ satisfying $\overrightarrow{\nabla}\cdot \overrightarrow{f}=0$ in $A_{sing}$, up to a boundary, we can take $\overrightarrow{f}=\lambda \overrightarrow{\varphi_l}$ for some $\lambda \in {\Bbbk}$.
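(As a sanity check, the Hilbert-series identity just established is a formal identity of rational functions once $c'=a'b'-a'-b'$ and $n'=a'b'$ are substituted; note $a'+b'+c'=n'$. A minimal sympy sketch verifying it for sample weights; the pairs $(a',b')$ below are illustrative choices:)

```python
import sympy as sp

t = sp.symbols('t')

def h1_identity_holds(ap, bp):
    """Check h_A*(t^(a'+b')-1)(t^(b'+c')-1)(t^(a'+c')-1) + h_{A_sing}
    equals t^(c'+n')/(1-t^(c')) for c' = a'b'-a'-b', n' = a'b'."""
    cp = ap*bp - ap - bp
    n = ap*bp
    hA = 1 / ((1 - t**ap) * (1 - t**bp) * (1 - t**cp))
    hSing = (1 / (1 - t**cp)
             + (t**ap - t**n) / (1 - t**ap)
             + (t**bp - t**n) / (1 - t**bp))
    lhs = hA * (t**(ap + bp) - 1) * (t**(bp + cp) - 1) * (t**(ap + cp) - 1) + hSing
    rhs = t**(cp + n) / (1 - t**cp)
    return sp.cancel(lhs - rhs) == 0

print(h1_identity_holds(3, 4), h1_identity_holds(3, 5))
```

Here `sp.cancel` puts the difference of rational functions over a common denominator, so a return value of `True` certifies the identity exactly for those weights.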
One can check that $$\overrightarrow{\nabla}\cdot \overrightarrow{f}= \lambda (\overrightarrow{\nabla}\cdot \overrightarrow{\varphi_l}) =\lambda\left(l\frac{g_xg_y}{xy}\,z^{l-1}-g_{xy}\,z^l-(l+1)z^{l+1}\right)=0\ \text{in $A_{sing}$}.$$ By our assumption (2), one can check directly that $y\nmid g_x$ and $x\nmid g_y$. So our assumption (3) implies $x\mid g_x$ and $y\mid g_y$. Thus, there is a surjection $$\pi: A\twoheadrightarrow A_{sing}=A/(\Omega_x,\Omega_y,\Omega_z)\twoheadrightarrow A/(x,y)\cong {\Bbbk}[t]$$ via $\pi(x)=\pi(y)=0$ and $\pi(z)=t$. So we have $$\begin{aligned} \pi(\overrightarrow{\nabla}\cdot \overrightarrow{f})&=\lambda\left(l\pi(g_x/x)\pi(g_y/y)\,\pi(z)^{l-1}-\pi(g_{xy})\,\pi(z)^l-(l+1)\pi(z)^{l+1}\right)\\ &=\lambda l\pi(g_x/x)\pi(g_y/y)t^{l-1}-\lambda \pi(g_{xy})\,t^l-\lambda (l+1)t^{l+1}\end{aligned}$$ which cannot be zero in ${\Bbbk}[t]$ unless $\lambda=0$. Hence $\overrightarrow{f}\in B_1(K_\bullet( \overrightarrow{\nabla}\Omega))$, and consequently, $sK_1(A,\Omega)=0$. Therefore, $\Omega$ is $K_1$-sealed. ◻ Another type of $K_1$-sealed $\Omega$ is given by $\Omega=z^n+g(x,y)$ for some $g(x,y)\in {\Bbbk}[x,y]$. We need the following definition. **Definition 42**. A polynomial $g\in {\Bbbk}[x,y]$ is called *special* if for any polynomial $f\in {\Bbbk}[x,y]$, $d\mid (\frac{g_y}{d}f)_x-(\frac{g_x}{d}f)_y$ implies that $d\mid f$, where $d=\gcd(g_x,g_y)$. **Lemma 43**. *The following $g\in {\Bbbk}[x,y]$ are special.* - *$g=x^ky^l$ with $k,l\ge 1$ and $\gcd(k,l)=1$.* - *$g=x^ky^l+x^m$ with $l\ge 1$ and $m\ge k\ge 1$.* - *$g=x^ky^l+y^m$ with $k\ge 1$ and $m\ge l\ge 1$.* *Proof.* (1) We have $d=\gcd(g_x,g_y)=x^{k-1}y^{l-1}$. Suppose there is some $f\in {\Bbbk}[x,y]$ such that $x^{k-1}y^{l-1}\mid(\frac{g_y}{d}f)_x-(\frac{g_x}{d}f)_y=(lxf)_x-(kyf)_y=(l-k)f+lxf_x-kyf_y$. Without loss of generality, we can take $f=x^my^n$ to be a monomial, since the map $f\mapsto (lxf)_x-(kyf)_y$ acts diagonally on monomials. Thus, $x^{k-1}y^{l-1}\mid(l(m+1)-k(n+1))f$.
If to the contrary, $x^{k-1}y^{l-1}\nmid f$, then we must have $m<k-1$ or $n<l-1$ and $l(m+1)=k(n+1)$. The latter implies that $l\mid n+1$ and $k\mid m+1$ since $\gcd(k,l)=1$. This yields a contradiction. \(2\) We have $g_x=x^{k-1}(ky^l+mx^{m-k})$ and $g_y=lx^ky^{l-1}$. So $d=\gcd(g_x,g_y)=x^{k-1}$. Suppose there is some $f\in {\Bbbk}[x,y]$ such that $$x^{k-1}\mid(\frac{g_y}{d}f)_x-(\frac{g_x}{d}f)_y=(lxy^{l-1}f)_x-((ky^l+mx^{m-k})f)_y=(1-k)ly^{l-1}f+ly^{l-1}xf_x-(ky^l+mx^{m-k})f_y.$$ We consider $f$ a polynomial in $x$ with coefficients in ${\Bbbk}[y]$ and denote its lowest term as $hx^n$ for some $0\neq h\in {\Bbbk}[y]$. Then the possible lowest term in $(\frac{g_y}{d}f)_x-(\frac{g_x}{d}f)_y$ is $x^n$ with its coefficient given by $$-l(k-1-n)y^{l-1}h-k(y^l+\delta_{m, k})h_y.$$ If $n\geq k-1$, then $d\mid f$. Suppose $n<k-1$. Then it implies that the above coefficient is zero and so we get $h_y/h=-\frac{k-1-n}{k}ly^{l-1}/(y^l+\delta_{m, k})$. Thus $h=\lambda/ (y^l+\delta_{m, k})^{(k-1-n)/k}$ for some non-zero scalar $\lambda$, which is not a polynomial. This gives a contradiction. \(3\) The proof is similar to (2). ◻ **Lemma 44**. *Let $\Omega=z^n+g(x,y)$ where $n\ge 2$ and $g(x,y)\in {\Bbbk}[x,y]$. Then $\Omega$ is $K_1$-sealed if $g$ is special.* *Proof.* Let $\overrightarrow{f}=(f_1,f_2,f_3)\in A^{\oplus 3}$ satisfying $\overrightarrow{\nabla}\Omega\cdot \overrightarrow{f}=0$ in $A$ and $\overrightarrow{\nabla}\cdot \overrightarrow{f}=0$ in $A_{sing}$. Write $f_i=\sum_{j=0}^{n-1} h_{ij}z^j$ where $h_{ij}\in {\Bbbk}[x,y]$ for $0\le j\le n-2$ and $h_{i(n-1)}\in {\Bbbk}[x,y,z]$ for $i=1,2$. Thus $$\begin{aligned} \overrightarrow{\nabla}\Omega\cdot \overrightarrow{f}= \sum_{i=0}^{n-2}(h_{1i}g_x+h_{2i}g_y)z^i+(h_{1(n-1)}g_x+h_{2(n-1)}g_y+nf_3)z^{n-1}=0.\end{aligned}$$ So we get $h_{1i}g_x+h_{2i}g_y=0$ in ${\Bbbk}[x,y]$ for $0\le i\le n-2$ and $f_3=-(h_{1(n-1)}g_x+h_{2(n-1)}g_y)/n$. 
Set $d=\gcd(g_x,g_y)$, we can further write $h_{1i}=(g_y/d)l_i$ and $h_{2i}=-(g_x/d)l_i$ with $l_i\in {\Bbbk}[x,y]$ for all $0\le i\le n-2$. Therefore, $$\begin{aligned} \overrightarrow{\nabla}\cdot\overrightarrow{f}&=\overrightarrow{\nabla}\cdot\left(\sum_{i=0}^{n-2} l_i\left(\frac{g_y}{d},-\frac{g_x}{d},0\right)z^i+h_{1(n-1)}\left(z^{n-1},0,-\frac{g_x}{n}\right)+h_{2(n-1)}\left(0,z^{n-1},-\frac{g_y}{n}\right)\right)\\ &=\overrightarrow{\nabla}\cdot\left(\sum_{i=0}^{n-2} l_i\left(\frac{g_y}{d},-\frac{g_x}{d},0\right)z^i+\overrightarrow{\nabla}\Omega\times\left(\frac{h_{2(n-1)}}{n},-\frac{h_{1(n-1)}}{n},0\right)\right)\\ &=\sum_{i=0}^{n-2}\left((\frac{g_y}{d}l_i)_x-(\frac{g_x}{d}l_i)_y\right)z^i-\overrightarrow{\nabla}\Omega\, \cdot\, \left(\overrightarrow{\nabla}\times \left(\frac{h_{2(n-1)}}{n},-\frac{h_{1(n-1)}}{n},0\right)\right)\\ &=\sum_{i=0}^{n-2}\left((\frac{g_y}{d}l_i)_x-(\frac{g_x}{d}l_i)_y\right)z^i\ \text{in $A_{sing}$}\\ &=0\ \text{in $A_{sing}$}.\end{aligned}$$ Note that $A_{sing}={\Bbbk}[x,y,z]/(g_x,g_y,z^{n-1})=\oplus_{i=0}^{n-2} ({\Bbbk}[x,y]/(g_x,g_y))\, z^i$. This implies that $(\frac{g_y}{d}l_i)_x=(\frac{g_x}{d}l_i)_y$ in ${\Bbbk}[x,y]/(g_x,g_y)$, and hence $d\mid(\frac{g_y}{d}l_i)_x-(\frac{g_x}{d}l_i)_y$ for all $0\le i\le n-2$. Since $g$ is special, by definition, we can write $l_i=dm_i$ for $m_i\in {\Bbbk}[x,y]$ for all $0\le i\le n-2$, and hence $$\begin{aligned} \overrightarrow{f}&=\sum_{i=0}^{n-2} l_i\left(\frac{g_y}{d},-\frac{g_x}{d},0\right)z^i+\overrightarrow{\nabla}\Omega\times\left(\frac{h_{2(n-1)}}{n},-\frac{h_{1(n-1)}}{n},0\right)\\ &=\sum_{i=0}^{n-2} \left(g_y,-g_x,0\right)m_iz^i+\overrightarrow{\nabla}\Omega\times\left(\frac{h_{2(n-1)}}{n},-\frac{h_{1(n-1)}}{n},0\right)\\ &=\overrightarrow{\nabla}\Omega\times\left(\frac{h_{2(n-1)}}{n},-\frac{h_{1(n-1)}}{n},\sum_{i=0}^{n-2}m_iz^i\right).\end{aligned}$$ This proves our result. 
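The notion of a *special* polynomial (Definition 42) can be probed computationally. The sketch below tests the defining implication on all monomials $f=x^my^n$ up to a bound; this is a finite spot-check (a necessary-condition probe, not a proof of specialness), and both the bound and the sample polynomials are illustrative:

```python
import sympy as sp

x, y = sp.symbols('x y')

def divides(d, f):
    """Polynomial divisibility test in k[x, y]."""
    if f == 0:
        return True
    return sp.cancel(f / d).is_polynomial(x, y)

def special_on_monomials(g, bound=6):
    """Test the implication of Definition 42 on monomials f = x^m y^n, m, n < bound."""
    gx, gy = sp.diff(g, x), sp.diff(g, y)
    d = sp.gcd(gx, gy)
    for m in range(bound):
        for n in range(bound):
            f = x**m * y**n
            e = sp.expand(sp.diff(sp.cancel(gy/d)*f, x) - sp.diff(sp.cancel(gx/d)*f, y))
            if divides(d, e) and not divides(d, f):
                return False        # counterexample: d | e but d does not divide f
    return True

print(special_on_monomials(x**2*y**3))   # gcd(2, 3) = 1, as in Lemma 43(1)
print(special_on_monomials(x**2*y**2))   # k = l = 2: fails already at f = 1
```

For $g=x^2y^2$ (so $k=l$), the implication fails at $f=1$, consistent with the coprimality hypothesis $\gcd(k,l)=1$ in Lemma 43(1).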
◻ ## $uPH^2$-vacancy In this subsection, we introduce another new concept, $uPH^2$-vacancy for $A_\Omega$, and we shall show that if $\Omega$ is $K_1$-sealed, then its corresponding Poisson algebra $A_{\Omega}$ is $uPH^2$-vacant. By definition, $$\ker(d_{\pi_\Omega}^2)~= ~\{\delta\in \mathfrak X^2(A)\,|\, [\delta,\pi_\Omega]_S=0\}$$ where $d_{\pi_\Omega}^\bullet$ is the differential in the cochain complex $(\mathfrak X^\bullet(A_\Omega),d_{\pi_\Omega}^\bullet)$ [\[E1.3.1\]](#E1.3.1){reference-type="eqref" reference="E1.3.1"}. Before stating the definitions, we need a lemma. **Definition-Lemma 45**. *Retain the above notation. Let $$M^2(A):= \{f\pi_{\Omega}+\pi_g\,|\, f,g\in A\}$$ which is a subspace of $\mathfrak X^2(A)$. Then $\text{\upshape im}(d_{\pi_\Omega}^1) \subseteq M^2(A)\subseteq \ker(d_{\pi_\Omega}^2)$.* *Proof.* The first inclusion follows directly from [\[E4.1.1\]](#E4.1.1){reference-type="eqref" reference="E4.1.1"}. That is, $d_\pi^1(\delta)=~\mathop{\mathrm{div}}(\delta)\, \pi_{\Omega}-\pi_{\delta(\Omega)}$ for any $\delta\in {\mathfrak X}^1(A)$. For the second one, it suffices to show that $u:=[f \pi_{\Omega}, \pi_{\Omega}]_S=0$ and $v:=[\pi_g,\pi_{\Omega}]_S=0$ for all $f,g\in A_{\Omega}$. Note that both $u$ and $v$ are in ${\mathfrak X}^3(A)$. Hence, it remains to show that $u(x,y,z)=0$ and $v(x,y,z)=0$. They follow from the definition of the Schouten bracket and [\[E3.1.1\]](#E3.1.1){reference-type="eqref" reference="E3.1.1"} via a straightforward computation. We omit the details. ◻ **Definition 46**. Let $A_\Omega$ be the unimodular Poisson algebra defined in Notation [Notation 20](#zznot3.2){reference-type="ref" reference="zznot3.2"}. 1. The *upper division of the second Poisson cohomology* of $A_{\Omega}$ is defined to be $$uPH^2(A_{\Omega}):=\ker(d_{\pi_\Omega}^2)/M^2(A).$$ 2. The *lower division of the second Poisson cohomology* of $A_{\Omega}$ is defined to be $$lPH^2(A_{\Omega}):=M^2(A)/\text{\upshape im}(d_{\pi_\Omega}^1).$$ 3.
We say $A_{\Omega}$ is *$uPH^2$-vacant* if $uPH^2(A_{\Omega})=0$, or equivalently $lPH^2(A_{\Omega})=PH^2(A_{\Omega})$. According to , the definition of $uPH^2(A_{\Omega})$ (and $lPH^2(A_{\Omega})$, as well as $uPH^2$-vacancy) is invariant under the choice of graded generators $(x,y,z)$; consequently, $uPH^2$-vacancy is independent of this choice. According to the following lemma, $lPH^2(A_{\Omega})$ is typically non-zero. **Lemma 47**. *Retain the above notation and assume (1-3). If $h_{PH^0(A)}(t)=h_{PH^1(A)}(t)=\frac{1}{1-t^n}$, then the Hilbert series of $lPH^2(A_\Omega)$ is given by $$h_{lPH^2(A)}(t)~=~\frac{1}{ t^{a+b+c}}\left(\frac{(1-t^{n-a}) (1-t^{n-b})(1-t^{n-c})}{(1-t^n)(1-t^a)(1-t^b)(1-t^c)}-1\right).$$* *Proof.* We claim that the following sequence of graded vector spaces $$\label{E5.9.1}\tag{E5.9.1} 0\to {\Bbbk}[\Omega][a+b+c]\xrightarrow{\alpha} A[-n+a+b+c]\oplus A[a+b+c]\xrightarrow{\beta} M^2(A)\xrightarrow{}0$$ is exact, where $\alpha(g)=(-dg/d\Omega,g)$ and $\beta(f,g)=f\pi_\Omega+\pi_g$. It is clear that $\alpha,\beta$ are graded maps, $\alpha$ is injective, $\beta$ is surjective, and $\beta\alpha=0$. Hence it remains to show that $\ker (\beta)=\text{\upshape im}(\alpha)$. Suppose $\beta(f,g)=f\pi_\Omega+\pi_g=0$. One can check that $$d_\pi^0(g)=[\pi_\Omega,g]_S=-[\pi_g,\Omega]_S=[f\pi_\Omega,\Omega]_S=0.$$ By the assumption, we have $Z_P(A)={\Bbbk}[\Omega]$. Therefore, we have $g\in Z_P(A)={\Bbbk}[\Omega]$ and $f=-dg/d\Omega$.
Note that we have the following exact sequences of graded vector spaces, $$\label{E5.9.2}\tag{E5.9.2} 0\to PH^0(A)\to \mathfrak X^0(A)\to \text{\upshape im}(d_\pi^0)\to 0,$$ $$\label{E5.9.3}\tag{E5.9.3} 0\to \text{\upshape im}(d_\pi^0)\to Pd(A)[w]\to PH^1(A)[w]\to 0,$$ $$\label{E5.9.4}\tag{E5.9.4} 0\to Pd(A)[w]\to \mathfrak X^1(A)[w]\to M^2(A)[2w] \to lPH^2(A)[2w]\to 0.$$ One can deduce that $$\label{E5.9.5}\tag{E5.9.5} h_{lPH^2(A)}(t)=h_{M^2(A)}(t)-h_{\mathfrak X^1(A)}(t)t^w +h_{\mathfrak X^0(A)}(t)t^{2w} +h_{PH^1(A)}(t)t^w-h_{PH^0(A)}(t)t^{2w}.$$ The assertion follows from [\[E5.9.1\]](#E5.9.1){reference-type="eqref" reference="E5.9.1"}, and a direct computation. ◻ We also need the *de Rham complex* for $A$: $$\label{E5.9.6}\tag{E5.9.6} 0\to {\Bbbk}\to \Omega^0_A\xrightarrow{d} \Omega^1_A\xrightarrow{d} \Omega^2_A\xrightarrow{d} \Omega^3_A\to 0.$$ We will utilize the following natural isomorphisms of graded vector spaces $$\label{E5.9.7}\tag{E5.9.7} \left \{ \centering \begin{aligned} &\Omega^0_A\xrightarrow{\sim}A &&\\ &\Omega^1_A\xrightarrow{\sim} A[-a]\oplus A[-b]\oplus A[-c] && fdg \mapsto f\overrightarrow{\nabla} g \\ &\Omega^2_A\xrightarrow{\sim} A[-b-c]\oplus A[-a-c]\oplus A[-a-b] && fdg\wedge dh\mapsto f\overrightarrow{\nabla} g\times \overrightarrow{\nabla} h\\ &\Omega^3_A\xrightarrow{\sim} A[-a-b-c] && fdx\wedge dy\wedge dz\mapsto f\\ \end{aligned}\right.$$ to [\[E5.9.6\]](#E5.9.6){reference-type="eqref" reference="E5.9.6"} for $f, g,h\in A$. 
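The "direct computation" concluding the proof of Lemma 47 combines [\[E5.9.1\]](#E5.9.1){reference-type="eqref" reference="E5.9.1"} and [\[E5.9.5\]](#E5.9.5){reference-type="eqref" reference="E5.9.5"} into the stated closed form for $h_{lPH^2(A)}(t)$. Since only rational functions in $t$ are involved, the identity can be spot-checked exactly at rational values of $t$ with the standard library; the weight tuples $(a,b,c,n)$ below are illustrative choices:

```python
from fractions import Fraction

def lhs_minus_rhs(a, b, c, n, t):
    """E5.9.5 evaluated at t, minus the claimed closed form for h_{lPH^2(A)}(t)."""
    hA = 1 / ((1 - t**a) * (1 - t**b) * (1 - t**c))   # h_A(t), with h_{X^0} = h_A
    hPH = 1 / (1 - t**n)                              # h_{PH^0} = h_{PH^1}
    tw = t**(n - a - b - c)                           # t^w, where w = n - (a+b+c)
    shift = t**(-(a + b + c))
    # h_{M^2(A)} from the short exact sequence (E5.9.1)
    hM2 = hA * tw + hA * shift - shift * hPH
    hX1 = hA * (t**(-a) + t**(-b) + t**(-c))          # h of the free module X^1(A)
    lhs = hM2 - hX1 * tw + hA * tw**2 + hPH * tw - hPH * tw**2
    rhs = shift * ((1 - t**(n - a)) * (1 - t**(n - b)) * (1 - t**(n - c))
                   / ((1 - t**n) * (1 - t**a) * (1 - t**b) * (1 - t**c)) - 1)
    return lhs - rhs

# exact rational evaluation at t = 1/2 for several illustrative weight tuples
for data in [(1, 1, 1, 3), (1, 1, 2, 5), (1, 2, 3, 6)]:
    print(lhs_minus_rhs(*data, Fraction(1, 2)) == 0)
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the comparison.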
As a result, we have the following complex of graded vector spaces: $$\label{E5.9.8}\tag{E5.9.8} \centering \begin{tabular}{ccccc} &\,$A[-a]$ & &$A[-b-c]$& \vspace*{-2mm}\\ $0\to {\Bbbk}\to A\xrightarrow{\overrightarrow{\nabla}}$ & \hspace*{-2.5mm}$\oplus A[-b]$ &\hspace*{-2.5mm}$\xrightarrow{\overrightarrow{\nabla}\times }$ & \hspace*{-2.5mm}$ \oplus A[-a-c]$ &\hspace*{-2.5mm}$\xrightarrow{\overrightarrow{\nabla}\cdot } A[-a-b-c]\to 0.$\\ &\hspace*{-2.5mm}$\oplus A[-c]$& & \hspace*{-2.5mm}$ \oplus A[-a-b]$& \end{tabular} %\caption{Caption}$$ In the sequel, we will use the well-known fact that the de Rham complex [\[E5.9.8\]](#E5.9.8){reference-type="eqref" reference="E5.9.8"} for $A$ is always exact. The following lemma shows that $A_{\Omega}$ is $uPH^2$-vacant if and only if it is $H$-ozone. It is important to note that the $uPH^2$-vacancy of $A_{\Omega}$ plays a significant role in computing the Hilbert series of Poisson cohomology groups. In this lemma, we will give two more equivalent descriptions in terms of a combination of the Koszul complex for the sequence $\overrightarrow{\nabla}\Omega=(\Omega_x, \Omega_y, \Omega_z)$ and the de Rham complex for $A$. **Lemma 48**. *Let $A$ be $A_\Omega$ satisfying (1,2,3) where $\Omega$ is a nonconstant homogeneous potential. 
Then the following are equivalent.* - *$A$ is $H$-ozone.* - *For any $\overrightarrow{f}\in A^{\oplus 3}$, $\overrightarrow{\nabla}\cdot \overrightarrow{f} =\overrightarrow{f}\cdot \overrightarrow{\nabla}\Omega=0$ implies that $\overrightarrow{f}=\overrightarrow{\nabla}g \times \overrightarrow{\nabla}\Omega$ for some $g\in A$.* - *For any $\overrightarrow{f}\in A^{\oplus 3}$, if $(\overrightarrow{\nabla}\times \overrightarrow{f})\cdot \overrightarrow{\nabla}\Omega=0$, then $\overrightarrow{f}=\overrightarrow{\nabla}m +g\overrightarrow{\nabla}\Omega$ for some $m,g\in A$.* - *$A$ is $uPH^2$-vacant.* *Proof.* (1)$\Leftrightarrow$(2) Assume (2) holds and suppose $\delta$ is an ozone derivation of $A$, i.e., $\delta(\Omega)=0$. Write $\overrightarrow{\delta} =(\delta(x),\delta(y),\delta(z))\in A^{\oplus 3}$. So $0=\delta(\Omega)=\overrightarrow{\delta} \cdot \overrightarrow{\nabla}\Omega$. By [\[E4.1.1\]](#E4.1.1){reference-type="eqref" reference="E4.1.1"}, we have $\overrightarrow{\nabla}\cdot \overrightarrow{\delta}={\rm div}(\delta)=0$ as $\delta(\Omega)=0$. So (2) implies that $\overrightarrow{\delta}=\overrightarrow{\nabla}g\times \overrightarrow{\nabla}\Omega$ for some $g\in A$, which is equivalent to $\delta=\{-,g\}$. So (1) holds. Conversely, suppose there is some $\overrightarrow{f} \in A^{\oplus 3}$ such that $\overrightarrow{\nabla}\cdot \overrightarrow{f}=\overrightarrow{f}\cdot \overrightarrow{\nabla}\Omega=0$. Consider the derivation $\delta$ of $A$ defined by $(\delta(x),\delta(y),\delta(z))=\overrightarrow{f}$. The assumptions on $\overrightarrow{f}$ yield that $\delta(\Omega)=\mathop{\mathrm{div}}(\delta)=0$. So (2) implies that $\delta$ is a Poisson derivation. Note that in the proof of [@MTU Lemma 1], it is shown that the Poisson center $Z_P(A)$ is algebraic over ${\Bbbk}[\Omega]$. Hence, $\delta$ vanishes on $Z_P(A)$, so it is ozone. 
So by (1), $\delta=\{-,g\}$ for some $g\in A$, that is, $\overrightarrow{f}=\overrightarrow{\nabla}g\times \overrightarrow{\nabla}\Omega$. (2)$\Leftrightarrow$(3) Assume (2) holds and suppose that $(\overrightarrow{\nabla}\times \overrightarrow{f})\cdot \overrightarrow{\nabla}\Omega=0$ for $\overrightarrow{f}\in A^{\oplus 3}$. Write $\overrightarrow{h}:=\overrightarrow{\nabla}\times \overrightarrow{f}\in A^{\oplus 3}$. It is easy to see that $\overrightarrow{\nabla}\cdot \overrightarrow{h} =0$ and $\overrightarrow{h}\cdot \overrightarrow{\nabla}\Omega=0$. So $\overrightarrow{h}=\overrightarrow{\nabla}g\times \overrightarrow{\nabla}\Omega=\overrightarrow{\nabla} \times(g\overrightarrow{\nabla}\Omega)$ for some $g\in A$. This implies that $\overrightarrow{\nabla}\times (\overrightarrow{f}-g\overrightarrow{\nabla}\Omega)=0$. By the exactness of the de Rham complex for $A$ [\[E5.9.8\]](#E5.9.8){reference-type="eqref" reference="E5.9.8"}, we can write $\overrightarrow{f}-g\overrightarrow{\nabla}\Omega =\overrightarrow{\nabla}m$ for some $m\in A$. So (3) holds. Conversely, suppose $\overrightarrow{\nabla}\cdot \overrightarrow{f}=0$ and $\overrightarrow{f}\cdot \overrightarrow{\nabla}\Omega=0$ for some $\overrightarrow{f}\in A^{\oplus 3}$. By the exactness of the de Rham complex for $A$ [\[E5.9.8\]](#E5.9.8){reference-type="eqref" reference="E5.9.8"}, we can write $\overrightarrow{f}=\overrightarrow{\nabla} \times \overrightarrow{h}$ for some $\overrightarrow{h}\in A^{\oplus 3}$. Hence $(\overrightarrow{\nabla}\times \overrightarrow{h}) \cdot \overrightarrow{\nabla}\Omega=0$. So, by (3), we can write $\overrightarrow{h}=\overrightarrow{\nabla}m+g \overrightarrow{\nabla}\Omega$ for some $g, m\in A$. Thus $\overrightarrow{f}=\overrightarrow{\nabla}\times (\overrightarrow{\nabla}m+g\overrightarrow{\nabla}\Omega) =\overrightarrow{\nabla}g\times \overrightarrow{\nabla}\Omega$ and (2) follows. 
(3)$\Leftrightarrow$(4) We use the identification $\mathfrak X^2(A)\xrightarrow{\sim} A^{\oplus 3}$ described in [\[E1.3.2\]](#E1.3.2){reference-type="eqref" reference="E1.3.2"} via $$\delta\mapsto \overrightarrow{\delta} =(\delta(y, z), \delta(z, x), \delta(x, y)),$$ which is different from the notation $\overrightarrow{\delta}$ used in the proof of part (1). One can check that $\pi_g$ and $f\pi_\Omega$ correspond to $\overrightarrow{\nabla}g$ and $f\overrightarrow{\nabla}\Omega$, respectively. Moreover, by [\[E1.3.6\]](#E1.3.6){reference-type="eqref" reference="E1.3.6"}, we have $-[\delta,\pi_\Omega]_S=-(\overrightarrow{\nabla}\times \overrightarrow{\delta})\cdot \overrightarrow{\nabla}\Omega$. Then (3) and (4) are equivalent by reinterpreting the conditions through the above identification. ◻ The result below links the $K_1$-sealedness of a potential $\Omega$ to the $H$-ozoneness of its associated Poisson algebra $A_\Omega$. **Lemma 49**. *Let $\Omega$ be a homogeneous polynomial of positive degree $n$. If $\Omega$ is $K_1$-sealed, then $A_\Omega$ is $H$-ozone.* *Proof.* It suffices to show that $A$ satisfies condition (3) in . Suppose that $\deg(x)=a,\deg(y)=b$, and $\deg(z)=c$ for some positive integers $a,b,c$. Note that $\Omega$ is homogeneous of positive degree $n$, which is not necessarily equal to $a+b+c$. As a result, the Poisson bracket $\pi_\Omega$ on $A$ is homogeneous of degree $n-a-b-c$. The following diagram combines the Koszul complex [\[E5.0.1\]](#E5.0.1){reference-type="eqref" reference="E5.0.1"} and the de Rham complex [\[E5.9.8\]](#E5.9.8){reference-type="eqref" reference="E5.9.8"}.
$$\xymatrix{ & \left(A[-a] \oplus A[-b] \oplus A[-c]\right)[-n] \ar[d]^-{\overrightarrow{\nabla}\Omega\times}&\\ A[-a]\oplus A[-b]\oplus A[-c]\ar[r]^-{\overrightarrow{\nabla}\times} & A[-b-c]\oplus A[-a-c]\oplus A[-a-b] \ar[d]^-{\overrightarrow{\nabla}\Omega\cdot} \ar[r]^-{\overrightarrow{\nabla}\cdot}& A[-a-b-c]&\\ &A[n-a-b-c].& }$$ Without loss of generality, let us assume that $\overrightarrow{f}\in \left(A[-a]\oplus A[-b] \oplus A[-c]\right)_\ell$ for some $\ell\in \mathbb Z$ such that $(\overrightarrow{\nabla}\times \overrightarrow{f})\cdot \overrightarrow{\nabla}\Omega=0$. The condition $(\overrightarrow{\nabla}\times \overrightarrow{f})\cdot \overrightarrow{\nabla}\Omega=0$ implies that $\overrightarrow{\nabla}\times \overrightarrow{f}\in Z_1(K_\bullet(\overrightarrow{\nabla}\Omega))$. So we can write $$\overrightarrow{\nabla}\times \overrightarrow{f} =\overrightarrow{f_1}\times \overrightarrow{\nabla}\Omega +\overrightarrow{h}$$ for some $\overrightarrow{f_1}\in \left(A[-a]\oplus A[-b] \oplus A[-c]\right)_{\ell-n}$ and $\overrightarrow{h}\in Z_1(K_\bullet( \overrightarrow{\nabla}\Omega))\setminus B_1(K_\bullet(\overrightarrow{\nabla}\Omega))$ or zero. If $\overrightarrow{h}\ne 0$, then $$\begin{aligned} \overrightarrow{\nabla}\cdot \overrightarrow{h} &= \overrightarrow{\nabla}\cdot(\overrightarrow{\nabla}\times \overrightarrow{f}-\overrightarrow{f_1}\times \overrightarrow{\nabla}\Omega)\\ &=-\overrightarrow{\nabla}\cdot(\overrightarrow{f_1}\times \overrightarrow{\nabla}\Omega)\\ &=-(\overrightarrow{\nabla}\times \overrightarrow{f_1})\cdot \overrightarrow{\nabla}\Omega\\ &=0\ \text{in $A_{sing}$}.\end{aligned}$$ The last equality follows from a direct computation. Since $\Omega$ is $K_1$-sealed, $\overrightarrow{h}\in B_1(K_\bullet(\overrightarrow{\nabla}\Omega))$, yielding a contradiction. 
Thus $\overrightarrow{h}= \overrightarrow{0}$, and consequently, $(\overrightarrow{\nabla}\times \overrightarrow{f_1})\cdot \overrightarrow{\nabla}\Omega=0$ by the above calculation. Repeating this procedure with the initial setting $\overrightarrow{f_0}=\overrightarrow{f}$, we get $$\overrightarrow{\nabla}\times \overrightarrow{f_{m-1}} =\overrightarrow{f_m}\times \overrightarrow{\nabla}\Omega \quad{\text{and}}\quad (\overrightarrow{\nabla}\times \overrightarrow{f_m})\cdot \overrightarrow{\nabla}\Omega=0$$ for some $\overrightarrow{f_m}\in \left(A[-a]\oplus A[-b]\oplus A[-c]\right)_{\ell-mn}$ for $m\geq 1$. Since $A$ is connected graded, we have $\overrightarrow{f_p}=0$ for some $p\gg 0$, which implies that $\overrightarrow{\nabla}\times \overrightarrow{f_{p-1}}=0$. By the exactness of the de Rham complex for $A$, we have $\overrightarrow{f_{p-1}}=\overrightarrow{\nabla}G_{p-1}$ for some $G_{p-1}\in A$. To complete the proof, it suffices to prove that, if we can write $\overrightarrow{f_{q}}=\overrightarrow{\nabla}G_q +H_q\overrightarrow{\nabla}\Omega$ for some $G_q,H_q\in A$, then the same holds for $\overrightarrow{f_{q-1}}$. An easy calculation yields that $$\begin{aligned} \overrightarrow{\nabla}\times \overrightarrow{f_{q-1}} &=\overrightarrow{f_q}\times \overrightarrow{\nabla}\Omega\\ &=(\overrightarrow{\nabla}G_q+H_q\overrightarrow{\nabla}\Omega) \times \overrightarrow{\nabla}\Omega\\ &=\overrightarrow{\nabla}G_q\times \overrightarrow{\nabla}\Omega\\ &=\overrightarrow{\nabla}\times (G_q\overrightarrow{\nabla}\Omega).\end{aligned}$$ This implies that $\overrightarrow{\nabla}\times (\overrightarrow{f_{q-1}}-G_q\overrightarrow{\nabla}\Omega)=0$. By the exactness of the de Rham complex for $A$, we can write $\overrightarrow{f_{q-1}}-G_q\overrightarrow{\nabla}\Omega =\overrightarrow{\nabla}G_{q-1}$ for some $G_{q-1}\in A$. The result now follows by downward induction.
◻ By the above two lemmas (and Theorem [Theorem 6](#zzthm0.6){reference-type="ref" reference="zzthm0.6"}), we have $$\label{E5.11.1}\tag{E5.11.1} {\text{$\Omega$ is $K_1$-sealed}} \Rightarrow {\text{$A_{\Omega}$ is $uPH^2$-vacant}} \Leftrightarrow {\text{$A_{\Omega}$ is $H$-ozone}} \Leftrightarrow {\text{$\Omega$ is irreducible and balanced.}}$$ However, it is not clear if "${\text{$\Omega$ is $K_1$-sealed}} \Leftarrow {\text{$A_{\Omega}$ is $uPH^2$-vacant}}$". So, we ask the following question. **Question 50**. Is the $K_1$-sealedness condition equivalent to $H$-ozoneness for $\Omega$ with $|\Omega|=|x|+|y|+|z|$? # Poisson cohomology of weighted Poisson algebras {#zzsec6} In this last section, we investigate Poisson cohomology for a graded unimodular Poisson algebra $A$ in dimension three following . We aim to expand upon the results of Van den Bergh [@VdB] and Pichereau [@Pi1] to encompass a wider range of Poisson algebras. First, we show that $A_\Omega$ is $H$-ozone for all balanced irreducible $\Omega$. When $\Omega$ has an isolated singularity, $\Omega$ is $K_1$-sealed and $A_{\Omega}$ is thus $H$-ozone. By and , we can use the classification of $\Omega$ in to further divide the rest of the irreducible potentials (those without isolated singularities) into the following four cases: - Under a new grading, the associated graded Poisson algebra of $A_{\Omega}$ is $A_{\overline{\Omega}}$ for some $\overline{\Omega}=z^n+x^ky^l$ with $n\geq 2$ and $\gcd(k,l)=1$, up to a permutation of $x,y,z$; - $\Omega=xyz+x^{b'}+y^{a'}$, where $a',b'\ge 3$; - $\Omega$ is non-balanced and irreducible; - $\Omega=z^2+y^3+x^2 y^2$ or $z^2+x^2y^2+x^{2+\frac{2b}{a}}$. ## Case (1) {#zzsec6.1} We first verify that $A_\Omega$ is $H$-ozone in the case $\Omega=z^n+x^ky^l$ with $n\geq 2$ and ${\rm gcd}(k, l)=1$. For any general $\Omega$ containing $z^n+x^ky^l$, we construct a $w$-filtration on $A_\Omega$ and consider the associated $w$-graded Poisson algebra instead.
Applying a spectral sequence argument, we can establish that $A_\Omega$ is still $H$-ozone for any such general $\Omega$. **Lemma 51**. *Let $\Omega=z^n+x^ky^l$ for some positive integers $n, k,l$ such that $n\geq 2$ and $\gcd(k,l)=1$. Then $A_{\Omega}$ is H-ozone.* *Proof.* By (1) and , $\Omega$ is $K_1$-sealed. The assertion follows from . ◻ Now suppose $\Omega$ is as in Case (1). We will consider some Poisson $w$-filtration on $A_\Omega$, whose associated $w$-graded Poisson algebra is defined by a potential of the form $z^n+x^ky^l$. For the rest of this subsection, it is more convenient to use the filtration $\mathbb F=\{F_i\,|\, i\in \mathbb Z\}$ on $A$ consisting of an increasing chain of ${\Bbbk}$-subspaces $F_i\subseteq F_{i+1}$ (which is different from the one given in ). Accordingly, we have to modify other parts of . In particular, we will change $w$ to $-w$ when we use a Poisson $w$-filtration. **Lemma 52**. *Let $A$ be a connected graded polynomial Poisson algebra and $\mathbb F$ be a $w$-filtration of $A$. Suppose the following hold:* - *The Poisson center $Z_P(A)={\Bbbk}[\chi]$ with $\deg(\chi)>0$.* - *The Euler derivation $E$ of $A$ preserves the $w$-filtration $\mathbb F$.* - *The associated $w$-graded Poisson algebra ${\rm gr}_\mathbb FA$ is again a connected graded polynomial Poisson algebra.* - *The Poisson center $Z_P({\rm gr}_\mathbb FA)$ is ${\Bbbk}[\overline{\chi}]$ with $s:=\deg_{new}(\overline{\chi})>0$ here $\deg_{new}$ is the new grading associated to the filtration ${\mathbb F}$.* - *${\rm gr}_\mathbb FA$ has no non-zero Poisson derivation of degree $-s$.* - *${\rm gr}_\mathbb FA$ is $H$-ozone.* *Then $A$ is both $H$-ozone and $PH^1$-minimal.* *Proof.* Let $E$ be the Euler derivation of $A_\Omega$. By (2), we have the induced graded Poisson derivation, denoted by $E^{ind}$, on ${\rm gr}_\mathbb FA$. So we can write $E^{ind}(\overline{\chi})=m\,\overline{\chi}$, where $m=\deg(\chi)$ is under the original grading of $A$. 
Denote $Z=Z_P(\mathop{\mathrm{gr}}_\mathbb F A)$. We claim that $PH^1(\mathop{\mathrm{gr}}_\mathbb F A)\cong ZE^{ind}$ by following the argument in [@TWZ Lemma 7.7]. Note that the induced Poisson bracket on ${\rm gr}_\mathbb FA$ is homogeneous of degree $w$. So, it suffices to consider all homogeneous Poisson derivations. Say $\phi$ is such one of degree $i$. One can check that $\phi(\overline{\chi})\in Z$. By (5), we get $\phi(\overline{\chi})=a \overline{\chi}^n$ for some $a\in {\Bbbk}$ and $n>0$ (and further $\phi(\overline{\chi})=0$ if $s\nmid i$). Write $\phi'=\phi-\frac{a}{m}\overline{\chi}^{n-1}E^{ind}$. Then $\phi'(\overline{\chi})=0$, whence $\phi'$ is ozone. Now (6) implies that $\phi=\frac{a}{m}\overline{\chi}^{n-1}E^{ind}+\phi'$ for some Hamiltonian derivation $\phi'$. This means that $Pd({\rm gr}_\mathbb FA)=ZE^{ind}+Hd({\rm gr}_\mathbb FA)$. It remains to show that $ZE^{ind}\cap Hd({\rm gr}_\mathbb FA)=0$. Let $\phi=fE^{ind}$ be Hamiltonian for some $f\in Z$. So $\phi(\overline{\chi})=fE^{ind}(\overline{\chi})=mf\overline{\chi}=0$. Hence, $f=0$ and $\phi=0$ since $Z$ is an integral domain. This proves our claim. Next, we use the $w$-filtration $\mathbb F=\{F_iA\mid i\in \mathbb Z\}$ of $A$ to filter the cochain complex $(\mathfrak X^\bullet(A), d_\pi^\bullet)$ and compute $PH^1(A)$ by spectral sequence. For each $p,i\in \mathbb Z$ with $i\ge 0$, we define a ${\Bbbk}$-subspace $F_p\mathfrak X^i(A)$ of $\mathfrak X^i(A)$ $$\begin{aligned} F_p\mathfrak X^i(A)=\{f\in \mathfrak X^i(A)\,|\, f(a_{1},\ldots,a_{i})\in F_{l_1+\cdots +l_i-p+iw}A\ \text{for any $a_{j}\in F_{l_j}A$}, 1\leq j\leq i\}. 
\end{aligned}$$ Applying the differential formula [\[E1.0.1\]](#E1.0.1){reference-type="eqref" reference="E1.0.1"}, for any $f\in F_p\mathfrak X^i(A)$ we have $$\begin{aligned} d_{\pi}^i(f)(a_0,\ldots,a_i) &=\sum_{j=0}^i (-1)^j \{ a_j, f(a_0,\ldots, \widehat{a_j},\ldots,a_i)\}\\ &\quad +\sum_{0\leq j<k\leq i} (-1)^{j+k} f(\{a_j,a_k\}, a_0,\ldots,\widehat{a_j},\ldots, \widehat{a_k},\ldots,a_i)\\ &\in F_{l_0+\cdots+l_i-p+(i+1)w}A\end{aligned}$$ where $a_j\in F_{l_j}A$ for $0\leq j\leq i$. So $d_\pi^i: F_p\mathfrak X^i(A)\to F_p\mathfrak X^{i+1}(A)$. Then $\{F_p\mathfrak X^\bullet(A)\,|\, p\in \mathbb Z\}$ is a (decreasing) filtration on the cochain complex $(\mathfrak X^\bullet(A),d_\pi^\bullet)$, which is exhaustive and bounded below since ${\rm gr}_\mathbb FA$ is connected graded. Thus according to [@We §5.4], we have a cohomology spectral sequence with $$E_0^{p,q}=F_p\mathfrak X^{p+q}(A)/F_{p+1}\mathfrak X^{p+q} \cong \mathfrak X^{p+q}({\rm gr}_\mathbb FA)_{-p+(p+q)w}$$ and $$E_1^{p,q}=PH^{p+q}({\rm gr}_\mathbb FA)_{-p+(p+q)w} \Longrightarrow PH^{p+q}(A).$$ By (4), we know $PH^0({\rm gr}_\mathbb FA)={\Bbbk}[\overline{\chi}]$. Since every cocycle $(\overline{\chi})^n$ in $E_1$-page can be lifted to a Poisson central element $\chi^n$ in $PH^0(A)$, the elements in $Z={\Bbbk}[\overline{\chi}]$ are all permanent cocycles and survive to $E_\infty$-page. As a consequence, the differentials $d_r^{p,q}: E_r^{p,q}\to E_r^{p+r,q-r+1}$ are all zero whenever $p+q=0$. So $E_1^{p,q}=E_\infty^{p,q}$ when $p+q=0$. By our previous claim, we know $PH^1({\rm gr}_\mathbb FA)=ZE^{ind}$. Similarly, every cocycle $\overline{\chi}^nE^{ind}$ can be lifted to a Poisson derivation $\chi^nE$ in $PH^1(A)$. Hence $E_1^{p,q}=E_\infty^{p,q}$ when $p+q=1$. This implies that $PH^1(A)={\Bbbk}[\chi]E$ and hence $A$ is $PH^1$-minimal. Finally, $A$ is $H$-ozone by [@TWZ Lemma 7.5]. ◻ By combining and , we obtain the following consequence. **Proposition 53**. 
*If $\Omega$ is of the form in Case (1), then $A_{\Omega}$ is $H$-ozone.* *Proof.* We first show that any such $A_\Omega$ satisfies the assumptions in , and then the result follows. We use $\Omega=z^2+x^2y^2+x^4y$ as an illustration. Note that the original grading on $A$ is given by $\deg(x)=1,\deg(y)=2,\deg(z)=3$. Now we set $\deg_{new}(x)=3,\deg_{new}(y)=2,\deg_{new}(z)=7$ and consider the following algebra filtration $\mathbb F=\{F_i\,|\,i\in \mathbb N\}$, where $F_i$ are spanned by all monomials $x^jy^kz^l$ satisfying $3j+2k+7l\le i$. By [@HTWZ1 Lemma 2.9], it is easy to check that $\{F_i,F_j\}\subseteq F_{i+j+2}$ for all possible $i,j\in \mathbb N$. So $\mathbb F$ is a $w$-filtration for the Poisson algebra $A_\Omega$ with $w=2$. It is clear that ${\rm gr}_\mathbb F A= {\Bbbk}[\overline{x},\overline{y},\overline{z}]$ with the new grading $\deg_{new}(\overline{x})=3, \deg_{new}(\overline{y})=2,\deg_{new}(\overline{z})=7$. One can further verify that ${\rm gr}_\mathbb F A$ is still unimodular with a homogeneous potential $\overline{\Omega}=\overline{z}^2+\overline{x}^4\overline{y}$. By , $(\mathop{\mathrm{gr}}_\mathbb F A)_{\overline{\Omega}}$ is $H$-ozone. Moreover, it is routine to check that $(\mathop{\mathrm{gr}}_\mathbb F A)_{\overline{\Omega}}$ satisfies all the requirements in . So $A_\Omega$ is $H$-ozone. ◻ ## Case (2) {#zzsec6.2} **Proposition 54**. *If $\Omega$ is of the form in Case (2), then $A_{\Omega}$ is $H$-ozone.* *Proof.* If $\Omega$ is of the form $xyz+x^{b'}+y^{a'}$, by , $\Omega$ is $K_1$-sealed. The assertion follows from . ◻ ## Case (3) {#zzsec6.3} In this subsection, we show that $A_\Omega$ is not $H$-ozone for each non-balanced irreducible $\Omega$. **Lemma 55**. *Let $\Omega$ be non-balanced and irreducible. Then $A$ is not $H$-ozone.* *Proof.* By a choice of $(x,y,z)$, we may assume that $\Omega=h(x,y)=\sum_{ai+bj=n} \alpha_{ij} x^{i} y^{j}$ where $n=a+b+c$. 
Since $\Omega$ is irreducible, $\alpha_{0k}$ and $\alpha_{l0}$ are nonzero, where $kb=n=al$. Since $c=n-a-b=kb -a-b$, the assumption $\gcd(a,b,c)=1$ implies that $\gcd(a,b)=1$. Then the equation $kb=al$ implies that $k=ag$ and $l=bg$ where $g=\gcd(k,l)$. If $\alpha_{ij}\neq 0$ for some $(i,j)$, then $ai+bj=n=abg$. Hence $a\mid j$ and $b\mid i$. This means that $h(x,y)=f(x^b, y^a)$ for some homogeneous polynomial $f(s,t)$. Since $\Omega$ is irreducible, so is $f(s,t)$. The only possibility is when $f(s,t)$ is linear, or equivalently, $h(x,y)=x^b+y^a$. Since $ab=n=a+b+c$, we have $a,b\geq 2$. To show that $A$ is not $H$-ozone, it suffices to show that $\Omega$ does not satisfy condition (2) in . Consider $\overrightarrow{f}=(0,0,1)\in A^{\oplus3}$. It is easy to check that $\overrightarrow{\nabla}\cdot \overrightarrow{f} =\overrightarrow{f}\cdot \overrightarrow{\nabla}\Omega=0$. Suppose $\overrightarrow{f} =\overrightarrow{\nabla}g\times \overrightarrow{\nabla}\Omega$ for some $g\in A$. Write $\overrightarrow{\nabla}g=(g_1,g_2,g_3)\in A^{\oplus 3}$. Then we must have $1=ag_1y^{a-1}-bg_2x^{b-1}$, which is impossible since $a,b\ge 2$. ◻ ## Case (4) {#zzsec6.4} Finally, we deal with the two exceptional potentials: $\Omega=z^2+y^3+x^2 y^2$ or $z^2+x^2y^2+x^{2+\frac{2b}{a}}$. **Proposition 56**. *If $\Omega$ is of one of the above forms, then $A_{\Omega}$ is $H$-ozone.* *Proof.* By (2)-(3) and , $\Omega$ is $K_1$-sealed. The assertion follows from . ◻ **Corollary 57**. *Let $\Omega$ be an irreducible potential in the classification that is neither $x^{k}+y^{l}$ nor $x^{k}+z^{l}$ nor $y^{k}+z^{l}$. Then $\Omega$ is balanced.* *Proof.* By , and , for such an $\Omega$, $A_{\Omega}$ is $H$-ozone. The assertion follows from . ◻ ## Main results on Poisson cohomology {#zzsec6.5} We compute the Hilbert series of the Poisson cohomology groups of $A_\Omega$ when it is $H$-ozone. Our result shows that these Hilbert series depend only on the grading of $A_\Omega$. **Theorem 58**.
*Let $A={\Bbbk}[x,y,z]$ be a connected graded algebra such that $\deg(x)=a,\deg(y)=b,\deg(z)=c$ with unimodular Poisson structure given by some homogeneous polynomial $\Omega$ of degree $n>0$. Suppose the following statements hold:* - *(a) $Z_P(A)={\Bbbk}[\Omega]$.* - *(b) $A$ is $H$-ozone.* - *(c) $A$ has a degree zero Poisson derivation that is not ozone.* - *(d) $A$ has no non-zero Poisson derivation of degree $-n$.* *Then, the Hilbert series of the Poisson cohomology groups of $A$ are given by* 1. *$h_{PH^0(A)}(t)=\frac{1}{1-t^n}$.* 2. *$h_{PH^1(A)}(t)=\frac{1}{1-t^n}$.* 3. *$h_{PH^2(A)}(t)=\frac{1}{t^{a+b+c}}\left(\frac{(1-t^{n-a}) (1-t^{n-b})(1-t^{n-c})}{(1-t^n)(1-t^a)(1-t^b)(1-t^c)}-1\right)$.* 4. *$h_{PH^3(A)}(t)=\frac{(1-t^{n-a})(1-t^{n-b}) (1-t^{n-c})}{t^{a+b+c}(1-t^n)(1-t^a)(1-t^b)(1-t^c)}$.* *Proof.* Consider the cochain complex $(\mathfrak X^\bullet(A),d_\pi^\bullet)$ in [\[E1.3.1\]](#E1.3.1){reference-type="eqref" reference="E1.3.1"} for computing $PH^\bullet(A)$. \(1\) By (a), it is clear that $h_{PH^0(A)}(t)=h_{Z_P(A)}(t)=\frac{1}{1-t^n}$. \(2\) By (c), we have a degree zero Poisson derivation, denoted by $\delta$, such that $\delta(\Omega)\neq 0$. Without loss of generality, we can assume $\delta(\Omega)=\Omega$ as $\delta$ has degree zero. By (d) and a similar argument to that of [@TWZ Lemma 7.7] (also see the proof of ), one can see that $PH^1(A)\cong Z_P(A)\delta$. So $h_{PH^1(A)}(t)=h_{Z_P(A)}(t)=\frac{1}{1-t^n}$. \(3\) By (b) and , $A$ is $uPH^2(A)$-vacant, namely, $lPH^2(A)=PH^2(A)$. Then the result follows from and parts (1, 2). \(4\) Notice that $\ker(d_\pi^2)=M^2(A)$ since $A$ is $uPH^2(A)$-vacant.
Considering [\[E5.9.2\]](#E5.9.2){reference-type="eqref" reference="E5.9.2"}--[\[E5.9.3\]](#E5.9.3){reference-type="eqref" reference="E5.9.3"} and the following exact sequence $$\label{E6.7.1}\tag{E6.7.1} 0\to Pd(A)[w]\to \mathfrak X^1(A)[w]\to \ker(d_\pi^2)[2w] \to PH^2(A)[2w]\to 0,$$ we obtain $$\begin{aligned} \label{E6.7.2}\tag{E6.7.2} h_{PH^3(A)}(t)=h_{\ker(d_\pi^2)}(t)t^w -h_{\mathfrak X^1(A)}(t)t^{2w}+h_{\mathfrak X^0(A)}(t)t^{3w} +\frac{1}{t^{a+b+c}}\frac{(1-t^{w+a})(1-t^{w+b})(1-t^{w+c})} {(1-t^a)(1-t^b)(1-t^c)}\end{aligned}$$ via [\[E1.3.7\]](#E1.3.7){reference-type="eqref" reference="E1.3.7"}. The rest can be deduced from [\[E5.9.1\]](#E5.9.1){reference-type="eqref" reference="E5.9.1"} via another direct computation. ◻ As a consequence, we obtain the Hilbert series of the Poisson cohomology of any connected graded unimodular Poisson algebra $A_\Omega$ with balanced irreducible $\Omega$ (but not necessarily having isolated singularities). **Corollary 59**. *Assume . If $\Omega$ is an irreducible potential in the classification that is neither $x^{k}+y^{l}$ nor $x^{k}+z^{l}$ nor $y^{k}+z^{l}$, then the Hilbert series of Poisson cohomology of $A$ are given by* 1. *$h_{PH^0(A)}(t)=\frac{1}{1-t^n}$.* 2. *$h_{PH^1(A)}(t)=\frac{1}{1-t^n}$.* 3. *$h_{PH^2(A)}(t)=\frac{1}{t^n}\left(\frac{(1-t^{a+b})(1-t^{a+c}) (1-t^{b+c})}{(1-t^n)(1-t^{a})(1-t^{b})(1-t^{c})}-1\right)$.* 4. *$h_{PH^3(A)}(t)=\frac{(1-t^{a+b})(1-t^{a+c})(1-t^{b+c})} {t^n(1-t^n)(1-t^{a})(1-t^{b})(1-t^{c})}$.* *Proof.* It suffices to check that $A_\Omega$ satisfies all the requirements in when $\Omega$ is an irreducible potential mentioned above. (a), (c) and (d) are obvious. So it remains to show that all such $A_\Omega$ are $H$-ozone. If $\Omega$ has an isolated singularity, $A_{\Omega}$ is $H$-ozone. As discussed before , the rest of the irreducible $\Omega$ classified in are divided into four classes. Note that covers class (1), covers class (2), covers class (4), and covers class (3).
So our result follows immediately by letting $n=a+b+c$. ◻ The Hilbert series of the Poisson cohomology depend only on the weights of $x, y, z$ when $\Omega$ is balanced. Thus, these connected graded unimodular Poisson algebras exhibit the same homological behaviors, making it impossible to distinguish irreducible potentials using only the Hilbert series. It would be interesting to see if additional structures in Poisson cohomology can distinguish different irreducible potentials. See the next question. **Question 60**. Can we use ${\Bbbk}[\Omega]$-module structures on $PH^\bullet(A)$ to distinguish between singular and smooth curves and identify types of singularity for $\Omega=0$? We generalize [@TWZ Theorem 0.6] by removing the condition that generators are in degree one and by including equivalent conditions for the second Poisson cohomology group. *Proof of .* (1) $\Rightarrow$ (2): Since $rgt(A)=0$, every degree zero Poisson derivation $\delta$ for $A$ is of the form $\alpha E$ for some $\alpha\in {\Bbbk}$ where $E$ is the Euler derivation. Then $E\wedge \delta=0$. By [\[E1.1.2\]](#E1.1.2){reference-type="eqref" reference="E1.1.2"}, $\pi_{new}=\pi$. So $A=A^{\delta}$. The assertion follows. \(2\) $\Rightarrow$ (1): By [@TWZ Corollary 0.3], there is a Poisson derivation $\delta$ of degree zero such that $A^{\delta}$ is unimodular. Since $A^{\delta}\cong A$ for all $\delta$, $A$ is unimodular. Suppose to the contrary that $A$ is not rigid. Then, there is a Poisson derivation $\delta$ of degree zero not in ${\Bbbk}\,E$. Thus, by [@TWZ Theorem 0.2], the modular derivation of $A^{\delta}$ is $${\mathbf n}=0+n\delta-\mathop{\mathrm{div}}(\delta) E$$ which cannot be zero as $\mathop{\mathrm{div}}(\delta)\in \Bbbk$ [@TWZ Lemma 1.2(3)]. Therefore, $A^{\delta}$ is not isomorphic to $A$, yielding a contradiction. \(1\) $\Leftrightarrow$ (8): In the proof of (1)$\Leftrightarrow$(2), (1) implies that $A$ is unimodular with potential $\Omega$.
By , $\Omega$ is irreducible. According to , up to graded isomorphisms of $A$, the non-balanced potentials are $\Omega=z^2+y^3$, $z^2+x^{\frac{2b}{a}}$, or $\Omega=f(x,y)$ in . It is easy to check that these corresponding Poisson algebras have Poisson derivations of negative degree. So we get (8). The other direction follows from the calculation of $PH^1(A)$ in and [@TWZ Remark 5.2]. \(3\) $\Leftrightarrow$ (5): This follows from [@TWZ Proposition 7.4]. \(5\) $\Leftrightarrow$ (6): Under the hypothesis (5), $A$ is $PH^1$-minimal. One implication follows by [@TWZ Lemma 7.5] and the other direction is clear. \(6\) $\Rightarrow$ (7): See the proof of [@TWZ Lemma 7.5]. \(7\) $\Rightarrow$ (1): Note that $A$ is unimodular in this case. From the classification of $\Omega$ in , it is easy to see that $Z_P(A)={\Bbbk}[\chi]$ with $\deg(\chi)>0$. Thus, $rgt(A)=0$ from [@TWZ Lemma 7.7(2)]. Moreover, suppose $\delta\in Pd(A)_{<0}$. If $\delta(\Omega)=0$, then $\delta=H_f$ for some $f\in A$. This is impossible since $\deg(f)=\deg(\delta)<0$ and $A$ is connected graded. If $\delta(\Omega)\in {\Bbbk}^\times$, then $\deg(\delta)=-n$. Then $\deg(\delta(x))=-b-c,\deg(\delta(y))=-a-c,\deg(\delta(z))=-a-b$, which are all negative. Hence $\delta=0$. So $Pd(A)_{<0}=0$. \(8\) $\Rightarrow$ (4,5,10,11): This assertion follows from . \(4\) $\Rightarrow$ (1): The assertion $rgt(A)=0$ follows from [@TWZ Remark 5.2]. Moreover, since $A$ is connected graded, each non-zero Hamiltonian derivation is of positive degree. As a result, we get $Pd(A)_{<0}=PH^1(A)_{<0}=0$. \(5\) $\Leftrightarrow$ (9): It follows from [\[E1.3.7\]](#E1.3.7){reference-type="eqref" reference="E1.3.7"} that $h_{PH^3(A)}(t)-h_{PH^2(A)}(t) =h_{PH^0(A)}(t)-h_{PH^1(A)}(t)+t^{-n}$. We know that $PH^0(A)=Z_P(A)$. So, the assertion follows from the fact that $h_{PH^1(A)}(t)=h_{PH^0(A)}(t)=h_Z(t)$ if and only if $h_{PH^3(A)}(t)-h_{PH^2(A)}(t)=t^{-n}$. 
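The identity used in (5) $\Leftrightarrow$ (9) can be checked directly against the closed forms in Corollary 59 (where $n=a+b+c$): since $h_{PH^0(A)}(t)=h_{PH^1(A)}(t)$ there, the identity reduces to $h_{PH^3(A)}(t)-h_{PH^2(A)}(t)=t^{-n}$. A short exact-arithmetic sketch (the weight triples and evaluation points below are arbitrary test values, not taken from the paper):

```python
from fractions import Fraction as F

def hilbert_series(a, b, c, t):
    """Corollary 59's four Hilbert series, evaluated at a rational t, with n = a+b+c."""
    n = a + b + c
    h0 = F(1) / (1 - t**n)       # h_{PH^0(A)}(t)
    h1 = h0                      # h_{PH^1(A)}(t)
    num = (1 - t**(a + b)) * (1 - t**(a + c)) * (1 - t**(b + c))
    den = (1 - t**n) * (1 - t**a) * (1 - t**b) * (1 - t**c)
    h2 = (num / den - 1) / t**n  # h_{PH^2(A)}(t)
    h3 = num / (t**n * den)      # h_{PH^3(A)}(t)
    return h0, h1, h2, h3

# Euler-characteristic identity from (E1.3.7): h3 - h2 = h0 - h1 + t^{-n}
for (a, b, c) in [(1, 1, 1), (1, 1, 2), (1, 2, 3), (2, 3, 7)]:
    n = a + b + c
    for t in (F(1, 2), F(2, 3), F(3, 5)):
        h0, h1, h2, h3 = hilbert_series(a, b, c, t)
        assert h3 - h2 == h0 - h1 + t**(-n)
```

The check is exact (rational arithmetic), so it confirms the identity of rational functions at enough sample points to be convincing, though of course it does not replace the symbolic cancellation.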
\(10\) $\Rightarrow$ (7): Note that the subspace $M^2(A)=\{f\pi_\Omega+\pi_g\,|\, f,g\in A\}$ of $\mathfrak X^2(A)$ lies in $\ker(d_\pi^2)$ \[Definition-Lemma [Definition-Lemma 45](#zzlem5.7){reference-type="ref" reference="zzlem5.7"}\]. For any two locally finite graded vector spaces $M$ and $N$, we use the notation $h_M(t)\ge h_N(t)$ to mean $\dim M_i\ge \dim N_i$ for each $i\in \mathbb Z$. By [\[E5.9.1\]](#E5.9.1){reference-type="eqref" reference="E5.9.1"}--[\[E5.9.3\]](#E5.9.3){reference-type="eqref" reference="E5.9.3"} and [\[E6.7.1\]](#E6.7.1){reference-type="eqref" reference="E6.7.1"} with $w=0$, we get $$\begin{aligned} h_{PH^2(A)}(t) &=h_{\ker(d_\pi^2)}(t)-h_{\mathfrak X^1(A)}(t) +h_{\mathfrak X^0(A)}(t)+(h_{PH^1(A)}(t)-h_{PH^0(A)}(t))\\ &\ge h_{M^2(A)}(t)-h_{\mathfrak X^1(A)}(t) +h_{\mathfrak X^0(A)}(t)+( h_{Z_P(A)E}(t)-h_{PH^0(A)}(t))\\ &\ge h_{M^2(A)}(t)-h_{\mathfrak X^1(A)}(t)+h_{\mathfrak X^0(A)}(t)\\ &=\frac{1}{t^n}\left(\frac{(1-t^{a+b})(1-t^{a+c})(1-t^{b+c})} {(1-t^n)(1-t^{a})(1-t^{b})(1-t^{c})}-1\right)\end{aligned}$$ where the last equality follows from the exact sequence [\[E5.9.1\]](#E5.9.1){reference-type="eqref" reference="E5.9.1"} together with a direct computation. By the assumption, one can obtain $M^2(A)=\ker(d_\pi^2)$ and so $A$ is $uPH^2$-vacant. This is equivalent to $A$ being $H$-ozone by . \(11\) $\Rightarrow$ (7): The argument is similar to the proof of (10) $\Rightarrow$ (7) by using [\[E6.7.2\]](#E6.7.2){reference-type="eqref" reference="E6.7.2"}. \(7\) $\Leftrightarrow$ (12): It follows from . ◻ # Classification of Weight Polynomials $\Omega$ of degree $|x|+|y|+|z|$ in ${\Bbbk}[x,y,z]$ {#appendix} The classification of $\Omega$ when $\deg(x)=\deg(y)=\deg(z)=1$ is well-known, see [@BM; @DH; @DML; @KM; @LX] and the first table below. 
Irreducible $\Omega$ Reducible $\Omega$ ------------------------------------------------------------------------------------------------------ -- -- ------------------------------- $x^3+y^2z$   , $x^3+x^2z+y^2z$   $x^3, x^2y, xyz$ $x^3+y^3+z^3+3\lambda xyz$ $(\lambda^3 \ne -1 ,\raisebox{.5pt}{\textcircled{\raisebox{-.5pt} {i}}})$ $xy(x+y), xyz+x^3, xy^2+x^2z$ : [\[tab:111\]]{#tab:111 label="tab:111"} (a, b, c)=(1, 1, 1). Irreducible $\Omega$ Reducible $\Omega$ ----------------------------------------------------------------------------------------------------------------------------------------------------- -- -- ---------------------------------- $z^2+x^3y$   , $z^2+ x^2y^2+ x^3y$   $x^4, x^3y, x^2y^2, x^2y^2+x^3y$ $z^2+xy^3+\lambda x^2y^2+x^3y$ $z^2+x^2y^2, z^2, z^2+x^4$, $(\lambda \ne \pm 2,\raisebox{.5pt}{\textcircled{\raisebox{-.5pt} {i}}})$,  $(\lambda=\pm 2, \raisebox{.5pt}{\textcircled{\raisebox{-.5pt} {q}}} )$ $xy^3+\lambda x^2y^2+x^3y$ $x^2z+y^4$   , $x^2z+xy^3+y^4$    $x^2z, x^2z+xy^3$ $xyz+x^4+y^4$   $xyz, xyz+x^4$ : [\[tab:112\]]{#tab:112 label="tab:112"} (a, b, c)=(1, 1, 2). 
$k\in \mathbb N$ Irreducible $\Omega$ Reducible $\Omega$ ------------------ ---------------------------- ----------------------------- $c\ne ka$ $xyz, x^2z$ $x^2z+xy^{k+1}+y^{k+2}$    $x^2z, x^2z+xy^{k+1}$ $c=ka$ $x^2z+y^{k+2}$    $xyz, xyz+x^{k+2}$ $k\ne 2$ $xyz+x^{k+2}+y^{k+2}$    $h(x,y)$ of degree $(k+2)a$ : [\[tab:a=b\<c\]]{#tab:a=b<c label="tab:a=b<c"} $(a=b<c)\neq (1, 1, 2)$ Reducible $\Omega$ ------------------------------------------------------------------------------------------------------------------------------------------ $xyz, \,xy^2,\,xz^2+x^{1+\frac{2b}{a}},\, xyz+x^{1+\frac{2b}{a}}$,  $x^{1+\frac{b}{a}}y+xz^2$,  $x^{1+\frac{2b}{a}}, x^{1+\frac{b}{a}}y$ : [\[tab:a\<b=c\]]{#tab:a<b=c label="tab:a<b=c"} $(a<b=c)$ Irreducible $\Omega$ Reducible $\Omega$ --------------------------------------------------------------------------------------------------------------------------------------------------- -- -- --------------------------------- $z^2+y^3$    $z^2$, $y^3, x^6$, $y^3+x^2y^2$ $z^2+y^3+x^2y^2$    $x^2y^2$, $x^2y^2+x^4y$ $z^2+y^3+\lambda x^2y^2+x^4y$ $y^3+\lambda x^2y^2+x^4y$ $(\lambda \ne \pm 2,\raisebox{.5pt}{\textcircled{\raisebox{-.5pt} {i}}})$, $(\lambda=\pm 2, \raisebox{.5pt}{\textcircled{\raisebox{-.5pt} {q}}})$ $x^4y$ $z^2+x^4y$   $z^2+x^6$ $z^2+x^2y^2+x^4y$    $z^2+x^2y^2$ $x^3z+y^3+ x^2y^2$    , $x^3z+y^3$   $x^3z$, $x^3z+x^2y^2$ $xyz+x^6+y^3$   $xyz, xyz+x^6$, $xyz+y^3$ : [\[tab:123\]]{#tab:123 label="tab:123"} (a, b, c)=(1, 2, 3). 
$m,n\in \mathbb N\cup \{-1\}$ Irreducible $\Omega$ Reducible $\Omega$ ------------------------------- ----------------------------------------------------------- --------------------------------------------- $c\ne ma+nb$ $xyz$,   $x^{1+\frac{b}{a}}z$ $c=a+b$ $z^2+x^{2+\frac{2b}{a}}$   $z^2, z^2+x^2y^2$ $a\nmid b$ $z^2+x^2y^2+x^{2+\frac{2b}{a}}$   $xyz, xyz+x^{2+\frac{2b}{a}}$ $\quad\quad\quad$ $\quad\quad\quad$ $h(x,y)$ of degree $(2a+2b)$ $z^2, z^2+x^{2+\frac{2b}{a}}$ $c=a+b$ $z^2+x^2y^2+x^{2+\frac{b}{a}}y$   $z^2+x^2y^2, x^{1+\frac{b}{a}}z$ $b\ne 2a,\, a\mid b$ $z^2+x^{2+\frac{b}{a}}y$   $x^{1+\frac{b}{a}}z+x^2y^2, xyz$ $xyz+x^{2+\frac{2b}{a}}$ $\quad\quad\quad$ $\quad\quad\quad$ $h(x,y)$ of degree $(2a+2b)$ $c=ma+nb$ $xyz+x^{m+1+k}$ $c\ne a+b$,  $a\nmid b$ $xyz+x^{m+1+k}+y^{n+1+l}$   $xyz+y^{n+1+l}$ $\frac{m+1}{k}=\frac{l}{n+1}$ $x^{\frac{b}{\gcd(a,b)}}+y^{\frac{a}{\gcd(a,b)}}$   $xyz$ $\quad\quad\quad$ $\quad\quad\quad$ $h(x,y)$ of degree $(m+1)a+(n+1)b$ $c=ma+nb$ $xyz+x^{m+1+k}+y^{n+1+l}$    $x^{1+\frac{b}{a}}z, xyz+x^{m+1+k}$ $c\ne a+b$,  $a\mid b$ $x^{1+\frac{b}{a}}z+y^{1+n+l}$    $xyz, xyz+y^{n+1+l}$ $\frac{m+1}{k}=\frac{l}{n+1}$ $x^{1+\frac{b}{a}}z+ x^{\frac{b}{a}}y^{n+l}+y^{1+n+l}$    $x^{1+\frac{b}{a}}z+x^{\frac{b}{a}}y^{n+l}$ $\quad\quad\quad$ $\quad\quad\quad$ $h(x,y)$ of degree $(m+1)a+(n+1)b$ : [\[tab:abc2\]]{#tab:abc2 label="tab:abc2"} $(a<b<c)\neq (1, 2, 3)$ $\Omega$ $rgt$ $GK$ $\Omega$ $rgt$ $GK$ ----------------------------------- ------- ------------------------------------------------------------- ------------------------------------------------------- ------------------------------------------ ------------ $z^2+x^3y$ $0$ $1$ $xz^2+x^{1+\frac{2b}{a}}$ $\substack{-2(a\mid b)\\ -1 (a\nmid b)}$ $2$ $z^2+ x^2y^2+ x^3y$ $0$ $1$ $xyz+x^{1+\frac{2b}{a}}$ $-1$ $1$ $z^2+xy^3+\lambda x^2y^2+x^3y$ $0$ $\substack{0\, (\lambda \ne \pm 2) \\ 1\, (\lambda=\pm 2)}$ $x^{1+\frac{b}{a}}y+xz^2$ $-1$ $1$ $x^2z+y^4$ $0$ $1$ $x^{1+\frac{2b}{a}}$ $\substack{-5(a\mid 
b)\\ -3(a\nmid b)}$ $2$ $x^2z+xy^3+y^4$ $0$ $1$ $x^{1+\frac{b}{a}}y\,\, (\ast)$ $-3$ $2$ $xyz+x^4+y^4$ $0$ $1$ $x^2z+xy^{k+1}+y^{k+2}$ $0$ $1$ $x^4$ $-5$ $2$ $x^2z+y^{k+2}$ $0$ $1$ $x^3y$ $-4$ $2$ $xyz+x^{k+2}+y^{k+2}$ $0$ $1$ $x^2y^2$ $-4$ $2$ $xyz$ $-2$ $1$ $x^2y^2+x^3y$ $-3$ $2$ $x^2z$ $-2$ $2$ $z^2+x^2y^2$ $-1$ $1$ $x^2z$ $-2$ $2$ $z^2$ $-3$ $2$ $x^2z+xy^{k+1}$ $-1$ $1$ $z^2+x^4$ $-1$ $1$ $xyz$ $-2$ $1$ $xy^3+\lambda x^2y^2+x^3y$ $-3$ $1$ $xyz+x^{k+2}\,\, (\ast)$ $-1$ $1$ $x^2z$ $-2$ $2$ $z^2+x^{2+\frac{2b}{a}}$ $0 (a\nmid b)$ $1$ $x^2z+xy^3$ $-1$ $1$ $z^2+x^2y^2+x^{2+\frac{2b}{a}}$ $0$ $1$ $xyz$ $-2$ $1$ $z^2+x^2y^2+x^{2+\frac{b}{a}}y$ $0$ $1$ $xyz+x^4\,\, (\ast)$ $-1$ $1$ $z^2+x^{2+\frac{b}{a}}y$ $0$ $1$ $z^2+y^3$ $0$ $1$ $xyz+x^{m+1+k}+y^{n+1+l}\, (a\nmid b)$ $0$ $1$ $z^2+y^3+x^2y^2$ $0$ $1$ $xyz+x^{m+1+k}+y^{n+1+l}\, (a\mid b)$ $0$ $1$ $z^2+y^3+\lambda x^2y^2+x^4y$ $0$ $\substack{0 (\lambda \ne \pm 2)\\ 1 (\lambda=\pm 2)}$ $x^{1+\frac{b}{a}}z+y^{1+n+l}$ $0$ $1$ $z^2+x^4y$ $0$ $1$ $x^{1+\frac{b}{a}}z+x^{\frac{b}{a}}y^{n+l}+y^{1+n+l}$ $0$ $1$ $z^2+x^2y^2+x^4y$ $0$ $1$ $xyz$ $-2$ $1$ $x^3z+y^3+x^2y^2$ $0$ $1$ $x^{1+\frac{b}{a}}z$ $-2$ $2$ $x^3z+y^3$ $0$ $1$ $z^2$ $-1(a\nmid b)$ $2$ $xyz+x^6+y^3$ $0$ $1$ $z^2+x^2y^2$ $-1$ $1$ $z^2$ $-2$ $2$ $xyz$ $-2$ $1$ $y^3$ $-3$ $2$ $xyz+x^{2+\frac{2b}{a}}$ $-1$ $1$ $x^6$ $-4$ $2$ $z^2$ $-2 (a\mid b)$ $2$ $y^3+x^2y^2$ $-2$ $2$ $z^2+x^{2+\frac{2b}{a}}$ $-1(a\mid b)$ $1$ $x^2y^2$ $-3$ $2$ $z^2+x^2y^2$ $-1$ $1$ $x^2y^2+x^4y$ $-2$ $2$ $x^{1+\frac{b}{a}}z$ $-2$ $2$ $y^3+\lambda x^2y^2+x^4y$ $-2$ $\substack{1\,(\lambda\ne \pm 2)\\ 2\,(\lambda =\pm 2)}$ $x^{1+\frac{b}{a}}z+x^2y^2$ $-1$ $2$ $x^4y$ $-3$ $2$ $xyz$ $-2$ $1$ $z^2+x^6$ $-1$ $1$ $xyz+x^{2+\frac{2b}{a}}$ $-1$ $1$ $z^2+x^2y^2$ $-1$ $1$ $xyz+x^{m+1+k}$ $-1$ $1$ $x^3z$ $-2$ $2$ $xyz+y^{n+1+l}$ $-1$ $1$ $x^3z+x^2y^2$ $-1$ $2$ $xyz$ $-2$ $1$ $xyz$ $-2$ $1$ $x^{1+\frac{b}{a}}z$ $-2$ $2$ $xyz+x^6$ $-1$ $1$ $xyz+x^{m+1+k}$ $-1$ $1$ $xyz+y^3\,\, (\ast)$ $-1$ $1$ $xyz$ $-2$ $1$ 
$xyz$ $-2$ $1$ $xyz+y^{n+1+l}$ $-1$ $1$ $xy^2$ $-2$ $2$ $x^{1+\frac{b}{a}}z+x^{\frac{b}{a}}y^{n+l}$ $-1$ $2$ $x^{b/\gcd(a,b)}+y^{a/\gcd(a,b)}$ $0$ $1$ $\text{Reducible}\, h(x,y)$ $\leq-1$ $\{1, 2\}$ : [\[tab:rgtanddim\]]{#tab:rgtanddim label="tab:rgtanddim"} $rgt:=rgt(A)$ and $GK:=GKdim(A_{ sing})$ ## Acknowledgments {#acknowledgments .unnumbered} Wang was partially supported by Simons collaboration grant \#688403 and Air Force Office of Scientific Research grant FA9550-22-1-0272. Zhang was partially supported by the US National Science Foundation (No. DMS-2001015 and DMS-2302087). Part of this research work was done during the third author's visit to the Department of Mathematics at Rice University in November 2022, and the first and the third authors' visit to the Department of Mathematics at the University of Washington in January 2023. They wish to thank Rice University and University of Washington for their hospitality. 10 V.V. Bavula, Generalized Weyl algebras and their representations, *St. Petersburg Math. J.* **4(1)**(1993), 71--92. V.V. Bavula, The generalized Weyl Poisson algebras and their Poisson simplicity criterion, *Lett. Math. Phys.* **110** (2020), no. 1, 105--119. V.V. Bavula, The PBW Theorem and simplicity criteria for the Poisson enveloping algebra and the algebra of Poisson differential operators, preprint, arxiv.2107.00321. J. Bell, S. Launois, O. Sànchez, and R. Moosa, Poisson algebras via model theory and differential-algebraic geometry, *J. Eur. Math. Soc. (JEMS)* **19** (2017), no. 7, 2019--2049. A.I. Bondal, Non-commutative deformations and Poisson brackets on projective spaces, Max-Planck-Institute preprint, 1993, no.93--67. A. Bonifant and J. Milnor, On real and complex cubic curves, *Enseign. Math.* **63** (2017), no. 1-2, 21--61. K.A. Brown and M. Yakimov, Azumaya loci and discriminant ideals of PI algebras, *Adv. Math.*, **340** (2018), 1219--1255. K.A. Brown and M. Yakimov, Poisson trace orders, preprint, arXiv:2211.11660. J.K. 
Deveney, Ruled function fields, *Proc. Amer. Math. Soc.* **86** (1982), no. 2, 213--215 V. Dolgushev, The Van den Bergh duality and the modular symmetry of a Poisson variety, *Selecta Math.* (N.S.) **14** (2009), no. 2, 199--228. J. Donin and L. Makar-Limanov, Quantization of quadratic Poisson brackets on a polynomial algebra of three variables, *J. Pure Appl. Algebra* **129** (1998), no. 3, 247--261. J.-P. Dufour and A. Haraki, Rotationnels et structures de Poisson quadratiques, *C. R. Acad. Sci. Paris Sér. I Math.* **312** (1991), no. 1, 137--140. B.L. Feigin and A.V. Odesskii, Sklyanin algebras associated with an elliptic curve, Preprint deposited with Institute of Theoretical Physics of the Academy of Sciences of the Ukrainian SSR (1989), 33 pages. B.L. Feigin and A.V. Odesskii, Vector bundles on an elliptic curve and Sklyanin algebras, *Topics in quantum groups and finite-type invariants*, Amer. Math. Soc. Transl. Ser. 2, vol. 185, Amer. Math. Soc., Providence, RI, 1998, pp. 65--84. H. R. Frium, The group law on elliptic curves on Hesse form, Finite fields with applications to coding theory, cryptography and related areas, (Oaxaca, 2001), Springer, Berlin, (2002), 123--151. J. Gaddis, P. Veerapen, and X.-T. Wang, Reflection groups and rigidity of quadratic Poisson algebras, *Algebr. Represent. Theory.* (2021), 1--30. J. Gaddis and X.-T. Wang, The Zariski cancellation problem for Poisson algebras, *J. Lond. Math. Soc. (2)* **101** (2020), no. 3, 1250--1279. J. Gaddis, X.-T. Wang, and D. Yee, Cancellation and skew cancellation for Poisson algebras, *Math. Z.* 301 (2022), 3503---3523. K.R. Goodearl, A Dixmier-Moeglin equivalence for Poisson algebras with torus actions, In Algebra and its applications, volume **419** of *Contemp. Math.*, pages 131--154, Amer. Math. Soc., Providence, RI, 2006. K.R. Goodearl, Semiclassical limits of quantized coordinate rings, In *Advances in ring theory*, Trends Math., pages 165--204, Birkhäuser/Springer Basel AG, Basel, 2010. 
H.-D. Huang, X. Tang, X.-T. Wang, and J.J. Zhang, Poisson valuations, preprint, arXiv:2309.05511. H.-D. Huang, X. Tang, X.-T. Wang, and J.J. Zhang, Valuation method for Nambu Poisson algebras, in preparation. K.R. Goodearl and S. Launois, The Dixmier-Moeglin equivalence and a Gel'fand-Kirillov problem for Poisson polynomial algebras, *Bull. Soc. Math. France*, **139(1)** (2011), 1--39. K.R. Goodearl and E.S. Letzter, Semiclassical limits of quantum affine spaces, *Proc. Edinb. Math. Soc. (2)*, **52** (2009), no. 2, 387--407. I. Kogan and M. Moreno Maza, Computation of canonical forms for ternary cubics, *Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation*, 151--160, ACM, New York, 2002. S. Launois and O.L. Sánchez, On the Dixmier--Moeglin equivalence for Poisson--Hopf algebras, *Adv. Math.*, **346** (2019), 48--69. C. Laurent-Gengoux, A. Pichereau, and P. Vanhaecke, *Poisson structures*, Grundlehren der Mathematischen Wissenschaften, 347. Springer, Heidelberg, 2013. J. Levitt and M. Yakimov, Quantized Weyl algebras at roots of unity, *Israel J. Math.*, (2018), 681--719. Z.-J. Liu and P. Xu, On quadratic Poisson structures, *Lett. Math. Phys.* **26**(1) (1992) 33--42. J. Luo, Duality theory and BV algebra structures over Poisson (co)homology, Ph.D. thesis, 2016. J. Luo, S.-Q. Wang, and Q.-S. Wu, Twisted Poincaré duality between Poisson homology and Poisson cohomology, *J. Algebra* **442** (2015), 484--505. J. Luo, X.-T. Wang, and Q.-S. Wu, Poisson Dixmier-Moeglin equivalence from a topological point of view, *Israel J. Math.*, (2020), 1--37. J.-F. Lü, X.-T. Wang, and G.-B. Zhuang, Universal enveloping algebras of Poisson Hopf algebras, *J. Algebra*, **426** (2015), 92--136. J.-F. Lü, X.-T. Wang, and G.-B. Zhuang, Homological unimodularity and Calabi-Yau condition for Poisson algebras, *Lett. Math. Phys.* **107** (2017), no. 9, 1715--1740. J.-F. Lü, X.-T. Wang, and G.-B. 
Zhuang, Universal enveloping algebras of Poisson Ore extensions, *Proc. Amer. Math. Soc.* **143** (2015), no. 11, 4633--4645. C.-Y. Ma, Invariants of unimodular quadratic polynomial Poisson algebras of dimension 3, preprint, arXiv:2302.13588. L. Makar-Limanov, U. Turusbekova, and U. Umirbaev, Automorphisms of elliptic Poisson algebras, Algebras, representations and applications, 169--177, *Contemp. Math.* 483, Amer. Math. Soc., Providence, RI, 2009. B. Nguyen, K. Trampel, and M. Yakimov, Noncommutative discriminants via Poisson primes, *Adv. Math.* **322** (2017), 269--307. S.R.T. Pelap, Poisson (co)homology of polynomial Poisson algebras in dimension four: Sklyanin's case, *J. Algebra* **322** (2009), no. 4, 1151--1169. S.R.T. Pelap, Homological properties of certain generalized Jacobian Poisson structures in dimension 3, *J. Geom. Phys.* **61** (2011), no. 12, 2352--2368. A. Pichereau, Poisson (co)homology and isolated singularities, *J. Algebra* **299** (2006), no. 2, 747--777. A. Polishchuk, Poisson structures and birational morphisms associated with bundles on elliptic curves, *Internat. Math. Res. Notices* (1998), no. 13, 683--703. R. Przybysz, On one class of exact Poisson structures, *J. Math. Phys.* 42 (2001) 1913--1920. B. Pym, Quantum deformations of projective three-space, *Adv. Math.* **281** (2015), 1216--1241. M. Reyes, D. Rogalski, and J.J. Zhang, Skew Calabi-Yau algebras and homological identities, *Adv. Math.* **264** (2014), 308--354. M. Reyes, D. Rogalski, and J.J. Zhang, Skew Calabi-Yau triangulated categories and Frobenius Ext-algebras, *Trans. Amer. Math. Soc.* **369** (2017), no. 1, 309--340. D.R. Stephenson, Artin-Schelter regular algebras of global dimension three, *J. Algebra* **183** (1996), no. 1, 55--73. D.R. Stephenson, Algebras associated to elliptic curves, *Trans. Amer. Math. Soc.* **349** (1997), no. 6, 2317--2340. X. Tang, X.-T. Wang, and J.J. 
Zhang, Twists of graded Poisson algebras and related properties, preprint, arXiv:2206.05639v1. M. Van den Bergh, Noncommutative homology of some three-dimensional quantum spaces, *Proceedings of Conference on Algebraic Geometry and Ring Theory in Honor of Michael Artin, Part III*, Antwerp, 1992, vol. 8, 1994, pp. 213--230. S.-Q. Wang, Modular derivations for extensions of Poisson algebras, *Front. Math. China* **12** (2017), no. 1, 209--218. C. Walton, X.-T. Wang, and M. Yakimov, Poisson geometry of PI three-dimensional Sklyanin algebras, *Proc. Lond. Math. Soc.* (3) **118** (2019), no. 6, 1471--1500. C. Walton, X.-T. Wang, and M. Yakimov, Poisson geometry and representations of PI 4-dimensional Sklyanin algebras, *Selecta Math.* (N.S.) **27** (2021), no. 5, Paper No. 99, 60 pp. C.A. Weibel, *An Introduction to Homological Algebra*, Cambridge University Press, 1994. J.J. Zhang, Twisted graded algebras and equivalences of graded categories, *Proc. London Math. Soc.* (3) **72** (1996), no. 2, 281--311.
{ "id": "2309.00714", "title": "Weighted Poisson polynomial rings", "authors": "Hongdi Huang, Xin Tang, Xingting Wang, and James J. Zhang", "categories": "math.RA", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- abstract: | We discuss the asymptotics of the nonparametric maximum likelihood estimator (NPMLE) in the normal mixture model. We then prove the convergence rate of the NPMLE decision in the empirical Bayes problem with normal observations. We point to (and use) the connection between the NPMLE decision and Stein's unbiased risk estimator (SURE). Next, we prove that the same solution is optimal in the compound decision problem where the unobserved parameters are not assumed to be random. Similar results are usually claimed using an oracle-based argument. However, we contend that the standard oracle argument is not valid. It was only partially proved that it can be fixed, and the existing proofs of these partial results are tedious. Our approach, on the other hand, is straightforward and short. address:  is Professor, Department of Statistics, University of Michigan, Ann Arbor, Michigan USA. author: -   bibliography: - CDminimax.bib title: "No need for an oracle: the nonparametric maximum likelihood decision in the compound decision problem is minimax" --- # Introduction Suppose, for some *unobserved* $\vartheta_1,\dots,\vartheta_n$, the observations $Y_1,\dots,Y_n$ are independent given $\vartheta_1,\dots,\vartheta_n$ and $Y_i\mid \vartheta_{[n]} \dist N(\vartheta_i,\sig^2)$. The goal is to estimate $\vartheta_{[n]}$ under the $L_2$ norm, where $[n]\eqdef 1,\dots,n$ and for a general sequence, $a_{[n]}$ is a short notation for $a_1,\dots,a_n$. A few models were considered in the literature to model $\vartheta_{[n]}$. In the classical empirical Bayes (EB) problem, cf. [@Robbins56; @Zhang03; @Efron19], $\vartheta_1,\dots,\vartheta_n$ are i.i.d. with distribution $G$, for some unknown $G$. More generally, they may be a time sequence following some state-space model or a hidden Markov model. The statistician's aim is to minimize $R(G,\hat \vartheta_{[n]})\eqdef \E \scl(\vartheta_{[n]},\hat\vartheta_{[n]})$ without the knowledge of $G$.
Note that the expectation is taken assuming that $\vartheta_{[n]}$ are themselves random. In the compound decision (CD) problem, $\vartheta_{[n]}$ are just fixed unknown parameters. The target now is to minimize the same risk function, but the expectation is taken only over $\hat\vartheta_{[n]}$ since $\vartheta_{[n]}$ are fixed. In this paper, we restrict our attention only to the EB/CD problem, i.e., $\vartheta_1,\dots,\vartheta_n$ are either an i.i.d. sample or unknown but fixed. To simplify the discussion, we assume that $|\vartheta_i|<c$. [@Efron19] titled his authoritative *Statistical Science* review by *"Bayes, Oracle Bayes and Empirical Bayes."* Oracle Bayes is what we refer to as the CD problem. Efron's title brings forward an oracle that essentially makes the CD setup as if it were an EB problem with respect to the (unknown to the statistician) empirical distribution of the $\vartheta$s. Our main contribution is to argue that this oracle is misleading and unnecessary. We claim that the existence of an asymptotic minimax solution can be proved by the standard method of considering the statistical problem as a zero sum game (with asymmetric information) between the statistician and Nature. The oracle and the game are two complementary methods to prove that a result is minimax. In both methods, we compare the problem at hand to another problem whose solution is known and serves as a bound of what is achievable. With the oracle, we compare the statistician to a more knowledgeable statistician, who faces an easier problem. In the game theoretic approach, we make Nature more atrocious; thus, the statistician faces a more difficult problem. We advocate for the nonparametric maximum likelihood estimator (NPMLE) as a good strategy for dealing with both models. It is based on the assumption that the observations come from a mixture of normal distributions *as if* $\vartheta_{[n]}$ are random (as they are in the EB model but not in the CD one).
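To make the NPMLE strategy concrete, here is a minimal numerical sketch (ours, for illustration only; the support grid, the data, and the number of EM iterations are arbitrary choices). It computes the mixing weights of $\hat G$ over a fixed grid by the standard EM fixed-point iteration and then applies the resulting posterior-mean rule $\del_{\hat G}$.

```python
import math

def npmle_em(y, grid, sigma=1.0, iters=500):
    """EM fixed-point iteration for the NPMLE mixing weights on a fixed grid.

    Maximizes sum_i log f_G(y_i) over distributions G supported on `grid`,
    where f_G(y) = sum_j w_j * phi((y - grid[j]) / sigma) / sigma.
    """
    n, m = len(y), len(grid)
    w = [1.0 / m] * m  # start from the uniform distribution on the grid
    phi = [[math.exp(-0.5 * ((yi - t) / sigma) ** 2) for t in grid] for yi in y]
    for _ in range(iters):
        new = [0.0] * m
        for i in range(n):
            fi = sum(w[j] * phi[i][j] for j in range(m))
            for j in range(m):
                new[j] += w[j] * phi[i][j] / (n * fi)
        w = new
    return w

def bayes_rule(y, grid, w, sigma=1.0):
    """Posterior mean E(theta | Y = y) under the discrete prior G = (grid, w)."""
    num = den = 0.0
    for t, wt in zip(grid, w):
        k = wt * math.exp(-0.5 * ((y - t) / sigma) ** 2)
        num += t * k
        den += k
    return num / den

# Made-up data: two loose clusters, around 0 and around 4.
obs = [-0.3, 0.1, 0.4, 3.6, 4.0, 4.3]
grid = [i / 2 for i in range(-4, 13)]  # support grid on [-2, 6]
weights = npmle_em(obs, grid)
decisions = [bayes_rule(yi, grid, weights) for yi in obs]
```

The fixed grid is a common practical device; the exact NPMLE has finitely many atoms, but they need not lie on a prespecified grid.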
This approach is based on the connection between the NPMLE and Stein's unbiased risk estimator (SURE), as presented below. We then bring a new proof that the same solution is indeed valid in the CD context, where seemingly the NPMLE doesn't fit the model. Thus the contribution of this communication is threefold: (1) we prove a concentration inequality for the NPMLE procedure in the EB and CD problems, (2) we relate the SURE to the NPMLE, and (3) we prove that the NPMLE is minimax in both problems. # Maximum likelihood estimator for the empirical Bayes problem Consider the EB problem described in the introduction. If $G$ were known, the optimal decision would be the Bayes estimator $\hat\vartheta_i=\del_G(Y_i)$, where $\del_G(y)\eqdef\E(\vartheta\mid Y=y)$ under the assumption that $\vartheta\dist G$ and $Y\mid \vartheta\dist \Phi\bigl((\cdot-\vartheta)/\sig\bigr)$, where $\Phi$ is the standard normal cdf. It is well known that $\del_G$ is given by the Tweedie formula, [@Robbins56; @Efron11; @Efron19]: $$\label{Tweedie} \del_G(y)=y+\sig^2\frac{f_G'(y)}{f_G(y)},$$ where $f_G(y)=\frac1\sig\int\varphi\bigl(\frac{y-\vartheta}{\sig}\bigr)\,dG(\vartheta)$ is the marginal density of $Y$. Here $\varphi$ is the standard normal density. In the EB context, where $G$ is not known, [@BrownGreenshtein09] proved that we can replace $f_G$ by a kernel density estimate of the marginal distribution of $Y$; [@GreenshteinRitov22] advocated estimating $f_G$ by $f_{\hat G}$, where $\hat G$ is the nonparametric maximum likelihood estimator (NPMLE) of $G$. In the following, in particular Theorem [Theorem 1](#EBSURE){reference-type="ref" reference="EBSURE"}, we make their case precise. We start with: **Proposition 1**. *Let $\ell(G;y)=\log f_G(y)+y^2/2\sig^2$. Then $\del_G(y)=\sig^2\ell'(G;y)$. The functions $\ell(G;y)$ and $\del_G(y)-y$ have bounded derivatives (with respect to $y$) of any order.* *Proof.* Clearly $$\ell(G;y)=\log \int \exp\Bigl(\frac{y\vartheta}{\sig^2}-\frac{\vartheta^2}{2\sig^2}\Bigr)\,dG(\vartheta)-\log\bigl(\sqrt{2\pi}\,\sig\bigr).$$ I.e., up to an additive constant, it is the log of the moment generating function of $d\ti G_{y_0}$ evaluated at $y-y_0$, where $$d\ti G_{y_0}\eqdef A \exp({y_0\vartheta/\sig^2-\vartheta^2/2\sig^2}) dG(\vartheta)$$ for some normalizing constant $A=A(G,y_0)$.
It follows that the $k$th derivative of $\ell(G;y)$ with respect to $y$ at $y_0$ is the $k$th cumulant of $d\ti G_{y_0}$---a distribution with compact support. The $k$th cumulant of any distribution supported on $(-c,c)$ is bounded by $(2ck)^k$ (see [@DubkovMalakhov76]). Since, by the first claim of the proposition, $\del_G$ differs from $\sig^2\ell'(G;\cdot)$ by at most a linear term, the assertions about $\del_G$ follow. ◻ In particular, $\del_G$ is strictly monotone (unless $G$ is a single point mass and $\del_G$ is constant), which follows since the model has the monotone likelihood ratio property. If $Y\mid \vartheta$ is a mixture of an exponential family with mixing distribution whose support is everywhere dense, then any mean 0 function of $Y$ is in the tangent space. It follows (cf. [@bkrw; @GreenshteinRitov22]) that $\int h(y)\,dF_{\hat G}(y)=0$ for any $h$ in the tangent set of $\{F_G\}$ at $\hat G$. The Stein unbiased risk estimator (SURE) follows from the following expansion. Suppose $Y\dist N(\vartheta,\sig^2)$; then $$\E\bigl(\del(Y)-\vartheta\bigr)^2=\E\Bigl(\bigl(\del(Y)-Y\bigr)^2+2\sig^2\bigl(\del'(Y)-1\bigr)\Bigr)+\sig^2,$$ where the second term on the RHS follows from an integration by parts: $\E\bigl((\del(Y)-Y)(Y-\vartheta)\bigr)=\sig^2\E\bigl(\del'(Y)-1\bigr)$. Therefore, $\frac1n\summ i1n \Bigl(\bigl(\del(Y_i)-Y_i\bigr)^2+2\sig^2\bigl(\del'(Y_i)-1\bigr)\Bigr)+\sig^2$ is an unbiased estimator of $R(\vartheta_{[n]},\hat\vartheta_{[n]})$, where $\hat\vartheta_i=\del(Y_i)$. The Stein unbiased risk estimator is given by ${\rm SURE}\xspace(\del,\bbf_n)$, where $${\rm SURE}\xspace(\del,F)\eqdef n\int\Bigl(\bigl(\del(y)-y\bigr)^2+2\sig^2\bigl(\del'(y)-1\bigr)+\sig^2\Bigr)\,dF(y).$$ In particular $R(G,\del)\equiv {\rm SURE}\xspace(\del,F_G)/n$. Specializing the definition to the Bayes estimator $\del_G$ yields ${\rm SURE}\xspace(\del_G;\bbf_n)$. Now the functions whose means are computed in ${\rm SURE}\xspace(\del_G;\bbf_n)$ all have uniformly bounded derivatives and a square integrable envelope. Thus, a uniform CLT applies: uniformly in $G$, ${\rm SURE}\xspace(\del_G,\bbf_n)$ estimates ${\rm SURE}\xspace(\del_G,F_{G_0})$ up to an $\OP(\sqrt{{\log n}/{n}})$ error. By [\[NPMLEmean\]](#NPMLEmean){reference-type="eqref" reference="NPMLEmean"}, empirical means coincide with means under $F_{\hat G}$ for functions in the tangent set. Finally, $\del_{\hat G}$ minimizes the risk under $\hat G$ and $\del_{ G_0}$ under $F_{G_0}$.
Thus, $$\begin{aligned} &\hspace{-1em}{\rm SURE}\xspace(\del_{ G_0},F_{G_0}) && \\ &\le {\rm SURE}\xspace(\del_{\hat G},F_{G_0}),&\text{by \eqref{byItSelf}} & \\ &={\rm SURE}\xspace(\del_{\hat G},\bbf_n)+\OP({\sqrt{\frac{\log n}{n}}}), &\text{by \eqref{sureCLT}}& \\ &={\rm SURE}\xspace(\del_{\hat G},F_{\hat G})+\OP({\sqrt{\frac{\log n}{n}}}), &\text{by \eqref{sureTangent}}& \\ &\le {\rm SURE}\xspace(\del_{G_0},F_{\hat G})+\OP({\sqrt{\frac{\log n}{n}}}), &\text{by \eqref{byItSelf}}& \\ &={\rm SURE}\xspace(\del_{G_0},\bbf_n)+\OP({\sqrt{\frac{\log n}{n}}}), &\text{by \eqref{NPMLEmean}}& \\ &\le {\rm SURE}\xspace(\del_{ G_0},F_{G_0}) +\OP({\sqrt{\frac{\log n}{n}}}),&\text{by CLT}.& \end{aligned}$$ Comparing the LHS, the RHS, and the second line of the display, we obtain that ${\rm SURE}\xspace(\del_{\hat G},F_{G_0})$ is between ${\rm SURE}\xspace(\del_{ G_0},F_{G_0})$ and ${\rm SURE}\xspace(\del_{ G_0},F_{G_0}) +\OP({\sqrt{{\log n}/{n}}})$. In another form: **Theorem 1**. Theorem [Theorem 1](#EBSURE){reference-type="ref" reference="EBSURE"} established the asymptotic optimality of using $\del_{\hat G}$. Without knowing $G$, the statistician can achieve almost as much as he could if $G$ were known. **Remark 1**. *The SURE is defined only with respect to the marginal distribution of $Y$, although it refers to a loss function for some $\vartheta_1,\dots,\vartheta_n$. It is an unbiased estimator of the risk conditioned on $\vartheta_{[n]}$. It obeys uniform LLN and CLT when the set of decision procedures is a VC class, as is the case of all Bayes procedures.* # The Maximum likelihood estimator and the compound decision problem The optimality for the EB problem was proved by considering an *oracle* who can do anything the statistician can and knows everything the statistician knows, but unlike the statistician, also knows[^1] $G$.
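As a quick sanity check of the SURE identity (our illustration, not part of the paper's argument), both sides can be computed for the linear rule $\del(y)=ay$, for which the risk is available in closed form: $\E(\del(Y)-\vartheta)^2=a^2\sig^2+(a-1)^2\vartheta^2$. The values of $a$, $\vartheta$, $\sig$, and the quadrature grid below are arbitrary choices.

```python
import math

def normal_pdf(y, theta, sigma):
    return math.exp(-0.5 * ((y - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def expect(f, theta, sigma, lo=-10.0, hi=13.0, steps=20000):
    """Midpoint-rule expectation of f(Y) under Y ~ N(theta, sigma^2)."""
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        y = lo + (k + 0.5) * h
        total += f(y) * normal_pdf(y, theta, sigma) * h
    return total

a, theta, sigma = 0.7, 1.5, 1.0   # made-up linear rule delta(y) = a*y, so delta'(y) = a
risk = expect(lambda y: (a * y - theta) ** 2, theta, sigma)
sure = expect(lambda y: (a * y - y) ** 2 + 2 * sigma ** 2 * (a - 1), theta, sigma) + sigma ** 2
```

Both `risk` and `sure` agree with the closed-form value $a^2\sig^2+(a-1)^2\vartheta^2$, illustrating that the SURE expression is an unbiased estimator of the risk for each fixed $\vartheta$.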
Thus, the oracle should use the Bayes procedure $\ti\vartheta_i=\del_G(Y_i)$, and the argument is that the estimator that the statistician is going to use, $\hat\vartheta_i=\del_{\hat G}(Y_i)$, has a similar performance, as stated in Theorem [Theorem 1](#EBSURE){reference-type="ref" reference="EBSURE"}. When we consider the CD problem, this oracle is not useful since he assumes that $\vartheta_{[n]}$ are sampled from $G$, and they are not. It was suggested in the literature, e.g., [@JiangZhang09; @Efron19] to consider a similar oracle who knows $\vartheta_{[n]}$ up to permutation, effectively their empirical distribution function $\bbg_n$, and use $\del_{\bbg_n}$. Then it is argued that $\del_{\bbg_n}$ is comparable to $\del_{\hat G}$ (or any other estimator based on the Tweedie formula [\[Tweedie\]](#Tweedie){reference-type="eqref" reference="Tweedie"}). We find this approach to be problematic. An oracle that knows $\bbg_n$ will not use $\del_{\hat G}$. Consider the extreme case of $n=2$, where the oracle knows the values $\vartheta_1<\vartheta_2$ and observes, wlog, $Y_1<Y_2$. In that case he would consider the pairing $(\vartheta_1,Y_1), (\vartheta_2,Y_2)$ as more likely than the other pairing $(\vartheta_1,Y_2), (\vartheta_2,Y_1)$. Certainly so if $Y_2-Y_1\gg \sig$. [@GreenshteinRitov09; @GreenshteinRitov19] argue that the minimax decision for this particular oracle is the permutation invariant estimator $$\del^*_{\bbg_n}(Y_{[n]})_i=\frac{\summ j1n \vartheta_j\sum_{\pi\in\Pi_n(i,j)}\prod_{l=1}^n\varphi\bigl((Y_l-\vartheta_{\pi(l)})/\sig\bigr)}{\sum_{\pi}\prod_{l=1}^n\varphi\bigl((Y_l-\vartheta_{\pi(l)})/\sig\bigr)},$$ where $\Pi_n(i,j)$ is the set of all permutations of $1,\dots,n$ such that $\pi(i)=j$ for every $\pi\in \Pi_n(i,j)$. This estimator uses all of $Y_{[n]}$ to estimate $\vartheta_i$. Thus, the oracle should be somehow prevented from using the optimal (for him) procedure. One approach was to force the oracle to use a 'simple' or 'separable' estimator such that $\hat\vartheta_i$ depends on the observations only through $Y_i$. Indeed, an oracle thus restricted should use $\del_{\bbg_n}$.
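The $n=2$ pairing comparison above can be checked directly (a sketch with made-up numbers): the likelihood ratio of the sorted pairing to the crossed one equals $\exp\bigl((\vartheta_2-\vartheta_1)(Y_2-Y_1)/\sig^2\bigr)$, which exceeds 1 whenever the two orderings agree.

```python
import math

def pair_lik(theta, y, sigma=1.0):
    """Joint density of independent Y_i ~ N(theta_i, sigma^2)."""
    return math.prod(
        math.exp(-0.5 * ((yi - ti) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        for ti, yi in zip(theta, y))

t1, t2 = 0.0, 2.0   # the oracle's known values, theta_1 < theta_2
y1, y2 = 0.3, 2.5   # observed, Y_1 < Y_2
sorted_lik = pair_lik((t1, t2), (y1, y2))    # pairing (theta_1, Y_1), (theta_2, Y_2)
crossed_lik = pair_lik((t2, t1), (y1, y2))   # pairing (theta_1, Y_2), (theta_2, Y_1)
ratio = sorted_lik / crossed_lik
```

Here the ratio is $e^{2\cdot 2.2}\approx 81$, so the sorted pairing dominates even though the gap is only a couple of standard deviations.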
However, we cannot compare him to the statistician: He, the oracle, knows better but is more restricted than the human statistician. The estimator $\del_{\hat G}(Y_i)$ the statistician is using is not simple and uses all the observations (through $\hat G$) for the estimate $\hat\vartheta_i$. ![Comparing the risks of the estimators. The permutation invariant estimator is based on 1000000 random permutations. The $Y\dist N(\vartheta,1)$, while $\vartheta$ is with probability 0.8 a $N(0,9)$ rv and with probability 0.2, $N(3,9)$. (a) The loss function of the permutation invariant estimators on 1000 samples vs. the loss of the simple estimators applied to the same samples. The sample size was $n=100$; (b) The efficiency of the permutation invariant estimator relative to that of the simple estimator as a function of sample size $n$. Each point is based on 1000 Monte Carlo trials.](RvR1000,0.8,9.jpg){#anOracle width="48%"} \(a\) ![Comparing the risks of the estimators. The permutation invariant estimator is based on 1000000 random permutations. The $Y\dist N(\vartheta,1)$, while $\vartheta$ is with probability 0.8 a $N(0,9)$ rv and with probability 0.2, $N(3,9)$. (a) The loss function of the permutation invariant estimators on 1000 samples vs. the loss of the simple estimators applied to the same samples. The sample size was $n=100$; (b) The efficiency of the permutation invariant estimator relative to that of the simple estimator as a function of sample size $n$. Each point is based on 1000 Monte Carlo trials.](RR1000,0.8,9.jpg "fig:"){#anOracle width="48%"} \(b\) The difference between $\del_{\bbg_n}$ and $\del^*_{\bbg_n}$ is considerable even with moderate to large samples. The optimal oracle procedure is uncomputable[^2], but we can approximate it by a biased sample of random permutations, where permutations are weighted independently of $Y_{[n]}$, according to their likelihood under $\vartheta_{[n]}$.
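For a tiny $n$, the permutation invariant rule can be computed exactly by enumerating all $n!$ pairings (a sketch with made-up values; the biased permutation sampling just described is the practical substitute for larger $n$).

```python
import math
from itertools import permutations

def perm_invariant_estimator(theta, y, sigma=1.0):
    """Exact E(theta_{pi(i)} | Y_{[n]}) under a uniform prior over pairings pi."""
    n = len(y)
    num = [0.0] * n
    den = 0.0
    for pi in permutations(range(n)):
        lik = math.exp(-0.5 * sum(((y[i] - theta[pi[i]]) / sigma) ** 2 for i in range(n)))
        den += lik
        for i in range(n):
            num[i] += theta[pi[i]] * lik
    return [v / den for v in num]

theta = (0.0, 1.0, 4.0)   # known to the oracle, but not their pairing with the Y's
y = (0.2, 1.5, 3.8)
est = perm_invariant_estimator(theta, y)
```

Note that the coordinates of the exact rule always sum to $\summ i1n \vartheta_i$, since every permutation assigns all the known values.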
In Figure [2](#anOracle){reference-type="ref" reference="anOracle"}, we compare the simple estimator, $\del_{\bbg_n}$, to the biased sample approximation; the approximate permutation invariant estimator is strictly better for sample sizes up to $n=100$. Similar simulations were presented in [@GreenshteinRitov19]. It is true that, asymptotically, the two estimators seem to be equivalent. It was proved in [@GreenshteinRitov09] that under specific conditions $R(\bbg_n,\del_{\hat G})=R(\bbg_n,\del^*)+\op({1})$. But the argument is tedious, and the conditions are strong, while it is unclear whether this oracle makes sense. **Example 1**. *Consider $\vartheta_{i}=\tau_i+\xi_i$ where $\xi_i\dist \scn(0,1)$ and $\tau_i/c_n$ is uniform on $\{1,\dots,k_n\}$ for some $c_n\to\en$ and $n/k_n\to m$. Thus, the oracle is faced with $k_n$ separated clusters, each of size $m$. All the random variables are independent. But, for any $m$, the permutation invariant estimator is strictly better than the simple estimator. We conclude that the argument based on the oracle fails to prove the efficiency of the EB estimator in this CD problem.* # Asymptotic efficiency of the NPMLE for the CD problem Our approach is different. There are other approaches to prove the optimality of a procedure. Whereas the oracle method compares the statistician to a better decision maker, the classical approach is to give Nature more freedom and consider a zero-sum game in which the minimizer, the statistician, chooses a procedure and the maximizer, Nature, chooses a parameter, with payoff given by the loss. We consider a game between the Statistician and Nature. If we let Nature choose *any* $\vartheta_{[n]}$, the statistician would have to use the global minimax solution, cf. [@Bickel81], which would not be relevant to our real statistical realm, in which there are some arbitrary $\vartheta_{[n]}$, not the worst possible. We want to *adapt* to these arbitrary points.
To consider adaptive estimators while keeping the minimax notion, we commonly restrict Nature to be in some neighborhood. Thus, we have the local minimax in the sense of Hájek and Le Cam, or more generally, the adaptive estimation in the semiparametric models. We follow these ideas. **The game:** For a given $G$, let $\scg$ be the set of all random (or not) probability distribution functions, such that if $\{\bbg_n\}\in\scg$ then $\E \bbg_n = G$ and $\bbg_n(t)=\frac1n \summ i1n \ind(\vartheta_i\le t)$ for some $\vartheta_1,\dots,\vartheta_n$. The statistician observes $Y_{[n]}$, which are independent given $\vartheta_{[n]}$ with $Y_i\dist \scn(\vartheta_i,\sig^2)$. He, the statistician, who neither knows $G$ nor $\bbg_n$, has to choose $\hat\vartheta_{[n]}=\Del(Y_{[n]})$, $\Del:\R^n\to\R^n$, with payoff given by [\[L2loss\]](#L2loss){reference-type="eqref" reference="L2loss"}. The value of the game is $R(\bbg_n,\Del)=\E\scl(\vartheta_{[n]},\hat\vartheta_{[n]})$. Note that $\bbg_n$ in the definition of $R(\cdot,\cdot)$ is a random element. Thus the expectation is also over the (possibly) random $\vartheta_{[n]}$. **Theorem 2**. *Asymptotically, $\del_{\hat G}$ is a minimax strategy, and $\bbg_n$, an empirical distribution function of a random sample from $G$, is a maxmin strategy for Nature. The conclusion of Theorem [Theorem 1](#EBSURE){reference-type="ref" reference="EBSURE"} is valid for the game.* The proof of the theorem is immediate. If $\bbg_n$ is the edf of a sample from $G$, then $\E\bbg_n=G$, and since for any given procedure of the statistician, the reward for Nature is linear in $\bbg_n$, the value for each $\bbg_n\in\scg$ is the same and is a function only of $G$. On the other hand, if $\bbg_n$ is such an edf, then we are back in the EB setup, and hence $\del_{\hat G}$ is approximately optimal. Thus $(\bbg_n,\del_{\hat G})$ is an asymptotic saddle point. We express the result from the point of view of the statistician: **Theorem 3**.
*If $\Del_{\hat G}=\bigl(\del_{\hat G}(Y_1),\dots,\del_{\hat G}(Y_n)\bigr)$, then for any $\bbg_n\in\scg$:* Note that if Nature "chooses" some fixed $\vartheta_{[n]}$, then we can simply take $\scg$ to be a point mass at $\bbg_n$. We considered a specific restriction on Nature, namely $\bbg_n\in\scg$. We could replace it with other restrictions. For example, we could consider $\{\bbg_n\}\in\ti\scg$ if $\bbg_n\weakly G$, since then $R(\bbg_n,\Del)\to R(G,\Del)$ by the definition of weak convergence. [^1]: This isn't the Oracle from Delphi, who was quite limited and obscure. [^2]: The oracle does not really care that the permutation invariant estimator cannot be computed in a reasonable time. However, we are naive human beings and do care about such trivialities. Thus humans would not exist long enough to complete the exact computation even for a moderate $n$.
arxiv_math
{ "id": "2309.11401", "title": "No need for an oracle: the nonparametric maximum likelihood decision in\n the compound decision problem is minimax", "authors": "Ya'acov Ritov", "categories": "math.ST stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This paper is inspired by Richards' work on large gaps between sums of two squares [@Richards]. It is demonstrated in [@Richards] that there exist arbitrarily large values of $\lambda$ and $\mu$, where $\mu \geq C \log \lambda$, such that intervals $[\lambda, \,\lambda + \mu ]$ do not contain any sums of two squares. Geometrically, these gaps between sums of two squares correspond to annuli in $\mathbb R^2$ that do not contain any integer lattice points. The primary objective of this paper is to investigate the sparse distribution of integer lattice points within annular regions in $\mathbb R^2$. Specifically, we establish the existence of annuli $\{x\in \mathbb R^2: \lambda \leq |x|^2 \leq \lambda + \kappa\}$ with arbitrarily large values of $\lambda$ and $\kappa$, where $\kappa \geq C \lambda^s$ with $0<s<\frac{1}{4}$, satisfying that any two integer lattice points within any one of these annuli must be sufficiently far apart. Furthermore, we extend our analysis to include the sparse distribution of lattice points in spherical shells in $\mathbb R^3$. address: - | Department of Mathematics & Statistics\ Florida International University\ Miami, Florida 33199, USA - | Department of Mathematics & Statistics\ Florida International University\ Miami, Florida 33199, USA author: - Yanqiu Guo - Michael Ilyin date: September 19, 2023 title: Sparse distribution of lattice points in annular regions --- # Introduction This article is inspired by Richards' paper on large gaps between integers that can be expressed as sums of two squares [@Richards]. It is proved in [@Richards] that large gaps between sums of two squares increase at least logarithmically as an asymptotic rate. 
More precisely, let $s_1, s_2, \cdots$ be the sequence, arranged in increasing order, of sums of two squares $x^2 + y^2$; then $$\begin{aligned} \label{loggap} \limsup_{n\rightarrow \infty} \frac{s_{n+1} - s_n}{\log s_n} \geq C >0.\end{aligned}$$ In [@Richards], $C=\frac{1}{4}$; but this constant has been improved to $C \approx 0.87$ in [@Dietmann] by Dietmann et al. The logarithmic-type estimate ([\[loggap\]](#loggap){reference-type="ref" reference="loggap"}), established by Richards, represents an improvement over a result by Erdös [@Erdos]. Note that formula ([\[loggap\]](#loggap){reference-type="ref" reference="loggap"}) can be interpreted as stating that there exist arbitrarily large values of $n$ for which $s_{n+1}-s_n\geq \alpha \log s_n$, where $\alpha>0$. In other words, the lower bound of large gaps between sums of two squares increases logarithmically. Geometrically, these gaps between sums of two squares correspond to annular regions in $\mathbb R^2$ that contain no integer lattice points. Therefore, Richards' result [@Richards] can be restated as follows: there exist arbitrarily large values of $\lambda$ and $\mu$, where $\mu \geq \alpha \log \lambda$, such that intervals $[\lambda,\,\lambda+\mu]$ do not contain sums of two squares, meaning that there are no integer lattice points in annuli $\{x\in \mathbb R^2: \lambda \leq |x|^2 \leq \lambda + \mu\}$. On the other hand, there has been research in the literature regarding the upper bound of large gaps between sums of squares. Bambah and Chowla [@Bambah] proved that if $\beta > 2\sqrt{2}$, then for all large integers $k$, there are integers $u$ and $v$ such that $k \leq u^2 + v^2 < k + \beta k^{\frac{1}{4}}$. This implies that gaps between sums of two squares have an upper bound of polynomial growth rate. In particular, $s_{n+1} - s_n \leq \beta s_n^{\frac{1}{4}}$ for sufficiently large $n$, where $\{s_n\}$ is the sequence of sums of two squares arranged in increasing order.
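The behavior of these gaps is easy to explore numerically. The following brute-force sketch (ours, for illustration; the cutoff $N$ is an arbitrary choice) lists all sums of two squares up to $N$ and locates the largest gap between consecutive ones.

```python
# Brute force: sums of two squares up to N and the largest gap between
# consecutive ones. The cutoff N is an arbitrary illustrative choice.
N = 10000
r = int(N ** 0.5) + 1
sums2 = sorted({x * x + y * y
                for x in range(r) for y in range(r)
                if x * x + y * y <= N})
max_gap, gap_start, gap_end = max(
    (b - a, a, b) for a, b in zip(sums2, sums2[1:]))
```

For instance, none of $91,\dots,96$ is a sum of two squares (each has a prime $\equiv 3 \pmod 4$ to an odd power), giving a gap of at least 7 already below 100, consistent with the slow logarithmic growth of the lower bound.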
Also, Shiu [@Shiu] provided a very short proof of the Bambah-Chowla theorem. By combining the results from [@Bambah] and [@Richards], it can be concluded that large gaps between sums of two squares have both lower and upper bounds. Specifically, there exist arbitrarily large $n$ for which: $$\begin{aligned} \label{bounds} \alpha \log s_n \leq s_{n+1} - s_n \leq \beta s_n^{\frac{1}{4}},\end{aligned}$$ for some positive constants $\alpha$ and $\beta$. Geometrically, gaps between sums of two squares correspond to annuli in $\mathbb R^2$ that contain no integer lattice points, and the size of the gap is related to the thickness of the annulus. Motivated by this geometric perspective on gaps between sums of two squares, our study focuses on the sparse distribution of integer lattice points within annular regions. Specifically, we aim to identify annuli in $\mathbb R^2$ where any two integer lattice points inside such an annulus are sufficiently distant from each other. We anticipate that an annulus containing sparsely distributed lattice points will have greater thickness than one with no lattice points. To formalize this, we demonstrate that, for any large distance $d$, there exist arbitrarily large $\lambda$ and $\kappa \geq C \lambda^s$, where $0 < s < \frac{1}{4}$, satisfying the condition that any two integer lattice points belonging to the annulus $\{x\in \mathbb R^2: \lambda \leq |x|^2 \leq \lambda + \kappa\}$ must be separated by a distance greater than $d$. It is essential to note that the polynomial growth rate of the interval's length, i.e., $\kappa \geq C \lambda^s$ with $0 < s < \frac{1}{4}$, is significantly larger than the logarithmic growth rate of the gaps between sums of squares. We also consider three dimensions. There are no large gaps between sums of three squares. Due to the representation of sums of three squares (as seen in, e.g., [@Gross]), the gaps between sums of three squares can only be 1, 2 or 3.
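The three-square gap bound can be checked by brute force (our illustration; the cutoff $N$ is an arbitrary choice). By Legendre's three-square theorem, the integers that are not sums of three squares are exactly those of the form $4^a(8b+7)$, and a gap of exactly 3 occurs, e.g., around the non-representable pair $111 = 8\cdot 13 + 7$ and $112 = 4^2\cdot 7$.

```python
# Brute force: sums of three squares up to N. Gaps between consecutive ones
# never exceed 3, and a gap of exactly 3 occurs (110 -> 113, since 111 and
# 112 are both of the excluded form 4^a(8b+7)). N is an illustrative choice.
N = 2000
r = int(N ** 0.5) + 1
sums3 = sorted({x * x + y * y + z * z
                for x in range(r) for y in range(r) for z in range(r)
                if x * x + y * y + z * z <= N})
gaps = [b - a for a, b in zip(sums3, sums3[1:])]
```

A gap of 4 would require three consecutive non-representable integers, which is impossible: a non-representable even number is divisible by 4, and a non-representable odd number is $\equiv 7 \pmod 8$, and no three consecutive integers satisfy these constraints simultaneously.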
Consequently, if $[m, \,m + \delta]$ does not contain sums of three squares, then $0<\delta<3$. In other words, if a spherical shell $\{x\in \mathbb R^3: m \leq |x|^2 \leq m+ \delta\}$ does not contain any integer lattice points, then $0<\delta < 3$. This suggests that a spherical shell that contains no integer lattice points has small thickness. However, in this paper, we demonstrate that a spherical shell containing sparsely distributed lattice points can have a more substantial thickness. In particular, we establish that, for any large distance $d$, there exist arbitrarily large $m$ and $h \geq C \sqrt{\log m}$, satisfying the condition that any two integer lattice points belonging to the spherical shell $\{x\in \mathbb R^3: m \leq |x|^2 \leq m + h\}$ must be separated by a distance greater than $d$. Large gaps between sums of squares and the sparse distribution of integer lattice points in annular regions have significant applications in the study of the long-term behavior of dissipative dynamical systems. One example of a dissipative dynamical system, modeled by nonlinear partial differential equations, is the reaction-diffusion equation:$$\begin{aligned} \label{reaction} \partial_t u - \Delta u + f(u) = 0,\end{aligned}$$ where $f(u)$ is a nonlinear term, such as $f(u) = u^3$. When studying this equation on a two-dimensional periodic physical domain, the solution $u$ can be represented as a Fourier series: $u(x,t) = \sum_{k\in \mathbb Z^2} \hat u (k, t)e^{i k \cdot x}$ for $x\in \mathbb T^2 = [0,2\pi]^2$, $t\geq 0$. In the Hilbert space $L^2(\mathbb T^2)$, the Laplace operator $-\Delta$ has eigenfunctions $\{e^{i k \cdot x}\}$ with corresponding eigenvalues $\{k_1^2 + k_2^2\}$, where $k=(k_1, k_2) \in \mathbb Z^2$. It's important to note that these eigenvalues $\{k_1^2 + k_2^2\}$ are sums of two squares. In general, a PDE can be considered as a system of infinitely many ODEs with infinitely many unknowns $\hat u (k, t)$, where $k\in \mathbb Z^2$. 
Some dissipative PDEs can be reduced to a system of finitely many ODEs as $t\rightarrow \infty$. This type of finite-dimensional reduction often relies on the existence of large gaps between the eigenvalues of the Laplacian. In fact, large gaps between sums of two squares, as demonstrated in Richards' result [@Richards], can lead to the finite-dimensional reduction for certain dissipative PDEs, such as the reaction-diffusion equation ([\[reaction\]](#reaction){reference-type="ref" reference="reaction"}) in a 2D periodic domain. However, in some cases, large gaps between the eigenvalues of the Laplacian are not available. For instance, if one considers the Laplacian operator acting on $L^2(\mathbb T^3)$ functions, the eigenvalues are sums of three squares, which do not exhibit large gaps. In such scenarios, the sparse distribution of integer lattice points in spherical shells can be valuable in reducing a dissipative PDE to a finite-dimensional system at large times. For example, an important work by Mallet-Paret and Sell [@MS] demonstrates a finite-dimensional simplification for 3D reaction-diffusion equations using the sparse distribution of lattice points in spherical shells. # Statements of main results Our first result is concerned with the sparse distribution of lattice points in annuli in $\mathbb R^2$. **Theorem 1**. *Assume $0<s<\frac{1}{4}$. Let $d>0$. There exist arbitrarily large $\lambda$ and $\kappa \geq C \lambda^s$ for some constant $C$ independent of $\lambda$, $\kappa$ and $d$, such that, any two lattice points $k,\ell \in \mathbb Z^2$ that belong to the annulus $\{x\in \mathbb R^2: \lambda \leq |x|^2 \leq \lambda + \kappa\}$ must satisfy $|k-\ell|>d$.* *Remark 2*. The annulus $\{x\in \mathbb R^2: \lambda \leq |x|^2 \leq \lambda + \kappa\}$ described in the theorem may contain either no lattice points, only one lattice point, or multiple lattice points. 
However, when there are multiple points within such an annulus, it is ensured that the distance between any two lattice points is sufficiently large. Our second result is about the sparse distribution of lattice points in spherical shells in $\mathbb R^3$. **Theorem 3**. *Let $d>0$. There exist arbitrarily large $m$ and $h \geq C \sqrt{\log m}$ for some constant $C$ independent of $m$ and $h$, such that, any two lattice points $k,\ell \in \mathbb Z^3$ that belong to the spherical shell $\{x\in \mathbb R^3: m \leq |x|^2 \leq m + h\}$ must satisfy $|k-\ell| > d$.* # Proof of Theorem [Theorem 1](#thm2D){reference-type="ref" reference="thm2D"} {#proof-of-theorem-thm2d} This section is dedicated to proving Theorem [Theorem 1](#thm2D){reference-type="ref" reference="thm2D"}, which asserts the sparse distribution of lattice points in annuli in $\mathbb R^2$. Before presenting the proof, we will introduce a notation: given functions $f(x)$ and $g(x)$, we denote $$\begin{aligned} f(x) \sim g(x) \;\; \text{to mean} \;\; \lim_{x\rightarrow \infty} \frac{f(x)}{g(x)} = 1.\end{aligned}$$ *Proof.* We draw ideas from [@MS]. Consider the family of disjoint annuli in $\mathbb R^2$: $$\begin{aligned} N_m^{\mu} = \{x\in \mathbb R^2: \mu+ m \kappa < |x|^2 \leq \mu + (m+1) \kappa\},\end{aligned}$$ where $m \in \mathbb Z$ with $0\leq m \leq J = \lfloor \mu^{1/2} \rfloor$. We set $$\kappa = \mu^s, \;\;\text{where}\;\; 0<s<\frac{1}{4}.$$ We aim to show that, for sufficiently large $\mu$, there exists $m\in [0,J]$ such that $N_m^{\mu}$ does not contain a pair of lattice points with a distance less than or equal to $d$. The union of these annuli is denoted as $$\begin{aligned} N^{\mu} = \bigcup_{m=0}^{J} N_m^{\mu} = \{x \in \mathbb R^2: \; \mu < |x|^2 \leq \mu + (J+1)\kappa\}.\end{aligned}$$ Notice that $N^{\mu}$ is also an annulus. 
We will estimate the thickness of the annulus $N^{\mu}$: $$\begin{aligned} \label{Nu1} \text{thickness of} \,N^{\mu} &= \sqrt{\mu + (J+1)\kappa} - \sqrt{\mu}.\end{aligned}$$ Using $J= \lfloor \mu^{1/2} \rfloor$ and $\kappa = \mu^s$ for $0<s<\frac{1}{4}$, a simple calculation shows that, as $\mu\rightarrow \infty$, $$\begin{aligned} \label{thickN} \text{thickness of} \,N^{\mu} \sim \frac{1}{2} \mu^s, \;\text{namely},\; \lim_{\mu\rightarrow \infty} \frac{\text{thickness of} \,N^{\mu} }{\frac{1}{2}\mu^s} = 1.\end{aligned}$$ Suppose there exist lattice points $k, \ell \in \mathbb Z^2$ such that $$\begin{aligned} k, \ell \in N^{\mu}_m \;\;\text{with} \;\; 0<|k - \ell| \leq d,\end{aligned}$$ for some $m \in [0,J]$. Let $j= \ell-k$, then $0<|j| \leq d$. Since $$\begin{aligned} |\ell|^2 = |k|^2 + 2 k \cdot j + |j|^2,\end{aligned}$$ then $$\begin{aligned} |k \cdot j|\leq \frac{1}{2} \left| |k|^2 - |\ell|^2 \right| + \frac{1}{2} |j|^2 < \frac{1}{2}\kappa + \frac{1}{2} d^2,\end{aligned}$$ because $k, \ell \in N^{\mu}_m$. Since $k$ and $\ell$ are interchangeable, we have also $|\ell \cdot j| < \frac{1}{2} \kappa + \frac{1}{2} d^2$. Therefore, the lattice points $k$ and $\ell$ belong to the strip $$\begin{aligned} \label{strip} S_j^{\mu} = \{x \in \mathbb R^2: |x \cdot j|< \frac{1}{2}\kappa + \frac{1}{2}d^2\}\end{aligned}$$ for some $j\in \mathbb Z^2$ satisfying $0<|j| \leq d$. The strip $S_j$ is symmetric about the origin, and also symmetric about the straight line $x \cdot j =0$. We denote $S^{\mu}$ as the finite union: $$\begin{aligned} S^{\mu} = \bigcup_{0<|j| \leq d} S^{\mu}_j.\end{aligned}$$ Note that the set $S^{\mu}$ contains all pairs of lattice points $(k,\ell)$ with distance less than or equal to $d$ belonging to an annulus $N^{\mu}_m$ for some $m \in [0,J]$. 
In other words, $$\begin{aligned} \label{Smusup} S^{\mu} \supset \left\{ k, \ell \in \mathbb Z^2: \, 0< |k-\ell| \leq d \;\text{with} \; k, \ell \in N^{\mu}_m \; \text{for some} \; m \in [0,J] \right\}.\end{aligned}$$ We remark that constructing set $S^{\mu}$ is crucial for this proof. Using ([\[strip\]](#strip){reference-type="ref" reference="strip"}), we observe that, for sufficiently large $\mu$, $$\begin{aligned} \label{widS} \text{the width of} \; S^{\mu}_j \leq 2\kappa = 2\mu^s. \end{aligned}$$ Also, as $\mu \rightarrow \infty$, asymptotically, $$\begin{aligned} \label{measSN} \text{meas}(S_j^{\mu} \cap N^{\mu}) \sim 2 (\text{width of $S^{\mu}_j$})(\text{thickness of $N^{\mu}$}).\end{aligned}$$ Combining ([\[thickN\]](#thickN){reference-type="ref" reference="thickN"}), ([\[widS\]](#widS){reference-type="ref" reference="widS"}) and ([\[measSN\]](#measSN){reference-type="ref" reference="measSN"}), it follows that, for sufficiently large $\mu$, $$\begin{aligned} \label{measSN2} \text{meas}(S_j^{\mu} \cap N^{\mu}) \leq 2 \mu^{2s}.\end{aligned}$$ Moreover, since $S^{\mu} = \cup_{0<|j| \leq d} S^{\mu}_j$ is a finite union, then by ([\[measSN2\]](#measSN2){reference-type="ref" reference="measSN2"}), for sufficiently large $\mu$, $$\begin{aligned} \label{measSN3} \text{meas}(S^{\mu} \cap N^{\mu}) \leq C_d \cdot \mu^{2s}, \end{aligned}$$ for some constant $C_d$ that depends solely on $d$. 
We know that, as $\mu \rightarrow \infty$, asymptotically, $$\begin{aligned} \label{cardSN} \text{card}(S^{\mu} \cap N^{\mu} \cap \mathbb Z^2) \sim \text{meas} (S^{\mu}\cap N^{\mu}).\end{aligned}$$ By ([\[measSN3\]](#measSN3){reference-type="ref" reference="measSN3"}) and ([\[cardSN\]](#cardSN){reference-type="ref" reference="cardSN"}), for sufficiently large $\mu$, $$\begin{aligned} \label{cont} \text{card}(S^{\mu} \cap N^{\mu} \cap \mathbb Z^2) \leq C_d \cdot \mu^{2s}.\end{aligned}$$ If each of the disjoint sets $S^{\mu} \cap N^{\mu}_m \cap \mathbb Z^2$ is not empty for every $m \in [0,J]$, where $J=\lfloor \mu^{1/2} \rfloor$, then $\text{card}(S^{\mu} \cap N^{\mu} \cap \mathbb Z^2)$ will grow at least as fast as $\mu^{1/2}$ as $\mu\rightarrow \infty$, which contradicts ([\[cont\]](#cont){reference-type="ref" reference="cont"}) because $0<2s<1/2$. Therefore, for sufficiently large $\mu$, there exists $m_0\in [0,J]$ such that the set $S^{\mu} \cap N^{\mu}_{m_0} \cap \mathbb Z^2$ is empty. Then, we conclude from ([\[Smusup\]](#Smusup){reference-type="ref" reference="Smusup"}) that the annulus $N^{\mu}_{m_0}$ does not contain two lattice points with a distance less than or equal to $d$. Denote $\lambda = \mu + m_0\kappa$, then $$\begin{aligned} \label{Nm0} N_{m_0}^{\mu} &= \{x\in \mathbb R^2: \mu+ m_0 \kappa < |x|^2 \leq \mu + (m_0+1) \kappa\} \notag\\ &= \{x\in \mathbb R^2: \lambda < |x|^2 \leq \lambda + \kappa\}.\end{aligned}$$ Notice that $\lim_{\mu \rightarrow \infty} \frac{\lambda}{\mu}=1$ and $\kappa = \mu^s$, and therefore, $\lim_{\lambda \rightarrow \infty} \frac{\lambda^s}{\kappa}=1$, where $0<s<1/4$. Hence, we have $\kappa \geq C \lambda^s$, for sufficiently large $\lambda$. Here, $C$ is a constant independent of $\lambda$ and $d$. Also, the half open annulus given in ([\[Nm0\]](#Nm0){reference-type="ref" reference="Nm0"}) can be easily adjusted to a closed annulus. 
◻ # Proof of Theorem [Theorem 3](#thm3D){reference-type="ref" reference="thm3D"} {#proof-of-theorem-thm3d} In this section, we prove Theorem [Theorem 3](#thm3D){reference-type="ref" reference="thm3D"}, which concerns the sparse distribution of lattice points in spherical shells in $\mathbb R^3$. Before presenting the proof, we introduce some concepts in number theory and state some lemmas. ## Definitions and lemmas Let us recall the definitions of Legendre's symbol and Kronecker's symbol. One may refer to the classic books [@Hardy; @Hua]. The notation $p\nmid m$ means that $p$ does not divide $m$. **Definition 4**. Let $p>0$ be an odd prime and let $m$ be an integer with $p\nmid m$. The integer $m$ is called a *quadratic residue* mod $p$ if $m\equiv k^2$ (mod $p$) for some $k\in \mathbb Z$. If the equation $m\equiv k^2$ (mod $p$) has no solution $k$, then $m$ is called a *quadratic non-residue* mod $p$. The *Legendre's symbol* is defined as $$\begin{aligned} \label{Lego} \left( \frac{m}{p} \right) = \begin{cases} 1 & \text{if} \; m \;\text{is a quadratic residue mod} \; p \\ -1 & \text{if} \; m \;\text{is a quadratic non-residue mod} \; p. \end{cases}\end{aligned}$$ In the theory of quadratic forms, the discriminant $d= b^2 - 4ac$ is considered, which satisfies $d\equiv 0$ or $1$ (mod 4). Under this scenario, we define the Kronecker's symbol as follows. **Definition 5**. Assume $d \equiv 0$ or 1 (mod 4) and $d \not=0$. Let $n>0$ with the prime factorization $n= \prod_{j=1}^k p_j$. Assume gcd$(d,n) =1$. The *Kronecker's symbol* $\left(\frac{d}{n} \right)$ is defined as $$\begin{aligned} \label{Jacobi} \left( \frac{d}{n} \right) = \prod_{j=1}^k \left( \frac{d}{p_j} \right) ,\end{aligned}$$ where $\left(\frac{d}{p}\right)$ is the Legendre's symbol for an odd prime $p$ with $p \nmid d$, and $$\begin{aligned} \label{Jaco2} \left( \frac{d}{2} \right) = \begin{cases} 1 & \text{if} \; d \equiv 1 \;(\text{mod} \;8) \\ -1 & \text{if} \; d \equiv 5\;(\text{mod} \;8). 
\end{cases}\end{aligned}$$ The Kronecker's symbol $\left(\frac{d}{n} \right)$ can be extended to negative values of $n$ by using $\left(\frac{d}{-m} \right) = \left(\frac{d}{-1} \right) \left(\frac{d}{m} \right)$ with $\left(\frac{d}{-1} \right) = 1$ when $d>0$; $\left(\frac{d}{-1} \right) = -1$ when $d<0$. Kronecker's symbol has the property $$\begin{aligned} \label{Jaco-p} \left( \frac{d}{n_1}\right) = \left( \frac{d}{n_2}\right), \;\; \text{if} \; n_1 \equiv n_2 \, ( \text{mod} \;d) \;\;\text{and} \;\; d \equiv 0 \; \text{or} \; 1 \,(\text{mod} \; 4),\end{aligned}$$ provided $d \not = 0$, $n_j \not = 0$, and gcd $(d, n_j)=1$, for $j=1,2$. Lemma [Lemma 6](#lem1){reference-type="ref" reference="lem1"} and Lemma [Lemma 7](#lem2){reference-type="ref" reference="lem2"} have been proved by Mallet-Paret and Sell in [@MS]. **Lemma 6**. *(Mallet-Paret and Sell [@MS]) Let $D\subset \mathbb{Z}$ be a finite nonempty set of integers $d \equiv 0$ or 1 (mod 4), with the property that $\prod_{d\in A} d$ is not a perfect square whenever $A\subset D$ has odd cardinality. Then there exists an integer $r\not = 0$ such that $$\begin{aligned} \emph{gcd}(d,r) = 1 \;\; \text{and} \;\; \left( \frac{d}{r}\right) = -1\end{aligned}$$ for each $d\in D$.* Before stating the next lemma, we introduce some notations. Let $p>0$ be a prime. We use the notations $p\,|_e\, n$ and $p\,|_o\, n$ to represent that $p$ divides $n$ an even or odd number of times, respectively. More precisely, we write $$p\,|_e\, n$$ to mean either $n = p^{\alpha} m$ where $\alpha$ is even and $p \nmid m$, or else $n=0$. Note that $\alpha=0$ is permitted, so $p \, |_e \, n$ holds if $p \nmid n$. Similarly, we write $$p\,|_o\, n$$ to mean $n = p^{\alpha} m$, where $\alpha$ is odd and $p \nmid m$. **Lemma 7**. 
*(Mallet-Paret and Sell [@MS]) Consider a quadratic form $T(k_1,k_2)=ak_1^2 + b k_1 k_2 + c k_2^2$ with integer coefficients and discriminant $d=b^2 - 4 ac$, and let $p$ be a prime satisfying $p\nmid d$ and $\left( \frac{d}{p}\right) = -1$. Then $$\begin{aligned} p\,|_e \, T(k_1,k_2)\end{aligned}$$ for any $k_1$, $k_2\in \mathbb Z$.* *Remark 8*. In [@MS], Lemma [Lemma 7](#lem2){reference-type="ref" reference="lem2"} was proved for odd primes $p$. The same conclusion holds for the case $p=2$, and we briefly sketch the proof as follows. Let $p=2$. Since $\left( \frac{d}{2}\right) = -1$, then $d \equiv 5$ (mod 8) by ([\[Jaco2\]](#Jaco2){reference-type="ref" reference="Jaco2"}). Then, $d^2 \equiv 9$ (mod 16). Since $2\nmid d$ and $d= b^2 - 4ac$, we obtain that $b$ is odd. Then, $a$ and $c$ must both be odd. In fact, if either $a$ or $c$ is even, then $d^2 = (b^2 - 4ac)^2 \equiv 1$ (mod 16), contradicting that $d^2 \equiv 9$ (mod 16). Therefore, we conclude that all of $a,b,c$ are odd, namely, the coefficients of the quadratic form $T(k_1,k_2)$ are all odd. Therefore, if $2$ divides $T(k_1,k_2)$, then 2 divides both $k_1$ and $k_2$, and thus $4$ divides $T(k_1,k_2)$. Repeatedly factoring out 4 gives that $2\,|_e \, T(k_1,k_2)$. The following lemma is motivated by the works in [@Richards; @MS]. It is essential to emphasize that we present a logarithmic-type estimate for the length of the interval within which a family of quadratic forms does not assume values. This logarithmic-type estimate plays a critical role in justifying Theorem [Theorem 3](#thm3D){reference-type="ref" reference="thm3D"}, which concerns the thickness of spherical shells containing sparsely distributed lattice points. **Lemma 9**. *Let $D\subset \mathbb{Z}$ be a finite nonempty set of integers $d \equiv 0$ or 1 (mod 4), with the property that $\prod_{d\in A} d$ is not a perfect square whenever $A\subset D$ has odd cardinality. 
There exist arbitrarily large $m$ and $h\geq C \log m$ for some constant $C>0$ that depends solely on $D$, satisfying: if $T$ is any quadratic form $$\begin{aligned} T(k_1,k_2)=ak_1^2 + bk_1 k_2 + ck_2^2, \quad a, b, c\in \mathbb{Z},\end{aligned}$$ with discriminant $d=b^2 - 4 ac \in D$, then $$\begin{aligned} T(k_1,k_2) \not \in [m,m+h] \text{\;\;for each\;\;} k_1, k_2 \in \mathbb{Z}.\end{aligned}$$* *Remark 10*. If $D$ contains only negative integers, then obviously $\prod_{d\in A} d$ is not a perfect square whenever $A\subset D$ has odd cardinality. *Proof.* The argument adopts ideas from [@Richards; @MS]. Thanks to Lemma [Lemma 6](#lem1){reference-type="ref" reference="lem1"}, there exists $r\not=0$ such that $$\text{gcd\ }(d,r)=1\text{\ \ \ and\ \ \ }\left( \frac{d}{r}\right) =-1,\text{\ \ for each\ \ }d\in D. \label{ms4}$$ Define $$\begin{aligned} \label{def-delta} \delta :=\text{\ lcm\ }\{|d|:d\in D\}.\end{aligned}$$ Note, ([\[ms4\]](#ms4){reference-type="ref" reference="ms4"}) and ([\[def-delta\]](#def-delta){reference-type="ref" reference="def-delta"}) imply that $\text{gcd\ }(\delta,r)=1$. Let $h>0$ and set $$A:=\sup_{0\leq j\leq h}|r+\delta j|. \label{ms8}$$ Define $P$ to be the product $$P:=\prod p^{1+\alpha } \label{ms7}$$ where the product is taken over all primes $p$ with $$\label{ms7'} p\nmid \delta \text{\ \ and\ \ }p^{\alpha }\leq A<p^{1+\alpha }\text{\ \ for some integer\ \ }\alpha >0.$$ Because of ([\[ms7\]](#ms7){reference-type="ref" reference="ms7"}) and ([\[ms7\'\]](#ms7'){reference-type="ref" reference="ms7'"}), we have gcd$(P,\delta)=1$. Then, by Bezout's identity, there exists an integer $m\in \lbrack 1,P]$ satisfying $$\delta m\equiv r\;(\text{mod}\;P). \label{ms6}$$ We argue that $h\geq C\log m$ if $h$ is sufficiently large. Indeed, the number of primes $p\leq A$ is asymptotic to $\frac{A}{\log A}$. Therefore, for $A$ sufficiently large, the number of primes $p\leq A$ is less than $\frac{2A}{\log A}$. 
By ([\[ms7\]](#ms7){reference-type="ref" reference="ms7"}) and ([\[ms7\'\]](#ms7'){reference-type="ref" reference="ms7'"}), we obtain that $$\label{ms9} P = \prod p^{1+\alpha } \leq \prod p^{2\alpha} \leq \prod A^2 \leq A^{\frac{4A}{\log A}}=e^{4A}.$$ Due to ([\[ms8\]](#ms8){reference-type="ref" reference="ms8"}), we have, for sufficiently large $h$, $$\begin{aligned} \label{ms10} A \leq |r| + \delta h \leq 2 \delta h.\end{aligned}$$ Because of ([\[ms9\]](#ms9){reference-type="ref" reference="ms9"}) and ([\[ms10\]](#ms10){reference-type="ref" reference="ms10"}) together with the fact $m\leq P$, we conclude that, for sufficiently large $h$, $$\begin{aligned} \label{ms11} m \leq e^{8 \delta h}.\end{aligned}$$ Inequality ([\[ms11\]](#ms11){reference-type="ref" reference="ms11"}) can be written as $h \geq \frac{1}{8\delta} \log m$, for sufficiently large $h$. We claim that $T(k_{1},k_{2})\not\in [m,m+h]$  for any  $k_{1},k_{2}\in \mathbb{Z}$. Indeed, thanks to Lemma [Lemma 7](#lem2){reference-type="ref" reference="lem2"}, it is sufficient to show that whenever $0\leq j \leq h$ and $d = b^2 - 4ac \in D$, there exists a prime number $p$ satisfying $$\begin{aligned} \label{claim} p\nmid d, \;\;\; \left( \frac{d}{p}\right) = -1, \;\; \text{and} \;\; p\,|_o\, (m+j).\end{aligned}$$ We take an arbitrary integer $j \in [0,h]$. Note, $r+\delta j \equiv r$ (mod $d$) due to ([\[def-delta\]](#def-delta){reference-type="ref" reference="def-delta"}). Thus, we obtain from ([\[Jaco-p\]](#Jaco-p){reference-type="ref" reference="Jaco-p"}) and ([\[ms4\]](#ms4){reference-type="ref" reference="ms4"}) that $$\begin{aligned} \label{-1} \left(\frac{d}{r+ \delta j} \right) = \left(\frac{d}{r} \right) = -1.\end{aligned}$$ Since gcd$(d,r)=1$ and $d$ divides $\delta$, we see that gcd$(d,r+\delta j)=1$. 
We write the prime factorization for $r+ \delta j = \prod p^a$ and use ([\[Jacobi\]](#Jacobi){reference-type="ref" reference="Jacobi"}) to obtain that $\left(\frac{d}{r+ \delta j} \right) = \prod \left( \frac{d}{p} \right)^a = -1$. It follows that there exists a prime $p\nmid d$ with $\left( \frac{d}{p}\right) = -1$ satisfying $$\begin{aligned} \label{po} p\, |_o \, (r + \delta j),\end{aligned}$$ namely, $p$ divides $r+ \delta j$ an odd number of times. Since $\text{gcd\ }(\delta,r)=1$ and using ([\[po\]](#po){reference-type="ref" reference="po"}), we have $p\nmid \delta$. Then, because of ([\[ms8\]](#ms8){reference-type="ref" reference="ms8"}), ([\[ms7\]](#ms7){reference-type="ref" reference="ms7"}) and ([\[ms7\'\]](#ms7'){reference-type="ref" reference="ms7'"}), we see that $p$, as a factor of $P$, occurs with a greater multiplicity than as a factor of $r + \delta j$. Moreover, by ([\[ms6\]](#ms6){reference-type="ref" reference="ms6"}), $$\begin{aligned} \delta (m+j)\equiv r+\delta j\;(\text{mod}\;P).\end{aligned}$$ As a result, $p$ divides $\delta (m+j)$ and $r+\delta j$ the same number of times. Then, due to ([\[po\]](#po){reference-type="ref" reference="po"}) and $p\nmid \delta$, it follows that $p\,|_o\, (m+j)$ as claimed in ([\[claim\]](#claim){reference-type="ref" reference="claim"}). ◻ **Proposition 11**. *Consider a quadratic function $T(k_1, k_2) = a k_1^2 + b k_1 k_2 + c k_2^2 + s k_1 + t k_2 + u$ for $k_1,k_2\in \mathbb Z$, where coefficients $a,b,c,s,t,u \in \mathbb Q$ such that $b^2 - 4ac \not = 0$. 
Then there exist $\xi_1, \xi_2, \xi_3 \in \mathbb Q$ such that $T(x_1 + \xi_1, x_2 + \xi_2) - \xi_3 = a x_1^2 + b x_1 x_2 + c x_2^2$ for all $x_1, x_2 \in \mathbb Q$.* *Proof.* Let us consider $$\begin{aligned} &T(x_1 + \xi_1, x_2 + \xi_2) - \xi_3 = a x_1^2 + b x_1 x_2 + c x_2^2 + x_1(2 a \xi_1 + b \xi_2 + s) + x_2(2 c \xi_2 + b \xi_1 + t) \notag\\ &+(a \xi_1^2 + b \xi_1 \xi_2 + c\xi_2^2 + s\xi_1 + t\xi_2 + u -\xi_3) = a x_1^2 + b x_1 x_2 + c x_2^2.\end{aligned}$$ Therefore, $$\begin{aligned} \begin{cases} 2 a \xi_1 + b \xi_2 + s =0 \\ 2 c \xi_2 + b \xi_1 + t = 0 \\ a \xi_1^2 + b \xi_1 \xi_2 + c\xi_2^2 + s\xi_1 + t\xi_2 + u -\xi_3 = 0 \end{cases}\end{aligned}$$ Hence, $$\begin{aligned} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} =\begin{pmatrix} 2a & b \\ b & 2c \end{pmatrix}^{-1} \begin{pmatrix} -s \\ -t \end{pmatrix} = \frac{1}{4ac - b^2} \begin{pmatrix} 2c & -b \\ -b & 2a \end{pmatrix} \begin{pmatrix} -s \\ -t \end{pmatrix} = \frac{1}{4ac - b^2} \begin{pmatrix} -2cs + bt \\ bs - 2 a t \end{pmatrix}.\end{aligned}$$ It follows that $$\begin{aligned} \label{xi1-2} \xi_1 = \frac{bt - 2cs }{4ac - b^2} , \;\;\; \xi_2 = \frac{bs - 2at}{4ac - b^2}.\end{aligned}$$ With values of $\xi_1$ and $\xi_2$, we can find the value of $\xi_3$: $$\begin{aligned} \label{xi3} \xi_3 = a \xi_1^2 + b \xi_1 \xi_2 + c\xi_2^2 + s\xi_1 + t\xi_2 + u.\end{aligned}$$ ◻ ## Proof of Theorem [Theorem 3](#thm3D){reference-type="ref" reference="thm3D"} {#proof-of-theorem-thm3d-1} *Proof.* We adopt ideas from [@MS]. Let us fix a distance $d>0$. Consider a spherical shell in $\mathbb R^3$: $$\begin{aligned} N= \{x \in \mathbb R^3: \; m\leq |x|^2 \leq m+h\}.\end{aligned}$$ Suppose there exist lattice points $k, \ell \in \mathbb Z^3$ such that $$\begin{aligned} \label{lessd} k, \ell \in N \;\;\text{with} \;\; 0<|k - \ell| \leq d.\end{aligned}$$ Let $j= \ell-k$, then $0<|j| \leq d$. Thus, $|\ell|^2 = |k|^2 + |j|^2 + 2 k \cdot j$. 
It follows that $$\begin{aligned} |k \cdot j|\leq \frac{1}{2} \left| |k|^2 - |\ell|^2 \right| + \frac{1}{2} |j|^2 \leq \frac{1}{2}h + \frac{1}{2} d^2,\end{aligned}$$ because $k,\ell \in N$. We denote $n = k \cdot j$. Then $n\in \mathbb Z$ satisfies $$\begin{aligned} \label{nbound} |n| = |k \cdot j| \leq \frac{1}{2}h + \frac{1}{2} d^2.\end{aligned}$$ Since $j=(j_1, j_2,j_3) \not = 0$, without loss of generality, we assume $j_3 \not =0$. For $k=(k_1,k_2,k_3)$, solving $n = k \cdot j= k_1 j_1 + k_2 j_2 + k_3 j_3$, we obtain $$\begin{aligned} k_3 = j_3^{-1} (n - k_1j_1 -k_2 j_2). \end{aligned}$$ Thus, $$\begin{aligned} \label{Tjn} |k|^2 &= k_1^2 + k_2^2 + j_3^{-2} ( k_1j_1 + k_2 j_2 -n)^2 \notag\\ &= j_3^{-2} \left[ (j_1^2 + j_3^2) k_1^2 + (2j_1 j_2 ) k_1 k_2 + (j_2^2 + j_3^2) k_2^2 - 2n j_1 k_1 - 2n j_2 k_2 + n^2 \right] \notag\\ &=: T_{j,n}(k_1,k_2),\end{aligned}$$ where $j\in \mathbb Z^3$, $n\in \mathbb Z$ such that $0<|j| \leq d$ and $|n| \leq \frac{1}{2}h + \frac{1}{2} d^2$. Since $k\in N$, we know that $$\begin{aligned} \label{Tjnkk} T_{j,n}(k_1,k_2) \in [m, m+h].\end{aligned}$$ We remark that the function $T_{j,n}$ defined in ([\[Tjn\]](#Tjn){reference-type="ref" reference="Tjn"}) is a function of the form $T_{j,n}(k_1,k_2) = ak_1^2 + b k_1 k_2 + ck_2^2 + s k_1 + t k_2 + u$ where the coefficients $a,b,c,s,t,u\in \mathbb Q$ and $b^2 - 4ac <0$. Thanks to Proposition [Proposition 11](#prop1){reference-type="ref" reference="prop1"}, there exist rational numbers $\xi_1$, $\xi_2$ and $\xi_3$ such that $$\begin{aligned} \label{Txx} T_{j,n} (x_1 + \xi_1, x_2+\xi_2) - \xi_3 = j_3^{-2} (j_1^2 + j_3^2) x_1^2 + (2 j_3^{-2} j_1 j_2 ) x_1 x_2 + j_3^{-2} (j_2^2 + j_3^2) x_2^2,\end{aligned}$$ for all rational numbers $x_1$ and $x_2$. 
Moreover, due to ([\[xi1-2\]](#xi1-2){reference-type="ref" reference="xi1-2"}) and ([\[xi3\]](#xi3){reference-type="ref" reference="xi3"}) and a straightforward calculation, we obtain $$\begin{aligned} \label{xixixi} \xi_1 = \frac{n j_1}{|j|^2}, \;\;\; \xi_2 = \frac{n j_2}{|j|^2}, \;\;\; \xi_3 = \frac{n^2}{|j|^2}.\end{aligned}$$ Without loss of generality, we assume $d$ is an integer. We set $$\begin{aligned} \label{lcm} \beta = \text{lcm}\{1^2, 2^2, 3^2, \cdots, d^2\}.\end{aligned}$$ Consider arguments of the form $x_1 = \frac{i_1}{\beta}$ and $x_2 = \frac{i_2}{\beta}$. Then, by ([\[Txx\]](#Txx){reference-type="ref" reference="Txx"}), we obtain $$\begin{aligned} \label{ae2} &\left[T_{j,n} \Big(\frac{i_1}{\beta} + \xi_1, \frac{i_2}{\beta}+\xi_2\Big) - \xi_3\right] \beta^3 \notag\\ &= \beta j_3^{-2}(j_1^2 + j_3^2) i_1^2 + (2\beta j_3^{-2} j_1 j_2 ) i_1 i_2 + \beta j_3^{-2}(j_2^2 + j_3^2) i_2^2 \notag\\ &=: \tilde T_{j}(i_1, i_2).\end{aligned}$$ Because of ([\[lcm\]](#lcm){reference-type="ref" reference="lcm"}), $\frac{i_1}{\beta} + \xi_1$ and $\frac{i_2}{\beta}+\xi_2$ can take any integer values by adjusting $i_1$ and $i_2$. Also due to ([\[lcm\]](#lcm){reference-type="ref" reference="lcm"}) and ([\[ae2\]](#ae2){reference-type="ref" reference="ae2"}), $\tilde T_{j}(i_1, i_2)$ is a quadratic form with integer coefficients and negative discriminant. Note, $j$ belongs to a finite set. 
Thanks to Lemma [Lemma 9](#thm1){reference-type="ref" reference="thm1"} and Remark [Remark 10](#remk1){reference-type="ref" reference="remk1"}, there exist arbitrarily large $m_0$ and $h_0 \geq C \log m_0$ such that $$\begin{aligned} \label{Ttilde} \tilde T_{j}(i_1, i_2) \not \in [m_0, \, m_0+h_0], \;\text{for any} \;i_1,i_2 \in \mathbb Z, \; \text{and for any}\; j\in \mathbb Z^3 \; \text{with} \; 0<|j|\leq d.\end{aligned}$$ For sufficiently large $h_0$, we can find $h>0$ satisfying $$\begin{aligned} \label{h0} &h_0 = \left(h + \frac{1}{4}(h + d^2)^2\right) \beta^3.\end{aligned}$$ Then, we set $$\begin{aligned} \label{defm} m = \frac{m_0}{\beta^3} + \frac{1}{4}(h + d^2)^2. \end{aligned}$$ Due to ([\[xixixi\]](#xixixi){reference-type="ref" reference="xixixi"}) and ([\[nbound\]](#nbound){reference-type="ref" reference="nbound"}), we have $$\begin{aligned} \label{xi3b} \xi_3 = \frac{n^2}{|j|^2} \leq \frac{1}{4}(h + d^2)^2. \end{aligned}$$ Using ([\[ae2\]](#ae2){reference-type="ref" reference="ae2"})-([\[xi3b\]](#xi3b){reference-type="ref" reference="xi3b"}), we obtain $$\begin{aligned} \label{Tmm} T_{j,n} \Big(\frac{i_1}{\beta} + \xi_1, \frac{i_2}{\beta}+\xi_2\Big) \not \in [m - \frac{1}{4}(h + d^2)^2 + \xi_3, \, m+h + \xi_3] \supset [m, m+h],\end{aligned}$$ for any $i_1,i_2 \in \mathbb Z$. In particular, there exist $i_1,i_2 \in \mathbb Z$ such that $k_1=\frac{i_1}{\beta} + \xi_1$ and $k_2=\frac{i_2}{\beta} + \xi_2$, and thus ([\[Tmm\]](#Tmm){reference-type="ref" reference="Tmm"}) shows that $T_{j,n}(k_1, k_2) \not \in [m, m+h]$, which contradicts ([\[Tjnkk\]](#Tjnkk){reference-type="ref" reference="Tjnkk"}). Therefore, for these pairs of $m$ and $h$, ([\[lessd\]](#lessd){reference-type="ref" reference="lessd"}) cannot happen. Thus, for any one of these pairs of $m$ and $h$, if there exist two lattice points $k,\ell \in \mathbb Z^3$ that belong to the spherical shell $\{x\in \mathbb R^3: m \leq |x|^2 \leq m + h\}$, then $|k-\ell| > d$. 
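The even-multiplicity property of Lemma 7, which underlies the avoidance argument above, is easy to test numerically. The following Python sketch is our own illustration (all function names are ours, and the Legendre symbol is computed via Euler's criterion); it is not part of the proof.

```python
# A numerical sanity check of Lemma 7: if p is a prime with p not dividing
# d and (d/p) = -1 for the discriminant d of T, then p divides T(k1, k2)
# an even number of times.  Function names are ours, for illustration only.

def legendre(m, p):
    """Legendre symbol (m/p), for an odd prime p with p not dividing m,
    computed via Euler's criterion: m^((p-1)/2) mod p."""
    r = pow(m, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def multiplicity(p, n):
    """Number of times the prime p divides n (n must be nonzero)."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

# The form T(k1, k2) = k1^2 + k2^2 has discriminant d = -4, and
# (-4/p) = -1 exactly when p = 3 (mod 4); take p = 3 as a test case.
p = 3
assert legendre(-4 % p, p) == -1
for k1 in range(1, 20):
    for k2 in range(1, 20):
        assert multiplicity(p, k1 * k1 + k2 * k2) % 2 == 0
```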
Furthermore, since we fix $d$ at the beginning, then asymptotically, $$\begin{aligned} h_0 \sim \frac{1}{4} \beta^3 h^2, \;\;\; m \sim \frac{1}{\beta^3} m_0.\end{aligned}$$ Then, along with the fact that $h_0 \geq C \log m_0$, we conclude that $$\begin{aligned} h \geq \tilde C \sqrt{\log m},\end{aligned}$$ for sufficiently large $m$, where the constant $\tilde C$ depends on $d$. ◻ # Discussion In this section, we discuss some related problems and future work. The original motivation of this project was to find an optimal estimate for large gaps between sums of two squares, and this remains its long-term goal. It is an important open problem whether one can improve the logarithmic growth rate of the lower bound for large gaps between sums of squares presented in ([\[bounds\]](#bounds){reference-type="ref" reference="bounds"}). Additionally, it is interesting to ask whether one can reduce the polynomial growth rate (the power of 1/4) of the upper bound in ([\[bounds\]](#bounds){reference-type="ref" reference="bounds"}). We would like to draw some naive comparisons with a related problem: gaps between primes. It is well-known that the number of primes less than $x$ is approximately $\frac{x}{\log x}$, whereas the number of sums of squares less than $x$ behaves asymptotically as $\frac{bx}{\sqrt{\log x}}$, where $b$ is the Landau--Ramanujan constant. Therefore, there are fewer primes than sums of squares in $[0,x]$ for large $x$. Thus, on average, sums of squares are distributed more densely than primes throughout the natural numbers. It is a classical result of Westzynthius [@West] that $$\begin{aligned} \label{gapprime} \limsup_{n\rightarrow \infty} \frac{p_{n+1}-p_n}{\log p_n} = \infty,\end{aligned}$$ where $p_n$ is the sequence of primes. 
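The density comparison above ($\frac{x}{\log x}$ for primes versus $\frac{bx}{\sqrt{\log x}}$ for sums of two squares) is easy to observe numerically. The following Python sketch is our own illustration, not part of the paper.

```python
# Count primes and sums of two squares up to x, to illustrate the density
# comparison above.  Function names are ours; illustrative sketch only.

def count_primes(x):
    """Number of primes p <= x, via a sieve of Eratosthenes."""
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

def count_sums_of_two_squares(x):
    """Number of integers 0 <= n <= x of the form a^2 + b^2."""
    vals = set()
    a = 0
    while a * a <= x:
        b = a
        while a * a + b * b <= x:
            vals.add(a * a + b * b)
            b += 1
        a += 1
    return len(vals)

# Sums of two squares are more plentiful than primes, as discussed above.
assert count_sums_of_two_squares(10**4) > count_primes(10**4)
```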
By comparing ([\[gapprime\]](#gapprime){reference-type="ref" reference="gapprime"}) and ([\[loggap\]](#loggap){reference-type="ref" reference="loggap"}), we see that large gaps between primes might grow faster than large gaps between sums of squares asymptotically. Also, it is worth mentioning that gaps between primes have an upper bound $$\begin{aligned} \label{primeupp} p_{n+1} - p_n \leq p_n^{\theta},\end{aligned}$$ with an estimate $\theta=0.525$, for sufficiently large $n$ (see [@Baker]). One can compare $\theta=0.525$ in ([\[primeupp\]](#primeupp){reference-type="ref" reference="primeupp"}) with the exponent $1/4$ in ([\[bounds\]](#bounds){reference-type="ref" reference="bounds"}) concerning the upper bound of gaps between sums of two squares. Furthermore, we would like to mention the twin prime conjecture regarding small gaps between primes; however, small gaps between sums of squares are always 1, which is trivial. All of the above observations show that, in general, sums of squares appear more frequently than primes on the number line. See [@Shiu] for more discussion on this topic. Regarding the sparse distribution of lattice points in annular regions, it will be interesting to investigate how to improve the thickness of these annuli in $\mathbb R^2$ and spherical shells in $\mathbb R^3$. Theorems [Theorem 1](#thm2D){reference-type="ref" reference="thm2D"} and [Theorem 3](#thm3D){reference-type="ref" reference="thm3D"} provide estimates for the thickness, but it is unknown whether our estimates are optimal. In our proof, there are deep geometric perspectives that can be explored further to generate other useful results. More importantly, our results regarding the sparse distribution of lattice points have the potential for applications in simplifying infinite-dimensional dissipative dynamical systems at large time, such as the Navier--Stokes equations in fluid dynamics. Please refer to [@MS] for an example of applications. 
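The slow, logarithmic-type growth of large gaps between sums of two squares discussed above can also be observed directly. Here is a small Python sketch (our own illustration; the function names are ours) that computes the largest gap up to a limit and compares it with the logarithm of that limit.

```python
# Compute the largest gap between consecutive sums of two squares up to a
# limit, to illustrate the slow growth of large gaps discussed above.
# Names are ours; this is an illustrative sketch, not part of the paper.
import math

def sums_of_two_squares(limit):
    """Sorted list of integers 0 <= n <= limit of the form a^2 + b^2."""
    vals = set()
    a = 0
    while a * a <= limit:
        b = a
        while a * a + b * b <= limit:
            vals.add(a * a + b * b)
            b += 1
        a += 1
    return sorted(vals)

def largest_gap(limit):
    """Largest difference between consecutive sums of two squares <= limit."""
    s = sums_of_two_squares(limit)
    return max(t - u for u, t in zip(s, s[1:]))

# The largest gap up to 10^4 is only a small multiple of log(10^4) = 9.2...
ratio = largest_gap(10**4) / math.log(10**4)
assert ratio < 10
```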
Sums of squares are eigenvalues of the Laplacian on a periodic domain. Likewise, we can consider gaps between eigenvalues of the Dirichlet Laplacian on a bounded domain in $\mathbb{R}^n$, or more generally, on an $n$-dimensional Riemannian manifold. In these cases, explicit expressions of eigenvalues are usually not available. An important problem is to find sharp estimates for the upper and lower bounds of the size of the gaps between eigenvalues of the Dirichlet Laplacian. Please refer to [@Chen] for an estimate of the upper bound. # Acknowledgement {#acknowledgement .unnumbered} The research of this project was completed during the Summer 2023 REU program "AMRPU @ FIU" that took place at the Department of Mathematics and Statistics, Florida International University, and was supported by the NSF (REU Site) grant DMS-2050971. In particular, support of M. Ilyin came from the above grant. R. C. Baker, G. Harman, and J. Pintz, *The difference between consecutive primes. II.* Proc. London Math. Soc. (3) 83 (2001), 532--562. R. P. Bambah and S. Chowla, *On numbers which can be expressed as a sum of two squares*, Proc. Nat. Inst. Sci. India 13 (1947), 101--103. D. Chen, T. Zheng, and H. Yang, *Estimates of the gaps between consecutive eigenvalues of Laplacian*, Pacific J. Math. 282 (2016), 293--311. R. Dietmann, C. Elsholtz, A. Kalmynin, S. Konyagin, and J. Maynard, *Longer gaps between values of binary quadratic forms*, Int. Math. Res. Not. IMRN (2023), 10313--10349. P. Erdős, *Some problems in elementary number theory*, Publ. Math. Debrecen 2 (1951), 103--109. E. Grosswald, *Representations of integers as sums of squares*, Springer-Verlag, New York, 1985. G. H. Hardy and E. M. Wright, *An introduction to the theory of numbers*, The Clarendon Press, Oxford University Press, 1979. L. K. Hua, *Introduction to number theory*, Springer-Verlag, 1982. J. Mallet-Paret and G. R. Sell, *Inertial manifolds for reaction diffusion equations in higher space dimensions*, J. Amer. 
Math. Soc. 1 (1988), 805--866. I. Richards, *On the gaps between numbers which are sums of two squares*, Advances in Mathematics 46 (1982), 1--2. P. Shiu, *The gaps between sums of two squares*, Math. Gaz. 97 (2013), 256--262. E. Westzynthius, *Über die Verteilung der Zahlen die zu den n ersten Primzahlen teilerfremd sind*, Commentationes Physico-Mathematicae Helsingfors 5 (1931), 1--37.
{ "id": "2309.10971", "title": "Sparse distribution of lattice points in annular regions", "authors": "Yanqiu Guo and Michael Ilyin", "categories": "math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This article studies special solutions to symplectic curvature flow in dimension four. Firstly, we derive a local normal form for static solutions in terms of holomorphic data and use this normal form to show that every complete static solution to symplectic curvature flow in dimension four is Kähler-Einstein. Secondly, we perform an exterior differential systems analysis of the soliton equation for symplectic curvature flow and use the Cartan-Kähler theorem to prove a local existence and generality theorem for solitons. address: "[University of Wisconsin-Madison]{.smallcaps}, [Department of Mathematics]{.smallcaps}, [Madison, WI, USA]{.smallcaps}" author: - Gavin Ball bibliography: - SympCurvStatRefs.bib title: Static solutions to symplectic curvature flow in dimension four --- # Introduction An almost-Kähler manifold $(X, \Omega, J)$ consists of an even-dimensional manifold $X$ endowed with a symplectic form $\Omega$ and a compatible almost complex structure $J.$ Together, $\Omega$ and $J$ define a Riemannian metric $g$ on $X.$ One perspective on almost-Kähler geometry is to fix a symplectic form $\Omega$ on $X,$ choose a compatible $J,$ and think of $J$ and $g$ as auxiliary tools used to study the symplectic geometry of $(X, \Omega)$. In this direction, symplectic curvature flow is a degenerate parabolic evolution equation for almost-Kähler structures introduced by Streets--Tian [@StreetsTian14] given by $$\label{eq:introsympcurvfl} \begin{aligned} \tfrac{\partial}{\partial t} \Omega &= - 2 \rho, \\ \tfrac{\partial}{\partial t} g &= - 2 \rho^{1,1} - 2 \, \mathrm{Ric}^{2,0+0,2}, \end{aligned}$$ where $\rho$ is the Chern-Ricci form of $(\Omega, J)$ and $\mathrm{Ric}$ is the Ricci tensor of $g.$ The flow ([\[eq:introsympcurvfl\]](#eq:introsympcurvfl){reference-type="ref" reference="eq:introsympcurvfl"}) preserves the symplectic condition $d \Omega = 0$ and restricts to Kähler-Ricci flow in the case where $J$ is integrable. 
The idea behind symplectic curvature flow is to evolve an initial almost-Kähler structure on $X$ towards some canonical structure. In their initial paper, Streets & Tian prove short time existence for ([\[eq:introsympcurvfl\]](#eq:introsympcurvfl){reference-type="ref" reference="eq:introsympcurvfl"}), but it is to be expected that symplectic curvature flow will encounter singularities in general. Analogy with other geometric flows, especially Ricci flow, suggests that singularity formation should be modeled on *soliton* solutions to ([\[eq:introsympcurvfl\]](#eq:introsympcurvfl){reference-type="ref" reference="eq:introsympcurvfl"}). **Definition 1**. *An almost-Kähler manifold $(X, \Omega, J, g)$ is a *symplectic curvature flow soliton* if there exists a constant $\lambda \in \mathbb{R}$ and a vector field $V$ on $X$ such that $$\label{eq:intsympcurvsoliton} \begin{aligned} \lambda \Omega + \mathcal{L}_V \Omega &= - 2 \rho, \\ \lambda g + \mathcal{L}_V g &= - 2 \rho^{1,1} - 2 \, \mathrm{Ric}^{2,0+0,2}. \end{aligned}$$ If $\lambda > 0,$ $= 0,$ or $< 0$, we say the soliton is *expanding, steady,* or *shrinking* respectively. If the vector field $V$ vanishes identically, then we say $(X, \Omega, J, g)$ is a *static solution* to symplectic curvature flow.* The soliton solutions are precisely the almost-Kähler structures which evolve by rescaling and diffeomorphisms along ([\[eq:introsympcurvfl\]](#eq:introsympcurvfl){reference-type="ref" reference="eq:introsympcurvfl"}). The static solutions are the structures which evolve purely by rescaling. For Ricci flow, the static solutions are the Einstein metrics, and so static solutions to symplectic curvature flow can be thought of as the almost-Kähler analogues of Einstein metrics. ## Results In this paper, we study soliton and static solutions to symplectic curvature flow in dimension four using the techniques of exterior differential systems and the moving frame. 
These techniques are well-suited to the non-linear, overdetermined PDE system ([\[eq:intsympcurvsoliton\]](#eq:intsympcurvsoliton){reference-type="ref" reference="eq:intsympcurvsoliton"}). Our first main result is Theorem [Theorem 7](#thm:locnormalform){reference-type="ref" reference="thm:locnormalform"}, which gives a local normal form for static solutions. We summarize it here as: **Theorem 2**. *Every four-dimensional static solution to symplectic curvature flow is steady ($\lambda = 0$). If $(X^4, \Omega, J, g)$ is a static solution and $p \in X$ is a point where the Nijenhuis tensor of $J$ is non-vanishing, then there is a neighbourhood of $p$ with complex coordinates $z_1, z_2$ and a holomorphic function $h(z_1)$ such that the almost-Kähler structure on $X$ is given by Equation ([\[eq:finalint\]](#eq:finalint){reference-type="ref" reference="eq:finalint"}).* The geometry of the local normal form ([\[eq:finalint\]](#eq:finalint){reference-type="ref" reference="eq:finalint"}) is constrained enough to preclude the existence of non-trivial complete static solutions. **Corollary 3**. *A complete static solution to symplectic curvature flow in dimension four is Kähler-Einstein.* This corollary can be compared to results by Streets--Tian [@StreetsTian14] and Kelleher [@Kelleher19] on static solutions. In contrast to their techniques, our calculations are primarily local in nature. We also note that Pook [@pook2012homogeneous] has constructed compact static solutions to symplectic curvature flow in dimensions $n (n+1).$ The key ingredient in the proof of Theorem [Theorem 7](#thm:locnormalform){reference-type="ref" reference="thm:locnormalform"} is an integrability condition that must be satisfied by any static solution. The existence of this integrability condition implies that the overdetermined PDE system describing static solutions is not involutive in the sense of exterior differential systems. 
It is then natural to ask if the more general soliton system is involutive. Our second main result, Theorem [Theorem 9](#thm:SolitonEDS){reference-type="ref" reference="thm:SolitonEDS"}, shows that the soliton system is involutive. Hence, the Cartan-Kähler Theorem may be applied to prove the existence of solutions. We summarize this result here as: **Theorem 4**. *The overdetermined PDE system for four-dimensional symplectic curvature flow solitons is involutive and symplectic curvature flow solitons exist locally.* In §[4.1](#eg:ASL2R){reference-type="ref" reference="eg:ASL2R"} we provide an explicit example of a homogeneous symplectic curvature flow soliton. Further examples have appeared in [@LauretWill17], and explicit solutions to symplectic curvature flow have been analyzed in [@LauretSymp15; @pook2012homogeneous; @FernandezCulma15]. # Structure equations Let $X$ be a 4-manifold endowed with an almost Kähler structure $\left(\Omega, J \right),$ that is to say $\Omega$ is a symplectic form on $X$ and $J$ is a $\Omega$-compatible almost complex structure on $X$. Together, $\Omega$ and $J$ determine a Riemannian metric $g$ on $X.$ Let $\mathcal{P}$ denote the $\mathrm{U}(2)$-structure on $X$ determined by the almost Kähler structure $\left( \Omega, J \right)$. 
Explicitly, $\pi :\mathcal{P} \to X$ is the principal $\mathrm{U}(2)$-bundle defined by $$\mathcal{P} = \left\lbrace u : T_p X \to \mathbb{C}^2 \mid \text{$u$ is a complex linear isomorphism,} \:\:\: u^* \Omega_{\mathrm{Std}} = \Omega_p \right\rbrace,$$ where $\Omega_{\mathrm{Std}}$ denotes the standard Kähler form on $\mathbb{C}^2$, $$\Omega_{\mathrm{Std}} = \tfrac{i}{2} \left( e^1 \wedge \overline{e^1} + e^2 \wedge \overline{e^2} \right).$$ Let $\eta$ denote the $\mathbb{C}^2$-valued tautological 1-form on $\mathcal{P},$ defined by $\eta (v) = u (\pi_* (v)),$ for $v \in T_u \mathcal{P}.$ Denote the components of $\eta$ with respect to the standard basis $e_1,$ $e_2$ of $\mathbb{C}^2$ by $\eta_1, \eta_2.$ The forms $\eta_1, \eta_2$ are a basis for the semi-basic forms on $\mathcal{P},$ and they encode the almost complex structure $J$ in the sense that a 1-form $\theta$ on $X$ is a $(1,0)$-form if and only if the pullback $\pi^* \theta$ lies in $\operatorname{span}(\eta_1, \eta_2).$ We also have that, on $\mathcal{P},$ $$\begin{aligned} \Omega &= \tfrac{i}{2} \eta_i \wedge \overline{\eta_i}, \\ g &= \eta_i \cdot \overline{\eta_i}, \end{aligned}$$ where $1 \leq i \leq 2$ and the unitary summation convention is employed (as it will be in the remainder of this article). 
## The first structure equation On $\mathcal{P},$ Cartan's first structure equation reads $$\label{eq:CartIpre} d \eta_i = - \kappa_{i \overline{j}} \wedge \eta_j - \xi_{ij} \wedge \overline{\eta_j},$$ where $\kappa_{i \overline{j}} = - \overline{\kappa_{j \overline{i}}}$ and $\xi_{ij} = -\xi_{ji}.$ Here, $\kappa$ is the $\mathfrak{u}(2)$-valued connection form for the *Chern connection* on $X.$ The $\Lambda^2_{\mathbb{C}}$-valued 1-form $\xi_{ij}$ is $\pi$-semibasic, so that there exist functions $A_{ij \overline{k}}$ and $N_{ijk}$ on $X$ such that $$\xi_{ij} = A_{ij\overline{k}} \eta_k + N_{ijk} \overline{\eta_k}.$$ It is easily checked that the symplectic condition $d \Omega = 0$ implies $A_{ij\overline{k}} = 0.$ Cartan's first structure equation ([\[eq:CartIpre\]](#eq:CartIpre){reference-type="ref" reference="eq:CartIpre"}) may therefore be rewritten as $$\label{eq:CartI} d \eta_i = - \kappa_{i \overline{j}} \wedge \eta_j + N_{ijk} \, \overline{\eta_j} \wedge \overline{\eta_k}.$$ The tensor $N = N_{ijk} \left(\overline{\eta_i \wedge \eta_j}\right) \otimes \overline{\eta_k}$ descends to $X$ to give a well-defined section of $\Lambda^{0,2} \otimes \Lambda^{0,1}.$ In fact, $N$ is simply the *Nijenhuis tensor* of the almost complex structure $J.$ The structure equations ([\[eq:CartI\]](#eq:CartI){reference-type="ref" reference="eq:CartI"}) are valid in any even dimension. However, they may be written in a simpler form in complex dimension 2, by exploiting the fact that $\Lambda^2_\mathbb{C}$ is 1-dimensional, and this simplification is worthwhile because it leads to a simplification of the second structure equation. 
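One way to organize the check that $d\Omega = 0$ forces $A_{ij\overline{k}} = 0$ (a sketch under the conventions above; the $\kappa$-terms cancel because $\kappa$ is skew-hermitian, and the $N$-terms contribute forms of type $(0,3)$ and $(3,0)$, which vanish identically in complex dimension 2):

```latex
d\Omega = \tfrac{i}{2}\left( d\eta_i \wedge \overline{\eta_i} - \eta_i \wedge d\overline{\eta_i} \right)
        = -\tfrac{i}{2}\, A_{ij\overline{k}}\, \eta_k \wedge \overline{\eta_j} \wedge \overline{\eta_i}
          + \tfrac{i}{2}\, \overline{A_{ij\overline{k}}}\, \overline{\eta_k} \wedge \eta_j \wedge \eta_i .
```

The $(1,2)$- and $(2,1)$-parts must vanish separately, and since $A_{ij\overline{k}} = -A_{ji\overline{k}}$ the first term equals $-i\, A_{12\overline{k}}\, \eta_k \wedge \overline{\eta_2} \wedge \overline{\eta_1}$, so $d\Omega = 0$ forces $A_{ij\overline{k}} = 0$.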
Let $\varepsilon_{ij}$ denote the totally skew-symmetric symbol with $\varepsilon_{12} = 1/2.$ Let $N_i$ denote the functions on $\mathcal{P}$ defined by $$N_{ijk} = \varepsilon_{ij} N_k.$$ The first structure equation ([\[eq:CartI\]](#eq:CartI){reference-type="ref" reference="eq:CartI"}) may be rewritten as $$\label{eq:fourdCone} d \eta_i = - \kappa_{i \overline{j}} \wedge \eta_j + \varepsilon_{ij} N_{k} \, \overline{\eta_j} \wedge \overline{\eta_k}.$$ ## The second structure equations The identity $d^2 = 0$ applied to equation ([\[eq:fourdCone\]](#eq:fourdCone){reference-type="ref" reference="eq:fourdCone"}) implies the following equations: $$\label{eq:structtwo} \begin{aligned} d N_i =& \overline{\varepsilon_{jk}} A_{ij} \eta_k + B \, \eta_i + \left(F_{ij} + \varepsilon_{ij} H \right) \overline{\eta_j} - \kappa_{i \overline{j}} N_j - \kappa_{j \overline{j}} N_i, \\ d \kappa_{i \overline{j}} =& - \kappa_{i \overline{k}} \wedge \kappa_{k \overline{j}} + K_{i\bar{j}k\bar{l}} \overline{\eta_k} \wedge \eta_l + \left(R + N_l \overline{N_l} \right) \left(-\tfrac{1}{3} \eta_i \wedge \overline{\eta_j} - \tfrac{1}{3} \delta_{i \bar{j}} \eta_k \wedge \overline{\eta_k} \right) \\ & + N_i \overline{N_j} \eta_k \wedge \overline{\eta_k} + Q_{i \bar{k}} \eta_k \wedge \overline{\eta_j} + Q_{k \bar{j}} \eta_i \wedge \overline{\eta_k} - \tfrac{1}{2} A_{ik} \overline{\eta_j \wedge \eta_k} + \tfrac{1}{2} \overline{A_{jk}} \eta_i \wedge \eta_k \\ & + 2 B \, \varepsilon_{ik} \overline{\eta_j \wedge \eta_k} - 2 \overline{B} \overline{\varepsilon_{jk}} \eta_i \wedge \eta_k, \end{aligned}$$ for functions $A_{ij}, B, F_{ij}, H, K_{i \bar{j} k \bar{l}}, Q_{i \bar{j}}, R$ on $\mathcal{P}$ having the following symmetries: $$\begin{aligned} A_{ij} & = A_{ji}, & K_{i \bar{j} k \bar{l}} &= K_{k \bar{j} i \bar{l}} & Q_{i \bar{j}} &= -\overline{Q_{j \bar{i}}} \\ F_{ij} & = F_{ji}, & K_{i \bar{j} k \bar{l}} &= K_{i \bar{l} k \bar{j}} & Q_{i \bar{i}} &= 0 \\ & & K_{i \bar{i} k \bar{l}} &= 0 & & \\ 
& & K_{i \bar{j} k \bar{l}} &= \overline{K_{j \bar{i} l \bar{k}}} & & \end{aligned}$$ Each of these functions takes values in an irreducible $\mathrm{U}(2)$-representation, and the right-hand-side of the equation for $d \kappa$ in ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}) represents the irreducible decomposition of the curvature of the Chern connection of $X.$ The equations ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}) are called the second structure equations of the almost Kähler structure $(g, \Omega).$ By a classical theorem of Cartan, the functions $A_{ij}, B, F_{ij}, H, K_{i \bar{j} k \bar{l}}, Q_{i \bar{j}}, R$ form a complete set of second-order invariants of $(g, \Omega).$ The second equation of ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}) gives the curvature of the Chern connection on $X.$ The Chern-Ricci form is the trace of this curvature: $$d \kappa_{i \overline{i}} = -R \eta_{i} \wedge \overline{\eta_i} - 4 i \operatorname{Im}\left(\overline{B} \eta_1 \wedge \eta_2 \right) - 2 Q_{i \overline{j}} \overline{\eta_i} \wedge \eta_j.$$ ### The curvature of $g$ The Riemann curvature tensor of a 4-dimensional manifold is a section of a vector bundle modeled on the space of algebraic curvature tensors, the $\mathrm{SO}(4)$-subrepresentation of $\mathrm{Sym}^2 \left( \Lambda^2 \mathbb{R}^4 \right)$ cut out by the first Bianchi identity, which decomposes into irreducible summands as $$\label{eq:curvdecompso} \mathbb{R}\oplus \mathrm{Sym}^2_0 \mathbb{R}^4 \oplus \mathrm{Sym}^2_0 \left(\Lambda^2_+ \mathbb{R}^4 \right) \oplus \mathrm{Sym}^2_0 \left( \Lambda^2_- \mathbb{R}^4 \right).$$ Corresponding to each irreducible component, we have the scalar curvature $r$, the traceless Ricci curvature $\mathrm{Ric}^0$, and the self-dual $W^+$ and anti-self-dual $W^-$ Weyl tensors, respectively. 
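As a sanity check on this decomposition (a side computation, not from the paper): in dimension $n$ the space of algebraic curvature tensors has dimension $n^2(n^2-1)/12$, and for $n = 4$ this matches the total dimension of the four summands, namely $1 + 9 + 5 + 5$ (recall that $\Lambda^2_{\pm}\mathbb{R}^4$ are 3-dimensional):

```python
# Dimension bookkeeping for the SO(4)-decomposition of the curvature space.
n = 4
riemann_dim = n**2 * (n**2 - 1) // 12        # algebraic curvature tensors in dim n
scalar = 1                                    # scalar curvature r
traceless_ricci = n * (n + 1) // 2 - 1        # Sym^2_0 R^4
weyl_plus = weyl_minus = 3 * 4 // 2 - 1       # Sym^2_0 of a 3-dimensional space
print(riemann_dim, scalar + traceless_ricci + weyl_plus + weyl_minus)   # 20 20
```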
The second structure equations ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}) can be compared with the structure equations of a Riemannian manifold to write the components of the Riemann curvature tensor of $g$ in terms of the first and second order invariants of the almost Kähler structure. The result of this calculation is: $$\begin{aligned} \mathrm{Scal}(g) &= - 8 N_i \overline{N_i} - 8 R, \\ \mathrm{Ric}(g) &= A_{ij} \overline{\eta_i} \cdot \overline{\eta_j} + \overline{A_{ij}} \eta_i \cdot \eta_j + \left( Q_{i \bar{j}} + N_i \overline{N_j} \right) \overline{\eta_i} \cdot \eta_j - \left(\tfrac{1}{2} R + N_k \overline{N_k} \right) \eta_i \cdot \overline{\eta_i}, \\ W^+ (g) &= \left(4 N_i \overline{N_i} - 4 R \right) \Omega^2 - 8 i B \, \Omega \cdot \left( \eta_1 \wedge \eta_2 \right) + 8 i \overline{B} \, \Omega \cdot \left( \overline{\eta_1 \wedge \eta_2 } \right) \\ & + 2 H \, \left(\eta_1 \wedge \eta_2 \right)^2 + 2 \overline{H} \left( \overline{\eta_1 \wedge \eta_2 } \right)^2 - 4 N_i \overline{N_i} \left(\eta_1 \wedge \eta_2 \right) \cdot \left(\overline{\eta_1 \wedge \eta_2 }\right), \\ W^-(g) &= -2 K_{i \bar{j} k \bar{l}} \left(\overline{\eta}_i \wedge \eta_j \right) \cdot \left(\overline{\eta_k} \wedge \eta_l \right), \end{aligned}$$ where here we are viewing the Ricci tensor as a symmetric 2-tensor, and the self-dual and anti-self-dual Weyl tensors as sections of $\mathrm{Sym}^2_0 \left(\Lambda^2_{\pm} T X \right).$ ### Symplectic curvature flow The right hand side of the symplectic curvature flow equation ([\[eq:introsympcurvfl\]](#eq:introsympcurvfl){reference-type="ref" reference="eq:introsympcurvfl"}) may also be written in terms of second and first order invariants of $\left(g, \Omega \right).$ We have $$\label{eq:symplcurvfloweqs} \begin{aligned} \frac{\partial}{\partial t} \Omega &= 4 R \, \Omega + 4 i Q_{i \bar{j}} \, \overline{\eta_i} \wedge \eta_{j} - 8 \, \operatorname{Im}\left(\overline{B} {\eta_1 \wedge 
\eta_2 } \right), \\ \frac{\partial}{\partial t} g & = 4 R \, g - 8 Q_{i \overline{j}} \overline{\eta_i} \cdot \eta_j - 4 \operatorname{Re}( A_{ij} \overline{\eta_i} \cdot \overline{\eta_{j}}), \\ \frac{\partial}{\partial t} J &= \left( 8 \operatorname{Im}\left( \overline{B} {\eta_1 \wedge \eta_2 } \right) - 4 \operatorname{Re}\left( A_{ij} \overline{\eta_i} \wedge \overline{\eta_j} \right) \right) g^{-1}, \end{aligned}$$ # Static solutions in dimension four {#sect:staticsolns} We now study static solutions to symplectic curvature flow in dimension 4. These are solutions which evolve strictly by rescaling under the flow, so for such a solution we must have $$\begin{aligned} \frac{\partial}{\partial t} \Omega &= \lambda \Omega, \\ \frac{\partial}{\partial t} g &= \lambda g, \\ \frac{\partial}{\partial t} J &= 0. \end{aligned}$$ Comparing with ([\[eq:symplcurvfloweqs\]](#eq:symplcurvfloweqs){reference-type="ref" reference="eq:symplcurvfloweqs"}), we see that the static almost Kähler structures are characterized by the following conditions on their second order invariants: $$\label{eq:statinvcond} \begin{aligned} R = \frac{\lambda}{4}, \:\:\: B = 0, \:\:\: Q_{i \bar{j}} = 0, \:\:\: A_{ij} = 0. 
\end{aligned}$$ Therefore, the first and second structure equations for a static solution reduce to $$\begin{aligned} d \eta_i &= - \kappa_{i \overline{j}} \wedge \eta_j + \varepsilon_{ij} N_{k} \, \overline{\eta_j} \wedge \overline{\eta_k}, \\ d N_i =& \left(F_{ij} + \varepsilon_{ij} H \right) \overline{\eta_j} - \kappa_{i \overline{j}} N_j - \kappa_{j \overline{j}} N_i, \\ d \kappa_{i \overline{j}} =& - \kappa_{i \overline{k}} \wedge \kappa_{k \overline{j}} + K_{i\bar{j}k\bar{l}} \overline{\eta_k} \wedge \eta_l + \left(\tfrac{\lambda}{4} + N_l \overline{N_l} \right) \left(-\tfrac{1}{3} \eta_i \wedge \overline{\eta_j} - \tfrac{1}{3} \delta_{i \bar{j}} \eta_k \wedge \overline{\eta_k} \right) \\ & + N_i \overline{N_j} \eta_k \wedge \overline{\eta_k}. \end{aligned}$$ Differentiating the second equation, we find $$\label{eq:firstcondition} 0 = d^2 N_i \wedge \overline{\eta_1 \wedge \eta_2} = \left(F_{ij} \overline{N_j} + \frac{1}{2} \varepsilon_{ij} H \overline{N_j} \right) \eta_1 \wedge \eta_2 \wedge \overline{\eta_1 \wedge \eta_2}.$$ The equations $$\label{eq:firsttorscond} F_{ij} \overline{N_j} + \frac{1}{2} \varepsilon_{ij} H \overline{N_j} = 0, \:\:\: i = 1, 2$$ must therefore be satisfied by any static almost Kähler structure. These equations give a restriction on the second-order invariants of a static solution which is not an algebraic consequence of ([\[eq:statinvcond\]](#eq:statinvcond){reference-type="ref" reference="eq:statinvcond"}), so the static equations ([\[eq:statinvcond\]](#eq:statinvcond){reference-type="ref" reference="eq:statinvcond"}) are not involutive in the sense of exterior differential systems. The equation ([\[eq:firsttorscond\]](#eq:firsttorscond){reference-type="ref" reference="eq:firsttorscond"}) may be simplified by adapting coframes. Suppose $N$ is non-zero at a point $p \in X$. 
Say a coframe $u \in \mathcal{P}_p$ is $N$-adapted if $N_2 = 0$ at $u \in \mathcal{P}.$ The group $\mathrm{U}(2)$ acts transitively on $\Lambda^{0,2} \otimes \Lambda^{0,1}$ with stabilizer $\mathrm{U}(1) \times \mathrm{U}(1),$ so the collection of all $N$-adapted coframes is a $\mathrm{U}(1) \times \mathrm{U}(1)$-bundle over the locus in $X$ where $N \neq 0.$ For simplicity, we shall assume from now on that $N$ is nowhere vanishing on $X$ (otherwise restrict to the open dense locus where $N \neq 0$). Denote the bundle of $N$-adapted coframes by $\mathcal{P}' \to X.$ Equations ([\[eq:firsttorscond\]](#eq:firsttorscond){reference-type="ref" reference="eq:firsttorscond"}) imply that, on $\mathcal{P}',$ $$F_{11} = 0, \:\:\: F_{12} = \tfrac{1}{2} H.$$ Differentiating the identity $N_2 = 0,$ we find that on $\mathcal{P}'$ $$0 = F_{22} \overline{\eta_2} - N_1 \kappa_{2 \bar{1}}.$$ Define $G = F_{22}/N_1,$ so we have $\kappa_{2 \bar{1}} = G \, \overline{\eta_2}.$ Let us also define $\mathbb{R}$-valued 1-forms $\alpha = - i \kappa_{1 \bar{1}}$ and $\beta = - i \kappa_{2 \bar{2}}.$ The forms $\alpha$ and $\beta$ together define a connection on the $\mathrm{U}(1) \times \mathrm{U}(1)$-bundle $\mathcal{P}'.$ The structure equations restricted to $\mathcal{P}'$ now read $$\begin{aligned} d \eta_1 =& i \alpha \wedge \eta_1 + N_1 \overline{\eta_1 \wedge \eta_2}, \\ d \eta_2 =& i \beta \wedge \eta_2 + G \eta_1 \wedge \overline{\eta_2}, \\ d N_1 =& H \overline{\eta_2} + 2 i N_1 \alpha + i N_1 \beta, \\ d \alpha =& i \left(\tfrac{1}{3} \lvert N_1 \rvert^2 - \tfrac{1}{6} \lambda - K_{1 \bar{1} 1 \bar{1}} \right) \eta_1 \wedge \overline{\eta_1} + i K_{2 \bar{2} 2 \bar{1}} \eta_1 \wedge \overline{\eta_2} - i K_{1 \bar{1} 1 \bar{2}} \eta_2 \wedge \overline{\eta_1} \\ & + i \left(\tfrac{2}{3} \lvert N_1 \rvert^2 - \tfrac{1}{12} \lambda + \lvert G \rvert^2 + K_{1 \bar{1} 1 \bar{1}} \right) \eta_2 \wedge \overline{\eta_2}, \\ d \beta =& i \left(-\tfrac{1}{3} \lvert N_1 \rvert^2 - \tfrac{1}{12} 
\lambda + K_{1 \bar{1} 1 \bar{1}} \right) \eta_1 \wedge \overline{\eta_1} - i K_{2 \bar{2} 2 \bar{1}} \eta_1 \wedge \overline{\eta_2} + i K_{1 \bar{1} 1 \bar{2}} \eta_2 \wedge \overline{\eta_1} \\ & + i \left(-\tfrac{2}{3} \lvert N_1 \rvert^2 - \tfrac{1}{6} \lambda - \lvert G \rvert^2 - K_{1 \bar{1} 1 \bar{1}} \right) \eta_2 \wedge \overline{\eta_2} \end{aligned}$$ The identities $d^2 \eta_i = 0$ imply $$\begin{aligned} K_{1 \bar{1} 1 \bar{2}} &= 0, \:\:\:\: K_{2 \bar{2} 2 \bar{1}} = 0, \\ K_{1 \bar{1} 1 \bar{1}} &= \tfrac{1}{3} \lvert N_1 \rvert^2 + \tfrac{1}{12} \lambda - \lvert G \rvert^2, \\ d \, G &= G_{\bar{1}} \eta_1 + G_{2} \overline{\eta_2} - i G \alpha + 2 i G \beta, \end{aligned}$$ for some $\mathbb{C}$-valued functions $G_{\bar{1}}$ and $G_2$ on $\mathcal{P}'.$ The identity $d^2 \alpha = 0$ implies $G G_2 = 0,$ so $G_2$ must vanish identically on $\mathcal{P}'$ (on the locus where $G \neq 0$ this is immediate, and wherever $G$ vanishes identically its derivative $G_2$ vanishes as well). Next, differentiating the identity $\kappa_{2 \bar{1}} = G \, \overline{\eta_2}$ implies $$K_{1\bar{2}1\bar{2}} = - \overline{G_{\bar{1}}}, \:\:\:\:\: K_{2\bar{1}2\bar{1}} = - G_{\bar{1}}.$$ The identity $d^2 G = 0$ yields $$N_1 G_{\bar{1}} \eta_1 \wedge \overline{\eta_1 \wedge \eta_2} + G \left( 3 \lvert N_1 \rvert^2 - \tfrac{1}{2} \lambda \right) \overline{\eta_2} \wedge \eta_1 \wedge \eta_2 = 0,$$ leading to two possibilities: 1. $G$ vanishes identically on $\mathcal{P}'$; or 2. $\lvert N_1 \rvert^2 = \tfrac{1}{6} \lambda$ and $G_{\bar{1}} = 0$ on $\mathcal{P}'.$ Let us suppose case (2) holds. Differentiating $\lvert N_1 \rvert^2 = \tfrac{1}{6} \lambda$ implies $H = 0$ on $\mathcal{P}',$ so we have $$d N_1 = 2 i N_1 \alpha + i N_1 \beta.$$ The identity $d^2 N_1 = 0$ then implies $\lvert N_1 \rvert^2 = 0,$ contradicting our assumption that $N$ is non-zero. 
Therefore case (2) is impossible, so case (1) must hold: $G = 0$ on $\mathcal{P}'.$ Finally, the identity $d^2 N_1 \wedge \overline{\eta_2} = 0$ implies that $\lambda N_1 = 0,$ so we must have $\lambda = 0$ for any non-trivial static solution to symplectic curvature flow. At this stage, the structure equations have simplified to $$\label{eq:finishedstructeqs} \begin{aligned} d \eta_1 =& i \alpha \wedge \eta_1 + N_1 \overline{\eta_1 \wedge \eta_2}, \\ d \eta_2 =& i \beta \wedge \eta_2, \\ d N_1 =& H \overline{\eta_2} + 2 i N_1 \alpha + i N_1 \beta, \\ d \alpha =& i \lvert N_1 \rvert^2 \eta_2 \wedge \overline{\eta_2}, \\ d \beta =& -i \lvert N_1 \rvert^2 \eta_2 \wedge \overline{\eta_2}. \end{aligned}$$ It is easy to check that equations ([\[eq:finishedstructeqs\]](#eq:finishedstructeqs){reference-type="ref" reference="eq:finishedstructeqs"}) represent an involutive prescribed coframing problem in the sense of [@BryEDSNotes]. The primary invariants are the real and imaginary parts of the function $N_1,$ the free derivatives are the real and imaginary parts of $H,$ and the tableau of free derivatives is equivalent to the standard Cauchy-Riemann tableau $$\begin{bmatrix} x & y \\ -y & x \end{bmatrix}.$$ Therefore, Theorem 3 of [@BryEDSNotes] may be applied to show that non-trivial static solutions to symplectic curvature flow exist locally and depend on two functions of one variable. We summarize the conclusions of this section in the following theorem. **Theorem 5**. *A non-trivial static solution to symplectic curvature flow in dimension 4 must have $\lambda = 0.$ Real analytic non-trivial static solutions exist locally and depend on two functions of one variable in the sense of exterior differential systems.* *Remark 6*. We will show in the following subsection that any static solution must be real analytic. 
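The involutivity bookkeeping for the Cauchy-Riemann tableau can be verified by a direct linear-algebra computation (a side check with sympy, not from the paper; the tableau is spanned by the identity matrix and the standard complex structure on $\mathbb{R}^2$, its Cartan characters are $(s_1, s_2) = (2, 0)$, and Cartan's test asks that the prolongation have dimension $s_1 + 2 s_2 = 2$):

```python
import sympy as sp

# The Cauchy-Riemann tableau [[x, y], [-y, x]] is spanned by M1 and M2.
M1 = sp.Matrix([[1, 0], [0, 1]])
M2 = sp.Matrix([[0, 1], [-1, 0]])

# An element of the prolongation is a linear map B : R^2 -> tableau
# satisfying the symmetry condition B(u)v = B(v)u.
x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')
B1 = x1 * M1 + y1 * M2    # B(e1)
B2 = x2 * M1 + y2 * M2    # B(e2)
e1, e2 = sp.Matrix([1, 0]), sp.Matrix([0, 1])

# Impose B(e1) e2 = B(e2) e1 and count the surviving free parameters.
sol = sp.solve(list(B1 * e2 - B2 * e1), [x2, y2], dict=True)[0]
prolongation_dim = 4 - len(sol)
print(prolongation_dim)   # 2, so Cartan's test is passed
```

The last nonzero character $s_1 = 2$ is what gives the generality count of two functions of one variable.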
### Integrating the structure equations In this subsection we shall integrate the structure equations ([\[eq:finishedstructeqs\]](#eq:finishedstructeqs){reference-type="ref" reference="eq:finishedstructeqs"}) to obtain a local normal form for four-dimensional static solutions to symplectic curvature flow. This local normal form will allow us to draw conclusions on the global structure of such solutions. We begin by noting that $d \left(\alpha + \beta \right) = 0,$ so we have locally $\beta = -\alpha + d \varphi$ for some function $\varphi.$ The function $\varphi$ may be integrated away in the structure equations, so we may assume we are working on the $\mathrm{U}(1)$-subbundle $\mathcal{P}'' \to X$ where $\beta = -\alpha.$ The distribution $\eta_1 = 0$ descends from $\mathcal{P}''$ to $X$ to give a well-defined real codimension-two distribution on $X.$ Each leaf of the resulting foliation has a metric given by the restriction of $\left\lvert N_1 \right\rvert^2 \overline{\eta_2} \cdot \eta_2,$ and the equations $$\begin{aligned} d \left(N_1 \overline{\eta_2}\right) &= 2 i \alpha \wedge \left(N_1 \overline{\eta_2}\right), \\ d \alpha &= -i \left(N_1 \overline{\eta_2}\right) \wedge \overline{\left(N_1 \overline{\eta_2}\right)} \end{aligned}$$ imply that this metric has constant curvature $-4.$ It follows that if $U \subset X$ is simply-connected, then there exists a $\mathbb{C}$-valued function $z_1$ and an $\mathbb{R}$-valued function $s$ on $\mathcal{P}''|_U$ such that $$N_1 \, \overline{\eta_2} = \frac{e^{is} \, d z_1}{1 - \left\lvert z_1 \right\rvert^2}, \:\:\:\: \alpha = -\frac{i}{2} \frac{\overline{z_1} d z_1 - z_1 d \overline{z_1}}{1 - \left\lvert z_1 \right\rvert^2} + ds.$$ We restrict to the locus $s = 0.$ This amounts to restricting the $\mathrm{U}(1)$-structure $\mathcal{P}''$ to an $\left\lbrace e \right\rbrace$-structure over $U$. 
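The constant-curvature claim can be checked directly in the coordinate $z_1$, where (at $s = 0$) the leaf metric $\left\lvert N_1 \right\rvert^2\, \overline{\eta_2} \cdot \eta_2$ becomes $\lvert d z_1 \rvert^2 / (1 - \lvert z_1 \rvert^2)^2$ (a side verification with sympy, using the formula $K = -e^{-2u} \Delta u$ for the Gauss curvature of a conformal metric $e^{2u}(dx^2 + dy^2)$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 1 - x**2 - y**2                 # 1 - |z_1|^2 with z_1 = x + iy
u = -sp.log(f)                      # conformal factor: e^{2u} = 1/f^2
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
K = -f**2 * lap_u                   # Gauss curvature K = -e^{-2u} * Laplacian(u)
print(sp.simplify(K))               # -4
```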
The equation $$d \left( N_1 d z_1 \right) = \frac{1}{2} \frac{N_1 z_1}{1 - \lvert z_1 \rvert^2} d \overline{z_1} \wedge d z_1$$ implies $$\label{eq:nijint} N_1 = \frac{h(z_1)}{\sqrt{1 - \lvert z_1 \rvert^2}}$$ for some holomorphic function $h$ of a single complex variable. The $d \eta_1$ equation in ([\[eq:finishedstructeqs\]](#eq:finishedstructeqs){reference-type="ref" reference="eq:finishedstructeqs"}) implies $$d \begin{bmatrix} \eta_1 \\ \overline{\eta_1} \end{bmatrix} = - \frac{1}{2} \frac{1}{1 - \lvert z_1 \rvert^2} \begin{bmatrix} \overline{z_1} d z_1 - z_1 d \overline{z_1} & 2 \, d z_1 \\ 2 \, d \overline{z_1} & z_1 d \overline{z_1} - \overline{z_1} d z_1 \end{bmatrix} \wedge \begin{bmatrix} \eta_1 \\ \overline{\eta_1} \end{bmatrix}.$$ It follows that $$d \left( \frac{\eta_1 + z_1 \overline{\eta_1}}{\sqrt{1-\lvert z_1 \rvert^2}} \right) = 0,$$ so there exists a coordinate $z_2$ on $U$ with $$\eta_1 = \frac{d z_2 - z_1 d \overline{z_2}}{\sqrt{1-\lvert z_1 \rvert^2}}.$$ The coordinate $z_2$ is unique up to addition of a constant. We have now proven the first part of the following theorem. The second part follows by reversing the steps above. **Theorem 7**. *Let $\left(X, \Omega, g \right)$ be a non-trivial 4-dimensional static solution to symplectic curvature flow and suppose $p \in X$ is a point where the Nijenhuis tensor is non-vanishing. 
Then there is a local neighbourhood of $p$ with complex coordinates $z_1$ and $z_2$ and a holomorphic function $h(z_1)$ such that the symplectic form $\Omega$ and metric $g$ on $X$ are given by $$\label{eq:finalint} \begin{aligned} \Omega &= \frac{i}{2} \left( \frac{d z_1 \wedge d \overline{z_1} }{\lvert h(z_1) \rvert^2 \left(1 - \lvert z_1 \rvert^2 \right)} + d z_2 \wedge d \overline{z_2} \right), \\ g &= \frac{d z_1 \cdot d \overline{z_1} }{\lvert h(z_1) \rvert^2 \left(1 - \lvert z_1 \rvert^2 \right)} + \frac{1 + \lvert z_1 \rvert^2}{1 - \lvert z_1 \rvert^2} d z_2 \cdot d \overline{z_2} - \operatorname{Re}\left(\frac{\overline{z_1} d z_2^2}{1 - \lvert z_1 \rvert^2}\right). \end{aligned}$$* *Conversely, let $h(z_1)$ be a meromorphic function on the unit disk and let $\Sigma \subset \mathbb{C}$ be the subset of the unit disk where $h(z_1)$ has no zeros or poles. Then equation ([\[eq:finalint\]](#eq:finalint){reference-type="ref" reference="eq:finalint"}) defines a static solution to symplectic curvature flow on the 4-manifold $\Sigma \times \mathbb{C}.$* The local normal form of Theorem [Theorem 7](#thm:locnormalform){reference-type="ref" reference="thm:locnormalform"} is strong enough to derive global consequences. **Corollary 8**. *There are no non-trivial complete static solutions to symplectic curvature flow in dimension 4.* *Proof.* Let $L$ be a leaf of the foliation $\eta_1 = 0$ on $X$ and let $\widehat{L}$ denote the universal cover of $L.$ The function $z_1 : \widehat{L} \to \mathbb{C}$ and holomorphic function $h(z_1)$ exist globally on $\widehat{L},$ and $z_1$ identifies $\widehat{L}$ with the unit disk $\mathbb{D} \subset \mathbb{C}.$ Formula ([\[eq:finalint\]](#eq:finalint){reference-type="ref" reference="eq:finalint"}) implies that the boundary $\left\lvert z_1 \right\rvert = 1$ is at finite distance, and the function $\left\lvert N_1 \right\rvert^2$ blows up there. 
Therefore the metric induced on $\widehat{L}$ is incomplete, hence the metric $g|_L$ is incomplete, hence $g$ is incomplete. ◻ # Local existence of soliton solutions In the previous section we showed that the static system was non-involutive in the sense of exterior differential systems. This non-involutivity led to several compatibility conditions which placed severe restrictions on the local geometry of a static solution. By contrast, the soliton equation is involutive, as shall be explained in this section. The upshot is a local existence theorem for soliton solutions. The soliton equations are $$\begin{aligned} \frac{\partial}{\partial t} \Omega &= \lambda \Omega + \mathcal{L}_V \Omega, \\ \frac{\partial}{\partial t} g &= \lambda g + \mathcal{L}_V g, \end{aligned}$$ where $\lambda \in \mathbb{R}$ is a constant and $V$ is a vector field on $X.$ Let $V$ be a vector field on an almost-Kähler 4-manifold $X,$ thought of as a $\mathrm{U}(2)$-equivariant map $\mathcal{B} \to \mathbb{C}^2,$ where $\mathcal{B} \to X$ denotes the $\mathrm{U}(2)$-coframe bundle determined by the almost Kähler structure, and let $V_i$ be the components of this map with respect to the standard basis of $\mathbb{C}^2.$ There exist functions $U,$ $S_{ij},$ $W_{i \bar{j}},$ and $Y$ with symmetries $W_{i \bar{i}} = 0,$ $S_{ij} = S_{ji}$ such that, on $\mathcal{B},$ $$\label{eq:dVectorField} d V_i + \kappa_{i \overline{j}} V_j = \left( W_{i \bar{j}} + Y \delta_{i \bar{j}} \right) \eta_j + \left( S_{ij} + U \varepsilon_{ij} \right) \overline{\eta_j}.$$ These functions appear in the Lie derivatives of $\Omega$ and $g$ with respect to $V$: $$\begin{aligned} \mathcal{L}_V \Omega &= - \tfrac{i}{2} \left(W_{i \bar{j}} + \overline{W_{j \bar{i}}} \right) \overline{\eta_{i}} \wedge \eta_j + 2\, \operatorname{Re}\left(U \right) \Omega + \operatorname{Im}\left( \left( Y - V_i \overline{N_i} \right) \eta_1 \wedge \eta_2 \right) \\ \mathcal{L}_V g & = 2\, \operatorname{Re}\left( \overline{S_{ij}} \eta_i \cdot \eta_j \right) + 4 \operatorname{Re}\left( \overline{\varepsilon_{kj}} V_k \overline{N}_i \eta_i \cdot 
\eta_j \right) + \left(W_{i \bar{j}} + \overline{W}_{j \bar{i}} \right) \overline{\eta}_i \cdot \eta_j + 2 \, \operatorname{Re}(U) \, g. \end{aligned}$$ Comparing with the symplectic curvature flow equations ([\[eq:symplcurvfloweqs\]](#eq:symplcurvfloweqs){reference-type="ref" reference="eq:symplcurvfloweqs"}), we see that the second-order invariants of a soliton solution satisfy $$\label{eq:SolConds} \begin{aligned} Q_{i \bar{j}} &= -\tfrac{1}{4} \operatorname{Re}(W_{i \bar{j}}), & R &= \tfrac{1}{2} \operatorname{Re}(Y) + \tfrac{1}{4} \lambda, \\ A_{ij} &= -\tfrac{1}{2} S_{ij} + \tfrac{1}{2} \varepsilon_{ik} \overline{V_k} N_j + \tfrac{1}{2} \varepsilon_{jk} \overline{V_k} N_i, & B &= \tfrac{1}{8} U + \tfrac{1}{8} \overline{V_i} N_i. \end{aligned}$$ Conversely, if $(X, \Omega, J)$ is an almost-Kähler 4-manifold with a vector field $V$ so that equations ([\[eq:fourdCone\]](#eq:fourdCone){reference-type="ref" reference="eq:fourdCone"}), ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}), ([\[eq:dVectorField\]](#eq:dVectorField){reference-type="ref" reference="eq:dVectorField"}), ([\[eq:SolConds\]](#eq:SolConds){reference-type="ref" reference="eq:SolConds"}) are satisfied on the $\mathrm{U}(2)$-bundle $\mathcal{B},$ then $(X, \Omega, J)$ is a soliton for symplectic curvature flow. **Theorem 9**. *Solitons for the symplectic curvature flow exist locally and depend on 10 functions of 3 variables in the sense of exterior differential systems.* *Proof.* We first reduce the problem of constructing symplectic curvature flow solitons to a prescribed coframing problem in the style of [@BryEDSNotes]. 
By the above paragraph, the $\mathrm{U}(2)$-coframe bundle $\mathcal{B} \to X$ of a symplectic curvature soliton carries 1-forms $\eta$ and $\kappa$ and functions $N_i,$ $V_i,$ $W_{i\bar{j}}$, $Y$, $S_{ij},$ $U,$ $F_{ij}$ and $K_{i \bar{j} k \bar{l}}$ satisfying equations ([\[eq:fourdCone\]](#eq:fourdCone){reference-type="ref" reference="eq:fourdCone"}), ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}), ([\[eq:dVectorField\]](#eq:dVectorField){reference-type="ref" reference="eq:dVectorField"}), ([\[eq:SolConds\]](#eq:SolConds){reference-type="ref" reference="eq:SolConds"}). Conversely, if $M$ is an 8-manifold together with 1-forms $\eta_i$ and $\kappa_{i \bar{j}}$ and functions $N_i,$ $V_i,$ $W_{i\bar{j}}$, $Y$, $S_{ij},$ $U,$ $F_{ij}$ and $K_{i \bar{j} k \bar{l}}$ satisfying equations ([\[eq:fourdCone\]](#eq:fourdCone){reference-type="ref" reference="eq:fourdCone"}), ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}), ([\[eq:dVectorField\]](#eq:dVectorField){reference-type="ref" reference="eq:dVectorField"}), ([\[eq:SolConds\]](#eq:SolConds){reference-type="ref" reference="eq:SolConds"}), then an argument similar to Theorem 1 of [@BryantBochKah01] implies that (after possibly shrinking $M$) the 4-dimensional leaf space $X$ of the integrable plane field $\eta = 0$ carries an almost-Kähler structure which is a soliton for the symplectic curvature flow and $M$ may be identified with an open set in the $\mathrm{U}(2)$-bundle $\mathcal{B} \to X$ associated to this almost-Kähler structure. 
The prescribed coframing problem ([\[eq:fourdCone\]](#eq:fourdCone){reference-type="ref" reference="eq:fourdCone"}), ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}), ([\[eq:dVectorField\]](#eq:dVectorField){reference-type="ref" reference="eq:dVectorField"}), ([\[eq:SolConds\]](#eq:SolConds){reference-type="ref" reference="eq:SolConds"}) is written in a form where it is natural to attempt to apply Theorem 3 of [@BryEDSNotes]. The 'primary invariants' are the functions $N_i,$ $V_i,$ $\operatorname{Re}(Y),$ $\operatorname{Re}(W_{i \bar{j}}),$ $U,$ $S_{ij},$ and $K_{i \bar{j} k \bar{l}},$ while the derived invariants consist of the covariant derivatives of these functions with respect to $\kappa_{i \bar{j}},$ taking the identity $d^2 \eta = 0$ into account. However, Cartan's existence theorem cannot be applied directly because the tableau of free derivatives is not involutive. One may compute that it has Cartan characters $(24,22,13,1),$ while the dimension of the prolongation is $106,$ so Cartan's test fails. Nevertheless, the system does become involutive after one prolongation. We omit the details here due to length, but after prolongation the tableau of free derivatives has Cartan characters $(56,31,10,0)$ and the dimension of the prolongation is $148 = (56)+2(31)+3(10)+4(0),$ so Cartan's test is passed and the system is involutive. Existence and generality in the real analytic category then follow from Theorem 3 of [@BryEDSNotes]. ◻ *Remark 10* (Real analyticity). Streets--Tian [@StreetsTian14] prove that symplectic curvature flow is parabolic modulo diffeomorphism. Similarly, it can be shown that the soliton system is elliptic modulo diffeomorphism. Therefore, symplectic curvature flow solitons which are $C^{1, \alpha}$ in harmonic coordinates for some $\alpha > 0$ must be real analytic in those coordinates. *Remark 11* (The gradient case). 
The equation $d V^\flat = 0$ implies $$\begin{aligned} U & = \overline{V_i} N_i, & \operatorname{Im}(Y) &= 0, & \operatorname{Im}(W_{i \bar{j}}) &= 0. & & \end{aligned}$$ Adjoining these equations to ([\[eq:fourdCone\]](#eq:fourdCone){reference-type="ref" reference="eq:fourdCone"}), ([\[eq:structtwo\]](#eq:structtwo){reference-type="ref" reference="eq:structtwo"}), ([\[eq:dVectorField\]](#eq:dVectorField){reference-type="ref" reference="eq:dVectorField"}), ([\[eq:SolConds\]](#eq:SolConds){reference-type="ref" reference="eq:SolConds"}) gives a prescribed coframing problem whose local solutions correspond to gradient solitons. However, in contrast to the general case, this system is not involutive even after prolongation because the equation $d^2 Y = 0$ yields a restriction on the 3-jet of a solution. In the language of exterior differential systems, this problem has *intrinsic torsion*. Unfortunately, the restriction on the 3-jet is more algebraically complicated than the equations encountered in §[3](#sect:staticsolns){reference-type="ref" reference="sect:staticsolns"} and the existence and local generality of gradient solitons remains unknown. *Remark 12* (Comparison to Laplacian flow). Symplectic curvature flow has a formal similarity to the Laplacian flow of closed $\mathrm{G}_2$-structures in that both are flows of geometric structures with torsion defined by closed differential forms. This formal similarity extends to the local existence theory for static and soliton solutions. The static solutions to Laplacian flow are the *eigenforms*, analyzed in [@BallQuad23] where it was shown that the relevant EDS is not involutive. By contrast, the general Laplace soliton system is well-behaved. In a recent paper Bryant [@bryant2023generality] has shown that the EDS describing Laplace solitons is involutive. 
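Returning to the proof of Theorem 9: the two applications of Cartan's test there amount to the following arithmetic (a bookkeeping sketch; `cartan_test` is an illustrative helper, not from the paper):

```python
def cartan_test(characters, prolongation_dim):
    """Cartan's test: a tableau with characters (s_1, ..., s_n) is involutive
    iff its prolongation has dimension s_1 + 2 s_2 + ... + n s_n."""
    return prolongation_dim == sum(k * s for k, s in enumerate(characters, start=1))

# Before prolongation: the bound is 24 + 2*22 + 3*13 + 4*1 = 111, but the
# prolongation is 106-dimensional, so the test fails.
print(cartan_test((24, 22, 13, 1), 106))    # False
# After one prolongation: 56 + 2*31 + 3*10 + 4*0 = 148 matches, so it passes.
print(cartan_test((56, 31, 10, 0), 148))    # True
```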
## An example {#eg:ASL2R} Let $\mathrm{G} = \mathrm{SL}_2 \mathbb{R}\ltimes \mathbb{R}^2$ denote the group of volume preserving affine transformations of $\mathbb{R}^2$ and write the left-invariant Maurer-Cartan form $\mu$ of $\mathrm{G}$ as $$\mu = \begin{bmatrix} 0 & 0 & 0 \\ \alpha_1 & \alpha_3 & \beta - \alpha_4 \\ \alpha_2 & - \beta - \alpha_4 & -\alpha_3 \end{bmatrix}.$$ Let $\mathrm{S}^1$ denote the circle subgroup of $\mathrm{G}$ generated by the action of the vector field dual to $\beta.$ For each pair of non-zero numbers $(a,b) \in \mathbb{R}^2$ the 2-form $\Omega_{a,b}$ and metric $g_{a,b}$ defined by $$\begin{aligned} \Omega_{a,b} &= a^2 \, \alpha_1 \wedge \alpha_2 + b^2 \, \alpha_3 \wedge \alpha_4, \\ g_{a,b} &= a^2 \left( \alpha_1^2 + \alpha_2^2 \right) + b^2 \left( \alpha_3^2 + \alpha_4^2 \right), \end{aligned}$$ descend to the 4-dimensional quotient $X = \mathrm{G}/ \mathrm{S}^1$ to define an almost Kähler structure on $X.$ The $\mathrm{U}(2)$-invariants of this structure may be computed via the Maurer-Cartan equation $d \mu = - \mu \wedge \mu.$ For this structure, we find that the right-hand sides of the symplectic curvature flow equations are given by $$\begin{aligned} 4 R \, \Omega + 4 i Q_{i \bar{j}} \, \overline{\eta_i} \wedge \eta_{j} - 8 \, \operatorname{Im}\left(\overline{B} {\eta_1 \wedge \eta_2 } \right) &= 4 \, \alpha_3 \wedge \alpha_4, \\ 4 R \, g - 8 Q_{i \overline{j}} \overline{\eta_i} \cdot \eta_j - 4 \operatorname{Re}( A_{ij} \overline{\eta_i} \cdot \overline{\eta_{j}}) &= 4 \left(\alpha_3^2 + \alpha_4^2 \right). \end{aligned}$$ Therefore, the 1-parameter family of almost-Kähler structures on $X$ defined by $$\begin{aligned} \Omega(t) &= a^2 \, \alpha_1 \wedge \alpha_2 + \sqrt{4 t + b^4} \, \alpha_3 \wedge \alpha_4, \\ g(t) &= a^2 \left(\alpha_1^2 + \alpha_2^2 \right) + \sqrt{4 t + b^4} \left(\alpha_3^2 + \alpha_4^2 \right) 
\end{aligned}$$ gives a solution to symplectic curvature flow with initial condition $\Omega(0) = \Omega_{a,b},$ $g(0) = g_{a,b}.$ For any $a,$ $a'$ the structures $(\Omega_{a,b}, g_{a,b})$ and $(\Omega_{a',b}, g_{a',b})$ are diffeomorphism equivalent, so in fact each $(\Omega(t), g(t))$ defines a soliton solution to symplectic curvature flow. *Remark 13*. The manifold $X$ defined above is non-compact, and $\mathrm{G}$ does not admit a uniform lattice, so $X$ has no compact quotients. However, $\mathrm{G}$ does admit non-uniform lattices $\Gamma$ (for example $\Gamma = \mathrm{SL}_2 \mathbb{Z} \ltimes \mathbb{Z}^2$) and these give quotients $\Gamma \backslash X$ with finite volume. The 1-parameter family of almost-Kähler structures $(\Omega(t), g(t))$ descends to each $\Gamma \backslash X$ to give a solution of symplectic curvature flow; however, the diffeomorphism between the structures $(\Omega_{a,b}, g_{a,b})$ and $(\Omega_{a',b}, g_{a',b})$ does not descend to $\Gamma \backslash X,$ so $(\Omega(t), g(t))$ does not give a soliton solution on $\Gamma \backslash X.$
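The explicit family above can be sanity-checked numerically. Writing $\Omega(t) = a^2 \, \alpha_1 \wedge \alpha_2 + c(t) \, \alpha_3 \wedge \alpha_4$, the ansatz reduces the flow to the scalar ODE $c'(t) = 2/c(t)$ with $c(0) = b^2$ (this reduction is our reading of the computation above, not a claim made explicitly in the text), and $c(t) = \sqrt{4t + b^4}$ is its solution. A minimal finite-difference check:

```python
import math

def c(t, b):
    # Coefficient of alpha_3 ^ alpha_4 in the candidate soliton Omega(t).
    return math.sqrt(4.0 * t + b ** 4)

def ode_residual(t, b, h=1e-6):
    # Residual of the assumed reduced flow ODE c'(t) = 2 / c(t),
    # with c'(t) approximated by a central finite difference.
    dc = (c(t + h, b) - c(t - h, b)) / (2.0 * h)
    return dc - 2.0 / c(t, b)

b = 1.5
residuals = [abs(ode_residual(t, b)) for t in (0.0, 0.5, 3.0)]
```

The residuals vanish to finite-difference accuracy, and $c(0) = b^2$ recovers the initial structure $(\Omega_{a,b}, g_{a,b})$.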
arxiv_math
{ "id": "2310.00143", "title": "Static solutions to symplectic curvature flow in dimension four", "authors": "Gavin Ball", "categories": "math.DG math.SG", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this note, merging ideas from [@loeffen_2009b] and [@renaud_2019], we prove that an $(a,b)$-strategy maximizes dividend payments subject to fixed transaction costs in a spectrally negative Lévy model with Parisian ruin, as long as the tail of the Lévy measure is log-convex. address: Département de mathématiques, Université du Québec à Montréal (UQAM), 201 av. Président-Kennedy, Montréal (Québec) H2X 3Y7, Canada author: - Jean-François Renaud bibliography: - references-SNLPs.bib title: A note on the optimal dividends problem with transaction costs in a spectrally negative Lévy model with Parisian ruin --- # Introduction Maximizing dividend payments is a trade-off between paying out (as early and) as much as possible and avoiding ruin, which preserves the possibility of more payments in the future. Classically, three versions of this problem have been studied: singular, absolutely continuous and impulse, which correspond to different sets of admissible control strategies. See, e.g., [@jeanblanc-shiryaev_1995] for an analysis of these three problems in a Brownian model. Later, these problems have been studied in more general models; in a spectrally negative Lévy model, they have been studied for example in [@avram-et-al_2007; @loeffen_2008], [@kyprianou-et-al_2012] and [@alvarez-rakkolainen_2009; @loeffen_2009b; @thonhauser-albrecher_2011], respectively. See also [@albrecher-thonhauser_2009] for a survey on optimal dividend strategies. In the above control problems, the termination time is the classical ruin time or, more precisely, the first time the controlled process goes below zero (or another pre-specified critical level). For more than a decade now, *exotic* definitions of ruin have been studied. One of them is Parisian ruin, with exponential clocks or with deterministic delays. Naturally, the effect of a given type of Parisian ruin on the maximization of dividends has been considered.
In a spectrally negative Lévy model, the singular control problem with Parisian ruin has been studied in [@czarna-palmowski_2014] and [@renaud_2019], for deterministic delays and exponential delays respectively, while the absolutely continuous version with Parisian ruin is analyzed in [@locas-renaud_2023]. In this note, we aim at *completing the trilogy* by considering the impulse control problem in a spectrally negative Lévy model with Parisian ruin. Our main contribution is a solution to this problem, in the spirit of recent literature. More precisely, we give a fairly general condition on the Lévy measure of the underlying spectrally negative Lévy process under which a simple impulse control strategy is optimal. The condition is that the tail of the Lévy measure is log-convex. It turns out that this condition also improves the analysis originally provided in [@loeffen_2009b] (see [@loeffen-renaud_2010] for a discussion) when standard ruin is considered. It is also the condition imposed in the solution of the singular control problem in [@renaud_2019]. Our solution is based on techniques used and results obtained in both [@loeffen_2009b] and [@renaud_2019]. As a consequence, mathematical details will be kept at a minimum when standard arguments are involved. # Problem formulation and verification lemma On a filtered probability space $\left( \Omega, \mathcal{F}, \left\lbrace \mathcal{F}_t, t \geq 0 \right\rbrace, \mathbb{P}\right)$, let $X=\left\lbrace X_t , t \geq 0 \right\rbrace$ be a spectrally negative Lévy process (SNLP) with Laplace exponent $$\theta \mapsto \psi(\theta) = \gamma \theta + \frac{1}{2} \sigma^2 \theta^2 + \int^{\infty}_0 \left( \mathrm{e}^{-\theta z} - 1 + \theta z \mathbf{1}_{(0,1]}(z) \right) \nu(\mathrm{d}z) ,$$ where $\gamma \in \mathbb{R}$ and $\sigma \geq 0$, and where $\nu$ is a sigma-finite measure on $(0,\infty)$ satisfying $$\int^{\infty}_0 (1 \wedge x^2) \nu(\mathrm{d}x) < \infty .$$ This measure is called the Lévy measure of $X$. 
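To fix ideas, the following sketch (illustrative only, not taken from the note) instantiates the Laplace exponent in the jump-free case $\nu \equiv 0$, i.e. Brownian motion with drift, where $\psi(\theta) = \gamma\theta + \tfrac{1}{2}\sigma^2\theta^2$, and computes the right inverse $\Phi(q) = \sup\{\lambda \geq 0 \colon \psi(\lambda) = q\}$, which appears repeatedly in the sequel, as the positive root of the quadratic; the parameter values are arbitrary.

```python
import math

GAMMA, SIGMA = 1.0, 1.0   # illustrative drift and volatility

def psi(theta):
    # Laplace exponent of X when nu = 0: psi(theta) = gamma*theta + sigma^2*theta^2/2.
    return GAMMA * theta + 0.5 * SIGMA ** 2 * theta ** 2

def Phi(q):
    # Right inverse of psi: the largest lambda >= 0 with psi(lambda) = q
    # (positive root of the quadratic; psi is convex and psi(0) = 0).
    return (-GAMMA + math.sqrt(GAMMA ** 2 + 2.0 * q * SIGMA ** 2)) / SIGMA ** 2

q = 0.1
phi_q = Phi(q)
```

Since $\psi$ is convex with $\psi(0) = 0$ and $\psi(\theta) \to \infty$, the root $\Phi(q)$ is well defined and positive for every $q > 0$.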
As $X$ is a (strong) Markov process, its law when starting from $X_0 = x$ will be denoted by $\mathbb{P}_x$ and the corresponding expectation by $\mathbb{E}_x$. If $X_0=0$, we will simply write $\mathbb{P}$ and $\mathbb{E}$. With this notation, we thus have $\psi(\theta)=\ln \left(\mathbb{E}\left[\mathrm{e}^{\theta X_1} \right] \right)$. Let us model a cash process using $X$. We consider the optimization problem in which, at each dividend payment, a fixed transaction cost of $\beta>0$ is paid by the policyholders. Therefore, a dividend strategy $\pi$ is represented by a non-decreasing, left-continuous and adapted (thus predictable) stochastic process $L^\pi = \left\lbrace L^\pi_t , t \geq 0 \right\rbrace$ such that $L^\pi_0 = 0$ and $$\label{E:lump-sum} L^\pi_t = \sum_{0 \leq s < t} \Delta L^\pi_s ,$$ where $\Delta L^\pi_s = L^\pi_{s+} - L^\pi_s$; here, $L^\pi_t$ represents the cumulative amount of dividends paid up to time $t$. The assumption in [\[E:lump-sum\]](#E:lump-sum){reference-type="eqref" reference="E:lump-sum"} means that all dividend payments are lump-sum dividend payments, which is in contrast with the singular version of the problem. Mathematically speaking, the process $L^\pi$ is a pure-jump process. In particular, $\sum_{0 \leq s < t} \mathbf{1}_{\{\Delta L^\pi_s > 0 \}}$ represents the number of dividend payments made up to time $t$. For a strategy $\pi$, the controlled cash process $U^\pi = \left\lbrace U^\pi_t , t \geq 0 \right\rbrace$ is given by $U^\pi_t = X_t - L^\pi_t$.
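As an illustration of the controlled dynamics $U^\pi = X - L^\pi$, the sketch below simulates an Euler-discretized path of a Brownian motion with drift (a stand-in for $X$, with made-up parameters `a`, `b`, `mu`, `sigma`) under the simple impulse rule "whenever $U$ reaches a level $b$, pay out everything above a level $a$": the dividend process is then non-decreasing and pure-jump, and each payment pushes the controlled process back to $a$. The starting point is taken above $b$ so that at least one payment occurs.

```python
import random

random.seed(7)

def simulate(a=1.0, b=3.0, x0=3.5, mu=0.5, sigma=1.0, dt=1e-3, n_steps=20_000):
    # Euler discretization of X_t = x0 + mu*t + sigma*B_t; whenever the
    # controlled process U = X - L reaches level b, the excess above a is
    # paid out as a lump-sum dividend (so U is pushed back to a).
    x, cum_div = x0, 0.0
    payments = []
    for _ in range(n_steps):
        x += mu * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
        u = x - cum_div
        if u >= b:
            payments.append(u - a)
            cum_div += u - a
    return cum_div, payments

total, payments = simulate()
```

Every payment is at least $b - a$ (the overshoot above $b$ is paid out as well, in the discretized path), and the cumulative dividend process is exactly the sum of the individual lump sums.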
In our control problem, the termination time is given by the time of Parisian ruin (with fixed rate $p>0$) defined by $$\sigma_p^\pi= \inf \left\lbrace t>0 \colon t-g_t^\pi > \mathbf{e}_p^{g_t^\pi} \; \text{and} \; U^\pi_t < 0 \right\rbrace ,$$ where $g_t^\pi = \sup \left\lbrace 0 \leq s \leq t \colon U^\pi_s \geq 0 \right\rbrace$, with $\mathbf{e}_p^{g_t^\pi}$ an independent random variable, following the exponential distribution with mean $1/p$, associated to the corresponding excursion below $0$. A strategy $\pi$ is said to be admissible if $\Delta L^\pi_t \leq U^\pi_t$, for all $t < \sigma_p^\pi$. In words, a dividend payment should not push the cash process down into the *red zone* (below zero). As a consequence, no dividends can be paid when $U^\pi$ is below zero because the dividend process $L^\pi$ is assumed to be non-decreasing. Let us denote the set of admissible dividend strategies by $\Pi_{\beta,p}$. For a fixed discount rate $q \geq 0$, the performance function of an admissible dividend strategy $\pi \in \Pi_{\beta,p}$ is given by $$v_\pi (x) = \mathbb{E}_x \left[ \sum_{0 \leq t < \sigma_p^\pi} \mathrm{e}^{-q t} \left( \Delta L^\pi_t - \beta \mathbf{1}_{\{\Delta L^\pi_t > 0 \}} \right) \right] , \quad x \in \mathbb{R}.$$ Therefore, the value function $v_\ast$ of the problem is defined on $\mathbb{R}$ by $v_\ast (x) = \sup_{\pi \in \Pi_{\beta,p}} v_\pi (x)$. For the rest of the paper, we assume that the control problem parameters $\beta, p, q$ are fixed. Our goal is to obtain an analytical expression for $v_\ast$ by finding an optimal strategy $\pi_\ast \in \Pi_{\beta,p}$, i.e., such that $v_{\pi_\ast} (x) = v_\ast (x)$, for all $x \in \mathbb{R}$, and for which we can compute the performance function. **Remark 1**. 
*Note that the performance function, which is now defined on the whole real line, can also be written as follows: $$v_\pi (x) = \mathbb{E}_x \left[ \int_0^{\sigma_p^\pi} \mathrm{e}^{-q t} \mathrm{d} \left( L^\pi_t - \beta \sum_{0 \leq s < t} \mathbf{1}_{\{\Delta L^\pi_s > 0 \}} \right) \right] .$$* Define the operator $$\label{eq:generator} \Gamma v(x) = \gamma v^\prime (x)+\frac{\sigma^2}{2} v^{\prime \prime}(x) + \int_{0+}^\infty \left( v(x-z)-v(x)+v^\prime (x) z \mathbf{1}_{(0,1]}(z) \right) \nu(\mathrm{d}z)$$ acting on functions $v$, defined on $\mathbb{R}$, such that $x \mapsto \Gamma v(x)$ is defined almost everywhere. We say that a function $v$ is sufficiently smooth if it is continuously differentiable on $(0,\infty)$ when $X$ is of bounded variation and piecewise twice continuously differentiable when $X$ is of unbounded variation. Here is a verification lemma for our maximization problem: **Lemma 1**. *Suppose that $\hat{\pi} \in \Pi_{\beta,p}$ is such that $v_{\hat{\pi}}$ is sufficiently smooth. If, for almost every $x \in \mathbb{R}$, $$\Gamma v_{\hat{\pi}} (x) - \left( q+p \mathbf{1}_{(-\infty,0)}(x) \right) v_{\hat{\pi}} (x) \leq 0$$ and if, for all $y \geq x \geq 0$, $$v_{\hat{\pi}}(y)-v_{\hat{\pi}}(x) \geq y-x-\beta ,$$ then $\hat{\pi}$ is an optimal strategy for the control problem.* *Proof.* Set $w:=v_{\hat{\pi}}$ and let $\pi \in \Pi_{\beta,p}$ be an arbitrary admissible strategy. We will use the main ingredients in the proof of the verification lemma in [@renaud_2019]. Let us first apply a change-of-variable formula or an Ito-type formula (depending on the level of smoothness of $w$) to the multidimensional process $\left( t, \int_0^t \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r , U^\pi_t \right)$. 
Consequently, for $t > 0$, we can write $$\begin{gathered} \mathrm{e}^{-q t - p \int_0^t \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r} w \left(U^\pi_t \right) - w \left(U^\pi_0 \right) \\ \leq \int_0^t \mathrm{e}^{-q s - p \int_0^s \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r} \left[ \Gamma w \left(U^\pi_s \right) -q w \left(U^\pi_s \right) - p \mathbf{1}_{(-\infty,0)} (U^\pi_s) w \left(U^\pi_s \right) \right] \mathrm{d}s \\ - \sum_{0 < s < t} \mathrm{e}^{-q s - p \int_0^s \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r} \left(\Delta L^\pi_s - \beta \mathbf{1}_{\{\Delta L^\pi_s > 0\}} \right) .\end{gathered}$$ To obtain this last inequality, we used the fact that the jumps of $U^\pi$ can come from either $X$ or $L^\pi$ and that $L^\pi$ is a pure-jump process. Also, we used one of the assumptions yielding $$w \left(U^\pi_s - \Delta L^\pi_s \right) - w \left(U^\pi_s \right) < - \Delta L^\pi_s + \beta$$ at a time $s$ for which $\Delta L^\pi_s > 0$. Second, let us consider an independent (of the sigma-algebra $\mathcal{F}_\infty := \sigma \left( \cup_{i \geq 0} \mathcal{F}_i \right)$) Poisson process with intensity measure $p \, \mathrm{d}t$ and jump times $\left\lbrace T^p_i , i \geq 1 \right\rbrace$, allowing us to write $$\mathrm{e}^{- p \int_0^s \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r} = \mathbb{P}_x \left( T^p_i \notin \left\lbrace r \in (0,s] \colon U^\pi_r < 0 \right\rbrace , \; \text{for all $i \geq 1$} \vert \mathcal{F}_\infty \right) = \mathbb{E}_x \left[ \mathbf{1}_{\{\sigma_p^\pi > s\}} \vert \mathcal{F}_\infty \right] .$$ Consequently, $$\mathbb{E}_x \left[ \sum_{0 < s < t} \mathrm{e}^{-q s - p \int_0^s \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r} \left(\Delta L^\pi_s - \beta \mathbf{1}_{\{\Delta L^\pi_s > 0\}} \right) \right] \\ = \mathbb{E}_x \left[ \sum_{0<s<\sigma_p^\pi \wedge t} \mathrm{e}^{-q s} \left(\Delta L^\pi_s - \beta \mathbf{1}_{\{\Delta L^\pi_s > 0\}} \right) \right] ,$$ where we used that $X$ and $L^\pi$ are both adapted to the 
filtration. Finally, since for all $x \in \mathbb{R}$ we have $\left(\Gamma-q-p\mathbf{1}_{(-\infty,0)}(x) \right) w (x) \leq 0$, using standard arguments we obtain $$\begin{gathered} w(x) \geq \mathbb{E}_x \left[ \sum_{s>0} \mathrm{e}^{-q s - p \int_0^s \mathbf{1}_{(-\infty,0)}(U^\pi_r) \mathrm{d}r} \left(\Delta L^\pi_s - \beta \mathbf{1}_{\{\Delta L^\pi_s > 0\}} \right) \right] \\ = \mathbb{E}_x \left[ \sum_{0<s<\sigma_p^\pi} \mathrm{e}^{-q s} \left(\Delta L^\pi_s - \beta \mathbf{1}_{\{\Delta L^\pi_s > 0\}} \right) \right] = v_\pi(x) .\end{gathered}$$ ◻ # A family of simple impulse strategies For fixed values $0 \leq a < b$, the corresponding $(a,b)$-strategy is defined as follows: each time the controlled process $U^{a,b}$ crosses level $b$, a dividend payment of $b-a$ is made. Mathematically speaking, the corresponding dividend process is defined by: $L^{a,b}_{0+}=\left(X_0-a \right) \mathbf{1}_{\{X_0 > b\}}$ and, for $t > 0$, $$L^{a,b}_t = L^{a,b}_{0+} + \sum_{k=2}^\infty (b-a) \mathbf{1}_{\{T_k^{a,b}<t\}} ,$$ where $T_k^{a,b} := \inf \left\lbrace t > 0 \colon X_t > X_0 \wedge b + (b-a)(k-1) \right\rbrace$, and then we set $U^{a,b}_t := X_t-L^{a,b}_t$. The performance function $v_{a,b}$ of this strategy can be expressed analytically using scale functions of the underlying SNLP. First, recall that the scale functions $\left\lbrace W^{(q)}, q \geq 0 \right\rbrace$ are equal to zero on $(-\infty,0)$ and, on $[0,\infty)$, are characterized by $$\int_0^\infty \mathrm{e}^{-\theta x} W^{(q)}(x) \mathrm{d}x = \left(\psi(\theta)-q \right)^{-1} ,$$ for all $\theta> \Phi(q)=\sup \left\lbrace \lambda \geq 0 \colon \psi(\lambda)=q \right\rbrace$.
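For Brownian motion with drift ($\psi(\theta) = \mu\theta + \tfrac{1}{2}\sigma^2\theta^2$), the scale function is known in closed form: $W^{(q)}(x) = \big(\mathrm{e}^{\theta_+ x} - \mathrm{e}^{\theta_- x}\big)/\sqrt{\mu^2 + 2q\sigma^2}$, where $\theta_\pm$ are the two roots of $\psi(\theta) = q$ and $\theta_+ = \Phi(q)$. The sketch below (illustrative parameters of our choosing) checks the defining Laplace-transform identity by straightforward quadrature:

```python
import math

MU, SIGMA, Q = 1.0, 1.0, 0.1       # illustrative parameters

def psi(theta):
    return MU * theta + 0.5 * SIGMA ** 2 * theta ** 2

_disc = math.sqrt(MU ** 2 + 2.0 * Q * SIGMA ** 2)
THETA_PLUS = (-MU + _disc) / SIGMA ** 2    # equals Phi(Q)
THETA_MINUS = (-MU - _disc) / SIGMA ** 2

def W(x):
    # q-scale function of Brownian motion with drift; identically zero on (-inf, 0).
    if x < 0.0:
        return 0.0
    return (math.exp(THETA_PLUS * x) - math.exp(THETA_MINUS * x)) / _disc

def laplace_W(theta, upper=60.0, n=120_000):
    # Trapezoidal approximation of int_0^upper exp(-theta*x) W(x) dx; the
    # integrand decays like exp((THETA_PLUS - theta) x), so the truncation
    # error at upper = 60 is negligible for theta = 1.
    h = upper / n
    s = 0.5 * (W(0.0) + math.exp(-theta * upper) * W(upper))
    for k in range(1, n):
        x = k * h
        s += math.exp(-theta * x) * W(x)
    return s * h

theta = 1.0                         # any theta > Phi(Q) works
lhs = laplace_W(theta)
rhs = 1.0 / (psi(theta) - Q)
```

Here `lhs` matches $\left(\psi(\theta)-q\right)^{-1}$ to quadrature accuracy, confirming the closed form against the defining identity.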
Also, we need to define $$x \mapsto Z_{p,q} \left(x \right) = p \int_0^\infty \mathrm{e}^{-\Phi(p+q) y} W^{(q)}(x+y) \mathrm{d}y , \quad x \in \mathbb{R}.$$ Then, for $x > 0$, we have $$Z_{p,q}^\prime \left(x \right) = p \int_0^\infty \mathrm{e}^{-\Phi(p+q) y} W^{(q)\prime}(x+y) \mathrm{d}y ,$$ which is well defined since $W^{(q)}$ is differentiable almost everywhere. Now, the performance function of an admissible $(a,b)$-strategy can be easily computed using standard Markovian-type arguments and Parisian first-passage identities for $X$. **Lemma 2**. *For a fixed pair $(a,b)$ such that $a \geq 0$ and $b > a + \beta$, we have $$v_{a,b} (x) = \begin{cases} \left( \frac{b-a-\beta}{Z_{p,q} \left(b \right)-Z_{p,q} \left(a \right)} \right) Z_{p,q} \left(x \right) & \text{for $x \in (-\infty,b]$,}\\ x-a-\beta + \left( \frac{b-a-\beta}{Z_{p,q} \left(b \right)-Z_{p,q} \left(a \right)} \right) Z_{p,q} \left(a \right) & \text{for $x \in (b,\infty)$.} \end{cases}$$* *Proof.* As the controlled process $U^{a,b}$ is allowed to go below zero, we modify the proof of Proposition 2 in [@loeffen_2009b] as follows. As in the proof of Proposition 1 in [@renaud_2019], we define $$\kappa^p = \inf \left\lbrace t>0 \colon t-g_t > \mathbf{e}_p^{g_t} \; \text{and} \; X_t < 0 \right\rbrace ,$$ where $g_t = \sup \left\lbrace 0 \leq s \leq t \colon X_s \geq 0 \right\rbrace$. This is the time of Parisian ruin with rate $p$ for $X$. Also, let us define the stopping time $\tau_b^+ = \inf \left\lbrace t>0 \colon X_t >b \right\rbrace$. 
Using Equation (16) in [@lkabous-renaud_2019], we can write, for $x \in (-\infty, b]$, $$\mathbb{E}_x \left[ \mathrm{e}^{-q \tau_b^+} \mathbf{1}_{\{\tau_b^+<\kappa^p\}} \right] = \frac{Z_{p,q} \left(x \right)}{Z_{p,q} \left(b \right)} .$$ Consequently, for $x \in (-\infty,b]$, using the strong Markov property, we can write $$v_{a,b} (x) = \frac{Z_{p,q} \left(x \right)}{Z_{p,q} \left(b \right)} v_{a,b} (b) = \frac{Z_{p,q} \left(x \right)}{Z_{p,q} \left(b \right)} \left( b-a-\beta + v_{a,b} (a) \right) .$$ When $x=a$, we can solve for $v_{a,b} (a)$ and the result follows. ◻ **Remark 2**. *Note that $v_{a,b}$ is not necessarily continuously differentiable at $x=b$.* The following lemma was proved in [@renaud_2019]. **Lemma 3**. *If the tail of the Lévy measure is log-convex, then $Z_{p,q}^\prime \left(\cdot \right)$ is log-convex. In particular, $Z_{p,q}^{\prime \prime} \left(\cdot \right)$ exists and is continuous almost everywhere.* Next, our objective is to find an optimal pair $(a^\ast,b^\ast)$, that is, a pair such that, for any other admissible $(a,b)$-strategy, we have $v_{a^\ast,b^\ast} (x) \geq v_{a,b} (x)$ for all $x \in \mathbb{R}$. **Lemma 4**. *If the tail of the Lévy measure is log-convex, then there exists a unique optimal pair $(a^\ast,b^\ast)$. Also, we have that $a^\ast \wedge 0 \leq c^\ast < b^\ast$, where $$c^\ast := \sup \left\lbrace c \geq 0 \colon Z_{p,q}^\prime \left(c \right) \leq Z_{p,q}^\prime \left(x \right) , \; \text{for all $x \geq 0$} \right\rbrace \in [0,\infty) .$$* *Proof.* Let us define the function $$g(x,y) = \frac{Z_{p,q} \left(y \right)-Z_{p,q} \left(x \right)}{y-x-\beta}$$ on $\left\lbrace (x,y) \in \mathbb{R}\times \mathbb{R}\colon x \geq 0 , \, y>x+\beta \right\rbrace$. Under the assumption on the Lévy measure, thanks to Lemma [Lemma 3](#lemma:log-convexity){reference-type="ref" reference="lemma:log-convexity"}, we have that $Z_{p,q} \left(\cdot \right)$ is continuously differentiable.
Consequently, the conclusion of Proposition 3 in [@loeffen_2009b] also holds in our setup. More precisely, there exists at least one pair $(\hat{a},\hat{b})$ in the domain of $g$ such that either $\hat{a}>0$ and $Z_{p,q}^\prime \left(\hat{a} \right)=Z_{p,q}^\prime \left(\hat{b} \right)$, or $\hat{a}=0$. Moreover, we have $$\label{E:identity-at-optimality} Z_{p,q}^\prime \left(\hat{b} \right) = g(\hat{a},\hat{b}) ,$$ in both cases. The proof of uniqueness also proceeds as in Section 4 of [@loeffen_2009b], even with our relaxed condition on the Lévy measure. Indeed, it was proved in [@renaud_2019] that if the tail of the Lévy measure is log-convex, then $Z_{p,q}^\prime \left(\cdot \right)$ is strictly increasing on $(c^\ast,\infty)$ (in that paper, the notation $b_p^\ast$ is used instead of $c^\ast$). This, and the smoothness of $Z_{p,q} \left(\cdot \right)$ (cf. Lemma [Lemma 3](#lemma:log-convexity){reference-type="ref" reference="lemma:log-convexity"}), is all we need to conclude. ◻ **Remark 3**. *Note that, under the assumptions of the previous lemma, the function $v_{a^\ast,b^\ast}$ is continuously differentiable at $x=b^\ast$.* # An optimal strategy Here is our main result. **Theorem 1**. *If the tail of the Lévy measure is log-convex, then the impulse strategy given by $(a^\ast,b^\ast)$ is optimal and we have $$v_\ast (x) = \begin{cases} \frac{Z_{p,q} \left(x \right)}{Z_{p,q}^\prime \left(b^\ast \right)} & \text{for $x \in (-\infty,b^\ast]$,}\\ x-b^\ast + \frac{Z_{p,q} \left(b^\ast \right)}{Z_{p,q}^\prime \left(b^\ast \right)} & \text{for $x \in (b^\ast,\infty)$.} \end{cases}$$* *Proof.* Using [\[E:identity-at-optimality\]](#E:identity-at-optimality){reference-type="eqref" reference="E:identity-at-optimality"} with $(a^\ast,b^\ast)$, we can simplify the expression of the performance function for the $(a^\ast,b^\ast)$-strategy, as given in Lemma [Lemma 2](#L:performance){reference-type="ref" reference="L:performance"}.
It was proved in [@renaud_2019] that $\Gamma Z_{p,q} \left(x \right) - \left( q+p \mathbf{1}_{(-\infty,0)}(x) \right) Z_{p,q} \left(x \right) = 0$ for all $x \in \mathbb{R}$. Also, since $Z_{p,q} \left(\cdot \right)$ is continuously differentiable under the assumption on the Lévy measure, we can use the same arguments as in Lemma 5 in [@loeffen_2009b] to prove that $v_{a^\ast,b^\ast}(y)-v_{a^\ast,b^\ast}(x) \geq y-x-\beta$, for all $y \geq x \geq 0$. The rest is standard due to the fact that, under the assumption on the Lévy measure, $Z_{p,q}^\prime \left(\cdot \right)$ is sufficiently smooth and strictly increasing on $(c^\ast,\infty)$. ◻ # Acknowledgements {#acknowledgements .unnumbered} Funding in support of this work was provided by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).
arxiv_math
{ "id": "2309.17152", "title": "A note on the optimal dividends problem with transaction costs in a\n spectrally negative L\\'evy model with Parisian ruin", "authors": "Jean-Fran\\c{c}ois Renaud", "categories": "math.PR math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper, we present an implicit Crank--Nicolson finite element (FE) scheme for solving a nonlinear Schrödinger--type system, which includes the Schrödinger--Helmholtz system and the Schrödinger--Poisson system. In our numerical scheme, we employ an implicit Crank--Nicolson method for time discretization and a conforming FE method for spatial discretization. The proposed method is proved to be well-posed and ensures mass and energy conservation at the discrete level. Furthermore, we prove optimal $L^2$ error estimates for the fully discrete solutions. Finally, some numerical examples are provided to verify the convergence rate and conservation properties.\ **Keywords**: Crank--Nicolson method, finite element method, optimal $L^2$ error estimate, mass and energy conservation\ **Mathematics Subject Classification**: 65M60, 65N15, 65N30 author: - "Zhuoyue Zhang[^1]" - "Wentao Cai[^2]" title: "**Optimal $L^2$ error estimates of mass- and energy-conserved FE schemes for a nonlinear Schrödinger--type system**" --- # Introduction {#sec:intro} We consider the following Schrödinger--type system: $$\begin{aligned} & {\bf i}u_t+\Delta u-\phi u={0},\label{i1}\\ & \alpha\phi-\beta^2\Delta\phi=|u|^2,\label{i2}\end{aligned}$$ for $t\in(0,T]$, in a bounded convex polygonal or polyhedral domain $\Omega\subset\mathbb{R}^d\ (d=2,3)$. In this model, ${\bf i}=\sqrt{-1}$ is the imaginary unit, $\alpha$, $\beta$ are non-negative real constants, $u({\bf x},t)$ is a complex-valued function which denotes the single-particle wave function and $\phi({\bf x},t)$ is a real-valued function which denotes the potential.
Here, the system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i2\]](#i2){reference-type="eqref" reference="i2"} is subject to the following initial and boundary conditions: $$\begin{aligned} \label{i3} \begin{aligned} &u({\bf x},0)=u_0({\bf x}), &&\phi ({\bf x},0)=\phi_0({\bf x}), &&\textrm{in }\Omega,\\ &u({\bf x},t)=0, &&\phi({\bf x},t)=0, &&\textrm{on }{\partial\Omega\times(0,T]}. \end{aligned}\end{aligned}$$ For different parameters $\alpha$, $\beta$, the system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} represents different models: - Schrödinger--Helmholtz model (when $\alpha \neq 0$ and $\beta\neq 0$) - Schrödinger--Poisson model (when $\alpha = 0$ and $\beta=1$) The nonlinear Schrödinger--type equations can be used to describe a variety of physical phenomena in optics, quantum mechanics, and plasma physics, as referenced in [@wuli1; @wuli2]. The mathematical theory of the system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} has been studied in [@Cao; @analysis1; @analysis2] to establish the well-posedness of strong solutions. It is evident that the Schrödinger--type system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} conserves the total mass: $$\begin{aligned} {\cal M}(t):=\int_\Omega|u(t)|^2\,d{\bf x} ={\cal M}(0),\quad t>0,\nonumber\end{aligned}$$ and the total energy $$\begin{aligned} {\cal E}(t):=\int_\Omega\Big(|\nabla u(t)|^2+\frac{\alpha}{2}|\phi(t)|^2+\frac{\beta^2}{2}|\nabla \phi(t)|^2\Big)\,d{\bf x}={\cal E}(0),\quad t>0.\nonumber\end{aligned}$$ As pointed out in [@49], non-conservative numerical schemes may cause numerical solutions to blow up. Therefore, it is desirable to design numerical schemes that ensure mass and energy conservation at the discrete level.
In recent years, numerical methods and analysis for structure-preserving schemes for the Schrödinger--type equations have been investigated extensively, e.g., see [@fdm1; @fdm2; @fdm3; @fdmsun; @fdm4; @fdm5] for finite difference methods, [@femD; @DG1; @DG2; @fem4; @fem2; @DG3; @fem5; @fem3] for finite element methods, [@spectral1; @spectral2; @spectral3] for spectral or pseudo-spectral methods, and [@other1; @otherbai; @othercao; @other2; @otherli; @other3] for others. For linearized numerical schemes, Chang et al. [@1] proposed several finite difference schemes for solving the generalized nonlinear Schrödinger (GNLS) equation. Compared with Hopscotch-type schemes, the split-step Fourier scheme and the pseudospectral scheme, the Crank--Nicolson method was shown to be more efficient and robust. In [@Akrivis], Akrivis et al. presented a linearized Crank--Nicolson FE scheme for solving the GNLS equation. By the energy method, the authors obtained the optimal $L^2$ error estimate under the time-step condition $\tau = \mathcal{O}(h^\frac{d}{4})$ $(d=2,3)$, where $\tau$ is the time step size, and $h$ is the spatial step size. Wang [@3] investigated the linearized Crank--Nicolson FE scheme for the GNLS equation. Inspired by the work [@Li], an error splitting method was employed to analyze the $L^2$ error of FE solutions. By introducing a time-discrete scheme, the authors established uniform boundedness and error estimates of temporal semi-discrete solutions. By these results and an inverse inequality, the uniform boundedness of fully-discrete solutions was demonstrated without grid-ratio restriction conditions. Based on these results, Wang derived unconditionally optimal $L^2$ error estimates of the Crank--Nicolson FE scheme for the GNLS equation. Recently, Wang [@4] studied a Crank--Nicolson FE scheme for the Schrödinger--Helmholtz equations.
Utilizing the error splitting technique, optimal $L^2$ error estimates were obtained without any restrictive conditions on the time-step size. Compared with linearized numerical schemes, nonlinear numerical schemes exhibit better stability and preserve more physical structure. In [@Akrivis], Akrivis et al. proposed several implicit Crank--Nicolson FE schemes for the GNLS equation. The optimal $L^2$ error estimates were obtained under certain time-step restrictive conditions. Feng et al. [@6] focused on high-order SAV-Gauss collocation FE schemes for the GNLS equation, which use a Gauss collocation method for temporal discretization and the conforming FE method for spatial discretization. This scheme was shown to ensure mass and energy conservation. The authors also presented the existence, uniqueness and optimal $L^\infty(0,T;H^1)$ error estimates for fully-discrete collocation FE solutions. Henning and Peterseim [@8] studied the nonlinear implicit Crank--Nicolson FE method for the GNLS equation. By the error splitting technique, the authors obtained the optimal $L^\infty(0,T;L^2)$ error estimate without any restrictive conditions on the time-step size. In [@7], a class of nonlinear discontinuous Galerkin (DG) schemes was introduced for the Schrödinger--Poisson equations, which consist of high-order conservative DG schemes in space and second-order discrete schemes in time. These schemes were proven to conserve mass and energy. Moreover, optimal $L^2$ error estimates were provided for the spatial semi-discrete DG scheme. In this paper, we propose an implicit Crank--Nicolson FE scheme to solve a nonlinear Schrödinger--type system, which includes the Schrödinger--Helmholtz model and the Schrödinger--Poisson model. In our scheme, the system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} is discretized by an implicit Crank--Nicolson scheme in the time direction and a conforming FE scheme in the spatial direction.
The advantage of our numerical scheme is that it conserves mass and energy at the discrete level, so it does not suffer from the instability associated with non-conservative schemes. We then prove the existence and uniqueness of the fully-discrete numerical solutions based on Schaefer's fixed point theorem. Finally, we demonstrate the optimal $L^2$ error estimates of fully discrete numerical solutions for the Schrödinger--Helmholtz and Schrödinger--Poisson systems. The rest of this paper is organized as follows. In Sect. 2, we introduce notation and our numerical scheme, and present the main theorem. In Sect. 3, we demonstrate the discrete mass and energy conservation properties of our numerical scheme. In Sect. 4, we present the well-posedness of the numerical scheme by Schaefer's fixed point theorem. In Sect. 5, we prove the optimal $L^2$ error estimates of fully discrete solutions. Finally, some numerical examples in Sect. 6 are presented to verify the convergence rate and conservation properties. # Preliminaries and the main results Let $\Omega$ be an open, bounded convex polygonal domain in $\mathbb{R}^2$ (or polyhedral domain in $\mathbb{R}^3$) with boundary $\partial\Omega$. For any integer $k\geq 0$ and $1\le p\le+\infty$, let ${W^{k,p}}(\Omega)$ be the standard Sobolev space with the abbreviation $H^k(\Omega)={W^{k,2}}(\Omega)$ for simplicity. The norms of $W^{k,p}(\Omega)$ and $H^k(\Omega)$ are denoted by $\|\cdot\|_{W^{k,p}}$ and $\|\cdot\|_{H^k}$, respectively. We denote by $\mathcal{T}_h$ a quasi-uniform partition of $\Omega$ into triangles $\{{\mathcal K}_j\}_{j=1}^M$ in $\mathbb{R}^2$ or tetrahedra in $\mathbb{R}^3$, and let $h=\mathop{\max}\limits_{1\le j\le M}\{\operatorname{diam}{\mathcal K}_j\}$ denote the mesh size.
For a partition ${\mathcal T}_h$, we define standard FE space as $$\begin{aligned} \nonumber V_h=\{v_h\in {\mathcal C}(\overline\Omega): v_h|_{\mathcal{K}_j}\in P_{r}(\mathcal{K}_j)\textrm{ and }v_h=0\textrm{ on }\partial\Omega,\, \forall\,{\mathcal{K}_j}\in{\mathcal{T}_h}\},\end{aligned}$$ where ${P_r}({\mathcal K}_j)$ is the space of continuous piecewise polynomials of degree $r\,(r\ge 1)$ on ${\mathcal K}_j$. For any two complex functions $u,v\in{L^2}(\Omega)$, the ${L^2}(\Omega)$ inner product is defined by $$\begin{aligned} (u,v) ={\int_\Omega{u({\bf x})v^*({\bf x})}}d{\bf x}, \nonumber\end{aligned}$$ in which $v^*$ denotes the conjugate of $v$. Let $\{t_n|t_n=n\tau;0\le n\le N\}$ be a uniform partition of the time interval $[0,T]$ with time step size $\tau=T/N$, and $(u^n,\phi^n)=(u(t_n),\phi(t_n))$. For any sequence $\{\omega ^n\}_{n=1}^N$, we define $$\begin{aligned} D_\tau\omega^n=\frac{\omega^n-\omega^{n-1}}{\tau}, \quad \overline\omega^{n-\frac{1}{2}}=\frac{\omega^n+\omega^{n-1}}{2},\quad n=1,2,\dots,N.\nonumber\end{aligned}$$ With the above notations, an implicit Crank--Nicolson FE method for the Schrödinger--type system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} is to find $u_h^n,\phi_h^n\in{V_h}$ such that $$\begin{aligned} & {\bf i}(D_\tau u_h^n,v_h)-(\nabla\overline u_h^{n-\frac{1}{2}},\nabla v_h)-(\overline\phi _h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},v_h)=0, \label{CN1}\\ & \alpha(\phi_h^n,w_h)+\beta^2(\nabla\phi_h^n,\nabla w_h)=(|u_h^n|^2,w_h) \label{CN2}\end{aligned}$$ for all $v_h\in V_h$, $w_h\in V_h$ and $n=1,2,\dots,N$. At the initial time step, we choose $u_h^0=R_h u_0$, $\phi_h^0=R_h \phi_0$. Here, $R_h$ denotes the Ritz projection. 
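To exhibit the mass-conservation mechanism of the Crank--Nicolson time discretization in the simplest possible setting, the sketch below applies the Crank--Nicolson step to the free linear equation ${\bf i}u_t + \Delta u = 0$ (i.e. $\phi \equiv 0$) with a 1D second-difference Laplacian and homogeneous Dirichlet conditions; this is a drastically simplified finite-difference stand-in for the conforming FE scheme above, with illustrative parameters of our choosing. The point is that the update is a Cayley transform of a real symmetric matrix, hence unitary, so the discrete $L^2$ mass is preserved exactly (up to roundoff).

```python
import cmath
import math

def solve_tridiagonal(sub, diag, sup, rhs):
    # Thomas algorithm for a complex tridiagonal system.
    n = len(rhs)
    cp, dp = [0j] * n, [0j] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

N, tau, steps = 64, 1e-2, 50
h = 1.0 / (N + 1)
lam = tau / (2.0 * h * h)          # (tau/2) * 1/h^2

# Interior nodal values of a complex initial datum vanishing at x = 0, 1.
u = [cmath.exp(6j * math.pi * (j + 1) * h) * math.sin(math.pi * (j + 1) * h)
     for j in range(N)]
mass0 = h * sum(abs(z) ** 2 for z in u)

# Crank-Nicolson for i u_t + u_xx = 0:
#   (iI + (tau/2) A) u^n = (iI - (tau/2) A) u^{n-1},
# where A is the second-difference Laplacian with Dirichlet BCs.
sub = [lam] * N
diag = [1j - 2.0 * lam] * N
sup = [lam] * N
for _ in range(steps):
    rhs = []
    for j in range(N):
        left = u[j - 1] if j > 0 else 0j
        right = u[j + 1] if j < N - 1 else 0j
        rhs.append(1j * u[j] - lam * (left - 2.0 * u[j] + right))
    u = solve_tridiagonal(sub, diag, sup, rhs)

mass_end = h * sum(abs(z) ** 2 for z in u)
```

Here `mass_end` agrees with `mass0` to machine precision after 50 steps, mirroring the discrete conservation law ${\mathcal M}_h^n = {\mathcal M}_h^0$ proved for the full nonlinear FE scheme in the next section.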
In the remaining parts of this paper, we assume that the solution to the initial and boundary value problem [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} exists and satisfies $$\begin{aligned} \|u_t\|_{L^\infty([0,T];H^{r+1})} &+\|u_{tt}\|_{L^\infty([0,T];H^2)} +\|u_{ttt}\|_{L^\infty([0,T];L^2)}\nonumber\\ &+\|\phi\|_{L^\infty([0,T];H^{r+1})} +\|\phi_{tt}\|_{L^\infty([0,T];L^2)} \le M_0, \label{solution}\end{aligned}$$ where $M_0$ is a positive constant depending only on $\Omega$. We present our main result in the following theorem; the proof will be given in Section 5. **Theorem 1**. *Suppose that the system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} has a unique solution $(u,\phi)$ satisfying the regularity condition [\[solution\]](#solution){reference-type="eqref" reference="solution"}. Then there exists a positive constant $\tau_0$, such that when $\tau\leq\tau_0$, the fully-discrete system [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} admits a unique FE solution $(u_h^n, \phi_h^n)$ satisfying $$\begin{aligned} \|u^n-u_h^n\|_{L^2}+\|\phi^n-\phi_h^n\|_{L^2} \le C_0(\tau^2+h^{r+1}), \quad n=1,2,\dots,N,\nonumber\end{aligned}$$ where $C_0$ is a positive constant independent of $n$, $h$ and $\tau$.* Next, we provide two inequalities, which will be used frequently in our proof. **Lemma 1**.
*(Discrete Gronwall's inequality) [@G] Let $\tau,B$ and $a_k,b_k,c_k,\gamma_k$ (for integers $k\ge 0$) be nonnegative numbers such that $$\begin{aligned} a_\ell+\tau\sum\limits_{k=0}^\ell b_k \le\tau\sum\limits_{k=0}^\ell\gamma_k a_k +\tau\sum\limits_{k=0}^\ell c_k+B,\quad \textrm{for $\ell\ge 0$}.\nonumber\end{aligned}$$ Suppose that $\tau\gamma_k<1$ for all $k$, and set $\sigma_k=(1-\tau\gamma_k)^{-1}$; then $$\begin{aligned} a_\ell+\tau\sum\limits_{k=0}^\ell b_k \le\exp\bigg(\tau\sum\limits_{k=0}^\ell\gamma_k\sigma _k\bigg) \bigg(\tau\sum\limits_{k = 0}^\ell{c_k}+B\bigg),\quad \textrm{for $\ell\ge 0$}.\nonumber\end{aligned}$$* **Lemma 2**. *(Gagliardo--Nirenberg inequality) [@NG2; @NG1] Let $u$ be a function defined on $\Omega\subset\mathbb{R}^d$ and let ${\partial ^j}u$ denote any partial derivative of $u$ of order $j$; then $$\begin{aligned} \label{G-N-Ineq} \|\partial^j u\|_{L^p} \le C\|\partial^m u\|_{L^r}^a \|u\|_{L^q}^{1-a}+C\|u\|_{L^q}\end{aligned}$$ for $0\le j<m$ and $\frac{j}{m}\le a\le 1$ with $$\begin{aligned} \frac{1}{p}=\frac{j}{d}+a(\frac{1}{r}-\frac{m}{d})+(1-a)\frac{1}{q},\nonumber\end{aligned}$$ except when $1<r<\infty$ and $m-j-\frac{d}{r}$ is a non-negative integer, in which case the above estimate holds only for $\frac{j}{m}\le a<1$.* Throughout this paper, we denote by $C$ a generic positive constant and by $\varepsilon$ a sufficiently small generic positive constant, both of which may differ at different occurrences. # Discrete conservation laws In this section, we demonstrate that the proposed scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} conserves mass and energy. **Theorem 2**.
*The solution of scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} satisfies the discrete mass and energy conservation: $$\begin{aligned} {\mathcal M}_h^n={\mathcal M}_h^0, \quad{\mathcal E}_h^n={\mathcal E}_h^0,\quad n=1,2,\dots,N, \label{ME1}\end{aligned}$$ where the discrete mass and energy are defined as $$\begin{aligned} \label{ME2} {\mathcal M}_h^n:=\|u_h^n\|_{L^2}^2,\quad {\mathcal E}_h^n:= \|\nabla u_h^n\|_{L^2}^2 +\frac{\alpha}{2}\|\phi_h^n\|_{L^2}^2 +\frac{\beta^2}{2}\|\nabla\phi_h^n\|_{L^2}^2.\end{aligned}$$* *Proof.* Choosing $v_h=u_h^n+u_h^{n-1}$ in [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"} and taking the imaginary parts, we have $$\begin{aligned} \|u_h^n\|_{L^2}^2=\|u_h^{n-1}\|_{L^2}^2, \nonumber\end{aligned}$$ which implies $$\begin{aligned} {\mathcal M}_h^n={\mathcal M}_h^0,\quad\enspace\textrm{for}\enspace n=1,2,\dots,N.\label{me1}\end{aligned}$$ Putting $v_h=u_h^n-u_h^{n-1}$ in [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"} and taking the real parts, one has $$\begin{aligned} -{\rm Im}(D_\tau u_h^n,u_h^n-u_h^{n-1}) -{\rm Re}(\nabla\overline u_h^{n-\frac{1}{2}},\nabla(u_h^n-u_h^{n-1})) -{\rm Re}(\overline\phi_h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},u_h^n-u_h^{n-1} )=0.\nonumber\end{aligned}$$ Since ${\rm Im}(D_\tau u_h^n,u_h^n-u_h^{n-1})=\frac{1}{\tau}\,{\rm Im}\,\|u_h^n-u_h^{n-1}\|_{L^2}^2=0$, this reduces to $$\begin{aligned} \label{ME3} -(\|\nabla u_h^n\|_{L^2}^2-\|\nabla u_h^{n-1}\|_{L^2}^2)= (\overline\phi_h^{n-\frac{1}{2}}u_h^n,u_h^n) -(\overline\phi_h^{n-\frac{1}{2}}u_h^{n-1},u_h^{n-1}).\end{aligned}$$ From [\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}, it is easy to see $$\begin{aligned} \alpha(\phi_h^n-\phi_h^{n-1},w_h)+\beta^2(\nabla(\phi_h^n-\phi_h^{n-1}),\nabla w_h)=(|u_h^n|^2-|u_h^{n-1}|^2,w_h).\nonumber\end{aligned}$$ Taking $w_h=\overline\phi_h^{n-\frac{1}{2}}$ in the above equation, one has $$\begin{aligned} \nonumber \frac{\alpha}{2}(\|\phi_h^n\|_{L^2}^2-\|\phi_h^{n-1}\|_{L^2}^2) +\frac{\beta^2}{2}(\|\nabla\phi_h^n\|_{L^2}^2
-\|\nabla\phi_h^{n-1}\|_{L^2}^2) =(|u_h^n|^2-|u_h^{n-1}|^2 ,\overline\phi_h^{n-\frac{1}{2}}),\end{aligned}$$ which together with [\[ME3\]](#ME3){reference-type="eqref" reference="ME3"} implies $$\begin{aligned} \frac{\alpha}{2}(\|\phi_h^n\|_{L^2}^2-\|\phi_h^{n-1}\|_{L^2}^2) +\frac{\beta ^2}{2}(\|\nabla\phi_h^n\|_{L^2}^2 -\|\nabla\phi_h^{n-1}\|_{L^2}^2) +(\|\nabla u_h^n\|_{L^2}^2-\|\nabla u_h^{n-1}\|_{L^2}^2) = 0.\nonumber\end{aligned}$$ Thus, we have $$\begin{aligned} {\mathcal E}_h^n={\mathcal E}_h^0,\quad\enspace\textrm{for }n=1,2,\dots,N.\label{me2}\end{aligned}$$ Based on [\[me1\]](#me1){reference-type="eqref" reference="me1"} and [\[me2\]](#me2){reference-type="eqref" reference="me2"}, we complete the proof of [\[ME1\]](#ME1){reference-type="eqref" reference="ME1"}-[\[ME2\]](#ME2){reference-type="eqref" reference="ME2"}. ◻ **Remark 1**. *The discrete conservation laws [\[ME1\]](#ME1){reference-type="eqref" reference="ME1"}-[\[ME2\]](#ME2){reference-type="eqref" reference="ME2"} naturally yield the following regularity result for the numerical solution: $$\begin{aligned} \label{ME4} \|u_h^n\|_{{L^2}} +\|\nabla u_h^n\|_{L^2} +\|\phi_h^n\|_{L^2} +\|\nabla\phi_h^n\|_{L^2} \le C_1,\quad n=1,2,\dots,N,\end{aligned}$$ where $$\begin{aligned} C_1&=\|u_h^0\|_{L^2} + \sqrt{\|\nabla u_h^0\|_{L^2}^2+\frac{\alpha}{2}\|\phi_h^0\|_{L^2}^2+\frac{\beta^2}{2}\| \nabla\phi_h^0\|_{L^2}^2 } + \sqrt{\frac{2}{\alpha}\|\nabla u_h^0\|_{L^2}^2+\|\phi_h^0\|_{L^2}^2+\frac{\beta^2}{\alpha}\| \nabla\phi_h^0\|_{L^2}^2 }\nonumber\\ &\quad+ \sqrt{ \frac{2}{\beta^2}\|\nabla u_h^0\|_{L^2}^2+\frac{\alpha}{\beta^2}\|\phi_h^0\|_{L^2}^2+\| \nabla\phi_h^0\|_{L^2}^2}.\nonumber\end{aligned}$$* This result will be used frequently in the subsequent analysis.
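The conservation laws of Theorem 2 can also be observed numerically. The sketch below is our own toy surrogate (a finite-difference matrix $L$ standing in for the FE gradient terms, with arbitrarily chosen sizes, not the authors' code): it advances the matrix form of the scheme by a fixed-point iteration at each implicit step, after which the discrete mass and energy stay constant up to solver precision.

```python
import numpy as np

def solve_phi(u, L, alpha, beta):
    """Discrete analogue of (CN2): (alpha*I + beta^2*L) phi = |u|^2."""
    return np.linalg.solve(alpha * np.eye(L.shape[0]) + beta**2 * L, np.abs(u)**2)

def mass(u):
    return np.vdot(u, u).real            # M_h^n = ||u||^2

def energy(u, phi, L, alpha, beta):
    """E_h^n = ||grad u||^2 + alpha/2 ||phi||^2 + beta^2/2 ||grad phi||^2."""
    return (np.vdot(u, L @ u).real + 0.5 * alpha * np.dot(phi, phi)
            + 0.5 * beta**2 * np.dot(phi, L @ phi))

def cn_step(u_prev, phi_prev, L, tau, alpha, beta, iters=50):
    """One implicit step: i (u - u_prev)/tau = (L + diag(phibar)) (u + u_prev)/2,
    coupled with solve_phi and resolved by fixed-point iteration."""
    n = L.shape[0]
    u, phi = u_prev.copy(), phi_prev.copy()
    for _ in range(iters):
        A = L + np.diag(0.5 * (phi + phi_prev))       # real symmetric operator
        lhs = (1j / tau) * np.eye(n) - 0.5 * A
        rhs = ((1j / tau) * np.eye(n) + 0.5 * A) @ u_prev
        u = np.linalg.solve(lhs, rhs)
        phi = solve_phi(u, L, alpha, beta)
    return u, phi

# Toy setup: 1D Dirichlet finite-difference -Laplacian (sizes are hypothetical)
m, h, tau, alpha, beta = 30, 1.0 / 31, 1e-2, 1.0, 1.0
L = (np.diag(np.full(m, 2.0)) + np.diag(-np.ones(m - 1), 1)
     + np.diag(-np.ones(m - 1), -1)) / h**2
x = np.linspace(h, 1.0 - h, m)
u = 0.1 * np.sin(np.pi * x).astype(complex)
phi = solve_phi(u, L, alpha, beta)
m0, e0 = mass(u), energy(u, phi, L, alpha, beta)
for _ in range(10):
    u, phi = cn_step(u, phi, L, tau, alpha, beta)
m_end, e_end = mass(u), energy(u, phi, L, alpha, beta)
```

The mechanism mirrors the proof above: with a Hermitian spatial operator the Crank--Nicolson propagator is a Cayley transform and hence unitary, so the mass is conserved regardless of how accurately the $\phi$-coupling is resolved, while energy conservation additionally requires the coupled system to be solved to convergence.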
# Existence and uniqueness of numerical solution The existence and uniqueness of the solution of the scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} will be established in Theorems [Theorem 3](#TH4-1){reference-type="ref" reference="TH4-1"} and [Theorem 4](#TH4-2){reference-type="ref" reference="TH4-2"}, respectively. To prove these properties, we first recall Schaefer's fixed point theorem. **Lemma 3**. *(Schaefer's Fixed Point Theorem) [@Evans] Let $B$ be a Banach space, and assume that ${\mathcal G}: B\rightarrow B$ is a continuous and compact mapping. If the set $$\begin{aligned} \varTheta:=\{(u,\phi)\in B:\exists\,\theta\in[0,1]\ \textrm{such that}\ (u,\phi)=\theta\, {\mathcal G}(u,\phi)\}\nonumber\end{aligned}$$ is bounded in $B$, then ${\mathcal G}$ has at least one fixed point.* **Theorem 3**. *For any given $\tau>0$ and $h>0$, there exists a solution $(u_h^n,\phi_h^n)\ (n=1,2,\dots,N)$ to the scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}.* *Proof.* First, we define the map ${\mathcal G}:V_h\times V_h\rightarrow V_h\times V_h$ by $$\begin{aligned} {\mathcal G}(u_h^n,\phi_h^n)=(u_{h,*}^n,\phi_{h,*}^n), \quad{\forall\,(u_h^n,\phi_h^n)\in V_h\times V_h}, \nonumber\end{aligned}$$ where, for given $u_h^{n-1}$, $\phi_h^{n-1}$, $u_h^n$ and $\phi_h^n$, the pair $(u_{h,*}^n,\phi_{h,*}^n)\in V_h\times V_h$ satisfies the following equations: $$\begin{aligned} & {\bf i}\,\Big(\frac{u_{h,*}^n-u_h^{n-1}}{\tau},v_h\Big) -\frac{1}{2}(\nabla u_{h,*}^n,\nabla v_h) =\frac{1}{2}(\nabla u_h^{n-1},\nabla v_h) +(\overline\phi_h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},v_h),\quad\forall\, v_h\in V_h, \label{*1}\\ & \alpha(\phi_{h,*}^n,w_h)+\beta^2(\nabla\phi_{h,*}^n,\nabla w_h) =(|u_h^n|^2,w_h),\quad\forall\, w_h\in V_h.
\label{*2} \end{aligned}$$ Next, we prove that the map ${\mathcal G}$ satisfies the three conditions of Lemma [Lemma 3](#fixed){reference-type="ref" reference="fixed"}; it then follows that ${\mathcal G}$ has a fixed point, which is a solution of the scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}.\ **Step 1. Well-defined map**\ The equations [\[\*1\]](#*1){reference-type="eqref" reference="*1"}-[\[\*2\]](#*2){reference-type="eqref" reference="*2"} are square linear systems for $(u_{h,*}^n,\phi_{h,*}^n)$, so it suffices to show that the corresponding homogeneous systems admit only the zero solution. Taking $u_h^n=0$, $u_h^{n-1}=0$, $\phi_h^n=0$, $\phi_h^{n-1}=0$ in [\[\*1\]](#*1){reference-type="eqref" reference="*1"}-[\[\*2\]](#*2){reference-type="eqref" reference="*2"}, we have $$\begin{aligned} & \frac{\bf i}{\tau}(u_{h,*}^n,v_h)-\frac{1}{2}(\nabla u_{h,*}^n,\nabla v_h)=0, \qquad\forall\, v_h\in V_h,\nonumber\\ & \alpha(\phi_{h,*}^n,w_h)+\beta^2(\nabla\phi_{h,*}^n,\nabla w_h)=0, \,\quad\forall\, w_h\in V_h.\nonumber\end{aligned}$$ Putting $(v_h,w_h)=(u_{h,*}^n,\phi_{h,*}^n)$ in the above equations and separating real and imaginary parts, we obtain $$\begin{aligned} \|u_{h,*}^n\|_{L^2}^2=\|\nabla u_{h,*}^n\|_{L^2}^2 =\|\phi_{h,*}^n\|_{L^2}^2=\|\nabla\phi_{h,*}^n\|_{L^2}^2=0.\nonumber\end{aligned}$$ Therefore, the map ${\mathcal G}$ is well-defined.\ **Step 2.
Continuous and compact**\ To prove the continuity of the map ${\mathcal G}$, we let $$\begin{aligned} {\mathcal G}(\widehat u_h^n,\widehat\phi_h^n)=(\widehat u_{h,*}^n,\widehat\phi_{h,*}^n) \quad\textrm{and}\quad {\mathcal G}(\mathring u_h^n,\mathring\phi_h^n)=(\mathring u_{h,*}^n,\mathring\phi_{h,*}^n),\nonumber\end{aligned}$$ where $(\widehat u_h^n,\widehat\phi_h^n)\in V_h\times V_h$ and $(\mathring u_h^n,\mathring\phi_h^n)\in V_h\times V_h$.\ Next, we prove that when $\widehat u_h^n\rightarrow \mathring u_h^n$ and $\widehat\phi_h^n\rightarrow\mathring\phi_h^n$, the following results hold: $$\begin{aligned} \widehat u_{h,*}^n\rightarrow\mathring u_{h,*}^n\quad\textrm{and} \quad\widehat\phi_{h,*}^n\rightarrow\mathring\phi_{h,*}^n.\label{convergence result}\end{aligned}$$ Since all norms on the finite-dimensional space $V_h$ are equivalent, to prove that [\[convergence result\]](#convergence result){reference-type="eqref" reference="convergence result"} holds it suffices to show that $$\begin{aligned} \|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{H^1}\rightarrow 0 \quad\textrm{and}\quad\|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{H^1} \rightarrow 0.\nonumber\end{aligned}$$ We now demonstrate these results.
By the definition of the map ${\mathcal G}$, we get $$\begin{aligned} & \frac{\bf i}{\tau}(\widehat u_{h,*}^n-\mathring u_{h,*}^n,v_h) -\frac{1}{2}(\nabla(\widehat u_{h,*}^n-\mathring u_{h,*}^n),\nabla v_h)\nonumber\\ &\quad= \frac{1}{4}((\widehat\phi_h^n+\phi_h^{n-1}) (\widehat u_h^n+u_h^{n-1}) -(\mathring\phi_{h}^n+\phi_h^{n-1}) (\mathring u_h^n+u_h^{n-1}),v_h),\label{*5}\\ & \alpha (\widehat\phi_{h,*}^n-\mathring \phi_{h,*}^n,w_h) +\beta^2(\nabla(\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n),\nabla w_h) =(|\widehat u_h^n|^2-|\mathring u_{h}^n|^2,w_h)\label{*6} \end{aligned}$$ for any $v_h\in V_h,\,w_h\in V_h$.\ Substituting $v_h=\widehat u_{h,*}^n-\mathring u_{h,*}^n$ into [\[\*5\]](#*5){reference-type="eqref" reference="*5"}, we obtain $$\begin{aligned} & \frac{\bf i}{\tau}\|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{L^2}^2 -\frac{1}{2}\|\nabla(\widehat u_{h,*}^n-\mathring u_{h,*}^n)\|_{L^2}^2\nonumber\\ &= \frac{1}{4}((\widehat\phi_h^n+\phi_h^{n-1}) (\widehat u_h^n-\mathring u_h^n),\widehat u_{h,*}^n-\mathring u_{h,*}^n) +\frac{1}{4}((\widehat\phi_h^n-\mathring\phi_h^n) (\mathring u_h^n+u_h^{n-1}),\widehat u_{h,*}^n-\mathring u_{h,*}^n).\label{*6.5} \end{aligned}$$ Subtracting the real part of the equation [\[\*6.5\]](#*6.5){reference-type="eqref" reference="*6.5"} from its imaginary part, we get $$\begin{aligned} & \frac{1}{\tau}\|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{L^2}^2+ \frac{1}{2}\|\nabla(\widehat u_{h,*}^n-\mathring u_{h,*}^n)\|_{L^2}^2 \nonumber\\ &\le C(\|\widehat\phi_h^n+\phi_h^{n-1}\|_{L^2} \|\widehat u_h^n-\mathring u_h^n\|_{L^4} +\|\widehat\phi_h^n-\mathring \phi_h^n\|_{L^4} \|\mathring u_h^n+u_h^{n-1}\|_{L^2}) \|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{L^4}\nonumber\\ &\le \varepsilon\|\nabla(\widehat u_{h,*}^n-\mathring u_{h,*}^n)\|_{L^2}^2 +C\|\widehat\phi_h^n+\phi_h^{n-1}\|_{L^2}^2 \|\widehat u_h^n-\mathring u_h^n\|_{L^4}^2 +C\|\widehat\phi_h^n-\mathring\phi_h^n\|_{L^4}^2 \|\mathring
u_h^n+u_h^{n-1}\|_{L^2}^2.\nonumber\end{aligned}$$ Since $u_h^{n-1}$, $\mathring u_h^n$, $\phi_h^{n-1}$ and $\widehat\phi_h^n$ $\in V_h\hookrightarrow L^2(\Omega)$, the norms $\|\widehat\phi_h^n+\phi_h^{n-1}\|_{L^2}$ and $\|\mathring u_h^n+u_h^{n-1}\|_{L^2}$ are bounded, which together with the above inequality leads to $$\begin{aligned} \frac{1}{\tau}\|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{L^2}^2+\frac{1}{2}\|\nabla(\widehat u_{h,*}^n-\mathring u_{h,*}^n)\|_{L^2}^2 \le C\|\widehat u_h^n-\mathring u_h^n\|_{L^4}^2 +C\|\widehat\phi_h^n-\mathring\phi_h^n\|_{L^4}^2.\nonumber\end{aligned}$$ Thus, when $\widehat u_h^n\rightarrow\mathring u_h^n$ and $\widehat\phi_h^n\rightarrow\mathring\phi_h^n$, we get $$\begin{aligned} \|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{H^1} \le\|\widehat u_{h,*}^n-\mathring u_{h,*}^n\|_{L^2} +\|\nabla(\widehat u_{h,*}^n-\mathring u_{h,*}^n)\|_{L^2}\rightarrow 0.\label{continous1}\end{aligned}$$ Substituting $w_h=\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n$ into [\[\*6\]](#*6){reference-type="eqref" reference="*6"} yields $$\begin{aligned} & \alpha\|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{L^2}^2 +\beta^2\|\nabla(\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n)\|_{L^2}^2\nonumber\\ & =(|\widehat u_h^n|^2-|\mathring u_h^n|^2,\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n)\nonumber\\ & =((|\widehat u_h^n|+|\mathring u_h^n|)(|\widehat u_{h}^n|-|\mathring u_h^n|),\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n)\nonumber\\ & \le (\|\widehat u_h^n\|_{L^4}+\|\mathring u_h^n\|_{L^4})\|\widehat u_{h}^n-\mathring u_h^n\|_{L^4}\|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{L^2}\nonumber\\ & \le C\|\widehat u_{h}^n-\mathring u_h^n\|_{L^4}^2 +\varepsilon\alpha \|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{L^2}^2.
\quad\mbox{(use $\widehat u_h^n$ and $\mathring u_h^n\in V_h\hookrightarrow L^4$)}\nonumber\end{aligned}$$ Thus, choosing $\varepsilon$ to be a sufficiently small constant, we obtain $$\begin{aligned} & \alpha\|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{L^2}^2 +\beta^2\|\nabla(\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n)\|_{L^2}^2 \le C\|\widehat u_h^n-\mathring u_h^n\|_{L^4}^2\rightarrow 0\end{aligned}$$ as $\widehat u_h^n\rightarrow\mathring u_h^n$, which leads to $$\begin{aligned} \|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{H^1} \le \|\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n\|_{L^2} +\|\nabla(\widehat\phi_{h,*}^n-\mathring\phi_{h,*}^n)\|_{L^2}\rightarrow 0\label{continous2}\end{aligned}$$ as $\widehat u_h^n\rightarrow\mathring u_h^n$. Combining [\[continous1\]](#continous1){reference-type="eqref" reference="continous1"} and [\[continous2\]](#continous2){reference-type="eqref" reference="continous2"}, we complete the proof of the continuity of the map ${\mathcal G}$. Additionally, since the FE space $V_h\times V_h$ is finite-dimensional, it follows that ${\mathcal G}$ is a compact map.\ **Step 3. Uniform boundedness of the set $\varTheta$**\ Next, we prove the uniform boundedness of the set $\varTheta$: $$\begin{aligned} \|u_{h,*}^n\|_{H^1} +\|\phi_{h,*}^n\|_{H^1} \le M,\quad\forall\,(u_{h,*}^n,\phi_{h,*}^n) \in \varTheta, \nonumber\end{aligned}$$ where $M$ is a positive constant independent of $u_{h,*}^n$ and $\phi_{h,*}^n$. 
For any $(u_{h,*}^n,\phi_{h,*}^n)\in\varTheta$, there exists $\theta\in[0,1]$ such that $(u_{h,*}^n,\phi_{h,*}^n)=\theta\,{\mathcal G}(u_{h,*}^n,\phi_{h,*}^n)$. The case $\theta=0$ gives the zero element, for which the bound is trivial, so we may assume $$\begin{aligned} {\mathcal G}(u_{h,*}^n,\phi_{h,*}^n)=\frac{1}{\theta}(u_{h,*}^n,\phi_{h,*}^n),\quad\theta\in(0,1],\nonumber\end{aligned}$$ that is, $$\begin{aligned} & \frac{\bf i}{\tau}(u_{h,*}^n-\theta u_h^{n-1},v_h)-\frac{1}{2}(\nabla u_{h,*}^n,\nabla v_h)\nonumber\\ & \qquad=\frac{\theta}{2}(\nabla u_{h}^{n-1},\nabla v_h)+\frac{\theta}{4}((\phi_{h,*}^n+\phi_h^{n-1})(u_{h,*}^n+u_h^{n-1}),v_h),\label{bounded1}\\ & \alpha(\phi_{h,*}^n,w_h)+\beta^2(\nabla\phi_{h,*}^n,\nabla w_h)=\theta(|u_{h,*}^n|^2,w_h) \label{bounded2}\end{aligned}$$ for any $v_h\in V_h, w_h\in V_h$. By [\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} and [\[bounded2\]](#bounded2){reference-type="eqref" reference="bounded2"}, we have $$\begin{aligned} \alpha(\phi_{h,*}^n-\phi_{h}^{n-1},w_h)+\beta^2(\nabla(\phi_{h,*}^n-\phi_{h}^{n-1}),\nabla w_h)=\theta(|u_{h,*}^n|^2-|u_{h}^{n-1}|^2,w_h).\nonumber\end{aligned}$$ Substituting $w_h=\phi_{h,*}^n+\phi_h^{n-1}$ into the above equation, we obtain $$\begin{aligned} \alpha(\|\phi_{h,*}^n\|_{L^2}^2-\|\phi_h^{n-1}\|_{L^2}^2)+\beta^2(\|\nabla\phi_{h,*}^n\|_{L^2}^2-\|\nabla\phi_h^{n-1}\|_{L^2}^2)=\theta(|u_{h,*}^n|^2-|u_h^{n-1}|^2,\phi_{h,*}^n+\phi_h^{n-1} ).\label{bounded2.5}\end{aligned}$$ Further, taking $v_h=u_{h,*}^n-u_h^{n-1}$ in [\[bounded1\]](#bounded1){reference-type="eqref" reference="bounded1"}, we have $$\begin{aligned} & \frac{\bf i}{2}(\|u_{h,*}^n\|_{L^2}^2-\|u_h^{n-1}\|_{L^2}^2+\|u_{h,*}^n-u_h^{n-1}\|_{L^2}^2)\nonumber\\ & \quad-\frac{\tau}{4}(\|\nabla u_{h,*}^n\|_{L^2}^2-\|\nabla u_h^{n-1}\|_{L^2}^2+\|\nabla(u_{h,*}^n-u_h^{n-1})\|_{L^2}^2)\nonumber\\ & ={\bf i}\theta(u_h^{n-1},u_{h,*}^n-u_h^{n-1})+\frac{\theta\tau}{2}(\nabla u_h^{n-1},\nabla(u_{h,*}^n-u_h^{n-1}))\nonumber\\ & \quad+\frac{\theta\tau}{4}((\phi_{h,*}^n+\phi_h^{n-1})(u_{h,*}^n+u_h^{n-1}),u_{h,*}^n-u_h^{n-1})\nonumber\\ & ={\bf i}\theta(u_h^{n-1},u_{h,*}^n-u_h^{n-1})+\frac{\theta\tau}{2}(\nabla
u_h^{n-1},\nabla(u_{h,*}^n-u_h^{n-1}))\nonumber\\ & \quad+\frac{\alpha\tau}{4}(\|\phi_{h,*}^n\|_{L^2}^2-\|\phi_h^{n-1}\|_{L^2}^2)+\frac{\beta^2\tau}{4}(\|\nabla\phi_{h,*}^n\|_{L^2}^2 -\|\nabla\phi_h^{n-1}\|_{L^2}^2),\label{bounded3}\end{aligned}$$ where we use [\[bounded2.5\]](#bounded2.5){reference-type="eqref" reference="bounded2.5"} to get the last equality. Taking the real parts of [\[bounded3\]](#bounded3){reference-type="eqref" reference="bounded3"}, we get $$\begin{aligned} & \frac{\tau}{4}\|\nabla u_{h,*}^n\|_{L^2}^2+\frac{\alpha\tau}{4}\|\phi_{h,*}^n\|_{L^2}^2 +\frac{\beta^2\tau}{4}\|\nabla\phi_{h,*}^n\|_{L^2}^2\nonumber \\ &\le \left|{\rm Im}(\theta(u_h^{n-1},u_{h,*}^n-u_h^{n-1}))+{\rm Re}\Big(\frac{\theta\tau}{2}(\nabla u_h^{n-1},\nabla(u_{h,*}^n-u_h^{n-1}))\Big)\right|\nonumber\\ & \quad+\frac{\tau}{4}\|\nabla u_{h}^{n-1}\|_{L^2}^2+\frac{\alpha\tau}{4}\|\phi_{h}^{n-1}\|_{L^2}^2+\frac{\beta^2\tau}{4}\|\nabla\phi_{h}^{n-1}\|_{L^2}^2\nonumber \\ &\le \theta\|u_h^{n-1}\|_{L^2}\|u_{h,*}^{n}\|_{L^2}+\frac{\theta\tau}{2}\|\nabla u_h^{n-1}\|_{L^2}\|\nabla u_{h,*}^{n}\|_{L^2}\nonumber\\ & \quad+\frac{(2\theta+1)\tau}{4}\|\nabla u_h^{n-1}\|_{L^2}^2+\frac{\alpha\tau}{4}\|\phi_h^{n-1}\|_{L^2}^2+\frac{\beta^2\tau}{4}\|\nabla\phi_h^{n-1}\|_{L^2}^2 \nonumber\\ &\le \frac{\theta}{4}\|u_{h,*}^{n}\|_{L^2}^2+\theta\|u_h^{n-1}\|_{L^2}^2+\frac{\theta\tau}{8}\|\nabla u_{h,*}^{n}\|_{L^2}^2\nonumber\\ & \quad+\frac{(4\theta+1)\tau}{4}\|\nabla u_h^{n-1}\|_{L^2}^2+\frac{\alpha\tau}{4}\|\phi_h^{n-1}\|_{L^2}^2+\frac{\beta^2\tau}{4}\|\nabla\phi_h^{n-1}\|_{L^2}^2.\label{bounded3.1}\end{aligned}$$ Moreover, taking the imaginary parts of [\[bounded3\]](#bounded3){reference-type="eqref" reference="bounded3"}, we get $$\begin{aligned} \frac{1}{2} \|u_{h,*}^n\|_{L^2}^2 & \le\frac{1}{2}\|u_h^{n-1}\|_{L^2}^2+\left|{\rm Re}\left(\theta(u_h^{n-1},u_{h,*}^n-u_h^{n-1})\right) +{\rm Im}\Big(\frac{\theta\tau}{2}(\nabla u_h^{n-1},\nabla(u_{h,*}^n-u_h^{n-1})) \Big)\right|\nonumber\\ &\le 
\frac{2\theta+1}{2}\|u_h^{n-1}\|_{L^2}^2 +\theta\|u_h^{n-1}\|_{L^2}\|u_{h,*}^{n}\|_{L^2} +\frac{\theta\tau}{2}\|\nabla u_h^{n-1}\|_{L^2}\|\nabla u_{h,*}^{n}\|_{L^2}\nonumber\\ &\le \frac{4\theta+1}{2}\|u_h^{n-1}\|_{L^2}^2+\frac{\theta}{4}\|u_{h,*}^{n}\|_{L^2}^2+\frac{\theta\tau}{8}\|\nabla u_{h,*}^{n}\|_{L^2}^2+\frac{\theta\tau}{2}\|\nabla u_h^{n-1}\|_{L^2}^2.\label{bounded3.2}\end{aligned}$$ Adding [\[bounded3.1\]](#bounded3.1){reference-type="eqref" reference="bounded3.1"} and [\[bounded3.2\]](#bounded3.2){reference-type="eqref" reference="bounded3.2"}, we have $$\begin{aligned} & \frac{1}{2}\|u_{h,*}^n\|_{L^2}^2+\frac{\tau}{4}\|\nabla u_{h,*}^n\|_{L^2}^2+\frac{\alpha\tau}{4}\|\phi_{h,*}^n\|_{L^2}^2+\frac{\beta^2\tau}{4}\|\nabla\phi_{h,*}^n\|_{L^2}^2\nonumber\\ &\le \frac{\theta}{2}\|u_{h,*}^n\|_{L^2}^2+\frac{\theta\tau}{4}\|\nabla u_{h,*}^n\|_{L^2}^2 +\frac{6\theta+1}{2}\|u_h^{n-1}\|_{L^2}^2\nonumber\\ & \quad+\frac{(6\theta+1)\tau}{4}\|\nabla u_h^{n-1}\|_{L^2}^2+\frac{\alpha\tau}{4}\|\phi_h^{n-1}\|_{L^2}^2+\frac{\beta^2\tau}{4}\|\nabla\phi_h^{n-1}\|_{L^2}^2.\nonumber\end{aligned}$$ Since $u_h^{n-1}$, $\phi_h^{n-1}\in V_h\hookrightarrow H^1(\Omega)$, the norms $\|u_h^{n-1}\|_{L^2}$, $\|\nabla u_h^{n-1}\|_{L^2}$, $\|\phi_h^{n-1}\|_{L^2}$ and $\|\nabla\phi_h^{n-1}\|_{L^2}$ are bounded, which together with the above inequality leads to $$\begin{aligned} \|u_{h,*}^n\|^2_{H^1}+\|\phi_{h,*}^n\|^2_{H^1} \le\|u_{h,*}^n\|_{L^2}^2+\|\nabla u_{h,*}^n\|_{L^2}^2+\|\phi_{h,*}^n\|_{L^2}^2+\|\nabla\phi _{h,*}^n\|_{L^2}^2\le M,\nonumber\end{aligned}$$ where $M>0$ is a constant independent of $u_{h,*}^n$ and $\phi_{h,*}^n$. Therefore, we complete the proof of the uniform boundedness of the set $\varTheta$. Based on the above analysis, we conclude that the map ${\mathcal G}$ satisfies the three conditions in Lemma [Lemma 3](#fixed){reference-type="ref" reference="fixed"}.
Thus, the scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} has a solution, which is a fixed point of the map ${\mathcal G}$. ◻ Next, we prove the uniqueness of the solution to the system [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}. **Theorem 4**. *Under the conditions of Lemma [Lemma 3](#fixed){reference-type="ref" reference="fixed"}, there exists a constant $\tau^*>0$ such that, when $\tau\le\tau^*$, the solution $(u_h^n,\phi_h^n)$ $(n=1,2,\dots,N)$ to the scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} is unique.* *Proof.* Let ${(u_{h,1}^n,\phi_{h,1}^n)}$ and ${(u_{h,2}^n,\phi_{h,2}^n)}$ be two FE solutions of the scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}. We denote $$\begin{aligned} \widetilde u_h^n=u_{h,1}^n-u_{h,2}^n\quad\textrm{and} \quad\widetilde\phi_h^n=\phi_{h,1}^n-\phi_{h,2}^n.\nonumber\end{aligned}$$ From [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}, we have $$\begin{aligned} & \frac{\bf i}{\tau}(\widetilde u_h^n ,v_h) -\frac{1}{2}(\nabla\widetilde u_h^n,\nabla v_h)\nonumber\\ & =\frac{1}{4}((\phi_{h,1}^n+\phi_h^{n-1})(u_{h,1}^n+u_h^{n-1})-(\phi_{h,2}^n+\phi_h^{n-1})(u_{h,2}^n+u_h^{n-1}),v_h),\label{unique1}\\ & \alpha(\widetilde\phi_h^n,w_h)+\beta^2(\nabla\widetilde\phi_h^n,\nabla w_h)=(|u_{h,1}^n|^2-|u_{h,2}^n|^2,w_h).\label{unique2}\end{aligned}$$ Substituting $v_h=\widetilde u_h^n$ into [\[unique1\]](#unique1){reference-type="eqref" reference="unique1"}, we obtain $$\begin{aligned} & \frac{\bf i}{\tau}\|\widetilde u_h^n\|_{L^2}^2-\frac{1}{2}\|\nabla\widetilde u_h^n\|_{L^2}^2\nonumber \\ & =\frac{1}{4}((\phi_{h,1}^n+\phi_h^{n-1})(u_{h,1}^n+u_h^{n-1})-(\phi_{h,2}^n+\phi_h^{n-1}
)(u_{h,2}^n+u_h^{n-1}),\widetilde u_h^n)\nonumber\\ & =\frac{1}{4}(\widetilde\phi_h^n u_{h,1}^n,\widetilde u_h^n)+\frac{1}{4}(\widetilde\phi_h^n u_h^{n-1},\widetilde u_h^n)+\frac{1}{4}(\phi_{h,2}^n\widetilde u_{h}^n,\widetilde u_h^n)+\frac{1}{4}(\phi_h^{n-1}\widetilde u_h^n,\widetilde u_h^n).\label{unique3}\end{aligned}$$ By taking the imaginary parts of [\[unique3\]](#unique3){reference-type="eqref" reference="unique3"}, we obtain $$\begin{aligned} \frac{1}{\tau}\|\widetilde u_h^n\|_{L^2}^2 &= \frac{1}{4}{\rm Im} \left((\widetilde\phi_h^n u_{h,1}^n,\widetilde u_h^n)+(\widetilde\phi_h^n u_h^{n-1},\widetilde u_h^n)+(\phi_{h,2}^n \widetilde u_h^n,\widetilde u_h^n)+(\phi_h^{n-1}\widetilde u_h^n,\widetilde u_h^n)\right)\nonumber\\ &= \frac{1}{4}{\rm Im}\left((\widetilde\phi_h^n u_{h,1}^n,\widetilde u_h^n)+(\widetilde\phi_h^n u_h^{n-1},\widetilde u_h^n)\right)\nonumber\\ &\le C\|\widetilde\phi_h^n\|_{L^6}\|u_{h,1}^n\|_{L^3}\|\widetilde u_h^n\|_{L^2}+C\|\widetilde\phi_h^n\|_{L^6}\|u_{h}^{n-1}\|_{L^3}\|\widetilde u_h^n\|_{L^2}\nonumber\\ &\le C\|\nabla\widetilde\phi_h^n\|_{L^2}\|\nabla u_{h,1}^n\|_{L^2}\|\widetilde u_h^n\|_{L^2}+C\|\nabla \widetilde\phi_h^n\|_{L^2}\|\nabla u_h^{n-1}\|_{L^2}\|\widetilde u_h^n\|_{L^2}\nonumber\\ &\le C\|\nabla\widetilde\phi_h^n\|_{L^2}^2+C\|\widetilde u_h^n\|_{L^2}^2. 
\qquad\qquad \mbox{(use \eqref{ME4})}\label{unique3.5}\end{aligned}$$ Further, taking the real parts of [\[unique3\]](#unique3){reference-type="eqref" reference="unique3"}, we get $$\begin{aligned} \|\nabla\widetilde u_h^n\|_{L^2}^2 &\le \frac{1}{2}\left|{\rm Re}\left((\widetilde\phi_h^n u_{h,1}^n,\widetilde u_h^n )+(\widetilde\phi_h^n u_h^{n-1},\widetilde u_h^n)+(\phi_{h,2}^n\widetilde u_h^n,\widetilde u_h^n)+(\phi_h^{n-1}\widetilde u_h^n,\widetilde u_h^n)\right)\right|\nonumber\\ &\le C\|\widetilde\phi_h^n\|_{L^6}\|u_{h,1}^n\|_{L^3}\|\widetilde u_h^n\|_{L^2}+C\|\widetilde\phi_h^n\|_{L^6}\|u_{h}^{n-1}\|_{L^3}\|\widetilde u_h^n\|_{L^2}\nonumber\\ & \quad+C\|\phi_{h,2}^n\|_{L^2}\|\widetilde u_h^n\|_{L^4}^2+C\|\phi_h^{n-1}\|_{L^2}\|\widetilde u _h^n\|_{L^4}^2\nonumber\\ & \le C\|\nabla\widetilde\phi_h^n\|_{L^2}\|\widetilde u_h^n\|_{L^2}(\|\nabla u_{h,1}^n\|_{L^2}+\|\nabla u_h^{n-1}\|_{L^2})\nonumber\\ & \quad+C\|\widetilde u_h^n\|_{L^2}\|\nabla\widetilde u_h^n\|_{L^2}(\|\phi_{h,2}^n\|_{L^2}+\|\phi _{h}^{n-1}\|_{L^2})\quad\,\mbox{(use \eqref{G-N-Ineq})}\nonumber\\ &\le C\|\nabla\widetilde\phi_h^n\|_{L^2}^2 +C\|\widetilde u_h^n\|_{L^2}^2 +\varepsilon\|\nabla\widetilde u_h^n \|_{L^2}^2. 
\qquad\qquad\mbox{(use \eqref{ME4})}\nonumber\end{aligned}$$ It follows from the above inequality and [\[unique3.5\]](#unique3.5){reference-type="eqref" reference="unique3.5"} that $$\begin{aligned} \|\widetilde u_h^n\|_{L^2}^2+\tau\|\nabla\widetilde u_h^n\|_{L^2}^2 \le C\tau\|\nabla\widetilde\phi_h^n\|_{L^2}^2\label{unique4}\end{aligned}$$ as $\tau\le\tau^*_1$ ($\tau^*_1>0$ is a certain constant).\ Taking $w_h=\widetilde \phi _h^n$ in [\[unique2\]](#unique2){reference-type="eqref" reference="unique2"}, along with [\[unique4\]](#unique4){reference-type="eqref" reference="unique4"}, we have $$\begin{aligned} \alpha\|\widetilde\phi_h^n\|_{L^2}^2+\beta^2\|\nabla\widetilde\phi_h^n\|_{L^2}^2 &= (|u_{h,1}^n|^2-|u_{h,2}^n|^2,\widetilde\phi_h^n)=(u_{h,1}^n(\widetilde u_h^n)^*+\widetilde u_h^n (u_{h,2}^n)^*,\widetilde\phi_h^n)\nonumber\\ &\le \|u_{h,1}^n\|_{L^3}\|\widetilde u_h^n\|_{L^2}\|\widetilde\phi_h^n\|_{L^6}+\|\widetilde u_h^n\|_{L^2}\|u_{h,2}^n\|_{L^3}\|\widetilde\phi_h^n\|_{L^6}\nonumber\\ &\le C\|\nabla u_{h,1}^n\|_{L^2}\|\widetilde u_h^n\|_{L^2}\|\nabla\widetilde\phi_h^n\|_{L^2} +C\|\widetilde u_h^n\|_{L^2}\|\nabla u_{h,2}^n\|_{L^2}\|\nabla\widetilde\phi_h^n\|_{L^2}\nonumber\\ &\le C\|\widetilde u_h^n\|_{L^2}^2+\varepsilon\|\nabla\widetilde\phi_h^n\|_{L^2}^2 \qquad\qquad\mbox{(use \eqref{ME4})}\nonumber\\ &\le (C\tau+\varepsilon)\|\nabla\widetilde\phi_h^n\|_{L^2}^2.\nonumber\end{aligned}$$ Combining the above result and [\[unique4\]](#unique4){reference-type="eqref" reference="unique4"}, we have $$\begin{aligned} \|\widetilde u_h^n\|_{L^2}^2+\tau\|\nabla\widetilde u_h^n\|_{L^2}^2+\alpha\|\widetilde\phi_h^n\|_{L^2}^2+\beta^2\|\nabla\widetilde\phi_h^n\|_{L^2}^2 &\le 0\nonumber\end{aligned}$$ as $\tau\le\tau^*_2$ ($\tau^*_2>0$ is a certain constant). 
Thus, letting $\tau^*=\min\{\tau^*_1,\tau^*_2\}$, we obtain $$\begin{aligned} \|\widetilde u_h^n\|_{L^2}=\|\nabla\widetilde u_h^n\|_{L^2} =\|\widetilde\phi_h^n\|_{L^2}=\|\nabla\widetilde\phi_h^n\|_{L^2} =0\nonumber\end{aligned}$$ for $\tau\le\tau^*$. The proof of Theorem [Theorem 4](#TH4-2){reference-type="ref" reference="TH4-2"} is complete. ◻ # Optimal $L^2$ error estimates In order to obtain the optimal $L^2$ error estimates, we introduce the Ritz projection $R_h:H_0^1(\Omega)\to{V_h}$, defined by $$\begin{aligned} (\nabla(v-R_h v),\nabla w_h)=0,\quad \forall\,w_h\in V_h.\nonumber\end{aligned}$$ By the classical FE theory [@ritz1; @ritz2], it holds that $$\begin{aligned} &\|v-R_h v\|_{L^2}+h\|\nabla(v-R_h v)\|_{L^2}\le Ch^{r+1}\|v\|_{H^{r+1}},\quad\forall\, v\in H^{r+1}(\Omega)\cap H_0^1(\Omega),\label{ritz1}\\ &\|R_h v\|_{L^\infty}\le C\|v\|_{H^2}, \quad \forall\, v\in H^2(\Omega)\cap H_0^1(\Omega).\label{ritz2}\end{aligned}$$ Let $\Pi_h:C(\Omega)\to V_h$ be the Lagrange interpolation operator. By the classical interpolation theory [@interpolation1; @ritz2], it is easy to see that $$\begin{aligned} \label{interpolation} \|v- \Pi_h v\|_{L^2}+h\|\nabla(v- \Pi_h v)\|_{L^2} \le Ch^{\ell+1}\|v\|_{H^{\ell+1}},\quad\, \forall\, v\in H^{\ell+1}(\Omega), \,\, 1\le\ell\le r.\end{aligned}$$ For the numerical solution $(u_h^n,\phi_h^n)$ of the system [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}, we denote $$\begin{aligned} e_u^n=R_h u^n-u_h^n \quad\mbox{and}\quad e_\phi^n=R_h \phi^n-\phi_h^n.\end{aligned}$$ In the following subsections, we will give optimal $L^2$ error estimates of the fully-discrete scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} for the Schrödinger--Helmholtz equations (when $\alpha \neq 0$ and $\beta\neq 0$) and the Schrödinger--Poisson equations (when $\alpha = 0$ and $\beta=1$), respectively.
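The Ritz projection is computable by solving one stiffness system per projected function. The sketch below (ours, not the paper's code; a 1D $P_1$ toy on $[0,1]$ with a uniform mesh) assembles that system, uses the classical 1D fact that the $H_0^1$-Ritz projection of a smooth function with zero boundary values coincides with its nodal interpolant, and checks the $O(h^{r+1})=O(h^2)$ $L^2$ rate of [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"} by halving $h$.

```python
import numpy as np

def ritz_projection_p1(v, nodes):
    """Interior nodal values of the H^1_0 Ritz projection of v onto P1
    elements on a uniform mesh of [0,1]; assumes v(0) = v(1) = 0.
    Imposes (d/dx(v - R_h v), d/dx phi_i) = 0 for each hat function phi_i."""
    h = nodes[1] - nodes[0]
    m = len(nodes) - 2
    # P1 stiffness matrix: tridiagonal (-1, 2, -1) / h
    K = (np.diag(np.full(m, 2.0)) + np.diag(-np.ones(m - 1), 1)
         + np.diag(-np.ones(m - 1), -1)) / h
    vals = v(nodes)
    # (v', phi_i') integrates exactly to (2 v(x_i) - v(x_{i-1}) - v(x_{i+1})) / h
    b = (2.0 * vals[1:-1] - vals[:-2] - vals[2:]) / h
    return np.linalg.solve(K, b)

def l2_error(v, nodes, interior_vals):
    """L^2 distance between v and the P1 function with the given nodal values."""
    full = np.concatenate(([0.0], interior_vals, [0.0]))
    xs = np.linspace(0.0, 1.0, 2001)
    ph = np.interp(xs, nodes, full)
    return np.sqrt(np.sum((v(xs) - ph) ** 2) * (xs[1] - xs[0]))

v = lambda t: np.sin(np.pi * t)
nodes_c = np.linspace(0.0, 1.0, 11)    # h = 1/10
nodes_f = np.linspace(0.0, 1.0, 21)    # h = 1/20
err_c = l2_error(v, nodes_c, ritz_projection_p1(v, nodes_c))
err_f = l2_error(v, nodes_f, ritz_projection_p1(v, nodes_f))
```

Halving $h$ divides the $L^2$ error by roughly four, consistent with the $O(h^2)$ rate for $r=1$; in higher dimensions the Ritz projection no longer coincides with the interpolant, but the same linear solve defines it.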
## Error estimates of the Schrödinger--Helmholtz equations The weak form of system [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"} can be written as $$\begin{aligned} & {\bf i}(D_\tau u^n,v)-(\nabla\overline u^{n-\frac{1}{2}},\nabla v)-(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}},v)=(R^{n-\frac{1}{2}}_{tr},v),\label{weak1}\\ & \alpha(\phi^n,w) +\beta^2(\nabla\phi^n,\nabla w)=(|u^n|^2,w)\label{weak2}\end{aligned}$$ for any $v\in H^1_0(\Omega)$, $w\in H^1_0(\Omega)$, and $n=1,2,\dots,N$, where $$\begin{aligned} \label{trun-err} R^{n-\frac{1}{2}}_{tr}= {\bf i}(D_\tau u^n-u_t^{n-\frac{1}{2}}) +\Delta(\overline u^{n-\frac{1}{2}}-u^{n-\frac{1}{2}}) -(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}} -\phi^{n-\frac{1}{2}}u^{n-\frac{1}{2}})\end{aligned}$$ denotes the truncation error. By Taylor's expansion and the assumption [\[solution\]](#solution){reference-type="eqref" reference="solution"}, we obtain $$\begin{aligned} \tau\sum^N_{n=1}\|R^{n-\frac{1}{2}}_{tr}\|^2_{L^2}\le C\tau^4.\label{truncation}\end{aligned}$$ Subtracting [\[weak1\]](#weak1){reference-type="eqref" reference="weak1"}-[\[weak2\]](#weak2){reference-type="eqref" reference="weak2"} from [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}, we get $$\begin{aligned} {\bf i}(D_\tau e_u^n,v_h) -(\nabla\overline e_u^{n-\frac{1}{2}},\nabla v_h) &-(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}-\overline\phi_h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},v_h)\nonumber\\ &\qquad =(R^{n-\frac{1}{2}}_{tr},v_h)-{\bf i}(D_\tau(u^n-R_h u^n),v_h),\label{error1}\\ \alpha(e_\phi^n,w_h)+\beta^2(\nabla e_\phi^n,\nabla w_h) &=(|u^n|^2-|u_h^n|^2,w_h)-\alpha(\phi^n-R_h\phi^n,w_h) \label{error2}\end{aligned}$$ for any $v_h\in V_h$, $w_h\in V_h$.\ Taking $v_h=\overline e_u^{n-\frac{1}{2}}$ in [\[error1\]](#error1){reference-type="eqref" reference="error1"}, we obtain
$$\begin{aligned} & \frac{\bf i}{2\tau}(\|e_u^n\|_{L^2}^2-\|e_u^{n-1}\|_{L^2}^2) -\|\nabla\overline e_u^{n-\frac{1}{2}}\|_{L^2}^2\nonumber \\ & =(R^{n-\frac{1}{2}}_{tr},\overline e_u^{n-\frac{1}{2}})-{\bf i}(D_\tau(u^n-R_h u^n),\overline e_u^{n-\frac{1}{2}})+(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}-\overline\phi _h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},\overline e_u^{n-\frac{1}{2}}).\nonumber\end{aligned}$$ Taking the imaginary parts of the above equation leads to $$\begin{aligned} \label{u1} & \frac{1}{2\tau}(\|e_u^n\|_{L^2}^2-\|e_u^{n-1}\|_{L^2}^2)\\ &\le \left| {\rm Im}(R^{n-\frac{1}{2}}_{tr},\overline e_u^{n-\frac{1}{2}}) -{\rm Re}(D_\tau(u^n-R_h u^n),\overline e_u^{n-\frac{1}{2}}) +{\rm Im}(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}-\overline\phi _h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},\overline e_u^{n-\frac{1}{2}}) \right|\nonumber\\ &\le C\|R^{n-\frac{1}{2}}_{tr}\|_{L^2}^2+C\|\overline e_u^{n-\frac{1}{2}}\|_{L^2}^2+C\|D_\tau(u^n-R_h u^n)\|_{L^2}^2+\left|{\rm Im}(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}- \overline\phi_h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},\overline e_u^{n-\frac{1}{2}})\right|\nonumber\\ &\le C\| R^{n-\frac{1}{2}}_{tr}\|_{L^2}^2+C(\|e_u^n\|_{L^2}^2+\|e_u^{n-1}\|_{L^2}^2)+Ch^{2(r+1)}+\left|{\rm Im}(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}- \overline\phi_h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},\overline e_u^{n-\frac{1}{2}})\right|,\nonumber\end{aligned}$$ where we use [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"} and [\[solution\]](#solution){reference-type="eqref" reference="solution"} to get the last inequality. 
It remains to estimate the last term $$\begin{aligned} & \left|{\rm Im}(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}-\overline\phi _h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},\overline e_u^{n-\frac{1}{2}})\right|\nonumber\\ &\le \frac{1}{4}\left|{\rm Im}(\phi^n u^n-\phi_h^n u_h^n,\overline e_u^{n-\frac{1}{2}})\right|+\frac{1}{4}\left|{\rm Im}(\phi^n u^{n-1}-\phi_h^n u_h^{n-1},\overline e_u^{n-\frac{1}{2}})\right|\nonumber\\ & \quad+\frac{1}{4}\left|{\rm Im}(\phi^{n-1}u^n-\phi_h^{n-1} u_h^n,\overline e_u^{n-\frac{1}{2}})\right| +\frac{1}{4}\left|{\rm Im}(\phi^{n-1}u^{n-1}-\phi_h^{n-1} u_h^{n-1},\overline e_u^{n-\frac{1}{2}})\right|\nonumber\\ &=: \sum\limits_{i=1}^4 I_i.\nonumber \end{aligned}$$ Based on [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"}-[\[ritz2\]](#ritz2){reference-type="eqref" reference="ritz2"} and [\[solution\]](#solution){reference-type="eqref" reference="solution"}-[\[G-N-Ineq\]](#G-N-Ineq){reference-type="eqref" reference="G-N-Ineq"}, we have $$\begin{aligned} I_1 &\le \frac14\Big|{\rm Im} ((\phi^n-R_h\phi^n)u^n,\overline e_u^{n-\frac{1}{2}})+{\rm Im}(R_h\phi^n(u^n-R_h u^n),\overline e_u^{n-\frac{1}{2}})\nonumber \\ & \quad+{\rm Im}(R_h\phi^n e_u^n,\overline e_u^{n-\frac{1}{2}})+{\rm Im}(R_h u^ne_\phi^n,\overline e_u^{n-\frac{1}{2}})-{\rm Im}(e_\phi^n e_u^n,\overline e_u^{n-\frac{1}{2}}) \Big|\nonumber\\ &\le C\|u^n\|_{L^\infty}\|\phi^n-R_h\phi^n\|_{L^2}\|\overline e_u^{n-\frac{1}{2}}\|_{L^2} +C\|R_h\phi^n\|_{L^\infty}\|u^n-R_h u^n\|_{L^2}\|\overline e_u^{n-\frac{1}{2}}\|_{L^2} \nonumber\\ & \quad+C\|R_h\phi^n\|_{L^\infty}\|e_u^n\|_{L^2}\|\overline e_u^{n-\frac{1}{2}}\|_{L^2} +C\|R_h u^n\|_{L^\infty}\|e_\phi^n\|_{L^2}\|\overline e_u^{n-\frac{1}{2}}\|_{L^2} +C\|e_\phi^n\|_{L^4}\|e_u^n\|_{L^4}\|\overline e_u^{n-\frac{1}{2}}\|_{L^2}\nonumber\\ &\le C\|\phi^n-R_h\phi^n\|^2_{L^2} +C\|u^n-R_h u^n\|^2_{L^2} +C(\|e^n_u\|^2_{L^2} +\|e^n_{\phi}\|^2_{L^2})\nonumber\\ &\quad +C\|e_\phi^n\|_{L^2}\|\nabla 
e_\phi^n\|_{L^2}\|e_u^n\|_{L^2}\|\nabla e_u^n\|_{L^2} +C\|\overline e_u^{n-\frac{1}{2}}\|^2_{L^2} \nonumber\\ & \le Ch^{2(r+1)}+\|\overline e_u^{n-\frac{1}{2}}\|^2_{L^2} +C\|e_u^n\|^2_{L^2}+C\|e_\phi^n\|^2_{L^2}. \qquad\mbox{(use \eqref{ME4} and \eqref{ritz1})} \label{I1}\end{aligned}$$ Similarly, we have $$\begin{aligned} I_2+I_3+I_4 \le Ch^{2(r+1)}+C\|e_u^n\|_{L^2}^2 +C\|e_u^{n-1}\|_{L^2}^2+C\|e_\phi ^n\|_{L^2}^2+C\|e_\phi ^{n-1}\|_{L^2}^2.\label{I2}\end{aligned}$$ Thus, substituting [\[I1\]](#I1){reference-type="eqref" reference="I1"}-[\[I2\]](#I2){reference-type="eqref" reference="I2"} into [\[u1\]](#u1){reference-type="eqref" reference="u1"}, we obtain $$\begin{aligned} &\frac{1}{2\tau}(\|e_u^n\|_{L^2}^2-\|e_u^{n-1}\|_{L^2}^2) \nonumber\\ &\le Ch^{2(r+1)}+C\|R^{n-\frac12}_{tr}\|^2_{L^2}+C(\|e_u^n\|_{L^2}^2+\|e_u^{n-1}\|_{L^2}^2) +C(\|e_\phi ^n\|_{L^2}^2+\|e_\phi ^{n-1}\|_{L^2}^2).\label{u2}\end{aligned}$$ In order to estimate $\|e_\phi^n\|_{L^2}$, we choose $w_h=e_\phi^n$ in [\[error2\]](#error2){reference-type="eqref" reference="error2"} to get $$\begin{aligned} \alpha\|e_\phi^n\|_{L^2}^2 +\beta^2\|\nabla e_\phi^n\|_{L^2}^2 &\le \left|(|u^n|^2-|u_h^n|^2,e_\phi^n)\right| +\alpha\left|(\phi^n-R_h\phi^n,e_\phi^n)\right|\nonumber \\ &\le \left|(|u^n|^2-|u_h^n|^2,e_\phi^n)\right| +\alpha\|\phi^n-R_h\phi^n\|_{L^2}\|e_\phi^n\|_{L^2} \nonumber \\ & \le\left|(|u^n|^2-|u_h^n|^2,e_\phi^n)\right| +Ch^{2(r+1)}+\varepsilon\alpha\|e_\phi^n\|_{L^2}^2, \label{phi1}\end{aligned}$$ where the last inequality is obtained by [\[solution\]](#solution){reference-type="eqref" reference="solution"} and [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"}.
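The steps marked "(use \eqref{G-N-Ineq})" invoke the two-dimensional Gagliardo--Nirenberg (Ladyzhenskaya) inequality $\|v\|_{L^4}^2\le C\|v\|_{L^2}\|\nabla v\|_{L^2}$. A grid-based sanity check for one sample function (our own illustration; the grid size and test function are arbitrary choices):

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
v = np.sin(np.pi * X) * np.sin(np.pi * Y)        # sample H^1_0 function on [0,1]^2
vx, vy = np.gradient(v, h, edge_order=2)

w = np.ones(n); w[0] = w[-1] = 0.5               # 1-D trapezoid weights
W = np.outer(w, w) * h * h                       # 2-D quadrature weights
l4 = np.sum(W * v**4)                            # ||v||_{L^4}^4
l2 = np.sum(W * v**2)                            # ||v||_{L^2}^2
g2 = np.sum(W * (vx**2 + vy**2))                 # ||grad v||_{L^2}^2
ratio = l4 / (l2 * g2)                           # stays bounded, as the inequality predicts
```

For this function the ratio is about $0.11$, well below a moderate constant $C$, in line with the inequality.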
By [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"}-[\[ritz2\]](#ritz2){reference-type="eqref" reference="ritz2"} and [\[solution\]](#solution){reference-type="eqref" reference="solution"}, we get $$\begin{aligned} &\big|(|u^n|^2-|u_h^n|^2,e_\phi^n)\big|\nonumber\\ &=\big|(u^n(u^n)^*-u_h^n(u_h^n)^*,e_\phi^n)\big|\nonumber\\ &\le\big|(u^n(u^n-R_h u^n)^*,e_\phi^n)\big| +\big|((u^n-R_h u^n)(R_h u^n)^*, e_\phi^n)\big|\nonumber\\ &\quad +\big|(R_h u^n(e_u^n)^*, e_\phi^n)\big| +\big|(e_u^n(R_h u^n)^*, e_\phi^n)\big| +\big|(e_u^n(e_u^n)^*,e_\phi^n)\big|\nonumber\\ &\le (\|u^n\|_{L^\infty}+\|R_h u^n\|_{L^\infty})\|u^n-R_h u^n\|_{L^2}\|e_\phi^n\|_{L^2} \nonumber\\ &\quad +C\|R_h u^n\|_{L^\infty}\|e_u^n\|_{L^2}\|e_\phi^n\|_{L^2} +\|e_u^n\|_{L^4}^2\|e_\phi^n\|_{L^2}\nonumber \\ &\le Ch^{2(r+1)}+\varepsilon\alpha\|e^n_{\phi}\|^2_{L^2}+C\|e^n_{u}\|^2_{L^2}+C\|e^n_u\|^2_{L^2}\|\nabla e^n_u\|^2_{L^2}\quad\mbox{(use \eqref{G-N-Ineq})}\nonumber\\ &\le Ch^{2(r+1)}+\varepsilon \alpha\|e_\phi^n\|_{L^2}^2+C\|e_u^n\|_{L^2}^2.\quad\mbox{(use \eqref{ME4})} \nonumber\end{aligned}$$ The above inequality with [\[phi1\]](#phi1){reference-type="eqref" reference="phi1"} implies $$\begin{aligned} \alpha\|e_\phi^n\|_{L^2}^2+\beta^2\|\nabla e_\phi^n\|_{L^2}^2 \le Ch^{2(r+1)}+C\|e_u^n\|_{L^2}^2.\label{phi2}\end{aligned}$$ Combining [\[u2\]](#u2){reference-type="eqref" reference="u2"} and [\[phi2\]](#phi2){reference-type="eqref" reference="phi2"}, we obtain $$\begin{aligned} \frac{1}{2\tau}(\|e^n_u\|^2_{L^2}-\|e^{n-1}_u\|^2_{L^2}) \le Ch^{2(r+1)}+C\|R^{n-\frac12}_{tr}\|^2_{L^2}+C(\|e^n_u\|^2_{L^2}+\|e^{n-1}_u\|^2_{L^2}).\nonumber\end{aligned}$$ Summing up the above result from $n=1$ to $n=m$, we get $$\begin{aligned} \|e_u^m\|_{L^2}^2 \le& Ch^{2(r+1)}+\|e_u^0\|_{L^2}^2+C\tau\sum^m_{n=1}\|R^{n-\frac12}_{tr}\|^2_{L^2}+C\tau \sum\limits_{n=0}^m\| e_u^n\|_{L^2}^2\nonumber\\ \le& C\tau^4+Ch^{2(r+1)}+C\tau \sum\limits_{n=0}^m\| e_u^n\|_{L^2}^2\nonumber\end{aligned}$$ for $m=1,2,...,N,$ where 
we use the fact $e^0_u=0$ and [\[truncation\]](#truncation){reference-type="eqref" reference="truncation"} to get the last inequality. By the discrete Gronwall inequality, there exists a positive constant $\tau_1>0$, such that $$\begin{aligned} \max_{1\le n\le N}\|e_u^n\|_{L^2}^2\le C(h^{2(r+1)}+\tau^4)\nonumber\end{aligned}$$ provided $\tau<\tau_1$. Combining the above result and [\[phi2\]](#phi2){reference-type="eqref" reference="phi2"}, we obtain $$\begin{aligned} \max_{1\le n\le N}\|e_\phi^n\|_{L^2}^2 \le C(h^{2(r+1)}+\tau^4).\nonumber\end{aligned}$$ Thus, based on the above two estimates and [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"}, we have $$\begin{aligned} & \|u^n-u_h^n\|_{L^2} \le \|u^n-R_h u^n\|_{L^2}+\|e_u^n\|_{L^2} \le C(h^{r+1}+\tau^2),\label{Sch-Hel-1}\\ & \|\phi^n-\phi_h^n\|_{L^2} \le \|\phi^n-R_h\phi^n\|_{L^2}+\|e_\phi^n\|_{L^2} \le C(h^{r+1}+\tau^2)\label{Sch-Hel-2} \end{aligned}$$ for $n=1,2,...,N$. ## Error estimates for the Schrödinger--Poisson equations In this subsection, we consider the error estimates of the numerical scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} for $\alpha=0$, $\beta=1$.
Taking $\alpha=0$ and $\beta=1$ in [\[error1\]](#error1){reference-type="eqref" reference="error1"}-[\[error2\]](#error2){reference-type="eqref" reference="error2"}, the error equations can be presented as follows $$\begin{aligned} & {\bf i}(D_\tau e_u^n,v_h) -(\nabla\overline e_u^{n-\frac{1}{2}},\nabla v_h) -(\overline\phi^{n-\frac{1}{2}}\overline u^{n-\frac{1}{2}}-\overline\phi _h^{n-\frac{1}{2}}\overline u_h^{n-\frac{1}{2}},v_h) \nonumber\\ & \qquad=(R^{n-\frac{1}{2}}_{tr},v_h) -{\bf i}(D_\tau(u^n-R_h u^n),v_h),\label{error3}\\ & (\nabla e_\phi^n,\nabla w_h)=(|u^n|^2-|u_h^n|^2,w_h)\label{error4}\end{aligned}$$ for any $v_h\in V_h$ and $w_h\in V_h$, where $R^{n-\frac{1}{2}}_{tr}$ is the truncation error, defined exactly as in [\[trun-err\]](#trun-err){reference-type="eqref" reference="trun-err"}.\ Taking $w_h=e_\phi^n$ in [\[error4\]](#error4){reference-type="eqref" reference="error4"} and using [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"}, we get $$\begin{aligned} \|\nabla e_\phi^n\|_{L^2}^2&=(|u^n|^2-|u_h^n|^2,e_\phi^n)=(u^n(u^n)^*-u_h^n(u_h^n)^*,e_\phi^n)\nonumber \\ &\le \big|(u^n(u^n-R_h u^n)^*, e_\phi^n)\big| +\big|((u^n-R_h u^n)(u_h^n)^*,e_\phi^n)\big|\nonumber\\ &\qquad +\big|(u^n(e_u^n)^*, e_\phi^n)\big| +\big|(e_u^n(u_h^n)^*,e_\phi^n)\big| \nonumber\\ &\le (\|u^n\|_{L^4}+\|u_h^n\|_{L^4})\|e_\phi^n\|_{L^4}\|u^n-R_h u^n\|_{L^2} \nonumber\\ &\qquad+(\|u^n\|_{L^4}+\|u_h^n\|_{L^4})\|e_u^n\|_{L^2}\|e_\phi^n\|_{L^4}\nonumber\\ &\le Ch^{r+1}(\|u^n\|_{L^2}^{\frac{1}{2}}\|\nabla u^n\|_{L^2}^{\frac{1}{2}} +\|u_h^n\|_{L^2}^{\frac{1}{2}}\|\nabla u_h^n\|_{L^2}^{\frac{1}{2}}) \|\nabla e_\phi^n\|_{L^2} \nonumber\\ &\quad +C(\|u^n\|_{L^2}^{\frac{1}{2}}\|\nabla u^n\|_{L^2}^{\frac{1}{2}} +\|u_h^n\|_{L^2}^{\frac{1}{2}}\|\nabla u_h^n\|_{L^2}^{\frac{1}{2}})\|e_u^n\|_{L^2}\|\nabla e_\phi^n\|_{L^2}\qquad\mbox{(use \eqref{G-N-Ineq})} \nonumber\\ &\le Ch^{r+1}\|\nabla e_\phi^n\|_{L^2} +C\|e_u^n\|_{L^2}\|\nabla e_\phi^n\|_{L^2},\quad \qquad \qquad\quad\,\,
\mbox{(use \eqref{solution} and \eqref{ME4})}\nonumber \end{aligned}$$ which implies $$\begin{aligned} \|\nabla e_\phi^n\|_{L^2} \le Ch^{r+1}+C\|e_u^n\|_{L^2}.\label{phi3}\end{aligned}$$ In order to estimate $\|e_\phi^n\|_{L^2}$, we introduce the following dual problem $$\begin{aligned} -\Delta \psi =&e_\phi ^n, \quad\,\,\,\text{in }\Omega, \label{ell-pro-1}\\ \psi=&0, \qquad\text{on }\partial \Omega.\label{ell-pro-2}\end{aligned}$$ By the classical regularity estimates for elliptic equations [@Evans], it holds that $$\begin{aligned} \|\psi\|_{H^2} \le\|e_\phi^n\|_{L^2},\label{dual}\end{aligned}$$ for $\psi\in H^2(\Omega)$.\ Multiplying [\[ell-pro-1\]](#ell-pro-1){reference-type="eqref" reference="ell-pro-1"} by $e^n_\phi$, integrating by parts, and using [\[error4\]](#error4){reference-type="eqref" reference="error4"}, we get $$\begin{aligned} \|e_\phi^n\|_{L^2}^2 &= (\nabla\psi,\nabla e_\phi^n) =(\nabla\Pi_h\psi,\nabla e_\phi^n)+(\nabla(\psi-\Pi_h\psi),\nabla e_\phi^n) \nonumber \\ &= (|u^n|^2-|u_h^n|^2,\Pi_h\psi)+(\nabla(\psi-\Pi_h\psi),\nabla e_\phi^n) \nonumber\\ &\le \big|(u^n(u^n-R_h u^n)^*, \Pi_h\psi)\big| +\big|((u^n-R_h u^n)(u_h^n)^*, \Pi_h\psi)\big|\nonumber\\ &\quad +\big|(u^n(e_u^n)^*+e_u^n(u_h^n)^*, \Pi_h\psi)\big| +\|\nabla(\psi-\Pi_h\psi)\|_{L^2}\|\nabla e_\phi^n\|_{L^2}\nonumber\\ &\le \|u^n\|_{L^4}\|\Pi_h\psi\|_{L^4}\|u^n-R_h u^n\|_{L^2}+\|u^n_h\|_{L^4}\|\Pi_h\psi\|_{L^4}\|u^n-R_h u^n\|_{L^2}\nonumber\\ &\quad +(\|u^n\|_{L^4}+\|u_h^n\|_{L^4})\|\Pi_h\psi\|_{L^4}\|e_u^n\|_{L^2} +Ch\|\psi\|_{H^2}\|\nabla e_\phi^n\|_{L^2}\nonumber\\ &\le Ch^{r+1}\|\nabla\Pi_h\psi\|_{L^2}( \|u^n\|_{L^2}^{\frac{1}{2}}\|\nabla u^n\|_{L^2}^{\frac{1}{2}} +\|u_h^n\|_{L^2}^{\frac{1}{2}}\|\nabla u_h^n\|_{L^2}^{\frac{1}{2}}) \nonumber\\ &\quad +C\|e_u^n\|_{L^2}\|\nabla\Pi_h\psi\|_{L^2}(\|u^n\|_{L^2}^{\frac{1}{2}}\|\nabla u^n\|_{L^2}^{\frac{1}{2}} +\|u_h^n\|_{L^2}^{\frac{1}{2}}\|\nabla u_h^n\|_{L^2}^{\frac{1}{2}}) +Ch\|e_\phi^n\|_{L^2}\|\nabla e_\phi^n\|_{L^2} \nonumber\\ &\le
C(h^{r+1}+\|e_u^n\|_{L^2})\|e^n_{\phi}\|_{L^2}+Ch\|e_\phi^n\|_{L^2}\|\nabla e_\phi^n\|_{L^2}, \label{phi4}\end{aligned}$$ where we use [\[ME4\]](#ME4){reference-type="eqref" reference="ME4"}, [\[interpolation\]](#interpolation){reference-type="eqref" reference="interpolation"} and [\[dual\]](#dual){reference-type="eqref" reference="dual"} to get the last inequality. From [\[phi3\]](#phi3){reference-type="eqref" reference="phi3"} and [\[phi4\]](#phi4){reference-type="eqref" reference="phi4"}, we can get $$\begin{aligned} \|e_\phi^n\|_{L^2}\le Ch^{r+1}+C\|e_u^n\|_{L^2}.\label{phi5}\end{aligned}$$ Since the error equation [\[error3\]](#error3){reference-type="eqref" reference="error3"} is equivalent to [\[error1\]](#error1){reference-type="eqref" reference="error1"}, we substitute $v_h=\overline{e}^{n-\frac12}_u$ into [\[error3\]](#error3){reference-type="eqref" reference="error3"}, and apply the same analysis as [\[u1\]](#u1){reference-type="eqref" reference="u1"}-[\[u2\]](#u2){reference-type="eqref" reference="u2"} to obtain the following estimate: $$\begin{aligned} &\frac{1}{2\tau}(\|e_u^n\|_{L^2}^2-\|e_u^{n-1}\|_{L^2}^2) \nonumber\\ &\le Ch^{2(r+1)}+C\|R^{n-\frac12}_{tr}\|^2_{L^2}+C(\|e_u^n\|_{L^2}^2+\|e_u^{n-1}\|_{L^2}^2)+C(\|e_\phi ^n\|_{L^2}^2+\|e_\phi ^{n-1}\|_{L^2}^2).\label{u4}\end{aligned}$$ Substituting [\[phi5\]](#phi5){reference-type="eqref" reference="phi5"} into [\[u4\]](#u4){reference-type="eqref" reference="u4"} and summing up from $n=1$ to $n=m$, we get $$\begin{aligned} \|e_u^m\|_{L^2}^2 \le& Ch^{2(r+1)}+\|e_u^0\|_{L^2}^2+C\tau\sum^m_{n=1}\|R^{n-\frac12}_{tr}\|^2_{L^2}+C\tau\sum\limits_{n=0}^m\|e_u^n\|_{L^2}^2\\ \le& C(h^{2(r+1)}+\tau^4)+C\tau\sum\limits_{n=0}^m\|e_u^n\|_{L^2}^2\end{aligned}$$ for $m=1,2,...,N,$ where we use [\[truncation\]](#truncation){reference-type="eqref" reference="truncation"} and $e^0_u=0$. 
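The summed estimate above is exactly the hypothesis of the discrete Gronwall lemma, $a_m\le A+C\tau\sum_{n=0}^{m}a_n$ with $A=C(h^{2(r+1)}+\tau^4)$. The following sketch (our own illustration with arbitrary numbers, not part of the proof) iterates the worst case of this recursion and checks that the initial bound only grows by a factor $e^{O(CT)}$:

```python
import math

A, C, T = 1e-6, 2.0, 1.0          # toy data: initial bound, Gronwall constant, final time
tau = 1e-3
N = int(T / tau)

S, a_m = 0.0, 0.0
for m in range(N + 1):
    # solve the implicit inequality a_m <= A + C*tau*(S + a_m) as an equality (worst case)
    a_m = (A + C * tau * S) / (1.0 - C * tau)
    S += a_m

growth = a_m / A                   # stays within a factor exp(O(C*T)), as Gronwall predicts
```

Here the final value is about $e^{2}A\approx 7.4A$, far below exponential blow-up in $N$; this is exactly why the constant $C$ in the error bound is independent of $\tau$ and $h$.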
By the discrete Gronwall inequality, there exists a positive constant $\tau_2>0$, such that $$\begin{aligned} \max_{1\le n\le N}\|e_u^n\|_{L^2}^2\le C(h^{2(r+1)}+\tau^4)\nonumber\end{aligned}$$ provided $\tau<\tau_2$.\ Combining the above result and [\[phi5\]](#phi5){reference-type="eqref" reference="phi5"}, we have $$\begin{aligned} \max_{1\le n\le N}\|e_\phi^n\|_{L^2}^2 \le C(h^{2(r+1)}+\tau^4).\nonumber\end{aligned}$$ By the estimate [\[ritz1\]](#ritz1){reference-type="eqref" reference="ritz1"} of the Ritz projection, we have $$\begin{aligned} & \|u^n-u_h^n\|_{L^2}\le \|u^n-R_h u^n\|_{L^2} +\|e_u^n\|_{L^2} \le C(h^{r+1}+\tau^2),\label{Sch-poss-1}\\ & \|\phi^n-\phi_h^n\|_{L^2}\le \|\phi^n-R_h\phi^n\|_{L^2}+\|e_\phi^n\|_{L^2} \le C(h^{r+1}+\tau^2)\label{Sch-poss-2}\end{aligned}$$ for $n=1,2,...,N$. Thus, based on Theorem [Theorem 4](#TH4-2){reference-type="ref" reference="TH4-2"}, [\[Sch-Hel-1\]](#Sch-Hel-1){reference-type="eqref" reference="Sch-Hel-1"}-[\[Sch-Hel-2\]](#Sch-Hel-2){reference-type="eqref" reference="Sch-Hel-2"} and [\[Sch-poss-1\]](#Sch-poss-1){reference-type="eqref" reference="Sch-poss-1"}-[\[Sch-poss-2\]](#Sch-poss-2){reference-type="eqref" reference="Sch-poss-2"}, by setting $\tau_0=\min\{\tau^*, \tau_1, \tau_2\}$, we complete the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. # Numerical Examples In this section, we present some numerical examples to verify the convergence rate and conservation properties of our scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}. All the computations are performed with FreeFem++. **Example 1**.
*[\[example\]]{#example label="example"} We consider the following two-dimensional Schrödinger--Helmholtz system $$\begin{aligned} &\quad\qquad\qquad\qquad\qquad\qquad{\bf i}u_t+\Delta u-\phi u=f_1, &&{\bf x}\in\Omega,\, 0<t\le T,\label{ex-equ-1}\\ &\quad\qquad\qquad\qquad\qquad\qquad\phi-\Delta\phi=|u|^2+f_2, &&{\bf x}\in\Omega, \, 0<t\le T,\label{ex-equ-2}\\ &\quad\qquad\qquad\qquad\qquad\qquad u({\bf x},0)=u_0({\bf x}),\quad\phi({\bf x},0)=\phi_0({\bf x}), && {\bf x}\in\Omega,\label{ex-equ-3}\\ &\quad\qquad\qquad\qquad\qquad\qquad u({\bf x},t)=\phi({\bf x},t)=0, &&{\bf x}\in\partial\Omega, \, 0<t\le T,\label{ex-equ-4}\end{aligned}$$ where $\Omega=[0,1]\times[0,1]$ and we take $T=0.5$. Moreover, the initial conditions $u_0$, $\phi_0$ and right-hand side functions $f_1$, $f_2$ are determined by the following exact solutions $$\begin{aligned} &u=e^{({\bf i}+1)t}\sin(x)\sin(y)\sin(\pi x)\sin(\pi y),\nonumber\\ &\phi=e^{(t+x+y)}(1-x)(1-y)\sin(x)\sin(y).\nonumber\end{aligned}$$ The system [\[ex-equ-1\]](#ex-equ-1){reference-type="eqref" reference="ex-equ-1"}-[\[ex-equ-4\]](#ex-equ-4){reference-type="eqref" reference="ex-equ-4"} is discretized by the implicit Crank--Nicolson FE method [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} with linear and quadratic FE approximation.
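The data $f_1,f_2$ come from the method of manufactured solutions: the chosen exact pair $(u,\phi)$ is substituted into the equations and the right-hand sides are solved for. A symbolic sketch for $f_2$ (our own reconstruction for illustration, assuming `sympy` is available; the paper computes these analytically):

```python
import sympy as sp

x, y, t = sp.symbols("x y t", real=True)
# |u|^2 for u = e^{(i+1)t} sin(x) sin(y) sin(pi x) sin(pi y): |e^{(i+1)t}| = e^t
u_sq = sp.exp(2 * t) * (sp.sin(x) * sp.sin(y) * sp.sin(sp.pi * x) * sp.sin(sp.pi * y))**2
phi = sp.exp(t + x + y) * (1 - x) * (1 - y) * sp.sin(x) * sp.sin(y)

# from the second equation phi - Laplace(phi) = |u|^2 + f_2
f2 = phi - (sp.diff(phi, x, 2) + sp.diff(phi, y, 2)) - u_sq

# the chosen exact solution satisfies the homogeneous Dirichlet condition on [0,1]^2
bc = [phi.subs(x, 0), phi.subs(x, 1), phi.subs(y, 0), phi.subs(y, 1)]
```

The boundary check confirms that the manufactured $\phi$ is compatible with the Dirichlet condition [\[ex-equ-4\]](#ex-equ-4){reference-type="eqref" reference="ex-equ-4"}.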
Since our algorithm [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} is nonlinear, the following Picard's iteration is used to obtain the numerical solution $(u^n_h, \phi^n_h)$ at each time step.\ **Picard's iteration for [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"}** \ **Step 1.** Initialization for the time marching: Let the time step $n=0$, and set the initial value $u_h^0=R_h u_0$, $\phi_h^0=R_h\phi_0$.\ **Step 2.** Initialization for the nonlinear iteration: For $n\ge1$, set $u_h^{n,0}=u_h^{n-1}$ and define $\phi_h^{n,0}$ as the solution of the following system: $$\begin{aligned} (\phi_h^{n,0},w_h)+(\nabla\phi_h^{n,0},\nabla w_h)=(|u_h^{n,0}|^2,w_h)+(f_2^n,w_h),\qquad\forall\, w_h\in V_h.\nonumber\end{aligned}$$ **Step 3.** FE computation on each time level: For $\ell\ge 0$, find $u_h^{n,\ell+1}\in V_h$, such that $$\begin{aligned} &{\bf i}\bigg( \frac{u_h^{n,\ell+1}-u_h^{n-1}}{\tau},v_h \bigg) - \bigg( \nabla \frac{u_h^{n,\ell+1} + u_h^{n-1}}{2},\nabla v_h\bigg) - \bigg(\frac{\phi_h^{n,\ell}+\phi_h^{n-1}}{2}\frac{u_h^{n,\ell+1}+u_h^{n-1}}{2},v_h\bigg)\nonumber\\ &= \bigg(f_1^{n-\frac{1}{2}},v_h\bigg),\qquad \forall\, v_h\in V_h,\nonumber\end{aligned}$$ and find $\phi_h^{n,\ell+1}\in V_h$, such that $$\begin{aligned} (\phi_h^{n,\ell+1},w_h)+(\nabla\phi_h^{n,\ell+1},\nabla w_h)=(|u_h^{n,\ell+1}|^2,w_h)+(f_2^n,w_h),\qquad\forall\, w_h\in V_h.\nonumber\end{aligned}$$\ **Step 4.** Check the stopping criterion for the nonlinear iteration: For a fixed tolerance $\delta=10^{-7}$, stop the iteration when $$\begin{aligned} \|u_h^{n,\ell+1}-u_h^{n,\ell}\|_{L^2} \le\delta,\quad\| \phi_h^{n,\ell+1}-\phi_h^{n,\ell}\|_{L^2}\le\delta, \nonumber\end{aligned}$$ and set $u_h^{n}=u_h^{n,\ell+1}$, $\phi_h^{n}=\phi_h^{n,\ell+1}$.
Otherwise, set $\ell \leftarrow \ell+1$, and go to **Step 3** to continue the nonlinear iteration.\ **Step 5.** Time marching: if $n=N$, stop the iteration. Otherwise, set $n\leftarrow n+1$, and go to **Step 2**.* *To test the convergence order, we choose $\tau=h$ for linear FE approximation and $\tau=h^{\frac32}$ for quadratic FE approximation, respectively. The numerical results in Table [1](#space1){reference-type="ref" reference="space1"} and Table [2](#space2){reference-type="ref" reference="space2"} are clearly consistent with the theoretical analysis in Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.* *$h$* *$\|{\bf u}^n-{\bf u}_h^n\|_{L^2}$* *order* *$\|\phi^n-\phi_h^n\|_{L^2}$* *order* ---------- ------------------------------------- --------- ------------------------------- --------- *$1/10$* *7.1770E-03* */* *3.9596E-03* */* *$1/20$* *1.9732E-03* *1.86* *1.0579E-03* *1.90* *$1/40$* *5.2178E-04* *1.92* *2.7276E-04* *1.96* : *$L^2$-norm errors of linear FE method with $\tau=h$* *[\[space1\]]{#space1 label="space1"}* *$h$* *$\|{\bf u}^n-{\bf u}_h^n\|_{L^2}$* *order* *$\|\phi^n-\phi_h^n\|_{L^2}$* *order* ---------- ------------------------------------- --------- ------------------------------- --------- *$1/10$* *2.0123E-04* */* *8.5171E-05* */* *$1/20$* *2.5669E-05* *2.97* *1.0736E-05* *2.99* *$1/40$* *3.1864E-06* *3.01* *1.3468E-06* *2.99* : *$L^2$-norm errors of quadratic FE method with $\tau=h^{\frac32}$* *[\[space2\]]{#space2 label="space2"}* **Example 2**.
*[\[example2\]]{#example2 label="example2"} We consider the following two-dimensional Schrödinger--Poisson system $$\begin{aligned} &\quad\qquad\qquad\qquad\qquad\qquad{\bf i}{u_t}+\Delta u-\phi u=g_1, &&{\bf x}\in\Omega, \,0<t\le T,\label{Exam2-1}\\ &\quad\qquad\qquad\qquad\qquad\qquad-\Delta\phi=|u|^2+g_2, &&{\bf x}\in\Omega, \,0<t\le T,\label{Exam2-2}\\ &\quad\qquad\qquad\qquad\qquad\qquad u({\bf x},0)=u_0({\bf x}),\quad\phi({\bf x},0)=\phi_0({\bf x}), &&{\bf x}\in\Omega,\label{Exam2-3}\\ &\quad\qquad\qquad\qquad\qquad\qquad u({\bf x},t)=\phi({\bf x},t)=0, &&{\bf x}\in\partial\Omega, \, {0<t\le T},\label{Exam2-4}\end{aligned}$$ where $\Omega=[0,1]\times[0,1]$ and we take $T=0.5$. Moreover, the functions $u_0$, $\phi_0$ and $g_1$, $g_2$ are determined by the following exact solutions $$\begin{aligned} &u=2e^{{\bf i}t+(x+y)/5}(1+5t^3)x(1-x)y(1-y),\\ &\phi=5(1+3t^2+\sin(t))\sin\left(\frac{x}{2}\right)\sin\left(\frac{y}{2}\right)(1-x)(1-y).\end{aligned}$$* *We solve the system [\[Exam2-1\]](#Exam2-1){reference-type="eqref" reference="Exam2-1"}-[\[Exam2-4\]](#Exam2-4){reference-type="eqref" reference="Exam2-4"} by the Crank--Nicolson FE scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} with both linear and quadratic FE approximation. Similar to Example [\[example\]](#example){reference-type="ref" reference="example"}, we apply Picard's iteration method to solve our nonlinear scheme [\[CN1\]](#CN1){reference-type="eqref" reference="CN1"}-[\[CN2\]](#CN2){reference-type="eqref" reference="CN2"} for $\alpha=0$ and $\beta=1$. To verify the convergence rate, we choose $\tau=h$ and $\tau=h^{\frac32}$ for linear and quadratic FE approximation, respectively. The numerical results are presented in Table [3](#space3){reference-type="ref" reference="space3"} and Table [4](#space4){reference-type="ref" reference="space4"}.
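The "order" columns in the tables are computed from errors on successive meshes as $\log_2\big(e(h)/e(h/2)\big)$. For instance, for the linear-FE errors of $\|{\bf u}^n-{\bf u}_h^n\|_{L^2}$ reported in Table [3](#space3){reference-type="ref" reference="space3"}:

```python
from math import log2

# errors at h = 1/10, 1/20, 1/40 (values taken from Table 3)
err_u = [3.4536e-3, 1.1618e-3, 3.0870e-4]
orders = [round(log2(err_u[i] / err_u[i + 1]), 2) for i in range(2)]
# reproduces the tabulated orders 1.57 and 1.91
```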
These results indicate that the $L^2$ errors of the linear FE approximation are proportional to $h^2$ and those of the quadratic FE approximation are proportional to $h^3$, which is consistent with the theoretical analysis.* *$h$* *$\|{\bf u}^n-{\bf u}_h^n\|_{L^2}$* *order* *$\|\phi^n-\phi_h^n\|_{L^2}$* *order* ---------- ------------------------------------- --------- ------------------------------- --------- *$1/10$* *3.4536E-03* */* *2.1514E-03* */* *$1/20$* *1.1618E-03* *1.57* *5.9753E-04* *1.85* *$1/40$* *3.0870E-04* *1.91* *1.5686E-04* *1.93* : *$L^2$-norm errors of linear FE method with $\tau=h$* *[\[space3\]]{#space3 label="space3"}* *$h$* *$\|{\bf u}^n-{\bf u}_h^n\|_{L^2}$* *order* *$\|\phi^n-\phi_h^n\|_{L^2}$* *order* ---------- ------------------------------------- --------- ------------------------------- --------- *$1/10$* *1.5718E-04* */* *3.7893E-05* */* *$1/20$* *2.0529E-05* *2.94* *4.8388E-06* *2.97* *$1/40$* *2.5437E-06* *3.01* *6.0897E-07* *2.99* : *$L^2$-norm errors of quadratic FE method with $\tau=h^{\frac32}$* *[\[space4\]]{#space4 label="space4"}* **Example 3**. *[\[example3\]]{#example3 label="example3"} To verify the discrete mass and energy conservation, we consider the following Schrödinger-type system $$\begin{aligned} &\qquad\qquad\qquad\qquad\qquad{\bf i}u_t+\Delta u-\phi u=0, &&{\bf x}\in\Omega,\, 0<t\le T,\label{eg31}\\ &\qquad\qquad\qquad\qquad\qquad\alpha\phi-\beta^2\Delta\phi=|u|^2, &&{\bf x}\in \Omega,\, 0<t\le T,\label{eg32}\\ &\qquad\qquad\qquad\qquad\qquad u({\bf x},0)=u_0({\bf x}),\quad\phi({\bf x},0)=\phi_0({\bf x}), &&{\bf x}\in\Omega,\label{eg33}\\ &\qquad\qquad\qquad\qquad\qquad u({\bf x},t)=\phi({\bf x},t)=0, &&{\bf x}\in\partial\Omega,\, {0<t\le T}\label{eg34}\end{aligned}$$ with the initial conditions $$\begin{aligned} &u({\bf x},0)=\sin(x)\sin(y)\sin(\pi x)\sin(\pi y), \nonumber\\ &\phi({\bf x},0)=e^{x+y}(1-x)(1-y)\sin(x)\sin(y),\nonumber\end{aligned}$$ where $\Omega=[0, 1]\times[0, 1]$.
To test mass and energy conservation, we apply the Crank--Nicolson linear FE method to solve the above system [\[eg31\]](#eg31){reference-type="eqref" reference="eg31"}-[\[eg34\]](#eg34){reference-type="eqref" reference="eg34"}, which includes two models:*

- *Schrödinger--Helmholtz system (with $\alpha=1$ and $\beta=1$)*

- *Schrödinger--Poisson system (with $\alpha=0$ and $\beta=1$)*

*The numerical results at different times $T=0,10,20,...,100$ are presented in Fig. [\[figure\]](#figure){reference-type="ref" reference="figure"}. From Fig. [\[figure\]](#figure){reference-type="ref" reference="figure"}, we can see that the discrete energy $\mathcal{E}^n_h$ and mass $\mathcal{M}^n_h$ are conserved exactly during the time evolution, which is consistent with the theoretical analysis in Theorem [Theorem 2](#energy){reference-type="ref" reference="energy"}.* # Conclusions In this paper, an implicit Crank--Nicolson FE scheme is presented to solve a nonlinear Schrödinger-type system, which includes the Schrödinger--Helmholtz equations and the Schrödinger--Poisson equations. The advantage of our numerical method is that it conserves mass and energy at the discrete level. The well-posedness (existence and uniqueness) of the fully discrete solutions is proved by Schaefer's fixed point theorem. We establish optimal $L^2$ error estimates of the fully discrete solutions for the Schrödinger--Helmholtz equations and the Schrödinger--Poisson equations. Finally, some numerical examples are provided to confirm the theoretical analysis. # Acknowledgments The work of Zhuoyue Zhang and Wentao Cai was partially supported by the Zhejiang Provincial Natural Science Foundation of China grant LY22A010019 and the National Natural Science Foundation of China grant 11901142. R.A. Adams, J.J. Fournier, Sobolev Spaces, Academic Press, New York, 2003. G.D. Akrivis, V.A. Dougalis, O.A. Karakashian, On fully discrete Galerkin methods of second-order temporal accuracy for the nonlinear Schrödinger equation, Numer. Math.
59 (1991) 31--53. X. Antoine, C. Besse, P. Klein, Absorbing boundary conditions for general nonlinear Schrödinger equations, SIAM J. Sci. Comput. 33 (2011) 1008--1033. G. Bai, B. Li, Y. Wu, A constructive low-regularity integrator for the 1D cubic nonlinear Schrödinger equation under the Neumann boundary condition, IMA J. Numer. Anal. (2022) doi: 10.1093/imanum/drac075. W. Bao, Y. Cai, Uniform error estimates of finite difference methods for the nonlinear Schrödinger equation with wave operator, SIAM J. Numer. Anal. 50 (2) (2012) 492--521. W. Bao, Y. Cai, Mathematical theory and numerical methods for Bose--Einstein condensation, Kinet. Relat. Mod. 6 (2013) 1--135. W. Bao, Q. Tang, Z. Xu, Numerical methods and comparison for computing dark and bright solitons in the nonlinear Schrödinger equation, J. Comput. Phys. 235 (2013) 423--445. S.C. Brenner, L.R. Scott, The Mathematical Theory of Finite Element Methods, Springer, Berlin, 1994. J. Cao, B. Li, Y. Lin, A new second-order low-regularity integrator for the cubic nonlinear Schrödinger equation, IMA J. Numer. Anal. (2023) doi: 10.1093/imanum/drad017. Y. Cao, Z.H. Musslimani, E.S. Titi, Nonlinear Schrödinger--Helmholtz equation as numerical regularization of the nonlinear Schrödinger equation, Nonlinearity 21 (5) (2008) 879. I. Catto, P.L. Lions, Binding of atoms and stability of molecules in Hartree and Thomas--Fermi type theories. Part 1: A necessary and sufficient condition for the stability of general molecular systems, Commun. Partial Differ. Equ. 17 (1992) 1051--1110. Q. Chang, E. Jia, W. Sun, Difference schemes for solving the generalized nonlinear Schrödinger equation, J. Comput. Phys. 148 (1999) 397--415. L.C. Evans, Partial Differential Equations, 2nd edn., AMS, Providence, 2010. X. Feng, B. Li, S. Ma, High-order mass- and energy-conserving SAV--Gauss collocation finite element methods for the nonlinear Schrödinger equation, SIAM J. Numer. Anal. 59 (3) (2021) 1566--1591. P. Henning, D.
Peterseim, Crank--Nicolson Galerkin approximations to nonlinear Schrödinger equations with rough potentials, Math. Models Methods Appl. Sci. 27 (11) (2017) 2147-2184. J.G. Heywood, R. Rannacher, Finite-element approximation of the nonstationary Navier--Stokes problem. Part IV: Error analysis for the second--order time discretization, SIAM J. Numer. Anal. 27 (1990) 353-384. B. Li, G. Fairweather, B. Bialecki, Discrete-time orthogonal spline collocation methods for Schrödinger equations in two space variables, SIAM J. Numer. Anal. 35 (1998) 453--477. B. Li, W. Sun, Error analysis of linearized semi-implicit Galerkin finite element methods for nonlinear parabolic equations, Int. J. Numer. Anal. Model. 10 (3) (2013) 622-633. B. Li, Y. Wu, A fully discrete low-regularity integrator for the 1D periodic cubic nonlinear Schrödinger equation, Numer. Math. 149 (2021) 151--183. D. Li, J. Wang, J. Zhang, Unconditionally convergent $L1$-Galerkin FEMs for nonlinear time-fractional Schrödinger equations, SIAM J. Sci. Comput. 39 (2017) A3067-A3088. H. Liao, Z. Sun, H. Shi, Error estimate of fourth-order compact scheme for linear Schrödinger equations, SIAM J. Numer. Anal. 47 (6) (2010) 4381--4401. E.H. Lieb, Thomas--Fermi and related theories of atoms and molecules, Rev. Mod. Phys. 53 (4) (1981) 603--641. H. Liu, Y. Huang, W. Lu, N. Yi, On accuracy of the mass-preserving DG method to multi-dimensional Schrödinger equations, IMA J. Numer. Anal. 39 (2) (2019) 760-791. T. Lu, W. Cai, A Fourier spectral-discontinuous Galerkin method for time-dependent 3-D Schrödinger--Poisson equations with discontinuous potentials, J. Comput. Appl. Math. 220 (1-2) (2008) 588-614. W. Lu, Y. Huang, H. Liu, Mass preserving discontinuous Galerkin methods for Schrödinger equations, J. Comput. Phys. 282 (2015) 210--226. C. Lubich, On splitting methods for Schrödinger--Poisson and cubic nonlinear Schrödinger equations, Math. Comput. 77 (264) (2008) 2141--2153. S. 
Masaki, Local existence and WKB approximation of solutions to Schrödinger--Poisson system in the two-dimensional whole space, Commun. Partial Differ. Equ. 35 (12) (2010) 2253--2278. L. Nirenberg, An extended interpolation inequality, Ann. Sc. Norm. Super. Pisa 20 (4) (1966) 733--737. R. Rannacher, R. Scott, Some optimal error estimates for piecewise linear finite element approximations, Math. Comput. 38 (158) (1982) 437--445. B. Reichel, S. Leble, On convergence and stability of a numerical scheme of coupled nonlinear Schrödinger equations, Comput. Math. Appl. 55 (4) (2008) 745--759. D. Shi, J. Wang, Unconditional superconvergence analysis of a Crank--Nicolson Galerkin FEM for nonlinear Schrödinger equation, J. Sci. Comput. 72 (3) (2017) 1093--1118. H.P. Stimming, The IVP for the Schrödinger--Poisson--X$\alpha$ equation in one dimension, Math. Models Methods Appl. Sci. 15 (8) (2005) 1169--1180. W. Sun, J. Wang, Optimal error analysis of Crank--Nicolson schemes for a coupled nonlinear Schrödinger system in 3D, J. Comput. Appl. Math. 317 (2017) 685-699. V. Thomée, Galerkin Finite Element Methods for Parabolic Problems, vol. 25, Springer Science and Business Media, 2007. Y. Tourigny, Optimal $H^1$ estimates for two time-discrete Galerkin approximations of a nonlinear Schrödinger equation, IMA J. Numer. Anal. 11 (4) (1991) 509--523. J. Wang, A new error analysis of Crank--Nicolson Galerkin FEMs for a generalized nonlinear Schrödinger equation, J. Sci. Comput. 60 (2014) 390-407. J. Wang, Unconditional stability and convergence of Crank--Nicolson Galerkin FEMs for a nonlinear Schrödinger--Helmholtz system, Numer. Math. 139 (2) (2018) 479--503. T. Wang, Y. Jiang, Point-wise errors of two conservative difference schemes for the Klein--Gordon--Schrödinger equation, Commun. Nonlinear Sci. Numer. Simul. 17 (12) (2012) 4565--4575. T. Wang, X. Zhao, J. 
Jiang, Unconditional and optimal $H^2$-error estimates of two linear and conservative finite difference schemes for the Klein--Gordon--Schrödinger equation in high dimensions, Adv. Comput. Math. 44 (2018) 477--503. Y. Xu, C. Shu, Local discontinuous Galerkin methods for nonlinear Schrödinger equations, J. Comput. Phys. 205 (2005) 72--97. Y. Yang, Y. Jiang, B. Yu, Unconditional optimal error estimates of linearized, decoupled and conservative Galerkin FEMs for the Klein--Gordon--Schrödinger equation, J. Sci. Comput. 87 (3) (2021). N. Yi, H. Liu, A mass-and energy-conserved DG method for the Schrödinger--Poisson equation, Numer. Algorithms 89 (2022) 905--930. F. Zhang, V.M. Pérez-García, L. Vázquez, Numerical simulation of nonlinear Schrödinger systems: a new conservative scheme, Appl. Math. Comput. 71 (2-3) (1995) 165--177. G.E. Zouraris, On the convergence of a linear two-step finite element method for the nonlinear Schrödinger equation, ESAIM: Math. Model. Numer. Anal. 35 (3) (2001) 389--405. [^1]: Department of Mathematics, School of Sciences, Hangzhou Dianzi University, Hangzhou, Zhejiang, P. R. China. Email address: `zzhuoyue@hdu.edu.cn` [^2]: Corresponding author; Department of Mathematics, School of Sciences, Hangzhou Dianzi University, Hangzhou, Zhejiang, P. R. China. Email address: `femwentao@hdu.edu.cn`
--- abstract: | We prove that a two dimensional pseudoconvex domain of finite type with a Kähler-Einstein Bergman metric is biholomorphic to the unit ball. This answers an old question of Yau for such domains. The proof relies on asymptotics of derivatives of the Bergman kernel along critically tangent paths approaching the boundary, where the order of tangency equals the type of the boundary point being approached. address: - Universität zu Köln, Mathematisches Institut, Weyertal 86-90, 50931 Köln, Germany - Department of Mathematics, University of California at San Diego, La Jolla, CA 92093, USA author: - Nikhil Savale & Ming Xiao bibliography: - biblio2.bib title: Kähler-Einstein Bergman metrics on pseudoconvex domains of dimension two --- [^1] # [\[sec:Introdunction\]]{#sec:Introdunction label="sec:Introdunction"} Introduction Let $D\subset\mathbb{C}^{n}$ be a bounded pseudoconvex domain. There exist two natural canonical metrics defined in its interior. The first is the Bergman metric [@Bergman70-book] defined using the Bergman kernel. The other is the complete Kähler-Einstein metric in $D$, whose existence was established by the work of Cheng-Yau and Mok-Yau [@Cheng-Yau-80; @Mok-Yau1983]. The importance of the metrics stems from their biholomorphic invariance property and intimate connections with the boundary geometry. It is hence a natural question to ask when the two canonical metrics coincide; i.e. when the Bergman metric on the domain $D$ is Kähler-Einstein. It was asked, in some form by Yau [@Yau82-problems pg. 679], whether this happens if and only if $D$ is homogeneous. The reverse direction of Yau's question (i.e. if $D$ is homogeneous, then the Bergman metric is Kähler-Einstein) follows from a simple observation using the Bergman invariant function (cf. Fu-Wong [@Fu-Wong97]). The challenging aspect of Yau's question is the forward direction which is still wide open in its full generality. 
It should be noted that homogeneous domains have been classified in [@Vinberg-Gindikin-PS-1963] and the only smoothly bounded homogeneous domain is the ball, as a consequence of Wong [@Wong1977] and Rosay [@Rosay1979]. A more tractable case of Yau's question is when $D$ has strongly pseudoconvex smooth boundary. An explicit conjecture in this case was posed earlier by Cheng [@Ch]: if the Bergman metric of a smoothly bounded strongly pseudoconvex domain is Kähler-Einstein, then the domain is biholomorphic to the unit ball. Cheng's conjecture was confirmed by the combined work of Fu-Wong [@Fu-Wong97] and Nemirovski-Shafikov [@Nemirovski-Shafikov-2006] in dimension two. In higher dimensions, it was proved more recently by Huang and the second author [@Huang-Xiao21]. Since then there has been further work on Cheng's conjecture on Stein manifolds, and more generally on possibly singular Stein spaces, with strongly pseudoconvex boundary. See Huang-Li [@HuLi], Ebenfelt, Xu and the second author [@EXX], as well as Ganguly-Sinha [@GaSi] for results along this line. Other variations of Cheng's conjecture can also be found in Li [@Li05; @Li09] and references therein. The proofs of Cheng's conjecture in [@Fu-Wong97; @Huang-Xiao21] fundamentally use Fefferman's asymptotic result [@Fefferman74] for the Bergman kernel, together with its connections to the CR invariant theory for the boundary geometry. In the broader context of pseudoconvex finite type domains, both tools are either absent or insufficiently understood. As a result, little progress has been made towards understanding Yau's question in this context. To the best knowledge of the authors, the only known result was due to Fu-Wong [@Fu-Wong97]. They showed that, on a smoothly bounded, complete Reinhardt, pseudoconvex domain of finite type in $\mathbb{C}^{2}$, if the Bergman metric is Kähler-Einstein, then the domain is biholomorphic to the unit ball. 
Their proof utilized the non-tangential limit of the Bergman invariant function (see Fu [@Fu1996]). Moreover, their proof relied on computer assistance, again reflecting the intricacy of the problem in the more general finite type case. Our main theorem below gives an affirmative answer to Yau's question for smoothly bounded pseudoconvex domains of finite type in dimension two. **Theorem 1**. *Let $D\subset\mathbb{C}^{2}$ be a smoothly bounded pseudoconvex domain of finite type. If the Bergman metric of $D$ is Kähler-Einstein, then $D$ is biholomorphic to the unit ball in $\mathbb{C}^{2}$.* A key role is again played by the boundary asymptotics for the Bergman kernel. For two dimensional pseudoconvex domains of finite type, Hsiao and the first author [@HsiaoSavale-2022] recently described the asymptotics of the Bergman kernel along transversal paths approaching the boundary. For our proof we shall need to extend this asymptotic result to tangential paths approaching a pseudoconvex point on the boundary. The paths shall further be chosen to be *critically tangent*; their order of tangency with the boundary equals the type of the point on the boundary that is being approached (see Remark below for a further discussion of this choice). As a consequence of our main theorem, we also positively answer Yau's question for two dimensional bounded domains with real analytic boundary (such domains are always of finite type). **Corollary 2**. *Let $D\subset\mathbb{C}^{2}$ be a bounded pseudoconvex domain with real analytic boundary. If the Bergman metric of $D$ is Kähler-Einstein, then $D$ is biholomorphic to the unit ball in $\mathbb{C}^{2}$.* The article is organized as follows. We begin with some preliminaries on the Bergman and Kähler-Einstein metrics in . In , we establish the asymptotics for the Bergman kernel and its derivatives along a critically tangent path. The leading term of the asymptotics is computed as well in terms of a model Bergman kernel on the complex plane. 
Then we carry out the requisite analysis of the model in . Finally, we prove the main theorem in . # [\[sec:Preliminaries\]]{#sec:Preliminaries label="sec:Preliminaries"} Preliminaries In this section we begin with some requisite preliminaries on the Bergman and Kähler-Einstein metrics. Let $D\subset\mathbb{C}^{n}$ be a smoothly bounded domain. A boundary defining function is a smooth function $\rho\in C^{\infty}\left(\bar{D}\right)$ satisfying $D=\left\{ \rho\left(z\right)<0\right\} \subset\mathbb{C}^{n}\textrm{ and }\left.d\rho\right|_{\partial D}\neq0.$ The CR and Levi-distributions on the boundary $X\coloneqq\partial D$ are defined via $T^{1,0}X=T^{1,0}\mathbb{C}^{n}\cap T_{\mathbb{C}}X$ and $HX\coloneqq\textrm{Re}\left[T^{1,0}X\oplus T^{0,1}X\right]$ respectively. The Levi form on the boundary is defined by $$\begin{aligned} \mathscr{L} & \in\left(T^{1,0}X\right)^{*}\otimes\left(T^{0,1}X\right)^{*}\nonumber \\ \mathscr{L}\left(U,\bar{V}\right) & \coloneqq\partial\bar{\partial}\rho\left(U,\bar{V}\right)=-\overline{\partial}\rho\left(\left[U,\bar{V}\right]\right)\label{eq:Levi form}\end{aligned}$$ for $U,V\in T^{1,0}X$. The domain is called *strongly pseudoconvex* if the Levi form is positive definite; and *weakly pseudoconvex* (or simply *pseudoconvex*) if the Levi form is positive semi-definite. We now recall the notion of finite type. There are two standard notions of finite type (D'Angelo and Kohn/Bloom-Graham) of a smooth real hypersurface $M$, and these happen to coincide in $\mathbb{C}^{2}.$ (The reader is referred to [@Baouendi-Ebenfelt-Rothschild-99] for more details). The domain is called of finite type (in the sense of Kohn/Bloom-Graham) if the Levi-distribution $HX$ is bracket generating: $C^{\infty}\left(HX\right)$ generates $TX$ under the Lie bracket. 
In particular the *type of a point* on the boundary $x\in X=\partial D$ is the smallest integer $r\left(x\right)$ such that $H_{x}X_{r\left(x\right)}=T_{x}X$, where the subspaces $HX_{j}\subset TX$, $j=1,\ldots$ are inductively defined by $$\begin{aligned} HX_{1} & \coloneqq HX\nonumber \\ HX_{j+1} & \coloneqq HX+\left[HX_{j},HX\right],\quad\forall j\geq1.\label{eq:canonical flag}\end{aligned}$$ In general, the function $x\mapsto r\left(x\right)$ is only upper semi-continuous. The finite type hypothesis is then equivalent to $r\coloneqq\max_{x\in X}r\left(x\right)<\infty.$ Note that the type of a strongly pseudoconvex point $x$ is $r\left(x\right)=2$. The Bergman projector of $D$ is the orthogonal projector $$K_{D}:L^{2}\left(D\right)\rightarrow L^{2}\left(D\right)\cap\mathcal{O}\left(D\right)\label{eq:Bergman kernel}$$ from square integrable functions onto the closed subspace of square-integrable holomorphic ones. Its Schwartz kernel, still denoted by $K_{D}\left(z,z'\right)\in L^{2}\left(D\times D\right),$ is called the Bergman kernel of $D$. It is well-known to be smooth in the interior and positive along the diagonal. The Bergman metric is the Kähler metric in the interior defined by $$g_{\alpha\bar{\beta}}^{D}\coloneqq\partial_{\alpha}\partial_{\bar{\beta}}\ln K_{D}\left(z,z\right).$$ Denote by $G=\det\left(g_{\alpha\bar{\beta}}^{D}\right)$ the determinant of the above metric. The Ricci tensor of $g^{D}$ is by definition $R_{\alpha\bar{\beta}}=-\partial_{\alpha}\partial_{\bar{\beta}}\ln G$. The Bergman metric is always Kähler, and is further said to be *Kähler-Einstein* if $R_{\alpha\bar{\beta}}=cg_{\alpha\bar{\beta}}^{D}$ for some constant $c$. Since $D$ is a bounded domain, the sign of $c$ must necessarily be negative (cf. [@Cheng-Yau-80 page 518]). The Bergman invariant function is defined by $B\left(z\right)\coloneqq\frac{G\left(z\right)}{K_{D}\left(z,z\right)}$. 
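For orientation, these quantities can be computed in closed form for the unit disk, where $K(z,z)=\frac{1}{\pi\left(1-|z|^{2}\right)^{2}}$; the invariant function then reduces to the constant $2\pi$. The following minimal symbolic sketch (not part of the argument; it treats $z$ and $\bar{z}$ as independent variables, as usual in Wirtinger calculus) checks this:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

# Bergman kernel of the unit disk on the diagonal: K(z,z) = 1/(pi*(1-|z|^2)^2)
K = 1 / (sp.pi * (1 - z*zb)**2)

g = sp.diff(sp.log(K), z, zb)   # Bergman metric g = d_z d_zbar log K
G = g                           # determinant of the metric (1x1 in dimension one)
B = sp.simplify(G / K)          # Bergman invariant function B = G/K

# B is the constant 2*pi, i.e. (n+1)^n * pi^n / n! for n = 1
assert sp.simplify(B - 2*sp.pi) == 0
```

The constant $2\pi$ is exactly the value $(n+1)^{n}\pi^{n}/n!$ with $n=1$ appearing in the characterization of the Kähler-Einstein property below.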
It follows from the transformation formula of the Bergman kernel that the Bergman invariant function is invariant under biholomorphisms. Next we briefly discuss the Kähler-Einstein metric. Recall that the existence of a complete Kähler-Einstein metric on $D\subset\mathbb{C}^{n}$ is governed by the following Dirichlet problem: $$\begin{aligned} J\left(u\right)\coloneqq(-1)^{n}\det\begin{pmatrix}u & u_{\bar{\beta}}\\ u_{\alpha} & u_{\alpha\bar{\beta}} \end{pmatrix} & =1\quad\textrm{in }D,\nonumber \\ u & =0\quad\text{on }\partial D.\label{eq:Fefferman J equation}\end{aligned}$$ with $u>0$ in $D$. Here $u_{\alpha}$ denotes $\partial_{z_{\alpha}}u$, and likewise for $u_{\bar{\beta}}$ and $u_{\alpha\bar{\beta}}$. The problem was first studied by Fefferman [@Fefferman74], and $J(\cdot)$ is often referred to as Fefferman's complex Monge-Ampère operator. Cheng and Yau [@Cheng-Yau-80] proved the existence and uniqueness of an exact solution $u\in C^{\infty}(D)$ to [\[eq:Fefferman J equation\]](#eq:Fefferman J equation){reference-type="eqref" reference="eq:Fefferman J equation"}, on a smoothly bounded strongly pseudoconvex domain $D$. The function $u$ is called the Cheng--Yau solution; and $-\partial\bar{\partial}\log u$ gives rise to a complete Kähler-Einstein metric on $D$. Mok-Yau [@Mok-Yau1983] further showed that a bounded domain admits a complete Kähler-Einstein metric if and only if it is a domain of holomorphy. We next make some observations on the Monge-Ampère operator for later applications. The left hand side of the first equation in [\[eq:Fefferman J equation\]](#eq:Fefferman J equation){reference-type="eqref" reference="eq:Fefferman J equation"} can further be invariantly written as $J\left(u\right)=u^{n+1}\det\left[\partial\bar{\partial}\left(-\ln u\right)\right]$. 
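As a sanity check of the two expressions for $J$ (an illustration, not part of the paper's argument), one can verify symbolically that $u=1-|z|^{2}$ solves $J(u)=1$ on the unit disk and that the determinant form and the invariant form agree there:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
u = 1 - z*zb   # candidate Cheng-Yau solution on the unit disk (n = 1)

# Determinant form: J(u) = (-1)^n det [[u, u_zbar], [u_z, u_{z zbar}]] for n = 1
J_det = -sp.Matrix([[u, sp.diff(u, zb)],
                    [sp.diff(u, z), sp.diff(u, z, zb)]]).det()

# Invariant form: J(u) = u^{n+1} det[ d dbar (-ln u) ]
J_inv = sp.simplify(u**2 * sp.diff(-sp.log(u), z, zb))

assert sp.simplify(J_det) == 1   # u solves Fefferman's equation J(u) = 1
assert sp.simplify(J_inv - J_det) == 0
```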
It may thus be computed in terms of any orthonormal frame $\{Z_{\alpha}\}_{\alpha=1}^{n}$ of $T^{1,0}\mathbb{C}^{n}$ as $$J\left(u\right)=\det\begin{pmatrix}u & \bar{Z}_{\beta}u\\ Z_{\alpha}u & Z_{\alpha}\bar{Z}_{\beta}u-\left[Z_{\alpha},\bar{Z}_{\beta}\right]^{0,1}u \end{pmatrix}.\label{eq:Monge Ampere determinant}$$ This can be proved using the identity $$\partial\bar{\partial}f\left(Z_{\alpha},\bar{Z}_{\beta}\right)=Z_{\alpha}\bar{Z}_{\beta}\left(f\right)-\bar{\partial}f\left(\left[Z_{\alpha},\bar{Z}_{\beta}\right]\right).\label{eq:ddbar calc.}$$ Here the normality of $\{Z_{\alpha}\}_{\alpha=1}^{n}$ means each of them has the same norm as $\partial_{z_{1}},\cdots,\partial_{z_{n}}$. The following proposition gives an equivalent condition for the Bergman metric being Kähler-Einstein, which is easier to work with. The proof is similar to [@Fu-Wong97 Proposition 1.1] and [@HuLi Proposition 3.3]. **Proposition 3**. *Let $D\subset\mathbb{C}^{n},n\geq2,$ be a smoothly bounded pseudoconvex domain. Then its Bergman metric $g^{D}$ is Kähler-Einstein if and only if the Bergman invariant function is constant $B\left(z\right)\equiv\left(n+1\right)^{n}\frac{\pi^{n}}{n!}$. This is also equivalent to the Bergman kernel $K_{D}$ satisfying $J\left(K_{D}\right)=(-1)^{n}\frac{(n+1)^{n}\pi^{n}}{n!}K_{D}^{n+2}$.* *Proof.* We start with the proof of the first assertion. Since the reverse direction is trivial, we only need to prove the forward part. Assume the Bergman metric of $D$ is Kähler-Einstein. Recall that a smoothly bounded domain in $\mathbb{C}^{n}$ always has a strongly pseudoconvex boundary point. Therefore we can find a strongly pseudoconvex open connected piece $M$ of $\partial D$. Fix $p\in M$. Next pick a small smoothly bounded strongly pseudoconvex domain $D'\subseteq D$ such that $D'\cap O=D\cap O$ and $\partial D'\cap O=\partial D\cap O=:M_{0}\subseteq M$ for some small ball $O$ in $\mathbb{C}^{n}$ centered at $p$. Write $K_{D'}$ for the Bergman kernel of $D'$. 
Then by the localization of the Bergman kernel on pseudoconvex domains at a strongly pseudoconvex boundary point (cf. Theorem 4.2 in Engliš [@En01]), there is a smooth function $\Phi$ in a neighborhood of $D'\cup M_{0}$ such that $$K_{D}=K_{D'}+\Phi~\text{on}~D'.\label{eq:eqnphi}$$ Note that $K_{D'}$ obeys the Fefferman asymptotic expansion on $D'$ by [@Fefferman74]. Combining this with , we see that for any defining function $\rho$ of $D\cap O$ with $D\cap O=\{z\in O:\rho(z)<0\}$, the Bergman kernel $K_{D}$ also has the Fefferman type expansion in $D\cap O:$ $$K_{D}=\frac{\phi}{\rho^{n+1}}+\psi\log(-\rho)\quad\text{on}~D\cap O.\label{eq:eqnkz}$$ Here $\phi$ and $\psi$ are smooth in a neighborhood of $D'\cup M_{0}$ with $\phi$ nowhere zero on $M_{0}.$ Then by and (the proof of) Theorem 1 of Klembeck [@Kl78], the Bergman metric of $D$ is asymptotically of constant holomorphic sectional curvature $\frac{-2}{n+1}$ as $z\in D\rightarrow M_{0}$. Consequently, the Bergman metric of $D$ is asymptotically of constant Ricci curvature $-1$ as $z\in D\rightarrow M_{0}$ (to prove the latter fact, one can alternatively apply a similar argument as on page 510 of Cheng-Yau [@Cheng-Yau-80]). Therefore by the Kähler-Einstein assumption, we must have $R_{i\bar{j}}=-g_{i\bar{j}}.$ This yields $\partial\bar{\partial}\log B\equiv0$ in $D$. That is, $\log B$ is pluriharmonic in $D.$ Furthermore, by and a similar argument as in the proof of Lemma 3.2 in [@HuLi], we have $B(z,\bar{z})\rightarrow\frac{(n+1)^{n}\pi^{n}}{n!}$ as $z\rightarrow M_{0}.$ Now write $\Delta=\{z\in\mathbb{C}:|z|<1\}$ for the unit disk. 
Let $f:\Delta\rightarrow O$ be an analytic disk attached to $M_{0}.$ That is, $f$ is holomorphic in $\Delta$ and continuous in $\overline{\Delta}$ with $f(\Delta)\subset O\cap D$ and $f(\partial\Delta)\subset M_{0}.$ Then $\log B(f)$ is harmonic in $\Delta,$ continuous up to $\partial\Delta,$ and takes constant value $\log\frac{(n+1)^{n}\pi^{n}}{n!}$ on $\partial\Delta.$ This implies $B$ takes the constant value $\frac{(n+1)^{n}\pi^{n}}{n!}$ on $f(\Delta).$ But since $M_{0}$ is strongly pseudoconvex, we can find a family $\mathcal{F}$ of analytic disks such that $\cup_{f\in\mathcal{F}}f(\Delta)$ fills up an open subset $U$ of $O\cap D$ (cf. [@Baouendi-Ebenfelt-Rothschild-99]). Thus $B$ is constant on $U$. Since $B$ is real analytic and $D$ is connected, we see $B\equiv\frac{(n+1)^{n}\pi^{n}}{n!}$. Finally, a routine computation using the formula $J(u)=u^{n+1}\det\left(\partial\overline{\partial}(-\ln u)\right)$ yields that $B\left(z\right)=c$ if and only if $J\left(K_{D}\right)=(-1)^{n}cK_{D}^{n+2}$. Then the second assertion of the proposition follows immediately. ◻ # [\[sec:Bergman kernel der asymptotics\]]{#sec:Bergman kernel der asymptotics label="sec:Bergman kernel der asymptotics"} The Bergman kernel and its derivatives To prove , we shall fundamentally use the asymptotics of the Bergman kernel on pseudoconvex domains of finite type. In this section, we first briefly recall some classical and recent known work, and then prove new results for asymptotics of the Bergman kernel. In , we already made use of Fefferman's Bergman kernel asymptotics in the strongly pseudoconvex case. Let $D$ be a strongly pseudoconvex domain with a defining function $\rho\in C^{\infty}\left(\bar{D}\right)$. 
Fefferman [@Fefferman74] showed that the Bergman kernel of the domain $D$ has an asymptotic expansion $$K_{D}\left(z,z\right)=a\left(z\right)\rho{}^{-n-1}+b\left(z\right)\ln\left(-\rho\right)\label{eq:Fefferman expansion}$$ for some functions $a\left(z\right),b\left(z\right)\in C^{\infty}\left(\bar{D}\right)$. Recently, the asymptotics in were extended to pseudoconvex domains of finite type in $\mathbb{C}^{2}$ by Hsiao and the first author [@HsiaoSavale-2022 Theorem 2]. They established the full asymptotic expansion of the Bergman kernel along transversal paths approaching the boundary. This alone, however, is not sufficient for our proof of . We shall need the asymptotic expansion of the Bergman kernel, and its derivatives, along certain critically tangent paths (see and Remark ) approaching the boundary. Besides, we also need information about the leading coefficient in the asymptotics. To state our result, some setup is in order. Fix $x^{*}\in X=\partial D$ on the boundary of the domain of type $r=r\left(x^{*}\right)$. Let $U_{1},U_{2}\coloneqq JU_{1}\in C^{\infty}\left(HX\right)$ be two local orthonormal sections of the Levi distribution and $U_{3}\in C^{\infty}\left(TX\right)$, $U_{3}\perp HX$ to be a unit normal to the Levi distribution. One then extends $U_{1}$ to a local unit length vector field in the interior of $D$. Set $U_{2}=JU_{1}$ to be an extension of $U_{2}$ to the interior of $D$. Choose an extension of $U_{3}$ that is of unit length and orthogonal to $U_{1},U_{2}$. Set $U_{0}=-JU_{3}$ (so that $U_{3}=JU_{0}$). It is easy to see that $U_{0}$ is of unit length and normal to the boundary, $U_{0}\perp T\partial D$, near $x^{*}\in X$. Replacing $U_{3}$ by $-U_{3}$ if needed, we assume that $U_{0}$ points outward from $D$. This also gives a local boundary defining function $\rho$ via $U_{0}\left(\rho\right)=1$, $\left.\rho\right|_{X}=0$. 
Note that the flow of the normal vector field $U_{0}$ also gives a locally defined projection $\pi:D\rightarrow X=\partial D$ onto the boundary. The pairs of vector fields define CR vector fields $Z=\frac{1}{2}\left(U_{1}-iU_{2}\right),W=\frac{1}{2}\left(U_{0}-iU_{3}\right)\in T^{1,0}\mathbb{C}^{2}$. In [@Christ89-embedding Prop. 3.2] (see also [@Baouendi-Ebenfelt-Rothschild-99]) it was shown that a coordinate system $x=\left(x_{1},x_{2},x_{3}\right)$ on the boundary centered at $x^{*}$ may be chosen so that $$\begin{aligned} \left.Z\right|_{X} & =\frac{1}{2}\left[\underbrace{\partial_{x_{1}}+\left(\partial_{x_{2}}p\right)\partial_{x_{3}}-i\left(\partial_{x_{2}}-\left(\partial_{x_{1}}p\right)\partial_{x_{3}}\right)}_{\eqqcolon Z_{0}}+R\right],\label{eq:Christ normal form}\end{aligned}$$ where $p\left(x_{1},x_{2}\right)$ is a homogeneous, subharmonic (and non-harmonic) real polynomial of degree and weight $r$. We note that $r$ must be even. Besides, $p$ has no purely holomorphic or anti-holomorphic terms in $z_{1}=x_{1}+ix_{2}$ in its Taylor expansion at $0$. Moreover, $R=\sum_{j=1}^{3}r_{j}\left(x\right)\partial_{x_{j}}$ is a real vector field of weight $w\left(R\right)\geq0$. Here the weights of local functions and vector fields are defined as follows. The weight of a monomial $x^{\alpha}$, $\alpha\in\mathbb{N}_{0}^{3}$, is defined as $w.\alpha\coloneqq\alpha_{1}+\alpha_{2}+r\alpha_{3}$, with $w(x)=w\left(x_{1},x_{2},x_{3}\right)\coloneqq\left(1,1,r\right)$. The weight $w\left(f\right)$ of a function $f\in C^{\infty}\left(X\right)$ is then the minimum weight of the monomials appearing in its Taylor series at $x^{*}=0$. Finally, the weight $w\left(U\right)$ of a smooth vector field $U=\sum_{j=1}^{3}f_{j}\partial_{x_{j}}$ is $w\left(U\right)\coloneqq\min\left\{ w\left(f_{1}\right)-1,w\left(f_{2}\right)-1,w\left(f_{3}\right)-r\right\}$. 
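These weight conventions are purely combinatorial, and a short sketch may help fix them; the helper names below are ours, not the paper's, and a function is represented by a finite list of the exponent triples of its Taylor monomials:

```python
import math

r = 4  # type r of the boundary point (even; an illustrative value)

def monomial_weight(alpha):
    """Weight of x^alpha with respect to w(x1, x2, x3) = (1, 1, r)."""
    a1, a2, a3 = alpha
    return a1 + a2 + r * a3

def function_weight(monomials):
    """Minimum weight over the Taylor monomials of f (inf for f = 0)."""
    return min((monomial_weight(a) for a in monomials), default=math.inf)

def vector_field_weight(f1, f2, f3):
    """Weight of U = f1 d_x1 + f2 d_x2 + f3 d_x3, each fi a monomial list."""
    return min(function_weight(f1) - 1,
               function_weight(f2) - 1,
               function_weight(f3) - r)

assert monomial_weight((0, 0, 1)) == r   # x3 alone carries weight r
# Real part of Z_0: d_x1 + (d_x2 p) d_x3 with p homogeneous of weight r, so
# d_x2 p contributes a monomial of weight r - 1 and the field has weight -1.
assert vector_field_weight([(0, 0, 0)], [], [(r - 1, 0, 0)]) == -1
```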
The coordinates $\left(x_{1},x_{2},x_{3}\right)$ are next extended to the interior of the domain by being constant in the normal direction $U_{0}\left(x_{j}\right)=0$, $j=1,2,3$. Then $x'\coloneqq\left(\rho,x_{1},x_{2},x_{3}\right)$ serve as coordinates on the interior of the domain near $x^{*}$ in which $U_{0}=\partial_{\rho}$. We also extend the notion of weights to the new coordinate system. The weight of a monomial $\rho^{\alpha_{0}}x^{\alpha}$ is defined as $w'\left(\rho^{\alpha_{0}}x^{\alpha}\right)=w'.\alpha'\coloneqq r\alpha_{0}+\alpha_{1}+\alpha_{2}+r\alpha_{3}$, with $w'(x')=w'\left(\rho,x_{1},x_{2},x_{3}\right)\coloneqq\left(r;1,1,r\right)$ now denoting the augmented weight vector. We again define the weight $w(f)$ of a smooth function $f\in C^{\infty}\left(D\right)$ near $x^{*}$ as the minimum weight of the monomials appearing in its Taylor series at $x^{*}$ in these coordinates. Finally, the weight $w\left(U\right)$ of a smooth vector field $U=f_{0}\partial_{\rho}+\sum_{j=1}^{3}f_{j}\partial_{x_{j}}$ is $w\left(U\right)\coloneqq\min\left\{ w\left(f_{0}\right)-r,w\left(f_{1}\right)-1,w\left(f_{2}\right)-1,w\left(f_{3}\right)-r\right\}$. Note that one has $w\left(U\right)\geq-r,$ and $w\left(U\right)>-r$ if $f_{0}(0)=f_{3}(0)=0$. Below $O\left(k\right)$ denotes a vector field of weight $k$ or higher. By a rescaling of the $x_{3}$ coordinate, and at the cost of scaling the polynomial $p\left(x_{1},x_{2}\right)$, we may also arrange $\left.U_{3}\right|_{x^{*}=0}=\pm\partial_{x_{3}}$. By the fact that $\left[Z,\bar{Z}\right]=[-\Delta p\left(z_{1}\right)\frac{i}{2}\partial_{x_{3}}]+O\left(-1\right)$ and the pseudoconvexity condition, one can show that it must be $\partial_{x_{3}}$. But the sign is irrelevant to our proof, and thus we will not elaborate on it here. 
Therefore we have $$U_{3}=\partial_{x_{3}}+O\left(-r+1\right).\label{eq:arrangement for U3 =00003D00003D00003D000026 c}$$ Next let $V\in C^{\infty}\left(HX\right)$ denote another locally defined section of the Levi distribution. This defines a local *tangential path* approaching $x^{*}$ via $$\begin{aligned} z\left(\epsilon\right)\coloneqq\left(\underbrace{e^{\epsilon V}x^{*}}_{=\pi\left(z\left(\epsilon\right)\right)},\underbrace{-\epsilon^{r}}_{=\rho\left(z\left(\epsilon\right)\right)}\right) & \in D,\quad\epsilon>0.\label{eq:tangential path}\end{aligned}$$ Note the above path is indeed tangential to the boundary; its tangent vector at $x^{*}$ is in the Levi-distribution $\left.\frac{dz}{d\epsilon}\right|_{\epsilon=0}=V_{x^{*}}\in H_{x^{*}}X$. The order of tangency the path makes with the boundary is the type of the point $r\left(x^{*}\right)$. Writing $V=\sum_{j=1}^{3}g_{j}\partial_{x_{j}}$, we associate the section $V$ with a point $$\begin{aligned} z_{1,V}\coloneqq\left(x_{1,V},x_{2,V}\right)=\left(g_{1}\left(0\right),g_{2}\left(0\right)\right)\in\mathbb{R}^{2}\label{eq:vector field V in coords}\end{aligned}$$ In the computation of the leading asymptotics of the Bergman kernel $K_{D}$ (see in ), one will further see the appearance of the *model Bergman kernel* $B_{0}$ corresponding to the subharmonic polynomial $p$ in . For the readers' convenience, we briefly recall the notion of model Bergman kernel. For that, we consider the $L^{2}$ orthogonal projector from $L^{2}\left(\mathbb{C}_{z_{1}}\right)$ to $H_{p}^{2}$. Here $$H_{p}^{2}\coloneqq\left\{ f\in L^{2}\left(\mathbb{C}_{z_{1}}\right)|~\bar{\partial}_{p}f=0\right\} ;\quad\mathrm{and}~~\bar{\partial}_{p}\coloneqq\partial_{\bar{z}_{1}}+\partial_{\bar{z}_{1}}p.$$ Then $B_{0}$ is defined to be the Schwartz kernel of this projector. More discussion and analysis of the model Bergman kernel follows in . We now state the necessary asymptotics result for the Bergman kernel and its derivatives. 
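Before doing so, a concrete instance of the model may be helpful (an illustration, not used in the proof): for the strongly pseudoconvex model $p\left(z_{1}\right)=\left|z_{1}\right|^{2}$ (so $r=2$), solutions of $\bar{\partial}_{p}f=0$ are of the form $f=h\left(z_{1}\right)e^{-p}$ with $h$ entire, so $H_{p}^{2}$ is a Fock-type space and the diagonal of $B_{0}$ is the constant $2/\pi$. The sketch below checks this numerically by summing the orthonormal monomial expansion of the kernel:

```python
import math

def model_kernel_diag(z_abs, terms=80):
    """Diagonal B_0(z, z) for p = |z|^2 via the orthogonal basis z^k e^{-|z|^2},
    whose squared L^2 norms are pi * k! / 2^{k+1}."""
    s = 0.0
    for k in range(terms):
        norm_sq = math.pi * math.factorial(k) / 2**(k + 1)
        s += z_abs**(2*k) * math.exp(-2 * z_abs**2) / norm_sq
    return s

# The diagonal is constant, independent of the point z
for z_abs in (0.0, 0.7, 1.3):
    assert abs(model_kernel_diag(z_abs) - 2/math.pi) < 1e-9
```

This is consistent with the leading coefficient $b_{0}=4q$ computed later: for $p=\left|z_{1}\right|^{2}$ one has $q\equiv1$ and $b_{0}/\left(2\pi\right)=2/\pi$.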
Below $\partial^{\alpha'}=\left(\frac{1}{2}U_{0}\right)^{\alpha_{0}}Z^{\alpha_{1}}\bar{Z}^{\alpha_{2}}\left(\frac{1}{2}U_{3}\right)^{\alpha_{3}}$ denotes a mixed derivative along the respective vector fields for the multi-index $\alpha'=\left(\alpha_{0},\alpha_{1},\alpha_{2},\alpha_{3}\right)\in\mathbb{N}_{0}^{4}$. **Theorem 4**. *Let $D\subset\mathbb{C}^{2}$ be a smoothly bounded pseudoconvex domain of finite type. For any point $x^{*}\in X=\partial D$ on the boundary, of type $r=r\left(x^{*}\right)$, the Bergman kernel and its derivatives satisfy the asymptotics $$\partial^{\alpha'}K_{D}\left(z,z\right)=\sum_{j=0}^{N}\frac{1}{\left(-2\rho\right){}^{2+\frac{2+w'.\alpha'}{r}-\frac{1}{r}j}}a_{j}+\sum_{j=0}^{N}b_{j}\left(-\rho\right){}^{j}\log\left(-\rho\right)+O\left(\left(-\rho\right)^{\frac{1}{r}\left(N+1\right)-2-\frac{2+w'.\alpha'}{r}}\right),\quad\forall N\in\mathbb{N},\label{eq:HS equation}$$ for some set of numbers $a_{j},b_{j}$ as $z\rightarrow x^{*}$ along the tangential path $z\left(\epsilon\right)$.* *Furthermore, the leading term can be computed in terms of the model Bergman kernel of the subharmonic polynomial $p$ as $$a_{0}=\delta_{0\alpha_{3}}.\left[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}\underbrace{\left(\frac{1}{\pi}\int_{0}^{\infty}e^{-s}s^{1+\frac{2}{r}+\alpha_{0}}B_{0}\left(s^{\frac{1}{r}}z_{1}\right)ds\right)}_{\eqqcolon\tilde{B}_{0,\alpha_{0}}\left(z_{1}\right)}\right]_{z_{1}=z_{1,V}}.\label{eq:computation of the leading term}$$* *Proof.* The proof is similar to [@HsiaoSavale-2022 Thm. 2]. We shall only point out the necessary modifications. In [@HsiaoSavale-2022 Sec. 4] the following space of symbols $\hat{S}_{\frac{1}{r}}^{m}\left(\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{R}_{t}\right)$, $m\in\mathbb{R}$, in the variables $\left(\rho,x,\rho',y;t\right)\in\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{R}_{t}$ was defined. 
This is the space of smooth functions satisfying the symbolic estimates $$\begin{aligned} \left|\partial_{\rho}^{\alpha_{0}}\partial_{\rho'}^{\beta_{0}}\partial_{x}^{\alpha}\partial_{y}^{\beta}\partial_{t}^{\gamma}a(\rho,x,\rho',y,t)\right| & \leq C_{N,\alpha\beta\gamma}\left\langle t\right\rangle ^{m-\gamma+\frac{w'.\left(\alpha'+\beta'\right)}{r}}\frac{\left(1+\left|t^{\frac{1}{r}}\hat{x}\right|+\left|t^{\frac{1}{r}}\hat{y}\right|\right)^{N\left(\alpha',\beta',\gamma\right)}}{\left(1+\left|t^{\frac{1}{r}}\hat{x}-t^{\frac{1}{r}}\hat{y}\right|\right)^{-N}},\label{eq:symbolic estimates-1}\end{aligned}$$ for each $\left(x,y,\rho,\rho',t,N\right)\in\mathbb{R}_{x,y}^{6}\times\mathbb{R}_{\rho,\rho'}^{2}\times\mathbb{R}_{t}\times\mathbb{N}$ and $\left(\alpha',\beta',\gamma\right)\in\mathbb{N}_{0}^{4}\times\mathbb{N}_{0}^{4}\times\mathbb{N}_{0}$ with $\alpha'=(\alpha_{0},\alpha),\beta'=(\beta_{0},\beta)$. Here $N\left(\alpha',\beta',\gamma\right)\in\mathbb{N}$ depends only on the given indices, $\left\langle t\right\rangle \coloneqq\sqrt{1+t^{2}}$ denotes the Japanese bracket while the notation $\hat{x}=\left(x_{1},x_{2}\right)$ denotes the first two coordinates of the tuple $x=\left(x_{1},x_{2},x_{3}\right)$. Below $\hat{S}\left(\mathbb{R}_{\hat{x}}^{2}\times\mathbb{R}_{\hat{y}}^{2}\right)$ further denotes the space of restrictions of functions in $\hat{S}_{\frac{1}{r}}^{m}$ to $x_{3},y_{3},\rho,\rho'=0$ and $t=1$. Next a generalization of this space is defined via $$\hat{S}_{\frac{1}{r}}^{m,k}\coloneqq\bigoplus_{p+q+p'+q'\leq k}\left(tx_{3}\right)^{p}\left(t\rho\right)^{q}\left(ty_{3}\right)^{p'}\left(t\rho'\right)^{q'}\hat{S}_{\frac{1}{r}}^{m},\label{eq:Smk symbol space def.}$$ for each $\left(m,k\right)\in\mathbb{R}\times\mathbb{N}_{0}$. 
Finally, the subspace of classical symbols $\hat{S}_{\frac{1}{r},{\rm cl\,}}^{m}\subset\hat{S}_{\frac{1}{r}}^{m}$ consists of those symbols for which there exist $a_{jpp'qq'}\left(\hat{x},\hat{y}\right)\in\hat{S}\left(\mathbb{R}^{2}\times\mathbb{R}^{2}\right),j,p,p',q,q'\in\mathbb{N}_{0}$, such that $$a\left(x,y,t\right)-\sum_{j=0}^{N}\sum_{p+q+p'+q'\leq j}t^{m-\frac{1}{r}j}\left(tx_{3}\right)^{p}\left(t\rho\right)^{q}\left(ty_{3}\right)^{p'}\left(t\rho'\right)^{q'}a_{jpp'qq'}\left(t^{\frac{1}{r}}\hat{x},t^{\frac{1}{r}}\hat{y}\right)\in\hat{S}_{\frac{1}{r}}^{m-\left(N+1\right)\frac{1}{r},N+1}\label{eq:asymbol}$$ for each $N\in\mathbb{N}_{0}$. The space $\hat{S}_{\frac{1}{r},\textrm{cl}}^{m,k}$ is now defined similarly to . The principal symbol of such an element $a\in\hat{S}_{\frac{1}{r},{\rm cl\,}}^{m}$ is defined to be the function $$\sigma_{L}\left(a\right)\coloneqq a_{00000}\in\hat{S}\left(\mathbb{R}^{2}\times\mathbb{R}^{2}\right).$$ Now, following the proof of [@Hsiao2010 Prop. 7.6], there exists a smooth phase function $\Phi(z,w)$ defined locally on a neighbourhood $U\times U$ of $\left(x^{*},x^{*}\right)$ in $\bar{D}\times\bar{D}$ such that $$\begin{split} & \Phi(z,w)=x_{3}-y_{3}-i\rho\sqrt{-\sigma_{\triangle_{X}}(x,(0,0,1))}-i\rho'\sqrt{-\sigma_{\triangle_{X}}(y,(0,0,1))}+O(\left|\rho\right|^{2})+O(\left|\rho'\right|^{2}),\\ & \mbox{\ensuremath{q_{0}(z,d_{z}\Phi)} vanishes to infinite order on \ensuremath{\rho=0}},\\ & \mbox{\ensuremath{q_{0}(w,-\overline{d}_{w}\Phi)} vanishes to infinite order on \ensuremath{\rho'=0}}. \end{split} \label{eq:two variable phase}$$ Here $\triangle_{X}$ denotes the real Laplace operator on the boundary $X=\partial D$ of the domain, while $q_{0}=\sigma\left(\Box_{f}\right)$ denotes the principal symbol of the complex Laplace-Beltrami operator $\Box_{f}=\bar{\partial}_{f}^{*}\bar{\partial}+\bar{\partial}\bar{\partial}_{f}^{*}$ on the domain. 
The proofs of [@HsiaoSavale-2022 Lemma 17] and [@HsiaoSavale-2022 Lemma 20] can be repeated to obtain the following description for the Bergman kernel: for some $a\left(z;w,t\right)\in\hat{S}_{\frac{1}{r},{\rm cl\,}}^{1+\frac{2}{r}}\left(\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{R}_{t}\right)$ one has $$\begin{aligned} K_{D}\left(z,w\right)=\frac{1}{\pi}\int_{0}^{\infty}e^{i\Phi(z,w)t}a\left(z,w,t\right)dt\quad\left(\textrm{mod }C^{\infty}\left(\left(U\times U\right)\cap\left(\overline{D}\times\overline{D}\right)\right)\right)\label{eq:Bergman kernel description}\end{aligned}$$ with $\sigma_{L}\left(a\right)=B_{0}$ being the model Bergman kernel defined prior to the statement of this theorem. We need to differentiate the last description . For that, we adopt the notion of weights we defined before . By construction, the chosen vector fields $\left(U_{0},Z,\overline{Z},U_{3}\right)$ have weights $\left(-r,-1,-1,-r\right)$ respectively. Furthermore, the leading parts in their weight expansions are given by $$\begin{aligned} \left(U_{0},Z,\overline{Z},U_{3}\right) & =\left(\partial_{\rho},Z_{0}+O\left(0\right),\bar{Z}_{0}+O\left(0\right),\partial_{x_{3}}+O\left(-r+1\right)\right),\label{eq:coords =00003D00003D00003D000026 vector fields along the path}\end{aligned}$$ Here $Z_{0}\coloneqq\frac{1}{2}[\partial_{x_{1}}+\left(\partial_{x_{2}}p\right)\partial_{x_{3}}-i\left(\partial_{x_{2}}-\left(\partial_{x_{1}}p\right)\partial_{x_{3}}\right)]$ is now understood as a locally defined vector field in the interior of the domain. 
Next we observe from the definitions of the symbol spaces , that a vector field $U$ of weight $w\left(U\right)$ maps $$U:\hat{S}_{\frac{1}{r},{\rm cl\,}}^{m}\rightarrow\hat{S}_{\frac{1}{r},{\rm cl\,}}^{m-\frac{1}{r}w\left(U\right)}.\label{eq:weight differentiation properties of Smk}$$ The equations , , now allow us to differentiate to obtain: for some $a_{\alpha}\left(z;w,t\right)\in\hat{S}_{\frac{1}{r},{\rm cl\,}}^{1+\frac{2+w'.\alpha}{r},\alpha_{0}+\alpha_{3}}\left(\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{R}_{t}\right)$ one has $$\begin{aligned} \partial^{\alpha}K_{D}\left(z,z\right) & =\frac{1}{\pi}\int_{0}^{\infty}e^{i\Phi(z,z)t}a_{\alpha}\left(z,z,t\right)dt\quad\left(\textrm{mod }C^{\infty}\left(\left(U\times U\right)\cap\left(\overline{D}\times\overline{D}\right)\right)\right)\nonumber \\ \textrm{with }\quad a_{\alpha} & =\left(Z_{0}^{\alpha_{1}}\bar{Z}_{0}^{\alpha_{2}}B_{0}\right)t^{1+\frac{2+w'.\alpha}{r}}+\hat{S}_{\frac{1}{r},{\rm cl\,}}^{1+\frac{1+w'.\alpha}{r},\alpha_{0}+\alpha_{3}}.\label{eq:Bergman kernel der. description}\end{aligned}$$ Recall that the vector field $V=\sum_{j=1}^{3}g_{j}\partial_{x_{j}}\in C^{\infty}\left(HX\right)$ lies in the Levi distribution. By , its $\partial_{x_{3}}$-component function has weight $w\left(g_{3}\right)\geq r-1$. Thus along the flow of $V$, and consequently along the path $z\left(\epsilon\right)$ in , the coordinate functions satisfy $$\left(x_{1},x_{2},x_{3},\rho\right)=\left(\epsilon g_{1}\left(0\right)+O\left(\epsilon^{2}\right),\epsilon g_{2}\left(0\right)+O\left(\epsilon^{2}\right),O\left(\epsilon^{r}\right),-\epsilon^{r}\right).\label{eq:expansion of path}$$ The last two equations and now combine to give the theorem. ◻ *Remark 5*. (Critical tangency) The path $z\left(\epsilon\right)$ in is deliberately chosen to be critically tangent to the boundary. Namely its order of tangency with the boundary is the type $r\left(x^{*}\right)$ of the boundary point $x^{*}\in\partial D$ that is being approached. 
This order of tangency is critical in the sense that it is the maximum for which the expansion in can be proved. For a higher order of tangency (i.e., $\rho$ having vanishing order higher than $r$ at $\epsilon=0$), the orders of the terms in the symbolic expansion of $a_{\alpha}\in\hat{S}_{\frac{1}{r},{\rm cl\,}}^{1+\frac{2+w'.\alpha}{r},\alpha_{0}+\alpha_{3}}$ in become increasing, and the expansion is no longer asymptotically summable. This means, in particular, that the double summation in would be asymptotically non-summable along the path. A critically tangent path is necessary in our proof below since for such a path the leading coefficient picks up information of the model Bergman kernel at the arbitrary tangent vector $V$. For a path tangent at a lesser order, the leading coefficient only depends on the value of the model kernel $B_{0}$ at the origin. # [\[sec: Analysis of the model\]]{#sec: Analysis of the model label="sec: Analysis of the model"} Analysis of the model kernel In , we introduced the model Bergman kernel $B_{0}$, corresponding to a subharmonic, homogeneous polynomial $p\left(x_{1},x_{2}\right)$. As we see from , it plays an important role in the asymptotics of the Bergman kernel $K_{D}$ of $D$. To prepare for the proof of , we need to further analyze this model Bergman kernel $B_{0}$. For convenience, we will also write $p\left(x_{1},x_{2}\right)$ as $p\left(z_{1}\right)$, where $z_{1}=x_{1}+ix_{2}.$ ## Expansion of the model kernel and first few coefficients First we will work out the expansion of the model Bergman kernel $B_{0}$, and compute the values of the first few coefficients in the expansion. As usual, for a smooth function $f$ on $\mathbb{C}_{z_{1}}$, we write $f_{z_{1}}=\partial_{z_{1}}f=\frac{\partial f}{\partial z_{1}}$, and likewise for $f_{\bar{z}_{1}}$ and $f_{z_{1}\bar{z}_{1}}$. ** 6**. 
*For any $z_{1}\in\mathbb{R}^{2}$, with $\Delta p\left(z_{1}\right)\neq0$, the model Bergman kernel on diagonal satisfies the asymptotics $$\begin{aligned} \left[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}B_{0}\right]\left(t^{\frac{1}{r}}z_{1}\right) & =\frac{t^{1-\frac{2+\left|\alpha\right|}{r}}}{2\pi}\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}\left[\sum_{j=0}^{N}b_{j}t^{-j}+O\left(t^{-N-1}\right)\right]\label{eq:proposition about model kernel}\end{aligned}$$ for each $N\in\mathbb{N}$ as $t\rightarrow\infty$. Moreover, the first four terms in the asymptotics are given by $$\begin{aligned} b_{0} & =4q;\quad b_{1}=q^{-2}Q;\quad b_{2}=\frac{1}{6}\partial_{z_{1}}\partial_{\bar{z}_{1}}\left[q^{-3}Q\right];\nonumber \\ b_{3} & =\frac{q}{48}\left\{ \left[q^{-1}\partial_{z_{1}}\partial_{\bar{z}_{1}}\right]^{2}q^{-3}Q-q^{-4}Q\left[\partial_{z_{1}}\partial_{\bar{z}_{1}}\right]q^{-3}Q-q^{-1}\left[\partial_{\bar{z}_{1}}\left(q^{-3}Q\right)\right]\left[\partial_{z_{1}}\left(q^{-3}Q\right)\right]\right\} ;\label{eq:computation of first four Bergman coefficients}\end{aligned}$$ where $q\coloneqq\frac{1}{4}\Delta p=p_{z_{1}\bar{z}_{1}}$ and $Q\coloneqq qq_{z_{1}\bar{z}_{1}}-q_{z_{1}}q_{\bar{z}_{1}}$ are defined in terms of the polynomial $p$.* *Proof.* The proof uses some rescaling arguments. Following [@Marinescu-Savale18 Sec. 
4.1], we introduce the rescaling operator $\delta_{t^{-\frac{1}{r}}}:\mathbb{C}\rightarrow\mathbb{C}$ given by $\delta_{t^{-\frac{1}{r}}}\left(z_{1}\right)\coloneqq t^{-\frac{1}{r}}z_{1},t>0.$ Recall when introducing $B_{0},$ we defined $\bar{\partial}_{p}\coloneqq\partial_{\bar{z}_{1}}+\partial_{\bar{z}_{1}}p.$ The corresponding Kodaira Laplacian on functions $\Box_{p}=\bar{\partial}_{p}^{*}\bar{\partial}_{p}$ then gets rescaled to the operator $$\left(\delta_{t^{-\frac{1}{r}}}\right)_{*}\Box_{p}=t^{-\frac{2}{r}}\Box_{t}$$ where $\Box_{t}\coloneqq\bar{\partial}_{t}^{*}\bar{\partial}_{t}$, and $\bar{\partial}_{t}\coloneqq\partial_{\bar{z}_{1}}+t\left(\partial_{\bar{z}_{1}}p\right)$. We pause to introduce two more Bergman type kernel functions that are defined similarly as $B_{0}.$ Set $H_{t,p}^{2}\coloneqq\left\{ f\in L^{2}\left(\mathbb{C}_{z_{1}}\right)|\bar{\partial}_{t}f=0\right\}$ and consider the $L^{2}$ orthogonal projector $B_{t}$ from $L^{2}\left(\mathbb{C}_{z_{1}}\right)$ to $H_{t,p}^{2}.$ Slightly abusing notation, we still denote the Schwartz kernel of this projector by $B_{t}$. Next we define $L_{t}^{2}\left(\mathbb{C}_{z_{1}}\right)\coloneqq\left\{ f|e^{-tp}f\in L^{2}\left(\mathbb{C}_{z_{1}}\right)\right\}$, and denote by $\mathcal{O}\left(\mathbb{C}_{z_{1}}\right)$ the space of entire functions on $\mathbb{C}_{z_{1}}.$ Consider the $L^{2}$ orthogonal projector $B_{t}^{p}$ from $L_{t}^{2}\left(\mathbb{C}_{z_{1}}\right)$ to $L_{t}^{2}\left(\mathbb{C}_{z_{1}}\right)\cap\mathcal{O}\left(\mathbb{C}_{z_{1}}\right)$. We still write the Schwartz kernel of this projector as $B_{t}^{p}$. 
A routine computation yields that the two kernels $B_{t}$ and $B_{t}^{p}$ are related by $$B_{t}\left(z_{1},z_{1}'\right)=e^{-tp\left(z_{1}\right)-tp\left(z_{1}'\right)}B_{t}^{p}\left(z_{1},z_{1}'\right),\label{eq:relation between Bergman projectors}$$ Moreover, $B_{t}$ can be equivalently understood as the Bergman projector for the trivial holomorphic line bundle on $\mathbb{C}$ with Hermitian metric $h_{t}=e^{-tp}$. The curvature of this metric is $t\underbrace{\left(2\partial_{z_{1}}\partial_{\bar{z}_{1}}p\right)}_{=\frac{1}{2}\Delta p}dz_{1}\wedge d\bar{z}_{1}$. Its eigenvalue is $\Delta p$. In [@HsiaoSavale-2022 Thm. 14], the Bergman kernel of $B_{t}$ was related to the model via $$B_{0}\left(t^{\frac{1}{r}}z,t^{\frac{1}{r}}z'\right)=t^{-\frac{2}{r}}B_{t}\left(z,z'\right).\label{eq:reln between Bergman kernels}$$ Furthermore, in its proof the following spectral gap property for $\Box_{t}$ was observed $$\textrm{Spec}\left(\Box_{t}\right)\subset\left\{ 0\right\} \cup\left[c_{1}t^{2/r}-c_{2},\infty\right)$$ for some $c_{1},c_{2}>0$. At a point $z_{1}\in\mathbb{C}$, where $\Delta p\left(z_{1}\right)\neq0$, the asymptotics of $B_{t}\left(z,z\right)$ as $t\rightarrow\infty$ are thus the standard asymptotics for the Bergman kernel on tensor powers of a positive line bundle (cf. [@Hsiao-Marinescu-2014 Thm. 1.6]). There is an asymptotic expansion $$\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}B_{t}\left(z\right)=\frac{t}{2\pi}\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}\left[\sum_{j=0}^{N}b_{j}t^{-j}+O\left(t^{-N-1}\right)\right]\label{eq:standard derivative asymptotics}$$ for each $N\in\mathbb{N}$ as $t\rightarrow\infty$. The last two equations and combine to prove . It remains to compute the first four coefficients in . For that we will make use of , by which it suffices to find the corresponding coefficients in the expansion of $B_{t}^{p}.$ The computations for the latter can be found in [@Englis2000 (6.2) and Theorem 9]. 
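The specialization carried out next rests on the standard curvature identities for the Kähler metric $g_{1\bar{1}}=q$ in complex dimension one; with the sign convention of [@Englis2000] this amounts to $\textrm{Ric}_{1\bar{1}}=\partial_{z_{1}}\partial_{\bar{z}_{1}}\log q$ and $R=q^{-1}\textrm{Ric}_{1\bar{1}}$ (we note the sign convention as an assumption inferred from the stated formulas). These identities can be verified symbolically; a sketch in Python with sympy, treating $q$ as a generic function of $z$ and the complexified conjugate variable $\zeta$:

```python
import sympy as sp

z, zeta = sp.symbols('z zeta')   # zeta plays the role of zbar (complexification)
q = sp.Function('q')(z, zeta)    # metric component g_{1 1bar} = q

# Q = q q_{z zbar} - q_z q_zbar, as defined in the expansion proposition
Q = q*sp.diff(q, z, zeta) - sp.diff(q, z)*sp.diff(q, zeta)

# Ricci curvature of the metric g_{1 1bar} = q (Englis's sign convention)
Ric = sp.diff(sp.log(q), z, zeta)
assert sp.simplify(Ric - Q/q**2) == 0

# scalar curvature R = g^{1 1bar} Ric_{1 1bar} = q^{-1} Ric
R = Ric/q
assert sp.simplify(R - Q/q**3) == 0
print("Ric = Q/q^2 and R = Q/q^3 verified")
```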
In order to see the specialization of the formulas therein to the special case here, we note the Kähler metric $g=\partial\bar{\partial}p$ with potential $p$ has component $g_{1\bar{1}}=q=\partial_{z_{1}}\partial_{\bar{z}_{1}}p$. The only non-zero Christoffel symbols are $\overline{\Gamma_{11}^{1}}=\Gamma_{\bar{1}\bar{1}}^{\bar{1}}=q^{-1}\partial_{\bar{z}_{1}}q.$ Furthermore, the only non-zero components of the Riemannian, Ricci and scalar curvatures respectively are given by the following. Here we follow the convention of curvatures in [@Englis2000 pp. 6], which may differ from that of some other papers by a negative sign. $$\begin{aligned} R_{1\bar{1}1\bar{1}}=\partial_{z_{1}}\partial_{\bar{z}_{1}}q-q^{-1}\left(\partial_{z_{1}}q\right)\left(\partial_{\bar{z}_{1}}q\right)=q^{-1}Q;\quad\textrm{Ric}_{1\bar{1}}=q^{-2}Q;\quad R=q^{-3}Q.\end{aligned}$$ The corresponding Laplace operator $L_{1}$ of [@Englis2000 (2.10)] in our special context is given by $L_{1}=q^{-1}\partial_{z_{1}}\partial_{\bar{z}_{1}}$. Bringing these specializations into [@Englis2000 (6.2) and Theorem 9], a routine computation yields the values of $b_{0}$, $b_{1}$, $b_{2}$ and $b_{3}$. ◻ * 7*. Although we computed the values of $b_{0},\cdots,b_{3}$ in Proposition , we will only use $b_{3}$ in the proof of . ## Models with vanishing expansion coefficients Having shown that the model kernel $B_{0}\left(t^{\frac{1}{r}}z_{1}\right)$ admits an asymptotic expansion at $t\rightarrow\infty$, we ask when the terms of the asymptotic expansion are eventually zero, or in other words, $b_{j}=0$ for $j$ sufficiently large. This is relevant to our theorem below. We prove the following somewhat surprising result which shows the vanishing of the third coefficient is already restrictive. As above, let $p\left(x_{1},x_{2}\right)$ be a subharmonic and non-harmonic homogeneous polynomial of degree $r$. ** 8**. 
*Suppose the third term $b_{3}$ vanishes in the asymptotic expansion of the model kernel $B_{0}$ corresponding to $p$. Then there exists some real number $c_{0}>0$ such that $q=c_{0}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}-1}.$ Here as before, $q\coloneqq\frac{1}{4}\Delta p$.* To prove the theorem, we carry out some Hermitian analysis. For that, we start with a few definitions and lemmas. In the remainder of this subsection, we will write $z$ instead of $z_{1}$ for simplicity. ** 9**. Let $f\in\mathbb{C}[z,\zeta]$ be a polynomial of two variables. Fix $a\in\mathbb{C}.$ Let $k\in\mathbb{N}_{0}$ and $\lambda\in\mathbb{C}.$ We say $f$ is divisible by $(z+a\zeta)^{k}$ with coefficient $\lambda,$ denoted by $f\sim D_{a}(k,\lambda),$ if $f(z,\zeta)=(z+a\zeta)^{k}\hat{f}(z,\zeta)$ for some $\hat{f}\in\mathbb{C}[z,\zeta]$ with $\hat{f}(-a,1)=\lambda.$ It is clear that if $f\sim D_{a}(k,\lambda)$ with $k\geq1$, then we have $f\sim D_{a}(k-1,0).$ In the following, we say $f\in\mathbb{C}[z,\zeta]$ is Hermitian if $f(z,\bar{z})$ is real-valued for every $z\in\mathbb{C}.$ ** 10**. *Let $f\in\mathbb{C}[z,\zeta]$ be a nonconstant Hermitian homogeneous polynomial of two variables. Then there exist $a\in\mathbb{C},k\geq1$ and a nonzero $\lambda\in\mathbb{C}$ such that $f\sim D_{a}(k,\lambda).$ Moreover, if $f\neq cz^{m}\zeta^{m}$ for every real number $c\neq0$ and integer $m\geq1$, then we can further choose $a\neq0.$* *Proof.* Write $d$ for the degree of $f$. Since $f$ is homogeneous, we have $$f(z,\zeta)=\zeta^{d}f(\frac{z}{\zeta},1).\label{eqnq1-1}$$ By assumption, $f(\eta,1)\in\mathbb{C}[\eta]$ is nonconstant, for otherwise $f(z,\zeta)$ is not Hermitian. Then by the fundamental theorem of algebra, write $$f(\eta,1)=c\eta^{m}\prod_{j=1}^{l}(\eta-a_{j})^{k_{j}}.\label{eqnq2-1}$$ Here $c\in\mathbb{C}$ is nonzero, and $m,l\geq0$ and $k_{j}\geq1$ satisfy $m+\sum_{j=1}^{l}k_{j}\leq d.$ Moreover, the $a_{j}$'s are distinct nonzero complex numbers. 
When $l=0,$ the above equation is understood as $f(\eta,1)=c\eta^{m}.$ By ([\[eqnq1-1\]](#eqnq1-1){reference-type="ref" reference="eqnq1-1"}) and ([\[eqnq2-1\]](#eqnq2-1){reference-type="ref" reference="eqnq2-1"}), we have $$f(z,\zeta)=cz^{m}\zeta^{n}\prod_{j=1}^{l}(z-a_{j}\zeta)^{k_{j}},\quad\text{where}\quad n=d-m-\sum_{j=1}^{l}k_{j}.\label{eqnqzx-1}$$ We first consider the case where $l=0.$ In this case, $f(z,\zeta)=cz^{m}\zeta^{n}.$ Since $f$ is nonconstant and Hermitian, we must have $c\in\mathbb{R},c\neq0,$ and $n=m\geq1$. The conclusion of the lemma follows if we choose $a=0,k=m\geq1,\lambda=c\neq0.$ We next assume $l\geq1.$ Then by ([\[eqnqzx-1\]](#eqnqzx-1){reference-type="ref" reference="eqnqzx-1"}), the conclusion of the lemma follows if we choose $a=-a_{1}\neq0,k=k_{1}\geq1,\lambda=ca_{1}^{m}\prod_{j=2}^{l}(a_{1}-a_{j})^{k_{j}}\neq0$. This proves the first part of Lemma [ 10](#lm3){reference-type="ref" reference="lm3"}. Note if $f$ is not a multiple of $z^{m}\zeta^{m}$ for any integer $m$, then it can only be the latter case, and this establishes the second part of Lemma [ 10](#lm3){reference-type="ref" reference="lm3"}. ◻ We next extend the above definition to rational functions. ** 11**. Let $g\in\mathbb{C}(z,\zeta)$ be a rational function. Write $g=\frac{f_{1}}{f_{2}},$ where $f_{1},f_{2}\in\mathbb{C}[z,\zeta]$ and $f_{2}\neq0.$ If $f_{i}\sim D_{a}(k_{i},\lambda_{i}),1\leq i\leq2,$ with $k_{1},k_{2}\geq0$ and $\lambda_{2}\neq0,$ then we say $g\sim D_{a}(k_{1}-k_{2},\frac{\lambda_{1}}{\lambda_{2}}).$ Note that $k_{1}-k_{2}$ could be negative. Note if $g\in\mathbb{C}(z,\zeta)$ and $g\sim D_{a}(k,\lambda)$, then we have $g\sim D_{a}(k-1,0).$ We next make a few more observations. ** 12**. 
*If $g\in\mathbb{C}(z,\zeta)$ and $g\sim D_{a}(k,\lambda)$ for some $a\in\mathbb{C}$, then the following hold:* *(1) $\partial_{z}g\sim D_{a}(k-1,k\lambda)$ and $\partial_{\zeta}g\sim D_{a}(k-1,ak\lambda);$* *(2) $\partial_{z}\partial_{\zeta}g\sim D_{a}(k-2,ak(k-1)\lambda).$* *Proof.* Write $g=\frac{f_{1}}{f_{2}}$ with $f_{1},f_{2}\in\mathbb{C}[z,\zeta],f_{2}\neq0.$ Write $f_{i}=(z+a\zeta)^{k_{i}}h_{i}$ for $1\leq i\leq2,$ where $h_{1},h_{2}\in\mathbb{C}[z,\zeta],k_{1},k_{2}\geq0,k_{1}-k_{2}=k$ and $h_{2}(-a,1)\neq0,\frac{h_{1}(-a,1)}{h_{2}(-a,1)}=\lambda.$ A routine computation yields $$\partial_{z}g=\frac{f_{2}\partial_{z}f_{1}-f_{1}\partial_{z}f_{2}}{f_{2}^{2}}=\frac{(k_{1}-k_{2})(z+a\zeta)^{k_{1}+k_{2}-1}h_{1}h_{2}+(z+a\zeta)^{k_{1}+k_{2}}(h_{2}\partial_{z}h_{1}-h_{1}\partial_{z}h_{2})}{(z+a\zeta)^{2k_{2}}h_{2}^{2}}.$$ Then it is clear that $\partial_{z}g\sim D_{a}(k-1,k\lambda)$. Similarly one can show $\partial_{\zeta}g\sim D_{a}(k-1,ak\lambda).$ This finishes the proof of part (1). The conclusion in part (2) follows immediately from part (1). ◻ The statements in the next lemma follow from direct computations. We omit the proof. ** 13**. *Let $g_{1},g_{2}\in\mathbb{C}(z,\zeta)$ and $a\in\mathbb{C}$. Assume $g_{i}\sim D_{a}(k_{i},\lambda_{i})$ for $1\leq i\leq2$ where $k_{i}\in\mathbb{Z}$ and $\lambda_{i}\in\mathbb{C}$, then the following hold:* *(1) $g_{1}g_{2}\sim D_{a}(k_{1}+k_{2},\lambda_{1}\lambda_{2});$* *(2) $cg_{1}\sim D_{a}(k_{1},c\lambda_{1})$ for any complex number $c;$* *(3) $g_{1}+g_{2}\sim D_{a}(k_{1},\lambda_{1}+\lambda_{2})$ if $k_{1}=k_{2};$ and $g_{1}+g_{2}\sim D_{a}(k_{1},\lambda_{1})$ if $k_{1}<k_{2};$* *(4) In addition assume $\lambda_{2}\neq0.$ Then $\frac{g_{1}}{g_{2}}\sim D_{a}(k_{1}-k_{2},\frac{\lambda_{1}}{\lambda_{2}}).$* We are now ready to prove . *Proof of .* Recall $q=\partial_{z}\partial_{\bar{z}}p$ and $Q=q(\partial_{z}\partial_{\bar{z}}q)-(\partial_{z}q)(\partial_{\bar{z}}q)$ are real polynomials in $\mathbb{C}[z,\bar{z}]$. 
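Before entering the proof, the vanishing criterion can be probed on concrete examples. The sketch below (Python with sympy; the two sample potentials are our own illustrative choices, not taken from the text) evaluates $b_{3}$ from the formula in the expansion proposition, after complexification $\bar{z}\mapsto\zeta$. For $p=\frac{1}{2}z^{2}\zeta^{2}$ one has $q=2z\zeta$, which is of the predicted form, and indeed $b_{3}=0$; for $p=x_{1}^{4}=\left(\frac{z+\zeta}{2}\right)^{4}$ (subharmonic, homogeneous of degree $4$) one finds $b_{3}$ is a nonzero multiple of $(z+\zeta)^{-10}$:

```python
import sympy as sp

z, w = sp.symbols('z zeta')  # w plays the role of zbar (complexification)

def b3(p):
    """b_3 from the expansion proposition, computed from q = p_{z zbar}."""
    q = sp.diff(p, z, w)
    Q = q*sp.diff(q, z, w) - sp.diff(q, z)*sp.diff(q, w)
    A = Q/q**3
    L = lambda f: sp.diff(f, z, w)/q  # the operator q^{-1} d_z d_zbar
    expr = L(L(A)) - (Q/q**4)*sp.diff(A, z, w) - sp.diff(A, w)*sp.diff(A, z)/q
    return sp.simplify(q*expr/48)

# q = 2 z zbar has the predicted form; b_3 vanishes
assert b3(z**2*w**2/2) == 0

# p = x_1^4: here Q is not identically zero and b_3 does not vanish
assert sp.simplify(b3(((z + w)/2)**4)) != 0
print("b_3 vanishing check passed")
```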
Note we can assume $q$ is nonconstant, for otherwise the conclusion is trivial. We will identify $p(z,\bar{z})\in\mathbb{C}[z,\bar{z}]$ with its complexification $p(z,\zeta)\in\mathbb{C}[z,\zeta]$ (where we replace $\bar{z}$ by a new variable $\zeta$). Moreover, since $p(z,\bar{z})$ is real-valued, $p(z,\zeta)$ is Hermitian. Likewise for $q(z,\bar{z})$ and $Q(z,\bar{z}).$ To establish , it suffices to show that $q(z,\zeta)=c_{0}z^{m}\zeta^{m}$ for some integer $m\geq1$. Seeking a contradiction, suppose the conclusion fails. Then by Lemma [ 10](#lm3){reference-type="ref" reference="lm3"}, we can find some complex numbers $a\neq0,\lambda\neq0$, and some integer $k\geq1$ such that $q\sim D_{a}(k,\lambda).$ That is, we can write $q(z,\zeta)=(z+a\zeta)^{k}h,$ where $h\in\mathbb{C}[z,\zeta]$ and $h(-a,1)=\lambda.$ A direct computation yields that the following holds for some $\hat{h}\in\mathbb{C}[z,\zeta].$ $$Q(z,\zeta)=-ak(z+a\zeta)^{2k-2}h^{2}+(z+a\zeta)^{2k-1}\hat{h}.$$ Thus we have $Q\sim D_{a}(2k-2,-ak\lambda^{2}).$ By assumption $b_{3}\equiv0.$ We multiply it by $\frac{48}{q}$ and use the standard complexification to get $$\left[q^{-1}\partial_{z}\partial_{\zeta}\right]^{2}q^{-3}Q-q^{-4}Q\left[\partial_{z}\partial_{\zeta}\right]q^{-3}Q-q^{-1}\left[\partial_{\zeta}\left(q^{-3}Q\right)\right]\left[\partial_{z}\left(q^{-3}Q\right)\right]=0.\label{eqncomb3}$$ On the other hand, by Lemma [ 13](#lm6){reference-type="ref" reference="lm6"}, $q^{3}\sim D_{a}(3k,\lambda^{3})$ and $q^{-3}Q\sim D_{a}(-k-2,-\frac{ak}{\lambda}).$ Then by Lemma [ 12](#lm5){reference-type="ref" reference="lm5"}, $$\partial_{z}\left(q^{-3}Q\right)\sim D_{a}(-k-3,\frac{ak(k+2)}{\lambda});\quad\partial_{\zeta}\left(q^{-3}Q\right)\sim D_{a}(-k-3,\frac{a^{2}k(k+2)}{\lambda}).$$ Using the above and Lemma [ 13](#lm6){reference-type="ref" reference="lm6"}, we can compute the last term on the left hand side of ([\[eqncomb3\]](#eqncomb3){reference-type="ref" reference="eqncomb3"}): 
$$-q^{-1}\left[\partial_{\zeta}\left(q^{-3}Q\right)\right]\left[\partial_{z}\left(q^{-3}Q\right)\right]\sim D_{a}(-3k-6,-\frac{a^{3}k^{2}(k+2)^{2}}{\lambda^{3}}).$$ Similarly, we compute the first two terms on the left hand side of ([\[eqncomb3\]](#eqncomb3){reference-type="ref" reference="eqncomb3"}): $$\left[q^{-1}\partial_{z}\partial_{\zeta}\right]^{2}q^{-3}Q\sim D_{a}(-3k-6,-\frac{a^{3}k(k+2)(k+3)(2k+4)(2k+5)}{\lambda^{3}});$$ $$-q^{-4}Q\left[\partial_{z}\partial_{\zeta}\right]q^{-3}Q\sim D_{a}(-3k-6,-\frac{a^{3}k^{2}(k+2)(k+3)}{\lambda^{3}}).$$ Consequently, the left hand side of ([\[eqncomb3\]](#eqncomb3){reference-type="ref" reference="eqncomb3"}) is $\sim D_{a}(-3k-6,T),$ where $$T=-\frac{a^{3}k(k+2)}{\lambda^{3}}\left[k(k+2)+(k+3)(2k+4)(2k+5)+k(k+3)\right]\neq0.$$ This means the left hand side of ([\[eqncomb3\]](#eqncomb3){reference-type="ref" reference="eqncomb3"}) is nonzero, a contradiction. The proof is completed. ◻ ## The case $p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}$ We next consider the particular case when $p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}$ for $c>0$ (recall $r$ must be even). Here it becomes possible to compute the Bergman kernel $B_{0}$ explicitly. ** 14**. 
*The model Bergman kernel corresponding to the homogeneous subharmonic polynomial $p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}$ is given by $$\begin{aligned} B_{0}\left(z_{1},z'_{1}\right) & =\frac{re^{-\left[p\left(z_{1}\right)+p\left(z_{1}'\right)\right]}c^{\frac{2}{r}}}{2\pi}G\left(c^{\frac{2}{r}}z_{1}\overline{z_{1}'}\right),\quad\textrm{where }\label{eq:model Bergman kernel-1}\\ G\left(x\right) & \coloneqq\sum_{\alpha=0}^{\frac{r}{2}-1}\frac{x^{\alpha}}{\Gamma\left(\frac{2\left(\alpha+1\right)}{r}\right)}+x^{\frac{r}{2}-1}e^{x^{\frac{r}{2}}}\left[\sum_{\alpha=0}^{\frac{r}{2}-1}\frac{\Gamma\left(\frac{2\left(\alpha+1\right)}{r}\right)-\Gamma\left(\frac{2\left(\alpha+1\right)}{r},x^{\frac{r}{2}}\right)}{\Gamma\left(\frac{2\left(\alpha+1\right)}{r}\right)}\right]\label{eq:function G}\end{aligned}$$ is given in terms of the incomplete gamma function $\Gamma\left(a,u\right)\coloneqq\int_{u}^{\infty}t^{a-1}e^{-t}dt$, $u>0$.* *Proof.* From the formulas $\Box_{p}=\bar{\partial}_{p}^{*}\bar{\partial}_{p}$ and $\bar{\partial}_{p}\coloneqq\partial_{\bar{z}_{1}}+\partial_{\bar{z}_{1}}p=\partial_{\bar{z}_{1}}+\frac{cr}{4}z_{1}^{\frac{r}{2}}\bar{z}_{1}^{\frac{r}{2}-1}$, an orthonormal basis for $\textrm{ker}\left(\Box_{p}\right)$ is easily found to be $$\begin{aligned} s_{\alpha} & \coloneqq\left(\frac{1}{2\pi}\frac{r}{\Gamma\left(\frac{2\left(\alpha+1\right)}{r}\right)}c^{\frac{2\left(\alpha+1\right)}{r}}\right)^{1/2}z_{1}^{\alpha}e^{-p},\quad\alpha\in\mathbb{N}_{0}.\end{aligned}$$ Since $B_{0}=\sum s_{\alpha}\overline{s_{\alpha}}$, we have $$\begin{aligned} B_{0}\left(z_{1},z_{1}'\right) & =\frac{re^{-\left[p\left(z_{1}\right)+p\left(z'_{1}\right)\right]}}{2\pi}\sum_{\alpha\in\mathbb{N}_{0}}\frac{1}{\Gamma\left(\frac{2\left(\alpha+1\right)}{r}\right)}c^{\frac{2\left(\alpha+1\right)}{r}}\left(z_{1}\overline{z_{1}'}\right)^{\alpha}.\label{eq:Bergman kernel orthogonal basis}\end{aligned}$$ To compute the above in a closed form, consider the series 
$$\begin{aligned} F\left(y\right)\coloneqq\sum_{\alpha=0}^{\infty}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}=\sum_{\alpha=0}^{s-1}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}+\underbrace{\sum_{\alpha=s}^{\infty}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}}_{F_{0}\left(y\right)\coloneqq},\end{aligned}$$ for $s=\frac{r}{2}$. Differentiating the second term in the series and using $\Gamma\left(z+1\right)=z\Gamma\left(z\right)$ yields $F_{0}'\left(y\right)=F_{0}\left(y\right)+\sum_{\alpha=0}^{s-1}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}$ for $y>0.$ This ODE can be solved (uniquely) with the boundary condition $F_{0}\left(0\right)=0$ to give $$\begin{aligned} F_{0}\left(y\right)=e^{y}\left[\sum_{\alpha=0}^{s-1}\frac{\Gamma\left(\frac{\alpha+1}{s}\right)-\Gamma\left(\frac{\alpha+1}{s},y\right)}{\Gamma\left(\frac{\alpha+1}{s}\right)}\right]\label{eq:important series}\end{aligned}$$ in terms of the incomplete gamma function. Thus in particular we have computed $F\left(y\right)\coloneqq y^{\frac{1}{s}-1}G\left(y^{\frac{1}{s}}\right)$, where $G$ is as defined in . Finally we note from that $$B_{0}\left(z,z'\right)=\frac{re^{-\left[p\left(z_{1}\right)+p\left(z'_{1}\right)\right]}c^{\frac{2}{r}}}{2\pi}x^{s-1}F\left(x^{s}\right),$$ for $x=c^{\frac{2}{r}}z_{1}\overline{z'_{1}}$, completing the proof. ◻ # [\[sec:Main theorem proof\]]{#sec:Main theorem proof label="sec:Main theorem proof"} Proof of the main theorem In this section we finally prove . *Proof of .* It suffices to show that $D$ is strongly pseudoconvex, or the type $r=2$ along the boundary, as thereafter one can apply Fu-Wong [@Fu-Wong97] and Nemirovski-Shafikov [@Nemirovski-Shafikov-2006]. To this end, suppose $x^{*}\in\partial D$ is a point on the boundary of type $r=r\left(x^{*}\right)\geq2$. 
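The series identity used above to derive the closed form for $G$ (the expression of $F_{0}$ in terms of incomplete gamma functions) can also be checked numerically. A sketch in Python with mpmath, for the sample values $s=2$ (i.e. $r=4$) and $y=3/2$, both of our own choosing; the direct summation of $F(y)$ is compared against its first $s$ terms plus the closed form for $F_{0}(y)$:

```python
from mpmath import mp, mpf, gamma, gammainc, exp, nsum, inf

mp.dps = 30
s, y = 2, mpf(3)/2   # s = r/2 with r = 4; sample evaluation point y = 3/2

# general term of F(y) = sum_{a >= 0} y^{(a+1)/s - 1} / Gamma((a+1)/s)
term = lambda a: y**((a + 1)/mpf(s) - 1)/gamma((a + 1)/mpf(s))

F_series = nsum(term, [0, inf])        # direct summation of the full series
head = sum(term(a) for a in range(s))  # the first s terms

# closed form for the tail F_0(y), via upper incomplete gamma Gamma(.,y)
F0 = exp(y)*sum((gamma((a + 1)/mpf(s)) - gammainc((a + 1)/mpf(s), y))
                / gamma((a + 1)/mpf(s)) for a in range(s))

assert abs(F_series - (head + F0)) < mpf('1e-20')
print("closed form for F_0 verified numerically")
```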
By Proposition and , under the assumption of , the Bergman kernel $K=K_{D}$ of the domain satisfies the following Monge-Ampère equation inside $D.$ $$J\left(K\right)\coloneqq\det\begin{pmatrix}K & \bar{Z}K & \bar{W}K\\ ZK & \left(Z\bar{Z}-\left[Z,\bar{Z}\right]^{0,1}\right)K & \left(Z\bar{W}-\left[Z,\bar{W}\right]^{0,1}\right)K\\ WK & \left(W\bar{Z}-\left[W,\bar{Z}\right]^{0,1}\right)K & \left(W\bar{W}-\left[W,\bar{W}\right]^{0,1}\right)K \end{pmatrix}=\frac{9\pi^{2}}{2}K^{4}.\label{eq:MA equation for K}$$ Here we have used the orthonormal frame of $T^{1,0}\mathbb{C}^{2}$ given by $Z=\frac{1}{2}\left(U_{1}-iU_{2}\right)$, $W=\frac{1}{2}\left(U_{0}-iU_{3}\right)$ defined prior to . Using and , we compute the $\left(0,1\right)$ components of the commutators above: $$\begin{aligned} \left[Z,\bar{Z}\right]^{0,1}= & \left[-\Delta p\left(z_{1}\right)\frac{i}{2}\partial_{x_{3}}\right]^{0,1}+O\left(-1\right)\\ = & \frac{\Delta p\left(z_{1}\right)}{2}\left(W-\bar{W}\right)^{0,1}+O\left(-1\right)\\ = & -\frac{\Delta p\left(z_{1}\right)}{2}\bar{W}+O\left(-1\right);\\ \left[Z,\bar{W}\right]^{0,1}= & ~O\left(-r\right);\quad\left[W,\bar{Z}\right]^{0,1}=O\left(-r\right);\quad\left[W,\bar{W}\right]^{0,1}=O\left(-2r+1\right).\end{aligned}$$ This allows us to compute the most singular term in the asymptotics of both sides of as $z\rightarrow x^{*}$ along the tangential path $z\left(\epsilon\right)$ in . 
By , one obtains along $z\left(\epsilon\right)$, $$\begin{aligned} J\left(K\right) & =\left[\left(-2\rho\right)^{-2-\frac{2}{r}}\right]^{4}\left[\det\begin{pmatrix}\tilde{B}_{0,0} & \partial_{\bar{z}_{1}}\tilde{B}_{0,0} & \tilde{B}_{0,1}\\ \partial_{z_{1}}\tilde{B}_{0,0} & \partial_{z_{1}}\partial_{\bar{z}_{1}}\tilde{B}_{0,0}+\left[\frac{\Delta p}{2}\right]\tilde{B}_{0,1} & \partial_{z_{1}}\tilde{B}_{0,1}\\ \tilde{B}_{0,1} & \partial_{\bar{z}_{1}}\tilde{B}_{0,1} & \tilde{B}_{0,2} \end{pmatrix}\left(z_{1,V}\right)+o_{\epsilon}\left(1\right)\right]\\ K^{4} & =\left[\left(-2\rho\right)^{-2-\frac{2}{r}}\right]^{4}\left[\tilde{B}_{0,0}\left(z_{1,V}\right)^{4}+o_{\epsilon}\left(1\right)\right].\end{aligned}$$ Here we say a function $\phi$ is $o_{\epsilon}\left(1\right)$ if $\phi(\epsilon)$ goes to $0$ as $\epsilon\rightarrow0^{+}.$ (Recall $\rho=-\epsilon^{r}$ along the path). Thus comparing the leading coefficients in the asymptotics gives the following equation $$\det\begin{pmatrix}\tilde{B}_{0,0} & \partial_{\bar{z}_{1}}\tilde{B}_{0,0} & \tilde{B}_{0,1}\\ \partial_{z_{1}}\tilde{B}_{0,0} & \partial_{z_{1}}\partial_{\bar{z}_{1}}\tilde{B}_{0,0}+\left[\frac{\Delta p}{2}\right]\tilde{B}_{0,1} & \partial_{z_{1}}\tilde{B}_{0,1}\\ \tilde{B}_{0,1} & \partial_{\bar{z}_{1}}\tilde{B}_{0,1} & \tilde{B}_{0,2} \end{pmatrix}\left(z_{1}\right)=\frac{9\pi^{2}}{2}\tilde{B}_{0,0}\left(z_{1}\right)^{4},\label{eq:comparing coeff. eqn.}$$ at each $z_{1}\in\mathbb{R}^{2}$, for the model Bergman kernel. Here $\tilde{B}_{0,\alpha_{0}}$ is as defined in . Finally, one chooses $z_{1}$ such that $\Delta p\left(z_{1}\right)\neq0$ and substitutes $z_{1}\mapsto t^{\frac{1}{r}}z_{1}$ in the last equation above for the model. 
The terms involved in the above equation are then of the form $$\begin{aligned} \tilde{B}_{0,\alpha_{0}}\left(t^{\frac{1}{r}}z_{1}\right) & =\frac{1}{\pi}\int_{0}^{\infty}e^{-s}s^{1+\frac{2}{r}+\alpha_{0}}B_{0}\left(s^{\frac{1}{r}}t^{\frac{1}{r}}z_{1}\right)ds\\ & =\frac{t^{-2-\frac{2}{r}-\alpha_{0}}}{\pi}\int_{0}^{\infty}e^{-\frac{\tau}{t}}\tau^{1+\frac{2}{r}+\alpha_{0}}B_{0}\left(\tau^{\frac{1}{r}}z_{1}\right)d\tau\end{aligned}$$ from the definition . Upon differentiation, using Proposition and standard asymptotics for the Laplace transform of a classical symbol $\tau^{1+\frac{2}{r}+\alpha_{0}}B_{0}\left(\tau^{\frac{1}{r}}z_{1}\right)\in S_{\tau,\textrm{cl}}^{2+\alpha_{0}}$, the terms involved in now have an asymptotic expansion $$\begin{aligned} \left[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}\tilde{B}_{0,\alpha_{0}}\right]\left(t^{\frac{1}{r}}z_{1}\right) & =t^{1-\frac{2+\left|\alpha\right|}{r}}\left[\sum_{j=0}^{N+2+\alpha_{0}}c_{j}t^{-j}+\sum_{j=0}^{N}d_{j}t^{-\left(3+\alpha_{0}+j\right)}\ln t+O\left(t^{-\left(3+\alpha_{0}+N\right)}\right)\right],\label{eq:asymptotic of det terms}\end{aligned}$$ $\forall N\in\mathbb{N},$ as $t\rightarrow\infty$. Furthermore the leading logarithmic term is $d_{0}=\frac{1}{2\pi^{2}}\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}b_{3+\alpha_{0}}$. The above allows us to compute the asymptotics of both sides of the equation as $t\rightarrow\infty$. In particular the right hand side of is seen to contain the logarithmic term $$\frac{9\pi^{2}}{2}b_{3}^{4}\left(\frac{1}{2\pi^{2}}t^{-2-\frac{2}{r}}\ln t\right)^{4}$$ in its asymptotic expansion. Such a term involving the fourth power of a logarithm is missing from the left hand side of . This particularly gives $b_{3}=0$. Using , it now follows that $q(z,\bar{z})=c_{0}(z_{1}\bar{z}_{1})^{\frac{r}{2}-1}$ for some $c_{0}>0$. 
Since $p$ has no purely holomorphic or anti-holomorphic terms in $z_{1}$, this gives $p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}$ for some $c>0.$ However, the model kernel $B_{0}$ for this potential $p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}$ was computed in . Suppose $r>2$. By and definition of $\tilde{B}_{0,\alpha_{0}}$ in , $$\begin{aligned} \tilde{B}_{0,\alpha_{0}}\left(0\right) & =\frac{1}{\pi}\Gamma\left(2+\frac{2}{r}+\alpha_{0}\right)B_{0}\left(0\right)=\frac{1}{2\pi^{2}}\Gamma\left(2+\frac{2}{r}+\alpha_{0}\right)\frac{r}{\Gamma\left(\frac{2}{r}\right)}c^{\frac{2}{r}};\\ \left[\partial_{z_{1}}\tilde{B}_{0,\alpha_{0}}\right]\left(0\right) & =\left[\partial_{\overline{z}_{1}}\tilde{B}_{0,\alpha_{0}}\right]\left(0\right)=0;\\ \left[\partial_{z_{1}}\partial_{\bar{z}_{1}}\tilde{B}_{0,\alpha_{0}}\right]\left(0\right) & =\frac{1}{\pi}\Gamma\left(2+\frac{4}{r}+\alpha_{0}\right)\left[\partial_{z_{1}}\partial_{\bar{z}_{1}}B_{0}\right]\left(0\right)\\ & =\frac{1}{2\pi^{2}}\Gamma\left(2+\frac{4}{r}+\alpha_{0}\right)\frac{r}{\Gamma\left(\frac{4}{r}\right)}c^{\frac{4}{r}}.\end{aligned}$$ Plugging the above into with $z_{1}=0$, and noting $\Delta p(0)=0$ as $r>2$, we obtain $$\left(\frac{r}{2\pi^{2}}\right)^{3}\frac{\Gamma\left(2+\frac{4}{r}\right)}{\Gamma\left(\frac{4}{r}\right)}c^{\frac{8}{r}}\left[\frac{\Gamma\left(2+\frac{2}{r}\right)}{\Gamma\left(\frac{2}{r}\right)}\frac{\Gamma\left(4+\frac{2}{r}\right)}{\Gamma\left(\frac{2}{r}\right)}-\frac{\Gamma\left(3+\frac{2}{r}\right)}{\Gamma\left(\frac{2}{r}\right)}\frac{\Gamma\left(3+\frac{2}{r}\right)}{\Gamma\left(\frac{2}{r}\right)}\right]=\frac{9\pi^{2}}{2}\left[\frac{r}{2\pi^{2}}\frac{\Gamma\left(2+\frac{2}{r}\right)}{\Gamma\left(\frac{2}{r}\right)}c^{\frac{2}{r}}\right]^{4}.$$ Using $\Gamma\left(z+1\right)=z\Gamma\left(z\right)$, the above simplifies to the equation $$\left(1+\frac{4}{r}\right)\left(2+\frac{2}{r}\right)=\frac{9}{4}\left(1+\frac{2}{r}\right)^{2}.$$ Solving this quadratic 
equation yields $r=2$, a plain contradiction. This finishes the proof. ◻ * 15*. Note in our proof above, we compared the $\left(t^{-2-\frac{2}{r}}\ln t\right)^{4}$ term on both sides of . For that, we only used the information of $b_{3}$, where $b_{3}$ arises in the coefficient of the first $\ln t$ term in the asymptotics for the model Bergman kernel (see ). The authors also compared the non-logarithmic terms on two sides of : the $\left(t^{-2-\frac{2}{r}}\right)^{4}$ and $\left(t^{-2-\frac{2}{r}}\right)^{4}t^{-1}$ terms, whose calculations then involve $b_{0}$ and $b_{1}.$ Nevertheless, we only got tautologies and thus derived no contradiction. It is interesting to compare this with the proofs of Cheng's conjecture. In dimension $2$, Fu-Wong [@Fu-Wong97] used information of the logarithmic term in the Fefferman expansion of the Bergman kernel ; while in higher dimension, Huang and the second author [@Huang-Xiao21] utilized information of the non-logarithmic term (principal singular term) in the expansion . [^1]: N. S. was partially supported by the DFG funded project CRC/TRR 191.\ M. X. was partially supported by the NSF grants DMS-1800549 and DMS-2045104.
--- abstract: | In [@TTmethod], an alternative method for solving for the hyperbolic structure of the complement of a link in $S^3$ is developed. We generalize this method to other classes of links, in particular links in the thickened torus and fully augmented links. address: - Alice Kwon, Department of Science, SUNY Maritime, 6 Pennyfield Avenue, Bronx, NY 10465, USA - Byungdo Park, Department of Mathematics Education, Chungbuk National University, Cheongju 28644, Republic of Korea - ^\*^ Ying Hong Tham Fachbereich Mathematik, Universität Hamburg, Bundesstraße 55, 20146 Hamburg, Germany author: - Alice Kwon - Byungdo Park - Ying Hong Tham ^\*^ bibliography: - references.bib title: Generalization of the Thistlethwaite--Tsvietkova Method --- # Introduction {#s:intro} In [@TTmethod], Thistlethwaite and Tsvietkova describe an alternative method for calculating the hyperbolic structure on a classical link complement. They produce a set of equations based on the link diagram without resorting to a triangulation of the link complement; the unknowns in these equations are meant to capture and quantify parallel transport in the complete hyperbolic structure. We refer to solutions to these equations as *algebraic solutions*; there is a unique algebraic solution which corresponds to the complete hyperbolic geometry on the link complement, and we refer to it as the *geometric solution*. In general, it is difficult to know which of the algebraic solutions is the geometric solution. In instances when there is only one solution to those equations, one knows that the solution corresponds to the complete hyperbolic structure. In this paper, we generalize the method of [@TTmethod], which we refer to as the *TT method*, to 3-manifolds with toric ends. This generalization naturally lends itself to other types of links, in particular to links in the thickened torus and a class of links called *fully augmented links* (in $S^3$ and $T^2$). 
One of the main contributions of this paper is to show that for a certain class of links, namely for fully augmented links, there are purely algebraic necessary and sufficient criteria to determine if an algebraic solution is the geometric solution (Corollary [Corollary 41](#c:FAL-TTmethod-geometric){reference-type="ref" reference="c:FAL-TTmethod-geometric"}). ## Overview {#s:overview} We provide some background in Section [2](#s:background){reference-type="ref" reference="s:background"}, and in particular, we recall the original TT method in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}. Next, in Section [3](#s:TT-extensions){reference-type="ref" reference="s:TT-extensions"}, we generalize the TT method to 3-manifolds with toric cusps; this serves as a unifying principle for the several modifications of the TT method, in particular, for links in the thickened torus and fully augmented links. Then in Section [4](#s:alg-to-geom){reference-type="ref" reference="s:alg-to-geom"}, we relate the TT method to geometry. The first half of this section, Section [4.1](#s:suff-geometric){reference-type="ref" reference="s:suff-geometric"}, concerns 3-manifolds in general. Given a triangulation and an algebraic solution, we obtain shape parameters on the tetrahedra of the triangulation. Theorem [Theorem 36](#t:all-positive-geometric){reference-type="ref" reference="t:all-positive-geometric"} provides a sufficient condition for an algebraic solution to be geometric, namely, that all the shape parameters must agree with the orientation of the 3-manifold. In the second half, Section [4.2](#s:FAL-nec-suff-geometric){reference-type="ref" reference="s:FAL-nec-suff-geometric"}, we focus on fully augmented links. 
We describe a correspondence between algebraic solutions and circle packings associated with fully augmented link diagrams; this is encapsulated in Theorem [Theorem 40](#t:FAL-circle-packing-TTmethod){reference-type="ref" reference="t:FAL-circle-packing-TTmethod"}, where we also show that additional properties of circle packings translate to additional properties of the algebraic solution. From this we can infer that some necessary criteria for an algebraic solution to be geometric are in fact sufficient (see Corollary [Corollary 41](#c:FAL-TTmethod-geometric){reference-type="ref" reference="c:FAL-TTmethod-geometric"}). Finally, in Section [5](#s:examples){reference-type="ref" reference="s:examples"}, we show some examples.

# Background {#s:background}

## Recap of the TT method {#s:recap}

Let us briefly recall the methods in [@TTmethod], with some minor differences in conventions. Just as Thurston's gluing/completeness equations can be set up for any triangulation of a 3-manifold, the TT method can be set up for any link regardless of hyperbolicity. We will first describe the setup for an arbitrary link $L$. Fix $L \subset \mathbb{R}^3$ such that the projection to the first two coordinates is an immersion and gives a link diagram $D$. At each crossing $c$, let $γ_c$ be the straight line segment between the over- and underpass of $c$. We call $γ_c$ the *crossing arc* at $c$. Pick a small $δ > 0$, consider the $δ$-neighborhood of $L$, and let the $T_i$ be the boundaries of its components. We refer to the $T_i$ as the *peripheral tori* of $L$. Each $γ_c$ intersects two peripheral tori (possibly the same peripheral torus), at points $p_c^o$ (overpass) and $p_c^u$ (underpass). We refer to these points as the *peripheral endpoints* of the crossing arc $γ_c$. Consider an edge $e$ of the link diagram $D$ between two crossings $c,c'$. Let $R$ be a region of $D$ that has $e$ in its boundary.
Let $T_i$ be the peripheral torus around $e$ (viewed as a segment of the link). Let $p,p'$ be the peripheral endpoints of $γ_c, γ_{c'}$ respectively that are on $T_i$. There is a unique (up to homotopy) curve on $T_i$ that has endpoints $p,p'$ and whose projection to the link diagram is contained in $R$; denote this curve by $ε_{e,R}$; we call this a *peripheral edge*. Note that if $e$, as a segment of $L$, is the overpass at both $c,c'$ (or the underpass at both), then $ε_{e,R}$ is homotopic to $ε_{e,R'}$, where $R'$ is the other region containing $e$ in its boundary. If $e$ is the overpass at $c$ and the underpass at $c'$ (or vice versa), then $ε_{e,R},ε_{e,R'}$ differ by a meridian. In general, crossing arcs, typically denoted by $γ$, live in the link complement and have endpoints on the link (or a cusp), and peripheral edges, typically denoted by $ε$, live on the peripheral torus of a link component (or a cusp), and have endpoints on $γ$'s (specifically at their peripheral endpoints), so we may say something along the lines of "$ε$ is the peripheral edge (on peripheral torus $T$) between $γ$ and $γ'$". Now we bring in hyperbolic geometry. We think of ${\mathbb{H}^3}$ in terms of the upper half space model, and often identify it with $\mathbb{C}× \mathbb{R}^+$, with $(x,y,z) \leftrightarrow (x+yi,z)$. Suppose the link $L$ we have been considering is hyperbolic; we keep the picture of $L ⊂ \mathbb{R}^3 ⊂ S^3$, while keeping in mind that the complement has a complete hyperbolic structure.

**Definition 1**. A diagram of a hyperbolic link is *taut* if each associated checkerboard surface is incompressible and boundary incompressible in the link complement, and moreover does not contain any simple closed curve representing an accidental parabolic.

Suppose the link diagram $D$ is taut.
Perform isotopies on the peripheral tori $T_i$ and crossing arcs $γ_c$ such that no new intersections between peripheral tori and crossing arcs are created, and such that at the end of the isotopy:

- the meridians of $T_i$ have length 1,
- $T_i$ lift to horospheres,
- $γ_c$ are geodesics,
- and we also straighten out the peripheral edges $ε_{e,R}$ into straight segments (in the Euclidean $T_i$).

Then by [@adamswaist], these peripheral tori are pairwise disjoint (except when $L$ is the figure-eight knot; see Remark [Remark 2](#r:meridian-CC){reference-type="ref" reference="r:meridian-CC"}.) Next, we put coordinates on the peripheral tori, or more precisely, the horospheres that cover them. Orient the link arbitrarily, and orient the meridians of each $T_i$ by the right-hand rule. [^1] We identify these horospheres with $\mathbb{C}$ (as an affine $\mathbb{C}$-line) such that the meridians of the peripheral tori lift to $+1$, and such that the orientation from $\mathbb{C}$ agrees with the orientations on the peripheral tori induced from the link complement; in particular, if such a horosphere were centered at $(0,\infty) ∈ \mathbb{C}× \mathbb{R}^+$, i.e., a horizontal plane, then the projection onto $\mathbb{C}× \{0\}$ is a $\mathbb{C}$-affine map. [^2] (See Remark [Remark 2](#r:meridian-CC){reference-type="ref" reference="r:meridian-CC"} for a different way to identify with $\mathbb{C}$.) Now we define the geometric crossing labels and geometric edge labels, which capture some geometric information. In general, when we solve the equations of the TT method, we will obtain several algebraic solutions, i.e., collections of crossing and edge labels, and we need to perform further study in order to determine which of the algebraic solutions is the geometric one.
The *geometric crossing label* $w(γ_c)$ (or $w_{γ_c}$, or $w_c$ for brevity) is defined as follows: its magnitude is $e^{-d}$, where $d$ is the distance between $p_c^o$ and $p_c^u$ along $γ_c$, and its argument is the argument of $μ'$, where $μ'$ is the parallel transport of the meridian at $p_c^u$ along $γ_c$ to $p_c^o$ (and measured in the horosphere at $p_c^o$). [^3] The *geometric edge label* $u(ε_{e,R})$ (or $u_{e,R}$) is assigned to each edge $e$ and region $R$ of the link diagram such that $R$ has $e$ in its boundary, and is defined as follows. Fix an arbitrary orientation on $e$; typically we restrict the link orientation to $e$, and we make this choice here. Suppose $e$ is oriented from crossing $c$ to crossing $c'$. Let $p,p'$ be the peripheral endpoints of $γ_c,γ_{c'}$ respectively that are closer to $e$. The lift of $ε_{e,R}$ to $\mathbb{C}$ defines endpoints $\widetilde{p},\widetilde{p'}$ whose difference is well-defined; we define $u_{e,R} = \widetilde{p'} - \widetilde{p}$. These labels satisfy some equations that come from the hyperbolic geometry. One set of equations arises from edges, relating $u_{e,L}$ and $u_{e,R}$, where $L$ and $R$ are the regions to the left and right of $e$, respectively.
As mentioned before, the peripheral edges $ε_{e,L},ε_{e,R}$ are either homotopic or differ by a meridian, so the concatenation $ε_{e,R} ε_{e,L}^{-1}$ is homotopic to $κ \cdot \textrm{meridian}$, where $κ = -1,0,+1$, depending on $e$'s relations to the crossings (over- or underpass) at either end of $e$, and $e$'s relation to the meridian; as a result, we have the *edge equation* at $e$: $$\label{e:edge-equation} \begin{split} u_{e,R} - u_{e,L} &= κ' \cdot κ'' \\ κ' = \begin{cases} +1 \text{ if }e\text{ goes from under- to overpass} \\ -1 \text{ if }e\text{ goes from over- to underpass} \\ 0 \text{ else} \end{cases} \;\; & \;\; κ'' = \begin{cases} +1 \text{ if the meridian follows the right-hand rule around }e \\ -1 \text{ if the meridian follows the left-hand rule around }e \end{cases} \end{split}$$ Another set of equations arises out of a "periodic" condition around regions; these equations are written in terms of shape parameters, as defined in the following, which can in turn be expressed in terms of the labels. Let $R$ be a region of the link diagram, and let the edges and crossings around it be, in counterclockwise order, $e_1,c_1,e_2,c_2,\ldots,e_n,c_n$. One of the checkerboard surfaces of $L$ has a face $R'$ corresponding to $R$, which has boundary $e_1 γ_1 ⋅⋅⋅ e_n γ_n$ [^4] (writing $γ_i := γ_{c_i}$ for brevity). Lifting $R'$ to the universal cover, $\widetilde{R'}$ is bounded by geodesics $\widetilde{γ}_1,\ldots,\widetilde{γ}_n$, and the ideal vertex $z_{i+1}$ between $\widetilde{γ}_i$ and $\widetilde{γ}_{i+1}$ corresponds to $e_{i+1}$. The *shape parameter* $ζ_{c_i,R}$, assigned to the crossing $c_i$ and a region $R$ that has $c_i$ in its boundary, is the shape parameter associated to the edge $\widetilde{γ_i}$ in the tetrahedron defined by $z_{i-1},z_i,z_{i+1},z_{i+2}$, that is, $ζ_{c_i,R} = \frac{(z_{i-1} - z_i)(z_{i+1} - z_{i+2})} {(z_{i-1} - z_{i+1})(z_i - z_{i+2})}$.
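Concretely, $ζ_{c_i,R}$ is a cross-ratio of the four ideal points $z_{i-1},z_i,z_{i+1},z_{i+2}$, and is therefore unchanged when all four points are moved by a common Möbius transformation. The following is a minimal numerical sketch (the sample points and sample Möbius map are arbitrary illustrative values, with all four points taken finite, so the ideal point at infinity is not handled):

```python
# Shape parameter as a cross-ratio of four ideal points on the sphere
# at infinity, identified with C (the ideal point at infinity is not
# handled in this sketch).

def shape_parameter(z0, z1, z2, z3):
    """((z0 - z1)(z2 - z3)) / ((z0 - z2)(z1 - z3))."""
    return ((z0 - z1) * (z2 - z3)) / ((z0 - z2) * (z1 - z3))

zeta = shape_parameter(0, 1, 1 + 1j, 2j)

# Cross-ratios are preserved by Mobius transformations, so moving all
# four ideal points by the same isometry leaves zeta unchanged.
mobius = lambda z: (2 * z + 1j) / (z + 3)
moved = shape_parameter(*(mobius(z) for z in (0, 1, 1 + 1j, 2j)))
assert abs(zeta - moved) < 1e-12
```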
Note that if $ζ_{c_i,R}$ has *positive* imaginary part, then the tetrahedron, oriented by $[\overrightarrow{z_{i-1} z_i}, \overrightarrow{z_{i-1} z_{i+1}}, \overrightarrow{z_{i-1} z_{i+2}}]$, has the *opposite* orientation relative to the ambient ${\mathbb{H}^3}$. Intuitively, the shape parameter should be related to the geometric edge and crossing labels. Indeed, the shape parameter $ζ_{c_i,R}$ measures how much $γ_{i-1}$ and $γ_{i+1}$ are bent with respect to each other, with $γ_i$ in between acting as the "axis" for comparison; the geometric edge labels $u_{e_i,R},u_{e_{i+1},R}$ measure how much $γ_{i-1},γ_{i+1}$ respectively deviate from $γ_i$, measured with respect to the meridians, while $w_{c_i}$ measures the difference between meridians. More precisely, by [@TTmethod]\*Prop 4.1, [^5] $$\label{e:shape-parameter-labels} ζ_{c_i,R} = \frac{κ w_{c_i}}{u_{e_i,R} u_{e_{i+1},R}}$$ where $κ$ is a sign that depends on whether both edges $e_i,e_{i+1}$ point away from $c_i$ ($κ = 1$), both point towards $c_i$ ($κ = 1$), or point in different directions ($κ = -1$). The shape parameters for crossings around a region satisfy some equations, which in turn give the *region equations* from $R$. By triple transitivity, we may assume that $z_n,z_1,z_2$ are at $1,\infty,0$, so that $z_3 = ζ_1$ ($:= ζ_{c_1,R}$). The isometry $z \mapsto \frac{-ζ_1}{z - 1}$ sends $1,\infty,0$ to $\infty,0,ζ_1$. Thus, applying its inverse would send $z_1,z_2,z_3$ to $1,\infty,0$. Applying the inverse of $z \mapsto \frac{-ζ_2}{z-1}$ then sends $z_2,z_3,z_4$ to $1,\infty,0$, and so on, until we are back to having $z_n,z_1,z_2$ at $1,\infty,0$.
Thus, we have the following equality (up to scalar factor): $$\label{e:region-zeta-eqn} \begin{pmatrix} 0 & -ζ_1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 0 & -ζ_2 \\ 1 & -1 \end{pmatrix} \; ⋅⋅⋅ \; \begin{pmatrix} 0 & -ζ_n \\ 1 & -1 \end{pmatrix} \; \sim \; \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$ From the bottom-left entry, we get a relation $f_n = 0$, where $f_n$ is a polynomial in the $ζ_i$'s, which can be constructed recursively by: [^6] $$\label{e:poly-recursive} f_3 = 1 - ζ_2 \;\; , \;\; f_4 = ζ_2 + ζ_3 - 1 \;\; , \;\; f_n = -f_{n-1} - ζ_{n-1} f_{n-2}$$ By a simple induction, one can show that $$f_n = \sum_{k \leq \lfloor \frac{n-1}{2} \rfloor} (-1)^{n+k+1} \sum_{ \substack{ 1 < i_1 < ⋅⋅⋅ < i_k < n \\ |i_{j+1} - i_j| \geq 2 } } ζ_{i_1} ⋅⋅⋅ ζ_{i_k}$$ Note that $ζ_1$ and $ζ_n$ do not show up in $f_n$. Let $f_n^+$ (resp. $f_n^-$) be the polynomials obtained from $f_n$ by increasing (resp. decreasing) the indices of $ζ$'s by 1. By shifting indices up and down by 1, we get two more polynomial equations; since $f_n$ is linear in each variable, it is clear that the three equations $f_n = f_n^+ = f_n^- = 0$ are algebraically independent. Finally, the *region equations* on the geometric crossing and edge labels are simply the equations $$\label{e:region-equations-f} f_n = f_n^+ = f_n^- = 0$$ but with the shape parameters substituted by geometric crossing and edge labels as in ([\[e:shape-parameter-labels\]](#e:shape-parameter-labels){reference-type="ref" reference="e:shape-parameter-labels"}). So far, we have described how the hyperbolic geometry on the link complement can be encoded in these labels (which actually determines the hyperbolic geometry; see next section), and that these labels satisfy some equations. The TT method is simply the system of edge and region equations above.
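The recursion and the matrix product above can be cross-checked numerically. The following is a minimal Python sketch (not part of the TT method itself; the sample complex values are arbitrary) verifying that the bottom-left entry of the product in ([\[e:region-zeta-eqn\]](#e:region-zeta-eqn){reference-type="ref" reference="e:region-zeta-eqn"}) reproduces $f_n$:

```python
# Sketch: check that f_n, built by the recursion
#   f_3 = 1 - z_2,  f_4 = z_2 + z_3 - 1,  f_n = -f_{n-1} - z_{n-1} f_{n-2},
# equals the bottom-left entry of M(z_1) ... M(z_n), M(z) = [[0, -z], [1, -1]].

def f_poly(zetas):
    """f_n evaluated at zetas = [z_1, ..., z_n], n >= 3 (z_1, z_n are unused)."""
    n = len(zetas)
    f2, f1 = 1 - zetas[1], zetas[1] + zetas[2] - 1    # f_3, f_4 (0-indexed list)
    if n == 3:
        return f2
    for k in range(5, n + 1):
        f2, f1 = f1, -f1 - zetas[k - 2] * f2          # f_k = -f_{k-1} - z_{k-1} f_{k-2}
    return f1

def bottom_left(zetas):
    """Bottom-left entry of the product M(z_1) ... M(z_n)."""
    a, b, c, d = 1, 0, 0, 1                           # 2x2 identity
    for z in zetas:
        a, b, c, d = b, -z * a - b, d, -z * c - d     # right-multiply by M(z)
    return c

zetas = [0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 0.5j, 0.8 + 1.1j,
         -0.6 - 0.9j, 1.5 + 0.2j, 0.1 - 1.3j]
for n in range(3, len(zetas) + 1):
    assert abs(f_poly(zetas[:n]) - bottom_left(zetas[:n])) < 1e-9
```

In particular, the check confirms that the bottom-left entry never involves $ζ_1$ or $ζ_n$, as stated above.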
More precisely, one starts with unknown variables $w(γ_c), u(ε_{e,R})$ associated with crossing arcs $γ_c$ and peripheral edges $ε_{e,R}$, and looks for solutions satisfying the edge and region equations; we refer to such a solution $Ω = (w(-),u(-))$ as an *algebraic solution to the TT method*. One then checks, for each algebraic solution, whether it gives a complete hyperbolic structure on the link complement. We refer to the algebraic solution that corresponds to the complete hyperbolic structure on the link complement as the *geometric solution*, i.e., it is the algebraic solution which has the geometric labels as values. Currently, there are no easy general criteria for verifying whether an algebraic solution comes from the hyperbolic structure; one can attempt to obtain a hyperbolic geometry on the link complement from an algebraic solution (see Section [2.1.1](#s:geometric-content-labels){reference-type="ref" reference="s:geometric-content-labels"}), but the geometry may end up being highly degenerate. In Section [4.2.1](#s:FAL-nec-crit){reference-type="ref" reference="s:FAL-nec-crit"}, we give necessary and sufficient criteria for an algebraic solution to be geometric when the link is *fully augmented*. (See Theorem [Theorem 40](#t:FAL-circle-packing-TTmethod){reference-type="ref" reference="t:FAL-circle-packing-TTmethod"}.)

### Note on Geometric Content of Labels {#s:geometric-content-labels}

As we can see from ([\[e:shape-parameter-labels\]](#e:shape-parameter-labels){reference-type="ref" reference="e:shape-parameter-labels"}), the geometric crossing and edge labels contain geometric information. The difference between these labels and the shape parameters is that the geometric information in the shape parameters is designed to be intrinsic, independent of any choice of reference, whereas the labels depend on some choices.
The values of the crossing and edge labels depend heavily on the value of the meridian under the identification of the universal cover of peripheral tori with $\mathbb{C}$. (See Remark [Remark 2](#r:meridian-CC){reference-type="ref" reference="r:meridian-CC"}.) However, we may interpret the labels as giving geometric information directly when the universal cover is viewed the right way. To wit, given a point $p$ on a peripheral torus, we say that we are in *$p$-centered view*, or the *view is centered at $p$*, if the identification of the universal cover of the link complement with ${\mathbb{H}^3}$ is such that $(0,1) ∈ \mathbb{C}× \mathbb{R}^+$ is a lift of $p$, the peripheral torus at $p$ lifts to the horizontal plane at height 1, and the meridian based at $p$ lifts to the segment ending at $(1,1) \in \mathbb{C}× \mathbb{R}^+$. Suppose we hop around the link, visiting peripheral endpoints $p_1,p_2,\ldots$ by traversing crossing arcs ($γ_c$'s) or peripheral edges ($ε_{e,R}$'s). Place ${\mathbb{H}^3}$ in $p_1$-centered view. Each time we traverse a crossing arc or peripheral edge from $p_i$ to $p_{i+1}$, we perform the isometry on ${\mathbb{H}^3}$ that takes us from $p_i$-centered view to $p_{i+1}$-centered view. Then the crossing or edge label directly specifies this isometry as follows:

- If $p_i,p_{i+1}$ are connected by a peripheral edge $ε$, say oriented from $p_i$ to $p_{i+1}$, then the isometry of ${\mathbb{H}^3}$ that turns $p_i$-centered view into $p_{i+1}$-centered view is a horizontal translation by $-u(ε)$.
- If $p_i,p_{i+1}$ are connected by a crossing arc $γ$, then the isometry of ${\mathbb{H}^3}$ that turns $p_i$-centered view into $p_{i+1}$-centered view is an inversion followed by scaling by $w = w(γ)$, that is, (at the sphere at infinity,) $ζ \mapsto w/ζ$.
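These two view changes act on the sphere at infinity as Möbius transformations; before the geometric justification below, here is a minimal numerical sanity check (the labels $u$, $w$ and the test point are arbitrary sample values):

```python
# The two view changes as Mobius maps z -> (a z + b) / (c z + d) on the
# sphere at infinity: a translation by -u for a peripheral edge, and the
# inversion-and-scaling z -> w / z for a crossing arc.

def mobius(M, z):
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

u, w = 1.5 - 0.5j, 0.3 + 0.4j   # arbitrary sample edge / crossing labels
z = 2.0 + 1.0j                  # an arbitrary test point in C

# peripheral edge from p_i to p_{i+1}: horizontal translation by -u
assert abs(mobius(((1, -u), (0, 1)), z) - (z - u)) < 1e-12
# crossing arc: inversion followed by scaling by w
assert abs(mobius(((0, w), (1, 0)), z) - w / z) < 1e-12
```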
This is because, in the $p_i$-centered view, $p_{i+1}$ is at $(0,0,e^{-d}=|w|)$, and the meridian at $p_{i+1}$ is parallel to $w$; after the inversion, $p_{i+1}$ is at $(0,0,e^d = 1/|w|)$, and the meridian at $p_{i+1}$ is parallel to $\overline{w}$, and after the scaling by $w$, clearly we are in $p_{i+1}$-centered view. In terms of matrices in ${PSL(2,\mathbb{C})}\cong \mathrm{Isom}({\mathbb{H}^3})$, $$\label{e:label-matrix} ε \mapsto \begin{pmatrix} 1 & -u(ε) \\ 0 & 1 \end{pmatrix} \;\;\;\; γ \mapsto \begin{pmatrix} 0 & w(γ) \\ 1 & 0 \end{pmatrix}$$ The content of this section is implicit in [@TTmethod], in particular when they discuss how the labels give rise to a representation of the fundamental group of the link complement into $PSL_2(\mathbb{C})$ ([@TTmethod]\*Section 5), and also when they give a version of the region equations that directly involves the labels without a detour through shape parameters ([@TTmethod]\*Prop 4.2). *Remark 2*. We may consider peripheral tori such that the meridian is of different length, in particular, we may take peripheral tori that are small enough so that they are pairwise disjoint, thus avoiding invoking [@adamswaist]. However, this comes at the cost of changing some of the equations. Say the meridian of peripheral torus $T_i$ around a link component has length $r_i$. We identify horospheres covering $T_i$ with $\mathbb{C}$ such that $1 ∈ \mathbb{C}$ corresponds to a vector of length 1; then the meridian is sent to $\nu_i = r_i e^{i θ_i} ∈ \mathbb{C}$ for some angle $θ_i$. We may define the geometric crossing and edge labels $w'(-),u'(-)$ in the same way, except that for the geometric crossing labels, we parallel transport and measure the vector corresponding to 1 instead of the meridian. What is the relation between these labels and the geometric labels from the original method? 
The peripheral edges $ε_-$ still relate to the meridians in the same way, so the labels should be scaled by $\nu_i$, i.e., $u'(ε) = \nu_i \cdot u(ε)$ for peripheral edges $ε$ on $T_i$. Note that they satisfy modified edge equations $u_{e,R}' - u_{e,L}' = κ \cdot \nu_i$, with $\nu_i$ being equal to 1 in the original method. In order to keep the shape parameters $ζ_{c_j,R}$ unchanged (which must be unchanged since they are intrinsic quantities), we should scale the crossing label for the crossing arc $γ$ between peripheral tori $T_i, T_{i'}$ by $\nu_i \nu_{i'}$, i.e., $w'(γ) = \nu_i \nu_{i'} \cdot w(γ)$. We can also show this directly: the distance between the new and old peripheral tori is $\log \frac{1}{r_i}$, so the distance factor $e^{-d}$ leads to an extra factor of $r_i r_{i'}$, and it is also clear that the arguments $e^{i θ_i}, e^{i θ_{i'}}$ also factor into $w'(γ)$. Thus, the only modification to the TT method is to modify the edge equations as above, and we can recover algebraic solutions to the original TT method by appropriately scaling the algebraic solutions to the modified equations that we have considered here. The notion of $p$-centered view (Section [2.1.1](#s:geometric-content-labels){reference-type="ref" reference="s:geometric-content-labels"}) is the same, except that $1 ∈ \mathbb{C}$ (instead of the meridian) should be at $(1,1)$.

## Fully Augmented Links {#s:FAL}

We briefly review *fully augmented links*, or *FAL*s for short, and the decomposition/triangulation of their complements, to which we apply the generalized TT method. We refer to [@purcell] for a comprehensive introduction to FALs in $S^3$, and to [@kwon2020] for FALs in $\mathbf{T}$, where $\mathbf{T}$ denotes the thickened torus $T^2 × (-1,1)$. A FAL in $S^3$ or $\mathbf{T}$ is obtained from the diagram of a link as follows. Let $K$ be a link with diagram $D(K)$.
We encircle each twist region (a maximal string of bigons) of $D(K)$ with a single unknotted component called a crossing circle. The complement of the resulting link is homeomorphic to the complement of the link obtained by removing all full twists, i.e., pairs of crossings, from each twist region. Therefore, a diagram of the fully augmented link contains a finite number of crossing circles, each encircling two strands of the link. These crossing circles are perpendicular to the projection plane, and the other link components are embedded on the projection plane, except possibly for a finite number of single crossings, called half-twists, which are adjacent to the crossing circles. See Figure [1](#fig:falS3){reference-type="ref" reference="fig:falS3"}. Lackenby's cut-slice-flatten method [@lackenby] for fully augmented links in $S^3$ produces a decomposition of the link complement into two isometric hyperbolic polyhedra. In [@kwon2020] the first author of this paper describes the analog of the cut-slice-flatten method for fully augmented links in $\mathbf{T}$.

**Definition 3**. A *fully augmented link diagram* $D$ in $S^3$ or $\mathbf{T}$ is a link diagram that is obtained from a twist-reduced diagram of a link $K$ in $S^3$ or $\mathbf{T}$ as follows:

1. augment every twist region with an augmentation circle;
2. remove all full twists.

See Figure [1](#fig:falS3){reference-type="ref" reference="fig:falS3"}. A *fully augmented link* in $S^3$ or $\mathbf{T}$ is a link that has a fully augmented link diagram in $S^3$ or $\mathbf{T}$.

![(a) Link diagram of $K$ (b) crossing circles added to each twist region (c) fully augmented link diagram with all full twists removed (d) fully augmented link diagram with no half-twists](figures/falS3.jpeg){#fig:falS3 height="4cm"}

**Definition 4**. A *torihedron* is a cone on the torus, i.e., $T^2 × [0,1]/(T^2 × \{1\})$, together with a cellular graph $G$ on $T^2 × \{0\}$. The graph $G$ is called the *graph of the torihedron*.
We refer to the edges (resp. faces) of $G$ as the edges (resp. faces) of the torihedron. An *ideal torihedron* is a torihedron with the vertices of $G$ and the vertex $T^2 × \{1\}$ removed; the removed vertices are referred to as its *ideal vertices*. The following is a rephrasing of the "base-angled pyramids" from [@kwontham]: **Definition 5**. Let ${\mathcal{T}}$ be a torihedron, with graph $G$. The *conical polyhedral decomposition* of ${\mathcal{T}}$ is the polyhedral decomposition obtained by coning $G$ over the cone point, i.e., it has a 3-cell for each face of $G$, a 2-cell for each edge of $G$ (in addition to the faces of $G$), and a 1-cell for each vertex of $G$ (in addition to the edges of $G$). **Definition 6**. Let ${\mathcal{T}}$ be a torihedron, with graph $G$. Let $G'$ be a triangulation of $T^2 × \{0\}$ obtained from $G$ by adding only edges, and let ${\mathcal{T}}'$ be the torihedron with graph $G'$. The *conical triangulation of ${\mathcal{T}}$ (based on $G'$)* is the conical polyhedral decomposition of ${\mathcal{T}}'$. We may say a triangulation is *a* conical triangulation of ${\mathcal{T}}$ if we do not need to be specific about the choice of $G'$. *Remark 7*. For the polyhedral decomposition of FALs in $S^3$, one can also obtain a triangulation that refines it by coning over a point, but there is no canonical choice of such a point; we simply make a choice of a vertex of the bow-tie graph, which corresponds to an ideal vertex in each polyhedron. *Remark 8*. The conical triangulation of Definition [Definition 6](#d:torihedra-conical-triangulation){reference-type="ref" reference="d:torihedra-conical-triangulation"} and the triangulation of Remark [Remark 7](#r:FAL-S3-triangulation){reference-type="ref" reference="r:FAL-S3-triangulation"} are geometric. 
Let us describe the polyhedral/torihedral decomposition and triangulation of a FAL complement in $S^3$ and $\mathbf{T}$ that one obtains by the cut-slice-flatten method and some further refinement. Let $L$ be a FAL. We first assume that $L$ has no half-twists. **Definition 9**. Let $L$ be a FAL in $S^3$ with no half-twists. For an augmentation circle $C_i$, denote by $F_{C_i}$ the *spanning (twice-punctured) disk* bounded by $C_i$; the projection plane cuts $F_{C_i}$ along three arcs, denoted $γ_i^0,γ_i^1,γ_i^2$, into two half-disks which we denote by $F_{C_i}^+,F_{C_i}^-$ and refer to as the *spanning faces*. The arcs $γ_i^{•}$ separate the projection plane into 2-cells, one for each region $R$ of the link diagram of $L$, which we denote by $F_R$; we refer to these 2-cells as *regional faces*. For $L$ in $S^3$, Lackenby's ideal polyhedral decomposition of $S^3 - L$ consists of the arcs $γ_i^{•}$ as 1-cells, the faces $F_{C_i}^+,F_{C_i}^-,F_R$ as 2-cells, and two 3-cells which are the connected components of $S^3 - L$ minus the 1- and 2-cells. **Definition 10**. Let $L$ be a FAL in $\mathbf{T}$ with no half-twists. The definitions of arcs and faces in Definition [Definition 9](#d:decomp-S3){reference-type="ref" reference="d:decomp-S3"} apply to $\mathbf{T}- L$. Kwon's ideal torihedral decomposition of $\mathbf{T}- L$ [@kwon2020] consists of the arcs $γ_i^{•}$ as 1-cells, the faces $F_{C_i}^+,F_{C_i}^-,F_R$ as 2-cells, and two ideal torihedra which are the connected components of $\mathbf{T}- L$ minus the 1- and 2-cells. 
In each torihedron, the conical polyhedral decomposition adds one 1-cell for each augmentation circle $C_i$, denoted by $γ_{C_i}$, and one 1-cell for each segment $a_j$ of $L$ demarcated by spanning disks, denoted by $γ_{a_j}$; these 1-cells connect $C_i$ and $a_j$ to the cusp of the torihedron, and we refer to them as *vertical crossing arcs*. It also adds one 2-cell for each pair $(C_i,a_j)$ of an augmentation circle $C_i$ and segment $a_j$ that are adjacent, which we denote by $F_{C_i,a_j}$ and refer to as a *vertical face*, whose boundary edges are $γ_{C_i}, γ_{a_j}$, and $γ_i^{[1/2]}$ (with ${[1/2]}$ meaning either $1$ or $2$). [^7] We thus get a polyhedral decomposition of $\mathbf{T}- L$ with the above 1- and 2-cells in addition to those in Definition [Definition 9](#d:decomp-S3){reference-type="ref" reference="d:decomp-S3"}, and two 3-cells for each regional face. The graph on the boundary of the polyhedra/torihedra is the bow-tie graph of $L$, as described below:

**Definition 11**. The *bow-tie graph* $B_L$ of a FAL $L$ in $S^3$ or $\mathbf{T}$ is obtained by removing all half-twists, replacing each augmentation circle by a pair of triangles (a "bow-tie"), and shrinking each segment of the link to a vertex. See Figure [7](#fig:falDecomp){reference-type="ref" reference="fig:falDecomp"}.

Next, we describe the same decompositions for FALs with half-twists. The definitions below technically subsume Definitions [Definition 9](#d:decomp-S3){reference-type="ref" reference="d:decomp-S3"}, [Definition 10](#d:decomp-T2){reference-type="ref" reference="d:decomp-T2"}, but it is easier to describe the no half-twist case first.

**Definition 12**. Let $L$ be a FAL in $S^3$ or $\mathbf{T}$, possibly with half-twists. Let $L'$ be the FAL obtained from $L$ by removing all half-twists. The construction of the polyhedral decomposition of $S^3 - L$ is the same as for $S^3 - L'$ except for regional faces, which we describe below. For a region $R$, let $F_R'$ be the regional face of $S^3 - L'$.
As an ideal polygon, its boundary edges are of two types: $γ_i^0$ for some augmentation circle $C_i$, or a pair of edges, both $γ_i^1$ or both $γ_i^2$. Near an edge of the former type, $F_R$ is obtained from $F_R'$ by a half-twist, simply following the half-twist; near edges of the latter type, $F_R$ is constructed as in Figure [3](#f:regional-face-half-twist){reference-type="ref" reference="f:regional-face-half-twist"}.

![ Regional face near a half-twist; the two diagrams are related by a reflection across the projection plane; they are also related by performing a Dehn twist on the augmentation circle. See Remark [Remark 14](#r:FAL-half-twist-symmetry){reference-type="ref" reference="r:FAL-half-twist-symmetry"}. ](figures/diagram-regional-face-half-twist-top.png "fig:"){#f:regional-face-half-twist height="4cm"} ![ Regional face near a half-twist; the two diagrams are related by a reflection across the projection plane; they are also related by performing a Dehn twist on the augmentation circle. See Remark [Remark 14](#r:FAL-half-twist-symmetry){reference-type="ref" reference="r:FAL-half-twist-symmetry"}. ](figures/diagram-regional-face-half-twist-bottom.png "fig:"){#f:regional-face-half-twist height="4cm"}

*Remark 13*. In the case of $L$ in $\mathbf{T}$, all the vertical arcs and faces are constructed in the same way as for a FAL without half-twists, but they may be less obvious to visualize.

*Remark 14*. When a FAL $L$ has no half-twists, its complement has an obvious symmetry that swaps the top and bottom polyhedra/torihedra, which, in particular, preserves the 1- and 2-cells in the projection plane and swaps the two 3-cells associated with each region. Suppose $L$ has half-twists. Let $L'$ be the link obtained from $L$ by applying this top-bottom symmetry (i.e., flip the crossing at the half-twists).
Then the polyhedral decompositions of Definition [Definition 12](#d:decomp-half-twist){reference-type="ref" reference="d:decomp-half-twist"} for $L$ and $L'$ are related in two ways:

- by the top-bottom symmetry;
- by performing a Dehn-twist at each augmentation circle that harbors a half-twist.

Thus, the composition of the top-bottom symmetry and then the Dehn-twists is a symmetry on the complement of $L$ that preserves the 1- and 2-cells in the projection plane and swaps the two 3-cells associated with each region.
![ A brief review of the cut-slice-flatten method of [@lackenby], [@kwon2020]: (a) A fundamental domain for a fully augmented square weave, $L$. (b) Disks cut in half at each crossing circle. (c) Sliced and flattened half-disks at each crossing circle (d) Collapsing the strands of the link and parts of the augmented circles (shown in bold) to ideal points gives the bow-tie graph $B_{L}$. The half-disks become shaded bow-ties and the white regions become hexagons. ](figures/picall1.jpeg){#fig:falDecomp height="3cm"} ![ A brief review of the cut-slice-flatten method of [@lackenby], [@kwon2020]: (a) A fundamental domain for a fully augmented square weave, $L$. (b) Disks cut in half at each crossing circle. (c) Sliced and flattened half-disks at each crossing circle (d) Collapsing the strands of the link and parts of the augmented circles (shown in bold) to ideal points gives the bow-tie graph $B_{L}$.
The half-disks become shaded bow-ties and the white regions become hexagons. ](figures/picall2.jpeg){#fig:falDecomp height="3cm"} ![ A brief review of the cut-slice-flatten method of [@lackenby], [@kwon2020]: (a) A fundamental domain for a fully augmented square weave, $L$. (b) Disks cut in half at each crossing circle. (c) Sliced and flattened half-disks at each crossing circle (d) Collapsing the strands of the link and parts of the augmented circles (shown in bold) to ideal points gives the bow-tie graph $B_{L}$. The half-disks become shaded bow-ties and the white regions become hexagons. ](figures/picall3.jpeg){#fig:falDecomp height="3cm"} ![ A brief review of the cut-slice-flatten method of [@lackenby], [@kwon2020]: (a) A fundamental domain for a fully augmented square weave, $L$. (b) Disks cut in half at each crossing circle. (c) Sliced and flattened half-disks at each crossing circle (d) Collapsing the strands of the link and parts of the augmented circles (shown in bold) to ideal points gives the bow-tie graph $B_{L}$. The half-disks become shaded bow-ties and the white regions become hexagons. 
](figures/picall4.jpeg){#fig:falDecomp height="3cm"} \(a\) \(b\) \(c\) \(d\) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 
### Circle Packings and FALs {#s:FAL-circ} We briefly recall circle packings and their relation to FALs. In short, the complete hyperbolic geometry on the complement of a FAL is determined by a circle packing, with one circle corresponding to each region of the link. We refer to [@purcell] and [@kwon2020] for more details. **Definition 15**. Let $Σ$ be a surface which is $\mathbb{C}$ or $S^2=\mathbb{C}\cup \{\infty\}$, and let $Γ$ be a simple graph (typically a triangulation) embedded in $Σ$. A *circle packing realizing $Γ$ in $Σ$* is a collection of circles in $Σ$, $Ξ = \{X_i\}_{i ∈ V}$, indexed by the vertices $V$ of $Γ$, such that $X_i,X_j$ are tangent if $i,j$ are adjacent vertices in $Γ$. (Note that our definition of a circle packing slightly differs from the standard one, in that the interiors of circles are considered as additional data; see e.g. [@BandS] for more on circle packings.) **Definition 16**. Let $Σ = T^2 = \mathbb{C}/Λ$ be the torus given as a quotient of $\mathbb{C}$ by a group $Λ \simeq \mathbb{Z}\oplus \mathbb{Z}$ of translations, and let $Γ$ be a graph embedded in $Σ$. A *circle packing realizing $Γ$ in $Σ$* is a circle packing realizing $\widetilde{Γ}$ in $\widetilde{Σ}\simeq \mathbb{C}$ that is invariant under $Λ$, where $\widetilde{Γ}$ is the preimage of $Γ$ in $\mathbb{C}$. We allow "accidental" intersections or tangencies of circles; that is, $i,j$ not adjacent in $Γ$ does not imply that $X_i,X_j$ are not tangent.
We also allow the circles to have interiors that intersect (see Definition [Definition 19](#d:univalence){reference-type="ref" reference="d:univalence"}). Note that in the $Σ = T^2$ case, the existence of a circle packing on $Γ$ may impose a constraint on $Λ$. In the following definitions, $Σ$ may be one of the surfaces above, and $Γ \subset Σ$ is an embedded simple graph as above. **Definition 17**. An *interior filling* of a circle packing $Ξ$ is a choice, for each circle $X_i ∈ Ξ$, of a connected component of $Σ \backslash X_i$; the chosen component is called the *interior (region)*. Equivalently, it is a choice of orientation of each $X_i$ (the corresponding interior filling being defined by the region to the left as the interior). Of course, for $Σ = \mathbb{C}, T^2$, a circle packing has a unique interior filling. The following are some adjectives for interior fillings: **Definition 18**. An interior filling is *locally order-preserving* if the cyclic order on points of tangencies on $X_v$ from the orientation on $X_v$ (i.e., induced by the interior) agrees with the counterclockwise cyclic order on the vertices adjacent to $v$ in $Γ$ (imposed by its embedding in $Σ$). **Definition 19**. An interior filling is *locally univalent* if the interiors of $X_i,X_j$ are disjoint for adjacent vertices $i,j$ of $Γ$; if disjointness holds for *all* pairs of vertices, then we say the interior filling is *(globally) univalent*. (See [@Stephenson].) Note that if a locally univalent interior filling exists, it is unique, so we may describe a circle packing as locally univalent. Univalence implies the resulting graph of tangencies (vertices are the circles, edges are tangencies) is isotopic to $Γ$ (or the image of $Γ$ under a diffeomorphism of $Σ$). In fact, in our use case, it is a local condition: **Lemma 20**. *Let $Ξ$ be a circle packing of a triangulation $Γ$ of $\mathbb{C}$ or $S^2$, with a choice of interior filling.
Then univalence is a local condition in the following sense: if the interior filling is locally order-preserving and locally univalent, then the interior filling is univalent.* Note that there is no finiteness condition on $Γ$. *Proof.* First, observe that if a vertex of $Γ$ has degree 2, then since $Γ$ is a triangulation, the only possibility is that $Σ = S^2$, and $Γ$ consists of three vertices, all connected to each other, so there is nothing to prove. Hence let us assume that the degree of every vertex of $Γ$ is at least 3. Let $Γ_D$ be the dual graph to $Γ$, which has a natural "transverse to $Γ$" embedding in $S^2$ (uniquely defined up to isotopy). Let $R_v$ be the region of $Γ_D$ corresponding to vertex $v$ of $Γ$. We show that there is a map from $S^2$ to itself that sends $X_v$ and its interior into $R_v$ for each vertex $v$, which would immediately imply univalence. Consider a triangular region of $Γ$ between vertices $v_1,v_2,v_3$. The interiors of $X_{v_1},X_{v_2},X_{v_3}$ are pairwise disjoint. Let the points of tangencies among the $X$'s be $q_1,q_2,q_3$, with $q_{i+2}$ between $X_{v_i}$ and $X_{v_{i+1}}$. !["Triangular" region between $X_{v_1},X_{v_2},X_{v_3}$](figures/fig-univalent-q-alpha.png){#f:univalent-q-alpha width="6cm"} There are two arcs of $X_{v_1}$ between $q_2$ and $q_3$; by the locally order-preserving property and $\deg v_1 \geq 3$, exactly one of these arcs has no other points of tangencies. Let $α_1$ be this arc. Similarly define $α_2, α_3$. These three arcs enclose a region that is exactly one of the connected components of the complement of the union of $X_{v_1},X_{v_2},X_{v_3}$ and their interiors. Pick a point $q_{v_1 v_2 v_3}$ in this region, and draw curves from each $q_i$ to $q_{v_1 v_2 v_3}$ within this enclosed region.
It is easy to see that these points $q_{v_1 v_2 v_3}$ and the curves we draw to the points of tangencies make an embedded graph that is isomorphic to $Γ_D$; the self-diffeomorphism of $S^2$ sending this graph to $Γ_D$ would send $X_v$ and its interior into $R_v$, hence we are done. ◻ **Definition 21**. Let $L$ be a FAL in $S^3$ (resp. $\mathbf{T}$) with bow-tie graph $B_L$ in $S^2$ (resp. $T^2$). The *region graph*, denoted by $Γ_L$, is the embedded graph consisting of a vertex in each non-bow-tie region $R$ of $B_L$, and an edge for each vertex of $B_L$ (which connects the two non-bow-tie regions adjacent to it). The faces of $Γ_L$ correspond bijectively with bow-tie faces, hence $Γ_L$ is a triangulation of $S^2$ (resp. $T^2$). A univalent circle packing on $Γ_L$ determines geometries on the polyhedra in the (conical) polyhedral decomposition of the link complement, which follows from the observation that the graph on $Σ$ ($= S^2$ or $T^2$) with vertices given by the points of tangencies of the circle packing and edges connecting neighboring points of tangencies on a circle is precisely the bow-tie graph $B_L$. In fact, this is the complete hyperbolic geometry on the link complement: **Lemma 22**. *Let $L$ be a FAL in $S^3$ (resp. $\mathbf{T}$) and let $Γ_L$ be its region graph in $S^2$ (resp. $T^2$). A univalent circle packing on $Γ_L$ is unique up to conformal maps, and determines the geometry of the polyhedra in the polyhedral (resp. conical polyhedral) decomposition of the link complement that makes it complete hyperbolic; that is, the complete hyperbolic metric on the link complement determines a realization of each polyhedron as an ideal hyperbolic polyhedron, whose ideal vertices are the points of tangencies of the circle packing.* *Proof.* For uniqueness, see [@koebe] and [@BandS]. For circle packing to geometry, see [@lackenby] and [@kwon2020]. 
◻ # Extensions of the TT method {#s:TT-extensions} We consider an extension of the TT method to links in the thickened torus, which basically amounts to adding more crossing arcs between the link and the ${*_N},{*_S}$ cusps of $\mathbf{T}$. We also consider modifications of the TT method that are adapted to FALs in $S^3$ and $\mathbf{T}$, which, in particular, avoid some redundancies. While it is quite easy to find the right extensions/modifications, we proceed in a more systematic manner, namely by generalizing the TT method to 3-manifolds with toric boundaries together with a polyhedral decomposition. Then we realize these extensions/modifications of the TT method as applications of the generalized method to the link complements together with some natural polyhedral decomposition. ## Generalization to 3-Manifolds with Toric Boundary {#s:TT-generalization}  \ Consider a compact 3-manifold $M$ with toric boundaries $T_i$, with at least one boundary component. For each boundary component $T_i$, choose a simple closed curve $μ_i$. Let $τ$ be an ideal polyhedral decomposition of $M^{∘} := M \backslash ∂M$; its truncation, $\overline{τ}$, is a polyhedral decomposition of a truncated $M^{∘}$, which is naturally identified with $M$. Each vertex of $\overline{τ}$ lies on the boundary, and is incident to exactly one internal 1-cell. The internal 1-cells of $\overline{τ}$ are in bijection with the 1-cells of $τ$; the boundary 1-cells of $\overline{τ}$ are in bijection with corners of 2-cells of $τ$, that is, a boundary 1-cell $e ∈ \overline{τ}$ comes from the truncation of a 2-cell at one of its corners. The boundary of each internal 2-cell $P$ of $\overline{τ}$ consists of 1-cells that alternate between boundary 1-cells and internal 1-cells. Associate to each internal 1-cell $γ$ a *crossing label* $w(γ) ∈ \mathbb{C}$, and to each boundary (oriented) 1-cell $\vec{e}$ an *edge label* $u(\vec{e}) ∈ \mathbb{C}$.
A collection of labels $Ω = (w(-), u(-))$ is an *algebraic solution* if it satisfies: - Normalization: For each boundary component $T_i$, if $\vec{e}_1,\ldots,\vec{e}_k$ is a sequence of 1-cells in $T_i$ such that their concatenation is homotopic to $μ_i$, then $\sum_j u_{\vec{e}_j} = 1$; - Region equations: For each oriented internal 2-cell $P$, with boundary 1-cells $\vec{γ}_1,\vec{e}_1,\ldots,\vec{γ}_k,\vec{e}_k$ in that order, their labels satisfy $$\label{e:region-eqn-matrix} \begin{pmatrix} 0 & w_{γ_k} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & -u_{\vec{e}_k} \\ 0 & 1 \end{pmatrix} ⋅⋅⋅ \begin{pmatrix} 0 & w_{γ_1} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & -u_{\vec{e}_1} \\ 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$ in ${PSL(2,\mathbb{C})}$; - Edge equations: For each oriented boundary 2-cell $P$, the sum of edge labels in its boundary is 0, i.e., $\sum_{\vec{e} ∈ ∂P} u_{\vec{e}} = 0$; equivalently, we may write this in an analogous form as above: $$\label{e:edge-eqn-matrix} \prod_{\vec{e} ∈ ∂P} \begin{pmatrix} 1 & -u_{\vec{e}} \\ 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$ If $M^{∘}$ has a complete hyperbolic geometry, we can construct *geometric labels* just as in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}, that is, for each boundary component, take the horosphere $H_i$ such that the chosen closed curve $μ_i$ has length 1, and identify the lifts of $H_i$ in the universal cover of $M^{∘}$ with $\mathbb{C}$ so that the lifts of $μ_i$ are identified with $1 ∈ \mathbb{C}$; then the definition of geometric crossing and edge labels follows exactly as in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}. The geometric solution is easily seen to be an algebraic solution (see Section [2.1.1](#s:geometric-content-labels){reference-type="ref" reference="s:geometric-content-labels"}). *Remark 23*.
The original TT method, as laid out in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}, is obtained by taking the Menasco decomposition. As suggested by the names of the labels, the internal 1-cells correspond to the crossing arcs, the boundary 1-cells correspond to the peripheral edges, and the vertices correspond to peripheral endpoints. *Remark 24*. The (generalized) TT method may be thought of as equations defining a flat connection in a lattice gauge theory on the 1-skeleton of $\overline{τ}$ under a certain choice of gauge. A connection is an assignment of a matrix $A(\vec{e}) ∈ {PSL(2,\mathbb{C})}$ to each oriented edge $\vec{e}$ of $\overline{τ}$, and flatness of the connection means the product of these matrices along the boundary of a 2-cell of $\overline{τ}$ is 1. An algebraic solution $Ω$ defines a connection as follows: to the edges of $\overline{τ}$ of type $γ,ε$, we assign matrices $A(γ), A(ε) ∈ {PSL(2,\mathbb{C})}$ as in ([\[e:label-matrix\]](#e:label-matrix){reference-type="ref" reference="e:label-matrix"}) (using labels from $Ω$, not the geometric labels), and to an edge of $\overline{τ}$ corresponding to a meridian, we assign the upper-triangular matrix $(1 \; 1; 0 \; 1)$. The region/edge equations ([\[e:region-eqn-matrix\]](#e:region-eqn-matrix){reference-type="ref" reference="e:region-eqn-matrix"},[\[e:edge-eqn-matrix\]](#e:edge-eqn-matrix){reference-type="ref" reference="e:edge-eqn-matrix"}) ensure that this connection is flat.
As we saw in Remark [Remark 2](#r:meridian-CC){reference-type="ref" reference="r:meridian-CC"}, the act of changing the choice of peripheral torus and the identification of its lift with $\mathbb{C}$ transforms the labels by some scalings; this can be seen as a particular type of gauge transform, namely with the gauge group element $g = \{g_i\}$, where $i$ indexes peripheral tori, and $g_i$ are of the form $\textrm{diag}(\nu_i, 1)$.  \ ## Extension to Links in the Thickened Torus {#s:TT-thickened-torus}  \ We apply the generalized TT method above to a polyhedral decomposition of $\mathbf{T}- L$, closely related to the torihedral decomposition (see Definition [Definition 4](#d:torihedron){reference-type="ref" reference="d:torihedron"}), which we describe below. Note that one could identify $\mathbf{T}$ with the complement of the Hopf link $H$ in $S^3$, then apply the original TT method directly to $H \cup L$, but this feels somewhat unnatural. Let $L$ be a link in $\mathbf{T}$ with a cellular link diagram $D$. Let ${*_N},{*_S}$ denote the top and bottom cusps of $\mathbf{T}$, i.e., ${*_N}= T^2\times \{1\}$, ${*_S}= T^2\times \{-1\}$. Choose simple closed curves $μ_N,μ_S$, e.g. we may take the curves that correspond to the meridians of the Hopf link $H$ (in the identification $\mathbf{T}- L \simeq S^3 - H \cup L$). For each crossing $c$ of $D$, let $γ_c$ be the crossing arc at $c$. For a region $R$ of $D$, with boundary consisting of edges and vertices $e_1,c_1,\ldots,e_k,c_k$, let $F_R$ be an ideal polygon embedded in $\mathbf{T}- L$ with sides $γ_{c_1},\ldots,γ_{c_k}$ and ideal vertices at segments $e_1,\ldots,e_k$ of $L$; the interior of $F_R$ should project homeomorphically onto $R$ under the link projection. We call $F_R$ a *regional face*. The union of all the $F_R$'s separates $\mathbf{T}$ into two pieces. Denote the pieces connected to ${*_N},{*_S}$ by $Q_N, Q_S$ respectively.
An *overpass segment* refers to a maximal segment of the link (or equivalently, a maximal contiguous sequence of edges of $D$) that lies between two underpasses, and similarly for an *underpass segment*. We say an overpass segment $a_1$ *runs over* $a_2$ if $a_1$ goes over a crossing which is an endpoint of $a_2$; we say $a_1,a_2$ are *adjacent* if one runs over the other. For each overpass segment $a$, let $γ_a$ be a crossing arc [^8] that travels from a point of $a$ to ${*_N}$. Likewise, for each underpass segment $b$, let $γ_b$ be a crossing arc that travels from a point of $b$ to ${*_S}$. For each pair of adjacent overpass segments $a_1,a_2$, with $a_1$ running over $a_2$ at the crossing $c$, let $F_{a_1,a_2,c}$ be the ideal triangle in $\mathbf{T}$ with sides $γ_{a_1},γ_c,γ_{a_2}$ and ideal vertices at $a_1,a_2,{*_N}$. Similarly construct $F_{b_1,b_2,c}$ for adjacent underpass segments $b_1,b_2$ meeting at crossing $c$. We call $F_{a_1,a_2,c}$ and $F_{b_1,b_2,c}$ *vertical faces*. For each region $R$ of $D$, the union of vertical faces $F_{a_1,a_2,c}$ for $a_1,a_2,c$ adjacent to $R$, together with $F_R$, cuts out a volume $V_R$ above $R$; since $D$ is cellular, $R$ is a disk, so $V_R$ is a ball. Similarly, we get a volume $V_R'$ below $R$ (connected to ${*_S}$). We thus have an ideal polyhedral decomposition $τ$ of $\mathbf{T}- L$, where the 3-cells are the volumes $V_R,V_R'$, the 2-cells are regional faces $F_R$ and vertical faces $F_{a_1,a_2,c}$, $F_{b_1,b_2,c}$, and the 1-cells are crossing arcs $γ_c,γ_a,γ_b$. The truncation $\overline{τ}$ introduces more notation. Instead of cutting off the ends of $\mathbf{T}- L$, we describe the boundary vertices/1-cells/2-cells on peripheral tori as in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}. For crossing arcs $γ_c$ corresponding to crossings, we again refer to their peripheral endpoints by $p_c^o,p_c^u$.
For the crossing arc $γ_a$ corresponding to an overpass segment $a$, we have one peripheral endpoint $p_N^a$ at ${*_N}$ and another one $p_a$ near $L$. Likewise, for an underpass segment $b$, $γ_b$ has peripheral endpoint $p_S^b$ at ${*_S}$ and another one $p_b$ near $L$. As in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}, we have peripheral edges $ε_{e,R}$ which come from the intersection of regional faces $F_R$ with the peripheral tori. The vertical face $F_{a_1,a_2,c}$ gives rise to three new types of peripheral edges: - $ε_{a,c}$ connecting $p_a$ to $p_c^u$ when $a$ ends at $c$, - $ε_{a,c}$ connecting $p_a$ to $p_c^o$ when $a$ runs over $c$, - $ε_{a_1,a_2}$ connecting $p_N^{a_1}$ to $p_N^{a_2}$ at ${*_N}$. The peripheral endpoints $p_N^a$ and peripheral edges $ε_{a_1,a_2}$ on the peripheral torus at ${*_N}$ make up a graph, which we refer to as the *overpass graph*; likewise, the *underpass graph* is the graph consisting of vertices $p_S^b$ and edges $ε_{b_1,b_2}$ at ${*_S}$. The regions of these graphs are in bijection with the regions of $D$. The overpass graph can be obtained from the link diagram $D$ of $L$ by essentially a quotient operation, as follows (the underpass graph is obtained similarly). Start with $D$. For each overpass segment $a$ that does not go over any crossings, we add a vertex to the middle of $e$, the edge of $D$ which $a$ naturally corresponds to, splitting $e$ into two edges. For each overpass segment $a$ that goes over at least one crossing, we collapse those crossings that $a$ goes over (and the edges between them) into one vertex. The resulting graph has vertices in bijection with overpass segments, and the degree of a vertex is $2(n_a + 1)$, where $n_a$ is the number of crossings that $a$ runs over. It is straightforward to check that the resulting graph is isomorphic to the graph $(\{p_N^a\}, \{ε_{a,a'}\})$ on the peripheral torus of ${*_N}$.
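The degree count above can be sanity-checked with a short script; the input encoding below is a hypothetical toy representation (recording only $n_a$ for each overpass segment), not notation from this paper.

```python
# Sanity check for the overpass-graph construction described above.
# Toy input (an assumption for illustration): for each overpass segment a,
# we record only n_a, the number of crossings that a runs over.

def overpass_degrees(over_counts):
    # Each vertex of the overpass graph corresponds to an overpass segment a
    # and has degree 2*(n_a + 1), as stated in the text.
    return [2 * (n + 1) for n in over_counts]

# Alternating toy case: every overpass segment runs over exactly one
# crossing, so every vertex has degree 4, matching the 4-valent diagram D
# (consistent with the remark below that the graphs are isomorphic to D).
degrees = overpass_degrees([1, 1, 1])
assert degrees == [4, 4, 4]
assert sum(degrees) // 2 == 6   # handshake lemma: number of edges
```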
Note that if $L$ is alternating, then the overpass and underpass graphs are isomorphic to the link diagram $D$ of $L$. ### **Equations for Thickened Torus Case** {#s:TT-thickened-torus-eqn}  \ We consider the equations imposed on labels $Ω = (w_-,u_-)$ that form an algebraic solution. The region equations ([\[e:region-equations-f\]](#e:region-equations-f){reference-type="ref" reference="e:region-equations-f"}) and edge equations ([\[e:edge-equation\]](#e:edge-equation){reference-type="ref" reference="e:edge-equation"}) must be satisfied by all the "old" edge and crossing labels, i.e., labels on the crossing arcs $γ_c$ for crossings and peripheral edges $ε_{e,R}$ from regional faces. They must satisfy additional equations, as follows. The two peripheral edges between $p_a$ and $p_c^o$ for an overpass segment $a$ that goes over a crossing $c$ differ by exactly a meridian, so by the normalization condition on an algebraic solution, we have the following equation, which we also call an *edge equation*: $$\label{e:edge-equation-torus} u(ε_{a,c}^r) - u(ε_{a,c}^l) = 1$$ where $ε_{a,c}^l$ is to the left of the segment $a$, and $ε_{a,c}^r$ on the right, and we orient both peripheral edges from $p_a$ to $p_c^o$. There is also a region equation arising from each vertical face $F_{a_1,a_2,c}$. Let overpass segment $a_2$ run over $a_1$ at a crossing $c$. The truncated $F_{a_1,a_2,c}$ has edges $ε_{a_1,c}, γ_c, ε_{a_2,c}^*, γ_{a_2}, ε_{a_1,a_2}, γ_{a_1}$ (here the superscript $*$ stands for $l$ or $r$, whichever is relevant to this triangle).
The consequence of ([\[e:region-zeta-eqn\]](#e:region-zeta-eqn){reference-type="ref" reference="e:region-zeta-eqn"}) is the following *vertical region equation*: $$\label{e:vertical-region-equations} \frac{w_c}{u_{a_1,c} u_{a_2,c}^*} = -\frac{w_{a_2}}{u_{a_1,a_2} u_{a_2,c}} = \frac{w_{a_1}}{u_{a_1,a_2} u_{a_1,c}} = 1$$ Intuitively, the "shape parameters" as in ([\[e:shape-parameter-labels\]](#e:shape-parameter-labels){reference-type="ref" reference="e:shape-parameter-labels"}) must be 1. See Figure [19](#f:square-weave){reference-type="ref" reference="f:square-weave"}, ([\[e:hopf-link-wall\]](#e:hopf-link-wall){reference-type="ref" reference="e:hopf-link-wall"}) for an example. *Remark 25*. Suppose we've solved for the old edge and crossing labels. One can get quite far from just knowing $w_a$ for some arc $a$ and $u_{a,c}$ for some adjacent crossing. Say $c$ is an endpoint of $a$. It is clear that ([\[e:vertical-region-equations\]](#e:vertical-region-equations){reference-type="ref" reference="e:vertical-region-equations"}) determines the other three labels, in particular $w_{a_2}$ and $u_{a_2,c}^*$. Say $*$ is $l$; then $u_{a_2,c}^r$ is determined. Then we can consider the next arc $a_3$ that is next to $a$ (at $c$). Again it is clear that ([\[e:vertical-region-equations\]](#e:vertical-region-equations){reference-type="ref" reference="e:vertical-region-equations"}) determines $w_{a_3}$ and $u_{a_3,c}$. Let $c'$ be the next crossing (other endpoint of $a_3$). Then from the old edge labels, we can get the difference $p_{c'}^u - p_c^u$, which is equal to $u_{a_3,c'} - u_{a_3,c}$, and thus we get $u_{a_3,c'}$. We can continue in this manner until we return to $a$.  
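To make the propagation in Remark 25 concrete, the following minimal sketch solves the vertical region equation for the unknown labels, given the old crossing label $w_c$ together with $w_{a_1}$ and $u_{a_1,c}$. It assumes (as Remark 25's count of three unknowns suggests) that $u_{a_2,c}$ in the middle ratio is the same label as $u_{a_2,c}^*$ in the first; the variable names are ours, not the paper's.

```python
# Sketch of the label propagation in Remark 25: set each ratio in the
# vertical region equation equal to 1 and solve for the remaining labels.
# Assumption: u_{a_2,c} in the middle ratio coincides with u_{a_2,c}^*.

def solve_vertical_region(w_c, w_a1, u_a1c, star_is_left=True):
    u_a1a2 = w_a1 / u_a1c     # from  w_{a1} / (u_{a1,a2} u_{a1,c}) = 1
    u_a2c = w_c / u_a1c       # from  w_c / (u_{a1,c} u*_{a2,c}) = 1
    w_a2 = -u_a1a2 * u_a2c    # from  -w_{a2} / (u_{a1,a2} u*_{a2,c}) = 1
    # the edge equation u^r - u^l = 1 then determines the label on the
    # other side of a_2
    u_a2c_other = u_a2c + 1 if star_is_left else u_a2c - 1
    return u_a1a2, u_a2c, u_a2c_other, w_a2

# Quick consistency check that all three ratios are indeed 1:
u_a1a2, u_a2c, _, w_a2 = solve_vertical_region(w_c=2+0j, w_a1=3+0j, u_a1c=1+0j)
assert abs((2+0j) / ((1+0j) * u_a2c) - 1) < 1e-12
assert abs(-w_a2 / (u_a1a2 * u_a2c) - 1) < 1e-12
assert abs((3+0j) / (u_a1a2 * (1+0j)) - 1) < 1e-12
```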
\ ## FALs in $S^3$ {#s:TT-FAL-S3}  \ We apply the generalized TT method of Section [3.1](#s:TT-generalization){reference-type="ref" reference="s:TT-generalization"} to the polyhedral decomposition of the complement of FALs in $S^3$ described in Definitions [Definition 9](#d:decomp-S3){reference-type="ref" reference="d:decomp-S3"}, [Definition 12](#d:decomp-half-twist){reference-type="ref" reference="d:decomp-half-twist"}. As in Section [3.2](#s:TT-thickened-torus){reference-type="ref" reference="s:TT-thickened-torus"}, we would need to consider the truncated version of this polyhedral decomposition. In the following, we spell these out in detail, in terms of crossing arcs, peripheral edges, and equations on the labels; this will be similar to the modified TT method described in [@rochyTT]. Let $D_L$ be the link diagram of $L$ after removing full twists. Let $B_L$ be the bow-tie graph for $L$. Let $\{C_i\}$ be the collection of augmentation circles of $L$. Let $\{a_j\}$ be the collection of segments of $L$ demarcated by spanning disks. See Figure [10](#f:DL-BL){reference-type="ref" reference="f:DL-BL"}. ![FAL diagram $D_L$ and corresponding bow-tie graph $B_L$; the arrows on the vertices in $B_L$ indicate the orientations of their corresponding components/segments of $L$; for $C_i$, the arrow follows the direction of the top half of $C_i$. ](figures/diagram-FAL-DL.png "fig:"){#f:DL-BL height="6cm"} ![](figures/diagram-FAL-BL.png "fig:"){#f:DL-BL height="6cm"} The truncation of the spanning face $F_{C_i}^+$ has six sides, with three peripheral edges denoted $e_i^0,e_i^1,e_i^2$ opposite the three crossing arcs $γ_i^0,γ_i^2,γ_i^1$.
The circle $C_i$ intersects two regions, giving two peripheral edges denoted $e_i^{μ1},e_i^{μ2}$, with the top of $C_i$ oriented from $e_i^{μ1}$ to $e_i^{μ2}$. See Figure [12](#f:notation-spanning-face){reference-type="ref" reference="f:notation-spanning-face"} for orientation conventions. Each segment $a_j$ meets two regions, giving peripheral edges denoted $e_j^l,e_j^r$, with $e_j^l$ from the region to the left of $a_j$. See Figure [14](#f:notation-regional-face){reference-type="ref" reference="f:notation-regional-face"} for orientation conventions. ![ Notation and orientation convention for crossing arcs and peripheral edges near spanning face with/without half-twist; $e_i^0$ is oriented following $C_i$; the order between $γ_i^1,γ_i^2$ is chosen so $e_i^0$ points from $γ_i^1$ to $γ_i^2$; $e_i^1,e_i^2$ are oriented from $γ_i^0$ to $γ_i^1,γ_i^2$; $e_i^{μ1},e_i^{μ2}$ are oriented by right-hand rule around $C_i$. ](figures/diagram-notation-spanning-face-full.png "fig:"){#f:notation-spanning-face width="8cm"} ![](figures/diagram-notation-spanning-face-half.png "fig:"){#f:notation-spanning-face width="6.5cm"} ![ Notation and orientation convention for crossing arcs and peripheral edges in boundary of regional face near spanning faces with/without half-twist; $e_j^l,e_j^r$ are oriented in parallel with $a_j$. ](figures/diagram-notation-regional-face-full.png "fig:"){#f:notation-regional-face width="6cm"} ![ 
](figures/diagram-notation-regional-face-half.png "fig:"){#f:notation-regional-face width="6cm"} ![ Crossing arcs and peripheral edges depicted in $B_L$; note this is unchanged if we introduce half-twists. ](figures/diagram-FAL-BL-graph-labels.png){#f:BL-crossing-arcs-edges height="8cm"} Recall that there is one 2-cell $F_R$, the regional face, for each region $R$ of $L$, and two 2-cells, the spanning faces, $F_{C_i}^{\pm}$, for each augmentation circle $C_i$, which have boundary consisting of $γ_i^0,γ_i^1,γ_i^2$. Writing $f_{C_i}$ for $F_{C_i}^+$, $f_{C_i}$ appears as two copies in $B_L$, which make up the two triangles in the bow-tie corresponding to $C_i$; we label them $f_{C_i}^l$ and $f_{C_i}^r$. The crossing arcs and peripheral edges in the boundary of (the truncated) $f_{C_i}$ likewise split into two copies in $B_L$. The edges of $B_L$ correspond to crossing arcs; the peripheral edges are recorded in $B_L$ as tiny arcs near vertices. As usual, one considers the equations that would make labels on these crossing arcs and peripheral edges an algebraic solution; for example, the edge equation from a segment $a_j$ is simply $$u(e_j^l) = u(e_j^r).$$ *Remark 26*. The polyhedral decomposition respects the $\mathbb{Z}/2$-symmetry of reflecting the link in the plane of the diagram, which naturally leads to constraints on the geometric labels. These constraints and some other "positivity" constraints turn out to be sufficient to prove that an algebraic solution satisfying these constraints is indeed geometric. This discussion will be elaborated on in Section [4.2](#s:FAL-nec-suff-geometric){reference-type="ref" reference="s:FAL-nec-suff-geometric"}.
\ ## FALs in $\mathbf{T}$ {#s:TT-FAL-T2}  \ We repeat Section [3.3](#s:TT-FAL-S3){reference-type="ref" reference="s:TT-FAL-S3"} but for FALs in $\mathbf{T}$, namely, we apply the generalized TT method of Section [3.1](#s:TT-generalization){reference-type="ref" reference="s:TT-generalization"} to the polyhedral decomposition of the complement of FALs in $\mathbf{T}$ described in Definitions [Definition 9](#d:decomp-S3){reference-type="ref" reference="d:decomp-S3"}, [Definition 12](#d:decomp-half-twist){reference-type="ref" reference="d:decomp-half-twist"}. The same discussion involving spanning and regional faces can be repeated verbatim. The vertical faces introduce new peripheral edges. We consider only the top torihedron, the bottom being identical. Each vertical arc $γ_{C_i}$ is adjacent to four vertical faces, one for each segment $a_j$ meeting the spanning disk of $C_i$; each of these faces contributes a peripheral edge. These peripheral edges are denoted by $ε_i^{1l}, ε_i^{1r}, ε_i^{2l}, ε_i^{2r}$; here the 1 or 2 in the superscript indicates which of $γ_i^1$ or $γ_i^2$ is an endpoint, and the $l$ or $r$ indicates on which side of $C_i$ it is, similar to Figure [15](#f:BL-crossing-arcs-edges){reference-type="ref" reference="f:BL-crossing-arcs-edges"}; see Figure [17](#f:FAL-T2-vertical-faces){reference-type="ref" reference="f:FAL-T2-vertical-faces"}. Each vertical arc $γ_{a_j}$ is adjacent to four vertical faces, one for each crossing arc meeting $a_j$; if $a_j$ travels from augmentation circle $C_i$ to $C_{i'}$, then these crossing arcs are $γ_i^0,γ_i^{[1/2]},γ_{i'}^0,γ_{i'}^{[1/2]}$. The corresponding peripheral edges are denoted by $ε_j^{il}, ε_j^{ir}, ε_j^{i'l}, ε_j^{i'r}$.
![Peripheral edges involving vertical faces around a vertical crossing arc $γ_{C_i}$; the diagram on the right depicts them over the bow-tie graph.](figures/diagram-FAL-torus-Ci.png "fig:"){#f:FAL-T2-vertical-faces height="7cm"} ![](figures/diagram-FAL-torus-Ci-BL.png "fig:"){height="5cm"} For the region equations on the algebraic solution, we leave it as an exercise for the reader to recover them (see Section [3.2.1](#s:TT-thickened-torus-eqn){reference-type="ref" reference="s:TT-thickened-torus-eqn"}, Section [5](#s:examples){reference-type="ref" reference="s:examples"}). *Remark 27*. Given a geometric solution $\Omega = (w(-),u(-))$ to the TT method, it is easy to obtain the cusp shapes of the peripheral tori. For a peripheral torus $T$, which has a chosen meridian, consider a sequence of peripheral edges $ε_1,\ldots,ε_k$ whose concatenation forms a closed loop that is homotopic to a longitude of the peripheral torus. Then the sum $\sum_{i=1}^k u(ε_i)$ is the cusp shape of $T$. # From Algebraic Solutions to Geometry {#s:alg-to-geom} ## Sufficient Criterion on Algebraic Solutions to be Geometric {#s:suff-geometric} We present a sufficient criterion on an algebraic solution $Ω$ to the generalized TT method on a compact 3-manifold $M$ with toric boundary components (as in Section [3.1](#s:TT-generalization){reference-type="ref" reference="s:TT-generalization"}) that implies that $Ω$ is the geometric solution. We describe a procedure to assign shape parameters to each tetrahedron of a triangulation $τ$. The sufficient criterion is simply that the shape parameters assigned have negative imaginary part; we find that the Thurston gluing/completeness equations are then automatically satisfied.
(See Theorem [Theorem 36](#t:all-positive-geometric){reference-type="ref" reference="t:all-positive-geometric"}.) Practically speaking, this sufficient criterion is only useful if $τ$ is a geometric triangulation, a fact that is often not easy to know or verify. The description of the procedures to get the shape parameters and the developing map (see Section [4.1.2](#s:alg-to-geom-reconstruction){reference-type="ref" reference="s:alg-to-geom-reconstruction"}) may be of independent interest; for example, the developing map must be surjective if the algebraic solution is geometric, a fact which one can use as a necessary criterion of geometric-ness. We fix a 3-manifold $M$ with a polyhedral decomposition $τ₀$, and we fix an algebraic solution $Ω$ to the generalized TT method (which is, importantly, not necessarily geometric). We write $M^{∘} = M \backslash ∂M$, and denote by $τ₀^{∘}$ the corresponding ideal polyhedral decomposition. As usual, we may identify the truncation of $M$ with $M$ itself, and accordingly identify $τ₀$ with the truncation of $τ₀^{∘}$.  \ ### Assigning labels $w(γ), u(ε)$ to arbitrary arcs and peripheral edges {#s:algebraic-to-arbitrary-labels}   Let $γ$ be an arc between cusps of $M^{∘}$ which is not homotopic to a curve contained in some neighborhood $N(v_i)$ of an ideal vertex (e.g. $γ$ is an edge in the triangulation). For such an arc, denote by $\overline{γ} ⊂ γ$ the portion between peripheral endpoints, i.e., $\overline{γ} = γ \backslash \bigcup N(v_i)$. If we understood the hyperbolic geometry of $M^{∘}$, we could assign a crossing label $w(γ)$ to $γ$ as in Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}, having absolute value $e^{-\textrm{len}(\overline{γ})}$ and argument given by comparing meridians at the endpoints of $\overline{γ}$. However, we want to obtain $w(γ)$ in terms of the given $Ω$.
The following procedure produces some crossing label $w(γ)$ that would be consistent with the hyperbolic geometry if the algebraic solution were indeed the geometric one. Let the peripheral endpoints of $γ$ (intersection with peripheral tori) be $p$ and $q$; orient $γ$ from $p$ to $q$ (the end result is independent of the choice of orientation). It is possible to homotope $\overline{γ}$ into a concatenation of curves $ε_0,\overline{γ_1},ε_1,\ldots,\overline{γ_k},ε_k$, such that $γ_i$ are 1-edges of $τ₀^{∘}$, and $ε_i$ are peripheral edges which are in $τ₀$ for $i ≠ 0,k$, while $ε_0,ε_k$ are new peripheral edges connecting $p,q$ to endpoints of $\overline{γ_1},\overline{γ_k}$ respectively. Note that here we orient $ε_i$ in accordance with $γ$, which may disagree with the orientations that may be imposed by convention, e.g. in the original TT method. We define $u(ε_0), u(ε_k), w(γ)$ to be solutions to the following equation ($u = u(ε_0), u' = u(ε_k), w = w(γ)$): $$\label{e:arb-crossing-label-1} \begin{pmatrix} 1 & -u' \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & w(γ_k) \\ 1 & 0 \end{pmatrix} ⋅⋅⋅ \begin{pmatrix} 1 & -u(ε_1) \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & w(γ_1) \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & -u \\ 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 0 & w \\ 1 & 0 \end{pmatrix}$$ where $\sim$ means equality in ${PSL(2,\mathbb{C})}$, i.e., up to nonzero scalar. Rearranging, we have $$\label{e:arb-crossing-label-2} \begin{pmatrix} 0 & w(γ_k) \\ 1 & 0 \end{pmatrix} ⋅⋅⋅ \begin{pmatrix} 1 & -u(ε_1) \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & w(γ_1) \\ 1 & 0 \end{pmatrix} \sim \begin{pmatrix} u' & u u' + w \\ 1 & u \end{pmatrix}$$ from which $u, u', w$ can be easily obtained. Well-definedness of $w(γ)$ is shown in Proposition [Proposition 30](#p:labels-well-defn){reference-type="ref" reference="p:labels-well-defn"} below.
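Concretely, the labels are read off from ([\[e:arb-crossing-label-2\]](#e:arb-crossing-label-2){reference-type="ref" reference="e:arb-crossing-label-2"}) by forming the matrix product and normalizing by its lower-left entry. The following is a minimal numerical sketch of this extraction; the label values are arbitrary made-up numbers standing in for data from an actual link complement.

```python
import numpy as np

def S(w):
    # matrix for traversing a crossing arc with label w
    return np.array([[0, w], [1, 0]], dtype=complex)

def T(u):
    # matrix for traversing a peripheral edge with label u
    return np.array([[1, -u], [0, 1]], dtype=complex)

def solve_labels(ws, us):
    """Given crossing labels w(γ_1),...,w(γ_k) and interior peripheral-edge
    labels u(ε_1),...,u(ε_{k-1}), return (u(ε_0), u(ε_k), w(γ))."""
    P = S(ws[0])
    for wi, ui in zip(ws[1:], us):
        P = S(wi) @ T(ui) @ P
    # P is proportional to [[u', u u' + w], [1, u]]; normalize by P[1,0]
    c = P[1, 0]
    u0, uk = P[1, 1] / c, P[0, 0] / c   # u0 = u(ε_0), uk = u(ε_k)
    w = P[0, 1] / c - u0 * uk
    return u0, uk, w

# made-up label values for k = 3
ws = [0.3 + 1.1j, -0.7 + 0.2j, 1.4 - 0.5j]
us = [0.9 - 0.3j, -0.2 + 0.8j]
u0, uk, w = solve_labels(ws, us)
```

Multiplying the extracted labels back in recovers ([\[e:arb-crossing-label-1\]](#e:arb-crossing-label-1){reference-type="ref" reference="e:arb-crossing-label-1"}) up to scalar.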
The definition above is motivated by considering the situation where we know that the algebraic solution given is the geometric solution. Place ${\mathbb{H}^3}$ so that we are in $p$-centered view (see Section [2.1.1](#s:geometric-content-labels){reference-type="ref" reference="s:geometric-content-labels"}). As we successively traverse the curves, hopping from peripheral endpoint to peripheral endpoint, we perform the isometry on ${\mathbb{H}^3}$ that takes us to the view centered on the next peripheral endpoint. The composition of these isometries must be equal to the isometry by traversing $γ$ directly. This results in ([\[e:arb-crossing-label-1\]](#e:arb-crossing-label-1){reference-type="ref" reference="e:arb-crossing-label-1"}). Next we consider peripheral edges not in $τ₀$. Given two arcs $γ,γ'$ as above that emanate from the same cusp, we can consider a peripheral edge $α$ from $p$ to $p'$, where $p,p'$ are the peripheral endpoints of $γ,γ'$ at the common peripheral torus, respectively. Let $γ \sim ε_0 γ_1 ⋅⋅⋅ γ_k ε_k$, $γ' \sim ε_0' γ_1' ⋅⋅⋅ γ_l' ε_l'$, as in the construction above. Let $p_1,p_1'$ be the endpoints of $ε_0,ε_0'$ (which are not $p,p'$) respectively. Let $ε$ be some concatenation of peripheral edges in $τ₀$ homotopic (in the peripheral torus) to $ε_0^{-1}α ε_0'$. We define $$\label{e:arb-edge-label} u(α) = - u(ε_0) + u(ε) + u(ε_0')$$ Again, well-definedness of $u(α)$ is shown in Proposition [Proposition 30](#p:labels-well-defn){reference-type="ref" reference="p:labels-well-defn"} below. In the following, we will abuse notation by dropping the overline from $\overline{γ}$, as it should be clear which we are referring to. Before we proceed, let us introduce some notation: **Definition 28**. 
For a curve $α$ that is a peripheral edge or crossing arc in $τ₀$, we define $W_Ω(α)$ to be the matrix given by ([\[e:label-matrix\]](#e:label-matrix){reference-type="ref" reference="e:label-matrix"}) (using labels from $Ω$, not the geometric labels), which represents the isometry used to traverse $α$. More generally, for a concatenation $α = α_1 ⋅⋅⋅ α_k$ of such curves, we define $W_Ω(α)$ to be the composition $W_Ω(α_k) ⋅⋅⋅ W_Ω(α_1)$. Note that $W_Ω(-)$ is contravariant with respect to composition, i.e., $W_Ω(α ∘ α') = W_Ω(α') W_Ω(α)$, which is natural in terms of order of operation, since the $W_Ω$'s, as matrices and isometries of ${\mathbb{H}^3}$, act to the right. **Lemma 29**. *For any two homotopic curves $α,α'$ that are concatenations of peripheral edges or crossing arcs in $τ₀$, we have $W_Ω(α) \sim W_Ω(α')$.* *Proof.* The region/edge equations ([\[e:region-eqn-matrix\]](#e:region-eqn-matrix){reference-type="ref" reference="e:region-eqn-matrix"}), ([\[e:edge-eqn-matrix\]](#e:edge-eqn-matrix){reference-type="ref" reference="e:edge-eqn-matrix"}) boil down to $W(∂P) \sim 1$ for any 2-cell $P$ of $τ₀$ (see Remark [Remark 24](#r:gauge-theory){reference-type="ref" reference="r:gauge-theory"}). Since $α^{-1}α'$ is homotopic to a trivial loop, and $π₁(τ₀^{(2)}) \simeq π₁(τ₀)$, it follows that $W_Ω(α^{-1}α') \sim 1$. ◻ **Proposition 30**. *The constructions above of labels $w(γ), u(ε)$ for arbitrary crossing arcs $γ$ and peripheral edges $ε$, as given in ([\[e:arb-crossing-label-2\]](#e:arb-crossing-label-2){reference-type="ref" reference="e:arb-crossing-label-2"}), ([\[e:arb-edge-label\]](#e:arb-edge-label){reference-type="ref" reference="e:arb-edge-label"}), are well-defined.* *Proof.* Suppose $γ$ is homotopic to a concatenation of crossing arcs and peripheral edges in two ways as in the construction above, say $γ \sim ε_0 γ_1 ⋅⋅⋅ γ_k ε_k$ and $γ \sim ε_0' γ_1' ⋅⋅⋅ γ_l' ε_l'$.
We want to show that the crossing labels $w,w'$ for $γ$ that the construction above yields using these two presentations of $γ$ are equal. Let $ε$ be a concatenation of peripheral edges in $τ₀$ that is homotopic (in the relevant peripheral torus) to $ε_0^{-1}ε_0'$; similarly, let $ε'$ be a concatenation of peripheral edges in $τ₀$ that is homotopic (in the relevant peripheral torus) to $ε_l' ε_k^{-1}$. Let $α = γ_1 ε_1 ⋅⋅⋅ γ_k$, $β = γ_1' ε_1' ⋅⋅⋅ γ_l'$, $α' = ε β ε'$. By definition, $α$ and $α'$ are homotopic (to $ε_0^{-1}γ ε_k^{-1}$), so by Lemma [Lemma 29](#l:TT-W-homotopic){reference-type="ref" reference="l:TT-W-homotopic"}, $W_Ω(α) \sim W_Ω(α') = W_Ω(ε') W_Ω(β) W_Ω(ε)$. Thus we have $$\begin{aligned} \begin{pmatrix} 0 & w \\ 1 & 0 \end{pmatrix} \sim W_Ω(ε_k) W_Ω(α) W_Ω(ε_0) \sim W_Ω(ε_k) W_Ω(α') W_Ω(ε_0) \sim W_Ω(ε' ε_k) W_Ω(β) W_Ω(ε_0 ε) \\ \sim W_Ω(ε_l'^{-1}ε' ε_k) \begin{pmatrix} 0 & w' \\ 1 & 0 \end{pmatrix} W_Ω(ε_0 ε ε_0'^{-1})\end{aligned}$$ Since $ε_l'^{-1}ε' ε_k$ and $ε_0 ε ε_0'^{-1}$ lie in a peripheral torus, $W_Ω(ε_l'^{-1}ε' ε_k)$ and $W_Ω(ε_0 ε ε_0'^{-1})$ must be upper triangular with unit diagonal, hence, by a short algebraic argument, the above equality implies that they must both be the identity. This immediately implies $w = w'$, and $u(ε) = -u(ε_0) + u(ε_0')$, $u(ε') = u(ε_l') - u(ε_k)$. The well-definedness for labels associated to peripheral edges between arbitrary crossing arcs follows easily from this as well. ◻ A similar argument shows that we have the analog of Lemma [Lemma 29](#l:TT-W-homotopic){reference-type="ref" reference="l:TT-W-homotopic"} for an arbitrary collection of crossing arcs, peripheral edges (not just those from $τ₀$): **Lemma 31**. *For any two homotopic curves $α,α'$ that are concatenations of arbitrary peripheral edges or crossing arcs, we have $W_Ω(α) \sim W_Ω(α')$.* ### Algebraic solution $Ω$ to piecewise geometry {#s:alg-to-geom-reconstruction}  \ Let $τ$ be an ideal triangulation of $M^{∘}$.
Each ideal tetrahedron $T ∈ τ$ inherits an orientation from the 3-manifold. **Definition 32**. A *piecewise geometry* on $τ$ is a collection $\{ζ_T\}_{T ∈ τ}$ of surjective maps $ζ_T: T \to Δ_T$, where $Δ_T ⊂ {\mathbb{H}^3}$ is an ideal hyperbolic tetrahedron considered up to isometries of ${\mathbb{H}^3}$, and $ζ_T$ is considered up to homotopy fixing the vertices of $T$; equivalently, it suffices to record $ζ_T$ as a function on the set of vertices of $T$, valued in $\mathbb{C}∪ \{∞\}$, considered up to linear fractional transformations. $Δ_T$ can be degenerate, and $ζ_T$ may be orientation reversing. We denote by $ζ_{T,e}$ the shape parameter of $Δ_T$ along $e$, or in terms of the vertices, $ζ_{T,e} = \frac{(z₀ - z₁)(z₂ - z₃)}{(z₀ - z₂)(z₁ - z₃)}$, where the $z_i$ are the locations of the vertices of $Δ_T$ corresponding to the vertices $v_i$ of $T$, ordered so that $e = \overline{v_1 v_2}$ and $[\overrightarrow{v_0 v_1}, \overrightarrow{v_0 v_2}, \overrightarrow{v_0 v_3}]$ is the orientation of $T$ (see the definition of $ζ_{c_i,R}$ from Section [2.1](#s:recap){reference-type="ref" reference="s:recap"}). Note that we place no restriction on $ζ_T$, in particular, the imaginary part of $ζ_{T,e}$ can be 0 ($Δ_T$ is flat) or positive ($ζ_T$ induces a homeomorphism $T \simeq Δ_T$ that is orientation-reversing). This is useful for interpreting algebraic solutions to the TT method as describing "geometries" on $M$. **Definition 33**. Let $(τ,\{ζ_T\})$ be a triangulation with a piecewise geometry. Let $e$ be an edge of $τ$. Let $T_1,T_2,\ldots,T_k$ be the tetrahedra encountered as one traverses around $e$. We say that $(τ,\{ζ_T\})$ *[weakly]{.ul} satisfies the Thurston gluing equation at $e$* if $\prod ζ_{T_i,e} = 1$. 
We say that $(τ,\{ζ_T\})$ *[strongly]{.ul} satisfies the Thurston gluing equation at $e$* if $ζ_{T_i,e} \neq \infty$, $\text{Im}(ζ_{T_i,e}) \neq 0$, and $\prod \widetilde{ζ_{T_i,e}} = \exp(2π i) \in \widetilde{\mathbb{C}\backslash \{0\}}$, where $\widetilde{ζ_{T_i,e}}$ is the lift of $ζ_{T_i,e}$ with argument in $(-π,π)$. The latter is the usual definition of Thurston gluing equations [@thurston]. We describe a developing map ${\mathcal{D}}$ for $M$ based on $Ω$, which in particular defines a piecewise geometry on $τ$. Let $π: \widetilde{M} \to M$ be a universal cover of $M$. Let $\widetilde{τ}₀$ be the lift along $π$ of the polyhedral decomposition $τ₀$ of $M$ to $\widetilde{M}$, and likewise let $\widetilde{τ}$ be the lift of $τ$ along $π$. The algebraic solution $Ω$ lifts to a collection of labels $\widetilde{Ω} = (\widetilde{w}(-), \widetilde{u}(-))$ by pullback, i.e., for any crossing arc $γ$ and peripheral edge $ε$ in $\widetilde{τ}₀$, we define $\widetilde{w}(γ) = w(π(γ)), \widetilde{u}(ε) = u(π(ε))$. [^9] We denote by $W_{\widetilde{Ω}}(-)$ the matrices based on the labels $\widetilde{Ω}$ as in Definition [Definition 28](#d:W){reference-type="ref" reference="d:W"}. Choose a polyhedron $P ∈ \widetilde{τ}₀$, a vertex $v₀$ of $P$, and an edge $γ₀$ meeting $v₀$. Let $z₀ = ∞$ and let $Q₀ ⊂ {\mathbb{H}^3}$ be the horizontal plane at height 1, with the identification with $\mathbb{C}$ by projection. This will serve as our reference point for the developing map. We first describe where vertices of $\widetilde{τ}₀$ go. Let $v ≠ v₀$ be a vertex of $\widetilde{τ}₀$. There is an arc $γ$ from $v$ to $v₀$, unique up to homotopy; let $ε$ be the peripheral edge between $γ$ and $γ₀$ near $v₀$ (if $γ = γ₀$ then $ε$ is the constant path).
We assign to $v$ the point $z(v) ∈ ∂{\mathbb{H}^3}$ and horosphere $Q(v)$ which are obtained from $z₀$ and $Q₀$ by applying the isometry defined by $W_{\widetilde{Ω}}(εγ)^{-1}$; $Q(v)$ comes with an identification with $\mathbb{C}$ given by precomposing that of $Q₀$ with $W_{\widetilde{Ω}}(εγ)$. Suppose we chose a different starting point $v₀'$ and edge $γ₀'$, which yields assignments $z'(v), Q'(v)$; these are related to $z(v), Q(v)$ by a global isometry, as follows: Let $α₀ = ε₁^{-1}γ₁ ε₁'$, where $γ₁$ is the crossing arc between $v₀,v₀'$ and $ε₁,ε₁'$ are peripheral edges from $γ₁$ to $γ₀,γ₀'$ (if $v₀ = v₀'$, we take $α₀$ to be the peripheral edge between $γ₀,γ₀'$). Write $U = W_{\widetilde{Ω}}(α₀)$. Then by definition, $U$ sends $Q(v₀') = W_{\widetilde{Ω}}(ε₁^{-1}γ₁)^{-1}(Q₀)$ to $W_{\widetilde{Ω}}(α₀)(W_{\widetilde{Ω}}(ε₁^{-1}γ₁)^{-1}(Q₀)) = W_{\widetilde{Ω}}(ε₁')(Q₀) = Q₀ = Q'(v₀')$; more generally, for any vertex $v$ of $\widetilde{τ}₀$, if $α = εγ$ is the path used to define $Q(v)$, and $α' = ε'γ'$ for $Q'(v)$, that is, $Q(v) = W_{\widetilde{Ω}}(α)^{-1}(Q₀)$, $Q'(v) = W_{\widetilde{Ω}}(α')^{-1}(Q₀)$, then since $α = α₀ α'$, we have $U(Q(v)) = W_{\widetilde{Ω}}(α₀) W_{\widetilde{Ω}}(α)^{-1}(Q₀) = W_{\widetilde{Ω}}(α')^{-1}(Q₀) = Q'(v)$. (The identification of $Q(v)$ with $\mathbb{C}$ may disagree with that of $Q'(v)$, as we can already see for $v₀'$, but the difference is only a translation, which ultimately does not affect any results.) An immediate consequence of the above observation is that the assignments $z(-), Q(-)$ are covariant with deck transformations of $π: \widetilde{M} \to M$.
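As a sanity check on the covariance computation just described, one can take random invertible matrices as stand-ins for $W_{\widetilde{Ω}}(α₀)$ and $W_{\widetilde{Ω}}(α')$ (these are arbitrary values, not labels of any actual manifold) and verify numerically that $U = W_{\widetilde{Ω}}(α₀)$ carries $z(v)$ to $z'(v)$:

```python
import numpy as np

INF = complex("inf")

def mobius(M, z):
    # action of a 2x2 complex matrix on the boundary sphere C ∪ {∞}
    (a, b), (c, d) = M
    if z == INF:
        return a / c if c != 0 else INF
    denom = c * z + d
    return (a * z + b) / denom if denom != 0 else INF

rng = np.random.default_rng(7)
# random invertible stand-ins for W(α₀) and W(α'); with α = α₀ α',
# contravariance gives W(α) = W(α') W(α₀)
A0 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Ap = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
W_alpha = Ap @ A0

z_v = mobius(np.linalg.inv(W_alpha), INF)  # z(v), from basepoint v₀
z_v_new = mobius(np.linalg.inv(Ap), INF)   # z'(v), from basepoint v₀'
```

The check `mobius(A0, z_v) == z_v_new` (up to floating-point error) is exactly the identity $U(z(v)) = z'(v)$.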
It is clear from the "$p$-centered view" perspective (Section [2.1.1](#s:geometric-content-labels){reference-type="ref" reference="s:geometric-content-labels"}) and invariance/well-definedness of labels (Section [4.1.1](#s:algebraic-to-arbitrary-labels){reference-type="ref" reference="s:algebraic-to-arbitrary-labels"}) that the geometric labels attained from the construction of horospheres above agree with $\widetilde{Ω}$: more precisely, for any arc $γ'$ in $\widetilde{M}$ between vertices $v,v'$, the construction above gives points $z,z' ∈ ∂{\mathbb{H}^3}$ and horospheres $Q,Q'$ centered about them with some identifications with $\mathbb{C}$, such that the geometric crossing label associated to the geodesic between $z,z'$ is $\widetilde{w}(γ')$, and likewise for peripheral edges in $\widetilde{M}$. In particular, for a tetrahedron $\widetilde{T} ∈ \widetilde{τ}$, we obtain an ideal tetrahedron $Δ_{\widetilde{T}} ⊆ {\mathbb{H}^3}$ with vertices $z(v_i)$, where $v_i$ are the vertices of $\widetilde{T}$, such that the $\widetilde{Ω}$ labels of the edges of $\widetilde{T}$ agree with the geometric labels of the edges of $Δ_{\widetilde{T}}$; maps $ζ_{\widetilde{T}} : \widetilde{T} \to Δ_{\widetilde{T}}$ can be chosen such that they agree with neighboring tetrahedra on their faces. The *developing map* ${\mathcal{D}}: \widetilde{M} \to {\mathbb{H}^3}$ is simply the union of the maps $\{ζ_{\widetilde{T}}\}$ on $\widetilde{τ}$. By covariance with deck transformations, the $ζ_{\widetilde{T}}$'s and $Δ_{\widetilde{T}}$'s descend to a piecewise geometry on $τ$, which we refer to as the *piecewise geometry induced by $Ω$*. **Proposition 34**. *Let $\{ζ_T\}$ be the piecewise geometry on a triangulation $τ$ induced by an algebraic (not necessarily geometric) solution $Ω$ to the TT method. Then $\{ζ_T\}$ weakly satisfies Thurston gluing equations at every edge.* *Proof.* Let $e$ be an edge of $τ$, and let $v$ be one of its vertices. Let $Q$ be the peripheral torus at $v$.
Let $T_1,\ldots,T_k$ (possibly with repetitions) be the tetrahedra of $τ$ that meet $e$. They intersect $Q$ in triangles $δ_1,\ldots,δ_k$; let these be arranged in counterclockwise (as viewed from $v$) order around $e$. Let $ε_i = δ_i \cap δ_{i-1}$, oriented away from $e$. Then by definition, $ζ_{T_i,e} = u(ε_{i+1}) / u(ε_i)$ (indices taken mod $k$), so these multiply to 1. ◻ ### Sufficient criteria on $Ω$ to be geometric {#s:geometric-sufficient}  \ Before we come to the main result, we state a lemma that will be applied to the peripheral tori to show that under the "all positive orientation" condition, the Thurston gluing equations are strongly satisfied at edges: **Lemma 35**. *Let $X$ be an oriented torus with a cellular decomposition $τ$ (in particular, a triangulation). For each 2-cell $δ$ of $τ$, assign a Euclidean similarity structure compatible with the orientation on $X$: concretely, a Euclidean similarity structure is a simple (i.e., non-self-intersecting) polygon $δ'$ in $\mathbb{R}^2$ considered up to Euclidean similarity transforms.* *Suppose for each vertex $v$, the Euclidean similarity structures of the 2-cells around it are "weakly Euclidean", in the sense that there is a well-defined map from the star of $v$ to $\mathbb{R}^2$ obtained by "unfolding" the 2-cells with the similarity structure; more precisely, if $δ_0,δ_1,\ldots,δ_k = δ_0$ are the 2-cells around $v$ in counterclockwise order, with $e_i$ being the edge between $δ_{i-1}$ and $δ_i$, then an "unfolding" is a choice of representatives $δ_i'$ of the similarity structures for $δ_i$ such that $δ_i'$ adjoins $δ_{i-1}'$ along the edge corresponding to $e_i$, and we require that $δ_0' = δ_k'$ (exactly, not up to similarity transform).* *Let the *local degree of $τ$ at $v$* be the number of times the "unfolding" map above winds around $v$, or more concretely, it is the sum of angles at $v$ in $δ_i'$ from $i=1$ to $k$, divided by $2π$.* *Then the weakly Euclidean condition implies that the local degree is 1 at every vertex,
i.e., the Euclidean similarity structure extends over all of $X$.* *Proof.* The Euclidean similarity structures define a developing map $\widetilde{X} \to \mathbb{R}^2$; under the developing map, loops in $\widetilde{X}$ avoiding the vertices of $τ$ get a well-defined turning number. The boundary of a fundamental domain of $\widetilde{X}$ must have turning number 1. By invariance of turning number under regular homotopy, and studying how turning number changes under connect sums, we can conclude that there cannot be singularities. ◻ **Theorem 36**. *Let $τ$ be a triangulation of a 3-manifold $M$ with toric ends. Let $Ω = (w(-), u(-))$ be an algebraic solution to the generalized TT method, and let $\{ζ_T\}$ be the induced geometry on $τ$. If the shape parameters $ζ_{T,e}$ have negative imaginary part for every $T ∈ τ$ and every edge $e$, then $Ω$ is the geometric solution.* *Proof.* By Proposition [Proposition 34](#p:TT-weak){reference-type="ref" reference="p:TT-weak"}, the induced geometry weakly satisfies Thurston gluing equations at every edge. We want to show that it also satisfies them strongly. Let $e$ be an edge of $τ$, and $v$ one of its endpoints. Consider the peripheral torus of $v$; it has a triangulation $\overline{τ}$ obtained by intersection with $τ$. The induced geometry on tetrahedra of $τ$ induces a Euclidean similarity structure on each of the triangles in $\overline{τ}$. By Lemma [Lemma 35](#l:torus-strong-Thurston){reference-type="ref" reference="l:torus-strong-Thurston"}, the local degrees of $\overline{τ}$ at vertices of $\overline{τ}$ are 1, which implies, in particular, that the induced geometry on $τ$ is strongly Thurston at $e$. It remains to check that the induced geometry is complete at ideal vertices. Observe that the edge labels give more than Euclidean similarity structures on the triangles of $\overline{τ}$: they actually give a triangle in $\mathbb{C}$ up to translations.
Thus, in the developing map of a peripheral torus, the $\mathbb{Z}\oplus \mathbb{Z}$-action consists of only translations, and hence the induced geometry is indeed complete at ideal vertices. ◻  \ ## FALs: Necessary and Sufficient Criteria on Algebraic Solutions to be Geometric {#s:FAL-nec-suff-geometric} ### Necessary Criteria {#s:FAL-nec-crit} We present criteria on an algebraic solution to the TT method for a FAL that are necessary for it to be the geometric solution: **Lemma 37**. *Let $L$ be a FAL in $S^3$ or $\mathbf{T}$. Recall the notation for crossing arcs and peripheral edges from Sections [3.3](#s:TT-FAL-S3){reference-type="ref" reference="s:TT-FAL-S3"}, [3.4](#s:TT-FAL-T2){reference-type="ref" reference="s:TT-FAL-T2"}. Let $Ω = (w(-), u(-))$ be an algebraic solution to the TT method. If $Ω$ is geometric, then the labels satisfy the following criteria: (with $i$ indexing augmentation circles $C_i$ and $j$ indexing segments $a_j$):* - *$u(e_i^{μ 1}) = u(e_i^{μ 2}) = 1$,* - *$u(e_i^{1l}) = u(e_i^{2l}) = \pm 1/2$, $u(e_i^{1r}) = u(e_i^{2r}) = \pm 1/2$,* - *$u(e_j^l) = u(e_j^r) ∈ i\mathbb{R}$,* - *$w(γ_i^1),w(γ_i^2) ∈ i\mathbb{R}$,* - *$w(γ_i^0) ∈ \mathbb{R}$,* - *$u(e_i^0) ∈ i\mathbb{R}$.* *(The signs on the second line depend on whether the peripheral edge agrees in orientation with the meridian of the segment.)* *Proof.* The first item is true by definition, and the rest are simple consequences of the top-bottom symmetry by reflection (see Remark [Remark 14](#r:FAL-half-twist-symmetry){reference-type="ref" reference="r:FAL-half-twist-symmetry"} for FALs with half-twists); e.g. the symmetry fixes peripheral edges $e_j^l,e_j^r$, but performs an anti-$\mathbb{C}$-linear automorphism on the peripheral torus of $a_j$ which flips the meridian to $-1$, hence $u(e_j^l),u(e_j^r)$ must be pure imaginary. ◻ **Lemma 38**. 
*With the same setup as Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}, if $Ω$ is geometric, then $u(-)$ satisfies:* - *$u(e_i^0) ∈ i\mathbb{R}^{<0}$;* - *$u(e_j^l) ∈ i\mathbb{R}^{<0}$.* *Proof.* Follows from Remark [Remark 8](#r:geometric-ness){reference-type="ref" reference="r:geometric-ness"}. ◻ **Lemma 39**. *Let $L, Ω$ be as in Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}. Let $R$ be a non-bow-tie region of the bow-tie graph $B_L$ of $L$. Let $γ$ be an edge of $R$, and let $γ',γ''$ be any pair of non-intersecting diagonals of $R$ that share vertices with $γ$; these $γ$'s correspond to crossing arcs in the link complement.* *Let $ζ$ be the shape parameter at $γ$ as given by ([\[e:shape-parameter-labels\]](#e:shape-parameter-labels){reference-type="ref" reference="e:shape-parameter-labels"}), i.e., $ζ = w(γ)/ u(ε_{γ γ'}) u(ε_{γ γ''})$, where $ε_{γ γ'}, ε_{γ γ''}$ are the peripheral edges from $γ$ to $γ',γ''$ respectively.* *If $Ω$ is the geometric solution, then $ζ$ satisfies the following criteria: $ζ$ is real and $0 < ζ < 1$.* *Proof.* If $γ',γ''$ are also edges of $R$, then the fact that $ζ$ must be real follows directly from Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}; this means that every four consecutive vertices of $R$ must lie on a common circle, hence $R$ is inscribed in a circle, and it follows that $ζ$ is real for any $γ',γ''$ as well. We may also prove real-ness by the following more geometric argument, which leads more naturally into the rest of the proof. From the top-bottom reflection symmetry, it follows that $R$ can be isotoped to lie in a geodesic plane, and in particular, that the vertices of $R$ lie on a circle in the sphere at infinity. This implies that the shape parameters are real. Moreover, the polygon formed by $R$ must be a convex polygon.
Thus, if the path $γ' γ γ''$ traverses the vertices $v₀,v₁,v₂,v₃$, and $v₀,v₁,v₂$ are placed at $1,∞,0$, then $v₃$ is placed at the shape parameter $ζ$, and convexity of the polygon formed by $R$ implies that $0 < ζ < 1$. ◻ ### Sufficiency of Criteria of Section [4.2.1](#s:FAL-nec-crit){reference-type="ref" reference="s:FAL-nec-crit"} {#s:suff-nec-crit} **Theorem 40**. *Let $L$ be a link obtained from fully augmenting a link $K$ in $S^3$ (resp. $\mathbf{T}$) with bow-tie graph $B_L$ in $Σ = S^2$ (resp. $Σ = T^2$). Let $Γ_L$ be the region graph of $L$ (see Definition [Definition 21](#d:dual-ish-bow-tie-graph){reference-type="ref" reference="d:dual-ish-bow-tie-graph"}).* *Then an algebraic solution $Ω = (w(-),u(-))$ to the TT method that satisfies the criteria in Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"} determines a unique circle packing realizing $Γ_L$ up to orientation-preserving conformal transformations of $Σ$; moreover, if in addition it satisfies the criteria in Lemma [Lemma 38](#l:fal-ortn-criteria){reference-type="ref" reference="l:fal-ortn-criteria"} and Lemma [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"}, then the circle packing is univalent.* *Conversely, a circle packing realizing $Γ_L$ in $Σ$ determines an algebraic solution $Ω$ to the TT method that satisfies the criteria of Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}, and moreover if it is univalent, then $Ω$ satisfies the criteria of Lemma [Lemma 38](#l:fal-ortn-criteria){reference-type="ref" reference="l:fal-ortn-criteria"} and Lemma [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"}.* *Proof.* We first work with the case where the link is in $S^3$; we deal with the $\mathbf{T}$ case later, which only needs some minor modifications.
Consider the developing map ${\mathcal{D}}: \widetilde{M} \to {\mathbb{H}^3}$ from Section [4.1.2](#s:alg-to-geom-reconstruction){reference-type="ref" reference="s:alg-to-geom-reconstruction"}. Let $P$ be the polyhedron (3-cell) in the polyhedral decomposition of $S^3 - L$ that lies above the projection plane (see Definition [Definition 9](#d:decomp-S3){reference-type="ref" reference="d:decomp-S3"}). The developing map ${\mathcal{D}}$ sends the vertices of $P$ to $∂{\mathbb{H}^3}$. We need to show that if $Ω$ satisfies the criteria of Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}, then these vertices are the points of tangency of a circle packing realizing $Γ_L$, the region graph of $L$. As noted in the proof of Lemma [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"}, the criteria of Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"} imply the real-ness of some shape parameters, which means the vertices of $R$ lie on a circle; we denote this circle by $X_R$. Let $η$ be an edge of the region graph $Γ_L$ connecting vertices $R,R'$ of $Γ_L$; in terms of the bow-tie graph $B_L$, $η$ is a vertex of $B_L$ that meets regions $R$ and $R'$. If $η$ corresponds to an augmentation circle $C_i$ of $L$, then the first of the criteria, $u(e_i^{μ 1}) = u(e_i^{μ 2}) = 1$, implies that $X_R$ and $X_{R'}$ are tangent, while if $η$ corresponds to a segment $a_j$ of $L$, then the second item of the criteria, $u(e_i^{1l}) = u(e_i^{2l}) = \pm 1/2$, $u(e_i^{1r}) = u(e_i^{2r}) = \pm 1/2$, implies that the circles $X_R, X_{R'}$ are tangent. (These are easiest to understand by placing $η$ at $∞$, so that $X_R,X_{R'}$ become straight lines, and these criteria imply that the lines are parallel.) Thus, $\{X_R\}$ is a circle packing for $Γ_L$.
Now suppose $Ω$ also satisfies the criteria of Lemma [Lemma 38](#l:fal-ortn-criteria){reference-type="ref" reference="l:fal-ortn-criteria"} and Lemma [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"}. The criterion $0 < ζ < 1$ of Lemma [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"} implies that the vertices of $R$ appear in order along $X_R$, i.e., we can orient $X_R$ so that $\{X_R\}$ is locally order-preserving. Then it is easy to see that the criteria $u(e_i^0) ∈ i\mathbb{R}^{<0}$, $u(e_j^l) ∈ i\mathbb{R}^{<0}$ imply that the circle packing is locally univalent. Hence, by Lemma [Lemma 20](#l:univalence-local){reference-type="ref" reference="l:univalence-local"}, $\{X_R\}$ is univalent.  \ Next we prove the converse. Let $\{X_R\}$ be a circle packing realizing $Γ_L$. We need to choose horospheres $Q(η)$ for each vertex $η$ of $Γ_L$, and identifications of $Q(η)$ with $\mathbb{C}$. Place $η$ at $∞$. Suppose $η$ corresponds to an augmentation circle $C_i$ of $L$. Let $γ_i^{1l},γ_i^{1r},γ_i^{2l},γ_i^{2r}$ be the edges of $B_L$ meeting $η$, as in Figure [15](#f:BL-crossing-arcs-edges){reference-type="ref" reference="f:BL-crossing-arcs-edges"}, and let $η_i^{1l},η_i^{1r},η_i^{2l},η_i^{2r}$ be their endpoints besides $η$. Let $R¹$ be the region that meets $γ_i^{1l},γ_i^{1r}$, let $R²$ be the region that meets $γ_i^{2l},γ_i^{2r}$, and let $R^l,R^r$ be the regions adjacent to $γ_i^{0l}, γ_i^{0r}$ respectively (in Figure [15](#f:BL-crossing-arcs-edges){reference-type="ref" reference="f:BL-crossing-arcs-edges"}, the red arrow in the middle of the bow-tie sits at $η$ and is pointing from $R¹$ to $R²$; $R^l,R^r$ are the (non-bow-tie) regions above and below $f_{C_i}^+,f_{C_i}^-$ respectively). Since $η$ is at $∞$, the circles $X_{R¹},X_{R²}$ are parallel lines, and $X_{R^l}, X_{R^r}$ are circles tangent to both lines. The vertices $η_i^{1l}$ etc.
are the points of tangency among these circles, more specifically, $η_i^{1l} = X_{R¹} ∩ X_{R^l}, η_i^{1r} = X_{R¹} ∩ X_{R^r}, η_i^{2l} = X_{R²} ∩ X_{R^l}, η_i^{2r} = X_{R²} ∩ X_{R^r}$; hence, they form a rectangle. Thus, it is possible to choose $Q(η)$ so that $u(e_i^{μ1}) = u(e_i^{μ2}) = 1$; we can choose $Q(η)$ to be the horizontal plane at a height such that the intersection points of $γ_i^{1l},γ_i^{1r}$ with $Q(η)$ are distance 1 apart in $Q(η)$, and choose the identification with $\mathbb{C}$ so that the peripheral edge $e_i^{μ1}$, which runs from $γ_i^{1l}$ to $γ_i^{1r}$, is identified with 1. It also follows immediately that $u(e_i^0) ∈ i\mathbb{R}$. Suppose $η$ corresponds to a segment $a_j$ of $L$. Then a similar procedure and analysis shows that we can choose $Q(η)$ so that for the peripheral edge $e = e_i^{2l}$ (as in Figure [15](#f:BL-crossing-arcs-edges){reference-type="ref" reference="f:BL-crossing-arcs-edges"}; in general $e$ of the form $e_i^{[1/2][l/r]}$), we have $u(e_i^{2l}) = \pm 1/2$ (here the sign is $+$ iff $e_i^{2l}$ agrees in orientation with the meridian of $a_j$). Thus, we can guarantee the second item $u(e_i^{1l}) = u(e_i^{2l}) = \pm 1/2$, $u(e_i^{1r}) = u(e_i^{2r}) = \pm 1/2$, of Lemma [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}. The third item $u(e_j^l) = u(e_j^r) ∈ i\mathbb{R}$ also follows immediately because the peripheral edges form a rectangle as before. Then the fact that the vertices of $R$ are on a circle $X_R$ means that shape parameters are real, hence implying the remaining items $w(γ_i^1),w(γ_i^2) ∈ i\mathbb{R}$ and $w(γ_i^0) ∈ \mathbb{R}$. Now suppose $\{X_R\}$ is univalent. Being locally order-preserving means that the vertices of any such $γ',γ,γ''$ from Lemma [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"} form a convex quadrilateral, hence the shape parameter $ζ$ is between 0 and 1.
Local univalence implies the criteria of Lemma [Lemma 38](#l:fal-ortn-criteria){reference-type="ref" reference="l:fal-ortn-criteria"}, i.e., $u(e_i^0) ∈ i\mathbb{R}^{<0}$, $u(e_j^l) ∈ i\mathbb{R}^{<0}$. To see this, consider a point of tangency $η$ that corresponds to an augmentation circle $C_i$. Keeping the notation above, suppose we arrange the parallel lines $X_{R¹}, X_{R²}$ to run vertically, with $X_{R¹}$ on the left of $X_{R²}$. Then $e_i^{μ1}$ and $e_i^{μ2}$ point vertically upwards. By local univalence, neighboring circles must lie between them, so the interior region of $X_{R¹}$ must lie to its left, and the interior region of $X_{R²}$ must lie to its right. The edge $e_i^0$ runs horizontally from $X_{R¹}$ to $X_{R²}$, so by comparing with $u(e_i^{μ1}) = 1$, we see that $u(e_i^0) ∈ i\mathbb{R}^{<0}$. The case of $η$ corresponding to a segment $a_j$ is dealt with similarly.  \ Finally, we address some modifications to the argument for FALs in $\mathbf{T}$. A circle packing realizing $Γ_L$ is equivalent to a circle packing in $\mathbb{C}$ realizing its universal cover $\widetilde{Γ}_L$ that is invariant under a lattice group of translations of the plane. Instead of a polyhedron $P$, we consider the top torihedron ${\mathcal{T}}$, and consider a copy of its universal cover $\widetilde{{\mathcal{T}}}$ in $\widetilde{M}$. The developing map ${\mathcal{D}}$ sends vertices of $\widetilde{{\mathcal{T}}}$ to $∂{\mathbb{H}^3}$, which should serve as the points of tangency of a circle packing realizing $\widetilde{Γ}_L$. All the arguments essentially hold verbatim; one just has to ensure the constructions are co-/invariant with respect to the group of translations. We note that in the arguments above, we often send some point to $∞$, but this is merely for convenience/intuition, so the arguments still hold even though we are working with circle packings in $\mathbb{C}$ rather than in $S^2$. ◻ **Corollary 41**.
*Let $L$ be as in Theorem [Theorem 40](#t:FAL-circle-packing-TTmethod){reference-type="ref" reference="t:FAL-circle-packing-TTmethod"}.* *Then an algebraic solution $Ω = (w(-),u(-))$ to the TT method on $L$ is the geometric solution if and only if it satisfies the criteria in Lemmas [Lemma 37](#l:fal-pure-imag){reference-type="ref" reference="l:fal-pure-imag"}, [Lemma 38](#l:fal-ortn-criteria){reference-type="ref" reference="l:fal-ortn-criteria"}, and [Lemma 39](#l:fal-shape-param){reference-type="ref" reference="l:fal-shape-param"}.* *Proof.* The forward implication is precisely what is proven in those lemmas. If the criteria are satisfied, then by Theorem [Theorem 40](#t:FAL-circle-packing-TTmethod){reference-type="ref" reference="t:FAL-circle-packing-TTmethod"}, we have a univalent circle packing, hence by Lemma [Lemma 22](#l:geometric-univalence){reference-type="ref" reference="l:geometric-univalence"}, we are done. ◻ # Examples {#s:examples} The following example applies the TT method to the standard Square Weave in the thickened torus. ![The square weave. ](figures/diagram-square-weave-05.png "fig:"){#f:square-weave width="10cm"} ![The square weave. ](figures/diagram-square-weave-05-vertical-face.png "fig:"){#f:square-weave width="5cm"} We first deal with labels from the old TT method. The edge equations of the TT method give us the following set of equations. 
$$\label{e:square-edge-eqn} u_1^r - u_1^l = u_2^r - u_2^l = -(u_3^r - u_3^l) = -(u_4^r - u_4^l) = -(u_5^r - u_5^l) = -(u_6^r - u_6^l) = u_7^r - u_7^l = u_8^r - u_8^l = 1$$ Under a symmetry of the left diagram in Figure [19](#f:square-weave){reference-type="ref" reference="f:square-weave"}, namely the clockwise rotation by $90^{∘}$ preserving $R$, the peripheral edges and crossing arcs get sent to each other: $$\label{e:rot-90-sym-label} ε_1^r \mapsto ε_2^r \mapsto -ε_3^l \mapsto -ε_4^l \mapsto ε_1^r \;\;\; ; \;\;\; ε_1^l \mapsto ε_2^l \mapsto -ε_3^r \mapsto -ε_4^r \mapsto ε_1^l \;\;\; ; \;\;\; γ_{c_1} \mapsto γ_{c_2} \mapsto γ_{c_3} \mapsto γ_{c_4} \mapsto γ_{c_1}$$ Note that the meridians also get sent to meridians, but with sign changes: if $μ_{a_i}$ refers to the meridian on the overpass segment $a_i$, then $$\label{e:rot-90-sym-meridian} μ_{a_1} \mapsto -μ_{a_2} \mapsto -μ_{a_3} \mapsto μ_{a_4} \mapsto μ_{a_1}$$ Thus, we have: $$\label{e:u-w} u := u_1^r = u_2^r = u_3^l = u_4^l \;\;\; ; \;\;\; w := w_1 = -w_2 = w_3 = -w_4$$ where in the left half, the minus signs in ([\[e:rot-90-sym-label\]](#e:rot-90-sym-label){reference-type="ref" reference="e:rot-90-sym-label"}) get canceled out by the minus signs in ([\[e:rot-90-sym-meridian\]](#e:rot-90-sym-meridian){reference-type="ref" reference="e:rot-90-sym-meridian"}). We get the following shape parameters for the region $R$: $$ζ_{1,R} = \frac{+w_1}{u_4^l u_1^r} = \frac{w}{u^2} \;\; , \;\; ζ_{2,R} = \frac{-w_2}{u_1^r u_2^r} = \frac{w}{u^2} \;\; , \;\; ζ_{3,R} = \frac{+w_3}{u_2^r u_3^l} = \frac{w}{u^2} \;\; , \;\; ζ_{4,R} = \frac{-w_4}{u_3^l u_4^l} = \frac{w}{u^2}$$ As one would expect from the rotation symmetry, the shape parameters above are all equal. 
The shape parameters satisfy the following relations (from $f_4 = 0$ in ([\[e:poly-recursive\]](#e:poly-recursive){reference-type="ref" reference="e:poly-recursive"})): $$ζ_{2,R} + ζ_{3,R} - 1 = 0 \;\;,\;\; ζ_{3,R} + ζ_{4,R} - 1 = 0 \;\;,\;\; ζ_{1,R} + ζ_{2,R} - 1 = 0$$ Since $ζ_{1,R} = ζ_{2,R} = ζ_{3,R} = ζ_{4,R} = \frac{w}{u^2}$, each of the above relations reduces to $2(\frac{w}{u^2}) - 1 = 0 \implies w = \frac{1}{2}u^2$. By applying the same process to $R'$, we have $ζ_{1,R'} = ζ_{2,R'} = ζ_{3,R'} = ζ_{4,R'} = \frac{1}{2}$, and $ζ_{1,R'} = -\frac{w}{u'^2}$, where $u' = u_1^l = u_8^l = u_3^r = u_6^r$. But we also have, by edge equations ([\[e:square-edge-eqn\]](#e:square-edge-eqn){reference-type="ref" reference="e:square-edge-eqn"}), that $u' = u_3^r = u_3^l - 1 = u - 1$. So we have $w = -\frac{1}{2}(u-1)^2$. Thus we have $-u^2 = (u-1)^2$, from which we obtain $u = \frac{1}{2} \pm \frac{1}{2}i$ and $w = \pm \frac{1}{4}i$. Observe that reflection of the link diagram across a link component, say $a_2$, is a symmetry of the link diagram, which induces an orientation-reversing homeomorphism of the link complement that also fixes each link component, thus acts on each peripheral torus by an anti-$\mathbb{C}$-linear automorphism. For example, this symmetry sends $μ_2$ to $-μ_2$ and thus acts as $z \mapsto -\overline{z}$; since it sends $ε_1^l$ to $ε_1^r$, we have $u = u_1^r = -\overline{u_1^l} = -\overline{u'} = 1 - \overline{u}$, which is consistent with $u = \frac{1}{2} \pm \frac{1}{2}i$.  \ Next, we solve the equations involving new crossing labels and edge labels, i.e., those involving ${*_N},{*_S}$ (the Hopf link components). The vertical region in Figure [19](#f:square-weave){reference-type="ref" reference="f:square-weave"} gives rise to a vertical region equation as in ([\[e:vertical-region-equations\]](#e:vertical-region-equations){reference-type="ref" reference="e:vertical-region-equations"}). 
It is a truncated ideal triangle bounded by crossing arcs and peripheral edges $γ_{a_1}, ε_{a_1 c_1}, γ_{c_1}, ε_{a_2 c_1}, γ_{a_2}, ε_{a_1 a_2}$, and we have: $$\label{e:hopf-link-wall} \frac{w_{a_1}}{u_{a_1 a_2} u_{a_1 c_1}} = \frac{w_1}{u_{a_1 c_1} u_{a_2 c_1}} = -\frac{w_{a_2}}{u_{a_1 a_2} u_{a_2 c_1}} = 1$$ The symmetry of reflecting across $a_2$ preserves the vertical region in Figure [19](#f:square-weave){reference-type="ref" reference="f:square-weave"}, so $u_{a_2 c_1} = -\frac{1}{2}(u_1^l + u_1^r) = -\frac{1}{2}(u' + u) = \mp\frac{1}{2}i$. Symmetry across $a_1$ gives $u_{a_1 c_1} = -\frac{1}{2}$. Note that $u_{a_1 a_2}$ depends on the choice of meridian on ${*_N}$; we choose the meridian to be homotopic to $a_2$, so that $u_{a_1 a_2} = \frac{1}{2}$. Then from ([\[e:hopf-link-wall\]](#e:hopf-link-wall){reference-type="ref" reference="e:hopf-link-wall"}), $$\label{e:blue-region-consequences} w_{a_1} = -\frac{1}{4} \;\; , \;\; w_1 = \pm \frac{1}{4} i \;\; , \;\; w_{a_2} = \pm \frac{1}{4} i \;\;\;\; , \;\;\;\; u_{a_2 c_1} = \mp\frac{1}{2} i \;\; , \;\; u_{a_1 c_1} = -\frac{1}{2} \;\; , \;\; u_{a_1 a_2} = \frac{1}{2}$$ The clockwise $90^{∘}$-rotation about $R$ of the diagram sends $γ_{a_1}$ to $γ_{a_2}$, so we must have, and indeed we do, $|w_{a_1}| = |w_{a_2}|$. 
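Before a sign is fixed, the two branches of labels above can be checked numerically. The following sanity check (ours, not part of the TT method; variable names are ad hoc) confirms that, for either sign choice, the shape parameters of $R$ and $R'$ equal $1/2$ and the three ratios in the vertical region equation all equal $1$:

```python
# Verify the square-weave labels for both sign choices.
# For s = ±1: u = 1/2 + s*i/2 and w = (1/2)u^2 = s*i/4.
for s in (+1, -1):
    u = 0.5 + s * 0.5j
    w = s * 0.25j
    up = u - 1                    # u' = u - 1, from the edge equations

    # Shape parameters of R and R' both equal 1/2.
    assert abs(w / u**2 - 0.5) < 1e-12
    assert abs(-w / up**2 - 0.5) < 1e-12

    # Labels of the vertical (Hopf-link) region for this sign choice.
    w_a1, w_1, w_a2 = -0.25, s * 0.25j, s * 0.25j
    u_a2c1 = -0.5 * (up + u)      # = -(s/2)i
    u_a1c1, u_a1a2 = -0.5, 0.5

    # The three ratios in the vertical region equation all equal 1.
    assert abs(w_a1 / (u_a1a2 * u_a1c1) - 1) < 1e-12
    assert abs(w_1 / (u_a1c1 * u_a2c1) - 1) < 1e-12
    assert abs(-w_a2 / (u_a1a2 * u_a2c1) - 1) < 1e-12

    # Rotational symmetry: |w_{a_1}| = |w_{a_2}|.
    assert abs(abs(w_a1) - abs(w_a2)) < 1e-12
```

The two branches are complex conjugates of one another, matching the remark below that the other sign gives the same link complement with reversed orientation.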
The rotation sends the meridian $μ_N$ at ${*_N}$ to $-i$, and further careful consideration shows that we must have $w_{a_1} = -i w_{a_2}$, and thus we pick out a sign for $u,w$ of ([\[e:u-w\]](#e:u-w){reference-type="ref" reference="e:u-w"}): $$u = \frac{1}{2} - \frac{1}{2} i \;\;\;,\;\;\; w = - \frac{1}{4} i$$ Note that choosing the other sign would give the algebraic solution for the same link complement but with reversed orientation (since the rotation would send the meridian $μ_N$ at ${*_N}$ to $i$). All other labels are obtained by symmetry (e.g. rotating by $180^{∘}$ about a crossing). The algebraic solution is unique, and hence is also the geometric solution. **Example 42**. In Figure [21](#f:counter-xmp-FAL){reference-type="ref" reference="f:counter-xmp-FAL"}, we present an example of a FAL in $S^3$ with more than one algebraic solution to the TT method. By the correspondence between algebraic solutions and circle packings discussed in Theorem [Theorem 40](#t:FAL-circle-packing-TTmethod){reference-type="ref" reference="t:FAL-circle-packing-TTmethod"}, we describe two such solutions, one geometric and the other not geometric, by presenting their corresponding circle packings. ![ Example of FAL in $S^3$ with more than one algebraic solution. The red arrows on the right diagram point to the centers of the bow-tie. ](figures/fig-counter-xmp-FAL.png "fig:"){#f:counter-xmp-FAL height="7cm"} ![ Example of FAL in $S^3$ with more than one algebraic solution. The red arrows on the right diagram point to the centers of the bow-tie. ](figures/fig-counter-xmp-bow-tie-graph.png "fig:"){#f:counter-xmp-FAL height="7cm"} ![ Circle packing corresponding to geometric solution; the right diagram is a close-up of the center of the left diagram; note in the very center there is one more tiny circle, which corresponds to the smallest (non-bow-tie) region in Figure [21](#f:counter-xmp-FAL){reference-type="ref" reference="f:counter-xmp-FAL"}. 
](figures/fig-counter-xmp-circ-pat-good.png "fig:"){#f:counter-xmp-good height="6cm"} ![ Circle packing corresponding to geometric solution; the right diagram is a close-up of the center of the left diagram; note in the very center there is one more tiny circle, which corresponds to the smallest (non-bow-tie) region in Figure [21](#f:counter-xmp-FAL){reference-type="ref" reference="f:counter-xmp-FAL"}. ](figures/fig-counter-xmp-circ-pat-good-closeup.png "fig:"){#f:counter-xmp-good height="6cm"} ![ Circle packing corresponding to non-geometric solution. The tiny circle in Figure [23](#f:counter-xmp-good){reference-type="ref" reference="f:counter-xmp-good"} is replaced by a bigger circle which is still tangent to the same three circles. Local univalence can be satisfied (by choosing the interior of this circle to be outside), but univalence clearly cannot be satisfied. ](figures/fig-counter-xmp-circ-pat-bad.png "fig:"){#f:counter-xmp-bad height="6cm"} ![ Circle packing corresponding to non-geometric solution. The tiny circle in Figure [23](#f:counter-xmp-good){reference-type="ref" reference="f:counter-xmp-good"} is replaced by a bigger circle which is still tangent to the same three circles. Local univalence can be satisfied (by choosing the interior of this circle to be outside), but univalence clearly cannot be satisfied. ](figures/fig-counter-xmp-circ-pat-bad-closeup.png "fig:"){#f:counter-xmp-bad height="6cm"} [^1]: With the right hand, the thumb points along the link's orientation and the other fingers point in the meridian's orientation. [^2]: This is one place where our conventions differ from [@TTmethod]; there, they identify the horospheres with $\mathbb{C}$ such that viewing the horosphere from the bulk looks like $\mathbb{C}$. [^3]: This is another place where our conventions differ from [@TTmethod]; there, they would take $-w_{γ_c}$ as the label. 
[^4]: Convention for concatenation of paths: from left to right. [^5]: Note the lack of complex conjugation and difference in $κ$ due to our conventions. [^6]: Our $f_n$'s differ from [@TTmethod] by a sign $(-1)^{n+1}$. [^7]: The notation $γ_{C_i},γ_{a_j},F_{C_i,a_j}$ is ambiguous as it could refer to vertical crossing arcs of the top or bottom torihedron; this should not be a problem as we usually only work with the top torihedron, with the same arguments applying to the bottom torihedron by symmetry. [^8]: We refer to these arcs $γ_a$ as crossing arcs even though they are not indexed by crossings, because they play a similar role to $γ_c$'s, and can in fact be thought of as crossing arcs when $L$ is considered as a link in the complement of the Hopf link, $S^3 \backslash H$. [^9]: $\widetilde{Ω}$ is essentially an algebraic solution on $\widetilde{τ}₀$, but technically we have not generalized the TT method to such manifolds, though it is quite easy to see how to do it.
{ "id": "2309.01282", "title": "Generalization of the Thistlethwaite--Tsvietkova Method", "authors": "Alice Kwon, Byungdo Park, and Ying Hong Tham", "categories": "math.GT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The fine 1-curve graph of a surface is a graph whose vertices are simple closed curves on the surface and whose edges connect vertices that intersect in at most one point. We show that the automorphism group of the fine 1-curve graph is naturally isomorphic to the homeomorphism group of a closed, orientable surface with genus at least one. author: - Katherine Williams Booth, Daniel Minahan, and Roberta Shapiro bibliography: - mainbib.bib title: Automorphisms of the fine 1-curve graph --- # Introduction Let $S = S_{g}$ be an oriented, connected, closed surface with genus $g$. The *fine 1-curve graph of $S$*, denoted $\mathcal{C}^\dagger_1(S)$, is a graph whose vertices are simple, closed, essential curves in $S$. There is an edge between two vertices $u$ and $v$ if $|u \cap v|\leq 1.$ Since homeomorphisms preserve intersections, there is a natural homomorphism $\mathop{\mathrm{Homeo}}(S) \rightarrow \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S)$ induced by the standard action of $\mathop{\mathrm{Homeo}}(S)$ on $S$. Our main result is the following. **Theorem 1**. *Let $S_g$ be a closed, orientable, connected surface with $g \geq 1$. The map $$\Phi:\mathop{\mathrm{Homeo}}(S_g) \rightarrow \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$$ is an isomorphism.* A version of the fine 1-curve graph, denoted $\mathcal{C}^\dagger_\pitchfork(S),$ was introduced by Le Roux--Wolff [@transaut]. In their paper, they work with connected, nonspherical and possibly nonorientable or noncompact surfaces. The vertices of their graph correspond to nonseparating curves while edges connect pairs of curves that are either disjoint or intersect once essentially (termed torus pairs below). Le Roux--Wolff show that $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_\pitchfork(S)$ is isomorphic to $\mathop{\mathrm{Homeo}}(S)$. 
**The fine curve graph.** The *fine curve graph* of $S,$ denoted $\mathcal{C}^\dagger(S)$, was introduced by Bowden--Hensel--Webb to study $\mathop{\mathrm{Diff}}_0(S)$ [@BHW]. The vertices of $\mathcal{C}^\dagger(S)$ are essential, non-peripheral, simple, closed curves in $S$. Two vertices are connected by an edge if their corresponding curves are disjoint. In analogy with our theorem, Long--Margalit--Pham--Verberne--Yao prove that $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger(S)$ is isomorphic to $\mathop{\mathrm{Homeo}}(S)$ [@fineaut]. **Graphs of curves and Ivanov's metaconjecture.** A classically studied object is the *curve graph* of $S$, denoted $\mathcal{C}(S)$. The vertices of $\mathcal{C}(S)$ are isotopy classes of essential, non-peripheral (if $S$ has boundary), simple closed curves in $S$. Two vertices are connected by an edge if they admit disjoint representatives. The *extended mapping class group* of $S,$ denoted $\mathop{\mathrm{MCG}}^{\pm}(S),$ is the group $\pi_0 (\mathop{\mathrm{Homeo}}(S))$ of connected components of $\mathop{\mathrm{Homeo}}(S)$. Ivanov showed that, for surfaces of genus at least three, $\mathop{\mathrm{Aut}}\mathcal{C}(S)$ is isomorphic to $\mathop{\mathrm{MCG}}^\pm(S)$ [@ivanov]. Following this, Ivanov made the following metaconjecture [@Farbproblems pg 84]. *Ivanov's metaconjecture.* Every object naturally associated to a surface S and having a sufficiently rich structure has $\mathop{\mathrm{MCG}}^{\pm}(S)$ as its groups of automorphisms. Moreover, this can be proved by a reduction to the theorem about the automorphisms of $\mathcal{C}(S).$ Brendle and Margalit showed that Ivanov's metaconjecture holds for a large number of graphs where edges correspond to disjointness [@BrendleMargalit Theorem 1.7]. **The $k$-curve graph.** Ivanov's metaconjecture may also hold for graphs of curves where edges do not correspond to disjointness. For example, consider the *$k$-curve graph*, $\mathcal{C}_k(S_g),$ which has the same vertices as the curve graph. 
Edges connect vertices whose isotopy classes admit representatives that intersect at most $k$ times. Agrawal--Aougab--Chandran--Loving--Oakley--Shapiro--Xiao [@kcurve] showed that when $g$ is sufficiently large with respect to $k$, $\mathop{\mathrm{Aut}}\mathcal{C}_k(S_g)$ is isomorphic to $\mathop{\mathrm{MCG}}^\pm(S_g)$ for any $k \geq 1$. Similarly to how the fine curve graph is an analogue of the curve graph, the fine 1-curve graph is an analogue of the $k$-curve graph for $k=1$. **Sketch of the proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}.** When $g \geq 2$, we prove Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"} by reducing to the theorem of Long--Margalit--Pham--Verberne--Yao [@fineaut]. In particular, we show that every automorphism of $\mathcal{C}^\dagger_1(S)$ preserves the set of edges connecting disjoint curves. For $g=1$, we reduce to the theorem of Le Roux--Wolff [@transaut] and show that the set of edges that are in $\mathcal{C}^\dagger_1(S)$ but not in $\mathcal{C}^\dagger_\pitchfork(S)$ are preserved by automorphisms. Edges in this set correspond to pairs of curves in a specific configuration; such a pair of curves will be called a pants pair and is defined in the following paragraph. *Torus pairs versus pants pairs.* There are two types of configurations of pairs of curves that intersect once. If a pair of curves crosses at their point of intersection, we call it a *torus pair*, as on the left side of Figure [2](#torusvspantsfig){reference-type="ref" reference="torusvspantsfig"}. We note that both curves that comprise a torus pair must be nonseparating. Otherwise, if neither curve crosses the other at their point of intersection, we call it a *pants pair*, as on the right hand side of Figure [2](#torusvspantsfig){reference-type="ref" reference="torusvspantsfig"}. 
These definitions are reminiscent of those in Long--Margalit--Pham--Verberne--Yao [@fineaut], with two key differences: we require all intersections to be single points (called *degenerate* in [@fineaut]) and the curves in a pants pair are allowed to be homotopic. ![Examples of torus pairs (left) and pants pairs (right)](Figures/Torus1.png "fig:"){#torusvspantsfig width="1.5in"} ![Examples of torus pairs (left) and pants pairs (right)](Figures/Pants2.png "fig:"){#torusvspantsfig width="1.5in"} **Paper outline.** In Section [2](#sec:separatingcurves){reference-type="ref" reference="sec:separatingcurves"}, we prove several preliminary results about separating curves. In Section [3](#toruspairssection){reference-type="ref" reference="toruspairssection"}, we show that torus pairs are preserved by automorphisms of $\mathcal{C}^\dagger_1(S)$. In Section [4](#pantspairssection){reference-type="ref" reference="pantspairssection"}, we show that pants pairs are preserved by automorphisms when $g \geq 2$. In Section [5](#torussection){reference-type="ref" reference="torussection"}, we prove Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"} in the case that $g = 1$. We conclude by proving Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"} in Section [6](#mainsection){reference-type="ref" reference="mainsection"}. **Acknowledgments.** The authors would like to thank their advisor Dan Margalit for many helpful conversations. The authors would also like to thank Jaden Ai, Ryan Dickmann, Jacob Guynee, Sierra Knavel, and Abdoul Karim Sane for useful discussions. The authors thank Frédéric Le Roux and Maxime Wolff for sharing their manuscript and further correspondence. The authors further thank Nick Salter for comments on a draft of the manuscript. The first author was supported by the National Science Foundation under Grant No. DMS-1745583. 
The third author was partially supported by the National Science Foundation under Grant No. DMS-2203431. # Separating curves and their homotopy classes {#sec:separatingcurves} In this section, we give several preliminary results about separating curves. We prove in Lemma [Lemma 2](#separating){reference-type="ref" reference="separating"} that separating curves are preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g)$. In Lemma [Lemma 3](#sepdisjointhom){reference-type="ref" reference="sepdisjointhom"}, we prove that pairs of homotopic separating curves that are adjacent in $\mathcal{C}^\dagger_1(S_g)$ are preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g)$. We also define a new quotient graph that will be used in future sections. The relation used to define the quotient graph is proven to be equivalent to homotopy of curves in Lemma [Lemma 4](#equividhomo){reference-type="ref" reference="equividhomo"}. Moreover, we show that the structure of this quotient graph is preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g)$ in Lemma [Lemma 6](#seplinkaction){reference-type="ref" reference="seplinkaction"}. **Preliminary graph theoretic definitions.** Two vertices connected by an edge in a graph are called *adjacent*. The *link* of a vertex $v$ in a graph $G$, denoted $\mathop{\mathrm{link}}(v)$, is the subgraph induced by the vertices adjacent to $v$ in $G$. A graph $G$ is a *join* if there is a partition of the vertices of $G$ into at least two nonempty subsets, called *parts*, such that every vertex in one part is adjacent to all vertices in all of the other parts. A graph $G$ is an $n$-*join* if it is a join that can be partitioned into $n$ sets but cannot be partitioned into $n+1$ sets. 
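For intuition, the join condition can be tested mechanically: a finite graph is a join exactly when its complement graph is disconnected, and the parts of the finest join decomposition are the connected components of the complement. Below is a minimal sketch in Python (the helper and the example graph are ours, not from the paper):

```python
def join_parts(vertices, edges):
    """Parts of the finest join decomposition of a finite graph.

    A graph is a join iff its complement is disconnected; the parts
    of the finest decomposition are the complement's components.
    """
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # Adjacency of the complement graph.
    comp = {v: set(vertices) - {v} - adj[v] for v in vertices}
    # Connected components of the complement, via depth-first search.
    seen, parts = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, part = [v], set()
        while stack:
            x = stack.pop()
            if x not in part:
                part.add(x)
                stack.extend(comp[x] - part)
        seen |= part
        parts.append(part)
    return parts

# A 2-join: parts {1, 2} and {3, 4}, every cross edge present
# and no edges within a part.
parts = join_parts([1, 2, 3, 4], [(1, 3), (1, 4), (2, 3), (2, 4)])
assert sorted(sorted(p) for p in parts) == [[1, 2], [3, 4]]
```

In this combinatorial language, a graph is an $n$-join precisely when its complement has exactly $n$ connected components, and a graph whose complement is connected (one part) is not a join at all.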
A graph $G$ is a *cone* if there is a vertex $v,$ called a *cone point*, that is adjacent to all other vertices in $G.$ We say that a separating curve $u$ *separates* the curves $a$ and $b$ if $a$ and $b$ lie in the closures of distinct connected components of $S\setminus u.$ If $a$ and $b$ are contained in the closure of the same connected component of $S\setminus u$, then they are on the same side of $u;$ otherwise, $a$ and $b$ are on different sides of $u.$ We begin by showing that the sets of separating and nonseparating curves are each preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g)$. **Lemma 2**. *Let $\varphi\in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$. Then $u$ is a separating curve if and only if $\varphi(u)$ is a separating curve. Moreover, $\varphi$ preserves the sides of $u$.* *Proof.* We will show that a curve $u$ is separating if and only if $\mathop{\mathrm{link}}(u)$ is a join. In fact, we will show that it is a 2-join. Suppose $u$ is separating. Then no curve in $\mathop{\mathrm{link}}(u)$ can form a torus pair with $u$ and must be either disjoint from or form a pants pair with $u$. Therefore, every curve in $\mathop{\mathrm{link}}(u)$ will lie in the closure of exactly one of the components of $S_g\setminus u$. Let $A$ and $B$ be the closures of the two components of $S_g\setminus u.$ We claim that $\mathop{\mathrm{link}}(u)$ is a join whose two parts are given by the curves contained in $A$ and the curves contained in $B.$ Suppose $a$ and $b$ are curves in $\mathop{\mathrm{link}}(u)$ such that $a\subset A$ and $b\subset B.$ Since $|a\cap u|\leq 1$ and $|b\cap u|\leq 1,$ and $(a\cap b) \subset u,$ we have that $|a\cap b|\leq 1.$ We conclude that $a$ and $b$ are adjacent in $\mathcal{C}^\dagger_1(S_g),$ concluding the proof of the claim. Suppose $u$ is not separating. Then, $S_g\setminus u$ is a single connected component. 
Let $a,b\in \mathop{\mathrm{link}}(u).$ Then, we can move $a$ off of itself and isotope it to intersect $a$ and $b$ at least twice each and $u$ at most once; call this new curve $a'.$ It follows that $a'$ is adjacent to neither $a$ nor $b$, so $a$ and $b$ cannot be in different parts of a join. We conclude that $\mathop{\mathrm{link}}(u)$ cannot be a join. If two curves $a,b\in\mathop{\mathrm{link}}(u)$ lie in the closure of the same component of $S_g\setminus u$, they must lie in the same part of the join $\mathop{\mathrm{link}}(u)$. It follows that $\mathop{\mathrm{link}}(u)$ is a 2-join. Thus, $\varphi(a)$ and $\varphi(b)$ are in the same part of the join $\mathop{\mathrm{link}}(\varphi(u))$ and therefore lie in the closure of the same component of $S\setminus\varphi(u).$ ◻ **Link of an edge and the separating link.** Let $u$ and $v$ be adjacent vertices in $\mathcal{C}^\dagger_1(S).$ The *link of an edge* spanned by $u,v$ is $\mathop{\mathrm{link}}(u,v)=\mathop{\mathrm{link}}(u)\cap \mathop{\mathrm{link}}(v).$ The *separating link of* $(u,v)$, denoted $\mathop{\mathrm{link}}^{\text{sep}}(u,v),$ is the subgraph of $\mathop{\mathrm{link}}(u,v)$ induced by separating curves. We use these definitions to show that homotopic pairs of separating curves adjacent in $\mathcal{C}^\dagger_1(S_g)$ are preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g).$ **Lemma 3**. *Let $u$ and $v$ be adjacent separating curves in $\mathcal{C}^\dagger_1(S_g)$. Then $u$ and $v$ are homotopic if and only if $\mathop{\mathrm{link}}(u,v)$ is a 3-join such that one of the parts contains only separating curves. 
Hence, for any $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S)$, a separating curve $u \in \mathcal{C}^\dagger_1(S)$ is homotopic to an adjacent curve $v \in \mathcal{C}^\dagger_1(S)$ if and only if $\varphi(u)$ is homotopic to $\varphi(v)$.* *Proof.* Because $u$ and $v$ are both separating and are either disjoint or form a pants pair, $S\setminus(u\cup v)$ must have three connected components. By a similar argument to Lemma [Lemma 2](#separating){reference-type="ref" reference="separating"}, $\mathop{\mathrm{link}}(u,v)$ is a 3-join induced by said components. Moreover, the link of a pair of adjacent nonseparating curves or a nonseparating and separating adjacent pair of curves is not a 3-join. This follows directly using the same argument that the link of a nonseparating curve is not a join and from the fact that such pairs will not split a surface into three connected components. If $u$ and $v$ are homotopic as in Figure [\[homosepfig\]](#homosepfig){reference-type="ref" reference="homosepfig"}, then one of the components of $S\setminus(u\cup v)$ is a (possibly pinched) annulus. All essential curves that lie in the (possibly pinched) annulus are separating, so it follows that one of the parts consists of separating curves. If $u$ and $v$ are not homotopic as in Figure [\[nonhomosepfig\]](#nonhomosepfig){reference-type="ref" reference="nonhomosepfig"}, then $S\setminus(u\cup v)$ consists of three components, each of which has positive genus. Therefore, each component supports a nonseparating curve. It follows that all parts of the join contain a nonseparating curve.  ◻ With this in mind, we prove the following result about homotopic curves in the link of an edge. The original idea behind the proof is in Bowden--Hensel--Webb [@BHW] and is expanded on in Long--Margalit--Pham--Verberne--Yao [@fineaut]. **Lemma 4**. *Let $u,v$ be two nonseparating adjacent curves in $\mathcal{C}^\dagger_1(S)$. 
Let $a$ and $b$ be two separating curves in $\mathop{\mathrm{link}}(u,v)$. Then $a$ and $b$ are homotopic if and only if there is a path from $a$ to $b$ in $\mathop{\mathrm{link}}(u,v)$ consisting of curves homotopic to $a$ and $b$.* We need one auxiliary result before proving Lemma [Lemma 4](#equividhomo){reference-type="ref" reference="equividhomo"}. **Lemma 5**. *Let $u,v$ be two nonseparating adjacent curves in $\mathcal{C}^\dagger_1(S)$. Let $a$ and $b$ be two separating, homotopic curves in $\mathop{\mathrm{link}}(u,v)$. Then there is a homotopy $\psi:S^1 \times I \rightarrow S_g$ from $a$ to $b$ such that, for every $t \in I$, there is a closed neighborhood $T \subseteq I$ with $t \in T$ where $\psi(S^1 \times T)$ is contained in a possibly pinched annulus $A_T$ such that $u$ intersects $A_T$ at most once, and $v$ intersects $A_T$ at most once.* *Proof.* There are three cases to consider. *Case 1: $u$ and $v$ are on the same side of $a$.* Since $a$ and $b$ are homotopic, $u$ and $v$ must be on the same side of $b$ as well. Choose an annulus $A$ with $a \subseteq \partial A$ such that $A$ is supported on the side of $a$ that does not contain $u$ and $v$. Then $a$ is homotopic to the other boundary component $a'$ of $A$, and each curve in this homotopy is disjoint from $u$ and $v$ (besides $a$ itself). Similarly, construct $b'$ adjacent to $b$. Then $a'$ and $b'$ are disjoint from $u$ and $v$, so there is a homotopy between $a'$ and $b'$ that consists only of vertices of $\mathop{\mathrm{link}}(u,v)$ disjoint from $u$ and $v$. Let $\psi:S^1 \times I \rightarrow S_g$ be the resulting homotopy from $a$ to $b$. Then for any $t \in I$, there is an annulus $A_t$ containing $\psi(S^1 \times \{t\})$ that contains $\psi(S^1 \times T)$ for some closed neighborhood $T$ of $t$, and that only intersects $u$ and $v$ each in at most one point, namely either the points of intersection of $a$ with $u$ and $v$, or the points of intersection of $b$ with $u$ and $v$, so the lemma holds. 
*Case 2: $u$ and $v$ are on opposite sides of $a$, and $u$ and $v$ are disjoint.* As in Case 1, it suffices to show that $a$ is homotopic to $a' \in \mathop{\mathrm{link}}(u,v)$ disjoint from $u$ and $v$ via vertices of $\mathop{\mathrm{link}}(u,v)$. Since $u$ and $v$ are disjoint, we can choose closed annuli $A_u$ and $A_v$ such that $u \subseteq A_u$, $v \subseteq A_v$, and $A_u \cap a$, $A_v \cap a$ are each either connected or empty. If $A_u \cap a$ is nonempty, let $d_u$ be the unique subinterval of $\partial A_u$ that connects the two points of $a \cap \partial A_u$ to each other, such that the path $A_u \cap a$ is homotopic rel $\partial A_u \cap a$ to $d_u$. Let $a''$ be the curve given by removing $a \cap A_u$ from $a$ and replacing it with $d_u$. Construct $d_v$ similarly, and let $a'$ be the resulting curve given by removing $a'' \cap A_v = a \cap A_v$ and replacing it with $d_v$. Then $a$ is homotopic to $a'$ and $a'$ is disjoint from $u$ and $v$. Repeat this same process for $b$ to get $b'$, and then $a'$ and $b'$ are homotopic through vertices disjoint from $u$ and $v$. Similarly to Case 1, this completes the proof of the lemma in this case. *Case 3: $u$ and $v$ are on opposite sides of $a$, and $u$ and $v$ intersect.* Let $\delta$ be the unique point of intersection in $u \cap v$. Since $u$ and $v$ are on opposite sides of $a$, we see that $\delta \in a$. Now, since $b$ is homotopic to $a$, it must be the case that $u$ and $v$ are on opposite sides of $b$ as well. Therefore, $\delta \in b$ as well. Now, we can think of $\delta$ as a marked point and identify $a$, $b$, $u$, and $v$ with arcs $\overline{a},\ \overline{b},\ \overline{u},$ and $\overline{v},$ respectively, based at $\delta$. We see that $\overline{a}$ and $\overline{b}$ are disjoint from $\overline{u}$ and $\overline{v}$, since as curves $a$ and $b$ only intersect $u$ and $v$ each at most once. 
Hence the arcs $\overline{a}$ and $\overline{b}$ are homotopic to each other in the marked surface $(S_g, \delta)$ via a path of arcs disjoint from $\overline{u}$ and $\overline{v}$. But then this homotopy of arcs is canonically identified with a homotopy of curves in $\mathop{\mathrm{link}}(u,v)$ that all intersect $u$ and $v$ only at $\delta$. Let $\psi$ be the resulting homotopy. For any $t \in I$, there is a pinched annulus $A$ containing $\psi(S^1 \times T)$ for some closed neighborhood $t \in T$, such that $A$ intersects $u$ and $v$ only at $\delta$, so the lemma holds. ◻ *Proof of Lemma [Lemma 4](#equividhomo){reference-type="ref" reference="equividhomo"}.* The backwards direction follows by definition, so we only need to prove the forwards direction. Let $\psi:S^1 \times I \rightarrow S_g$ be a homotopy as in Lemma [Lemma 5](#disjisolemma){reference-type="ref" reference="disjisolemma"}. For each point $t_i \in I$, choose an interval $T_i = [s_i, s'_i] \subseteq I$ with $s_i \neq s'_i$ that contains $t_i$ such that the set $\psi(S^1 \times T_i)$ is contained in a (possibly pinched) annulus $A_i$, such that $A_i$ intersects $u$ and $v$ in at most one point each. Since $I$ is compact, there is a finite collection of such intervals $T_1,\ldots, T_n$ whose interiors cover $I$. By restricting each $T_i$, we may assume that $s'_i = s_{i+1}$ for all $0 \leq i < n$. Observe that if two separating curves $x,y \subseteq S_g$ are contained in the interior of a (possibly pinched) annulus $A$, then a curve $z\subset \partial A$ is both adjacent and homotopic to $x$ and $y$, forming a path of length two between $x$ and $y.$ For each $T_i$, there are two curves $x_i=\psi(S^1\times \{s_i\})$ and $y_i=\psi(S^1\times \{s'_i\})$ given by the restriction of $\psi$ to each endpoint of $T_i$ such that $x_i$ and $y_i$ each intersect $u$ and $v$ in at most one point, since we have assumed the same for $A_i$. 
Hence each $x_i$ and $y_i$ are vertices in $\mathop{\mathrm{link}}(u,v)$. Then each $x_i$ is connected by a path of length 2 in $\mathop{\mathrm{link}}(u,v)$ to $y_i$. We have chosen the $T_i$ in such a way that $y_i = x_{i+1}$ for $1 \leq i < n$. Moreover, $x_1 = a$ and $y_n = b$ by construction, so there is a path from $x_1$ to $y_n$ of length $2n$ in $\mathop{\mathrm{link}}(u,v)$, such that each vertex of this path is homotopic to $a$, so the lemma holds. ◻ **Separating link quotient.** Let $u$ and $v$ be adjacent nonseparating curves in $\mathcal{C}^\dagger_1(S_g).$ Define the *separating link quotient of $(u,v)$*, denoted $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v),$ to be the separating link of $u$ and $v$ quotiented out by homotopy on the vertices. In other words, the vertices of $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ are homotopy classes of separating curves in $\mathop{\mathrm{link}}(u,v)$ and an edge connects two vertices if they admit disjoint representatives. Hereafter, if $a$ is a separating curve in $\mathcal{C}^\dagger_1(S_g)$, we will denote two things by $[a]$: 1) the set of curves in $\mathcal{C}^\dagger_1(S_g)$ homotopic to $a$ and 2) the vertex that $a$ represents in $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$. In the following lemma, we prove that the structure of the separating link quotient is preserved under automorphisms of $\mathcal{C}^\dagger_1(S_g).$ **Lemma 6**. *Let $u$ and $v$ be nonseparating curves adjacent in $\mathcal{C}^\dagger_1(S_g)$ and let $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$. Then $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v) \cong \mathop{\mathrm{\mathcal{Q}^{sep}}}(\varphi(u), \varphi(v))$.* *Proof.* By Lemma [Lemma 2](#separating){reference-type="ref" reference="separating"}, $\varphi$ induces an isomorphism between $\mathop{\mathrm{link}}^{\mathop{\mathrm{sep}}}(u,v)$ and $\mathop{\mathrm{link}}^{\mathop{\mathrm{sep}}}(\varphi(u), \varphi(v))$.
Then by Lemma [Lemma 3](#sepdisjointhom){reference-type="ref" reference="sepdisjointhom"}, $\varphi$ preserves disjoint homotopic separating curves. Hence if $d, d' \in \mathop{\mathrm{link}}^{\mathop{\mathrm{sep}}}(u,v)$ map to the same point in $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$, then $\varphi(d)$ and $\varphi(d')$ must map to the same point in $\mathop{\mathrm{\mathcal{Q}^{sep}}}(\varphi(u), \varphi(v))$. Therefore the isomorphism $\mathop{\mathrm{link}}^{\mathop{\mathrm{sep}}}(u,v) \cong \mathop{\mathrm{link}}^{\mathop{\mathrm{sep}}}(\varphi(u), \varphi(v))$ descends to an isomorphism $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v) \cong \mathop{\mathrm{\mathcal{Q}^{sep}}}(\varphi(u), \varphi(v))$. ◻ With these preliminary results in mind, we are ready to proceed with the main body of the proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}. # Torus pairs {#toruspairssection} In this section, we prove that automorphisms of our graph preserve torus pairs. **Proposition 7**. *Let $S_g$ be a surface of genus $g \geq 2$. Let $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$ and $u, v$ be adjacent curves in $\mathcal{C}^\dagger_1(S_g)$. Then $\left(\varphi(u), \varphi(v)\right)$ is a torus pair if and only if $(u,v)$ is a torus pair.* We will now characterize torus pairs using the graph $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ in Lemma [Lemma 8](#toruspairqsepcharlemma){reference-type="ref" reference="toruspairqsepcharlemma"}. **Lemma 8**. *Let $g \geq 2$. Let $u$ and $v$ be nonseparating, adjacent curves in $\mathcal{C}^\dagger_1(S_g)$. Then $(u,v)$ is a torus pair if and only if the separating link quotient $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is a cone.* *Proof.* We first assume that $(u,v)$ is a torus pair and show that $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is a cone. We then assume that $(u,v)$ is not a torus pair, and show that $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is not a cone.
*Suppose $(u,v)$ is a torus pair.* Consider the homotopy class $\mathcal{H}$ of curves in $\mathop{\mathrm{link}}(u,v)$ homotopic to the boundary of the torus that $u$ and $v$ fill. By Lemma [Lemma 4](#equividhomo){reference-type="ref" reference="equividhomo"}, $\mathcal{H}$ descends to a single point in $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$. Let $a \in \mathop{\mathrm{link}}(u,v)\setminus \mathcal{H}$ be a separating curve. Then, $a$ is not contained in the torus filled by $u\cup v,$ and therefore is isotopic to a curve $a'$ disjoint from $u\cup v.$ Let $b$ be the boundary of a regular neighborhood of $u\cup v$ disjoint from $a'$. Thus $b\in\mathcal{H}$ and $b$ is disjoint from---and therefore adjacent to---$a'.$ We therefore have that $\mathcal{H}$ is a cone point of $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$. *Suppose $(u,v)$ is not a torus pair.* We must show that there is no cone point of $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$. We will do this by showing that for any homotopy class of separating curves in $\mathop{\mathrm{link}}(u,v),$ there is another such homotopy class whose geometric intersection number with the first is at least two. Let $a\in\mathop{\mathrm{link}}(u,v)$ be a separating curve. Because $a$ is separating, each connected component of $S_g\setminus a$ must have positive genus. Therefore, there is a nonseparating curve $c$ that: 1) intersects $a$ and cannot be homotoped to be disjoint from $a$, and 2) is disjoint from $u\cup v.$ Consider a representative $b$ of $T_{[c]}[a],$ the Dehn twist of $[a]$ about $[c].$ The intersection number of $[a]$ and $[b]$ must be at least 2, so $|a\cap b|\geq 2$. Furthermore, since $c$ and $a$ are disjoint from $u$ and $v$, we can choose $b$ to be disjoint from $u$ and $v$ as well. Hence, the vertices in $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ corresponding to $a$ and $b$ are not adjacent, and thus the vertex corresponding to $a$ is not a cone point.
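As a supplementary remark (not part of the original argument), the lower bound $|a\cap b|\geq 2$ used above follows from the standard intersection estimate for Dehn twists, as found, e.g., in Farb and Margalit's *A Primer on Mapping Class Groups*:

```latex
% Standard Dehn twist estimate: for isotopy classes x, y of essential
% simple closed curves and any k \neq 0,
%   i(T_y^k(x), x) \geq |k| \, i(x, y)^2 ,
% where i(\cdot,\cdot) denotes the geometric intersection number.
% Applied with x = [a], y = [c], and k = 1: since a is separating,
% i(a,c) is even, and since c cannot be homotoped off a, i(a,c) > 0.
% Hence i(a,c) \geq 2 and
\[
  i\big(T_{[c]}[a],\,[a]\big) \;\geq\; i\big([a],[c]\big)^{2} \;\geq\; 4 \;\geq\; 2 ,
\]
% so any representative b of T_{[c]}[a] meets any representative of [a]
% in at least two points.
```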
We conclude that $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ has no cone points. ◻ *Proof of Proposition [Proposition 7](#toruspairaut){reference-type="ref" reference="toruspairaut"}.* Let $u,v$ be as in the statement of the proposition and let $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$. Since $\varphi$ is invertible, it suffices to show that $(u,v)$ a torus pair implies $(\varphi(u), \varphi(v))$ a torus pair. By Lemma [Lemma 8](#toruspairqsepcharlemma){reference-type="ref" reference="toruspairqsepcharlemma"}, $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is a cone. Then by Lemma [Lemma 6](#seplinkaction){reference-type="ref" reference="seplinkaction"}, $\mathop{\mathrm{\mathcal{Q}^{sep}}}(\varphi(u), \varphi(v))$ is a cone. Therefore $(\varphi(u), \varphi(v))$ is a torus pair by Lemma [Lemma 8](#toruspairqsepcharlemma){reference-type="ref" reference="toruspairqsepcharlemma"}. ◻ # Pants pairs {#pantspairssection} The goal of this section is to prove the following proposition: **Proposition 9**. *Let $S_g$ be a surface of genus $g \geq 2$. Let $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$ and $u, v$ be adjacent curves in $\mathcal{C}^\dagger_1(S_g)$. Then $\left(\varphi(u), \varphi(v)\right)$ is a pants pair if and only if $(u,v)$ is a pants pair.* The main observation behind the proof of Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"} is that two curves $u$ and $v$ are disjoint if and only if $u$ has a neighborhood disjoint from $v$. To use this observation, we will break down the proof of Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"} into several steps, as follows. 0. Reduce to the case that $u$ and $v$ are both nonseparating (Lemma [Lemma 10](#seppantslemma){reference-type="ref" reference="seppantslemma"}). 1. Distinguish the boundary curves of a neighborhood of a nonseparating curve by showing that automorphisms preserve: 1. pairs of adjacent homotopic nonseparating curves in $\mathcal{C}^\dagger_1(S_g)$ (Lemma [Lemma 11](#nonsephomotaut){reference-type="ref" reference="nonsephomotaut"}), 2. whether a nonseparating curve is contained (in some sense) in the annulus bounded by adjacent homotopic curves $a$ and $b$ (Lemma [Lemma 14](#lemma:annularmain){reference-type="ref" reference="lemma:annularmain"}), and 3. whether two homotopic nonseparating curves adjacent in $\mathcal{C}^\dagger_1(S_g)$ form a pants pair or are disjoint (Lemma [Lemma 15](#homnonseppants){reference-type="ref" reference="homnonseppants"}). 2. Show that automorphisms preserve whether non-homotopic nonseparating curves are disjoint or a pants pair (Lemma [Lemma 17](#nonsepnonhompants){reference-type="ref" reference="nonsepnonhompants"}). Combining Steps 0, 1.3, and 2 proves Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"} for all possible arrangements of curves. We begin by proving Step 0 in the following lemma. We use the notation $A_1{*}A_2{*}\cdots{*}A_k$ to denote the decomposition of a $k$-join into its parts. **Lemma 10**. *Let $u$ and $v$ be adjacent curves in $\mathcal{C}^\dagger_1(S)$. Suppose that $u$ is separating.* 1. ***If $v$ is nonseparating:** Let $A {*} B$ be the decomposition of $\mathop{\mathrm{link}}(u)$ into a join and let $v\in A.$ Then $(u,v)$ is a pants pair if and only if there exists a nonseparating curve $w\in B$ such that $(v,w)$ is a pants pair.* 2. ***If $v$ is separating and homotopic to $u$:** Let $A{*}B{*}C$ be a decomposition of $\mathop{\mathrm{link}}(u,v)$ into a 3-join such that $B$ contains only separating curves. Then, $(u,v)$ is a pants pair if and only if there exist nonseparating curves $a\in A$ and $c\in C$ such that $(a,c)$ is a pants pair.* 3. ***If $v$ is separating and not homotopic to $u$:** Let $A{*}B{*}C$ be a decomposition of $\mathop{\mathrm{link}}(u,v)$ into a 3-join.
Then, $(u,v)$ is a pants pair if and only if there exist nonseparating curves $a\in A,\ b\in B,$ and $c\in C$ such that $(a,b),$ $(b,c),$ and $(c,a)$ are all pants pairs.* *In particular, if $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S)$ and $\varphi$ sends pants pairs consisting of nonseparating curves to pants pairs consisting of nonseparating curves, then $\varphi$ sends any pants pair to a pants pair.* *Proof.* *The case that $v$ is nonseparating.* Suppose $(u,v)$ is a pants pair. Let $w\in B$ be a nonseparating curve that intersects $u$ precisely at the point $u \cap v$; then $w$ meets $v$ only at this point, so $(v,w)$ is a pants pair. On the other hand, suppose $u$ and $v$ are disjoint. Then any curve $w \in B$ must be disjoint from $v$, so no such $w$ forms a pants pair with $v$. *The case that $v$ is separating and homotopic to $u$.* Suppose $(u,v)$ is a pants pair. Then, we can choose nonseparating curves $a\in A$ and $c\in C$ that intersect $u$ and $v$ at $u\cap v,$ as in Figure [\[seppantsmorefig\]](#seppantsmorefig){reference-type="ref" reference="seppantsmorefig"}; in particular, $(a,c)$ is a pants pair. Suppose $u$ and $v$ are disjoint. Then any curve in $A$ is disjoint from any curve in $C$, so no such pair $(a,c)$ is a pants pair. *The case that $v$ is separating and not homotopic to $u$.* Suppose $(u,v)$ is a pants pair. Then we can choose nonseparating curves $a\in A$, $b\in B,$ and $c\in C$ such that they pairwise form pants pairs. An example of such a selection of curves is in Figure [\[seppantsevenmorefig\]](#seppantsevenmorefig){reference-type="ref" reference="seppantsevenmorefig"}. Suppose $u$ and $v$ are disjoint. Take $A$ to correspond to all curves supported in the subsurface bounded by only $u$ and $C$ to correspond to all curves supported in the subsurface bounded by only $v.$ It follows that any $a\in A$ and $c\in C$ are disjoint. ◻ Now that we have reduced Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"} to the case of nonseparating pairs of curves, we are ready to prove Step 1.
## Step 1: Neighborhoods of nonseparating curves {#sec:neighborhoods} In this section, we prove that neighborhoods of nonseparating curves are preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g).$ This is broken down into three main steps. In Step 1.1, we show that automorphisms preserve adjacent homotopic nonseparating curves in $\mathcal{C}^\dagger_1(S_g)$. In Step 1.2, we show that automorphisms preserve whether a curve is contained (in some sense) in the annulus bounded by two homotopic curves. In Step 1.3, we show that automorphisms preserve whether homotopic nonseparating curves form a pants pair. ### Step 1.1: Homotopic nonseparating curves {#sec:homnonsep .unnumbered} The main result of this step is the following lemma. **Lemma 11**. *Let $u$ and $v$ be two adjacent nonseparating curves in $\mathcal{C}^\dagger_1(S_g)$. Let $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$. Then $\varphi(u)$ is homotopic to $\varphi(v)$ if and only if $u$ is homotopic to $v$.* Since homotopic curves are jointly separating, the first step in the proof of Lemma [Lemma 11](#nonsephomotaut){reference-type="ref" reference="nonsephomotaut"} is to show that the set of jointly separating pairs of curves is preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g).$ **Lemma 12**. *Let $\varphi\in\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$ for $g\geq 2.$ Let $u$ and $v$ be adjacent nonseparating curves in $S_g$ that do not form a torus pair. 
Then $u$ and $v$ are not jointly separating if and only if $\varphi(u)$ and $\varphi(v)$ are not jointly separating.* *It follows that $u$ and $v$ are jointly separating if and only if $\varphi(u)$ and $\varphi(v)$ are jointly separating.* *Proof.* We prove the lemma by showing the following: $u$ and $v$ are not jointly separating if and only if there exists a separating curve $c$ such that $c$ separates $u$ and $v.$ *Suppose $u$ and $v$ are jointly separating.* In this case, any separating curve $c$ in $\mathop{\mathrm{link}}(u,v)$ would have to lie in a single component of $S\setminus(u\cup v)$; otherwise, $c$ would intersect $u$ or $v$ at least twice. (Indeed, since $c$ is separating, its mod $2$ intersection number with every closed curve vanishes, so $c$ crosses each of $u$ and $v$ an even number of times; passing between the components of $S\setminus(u\cup v)$ would therefore force at least two crossings of $u$ or of $v$.) Therefore, any such $c$ does not separate $u$ and $v.$ *Suppose $u$ and $v$ are neither jointly separating nor a torus pair.* In this case, there is a separating curve that separates $u$ from $v$ as in Figure [\[Thm2.3a_2\]](#Thm2.3a_2){reference-type="ref" reference="Thm2.3a_2"}. To find such a curve, cut $S$ along $u$ and $v$ (retaining the boundaries), and take $c$ to be the separating curve that forms a (potentially pinched) pair of pants with the boundaries arising from $u.$ ◻ A pair of adjacent homotopic curves is jointly separating, so we now further distinguish between homotopic jointly separating curves and non-homotopic jointly separating curves. **Lemma 13**. *Let $u$ and $v$ be two adjacent jointly separating nonseparating curves in $\mathcal{C}^\dagger_1(S)$ and let $\varphi\in\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S)$. Then, $u$ and $v$ are homotopic if and only if $\varphi(u)$ and $\varphi(v)$ are homotopic.* *Proof.* By Lemma [Lemma 6](#seplinkaction){reference-type="ref" reference="seplinkaction"}, it is enough to show that $u$ and $v$ are homotopic if and only if $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is not a join.
*Suppose $u$ and $v$ are homotopic.* Let $[a]$ and $[b]$ be distinct vertices of $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v).$ Then, $a$ and $b$ are necessarily in the same connected component of $S\setminus (u\cup v).$ Let $c$ be a curve in that same component of $S\setminus(u\cup v)$ such that no curve isotopic to $c$ is disjoint from $a$ or from $b.$ Then, every curve in the isotopy class $[d]=T_{[c]}[a],$ the Dehn twist of $[a]$ about $[c],$ intersects every curve in the isotopy class of $a$ and every curve in the isotopy class of $b$ at least twice. Choose a representative $d$ of $T_{[c]}[a]$ disjoint from $u\cup v$. Then we have that $[d]$, a vertex of $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v),$ is adjacent to neither $[a]$ nor $[b].$ Since this is true for any $[a]$ and $[b]$, we conclude that $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is not a join. *Suppose $u$ and $v$ are not homotopic.* Let $[a]$ be a vertex of $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ and $a\in \mathop{\mathrm{link}}(u,v)$ be an arbitrary representative. Then, $a$ is contained in one connected component of $S\setminus (u\cup v)$ except possibly for points of intersection with $u$ and $v$. By pushing $a$ off of itself in the direction away from $u$ and $v,$ we obtain another curve $a'$ with $[a']=[a].$ We notice that no curve homotopic to $a$ is contained in a different connected component of $S\setminus (u\cup v)$ from $a,$ since then the two curves would bound a (potentially pinched) annulus that must contain $u$ or $v,$ a contradiction. We now conclude that $\mathop{\mathrm{\mathcal{Q}^{sep}}}(u,v)$ is indeed a join, where the parts correspond to equivalence classes of curves contained in each of the components of $S\setminus (u\cup v)$ (potentially except for intersections with $u$ and $v$). ◻ We now show that homotopic nonseparating pairs of curves are preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g)$.
*Proof of Lemma [Lemma 11](#nonsephomotaut){reference-type="ref" reference="nonsephomotaut"}.* By Proposition [Proposition 7](#toruspairaut){reference-type="ref" reference="toruspairaut"}, torus pairs are preserved by automorphisms, and thus the set of pairs that are disjoint or form pants pairs is preserved by automorphisms. Disjoint and pants pairs may be jointly separating or not jointly separating; by Lemma [Lemma 12](#separatedns){reference-type="ref" reference="separatedns"}, being (not) jointly separating is preserved by automorphisms. Finally, jointly separating pairs may be either homotopic or not. Lemma [Lemma 13](#homjointlysep){reference-type="ref" reference="homjointlysep"} ascertains that, within the set of jointly separating pairs, homotopic pairs are preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g).$ By combining these results, we conclude that nonseparating homotopic pairs are preserved by automorphisms. ◻ ### Step 1.2: Containment in an annulus {#sec:containinannulus .unnumbered} In this section, we prove that automorphisms preserve whether a curve lies in the annulus bounded by two homotopic curves. Such a curve must be homotopic to the two boundary curves, and therefore all three curves must be adjacent in $\mathcal{C}^\dagger_1(S_g).$ First, we define what it means for a curve $w$ to lie in the annulus bounded by adjacent homotopic curves $u$ and $v$ in $\mathcal{C}^\dagger_1(S_g).$ Let $\mathop{\mathrm{Ann}}_{\leq1}(u,v) \subseteq \mathop{\mathrm{link}}(u,v)$ denote the subgraph generated by curves $w$ supported on the (possibly pinched) annulus bounded by $u$ and $v$ such that $|w\cap(u\cup v)| \leq 1$ as in Figure [\[annulus\]](#annulus){reference-type="ref" reference="annulus"}. Our main goal in this section is to prove the following result. **Lemma 14**. *Let $u,$ $v,$ and $w$ be pairwise adjacent homotopic nonseparating curves in $\mathcal{C}^\dagger_1(S_g)$ and $\varphi\in\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$.
Then $w\in\mathop{\mathrm{Ann}}_{\leq1}(u,v)$ if and only if $\varphi(w)\in \mathop{\mathrm{Ann}}_{\leq1}(\varphi(u),\varphi(v)).$* *Proof.* Since elements of $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$ preserve separating links by Lemma [Lemma 3](#sepdisjointhom){reference-type="ref" reference="sepdisjointhom"}, it is enough to show that $w \in \mathop{\mathrm{Ann}}_{\leq1}(u,v)$ if and only if $\mathop{\mathrm{link}}^{\text{sep}}(u,v)\subset \mathop{\mathrm{link}}^{\text{sep}}(w)$. *Suppose $w \in \mathop{\mathrm{Ann}}_{\leq 1}(u,v)$.* Let $a\in \mathop{\mathrm{link}}^{\text{sep}}(u,v)$. Since $|w\cap (u\cup v)|\leq 1$ and $w$ is in the annulus bounded by $u$ and $v$ while $a$ is not, $|a\cap w|\leq 1.$ We conclude that $a\in\mathop{\mathrm{link}}^{\text{sep}}(w).$ *Suppose $w \not\in \mathop{\mathrm{Ann}}_{\leq 1}(u,v)$.* Then, $w$ is either (1) in the annulus bounded by $u$ and $v$ and intersects both of them or (2) not in the annulus bounded by $u$ and $v.$ Suppose $w$ is in case (1), and take any separating curve $a\in\mathop{\mathrm{link}}(u,v)$ disjoint from $u\cup v.$ We can then isotope $a$ to touch $u$ and $v$ at $u\cap w$ and $v\cap w,$ respectively, so that $a$ intersects $w$ at least twice. Thus $a\not\in \mathop{\mathrm{link}}(w).$ Suppose $w$ is in case (2), and take any separating curve $a\in\mathop{\mathrm{link}}(u,v).$ We can then isotope $a$ to intersect $w$ at least twice, so $a\not\in\mathop{\mathrm{link}}(w)$. We conclude that $\mathop{\mathrm{link}}^{\text{sep}}(u,v)\not\subset \mathop{\mathrm{link}}^{\text{sep}}(w)$. ◻ ### Step 1.3: Homotopic nonseparating pants pairs {#sec:homnonseppants .unnumbered} We now state and prove the result which allows us to distinguish pants pairs from disjoint pairs in the case that $u$ and $v$ are homotopic nonseparating curves. **Lemma 15**.
*Let $u$ and $v$ be adjacent homotopic nonseparating curves in $S_g$ and $\varphi\in\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g).$ Then, $(u,v)$ is a pants pair if and only if $(\varphi(u),\varphi(v))$ is a pants pair. It follows that $(u,v)$ is a disjoint pair if and only if $(\varphi(u),\varphi(v))$ is a disjoint pair.* The following lemma provides the combinatorial conditions necessary to prove Lemma [Lemma 15](#homnonseppants){reference-type="ref" reference="homnonseppants"}. **Lemma 16**. *Let $u$ and $v$ be adjacent homotopic nonseparating curves in $\mathcal{C}^\dagger_1(S_g)$. Then $(u,v)$ is a pants pair if and only if there exists a curve $\gamma \in \mathop{\mathrm{link}}(u,v)$ such that 1) $\gamma$ forms torus pairs with both $u$ and $v$ and 2) for any $\delta \in \mathop{\mathrm{Ann}}_{\leq1}(u,v)$, $\gamma$ and $\delta$ are adjacent in $\mathcal{C}^\dagger_1(S_g)$. Otherwise, $u$ and $v$ are disjoint.* *Proof.* Suppose $(u,v)$ is a pants pair. Choose a curve $\gamma\in \mathop{\mathrm{link}}(u,v)$ that forms a torus pair with $u$ and $v$ and passes through their point of intersection. Then, $\gamma$ intersects every essential curve contained in $\mathop{\mathrm{Ann}}_{\leq1}(u,v)$ exactly once, and is therefore adjacent to them. A schematic of this situation is pictured on the left in Figure [\[touch1fig\]](#touch1fig){reference-type="ref" reference="touch1fig"}. Suppose $u$ and $v$ are disjoint. Choose any curve $\gamma\in\mathop{\mathrm{link}}(u,v)$ that forms a torus pair with both $u$ and $v$. Since an interval of $\gamma$ lies in the annulus bounded by $u$ and $v,$ we can choose a curve in $\mathop{\mathrm{Ann}}_{\leq1}(u,v)$ that intersects $\gamma$ at least twice. An example of such a curve is shown on the right in Figure [\[touch1fig\]](#touch1fig){reference-type="ref" reference="touch1fig"}. 
◻ *Proof of Lemma [Lemma 15](#homnonseppants){reference-type="ref" reference="homnonseppants"}.* The statement follows directly from Lemma [Lemma 16](#nsdisjointhom){reference-type="ref" reference="nsdisjointhom"}, as torus pairs (Proposition [Proposition 7](#toruspairaut){reference-type="ref" reference="toruspairaut"}) and being an element of $\mathop{\mathrm{Ann}}_{\leq1}(u,v)$ (Lemma [Lemma 14](#lemma:annularmain){reference-type="ref" reference="lemma:annularmain"}) are both preserved by automorphisms of $\mathcal{C}^\dagger_1(S_g).$ ◻ Lemma [Lemma 15](#homnonseppants){reference-type="ref" reference="homnonseppants"} will allow us to conclude Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"} in the case that $u$ and $v$ are homotopic. ## Step 2: Non-homotopic nonseparating curves: pants pairs vs. disjoint pairs {#sec:nonsepnonhompants} The main result from this section is Lemma [Lemma 17](#nonsepnonhompants){reference-type="ref" reference="nonsepnonhompants"}. It proves Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"} in the case that the edge $(u,v)$ consists of nonseparating, non-homotopic curves. **Lemma 17**. *Let $u$ and $v$ be adjacent non-homotopic nonseparating curves in $S_g.$ Then, for any $\varphi\in\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g),$ we have that $(u,v)$ is a pants pair if and only if $(\varphi(u),\varphi(v))$ is a pants pair. We therefore have that automorphisms of $\mathcal{C}^\dagger_1(S_g)$ also preserve disjoint pairs of curves.* The main tool used to prove the above lemma is Lemma [Lemma 18](#nonseppants){reference-type="ref" reference="nonseppants"}, which provides a necessary and sufficient combinatorial condition for non-homotopic nonseparating curves to form a pants pair. **Lemma 18**. *Let $u$ and $v$ be adjacent non-homotopic nonseparating curves in $\mathcal{C}^\dagger_1(S)$. Suppose that $(u,v)$ is not a torus pair. 
Then $u$ and $v$ are disjoint if and only if there exist adjacent curves $\alpha,\beta\in\mathop{\mathrm{link}}(u,v)$, such that* 1. *$\alpha$, $u$, and $\beta$ are homotopic,* 2. *$\alpha$, $u$, and $\beta$ are disjoint, and* 3. *$u \in \mathop{\mathrm{Ann}}_{\leq1}(\alpha, \beta).$* *Otherwise, $(u, v)$ is a pants pair.* The main idea behind Lemma [Lemma 18](#nonseppants){reference-type="ref" reference="nonseppants"} is that $u$ and $v$ are disjoint if and only if there is a regular neighborhood of $u$ that is disjoint from $v.$ Figure [\[touch2fig\]](#touch2fig){reference-type="ref" reference="touch2fig"} gives a schematic of the proof. *Proof of Lemma [Lemma 18](#nonseppants){reference-type="ref" reference="nonseppants"}.* Suppose first that $u$ and $v$ are disjoint. Then, there is an annular neighborhood of $u$ disjoint from $v.$ The boundary components of such a neighborhood are the desired curves $\alpha$ and $\beta.$ Suppose now that such $\alpha$ and $\beta$ exist. Let $N$ be the annulus with boundary components $\alpha$ and $\beta$. By conditions (2) and (3), $u \subseteq \text{Int}(N)$. If $u \cap v \neq \emptyset$, then $v$ must intersect either $\alpha$ or $\beta$ in two places (note that $v$ cannot be contained in $N$, since $v$ is essential and not homotopic to $u$), which contradicts the assumption that $\alpha, \beta \in \mathop{\mathrm{link}}(u,v)$. ◻ We are now ready to prove the main result of Section [4.2](#sec:nonsepnonhompants){reference-type="ref" reference="sec:nonsepnonhompants"}. *Proof of Lemma [Lemma 17](#nonsepnonhompants){reference-type="ref" reference="nonsepnonhompants"}.* Suppose that $u$ and $v$ are disjoint. Let $\alpha$ and $\beta$ be as in Lemma [Lemma 18](#nonseppants){reference-type="ref" reference="nonseppants"}.
Then $\varphi$ preserves property (1) of $\alpha$ and $\beta$ by Lemma [Lemma 11](#nonsephomotaut){reference-type="ref" reference="nonsephomotaut"}, property (2) by Lemma [Lemma 15](#homnonseppants){reference-type="ref" reference="homnonseppants"}, and property (3) by Lemma [Lemma 14](#lemma:annularmain){reference-type="ref" reference="lemma:annularmain"}. Hence $\varphi(\alpha)$ and $\varphi(\beta)$ realize $\varphi(u)$ and $\varphi(v)$ as disjoint by Lemma [Lemma 18](#nonseppants){reference-type="ref" reference="nonseppants"}. Since torus pairs are also preserved by Proposition [Proposition 7](#toruspairaut){reference-type="ref" reference="toruspairaut"}, pants pairs, being the only remaining possibility for adjacent non-homotopic nonseparating curves, are preserved as well, so the result follows. ◻ We are now ready to prove the main result of Section [4](#pantspairssection){reference-type="ref" reference="pantspairssection"}. *Proof of Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"}.* Let $u$ and $v$ be adjacent curves in $\mathcal{C}^\dagger_1(S_g).$ We must show that if $(u,v)$ is a pants pair, then for any $\varphi\in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g),$ $(\varphi(u),\varphi(v))$ is also a pants pair. By Lemma [Lemma 10](#seppantslemma){reference-type="ref" reference="seppantslemma"}, we may assume that $u$ and $v$ are nonseparating. We prove the proposition with casework. *Case 1.* If $u$ and $v$ are homotopic, nonseparating, and form a pants pair, Lemma [Lemma 15](#homnonseppants){reference-type="ref" reference="homnonseppants"} asserts that $\varphi(u)$ and $\varphi(v)$ form a pants pair. *Case 2.* If $u$ and $v$ are non-homotopic, nonseparating, and form a pants pair, Lemma [Lemma 17](#nonsepnonhompants){reference-type="ref" reference="nonsepnonhompants"} asserts that $\varphi(u)$ and $\varphi(v)$ form a pants pair. ◻ # Torus Case {#torussection} In this section, we approach the case where the surface $S_g$ has genus 1, and is therefore a torus. We will denote our surface by $T$ to avoid ambiguity. Our previous tools do not apply in the torus case because there are no essential separating curves on a torus.
Proposition [Proposition 19](#toruscase){reference-type="ref" reference="toruscase"} is the main result of this section. **Proposition 19**. *Let $T$ be a torus. Then the natural map $$\Phi:\mathop{\mathrm{Homeo}}(T)\to \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(T)$$ is an isomorphism.* We prove Proposition [Proposition 19](#toruscase){reference-type="ref" reference="toruscase"} in two steps. First, in Section [5.1](#sec:torustorusvselse){reference-type="ref" reference="sec:torustorusvselse"}, we use the proof method of Le Roux--Wolff [@transaut] to show that torus pairs are preserved by automorphisms of $\mathcal{C}^\dagger_1(T)$. Then, in Section [5.2](#sec:toruspantsvsdisjoint){reference-type="ref" reference="sec:toruspantsvsdisjoint"}, we show that pants pairs (and therefore disjoint pairs) are preserved by automorphisms of $\mathcal{C}^\dagger_1(T).$ ## Torus pairs vs. non-torus pairs {#sec:torustorusvselse} The goal of this section is to prove Proposition [Proposition 20](#prop:torustorusvselse){reference-type="ref" reference="prop:torustorusvselse"}. **Proposition 20**. *Let $u$ and $v$ be adjacent curves in $\mathcal{C}^\dagger_1(T)$ and $\varphi\in\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(T).$ Then $(u,v)$ is a torus pair if and only if $(\varphi(u),\varphi(v))$ is a torus pair.* We prove this proposition by building on the work of Le Roux--Wolff. We begin by introducing the relevant results and explaining how we adapt their statements and proofs to apply in the case of the fine 1-curve graph on the torus. For any nonspherical, possibly non-orientable and possibly noncompact surface $S$, Le Roux--Wolff study the graph $\mathcal{C}^\dagger_\pitchfork(S)$, which has a vertex for every essential, simple, closed curve and two vertices are connected by an edge if they are either disjoint or they have one topologically transverse intersection point. (In the language of our paper, the edges correspond to disjoint pairs and torus pairs.) 
Le Roux--Wolff show that $\mathop{\mathrm{Homeo}}(S)\cong \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_\pitchfork(S)$ via the natural homomorphism [@transaut]. To do this, they categorize all the possible curve configurations. A *clique* is a collection of pairwise adjacent vertices in a graph. In particular, a clique $(a,b,c)$ is of type *necklace* if $(a,b),\ (b,c),$ and $(c,a)$ are all torus pairs and do not have a common intersection point. They then show that in $\mathcal{C}^\dagger_\pitchfork(S),$ 1. a clique $(a,b,c)$ is of type necklace if and only if there exists a finite set of (at most 8) vertices such that every $d\in\mathop{\mathrm{link}}(a,b,c)$ is adjacent to at least one vertex in the finite set, 2. adjacent vertices $a, b$ are disjoint if and only if there is no $c\in\mathop{\mathrm{link}}(a,b)$ such that $(a,b,c)$ is of type necklace, and 3. $a, b$ form a torus pair if and only if $a$ and $b$ are not disjoint. In the case of the fine 1-curve graph of the torus, these properties still hold with an adjustment to the last two: ($2'$) adjacent vertices $a, b$ are disjoint or a pants pair if and only if there is no $c\in\mathop{\mathrm{link}}(a,b)$ such that $(a,b,c)$ is of type necklace, and ($3'$) adjacent vertices $a, b$ are a torus pair if and only if $a$ and $b$ are neither disjoint nor a pants pair. In fact, (1) is the main property we must verify en route to Proposition [Proposition 20](#prop:torustorusvselse){reference-type="ref" reference="prop:torustorusvselse"}. It is stated in the following lemma. **Lemma 21**. *A clique $(a,b,c)$ in $\mathcal{C}^\dagger_1(T)$ is of type necklace if and only if there exists a finite set of (at most 8) vertices such that every $d\in\mathop{\mathrm{link}}(a,b,c)$ is adjacent to at least one vertex in the finite set.* The forward direction of Lemma [Lemma 21](#property1){reference-type="ref" reference="property1"} is proven by Le Roux--Wolff and applies without adaptations.
The key to the backward direction is an adaptation of Lemma 2.5 of Le Roux--Wolff, which we give as follows. **Lemma 22** (Adaptation of Lemma 2.5 of Le Roux--Wolff). *Let $(a,b,c)$ be a clique not of type necklace. Then, there exists $d\in\mathop{\mathrm{link}}(a,b,c)$ such that $d$ intersects every component of $T\setminus \{a,b,c\}.$* The proof proceeds via casework on the possible arrangements of cliques. To aid in categorizing arrangements of cliques, we use an updated version of the notation of Le Roux--Wolff: for a clique $(a,b,c)$, we record pairwise intersection types by a triple $(\cdot,\cdot,\cdot)$, up to permutation. If two curves are disjoint, we signify this by a 0; if they form a torus pair, by a 1; and if they form a pants pair, by a P. *Proof of Lemma [Lemma 22](#lerouxwolf25){reference-type="ref" reference="lerouxwolf25"}.* The cases (1,1,1), (1,1,0), and (0,0,0) are all accounted for in Le Roux--Wolff; (1,0,0) does not apply in the torus case. The remainder of the cases arise from replacing some 0's with P's. *Case 1: (1,1,P).* In this case, we have two homotopic, touching curves and a third curve that forms a torus pair with them. There are two subcases: whether the three curves intersect at one point or at three distinct points. *Case 1a: one point of intersection.* To create a fourth curve adjacent to all three that intersects all components of $T\setminus\{a,b,c\}$, we push the third curve off of itself. A schematic of this configuration is shown in Figure [\[figure1a\]](#figure1a){reference-type="ref" reference="figure1a"}. *Case 1b: three points of intersection.* The difficulty here is that $T\setminus\{a,b,c\}$ has three connected components, so pushing the third curve will no longer work.
In particular, a curve that satisfies the conditions of the lemma must form a torus pair with each of $a,$ $b,$ and $c.$ A schematic of this configuration is shown in Figure [\[figure1b\]](#figure1b){reference-type="ref" reference="figure1b"}, along with a curve $d$ that satisfies the condition of the lemma. *Case 2: variants of (0,0,0): (P,0,0), (P,P,0), and (P,P,P).* In all of these configurations, all of $a,b,c$ are homotopic, and thus form (possibly pinched) annuli. A curve $d$ transverse to $a,b,$ and $c$ that does not cross any of the touching points satisfies the hypotheses of the lemma. A schematic of these configurations, along with a curve $d$ that satisfies the hypotheses of the lemma, is shown in Figure [\[figurecase2\]](#figurecase2){reference-type="ref" reference="figurecase2"}. ◻ Then, Lemma 2.6 of Le Roux--Wolff holds with minor adaptations and an identical proof; here, it appears as Lemma [Lemma 23](#lerouxwolff26){reference-type="ref" reference="lerouxwolff26"}. It provides the converse to Lemma [Lemma 21](#property1){reference-type="ref" reference="property1"}. **Lemma 23**. *Let $(a,b,c)$ be a clique in $\mathcal{C}^\dagger_1(T)$ not of type necklace and $\{\alpha_1,\ldots,\alpha_j\}$ be a finite collection of curves distinct from $a,\ b,$ and $c.$ Then, there is a vertex $d\in\mathop{\mathrm{link}}(a,b,c)$ that is not adjacent to any $\alpha_i.$* The main idea of the proof is to start with a curve $d$ given by Lemma [Lemma 22](#lerouxwolf25){reference-type="ref" reference="lerouxwolf25"} and isotope it in $T\setminus\{a,b,c\}$ to intersect each $\alpha_i$ arbitrarily many times. We now have all the tools we need to prove Lemma [Lemma 21](#property1){reference-type="ref" reference="property1"}. 
*Proof of Lemma [Lemma 21](#property1){reference-type="ref" reference="property1"}.* The forward direction is given by Lemma 2.8 of Le Roux--Wolff. The proof applies because all edge relations in $\mathcal{C}^\dagger_\pitchfork(T)$ still exist in $\mathcal{C}^\dagger_1(T).$ The reverse direction follows from Lemma [Lemma 23](#lerouxwolff26){reference-type="ref" reference="lerouxwolff26"}. ◻ It remains to show properties ($2'$) and ($3'$) above. The following lemma is similar to Corollary 2.4 of Le Roux--Wolff. **Lemma 24**. *In $\mathcal{C}^\dagger_1(T),$ the following properties hold.* ($2'$) *Adjacent vertices $a$ and $b$ are disjoint or a pants pair if and only if there is no $c\in\mathop{\mathrm{link}}(a,b)$ such that $(a,b,c)$ is of type necklace and* ($3'$) *adjacent vertices $a$ and $b$ are a torus pair if and only if $a$ and $b$ are neither disjoint nor a pants pair.* *Proof.* If adjacent curves $a$ and $b$ are disjoint or form a pants pair, then by definition, there is no third curve $c$ such that $(a,b,c)$ is a necklace clique. Conversely, if $a$ and $b$ are a torus pair, then up to homeomorphism, they are the (1,0) and (0,1) curves on the torus. These curves, along with some (1,1) curve, form a necklace clique. A schematic of this configuration appears in Figure [\[torusfigure\]](#torusfigure){reference-type="ref" reference="torusfigure"}. Property ($3'$) follows from the definitions of torus pairs. ◻ *Proof of Proposition [Proposition 20](#prop:torustorusvselse){reference-type="ref" reference="prop:torustorusvselse"}.* By Lemma [Lemma 24](#properties2and3){reference-type="ref" reference="properties2and3"}, $(u,v)$ is a torus pair if and only if there is a curve $w$ such that $(u,v,w)$ is of type necklace. Lemma [Lemma 21](#property1){reference-type="ref" reference="property1"} asserts that the property of being of necklace type is indeed combinatorial, and therefore is preserved by automorphisms of $\mathcal{C}^\dagger_1(T)$. 
We conclude that torus pairs are preserved by automorphisms of $\mathcal{C}^\dagger_1(T).$ ◻ ## Pants pairs vs. disjoint pairs {#sec:toruspantsvsdisjoint} It remains to show that we can distinguish pants pairs from disjoint pairs in $\mathcal{C}^\dagger_1(T).$ We will do this in the following proposition. **Proposition 25**. *Let $u$ and $v$ be adjacent curves in $\mathcal{C}^\dagger_1(T)$ and $\varphi\in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(T)$. Then $(u,v)$ is a pants pair if and only if $(\varphi(u),\varphi(v))$ is a pants pair.* *Proof.* Proposition [Proposition 20](#prop:torustorusvselse){reference-type="ref" reference="prop:torustorusvselse"} ascertains that automorphisms preserve the set of pants and disjoint pairs. Without loss of generality, we may assume that $(u,v)$ is a pants or disjoint pair. By Proposition [Proposition 20](#prop:torustorusvselse){reference-type="ref" reference="prop:torustorusvselse"}, we can distinguish whether adjacent curves on a torus are homotopic, which is precisely when they are not a torus pair. Define the *homotopic link* of $(u,v)$, denoted by $\mathop{\mathrm{link}}^{\text{hom}}(u,v)$, to be the subgraph of $\mathop{\mathrm{link}}(u,v)$ induced by curves homotopic to $u$ and $v.$ We will prove the proposition by showing that $(u,v)$ is a pants pair if and only if $\mathop{\mathrm{link}}^{\text{hom}}(u,v)$ is a join. *Suppose $(u,v)$ is a pants pair.* Then, $u$ and $v$ bound a pinched annulus. Any curve in $\mathop{\mathrm{link}}^{\text{hom}}(u,v)$ is contained in this pinched annulus or in its complement (possibly intersecting $u$ and $v$). Let $w\in \mathop{\mathrm{link}}^{\text{hom}}(u,v)$ be contained in the pinched annulus. Then, $w$ must intersect both $u$ and $v$ at $u\cap v,$ and thus does not intersect them elsewhere. Let $x\in \mathop{\mathrm{link}}^{\text{hom}}(u,v)$ be contained in the complement of the pinched annulus. 
If $x\cap (u\cup v)$ is nonempty, then $|w\cap x|=1$ if $x\cap u\cap v\neq \emptyset$ and $|w\cap x|=0$ otherwise. We conclude that $x$ and $w$ are adjacent in $\mathop{\mathrm{link}}^{\text{hom}}(u,v),$ which is therefore a join with parts defined by the subsurfaces of $T$ bounded by $u$ and $v$. *Suppose $(u,v)$ is a disjoint pair.* We will show that $\mathop{\mathrm{link}}^{\text{hom}}(u,v)$ is not a join by showing that no possible partition exists. We have that $u$ and $v$ bound two annuli with boundary, $A$ and $B,$ and every curve in $\mathop{\mathrm{link}}^{\text{hom}}(u,v)$ is contained in precisely one of the two annuli. First, we claim that curves contained in the same annulus cannot be in different parts of the partition. Let $a_1,a_2\in \mathop{\mathrm{link}}^{\text{hom}}(u,v)$ be contained in $A.$ Then, there exists a curve $a_3\in\mathop{\mathrm{link}}^{\text{hom}}(u,v)$ contained in $A$ that intersects $a_1$ and $a_2$ at least two times each, and is therefore adjacent to neither. Thus, the only possible partition for a join is into two parts, each corresponding to curves contained in one of $A$ or $B.$ However, there is a curve $a$ contained in $A$ that intersects each of $u$ and $v$ once, and a curve $b$ contained in $B$ that intersects $u$ and $v$ at $a\cap u$ and $a\cap v,$ respectively. Therefore, $a$ and $b$ are not adjacent. We conclude that there is no possible partition of $\mathop{\mathrm{link}}^{\text{hom}}(u,v)$ into a join. ◻ Proposition [Proposition 25](#toruspantsvsdisjoint){reference-type="ref" reference="toruspantsvsdisjoint"} allows us to combinatorially distinguish between pants pairs and disjoint pairs. With that in hand, we are ready to prove Proposition [Proposition 19](#toruscase){reference-type="ref" reference="toruscase"}. 
*Proof of Proposition [Proposition 19](#toruscase){reference-type="ref" reference="toruscase"}.* We first note that any homeomorphism of $T$ induces an element of $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(T).$ It remains to show that an element of $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(T)$ induces a homeomorphism of $T.$ We will reduce this claim to the theorem of Le Roux--Wolff [@transaut] by showing that any automorphism of $\mathcal{C}^\dagger_1(T)$ induces an automorphism of $\mathcal{C}^\dagger_\pitchfork(T)$. It suffices to show that for any $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(T)$ and any adjacent curves $u$ and $v$ in $\mathcal{C}^\dagger_1(T)$, we have $$(u,v) \text{ disjoint } \iff (\varphi(u), \varphi(v)) \text{ disjoint.}$$ In particular, we must show that $(u,v)$ is a pants pair if and only if $(\varphi(u), \varphi(v))$ is a pants pair. By Proposition [Proposition 20](#prop:torustorusvselse){reference-type="ref" reference="prop:torustorusvselse"}, torus pairs are preserved by automorphisms, and by Proposition [Proposition 25](#toruspantsvsdisjoint){reference-type="ref" reference="toruspantsvsdisjoint"}, pants pairs are distinguishable from disjoint pairs. We therefore have that $\varphi$ induces an element of $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_\pitchfork(T).$ We now invoke Le Roux--Wolff [@transaut Theorem 1.1] that $\mathop{\mathrm{Homeo}}(S_g)\cong \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_\pitchfork(S_g)$ to complete the proof. ◻ # Proof of the main theorem {#mainsection} *Proof of Theorem [Theorem 1](#maintheorem){reference-type="ref" reference="maintheorem"}.* There are two cases, depending on the genus $g$. We begin by showing that automorphisms of $\mathcal{C}^\dagger_1(S_g)$ induce automorphisms of $\mathcal{C}^\dagger_\pitchfork(T)$ (if $g=1)$ or $\mathcal{C}^\dagger(S_g)$ (if $g\geq 2$). *Case 1: $g=1$.* This is Proposition [Proposition 19](#toruscase){reference-type="ref" reference="toruscase"}. 
*Case 2: $g\geq 2$.* We observe that homeomorphisms preserve intersection number, so any homeomorphism of $S_g$ induces an automorphism of $\mathcal{C}^\dagger_1(S_g).$ It remains to show that an automorphism of $\mathcal{C}^\dagger_1(S_g)$ induces a homeomorphism of $S_g$. We reduce this claim to the result of Long--Margalit--Pham--Verberne--Yao that $\mathop{\mathrm{Aut}}\mathcal{C}^\dagger(S_g)\cong \mathop{\mathrm{Homeo}}(S_g)$ via the natural isomorphism. To do this, we show that any automorphism of $\mathcal{C}^\dagger_1(S_g)$ sends all pairs of vertices corresponding to once-intersecting curves to pairs of vertices corresponding to once-intersecting curves. By Proposition [Proposition 7](#toruspairaut){reference-type="ref" reference="toruspairaut"}, automorphisms preserve torus pairs, and by Proposition [Proposition 9](#allpants){reference-type="ref" reference="allpants"}, automorphisms preserve pants pairs. Since an edge can only correspond to a torus pair, a pants pair, or a pair of disjoint curves, we conclude that any $\varphi \in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$ induces an automorphism of the image of the natural inclusion $\mathcal{C}^\dagger(S_g) \hookrightarrow \mathcal{C}^\dagger_1(S_g)$. We apply the theorem of Long--Margalit--Pham--Verberne--Yao to prove that an automorphism of $\mathcal{C}^\dagger_1(S_g)$ naturally induces a homeomorphism of $S_g$. It remains to show that the maps we construct are indeed the inverses of $\Phi.$ For the sake of clarity, we name the maps we use. Let $G$ be $\mathcal{C}^\dagger(S_g)$ if $g\geq 2$ and $\mathcal{C}^\dagger_\pitchfork(T)$ if $g=1.$ Let $\Psi_1:\mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)\to \mathop{\mathrm{Aut}}G$ be the map such that $\Psi_1(f)=\overline{f}$ is the automorphism induced by $f.$ We note that $f$ and $\overline{f}$ act the same way on the vertex sets of their corresponding graphs. 
Let $\Psi_2:\mathop{\mathrm{Aut}}G\to \mathop{\mathrm{Homeo}}(S_g)$ be the map constructed by Le Roux--Wolff [@transaut] (if $g=1$) or Long--Margalit--Pham--Verberne--Yao [@fineaut] (if $g\geq 2$). Let $\varphi\in \mathop{\mathrm{Homeo}}(S_g).$ Then, $$\begin{aligned} \Psi_2\circ \Psi_1\circ \Phi(\varphi) &= \Psi_2\circ \Psi_1(f_{\varphi}), \textrm{ where } f_{\varphi} \textrm{ permutes vertices as prescribed by }\varphi\\ &= \Psi_2(\overline{f_{\varphi}})\\ &= \Psi_2(\Psi_2^{-1}(\varphi))\\ &= \varphi. \end{aligned}$$ Conversely, let $f\in \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g).$ Then, $$\begin{aligned} \Phi\circ\Psi_2\circ \Psi_1(f) &= \Phi \circ \Psi_2(\overline{f})\\ &=\Phi(\varphi_{\overline{f}}), \textrm{ where } \varphi_{\overline{f}} \textrm{ permutes curves as prescribed by } \overline{f}\\ &=\Phi (\varphi_f), \textrm{ since } f \textrm{ and } \overline{f} \textrm{ permute vertices in the same way}\\ &=f. \end{aligned}$$ We conclude that the natural map $\Phi:\mathop{\mathrm{Homeo}}(S_g)\to \mathop{\mathrm{Aut}}\mathcal{C}^\dagger_1(S_g)$ is an isomorphism. ◻
--- author: - | Yves Baudelaire Fomatati\ Department of Mathematics and Statistics, University of Ottawa,\ Ottawa, Ontario, Canada K1N 6N5.\ yfomatat\@uottawa.ca. bibliography: - fomatati_ref.bib title: | Some properties of n-matrix factorizations of polynomials --- > **Abstract** Let $R=K[x_{1},x_{2},\cdots, x_{m}]$ where $K$ is a field. In this paper, we give some properties of $n$-matrix factorizations of polynomials in $R$. We also derive some results giving some lower bounds on the number of $n$-matrix factors of polynomials. In particular, we give a lower bound on the number of matrix factors of minimal size for the sums of squares polynomial $f_{m}=x_{1}^{2}+\cdots + x_{m}^{2}$ for $m=8$.\ **Keywords.** Matrix factorizations, polynomial, sums of squares polynomial.\ **Mathematics Subject Classification (2020).** 15A23, 12D05, 16D40.\ In the sequel, except otherwise stated, our polynomials will be taken from $S=\mathbb{R}[x_{1},x_{2},\cdots, x_{m}]$ where $\mathbb{R}$ is the set of real numbers. Sometimes instead of indexing the indeterminates when they are at most three, we will write $x,y,z$. # Introduction Both reducible and irreducible polynomials can be factored using matrices. For instance, the polynomial $f=z^{2}+y^{2}$ is irreducible over $\mathbb{R}[z,y]$ but can be factorized as follows: $$\begin{bmatrix} z & -y \\ y & z \end{bmatrix} \begin{bmatrix} z & y \\ -y & z \end{bmatrix} = (z^{2} + y^{2})\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} =fI_{2}$$ We say that $(\begin{bmatrix} z & -y \\ y & z \end{bmatrix}, \begin{bmatrix} z & y \\ -y & z \end{bmatrix})$ is a $2 \times 2$ matrix factorization of $f$. Eisenbud [@eisenbud1980homological] was the first to introduce the notion of matrix factorization. In fact, it is a generalization of the classical polynomial factorization in the sense that classical polynomial factors can now be seen as $1\times 1$ matrix factors. 
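As a quick sanity check of the $2\times 2$ matrix factorization of $f=z^{2}+y^{2}$ displayed above, the identity $PQ=fI_{2}$ can be verified symbolically. The use of sympy below is our own choice of tool, not something the paper relies on; the matrices themselves are taken from the text.

```python
# Verify the 2x2 matrix factorization of f = z^2 + y^2 given above,
# i.e. that P*Q = f*I_2. (sympy is our choice of tool here; the
# factorization itself comes from the text.)
import sympy as sp

z, y = sp.symbols("z y")
f = z**2 + y**2

P = sp.Matrix([[z, -y], [y, z]])
Q = sp.Matrix([[z, y], [-y, z]])

product = (P * Q).applyfunc(sp.expand)
assert product == f * sp.eye(2)
print(product)
```

One can check in the same way that $QP=fI_{2}$ as well, a commutation property revisited later for general $n$-matrix factors.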
Matrix factorizations and some of their properties were studied in several papers including [@eisenbud1980homological], [@carqueville2016adjunctions], [@crisler2016matrix], [@fomatati2022tensor] and [@camacho2015matrix].\ It is important to study matrix factorizations of polynomials and their properties for several reasons. One obvious reason is that irreducible polynomials can be factorized using matrices. Moreover, Buchweitz et al. [@buchweitz1987cohen] found that matrix factorizations of polynomials (over the reals) of the form $f_{n}=x_{1}^{2}+\cdots + x_{n}^{2}$, for $n=1,2,4$ and $8$, are related to the existence of composition algebras over $\mathbb{R}$ of dimension $1,2,4$ and $8$, namely the real numbers, the complex numbers, the quaternions and the octonions. In addition, the notion of matrix factorization is a classical tool in the study of hypersurface singularity algebras [@eisenbud1980homological]. More on the importance of matrix factorizations with references can be found in the introduction of [@fomatati2022tensor].\ The original definition of a matrix factorization of an element $f$ in a ring $R$ (with unity) given by Eisenbud (p.15 of [@eisenbud1980homological]) in 1980 is as follows: a matrix factorization of an element $f$ in a ring $R$ (with unity) is an ordered pair of maps of free $R$-modules $\phi: F\rightarrow G$ and $\psi: G \rightarrow F$ such that $\phi\psi=f\cdot 1_{G}$ and $\psi\phi=f\cdot 1_{F}$. In their paper [@carqueville2016adjunctions] published in 2016, Carqueville and Murfet defined a matrix factorization using linear factorizations and $\mathbb{Z}_{2}$-graded modules (cf. p.8 of [@carqueville2016adjunctions]).\ Another (simpler) way in which matrix factorizations of a polynomial can be defined is found in Yoshino's paper [@yoshino1998tensor]. 
If $K$ denotes a field and $x$ denotes the tuple $x_{1},\cdots, x_{n}$; then in 1998, Yoshino [@yoshino1998tensor] defined a matrix factorization of a power series $f\in K[[x]]$ to be a pair of matrices $(P,Q)$ such that $fI=PQ$. Diveris and Crisler in 2016 used this definition (cf. Definition 1 of [@crisler2016matrix]) of Yoshino. In this paper, we follow suit and we refer to this type of matrix factorization of a polynomial $f$ as a 2-matrix factorization of $f$ or simply a matrix factorization of $f$. Next, we extend this definition to $n$-matrix factorizations of a polynomial.\ Properties of 2-matrix factorizations were used in [@crisler2016matrix] to give the minimal 2-matrix factorization for a polynomial which is the sum of squares of 8 monomials. They were also used in chapter 6 of [@fomatati2019multiplicative] to give necessary conditions for the existence of a Morita Context in the bicategory of Landau-Ginzburg models. Moreover, one of the properties of 2-matrix factorizations was used to conclude that a polynomial admits more than one pair of 2-matrix factors (cf. proposition 2.2 of [@fomatati2019multiplicative]).\ We will derive some properties of $n$-matrix factorizations, some of which are generalizations of the properties given for the case $n=2$ in [@crisler2016matrix]. One of the properties we give shows that once $n$-matrix factors (see Definition [Definition 3](#defn: n-matrix facto){reference-type="ref" reference="defn: n-matrix facto"}) of $f$ are found, we actually have $n$ $n$-matrix factorizations of $f$ (See Theorem [Theorem 1](#thm: main ppty of n-matrix facto){reference-type="ref" reference="thm: main ppty of n-matrix facto"}). Another result we state and prove shows that if $f$ admits a pair of matrix factors which are each constituted of block matrices of equal sizes that commute, then several other matrix factors can be found. 
This will enable us to give a lower bound on the number of matrix factors of minimal size that can be obtained for a sums of squares polynomial with $n$ monomials, $n=4$ and $n=8$. The minimal size for matrix factors of such a polynomial was studied in [@brown2016knorrer], [@yoshino1990maximal], [@buchweitz1987cohen] and [@crisler2016matrix]. [@buchweitz1987cohen] studies matrix factorizations over quadratic surfaces and also includes a study of matrix factorizations of sums of squares polynomials $f_{n}=x_{1}^{2}+x_{2}^{2}+\cdots + x_{n}^{2}$. In [@buchweitz1987cohen], it is shown that there is an equivalence of categories between matrix factorizations of $f_{n}$ and graded modules over a Clifford algebra associated to $f_{n}$. This equivalence is then exploited to generate matrix factorizations. This technique is used to generate minimal matrix factorizations in [@brown2016knorrer] and [@yoshino1990maximal] for $f_{n}$, $n\geq 2$. With the standard method for matrix factorizations of polynomials (see subsection [2.1](#subsec: the standard method){reference-type="ref" reference="subsec: the standard method"}), one finds matrix factors for $f_{8}$ that are $128\times 128$ matrices. In [@crisler2016matrix], the authors use an elementary method to construct an algorithm that produces minimal matrix factorizations for $f_{n}$, $1\leq n\leq 8$. For $n=8$, they show that their algorithm produces matrix factors which are $8\times 8$ matrices. This agrees with the results in [@buchweitz1987cohen] where it is shown that for $n\geq 8$, the smallest possible matrix factorization for $f_{n}$ is bounded below by $2^{\frac{n-2}{2}}\times 2^{\frac{n-2}{2}}$.\ 
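To make the size comparison concrete, the two quantities just mentioned can be computed directly: for $f_{n}$ the standard method yields $2^{n-1}\times 2^{n-1}$ matrix factors, while the lower bound of [@buchweitz1987cohen] is $2^{\frac{n-2}{2}}\times 2^{\frac{n-2}{2}}$. The following back-of-the-envelope computation is our own illustration, not one carried out in the paper.

```python
# Compare, for even n, the factor size 2^(n-1) produced by the
# standard method with the lower bound 2^((n-2)/2) quoted above.
# (Our own illustration; restricted to even n so that the exponent
# (n-2)/2 is an integer.)
for n in (2, 4, 6, 8):
    standard = 2 ** (n - 1)          # standard method size
    bound = 2 ** ((n - 2) // 2)      # lower bound on minimal size
    print(f"n = {n}: standard {standard}x{standard}, lower bound {bound}x{bound}")
```

For $n=8$ this recovers the gap between the $128\times 128$ standard-method factors and the minimal $8\times 8$ ones.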
Moreover, we state and prove a result that gives a lower bound for the number of matrix factors of polynomials satisfying some particular conditions. Finally we give an application of this result. # Preliminaries In this section, we recall the standard method for matrix factorization of polynomials and we give an example. ## The standard method for polynomial factorization {#subsec: the standard method} **Introduction**\ Here, we recall the standard technique for factoring polynomials using matrices. **Proposition 1**. *[@crisler2016matrix] For $i,j\in \{1,2\}$, let $(C_{i},D_{i})$ denote an $n\times n$ matrix factorization of the polynomial $f_{i}\in S$. In addition, assume that the matrices $C_{i}$ and $D_{j}$ commute when $i\neq j.$ Then the matrices* *$$\begin{pmatrix} C_{1} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& -D_{2} \\ \hline C_{2} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& D_{1} \end{pmatrix}, \begin{pmatrix} D_{1} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& D_{2} \\ \hline -C_{2} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& C_{1} \end{pmatrix}$$ give a $2n\times 2n$ matrix factorization of $f_{1} + f_{2}$.* The following corollary is crucial to factor polynomials using matrices. **Corollary 1**. *[@crisler2016matrix] If $(C,D)$ is an $n\times n$ matrix factorization of $f$ and $g,h$ are two polynomials, then $$\begin{pmatrix} C & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& -gI_{n} \\ \hline hI_{n} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& D \end{pmatrix}, \begin{pmatrix} D & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& gI_{n} \\ \hline -hI_{n} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& C \end{pmatrix}$$* *give a $2n\times 2n$ matrix factorization of $f + gh$.* *Proof.* Since the matrices $gI_{n}$ and $hI_{n}$ commute with all $n\times n$ matrices, the proof follows from the previous proposition. 
◻ Thanks to this corollary, one can inductively construct matrix factorizations of polynomials of the form:\ $$f = f_{k} = g_{1}h_{1} + g_{2}h_{2}+ \cdots + g_{k}h_{k}.$$ For $k=1$, we have $f=g_{1}h_{1}$ and clearly $[g_{1}][h_{1}]=[g_{1}h_{1}]=[f_{1}]$ is a $1\times 1$ matrix factorization. Next, suppose that $C$ and $D$ are matrix factorizations of $f_{k-1}$, i.e., $CD=If_{k-1}$ where $I$ is the identity matrix of the right size. Thus, using the foregoing corollary, we obtain a matrix factorization of $f_{k}$: $$(\begin{pmatrix} C & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& -g_{k}I_{n} \\ \hline h_{k}I_{n} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& D \end{pmatrix}, \begin{pmatrix} D & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& g_{k}I_{n} \\ \hline -h_{k}I_{n} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& C \end{pmatrix})$$ **Definition 1**. *[@crisler2016matrix] The foregoing algorithm is called **the standard method** for factoring polynomials.* **Example 1**. *Let $g= x^{2}y+x^{2}z+yz^{2}$. We use the standard method to find a matrix factorization of $g$. 
First a matrix factorization of $x^{2}y+x^{2}z$ is $$(\begin{bmatrix} x^{2} & -x^{2} \\ z & y \end{bmatrix},\begin{bmatrix} y & x^{2} \\ -z & x^{2} \end{bmatrix})$$ Hence, a matrix factorization of $g= x^{2}y+x^{2}z+yz^{2}$ is then: $$N=(\begin{bmatrix} x^{2} & -x^{2} & -y & 0\\ z & y & 0 & -y\\ z^{2} & 0 & y & x^{2}\\ 0 & z^{2}& -z & x^{2} \end{bmatrix},\begin{bmatrix} y & x^{2} & y & 0\\ -z & x^{2} & 0 & y\\ -z^{2} & 0 & x^{2} & -x^{2}\\ 0 & -z^{2}& z & y \end{bmatrix})$$ We could give a $2 \times 2$ matrix factorization of $g$ after factorizing it:\ $g= x^{2}y+x^{2}z+yz^{2}=x^{2}(y+z) +yz^{2}$* *$$\begin{bmatrix} x^{2} & -y \\ z^{2} & y+z \end{bmatrix} \begin{bmatrix} y+z & y \\ -z^{2} & x^{2} \end{bmatrix} = (x^{2}y+x^{2}z+yz^{2})\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} =gI_{2}$$ Thus;* *$(\begin{bmatrix} x^{2} & -y \\ z^{2} & y+z \end{bmatrix}, \begin{bmatrix} y+z & y \\ -z^{2} & x^{2} \end{bmatrix} )$* *is a $2 \times 2$ matrix factorization of $g$.* The standard method can be used to produce matrix factorizations of any polynomial since every polynomial can be expressed as a sum of finitely many monomials. It is easy to verify that for a polynomial with $n$ monomials, the standard method produces matrix factors which are $2^{n-1}\times 2^{n-1}$ matrices. So for a sums of squares polynomial $f_{8}$ with 8 monomials, the standard method produces matrix factors which are $2^{8-1}\times 2^{8-1}=128\times 128$ matrices. As discussed in the introduction, the minimal size of matrix factors for $f_{8}$ are $8\times 8$ matrices. In subsection [3.3](#subsec: lower bound on number of min factors){reference-type="ref" reference="subsec: lower bound on number of min factors"}, we will give a lower bound on the number of minimal size matrix factors for $f_{8}$.\ We first discuss $n$-matrix factorization of polynomials and their properties below. 
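The factorizations in Example 1 can be checked mechanically. The sketch below (using sympy, our choice of tool rather than the paper's) rebuilds the $4\times 4$ pair by the block construction of Corollary 1 applied to the extra summand $yz^{2}$, and verifies it together with the $2\times 2$ pair.

```python
# Rebuild and verify the factorizations of g = x^2*y + x^2*z + y*z^2
# from Example 1. The block construction follows Corollary 1 with
# g_k = y and h_k = z^2; sympy is our own choice of tool.
import sympy as sp

x, y, z = sp.symbols("x y z")
g = x**2 * y + x**2 * z + y * z**2

# matrix factorization of x^2*y + x^2*z
C = sp.Matrix([[x**2, -x**2], [z, y]])
D = sp.Matrix([[y, x**2], [-z, x**2]])
assert (C * D).applyfunc(sp.expand) == (x**2 * y + x**2 * z) * sp.eye(2)

# Corollary 1: add the summand y*z^2 via the block matrices
I2 = sp.eye(2)
P = sp.Matrix(sp.BlockMatrix([[C, -y * I2], [z**2 * I2, D]]))
Q = sp.Matrix(sp.BlockMatrix([[D, y * I2], [-z**2 * I2, C]]))
assert (P * Q).applyfunc(sp.expand) == g * sp.eye(4)

# the smaller 2x2 factorization obtained after writing g = x^2*(y+z) + y*z^2
P2 = sp.Matrix([[x**2, -y], [z**2, y + z]])
Q2 = sp.Matrix([[y + z, y], [-z**2, x**2]])
assert (P2 * Q2).applyfunc(sp.expand) == g * sp.eye(2)
```

This also illustrates why grouping terms before applying the standard method pays off: the $2\times 2$ pair certifies the same identity as the $4\times 4$ one.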
# $n$-matrix factorization of polynomials ## 2-matrix factorization of polynomials **Definition and some Examples**\ Let $K[[x_{1},x_{2},\cdots, x_{m}]]$ be the power series ring in the indeterminates $x_{1},x_{2},\cdots, x_{m}$. We will sometimes write $K[[x]]$ instead of $K[[x_{1},x_{2},\cdots, x_{m}]]$ for ease of notation.\ The notion of matrix factorization is defined in [@yoshino1998tensor] for nonzero non-invertible $f\in K[[x_{1},x_{2},\cdots, x_{m}]]$. We define it as in [@crisler2016matrix], slightly generalizing the one given in [@yoshino1998tensor] by including elements like $1\in K$ for convenience. Yoshino [@yoshino1998tensor] requires an element $f\in K[[x]]$ to be nonzero non-invertible because if $f=0$ then $K[[x]]/(f)=K[[x]]$ and if $f$ is a unit, then $K[[x]]/(f)=K[[x]]/K[[x]]=\{0\}$. But in this work, we will not bother about such restrictions because we will not deal with the homological methods used in [@yoshino1998tensor]. Furthermore, we will restrict ourselves to the ring of polynomials $R=K[x_{1},x_{2},\cdots, x_{m}]$. **Definition 2**. *[@yoshino1998tensor], [@crisler2016matrix] [\[defn matrix facto of polyn\]]{#defn matrix facto of polyn label="defn matrix facto of polyn"}\ An $m\times m$ **matrix factorization** of a polynomial $f\in \;R$ is a pair of $m$ $\times$ $m$ matrices $(P,Q)$ such that $PQ=fI_{m}$, where $I_{m}$ is the $m \times m$ identity matrix and the coefficients of $P$ and of $Q$ are taken from $R$.* Since in this paper we will be discussing factorizations of polynomials using two or more matrices, we will refer to the type of factorizations of definition [\[defn matrix facto of polyn\]](#defn matrix facto of polyn){reference-type="ref" reference="defn matrix facto of polyn"} as 2-matrix factorizations because we have two matrix factors. This is the type that one easily finds in the literature (e.g. [@yoshino1998tensor], [@crisler2016matrix]). 
We will generalize definition [\[defn matrix facto of polyn\]](#defn matrix facto of polyn){reference-type="ref" reference="defn matrix facto of polyn"} below (see definition [Definition 3](#defn: n-matrix facto){reference-type="ref" reference="defn: n-matrix facto"}).\ ## Properties of $n$-matrix factorizations Here, we define what an $n$-matrix factorization of a polynomial is and we give some properties. Thanks to one of these properties, we give a lower bound for the number of minimal matrix factorizations for a sums of squares polynomial $f_{n}=x_{1}^{2}+\cdots + x_{n}^{2}$, for $n=8$. **Definition 3**. *An $m\times m$ **$n$-matrix factorization** of a polynomial $f\in \;R$ is an $n$-tuple of $m\times m$ matrices $(A_{1},A_{2}, \cdots, A_{n})$ such that $A_{1} A_{2} \cdots A_{n}=fI_{m}$, where $I_{m}$ is the $m \times m$ identity matrix and the coefficients of each matrix $A_{i}$, $i\in \{1,2,\cdots, n\}$, are taken from the field of fractions of $R$.* We know that if $(P,Q)$ is a 2-matrix factorization of a nonzero polynomial $f\in \;R$, then $(Q,P)$ is also a matrix factorization of $f$ (cf. proposition 4 of [@crisler2016matrix]). In other words, 2-matrix factors of a nonzero polynomial commute, unlike ordinary matrices. We are going to observe and prove below (see theorem [Theorem 1](#thm: main ppty of n-matrix facto){reference-type="ref" reference="thm: main ppty of n-matrix facto"}) that if the $n$-tuple $(A_{1},A_{2}, \cdots, A_{n})$ is an $n$-matrix factorization of $f$ and the $A_{i}$'s are rearranged in a certain order (more precisely, if they follow a certain cyclic order), then we still have an $n$-matrix factorization of $f$. But to set the stage, we need to establish some preliminary results. **Lemma 1**. *Let $f\in R$ be nonzero. If the $n$-tuple of $m\times m$ matrices $(A_{1},A_{2}, \cdots, A_{n})$ is an $n$-matrix factorization of $f$, i.e., $fI_{m}=A_{1} A_{2} \cdots A_{n}$, then:* 1. 
*The determinant of $A_{i}$ (denoted by $\mid A_{i} \mid$), for $1\leq i \leq n$, divides $f^{m}$. Moreover, if $f$ is irreducible in $R$, then $\mid A_{i} \mid$ is a power of $f$.* 2. *Each $A_{i}$ is invertible.* *Proof.* 1. Suppose $fI_{m}=A_{1} A_{2} \cdots A_{n}$.\ Then $\mid A_{1} A_{2} \cdots A_{n} \mid$ =$\mid fI_{m} \mid$, i.e., $\mid A_{1}\mid \,\mid A_{2} \mid \,\cdots \mid A_{n} \mid$=$f^{m}$. So, $\mid A_{i} \mid$ divides $f^{m}$ for each $i\in \{1,2,\cdots, n\}$.\ Clearly, if $f$ is irreducible then the only divisors of $f^{m}$ are powers of $f$. Hence, $\mid A_{i} \mid$ is a power of $f$ in case $f$ is irreducible. 2. From the first part of this lemma, we see that $\mid A_{i} \mid$ is nonzero since it divides $f^{m}$ and $f$ is nonzero. So over $\mathcal{F}$, the fraction field of $R$, $A_{i}$ is invertible for each $i\in \{1,2,\cdots n\}$.  ◻ The following result states a property of matrix factors that is not enjoyed by all matrices. It shows that once we have an $n$-tuple of $m\times m$ matrices forming an $n$-matrix factorization of $f$, in order to obtain another $n$-matrix factorization of $f$, it suffices to put them on a circle and read them clockwise starting from any matrix factor. This shows that once we have an $n$-matrix factorization of a polynomial $f$, we actually readily have $n$ others by simply rearranging the order of appearance of the matrix factors. This proves that the $n$-matrix factorization of a polynomial is not unique. **Theorem 1**. *Let $f\in R$. 
If the $n$-tuple of $m\times m$ matrices $(A_{1},A_{2}, \cdots, A_{n})$ is an $n$-matrix factorization of $f$ i.e., $fI_{m}=A_{1} A_{2} \cdots A_{n}$, then $fI_{m}=A_{i}A_{i+1} \cdots A_{n}A_{1}\cdots A_{i-1}$ for $1\leq i\leq n$, i.e., $A_{i}A_{i+1} \cdots A_{n}A_{1}\cdots A_{i-1}$ is an $n$-matrix factorization of $f$ for each $i\in \{1,2,\cdots n\}$.* *Proof.* We will use the fact that $n$-matrix factors of a polynomial are invertible (see Lemma [Lemma 1](#Lem: matrix factors are invertible){reference-type="ref" reference="Lem: matrix factors are invertible"}) to prove this theorem. We will also use the fact that matrix multiplication is associative.\ Suppose $fI_{m}=A_{1} A_{2} \cdots A_{n}$ then $A_{i}\overset{\star}{=} A_{i-1}^{-1}\cdots A_{2}^{-1}A_{1}^{-1}fI_{m}A_{n}^{-1}\cdots A_{i+1}^{-1}$ since each $n$-matrix factor is invertible by Lemma [Lemma 1](#Lem: matrix factors are invertible){reference-type="ref" reference="Lem: matrix factors are invertible"}.\ Hence,\ $A_{i} A_{i+1} \cdots A_{n}A_{1}\cdots A_{i-1}\\ %\overset{(by\, \star)}{=} =(A_{i-1}^{-1}\cdots A_{2}^{-1}A_{1}^{-1}fI_{m}A_{n}^{-1}\cdots A_{i+2}^{-1} A_{i+1}^{-1})A_{i+1}A_{i+2}\cdots A_{n}A_{1}\cdots A_{i-1} \,\,by \,\,\star$.\ $=A_{i-1}^{-1}\cdots A_{2}^{-1}A_{1}^{-1}fI_{m}(A_{i+1}A_{i+2} \cdots A_{n})^{-1}(A_{i+1}A_{i+2}\cdots A_{n})A_{1}\cdots A_{i-1}$.\ $=A_{i-1}^{-1}\cdots A_{2}^{-1}A_{1}^{-1}fI_{m}A_{1}\cdots A_{i-1}$.\ $=(A_{1}A_{2}\cdots A_{i-1})^{-1}fI_{m}(A_{1}\cdots A_{i-1})$.\ $=fI_{m}(A_{1}A_{2}\cdots A_{i-1})^{-1}(A_{1}\cdots A_{i-1})$ since $fI_{m}$ commutes with all $m\times m$ matrices.\ $=fI_{m}$\ So, $A_{i} A_{i+1} \cdots A_{n}A_{1}\cdots A_{i-1}=fI_{m}$ for $1\leq i\leq n$ as desired. 
◻ Observe that for $n=2$, the above theorem says that if $(P,Q)$ is a pair of matrix factors of a polynomial $f$, then $PQ=QP$, as already noted in the literature (see page 2 of [@yoshino1998tensor] and Proposition 4 of [@crisler2016matrix]).\ We give another property of $n$-matrix factorizations. **Proposition 2**. *Let $f\in R$. If the $n$-tuple of $m\times m$ matrices $(A_{1},A_{2}, \cdots, A_{n})$ is an $n$-matrix factorization of $f$, i.e., $fI_{m}=A_{1} A_{2} \cdots A_{n}$, then $fI_{m}=A_{n}^{t} A_{n-1}^{t} \cdots A_{1}^{t}$, i.e., $(A_{n}^{t},A_{n-1}^{t}, \cdots, A_{1}^{t})$ is an $n$-matrix factorization of $f$.* *Proof.* Suppose $fI_{m}=A_{1} A_{2} \cdots A_{n}$. Then $(fI_{m})^{t}=(A_{1} A_{2} \cdots A_{n})^{t}$, i.e., $fI_{m}=A_{n}^{t} A_{n-1}^{t} \cdots A_{1}^{t}$, as desired. ◻ The following result (Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}) shows that if a polynomial $f$ admits a matrix factorization $(A,B)$ whose factors can be divided into four block matrices of the same size which pairwise commute (and satisfy suitable sign relations), then one can derive many other matrix factorizations of $f$ by performing specific operations on the blocks constituting the matrix factors.
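The symmetry properties above (Theorem 1 and Proposition 2), together with Lemma 1, can be spot-checked mechanically. The sketch below (plain Python, written for this exposition and not part of the original argument) does so for the classical $2$-matrix factorization $(x^{2}+y^{2})I_{2}=\begin{bmatrix} x & -y \\ y & x \end{bmatrix}\begin{bmatrix} x & y \\ -y & x \end{bmatrix}$; since plain integers are used, each polynomial identity is only verified at the arbitrary point $(x,y)=(3,5)$, which is evidence rather than a proof.

```python
# Spot-check of Lemma 1, Theorem 1 and Proposition 2 for n = 2, using the
# classical factorization (x^2 + y^2) I_2 = A B with
#   A = [[x, -y], [y, x]],  B = [[x, y], [-y, x]].
# Polynomial identities are evaluated at the arbitrary point (x, y) = (3, 5).

def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(P):
    return [list(row) for row in zip(*P)]

def det2(P):
    return P[0][0] * P[1][1] - P[0][1] * P[1][0]

x, y = 3, 5
f = x * x + y * y
A = [[x, -y], [y, x]]
B = [[x, y], [-y, x]]
fI = [[f, 0], [0, f]]

assert mat_mul(A, B) == fI                        # (A, B) is a matrix factorization of f
assert mat_mul(B, A) == fI                        # Theorem 1: cyclic rotation of the factors
assert mat_mul(transpose(B), transpose(A)) == fI  # Proposition 2: (B^t, A^t) also works
assert f ** 2 % det2(A) == 0 and f ** 2 % det2(B) == 0  # Lemma 1: |A_i| divides f^m (m = 2)
```

Running the script succeeds silently; any violated identity would raise an `AssertionError`.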
These operations are: simultaneously rotating the blocks of $A$ clockwise and those of $B$ anti-clockwise (see items 2 to 4 of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}), interchanging the rows of $A$ (respectively the columns of $A$) and interchanging the columns of $B$ (respectively the rows of $B$) \[see items 5 and 6 of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}\], taking the block transpose of the product $AB$ (see items 7 to 12 of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}), and finally taking the block transpose of $A$ and that of $B$ (see items 13 and 14 of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}).\ The application given after Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"} provides a lower bound on the number of matrix factors of minimal size that one can obtain for the sum-of-squares polynomial $f_{n}=x_{1}^{2}+\cdots + x_{n}^{2}$, for $n=4$ and $n=8$. We will focus on the case $n=8$. **Theorem 2**. *Let $(A,B)$ be a matrix factorization of $f$ where $A$ and $B$ are $2n\times 2n$ matrices, i.e., $fI_{2n\times 2n}=AB$. Then we can write* *$A=\begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix}$ and $B=\begin{bmatrix} B_{1} & B_{2} \\ B_{3} & B_{4} \end{bmatrix}$ where $A_{i}$ and $B_{i}$ are $n\times n$ matrices for each $i\in \{1,2,3,4\}$.\ Suppose that either* a) *$A_{i}=B_{i}$ for $i\in \{2,3\}$, and $A_{i}=-B_{j}$ for $i,j\in \{1,4\}$ with $i\neq j$, or* b) *$A_{i}=B_{j}$ for $i,j\in \{1,4\}$ with $i\neq j$, and $A_{i}=-B_{i}$ for $i\in \{2,3\}$,* *and that the following identities hold: $A_{i}A_{j}=A_{j}A_{i}$ for $i,j\in \{1,2,3,4\}$ with $i\neq j$.
Then each of the following pairs of block matrices forms a matrix factorization of $f$:* 1. *$(\begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix},\begin{bmatrix} B_{1} & B_{2} \\ B_{3} & B_{4} \end{bmatrix})$,* 2. *$(\begin{bmatrix} A_{3} & A_{1} \\ A_{4} & A_{2} \end{bmatrix},\begin{bmatrix} B_{2} & B_{4} \\ B_{1} & B_{3} \end{bmatrix})$,* 3. *$(\begin{bmatrix} A_{4} & A_{3} \\ A_{2} & A_{1} \end{bmatrix},\begin{bmatrix} B_{4} & B_{3} \\ B_{2} & B_{1} \end{bmatrix})$,* 4. *$(\begin{bmatrix} A_{2} & A_{4} \\ A_{1} & A_{3} \end{bmatrix},\begin{bmatrix} B_{3} & B_{1} \\ B_{4} & B_{2} \end{bmatrix})$,* 5. *$(\begin{bmatrix} A_{2} & A_{1} \\ A_{4} & A_{3} \end{bmatrix},\begin{bmatrix} B_{3} & B_{4} \\ B_{1} & B_{2} \end{bmatrix})$,* 6. *$(\begin{bmatrix} A_{3} & A_{4} \\ A_{1} & A_{2} \end{bmatrix},\begin{bmatrix} B_{2} & B_{1} \\ B_{4} & B_{3} \end{bmatrix})$,* 7. *$(\begin{bmatrix} B_{1} & B_{3} \\ B_{2} & B_{4} \end{bmatrix},\begin{bmatrix} A_{1} & A_{3} \\ A_{2} & A_{4} \end{bmatrix})$,* 8. *$(\begin{bmatrix} B_{2} & B_{1} \\ B_{4} & B_{3} \end{bmatrix},\begin{bmatrix} A_{3} & A_{4} \\ A_{1} & A_{2} \end{bmatrix})$,* 9. *$(\begin{bmatrix} B_{4} & B_{2} \\ B_{3} & B_{1} \end{bmatrix},\begin{bmatrix} A_{4} & A_{2} \\ A_{3} & A_{1} \end{bmatrix})$,* 10. *$(\begin{bmatrix} B_{3} & B_{4} \\ B_{1} & B_{2} \end{bmatrix},\begin{bmatrix} A_{2} & A_{1} \\ A_{4} & A_{3} \end{bmatrix})$,* 11. *$(\begin{bmatrix} B_{3} & B_{1} \\ B_{4} & B_{2} \end{bmatrix},\begin{bmatrix} A_{2} & A_{4} \\ A_{1} & A_{3} \end{bmatrix})$,* 12. *$(\begin{bmatrix} B_{2} & B_{4} \\ B_{1} & B_{3} \end{bmatrix},\begin{bmatrix} A_{3} & A_{1} \\ A_{4} & A_{2} \end{bmatrix})$,* 13. *$(\begin{bmatrix} A_{1} & A_{3} \\ A_{2} & A_{4} \end{bmatrix},\begin{bmatrix} B_{1} & B_{3} \\ B_{2} & B_{4} \end{bmatrix})$ and* 14.
*$(\begin{bmatrix} A_{4} & A_{2} \\ A_{3} & A_{1} \end{bmatrix},\begin{bmatrix} B_{4} & B_{2} \\ B_{3} & B_{1} \end{bmatrix})$* *Proof.* Suppose that part a) of the hypothesis holds, i.e., $A_{i}=B_{i}$ for $i\in \{2,3\}$, $A_{1}=-B_{4}$, $A_{4}=-B_{1}$, and that the identities $A_{i}A_{j}=A_{j}A_{i}$ hold for $i,j\in \{1,2,3,4\}$ with $i\neq j$. To prove the result, it suffices to show that the product of each of those pairs of block matrices yields the same answer as the product $AB$:\ $\begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix}\begin{bmatrix} B_{1} & B_{2} \\ B_{3} & B_{4} \end{bmatrix} = \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & A_{1}B_{2}+A_{2}B_{4} \\ A_{3}B_{1}+A_{4}B_{3} & A_{3}B_{2}+A_{4}B_{4} \end{bmatrix} = \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & A_{1}A_{2}+A_{2}(-A_{1}) \\ A_{3}(-A_{4})+A_{4}A_{3} & A_{3}B_{2}+A_{4}B_{4} \end{bmatrix} =\\ \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{3}B_{2}+A_{4}B_{4} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & B_{3}A_{2}+B_{1}A_{1} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$\ \ The second equality above follows from part a), and the third from the fact that the block matrices in $A$ commute, i.e., $A_{i}A_{j}=A_{j}A_{i}$ for $i,j\in \{1,2,3,4\}$ with $i\neq j$. The fourth equality again follows from part a), and the fifth from the commutativity of the blocks of $A$.\ In the equalities below, we use the hypotheses in the same way without further mention, since it is easy to see which assumption is used at each step. 1. $\begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix}\begin{bmatrix} B_{1} & B_{2} \\ B_{3} & B_{4} \end{bmatrix}$ This is obvious by hypothesis. 2.
$\begin{bmatrix} A_{3} & A_{1} \\ A_{4} & A_{2} \end{bmatrix}\begin{bmatrix} B_{2} & B_{4} \\ B_{1} & B_{3} \end{bmatrix}=\begin{bmatrix} A_{3} B_{2} +A_{1}B_{1} & A_{3} B_{4} +A_{1}B_{3} \\ A_{4} B_{2} +A_{2}B_{1} & A_{4} B_{4} +A_{2}B_{3} \end{bmatrix} =\begin{bmatrix} B_{3} A_{2} +A_{1}B_{1} & A_{3} (-A_{1}) +A_{1}A_{3} \\ A_{4} A_{2} +A_{2}(-A_{4}) & A_{4} B_{4} +A_{2}B_{3} \end{bmatrix}\\= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 3. $\begin{bmatrix} A_{4} & A_{3} \\ A_{2} & A_{1} \end{bmatrix}\begin{bmatrix} B_{4} & B_{3} \\ B_{2} & B_{1} \end{bmatrix}=\begin{bmatrix} A_{4} B_{4} +A_{3}B_{2} & A_{4} B_{3} +A_{3}B_{1} \\ A_{2} B_{4} +A_{1}B_{2} & A_{2} B_{3} +A_{1}B_{1} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 4. $\begin{bmatrix} A_{2} & A_{4} \\ A_{1} & A_{3} \end{bmatrix}\begin{bmatrix} B_{3} & B_{1} \\ B_{4} & B_{2} \end{bmatrix}=\begin{bmatrix} A_{2} B_{3} +A_{4}B_{4} & A_{2} B_{1} +A_{4}B_{2} \\ A_{1} B_{3} +A_{3}B_{4} & A_{1} B_{1} +A_{3}B_{2} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 5. $\begin{bmatrix} A_{2} & A_{1} \\ A_{4} & A_{3} \end{bmatrix}\begin{bmatrix} B_{3} & B_{4} \\ B_{1} & B_{2} \end{bmatrix}=\begin{bmatrix} A_{2} B_{3} +A_{1}B_{1} & A_{2} B_{4} +A_{1}B_{2} \\ A_{4} B_{3} +A_{3}B_{1} & A_{4} B_{4} +A_{3}B_{2} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 6. $\begin{bmatrix} A_{3} & A_{4} \\ A_{1} & A_{2} \end{bmatrix}\begin{bmatrix} B_{2} & B_{1} \\ B_{4} & B_{3} \end{bmatrix}=\begin{bmatrix} A_{3} B_{2} +A_{4}B_{4} & A_{3} B_{1} +A_{4}B_{3} \\ A_{1} B_{2} +A_{2}B_{4} & A_{1} B_{1} +A_{2}B_{3} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 7. 
$\begin{bmatrix} B_{1} & B_{3} \\ B_{2} & B_{4} \end{bmatrix}\begin{bmatrix} A_{1} & A_{3} \\ A_{2} & A_{4} \end{bmatrix}=\begin{bmatrix} B_{1}A_{1}+ B_{3}A_{2} & B_{1} A_{3} +B_{3}A_{4} \\ B_{2} A_{1} +B_{4}A_{2} & B_{2}A_{3}+ B_{4}A_{4} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 8. $\begin{bmatrix} B_{2} & B_{1} \\ B_{4} & B_{3} \end{bmatrix}\begin{bmatrix} A_{3} & A_{4} \\ A_{1} & A_{2} \end{bmatrix}=\begin{bmatrix} B_{2}A_{3}+ B_{1}A_{1} & B_{2} A_{4} +B_{1}A_{2} \\ B_{4} A_{3} +B_{3}A_{1} & B_{4}A_{4}+ B_{3}A_{2} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 9. $\begin{bmatrix} B_{4} & B_{2} \\ B_{3} & B_{1} \end{bmatrix}\begin{bmatrix} A_{4} & A_{2} \\ A_{3} & A_{1} \end{bmatrix}=\begin{bmatrix} B_{4}A_{4}+ B_{2}A_{3} & B_{4} A_{2} +B_{2}A_{1} \\ B_{3} A_{4} +B_{1}A_{3} & B_{3}A_{2}+ B_{1}A_{1} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 10. $\begin{bmatrix} B_{3} & B_{4} \\ B_{1} & B_{2} \end{bmatrix}\begin{bmatrix} A_{2} & A_{1} \\ A_{4} & A_{3} \end{bmatrix}=\begin{bmatrix} B_{3}A_{2}+ B_{4}A_{4} & B_{3} A_{1} +B_{4}A_{3} \\ B_{1} A_{2} +B_{2}A_{4} & B_{1}A_{1}+ B_{2}A_{3} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 11. $\begin{bmatrix} B_{3} & B_{1} \\ B_{4} & B_{2} \end{bmatrix}\begin{bmatrix} A_{2} & A_{4} \\ A_{1} & A_{3} \end{bmatrix}=\begin{bmatrix} B_{3}A_{2}+ B_{1}A_{1} & B_{3}A_{4} +B_{1} A_{3} \\ B_{4}A_{2}+ B_{2} A_{1} & B_{4}A_{4}+ B_{2}A_{3} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 12. 
$\begin{bmatrix} B_{2} & B_{4} \\ B_{1} & B_{3} \end{bmatrix}\begin{bmatrix} A_{3} & A_{1} \\ A_{4} & A_{2} \end{bmatrix}=\begin{bmatrix} B_{2}A_{3}+ B_{4}A_{4} & B_{2} A_{1} +B_{4}A_{2} \\ B_{1} A_{3} +B_{3}A_{4} & B_{1}A_{1}+ B_{3}A_{2} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ 13. $\begin{bmatrix} A_{1} & A_{3} \\ A_{2} & A_{4} \end{bmatrix}\begin{bmatrix} B_{1} & B_{3} \\ B_{2} & B_{4} \end{bmatrix}=\begin{bmatrix} A_{1} B_{1} +A_{3}B_{2} & A_{1} B_{3} +A_{3}B_{4} \\ A_{2} B_{1} +A_{4}B_{2} & A_{2} B_{3} +A_{4}B_{4} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ and 14. $\begin{bmatrix} A_{4} & A_{2} \\ A_{3} & A_{1} \end{bmatrix}\begin{bmatrix} B_{4} & B_{2} \\ B_{3} & B_{1} \end{bmatrix}=\begin{bmatrix} A_{4}B_{4}+ A_{2} B_{3} & A_{4} B_{2} +A_{2}B_{1} \\ A_{3} B_{4} +A_{1}B_{3} & A_{3} B_{2} +A_{1}B_{1} \end{bmatrix}= \begin{bmatrix} A_{1}B_{1}+A_{2}B_{3} & 0 \\ 0 & A_{1}B_{1}+A_{2}B_{3} \end{bmatrix}$ If we instead suppose that part b) of the hypothesis holds, together with the identities $A_{i}A_{j}=A_{j}A_{i}$ for $i,j\in \{1,2,3,4\}$ with $i\neq j$, then the proof proceeds in a manner similar to the above, so we omit it. ◻ ## An application: A lower bound on the number of minimal matrix factors for $f_{8}=x_{1}^{2}+\cdots +x_{8}^{2}$ {#subsec: lower bound on number of min factors} Here, we give an application of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}.\ As mentioned in the introduction, it was shown in [@buchweitz1987cohen] that for $n\geq 8$, the size of the smallest possible matrix factorization of $f_{n}=x_{1}^{2}+\cdots +x_{n}^{2}$ is bounded below by $2^{\frac{n-2}{2}}\times 2^{\frac{n-2}{2}}$.
Moreover, it is shown in [@buchweitz1987cohen] that the factorizations obtained when $n = 1, 2, 4$, and $8$ are related to the existence of composition algebras over $\mathbb{R}$ of dimension $1, 2, 4$, and $8$. In fact, the authors of [@buchweitz1987cohen] deduce Hurwitz's Theorem that no real composition algebra of dimension $n$ exists for $n \neq 1, 2, 4$, or $8$, using the lower bound on the size of the smallest matrix factors of $f_{n}$ as a crucial ingredient in their proof. Furthermore, they show that a necessary condition for the existence of a real composition algebra of dimension $n$ is that $f_{n}$ admits a matrix factorization of size $n \times n$. Since $n < 2^{\frac{n-2}{2}}$ for all $n > 8$, they deduce that no composition algebra of dimension $n$ exists when $n > 8$.\ In [@crisler2016matrix], using an elementary but elegant method, the authors constructed an algorithm which yields matrix factors for $f_{8}$ of minimal size, i.e., $8\times 8$ matrix factors. The above-mentioned papers ([@buchweitz1987cohen; @crisler2016matrix]) do not tell us how many smallest-size matrix factorizations can be obtained. Here, we exhibit 14 matrix factorizations for $f_{8}=x_{1}^{2}+\cdots +x_{8}^{2}$ that are of the smallest possible size, i.e., $8\times 8$ matrix factors. Hence, this gives a lower bound on the number of minimal matrix factors for $f_{8}$.\ From the above discussion, we know that the smallest possible matrix factors for $f_{8}$ are $8\times 8$ matrices. In [@crisler2016matrix], $8\times 8$ matrix factors were found for $f_{8}$.
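The hand verification carried out below can also be done mechanically. The following sketch (plain Python, written for this exposition; the blocks $A_{1},\dots,A_{4}$ are the ones displayed below) checks that $AA^{t}=f_{8}I_{8}$, that the four blocks pairwise commute, and that one of the derived pairs of Theorem 2 is again a matrix factorization; all polynomial identities are evaluated at the single integer point $(x_{1},\dots,x_{8})=(1,\dots,8)$, so this is a spot-check rather than a proof.

```python
# Spot-check of the 8x8 factorization f_8 I_8 = A A^t from [@crisler2016matrix]
# and of one derived pair from Theorem 2, with every polynomial identity
# evaluated at the integer point (x_1, ..., x_8) = (1, ..., 8).

def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(P):
    return [list(row) for row in zip(*P)]

def block(tl, tr, bl, br):
    # Assemble a 2x2 block matrix from four equally sized square blocks.
    return [r1 + r2 for r1, r2 in zip(tl, tr)] + \
           [r1 + r2 for r1, r2 in zip(bl, br)]

x1, x2, x3, x4, x5, x6, x7, x8 = range(1, 9)
f8 = sum(v * v for v in range(1, 9))
f8_I8 = [[f8 if i == j else 0 for j in range(8)] for i in range(8)]

A1 = [[x1, -x2, x3, x4], [x2, x1, -x4, x3], [-x3, x4, x1, x2], [-x4, -x3, -x2, x1]]
A2 = [[x5, x6, -x7, -x8], [-x6, x5, -x8, x7], [x7, x8, x5, x6], [x8, -x7, -x6, x5]]
A3 = [[-x5, x6, -x7, -x8], [-x6, -x5, -x8, x7], [x7, x8, -x5, x6], [x8, -x7, -x6, -x5]]
A4 = [[x1, x2, -x3, -x4], [-x2, x1, x4, -x3], [x3, -x4, x1, -x2], [x4, x3, x2, x1]]

# Hypothesis (b) of Theorem 2: A_1 = B_4, A_4 = B_1, A_2 = -B_2, A_3 = -B_3.
B1, B4 = A4, A1
B2 = [[-v for v in row] for row in A2]
B3 = [[-v for v in row] for row in A3]

A = block(A1, A2, A3, A4)
assert block(B1, B2, B3, B4) == transpose(A)      # B really is A^t
assert mat_mul(A, transpose(A)) == f8_I8          # A A^t = f_8 I_8

for P in (A1, A2, A3, A4):                        # the four blocks pairwise commute
    for Q in (A1, A2, A3, A4):
        assert mat_mul(P, Q) == mat_mul(Q, P)

# Derived pair 3 of Theorem 2 is again a matrix factorization of f_8:
assert mat_mul(block(A4, A3, A2, A1), block(B4, B3, B2, B1)) == f8_I8
```

The same script, run with the $4\times 4$ pair from [@crisler2016matrix] in place of these blocks, performs the analogous check for $f_{4}$ mentioned at the end of this section.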
We copy them here (without showing how they were obtained) and we show that this pair of matrix factors verifies the hypotheses of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"}.\ Following [@crisler2016matrix], $f_{8}I_{8}=AB=AA^{T}$ where $A=\begin{bmatrix} \begin{array}{c|c} A_{1} & A_{2} \\ \hline A_{3} & A_{4} \end{array}\end{bmatrix}$ and $B=\begin{bmatrix} \begin{array}{c|c} B_{1} & B_{2} \\ \hline B_{3} & B_{4} \end{array}\end{bmatrix}$ with\ $A_{1}=\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix}$ $A_{2}=\begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix}$\ $A_{3}=\begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & -x_{5} \end{bmatrix}$ $A_{4}=\begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}$\ Since $B=A^{T}$, we have\ $B_{1}=\begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}$ $B_{2}=\begin{bmatrix} -x_{5} & -x_{6} & x_{7} & x_{8} \\ x_{6} & -x_{5} & x_{8} & -x_{7} \\ -x_{7} & -x_{8} & -x_{5} & -x_{6} \\ -x_{8} & x_{7} & x_{6} & -x_{5} \end{bmatrix}$\ $B_{3}=\begin{bmatrix} x_{5} & -x_{6} & x_{7} & x_{8} \\ x_{6} & x_{5} & x_{8} & -x_{7} \\ -x_{7} & -x_{8} & x_{5} & -x_{6} \\ -x_{8} & x_{7} & x_{6} & x_{5} \end{bmatrix}$ $B_{4}=\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix}$\ \ Hence, part b) of the hypothesis of Theorem [Theorem
2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"} is satisfied: indeed, $A_{i}=B_{j}$ for $i,j\in \{1,4\}$ with $i\neq j$, and $A_{i}=-B_{i}$ for $i\in \{2,3\}$. It now remains to show that the identities $A_{i}A_{j}=A_{j}A_{i}$ hold for $i,j\in \{1,2,3,4\}$ with $i\neq j$.\ \ $A_{1}A_{2}=\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix} \begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix}$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} x_{1}x_{5}+x_{2}x_{6}+x_{3}x_{7}+x_{4}x_{8} & x_{1}x_{6}-x_{2}x_{5}+x_{3}x_{8}-x_{4}x_{7} & -x_{1}x_{7}+x_{2}x_{8}+x_{3}x_{5}-x_{4}x_{6} & -x_{1}x_{8}-x_{2}x_{7}+x_{3}x_{6}+x_{4}x_{5} \\ x_{2}x_{5}-x_{1}x_{6}-x_{4}x_{7}+x_{3}x_{8} & x_{2}x_{6}+x_{1}x_{5}-x_{4}x_{8}-x_{3}x_{7} & -x_{2}x_{7}-x_{1}x_{8}-x_{4}x_{5}-x_{3}x_{6} & -x_{2}x_{8}+x_{1}x_{7}-x_{4}x_{6}+x_{3}x_{5} \\ -x_{3}x_{5}-x_{4}x_{6}+x_{1}x_{7}+x_{2}x_{8} & -x_{3}x_{6}+x_{4}x_{5}+x_{1}x_{8}-x_{2}x_{7} & x_{3}x_{7}-x_{4}x_{8}+x_{1}x_{5}-x_{2}x_{6} & x_{3}x_{8}+x_{4}x_{7}+x_{1}x_{6}+x_{2}x_{5} \\ -x_{4}x_{5}+x_{3}x_{6}-x_{2}x_{7}+x_{1}x_{8} & -x_{4}x_{6}-x_{3}x_{5}-x_{2}x_{8}-x_{1}x_{7} & x_{4}x_{7}+x_{3}x_{8}-x_{2}x_{5}-x_{1}x_{6} & x_{4}x_{8}-x_{3}x_{7}-x_{2}x_{6}+x_{1}x_{5} \end{pmatrix} $}\end{gathered}$$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} x_{5}x_{1}+x_{6}x_{2}+x_{7}x_{3}+x_{8}x_{4} & -x_{5}x_{2}+x_{6}x_{1}-x_{7}x_{4}+x_{8}x_{3} & x_{5}x_{3}-x_{6}x_{4}-x_{7}x_{1}+x_{8}x_{2} & x_{5}x_{4}+x_{6}x_{3}-x_{7}x_{2}-x_{8}x_{1} \\ -x_{6}x_{1}+x_{5}x_{2}+x_{8}x_{3}-x_{7}x_{4} &
x_{6}x_{2}+x_{5}x_{1}-x_{8}x_{4}-x_{7}x_{3} & -x_{6}x_{3}-x_{5}x_{4}-x_{8}x_{1}-x_{7}x_{2} & -x_{6}x_{4}+x_{5}x_{3}-x_{8}x_{2}+x_{7}x_{1} \\ x_{7}x_{1}+x_{8}x_{2}-x_{5}x_{3}-x_{6}x_{4} & -x_{7}x_{2}+x_{8}x_{1}+x_{5}x_{4}-x_{6}x_{3} & x_{7}x_{3}-x_{8}x_{4}+x_{5}x_{1}-x_{6}x_{2} & x_{7}x_{4}+x_{8}x_{3}+x_{5}x_{2}+x_{6}x_{1} \\ x_{8}x_{1}-x_{7}x_{2}+x_{6}x_{3}-x_{5}x_{4} & -x_{8}x_{2}-x_{7}x_{1}-x_{6}x_{4}-x_{5}x_{3} & x_{8}x_{3}+x_{7}x_{4}-x_{6}x_{1}-x_{5}x_{2} & x_{8}x_{4}-x_{7}x_{3}-x_{6}x_{2}+x_{5}x_{1} \end{pmatrix} $}\end{gathered}$$\ $=\begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix}\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix} =A_{2}A_{1}$ $A_{3}A_{4}=\begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & -x_{5} \end{bmatrix} \begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} -x_{5}x_{1}-x_{6}x_{2}-x_{7}x_{3}-x_{8}x_{4} & -x_{5}x_{2}+x_{6}x_{1}+x_{7}x_{4}-x_{8}x_{3} & x_{5}x_{3}+x_{6}x_{4}-x_{7}x_{1}-x_{8}x_{2} & x_{5}x_{4}-x_{6}x_{3}+x_{7}x_{2}-x_{8}x_{1} \\ -x_{6}x_{1}+x_{5}x_{2}-x_{8}x_{3}+x_{7}x_{4} & -x_{6}x_{2}-x_{5}x_{1}+x_{8}x_{4}+x_{7}x_{3} & x_{6}x_{3}-x_{5}x_{4}-x_{8}x_{1}+x_{7}x_{2} & x_{6}x_{4}+x_{5}x_{3}+x_{8}x_{2}+x_{7}x_{1} \\ x_{7}x_{1}-x_{8}x_{2}-x_{5}x_{3}+x_{6}x_{4} & x_{7}x_{2}+x_{8}x_{1}+x_{5}x_{4}+x_{6}x_{3} & -x_{7}x_{3}+x_{8}x_{4}-x_{5}x_{1}+x_{6}x_{2} & -x_{7}x_{4}-x_{8}x_{3}+x_{5}x_{2}+x_{6}x_{1} \\ x_{8}x_{1}+x_{7}x_{2}-x_{6}x_{3}-x_{5}x_{4} & 
x_{8}x_{2}-x_{7}x_{1}+x_{6}x_{4}-x_{5}x_{3} & -x_{8}x_{3}-x_{7}x_{4}-x_{6}x_{1}-x_{5}x_{2} & -x_{8}x_{4}+x_{7}x_{3}+x_{6}x_{2}-x_{5}x_{1} \end{pmatrix} $}\end{gathered}$$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} -x_{1}x_{5}-x_{2}x_{6}-x_{3}x_{7}-x_{4}x_{8} & x_{1}x_{6}-x_{2}x_{5}-x_{3}x_{8}+x_{4}x_{7} & -x_{1}x_{7}-x_{2}x_{8}+x_{3}x_{5}+x_{4}x_{6} & -x_{1}x_{8}+x_{2}x_{7}-x_{3}x_{6}+x_{4}x_{5} \\ x_{2}x_{5}-x_{1}x_{6}+x_{4}x_{7}-x_{3}x_{8} & -x_{2}x_{6}-x_{1}x_{5}+x_{4}x_{8}+x_{3}x_{7} & x_{2}x_{7}-x_{1}x_{8}-x_{4}x_{5}+x_{3}x_{6} & x_{2}x_{8}+x_{1}x_{7}+x_{4}x_{6}+x_{3}x_{5} \\ -x_{3}x_{5}+x_{4}x_{6}+x_{1}x_{7}-x_{2}x_{8} & x_{3}x_{6}+x_{4}x_{5}+x_{1}x_{8}+x_{2}x_{7} & -x_{3}x_{7}+x_{4}x_{8}-x_{1}x_{5}+x_{2}x_{6} & -x_{3}x_{8}-x_{4}x_{7}+x_{1}x_{6}+x_{2}x_{5} \\ -x_{4}x_{5}-x_{3}x_{6}+x_{2}x_{7}+x_{1}x_{8} & x_{4}x_{6}-x_{3}x_{5}+x_{2}x_{8}-x_{1}x_{7} & -x_{4}x_{7}-x_{3}x_{8}-x_{2}x_{5}-x_{1}x_{6} & -x_{4}x_{8}+x_{3}x_{7}+x_{2}x_{6}-x_{1}x_{5} \end{pmatrix} $}\end{gathered}$$\ $=\begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}\begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & -x_{5} \end{bmatrix} =A_{4}A_{3}$ $A_{1}A_{4}=\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix} \begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} x_{1}^{2} +x_{2}^{2}+x_{3}^{2}+x_{4}^{2} & 0 & 0 & 0 \\ 0 & 
x_{2}^{2}+x_{1}^{2} +x_{4}^{2}+x_{3}^{2} & 0 & 0 \\ 0 & 0 & x_{3}^{2}+x_{4}^{2}+x_{1}^{2} +x_{2}^{2} & 0 \\ 0 & 0 & 0 & x_{4}^{2} +x_{3}^{2}+x_{2}^{2}+x_{1}^{2} \end{pmatrix} $}\end{gathered}$$\ $=\begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix} =A_{4}A_{1}$ $A_{2}A_{3}=\begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix} \begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & -x_{5} \end{bmatrix}$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} -x_{5}^{2} -x_{6}^{2}-x_{7}^{2}-x_{8}^{2} & 0 & 0 & 0 \\ 0 & -x_{6}^{2}-x_{5}^{2}-x_{8}^{2}-x_{7}^{2} & 0 & 0 \\ 0 & 0 & -x_{7}^{2}-x_{8}^{2}-x_{5}^{2} -x_{6}^{2} & 0 \\ 0 & 0 & 0 & -x_{8}^{2}-x_{7}^{2}-x_{6}^{2}-x_{5}^{2} \end{pmatrix} $}\end{gathered}$$\ $=\begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & -x_{5} \end{bmatrix}\begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix} =A_{3}A_{2}$ $A_{1}A_{3}=\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix} \begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & -x_{5} \end{bmatrix}$\ 
$$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} -x_{1}x_{5}+x_{2}x_{6}+x_{3}x_{7}+x_{4}x_{8} & x_{1}x_{6}+x_{2}x_{5}+x_{3}x_{8}-x_{4}x_{7}& -x_{1}x_{7}+x_{2}x_{8}-x_{3}x_{5}-x_{4}x_{6} & -x_{1}x_{8}-x_{2}x_{7}+x_{3}x_{6}-x_{4}x_{5} \\ -x_{2}x_{5}-x_{1}x_{6}-x_{4}x_{7}+x_{3}x_{8} & x_{2}x_{6}-x_{1}x_{5}-x_{4}x_{8}-x_{3}x_{7} & -x_{2}x_{7}-x_{1}x_{8}+x_{4}x_{5}-x_{3}x_{6} & -x_{2}x_{8}+x_{1}x_{7}-x_{4}x_{6}-x_{3}x_{5} \\ x_{3}x_{5}-x_{4}x_{6}+x_{1}x_{7}+x_{2}x_{8} & -x_{3}x_{6}-x_{4}x_{5}+x_{1}x_{8}-x_{2}x_{7} & x_{3}x_{7}-x_{4}x_{8}-x_{1}x_{5}-x_{2}x_{6} & x_{3}x_{8}+x_{4}x_{7}+x_{1}x_{6}-x_{2}x_{5} \\ x_{4}x_{5}+x_{3}x_{6}-x_{2}x_{7}+x_{1}x_{8} & -x_{4}x_{6}+x_{3}x_{5}-x_{2}x_{8}-x_{1}x_{7} & x_{4}x_{7}+x_{3}x_{8}+x_{2}x_{5}-x_{1}x_{6} & x_{4}x_{8}-x_{3}x_{7}-x_{2}x_{6}-x_{1}x_{5} \end{pmatrix} $}\end{gathered}$$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} -x_{5}x_{1}+x_{6}x_{2}+x_{7}x_{3}+x_{8}x_{4} & x_{5}x_{2}+x_{6}x_{1}-x_{7}x_{4}+x_{8}x_{3} & -x_{5}x_{3}-x_{6}x_{4}-x_{7}x_{1}+x_{8}x_{2} & -x_{5}x_{4}+x_{6}x_{3}-x_{7}x_{2}-x_{8}x_{1} \\ -x_{6}x_{1}-x_{5}x_{2}+x_{8}x_{3}-x_{7}x_{4} & x_{6}x_{2}-x_{5}x_{1}-x_{8}x_{4}-x_{7}x_{3} & -x_{6}x_{3}+x_{5}x_{4}-x_{8}x_{1}-x_{7}x_{2} & -x_{6}x_{4}-x_{5}x_{3}-x_{8}x_{2}+x_{7}x_{1} \\ x_{7}x_{1}+x_{8}x_{2}+x_{5}x_{3}-x_{6}x_{4} & -x_{7}x_{2}+x_{8}x_{1}-x_{5}x_{4}-x_{6}x_{3} & x_{7}x_{3}-x_{8}x_{4}-x_{5}x_{1}-x_{6}x_{2} & x_{7}x_{4}+x_{8}x_{3}-x_{5}x_{2}+x_{6}x_{1} \\ x_{8}x_{1}-x_{7}x_{2}+x_{6}x_{3}+x_{5}x_{4} & -x_{8}x_{2}-x_{7}x_{1}-x_{6}x_{4}+x_{5}x_{3} & x_{8}x_{3}+x_{7}x_{4}-x_{6}x_{1}+x_{5}x_{2} & x_{8}x_{4}-x_{7}x_{3}-x_{6}x_{2}-x_{5}x_{1} \end{pmatrix} $}\end{gathered}$$\ $=\begin{bmatrix} -x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & -x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & -x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & 
-x_{5} \end{bmatrix}\begin{bmatrix} x_{1} & -x_{2} & x_{3} & x_{4} \\ x_{2} & x_{1} & -x_{4} & x_{3} \\ -x_{3} & x_{4} & x_{1} & x_{2} \\ -x_{4} & -x_{3} & -x_{2} & x_{1} \end{bmatrix} =A_{3}A_{1}$ $A_{2}A_{4}=\begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix} \begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}$\ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} x_{5}x_{1}-x_{6}x_{2}-x_{7}x_{3}-x_{8}x_{4} & x_{5}x_{2}+x_{6}x_{1}+x_{7}x_{4}-x_{8}x_{3} & -x_{5}x_{3}+x_{6}x_{4}-x_{7}x_{1}-x_{8}x_{2} & -x_{5}x_{4}-x_{6}x_{3}+x_{7}x_{2}-x_{8}x_{1} \\ -x_{6}x_{1}-x_{5}x_{2}-x_{8}x_{3}+x_{7}x_{4} & -x_{6}x_{2}+x_{5}x_{1}+x_{8}x_{4}+x_{7}x_{3} & x_{6}x_{3}+x_{5}x_{4}-x_{8}x_{1}+x_{7}x_{2} & x_{6}x_{4}-x_{5}x_{3}+x_{8}x_{2}+x_{7}x_{1} \\ x_{7}x_{1}-x_{8}x_{2}+x_{5}x_{3}+x_{6}x_{4} & x_{7}x_{2}+x_{8}x_{1}-x_{5}x_{4}+x_{6}x_{3} & -x_{7}x_{3}+x_{8}x_{4}+x_{5}x_{1}+x_{6}x_{2} & -x_{7}x_{4}-x_{8}x_{3}-x_{5}x_{2}+x_{6}x_{1} \\ x_{8}x_{1}+x_{7}x_{2}-x_{6}x_{3}+x_{5}x_{4} & x_{8}x_{2}-x_{7}x_{1}+x_{6}x_{4}+x_{5}x_{3} & -x_{8}x_{3}-x_{7}x_{4}-x_{6}x_{1}+x_{5}x_{2} & -x_{8}x_{4}+x_{7}x_{3}+x_{6}x_{2}+x_{5}x_{1} \end{pmatrix} $}\end{gathered}$$ $$\begin{gathered} \setlength{\arraycolsep}{.60\arraycolsep} \renewcommand{\arraystretch}{1.5} \text{\footnotesize$\displaystyle =\begin{pmatrix} x_{1}x_{5}-x_{2}x_{6}-x_{3}x_{7}-x_{4}x_{8} & x_{1}x_{6}+x_{2}x_{5}-x_{3}x_{8}+x_{4}x_{7}& -x_{1}x_{7}-x_{2}x_{8}-x_{3}x_{5}+x_{4}x_{6} & -x_{1}x_{8}+x_{2}x_{7}-x_{3}x_{6}-x_{4}x_{5} \\ -x_{2}x_{5}-x_{1}x_{6}+x_{4}x_{7}-x_{3}x_{8} & -x_{2}x_{6}+x_{1}x_{5}+x_{4}x_{8}+x_{3}x_{7} & x_{2}x_{7}-x_{1}x_{8}+x_{4}x_{5}+x_{3}x_{6} & x_{2}x_{8}+x_{1}x_{7}+x_{4}x_{6}-x_{3}x_{5} \\ 
x_{3}x_{5}+x_{6}x_{4}+x_{1}x_{7}-x_{2}x_{8} & x_{3}x_{6}-x_{4}x_{5}+x_{1}x_{8}+x_{2}x_{7} & -x_{3}x_{7}+x_{4}x_{8}+x_{1}x_{5}+x_{2}x_{6} & -x_{3}x_{8}-x_{4}x_{7}+x_{1}x_{6}-x_{2}x_{5} \\ x_{4}x_{5}-x_{3}x_{6}+x_{2}x_{7}+x_{1}x_{8} & x_{4}x_{6}+x_{3}x_{5}+x_{2}x_{8}-x_{1}x_{7} & -x_{4}x_{7}-x_{3}x_{8}+x_{2}x_{5}-x_{1}x_{6} & -x_{4}x_{8}+x_{3}x_{7}+x_{2}x_{6}+x_{1}x_{5} \end{pmatrix} $}\end{gathered}$$\ \ $=\begin{bmatrix} x_{1} & x_{2} & -x_{3} & -x_{4} \\ -x_{2} & x_{1} & x_{4} & -x_{3} \\ x_{3} & -x_{4} & x_{1} & -x_{2} \\ x_{4} & x_{3} & x_{2} & x_{1} \end{bmatrix}\begin{bmatrix} x_{5} & x_{6} & -x_{7} & -x_{8} \\ -x_{6} & x_{5} & -x_{8} & x_{7} \\ x_{7} & x_{8} & x_{5} & x_{6} \\ x_{8} & -x_{7} & -x_{6} & x_{5} \end{bmatrix} =A_{4}A_{2}$\ Thus, the hypotheses of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"} are satisfied for $f_{8}$, and so we can conclude that $f_{8}$ has at least 14 matrix factorizations of minimal size, as enumerated in the conclusion of the theorem. They are all of minimal size since they all consist of $8\times 8$ matrices, as discussed in the introduction of this paper.\ For $f_{4}$, the matrix factorization obtained from the standard method is a pair of $2^{4-1}\times 2^{4-1}=8\times 8$ matrices. The one obtained by the method in [@crisler2016matrix] is a pair of $4\times 4$ matrices, and one can verify that the hypotheses of Theorem [Theorem 2](#thm many matrix factors for f){reference-type="ref" reference="thm many matrix factors for f"} are satisfied for that factorization. From it we therefore obtain 14 matrix factorizations (including the original one) consisting of $4\times 4$ matrices, and together with the $8\times 8$ factorization from the standard method, $f_{4}$ admits at least 15 matrix factorizations. References
{ "id": "2310.03372", "title": "Some properties of n-matrix factorizations of polynomials", "authors": "Yves Fomatati", "categories": "math.RA", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- abstract: | Let $F$ be a compact orientable surface with nonempty boundary other than a disk. Let $L$ be a link in $F \times I$ with a connected weakly prime cellular alternating projection to $F$. We provide simple conditions that determine exactly when $(F \times I) \setminus N(L)$ is hyperbolic. We also consider suitable embeddings of $F \times I$ in an ambient manifold $Y$ with boundary and provide conditions on links $L \subset F \times I$ which guarantee hyperbolicity of $Y \setminus N(L)$. These results provide many examples of hyperbolic links in handlebodies and other manifolds. They also provide many examples of staked links that are hyperbolic. address: - Department of Mathematics, Williams College, Williamstown, MA 01267 - Department of Mathematics, MIT, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 author: - Colin Adams - Joye Chen bibliography: - references.bib nocite: "[@*]" title: Hyperbolicity of Alternating Links in Thickened Surfaces with Boundary --- # Introduction and Statement of Theorems A link $L$ in a compact 3-manifold $Y$ is called *hyperbolic* if the complement $M = Y \setminus N(L)$ admits a complete metric of constant sectional curvature $-1$. Hyperbolicity has proven useful in studying links in $S^3$, giving rise to many powerful invariants, in particular volume. Hence, the problem of determining which links are hyperbolic is of key interest. Thurston proved that the complement of a link in a compact orientable 3-manifold is hyperbolic if it contains no essential properly embedded spheres, disks, tori, or annuli. In [@menasco], Menasco used this to prove that all non-split prime alternating links in $S^3$ which are not 2-braid links are hyperbolic. This result was extended by Adams et al. in [@small18], where they proved that all prime, cellular (all complementary regions of the projection are disks) alternating links in thickened closed surfaces of positive genus are hyperbolic.
In [@hp17], Howie and Purcell obtained a more general result using angled chunks, proving that under certain conditions, a link $L$ in an arbitrary compact, orientable, irreducible 3-manifold with a weakly prime, cellular alternating projection onto a closed projection surface is hyperbolic. A natural next step is to consider projection surfaces with boundary and determine when links which are alternating with respect to these projection surfaces are hyperbolic in manifolds containing these surfaces. Throughout, we denote a projection surface by $F$, which we require to be connected, orientable, and compact with nonempty boundary. We are interested in links $L \subset F \times I$ and the corresponding 3-manifold $M = (F \times I) \setminus N(L)$, where $N(\cdot)$ denotes a closed regular neighborhood and $I = [0, 1]$ denotes a closed interval. Since the manifolds $M$ we are interested in often have higher genus boundary (that is, boundary components with genus at least 2), we would like to have the stronger notion of *tg-hyperbolicity*. **Definition 1**. A compact orientable 3-manifold $N$ is *tg-hyperbolic* if, after capping off all spherical boundary components with 3-balls and removing torus boundaries, the resulting manifold admits a complete hyperbolic metric such that all higher genus boundary components are totally geodesic in the metric. Ultimately, we would like to use hyperbolic volume to study links in various manifolds, and requiring tg-hyperbolicity allows us to associate a well-defined finite volume to a manifold with higher genus boundary. We also require our links to be prime in $F \times I$ and have cellular alternating projections on $F$. **Definition 2**. Let $F$ be a projection surface with boundary, and let $L \subset F \times I$ be a link with projection diagram $\pi(L)$. 
We say $L$ is *prime* in $F \times I$ if every 2-sphere in $F \times I$ which is punctured twice by $L$ bounds, on one side, a 3-ball intersecting $L$ in precisely one unknotted arc. We say $\pi(L)$ is *cellular alternating* on $F$ if it is alternating on $F$ and, after every boundary component of $F$ is capped off with a disk to obtain a closed orientable surface $F_0$ with diagram $\pi(L)$, every complementary region of $F_0 \setminus \pi(L)$ is an open disk. When $\pi(L)$ is a *reduced diagram*, there is an easy way to check whether or not $L$ is prime in $F \times I$. **Definition 3**. Let $\pi(L)$ be a link projection on a surface $F$ with boundary. We say $\pi(L)$ is *reduced* if there is no circle in $F$ that bounds a disk in $F$ and intersects $\pi(L)$ transversely in exactly one (double) point. Note that when a projection is not reduced, we can reduce it by flipping that portion of the projection inside the circle, lowering the number of crossings. **Definition 4**. Let $\pi(L)$ be a reduced link projection on a surface $F$ with boundary. We say a link projection $\pi(L) \subset F$ is *weakly prime* if every disk $D \subset F$ whose boundary $\partial D$ intersects $\pi(L)$ transversely in exactly two points contains no crossings of the projection in its interior. We prove the following extensions of Theorem 2 from [@small18] and Theorem 1(b) from [@menasco] to allow projection surfaces with boundary. **Proposition 5**. *Let $F$ be a projection surface with nonempty boundary, and let $L \subset F \times I$ be a link with a connected, reduced, cellular alternating projection diagram $\pi(L) \subset F \times \{1/2\}$. Then $L$ is prime in $F \times I$ if and only if $\pi(L)$ is weakly prime on $F \times \{1/2\}$.* ## A criterion for hyperbolicity Our first main result characterizes when alternating links on projection surfaces with boundary are hyperbolic. **Theorem 6**.
*Let $F$ be a projection surface with nonempty boundary which is not a disk, and let $L \subset F \times I$ be a link with a connected, reduced, alternating projection diagram $\pi(L) \subset F \times \{1/2\}$ with at least one crossing. Let $M = (F \times I) \setminus N(L)$. Then $M$ is tg-hyperbolic if and only if the following four conditions are satisfied:* 1. *$\pi(L)$ is weakly prime on $F \times \{1/2\}$;* 2. *the interior of every complementary region of $(F \times \{1/2\}) \setminus \pi(L)$ is either an open disk or an open annulus;* 3. *if regions $R_1$ and $R_2$ of $(F \times \{1/2\}) \setminus \pi(L)$ share an edge, then at least one is a disk;* 4. *there is no simple closed curve $\alpha$ in $F \times \{1/2\}$ that intersects $\pi(L)$ exactly in a nonempty collection of crossings, such that for each such crossing, $\alpha$ bisects the crossing and the two opposite complementary regions meeting at that crossing that do not intersect $\alpha$ near that crossing are annuli.* We often refer to $F \times \{1/2\}$ by $F$ when there is no ambiguity. We exclude the case where $F$ is a disk, since this case is covered by [@menasco]. Note that in that case, one must also exclude a cycle of bigons. In our case, a cycle of bigons is excluded by conditions (ii) and (iv). Note that condition (ii) implies the link is cellular alternating. For an alternative formulation, we may start with a closed projection surface $F_0$ with a link $L \subset F_0 \times I$. Choose a set of distinct points $\{x_i\} \in F_0 \setminus \pi(L)$. Then $F := F_0 \setminus \bigcup_{i = 1}^n \mathring{N}(x_i)$ is a surface with boundary and we may consider $L$ as a link in $F \times I$. 
In this setting, Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} can be rephrased to say that if the $\{x_i\}$ are chosen such that: (i) $\pi(L)$ is weakly prime on $F$, (ii) every region of $F_0 \setminus \pi(L)$ contains at most one of the $x_i$, (iii) no two adjacent regions of $F_0 \setminus \pi(L)$ both contain an $x_i$, and (iv) there is no simple closed curve on $F_0$ intersecting $\pi(L)$ exactly in a nonempty set of crossings such that it bisects each crossing and each of the two regions it does not pass through at each such crossing contains an $x_i$, then $M$ is tg-hyperbolic. Any other choice of $\{x_i\}$ ensures $M$ will not be tg-hyperbolic. This formulation fits well with the notion of a staked link, which we discuss in the final section. However, the advantage of the statement of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} is that it avoids reference to an initial closed surface $F_0$. ![Four examples of $F$ (shaded) and $\pi(L)$ satisfying conditions (i), (ii) and (iii) of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. Examples (a) and (b) also satisfy condition (iv) so the corresponding manifolds $(F \times I) \setminus N(L)$ are tg-hyperbolic. Examples (c) and (d) fail condition (iv) and neither is tg-hyperbolic. A problematic simple closed curve appears in red.](figures/thmthickenedexsnewer.pdf){#fig-examples-thickened} The conditions (i)-(iv) of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} are necessary. Indeed, if (i) does not hold, then there is an essential twice-punctured sphere. If (ii) does not hold, then a region with genus greater than 0 produces an essential annulus by taking $\alpha \times I$ for any nontrivial non-boundary-parallel simple closed curve $\alpha$ in the region.
A planar region with more than one boundary component produces an essential disk as in Figure [2](#fig-nonexamples-thickened){reference-type="ref" reference="fig-nonexamples-thickened"}(a). If (iii) does not hold, then there is an essential annulus as in Figure [2](#fig-nonexamples-thickened){reference-type="ref" reference="fig-nonexamples-thickened"}(b). If (iv) does not hold, then we will show in Lemma [Lemma 25](#lemma-failiv-exists-annulus){reference-type="ref" reference="lemma-failiv-exists-annulus"} that there is an essential annulus, an example of which appears in Figure [3](#annulusharingcrossings){reference-type="ref" reference="annulusharingcrossings"}. So our main task will be to prove that these conditions imply tg-hyperbolicity, which we do in Section 3. ![Conditions (ii) and (iii) are necessary for Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}: on the left is a local portion of $\pi(L)$ on $F$ and on the right is the corresponding portion of $M$. In (a) we exhibit an essential disk $\alpha \times I$ when a complementary region has two or more boundary components, and in (b) we exhibit an essential annulus $(\alpha \times I) \setminus N(L)$ when adjacent regions both have boundary.](figures/thmthickenednonexs.pdf){#fig-nonexamples-thickened} ![An essential annulus is present when the red curve does not satisfy condition (iv). ](figures/annulisharingcrossings.pdf){#annulusharingcrossings} Note that if $F$ is a disk, then $M$ is the complement of a link in a 3-ball. Capping off the spherical boundary with a 3-ball yields a link complement in $S^3$. Then Corollary 2 of [@menasco] characterizes when $M$ is hyperbolic. Similarly, if $F$ is closed and of genus at least 1, Theorem 1 of [@small18] characterizes when $M$ is tg-hyperbolic. As an application of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}, note that $F \times I$ is always a handlebody.
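The handlebody claim can be verified by a standard Euler characteristic computation (our sketch; the source states only the resulting genus):

```latex
% F: a compact orientable surface of genus g with k >= 1 boundary components, so
\chi(F \times I) = \chi(F) = 2 - 2g - k.
% A genus-G handlebody H satisfies \chi(H) = 1 - G; since F \times I is a handlebody,
G = 1 - \chi(F) = 2g + k - 1 = 2g + (k - 1).
```

For instance, a once-punctured torus ($g = 1$, $k = 1$) thickens to a genus 2 handlebody, and a planar surface with $k$ boundary circles thickens to a genus $k - 1$ handlebody.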
Specifically, if $F$ is an orientable genus $g$ surface with $k$ boundary components, then $F \times I$ is a genus $2g + (k - 1)$ handlebody. Hence, if $L$ is a link in a handlebody and we can find a way to represent this handlebody as a thickened projection surface $F$ such that $L$ is cellular alternating on $F$, we can determine if $L$ is hyperbolic. In particular, there are examples of links in handlebodies which do not have a closed projection surface satisfying the hypotheses of Theorem 1.1 from [@hp17], but do have a projection surface with boundary satisfying the hypotheses of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. We give an example in Section [\[section-apps\]](#section-apps){reference-type="ref" reference="section-apps"}. Relatively few links in handlebodies with tg-hyperbolic complements were previously known. One was proved to be so in [@adamsnew]. There are a finite number given in [@FMP]. Each of [@Frigerio] and [@simplesmallknots] gives an explicit infinite set, one for each genus. Theorem 1.1 of [@hp17] does generate many examples when each compressing disk on the boundary of the handlebody is crossed at least four times by an appropriate alternating link. Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}, particularly when conjoined with a form of "composition", as described in Theorem 2.1 of [@complinks], further increases the number of such known examples. ## Generalizing to additional ambient manifolds We now consider a compact orientable 3-manifold $Y$ with boundary that contains a properly embedded orientable surface $F$ with boundary that is both incompressible and $\partial$-incompressible and that intersects all essential annuli and tori in $Y$. We show that if an appropriate link is removed from $F \times I \subset Y$, the link complement in $Y$ is tg-hyperbolic. First, we state a similarly easy way to check whether or not $L$ is prime in $Y$.
Let $\pi(L)$ be a projection of a link $L$ to $F$. **Proposition 7**. *Let $F$ be an orientable incompressible $\partial$-incompressible surface with nonempty boundary properly embedded in a compact orientable irreducible $\partial$-irreducible 3-manifold $Y$, and let $L$ be a link in $Y$ with a connected, reduced, cellular alternating projection to $F$. Then $L$ is prime in $Y$ if and only if $\pi(L)$ is weakly prime on $F$.* Then we have the following theorem: **Theorem 8**. *Let $Y$ be an orientable irreducible $\partial$-irreducible 3-manifold with $\partial Y \neq \emptyset$. Let $F$ be a properly embedded orientable incompressible and $\partial$-incompressible connected surface with boundary in $Y$. Suppose all essential tori and annuli that exist in $Y$ intersect $F$. Let $L$ be a link in a regular neighborhood $N = F \times I$ of $F$ that has a connected reduced alternating projection $\pi(L)$ to $F$ with at least one crossing and that satisfies the following conditions:* 1. *$\pi(L)$ is weakly prime on $F$;* 2. *the interior of every complementary region of $F \setminus \pi(L)$ is either an open disk or an open annulus;* *Then $Y \setminus N(L)$ is tg-hyperbolic.* Compare to Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}; in particular, here, the previous condition (iii) that adjacent regions cannot both be annuli and the previous condition (iv) are no longer necessary. In either of these cases, the annulus that is generated when the condition fails does not extend to an annulus in $Y$. Also, note again that condition (ii) implies the link is cellular alternating. The requirement that $F$ be connected is not essential, as the theorem can be repeated for additional surface components and tg-hyperbolicity is maintained. Examples for $Y$ include any finite-volume hyperbolic 3-manifold with cusps and/or totally geodesic boundary of genus at least 2. 
In these cases, there are no essential annuli or tori to worry about and there is always an incompressible $\partial$-incompressible surface that can play the role of $F$ (see for instance Lemma 9.4.6 of [@martelli]). A simple example for $F$ would be a minimal genus Seifert surface in a hyperbolic knot complement $Y$ in $S^3$. In Figure [4](#Theorem1.8examples){reference-type="ref" reference="Theorem1.8examples"}(a), we see an example where $Y$ is the complement of the trefoil knot. There is an essential annulus; however, it intersects the shaded Seifert surface playing the role of $F$. Hence the link complement shown is hyperbolic by Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"}. ![Examples of manifolds that are tg-hyperbolic by Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"}.](figures/altlinks-ex-thm1.8.png){#Theorem1.8examples} Examples also include any surface bundle $F \Tilde{\times} S^1$ over $S^1$ whose fiber is a compact orientable surface $F$ with nonempty boundary other than a disk or annulus. We pick the incompressible $\partial$-incompressible surface to be a fiber. Such a manifold does contain essential annuli and/or tori but they all intersect $F$. See Figure [4](#Theorem1.8examples){reference-type="ref" reference="Theorem1.8examples"}(b) for an example. ## Organization and further directions In Section [\[section-int-curves\]](#section-int-curves){reference-type="ref" reference="section-int-curves"}, we introduce the notion of *bubbles*, first defined by Menasco in [@menasco]. We also adapt various lemmas from [@small18] concerning intersection curves.
In Section [\[section-pf-thm-thickened\]](#section-pf-thm-thickened){reference-type="ref" reference="section-pf-thm-thickened"}, we prove Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} as well as Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}, and in Section [\[section-pf-thm-circle-bundle\]](#section-pf-thm-circle-bundle){reference-type="ref" reference="section-pf-thm-circle-bundle"}, we prove Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"} and Proposition [Proposition 7](#prop-weakly-prime-circ){reference-type="ref" reference="prop-weakly-prime-circ"}. Finally, in Section [\[section-apps\]](#section-apps){reference-type="ref" reference="section-apps"}, we discuss some applications of our results. One motivation for Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} is that it gives a large class of hyperbolic links in handlebodies, which we may naturally view as thickened surfaces-with-boundary. Besides being interesting objects in their own right, they appear naturally in the study of *knotoidal graphs*, as defined in [@generalizedknotoids]. In that paper, the authors construct a map $\phi_\Sigma^D$ from the set of knotoidal graphs to the set of spatial graphs in 3-manifolds. Hence, there is a well-defined notion of hyperbolicity and hyperbolic volume for knotoidal graphs which may be used to distinguish them. In particular, *staked links*, as defined in [@generalizedknotoids], are obtained by adding vertices, called isolated poles, to the complementary regions of a link projection. As in the case of endpoints of knotoids, we do not allow strands of the projection to pass over or under these poles. A subclass of knotoidal graphs, these staked links are mapped by $\phi_\Sigma^D$ into the set of links in handlebodies. 
Then it is interesting to determine which staked links are hyperbolic and compute their volumes. Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} gives the answer to the first question in the case of alternating staked links. Furthermore, we can show that certain staked links which are "close to alternating" in some sense are also hyperbolic, using results from [@complinks]. Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} is used in [@generalizedknotoids] to prove that every link in $S^3$ can be staked to be hyperbolic. In future work, it would be interesting to obtain volume bounds for alternating links in handlebodies (and such results would immediately apply to alternating staked links). Also, while we work only with orientable 3-manifolds, we suspect similar results hold when $F$ is a nonorientable surface with boundary, or when we allow for orientation-reversing self-homeomorphisms of $F$ in the construction of $F \Tilde{\times} S^1$. ## Acknowledgements {#acknowledgements .unnumbered} The research was supported by Williams College and NSF Grant DMS-1947438 supporting the SMALL Undergraduate Research Project. We are grateful to Alexandra Bonat, Maya Chande, Maxwell Jiang, Zachary Romrell, Daniel Santiago, Benjamin Shapiro and Dora Woodruff, who are the other members of the knot theory group of the 2022 SMALL REU program at Williams College, for many helpful discussions and suggestions. [\[section-intro\]]{#section-intro label="section-intro"} # Bubbles and Intersection Curves Let $F$ be a compact, connected, orientable surface with boundary, and let $L \subset F \times I$ be a link with connected, cellular alternating projection $\pi(L) \subset F \times \{1/2\} = F$. By Thurston's Hyperbolization Theorem, proving that $M = (F \times I) \setminus N(L)$ has no essential spheres, tori, disks, or annuli is sufficient to conclude that $M$ is tg-hyperbolic.
Throughout this section, $\Sigma$ is a properly embedded essential surface in $M = (F \times I) \setminus N(L)$ with boundary on $\partial(F \times I)$. Note that when $\Sigma$ is a disk, we may always assume it has boundary on $\partial(F \times I)$. Indeed, suppose $\Sigma$ has boundary on $\partial N(K)$, where $K$ is a component of $L$. Then $\partial (N(K) \cup N(\Sigma))$ is an essential sphere. Hence, if we can eliminate essential spheres, then we have eliminated such disks. As in [@menasco], arrange $L$ to lie in $F = F \times \{1/2\}$ away from the crossings, and at each crossing place a 3-ball $B$ which we call a *bubble*. Arrange the over and understrands so that they lie in the upper hemisphere $\partial B_+$ and lower hemisphere $\partial B_-$ of the bubble, respectively. We may isotope $\Sigma$ to intersect the bubbles in saddle-shaped disks, by first isotoping $\Sigma$ to intersect the vertical axis of each bubble transversely, then pushing $\Sigma$ radially outward from the axis. See Figure [5](#fig-bubble-saddle){reference-type="ref" reference="fig-bubble-saddle"}. ![A surface $\Sigma$ intersecting a bubble in a saddle disk. ](figures/saddledisknew2.pdf){#fig-bubble-saddle} Let $F_+$ (resp. $F_-$) be the surface obtained from $F \times \{1/2\}$ by removing each equatorial disk where $F \times \{1/2\}$ intersects a bubble $B$ and replacing it with the upper hemisphere $\partial B_+$ (resp. lower hemisphere $\partial B_-$). The desired contradictions come from analyzing the intersection curves between $\Sigma$ and the $F_\pm$, which may be closed curves or properly embedded arcs since $\partial \Sigma \subset \partial (F \times I)$; we may perturb $\Sigma$ to intersect the $F_\pm$ transversely. In the remainder of this section, we state and prove various lemmas for $\Sigma \cap F_+$, noting that all results and arguments apply to $\Sigma \cap F_-$ as well. **Lemma 9**.
*There is at least one intersection curve in $\Sigma \cap F_+$.* *Proof.* Suppose for contradiction that $\Sigma \cap F_+ = \emptyset$. Without loss of generality, $\Sigma$ can be isotoped to lie in $F \times (1/2, 1]$, a handlebody, and if $\Sigma$ has boundary, then $\partial \Sigma$ lies in $(F \times \{1\}) \cup (\partial F \times (1/2, 1])$. If $\Sigma$ is a sphere or torus, then it is compressible, and if $\Sigma$ is a disk or annulus, then it is $\partial$-parallel. ◻ We would like to simplify $\Sigma \cap F_+$ as much as possible. Assign to each embedding of $\Sigma$ an ordered pair $(s, i)$, where $s$ is the number of saddle disks in the intersection between $\Sigma$ and the bubbles and $i$ is the number of intersection curves in $\Sigma \cap F_+$. For the remainder of the section, we may assume that our choice of an embedding of $\Sigma$ minimizes $(s, i)$ under lexicographical ordering. To this end, we can show that $\Sigma \cap F_+$ cannot contain any intersection curves which are *trivial* on both $\Sigma$ and $F_+$. **Definition 10**. We say a simple closed curve on a surface is *trivial* if it bounds a disk in the surface. We say a properly embedded arc on a surface is *trivial* if it cuts a disk from the surface. To eliminate curves trivial on both $\Sigma$ and $F_+$, we define the notion of *meridional (in)compressibility*, first introduced in [@menasco]. Eventually, in Section [\[section-pf-thm-thickened\]](#section-pf-thm-thickened){reference-type="ref" reference="section-pf-thm-thickened"}, we show that essential surfaces in $M$ cannot be meridionally incompressible nor meridionally compressible, thus eliminating them. **Definition 11**. Let $Y$ be a compact 3-manifold containing a link $L$, and let $\Sigma$ be a properly embedded surface in $Y \setminus N(L)$. 
We say $\Sigma$ is *meridionally incompressible* if for every disk $D$ in $Y$ such that $D \cap \Sigma = \partial D$ and $D$ is punctured exactly once by $L$, there is another disk $D'$ in $\Sigma \cup N(L)$ such that $\partial D' = \partial D$ and $D'$ is punctured by $L$ exactly once. Otherwise, we say $\Sigma$ is *meridionally compressible* and we call $D$ a *meridional compression disk*. We refer to surgery on $\Sigma$ along $D$ as a *meridional compression*. Throughout this section we take $Y = F \times I$. We remark that if $\Sigma$ is essential in $M$ and has a meridional compression disk $D$, then the (not necessarily connected) surface $\Sigma'$ resulting from meridionally compressing $\Sigma$ along $D$ is also essential. The following lemma tells us that when $\Sigma$ is meridionally incompressible, all of the closed intersection curves are nontrivial in some sense. **Lemma 12**. *Suppose $\Sigma$ is meridionally incompressible and $(s, i)$ is minimized. Then no closed intersection curve in $\Sigma \cap F_+$ can be trivial in both $\Sigma$ and $F_+$.* In order to prove this, we first prove several other lemmas. Note that the upper hemisphere of each bubble $B$ is separated by the overstrand into two sides. We observe that because $\pi(L)$ is an alternating diagram, any intersection curve in $\Sigma \cap F_+$ must alternate between entering bubbles on the left side of the overstrand and entering on the right side of the overstrand. See Figure [6](#fig-alternating-property){reference-type="ref" reference="fig-alternating-property"} for a local picture. The alternating property places strong restrictions on the appearance of the intersection curves in $F_+$, as the next two lemmas demonstrate. ![The alternating property: $\alpha$ alternates sides as it encounters each bubble. ](figures/alternatingprop.pdf){#fig-alternating-property} **Lemma 13**. 
*A closed intersection curve $\alpha$ in $\Sigma \cap F_+$ which is trivial in $F_+$ cannot intersect a bubble twice on the same side.* *Proof.* Let $B$ be a bubble and for convenience, let $B^L$ and $B^R$ denote the two halves of $B$ obtained by slicing along a vertical plane containing the overstrand. Without loss of generality, suppose $\alpha$ meets $B^L$ in more than one arc, and let $D$ be the disk in $F_+$ bounded by $\alpha$. Let $\{\alpha_i\}$ be the set of arcs in $\alpha \cap B^L$ and observe that each corresponds to a distinct saddle disk in $\Sigma \cap B$. Furthermore, we may assume that there exists a pair of arcs, $\alpha_0$ and $\alpha_1$, which are adjacent on $B^L$. Indeed, if there is another intersection curve $\alpha'$ in $\Sigma \cap F_+$ which meets $B^L$ in between $\alpha_0$ and $\alpha_1$, then it does so in at least two arcs between $\alpha_0$ and $\alpha_1$ because $\alpha'$ must be contained in $D$. We can continue finding these nested loops until we are able to choose an adjacent pair of arcs belonging to the same intersection curve. As in [@adams], let $\mu$ be an arc running along $\alpha$ in $\Sigma$ connecting $\alpha_0$ and $\alpha_1$. Then we may isotope $\Sigma$ to remove the two saddles corresponding to $\alpha_0$ and $\alpha_1$ by pushing a regular neighborhood of $\mu$ across the disk $D$, through $B$, and downwards past $F_+$. See Figure [7](#fig-finger-saddle-move){reference-type="ref" reference="fig-finger-saddle-move"}. This reduces $s$ by $2$, contradicting minimality of $(s, i)$. ◻ ![Isotoping $\Sigma$ to remove two saddles. We visualize $\Sigma$ as folding back over itself to intersect $B^L$ twice. Then imagine using your finger to push the fold into the bubble and downwards past $F$, smoothing it out and removing $\alpha_0$ and $\alpha_1$. For simplicity, we only draw $\Sigma$ where it lies above $F$. ](figures/removesaddle.pdf){#fig-finger-saddle-move} **Lemma 14**. 
*Suppose $\Sigma$ is meridionally incompressible and suppose $\alpha$ is a closed intersection curve in $\Sigma \cap F_+$ which is trivial in $F_+$ and bounds a disk containing the overstrand of some bubble $B$. Then $\alpha$ cannot intersect $B$ on both sides.* *Proof.* Suppose for contradiction that a curve $\alpha$ satisfying the hypotheses intersects $B$ on both sides in arcs $\alpha_0$ and $\alpha_1$. Let $D$ be a disk in $F_+$ bounded by $\alpha$. First, observe that $\alpha_0$ and $\alpha_1$ must be connected by an arc $\gamma$ as in Figure [8](#fig-both-sides){reference-type="ref" reference="fig-both-sides"}(a). Otherwise, suppose $\gamma$ appears as in Figure [8](#fig-both-sides){reference-type="ref" reference="fig-both-sides"}(b). Then because $\alpha$ bounds a disk, $\alpha$ cannot escape the enclosed region along a handle and hence must exit the enclosed region by passing over $B$ again. However, whichever side of $B$ it passes over, this contradicts Lemma [Lemma 13](#lemma-same-side-bubble){reference-type="ref" reference="lemma-same-side-bubble"}. ![The two possibilities when an intersection curve intersects the opposite sides of a bubble. We rule out 8(b) by contradiction with Lemma [Lemma 13](#lemma-same-side-bubble){reference-type="ref" reference="lemma-same-side-bubble"}. ](figures/bothsidesbubble4.pdf){#fig-both-sides} So now we can assume we are in a situation as depicted in Figure [8](#fig-both-sides){reference-type="ref" reference="fig-both-sides"}(a). Of the intersection curves which are trivial and intersect $B$ on both sides, we may choose $\alpha$ to intersect closest to the overstrand of $B$ on one of its sides. We claim that the two arcs of $\alpha$ which intersect $B$ closest to the overstrand on either side belong to the same saddle disk of $\Sigma \cap B$. Indeed, suppose $\alpha'$ is another intersection curve such that it meets $B$ between an arc of $\alpha \cap B$ and the overstrand.
Since $\alpha$ bounds a disk, $\alpha'$ must intersect $B$ at least twice on one side of $B$. Since $D$ contains the overstrand of $B$, $\alpha'$ is contained in $D$ and hence is trivial. This contradicts Lemma [Lemma 13](#lemma-same-side-bubble){reference-type="ref" reference="lemma-same-side-bubble"}. Let $\sigma \subset \Sigma \cap B$ be the saddle disk which contains $\alpha_0$ and $\alpha_1$ in its boundary. The remainder of the proof more or less follows the proof of Lemma 7 of [@small18] or Lemma 1 of [@menasco]. Let $\mu$ be an arc in $\Sigma$ running parallel to $\alpha$ which connects $\alpha_0$ and $\alpha_1$. Then there is an arc $\gamma$ in $\sigma$ such that $\mu \cup \gamma$ is a circle in $\Sigma$ which bounds a disk punctured once by $L$ in $M$. See Figure [9](#fig-both-sides-comp-disk){reference-type="ref" reference="fig-both-sides-comp-disk"}. But this yields a meridional compression disk, a contradiction. ![The curve $\mu \cup \gamma$ bounds a meridional compression disk for $\Sigma$. ](figures/meridionalcompressionnew.pdf){#fig-both-sides-comp-disk} ◻ To prove Lemma [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"} and subsequent lemmas, the notion of innermost (closed) curves and outermost arcs is very useful. **Definition 15**. Let $\alpha$ be a closed intersection curve in $\Sigma \cap F_+$. If $\alpha$ is trivial on $\Sigma$ (resp. trivial on $F_+$), we say $\alpha$ is *innermost* on $\Sigma$ (resp. on $F_+$) if $\alpha$ bounds a disk $D$ in $\Sigma$ (resp. $F_+$) which does not contain any intersection curves of $\Sigma \cap F_+$ in its interior. Similarly, suppose $\alpha$ is an intersection arc in $\Sigma \cap F_+$. If $\alpha$ is trivial on $\Sigma$ (resp. $F_+$), we say $\alpha$ is *outermost* on $\Sigma$ (resp. on $F_+$) if $\alpha$, together with an arc of $\partial \Sigma$ (resp. $\partial F_+$), bounds a disk $D$ in $\Sigma$ (resp.
$F_+$) which does not contain any intersection curves of $\Sigma \cap F_+$ in its interior. *Proof of Lemma [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"}.* Let $D \subset F_+$ and $D' \subset \Sigma$ be disks bounded by $\alpha$, and of the (closed) intersection curves contained in $D$, let $\beta$ be an innermost such curve on $F_+$. Then $\beta$ bounds a disk $E \subset D \subset F_+$ and by Lemmas [Lemma 13](#lemma-same-side-bubble){reference-type="ref" reference="lemma-same-side-bubble"} and [Lemma 14](#lemma-both-sides-bubble){reference-type="ref" reference="lemma-both-sides-bubble"}, $\beta$ cannot intersect any bubbles. By iterating this argument, we find that none of the intersection curves contained in $D$ can intersect bubbles and hence, neither can $\alpha$. Then $D \cup D'$ is a 2-sphere which is not punctured by the link. By irreducibility of $F \times I$, $D \cup D'$ bounds a 3-ball and we may isotope $D'$ through the 3-ball, pushing it slightly past $F_+$ to remove $\alpha$ and any intersection curves contained in $D$. This contradicts minimality of $(s, i)$. ◻ We conclude this section by using Lemma [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"} to prove several more lemmas restricting the appearance of the intersection curves when $\Sigma$ is meridionally incompressible. **Lemma 16**. *Suppose $\Sigma$ is meridionally incompressible, and let $\alpha \subset \Sigma \cap F_+$ be an intersection curve. Then $\alpha$ is trivial in $\Sigma$ if and only if one of the following is true:* 1. *$\alpha$ is trivial in $F_+$; or* 2. *$\alpha$ is an arc bounding a $\partial$-compression disk for $F_+$ in $M$.* *Proof.* Suppose $\alpha$ is nontrivial in $\Sigma$ and trivial in $F_+$. When $\Sigma$ is a sphere or disk, this is clearly impossible. 
Of all intersection curves which are nontrivial in $\Sigma$ and trivial in $F_+$, choose $\alpha$ to be an innermost closed curve or an outermost arc on $F_+$. Let $D$ be a disk in $F_+$ bounded by $\alpha$ (and possibly an arc of $\partial F_+$ if $\alpha$ is an arc). Then any intersection curve in $D$ is trivial in both $\Sigma$ and $F_+$, so by Lemma [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"}, we may assume that all of these curves are arcs. In particular, if $\Sigma$ is a torus, or $\Sigma$ is an annulus and $\alpha$ is a closed curve, then all intersection curves contained in $D$ are eliminated, and $\alpha$ bounds a compression disk for $\Sigma$, a contradiction. If $\Sigma$ is an annulus and $\alpha$ is an arc, then all closed intersection curves in $D$ are eliminated. If all intersection curves in $D$ were closed, then $\alpha$ now bounds a compression disk for $\Sigma$. Hence, there is at least one intersection arc contained in $D$. But then the outermost such arc bounds a $\partial$-compression disk for $\Sigma$, contradicting essentiality. Conversely, suppose $\alpha$ is trivial in $\Sigma$ and nontrivial in $F_+$. Fill in $N(L)$ to work in the handlebody $F \times I$. We show that $\Sigma$ cannot intersect $F_+$ in such an $\alpha$ in $F \times I$, much less in $(F \times I) \setminus N(L)$. Of all the curves which are trivial in $\Sigma$ and nontrivial in $F_+$, choose $\alpha$ to be an innermost closed curve or outermost arc on $F_+$, and let $D'$ be a disk in $\Sigma$ bounded by $\alpha$ (and possibly an arc of $\partial \Sigma$ if $\alpha$ is an arc). Every intersection curve contained in $D'$ is trivial in $\Sigma$ and in $F_+$. Of these curves, suppose at least one is closed and choose $\beta$ to be an innermost such curve on $\Sigma$. Let $E$ and $E'$ be disks bounded by $\beta$ in $F_+$ and $\Sigma$ respectively. Push the interior of $E$ slightly off $F_+$ so $E \cup E'$ is a 2-sphere in $M$.
By irreducibility of $F \times I$, $E \cup E'$ bounds a 3-ball. Isotope $\Sigma$ through this 3-ball to remove $\beta$ as before, and by iterating this process, we can remove all closed intersection curves contained in $D'$. If $\alpha$ is a closed curve, then all intersection curves in $D'$ are eliminated, and $\alpha$ bounds a compression disk for $F_+$ in $F \times I$, a contradiction. If $\alpha$ is an arc, it is either trivial in $F_+$ or $D'$ is a $\partial$-compression disk for $F_+$ in $F \times I$, hence in $M$. ◻ Henceforth, we say an intersection curve is trivial if it is trivial on $\Sigma$ and $F_+$. When $\Sigma$ is meridionally incompressible we may assume such curves are arcs by Lemmas [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"} and [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}. **Lemma 17**. *Suppose $\Sigma$ is meridionally incompressible, and let $\alpha \subset \Sigma \cap F_+$ be an intersection arc. Then $\alpha$ intersects at least one bubble.* *Proof.* Suppose for contradiction that $\alpha$ does not intersect any bubble. Then the arc $\alpha$ is properly embedded in a complementary region $R$ of $F_+ \setminus \pi(L)$ which is homeomorphic to an annulus, and furthermore, both endpoints of $\alpha$ lie on the same boundary component of $R$. In particular, $\alpha$ is trivial in $F_+$ and $\Sigma$, using Lemma [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}. Let $D$ be a disk in $R \subset F_+$ bounded by $\alpha$ and an arc $\beta$ of $\partial F_+$, and let $D'$ be a disk in $\Sigma$ bounded by $\alpha$ and an arc $\gamma$ of $\partial \Sigma \subset \partial (F \times I)$. Then $D \cup D'$ is a disk with boundary $\beta \cup \gamma$ on $\partial (F \times I)$. Furthermore, since $\alpha$ is outermost on $\Sigma$, $\gamma$ does not intersect $\partial F_+$ away from its endpoints.
Hence, without loss of generality, $\beta \cup \gamma$ lies in $(F \times \{1\}) \cup (\partial F \times [1/2, 1])$. But $D \cup D'$ cannot be a compression disk for $F \times [1/2, 1]$, so $\beta \cup \gamma$ must bound a disk $E$ in $(F \times \{1\}) \cup (\partial F \times [1/2, 1])$. Then $D \cup D' \cup E$ is a 2-sphere in $F \times [1/2, 1]$ bounding a 3-ball. Isotope $D'$ to $D$ through the 3-ball and slightly past $F_+$, removing $\alpha$ and any other intersection curves contained in $D$. This reduces $i$ without affecting $s$, a contradiction. ◻ [\[section-int-curves\]]{#section-int-curves label="section-int-curves"} # Proof of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} {#proof-of-theorem-thm-thickened} For now, we assume the statement of Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}, so we may regard $L$ as prime in $F \times I$. We prove that $M$ has no essential spheres, tori, disks, or annuli, in that order. To use the lemmas from the previous section, we assume that $\partial \Sigma \subset \partial(F \times I)$; later on we eliminate essential annuli with at least one boundary component on $\partial N(L)$ using different methods. We will be explicit about which of the conditions we use from Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} so that we can also use the appropriate lemmas for the proof of Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"}. **Lemma 18**. *Let $\Sigma$ be a meridionally incompressible essential torus or an essential annulus with both boundary components on $\partial(F \times I)$ such that $L$ satisfies the hypotheses of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} with the possible exception of conditions (iii) and (iv).
Then at least one intersection curve in either $\Sigma \cap F_+$ or $\Sigma \cap F_-$ is trivial on $\Sigma$.* *Proof.* Consider $\Sigma$ with all of the intersection curves in $\Sigma \cap F_+$ and $\Sigma \cap F_-$ projected onto it along with saddles corresponding to quadrilaterals. After shrinking down the saddles to vertices, we obtain a (not necessarily connected) 4-valent graph together with some circles without vertices. These circles correspond to nontrivial closed intersection curves which do not meet any bubbles and lie in an annular region of $F_+ \setminus \pi(L)$. Denote the graph together with the circles without vertices by $\Gamma$; we call $\Gamma$ the *intersection graph* of $\Sigma$ (see Figure [10](#fig-int-graph){reference-type="ref" reference="fig-int-graph"}). Each complementary region of $\Sigma \setminus \Gamma$ lies entirely in one of the components of $M \setminus F$, and the boundary of a region is precisely an intersection curve in either $\Sigma \cap F_+$ or $\Sigma \cap F_-$, depending on which side of $F$ the region lies in. Hence, our goal is to show that at least one region of $\Sigma \setminus \Gamma$ is a disk. ![Possible intersection graphs for $\Sigma$.](figures/intersectiongraph2.pdf){#fig-int-graph} First, suppose there are no circles without vertices in $\Gamma$, so $\Gamma$ is just a 4-valent properly embedded graph on $\Sigma$. Let $V$, $E$, and $R$ be the number of vertices, edges, and regions of $\Sigma \setminus \Gamma$ respectively, where endpoints of intersection arcs on the boundary of $\Sigma$ also count as vertices. The Euler characteristic $\chi(\Sigma)$ is 0 when $\Sigma$ is a torus or annulus. From Lemmas [Lemma 9](#lemma-at-least-one-int-curve){reference-type="ref" reference="lemma-at-least-one-int-curve"} and [Lemma 17](#lemma-int-curve-one-bubble){reference-type="ref" reference="lemma-int-curve-one-bubble"}, there is at least one intersection curve, and every intersection arc meets a bubble and hence contributes vertices; the case in which $\Sigma$ meets no bubbles at all is ruled out at the end of the proof.
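The inequalities displayed below come from the standard Euler characteristic bookkeeping for a graph embedded in a surface; we record the count here for convenience. Writing $R_1, \dots, R_k$ for the complementary regions of $\Sigma \setminus \Gamma$, we have

$$\chi(\Sigma) = V - E + \sum_{j=1}^{k} \chi(R_j), \qquad 2E = \sum_{v \in \Gamma} \deg(v),$$

so if no region is a disk, then $\chi(R_j) \leq 0$ for every $j$, and hence $\chi(\Sigma) \leq V - E$.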
If no regions are disks, then each has nonpositive Euler characteristic contribution. On the torus, $E = 2V$, and we then have $$\chi(\Sigma) \leq V - E = V - 2V = -V < 0.$$ Hence, when there are no circles without vertices, there is a disk region of $\Sigma \setminus \Gamma$, and its boundary will be an intersection curve in $\Sigma \cap F_+$ or $\Sigma \cap F_-$ which is trivial on $\Sigma$. On the annulus, let $V_I$ be the number of interior vertices corresponding to saddles and $V_B$ be the number of boundary vertices. Then $$\chi(\Sigma) \leq V_I + V_B - \frac{4 V_I + 3 V_B}{2} = -V_I - \frac{V_B}{2} < 0.$$ Again, this forces there to be a region with positive Euler characteristic, meaning there is a disk region. Now suppose that $\Gamma$ has at least one circle without vertices and at least one vertex. Note that the circles without vertices must be parallel to one another. Hence, the circles cut $\Sigma$ into annuli, and at least one annulus $A$ contains a nonempty 4-valent graph $\Gamma'$. Then the same Euler characteristic argument shows that at least one region of $A \setminus \Gamma'$ is a disk. It remains to show that $\Sigma$ must meet the bubbles in at least one saddle. Suppose $\Sigma$ does not meet any bubbles for contradiction, and consider the projection of the intersection curves in $\Sigma \cap F$ onto $F$. Then the intersection curves must be contained in one annular region of $F \setminus \pi(L)$ and encircle the same boundary component of $F$. First consider the case where $\Sigma$ is a torus. Then there are an even number of curves since $F$ is separating in $M$ (in particular, there are at least two). Take a pair of intersection curves $C_1$ and $C_2$ which are adjacent in $\Sigma$ (note they might not be adjacent in $F_+$). They bound an annulus $A$ in $\Sigma$ and an annulus $A'$ in $F_+$. 
After pushing $A'$ slightly off $F_+$, we find a torus $T = A \cup A'$ contained in one component of $M \setminus F$, both components of which are homeomorphic to a handlebody $F \times I$. Then observe that $T$ must have a compressing disk, as $\pi_1(T) \cong \mathbb{Z}^2$ cannot inject into the free group $\pi_1(F \times I) \cong \mathbb{Z}^{\ast g}$. Compress $T$ along some compressing disk to obtain a 2-sphere, which bounds a 3-ball. Gluing back in the compressing disk yields a solid torus bounded by $T$. It cannot yield a knot exterior because $A'$ lies in the boundary of the handlebody. Note that the boundary curves of the two annuli must intersect the boundary of the compressing disk once. So we can isotope $\Sigma$ through this solid torus to remove intersection curves $C_1$ and $C_2$, contradicting minimality of $(s, i)$. If $\Sigma$ is an annulus and there are an even number of intersection curves, the same argument goes through. Otherwise, we may assume $\Sigma$ intersects $F_+$ in a single intersection curve. Then we claim $\Sigma$ is $\partial$-parallel. Let $\gamma_0$ and $\gamma_1$ denote the boundary circles of $\Sigma$: since $\Sigma \cap F_+$ contains a single curve $\gamma_{\frac{1}{2}}$, we know that $\gamma_1$ must be in $(\partial F \times (\frac{1}{2}, 1]) \cup (F \times \{1\})$ and $\gamma_0$ must be in $(\partial F \times [0, \frac{1}{2})) \cup (F \times \{0\})$ (note that both of these spaces are homeomorphic to $F$). Viewed on $F$, the $\gamma_i$ for $i = 0, \frac{1}{2}, 1$ are all homotopic to the same component of $\partial F$. This means that $\partial \Sigma$ cuts off an annulus $A$ from $\partial (F \times I)$. As above, $\Sigma$ must be parallel to $A$, making it $\partial$-parallel, contradicting essentiality. ◻ Now we are ready to eliminate essential spheres and tori. **Proposition 19**.
*Let $M = (F \times I) \setminus N(L)$ as in Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} with the possible exception of conditions (iii) and (iv). Then $M$ has no essential spheres or tori.* *Proof.* Suppose $\Sigma$ is meridionally compressible. If $\Sigma$ is a 2-sphere, then a meridional compression yields two 2-spheres in $F \times I$, each punctured by $L$ exactly once, which cannot occur. If $\Sigma$ is a torus, then a meridional compression yields a 2-sphere which is twice-punctured by the link. By Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}, $L$ is prime in $F \times I$. Then $\Sigma$ must be $\partial$-parallel, a contradiction. Hence, $\Sigma$ must be meridionally incompressible. By Lemmas [Lemma 18](#lemma-torus-annulus-triv-curve){reference-type="ref" reference="lemma-torus-annulus-triv-curve"} and [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}, we may assume without loss of generality that $\Sigma \cap F_+$ contains some intersection curve $\alpha$ which is trivial in $\Sigma$. By Lemma [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}, $\alpha$ is also trivial in $F_+$. Let $D$ be a disk in $F_+$ bounded by $\alpha$; by Lemma [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"}, $D$ must contain at least one other intersection curve, so we may choose $\beta$ to be an innermost intersection curve contained in $D$. Let $D'$ be a disk in $F_+$ bounded by $\beta$. Since $\beta$ is trivial and closed, it intersects at least two bubbles (counted with multiplicity). Because of the alternating property, $D'$ contains the overstrand of some bubble $B$ intersected by $\beta$, and since $D'$ contains no other intersection curve, we conclude that $\beta$ must intersect $B$ on both sides (see Figure [11](#fig-no-essential-S2T2){reference-type="ref" reference="fig-no-essential-S2T2"}).
But this contradicts Lemma [Lemma 14](#lemma-both-sides-bubble){reference-type="ref" reference="lemma-both-sides-bubble"}. ![If $\Sigma$ is a meridionally incompressible essential sphere or torus, then there must be some curve $\beta$ bounding $D' \subset F_+$ appearing in this configuration at some bubble.](figures/noessentialS2T2.pdf "fig:"){#fig-no-essential-S2T2} ◻ As previously remarked, this also eliminates the possibility of an essential disk whose boundary lies on $\partial N(K)$ for some component $K$ of $L$. To eliminate essential disks in general, we introduce the useful notion of *forks* (which are similar to forks as defined in [@adams-forks], but here they only have two prongs). Consider an essential disk $\Sigma$ with the intersection curves in $\Sigma \cap F_+$ projected onto it. After shrinking down the saddles to vertices, we obtain a 4-valent intersection graph $\Gamma$ on $\Sigma$. **Definition 20**. A *fork* of $\Gamma$ is a vertex with at least two non-opposite edges ending on $\partial \Sigma$. See Figure [12](#fig-fork-in-M){reference-type="ref" reference="fig-fork-in-M"}. Note that the endpoints of the two edges need not be adjacent on the boundary of $\Sigma$. ![A portion of the intersection graph on $\Sigma$ corresponding to a fork.](figures/fork3.pdf){#fig-fork-in-M} **Proposition 21**. *Let $M = (F \times I) \setminus N(L)$ as in Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} with the possible exception of condition (iv). Then $M$ has no essential disks.* *Proof.* First, note that $\Sigma$ cannot be meridionally compressible, as a meridional compression would generate a disk with boundary on a meridian of $N(L)$, that is, a 2-sphere once-punctured by the link, which cannot occur. Hence, $\Sigma$ must be meridionally incompressible, and by Lemma [Lemma 17](#lemma-int-curve-one-bubble){reference-type="ref" reference="lemma-int-curve-one-bubble"}, the intersection graph $\Gamma$ contains at least one vertex.
As in [@adams-forks], we can show there is at least one fork in $\Gamma$ as follows. Because, by Lemmas [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"} and [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}, there are no closed intersection curves, every complementary region on the disk must intersect the boundary of the disk. If we discard all edges that touch the boundary to obtain a new graph $\Gamma'$ on $\Sigma$, we are left with a collection of trees. If a tree has two or more vertices, then it must have two or more leaves, each of which has three edges on $\Gamma$ that end on the boundary. So we have a fork. If a tree is only a single vertex, then there are four edges leaving it that end on the boundary of the disk, and we again have a fork. Let $\alpha$ and $\beta$ denote two adjacent edges of the fork. Note that $\alpha$ and $\beta$ cannot have endpoints on the same component of $\partial F$; otherwise, $\alpha \cup \beta$ together with an arc in $\partial F$ and an arc of the saddle bound a disk in $F_+$ which has exactly one intersection point with $\pi(L)$, which is impossible. Since $\alpha$ and $\beta$ do not meet other bubbles, they lie in adjacent complementary regions. But this implies that two adjacent regions are homeomorphic to annuli, contradicting condition (iii) in the statement of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. ◻ We remark that condition (iii) (or perhaps some weaker variation thereof) is necessary to show that no disks exist: for example, if all four complementary regions meeting at a crossing bubble are annuli, then there is an essential disk $D$ as pictured in Figure [13](#fig-essential-disk-wo-ciii){reference-type="ref" reference="fig-essential-disk-wo-ciii"}.
![There is an essential disk if we allow all four complementary regions meeting at a crossing bubble to be annuli: it meets exactly one bubble in exactly one saddle disk.](figures/fourannuli.pdf){#fig-essential-disk-wo-ciii} It remains to show that there are no essential annuli. Suppose $\Sigma$ is an essential annulus. There are three cases to consider: 1. $\Sigma$ has both boundary components on $\partial (F \times I)$; 2. $\Sigma$ has one boundary component on $\partial (F \times I)$ and one boundary component on $\partial N(L)$; 3. $\Sigma$ has both boundary components on $\partial N(L)$. **Lemma 22**. *Let $M = (F \times I) \setminus N(L)$ as in Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} with the possible exception of conditions (iii) and (iv). Then if $M$ has an essential annulus of type (2) or type (3), it has an essential torus or an essential annulus of type (1).* *Proof.* Let $A$ be an essential annulus of type (3), so both of its boundaries are on $\partial N(L)$. If there exists a single component $K$ such that $\partial A \subset \partial N(K)$, then let $T_1$ and $T_2$ be the two tori that form the boundary of $N(A \cup N(K))$. The boundaries of $A$ cannot be meridians on $\partial N(K)$, as that would contradict primeness. The annulus $A$ cuts $(F \times I) \setminus N(K)$ into two components, one of which contains $\partial (F \times I)$. Choose $T_1$ to be the torus that separates $K$ from $\partial (F \times I)$. We prove that $T_1$ is either essential or there is an essential type (2) annulus. If $T_1$ is boundary-parallel to the side containing $K$, then it must be boundary-parallel to $\partial N(K)$. But then $A$ would have been boundary-parallel to $\partial N(K)$, a contradiction. If $T_1$ is boundary-parallel to the side not containing $K$, then the manifold outside $T_1$ is a product $T_1 \times I$ with $T_1 \times \{0\} = T_1$ and $T_1 \times \{1\} = \partial (F \times I)$. Hence $\partial (F \times I)$ is a torus, so $F$ must be an annulus.
Since $T_1$ can be isotoped to $\partial (F \times I)$, there is an essential annulus that has one boundary on $\partial N(K)$ and the other on $\partial (F \times I)$. This is an essential type (2) annulus. The torus $T_1$ is incompressible to the side containing $K$, as $K$ prevents a compressing disk other than one with boundary parallel to that of $\partial A$, and that cannot exist by incompressibility of $A$. If there is a compressing disk to the other side of $T_1$, compressing along it yields a sphere, which must bound a ball to that side, leaving no place for $\partial (F \times I)$. Similar arguments work in the case that the two boundaries of $A$ are on the boundaries of regular neighborhoods of two different components of $L$. So we can restrict to the case of an essential type (2) annulus. Suppose that $A$ is such an annulus with one boundary on $\partial (F \times I)$ and one on $\partial N(K)$ for some component $K$ of $L$. Let $A'$ be the boundary of $N(A \cup K)$; this is a type (1) annulus. That it is incompressible is immediate from the fact that $A$ is incompressible. As in the previous cases, it cannot be boundary-parallel to the side containing $K$, and it cannot be boundary-parallel to the other side, because of the presence of $\partial (F \times I)$ to that side. Thus, whenever there is an essential type (2) or (3) annulus present, there is an essential torus or an essential type (1) annulus present. ◻ We have already eliminated essential tori in Proposition [Proposition 19](#prop-no-essential-spheres-tori){reference-type="ref" reference="prop-no-essential-spheres-tori"}. So, we now eliminate essential type (1) annuli. But, for this case, we need all four of the conditions from Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. **Lemma 23**. *Let $M = (F \times I) \setminus N(L)$ as in Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}.
Then $M$ has no essential type (1) annuli.* *Proof.* Suppose $\Sigma$ is an essential type (1) annulus in $M$, and suppose it is meridionally compressible. Then a meridional compression of $\Sigma$ yields two disks with boundary on $\partial (F \times I)$, each punctured once by the link. Let $\Sigma'$ be one of these once-punctured disks; similar to the proof of Proposition [Proposition 21](#prop-no-essential-disks){reference-type="ref" reference="prop-no-essential-disks"}, consider the projection of the intersection curves in $\Sigma' \cap F_\pm$ and saddle disks to $\Sigma'$, which is homeomorphic to an annulus. We call the boundary component in $\partial(F \times I)$ the *outer boundary* and we call the boundary component in $\partial N(L)$ the *inner boundary*. After shrinking down the saddle disks, we obtain a 4-valent graph, possibly with some circles without vertices. We denote their union by $\Gamma$, which we call the intersection graph on $\Sigma'$. Since $\Sigma$ is punctured exactly once by $L$, we may assume it is punctured away from the crossings. In particular, $L$ lies in $F$ near where it punctures $\Sigma$. Furthermore, $\Sigma'$ was obtained via a meridional compression on $\Sigma$, so the core curve of $\Sigma'$ is isotopic in $M$ to a meridian $\mu$ of $L$. Hence, there are exactly two intersection arcs of $\Gamma$ with an endpoint on the inner boundary; all other intersection arcs must have both endpoints on the outer boundary, and in particular, they are trivial on $\Sigma'$. Observe that we may assume the intersection graph $\Gamma$ has no circles without vertices. Suppose $\gamma$ is such a circle in $\Sigma' \cap F_\pm$. Then it is isotopic via $\Sigma'$ to $\mu$. We know $\gamma$ cannot be trivial in $F_\pm$; otherwise $\mu$ would be trivial in $M$. But then $\gamma$ represents a nontrivial element of $\pi_1(F \times I)$, while the meridian $\mu$ is trivial in $\pi_1(F \times I)$, so $\gamma$ cannot be homotopic to $\mu$ in $M$.
In particular, this shows that $\Gamma$ is a genuine 4-valent graph on $\Sigma'$, and every complementary region of $\Sigma' \setminus \Gamma$ intersects $\partial \Sigma'$ in its boundary. Next, we determine when $\Gamma$ contains a fork. As in the proof of Proposition [Proposition 21](#prop-no-essential-disks){reference-type="ref" reference="prop-no-essential-disks"} and the following remark, the existence of a fork with both edges ending on the outer boundary immediately gives us the desired contradiction by condition (iii). Consider the subgraph $\Gamma'$ obtained by throwing away any edges of $\Gamma$ which meet either boundary component of $\Sigma'$. If there are no cycles in $\Gamma'$, then $\Gamma'$ is a collection of trees. If one of those trees has two or more vertices, then it has at least two leaves. Since every vertex of $\Gamma$ is 4-valent, each of these leaf vertices has at least three edges ending on $\partial \Sigma'$. Since there are exactly two edges in $\Gamma$ with an endpoint on the inner boundary, we know there is at least one fork with both of its edges ending on the outer boundary. If a tree consists of exactly one vertex, then all four of its edges have an endpoint on $\partial \Sigma'$, and in particular, exactly two (adjacent) edges have an endpoint on the outer boundary. This is a fork. Finally, if there are no vertices, then the two arcs with an endpoint each on the inner boundary have their other endpoint on the outer boundary. Denote these two arcs by $\alpha$ and $\beta$. On the projection surface $F$, $\alpha \cup \beta$ appears as an arc which has both endpoints on $\partial F$ and intersects $\pi(L)$ transversely in exactly one point (corresponding to the puncture in $\Sigma$). See Figure [14](#fig-mini-fork){reference-type="ref" reference="fig-mini-fork"} for a diagram of $\Gamma$ on $\Sigma'$ and the corresponding picture on $F$.
But $\alpha$ and $\beta$ lie in adjacent regions of $F \setminus \pi(L)$, both of which contain boundary in $\partial F$. This contradicts condition (iii) from the statement of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. If the subgraph $\Gamma'$ does contain a cycle, then because there are no trivial cycles on $\Gamma'$, the only possibility is a single cycle parallel to the boundaries of $\Sigma'$. Because there cannot be any forks, the cycle must intersect two saddles so that there are two edges ending on the inner boundary. The opposite edge of each saddle ends on the outer boundary of $\Sigma'$. The complement of this graph is four disks, two of which live above $F_+$ and two of which live below $F_-$. Let $D_1^+$ denote the disk that lives above $F_+$ and that has a part of its boundary on the inner boundary of $\Sigma'$. We can realize $\partial D_1^+$ as a simple closed curve $\mu'$ on $F_+$ that is crossed once by the link corresponding to the inner boundary and twice more by the link at the saddles. Because $\partial D_1^+$ bounds a disk in $F \times I$ and $F$ is incompressible in $F \times I$, $\mu'$ must be trivial on $F$. But we cannot have three strands of the link entering a disk region on $F$, a contradiction. ![An intersection graph on $\Sigma'$ without vertices and intersection arcs $\alpha$ and $\beta$ as they appear on $F$.](figures/notwoarcs.pdf){#fig-mini-fork} Thus, $\Sigma$ must be meridionally incompressible. By Lemma [Lemma 9](#lemma-at-least-one-int-curve){reference-type="ref" reference="lemma-at-least-one-int-curve"}, without loss of generality $\Sigma \cap F_+$ contains at least one intersection curve $\alpha'$. First, suppose that $\alpha'$ is trivial.
If $\alpha'$ is closed, then by Lemma [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}, it is trivial on $F_+$ as well, and such curves are ruled out by Lemma [Lemma 12](#lemma-removing-triv-curve){reference-type="ref" reference="lemma-removing-triv-curve"}. Hence $\alpha'$ must be an arc (which bounds a disk $D$ in $\Sigma$). Consider the intersection graph $\Gamma'$ on $\Sigma$. By Lemma [Lemma 17](#lemma-int-curve-one-bubble){reference-type="ref" reference="lemma-int-curve-one-bubble"}, this graph has at least one vertex. If $\Gamma'$ has a fork (where the endpoints of its edges can lie on either boundary component of $\Sigma$), then we are done. Suppose that the intersection graph contains a cycle. As the cycle cannot be trivial, it must wrap once around the core curve of the annulus and it must have vertices. To avoid trivial cycles and to avoid forks, the only possibility is that the remaining two edges coming out of each vertex must go directly out to the boundary, one to each of the separate boundaries of $\Sigma$. Thus, we obtain an intersection graph as in Figure [15](#circleinannulus){reference-type="ref" reference="circleinannulus"}, but possibly with a different even number of vertices. Note that the intersection graph decomposes $\Sigma$ into disks, and any two disks that share an edge in the graph must appear on opposite sides of $F$. This forces the number of vertices to be even. ![An example of a nontrivial cycle in the intersection graph on $\Sigma$. There must be an even number of vertices on the cycle.](figures/circleinannulus2.pdf){#circleinannulus} We now consider such annuli and show that they contradict condition (iv) of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. Suppose there is an annulus with such an intersection graph. Then we first consider how that annulus $\Sigma$ can sit in $F \times I$.
Note that if all of the saddles that occur on $\Sigma$ appear in distinct bubbles, then the core curve in the intersection graph yields a simple closed curve $\beta$ on $F$ that passes through the corresponding bubbles, bisecting each crossing. The fact that the remaining two edges coming out of each saddle must go directly to a boundary component of $F$ implies that the two complementary regions on $F$ at such a crossing that do not intersect $\beta$ in their interiors must contain these edges, and therefore must be annular regions so that there are boundary components of $F$ for these edges to end on. This is exactly the situation that condition (iv) eliminates. Note that in this case, the number of crossings intersected by $\beta$ is the number of saddles in $\Sigma$, which is even. Suppose now that there are some saddles from $\Sigma$ that occur in the same bubble. Then the resultant curve $\beta$ on $F$ will pass through a given crossing more than once. However, at any such crossing, $\beta$ must only pass through the same pair of opposite complementary regions, as otherwise all four complementary regions meeting at the crossing would have to be annuli to accommodate the branches on the intersection graph, and this contradicts condition (iii). The curve $\beta$ must cross itself transversely as it passes through a crossing, because it is passing along the diagonal of saddles each time it passes through, as in Figure [16](#selfintersectingcurve){reference-type="ref" reference="selfintersectingcurve"}. Note that the two opposite complementary regions through which $\beta$ does not pass must still be annuli to accommodate the branches coming out of the saddles.
![When the curve $\beta$ passes through a crossing more than once, it must intersect the same pair of opposite complementary regions and it must cross itself transversely in the process.](figures/selfintersectingcurve.pdf){#selfintersectingcurve} Suppose there are bubbles with more than two saddles. Then we can surger $\beta$ so that it passes through such a crossing once if the original passed through an odd number of times and twice if the original passed through an even number of times, as in Figure [17](#surgercurve){reference-type="ref" reference="surgercurve"}. Call the resulting set of curves $\Phi$. If there is at least one component $\phi$ of $\Phi$ with at least one crossing that it passes through once, then surger all remaining crossings that are passed through twice by $\phi$ so as not to pass through those crossings. Then take a simple closed curve component $\phi'$ that results and that passes through at least one crossing. Such a $\phi'$ must exist, and it contradicts condition (iv). If there is no component of $\Phi$ that passes through a bubble once, so each component passes through every bubble it intersects twice, then choose any component $\nu$ that passes through bubbles. Its projection on $F$ is the projection of a knot. Because two of the opposite regions at a crossing of $\nu$ must touch regions of the link projection that are annuli, and the other two regions cannot touch regions of the link projection that are annuli by condition (iii), the projection of $\nu$ must be checkerboard colorable, with the annular regions of the link projection touching a crossing all occurring in the shaded regions of the projection of $\nu$. However, we can then surger the crossings of $\nu$ around the crossings on the boundary of a single shaded region to obtain a loop that passes through each crossing of the link projection that it meets once, with annular regions to either side at each crossing. This loop contradicts condition (iv).
![Surgering the curve at crossings to ensure that it passes through each crossing either once or twice.](figures/surgercurve.pdf "fig:"){#surgercurve} ◻ **Proposition 24**. *The manifold $M$ contains no essential annuli.* *Proof.* This follows from Proposition [Proposition 19](#prop-no-essential-spheres-tori){reference-type="ref" reference="prop-no-essential-spheres-tori"} and Lemmas [Lemma 22](#lemma-type2and3annuliimplytype1){reference-type="ref" reference="lemma-type2and3annuliimplytype1"} and [Lemma 23](#lemma-no-type1-annuli){reference-type="ref" reference="lemma-no-type1-annuli"}. ◻ In order to prove Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} in both directions, we need the following lemma. **Lemma 25**. *If there exists a simple closed curve $\alpha$ in $F$ that intersects $\pi(L)$ exactly in a nonempty collection of crossings, such that for each crossing, $\alpha$ bisects the crossing and the two opposite complementary regions meeting at that crossing that do not intersect $\alpha$ near that crossing are annuli, then there exists an essential annulus in $M = (F \times I) \setminus N(L)$.* In Figure [18](#annulusthatfailsiv){reference-type="ref" reference="annulusthatfailsiv"} (a), we see a particular example of such an annulus. Green curves represent the link, red curves represent the intersection arcs and saddles, and purple curves represent the boundary curves of the annulus. ![The intersection curves for an annulus that is generated when condition (iv) does not hold.](figures/annulusfromivfail.pdf){#annulusthatfailsiv} *Proof.* First assume that $\alpha$ passes through an even number $n$ of crossings. We construct an annulus $\Sigma$ as in Figure [15](#circleinannulus){reference-type="ref" reference="circleinannulus"} that exists in the manifold. The nontrivial cycle in the intersection graph of $\Sigma$ becomes $\alpha$.
Each of the branch edges going out to $\partial F$ on $\Sigma$ exist on $F$ since at each crossing that $\alpha$ passes through, the opposite regions that it does not pass through are annuli. Thus, we can create the corresponding intersection graph on $F$. We describe how to insert the disk $D_1^+$ in $(F \times I) \setminus N(L)$, but the same description works to insert any of the disks $D_i^{\pm}$. The disk $D_1^+$ has boundary on $F = F \times \{1/2\}$ that runs along an arc in the intersection graph starting at $x_1$ on $\partial F$, followed by an arc on the boundary of a saddle, followed by another arc in the intersection graph along $\alpha$, followed by an arc on the boundary of another saddle, followed by an arc in the intersection graph that ends at $x_2$ on $\partial F$ on the opposite side of $\alpha$ from $x_1$. (See Figure [18](#annulusthatfailsiv){reference-type="ref" reference="annulusthatfailsiv"}(b).) Call this longer arc $\lambda_0.$ The remaining portion of $\partial D_1^+$ lies on the boundary of the handlebody. We describe it as follows. Let $\lambda_1 = \{x_1\} \times [1/2,1]$ and $\lambda_2 = \{x_2\} \times [1/2,1]$. Let $\lambda_3$ be a copy of the arc $\lambda _0$ from $F$ but appearing on $F \times \{1\}$. Then $\partial D_1^+ = \lambda_0 \cup \lambda_1 \cup \lambda_2 \cup \lambda_3$. That this curve bounds a disk that avoids $L$ is immediate from the construction. That the set of disks inserted in this manner together with the corresponding saddles form a properly embedded annulus is also immediate from the construction and the way the various disks share certain edges. It remains to prove that $\Sigma$ is essential. Suppose $\Sigma$ is compressible. Then $\alpha$ must bound a disk in $F \times I$. But by assumption, since branch arcs to either side of $\alpha$ must end on boundary components of $F$, $\alpha$ cannot be trivial on $F$. Hence, it cannot be trivial in $F \times I$. Suppose now that $\Sigma$ is boundary compressible. 
Then if we take a path through the intersection graph that corresponds to two opposite edges at a saddle that each go out to the boundary, together with a diagonal of the saddle, and call that arc $\kappa$, then $\kappa$ together with an arc $\kappa'$ on the boundary of $F \times I$ bound a disk $D$ in $M$. Consider the intersection graph of $D$ with $F$. If $D$ only intersects $F$ along $\kappa$, then $\partial F$ wraps once around $L$ when $\kappa$ passes under $L$ on the saddle, forcing $D$ to be punctured. Otherwise, there are arcs of intersection that go out to the boundary of $D$. The result is an intersection graph on $D$ much like the intersection graph we had on $D$ when we were proving there were no essential disks in $M$. By the same argument given there using forks, we prove no such disk can exist. Finally, we must consider the case that $\alpha$ passes through an odd number of bubbles. In this case, the same construction yields a properly embedded Möbius band. The boundary of a regular neighborhood of this Möbius band is a properly embedded annulus $\Sigma$. It is both incompressible and boundary incompressible to the side that contains the Möbius band. A similar argument to the previous case demonstrates that it is also incompressible and boundary incompressible to the other side. ◻ This allows us to complete the proof of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}, assuming Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}. *Proof of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}.* By Thurston's Hyperbolization Theorem, $M = (F \times I) \setminus N(L)$ is tg-hyperbolic if and only if there are no essential spheres, tori, disks, or annuli. 
Then the theorem follows in one direction from Propositions [Proposition 19](#prop-no-essential-spheres-tori){reference-type="ref" reference="prop-no-essential-spheres-tori"} and [Proposition 21](#prop-no-essential-disks){reference-type="ref" reference="prop-no-essential-disks"} and Lemmas [Lemma 22](#lemma-type2and3annuliimplytype1){reference-type="ref" reference="lemma-type2and3annuliimplytype1"} and [Lemma 23](#lemma-no-type1-annuli){reference-type="ref" reference="lemma-no-type1-annuli"}, and in the other direction by the comments subsequent to the theorem statement and Lemma [Lemma 25](#lemma-failiv-exists-annulus){reference-type="ref" reference="lemma-failiv-exists-annulus"}. ◻ We conclude the section with a proof of Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}. *Proof of Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}.* First, suppose that $\pi(L)$ is not weakly prime, and let $\gamma$ be a circle intersecting $\pi(L)$ transversely in exactly two points such that it bounds a disk $D$ and $D$ contains at least two crossings (note this uses that $\pi(L)$ is reduced). Let $B$ be a regular neighborhood of $D$ which is a 3-ball. Its boundary $\partial B$ is a 2-sphere punctured twice by $L$. Furthermore, $D$ contains at least two (non-reducible) crossings and $\pi(L)$ is alternating; hence, $B$ intersects $L$ in a nontrivial arc and $L$ is not prime. It remains to show that if $L$ is not prime, then $\pi(L)$ is not weakly prime. Let $\Sigma \subset F \times I$ be an essential sphere which is punctured twice by $L$. Note that we may assume $\Sigma$ is meridionally incompressible. Indeed, a meridional compression yields two essential spheres, each of which are punctured twice by $L$. Then iteratively perform these compressions until we obtain a meridionally incompressible essential twice-punctured sphere. 
Let $\Tilde{F}$ be the closed orientable surface of genus $g$ obtained by capping off each circle boundary of $F$ with a disk. Then we may regard $L \subset F \times I$ as a link in $\Tilde{F} \times I$ with projection diagram $\pi(L)$ onto $\Tilde{F} \times \{1/2\}$. We refer to this projection surface by $\Tilde{F}$ when there is no ambiguity. Analogously, we may define $\Tilde{F}_+$ (resp. $\Tilde{F}_-$) to be the surfaces obtained from $\Tilde{F}$ by removing the disks in its intersection with the bubbles and replacing them with the upper (resp. lower) hemispheres of the bubbles. Also, we view $\Sigma$ as a sphere in $\Tilde{F} \times I$ which is twice-punctured by $L$ via the inclusion map $F \times I \hookrightarrow \Tilde{F} \times I$. Note that $\Sigma$ is still essential in $\Tilde{F} \times I$. In Lemma 13 of [@small18], the authors show that when $\Sigma$ is a meridionally incompressible essential sphere which is punctured twice by $L$ and the embedding of $\Sigma$ minimizes $(s, i)$, then there is exactly one intersection curve $\alpha$ in $\Sigma \cap \Tilde{F}_+$ and it intersects $L$ at least twice. This is true even when $\pi(L)$ is not reduced when viewed on $\Tilde{F}_+$. Moreover, the authors of [@small18] show that $\alpha$ must be trivial on $\Sigma$ and $\Tilde{F}_+$, and it does not intersect any bubbles. Hence, $\alpha$ is a circle bounding a disk $D$ in $\Tilde{F}_+$ which intersects $\pi(L)$ exactly twice. Since $\Sigma$ is essential, $D$ contains at least one crossing. If $\pi(L)$ is reduced when viewed on $\Tilde{F}$, then $\pi(L)$ is not weakly prime on $\Tilde{F}$. Observe that $\alpha$ must bound a disk in $F_+$ as well; otherwise, we find a compression disk for $F \times \{1/2\}$ in $F \times [1/2, 1]$. Hence, $\pi(L)$ is not weakly prime on $F$. If $\pi(L)$ is not reduced, suppose $\alpha$ may be isotoped slightly in $\Tilde{F}_+$ so it intersects $\pi(L)$ exactly once at a double point. 
Consider the regions of $\Tilde{F} \setminus \pi(L)$ which are contained in $D$ and do not intersect $\alpha$, and observe that since $\pi(L)$ is reduced on $F$, at least one of these regions is obtained by capping off a boundary component of the corresponding region in $F \setminus \pi(L)$. But then we can find a compression disk for $F \times \{1/2\}$ in $F \times [1/2, 1]$. Hence, the crossings contained in $D$ are unaffected by reducing $\pi(L)$ in $\Tilde{F}_+$, and by the same argument as before, $\pi(L)$ is not weakly prime in $F$. ◻ [\[section-pf-thm-thickened\]]{#section-pf-thm-thickened label="section-pf-thm-thickened"} # Proof of Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"} {#proof-of-theorem-surfaceinmanifold} *Proof of Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"}.* Note that $F$ is neither a disk by $\partial$-irreducibility of $Y$ nor an annulus since if it were, we could push a copy off itself and we would have an essential annulus in $Y$ that did not intersect $F$. Suppose $\Sigma$ is a properly embedded essential disk or sphere in $Y \setminus N(L)$. If $\Sigma$ does not intersect $F \times I$, its existence contradicts the $\partial$-irreducibility or irreducibility of $Y$. If $\Sigma$ is entirely contained in $(F \times I) \setminus N(L)$ then if it is a sphere, it must bound a ball in $(F \times I) \setminus N(L)$ by Proposition [Proposition 19](#prop-no-essential-spheres-tori){reference-type="ref" reference="prop-no-essential-spheres-tori"}, which did not use conditions (iii) or (iv), and hence a ball in $Y \setminus N(L)$, a contradiction to its being essential. If it is a disk $D$, the boundary of the disk would have to be a nontrivial curve in $\partial F \times I$, contradicting the fact $\partial F \times I$ is incompressible in $F \times I$ and also the $\partial$-irreducibility of $Y$. 
Still assuming $\Sigma$ is an essential disk or sphere, if $\Sigma$ intersects $F \times I$ but is not entirely contained in it, then by incompressibility and $\partial$-incompressibility of $F$, we can replace it by a properly embedded essential $\Sigma'$ that does not intersect $F \times \{0,1\}$, a contradiction to the cases we have already discussed. Suppose now that $\Sigma$ is an essential torus or annulus in $Y \setminus N(L)$. If it does not intersect $F \times I$, then it must either be compressible or boundary-parallel in $Y$. If it compresses in $Y$, then a compressing disk $D$ must intersect $F \times I$. But by incompressibility of $F$, we can find another compressing disk that does not intersect $F \times I$, contradicting essentiality in $Y \setminus N(L)$. If $\Sigma$ is boundary-parallel in $Y$, then there is a boundary component $H$ of $Y$ with which $\Sigma$ is boundary-parallel. In the case $\Sigma$ is a torus, there is a $T \times I$ through which $\Sigma$ is parallel to the boundary, and $H$ is a torus. Since $F$ does not intersect $\Sigma$, it must be contained in $T \times I$ so that $L \subset F \times I$ can prevent $\Sigma$ from being boundary-parallel in $Y \setminus N(L)$. Further, $F$ must have all of its boundary on $H$. But there are no essential surfaces in $T \times I$ with all boundaries on $T \times \{1\}$. So $F$ does not intersect $T \times I$ and $\Sigma$ remains boundary-parallel in $Y \setminus N(L)$, a contradiction. In the case $\Sigma$ is an annulus that is boundary-parallel in $Y$, there is a solid torus through which $\Sigma$ is parallel into a boundary component $H$ of $Y$. Then $\Sigma$ must have both boundary components on $H$. Again, there are no essential surfaces to play the role of $F$ in the solid torus with boundary just on $H$, so $F$ does not intersect the solid torus and $\Sigma$ remains boundary-parallel in $Y \setminus N(L)$, a contradiction. 
In the case that $\Sigma$ is entirely contained in $(F \times I) \setminus N(L)$, $\Sigma$ cannot be a torus by Proposition [Proposition 19](#prop-no-essential-spheres-tori){reference-type="ref" reference="prop-no-essential-spheres-tori"}, which did not assume conditions (iii) and (iv). If $\Sigma$ is an annulus entirely contained in $(F \times I) \setminus N(L)$, then by Lemma [Lemma 22](#lemma-type2and3annuliimplytype1){reference-type="ref" reference="lemma-type2and3annuliimplytype1"}, there exists a type (1) annulus $\Sigma'$ that has both boundaries in $\partial F \times I$, and they must be nontrivial curves so that $\Sigma$ is incompressible in $Y \setminus N(L)$. If each boundary is on a different component of $\partial F \times I$, then the two components would be isotopic in $F \times I$ through the annulus, a contradiction to the fact $F$ is not itself an annulus. If both boundary components are in the same component of $\partial F \times I$, then $\Sigma$ can be isotoped so that $\partial \Sigma$ does not intersect $F$. Therefore, the intersection graph on $\Sigma$ has no edges that touch $\partial \Sigma$. Since the intersection graph is 4-valent, this implies that there are simple closed curves in the intersection graph that are trivial, which contradicts the comment following the proof of Lemma [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}. Hence the intersection graph is empty. Thus, $\Sigma$ does not intersect $F$ and it must be boundary-parallel in $F \times I$, a contradiction to its being essential in $Y \setminus N(L)$. Let $\Sigma$ be an essential torus or annulus in $Y \setminus N(L)$ that does intersect $F \times I$ but is not entirely contained in it. 
In the case $\Sigma$ is a torus, we can assume it is meridionally incompressible, as if not, we would generate a twice-punctured sphere that is essential, and Proposition [Proposition 7](#prop-weakly-prime-circ){reference-type="ref" reference="prop-weakly-prime-circ"} would then imply the projection of our link $L$ to $F$ is not weakly prime, contradicting condition (i) of the theorem. In the case $\Sigma$ is an annulus, if it is not meridionally incompressible, we can compress to obtain two annuli, each essential. If either had a second meridional compression, we would similarly contradict the weak primeness of the projection of our link $L$ to $F$. Thus by replacing $\Sigma$ by one of the two resulting annuli if needed, we can assume $\Sigma$ is both essential and meridionally incompressible. We then consider its intersection curves with $F \times \{0, 1\}$. Assume we have chosen $\Sigma$ to minimize the number of such intersection curves. Any simple closed intersection curve that bounds a disk on $\Sigma$ must also bound a disk on $F$ by incompressibility. Using irreducibility of $Y$ and of $(F \times I) \setminus N(L)$, we can isotope to eliminate all such simple closed curves, contradicting our choice of $\Sigma$ as having a minimal number of intersection curves. So the only possible simple closed curves of intersection are nontrivial on $\Sigma$ and therefore cut it into annuli. In the case when $\Sigma$ is an annulus, we can also have intersection arcs. If such an arc cuts a disk from $\Sigma$, take an outermost such. It must also cut a disk from $F$ by $\partial$-incompressibility of $F$. The union of these two disks generates a properly embedded disk in $Y$, which by $\partial$-irreducibility of $Y$ must have trivial boundary on $\partial Y$. We can then form a sphere that bounds a ball, through which we can isotope $\Sigma$ to eliminate the intersection arc, again contradicting minimality of intersection curves on $\Sigma$. 
In the case there are simple closed curves of intersection, $\Sigma$ will intersect $F \times I$ in a collection of annuli, each with boundary components on $F \times \{0,1\}$ and possibly one on $\partial N(L)$ and also intersect $Y \setminus (F \times I)$ in another collection of annuli, the boundaries of which are on $F \times \{0,1\}$. Given such an annulus $A'$ in $(F \times I) \setminus N(L)$ with both boundaries in $F \times \{0, 1\}$, it is incompressible because $\Sigma$ is incompressible in $Y \setminus N(L)$. It cannot be boundary-parallel since we cannot reduce the number of intersections of $\Sigma$ with $F \times \{0, 1\}$. So $A'$ is essential in $(F \times I) \setminus N(L)$. But $\partial A'$ does not intersect $F$, and therefore the intersection graph on $A '$ has no edges that touch $\partial A'$. As argued in a previous case, since the intersection graph is 4-valent, this implies that there are simple closed curves in the intersection graph that are trivial, which contradicts the comment following the proof of Lemma [Lemma 16](#lemma-triv-int-curve-iff){reference-type="ref" reference="lemma-triv-int-curve-iff"}. Hence the intersection graph is empty and $A'$ does not intersect $F$. So, it must be boundary-parallel in $F \times I$, a contradiction to the minimality of the number of intersection curves with $F \times \{0, 1\}$. The last possibility for $A'$ when the intersection curves of $\Sigma \cap (F \times \{0, 1\})$ are simple closed curves is that $A'$ intersects $F \times \{0,1\}$ in a single simple closed curve. Then its other boundary is on $\partial N(K)$ where $K$ is a component of $L$. The boundary of a regular neighborhood of $A' \cup K$ is an annulus $A''$ with both boundaries on one component of $F \times \{0,1\}.$ It must be incompressible since $\Sigma$ is incompressible. And it is not boundary-parallel in $(F \times I) \setminus N(L)$ to the side containing $K$. 
If it was boundary-parallel to the other side, then that side must be a solid torus. The side containing $K$ is already a solid torus. These two solid tori share an annulus on their boundary and their union is all of $F \times I$. Thus, the boundary of $F \times I$ must be a torus and $F$ must be an annulus, a contradiction to our comment at the beginning of the proof that $F$ cannot be an annulus. Thus, $A''$ is essential in $F \times I$ with both boundaries on $F \times \{0,1\}$. But we have already eliminated this possibility. Finally, suppose that $\Sigma$ is an annulus, and all intersection curves are arcs from one boundary of $\Sigma$ to the other. They cut $\Sigma$ into disks that alternately lie in $(F \times I) \setminus N(L)$ and $Y \setminus (F \times I)$. Let $D$ be such a disk in $(F \times I) \setminus N(L)$. Its boundary $\partial D$ breaks up into four arcs, two of which are non-adjacent arcs in $F \times \{0, 1\}$ and two of which are in $\partial F \times I$. The disk $D$ must intersect $F \times \{1/2\}$ or we could isotope it off $F \times I$ and reduce the number of intersections of $\Sigma$ with $F \times I$. Then we can isotope to obtain an intersection graph on $D$ with at least one vertex by Lemma [Lemma 17](#lemma-int-curve-one-bubble){reference-type="ref" reference="lemma-int-curve-one-bubble"}. Hence as argued previously, there must be a fork. In fact, we can argue that there are at least four forks. If there is just a single vertex, there are four edges that go to the boundary and hence four forks. If there is more than one vertex, removing all edges in the intersection graph that intersect $\partial D$, we are left with a tree or trees with two or more leaves. Each then has three edges going out to the boundary, and thus generates at least two forks. So we have at least four forks. 
If both edges of a fork end on the same boundary component of $F$, then we create a region in the complement of the intersection graph on $F$ that has one strand of $L$ entering from a bubble with nowhere to go, since the region is a disk or annulus, a contradiction. Hence, the two edges making up the fork must end on distinct components of the boundary of $F$. However, in order for $\partial D$ to pass from one component of $\partial F \times I$ to a different component, it must travel on the boundary of $F \times I$ up $\partial F \times I$, across one of $F \times \{0, 1\}$ and down $\partial F \times I$. Since there are at least four such forks, this contradicts the fact there are only two connected arcs on the boundary of $D$ that are in $\partial F \times I$. Therefore, since there are no essential spheres, disks, tori or annuli in $Y \setminus N(L)$, it must be tg-hyperbolic. ◻ We conclude this section with a proof of Proposition [Proposition 7](#prop-weakly-prime-circ){reference-type="ref" reference="prop-weakly-prime-circ"}. *Proof of Proposition [Proposition 7](#prop-weakly-prime-circ){reference-type="ref" reference="prop-weakly-prime-circ"}.* First, the direction "$\pi(L)$ is not weakly prime implies $L$ is not prime in $Y$\" follows as in the proof of Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}. To prove the converse, suppose that $L$ is not prime and let $\Sigma \subset Y$ be an essential sphere punctured twice by $L$. Let $G = F \times \{0\} \cup F \times \{1\}$. Consider the intersection curves in $\Sigma \cap G$. If the intersection is empty, then we can cut along $G$ so that $\Sigma$ is an essential sphere embedded in $F \times I$. Then by Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"}, $\pi(L)$ is not weakly prime on $F$. 
Now suppose $\Sigma \cap G$ is not empty, and let $\alpha \subset \Sigma \cap G$ be an intersection curve which is innermost on $\Sigma$. Since $G$ is incompressible in $Y$, $\alpha$ bounds a disk $D \subset G$ on $G$. Let $D' \subset \Sigma$ be a disk bounded by $\alpha$ on $\Sigma$ which contains no other intersection curves of $\Sigma \cap G$: then $D \cup D'$ is a 2-sphere in $Y \setminus G$. If $D \cup D'$ is outside $F \times I$ it must bound a ball outside $F \times I$ and we can isotope to remove the intersection. If it is inside $F \times I$, it bounds a ball in $F \times I$ by Proposition [Proposition 19](#prop-no-essential-spheres-tori){reference-type="ref" reference="prop-no-essential-spheres-tori"}. Thus we can eliminate all intersections of $\Sigma$ with $G$, and hence Proposition [Proposition 5](#prop-weakly-prime){reference-type="ref" reference="prop-weakly-prime"} implies $\pi(L)$ is not weakly prime. ◻ [\[section-pf-thm-circle-bundle\]]{#section-pf-thm-circle-bundle label="section-pf-thm-circle-bundle"} # Applications and Further Directions One motivation for Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} comes from studying hyperbolic links in handlebodies. In addition to being interesting objects in their own right as a natural generalization of classical link theory, they also show up naturally when studying hyperbolicity of knotoids and generalized knotoids as in [@generalizedknotoids] and [@hypknotoids]. Indeed, the map $\phi_{\Sigma}^D$ constructed in these papers allows us to associate a hyperbolic volume to generalized knotoids by mapping them to the set of links in a handlebody. Observe that a genus $g$ handlebody can be obtained by thickening a 2-sphere with $(g + 1)$ disks removed, or more generally by thickening a genus $k$ closed orientable surface with $(g - k)$ disks removed (where $k \geq 1$). 
In the remainder of this section, when we refer to a projection surface in a handlebody, we always mean one of these surfaces, so that the handlebody is obtained by thickening it. Then Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} is useful for studying links in handlebodies, and hence, for studying generalized knotoids as well. As one application of the theorem, note that if a generalized knotoid $k$ has any poles of nonzero valency, then $\phi_\Sigma^D$ never yields a link which is cellular alternating with respect to one of these projection surfaces. This is because the construction requires us to double the rail diagram of $k$ across the boundary portions corresponding to the poles of nonzero valency. However, we can restrict to the class of generalized knotoids whose poles are all valency-zero, that is, generalized knotoids whose diagram consists of a link on $\Sigma$ together with a set of valency-zero poles. These are the *staked links* defined in [@generalizedknotoids]. Then, as noted in Proposition 7.7 of [@generalizedknotoids], Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} precisely characterizes which alternating staked links (or equivalently, alternating links in handlebodies) are hyperbolic under $\phi_\Sigma^D$. In that paper, this is used to prove Theorem 7.8, which says that every link with a checkerboard-colorable diagram on a closed surface $\Sigma$ has a diagram such that staking that diagram makes the resulting link hyperbolic in $\Sigma \times I$. In particular, this means that we can define the staked volume for any such link to be the minimum volume of any hyperbolic staking of the link. Since all link diagrams in $S^3$ are checkerboard-colorable, we can define staked volume for every link in $S^3$. See the last section of [@generalizedknotoids] for more details. 
As we mentioned in the introduction, there are examples of links in handlebodies shown to be hyperbolic by Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} that are not covered by the hypothesis of Theorem 1.1 from [@hp17]. For example, consider the family of examples in Figure [19](#fig-thm1.1-nonex){reference-type="ref" reference="fig-thm1.1-nonex"}. ![Here $F$ is an annulus and $T$ is a prime, cellular alternating tangle which is not an integer tangle. By Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}, this is a family of tg-hyperbolic links in a solid torus. There are no closed projection surfaces which satisfy the hypotheses of Theorem 1.1 from [@hp17]. For example, if we choose the torus parallel to $\partial(F \times I)$, the link will not have a cellular projection.](figures/thm1.1nonex.pdf){#fig-thm1.1-nonex} We would like to extend Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} to cellular non-alternating links. The following result gives an extension in this direction: **Corollary 26**. *Let $F$ be a projection surface with nonempty boundary which is not a disk, and let $L \subset F \times I$ be a link with a reduced cellular (not necessarily alternating) projection diagram $\pi(L) \subset F \times \{1/2\}$, and let $M = (F \times I) \setminus N(L)$. Suppose conditions (i)-(iv) of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} are satisfied, as well as the following:* 5. *let $c_1, \dotsc, c_n$ be crossings of $\pi(L)$ such that $\pi(L)$ becomes alternating after each $c_i$ is changed to the opposite crossing. Each $c_i$ locally divides $F$ into four complementary regions such that a pair of opposite regions are homeomorphically annuli.* *Then the conclusion of Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} holds.* *Proof.* Consider the projection diagram $\pi(L) \subset F$. 
For each crossing $c$ in $\{c_i\}_{i = 1}^n$, let $R_1$ and $R_2$ denote the two complementary regions of $F \setminus \pi(L)$ which meet $c$ and are homeomorphically annuli. There is an arc $\alpha \subset F$ which has an endpoint each on $\partial R_1 \cap \partial F$ and $\partial R_2 \cap \partial F$ and intersects $\pi(L)$ exactly once through $c$. Then $\alpha \times I$ is a properly embedded disk $\Sigma$ in $F \times I$ which is punctured twice by $L$. See Figure [20](#fig-crossing-change){reference-type="ref" reference="fig-crossing-change"}. ![On the left is a local picture of $\pi(L) \subset F$ near crossing $c$ with the $R_i$ shaded. Crossing the arc $\alpha$ by $I$ yields the twice-punctured disk $\Sigma$, shown on the right.](figures/crossingchange.pdf){#fig-crossing-change} Now we may cut $M$ along $\Sigma$, yielding two copies, $\Sigma_1$ and $\Sigma_2$. Reglue $\Sigma_2$ to $\Sigma_1$ along a rotation by $2\pi$: this has the effect of changing the crossing $c$ to the opposite crossing, and the resulting manifold is homeomorphic to $M$. By performing this operation for each of the $c_i$, we obtain a link $L' \subset F \times I$ such that $(F \times I) \setminus N(L')$ is homeomorphic to $M$ and its projection $\pi(L')$ is cellular alternating on $F$. Moreover, conditions (i), (ii), (iii) and (iv) still hold. Then the statement follows from Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"}. ◻ The corollary expands the number of known hyperbolic staked links as defined in [@generalizedknotoids], or equivalently tg-hyperbolic links in handlebodies. Theorem [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} may also be combined with results from [@complinks] to give other ways of obtaining tg-hyperbolic links in handlebodies, namely, by *composition*. There is much work to be done in expanding the number of links in handlebodies known to be tg-hyperbolic. 
The methods used in this paper rely heavily on the alternating property; however, it is conceivable that these methods might be adapted for almost alternating links by taking into account the different behavior of the intersection curves at the non-alternating crossing. Another direction is to reduce the number of hypotheses needed for Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"}. Theorem 1.1 of [@hp17] is very powerful in this sense: it applies to links in an arbitrary compact 3-manifold $Y$ (satisfying some mild conditions) with a cellular alternating diagram on a *closed* projection surface that is not necessarily incompressible in $Y$, provided the diagram satisfies a certain representativity condition. There should be a version of Theorem [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"} where $F$ need not be incompressible and $\partial$-incompressible, but similarly, the diagram satisfies an appropriate representativity condition. We might also try to generalize Theorems [Theorem 6](#thm-thickened){reference-type="ref" reference="thm-thickened"} and [Theorem 8](#surfaceinmanifold){reference-type="ref" reference="surfaceinmanifold"} to allow for nonorientable projection surfaces $F$ or for nonorientable $I$-bundles. Since the analogous results for closed surfaces in [@small18] hold in the nonorientable case, we suspect these generalizations hold here as well. We are also interested in volume computations for hyperbolic alternating links in thickened surfaces with boundary. In [@lackenby], Lackenby proves a lower bound on hyperbolic volume for alternating links in $S^3$ in terms of the number of twist regions, which can be read off the link diagram. Howie and Purcell generalize this in [@hp17] to a lower bound for volumes of links in $Y$. It would be interesting to try adapting their methods to prove a similar lower bound on volume in our case. 
This might be done by defining a slightly more general version of Howie and Purcell's *angled chunks* which can account for boundary coming from the projection surface. Alternatively, we might try to find proofs of the lower bounds from the viewpoint of bubbles instead. [\[section-apps\]]{#section-apps label="section-apps"}
--- abstract: | To realize adaptive operation planning with MILP unit commitment, piecewise-linear approximations of the functions that describe the operating behavior of devices in the energy system have to be computed. We present an algorithm to compute a piecewise-linear approximation of a multi-variate non-linear function. The algorithm splits the domain into two regions and approximates each region with a set of hyperplanes that can be translated to a convex set of constraints in MILP. The main advantage of this "piecewise-convex approximation" (PwCA) compared to more general piecewise-linear approximation with simplices is that the MILP representation of PwCA requires only one auxiliary binary variable. For this reason, PwCA yields significantly faster solving times in large MILP problems where the MILP representation of certain functions has to be replicated many times, such as in unit commitment. To quantify the impact on solving time, we compare the performance using PwCA with the performance of simplex approximation with logarithmic formulation and show that PwCA outperforms the latter by a wide margin. We therefore conclude that PwCA will be a useful tool to set up and solve large MILP problems such as arise in unit commitment and similar engineering optimization problems. 
author: - Felix Birkelbach - David Huber - René Hofmann bibliography: - references.bib title: Piecewise linear approximation for MILP leveraging piecewise convexity to improve performance --- We present an algorithm to compute a piecewise-linear approximation of a data set We discuss the accuracy of the approximation and show that it yields fast solving times in MILP We show that piecewise-convex approximation is a useful tool for engineering optimization problems MILP ,piecewise-linear approximation ,unit commitment # Introduction {#sec:intro} MILP has been applied successfully in a wide range of engineering applications such as combined design and operation optimization [@Halmschlager2022], process control [@Fuhrmann2022] and unit commitment [@halmschlager_optimizing_2021], to name just a few. MILP solvers have made big advances in recent years, which make it possible to solve ever larger problems in less time [@Koch2022]. The availability of mature solvers makes MILP interesting for industrial applications. Considering that optimization problems in engineering are generally non-linear, mixed integer non-linear programming (MINLP) approaches would be the natural choice to solve these problems. Even though there have been big advances in the area of deterministic MINLP solvers recently, they are still limited to comparably small problem instances [@Kronqvist2019]. Meta-heuristic solvers for MINLP are not an option for many applications, because they cannot guarantee finding the globally optimal solution. Another option is to approximate the non-linear parts of the problem and use mixed integer linear programming (MILP). Even though there will be some approximation error, this drawback is generally outweighed by the availability of mature solvers for MILP problems. The central challenge is to find viable (piecewise) linear approximations of the non-linear functions [@brito2020]. 
Our goal is to develop a framework for continuous operation planning for energy systems [@schwarzmayr_development_2022], where the models that describe the operating behavior of devices in the energy system are regularly updated based on historical data to account for gradual changes of the operating behavior. The operation planning will be done with a MILP unit commitment problem. The operating behavior of each device is represented by a non-linear function. To realize the adaptive modeling of the operating behavior, we need to compute piecewise-linear approximations of the data set that represents the non-linear operating behavior. Finding an efficient piecewise-linear approximation is especially crucial for unit commitment and similar optimization problems, where the MILP representation of certain non-linear functions has to be replicated many times. In unit commitment problems, the non-linear operating behavior of certain devices has to be replicated at each time step in the MILP model. A more efficient MILP representation of the operating behavior can have a significant impact on the solving time of the MILP problem. We are interested in piecewise-linear approximation methods that can be applied to continuous functions with two or more independent variables. The approximation must be able to capture key features of the non-linear function, such as directions of positive and negative curvature, and it must have an efficient (in terms of impact on the solving time) MILP representation. An interesting approach was used by Koller et al. [@Koller_2019]: they divided a non-linear function that described the operating behavior of a thermal storage into two regions and set up a convex approximation of the function in each region so that the approximation was continuous at the interface. This resulted in an excellent approximation of their function and only required one auxiliary binary variable in the MILP problem to toggle between the two regions. 
Even though this approach puts certain limitations on the shape of the non-linear function (i.e. two regions where the function is approximately convex), it is considerably more versatile than simple convex approximation and, at the same time, should yield very fast solving times in MILP. Koller et al. set up their approximation by hand, which is why their approach cannot be readily transferred to other applications. However, if it were possible to set up this type of approximation based on a data set that describes the non-linear function, such models could be very useful for MILP applications where performance is critical and certain non-linear features cannot be neglected, such as our adaptive operation planning framework. In this paper we present a novel algorithm to approximate a multi-variate continuous non-linear function by splitting it into two regions and computing a convex approximation for each region. In the remainder of this paper, we will refer to this approach as piecewise-convex approximation (PwCA). In the next section we discuss other piecewise-linear approximation methods and their MILP representations. In Section [3](#sec:algorithm){reference-type="ref" reference="sec:algorithm"} we describe the approximation algorithm and how a PwCA can be translated to MILP. In Section [4](#sec:results){reference-type="ref" reference="sec:results"} we assess the performance of PwCA. We show that PwCA is a considerably better approximation than simple convex approximation for many non-linear functions and that it outperforms simplex approximation in large MILP optimization problems. 
From this, we conclude in Section [5](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} that PwCA fills an important gap among piecewise-linear approximation methods for MILP and that it is a useful tool for solving large unit commitment problems and similar engineering optimization problems where the MILP representation of a non-linear function has to be replicated many times. # Background The simplest, albeit also quite limited, method to linearize a non-linear function is to compute a linear approximation. If this approximation is sufficiently accurate, it is by a wide margin the most efficient way to linearize functions for MILP. If it is not sufficiently accurate, two more sophisticated methods can be applied: piecewise linear approximation with simplices and convex/concave approximation with (hyper-)planes. Piecewise linear approximation with simplices is the most universal method to incorporate a non-linear function in MILP. The domain of the function is triangulated and the function is interpolated on each simplex of the triangulation. (For functions with one independent variable, this is equivalent to linear spline approximation.) Any continuous function, even a highly non-linear one, can be approximated with arbitrarily small approximation error. By increasing the number of linear segments, the accuracy of the approximation can be increased. However, this accuracy comes at a cost: the more segments a function is divided into, the more constraints and auxiliary variables are introduced into the optimization problem, which can quickly make it intractable. For functions with one independent variable, the simplex approximation can be translated to MILP as a special ordered set of type 2 (SOS2). This is highly efficient and covered in introductory textbooks on MILP. Functions with more than one independent variable cannot be translated as efficiently [@Vielma_2010]. 
Especially for multi-variate functions, the number of auxiliary variables increases quickly. Various MILP formulations are available, which differ in the number of auxiliary variables and constraints [@Vielma_2010]. However, even with specialized MILP formulations optimized for a large number of simplices [@Vielma_2011], the maximum accuracy that is still tractable may be severely limited. With convex/concave approximation, the non-linear function is incorporated into the MILP problem by super-positioning multiple linear constraints. Thus, the non-linear function has to be approximated by a set of hyperplanes. As the name suggests, this approximation will only yield good results if the non-linear function is either convex or concave. For many engineering problems, where the non-linear function is neither convex nor concave, the approximation will be poor. The main advantage is that the MILP representation is very efficient. For representing convex approximations in minimization problems or concave approximations in maximization problems, not a single auxiliary variable is required. For concave approximations in minimization problems and convex approximations in maximization problems, one binary variable per constraint has to be introduced. Overall, this results in considerably fewer auxiliary variables than with simplex approximation, and consequently these approximations generally yield very good MILP performance. Another advantage is that this approach extends to functions with many independent variables with very little overhead. For simplicity, we will refer to this approach as convex approximation in the remainder of this paper, even though it can also be applied to concave functions. Many non-linear functions in engineering problems do not have an exceedingly complicated shape. If convex approximation is sufficiently accurate, it is the natural choice because it is easy to set up and performs very well. 
However, convex approximation will fail to capture key features of the function if the function has both a direction with positive and a direction with negative curvature. Then, the only other option is piecewise linear approximation with simplices, which introduces a large number of auxiliary binary variables and consequently has a negative impact on performance, especially for multi-variate functions. We propose piecewise-convex approximation (PwCA), which can capture features of a non-linear function that simple convex approximation cannot. If the accuracy of the PwCA is sufficient, it yields much better MILP performance than simplex approximation because the MILP representation of PwCA requires only one auxiliary binary variable. Finding a good PwCA for a given non-linear function is surprisingly challenging, and doing it with an automated algorithm even more so. Not only do we need to find the right position to split the function into two regions so that a convex approximation of the function in each region is accurate (in terms of deviation from the non-linear function), but we also need to set up the convex approximations in both regions so that the hyperplanes coincide on the interface. Otherwise the approximation will not be continuous, which may cause problems when translating the approximation to MILP. Naïvely extending the fitting approach of simple convex approximation to approximation with two regions and implementing the continuity requirement as a constraint in the fitting procedure (i.e. treating the fitting problem as a constrained non-linear optimization problem) does not work. The continuity constraint is so complex that it inhibits convergence. A different approach is required to compute a PwCA. # The algorithm {#sec:algorithm} Consider a continuous function $y = f(x_1,\ldots,x_{n-1})$ with $n-1$ independent variables in the domain $x_{j,\mathrm{min}} \le x_j \le x_{j,\mathrm{max}}$. 
This function describes a hyper-surface in $n$-dimensional space. We assume that the non-linear function is given by a set of $N_\mathrm{data}$ data points $\{x'_{1,m},\ldots,x'_{n-1,m}, y'_m \}$ with $m\in \{1,\ldots, N_\mathrm{data}\}$. This data set may be derived from evaluating the non-linear function on a grid, from simulation data or even from experimental data. The goal of the algorithm is to compute an accurate piecewise linear approximation of this non-linear function, i.e. to minimize the deviation between the estimate $\hat{y}$ and the data $y'_m$ by adjusting the model parameters. Piecewise-convex approximation (PwCA) can be seen as an extension of simple convex approximation. For this reason, we will first describe how to compute a simple convex approximation and then we will move to PwCA. A Matlab implementation of these algorithms is available at <https://gitlab.tuwien.ac.at/iet/public/milptools>. ## Computing a simple convex approximation {#sec:fitSC} With convex approximation, the function is approximated by a set of linear functions, i.e. hyperplanes, that can be translated to a convex set of constraints in MILP. Assume that each hyperplane $i \in \{1,\ldots,N_\mathrm{hyp} \}$ is given by a set of $n+1$ coefficients $\bm{a}_i$ so that $[1,x_1,\ldots,x_{n-1},y] \bm{a}_i = 0$ for all points on the hyperplane. Then the estimate $\hat{y}(\bm{x})$ is given by $$\hat{y}(\bm{x}) = \max \{y_i(\bm{x}) : i \in \{1,\ldots,N_\mathrm{hyp}\} \} \label{eq:yhat}$$ where $$y_i(\bm{x}) = -\left( a_{i,0} + \sum_{j=1}^{n-1} a_{i,j} x_j \right) \frac{1}{a_{i,n}} \label{eq:yi}$$ is the $y$-value of the $i$th hyperplane evaluated at $\bm{x} = [x_1,\ldots,x_{n-1}]$. 
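As a minimal numpy sketch of this evaluation (the helper name and array layout are our own and not taken from the referenced Matlab toolbox; the sign convention follows from $[1,x_1,\ldots,x_{n-1},y]\,\bm{a}_i = 0$):

```python
import numpy as np

def eval_convex(x, A):
    # A: (N_hyp, n+1) array of hyperplane coefficients a_i with
    # [1, x_1, ..., x_{n-1}, y] @ a_i = 0 on the i-th hyperplane.
    # Solving for y gives y_i = -(a_{i,0} + sum_j a_{i,j} x_j) / a_{i,n};
    # the convex estimate is the pointwise maximum over all hyperplanes.
    A = np.asarray(A, dtype=float)
    p = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    return float((-(A[:, :-1] @ p) / A[:, -1]).max())
```

For two planes $y = x$ and $y = 1-x$ (coefficient vectors $[0,-1,1]$ and $[-1,1,1]$), the estimate is the upper envelope $\max(x, 1-x)$.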
The objective of the fitting algorithm is to minimize the deviation in terms of the sum of squared errors $$\mathop{\mathrm{arg\,min}}_{\bm{a}_1,\ldots,\bm{a}_{N_\mathrm{hyp}}} \sum_{m=1}^{N_\mathrm{data}}\big(\hat{y}(x'_{1,m},\ldots,x'_{n-1,m}) - y'_m \big)^2$$ by adjusting the hyperplane parameters $\bm{a}_1$ through $\bm{a}_{N_\mathrm{hyp}}$. This is an unconstrained non-linear minimization problem. It turns out to converge remarkably well, regardless of the initial guess for the hyperplane parameters $\bm{a}_i$. Only when many hyperplanes are used for the approximation can it happen that some hyperplanes are "unused", i.e. that they are dominated by the other hyperplanes on the whole domain. This can be remedied by penalizing the distance between $y_i$ at the corners of the domain and $\max \{y'_m\}$, which drives hyperplanes upwards until they contribute to the approximation. The same approach can be used to fit concave functions. In Eq. [\[eq:yhat\]](#eq:yhat){reference-type="eqref" reference="eq:yhat"}, $\max y_i$ has to be replaced by $\min y_i$, or the identity $\min y_i = -\max\{-y_i\}$ can be used to transform the problem. ## Computing a piecewise-convex approximation {#sec:fitDC} With PwCA, the non-linear function is divided into two regions and a convex approximation is computed for each region. The two convex approximations have to coincide at the interface that separates the two regions. Otherwise the PwCA would not be continuous, which may result in unwanted effects in the MILP problem. To ensure continuity, PwCA always uses an even number of hyperplanes $N_\mathrm{hyp}$. Each pair of hyperplanes intersects exactly at the interface. The naïve approach of extending the algorithm for simple convex approximation to PwCA would be to implement the continuity requirement at the interface as a constraint. 
Unfortunately, it turns out that the resulting constrained non-linear optimization problem does not converge: the continuity constraint is so complex that it inhibits convergence. The problem has to be reformulated so that the continuity is inherent in the way the approximation is set up. The goal is to parameterize the approximation model in a way that we again arrive at an unconstrained problem. We propose to use a series of rotations and shifts to set up the approximation model. Instead of the hyperplane parameters $\bm{a}_i$ we use a set of rotation angles and distances to define the position of the hyperplanes. The key feature of this approach is that the hyperplanes are defined via their intersection at the interface. This guarantees continuity at the interface and we again end up with an unconstrained non-linear optimization problem, which converges reliably. For the 3-dimensional case, the rotations and shifts to set up the model are illustrated in Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"}. In 3D, hyperplanes are ordinary planes and hyperlines are ordinary lines. Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} shows a PwCA with two pairs of hyperplanes. The starting point for the algorithm is the orthonormal basis with $n$ basis vectors in the directions of $\bm{x}_1$ to $\bm{x}_{n-1}$ and $\bm{y}$, respectively (see Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} a). Each hyperplane, including the interface, is uniquely identified by a normal vector and a point on the hyperplane. After performing the rotations and shifts, the transformed $\bm{y}$ vector will be the normal vector of each hyperplane. In Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} it is marked by a dot at the end. 
Another vector has to be selected as the normal vector of the interface. In principle, this can be any of the other basis vectors. Here, we chose the $\bm{x}_1$ vector. In Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} it is marked by an asterisk. For the rotations, we consider basic rotations, i.e. rotations in each of the planes that is spanned by the pairwise combinations of the basis vectors. In two dimensions there is one basic rotation, in three dimensions there are three basic rotations and in the general $n$-dimensional case there are $\binom{n}{2}$ basic rotations. Transformation 1 rotates the initial basis and shifts it in the direction of $\bm{x}_1$. Then, the vectors $\bm{x}_2$ to $\bm{x}_{n-1}$ and $\bm{y}$ span the hyperplane that is the interface and $\bm{x}_1$ is the normal vector of the interface (marked by an asterisk in Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} b). Transformation 1 is characterized by $\binom{n}{2}$ rotation angles $\bm{r}_1$ and one shift distance $s_1$. It is the same for all hyperplanes of the model. Transformation 2 rotates the basis again, but only in the planes spanned by the pairwise combinations of the vectors $\bm{x}_2$ to $\bm{x}_{n-1}$ and $\bm{y}$. The orientation of the vector $\bm{x}_1$, the normal vector of the interface, does not change. Then, the basis is shifted in the direction of $\bm{y}$ to get the intersection hyperline of each pair of hyperplanes on the interface. The intersection hyperline is spanned by the vectors $\bm{x}_2$ to $\bm{x}_{n-1}$, and the vectors $\bm{x}_1$ and $\bm{y}$ are its normal vectors. In the 3D case, which is illustrated in Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"}, each intersection hyperline is a simple line, so only one vector $\bm{x}_2$ is required to define the intersection. 
Since the model in Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} has two pairs of hyperplanes, two bases are shown. To describe transformation 2, $\binom{n-1}{2}$ rotation angles $\bm{r}_{2,i}$ plus one shift distance $s_{2,i}$ are needed for each pair of hyperplanes $i \in \{1,\ldots,N_\mathrm{hyp}/2\}$. Transformation 3 rotates the basis of the intersection in the plane that is spanned by $\bm{x}_1$ and $\bm{y}$. This rotation does not affect the vectors $\bm{x}_2$ to $\bm{x}_{n-1}$, which span the intersection hyperline. In this way, the continuity of the PwCA is ensured. After this final rotation the transformed $\bm{y}$ vector is the normal vector of each of the hyperplanes that constitute the model (see Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} d). The rotation angles of these final rotations are $r_{3,i}^-$ and $r_{3,i}^+$ for the hyperplanes on either side of the interface. The last step, illustrated in Figure [\[fig:doubleConvex\]](#fig:doubleConvex){reference-type="ref" reference="fig:doubleConvex"} e, is to compute the plane coefficients of the hyperplanes that constitute the model and of the interface that separates the two convex regions. They can be directly derived from each hyperplane's normal vector $\bm{\nu}$ (i.e. $\bm{y}$ for the model hyperplanes and $\bm{x}_1$ for the interface) and its origin $\bm{o}$: $$\bm{a} = \frac{1}{\| \bm{\nu}\|} \left[ -{\bm{o}}^\mathsf{T} \bm{\nu}, \nu_1, \ldots, \nu_n \right]^\mathsf{T} \label{eq:planeCoef}$$ This series of transformations that defines the PwCA model is summarized in Algorithm [\[alg:transform\]](#alg:transform){reference-type="ref" reference="alg:transform"}. 
$\bm{B} \leftarrow$ identity matrix; $\bm{o} \leftarrow [0,\ldots,0]^\mathsf{T}$; rotate $\bm{B}$ by the angles $\bm{r}_1$; set $\bm{x}_1$ to the first column of $\bm{B}$; $\bm{o} \leftarrow \bm{o} + \bm{x}_1 \cdot s_1$; compute $\bm{a}_\mathrm{ifc}$ from $\bm{x}_1$ and $\bm{o}$ via Eq. [\[eq:planeCoef\]](#eq:planeCoef){reference-type="eqref" reference="eq:planeCoef"}. In the end, $\bm{a}_i^-$ contains the parameters of the hyperplanes below the interface, i.e. in the negative direction of the normal vector of the interface, $\bm{a}_i^+$ the parameters of the hyperplanes above the interface and $\bm{a}_\mathrm{ifc}$ the parameters of the interface itself. Then the model estimate $\hat{y}$ is given by $$\hat{y} = \begin{cases} \max \{ y_i^-(\bm{x}) : i \in \{1,\ldots,N_\mathrm{hyp}/2\} \} \quad &\text{if} \quad [1, \bm{x}, \hat{y}] \, \bm{a}_\mathrm{ifc} \le 0 \\ \max \{ y_i^+(\bm{x}) : i \in \{1,\ldots,N_\mathrm{hyp}/2\} \} \quad &\text{if} \quad [1, \bm{x}, \hat{y}] \, \bm{a}_\mathrm{ifc} > 0 \end{cases}$$ where $y_i^-$ are the $y$-values of the hyperplanes below the interface given by Eq. [\[eq:yi\]](#eq:yi){reference-type="eqref" reference="eq:yi"} with the hyperplane parameters $\bm{a}_i^-$, and $y_i^+$ the equivalent above the interface. The reformulated fitting problem is $$\mathop{\mathrm{arg\,min}}_{\bm{r}_1, s_1, \bm{r}_{2,i}, s_{2,i}, r_{3,i}^-, r_{3,i}^+} \sum_{m=1}^{N_\mathrm{data}} \big(\ \hat{y}(x'_{1,m},\ldots,x'_{n-1,m}) - y'_m \big)^2 . \label{eq:optDC}$$ This is again an unconstrained non-linear optimization problem, which converges reliably given fairly good starting values. Just as with the fitting algorithm for simple convex approximation, penalizing the distance between $y_i$ at the corners of the domain and $\max \{y'_m\}$ helps ensure that no hyperplane is dominated by the others on the whole domain. 
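A numpy sketch of the coefficient construction of Eq. [\[eq:planeCoef\]](#eq:planeCoef){reference-type="eqref" reference="eq:planeCoef"} and of the piecewise estimate above (the helper names `plane_coef` and `eval_pwca` are our own; for simplicity we assume the interface coefficients do not depend on $y$, i.e. a vertical interface):

```python
import numpy as np

def plane_coef(nu, o):
    # Eq. (planeCoef): a = (1/||nu||) [-o^T nu, nu_1, ..., nu_n]^T, so that
    # p^T a with p = [1, x_1, ..., x_{n-1}, y] is the signed distance to the plane.
    nu = np.asarray(nu, dtype=float)
    o = np.asarray(o, dtype=float)
    return np.concatenate(([-o @ nu], nu)) / np.linalg.norm(nu)

def eval_pwca(x, A_minus, A_plus, a_ifc):
    # Pick the region by the sign of the interface expression (assumed here
    # to be independent of y), then take the convex envelope max_i y_i of
    # that region's hyperplanes, as in the piecewise definition of y-hat.
    p = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    side = p @ np.asarray(a_ifc, dtype=float)[:-1]
    A = np.asarray(A_minus if side <= 0 else A_plus, dtype=float)
    return float((-(A[:, :-1] @ p) / A[:, -1]).max())
```

With one hyperplane per region ($y = x$ below and $y = 1-x$ above an interface at $x = 0.5$), this reproduces the non-convex tent function $\min(x, 1-x)$.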
## Starting values for piecewise-convex approximation To make the PwCA fit converge reliably, it is essential to compute good starting values for the hyperplanes that constitute the model. The key to computing good starting values is to observe that the intersection of the hyperplanes at the interface will be a convex function in $(n-1)$ dimensions. To initialize the model we start with an initial guess for the interface that separates the two regions, $\bm{r}'_1, s'_1$. By projecting points from the data set in the vicinity of the interface onto the interface and computing a simple convex approximation with the algorithm outlined in Section [3.1](#sec:fitSC){reference-type="ref" reference="sec:fitSC"}, a good approximation of the intersection hyperlines $\bm{r}'_{2,i}, s'_{2,i}$ can be obtained. Then only one rotation parameter has to be estimated for each hyperplane, $r_{3,i}^{ -\,\prime}, r_{3,i}^{+\,\prime }$, on either side of the interface to make the set of starting values complete. This can be done by using the optimization problem in Eq. [\[eq:optDC\]](#eq:optDC){reference-type="eqref" reference="eq:optDC"} and fixing all parameters but $r_{3,i}^-, r_{3,i}^+$. ## Translating a piecewise-convex approximation to MILP Assume that the PwCA is given by a set of plane coefficients as described above: $\bm{a}_i^-$ contains the coefficients of the hyperplanes below the interface, i.e. in the negative direction of the normal vector, $\bm{a}_i^+$ the coefficients above the interface and $\bm{a}_\mathrm{ifc}$ the coefficients of the interface itself. 
Then, the set of constraints that describes the PwCA is $$\begin{aligned} a_{\mathrm{ifc},0} + \sum_{j=1}^{n-1} a_{\mathrm{ifc},j} x_j + a_{\mathrm{ifc},n} y &\le M_\mathrm{t}^{+} t & \label{eq:ifc0}\\ a_{\mathrm{ifc},0} + \sum_{j=1}^{n-1} a_{\mathrm{ifc},j} x_j + a_{\mathrm{ifc},n} y &\ge M_\mathrm{t}^{-} (1-t) & \label{eq:ifc1}\\ a_{i,0}^- + \sum_{j=1}^{n-1} a_{i,j}^- x_j + a_{i,n}^- y &\ge M_i^- t &: \quad \forall i \in \{1,\ldots,N_\mathrm{hyp}/2\} \label{eq:hyp0}\\ a_{i,0}^+ + \sum_{j=1}^{n-1} a_{i,j}^+ x_j + a_{i,n}^+ y &\ge M_i^+ (1-t) &: \quad \forall i \in \{1,\ldots,N_\mathrm{hyp}/2\} \label{eq:hyp1}\end{aligned}$$ where $x_1$,...,$x_{n-1}$ and $y$ are the continuous variables of the optimization problem and $t$ is the binary toggle variable that is 0 if the point is below the interface and 1 if it is above the interface. Eq. [\[eq:ifc0\]](#eq:ifc0){reference-type="eqref" reference="eq:ifc0"} and [\[eq:ifc1\]](#eq:ifc1){reference-type="eqref" reference="eq:ifc1"} set the toggle variable. Eq. [\[eq:hyp0\]](#eq:hyp0){reference-type="eqref" reference="eq:hyp0"} and [\[eq:hyp1\]](#eq:hyp1){reference-type="eqref" reference="eq:hyp1"} represent the two convex sets of linear constraints that constitute the PwCA and that are toggled according to $t$. (Note: For Eq. [\[eq:hyp0\]](#eq:hyp0){reference-type="eqref" reference="eq:hyp0"} and [\[eq:hyp1\]](#eq:hyp1){reference-type="eqref" reference="eq:hyp1"} to work, the normal vector of the hyperplanes must point in the positive $y$ direction, i.e. the $a_{i,n}$ component must have a positive sign.) Translating a PwCA to MILP requires one binary variable and $N_\mathrm{hyp}+2$ big-M constraints. 
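As an illustrative sketch (our own helper, not part of the published toolbox; the big-M values are passed in and must be large enough to relax the constraints of the inactive region), a candidate point $(\bm{x}, y, t)$ can be checked against this constraint set:

```python
import numpy as np

def pwca_feasible(x, y, t, A_minus, A_plus, a_ifc, M):
    # M: dict with big-M values "Mt+", "Mt-" (scalars) and "Mi-", "Mi+"
    # (one entry per hyperplane). The key names are illustrative only.
    p = np.concatenate(([1.0], np.asarray(x, dtype=float), [float(y)]))
    d = p @ np.asarray(a_ifc, dtype=float)           # interface expression
    ok = d <= M["Mt+"] * t and d >= M["Mt-"] * (1 - t)
    ok &= np.all(np.asarray(A_minus, dtype=float) @ p >= np.asarray(M["Mi-"]) * t)
    ok &= np.all(np.asarray(A_plus, dtype=float) @ p >= np.asarray(M["Mi+"]) * (1 - t))
    return bool(ok)
```

For $t = 0$ the interface constraints force the point below the interface and the lower-region hyperplane constraints become binding, while the upper-region constraints are relaxed by their big-M terms; for $t = 1$ the roles are swapped.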
The big-M values are the maximum distances of points in the domain to the hyperplanes $$\begin{aligned} M_t^+ &= \max_{\bm{p}} \bm{p}^\mathsf{T} \bm{a}_\mathrm{ifc} \\ M_t^- &= \min_{\bm{p}} \bm{p}^\mathsf{T} \bm{a}_\mathrm{ifc} \\ M_i^- &= \min_{\bm{p}} \bm{p}^\mathsf{T} \bm{a}_i^- \quad :\quad \bm{p}^\mathsf{T} \bm{a}_\mathrm{ifc} \ge 0 \\ M_i^+ &= \min_{\bm{p}} \bm{p}^\mathsf{T} \bm{a}_i^+ \quad :\quad \bm{p}^\mathsf{T} \bm{a}_\mathrm{ifc} \le 0\end{aligned}$$ where $$\bm{p} = [1,x_1,\ldots, x_{n-1},y]^\mathsf{T} .$$ Using this approach to translate concave functions to MILP takes $N_\mathrm{hyp}/2+1$ binary variables and $N_\mathrm{hyp}+2$ big-M constraints. ## Piecewise-convex approximation with three and more regions PwCA could be extended to more than two regions by using more than one interface. This would potentially make the approximation more accurate. However, the intersection hyperlines on two adjacent interfaces cannot be positioned independently, because they have to be connected by a hyperplane, which locks one degree of freedom. Consequently, the accuracy gained by adding additional interfaces would be diminished. Furthermore, the main advantage of PwCA, the highly efficient MILP formulation, would suffer, because auxiliary binary variables for the additional interfaces would need to be introduced. PwCA is designed to fill the gap between simple convex approximation and approximation with simplices. To do this, one interface is sufficient. Considering the trade-off between accuracy, performance and complexity, we decided not to generalize PwCA in this way. # Evaluation {#sec:results} Piecewise-convex approximation (PwCA) fills a gap between simple convex approximation and approximation with simplices. In this section we will first show that PwCA yields a significantly better approximation than simple convex approximation for functions that are curved in more than one direction. 
Then, we will show that PwCA yields significantly better performance than approximation with simplices in large optimization problems, because it introduces fewer binary variables. As a benchmark, we use the multiplication of two variables $y = x_1 \cdot x_2$ in the range $x_1, x_2 \in [0, 1]$. This function has a positive curvature in the direction $x_1 + x_2$ and negative curvature orthogonal to that. Because of this curvature, simple convex approximation can be expected to perform poorly. Nevertheless, this function is a good example to showcase PwCA. For one, multiplication is, in a way, the most trivial kind of non-linearity and we need an efficient way to linearize it to solve non-linear problems with MILP. Additionally, there are various functions with similar shape that arise in engineering optimization problems. We use Matlab 2020b on a laptop with an Intel Core i5-10310U with 4 cores at 1.70 GHz and 16 GB DDR4 RAM. The optimization problems were solved with Matlab's `intlinprog` and the YALMIP toolbox [@lofberg2004]. ## Accuracy Here, we compare the accuracy of PwCA with simple convex approximation depending on the number of planes used for the approximation. The data set was generated by sampling the multiplication function on an equally spaced $100 \times 100$ grid. Figure [\[fig:comparison_models\]](#fig:comparison_models){reference-type="ref" reference="fig:comparison_models"} shows the approximation of the multiplication function with 1 to 10 planes. The corresponding approximation error, measured by the root mean square error (RMSE), is shown in Figure [\[fig:comparison_modelsRMSE\]](#fig:comparison_modelsRMSE){reference-type="ref" reference="fig:comparison_modelsRMSE"}. 
With simple convex approximation, using more than two planes does not yield substantial improvement: the main curvature along the $x_1 + x_2$ direction is already well approximated with two planes, and simple convex approximation cannot, by definition, model the other curvature. PwCA, on the other hand, is more flexible. Starting at four planes, the PwCA splits the domain in half along the $x_1 + x_2$ direction and approximates both sides with a convex combination of two planes, which reduces the RMSE from 0.044 to 0.017. Adding two more planes still yields a slight improvement. Beyond that, PwCA too has reached its accuracy limit. If a more accurate approximation is required, one would need to resort to simplex approximation. Even though the accuracy of PwCA is limited, it is able to capture key characteristics of many non-linear functions. Depending on the application, this may be sufficient. Then, the main advantage of PwCA, the fast MILP solving time, will come into effect, as we will see in the next subsection. ## MILP performance To assess the performance of PwCA compared to approximation with simplices, the multiplication function was approximated using 4 planes. Then, an equivalent simplex approximation was set up, which required 8 simplices. Both approximations achieved a root mean square error (RMSE) of 0.1304. The two approximations are shown in Figure [1](#fig:models2D){reference-type="ref" reference="fig:models2D"}. ![Illustration of the piecewise-convex approximation (blue solid) and the approximation using simplices (red dashed) for the benchmark.](res/simplex.png){#fig:models2D} The translation of the PwCA to MILP was done with Eq. [\[eq:ifc0\]](#eq:ifc0){reference-type="eqref" reference="eq:ifc0"}--[\[eq:hyp1\]](#eq:hyp1){reference-type="eqref" reference="eq:hyp1"}, which were introduced in the previous section. 
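For such a translation, the big-M values can be estimated from a set of augmented points $\bm{p}$ covering the domain; since all expressions are linear, the extrema over a box are attained at its corners. The helper below is our own sketch (names and dictionary keys are illustrative), not part of the published toolbox:

```python
import numpy as np

def big_m_from_points(P, A_minus, A_plus, a_ifc):
    # P: (N, n+1) array of rows [1, x_1, ..., x_{n-1}, y], e.g. the corners
    # of the bounding box of the domain. Returns the big-M values defined in
    # the translation section: extrema of the signed distances p^T a.
    P = np.asarray(P, dtype=float)
    d = P @ np.asarray(a_ifc, dtype=float)
    above, below = d >= 0, d <= 0
    return {
        "Mt+": float(d.max()),
        "Mt-": float(d.min()),
        "Mi-": (P[above] @ np.asarray(A_minus, dtype=float).T).min(axis=0),
        "Mi+": (P[below] @ np.asarray(A_plus, dtype=float).T).min(axis=0),
    }
```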
For the translation of the simplex approximation, we chose three of the formulations introduced by Vielma et al. [@Vielma_2010]: convex combination (CC), multiple choice (MC) and the logarithmic convex combination (Log). Table [1](#tab:my_label){reference-type="ref" reference="tab:my_label"} shows the number of constraints and binary variables that are required for each model type and each MILP formulation. For a more detailed discussion on the number of variables and constraints in the simplex models, we refer to Table 1 in [@Vielma_2010].

  approximation      binary   continuous   constraints
  ------------------ -------- ------------ -------------
  piecewise convex   1        0            5
  simplex MC         8        16           28
  simplex CC         8        9            13
  simplex Log        3        9            10

  : Auxiliary variables and constraints for each approximation in Figure [1](#fig:models2D){reference-type="ref" reference="fig:models2D"}. [\[tab:my_label\]]{#tab:my_label label="tab:my_label"}

For the benchmark, each approximation was used in a MILP problem with a different number of query points $N$. In each run of the benchmark, $N$ random $[x'_{1,m}, x'_{2,m}]$ pairs were generated and the optimization problem was set up as $$\begin{aligned} \min &\sum_m y_m \\ \text{s.t. } &y_m \ge \mathrm{PwLA}(x_{1,m} \cdot x_{2,m}) \\ &x_{1,m} = x'_{1,m} \\ &x_{2,m} = x'_{2,m} \\ &y_m, x_{1,m}, x_{2,m} \in [0,1]\end{aligned}$$ where PwLA is the MILP representation of the piecewise linear approximation of the product of $x_1$ and $x_2$. For each query point, the MILP representation of the approximation must be replicated. Thus, the number of variables and constraints in the optimization problem increases linearly with $N$. The need to evaluate a model at various points arises, to name just one example, in unit commitment problems. If the model represents the operating behavior of a device, the model has to be replicated for each time step. 
The results of the benchmark are illustrated in Figure [2](#fig:performance2D){reference-type="ref" reference="fig:performance2D"}. We measured the performance in MILP problems with 1 to 10 000 replications of the MILP representation of the approximations. To get a more accurate estimate of the solving time and to eliminate random effects, each MILP problem was solved 10 times and the median solving time was computed. PwCA outperforms the simplex approximation by a large margin, regardless of the MILP formulation that was used to translate the simplex model to MILP. At 10 query points, PwCA performed 2 times faster than the simplex approximation with the Log formulation (6.7 ms vs. 14.3 ms). At 300 query points, it even performed 39 times faster (8.5 ms vs. 329 ms), which is a consequence of the exponentially increasing complexity of MILP problems with the number of binary variables.

![MILP performance of the PwCA and the simplex approximation with various MILP formulations in the test case.](res/performance.png){#fig:performance2D}

These results highlight the main advantage of PwCA: the MILP representation requires only one auxiliary binary variable. If the MILP representation is replicated multiple times in an optimization problem, e.g., at various time steps, then the performance gain is significant.

# Conclusion {#sec:conclusion}

We presented an algorithm to automatically compute a piecewise-linear approximation of a non-linear function for MILP. The domain is split into two regions and each region is approximated by a convex combination of linear constraints. This piecewise-convex approximation (PwCA) fills a gap between simple convex approximation and approximation using simplices. PwCA can capture features of the non-linear function that simple convex approximation cannot, and it yields faster solving times than simplex approximation because only one auxiliary binary variable is required.
PwCA works especially well for functions that have both a direction with positive and a direction with negative curvature. For such functions it provides a very accurate (in terms of deviation) approximation of the non-linear function and, at the same time, performs very well in MILP. In this paper, we showcased this by considering the multiplication of two variables as an example. PwCA was both more accurate than simple convex approximation and yielded faster solving times than simplex approximation. Compared to simple convex approximation, the root mean square error was about 61 % lower (0.044 with simple convex vs. 0.017 with piecewise convex). Compared to approximation with simplices at the same accuracy, PwCA resulted in a 2 times faster solving time in a MILP problem with 10 instances and a 39 times faster solving time with 300 instances. This is a consequence of the exponentially increasing complexity of MILP problems with the number of binary variables. Since PwCA only requires one auxiliary binary variable (compared to 3 for an approximation with 8 simplices and the Log formulation), it scales considerably better for large MILP problems. This speed-up is especially useful for engineering optimization problems such as unit commitment, where the function that represents the operating behavior of a device has to be replicated at every time step. Then, the performance gain due to the reduced number of binary variables will be significant. Research is under way to realize adaptive operation planning, where PwCA is used to approximate the operating behavior of devices in an energy system. PwCA fills a niche between simple convex approximation and simplex approximation, because it is more accurate than simple convex approximation and yields faster solving times than simplex approximation. For this reason, PwCA will also be a useful tool for other applications where non-linear functions have to be approximated for MILP.
arXiv:2309.10372, "Piecewise linear approximation for MILP leveraging piecewise convexity to improve performance", Felix Birkelbach, David Huber, René Hofmann (math.OC, CC BY-NC-ND 4.0).
---
abstract: |
  In this work, we develop a novel efficient quadrature and sparse grid based polynomial interpolation method to price American options with multiple underlying assets. The approach is based on first formulating the pricing of American options using dynamic programming, and then employing static sparse grids to interpolate the continuation value function at each time step. To achieve high efficiency, we first transform the domain from $\mathbb{R}^d$ to $(-1,1)^d$ via a scaled tanh map, and then remove the boundary singularity of the resulting multivariate function over $(-1,1)^d$ by a bubble function, which at the same time significantly reduces the number of interpolation points. We rigorously establish that with a proper choice of the bubble function, the resulting function has bounded mixed derivatives up to a certain order, which provides theoretical underpinnings for the use of sparse grids. Numerical experiments for American arithmetic and geometric basket put options with the number of underlying assets up to 16 are presented to validate the effectiveness of the approach.\
  **Key words**: sparse grids, American option pricing, multiple underlying assets, continuation value function, quadrature **MSC classification:** 65D40, 91G20
author:
- "Jiefei Yang [^1] and Guanglian Li[^2]"
bibliography:
- references.bib
date: September 15, 2023
title: On Sparse Grid Interpolation for American Option Pricing with Multiple Underlying Assets
---

# Introduction

This paper is concerned with American option pricing with payoffs affected by many underlying instruments, which can be assets such as stocks, bonds, currencies, commodities, and indices (e.g., S&P 500, NASDAQ 100) [@zhang1997exotic p. 365]. Classical examples include pricing American max-call and basket options [@andersen2004primal; @haugh2004pricing; @hu2020pricing; @kovalov2007pricing; @nielsen2008penalty; @reisinger2007efficient; @scheidegger2021pricing].
In practice, American options can be exercised only at discrete dates. The options that can be exercised only at finitely many discrete dates are called Bermudan options, named after the geographical location of the Bermuda Islands (i.e., lying between America and Europe while much closer to the American seashore) [@guyon2013nonlinear]. Merton [@merton1976option] showed that pricing an American call option on a single asset with no dividend is equivalent to pricing a European option, and obtained an explicit solution for pricing the perpetual American put. However, except for these two cases, no closed-form solution is known for American options, even in the simplest Black-Scholes model. Thus it is imperative to develop efficient numerical methods for American option pricing. Various numerical methods have been proposed for pricing and hedging in the past five decades, using different formulations of American option pricing, e.g., as an optimal stopping problem, a variational inequality, or a free boundary problem [@bensoussan2011applications; @jaillet1990variational; @McKean1965; @van1974optimal]. A finite difference method (FDM) was proposed to price American options based on variational inequalities [@brennan1977valuation], with its convergence proved in [@jaillet1990variational] by showing the $\mathcal{C}^1$ regularity of the value function with respect to the underlying price. The binomial options pricing model (BOPM) based upon optimal stopping was developed in [@cox1979option], and its convergence was shown in [@amin1994convergence]. When the number $d$ of underlying assets is smaller than four, one can extend one-dimensional pricing methods using tensor products or additional treatment to price multi-asset options. For example, the Cox-Ross-Rubinstein (CRR) binomial tree model can be extended to the multinomial option pricing model to price American options with two underlying assets [@boyle1988lattice]. However, dynamic programming (cf.
[\[eq: DP\]](#eq: DP){reference-type="eqref" reference="eq: DP"} below) or variational inequalities [@bensoussan1984theory] are predominant when $d$ is greater than three. Many numerical schemes have been developed based on the variational inequalities, e.g., FDMs [@achdou2005computational; @brennan1977valuation; @forsyth2002quadratic] and finite element methods (FEMs) [@kovalov2007pricing]. However, only a first-order convergence rate can be achieved, since the value function has only $\mathcal{C}^1$ regularity with respect to the underlying price (i.e., the smooth pasting condition [@peskir2006optimal]). There are mainly two lines of research on American option pricing based on dynamic programming, i.e., simulation-based methods [@andersen2004primal; @longstaff2001valuing; @tsitsiklis2001regression] and quadrature and interpolation (Q&I) based methods [@quecke2007efficient; @sullivan2000valuing]. Simulation-based methods are fast and easy to implement, but their accuracy is hard to justify. One representative method is the least squares Monte Carlo method [@longstaff2001valuing], which employs least squares regression and the Monte Carlo method to approximate conditional expectations, cf. [\[eq: DP\]](#eq: DP){reference-type="eqref" reference="eq: DP"}. Q&I based methods employ quadrature to approximate conditional expectations and interpolation to construct function approximators. One can use Gaussian quadrature or adaptive quadrature to approximate conditional expectations in [\[eq: DP\]](#eq: DP){reference-type="eqref" reference="eq: DP"} and Chebyshev polynomial interpolation, spline interpolation or radial basis functions to reconstruct the continuation value or the value function [@quecke2007efficient; @sullivan2000valuing]. In [@glau2019new], a dynamic Chebyshev method via polynomial interpolation of the value functions was developed, allowing the evaluation of generalized moments in the offline stage to reduce the computational complexity.
Although Q&I based methods are efficient in low-dimensional settings, the extension to high-dimensional cases is highly nontrivial, and there are several outstanding challenges, e.g., the curse of dimensionality, the unboundedness of the domain and the absence of natural boundary conditions. For example, if the domain is truncated and artificial boundary conditions are imposed, then one is actually pricing an American barrier option with rebate instead of the American option itself. Accurate boundary conditions can be obtained by pricing a $(d-1)$-dimensional problem [@nielsen2008penalty], which, however, is still computationally challenging. The sparse grids technique has been widely used in option pricing with multiple underlying assets, due to its capability to approximate high-dimensional functions with bounded mixed derivatives, for which the computational complexity for a given accuracy does not grow exponentially with respect to $d$ [@bungartz2004sparse]. It has been applied to price multi-asset European and path-dependent Asian options, by formulating them as high-dimensional integration problems [@bayer2021numerical; @bungartz2003multivariate; @holtz2010sparse], and has also been combined with FDMs [@reisinger2007efficient] and FEMs [@bungartz2012option] to price options with $d\leq 5$. In the context of Q&I methods, adaptive sparse grid interpolation with local linear hierarchical bases has been used to approximate value functions [@scheidegger2021pricing]. In this work, we propose a novel numerical approach to price American options with multiple underlying assets, which is summarized in Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"}. It crucially draws on the $\mathcal{C}^\infty$ regularity of the continuation value function, cf. [\[eq:continuation\]](#eq:continuation){reference-type="eqref" reference="eq:continuation"}, and uses sparse grid Chebyshev polynomial interpolation to alleviate the curse of dimensionality.
This is achieved in several crucial steps. First, we transform the unbounded domain $\mathbb{R}^d$ into a bounded one, which eliminates the need of imposing artificial boundary conditions. Second, to further improve the computational efficiency, using a suitable bubble function, we obtain a function that can be continuously extended to the boundary with vanishing boundary values and with bounded mixed derivatives up to certain orders, which is rigorously justified in Theorem [Theorem 1](#thm: bounded-mixed){reference-type="ref" reference="thm: bounded-mixed"}. This construction enables the use of the standard sparse grid technique with far fewer sparse grid points (without adaptivity), and moreover, the interpolation functions fulfill the requisite regularity conditions, thereby admitting theoretical convergence guarantees. The distinct features of the proposed method include the use of static sparse grids at all time steps and the ability to derive the value function on the whole domain $\mathbb{R}^d$; thus it can also be used to estimate important parameters, e.g., the hedge ratio. Extensive numerical experiments demonstrate that Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"} can break the curse of dimensionality in the sense that high accuracy is achieved with the associated computational complexity being almost independent of the dimension. Furthermore, our experiments significantly extend the feasible dimensionality, e.g., pricing American arithmetic basket put options up to $d=12$ (versus $d=6$ in the literature), and pricing American geometric basket put options up to $d=16$. In both cases, we consistently observe high accuracy of the approximation, with almost no influence from the dimension. Further experimental evaluations show that the method is robust with respect to the choice of various algorithmic parameters and the specifics of the quadrature scheme.
In sum, our proposed algorithm significantly improves over the state-of-the-art pricing schemes for American options with multiple underlying assets and at the same time has rigorous theoretical guarantees. The remainder of the paper is structured as follows. In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we derive the continuation value function $f_k(\mathbf{x})$ for an American basket put option with underlying assets following the correlated geometric Brownian motion under the risk-neutral probability. In Section [3](#sec:method){reference-type="ref" reference="sec:method"}, we develop the novel method. In Section [4](#sec:analysis){reference-type="ref" reference="sec:analysis"}, we establish the smoothness of the interpolation function in the space of functions with bounded mixed derivatives. In Section [5](#sec:numerical){reference-type="ref" reference="sec:numerical"}, we present extensive numerical tests for American basket options with up to 16 underlying assets. Finally, we conclude with future work in Section [6](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}.

# The mathematical model {#sec:prelim}

Martingale pricing theory gives the fair price of an American option as the solution to the optimal stopping problem in the risk-neutral probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{0\le t\le T}, \mathbb{Q})$: $$\label{eq: OSP} V(t) = \sup_{\tau_t \in [t, T]} \mathbb{E}[e^{-r(\tau_t - t)} g(\mathbf{S}(\tau_t)) | \mathcal{F}_t],$$ where $\tau_t$ is an $\mathcal{F}_t$-stopping time, $T$ is the expiration date, $(\mathbf{S}(t))_{0\le t\le T}$ is a $d$-dimensional price process, and $g(\cdot)$ is the payoff function depending on the type of the option.
The payoffs of put and call options take the following form $g(\mathbf{S}) = \max( \kappa - \Psi(\mathbf{S}) , 0)$ and $g(\mathbf{S}) = \max( \Psi(\mathbf{S}) - \kappa , 0)$, respectively, where $\Psi: \mathbb{R}_{+}^d\to\mathbb{R}_{+}$, and $\kappa$ is the strike price. Now we describe a detailed mathematical model for pricing an American put option on $d$ underlying assets with a strike price $\kappa$ and a maturity date $T$, whose numerical approximation is the main objective of this work. One classical high-dimensional example is pricing American basket options. Let $g:\mathbb{R}_{+}^d\to\mathbb{R}_{+}$ be its payoff function, and assume that the prices of the underlying assets $\mathbf{S}(t) = [S^1(t), \dots, S^d(t)]^\top$ follow the correlated geometric Brownian motions $$\label{eq:model} \mathrm{d}S^i(t) = (r - \delta_i) S^i(t)\,\mathrm{d}t + \sigma_i S^i(t)\,\mathrm{d}\tilde{W}^i(t) \quad \text{ with } S^i(0) = S^i_0, \quad i = 1,2,\dots,d,$$ where $\tilde{W}^i(t)$ are correlated $\mathbb{Q}$-Brownian motions with $\mathbb{E}[\mathrm{d}\tilde{W}^i(t) \mathrm{d}\tilde{W}^j(t)] = \rho_{ij} \,\mathrm{d}t$, $\rho_{ii} = 1$ for $i,j = 1,2,\dots, d$, and $r$, $\delta_i$ and $\sigma_i$ are the riskless interest rate, dividend yields, and volatility parameters, respectively. The payoffs of an arithmetic and a geometric basket put are respectively given by $$g(\mathbf{S}) = \max\left(\kappa - \frac{1}{d}\sum_{i=1}^d S^i, 0\right)\quad \mbox{and}\quad g(\mathbf{S}) = \max\left(\kappa - \left(\prod_{i=1}^d S^i\right)^{1/d}, 0\right).$$ In practice, the $\mathcal{F}_0$-stopping time $\tau_0$ is assumed to be taken in a set of discrete time steps, $\mathcal{T}_{0,T} := \{t_k = k\Delta t: k=0,1,\ldots,K\}$, with $\Delta t= T/K$. 
This leads to the pricing of a $K$-times exercisable Bermudan option that satisfies the following dynamic programming problem $$\label{eq: DP} \begin{split} & \mathcal{V}_K(\mathbf{s}) = g(\mathbf{s}), \\ & \mathcal{V}_k(\mathbf{s}) = \max\big( g(\mathbf{s}), \mathbb{E}[e^{-r\Delta t} \mathcal{V}_{k+1}(\mathbf{S}_{k+1}) | \mathbf{S}_k = \mathbf{s} ] \big)\quad\text{ for } k = K-1:-1:0, \end{split}$$ where $\mathcal{V}_k$ is called the value function at time $t_k$ and $\mathbf{S}_k := \mathbf{S}(t_k)$ is the discretized price process. Throughout, for a stochastic process $\mathbf{X}(t)$, we write $\mathbf{X}_k := \mathbf{X}(t_k)$, $k=0,1,\dots, K$. Note that, by the Markov property of the Itô process, conditioning on $\mathcal{F}_{t_k}$ can be replaced by conditioning on $\mathbf{S}_k$. The conditional expectation, as a function of prices, is called the continuation value function, i.e., $$\label{eq:continuation} C_k(\mathbf{s}) = \mathbb{E}[e^{-r\Delta t} \mathcal{V}_{k+1}(\mathbf{S}(t_{k+1})) | \mathbf{S}(t_k) = \mathbf{s} ]\quad \text{ for } k = 0,1,\dots, K-1.$$ Below we recast problem [\[eq: DP\]](#eq: DP){reference-type="eqref" reference="eq: DP"} in terms of the continuation value function $$\label{eq: DPC} \begin{aligned} C_{K-1}(\mathbf{s}) &= \mathbb{E}\left[e^{-r\Delta t} g(\mathbf{S}_K) | \mathbf{S}_{K-1} = \mathbf{s}\right],&& \\ C_k(\mathbf{s}) &= \mathbb{E}\left[e^{-r\Delta t} \max \left( g(\mathbf{S}_{k+1}), C_{k+1}(\mathbf{S}_{k+1}) \right) | \mathbf{S}_k = \mathbf{s} \right] &&\text{ for } k = K-2:-1:0. \end{aligned}$$ Given an approximation to $C_0(\mathbf{s})$, the price of the Bermudan option can be obtained by $$\label{eq:goal} \mathcal{V}_0(\mathbf{S}_0) = \max( g(\mathbf{S}_0), C_0(\mathbf{S}_0)).$$ The reformulation in terms of $C_k$ is crucial to the development of the numerical scheme. Next we introduce the rotated log-price, which has independent components with Gaussian densities.
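The backward recursion above can be sketched in one dimension with a simple quadrature-and-interpolation scheme: conditional expectations are computed by 5-node Gauss-Hermite quadrature in log-price, and the value function is piecewise-linearly interpolated on a uniform grid. All market parameters below are illustrative, and the grid and quadrature sizes are chosen for brevity rather than accuracy; this is not the sparse grid method developed later in the paper.

```python
import math

# 1-d sketch of the dynamic programming recursion (eq. DP) for a Bermudan
# put: V_K = g, V_k = max(g, e^{-r dt} E[V_{k+1} | S_k]). Expectations of
# the Gaussian log-price increment use 5-node Gauss-Hermite quadrature.

GH_NODES = [-2.020182870456086, -0.958572464613819, 0.0,
            0.958572464613819, 2.020182870456086]
GH_WEIGHTS = [0.019953242059046, 0.393619323152241, 0.945308720482942,
              0.393619323152241, 0.019953242059046]

def bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                 n_ex=10, n_grid=201):
    dt = T / n_ex
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    x0 = math.log(S0)
    half = 5.0 * sigma * math.sqrt(T)
    lo, hi = x0 - half, x0 + half
    h = (hi - lo) / (n_grid - 1)
    xs = [lo + i * h for i in range(n_grid)]

    def payoff(x):
        return max(K - math.exp(x), 0.0)

    def interp(vals, x):
        # piecewise-linear interpolation with flat extrapolation
        if x <= lo:
            return vals[0]
        if x >= hi:
            return vals[-1]
        i = min(int((x - lo) / h), n_grid - 2)
        t = (x - lo) / h - i
        return (1.0 - t) * vals[i] + t * vals[i + 1]

    disc = math.exp(-r * dt)
    sqrt2, sqrtpi = math.sqrt(2.0), math.sqrt(math.pi)
    vals = [payoff(x) for x in xs]                       # V_K = g
    for _ in range(n_ex):                                # k = K-1, ..., 0
        vals = [max(payoff(x),
                    disc * sum(w * interp(vals, x + drift + vol * sqrt2 * z)
                               for z, w in zip(GH_NODES, GH_WEIGHTS)) / sqrtpi)
                for x in xs]
    return interp(vals, x0)
```

The same structure (quadrature for the expectation, interpolation for the value or continuation function) underlies the Q&I methods cited above; the contribution of this paper is making the interpolation step feasible for large $d$.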
We denote the correlation matrix as $P = (\rho_{ij})_{d\times d}$, the volatility matrix $\Sigma$ as a diagonal matrix with the volatilities $\sigma_i$ on the diagonal, and write the dividend yields as a vector $\bm{\delta} = [\delta_1, \dots, \delta_d]^\top$. Then the log-price $\mathbf{X}(t)$ with each component defined by $X^i(t):= \ln(S^i(t)/S^i_0)$ follows a multivariate Gaussian distribution $$\mathbf{X}(t)\sim \mathcal{N}\left(\left(r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1}\right)t, \Sigma P \Sigma^\top t\right),$$ with $\mathbf{1} := [1,1,\dots,1]^\top\in\mathbb{R}^d$. The covariance matrix $\Sigma P \Sigma^\top$ admits the spectral decomposition $\Sigma P \Sigma^\top=Q\Lambda Q^{\top}$ with an orthogonal matrix $Q$. Then the rotated log-price $\Tilde{\mathbf{X}}(t) := Q^\top \mathbf{X}(t)$ follows an independent Gaussian distribution $$\Tilde{\mathbf{X}}(t)\sim \mathcal{N}\left(Q^\top \left(r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1}\right)t, \Lambda t\right).$$ Therefore, to eliminate the correlation, we introduce the transformation $$\begin{aligned} \Tilde{\mathbf{X}}(t) := Q^\top \ln(\mathbf{S}(t)./\mathbf{S}_0),\end{aligned}$$ and denote the inverse transformation by $$\begin{aligned} \phi(\Tilde{\mathbf{X}}(t)): = \mathbf{S}(t) = \mathbf{S}_0.*\exp(Q\Tilde{\mathbf{X}}(t)),\end{aligned}$$ where $./$ and $.*$ represent component-wise division and multiplication.
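The decorrelation step can be checked by hand in two dimensions: with $\Sigma P \Sigma^\top = Q\Lambda Q^\top$ and $Q$ orthogonal, the covariance of the rotated log-price $Q^\top \mathbf{X}$ is the diagonal matrix $\Lambda$. The sketch below uses a closed-form eigendecomposition of a symmetric $2\times 2$ matrix; the volatilities and the correlation are made-up example values.

```python
import math

# Illustrative 2-asset check: Q^T (Sigma P Sigma^T) Q is diagonal when Q
# holds the eigenvectors of the covariance matrix. Example parameters only.

sigma1, sigma2, rho = 0.2, 0.3, 0.5
a, b, c = sigma1 ** 2, rho * sigma1 * sigma2, sigma2 ** 2  # cov = [[a, b], [b, c]]

# closed-form eigendecomposition of the symmetric 2x2 covariance matrix:
# a rotation by theta with tan(2*theta) = 2b / (a - c) diagonalizes it
theta = 0.5 * math.atan2(2.0 * b, a - c)
ct, st = math.cos(theta), math.sin(theta)
Q = [[ct, -st], [st, ct]]               # columns of Q are eigenvectors
lam1 = a * ct * ct + 2.0 * b * st * ct + c * st * st
lam2 = a * st * st - 2.0 * b * st * ct + c * ct * ct

def qt_cov_q(i, j):
    # entry (i, j) of Q^T (Sigma P Sigma^T) Q
    cov = [[a, b], [b, c]]
    return sum(Q[k][i] * cov[k][l] * Q[l][j]
               for k in range(2) for l in range(2))
```

For $d > 2$ the same decomposition would be computed numerically (e.g., by a symmetric eigensolver), but the diagonalization identity verified here is identical.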
Finally, by defining $f_k(\mathbf{x}) := C_k(\phi(\mathbf{x}))$ as the continuation value function with respect to the rotated log-price for $\mathbf{x}\in \mathbb{R}^d$, we obtain the following dynamic programming procedure $$\label{eq: DP rotated log-price} \begin{aligned} f_{K-1}(\mathbf{x}) &= \mathbb{E}\left[e^{-r\Delta t}g( \phi(\Tilde{\mathbf{X}}_K)) \Big| \Tilde{\mathbf{X}}_{K-1} = \mathbf{x}\right],&&\\ f_k(\mathbf{x}) &= \mathbb{E}\left[ e^{-r\Delta t}\max \left( g( \phi(\Tilde{\mathbf{X}}_{k+1})), f_{k+1}(\Tilde{\mathbf{X}}_{k+1}) \right) \Big| \Tilde{\mathbf{X}}_k = \mathbf{x} \right]&& \text{ for } k = K-2:-1:0. \end{aligned}$$ The main objective of this work is to develop an efficient numerical method for solving problem [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} for a large dimension $d$. The detailed construction is given in Section [3](#sec:method){reference-type="ref" reference="sec:method"}.

# Methodologies {#sec:method}

In this section, we systematically develop a novel algorithm, based on quadrature and sparse grid polynomial interpolation (SGPI) [@barthelmann2000high], to solve problem [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} so that highly accurate results can be obtained for moderately large dimensions. This is achieved as follows. First, we propose a mapping $\psi$ [\[eq: mapping for unbounded\]](#eq: mapping for unbounded){reference-type="eqref" reference="eq: mapping for unbounded"} that transforms the domain from $\mathbb{R}^d$ to $\Omega:=(-1,1)^d$, and obtain problem [\[eq: DP bounded variable\]](#eq: DP bounded variable){reference-type="eqref" reference="eq: DP bounded variable"} with the unknown function $F_{k}$ defined over the hypercube $\Omega$ in Section [3.1](#subsec:unbounded-bounded){reference-type="ref" reference="subsec:unbounded-bounded"}.
The mapping $\psi$ enables utilizing identical sparse grids for all time steps $k = K-1:-1:0$, which greatly facilitates the computation, and moreover, it avoids domain truncation and artificial boundary data when applying SGPI, which eliminates extra approximation errors. However, the partial derivatives of $F_{k}$ may have boundary singularities, leading to low efficiency of the SGPI. To resolve this issue, we multiply $F_k$ with a bubble function [\[eq: bubble function\]](#eq: bubble function){reference-type="eqref" reference="eq: bubble function"}, and derive problem [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"} with unknown functions $u_k$ defined over the hypercube $\Omega$. Second, we present the SGPI of the unknown $u_k$ in Section [3.2](#subsec:sg){reference-type="ref" reference="subsec:sg"}. Third and last, we provide several candidates for quadrature rules and summarize the algorithm in Section [3.3](#subsec:algorithm){reference-type="ref" reference="subsec:algorithm"}, and analyze its complexity.

## Mapping for unbounded domains and bubble functions {#subsec:unbounded-bounded}

A direct application of the SGPI to problem [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} is not straightforward since the problem is formulated on $\mathbb{R}^d$. For $d=1$, quadrature and interpolation-based schemes can be applied to problem [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} with domain truncation and suitable boundary conditions, e.g., the payoff function. However, for $d\ge 2$, the exact boundary conditions on the truncated domain require solving $(d-1)$-dimensional American option pricing problems [@nielsen2008penalty].
Therefore, the unboundedness of the domain $\mathbb{R}^d$ and the absence of natural boundary conditions pose great challenges to developing a direct yet efficient interpolation of the continuation value function $f_k$, and there is an imperative need for a new approach to overcome these challenges. Inspired by spectral methods on unbounded domains [@boyd2001chebyshev], we propose the use of the logarithmic transformation $\psi: \mathbb{R}^d \to \Omega := (-1,1)^d$, defined by $$\label{eq: mapping for unbounded} \left\{\begin{aligned} & \mathbf{Z} = \psi(\Tilde{\mathbf{X}}) \text{ with each component } Z^i := \tanh(L \Tilde{X}^i) \in (-1,1), \\ & \Tilde{\mathbf{X}} = \psi^{-1}(\mathbf{Z}) \text{ with each component } \Tilde{X}^i := L^{-1} \mathop{\mathrm{arctanh}}(Z^i)\in \mathbb{R}, \end{aligned}\right.$$ where $L>0$ is a scale parameter controlling the slope of the mapping. The logarithmic mapping $\psi$ is employed since the transformed points approach the boundary exponentially fast as their preimages tend to infinity. Asymptotically, the exponential decay rate matches that of the Gaussian distribution of the rotated log-price $\Tilde{\mathbf{X}}(t)$. Then by Itô's lemma, the new stochastic process $\mathbf{Z}(t) = [Z^1(t), \dots, Z^d(t)]^\top$ satisfies the stochastic differential equations $$\label{eq: SDE of Z} \mathrm{d}Z^i(t) = L\left( 1-(Z^i(t))^2 \right) \left( (\mu_i - \lambda_i L Z^i(t))\,\mathrm{d}t + \sqrt{\lambda_i} \,\mathrm{d}W^i(t) \right) \quad \text{ for } i=1,\dots,d,$$ where $\bm{\mu} = [\mu_1,\dots,\mu_d]^\top = Q^\top \left(r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1}\right)$, $\lambda_i$ are the diagonal elements of $\Lambda$, and $W^i(t)$ are independent standard Brownian motions. Note that the drift and diffusion terms in [\[eq: SDE of Z\]](#eq: SDE of Z){reference-type="eqref" reference="eq: SDE of Z"} vanish on the boundary $\partial \Omega$.
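A one-dimensional sketch of the scaled tanh map $\psi$ and its inverse, with an illustrative scale $L$, makes the exponential approach to the boundary explicit via the identity $1 - \tanh(Lx) = 2/(e^{2Lx}+1)$:

```python
import math

# Sketch of the scaled tanh map (eq. mapping for unbounded) acting
# componentwise: R -> (-1, 1) and back. L values below are illustrative.

def psi(x, L=1.0):
    return math.tanh(L * x)

def psi_inv(z, L=1.0):
    return math.atanh(z) / L

# 1 - psi(x) = 2 / (exp(2*L*x) + 1), so mapped points approach the
# boundary z = 1 exponentially fast as x -> infinity.
```

In finite precision the round trip degrades near the boundary (for $|z|$ extremely close to 1 the inverse amplifies rounding error), which is one reason the inner sparse grid points discussed below stay strictly inside $\Omega$.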
Thus, [\[eq: SDE of Z\]](#eq: SDE of Z){reference-type="eqref" reference="eq: SDE of Z"} fulfills the reversion condition [@zhu2003multi], which implies $\mathbf{Z}(t)\in \Omega$ for $t>s$ provided $\mathbf{Z}(s)\in \Omega$. Then we apply the mapping $\psi$ to the dynamic programming procedure [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"}. Let $F_k(\mathbf{z}) := f_k(\psi^{-1}(\mathbf{z})) = C_k\left(\phi (\psi^{-1}(\mathbf{z}))\right)$ be the continuation value function of the bounded variable $\mathbf{z}$. Then problem [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} can be rewritten as, for any $\mathbf{z}\in \Omega$, $$\label{eq: DP bounded variable} \begin{aligned} F_{K-1}(\mathbf{z}) &= \mathbb{E}\left[e^{-r\Delta t} g\left( \phi ( \psi^{-1}(\mathbf{Z}_K))\right) \big| \mathbf{Z}_{K-1} = \mathbf{z}\right],&& \\ F_k(\mathbf{z}) &= \mathbb{E}\left[ e^{-r\Delta t}\max \left(g\left(\phi (\psi^{-1}(\mathbf{Z}_{k+1}))\right), F_{k+1}(\mathbf{Z}_{k+1}) \right) \big| \mathbf{Z}_k = \mathbf{z} \right] &&\text{ for } k = K-2:-1:0. \end{aligned}$$ Below we denote by $H(\cdot) := g\left(\phi(\psi^{-1}(\cdot))\right)$ the payoff function with respect to the bounded variable $\mathbf{z}$. Note that problem [\[eq: DP bounded variable\]](#eq: DP bounded variable){reference-type="eqref" reference="eq: DP bounded variable"} is posed on the bounded domain $\Omega := (-1,1)^d$, $d\ge 1$, which however remains challenging to approximate. First, the function $F_k: \Omega \to \mathbb{R}$ may have singularities on the boundary $\partial\Omega$ due to the use of the mapping $\psi$. Second, the Dirichlet boundary condition of $F_k$ is not identically zero, which is undesirable for controlling the computational complexity of the algorithm, especially in high dimensions. 
Thus, we employ a bubble function of the form $$\label{eq: bubble function} b(\mathbf{z}) = \prod_{i=1}^d (1-z_i^2)^{\beta}, \quad \mathbf{z}\in \overline{\Omega}:=[-1,1]^d,$$ where the parameter $\beta>0$ controls the shape of $b(\mathbf{z})$. Note that $b(\mathbf{z})>0$ for $\mathbf{z}\in \Omega$ and $b(\mathbf{z})=0$ on $\partial \Omega$. Let $$\begin{aligned} u_k(\mathbf{z}) := F_k(\mathbf{z})b(\mathbf{z}),\quad k = 0,\ldots,K-1.\end{aligned}$$ Then the dynamic programming problem [\[eq: DP bounded variable\]](#eq: DP bounded variable){reference-type="eqref" reference="eq: DP bounded variable"} is equivalent to $$\label{eq: DP bubble} \begin{aligned} u_{K-1}(\mathbf{z}) &= \mathbb{E}\left[e^{-r\Delta t}H(\mathbf{Z}_K) | \mathbf{Z}_{K-1} = \mathbf{z}\right] b(\mathbf{z}),&& \\ u_k(\mathbf{z}) &= \mathbb{E}\left[ e^{-r\Delta t}\max \left( H(\mathbf{Z}_{k+1}), \frac{u_{k+1}(\mathbf{Z}_{k+1})}{b(\mathbf{Z}_{k+1})} \right) \bigg| \mathbf{Z}_k = \mathbf{z} \right]b(\mathbf{z}) &&\text{ for } k = K-2:-1:0. \end{aligned}$$ **Remark 1**. *The term $\frac{u_{k+1}(\mathbf{Z}_{k+1})}{b(\mathbf{Z}_{k+1})}$ appearing in [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"} is well-defined for $\mathbf{Z}_{k+1} \in \Omega$. Nevertheless, since the bubble function $b(\mathbf{z})$ approaches zero as $\mathbf{z}\to \partial\Omega$, in the numerical experiments, one should guarantee that $b(\cdot)$ evaluated on the computational grid will be greater than the machine epsilon, e.g., $\varepsilon = 2.2204\times 10^{-16}$ in MATLAB. This will be established in Proposition [Proposition 1](#prop:bubble){reference-type="ref" reference="prop:bubble"}.* ## Approximation by sparse grids polynomial interpolation (SGPI) {#subsec:sg} Next, we apply SGPI [@barthelmann2000high] to approximate the zero extension $\overline{u}_k: \overline{\Omega} \to \mathbb{R}$ of $u_k$ iteratively backward in time. 
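The bubble function defined above, and the machine-epsilon concern of Remark 1, can be sketched numerically. The extreme inner coordinate of the CGL sparse grids at interpolation level $L_I$ is $\cos(\pi/2^{L_I})$ (cf. the coordinate bound stated in the next subsection), and the exponent $\beta = 2$ below is an illustrative choice, not the one prescribed by the analysis.

```python
import math

# Sketch of b(z) = prod_i (1 - z_i^2)^beta (eq. bubble function) and the
# single-coordinate worst case of Remark 1: at the extreme inner CGL
# coordinate cos(pi / 2^level) the factor (1 - z^2)^beta stays far above
# machine epsilon for moderate beta and level. beta = 2 is illustrative.

def bubble(z, beta=2.0):
    return math.prod((1.0 - zi * zi) ** beta for zi in z)

def worst_case_bubble(level, beta=2.0):
    # factor contributed by one coordinate at the extreme inner CGL point
    z_extreme = math.cos(math.pi / 2 ** level)
    return (1.0 - z_extreme * z_extreme) ** beta
```

For level 5 and $\beta = 2$ the worst single-coordinate factor is about $9\times 10^{-5}$, many orders of magnitude above the double-precision epsilon $2.2204\times 10^{-16}$ quoted in Remark 1.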
By choosing suitable bubble functions $b(\mathbf{z})$, we shall prove in Theorem [Theorem 1](#thm: bounded-mixed){reference-type="ref" reference="thm: bounded-mixed"} below the smoothness property of $\overline{u}_k$ up to order $r$. Thus the SGPI enjoys a geometrical convergence rate with respect to the number of interpolation points $\Tilde{N}$, which depends on the dimension $d$ only through a logarithmic term, i.e., $\mathcal{O}(\Tilde{N}^{-r} (\log \Tilde{N})^{(r+1) (d-1)})$, as stated in Proposition [Proposition 2](#prop: error of sparse-interp){reference-type="ref" reference="prop: error of sparse-interp"}. In addition, thanks to the zero boundary condition of $\overline{u}_k$, the required amount of interpolation data is greatly reduced compared with full grids. SGPI [@barthelmann2000high] (also called Smolyak approximation) is a powerful tool for constructing function approximations over a high-dimensional hypercube. Consider a function $f: \overline{\Omega} \to \mathbb{R}$. For $d = 1$, we denote by $X^\ell = \{x_1^\ell, \dots, x_{N_\ell}^\ell\}$ the set of nested Chebyshev-Gauss-Lobatto (CGL) points, with the nodes $x_j^\ell$ given by $$x_j^\ell = \begin{cases} 0 &\quad\text{ for }j=1 \text{ if }\ell = 1, \\ \cos(\frac{(j-1)\pi}{N_\ell - 1}) &\quad \text{ for }j=1,\dots, N_\ell \text{ if }\ell \ge 2. \end{cases}$$ The cardinality of the set $X^\ell$ is $$N_\ell = \begin{cases} 1 &\quad \text{ if }\ell = 1, \\ 2^{\ell-1}+1 &\quad \text{ if }\ell\ge 2. \end{cases}$$ The polynomial interpolation $U^\ell f$ of $f$ over the set $X^\ell$ is defined as follows. For $\ell = 1$, consider the midpoint rule, i.e., $(U^1 f)(x) = f(0)$. For $\ell\geq 2$, $U^\ell f$ is given by $$(U^\ell f)(x) = \sum_{j=1}^{N_\ell} f(x_j^\ell) L_j^\ell(x), \quad \mbox{with } L_j^\ell(x) = \prod_{k=1, k\ne j}^{N_\ell} \frac{x-x_k^\ell}{x_j^\ell - x_k^\ell},$$ where $L_j^\ell$ are the Lagrange basis polynomials.
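The univariate building block $U^\ell$ can be sketched directly from these formulas; for $\ell \ge 2$ the Lagrange interpolant on the $N_\ell$ CGL nodes reproduces any polynomial of degree at most $N_\ell - 1$ exactly, and the node sets are nested:

```python
import math

# Sketch of the nested CGL node sets X^l and the univariate interpolation
# operator U^l from the text (barycentric forms would be more stable for
# large l; plain Lagrange products suffice for this illustration).

def cgl_nodes(level):
    if level == 1:
        return [0.0]                              # N_1 = 1, midpoint
    n = 2 ** (level - 1) + 1                      # N_l = 2^(l-1) + 1
    return [math.cos(j * math.pi / (n - 1)) for j in range(n)]

def interp_cgl(f, level, x):
    nodes = cgl_nodes(level)
    if level == 1:
        return f(0.0)                             # (U^1 f)(x) = f(0)
    total = 0.0
    for j, xj in enumerate(nodes):
        lj = 1.0                                  # Lagrange basis L_j(x)
        for k, xk in enumerate(nodes):
            if k != j:
                lj *= (x - xk) / (xj - xk)
        total += f(xj) * lj
    return total
```

The nestedness $X^\ell \subset X^{\ell+1}$ is what allows the hierarchical differences $\Delta^\ell$ below to reuse function evaluations across levels.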
Then we define the difference operator $$\Delta^\ell f = (U^\ell - U^{\ell-1})f \quad \text{ with } U^0 f = 0.$$ For $d>1$, Smolyak's formula approximates a function $f:\overline{\Omega}\to\mathbb{R}$ by the interpolation operator $$A(q,d) = \sum_{\ensuremath{\boldsymbol\ell}\in I(q,d)} \Delta^{\ell_1} \otimes \dots \otimes \Delta^{\ell_d},$$ with the index set $I(q,d):= \{\ensuremath{\boldsymbol\ell}\in \mathbb{N}^d: |\ensuremath{\boldsymbol\ell}| \le q\}$ and $|\ensuremath{\boldsymbol\ell}| = \ell_1 + \dots + \ell_d$ [@barthelmann2000high]. Equivalently, the linear operator $A(q,d)$ can be represented as [@wasilkowski1995explicit Lemma 1] $$\label{eq: SG interp} A(q,d) = \sum_{\ensuremath{\boldsymbol\ell}\in P(q,d)} (-1)^{q-|\ensuremath{\boldsymbol\ell}|}\binom{d-1}{q-|\ensuremath{\boldsymbol\ell}|} U^{\ell_1} \otimes \dots \otimes U^{\ell_d},$$ with the index set $P(q,d):= \{\ensuremath{\boldsymbol\ell}\in \mathbb{N}^d: q-d+1 \le |\ensuremath{\boldsymbol\ell}| \le q\}$, where the tensor product of the univariate interpolation operators is defined by $$(U^{\ell_1} \otimes \dots \otimes U^{\ell_d})(f) = \sum_{j_1=1}^{N_{\ell_1}} \dots \sum_{j_d=1}^{N_{\ell_d}}f(x_{j_1}^{\ell_1}, \dots, x_{j_d}^{\ell_d}) (L_{j_1}^{\ell_1} \otimes \dots \otimes L_{j_d}^{\ell_d}),$$ i.e., multivariate Lagrange interpolation. With the set $X^{\ell_i}$ (i.e., one-dimensional nested CGL points), the formula [\[eq: SG interp\]](#eq: SG interp){reference-type="eqref" reference="eq: SG interp"} indicates that computing $A(q,d)(f)$ only requires function evaluations on the sparse grids $$H(q,d) = \bigcup_{\ensuremath{\boldsymbol\ell}\in P(q,d)} X^{\ell_1} \times \dots \times X^{\ell_d}.$$ We denote the cardinality of $H(q,d)$ by $\Tilde{N}_{CGL}(q,d)$. 
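As a sketch, the sparse grid $H(q,d)$ can be enumerated directly from the definition; by the nestedness of the sets $X^\ell$, taking the union over all $|\boldsymbol\ell|\le q$ yields the same point set as restricting to $P(q,d)$. For $q = d + 5$ this reproduces the standard Clenshaw-Curtis sparse grid counts:

```python
import itertools
import math

def cgl_nodes(level):
    """Nested Chebyshev-Gauss-Lobatto points X^ell on [-1, 1]."""
    if level == 1:
        return [0.0]
    n = 2 ** (level - 1) + 1
    return [math.cos((j - 1) * math.pi / (n - 1)) for j in range(1, n + 1)]

def sparse_grid(q, d):
    """Sparse grid H(q, d): union of the products X^{l_1} x ... x X^{l_d}.

    Coordinates are rounded so that the same node produced at different
    levels is deduplicated."""
    pts = set()
    for levels in itertools.product(range(1, q - d + 2), repeat=d):
        if sum(levels) <= q:
            grids = [[round(x, 12) for x in cgl_nodes(l)] for l in levels]
            pts.update(itertools.product(*grids))
    return pts

# Interpolation level L_I = 5, i.e. q = L_I + d.
print(len(sparse_grid(7, 2)))  # 145
print(len(sparse_grid(8, 3)))  # 441
```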
Usually, the interpolation level $L_I\in \mathbb{N}_0$ is defined by $$L_I := q-d.$$ Then for fixed $L_I$ and $d\to \infty$, the following asymptotic estimate of $\Tilde{N}_{CGL}(L_I+d,d)$ holds [@novak1999simple] $$\label{eq: cardinality of N_CGL} \Tilde{N}_{CGL}(L_I+d,d) \approx \frac{2^{L_I}}{L_I!}d^{L_I}.$$ The sparse grid has far fewer grid points than the full grid generated by the tensor product. Furthermore, in a high-dimensional hypercube, a significant fraction of the sparse grid points lie on the boundary. We will compare the number of CGL sparse grid points $\Tilde{N}_{CGL}(L_I+d,d)$ and the number of inner sparse grid points $N$ in Section [3.3](#subsec:algorithm){reference-type="ref" reference="subsec:algorithm"}. In Remark [Remark 1](#rem: 1){reference-type="ref" reference="rem: 1"}, we require that $b(\cdot)$ evaluated on the computational grid be greater than the machine epsilon. Note that for all inner sparse grid points $\{\mathbf{z}^n\}_{n=1}^N$ of the interpolation level $L_I$, each coordinate $z^n_j$ satisfies $$\label{eq: estimate interpolation points} -\cos\left(\frac{\pi}{2^{\ell_j-1}}\right) \le z^n_j \le \cos\left(\frac{\pi}{2^{\ell_j-1}}\right),$$ where $\ell_1 + \dots + \ell_d \le L_I+d$. ## Numerical algorithm {#subsec:algorithm} We interpolate the function $\overline{u}_k: \overline{\Omega} \to \mathbb{R}$ on the CGL sparse grids in the dynamic programming [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"}, where the interpolation data can be formulated as high-dimensional integrals. Since $\overline{u}_k(\mathbf{z}) = 0$ for $\mathbf{z}\in \partial \Omega$, only function evaluations on the inner sparse grid points are required. This greatly reduces the computational complexity, especially for large $d$. 
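A small self-contained sketch enumerates the inner sparse grid points (those with every coordinate strictly inside $(-1,1)$) and checks that, for the illustrative choice $\beta = 1$, the bubble function stays well above the machine epsilon on all of them, in line with Remark 1:

```python
import itertools
import math

EPS = 2.2204e-16  # machine epsilon in double precision

def cgl_nodes(level):
    """Nested Chebyshev-Gauss-Lobatto points X^ell on [-1, 1]."""
    if level == 1:
        return [0.0]
    n = 2 ** (level - 1) + 1
    return [math.cos((j - 1) * math.pi / (n - 1)) for j in range(1, n + 1)]

def inner_sparse_grid(L_I, d):
    """Inner sparse grid points: every coordinate lies in (-1, 1)."""
    q = L_I + d
    pts = set()
    for levels in itertools.product(range(1, q - d + 2), repeat=d):
        if sum(levels) <= q:
            grids = [[round(x, 12) for x in cgl_nodes(l) if abs(x) < 1.0]
                     for l in levels]
            pts.update(itertools.product(*grids))
    return pts

pts2 = inner_sparse_grid(5, 2)
print(len(pts2))  # 81 for d = 2, L_I = 5

# With beta = 1, b(z) = prod_j (1 - z_j^2) is far above machine epsilon
# on every inner grid point, consistent with the coordinate bound above.
b_min = min(math.prod(1.0 - z * z for z in p) for p in pts2)
print(b_min > EPS)  # True
```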
Indeed, since $\Tilde{\mathbf{X}}_{k+1} = \Tilde{\mathbf{X}}_k + \mathbf{Y}$ with $\mathbf{Y} \sim \mathcal{N}(Q^\top (r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1})\Delta t, \Lambda \Delta t)$, $$\label{eq: pts next step} \mathbf{Z}_{k+1} = \psi (\psi^{-1}(\mathbf{Z}_k) + \mathbf{Y}).$$ For a fixed interpolation knot $\mathbf{z} \in \Omega$, we denote $$\label{eq: def of integrand} v_{k+1}^{\mathbf{z}}(\mathbf{Y}) = \max \left( H\big(\psi (\psi^{-1}(\mathbf{z}) + \mathbf{Y}) \big), \frac{u_{k+1}}{b} \big( \psi (\psi^{-1}(\mathbf{z}) + \mathbf{Y}) \big) \right),$$ and the probability density of $\mathbf{Y}$ as $\rho(\mathbf{y}) = \prod_{i=1}^d \rho_i(y_i)$. Following [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"}, the interpolation data $u_k(\mathbf{z})$ is given by $$\begin{split} u_k(\mathbf{z}) = e^{-r\Delta t}\mathbb{E}[v_{k+1}^{\mathbf{z}}(\mathbf{Y})] b(\mathbf{z}) = e^{-r\Delta t}b(\mathbf{z}) \int_{\mathbb{R}^d} v_{k+1}^{\mathbf{z}}(\mathbf{y}) \rho(\mathbf{y}) \,\mathrm{d}\mathbf{y}, \end{split}$$ where the last integral can be computed by any high-dimensional quadrature method, e.g., the Monte Carlo (MC) method, the quasi-Monte Carlo (QMC) method, or sparse grid quadrature. 1. The **Monte Carlo method** approximates the integral by averaging random samples of the integrand $$\label{eq: MC} \int_{\mathbb{R}^d} v_{k+1}^{\mathbf{z}}(\mathbf{y}) \rho(\mathbf{y}) \,\mathrm{d}\mathbf{y} \approx \frac{1}{M}\sum_{m=1}^M v_{k+1}^{\mathbf{z}}(\mathbf{y}^m),$$ where $\{\mathbf{y}^m\}_{m=1}^M$ are independent and identically distributed (i.i.d.) random samples drawn from the distribution $\rho(\mathbf{y})$. 2. The **Quasi-Monte Carlo method** takes the same form as [\[eq: MC\]](#eq: MC){reference-type="eqref" reference="eq: MC"}, but $\{\mathbf{y}^m\}_{m=1}^M$ are obtained by transforming QMC points. 
By changing variables $\mathbf{x} = \Phi(\mathbf{y})$, with $\Phi$ being the cumulative distribution function (CDF) of $\mathbf{Y}$, we have $$\begin{aligned} \int_{\mathbb{R}^d} v_{k+1}^{\mathbf{z}}(\mathbf{y}) \rho(\mathbf{y}) \,\mathrm{d}\mathbf{y} &= \int_{[0,1]^d} v_{k+1}^{\mathbf{z}}\left(\Phi^{-1}(\mathbf{x}) \right)\,\mathrm{d}\mathbf{x}\nonumber \\ &\approx \frac{1}{M}\sum_{m=1}^M v_{k+1}^{\mathbf{z}}\left(\Phi^{-1}(\mathbf{x}^m) \right)= \frac{1}{M}\sum_{m=1}^M v_{k+1}^{\mathbf{z}}(\mathbf{y}^m), \label{eq: QMC} \end{aligned}$$ where $\{\mathbf{x}^m\}_{m=1}^M$ are QMC points taken from a low-discrepancy sequence, e.g., the Sobol sequence or sequences generated by lattice rules. Note that other transformations $\Phi$ are also available for designing QMC approximations [@kuo2016practical]. 3. **Sparse grid quadrature** approximates the integral by a combination of tensor products of univariate quadrature rules. For integration with a Gaussian measure, several types of sparse grids are available, including Gauss-Hermite, Genz-Keister, and weighted Leja points [@piazzola.tamellini:SGK]. Let $(\omega_m,\mathbf{y}^m)_{m=1}^M$ be the quadrature weights and points associated with the anisotropic Gaussian distribution of $\mathbf{Y}$. Then the integral is computed by $$\label{eq: SG quadrature} \int_{\mathbb{R}^d} v_{k+1}^{\mathbf{z}}(\mathbf{y}) \rho(\mathbf{y}) \,\mathrm{d}\mathbf{y} \approx \sum_{m=1}^M \omega_m v_{k+1}^{\mathbf{z}}(\mathbf{y}^m).$$ By employing the transformation between the asset price $\mathbf{S}$ and the bounded variable $\mathbf{z}$, the sparse grid interpolation points and the quadrature points are shown in Fig. [\[fig: interp_quad_pts\]](#fig: interp_quad_pts){reference-type="ref" reference="fig: interp_quad_pts"}. 
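A minimal sketch contrasting the MC and QMC estimators on a hypothetical smooth integrand with a known Gaussian expectation, $\mathbb{E}[\cos(Y_1)\cos(Y_2)] = e^{-1}$ for $\mathbf{Y}\sim\mathcal{N}(\mathbf{0}, I_2)$; the shifted Fibonacci lattice is one concrete lattice-rule choice for illustration, not the rule used in the experiments:

```python
import math
import random
import statistics

# Hypothetical test integrand with exact expectation e^{-1}.
exact = math.exp(-1.0)
integrand = lambda y1, y2: math.cos(y1) * math.cos(y2)

# Monte Carlo: average over i.i.d. Gaussian samples (eq: MC).
rng = random.Random(0)
M = 100_000
mc = sum(integrand(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(M)) / M

# Quasi-Monte Carlo: shifted Fibonacci lattice points in [0,1]^2,
# mapped to Gaussian samples through the inverse CDF Phi^{-1} (eq: QMC).
inv_cdf = statistics.NormalDist().inv_cdf
n, z = 2584, 1597  # consecutive Fibonacci numbers
qmc = sum(
    integrand(inv_cdf((m + 0.5) / n % 1.0),
              inv_cdf(((m + 0.5) * z / n) % 1.0))
    for m in range(n)
) / n

print(abs(mc - exact) < 0.05)   # True
print(abs(qmc - exact) < 0.05)  # True
```

The half-shift $(m+0.5)/n$ keeps the lattice points away from $0$ and $1$, where $\Phi^{-1}$ diverges.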
![](interp_quad_pts_a.eps){#fig: interp_quad_pts_a width=".95\\textwidth"}   ![](interp_quad_pts_b.eps){#fig: interp_quad_pts_b width="\\textwidth"} Now we can describe the procedure of iteratively interpolating the function $\overline{u}_k$ in Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"}:

1. Generate the CGL sparse grids in $\overline{\Omega}$.
2. Find the grid points in $\Omega$, which are denoted by $\{\mathbf{z}^n\}_{n=1}^N$.
3. Generate quadrature points $\{\mathbf{y}^m\}_{m=1}^M$ and weights $\{w_m\}_{m=1}^M$ with Gaussian density $\mathcal{N}(Q^\top (r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1})\Delta t, \Lambda \Delta t)$.
4. **Dynamic programming:** compute the payoff $H_{n,m} = H(\psi(\psi^{-1}(\mathbf{z}^n) + \mathbf{y}^m))$, set the terminal value $V_{n,m} = H_{n,m}$, and iterate backward in time according to [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"}.
5. Return $V_0 = \max\left( H(\mathbf{0}), \frac{u_0(\mathbf{0})}{b(\mathbf{0})}\right)$.

Following Remark [Remark 1](#rem: 1){reference-type="ref" reference="rem: 1"}, the next result gives a sufficient condition for the well-definedness of the algorithm. **Proposition 1**. *Let $\varepsilon$ be the machine epsilon. Assume that each coordinate $y_j^m$ of the sampling or quadrature points $\{\mathbf{y}^m\}_{m=1}^M$ of the random variable $\mathbf{Y} \sim \mathcal{N}(Q^\top (r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1})\Delta t, \Lambda \Delta t)$ satisfies $$\label{eq: assumption} \max_{m=1,2,\dots, M} |y^m_j| \le C \sqrt{\lambda_j \Delta t},$$ where $\lambda_j$ is the $j$-th diagonal element of $\Lambda$, and $C$ is a constant. 
If $\beta = 1$ and $L_I, L$ and $\Delta t$ are chosen such that $$\frac{4^{L_I+d}}{\pi^{2d}} \exp \Big( 2CL\sqrt{\Delta t}\sum_{j=1}^d \sqrt{\lambda_j} \Big) \le \frac{1}{\varepsilon},$$ then for all sampling or quadrature points $\mathbf{Z}_{k+1}^{n,m}$ of $\mathbf{Z}_{k+1}$, $n=1,2,\dots,N, m = 1,2,\dots,M$, we have $$b(\mathbf{Z}_{k+1}^{n,m}) > \varepsilon.$$* *Proof.* Using [\[eq: pts next step\]](#eq: pts next step){reference-type="eqref" reference="eq: pts next step"}, we have $\mathbf{Z}_{k+1}^{n,m} = \psi(\psi^{-1}(\mathbf{z}^n) + \mathbf{y}^m)$, where $\{\mathbf{z}^n\}_{n=1}^N$ are inner sparse grid interpolation points and $\{\mathbf{y}^m\}_{m=1}^M$ are sampling or quadrature points of $\mathbf{Y}$. Thus the $j$-th coordinate of $\mathbf{Z}_{k+1}^{n,m}$ is given by $$\begin{aligned} Z_{k+1}^{n,m,j} = 1-\frac{2}{1+\eta_j^{n,m}}, \quad \mbox{with } \eta_j^{n,m}:=\frac{1+z^n_j}{1-z^n_j}\exp(2Ly^m_j). \end{aligned}$$ For $\beta = 1$, since $1-z_j^2 \ge 1-|z_j|$, to ensure $b(\mathbf{z}) = \prod_{j=1}^d (1-z_j^2) > \varepsilon$ it suffices to prove $\prod_{j=1}^d (1-|z_j|) >\varepsilon$. Without loss of generality, consider the case $Z_{k+1}^{n,m,j} \to 1$, i.e., $\eta_j^{n,m}\to+\infty$. Clearly, we have $$\prod_{j=1}^d (1-Z_{k+1}^{n,m,j}) = \prod_{j=1}^d \frac{2}{1+\eta_j^{n,m}} > \varepsilon \quad \Leftrightarrow \quad \prod_{j=1}^d \left( 1+\eta_j^{n,m} \right) < \frac{2^d}{\varepsilon}.$$ Since $\eta_j^{n,m}\ge 1$ in this regime, we have $1+\eta_j^{n,m}\le 2\eta_j^{n,m}$, so it suffices to have $\prod_{j=1}^d \eta_j^{n,m} <\varepsilon^{-1}$. 
Using the relationship $$\frac{1+\cos x}{1-\cos x} = \frac{1}{\tan^2 \frac{x}{2}} \le \frac{4}{x^2} \quad \text{ for } x\in(0,\pi],$$ which follows from $\tan \frac{x}{2} \ge \frac{x}{2}$, the inequality [\[eq: estimate interpolation points\]](#eq: estimate interpolation points){reference-type="eqref" reference="eq: estimate interpolation points"}, and the assumption [\[eq: assumption\]](#eq: assumption){reference-type="eqref" reference="eq: assumption"}, we obtain $$\begin{split} \prod_{j=1}^d \left( \frac{1+z^n_j}{1-z^n_j}\exp(2Ly^m_j) \right) &\le \prod_{j=1}^d \frac{1+\cos\left(\frac{\pi}{2^{\ell_j-1}}\right)}{1-\cos\left(\frac{\pi}{2^{\ell_j-1}}\right)} \exp(2CL \sqrt{\lambda_j \Delta t}) \\ &\le \left(\frac{4}{\pi^2}\right)^d 4^{\sum_{j=1}^d\ell_j - d} \exp \left( 2CL\sqrt{\Delta t}\sum_{j=1}^d \sqrt{\lambda_j} \right). \end{split}$$ Since $\sum_{j=1}^d\ell_j\leq L_I+d$, this proves the desired assertion. ◻ Last, we discuss the computational complexity of the algorithm. By introducing the bubble function, the interpolation functions $\overline{u}_k$ have zero boundary values. Thus, at each time step, we require $N$ evaluations of $u_k$ only on the inner sparse grid points, where each evaluation is approximated by equation [\[eq: MC\]](#eq: MC){reference-type="eqref" reference="eq: MC"}, [\[eq: QMC\]](#eq: QMC){reference-type="eqref" reference="eq: QMC"} or [\[eq: SG quadrature\]](#eq: SG quadrature){reference-type="eqref" reference="eq: SG quadrature"} with $M$ sampling or quadrature points. The number of inner sparse grid points $N$ at level $L_I = 5$ in dimension $d$ is listed in Table [3](#tab: num SG){reference-type="ref" reference="tab: num SG"}, which also shows the counts for tensor-product full grids $\Tilde{N}_{full}$ and for CGL sparse grids with boundary points $\Tilde{N}_{CGL}$. Unlike for full grids, the number of sparse grid points does not increase exponentially as the dimension increases. In particular, the inner sparse grid points account for less than three percent of the CGL sparse grid points for $d=10$. 
This represents a dramatic reduction of the evaluation points.

  $d$   $\Tilde{N}_{full}$   $\Tilde{N}_{CGL}$   $N$
  ----- -------------------- ------------------- ------
  2     961                  145                 81
  3     29791                441                 151
  4     923521               1105                241
  5     2.8629e+7            2433                351
  6     8.8750e+8            4865                481
  7     2.7513e+10           9017                631
  8     8.5289e+11           15713               801
  9     2.6440e+13           26017               991
  10    8.1963e+14           41265               1201

![The number of tensor-product full grids $\Tilde{N}_{full}$, the number of Chebyshev-Gauss-Lobatto sparse grids $\Tilde{N}_{CGL}$, and the number of inner sparse grids $N$ in the $d$-dimensional hypercube with level $L_I= 5$.](num_compare.eps){#tab: num SG width="70%"}

# Smoothness analysis {#sec:analysis}

In this section, we analyze the smoothness of the function $u_k$, in order to justify the use of SGPI and thereby provide a solid theoretical underpinning for its performance. First, we fix some notation. Let $\bm{\alpha} = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}_0^d$ be the standard multi-index with $|\bm{\alpha}| = \alpha_1 + \dots + \alpha_d$. Then $\bm{\alpha} + \bm{\gamma} := (\alpha_1 + \gamma_1, \dots, \alpha_d + \gamma_d)$, $\bm{\alpha}! := \prod_{j=1}^d \alpha_j!$, and $\bm{\gamma} \preceq \bm{\alpha}$ denotes that each component of the multi-index $\bm{\gamma} = (\gamma_1,\dots,\gamma_d)$ satisfies $\gamma_j \le \alpha_j$. We define the differential operator $D^{\bm{\alpha}}$ by $D^{\bm{\alpha}}f := \frac{\partial^{|\bm{\alpha}|}f}{\prod_{j=1}^d \partial x_j^{\alpha_j}}$. 
For an open set $D \subset \mathbb{R}^d$ and $r\in \mathbb{N}$, the space $\mathcal{C}^r(\overline{D})$ denotes the space of functions whose derivatives of order up to $r$ are continuous on the closure $\overline{D}$ of $D$, i.e., $$\mathcal{C}^r(\overline{D}) = \{f: \overline{D} \to \mathbb{R}\mid D^{\bm{\alpha}}f \text{ continuous if }|\bm{\alpha}|\le r \}.$$ In particular, $\mathcal{C}^r(\overline{\mathbb{R}^d})$ consists of functions $f\in \mathcal{C}^r(\mathbb{R}^d)$ such that $D^{\bm{\alpha}}f$ is bounded and uniformly continuous on $\mathbb{R}^d$ for all $|\bm{\alpha}|\le r$ [@adams2003sobolev]. We also define $\mathcal{C}^\infty(\overline{\mathbb{R}^d})$ as the intersection of all $\mathcal{C}^r(\overline{\mathbb{R}^d})$ for $r\in \mathbb{N}$, i.e., $\mathcal{C}^\infty(\overline{\mathbb{R}^d}):=\bigcap_{r=1}^{\infty}\mathcal{C}^r(\overline{\mathbb{R}^d})$. The analysis of SGPI employs the space of functions on $\overline{\Omega}:=[-1,1]^d$ with bounded mixed derivatives [@bungartz2004sparse]. Let $F_d^r(\overline{\Omega})$ be the set of all functions $f:\overline{\Omega}\to \mathbb{R}$ such that $D^{\bm{\alpha}}f$ is continuous for all $\bm{\alpha}\in \mathbb{N}_0^d$ with $\alpha_i\le r$ for all $i$, i.e., $$\label{eq: function space} F_d^r(\overline{\Omega}) := \{f:\overline{\Omega} \to \mathbb{R}\mid D^{\bm{\alpha}} f \text{ continuous if }\alpha_i\le r \text{ for all }i\}.$$ We equip the space $F_d^r(\overline{\Omega})$ with the norm $\|f\|_{F_d^r(\overline{\Omega})} := \max\{\|D^{\bm{\alpha}} f\|_{L^\infty(\overline{\Omega})} \mid \bm{\alpha} \in \mathbb{N}_0^d, \alpha_i\le r\}$. For the sake of completeness, we present in Proposition [Proposition 2](#prop: error of sparse-interp){reference-type="ref" reference="prop: error of sparse-interp"} the interpolation error of the SGPI described in Section [3.2](#subsec:sg){reference-type="ref" reference="subsec:sg"}. **Proposition 2** ([@barthelmann2000high Theorem 8, Remark 9]). 
*For $f\in F_d^r(\overline{\Omega})$, there exists a constant $c_{d,r}$ depending only on $d$ and $r$ such that $$\|A(q,d)(f) - f\|_{L^\infty(\overline{\Omega})} \le c_{d,r} \cdot \Tilde{N}^{-r} \cdot (\log \Tilde{N})^{(r+1)(d-1)} \|f\|_{F_d^r(\overline{\Omega})},$$ where $\Tilde{N} = \Tilde{N}_{CGL}(q,d)$ is the number of CGL sparse grid points.* To provide theoretical guarantees for applying SGPI, we next prove $\overline{u}_k \in F_d^r(\overline{\Omega})$ in Theorem [Theorem 1](#thm: bounded-mixed){reference-type="ref" reference="thm: bounded-mixed"}. This result follows from Lemma [Lemma 1](#lem: 1){reference-type="ref" reference="lem: 1"} [^3] and Lemma [Lemma 2](#lem: 2){reference-type="ref" reference="lem: 2"}. **Lemma 1** ($f_{K-1} \in \mathcal{C}^\infty(\overline{\mathbb{R}^d})$). *Let $g$ be the payoff of a put option. Then the continuation value function $f_{K-1}$ defined in [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} is infinitely differentiable, and it and all its derivatives up to order $r$, for any $r\in \mathbb{N}$, are bounded and uniformly continuous, i.e., $f_{K-1} \in \mathcal{C}^\infty(\overline{\mathbb{R}^d})$.* *Proof.* Consider the conditional expectation without the discount factor $e^{-r\Delta t}$: $$\label{eq: continuation value as integral} f(\mathbf{x}) = \mathbb{E}[G(\Tilde{X}_K) | \Tilde{X}_{K-1} = \mathbf{x}] = \int_{\mathbb{R}^d} G(\mathbf{y}) p(\mathbf{x},\mathbf{y})\,\mathrm{d}\mathbf{y},$$ where $G(\cdot) = g(\phi(\cdot))$ is the payoff with respect to the rotated log-price, and $p(\mathbf{x},\mathbf{y})$ is the density of the Gaussian distribution $\mathcal{N}(\mathbf{x} + Q^\top (r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1})\Delta t, \Lambda \Delta t)$, that is, $$\label{eq: transition density} p(\mathbf{x},\mathbf{y}) = \frac{1}{(2\pi)^{d/2}\sqrt{ \det( \Lambda \Delta t)}}\exp \left( -\frac{1}{2 \Delta t}(\mathbf{y} - \mathbf{x} - \bm{\mu})^\top \Lambda^{-1} 
(\mathbf{y} - \mathbf{x} - \bm{\mu}) \right)$$ with $\bm{\mu} = Q^\top (r - \bm{\delta} - \frac{1}{2}\Sigma^2\mathbf{1})\Delta t$. Since $g$ is the payoff of a put option, $G$ is bounded in $\mathbb{R}^d$. Let $$P(\mathbf{x}) := \frac{1}{(2\pi)^{d/2}\sqrt{ \det( \Lambda \Delta t)}}\exp \left( -\frac{1}{2 \Delta t}(\mathbf{x} + \bm{\mu})^\top \Lambda^{-1} (\mathbf{x} + \bm{\mu}) \right).$$ Then $P\in L^1(\mathbb{R}^d)$ with $\|P\|_1= \int_{\mathbb{R}^d}P(\mathbf{x})\,\mathrm{d}\mathbf{x} = 1$. The representation [\[eq: continuation value as integral\]](#eq: continuation value as integral){reference-type="eqref" reference="eq: continuation value as integral"} is equivalent to $$f(\mathbf{x}) = G*P(\mathbf{x}),$$ where $*$ denotes the convolution operator. Then an application of [@folland1999real Proposition 8.8] implies that $f$ is bounded and uniformly continuous in $\mathbb{R}^d$, and $$\label{eq: boundedness of f} \|f\|_{L^\infty(\mathbb{R}^d)} \le \|G\|_{L^\infty(\mathbb{R}^d)} \|P\|_{L^1(\mathbb{R}^d)} = \|G\|_{L^\infty(\mathbb{R}^d)}.$$ Next, we show that $f$ has bounded first-order partial derivatives for all $i\in \{1,2,\dots,d\}$, and $$\label{eq: 1st derivative} \frac{\partial f}{\partial x_i}(\mathbf{x}) = G*\frac{\partial P}{\partial x_i}(\mathbf{x}).$$ For any fixed $\mathbf{x}_0\in \mathbb{R}^d$, $i\in \{1,2,\dots,d\}$, $$\frac{\partial p}{\partial x_i}(\mathbf{x}_0, \mathbf{y}) = \lim_{\mathbf{x}_n \to \mathbf{x}_0} q_{i,n}(\mathbf{y}),\quad \text{with } q_{i,n}(\mathbf{y}) = \frac{p(\mathbf{x}_n, \mathbf{y}) - p(\mathbf{x}_0, \mathbf{y})}{x_{i,n} - x_{i,0}}.$$ Since the Gaussian density $p(\mathbf{x}, \mathbf{y})$ has bounded partial derivatives $\frac{\partial p}{\partial x_i}(\mathbf{x},\mathbf{y})$ for all $\mathbf{x}, \mathbf{y}$ and $i\in \{1,2,\dots, d\}$, then $$\label{eq: derivative in limit} \begin{split} \frac{\partial f}{\partial x_i}(\mathbf{x}_0) &= \lim_{\mathbf{x}_n \to \mathbf{x}_0} \frac{f(\mathbf{x}_n) - f(\mathbf{x}_0) }{x_{i,n} 
- x_{i,0}} \\ &= \lim_{\mathbf{x}_n \to \mathbf{x}_0} \int_{\mathbb{R}^d} G(\mathbf{y}) \frac{ p(\mathbf{x}_n,\mathbf{y}) - p(\mathbf{x}_0, \mathbf{y}) }{x_{i,n} - x_{i,0}} \,\mathrm{d}\mathbf{y} \\ &= \lim_{\mathbf{x}_n \to \mathbf{x}_0} \int_{\mathbb{R}^d} G(\mathbf{y}) q_{i,n}(\mathbf{y}) \,\mathrm{d}\mathbf{y}. \end{split}$$ Since $G$ is bounded and, for fixed $\mathbf{x}$, the function $p(\mathbf{x},\mathbf{y})$ decays as $\mathcal{O}(e^{-c\|\mathbf{y}\|_2^2})$ at infinity for some $c>0$, the integrand $G(\mathbf{y})q_{i,n}(\mathbf{y})$ is bounded by a Lebesgue integrable function uniformly in $n$. Hence, by the dominated convergence theorem, one can interchange the limit and the integral in the last equation of [\[eq: derivative in limit\]](#eq: derivative in limit){reference-type="eqref" reference="eq: derivative in limit"} and thus $$\frac{\partial f}{\partial x_i}(\mathbf{x}_0) = \int_{\mathbb{R}^d} G(\mathbf{y}) \frac{\partial p}{\partial x_i}(\mathbf{x}_0,\mathbf{y}) \,\mathrm{d}\mathbf{y} = G*\frac{\partial P}{\partial x_i}(\mathbf{x}_0).$$ By [@folland1999real Proposition 8.8], we obtain that $\frac{\partial f}{\partial x_i}$ is bounded and uniformly continuous in $\mathbb{R}^d$, and $$\left\|\frac{\partial f}{\partial x_i}\right\|_{L^\infty(\mathbb{R}^d)} \le \left\|G\right\|_{L^\infty(\mathbb{R}^d)} \left\|\frac{\partial P}{\partial x_i}\right\|_{L^1(\mathbb{R}^d)}.$$ Using the dominated convergence theorem, we derive the bounded second-order partial derivatives $$\frac{\partial^2 f}{\partial x_j \partial x_i}(\mathbf{x}) = G*\frac{\partial^2 P}{\partial x_j \partial x_i}(\mathbf{x}),$$ since for fixed $\mathbf{x}$, the function $$\frac{\partial p}{\partial x_i}(\mathbf{x}, \mathbf{y}) = -\frac{(x_i - y_i + \mu_i)}{\lambda_i \Delta t} p(\mathbf{x},\mathbf{y})$$ decays as $\mathcal{O}(\|\mathbf{y}\|_2 e^{-c\|\mathbf{y}\|_2^2})$ at infinity. 
By [@folland1999real Proposition 8.8], we obtain that $\frac{\partial^2 f}{\partial x_j \partial x_i}$ is bounded and uniformly continuous on $\mathbb{R}^d$, and $$\left\|\frac{\partial^2 f}{\partial x_j \partial x_i}\right\|_{L^\infty(\mathbb{R}^d)} \le \left\|G\right\|_{L^\infty(\mathbb{R}^d)}\left \|\frac{\partial^2 P}{\partial x_j \partial x_i}\right\|_{L^1(\mathbb{R}^d)}.$$ Repeating the argument yields the boundedness and uniform continuity of the partial derivatives of $f$ up to order $r$ for any $r\in \mathbb{N}$. Therefore, $f_{K-1} \in \mathcal{C}^\infty(\overline{\mathbb{R}^d})$. ◻ **Lemma 2** ($f_k\in \mathcal{C}^\infty(\overline{\mathbb{R}^d})$ for $k=0,1, \dots, K-1$). *Let $g$ be the payoff of a put option. Then the continuation value functions $\{f_k\}_{k=0}^{K-1}$ defined in [\[eq: DP rotated log-price\]](#eq: DP rotated log-price){reference-type="eqref" reference="eq: DP rotated log-price"} are infinitely differentiable, and they and all their derivatives up to order $r$, for any $r\in \mathbb{N}$, are bounded and uniformly continuous, i.e., $f_k\in \mathcal{C}^\infty(\overline{\mathbb{R}^d})$ for $k=0,1, \dots, K-1$.* *Proof.* Let the value at $t_{k+1}$ be $$V_{k+1}(\mathbf{y}) = \max\left( G(\mathbf{y}), f_{k+1}(\mathbf{y}) \right), \quad \text{ for } k = 0,1,\dots, K-2,$$ where $G(\cdot) = g(\phi(\cdot))$ is bounded for a put option. By [\[eq: boundedness of f\]](#eq: boundedness of f){reference-type="eqref" reference="eq: boundedness of f"}, $f_{K-1}$ is bounded. Hence, $V_{K-1}$ is bounded in $\mathbb{R}^d$. Using the argument of Lemma [Lemma 1](#lem: 1){reference-type="ref" reference="lem: 1"}, we obtain that $$f_{K-2}(\mathbf{x}) = e^{-r\Delta t}\mathbb{E}[V_{K-1}(\Tilde{\mathbf{X}}_{K-1}) | \Tilde{\mathbf{X}}_{K-2} = \mathbf{x}] = e^{-r\Delta t} V_{K-1}*P(\mathbf{x})$$ is infinitely differentiable, bounded and uniformly continuous together with all its derivatives up to order $r$ for any $r\in \mathbb{N}$. 
Since $\|f_{K-2}\|_{L^\infty(\mathbb{R}^d)} \le e^{-r\Delta t} \|V_{K-1}\|_{L^\infty(\mathbb{R}^d)} \|P\|_{L^1(\mathbb{R}^d)}$, the value $V_{K-2}$ is bounded in $\mathbb{R}^d$. Similarly, we can obtain $f_k\in \mathcal{C}^\infty(\overline{\mathbb{R}^d})$ for all $k=0,1, \dots, K-1$. ◻ Using the smoothness of the continuation value functions in $\mathbb{R}^d$, now we can establish the smoothness of the extended interpolation function $\overline{u}_k$ in the bounded domain $\overline{\Omega} = [-1,1]^d$. **Theorem 1** ($\overline{u}_k \in F_d^r(\overline{\Omega}) \text{ for } k=0,1,\dots,K-1$). *Let $g$ be the payoff of a put option. Let $u_k: \Omega \to \mathbb{R}$ be the function defined in [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"} with $u_k(\mathbf{z}) = f_k(\psi^{-1}(\mathbf{z}))b(\mathbf{z})$. Let $b(\cdot)$ be the bubble function of the form $\eqref{eq: bubble function}$ and $\psi^{-1}$ be the mapping between unbounded and bounded domains defined in [\[eq: mapping for unbounded\]](#eq: mapping for unbounded){reference-type="eqref" reference="eq: mapping for unbounded"}. If $\beta\ge r$ with $r\in \mathbb{N}$, then $u_k$ can be extended to $\overline{u}_k: \overline{\Omega} \to \mathbb{R}$ such that $\overline{u}_k$ has bounded mixed derivatives up to the order $r$, i.e., $$\overline{u}_k \in F_d^r(\overline{\Omega}),\quad \text{ for } k=0,1,\dots,K-1.$$ Furthermore, for $\bm{\alpha}\in \mathbb{N}_0^d$ with $\alpha_j \le r$, there holds $$\label{eq: exact mixed derivatives} \begin{aligned} D^{\bm{\alpha}}\overline{u}_k (\mathbf{z}) = \sum_{\substack{\bm{\gamma} + \bm{\zeta} = \bm{\alpha}\\ \bm{\gamma} \preceq \bm{\alpha},\bm{\zeta} \preceq \bm{\alpha} }} \frac{\bm{\alpha}!}{\bm{\gamma}! 
\bm{\zeta}!} D^{\bm{\gamma}}f_k \prod_{j=1:d, \gamma_j = 0} \left( (1-z_j^2)^{\beta - \zeta_j}Q_{\zeta_j}(z_j)\right) \\ \times\prod_{j=1:d, \gamma_j\ge 1} \left(\frac{1}{L} (1-z_j^2)^{\beta - \zeta_j - \gamma_j} Q_{\zeta_j}(z_j)P_{\gamma_j}(z_j) \right), \end{aligned}$$ where $Q_{\zeta_j}$ and $P_{\gamma_j}$ are univariate polynomials of degrees $\zeta_j$ and $\gamma_j-1$ defined on $[-1,1]$.* *Proof.* First, for the multi-index $\bm{\alpha}$ with $|\bm{\alpha}| = 0$, $$\begin{aligned} D^{\bm{\alpha}} u_k(\mathbf{z}) = u_k(\mathbf{z}) = F_k(\mathbf{z})b(\mathbf{z}) = f_k(\psi^{-1}(\mathbf{z}))b(\mathbf{z}) \end{aligned}$$ is continuous and bounded in $\Omega$ due to the continuity and boundedness of $f_k$ in Lemma [Lemma 2](#lem: 2){reference-type="ref" reference="lem: 2"}. Since $u_k$ approaches zero as $\mathbf{z}\to \partial\Omega$, we can define $\overline{u}_k|_{\partial \Omega} = 0$, and $\overline{u}_k|_{\Omega} = u_k$ for each $k=0,1,\dots, K-1$. Then $\overline{u}_k$ is continuous in the closure $\overline{\Omega}$. The following argument holds for all $k = 0,1,\dots, K-1$. Thus, we drop the subscript $k$ and denote $u:= u_k$, $F:= F_k$ and $f:= f_k$. For any multi-index $\bm{\alpha}\in \mathbb{N}_0^d$, Leibniz's rule implies $$\label{eq: product rule} D^{\bm{\alpha}}u = \sum_{\substack{\bm{\gamma} + \bm{\zeta} = \bm{\alpha}\\ \bm{\gamma} \preceq \bm{\alpha},\bm{\zeta} \preceq \bm{\alpha} }} \frac{\bm{\alpha}!}{\bm{\gamma}! \bm{\zeta}!} D^{\bm{\gamma}}F \cdot D^{\bm{\zeta}}b.$$ For any fixed $r\in \mathbb{N}$, pick $\beta \ge r$. Consider a multi-index $\bm{\alpha}$ with $\alpha_i\leq r$, $i=1,\ldots,d$. Let $b(\mathbf{z}) = \prod_{j=1}^d b_j(z_j)$ with $b_j(z_j) = (1-z_j^2)^\beta$. 
Then direct computation yields $$\label{eq: derivative of b} D^{\bm{\zeta}}b = \prod_{j=1}^d \frac{\mathrm{d}^{\zeta_j}b_j}{\mathrm{d} z_j^{\zeta_j}} \quad \mbox{with } \frac{\mathrm{d}^i b_j}{\mathrm{d} z_j^i} := (1-z_j^2)^{\beta-i} Q_i(z_j)$$ where $Q_i(z_j)$, $i=0,1,\ldots,r$, is a polynomial of degree $i$. Note that $F(\mathbf{z}) = f(\psi^{-1}(\mathbf{z}))$ depends only on $z_j$ through $x_j$ with $\mathbf{x} = \psi^{-1}(\mathbf{z})$. Then we have $$\label{eq: derivative of F} D^{\bm{\gamma}}F = D^{\bm{\gamma}}\left(f(\psi^{-1}(\mathbf{z}))\right) = \frac{\partial^{|\bm{\gamma}|}f}{\prod_{j=1}^d \partial x_j^{\gamma_j}} \prod_{j=1:d, \gamma_j\ge 1}\frac{\mathrm{d}^{\gamma_j}x_j}{\mathrm{d} z_j^{\gamma_j}} \quad \mbox{with } \frac{\mathrm{d}^\ell x_j}{\mathrm{d} z_j^\ell} := \frac{1}{L} (1-z_j^2)^{-\ell} P_\ell(z_j),$$ where $P_{\ell}(z_j)$ is a polynomial of degree $\ell-1$ for each $\ell = 1,2,\dots, r$. Combining the last three identities yields the assertion [\[eq: exact mixed derivatives\]](#eq: exact mixed derivatives){reference-type="eqref" reference="eq: exact mixed derivatives"}. Since $\beta - \zeta_j - \gamma_j = \beta - \alpha_j \ge 0$, by Lemma [Lemma 2](#lem: 2){reference-type="ref" reference="lem: 2"}, we deduce that $D^{\bm{\alpha}}u_k$ is bounded. By defining the continuous extension of $D^{\bm{\alpha}}u_k$ using [\[eq: exact mixed derivatives\]](#eq: exact mixed derivatives){reference-type="eqref" reference="eq: exact mixed derivatives"}, we obtain that $\overline{u}_k \in F_d^r(\overline{\Omega})$ for $k=0,1,\dots, K-1$. ◻ To bound $\overline{u}_k$ in the $F_d^r(\overline{\Omega})$ norm, we need suitable estimates of polynomials $Q_i$ and $P_{\ell}$ in [\[eq: exact mixed derivatives\]](#eq: exact mixed derivatives){reference-type="eqref" reference="eq: exact mixed derivatives"}. **Lemma 3**. 
*For fixed $\beta\ge r$ with $r\in \mathbb{N}$, $i = 1,2,\dots, r$ and $\ell = 1,2,\dots,r$, the polynomials $Q_i$ defined in [\[eq: derivative of b\]](#eq: derivative of b){reference-type="eqref" reference="eq: derivative of b"} and $P_{\ell}$ defined in [\[eq: derivative of F\]](#eq: derivative of F){reference-type="eqref" reference="eq: derivative of F"} satisfy the following estimates $$\label{eq: estimate Q-P} \|Q_i\|_{L^\infty([-1,1])} \le \prod_{n=0}^{i-1} \left( n^2 + 2(\beta - n) \right) \quad \mbox{and}\quad \|P_\ell\|_{L^\infty([-1,1])} \le \prod_{n=0}^{\ell-1} (n^2+1).$$* *Proof.* By [\[eq: derivative of b\]](#eq: derivative of b){reference-type="eqref" reference="eq: derivative of b"}, $\|Q_0\|_{L^\infty([-1,1])} = 1$. Using the identity $\frac{\mathrm{d}}{\mathrm{d} z_j} \left( \frac{\mathrm{d}^{i-1}b_j}{\mathrm{d}z_j^{i-1}} \right) = \frac{\mathrm{d}^i b_j}{\mathrm{d} z_j^i}$, for $i\ge 1$, and the defining identity in [\[eq: derivative of b\]](#eq: derivative of b){reference-type="eqref" reference="eq: derivative of b"}, we obtain $$\frac{\mathrm{d}}{\mathrm{d} z_j} \left( (1-z_j^2)^{\beta-i+1}Q_{i-1}(z_j) \right) = (1-z_j^2)^{\beta-i}Q_i(z_j).$$ Carrying out the differentiation on the left-hand side and dividing both sides by $(1-z_j^2)^{\beta-i}$, we obtain $$(\beta-i+1)(-2z_j)Q_{i-1}(z_j) + (1-z_j^2) Q_{i-1}'(z_j) = Q_i(z_j).$$ By the triangle inequality, we derive $$\label{eq:111} \|Q_i\|_{L^\infty([-1,1])} \le 2(\beta-i+1) \|Q_{i-1}\|_{L^\infty([-1,1])} + \|Q_{i-1}'\|_{L^\infty([-1,1])}.$$ Since $Q_{i-1}$ is a polynomial of degree $i-1$ defined on $[-1,1]$, a direct application of the Markov brothers' inequality [@achieser2013theory p. 
300] yields $$\|Q_{i-1}'\|_{L^\infty([-1,1])} \le (i-1)^2 \|Q_{i-1}\|_{L^\infty([-1,1])}.$$ Plugging this estimate into [\[eq:111\]](#eq:111){reference-type="eqref" reference="eq:111"} leads to the recurrence relation $$\label{eq: recursion Q} \|Q_i\|_{L^\infty([-1,1])} \le \left((i-1)^2 + 2(\beta-(i-1)) \right) \|Q_{i-1}\|_{L^\infty([-1,1])}.$$ Upon noting $\|Q_0\|_{L^\infty([-1,1])} = 1$, we derive the desired estimate on $Q_i$. The estimate on $P_\ell$ follows similarly. For $\ell = 1$, $\|P_1\|_{L^\infty([-1,1])} = 1$. For $\ell>1$, using [\[eq: derivative of F\]](#eq: derivative of F){reference-type="eqref" reference="eq: derivative of F"}, we obtain $$\frac{\mathrm{d}}{\mathrm{d} z_j} \left( \frac{\mathrm{d}^{\ell-1}x_j}{\mathrm{d} z_j^{\ell-1}} \right) = \frac{\mathrm{d}}{\mathrm{d} z_j} \left( \frac{1}{L}(1-z_j^2)^{-\ell+1}P_{\ell-1}(z_j) \right) = \frac{1}{L} (1-z_j^2)^{-\ell} P_\ell(z_j) = \frac{\mathrm{d}^\ell x_j}{\mathrm{d} z_j^{\ell}}.$$ This implies $$2(\ell-1)z_j P_{\ell-1}(z_j) + (1-z_j^2) P_{\ell-1}'(z_j) = P_\ell(z_j).$$ Since $P_{\ell-1}$ is a polynomial of degree $\ell-2$, by the triangle inequality and the Markov brothers' inequality, we obtain the recurrence relation $$\label{eq: recursion P} \|P_\ell\|_{L^\infty([-1,1])} \le \left( (\ell-1)^2 + 1 \right) \|P_{\ell-1}\|_{L^\infty([-1,1])}.$$ Together with the identity $\|P_1\|_{L^\infty([-1,1])} = 1$, we prove the desired assertion on $P_\ell$. ◻ We now estimate the norms $\|D^{\bm{\alpha}}u_k\|_{L^\infty(\overline{\Omega})}$ in terms of suitable mixed norms of $f$. 
Here, $$\begin{aligned} \|f\|_{mix,r} := \max \left\{ \left\|\frac{\partial^{|\bm{\alpha}|} f}{\prod_{j=1}^d \partial x_j^{\alpha_j}}\right\|_{L^\infty(\mathbb{R}^d)} : \bm{\alpha}\in \mathbb{N}_0^d\text{ with } \alpha_j \le r \right\}.\end{aligned}$$ **Theorem 2** (Upper bounds on $\|\overline{u}_k \|_{F_d^1(\overline{\Omega})}$ and $\|\overline{u}_k \|_{F_d^2(\overline{\Omega})}$ for $k=0,1,\dots,K-1$). *Let $g$ be the payoff of a put option, and $u_k: \Omega \to \mathbb{R}$ the function defined in [\[eq: DP bubble\]](#eq: DP bubble){reference-type="eqref" reference="eq: DP bubble"} with $u_k(\mathbf{z}) = f_k(\psi^{-1}(\mathbf{z}))b(\mathbf{z})$. Let $b(\cdot)$ be the bubble function of the form [\[eq: bubble function\]](#eq: bubble function){reference-type="eqref" reference="eq: bubble function"} and $\psi^{-1}$ the mapping between unbounded and bounded domains defined in [\[eq: mapping for unbounded\]](#eq: mapping for unbounded){reference-type="eqref" reference="eq: mapping for unbounded"}. 
Then for $k\in \{0,1,\dots,K-1\}$, if $\beta = 1$ in [\[eq: bubble function\]](#eq: bubble function){reference-type="eqref" reference="eq: bubble function"}, there holds $$\|\overline{u}_k\|_{F_d^1(\overline{\Omega})} \le \left(2+ \frac{1}{L} \right)^{d}\|f_k\|_{mix,1}.$$ If $\beta = 2$ in [\[eq: bubble function\]](#eq: bubble function){reference-type="eqref" reference="eq: bubble function"}, then $$\|\overline{u}_k\|_{F_d^2(\overline{\Omega})} \le \left(24+\frac{12}{L}\right)^d \|f_k\|_{mix,2}.$$* *Proof.* Combining [\[eq: exact mixed derivatives\]](#eq: exact mixed derivatives){reference-type="eqref" reference="eq: exact mixed derivatives"} with [\[eq: estimate Q-P\]](#eq: estimate Q-P){reference-type="eqref" reference="eq: estimate Q-P"} leads to $$\begin{aligned} \label{eq: estimate of mixed derivatives} \|D^{\bm{\alpha}} \overline{u}_k\|_{L^\infty(\overline{\Omega})} &\le \sum_{\substack{\bm{\gamma} + \bm{\zeta} = \bm{\alpha}\\ \bm{\gamma} \preceq \bm{\alpha},\bm{\zeta} \preceq \bm{\alpha} }} \frac{\bm{\alpha}!}{\bm{\gamma}! \bm{\zeta}!} \|D^{\bm{\gamma}}f_k\|_{L^\infty(\mathbb{R}^d)} \prod_{j=1:d, \gamma_j = 0} \|Q_{\zeta_j}\|_{L^\infty([-1,1])}\\ &\times\prod_{j=1:d, \gamma_j\ge 1} \left(\frac{1}{L} \|Q_{\zeta_j}\|_{L^\infty([-1,1])} \|P_{\gamma_j}\|_{L^\infty([-1,1])} \right).\nonumber\end{aligned}$$ It follows from Lemma [Lemma 3](#lem: polynomial estimates){reference-type="ref" reference="lem: polynomial estimates"} that $$\begin{aligned} \|Q_0\|_{L^\infty([-1,1])} = 1, \quad & \|Q_1\|_{L^\infty([-1,1])} \le 2\beta, \quad \|Q_2\|_{L^\infty([-1,1])} \le (2\beta -1)2\beta,\\ \|P_1\|_{L^\infty([-1,1])} = 1, \quad &\|P_2\|_{L^\infty([-1,1])} \le 2.\end{aligned}$$ Next, we deduce the following estimates from [\[eq: estimate of mixed derivatives\]](#eq: estimate of mixed derivatives){reference-type="eqref" reference="eq: estimate of mixed derivatives"}. 
If $\beta = 1$, consider a multi-index $\bm{\alpha}$ with $\alpha_j\in \{0,1\}$ for $j=1,2,\dots, d$; then $$\begin{aligned} \|D^{\bm\alpha}\overline{u}_k\|_{L^\infty(\overline{\Omega})} &\le \sum_{\substack{\bm{\gamma} + \bm{\zeta} = \bm{\alpha}\\ \bm{\gamma} \preceq \bm{\alpha},\bm{\zeta} \preceq \bm{\alpha} }} \frac{\bm{\alpha}!}{\bm{\gamma}! \bm{\zeta}!} \|f_k\|_{mix,1} \prod_{j=1:d, \gamma_j = 0} 2\beta \prod_{j=1:d, \gamma_j\ge 1} \frac{1}{L} \\ &=\sum_{\substack{\bm{\gamma} + \bm{\zeta} = \bm{\alpha}\\ \bm{\gamma} \preceq \bm{\alpha},\bm{\zeta} \preceq \bm{\alpha} }} 2^{|\bm{\zeta}|} \left(\frac{1}{L}\right)^{|\bm{\gamma}|} \|f_k\|_{mix,1} = \left(2+ \frac{1}{L} \right)^{|\bm{\alpha}|}\|f_k\|_{mix,1}\\ &\leq \left(2+ \frac{1}{L} \right)^{d}\|f_k\|_{mix,1}. \end{aligned}$$ If $\beta = 2$, consider a multi-index $\bm{\alpha}$ with $\alpha_j\in \{0,1,2\}$ for $j=1,2,\dots,d$; then an application of the trinomial expansion implies $$\begin{split} \|D^{\bm{\alpha}}\overline{u}_k\|_{L^\infty(\overline{\Omega})} &\le \|f_k\|_{mix,2} \sum_{\substack{\bm{\gamma} + \bm{\zeta} = \bm{\alpha}\\ \bm{\gamma} \preceq \bm{\alpha},\bm{\zeta} \preceq \bm{\alpha} }} \frac{\bm{\alpha}!}{\bm{\gamma}! \bm{\zeta}!} \prod_{j=1:d, \gamma_j = 0} (2\beta-1)2\beta \prod_{j=1:d, \gamma_j=1} \frac{1}{L}(2\beta) \prod_{j=1:d, \gamma_j=2} \frac{1}{L}\cdot 2 \\ &\le 2^d \left(2\beta(2\beta-1)+\frac{2\beta}{L}+\frac{2}{L}\right)^d \|f_k\|_{mix,2} =\left(24+\frac{12}{L}\right)^d \|f_k\|_{mix,2}. \end{split}$$ This completes the proof of the theorem. ◻ In Theorem [Theorem 2](#thm: small beta){reference-type="ref" reference="thm: small beta"}, we only consider the cases $\beta = 1$ and $\beta=2$; in both cases the curse of dimensionality can be overcome by means of SGPI while the functional norm retains a relatively small upper bound. Clearly, higher-order mixed derivatives of the interpolation function $u_k$ have larger upper bounds.
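As a quick numerical sanity check, the multinomial identity behind the $\beta = 1$ estimate, $\sum_{\bm{\gamma}+\bm{\zeta}=\bm{\alpha}} \frac{\bm{\alpha}!}{\bm{\gamma}!\bm{\zeta}!} 2^{|\bm{\zeta}|}(1/L)^{|\bm{\gamma}|} = (2+1/L)^{|\bm{\alpha}|}$ with $\bm{\alpha}=(1,\dots,1)$, can be verified by brute-force enumeration. The following Python sketch is purely illustrative (it is not part of the paper's MATLAB implementation):

```python
from itertools import product
from math import isclose

def binomial_sum(d, L):
    """Sum over all gamma <= alpha for alpha = (1,...,1) (the beta = 1 case):
    each coordinate j contributes a factor 2 (if gamma_j = 0) or 1/L (if
    gamma_j = 1), so the sum factorizes into the product (2 + 1/L)^d."""
    total = 0.0
    for gamma in product([0, 1], repeat=d):
        term = 1.0
        for gj in gamma:
            term *= (1.0 / L) if gj == 1 else 2.0
        total += term
    return total

d, L = 5, 2.0
lhs = binomial_sum(d, L)
rhs = (2.0 + 1.0 / L) ** d
assert isclose(lhs, rhs)
```

The enumeration confirms the closed form used in the proof, independent of the value of the scale parameter $L$.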
# Numerical experiments {#sec:numerical} In this section, we illustrate the efficiency and robustness of the proposed quadrature and sparse grid interpolation scheme, i.e., Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"}, for pricing high-dimensional American options. We present pricing results up to dimension $16$. The accuracy of the option price $\hat V$ obtained by Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"} is measured by the relative error defined by $$e= |\hat V - V^\dag|/{V^\dag},$$ where $V^\dag$ is the reference price, either taken from the literature or computed to meet a certain tolerance. The results show that the relative errors decay geometrically as the number of interpolation points increases, and that the convergence rate is almost independent of the dimension $d$. A comparison of various quadrature methods is also included. The implementation of sparse grids is based on the Sparse Grids MATLAB Kit, a MATLAB toolbox for high-dimensional quadrature and interpolation [@piazzola.tamellini:SGK]. The computations were performed in MATLAB R2022b with 32 CPU cores (with 4GB memory per core) using research computing facilities offered by Information Technology Services, The University of Hong Kong. The code for the numerical experiments can be found at <https://github.com/jiefeiy/multi-asset-American-option/tree/main>. ## American basket option pricing up to dimension 16 The examples are taken from [@kovalov2007pricing], where the pricing of American options on up to 6 assets by the FEM was investigated. For each $d$-dimensional problem, the setting is $S^i_0 = \kappa = 100$, $T = 0.25$, $r = 0.03$, $\delta_i = 0$, $\sigma_i = 0.2$, $P = (\rho_{ij})_{d\times d}$ with $\rho_{ij} = 0.5$ for $i\ne j$.
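Simulating this model requires correlated Gaussian increments, which are obtained by factorizing the correlation matrix $P$. The sketch below (a standard Cholesky step, assumed here for illustration rather than taken from the authors' code) builds $P$ for $\rho_{ij} = 0.5$ and verifies the factorization:

```python
from math import sqrt

def cholesky(A):
    """Lower-triangular L with L L^T = A (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

d, rho = 4, 0.5
P = [[1.0 if i == j else rho for j in range(d)] for i in range(d)]
L_chol = cholesky(P)
# Check the factorization: (L L^T)_{ij} should recover rho_{ij}.
for i in range(d):
    for j in range(d):
        rec = sum(L_chol[i][k] * L_chol[j][k] for k in range(d))
        assert abs(rec - P[i][j]) < 1e-12
```

Multiplying $L$ by a vector of independent standard normals then yields increments with correlation matrix $P$.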
The prices of arithmetic basket options with $d = 2,3,\dots, 12$ underlying assets are listed in Table [1](#tab: SGGK price of arithmBaskPut){reference-type="ref" reference="tab: SGGK price of arithmBaskPut"}, whose last column gives the reference prices $V^\dag_{\rm Amer}$ of American options reported in [@kovalov2007pricing]; the relative error reported there is $0.758\%$ for pricing a $6$-d American geometric put option. To further illustrate the efficiency of the algorithm in high dimensions, we consider pricing geometric basket put options as benchmarks, since these can be reduced to a one-dimensional problem. Thus, highly accurate prices are available using the one-dimensional quadrature and interpolation scheme. Indeed, the price of the $d$-dimensional problem equals that of a one-dimensional American put option with initial price, volatility, and dividend yield given by $$\hat{S}_0 = \big(\prod_{i=1}^d S_0^i\big)^{1/d}, \quad \hat{\sigma} = \frac{1}{d}\sqrt{\sum_{i,j}\sigma_i \sigma_j \rho_{ij}}, \quad \hat{\delta} = \frac{1}{d}\sum_{i=1}^d \big( \delta_i + \frac{\sigma_i^2}{2}\big) - \frac{\hat{\sigma}^2}{2},$$ respectively. The prices of geometric basket options with $d = 2,3,\dots, 16$ underlying assets are listed in Table [2](#tab: SGGK price of geoBaskPut){reference-type="ref" reference="tab: SGGK price of geoBaskPut"}, where the reference Bermudan prices $V^\dag_{\rm Ber}$ with $50$ time steps, accurate up to $10^{-6}$, are calculated using the one-dimensional quadrature and interpolation scheme. The last column of Table [2](#tab: SGGK price of geoBaskPut){reference-type="ref" reference="tab: SGGK price of geoBaskPut"} lists the reference prices $V^\dag_{\rm Amer}$ of American options reported in [@kovalov2007pricing], priced via the reduced one-dimensional problem.
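The reduction formulas above are straightforward to evaluate. The following Python sketch (illustrative only, using the benchmark parameters of this section) computes the effective one-dimensional parameters:

```python
from math import sqrt

def reduced_params(S0, sigma, delta, rho):
    """One-dimensional reduction for a geometric basket: returns the
    effective initial price, volatility, and dividend yield."""
    d = len(S0)
    S0_hat = 1.0
    for s in S0:
        S0_hat *= s
    S0_hat **= 1.0 / d                      # geometric mean of initial prices
    var = sum(sigma[i] * sigma[j] * (rho if i != j else 1.0)
              for i in range(d) for j in range(d))
    sigma_hat = sqrt(var) / d
    delta_hat = sum(di + si ** 2 / 2 for di, si in zip(delta, sigma)) / d \
        - sigma_hat ** 2 / 2
    return S0_hat, sigma_hat, delta_hat

# Benchmark parameters: S0 = 100, sigma = 0.2, delta = 0, rho = 0.5, d = 6.
d = 6
S0_hat, sigma_hat, delta_hat = reduced_params(
    [100.0] * d, [0.2] * d, [0.0] * d, 0.5)
# With equal volatilities, sigma_hat = 0.2 * sqrt((d + 1) / (2 * d)).
```

For equal volatilities and a constant correlation $\rho$, the effective volatility decreases with $d$, which is consistent with the decreasing prices down the columns of Table [2](#tab: SGGK price of geoBaskPut){reference-type="ref" reference="tab: SGGK price of geoBaskPut"}.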
| $d$ | $L_I=3$ | $L_I=4$ | $L_I=5$ | $L_I=6$ | $L_I=7$ | $V^\dag_{\rm Amer}$ |
|-----|---------|---------|---------|---------|---------|---------------------|
| 2 | 2.9193 (7.02e-2) | 3.1397 (3.52e-5) | 3.1330 (2.07e-3) | 3.1269 (4.03e-3) | 3.1388 (2.30e-4) | 3.13955 |
| 3 | 2.8649 (2.71e-2) | 2.9533 (2.96e-3) | 2.9300 (4.95e-3) | 2.9304 (4.81e-3) | 2.9463 (5.83e-4) | 2.94454 |
| 4 | 2.8429 (9.42e-4) | 2.8547 (5.11e-3) | 2.8232 (5.98e-3) | 2.8311 (3.22e-3) | | 2.84019 |
| 5 | 2.8271 (1.99e-2) | 2.7906 (6.74e-3) | 2.7601 (4.25e-3) | 2.7710 (3.26e-4) | | 2.77193 |
| 6 | 2.8129 (3.48e-2) | 2.7455 (9.96e-3) | 2.7191 (2.60e-4) | 2.7319 (4.97e-3) | | 2.71838 |
| 7 | 2.7988 | 2.7140 | 2.6913 | 2.7052 | | |
| 8 | 2.7841 | 2.6908 | 2.6718 | | | |
| 9 | 2.7720 | 2.6690 | 2.6553 | | | |
| 10 | 2.7599 | 2.6498 | 2.6428 | | | |
| 11 | 2.7472 | 2.6331 | 2.6334 | | | |
| 12 | 2.7345 | 2.6210 | 2.6256 | | | |

: The prices for the arithmetic basket put option on $d$ assets with $\beta = 1$, $L = 2$, and $K = 50$, for sparse grid interpolation levels $L_I$, using sparse grid quadrature with level $4$ Genz-Keister knots. The relative errors in brackets are compared with the American price $V^\dag_{\rm Amer}$.
| $d$ | $L_I=3$ | $L_I=4$ | $L_I=5$ | $L_I=6$ | $L_I=7$ | $V^\dag_{\rm Ber}$ | $V^\dag_{\rm Amer}$ |
|-----|---------|---------|---------|---------|---------|--------------------|---------------------|
| 2 | 2.9489 (7.36e-2) | 3.1880 (1.53e-3) | 3.1880 (1.55e-3) | 3.1839 (2.51e-4) | 3.1831 (1.43e-5) | 3.18310 | 3.18469 |
| 3 | 2.9130 (3.00e-2) | 3.0226 (6.53e-3) | 3.0060 (9.96e-4) | 3.0029 (1.67e-5) | 3.0030 (8.59e-6) | 3.00299 | 3.00448 |
| 4 | 2.9028 (1.91e-3) | 2.9314 (7.94e-3) | 2.9103 (6.75e-4) | 2.9088 (1.42e-4) | | 2.90836 | 2.90980 |
| 5 | 2.8963 (1.63e-2) | 2.8716 (7.60e-3) | 2.8514 (5.03e-4) | 2.8505 (2.00e-4) | | 2.84994 | 2.85135 |
| 6 | 2.8889 (2.80e-2) | 2.8283 (6.40e-3) | 2.8110 (2.74e-4) | 2.8107 (1.65e-4) | | 2.81026 | 2.81165 |
| 7 | 2.8810 (3.57e-2) | 2.7955 (5.03e-3) | 2.7817 (4.95e-5) | 2.7817 (6.93e-5) | | 2.78155 | |
| 8 | 2.8725 (4.08e-2) | 2.7718 (4.34e-3) | 2.7599 (3.47e-5) | | | 2.75980 | |
| 9 | 2.8631 (4.39e-2) | 2.7505 (2.84e-3) | 2.7428 (3.37e-5) | | | 2.74275 | |
| 10 | 2.8525 (4.52e-2) | 2.7320 (1.08e-3) | 2.7292 (6.66e-5) | | | 2.72904 | |
| 11 | 2.8412 (4.54e-2) | 2.7154 (8.79e-4) | 2.7180 (8.83e-5) | | | 2.71776 | |
| 12 | 2.8294 (4.47e-2) | 2.7000 (3.09e-3) | 2.7085 (7.74e-5) | | | 2.70832 | |
| 13 | 2.8175 (4.34e-2) | 2.6855 (5.47e-3) | | | | 2.70031 | |
| 14 | 2.8043 (4.12e-2) | 2.6720 (7.95e-3) | | | | 2.69342 | |
| 15 | 2.7910 (3.85e-2) | 2.6597 (1.03e-2) | | | | 2.68743 | |
| 16 | 2.7784 (3.59e-2) | 2.6502 (1.19e-2) | | | | 2.68218 | |

: The prices for the geometric basket put option on $d$ assets with $\beta = 1$, $L = 2$, and $K = 50$, for sparse grid interpolation levels $L_I$, using sparse grid quadrature with level $4$ Genz-Keister knots. The relative errors in brackets are compared with the $50$-times exercisable Bermudan price $V^\dag_{\rm Ber}$.

## Convergence of interpolation for Bermudan options To verify the convergence rate of the SGPI, we consider a $50$-times exercisable Bermudan basket put option on the geometric average of $d$ assets. To avoid the influence of quadrature errors, sparse grid quadrature with level $4$ Genz-Keister knots is applied so that the quadrature errors are negligible. Fig.
[4](#fig: fig_conv_interp){reference-type="ref" reference="fig: fig_conv_interp"} shows the convergence for different dimensions $d$. These plots show that, for a fixed number of inner sparse grid points $N$, the relative error does increase with the dimension $d$, but the convergence rate is nearly independent of the dimension, confirming the theoretical prediction in Section [4](#sec:analysis){reference-type="ref" reference="sec:analysis"}. ![The relative error w.r.t. the number of inner sparse grid interpolation points $N$ in dimension $d$ with $\beta = 1$, $L = 2$, and $K = 50$.](fig_conv_interp.eps){#fig: fig_conv_interp width=".4\\textwidth"} ## Comparison of quadrature Now we showcase the performance of Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"} with different quadrature methods, to demonstrate the flexibility in the choice of high-dimensional quadrature rules. Since the pointwise evaluations on the inner sparse grids for interpolation are obtained via quadrature, cf. Section [3.3](#subsec:algorithm){reference-type="ref" reference="subsec:algorithm"}, we present the pricing errors for three kinds of sparse grid quadrature, for the randomized quasi-Monte Carlo (RQMC) method with scrambled Sobol sequences, and for the state-of-the-art preintegration strategy for integrands with 'kinks' (discontinuities of the gradient). **Sparse grid quadrature**: We first show the relative errors of pricing using [\[eq: SG quadrature\]](#eq: SG quadrature){reference-type="eqref" reference="eq: SG quadrature"} in Fig. [\[fig:quad-err\]](#fig:quad-err){reference-type="ref" reference="fig:quad-err"}b, for three types of sparse grids, i.e., Gauss-Hermite, Genz-Keister, and normal Leja points, for integration with respect to the Gaussian density.
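To illustrate quadrature with respect to the Gaussian density in the simplest possible setting, the sketch below uses a hand-coded one-dimensional 5-point Gauss-Hermite rule; it is a toy stand-in for the sparse grid rules used in the experiments, and the parameters match the benchmark of this section. The smooth martingale check is nearly exact, while the one-step put value shows how the 'kink' of the payoff degrades a low-order rule:

```python
from math import exp, sqrt, pi

# 5-point Gauss-Hermite rule (weight exp(-x^2)); E[g(Z)] for Z ~ N(0,1)
# is approximated by (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * x_i).
GH_NODES = [-2.020182870456086, -0.958572464613819, 0.0,
            0.958572464613819, 2.020182870456086]
GH_WEIGHTS = [0.019953242059046, 0.393619323152241, 0.945308720482942,
              0.393619323152241, 0.019953242059046]

def gauss_expectation(g):
    return sum(w * g(sqrt(2.0) * x)
               for x, w in zip(GH_NODES, GH_WEIGHTS)) / sqrt(pi)

S0, kappa, r, sigma, dt = 100.0, 100.0, 0.03, 0.2, 0.25 / 50

# Martingale check: E[S_dt] = S0 * exp(r * dt); the rule is nearly exact
# here because the integrand is smooth.
mean = gauss_expectation(lambda z: S0 * exp((r - sigma ** 2 / 2) * dt
                                            + sigma * sqrt(dt) * z))

# One-step discounted put payoff: the max(...) kink makes this low-order
# rule noticeably less accurate than for the smooth integrand above.
put_dt = exp(-r * dt) * gauss_expectation(
    lambda z: max(kappa - S0 * exp((r - sigma ** 2 / 2) * dt
                                   + sigma * sqrt(dt) * z), 0.0))
```

In the paper's multi-asset setting the same principle applies, with tensorized or sparse grid combinations of such one-dimensional rules.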
The theoretical convergence guarantees for sparse grid quadrature are limited to functions with bounded mixed derivatives, a condition that is not satisfied by $v_{k+1}^{\mathbf{z}}(\cdot)$ defined in [\[eq: def of integrand\]](#eq: def of integrand){reference-type="eqref" reference="eq: def of integrand"}. Nonetheless, the success of sparse grid quadrature for computing risk-neutral expectations has been observed in the literature [@bungartz2003multivariate; @gerstner2007sparse; @holtz2010sparse]. Fig. [\[fig:quad-err\]](#fig:quad-err){reference-type="ref" reference="fig:quad-err"}a shows the $L^\infty(\overline{\Omega})$-error of approximating $F_{K-1}$ by sparse grid quadrature, where the exact values correspond to the price of European options with expiration time $\Delta t$. We observe that the quadrature errors in Fig. [\[fig:quad-err\]](#fig:quad-err){reference-type="ref" reference="fig:quad-err"}a seem much larger than the relative errors shown in Fig. [\[fig:quad-err\]](#fig:quad-err){reference-type="ref" reference="fig:quad-err"}b, which seems implausible at first glance, since the latter are polluted by several error sources, including the former. We find that the quadrature errors are large only near the free interface, which is a $(d-1)$-dimensional manifold in the $d$-dimensional problem, and hence they do not heavily impact the relative errors depicted in Fig. [\[fig:quad-err\]](#fig:quad-err){reference-type="ref" reference="fig:quad-err"}b. ![](fig_err_quad_a.eps){#fig: fig_err_quad_a width="\\textwidth"}   ![](fig_err_quad_b.eps){#fig: fig_err_quad_b width=".95\\textwidth"} ![The RMSE w.r.t.
the number of scrambled Sobol points $M$ for pricing the 5-d Bermudan geometric basket put option with $L_I = 6$, $\beta = 1$, $L = 2$, and $K = 50$.](RMSE_preint.eps){#fig: RMSE preint width=".4\\textwidth"} **RQMC and RQMC with preintegration**: To show the convergence with respect to the number of quadrature points $M$, we use randomized quasi-Monte Carlo (RQMC) with scrambled Sobol sequences for quadrature. QMC and RQMC have been widely applied to option pricing problems for computing high-dimensional integrals [@giles2008quasi; @joy1996quasi; @l2009quasi]. The max function in [\[eq: def of integrand\]](#eq: def of integrand){reference-type="eqref" reference="eq: def of integrand"} introduces a 'kink', which decreases the efficiency of sparse grid quadrature and QMC. For functions with 'kinks', preintegration strategies and conditional sampling techniques have been developed [@griewank2018high; @liu2023preintegration]. Fig. [7](#fig: RMSE preint){reference-type="ref" reference="fig: RMSE preint"} shows the root mean square error (RMSE) of Bermudan option pricing with respect to the number of quadrature points $M$ in a $5$-d problem, where we use $20$ independent replicates to estimate the RMSE by $\text{RMSE} = \sqrt{\frac{1}{20}\sum_{i=1}^{20} ( \hat V^{(i)} - V^\dag )^2}$. ## Robustness To test the robustness of Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"}, we repeat the experiments for various values of the parameter $\beta$ appearing in the definition of the bubble function [\[eq: bubble function\]](#eq: bubble function){reference-type="eqref" reference="eq: bubble function"}, the scale parameter $L$ introduced in the scaled $\tanh$ map [\[eq: mapping for unbounded\]](#eq: mapping for unbounded){reference-type="eqref" reference="eq: mapping for unbounded"}, and the number of time steps $K$. The corresponding results are shown in Fig. [8](#fig: fig_stable_beta){reference-type="ref" reference="fig: fig_stable_beta"}, Fig.
[9](#fig: fig_stable_L){reference-type="ref" reference="fig: fig_stable_L"}, and Fig. [10](#fig: fig_stable_K){reference-type="ref" reference="fig: fig_stable_K"}. We test with the example of pricing Bermudan or American geometric basket put options, whose prices are listed in Table [2](#tab: SGGK price of geoBaskPut){reference-type="ref" reference="tab: SGGK price of geoBaskPut"}. Fig. [8](#fig: fig_stable_beta){reference-type="ref" reference="fig: fig_stable_beta"} shows the convergence of the relative errors for $\beta = 1,2,3,4,5$. Theoretically, we have only provided the upper bounds of $\|\overline{u}_k\|_{F_d^1(\overline{\Omega})}$ and $\|\overline{u}_k\|_{F_d^2(\overline{\Omega})}$ in Theorem [Theorem 2](#thm: small beta){reference-type="ref" reference="thm: small beta"}. The relative errors with respect to the chosen scale parameter $L$ are displayed in Fig. [9](#fig: fig_stable_L){reference-type="ref" reference="fig: fig_stable_L"}. The smallest relative error is observed for $L = 3.5$. In practice, as mentioned in Section [3.1](#subsec:unbounded-bounded){reference-type="ref" reference="subsec:unbounded-bounded"}, the parameter $L>0$ is determined such that the transformed interpolation points are distributed similarly to the asset prices. Theorem [Theorem 2](#thm: small beta){reference-type="ref" reference="thm: small beta"} implies that the scale parameter $L$ should not be too small, and Proposition [Proposition 1](#prop:bubble){reference-type="ref" reference="prop:bubble"} implies that $L$ should not be too large. ![](fig_stable_beta.eps){#fig: fig_stable_beta width="\\textwidth"}   ![](fig_stable_L.eps){#fig: fig_stable_L width="\\textwidth"}   ![](fig_stable_K.eps){#fig: fig_stable_K width="\\textwidth"} A time discretization error always arises when using the price of a $K$-times exercisable Bermudan option to approximate the American price.
For the equidistant time step $\Delta t = T/K$, it is widely accepted that the Bermudan price approaches the American price as $\Delta t\to 0$ with a convergence rate of $\mathcal{O} (\Delta t)$. For a single underlying asset, this convergence rate was shown in [@howison2007matched] for the Black-Scholes model. A similar convergence rate has been observed in [@quecke2007efficient] and [@fang2009pricing] for Lévy models. In almost all pricing schemes based upon dynamic programming, a more accurate price can be obtained with more exercise dates (but at a higher computational cost). One general approach to alleviate the cost while still guaranteeing accuracy is to apply Richardson extrapolation [@geske1984american; @chang2007richardson]. Fig. [10](#fig: fig_stable_K){reference-type="ref" reference="fig: fig_stable_K"} presents the convergence of the Bermudan price to the American price as $K$ increases using Algorithm [\[alg: SGSG\]](#alg: SGSG){reference-type="ref" reference="alg: SGSG"}. # Conclusions {#sec:conclusion} In this work, we have developed a novel quadrature and sparse grid interpolation based algorithm for pricing American options with many underlying assets. Unlike most existing methods, it avoids truncating the computational domain, and hence does not introduce artificial boundary data, and it achieves a significant reduction in the number of grid points by introducing a bubble function. The resulting multivariate function has been shown to have bounded mixed derivatives. Numerical experiments for American basket put options with up to 16 underlying assets demonstrate the excellent accuracy of the approach. Future work includes pricing max-call options on multiple underlying assets, which are benchmark test cases for high-dimensional American options. Max-call options pose computational challenges due to the unboundedness of the payoff and thus require special treatment.
[^1]: Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong. Email: `jiefeiy@connect.hku.hk` JY acknowledges support from the University of Hong Kong via the HKU Presidential PhD Scholar Programme (HKU-PS). The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Uncertainty Quantification and Stochastic Modelling of Materials, when work on this paper was undertaken. This work was supported by EPSRC Grant Number EP/R014604/1. [^2]: Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong. Email: `lotusli@maths.hku.hk` GL acknowledges the support from GRF (project number: 17317122) and Early Career Scheme (project number: 27301921), RGC, Hong Kong. The authors thank Prof. Michael Griebel (University of Bonn, Germany) for providing valuable references on option pricing and sparse grids. [^3]: At first glance of [\[eq: continuation value as integral\]](#eq: continuation value as integral){reference-type="eqref" reference="eq: continuation value as integral"}, the $\mathcal{C}^\infty(\mathbb{R}^d)$ smoothness of $f_{K-1}$ seems to follow directly from the fact that convolution smooths out the payoff function. However, the payoff function $G$ of the rotated log-price is not in $L^1(\mathbb{R}^d)$, nor does the density function have compact support. Therefore, we provide a proof of the $\mathcal{C}^\infty(\overline{\mathbb{R}^d})$ regularity in Lemma [Lemma 1](#lem: 1){reference-type="ref" reference="lem: 1"}, mainly using the dominated convergence theorem.
{ "id": "2309.08287", "title": "On Sparse Grid Interpolation for American Option Pricing with Multiple\n Underlying Assets", "authors": "Jiefei Yang, Guanglian Li", "categories": "math.NA cs.NA q-fin.CP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper we derive an upper bound for the degree of a strict invariant algebraic curve of a polynomial system in the complex projective plane under generic conditions. The results are obtained through the algebraic multiplicities of the system at the singular points. A method for computing the algebraic multiplicity using the Newton polygon is also presented. address: - | Zhou Pei-Yuan Center for Applied Mathematics\ Tsinghua University\ Beijing, 100084\ P.R.China - | Department of Mathematical Science\ Tsinghua University\ Beijing, 100084\ P.R.China author: - Jinzhi Lei - Lijun Yang title: Algebraic Multiplicity and the Poincaré Problem --- [^1] # Introduction {#sec:1} In this paper, we will present an approach to establishing an upper bound for the degree of a strict invariant algebraic curve of a polynomial system in the complex projective plane $\mathbb{P}_\mathbb C^2$. A polynomial system in $\mathbb{P}_\mathbb C^2$ is defined by the vector field $$\label{eq:27} \dot{z} = P(z,w),\ \ \dot{w} = Q(z,w),$$ where $P$ and $Q$ are relatively prime polynomials with complex coefficients. **Definition 1**. A polynomial $f(z,w)$ is said to be a Darboux polynomial of ([\[eq:27\]](#eq:27){reference-type="ref" reference="eq:27"}) if there exists a polynomial $R_f(z,w)$ such that $$\label{eq:28} P(z,w)\dfrac{\partial f}{\partial z}+Q(z,w) \dfrac{\partial f}{\partial w} = R_f(z,w)f(z,w).$$ We call the zero-set $C(f)=\{(z,w)\in\hat{\mathbb C}^2|\,f(z,w)=0\}$ an invariant algebraic curve, and $R_f$ the cofactor of $f$. In particular, if $C(f)$ contains no constant irreducible component (i.e., no line $z = z_0$ or $w = w_0$), then $f$ is a strict Darboux polynomial, and $C(f)$ is a strict invariant algebraic curve. The study of invariant algebraic curves of a polynomial system goes back to Darboux and Poincaré (see Schlomiuk [@Sch]).
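For a concrete illustration of Definition 1, consider a two-dimensional Lotka-Volterra field $\dot z = z(a + bz + cw)$, $\dot w = w(d + ez + f_0 w)$ with illustrative coefficients: the coordinate polynomial $f(z,w) = w$ is a Darboux polynomial with cofactor $R_f = d + e\,z + f_0\,w$. The following Python sketch (purely numerical, not part of the paper) checks the defining identity at sample points:

```python
import random

# Lotka-Volterra vector field z' = P, w' = Q (coefficients are illustrative).
a, b, c = 1.0, -0.5, -0.3
d, e, f0 = -1.0, 0.4, -0.2
P = lambda z, w: z * (a + b * z + c * w)
Q = lambda z, w: w * (d + e * z + f0 * w)

# Darboux polynomial f(z, w) = w with cofactor R_f = d + e*z + f0*w:
# verify P*f_z + Q*f_w == R_f * f at random sample points.
f = lambda z, w: w
fz = lambda z, w: 0.0          # partial derivative of f in z
fw = lambda z, w: 1.0          # partial derivative of f in w
Rf = lambda z, w: d + e * z + f0 * w

rng = random.Random(0)
for _ in range(100):
    z, w = rng.uniform(-2, 2), rng.uniform(-2, 2)
    lhs = P(z, w) * fz(z, w) + Q(z, w) * fw(z, w)
    assert abs(lhs - Rf(z, w) * f(z, w)) < 1e-12
```

The same check with $f(z,w) = z$ and cofactor $a + b\,z + c\,w$ works verbatim; this anticipates the Lotka-Volterra application in Section [3](#sec:2){reference-type="ref" reference="sec:2"}.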
In general, a Darboux polynomial of the system [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} can be found by solving the equation [\[eq:28\]](#eq:28){reference-type="eqref" reference="eq:28"} for $f$ and $R_f$. Equation [\[eq:28\]](#eq:28){reference-type="eqref" reference="eq:28"} is easy to solve if the degree of $f$ is known in advance (for example, see Pereira [@Per:2 Proposition 1]). However, it is still an open problem, for a given system, to establish the upper bound for the degree of the invariant algebraic curve effectively. This is known as the Poincaré problem. It is known that such an upper bound does exist for a given polynomial system; see Schlomiuk [@Sch Corollary 3.1]. However, a uniform upper bound depending merely on the degree of the system does not exist; for a non-trivial example, see Ollagnier [@Ol:01]. As a consequence, a practical algorithm that finds the bound from the coefficients is significant for the general task of finding the invariant algebraic curves of a polynomial system. For more remarks and results on the Poincaré problem, see Carnicer [@Car:94], Campillo and Carnicer [@Car:97], Schlomiuk [@Sch], Walcher [@Wal]. The first result to address the Poincaré problem was presented by Carnicer [@Car:94], as follows. **Theorem 2** (Carnicer's theorem [@Car:94]). *Let $\mathcal{F}$ be a foliation of $\mathbb{P}_{\mathbb{C}}^2$ and let $C$ be an algebraic curve in $\mathbb{P}_{\mathbb{C}}^2$. Suppose that $C$ is invariant by $\mathcal{F}$ and there are no dicritical singularities of $\mathcal{F}$ in $C$. Then $$\partial^o C\leq \partial^o \mathcal{F} + 2.$$* In the proof of Carnicer's theorem, the relationships among the sum of the multiplicities of a foliation along the branches of a curve, the degree of the curve, the degree of the foliation, and the Euler characteristic of the curve are systematically used. This idea is also used in the present paper.
However, [@Car:94] did not provide an effective method to determine whether a singular point is dicritical or not. The same inequality had been shown by Cerveau and Lins Neto [@Cer:91] for systems in which all the singularities of the invariant algebraic curve are nodal. A more straightforward result was presented by Walcher using elementary methods [@Wal]. Walcher's result states: **Theorem 3**. *[@Wal Theorem 3.4] [\[th:1\]]{#th:1 label="th:1"} Assume that a vector field $X$ of degree $M$ on $\mathbb{P}_\mathbb C^2$ admits an irreducible invariant algebraic curve. If all the stationary points of $X$ at infinity are nondegenerate and non-dicritical, then the degree of the curve cannot exceed $M+1$.* In Walcher's proof, the Poincaré-Dulac normal forms of the nondegenerate stationary points of a vector field were discussed. In particular, when the stationary point is non-dicritical, precise information on the number of irreducible semi-invariants of the vector field $X$ was obtained, from which the upper bound for the degree of an invariant algebraic curve was derived. It was also pointed out in [@Wal] that if there are dicritical ones among the nondegenerate stationary points, then the vector field can admit infinitely many (pairwise relatively prime) semi-invariants. Moreover, the non-dicritical condition can be verified by investigating the linear approximation of the vector field at the stationary points. Thus, Walcher's result provided a practical approach to the Poincaré problem. In this paper, we will present an alternative approach to the Poincaré problem by considering the algebraic multiplicities (see Definition [Definition 6](#def:1){reference-type="ref" reference="def:1"}) of the singular points of the system, and we obtain an inequality for the upper bound of the degree under some generic conditions. The main results of this paper are:
*Consider the differential equation $$\label{eq:17} \dfrac{\mathrm{d}w}{\mathrm{d}z} = \dfrac{P(z,w)}{z\,Q(z,w)},$$ of degree $M = \max\{\deg P(z,w), \deg z\,Q(z,w)\}$. Suppose that [\[eq:17\]](#eq:17){reference-type="eqref" reference="eq:17"} admits an irreducible strict Darboux polynomial $f(z,w)$. Let $a_1, \cdots, a_k\in \mathbb{C}$ be all the roots of $P(0,w) = 0$, let $a_0 = \infty$, and let $\mathrm{Mul}(0,a_i)$ be the algebraic multiplicity of $(0,a_i)$. Then $$\label{eq:20} \deg_w f(z,w) \leq \sum_{i = 0}^k\mathrm{Mul}(0,a_i).$$ In particular, if the singularities $(0,a_i)$ are not algebraic critical, then $$\label{eq:P1} \deg_w f(z,w)\leq M\,(k + 1).$$* **Theorem 5**. *Consider the polynomial system [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} of degree $M = \max\{\deg P(z,w), \deg Q(z,w)\}$. If [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} has an invariant straight line $L$, the singular points on $L$ are not algebraic critical, and [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} admits an irreducible strict Darboux polynomial $f(z,w)$, then $$\deg f(z,w) \leq M(M+1).$$* Note that in Theorem [Theorem 5](#th:main){reference-type="ref" reference="th:main"} we do not need the singularities to be non-degenerate, and we will see in the next section that being not algebraic critical is weaker than being non-dicritical. In Theorem [Theorem 5](#th:main){reference-type="ref" reference="th:main"}, we require that ([\[eq:27\]](#eq:27){reference-type="ref" reference="eq:27"}) has an invariant straight line. In fact, it is generic that the line at infinity is invariant. Hence, the condition in Theorem [Theorem 5](#th:main){reference-type="ref" reference="th:main"} is generic. The rest of this paper is arranged as follows. In Section [2](#sec:5){reference-type="ref" reference="sec:5"}, we introduce the concept of algebraic multiplicity and a method for computing it; the main theorems are then proved.
In Section [3](#sec:2){reference-type="ref" reference="sec:2"}, as an application, the 2D Lotka-Volterra system is studied. # Algebraic Multiplicity and the Poincaré Problem {#sec:5} Let $f(z,w)$ be a Darboux polynomial of ([\[eq:27\]](#eq:27){reference-type="ref" reference="eq:27"}). In general, the upper bound of the degree of $f(z,w)$ cannot be determined merely from the equation ([\[eq:28\]](#eq:28){reference-type="ref" reference="eq:28"}). The assumption that $f(z,w)$ is irreducible must be taken into account. If $f(z,w)$ is irreducible then, without loss of generality (performing the transformation $(z,w)\mapsto (z + c\,w,w)\ (c\in \mathbb{R})$ if necessary), we may assume that $\deg_w f(z,w) = \deg f(z,w)$. Let $m = \deg_w f(z,w)$; then there are $m$ algebraic functions $w_i(z)$ satisfying $f(z,w_i(z)) = 0\ (i = 1,2,\cdots,m)$. If these $m$ algebraic functions pass through some common singular points, then $m$ can be bounded by the possible number of the algebraic solutions that pass through these singular points. To this end, we will define the algebraic multiplicity as the number of local algebraic solutions, as follows. **Definition 6**. Consider a differential equation $$\label{eq:16} \dfrac{\mathrm{d}w}{\mathrm{d}z} = F(z,w),$$ and a point $(z_0, w_0)\in \mathbb C^2$. A formal series $$\label{eq:30} w(z) = w_0 + \sum_{i\geq 0}\alpha_i\,(z-z_0)^{\mu_i},$$ is said to be a local algebraic solution of [\[eq:16\]](#eq:16){reference-type="eqref" reference="eq:16"} at $(z_0,w_0)$ if $w(z)$ is a formal series solution of [\[eq:16\]](#eq:16){reference-type="eqref" reference="eq:16"} with $\alpha_i\not= 0$, $\mu_i\in\mathbb{Q}^+$, and $\mu_i < \mu_{i+1} \ (\forall i)$.
The algebraic multiplicity of [\[eq:16\]](#eq:16){reference-type="eqref" reference="eq:16"} at $(z_0,w_0)$, denoted by $\mathrm{Mul}(z_0,w_0; F)$ or simply by $\mathrm{Mul}(z_0,w_0)$ when the context is clear, is defined as the number of distinct local non-constant algebraic solutions of [\[eq:16\]](#eq:16){reference-type="eqref" reference="eq:16"} at $(z_0,w_0)$. If $\mathrm{Mul}(z_0,w_0) = \infty$, then $(z_0,w_0)$ is said to be algebraic critical. It is evident that being algebraic critical implies being dicritical (i.e., infinitely many invariant curves pass through the same point). When $w_0 =\infty$, let $\bar{w} = 1/w$; then $\bar{w}(z)$ satisfies $$\label{eq:32} \dfrac{\mathrm{d}\bar{w}}{\mathrm{d}z} = -\bar{w}^2\,F(z,1/\bar{w}): = \bar{F}(z, \bar{w}),$$ and the algebraic multiplicity $\mathrm{Mul}(z_0, \infty; F)$ is simply defined as $\mathrm{Mul}(z_0,0;\bar{F})$. Let $a, b, c \in \mathbb C$ with $a, c\not= 0$, and let $W = a\,(w - w_0) + b\, (z - z_0)$, $Z=c\,(z - z_0)$; then $W(Z)$ satisfies an equation of the form $$\label{eq:33} \dfrac{\mathrm{d}W}{\mathrm{d}Z} = \tilde{F}(Z,W).$$ It is easy to show that a local algebraic solution of [\[eq:16\]](#eq:16){reference-type="eqref" reference="eq:16"} at $(z_0, w_0)$ corresponds to a local algebraic solution of [\[eq:33\]](#eq:33){reference-type="eqref" reference="eq:33"} at $(0,0)$. Hence we have $$\label{eq:6} \mathrm{Mul}(z_0, w_0; F) = \left\{\begin{array}{ll} \mathrm{Mul}(0, 0; \tilde{F}),& \mathrm{if}\ \tilde{F}(Z,0) \not\equiv 0 \\ \mathrm{Mul}(0, 0; \tilde{F}) + 1,& \mathrm{if}\ \tilde{F}(Z,0) \equiv 0 \end{array}\right.$$ It is evident that, if $(z_0, w_0)$ is a regular point and $F(z,w_0)\not\equiv 0$, then $\mathrm{Mul}(z_0, w_0) = 1$. To estimate the algebraic multiplicity at a singular point $(z_0, w_0)$, we can substitute [\[eq:30\]](#eq:30){reference-type="eqref" reference="eq:30"} into [\[eq:16\]](#eq:16){reference-type="eqref" reference="eq:16"} to find all possible formal series solutions.
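As a simple illustration of an algebraic critical point, consider the linear node $\mathrm{d}w/\mathrm{d}z = \lambda\,w/z$ with $\lambda \in \mathbb{Q}^+$: for every $\alpha$, the function $w = \alpha\,z^{\lambda}$ is a local algebraic solution, so $(0,0)$ carries infinitely many of them. The following small numerical check (an illustrative sketch, with $\lambda = 2/3$) verifies the ODE identity on sample points:

```python
lam = 2 / 3  # lambda = p/q in Q+, here p = 2, q = 3

def w(z, alpha):
    """Candidate local algebraic solution w = alpha * z^lambda."""
    return alpha * z ** lam

def dw_dz(z, alpha):
    """Exact derivative: alpha * lambda * z^(lambda - 1)."""
    return alpha * lam * z ** (lam - 1)

# For the field F(z, w) = lam * w / z, every alpha gives a solution, so
# (0, 0) admits infinitely many local algebraic solutions: it is
# algebraic critical (a dicritical node).
for alpha in [0.5, 1.0, 2.0, -3.0]:
    for z in [0.1, 0.5, 1.0, 2.0]:
        assert abs(dw_dz(z, alpha) - lam * w(z, alpha) / z) < 1e-12
```

This is exactly the mechanism exploited in the proof of Lemma 7 below, where the substitution $u = w\,z^{-\lambda}$ produces a one-parameter family of analytic solutions.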
A method for finding the formal series solutions of a polynomial system at a singular point is given in Lei and Guan [@Lei] using the Newton polygon (Bruno [@Bruno], Chebotarev [@Ceb]). The result and its proof are restated below. **Lemma 7**. *Consider the polynomial system $$\label{eq:22} \dfrac{\mathrm{d}w}{\mathrm{d}z} = \dfrac{P(z,w)}{Q(z,w)}$$ where $$P(z,w) = \sum_{i \geq 0} P_i(z)\, w^i,\ \ \ Q(z,w) = \sum_{i \geq 0} Q_i(z)\, w^i,$$ and $$P_i(z) = p_{i,0}\, z^{k_i} + p_{i,1}\,z^{k_i + 1} + \cdots, \ \ Q_i(z) = q_{i,0}\,z^{l_i} + q_{i,1}\, z^{l_i + 1} + \cdots,\ \ (i\geq 0).$$ If $(0,0)$ is a singular point of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}, and there exists $j$ satisfying* 1. *$k_j = l_{j-1} - 1$;* 2. *for any $i\not=j$, $$\min\{k_i,l_{i-1}-1\} > k_j + (j-i)\,(p_{j,0}/q_{j-1,0});$$* 3. *$p_{j,0}/q_{j-1,0}\in \mathbb{Q}^+$,* *then $(0,0)$ is algebraic critical for the system [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}.* *Proof.* Let $\lambda = p_{j,0}/q_{j-1,0}$ and $u(z) = w(z)\,z^{-\lambda}$; then $u(z)$ satisfies $$\begin{aligned} \dfrac{\mathrm{d}u}{\mathrm{d}z} &=& \dfrac{\sum_{i\geq 0} (p_{i,0}\,z^{k_i + i\,\lambda} - q_{i-1,0}\,\lambda\,z^{l_{i-1} + i\,\lambda - 1}+ h.o.t.)\,u^i}{\sum_{i\geq 0}(q_{i,0}\,z^{l_i + (i+1)\,\lambda} + h.o.t.)u^i}\\ &=&\dfrac{z^{l_{j-1} + j\,\lambda-1}\,\sum_{i\geq 0} (p_{i,0}\,z^{k_i-k_j + (i-j)\,\lambda} - q_{i-1,0}\,\lambda\,z^{l_{i-1}-l_{j-1} + (i-j)\,\lambda}+ h.o.t.)\,u^i}{z^{l_{j-1} + j\,\lambda}\,\sum_{i\geq 0}(q_{i,0}\,z^{l_i -l_{j-1} + (i-j)\,\lambda} + h.o.t.)u^i} \end{aligned}$$ Taking the conditions on $j$ into account, we can rewrite the above equation as $$\frac{\mathrm{d}u}{\mathrm{d}z} =\dfrac{z^{s}\,\hat{P}(z,u)}{z\,\hat{Q}(z,u)}$$ where $\hat{P}(0,u), \hat{Q}(0,u)\not\equiv 0$, and $s = \min_{i\geq 0}\{k_i-k_j + (i-j)\,\lambda, l_{i-1}-l_{j-1} + (i-j)\,\lambda\}\in \mathbb{Q}^+$.
Let $z = \bar{z}^{q_{j-1,0}}$; then $$\label{eq:29} \dfrac{\mathrm{d}u}{\mathrm{d}\bar{z}}= \dfrac{q_{j-1,0}\,\bar{z}^{s\,q_{j-1,0} - 1}\,\hat{P}(\bar{z}^{q_{j-1,0}},u)}{\hat{Q}(\bar{z}^{q_{j-1,0}},u)}$$ It is easy to see that $s\,q_{j-1,0}\in \mathbb{N}$ and that $\hat{P}(\bar{z}^{q_{j-1,0}},u), \hat{Q}(\bar{z}^{q_{j-1,0}},u)$ are polynomials in $\bar{z}$ and $u$. Thus, for any $\alpha$ such that $\hat{Q}(0,\alpha)\not=0$, [\[eq:29\]](#eq:29){reference-type="eqref" reference="eq:29"} has a unique solution $u(\bar{z};\alpha)$ which is analytic at $\bar{z} = 0$ and satisfies $u(0;\alpha) = \alpha$. Thus, $$w(z;\alpha) = z^{\lambda}\,u(z^{1/q_{j-1,0}};\alpha) = z^{\lambda}(\alpha + \sum_{i\geq 1}\frac{1}{i!}u^{(i)}_z(0;\alpha)z^{i/q_{j-1,0}})$$ is a solution of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}, i.e., $w(z;\alpha)$ is a local algebraic solution of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} for any $\alpha$ such that $\hat{Q}(0,\alpha)\not=0$. Hence, $(0,0)$ is algebraic critical for [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}. ◻ *Remark 8*. 1. Lemma [Lemma 7](#le:1){reference-type="ref" reference="le:1"} remains valid for equations in which $P$ and $Q$ are Puiseux series in $z$ and $w$ (with slight changes in the proof): $$P(z,w) = \sum_{i,j\geq0}p_{i,j}\,z^{i/\mu}\,w^{j/\nu},\ \ \ Q(z,w) = \sum_{i,j\geq 0}q_{i,j}\,z^{i/\mu}\,w^{j/\nu}\ \ \ (\mu,\nu\in \mathbb{N})$$ 2.
From the proof of Lemma [Lemma 7](#le:1){reference-type="ref" reference="le:1"}, if the index $j$ satisfies conditions (1) and (2), but $p_{j,0}/q_{j-1,0}\in\mathbb{R}^+\backslash\mathbb{Q}^+$, let $\lambda = p_{j,0}/q_{j-1,0}$; then [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} admits infinitely many solutions of the form $w(z;\alpha) = z^{\lambda}\,u(z^{1/s};\alpha)$, where $u(\bar{z};\alpha)$ is the solution of $$\dfrac{\mathrm{d}u}{\mathrm{d}\bar{z}} = \dfrac{\hat{P}(\bar{z}^{1/s},u)}{s\,\hat{Q}(\bar{z}^{1/s},u)}$$ such that $u(0;\alpha) = \alpha$. Thus, [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} is dicritical at $(0,0)$, but not necessarily algebraic critical. **Lemma 9**. *Let $(0,0)$ be a singular point of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}; then either $(0,0)$ is algebraic critical, or $$\label{eq:24} \mathrm{Mul}(0,0)\leq \max\{\deg_w P(z,w), \deg_w Q(z,w) + 1\}.$$* *Proof.* Let $N = \deg_w P(z,w)$, $M = \deg_w Q(z,w)$, and $$P(z,w) = \sum_{i = 0}^N P_i(z)\, w^i,\ \ \ Q(z,w) = \sum_{i = 0}^M Q_i(z)\, w^i,$$ where $$P_i(z) = p_{i,0}\, z^{k_i} + p_{i,1}\,z^{k_i + 1} + \cdots, \ \ Q_i(z) = q_{i,0}\,z^{l_i} + q_{i,1}\, z^{l_i + 1} + \cdots$$ Substituting $$\label{eq:w} w(z) = \alpha_0\, z^{\lambda_0} + h.o.t.\ \ (\alpha_0\not=0, \lambda_0\in\mathbb{Q}^+)$$ into [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} gives $$\begin{aligned} 0&=&\sum_{i = 0}^MQ_i(z)(\alpha_0\,z^{\lambda_0} + h.o.t.)^i\,(\alpha_0\lambda_0\,z^{\lambda_0-1} + h.o.t.)
-\sum_{i=0}^NP_i(z)\,(\alpha_0\,z^{\lambda_0} + h.o.t.)^i \\ &=&\sum_{i=0}^M q_{i,0}\,{\lambda_0}\,\alpha_0^{i+1}\,z^{l_i + (i+1)\,\lambda_0 - 1} - \sum_{i=0}^N p_{i,0}\,\alpha_0^i\,z^{k_i + i\,\lambda_0} + h.o.t.\end{aligned}$$ Thus, at least two of the exponents $$l_i + (i+1)\,\lambda_0 - 1, \ \ k_j + j\,\lambda_0,\ \ (0\leq i\leq M,\ \ 0\leq j\leq N)$$ must be equal to each other and not larger than any other exponent, and $\alpha_0\not=0$ must annihilate the coefficient of the lowest-degree term. If this is the case, $(\lambda_0, \alpha_0)$ is said to be acceptable to [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}. Assume that $(0,0)$ is not algebraic critical (i.e., Lemma [Lemma 7](#le:1){reference-type="ref" reference="le:1"} is not satisfied); then the values $\lambda_0$ and $\alpha_0$ can be obtained using the Newton polygon [@Bruno; @Ceb] as follows. Let $\Gamma$ be the Newton open polygon of all the points (see Figure [\[fig:1\]](#fig:1){reference-type="ref" reference="fig:1"}) $$\label{eq:21} (i+1, l_i - 1),\ \ \ (j, k_j),\ \ (0\leq i\leq M,\ \ 0\leq j\leq N).$$ Let $\Gamma_{i_1}^{i_2}$ be an edge of $\Gamma$, with $i_1 < i_2$ the horizontal coordinates of its extreme vertices. If $-\lambda_0$ is the slope of $\Gamma_{i_1}^{i_2}$, then $\alpha_0$ satisfies a polynomial equation of degree $i_2-i_1$. In particular, $(\lambda_0, \alpha_0)$ is said to be $d$-folded if $\alpha_0$ is a $d$-fold root of this polynomial. Thus, for the edge $\Gamma_{i_1}^{i_2}$, there are at most $i_2 - i_1$ pairs $(\lambda_0, \alpha_0)$ that are acceptable to [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}. Hence, there are in total at most $\max\{M+1,N\}$ pairs $(\lambda_0, \alpha_0)$ that are acceptable to [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}.
(Figure [\[fig:1\]](#fig:1){reference-type="ref" reference="fig:1"}: the Newton open polygon $\Gamma$, with edges $\Gamma_0^1$, $\Gamma_1^2$ and $\Gamma_2^6$.) For each $(\lambda_0, \alpha_0)$ obtained in the first step, let $w(z) = \alpha_0\, z^{\lambda_0} + w_1(z)$; then $w_1(z)$ satisfies the equation $$\label{eq:w1} Q(z,\alpha_0\,z^{\lambda_0}+w_1) (\alpha_0\, {\lambda_0}\, z^{\lambda_0-1}+w'_1)-P(z,\alpha_0\, z^{\lambda_0}+w_1)=0.$$ Repeating the foregoing argument, if $(0,0)$ is not an algebraic critical point of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}, then there are finitely many solutions of [\[eq:w1\]](#eq:w1){reference-type="eqref" reference="eq:w1"} of the form $$\label{eq:w2} w_1(z) = \alpha_1\,z^{\lambda_1} + h.o.t. ,\ \ (\lambda_1 \in \mathbb{Q}^+,\ \ \lambda_1 > \lambda_0).$$ To complete the proof, it is sufficient to show that if $(\lambda_0, \alpha_0)$ is $d$-folded, then there are at most $d$ pairs $(\lambda_1, \alpha_1)$ with $\lambda_1 > \lambda_0$ which are acceptable to [\[eq:w1\]](#eq:w1){reference-type="eqref" reference="eq:w1"}.
Let $$\begin{aligned} Q_1(z,w_1) &=& Q(z,\alpha_0\, z^{\lambda_0} + w_1),\\ P_1(z,w_1) &=& P(z,\alpha_0\, z^{\lambda_0} + w_1) - \alpha_0\,\lambda_0\,z^{\lambda_0 - 1}\,Q(z,\alpha_0\,z^{\lambda_0} + w_1)\end{aligned}$$ then $w_1(z)$ satisfies $$\label{eq:w11} Q_1(z,w_1)\,w_1' - P_1(z,w_1) = 0$$ Write $$Q_1(z,w_1) = \sum_{i \geq 0}Q_{1,i}(z)\,w_1^i,\ \ \ P_1(z,w_1) = \sum_{i \geq 0}P_{1,i}(z)\,w_1^i$$ and let $l_{1,i}$ and $k_{1,i}$ be the lowest degrees of $Q_{1,i}(z)$ and $P_{1,i}(z)$ respectively, and $r_{1,i} = \min\{k_{1,i},l_{1,i-1}-1\}$. We will prove that if $(\lambda_0, \alpha_0)$ is $d$-folded, then for any $i>d$, $$\label{eq:31} r_{1,d} \leq r_{1,i} + (i - d)\,\lambda_0.$$ When [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"} is satisfied, there are at most $d$ pairs $(\lambda_1,\alpha_1)$ which are acceptable to [\[eq:w11\]](#eq:w11){reference-type="eqref" reference="eq:w11"} with $\lambda_1 > \lambda_0$. In fact, let $(\lambda_1,\alpha_1)$ be acceptable to [\[eq:w11\]](#eq:w11){reference-type="eqref" reference="eq:w11"}; then there exist $j_1 < j_2$ such that $$\lambda_1 = \dfrac{r_{1,j_1} - r_{1,j_2}}{j_2 - j_1} > \lambda_0$$ and $$r_{1,d} \geq r_{1,j_1} + (j_1 - d)\,\lambda_1,\ \ r_{1,d} \geq r_{1,j_2} + (j_2 - d)\,\lambda_1.$$ If $j_1 > d$ (or $j_2 > d$), then $$r_{1,d} > r_{1,j_1} + (j_1 - d)\,\lambda_0\ \ \ (\mathrm{or}\ \ r_{1,d} > r_{1,j_2} + (j_2 - d)\,\lambda_0),$$ which contradicts [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}. Hence, $j_1<j_2 \leq d$, and there are at most $d$ pairs $(\lambda_1,\alpha_1)$ (taking into account that $(0,0)$ is not algebraic critical).
To prove [\[eq:31\]](#eq:31){reference-type="eqref" reference="eq:31"}, let $$\begin{aligned} &&Q(z,\alpha\, z^{\lambda_0}) = \sum_{i \geq 0}\xi_i(\alpha)\, z^{s_i}\ \ \ \ \ \ \ \ \ (s_0 < s_1 < \cdots) \\ &&P(z,\alpha\, z^{\lambda_0}) = \sum_{i \geq 0}\eta_i(\alpha)\, z^{\tau_i}\ \ \ \ \ \ \ \ \ (\tau_0 < \tau_1 <\cdots)\end{aligned}$$ then $$\begin{aligned} Q_{1,i}(z) &=&\dfrac{1}{i!}\,z^{-i\,\lambda_0}\,\sum_{j \geq 0}\xi_j^{(i)}(\alpha_0)\, z^{s_j}\\ P_{1,i}(z) &=&\dfrac{1}{i!}\,z^{-i\,\lambda_0}\,\left(\sum_{j\geq 0}\eta_j^{(i)}(\alpha_0)\,z^{\tau_j} - \alpha_0\,\lambda_0\,z^{\lambda_0 - 1}\,\sum_{j \geq 0} \xi_j^{(i)}(\alpha_0)\,z^{s_j} \right)\end{aligned}$$ and hence $$\label{eq:46}r_{1,i} \geq \min\{\tau_0, s_0 + \lambda_0 - 1\} - i\, \lambda_0.$$ Thus, it is sufficient to show that $$\label{eq:23} \min\{k_{1,d}, l_{1,d-1} - 1\} = \min\{\tau_0, s_0 + \lambda_0 -1\} - d\,\lambda_0.$$ To this end, write $$\begin{aligned} Q_{1,d-1}(z) &=& \frac{1}{(d-1)!} \xi_0^{(d-1)}(\alpha_0)\,z^{s_0 + \lambda_0 - d\,\lambda_0} + h.o.t.\\ P_{1,d}(z) &=& \frac{1}{d!}\left(\eta_0^{(d)}(\alpha_0)\,z^{\tau_0} - \alpha_0\,\lambda_0\,\xi_0^{(d)}(\alpha_0)\,z^{s_0 + \lambda_0 - 1}\right)\cdot z^{-d\,\lambda_0} + h.o.t.\end{aligned}$$ and let $$P(z,\alpha\,z^{\lambda_0}) - \alpha\,\lambda_0\,z^{\lambda_0 - 1}\,Q(z,\alpha\,z^{\lambda_0}) = \varphi(\alpha)\,z^{v_0} + h.o.t.$$ Because $(\lambda_0,\alpha_0)$ is acceptable to [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} and $d$-folded, we have $$\label{eq:f} \varphi(\alpha_0) = \cdots = \varphi^{(d-1)}(\alpha_0) = 0,\ \varphi^{(d)}(\alpha_0)\not = 0.$$ Therefore, we have the following: 1. If $\tau_0 < s_0 + \lambda_0-1$, then $\varphi(\alpha) = \eta_0(\alpha)$ and $\eta_0^{(d)}(\alpha_0) \not = 0$. 2. If $s_0 + \lambda_0 -1 < \tau_0$, then $\varphi(\alpha) = -\lambda_0\,\alpha\, \xi_0(\alpha)$, and hence $\xi_0^{(d)}(\alpha_0) \not = 0$. 3.
If $s_0 + \lambda_0 -1 = \tau_0$, then $\varphi(\alpha) = \eta_0(\alpha) - \alpha \lambda_0 \xi_0(\alpha)$, and hence $$\varphi^{(d)}(\alpha_0)=- d\,\lambda_0 \xi_0^{(d-1)}(\alpha_0) + (\eta_0^{(d)}(\alpha_0) - \alpha_0 \lambda_0 \xi_0^{(d)}(\alpha_0)) \not = 0.$$ Thus, we have $\xi_0^{(d-1)}(\alpha_0)\not=0$ or $\eta_0^{(d)}(\alpha_0) - \alpha_0 \lambda_0\xi_0^{(d)}(\alpha_0)\not=0$. It is not difficult to verify that [\[eq:23\]](#eq:23){reference-type="eqref" reference="eq:23"} holds in each of the above cases, and thus the lemma follows. ◻ From the proof of Lemma [Lemma 9](#th:3){reference-type="ref" reference="th:3"}, the local algebraic solutions of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} at $(0,0)$ can be obtained by repeatedly applying the Newton polygon. Moreover, following this procedure, we will either stop in the case that $(0,0)$ is algebraic critical (Lemma [Lemma 7](#le:1){reference-type="ref" reference="le:1"}), or encounter a local algebraic solution of the form $$w(z) = \sum_{i = 0}^{k-1}\alpha_i\,z^{\lambda_i} + u(z)$$ where $(\lambda_{k-1}, \alpha_{k-1})$ is 1-folded, and $u(z)$ satisfies an equation $$\label{eq:38} \dfrac{\mathrm{d}u}{\mathrm{d}z} = \dfrac{\hat{P}(z,u)}{\hat{Q}(z,u)}$$ where $\hat{P}, \hat{Q}$ are Puiseux series. Whenever this is the case, we have the following. **Lemma 10**. *In the equation [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"} derived from [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"} through the above procedure, let $$\hat{P}(z,u) = \hat{p}_{0,0}z^{k_0} + \hat{p}_{1,0}z^{k_1}\,u + h.o.t., \ \ \hat{Q}(z,u) = \hat{q}_{0,0}z^{l_0} + h.o.t.$$ If $(\lambda_{k-1}, \alpha_{k-1})$ is 1-folded, and one of the following is satisfied:* 1. *$k_1\not=l_0-1$; or* 2.
*$k_1=l_0-1$, and $\hat{p}_{1,0}/\hat{q}_{0,0}\not\in (\lambda_{k-1},\infty)\cap \mathbb{Q}^+$,* *then $(0,0)$ is not algebraic critical for [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}.* *Proof.* Let $u(z)$ be a local algebraic solution of [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"}, expressed as $$\label{eq:43} u(z) = \sum_{i\geq k}\alpha_i\,z^{\lambda_i}$$ where $\lambda_i > \lambda_{i-1}\ (\forall i\geq k)$. We will show that the pairs $(\lambda_i, \alpha_i)$ are determined uniquely by [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"}. From the proof of Lemma [Lemma 9](#th:3){reference-type="ref" reference="th:3"}, we have $$k_0 - \min\{k_1,l_0-1\} > \lambda_{k-1}.$$ Hence, substituting [\[eq:43\]](#eq:43){reference-type="eqref" reference="eq:43"} into [\[eq:38\]](#eq:38){reference-type="eqref" reference="eq:38"}, and taking into account that $(\lambda_{k-1}, \alpha_{k-1})$ is 1-folded, and that either $k_1\not=l_0-1$, or $k_1=l_0-1$ with $\hat{p}_{1,0}/\hat{q}_{0,0}\not\in (\lambda_{k-1},\infty)\cap \mathbb{Q}^+$, we have $\lambda_k = k_0 - \min\{k_1,l_0-1\}$, and $\alpha_k$ is determined uniquely by $\hat{p}_{0,0}, \hat{q}_{0,0}, \hat{p}_{1,0}, k_1,l_0$. Therefore, $(\lambda_k, \alpha_k)$ is also 1-folded. Let $u(z) = \alpha_k\,z^{\lambda_k} + v(z)$; then $v(z)$ satisfies $$\label{eq:42}\dfrac{\mathrm{d}v}{\mathrm{d}z} = \dfrac{\hat{p}'_{0,0}\,z^{k_0'} + \hat{p}_{1,0}\,z^{k_1}\,v + h.o.t.}{\hat{q}_{0,0}\,z^{l_0} + h.o.t.}$$ where $k_0' > k_0$. In particular, the conditions of the Lemma remain valid for [\[eq:42\]](#eq:42){reference-type="eqref" reference="eq:42"}. Thus, we can repeat the procedure, so there is a unique solution $u(z)$ of the form [\[eq:43\]](#eq:43){reference-type="eqref" reference="eq:43"}, and $(0,0)$ is not algebraic critical for [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}. ◻ *Remark 11*.
In Lemma [Lemma 10](#le:2){reference-type="ref" reference="le:2"}, we might also find a solution of the form [\[eq:43\]](#eq:43){reference-type="eqref" reference="eq:43"} when $k_1=l_0-1$ and $\hat{p}_{1,0}/\hat{q}_{0,0}\in (\lambda_{k-1},\infty)\cap \mathbb{Q}^+$. However, when this is the case, we can identify two cases: 1. If $\hat{p}_{1,0}/\hat{q}_{0,0} \in (\lambda_i, \lambda_{i+1})\cap \mathbb{Q}$ for some $i\geq k-1$, then the condition in Lemma [Lemma 7](#le:1){reference-type="ref" reference="le:1"} is satisfied at the $i$'th step, and $(0,0)$ is algebraic critical. 2. If $\hat{p}_{1,0}/\hat{q}_{0,0} = \lambda_i$ for some $i$, then $(0,0)$ is not algebraic critical. In either case, we can stop the procedure in finitely many steps. Thus, the Newton polygon gives an effective way to compute the algebraic multiplicities of [\[eq:22\]](#eq:22){reference-type="eqref" reference="eq:22"}. **Example**  Consider the equation $$\label{eq:45} (z + w^2)\,w' - (z^2 + \mu w) = 0.$$ (Figure [\[fig:2\]](#fig:2){reference-type="ref" reference="fig:2"}: the Newton open polygon of [\[eq:45\]](#eq:45){reference-type="eqref" reference="eq:45"}, with vertices $(0,2)$, $(1,0)$, $(3,-1)$ and edges of slopes $-2$ and $-1/2$.) The Newton polygon of [\[eq:45\]](#eq:45){reference-type="eqref" reference="eq:45"} is shown in Figure [\[fig:2\]](#fig:2){reference-type="ref" reference="fig:2"}.
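The polygon data of [\[eq:45\]](#eq:45){reference-type="eqref" reference="eq:45"} can be reproduced mechanically: by [\[eq:21\]](#eq:21){reference-type="eqref" reference="eq:21"}, $P = z^2 + \mu w$ contributes the points $(0,2)$, $(1,0)$ and $Q = z + w^2$ contributes $(1,0)$, $(3,-1)$. The following sketch (with a hypothetical helper `lower_hull`, not from the paper) computes the edges and their slopes, and then solves the lowest-order balances on the $\lambda_0 = 2$ branch with exact rational arithmetic for a sample value $\mu = 3$.

```python
from fractions import Fraction

# Points (21) for (z + w^2) w' - (z^2 + mu w) = 0:
#   P = z^2 + mu*w gives (j, k_j)       = (0, 2), (1, 0);
#   Q = z + w^2    gives (i+1, l_i - 1) = (1, 0), (3, -1).
points = [(0, 2), (1, 0), (1, 0), (3, -1)]

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(pts):
    """Lower convex hull of the points, i.e. the Newton open polygon."""
    hull = []
    for p in sorted(set(pts)):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

hull = lower_hull(points)
# Each edge of slope -lambda0 proposes a candidate leading exponent lambda0.
lambdas = [Fraction(hull[i][1] - hull[i + 1][1], hull[i + 1][0] - hull[i][0])
           for i in range(len(hull) - 1)]
print(lambdas)  # the two edge exponents: 2 and 1/2

# On the lambda0 = 2 branch, w = a z^2 + b z^5 + ..., the z^2 and z^5
# balances read a (2 - mu) = 1 and b (5 - mu) = -2 a^3; e.g. for mu = 3:
mu = Fraction(3)
a = 1 / (2 - mu)
b = -2 * a ** 3 / (5 - mu)
```

The dicritical branch $\lambda_0 = \mu$ is not an edge slope: it comes from the double point $(1,0)$, where the lowest-order coefficient can vanish identically in $\alpha_0$.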
From the Newton polygon, if $\mu\in (1/2, 2)\cap \mathbb{Q}$, then $(0,0)$ is algebraic critical, with local algebraic solutions $$w(z) = \alpha_0 z^{\mu} + h.o.t.\ \ \ (\alpha_0\not=0).$$ Meanwhile, if $\mu\not\in (1/2, 2)\cap \mathbb{Q}$, the possible local algebraic solutions are $$\begin{aligned} w(z) &=& \frac{1}{2-\mu}\,z^2 + h.o.t.\ \ (\mathrm{if}\ \ \mu\not=2)\\ w(z) &=&\pm\sqrt{2\mu-1}\,z^{1/2} + h.o.t.\ \ (\mathrm{if}\ \mu\not=1/2)\end{aligned}$$ When $\mu\not=2$, let $$w(z) = \frac{1}{2-\mu}\,z^2 + w_{1,1}(z);$$ then $w_{1,1}(z)$ satisfies $$w_{1,1}' = \dfrac{2\,z^5 - (2-\mu)^3\,\mu w_{1,1} + h.o.t.}{-(2-\mu)^3\,z + h.o.t.}$$ Thus, we conclude the following. If $\mu\in (2,5)\cap \mathbb{Q}$, then $(0,0)$ is algebraic critical, with local algebraic solutions $$w(z) = \frac{1}{2-\mu}\,z^2 + \alpha_1\,z^\mu + h.o.t.\ \ \ (\alpha_1\not=0).$$ If $\mu\not=2,5$, we have the local algebraic solution $$w(z) = \frac{1}{2-\mu}\,z^2 - \frac{2}{(5-\mu)\,(2-\mu)^3}\,z^5 + h.o.t.$$ When $\mu\not\in [1/2,2)\cap\mathbb{Q}$, let $$w(z) = \sqrt{2\mu-1}\,z^{1/2} + w_{1,2}(z);$$ then $w_{1,2}(z)$ satisfies $$w_{1,2}' = \dfrac{2\,z^{5/2} + (2-2\mu)\,z^{1/2}\,w_{1,2} + h.o.t.}{4\,\mu\,z^{3/2} + h.o.t.}$$ Thus, if $\mu\not=1/5$, we have the local algebraic solution $$w(z) = \sqrt{2\mu-1}\,z^{1/2} + \frac{1}{5\,\mu - 1}\,z^2 +h.o.t.$$ Thus, repeating the above procedure, we can determine, for any given $\mu$, the algebraic multiplicity $\mathrm{Mul}(0,0)$ of [\[eq:45\]](#eq:45){reference-type="eqref" reference="eq:45"}. In particular, if $\mu\not\in (1/2,\infty)\cap \mathbb{Q}$, then $\mathrm{Mul}(0,0)\leq 3$. $\square$ In the rest of this section, we will prove the main results. *Proof of Theorem [Theorem 4](#th:2){reference-type="ref" reference="th:2"}.* Let $W$ be the set of all non-constant local algebraic solutions of [\[eq:17\]](#eq:17){reference-type="eqref" reference="eq:17"} at $(0,a_i)$ for some $0\leq i\leq k$.
Then $$|W| = \sum_{i=0}^k\mathrm{Mul}(0,a_i).$$ Let $f(z,w)$ be an irreducible strict Darboux polynomial of [\[eq:17\]](#eq:17){reference-type="eqref" reference="eq:17"}, and $m = \deg_wf(z,w)$; then there are $m$ algebraic functions $w_i(z)$ defined by $f(z,w) = 0$. It is sufficient to show that each algebraic function $w_i(z)$ belongs to $W$. To this end, we only need to show that $$\label{eq:44}\lim_{z\to 0}w_i(z)\in \{a_0,a_1,\cdots,a_k\}.$$ Consider the equation $$z\,Q(z,w)\,\dfrac{\partial f}{\partial z} + P(z,w)\,\dfrac{\partial f}{\partial w} = R_f(z,w)\,f(z,w).$$ Letting $z = 0$, we see that $f(0,w)$ satisfies $$P(0,w)\,f_w'(0,w) = R_f(0,w)\,f(0,w).$$ Thus $f(0,w)$ is a nonzero constant multiple of $\prod_{i = 0}^k (w - a_i)^{l_i},\ \ (l_i\geq 0)$, from which [\[eq:44\]](#eq:44){reference-type="eqref" reference="eq:44"} is easy to conclude. It is easy to see that $\mathrm{Mul}(0,\infty) \leq M$. Hence, if the singularities $(0,a_i)$ are not algebraic critical, then, from Lemma [Lemma 9](#th:3){reference-type="ref" reference="th:3"}, $$\deg_wf(z,w)\leq M\,(k+1).$$ ◻ *Proof of Theorem [Theorem 5](#th:main){reference-type="ref" reference="th:main"}.* If [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} has an invariant straight line $L$, then, performing a suitable transformation, we may assume that $L$ is given by $$a\,z + b\,w + c = 0\ \ \ \ (a\not=0)$$ and that $\deg f(z,w) = \deg_w f(\frac{z - b\,w - c}{a}, w)$. It is easy to see that the degree of the system does not increase under a linear transformation. Let $$\bar{w} = w, \quad \bar{z} = a\,z + b\,w + c;$$ then $\bar{w}(\bar{z})$ satisfies an equation of the form $$\label{eq:18} \dfrac{d \bar{w}}{d \bar{z}} = \dfrac{\bar{P}(\bar{z}, \bar{w})}{\bar{z}\,\bar{Q}(\bar{z}, \bar{w})},$$ where $\bar{P}(\bar{z}, \bar{w}), \bar{Q}(\bar{z}, \bar{w})$ are polynomials.
Moreover, $\bar{f}(\bar{z}, \bar{w}) = f(\frac{\bar{z} - b\,\bar{w} - c}{a}, \bar{w})$ is an irreducible Darboux polynomial of [\[eq:18\]](#eq:18){reference-type="eqref" reference="eq:18"}, and $\deg f(z,w) = \deg_{\bar{w}} \bar{f}(\bar{z}, \bar{w})$. Let $(a_i, b_i)\ (1\leq i \leq M)$ be the singular points of [\[eq:27\]](#eq:27){reference-type="eqref" reference="eq:27"} on $L$; then $(0, b_i)$ are singular points of [\[eq:18\]](#eq:18){reference-type="eqref" reference="eq:18"} at $\bar{z} = 0$, and they are not algebraic critical. Hence, applying Theorem [Theorem 4](#th:2){reference-type="ref" reference="th:2"} to [\[eq:18\]](#eq:18){reference-type="eqref" reference="eq:18"}, we obtain $$\deg f(z,w) = \deg_{\bar{w}} \bar{f}(\bar{z}, \bar{w}) \leq M\,(M + 1).$$ ◻ # Application to the 2D Lotka-Volterra system {#sec:2} In this section, we will apply Theorem [Theorem 4](#th:2){reference-type="ref" reference="th:2"} to the 2D Lotka-Volterra system: $$\label{lv:2-1} \dot{z} = z\,(z + c\,w - 1),\ \ \ \dot{w} = w\,(b\,z + w - a).$$ Invariant algebraic curves of the Lotka-Volterra system have been studied by many authors; for recent results on this topic, we refer to Ollagnier [@Ol:01-2], Cairó *et al.* [@JL:03] and the references therein. In Ollagnier [@Ol:01-2], the complete list of parameters for which the system has a strict invariant algebraic curve is presented. We will recover part of these results through the algebraic multiplicity. Note that ([\[lv:2-1\]](#lv:2-1){reference-type="ref" reference="lv:2-1"}) is invariant under the following transformations: $$\begin{aligned} \label{eq:11} (z,w,a,b,c)&\rightarrow& (\frac{w}{a}, \frac{z}{a}, \frac{1}{a}, c, b),\ \mathrm{if}\ a\not=0;\\ \label{eq:12} (z,w,a,b,c)&\rightarrow& (\frac{1}{z}, (1-c)\frac{w}{z}, 1-b, 1-a, \frac{c}{c-1}),\ \mathrm{if}\ c\not=1.\end{aligned}$$ The results in this section are also valid under the above transformations.
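As a quick sanity check on the Darboux structure used in this section, one can verify the Darboux condition $\dot{z}\,\partial_z f + \dot{w}\,\partial_w f = K\,f$ for the system directly with exact rational arithmetic; for instance, $f = a\,(z-1) + w$ admits the cofactor $K = z + w$ precisely when $a\,(1-c) + (1-b) = 0$. A minimal sketch (the parameter values and sample points are arbitrary):

```python
from fractions import Fraction

# Vector field of the 2D Lotka-Volterra system (lv:2-1):
#   zdot = z (z + c w - 1),   wdot = w (b z + w - a).
def vf(z, w, a, b, c):
    return z * (z + c * w - 1), w * (b * z + w - a)

def darboux_residuals(f, K, a, b, c, pts):
    """Evaluate  zdot f_z + wdot f_w - K f  at sample points, where
    f(z, w) returns (f_z, f_w, f) and K(z, w) is the candidate cofactor."""
    out = []
    for z, w in pts:
        zdot, wdot = vf(z, w, a, b, c)
        fz, fw, fval = f(z, w)
        out.append(zdot * fz + wdot * fw - K(z, w) * fval)
    return out

# Sample parameters satisfying a (1 - c) + (1 - b) = 0:
b, c = Fraction(3), Fraction(2)
a = (b - 1) / (1 - c)                                    # a = -2 here
line = lambda z, w: (a, Fraction(1), a * (z - 1) + w)    # f = a (z - 1) + w
cof = lambda z, w: z + w                                 # candidate cofactor
pts = [(Fraction(1, 3), Fraction(5, 7)), (Fraction(-2), Fraction(9, 4)),
       (Fraction(11, 5), Fraction(-1, 2))]
res = darboux_residuals(line, cof, a, b, c, pts)
print(res)  # every residual is exactly 0
```

The same helper confirms the invariant lines $z = 0$ and $w = 0$, with cofactors $z + cw - 1$ and $bz + w - a$ respectively.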
Since $z = 0$ and $w = 0$ are invariant straight lines of [\[lv:2-1\]](#lv:2-1){reference-type="eqref" reference="lv:2-1"}, Theorem [Theorem 4](#th:2){reference-type="ref" reference="th:2"} is applicable. **Proposition 12**. *If the 2D L-V system $$\label{lve} \dfrac{\mathrm{d}w}{\mathrm{d}z} = \dfrac{w\,(b\,z+w-a)}{z\,(z+c\,w-1)}$$ has a strict Darboux polynomial $f$, then $$\begin{array}{rcll} \deg_w f(z,w)&\leq& \mathrm{Mul}(0,\infty) + \mathrm{Mul}(0,a) + \mathrm{Mul}(0,0),&\ \ \mathrm{if}\ \ a\not=0, \\ \deg_w f(z,w)&\leq& \mathrm{Mul}(0,\infty) + \mathrm{Mul}(0,0),&\ \ \mathrm{if}\ \ a = 0. \\ \end{array}$$* In particular, we have the following. **Proposition 13**. *If in [\[lve\]](#lve){reference-type="eqref" reference="lve"}, $$\label{eq:1} a\not\in \mathbb{Q}^+, \quad c\not\in \mathbb{Q}^-, \quad c - \dfrac{1}{a}\not\in\mathbb{Q}^+ \backslash\{1\},$$ then [\[lve\]](#lve){reference-type="eqref" reference="lve"} has a strict invariant algebraic curve if and only if $$a (1-c) + (1-b) = 0,$$ and the invariant algebraic curve is given by $$a (z-1) + w = 0.$$* *Proof.* When [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} is satisfied, the singularities $(0,0), (0,a), (0,\infty)$ are not algebraic critical, and $$\mathrm{Mul}(0,0) = 0, \quad \mathrm{Mul}(0,a)\leq 1, \quad \mathrm{Mul}(0,\infty) = 0.$$ Hence, if $f(z,w)$ is a strict irreducible Darboux polynomial, then $\deg_w f = 1$, from which the result is easy to conclude. ◻ Proposition [Proposition 13](#main:2){reference-type="ref" reference="main:2"} shows that the algebraic multiplicities may give an exact bound for the degree of the Darboux polynomial in particular cases. However, if there are algebraic critical points among the singularities, [\[eq:20\]](#eq:20){reference-type="eqref" reference="eq:20"} does not provide a finite bound. In this case, as we have seen from Lemma [Lemma 7](#le:1){reference-type="ref" reference="le:1"}, there are infinitely many local algebraic solutions.
On the other hand, this does not automatically imply that all these local algebraic solutions are algebraic functions. Hence, we come to the following concrete problem: if a singular point of a system is algebraic critical, how many of the local algebraic solutions are actually algebraic functions? Additional work is required to settle this problem, and one may hope that its solution will lead to the final resolution of the Poincaré problem. A. D. Bruno, *Local Methods in Nonlinear Differential Equations*, Springer Series in Soviet Mathematics, Springer-Verlag, Berlin, 1989. M. M. Carnicer, *The Poincaré problem in the nondicritical case*, Ann. Math., **140** (1994), 289--294. D. Cerveau, A. Lins Neto, *Holomorphic foliations in **CP**(2) having an invariant algebraic curve*, Ann. Inst. Fourier, **41** (1991), 883--903. A. Campillo, M. M. Carnicer, *Proximity inequalities and bounds for the degree of invariant curves by foliations of $\mathbb{P}_\mathbb{C}^2$*, Trans. Amer. Math. Soc., **349** (6) (1997), 2211--2228. L. Cairó, H. Giacomini and J. Llibre, *Liouvillian first integrals for the planar Lotka-Volterra system*, Rendiconti del Circolo Matematico di Palermo, **52** (2003), 389--418. N. G. Chebotarev, *Theory of Algebraic Functions*, Higher Education Press, Beijing, 1956 (in Chinese), translated from the Russian by Dingzhong Xia and Zhizhong Dai. J. Lei, K. Guan, *Analytic expansion of solution passing singular point of second order polynomial system*, Chinese Ann. Math., Ser. A, **22A** (5) (2001), 571--576 (in Chinese). J. M. Ollagnier, *About a conjecture on quadratic vector fields*, J. Pure and Applied Algebra, **165** (2001), 227--234. J. M. Ollagnier, *Liouvillian integration of the Lotka-Volterra system*, Qualitative Theory of Dynamical Systems, **2** (2001), 307--358. J. V. Pereira, *Vector fields, invariant varieties and linear systems*, Ann. Inst. Fourier (Grenoble), **51** (5) (2001), 1385--1405. D. Schlomiuk, *Algebraic and geometric aspects of the theory of polynomial vector fields*, in *Bifurcations and Periodic Orbits of Vector Fields* (D. Schlomiuk, Ed.), Kluwer Academic, Dordrecht, 1993, pp. 429--467. S. Walcher, *On the Poincaré problem*, J. Diff. Eqs., **166** (2000), 51--78. doi:10.1006/jdeq.2000.3801 # Acknowledgement {#acknowledgement .unnumbered} The authors would like to thank Professors Zhiming Zheng and Dongming Wang for their work in the organization of the seminar of DESC2004. The authors are also grateful to the referees for their helpful suggestions. [^1]: This work was supported by the National Natural Science Foundation of China (10301006)
--- author: - "Ruoci Sun[^1]" title: The matrix Szegő equation --- $\mathbf{Abstract}$ This paper is dedicated to studying the matrix solutions to the cubic Szegő equation, introduced in Gérard--Grellier [@GGANNENS], leading to the cubic matrix Szegő equation on the torus, $$i \partial_t U = \Pi_{\geq 0} \left(U U ^* U \right), \quad \Pi_{\geq 0}: \sum_{n\in \mathbb{Z}}\hat{U}(n) e^{inx} \mapsto \sum_{n \geq 0} \hat{U}(n) e^{inx}.$$This equation enjoys a two-Lax-pair structure, which allows every solution to be expressed explicitly in terms of the initial data $U(0)$ and the time $t \in \mathbb{R}$.\ $\mathbf{Keywords}$ Szegő operator, Lax pair, explicit formula, Hankel operators, Toeplitz operators.\ # Introduction Given two arbitrary positive integers $M,N \in \mathbb{N}_+$, the cubic $M \times N$ matrix Szegő equation on the torus $\mathbb{T}= \mathbb{R}\slash 2\pi \mathbb{Z}$ reads as $$\label{MSzego} i \partial_t U = \Pi_{\geq 0} \left(U U ^* U \right), \quad U=U(t,x) \in \mathbb{C}^{M \times N}, \quad (t,x)\in \mathbb{R} \times \mathbb{T},$$where $\Pi_{\geq 0}: L^2(\mathbb{T}; \mathbb{C}^{M \times N}) \to L^2(\mathbb{T}; \mathbb{C}^{M \times N})$ denotes the Szegő operator, which cancels all negative Fourier modes and preserves the nonnegative Fourier modes, i.e. $$\label{MSzegoop} \left(\Pi_{\geq 0} U\right)(x) = \sum_{n\geq 0}\hat{U}(n) e^{inx}, \quad \hat{U}(n) = \tfrac{1}{2\pi}\int_{0}^{2\pi} U(x)e^{-inx} \mathrm{d}x \in \mathbb{C}^{M \times N}, \quad \forall n \in \mathbb{Z},$$for any $U =\sum_{n\in \mathbb{Z}}\hat{U}(n) e^{inx}\in L^2(\mathbb{T}; \mathbb{C}^{M \times N})$. ## Motivation The motivation to introduce equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} is based on the following two facts.
On the one hand, the cubic scalar Szegő equation on the torus $\mathbb{T}$, $$\label{sSzego} i\partial_t u = \Pi_{\geq 0}(|u|^2 u), \quad u=u(t,x) \in \mathbb{C}, \quad (t,x) \in \mathbb{R} \times \mathbb{T}, \quad \Pi_{\geq 0}: \sum_{n\in \mathbb{Z}}a_n e^{inx} \mapsto \sum_{n \geq 0}a_n e^{inx},$$which is introduced in Gérard--Grellier [@GGANNENS; @GGinvent; @GGAPDE; @GGTurb2015; @GGTAMS; @GerardGrellierBook] and Gérard--Pushnitski [@GerPush2023], is a model of a nondispersive Hamiltonian equation. It enjoys a two-Lax-pair structure, which allows one to establish action--angle coordinates on the finite rank manifolds and an explicit formula for an arbitrary $L^2$-solution, leading to the complete integrability of [\[sSzego\]](#sSzego){reference-type="eqref" reference="sSzego"}. Thanks to its integrable system structure, P. Gérard and S. Grellier also construct weakly turbulent solutions, and obtain the quasi-periodicity of rational solutions and the classification of stationary and traveling waves. The cubic scalar Szegő equation on the line $\mathbb{R}$ is studied in the works Pocovnicu [@poAPDE2011; @poDCDS2011] and Gérard--Pushnitski [@GerPush2023].\ On the other hand, if we consider the matrix generalizations of the following integrable systems, the corresponding matrix equations still enjoy a Lax pair structure. Given any $d \in \mathbb{N}_+$, the filtered Sobolev spaces are given by $H^s_+(\mathbb{T}; \mathbb{C}^{N \times d}):=\Pi_{\geq 0}(H^s(\mathbb{T}; \mathbb{C}^{N \times d}))$, $\forall s \geq 0$. The right Toeplitz operator of symbol $V \in L^2(\mathbb{T}; \mathbb{C}^{M \times N})$ is defined by $$\label{Rtoep} \begin{split} & \mathbf{T}^{(\mathbf{r})}_{V} : G \in H^1_+(\mathbb{T}; \mathbb{C}^{N \times d}) \mapsto \mathbf{T}^{(\mathbf{r})}_{V}(G) = \Pi_{\geq 0}(V G) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}).\\ \end {split}$$ 1.
The matrix cubic intertwined derivative Schrödinger system of Calogero--Moser--Sutherland type on $\mathbb{T}$, which is introduced in Sun [@SUNInterCMSDNLS] $$\label{IdnlsCMS} \begin{cases} \partial_t U = i \partial_x^2 U + U \partial_x \Pi_{\geq 0}\left(V^* U \right) + V \partial_x \Pi_{\geq 0}\left(U^* U \right),\\ \partial_t V = i \partial_x^2 V + V \partial_x \Pi_{\geq 0}\left(U^* V \right) + U \partial_x \Pi_{\geq 0}\left(V^* V \right), \end {cases} \quad (t,x) \in \mathbb{R} \times \mathbb{T},$$where $U=U(t), V=V(t) \in H^2_+ (\mathbb{T}; \mathbb{C}^{M \times N})$, enjoys the following Lax pair structure $$\begin{split} \mathbf{L}_{(U,V)}^{\mathrm{dNLS}} = & \mathrm{D} - \tfrac{1}{2}\left(\mathbf{T}_{U }^{(\mathbf{r})} \mathbf{T}_{V^*}^{(\mathbf{r})} + \mathbf{T}_{V}^{(\mathbf{r})} \mathbf{T}_{U^*}^{(\mathbf{r})}\right), \quad \mathrm{D}=-i \partial_x, \quad \forall U, V \in H^1_+ (\mathbb{T}; \mathbb{C}^{M \times N}), \\ \mathbf{B}_{(U,V)}^{\mathrm{dNLS}} = & \tfrac{1}{2}\left(\mathbf{T}_{U}^{(\mathbf{r})} \mathbf{T}_{\partial_x V^*}^{(\mathbf{r})} + \mathbf{T}_{V}^{(\mathbf{r})} \mathbf{T}_{\partial_x U^*}^{(\mathbf{r})} - \mathbf{T}_{\partial_x V}^{(\mathbf{r})} \mathbf{T}_{U^*}^{(\mathbf{r})} - \mathbf{T}_{\partial_x U}^{(\mathbf{r})} \mathbf{T}_{V^*}^{(\mathbf{r})} \right) + \tfrac{i}{4}\left(\mathbf{T}_{U}^{(\mathbf{r})} \mathbf{T}_{V^*}^{(\mathbf{r})} + \mathbf{T}_{V}^{(\mathbf{r})} \mathbf{T}_{U^*}^{(\mathbf{r})} \right)^2. \end{split}$$where $\mathbf{L}_{(U,V)}^{\mathrm{dNLS}},\mathbf{B}_{(U,V)}^{\mathrm{dNLS}}: H^1_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ are densely defined operators. 
The scalar version of equation [\[IdnlsCMS\]](#IdnlsCMS){reference-type="eqref" reference="IdnlsCMS"} is a generalization and integrable perturbation of both the linear Schrödinger equation and the defocusing$\slash$focusing Calogero--Moser--Sutherland dNLS equation, introduced in Badreddine [@Badreddine2023] and Gérard--Lenzmann [@Gerard-Lenzmann2022]. 2. The spin Benjamin--Ono (sBO) equation on $\mathbb{T}$, introduced in Berntson--Langmann--Lenells [@sBOBLL2022], $$\label{SBO1} \partial_t U = \partial_x \left(|\mathrm{D}| U - U^2\right) - i [U, |\mathrm{D}| U], \quad U=U(t,x) \in \mathbb{C}^{N \times N}, \quad (t,x) \in \mathbb{R}\times \mathbb{T}, \quad \mathrm{D}=-i \partial_x,$$enjoys the following Lax pair structure, thanks to P. Gérard's work [@sBOLaxP2022], $$\mathbf{L}_U^{\mathrm{sBO}} := \mathrm{D}-\mathbf{T}_U^{(\mathbf{r})}, \quad \mathbf{B}_U^{\mathrm{sBO}} := i \mathbf{T}_{|\mathrm{D}|U}^{(\mathbf{r})} - i\left( \mathbf{T}_{U}^{(\mathbf{r})} \right)^2 , \quad \forall U \in C^{\infty}(\mathbb{T}; \mathbb{C}^{N \times N}).$$ 3. The matrix KdV equation on $\mathbb{T}$ $$\label{NKdV} \partial_t U = 3 \partial_x (U^2) - \partial_x^3 U, \quad U=U(t,x) \in \mathbb{C}^{N \times N}, \quad (t,x) \in \mathbb{R} \times \mathbb{T},$$enjoys the following Lax pair structure, thanks to the work of Lax [@LaxPairCPAMKdV], $$\mathbf{L}_U^{\mathrm{KdV}}= U - \partial_x^2, \quad \mathbf{B}_U^{\mathrm{KdV}}= - 4\partial_x^3 + 6U \partial_x + 3 (\partial_x U), \quad \forall U \in C^{\infty}(\mathbb{T}; \mathbb{C}^{N \times N}).$$ 4.
The matrix cubic Schrödinger system on $\mathbb{T}$ $$\label{MNLS} \begin{cases} i \partial_t U = -\partial_x^2 U + 2 UV^*U,\\ i \partial_t V = -\partial_x^2 V + 2 VU^*V,\\ \end{cases}\quad U=U(t,x), \; V=V(t,x)\in \mathbb{C}^{M\times N}, \quad (t,x) \in \mathbb{R} \times \mathbb{T},$$enjoys the following Lax pair structure, thanks to the work of Zakharov--Shabat [@ZS1972], $$\mathbf{L}_{\left( U, V\right)}^{\mathrm{NLS}}= \begin{pmatrix} i \partial_x & U\\ V^* & -i \partial_x \end{pmatrix} , \; \mathbf{B}_{\left( U, V\right)}^{\mathrm{NLS}}=\begin{pmatrix} 2 i \partial_x^2 -iUV^* & \partial_x U + 2U\partial_x\\ \partial_x V^* + 2V^*\partial_x & -2i \partial_x^2 +iV^*U \end{pmatrix}, \; \forall U, V \in C^{\infty}(\mathbb{T}; \mathbb{C}^{M \times N}).$$ In the previous examples, if the scalar multiplication is replaced by the right multiplication of matrices, then the Lax pair of the original scalar equation becomes a Lax pair of the corresponding matrix equation. This leads naturally to the following question. **Question 1**. *If we substitute the right multiplication of matrices for the scalar multiplication in the Lax pair when performing the matrix generalization of an **arbitrary** integrable equation, will this operation always give a 'Lax pair' for the corresponding matrix equation?* The answer is **negative**, as shown by the matrix generalization of the cubic Szegő equation from [\[sSzego\]](#sSzego){reference-type="eqref" reference="sSzego"} to [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}. Recall that the scalar Hankel operator of symbol $u \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C})$ is given by $$\label{Hankelsca} H_u : f \in L^2_+(\mathbb{T}; \mathbb{C}) \mapsto H_u(f):=\Pi_{\geq 0}(u \bar{f}) \in L^2_+(\mathbb{T}; \mathbb{C}).$$The Hankel operator $H_u$ is a $\mathbb{C}$-antilinear Hilbert--Schmidt operator on $L^2_+(\mathbb{T}; \mathbb{C})$.
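As a quick numerical sanity check (our own illustration, not part of the analysis), the scalar Hankel operator acts on truncated Fourier coefficients through $\widehat{H_u(f)}(n) = \sum_{p\geq 0}\hat{u}(n+p)\overline{\hat{f}(p)}$, i.e. through the Hankel matrix $(\hat{u}(n+p))_{n,p}$ applied antilinearly. The following Python sketch (sizes and random seed are arbitrary choices of ours) verifies the $\mathbb{C}$-antilinearity of $H_u$ and the symmetry $\langle H_u(f), g\rangle = \langle H_u(g), f\rangle$, which is exact for trigonometric polynomials.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # truncation order: u = sum_{n=0}^{K-1} u_hat[n] e_n

def hankel(u_hat, f_hat):
    """Fourier coefficients of H_u(f) = Pi_{>=0}(u * conj(f)):
    coefficient n is sum_{p>=0} u_hat[n+p] * conj(f_hat[p]) (exact, since
    u_hat vanishes beyond the truncation order)."""
    K = len(u_hat)
    out = np.zeros(K, dtype=complex)
    for n in range(K):
        for p in range(K - n):
            out[n] += u_hat[n + p] * np.conj(f_hat[p])
    return out

u = rng.standard_normal(K) + 1j * rng.standard_normal(K)
f = rng.standard_normal(K) + 1j * rng.standard_normal(K)
g = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# C-antilinearity: H_u(lam * f) = conj(lam) * H_u(f)
lam = 2.0 - 3.0j
assert np.allclose(hankel(u, lam * f), np.conj(lam) * hankel(u, f))

# symmetry <H_u(f), g> = <H_u(g), f>, with <a, b> = sum_n a_n conj(b_n)
ip = lambda a, b: np.vdot(b, a)  # np.vdot conjugates its first argument
assert np.isclose(ip(hankel(u, f), g), ip(hankel(u, g), f))
```

The symmetry checked here is the scalar case of the matrix identity [\[HrHlinnprod\]](#HrHlinnprod){reference-type="eqref" reference="HrHlinnprod"} below.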
The scalar Toeplitz operator of symbol $b \in L^{\infty} (\mathbb{T}; \mathbb{C})$, which is given by $$\label{Toeplitzsca} T_b : f \in L^2_+(\mathbb{T}; \mathbb{C}) \mapsto T_b(f) =\Pi_{\geq 0}(bf) \in L^2_+(\mathbb{T}; \mathbb{C}),$$is a bounded $\mathbb{C}$-linear operator on $L^2_+(\mathbb{T}; \mathbb{C})$. If $u \in H^s_+(\mathbb{T}; \mathbb{C})$ for some $s> \frac{1}{2}$, set $B_u = \tfrac{i}{2}H_u^2 - i T_{|u|^2}$. According to Gérard--Grellier [@GGANNENS], $(H_u, B_u)$ is a Lax pair of [\[sSzego\]](#sSzego){reference-type="eqref" reference="sSzego"}, i.e. the function $u \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}))$ solves [\[sSzego\]](#sSzego){reference-type="eqref" reference="sSzego"} if and only if $$\label{LaxPScaSzego} \tfrac{\mathrm{d}}{\mathrm{d}t} H_{u(t)} = [B_{u(t)}, H_{u(t)}].$$When passing to the matrix Szegő equation, the Hankel operator admits two different matrix generalizations. Given $d, M,N\in\mathbb{N}_+$, the left and right Hankel operators of symbol $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$ are defined by $$\label{rlHankelIntro} \begin{split} & \mathbf{H}^{(\mathbf{r})}_{U} : F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \mapsto \mathbf{H}^{(\mathbf{r})}_{U}(F) = \Pi_{\geq 0}(U F^*) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ & \mathbf{H}^{(\mathbf{l})}_{U} : G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \mapsto \mathbf{H}^{(\mathbf{l})}_{U}(G) = \Pi_{\geq 0}(G^* U) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}).\\ \end {split}$$If $M \ne N$, then $L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \bigcap L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) = \emptyset$.
So it is impossible to find $d \in \mathbb{N}_+$ and an operator $\mathbf{B}_U$ such that the Lie bracket $[\mathbf{B}_U, \mathbf{H}^{(\mathbf{r})}_{U} ] =\mathbf{B}_U \mathbf{H}^{(\mathbf{r})}_{U} - \mathbf{H}^{(\mathbf{r})}_{U}\mathbf{B}_U$ is a well-defined operator from $L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ to $L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, according to the rules of matrix addition and multiplication. A similar obstruction holds for $\mathbf{H}^{(\mathbf{l})}_{U}$. Consequently, neither $\mathbf{H}^{(\mathbf{r})}_{U}$ nor $\mathbf{H}^{(\mathbf{l})}_{U}$ can be a candidate for the Lax operator of the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, while the single scalar Hankel operator $H_u$ is a Lax operator of the scalar Szegő equation [\[sSzego\]](#sSzego){reference-type="eqref" reference="sSzego"}. Unlike the previous integrable systems [\[IdnlsCMS\]](#IdnlsCMS){reference-type="eqref" reference="IdnlsCMS"}, [\[SBO1\]](#SBO1){reference-type="eqref" reference="SBO1"}, [\[NKdV\]](#NKdV){reference-type="eqref" reference="NKdV"}, [\[MNLS\]](#MNLS){reference-type="eqref" reference="MNLS"}, the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} provides a counterexample to Question $\ref{ScamultoMaMul}$.\ When generalizing a scalar equation to its matrix counterpart, the transpose transform $$\label{Transpose} \mathfrak{T}=\mathfrak{T}^{-1}: A \in \mathbb{C}^{M \times N} \mapsto \mathfrak{T}(A) = A^{\mathrm{T}} \in \mathbb{C}^{N \times M}$$becomes nontrivial if $(M,N) \ne (1,1)$. The matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} is invariant under transposing, i.e.
if $U \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ solves [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, so does $U^{\mathrm{T}} \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{N\times M}))$, $\forall s >\frac{1}{2}$. In addition, the left Hankel operator $\mathbf{H}^{(\mathbf{l})}_{U}$ is conjugate to the right Hankel operator $\mathbf{H}^{(\mathbf{r})}_{U^{\mathrm{T}}}$ via the transpose transform, i.e. $$\label{TconjHrHl} \mathbf{H}^{(\mathbf{l})}_{U} = \mathfrak{T}\circ \mathbf{H}^{(\mathbf{r})}_{U^{\mathrm{T}}} \circ\mathfrak{T}, \quad \mathbf{H}^{(\mathbf{r})}_{U} = \mathfrak{T}\circ \mathbf{H}^{(\mathbf{l})}_{U^{\mathrm{T}}} \circ\mathfrak{T}.$$Even though a single matrix Hankel operator fails to be a Lax operator of [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} still enjoys a Lax pair structure, which is provided by the double matrix Hankel operators $\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}$ and $\mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}$. Before stating the main results, we state the high-regularity global well-posedness result for [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}. **Proposition 2**. *Given $U_0 \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$, there exists a unique function $U \in C(\mathbb{R}; H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ solving the cubic matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} such that $U(0)=U_0$. For each $T>0$, the flow map $\Phi : U_0 \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N}) \mapsto U\in C([-T, T]; H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ is continuous.
Moreover, if $U_0 \in H^s_+(\mathbb{T}; \mathbb{C}^{M\times N})$ for some $s > \frac{1}{2}$, then $U \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}))$.* ## Main results Given $n \in \mathbb{Z}$, we set $\mathbf{e}_n : x \in \mathbb{T} \mapsto e^{inx} \in \mathbb{C}$. Given any positive integers $M,N\in\mathbb{N}_+$, the shift operator $\mathbf{S}$ is defined by $$\label{Sdef} \mathbf{S}: F \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N}) \mapsto \mathbf{e}_1 F \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N}).$$Its $L^2_+$-adjoint is denoted by $\mathbf{S}^* \in \mathcal{B}\left(L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})\right)$. If $F= \sum_{n \geq 0}\hat{F}(n)\mathbf{e}_n \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$, $$\label{S*defIntro} \mathbf{S}^* (F)= \Pi_{\geq 0}\left( \mathbf{e}_{-1} F\right) = \sum_{n \geq 0}\hat{F}(n+1)\mathbf{e}_n.$$The left and right shifted Hankel operators of the symbol $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$ are defined by $$\label{rlShHanIntro} \begin{split} & \mathbf{K}^{(\mathbf{r})}_{U} = \mathbf{H}^{(\mathbf{r})}_{U}\mathbf{S} = \mathbf{S}^* \mathbf{H}^{(\mathbf{r})}_{U} =\mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* U}, \quad \mathbf{K}^{(\mathbf{l})}_{U} = \mathbf{H}^{(\mathbf{l})}_{U}\mathbf{S} = \mathbf{S}^* \mathbf{H}^{(\mathbf{l})}_{U} =\mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* U}. \end{split}$$The left Toeplitz operator of symbol $V \in L^2(\mathbb{T}; \mathbb{C}^{M \times N})$ is defined by $$\label{Ltoep} \begin{split} \mathbf{T}^{(\mathbf{l})}_{V} : F \in H^1_+(\mathbb{T}; \mathbb{C}^{d \times M}) \mapsto \mathbf{T}^{(\mathbf{l})}_{V}(F) = \Pi_{\geq 0}(FV) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}), \quad \forall d \in \mathbb{N}_+. \end {split}$$The first result of this paper gives the two-Lax-pair structure of the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}. **Theorem 3**. 
*Given $M,N,d \in \mathbb{N}_+$ and $s>\tfrac{1}{2}$, if $U \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ solves the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, then the time-dependent operators $\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}, \mathbf{K}^{(\mathbf{r})}_{U}\mathbf{K}^{(\mathbf{l})}_{U} \in C^{\infty}(\mathbb{R}; \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{M\times d})))$ and $\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}, \mathbf{K}^{(\mathbf{l})}_{U}\mathbf{K}^{(\mathbf{r})}_{U} \in C^{\infty}(\mathbb{R}; \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{d\times N})))$ satisfy the following identities: $$\label{4HeiLaxMSzego} \begin{split} & \tfrac{\mathrm{d}}{\mathrm{d}t}(\mathbf{H}^{(\mathbf{r})}_{U(t)}\mathbf{H}^{(\mathbf{l})}_{U(t)}) = i[\mathbf{H}^{(\mathbf{r})}_{U(t)}\mathbf{H}^{(\mathbf{l})}_{U(t)}, \; \mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*}]; \quad \tfrac{\mathrm{d}}{\mathrm{d}t}(\mathbf{H}^{(\mathbf{l})}_{U(t)}\mathbf{H}^{(\mathbf{r})}_{U(t)}) = i[\mathbf{H}^{(\mathbf{l})}_{U(t)}\mathbf{H}^{(\mathbf{r})}_{U(t)}, \; \mathbf{T}^{(\mathbf{l})}_{U(t)^* U(t)}]; \\ & \tfrac{\mathrm{d}}{\mathrm{d}t}(\mathbf{K}^{(\mathbf{r})}_{U(t)}\mathbf{K}^{(\mathbf{l})}_{U(t)}) = i[\mathbf{K}^{(\mathbf{r})}_{U(t)}\mathbf{K}^{(\mathbf{l})}_{U(t)}, \; \mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*}]; \quad \tfrac{\mathrm{d}}{\mathrm{d}t}(\mathbf{K}^{(\mathbf{l})}_{U(t)}\mathbf{K}^{(\mathbf{r})}_{U(t)}) = i[\mathbf{K}^{(\mathbf{l})}_{U(t)}\mathbf{K}^{(\mathbf{r})}_{U(t)}, \; \mathbf{T}^{(\mathbf{l})}_{U(t)^* U(t)}]. \end{split}$$* **Remark 4**. *In the scalar case, i.e. $M=N=1$, the transpose transform $\mathfrak{T}$ becomes trivial and the right Hankel operator $\mathbf{H}^{(\mathbf{r})}_{U}$ coincides with the left Hankel operator $\mathbf{H}^{(\mathbf{l})}_{U}$. 
In that case, the single Hankel operator $H_u$ becomes a Lax operator of the cubic scalar Szegő equation [\[sSzego\]](#sSzego){reference-type="eqref" reference="sSzego"}.* Thanks to the unitary equivalence between $\mathbf{H}^{(\mathbf{r})}_{U(t)}\mathbf{H}^{(\mathbf{l})}_{U(t)}$ and $\mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)}$, we have the following large-time estimate for the high regularity Sobolev norms of the solution to [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}. **Corollary 5**. *There exists a constant $\mathcal{C}_s >0$ such that if $U \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ solves [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} for some $s>1$, then $\sup_{t \in \mathbb{R}}\|U(t)\|_{L^{\infty}(\mathbb{T}; \mathbb{C}^{M \times N})} \leq 2 \mathrm{Tr}\sqrt{\mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)}} \leq \mathcal{C}_s \|U(0)\|_{H^{s}(\mathbb{T}; \mathbb{C}^{M \times N})}$ and $$\label{AtmostExpGrowth} \sup_{t \in \mathbb{R}}\|U(t)\|_{H^{s}(\mathbb{T}; \mathbb{C}^{M \times N})} \leq \mathcal{C}_s e^{\mathcal{C}_s |t| \|U(0)\|_{H^{s}(\mathbb{T}; \mathbb{C}^{M \times N})} } \|U(0)\|_{H^{s}(\mathbb{T}; \mathbb{C}^{M \times N})}.$$* Given any positive integers $M,N\in\mathbb{N}_+$, the integral operator is defined by $$\label{IntdefIntro} \mathbf{I} : F \in L^1 (\mathbb{T}; \mathbb{C}^{M\times N}) \mapsto \mathbf{I}(F)= \hat{F}(0) =\frac{1}{2\pi} \int_{0}^{2 \pi} F(x) \mathrm{d}x \in \mathbb{C}^{M \times N}.$$Inspired by the works of Gérard--Grellier [@GGTAMS], Gérard [@BOExp2022] and Badreddine [@Badreddine2023], we compare three families of unitary operators acting on the shift operator $\mathbf{S}^* \in \mathcal{B}\left(L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})\right)$ by conjugation in order to linearize the Hamiltonian flow of [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} and obtain an explicit expression of the Poisson integral
of every $H^{\frac{1}{2}}_+$-solution. This explicit formula is given by the following theorem. **Theorem 6**. *Given $M,N \in \mathbb{N}_+$, if $U : t \in \mathbb{R} \mapsto U(t)=\sum_{n \geq 0}\hat{U}(t,n)\mathbf{e}_n\in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$ solves the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} with initial data $U(0)=U_0 \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$, then $$\label{mszeExpFor} \begin{split} \hat{U}(t,n) = & \mathbf{I}\left( (e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}} e^{it \mathbf{K}^{(\mathbf{r})}_{U_0}\mathbf{K}^{(\mathbf{l})}_{U_0}} \mathbf{S}^*)^n e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}} (U_0) \right) \\ = & \mathbf{I}\left( (e^{-it \mathbf{H}^{(\mathbf{l})}_{U_0}\mathbf{H}^{(\mathbf{r})}_{U_0}} e^{it \mathbf{K}^{(\mathbf{l})}_{U_0}\mathbf{K}^{(\mathbf{r})}_{U_0}} \mathbf{S}^*)^n e^{-it \mathbf{H}^{(\mathbf{l})}_{U_0}\mathbf{H}^{(\mathbf{r})}_{U_0}} (U_0) \right) \in \mathbb{C}^{M \times N}. \end{split}$$* Since the single Hankel operator is no longer a Lax operator, some steps in the proof of Theorem 1 of Gérard--Grellier [@GGTAMS] need to be modified in order to prove Theorem $\ref{ExpForThm}$. Thanks to the Kronecker theorem, the right Hankel operator $\mathbf{H}^{(\mathbf{r})}_{U}$ and the double Hankel operator $\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}$ have the same image when the symbol $U$ is rational. We first prove [\[mszeExpFor\]](#mszeExpFor){reference-type="eqref" reference="mszeExpFor"} for rational solutions and then complete the proof by a density argument.\ This paper is organized as follows. We recall matrix-valued functional spaces and inequalities in Section $\ref{SecPre}$. Section $\ref{SecLaxP}$ is dedicated to establishing the Lax pair structure of [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} and proving Theorem $\ref{LaxPairThm}$.
The explicit formula is obtained in Section $\ref{SecExplicit}$. # Preliminaries {#SecPre} We collect some preliminaries on the matrix-valued function spaces and prove Proposition $\ref{GWPH0.5}$. Given $p \in [1, +\infty]$, $s \geq 0$ and $M,N \in \mathbb{N}_+$, a matrix function $U=\left(U_{kj}\right)_{1 \leq k \leq M, 1 \leq j \leq N}$ belongs to $L^p(\mathbb{T}; \mathbb{C}^{M \times N})$ if and only if its $kj$-entry $U_{kj}$ belongs to $L^p(\mathbb{T}; \mathbb{C})$. We set $$\label{LpMat} \begin{split} & \|U\|_{L^p(\mathbb{T}; \mathbb{C}^{M \times N})}^2 := \sum_{k=1}^M \sum_{j=1}^N \|U_{kj}\|_{L^p(\mathbb{T}; \mathbb{C})}^2 = \|U^*\|_{L^p(\mathbb{T}; \mathbb{C}^{N \times M})}^2 ; \\ & \|U\|_{H^s(\mathbb{T}; \mathbb{C}^{M \times N})}^2 := \sum_{k=1}^M \sum_{j=1}^N \|U_{kj}\|_{H^s(\mathbb{T}; \mathbb{C})}^2 =\|U^*\|_{H^s(\mathbb{T}; \mathbb{C}^{N \times M})}^2 . \end{split}$$Let $H^s(\mathbb{T}; \mathbb{C}^{M\times N})$ denote the matrix-valued Sobolev space, i.e. $$\label{M-Hs} H^s(\mathbb{T}; \mathbb{C}^{M\times N}):= \{U = \sum_{k=1}^M\sum_{j=1}^N U_{kj} \mathbb{E}_{{kj}}^{(MN)} : U_{kj} \in H^s(\mathbb{T}; \mathbb{C}), \; \forall 1\leq k \leq M, \; 1 \leq j \leq N \}.$$Then $L^2(\mathbb{T}; \mathbb{C}^{M\times N}) = H^0(\mathbb{T}; \mathbb{C}^{M\times N})$. Equipped with the following inner product $$\label{InnerPro} (U, V) \in L^2(\mathbb{T}; \mathbb{C}^{M\times N})^2 \mapsto \langle U, V \rangle_{L^2(\mathbb{T}; \mathbb{C}^{M\times N}) } := \frac{1}{2\pi}\int_0^{2\pi}\mathrm{tr} \left( U(x) V(x)^{*}\right)\mathrm{d}x \in \mathbb{C},$$$L^2(\mathbb{T}; \mathbb{C}^{M\times N})$ is a $\mathbb{C}$-Hilbert space.
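To make the entrywise norms and the trace inner product concrete, here is a small numerical sketch (our own illustration; sizes, frequency band and random seed are arbitrary choices). For a matrix-valued trigonometric polynomial, $\langle U, U\rangle_{L^2} = \frac{1}{2\pi}\int_0^{2\pi}\mathrm{tr}(U(x)U(x)^*)\,\mathrm{d}x$, computed by equispaced quadrature (exact for band-limited integrands), agrees with the coefficient-space sum $\sum_n \sum_{k,j}|\hat{U}_{kj}(n)|^2$, i.e. with the entrywise definition of $\|U\|_{L^2}^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 2, 3, 4                        # matrix size M x N, frequency band -K..K
freqs = np.arange(-K, K + 1)
C = rng.standard_normal((2 * K + 1, M, N)) + 1j * rng.standard_normal((2 * K + 1, M, N))

P = 4 * K + 1                            # equispaced nodes: exact for frequencies |n| <= 2K
x = 2 * np.pi * np.arange(P) / P
Ux = np.tensordot(np.exp(1j * np.outer(x, freqs)), C, axes=(1, 0))   # samples U(x_j), shape (P, M, N)

# <U, U>_{L^2} = (1/2pi) int tr(U U*) dx, by (exact) trapezoidal quadrature
inner = np.mean(np.trace(Ux @ Ux.conj().transpose(0, 2, 1), axis1=1, axis2=2))

# entrywise definition: ||U||^2 = sum_{k,j} ||U_{kj}||^2 = sum of squared coefficient moduli
assert np.isclose(inner.real, np.sum(np.abs(C) ** 2)) and abs(inner.imag) < 1e-10
```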
Given a function $U = \left(U_{kj}\right)_{1 \leq k \leq M, 1 \leq j \leq N} \in L^2(\mathbb{T}; \mathbb{C}^{M \times N})$, its Fourier expansion is given as follows, $$U(x)= \sum_{n \in \mathbb{Z}}\hat{U}(n)e^{inx}, \quad \hat{U}(n)=\frac{1}{2\pi}\int_{0}^{2\pi}U(x)e^{-inx} \mathrm{d}x= \sum_{k=1}^M\sum_{j=1}^N \hat{U}_{kj}(n) \mathbb{E}_{{kj}}^{(MN)} \in \mathbb{C}^{M\times N}.$$Parseval's formula reads $$\label{Parseval} \langle U, V\rangle_{L^2} = \sum_{k=1}^M\sum_{j=1}^N \frac{1}{2\pi}\int_{0}^{2\pi}U_{kj}(x) \overline{V_{kj}(x)} \mathrm{d}x = \sum_{n \in \mathbb{Z}} \sum_{k=1}^M\sum_{j=1}^N\hat{U}_{kj}(n) \overline{\hat{V}_{kj}(n)} = \sum_{n \in \mathbb{Z}} \mathrm{tr} \left(\hat{U}(n)\left(\hat{V}(n)\right)^* \right).$$The negative Szegő projector $\Pi_{< 0} = \mathrm{id}_{L^2(\mathbb{T}; \mathbb{C}^{M\times N})} - \Pi_{\geq 0}$ on $L^2(\mathbb{T}; \mathbb{C}^{M\times N})$ is given by $$\label{Szego-} \Pi_{<0}(U) := \sum_{n < 0}\hat{U}(n) \mathbf{e}_n \in L^2(\mathbb{T}; \mathbb{C}^{M\times N}), \quad \forall U= \sum_{n \in \mathbb{Z}}\hat{U}(n)\mathbf{e}_n \in L^2(\mathbb{T}; \mathbb{C}^{M\times N}).$$The filtered $H^s$-spaces are given by $$H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}):= \Pi_{\geq 0}\left(H^s(\mathbb{T}; \mathbb{C}^{M\times N}) \right), \quad H^s_-(\mathbb{T}; \mathbb{C}^{M\times N}):= \Pi_{<0}\left(H^s(\mathbb{T}; \mathbb{C}^{M\times N}) \right).$$Then the $\mathbb{C}$-Hilbert space $L^2(\mathbb{T}; \mathbb{C}^{M\times N})$ has the following orthogonal decomposition $$L^2(\mathbb{T}; \mathbb{C}^{M\times N}) = L^2_+(\mathbb{T}; \mathbb{C}^{M\times N}) \bigoplus L^2_-(\mathbb{T}; \mathbb{C}^{M\times N}), \quad L^2_+(\mathbb{T}; \mathbb{C}^{M\times N}) \perp L^2_-(\mathbb{T}; \mathbb{C}^{M\times N}).$$For any $U=\sum_{n\in \mathbb{Z}}\hat{U}(n)\mathbf{e}_n\in L^2(\mathbb{T}; \mathbb{C}^{M\times N})$, where $\mathbf{e}_n: x \in \mathbb{T}\mapsto e^{inx}\in \mathbb{C}$, we have $$\label{Pi<0adjTorus} \Pi_{<0} U = \left(\Pi_{\geq 0}(U^*) \right)^* - \hat{U}(0) \in
L^2_-(\mathbb{T}; \mathbb{C}^{M\times N}), \quad \left(\Pi_{\geq 0} U \right)^* = \Pi_{<0}(U^*) + ( \hat{U}(0) )^* \in L^2(\mathbb{T}; \mathbb{C}^{N\times M}).$$ **Lemma 7**. *If $U \in L^2(\mathbb{T}; \mathbb{C}^{M\times N})$, $A\in \mathbb{C}^{N \times P}$, $B\in \mathbb{C}^{Q\times M}$ for some $M,N,P,Q \in \mathbb{N}_+$, then $$\label{matrixXL^2} \begin{split} & \Pi_{\geq 0} (UA) = \left(\Pi_{\geq 0}U \right)A \in L^2_+ (\mathbb{T}; \mathbb{C}^{M \times P}); \quad \Pi_{< 0} (UA) = \left(\Pi_{< 0}U\right)A \in L^2_- (\mathbb{T}; \mathbb{C}^{M \times P}); \\ & \Pi_{\geq 0} (BU) = B \left(\Pi_{\geq 0}U \right) \in L^2_+ (\mathbb{T}; \mathbb{C}^{Q \times N}); \quad \Pi_{< 0} (BU) = B \left(\Pi_{< 0}U \right) \in L^2_- (\mathbb{T}; \mathbb{C}^{Q \times N}). \end{split}$$* *Proof.* Since $A_{nj}\in \mathbb{C}$, we have $\left(\Pi_{\geq 0} (UA)\right)_{kj}=\sum_{n=1}^N(\Pi_{\geq 0}U_{kn})A_{nj} = \left((\Pi_{\geq 0}U)A \right)_{kj}$ for $1 \leq k \leq M$, $1 \leq j \leq P$. Since $B_{sm} \in \mathbb{C}$, we have $\left(\Pi_{\geq 0} (BU)\right)_{st}=\sum_{m=1}^N B_{sm} \Pi_{\geq 0}U_{mt} = \left(B(\Pi_{\geq 0}U) \right)_{st}$ for $1 \leq s \leq Q$, $1 \leq t \leq N$. The identities for $\Pi_{<0}$ follow in the same way. ◻ **Lemma 8**. *Given $A \in L^2_-(\mathbb{T}; \mathbb{C}^{M\times N})$ and $B \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ for some $M,N, d \in \mathbb{N}_+$, if one of $A, B$ is essentially bounded, then $AB^* \in L^2_-(\mathbb{T}; \mathbb{C}^{M \times d})$.* *Proof.* If $A = \sum_{n \leq -1}\hat{A}(n)\mathbf{e}_n$ and $B = \sum_{n \geq 0}\hat{B}(n)\mathbf{e}_n$, then $AB^*= \sum_{l \leq -1} \left(\sum_{n=l}^{-1}\hat{A}(n)(\hat{B}(n-l))^*\right)\mathbf{e}_l\in L^2_-.$ ◻ **Lemma 9**.
*Given $A \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$ and $B \in L^2_-(\mathbb{T}; \mathbb{C}^{M \times d})$ for some $M,N, d \in \mathbb{N}_+$, if one of $A, B$ is essentially bounded, then $A^* B \in L^2_-(\mathbb{T}; \mathbb{C}^{N \times d})$.* *Proof.* If $A = \sum_{n \geq 0}\hat{A}(n)\mathbf{e}_n$ and $B = \sum_{n \leq -1}\hat{B}(n)\mathbf{e}_n$, then $A^* B= \sum_{l\leq -1} \left(\sum_{n=l}^{-1}(\hat{A}(n-l))^*\hat{B}(n) \right)\mathbf{e}_l\in L^2_-.$ ◻ **Lemma 10**. *Given $A \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$ and $B \in L^2_+(\mathbb{T}; \mathbb{C}^{N \times d})$ for some $M,N, d \in \mathbb{N}_+$, if one of $A, B$ is essentially bounded, then $A B \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$.* *Proof.* If $A = \sum_{n \geq 0}\hat{A}(n)\mathbf{e}_n$, $B = \sum_{n \geq 0}\hat{B}(n)\mathbf{e}_n$, then $AB= \sum_{l \geq 0} \left(\sum_{n=0}^l \hat{A}(n)\hat{B}(l-n)\right)\mathbf{e}_l\in L^2_+.$ ◻ **Proposition 11** (Cauchy). *Let $\mathcal{E}$ be a Banach space, let $\mathcal{I}$ be an open interval of $\mathbb{R}$, and let $A \in C^0 (\mathcal{I}; \mathcal{B}(\mathcal{E}))$. For every $(t_0, x_0) \in \mathcal{I} \times \mathcal{E}$, there exists a unique function $x \in C^1(\mathcal{I}; \mathcal{E})$ such that $x(t_0)=x_0$ and $$\tfrac{\mathrm{d}}{\mathrm{d}t} x(t) = A(t)\left( x(t)\right), \quad \forall t \in \mathcal{I}.$$* *Proof.* See Theorem 1.1.1 of Chemin [@CheminNoteEDPM2-2016]. ◻ ## High-regularity well-posedness If $s\geq \frac{1}{2}$, the proof of the global $H^s$-well-posedness of the matrix Szegő equation follows directly from the steps of Section 2 of Gérard--Grellier [@GGANNENS], together with the following matrix inequalities. **Lemma 12** (Brezis--Gallouët [@Brezis-Gallouet1980]).
*If $s > \frac{1}{2}$, there exists a constant $\mathcal{C}_s > 0$ such that, for all $M,N \in \mathbb{N}_+$ and every $U \in H^s(\mathbb{T}; \mathbb{C}^{M \times N})$, the following inequality holds, $$\label{BGMatrixIne} \|U\|_{L^{\infty}(\mathbb{T}; \mathbb{C}^{M \times N})}^2 \leq \mathcal{C}_s^2 \|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}^2 \ln \left(2 + \tfrac{\|U\|_{H^s(\mathbb{T}; \mathbb{C}^{M \times N})}}{\|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}} \right).$$* *Proof.* If $U \in H^s(\mathbb{T}; \mathbb{C}^{M \times N})\backslash \{0\}$ for some $s > \frac{1}{2}$, there exists $m\geq 1$ such that $m^{s - \frac{1}{2}} \sqrt{\ln(m+1)} \leq \tfrac{\|U\|_{H^s(\mathbb{T}; \mathbb{C}^{M \times N})}}{\|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}} \leq (m+1)^{s - \frac{1}{2}} \sqrt{\ln(m+2)}$. Set $\mathcal{A}_s := (s-\tfrac{1}{2})^{-1} +1$, so $(2+m^{s -\frac{1}{2}} \sqrt{\ln 2})^{2\mathcal{A}_s} \geq 2+m$. Appendix 2 on page 805 of Gérard--Grellier [@GGANNENS] yields that there exists $\mathcal{C}_s^{(1)}>0$ such that $$\begin{split} & \|U\|_{L^{\infty}(\mathbb{T}; \mathbb{C}^{M \times N})}^2 \leq \mathcal{C}_s^{(1)} \left(\|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}^2 \ln(2+m) + (m+1)^{1 -2 s } \|U\|_{H^{s}(\mathbb{T}; \mathbb{C}^{M \times N})}^2\right)\\ \leq & 4 \mathcal{A}_s \mathcal{C}_s^{(1)} \|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}^2 \ln(2+m^{s -\frac{1}{2}} \sqrt{\ln 2}) \lesssim_s \|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}^2 \ln \left(2+\tfrac{\|U\|_{H^s(\mathbb{T}; \mathbb{C}^{M \times N})}}{\|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}} \right). \end{split}$$ ◻ **Lemma 13** (Trudinger [@Trudinger1967]).
*There exists a universal constant $\mathcal{C} > 0$ such that, for all $M,N \in \mathbb{N}_+$, every $U \in H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})$ and every $p \in [1, +\infty)$, the following inequality holds, $$\label{TruMatrixIne} \|U\|_{L^p(\mathbb{T}; \mathbb{C}^{M \times N})} \leq \mathcal{C} \sqrt{p} \|U\|_{H^{\frac{1}{2}}(\mathbb{T}; \mathbb{C}^{M \times N})}.$$* *Proof.* It suffices to combine Appendix 3 of Gérard--Grellier [@GGANNENS] with [\[LpMat\]](#LpMat){reference-type="eqref" reference="LpMat"}. ◻ ## The Hamiltonian formalism The inner product of the $\mathbb{C}$-Hilbert space $L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})$ provides the canonical symplectic structure $$\label{SymplecticForm} \omega(F, G):= \mathrm{Im}\langle F, G \rangle_{L^2(\mathbb{T}; \mathbb{C}^{M \times N})} = \mathrm{Im}\int_0^{2\pi} \mathrm{tr}(F(x) G(x)^*) \tfrac{\mathrm{d}x}{2\pi}, \quad \forall F, G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}),$$because the mapping $\Upsilon^{\omega}: F \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}) \mapsto F \lrcorner \omega \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}); \mathbb{R})$ is invertible, thanks to the Riesz--Fréchet theorem. Given any smooth function $f: L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}) \to \mathbb{R}$, its gradient with respect to the real inner product $\mathrm{Re}\langle \cdot, \cdot \rangle_{L^2}$ is denoted by $\nabla_U f$, and its Hamiltonian vector field is given by $X_f:=-\left(\Upsilon^{\omega}\right)^{-1}(\mathrm{d}f)$, i.e.
$$\label{HamVecFie} \mathrm{d}f(U)(F)= \mathrm{Re}\langle F, \nabla_U f(U) \rangle_{L^2(\mathbb{T}; \mathbb{C}^{M \times N})} = \omega (F, X_f(U)), \quad \forall F, U\in L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}).$$Given another smooth function $g: L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}) \to \mathbb{R}$, the Poisson bracket between $f$ and $g$ is given by $$\label{PoisBrac} \{f,g\}(U):= \omega(X_f(U), X_g(U)) = \mathrm{Im}\langle \nabla_U f(U), \nabla_U g(U) \rangle_{L^2(\mathbb{T}; \mathbb{C}^{M \times N})}, \quad \forall U\in L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}).$$Then [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} has the following Hamiltonian formalism, $$\partial_t U(t)=-i \Pi_{\geq 0}(U(t) U(t)^* U(t))= X_{\mathbf{E}}(U(t)), \quad U(t) \in H^s_+(\mathbb{T}; \mathbb{C}^{M \times N}), \quad s> \tfrac{1}{2},$$where the energy functional $\mathbf{E}: L^4_+(\mathbb{T}; \mathbb{C}^{M \times N}):= \Pi_{\geq 0}(L^4 (\mathbb{T}; \mathbb{C}^{M \times N})) \to \mathbb{R}$ is given by $$\label{EnergyFunc} \mathbf{E}(U):= \frac{1}{8\pi}\int_0^{2\pi} \mathrm{tr}\left( \left(U(x) U(x)^*\right)^2\right) \mathrm{d}x = \frac{1}{4}\|U U^*\|^2_{L^2(\mathbb{T}; \mathbb{C}^{M \times M})}, \quad \forall U \in L^4_+(\mathbb{T}; \mathbb{C}^{M \times N}).$$The matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} is invariant under both phase rotation and space translation, leading to the following conservation laws by Noether's theorem, $$\label{ChargeMomentum} \mathbf{q}(U)= \|U\|^2_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}, \quad \mathfrak{j} (U)= \||\mathrm{D}|^{\frac{1}{2}} U\|^2_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})} = -i\langle \partial_x U, U \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}.$$We have $\{\mathbf{q}, \mathbf{E}\} = \{\mathbf{q}, \mathfrak{j}\}= \{\mathbf{E}, \mathfrak{j}\}=0$ on $H^1_+(\mathbb{T}; \mathbb{C}^{M \times N})$.
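The vanishing of these brackets can be sanity-checked on a random truncated symbol (our own numerical sketch; the sizes, seed and grid are arbitrary choices). With the gradients $\nabla_U \mathbf{q} = 2U$, $\nabla_U \mathbf{E} = \Pi_{\geq 0}(UU^*U)$ (consistent with the Hamiltonian form above, since $X_f = -i\nabla_U f$) and $\nabla_U \mathfrak{j} = 2|\mathrm{D}|U$, each pairing $\mathrm{Im}\langle \nabla_U f, \nabla_U g\rangle_{L^2}$ should vanish; the cubic term is evaluated on a grid fine enough to avoid aliasing.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, K = 2, 2, 4
Uc = rng.standard_normal((K, M, N)) + 1j * rng.standard_normal((K, M, N))  # U_hat(0..K-1)

# sample U on a grid fine enough that U U* U (frequencies -(K-1)..2(K-1)) is alias-free
P = 4 * K
x = 2 * np.pi * np.arange(P) / P
Ux = np.tensordot(np.exp(1j * np.outer(x, np.arange(K))), Uc, axes=(1, 0))      # (P, M, N)
cubic_hat = np.fft.fft(Ux @ Ux.conj().transpose(0, 2, 1) @ Ux, axis=0) / P     # coefficients of U U* U

gradq = 2 * Uc                                  # gradient of q(U) = ||U||^2
gradE = cubic_hat[:K]                           # Pi_{>=0}(U U* U); modes 0..K-1 suffice here
gradj = 2 * np.arange(K)[:, None, None] * Uc    # 2 |D| U on the nonnegative modes

ip = lambda A, B: sum(np.trace(A[n] @ B[n].conj().T) for n in range(K))
for f, g in [(gradq, gradE), (gradq, gradj), (gradE, gradj)]:
    assert abs(np.imag(ip(f, g))) < 1e-9        # {q,E} = {q,j} = {E,j} = 0
```

The bracket $\{\mathbf{E}, \mathfrak{j}\}$ vanishes here exactly because the energy of a trigonometric polynomial is translation invariant; the check only confirms this up to floating-point error.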
## The Poisson integral and shift operators Given $n \in \mathbb{Z}$, we recall that $$\label{expfunTorus} \mathbf{e}_n : x \in \mathbb{T} \mapsto e^{inx} \in \mathbb{C}.$$Let $\mathbb{E}_{kj}^{(M N)} \in \mathbb{C}^{M \times N}$ denote the $M\times N$ matrix whose $kj$-entry is $1$ and the other entries are all $0$. Given any $F=(F_{st})_{1\leq s \leq M, \; 1\leq t \leq N} \in L^1(\mathbb{T}; \mathbb{C}^{M\times N})$, we have $\langle F, \; \mathbb{E}_{kj}^{(M N)}\rangle_{L^2(\mathbb{T}; \mathbb{C}^{M\times N})} =\hat{F}_{kj}(0)$ and $$\label{I<>formula} \mathbf{I}(F) = \sum_{k=1}^M \sum_{j=1}^N \langle F, \; \mathbb{E}_{kj}^{(M N)}\rangle_{L^2(\mathbb{T}; \mathbb{C}^{M\times N})}\mathbb{E}_{kj}^{(M N)} \in \mathbb{C}^{M\times N}.$$If $s \geq 0$, both the shift operator $\mathbf{S}$ and its adjoint $\mathbf{S}^*$ are bounded operators on $H^s_+(\mathbb{T}; \mathbb{C}^{M \times N})$. In fact, $$\|\mathbf{S}\|_{\mathcal{B}\left(L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})\right)} = 1, \qquad \|\mathbf{S}^*\|_{\mathcal{B}\left( H^s_+(\mathbb{T}; \mathbb{C}^{M \times N})\right)} \leq 1,\quad \forall s \geq 0.$$Then we have $$\label{inverseSS*} \mathbf{S}^* \mathbf{S} = \mathrm{id}_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}, \quad \mathbf{S} \mathbf{S}^* = \mathrm{id}_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})} - \mathbf{I}.$$If $A \in L^2 (\mathbb{T}; \mathbb{C}^{M\times N})$, then $\mathbf{e}_{-1} \Pi_{<0}A \in L^2_- (\mathbb{T}; \mathbb{C}^{M\times N})$.
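In coefficient space, the identities [\[inverseSS\*\]](#inverseSS*){reference-type="eqref" reference="inverseSS*"} amount to index shifts on the Fourier coefficients, which the following sketch (our own illustration, with arbitrary sizes and seed) verifies: $\mathbf{S}$ prepends a zero coefficient, $\mathbf{S}^*$ drops the zero mode, and $\mathbf{I}$ is viewed as the projection onto the constant part.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, N = 6, 2, 3
F = rng.standard_normal((K, M, N)) + 1j * rng.standard_normal((K, M, N))  # F_hat(0..K-1)

def S(F):      # (S F)^(n) = F^(n-1): multiplication by e_1 prepends a zero coefficient
    return np.concatenate([np.zeros((1, *F.shape[1:]), dtype=complex), F])

def Sstar(F):  # (S* F)^(n) = F^(n+1): the adjoint drops the zero mode
    return F[1:]

def Iproj(F):  # I(F) = F_hat(0), viewed as a constant function in L^2_+
    out = np.zeros_like(F)
    out[0] = F[0]
    return out

assert np.allclose(Sstar(S(F)), F)                           # S* S = id
assert np.allclose(S(Sstar(F)), F - Iproj(F))                # S S* = id - I
assert np.isclose(np.linalg.norm(S(F)), np.linalg.norm(F))   # S is an isometry, ||S|| = 1
```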
If $F= \sum_{n \geq 0}\hat{F}(n)\mathbf{e}_n \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$, then an induction on $m$ yields $$\label{S*mPower} \left( \mathbf{S}^*\right)^m (F)= \Pi_{\geq 0} \left( \mathbf{e}_{-m} F \right), \quad \hat{F}(m)=\mathbf{I}\left(\left( \mathbf{S}^*\right)^m (F) \right),\quad \forall m \in \mathbb{N}.$$If $z=r e^{i\theta}$ for some $r \in [0, 1)$, $\theta \in \mathbb{T}$, the Poisson integral of $F= \sum_{n \geq 0}\hat{F}(n)\mathbf{e}_n \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$ is given by $$\label{PoiInt} \underline{F}(z) = \mathscr{P}[F](r, \theta) : = \frac{1}{2\pi} \int_{-\pi}^{\pi}\mathbf{p}_r(\theta -x)F(x) \mathrm{d}x =\sum_{n\geq 0}z^n \hat{F}(n) \in \mathbb{C}^{M\times N},$$where $\mathbf{p}_r(x)=\frac{1-r^2}{1-2r \cos(x) +r^2}=\sum_{n \in \mathbb{Z}}r^{|n|}e^{i n x}$, $\forall x \in \mathbb{T}$, denotes the Poisson kernel on the torus. Then Theorem 11.16 of Rudin [@RudinRACA] yields that $\mathscr{P}[F](r) = \sum_{n\geq 0}r^n \hat{F}(n) \mathbf{e}_n \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$ and $$\label{L2convPoiInt} \sup_{0 \leq r <1}\|\mathscr{P}[F](r)\|_{L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})} \leq \|F\|_{L^2_+}, \quad \lim_{r \to 1^-}\|\mathscr{P}[F](r) -F\|_{L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})}=0.$$In addition, if $F= \sum_{n \geq 0}\hat{F}(n)\mathbf{e}_n \in C^0 \bigcap L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$, then we have $$\label{LinftyconvPoiInt} \sup_{0 \leq r <1}\|\mathscr{P}[F](r)\|_{L^{\infty} (\mathbb{T}; \mathbb{C}^{M\times N})} \leq \|F\|_{L^{\infty}}, \quad \lim_{r \to 1^-}\|\mathscr{P}[F](r) -F\|_{L^{\infty}(\mathbb{T}; \mathbb{C}^{M\times N})}=0,$$by Theorems 11.8 and 11.16 of Rudin [@RudinRACA]. **Lemma 14**.
*Given any $M,N\in\mathbb{N}_+$ and $z\in \mathbb{C}$ such that $|z| <1$, if $U \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$, then $$\label{InvForTorus} \underline{U}(z) = \mathbf{I} \left( (\mathrm{id}_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}- z \mathbf{S}^* )^{-1} U \right).$$* *Proof.* If $U= \sum_{n \geq 0}\hat{U}(n)\mathbf{e}_n \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$, formulas [\[PoiInt\]](#PoiInt){reference-type="eqref" reference="PoiInt"}, [\[S\*mPower\]](#S*mPower){reference-type="eqref" reference="S*mPower"} and Theorem 18.3 of Rudin [@RudinRACA] yield that $\underline{U}(z) = \sum_{m\geq 0} \mathbf{I}\left(\left( z\mathbf{S}^*\right)^m (U) \right) =\mathbf{I} ( (\mathrm{id}_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}- z \mathbf{S}^* )^{-1} U )$. ◻ # The Lax pair structure {#SecLaxP} This section is devoted to proving Theorem $\ref{LaxPairThm}$. ## The Hankel operators Given $d, M,N\in\mathbb{N}_+$, recall that the Hankel operators of symbol $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$ are given by $$\label{rlHankel} \begin{split} & \mathbf{H}^{(\mathbf{r})}_{U} : F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \mapsto \mathbf{H}^{(\mathbf{r})}_{U}(F) = \Pi_{\geq 0}(U F^*) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ & \mathbf{H}^{(\mathbf{l})}_{U} : G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \mapsto \mathbf{H}^{(\mathbf{l})}_{U}(G) = \Pi_{\geq 0}(G^* U) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}).\\ \end {split}$$If $F=\sum_{j=1}^d \sum_{n=1}^N F_{jn}\mathbb{E}^{(dN)}_{jn} \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ and $G=\sum_{m=1}^M \sum_{k=1}^d G_{mk}\mathbb{E}^{(Md)}_{mk} \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, then $$\label{relHrHltoH} \mathbf{H}^{(\mathbf{r})}_{U}(F) = \sum_{k=1}^M \sum_{j=1}^d (\sum_{n=1}^N H_{U_{kn}}(F_{jn}))\mathbb{E}^{(Md)}_{kj}; \quad \mathbf{H}^{(\mathbf{l})}_{U}(G) = \sum_{k=1}^d \sum_{j=1}^N (\sum_{m=1}^M H_{U_{mj}}(G_{mk}))\mathbb{E}^{(dN)}_{kj}.$$Both
$\mathbf{H}^{(\mathbf{r})}_{U}: L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathbf{H}^{(\mathbf{l})}_{U}: L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ are $\mathbb{C}$-antilinear Hilbert--Schmidt operators by formula $(12)$ on page 771 of Gérard--Grellier [@GGANNENS]. More precisely, we have $$\label{TrHrHl} \mathrm{Tr}(\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}) = \mathrm{Tr}(\mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}) = \|\mathbf{H}^{(\mathbf{r})}_{U}\|_{\mathrm{HS}}^2 =\|\mathbf{H}^{(\mathbf{l})}_{U}\|_{\mathrm{HS}}^2 = d \|\sqrt{1+|\mathrm{D}|}U\|_{L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})}^2.$$In addition, for any $F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ and $G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, we have $$\label{HrHlinnprod} \langle \mathbf{H}^{(\mathbf{r})}_{U}(F), G \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})} =\frac{1}{2\pi} \int_0^{2\pi}\mathrm{tr}\left(F(x)^* G(x)^* U(x) \right)\mathrm{d}x = \langle \mathbf{H}^{(\mathbf{l})}_{U}(G), F \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})}.$$The following lemma is a direct consequence of formula [\[HrHlinnprod\]](#HrHlinnprod){reference-type="eqref" reference="HrHlinnprod"}. **Lemma 15**.
*Given $M,N,d\in\mathbb{N}_+$ and $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$, we have $$\label{OrthoComple} \begin{split} & \mathrm{Ker} \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}=\mathrm{Ker} \mathbf{H}^{(\mathbf{r})}_{U} = (\mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U} )^{\perp} = (\mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U} )^{\perp} \subset L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}); \\ & \mathrm{Ker} \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U} =\mathrm{Ker} \mathbf{H}^{(\mathbf{l})}_{U} = (\mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U} )^{\perp} = (\mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U} )^{\perp} \subset L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}). \end{split}$$As a consequence, $L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) = \mathrm{Ker} \mathbf{H}^{(\mathbf{r})}_{U} \bigoplus \overline{\mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U}} = \mathrm{Ker} \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U} \bigoplus \overline{\mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}}$ and $L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) = \mathrm{Ker} \mathbf{H}^{(\mathbf{l})}_{U} \bigoplus \overline{\mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U}} = \mathrm{Ker} \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U} \bigoplus \overline{\mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}}$. 
Furthermore, the restrictions $\mathbf{H}^{(\mathbf{r})}_{U}\big|_{\mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U}}: \mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U} \to \mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}$ and $\mathbf{H}^{(\mathbf{l})}_{U}\big|_{\mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U}}: \mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U} \to \mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}$ are both injective.* Recall that the left and right shifted Hankel operators of the symbol $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$ are given by $$\label{rlShiftHankel} \begin{split} & \mathbf{K}^{(\mathbf{r})}_{U} = \mathbf{H}^{(\mathbf{r})}_{U}\mathbf{S} = \mathbf{S}^* \mathbf{H}^{(\mathbf{r})}_{U} =\mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* U} : F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \mapsto \Pi_{\geq 0}\left( \mathbf{e}_{-1} UF^*\right)\in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ & \mathbf{K}^{(\mathbf{l})}_{U} = \mathbf{H}^{(\mathbf{l})}_{U}\mathbf{S} = \mathbf{S}^* \mathbf{H}^{(\mathbf{l})}_{U} =\mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* U} : G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \mapsto \Pi_{\geq 0}\left( \mathbf{e}_{-1} G^* U \right)\in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}).\\ \end{split}$$We have $$\label{KrKlinnprod} \langle \mathbf{K}^{(\mathbf{r})}_{U}(F), G \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})} =\langle \mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* U}(F), G \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})} = \langle \mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* U}(G), F \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})} = \langle \mathbf{K}^{(\mathbf{l})}_{U}(G), F \rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})}.$$Then $\mathrm{Ker}( \mathbf{H}^{(\mathbf{r})}_{U})$ and $\mathrm{Ker}( \mathbf{H}^{(\mathbf{l})}_{U})$ are $\mathbf{S}$-invariant, $\mathrm{Im}( \mathbf{H}^{(\mathbf{r})}_{U})$ and $\mathrm{Im}( \mathbf{H}^{(\mathbf{l})}_{U})$ are $\mathbf{S}^*$-invariant, i.e. 
$$\begin{split} & \mathbf{S}\left(\mathrm{Ker}( \mathbf{H}^{(\mathbf{r})}_{U}) \right) \subset \mathrm{Ker}( \mathbf{H}^{(\mathbf{r})}_{U}) \subset L^2_+(\mathbb{T}; \mathbb{C}^{d \times N} ); \quad \mathbf{S}\left(\mathrm{Ker}( \mathbf{H}^{(\mathbf{l})}_{U}) \right) \subset \mathrm{Ker}( \mathbf{H}^{(\mathbf{l})}_{U}) \subset L^2_+(\mathbb{T}; \mathbb{C}^{M \times d} ); \\ & \mathbf{S}^*\left(\mathrm{Im}( \mathbf{H}^{(\mathbf{r})}_{U}) \right) \subset \mathrm{Im}( \mathbf{H}^{(\mathbf{r})}_{U}) \subset L^2_+(\mathbb{T}; \mathbb{C}^{M \times d} ); \quad \mathbf{S}^*\left(\mathrm{Im}( \mathbf{H}^{(\mathbf{l})}_{U}) \right) \subset \mathrm{Im}( \mathbf{H}^{(\mathbf{l})}_{U}) \subset L^2_+(\mathbb{T}; \mathbb{C}^{d \times N} ). \end{split}$$Given any positive integers $d, M,N, P, Q\in\mathbb{N}_+$, we choose $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N})$, $V \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P \times N})$ and $W \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P\times Q})$. For any $x\in \mathbb{T}$, the matrices $U(x) \in \mathbb{C}^{M \times N}$ and $V(x) \in \mathbb{C}^{P \times N}$ have the same number of columns, so the $(\mathbf{rl})$-double Hankel operators are given by $$\label{HrHldef} \begin{split} \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V}: G_1 \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d}) \mapsto \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V}(G_1) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ \mathbf{H}^{(\mathbf{r})}_{V} \mathbf{H}^{(\mathbf{l})}_{U}: G_2 \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \mapsto \mathbf{H}^{(\mathbf{r})}_{V} \mathbf{H}^{(\mathbf{l})}_{U}(G_2) \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d}).
\end{split}$$If $G_1 \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})$ and $G_2 \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, we use [\[HrHlinnprod\]](#HrHlinnprod){reference-type="eqref" reference="HrHlinnprod"} to obtain $$\label{HrHladj} \langle \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V} (G_1), G_2 \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})} = \langle \mathbf{H}^{(\mathbf{l})}_{U}(G_2) , \mathbf{H}^{(\mathbf{l})}_{V} (G_1) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})} = \langle G_1 , \mathbf{H}^{(\mathbf{r})}_{V} \mathbf{H}^{(\mathbf{l})}_{U}(G_2) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})}.$$If $M=P$, then $\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V}$ is a $\mathbb{C}$-linear trace class operator on $L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathbf{H}^{(\mathbf{r})}_{V} \mathbf{H}^{(\mathbf{l})}_{U} = \left( \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V} \right)^*$. In addition, if $U=V$, then $\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}\geq 0$ is a $\mathbb{C}$-linear positive self-adjoint operator on $L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ of trace class.\ For any $x\in \mathbb{T}$, the matrices $V(x) \in \mathbb{C}^{P \times N}$ and $W(x) \in \mathbb{C}^{P \times Q}$ have the same number of rows, so the $(\mathbf{lr})$-double Hankel operators are given by $$\label{HlHrdef} \begin{split} \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W} : F_1 \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}) \mapsto \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W}(F_1) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N});\\ \mathbf{H}^{(\mathbf{l})}_{W} \mathbf{H}^{(\mathbf{r})}_{V} : F_2 \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \mapsto \mathbf{H}^{(\mathbf{l})}_{W} \mathbf{H}^{(\mathbf{r})}_{V}(F_2) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}).
\end{split}$$If $F_1 \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ and $F_2 \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$, we use [\[HrHlinnprod\]](#HrHlinnprod){reference-type="eqref" reference="HrHlinnprod"} to obtain $$\label{HlHradj} \langle \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W} (F_1), F_2 \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})} = \langle \mathbf{H}^{(\mathbf{r})}_{V} (F_2) , \mathbf{H}^{(\mathbf{r})}_{W} (F_1) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})} = \langle F_1 , \mathbf{H}^{(\mathbf{l})}_{W} \mathbf{H}^{(\mathbf{r})}_{V} (F_2) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})}.$$If $N=Q$, then $\mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W}$ is a $\mathbb{C}$-linear trace class operator on $L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ and $\mathbf{H}^{(\mathbf{l})}_{W} \mathbf{H}^{(\mathbf{r})}_{V} = \left( \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W} \right)^*$. In addition, if $V=W$, then $\mathbf{H}^{(\mathbf{l})}_{W} \mathbf{H}^{(\mathbf{r})}_{W}\geq 0$ is a $\mathbb{C}$-linear positive self-adjoint operator on $L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ of trace class.\ Since $\mathbf{S}^* U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N})$, $\mathbf{S}^* V \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P \times N})$ and $\mathbf{S}^* W \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P\times Q})$, the double shifted Hankel operators are given by $$\begin{split} & \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V} = \mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* U} \mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* V} ; \qquad \mathbf{K}^{(\mathbf{r})}_{V} \mathbf{K}^{(\mathbf{l})}_{U} = \mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* V} \mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* U}; \\ & \mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W}=\mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* V} \mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* W}; \qquad \mathbf{K}^{(\mathbf{l})}_{W} 
\mathbf{K}^{(\mathbf{r})}_{V}=\mathbf{H}^{(\mathbf{l})}_{\mathbf{S}^* W} \mathbf{H}^{(\mathbf{r})}_{\mathbf{S}^* V}. \end{split}$$If $G_1 \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})$, $G_2 \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, $F_1 \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ and $F_2 \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$, formulas [\[HrHladj\]](#HrHladj){reference-type="eqref" reference="HrHladj"} and [\[HlHradj\]](#HlHradj){reference-type="eqref" reference="HlHradj"} yield that $$\begin{split} & \langle \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V} (G_1), G_2 \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})} = \langle \mathbf{K}^{(\mathbf{l})}_{U}(G_2) , \mathbf{K}^{(\mathbf{l})}_{V} (G_1) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})} = \langle G_1 , \mathbf{K}^{(\mathbf{r})}_{V} \mathbf{K}^{(\mathbf{l})}_{U}(G_2) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})};\\ & \langle \mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W} (F_1), F_2 \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})} = \langle \mathbf{K}^{(\mathbf{r})}_{V} (F_2) , \mathbf{K}^{(\mathbf{r})}_{W} (F_1) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})} = \langle F_1 , \mathbf{K}^{(\mathbf{l})}_{W} \mathbf{K}^{(\mathbf{r})}_{V} (F_2) \rangle_{ L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})}. \end{split}$$If $M=P$, then $\mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V}$ is a $\mathbb{C}$-linear trace class operator on $L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathbf{K}^{(\mathbf{r})}_{V} \mathbf{K}^{(\mathbf{l})}_{U} = \left( \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V} \right)^*$. 
In addition, if $U=V$, then $\mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{U}\geq 0$ is a $\mathbb{C}$-linear positive operator on $L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ of trace class.\ If $N=Q$, then $\mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W}$ is a $\mathbb{C}$-linear trace class operator on $L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ and $\mathbf{K}^{(\mathbf{l})}_{W} \mathbf{K}^{(\mathbf{r})}_{V} = \left( \mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W} \right)^*$. In addition, if $V=W$, then $\mathbf{K}^{(\mathbf{l})}_{W} \mathbf{K}^{(\mathbf{r})}_{W}\geq 0$ is a $\mathbb{C}$-linear positive operator on $L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ of trace class.\ **Lemma 16**. *Given $M,N, P, Q\in\mathbb{N}_+$, $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N})$, $V \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P \times N})$ and $W \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P\times Q})$, if $G \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d})$, $F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ for some $d \in \mathbb{N}_+$, then we have $$\label{rel2K2Hfor} \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V}(G) = \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V}(G) - U\widehat{V^* G}(0), \quad \mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W}(F) = \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W}(F) - \widehat{F W^*}(0)V.$$* *Proof.* Formula [\[rlShiftHankel\]](#rlShiftHankel){reference-type="eqref" reference="rlShiftHankel"} yields that $\mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V} = \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{S} \mathbf{S}^* \mathbf{H}^{(\mathbf{l})}_{V}$ and $\mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W} = \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{S} \mathbf{S}^* \mathbf{H}^{(\mathbf{r})}_{W}$. 
Moreover, we have $\mathbf{H}^{(\mathbf{r})}_{U} \left(\widehat{G^* V}(0) \right)=U\widehat{V^* G}(0) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathbf{H}^{(\mathbf{l})}_{V} \left( \widehat{WF^*}(0) \right)= \widehat{F W^*}(0)V \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$. It suffices to conclude by formula [\[inverseSS\*\]](#inverseSS*){reference-type="eqref" reference="inverseSS*"}. ◻ **Lemma 17**. *Given $M,N,d\in\mathbb{N}_+$ and $t \in \mathbb{R}$, if $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N})$, then $$\label{ImHuInv} \begin{split} & e^{it \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{U}}\mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}} (\mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U} ) \subset \mathrm{Im} \mathbf{K}^{(\mathbf{r})}_{U} \subset \mathrm{Im} \mathbf{H}^{(\mathbf{r})}_{U} \subset L^{2}_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ & e^{it \mathbf{K}^{(\mathbf{l})}_{U} \mathbf{K}^{(\mathbf{r})}_{U}}\mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}} ( \mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U} ) \subset \mathrm{Im} \mathbf{K}^{(\mathbf{l})}_{U} \subset \mathrm{Im} \mathbf{H}^{(\mathbf{l})}_{U} \subset L^{2}_+(\mathbb{T}; \mathbb{C}^{d \times N}). 
\end{split}$$* *Proof.* Since $\left(\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}\right)^n \mathbf{H}^{(\mathbf{r})}_{U} =\mathbf{H}^{(\mathbf{r})}_{U} \left(\mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}\right)^n$ and $\left(\mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}\right)^n \mathbf{H}^{(\mathbf{l})}_{U} =\mathbf{H}^{(\mathbf{l})}_{U} \left(\mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}\right)^n$, $\forall n \in \mathbb{N}$, the power series of $\exp$ in $\mathcal{B}(L^{2}_+(\mathbb{T}; \mathbb{C}^{M \times d}))$ and $\mathcal{B}(L^{2}_+(\mathbb{T}; \mathbb{C}^{d \times N}))$ yields that $$\label{HExpHH} \begin{split} & e^{-it \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}}\mathbf{H}^{(\mathbf{r})}_{U} = \mathbf{H}^{(\mathbf{r})}_{U} e^{it \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}}; \quad e^{-it \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U}}\mathbf{H}^{(\mathbf{l})}_{U} = \mathbf{H}^{(\mathbf{l})}_{U} e^{it \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U}}; \\ & e^{it \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{U}}\mathbf{K}^{(\mathbf{r})}_{U} = \mathbf{K}^{(\mathbf{r})}_{U} e^{-it \mathbf{K}^{(\mathbf{l})}_{U} \mathbf{K}^{(\mathbf{r})}_{U}}; \quad e^{it \mathbf{K}^{(\mathbf{l})}_{U} \mathbf{K}^{(\mathbf{r})}_{U}}\mathbf{K}^{(\mathbf{l})}_{U} = \mathbf{K}^{(\mathbf{l})}_{U} e^{-it \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{U}}. \end{split}$$It suffices to conclude by [\[rlShiftHankel\]](#rlShiftHankel){reference-type="eqref" reference="rlShiftHankel"}. ◻ ## The Kronecker theorem **Definition 18**. 
*Given a positive integer $n \in\mathbb{N}_+$, let $\mathcal{M}(n)$ denote the set of rational functions $u = \frac{p(\mathbf{e}_1)}{q(\mathbf{e}_1)}$ such that $p \in \mathbb{C}_{\leq n-1}[X], q\in \mathbb{C}_{\leq n}[X]$, the polynomials $p$ and $q$ have no common divisors, $q(0)=1$, $q^{-1}\{0\} \subset \mathbb{C} \backslash \overline{D}(0,1)$, $\deg p= n-1$ or $\deg q= n$. We set $\mathcal{M}(0)=\{0\}$ and $\mathcal{M}_{\mathrm{FR}}:=\bigcup_{n\in \mathbb{N}}\mathcal{M}(n)$.* If $u \in L^2_+(\mathbb{T}; \mathbb{C})$, the Kronecker theorem [@Kronecker1881] yields the following equivalence: $\forall n \in \mathbb{N}$, $$\label{1*1Kroneckerthm} u \in \mathcal{M}(n) \Longleftrightarrow \mathrm{r}(H_u) = \dim_{\mathbb{C}} \mathrm{Im}H_u =n.$$We refer to Appendix 4 (subsection 10.4) of Gérard--Grellier [@GGANNENS] for the proof of [\[1\*1Kroneckerthm\]](#1*1Kroneckerthm){reference-type="eqref" reference="1*1Kroneckerthm"}. Given $M,N \in \mathbb{N}_+$, $$\mathcal{M}_{\mathrm{FR}}^{M \times N} = \{\tfrac{A(\mathbf{e}_1)}{q(\mathbf{e}_1)}: q\in \mathbb{C}[X], \; q^{-1}\{0\} \subset \mathbb{C} \backslash \overline{D}(0,1), \; A\in (\mathbb{C}[X])^{M \times N}\}\subset C^{\infty}_+(\mathbb{T}; \mathbb{C}^{M \times N}).$$ **Proposition 19**. *Given $U \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})$ for some $M,N \in \mathbb{N}$, then each of the following three properties implies the others:\ $\mathrm{(a)}$. $U \in \mathcal{M}_{\mathrm{FR}}^{M \times N}$.\ $\mathrm{(b)}$. 
Both $\mathbf{H}^{(\mathbf{r})}_{U}: L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathbf{H}^{(\mathbf{l})}_{U}: L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ are finite-rank operators, $\forall d \in \mathbb{N}_+$, and $\dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U} = \dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U} = \dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U} = \dim_{\mathbb{C}}\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}<+\infty$.\ $\mathrm{(c)}$. There exists $d \in \mathbb{N}_+$ such that at least one of the subspaces $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}$, $\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}$, $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}$, $\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}$ has finite dimension.* *Proof.* $\mathrm{(a)} \Rightarrow \mathrm{(b)}$: If $U =\sum_{k=1}^M\sum_{n=1}^N U_{kn}\mathbb{E}_{kn}^{(MN)}\in \mathcal{M}_{\mathrm{FR}}^{M \times N}$, then $\mathbb{V}_U:=\sum_{k=1}^M\sum_{n=1}^N \mathrm{Im}H_{U_{kn}}$ is a finite-dimensional subspace of $L^2_+(\mathbb{T}; \mathbb{C})$ by [\[1\*1Kroneckerthm\]](#1*1Kroneckerthm){reference-type="eqref" reference="1*1Kroneckerthm"}. For any $d \in \mathbb{N}_+$, formula [\[relHrHltoH\]](#relHrHltoH){reference-type="eqref" reference="relHrHltoH"} yields that $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U} \subset \mathbb{V}_U^{M \times d}$, so $\mathbf{H}^{(\mathbf{r})}_{U}$ has finite rank; the same argument applies to $\mathbf{H}^{(\mathbf{l})}_{U}$.
If one of the subspaces $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}$, $\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}$, $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}$, $\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}$ has finite dimension, then Lemma $\ref{LemofOrtho}$ implies that $\dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U} = \dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U} = \dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U} = \dim_{\mathbb{C}}\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}<+\infty$. $\mathrm{(b)} \Rightarrow \mathrm{(c)}$ is immediate. $\mathrm{(c)} \Rightarrow \mathrm{(a)}$: If $U \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})$ is such that $\mathrm{(c)}$ holds, then $\mathbf{H}^{(\mathbf{r})}_{U} \in \mathrm{HS}(L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}); L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}))$ and $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N})$ by [\[TrHrHl\]](#TrHrHl){reference-type="eqref" reference="TrHrHl"}. Assume that $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N}) \backslash \mathcal{M}_{\mathrm{FR}}^{M \times N}$; then $H_{U_{st}}$ has infinite rank for some $1 \leq s \leq M$ and $1\leq t \leq N$, thanks to [\[1\*1Kroneckerthm\]](#1*1Kroneckerthm){reference-type="eqref" reference="1*1Kroneckerthm"}. For any $R \in \mathbb{N}_+$, there exist $f_1, f_2, \cdots, f_R\in L^2_+(\mathbb{T};\mathbb{C})$ such that $\{H_{U_{st}}(f_l)\}_{1\leq l \leq R}$ is linearly independent in $L^2_+(\mathbb{T};\mathbb{C})$. Then $\{\mathbf{H}^{(\mathbf{r})}_{U}(f_l \mathbb{E}_{1t}^{(dN)})\}_{1\leq l \leq R}$ is linearly independent in $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U} \subset L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$. So $\dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}=+\infty$, $\forall d \in \mathbb{N}_+$, which contradicts $\mathrm{(c)}$. ◻ **Remark 20**.
*Given $M,N \in \mathbb{N}_+$, if $U \in \mathcal{M}_{\mathrm{FR}}^{M \times N}$, Proposition $\ref{WKroneckerM*N}$ yields that $\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U} = \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U} = \mathbf{H}^{(\mathbf{r})}_{U} \mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U} \subset L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U} = \mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U} = \mathbf{H}^{(\mathbf{l})}_{U} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U} \subset L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$, $\forall d \in \mathbb{N}_+$.* **Lemma 21**. *Given $M,N \in \mathbb{N}_+$ and $s \geq 0$, the set $(\mathcal{M}_{\mathrm{FR}}\backslash \{0\})^{M \times N}$ is dense in $H^s_+(\mathbb{T}; \mathbb{C}^{M \times N})$.* *Proof.* If $U = \sum_{n \geq 0} \hat{U}(n)\mathbf{e}_n \in H^s_+(\mathbb{T}; \mathbb{C}^{M \times N})$, set $V^{(m)}:=\sum_{n = 0}^m \hat{U}(n)\mathbf{e}_n = \sum_{k = 1}^M \sum_{j = 1}^N V^{(m)}_{kj} \mathbb{E}_{kj}^{(MN)}$, $\boldsymbol{\Lambda}_m:=\{(k,j) :V^{(m)}_{kj} =0 \}$ and $\tilde{V}^{(m)}:=V^{(m)} + 2^{-m}\sum_{(k,j) \in \boldsymbol{\Lambda}_m}\mathbb{E}_{kj}^{(MN)} \in (\mathcal{M}_{\mathrm{FR}}\backslash \{0\})^{M \times N}$, $\forall m \in \mathbb{N}$. Then $\tilde{V}^{(m)} \to U$ in $H^s_+(\mathbb{T}; \mathbb{C}^{M \times N})$, as $m \to +\infty$. 
◻ ## The Toeplitz operators Given $d, M,N\in\mathbb{N}_+$, recall that the Toeplitz operators of symbol $V \in L^2(\mathbb{T}; \mathbb{C}^{M \times N})$ are given by $$\label{rlToep} \begin{split} & \mathbf{T}^{(\mathbf{r})}_{V} : G \in H^1_+(\mathbb{T}; \mathbb{C}^{N \times d}) \mapsto \mathbf{T}^{(\mathbf{r})}_{V}(G) = \Pi_{\geq 0}(V G) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}),\\ & \mathbf{T}^{(\mathbf{l})}_{V} : F \in H^1_+(\mathbb{T}; \mathbb{C}^{d \times M}) \mapsto \mathbf{T}^{(\mathbf{l})}_{V}(F) = \Pi_{\geq 0}(FV) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}).\\ \end {split}$$If $V \in L^{\infty}(\mathbb{T}; \mathbb{C}^{M \times N})$, then $\mathbf{T}^{(\mathbf{r})}_{V}: L^2_+(\mathbb{T}; \mathbb{C}^{N \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\mathbf{T}^{(\mathbf{l})}_{V}: L^2_+(\mathbb{T}; \mathbb{C}^{d \times M}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$ are both bounded operators. Moreover, $\forall G \in L^2_+(\mathbb{T}; \mathbb{C}^{N \times d}), \forall A \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, we have $$\label{<TrG,A>} \langle \mathbf{T}^{(\mathbf{r})}_{V}(G), A\rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})} = \langle G, \mathbf{T}^{(\mathbf{r})}_{V^*}(A)\rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{N \times d})}.$$If $V \in L^{\infty}(\mathbb{T}; \mathbb{C}^{M \times N})$, $\forall F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times M}), \forall B \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})$, we have $$\label{<TlF,B>} \langle \mathbf{T}^{(\mathbf{l})}_{V}(F), B\rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{d \times N})} = \langle F, \mathbf{T}^{(\mathbf{l})}_{V^*}(B)\rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{d \times M})}.$$Set $M=N$. 
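As an illustrative aside (not part of the argument), the adjoint relation [\[\<TrG,A\>\]](#<TrG,A>){reference-type="eqref" reference="<TrG,A>"} can be sanity-checked numerically in the scalar case $M=N=d=1$: in the Fourier basis the truncated matrix of $\mathbf{T}^{(\mathbf{r})}_{V}$ has entries $\hat{V}(j-k)$, and the matrix of $\mathbf{T}^{(\mathbf{r})}_{V^*}$ is its conjugate transpose. The truncation size, the symbol degree and the random symbol in the following sketch are arbitrary choices made only for the illustration.

```python
# Illustrative numerical sketch (scalar case M = N = d = 1): the truncated
# matrix of the Toeplitz operator T_v, acting on Fourier coefficients of
# L^2_+, is (T_v)_{jk} = v_hat(j - k) for 0 <= j, k < K.  We check that the
# matrix of T_{v*} is the conjugate transpose of that of T_v, which is the
# truncated form of the adjoint relation <T_v(g), a> = <g, T_{v*}(a)>.
import numpy as np

rng = np.random.default_rng(0)
K, D = 16, 3                                  # truncation size, symbol degree

# Fourier coefficients v_hat(k), |k| <= D, of a trigonometric polynomial v.
v_hat = {k: complex(rng.standard_normal(), rng.standard_normal())
         for k in range(-D, D + 1)}

def toeplitz_matrix(sym):
    """Truncated matrix of g -> Pi_{>=0}(v g) in the Fourier basis."""
    return np.array([[sym.get(j - k, 0j) for k in range(K)]
                     for j in range(K)])

T_v = toeplitz_matrix(v_hat)
# Symbol of v^*: its k-th Fourier coefficient is conj(v_hat(-k)).
T_vstar = toeplitz_matrix({k: np.conj(v_hat[-k]) for k in v_hat})
assert np.allclose(T_vstar, T_v.conj().T)

# The adjoint relation on random coefficient vectors g, a (with the Hermitian
# inner product <x, y> = y^H x, matching linearity in the first slot).
g = rng.standard_normal(K) + 1j * rng.standard_normal(K)
a = rng.standard_normal(K) + 1j * rng.standard_normal(K)
lhs = np.vdot(a, T_v @ g)                     # <T_v(g), a>
rhs = np.vdot(T_vstar @ a, g)                 # <g, T_{v*}(a)>
assert abs(lhs - rhs) < 1e-10
```

At the matrix level the check is exact, since the truncated matrix of $\mathbf{T}^{(\mathbf{r})}_{V^*}$ is precisely the conjugate transpose of the truncated matrix of $\mathbf{T}^{(\mathbf{r})}_{V}$.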
If $V \in L^{\infty}(\mathbb{T}; \mathbb{C}^{N \times N})$, then [\[\<TrG,A\>\]](#<TrG,A>){reference-type="eqref" reference="<TrG,A>"} and [\[\<TlF,B\>\]](#<TlF,B>){reference-type="eqref" reference="<TlF,B>"} imply that $\mathbf{T}^{(\mathbf{r})}_{V^*} = (\mathbf{T}^{(\mathbf{r})}_{V})^* \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{N \times d}))$ and $\mathbf{T}^{(\mathbf{l})}_{V^*} = (\mathbf{T}^{(\mathbf{l})}_{V})^* \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}))$. The next lemma establishes commutator formulas between the Toeplitz operators and the shift operators. **Lemma 22**. *Given $d, M,N\in \mathbb{N}_+$, if $B \in L^{\infty}(\mathbb{T}; \mathbb{C}^{M\times N})$, $G \in L^2_+(\mathbb{T}; \mathbb{C}^{N \times d})$, $F\in L^2_+(\mathbb{T}; \mathbb{C}^{d \times M})$, then $$\label{ForCommTSS*} \begin{split} & \left[\mathbf{T}^{(\mathbf{r})}_{B}, \mathbf{S}\right](G) = \widehat{BG}(-1) \in \mathbb{C}^{M \times d}; \quad \left[\mathbf{S}^*, \mathbf{T}^{(\mathbf{r})}_{B}\right](G) = \mathbf{S}^*\left(\Pi_{\geq 0}B \right)\hat{G}(0) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ & \left[\mathbf{T}^{(\mathbf{l})}_{B}, \mathbf{S}\right](F) = \widehat{FB}(-1) \in \mathbb{C}^{d \times N}; \quad \left[\mathbf{S}^*, \mathbf{T}^{(\mathbf{l})}_{B}\right](F) = \hat{F}(0) \mathbf{S}^*\left(\Pi_{\geq 0}B \right) \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}).\\ \end{split}$$* *Proof.* Note that $\Pi_{\geq 0} \left(\mathbf{e}_1 \Pi_{<0} (BG) \right) = \widehat{BG}(-1)$ and $\Pi_{\geq 0} \left(\mathbf{e}_1 \Pi_{<0} (FB) \right) = \widehat{FB}(-1)$.
So we have $\mathbf{T}^{(\mathbf{r})}_{B} \mathbf{S} (G) = \mathbf{e}_1 \mathbf{T}^{(\mathbf{r})}_{B}(G) + \Pi_{\geq 0}\left(\mathbf{e}_1 \Pi_{< 0}(BG)\right)= \mathbf{S}\mathbf{T}^{(\mathbf{r})}_{B}(G) + \widehat{BG}(-1)$ and $\mathbf{T}^{(\mathbf{l})}_{B} \mathbf{S} (F) = \mathbf{e}_1 \mathbf{T}^{(\mathbf{l})}_{B}(F) + \Pi_{\geq 0}\left(\mathbf{e}_1 \Pi_{< 0}(FB)\right)= \mathbf{S}\mathbf{T}^{(\mathbf{l})}_{B}(F) + \widehat{FB}(-1)$. Since $\mathbf{e}_{-1} \Pi_{<0}(BG) \in L^2_-(\mathbb{T}; \mathbb{C}^{M \times d})$ and $\Pi_{\geq 0}\left(B \Pi_{<0}(\mathbf{e}_{-1}G) \right)=\Pi_{\geq 0}\left(\mathbf{e}_{-1} B \right)\hat{G}(0)$, we obtain $\mathbf{S}^* \mathbf{T}^{(\mathbf{r})}_{B} (G) = \Pi_{\geq 0} \left(\mathbf{e}_{-1} B G \right)= \mathbf{T}^{(\mathbf{r})}_{B}\mathbf{S}^* (G)+\Pi_{\geq 0} \left(\mathbf{e}_{-1} (\Pi_{\geq 0} B) \right)\hat{G}(0) = \mathbf{T}^{(\mathbf{r})}_{B}\mathbf{S}^* (G) + \mathbf{S}^*\left(\Pi_{\geq 0}B \right)\hat{G}(0)$. Since $\mathbf{e}_{-1}\Pi_{<0}(FB) \in L^2_-(\mathbb{T}; \mathbb{C}^{d \times N})$ and $\Pi_{\geq 0}\left(\Pi_{<0}(\mathbf{e}_{-1}F) B \right)=\hat{F}(0) \Pi_{\geq 0}\left(\mathbf{e}_{-1} B \right)=\hat{F}(0) \Pi_{\geq 0}\left(\mathbf{e}_{-1} (\Pi_{\geq 0}B) \right)$, we have $\mathbf{S}^* \mathbf{T}^{(\mathbf{l})}_{B} (F) =\Pi_{\geq 0}\left( \mathbf{e}_{-1} F B \right) = \mathbf{T}^{(\mathbf{l})}_{B} \mathbf{S}^* (F) + \hat{F}(0) \mathbf{S}^*\left(\Pi_{\geq 0}B \right)$. ◻ Given any positive integers $d, M,N, P, Q\in\mathbb{N}_+$, we choose $A \in L^{\infty} (\mathbb{T}; \mathbb{C}^{M \times N})$, $B \in L^{\infty}(\mathbb{T}; \mathbb{C}^{N \times P})$ and $C \in L^{\infty}(\mathbb{T}; \mathbb{C}^{P\times Q})$.
The following double Toeplitz operators are bounded: $$\label{2Toep} \begin{split} &\mathbf{T}^{(\mathbf{r})}_{A} \mathbf{T}^{(\mathbf{r})}_{B} : G \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d}) \mapsto \mathbf{T}^{(\mathbf{r})}_{A} \mathbf{T}^{(\mathbf{r})}_{B}(G) = \Pi_{\geq 0}\left( A\Pi_{\geq 0}(BG)\right)\in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ & \mathbf{T}^{(\mathbf{r})}_{AB} : G \in L^2_+(\mathbb{T}; \mathbb{C}^{P \times d}) \mapsto \mathbf{T}^{(\mathbf{r})}_{AB}(G) = \Pi_{\geq 0}\left(ABG\right)\in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ &\mathbf{T}^{(\mathbf{l})}_{C} \mathbf{T}^{(\mathbf{l})}_{B} : F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \mapsto \mathbf{T}^{(\mathbf{l})}_{C} \mathbf{T}^{(\mathbf{l})}_{B}(F) = \Pi_{\geq 0}\left( \Pi_{\geq 0}(FB)C\right)\in L^2_+(\mathbb{T}; \mathbb{C}^{d\times Q});\\ & \mathbf{T}^{(\mathbf{l})}_{BC} : F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times N}) \mapsto \mathbf{T}^{(\mathbf{l})}_{BC}(F) = \Pi_{\geq 0}\left(FBC\right)\in L^2_+(\mathbb{T}; \mathbb{C}^{d\times Q}). \end{split}$$ **Lemma 23**.
*Given $M,N, P, Q\in\mathbb{N}_+$, $U \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M \times N})$, $V \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P \times N})$ and $W \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{P\times Q})$, we have $$\label{rel2K&T-TTfor} \mathbf{K}^{(\mathbf{r})}_{U} \mathbf{K}^{(\mathbf{l})}_{V} = \mathbf{T}^{(\mathbf{r})}_{U V^*} - \mathbf{T}^{(\mathbf{r})}_{U}\mathbf{T}^{(\mathbf{r})}_{V^*}, \quad \mathbf{K}^{(\mathbf{l})}_{V} \mathbf{K}^{(\mathbf{r})}_{W} = \mathbf{T}^{(\mathbf{l})}_{W^* V} - \mathbf{T}^{(\mathbf{l})}_{V}\mathbf{T}^{(\mathbf{l})}_{W^*}.$$* *Proof.* If $G \in H^1_+(\mathbb{T}; \mathbb{C}^{P \times d})$, $F \in H^1_+(\mathbb{T}; \mathbb{C}^{d \times Q})$ for some $d \in \mathbb{N}_+$, formula [\[Pi\<0adjTorus\]](#Pi<0adjTorus){reference-type="eqref" reference="Pi<0adjTorus"} yields that $$\begin{split} & \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{V} (G) = \Pi_{\geq 0} \left(UV^* G- U \Pi_{\geq 0}(V^* G) \right) + U\widehat{V^* G}(0) = (\mathbf{T}^{(\mathbf{r})}_{U V^*} - \mathbf{T}^{(\mathbf{r})}_{U}\mathbf{T}^{(\mathbf{r})}_{V^*}) (G) + U\widehat{V^* G}(0); \\ & \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W}(F) = \Pi_{\geq 0} \left(F W^* V - \Pi_{\geq 0}(F W^*)V \right) + \widehat{F W^*}(0)V =(\mathbf{T}^{(\mathbf{l})}_{W^* V} - \mathbf{T}^{(\mathbf{l})}_{V}\mathbf{T}^{(\mathbf{l})}_{W^*})(F) + \widehat{F W^*}(0)V. \end{split}$$It suffices to conclude by [\[rel2K2Hfor\]](#rel2K2Hfor){reference-type="eqref" reference="rel2K2Hfor"}. ◻ **Lemma 24**.
*Given $P \in \mathbb{C}^{M \times d}$ and $Q \in \mathbb{C}^{d\times N}$ for some $M,N,d \in \mathbb{N}_+$, if $U\in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$, then $$\label{HHP} \mathbf{H}^{(\mathbf{r})}_{U} \mathbf{H}^{(\mathbf{l})}_{U} (P) = \mathbf{T}^{(\mathbf{r})}_{U U^*}(P) \in L^2_+(\mathbb{T}; \mathbb{C}^{M\times d}), \quad \mathbf{H}^{(\mathbf{l})}_{U} \mathbf{H}^{(\mathbf{r})}_{U} (Q) = \mathbf{T}^{(\mathbf{l})}_{U^* U}(Q) \in L^2_+(\mathbb{T}; \mathbb{C}^{d\times N}).$$* *Proof.* Trudinger's inequality [\[TruMatrixIne\]](#TruMatrixIne){reference-type="eqref" reference="TruMatrixIne"} yields that $U U^* \in L^2(\mathbb{T}; \mathbb{C}^{M\times M})$ and $U^* U \in L^2(\mathbb{T}; \mathbb{C}^{N\times N})$. Then [\[HHP\]](#HHP){reference-type="eqref" reference="HHP"} is obtained by [\[rlHankel\]](#rlHankel){reference-type="eqref" reference="rlHankel"}, [\[rlToep\]](#rlToep){reference-type="eqref" reference="rlToep"} and [\[matrixXL\^2\]](#matrixXL^2){reference-type="eqref" reference="matrixXL^2"}. ◻ ## Proof of Theorem $\ref{LaxPairThm}$ **Lemma 25**.
*Given $s> \frac{1}{2}$ and $M,N,P,Q \in \mathbb{N}_+$, if $U \in H^s_+(\mathbb{T}; \mathbb{C}^{M \times N})$, $V \in H^s_+(\mathbb{T}; \mathbb{C}^{P \times N})$ and $W \in H^s_+(\mathbb{T}; \mathbb{C}^{P\times Q})$, then $\forall d \in \mathbb{N}_+$, the following identities hold, $$\label{HUVWfor} \begin{split} \mathbf{H}^{(\mathbf{r})}_{\Pi_{\geq 0}(UV^*W)} = &\mathbf{T}^{(\mathbf{r})}_{UV^*}\mathbf{H}^{(\mathbf{r})}_W + \mathbf{H}^{(\mathbf{r})}_U \mathbf{T}^{(\mathbf{l})}_{W^* V} - \mathbf{H}^{(\mathbf{r})}_U \mathbf{H}^{(\mathbf{l})}_V \mathbf{H}^{(\mathbf{r})}_W: L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ \mathbf{H}^{(\mathbf{l})}_{\Pi_{\geq 0}(UV^*W)} = & \mathbf{T}^{(\mathbf{l})}_{V^* W}\mathbf{H}^{(\mathbf{l})}_U + \mathbf{H}^{(\mathbf{l})}_W \mathbf{T}^{(\mathbf{r})}_{V U^* } - \mathbf{H}^{(\mathbf{l})}_W \mathbf{H}^{(\mathbf{r})}_V \mathbf{H}^{(\mathbf{l})}_U: L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}); \\ \mathbf{K}^{(\mathbf{r})}_{\Pi_{\geq 0}(UV^*W)} = &\mathbf{T}^{(\mathbf{r})}_{UV^*}\mathbf{K}^{(\mathbf{r})}_W + \mathbf{K}^{(\mathbf{r})}_U \mathbf{T}^{(\mathbf{l})}_{W^* V} - \mathbf{K}^{(\mathbf{r})}_U \mathbf{K}^{(\mathbf{l})}_V \mathbf{K}^{(\mathbf{r})}_W: L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d});\\ \mathbf{K}^{(\mathbf{l})}_{\Pi_{\geq 0}(UV^*W)} = & \mathbf{T}^{(\mathbf{l})}_{V^* W}\mathbf{K}^{(\mathbf{l})}_U + \mathbf{K}^{(\mathbf{l})}_W \mathbf{T}^{(\mathbf{r})}_{V U^* } - \mathbf{K}^{(\mathbf{l})}_W \mathbf{K}^{(\mathbf{r})}_V \mathbf{K}^{(\mathbf{l})}_U: L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}). 
\end{split}$$Equivalently, $\forall d \in \mathbb{N}_+$, the following commutator formulas also hold: $$\label{commS*Toep(rlr)} \begin{split} \left[\mathbf{S}^*, \mathbf{T}^{(\mathbf{r})}_{UV^*}\right]\mathbf{H}^{(\mathbf{r})}_W = &\mathbf{K}^{(\mathbf{r})}_U \left( \mathbf{H}^{(\mathbf{l})}_V \mathbf{H}^{(\mathbf{r})}_W - \mathbf{K}^{(\mathbf{l})}_V \mathbf{K}^{(\mathbf{r})}_W\right): L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}); \\ \mathbf{H}^{(\mathbf{r})}_U \left[\mathbf{T}^{(\mathbf{l})}_{W^* V}, \mathbf{S}\right] = & \left(\mathbf{H}^{(\mathbf{r})}_U \mathbf{H}^{(\mathbf{l})}_V - \mathbf{K}^{(\mathbf{r})}_U \mathbf{K}^{(\mathbf{l})}_V \right) \mathbf{K}^{(\mathbf{r})}_W: L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}) \to L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}); \\ \left[\mathbf{S}^*, \mathbf{T}^{(\mathbf{l})}_{V^*W }\right]\mathbf{H}^{(\mathbf{l})}_U = &\mathbf{K}^{(\mathbf{l})}_W \left( \mathbf{H}^{(\mathbf{r})}_V \mathbf{H}^{(\mathbf{l})}_U - \mathbf{K}^{(\mathbf{r})}_V \mathbf{K}^{(\mathbf{l})}_U\right): L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}); \\ \mathbf{H}^{(\mathbf{l})}_W \left[\mathbf{T}^{(\mathbf{r})}_{V U^*}, \mathbf{S}\right] = & \left(\mathbf{H}^{(\mathbf{l})}_W \mathbf{H}^{(\mathbf{r})}_V - \mathbf{K}^{(\mathbf{l})}_W \mathbf{K}^{(\mathbf{r})}_V \right) \mathbf{K}^{(\mathbf{l})}_U: L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}) \to L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q}). \\ \end{split}$$* *Proof.* If $F \in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$, since $UV^*W \in H^1(\mathbb{T}; \mathbb{C}^{M \times Q})$, we have $\Pi_{<0}(UV^* W)F^* \in L^2_-(\mathbb{T}; \mathbb{C}^{M \times d})$ by Lemma $\ref{AB*L2-}$. Formula [\[Pi\<0adjTorus\]](#Pi<0adjTorus){reference-type="eqref" reference="Pi<0adjTorus"} yields that $\Pi_{<0}(WF^*) = \left(\Pi_{\geq 0}(F W^*) \right)^* -\widehat{WF^*}(0) \in L^2_-(\mathbb{T}; \mathbb{C}^{P \times d})$. 
Then $$\label{Hrpiuvw1} \mathbf{H}^{(\mathbf{r})}_{\Pi_{\geq 0}(UV^*W)} (F) = \Pi_{\geq 0}(UV^* W F^*) = \mathbf{T}^{(\mathbf{r})}_{UV^*}\mathbf{H}^{(\mathbf{r})}_W(F) + \mathbf{H}^{(\mathbf{r})}_{U} \left( \Pi_{\geq 0}(FW^*)V\right) -\Pi_{\geq 0}(UV^*)\widehat{WF^*}(0).$$Using [\[Pi\<0adjTorus\]](#Pi<0adjTorus){reference-type="eqref" reference="Pi<0adjTorus"} again, we obtain $\Pi_{\geq 0}(FW^*) = FW^* - \left(\Pi_{\geq 0}(WF^*)\right)^* + \widehat{F W^*}(0) \in L^2(\mathbb{T}; \mathbb{C}^{d \times P})$. Then $$\label{PiFW*V} \begin{split} \Pi_{\geq 0}(FW^*)V =\Pi_{\geq 0} \left( \Pi_{\geq 0}(FW^*)V \right) = \mathbf{T}^{(\mathbf{l})}_{W^* V}(F) - \mathbf{H}^{(\mathbf{l})}_{V} \mathbf{H}^{(\mathbf{r})}_{W}(F) + \widehat{F W^*}(0)V \in L^2(\mathbb{T}; \mathbb{C}^{d \times N}), \end{split}$$by using Lemma $\ref{ABL2+}$. Since $\widehat{F W^*}(0) = \left( \widehat{WF^*}(0) \right)^* \in \mathbb{C}^{d \times P}$, formula [\[matrixXL\^2\]](#matrixXL^2){reference-type="eqref" reference="matrixXL^2"} implies that $$\label{HUFW^*0V} \mathbf{H}^{(\mathbf{r})}_{U} \left( \widehat{F W^*}(0)V\right) = \Pi_{\geq 0}(UV^*)\widehat{WF^*}(0) \in H^s_+(\mathbb{T}; \mathbb{C}^{M \times d}).$$Plugging formulas [\[PiFW\*V\]](#PiFW*V){reference-type="eqref" reference="PiFW*V"} and [\[HUFW\^\*0V\]](#HUFW^*0V){reference-type="eqref" reference="HUFW^*0V"} into [\[Hrpiuvw1\]](#Hrpiuvw1){reference-type="eqref" reference="Hrpiuvw1"}, we obtain the first formula of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"}.\ If $G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, then $G^* U = \Pi_{\geq 0}(G^* U) + \left(\Pi_{\geq 0}(U^* G) \right)^* -\widehat{G^* U}(0) \in L^2 (\mathbb{T}; \mathbb{C}^{d \times N})$ by [\[Pi\<0adjTorus\]](#Pi<0adjTorus){reference-type="eqref" reference="Pi<0adjTorus"}. Lemma $\ref{A*BL2-}$ implies that $G^* \Pi_{<0}(UV^* W) \in L^2_-(\mathbb{T}; \mathbb{C}^{d \times Q})$. 
As a consequence, we have $$\label{HlPiUV*WG} \mathbf{H}^{(\mathbf{l})}_{\Pi_{\geq 0}(UV^*W)} (G) = \Pi_{\geq 0}( G^* UV^* W) = \mathbf{T}^{(\mathbf{l})}_{V^* W}\mathbf{H}^{(\mathbf{l})}_U(G) + \mathbf{H}^{(\mathbf{l})}_{W} \left(V \Pi_{\geq 0}(U^*G) -V \widehat{U^* G}(0)\right),$$because [\[matrixXL\^2\]](#matrixXL^2){reference-type="eqref" reference="matrixXL^2"} yields $\mathbf{H}^{(\mathbf{l})}_{W} \left( V \widehat{U^* G}(0)\right) = \widehat{G^* U}(0)\Pi_{\geq 0}(V^*W)$. Lemma $\ref{ABL2+}$ and [\[Pi\<0adjTorus\]](#Pi<0adjTorus){reference-type="eqref" reference="Pi<0adjTorus"} yield $$\label{VPiU*G} V \Pi_{\geq 0}(U^*G) = \Pi_{\geq 0} \left(V \Pi_{\geq 0}(U^*G) \right) = \mathbf{T}^{(\mathbf{r})}_{V U^*}(G) - \mathbf{H}^{(\mathbf{r})}_{V}\mathbf{H}^{(\mathbf{l})}_{U}(G) + V \widehat{U^* G}(0) \in L^2(\mathbb{T}; \mathbb{C}^{P \times d}).$$Plugging formula [\[VPiU\*G\]](#VPiU*G){reference-type="eqref" reference="VPiU*G"} into [\[HlPiUV\*WG\]](#HlPiUV*WG){reference-type="eqref" reference="HlPiUV*WG"}, we obtain the second formula of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"}.\ We now prove the commutator formulas [\[commS\*Toep(rlr)\]](#commS*Toep(rlr)){reference-type="eqref" reference="commS*Toep(rlr)"}. If $F\in L^2_+(\mathbb{T}; \mathbb{C}^{d \times Q})$, [\[matrixXL\^2\]](#matrixXL^2){reference-type="eqref" reference="matrixXL^2"} and [\[ForCommTSS\*\]](#ForCommTSS*){reference-type="eqref" reference="ForCommTSS*"} imply that $\mathbf{H}^{(\mathbf{r})}_{U} \left(\mathbf{e}_1 \widehat{FW^*}(0)V \right) = \Pi_{\geq 0}\left(\mathbf{e}_{-1} UV^* \right) \widehat{WF^*}(0)=\mathbf{S}^* (U V^*) \left(\mathbf{H}^{(\mathbf{r})}_{W}(F)\right)^{\wedge}(0)=\left[\mathbf{S}^* , \mathbf{T}^{(\mathbf{r})}_{U V^*}\right]\mathbf{H}^{(\mathbf{r})}_{W}(F)$.
We have $\mathbf{K}^{(\mathbf{r})}_U \left( \mathbf{H}^{(\mathbf{l})}_V \mathbf{H}^{(\mathbf{r})}_W - \mathbf{K}^{(\mathbf{l})}_V \mathbf{K}^{(\mathbf{r})}_W\right) (F) = \mathbf{H}^{(\mathbf{r})}_U \mathbf{S}\left( \widehat{FW^*}(0)V \right) = \left[\mathbf{S}^* , \mathbf{T}^{(\mathbf{r})}_{U V^*}\right]\mathbf{H}^{(\mathbf{r})}_{W}(F)$ by using [\[rel2K2Hfor\]](#rel2K2Hfor){reference-type="eqref" reference="rel2K2Hfor"}. If $G \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d})$, Lemma $\ref{AB*L2-}$ yields that $\Pi_{<0}\left(\mathbf{e}_{-1} G^* U\right) V^* \in L^2_-(\mathbb{T}; \mathbb{C}^{d \times P})$, then $$\label{KulGV*0mode} \Pi_{\geq 0}\left(\mathbf{K}^{(\mathbf{l})}_{U}(G) V^* \right)=\Pi_{\geq 0}\left(\mathbf{e}_{-1}G^* U V^* \right) \; \Rightarrow \; ( \mathbf{K}^{(\mathbf{l})}_{U}(G) V^*)^{\wedge}(0) = (G^* U V^* )^{\wedge}(1).$$Thanks to formula [\[ForCommTSS\*\]](#ForCommTSS*){reference-type="eqref" reference="ForCommTSS*"}, [\[KulGV\*0mode\]](#KulGV*0mode){reference-type="eqref" reference="KulGV*0mode"} and [\[rel2K2Hfor\]](#rel2K2Hfor){reference-type="eqref" reference="rel2K2Hfor"}, we have $$\begin{split} \mathbf{H}^{(\mathbf{l})}_{W} \left[\mathbf{T}^{(\mathbf{r})}_{V U^*}, \mathbf{S} \right](G) = & \mathbf{H}^{(\mathbf{l})}_{W} (\left(V U^* G )^{\wedge}(-1) \right) = (G^* U V^* )^{\wedge}(1) W = ( \mathbf{K}^{(\mathbf{l})}_{U}(G) V^*)^{\wedge}(0) W \\ =& \left(\mathbf{H}^{(\mathbf{l})}_W \mathbf{H}^{(\mathbf{r})}_V - \mathbf{K}^{(\mathbf{l})}_W \mathbf{K}^{(\mathbf{r})}_V \right) \mathbf{K}^{(\mathbf{l})}_U (G). \end{split}$$The first and the last formula of [\[commS\*Toep(rlr)\]](#commS*Toep(rlr)){reference-type="eqref" reference="commS*Toep(rlr)"} are obtained. Together with the first two formulas of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"}, we can deduce the last two formulas of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"}. 
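The identities established so far lend themselves to a direct numerical sanity check. The Python sketch below (an illustration, not part of the proof) realizes matrix trigonometric polynomials through their Fourier coefficients and verifies [\[HHP\]](#HHP){reference-type="eqref" reference="HHP"}, the first formula of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"} and the first commutator formula of [\[commS\*Toep(rlr)\]](#commS*Toep(rlr)){reference-type="eqref" reference="commS*Toep(rlr)"} on random data. The operator conventions $\mathbf{H}^{(\mathbf{r})}_U F = \Pi_{\geq 0}(UF^*)$, $\mathbf{H}^{(\mathbf{l})}_U G = \Pi_{\geq 0}(G^*U)$, $\mathbf{T}^{(\mathbf{r})}_B F = \Pi_{\geq 0}(BF)$, $\mathbf{T}^{(\mathbf{l})}_B F = \Pi_{\geq 0}(FB)$ and $\mathbf{K}= \mathbf{S}^* \circ \mathbf{H}$ are assumptions read off from the computations in this proof.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5  # degree of the matrix trigonometric polynomials used below

def rand_poly(rows, cols):
    """Random element of L^2_+: Fourier modes 0..K with matrix coefficients."""
    return {n: rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))
            for n in range(K + 1)}

def mul(A, B):
    """Pointwise product of matrix Fourier series (coefficient convolution)."""
    C = {}
    for m, Am in A.items():
        for n, Bn in B.items():
            C[m + n] = C.get(m + n, 0) + Am @ Bn
    return C

def add(A, B, sign=1):
    C = dict(A)
    for n, Bn in B.items():
        C[n] = C.get(n, 0) + sign * Bn
    return C

def star(A):   # A^*: values conjugate-transposed, mode n sent to mode -n
    return {-n: An.conj().T for n, An in A.items()}

def proj(A):   # Szego projector Pi_{>=0}
    return {n: An for n, An in A.items() if n >= 0}

def Sstar(A):  # adjoint shift S^*: (S^*A)^(n) = A^(n+1), n >= 0
    return {n - 1: An for n, An in A.items() if n >= 1}

def close(A, B, tol=1e-9):
    return all(np.allclose(A.get(k, 0), B.get(k, 0), atol=tol) for k in set(A) | set(B))

# operator conventions assumed here (read off from the computations above)
Hr = lambda U, F: proj(mul(U, star(F)))   # H^r_U F = Pi_{>=0}(U F^*)
Hl = lambda U, G: proj(mul(star(G), U))   # H^l_U G = Pi_{>=0}(G^* U)
Tr = lambda B, F: proj(mul(B, F))         # right Toeplitz: symbol on the left
Tl = lambda B, F: proj(mul(F, B))         # left Toeplitz: symbol on the right
Kr = lambda U, F: Sstar(Hr(U, F))         # shifted Hankel operators, K = S^* o H
Kl = lambda U, G: Sstar(Hl(U, G))

M, N, P, Q, d = 2, 3, 2, 2, 2
U, V, W, F = rand_poly(M, N), rand_poly(P, N), rand_poly(P, Q), rand_poly(d, Q)

# [HHP]: H^r_U H^l_U (P0) = T^r_{UU^*}(P0) for a constant matrix P0
P0 = {0: rng.standard_normal((M, d)) + 0j}
ok_hhp = close(Hr(U, Hl(U, P0)), Tr(mul(U, star(U)), P0))

# first identity of [HUVWfor]
lhs = Hr(proj(mul(mul(U, star(V)), W)), F)
rhs = add(add(Tr(mul(U, star(V)), Hr(W, F)), Hr(U, Tl(mul(star(W), V), F))),
          Hr(U, Hl(V, Hr(W, F))), sign=-1)
ok_huvw = close(lhs, rhs)

# first commutator formula of [commS*Toep(rlr)]
G = Hr(W, F)
comm = add(Sstar(Tr(mul(U, star(V)), G)), Tr(mul(U, star(V)), Sstar(G)), sign=-1)
ok_comm = close(comm, Kr(U, add(Hl(V, G), Kl(V, Sstar(G)), sign=-1)))

print(ok_hhp, ok_huvw, ok_comm)  # expected: True True True
```

The checks are exact up to floating-point roundoff; the remaining identities of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"} can be tested the same way by exchanging the roles of the Hankel and shifted Hankel operators.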
The second and the third formulas of [\[commS\*Toep(rlr)\]](#commS*Toep(rlr)){reference-type="eqref" reference="commS*Toep(rlr)"} can be obtained by either comparing the first two formulas and the last two formulas of [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"} or following the same idea as the proof of the first and the last formula of [\[commS\*Toep(rlr)\]](#commS*Toep(rlr)){reference-type="eqref" reference="commS*Toep(rlr)"} by using [\[ForCommTSS\*\]](#ForCommTSS*){reference-type="eqref" reference="ForCommTSS*"} and [\[rel2K2Hfor\]](#rel2K2Hfor){reference-type="eqref" reference="rel2K2Hfor"}. ◻ *Proof of theorem $\ref{LaxPairThm}$.* Given $s>\frac{1}{2}$, set $V=W=U\in H^s_+(\mathbb{T}; \mathbb{C}^{M\times N})$ in formula [\[HUVWfor\]](#HUVWfor){reference-type="eqref" reference="HUVWfor"}. Then $$\begin{split} & \mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{\Pi_{\geq 0}(U U^* U)} - \mathbf{H}^{(\mathbf{r})}_{\Pi_{\geq 0}(U U^* U)} \mathbf{H}^{(\mathbf{l})}_{U} = \left[\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}, \; \mathbf{T}^{(\mathbf{r})}_{U U^*}\right] \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{M\times d})); \\ & \mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{\Pi_{\geq 0}(U U^* U)} - \mathbf{H}^{(\mathbf{l})}_{\Pi_{\geq 0}(U U^* U)} \mathbf{H}^{(\mathbf{r})}_{U} = \left[\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}, \; \mathbf{T}^{(\mathbf{l})}_{U^* U}\right] \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{d\times N})); \\ & \mathbf{K}^{(\mathbf{r})}_{U}\mathbf{K}^{(\mathbf{l})}_{\Pi_{\geq 0}(U U^* U)} - \mathbf{K}^{(\mathbf{r})}_{\Pi_{\geq 0}(U U^* U)} \mathbf{K}^{(\mathbf{l})}_{U} = \left[\mathbf{K}^{(\mathbf{r})}_{U}\mathbf{K}^{(\mathbf{l})}_{U}, \; \mathbf{T}^{(\mathbf{r})}_{U U^*}\right] \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{M\times d})); \\ & \mathbf{K}^{(\mathbf{l})}_{U}\mathbf{K}^{(\mathbf{r})}_{\Pi_{\geq 0}(U U^* U)} - \mathbf{K}^{(\mathbf{l})}_{\Pi_{\geq 0}(U U^* U)} 
\mathbf{K}^{(\mathbf{r})}_{U} = \left[\mathbf{K}^{(\mathbf{l})}_{U}\mathbf{K}^{(\mathbf{r})}_{U}, \; \mathbf{T}^{(\mathbf{l})}_{U^* U}\right] \in \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{d\times N})). \end{split}$$We conclude by the $\mathbb{C}$-antilinearity of the Hankel operators defined in [\[rlHankel\]](#rlHankel){reference-type="eqref" reference="rlHankel"} and [\[rlShiftHankel\]](#rlShiftHankel){reference-type="eqref" reference="rlShiftHankel"}. ◻ **Remark 26**. *Thanks to formula [\[rel2K&T-TTfor\]](#rel2K&T-TTfor){reference-type="eqref" reference="rel2K&T-TTfor"}, $(\mathbf{K}^{(\mathbf{r})}_{U}\mathbf{K}^{(\mathbf{l})}_{U}, -i \mathbf{T}^{(\mathbf{r})}_{U} \mathbf{T}^{(\mathbf{r})}_{ U^*} )$ and $(\mathbf{K}^{(\mathbf{l})}_{U}\mathbf{K}^{(\mathbf{r})}_{U}, -i \mathbf{T}^{(\mathbf{l})}_{U} \mathbf{T}^{(\mathbf{l})}_{U^*})$ are also Lax pairs of the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}.* # The explicit formula {#SecExplicit} This section is dedicated to establishing the explicit formula for solutions of [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}. Thanks to Theorem $\ref{LaxPairThm}$, the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"} has at least $4$ Lax pairs: $(\mathbf{H}^{(\mathbf{r})}_{U}\mathbf{H}^{(\mathbf{l})}_{U}, -i \mathbf{T}^{(\mathbf{r})}_{U U^*} )$, $(\mathbf{H}^{(\mathbf{l})}_{U}\mathbf{H}^{(\mathbf{r})}_{U}, -i \mathbf{T}^{(\mathbf{l})}_{U^* U} )$, $(\mathbf{K}^{(\mathbf{r})}_{U}\mathbf{K}^{(\mathbf{l})}_{U}, -i \mathbf{T}^{(\mathbf{r})}_{U U^*} )$, $(\mathbf{K}^{(\mathbf{l})}_{U}\mathbf{K}^{(\mathbf{r})}_{U}, -i \mathbf{T}^{(\mathbf{l})}_{U^* U} )$. Then we obtain the following corollary on unitary equivalence. **Corollary 27**.
*Given $M,N,d \in \mathbb{N}_+$ and $s>\tfrac{1}{2}$, if $U \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ solves equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, let $\mathbf{W}\in C^1(\mathbb{R}; \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{M \times d})))$ and $\mathscr{W}\in C^1(\mathbb{R}; \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{d \times N})))$ denote the unique solutions to the following equations: $$\label{WODE} \begin{split} \tfrac{\mathrm{d}}{\mathrm{d}t}\mathbf{W}(t) = -i \mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*}\mathbf{W}(t), \quad \tfrac{\mathrm{d}}{\mathrm{d}t}\mathscr{W}(t) = -i \mathbf{T}^{(\mathbf{l})}_{U(t)^* U(t)}\mathscr{W}(t) \end{split}$$with initial data $\mathbf{W}(0)= \mathrm{id}_{L^2_+(\mathbb{T};\mathbb{C}^{M \times d})}$ and $\mathscr{W}(0)= \mathrm{id}_{L^2_+(\mathbb{T};\mathbb{C}^{d \times N})}$. Then, for any $t \in \mathbb{R}$, $\mathbf{W}(t)$ and $\mathscr{W}(t)$ are both unitary operators and the following identities of unitary equivalences hold: $$\label{UniEquiHHKK} \begin{split} & \mathbf{H}^{(\mathbf{r})}_{U(t)}\mathbf{H}^{(\mathbf{l})}_{U(t)} = \mathbf{W}(t)\mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)} \mathbf{W}(t)^*; \quad \mathbf{H}^{(\mathbf{l})}_{U(t)}\mathbf{H}^{(\mathbf{r})}_{U(t)} = \mathscr{W}(t)\mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)} \mathscr{W}(t)^*;\\ & \mathbf{K}^{(\mathbf{r})}_{U(t)}\mathbf{K}^{(\mathbf{l})}_{U(t)} = \mathbf{W}(t)\mathbf{K}^{(\mathbf{r})}_{U(0)}\mathbf{K}^{(\mathbf{l})}_{U(0)} \mathbf{W}(t)^*; \quad \mathbf{K}^{(\mathbf{l})}_{U(t)}\mathbf{K}^{(\mathbf{r})}_{U(t)} = \mathscr{W}(t)\mathbf{K}^{(\mathbf{l})}_{U(0)}\mathbf{K}^{(\mathbf{r})}_{U(0)} \mathscr{W}(t)^*. \end{split}$$* *Proof.* Let $\mathbb{X}_{MN}:= \mathcal{B}(L^2_+(\mathbb{T}; \mathbb{C}^{M \times N}))$, $\forall M,N \in \mathbb{N}_+$.
Both $\mathcal{A}^{(\mathbf{r})}: t \in \mathbb{R} \mapsto \mathcal{A}^{(\mathbf{r})}(t) \in \mathcal{B}(\mathbb{X}_{Md})$ and $\mathcal{A}^{(\mathbf{l})}: t \in \mathbb{R} \mapsto \mathcal{A}^{(\mathbf{l})}(t) \in \mathcal{B}(\mathbb{X}_{dN})$ are continuous, where $\mathcal{A}^{(\mathbf{r})}(t): \mathbf{W}\in \mathbb{X}_{Md} \mapsto -i \mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*}\mathbf{W} \in \mathbb{X}_{Md}$ and $\mathcal{A}^{(\mathbf{l})}(t): \mathscr{W}\in \mathbb{X}_{dN} \mapsto -i \mathbf{T}^{(\mathbf{l})}_{ U(t)^* U(t)}\mathscr{W} \in \mathbb{X}_{dN}$. Then [\[WODE\]](#WODE){reference-type="eqref" reference="WODE"} admits a unique solution thanks to Proposition $\ref{CauchyThmODE}$. Since both $\mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*} \in \mathbb{X}_{Md}$ and $\mathbf{T}^{(\mathbf{l})}_{ U(t)^* U(t)} \in \mathbb{X}_{dN}$ are self-adjoint operators, we have $\mathbf{W}(t)^* = \mathbf{W}(t)^{-1} \in \mathbb{X}_{Md}$ and $\mathscr{W}(t)^* = \mathscr{W}(t)^{-1} \in \mathbb{X}_{dN}$ by the uniqueness argument in Proposition $\ref{CauchyThmODE}$. Then [\[4HeiLaxMSzego\]](#4HeiLaxMSzego){reference-type="eqref" reference="4HeiLaxMSzego"} yields that $\tfrac{\mathrm{d}}{\mathrm{d}t}(\mathbf{W}(t)^* \mathbf{H}^{(\mathbf{r})}_{U(t)}\mathbf{H}^{(\mathbf{l})}_{U(t)}\mathbf{W}(t))=\tfrac{\mathrm{d}}{\mathrm{d}t}(\mathbf{W}(t)^* \mathbf{K}^{(\mathbf{r})}_{U(t)}\mathbf{K}^{(\mathbf{l})}_{U(t)}\mathbf{W}(t))= 0_{\mathbb{X}_{Md}}$ and $\tfrac{\mathrm{d}}{\mathrm{d}t}(\mathscr{W}(t)^* \mathbf{H}^{(\mathbf{l})}_{U(t)}\mathbf{H}^{(\mathbf{r})}_{U(t)}\mathscr{W}(t)) =\tfrac{\mathrm{d}}{\mathrm{d}t}(\mathscr{W}(t)^* \mathbf{K}^{(\mathbf{l})}_{U(t)}\mathbf{K}^{(\mathbf{r})}_{U(t)}\mathscr{W}(t)) = 0_{\mathbb{X}_{dN}}$.
◻ The following lemma relates the family of unitary operators $(\mathbf{W}(t))_{t \in \mathbb{R}}$ to the unitary groups $(e^{it \mathbf{K}^{(\mathbf{r})}_{U(0)}\mathbf{K}^{(\mathbf{l})}_{U(0)}})_{t \in \mathbb{R}}$ and $(e^{it \mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)}})_{t \in \mathbb{R}}$, which allows one to linearize the matrix Szegő flow. **Lemma 28**. *Given $M,N,d \in \mathbb{N}_+$ and $s>\tfrac{1}{2}$, if $U \in C^{\infty}(\mathbb{R}; H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}))$ solves equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, and $\mathbf{W}\in C^1(\mathbb{R}; \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{M \times d})))$, $\mathscr{W}\in C^1(\mathbb{R}; \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{d \times N})))$ are defined by [\[WODE\]](#WODE){reference-type="eqref" reference="WODE"} of Corollary $\ref{corUniEquivSzego}$. Then the following identities hold, $\forall t \in \mathbb{R}$: $$\label{W*S*WHH} \begin{split} & \mathbf{W}(t)^* \mathbf{S}^*\mathbf{W}(t) \mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)} = e^{it \mathbf{K}^{(\mathbf{r})}_{U(0)}\mathbf{K}^{(\mathbf{l})}_{U(0)}} \mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)}} \mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)} \in \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{M \times d})); \\ & \mathscr{W}(t)^* \mathbf{S}^*\mathscr{W}(t) \mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)} = e^{it \mathbf{K}^{(\mathbf{l})}_{U(0)}\mathbf{K}^{(\mathbf{r})}_{U(0)}} \mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)}} \mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)} \in \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{d \times N})).
\end{split}$$Moreover, for any constant matrices $P \in \mathbb{C}^{M \times d}$ and $Q \in \mathbb{C}^{d\times N}$, we have $$\label{W*PQSze} \mathbf{W}(t)^*(P) = e^{it \mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)}}(P) \in L^2_+(\mathbb{T}; \mathbb{C}^{M \times d}); \quad \mathscr{W}(t)^*(Q) = e^{it \mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)}}(Q) \in L^2_+(\mathbb{T}; \mathbb{C}^{d\times N}).$$We also have $$\label{W*U} \mathbf{W}(t)^* (U(t)) =\mathscr{W}(t)^* (U(t)) = U(0) \in H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}).$$* *Proof.* Set $\mathfrak{Y}(t):= \mathbf{W}(t)^* \mathbf{S}^*\mathbf{W}(t) \mathbf{H}^{(\mathbf{r})}_{U(0)}\mathbf{H}^{(\mathbf{l})}_{U(0)}$ and $\mathscr{Y}(t):=\mathscr{W}(t)^* \mathbf{S}^*\mathscr{W}(t) \mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)}$, $\forall t \in \mathbb{R}$. Then formulas [\[commS\*Toep(rlr)\]](#commS*Toep(rlr)){reference-type="eqref" reference="commS*Toep(rlr)"} and [\[UniEquiHHKK\]](#UniEquiHHKK){reference-type="eqref" reference="UniEquiHHKK"} yield that $$\label{Y'Yt'Sze} \begin{split} &\tfrac{\mathrm{d}}{\mathrm{d}t}\mathfrak{Y}(t)= -i \mathbf{W}(t)^* [\mathbf{S}^*, \mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*}]\mathbf{H}^{(\mathbf{r})}_{U(t)}\mathbf{H}^{(\mathbf{l})}_{U(t)} \mathbf{W}(t) \in \mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{M \times d}))\\ = & i \mathbf{W}(t)^* \mathbf{K}^{(\mathbf{r})}_{U(t)}\left(\mathbf{K}^{(\mathbf{l})}_{U(t)} \mathbf{K}^{(\mathbf{r})}_{U(t)} - \mathbf{H}^{(\mathbf{l})}_{U(t)} \mathbf{H}^{(\mathbf{r})}_{U(t)}\right)\mathbf{H}^{(\mathbf{l})}_{U(t)} \mathbf{W}(t) = i \mathbf{K}^{(\mathbf{r})}_{U(0)} \mathbf{K}^{(\mathbf{l})}_{U(0)} \mathfrak{Y}(t) -i \mathfrak{Y}(t) \mathbf{H}^{(\mathbf{r})}_{U(0)} \mathbf{H}^{(\mathbf{l})}_{U(0)};\\ &\tfrac{\mathrm{d}}{\mathrm{d}t}\mathscr{Y}(t)= -i \mathscr{W}(t)^* [\mathbf{S}^*, \mathbf{T}^{(\mathbf{l})}_{U(t)^* U(t)}]\mathbf{H}^{(\mathbf{l})}_{U(t)}\mathbf{H}^{(\mathbf{r})}_{U(t)} \mathscr{W}(t) \in 
\mathcal{B}(L^2_+(\mathbb{T};\mathbb{C}^{d \times N}))\\ = & i \mathscr{W}(t)^* \mathbf{K}^{(\mathbf{l})}_{U(t)}\left(\mathbf{K}^{(\mathbf{r})}_{U(t)} \mathbf{K}^{(\mathbf{l})}_{U(t)} - \mathbf{H}^{(\mathbf{r})}_{U(t)} \mathbf{H}^{(\mathbf{l})}_{U(t)}\right)\mathbf{H}^{(\mathbf{r})}_{U(t)} \mathscr{W}(t) = i \mathbf{K}^{(\mathbf{l})}_{U(0)} \mathbf{K}^{(\mathbf{r})}_{U(0)} \mathscr{Y}(t) -i \mathscr{Y}(t) \mathbf{H}^{(\mathbf{l})}_{U(0)} \mathbf{H}^{(\mathbf{r})}_{U(0)}.\\ \end{split}$$Then [\[W\*S\*WHH\]](#W*S*WHH){reference-type="eqref" reference="W*S*WHH"} is obtained by integrating [\[Y\'Yt\'Sze\]](#Y'Yt'Sze){reference-type="eqref" reference="Y'Yt'Sze"} and using [\[HExpHH\]](#HExpHH){reference-type="eqref" reference="HExpHH"}. Formula [\[W\*U\]](#W*U){reference-type="eqref" reference="W*U"} is obtained by [\[WODE\]](#WODE){reference-type="eqref" reference="WODE"} and the following expression of the matrix Szegő equation [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}: $$\label{U'(t)=A((U(t)))} \partial_t U(t) = -i \mathbf{T}^{(\mathbf{r})}_{U(t) U(t)^*}(U(t)) = -i \mathbf{T}^{(\mathbf{l})}_{U(t)^* U(t)}(U(t)) \in H^s_+(\mathbb{T}; \mathbb{C}^{M\times N}).$$If $P \in \mathbb{C}^{M \times d}$ and $Q \in \mathbb{C}^{d\times N}$, then $\partial_t (\mathbf{W}(t)^* (P))=i \mathbf{W}(t)^* \mathbf{H}^{(\mathbf{r})}_{U(t)} \mathbf{H}^{(\mathbf{l})}_{U(t)} (P) = i \mathbf{H}^{(\mathbf{r})}_{U(0)} \mathbf{H}^{(\mathbf{l})}_{U(0)} \mathbf{W}(t)^* (P)$ and $\partial_t (\mathscr{W}(t)^* (Q))=i \mathscr{W}(t)^* \mathbf{H}^{(\mathbf{l})}_{U(t)} \mathbf{H}^{(\mathbf{r})}_{U(t)} (Q) = i \mathbf{H}^{(\mathbf{l})}_{U(0)} \mathbf{H}^{(\mathbf{r})}_{U(0)} \mathscr{W}(t)^* (Q)$ by [\[WODE\]](#WODE){reference-type="eqref" reference="WODE"} and [\[HHP\]](#HHP){reference-type="eqref" reference="HHP"}. ◻ Finally, we let these three families of unitary operators act on the shift operator $\mathbf{S}^*$ and complete the proof by a conjugation argument.
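Before turning to the proof, the linearization identities [\[W\*PQSze\]](#W*PQSze){reference-type="eqref" reference="W*PQSze"} and [\[W\*U\]](#W*U){reference-type="eqref" reference="W*U"} can be illustrated on the simplest possible data: constant $U_0 \in \mathbb{C}^{M \times N}$. In this case one computes directly (an elementary observation made here for illustration, not a statement quoted from the text) that $U(t)U(t)^* \equiv U_0U_0^* =: A$, that $U(t)=e^{-itA}U_0$ solves [\[MSzego\]](#MSzego){reference-type="eqref" reference="MSzego"}, and that $\mathbf{W}(t)$ acts on each Fourier mode as left multiplication by $e^{-itA}$, so that $\mathbf{W}(t)^*(U(t)) = U_0$. A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 3
U0 = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
A = U0 @ U0.conj().T                 # A = U_0 U_0^*, Hermitian and conserved

lam, Vec = np.linalg.eigh(A)         # spectral theorem: A = Vec diag(lam) Vec^*
expA = lambda s: (Vec * np.exp(s * lam)) @ Vec.conj().T   # e^{sA}

U = lambda t: expA(-1j * t) @ U0     # closed-form flow for constant data

t, h = 0.7, 1e-6
Ut = U(t)
dU = (U(t + h) - U(t - h)) / (2 * h)          # centred difference in time
ok_flow = np.allclose(dU, -1j * Ut @ Ut.conj().T @ Ut, atol=1e-5)
ok_sym = np.allclose(Ut @ Ut.conj().T, A)     # U(t)U(t)^* = A for all t
ok_lin = np.allclose(expA(1j * t) @ Ut, U0)   # W(t)^* U(t) = U_0, cf. [W*U]
print(ok_flow, ok_sym, ok_lin)                # expected: True True True
```

For constant data the nonlinearity $\Pi_{\geq 0}(UU^*U)$ reduces to the matrix product $UU^*U$, which is why the whole flow is captured by the single Hermitian matrix $A$.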
*Proof of theorem $\ref{ExpForThm}$.* First, assume that $U_0= U(0) \in \mathcal{M}_{\mathrm{FR}}^{M \times N}$ and $\mathbf{R}:=\dim_{\mathbb{C}} \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U_0} \in \mathbb{N}$. Proposition $\ref{WKroneckerM*N}$ and the unitary equivalence property [\[UniEquiHHKK\]](#UniEquiHHKK){reference-type="eqref" reference="UniEquiHHKK"} yield that $U(t) \in \mathcal{M}_{\mathrm{FR}}^{M \times N}$ and that $\mathbb{V} :=\mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U_0} = \mathrm{Im}\mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}$ is an $\mathbf{R}$-dimensional subspace of $L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$ such that $$\mathbf{W}(t)^* \mathbf{S}^*\mathbf{W}(t)\big|_{\mathbb{V}} = e^{it \mathbf{K}^{(\mathbf{r})}_{U_0}\mathbf{K}^{(\mathbf{l})}_{U_0}} \mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}}\big|_{\mathbb{V}} : \mathbb{V} \to \mathbb{V}$$by [\[W\*S\*WHH\]](#W*S*WHH){reference-type="eqref" reference="W*S*WHH"}. Since $U_0 =\mathbf{H}^{(\mathbf{r})}_{U_0} (\mathbb{I}_N) \in \mathbb{V}$, thanks to the invariant subspace property [\[ImHuInv\]](#ImHuInv){reference-type="eqref" reference="ImHuInv"}, we have $$\label{1-zW*S*WMSze} (\mathrm{id} - z \mathbf{W}(t)^* \mathbf{S}^*\mathbf{W}(t) )^{-1}(U_0) = ( \mathrm{id} - z e^{it \mathbf{K}^{(\mathbf{r})}_{U_0}\mathbf{K}^{(\mathbf{l})}_{U_0}} \mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}})^{-1}(U_0) \in \mathbb{V},$$$\forall z \in D(0,1)$.
Then [\[1-zW\*S\*WMSze\]](#1-zW*S*WMSze){reference-type="eqref" reference="1-zW*S*WMSze"}, [\[W\*PQSze\]](#W*PQSze){reference-type="eqref" reference="W*PQSze"} and [\[W\*U\]](#W*U){reference-type="eqref" reference="W*U"} imply that $$\label{<>kjmsze} \begin{split} & \langle (\mathrm{id} -z \mathbf{S}^*)^{-1} U(t), \mathbb{E}^{(MN)}_{kj}\rangle_{L^2_+} = \langle (\mathrm{id} -z \mathbf{W}(t)^* \mathbf{S}^* \mathbf{W}(t))^{-1} \mathbf{W}(t)^*U(t), \mathbf{W}(t)^* \mathbb{E}^{(MN)}_{kj}\rangle_{L^2_+} \\ = & \langle (\mathrm{id} - z e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}}e^{it \mathbf{K}^{(\mathbf{r})}_{U_0}\mathbf{K}^{(\mathbf{l})}_{U_0}} \mathbf{S}^* )^{-1} e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}} (U_0) , \mathbb{E}^{(MN)}_{kj}\rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}. \end{split}$$The Poisson integral of $U(t)=\sum_{n \geq 0} \hat{U}(t, n) \mathbf{e}_n \in \mathcal{M}_{\mathrm{FR}}^{M \times N}$ is given by $$\label{PoiUMsze} \underline{U}(t, z) = \sum_{n \geq 0} z^n \hat{U}(t, n) = \sum_{k=1}^M \sum_{j=1}^N \langle (\mathrm{id} -z \mathbf{S}^*)^{-1} U(t), \mathbb{E}^{(MN)}_{kj}\rangle_{L^2_+(\mathbb{T}; \mathbb{C}^{M \times N})}\mathbb{E}^{(MN)}_{kj} \in \mathbb{C}^{M \times N},$$thanks to [\[I\<\>formula\]](#I<>formula){reference-type="eqref" reference="I<>formula"} and [\[InvForTorus\]](#InvForTorus){reference-type="eqref" reference="InvForTorus"}.
Plugging formula [\[\<\>kjmsze\]](#<>kjmsze){reference-type="eqref" reference="<>kjmsze"} into [\[PoiUMsze\]](#PoiUMsze){reference-type="eqref" reference="PoiUMsze"}, we deduce that $$\label{UtzMszerl} \underline{U}(t, z) = \mathbf{I} \left((\mathrm{id} - z e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}}e^{it \mathbf{K}^{(\mathbf{r})}_{U_0}\mathbf{K}^{(\mathbf{l})}_{U_0}} \mathbf{S}^* )^{-1} e^{-it \mathbf{H}^{(\mathbf{r})}_{U_0}\mathbf{H}^{(\mathbf{l})}_{U_0}} (U_0) \right),$$by [\[I\<\>formula\]](#I<>formula){reference-type="eqref" reference="I<>formula"} again. Similarly, $U_0 =\mathbf{H}^{(\mathbf{l})}_{U_0} (\mathbb{I}_M) \in \mathscr{V}:=\mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U_0} = \mathrm{Im}\mathbf{H}^{(\mathbf{l})}_{U_0}\mathbf{H}^{(\mathbf{r})}_{U_0}$, which is an $\mathbf{R}$-dimensional subspace of $L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$ such that $\mathscr{W}(t)^* \mathbf{S}^*\mathscr{W}(t)\big|_{\mathscr{V}} = e^{it \mathbf{K}^{(\mathbf{l})}_{U(0)}\mathbf{K}^{(\mathbf{r})}_{U(0)}} \mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{l})}_{U(0)}\mathbf{H}^{(\mathbf{r})}_{U(0)}}\big|_{\mathscr{V}} : \mathscr{V} \to \mathscr{V}$ by [\[W\*S\*WHH\]](#W*S*WHH){reference-type="eqref" reference="W*S*WHH"}; hence $(\mathrm{id} - z \mathscr{W}(t)^* \mathbf{S}^*\mathscr{W}(t) )^{-1}(U_0) = ( \mathrm{id} - z e^{it \mathbf{K}^{(\mathbf{l})}_{U_0}\mathbf{K}^{(\mathbf{r})}_{U_0}} \mathbf{S}^* e^{-it \mathbf{H}^{(\mathbf{l})}_{U_0}\mathbf{H}^{(\mathbf{r})}_{U_0}})^{-1}(U_0) \in \mathscr{V}$.
Following the previous steps, we substitute $\mathscr{W}(t)$ for $\mathbf{W}(t)$ in [\[\<\>kjmsze\]](#<>kjmsze){reference-type="eqref" reference="<>kjmsze"} and obtain that $$\label{UtzMszelr} \underline{U}(t, z) = \mathbf{I} \left((\mathrm{id} - z e^{-it \mathbf{H}^{(\mathbf{l})}_{U_0}\mathbf{H}^{(\mathbf{r})}_{U_0}}e^{it \mathbf{K}^{(\mathbf{l})}_{U_0}\mathbf{K}^{(\mathbf{r})}_{U_0}} \mathbf{S}^* )^{-1} e^{-it \mathbf{H}^{(\mathbf{l})}_{U_0}\mathbf{H}^{(\mathbf{r})}_{U_0}} (U_0) \right).$$Expand $\underline{U}(t, z)$ in [\[UtzMszerl\]](#UtzMszerl){reference-type="eqref" reference="UtzMszerl"} and [\[UtzMszelr\]](#UtzMszelr){reference-type="eqref" reference="UtzMszelr"} into power series of $z \in D(0,1)$. Then [\[mszeExpFor\]](#mszeExpFor){reference-type="eqref" reference="mszeExpFor"} holds for $U_0 \in \mathcal{M}_{\mathrm{FR}}^{M \times N}$.\ For general $U_0 \in H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$, it suffices to use the following approximation argument: the mappings $V \mapsto (e^{-it \mathbf{H}^{(\mathbf{r})}_{V }\mathbf{H}^{(\mathbf{l})}_{V }} e^{it \mathbf{K}^{(\mathbf{r})}_{V }\mathbf{K}^{(\mathbf{l})}_{V }} \mathbf{S}^*)^n e^{-it \mathbf{H}^{(\mathbf{r})}_{V }\mathbf{H}^{(\mathbf{l})}_{V }} (V )$, $V \mapsto (e^{-it \mathbf{H}^{(\mathbf{l})}_{V }\mathbf{H}^{(\mathbf{r})}_{V }} e^{it \mathbf{K}^{(\mathbf{l})}_{V }\mathbf{K}^{(\mathbf{r})}_{V }} \mathbf{S}^*)^n e^{-it \mathbf{H}^{(\mathbf{l})}_{V }\mathbf{H}^{(\mathbf{r})}_{V }} (V )$ and the flow map $U_0=U(0) \mapsto U(t)$ are all continuous from $H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$ to $L^2_+(\mathbb{T}; \mathbb{C}^{M\times N})$, $\forall (n,t) \in \mathbb{N} \times \mathbb{R}$, thanks to identity [\[TrHrHl\]](#TrHrHl){reference-type="eqref" reference="TrHrHl"} and Proposition $\ref{GWPH0.5}$. The proof is completed thanks to Lemma $\ref{denseFR}$, i.e. 
the density of $\mathcal{M}_{\mathrm{FR}}^{M \times N}$ in $H^{\frac{1}{2}}_+(\mathbb{T}; \mathbb{C}^{M\times N})$. ◻ # Acknowledgments On the one hand, the author would like to warmly thank Prof. Patrick Gérard and Prof. Sandrine Grellier for all the lectures, presentations and mini-courses, which made the author very familiar with the cubic scalar Szegő equation. On the other hand, the author is also grateful to the Georgia Institute of Technology, both for financially supporting the author's research and for assigning the courses **Math1554** and **Math1553** on *Linear Algebra* to the author, which made the author more familiar with matrix theory, leading to this matrix generalization of the cubic Szegő equation. [^1]: School of Mathematics, Georgia Institute of Technology, Atlanta, USA. Email: ruoci.sun.16\@normalesup.org
--- abstract: | Let $G_\omega$ be an edge-weighted graph whose underlying graph is $G$. In this paper, we enlarge the class of Cohen-Macaulay edge-weighted graphs $G_\omega$ by classifying them completely when the graph $G$ has girth $5$ or greater. address: Faculty of Natural Sciences, Hong Duc University, No. 565 Quang Trung Street, Dong Ve Ward, Thanh Hoa, Vietnam author: - Truong Thi Hien title: Cohen-Macaulay edge-weighted graphs of girth $5$ or greater --- # Introduction {#introduction .unnumbered} Let $R = K[x_1,\ldots,x_d]$ be a standard graded polynomial ring over a given field $K$. Let $G$ be a simple graph with the vertex set $V(G) = \{x_1,\ldots,x_d\}$ and the edge set $E(G)$. By abuse of notation, we also use $x_ix_j$ to denote an edge $\{x_i, x_j\}$ of $G$. An *edge-weighted graph* $G_{\omega}$ (whose underlying graph is $G$) is the couple $(G,\omega)$, where $\omega$ is a function $\omega \colon E(G) \to \mathbb Z_{>0}$, which is called an *edge weight* on $G$. An edge-weighted graph $G_{\omega}$ in which every edge has the same weight is a trivial edge-weighted graph. The *weighted edge ideal* of $G_\omega$ was introduced by Paulsen and Sather-Wagstaff [@PS] and is given by $$I(G_\omega) = ((x_ix_j)^{\omega(x_ix_j)}\mid x_i x_j\in E(G)).$$ The edge-weighted graph $G_\omega$ is called *Cohen-Macaulay* if $R/I(G_\omega)$ is Cohen-Macaulay. In [@PS], the authors constructed the irreducible decomposition of $I(G_\omega)$ and classified Cohen-Macaulay edge-weighted graphs $G_\omega$ where the underlying graph $G$ is a tree or a cycle. Subsequently, Fakhari, Shibata, Terai and Yassemi classified Cohen-Macaulay edge-weighted graphs $G_\omega$ when $G$ is a very well-covered graph (see [@SSTY]). It is worth mentioning that the problem of classifying sequentially Cohen-Macaulay edge-weighted graphs is studied in [@MDV], and that of classifying Cohen-Macaulay vertex-weighted oriented graphs is studied in [@DT; @HLMRV; @PRT; @PRV]. 
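As a quick illustration (not taken from the papers cited above), the generators of $I(G_\omega)$ can be listed mechanically from the weight function. The following Python sketch does this for a small weighted path; the vertex names and weights are invented for the example.

```python
# Sketch: list the monomial generators (x_i x_j)^{w(x_i x_j)} of the
# weighted edge ideal I(G_w) of a small edge-weighted graph.
def weighted_edge_ideal_generators(weights):
    """weights: dict mapping an edge (u, v) to its positive integer weight."""
    gens = []
    for (u, v), w in sorted(weights.items()):
        # An edge of weight w contributes the generator (uv)^w.
        gens.append(f"({u}*{v})^{w}" if w > 1 else f"{u}*{v}")
    return gens

# A path x1 - x2 - x3 with weights 2 and 1 on its edges.
weights = {("x1", "x2"): 2, ("x2", "x3"): 1}
print(weighted_edge_ideal_generators(weights))  # ['(x1*x2)^2', 'x2*x3']
```

When every weight equals $1$ the output is exactly the generating set of the usual edge ideal $I(G)$.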
In this paper, we study Cohen-Macaulay properties of the edge-weighted graphs $G_{\omega}$. More specifically, we classify Cohen-Macaulay edge-weighted graphs $G_{\omega}$ when $G$ has girth at least $5$. Recall that the *girth* of a graph $G$, denoted by $\mathop{\mathrm{girth}}(G)$, is the length of a shortest cycle contained in it. If a graph contains no cycle, its girth is defined to be infinite. The main result of the paper is the following theorem. **Theorem**. *Let $G$ be a graph of girth at least $5$ and let $\omega$ be an edge weight on $G$. Then, the following conditions are equivalent:* 1. $G_\omega$ is Cohen-Macaulay. 2. $G$ is Cohen-Macaulay and $G_\omega$ is unmixed. 3. $G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$ and the edge weight $\omega$ on $G$ satisfies: 1. The weight of any pendant edge in $G$ is greater than or equal to the weight of every edge adjacent to it. 2. Every basic $5$-cycle $C$ of $G$ has a balanced vertex adjacent to two vertices on $C$ of degree $2$. 3. If a vertex $x$ is on a basic $5$-cycle $C$ with $\deg_G(x)\geqslant 3$ and $N_C(x) = \{y,v\}$, then $\min\{\omega(xy),\omega(xv)\} \geqslant \max\{\omega(xw) \mid w \in N_G(x)\setminus\{y,v\}\}$. To understand the above theorem clearly, we first recall some definitions and terminology. An edge-weighted graph $G_\omega$ is called *unmixed* if the quotient ring $R/I(G_\omega)$ is unmixed. An edge of $G$ is called a *pendant edge* if one of its vertices is a leaf. A *basic $5$-cycle* is a cycle of length $5$ that contains no two adjacent vertices of degree three or more in $G$. For a given graph $G$, let $C(G)$ and $P(G)$ denote the sets of all vertices that belong to basic $5$-cycles and to pendant edges, respectively. $G$ is said to be *in the class $\mathcal{PC}$* if 1. $V(G)$ can be partitioned into $V(G)=P(G) \cup C(G)$; and 2. the pendant edges form a perfect matching of $G[P(G)]$. Let $C$ be an induced $5$-cycle of $G$ with $E(C) = \{xy,yz,zu,uv,vx\}$. 
We say that the vertex $x$ is a *balanced vertex* on $C$ (with respect to $\omega$) if 1. $\omega(xy) = \omega(xv)$; and 2. $\omega(xy) \leqslant \omega(yz)\geqslant \omega(zu) \leqslant \omega(uv) \geqslant \omega(xv)$. This definition is motivated by [@PS Theorem 4.4], which says that $C_\omega$ is Cohen-Macaulay if and only if $C$ has a balanced vertex. In Figure [1](#PSW){reference-type="ref" reference="PSW"}, where the edge weights are indicated on the edges, $x$ is a balanced vertex on $C$ if the following inequalities hold: $m \leqslant p \geqslant q \leqslant r\geqslant m$. ![The balanced vertex $x$ on $C$.](PSW-graph.pdf "fig:"){#PSW}\ Let us explain the idea of the proof of Theorem [Theorem 19](#main-theorem){reference-type="ref" reference="main-theorem"}. We will prove this theorem by the following sequence of implications: $(1) \Rightarrow (2) \Rightarrow (3) \Rightarrow (1)$. By [@HTT Theorem 2.6], if $G_\omega$ is Cohen-Macaulay, then $I(G) = \sqrt{I(G_\omega)}$ is also Cohen-Macaulay; thus we get $(1) \Rightarrow (2)$. To prove $(2) \Rightarrow (3)$, we use the fact that a Cohen-Macaulay graph $G$ of girth at least $5$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$. In addition, we introduce the notion of a weighted vertex cover with minimal support to characterize the associated primes of $I(G_\omega)$. Together with the structure of $G$, this allows us to prove the combinatorial properties $(a)$-$(c)$. It remains to show that $(3) \Rightarrow (1)$. If $G_\omega$ satisfies condition $(3)$, we prove that $G_\omega$ is Cohen-Macaulay by induction on the number of basic $5$-cycles of $G$. Indeed, assume $x$ is a balanced vertex on some basic $5$-cycle $C$ as indicated in property $(b)$, and let $m=\omega(xy)$ with $xy\in E(C)$. We show that $(I(G_\omega),x^m)$ and $I(G_\omega)\colon x^m$ are the weighted edge ideals of some edge-weighted graphs. 
Furthermore, these edge-weighted graphs also satisfy condition $(3)$ and have fewer basic $5$-cycles than $G$, so they are Cohen-Macaulay by induction. Therefore, the conclusion follows. The paper consists of two sections. In Section $1$, we set up some basic notation and terminology from graph theory, the irreducible decomposition of the weighted edge ideal of an edge-weighted graph, and Cohen-Macaulay monomial ideals and their colon ideals. In Section $2$, we classify Cohen-Macaulay edge-weighted graphs of girth at least $5$ by giving some characteristics of the weight $\omega$ on pendant edges and basic $5$-cycles of $G$. # Preliminaries We begin this section with some observations from graph theory. Let $G = (V(G), E(G))$ be a simple graph. Note that two vertices of $G$ are adjacent if they are connected by an edge; two edges of $G$ are adjacent if they share a common vertex. **Definition 1**. A set of vertices is called a *vertex cover* of $G$ if for every edge $uv \in E(G)$, at least one of $u$ and $v$ belongs to the set. A *minimal vertex cover* is a vertex cover no proper subset of which is a vertex cover. **Definition 2**. A set of pairwise non-adjacent vertices is called an *independent set*. A *maximal independent set* is an independent set that is not properly contained in any other independent set of $G$. An independent set is called *maximum* if it is of the largest cardinality. **Remark.** Obviously, the complement of a vertex cover is an independent set, and vice versa. **Definition 3**. A subset $P$ of edges of $G$ is a *matching* if no two edges in $P$ are adjacent to each other. A matching $P$ of $G$ is *perfect* if every vertex of $G$ is incident to some edge in $P$, i.e., when $|V(G)| = 2|P|$. If $X \subseteq V(G)$, $G[X]$ is the induced subgraph of $G$ on $X$. By $G\setminus X$, we mean the induced subgraph $G[V(G) \setminus X]$. 
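The duality in the Remark above is easy to verify computationally. The following Python sketch (a toy check, not part of the paper) tests Definitions 1-2 and the cover/independent-set complement duality exhaustively on a $5$-cycle.

```python
# Sketch: vertex covers vs. independent sets (Definitions 1-2) and the
# complement duality from the Remark, verified exhaustively on C_5.
from itertools import combinations

def is_vertex_cover(cover, edges):
    # Every edge must have at least one endpoint in the set.
    return all(u in cover or v in cover for (u, v) in edges)

def is_independent(S, edges):
    # No edge may have both endpoints in the set.
    return all(not (u in S and v in S) for (u, v) in edges)

V = {1, 2, 3, 4, 5}
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]

# Duality: C is a vertex cover iff V \ C is an independent set.
for k in range(len(V) + 1):
    for C in combinations(V, k):
        assert is_vertex_cover(set(C), E) == is_independent(V - set(C), E)
print("duality verified on C_5")
```

For $C_5$ every maximal independent set has size $2$, so every minimal vertex cover has size $3$, matching the well-coveredness used later.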
The *neighbors* of a vertex $v$ of $G$ are the vertices that are adjacent to $v$ in $G$. The *(open) neighborhood* of a vertex $v$ is the set of its neighbors, i.e., $N_G(v) = \{w \mid w \in V(G) \text{ and } vw\in E(G)\}$. The *closed neighborhood* of $v$ consists of all the neighbors of $v$ together with $v$ itself, i.e., $N_G[v] = N_G(v) \cup \{v\}$; if there is no ambiguity on $G$, we use $N(v)$ and $N[v]$, respectively. We also use the symbol $N_G[X] = X \cup \{v\mid vu \in E(G) \text{ for some } u\in X\}$ to denote the closed neighborhood of $X$ in $G$. The *degree* of $v$ in $G$ is the number of its neighbors and is denoted by $\deg_G(v)$; in particular, $\deg_G(v) = |N_G(v)|$. Note that $v$ is called a leaf if $\deg_G(v) = 1$. We next introduce the class of vertex decomposable graphs (see e.g. [@W]). For a vertex $v$ of $G$, we write $G\setminus v = G\setminus\{v\}$ and $G_v=G\setminus N_G[v]$. **Definition 4**. A graph $G$ is called *vertex decomposable* if it is a totally disconnected graph (i.e. with no edges) or there is a vertex $v$ in $G$ such that 1. $G\setminus v$ and $G_v$ are both vertex decomposable, and 2. for every independent set $S$ in $G_v$, there is some $u \in N_G(v)$ such that $S \cup \{u\}$ is independent in $G\setminus v$. A vertex $v$ which satisfies condition $(2)$ is called a *shedding vertex* of $G$. Recall that a graph $G$ is well-covered (see [@P]) if every maximal independent set of $G$ has the same size, namely $\alpha(G)$. Thus, if $G$ is well-covered and $v$ is a shedding vertex of $G$, then $G\setminus v$ is also well-covered with $\alpha(G\setminus v) =\alpha(G)$. Now, we consider some results on the irreducible decomposition of the weighted edge ideal of an edge-weighted graph, and on Cohen-Macaulay monomial ideals and their colon ideals, which we shall need in the proof of the main theorem. It is widely known that (see e.g. 
[@Vi Proposition 6.1.16]) $$\mathop{\mathrm{Ass}}(R/I(G)) = \{(v\mid v \in C) \mid C \text{ is a minimal vertex cover of } G\}.$$ In particular, $\dim R/I(G) = \alpha(G)$ whenever $V(G)=\{x_1,\ldots,x_d\}$. The graph $G$ is called a Cohen-Macaulay graph if the ring $R/I(G)$ is Cohen-Macaulay. In consequence, $G$ is well-covered if it is Cohen-Macaulay. Let $G_\omega$ be an edge-weighted graph. We know that the usual edge ideal of $G$, denoted by $I(G)$, is a special case of the weighted edge ideal $I(G_\omega)$ when the weight $\omega$ on $G$ is the trivial one, i.e., $\omega(e)=1$ for all $e\in E(G)$. Since $I(G)=\sqrt{I(G_\omega)}$, by [@HTT Theorem 2.6], $G$ is Cohen-Macaulay whenever $G_\omega$ is. Therefore, if we know the structure of the underlying Cohen-Macaulay graph together with the edge weights on it, we can get a picture of the Cohen-Macaulayness of an edge-weighted graph. In this paper, we consider graphs of girth at least $5$, so the following result plays a crucial role in the paper (see [@BC Theorem 20] or [@HMT Theorem 2.4]). **Lemma 5**. *Let $G$ be a connected graph of girth at least $5$. Then, the following statements are equivalent:* 1. *$G$ is well-covered and vertex decomposable;* 2. *$G$ is Cohen-Macaulay;* 3. *$G$ is either a vertex or in the class $\mathcal{PC}$.* We next describe the associated primes of $R/I(G_\omega)$. **Definition 6**. Let $G_\omega$ be an edge-weighted graph, let $C$ be a vertex cover of $G$, and let $\delta \colon C \to \mathbb Z_{>0}$ be a function. The pair $(C,\delta)$ is called a *weighted vertex cover* of $G_{\omega}$ if for every $e = uv\in E(G)$ we have either $u\in C$ and $\delta(u) \leqslant \omega(e)$ or $v\in C$ and $\delta(v)\leqslant \omega(e)$. Observe that a pair $(C,\delta)$, where $C\subseteq V(G)$ and $\delta\colon C\to \mathbb Z_{>0}$, is a weighted vertex cover of $G_\omega$ if and only if $P(C,\delta) =(v^{\delta(v)} \mid v\in C) \supseteq I(G_\omega)$. 
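The condition in Definition 6 is directly checkable. The following Python sketch tests it on a weighted $5$-cycle; the vertex labels follow the paper's notation, but the specific weights and the two candidate pairs $(C,\delta)$ are invented for illustration.

```python
# Sketch of Definition 6: (C, delta) is a weighted vertex cover of G_w if
# every edge e = uv satisfies  (u in C and delta(u) <= w(e))  or
# (v in C and delta(v) <= w(e)).
def is_weighted_vertex_cover(C, delta, weights):
    return all(
        (u in C and delta[u] <= w) or (v in C and delta[v] <= w)
        for (u, v), w in weights.items()
    )

# A 5-cycle x-y-z-u-v with invented weights.
weights = {("x", "y"): 2, ("y", "z"): 3, ("z", "u"): 1,
           ("u", "v"): 3, ("v", "x"): 2}

# delta small enough on every covered edge: a weighted vertex cover.
print(is_weighted_vertex_cover({"y", "u", "x"}, {"y": 2, "u": 1, "x": 2}, weights))
# delta(y) = 4 exceeds w(yz) = 3 and z is not in C: not a cover.
print(is_weighted_vertex_cover({"y", "u", "x"}, {"y": 4, "u": 1, "x": 2}, weights))
```

The first call prints `True` and the second `False`, matching the equivalent criterion $P(C,\delta) \supseteq I(G_\omega)$.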
Now, we give a definition of an ordering of weighted vertex covers. **Definition 7**. Let $G_\omega$ be an edge-weighted graph. For two weighted vertex covers $(C,\delta)$ and $(C',\delta')$ of $G_\omega$, we say that $(C, \delta) \leqslant (C',\delta')$ if $C\subseteq C'$ and $\delta(v) \geqslant \delta'(v)$ for every $v\in C$. In the usual sense, $(C,\delta)$ is minimal if it is minimal with respect to this order. **Lemma 8**. *[@PS Theorem 3.5] $I(G_\omega)$ can be represented as $$I(G_\omega)=\bigcap_{(C,\delta) \text{ is minimal }} P(C,\delta)$$ and the intersection is irredundant.* This lemma implies that if $(C,\delta)$ is a minimal weighted vertex cover of $G_\omega$, then $(v\mid v\in C) \in \mathop{\mathrm{Ass}}(R/I(G_\omega))$. We say that a weighted vertex cover $(C,\delta)$ of $G_\omega$ is *minimal support* if there is no proper subset $C'$ of $C$ such that $(C',\delta)$ is a weighted vertex cover of $G_\omega$. **Lemma 9**. *If a weighted vertex cover $(C,\delta)$ of $G_\omega$ is minimal support, then $$(v \mid v \in C) \in \mathop{\mathrm{Ass}}(R/I(G_\omega)).$$* *Proof.* Since $I(G_\omega)\subseteq P(C,\delta)$, by Lemma [Lemma 8](#ID){reference-type="ref" reference="ID"} we have $P(C',\delta') \subseteq P(C,\delta)$ for some minimal weighted vertex cover $(C',\delta')$. In particular, $(C',\delta') \leqslant (C,\delta)$. This implies that $(C', \delta)$ is a weighted vertex cover of $G_\omega$. Since $(C,\delta)$ is minimal support, we have $C= C'$. Therefore, $(v\mid v\in C)\in \mathop{\mathrm{Ass}}(R/I(G_\omega))$, as required. ◻ A monomial ideal $I$ is *unmixed* if all its associated primes have the same height. It is well known that if $R/I$ is Cohen-Macaulay, then $I$ is unmixed. Since $I(G)=\sqrt{I(G_\omega)}$, the graph $G$ is well-covered if $G_\omega$ is unmixed. In this case, if $(C,\delta)$ is a minimal weighted vertex cover of $G_\omega$, then $C$ is a minimal vertex cover of $G$. 
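On tiny examples, the minimal-support covers of Lemma 9, and hence the height-versus-big-height comparison used repeatedly in the next section, can be found by brute force. The following Python sketch enumerates the supports of minimal-support weighted vertex covers of an invented weighted path; it searches $\delta$-values only up to the maximum weight, which suffices because a vertex with a larger $\delta$-value covers no edge and could be dropped from the support.

```python
# Brute-force sketch: supports of minimal-support weighted vertex covers
# of a small edge-weighted graph (cf. Definition 6 and Lemma 9).
from itertools import combinations, product

def is_wvc(C, delta, weights):
    return all((u in C and delta[u] <= w) or (v in C and delta[v] <= w)
               for (u, v), w in weights.items())

def minimal_support_covers(vertices, weights):
    max_w = max(weights.values())
    found = set()
    for k in range(1, len(vertices) + 1):
        for C in combinations(sorted(vertices), k):
            for vals in product(range(1, max_w + 1), repeat=k):
                delta = dict(zip(C, vals))
                if not is_wvc(set(C), delta, weights):
                    continue
                # Minimal support: no proper subset of C still covers.
                if all(not is_wvc(set(C) - {w_}, delta, weights) for w_ in C):
                    found.add(frozenset(C))
    return found

# Path x1 - x2 - x3 with weights 1 and 2: I(G_w) = (x1*x2, (x2*x3)^2).
weights = {("x1", "x2"): 1, ("x2", "x3"): 2}
supports = minimal_support_covers({"x1", "x2", "x3"}, weights)
print(sorted(sorted(s) for s in supports))
# [['x1', 'x2'], ['x1', 'x3'], ['x2']]
```

Supports of sizes $1$ and $2$ both occur, so by Lemma 9 the associated primes have different heights and this $I(G_\omega)$ is not unmixed; note that $\{x_1, x_2\}$ appears with $\delta(x_1)=1$, $\delta(x_2)=2$, even though it is not a minimal vertex cover of the underlying path.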
We now recall some techniques to study the Cohen-Macaulayness of monomial ideals, as mentioned in [@DT]. **Lemma 10**. *[@DT Lemma 1.4] [\[CM-Q\]]{#CM-Q label="CM-Q"} Let $I$ be a monomial ideal and $f$ a monomial not in $I$. We have* 1. *If $I$ is Cohen-Macaulay, then $I\colon f$ is Cohen-Macaulay.* 2. *If $I\colon f$ and $(I,f)$ are Cohen-Macaulay with $\dim R/(I\colon f) = \dim R/(I,f)$, then $I$ is Cohen-Macaulay.* **Lemma 11**. *[@DT Lemma 1.5] [\[dim\]]{#dim label="dim"} Let $G$ be a well-covered graph. If $v$ is a shedding vertex of $G$, then $$\dim R/I(G) = \dim R/(I(G\setminus v),v)=\dim R/(I(G)\colon v).$$* In the sequel, we need the following lemma obtained from [@DT]. **Lemma 12**. *Let $G$ be a graph in the class $\mathcal{PC}$. Let $C$ be a basic $5$-cycle and $x$ a vertex in $C$ with degree at least $3$. Assume that $E(C) = \{xy,yz,zu,uv,vx\}$ and $N(x) = \{y,v,y_1,\ldots,y_k\}$. Then, there is an independent set of $G$ with $k$ vertices, say $\{z_1,\ldots,z_k\}$, such that* 1. *$G[y_1,\ldots,y_k,z_1,\ldots,z_k]$ consists of $k$ disjoint edges $y_1z_1, \ldots, y_kz_k$.* 2. *$N[z_1,\ldots,z_k] \cap V(C) = \emptyset$.* *Proof.* Follows from Part $(1)$ of [@DT Lemma 2.2]. ◻ # Cohen-Macaulay edge-weighted graphs In this section, we classify the Cohen-Macaulay edge-weighted graphs $G_\omega$ of girth at least $5$. Since $G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$, so that $V(G) = P(G) \cup C(G)$, it is natural to study the weight $\omega$ on the pendant edges and the basic $5$-cycles of $G$. To investigate the weight on pendant edges, we consider the following lemma. **Lemma 13**. *Let $G_\omega$ be an unmixed edge-weighted graph. Assume that $(xy)^{\omega(xy)}$ and $(xz)^{\omega(xz)}$ are among the minimal generators of $I(G_\omega) \colon f$ for some monomial $f\notin I(G_\omega)$. Assume that $x^k \notin I(G_\omega) \colon f$ for every $k$. 
If $y$ does not appear in any minimal generator of $I(G_\omega)\colon f$ except for $(xy)^{\omega(xy)}$, then $\omega(xy) \geqslant \omega(xz)$.* *Proof.* Follows from [@DT Lemma 2.1]. ◻ We now move on to investigate the weights on basic $5$-cycles. **Lemma 14**. *Let $G_\omega$ be an unmixed edge-weighted graph where $G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$. Assume that $C$ is a basic $5$-cycle of $G$ such that $E(C) = \{xy,yz,zu,uv,vx\}$ and $\deg(x) >2$. Then,* 1. *$\omega(xw) \leqslant \min\{\omega(xy), \omega(xv)\}$ for all $w \in N(x) \setminus \{y,v\}$.* 2. *$\omega(zu) \leqslant \min \{\omega(zy), \omega(uv)\}$.* *Proof.* Let $m = \omega(xy), n = \omega(xv), p = \omega(yz), q = \omega(zu), r = \omega(uv)$. Assume that $$N(x) = \{y,v,y_1,\ldots,y_k\}, \text{ where } k \geqslant 1,$$ and $m_i = \omega(xy_i)$ for $i=1,\ldots,k$, ordered so that $m_1\geqslant m_2\geqslant \cdots\geqslant m_k$. $(1)$ Assume on the contrary that $m_1 > \min\{m,n\}$. We may assume that $m_1 > m$. By Lemma [Lemma 12](#CM-L02){reference-type="ref" reference="CM-L02"}, there is an independent set $\{z_1, \ldots,z_k\}$ of $G$ such that the graph $G[y_1,\ldots,y_k,z_1,\ldots,z_k]$ consists of the disjoint edges $y_1z_1,\ldots, y_kz_k$ and $N[z_1,\ldots,z_k] \cap V(C) = \emptyset$. ![The structure of $G$.](CMC-P2.pdf "fig:"){#structureG}\ Let $S_1 = \{z_2,\ldots,z_k\}$. As $\mathop{\mathrm{girth}}(G) \geqslant 5$, we deduce that $\{y_1, y, u\} \cup S_1$ is an independent set of $G$. Now extend this set to a maximal independent set of $G$, say $S$. Then, $C^* = V(G) \setminus S$ is a minimal vertex cover of $G$. In particular, $\mathop{\mathrm{ht}}(I(G_\omega)) = |C^*|$. Let $\delta \colon C^* \to \mathbb Z_{>0}$ be such that $(C^*,\delta)$ is a weighted vertex cover of $G_\omega$. Note that $z,v,x, y_2, \ldots,y_k \in C^*$ and $y_1, y,u \notin C^*$. 
Let $C' = C^*\cup \{y\}$ and let $\delta' \colon C' \to \mathbb Z_{>0}$ be defined by $$\delta'(w) = \begin{cases} m_1 & \text{ if } w = x,\\ m & \text{ if } w = y,\\ \min\{n, \delta(v)\} & \text{ if } w = v,\\ \min\{m_i, \delta(y_i)\} & \text{ if } w = y_i, \text{ for } i = 2,\ldots,k,\\ \delta(w) & \text{ otherwise}. \end{cases}$$ We now prove that $(C', \delta')$ is a weighted vertex cover of $G_\omega$. Indeed, since $C' = C^*\cup \{y\}$, $(C^*,\delta)$ is a weighted vertex cover of $G_\omega$ and by the definition of the function $\delta'$, it suffices to check the condition of a weighted vertex cover of $G_\omega$ for the set of edges $$\{xy, xv, xy_1, xy_2, \ldots, xy_k, yz, vu\}.$$ If $e = xy$, then $y \in C'$ and $\delta'(y) = m = \omega(xy)$. If $e = xv$, then $v \in C'$ and $\delta'(v) = \min\{n, \delta(v)\} \leqslant n = \omega(xv)$. If $e = xy_1$, then $x \in C'$ and $\delta'(x) = m_1 = \omega(xy_1)$. If $e = xy_i$, then $y_i \in C'$ and $\delta'(y_i) = \min\{m_i, \delta(y_i)\} \leqslant m_i = \omega(xy_i)$, for $i = 2, \ldots, k$. If $e = yz$, then by considering the weighted vertex cover $(C^*,\delta)$ we have $\delta(z) \leqslant \omega(yz)$, since $y \notin C^*$ and $z \in C^*$. Thus, for $(C', \delta')$, we have $z \in C'$ and $\delta'(z) = \delta(z) \leqslant \omega(yz)$. If $e = vu$, then, similarly to the previous case, we have $v \in C'$ and $\delta'(v) = \min\{n, \delta(v)\} \leqslant \delta(v) \leqslant \omega(uv)$. Therefore, $(C', \delta')$ is a weighted vertex cover of $G_\omega$, as desired. Next, we claim that it is minimal support. Indeed, assume on the contrary that there is a vertex $w\in C'$ such that $(C'\setminus \{w\},\delta')$ is still a weighted vertex cover of $G_\omega$. We first consider the case $w \in \{y,x,v,z\}$. Since $u,y_1\notin C'$, it follows that $w$ must be $y$. But in this case, looking at the edge $e= xy$, we would have $\delta'(x) = m_1 \leqslant \omega(xy)=m$, a contradiction. Thus, $w\notin \{y,x,v,z\}$. 
Since $|C'\setminus \{w\}| = |C^*|$, $C^*$ is a minimal vertex cover of $G$, and $G$ is well-covered, it follows that $C' \setminus \{w\}$ is a minimal vertex cover of $G$. Consequently, $S' = V(G)\setminus (C'\setminus \{w\})$ is a maximal independent set of $G$. On the other hand, since $y,x,v,z\notin S'$ and $N_G(y) = \{x,z\}$, it follows that $\{y\} \cup S'$ is an independent set of $G$, a contradiction. Thus, $(C',\delta')$ is minimal support, as claimed. Together with Lemma [Lemma 9](#cover){reference-type="ref" reference="cover"}, this claim yields $\mathop{\mathrm{bight}}(I(G_\omega)) \geqslant |C'|=|C^*|+1 = \mathop{\mathrm{ht}}(I(G_\omega)) + 1$. Thus, $I(G_\omega)$ is not unmixed, a contradiction, and hence $m_1 \leqslant m$. In the same way, we get $m_1 \leqslant n$, and $(1)$ follows. $(2)$ From Part $(1)$ and our assumption, we have $m_k \leqslant \min\{m_1,\ldots,m_k,m,n\}$, and hence $$I(G_\omega) \colon y_k^{m_k} = (y^pz^p, z^qu^q, u^rv^r, x^{m_k}, \ldots).$$ Since $N[y_k] \cap V(C) = \{x\}$ and $\deg_G(y) = \deg_G(v) = 2$, we deduce that the four monomials in the representation above are among the minimal generators of $I(G_\omega) \colon y_k^{m_k}$, and that the remaining minimal generators of $I(G_\omega) \colon y_k^{m_k}$ involve neither $y$ nor $v$. Note that $y_kz, y_ku\notin E(G)$, so $z^i, u^i \notin I(G_\omega) \colon y_k^{m_k}$ for every $i$. By Lemma [Lemma 13](#CM-L01){reference-type="ref" reference="CM-L01"}, we obtain $q \leqslant \min \{p,r\}$, as required. ◻ In the following lemmas, we use the setting illustrated in Figure [2](#structureG){reference-type="ref" reference="structureG"}. Let $G_\omega$ be an unmixed edge-weighted graph where $G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$, and let $C$ be a basic $5$-cycle of $G$. Assume that $E(C) = \{xy,yz,zu,uv,vx\}$ and $\deg(x) > 2$. 
Set $$m = \omega(xy), p = \omega(yz), q=\omega(zu), r =\omega(uv), \text{ and } n = \omega(vx).$$ The aim of these lemmas is to show that each basic $5$-cycle of $G$ has a balanced vertex. **Lemma 15**. *If $q < r$, then $n\leqslant \min\{r,m\}$.* *Proof.* Since $q \leqslant \min\{p,r\}$ by Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"}, we have $$I(G_\omega)\colon u^q = (x^my^m, u^{r-q}v^r, v^nx^n, \ldots).$$ Since $q < r$, by using the same argument as in the proof of Part $(2)$ of Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"} above, we get $m\geqslant n$. We next prove that $n \leqslant r$. Assume on the contrary that $n > r$. Let $S$ be a maximal independent set of $G$ containing $x$ and $u$, and let $C^* = V(G)\setminus S$. Then, $C^*$ is a minimal vertex cover of $G$. Let $\delta \colon C^* \to \mathbb Z_{>0}$ be a function such that $(C^*,\delta)$ is a minimal weighted vertex cover of $G_\omega$. Note that $x, u \notin C^*$ and $y, z, v \in C^*$. Let $C' = C^* \cup\{u\}$ and let $\delta' \colon C'\to \mathbb Z_{>0}$ be given by $$\delta'(w) = \begin{cases} r & \text{ if } w = u,\\ n & \text{ if } w = v,\\ \delta(w) & \text{ otherwise}. \end{cases}$$ Then, $(C', \delta')$ is a weighted vertex cover of $G_\omega$. In fact, it suffices to check the condition of a weighted vertex cover of $G_\omega$ for the set of edges $\{zu, uv, xv\}$. If $e = zu$, then $z \in C'$ and $\delta'(z) = \delta (z) \leqslant \omega(zu)$ (the last inequality holds by looking at the weighted vertex cover $(C^*, \delta)$). If $e = uv$, then $u \in C'$ and $\delta'(u) = r = \omega(uv)$. If $e = xv$, then $v \in C'$ and $\delta'(v) = n = \omega(xv)$. Next, we prove that $(C', \delta')$ is a minimal support weighted vertex cover of $G_\omega$. Assume on the contrary that there is a vertex $w \in C'$ such that $(C'\setminus \{w\},\delta')$ is still a weighted vertex cover of $G_\omega$. Since $x \notin C'$, the vertex $w$ can be neither $y$ nor $v$. 
We consider the following cases. If $w = u$, then $\delta'(v) = n > r = \omega(uv)$, a contradiction. If $w = z$, then $\delta'(u) = r > q = \omega(zu)$, a contradiction. In the other cases, since $|C'\setminus \{w\}| = |C^*|$, $C^*$ is a minimal vertex cover of $G$, and $G$ is well-covered, the set $C' \setminus \{w\}$ is a minimal vertex cover of $G$. Thus, $S' = V(G)\setminus (C'\setminus \{w\})$ is a maximal independent set of $G$. On the other hand, since $y,v,z,u\notin S'$ and $N_G(u) = \{v,z\}$, it follows that $\{u\} \cup S'$ is an independent set of $G$, a contradiction. Thus, $(C',\delta')$ is minimal support, as claimed. Since $|C'| = |C^*|+1$, we have $\mathop{\mathrm{bight}}(I(G_\omega)) \geqslant |C^*|+1 = \mathop{\mathrm{ht}}(I(G_\omega))+1$. This contradicts the fact that $I(G_\omega)$ is unmixed. Therefore, $n \leqslant r$, as required. ◻ **Lemma 16**. *If $p=q<r$ and $n < m$, then $p\leqslant m$.* *Proof.* Assume on the contrary that $m < p$. Let $S$ be a maximal independent set of $G$ containing $x$ and $z$, and let $C^* = V(G)\setminus S$, so that $C^*$ is a minimal vertex cover of $G$. Let $\delta \colon C^* \to \mathbb Z_{>0}$ be a function such that $(C^*,\delta)$ is a minimal weighted vertex cover of $G_\omega$. Let $C' = C^* \cup\{x\}$ and let $\delta' \colon C'\to \mathbb Z_{>0}$ be given by $$\delta'(w) = \begin{cases} p & \text{ if } w = y,\\ m & \text{ if } w = x,\\ \delta(w) & \text{ otherwise}. \end{cases}$$ By the same argument as in the lemma above, we can verify that $(C', \delta')$ is a minimal support weighted vertex cover of $G_\omega$. Since $|C'| = |C^*|+1$, we have $\mathop{\mathrm{bight}}(I(G_\omega)) \geqslant |C^*|+1 = \mathop{\mathrm{ht}}(I(G_\omega))+1$. This contradicts the fact that $I(G_\omega)$ is unmixed. Therefore, $p \leqslant m$, as required. ◻ **Lemma 17**. *$C$ has a balanced vertex in the set $\{x,z,u\}$.* *Proof.* By Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"} we have $q \leqslant \min\{p,r\}$. 
If $q < \min\{p,r\}$, then $n=m$, $n \leqslant r$ and $m\leqslant p$ by Lemma [Lemma 15](#CM-L04){reference-type="ref" reference="CM-L04"}. Thus, $x$ is a balanced vertex, and it remains to prove the lemma in the case $q = \min\{p,r\}$. We may assume that $p=q \leqslant r$. We now consider two possible cases: *Case $1$*: $p = q = r$. By symmetry, we may assume that $m \leqslant n$. We first claim that $\min\{m,n\} = m \leqslant p$. Indeed, assume on the contrary that $m > p$. Let $S$ be a maximal independent set of $G$ containing $x$ and $u$, and let $C^* = V(G)\setminus S$. Then, $C^*$ is a minimal vertex cover of $G$. Let $\delta \colon C^* \to \mathbb Z_{>0}$ be a function such that $(C^*,\delta)$ is a minimal weighted vertex cover of $G_\omega$. Let $C' = C^* \cup\{u\}$ and let $\delta' \colon C'\to \mathbb Z_{>0}$ be given by $$\delta'(w) = \begin{cases} m & \text{ if } w = y,\\ \min\{\delta(z),p\} & \text{ if } w = z,\\ p & \text{ if } w = u,\\ n & \text{ if } w = v,\\ \delta(w) & \text{ otherwise}. \end{cases}$$ In the same manner as in the proof of Part $(1)$ of Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"}, $(C', \delta')$ is a weighted vertex cover of $G_\omega$. It is straightforward to verify that it is a minimal support weighted vertex cover of $G_\omega$. Since $|C'| = |C^*|+1$, we have $\mathop{\mathrm{bight}}(I(G_\omega)) \geqslant |C^*|+1 = \mathop{\mathrm{ht}}(I(G_\omega))+1$. This contradicts the fact that $I(G_\omega)$ is unmixed. Therefore, $m \leqslant p$, as claimed. If $n\geqslant p$, then $u$ is a balanced vertex on $C$. If $n < p$, assume on the contrary that $m < n$. Let $S$ be a maximal independent set of $G$ containing $x$ and $u$, and let $C^* = V(G)\setminus S$. Then, $C^*$ is a minimal vertex cover of $G$. Let $\delta \colon C^* \to \mathbb Z_{>0}$ be a function such that $(C^*,\delta)$ is a minimal weighted vertex cover of $G_\omega$. 
Let $C' = C^* \cup\{x\}$ and let $\delta' \colon C'\to \mathbb Z_{>0}$ be given by $$\delta'(w) = \begin{cases} n & \text{ if } w = x,\\ p & \text{ if } w = v,\\ \delta(w) & \text{ otherwise}. \end{cases}$$ Again, $(C', \delta')$ is a weighted vertex cover of $G_\omega$. Moreover, it is a minimal support weighted vertex cover of $G_\omega$. Since $|C'| = |C^*|+1$, we have $\mathop{\mathrm{bight}}(I(G_\omega)) \geqslant |C^*|+1 = \mathop{\mathrm{ht}}(I(G_\omega))+1$. This contradicts the fact that $I(G_\omega)$ is unmixed. Thus, $m < n$ is impossible, so $m=n$. In this case, $x$ is a balanced vertex on $C$. *Case $2$*: $p = q < r$. Then, $n \leqslant \min\{r,m\}$ by Lemma [Lemma 15](#CM-L04){reference-type="ref" reference="CM-L04"}. In particular, $n \leqslant m$. If $n < m$, then $p \leqslant m$ by Lemma [Lemma 16](#CM-L06){reference-type="ref" reference="CM-L06"}, and then $z$ is a balanced vertex on $C$. In the case $n=m$, the vertex $z$ is a balanced vertex on $C$ if $m \geqslant p$, and $x$ is a balanced vertex on $C$ if $m \leqslant p$. The proof of the lemma is complete. ◻ **Lemma 18**. *Assume further that $\deg(z) > 2$. Then, either $x$ or $z$ is a balanced vertex on $C$.* *Proof.* Since $\deg(x) > 2$ and $\deg(z) > 2$, by Lemma [Lemma 17](#CM_L07){reference-type="ref" reference="CM_L07"}, $C$ has a balanced vertex in the set $\{x,z,u, v\}$. If $v$ is a balanced vertex, then $$r = n \ \text { and } n \leqslant m \geqslant p \leqslant q \geqslant r.$$ On the other hand, since $\deg(x) > 2$ and $\deg(z) > 2$, by Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"} we obtain $$p \geqslant q \leqslant r \text{ and } m \geqslant n \leqslant r.$$ From these inequalities, we get $n=p=q=r\leqslant m$. Hence, $z$ is also a balanced vertex on $C$. In the same way, if $u$ is a balanced vertex, then $x$ is a balanced vertex on $C$ as well. Therefore, we conclude that either $x$ or $z$ is a balanced vertex on $C$. 
◻ We are now in a position to prove the main result of the paper. **Theorem 19**. *Let $G$ be a graph of girth at least $5$ and let $\omega$ be an edge weight on $G$. Then, the following conditions are equivalent:* 1. *$G_\omega$ is Cohen-Macaulay.* 2. *$G$ is Cohen-Macaulay and $G_\omega$ is unmixed.* 3. *$G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$ and the edge weight $\omega$ on $G$ satisfies:* 1. *The weight of any pendant edge in $G$ is greater than or equal to the weight of every edge adjacent to it.* 2. *Every basic $5$-cycle $C$ of $G$ has a balanced vertex adjacent to two vertices on $C$ of degree $2$.* 3. *If a vertex $x$ is on a basic $5$-cycle $C$ with $\deg_G(x)\geqslant 3$ and $N_C(x) = \{y,v\}$, then $\min\{\omega(xy),\omega(xv)\} \geqslant \max\{\omega(xw) \mid w \in N_G(x)\setminus\{y,v\}\}$.* *Proof.* $(1)\Longrightarrow (2)$ Since $G_\omega$ is Cohen-Macaulay, $G_\omega$ is unmixed. On the other hand, since $I(G)=\sqrt{I(G_\omega)}$, by [@HTT Theorem 2.6] $G$ is Cohen-Macaulay. $(2)\Longrightarrow (3)$ Since $G$ is a Cohen-Macaulay graph of girth at least $5$, by Lemma [Lemma 5](#HMT){reference-type="ref" reference="HMT"}, $G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$. We now consider the following two cases. First, if $G$ is just a $5$-cycle, we only need to prove property $(b)$, and it follows immediately from [@PS Theorem 4.4]. Second, in the remaining case, i.e., when $G$ is not a $5$-cycle, property $(a)$ is equivalent to the following statement: \"For every pendant edge $xy$ of $G$ with $y$ a leaf, $\omega(xy) \geqslant \omega(xz)$ for any $xz \in E(G)$\", and this follows from Lemma [Lemma 13](#CM-L01){reference-type="ref" reference="CM-L01"}. In addition, property $(b)$ follows from Lemma [Lemma 17](#CM_L07){reference-type="ref" reference="CM_L07"}, and property $(c)$ follows immediately from Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"}$(1)$.
$(3)\Longrightarrow (1)$ We argue by induction on the number of basic $5$-cycles of $G$. If $G$ has no basic $5$-cycle, then its pendant edges form a perfect matching in $G$. In this case, combining condition $(a)$ with [@PS Lemma 5.3], we get that $G_\omega$ is Cohen-Macaulay. Assume now that $G$ has some basic $5$-cycles. If $G$ is just a $5$-cycle, then $G_{\omega}$ is Cohen-Macaulay by [@PS Theorem 4.4], as desired. Otherwise, let $C_1,\ldots,C_r$ be the basic $5$-cycles of $G$, where $r\geqslant 1$, and let $P$ be the set of pendant edges of $G$. Assume that $E(C_1) = \{xy,yz,zu,uv,vx\}$ with $$\omega(xy) = m, \omega(yz) = p, \omega(zu) = q, \omega(uv) = r, \omega(vx) = n.$$ By our assumptions, $C_1$ has a balanced vertex whose two neighbors on $C_1$ are of degree $2$. We may assume that $x$ is such a vertex, so that $m=n$ and $m\leqslant p\geqslant q \leqslant r \geqslant m$. Now we consider two possible cases: *Case $1$*: $\deg_G(x) = 2$. In this case, $N_G(x)=\{y,v\}$, and hence $$I(G_\omega) \colon x^m = (y^m, v^m, I((G_x)_\omega)) \ \text{ and } I(G_\omega) +(x^m)=(x^m) + I((G\setminus x)_\omega).$$ We now prove that these ideals are Cohen-Macaulay. Observe that $G\setminus x$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$ with $r-1$ basic $5$-cycles $C_2,\ldots,C_r$ and pendant edges $P \cup \{zy,uv\}$, where $y$ and $v$ are leaves. We now verify that $(G\setminus x)_\omega$ satisfies condition $(3)$. It suffices to prove property $(a)$, and we only need to verify it for the pendant edges $zy$ and $uv$. We prove it for the pendant edge $zy$; the argument for $uv$ is similar. Let $zw \in E((G \setminus x)_\omega)$ for some $w \in V(G\setminus x) \setminus \{y\}$. If $w = u$, then by the balanced-vertex condition at $x$ on $C_1$, we have $\omega(zu) \leqslant \omega(zy)$. If $w \neq u$, then $w \notin C_1$.
By applying Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"} on the basic $5$-cycle $C_1$, we get $\omega(zw) \leqslant \omega(zy)$. Thus, the property holds for the graph $(G\setminus x)_\omega$. By the induction hypothesis, $(G\setminus x)_\omega$ is Cohen-Macaulay, so that $I(G_\omega) +(x^m)$ is Cohen-Macaulay. In the same way, we prove that $I(G_\omega) \colon x^m$ is Cohen-Macaulay, as follows. Since $C_1$ is a basic $5$-cycle, one of the vertices $z$ and $u$ is a leaf in $G_x = G\setminus \{x,y,v\}$. Thus, $G_x$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$ with $r-1$ basic $5$-cycles $C_2,\ldots,C_r$ and pendant edges $P \cup \{zu\}$. We now verify that $(G_x)_\omega$ satisfies condition $(3)$. It suffices to prove property $(a)$, and it remains to verify it only for the pendant edge $zu$. If both vertices $z$ and $u$ are leaves, then there is nothing to prove. Otherwise, assume $u$ is a leaf and $z$ is not. Let $zw$ be any edge in $E((G_x)_\omega)$; then $w$ does not lie on the basic $5$-cycle $C_1$. Once again, applying Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"} on the basic $5$-cycle $C_1$, we get $\omega(zw) \leqslant \omega(zu)$. Thus, the property holds for the graph $(G_x)_\omega$, and hence $(G_x)_\omega$ is Cohen-Macaulay by the induction hypothesis. Therefore, $I(G_\omega) \colon x^m$ is Cohen-Macaulay, too. Since $\sqrt{I(G_\omega) +(x^m)} = (x, I(G\setminus x))$ is Cohen-Macaulay, this forces $G\setminus x$ to be well-covered. Since $x$ is not an isolated vertex, it is a shedding vertex. Moreover, $\sqrt{I(G_\omega) \colon x^m} = (y, v, I(G_x)) = I(G)\colon x$. By Lemma [\[dim\]](#dim){reference-type="ref" reference="dim"}, we have $$\dim R/I(G_\omega) = \dim R/I(G_\omega)\colon x^m = \dim R/(I(G_\omega),x^m).$$ This implies that $I(G_\omega)$ is Cohen-Macaulay by Lemma [\[CM-Q\]](#CM-Q){reference-type="ref" reference="CM-Q"}. *Case $2$*: $\deg_G(x) > 2$.
Let $N(x) =\{y,v,y_1,\ldots,y_k\}$. Since $m\geqslant m_i$ for all $i$ by Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"}, we obtain $$I(G_\omega) \colon x^m = (y^m, v^m) +(y_1^{m_1}, \ldots, y_k^{m_k}, I((G\setminus \{x,y,v\})_\omega))$$ and $$I(G_\omega)+( x^m) = (x^m, x^{m_1} y_1^{m_1}, \ldots, x^{m_k}y_k^{m_k}, I((G\setminus x)_\omega)).$$ We now prove that these ideals are Cohen-Macaulay. Let $w$ be a new vertex and let $H$ be the graph obtained from $G$ by removing the edges $xy$ and $xv$ and adding a new edge $xw$; that is, $H$ is the graph with $V(H) = V(G) \cup \{w\}$ and $E(H) = (E(G) \cup \{xw\}) \setminus \{xy,xv\}$. Since $C_1$ is a basic $5$-cycle and $\deg_G(x) > 2$, we have $\deg_G(y) = \deg_G(v) = 2$. Thus, $w,y,v$ are leaves in $H$. Then $H$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$ with $r-1$ basic $5$-cycles and pendant edges $P \cup\{xw, uv\}$. Now we define the edge weight on $H$ by sending $$e \mapsto \begin{cases} m & \text{ if } e = xw,\\ \omega(e) & \text{ otherwise}, \end{cases}$$ which is still denoted by $\omega$. We now verify that $H_\omega$ satisfies condition $(3)$. It suffices to prove property $(a)$, and it remains to verify it for the pendant edges $xw$ and $uv$. For the pendant edge $e = uv$ this follows from Lemma [Lemma 14](#CM-L03){reference-type="ref" reference="CM-L03"}, and for the pendant edge $e = xw$ it follows since $\omega(xw) = m \geqslant m_i$ for all $i$. Thus, by the induction hypothesis, $H_\omega$ is Cohen-Macaulay. Since $xw$ is a pendant edge of $H_\omega$, we have $x^mw^m \in I(H_\omega)$, and hence $I(H_\omega) \colon w^m$ is Cohen-Macaulay by Lemma [\[CM-Q\]](#CM-Q){reference-type="ref" reference="CM-Q"}. Note that $$I(H_\omega)\colon w^m = (x^m, x^{m_1}y_1^{m_1}, \ldots, x^{m_k}y_k^{m_k}, I((G\setminus x)_{\omega}))=I(G_\omega)+( x^m).$$ Hence, $I(G_\omega)+( x^m)$ is Cohen-Macaulay.
In order to prove that $I(G_\omega)\colon x^m$ is Cohen-Macaulay, we use the same technique as above. Let $H'$ be a graph with $V(H') = V(G\setminus \{y,v\})\cup\{w\}$ and $E(H') = E(G\setminus \{y,v\}) \cup \{xw\}$. Next, define the edge weight on $H'$ by sending $$e \mapsto \begin{cases} m & \text{ if } e = xw,\\ \omega(e) & \text{ otherwise}, \end{cases}$$ which is still denoted by $\omega$. With this setting, $H'_\omega$ is Cohen-Macaulay by the same argument as in the previous case. Thus, $$I(H'_\omega) \colon x^m = (w^m, y_1^{m_1},\ldots, y_k^{m_k}, I((G\setminus\{x,y,v\})_\omega))$$ is Cohen-Macaulay by Lemma [\[CM-Q\]](#CM-Q){reference-type="ref" reference="CM-Q"}. In particular, $(y_1^{m_1},\ldots, y_k^{m_k}, I((G\setminus\{x,y,v\})_\omega))$ is Cohen-Macaulay, and hence $I(G_\omega) \colon x^m = (y^m,v^m) + (y_1^{m_1},\ldots, y_k^{m_k}, I((G\setminus\{x,y,v\})_\omega))$ is Cohen-Macaulay as well. Finally, since $$\sqrt{I(G_\omega)\colon x^m} = (y,v, y_1,\ldots,y_k) + I(G\setminus\{x,y,v\}) = I(G)\colon x$$ and $$\sqrt{I(G_\omega)+(x^m)} = (I(G),x),$$ by the same argument as in Case $1$, we have $$\dim R/I(G_\omega) = \dim R/I(G_\omega)\colon x^m = \dim R/(I(G_\omega),x^m).$$ Therefore, $I(G_\omega)$ is Cohen-Macaulay by Lemma [\[CM-Q\]](#CM-Q){reference-type="ref" reference="CM-Q"}, and the proof is complete. ◻ **Example 20**. The edge-weighted graph $G_\omega$ depicted in Figure [3](#CMWE){reference-type="ref" reference="CMWE"} is Cohen-Macaulay. ![The Cohen-Macaulay edge-weighted graph.](CMWE.pdf "fig:"){#CMWE}\ Indeed, we see from the figure that the underlying graph $G$ is in the class $\mathop{\mathrm{\mathcal{PC}}}$ with three pendant edges $fg, hi, jk$, and two basic $5$-cycles $C_1:\ x\to y\to z\to u\to v\to x$ and $C_2: \ a\to b\to c\to d\to e\to a$. Note that $z$ is a balanced vertex on $C_1$ and $c$ is one on $C_2$; they satisfy condition $(b)$ in Theorem [Theorem 19](#main-theorem){reference-type="ref" reference="main-theorem"}.
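As an aside, the colon-ideal identities used in Case 1 of the proof of Theorem 19 can be checked mechanically: for a monomial ideal $I=(g_1,\dots,g_t)$ and a monomial $f$, the colon ideal $I\colon f$ is generated by the monomials $g_i/\gcd(g_i,f)$. The following Python sketch is our own illustration (not code from the paper; variable names and the sample weights are ours). It verifies the identity $I(G_\omega)\colon x^m=(y^m,v^m,I((G_x)_\omega))$ on a single weighted $5$-cycle $x\to y\to z\to u\to v\to x$ with weights $m=n=2$, $p=3$, $q=1$, $r=4$, for which $x$ is a balanced vertex.

```python
# Monomials are dicts {variable: exponent}; an ideal is a list of monomials.
# Our own illustration of the colon-ideal computation, not code from the paper.

def canon(g):
    """Canonical form of a monomial: sorted tuple of (var, positive exponent)."""
    return tuple(sorted((v, e) for v, e in g.items() if e > 0))

def divides(h, g):
    """True if the monomial h divides the monomial g."""
    return all(g.get(v, 0) >= e for v, e in h.items())

def colon(gens, f):
    """Generators of (gens) : f, namely g / gcd(g, f), minimalized."""
    quotients = [{v: max(e - f.get(v, 0), 0) for v, e in g.items()} for g in gens]
    # Minimalize: drop duplicates and any generator divisible by another one.
    uniq = list({canon(g): g for g in quotients}.values())
    return [g for g in uniq
            if not any(canon(h) != canon(g) and divides(h, g) for h in uniq)]

# Weighted 5-cycle x-y-z-u-v with edge weights m = n = 2, p = 3, q = 1, r = 4
# (so x is balanced: m = n and m <= p >= q <= r >= m).
# Edge ideal I(G_w): an edge ab of weight w contributes the generator a^w b^w.
I = [{'x': 2, 'y': 2},   # x^m y^m
     {'y': 3, 'z': 3},   # y^p z^p
     {'z': 1, 'u': 1},   # z^q u^q
     {'u': 4, 'v': 4},   # u^r v^r
     {'v': 2, 'x': 2}]   # v^n x^n

result = {canon(g) for g in colon(I, {'x': 2})}             # I(G_w) : x^m
expected = {(('y', 2),), (('v', 2),), (('u', 1), ('z', 1))}  # (y^m, v^m, z^q u^q)
print(result == expected)  # the Case 1 identity holds on this example
```

Here $G_x=G\setminus\{x,y,v\}$ is the single edge $zu$ of weight $q$, so the expected right-hand side is exactly $(y^m,v^m,z^qu^q)$, matching the displayed formula in the proof.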
We can easily verify that conditions $(a)$-$(c)$ in Theorem [Theorem 19](#main-theorem){reference-type="ref" reference="main-theorem"} hold for $G_\omega$, and thus $G_\omega$ is Cohen-Macaulay. ## Acknowledgment {#acknowledgment .unnumbered} This work is partially supported by NAFOSTED (Vietnam) under grant number 101.04-2023.36. T. Bıyıkoğlu and Y. Civan, *Vertex-decomposable graphs, codismantlability, Cohen-Macaulayness, and Castelnuovo-Mumford regularity*, Electron. J. Combin. **21** (2014), no. 1, Paper 1.1, 17 pp. L.T.K. Diem, N.C. Minh and T. Vu, *The sequentially Cohen-Macaulay property of edge ideals of edge-weighted graphs*, arXiv:2308.05020. L.X. Dung and T.N. Trung, *Cohen-Macaulay oriented graphs with large girth*, arXiv:2308.11907. H.T. Ha, K. Lin, S. Morey, E. Reyes and R. H. Villarreal, *Edge ideals of oriented graphs*, Int. J. Algebra Comput. **29** (2019), 535--559. J. Herzog, Y. Takayama and N. Terai, *On the radical of a monomial ideal*, Arch. Math. **85** (2005), 397-408. D.T. Hoang, N.C. Minh and T.N. Trung, *Cohen-Macaulay graphs with large girth*, J. Algebra Appl. **14** (2015), no. 7, 1550112, 16 pp. C. Paulsen and S. Sather-Wagstaff, *Edge ideals of weighted graphs*, J. Algebra Appl. **12** (2013), no. 5, 1250223, 24 pp. Y. Pitones, E. Reyes and J. Toledo, *Monomial ideals of weighted oriented graphs*, Electron. J. Combin. **26** (2019), no. 3, Paper P3.44. Y. Pitones, E. Reyes and R. H. Villarreal, *Unmixed and Cohen-Macaulay weighted oriented König graphs*, Studia Sci. Math. Hungar. **58** (2021), no. 3, 276-292. M. D. Plummer, *Some covering concepts in graphs*, J. Combin. Theory **8** (1970), 91-98. A.A. Seyed Fakhari, K. Shibata, N. Terai and S. Yassemi, *Cohen--Macaulay edge-weighted edge ideals of very well-covered graphs*, Comm. Algebra **49** (2021), no. 10, 4249-4257. R. Villarreal, *Monomial Algebras*, Monographs and Textbooks in Pure and Applied Mathematics Vol. 238, Marcel Dekker, New York, 2001.
R. Woodroofe, *Vertex decomposable graphs and obstructions to shellability*, Proc. Amer. Math. Soc. **137** (2009), no. 10, 3235-3246.
{ "id": "2309.05056", "title": "Cohen-Macaulay edge-weighted graphs of girth $5$ or greater", "authors": "Truong Thi Hien", "categories": "math.AC math.CO", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- abstract: | Let $p$ be a prime number and let $F$ be a field of characteristic different from $p$. We prove that there exist a field extension $L/F$ and $a,b,c,d$ in $L^{\times}$ such that $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(L)[p]$ but $\langle a,b,c,d\rangle$ is not defined over $L$. Thus the Strong Massey Vanishing Conjecture at the prime $p$ fails for $L$, and the cochain differential graded ring $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_L,\mathbb Z/p\mathbb Z)$ of the absolute Galois group $\Gamma_L$ of $L$ is not formal. This answers a question of Positselski. address: | Department of Mathematics\ University of California\ Los Angeles, CA 90095\ United States of America author: - Alexander Merkurjev - Federico Scavia date: September 2023 title: Non-formality of Galois cohomology modulo all primes --- # Introduction Let $p$ be a prime number, let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$, and let $\Gamma_F$ be the absolute Galois group of $F$. The Norm-Residue Isomorphism Theorem of Voevodsky and Rost [@haesemeyer2019norm] gives an explicit presentation by generators and relations of the cohomology ring $H^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(F,\mathbb Z/p\mathbb Z)=H^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F,\mathbb Z/p\mathbb Z)$. In view of this complete description of the cup product, the research on $H^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(F, \mathbb Z/p\mathbb Z)$ shifted in recent years to external operations, defined in terms of the differential graded ring of continuous cochains $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F, \mathbb Z/p\mathbb Z)$. Hopkins--Wickelgren [@hopkins2015splitting] asked whether $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F, \mathbb Z/p\mathbb Z)$ is formal for every field $F$ and every prime $p$.
Loosely speaking, this amounts to saying that no essential information is lost when passing from $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F, \mathbb Z/p\mathbb Z)$ to $H^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(F, \mathbb Z/p\mathbb Z)$. Positselski [@positselski2017koszulity] showed that $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F, \mathbb Z/p\mathbb Z)$ is not formal for some finite extensions $F$ of $\mathbb Q_{\ell}$ and $\mathbb F_{\ell}((z))$, where $\ell\neq p$. He then posed the following question; see [@positselski2017koszulity p. 226]. **Question 1** (Positselski). *Does there exist a field $F$ containing all roots of unity of $p$-power order such that $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F,\mathbb Z/p\mathbb Z)$ is not formal?* We showed in [@merkurjev2022degenerate Theorem 1.6] that Question 1 has a positive answer when $p=2$. In the present work we provide examples showing that the answer to Question 1 is affirmative for all primes $p$. **Theorem 2**. *Let $p$ be a prime number and let $F$ be a field of characteristic different from $p$. There exists a field $L$ containing $F$ such that the differential graded ring $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_L,\mathbb Z/p\mathbb Z)$ is not formal.* In order to detect non-formality of the cochain differential graded ring, we use Massey products. For any $n\geq 2$ and all $\chi_1,\dots,\chi_n\in H^1(F,\mathbb Z/p\mathbb Z)$, the Massey product of $\chi_1,\dots,\chi_n$ is a certain subset $\langle{\chi_1,\dots,\chi_n}\rangle\subset H^2(F,\mathbb Z/p\mathbb Z)$; the definition is recalled below. We say that $\langle{\chi_1,\dots,\chi_n}\rangle$ is defined if it is not empty, and that it vanishes if it contains $0$.
When $\operatorname{char}(F)\neq p$ and $F$ contains a primitive $p$-th root of unity $\zeta$, Kummer Theory gives an identification $H^1(F,\mathbb Z/p\mathbb Z)=F^{\times}/F^{\times p}$, and we may thus consider Massey products $\langle{a_1,\dots,a_n}\rangle$, where $a_i\in F^\times$ for $1\leq i\leq n$. Let $n\geq 3$ be an integer, let $\chi_1,\dots,\chi_n\in H^1(F,\mathbb Z/p\mathbb Z)$, and consider the following assertions: $$\begin{aligned} \label{assertion-1} & \text{The Massey product $\langle{\chi_1,\dots,\chi_n}\rangle$ vanishes.} \\ \label{assertion-2} & \text{The Massey product $\langle{\chi_1,\dots,\chi_n}\rangle$ is defined.} \\ \label{assertion-3} & \text{We have $\chi_i\cup\chi_{i+1}=0$ for all $1\leq i\leq n-1$.} \end{aligned}$$ We have that ([\[assertion-1\]](#assertion-1){reference-type="ref" reference="assertion-1"}) implies ([\[assertion-2\]](#assertion-2){reference-type="ref" reference="assertion-2"}), and that ([\[assertion-2\]](#assertion-2){reference-type="ref" reference="assertion-2"}) implies ([\[assertion-3\]](#assertion-3){reference-type="ref" reference="assertion-3"}). The Massey Vanishing Conjecture, due to Mináč--Tân [@minac2017triple] and inspired by the earlier work of Hopkins--Wickelgren [@hopkins2015splitting], predicts that ([\[assertion-2\]](#assertion-2){reference-type="ref" reference="assertion-2"}) implies ([\[assertion-1\]](#assertion-1){reference-type="ref" reference="assertion-1"}). This conjecture has sparked a lot of activity in recent years. When $F$ is an arbitrary field, the conjecture is known when either $n=3$ and $p$ is arbitrary, by Efrat--Matzri and Mináč--Tân [@matzri2014triple; @efrat2017triple; @minac2016triple], or $n=4$ and $p=2$, by [@merkurjev2023massey]. When $F$ is a number field, the conjecture was proved for all $n\geq 3$ and all primes $p$, by Harpaz--Wittenberg [@harpaz2019massey]. 
When $n=3$, it is a direct consequence of the definition of Massey product that ([\[assertion-3\]](#assertion-3){reference-type="ref" reference="assertion-3"}) implies ([\[assertion-2\]](#assertion-2){reference-type="ref" reference="assertion-2"}). Thus ([\[assertion-1\]](#assertion-1){reference-type="ref" reference="assertion-1"}), ([\[assertion-2\]](#assertion-2){reference-type="ref" reference="assertion-2"}) and ([\[assertion-3\]](#assertion-3){reference-type="ref" reference="assertion-3"}) are equivalent when $n=3$. In [@minac2017counting Question 4.2], Mináč and Tân asked whether ([\[assertion-3\]](#assertion-3){reference-type="ref" reference="assertion-3"}) implies ([\[assertion-1\]](#assertion-1){reference-type="ref" reference="assertion-1"}). This became known as the Strong Massey Vanishing Conjecture (see e.g. [@pal2018strong]): If $F$ is a field, $p$ is a prime number and $n\geq 3$ is an integer then, for all characters $\chi_1,\dots,\chi_n\in H^1(F,\mathbb Z/p\mathbb Z)$ such that $\chi_i\cup\chi_{i+1}=0$ for all $1\leq i\leq n-1$, the Massey product $\langle{\chi_1,\dots,\chi_n}\rangle$ vanishes. The Strong Massey Vanishing Conjecture implies the Massey Vanishing Conjecture. However, Harpaz and Wittenberg produced a counterexample to the Strong Massey Vanishing Conjecture, for $n=4$, $p=2$ and $F=\mathbb Q$; see [@guillot2018fourfold Example A.15]. More precisely, if we let $b=2$, $c=17$ and $a=d=bc=34$, then $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(\mathbb Q)$ but $\langle{a,b,c,d}\rangle$ is not defined over $\mathbb Q$. In this example, the classes of $a,b,c,d$ in $F^{\times}/F^{\times 2}$ are not $\mathbb F_2$-linearly independent modulo squares. In fact, by a theorem of Guillot--Mináč--Topaz--Wittenberg [@guillot2018fourfold], if $F$ is a number field and $a,b,c,d$ are independent in $F^\times /F^{\times 2}$ and satisfy $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(F)$, then $\langle{a,b,c,d}\rangle$ vanishes. 
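By Lemma 5 below (applied with $p=2$), the vanishing $(a,b)=0$ in $\operatorname{Br}(\mathbb Q)$ is equivalent to $b$ being a norm from $\mathbb Q(\sqrt a)$, i.e. $b=x^2-ay^2$ for some $x,y\in\mathbb Q$. The following Python sketch is our own illustration (not from the paper; the explicit witnesses $x,y$ are ours, found by hand) confirming the three vanishing symbols in the Harpaz--Wittenberg example $b=2$, $c=17$, $a=d=34$ above.

```python
from fractions import Fraction as Fr

def is_norm_witness(a, b, x, y):
    """Check b = x^2 - a*y^2, i.e. b is a norm from Q(sqrt(a)) (p = 2 case)."""
    return Fr(x) ** 2 - a * Fr(y) ** 2 == b

a, b, c, d = 34, 2, 17, 34
# (a,b) = (34,2):   2 = 6^2 - 34*1^2
# (b,c) = (2,17):  17 = 5^2 -  2*2^2
# (c,d) = (17,34): since the symbol is symmetric for p = 2, it suffices that
#                  17 = (17/3)^2 - 34*(2/3)^2, i.e. (34,17) = 0.
print(is_norm_witness(a, b, 6, 1))
print(is_norm_witness(b, c, 5, 2))
print(is_norm_witness(d, c, Fr(17, 3), Fr(2, 3)))
```

The third check uses the symmetry $(c,d)=(d,c)$ of the quaternion symbol, verifying $(17,34)=0$ via a rational norm from $\mathbb Q(\sqrt{34})$.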
If $F$ is a field for which the Strong Massey Vanishing Conjecture fails, for some $n\geq 3$ and some prime $p$, then $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_F,\mathbb Z/p\mathbb Z)$ is not formal; see Lemma 6 for the $n=4$ case. Therefore Theorem 2 follows from the next more precise result. **Theorem 3**. *Let $p$ be a prime number and let $F$ be a field of characteristic different from $p$. There exist a field $L$ containing $F$ and $\chi_1,\chi_2,\chi_3,\chi_4\in H^1(L,\mathbb Z/p\mathbb Z)$ such that $\chi_1\cup\chi_2=\chi_2\cup\chi_3=\chi_3\cup\chi_4=0$ in $H^2(L,\mathbb Z/p\mathbb Z)$ but $\langle{\chi_1,\chi_2,\chi_3,\chi_4}\rangle$ is not defined. Thus the Strong Massey Vanishing Conjecture at $n=4$ and the prime $p$ fails for $L$, and $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_L,\mathbb Z/p\mathbb Z)$ is not formal.* This gives the first counterexamples to the Strong Massey Vanishing Conjecture for all odd primes $p$. We easily deduce that ([\[assertion-3\]](#assertion-3){reference-type="ref" reference="assertion-3"}) does not imply ([\[assertion-2\]](#assertion-2){reference-type="ref" reference="assertion-2"}) for all $n\geq 4$, in general: indeed, if the fourfold Massey product $\langle{\chi_1,\chi_2,\chi_3,\chi_4}\rangle$ is not defined, neither is the $n$-fold Massey product $\langle{\chi_1,\chi_2,\chi_3,\chi_4,0,\dots,0}\rangle$. Theorem 3 was proved in [@merkurjev2022degenerate Theorem 1.6] when $p=2$, and is new when $p$ is odd. Our proof of Theorem 3 is uniform in $p$. We now describe the main ideas that go into the proof of Theorem 3. We may assume without loss of generality that $F$ contains a primitive $p$-th root of unity. In , we collect preliminaries on formality, Massey products and Galois algebras. In particular, we recall Dwyer's Theorem (see ): a Massey product $\langle{\chi_1,\dots,\chi_n}\rangle\subset H^2(F,\mathbb Z/p\mathbb Z)$ vanishes (resp.
is defined) if and only if the homomorphism $(\chi_1,\dots,\chi_n)\colon \Gamma_F\to (\mathbb Z/p\mathbb Z)^n$ lifts to the group $U_{n+1}$ of upper unitriangular matrices in $\operatorname{GL}_{n+1}(\mathbb F_p)$ (resp. to the group $\overline{U}_{n+1}$ of upper unitriangular matrices in $\operatorname{GL}_{n+1}(\mathbb F_p)$ with top-right corner removed). As for [@merkurjev2022degenerate Theorem 1.6], our approach is based on , which is a restatement of in terms of Galois algebras. In , we show that a fourfold Massey product $\langle{a,b,c,d}\rangle$ is defined over $F$ if and only if a certain system of equations admits a solution over $F$, and the variety defined by these equations is a torsor under a torus; see . This is done by using Dwyer's to rephrase the property that $\langle{a,b,c,d}\rangle$ is defined in terms of $\overline{U}_5$-Galois algebras, and then by a detailed study of Galois $G$-algebras, for $G=U_3,\overline{U}_4,U_4,\overline{U}_5$. As a consequence, we also obtain an alternative proof of the Massey Vanishing Conjecture for $n=3$ and any prime $p$; see . In , we use the work of to construct a "generic variety" for the property that $\langle{a,b,c,d}\rangle$ is defined. More precisely, under the assumption that $(a,b)=(c,d)=0$ in $\operatorname{Br}(F)$ and letting $X$ be the Severi-Brauer variety of $(b,c)$, we construct an $F$-torus $T$, and a $T_{F(X)}$-torsor $E_w$ such that, if $E_w$ is non-trivial, then $\langle{a,b,c,d}\rangle$ is not defined over $F(X)$; see . The definition of $E_w$ depends on a rational function $w\in F(X)^\times$, which we construct in (3). Since $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(F(X))$, the proof of will be complete once we give an example of $a,b,c,d$ for which the corresponding torsor $E_w$ is non-trivial. Here we consider the generic quadruple $a,b,c,d$ such that $(a,b)$ and $(c,d)$ are trivial. More precisely, we let $x,y$ be two variables over $F$, and replace $F$ by $E\coloneqq F(x,y)$. 
We then set $a\coloneqq 1-x$, $b\coloneqq x$, $c\coloneqq y$ and $d\coloneqq 1-y$ over $E$. We have $(a,b)=(b,c)=0$ in $\operatorname{Br}(E)$. The class $(b,c)$ is not zero in $\operatorname{Br}(E)$, so that the Severi-Brauer variety $X/E$ of $(b,c)$ is non-trivial, but $(b,c)=0$ over $L\coloneqq E(X)$. It is natural to attempt to prove that $E_w$ is non-trivial over $L$ by performing residue calculations to deduce that this torsor is ramified. However, the torsor $E_w$ is in fact unramified. We are thus led to consider a finer obstruction to the triviality of $E_w$. This "secondary obstruction" is only defined for unramified torsors. We describe the necessary homological algebra in , and we define the obstruction and give a method to compute it in . In , an explicit calculation shows that the obstruction is non-zero on $E_w$, and hence $E_w$ is non-trivial, as desired. ## Notation {#notation .unnumbered} Let $F$ be a field, let $F_s$ be a separable closure of $F$, and denote by $\Gamma_F\coloneqq \operatorname{Gal}(F_s/F)$ the absolute Galois group of $F$. If $E$ is an $F$-algebra, we let $H^i(E,-)$ be the étale cohomology of $\operatorname{Spec}(E)$ (possibly non-abelian if $i\leq 1$). If $E$ is a field, $H^i(E,-)$ may be identified with the continuous cohomology of $\Gamma_E$. We fix a prime $p$, and we suppose that $\operatorname{char}(F)\neq p$. If $E$ is an $F$-algebra and $a_1,\dots,a_n\in E^{\times}$, we define the étale $E$-algebra $E_{a_1,\dots,a_n}$ by $$E_{a_1,\dots,a_n}\coloneqq E[x_1,\dots,x_n]/(x_1^p-a_1,\dots,x_n^p-a_n)$$ and we set $(a_i)^{1/p}\coloneqq x_i$. More generally, for all integers $d$, we set $(a_i)^{d/p}\coloneqq x_i^d$. We denote by $R_{a_1,\dots,a_n}(-)$ the functor of Weil restriction along $F_{a_1,\dots,a_n}/F$. 
In particular $R_{a_1,\dots,a_n}(\mathbb G_{\operatorname{m}})$ is the quasi-trivial torus associated to $F_{a_1,\dots,a_n}/F$, and we denote by $R^{(1)}_{a_1,\dots,a_n}(\mathbb G_{\operatorname{m}})$ the norm-one subtorus of $R_{a_1,\dots,a_n}(\mathbb G_{\operatorname{m}})$. We denote by $N_{a_1,\dots,a_n}$ the norm map from $F_{a_1,\dots,a_n}$ to $F$. We write $\operatorname{Br}(F)$ for the Brauer group of $F$. If $\operatorname{char}(F)\neq p$ and $F$ contains a primitive $p$-th root of unity, for all $a,b\in F^\times$ we let $(a,b)$ be the corresponding degree-$p$ cyclic algebra, and we also write $(a,b)$ for its class in $\operatorname{Br}(F)$; see below. We denote by $N_{a_1,\dots,a_n}\colon\operatorname{Br}(F_{a_1,\dots,a_n})\to \operatorname{Br}(F)$ the corestriction map along $F_{a_1,\dots,a_n}/F$. An $F$-variety is a separated integral $F$-scheme of finite type. If $X$ is an $F$-variety, we denote by $F(X)$ the function field of $X$, and we write $X^{(1)}$ for the collection of all points of codimension $1$ in $X$. We set $X_s\coloneqq X\times_FF_s$. If $K$ is an étale algebra over $F$, we write $X_K$ for $X\times_FK$. For all $a_1,\dots,a_n\in F^\times$, we write $X_{a_1,\dots,a_n}$ for $X_{F_{a_1,\dots,a_n}}$. When $X=\mathbb P^d_F$ is a $d$-dimensional projective space, we denote by $\mathbb P^d_{a_1,\dots,a_n}$ the base change of $\mathbb P^d_F$ to $F_{a_1,\dots,a_n}$. # Preliminaries {#section-preliminaries} ## Galois algebras and Kummer Theory {#kummer-sub} Let $F$ be a field and let $G$ be a finite group. A $G$*-algebra* is an étale $F$-algebra $L$ on which $G$ acts via $F$-algebra automorphisms. The $G$-algebra $L$ is *Galois* if $|G|=\dim_F(L)$ and $L^G=F$; see [@knus1998book Definitions (18.15)]. A $G$-algebra $L/F$ is Galois if and only if the morphism of schemes $\operatorname{Spec}(L)\to \operatorname{Spec}(F)$ is an étale $G$-torsor.
If $L/F$ is a Galois $G$-algebra, the group algebra $\mathbb Z[G]$ acts on the multiplicative group $L^{\times}$: an element $\operatornamewithlimits{\textstyle\sum}_{i=1}^r m_ig_i\in \mathbb Z[G]$, where $m_i\in \mathbb Z$ and $g_i\in G$, sends $x\in L^{\times}$ to $\operatornamewithlimits{\textstyle\prod}_{i=1}^r g_i(x)^{m_i}$. By [@knus1998book Example (28.15)], we have a canonical bijection $$\label{galois-alg} \operatorname{Hom}_{\operatorname{cont}}(\Gamma_F,G)/_{\sim}\xrightarrow{\sim}\left\{\text{Isomorphism classes of Galois $G$-algebras over $F$}\right\},$$ where, if $f_1,f_2\colon \Gamma_F\to G$ are continuous group homomorphisms, we say that $f_1\simeq f_2$ if there exists $g\in G$ such that $gf_1(\sigma)g^{-1}=f_2(\sigma)$ for all $\sigma\in \Gamma_F$. Let $H$ be a normal subgroup of $G$. Under the correspondence ([\[galois-alg\]](#galois-alg){reference-type="ref" reference="galois-alg"}), the map $\operatorname{Hom}_{\operatorname{cont}}(\Gamma_F,G)/_{\sim}\to \operatorname{Hom}_{\operatorname{cont}}(\Gamma_F,G/H)/_{\sim}$ sends the class of a Galois $G$-algebra $L$ to the class of the Galois $G/H$-algebra $L^H$. **Lemma 4**. *Let $G$ be a finite group, and let $H, H', S$ be normal subgroups of $G$ such that $H\subset S$, $H'\subset S$, and the square $$\label{g-h-h'-s} \begin{tikzcd} G \arrow[r] \arrow[d] & G/H \arrow[d] \\ G/H' \arrow[r] & G/S \end{tikzcd}$$ is cartesian.* *(1) Let $L$ be a Galois $G$-algebra. Then the tensor product $L^H \otimes_{L^S} L^{H'}$ has a Galois $G$-algebra structure given by $g(x\otimes x')\coloneqq g(x)\otimes g(x')$ for all $x\in L^H$ and $x'\in L^{H'}$. Moreover, the inclusions $L^H\to L$ and $L^{H'}\to L$ induce an isomorphism of Galois $G$-algebras $L^H \otimes_{L^S} L^{H'} \rightarrow L$.* *(2) Conversely, let $K$ be a Galois $G/H$-algebra, let $K'$ be a Galois $G/H'$-algebra, and let $E$ be a Galois $G/S$-algebra. Suppose given $G$-equivariant algebra homomorphisms $E \rightarrow K$ and $E \rightarrow K'$. 
Endow the tensor product $L \coloneqq K \otimes_{E} K'$ with the structure of a $G$-algebra given by $g(x\otimes x')\coloneqq g(x)\otimes g(x')$ for all $x\in K$ and $x'\in K'$. Then $L$ is a Galois $G$-algebra such that $L^H\simeq K$ as $G/H$-algebras, and $L^{H'}\simeq K'$ as $G/H'$-algebras.* The condition that ([\[g-h-h\'-s\]](#g-h-h'-s){reference-type="ref" reference="g-h-h'-s"}) is cartesian is equivalent to $H\cap H'= \left\{1\right\}$ and $S=HH'$. *Proof.* (1) It is clear that the formula $g(x\otimes x')\coloneqq g(x)\otimes g(x')$ makes $L^H \otimes_{L^S} L^{H'}$ into a $G$-algebra. Consider the commutative square of $F$-schemes $$\begin{tikzcd} \operatorname{Spec}(L) \arrow[r] \arrow[d] & \operatorname{Spec}(L)/H' \arrow[d] \\ \operatorname{Spec}(L)/H \arrow[r] & \operatorname{Spec}(L)/S. \end{tikzcd}$$ After base change to a separable closure of $F$, this square becomes the cartesian square ([\[g-h-h\'-s\]](#g-h-h'-s){reference-type="ref" reference="g-h-h'-s"}), and therefore it is cartesian. Passing to coordinate rings, we deduce that the map $L^H \otimes_{L^S} L^{H'} \rightarrow L$ is an isomorphism of $G$-algebras. In particular, since $L$ is a Galois $G$-algebra, so is $L^H \otimes_{L^S} L^{H'}$. \(2\) We have a $G$-equivariant cartesian diagram $$\begin{tikzcd} \operatorname{Spec}(L) \arrow[r]\arrow[d] & \operatorname{Spec}(K')\arrow[d] \\ \operatorname{Spec}(K) \arrow[r] & \operatorname{Spec}(E). \end{tikzcd}$$ Every $G$-equivariant morphism between $G/H$ and $G/S$ is isomorphic to the projection map $G/H\to G/S$. Therefore the base change of $\operatorname{Spec}(K) \to \operatorname{Spec}(E)$ to $F_s$ is $G$-equivariantly isomorphic to the projection $G/H \to G/S$. Similarly for $\operatorname{Spec}(K')\to\operatorname{Spec}(E)$. 
Therefore the base change of $\operatorname{Spec}(L)\to \operatorname{Spec}(F)$ over $F_s$ is $G$-equivariantly isomorphic to $(G/H)\times_{G/S}(G/H')\simeq G$, that is, the morphism $\operatorname{Spec}(L)\to \operatorname{Spec}(F)$ is an étale $G$-torsor. ◻ Suppose that $\operatorname{char}(F)\neq p$ and that $F$ contains a primitive $p$-th root of unity. We fix a primitive $p$-th root of unity $\zeta\in F^\times$. This determines an isomorphism of Galois modules $\mathbb Z/p\mathbb Z\simeq \mu_p$, given by $1\mapsto\zeta$, and so the Kummer sequence yields an isomorphism $$\label{kummer}\operatorname{Hom}_{\operatorname{cont}}(\Gamma_F,\mathbb Z/p\mathbb Z)=H^1(F,\mathbb Z/p\mathbb Z)\simeq H^1(F,\mu_p)\simeq F^{\times}/F^{\times p}.$$ For every $a\in F^{\times}$, we let $\chi_a\colon \Gamma_F\to \mathbb Z/p\mathbb Z$ be the homomorphism corresponding to the coset $a F^{\times p}$ under ([\[kummer\]](#kummer){reference-type="ref" reference="kummer"}). Explicitly, letting $a'\in F_{\operatorname{sep}}^\times$ be such that $(a')^p=a$, we have $g(a')=\zeta^{\chi_a(g)}a'$ for all $g\in \Gamma_F$. This definition does not depend on the choice of $a'$. Now let $n\geq 1$ be an integer. For all $i=1,\dots,n$, let $\sigma_i$ be the canonical generator of the $i$-th factor $\mathbb Z/p\mathbb Z$ of $(\mathbb Z/p\mathbb Z)^n$. By ([\[kummer\]](#kummer){reference-type="ref" reference="kummer"}) all Galois $(\mathbb Z/p\mathbb Z)^n$-algebras over $F$ are of the form $F_{a_1,\dots,a_n}$, where $a_1,\dots,a_n\in F^\times$ and the Galois $(\mathbb Z/p\mathbb Z)^n$-algebra structure is defined by $(\sigma_i-1)a_i^{1/p}=\zeta$ for all $i$ and $(\sigma_i-1)a_j^{1/p}=1$ for all $j\neq i$. We write $(a,b)$ for the cyclic degree-$p$ central simple algebra over $F$ generated, as an $F$-algebra, by $F_a$ and an element $y$ such that $$y^p=b,\qquad ty=y\sigma_a(t)\ \text{for all $t\in F_a$}.$$ We also write $(a,b)$ for the class of $(a,b)$ in $\operatorname{Br}(F)$. 
The Kummer sequence yields a group isomorphism $$\iota\colon H^2(F,\mathbb Z/p\mathbb Z)\xrightarrow{\sim}\operatorname{Br}(F)[p].$$ For all $a,b\in F^{\times}$, we have $\iota(\chi_a\cup \chi_b)=(a,b)$ in $\operatorname{Br}(F)$; see [@serre1979local Chapter XIV, Proposition 5]. **Lemma 5**. *Let $p$ be a prime, let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$, and let $a,b\in F^\times$. The following are equivalent:* *(i) $(a,b)=0$ in $\operatorname{Br}(F)$;* *(ii) there exists $\alpha\in F_a^\times$ such that $b=N_a(\alpha)$;* *(iii) there exists $\beta\in F_b^\times$ such that $a=N_b(\beta)$.* *Proof.* See [@serre1979local Chapter XIV, Proposition 4(iii)]. ◻ ## Formality and Massey products {#massey-section} Let $(A,\partial)$ be a differential graded ring, i.e., $A=\oplus_{i\geq 0}A^i$ is a non-negatively graded abelian group with an associative multiplication which respects the grading, and $\partial\colon A\to A$ is a group homomorphism of degree $1$ such that $\partial\circ \partial=0$ and $\partial(ab)=\partial(a)b+(-1)^ia\partial(b)$ for all $i\geq 0$, $a\in A^i$ and $b\in A$. We denote by $H^{\bullet}(A)\coloneqq \operatorname{Ker}(\partial)/\operatorname{Im}(\partial)$ the cohomology of $(A,\partial)$, and we write $\cup$ for the multiplication (cup product) on $H^{\bullet}(A)$. We say that $A$ is *formal* if it is quasi-isomorphic, as a differential graded ring, to $H^{\bullet}(A)$ with the zero differential. Let $n\geq 2$ be an integer and $a_1,\dots,a_n\in H^1(A)$. A *defining system* for the $n$-th order Massey product $\langle{a_1,\dots,a_n}\rangle$ is a collection $M$ of elements $a_{ij}\in A^1$, where $1\leq i<j\leq n+1$, $(i,j)\neq (1,n+1)$, such that 1. $\partial(a_{i,i+1})=0$ and $a_{i,i+1}$ represents $a_i$ in $H^1(A)$, and 2.
$\partial(a_{ij})=-\sum_{l=i+1}^{j-1}a_{il}a_{lj}$ for all $i<j-1$. It follows from (2) that $-\sum_{l=2}^na_{1l}a_{l,n+1}$ is a $2$-cocycle: we write $\langle{a_1,\dots,a_n}\rangle_M$ for its cohomology class in $H^2(A)$, called the *value* of $\langle{a_1,\dots,a_n}\rangle$ corresponding to $M$. By definition, the *Massey product* of $a_1,\dots,a_n$ is the subset $\langle{a_1,\dots,a_n}\rangle$ of $H^2(A)$ consisting of the values $\langle{a_1,\dots,a_n}\rangle_M$ of all defining systems $M$. We say that the Massey product $\langle{a_1,\dots,a_n}\rangle$ is *defined* if it is non-empty, and that it *vanishes* if $0\in \langle{a_1,\dots,a_n}\rangle$. **Lemma 6**. *Let $(A,\partial)$ be a differential graded ring, and let $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ be elements of $H^1(A)$ satisfying $\alpha_1\cup\alpha_2=\alpha_2\cup\alpha_3=\alpha_3\cup\alpha_4=0$. If $A$ is formal, then $\langle{\alpha_1,\alpha_2,\alpha_3,\alpha_4}\rangle$ is defined.* *Proof.* This was proved in [@merkurjev2022degenerate Lemma B.1] under the assumption that $A$ is a differential graded $\mathbb F_2$-algebra. The proof for an arbitrary differential graded ring remains the same. ◻ In fact, one could prove the following: if the differential graded ring $A$ is formal, then for all $n\geq 3$ and all $\alpha_1,\dots,\alpha_n\in H^1(A)$ such that $\alpha_i\cup\alpha_{i+1}=0$ for all $1\leq i\leq n-1$, the Massey product $\langle{\alpha_1,\dots,\alpha_n}\rangle$ vanishes. ## Dwyer's Theorem {#dwyer-section} Let $p$ be a prime, and let $U_{n+1}\subset \operatorname{GL}_{n+1}(\mathbb F_p)$ be the subgroup of $(n+1)\times (n+1)$ upper unitriangular matrices. For all $1\leq i<j\leq n+1$, we denote by $e_{ij}$ the matrix whose non-diagonal entries are all zero except for the entry $(i,j)$, which is equal to $1$. We set $\sigma_i\coloneqq e_{i,i+1}$ for all $1\leq i\leq n$.
By [@biss2001presentation Theorem 1], the group $U_{n+1}$ admits a presentation with generators the $\sigma_i$ and relations: $$\label{relation0} \sigma_i^p=1\qquad\text{for all $1\leq i\leq n$,}$$ $$\label{relation1} [\sigma_i,\sigma_j]=1\qquad\text{for all $1\leq i\leq j-2\leq n-2$,}$$ $$\label{relation2} [\sigma_i,[\sigma_i,\sigma_{i+1}]]=[\sigma_{i+1},[\sigma_i,\sigma_{i+1}]]\qquad\text{for all $1\leq i\leq n-2$,}$$ $$\label{relation3} [[\sigma_i,\sigma_{i+1}],[\sigma_{i+1},\sigma_{i+2}]]=1\qquad\text{for all $1\leq i\leq n-3$.}$$ The following relation holds in $U_{n+1}$: $$[e_{ij},e_{jk}]=e_{ik}\qquad\text{for all $1\leq i< j<k\leq n+1$.}$$ By induction, we deduce that $$e_{1,n+1}=[\sigma_1,[\sigma_2,\dots,[\sigma_{n-2},[\sigma_{n-1},\sigma_n]]\dots]].$$ The center $Z_{n+1}$ of $U_{n+1}$ is the subgroup generated by $e_{1,n+1}$. The factor group $\overline{U}_{n+1}\coloneqq U_{n+1}/Z_{n+1}$ may be identified with the group of all $(n+1)\times (n+1)$ upper unitriangular matrices with entry $(1,n+1)$ omitted. For all $1\leq i<j\leq n+1$, let $\overline{e}_{ij}$ be the coset of $e_{ij}$ in $\overline{U}_{n+1}$, and set $\overline{\sigma}_i\coloneqq \overline{e}_{i,i+1}$ for all $1\leq i\leq n$. 
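The commutator identity $[e_{ij},e_{jk}]=e_{ik}$, the iterated-commutator expression for $e_{1,n+1}$, and the description of the center can all be checked by direct matrix computation; the following Python sketch (purely illustrative, for $p=3$ and $n=3$, so $U_4\subset\operatorname{GL}_4(\mathbb F_3)$) verifies them by brute force:

```python
import itertools
import numpy as np

p, size = 3, 4  # U_4: 4 x 4 upper unitriangular matrices over F_3 (n = 3)

def e(i, j):
    """The matrix e_{ij}: identity plus a 1 in entry (i, j) (1-indexed, i < j)."""
    m = np.eye(size, dtype=int)
    m[i - 1, j - 1] = 1
    return m

def inv(a):
    """Inverse in U_4: with N = a - I nilpotent, (I+N)^{-1} = I - N + N^2 - N^3."""
    I = np.eye(size, dtype=int)
    N = (a - I) % p
    out, term = I.copy(), I.copy()
    for _ in range(size - 1):
        term = (-term @ N) % p
        out = (out + term) % p
    return out

def comm(a, b):
    """Group commutator [a, b] = a b a^{-1} b^{-1}, entries reduced mod p."""
    return (a @ b @ inv(a) @ inv(b)) % p

I = np.eye(size, dtype=int)
# [e_{ij}, e_{jk}] = e_{ik} for i < j < k, and disjoint generators commute
assert np.array_equal(comm(e(1, 2), e(2, 3)), e(1, 3))
assert np.array_equal(comm(e(2, 3), e(3, 4)), e(2, 4))
assert np.array_equal(comm(e(1, 2), e(3, 4)), I)
# e_{1,n+1} as the iterated commutator [sigma_1, [sigma_2, sigma_3]]
assert np.array_equal(comm(e(1, 2), comm(e(2, 3), e(3, 4))), e(1, 4))

# the center: elements commuting with the generators sigma_1, sigma_2, sigma_3
gens = [e(1, 2), e(2, 3), e(3, 4)]
def unitriangular():
    slots = [(r, c) for r in range(size) for c in range(r + 1, size)]
    for vals in itertools.product(range(p), repeat=len(slots)):
        m = np.eye(size, dtype=int)
        for (r, c), v in zip(slots, vals):
            m[r, c] = v
        yield m
center = [g for g in unitriangular()
          if all(np.array_equal((g @ s) % p, (s @ g) % p) for s in gens)]
assert len(center) == p and any(np.array_equal(g, e(1, 4)) for g in center)
print("U_4 identities and center verified over F_3")
```

Since a unitriangular matrix is $I+N$ with $N$ strictly upper triangular (hence nilpotent), the inverse is computed by the finite geometric series rather than by general linear algebra, which keeps everything in exact integer arithmetic mod $p$.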
Then $\overline{U}_{n+1}$ admits a presentation with generators $\overline{\sigma}_1,\dots,\overline{\sigma}_n$ and relations $$\label{relation0-bar} \overline{\sigma}_i^p=1\qquad\text{for all $1\leq i\leq n$,}$$ $$\label{relation1-bar} [\overline{\sigma}_i,\overline{\sigma}_j]=1\qquad\text{for all $1\leq i\leq j-2\leq n-2$,}$$ $$\label{relation2-bar} [\overline{\sigma}_i,[\overline{\sigma}_i,\overline{\sigma}_{i+1}]]=[\overline{\sigma}_{i+1},[\overline{\sigma}_i,\overline{\sigma}_{i+1}]]\qquad\text{for all $1\leq i\leq n-2$,}$$ $$\label{relation3-bar} [[\overline{\sigma}_i,\overline{\sigma}_{i+1}],[\overline{\sigma}_{i+1},\overline{\sigma}_{i+2}]]=1\qquad\text{for all $1\leq i\leq n-3$,}$$ $$\label{relation4-bar} [\overline{\sigma}_1,[\overline{\sigma}_2,\dots,[\overline{\sigma}_{n-2},[\overline{\sigma}_{n-1},\overline{\sigma}_n]]\dots]]=1.$$ We write $u_{ij}\colon U_{n+1}\to \mathbb Z/p\mathbb Z$ for the $(i,j)$-th coordinate function on $U_{n+1}$. Note that $u_{ij}$ is not a group homomorphism unless $j=i+1$. We have a commutative diagram $$\label{central-exact-seq} \begin{tikzcd} 1 \arrow[r] & Z_{n+1}\arrow[r] & U_{n+1}\arrow[r]\arrow[dr] & \overline{U}_{n+1}\arrow[r]\arrow[d] & 1\\ &&& (\mathbb Z/p\mathbb Z)^n \end{tikzcd}$$ where the row is a central exact sequence and the homomorphism $U_{n+1}\to (\mathbb Z/p\mathbb Z)^n$ is given by $(u_{12},u_{23},\dots, u_{n,n+1})$. We also let $$Q_{n+1}\coloneqq \operatorname{Ker}[U_{n+1}\to (\mathbb Z/p\mathbb Z)^n],\quad \overline{Q}_{n+1}\coloneqq \operatorname{Ker}[\overline{U}_{n+1}\to (\mathbb Z/p\mathbb Z)^n]=Q_{n+1}/Z_{n+1}.$$ Note that $Z_{n+1}\subset Q_{n+1}$, with equality when $n=2$. Let $G$ be a profinite group. The complex $(C^{\bullet}(G,\mathbb Z/p\mathbb Z),\partial)$ of mod $p$ non-homogeneous continuous cochains of $G$ with the standard cup product is a differential graded ring.
Therefore $H^{\bullet}(G,\mathbb Z/p\mathbb Z)=H^{\bullet}(C^{\bullet}(G,\mathbb Z/p\mathbb Z),\partial)$ is endowed with Massey products. The following theorem is due to Dwyer [@dwyer1975homology]. **Theorem 7** (Dwyer). *Let $p$ be a prime number, let $G$ be a profinite group, let $\chi_1,\dots,\chi_n\in H^1(G,\mathbb Z/p\mathbb Z)$, and write $\chi\colon G\to (\mathbb Z/p\mathbb Z)^n$ for the continuous homomorphism with components $(\chi_1,\dots,\chi_n)$. Consider ([\[central-exact-seq\]](#central-exact-seq){reference-type="ref" reference="central-exact-seq"}).* *(1) The Massey product $\langle{\chi_1,\dots,\chi_n}\rangle$ is defined if and only if $\chi$ lifts to a continuous homomorphism $G\to \overline{U}_{n+1}$.* *(2) The Massey product $\langle{\chi_1,\dots,\chi_n}\rangle$ vanishes if and only if $\chi$ lifts to a continuous homomorphism $G\to U_{n+1}$.* *Proof.* See [@dwyer1975homology] for Dwyer's original proof in the setting of abstract groups, and [@efrat2014zassenhaus] or [@harpaz2019massey Proposition 2.2] for the statement in the case of profinite groups. ◻ Theorem 7 may be rephrased as follows. **Corollary 8**. *Let $p$ be a prime, $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$, and let $a_1,\dots,a_n\in F^\times$. The Massey product $\langle{a_1,\dots,a_n}\rangle\subset H^2(F,\mathbb Z/p\mathbb Z)$ is defined (resp. vanishes) if and only if there exists a Galois $\overline{U}_{n+1}$-algebra $K/F$ (resp. a Galois $U_{n+1}$-algebra $L/F$) such that $K^{\overline{Q}_{n+1}}\simeq F_{a_1,\dots,a_n}$ (resp. $L^{{Q}_{n+1}}\simeq F_{a_1,\dots,a_n}$) as $(\mathbb Z/p\mathbb Z)^n$-algebras.* *Proof.* This follows from Theorem 7 and ([\[galois-alg\]](#galois-alg){reference-type="ref" reference="galois-alg"}).
◻ We will apply the lemma above to the cartesian square of groups $$\label{phi-phi'} \begin{tikzcd} \overline{U}_{n+1} \arrow[d,"\varphi'_{n+1}"] \arrow[r,"\varphi_{n+1}"] & U_n \arrow[d,"\varphi'_n"] \\ U_n \arrow[r,"\varphi_n"] & U_{n-1} \end{tikzcd}$$ where $\varphi_{n+1}$ (respectively, $\varphi'_{n+1}$) is the homomorphism induced by restriction from $\overline{U}_{n+1}$ to the top-left (respectively, bottom-right) $n\times n$ subsquare $U_n$ in $U_{n+1}$, and similarly for $\varphi_n$ and $\varphi'_n$. The fact that the square ([\[phi-phi\'\]](#phi-phi'){reference-type="ref" reference="phi-phi'"}) is cartesian is proved in [@merkurjev2022degenerate Proposition 2.7] when $p=2$. The proof extends to odd $p$ without change. # Massey products and Galois algebras {#u5-bar-section} In this section, we let $p$ be a prime number and we let $F$ be a field. With the exception of Proposition 14, we assume that $\operatorname{char}(F)\neq p$ and that $F$ contains a primitive $p$-th root of unity $\zeta$. ## Galois $U_3$-algebras Let $a,b\in F^\times$, and suppose that $(a,b)=0$ in $\operatorname{Br}(F)$. By Lemma 5, we may fix $\alpha\in F_a^\times$ and $\beta\in F_b^\times$ such that $N_a(\alpha)=b$ and $N_b(\beta)=a$. We write $(\mathbb Z/p\mathbb Z)^2=\langle{\sigma_a,\sigma_b}\rangle$, and we view $F_{a,b}$ as a Galois $(\mathbb Z/p\mathbb Z)^2$-algebra as in the previous section. The projection $U_3\to \overline{U}_3=(\mathbb Z/p\mathbb Z)^2$ sends $e_{12}\mapsto \sigma_a$ and $e_{23}\mapsto \sigma_b$.
We define the following elements of $U_3$: $$\sigma_a\coloneqq e_{12},\qquad \sigma_b\coloneqq e_{23},\qquad \tau\coloneqq e_{13}=[\sigma_a,\sigma_b].$$ Suppose given $x\in F_a^\times$ such that $$\label{u3-x}(\sigma_a-1)x=\frac{b}{\alpha^p}.$$ The étale $F$-algebra $K\coloneqq (F_{a,b})_x$ has the structure of a Galois $U_3$-algebra such that the Galois $(\mathbb Z/p\mathbb Z)^2$-algebra $K^{Q_3}$ is equal to $F_{a,b}$, and $$\label{u3-algebra-1}(\sigma_a-1)x^{1/p}=\frac{b^{1/p}}{\alpha},\qquad (\sigma_b-1)x^{1/p}=1,\qquad (\tau-1)x^{1/p}=\zeta^{-1}.$$ Similarly, suppose given $y\in F_b^\times$ such that $$\label{u3-y}(\sigma_b-1)y=\frac{a}{\beta^p}.$$ The étale $F$-algebra $K\coloneqq (F_{a,b})_y$ has the structure of a Galois $U_3$-algebra, such that the Galois $(\mathbb Z/p\mathbb Z)^2$-algebra $K^{Q_3}$ is equal to $F_{a,b}$, and $$\label{u3-algebra-2}(\sigma_a-1)y^{1/p}=1,\qquad (\sigma_b-1)y^{1/p}=\frac{a^{1/p}}{\beta},\qquad (\tau-1)y^{1/p}=\zeta.$$ In ([\[u3-algebra-1\]](#u3-algebra-1){reference-type="ref" reference="u3-algebra-1"}) and ([\[u3-algebra-2\]](#u3-algebra-2){reference-type="ref" reference="u3-algebra-2"}), the relation involving $\tau$ follows from the first two. If $x\in F_a^\times$ satisfies ([\[u3-x\]](#u3-x){reference-type="ref" reference="u3-x"}), then so does $ax$. We may thus apply ([\[u3-algebra-1\]](#u3-algebra-1){reference-type="ref" reference="u3-algebra-1"}) to $(F_{a,b})_{ax}$. Therefore $(F_{a,b})_{ax}$ has the structure of a Galois $U_3$-algebra, where $U_3$ acts via $\overline{U}_3=\operatorname{Gal}(F_{a,b}/F)$ on $F_{a,b}$, and $$(\sigma_a-1)(ax)^{1/p}=\frac{b^{1/p}}{\alpha},\qquad (\sigma_b-1)(ax)^{1/p}=1,\qquad (\tau-1)(ax)^{1/p}=\zeta^{-1}.$$ Similarly, if $y\in F_b^\times$ satisfies ([\[u3-y\]](#u3-y){reference-type="ref" reference="u3-y"}), we may apply ([\[u3-algebra-2\]](#u3-algebra-2){reference-type="ref" reference="u3-algebra-2"}) to $(F_{a,b})_{by}$. 
Therefore $(F_{a,b})_{by}$ admits a Galois $U_3$-algebra structure, where $U_3$ acts via $\overline{U}_3=\operatorname{Gal}(F_{a,b}/F)$ on $F_{a,b}$, and $$(\sigma_a-1)(by)^{1/p}=1,\qquad (\sigma_b-1)(by)^{1/p}=\frac{a^{1/p}}{\beta},\qquad (\tau-1)(by)^{1/p}=\zeta.$$ **Lemma 9**. *(1) Let $x\in F_a^\times$ satisfy ([\[u3-x\]](#u3-x){reference-type="ref" reference="u3-x"}), and consider the Galois $U_3$-algebras $(F_{a,b})_x$ and $(F_{a,b})_{ax}$ as in ([\[u3-algebra-1\]](#u3-algebra-1){reference-type="ref" reference="u3-algebra-1"}). Then $(F_{a,b})_x\simeq (F_{a,b})_{ax}$ as Galois $U_3$-algebras.* *(2) Let $y\in F_b^\times$ satisfy ([\[u3-y\]](#u3-y){reference-type="ref" reference="u3-y"}), and consider the Galois $U_3$-algebras $(F_{a,b})_y$ and $(F_{a,b})_{by}$ as in ([\[u3-algebra-2\]](#u3-algebra-2){reference-type="ref" reference="u3-algebra-2"}). Then $(F_{a,b})_y\simeq (F_{a,b})_{by}$ as Galois $U_3$-algebras.* *Proof.* (1) The automorphism $\sigma_b^{-1}\colon F_{a,b}\to F_{a,b}$ extends to an isomorphism of étale algebras $f\colon (F_{a,b})_x\to (F_{a,b})_{ax}$ by sending $x^{1/p}$ to $(ax)^{1/p}a^{-1/p}$. The map $f$ is well defined because $f(x^{1/p})^p=x=[(ax)^{1/p}a^{-1/p}]^p$. We check that it is $U_3$-equivariant. This is true on $F_{a,b}$ because $\sigma_a\sigma_b=\sigma_b\sigma_a$ on $F_{a,b}$. Moreover, since $\sigma_a(a^{-1/p})=\zeta^{-1}a^{-1/p}$ and $f(b^{1/p})=\sigma_b^{-1}(b^{1/p})=\zeta^{-1}b^{1/p}$, $$\begin{aligned} \sigma_a(f(x^{1/p}))=\sigma_a((ax)^{1/p})\cdot \sigma_a(a^{-1/p})=(b^{1/p}/\alpha)(ax)^{1/p}\cdot\zeta^{-1} a^{-1/p} \\ =(\zeta^{-1} b^{1/p}/\alpha)\cdot (ax)^{1/p}a^{-1/p}=f((b^{1/p}/\alpha)(x^{1/p}))=f(\sigma_a(x^{1/p})) \end{aligned}$$ and $$\sigma_b(f(x^{1/p}))=\sigma_b((ax)^{1/p})\cdot \sigma_b(a^{-1/p})=(ax)^{1/p}a^{-1/p}=f(x^{1/p})=f(\sigma_b(x^{1/p})).$$ Thus $f$ is $U_3$-equivariant, as desired. \(2\) The proof is similar to that of (1). ◻ **Proposition 10**.
*Let $a,b\in F^\times$ be such that $(a,b)=0$ in $\operatorname{Br}(F)$, and fix $\alpha\in F_a^\times$ and $\beta\in F_b^\times$ such that $N_a(\alpha)=b$ and $N_b(\beta)=a$.* *(1) Every Galois $U_3$-algebra $K$ over $F$ such that $K^{Q_3}\simeq F_{a,b}$ as $(\mathbb Z/p\mathbb Z)^2$-algebras is of the form $(F_{a,b})_x$ for some $x\in F_a^\times$ as in ([\[u3-x\]](#u3-x){reference-type="ref" reference="u3-x"}), with $U_3$-action given by ([\[u3-algebra-1\]](#u3-algebra-1){reference-type="ref" reference="u3-algebra-1"}).* *(2) Every Galois $U_3$-algebra $K$ over $F$ such that $K^{Q_3}\simeq F_{a,b}$ as $(\mathbb Z/p\mathbb Z)^2$-algebras is of the form $(F_{a,b})_y$ for some $y\in F_b^\times$ as in ([\[u3-y\]](#u3-y){reference-type="ref" reference="u3-y"}), with $U_3$-action given by ([\[u3-algebra-2\]](#u3-algebra-2){reference-type="ref" reference="u3-algebra-2"}).* *(3) Let $(F_{a,b})_x$ and $(F_{a,b})_y$ be Galois $U_3$-algebras as in ([\[u3-algebra-1\]](#u3-algebra-1){reference-type="ref" reference="u3-algebra-1"}) and ([\[u3-algebra-2\]](#u3-algebra-2){reference-type="ref" reference="u3-algebra-2"}), respectively. The Galois $U_3$-algebras $(F_{a,b})_x$ and $(F_{a,b})_y$ are isomorphic if and only if there exists $w\in F_{a,b}^\times$ such that $$w^p=xy,\qquad (\sigma_a-1)(\sigma_b-1)w=\zeta.$$* *Proof.* (1) Since $Q_3=\langle{\tau}\rangle\simeq \mathbb Z/p\mathbb Z$ and $K^{Q_3}\simeq F_{a,b}$ as $(\mathbb Z/p\mathbb Z)^2$-algebras, we have an isomorphism of étale $F_{a,b}$-algebras $K\simeq (F_{a,b})_z$, for some $z\in F_{a,b}^\times$ such that $(\tau-1)z^{1/p}=\zeta^{-1}$. We may suppose that $K=(F_{a,b})_z$. As $\tau$ commutes with $\sigma_b$ we have $$(\tau-1)(\sigma_b-1)z^{1/p}=(\sigma_b-1)(\tau-1)z^{1/p}=(\sigma_b-1)\zeta^{-1}=1,$$ hence $(\sigma_b-1)z^{1/p}\in F_{a,b}^\times$. By Hilbert's Theorem 90 for the extension $F_{a,b}/F_a$, there is $t\in F_{a,b}^\times$ such that $(\sigma_b-1)z^{1/p}=(\sigma_b-1)t$. 
Replacing $z$ by $zt^{-p}$, we may thus assume that $(\sigma_b-1)z^{1/p}=1$. In particular, $z\in F_a^\times$. Since $(\tau-1)z^{1/p}=\zeta^{-1}$, we have $\sigma_b\sigma_a(z^{1/p})=\zeta \sigma_a\sigma_b(z^{1/p})$. Thus $$(\sigma_b-1) (\sigma_a-1)z^{1/p} =(\sigma_b\sigma_a-\sigma_a\sigma_b+ (\sigma_a-1)(\sigma_b-1))z^{1/p}=\zeta (\sigma_a-1) (\sigma_b-1)z^{1/p} = \zeta,$$ and hence $(\sigma_a-1)z^{1/p} = b^{1/p}/\alpha'$ for some $\alpha'\in F_a^\times$. Moreover $N_a(\alpha'/\alpha)=b/b=1$, and so by Hilbert's Theorem 90 there exists $\theta\in F_a^{\times}$ such that $\alpha'/\alpha=(\sigma_a-1)\theta$. We define $x\coloneqq z\theta^p\in F_a^\times$, and set $x^{1/p}\coloneqq z^{1/p}\theta\in (F_{a,b})_z^\times$. Then $K=(F_{a,b})_x$, where $$(\sigma_a-1)x^{1/p}=(\sigma_a-1)z^{1/p}\cdot (\sigma_a-1)\theta=\frac{b^{1/p}}{\alpha'}\cdot\frac{\alpha'}{\alpha}=\frac{b^{1/p}}{\alpha}$$ and $(\sigma_b-1)x^{1/p}=1$, as desired. \(2\) The proof is analogous to that of (1). \(3\) Suppose given an isomorphism of Galois $U_3$-algebras between $(F_{a,b})_x$ and $(F_{a,b})_y$. Let $t\in (F_{a,b})_x$ be the image of $y^{1/p}$ under the isomorphism and set $$w'\coloneqq x^{1/p} t \in (F_{a,b})_x.$$ Set $y'\coloneqq t^p$. We have $(\tau-1)w'=\zeta^{-1}\cdot\zeta=1$, and hence $w'\in F_{a,b}^\times$. We have $(w')^p=xy'$. Since $F_b$ coincides with the $\langle{\sigma_a,\tau}\rangle$-invariant subalgebra of $(F_{a,b})_x$ and $(F_{a,b})_y$, the isomorphism $(F_{a,b})_y\to (F_{a,b})_x$ restricts to an isomorphism of Galois $\mathbb Z/p\mathbb Z$-algebras $F_b \to F_b$. Since the automorphism group of $F_b$ as a Galois $(\mathbb Z/p\mathbb Z)$-algebra is $\mathbb Z/p\mathbb Z$, generated by $\sigma_b$, this isomorphism $F_b \to F_b$ is equal to $\sigma_b^i$ for some $i\geq 0$. Thus $y'=\sigma_b^i(y)$.
Define $$w\coloneqq w'\cdot\prod_{j=0}^{i-1}\sigma_b^j(\beta)\Big/a^{i/p}\in F_{a,b}^\times.$$ We have $$(1-\sigma_b^i)y=\Big(\sum_{j=0}^{i-1}\sigma_b^j(1-\sigma_b)\Big)y=\prod_{j=0}^{i-1}\sigma_b^j(\beta^p)\Big/a^i=w^p/(w')^p.$$ Therefore $$\label{w-eq1}w^p=(w')^p(1-\sigma_b^i)y=x\sigma_b^i(y)(1-\sigma_b^i)y=xy.$$ We have $(\sigma_b-1)x^{1/p}=1$ and $$(\sigma_a-1)(\sigma_b-1)t=(\sigma_a-1)(\sigma_b-1)y^{1/p}=(\sigma_a-1)(a^{1/p}/\beta)=\zeta,$$ therefore $$(\sigma_a-1)(\sigma_b-1)w'=(\sigma_a-1)(\sigma_b-1)t=\zeta.$$ Since $(\sigma_a-1)(\sigma_b-1)a^{1/p}=1$ and $(\sigma_a-1)(\sigma_b-1)\beta=1$, we conclude that $$\label{w-eq2}(\sigma_a-1)(\sigma_b-1)w=(\sigma_a-1)(\sigma_b-1)w'=\zeta.$$ Putting ([\[w-eq1\]](#w-eq1){reference-type="ref" reference="w-eq1"}) and ([\[w-eq2\]](#w-eq2){reference-type="ref" reference="w-eq2"}) together, we see that $w$ satisfies the conditions of (3). Conversely, suppose given $w'\in F_{a,b}^\times$ such that $$xy=(w')^p,\qquad (\sigma_a-1)(\sigma_b-1)w'=\zeta.$$ **Claim 11**. There exists $w\in F_{a,b}^\times$ such that $$xy=w^p,\qquad (\sigma_a-1)w=\zeta^{-i}\frac{b^{1/p}}{\alpha},\qquad (\sigma_b-1)w=\zeta^{-j}\frac{a^{1/p}}{\beta}$$ for some integers $i$ and $j$. *Proof of Claim 11.* We first find $\eta_a\in F_a^\times$ such that $$\label{eta_a} \eta_a^p=1, \qquad (\sigma_a-1)(w'/\eta_a)=\zeta^{-i}\frac{b^{1/p}}{\alpha}.$$ We have $$(\sigma_a - 1) (w')^p=(\sigma_a - 1) x=\frac{b}{\alpha^p}.$$ Let $$\zeta_a\coloneqq (\sigma_a-1)w'\cdot \alpha \cdot b^{-1/p}\in F_{a,b}^\times.$$ We have $\zeta_a^p=1$. Moreover, $(\sigma_b-1)\zeta_a=\zeta\cdot 1\cdot\zeta^{-1}=1$, that is, $\zeta_a$ belongs to $F_a^\times$. If $F_a$ is a field, this implies that $\zeta_a=\zeta^i$ for some integer $i$, and ([\[eta_a\]](#eta_a){reference-type="ref" reference="eta_a"}) holds for $\eta_a=1$. Suppose that $F_a$ is not a field.
Then $F_a\simeq F^p$, where $\sigma_a$ acts by cyclically permuting the coordinates: $$\sigma_a(x_1,x_2,\dots,x_p)=(x_2,\dots,x_p, x_1).$$ We have $\zeta_a=(\zeta_1,\dots,\zeta_p)$ in $F_a=F^p$, where $\zeta_i\in F^\times$ is a $p$-th root of unity for all $i$. We have $N_a(\zeta_a)=N_a(\alpha)/b=1$, and so $\zeta_1\cdots\zeta_p=1$. Inductively define $\eta_1\coloneqq 1$ and $\eta_{i+1}\coloneqq \zeta_i\eta_i$ for all $i=1,\dots,p-1$. Then $$\eta_1/\eta_p=(\eta_1/\eta_2)\cdot(\eta_2/\eta_3)\cdots(\eta_{p-1}/\eta_p)=\zeta_1^{-1}\zeta_2^{-1}\cdots\zeta_{p-1}^{-1}=\zeta_p.$$ Therefore the element $\eta_a\coloneqq (\eta_1,\dots,\eta_p)\in F^p=F_a$ satisfies $\eta_a^p=1$ and $$(\sigma_a-1)\eta_a=(\eta_2/\eta_1,\dots,\eta_p/\eta_{p-1},\eta_1/\eta_p)=(\zeta_1,\dots,\zeta_{p-1},\zeta_p)=\zeta_a.$$ Thus $$\eta_a^p=1,\qquad (\sigma_a-1)(w'/\eta_a)=(\sigma_a-1)w'\cdot \zeta_a^{-1}=\frac{b^{1/p}}{\alpha}.$$ Independently of whether $F_a$ is a field or not, we have found $\eta_a$ satisfying ([\[eta_a\]](#eta_a){reference-type="ref" reference="eta_a"}). Similarly, we construct $\eta_b\in F_b^\times$ such that $$\label{eta_b} \eta_b^p=1,\qquad (\sigma_b-1)(w'/\eta_b)=\zeta^{-j}\frac{a^{1/p}}{\beta},$$ for some integer $j$. Set $w\coloneqq w'/(\eta_a\eta_b)\in F_{a,b}^\times$. Putting together ([\[eta_a\]](#eta_a){reference-type="ref" reference="eta_a"}) and ([\[eta_b\]](#eta_b){reference-type="ref" reference="eta_b"}), we deduce that $w$ satisfies the conclusion of Claim 11. ◻ Let $w\in F_{a,b}^\times$ be as in Claim 11.
By Lemma 9(1), applied $i$ times, the Galois $U_3$-algebra $(F_{a,b})_x$ is isomorphic to $(F_{a,b})_{a^ix}$, where $$(\sigma_a-1)(a^ix)^{1/p}=\frac{b^{1/p}}{\alpha},\qquad (\sigma_b-1)(a^ix)^{1/p}=1.$$ By Lemma 9(2), applied $j$ times, the Galois $U_3$-algebra $(F_{a,b})_y$ is isomorphic to $(F_{a,b})_{b^jy}$, where $$(\sigma_a-1)(b^jy)^{1/p}=1,\qquad (\sigma_b-1)(b^jy)^{1/p}=\frac{a^{1/p}}{\beta}.$$ It thus suffices to construct an isomorphism of $U_3$-algebras $(F_{a,b})_{a^ix}\simeq (F_{a,b})_{b^jy}$. Let $$\tilde{w}\coloneqq wa^{i/p}b^{j/p}\in F_{a,b}^\times,$$ so that $$(\sigma_a-1)\tilde{w}=\frac{b^{1/p}}{\alpha},\qquad (\sigma_b-1)\tilde{w}=\frac{a^{1/p}}{\beta}.$$ Let $f\colon (F_{a,b})_{a^ix}\to (F_{a,b})_{b^jy}$ be the isomorphism of étale algebras which is the identity on $F_{a,b}$ and sends $(a^ix)^{1/p}$ to $\tilde{w}/(b^jy)^{1/p}$. Note that $f$ is well defined because $$(\tilde{w})^p=w^pa^ib^j=(a^ix)(b^jy).$$ Moreover, $$(\sigma_a-1)(\tilde{w}/(b^jy)^{1/p})=\frac{b^{1/p}}{\alpha}=(\sigma_a-1)(a^ix)^{1/p},$$ $$(\sigma_b-1)(\tilde{w}/(b^jy)^{1/p})=\frac{a^{1/p}}{\beta}\cdot \frac{\beta}{a^{1/p}}=1=(\sigma_b-1)(a^ix)^{1/p},$$ and hence $f$ is $U_3$-equivariant. ◻ ## Galois $\overline{U}_4$-algebras Let $a,b,c\in F^\times$ be such that $(a,b)=(b,c)=0$ in $\operatorname{Br}(F)$. By Lemma 5, we may fix $\alpha\in F_a^\times$ and $\gamma\in F_c^\times$ such that $N_a(\alpha)=N_c(\gamma)=b$. We have $\operatorname{Gal}(F_{a,b,c}/F)=\langle{\sigma_a,\sigma_b,\sigma_c}\rangle$. The projection map $\overline{U}_4\to (\mathbb Z/p\mathbb Z)^3$ is given by $\overline{e}_{12}\mapsto\sigma_a$, $\overline{e}_{23}\mapsto\sigma_b$, $\overline{e}_{34}\mapsto\sigma_c$. Its kernel $\overline{Q}_4\subset \overline{U}_4$ is isomorphic to $(\mathbb Z/p\mathbb Z)^2$, generated by $\overline{e}_{13}$ and $\overline{e}_{24}$.
We define the following elements of $\overline{U}_4$: $$\sigma_a\coloneqq \overline{e}_{12},\qquad \sigma_b\coloneqq \overline{e}_{23},\qquad \sigma_c\coloneqq \overline{e}_{34},\qquad \tau_{ab}\coloneqq \overline{e}_{13},\qquad \tau_{bc}\coloneqq \overline{e}_{24}.$$ Let $x\in F_a^\times$ and $x'\in F_c^\times$ be such that $$\label{uu4-0}(\sigma_a-1)x=\frac{b}{\alpha^p},\qquad (\sigma_c-1)x'=\frac{b}{\gamma^p},$$ and consider the Galois $\overline{U}_4$-algebra $K\coloneqq (F_{a,b,c})_{x,x'}$, where $\overline{U}_4$ acts on $F_{a,b,c}$ via the surjection onto $\operatorname{Gal}(F_{a,b,c}/F)$, and $$\label{uu4-1} (\sigma_a-1)x^{1/p}=\frac{b^{1/p}}{\alpha},\qquad (\sigma_b-1)x^{1/p}=1,\qquad (\sigma_c-1)x^{1/p}=1,$$ $$\label{uu4-2} (\tau_{ab}-1)x^{1/p}=\zeta^{-1},\qquad (\tau_{bc}-1)x^{1/p}=1,$$ $$\label{uu4-3} (\sigma_a-1)(x')^{1/p}=1,\qquad (\sigma_b-1)(x')^{1/p}=1,\qquad (\sigma_c-1)(x')^{1/p}=\frac{b^{1/p}}{\gamma},$$ $$\label{uu4-4}(\tau_{ab}-1)(x')^{1/p}=1,\qquad (\tau_{bc}-1)(x')^{1/p}=\zeta.$$ Note that ([\[uu4-2\]](#uu4-2){reference-type="ref" reference="uu4-2"}) follows from ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"}) and ([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}) follows from ([\[uu4-3\]](#uu4-3){reference-type="ref" reference="uu4-3"}). We leave it to the reader to check that the relations ([\[relation0-bar\]](#relation0-bar){reference-type="ref" reference="relation0-bar"})-([\[relation4-bar\]](#relation4-bar){reference-type="ref" reference="relation4-bar"}) are satisfied. **Proposition 12**. *Let $a,b,c\in F^\times$ be such that $(a,b)=(b,c)=0$ in $\operatorname{Br}(F)$. Fix $\alpha\in F_a^\times$ and $\gamma\in F_c^\times$ such that $N_a(\alpha)=N_c(\gamma)=b$. Let $K$ be a Galois $\overline{U}_4$-algebra such that $K^{\overline{Q}_4}\simeq F_{a,b,c}$ as $(\mathbb Z/p\mathbb Z)^3$-algebras.
Then there exist $x\in F_a^\times$ and $x'\in F_c^\times$ such that $K\simeq (F_{a,b,c})_{x,x'}$ as Galois $\overline{U}_4$-algebras, where $\overline{U}_4$ acts on $(F_{a,b,c})_{x,x'}$ by ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"})-([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}).* *Proof.* Let $H$ (resp. $H'$) be the subgroup of $\overline{U}_4$ generated by $\sigma_c$ and $\tau_{bc}$ (resp. $\sigma_a$ and $\tau_{ab}$), and let $S$ be the subgroup of $\overline{U}_4$ generated by $H$ and $H'$. Note that $K^H$ is a Galois $U_3$-algebra over $F$ such that $(K^H)^{Q_3}\simeq F_{a,b}$ as $(\mathbb Z/p\mathbb Z)^2$-algebras and $K^S\simeq F_b$ as $(\mathbb Z/p\mathbb Z)$-algebras. Thus by Proposition 10(1) there exists $x\in F_a^\times$ such that $K^H\simeq (F_{a,b})_x$ as Galois $U_3$-algebras. Similarly, by Proposition 10(2) there exists $x'\in F_c^\times$ such that $K^{H'}\simeq (F_{b,c})_{x'}$ as Galois $U_3$-algebras. Therefore $x$ satisfies ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"}) and $x'$ satisfies ([\[uu4-3\]](#uu4-3){reference-type="ref" reference="uu4-3"}). We apply part (2) of the lemma above to ([\[phi-phi\'\]](#phi-phi'){reference-type="ref" reference="phi-phi'"}). We obtain the isomorphisms of $\overline{U}_4$-algebras $$K\simeq K^H\otimes_{K^S}K^{H'}\simeq (F_{a,b,c})_{x,x'},$$ where $(F_{a,b,c})_{x,x'}$ is the $\overline{U}_4$-algebra given by ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"}) and ([\[uu4-3\]](#uu4-3){reference-type="ref" reference="uu4-3"}). ◻ ## Galois $U_4$-algebras Let $a,b,c\in F^\times$, and suppose that $(a,b)=(b,c)=0$ in $\operatorname{Br}(F)$. We write $(\mathbb Z/p\mathbb Z)^3=\langle{\sigma_a,\sigma_b,\sigma_c}\rangle$ and view $F_{a,b,c}$ as a Galois $(\mathbb Z/p\mathbb Z)^3$-algebra over $F$ as before. The quotient map $U_4\to (\mathbb Z/p\mathbb Z)^3$ is given by $e_{12}\mapsto \sigma_a$, $e_{23}\mapsto \sigma_b$ and $e_{34}\mapsto \sigma_c$.
The kernel $Q_4$ of this map is generated by $e_{13}$, $e_{24}$ and $e_{14}$ and is isomorphic to $(\mathbb Z/p\mathbb Z)^3$. We define the following elements of $U_4$: $$\sigma_a\coloneqq e_{12},\qquad \sigma_b\coloneqq e_{23},\qquad \sigma_c\coloneqq e_{34},$$$$\tau_{ab}\coloneqq e_{13}=[\sigma_a,\sigma_b],\qquad \tau_{bc}\coloneqq e_{24}=[\sigma_b,\sigma_c],\qquad \rho\coloneqq e_{14}=[\sigma_a,\tau_{bc}]=[\tau_{ab},\sigma_c].$$ **Proposition 13**. *Let $a,b,c\in F^\times$ be such that $(a,b)=(b,c)=0$ in $\operatorname{Br}(F)$. Let $\alpha\in F_a^\times$ and $\gamma\in F_c^\times$ be such that $N_a(\alpha)=b$ and $N_c(\gamma)=b$. Let $K$ be a Galois $\overline{U}_4$-algebra such that $K^{\overline{Q}_4}\simeq F_{a,b,c}$ as $(\mathbb Z/p\mathbb Z)^3$-algebras.* *There exists a Galois $U_4$-algebra $L$ over $F$ such that $L^{Z_4}\simeq K$ as $\overline{U}_4$-algebras if and only if there exist $u,u'\in F_{a,c}^\times$ such that $$\alpha\cdot (\sigma_a-1)u=\gamma\cdot(\sigma_c-1)u',$$ and such that $K$ is isomorphic to the Galois $\overline{U}_4$-algebra $(F_{a,b,c})_{x,x'}$ determined by ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"})-([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}), where $x=N_c(u)\in F_a^\times$ and $x'=N_a(u')\in F_c^\times$.* *Proof.* Suppose that $K=(F_{a,b,c})_{x,x'}$, with $\overline{U}_4$-action determined by ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"})-([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}). Let $L$ be a Galois $U_4$-algebra over $F$ such that $L^{Z_4}=K$, and let $y\in K^\times$ be such that $L=K_y$. We have $\operatorname{Gal}(L/F_{a,b,c})=Q_4=\langle{\tau_{ab}, \tau_{bc},\rho}\rangle\simeq (\mathbb Z/p\mathbb Z)^3$, and hence one may choose $y$ in $F_{a,b,c}^\times$ and such that $$(\tau_{ab}-1)y^{1/p}=1,\qquad (\tau_{bc}-1)y^{1/p}=1,\qquad (\rho-1)y^{1/p}=\zeta^{-1}.$$ The element $\sigma_b$ commutes with $\tau_{ab}, \tau_{bc}$ and $\rho$.
Hence $$\tau_{ab}(\sigma_b-1)(y^{1/p})=(\sigma_b-1)\tau_{ab}(y^{1/p})=(\sigma_b-1)(y^{1/p}).$$ Similarly $$\tau_{bc}(\sigma_b-1)(y^{1/p})=(\sigma_b-1)(y^{1/p})$$ and $$\rho(\sigma_b-1)(y^{1/p})=(\sigma_b-1)(\zeta\cdot y^{1/p})=(\sigma_b-1)(y^{1/p}).$$ It follows that $(\sigma_b-1)(y^{1/p})\in F_{a,b,c}^\times$. By Hilbert's Theorem 90, applied to $F_{a,b,c}/F_{a,c}$, there is $q\in F_{a,b,c}^\times$ such that $(\sigma_b-1)(y^{1/p})=(\sigma_b-1)q$. Replacing $y$ by $y/ q^p$, we may assume that $\sigma_b(y^{1/p})=y^{1/p}$. In particular, $y\in F_{a,c}^\times$. We have: $$\begin{aligned} \rho(\sigma_a-1) (y^{1/p})= &\ (\sigma_a -1)\rho(y^{1/p})=(\sigma_a -1)(\zeta^{-1}\cdot y^{1/p})=(\sigma_a -1)(y^{1/p}), \\ \sigma_b(\sigma_a-1) (y^{1/p})= &\ (\sigma_a\sigma_b{\tau_{ab}}^{-1} -\sigma_b)(y^{1/p})=(\sigma_a-1) (y^{1/p}),\\ \tau_{ab}(\sigma_a-1) (y^{1/p})= &\ (\sigma_a -1)\tau_{ab}(y^{1/p})=(\sigma_a -1)(y^{1/p}),\\ \tau_{bc}(\sigma_a-1) (y^{1/p})= &\ (\rho^{-1} \sigma_a -1)\tau_{bc}(y^{1/p})=( \sigma_a \rho^{-1} -1)(y^{1/p})=\zeta\cdot (\sigma_a-1) (y^{1/p}). \end{aligned}$$ By ([\[uu4-3\]](#uu4-3){reference-type="ref" reference="uu4-3"})-([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}), analogous identities are satisfied by $(x')^{1/p}$: $$(\rho-1)(x')^{1/p}=(\sigma_b-1)(x')^{1/p}=(\tau_{ab}-1)(x')^{1/p}=1,\qquad (\tau_{bc}-1)(x')^{1/p}=\zeta.$$ Therefore $$(\sigma_a-1) (y^{1/p})=\frac{(x')^{1/p}}{u'}$$ for some $u'\in F_{a,c}^\times$. In particular, $x'= N_a(u')$. A similar computation shows that $$(\sigma_c-1) (y^{1/p})=\frac{x^{1/p}}{u}$$ for some $u\in F_{a,c}^\times$. In particular, $x= N_c(u)$. 
In addition, $$\frac{b^{1/p}}{\alpha}=(\sigma_a-1) (x^{1/p})=(\sigma_a-1) [u\cdot (\sigma_c-1)(y^{1/p})],$$ $$\frac{b^{1/p}}{\gamma}=(\sigma_c-1) ((x')^{1/p})=(\sigma_c-1) [u'\cdot (\sigma_a-1)(y^{1/p})].$$ Therefore $$\alpha\cdot (\sigma_a-1)u=\gamma\cdot(\sigma_c-1)u'.$$ Conversely, suppose given $u,u'\in F_{a,c}^\times$ such that $$\alpha\cdot(\sigma_a-1)u=\gamma\cdot(\sigma_c-1)u',\qquad x=N_c(u),\qquad x'=N_a(u').$$ Then $$(\sigma_a-1)x=(\sigma_a-1)N_c(u)=N_c(\sigma_a-1)u=N_c\left(\frac{\gamma}{\alpha}\right)=\frac{b}{\alpha^p},$$ $$(\sigma_c-1)x'=(\sigma_c-1)N_a(u')=N_a(\sigma_c-1)u'=N_a\left(\frac{\alpha}{\gamma}\right)=\frac{b}{\gamma^p}.$$ We have $$N_c \left(\frac{x}{u^p}\right)=\frac{N_c(x)}{N_c(u^p)}=\frac{x^p}{x^p}=1,$$ $$N_a \left(\frac{x'}{(u')^p}\right)=\frac{N_a(x')}{N_a((u')^p)}=\frac{(x')^p}{(x')^p}=1,$$ $$(\sigma_a-1)\left(\frac{x}{u^p}\right)=\frac{b}{\alpha^p\cdot (\sigma_a-1)u^p}= \frac{b}{\gamma^p\cdot (\sigma_c-1)(u')^p}=(\sigma_c-1)\left(\frac{x'}{(u')^p}\right).$$ By Hilbert's Theorem 90 applied to $F_{a,c}/F$, there is $y\in F_{a,c}^\times$ such that $$(\sigma_a-1)y=\frac{x'}{(u')^p} \quad\text{and}\quad (\sigma_c-1)y=\frac{x}{u^p}.$$ We consider the étale $F$-algebra $L\coloneqq K_y$ and make it into a Galois $U_4$-algebra such that $L^{Z_4}=K$. It suffices to describe the $U_4$-action on $y^{1/p}$. We set: $$(\sigma_a-1)(y^{1/p})=\frac{(x')^{1/p}}{u'},\quad (\sigma_b-1)(y^{1/p})=1,\quad (\sigma_c-1)(y^{1/p})=\frac{x^{1/p}}{u},$$ One checks that this definition is compatible with the relations ([\[relation0\]](#relation0){reference-type="ref" reference="relation0"})-([\[relation3\]](#relation3){reference-type="ref" reference="relation3"}), and hence that it makes $L$ into a Galois $U_4$-algebra such that $L^{Z_4}=K$. ◻ We use to give an alternative proof for the Massey Vanishing Conjecture for $n=3$ and arbitrary $p$. **Proposition 14**. 
*Let $p$ be a prime, let $F$ be a field, and let $\chi_1,\chi_2,\chi_3\in H^1(F,\mathbb Z/p\mathbb Z)$. The following are equivalent.* 1. *We have $\chi_1\cup\chi_2=\chi_2\cup\chi_3=0$ in $H^2(F,\mathbb Z/p\mathbb Z)$.* 2. *The Massey product $\langle{\chi_1,\chi_2,\chi_3}\rangle\subset H^2(F,\mathbb Z/p\mathbb Z)$ is defined.* 3. *The Massey product $\langle{\chi_1,\chi_2,\chi_3}\rangle\subset H^2(F,\mathbb Z/p\mathbb Z)$ vanishes.* *Proof.* It is clear that (3) implies (2) and that (2) implies (1). We now prove that (1) implies (3). The first part of the proof is a reduction to the case when $\operatorname{char}(F)\neq p$ and $F$ contains a primitive $p$-th root of unity, and follows [@minac2016triple Proposition 4.14]. Consider the short exact sequence $$\label{q4-u4}1\to Q_4\to U_4\to (\mathbb Z/p\mathbb Z)^3\to 1,$$ where the map $U_4\to (\mathbb Z/p\mathbb Z)^3$ comes from ([\[central-exact-seq\]](#central-exact-seq){reference-type="ref" reference="central-exact-seq"}). Recall from the paragraph preceding that the group $Q_4$ is abelian. Therefore, the homomorphism $\chi\coloneqq(\chi_1,\chi_2,\chi_3)\colon \Gamma_F \to (\mathbb Z/p\mathbb Z)^3$ induces a pullback map $$H^2((\mathbb Z/p\mathbb Z)^3, Q_4)\to H^2(F, Q_4).$$ We let $A\in H^2(F, Q_4)$ be the image of the class of ([\[q4-u4\]](#q4-u4){reference-type="ref" reference="q4-u4"}) under this map. By , for every finite extension $F'/F$ the Massey product $\langle{\chi_1,\chi_2,\chi_3}\rangle$ vanishes over $F'$ if and only if the restriction of $\chi$ to $\Gamma_{F'}$ lifts to $U_4$, and this happens if and only if $A$ restricts to zero in $H^2(F', Q_4)$. When $\operatorname{char}(F)=p$, we have $\operatorname{cd}(F)\leq 1$ by [@serre1997galois §2.2, Proposition 3]. Therefore $H^2(F,Q_4)=0$ and hence $A=0$. Thus (1) implies (3) when $\operatorname{char}(F)=p$. Suppose that $\operatorname{char}(F)\neq p$. 
There exists an extension $F'/F$ of prime-to-$p$ degree such that $F'$ contains a primitive $p$-th root of $1$. If (1) implies (3) for $F'$, then $A$ restricts to zero in $H^2(F', Q_4)$. By a restriction-corestriction argument, we deduce that $A$ vanishes, that is, (1) implies (3) for $F$. We may thus assume that $F$ contains a primitive $p$-th root of unity $\zeta$. Let $a,b,c\in F^\times$ be such that $\chi_a=\chi_1$, $\chi_b=\chi_2$ and $\chi_c=\chi_3$ in $H^1(F,\mathbb Z/p\mathbb Z)$. Since $(a,b)=(b,c)=0$ in $\operatorname{Br}(F)$, there exist $\alpha\in F_a^\times$ and $\gamma\in F_c^\times$ such that $N_a(\alpha)=N_c(\gamma)=b$. Since $N_{ac}(\gamma/\alpha)=N_c(\gamma)/N_a(\alpha)=1$ in $F_{ac}^\times$, by Hilbert's Theorem 90 there exists $t\in F_{a,c}^\times$ such that $\gamma/\alpha = (\sigma_a\sigma_c - 1)t$. Define $u,u'\in F_{a,c}^\times$ by $u\coloneqq \sigma_c(t)$ and $u'\coloneqq t^{-1}$. Then $$\alpha\cdot(\sigma_a-1)u=\alpha\cdot(\sigma_a\sigma_c-\sigma_c)t= \alpha\cdot(\sigma_a\sigma_c-1)t\cdot(\sigma_c-1)t^{-1}=\gamma\cdot(\sigma_c-1)u'.$$ Let $x\coloneqq N_c(u)\in F_a^\times$ and $x'\coloneqq N_a(u')\in F_c^\times$. Since $\sigma_a\sigma_c=\sigma_c\sigma_a$ on $F_{a,c}^\times$, we have $$(\sigma_a-1)x=N_c((\sigma_a-1)u)=N_c((\sigma_c-1)u'\cdot(\gamma/\alpha))=N_c(\gamma)/N_c(\alpha)=b/\alpha^p.$$ Similarly, $(\sigma_c-1)x'=b/\gamma^p$. Therefore $x,x'$ satisfy ([\[uu4-0\]](#uu4-0){reference-type="ref" reference="uu4-0"}). Let $K\coloneqq (F_{a,b,c})_{x,x'}$ be the Galois $\overline{U}_4$-algebra over $F$, with the $\overline{U}_4$-action given by ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"})-([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}). By , there exists a Galois $U_4$-algebra $L$ over $F$ such that $L^{Z_4}\simeq (F_{a,b,c})_{x,x'}$ as $\overline{U}_4$-algebras. In particular, $L^{Q_4}\simeq F_{a,b,c}$ as $(\mathbb Z/p\mathbb Z)^3$-algebras. By , we conclude that $\langle{a,b,c}\rangle$ vanishes. 
◻ ## Galois $\overline{U}_5$-algebras {#uu5-sec} Let $a,b,c,d\in F^\times$. We write $(\mathbb Z/p\mathbb Z)^4=\langle{\sigma_a,\sigma_b,\sigma_c,\sigma_d}\rangle$ and regard $F_{a,b,c,d}$ as a Galois $(\mathbb Z/p\mathbb Z)^4$-algebra over $F$ as in . **Proposition 15**. *Let $a,b,c,d \in F^\times$ be such that $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(F)$. Then the Massey product $\langle{a,b,c,d}\rangle$ is defined if and only if there exist $u\in F_{a,c}^\times$, $v\in F_{b,d}^\times$ and $w\in F_{b,c}^\times$ such that $$N_a(u)\cdot N_d(v) = w^p,\qquad (\sigma_b - 1)(\sigma_c - 1)w = \zeta.$$* *Proof.* Denote by $U_4^+$ and $U_4^-$ the top-left and bottom-right $4\times 4$ corners of $U_5$, respectively, and let $S\coloneqq U_4^+\cap U_4^-$ be the middle subgroup $U_3$. Let $Q_4^+$ and $Q_4^-$ be the kernels of the maps $U_4^+\to (\mathbb Z/p\mathbb Z)^3$ and $U_4^-\to (\mathbb Z/p\mathbb Z)^3$, respectively, and let $P_4^+$ and $P_4^-$ be the kernels of the maps $U_4^+\to U_3$ and $U_4^-\to U_3$, respectively. Suppose that $\langle a,b,c,d \rangle$ is defined. By , there exists a $\overline{U}_5$-algebra $L$ such that $L^{\overline{Q}_5}\simeq F_{a,b,c,d}$ as $(\mathbb Z/p\mathbb Z)^4$-algebras. Using , we fix $\alpha\in F_a^\times$ and $\gamma\in F_c^\times$ such that $N_a(\alpha)=b$ and $N_c(\gamma)=b$. By , there exist $u,u'\in F_{a,c}^\times$ such that, letting $x'\coloneqq N_c(u')$ and $x\coloneqq N_a(u)$, the $\overline{U}_4^+$-algebra $K_1$ induced by $L$ is isomorphic to the $\overline{U}_4^+$-algebra $(F_{a,b,c})_{x',x}$, where $\overline{U}_4^+$ acts via ([\[uu4-1\]](#uu4-1){reference-type="ref" reference="uu4-1"})-([\[uu4-4\]](#uu4-4){reference-type="ref" reference="uu4-4"}), and where the roles of $x$ and $x'$ have been switched. Similarly, there exist $v,v'\in F_{b,d}^\times$ such that, letting $z\coloneqq N_d(v)$ and $z'\coloneqq N_b(v')$, the $\overline{U}^-_4$-algebra $K_2$ induced by $L$ is isomorphic to $(F_{b,c,d})_{z,z'}$. 
Since the $U_3$-algebras $(K_1)^{P_4^+}$ and $(K_2)^{P_4^-}$ are equal, by (3) there exists $w\in F_{b,c}^\times$ such that $$N_a(u)\cdot N_d(v) = xz = w^p,\qquad (\sigma_b -1)(\sigma_c -1)w = \zeta.$$ Conversely, let $u\in F_{a,c}^\times$, $v\in F_{b,d}^\times$, and $w\in F_{b,c}^\times$ be such that $$N_a(u)\cdot N_d(v) = w^p,\qquad (\sigma_b -1)(\sigma_c -1)w = \zeta.$$ By , there exist $\alpha\in F_a^\times$ and $\delta\in F_d^\times$ such that $N_a(\alpha)=b$ and $N_d(\delta)=c$. We may write $$(\sigma_b - 1)w = \frac{c^{1/p}}{\beta},\qquad (\sigma_c - 1)w = \frac{b^{1/p}}{\gamma}$$ for some $\beta\in F_b^\times$ and $\gamma\in F_c^\times$. We have $$N_a((\sigma_c - 1)u\cdot (\gamma/\alpha)) = (\sigma_c - 1)N_a(u)\cdot N_a(\gamma/\alpha)= (\sigma_c - 1)w^p \cdot (\gamma^p/b) = 1.$$ By Hilbert's Theorem 90, there is $u'\in F_{a,c}^\times$ such that $$\alpha\cdot (\sigma_a - 1)u' = \gamma\cdot (\sigma_c - 1)u.$$ By , we obtain a Galois $U_4^+$-algebra $K_1$ over $F$ with the property that $(K_1)^{Q_4^+}\simeq F_{a,b,c}$ as $(\mathbb Z/p\mathbb Z)^3$-algebras. Similarly, we get a Galois $U_4^-$-algebra $K_2$ over $F$ such that $(K_2)^{Q_4^-}\simeq F_{b,c,d}$ as $(\mathbb Z/p\mathbb Z)^3$-algebras. Since $N_a(u)\cdot N_d(v) = w^p$, by (3) the $U_3$-algebras $(K_1)^{P_4^+}$ and $(K_2)^{P_4^-}$ are isomorphic. Now applied to the cartesian square ([\[phi-phi\'\]](#phi-phi'){reference-type="ref" reference="phi-phi'"}) for $n=4$ yields a $\overline{U}_5$-Galois algebra $L$ such that $L^{\overline{Q}_5}\simeq F_{a,b,c,d}$ as $(\mathbb Z/p\mathbb Z)^4$-algebras. By , this implies that $\langle{a,b,c,d}\rangle$ is defined. ◻ **Lemma 16**. *Let $b,c\in F^\times$ and $w\in F_{b,c}^{\times}$. Then $(\sigma_b-1)(\sigma_c-1)w=1$ if and only if there exist $w_b\in F_b^\times$ and $w_c\in F_c^\times$ such that $w=w_bw_c$ in $F_{b,c}^\times$.* *Proof.* We have $(\sigma_b-1)(\sigma_c-1)(w_bw_c)=(\sigma_b-1)(\sigma_c-1)w_c=1$ for all $w_b\in F_b^\times$ and $w_c\in F_c^\times$. 
Conversely, if $w\in F_{b,c}^\times$ satisfies $(\sigma_b-1)(\sigma_c-1)w=1$, then $(\sigma_c-1)w\in F_c^\times$ and $N_c((\sigma_c-1)w)=1$, and hence by Hilbert's Theorem 90 there exists $w_c\in F_c^\times$ such that $(\sigma_c-1)w_c=(\sigma_c-1)w$. Letting $w_b\coloneqq w/w_c\in F_{b,c}^\times$, we have $$(\sigma_c-1)w_b=(\sigma_c-1)(w/w_c)=1,$$ that is, $w_b\in F_b^{\times}$. ◻ From , we derive the following necessary condition for a fourfold Massey product to be defined. **Proposition 17**. *Let $p$ be a prime, let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$, let $a,b,c,d\in F^\times$, and suppose that $\langle{a,b,c,d}\rangle$ is defined over $F$. For every $w\in F_{b,c}^\times$ such that $(\sigma_b-1)(\sigma_c-1)w=\zeta$, there exist $u\in F_{a,c}^\times$ and $v\in F_{b,d}^\times$ such that $N_a(u)N_d(v)=w^p$.* *Proof.* By , there exist $u_0\in F_{a,c}^\times$, $v_0\in F_{b,d}^\times$ and $w_0\in F_{b,c}^\times$ such that $$N_a(u_0)N_d(v_0) =w_0^p,\qquad (\sigma_b-1)(\sigma_c-1)w_0=\zeta.$$ We have $(\sigma_b-1)(\sigma_c-1)(w_0/w)=1$. By , this implies that $w_0=w w_b w_c$, where $w_b\in F_b^\times$ and $w_c\in F_c^\times$. If we define $u\coloneqq u_0/w_c$ and $v\coloneqq v_0/w_b$, then $$N_a(u)N_d(v)=\frac{N_a(u_0)N_d(v_0)}{N_a(w_c)N_d(w_b)}=\frac{w_0^p}{w_c^pw_b^p}=w^p.\qedhere$$ ◻ # A generic variety {#generic-variety-section} In this section, we let $p$ be a prime number, and we let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$. Let $b,c\in F^\times$, and let $X$ be the Severi-Brauer variety associated to $(b,c)$ over $F$; see [@gille2017central Chapter 5]. For every étale $F$-algebra $K$, we have $(b,c)=0$ in $\operatorname{Br}(K)$ if and only if $X_K\simeq \mathbb P^{p-1}_K$ over $K$. In particular, $X_b\simeq \mathbb P^{p-1}_b$ over $F_b$. By [@gille2017central Theorem 5.4.1], the central simple algebra $(b,c)$ is split over $F(X)$. 
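To fix ideas in the smallest case $p=2$, the variety $X$ can be written down explicitly. The following display is an illustration only and is not used in the argument; it records the standard description of the Severi-Brauer variety of a quaternion algebra (cf. [@gille2017central Chapter 1]).

```latex
% Illustration for p = 2 (not part of the original argument):
% the Severi-Brauer variety of the quaternion algebra (b,c) is the
% plane conic
\[
  X \colon\quad b\,s_1^2 + c\,s_2^2 = s_3^2 \;\subset\; \mathbb{P}^2_F,
\]
% and (b,c) = 0 in Br(F) if and only if X has an F-rational point,
% if and only if X is isomorphic to the projective line over F.
```

This is the $p=2$ instance of the equivalence "$(b,c)=0$ in $\operatorname{Br}(K)$ if and only if $X_K\simeq \mathbb P^{p-1}_K$" stated above.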
We define the degree map $\deg\colon \operatorname{Pic}(X)\to \mathbb Z$ as the composition of the pullback map $\operatorname{Pic}(X)\to \operatorname{Pic}(X_b)\simeq\operatorname{Pic}(\mathbb P^{p-1}_b)$ and the degree isomorphism $\operatorname{Pic}(\mathbb P^{p-1}_b)\to\mathbb Z$. This does not depend on the choice of isomorphism $X_b\simeq \mathbb P^{p-1}_b$. **Lemma 18**. *Let $b,c\in F^\times$, let $G_b\coloneqq \operatorname{Gal}(F_b/F)$, and let $X$ be the Severi-Brauer variety of $(b,c)$ over $F$. Let $s_1,\dots,s_p$ be homogeneous coordinates on $\mathbb P^{p-1}_F$.* *(1) There exists a $G_b$-equivariant isomorphism $X_b\xrightarrow{\sim} \mathbb P_b^{p-1}$, where $G_b$ acts on $X_b$ via its action on $F_b$, and on $\mathbb P^{p-1}_b$ by $$\sigma_b^*(s_1)=cs_p,\qquad \sigma_b^*(s_i)=s_{i-1}\quad (i=2,\dots,p).$$* *(2) If $(b,c)\neq 0$ in $\operatorname{Br}(F)$, the image of $\deg\colon \operatorname{Pic}(X)\to\mathbb Z$ is equal to $p\mathbb Z$.* *(3) There exists a rational function $w\in F_{b,c}(X)^\times$ such that $$(\sigma_b-1)(\sigma_c-1)w=\zeta$$ and $$\operatorname{div}(w)=x-y\qquad \text{in $\operatorname{Div}(X_{b,c})$},$$ where $x,y\in (X_{b,c})^{(1)}$ satisfy $\deg(x)=\deg(y)=1$, $\sigma_b(x)=x$ and $\sigma_c(y)=y$.* *Proof.* (1) Consider the $1$-cocycle $z\colon G_b\to \operatorname{PGL}_p(F_b)$ given by $$\sigma_b\mapsto \begin{bmatrix} 0 & 0 & \dots & 0 & c \\ 1 & 0 & \dots & 0 & 0 \\ 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & 1 & 0 \\ \end{bmatrix}.$$ By [@gille2017central Construction 2.5.1], the class $[z]\in H^1(G_b,\operatorname{PGL}_p(F_b))$ coincides with the class of the degree-$p$ central simple algebra over $F$ with Brauer class $(b,c)$, and hence with the class of the associated Severi-Brauer variety $X$. 
It follows that we have a $G_b$-equivariant isomorphism $X_b\simeq \mathbb P^{p-1}_b$, where $G_b$ acts on $X_b$ via its action on $F_b$, and on $\mathbb P^{p-1}_b$ via the cocycle $z$. This proves (1). \(2\) By a theorem of Lichtenbaum [@gille2017central Theorem 5.4.10], we have an exact sequence $$\operatorname{Pic}(X)\xrightarrow{\deg} \mathbb Z\xrightarrow{\delta}\operatorname{Br}(F),$$ where $\delta(1)=(b,c)$. Since $(b,c)$ has exponent $p$, we conclude that the image of $\deg$ is equal to $p\mathbb Z$. \(3\) Let $G_{b,c}\coloneqq \operatorname{Gal}(F_{b,c}/F)=\langle{\sigma_b,\sigma_c}\rangle$. By (1), there is a $G_{b,c}$-equivariant isomorphism $f\colon \mathbb P^{p-1}_{b,c}\to X_{b,c}$, where $G_{b,c}$ acts on $X_{b,c}$ via its action on $F_{b,c}$, the action of $\sigma_c$ on $\mathbb P^{p-1}_{b,c}$ is trivial and the action of $\sigma_b$ on $\mathbb P^{p-1}_{b,c}$ is determined by $$\sigma_b^*(s_1)=cs_p,\qquad \sigma_b^*(s_i)=s_{i-1}\quad (i=2,\dots,p).$$ Consider the linear form $l\coloneqq\operatornamewithlimits{\textstyle\sum}_{i=1}^{p} c^{i/p}\cdot s_i$ on $\mathbb P^{p-1}_{b,c}$ and set $w'\coloneqq l/s_p\in F_{b,c}(\mathbb P^{p-1})^\times$. We have $\sigma_b^*(l)=c^{1/p}\cdot l$, and hence $(\sigma_b-1)w'=c^{1/p}\cdot (s_p/s_{p-1})$. It follows that $(\sigma_c-1)(\sigma_b-1)w'=\zeta$. Let $x',y'\in \operatorname{Div}(\mathbb P^{p-1}_{b,c})$ be the classes of linear subspaces of $\mathbb P^{p-1}_{b,c}$ given by $l=0$ and $s_p=0$, respectively. Then $$\operatorname{div}(w')=x'-y',\qquad \sigma_b(x')=x',\qquad \sigma_c(y')=y'.$$ Define $$w\coloneqq w'\circ f^{-1}\in F_{b,c}(X)^\times,\qquad x\coloneqq f_*(x')\in (X_{b,c})^{(1)},\qquad y\coloneqq f_*(y')\in (X_{b,c})^{(1)}.$$ Then $w,x,y$ satisfy the conclusion of (3). ◻ **Lemma 19**. *Let $a,b,c,d\in F^\times$. 
The complex of tori $$R_{a,c}(\mathbb G_{\operatorname{m}})\times R_{b,d}(\mathbb G_{\operatorname{m}})\xrightarrow{\varphi}R_{b,c}(\mathbb G_{\operatorname{m}})\xrightarrow{\psi} R_{b,c}(\mathbb G_{\operatorname{m}}),$$ where $\varphi(u,v)\coloneqq N_a(u)N_d(v)$ and $\psi(z)=(\sigma_b-1)(\sigma_c-1)z$, is exact.* *Proof.* By , we have an exact sequence $$R_c(\mathbb G_{\operatorname{m}})\times R_b(\mathbb G_{\operatorname{m}})\xrightarrow{\varphi'} R_{b,c}(\mathbb G_{\operatorname{m}})\xrightarrow{\psi} R_{b,c}(\mathbb G_{\operatorname{m}}),$$ where $\varphi'(x,y)=xy$. The homomorphism $\varphi$ factors as $$R_{a,c}(\mathbb G_{\operatorname{m}})\times R_{b,d}(\mathbb G_{\operatorname{m}})\xrightarrow{N_a\times N_d}R_c(\mathbb G_{\operatorname{m}})\times R_b(\mathbb G_{\operatorname{m}})\xrightarrow{\varphi'} R_{b,c}(\mathbb G_{\operatorname{m}}).$$ Since the homomorphisms $N_a$ and $N_d$ are surjective, so is $N_a\times N_d$. We conclude that $\operatorname{Im}(\varphi)=\operatorname{Im}(\varphi')=\operatorname{Ker}(\psi)$, as desired. ◻ Let $a,b,c,d\in F^\times$, and consider the complex of tori of . We define the following groups of multiplicative type over $F$: $$P\coloneqq R_{a,c}(\mathbb G_{\operatorname{m}})\times R_{b,d}(\mathbb G_{\operatorname{m}}),\qquad S\coloneqq \operatorname{Ker}(\psi)=\operatorname{Im}(\varphi),\qquad T\coloneqq \operatorname{Ker}(\varphi)\subset P.$$ By , we get a short exact sequence $$\label{t-p-s-2} 1\to T\xrightarrow{\iota} P\xrightarrow{\pi} S\to 1,$$ where $\iota$ is the inclusion map and $\pi$ is induced by $\varphi$. **Lemma 20**. *The groups of multiplicative type $T$, $P$ and $S$ are tori.* *Proof.* It is clear that $P$ and $S$ are tori. We now prove that $T$ is a torus. 
Consider the subgroup $Q\subset R_{a,c}(\mathbb G_{\operatorname{m}})$ which makes the following commutative square cartesian: $$\label{cartesian-q} \begin{tikzcd} Q \arrow[r,hook] \arrow[d] & R_{a,c}(\mathbb G_{\operatorname{m}}) \arrow[d,"N_a"] \\ \mathbb G_{\operatorname{m}} \arrow[r,hook] & R_c(\mathbb G_{\operatorname{m}}). \end{tikzcd}$$ Here the bottom horizontal map is the obvious inclusion. It follows that $Q$ is an $R_c(R^{(1)}_a(\mathbb G_{\operatorname{m}}))$-torsor over $\mathbb G_{\operatorname{m}}$, and hence it is smooth and connected. Therefore $Q$ is a torus. The image of the projection $T \xhookrightarrow{\iota} P\to R_{a,c}(\mathbb G_{\operatorname{m}})$ is contained in the torus $Q$. Moreover, the kernel $U$ of the projection is $R_b(R^{(1)}_{F_{b,d}/F_b}(\mathbb G_{\operatorname{m}}))$, and hence it is also a torus. We have an exact sequence $$1\to U\to T\to Q.$$ We have $\dim(U) = p(p-1)$. From ([\[t-p-s-2\]](#t-p-s-2){reference-type="ref" reference="t-p-s-2"}), we see that $\dim (T) = 2p^2 -2p+1$, and from ([\[cartesian-q\]](#cartesian-q){reference-type="ref" reference="cartesian-q"}) that $\dim (Q) = p^2 - p +1$. Therefore $\dim (T) = \dim (U) + \dim (Q)$, and so the sequence $$1 \to U \to T \to Q \to 1$$ is exact. As $U$ and $Q$ are tori, so is $T$. ◻ **Proposition 21**. *Let $p$ be a prime, let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$, and let $a,b,c,d\in F^\times$. Suppose that $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(F)$, and let $w\in F_{b,c}^\times$ be such that $(\sigma_b-1)(\sigma_c-1)w = \zeta$. Let $T$ and $P$ be the tori appearing in ([\[t-p-s-2\]](#t-p-s-2){reference-type="ref" reference="t-p-s-2"}), and let $E_w\subset P$ be the $T$-torsor given by the equation $N_a(u)N_d(v)=w^p$. Then the mod $p$ Massey product $\langle{a,b,c,d}\rangle$ is defined if and only if $E_w$ is trivial.* The construction of $E_w$ is functorial in $F$. 
Therefore, for every field extension $K/F$, the mod $p$ Massey product $\langle{a,b,c,d}\rangle$ is defined over $K$ if and only if $E_w$ is trivial over $K$. We may thus call $E_w$ a generic variety for the property "the Massey product $\langle{a,b,c,d}\rangle$ is defined." *Proof.* By , the Massey product $\langle{a,b,c,d}\rangle$ is defined over $F$ if and only if there exist $u\in F_{a,c}^\times$ and $v\in F_{b,d}^\times$ such that $N_a(u)N_d(v)=w^p$, that is, if and only if the $T$-torsor $E_w$ has an $F$-point, which happens if and only if $E_w$ is trivial. ◻ **Corollary 22**. *Let $p$ be a prime, let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$, and let $a,b,c,d\in F^\times$. Let $X$ be the Severi-Brauer variety of $(b,c)$ over $F$, fix $w\in F_{b,c}(X)^\times$ as in (3), and let $E_w\subset P_{F(X)}$ be the $T_{F(X)}$-torsor given by the equation $N_a(u)N_d(v)=w^p$.* *Then $\langle{a,b,c,d}\rangle$ is defined over $F(X)$ if and only if $E_w$ is trivial over $F(X)$.* *Proof.* This is a special case of , applied over the ground field $F(X)$. ◻ # Proof of Theorem [Theorem 3](#main-explicit){reference-type="ref" reference="main-explicit"} {#section-5} Let $p$ be a prime, and let $F$ be a field of characteristic different from $p$ and containing a primitive $p$-th root of unity $\zeta$. Let $a,b,c,d\in F^\times$ be such that their cosets in $F^\times/F^{\times p}$ are $\mathbb F_p$-linearly independent. Consider the field $K\coloneqq F_{a,b,c,d}$, and write $G=\operatorname{Gal}(K/F)=\langle{\sigma_a,\sigma_b,\sigma_c,\sigma_d}\rangle$ as in . We set $N_a\coloneqq \operatornamewithlimits{\textstyle\sum}_{j=0}^{p-1}\sigma_a^j\in \mathbb Z[G]$. For every subgroup $H$ of $G$, we also write $N_a$ for the image of $N_a\in \mathbb Z[G]$ under the canonical map $\mathbb Z[G]\to \mathbb Z[G/H]$. We define $N_b$, $N_c$ and $N_d$ in a similar way. 
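For orientation, the norm elements just introduced can be written out in the smallest case $p=2$. The following display is an illustration of the notation only and is not part of the proof.

```latex
% Illustration for p = 2 (not part of the original text):
% here N_a = 1 + \sigma_a in \mathbb{Z}[G] acts multiplicatively on
% units, so for u in F_{a,c}^x and v in F_{b,d}^x we have
\[
  N_a(u) = u\cdot\sigma_a(u), \qquad N_d(v) = v\cdot\sigma_d(v),
\]
% and the equation N_a(u) N_d(v) = w^p cutting out the torsor E_w reads
\[
  u\,\sigma_a(u)\cdot v\,\sigma_d(v) = w^2 .
\]
```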
Let $$1\to T\xrightarrow{\iota} P\xrightarrow{\pi} S\to 1$$ be the short exact sequence of $F$-tori ([\[t-p-s-2\]](#t-p-s-2){reference-type="ref" reference="t-p-s-2"}). It induces a short exact sequence of cocharacter $G$-lattices $$0\to T_*\xrightarrow{\iota_*} P_*\xrightarrow{\pi_*} S_*\to 0.$$ By definition of $P$ and $S$, we have $$P_*=\mathbb Z[G/\langle \sigma_b,\sigma_d\rangle]\oplus \mathbb Z[G/\langle \sigma_a,\sigma_c\rangle],\qquad S_*=\langle N_b,N_c\rangle\subset \mathbb Z[G/\langle\sigma_a,\sigma_d\rangle ].$$ Let $X$ be the Severi-Brauer variety associated to $(b,c)\in \operatorname{Br}(F)$. Since $X_K\simeq \mathbb P^{p-1}_K$, the degree map $\operatorname{Pic}(X_K)\to \mathbb Z$ is an isomorphism, and so the map $\operatorname{Div}(X_K)\to \operatorname{Pic}(X_K)$ is identified with the degree map $\deg\colon \operatorname{Div}(X_K)\to \mathbb Z$. The sequence ([\[4exact-t\]](#4exact-t){reference-type="ref" reference="4exact-t"}) for the torus $T$ thus takes the form $$\label{4exact-t-deg}1\to T(K) \to T(K(X))\xrightarrow{\operatorname{div}}\operatorname{Div}(X_K)\otimes T_*\xrightarrow{\deg} T_*\to 0,$$ where $T_*$ denotes the cocharacter lattice of $T$. **Lemma 23**. *(1) We have $(T_*)^G=\mathbb Z\cdot\eta$, where $\iota_*(\eta)=(N_a N_c,-N_b N_d)$ in $(P_*)^G$.* *(2) If $(b,c)\neq 0$ in $\operatorname{Br}(F)$, the image of $\deg\colon (\operatorname{Div}(X_{b,c})\otimes T_*)^G\to (T_*)^G$ is equal to $p(T_*)^G$.* *Proof.* (1) The free $\mathbb Z$-module $(P_*)^G$ has a basis consisting of the elements $(N_aN_c,0)$ and $(0,N_bN_d)$. The map $\pi_*\colon P_* \to S_*\subset \mathbb Z[G/\langle{\sigma_a,\sigma_d}\rangle]$ takes $(1, 0)$ to $N_b$ and $(0, 1)$ to $N_c$. It follows that $\operatorname{Ker}(\pi_*)^G$ is generated by $(N_aN_c, -N_bN_d)$. 
\(2\) By Lemma [Lemma 18](#construct-w){reference-type="ref" reference="construct-w"}(2), the image of the composition $$\operatorname{Div}(X)\otimes T_*^G=(\operatorname{Div}(X)\otimes T_*)^G\to(\operatorname{Div}(X_{b,c})\otimes T_*)^G\xrightarrow{\operatorname{\deg}} (T_*)^G$$ is equal to $p(T_*)^G$. Thus the image of the degree map contains $p(T_*)^G$. We now show that the image of the degree map is contained in $p(T_*)^G$. For every $x\in X^{(1)}$, pick $x'\in (X_{b,c})^{(1)}$ lying over $x$, and write $H_x$ for the $G$-stabilizer of $x'$. The injective homomorphisms of $G$-modules $$j_x\colon \mathbb Z[G/H_x]\hookrightarrow \operatorname{Div}(X_{b,c}),\qquad gH_x\mapsto g(x')$$ yield an isomorphism of $G$-modules $$\oplus_{x\in X^{(1)}}j_x\colon\oplus_{x\in X^{(1)}}\mathbb Z[G/H_x]\xrightarrow{\sim} \operatorname{Div}(X_{b,c}).$$ In order to conclude, it suffices to show that the image of $$\label{degree} (T_*)^{H_x}=(\mathbb Z[G/H_x]\otimes T_*)^G\to (\operatorname{Div}(X_{b,c})\otimes T_*)^G\xrightarrow{\deg} (T_*)^G$$ is contained in $p(T_*)^G$ for all $x\in X^{(1)}$. Set $H\coloneqq H_x$. The composition ([\[degree\]](#degree){reference-type="ref" reference="degree"}) takes a cocharacter $q\in (T_*)^{H}$ to $$\deg(\operatornamewithlimits{\textstyle\sum}_{gH\in G/H} gx'\otimes gq)=\deg(x')\cdot N_{G/H}(q).$$ Thus ([\[degree\]](#degree){reference-type="ref" reference="degree"}) coincides with the norm map $N_{G/H}$ times the degree of $x'$. Suppose that $G=H$. Then $\deg(x')=\deg(x)$ and, since $(b,c)\neq 0$, the degree of $x$ is divisible by $p$ by Lemma [Lemma 18](#construct-w){reference-type="ref" reference="construct-w"}(2). Suppose that $G\neq H$. Then either $\langle \sigma_a,\sigma_c\rangle$ or $\langle \sigma_b,\sigma_d\rangle$ is not contained in $H$. Suppose that $\langle \sigma_b,\sigma_d\rangle$ is not contained in $H$ (the other case is analogous, with the roles of the two components of $P_*$ exchanged), and let $N$ be the subgroup generated by $H,\sigma_b,\sigma_d$. Note that $H$ is a proper subgroup of $N$. 
The norm map $N_{G/H}:(T_*)^H\to (T_*)^G$ is the composition of the two norm maps $$(T_*)^H\xrightarrow{N_{N/H}} (T_*)^N\xrightarrow{N_{G/N}} (T_*)^G.$$ Since $\mathbb Z[G/\langle{\sigma_b,\sigma_d}\rangle]^H=\mathbb Z[G/\langle{\sigma_b,\sigma_d}\rangle]^N$, the norm map $(T_*)^H\to (T_*)^N$ is multiplication by $[N:H]\in p\mathbb Z$ on the first component of $T_*$ with respect to the inclusion $\iota_*$ of $T_*$ into $P_*=\mathbb Z[G/\langle{\sigma_b,\sigma_d}\rangle]\oplus\mathbb Z[G/\langle{\sigma_a,\sigma_c}\rangle]$. By Lemma [Lemma 23](#invar){reference-type="ref" reference="invar"}(1), $(T_*)^G=\mathbb Z\cdot\eta$, where $\iota_*(\eta)=(N_a N_c,-N_b N_d)$ in $(P_*)^G$. Since $N_a N_c$ is not divisible by $p$ in $\mathbb Z[G/\langle{\sigma_b,\sigma_d}\rangle]$, the image of ([\[degree\]](#degree){reference-type="ref" reference="degree"}) is contained in $p\mathbb Z\cdot\eta=p(T_*)^G$, as desired. ◻ We write $$\overline{\eta} \in \operatorname{Coker}[(\operatorname{Div}(X_{b,c})\otimes T_*)^G\xrightarrow{\deg} (T_*)^G]$$ for the coset of the generator $\eta\in (T_*)^G$ appearing in (1). If $(b,c)\neq 0$, then we have $\overline{\eta}\neq 0$ by (2). We consider the subgroup of unramified torsors $$H^1(G,T(K(X)))_{\operatorname{nr}}\coloneqq \operatorname{Ker}[H^1(G,T(K(X)))\xrightarrow{\operatorname{div}}H^1(G,\operatorname{Div}(X_K)\otimes T_*)],$$ and the homomorphism $$\theta\colon H^1(G,T(K(X)))_{\operatorname{nr}}\to \operatorname{Coker}[\operatorname{Div}(X_K)\otimes T_*\xrightarrow{\deg} T_*],$$ which are defined in ([\[theta-t\]](#theta-t){reference-type="ref" reference="theta-t"}). **Lemma 24**. *Let $b,c\in F^\times$ be such that $(b,c)\neq 0$ in $\operatorname{Br}(F)$, let $w\in F_{b,c}(X)^{\times}$ be such that $(\sigma_b-1)(\sigma_c-1)w=\zeta$ and $\operatorname{div}(w)=x-y$, where $\deg(x)=\deg(y)=1$ and $\sigma_b(x)=x$ and $\sigma_c(y)=y$. 
Let $E_w\subset P_{F(X)}$ be the $T_{F(X)}$-torsor given by the equation $N_a(u)N_d(v)=w^p$, and write $[E_w]$ for the class of $E_w$ in $H^1(G,T(K(X)))$.* *(1) We have $[E_w]\in H^1(G,T(K(X)))_{\operatorname{nr}}$.* *(2) Let $\theta$ be the homomorphism of ([\[theta-t\]](#theta-t){reference-type="ref" reference="theta-t"}). We have $\theta([E_w])=-\overline{\eta}\neq 0$.* *Proof.* The $F$-tori $T$, $P$ and $S$ of ([\[t-p-s-2\]](#t-p-s-2){reference-type="ref" reference="t-p-s-2"}) are split by $K=F_{a,b,c,d}$. Therefore, we may consider diagram ([\[big-diagram\]](#big-diagram){reference-type="ref" reference="big-diagram"}) for the short exact sequence ([\[t-p-s-2\]](#t-p-s-2){reference-type="ref" reference="t-p-s-2"}), the splitting field $K/F$, and $X$ the Severi-Brauer variety of $(b,c)$ over $F$: $$\begin{tikzcd} & (\operatorname{Div}(X_K)\otimes T_*)^G \arrow[d,hook,"\iota_*"] \arrow[r,"\deg"] & (T_*)^G \arrow[d,hook,"\iota_*"] \\ P(F(X)) \arrow[d,"\pi_*"] \arrow[r,"{\operatorname{div}}"] & (\operatorname{Div}(X_K)\otimes P_*)^G \arrow[d,"\pi_*"] \arrow[r,"\deg"] & (P_*)^G \arrow[d,"\pi_*"] \\ S(F(X)) \arrow[d,twoheadrightarrow,"\partial"] \arrow[r,"{\operatorname{div}}"] & (\operatorname{Div}(X_K)\otimes S_*)^G \arrow[d,"\partial"] \arrow[r,"\deg"] & (S_*)^G \\ H^1(G,T(K(X))) \arrow[r,"{\operatorname{div}}"] & H^1(G,\operatorname{Div}(X_K)\otimes T_*). \end{tikzcd}$$ Since $(\sigma_b-1)(\sigma_c-1)w^p=1$, we have $w^p\in S(F(X))$. The image of $w^p$ under $\partial$ is equal to $[E_w]\in H^1(G,T(K(X)))$. Let $H\subset G$ be the subgroup generated by $\sigma_a$ and $\sigma_d$. The canonical isomorphism $$\operatorname{Div}(X_{b,c})=\operatorname{Div}(X_K)^H=(\operatorname{Div}(X_K)\otimes \mathbb Z[G/H])^G$$ sends the divisor $\operatorname{div}(w)=x-y$ to $\operatornamewithlimits{\textstyle\sum}_{i,j}\sigma_b^i\sigma_c^j (x-y)\otimes \sigma_b^i\sigma_c^j$. 
Therefore, the element $\operatorname{div}(w^p)$ in $(\operatorname{Div}(X_K)\otimes S_*)^G\subset (\operatorname{Div}(X_K)\otimes \mathbb Z[G/H])^G$ is equal to $$e\coloneqq p\operatornamewithlimits{\textstyle\sum}_{i,j=0}^{p-1}(\sigma_b^i\sigma_c^j(x-y)\otimes \sigma_b^i\sigma_c^j)= p\operatornamewithlimits{\textstyle\sum}_{j=0}^{p-1}(\sigma_c^j x \otimes \sigma_c^j N_b)-p\operatornamewithlimits{\textstyle\sum}_{i=0}^{p-1}(\sigma_b^i y \otimes \sigma_b^i N_c).$$ Since $S_*$ is the sublattice of $\mathbb Z[G/\langle \sigma_a,\sigma_d\rangle]$ generated by $N_b$ and $N_c$, this implies that $e$ belongs to $(\operatorname{Div}(X_K)\otimes S_*)^G$. Then $e=\pi_*(f)$, where $$f\coloneqq\operatornamewithlimits{\textstyle\sum}_{j=0}^{p-1}(\sigma_c^j x \otimes \sigma_c^j N_a)-\operatornamewithlimits{\textstyle\sum}_{i=0}^{p-1}(\sigma_b^i y \otimes \sigma_b^i N_d)\in (\operatorname{Div}(X_K)\otimes P_*)^G.$$ It follows that $\operatorname{div}([E_w])=\partial(e)=\partial(\pi_*(f))=0$, which proves (1). Moreover, since $\deg(x)=\deg(y)=1$ we have $$\deg(f)=(N_a N_c, - N_b N_d)=\iota_*(\eta)\qquad\text{in $(P_*)^G$.}$$ In view of ([\[theta\'\]](#theta'){reference-type="ref" reference="theta'"}), this implies that $\theta([E_w])=-\overline{\eta}$. We know from (2) that $\overline{\eta}\neq 0$. This completes the proof of (2). ◻ *Proof of Theorem [Theorem 3](#main-explicit){reference-type="ref" reference="main-explicit"}.* Replacing $F$ by a finite extension if necessary, we may suppose that $F$ contains a primitive $p$-th root of unity $\zeta$. Let $E\coloneqq F(x,y)$, where $x$ and $y$ are independent variables over $F$, let $X$ be the Severi-Brauer variety of the degree-$p$ cyclic algebra $(x,y)$ over $E$, and let $L\coloneqq E(X)$. 
Consider the following elements of $E^\times$: $$a\coloneqq 1-x,\quad b\coloneqq x,\quad c\coloneqq y,\quad d\coloneqq 1-y.$$ We have $(a,b)=(c,d)=0$ in $\operatorname{Br}(E)$ by the Steinberg relations [@serre1979local Chapter XIV, Proposition 4(iv)], and hence $(a,b)=(c,d)=0$ in $\operatorname{Br}(L)$. Moreover, $(b,c)\neq 0$ in $\operatorname{Br}(E)$ because the residue of $(b,c)$ along $x=0$ is non-zero, while $(b,c)=0$ in $\operatorname{Br}(L)$ by [@gille2017central Theorem 5.4.1]. Thus $(a,b)=(b,c)=(c,d)=0$ in $\operatorname{Br}(L)$. Consider the sequence of tori ([\[t-p-s-2\]](#t-p-s-2){reference-type="ref" reference="t-p-s-2"}) over the ground field $E$, associated to the scalars $a,b,c,d\in E^\times$ chosen above: $$1\to T\to P\to S\to 1.$$ Let $E_w\subset P_L$ be the $T_L$-torsor given by the equation $N_a(u)N_d(v)=w^p$. By (2), the torsor $E_w$ is non-trivial over $L$. Now implies that the Massey product $\langle a,b,c,d\rangle$ is not defined over $L$. In particular, by , the differential graded ring $C^{ \raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}} }(\Gamma_L,\mathbb Z/p\mathbb Z)$ is not formal. ◻ # Homological algebra {#appendix-a} Let $G$ be a profinite group, and let $$\label{4-term-a} 0\to A_0\xrightarrow{\alpha_0} A_1\xrightarrow{\alpha_1} A_2\xrightarrow{\alpha_2} A_3\to 0$$ be an exact sequence of discrete $G$-modules. 
We break ([\[4-term-a\]](#4-term-a){reference-type="ref" reference="4-term-a"}) into two short exact sequences $$0\to A_0\xrightarrow{\alpha_0} A_1\to A\to 0,$$ $$0\to A\to A_2\xrightarrow{\alpha_2} A_3\to 0.$$ We obtain a homomorphism $$\label{map-phi}\theta\colon \operatorname{Ker}[H^1(G,A_1)\xrightarrow{\alpha_1} H^1(G,A_2)]\to \operatorname{Coker}[A_2^G\xrightarrow{\alpha_2} A_3^G]$$ defined as the composition of the map $$\operatorname{Ker}[H^1(G,A_1)\xrightarrow{\alpha_1} H^1(G,A_2)]\to \operatorname{Ker}[H^1(G,A)\to H^1(G,A_2)]$$ and the inverse of the isomorphism $$\label{map-phi-two}\operatorname{Coker}[A_2^G\xrightarrow{\alpha_2} A_3^G]\xrightarrow{\sim} \operatorname{Ker}[H^1(G,A)\to H^1(G,A_2)]$$ induced by the connecting homomorphism $A_3^G\to H^1(G,A)$. **Lemma 25**. *We have an exact sequence $$\begin{aligned} H^1(G,A_0)\xrightarrow{\alpha_0} \operatorname{Ker}[H^1(G,A_1)\xrightarrow{\alpha_1}H^1(G,A_2)]\xrightarrow{\theta} \operatorname{Coker}[A_2^G\to A_3^G]\to H^2(G,A_0), \end{aligned}$$ where the last map is defined as the composition of ([\[map-phi-two\]](#map-phi-two){reference-type="ref" reference="map-phi-two"}) and the connecting homomorphism $H^1(G,A)\to H^2(G,A_0)$.* *Proof.* The proof follows from the definition of $\theta$ and the exactness of ([\[4-term-a\]](#4-term-a){reference-type="ref" reference="4-term-a"}). ◻ Consider a commutative diagram of discrete $G$-modules $$\label{a-b-c} \begin{tikzcd} A_0 \arrow[r,hook,"\alpha_0"] \arrow[d,hook,"\iota_0"] & A_1 \arrow[r,"\alpha_1"] \arrow[d,hook,"\iota_1"] & A_2 \arrow[r,->>,"\alpha_2"] \arrow[d,hook,"\iota_2"] & A_3\arrow[d,hook,"\iota_3"] \\ B_0 \arrow[r,hook,"\beta_0"] \arrow[d,->>,"\pi_0"] & B_1 \arrow[r,"\beta_1"] \arrow[d,->>,"\pi_1"] & B_2 \arrow[r,->>,"\beta_2"] \arrow[d,->>,"\pi_2"] & B_3 \arrow[d,->>,"\pi_3"] \\ C_0 \arrow[r,hook,"\gamma_0"]& C_1 \arrow[r,"\gamma_1"] & C_2 \arrow[r,->>,"\gamma_2"] & C_3 \end{tikzcd}$$ with exact rows and columns. 
It yields a commutative diagram of abelian groups $$\label{p} \begin{tikzcd} A_1^G \arrow[r,"\alpha_1"] \arrow[d,hook,"\iota_1"] & A_2^G \arrow[r,"\alpha_2"] \arrow[d,hook,"\iota_2"] & A_3^G \arrow[d,hook,"\iota_3"] \\ B_1^G \arrow[r,"\beta_1"] \arrow[d,"\pi_1"]& B_2^G \arrow[r,"\beta_2"]\arrow[d,"\pi_2"] & B_3^G \arrow[d,"\pi_3"] \\ C_1^G \arrow[d,"\partial_1"] \arrow[r,"\gamma_1"] & C_2^G \arrow[r,"\gamma_2"] \arrow[d,"\partial_2"] & C_3^G\\ H^1(G,A_1) \arrow[r,"\alpha_1"] & H^1(G,A_2) \end{tikzcd}$$ where the columns are exact and the rows are complexes. Suppose that the connecting homomorphism $\partial_1\colon C_1^G\to H^1(G,A_1)$ is surjective. We define a function $$\theta'\colon \operatorname{Ker}[H^1(G,A_1)\xrightarrow{\alpha_1} H^1(G,A_2)]\to \operatorname{Coker}[A_2^G\xrightarrow{\alpha_2} A_3^G]$$ as follows. Let $z\in H^1(G,A_1)$ be such that $\alpha_1(z)=0$ in $H^1(G,A_2)$. By assumption, there exists $c_1\in C_1^G$ such that $\partial_1(c_1)=z$. By the exactness of the second column, there exists $b_2\in B_2^G$ such that $\pi_2(b_2)=\gamma_1(c_1)$. By the exactness of the third column and the injectivity of $\iota_3$, there exists a unique element $a_3\in A_3^G$ such that $\beta_2(b_2)=\iota_3(a_3)$. We set $$\theta'(z)\coloneqq a_3+\alpha_2(A_2^G).$$ A diagram chase shows that $\theta'$ is a well-defined homomorphism. **Lemma 26**. *Let $G$ be a profinite group, and suppose given an exact sequence ([\[4-term-a\]](#4-term-a){reference-type="ref" reference="4-term-a"}) and a commutative diagram ([\[a-b-c\]](#a-b-c){reference-type="ref" reference="a-b-c"}) such that the connecting homomorphism $\partial_1\colon C_1^G\to H^1(G,A_1)$ is surjective. Then $\theta=-\theta'$.* *Proof.* Let $z\in H^1(G,A_1)$ be such that $\alpha_1(z)=0$ in $H^1(G,A_2)$. Since the map $\partial_1\colon C_1^G\to H^1(G,A_1)$ is surjective, there exists $c_1\in C_1^G$ such that $\partial_1(c_1)=z$. 
Let $b_1\in B_1$ be such that $\pi_1(b_1)=c_1$, and for all $g\in G$ let $a_{1g}$ be the unique element of $A_1$ such that $\iota_1(a_{1g})=gb_1-b_1$. Then $\partial_1(c_1)$ is represented by the $1$-cocycle $\left\{a_{1g}\right\}_{g\in G}$. Define $b_2\coloneqq \beta_1(b_1)$ and $c_2\coloneqq \gamma_1(c_1)$, so that $\pi_2(b_2)=c_2$. Since $\alpha_1(z)$ is represented by the cocycle $\left\{\alpha_1(a_{1g})\right\}_{g\in G}$ and $\alpha_1(z)=0$, there exists $a_2\in A_2$ such that $\alpha_1(a_{1g})=ga_2-a_2$ for all $g\in G$. It follows that $gb_2-b_2=\iota_2(ga_2-a_2)$ for all $g\in G$, that is, $b_2-\iota_2(a_2)$ belongs to $B_2^G$. Moreover, we have $$\pi_2(b_2-\iota_2(a_2))=\pi_2(b_2)=\gamma_1(c_1).$$ Finally, since $\beta_2\circ\beta_1=0$, we have $$\beta_2(b_2-\iota_2(a_2))=\beta_2(\beta_1(b_1))-\iota_3(\alpha_2(a_2))=\iota_3(-\alpha_2(a_2)).$$ By definition, $\theta'(z)=-\alpha_2(a_2)+\alpha_2(A_2^G)$. Note that $\alpha_2(a_2)$ belongs to $A_2^G$ because for all $g\in G$ we have $$g\alpha_2(a_2)-\alpha_2(a_2)=\alpha_2(ga_2-a_2)=\alpha_2(\alpha_1(a_{1g}))=0.$$ For all $g\in G$, let $a_g\in A$ be the image of $a_{1g}$. The homomorphism $$\operatorname{Ker}[H^1(G,A_1)\xrightarrow{\alpha_1} H^1(G,A_2)]\to \operatorname{Ker}[H^1(G,A)\to H^1(G,A_2)]$$ induced by the map $A_1\to A$ sends the class of $\left\{a_{1g}\right\}_{g\in G}$ to the class of $\left\{a_g\right\}_{g\in G}$. The element $a_2\in A_2$ is a lift of $\alpha_2(a_2)$. As $ga_2-a_2=\alpha_1(a_{1g})$ for all $g\in G$, the injective map $A\to A_2$ sends $a_g$ to $ga_2-a_2$ for all $g\in G$. Therefore, the connecting homomorphism $A_3^G\to H^1(G,A)$ sends $\alpha_2(a_2)$ to the class of $\left\{a_g\right\}_{g\in G}$. It follows that the isomorphism $$\operatorname{Coker}[A_2^G\xrightarrow{\alpha_2} A_3^G]\xrightarrow{\sim} \operatorname{Ker}[H^1(G,A)\to H^1(G,A_2)]$$ induced by $A_3^G\to H^1(G,A)$ sends $\alpha_2(a_2)+\alpha_2(A_2^G)$ to the class of $\left\{a_g\right\}_{g\in G}$. 
By the definition of $\theta$, we conclude that $\theta(z)=\alpha_2(a_2)+\alpha_2(A_2^G)=-\theta'(z)$. ◻ # Unramified torsors under tori {#appendix-b} Let $F$ be a field, let $X$ be a smooth projective geometrically connected $F$-variety, let $K$ be a Galois extension of $F$ (possibly of infinite degree over $F$), and let $G\coloneqq \operatorname{Gal}(K/F)$. We have an exact sequence of discrete $G$-modules $$\label{4exact} 1\to K^\times \to K(X)^\times\xrightarrow{\operatorname{div}} \operatorname{Div}(X_K)\xrightarrow{\lambda} \operatorname{Pic}(X_K)\to 0,$$ where $\operatorname{div}$ takes a non-zero rational function $f\in K(X)^\times$ to its divisor, and $\lambda$ takes a divisor on $X_K$ to its class in $\operatorname{Pic}(X_K)$. Let $T$ be an $F$-torus split by $K$. Write $T_*$ for the cocharacter lattice of $T$: it is a $G$-module that is finitely generated and free as a $\mathbb Z$-module. Tensoring ([\[4exact\]](#4exact){reference-type="ref" reference="4exact"}) with $T_*$, we obtain an exact sequence of $G$-modules $$\label{4exact-t} 1\to T(K) \to T(K(X))\xrightarrow{\operatorname{div}} \operatorname{Div}(X_K)\otimes T_*\xrightarrow{\lambda} \operatorname{Pic}(X_K)\otimes T_*\to 0,$$ where we have used the fact that $K^\times\otimes T_*=T(K)$. We define the subgroup of unramified torsors $$H^1(G,T(K(X)))_{\operatorname{nr}}\coloneqq \operatorname{Ker}[H^1(G,T(K(X)))\xrightarrow{\operatorname{div}}H^1(G,\operatorname{Div}(X_K)\otimes T_*)].$$ The sequence ([\[4exact-t\]](#4exact-t){reference-type="ref" reference="4exact-t"}) is a special case of ([\[4-term-a\]](#4-term-a){reference-type="ref" reference="4-term-a"}). In this case, the map $\theta$ of ([\[map-phi\]](#map-phi){reference-type="ref" reference="map-phi"}) takes the form $$\label{theta-t}\theta\colon H^1(G,T(K(X)))_{\operatorname{nr}}\to \operatorname{Coker}[(\operatorname{Div}(X_K)\otimes T_*)^G\xrightarrow{\lambda}(\operatorname{Pic}(X_K)\otimes T_*)^G].$$ **Proposition 27**. 
*We have an exact sequence $$\resizebox{\textwidth}{!}{$ H^1(G, T(K)) \to H^1(G, T(K(X)))_{\operatorname{nr}} \xrightarrow{\theta} \operatorname{Coker} [(\operatorname{Div}(X_K)\otimes T_*)^G \xrightarrow{\lambda}(\operatorname{Pic}(X_K)\otimes T_*)^G] \to H^2(G, T(K)),$}$$ where the first map and the last map are induced by ([\[4exact-t\]](#4exact-t){reference-type="ref" reference="4exact-t"}).* *Proof.* This is a special case of Lemma 25. ◻ By Lemma 26, the map $\theta$ may be computed as follows. Let $$\label{t-p-s} 1\to T\xrightarrow{\iota} P\xrightarrow{\pi} S\to 1$$ be a short exact sequence of $F$-tori split by $K$ such that $P$ is a quasi-trivial torus. Passing to cocharacter lattices, we obtain a short exact sequence of $G$-modules $$\label{t-p-s-cocharacter}0\to T_*\xrightarrow{\iota_*} P_*\xrightarrow{\pi_*} S_*\to 0.$$ We tensor ([\[4exact\]](#4exact){reference-type="ref" reference="4exact"}) with $T_*$, $P_*$ and $S_*$ respectively, and pass to group cohomology to obtain the following commutative diagram, where the columns are exact and the rows are complexes: $$\label{big-diagram} \begin{tikzcd} & (\operatorname{Div}(X_K)\otimes T_*)^G \arrow[d,hook,"\iota_*"] \arrow[r,"\lambda"] & (\operatorname{Pic}(X_K)\otimes T_*)^G \arrow[d,hook,"\iota_*"] \\ P(F(X)) \arrow[d,"\pi_*"] \arrow[r,"{\operatorname{div}}"] & (\operatorname{Div}(X_K)\otimes P_*)^G \arrow[d,"\pi_*"] \arrow[r,"\lambda"] & (\operatorname{Pic}(X_K)\otimes P_*)^G \arrow[d,"\pi_*"] \\ S(F(X)) \arrow[d,twoheadrightarrow,"\partial"] \arrow[r,"{\operatorname{div}}"] & (\operatorname{Div}(X_K)\otimes S_*)^G \arrow[d,"\partial"] \arrow[r,"\lambda"] & (\operatorname{Pic}(X_K)\otimes S_*)^G \\ H^1(G,T(K(X))) \arrow[r,"{\operatorname{div}}"] & H^1(G,\operatorname{Div}(X_K)\otimes T_*). \end{tikzcd}$$ Note that $\operatorname{Gal}(K(X)/F(X))=G$. Therefore, since $P$ is quasi-trivial, $H^1(G,P(K(X)))$ is trivial by Hilbert's Theorem 90 and Shapiro's lemma, and hence $\partial\colon S(F(X))\to H^1(G,T(K(X)))$ is surjective. 
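As a purely illustrative aside (not part of the argument), the first cohomology groups being manipulated here can be computed numerically in the simplest situation: a *finite* cyclic group acting on a finite module, via the standard description $H^1(C_n,A)=\operatorname{Ker}(N)/\operatorname{Im}(\sigma-1)$, where $N=1+\sigma+\cdots+\sigma^{n-1}$ is the norm map. The function name and sample actions in this sketch are our own.

```python
def h1_cyclic(m, k, n):
    """Order of H^1(C_n, Z/m), where a generator sigma acts by x -> k*x mod m.

    For a finite cyclic group C_n one has the standard description
    H^1(C_n, A) = ker(N) / im(sigma - 1), with N = 1 + sigma + ... + sigma^(n-1).
    """
    assert pow(k, n, m) == 1 % m, "sigma^n must act trivially"
    # N acts on Z/m as multiplication by 1 + k + k^2 + ... + k^(n-1)
    norm = sum(pow(k, j, m) for j in range(n)) % m
    ker_norm = [x for x in range(m) if norm * x % m == 0]
    image = {(k - 1) * x % m for x in range(m)}   # im(sigma - 1)
    # im(sigma - 1) lies inside ker(N), since N o (sigma - 1) = sigma^n - 1 = 0,
    # so the order of the quotient is the ratio of the two orders.
    return len(ker_norm) // len(image)

# C_2 acting on Z/4 by x -> -x: here N = 0, im(sigma - 1) = {0, 2}, so H^1 has order 2
print(h1_cyclic(4, 3, 2))   # 2
# C_2 acting trivially on Z/3: H^1 = Hom(C_2, Z/3) is trivial
print(h1_cyclic(3, 1, 2))   # 1
```

The brute-force search over all of $\mathbb Z/m$ is of course only feasible for tiny modules; it is meant to make the abstract cocycle formalism above concrete, not to model the infinite Galois modules of this section.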
Let $\tau\in H^1(G, T(K(X)))_{\operatorname{nr}}$ and choose $\sigma\in S(F(X))$ such that $\partial(\sigma)=\tau$. Then pick $\rho\in (\operatorname{Div}(X_K)\otimes P_*)^G$ such that $\pi_*(\rho)=\operatorname{div}(\sigma)$, and let $t$ be the unique element in $(\operatorname{Pic}(X_K)\otimes T_*)^G$ such that $\lambda(\rho)=\iota_*(t)$. Lemma 26 implies $$\label{theta'} \theta(\tau)=-t.$$ Finally, suppose that $K=F_s$ is a separable closure of $F$, so that $G=\Gamma_F$, and write $X_s$ for $X\times_FF_s$. The exact sequence ([\[4exact-t\]](#4exact-t){reference-type="ref" reference="4exact-t"}) for $K=F_s$ takes the form $$\label{4exact-t-galois} 1\to T(F_s) \to T(F_s(X))\xrightarrow{\operatorname{div}} \operatorname{Div}(X_s)\otimes T_*\xrightarrow{\lambda} \operatorname{Pic}(X_s)\otimes T_*\to 0.$$ We have the inflation-restriction sequence $$0\to H^1(F,T(F_s(X)))\xrightarrow{\operatorname{Inf}}H^1(F(X),T)\xrightarrow{\operatorname{Res}}H^1(F_s(X),T).$$ Since $T$ is defined over $F$, it is split by $F_s$, and hence by Hilbert's Theorem 90 we have $H^1(F_s(X),T)=0$. Thus the inflation map $H^1(F,T(F_s(X)))\to H^1(F(X),T)$ is an isomorphism. We identify $H^1(F,T(F_s(X)))$ with $H^1(F(X),T)$ via the inflation map. If we define $$H^1(F(X),T)_{\operatorname{nr}}\coloneqq \operatorname{Ker}[H^1(F(X),T)\xrightarrow{\operatorname{div}}H^1(F,\operatorname{Div}(X_s)\otimes T_*)],$$ the map $\theta$ of ([\[map-phi\]](#map-phi){reference-type="ref" reference="map-phi"}) takes the form $$\theta\colon H^1(F(X),T)_{\operatorname{nr}}\to \operatorname{Coker}[(\operatorname{Div}(X_s)\otimes T_*)^{\Gamma_F}\to(\operatorname{Pic}(X_s)\otimes T_*)^{\Gamma_F}].$$ **Corollary 28**. 
*We have an exact sequence $$\resizebox{\textwidth}{!}{$ H^1(F, T) \to H^1(F(X), T)_{\operatorname{nr}} \xrightarrow{\theta} \operatorname{Coker} [(\operatorname{Div}(X_s)\otimes T_*)^{\Gamma_F} \xrightarrow{\lambda} (\operatorname{Pic}(X_s)\otimes T_*)^{\Gamma_F}] \to H^2(F, T),$}$$ where the first and last maps are induced by ([\[4exact-t-galois\]](#4exact-t-galois){reference-type="ref" reference="4exact-t-galois"}).* *Proof.* This is a special case of Proposition 27. ◻
{ "id": "2309.17004", "title": "Non-formality of Galois cohomology modulo all primes", "authors": "Alexander Merkurjev and Federico Scavia", "categories": "math.NT math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove that the extended Khovanov arc algebras are isomorphic to the basic algebras of anti-spherical Hecke categories for maximal parabolics of symmetric groups. We present these algebras by quiver and relations and provide the full submodule lattices of Verma modules. address: - Department of Mathematics, University of York, Heslington, York, UK - Department of Mathematics, City, University of London, London, UK - Department of Mathematics, University of York, Heslington, York, UK - Mathematical Institute, Endenicher Allee 60, 53115 Bonn author: - Chris Bowman - Maud De Visscher - Amit Hazi - Catharina Stroppel bibliography: - master.bib title: | Quiver presentations and isomorphisms\ of Hecke categories and Khovanov arc algebras --- # Introduction A fundamental notion of categorical Lie theory is that of uniqueness. If a pair of 2-categorical objects share the same underlying Kazhdan--Lusztig theory, then they "should be the same\" --- a striking example of this is given by the ${\mathbb Z}$-graded algebra isomorphisms between KLR algebras and diagrammatic Soergel bimodules conjectured in [@MR4100120] and proven in [@cell4us2]. Our starting point for this paper is the simplest family of ($p$-)Kazhdan--Lusztig polynomials --- those given by oriented Temperley--Lieb diagrams, or equivalently, Dyck tilings. These combinatorial objects underlie the (extended) Khovanov arc algebras [@MR2918294] and the anti-spherical Hecke categories associated to maximal parabolics of symmetric groups [@compan]. The first theorem of this paper explains this coincidence via an explicit and elementary ${\mathbb Z}$-graded algebra isomorphism (see Theorem A below) in the spirit of [@MR4100120; @cell4us2]. By a theorem of Gabriel, any basic algebra over a field is isomorphic to the path algebra of its ${\rm Ext}$-quiver, modulo relations. 
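As an aside, the theorem of Gabriel just quoted can be made concrete on a toy example: the following sketch enumerates a monomial basis of the path algebra of a small acyclic quiver modulo monomial relations. This is our own generic illustration; the quiver, function names, and relations below are hypothetical and unrelated to the Dyck quiver $\mathscr{D}_{m,n}$ of Theorem B.

```python
def path_algebra_basis(vertices, arrows, zero_paths=(), max_len=10):
    """Monomial basis of the path algebra of a quiver modulo monomial relations.

    arrows: list of (name, source, target); zero_paths: tuples of arrow names
    declared to be zero (hence any path containing one of them is zero).
    Paths are written left to right: ("a", "b") means "first a, then b".
    """
    paths = [((), v, v) for v in vertices]               # trivial paths e_v
    layer = [((name,), s, t) for name, s, t in arrows]   # paths of length 1
    for _ in range(max_len):
        layer = [p for p in layer
                 if not any(_contains(p[0], z) for z in zero_paths)]
        if not layer:
            break
        paths.extend(layer)
        # extend each surviving path by every composable arrow
        layer = [(p + (name,), s0, t)
                 for (p, s0, t0) in layer
                 for (name, s, t) in arrows if s == t0]
    return paths

def _contains(path, sub):
    n = len(sub)
    return any(path[i:i + n] == tuple(sub) for i in range(len(path) - n + 1))

quiver = [("a", 1, 2), ("b", 2, 3)]
print(len(path_algebra_basis([1, 2, 3], quiver)))   # 6: e1, e2, e3, a, b, ab
print(len(path_algebra_basis([1, 2, 3], quiver,
                             zero_paths=[("a", "b")])))   # 5: the path ab is killed
```

Only monomial (path-is-zero) relations are handled here; the quadratic relations of Theorem B involve linear combinations of paths and would require linear algebra over $\Bbbk$ rather than this purely combinatorial filter.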
Such presentations are one of the "holy grails\" of representation theory --- they essentially provide complete information about the structure of an algebra. Our second main theorem of this paper provides such a presentation for the basic algebras of anti-spherical Hecke categories of maximal parabolics of symmetric groups (Theorem B). We use this presentation to obtain complete submodule lattices for the Verma modules (Theorem C). The results of this paper are mostly self-contained, with elementary proofs which work over any integral domain $\Bbbk$ containing $i \in \Bbbk$ such that $i^2=-1$. ## The isomorphism theorem The (extended) Khovanov arc algebras $\mathcal{K} _{m,n}$ for $m,n\in {\mathbb N}$ have their origins in 2-dimensional topological quantum field theory and their first applications were in categorical knot theory [@MR1740682; @MR2521250]. These algebras have subsequently been studied from the point of view of Springer varieties [@MR1928174], their cohomological and representation theoretic structure [@MR2600694; @MR2781018; @MR2955190; @MR2881300; @BarWang], and symplectic geometry [@MR4422212], and have further inspired much generalisation: from the Temperley--Lieb setting to web diagrams [@web1; @web2; @web3; @web4] and also from the "even\" setting to "super\" [@odd1] and "odd\" settings [@odd2], as well as to the orthosymplectic case [@MR3518556; @MR3563723; @MR3644792]. In summary, these algebras form the prototype for knot-theoretic categorification; we refer to Stroppel's 2022 ICM address for more details [@ICM1]. 
On the other hand, anti-spherical Hecke categories of parabolic Coxeter systems $(W,P)$ provide the universal setting for studying the interaction between Kazhdan--Lusztig theory and categorical Lie theory --- they have formed the crux of the resolutions of the Jantzen, Lusztig, and Kazhdan--Lusztig positivity conjectures [@MR3689943; @w13; @MR3245013] and control much of the representation theory of algebraic groups and braid groups [@cell4us2; @BNS2; @MR3868004; @MR4454848; @ELpaper]. We refer to Williamson's 2018 ICM address for a more complete history and the geometric motivation for their study [@MR3966750]. Our first main theorem bridges the gap between these two distinct categorical worlds: **Theorem A 1**. *The extended Khovanov arc algebras $\mathcal{K} _{m,n}$ are isomorphic (as ${\mathbb Z}$-graded $\Bbbk$-algebras) to the basic algebras $\mathcal{H}_{m,n}$ of the anti-spherical Hecke categories associated to the maximal parabolics of symmetric groups $(S_{m+n},S_m \times S_n)$ for all $m,n\in{\mathbb N}$.* Given the vast generalisations of Khovanov arc algebras (in particular to the super world!) and of these anti-spherical Hecke categories (to all parabolic Coxeter systems) we hope that our main theorem will be the starting point of much reciprocal study of these two worlds. ## Quiver and relations for Hecke categories It is well-known that ($p$-)Kazhdan--Lusztig polynomials encode a great deal of character-theoretic and indeed cohomological information about Verma modules (particularly if one puts certain restrictions on $p\geqslant 0$). If the algebra is Koszul (as is the case for our algebras) we further know that the $p$-Kazhdan--Lusztig polynomials carry complete information about the radical layers of indecomposable projective and Verma modules. 
Given the almost ridiculous level of detail these polynomials encode, it is natural to ask *"what are the limits to what $p$-Kazhdan--Lusztig combinatorics can tell us about the structure of the Hecke category?\"* The starting point of this paper is to delve deep into the Dyck/Temperley--Lieb combinatorics for $p$-Kazhdan--Lusztig polynomials, which was initiated in [@MR2918294; @compan]. There is a wealth of extra, richer combinatorial information which can be encoded into the Dyck tilings underlying these $p$-Kazhdan--Lusztig polynomials. Instead of looking only at the sets of Dyck tilings (which enumerate the $p$-Kazhdan--Lusztig polynomials) we look at the relationships for passing between these Dyck tilings. In fact, this "meta-Kazhdan--Lusztig combinatorics\" is sufficiently rich as to completely determine the full structure of our Hecke categories: **Theorem B 1**. *The $\Bbbk$-algebra $\mathcal{H}_{m,n}$ admits a quadratic presentation as the path algebra of the "Dyck quiver\" $\mathscr{D}_{m,n}$ of [Definition 38](#quiverdefn){reference-type="ref" reference="quiverdefn"} modulo "Dyck-combinatorial relations\" [\[rel1\]](#rel1){reference-type="eqref" reference="rel1"} to [\[adjacent\]](#adjacent){reference-type="eqref" reference="adjacent"}. If $\Bbbk$ is a field, then the ${\rm Ext}$-quiver of $\mathcal{H}_{m,n}$ is isomorphic to $\mathscr{D}_{m,n}$ and this gives a presentation of the algebra by quiver and relations.* In a nutshell, the power of Theorem B is that it allows us to understand not only the *graded composition series* of standard and projective modules (the purview of classical Kazhdan--Lusztig combinatorics) but the *explicit extensions interrelating these composition factors* (in terms of meta-Kazhdan--Lusztig combinatorics). In essence, Theorem B provides complete information about the structure of the anti-spherical Hecke categories of $(S_{m+n},S_m \times S_n)$ for $m,n\in{\mathbb N}$. 
We reap some of the fruits of Theorem B by providing an incredibly elementary description of the full submodule lattices of Verma modules: **Theorem C 1**. *Let $\Bbbk$ be a field. The submodule lattice of the Verma module $\Delta _{m,n}(\lambda)$ can be calculated in terms of the combinatorics of Dyck tilings; moreover this lattice is independent of the characteristic of $\Bbbk$.* An example is depicted in [\[submodules\]](#submodules){reference-type="ref" reference="submodules"}, below. Specialising to the case that $\Bbbk$ is a field and putting Theorems A and B together, we obtain a conceptually simpler proof of the results of [@BarWang Section 2] (which makes use of the Koszulity of these algebras over a field, which is the main result of [@MR2600694]). $$\begin{tikzpicture}[scale=0.925 ] \draw[very thick] (0,0)--(-5,-2.5) (0,0)--(-3,-2.5) (0,0)--(-1,-2.5) (0,0)--(1,-2.5) (0,0)--(3,-2.5) (0,0)--(5,-2.5); \draw[very thick] (-5,-2.5)--(-5,-6.5) (-5,-2.5)--(-3,-6.5) (-3,-2.5)--(1,-6.5) (-3,-2.5)--(5,-6.5) (-3,-6.5)--(-3,-2.5); \draw[very thick] (-1,-2.5)--(-5,-6.5) (-1,-2.5)--(5,-6.5) (-1,-2.5)--(-1,-6.5) ; \draw[very thick] (1,-2.5)--(-1,-6.5) (1,-2.5)--(-3,-6.5) (1,-2.5)--(3,-6.5) ; \draw[very thick] (5,-2.5)--(5,-6.5) (5,-2.5)--(3,-6.5) (3,-2.5)--(1,-6.5) (3,-2.5)--(3,-6.5) (3,-2.5)--(-5,-6.5) ; \draw[very thick] (0,-6.5-2.5)--(-5,-6.5) (0,-6.5-2.5)--(-3,-6.5) (0,-6.5-2.5)--(-1,-6.5) (0,-6.5-2.5)--(1,-6.5) (0,-6.5-2.5)--(3,-6.5) (0,-6.5-2.5)--(5,-6.5); \path(0,0)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (top)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); 
\draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \draw [very thick] (top)--++(45:0.3)--++(135:0.3)--++(-135:0.3); \path(0,-6.5-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(bottom); \path(bottom)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (bottom)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (bottom)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick] (bottom) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (bottom) --++(45:0.6)--++(135:0.3); \draw [very thick] (bottom) --++(135:0.6)--++(45:0.3); \path(-5,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LLL1); \path(LLL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LLL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (LLL1)--++(45:0.3*2)--++(135:0.3) --++(135:0.3*2)--++(-135:0.3*2) --++(-45:0.3*3); \draw [very thick] (LLL1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) 
--++(-135:0.3)--++(-45:0.3); \draw [very thick] (LLL1) --++(45:0.6)--++(135:0.3); \draw [very thick] (LLL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-3,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LL1); \path(LL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (LL1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3*2)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick] (LL1) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-1,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(L1); \path(L1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (L1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (L1)--++(45:0.3*3)--++(135:0.3*2)--++(-135:0.3)--++(135:0.3)--++(-135:0.3*2) --++(-45:0.3*3); \draw [very thick] (L1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (L1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(1,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(R1); 
\path(R1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (R1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (R1)--++(45:0.3*2)--++(135:0.3) --++(135:0.3)--++(-135:0.3*2) --++(-45:0.3*2); \draw [very thick] (R1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (L1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(3,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RR1); \path(RR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (RR1)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3*2)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \draw [very thick] (RR1) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RR1) --++(45:0.6)--++(135:0.3); \path(5,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RRR1); \path(RRR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] 
(RRR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (RRR1)--++(45:0.3*3)--++(135:0.3*2) --++(-135:0.3*3) --++(-45:0.3*2); \draw [very thick] (RRR1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RRR1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \path(-5,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LLL1); \path(LLL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LLL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (LLL1)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3) --++(135:0.3*2)--++(-135:0.3*2) --++(-45:0.3*3); \draw [very thick] (LLL1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LLL1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (LLL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-3,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LL1); \path(LL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) 
--++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (LL1)--++(45:0.3*2)--++(135:0.3*2)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick] (LL1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LL1) --++(45:0.6)--++(135:0.3); \draw [very thick] (LL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-1,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(L1); \path(L1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (L1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (L1)--++(45:0.3*3)--++(135:0.3*3) --++(-135:0.3*3) --++(-45:0.3*3); \draw [very thick] (L1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (L1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(1,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(R1); \path(R1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (R1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); 
\draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (R1)--++(45:0.3*3)--++(135:0.3) --++(-135:0.3*2) --++(135:0.3*2) --++(-135:0.3) --++(-45:0.3*3); \draw [very thick] (R1) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (R1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (R1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(3,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RR1); \path(RR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (RR1)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3*1)--++(135:0.3)--++(-135:0.3*2) --++(-45:0.3*2); \draw [very thick] (RR1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RR1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \path(5,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RRR1); \path(RRR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RRR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (RRR1)--++(45:0.3*3)--++(135:0.3*2) --++(-135:0.3*2) --++(135:0.3)--++(-135:0.3) --++(-45:0.3*3); \draw [very 
thick] (RRR1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RRR1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (RRR1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \end{tikzpicture}$$ ## Structure of the paper In Section 2 we recall the necessary combinatorics of oriented Temperley--Lieb diagrams and $p$-Kazhdan--Lusztig polynomials from [@MR2600694; @compan2]. In Section 3 we recall the extended Khovanov arc algebras and the basic algebras of the Hecke categories which will be of interest in this paper. In Sections 4 and 5 we develop the Dyck path combinatorics and lift this to the level of generators and bases of the basic algebras of the Hecke categories. In Section 6 we take a short detour to discuss the notion of *dilation* for our diagram algebras, which will simplify the main proofs significantly. In Section 7 we prove Theorem B of this paper, by lifting the Dyck combinatorics to the level of a presentation of $\mathcal{H}_{m,n}$ over an integral domain $\Bbbk$; we then recast this presentation in terms of the quotient of the path algebra of the ${\rm Ext}$-quiver in the case that $\Bbbk$ is a field. In Section 8, we prove Theorem C. Finally, in Section 9 we use Theorem B to prove the isomorphism of Theorem A. **Acknowledgements 1**. *The first and third authors were funded by EPSRC grant EP/V00090X/1.* # The combinatorics of Kazhdan--Lusztig polynomials We begin by reviewing and unifying the combinatorics of Khovanov arc algebras [@MR2600694; @MR2781018; @MR2955190; @MR2881300] and the Hecke categories of interest in this paper [@compan; @compan2]. ## Cosets, weights and partitions Let $S_n$ denote the symmetric group of degree $n$. Throughout this paper, we will work with the parabolic Coxeter system $(W,P) = (S_{m+n}, S_m \times S_n)$. 
We label the simple reflections with the slightly unusual subscripts $s_i, \, -m+1 \leqslant i \leqslant n-1$ so that $P = \langle s_i \, | \, i\neq 0\rangle \leqslant W$. We view $W$ as the group of permutations of the $n+m$ points on a horizontal strip numbered by the half-integers $i\pm \tfrac{1}{2}$, where the simple reflection $s_i$ swaps the points $i-\tfrac{1}{2}$ and $i+\tfrac{1}{2}$ and fixes every other point. The right cosets of $P$ in $W$ can then be identified with labelled horizontal strips called weights, where each point $i\pm \tfrac{1}{2}$ is labelled by either $\wedge$ or $\vee$ in such a way that the total number of $\wedge$ is equal to $m$ (and so the total number of $\vee$ is equal to $n$). Specifically, the trivial coset $P$ is represented by the weight with negative points labelled by $\wedge$ and positive points labelled by $\vee$. The other cosets are obtained by permuting the labels of the identity weight. An example is given on the left-hand side of Figure [\[typeAtiling-long\]](#typeAtiling-long){reference-type="ref" reference="typeAtiling-long"}.
$$\begin{tikzpicture} [scale=0.7] \path(0,0)--++(135:2.25)--++(-135:2.25) --++(90:-0.16) node {$\wedge$}; \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); %\draw[densely dotted, thick](minus1NW)--++(180:0.5); %\draw[densely dotted, thick](minus1SW)--++(180:0.5); % \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); %\draw[densely dotted, thick](minus1NE)--++( 0:0.5); %\draw[densely dotted, thick](minus1SE)--++( 0:0.5); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, 
fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2.5) ++(135:7.5) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:3) ++(45:7) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[ thick ] (corner4) --++( 0:1.41*0.5) coordinate (RIGHT1) (corner3) --++( 0:1.41*0.5) coordinate (RIGHT2) ; \draw[thick] (corner2)--(corner3) ; %\draw[thick,densely dotted] (corner1) --++(180:0.5); %\draw[thick,densely dotted] (corner2) --++(180:0.5); %\draw[thick,densely dotted] (RIGHT1) --++( 0:0.5); %\draw[thick,densely dotted] (RIGHT2) --++( 0:0.5); \clip(corner1)--(corner2)--++(90:0.3)--++(0:7.5)-- (RIGHT2) -- (RIGHT1) --++(90:-0.3)--++(180:6.5) --(corner1); \path[name path=pathd1] (d1)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd1 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:0.1) node { $\wedge$ }; \path[name path=pathd3] (d3)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd3 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathd5] (d5)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd5 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:0.1) node { $\vee$ }; \path[name path=pathd7] (d7)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd7 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathd9] (d9)--++(90:7); 
\path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd9 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.16) node { $\vee$ }; \path[name path=pathc1] (c1)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc1 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.16) node { $\vee$ }; \path[name path=pathc3] (c3)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc3 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.16) node { $\vee$ }; \path[name path=pathc5] (c5)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc5 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathc7] (c7)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc7 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:0.16) node { $\wedge$ }; \path(A)--++(45:0.5)--++(-45:0.5)--++(90:0.1) coordinate (TOPGUY) node { $\vee$ }; \path[name path=pathd1] (d1)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd1 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathd3] (d3)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd3 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathd5] (d5)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd5 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathd7] (d7)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd7 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.16) node { $\wedge$ }; \path[name 
path=pathd9] (d9)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd9 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.16) node { $\wedge$ }; \path[name path=pathc1] (c1)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc1 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.16) node { $\vee$ }; \path[name path=pathc3] (c3)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc3 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.16) node { $\vee$ }; \path[name path=pathc5] (c5)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc5 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.16) node { $\vee$ }; \path[name path=pathc7] (c7)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc7 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.16) node { $\vee$ }; \path[name path=pathc9] (c9)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc9 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(45:0.5)--++(-45:0.5)--++(90:0.16) coordinate (BOTTOMGUY) node { $ \vee$ }; \path (BOTTOMGUY)--++(-90:0.1) coordinate (BOTTOMGUY); \path (TOPGUY)--++(-90:0.1) coordinate (TOPGUY); \draw[thick] (TOPGUY)--(BOTTOMGUY); \clip(corner1)--(corner2)--(corner3)--(corner4)--(corner1); \foreach \i in {1,3,5,7,9,11} { \draw[thick](c\i)--++(90:7); \draw[thick](c\i)--++(-90:7); \draw[thick](d\i)--++(90:7); \draw[thick](d\i)--++(-90:7); } \end{scope} \begin{scope} \clip(0,0) --++(45:4)--++(135:1) --++(135:1)--++(-135:1)--++(-135:1)--++(135:2)--++(-135:1)--++(135:1) --++(-135:1)--++(-45:5); %--++(135:5)--++(-135:4)--++(-45:5); \path (0,0) coordinate (origin2); \foreach \i\j in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path 
(origin2)--++(45:0.5*\i) coordinate (a\i); \path (origin2)--++(135:0.5*\i) coordinate (b\j); %\draw[cyan](a\i)--++(135:14); %\draw[cyan](b\j)--++(45:14); %\path (origin2)--++(45:0.5*\i)--++(135:14) coordinate (x\i); %\path (origin2)--++(135:0.5*\i)--++(45:14) coordinate (y\j); } \fill[white] (0,0) --++(45:4)--++(135:5)--++(-135:4)--++(-45:5); %--++(135:1)--++(-135:2) % --++(135:3)--++(135:1) % --++(-135:1)--++(-135:1) % --++(-45:5); \draw[very thick,magenta](c1) --++(135:1) coordinate (cC1); \draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); \draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); \draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); \draw[very thick,gray](cC1) --++(135:1) coordinate (cC1); \draw[very thick,cyan](c3) --++(135:1) coordinate (cC1); \draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); \draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); \draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); \draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); \draw[very thick,violet](c5) --++(135:1) coordinate (cC1); \draw[very thick,cyan](cC1) --++(135:1) coordinate (cC1); \draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); \draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); \draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); \draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); \draw[very thick,pink](c7) --++(135:1) coordinate (cC1); \draw[very thick,violet](cC1) --++(135:1) coordinate (cC1); \draw[very thick,cyan](cC1) --++(135:1) coordinate (cC1); \draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); \draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); \draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); \draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](d1) --++(45:1) coordinate (x1); \draw[very thick,magenta](d1) --++(45:1) coordinate (x1); \draw[very thick,cyan](x1) 
--++(45:1) coordinate (x1); \draw[very thick,violet](x1) --++(45:1) coordinate (x1); \draw[very thick,pink](x1) --++(45:1) coordinate (x1); \draw[very thick,darkgreen](d3) --++(45:1) coordinate (x1); \draw[very thick,magenta](x1) --++(45:1) coordinate (x1); \draw[very thick,cyan](x1) --++(45:1) coordinate (x1); \draw[very thick,violet](x1) --++(45:1) coordinate (x1); \draw[very thick,pink](x1) --++(45:1) coordinate (x1); \draw[very thick,orange](d5) --++(45:1) coordinate (x1); \draw[very thick,darkgreen](x1) --++(45:1) coordinate (x1); \draw[very thick,magenta](x1) --++(45:1) coordinate (x1); \draw[very thick,cyan](x1) --++(45:1) coordinate (x1); \draw[very thick,violet](x1) --++(45:1) coordinate (x1); \draw[very thick,pink](x1) --++(45:1) coordinate (x1); \draw[very thick,lime!80!black](d7) --++(45:1) coordinate (x1); \draw[very thick,orange](x1) --++(45:1) coordinate (x1); \path(x1) --++(45:1) coordinate (x1); \path(x1) --++(45:1) coordinate (x1); \path(x1) --++(45:1) coordinate (x1); \path(x1) --++(45:1) coordinate (x1); \path(x1) --++(45:1) coordinate (x1); % \draw[very thick,gray](d9) --++(45:1) coordinate (x1); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \fill[cyan!5,opacity=0.01,rounded corners] (c0)--++(135:5)--++(45:4) --++(-45:5)--++(-135:4)--++(135:1); } \end{scope} \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[thick] (origin2)--(corner1) --++( 0:1.41) (corner4)--(origin2)--++( 0:1.41); %\draw[thick] (corner2)--(corner3) ; %\draw[thick,densely dotted] (corner1) --++(180:0.5); %\draw[thick,densely dotted] (corner2) --++(180:0.5); %\draw[thick,densely dotted] (corner3) --++( 0:0.5); 
%\draw[thick,densely dotted] (corner4) --++( 0:0.5); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \end{tikzpicture}\qquad\quad \begin{tikzpicture} [scale=0.7] \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*2)--++(-45:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:-0.18) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*1)--++(-45:0.5*1) %%% --++(90:-0.16) node {$\wedge$}; % --++(-90:0.18) node {$\up$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*3) --++(135:0.5*3) %%% --++(90:-0.16) node {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*5) --++(135:0.5*5) %%% --++(90:0.16) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*6) --++(135:0.5*6) %%% --++(90:-0.16) node {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*7) --++(135:0.5*7) %%% --++(90:0.16) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*2)--++(135:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:-0.18) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*1)--++(135:0.5*1) %%% % --++(90:0.16) node {$\down$}; --++(-90:-0.18) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*4)--++(135:0.5*4) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node {$\wedge$}; \draw[densely dotted](0,0)--++(45:2)--++(135:5); \draw[densely dotted](0,0)--++(45:3)--++(135:5); \draw[densely dotted](0,0)--++(45:4)--++(135:5); \draw[densely dotted](0,0)--++(135:3)--++(45:5); \draw[densely dotted](0,0)--++(135:4)--++(45:5); \draw[densely dotted](0,0)--++(135:2)--++(45:5); \draw[densely dotted](0,0)--++(135:1)--++(45:5); \path (0,-0.7)--++(135:0.5)--++(-135:0.5) 
coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \draw[very thick](plus4)--(minus4); \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); % %\draw[densely dotted, thick](minus1NE)--++( 0:0.5); %\draw[densely dotted, thick](minus1SE)--++( 0:0.5); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) 
coordinate(corner1); \path(origin2) ++(45:2.5) ++(135:7.5) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:3) ++(45:7) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[ thick ] (corner4) --++( 0:1.41*0.5) coordinate (RIGHT1) (corner3) --++( 0:1.41*0.5) coordinate (RIGHT2) ; \draw[thick] (corner2)--(corner3) ; \draw[thick, ] (corner1) --++(180:0.5); \draw[thick, ] (corner2) --++(180:0.5); \draw[thick, ] (RIGHT1) --++( 0:0.5); \draw[thick, ] (RIGHT2) --++( 0:0.5); % \clip(corner1)--(corner2)--++(90:0.3)--++(0:7.5)-- (RIGHT2) -- (RIGHT1) % --++(90:-0.3)--++(180:6.5) --(corner1); \end{scope} \begin{scope} \draw[very thick] (0,0) --++(45:4)--++(135:1) --++(135:1)--++(-135:1)--++(-135:1)--++(135:2)--++(-135:1)--++(135:1) --++(-135:1)--++(-45:5); \draw[densely dotted] (0,0) --++(45:5) --++(135:5)--++(-135:5); \clip(0,0) --++(45:4)--++(135:1) --++(135:1)--++(-135:1)--++(-135:1)--++(135:2)--++(-135:1)--++(135:1) --++(-135:1)--++(-45:5); %--++(135:5)--++(-135:4)--++(-45:5); \path (0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6}{ \draw[densely dotted](d\i)--++(135:8); \draw(d\i)--++(135:22); \draw(c\i)--++(45:22);} \fill[magenta](0,0)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[very thick](0,0)--++(45:1) coordinate (Xhere) --++(135:5); \draw[very thick, fill=cyan](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[very thick](0,0)--++(45:2) coordinate (Xhere) --++(135:4) coordinate (Xhere2); \draw[very thick, fill=violet](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[very thick](0,0)--++(45:3) coordinate (Xhere) --++(135:3) coordinate (Xhere2); \draw[very thick, 
fill=pink](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[densely dotted] (Xhere2)--++(135:6); \draw[very thick](0,0)--++(45:4) coordinate (Xhere) --++(135:2) coordinate (Xhere2); \draw[very thick, fill=pink](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[densely dotted] (Xhere2)--++(135:4); \draw[very thick](0,0)--++(135:1) coordinate (Xhere) --++(45:4) coordinate (Xhere2); \draw[very thick, fill=darkgreen](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[densely dotted] (Xhere2)--++(135:4); \draw[very thick](0,0)--++(135:2) coordinate (Xhere) --++(45:4) coordinate (Xhere2); \draw[very thick, fill=orange](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[densely dotted] (Xhere2)--++(135:4); \draw[very thick](0,0)--++(135:3) coordinate (Xhere) --++(45:2) coordinate (Xhere2); \draw[very thick, fill=lime!80!black](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[densely dotted] (Xhere2)--++(135:4); \draw[very thick](0,0)--++(135:4) coordinate (Xhere) --++(45:2) coordinate (Xhere2); \draw[very thick, fill=gray](Xhere)--++(45:1)--++(135:1)--++(45:1)--++(135:1) --++(-135:1)--++(-45:1)--++(-135:1)--++(-45:1); \draw[densely dotted] (Xhere2)--++(135:4); \foreach \i\j in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (a\i); \path (origin2)--++(135:0.5*\i) coordinate (b\j); } \end{scope} \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[thick] (origin2)--(corner1) --++( 0:1.41) (corner4)--(origin2)--++( 0:1.41); 
%\draw[thick] (corner2)--(corner3) ; %\draw[thick,densely dotted] (corner1) --++(180:0.5); %\draw[thick,densely dotted] (corner2) --++(180:0.5); %\draw[thick,densely dotted] (corner3) --++( 0:0.5); %\draw[thick,densely dotted] (corner4) --++( 0:0.5); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \end{tikzpicture}$$ We denote by ${^PW}$ the set of minimal length right coset representatives of $P$ in $W$. Recall that an element $w\in W$ belongs to ${^PW}$ precisely when every reduced expression of $w$ starts with $s_0$. This implies that $w$ must be fully commutative, that is, no reduced expression for $w$ contains a subword of the form $s_i s_{i\pm 1}s_i$ for some $i$. It follows that the elements of ${^PW}$ can also be represented by partitions that fit into an $m\times n$ rectangle. An example of the correspondence between $w\in {^PW}$, its weight diagram and the associated partition is illustrated in Figure [\[typeAtiling-long\]](#typeAtiling-long){reference-type="ref" reference="typeAtiling-long"}. Formally, a partition $\lambda$ of $\ell$ is defined to be a weakly decreasing sequence of non-negative integers $\lambda = (\lambda_1, \lambda_2, \ldots )$ which sum to $\ell$. We call $\ell (\lambda) := \ell = \sum_{i}\lambda_i$ the length of the partition $\lambda$. We define the Young diagram of a partition to be the collection of tiles $$[\lambda]=\{[r,c] \mid 1\leqslant c \leqslant\lambda_r\}$$ depicted in Russian style with rows at $135^\circ$ and columns at $45^\circ$. We identify a partition with its Young diagram. We let $\lambda^t$ denote the transpose partition given by reflection of the Russian Young diagram through the vertical axis.
Given $m,n\in {\mathbb N}$ we let ${\mathscr P}_{m,n}$ denote the set of all partitions which fit into an $m\times n$ rectangle, that is $${\mathscr P}_{m,n}= \{ \lambda\mid \lambda_1\leqslant m, \lambda_1^t \leqslant n\}.$$ For $\lambda \in {\mathscr P}_{m,n}$, the $x$-coordinate of a tile $[r,c] \in \lambda$ is equal to $r-c \in \{-m+1, -m+2 , \ldots , n-2, n-1\}$ and we define this $x$-coordinate to be the content (or "colour") of the tile and we write ${\sf cont}[r,c]=r-c$. For a partition $\lambda$ of $\ell$, we define a standard tableau of shape $\lambda$ to be a bijection $\mathsf{t}$ from the set of tiles of $\lambda$ to the set $\{1, 2, \ldots , \ell\}$ such that for each $1 \leqslant k \leqslant\ell$, the set of tiles $\mathsf{t}^{-1}(\{1, \ldots , k\})$ is a partition. We can view $\mathsf{t}$ as a filling of the tiles of $\lambda$ by the numbers $1$ to $\ell$ such that the numbers increase along rows and columns. We denote by ${\rm Std}(\lambda)$ the set of all standard tableaux of shape $\lambda$. There is one particular standard tableau that we will be using throughout this paper, which is defined as follows. We let $\mathsf{t}_{\lambda}\in {\rm Std}(\lambda)$ denote the tableau in which we fill the tiles of $\lambda$ according to increasing $y$-coordinate and then refine according to increasing $x$-coordinate. An example is depicted in [\[super\]](#super){reference-type="ref" reference="super"}. For $\lambda$ a partition of $\ell$, we define the content sequence of $\mathsf{s}\in {\rm Std}(\lambda)$ to be the element of ${\mathbb Z}^\ell$ given by reading the contents of the boxes in order. Under the correspondence between ${^PW}$ and ${\mathscr P}_{m,n}$ described above, the content $i$ of each tile of $\lambda \in {\mathscr P}_{m,n}$ corresponds to the subscript of a simple reflection $s_i$. So we will often refer to the simple reflection ${{{\color{cyan}\bm\tau}}}= s_i$ as the content of the tile.
Moreover, standard tableaux $\mathsf{t}\in {\rm Std}(\lambda)$ correspond precisely to reduced expressions $\lambda = s_{i_1}s_{i_2} \ldots s_{i_{\ell}}$ where $\mathsf{t}^{-1}(j) = [r_j, c_j]$ with ${\sf cont}[r_j, c_j] = i_j$ for each $1\leqslant j \leqslant\ell$. The Bruhat order on ${^PW}$ becomes simply the inclusion of the Young diagrams of partitions in ${\mathscr P}_{m,n}$. Given $\lambda \in {\mathscr P}_{m,n}$, we define the set ${\rm Add}(\lambda)$ to be the set of all tiles $[r,c]\notin \lambda$ such that $\lambda \cup [r,c]\in {\mathscr P}_{m,n}$. Similarly, we define the set ${\rm Rem}(\lambda)$ to be the set of all tiles $[r,c]\in \lambda$ such that $\lambda \setminus [r,c] \in {\mathscr P}_{m,n}$. Note that a partition $\lambda$ has at most one addable or removable tile of each content. So for $[r,c]\in {\rm Add}(\lambda)$ (respectively $[r,c]\in {\rm Rem}(\lambda)$) with ${{{\color{cyan}\bm\tau}}}= s_{r-c}$ we write $\lambda + {{{\color{cyan}\bm\tau}}}$ (respectively $\lambda - {{{\color{cyan}\bm\tau}}}$) for $\lambda \cup [r,c]$ (respectively $\lambda \setminus [r,c]$).
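To make the conventions above concrete, the following is a minimal Python sketch of the rectangle condition, tile contents and the sets ${\rm Add}(\lambda)$ and ${\rm Rem}(\lambda)$. It is purely illustrative: the function names are our own, and a partition is encoded as the weakly decreasing tuple of its nonzero parts.

```python
def partitions_in_rectangle(m, n):
    """All partitions in P_{m,n}: first part <= m and at most n parts."""
    if n == 0:
        return [()]
    out = [()]
    for first in range(1, m + 1):
        for rest in partitions_in_rectangle(first, n - 1):
            out.append((first,) + rest)
    return out

def content(r, c):
    """The content (colour) cont[r, c] = r - c of the tile [r, c]."""
    return r - c

def addable(lam, m, n):
    """Add(lam): tiles [r, c] not in lam with lam + [r, c] still in P_{m,n}."""
    rows = list(lam) + [0]                    # allow a new row below the last
    tiles = []
    for r in range(1, min(len(rows), n) + 1): # at most n rows
        c = rows[r - 1] + 1                   # first empty column in row r
        bound = m if r == 1 else rows[r - 2]  # stay weakly decreasing and <= m
        if c <= bound and c <= m:
            tiles.append((r, c))
    return tiles

def removable(lam):
    """Rem(lam): tiles [r, c] of lam whose removal leaves a partition."""
    return [(r, lam[r - 1]) for r in range(1, len(lam) + 1)
            if lam[r - 1] > 0 and (r == len(lam) or lam[r - 1] > lam[r])]
```

For example, `partitions_in_rectangle(2, 2)` has the expected $\binom{4}{2}=6$ elements, `addable((2, 1), 2, 2)` is `[(2, 2)]`, and the contents of the tiles returned by `addable` are pairwise distinct, reflecting the fact noted above that a partition has at most one addable or removable tile of each content.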
$$\begin{tikzpicture} [scale=0.7] % \clip (-4,-1.2) rectangle (4,6.8); \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) 
coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2.5) ++(135:7.5) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:3) ++(45:7) coordinate(corner3); \path(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[thick] (origin2)--(corner1) --++( 0:1.41) (corner4)--(origin2)--++( 0:1.41); %\draw[thick] (corner2)--(corner3) ; \draw[thick] (corner1)--(corner4) ; \draw[very thick] (0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2)--(0,0); \end{scope} \draw[very thick] (0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2)--(0,0); \begin{scope} \clip(0,0)--++(135:5)--++(45:5)--++(-45:5) --(0,0); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \end{scope} \draw[very thick] (0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2)--(0,0); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \path(0,0)--++(45:0.5)--++(135:0.5) node {$1$} --++(135:1) node {$2$} coordinate (X) --++(-45:1)--++(45:1) node {$3$} ; \path(X) --++(135:1) node {$4$} coordinate (X) --++(-45:1)--++(45:1) node {$5$}--++(-45:1)--++(45:1) node {$6$} ; \path(X) --++(135:1) node {$7$} coordinate (X) --++(-45:1)--++(45:1) node {$8$}--++(-45:1)--++(45:1) node {$9$}--++(-45:1)--++(45:1) node {$10$} ; \path(X) --++(135:1) node {$11$} coordinate (X) --++(-45:1)--++(45:1) node {$12$}--++(-45:1)--++(45:1) --++(-45:1)--++(45:1) node {$13$} ; \end{tikzpicture}$$ We have seen how to pass from a coset to a weight diagram and a partition. We now explain how to go directly from a weight diagram to a partition. Read the labels of a weight diagram from left to right. 
Starting at the leftmost corner of the $m\times n$ rectangle, take a north-easterly step for each $\vee$ and a south-easterly step for each $\wedge$. We end up at the rightmost corner of the rectangle, having traced out the "northern perimeter" of the Russian Young diagram. In particular, the identity coset corresponds to the weight diagram labelled by $m$ $\wedge$'s followed by $n$ $\vee$'s, tracing the perimeter of the empty partition $\varnothing$. Throughout the paper, we will identify minimal coset representatives with their weight diagrams and partitions. ## Oriented Temperley--Lieb diagrams and Kazhdan--Lusztig polynomials The following definitions come from [@MR2918294]. **Definition 1**. - *To each weight $\lambda$ we associate a cup diagram $\underline{\lambda}$ and a cap diagram $\overline{\lambda}$. To construct $\underline{\lambda}$, repeatedly find a pair of vertices labelled $\vee$ $\wedge$ in order from left to right that are neighbours in the sense that there are only vertices already joined by cups in between. Join these new vertices together with a cup. Then repeat the process until there are no more such $\vee$ $\wedge$ pairs. Finally draw rays down at all the remaining $\wedge$ and $\vee$ vertices. The cap diagram $\overline{\lambda}$ is obtained by flipping $\underline{\lambda}$ horizontally. We stress that the vertices of the cup and cap diagrams are not labelled.* - *Let $\lambda$ and $\mu$ be weights. We can glue $\underline{\mu}$ under $\lambda$ to obtain a new diagram $\underline{\mu}\lambda$. We say that $\underline{\mu}\lambda$ is oriented if (i) the vertices at the ends of each cup in $\underline{\mu}$ are labelled by exactly one $\vee$ and one $\wedge$ in the weight $\lambda$ and (ii) it is impossible to find two rays in $\underline{\mu}$ whose top vertices are labelled $\vee$ $\wedge$ in that order from left to right in the weight $\lambda$.
Similarly, we obtain a new diagram $\lambda \overline{\mu}$ by gluing $\overline{\mu}$ on top of $\lambda$. We say that $\lambda \overline{\mu}$ is oriented if $\underline{\mu} \lambda$ is oriented.* - *Let $\lambda$, $\mu$ be weights such that $\underline{\mu}\lambda$ is oriented. We set the degree of the diagram $\underline{\mu}\lambda$ (respectively $\lambda \overline{\mu}$) to be the number of clockwise oriented cups (respectively caps) in the diagram.* - *Let $\lambda, \mu, \nu$ be weights such that $\underline{\mu}\lambda$ and $\lambda\overline{\nu}$ are oriented. Then we form a new diagram $\underline{\mu}\lambda\overline{\nu}$ by gluing $\underline{\mu}$ under and $\overline{\nu}$ on top of $\lambda$. We set ${\rm deg}(\underline{\mu}\lambda\overline{\nu}) = {\rm deg}(\underline{\mu}\lambda)+{\rm deg}(\lambda\overline{\nu})$.* An example is provided in Figure [\[figure4\]](#figure4){reference-type="ref" reference="figure4"}. For the purposes of this paper, for $p\geqslant 0$, we can define the $p$-Kazhdan--Lusztig polynomials of type $(W,P) = (S_{n+m},S_m \times S_n)$ as follows. For $\lambda, \mu \in {\mathscr{P}_{m,n}}$ we set $${^p}n_{\lambda,\mu}= \begin{cases} q^{\deg(\underline{\mu} \lambda)} &\text{if $ \underline{\mu} \lambda$ is oriented}\\ 0 &\text{otherwise.} \end{cases}$$ We refer to [@compan2 Theorem 7.3] and [@compan Theorem A] for a justification of this definition and to [@MR2918294] for the origins of this combinatorics.
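In computational terms, the construction of $\underline{\mu}$ in Definition 1 is nearest-neighbour matching of $\vee\,\wedge$ pairs, exactly as in bracket matching, and ${^p}n_{\lambda,\mu}$ can then be read off from conditions (i) and (ii). The following minimal Python sketch is our own illustration (a weight is encoded as a string of `'v'` for $\vee$ and `'^'` for $\wedge$); it treats a cup as clockwise when its left endpoint is labelled $\wedge$ in $\lambda$, the convention under which $\underline{\mu}\mu$ has degree zero.

```python
def cup_diagram(mu):
    """Cups of mu: match each '^' with the nearest unmatched 'v' to its left;
    unmatched vertices get rays.  Returns the list of cups (left, right)."""
    stack, cups = [], []
    for i, x in enumerate(mu):
        if x == 'v':
            stack.append(i)
        elif stack:                       # x == '^' with a 'v' available
            cups.append((stack.pop(), i))
    return sorted(cups)

def kl_coefficient(lam, mu):
    """Degree of the oriented diagram mu-underbar lam, or None if not oriented,
    so that n_{lam,mu} = q**kl_coefficient(lam, mu) (and 0 when None)."""
    cups = cup_diagram(mu)
    in_cup = {i for cup in cups for i in cup}
    # (i) each cup must carry one 'v' and one '^' in lam
    if any({lam[a], lam[b]} != {'v', '^'} for a, b in cups):
        return None
    # (ii) no two rays labelled 'v' then '^' from left to right in lam
    rays = [i for i in range(len(mu)) if i not in in_cup]
    for j, a in enumerate(rays):
        if lam[a] == 'v' and any(lam[b] == '^' for b in rays[j + 1:]):
            return None
    # degree = number of clockwise cups (left endpoint labelled '^' in lam)
    return sum(1 for a, b in cups if lam[a] == '^')
```

For instance, with $\mu$ encoded as `'v^v^'` one gets `kl_coefficient('v^v^', 'v^v^') == 0` and `kl_coefficient('^vv^', 'v^v^') == 1`, i.e. ${^p}n_{\lambda,\mu} = q$ for the weight $\lambda$ obtained by swapping the labels on one cup.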
$$\begin{tikzpicture} [scale=0.85] % \clip(4,1.82) rectangle ++ (6,-2.7); \path (4,1) coordinate (origin); %\draw[fill=white,rounded corners](origin) rectangle ++(4,1); \path (origin)--++(0.5,0.5) coordinate (origin2); %(6,0) \draw(origin2)--++(0:5.5); \foreach \i in {1,2,3,4,5,...,10} { \path (origin2)--++(0:0.5*\i) coordinate (a\i); \path (origin2)--++(0:0.5*\i)--++(-90:0.00) coordinate (c\i); } \foreach \i in {1,2,3,4,5,...,19} { \path (origin2)--++(0:0.25*\i) --++(-90:0.5) coordinate (b\i); \path (origin2)--++(0:0.25*\i) --++(-90:0.9) coordinate (d\i); } \path(a1) --++(90:0.12) node { $ \vee$} ; \path(a3) --++(90:0.12) node { $ \vee$} ; \path(a2) --++(-90:0.15) node { $ \wedge$} ; \path(a4) --++(-90:0.15) node { $ \wedge$} ; \path(a5) --++(-90:0.15) node { $ \wedge$} ; \path(a6) --++(90:0.12) node { $ \vee$} ; \path(a7) --++(90:0.12) node { $ \vee$} ; \path(a8) --++(-90:0.15) node { $ \wedge$} ; \path(a9) --++(-90:0.15) node { $ \wedge$} ; \path(a10) --++(90:0.12) node { $ \vee$} ; \draw[ thick](c2) to [out=-90,in=0] (b3) to [out=180,in=-90] (c1); \draw[ thick](c4) to [out=-90,in=0] (b7) to [out=180,in=-90] (c3); % \draw[ thick](c9) to [out=-90,in=0] (b17) to [out=180,in=-90] (c8); \draw[ thick](c8) to [out=-90,in=0] (b15) to [out=180,in=-90] (c7); \draw[ thick](c9) to [out=-90,in=0] (d15) to [out=180,in=-90] (c6); \draw[ thick](c5) --++(90:-1); \draw[ thick](c10) --++(90:-1); % \end{tikzpicture}\qquad \begin{tikzpicture} [scale=0.85] % \clip(4,1.82) rectangle ++ (6,-2.7); \path (4,1) coordinate (origin); %\draw[fill=white,rounded corners](origin) rectangle ++(4,1); \path (origin)--++(0.5,0.5) coordinate (origin2); %(6,0) \draw(origin2)--++(0:5.5); \foreach \i in {1,2,3,4,5,...,10} { \path (origin2)--++(0:0.5*\i) coordinate (a\i); \path (origin2)--++(0:0.5*\i)--++(-90:0.00) coordinate (c\i); } \foreach \i in {1,2,3,4,5,...,19} { \path (origin2)--++(0:0.25*\i) --++(-90:0.5) coordinate (b\i); \path (origin2)--++(0:0.25*\i) --++(-90:0.9) coordinate (d\i); } % 
\path(a1) --++(90:0.12) node { $ \down $} ; % \path(a3) --++(90:0.12) node { $ \down $} ; % \path(a2) --++(-90:0.15) node { $ \up $} ; % \path(a4) --++(-90:0.15) node { $ \up $} ; % \path(a5) --++(-90:0.15) node { $ \up $} ; % \path(a6) --++(90:0.12) node { $ \down $} ; % \path(a7) --++(90:0.12) node { $ \down $} ; % \path(a8) --++(-90:0.15) node { $ \up $} ; % \path(a9) --++(-90:0.15) node { $ \up $} ; % \path(a10) --++(90:0.12) node { $ \down $} ; % \draw[ thick](c2) to [out=-90,in=0] (b3) to [out=180,in=-90] (c1); \draw[ thick](c4) to [out=-90,in=0] (b7) to [out=180,in=-90] (c3); % \draw[ thick](c9) to [out=-90,in=0] (b17) to [out=180,in=-90] (c8); \draw[ thick](c8) to [out=-90,in=0] (b15) to [out=180,in=-90] (c7); \draw[ thick](c9) to [out=-90,in=0] (d15) to [out=180,in=-90] (c6); \draw[ thick](c5) --++(90:-1); \draw[ thick](c10) --++(90:-1); % \end{tikzpicture}$$ It is clear that for a fixed $\mu\in {\mathscr{P}_{m,n}}$, the diagram $\underline{\mu}\lambda$ is oriented if and only if the weight $\lambda$ is obtained from the weight $\mu$ by swapping the labels on some of the pairs of vertices connected by a cup in $\underline{\mu}$. Moreover, in this case the degree of $\underline{\mu}\lambda$ is precisely the number of such swapped pairs. See [\[cref-it2\]](#cref-it2){reference-type="ref" reference="cref-it2"} for an example of a cup diagram of degree 8. 
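This characterisation makes it straightforward to enumerate all weights $\lambda$ for which $\underline{\mu}\lambda$ is oriented: flip the labels on an arbitrary subset of the cups of $\underline{\mu}$, the degree being the size of the chosen subset. A self-contained Python sketch (our own illustration; the `'v'`/`'^'` string encoding of weights is an assumption of ours, not notation from the paper):

```python
from itertools import combinations

def oriented_weights(mu):
    """All pairs (lam, degree) such that the glued diagram (mu-cup under lam)
    is oriented, obtained by swapping the labels on a subset of the cups of
    the weight mu, given as a string over 'v' and '^'."""
    # find the cups of mu by stack-based bracket matching: 'v' opens, '^' closes
    stack, cups = [], []
    for i, sym in enumerate(mu):
        if sym == 'v':
            stack.append(i)
        elif stack:
            cups.append((stack.pop(), i))
    result = []
    for k in range(len(cups) + 1):
        for subset in combinations(cups, k):
            lam = list(mu)
            for i, j in subset:               # reverse the orientation of this cup
                lam[i], lam[j] = lam[j], lam[i]
            result.append((''.join(lam), k))  # degree = number of flipped cups
    return result
```

In particular, when $\underline{\mu}$ has $c$ cups there are exactly $2^c$ such weights $\lambda$, with degree generating function $(1+q)^c$.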
$$\begin{tikzpicture} [yscale=0.6 ,xscale=0.6 ] \draw[lime!80!black, thick] (6,0) to [out=-90,in=180] (6.5,-0.6) to [out=0,in=-90] (7,0) ; \draw[ thick] (5,0) to [out=-90,in=180] (6.5,-0.9) to [out=0,in=-90] (8,0) ; \draw[ orange, thick] (4,0) to [out=-90,in=180] (6.5,-1.2) to [out=0,in=-90] (9,0) ; \draw[ , thick] (3,0) to [out=-90,in=180] (9.5,-1.3-0.2-0.2-0.2) to [out=0,in=-90] (16,0) ; \draw[violet, thick] (2,0) to [out=-90,in=180] (9.5,-1.5-0.3-0.2-0.2) to [out=0,in=-90] (17,0) ; \draw[ magenta, thick] (1,0) to [out=-90,in=180] (9.5,-1.7-0.2-0.2-0.2-0.2) to [out=0,in=-90] (18,0) ; \draw[ thick] (012,0) to [out=-90,in=180] (12.5,-0.6) to [out=0,in=-90] (13,0) ; \draw[ cyan, thick] (011,0) to [out=-90,in=180] (12.5,-0.9) to [out=0,in=-90] (14,0) ; \draw[ gray, thick] (010,0) to [out=-90,in=180] (12.5,-1.2) to [out=0,in=-90] (15,0) ; \draw[ pink, thick] (20,0) to [out=-90,in=180] (20.5,-0.6) to [out=0,in=-90] (21,0) ; \draw[ brown, thick] (19,0) to [out=-90,in=180] (20.5,-0.9) to [out=0,in=-90] (22,0) ; \draw(-0.5,0)--++(0:23); \foreach \i in {0,3,5,7,9,12,14,15,17,18,21,22} { \draw (\i,0.11) node {$\scriptstyle\vee$};} \foreach \i in {1,2,4,6,8,10,11,13,16,19,20} { \draw(\i,-0.11) node {$\scriptstyle\wedge$};} \draw[thick](0,0)--++(-90:2); \draw[densely dotted,thick](0,-2)--++(-90:0.3); \end{tikzpicture}$$ There is an alternative construction of the cup diagram $\underline{\mu}$ as the top half of a Temperley--Lieb diagram $e_\mu$. An $(m+n)$-Temperley--Lieb diagram is a rectangular frame with, in our case, $m+n$ vertices along the top and $m+n$ along the bottom, which are paired off by non-crossing strands. We refer to a strand connecting a top vertex and a bottom vertex as a propagating strand, to any strand connecting two top vertices as a cup, and to any strand connecting two bottom vertices as a cap.
For $\mu\in {\mathscr{P}_{m,n}}$, the Temperley--Lieb diagram $e_\mu$ is obtained by starting from the partition $\mu$ and taking the product of the 'Temperley--Lieb generator' in each of its tiles. The cup diagram $\underline{\mu}$ is then simply the top half of the Temperley--Lieb diagram $e_\mu$. This is illustrated in [\[XXX\]](#XXX){reference-type="ref" reference="XXX"}. For more details, see [@compan2]. $$\begin{tikzpicture} [scale=0.7] \clip (-4,-1.2) rectangle (4,6.8); \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, 
fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); \path(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[ thick ] (corner4) --++( 0:1.41*0.5) coordinate (RIGHT1) (corner3) --++( 0:1.41*0.5) coordinate (RIGHT2) ; \draw[thick] (corner2)--(corner3) ; \clip(corner1)--(corner2)--++(90:0.3)--++(0:7.5)-- (RIGHT2) -- (RIGHT1) --++(90:-0.3)--++(180:6.5) --(corner1); \path (BOTTOMGUY)--++(-90:0.05) coordinate (BOTTOMGUY); \path (TOPGUY)--++(-90:0.7) coordinate (TOPGUY); \draw[thick] (TOPGUY)--(BOTTOMGUY); \clip(corner1)--(corner2)--(corner3)--(corner4)--(corner1); \foreach \i in {1,3,5,7,9,11} { \draw[thick](c\i)--++(90:7); \draw[thick](c\i)--++(-90:7); \draw[thick](d\i)--++(90:7); \draw[thick](d\i)--++(-90:7); } \end{scope} \begin{scope} \clip(0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2); \foreach \i\j in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (a\i); \path (origin2)--++(135:0.5*\i) coordinate (b\j); } \fill[white] (0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%1row \draw(a1) ++(135:0.5) ++(-135:0.5) coordinate(next1); \draw[very thick,magenta](a1) to [out=90,in=90] (next1); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%2row \draw(a3) ++(135:0.5) ++(-135:0.5) 
coordinate(upnext1); \draw[very thick,cyan](a3) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,magenta](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick,darkgreen](upnext2) to [out=90,in=90] (upnext3); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%3rdrow \draw(a5) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,violet](a5) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,cyan](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick,magenta](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, darkgreen](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \draw[very thick,orange](upnext4) to [out=90,in=90] (upnext5); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%4rdrow \draw(a7) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,pink](a7) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,violet](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[ very thick , cyan](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, magenta](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \draw[very thick,darkgreen](upnext4) to [out=90,in=90] (upnext5); \draw(upnext5) ++(135:0.5) ++(-135:0.5) coordinate(upnext6); \draw[very thick, orange](upnext5) to [out=-90,in=-90] (upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,lime](upnext6) to [out=90,in=90] (upnext7); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%4rdrow \draw(a9) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \path(a9) to 
[out=90,in=90] (upnext1); \draw(a9) ++(135:1.5) ++(-135:0.5) coordinate(upnext1X); \draw(upnext1) ++(135:1.5) ++(-135:0.5) coordinate(upnext2X); \draw[very thick,violet](upnext1X) to [out=-90,in=-90] (upnext2X); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,pink](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick, violet](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, cyan](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \path(upnext4) to [out=90,in=90] (upnext5); \draw(upnext5) ++(135:0.5) ++(-135:0.5) coordinate(upnext6); \draw[very thick, darkgreen ](upnext5) to [out=-90,in=-90] (upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,orange ](upnext6) to [out=90,in=90] (upnext7); \draw(upnext7) ++(135:0.5) ++(-135:0.5) coordinate(upnext8); \draw[very thick,lime](upnext7) to [out=-90,in=-90] (upnext8); \draw(upnext8) ++(135:0.5) ++(-135:0.5) coordinate(upnext9); \draw[very thick, gray ](upnext8) to [out=90,in=90] (upnext9); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%5rdrow \path(a4) --++(135:3.5) coordinate(upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,orange ](upnext6) to[out=-90,in=-90] (upnext7); \draw(upnext7) ++(135:0.5) ++(-135:0.5) coordinate(upnext8); %\path(upnext7) to [out=90,in=90] (upnext8); \draw[very thick,lime ] (upnext7) to [out=90,in=90] (upnext8); \draw(upnext8) ++(135:0.5) ++(-135:0.5) coordinate(upnext9); \draw[very thick, gray ](upnext8) to[out=-90,in=-90] (upnext9); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%5rdrow \path(a4) --++(135:4.5) coordinate(upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,lime ](upnext6) to[out=-90,in=-90] (upnext7); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:1*\i) coordinate (c\i); 
\path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \end{scope} \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[thick] (origin2)--(corner1) --++( 0:1.41) (corner4)--(origin2)--++( 0:1.41); \draw[thick] (corner2)--(corner3) ; %\draw[thick,densely dotted] (corner1) --++(180:0.5); %\draw[thick,densely dotted] (corner2) --++(180:0.5); %\draw[thick,densely dotted] (corner3) --++( 0:0.5); %\draw[thick,densely dotted] (corner4) --++( 0:0.5); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \end{tikzpicture} \qquad % % % % % % \begin{tikzpicture} [scale=0.7] \clip (-4,-1.2) rectangle (4,6.8); \path(origin2) ++(45:2) ++(135:7)--++(90:-2.25) coordinate(corner2); \path(origin2) ++(135:2.5) ++(45:6.5)--++(90:-2.25) --++(45:0.5) --++(-45:0.5) coordinate(corner3); \draw[thick,densely dotted ](corner2)--(corner3); % % \path(0,0)--++(135:0.25)--++(-135:0.25) % --++(135:2)--++(-135:2) % coordinate (1) node {$\up$}; \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path(0,0)--++(135:0.25)--++(-135:0.25) --++(135:2)--++(-135:2) --++(45:0.5*\i)--++(-45:0.5*\i) coordinate (\i) coordinate (Z\i) ; } \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path(0,0) --++(135:2)--++(-135:2) --++(45:0.5*\i)--++(-45:0.5*\i)--++(90:0.3) coordinate (X\i) ; } \draw[thick] (4) to [out=90,in=180] (X4) to [out=0,in=90] (5); \path(X4)--++(90:0.3) coordinate (X4); \draw[thick] (3) to [out=90,in=180] (X4) to [out=0,in=90] (6); \path(X4)--++(90:0.3) coordinate (X4); \draw[thick] (2) to [out=90,in=180] (X4) to [out=0,in=90] (7); \path(X4)--++(90:0.3) coordinate (X4); \draw[thick] (1) to [out=90,in=180] (X4) to [out=0,in=90] (8); \foreach \i in 
{0,1,2,3,4,5,6,7,8,9}{ \path(0,0) --++(45:4.5) --++(135:4.5) --++(135:0.25)--++(-135:0.25) --++(135:2)--++(-135:2) --++(45:0.5*\i)--++(-45:0.5*\i) coordinate (\i) ; } \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path(0,0) --++(45:4.5) --++(135:4.5) --++(135:2)--++(-135:2) --++(45:0.5*\i)--++(-45:0.5*\i)--++(-90:0.4) coordinate (Y\i) ; } % %\path(Y4)--++(90:0.3) coordinate (Y4); \draw[thick] (0) to [out=-90,in=180] (Y0) to [out=0,in=-90] (1); \draw[thick] (2) to [out=-90,in=180] (Y2) to [out=0,in=-90] (3); \draw[thick] (6) to [out=-90,in=180] (Y6) to [out=0,in=-90] (7); \path(Y6)--++(-90:0.4) coordinate (Y6); \draw[thick] (5) to [out=-90,in=180] (Y6) to [out=0,in=-90] (8); \draw[thick] (4) --++(-90:2) to [out=-90,in=90] (Z0); \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] 
(start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); \path(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[ thick ] (corner4) --++( 0:1.41*0.5) coordinate (RIGHT1) (corner3) --++( 0:1.41*0.5) coordinate (RIGHT2) ; \draw[thick] (corner2)--(corner3) ; \clip(corner1)--(corner2)--++(90:0.3)--++(0:7.5)-- (RIGHT2) -- (RIGHT1) --++(90:-0.3)--++(180:6.5) --(corner1); \path (BOTTOMGUY) coordinate (BOTTOMGUY); \path (TOPGUY); \draw[thick] (TOPGUY)--(BOTTOMGUY); \clip(corner1)--(corner2)--(corner3)--(corner4)--(corner1); % % \foreach \i in {1,3,5,7,9,11} %{ % \draw[thick](c\i)--++(90:7); % \draw[thick](c\i)--++(-90:7); %\draw[thick](d\i)--++(90:7); %\draw[thick](d\i)--++(-90:7); % } % \end{scope} \draw[thick](corner1)-- (corner4)--(corner1); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \end{tikzpicture}$$ Now, let $d$ be any $(m+n)$-Temperley--Lieb diagram, and $\lambda$, $\mu$ be weights, then we can form a new diagram $\lambda d \mu$ by gluing the weight $\lambda$ under $d$ and $\mu$ on top of $d$. 
We say that $\lambda d \mu$ is an oriented Temperley--Lieb diagram if, for each propagating strand in $d$, its two vertices are either both $\vee$ symbols or both $\wedge$ symbols, and, for each cup or cap in $d$, its two vertices consist of precisely one $\vee$ symbol and one $\wedge$ symbol. It is easy to see that for $\lambda, \mu\in {\mathscr{P}_{m,n}}$ we have that $\underline{\mu}\lambda$ is oriented if and only if $\varnothing e_\mu \lambda$ is an oriented Temperley--Lieb diagram. Throughout the paper, we will always view the oriented Temperley--Lieb diagrams $\varnothing e_\mu \lambda$ on the Young diagram of the partition $\mu$ as illustrated in Figure [\[typeAtiling-long2\]](#typeAtiling-long2){reference-type="ref" reference="typeAtiling-long2"}. It was shown in [@compan2] that we can define a graded algebra structure on the space spanned by all oriented Temperley--Lieb diagrams. A crucial ingredient in the construction of the light leaves basis for the Hecke category (see Definition 5.1 below) comes from writing each oriented Temperley--Lieb diagram $\varnothing e_\mu \lambda$ as a product of generators for this algebra. We will not need the explicit presentation of this graded algebra here and will instead view this product of generators as an 'oriented tableau' $\mathsf{t}_\mu^\lambda$. This oriented tableau $\mathsf{t}_\mu^\lambda$ is obtained by assigning to each tile of the partition $\mu$ not only a number between $1$ and $\ell(\mu)$, defined by the tableau $\mathsf{t}_\mu$, but also one of four possible orientations determined by the weight $\lambda$.

**Definition 2**. *Let $\lambda, \mu \in {\mathscr{P}_{m,n}}$ be such that $\underline{\mu}\lambda$ is oriented. Draw the Temperley--Lieb diagram $e_\mu$ on the tiles of $\mu$ as in Figure [\[XXX\]](#XXX){reference-type="ref" reference="XXX"}. Gluing $\varnothing$ and $\lambda$ on the bottom and top of the diagram respectively defines one of four possible orientations on each tile of $\mu$.
We define the orientation label of a tile as follows: $$\begin{tikzpicture}[xscale=-0.9,yscale=0.9] \draw[very thick,densely dotted](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (start2); \path(start2)--++(45:1) coordinate (X1); \path(start2)--++(135:1) coordinate (X2); \draw[very thick,gray ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:1)--++(135:1) coordinate (X3) ; \path(X2)--++(45:1)--++(135:1) coordinate (X4); \draw[very thick ,gray] (X3) to [out=-135,in= -45] (X4); \path(X1)--++(135:0.5)--++(-135:0.5) --++(90:0.29)--++(180:0.15) coordinate (X); \path(X1)--++(135:0.5)--++(135:0.5) node {$ 1$}; \draw[very thick, gray](X)--++(30:0.3); \draw[very thick, gray](X)--++(-30:0.3); \path(X3)--++(135:0.5)--++(-135:0.5) --++(-90:0.29)--++(180:0.15) coordinate (X); \path(X3)--++(135:0.5)--++(-135:0.5) --++(90:0.05) node {$ $}; \draw[very thick, gray](X)--++(30:0.3); \draw[very thick, gray](X)--++(-30:0.3); \end{tikzpicture} \qquad \begin{tikzpicture}[xscale=-0.9,yscale=0.9] \draw[very thick,densely dotted](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (start2); \path(start2)--++(45:1) coordinate (X1); \path(start2)--++(135:1) coordinate (X2); \draw[very thick,gray ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:1)--++(135:1) coordinate (X3) ; \path(X2)--++(45:1)--++(135:1) coordinate (X4); \draw[very thick ,gray] (X3) to [out=-135,in= -45] (X4);; \path(X1)--++(135:0.5)--++(-135:0.5) --++(90:0.29)--++(0:0.15) coordinate (X); \path(X1)--++(135:0.5)--++(135:0.5) --++(-90:0.05) node {$ f$}; \draw[very thick, gray](X)--++(150:0.3); \draw[very thick, gray](X)--++(-150:0.3); \path(X3)--++(135:0.5)--++(-135:0.5) --++(-90:0.29)--++(180:0.15) coordinate (X); \draw[very thick, gray](X)--++(30:0.3); \draw[very thick, gray](X)--++(-30:0.3); \end{tikzpicture} \qquad \begin{tikzpicture}[yscale=0.9,xscale=0.9] \draw[very thick,densely dotted](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (start2); 
\path(start2)--++(45:1) coordinate (X1); \path(start2)--++(135:1) coordinate (X2); \draw[very thick,gray ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:1)--++(135:1) coordinate (X3) ; \path(X2)--++(45:1)--++(135:1) coordinate (X4); \draw[very thick ,gray] (X3) to [out=-135,in= -45] (X4);; \path(X1)--++(135:0.5)--++(-135:0.5) --++(90:0.29)--++(0:0.15) coordinate (X); \path(X1)--++(135:0.5)--++(-135:0.5) --++(-90:0.05) node {$ $}; \draw[very thick, gray](X)--++(150:0.3); \draw[very thick, gray](X)--++(-150:0.3); \path(X3)--++(135:0.5)--++(-135:0.5) --++(-90:0.29)--++(180:0.15) coordinate (X); \path(X3)--++(-135:0.5)--++(-135:0.5) node {$s$}; \draw[very thick, gray](X)--++(30:0.3); \draw[very thick, gray](X)--++(-30:0.3); \end{tikzpicture} \qquad \begin{tikzpicture}[yscale=0.9,xscale=0.9] \draw[very thick,densely dotted](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (start2); \path(start2)--++(45:1) coordinate (X1); \path(start2)--++(135:1) coordinate (X2); \draw[very thick,gray ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:1)--++(135:1) coordinate (X3) ; \path(X2)--++(45:1)--++(135:1) coordinate (X4); \draw[very thick ,gray] (X3) to [out=-135,in= -45] (X4);; \path(X1)--++(135:0.5)--++(-135:0.5) --++(90:0.29)--++(180:0.15) coordinate (X); \path(X1)--++(135:0.5)--++(135:0.5) node {$ sf$}; \draw[very thick, gray](X)--++(30:0.3); \draw[very thick, gray](X)--++(-30:0.3); \path(X3)--++(135:0.5)--++(-135:0.5) --++(-90:0.29)--++(180:0.15) coordinate (X); %\path(X3)--++(135:0.5)--++(-135:0.5) %--++(90:0.05) node {$s$}; \draw[very thick, gray](X)--++(30:0.3); \draw[very thick, gray](X)--++(-30:0.3); \end{tikzpicture}$$* *We then define the oriented tableau $\mathsf{t}_\mu^\lambda$ to be the map which assigns to each tile $[r,c]\in \mu$ a pair $(k, x)$, where $k=\mathsf{t}_\mu ([r,c])$ and $x\in \{ 1, s, f, sf\}$ is the orientation label of the tile $[r,c]$.* $$\begin{tikzpicture} [scale=0.7] \path(0,0)coordinate (origin); \path(0,0)coordinate
(origin2); %\path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); % % %\path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); % % % %\path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); % % % %\path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); % % % % % %\path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); % % %\path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); % % % % %\path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); % %\path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); % % %\draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; % % %\path(minus1)--++(45:0.4) coordinate (minus1NE); %\path(minus1)--++(-45:0.4) coordinate (minus1SE); % % %\path(minus4)--++(135:0.4) coordinate (minus1NW); %\path(minus4)--++(-135:0.4) coordinate (minus1SW); % %\path(minus2)--++(135:0.4) coordinate (start); % % %\draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW) % (minus1NW)--(start); %\draw[densely dotted, thick](minus1NW)--++(180:0.5); %\draw[densely dotted, thick](minus1SW)--++(180:0.5); % % % % % % % % % % %\path(plus4)--++(45:0.4) coordinate (minus1NE); %\path(plus4)--++(-45:0.4) coordinate (minus1SE); % % %\path(plus1)--++(135:0.4) coordinate (minus1NW); %\path(plus1)--++(-135:0.4) coordinate (minus1SW); % %\path(plus2)--++(135:0.4) coordinate (start); % % %\draw[rounded corners, thick] (start)--(minus1NE) %(minus1SE)--(minus1SW)--(minus1NW)--(start); % %\draw[densely dotted, thick](minus1NE)--++( 0:0.5); %\draw[densely dotted, thick](minus1SE)--++( 0:0.5); % % % %\draw[very thick,fill=magenta](0,-0.7) circle (4pt); % % %\draw[very thick, fill=darkgreen](minus1) circle (4pt); % % % %\draw[very thick, fill=orange](minus2) circle (4pt); % % %\draw[very thick, fill=lime!80!black](minus3) circle (4pt); % % %\draw[very thick, fill=gray](minus4) circle (4pt); % % %\draw[very thick, fill=cyan!80](plus1) 
circle (4pt); % % %\draw[very thick, fill=violet](plus2) circle (4pt); % % %\draw[very thick, fill=pink](plus3) circle (4pt); % %\draw[very thick, fill=brown](plus4) circle (4pt); \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); %\draw[densely dotted, thick](minus1NW)--++(180:0.5); %\draw[densely dotted, thick](minus1SW)--++(180:0.5); % \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); %\draw[densely dotted, thick](minus1NE)--++( 0:0.5); %\draw[densely dotted, thick](minus1SE)--++( 0:0.5); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very 
thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i\j in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (a\i); \path (origin2)--++(135:0.5*\i) coordinate (b\j); %\draw[cyan](a\i)--++(135:14); %\draw[cyan](b\j)--++(45:14); %\path (origin2)--++(45:0.5*\i)--++(135:14) coordinate (x\i); %\path (origin2)--++(135:0.5*\i)--++(45:14) coordinate (y\j); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[ thick ] (corner4) --++( 0:1.41*0.5) coordinate (RIGHT1) (corner3) --++( 0:1.41*0.5) coordinate (RIGHT2) ; \draw[thick] (corner2)--(corner3) ; \draw[thick,densely dotted] (corner1) --++(180:0.5); \draw[thick,densely dotted] (corner2) --++(180:0.5); \draw[thick,densely dotted] (RIGHT1) --++( 0:0.5); \draw[thick,densely dotted] (RIGHT2) --++( 0:0.5); \clip(corner1)--(corner2)--++(90:0.3)--++(0:7.5)-- (RIGHT2) -- (RIGHT1) --++(90:-0.3)--++(180:6.5) --(corner1); \path[name path=pathd1] (d1)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd1 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:0.1) node { $\wedge$ }; \path[name path=pathd3] (d3)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd3 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd5] (d5)--++(90:7); \path[name path=top] 
(corner2)--(corner3); \path [name intersections={of = pathd5 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:0.1) node { $\vee$ }; \path[name path=pathd7] (d7)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd7 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd9] (d9)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd9 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.1) node { $\vee$ }; \path[name path=pathc1] (c1)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc1 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.1) node { $\vee$ }; \path[name path=pathc3] (c3)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc3 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.1) node { $\vee$ }; \path[name path=pathc5] (c5)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc5 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathc7] (c7)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc7 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:0.1) node { $\wedge$ }; \path(A)--++(45:0.5)--++(-45:0.5)--++(90:0.1) coordinate (TOPGUY) node { $\vee$ }; \path[name path=pathd1] (d1)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd1 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd3] (d3)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd3 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd5] (d5)--++(-90:7); \path[name path=bottom] 
(corner1)--(corner4); \path [name intersections={of = pathd5 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd7] (d7)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd7 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd9] (d9)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd9 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathc1] (c1)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc1 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc3] (c3)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc3 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc5] (c5)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc5 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc7] (c7)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc7 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc9] (c9)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc9 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(45:0.5)--++(-45:0.5)--++(90:0.1) coordinate (BOTTOMGUY) node { $ \vee$ }; \path (BOTTOMGUY)--++(-90:0.1) coordinate (BOTTOMGUY); \path (TOPGUY)--++(-90:0.1) coordinate (TOPGUY); \draw[thick] (TOPGUY)--(BOTTOMGUY); \clip(corner1)--(corner2)--(corner3)--(corner4)--(corner1); \foreach \i in {1,3,5,7,9,11} { \draw[thick](c\i)--++(90:7); 
\draw[thick](c\i)--++(-90:7); \draw[thick](d\i)--++(90:7); \draw[thick](d\i)--++(-90:7); } \end{scope} % %\begin{scope} %\clip(0,0) --++(45:4)--++(135:1) %--++(135:1)--++(-135:1)--++(-135:1)--++(135:2)--++(-135:1)--++(135:1) %--++(-135:1)--++(-45:5); %%--++(135:5)--++(-135:4)--++(-45:5); % %\path (0,0) coordinate (origin2); % % % \foreach \i\j in {0,1,2,3,4,5,6,7,8,9,10,11,12} %{ %\path (origin2)--++(45:0.5*\i) coordinate (a\i); %\path (origin2)--++(135:0.5*\i) coordinate (b\j); %%\draw[cyan](a\i)--++(135:14); %%\draw[cyan](b\j)--++(45:14); % % %%\path (origin2)--++(45:0.5*\i)--++(135:14) coordinate (x\i); %%\path (origin2)--++(135:0.5*\i)--++(45:14) coordinate (y\j); % % % } % % \fill[white] %(0,0) --++(45:4)--++(135:5)--++(-135:4)--++(-45:5); %%--++(135:1)--++(-135:2) %% --++(135:3)--++(135:1) %% --++(-135:1)--++(-135:1) %% --++(-45:5); % %\draw[very thick,magenta](c1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,gray](cC1) --++(135:1) coordinate (cC1); % %\draw[very thick,cyan](c3) --++(135:1) coordinate (cC1); %\draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); % % % \draw[very thick,violet](c5) --++(135:1) coordinate (cC1); % \draw[very thick,cyan](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); % % % \draw[very thick,pink](c7) --++(135:1) coordinate (cC1); % \draw[very thick,violet](cC1) --++(135:1) coordinate (cC1); % \draw[very 
thick,cyan](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); % % % %%\draw[very thick,darkgreen](d1) --++(45:1) coordinate (x1); %\draw[very thick,magenta](d1) --++(45:1) coordinate (x1); %\draw[very thick,cyan](x1) --++(45:1) coordinate (x1); % \draw[very thick,violet](x1) --++(45:1) coordinate (x1); % \draw[very thick,pink](x1) --++(45:1) coordinate (x1); % % %\draw[very thick,darkgreen](d3) --++(45:1) coordinate (x1); %\draw[very thick,magenta](x1) --++(45:1) coordinate (x1); %\draw[very thick,cyan](x1) --++(45:1) coordinate (x1); % \draw[very thick,violet](x1) --++(45:1) coordinate (x1); % \draw[very thick,pink](x1) --++(45:1) coordinate (x1); % % %\draw[very thick,orange](d5) --++(45:1) coordinate (x1); %\draw[very thick,darkgreen](x1) --++(45:1) coordinate (x1); %\draw[very thick,magenta](x1) --++(45:1) coordinate (x1); %\draw[very thick,cyan](x1) --++(45:1) coordinate (x1); % \draw[very thick,violet](x1) --++(45:1) coordinate (x1); % \draw[very thick,pink](x1) --++(45:1) coordinate (x1); % % %\draw[very thick,lime!80!black](d7) --++(45:1) coordinate (x1); %\draw[very thick,orange](x1) --++(45:1) coordinate (x1); % % %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %% % %\draw[very thick,gray](d9) --++(45:1) coordinate (x1); % % % % \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} %{ %\path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); %\fill[cyan!5,opacity=0.01,rounded corners] (c0)--++(135:5)--++(45:4) % --++(-45:5)--++(-135:4)--++(135:1); % } % % % % % % %\end{scope} \begin{scope} 
\clip(0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2); \fill[white] (0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%1row \draw(a1) ++(135:0.5) ++(-135:0.5) coordinate(next1); \draw[very thick,magenta](a1) to [out=90,in=90] (next1); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%2row \draw(a3) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,cyan](a3) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,magenta](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick,darkgreen](upnext2) to [out=90,in=90] (upnext3); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%3rdrow \draw(a5) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,violet](a5) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,cyan](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick,magenta](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, darkgreen](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \draw[very thick,orange](upnext4) to [out=90,in=90] (upnext5); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%4rdrow \draw(a7) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,pink](a7) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,violet](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[ very thick , cyan](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, magenta](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \draw[very 
thick,darkgreen](upnext4) to [out=90,in=90] (upnext5); \draw(upnext5) ++(135:0.5) ++(-135:0.5) coordinate(upnext6); \draw[very thick, orange](upnext5) to [out=-90,in=-90] (upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,lime](upnext6) to [out=90,in=90] (upnext7); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%4rdrow \draw(a9) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \path(a9) to [out=90,in=90] (upnext1); \draw(a9) ++(135:1.5) ++(-135:0.5) coordinate(upnext1X); \draw(upnext1) ++(135:1.5) ++(-135:0.5) coordinate(upnext2X); \draw[very thick,violet](upnext1X) to [out=-90,in=-90] (upnext2X); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,pink](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick, violet](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, cyan](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \path(upnext4) to [out=90,in=90] (upnext5); \draw(upnext5) ++(135:0.5) ++(-135:0.5) coordinate(upnext6); \draw[very thick, darkgreen ](upnext5) to [out=-90,in=-90] (upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,orange ](upnext6) to [out=90,in=90] (upnext7); \draw(upnext7) ++(135:0.5) ++(-135:0.5) coordinate(upnext8); \draw[very thick,lime](upnext7) to [out=-90,in=-90] (upnext8); \draw(upnext8) ++(135:0.5) ++(-135:0.5) coordinate(upnext9); \draw[very thick, gray ](upnext8) to [out=90,in=90] (upnext9); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%5rdrow \path(a4) --++(135:3.5) coordinate(upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,orange ](upnext6) to[out=-90,in=-90] (upnext7); \draw(upnext7) ++(135:0.5) ++(-135:0.5) coordinate(upnext8); %\path(upnext7) to [out=90,in=90] (upnext8); \draw[very thick,lime ] (upnext7) to [out=90,in=90] (upnext8); \draw(upnext8) 
++(135:0.5) ++(-135:0.5) coordinate(upnext9); \draw[very thick, gray ](upnext8) to[out=-90,in=-90] (upnext9); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%5rdrow \path(a4) --++(135:4.5) coordinate(upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,lime ](upnext6) to[out=-90,in=-90] (upnext7); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \end{scope} \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[thick] (origin2)--(corner1) --++( 0:1.41) (corner4)--(origin2)--++( 0:1.41); \draw[thick] (corner2)--(corner3) ; \draw[thick,densely dotted] (corner1) --++(180:0.5); \draw[thick,densely dotted] (corner2) --++(180:0.5); \draw[thick,densely dotted] (corner3) --++( 0:0.5); \draw[thick,densely dotted] (corner4) --++( 0:0.5); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \end{tikzpicture} \qquad \quad \begin{tikzpicture} [scale=0.7] % %\path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); % % %\path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); % % % %\path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); % % % %\path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); % % % % % %\path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); % % %\path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); % % % % %\path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); % %\path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); % % %\draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely 
dotted](minus4)--++(180:0.5) ; % % %\path(minus1)--++(45:0.4) coordinate (minus1NE); %\path(minus1)--++(-45:0.4) coordinate (minus1SE); % % %\path(minus4)--++(135:0.4) coordinate (minus1NW); %\path(minus4)--++(-135:0.4) coordinate (minus1SW); % %\path(minus2)--++(135:0.4) coordinate (start); % % %\draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW) % (minus1NW)--(start); %\draw[densely dotted, thick](minus1NW)--++(180:0.5); %\draw[densely dotted, thick](minus1SW)--++(180:0.5); % % % % % % % % % % %\path(plus4)--++(45:0.4) coordinate (minus1NE); %\path(plus4)--++(-45:0.4) coordinate (minus1SE); % % %\path(plus1)--++(135:0.4) coordinate (minus1NW); %\path(plus1)--++(-135:0.4) coordinate (minus1SW); % %\path(plus2)--++(135:0.4) coordinate (start); % % %\draw[rounded corners, thick] (start)--(minus1NE) %(minus1SE)--(minus1SW)--(minus1NW)--(start); % %\draw[densely dotted, thick](minus1NE)--++( 0:0.5); %\draw[densely dotted, thick](minus1SE)--++( 0:0.5); % % % %\draw[very thick,fill=magenta](0,-0.7) circle (4pt); % % %\draw[very thick, fill=darkgreen](minus1) circle (4pt); % % % %\draw[very thick, fill=orange](minus2) circle (4pt); % % %\draw[very thick, fill=lime!80!black](minus3) circle (4pt); % % %\draw[very thick, fill=gray](minus4) circle (4pt); % % %\draw[very thick, fill=cyan!80](plus1) circle (4pt); % % %\draw[very thick, fill=violet](plus2) circle (4pt); % % %\draw[very thick, fill=pink](plus3) circle (4pt); % %\draw[very thick, fill=brown](plus4) circle (4pt); % \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate 
(plus4); \draw[very thick](plus4)--(minus4); %\draw[very thick,densely dotted](plus4)--++(0:0.5) ; %\draw[very thick,densely dotted](minus4)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus4)--++(135:0.4) coordinate (minus1NW); \path(minus4)--++(-135:0.4) coordinate (minus1SW); \path(minus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)--(minus1SE)--(minus1SW)-- (minus1NW)--(start); %\draw[densely dotted, thick](minus1NW)--++(180:0.5); %\draw[densely dotted, thick](minus1SW)--++(180:0.5); % \path(plus4)--++(45:0.4) coordinate (minus1NE); \path(plus4)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(plus2)--++(135:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); %\draw[densely dotted, thick](minus1NE)--++( 0:0.5); %\draw[densely dotted, thick](minus1SE)--++( 0:0.5); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=lime!80!black](minus3) circle (4pt); \draw[very thick, fill=gray](minus4) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle (4pt); \draw[very thick, fill=pink](plus3) circle (4pt); \draw[very thick, fill=brown](plus4) circle (4pt); \path (0,0) coordinate (origin2); \begin{scope} \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] 
(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[ thick ] (corner4) --++( 0:1.41*0.5) coordinate (RIGHT1) (corner3) --++( 0:1.41*0.5) coordinate (RIGHT2) ; \draw[thick] (corner2)--(corner3) ; \draw[thick,densely dotted] (corner1) --++(180:0.5); \draw[thick,densely dotted] (corner2) --++(180:0.5); \draw[thick,densely dotted] (RIGHT1) --++( 0:0.5); \draw[thick,densely dotted] (RIGHT2) --++( 0:0.5); \clip(corner1)--(corner2)--++(90:0.3)--++(0:7.5)-- (RIGHT2) -- (RIGHT1) --++(90:-0.3)--++(180:6.5) --(corner1); \path[name path=pathd1] (d1)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd1 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:0.1) node { $\wedge$ }; \path[name path=pathd3] (d3)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd3 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd5] (d5)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd5 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:0.1) node { $\vee$ }; \path[name path=pathd7] (d7)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd7 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:0.1) node { $\vee$ }; \path[name path=pathd9] (d9)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathd9 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:0.1) node { $\wedge$ }; \path[name path=pathc1] (c1)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc1 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathc3] (c3)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc3 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(-90:-0.1) node { 
$\vee$ }; \path[name path=pathc5] (c5)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc5 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathc7] (c7)--++(90:7); \path[name path=top] (corner2)--(corner3); \path [name intersections={of = pathc7 and top}]; \coordinate (A) at (intersection-1); \path(A)--++(90:0.1) node { $\vee$ }; \path(A)--++(45:0.5)--++(-45:0.5)--++(90:0.1) coordinate (TOPGUY) node { $\vee$ }; \path[name path=pathd1] (d1)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd1 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd3] (d3)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd3 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd5] (d5)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd5 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd7] (d7)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd7 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathd9] (d9)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathd9 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:-0.1) node { $\wedge$ }; \path[name path=pathc1] (c1)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc1 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc3] (c3)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc3 and bottom}]; \coordinate (A) at (intersection-1); 
\path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc5] (c5)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc5 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc7] (c7)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc7 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(90:0.1) node { $\vee$ }; \path[name path=pathc9] (c9)--++(-90:7); \path[name path=bottom] (corner1)--(corner4); \path [name intersections={of = pathc9 and bottom}]; \coordinate (A) at (intersection-1); \path (A)--++(45:0.5)--++(-45:0.5)--++(90:0.1) coordinate (BOTTOMGUY) node { $ \vee$ }; \path (BOTTOMGUY)--++(-90:0.1) coordinate (BOTTOMGUY); \path (TOPGUY)--++(-90:0.1) coordinate (TOPGUY); \draw[thick] (TOPGUY)--(BOTTOMGUY); \clip(corner1)--(corner2)--(corner3)--(corner4)--(corner1); \foreach \i in {1,3,5,7,9,11} { \draw[thick](c\i)--++(90:7); \draw[thick](c\i)--++(-90:7); \draw[thick](d\i)--++(90:7); \draw[thick](d\i)--++(-90:7); } \end{scope} % %\begin{scope} %\clip(0,0) --++(45:4)--++(135:1) %--++(135:1)--++(-135:1)--++(-135:1)--++(135:2)--++(-135:1)--++(135:1) %--++(-135:1)--++(-45:5); %%--++(135:5)--++(-135:4)--++(-45:5); % %\path (0,0) coordinate (origin2); % % % \foreach \i\j in {0,1,2,3,4,5,6,7,8,9,10,11,12} %{ %\path (origin2)--++(45:0.5*\i) coordinate (a\i); %\path (origin2)--++(135:0.5*\i) coordinate (b\j); %%\draw[cyan](a\i)--++(135:14); %%\draw[cyan](b\j)--++(45:14); % % %%\path (origin2)--++(45:0.5*\i)--++(135:14) coordinate (x\i); %%\path (origin2)--++(135:0.5*\i)--++(45:14) coordinate (y\j); % % % } % % \fill[white] %(0,0) --++(45:4)--++(135:5)--++(-135:4)--++(-45:5); %%--++(135:1)--++(-135:2) %% --++(135:3)--++(135:1) %% --++(-135:1)--++(-135:1) %% --++(-45:5); % %\draw[very thick,magenta](c1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very 
thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,gray](cC1) --++(135:1) coordinate (cC1); % %\draw[very thick,cyan](c3) --++(135:1) coordinate (cC1); %\draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); % % % \draw[very thick,violet](c5) --++(135:1) coordinate (cC1); % \draw[very thick,cyan](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); % % % \draw[very thick,pink](c7) --++(135:1) coordinate (cC1); % \draw[very thick,violet](cC1) --++(135:1) coordinate (cC1); % \draw[very thick,cyan](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,magenta](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,darkgreen](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,orange](cC1) --++(135:1) coordinate (cC1); %\draw[very thick,lime!80!black](cC1) --++(135:1) coordinate (cC1); % % % %%\draw[very thick,darkgreen](d1) --++(45:1) coordinate (x1); %\draw[very thick,magenta](d1) --++(45:1) coordinate (x1); %\draw[very thick,cyan](x1) --++(45:1) coordinate (x1); % \draw[very thick,violet](x1) --++(45:1) coordinate (x1); % \draw[very thick,pink](x1) --++(45:1) coordinate (x1); % % %\draw[very thick,darkgreen](d3) --++(45:1) coordinate (x1); %\draw[very thick,magenta](x1) --++(45:1) coordinate (x1); %\draw[very thick,cyan](x1) --++(45:1) coordinate (x1); % \draw[very thick,violet](x1) --++(45:1) coordinate (x1); % \draw[very thick,pink](x1) --++(45:1) coordinate (x1); % % %\draw[very thick,orange](d5) --++(45:1) coordinate (x1); %\draw[very 
thick,darkgreen](x1) --++(45:1) coordinate (x1); %\draw[very thick,magenta](x1) --++(45:1) coordinate (x1); %\draw[very thick,cyan](x1) --++(45:1) coordinate (x1); % \draw[very thick,violet](x1) --++(45:1) coordinate (x1); % \draw[very thick,pink](x1) --++(45:1) coordinate (x1); % % %\draw[very thick,lime!80!black](d7) --++(45:1) coordinate (x1); %\draw[very thick,orange](x1) --++(45:1) coordinate (x1); % % %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %\path(x1) --++(45:1) coordinate (x1); %% % %\draw[very thick,gray](d9) --++(45:1) coordinate (x1); % % % % \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} %{ %\path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); %\fill[cyan!5,opacity=0.01,rounded corners] (c0)--++(135:5)--++(45:4) % --++(-45:5)--++(-135:4)--++(135:1); % } % % % % % % %\end{scope} \begin{scope} \clip(0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2); \fill[white] (0,0)--++(135:5)--++(45:1)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:2); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%1row \draw(a1) ++(135:0.5) ++(-135:0.5) coordinate(next1); \draw[very thick,magenta](a1) to [out=90,in=90] (next1); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%2row \draw(a3) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,cyan](a3) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,magenta](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick,darkgreen](upnext2) to [out=90,in=90] (upnext3); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%3rdrow \draw(a5) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,violet](a5) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,cyan](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) 
++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick,magenta](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, darkgreen](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \draw[very thick,orange](upnext4) to [out=90,in=90] (upnext5); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%4rdrow \draw(a7) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \draw[very thick,pink](a7) to [out=90,in=90] (upnext1); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,violet](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[ very thick , cyan](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, magenta](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) ++(-135:0.5) coordinate(upnext5); \draw[very thick,darkgreen](upnext4) to [out=90,in=90] (upnext5); \draw(upnext5) ++(135:0.5) ++(-135:0.5) coordinate(upnext6); \draw[very thick, orange](upnext5) to [out=-90,in=-90] (upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,lime](upnext6) to [out=90,in=90] (upnext7); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%4rdrow \draw(a9) ++(135:0.5) ++(-135:0.5) coordinate(upnext1); \path(a9) to [out=90,in=90] (upnext1); \draw(a9) ++(135:1.5) ++(-135:0.5) coordinate(upnext1X); \draw(upnext1) ++(135:1.5) ++(-135:0.5) coordinate(upnext2X); \draw[very thick,violet](upnext1X) to [out=-90,in=-90] (upnext2X); \draw(upnext1) ++(135:0.5) ++(-135:0.5) coordinate(upnext2); \draw[very thick,pink](upnext1) to [out=-90,in=-90] (upnext2); \draw(upnext2) ++(135:0.5) ++(-135:0.5) coordinate(upnext3); \draw[very thick, violet](upnext2) to [out=90,in=90] (upnext3); \draw(upnext3) ++(135:0.5) ++(-135:0.5) coordinate(upnext4); \draw[very thick, cyan](upnext3) to [out=-90,in=-90] (upnext4); \draw(upnext4) ++(135:0.5) 
++(-135:0.5) coordinate(upnext5); \path(upnext4) to [out=90,in=90] (upnext5); \draw(upnext5) ++(135:0.5) ++(-135:0.5) coordinate(upnext6); \draw[very thick, darkgreen ](upnext5) to [out=-90,in=-90] (upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,orange ](upnext6) to [out=90,in=90] (upnext7); \draw(upnext7) ++(135:0.5) ++(-135:0.5) coordinate(upnext8); \draw[very thick,lime](upnext7) to [out=-90,in=-90] (upnext8); \draw(upnext8) ++(135:0.5) ++(-135:0.5) coordinate(upnext9); \draw[very thick, gray ](upnext8) to [out=90,in=90] (upnext9); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%5rdrow \path(a4) --++(135:3.5) coordinate(upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,orange ](upnext6) to[out=-90,in=-90] (upnext7); \draw(upnext7) ++(135:0.5) ++(-135:0.5) coordinate(upnext8); %\path(upnext7) to [out=90,in=90] (upnext8); \draw[very thick,lime ] (upnext7) to [out=90,in=90] (upnext8); \draw(upnext8) ++(135:0.5) ++(-135:0.5) coordinate(upnext9); \draw[very thick, gray ](upnext8) to[out=-90,in=-90] (upnext9); %%%%%%%%%%%%%% %%%%%%%%%%%%%%%%5rdrow \path(a4) --++(135:4.5) coordinate(upnext6); \draw(upnext6) ++(135:0.5) ++(-135:0.5) coordinate(upnext7); \draw[very thick,lime ](upnext6) to[out=-90,in=-90] (upnext7); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \end{scope} \path(origin2) ++(135:2.5) ++(-135:2.5) coordinate(corner1); \path(origin2) ++(45:2) ++(135:7) coordinate(corner2); \path(origin2) ++(45:2) ++(-45:2) coordinate(corner4); \path(origin2) ++(135:2.5) ++(45:6.5) coordinate(corner3); %\draw[thick] (origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \draw[thick] (origin2)--(corner1) --++( 0:1.41) (corner4)--(origin2)--++( 0:1.41); \draw[thick] (corner2)--(corner3) ; \draw[thick,densely dotted] 
(corner1) --++(180:0.5); \draw[thick,densely dotted] (corner2) --++(180:0.5); \draw[thick,densely dotted] (corner3) --++( 0:0.5); \draw[thick,densely dotted] (corner4) --++( 0:0.5); \clip(origin2)--(corner1)--(corner2)--(corner3)--(corner4)--(origin2); \end{tikzpicture}$$

# The diagrammatic algebras

We now recall the construction of the two protagonists of this paper: the Hecke categories associated to $(S_{m+n}, S_m\times S_n)$ and the (extended) Khovanov arc algebras.

## The Hecke categories {#coulda}

We denote by $S = \{s_i \, : \, -m+1 \leqslant i \leqslant n-1\}$ the set of simple reflections. To simplify notation, for ${{{\color{magenta}\bm\sigma}}}= s_i, {{{\color{cyan}\bm\tau}}}=s_j \in S$ we write $|{{{\color{magenta}\bm\sigma}}}- {{{\color{cyan}\bm\tau}}}| :=|i-j|$. So we have $|{{{\color{magenta}\bm\sigma}}}- {{{\color{cyan}\bm\tau}}}|>1$ precisely when ${{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}= {{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}$ and $|{{{\color{magenta}\bm\sigma}}}- {{{\color{cyan}\bm\tau}}}| = 1$ precisely when ${{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}= {{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}$.
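For instance, if $m,n\geqslant 2$ then $s_{-1},s_{0},s_{1}\in S$ and $$|s_{1}-s_{-1}|=2>1 \ \text{ so } \ s_{1}s_{-1}=s_{-1}s_{1}, \qquad\qquad |s_{1}-s_{0}|=1 \ \text{ so } \ s_{1}s_{0}s_{1}=s_{0}s_{1}s_{0}.$$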
We define the Soergel generators to be the framed graphs $${\sf 1}_{\emptyset } = \begin{minipage}{1.5cm} \begin{tikzpicture}[scale=1] \draw[densely dotted,rounded corners](-0.5cm,-0.6cm) rectangle (0.5cm,0.6cm); \clip(0,0) circle (0.6cm); %\draw[line width=0.06cm, magenta](0,-1)--(0,+1); \end{tikzpicture} \end{minipage} \quad {\sf 1}_{{{{\color{magenta}\bm\sigma}}}} = \begin{minipage}{1.5cm} \begin{tikzpicture}[scale=1] \draw[densely dotted,rounded corners](-0.5cm,-0.6cm) rectangle (0.5cm,0.6cm); \clip(0,0) circle (0.6cm); \draw[line width=0.06cm, magenta](0,-1)--(0,+1); \end{tikzpicture} \end{minipage} \quad {\sf spot}_{{{\color{magenta}\bm\sigma}}}^\emptyset = \begin{minipage}{1.5cm} \begin{tikzpicture}[scale=1] \draw[densely dotted,rounded corners](-0.5cm,-0.6cm) rectangle (0.5cm,0.6cm); \clip(0,0) circle (0.6cm); \draw[line width=0.06cm, magenta](0,-1)--(0,+0); \fill[magenta] (0,0) circle (5pt); \end{tikzpicture}\end{minipage} \quad {\sf fork}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}^{{{{\color{magenta}\bm\sigma}}}}= \begin{minipage}{2cm} \begin{tikzpicture}[scale=1] \draw[densely dotted,rounded corners](-0.75cm,-0.5cm) rectangle (0.75cm,0.5cm); \clip (-0.75cm,-0.5cm) rectangle (0.75cm,0.5cm); \draw[line width=0.08cm, magenta](0,0)to [out=-30, in =90] (10pt,-15pt); \draw[line width=0.08cm, magenta](0,0)to [out=-150, in =90] (-10pt,-15pt); \draw[line width=0.08cm, magenta](0,0)--++(90:1); \end{tikzpicture} \end{minipage} \quad {\sf braid}_{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}}^{{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}}= \begin{minipage}{1cm} \begin{tikzpicture}[scale=1.1] \draw[densely dotted,rounded corners](-0.5cm,-0.5cm) rectangle (0.5cm,0.5cm); \clip(-0.5cm,-0.5cm) rectangle (0.5cm,0.5cm); %\clip(0,0) circle (15pt); \draw[line width=0.08cm, magenta] (0,0) to [out=45, in =-90] (10pt,0.5cm); \draw[line width=0.08cm, magenta] (0,0) to [out=-135, in =90] (-10pt,-0.5cm); \draw[line width=0.08cm, cyan] (0,0) 
to [out=-45, in =90] (10pt,-0.5cm); \draw[line width=0.08cm, cyan] (0,0) to [out=135, in =-90] (-10pt,0.5cm); \end{tikzpicture}\end{minipage}$$ associated to any pair ${{{\color{magenta}\bm\sigma}}}, {{{\color{cyan}\bm\tau}}}\in S$ such that $|{{{\color{magenta}\bm\sigma}}}-{{{\color{cyan}\bm\tau}}}| >1$. Pictorially, we define the duals of these generators to be the graphs obtained by reflection through their horizontal axes. Non-pictorially, we simply swap the sub- and superscripts. We sometimes denote duality by $\ast$. For example, the dual of the fork generator is pictured as follows: $${\sf fork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}}= \begin{minipage}{1.8cm} \begin{tikzpicture}[scale=-1] \draw[densely dotted,rounded corners](-0.75cm,-0.5cm) rectangle (0.75cm,0.5cm); \clip (-0.75cm,-0.5cm) rectangle (0.75cm,0.5cm); % \clip (-0.5cm,-0.5cm) % rectangle (0.5cm,0.5cm); \draw[line width=0.08cm, magenta](0,0)to [out=-30, in =90] (10pt,-15pt); \draw[line width=0.08cm, magenta](0,0)to [out=-150, in =90] (-10pt,-15pt); \draw[line width=0.08cm, magenta](0,0)--++(90:1); \end{tikzpicture} \end{minipage}.$$ We define the northern/southern reading word of a Soergel generator (or its dual) to be the word in the alphabet $S$ obtained by reading the colours of the northern/southern edge of the frame, respectively. Given two (dual) Soergel generators $D$ and $D'$, we define $D\otimes D'$ to be the diagram obtained by horizontal concatenation (and we extend this linearly). The northern/southern reading word of $D\otimes D'$ is the concatenation of those of $D$ and $D'$, ordered from left to right. Given any two (dual) Soergel generators, we define their product $D\circ D'$ (or simply $DD'$) to be the vertical concatenation of $D$ on top of $D'$ if the southern reading word of $D$ is equal to the northern reading word of $D'$, and zero otherwise.
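The $\otimes$- and $\circ$-products just described are governed entirely by reading words, which makes them easy to sanity-check mechanically. The following is a minimal Python sketch under a toy encoding in which a diagram is remembered only through its northern and southern reading words; the class `Diagram`, the colour label `"s"`, and the variable names are illustrative assumptions, not notation from the text.

```python
# Minimal sketch (illustrative encoding): a diagram is remembered only
# through its northern and southern reading words, as tuples over S.

class Diagram:
    def __init__(self, north, south):
        self.north = tuple(north)  # northern reading word
        self.south = tuple(south)  # southern reading word

    def tensor(self, other):
        # horizontal concatenation: reading words concatenate left to right
        return Diagram(self.north + other.north, self.south + other.south)

    def compose(self, other):
        # vertical concatenation, self on top of other: defined only when
        # the southern reading word of self equals the northern reading
        # word of other; None encodes the zero product.
        if self.south != other.north:
            return None
        return Diagram(self.north, other.south)

# single-colour generators, writing "s" for a simple reflection in S
one_s = Diagram(["s"], ["s"])           # 1_sigma
spot = Diagram([], ["s"])               # spot_sigma^emptyset
fork = Diagram(["s"], ["s", "s"])       # fork_{sigma sigma}^{sigma}
dual_fork = Diagram(["s", "s"], ["s"])  # its dual

# (spot (x) 1) o dual_fork has the reading words of the identity strand
contraction = spot.tensor(one_s).compose(dual_fork)
assert contraction.north == ("s",) and contraction.south == ("s",)

# mismatched reading words give the zero product
assert fork.compose(spot) is None
```

Note that the composite $({\sf spot}\otimes {\sf 1})\circ{\sf fork}$ above carries the reading words of a single identity strand, consistent with the fork-spot contraction relation recorded below; the sketch of course tracks only reading words, not the diagrams themselves.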
We define a Soergel graph to be any diagram obtained by vertical and horizontal concatenation of the generators and their duals. Given ${{{\color{magenta}\bm\sigma}}}$, we define the corresponding "barbell", "dork", and "gap" diagrams to be the elements $${\sf bar}({{{\color{magenta}\bm\sigma}}})= {\sf spot}_ {{{\color{magenta}\bm\sigma}}}^\emptyset {\sf spot}^ {{{\color{magenta}\bm\sigma}}}_\emptyset \qquad {\sf dork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}= {\sf fork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{ {{{\color{magenta}\bm\sigma}}}} {\sf fork}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}^{ {{{\color{magenta}\bm\sigma}}}} \qquad {\sf gap}({{{\color{magenta}\bm\sigma}}}) = {\sf spot}^{{{\color{magenta}\bm\sigma}}}_\emptyset {\sf spot}^\emptyset_{{{\color{magenta}\bm\sigma}}}.$$ Given $\mathsf{s},\mathsf{t}\in{\rm Std}(\lambda)$ for $\lambda\in {\mathscr{P}_{m,n}}$, we write ${\sf braid}^{\mathsf{s}}_{\mathsf{t}}$ for the diagram of the permutation mapping one tableau to the other, with strands coloured according to the contents.
For example $${\sf braid}^{\mathsf{s}}_{\mathsf{t}}=\begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,1); \draw[magenta,line width=0.08cm](0,0)--++(90:1); \draw[cyan,line width=0.08cm](1,0) to (1.5,1); \draw[darkgreen,line width=0.08cm](0.5,0)--(0.5,1); %\draw[orange,line width=0.08cm](1.5-0.5,0)--++(90:1); \draw[orange,line width=0.08cm](1.5,0)-- (1,1); %\draw[magenta,line width=0.08cm](2.5-0.5,0)--++(90:1); %\draw[darkgreen,line width=0.08cm](3-0.5,0)--++(90:1); \end{tikzpicture}\end{minipage} \qquad {\sf braid}^{\mathsf{t}}_{\mathsf{u}}=\begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,1); \draw[magenta,line width=0.08cm](0,0)--++(90:1); \draw[cyan,line width=0.08cm](0.5,0) to (1,1); \draw[darkgreen,line width=0.08cm](1,0)--(0.5,1); %\draw[orange,line width=0.08cm](1.5-0.5,0)--++(90:1); \draw[orange,line width=0.08cm](1.5,0)-- (1.5,1); %\draw[magenta,line width=0.08cm](2.5-0.5,0)--++(90:1); %\draw[darkgreen,line width=0.08cm](3-0.5,0)--++(90:1); \end{tikzpicture}\end{minipage} \qquad {\sf braid}^{\mathsf{s}}_{\mathsf{u}}=\begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,1); \draw[magenta,line width=0.08cm](0,0)--++(90:1); \draw[cyan,line width=0.08cm](0.5,0) to [out=90,in=-90] (1.5,1); \draw[darkgreen,line width=0.08cm](1,0)--(0.5,1); %\draw[orange,line width=0.08cm](1.5-0.5,0)--++(90:1); \draw[orange,line width=0.08cm](1.5,0)-- (1,1); %\draw[magenta,line width=0.08cm](2.5-0.5,0)--++(90:1); %\draw[darkgreen,line width=0.08cm](3-0.5,0)--++(90:1); \end{tikzpicture}\end{minipage}$$ for the three $\mathsf{s},\mathsf{t},\mathsf{u}\in {\rm Std}(3,1)$. We sometimes drop the explicit tableaux and simply record the underlying content sequence. **Remark 3**. 
*The cyclotomic quotients of (anti-spherical) Hecke categories are small categories with finite-dimensional morphism spaces given by the light leaves basis of [@MR3555156; @antiLW]. Working with such a category is equivalent to working with a locally unital algebra, as defined in [@BSBS Section 2.2]; see [@BSBS Remark 2.3]. Throughout this paper we will work in the latter setting. The reader who prefers to think of categories can equivalently phrase everything in this paper in terms of categories and representations of categories.* We are ready to define the first diagrammatic algebra of interest in this paper. The following simplification of the presentation of the Hecke category is made using [@compan Theorem 2.1]. **Definition 4**. *We define $\mathscr{H}_{m,n}$ to be the locally unital associative $\Bbbk$-algebra spanned by all Soergel graphs, with multiplication given by $\circ$-concatenation, modulo the following local relations and their vertical and horizontal flips. Firstly, for any ${{{\color{magenta}\bm\sigma}}}, {{{\color{cyan}\bm\tau}}}\in S$ we have the idempotent relations $$\begin{aligned} {\sf 1}_{{{{\color{magenta}\bm\sigma}}}} {\sf 1}_{{{{\color{cyan}\bm\tau}}}}& =\delta_{{{{\color{magenta}\bm\sigma}}},{{{\color{cyan}\bm\tau}}}}{\sf 1}_{{{{\color{magenta}\bm\sigma}}}} & {\sf 1}_{\emptyset} {\sf 1}_{{{{\color{magenta}\bm\sigma}}}} & =0 & {\sf 1}_{\emptyset}^2& ={\sf 1}_{\emptyset}\\ {\sf 1}_{\emptyset} {\sf spot}_{{{{\color{magenta}\bm\sigma}}}}^\emptyset {\sf 1}_{{{{\color{magenta}\bm\sigma}}}}& ={\sf spot}_{{{{\color{magenta}\bm\sigma}}}}^{\emptyset} & {\sf 1}_{{{{\color{magenta}\bm\sigma}}}} {\sf fork}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}^{{{{\color{magenta}\bm\sigma}}}} {\sf 1}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}& ={\sf fork}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}^{{{{\color{magenta}\bm\sigma}}}} & {\sf 1}_{{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}} {\sf
braid}_{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}}^{{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}} {\sf 1}_{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}} & ={\sf braid}_{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}}^{{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}} \end{aligned}$$ where the final relation holds for all ordered pairs $( {{{\color{magenta}\bm\sigma}}},{{{\color{cyan}\bm\tau}}})\in S^2$ with $|{{{\color{magenta}\bm\sigma}}}- {{{\color{cyan}\bm\tau}}}|>1$. For each ${{{\color{magenta}\bm\sigma}}}\in S$ we have fork-spot contraction, the double-fork, and circle-annihilation relations: $$\begin{aligned} ({\sf spot}_{{{\color{magenta}\bm\sigma}}}^\emptyset \otimes {\sf 1}_{{{\color{magenta}\bm\sigma}}}){\sf fork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}} = {\sf 1}_{{{{\color{magenta}\bm\sigma}}}} \qquad ({\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes {\sf fork}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}^{ {{{\color{magenta}\bm\sigma}}}} ) ({\sf fork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}}\otimes {\sf 1}_{{{{\color{magenta}\bm\sigma}}}}) = {\sf fork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}} {\sf fork}^{{{{\color{magenta}\bm\sigma}}}}_{ {{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}} %\end{align*} %\begin{align*} \qquad {\sf fork}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}^{{{{\color{magenta}\bm\sigma}}}} {\sf fork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}}=0 %\qquad \quad {\sf bar}(\csigma)\otimes {\sf 1}_{\csigma} %+ % {\sf 1}_{\csigma} \otimes {\sf bar}(\csigma) % = %2 {\sf gap} (\csigma)\end{aligned}$$ For $({{{\color{magenta}\bm\sigma}}}, {{{\color{cyan}\bm\tau}}},{{{ \color{green!80!black}\bm\rho}}})\in S^3$ with 
$|{{{\color{magenta}\bm\sigma}}}- {{{ \color{green!80!black}\bm\rho}}}|, | {{{ \color{green!80!black}\bm\rho}}}- {{{\color{cyan}\bm\tau}}}| , |{{{\color{magenta}\bm\sigma}}}- {{{\color{cyan}\bm\tau}}}|>1$, we have the commutation relations $$\begin{aligned} % \label{braidrelation1} {\sf spot}^{ {{{\color{magenta}\bm\sigma}}}}_{\emptyset} \otimes {\sf 1}_{{{{ \color{green!80!black}\bm\rho}}}} = {\sf braid}^{{{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}}_{{{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}} ( {\sf 1}_{{{{ \color{green!80!black}\bm\rho}}}} \otimes {\sf spot}^{ {{{\color{magenta}\bm\sigma}}}}_{\emptyset} ) \qquad ( {\sf fork}^{ {{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{ {{{\color{magenta}\bm\sigma}}}} \otimes {\sf 1}_{{{{ \color{green!80!black}\bm\rho}}}} ) {\sf braid}^{{{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}}_{{{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}} = {\sf braid}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}}_{{{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}} ( {\sf 1}_{{{{ \color{green!80!black}\bm\rho}}}} \otimes {\sf fork}^{ {{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{ {{{\color{magenta}\bm\sigma}}}} ) \end{aligned}$$ $$\begin{aligned} {\sf braid}^{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}{{{ \color{green!80!black}\bm\rho}}}}_{ {{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}{{{\color{cyan}\bm\tau}}}} {\sf braid}^ { {{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}{{{\color{cyan}\bm\tau}}}} _{ {{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}} {\sf braid}^ { {{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}} _{ {{{ 
\color{green!80!black}\bm\rho}}}{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}} = {\sf braid}^{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}{{{ \color{green!80!black}\bm\rho}}}}_{{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}} {\sf braid}^ {{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}{{{ \color{green!80!black}\bm\rho}}}} _{ {{{\color{cyan}\bm\tau}}}{{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}} {\sf braid}^ { {{{\color{cyan}\bm\tau}}}{{{ \color{green!80!black}\bm\rho}}}{{{\color{magenta}\bm\sigma}}}} _{ {{{ \color{green!80!black}\bm\rho}}}{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}} \end{aligned}$$ For ${{{\color{magenta}\bm\sigma}}},{{{\color{cyan}\bm\tau}}}\in S$ with $|{{{\color{magenta}\bm\sigma}}}- {{{\color{cyan}\bm\tau}}}|=1$ we have the one and two colour Demazure relations: $$\begin{aligned} {\sf bar}({{{\color{magenta}\bm\sigma}}})\otimes {\sf 1}_{{{\color{magenta}\bm\sigma}}} + {\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes {\sf bar}({{{\color{magenta}\bm\sigma}}}) & =2 {\sf gap}({{{\color{magenta}\bm\sigma}}}) % \qquad % \quad \\ {\sf bar}({{{\color{cyan}\bm\tau}}})\otimes {\sf 1}_{{{\color{magenta}\bm\sigma}}} - {\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes {\sf bar}({{{\color{cyan}\bm\tau}}}) & = {\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes {\sf bar}({{{\color{magenta}\bm\sigma}}}) - {\sf gap}({{{\color{magenta}\bm\sigma}}}) \end{aligned}$$ and the null braid relation $$\begin{aligned} {\sf 1}_{{{{\color{magenta}\bm\sigma}}}{{{\color{cyan}\bm\tau}}}{{{\color{magenta}\bm\sigma}}}} + ({\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes {\sf spot}^{{{\color{cyan}\bm\tau}}}_\emptyset \otimes {\sf 1}_{{{\color{magenta}\bm\sigma}}}) {\sf dork}^{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}}_{{{{\color{magenta}\bm\sigma}}}{{{\color{magenta}\bm\sigma}}}} ( {\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes {\sf 
spot}_{{{\color{cyan}\bm\tau}}}^\emptyset \otimes {\sf 1}_{{{\color{magenta}\bm\sigma}}}) =0\end{aligned}$$ Further, we require the interchange law and the monoidal unit relation $$% \big(({\sf D }_1\circ {\sf 1}_{ {\w}} )\otimes ({\sf D}_2 \circ {\sf 1}_{ {\x}}) \big) %\big(({\sf 1}_{\w} \circ {\sf D}_3) \otimes ({\sf 1}_\x \circ{\sf D }_4)\big) %= % ({\sf D}_1 \circ {\sf 1}_{{\w}} \circ {\sf D_3}) \otimes ({\sf D}_2 \circ {\sf 1}_{{\x}} \circ {\sf D}_4) \big( {\sf D }_1 \otimes {\sf D}_2 \big)\circ \big( {\sf D}_3 \otimes {\sf D }_4 \big) = ({\sf D}_1 \circ {\sf D_3}) \otimes ({\sf D}_2 \circ {\sf D}_4) %$$ %and the monoidal unit relation %$$ \qquad {\sf 1}_{\emptyset} \otimes {\sf D}_1={\sf D}_1={\sf D}_1 \otimes {\sf 1}_{\emptyset}$$ for all diagrams ${\sf D}_1,{\sf D}_2,{\sf D}_3,{\sf D}_4$. Finally, we require the non-local cyclotomic relations $$\begin{aligned} {\sf 1}_{{{\color{magenta}\bm\sigma}}}\otimes D=0 \qquad {\sf bar}({{{\color{cyan}\bm\tau}}}) \otimes D=0\end{aligned}$$ for all ${\color{magenta}s_0}\neq {{{\color{magenta}\bm\sigma}}}\in S$, ${{{\color{cyan}\bm\tau}}}\in S$ and $D$ any diagram.* *We also define the idempotent truncation $${\sf 1}_{m,n}= \sum_{\lambda\in{\mathscr{P}_{m,n}}} {\sf 1}_{\mathsf{t}_\lambda}\qquad \mathcal{H}_{m,n}= {\sf 1}_{m,n} \mathscr{H}_{m,n} {\sf 1}_{m,n}$$* **Remark 5**. *In [@compan Theorem 4.21] the algebra $\mathcal{H}_{m,n}$ is shown to be the basic algebra of the anti-spherical Hecke category for $W= {S}_{m+n}$ the finite symmetric group and $P = {S}_{m}\times S_n \leqslant W$ a maximal parabolic subgroup.* **Remark 6**. *The algebras $\mathscr{H}_{m,n}$ and $\mathcal{H}_{m,n}$ can be equipped with a $\mathbb Z$-grading which preserves the duality $\ast$. 
The degrees of the generators under this grading are defined as follows: $${\sf deg}({\sf 1}_\emptyset)=0 \quad {\sf deg}({\sf 1}_{{{\color{magenta}\bm \sigma}}})=0 \quad {\sf deg} ({\sf spot}^\emptyset_{{{\color{magenta}\bm \sigma}}})=1 \quad {\sf deg} ({\sf fork}^{{{\color{magenta}\bm \sigma}}}_{{{{\color{magenta}\bm \sigma}}}{{{\color{magenta}\bm \sigma}}}})=-1 \quad {\sf deg} ({\sf braid}^{{{{\color{magenta}\bm \sigma}}}{{\color{cyan}\boldsymbol \tau}}}_{{{\color{cyan}\boldsymbol \tau}}{{{\color{magenta}\bm \sigma}}}} )=0$$ for ${{{\color{magenta}\bm \sigma}}},{{\color{cyan}\boldsymbol \tau}}\in S$ such that $|{{{\color{magenta}\bm \sigma}}}-{{\color{cyan}\boldsymbol \tau}}|>1$.* ## Khovanov arc algebras {#Khovanov arc algebras} We now recall the definition of the extended Khovanov arc algebras studied in [@MR2600694; @MR2781018; @MR2955190; @MR2881300]. We define $\mathcal{K} _{m,n }$ to be the algebra spanned by diagrams $$\{ \underline{\lambda} \mu \overline{\nu} \mid \lambda,\mu,\nu \in {\mathscr{P}_{m,n}}\text{ such that } \mu\overline{\nu}, \underline{\lambda} \mu \text{ are oriented}\}$$ with the multiplication defined as follows. First set $$(\underline{\lambda} \mu \overline{\nu})(\underline{\alpha} \beta \overline{\gamma}) = 0 \quad \mbox{unless $\nu = \alpha$}.$$ To compute $(\underline{\lambda} \mu \overline{\nu})(\underline{\nu} \beta \overline{\gamma})$ place $(\underline{\lambda} \mu \overline{\nu})$ under $(\underline{\nu} \beta \overline{\gamma})$ then follow the 'surgery' procedure. This surgery combines two circles into one or splits one circle into two using the following rules for re-orientation (where we use the notation $1=\text{anti-clockwise circle}$, $x=\text{clockwise circle}$, $y=\text{oriented strand}$). 
We have the splitting rules $$1 \mapsto 1 \otimes x + x \otimes 1, \quad x \mapsto x \otimes x, \quad y \mapsto x \otimes y$$ and the merging rules $$\begin{aligned} 1 \otimes 1 \mapsto 1, \quad 1 \otimes x \mapsto x, \quad x \otimes 1 \mapsto x, \quad x \otimes x \mapsto 0, \quad 1 \otimes y \mapsto y,\quad x \otimes y \mapsto 0,\end{aligned}$$ $$\begin{aligned} y \otimes y \mapsto \left\{ \begin{array}{ll} y\otimes y&\text{if both strands are propagating, one is }\\ &\text{$\wedge$-oriented and the other is $\vee$-oriented;}\\ 0&\text{otherwise.} \end{array} \right.\end{aligned}$$ **Example 7**. *We have the following product of Khovanov diagrams* *$$\begin{minipage}{3.1cm} \begin{tikzpicture} [scale=0.76] \draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in=0] (1.5, -0.75) to [out=180,in= -90] (0,0) ; \draw[gray!70,<->,thick](1.5,-0.8)-- (1.5,-2.5+0.8); \draw(-0.25,-2.5)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09-2.5) node {$\scriptstyle\vee$}; \draw(2,0.09-2.5) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw (0,-2.5) to [out=-90,in=180] (0.5,-2.5-0.5) to [out=0,in=-90] (1,-2.5-0) (1,-2.5-0) to [out=90,in=180] (1.5, -2.5+0.5) to [out=0,in= 90] (2,-2.5-0) to [out=-90,in=180] (2.5,-2.5-0.5) to [out=0,in=-90] (3,-2.5-0) to [out=90,in=0] (1.5, -2.5+0.75) to [out=180,in= 90] (0,-2.5-0) ; \end{tikzpicture} \end{minipage} = \; \; \begin{minipage}{3.1cm} \begin{tikzpicture} [scale=0.76]
\draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) --++(-90:2.5) --++( 90:2.5) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) ; \draw[gray!70,<->,thick](1.5,-0.6)-- (1.5,-2.5+0.6); \draw(-0.25,-2.5)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09-2.5) node {$\scriptstyle\vee$}; \draw(2,0.09-2.5) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw (0,-2.5) to [out=-90,in=180] (0.5,-2.5-0.5) to [out=0,in=-90] (1,-2.5-0) (1,-2.5-0) to [out=90,in=180] (1.5, -2.5+0.5) to [out=0,in= 90] (2,-2.5-0) to [out=-90,in=180] (2.5,-2.5-0.5) to [out=0,in=-90] (3,-2.5-0)--++(90:2.5) ; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{3.1cm} \begin{tikzpicture} [scale=0.76] \draw(-0.5,0)--++(0:4) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw(3,0.09) node {$\scriptstyle\vee$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage} \;+\; \begin{minipage}{3.4cm} \begin{tikzpicture} [scale=0.76] \draw(-0.5,0)--++(0:4) coordinate (X); \draw[thick](1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(2,0.09) node 
{$\scriptstyle\vee$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage}$$ where we highlight with arrows the pair of arcs on which we are about to perform surgery. The first equality follows from the merging rule $1\otimes 1 \mapsto 1$, and the second equality follows from the splitting rule $1 \mapsto 1 \otimes x + x \otimes 1$.* **Example 8**. *We have the following product of Khovanov diagrams* *$$\begin{minipage}{3.1cm} \begin{tikzpicture} [scale=0.76] \draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0) to [out=-90,in=180] (2.5,-0.5) to [out=0,in=-90] (3,0) to [out=90,in=0] (1.5, 0.75) to [out=180,in= 90] (0,0) ; \draw[gray!70,<->,thick](0.5,-0.8)-- (0.5,-2.5+0.8); \draw(-0.25,-2.5)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09-2.5) node {$\scriptstyle\vee$}; \draw(2,0.09-2.5) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw (0,-2.5) to [out=90,in=180] (0.5,-2.5+0.5) to [out=0,in=90] (1,-2.5-0) (1,-2.5-0) to [out=-90,in=180] (1.5, -2.5-0.5) to [out=0,in= -90] (2,-2.5-0) to [out=90,in=180] (2.5,-2.5+0.5) to [out=0,in=90] (3,-2.5-0) to [out=-90,in=0] (1.5, -2.5-0.75) to [out=180,in= -90] (0,-2.5-0) ; \end{tikzpicture} \end{minipage} = \; \; % \begin{minipage}{3.1cm} % \begin{tikzpicture} [scale=0.76] % % % %
\draw(-0.25,0)--++(0:3.75) coordinate (X); % % % % % \draw[thick](0 , 0.09) node {$\scriptstyle\down$}; % \draw(2,0.09) node {$\scriptstyle\down$}; % \draw[thick](1 ,-0.09) node {$\scriptstyle\up$}; % \draw[thick](3 ,-0.09) node {$\scriptstyle\up$}; % % % \draw % (0,0) --++(-90:2.5) --++( 90:2.5) % to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) % (1,0) % to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) % to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) % % ; % % % % % % % %\draw[gray!70,<->,thick](1.5,-0.6)-- (1.5,-2.5+0.6); % % % % % % % % % % % \draw(-0.25,-2.5)--++(0:3.75) coordinate (X); % % % % % \draw[thick](0 , 0.09-2.5) node {$\scriptstyle\down$}; % \draw(2,0.09-2.5) node {$\scriptstyle\down$}; % % %% \draw[thick](1,-0.09) node {$\scriptstyle\up$}; %% \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1 ,-0.09-2.5) node {$\scriptstyle\up$}; % \draw[thick](3 ,-0.09-2.5) node {$\scriptstyle\up$}; % % % \draw % (0,-2.5) % to [out=-90,in=180] (0.5,-2.5-0.5) to [out=0,in=-90] (1,-2.5-0) % (1,-2.5-0) % to [out=90,in=180] (1.5, -2.5+0.5) to [out=0,in= 90] (2,-2.5-0) % to [out=-90,in=180] (2.5,-2.5-0.5) to [out=0,in=-90] (3,-2.5-0)--++(90:2.5) % ; % % \end{tikzpicture} \end{minipage} \begin{minipage}{3.1cm} \begin{tikzpicture} [scale=0.76] \draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw (1,-2)--(1,0) (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0) to [out=-90,in=180] (2.5,-0.5) to [out=0,in=-90] (3,0) to [out=90,in=0] (1.5, 0.75) to [out=180,in= 90] (0,0) ; \draw[gray!70,<->,thick](2.5,-0.8)-- (2.5,-2.5+0.8); \draw(-0.25,-2.5)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09-2.5) node {$\scriptstyle\vee$}; \draw(2,0.09-2.5) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node 
{$\scriptstyle\up$}; \draw[thick](1 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-2.5) node {$\scriptstyle\wedge$}; \draw (0,0)--(0,-2.5); \draw (1,0)-- (1,-2.5-0) to [out=-90,in=180] (1.5, -2.5-0.5) to [out=0,in= -90] (2,-2.5-0) to [out=90,in=180] (2.5,-2.5+0.5) to [out=0,in=90] (3,-2.5-0) to [out=-90,in=0] (1.5, -2.5-0.75) to [out=180,in= -90] (0,-2.5-0) ; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{3.1cm} \begin{tikzpicture} [scale=0.76] \draw(-0.5,0)--++(0:4) coordinate (X); \draw[thick](1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw(3,0.09) node {$\scriptstyle\vee$}; \draw (0+1,0) to [out=90,in=180] (0.5+1,0.5) to [out=0,in=90] (1+1,0) to [out=-90,in=0] (0.5+1,-0.5) to [out=180,in=-90] (0+1,0) ; \draw (0 ,0) to [out=90,in=180] (0.5+1,0.75) to [out=0,in=90] (3,0) to [out=-90,in=0] (0.5+1,-0.75) to [out=180,in=-90] (0,0) ; % \draw % (2+0,0) % to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) % to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) % % ; \end{tikzpicture} \end{minipage} \;+\; \begin{minipage}{3.4cm} \begin{tikzpicture} [scale=0.76] \draw(-0.5,0)--++(0:4) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw (0+1,0) to [out=90,in=180] (0.5+1,0.5) to [out=0,in=90] (1+1,0) to [out=-90,in=0] (0.5+1,-0.5) to [out=180,in=-90] (0+1,0) ; \draw (0 ,0) to [out=90,in=180] (0.5+1,0.75) to [out=0,in=90] (3,0) to [out=-90,in=0] (0.5+1,-0.75) to [out=180,in=-90] (0,0) ; % \draw % (0,0) % to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) % to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) % % ; % % % % % % \draw % (2+0,0) % to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) % to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) % % ; 
\end{tikzpicture} \end{minipage}$$ where we highlight with arrows the pair of arcs on which we are about to perform surgery. This is similar to [Example 7](#brace-surgery){reference-type="ref" reference="brace-surgery"}.* # Dyck combinatorics {#dyckgens} We have defined the $p$-Kazhdan--Lusztig polynomials by counting certain oriented Temperley--Lieb diagrams. For the purposes of this paper, we require richer combinatorial objects which *refine* the Temperley--Lieb construction: these are provided by tilings by Dyck paths. Let us start with a simple example to see how Dyck paths arise from oriented Temperley--Lieb diagrams. Consider the partitions $\mu = (5^3,4, 1)$ and $\lambda= (4^2,3,1^2)$. The oriented Temperley--Lieb diagram $\varnothing e_\mu \lambda$ is illustrated in Figure [\[example-2\]](#example-2){reference-type="ref" reference="example-2"}. We see that $\lambda$ is obtained from $\mu$ by swapping the labels of the vertices of one cup in $e_\mu$. The tiles of $\mu$ which intersect this cup form a Dyck path (see the definition below), highlighted in pink. Moreover, the partition $\lambda$ is obtained from the partition $\mu$ by removing the corresponding Dyck path, shaded in grey (equivalently, by removing the pink Dyck path and letting the tiles fall under gravity). More generally, if $\lambda, \mu\in {\mathscr{P}_{m,n}}$ with $\underline{\mu}\lambda$ oriented of degree $k$, then we will see that the partition $\lambda$ is obtained from the partition $\mu$ by removing $k$ Dyck paths. In this section, we develop the combinatorics of Dyck paths needed to give a quadratic presentation for the Hecke category.
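Before developing this combinatorics formally, it may help to fix the path condition computationally. The sketch below assumes the standard encoding of a Dyck path as a word of north-east ($+1$) and south-east ($-1$) steps whose running height never drops below its starting level and which returns to that level at the end; this encoding and the name `is_dyck` are illustrative conventions, not the formal definition given below.

```python
# Illustrative sketch: a Dyck path as a list of +1 (north-east) and
# -1 (south-east) steps.  The path must never dip below its starting
# height and must end at the height where it began.

def is_dyck(steps):
    height = 0
    for step in steps:
        height += step
        if height < 0:
            return False
    return height == 0

assert is_dyck([1, 1, -1, -1])       # a single peak of height two
assert is_dyck([1, -1, 1, -1])       # two peaks of height one
assert not is_dyck([-1, 1])          # dips below the starting height
assert not is_dyck([1, 1, -1])       # does not return to its start
```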
$$\begin{tikzpicture}[scale=0.6] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2.5)--++(-135:2.5) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \foreach \i in {5,6,7,8,9} { \path(0,0)--++(135:2.5-0.5*\i)--++(-135:2.5-0.5*\i) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(90:0.16) node[gray] {$\vee$}; } \foreach \i in {1,2,3,4} { \path(0,0)--++(135:2.5-0.5*\i)--++(-135:2.5-0.5*\i) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; } \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*2)--++(-45:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*1)--++(-45:0.5*1) %%% --++(90:0.16) node[gray] {$\vee$}; % --++(-90:0.18) node[gray] {$\up$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*3) --++(135:0.5*3) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*5) --++(135:0.5*5) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*6) --++(135:0.5*6) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*7) --++(135:0.5*7) %%% --++(90:0.16) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*2)--++(135:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*1)--++(135:0.5*1) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*4)--++(135:0.5*4) %%% % --++(90:0.16) node 
{$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \draw[very thick] (0,0)--++(135:5)--++(45:3)--++(-45:1)--++(45:1)--++(-45:3) --++(45:1)--++(-45:1)--(0,0); \path(0,0)--++(135:4) coordinate (high); % \fill[magenta!20] (high)--++(135:1)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:3)--++(-135:1)--++(135:2)--++(-135:1)--++(135:1); \fill[magenta,opacity=0.3] (high)--++(135:1)--++(45:2)--++(-45:1) --++(45:1)--++(-45:1) --++(45:1)--++(-45:2 ) --++(-135:1)--++(135:1)--++(-135:1)--++(135:1) --++(-135:1)--++(135:1) --++(-135:1); \path(0,0)--++(45:2.75)--++(-45:2.75) coordinate (bottomR); \path(0,0)--++(135:2.75)--++(-135:2.75) coordinate (bottomL); \draw[very thick] (bottomL)--(bottomR); \draw[very thick,densely dotted] (bottomL)--++(180:0.5); \path(0,0) --++(45:2.75)--++(-45:2.75)--++(135:5)--++(45:5) coordinate (topR); \path(0,0)--++(135:2.75)--++(-135:2.75)--++(45:5)--++(135:5) coordinate (topL); \draw[very thick] (topL)--(topR); \draw[very thick,densely dotted] (topL)--++(180:0.5); \draw[very thick,densely dotted] (topR)--++( 0:0.5); % \draw[very thick](0,0)--++(135:5)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:3)--++(45:1) % --++(-45:1) % --(0,0); \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*2)--++(-45:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*1)--++(-45:0.5*1) %%% --++(90:0.16) node[gray] {$\vee$}; % --++(-90:0.18) node[gray] {$\up$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*3) --++(135:0.5*3) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*5) --++(135:0.5*5) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*6) --++(135:0.5*6) %%% --++(90:0.16) 
node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*7) --++(135:0.5*7) %%% --++(90:0.16) node {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*2)--++(135:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*1)--++(135:0.5*1) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*4)--++(135:0.5*4) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \clip(bottomL)--(bottomR)--(topR)--(topL); %first column \path(0,0) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3); \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); 
\path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \draw[very thick,gray!70] (X3)--++(90:7); \draw[very thick,gray!70] (X4)--++(90:7); %first row \path(0,0) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3); \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] 
(X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,black] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X2)--++(-90:7); \draw[very thick,black] (X4)--++(90:7); %second column \path(0,0)--++(45:1)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,black] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,black] (X3)--++(90:7); %second column 
\path(0,0)--++(45:1)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; % % \draw[very thick,black] (X3)--++(90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,black] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start2); \path(start2)--++(45:0.5) coordinate (X1); \path(start2)--++(135:0.5) coordinate (X2); \draw[very thick,black ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X4)--++(90:7); \path(start)--++(-45:1)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,black] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start2); \path(start2)--++(45:0.5) coordinate (X1); \path(start2)--++(135:0.5) coordinate (X2); 
\draw[very thick,black ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(45:1) coordinate (start2); \path(start2)--++(45:0.5) coordinate (X1); \path(start2)--++(135:0.5) coordinate (X2); \draw[very thick,black ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X3)--++(90:7); \path(start2)--++(135:1) coordinate (start3); \path(start3)--++(45:0.5) coordinate (X1); \path(start3)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70 ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X4)--++(90:7); \draw[very thick,gray!70] (X3)--++(90:7); \path(start3)--++(135:1)--++(-135:1) coordinate (start4); \path(start4)--++(45:0.5) coordinate (X1); \path(start4)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70 ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X4)--++(90:7); \draw[very thick,gray!70] (X3)--++(90:7); \clip(0,0)--++(135:5)--++(45:3)--++(-45:1) --++(45:1)--++(-45:3)--++(45:1) --++(-45:1) --(0,0); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } 
\end{tikzpicture} \qquad\qquad \begin{tikzpicture}[scale=0.6] \path(0,0)--++(135:2.5)--++(-135:2.5) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node {$\wedge$}; \foreach \i in {5,6,7,8} { \path(0,0)--++(135:2.5-0.5*\i)--++(-135:2.5-0.5*\i) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(90:0.16) node[gray] {$\vee$}; } \path(0,0)--++(135:2.5-0.5*9)--++(-135:2.5-0.5*9) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(90:0.16) node {$\vee$}; \foreach \i in {1,2,3,4} { \path(0,0)--++(135:2.5-0.5*\i)--++(-135:2.5-0.5*\i) --++(-45:0.25)--++(45:0.25) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; } \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) %%% --++(90:0.16) node {$\vee$}; % --++(-90:0.18) node {$\up$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*2)--++(-45:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(45:0.5*1)--++(-45:0.5*1) %%% --++(90:0.16) node[gray] {$\vee$}; % --++(-90:0.18) node[gray] {$\up$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*3) --++(135:0.5*3) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*5) --++(135:0.5*5) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*6) --++(135:0.5*6) %%% --++(90:0.16) node[gray] {$\vee$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*7) --++(135:0.5*7) %%% --++(-90:0.18) node {$\wedge$}--++(90:0.18) coordinate (hereitis); \path(hereitis)--++(-45:5)--++(-135:5) coordinate (hereitsisnt); \draw[very thick] (hereitis)--(hereitsisnt); \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*2)--++(135:0.5*2) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; 
\path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*1)--++(135:0.5*1) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path(0,0)--++(135:4)--++(45:6) --++(-45:0.25)--++(45:0.25) --++(-135:0.5*4)--++(135:0.5*4) %%% % --++(90:0.16) node {$\down$}; --++(-90:0.18) node[gray] {$\wedge$}; \path (0,0)--++(135:4.2)--++(45:1.2) coordinate (start); % \draw[very thick] (0,0)--++(135:5)--++(45:3)--++(-45:1)--++(45:1)--++(-45:3) % --++(45:1)--++(-45:1)--(0,0); \fill[magenta,opacity=0.3,rounded corners](start)--++(-135:1)--++(135:0.6)--++(45:3-0.4) --++(-45:1) --++(45:1) --++(-45:3-0.4) --++(-135:1-0.4) --++(135:2) --++(-135:1) --++(135:1) --++(-135:1); \draw[very thick] (0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1)--++(-45:2) --++(45:2)--++(-45:1)--(0,0); \path(0,0)--++(45:2.75)--++(-45:2.75) coordinate (bottomR); \path(0,0)--++(135:2.75)--++(-135:2.75) coordinate (bottomL); \draw[very thick] (bottomL)--(bottomR); \draw[very thick,densely dotted] (bottomL)--++(180:0.5); \path(0,0) --++(45:2.75)--++(-45:2.75)--++(135:5)--++(45:5) coordinate (topR); \path(0,0)--++(135:2.75)--++(-135:2.75)--++(45:5)--++(135:5) coordinate (topL); \draw[very thick] (topL)--(topR); \draw[very thick,densely dotted] (topL)--++(180:0.5); \draw[very thick,densely dotted] (topR)--++( 0:0.5); % \draw[very thick](0,0)--++(135:5)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:3)--++(45:1) % --++(-45:1) % --(0,0); \clip(bottomL)--(bottomR)--(topR)--(topL); %first column \path(0,0) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3); \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate 
(X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X1)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70 ] (X1)--++(-90:7); % \draw[very thick,gray!70] (X3)--++(90:7); \draw[very thick ] (X4)--++(90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick ] (X1)--++(-90:7); \draw[very thick,gray!70] (X3)--++(90:7); \draw[very thick,gray!70] (X4)--++(90:7); %first row \path(0,0) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3); \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very 
thick,gray!70] (X1)--++(-90:7); \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X2)--++(-90:7); \draw[very thick,gray!70] (X4)--++(90:7); %second column \path(0,0)--++(45:1)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to 
[out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X3)--++(90:7); %second column \path(0,0)--++(45:1)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start2); \path(start2)--++(45:0.5) coordinate (X1); \path(start2)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70 ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X4)--++(90:7); \draw[very thick,gray!70] (X3)--++(90:7); \path(start)--++(-45:1)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray!70] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start2); \path(start2)--++(45:0.5) coordinate (X1); \path(start2)--++(135:0.5) coordinate (X2); \draw[very thick,gray!70 ] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick ,gray!70] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray!70] (X4)--++(90:7); 
\draw[very thick,gray!70] (X3)--++(90:7); \clip(0,0)--++(135:4)--++(45:2)--++(-45:1) --++(45:1)--++(-45:2)--++(45:2) --++(-45:1) --(0,0); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \end{tikzpicture}$$

## Dyck paths

We define a path on the $m\times n$ tiled rectangle to be a finite non-empty set $P$ of tiles, ordered as $[r_1, c_1], \ldots , [r_s, c_s]$ for some $s\geqslant 1$, such that for each $1\leqslant i\leqslant s-1$ we have $[r_{i+1}, c_{i+1}] = [r_i+1, c_i]$ or $[r_{i+1}, c_{i+1}] = [r_i, c_i -1]$. Note that the set $\underline{\sf cont}(P)$ of contents of the tiles in a path $P$ forms an interval of integers. We say that $P$ is a Dyck path if $$\min \{ r_i+c_i-1 \, : \, 1\leqslant i\leqslant s\} = r_1+c_1-1 = r_s+c_s-1,$$ that is, if the minimal height of the path is achieved at both the start and the end of the path. We will write ${\sf first}(P) = {\sf cont}([r_1, c_1])$ and ${\sf last}(P) = {\sf cont}([r_s, c_s])$.

Throughout the paper, we will identify all Dyck paths $P$ having the same content interval ${\sf cont}(P)$. There are a few places where we will need to fix a particular representative of a Dyck path $P$; in those cases we will use subscripts, such as $P_b$ or $P_{sf}$.

Given a Dyck path $P$, we let $|P|$ denote the number of tiles in $P$. We also define the breadth of $P$, denoted by $b(P)$, to be $$b(P)= \tfrac{1}{2}(|P|+1).$$ This measures the horizontal distance covered by the path.

**Definition 9**.
*Let $P$ and $Q$ be Dyck paths.* - *We say that $P$ and $Q$ are adjacent if and only if the multiset given by the disjoint union ${\sf cont}(P) \sqcup {\sf cont}(Q)$ is an interval.* - *We say that $P$ and $Q$ are distant if and only if $$\min \{ |{\sf cont}[r,c] - {\sf cont}[x,y]| \, : \, [r,c]\in P, [x,y]\in Q\} \geqslant 2.$$* - *We say that $P$ covers $Q$ and write $Q\prec P$ if and only if $${\sf first}(Q) >{\sf first}(P) \,\, \mbox{and} \,\, {\sf last}(Q) < {\sf last} (P).$$* Examples of such Dyck paths $P$ and $Q$ are given in Figure [\[adjdistcover\]](#adjdistcover){reference-type="ref" reference="adjdistcover"}. $$\begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); %\clip (origin3)--++(135:10)--++(45:10)--++(-45:10) % --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:0.5*\i) coordinate (c\i); \path (origin3)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:0.5) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:0.5) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \path(origin3)--++(45:-0.5)--++(135:7.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; 
\fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path (X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); %\clip (origin3)--++(135:10)--++(45:10)--++(-45:10) % --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:0.5*\i) coordinate (c\i); \path (origin3)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:0.5) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:0.5) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \path(origin3)--++(45:-0.5)--++(135:7.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; % \fill[magenta](X) circle (4pt); %\draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(X)--++(-45:1) coordinate (X) ; \path (X)--++(45:2) coordinate (X) ; 
\fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); %\clip (origin3)--++(135:10)--++(45:10)--++(-45:10) % --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:0.5*\i) coordinate (c\i); \path (origin3)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:0.5) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:0.5) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \path(origin3)--++(45:-0.5)--++(135:7.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); %\draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; %\fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; % \fill[magenta](X) circle (4pt); %\draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; 
\fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path (X)--++(-135:1)--++(135:2) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-135:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \end{tikzpicture}$$

## Removable and addable Dyck paths

Now we fix a partition $\mu\in {\mathscr{P}_{m,n}}$. Recall that we identify any pair of Dyck paths which have the same content intervals.

**Definition 10**. *Let $\mu \in {\mathscr{P}_{m,n}}$ and let $P$ be a Dyck path. We say that $P$ is a removable Dyck path from $\mu$ if there is a representative $P_{b}$ of $P$ such that $\lambda:= \mu\setminus P_b\in {\mathscr{P}_{m,n}}$. In this case we will write $\lambda= \mu-P$. (Note that this is well-defined, since if $P_b$ exists then it is unique.)
We define the set ${\rm DRem}(\mu)$ to be the set of all removable Dyck paths from $\mu$.*

*We say that $P$ is an addable Dyck path of $\mu$ if there is a representative $P_b$ of $P$ such that $\lambda:= \mu \sqcup P_b \in {\mathscr{P}_{m,n}}$. In this case we will write $\lambda= \mu + P$. (Note again that this is well-defined, since if $P_b$ exists then it is unique.) We define the set ${\rm DAdd}(\mu)$ to be the set of all addable Dyck paths of $\mu$.*

**Proposition 11**. *Fix $\mu\in {\mathscr{P}_{m,n}}$. There is a bijection between the set of cups in $e_\mu$ (or in $\underline{\mu}$) and the set ${\rm DRem}(\mu)$.*

*Proof.* As observed at the beginning of this section, every cup in $e_\mu$ gives rise to a removable Dyck path. The fact that every removable Dyck path corresponds to a cup follows from the construction of the cup diagram $\underline{\mu}$. ◻

**Definition 12**. *Let $\mu \in {\mathscr{P}_{m,n}}$. For each $P\in {\rm DRem}(\mu)$, we define $P_{sf}$ to be its representative given by the set of tiles intersecting the corresponding cup in $e_\mu$, as shaded in pink in Figure [\[example-2\]](#example-2){reference-type="ref" reference="example-2"}.*

**Lemma 13**. *Let $\mu\in {\mathscr{P}_{m,n}}$ and let $P,Q\in {\rm DRem}(\mu)$. Then either $P$ covers $Q$, or $Q$ covers $P$, or $P$ and $Q$ are distant.*

*Proof.* This follows directly from [Proposition 11](#bijection){reference-type="ref" reference="bijection"}. ◻

**Definition 14**. *Let $\mu\in {\mathscr{P}_{m,n}}$ and $P,Q\in {\rm DRem}(\mu)$. We say that $P$ and $Q$ commute if $P\in {\rm DRem}(\mu -Q)$ and $Q\in {\rm DRem}(\mu - P)$.*

**Lemma 15**. *Let $\mu\in {\mathscr{P}_{m,n}}$ and $P,Q\in {\rm DRem}(\mu)$. Then $P$ and $Q$ commute if and only if $P_{sf}\cap Q_{sf}=\emptyset$.*

*Proof.* This follows directly from the definitions. ◻

## Oriented Temperley--Lieb diagrams and Dyck tiling

We now introduce Dyck tilings and relate them to oriented arc diagrams.

**Definition 16**.
*Let $\lambda\subseteq \mu\in {\mathscr{P}_{m,n}}$. A Dyck tiling of the skew partition $\mu\setminus \lambda$ is a set $\{P^1,\dotsc,P^k\}$ of Dyck paths such that $$\mu\setminus \lambda= \bigsqcup_{i=1}^k P^i$$ and for each $i\neq j$ we have either $P^i$ covers $P^j$ (or vice versa), or $P^i$ and $P^j$ are distant. We call $(\lambda,\mu)$ a Dyck pair of degree $k$ if $\mu \setminus \lambda$ has a Dyck tiling with $k$ Dyck paths.* We will see that Dyck tilings are essentially unique, and as a consequence the degree of a Dyck pair is well defined. Examples of such tilings are given in [\[cref-it\]](#cref-it){reference-type="ref" reference="cref-it"} for the pair $(\lambda,\mu) = ((11,9,8,7,6,4,3^2,2^2), (11^7,8^3,2^2))$. We see that even though the tilings are different (as partitions of $\mu \setminus \lambda$), the Dyck paths appearing are the same (remember that we identify Dyck paths with the same content intervals). $$\begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \draw[very thick](origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:6)--++(45:2) --++(-45:2) --++(-135:12) ; % \clip(0,0)--++(135:7)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:2)--++(45:2)--++(-45:3)--++(45:1)--++(-45:1)--(0,0); \clip (origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:6)--++(45:2) --++(-45:2) --++(-135:12) ; \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0)--++(45:0.5)--++(135:12.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; 
\fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(0,0)--++(45:0.5)--++(135:11.5) coordinate (X); %\fill[violet](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, 
violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \path(0,0)--++(45:2.5)--++(135:10.5) coordinate (X); \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \path(0,0)--++(45:3.5)--++(135:9.5) coordinate (X); \fill[green!70!black](X) circle (4pt); \path(0,0)--++(45:5.5)--++(135:7.5) coordinate (X); \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(-45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(-45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \path(0,0)--++(45:5.5)--++(135:6.5) coordinate (X); \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \path(0,0)--++(45:10.5)--++(135:3.5) coordinate (X); \fill[brown](X) circle (4pt); \draw[ thick, brown](X)--++(45:1) coordinate (X) ; \fill[brown](X) circle (4pt); \draw[ thick, 
brown](X)--++(-45:1) coordinate (X) ; \fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(-45:1) coordinate (X) ; % % %\fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(45:1) coordinate (X) ; % % \fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(-45:1) coordinate (X) ; %\fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(-45:1) coordinate (X) ; %\fill[brown](X) circle (4pt); % % \path(0,0)--++(45:10.5)--++(135:2.5) coordinate (X); \fill[pink](X) circle (4pt); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.325] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \draw[very thick](origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:6)--++(45:2) --++(-45:2) --++(-135:12) ; % \clip(0,0)--++(135:7)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:2)--++(45:2)--++(-45:3)--++(45:1)--++(-45:1)--(0,0); \clip (origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:6)--++(45:2) --++(-45:2) --++(-135:12) ; \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0)--++(45:0.5)--++(135:12.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate 
(X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(0,0)--++(45:0.5)--++(135:11.5) coordinate (X); %\fill[violet](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) 
coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \path(0,0)--++(45:4.5)--++(135:12.5) coordinate (X); \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \path(0,0)--++(45:3.5)--++(135:9.5) coordinate (X); \fill[green!70!black](X) circle (4pt); \path(0,0)--++(45:5.5)--++(135:7.5) coordinate (X); \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(-45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \draw[ thick, gray](X)--++(-45:1) coordinate (X) ; \fill[gray](X) circle (4pt); \path(0,0)--++(45:5.5)--++(135:6.5) coordinate (X); \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \path(0,0)--++(45:10.5)--++(135:3.5) coordinate (X); \fill[brown](X) circle (4pt); \draw[ thick, brown](X)--++(45:1) coordinate (X) ; \fill[brown](X) circle (4pt); \draw[ thick, brown](X)--++(-45:1) coordinate (X) ; \fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(-45:1) coordinate (X) ; % % %\fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(45:1) coordinate (X) ; % % \fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(-45:1) coordinate (X) ; %\fill[brown](X) circle (4pt); %\draw[ thick, brown](X)--++(-45:1) coordinate (X) ; %\fill[brown](X) circle (4pt); % % \path(0,0)--++(45:10.5)--++(135:2.5) 
coordinate (X); \fill[pink](X) circle (4pt); \end{tikzpicture}$$ **Lemma 17**. *Let $\lambda \subseteq \mu\in {\mathscr{P}_{m,n}}$ with Dyck tiling $\mu\setminus \lambda= \sqcup_{i=1}^k P^i$ as in [Definition 16](#Dyckpair){reference-type="ref" reference="Dyckpair"}. Then $P^i\in {\rm DRem}(\mu)$ for all $1\leqslant i \leqslant k$.* *Proof.* We argue by induction on $k$. If $k=0$ there is nothing to prove. If $k=1$ then $\mu \setminus \lambda= P^1$ and so $P^1\in {\rm DRem}(\mu)$ as required. Now let $k\geqslant 2$ and assume that the result holds for $k-1$. Pick a removable tile $[x,y]$ of $\mu\setminus \lambda$ belonging to some $P^j$ with $|P^j|$ minimal. Then we claim that $P^j\in {\rm DRem}(\mu)$. Indeed, if there were any tile $[r,c]$ above $P^j$ preventing it from being removable, then by the definition of Dyck pair and the minimality of $P^j$, the tile $[r,c]$ would belong to a Dyck path $Q$ which covers $P^j$. But this would contradict the fact that $[x,y]$ is removable. It remains to show that $P^i\in {\rm DRem}(\mu)$ for all $i\neq j$. Now, we have that $(\lambda, \mu-P^j)$ is a Dyck pair and so by induction $P^i\in {\rm DRem}(\mu-P^j)$ for all $i\neq j$. Fix $i\neq j$. If $P^i$ and $P^j$ are distant, then we have $P^i\in {\rm DRem}(\mu)$ as required. If $P^i$ covers $P^j$ then we must have $|P^i|\geqslant|P^j|+4$, as it is impossible to have a partition $\mu$ and Dyck paths $Q, Q'$ with $|Q'|=|Q|+2$, $Q\in {\rm DRem}(\mu)$ and $Q'\in {\rm DRem}(\mu-Q)$. This means that we can shift the tiles of $P^i\in {\rm DRem}(\mu-P^j)$ with the same contents as those of $P^j\in {\rm DRem}(\mu)$ one step up and we get an equivalent Dyck path which is now removable from $\mu$ as required. This is illustrated in [\[YYY\]](#YYY){reference-type="ref" reference="YYY"}. Finally, suppose $P^j$ covers $P^i$. In this case we can again shift the tiles of $P^i\in {\rm DRem}(\mu - P^j)$ one step up so that it is now a subset of $P^j\in {\rm DRem}(\mu)$.
We claim that this subset is also removable. If not, then we would have some $P^l$ which is adjacent to $P^i$, contradicting the fact that $(\lambda, \mu)$ is a Dyck pair. This case is illustrated in [\[ZZZ\]](#ZZZ){reference-type="ref" reference="ZZZ"}. ◻ $$\vspace{-1cm} \begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:0.5)--++(45:0.5) coordinate (uselesuekl); \clip(uselesuekl) --++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \path(0,0)--++(45:3)--++(135:13) coordinate (X); \fill[cyan,opacity=0.3](X)--++(45:4)--++(-45:3) --++(45:3)--++(-45:4) --++(-135:1)--++(135:3)--++(-135:3)--++(135:3)--++(-135:3); \draw[very thick](origin3)--++(135:11) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(45:1) coordinate (origin4); \draw[very thick] (origin4) --++(45:6) --++(-45:3) --++(45:3)--++(-45:6) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(-45:1) ; \clip (origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0)--++(45:0.5)--++(135:12.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, 
magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:0.5)--++(45:0.5) coordinate (uselesuekl); \clip(uselesuekl) --++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \path(0,0)--++(45:3)--++(135:13) coordinate (X); \fill[cyan,opacity=0.3](X)--++(45:4)--++(-45:3) --++(45:3)--++(-45:4) --++(-135:1)--++(135:3)--++(-135:3)--++(135:3)--++(-135:3); \draw[very thick](origin3)--++(135:11) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(45:1) coordinate (origin4); \draw[very thick] (origin4) --++(45:6) --++(-45:3) --++(45:3)--++(-45:6) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(-45:1) ; \clip (origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) 
coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0)--++(45:0.5)--++(135:12.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \end{tikzpicture}$$ $$\begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:0.5)--++(45:0.5) coordinate (uselesuekl); \clip(uselesuekl) 
--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \path(0,0)--++(45:2)--++(135:13) coordinate (X); \fill[cyan,opacity=0.3](X)--++(45:5)--++(-45:3) --++(45:3)--++(-45:5) --++(-135:1)--++(135:4)--++(-135:3)--++(135:3)--++(-135:4); \draw[very thick](origin3)--++(135:11) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(45:1) coordinate (origin4); \draw[very thick] (origin4) --++(45:6) --++(-45:3) --++(45:3)--++(-45:6) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(-45:1) ; \clip (origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0)--++(45:01.5)--++(135:11.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(X)--++(-45:1) coordinate (X) ; \path (X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; 
\fill[orange](X) circle (4pt); \path(0,0)--++(45:02.5)--++(135:12.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.35] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:0.5)--++(45:0.5) coordinate (uselesuekl); \clip(uselesuekl) --++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \path(0,0)--++(45:2)--++(135:13) coordinate (X); \fill[cyan,opacity=0.3](X)--++(45:5)--++(-45:3) --++(45:3)--++(-45:5) --++(-135:1)--++(135:4)--++(-135:3)--++(135:3)--++(-135:4); \draw[very thick](origin3)--++(135:11) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(45:1) coordinate (origin4); \draw[very thick] (origin4) --++(45:6) --++(-45:3) --++(45:3)--++(-45:6) coordinate (origin4); \draw[very thick,densely dotted] (origin4) --++(-45:1) ; \clip (origin3)--++(135:11)--++(45:7)--++(-45:3) --++(45:3)--++(-45:7) --++(-135:10) ; \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0)--++(45:01.5)--++(135:11.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) 
coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path (X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \draw[ thick, orange](X)--++(-45:1) coordinate (X) ; \fill[orange](X) circle (4pt); \path(0,0)--++(45:02.5)--++(135:12.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \end{tikzpicture}$$ **Theorem 18**. *Let $\lambda, \mu\in {\mathscr{P}_{m,n}}$. Then $\underline{\mu}\lambda$ is oriented if and only if $(\lambda,\mu)$ is a Dyck pair.* *Proof.* Assume that $\underline{\mu} \lambda$ is oriented. Then the weight $\lambda$ is obtained from the weight $\mu$ by swapping the labels of pairs corresponding to some of the cups in $e_\mu$. 
Let $P^1, \ldots, P^k$ be the Dyck paths corresponding to these cups. We list these in order such that if $P^i\prec P^j$ then $j<i$. It is then easy to see that $P^i\in {\rm DRem}(\mu-P^1 - \ldots - P^{i-1})$ for all $i$ and $\lambda= \mu - P^1 - \ldots - P^k$. It follows from [Lemma 13](#Remproperties){reference-type="ref" reference="Remproperties"} that $\mu\setminus \lambda= \sqcup_{i=1}^k P^i$ is a Dyck tiling. Conversely, suppose that $\lambda\subseteq \mu$ with $\mu\setminus \lambda= \sqcup_{i=1}^k P^i$ a Dyck tiling. Then it follows from [Lemma 17](#tilingrem){reference-type="ref" reference="tilingrem"} that each $P^i\in {\rm DRem}(\mu)$ and so $\underline{\mu}\lambda$ is oriented. ◻ **Corollary 19**. *Let $\lambda\subseteq \mu\in {\mathscr{P}_{m,n}}$. Then a decomposition $\mu\setminus \lambda= \sqcup_{i=1}^k P^i$ into Dyck paths $P^i$ is a Dyck tiling if and only if $P^i\in {\rm DRem}(\mu)$ for all $i$. In particular, the set of Dyck paths $\{P^i \, : \, 1\leqslant i\leqslant k\}$ is unique. Moreover, in this case we have ${\rm deg}(\underline{\mu}\lambda) = k$.* ## Dyck paths generated by tiles We will need one last piece of combinatorics to describe our quadratic presentation for the Hecke category. **Definition 20**. *Fix $\mu \in \mathscr{P}_{m,n}$ and $[r,c]\in \mu$. Let $l$ and $k$ be the maximal non-negative integers such that $[r-i, c+i] \in \mu$ for all $0\leqslant i\leqslant l$, $[r-i+1, c+i]\in \mu$ for all $1\leqslant i\leqslant l$ and $[r+j, c-j]\in \mu$ for all $0\leqslant j\leqslant k$, $[r+j, c-j+1]\in \mu$ for all $1\leqslant j\leqslant k$. Then define the Dyck path generated by the tile $[r,c]\in \mu$, denoted by $\langle r,c \rangle_\mu$, to be the path $$[r-l, c+l], [r-l+1, c+l] ,\ldots , [r,c] , \ldots , [r+k, c-k].$$* Note that the Dyck path generated by a tile of a partition $\mu$ may or may not be in ${\rm DRem}(\mu)$ as illustrated in Figure [\[genDyck\]](#genDyck){reference-type="ref" reference="genDyck"}.
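The maximality conditions in Definition 20 are directly algorithmic. The following is a minimal sketch, not code from the paper: we assume a partition $\mu$ is encoded as a set of `(row, col)` tiles, and we return $\langle r,c\rangle_\mu$ as the list of tiles read from the north-east end $[r-l,c+l]$ down to the south-west end $[r+k,c-k]$.

```python
# Sketch of Definition 20 (the encoding of mu as a set of (row, col)
# pairs is an assumption of this illustration, not the paper's notation).

def generated_dyck_path(mu, r, c):
    """Return the Dyck path <r,c>_mu generated by the tile [r,c] of mu."""
    assert (r, c) in mu
    # l is maximal with [r-i, c+i] in mu for 0 <= i <= l and
    # [r-i+1, c+i] in mu for 1 <= i <= l.
    l = 0
    while (r - l - 1, c + l + 1) in mu and (r - l, c + l + 1) in mu:
        l += 1
    # k is maximal with [r+j, c-j] in mu for 0 <= j <= k and
    # [r+j, c-j+1] in mu for 1 <= j <= k.
    k = 0
    while (r + k + 1, c - k - 1) in mu and (r + k + 1, c - k) in mu:
        k += 1
    path = []
    for i in range(l, 0, -1):  # north-east arm, contents decreasing
        path += [(r - i, c + i), (r - i + 1, c + i)]
    path.append((r, c))
    for j in range(1, k + 1):  # south-west arm
        path += [(r + j, c - j + 1), (r + j, c - j)]
    return path
```

The returned list has $2l+2k+1$ tiles, one for each content in the interval $[c-r-2k,\,c-r+2l]$, matching the content-interval description of Dyck paths used above.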
$$\begin{tikzpicture}[scale=0.325] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:6)--++(45:5)--++(-45:4) --++(45:2)--++(-45:2) --(0,0); % \clip(0,0)--++(135:7)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:2)--++(45:2)--++(-45:3)--++(45:1)--++(-45:1)--(0,0); \clip (0,0)--++(135:6)--++(45:5)--++(-45:4) --++(45:2)--++(-45:2) --(0,0); \path(0,0)--++(45:1.5)--++(135:5.5) coordinate (X); \path(0,0)--++(45:3)--++(135:4) coordinate (XY); \fill[magenta!20](XY)--++(45:1)--++(135:1)--++(-135:1)--++(-45:1); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \draw[ thick, magenta](X)--++(-45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(0,0)--++(45:-0.5)--++(135:5.5) coordinate (X); \path(0,0)--++(45:2)--++(135:3) coordinate (XY); \fill[cyan!20](XY)--++(45:1)--++(135:1)--++(-135:1)--++(-45:1); %\fill[cyan](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); \fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(45:1) coordinate (X) ; 
\fill[cyan](X) circle (4pt); \draw[ thick, cyan](X)--++(-45:1) coordinate (X) ; \fill[cyan](X) circle (4pt); %\clip (0,0)--++(135:13)--++(45:1)--++(-45:1)--++(45:5) % --++(-45:3) % --++(45:3)--++(-45:5)--++(45:4) % --++(-45:1)--++(45:1) % --++(-45:3)--(0,0); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \end{tikzpicture}$$ # Generators for the Hecke category {#gens} In this section, we lift the combinatorics of [4](#dyckgens){reference-type="ref" reference="dyckgens"} to provide a new set of generators for the Hecke category $\mathcal{H}_{m,n}$. These generators are not 'monoidal', but all lie in degree $0$ or $1$ (this is the first step in providing a quadratic presentation). ## Soergel diagrams from oriented Temperley--Lieb diagrams We now revisit the classical definition of the light leaves basis starting from oriented Temperley--Lieb diagrams. This material is covered in detail (from a slightly different perspective) in [@compan]. **Definition 21**. *We define up and down operators on diagrams as follows. Let $D$ be any Soergel graph with northern colour sequence $\mathsf{s}\in {\rm Std}(\alpha)$ for some $\alpha\in {\mathscr{P}_{m,n}}$.* - *Suppose that ${{{\color{magenta}\bm\sigma}}}\in {\rm Add}(\alpha)$. 
We define $$\qquad {\sf U}^1_{{{\color{magenta}\bm\sigma}}}(D)=\begin{minipage}{1.85cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.5,0) rectangle (0.75,0.75) node [midway] {$ D$} ; %\draw[magenta,line width=0.08cm](0,0)--++(90:0.75); \end{tikzpicture}\end{minipage} %\otimes \begin{minipage}{0.75cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.25,0.75); \draw[magenta,line width=0.08cm](0,0)--++(90:0.75); \end{tikzpicture}\end{minipage} \qquad \qquad {\sf U}^0_{{{\color{magenta}\bm\sigma}}}(D)= \begin{minipage}{1.85cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.5,0) rectangle (0.75,0.75) node [midway] {$D$} ; %\draw[magenta,line width=0.08cm](0,0)--++(90:0.75); \end{tikzpicture}\end{minipage} % \otimes \begin{minipage}{0.75cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.25,0.75); \draw[magenta,line width=0.08cm](0,0)--++(90:0.3525) coordinate (hi); \draw[fill=magenta,magenta] (hi) circle (3pt); \end{tikzpicture}\end{minipage}$$* - *Now suppose that ${\color{magenta}[r,c]} \in {\rm Rem}(\alpha)$ for $\alpha \vdash a$ and with ${\color{magenta}r-c}={{{\color{magenta}\bm\sigma}}}\in S$. We let $\mathsf{t}\in {\rm Std}(\alpha-{\color{magenta}[r,c]})$ be defined as follows: if $\mathsf{s}{\color{magenta}[r,c]}=k$ then we let $\mathsf{t}^{-1}(j)=\mathsf{s}^{-1}(j)$ for $1\leqslant j <k$ and $\mathsf{t}^{-1}(j-1)=\mathsf{s}^{-1}(j)$ for $k<j \leqslant a$. We let $\mathsf{t}\otimes {{{\color{magenta}\bm\sigma}}}\in {\rm Std}(\alpha)$ be defined by $(\mathsf{t}\otimes {{{\color{magenta}\bm\sigma}}}){\color{magenta}[r,c]}=a$ and $(\mathsf{t}\otimes {{{\color{magenta}\bm\sigma}}})[x,y] =\mathsf{t}[x,y]$ otherwise. 
We define $${\sf D}_{{{\color{magenta}\bm\sigma}}}^0(D)= \begin{minipage}{1.85cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-1,0) rectangle (0.75,0.75) node [midway] {${\sf braid}^{\mathsf{t}\otimes {{{\color{magenta}\bm\sigma}}}}_{\mathsf{s}}$} ; \draw[densely dotted, rounded corners] (-1,0.75) rectangle (0.25,1.5) node [midway] {${\sf 1}_{\mathsf{t}}$}; \draw[densely dotted, rounded corners] (-1,-0.75) rectangle (0.75,0) node [midway] {$D$}; ;\draw[densely dotted, rounded corners] (1.25,0.75) rectangle (0.25,1.5) ; \draw[densely dotted, rounded corners] (0.75,-0.75) rectangle (1.25,0.75); \draw[magenta,line width=0.08cm](1,-0.75)--++(90:1.5) to [out=90,in=-10] (0.5,1.1); %%SPOT ME \draw[magenta,line width=0.08cm](0.5,0.75)--++(90:0.75) ; \end{tikzpicture}\end{minipage} \qquad \qquad \qquad\quad {\sf D}_{{{\color{magenta}\bm\sigma}}}^1(D)= \begin{minipage}{3.55cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-1,0) rectangle (0.75,0.75) node [midway] {${\sf braid}^{\mathsf{t}\otimes {{{\color{magenta}\bm\sigma}}}}_{\mathsf{s}}$} ; \draw[densely dotted, rounded corners] (-1,0.75) rectangle (0.25,1.5) node [midway] {${\sf 1}_{\mathsf{t}}$}; %\draw[densely dotted, rounded corners] (-1,0) rectangle (0.75,0.75) node [midway] {${\sf braid}_{\SSTT^\la}^{\SSTT^{ \la - \csigma}\otimes\SSTT^{ \csigma}}$} ; % %\draw[densely dotted, rounded corners] (-1,0.75) rectangle (0.25,1.5) node [midway] {${\sf 1}_{\SSTT^{\lambda-\csigma}}$}; \draw[densely dotted, rounded corners] (-1,-0.75) rectangle (0.75,0) node [midway] {$D$}; ;\draw[densely dotted, rounded corners] (1.25,0.75) rectangle (0.25,1.5) ; \draw[densely dotted, rounded corners] (0.75,-0.75) rectangle (1.25,0.75); \draw[magenta,line width=0.08cm](1,-0.75)--++(90:1.5) to [out=90,in=-10] (0.5,1.1); \draw[magenta,line width=0.08cm](0.5,0.75)--++(90:0.6) coordinate(hi); \draw[fill=magenta,magenta] (hi) circle (3pt); \end{tikzpicture}\end{minipage}.\qquad$$* **Definition 22**. 
*Let $\lambda, \mu \in {\mathscr{P}_{m,n}}$ with $(\lambda,\mu)$ a Dyck pair. Recall the oriented tableau $\mathsf{t}_\mu^\lambda$ from Definition 2.2. Suppose that $\mathsf{t}_\mu^\lambda[r_k,c_k] = (k,x_k)$ for each $1\leqslant k\leqslant\ell(\mu)$. We construct the Soergel graph $D_\mu^\lambda$ inductively as follows. We set $D_0$ to be the empty diagram. Now let $1\leqslant k\leqslant\ell(\mu)$ with ${{{\color{magenta}\bm\sigma}}}= r_k-c_k$. We set $$D_k= \left\{ \begin{array}{ll} {\sf U}^1_{{{{\color{magenta}\bm\sigma}}}}(D_{k-1}) & \mbox{if $x_k = 1$;}\\[6pt] {\sf U}^0_{{{{\color{magenta}\bm\sigma}}}}(D_{k-1}) & \mbox{if $x_k = s$;}\\[6pt] {\sf D}^0_{{{{\color{magenta}\bm\sigma}}}}(D_{k-1}) & \mbox{if $x_k = f$;}\\[6pt] {\sf D}^1_{{{{\color{magenta}\bm\sigma}}}}(D_{k-1}) & \mbox{if $x_k = sf$}. \end{array}\right.$$ Now $D_{\ell(\mu)}$ has southern colour sequence $\mathsf{t}_\mu$ and northern colour sequence some $\mathsf{s}\in {\rm Std}(\lambda)$. We then define $D_\mu^\lambda= {\sf braid}_\mathsf{s}^{\mathsf{t}_\lambda} \circ D_{\ell(\mu)}$. We also define $D^\mu_\lambda= (D^\lambda_\mu)^\ast$.* **Example 23**. 
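The case analysis in Definition 22 can be recorded symbolically. Below is a minimal sketch (the encoding of the labels $x_k$ as the strings `"1"`, `"s"`, `"f"`, `"sf"` is an assumption of this illustration): given the label sequence read off $\mathsf{t}_\mu^\lambda$, it lists which of the four operators ${\sf U}^1_{{{\color{magenta}\bm\sigma}}}$, ${\sf U}^0_{{{\color{magenta}\bm\sigma}}}$, ${\sf D}^0_{{{\color{magenta}\bm\sigma}}}$, ${\sf D}^1_{{{\color{magenta}\bm\sigma}}}$ is applied at each step; the Soergel graphs themselves are not modelled.

```python
# Sketch of the operator dispatch in Definition 22.  The string encoding
# of the labels x_k is an assumption of this illustration; applying the
# operators to actual Soergel graphs is not modelled here.

OPERATOR = {"1": "U1", "s": "U0", "f": "D0", "sf": "D1"}

def operator_word(labels):
    """Return the sequence of operators building D_mu^lambda step by step."""
    return [OPERATOR[x] for x in labels]
```

For instance, a tableau whose labels read $1, sf, s, f$ would be built by applying ${\sf U}^1$, ${\sf D}^1$, ${\sf U}^0$, ${\sf D}^0$ in turn to the empty diagram.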
*In [\[twopics\]](#twopics){reference-type="ref" reference="twopics"} we provide an example of the labelling of an oriented Temperley--Lieb diagram and the corresponding up-down construction of a basis element.* $$\qquad \begin{minipage}{4cm}\begin{tikzpicture}[scale=1] \path(0,0) coordinate (origin); \path(0,0) coordinate (origin2); \path (0,-0.7)--++(135:0.5)--++(-135:0.5) coordinate (minus1); \path (minus1)--++(135:0.5)--++(-135:0.5) coordinate (minus2); \path (minus2)--++(135:0.5)--++(-135:0.5) coordinate (minus3); \path (minus3)--++(135:0.5)--++(-135:0.5) coordinate (minus4); \path (0,-0.7)--++(45:0.5)--++(-45:0.5) coordinate (plus1); \path (plus1)--++(45:0.5)--++(-45:0.5) coordinate (plus2); \path (plus2)--++(45:0.5)--++(-45:0.5) coordinate (plus3); \path (plus3)--++(45:0.5)--++(-45:0.5) coordinate (plus4); \path (plus2) coordinate (plus35); \path (minus2) coordinate (minus35); \draw[very thick](plus35)--(minus35); %\draw[very thick,densely dotted](plus35)--++(0:0.5) ; %\draw[very thick,densely dotted](minus35)--++(180:0.5) ; \path(minus1)--++(45:0.4) coordinate (minus1NE); \path(minus1)--++(-45:0.4) coordinate (minus1SE); \path(minus2)--++(135:0.4) coordinate (minus1NW); \path(minus2)--++(-135:0.4) coordinate (minus1SW); \path(minus1NE)--++(180:0.4) coordinate (start); \draw[rounded corners, thick] (start)-- (minus1NE)--(minus1SE)--(minus1SW) -- (minus1NW) -- (start); \path(plus2)--++(45:0.4) coordinate (minus1NE); \path(plus2)--++(-45:0.4) coordinate (minus1SE); \path(plus1)--++(135:0.4) coordinate (minus1NW); \path(plus1)--++(-135:0.4) coordinate (minus1SW); \path(minus1NE)--++(180:0.4) coordinate (start); \draw[rounded corners, thick] (start)--(minus1NE)-- (minus1SE)--(minus1SW)--(minus1NW)--(start); \draw[very thick,fill=magenta](0,-0.7) circle (4pt); \draw[very thick, fill=darkgreen](minus1) circle (4pt); \draw[very thick, fill=orange](minus2) circle (4pt); \draw[very thick, fill=cyan!80](plus1) circle (4pt); \draw[very thick, fill=violet](plus2) circle 
(4pt); \path(0,0)--++(135:3.5)--++(45:3.5) --++(135:0.25)--++(-135:0.25) --++(90:0.13) node {$\vee$}; \path(0,0) --++(-45:0.5) --++(45:0.5) --++(-45:0.5) --++(45:0.5) --++(-45:0.5) --++(45:0.5) --++(135:3.5)--++(45:3.5) --++(135:0.25)--++(-135:0.25) --++(90:0.13) node {$\vee$}; \path(0,0) --++(-45:0.5) --++(45:0.5) --++(-45:0.5) --++(45:0.5) --++(135:3.5)--++(45:3.5) --++(135:0.25)--++(-135:0.25) --++(90:0.13) node {$\vee$}; % \path(0,0) % --++(-135:0.5) --++(135:0.5) --++(-135:0.5) --++(135:0.5) % --++(135:3.5)--++(45:3.5) % --++(135:0.25)--++(-135:0.25) % --++(90:0.13) node {$\down$}; \path(0,0)--++(45:1.5 )--++(-45:1.5 ) coordinate (bottomR); \path(0,0)--++(135:1.5 )--++(-135:1.5 ) coordinate (bottomL); \draw[very thick] (bottomL)--(bottomR); % \draw[very thick,densely dotted] (bottomL)--++(180:0.5); % \draw[very thick,densely dotted] (bottomR)--++(0:0.5); \path(0,0) --++(45:1.5 )--++(-45:1.5 )--++(135:3.5)--++(45:3.5) coordinate (topR); \path(0,0)--++(135:1.5 )--++(-135:1.5 )--++(45:3.5)--++(135:3.5) coordinate (topL); \draw[very thick] (topL)--(topR); % \draw[very thick,densely dotted] (topL)--++(180:0.5); % \draw[very thick,densely dotted] (topR)--++( 0:0.5); % \draw[very thick](0,0)--++(135:5)--++(45:3)--++(-45:1) % --++(45:1)--++(-45:3)--++(45:1) % --++(-45:1) % --(0,0); \clip(bottomL)--(bottomR)--(topR)--(topL); %first column \path(0,0) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3); \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X1)--++(-90:7); \draw[very thick,gray] (X2)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate 
(X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X1)--++(-90:7); \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X1)--++(-90:7); \draw[very thick,gray] (X3)--++(90:7); %first row \path(0,0) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3); \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X1)--++(-90:7); \draw[very thick,gray] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X2)--++(-90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X2)--++(-90:7); \draw[very thick,gray] (X4)--++(90:7); \path(X3)--++(45:0.5)--++(135:0.5) coordinate (X3); \draw[very thick,gray] (X3)--++(90:7); %second column \path(0,0)--++(45:1)--++(135:1) coordinate (start); \path(start)--++(45:0.5) 
coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(45:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X3)--++(90:7); \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \draw[very thick,gray] (X3)--++(90:7); \draw[very thick,gray] (X4)--++(90:7); %second column \path(0,0)--++(45:1)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; \path(start)--++(135:1) coordinate (start); \path(start)--++(45:0.5) coordinate (X1); \path(start)--++(135:0.5) coordinate (X2); \draw[very thick,gray] (X1) to [out=135,in= 45] (X2); \path(X1)--++(45:0.5)--++(135:0.5) coordinate (X3) ; \path(X2)--++(45:0.5)--++(135:0.5) coordinate (X4); \draw[very thick,gray] (X3) to [out=-135,in= -45] (X4);; % % \draw[very thick,gray] (X3)--++(90:7); \path(0,0) --++(-135:0. 5)--++(135:0. 5) --++(-135:0. 5)--++(135:0. 
5) --++(135:3.5)--++(45:3.5) --++(45:0.25)--++(-45:0.25) --++(90:-0.13) node {$\wedge$}; \path(0,0) --++(-135:0. 5)--++(135:1) --++(-135:1)--++(135:0. 5) --++(135:3.5)--++(45:3.5) --++(45:0.25)--++(-45:0.25) --++(90:-0.13) node {$\wedge$}; \path(0,0)--++(135:3.5)--++(45:3.5) --++(45:0.25)--++(-45:0.25) --++(90:-0.13) node {$\wedge$}; \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:2.5)--++(45:0.5) node {\scalefont{0.75}$s$}; \path(0,0)--++(135:2.5)--++(45:1.5) node {\scalefont{0.75}$sf$}; \path(0,0)--++(135:2.5)--++(45:1.5) --++(45:1) node {\scalefont{0.75}$f$}; \path(0,0) --++(135:2.5)--++(45:0.5) --++(45:1) --++(-45:1) node {\scalefont{0.75}$s$}; \path(0,0)--++(135:2.5)--++(45:1.5) --++(45:1) --++(-45:1)node {\scalefont{0.75}$sf$}; \path(0,0) --++(135:2.5)--++(45:0.5) --++(45:1) --++(-45:1) --++(45:1) --++(-45:1) node {\scalefont{0.75}$s$}; \end{tikzpicture} \end{minipage} \qquad\qquad\qquad\qquad \begin{minipage}{6.4cm} \begin{tikzpicture}[scale=1.4] \draw[densely dotted, rounded corners] (-0.25,0) rectangle ++(0.5,1); \draw[densely dotted, rounded corners] (0.25,0) rectangle ++(0.5,1); \draw[densely dotted, rounded corners] (0.75,0) rectangle ++(0.5,1); \draw[densely dotted, rounded corners] (1.25,0) rectangle ++(0.5,1); \draw[densely dotted, rounded corners] (1.75,0) rectangle ++(0.5,1); \draw[densely dotted, rounded corners] (2.25,0) rectangle ++(0.5,1); \draw[densely dotted, rounded corners,fill=white] (0.25,1) --(-0.25,1)--++(90:1)--++(0:3.5)--++(-90:2)--++(180:0.5)--++(90:1)--++(180:2.75); \draw[densely dotted, rounded corners,fill=white] (0.25,2) 
--(-0.25,2)--++(90:1)--++(0:4)--++(-90:3)--++(180:0.5)--++(90:2)--++(180:3.25); \draw[densely dotted, rounded corners,fill=white] (0.25,3) --(-0.25,3)--++(90:1)--++(0:4.5)--++(-90:4)--++(180:0.5)--++(90:3)--++(180:3.75); \draw[magenta,line width=0.08cm](0,0)--++(90:4); \draw[darkgreen,line width=0.08cm](0.5,0)--++(90:1.75) coordinate (X); \fill[darkgreen ] (X) circle (3pt); \draw[cyan,line width=0.08cm](1,0) --++(90:2.75) coordinate (X); \fill[cyan ] (X) circle (3pt); \draw[orange,line width=0.08cm](1.5,0) --++(90:0.5) coordinate (X); \fill[orange ] (X) circle (3pt); \draw[magenta,line width=0.08cm](2,0) --++(90:0.5) coordinate (X); \fill[magenta ] (X) circle (3pt); \draw[violet,line width=0.08cm](2.5,0) --++(90:0.5) coordinate (X); \fill[violet ] (X) circle (3pt); \draw[darkgreen,line width=0.08cm] (0.5,1.4)--++(0:2) to [out=0,in=90] (3,1)--(3,0); \draw[cyan,line width=0.08cm] (1,2.4)--++(0:1.75) to [out=0,in=90] (3.5,1.5)--(3.5,0); \draw[magenta,line width=0.08cm] (0,3.4)--++(0:3.1) to [out=0,in=90] (4,2.75)-- (4,0); \draw(0,-0.3) node {\color{magenta}$\sf U^1 $}; \draw(.50,-0.3) node {\color{darkgreen}$\sf U^1 $}; \draw(1.0,-0.3) node {\color{cyan}$\sf U^1 $}; \draw(1.5,-0.3) node {\color{orange}$\sf U^0 $}; \draw(2.0,-0.3) node {\color{magenta}$\sf U^0 $}; \draw(2.5,-0.3) node {\color{violet}$\sf U^0 $}; \draw(3.0,-0.3) node {\color{darkgreen}$\sf D^1 $}; \draw(3.5,-0.3) node {\color{cyan}$\sf D^1 $}; \draw(4.0,-0.3) node {\color{magenta}$\sf D^0 $}; \end{tikzpicture}\end{minipage}$$ **Theorem 24**. 
*The algebra $\mathcal{H}_{m,n}$ is a graded cellular (in fact quasi-hereditary) algebra with graded cellular basis given by $$\label{basis} \{D_\lambda^\mu D^\lambda_\nu \mid \lambda,\mu,\nu\in {\mathscr{P}_{m,n}}\,\, \mbox{with}\,\, (\lambda,\mu), (\lambda,\nu) \text{ Dyck pairs} \}$$ with $${\rm deg}( D_\lambda^\mu D^\lambda_\nu) = {\rm deg}(\lambda, \mu) + {\rm deg}(\lambda, \nu),$$ with respect to the involution $*$ and the partial order on ${\mathscr{P}_{m,n}}$ given by inclusion.* *Proof.* This is simply a combinatorial rephrasing of the light leaves basis, constructed in full generality in [@MR3555156; @MR2441994] and reproven in the case $(W,P)= (S_{m+n}, S_m \times S_n)$ in [@MR4323501]. ◻ Note, in particular, that the degree $0$ basis elements are given by $D_\lambda^\lambda= {\sf 1}_\lambda$ for $\lambda\in {\mathscr{P}_{m,n}}$, and the degree $1$ basis elements are given by $D_\mu^\lambda$ and $D_\lambda^\mu$ for $\lambda, \mu \in {\mathscr{P}_{m,n}}$ with $\lambda= \mu - P$ for some $P\in {\rm DRem}(\mu)$. We will show that these degree $0$ and degree $1$ elements generate $\mathcal{H}_{m,n}$. But first we describe an easy way of visualising products of light leaves basis elements directly from the oriented tableaux used to define them. ## Multiplying generators on the oriented tableaux We have seen that the Soergel graph $D_\mu^\lambda$ is completely determined by the oriented tableau $\mathsf{t}_\mu^\lambda$. To visualise the multiplication of two such Soergel graphs directly from the oriented tableaux, we will want to consider pairs of tableaux of the same shape. This can easily be done by adding one more possible orientation for tiles, namely $0$. When constructing the corresponding Soergel graph, whenever we encounter a tile with $0$-orientation we simply tensor with the empty Soergel graph, that is, we leave the graph unchanged (see [\[blow-up\]](#blow-up){reference-type="ref" reference="blow-up"} for two examples). 
$$\begin{minipage}{2.2cm} \begin{tikzpicture}[scale=0.7] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {1}; \path(d2) node {1}; \path(d22) node {\color{magenta}$s$}; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} 
{ \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {1}; \path(d2) node {1}; \path(c3) node {$\color{cyan}0$}; \path(d3) node {$\color{cyan}0$}; \path(d22) node {$\color{cyan}0$}; \path(d32) node {$\color{cyan}0$}; \path(d23) node {$\color{cyan}0$}; \path(d33) node {\color{magenta}$s$}; \end{tikzpicture} \end{minipage} \qquad\qquad\quad \begin{minipage}{2.2cm} \begin{tikzpicture}[scale=0.7] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {\color{magenta}$s$}; \path(d2) node {\color{magenta}$s$}; \path(d22) node {\color{magenta}$f$}; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) 
coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {\color{magenta}$s$}; \path(d2) node {\color{magenta}$s$}; \path(c3) node {$\color{cyan}0$}; \path(d3) node {$\color{cyan}0$}; \path(d22) node {$\color{cyan}0$}; \path(d32) node {$\color{cyan}0$}; \path(d23) node {$\color{cyan}0$}; \path(d33) node {\color{magenta}$f$}; \end{tikzpicture} \end{minipage}$$ For example, let $P \in {\rm DRem}(\mu)$ and $Q\in {\rm DRem}(\mu-P)$, and assume for now that $P$ and $Q$ commute. We would like to be able to visualise the product $$D^{\mu - P -Q}_{\mu - P} D^{\mu - P}_\mu$$ on the oriented tableaux. So instead of considering the oriented tableau $\mathsf{t}_{\mu - P}^{\mu - P - Q}$ as a labelling of the tiles of $\mu - P$, we can visualise it as a labelling of the tiles of $\mu$ with all tiles belonging to $P_{sf}$ having $0$-orientation. The orientations of the other tiles remain unchanged. 
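The relabelling step just described is purely mechanical. A minimal sketch, not part of the paper, assuming a hypothetical encoding of an oriented tableau as a dictionary from tile coordinates $[r,c]$ to orientation labels:

```python
# A minimal sketch (hypothetical encoding): an oriented tableau is a
# dictionary mapping tile coordinates (r, c) to orientation labels
# drawn from {"1", "s", "f", "sf", "0"}.
def extend_by_zero(labelling, p_sf):
    """View an oriented tableau of mu - P as a labelling of the tiles of
    mu: every tile of P_sf receives the 0-orientation, and the
    orientations of all other tiles are left unchanged."""
    extended = dict(labelling)   # orientations of mu - P are kept
    for tile in p_sf:            # tiles removed together with P
        extended[tile] = "0"
    return extended

# Hypothetical data: a two-tile tableau of mu - P and one tile in P_sf.
tableau = {(1, 1): "1", (2, 2): "s"}
print(extend_by_zero(tableau, [(3, 3)]))
```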
We can then easily multiply the elements $D^{\mu - P -Q}_{\mu - P}$ with $D^{\mu - P}_\mu$ simply by 'stacking' the two oriented tableaux, without any need to apply a braid generator in between the two diagrams. An example is depicted in [\[zeroorientation\]](#zeroorientation){reference-type="ref" reference="zeroorientation"}. $$\begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {1}; \path(d2) node {1}; \path(c3) node {$\color{cyan}0$}; \path(d3) node {$\color{cyan}0$}; \path(d22) node {$\color{cyan}0$}; \path(d32) node {$\color{cyan}0$}; \path(d23) node {$\color{cyan}0$}; \path(d33) node {\color{magenta}$s$}; \end{tikzpicture} \end{minipage} \;\circ \; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) 
coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {1}; \path(d2) node {1}; \path(c3) node {$\color{cyan}s$}; \path(d3) node {$\color{cyan}s$}; \path(d22) node {$\color{cyan}s$}; \path(d32) node {$\color{cyan}f$}; \path(d23) node {$\color{cyan}f$}; \path(d33) node {$1$}; \end{tikzpicture} \end{minipage} \;= \; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] 
(c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node {\scalefont{0.8}1}; \path(c1)--++(-90:0.3) node {\scalefont{0.8}1}; \path(c2)--++(90:0.3) node {\scalefont{0.8}1}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}1}; \path(d2)--++(90:0.3) node {\scalefont{0.8}1}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}1}; \path(c3)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(c3)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}s$}; \path(d3)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d3)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}s$}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}s$}; \path(d23)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d23)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}f$}; \path(d32)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d32)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}f$}; \path(d33)--++(90:0.3) node {\scalefont{0.8}$\color{magenta}s$}; \path(d33)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \end{tikzpicture} \end{minipage} \;= \; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \path(0,0) coordinate (origin2); % % \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} %{ % \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); 
%%\path (origin2)--++(135:1*\i) coordinate (d\i); % } % % %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {\scalefont{0.8}1}; \path(c2) node {\scalefont{0.8}1}; \path(d2) node {\scalefont{0.8}1}; \path(c3) node {\scalefont{0.8}$\color{cyan}s$}; %\path node {\scalefont{0.8}$\color{cyan}s$}; \path(d22) node {\scalefont{0.8}$\color{cyan}s$}; \path(d3) node {\scalefont{0.8}$\color{cyan}s$}; \path(d23) node {\scalefont{0.8}$\color{cyan}f$}; \path(d32) node {\scalefont{0.8}$\color{cyan}f$}; \path(d33) node {\scalefont{0.8}$\color{magenta}s$}; \end{tikzpicture} \end{minipage}$$ $$\begin{minipage}{5cm} \begin{tikzpicture}[scale=1.1] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (4.25,1.5); \draw[magenta,line width=0.08cm](0,0)--++(90:3); \draw[darkgreen,line width=0.08cm](0.5,0)--++(90:3) coordinate (X); % \fill[darkgreen ] (X) circle (3pt); \draw[cyan,line width=0.08cm](1,0) --++(90:3) coordinate (X); % \fill[cyan ] (X) circle (3pt); \draw[orange,line width=0.08cm](1.5,0) --++(90:0.5) coordinate (X); \fill[orange ] (X) circle (3pt); \draw[magenta,line width=0.08cm](2,0) --++(90:0.5) coordinate (X); \fill[magenta 
] (X) circle (3pt); \draw[violet,line width=0.08cm](2.5,0) --++(90:0.5) coordinate (X); \fill[violet ] (X) circle (3pt); \draw[darkgreen,line width=0.08cm] (0.5,0.9)--++(0:1) to [out=0,in=90] (3,0); \draw[cyan,line width=0.08cm] (1,1.1)--++(0:1) to [out=0,in=90] (3.5,0); \draw[magenta,line width=0.08cm](4,0)--++(90:2) coordinate (X); \fill[magenta ] (X) circle (3pt); %\draw[magenta,line width=0.08cm](0,2.5)--++(0:1.5) to [out=0,in=90] (4,1.5); \draw[densely dotted, rounded corners] (-0.25,3) rectangle (4.25,1.5); \end{tikzpicture}\end{minipage} \;=\; \begin{minipage}{5cm} \begin{tikzpicture}[scale=1.1] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (4.25,3); \draw[magenta,line width=0.08cm](0,0)--++(90:3); \draw[darkgreen,line width=0.08cm](0.5,0)--++(90:3) coordinate (X); % \fill[darkgreen ] (X) circle (3pt); \draw[cyan,line width=0.08cm](1,0) --++(90:3) coordinate (X); % \fill[cyan ] (X) circle (3pt); \draw[orange,line width=0.08cm](1.5,0) --++(90:0.5) coordinate (X); \fill[orange ] (X) circle (3pt); \draw[magenta,line width=0.08cm](2,0) --++(90:0.5) coordinate (X); \fill[magenta ] (X) circle (3pt); \draw[violet,line width=0.08cm](2.5,0) --++(90:0.5) coordinate (X); \fill[violet ] (X) circle (3pt); \draw[darkgreen,line width=0.08cm] (0.5,0.9)--++(0:1) to [out=0,in=90] (3,0); \draw[cyan,line width=0.08cm] (1,1.1)--++(0:1) to [out=0,in=90] (3.5,0); \draw[magenta,line width=0.08cm](4,0)--++(90:0.5) coordinate (X); \fill[magenta ] (X) circle (3pt); %\draw[magenta,line width=0.08cm](0,2.5)--++(0:1.5) to [out=0,in=90] (4,1.5); \end{tikzpicture}\end{minipage}$$ Now, let $P \in {\rm DRem}(\mu)$ and $Q\in {\rm DRem}(\mu-P)$ be such that $P$ and $Q$ do not commute. We proceed as above (rewriting the tiles in $P_{sf}$ so as to have a $0$-orientation) and then we let each tile with an $s$-orientation fall down one place (from $[r,c]$ to $[r-1,c-1]$, say) and we leave all other tiles unchanged. 
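The 'falling' step above is equally mechanical. A minimal sketch, not part of the paper, assuming a hypothetical encoding of an oriented tableau as a dictionary from tile coordinates $[r,c]$ to orientation labels:

```python
# A minimal sketch (hypothetical encoding): an oriented tableau is a
# dictionary mapping tile coordinates (r, c) to orientation labels.
def drop_s_tiles(labelling):
    """Let each tile with an s-orientation fall down one place, from
    [r, c] to [r - 1, c - 1]; all other tiles are left unchanged.
    (In the combinatorial setting the fallen tiles land on free
    positions, so no collision handling is attempted here.)"""
    dropped = {}
    for (r, c), x in labelling.items():
        if x == "s":
            dropped[(r - 1, c - 1)] = x
        else:
            dropped[(r, c)] = x
    return dropped

# Hypothetical data: the s-oriented tile falls, the 1-oriented tile stays.
print(drop_s_tiles({(3, 3): "s", (1, 1): "1"}))
```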
An example is given in [\[zeroorientation2\]](#zeroorientation2){reference-type="ref" reference="zeroorientation2"}. When multiplying two elements in this way, we will represent the product by splitting each tile in half, with the label of the top half corresponding to the first element and the label of the bottom half corresponding to the second element. When considering the dual Soergel graphs $D_\lambda^\mu = (D_\mu^\lambda)^*$, we will represent them with the same oriented tableau as $D_\mu^\lambda$ except that we will replace each $s$-orientation (respectively $f$-orientation, or $sf$-orientation) by the symbol $s^*$ (respectively $f^*$, or $f^*s^*$). An example of a degree two basis element $D^\mu_\lambda D^\lambda_\nu$ is given in Figure [\[zeroorientation2\]](#zeroorientation2){reference-type="ref" reference="zeroorientation2"}. We will now restate some of the (simplest) relations in the Hecke category on the oriented tableaux. **Proposition 25**. *The relations depicted in [\[spotpic,spotpic2\]](#spotpic,spotpic2){reference-type="ref" reference="spotpic,spotpic2"} hold.* *$$\begin{minipage}{1.6cm}\begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$s$}; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:1 )--++(135:1 ) coordinate 
(X) node {\color{magenta}$s$}; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$s$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$0$}; \end{tikzpicture} \end{minipage} \qquad\qquad \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$\phantom{ _*}s^*$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$1$}; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:1 )--++(135:1 ) coordinate (X) node {\color{magenta}$\phantom{ _*} s^*$}; \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); 
\path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$0$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$\phantom{ _*}s^*$}; \end{tikzpicture} \end{minipage}$$* *$$\begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,1.5); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,0); \draw[magenta,line width=0.08cm](0.25,0) --++(90:0.75+0.35) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (3pt); \end{tikzpicture}\end{minipage} =\; % \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.75,1.5); %\draw[magenta,line width=0.08cm](0,0)--++(90:1.5); \draw[magenta,line width=0.08cm](0.25,0) --++(90:0.75) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (3pt); \end{tikzpicture}\end{minipage} =\; \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,1.5); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,0); \draw[magenta,line width=0.08cm](0.25,0) --++(90:0.45) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (3pt); \end{tikzpicture}\end{minipage} \qquad \qquad \begin{minipage}{1.8cm}\begin{tikzpicture}[scale=-1.5] \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,1.5); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,0); \draw[magenta,line width=0.08cm](0.25,0) --++(90:0.75+0.35) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (3pt); \end{tikzpicture}\end{minipage} =\; % \begin{minipage}{1.8cm}\begin{tikzpicture}[scale=-1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.75,1.5); %\draw[magenta,line width=0.08cm](0,0)--++(90:1.5); \draw[magenta,line width=0.08cm](0.25,0) --++(90:0.75) coordinate (spot); 
\draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (3pt); \end{tikzpicture}\end{minipage} =\; \begin{minipage}{1.8cm}\begin{tikzpicture}[scale=-1.5] \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,1.5); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,0); \draw[magenta,line width=0.08cm](0.25,0) --++(90:0.45) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (3pt); \end{tikzpicture}\end{minipage}$$* *$$\begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:1 )--++(135:1 ) coordinate (X) node {\color{magenta}$f$}; \end{scope} \begin{scope} \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:1 )--++(135:1 ) coordinate (X) node {\color{magenta}$1$}; \end{scope} \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$f$}; \end{scope} \begin{scope} \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very 
thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; \path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$1$}; \end{scope} \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$f$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$0$}; \end{scope} \begin{scope} \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; \path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$1$}; \end{scope} \end{tikzpicture} \end{minipage} \qquad \quad \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) 
--++(-135:2) --(0,0); \path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$f$}; \path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$0$}; \end{scope} \begin{scope} \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; \path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$s$}; \end{scope} \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); % \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); % % %\path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$f$}; % % %\path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$0$}; % \path(origin2)--++(45:1)--++(135:1) coordinate (X) node {\color{magenta}$sf$}; \end{scope} \begin{scope} \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); % \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:1)--++(135:1) coordinate (X) node {\color{magenta}$1$}; % %\path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$s$}; % % \end{scope} \end{tikzpicture} \end{minipage} \qquad 
\quad \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$_{\phantom{*}}s^*$}; \path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$f$}; %\path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; % % %\path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$1$}; \end{scope} \begin{scope} \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; \path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$1$}; \end{scope} \end{tikzpicture} \end{minipage} =\; \begin{minipage}{1.6cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); % \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(0,0); % % %\path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$f$}; % % %\path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$0$}; % \path(origin2)--++(45:1)--++(135:1) coordinate (X) node {\color{magenta}$0$}; \end{scope} \begin{scope} 
\path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); % \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) --++(-135:2) --(origin2); \path(origin2)--++(45:1)--++(135:1) coordinate (X) node {\color{magenta}$1$}; % %\path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$s$}; % % \end{scope} \end{tikzpicture} \end{minipage}%=\; % \begin{minipage}{1.6cm} %\begin{tikzpicture}[scale=0.5] % % %\begin{scope} % % % % \path(0,0) coordinate (origin2); % \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) % --++(-135:2) --(0,0); % % \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) % --++(-135:2) --(0,0); % % \draw[very thick,gray!50](0,0)--++(135:2)--++(0:4) ; % % \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) % --++(-135:2) --(0,0); % % % %\path(0,0)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$f$}; % % %\path(0,0)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$1$}; % % % % % % % % % % % %\end{scope} % % % % % % %\begin{scope} % % % \path(0,0)--++(-45:2)--++(-135:2) coordinate (origin2); % \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) % --++(-135:2) --(origin2); % % \clip(origin2)--++(135:2)--++(45:2)--++(-45:2) % --++(-135:2) --(origin2); % % \draw[very thick,gray!50](origin2)--++(135:2)--++(0:4) ; % % \draw[very thick](origin2)--++(135:2)--++(45:2)--++(-45:2) % --++(-135:2) --(origin2); % % % %\path(origin2)--++(45:0.6)--++(135:0.6) coordinate (X) node {\color{magenta}$1$}; % % %\path(origin2)--++(45:1.4)--++(135:1.4) coordinate (X) node {\color{magenta}$0$}; % % % % % %\end{scope} % % % % % % % % % % % % % % %\end{tikzpicture} %\end{minipage}$$* *$$\begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.75,1.5); 
\draw[magenta,line width=0.08cm](0,0)--++(90:1.5); \draw[magenta,line width=0.08cm](0.5,0) to [out=90,in=0] (0,0.75); \end{tikzpicture}\end{minipage} =\; \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.25,0.75); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,1.5); \draw[densely dotted, rounded corners] (0.25,0.75) rectangle (0.75,0); \draw[magenta,line width=0.08cm](0,0)--++(90:1.5); \draw[magenta,line width=0.08cm](0.5,0)--++(90:0.75) to [out=90,in=0] (0,0.75+0.4); \end{tikzpicture}\end{minipage} =\; \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,1.5) rectangle (0.25,0.75); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,0); \draw[densely dotted, rounded corners] (0.25,0.75) rectangle (0.75,1.5); \draw[magenta,line width=0.08cm](0,0)--++(90:1.5); \draw[magenta,line width=0.08cm](0.5,0) to [out=90,in=0] (0, 0.4); \end{tikzpicture}\end{minipage} \qquad\quad \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,1.5) rectangle (0.25,0.75); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,0); \draw[densely dotted, rounded corners] (0.25,0.75) rectangle (0.75,1.5); \draw[magenta,line width=0.08cm](0,0)--++(90:1.15) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (2.5pt); \draw[magenta,line width=0.08cm](0.5,0) to [out=90,in=0] (0, 0.4); \end{tikzpicture}\end{minipage} \;= \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.75,1.5); \draw[magenta,line width=0.08cm](0,0)--++(90:1.15) coordinate (spot); \draw[magenta,fill=magenta,line width=0.08cm] (spot) circle (2.5pt); \draw[magenta,line width=0.08cm](0.5,0) to [out=90,in=0] (0,0.55); \end{tikzpicture}\end{minipage} \qquad\quad \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, 
rounded corners] (-0.25,0) rectangle (0.25,0.75); \draw[densely dotted, rounded corners] (-0.25,0.75) rectangle (0.75,1.5); \draw[densely dotted, rounded corners] (0.25,0.75) rectangle (0.75,0); \draw[magenta,line width=0.08cm](0,0)--++(90:1.5); \draw[magenta,line width=0.08cm](0.5,0.4)--++(90:0.75-0.4) to [out=90,in=0] (0,0.75+0.4); \draw[magenta,fill=magenta,line width=0.08cm] (0.5,0.4) circle (2.5pt); \end{tikzpicture}\end{minipage} \;= \begin{minipage}{1.6cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (0.75,1.5); \draw[magenta,line width=0.08cm](0,0)--++(90:1.5) coordinate (spot); \end{tikzpicture}\end{minipage}$$*

*Proof.* These are all restatements of the relations given in the monoidal presentation of the Hecke category. For example, the last one is the dual fork-spot contraction. ◻

## Generators for the Hecke category {#generators-for-the-hecke-category}

We are now ready to prove that the algebra $\mathcal{H}_{m,n}$ is generated in degrees $0$ and $1$.

**Proposition 26**. *The algebra $\mathcal{H}_{m,n}$ is generated by the elements $$\{D^\lambda_\mu, D_\lambda^\mu \mid \lambda, \mu \in {\mathscr{P}_{m,n}}\text{ with $\lambda= \mu - P$ for some $P\in {\rm DRem}(\mu)$} \}\cup\{ D_\mu^\mu = {\sf 1}_\mu \mid \mu \in {\mathscr{P}_{m,n}}\}.$$*

*Proof.* It suffices to show that every element $D_\mu^\lambda$ for $\lambda, \mu\in {\mathscr{P}_{m,n}}$ with $(\lambda,\mu)$ a Dyck pair can be written as a product of these elements. We proceed by induction on $k = {\rm deg}(\underline{\mu}\lambda)$. For $k=0$ or $1$, there is nothing to prove. So assume that $k\geqslant 2$. We have $\mu\setminus \lambda = \sqcup_{i=1}^k P^i$, where each $P^i\in {\rm DRem}(\mu)$. Pick $P\in \{P^i \, : \, 1\leqslant i\leqslant k\}$ such that there is no $P^i$ covering $P$. Then we claim that $$D_\mu^\lambda= D_{\mu - P}^\lambda D_{\mu}^{\mu - P}.$$ The result then follows by induction.
To see this, note that the oriented tableau $\mathsf{t}_{\mu -P}^\lambda$, viewed as a tableau of shape $\mu$ as explained in the last subsection, is obtained from $\mathsf{t}_\mu^\lambda$ by setting the orientation of all tiles of $P_{sf}$ to $0$. Moreover, if there is some $P^i\neq P$ which does not commute with $P$, then each of the $s$-orientations on the tiles of $P^i_{sf}$ (which also belong to $P_{sf}$) in $\mathsf{t}_\mu^\lambda$ falls down one tile. Note that, by the assumption on $P$, these were labelled by $1$ in $\mathsf{t}_\mu^\lambda$. Now, using the relations given in [\[spotpic,spotpic2\]](#spotpic,spotpic2){reference-type="ref" reference="spotpic,spotpic2"}, we see that $D_{\mu - P}^\lambda D_{\mu}^{\mu - P} = D_\mu^\lambda$, as required. The dual element $D_\lambda^\mu = (D^\lambda_\mu)^*$ can then be written as the reverse product of the dual degree $1$ elements. An example is given in Figure [\[zeroorientation2\]](#zeroorientation2){reference-type="ref" reference="zeroorientation2"}.
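Unwinding the induction, the claim factorises $D_\mu^\lambda$ completely into degree-one generators. In the following sketch the chain $\mu_0,\dots,\mu_k$ is our own notation, introduced only for this illustration; at each stage one removes a Dyck path not covered by any of the remaining ones, exactly as in the choice of $P$ above.

```latex
% Set \mu_k = \mu and \mu_{i-1} = \mu_i - P_{(i)}, where P_{(i)} is a Dyck path
% of \mu_i \setminus \lambda not covered by any of the others; then \mu_0 = \lambda.
% Iterating the claim D_{\mu_i}^{\lambda} = D_{\mu_{i-1}}^{\lambda} D_{\mu_i}^{\mu_{i-1}} yields
\[
  D_\mu^\lambda
  = D_{\mu_1}^{\mu_0}\, D_{\mu_2}^{\mu_1} \cdots D_{\mu_k}^{\mu_{k-1}},
\]
% a product of k degree-one generators. Applying * and reversing the order gives
\[
  D_\lambda^\mu = (D_\mu^\lambda)^*
  = D_{\mu_{k-1}}^{\mu_k} \cdots D_{\mu_1}^{\mu_2}\, D_{\mu_0}^{\mu_1}.
\]
```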
◻ $$\begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {1}; \path(d2) node {1}; \path(c3) node {$\color{cyan}_{\phantom{*}}s^*$}; \path(d3) node {$\color{cyan}_{\phantom{*}}s^*$}; \path(d22) node {$\color{cyan}_{\phantom{*}}s^*$}; \path(d32) node {$\color{cyan}_{\phantom{*}}f^*$}; \path(d23) node {$\color{cyan}_{\phantom{*}}f^*$}; \path(d33) node {$1$}; \end{tikzpicture} \end{minipage}\; \circ \; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); 
\path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {1}; \path(c2) node {$\color{magenta}_{\phantom{*}}s^*$}; \path(d2) node {$\color{magenta}_{\phantom{*}}s^*$}; \path(c3) node {$\color{cyan}0$}; \path(d3) node {$\color{cyan}0$}; \path(d22) node {$\color{cyan}0$}; \path(d32) node {$\color{cyan}0$}; \path(d23) node {$\color{cyan}0$}; \path(d33) node {\color{magenta}$_{\phantom{*}}f^*$}; \end{tikzpicture} \end{minipage} \;= \; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path 
(origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node {\scalefont{0.8}1}; \path(c1)--++(-90:0.3) node {\scalefont{0.8}1}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}_{\phantom{*}}s^*$}; \path(c2)--++(90:0.3) node {\scalefont{0.8}1}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}_{\phantom{*}}s^*$}; \path(d2)--++(90:0.3) node {\scalefont{0.8}1}; \path(c3)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(c3)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}s^*$}; \path(d3)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d3)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}s^*$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}s^*$}; \path(d23)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d23)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}f^*$}; \path(d32)--++(-90:0.3) node {\scalefont{0.8}$\color{cyan}0$}; \path(d32)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}f^*$}; \path(d33)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}_{\phantom{*}}f^*$}; \path(d33)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \end{tikzpicture} \end{minipage} \; = \; \begin{minipage}{3.55cm} \begin{tikzpicture}[scale=0.82] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \clip(0,0)--++(135:3)--++(45:3)--++(-45:3) --(0,0); \path(0,0) coordinate (origin2); % % \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} %{ % \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %%\path 
(origin2)--++(135:1*\i) coordinate (d\i); % } % % %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1) node {\scalefont{0.8}1}; \path(c2) node {\scalefont{0.8}$\color{magenta}_{\phantom{*}}s^*$}; \path(d2) node {\scalefont{0.8}$\color{magenta}_{\phantom{*}}s^*$}; \path(c3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}s^*$}; %\path node {\scalefont{0.8}$\color{cyan}s$}; \path(d22) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}s^*$}; \path(d3) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}s^*$}; \path(d23) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}f^*$}; \path(d32) node {\scalefont{0.8}$\color{cyan}_{\phantom{*}}f^*$}; \path(d33) node {\scalefont{0.8}$\color{magenta}_{\phantom{*}}f^*$}; \end{tikzpicture} \end{minipage}$$ $$%\qquad \begin{minipage}{5.1cm} \begin{tikzpicture}[xscale=1.1,yscale=-1.1] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (4.25,1.5); \draw[magenta,line width=0.08cm](0,0)--++(90:3); \draw[darkgreen,line width=0.08cm](0.5,0)--++(90:2) coordinate (X); \fill[darkgreen ] (X) circle (3pt); \draw[cyan,line width=0.08cm](1,0) --++(90:2) coordinate (X); \fill[cyan ] (X) circle (3pt); 
\draw[orange,line width=0.08cm](1.5,0) --++(90:0.5) coordinate (X); \fill[orange ] (X) circle (3pt); \draw[magenta,line width=0.08cm](2,0) --++(90:0.5) coordinate (X); \fill[magenta ] (X) circle (3pt); \draw[violet,line width=0.08cm](2.5,0) --++(90:0.5) coordinate (X); \fill[violet ] (X) circle (3pt); \draw[darkgreen,line width=0.08cm] (0.5,0.9)--++(0:1) to [out=0,in=90] (3,0); \draw[cyan,line width=0.08cm] (1,1.1)--++(0:1) to [out=0,in=90] (3.5,0); \draw[magenta,line width=0.08cm](4,0)--++(90:1.5) ; \draw[magenta,line width=0.08cm](0,2.5)--++(0:1.5) to [out=0,in=90] (4,1.5); \draw[densely dotted, rounded corners] (-0.25,3) rectangle (4.25,1.5); \end{tikzpicture}\end{minipage} = %\qquad \begin{minipage}{5.1cm} \begin{tikzpicture}[xscale=1.1,yscale=-1.1] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (4.25,3); \draw[magenta,line width=0.08cm](0,0)--++(90:3); \draw[darkgreen,line width=0.08cm](0.5,0)--++(90:1.75) coordinate (X); \fill[darkgreen ] (X) circle (3pt); \draw[cyan,line width=0.08cm](1,0) --++(90:1.75) coordinate (X); \fill[cyan ] (X) circle (3pt); \draw[orange,line width=0.08cm](1.5,0) --++(90:0.5) coordinate (X); \fill[orange ] (X) circle (3pt); \draw[magenta,line width=0.08cm](2,0) --++(90:0.5) coordinate (X); \fill[magenta ] (X) circle (3pt); \draw[violet,line width=0.08cm](2.5,0) --++(90:0.5) coordinate (X); \fill[violet ] (X) circle (3pt); \draw[darkgreen,line width=0.08cm] (0.5,0.9)--++(0:1) to [out=0,in=90] (3,0); \draw[cyan,line width=0.08cm] (1,1.1)--++(0:1) to [out=0,in=90] (3.5,0); \draw[magenta,line width=0.08cm](4,0)--++(90:1.5) ; \draw[magenta,line width=0.08cm](0,2.35)--++(0:1.75) to [out=0,in=90] (4,1.5); %\draw[densely dotted, rounded corners] (-0.25,3) rectangle (4.25,1.5); \end{tikzpicture}\end{minipage}$$ # Dilation and contraction We now make a slight detour in order to construct dilation maps which allow us to interrelate partitions, weights, Dyck paths as well as Hecke categories and Khovanov arc algebras of different 
sizes.

**Definition 27**. *For $-m \leqslant k \leqslant n$ we define the dilation map $\varphi_k: \mathscr{P}_{m,n} \longrightarrow \mathscr{P}_{m+1,n+1}$ on weights by setting $\varphi_k (\lambda)$, for $\lambda\in \mathscr{P}_{m,n}$, to be the weight obtained from $\lambda$ by moving any label in position $x<k$ to $x-1$, any label in position $x>k$ to $x+1$, and labelling the vertices $k-\frac{1}{2}$ and $k+\frac{1}{2}$ by $\vee$ and $\wedge$, respectively.*

The following lemmas follow directly from the definition.

**Lemma 28**. *The map $\varphi_k$ is injective, with image the set $\mathscr{P}^k_{m+1,n+1}$ consisting of all partitions with a removable node of content $k$. We call such partitions contractible at $k$.*

**Lemma 29**. *Let $\lambda, \mu \in {\mathscr{P}_{m,n}}$ and let $-m \leqslant k \leqslant n$. Then $(\lambda,\mu)$ is a Dyck pair of degree $j$ if and only if $(\varphi_k(\lambda),\varphi_k(\mu))$ is a Dyck pair of degree $j$. In particular, if $\lambda= \mu - P$ for some $P\in {\rm DRem}(\mu)$, then $\varphi_k(\lambda) = \varphi_k(\mu) - Q$ where $Q\in {\rm DRem}(\varphi_k(\mu))$ satisfies*

--------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------
*$|Q| = |P|+2$ and $\underline{\sf cont}(Q)=\underline{\sf cont}(P) \cup \{ {\sf first}(P) - 1, {\sf last}(P)+1\}$* *if $k\in \underline{\sf cont}(P)$,*

*$|Q| = |P|$ and $\underline{\sf cont}(Q) = \{ l+1 \, : \, l\in \underline{\sf cont}(P)\}$* *if $k<l$ for all $l\in \underline{\sf cont}(P)$,*

*$|Q| = |P|$ and $\underline{\sf cont}(Q) = \{ l-1 \, : \, l\in \underline{\sf cont}(P)\}$* *if $k>l$ for all $l\in \underline{\sf cont}(P)$.*
--------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------

*We write $\varphi_k(P):=Q$.*

$$\begin{tikzpicture}[scale=0.5] \path(0,0)
coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:5.5)--++(45:3)--++(135:3)--++(90:0.5) coordinate (X1) node {$\vee$}; \path(X1)--++(45:0.5*1)--++(-45:0.5*1) coordinate (X2) node {$\vee$}; \path(X1)--++(45:0.5*2)--++(-45:0.5*2)--++(-90:0.4) coordinate (X3) node {$\wedge$}; \path(X1)--++(45:0.5*3)--++(-45:0.5*3) coordinate (X4) node {$\vee$}; \path(X1)--++(45:0.5*4)--++(-45:0.5*4) coordinate (X5) node {$\vee$}; \path(X1)--++(45:0.5*5)--++(-45:0.5*5)--++(-90:0.4) coordinate (X6) node {$\wedge$}; \path(X1)--++(45:0.5*6)--++(-45:0.5*6) coordinate (X7) node {$\vee$}; \path(X1)--++(45:0.5*7)--++(-45:0.5*7)--++(-90:0.4) coordinate (X8) node {$\wedge$}; \path(X1)--++(45:0.5*8)--++(-45:0.5*8)--++(-90:0.4) coordinate (X9) node {$\wedge$}; \path(X1)--++(45:0.5*9)--++(-45:0.5*9) coordinate (X10) node {$\vee$}; \path(X1)--++(45:0.5*10)--++(-45:0.5*10) coordinate (X11) node {$\vee$}; \path(X1)--++(45:0.5*11)--++(-45:0.5*11)--++(-90:0.4) coordinate (X12) node {$\wedge$}; \path(X1)--++(45:0.5*12)--++(-45:0.5*12)--++(-90:0.4) coordinate (X13) node {$\wedge$}; \path(X1)--++(-90:0.2)--++(180:0.2) coordinate (start); \path(X13)--++(90:0.2)--++(0:0.2) coordinate (end); \draw(start)--(end); \foreach \i in {1,2,4,5,7,10,11} { \path(X\i)--++(-90:0.15) coordinate (X\i); } \foreach \i in {3,6,8,9,12,13} { \path(X\i)--++(90:0.175) coordinate (X\i); } \foreach \i in {1,2,...,13} { \path(X\i)--++(45:0.5*0.5)--++(-45:0.5*0.5) --++(-90:0.75) coordinate (Y\i); } \draw(X1)--++(-90:0.8); \draw[magenta](X2) to [out=-90,in=180] (Y2) to [out=0,in=-90] (X3); \draw(X5) to [out=-90,in=180] (Y5) to [out=0,in=-90] (X6); \draw(X7) to [out=-90,in=180] (Y7) to [out=0,in=-90] (X8); \draw(X11) to [out=-90,in=180] (Y11) to [out=0,in=-90] (X12); \path(Y6)--++(-90:0.3) coordinate (Y6); \draw[violet](X4) to [out=-90,in=180] (Y6) to [out=0,in=-90] (X9); \path(Y11)--++(-90:0.3) coordinate (Y11); \draw[darkgreen](X10) to [out=-90,in=180] (Y11) to [out=0,in=-90] (X13); \draw[very 
thick] (origin3)--++(135:6)--++(45:2) --++(-45:1) --++(45:2) --++(-45:1) --++(45:1) --++(-45:2) --++(45:2) --++(-45:2)--(origin3) ; \clip (origin3) (origin3)--++(135:6)--++(45:2) --++(-45:1) --++(45:2) --++(-45:1) --++(45:1) --++(-45:2) --++(45:2) --++(-45:2)--(origin3) ; \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:0.5*\i) coordinate (c\i); \path (origin3)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:0.5) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:0.5) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \path(origin3)--++(45:0.5)--++(135:5.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(X)--++(-45:1) coordinate (X) ; \path (X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \path(X)--++(-45:1) coordinate (X) ; \path (X)--++(45:1) coordinate (X) ; \fill[darkgreen](X) circle (4pt); \draw[ thick, darkgreen](X)--++(45:1) coordinate (X) ; \fill[darkgreen](X) circle (4pt); \draw[ thick, darkgreen](X)--++(-45:1) coordinate (X) ; \fill[darkgreen](X) circle (4pt); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.5] \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:5.5)--++(45:3)--++(135:3) --++(90:0.5) coordinate (X1) node {$\vee$}; \path(X1)--++(45:0.5*1)--++(-45:0.5*1) coordinate (X2) node {$\vee$}; \path(X1)--++(45:0.5*2)--++(-45:0.5*2)--++(-90:0.4) coordinate (X3) node {$\wedge$}; 
\path(X1)--++(45:0.5*3)--++(-45:0.5*3) coordinate (X4) node {$\vee$}; \path(X1)--++(45:0.5*4)--++(-45:0.5*4) coordinate (X5) node {$\vee$}; \path(X1)--++(45:0.5*5)--++(-45:0.5*5)--++(-90:0.4) coordinate (X6) node {$\wedge$}; \path(X1)--++(45:0.5*6)--++(-45:0.5*6) coordinate (X7) node {$\vee$}; \path(X1)--++(45:0.5*7)--++(-45:0.5*7)--++(-90:0.4) --++(45:1)--++(-45:1) coordinate (X8) node {$\wedge$}; \path(X1)--++(45:0.5*8)--++(-45:0.5*8)--++(-90:0.4) --++(45:1)--++(-45:1)coordinate (X9) node {$\wedge$}; \path(X1)--++(45:0.5*9)--++(-45:0.5*9) --++(45:1)--++(-45:1) coordinate (X10) node {$\vee$}; \path(X1)--++(45:0.5*10)--++(-45:0.5*10) --++(45:1)--++(-45:1) coordinate (X11) node {$\vee$}; \path(X1)--++(45:0.5*11)--++(-45:0.5*11)--++(-90:0.4) --++(45:1)--++(-45:1) coordinate (X12) node {$\wedge$}; \path(X1)--++(45:0.5*12)--++(-45:0.5*12)--++(-90:0.4) --++(45:1)--++(-45:1) coordinate (X13) node {$\wedge$}; \path(X1)--++(-90:0.2)--++(180:0.2) coordinate (start); \path(X13)--++(90:0.2)--++(0:0.2) coordinate (end); \draw(start)--(end); \foreach \i in {1,2,4,5,7,10,11} { \path(X\i)--++(-90:0.15) coordinate (X\i); } \foreach \i in {3,6,8,9,12,13} { \path(X\i)--++(90:0.175) coordinate (X\i); } \foreach \i in {1,2,...,13} { \path(X\i)--++(45:0.5*0.5)--++(-45:0.5*0.5) --++(-90:0.75) coordinate (Y\i); } \draw(X1)--++(-90:0.8); \draw[magenta](X2) to [out=-90,in=180] (Y2) to [out=0,in=-90] (X3); \path(X1)--++(45:0.5*7)--++(-45:0.5*7) --++(-90:-0.2) coordinate (A) node {\color{cyan}$\vee$}; \path(X1)--++(45:0.5*8)--++(-45:0.5*8)--++(-90:0.4)--++(90:0.2) coordinate (B) node {\color{cyan}$\wedge$}; \path(A) --++(-90:0.2) coordinate (A); \path(B) --++(90:0.2) coordinate (B); \path(Y7)--++(45:0.5)--++(-45:0.5) coordinate (Y7); \draw[cyan](A) to [out=-90,in=180] (Y7) to [out=0,in=-90] (B); \path(Y7)--++(90:-0.2) coordinate (Y7); \draw(X7) to [out=-90,in=180] (Y7) to [out=0,in=-90] (X8); \draw(X5) to [out=-90,in=180] (Y5) to [out=0,in=-90] (X6); \draw(X11) to [out=-90,in=180] (Y11) to 
[out=0,in=-90] (X12); \path(Y6)--++(-90:0.3) coordinate (Y6); \draw[violet](X4) to [out=-90,in=180] (Y6) to [out=0,in=-90] (X9); \path(Y11)--++(-90:0.3) coordinate (Y11); \draw[darkgreen](X10) to [out=-90,in=180] (Y11) to [out=0,in=-90] (X13); \draw[very thick] (origin3)--++(135:6)--++(45:2) --++(-45:1) --++(45:2) --++(-45:1) --++(45:2) --++(-45:3) --++(45:2) --++(-45:2)--++(-135:8)--(origin3) ; \clip (origin3)--++(135:6)--++(45:2) --++(-45:1) --++(45:2) --++(-45:1) --++(45:2) --++(-45:3) --++(45:2) --++(-45:2)--++(-135:8)--(origin3) ; \foreach \i in {-1,0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:0.5*\i) coordinate (c\i); \path (origin3)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {-1,0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:1) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:1) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \path(origin3)--++(45:0.5)--++(135:5.5) coordinate (X); %\fill[magenta](X) circle (4pt); \path(X)--++(45:1) coordinate (X) ; \fill[magenta](X) circle (4pt); \path(X)--++(-45:1) coordinate (X) ; \path (X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \draw[ thick, violet](X)--++(-45:1) coordinate (X) ; \fill[violet](X) circle (4pt); \path(X)--++(-45:1) coordinate (X) ; \path (X)--++(45:1) coordinate (X) ; \fill[darkgreen](X) circle (4pt); \draw[ thick, darkgreen](X)--++(45:1) coordinate (X) ; \fill[darkgreen](X) circle (4pt); \draw[ thick, 
darkgreen](X)--++(-45:1) coordinate (X) ; \fill[darkgreen](X) circle (4pt); \end{tikzpicture}$$ We now extend the dilation map $\varphi_k$ to dilation homomorphisms for the Hecke categories and the arc algebras. We will use the same notation for all three dilation maps. We start with the Hecke category. **Theorem 30**. *Let $\Bbbk$ be a commutative integral domain and let $i \in \Bbbk$ be a square root of $-1$. For $-m\leqslant k\leqslant n$, we define the map $\varphi_k : \mathcal{H}_{m,n }\to \mathcal{H}_{ m+1,n+1 }$ on the generators as follows. For $\lambda, \mu \in {\mathscr{P}_{m,n}}$ with $\lambda= \mu - P$ for some $P\in {\rm DRem}(\mu)$ we have $\varphi_k({\sf 1}_\mu)= {\sf 1}_{{\varphi_k(\mu)}},$ and $$\varphi_k(D^\lambda_\mu)= \left\{ \begin{array}{ll} D^{\varphi_k(\lambda)}_{\varphi_k(\mu)} & \mbox{if $k\notin \underline{\sf cont}(P)$} \\ (-i) \cdot D^{\varphi_k(\lambda)}_{\varphi_k(\mu)} & \mbox{ if $k\in \underline{\sf cont}(P)$ and $k$ labels a spot tile in $P_{sf}$}\\ i \cdot D^{\varphi_k(\lambda)}_{\varphi_k(\mu)} & \mbox{ if $k\in \underline{\sf cont}(P)$ and $k$ labels a fork tile in $P_{sf}$} \end{array} \right.$$ and $\varphi_k(D_\lambda^\mu) = \varphi_k((D_\mu^\lambda)^*) = (\varphi_k(D_\mu^\lambda))^*$. Then $\varphi_k$ extends to an injective homomorphism of graded $\Bbbk$-algebras.* *Proof.* The map $\varphi_k$ is defined on the monoidal (spot, fork, braid and idempotent) generators of $\mathcal{H}_{m,n}$ in [@compan Section 5.3] (note that in that paper we use the notation $\varphi_{{{\color{cyan}\bm\tau}}}$ where ${{{\color{cyan}\bm\tau}}}= s_k$), where it is proven to be an injective homomorphism of graded $\Bbbk$-algebras. Rewriting this in terms of the generators $D^\lambda_\mu$ and $D^{\varphi_k(\lambda)}_{\varphi_k(\mu)}$ we deduce the result. ◻ We now define the dilation homomorphisms for $\mathcal{K}_{m,n }$. **Theorem 31**. *Let $\Bbbk$ be a commutative integral domain. 
For $-m\leqslant k\leqslant n$, the map $\varphi_k : \mathcal{K}_{m,n }\to \mathcal{K}_{m+1,n+1 }$ defined on arc diagrams by $$\varphi_k( {\underline{\mu}\lambda\overline{\nu}} )= \underline{\varphi_k(\mu)} \varphi_k(\lambda) \overline{\varphi_k(\nu)}$$ extends to an injective homomorphism of graded $\Bbbk$-algebras.* *Proof.* Consider a pair of arc diagrams for which the $k\pm\tfrac{1}{2}$ vertices form anti-clockwise oriented circles. We may perform the steps of the surgery procedure in an order for which this is the final step. The anti-clockwise circle acts as an idempotent, and so the result follows. ◻ These dilation homomorphisms will allow us to prove results by induction. The base cases for the induction will be the following: **Definition 32**. *Let $\lambda, \mu \in {\mathscr{P}_{m,n}}$ with $\lambda=\mu-P$ for $P\in {\rm DRem}(\mu)$. We say that the pair $(\lambda,\mu)$ is incontractible if there does not exist $k\in {\mathbb Z}$ such that $\lambda, \mu\in \mathscr{P}^k_{m,n}$.* **Remark 33**. *By definition, the pair $(\lambda,\mu)$ is incontractible if and only if $\mu=(c^r)$ is a rectangular partition and $\lambda=(c^{r-1},c-1)$, so that $b(P)=1$.* # The quiver and relations for $\mathcal{H}_{m,n }$ We now provide a presentation for $\mathcal{H}_{m,n }$ over an arbitrary integral domain $\Bbbk$. Before stating the presentation as a theorem, we first recall (and slightly rephrase) [@compan Proposition 4.18], together with a lemma which will be used in the proof. **Proposition 34**. *[@compan Proposition 4.18] [\[generationofspots\]]{#generationofspots label="generationofspots"} Let $\lambda\in {\mathscr{P}_{m,n}}$ and ${\color{cyan}[r,c]} \in {\rm Add}(\lambda)$ be such that $s_{r-c} = {{{\color{cyan}\bm\tau}}}$. 
Then we have that $$\begin{aligned} \label{notChook2} {\sf 1}_{\lambda} \otimes {\sf bar}({{{\color{cyan}\bm\tau}}}) = \sum_{[x,y]} (-1) ^{b \langle x,y \rangle_{\lambda} } D^{\lambda}_{\lambda- \langle x,y \rangle_{\lambda} } D^{\lambda- \langle x,y \rangle_{\lambda} }_{\lambda}, \end{aligned}$$ where the sum is taken over all $[x,y]\in \lambda$ for which either $[x,y]=[x,c]$ with $x<r$ or $[x,y]=[r,y]$ with $y<c$, and $\langle x,y \rangle_{\lambda}\in {\rm DRem}(\lambda)$.* *Proof.* Given $\lambda\in {\mathscr{P}_{m,n}}$ and $[x,y]\in \lambda$ such that $\mathsf{t}_\lambda([x,y])=k$ and ${{{\color{magenta}\bm\sigma}}}= s_{x-y}$, we set $${\sf gap}(\mathsf{t}_\lambda-[x,y]) = {\sf 1}_{\mathsf{t}_\lambda{\downarrow}_{\{1,\dots,k-1\}}} \otimes {\sf spot}^{{{{\color{magenta}\bm\sigma}}}}_\emptyset {\sf spot}_{{{{\color{magenta}\bm\sigma}}}}^\emptyset \otimes {\sf 1}_{\mathsf{t}_\lambda{\downarrow}_{\{k+1,\dots,\ell(\lambda)\}}}.$$ It was shown in [@compan Proposition 4.18] that $$\begin{aligned} \label{notChook} {\sf 1}_{ \lambda} \otimes {\sf bar}({{{\color{cyan}\bm\tau}}}) = -\!\! \sum_{[x,y]} {\sf gap}(\mathsf{t}_\lambda- [x,y]) \end{aligned}$$ where the sum is taken over all $[x,y]\in \lambda$ for which either $[x,y]=[x,c]$ with $x<r$ or $[x,y]=[r,y]$ with $y<c$. For each term in the sum, we now apply the null-braid relation $b\langle x,y \rangle_\lambda-1$ times to get $${\sf gap}(\mathsf{t}_\lambda- [x,y]) = (-1)^{b\langle x,y \rangle_\lambda-1} D^{\lambda}_{\lambda- \langle x,y \rangle_{\lambda} } D^{\lambda- \langle x,y \rangle_{\lambda} }_{\lambda}.$$ If $\langle x, y \rangle_\lambda\in {\rm DRem}(\lambda)$, then this is a basis element and we are done. If $\langle x, y \rangle_\lambda\notin {\rm DRem}(\lambda)$, write $\langle x, y \rangle_\lambda= [x_1, y_1], \ldots , [x_s, y_s]$. Then either $[x_1, y_1+1]\in \lambda$ and $[x_1 -1, y_1+1]\notin \lambda$, or $[x_s +1, y_s]\in \lambda$ and $[x_s +1, y_s -1]\notin \lambda$. 
In both cases, we can apply the cyclotomic relations to deduce that the corresponding element in the sum is zero. ◻ $$\begin{tikzpicture}[scale=0.7] \clip(-6,0) rectangle (3,11); \path(0,0) coordinate (origin2); \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path (origin3)--++(135:5.5)--++(45:4.5) node {$\bullet$}; \path(origin3)--++(135:5.5)--++(45:3)--++(135:3)--++(90:0.5) coordinate (X1) node {\color{violet}$\vee$}; \path(X1)--++(45:0.5*1)--++(-45:0.5*1) coordinate (X2) node {\color{orange}$\vee$}; \path(X1)--++(45:0.5*2)--++(-45:0.5*2) coordinate (X3) node {\color{darkgreen}$\vee$}; \path(X1)--++(45:0.5*3)--++(-45:0.5*3) coordinate (X4) node {\color{gray}$\vee$}; \path(X1)--++(45:0.5*4)--++(-45:0.5*4)--++(-90:0.4) coordinate (X5) node {\color{gray}$\wedge$}; \path(X1)--++(45:0.5*5)--++(-45:0.5*5) coordinate (X6) node {\color{brown}$\vee$}; \path(X1)--++(45:0.5*6)--++(-45:0.5*6) --++(-90:0.4) coordinate (X7) node {\color{brown}$\wedge$}; \path(X1)--++(45:0.5*7)--++(-45:0.5*7)--++(-90:0.4) coordinate (X8) node {\color{darkgreen}$\wedge$}; \path(X1)--++(45:0.5*8)--++(-45:0.5*8)--++(-90:0.4) coordinate (X9) node {\color{orange}$\wedge$}; \path(X1)--++(45:0.5*9)--++(-45:0.5*9) --++(-90:0.4) coordinate (X10) node {\color{violet}$\wedge$}; \path(X1)--++(45:0.5*10)--++(-45:0.5*10) --++(-90:0.4) coordinate (X11) node {$\wedge$}; \path(X1)--++(45:0.5*11)--++(-45:0.5*11)--++(-90:0.4) coordinate (X12) node {$\wedge$}; \path(X1)--++(-90:0.2)--++(180:0.2) coordinate (start); \path(X12)--++(90:0.2)--++(0:0.2) coordinate (end); \draw(start)--(end); \foreach \i in {1,2,3,4,6} { \path(X\i)--++(-90:0.15) coordinate (X\i); } \foreach \i in {5,7,8,9,10,11} { \path(X\i)--++(90:0.175) coordinate (X\i); } \foreach \i in {1,2,...,13} { \path(X\i)--++(45:0.5*0.5)--++(-45:0.5*0.5) --++(-90:0.75) coordinate (Y\i); } \foreach \i in {1,2,...,11} { \path(X\i)--++(45:0.5*0.5)--++(-45:0.5*0.5) --++(90:0.35) coordinate (Z\i); } % \draw (Z4) node 
{\color{gray}\scalefont{0.75}$Q_{\text{--}1}$}; % \draw (Z6) node {\color{brown}\scalefont{0.75}$Q_{ 1}$}; \foreach \i in {1,2,...,11} { \path(Z6) --++(45:0.5*0.5*\i)--++(-45:0.5*0.5*\i) coordinate (Z\i); } \foreach \i in {1,2,...,11} { \path(Z6) --++(-135:0.5*0.5*\i)--++(135:0.5*0.5*\i) coordinate (Z\i); } \path(origin3)--++(135:6) coordinate (X); \fill[violet!70] (X)--++(45:1) coordinate (X) --++(-45:1)--++(-135:1); \fill[orange] (X)--++(45:1) coordinate (X) --++(-45:1)--++(-135:1); \fill[darkgreen] (X)--++(45:1) coordinate (X) --++(-45:1)--++(-135:1); \fill[gray] (X)--++(45:1) coordinate (X) --++(-45:1)--++(-135:1); \path(X)--++(-45:1) coordinate (X); \fill[brown] (X)--++(45:1) --++(-45:1)--++(-135:1) coordinate (X); \fill[darkgreen] (X)--++(45:1) --++(-45:1)--++(-135:1) coordinate (X); \fill[orange] (X)--++(45:1) --++(-45:1)--++(-135:1) coordinate (X); \fill[violet!70] (X)--++(45:1) --++(-45:1)--++(-135:1) coordinate (X); \draw[very thick] (origin3)--++(135:6)--++(45:4) --++(-45:1) --++(45:1) --++(-45:6)--++(-135:5)--++(135:1) ; \clip (origin3)--++(135:6)--++(45:4) --++(-45:1) --++(45:1) --++(-45:6)--++(-135:5)--++(135:1) ; \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:1) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:1) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \end{tikzpicture} \qquad % % % % % % % \begin{tikzpicture}[scale=0.7] \clip(-6,0) rectangle (3,11); \path(0,0) coordinate (origin2); \path(0,0) coordinate (origin2); \path(0,0)--++(135:2) coordinate (origin3); \path(origin3)--++(135:5.5)--++(45:3)--++(135:3)--++(90:0.5) coordinate (X1) node {\color{violet}$\vee$}; \path(X1)--++(45:0.5*1)--++(-45:0.5*1) coordinate (X2) node {\color{orange}$\vee$}; \path(X1)--++(45:0.5*2)--++(-45:0.5*2) coordinate (X3) node {\color{darkgreen}$\vee$}; \path(X1)--++(45:0.5*3)--++(-45:0.5*3) coordinate (X4) 
node {\color{gray}$\vee$}; \path(X1)--++(45:0.5*4)--++(-45:0.5*4)--++(-90:0.4) coordinate (X5) node {\color{gray}$\wedge$}; \path(X1)--++(45:0.5*5)--++(-45:0.5*5) coordinate (X6) node {\color{brown}$\vee$}; \path(X1)--++(45:0.5*6)--++(-45:0.5*6) --++(-90:0.4) coordinate (X7) node {\color{brown}$\wedge$}; \path(X1)--++(45:0.5*7)--++(-45:0.5*7)--++(-90:0.4) coordinate (X8) node {\color{darkgreen}$\wedge$}; \path(X1)--++(45:0.5*8)--++(-45:0.5*8)--++(-90:0.4) coordinate (X9) node {\color{orange}$\wedge$}; \path(X1)--++(45:0.5*9)--++(-45:0.5*9) --++(-90:0.4) coordinate (X10) node {\color{violet}$\wedge$}; \path(X1)--++(45:0.5*10)--++(-45:0.5*10) --++(-90:0.4) coordinate (X11) node {$\wedge$}; \path(X1)--++(45:0.5*11)--++(-45:0.5*11)--++(-90:0.4) coordinate (X12) node {$\wedge$}; \path(X1)--++(-90:0.2)--++(180:0.2) coordinate (start); \path(X12)--++(90:0.2)--++(0:0.2) coordinate (end); \draw(start)--(end); \foreach \i in {1,2,3,4,6} { \path(X\i)--++(-90:0.15) coordinate (X\i); } \foreach \i in {5,7,8,9,10,11} { \path(X\i)--++(90:0.175) coordinate (X\i); } \foreach \i in {1,2,...,13} { \path(X\i)--++(45:0.5*0.5)--++(-45:0.5*0.5) --++(-90:0.75) coordinate (Y\i); } \foreach \i in {1,2,...,11} { \path(X\i)--++(45:0.5*0.5)--++(-45:0.5*0.5) --++(90:0.35) coordinate (Z\i); } % \draw (Z4) node {\color{gray}\scalefont{0.75}$Q_{\text{--}1}$}; % \draw (Z6) node {\color{brown}\scalefont{0.75}$Q_{ 1}$}; \foreach \i in {1,2,...,11} { \path(Z6) --++(45:0.5*0.5*\i)--++(-45:0.5*0.5*\i) coordinate (Z\i); } \draw (Z3) node {\color{darkgreen}\scalefont{0.75}$Q_{2}$}; \draw (Z5) node {\color{orange}\scalefont{0.75}$Q_{3}$}; \path (Z6) --++(45:0.5*0.5)--++(-45:0.5*0.5) node {\color{violet}\scalefont{0.75}$Q_{4}$}; \foreach \i in {1,2,...,11} { \path(Z6) --++(-135:0.5*0.5*\i)--++(135:0.5*0.5*\i) coordinate (Z\i); } \draw (Z5) node {\color{brown}\scalefont{0.75}$Q_{ 1}$}; \path (Z5)--++(135:0.5*0.5)--++(-135:0.5*0.5) --++(135:0.5*0.5)--++(-135:0.5*0.5) --++(135:0.5*0.5)--++(-135:0.5*0.5) 
--++(135:0.5*0.5)--++(-135:0.5*0.5) node {\color{gray}\scalefont{0.75}$Q_{\text{--}1}$}; \draw(X1)--++(-90:0.8); \draw[violet](X1) --++(-90:3); \draw[orange](X2) --++(-90:2); \draw[darkgreen](X3) --++(-90:1.5); % \draw(X5) to [out=-90,in=180] (Y5) % to [out=0,in=-90] (X6); % \draw(X7) to [out=-90,in=180] (Y7) % to [out=0,in=-90] (X8); % \draw(X11) to [out=-90,in=180] (Y11) % to [out=0,in=-90] (X12); \draw[gray](X4) --++(-90:1); \draw[gray](X5) --++(-90:1); \draw[brown](X6) --++(-90:1); \draw[brown](X7) --++(-90:1); \draw[violet](X10) --++(-90:3); \draw[orange](X9) --++(-90:2); \draw[darkgreen](X8) --++(-90:1.5); \draw (X11) --++(-90:2) coordinate(X11); \draw (X12)--++(90:0.12) --++(-90:2)coordinate(X12); ; \draw[densely dashed] (X11) --++(-90:1.5) coordinate(X11); \draw[densely dashed] (X12) --++(-90:1.5)coordinate(X12); ; \path(origin3)--++(135:6)--++(45:0.5) coordinate (start); \path(start) --++(45:0.5)--++(-45:0.5) coordinate (coord1); \draw[violet] (start) to [out=-90,in=-90] (coord1) (start) --++(90:2); \path(coord1) --++(45:0.5)--++(-45:0.5) coordinate (coord2); \draw[violet] (coord1) to [out=90,in=90] (coord2); \path(coord2) --++(45:0.5)--++(-45:0.5) coordinate (coord3); \draw[violet] (coord2) to [out=-90,in=-90] (coord3); \path(coord3) --++(45:0.5)--++(-45:0.5) coordinate (coord4); \draw[violet] (coord3) to [out=90,in=90] (coord4); \path(coord4) --++(45:0.5)--++(-45:0.5) coordinate (coord5); \draw[violet] (coord4) to [out=-90,in=-90] (coord5); \path(coord5) --++(45:0.5)--++(-45:0.5) coordinate (coord6); \draw[violet] (coord5) to [out=90,in=90] (coord6); \path(coord6) --++(45:0.5)--++(-45:0.5) coordinate (coord7); \draw[violet] (coord6) to [out=-90,in=-90] (coord7); \path(coord7) --++(45:0.5)--++(-45:0.5) coordinate (coord8); \draw[violet] (coord7) to [out=90,in=90] (coord8); \path(coord8) --++(45:0.5)--++(-45:0.5) coordinate (coord9); \draw[violet] (coord8) to [out=-90,in=-90] (coord9)--++(90:2); \path(origin3)--++(135:6)--++(45:1.5) coordinate (start); 
\path(start) --++(45:0.5)--++(-45:0.5) coordinate (coord1); \draw[orange] (start) to [out=-90,in=-90] (coord1) (start) --++(90:2); \path(coord1) --++(45:0.5)--++(-45:0.5) coordinate (coord2); \draw[orange] (coord1) to [out=90,in=90] (coord2); \path(coord2) --++(45:0.5)--++(-45:0.5) coordinate (coord3); \draw[orange] (coord2) to [out=-90,in=-90] (coord3); \path(coord3) --++(45:0.5)--++(-45:0.5) coordinate (coord4); \draw[orange] (coord3) to [out=90,in=90] (coord4); \path(coord4) --++(45:0.5)--++(-45:0.5) coordinate (coord5); \draw[orange] (coord4) to [out=-90,in=-90] (coord5); \path(coord5) --++(45:0.5)--++(-45:0.5) coordinate (coord6); \draw[orange] (coord5) to [out=90,in=90] (coord6); \path(coord6) --++(45:0.5)--++(-45:0.5) coordinate (coord7); \draw[orange] (coord6) to [out=-90,in=-90] (coord7) --++(90:2); \path(origin3)--++(135:6)--++(45:2.5) coordinate (start); \path(start) --++(45:0.5)--++(-45:0.5) coordinate (coord1); \draw[darkgreen] (start) to [out=-90,in=-90] (coord1) (start) --++(90:2); \path(coord1) --++(45:0.5)--++(-45:0.5) coordinate (coord2); \draw[darkgreen] (coord1) to [out=90,in=90] (coord2); \path(coord2) --++(45:0.5)--++(-45:0.5) coordinate (coord3); \draw[darkgreen] (coord2) to [out=-90,in=-90] (coord3); \path(coord3) --++(45:0.5)--++(-45:0.5) coordinate (coord4); \draw[darkgreen] (coord3) to [out=90,in=90] (coord4); \path(coord4) --++(45:0.5)--++(-45:0.5) coordinate (coord5); \draw[darkgreen] (coord4) to [out=-90,in=-90] (coord5) --++(90:2); \path(origin3)--++(135:6)--++(45:3.5) coordinate (start); \path(start) --++(45:0.5)--++(-45:0.5) coordinate (coord1); \draw[gray] (start) to [out=-90,in=-90] (coord1)--++(90:1); \draw[gray] (start) --++(90:1); \path(origin3)--++(135:5)--++(45:4.5) coordinate (start); \path(start) --++(45:0.5)--++(-45:0.5) coordinate (coord1); \draw[brown] (start) to [out=-90,in=-90] (coord1)--++(90:1); \draw[brown] (start) --++(90:1); \path (origin3)--++(135:5.5)--++(45:4.5) node {$\bullet$}; \draw[very thick] 
(origin3)--++(135:6)--++(45:4) --++(-45:1) --++(45:1) --++(-45:6)--++(-135:5)--++(135:1) ; \clip (origin3)--++(135:6)--++(45:4) --++(-45:1) --++(45:1) --++(-45:6)--++(-135:5)--++(135:1) ; \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:0.5*\i) coordinate (c\i); \path (origin3)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8} { \path (origin3)--++(45:1*\i) coordinate (c\i); \path (c\i)--++(-45:1) coordinate (c\i); \path (origin3)--++(135:1*\i) coordinate (d\i); \path (d\i)--++(-135:1) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:9); \draw[thick,densely dotted] (d\i)--++(45:9); } \end{tikzpicture}$$ **Lemma 35**. *The following relations hold. $$\begin{minipage}{2.55cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate 
(d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node {\scalefont{0.8}\color{magenta}$1$}; \path(c1)--++(-90:0.3) node {\scalefont{0.8}\color{magenta}$1$}; \path(c2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$0$}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; \path(d2)--++(90:0.3) node {\scalefont{0.8}\color{darkgreen}$1$}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}\color{darkgreen}$1$}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{magenta}0$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}s$}; \end{tikzpicture} \end{minipage} \;\;=\; (-1)\;\begin{minipage}{2.55cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node 
{\scalefont{0.8}\color{magenta}$1$}; \path(c1)--++(-90:0.3) node {\scalefont{0.8}\color{magenta}$1$}; \path(c2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$0$}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; \path(d2)--++(90:0.3) node {\scalefont{0.8}\color{darkgreen}$\phantom{_\ast}s^\ast$}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}\color{darkgreen}$s$}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{magenta}0$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}f$}; \end{tikzpicture} \end{minipage} \qquad\qquad\quad \begin{minipage}{2.55cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node {\scalefont{0.8}\color{magenta}$1$}; \path(c1)--++(-90:0.3) node 
{\scalefont{0.8}\color{magenta}$1$}; \path(c2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$\phantom{_\ast}s^\ast$}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; \path(d2)--++(90:0.3) node {\scalefont{0.8}\color{darkgreen}$1$}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}\color{darkgreen}$1$}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{magenta}1$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}1$}; \end{tikzpicture} \end{minipage} \;\;=\; (-1)\; \begin{minipage}{2.55cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node {\scalefont{0.8}\color{magenta}$1$}; \path(c1)--++(-90:0.3) node {\scalefont{0.8}\color{magenta}$1$}; \path(c2)--++(90:0.3) node 
{\scalefont{0.8}\color{cyan}$\phantom{_\ast}s^\ast$}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; \path(d2)--++(90:0.3) node {\scalefont{0.8}\color{darkgreen}$\phantom{_\ast}s^\ast$}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}\color{darkgreen}$s$}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{magenta}\phantom{_\ast}f^\ast$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}f$}; \end{tikzpicture} \end{minipage}$$* *Proof.* The first equation follows by applying the null braid relation followed by the fork-spot contraction as follows: $$\begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,2); \draw[magenta,line width=0.08cm](0,0)--++(90:2); \draw[cyan,line width=0.08cm](1,0) --++(90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,0) --++(90:2) coordinate (X); \draw[magenta,line width=0.08cm](1.5,0) --++(90:0.4) coordinate (X); \fill [magenta] (X) circle (2.75pt); \end{tikzpicture}\end{minipage} = \begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,2); \draw[magenta,line width=0.08cm](0,0)--++(90:2); \draw[cyan,line width=0.08cm](1,0) --++(90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,0) --++(90:2) coordinate (X); \draw[magenta,line width=0.08cm](1.5,0) --++(90:1.8) coordinate (X); \fill [magenta] (X) circle (2.75pt); \end{tikzpicture}\end{minipage} = (-1) \; \begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,2); \draw[magenta,line width=0.08cm](0,0)--++(90:2); \draw[cyan,line width=0.08cm](1,0) --++(90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,0) --++(90:0.4) coordinate (X); \fill [darkgreen] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,2) --++(-90:0.4) coordinate (X); \fill 
[darkgreen] (X) circle (2.75pt); \draw[magenta,line width=0.08cm](1.5,0) to [out=90,in=0] (0,0.75); \draw[magenta,line width=0.08cm](0,2-0.75-0.2) to [out=0,in=-90] (1.5,2-0.3)--++(90:0.1)coordinate (X); \fill [magenta] (X) circle (2.75pt); \end{tikzpicture}\end{minipage} =(-1) \; \begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,2); \draw[magenta,line width=0.08cm](0,0)--++(90:2); \draw[cyan,line width=0.08cm](1,0) --++(90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,0) --++(90:0.4) coordinate (X); \fill [darkgreen] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,2) --++(-90:0.4) coordinate (X); \fill [darkgreen] (X) circle (2.75pt); \draw[magenta,line width=0.08cm](1.5,0) to [out=90,in=0] (0,0.75); % \draw[magenta,line width=0.08cm](0,2-0.75-0.2) to [out=0,in=-90] (1.5,2-0.3)--++(90:0.1)coordinate (X); %\fill [magenta] (X) circle (2.75pt); \end{tikzpicture}\end{minipage}$$ (where the first equality is merely a trivial isotopy). 
The second equation is obtained by simply applying the null braid relation as follows: $$\begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,2); \draw[magenta,line width=0.08cm](0,0)--++(90:2); \draw[cyan,line width=0.08cm](1,0) --++(90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[cyan,line width=0.08cm](1,2) --++(-90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,0) --++(90:2) coordinate (X); \draw[magenta,line width=0.08cm](1.5,0) --++(90:2) coordinate (X); \end{tikzpicture}\end{minipage} =(-1) \; \begin{minipage}{3cm}\begin{tikzpicture}[scale=1.5] \draw[densely dotted, rounded corners] (-0.25,0) rectangle (1.75,2); \draw[magenta,line width=0.08cm](0,0)--++(90:2); \draw[cyan,line width=0.08cm](1,0) --++(90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,0) --++(90:0.4) coordinate (X); \fill [darkgreen] (X) circle (2.75pt); \draw[cyan,line width=0.08cm](1,2) --++(-90:0.4) coordinate (X); \fill [cyan] (X) circle (2.75pt); \draw[darkgreen,line width=0.08cm](0.5,2) --++(-90:0.4) coordinate (X); \fill [darkgreen] (X) circle (2.75pt); \draw[magenta,line width=0.08cm](1.5,0) to [out=90,in=0] (0,0.75); \draw[magenta,line width=0.08cm](0,2-0.75) to [out=0,in=-90] (1.5,2) ; \end{tikzpicture}\end{minipage}$$ as required. ◻ **Theorem 36**. 
*The algebra $\mathcal{H}_{m,n }$ is the associative $\Bbbk$-algebra generated by the elements $$\label{geners} \{D^\lambda_\mu, D_\lambda^\mu \mid \text{$\lambda, \mu\in {\mathscr{P}_{m,n}}$ with $\lambda= \mu - P$ for some $P\in {\rm DRem}(\mu)$} \}\cup\{ {\sf 1}_\mu \mid \mu \in {\mathscr{P}_{m,n}}\}$$ subject to the following relations and their duals.* ***The idempotent relations:** For all $\lambda,\mu \in {\mathscr{P}_{m,n}}$, we have that $$\label{rel1} {\sf 1}_\mu{\sf 1}_\lambda=\delta_{\lambda,\mu}{\sf 1}_\lambda\qquad \qquad {\sf 1}_\lambda D^\lambda_\mu {\sf 1}_\mu = D^\lambda_\mu.$$* ***The self-dual relation:** Let $P\in {\rm DRem}(\mu)$ and $\lambda= \mu - P$. Then we have $$\label{selfdualrel} D_\mu^{\lambda} D_{\lambda}^\mu = (-1)^{b(P)-1}\Bigg( 2 \!\! \sum_{ \begin{subarray}{c} Q\in {\rm DRem}(\lambda) \\ P\prec Q \end{subarray}} \!\! (-1) ^{b(Q) } D^{\lambda}_{\lambda- Q } D^{\lambda- Q }_{\lambda} + \!\!\sum_{ \begin{subarray}{c} Q\in {\rm DRem}(\lambda) \\ Q \, \text{adj.}\, P \end{subarray}} \!\! (-1)^{b(Q)} D^{\lambda}_{\lambda- Q } D^{\lambda- Q }_{\lambda}\Bigg)$$ where we abbreviate "adjacent to\" simply as "adj.\"* ***The commuting relations:** Let $P,Q\in {\rm DRem}(\mu)$ which commute. Then we have $$\label{commuting} D^{\mu -P-Q}_{\mu-P}D^{\mu-P}_\mu = D^{\mu-P-Q}_{\mu -Q}D^{\mu -Q}_\mu \qquad D^{\mu -P}_\mu D^\mu_{\mu - Q} = D^{\mu - P}_{\mu - P - Q}D^{\mu - P - Q}_{\mu - Q}.$$* ***The non-commuting relation:** Let $P,Q\in {\rm DRem}(\mu)$ with $P\prec Q$ which do not commute. Then $Q\setminus P = Q^1\sqcup Q^2$ where $Q^1, Q^2 \in {\rm DRem}(\mu - P)$ and we have $$\label{noncommuting} D^{\mu -Q}_\mu D^\mu_{\mu-P} = D^{\mu - Q}_{\mu - P - Q^1}D^{\mu - P - Q^1}_{\mu - P} = D^{\mu - Q}_{\mu - P - Q^2}D^{\mu - P - Q^2}_{\mu - P}$$* ***The adjacent relation:** Let $P\in {\rm DRem}(\mu)$ and $Q\in {\rm DRem}(\mu - P)$ be adjacent. 
Denote by $\langle P\cup Q \rangle_\mu$, if it exists, the smallest removable Dyck path of $\mu$ containing $P\cup Q$. Then we have $$\label{adjacent} D^{\mu - P - Q}_{\mu - P}D^{\mu - P}_\mu = \left\{ \begin{array}{ll} (-1)^{b(\langle P\cup Q\rangle_\mu) - b(Q)} D^{\mu - P - Q}_{\mu - \langle P\cup Q\rangle_\mu}D^{\mu - \langle P\cup Q\rangle_\mu}_\mu & \mbox{if $\langle P\cup Q\rangle_\mu$ exists} \\ 0 & \mbox{otherwise.} \end{array} \right.$$* **Example 37**. *We have already seen examples of the "commuting relations\" in [\[zeroorientation\]](#zeroorientation){reference-type="ref" reference="zeroorientation"}, of the "non-commuting relations\" in [\[zeroorientation2\]](#zeroorientation2){reference-type="ref" reference="zeroorientation2"}, and of the "adjacent relation\" in [Lemma 35](#nullbraidontiles){reference-type="ref" reference="nullbraidontiles"}. The "idempotent relations\" are what one would expect, and the combinatorics of the "self-dual relation\" is pictured in [\[generationofspots,sum-relation\]](#generationofspots,sum-relation){reference-type="ref" reference="generationofspots,sum-relation"}.* *Proof.* By [Proposition 26](#generatorsarewhatweasay){reference-type="ref" reference="generatorsarewhatweasay"} it is enough to check that ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"}) to ([\[adjacent\]](#adjacent){reference-type="ref" reference="adjacent"}) form a complete list of relations. We first prove that all these relations do hold. The idempotent relations are immediate. We now proceed to check the other relations. **Proof of the self-dual relation.** First consider the case where $b(P) = 1$. This is (up to commutation) exactly the case covered in [\[generationofspots\]](#generationofspots){reference-type="ref" reference="generationofspots"}. 
We just need to note that $\langle r-1, c\rangle_\lambda$ and $\langle r, c-1\rangle_\lambda$ give Dyck paths adjacent to $P$ and that $Q = \langle r-j, c\rangle_\lambda= \langle r, c-j\rangle_\lambda$ (for $j\geqslant 2$) give two identical Dyck paths satisfying $P\prec Q$. Now if $b(P)\geqslant 2$ then we can find $k$ such that $(\lambda, \mu) = (\varphi_k(\lambda'), \varphi_k(\mu'))$ where $\lambda' = \mu'-P'$ with $b(P')=b(P)-1$ and $k\in \underline{\sf cont}(P')$. Now using induction and applying the dilation homomorphism we get $$\begin{aligned} -D_\mu^{\lambda} D_{\lambda}^\mu &=& (-1)^{b(P')-1}\left( 2 \!\! \sum_{ \begin{subarray}{c} Q'\in {\rm DRem}(\lambda') \\ P'\prec Q' \end{subarray}} \!\! (-1) ^{b(Q') }(-1) D^{\varphi_k(\lambda')}_{\varphi_k(\lambda'- Q') } D^{\varphi_k(\lambda' - Q') }_{\varphi_k(\lambda')} \right. \\ && + \left. \!\!\sum_{ \begin{subarray}{c} Q'\in {\rm DRem}(\lambda') \\ Q' \, \text{adjacent to}\, P' \end{subarray}} \!\! (-1)^{b(Q')} D^{\varphi_k(\lambda')}_{\varphi_k(\lambda'- Q') } D^{\varphi_k(\lambda' - Q') }_{\varphi_k(\lambda')}\right). \end{aligned}$$ (Note that the $(-1)$ in the first summand comes from the fact that $k\in \underline{\sf cont}(Q')$, whereas this is not the case for the terms in the second summand.) Now we have $\varphi_k(\lambda'-Q') = \lambda- Q$ for some $Q\in {\rm DRem}(\lambda)$. Moreover, $\varphi_k$ gives bijections between all $Q'$s satisfying $P'\prec Q'$ and all $Q$s satisfying $P\prec Q$, and between all $Q'$s adjacent to $P'$ and all $Q$s adjacent to $P$. Finally, noting that $b(P) = b(P')+1$, that $b(Q) = b(Q')+1$ if $P'\prec Q'$, and that $b(Q) = b(Q')$ if $Q'$ is adjacent to $P'$ gives the required relation. **Proof of the commuting relations.** The fact that $P$ and $Q$ commute is equivalent to $P_{sf}\cap Q_{sf} = \emptyset$. 
Now these relations follow directly by applying the spot idempotent and fork idempotent relations illustrated in [\[spotpic,spotpic2\]](#spotpic,spotpic2){reference-type="ref" reference="spotpic,spotpic2"}. (An example is depicted in [\[zeroorientation\]](#zeroorientation){reference-type="ref" reference="zeroorientation"}.) **Proof of the non-commuting relations.** First consider the case where $b(P) = 1$ and $b(Q)=2$ then we have $Q\setminus P = Q^1 \sqcup Q^2$ with $|Q^1|=|Q^2| = 1$. Visualising the product of basis elements on the partition $\mu$ we have that the only non-trivial part is given in [\[ADD DIAGRAM\]](#ADD DIAGRAM){reference-type="ref" reference="ADD DIAGRAM"}, simply by applying the spot-fork relation (as in [\[spotpic2\]](#spotpic2){reference-type="ref" reference="spotpic2"}). This proves the result in this case. $$\begin{minipage}{2.55cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); 
\path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(c1)--++(90:0.3) node {\scalefont{0.8}1}; \path(c1)--++(-90:0.3) node {\scalefont{0.8}1}; \path(c2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; \path(c2)--++(-90:0.3) node {\scalefont{0.8}1}; \path(d2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; \path(d2)--++(-90:0.3) node {\scalefont{0.8}1}; \path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}f$}; \path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}{\phantom {_*}}s^*$}; \end{tikzpicture} \end{minipage} \;\; = \; \begin{minipage}{2.55cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[very thick](0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \clip(0,0)--++(135:2)--++(45:2)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); %\draw[gray!40, thick] (d\i)--++(0:4*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } %\path(0,0)--++(45:0.5)--++(135:2.5) coordinate (X); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); %\path (origin2)--++(135:1*\i) coordinate (d\i); } \path(c1)--++(135:1) coordinate (d2); \path(d2)--++(135:1) coordinate (d3); \path(d2)--++(45:1) coordinate (d22); \path(d2)--++(45:2) coordinate (d23); \path(d2)--++(135:1)--++(45:1) 
coordinate (d32); \path(d32)--++(45:1) coordinate (d33); \path(d2)--++(45:1) node {\scalefont{0.8}0}; \path(c1) node {\scalefont{0.8}1}; %\path(c1)--++(-90:0.3) node {\scalefont{0.8}1}; % %\path(c2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; %\path(c2)--++(-90:0.3) node {\scalefont{0.8}1}; % %\path(d2)--++(90:0.3) node {\scalefont{0.8}\color{cyan}$s$}; %\path(d2)--++(-90:0.3) node {\scalefont{0.8}1}; % % % %\path(d22)--++(90:0.3) node {\scalefont{0.8}$\color{cyan}f$}; %\path(d22)--++(-90:0.3) node {\scalefont{0.8}$\color{magenta}{\phantom {_*}}s^*$}; % \path(c2) node {\scalefont{0.8}\color{orange}$s$}; \path(d2) node {\scalefont{0.8}\color{darkgreen}$s$}; \end{tikzpicture} \end{minipage} %%%%$$ Now if $b(P)\geqslant 2$ or $b(Q)\geqslant 3$ then we can find $k$, such that $(\mu, \mu -P, \mu -Q, \mu - P - Q^1, \mu - P - Q^2) = (\varphi_k(\mu'), \varphi_k(\mu' - P'), \varphi_k(\mu' - Q'), \varphi_k(\mu' -P' - Q'^1), \varphi_k(\mu' -P' - Q'^2))$ such that $k\in \underline{\sf cont}(P')\sqcup \underline{\sf cont}(Q'^1) \sqcup \underline{\sf cont}(Q'^2)$. Now we can use induction and apply the dilation homomorphism. Noting that spot (respectively fork) tiles in $P_{sf}$ correspond to fork (respectively spot) tiles in $Q_{sf}$, gives the required relation. (The example in [\[zeroorientation2\]](#zeroorientation2){reference-type="ref" reference="zeroorientation2"} is obtained by dilating the example in [\[ADD DIAGRAM\]](#ADD DIAGRAM){reference-type="ref" reference="ADD DIAGRAM"} at $k=0$.) **Proof of the adjacent relation.** First assume that $b(P)=b(Q)=1$. Say $Q={[r,c]}$ then we have $\langle P\cup Q \rangle_\mu = \langle r,c\rangle_\mu$. 
Now applying [Lemma 35](#nullbraidontiles){reference-type="ref" reference="nullbraidontiles"} (i) once, followed by $b(\langle r,c\rangle_\mu)-2$ applications of [Lemma 35](#nullbraidontiles){reference-type="ref" reference="nullbraidontiles"} (ii), we obtain $$D^{\mu -P-Q}_{\mu-P}D^{\mu - P}_\mu = (-1)^{b(\langle r,c\rangle_\mu)-1}D^{\mu-P-Q}_{\mu - \langle r,c\rangle_\mu}D^{\mu - \langle r,c\rangle_\mu}_\mu.$$ Now if $\langle r,c\rangle_\mu\notin {\rm DRem}(\mu)$ then this is equal to zero by the cyclotomic relations, giving the result. An example of this case is provided in [\[a big one\]](#a big one){reference-type="ref" reference="a big one"}. Now if $b(P)\geqslant 2$ or $b(Q)\geqslant 2$ then we can find $k$ such that $(\mu, \mu -P , \mu -P-Q, \mu-\langle P \cup Q\rangle_\mu) = (\varphi_k(\mu'), \varphi_k(\mu' -P') , \varphi_k(\mu' -P'-Q'), \varphi_k(\mu'-\langle P' \cup Q'\rangle_{\mu'}))$ with either $k\in \underline{\sf cont}(P')$ or $k\in \underline{\sf cont}(Q')$ but not both. In the first case, using induction and the dilation homomorphism we have $$(\pm i) D^{\mu -P-Q}_{\mu-P}D^{\mu - P}_\mu = (-1)^{b(\langle P'\cup Q' \rangle_{\mu'})-b(Q')} (\mp i) D^{\mu-P-Q}_{\mu - \langle P\cup Q\rangle_\mu}D^{\mu - \langle P\cup Q\rangle_\mu}_\mu.$$ Noting that $b(\langle P\cup Q \rangle_\mu) = b(\langle P'\cup Q'\rangle_{\mu'})+1$ and $b(Q)=b(Q')$ gives the result. In the second case, using induction and the dilation homomorphism we have $$(\pm i) D^{\mu -P-Q}_{\mu-P}D^{\mu - P}_\mu = (-1)^{b(\langle P'\cup Q' \rangle_{\mu'})-b(Q')} (\pm i) D^{\mu-P-Q}_{\mu - \langle P\cup Q\rangle_\mu}D^{\mu - \langle P\cup Q\rangle_\mu}_\mu.$$ Noting that $b(\langle P\cup Q \rangle_\mu) = b(\langle P'\cup Q'\rangle_{\mu'})+1$ and $b(Q)=b(Q')+1$ gives the result. 
$$\begin{minipage}{5.5cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[thick](0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0)--++(45:2) coordinate (X); \draw[line width=2.2] (X)--++(45:2)--++(135:2)--++(-135:2)--++(-45:2); \clip(0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i1); \path (origin2)--++(135:1)--++(45:1*\i) coordinate (c\i2); \path (origin2)--++(135:2)--++(45:1*\i) coordinate (c\i3); \path (origin2)--++(135:3)--++(45:1*\i) coordinate (c\i4); } % % \path(c11)--++(135:1) coordinate (d2); % \path(d2)--++(135:1) coordinate (d3); % % % % % % \path(d2)--++(45:1) coordinate (d22); % \path(d2)--++(45:2) coordinate (d23); % \path(d2)--++(135:1)--++(45:1) coordinate (d32); % \path(d32)--++(45:1) coordinate (d33); % % \path(c11)--++(90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c11)--++(-90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c21)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c21)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c22)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c22)--++(-90:0.3) node 
{\scalefont{0.8}$\color{black}1$}; \path(c32)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c32)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c23)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c23)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c33)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c33)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c31)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c31)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c24)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c24)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; %\path(c41)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; %\path(c41)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c41) node { $\color{cyan}s$}; \path(c42) node {$\color{magenta}s$}; \end{tikzpicture} \end{minipage} = (-1)\phantom{^2} \; \begin{minipage}{5.5cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[thick](0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0)--++(45:1)--++(135:1) coordinate (X); \draw[line width=2.2] (X)--++(45:2)--++(135:2)--++(-135:2)--++(-45:2); \clip(0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path 
(origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i1); \path (origin2)--++(135:1)--++(45:1*\i) coordinate (c\i2); \path (origin2)--++(135:2)--++(45:1*\i) coordinate (c\i3); \path (origin2)--++(135:3)--++(45:1*\i) coordinate (c\i4); } % % \path(c11)--++(135:1) coordinate (d2); % \path(d2)--++(135:1) coordinate (d3); % % % % % % \path(d2)--++(45:1) coordinate (d22); % \path(d2)--++(45:2) coordinate (d23); % \path(d2)--++(135:1)--++(45:1) coordinate (d32); % \path(d32)--++(45:1) coordinate (d33); % % \path(c11)--++(90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c11)--++(-90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c21)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c21)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c22)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c22)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c32)--++(90:0.3) node {\scalefont{0.8}$\color{black}\phantom{_\ast}s^\ast$}; \path(c32)--++(-90:0.3) node {\scalefont{0.8}$\color{black}s$}; \path(c23)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c23)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c33)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c33)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c31)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c31)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(-90:0.3) node 
{\scalefont{0.8}$\color{black}1$}; \path(c24)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c24)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c41)--++(90:0.3) node {\scalefont{0.8}$\color{black}0$}; \path(c41)--++(-90:0.3) node {\scalefont{0.8}$\color{black}s$}; \path(c42)--++(90:0.3) node {\scalefont{0.8}$\color{black}0$}; \path(c42)--++(-90:0.3) node {\scalefont{0.8}$\color{black}f$}; % %\path(c41) node { $\color{cyan}s$}; %\path(c42) node {$\color{magenta}s$}; \end{tikzpicture} \end{minipage}$$ $$\begin{minipage}{5.5cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[thick,white](0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \end{tikzpicture} \end{minipage} = (-1)^2 \; \begin{minipage}{5.5cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[thick](0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0)--++(45:0)--++(135:2) coordinate (X); \draw[line width=2.2] (X)--++(45:2)--++(135:2)--++(-135:2)--++(-45:2); \clip(0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i1); \path (origin2)--++(135:1)--++(45:1*\i) coordinate (c\i2); \path (origin2)--++(135:2)--++(45:1*\i) coordinate (c\i3); \path 
(origin2)--++(135:3)--++(45:1*\i) coordinate (c\i4); } % % \path(c11)--++(135:1) coordinate (d2); % \path(d2)--++(135:1) coordinate (d3); % % % % % % \path(d2)--++(45:1) coordinate (d22); % \path(d2)--++(45:2) coordinate (d23); % \path(d2)--++(135:1)--++(45:1) coordinate (d32); % \path(d32)--++(45:1) coordinate (d33); % % \path(c11)--++(90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c11)--++(-90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c21)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c21)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c22)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c22)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c32)--++(90:0.3) node {\scalefont{0.8}$\color{black}\phantom{_\ast}s^\ast$}; \path(c32)--++(-90:0.3) node {\scalefont{0.8}$\color{black}s$}; \path(c23)--++(90:0.3) node {\scalefont{0.8}$\color{black}_{\phantom*}s^*$}; \path(c23)--++(-90:0.3) node {\scalefont{0.8}$\color{black}s$}; \path(c33)--++(90:0.3) node {\scalefont{0.8}$\color{black}_{\phantom*}f^*$}; \path(c33)--++(-90:0.3) node {\scalefont{0.8}$\color{black}f$}; \path(c31)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c31)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c24)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c24)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c41)--++(90:0.3) node {\scalefont{0.8}$\color{black}0$}; \path(c41)--++(-90:0.3) node {\scalefont{0.8}$\color{black}s$}; \path(c42)--++(90:0.3) node {\scalefont{0.8}$\color{black}0$}; \path(c42)--++(-90:0.3) node 
{\scalefont{0.8}$\color{black}f$}; % %\path(c41) node { $\color{cyan}s$}; %\path(c42) node {$\color{magenta}s$}; \end{tikzpicture} \end{minipage}$$ $$\begin{minipage}{5.5cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[thick,white](0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \end{tikzpicture} \end{minipage} = (-1)^3 \; \begin{minipage}{5.5cm} \begin{tikzpicture}[scale=0.92] \path(0,0) coordinate (origin2); \draw[thick](0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \clip(0,0)--++(135:4)--++(45:2)--++(-45:1)--++(45:1) --++(-45:1)--++(45:1)--++(-45:2) --(0,0); \path(0,0) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(135:1*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:0.5*\i) coordinate (c\i); \path (origin2)--++(135:0.5*\i) coordinate (d\i); } \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i); \path (origin2)--++(135:1*\i) coordinate (d\i); \draw[thick,densely dotted] (c\i)--++(135:14); \draw[thick,densely dotted] (d\i)--++(45:14); } \path(0,0) --++(135:0.5) --++(45:-0.5) coordinate (origin2); \foreach \i in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} { \path (origin2)--++(45:1*\i) coordinate (c\i1); \path (origin2)--++(135:1)--++(45:1*\i) coordinate (c\i2); \path (origin2)--++(135:2)--++(45:1*\i) coordinate (c\i3); \path (origin2)--++(135:3)--++(45:1*\i) coordinate (c\i4); } % % \path(c11)--++(135:1) coordinate (d2); % \path(d2)--++(135:1) coordinate (d3); % % % % % % \path(d2)--++(45:1) coordinate (d22); % \path(d2)--++(45:2) coordinate (d23); % \path(d2)--++(135:1)--++(45:1) coordinate (d32); % \path(d32)--++(45:1) coordinate (d33); % % \path(c11)--++(90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c11)--++(-90:0.3) node {\scalefont{0.8}\color{black}1}; \path(c21)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; 
\path(c21)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c12)--++(-90:0.3) node {\scalefont{0.8}\color{black}$1$}; \path(c22)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c22)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c32)--++(90:0.3) node {\scalefont{0.8}$\color{violet}\phantom{_\ast}s^\ast$}; \path(c32)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}s$}; \path(c23)--++(90:0.3) node {\scalefont{0.8}$\color{violet}_{\phantom*}s^*$}; \path(c23)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}s$}; \path(c33)--++(90:0.3) node {\scalefont{0.8}$\color{violet}_{\phantom*}f^*$}; \path(c33)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}f$}; \path(c31)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c31)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c13)--++(-90:0.3) node {\scalefont{0.8}$\color{black}1$}; \path(c14)--++(90:0.3) node {\scalefont{0.8}$\color{black}\color{violet}_{\phantom*}s^*$}; \path(c14)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}s$}; \path(c24)--++(90:0.3) node {\scalefont{0.8}$\color{violet}_{\phantom*}f^*$}; \path(c24)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}f$}; \path(c41)--++(90:0.3) node {\scalefont{0.8}$\color{black}0$}; \path(c41)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}s$}; \path(c42)--++(90:0.3) node {\scalefont{0.8}$\color{black}0$}; \path(c42)--++(-90:0.3) node {\scalefont{0.8}$\color{darkgreen}f$}; % %\path(c41) node { $\color{cyan}s$}; %\path(c42) node {$\color{magenta}s$}; \end{tikzpicture} \end{minipage}$$ It remains to show that these form a complete list of relations. 
**Completeness of relations.** It is enough to show that, using ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"})--([\[adjacent\]](#adjacent){reference-type="ref" reference="adjacent"}) and their duals, we can rewrite any product of $k$ degree $1$ generators as a linear combination of light leaves basis elements. We proceed by induction on $k$. For $k=1$ there is nothing to prove. For $k=2$, note that ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"})--([\[adjacent\]](#adjacent){reference-type="ref" reference="adjacent"}) and their duals cover precisely all possible (non-zero) products of two degree $1$ generators and rewrite these as linear combinations of basis elements. Now, assume that the result holds for $k$ and consider a product of $k+1$ generators. By induction, it is enough to consider a product of the form $$D^\mu_\lambda D^\lambda_\nu D^\nu_{\nu \pm P}$$ where $D^\mu_\lambda D^\lambda_\nu$ is a basis element of degree $k$ and $P$ is a Dyck path. To show that this product can be rewritten as a linear combination of basis elements, we will additionally use induction on $\ell(\lambda)+\ell(\nu)$. If $\ell(\lambda)+\ell(\nu) = 0$ then $\lambda= \nu = \emptyset$, $\nu +P = (1)$ and we have, using ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"}), $$D^\mu_\emptyset D^\emptyset_\emptyset D^\emptyset_{(1)} = D^\mu_\emptyset D^\emptyset_{(1)}$$ which is a basis element. Now assume that $\ell(\lambda) + \ell(\nu) \geqslant 1$. If $\lambda\neq \nu$ then we can write $$D^\lambda_\nu = D^\lambda_{\nu -Q} D^{\nu - Q}_\nu$$ for some $Q\in {\rm DRem}(\nu)$. Now using ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"})--([\[adjacent\]](#adjacent){reference-type="ref" reference="adjacent"}) we can write $$D^{\nu - Q}_\nu D^\nu_{\nu \pm P} = \sum_{\nu'}c_{\nu'}D^{\nu - Q}_{\nu'}D^{\nu'}_{\nu \pm P}$$ for some $c_{\nu '}\in \Bbbk$ where $\ell(\nu') \leqslant\ell(\nu - Q) < \ell(\nu)$. 
Now we have $$\begin{aligned} D^\mu_\lambda D^\lambda_\nu D^\nu_{\nu \pm P} &=& D^\mu_\lambda D^\lambda_{\nu -Q}D^{\nu - Q}_\nu D^\nu_{\nu \pm P} \\ &=& \sum_{\nu '} c_{\nu'} (D^\mu_\lambda D^\lambda_{\nu - Q}D^{\nu - Q}_{\nu'})D^{\nu'}_{\nu \pm P}\\ &=& \sum_{\nu' , \lambda'} d_{\nu ' , \lambda'} D^\mu_{\lambda'}D^{\lambda'}_{\nu'}D^{\nu'}_{\nu\pm P}\end{aligned}$$ by induction, and $\ell(\lambda')+\ell(\nu')<\ell(\lambda)+\ell(\nu)$, so we're done. It remains to consider the case where $\lambda= \nu$. Here we must have $\mu \neq \lambda$. First observe that $$D^\mu_\lambda D^\lambda_\lambda D^\lambda_{\lambda+P} = D^\mu_\lambda D^\lambda_{\lambda+P}$$ by ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"}) and this is a basis element. The last case to consider is $$D^\mu_\lambda D^\lambda_\lambda D^\lambda_{\lambda-P} = D^\mu_\lambda D^\lambda_{\lambda-P}.$$ As $\mu\neq \lambda$ we have $D^\mu_\lambda= D^\mu_{ \lambda+Q}D^{\lambda+ Q}_\lambda$ for some $Q\in {\rm DAdd}(\lambda)$, and so $$D^\mu_\lambda D^\lambda_\lambda D^\lambda_{\lambda-P} = D^\mu_{\lambda+Q}D^{\lambda+Q}_\lambda D^\lambda_{\lambda-P}.$$ Now, using ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"})--([\[adjacent\]](#adjacent){reference-type="ref" reference="adjacent"}) we have $$D^{\lambda+Q}_\lambda D^\lambda_{\lambda-P} = \sum_{\nu'}c_{\nu'} D^{\lambda+ Q}_{\nu'} D^{\nu'}_{\lambda-P}$$ with $\ell(\nu') \leqslant\ell (\lambda- P)<\ell(\lambda)$ and so $$\begin{aligned} D^\mu_{\lambda+Q}D^{\lambda+Q}_\lambda D^\lambda_{\lambda-P} &=& \sum_{\nu'}c_{\nu'} (D^\mu_{\lambda+ Q} D^{\lambda+ Q}_{\nu'}) D^{\nu'}_{\lambda-P}\\ &=& \sum_{\lambda' , \nu'}d_{\lambda' , \nu'} D^\mu_{\lambda'} D^{\lambda'}_{\nu'}D^{\nu'}_{\lambda- P}\end{aligned}$$ using induction, as ${\rm deg} D^\mu_{\lambda+ Q} D^{\lambda+ Q}_{\nu'} = k$ with $\ell(\nu') \leqslant\ell(\lambda-P)< \ell(\lambda)$. Now as $\ell(\lambda')+\ell(\nu')<\ell(\lambda)+\ell(\nu)$, we're done by induction. 
◻ ## Recasting the Dyck presentation as a quiver and relations Gabriel proved that every basic algebra is isomorphic to the path algebra of its ${\rm Ext}$-quiver modulo relations. We now go through the formal procedure of recasting the Dyck presentation in this language. **Definition 38**. *We define the Dyck quiver $\mathscr{D}_{m,n}$ with vertex set $\{E_\lambda\mid \lambda\in {\mathscr{P}_{m,n}}\}$ and arrows $d^\lambda_\mu: \lambda\to \mu$ and $d^\mu_\lambda:\mu \to \lambda$ for every $\lambda=\mu - P$ with $P\in {\rm DRem}(\mu)$.* An example is depicted in [\[quiver\]](#quiver){reference-type="ref" reference="quiver"}. $$\begin{tikzpicture}[scale=0.7 ] \path (0,-2.5)--++(-135+12.5:0.742) coordinate(X); \path(-3,-5.5) --++(45-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (0,-2.5)--++(-135-12.5:0.742) coordinate(X); \path(-3,-5.5) --++(45+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (3,-5.5)--++(-135+12.5:0.742) coordinate(X); \path(0,-8.5) --++(45-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (3,-5.5)--++(-135-12.5:0.742) coordinate(X); \path(0,-8.5) --++(45+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (-3,-5.5)--++(-45+12.5:0.742) coordinate(X); \path(0,-8.5) --++(135-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (-3,-5.5)--++(-45-12.5:0.742) coordinate(X); \path(0,-8.5) --++(135+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (0,-2.5)--++(-45+12.5:0.742) coordinate(X); \path(3,-5.5) --++(135-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (0,-2.5)--++(-45-12.5:0.742) coordinate(X); \path(3,-5.5) --++(135+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (0,-2.5)--++(-90-12.5:0.742) coordinate(X); \path(0,-5.5) --++(90+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) -- (Y) ; \path (0,-2.5)--++(-90+12.5:0.742) coordinate(X); \path(0,-5.5) --++(90-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (0,0.5)--++(-90-12.5:0.742) 
coordinate(X); \path(0,-2.5) --++(90+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (0,0.5)--++(-90+12.5:0.742) coordinate(X); \path(0,-2.5) --++(90-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (0,-5.5)--++(-90-12.5:0.742) coordinate(X); \path(0,-8.5) --++(90+12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (0,-5.5)--++(-90+12.5:0.742) coordinate(X); \path(0,-8.5) --++(90-12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path(0,0.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \draw[densely dotted] (top)--++(45:1*0.45) coordinate (X1) --++(45:1*0.45) --++(135:2*0.45)--++(-135:3*0.3)--++(-45:1*0.45) coordinate(Y2) --++(-45:1*0.45) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \path(0,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \draw[densely dotted] (top)--++(45:1*0.45) coordinate (X1) --++(45:1*0.45) --++(135:2*0.45)--++(-135:3*0.3)--++(-45:1*0.45) coordinate(Y2) --++(-45:1*0.45) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(45:0.45*1) --++(135:0.45*1)--++(-135:0.45*1)--++(-45:0.45*1); \path(-3,- 5.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \draw[densely dotted] (top)--++(45:1*0.45) coordinate (X1) --++(45:1*0.45) --++(135:2*0.45)--++(-135:3*0.3)--++(-45:1*0.45) coordinate(Y2) --++(-45:1*0.45) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(45:0.45*1) --++(135:0.45*2)--++(-135:0.45*1)--++(-45:0.45*1) coordinate (C)--++(-45:0.45*1); \draw [very thick,fill=gray!40] (C) --++(45:0.45*1); \path( 3,- 5.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); 
\path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \draw[densely dotted] (top)--++(45:1*0.45) coordinate (X1) --++(45:1*0.45) --++(135:2*0.45)--++(-135:3*0.3)--++(-45:1*0.45) coordinate(Y2) --++(-45:1*0.45) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(135:0.45*1) --++(45:0.45*2)--++(-45:0.45*1)--++(-135:0.45*1) coordinate (C)--++(-135:0.45*1); \draw [very thick,fill=gray!40] (C) --++(135:0.45*1); \path(0,- 5.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \draw[densely dotted] (top)--++(45:1*0.45) coordinate (X1) --++(45:1*0.45) --++(135:2*0.45)--++(-135:3*0.3)--++(-45:1*0.45) coordinate(Y2) --++(-45:1*0.45) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(135:0.45*2) --++(45:0.45*2)--++(-45:0.45*2); \draw [very thick,fill=gray!40] (top)--++(135:0.45*1) --++(45:0.45*2)--++(-45:0.45*1)--++(-135:0.45*1) coordinate (C)--++(-135:0.45*1); \draw [very thick,fill=gray!40] (C) --++(135:0.45*1); \draw [very thick,fill=gray!40] (top)--++(45:0.45*1) --++(135:0.45*2)--++(-135:0.45*1)--++(-45:0.45*1) coordinate (C)--++(-45:0.45*1); \draw [very thick,fill=gray!40] (C) --++(45:0.45*1); \path(0,- 8.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \draw[densely dotted] (top)--++(45:1*0.45) coordinate (X1) --++(45:1*0.45) --++(135:2*0.45)--++(-135:3*0.3)--++(-45:1*0.45) coordinate(Y2) --++(-45:1*0.45) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(135:0.45*1) --++(45:0.45*2)--++(-45:0.45*1)--++(-135:0.45*1) coordinate (C)--++(-135:0.45*1); \draw [very thick,fill=gray!40] (C) --++(135:0.45*1); \draw [very thick,fill=gray!40] (top)--++(45:0.45*1) 
--++(135:0.45*2)--++(-135:0.45*1)--++(-45:0.45*1) coordinate (C)--++(-45:0.45*1); \draw [very thick,fill=gray!40] (C) --++(45:0.45*1); \end{tikzpicture}$$ **Proposition 39**. *The map $$E_\mu \mapsto {\sf 1}_\mu \qquad d^\lambda_\mu \mapsto D^\lambda_\mu$$ defines an algebra homomorphism from the path algebra of the Dyck quiver $\mathscr{D}_{m,n}$ to $\mathcal{H}_{m,n}$. Thus, the algebra $\mathcal{H}_{m,n}$ is isomorphic to the quotient of the path algebra of the Dyck quiver $\mathscr{D}_{m,n}$ by the quadratic relations given in ([\[rel1\]](#rel1){reference-type="ref" reference="rel1"})--([\[adjacent\]](#adjacent){reference-type="ref" reference="adjacent"}) (where we replace all $D_\mu^\lambda$'s with $d_\mu^\lambda$'s).* *Proof.* This follows directly from [Theorem 36](#presentation){reference-type="ref" reference="presentation"}. ◻ **Example 40**. *Continuing with [\[quiver\]](#quiver){reference-type="ref" reference="quiver"} we have that the algebra $\mathcal{H}_{2,2}$ is the path algebra of the quiver $\mathscr{D}_{2,2}$ modulo the following relations and their duals $$\begin{aligned} \label{1thing} d^\varnothing _{(1)}d^{(1)}_{(2)}=0=d^\varnothing _{(1)}d^{(1)}_{(1^2)} \qquad d^{(1)}_{(2)}d^{(2)}_{(2,1)} = d^{(1)}_{(2^2)}d^{(2^2)}_{(2,1)} = d^{(1)}_{(1^2)}d^{(1^2)}_{(2,1)} \qquad d^{(1)}_{\lambda}d_{(1)}^{\lambda} =-d_\varnothing^{(1)} d^\varnothing_{(1)} \end{aligned}$$ for $\lambda= (2), (1^2)$ or $(2^2)$, $$\begin{aligned} \label{2thing} d^{(2,1)}_{(2^2)}d_{(2,1)}^{(2^2)} =-d^{(2,1)}_{(2)}d_{(2,1)}^{(2)} -d^{(2,1)}_{(1^2)}d_{(2,1)}^{(1^2)} \end{aligned}$$ and for any pair $\nu<\mu$ not of the above form, we have that $$\begin{aligned} \label{3thing} d^\nu _{\mu} d^\mu_\nu =0.\end{aligned}$$* **Example 41**.
*[\[coef-quiver\]]{#coef-quiver label="coef-quiver"} Apart from the categories of this paper, the only Hecke categories whose quiver and relations were understood were those corresponding to Weyl groups of ranks 2 and 3 [@MR2017061] and those bi-serial algebras corresponding to $(W,P)=(S_n,S_{n-1})$, which we now describe. In this case, the quiver is depicted in [\[quive2r\]](#quive2r){reference-type="ref" reference="quive2r"}.* *$$\begin{tikzpicture}[scale=0.8 ] \draw(-1.25,0.5) node {$\varnothing$}; \draw(2,0.5) node {$(1)$}; \path (0,0.5)--++(-180-12.5:0.742) coordinate(X); \path(2,0.5) --++(180-12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (0,0.5)--++(-180+12.5:0.742) coordinate(X); \path(2,0.5) --++(180+12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \path (0+3.5,0.5)--++(-180-12.5:0.742) coordinate(X); \path(2+3.5,0.5) --++(180-12.5:0.742) coordinate(Y); \draw[thick,<-] (X) to (Y) ; \path (0+3.5,0.5)--++(-180+12.5:0.742) coordinate(X); \path(2+3.5,0.5) --++(180+12.5:0.742) coordinate(Y); \draw[thick,->] (X) to (Y) ; \draw(2+3.5,0.5) node {$(2)$}; \path (0+3.5+3.5,0.5)--++(-180-12.5:0.742) coordinate(X); \path(2+3.5+3.5,0.5) --++(180-12.5:0.742) coordinate(Y); \draw[thick,<-,densely dotted] (X) to (Y) ; \path (0+3.5+3.5,0.5)--++(-180+12.5:0.742) coordinate(X); \path(2+3.5+3.5,0.5) --++(180+12.5:0.742) coordinate(Y); \draw[thick,->,densely dotted] (X) to (Y) ; \path (0+3.5+3.5+3.5+1.25,0.5)--++(180:1.75) node {$\dots$}; \path (0+3.5+3.5+3.5+1.75,0.5)--++(-180-12.5:0.742) coordinate(X); \path(2+3.5+3.5+3.5+1.75,0.5) --++(180-12.5:0.742) coordinate(Y); \draw[thick,<-,densely dotted] (X) to (Y) ; \path (0+3.5+3.5+3.5+1.75,0.5)--++(-180+12.5:0.742) coordinate(X); \path(2+3.5+3.5+3.5+1.75,0.5) --++(180+12.5:0.742) coordinate(Y); \draw[thick,->,densely dotted] (X) to (Y) ; % % \draw(2+3.5+3.5+3.5+1.75,0.5) node {$(n)$}; \end{tikzpicture}$$* *We have that the algebra $\mathcal{H}_{n,1}$ is the path algebra of the quiver $\mathscr{D}_{n,1}$ 
modulo the following relations and their duals $$d^{(k)}_{(k+1)} d_{(k)}^{(k+1)} = d^{(k)}_{(k-1)} d_{(k)}^{(k-1)} \qquad d^{(k)}_{(k\pm1)} d_{(k\pm2)}^{(k\pm1)}=0$$ for $1\leqslant k <n$. The projective modules are all uni-serial or biserial and their structure is encoded in the Alperin diagrams in [\[quiver3\]](#quiver3){reference-type="ref" reference="quiver3"}.* *$$\begin{minipage}{1cm} \begin{tikzpicture}[scale=0.7 ] \draw[thick] (0,0) -- (0,-2) ; \fill[white] (0,0) circle (12pt); \fill[white] (0,-2) circle (12pt); \draw(0,0) node {\scalefont{0.9}$\varnothing$}; \draw(0,-2) node {\scalefont{0.9}$(1)$}; \end{tikzpicture}\end{minipage} \qquad\quad \begin{minipage}{3cm} \begin{tikzpicture}[scale=0.7 ] \draw[thick] (0,0) -- (-2,-2) --(0,-4)--(2,-2)--(0,0); \fill[white] (0,0) circle (12pt); \fill[white] (-2,-2) circle (12pt); \fill[white] (2,-2) circle (12pt); \fill[white] (0,-4) circle (12pt); \draw(0,0) node {\scalefont{0.9}$(1) $}; \draw(-2,-2) node {\scalefont{0.9}$(2)$}; \draw(2,-2) node {\scalefont{0.9}$\varnothing$}; \draw(0,-4) node {\scalefont{0.9}$(1) $}; \end{tikzpicture}\end{minipage} \qquad \; \dots \quad \begin{minipage}{3cm} \begin{tikzpicture}[scale=0.7 ] \draw[thick] (0,0) -- (-2,-2) --(0,-4)--(2,-2)--(0,0); \fill[white] (0,0) circle (12pt); \fill[white] (-2,-2) circle (12pt); \fill[white] (2,-2) circle (12pt); \fill[white] (0,-4) circle (12pt); \draw(0,0) node {\scalefont{0.9}$(n-1) $}; \draw(-2,-2) node {\scalefont{0.9}$(n)$}; \draw(2,-2) node {\scalefont{0.9}$(n-2)$}; \draw(0,-4) node {\scalefont{0.9}$(n-1) $}; \end{tikzpicture}\end{minipage} \qquad \quad \qquad \begin{minipage}{1.2cm} \begin{tikzpicture}[scale=0.7 ] \draw[thick] (0,0) -- (0,-4) ; \fill[white] (0,0) circle (12pt); \fill[white] (0,-2) circle (12pt); \fill[white] (0,-4) circle (12pt); \draw(0,-4) node {\scalefont{0.9}$(n)$}; \draw(0,0) node {\scalefont{0.9}$(n)$}; \draw(0,-2) node {\scalefont{0.9}$(n-1)$}; \end{tikzpicture}\end{minipage}$$* # Submodule structure of standard 
modules

For this section, we assume that $\Bbbk$ is a field. As noted in [Theorem 24](#heere ris the basus){reference-type="ref" reference="heere ris the basus"}, the algebra $\mathcal{H}_{m,n}$ is a basic (positively) graded quasi-hereditary algebra with graded cellular basis given by $$\{D^\mu_\lambda D^\lambda_\nu \, : \, (\lambda, \mu) \text{ and } (\lambda, \nu) \text{ are Dyck pairs with } \lambda, \mu, \nu\in {\mathscr{P}_{m,n}}\}.$$ For $\lambda\in {\mathscr{P}_{m,n}}$, write $\mathcal{H}_{m,n}^{\leqslant\lambda} = {\rm span}\{D^\mu_\alpha D^\alpha_\lambda\, : \, \alpha, \mu \in {\mathscr{P}_{m,n}}, \, \alpha \leqslant\lambda\}$ and $\mathcal{H}_{m,n}^{< \lambda} = {\rm span}\{D^\mu_\alpha D^\alpha_\lambda\, : \, \alpha, \mu \in {\mathscr{P}_{m,n}}, \, \alpha < \lambda\}$. Setting $${\rm DP}(\lambda):=\{ \mu\in {\mathscr{P}_{m,n}}\, : \, (\lambda, \mu)\, \text{is a Dyck pair}\},$$ the (left) standard module $\Delta_{m,n}(\lambda) = \mathcal{H}_{m,n}^{\leqslant\lambda} / \mathcal{H}_{m,n}^{< \lambda}$ has a basis given by $$\{ u_\mu := D^\mu_\lambda+ \mathcal{H}_{m,n}^{<\lambda} \, : \, \mu \in {\rm DP}(\lambda)\}.$$ Each $u_\mu$ generates a submodule of $\Delta_{m,n}(\lambda)$ whose $1$-dimensional simple head we denote by $L_{m,n}(\mu)$. In this section, we describe the full submodule structure of the standard modules. As $\mathcal{H}_{m,n}$ is positively graded, the grading provides a submodule filtration of $\Delta_{m,n}(\lambda)$. Decompose ${\rm DP}(\lambda)$ as $${\rm DP}(\lambda) = \bigsqcup_{k \geqslant 0} {\rm DP}_k(\lambda) \quad \text{where} \quad {\rm DP}_k(\lambda) = \{\mu\in {\rm DP}(\lambda) \, : \, \deg (\lambda, \mu)=k\}.$$ Note further that the algebra $\mathcal{H}_{m,n}$ is generated in degree $1$.
This implies that, in order to describe the full submodule structure, it is enough to find, for each $\mu \in {\rm DP}_k(\lambda)$, the set of all $\nu \in {\rm DP}_{k+1}(\lambda)$ such that $$u_\nu = c D^\nu_\mu u_\mu$$ for some $c\in \Bbbk$, where $\nu = \mu \pm P$ for some $P\in {\rm DAdd}(\mu)$ or $P\in {\rm DRem}(\mu)$ respectively. Thus, the condition that $\nu = \mu \pm P$ for some $P\in {\rm DAdd}(\mu)$ or $P\in {\rm DRem}(\mu)$ respectively is certainly a necessary condition for the existence of an extension between $L_{m,n}(\mu)$ and $L_{m,n}(\nu)$ in $\Delta_{m,n}(\lambda)$. We claim that it is also sufficient. Assume $\mu \setminus \lambda= \sqcup_i Q^i$. For $P\in {\rm DAdd}(\mu)$, note that $(\lambda, \mu +P)$ is a Dyck pair if and only if $P$ is not adjacent to any $Q^i$; in this case $(\mu +P)\setminus \lambda= \sqcup_i Q^i \sqcup P$ is the Dyck tiling and we have $$D^{\mu +P}_\mu D^\mu_\lambda= D^{\mu +P}_\lambda$$ by the definition of the light leaves basis. For $P\in {\rm DRem}(\mu)$, the only way to have ${\rm deg}(\lambda, \mu -P) = {\rm deg}(\lambda, \mu) +1$ is if $P\notin \{Q^i\}$ and there exists some $Q\in \{Q^i\}$ with $P\prec Q$ such that $P$ and $Q$ do not commute. In this case we have $Q\setminus P = R \sqcup S$ for some $R, S\in {\rm DRem}(\mu - P)$. We prove by induction on ${\rm deg}(\lambda, \mu)$ that $D^{\mu- P}_\mu D^\mu_\lambda= D^{\mu - P}_\lambda$. If ${\rm deg}(\lambda, \mu) = 1$ then $\mu - Q=\lambda$ and the non-commuting relation gives $$D^{\mu -P}_\mu D^\mu_\lambda= D^{\mu -P}_{\mu - P - R} D^{\mu - P - R}_{\mu - Q} = D^{\mu - P}_\lambda$$ as required. Now assume that ${\rm deg}(\lambda, \mu)\geqslant 2$. Suppose first that $Q\not\prec Q'$ for all $Q'\in \{Q^i\}$.
Then we can write $D^\mu_\lambda= D^\mu_{\mu - Q} D^{\mu - Q}_\lambda$ and we have $$D^{\mu - P}_\mu D^\mu_\lambda= D^{\mu - P}_\mu D^{\mu}_{\mu - Q}D^{\mu - Q}_\lambda= D^{\mu - P}_{\mu - P - R} D^{\mu - P - R}_{\mu - Q} D^{\mu - Q}_\lambda= D^{\mu - P}_\lambda$$ by the non-commuting relations and the definition of the light leaves basis. Otherwise, we have that $D^{\mu}_\lambda= D^\mu_{\mu - Q'}D^{\mu - Q'}_\lambda$ for some $Q'$ commuting with $P$. Then we have $$D^{\mu - P}_\mu D^\mu_\lambda= D^{\mu - P}_\mu D^{\mu}_{\mu - Q'}D^{\mu -Q'}_\lambda= D^{\mu - P}_{\mu - P - Q'} D^{\mu - P - Q'}_{\mu - Q'}D^{\mu - Q'}_\lambda= D^{\mu - P}_{\mu - P - Q'}D^{\mu - P - Q'}_\lambda= D^{\mu - P}_\lambda$$ where the second equality follows from the commuting relation, the third one follows by induction (as ${\rm deg}(\lambda, \mu - Q') = {\rm deg}(\lambda, \mu) - 1$), and the final equality follows by the definition of the light leaves basis. **Remark 42**. *We set $k_\lambda= \max \{ k\geqslant 0 \, | \, {\rm DP}_k(\lambda)\neq \emptyset\}$. Then it is easy to check that ${\rm DP}_{k_\lambda}(\lambda)$ consists of a single element $\mu_\lambda$. To construct the cup diagram of $\mu_\lambda$, start with the weight $\lambda$ and apply the following two steps:* 1. *Repeatedly find a pair of vertices labelled $\wedge$ $\vee$, in this order from left to right, that are neighbours in the sense that there are only vertices already joined by cups in between. Join these two vertices together with a cup, and repeat the process until there are no more such $\wedge$ $\vee$ pairs.* *We are left with a sequence of $\vee$'s followed by a sequence of $\wedge$'s.* 2. *Join these using concentric anti-clockwise cups. We are left with either a sequence of $\wedge$'s or a sequence of $\vee$'s. Draw vertical rays on these.* Suppose $\mu_\lambda\setminus \lambda= \sqcup_i Q^i$. Note that $\mu_\lambda$ is characterised by the following two properties: 1.
There is no $P\in {\rm DAdd}(\mu_\lambda)$ such that $P \sqcup \left(\bigsqcup_i Q^i\right)$ is a Dyck tiling of $(\mu_\lambda+P) \setminus \lambda$. 2. If $P\in \{Q^i\}$ and $Q\prec P$ then $Q\in \{Q^i\}$. This implies, in particular, that if $\mu \in {\rm DP}_k(\lambda)$ for $k<k_\lambda$, then either (1) or (2) above fails, and we have seen that in each case we can find some $h\in \mathcal{H}_{m,n}$ and $\nu\in {\rm DP}_{k+1}(\lambda)$ such that $u_\nu = hu_\mu$. Thus the radical and socle filtrations of $\Delta_{m,n}(\lambda)$ coincide with its grading filtration, and the socle of $\Delta_{m,n}(\lambda)$ is given by $L_{m,n}(\mu_\lambda)$. We have proved the following: **Theorem 43**. *Let $\lambda\in {\mathscr{P}_{m,n}}$. The Alperin diagram of the standard module $\Delta_{m,n}(\lambda)$ has vertex set labelled by the set $\{ L_{m,n}(\mu)\, : \, \mu\in {\rm DP}(\lambda)\}$ and edges $$L_{m,n}(\mu) \longrightarrow L_{m,n}(\nu)$$ whenever $\mu \in {\rm DP}_k(\lambda)$, $\nu \in {\rm DP}_{k+1}(\lambda)$ for some $k\geqslant 0$ and $\nu = \mu \pm P$ for some $P\in {\rm DAdd}(\mu)$ or $P\in {\rm DRem}(\mu)$ respectively. Moreover, the radical and socle filtrations both coincide with the grading filtration, and $\Delta_{m,n}(\lambda)$ has simple socle isomorphic to $L_{m,n}(\mu_\lambda)$ (where $\mu_\lambda$ is described in [Remark 42](#socle){reference-type="ref" reference="socle"}).* An example is provided in [\[standard21\]](#standard21){reference-type="ref" reference="standard21"}.
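Step (1) of the cup-diagram construction in Remark 42 is algorithmic: each $\vee$ is joined to the nearest $\wedge$ on its left all of whose intermediate vertices are already matched, which is exactly stack-based bracket matching. The following is an illustrative sketch only (it is not code from this paper, and the encoding of a weight as a string over `^`/`v`, together with the function name `match_wedge_vee`, is our own convention):

```python
# Hedged sketch of step (1) of Remark 42, read as bracket matching:
# '^' (a wedge) acts as an opening bracket, 'v' (a vee) as a closing one.
def match_wedge_vee(weight):
    """Return (cups, leftover_vees, leftover_wedges) after step (1)."""
    stack, cups, leftover_vees = [], [], []
    for i, symbol in enumerate(weight):
        if symbol == '^':
            stack.append(i)            # an as-yet-unmatched wedge
        elif stack:
            cups.append((stack.pop(), i))  # cup joining nearest free wedge
        else:
            leftover_vees.append(i)    # vee with no unmatched wedge before it
    # The unmatched vertices are vees followed by wedges, as asserted in
    # Remark 42; step (2) would join these with concentric cups and rays.
    return cups, leftover_vees, stack

cups, vees, wedges = match_wedge_vee('^vv^^v')
# cups == [(0, 1), (4, 5)], vees == [2], wedges == [3]
```

On the toy weight `'^vv^^v'` the procedure draws cups $(0,1)$ and $(4,5)$ and leaves a $\vee$ at position $2$ followed by a $\wedge$ at position $3$, consistent with the claim that after step (1) the unmatched vertices form a sequence of $\vee$'s followed by a sequence of $\wedge$'s.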
$$\begin{tikzpicture}[scale=1 ] \draw[very thick, gray!60](0,0)--(-5,-2.5) (0,0)--(-3,-2.5) (0,0)--(-1,-2.5) (0,0)--(1,-2.5) (0,0)--(3,-2.5) (0,0)--(5,-2.5); \draw[very thick,] (-5,-2.5)--(-3,-6.5); \draw[very thick, gray!60] (-5,-2.5)--(-5,-6.5) (-3,-2.5)--(1,-6.5) (-3,-2.5)--(5,-6.5) (-3,-6.5)--(-3,-2.5); \draw[very thick,] (-1,-2.5)--(-5,-6.5) (-1,-2.5)--(5,-6.5) ; \draw[very thick,gray!60] (-1,-2.5)--(-1,-6.5) ; \draw[very thick, gray!60] (1,-2.5)--(-1,-6.5) (1,-2.5)--(-3,-6.5) (1,-2.5)--(3,-6.5) ; \draw[very thick,] (5,-2.5)--(3,-6.5) ; \draw[very thick, gray!60] (5,-2.5)--(5,-6.5) (3,-2.5)--(1,-6.5) (3,-2.5)--(3,-6.5) (3,-2.5)--(-5,-6.5) ; \draw[very thick,] (0,-6.5-2.5)--(-5,-6.5) (0,-6.5-2.5)--(-1,-6.5) (0,-6.5-2.5)--(5,-6.5); \draw[very thick, gray!60] (0,-6.5-2.5)--(-3,-6.5) (0,-6.5-2.5)--(1,-6.5) (0,-6.5-2.5)--(3,-6.5) ; \path(0,0)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(top); \path(top)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[gray!60] (top)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[gray!60] (X1)--++(135:0.3*3); \draw[gray!60] (X2)--++(135:0.3*3); \draw[gray!60] (Y1)--++(45:0.3*3); \draw[gray!60] (Y2)--++(45:0.3*3); \draw [very thick,fill=gray!40] (top)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \draw [very thick] (top)--++(45:0.3)--++(135:0.3)--++(-135:0.3); \path(0,-6.5-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(bottom); \path(bottom)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[gray!60] (bottom)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[gray!60] (X1)--++(135:0.3*3); \draw[gray!60] 
(X2)--++(135:0.3*3); \draw[gray!60] (Y1)--++(45:0.3*3); \draw[gray!60] (Y2)--++(45:0.3*3); \draw [very thick ] (bottom)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (bottom)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(bottom)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[darkgreen] (start) circle (1.8pt); \path (start)--++(-45:0.3*1)--++(45:0.3*1) coordinate (start); \fill[brown] (start) circle (1.8pt); \path (start)--++(-45:0.3*1)--++(45:0.3*1) coordinate (start); \fill[lime!80!black] (start) circle (2pt); \draw [very thick] (bottom) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (bottom) --++(45:0.6)--++(135:0.3); \draw [very thick] (bottom) --++(135:0.6)--++(45:0.3); \path(-5,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LLL1); \path(LLL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LLL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (LLL1)--++(45:0.3*2)--++(135:0.3) --++(135:0.3*2)--++(-135:0.3*2) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (LLL1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(LLL1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[magenta] (start) circle (1.8pt); \draw[magenta,very thick] (start)--++(45:0.3) coordinate (start); \fill[magenta] (start) circle (1.8pt); \draw[magenta,very thick] (start)--++(-45:0.3) coordinate (start); 
\fill[magenta] (start) circle (2pt); \draw [very thick] (LLL1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LLL1) --++(45:0.6)--++(135:0.3); \draw [very thick] (LLL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-3,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LL1); \path(LL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (LL1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3*2)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (LL1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(LL1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[darkgreen] (start) circle (1.8pt); %\draw[magenta,very thick] (start)--++(45:0.3) coordinate (start); % %\fill[magenta] (start) circle (1.8pt); %\draw[magenta,very thick] (start)--++(-45:0.3) coordinate (start); % %\fill[magenta] (start) circle (2pt); \draw [very thick] (LL1) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-1,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(L1); \path(L1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (L1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely 
dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (L1)--++(45:0.3*3)--++(135:0.3*2)--++(-135:0.3)--++(135:0.3)--++(-135:0.3*2) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (L1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(L1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(45:0.3) coordinate (start); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(-45:0.3) coordinate (start); \fill[cyan] (start) circle (2pt); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(45:0.3) coordinate (start); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(-45:0.3) coordinate (start); \fill[cyan] (start) circle (2pt); \draw [very thick] (L1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (L1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(1,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(R1); \path(R1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (R1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (R1)--++(45:0.3*2)--++(135:0.3) --++(135:0.3)--++(-135:0.3*2) --++(-45:0.3*2); \draw [very thick,fill=gray!40] (R1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(R1)--++(45:1.5*0.3)--++(135:1.5*0.3) coordinate (start); 
\fill[brown] (start) circle (1.8pt); \draw [very thick] (R1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (L1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(3,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RR1); \path(RR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (RR1)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3*2)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \draw [very thick,fill=gray!40] (RR1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(RR1)--++(135:0.5*0.3)--++(45:2.5*0.3) coordinate (start); \fill[lime!80!black] (start) circle (1.8pt); \draw [very thick] (RR1) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RR1) --++(45:0.6)--++(135:0.3); \path(5,-2.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RRR1); \path(RRR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RRR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely 
dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (RRR1)--++(45:0.3*3)--++(135:0.3*2) --++(-135:0.3*3) --++(-45:0.3*2); \draw [very thick,fill=gray!40] (RRR1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(RRR1)--++(135:0.5*0.3)--++(45:2.5*0.3) coordinate (start); \fill[violet] (start) circle (1.8pt); \draw[violet,very thick] (start)--++(135:0.3) coordinate (start); \fill[violet] (start) circle (1.8pt); \draw[violet,very thick] (start)--++(-135:0.3) coordinate (start); \fill[violet] (start) circle (1.8pt); \draw [very thick] (RRR1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RRR1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \path(-5,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LLL1); \path(LLL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LLL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (LLL1)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3) --++(135:0.3*2)--++(-135:0.3*2) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (LLL1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(LLL1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[magenta] (start) circle (1.8pt); \draw[magenta,very thick] (start)--++(45:0.3) coordinate (start); \fill[magenta] (start) circle (1.8pt); \draw[magenta,very thick] (start)--++(-45:0.3) coordinate (start); \fill[magenta] (start) circle (2pt); \path (start)--++(45:0.3) coordinate (start); \path (start)--++(-45:0.3) coordinate (start); \fill[brown] (start) 
circle (2pt); \draw [very thick] (LLL1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LLL1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (LLL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-3,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(LL1); \path(LL1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (LL1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (LL1)--++(45:0.3*2)--++(135:0.3*2)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (LL1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(LL1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[darkgreen] (start) circle (1.8pt); \path(start)--++(45:0.3) coordinate (start); \path (start)--++(-45:0.3) coordinate (start); \fill[brown] (start) circle (2pt); \draw [very thick] (LL1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (LL1) --++(45:0.6)--++(135:0.3); \draw [very thick] (LL1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(-1,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(L1); \path(L1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (L1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); 
\draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (L1)--++(45:0.3*3)--++(135:0.3*3) --++(-135:0.3*3) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (L1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(L1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(45:0.3) coordinate (start); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(45:0.3) coordinate (start); \fill[cyan] (start) circle (2pt); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(-45:0.3) coordinate (start); \fill[cyan] (start) circle (1.8pt); \draw[cyan,very thick] (start)--++(-45:0.3) coordinate (start); \fill[cyan] (start) circle (2pt); \path (start)--++(-135:0.3)--++(135:0.3) coordinate (start); \fill[brown] (start) circle (2pt); % % % % %\path(L1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); % %\fill[cyan] (start) circle (1.8pt); %\draw[cyan,very thick] (start)--++(45:0.3) coordinate (start); % % %\fill[cyan] (start) circle (2pt); % % %\draw[cyan,very thick] (start)--++(-45:0.3) coordinate (start); % %\fill[cyan] (start) circle (1.8pt); %\draw[cyan,very thick] (start)--++(45:0.3) coordinate (start); % % %\fill[cyan] (start) circle (1.8pt); %\draw[cyan,very thick] (start)--++(-45:0.3) coordinate (start); % %\fill[cyan] (start) circle (2pt); % % %\path (start)--++( 135:0.3)--++(135:0.3) coordinate (start); %\fill[brown] (start) circle (2pt); \draw [very thick] (L1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (L1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (L1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(1,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(R1); 
\path(R1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (R1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (R1)--++(45:0.3*3)--++(135:0.3) --++(-135:0.3*2) --++(135:0.3*2) --++(-135:0.3) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (R1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(R1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[darkgreen] (start) circle (1.8pt); \path (start)--++(-45:0.3*2)--++(45:0.3*2) coordinate (start); \fill[lime!80!black] (start) circle (2pt); \draw [very thick] (R1) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (R1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (R1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \path(3,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RR1); \path(RR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (RR1)--++(45:0.3*3)--++(135:0.3)--++(-135:0.3*1)--++(135:0.3)--++(-135:0.3*2) --++(-45:0.3*2); \draw [very thick,fill=gray!40] (RR1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(RR1)--++(45:1.5*0.3)--++(135:1.5*0.3) 
coordinate (start); \fill[brown] (start) circle (1.8pt); \path (start)--++(-45:0.3*1)--++(45:0.3*1) coordinate (start); \fill[lime!80!black] (start) circle (2pt); \draw [very thick] (RR1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RR1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \path(5,-6.5)--++(-45:0.3*1.5)--++(-135:0.3*1.5) coordinate(RRR1); \path(RRR1)--++(45:0.3*1.5)--++(135:0.3*1.5) coordinate (circle); \fill[white](circle) circle (20pt); \draw[densely dotted] (RRR1)--++(45:1*0.3) coordinate (X1)--++(45:1*0.3) coordinate (X2) --++(45:1*0.3) --++(135:3*0.3)--++(-135:3*0.3)--++(-45:1*0.3) coordinate (Y2) --++(-45:1*0.3) coordinate (Y1) --++(-45:1*0.3) ; \draw[densely dotted] (X1)--++(135:0.3*3); \draw[densely dotted] (X2)--++(135:0.3*3); \draw[densely dotted] (Y1)--++(45:0.3*3); \draw[densely dotted] (Y2)--++(45:0.3*3); \draw [very thick,fill=white] (RRR1)--++(45:0.3*3)--++(135:0.3*2) --++(-135:0.3*2) --++(135:0.3)--++(-135:0.3) --++(-45:0.3*3); \draw [very thick,fill=gray!40] (RRR1)--++(45:0.3*2)--++(135:0.3)--++(-135:0.3)--++(135:0.3)--++(-135:0.3) --++(-45:0.3*2); \path(RRR1)--++(45:0.5*0.3)--++(135:2.5*0.3) coordinate (start); \fill[magenta] (start) circle (1.8pt); \path (start)--++(45:0.3) coordinate (start); \path (start)--++(-45:0.3) coordinate (start); \fill[violet] (start) circle (2pt); \fill[violet] (start) circle (1.8pt); \draw[violet,very thick] (start)--++(45:0.3) coordinate (start); \fill[violet] (start) circle (1.8pt); \draw[violet,very thick] (start)--++(-45:0.3) coordinate (start); \fill[violet] (start) circle (2pt); \draw [very thick] (RRR1) --++(45:0.3)--++(135:0.3) --++(45:0.3)--++(135:0.3) --++(-135:0.3)--++(-45:0.3) --++(-135:0.3)--++(-45:0.3); \draw [very thick] (RRR1) --++(45:0.6)--++(135:0.3)--++(45:0.3);; \draw [very thick] (RRR1) --++(135:0.6)--++(45:0.3)--++(135:0.3); \end{tikzpicture}$$ # The isomorphism between Hecke categories and Khovanov arc 
algebras

We now utilise our newfound presentations in order to prove that the Khovanov arc algebras and Hecke categories are isomorphic as ${\mathbb Z}$-graded $\Bbbk$-algebras for $\Bbbk$ any commutative integral domain containing a square root of $-1$.

## Signs and the statement of the isomorphism

For the purpose of defining our isomorphism, we wish to consider all degree 1 diagrams in the Khovanov arc algebra. Using the dilation homomorphism of section 6, we are often able to restrict our attention to diagrams which are incontractible, which by [Remark 33](#incontr-prtns){reference-type="ref" reference="incontr-prtns"} are of the form $\underline{ \mu} { \lambda} \overline{ \lambda}$ (or its dual) for $\mu$ a rectangle and $\lambda=\mu -P$ with $b(P)=1$. The following lemma is immediate by construction of the arc diagrams.

**Lemma 44**. *The diagrams $\underline{\mu}\lambda\overline{\lambda}$ for $(\mu,\lambda) = ((1), \emptyset)$, $(\mu,\lambda) =( (c), (c-1))$ and $(\mu,\lambda) = ( (1^r), (1^{r-1}))$ are given respectively by the arc, left-zigzag and right-zigzag diagrams $$\label{gens-arc0}
\begin{tikzpicture} [yscale=0.76,xscale=-0.76]
\draw[white](0,0)--++(-90:1); \draw(-0.25,0)--++(0:1.5);
\draw(0,0.09) node {$\scriptstyle\vee$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$};
\draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) --++(90:1);
\end{tikzpicture} \qquad \ \qquad
%\begin{minipage}{3cm}
\begin{tikzpicture} [scale=0.76]
\draw(-0.25,0)--++(0:3.5) coordinate (X); \draw[densely dotted](X)--++(0:1) coordinate (X);; \draw (X)--++(0:1) coordinate (X);;
\draw[thick](0,-0.09) node {$\scriptstyle\wedge$}; \draw(1,0.09) node {$\scriptstyle\vee$}; \draw[thick](2,-0.09) node {$\scriptstyle\wedge$};
\draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1);
\draw[thick](3,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (3,-1) --++(90:2);
\draw[thick](4.75,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (4.75,-1) --++(90:2); \end{tikzpicture} \qquad \ \qquad \begin{tikzpicture} [scale=0.76] \draw(-1.25,0)--++(0:3.5) ; \draw[densely dotted](-1.25,0)--++(180:1) coordinate (X);; \draw (X)--++(180:1) coordinate (X);; \draw(0,0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$}; \draw (0,-1)--(0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0)--++(90:1); \draw[thick](-1,0.09) node {$\scriptstyle\vee$}; \draw[densely dotted] (-1,-1) --++(90:2); \draw[thick](-2.75,0.09) node {$\scriptstyle\vee$}; \draw[densely dotted] (-2.75,-1) --++(90:2); \end{tikzpicture}$$ with $c-2$ (respectively $r-2$) vertical strands to the right (respectively, the left). For $(\mu,\lambda) =( (c^r), (c^{r-1}, c-1))$ with $r\geqslant c>1$ we have that $\underline{\mu} { \lambda} \overline{\lambda}$ is the brace generator diagram $$\label{gens-arc1} \begin{minipage}{8cm} \begin{tikzpicture} [scale=0.76] \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[densely dotted](-1.25,0)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:2 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) 
to [out=-90,in=0] (1.5, -0.75) to [out=180,in= -90] (0,0) ;
\draw[densely dotted] (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2);
\draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0);
\draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 );
\draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 );
\draw[thick](-3.5 , 0.09) node {$\scriptstyle\vee$}; \draw[densely dotted] (-3.5,-1) --++(90:2);
\draw[thick](-5.5 , 0.09) node {$\scriptstyle\vee$}; \draw[densely dotted] (-5.5,-1) --++(90:2);
\draw[densely dotted](X)--++(180:1) coordinate (X);; \draw (X)--++(180:1) coordinate (X);;
\end{tikzpicture}
\end{minipage}$$ with $c-2$ concentric circles and a total of $r-c$ vertical strands to the left of the diagram. The case with $c\geqslant r \geqslant 1$ is similar but with $r-2$ concentric circles and a total of $c-r$ vertical strands to the right of the diagram.*

**Remark 45**. *The trivial embeddings ${\mathscr{P}_{m,n}}\rightarrow \mathscr{P}_{m+1,n}$ and ${\mathscr{P}_{m,n}}\rightarrow \mathscr{P}_{m,n+1}$ sending a partition to itself extend to algebra embeddings $\mathcal{K}_{m,n} \rightarrow \mathcal{K}_{m+1, n}$ and $\mathcal{K}_{m,n} \rightarrow \mathcal{K}_{m, n+1}$ defined on the arc diagrams by adding an upwards strand to the left or a downwards strand to the right. We have chosen to represent each arc diagram $\underline{\mu}\lambda\overline{\nu}$ in the smallest $\mathcal{K}_{m,n}$ in which it is defined, to avoid drawing many vertical strands which play no role in the multiplication or in the next definition.*

**Definition 46**. *Let $(\lambda, \mu)$ be a Dyck pair of degree $1$. Then $\lambda= \mu - P$ for some $P\in {\rm DRem}(\mu)$. We set ${\sf sgn}(\lambda, \mu)$ to be the average of the elements in the set $\underline{\sf cont}(P)$.
In other words, if the unique clockwise cup in $\underline{\mu}\lambda\overline{\lambda}$ connects vertices in position $i-\tfrac{1}{2}$ and $j+\tfrac{1}{2}$ for $i\leqslant j$, then ${\sf sgn}(\lambda, \mu) = \tfrac{1}{2}(j+i)$.*

**Example 47**. *The generators $\underline{ \mu} { \lambda} \overline{ \lambda}$ of the form $$\begin{tikzpicture} [scale=0.76]
\draw[white](0,0)--++(-90:1); \draw[white](0,0)--++(90:1); \draw(-0.25,0)--++(0:1.5);
\draw(1,0.09) node {$\scriptstyle\vee$}; \draw[thick](0,-0.09) node {$\scriptstyle\wedge$};
\draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) --++(90:1);
\end{tikzpicture} \qquad
%\begin{minipage}{3cm}
\begin{tikzpicture} [scale=0.76]
\draw[white](0,0)--++(-90:1); \draw[white](0,0)--++(90:1); \draw(-0.25,0)--++(0:2.5);
\draw[thick](0,-0.09) node {$\scriptstyle\wedge$}; \draw(1,0.09) node {$\scriptstyle\vee$}; \draw[thick](2,-0.09) node {$\scriptstyle\wedge$};
\draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1);
\end{tikzpicture} \qquad
\begin{tikzpicture} [scale=0.76]
\draw(-0.25,0)--++(0:2.5);
\draw(0,0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$};
\draw (0,-1)--(0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0)--++(90:1);
\end{tikzpicture}
%\end{minipage}
\qquad
\begin{tikzpicture} [scale=0.76]
\draw[white] (0,-1)--(0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0)--++(90:1);
\draw(-0.25,0)--++(0:3.5);
\draw(0,0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3,-0.09) node {$\scriptstyle\wedge$};
\draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to
[out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in= 0] (1.5, -0.85) to [out=180,in= -90] (0,0);
\end{tikzpicture} \qquad
%\begin{minipage}{3cm}
\begin{tikzpicture} [scale=0.76]
\draw[white](0,0)--++(-90:1); \draw[white](0,0)--++(90:1); \draw(-0.25,0)--++(0:3.5);
\draw[thick](0,-0.09) node {$\scriptstyle\wedge$}; \draw(1,0.09) node {$\scriptstyle\vee$}; \draw[thick](2,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3,-0.09) node {$\scriptstyle\wedge$};
\draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1);
\draw (3,-1)--(3,1);
\end{tikzpicture}$$ for $\mu=(1^1)$, $(2)$, $(1^2)$, $(2^2)$, $(3)$, respectively, and $\lambda= \mu - P$ with $b(P) = 1$ have signs $0, -1, 1, 0$ and $-2$.*

**Theorem 48**. *We have a graded $\Bbbk$-algebra isomorphism $\Psi: \mathcal{H}_{m,n } \to \mathcal{K}_{m,n}$ defined on generators by setting, for all $\lambda,\mu\in {\mathscr{P}_{m,n}}$ such that $(\lambda, \mu)$ is a Dyck pair of degree $1$, $$\Psi({\sf 1}_\lambda) = \underline{ \lambda} { \lambda} \overline{ \lambda}, \qquad \Psi (D^\lambda_\mu ) = i^{{\sf sgn}( \lambda, \mu)} \underline{ \lambda} { \lambda} \overline{ \mu},\qquad \Psi (D^\mu_\lambda) = i^{{\sf sgn}( \lambda, \mu)} \underline{ \mu} { \lambda} \overline{ \lambda}$$ where $i$ is a square root of $-1$ in $\Bbbk$.*

## Proof of the isomorphism

The remainder of this section is dedicated to the proof of [Theorem 48](#isomorphismthem){reference-type="ref" reference="isomorphismthem"}.

**Lemma 49** (Local idempotent relations).
*Any anticlockwise-oriented circle which intersects the weight at *precisely* two points is a local weight-idempotent in the following sense: applying the local surgery procedure at this point is equivalent to simply deleting this circle (see [\[local-idem\]](#local-idem){reference-type="ref" reference="local-idem"} for an example).* *Proof.* This follows immediately using the surgery procedures $1\otimes 1 \mapsto 1$, $1\otimes x \mapsto x$ and $1\otimes y \mapsto y$. ◻ $$\quad \begin{minipage}{4.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (-1.25,0)--++(180:2.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0)--++(0:3.75) coordinate (X); \draw[thick](-1 , -0.09) node {$\scriptstyle\wedge$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-3,0.09) node {$\scriptstyle\vee$}; \draw(-2,-0.09) node {$\scriptstyle\wedge$}; \draw(-5,0.09) node {$\scriptstyle\vee$}; \draw(-4,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw (-1,0) to [out=-90,in=180] (0.5, -1) to [out=0,in= -90] (2,0) (-0,0) to [out=-90,in=180] (0.5, -0.5) to [out=0,in= -90] (1,0) ; \draw (-1,0) to [out=90,in=180] (-0.5,0.5) to [out=0,in=90] (0,0) (1,0) to [out=90,in=180] (1.5,0.5) to [out=0,in=90] (2,0) ; \draw [ thick] (-5,0) to [out=90,in=180] (-3.5,1) to [out=0,in=90] (-2,0) (-5,0) to [out=-90,in=180] (-3.5,-1) to [out=0,in=-90] (-2,0) ; \draw (-4,0) to [out=90,in=180] (-3.5,0.5) to [out=0,in=90] (-3,0) (-4,0) to [out=-90,in=180] (-3.5,-0.5) to [out=0,in=-90] (-3,0) ; \draw (-1.25,0-3)--++(180:2.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0-3)--++(0:3.75) coordinate (X); % \draw (X)--++(0:0.5 ) coordinate (X); % \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , -0.09-3) node {$\scriptstyle\wedge$}; \draw[thick](0 , 0.09-3) node {$\scriptstyle\vee$}; \draw(2,0.09-3) node {$\scriptstyle\vee$}; \draw(-2,0.09-3) node {$\scriptstyle\vee$}; 
\draw(-3,-0.09-3) node {$\scriptstyle\wedge$}; \draw(-4,0.09-3) node {$\scriptstyle\vee$}; \draw(-5,-0.09-3) node {$\scriptstyle\wedge$}; \draw[thick](1 ,-0.09-3) node {$\scriptstyle\wedge$}; \draw (-5,0-3) to [out=90,in=180] (-3.5,1-3) to [out=0,in=90] (-2,0-3) (-0,0-3) to [out=90,in=180] (0.5,0.5-3) to [out=0,in=90] (1,0-3) (-1,0-3) to [out=90,in=180] (0.5,1-3) to [out=0,in=90] (2,0-3) ; \draw (-4,0-3) to [out=90,in=180] (-3.5,0.5-3) to [out=0,in=90] (-3,0-3) (-4,0-3) to [out=-90,in=180] (-3.5,-0.5-3) to [out=0,in=-90] (-3,0-3); \draw (1,0-3) to [out=-90,in=180] (1.5, -0.5-3) to [out=0,in= -90] (2,0-3) (-2,0-3) to [out=-90,in=180] (-1.5, -0.5-3) to [out=0,in= -90] (-1,0-3) (-5,0-3) to [out=-90,in=180] (-2.5, -1-3) to [out=0,in= -90] (-0,0-3) ; % % \draw[thick, <->] (-3.5,0-1.25+0.2) -- (-3.5,-1.75-0.2); % % (1,0-3) % to [out=90,in=180] (1.5,0.5-3) to [out=0,in=90] (2,0-3) % %; \end{tikzpicture} \end{minipage} \!\!\! =\;\;\;\; \begin{minipage}{3.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (-1.25,0)--++(180:2.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0)--++(0:3.75) coordinate (X); \draw[thick](-1 , -0.09) node {$\scriptstyle\wedge$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-3,0.09) node {$\scriptstyle\vee$}; \draw(-5,-0.09) node {$\scriptstyle\wedge$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw(-4,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw (-1,0) to [out=-90,in=180] (0.5, -1) to [out=0,in= -90] (2,0) (-0,0) to [out=-90,in=180] (0.5, -0.5) to [out=0,in= -90] (1,0) ; \draw (-1,0) to [out=90,in=180] (-0.5,0.5) to [out=0,in=90] (0,0) (1,0) to [out=90,in=180] (1.5,0.5) to [out=0,in=90] (2,0) ; \draw (-5,0) to [out=90,in=180] (-3.5,1) to [out=0,in=90] (-2,0) (-5,0)--++(-90:3) (-2,0)--++(-90:3) ; \draw (-4,0) to [out=90,in=180] (-3.5,0.5) to [out=0,in=90] (-3,0) (-4,0) to [out=-90,in=180] (-3.5,-0.5) to [out=0,in=-90] 
(-3,0) ; \draw (-1.25,0-3)--++(180:2.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0-3)--++(0:3.75) coordinate (X); % \draw (X)--++(0:0.5 ) coordinate (X); % \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , -0.09-3) node {$\scriptstyle\wedge$}; \draw[thick](0 , 0.09-3) node {$\scriptstyle\vee$}; \draw(2,0.09-3) node {$\scriptstyle\vee$}; \draw(-2,0.09-3) node {$\scriptstyle\vee$}; \draw(-3,-0.09-3) node {$\scriptstyle\wedge$}; \draw(-4,0.09-3) node {$\scriptstyle\vee$}; \draw(-5,-0.09-3) node {$\scriptstyle\wedge$}; \draw[thick](1 ,-0.09-3) node {$\scriptstyle\wedge$}; \draw (-0,0-3) to [out=90,in=180] (0.5,0.5-3) to [out=0,in=90] (1,0-3) (-1,0-3) to [out=90,in=180] (0.5,1-3) to [out=0,in=90] (2,0-3) ; \draw (-4,0-3) to [out=90,in=180] (-3.5,0.5-3) to [out=0,in=90] (-3,0-3) (-4,0-3) to [out=-90,in=180] (-3.5,-0.5-3) to [out=0,in=-90] (-3,0-3); \draw (1,0-3) to [out=-90,in=180] (1.5, -0.5-3) to [out=0,in= -90] (2,0-3) (-2,0-3) to [out=-90,in=180] (-1.5, -0.5-3) to [out=0,in= -90] (-1,0-3) (-5,0-3) to [out=-90,in=180] (-2.5, -1-3) to [out=0,in= -90] (-0,0-3) ; \end{tikzpicture} \end{minipage}$$ **Proposition 50** (The idempotent relations). *The idempotent relations are preserved by $\Psi$.* *Proof.* Note that the element $\underline{\lambda}\lambda\overline{\lambda}$ contains only anticlockwise circles intersecting the weight at precisely two points. Thus the result follows from [Lemma 49](#local-surg){reference-type="ref" reference="local-surg"}. ◻ **Proposition 51**. *(The self-dual relation) Let $P\in {\rm DRem}(\mu)$ and $\lambda= \mu - P$. Then we have $$\begin{aligned} (-1)^{{\sf sgn}(\lambda,\mu)} \underline{\lambda} \lambda\overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda} = && 2 \!\! \sum_{ \begin{subarray}{c} \nu= \lambda- Q \\ P\prec Q \end{subarray}} \! (-1) ^{b(Q)+b(P)-1+ { {\sf sgn}(\nu,\lambda)} } \underline{\lambda} \nu \overline{\nu} \cdot \underline{\nu} \nu \overline{\lambda}\\ && + \! 
\sum_{ \begin{subarray}{c} \nu= \lambda- Q \\ P \text { adj. } Q \end{subarray}} \!\!\!\!\! (-1) ^{b(Q)+b(P)-1+ { {\sf sgn}(\nu,\lambda)} } \underline{\lambda} \nu \overline{\nu} \cdot \underline{\nu} \nu \overline{\lambda} . \end{aligned}$$* *Proof.* We first focus on the incontractible case in which $\mu=(c^r)$ is a rectangle with $r,c\geqslant 1$ and $b(P)=1$ so that $\lambda=(c^{r-1},c-1)$. We first consider the case $r,c>1$ and set $m = \min \{ r,c\}$. In this case, the set of all Dyck paths $Q\in {\rm DRem}(\lambda)$ appearing on the right hand side of the equation can be described as follows. The set $$\{ Q\in {\rm DRem}(\lambda) \, : \, Q \, \text{ adj. } \, P\} = \{Q^{-1}, Q^1\}$$ satisfies $b(Q^{\pm 1}) = 1$ and, setting $\nu_{\pm 1} = \lambda- Q^{\pm 1}$, we have ${\sf sgn}(\nu_{\pm 1}, \lambda) = {\sf sgn}(\lambda, \mu) \pm 1$. The set $$\{Q\in {\rm DRem}(\lambda) \, : \, P\prec Q\} = \{Q^2, \ldots , Q^{m-1}\}$$ satisfies $b(Q^x) = x+1$ for $2\leqslant x\leqslant m-1$ and, setting $\nu_x = \lambda- Q^x$, we have ${\sf sgn}(\nu_x, \lambda) = {\sf sgn}(\lambda, \mu)$. An example is given in Figure [\[sum-relation\]](#sum-relation){reference-type="ref" reference="sum-relation"}. Thus, the self-dual relation can be rewritten in this case as $$\label{selfdual-rectangle} \underline{\lambda}\lambda\overline{\mu} \cdot \underline{\mu}\lambda\overline{\lambda} = \underline{\lambda}\nu_{-1} \overline{\nu_{-1}} \cdot \underline{\nu_{-1}}\nu_{-1} \overline{\lambda} +\underline{\lambda}\nu_{1} \overline{\nu_{1}} \cdot \underline{\nu_{1}}\nu_{1} \overline{\lambda} + 2 \sum_{x=2}^{m-1} (-1)^{x+1} \underline{\lambda}\nu_{x} \overline{\nu_{x}} \cdot \underline{\nu_{x}}\nu_{x} \overline{\lambda}.$$ The weight diagram for $\mu$ consists of $r$ down arrows followed by $c$ up arrows; the weight diagram for $\lambda=(c^{r-1},c-1)$ is obtained from that of $\mu$ by swapping the neighbouring $\vee\wedge$. 
Pictorially $$\mu= \left(\begin{minipage}{5 cm} \begin{tikzpicture} \draw ( -.75,0)--++(180:1); \draw[densely dotted]( -0.25,0)--++(180:0.5); \draw(-0.25,0)--++(0:2); \draw[densely dotted]( 1.75,0)--++(0:0.5); \draw ( 2.25,0)--++(0:1); \draw(-1,0.09) node {$\scriptstyle\vee$}; \draw(-1.5,0.09) node {$\scriptstyle\vee$}; \draw(0,0.09) node {$\scriptstyle\vee$}; \draw(0.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1.5,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2.5 ,-0.09) node {$\scriptstyle\wedge$}; \end{tikzpicture}\end{minipage}\right) \qquad \lambda= \left(\begin{minipage}{5 cm} \begin{tikzpicture} \draw ( -.75,0)--++(180:1); \draw[densely dotted]( -0.25,0)--++(180:0.5); \draw(-0.25,0)--++(0:2); \draw[densely dotted]( 1.75,0)--++(0:0.5); \draw ( 2.25,0)--++(0:1); \draw(-1,0.09) node {$\scriptstyle\vee$}; \draw(-1.5,0.09) node {$\scriptstyle\vee$}; \draw(0,0.09) node {$\scriptstyle\vee$}; \draw(1,0.09) node {$\scriptstyle\vee$}; \draw[thick](0.5,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1.5,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2.5 ,-0.09) node {$\scriptstyle\wedge$}; \end{tikzpicture}\end{minipage}\right)$$ and therefore $\underline{\mu} \lambda\overline{\lambda}$ can be pictured as follows, $$\underline{\mu} \lambda\overline{\lambda} =\; \begin{minipage}{5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] % \begin{tikzpicture} [scale=0.76] \draw[densely dotted](-1.25,0)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node 
{$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in=0] (1.5, -0.75) to [out=180,in= -90] (0,0) ; \draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); % \draw (3,-1)--++( 90:2); % \draw (4,-1)--++( 90:2); %\draw (6,-0.5)--++( 90:1); %\draw (-2.5,-0.5)--++( 90:1); %\draw[densely dotted] (6,0.5) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0.5); %\draw[densely dotted] (6,-0.5) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0.5); \draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \end{tikzpicture} \end{minipage}$$ plus $|r-c|$ vertical strands as in [\[gens-arc1\]](#gens-arc1){reference-type="ref" reference="gens-arc1"} on either side. In other words, $\underline{\mu} \lambda\overline{\lambda}$ consists of a brace surrounded by $m-2$ concentric anti-clockwise circles. We will ignore the vertical strands as they have no effect on the multiplication. 
Performing surgery on $\underline{\lambda} \lambda\overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda}$ we first apply [Lemma 49](#local-surg){reference-type="ref" reference="local-surg"} to the $m-2$ concentric circles one at a time (starting from the outermost pair of circles) until we obtain the diagram $$\underline{\lambda} \lambda\overline{\mu}\cdot \underline{\mu} \lambda\overline{\lambda} = \;\; \begin{minipage}{6cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] % \begin{tikzpicture} [scale=0.76] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted](-1.25,0-2)--++(180:1 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-1.25,0-2)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-1 , 0.09-2) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09-2) node {$\scriptstyle\vee$}; \draw(2,0.09-2) node {$\scriptstyle\vee$}; \draw(-2.5,0.09-2) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09-2) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-2) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-2) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09-2) node {$\scriptstyle\wedge$}; \draw (0,-2) to [out=-90,in=180] (0.5,-2-0.5) to [out=0,in=-90] (1,-2-0) (1,-2) to [out= 90,in=180] (1.5, -2+0.5) to [out=0,in= 90] (2,-2-0) to [out=-90,in=180] (2.5,-2-0.5) to [out=0,in=-90] (3,-2-0) to [out= 90,in=0] 
(1.5, -2+0.75) to [out=180,in= 90] (0,-2) ; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in=0] (1.5, -0.75) to [out=180,in= -90] (0,0) ; \draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw (4,0)--++(90:-2);\draw[densely dotted] (6,0)--++(90:-2); \draw[densely dotted] (-2.5,0)--++(90:-2);\draw (-1,0)--++(90:-2); \draw (4,-2) to [out=-90,in=0](1.5,-1 -2) to [out=180,in=-90](-1,0-2); \draw[densely dotted] (6,-0-2 ) to [out=-90,in=0] (1.5,-1.75-2) to [out=180,in =-90] (-2.5,-0-2 ); \end{tikzpicture} \end{minipage}$$ We then apply the two surgery steps detailed in [Example 7](#brace-surgery){reference-type="ref" reference="brace-surgery"} to the innermost pair of braces to obtain a sum of two diagrams $$\label{LMS} \begin{minipage}{5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] % \begin{minipage}{6.9cm} % \begin{tikzpicture} [scale=0.74] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(3,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5.75 ,-0.09) node {$\scriptstyle\wedge$}; \draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5.75,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely 
dotted] (5.75,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 );
\draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ;
\draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ;
\end{tikzpicture}
\end{minipage} \! + \;\;\;
\begin{minipage}{5cm}
\begin{tikzpicture} [xscale=0.5,yscale=0.7]
% \begin{minipage}{6.9cm}
% \begin{tikzpicture} [scale=0.74]
\draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X);
\draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:.75) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X);
\draw[thick](1 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$};
\draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$};
\draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$};
\draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5.75 ,-0.09) node {$\scriptstyle\wedge$};
\draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2);
\draw (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0);
\draw[densely dotted] (5.75,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 );
\draw[densely dotted] (5.75,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 );
\draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ;
\draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ;
\end{tikzpicture}
\end{minipage}$$ We need to compare this to the right-hand side of the equation in [\[selfdual-rectangle\]](#selfdual-rectangle){reference-type="ref" reference="selfdual-rectangle"}.
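For bookkeeping, the coefficient pattern on the right-hand side of [\[selfdual-rectangle\]](#selfdual-rectangle){reference-type="ref" reference="selfdual-rectangle"} can be tabulated: the two adjacent terms $\nu_{\pm 1}$ each carry coefficient $1$, and the term for $\nu_x$ carries $2(-1)^{x+1}$ for $2\leqslant x\leqslant m-1$. A minimal sketch (the function name is ours, not from the text):

```python
def selfdual_coeffs(m):
    """Coefficients of the summands on the right-hand side of the
    self-dual relation for a rectangle with shorter side m: the two
    adjacent terms nu_{-1}, nu_{+1} carry coefficient 1, and each
    term nu_x for 2 <= x <= m-1 carries 2*(-1)**(x+1)."""
    return [1, 1] + [2 * (-1) ** (x + 1) for x in range(2, m)]
```

In particular, for $m=2$ there are no $\nu_x$ terms and the relation reduces to the two adjacent summands, matching the surgery computation above.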
The diagrams $\underline{\nu_1} \nu_1 \overline{\lambda}$ and $\underline{\nu_{-1}} \nu_{-1} \overline{\lambda}$ are equal to $$\label{eqna1} % \begin{minipage}{7.3cm} % \begin{tikzpicture} [scale=0.72] \begin{minipage}{5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(3,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw (2,0) to [out=-90,in=0](0.5,-1 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=180] (3+0.5,-0.5) to [out=0,in=-90] (4+0,0) ; \end{tikzpicture} \end{minipage} \qquad \quad % \end{equation} %for $ [r,r-1]$ and %\begin{equation}\label{eqna2} % \underline{\la} \nu \overline{\nu} %= \begin{minipage}{4.5cm} \begin{tikzpicture} [xscale=-0.5,yscale=0.7] % \begin{minipage}{6.6cm} % \begin{tikzpicture} [xscale=-0.72, yscale=0.72] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate 
(X); \draw[thick](1 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 , 0.09) node {$\scriptstyle\vee$}; \draw(6,0.09) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-2.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw (2,0) to [out=-90,in=0](0.5,-1 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=180] (3+0.5,-0.5) to [out=0,in=-90] (4+0,0) ; \end{tikzpicture} \end{minipage}$$ respectively. 
Using [Lemma 49](#local-surg){reference-type="ref" reference="local-surg"} and [Example 8](#brace-surgery2){reference-type="ref" reference="brace-surgery2"}, we have that the product $\underline{\lambda} \nu_1 \overline{\nu_1} \cdot \underline{\nu_1} \nu_1 \overline{\lambda}$ is given by $$\label{LMS2} \begin{minipage}{4.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X); \draw[very thick, magenta] (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \draw[very thick, cyan] (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw[very thick, cyan] (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(3,0.09) node {$\color{magenta}\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09) node {$\color{magenta}\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\color{cyan}\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\color{cyan}\scriptstyle\wedge$}; \draw[thick](5.75 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (5.75,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (5.75,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \end{tikzpicture} \end{minipage} \;\;\; + \; \; \begin{minipage}{4.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] % \begin{minipage}{6.9cm} % \begin{tikzpicture} [scale=0.74] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); 
\draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X); \draw[very thick, cyan] (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \draw[very thick, magenta] (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw[very thick, magenta] (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\color{cyan}\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\color{cyan}\scriptstyle\wedge$}; \draw[thick](4 , 0.09) node {$\color{magenta}\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](-1 ,-0.09) node {$\color{magenta}\scriptstyle\wedge$}; \draw[thick](5.75 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (5.75,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (5.75,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \end{tikzpicture} \end{minipage}$$ Similarly, the product $\underline{\lambda} \nu_{-1} \overline{\nu_{-1}} \cdot \underline{\nu_{-1}} \nu_{-1} \overline{\lambda}$ is given by $$\label{LMS3} \begin{minipage}{4.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X); \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \draw[very thick, cyan] (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw[very thick, cyan] (4,0) to 
[out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[thick](2 , 0.09) node {$\scriptstyle\vee$}; \draw(1,0.09) node {$\color{magenta}\scriptstyle\vee$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](0 ,-0.09) node {$\color{magenta}\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\color{cyan}\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\color{cyan}\scriptstyle\wedge$}; \draw[thick](5.75 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (5.75,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (5.75,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \draw [very thick, magenta] (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \end{tikzpicture} \end{minipage} \;\;\; + \; \; \begin{minipage}{5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:1 ) coordinate (X); \draw (X)--++(180:0.75 ) coordinate (X); \draw(-1.25,0)--++(0:5.75) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:0.75 ) coordinate (X); \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \draw[very thick, magenta] (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw[very thick, magenta] (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[thick](2 , 0.09) node {$\scriptstyle\vee$}; \draw(0,0.09) node {$\color{cyan}\scriptstyle\vee$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1 ,-0.09) node {$\color{cyan}\scriptstyle\wedge$}; \draw[thick](4 , 0.09) node {$\color{magenta}\scriptstyle\vee$}; \draw(-2.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](-1 ,-0.09) node {$\color{magenta}\scriptstyle\wedge$}; \draw[thick](5.75 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (5.75,0 ) to [out=90,in=0] 
(1.5,1.75) to [out=180,in =90] (-2.5,0 ); \draw[densely dotted] (5.75,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-2.5,-0 ); \draw [very thick, cyan] (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \end{tikzpicture} \end{minipage}$$ where we have highlighted the pair of re-oriented circles in each case (the pink circle of degree 2 and the blue circle of degree 0). Notice that the sum of the left-hand terms in ([\[LMS2\]](#LMS2){reference-type="ref" reference="LMS2"}) and ([\[LMS3\]](#LMS3){reference-type="ref" reference="LMS3"}) is the required sum ([\[LMS\]](#LMS){reference-type="ref" reference="LMS"}). The right-hand terms in ([\[LMS2\]](#LMS2){reference-type="ref" reference="LMS2"}) and ([\[LMS3\]](#LMS3){reference-type="ref" reference="LMS3"}) are identical, and we denote this diagram by $D_1$. We will show that the sum of these two terms (that is, $2D_1$) cancels with the remaining terms in the larger sum. 
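The cancellation is a telescoping one. As a sketch of the bookkeeping (using the diagrams $D_x$ introduced in the course of the computation below, and anticipating the signs recorded at the end of the argument), the contributions combine as $$2D_1 + 2\sum_{x=2}^{m-2}(-1)^{x+1}(D_{x-1}+D_x) + 2(-1)^{m}D_{m-2} = 2D_1 - 2(D_1+D_2) + 2(D_2+D_3) - \cdots + 2(-1)^{m-1}(D_{m-3}+D_{m-2}) + 2(-1)^{m}D_{m-2} = 0,$$ since consecutive terms cancel in pairs.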
For $2\leqslant x\leqslant m-2$ we have $$\label{eqna3} \underline{\nu_x} \nu_x \overline{\lambda} = \begin{minipage}{9cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \clip(-6.75,-2.8) rectangle (10,2.8); \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:3.5 ) coordinate (X); \draw[densely dotted](X)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:3.5 ) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-4,0.09) node {$\scriptstyle\vee$}; \draw(-5,0.09) node {$\scriptstyle\vee$}; \draw[thick](6,0.09) node {$\scriptstyle\vee$}; \draw[thick](8 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](7 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (4,0) to [out=90,in=0](1.5,0.8 ) to [out=180,in=90](-1,0); \draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-0.8 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.1) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.1) to [out=180,in =-90] (-2,-0 ); \draw (6,-0 ) to [out=-90,in=180] (6.5,-0.5) to [out=0,in =-90] (7,0 ); \draw (-4,-0 ) to [out=-90,in=180] (-3.5,-0.5) to [out=0,in =-90] (-3,0 ); \draw (-3,-0 ) to [out=90,in=180] (1.5,1.5) to [out=0,in =90] (6,0 ); \draw (-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); \draw[densely dotted] (-5,-0 ) to [out=90,in=180] (1.5,2.2) to 
[out=0,in =90] (8,0 ); \draw[densely dotted] (-5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8,0 ); \draw[densely dotted] (-6.5,-0 ) to [out=90,in=180] (1.5,2.6) to [out=0,in =90] (9.5,0 ); \draw[densely dotted] (-6.5,-0 ) to [out=-90,in=180] (1.5,-2.6) to [out=0,in =-90] (9.5,0 ); \draw[thick](-6.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](9.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage}$$ where there are $x-2$ concentric dotted hollow circles in the middle of the diagram. Therefore we have that $\underline{\lambda} \nu_x \overline{\nu_x} \cdot \underline{\nu_x} \nu_x \overline{\lambda}$ is equal to $$\begin{aligned} \label{eqna4} %\phantom{+}\; \begin{minipage}{6.8cm} \begin{tikzpicture} [xscale=0.39,yscale=0.7] % [scale=0.5] \clip(-6.75,-2.8) rectangle (10,2.8); \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:3.5 ) coordinate (X); \draw[densely dotted](X)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:3.5 ) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-5,0.09) node {$\scriptstyle\vee$}; \draw[thick](8 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] 
(4,0) to [out=90,in=0](1.5,0.8 ) to [out=180,in=90](-1,0); \draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-0.8 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.1) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.1) to [out=180,in =-90] (-2,-0 ); % % % % % %\draw (6,-0 ) to [out=-90,in=180] (6.5,-0.5) to [out=0,in =-90] (7,0 ); %\draw (-4,-0 ) to [out=-90,in=180] (-3.5,-0.5) to [out=0,in =-90] (-3,0 ); % % % % %\draw (-3,-0 ) to [out=90,in=180] (1.5,1.5) to [out=0,in =90] (6,0 ); %\draw (-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); % % \draw[thick](7,0.09) node {$\scriptstyle\down$}; % \draw[thick](6 ,-0.09) node {$\scriptstyle\up$}; % \draw(-4,0.09) node {$\scriptstyle\down$}; % \draw[thick](-3 ,-0.09) node {$\scriptstyle\up$}; % \draw[very thick, magenta] (-3,-0 ) to [out=90,in=180] (1.5,1.5) to [out=0,in =90] (6,0 ); \draw[very thick, cyan] (-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); \draw[very thick, cyan] (-4,-0 ) to [out=-90,in=180] (1.5,-2) to [out=0,in =-90] (7,0 ); \draw[very thick, magenta] (-3,-0 ) to [out=-90,in=180] (1.5,-1.5) to [out=0,in =-90] (6,0 ); %%% \draw[thick,cyan](-4,0.09) node {$\scriptstyle\vee$}; \draw[thick,magenta](-3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[magenta](6,0.09) node {$\scriptstyle\vee$}; \draw[thick,cyan](7 ,-0.09) node {$\scriptstyle\wedge$}; % % % % \draw[densely dotted] (-5,-0 ) to [out=90,in=180] (1.5,2.2) to [out=0,in =90] (8,0 ); \draw[densely dotted] (-5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8,0 ); \draw[densely dotted] (-6.5,-0 ) to [out=90,in=180] (1.5,2.6) to [out=0,in =90] (9.5,0 ); \draw[densely dotted] (-6.5,-0 ) to [out=-90,in=180] (1.5,-2.6) to [out=0,in =-90] (9.5,0 ); \draw[thick](-6.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](9.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] 
(0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage} % \\ +\;\; \begin{minipage}{6.4cm} \begin{tikzpicture} [xscale=0.39,yscale=0.7] \clip(-6.75,-2.8) rectangle (10,2.8); \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:3.5 ) coordinate (X); \draw[densely dotted](X)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:3.5 ) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-5,0.09) node {$\scriptstyle\vee$}; \draw[thick](8 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (4,0) to [out=90,in=0](1.5,0.8 ) to [out=180,in=90](-1,0); \draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-0.8 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.1) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.1) to [out=180,in =-90] (-2,-0 ); % % % % % \draw[very thick, cyan] (-3,-0 ) to [out=90,in=180] (1.5,1.5) to [out=0,in =90] (6,0 ); \draw[very thick, magenta] (-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); \draw[very thick, magenta] (-4,-0 ) to [out=-90,in=180] (1.5,-2) to [out=0,in =-90] (7,0 ); \draw[very thick, cyan] (-3,-0 ) to [out=-90,in=180] (1.5,-1.5) to [out=0,in =-90] (6,0 ); %%% \draw[thick,magenta](7,0.09) node {$\scriptstyle\vee$}; \draw[thick,cyan](6 ,-0.09) node 
{$\scriptstyle\wedge$}; \draw[cyan](-3,0.09) node {$\scriptstyle\vee$}; \draw[thick, magenta](-4 ,-0.09) node {$\scriptstyle\wedge$}; % % % % % % \draw[densely dotted] (-5,-0 ) to [out=90,in=180] (1.5,2.2) to [out=0,in =90] (8,0 ); \draw[densely dotted] (-5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8,0 ); \draw[densely dotted] (-6.5,-0 ) to [out=90,in=180] (1.5,2.6) to [out=0,in =90] (9.5,0 ); \draw[densely dotted] (-6.5,-0 ) to [out=-90,in=180] (1.5,-2.6) to [out=0,in =-90] (9.5,0 ); \draw[thick](-6.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](9.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage}\;\;\;. \end{aligned}$$ We denote the first diagram in the sum by $D_{x-1}$ and the second by $D_x$, so $D_x$ is the diagram where the pink anticlockwise circle is at distance $x$ from the small innermost circles. (Note that this is consistent with our notation for $D_1$ above). 
Finally, for $x=m-1$ we have $$%\label{eqna4} \underline{\nu_{m-1}} \nu_{m-1} \overline{\lambda} = \begin{minipage}{7cm} % \begin{tikzpicture} [scale=0.76] % \begin{tikzpicture} [xscale=0.5,yscale=0.7] \begin{tikzpicture} [xscale=0.39,yscale=0.7] \clip(-6.75,-2.8) rectangle (10,2.8); \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:2.5 ) coordinate (X); \draw[densely dotted](X)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:2 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:2.5 ) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:2 ) coordinate (X); \draw[thick](2 , 0.09) node {$\scriptstyle\vee$}; \draw(0,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,0.09) node {$\scriptstyle\vee$}; \draw(-4,0.09) node {$\scriptstyle\vee$}; \draw(-5.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](7,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](8.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (4,0) to [out=90,in=0](1.5,0.8 ) to [out=180,in=90](-1,0); \draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-0.8 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.1) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.1) to [out=180,in =-90] (-2,-0 ); %\draw (6,-0 ) to [out=-90,in=180] (6.5,-0.5) to [out=0,in =-90] (7,0 ); %\draw (-4,-0 ) to [out=-90,in=180] (-3.5,-0.5) to [out=0,in =-90] (-3,0 ); % % % % %\draw (-3,-0 ) to [out=90,in=180] (1.5,1.5) to [out=0,in =90] (6,0 ); %\draw (-4,-0 ) to [out=90,in=180] (1.5,2) 
to [out=0,in =90] (7,0 ); \draw[densely dotted] (-3,-0 ) to [out=90,in=180] (1.5,1.8) to [out=0,in =90] (6,0 ); \draw[densely dotted] (-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); \draw[densely dotted] (-3,-0 ) to [out=-90,in=180] (1.5,-1.8) to [out=0,in =-90] (6,0 ); \draw[densely dotted] (-4,-0 ) to [out=-90,in=180] (1.5,-2) to [out=0,in =-90] (7,0 ); %\draw[densely dotted] (-5.5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8.5,0 ); \draw[densely dotted] (-5.5,-0 ) to [out=90,in=180] (1.5,2.2) to [out=0,in =90] (8.5,0 ); \draw[densely dotted] (-5.5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8.5,0 ); \draw (-6.5,-0 ) to [out=90,in=180] (1.5,2.6) to [out=0,in =90] (9.5,0 ); %\draw (-6.5,-0 ) to [out=-90,in=180] (1.5,-2.6) to [out=0,in =-90] (9.5,0 ); \draw (-6.5,-0 )--++(-90:2); \draw (9.5,0 )--++(-90:2);; \draw[thick](9.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](-6.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage}$$ in which only the outermost strand is clockwise oriented. 
This gives $\underline{\lambda} \nu_{m-1} \overline{\nu_{m-1}} \cdot \underline{\nu_{m-1}} \nu_{m-1} \overline{\lambda}$ is equal to $$\begin{minipage}{7cm} % \begin{tikzpicture} [scale=0.76] % \begin{tikzpicture} [xscale=0.5,yscale=0.7] \begin{tikzpicture} [xscale=0.39,yscale=0.7] \clip(-6.75,-2.8) rectangle (10,2.8); \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:2.5 ) coordinate (X); \draw[densely dotted](X)--++(180:0.75 ) coordinate (X); \draw (X)--++(180:2 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:2.5 ) coordinate (X); \draw[densely dotted](X)--++(0:0.75 ) coordinate (X); \draw (X)--++(0:2 ) coordinate (X); \draw[thick](2 , 0.09) node {$\scriptstyle\vee$}; \draw(0,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,0.09) node {$\scriptstyle\vee$}; \draw(-4,0.09) node {$\scriptstyle\vee$}; \draw(-5.5,0.09) node {$\scriptstyle\vee$}; \draw[thick](7,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](8.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (4,0) to [out=90,in=0](1.5,0.8 ) to [out=180,in=90](-1,0); \draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-0.8 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.1) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.1) to [out=180,in =-90] (-2,-0 ); %\draw (6,-0 ) to [out=-90,in=180] (6.5,-0.5) to [out=0,in =-90] (7,0 ); %\draw (-4,-0 ) to [out=-90,in=180] (-3.5,-0.5) to [out=0,in =-90] (-3,0 ); % % % % %\draw (-3,-0 ) to [out=90,in=180] (1.5,1.5) to [out=0,in =90] (6,0 ); %\draw 
(-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); \draw[densely dotted] (-3,-0 ) to [out=90,in=180] (1.5,1.8) to [out=0,in =90] (6,0 ); \draw[densely dotted] (-4,-0 ) to [out=90,in=180] (1.5,2) to [out=0,in =90] (7,0 ); \draw[densely dotted] (-3,-0 ) to [out=-90,in=180] (1.5,-1.8) to [out=0,in =-90] (6,0 ); \draw[densely dotted] (-4,-0 ) to [out=-90,in=180] (1.5,-2) to [out=0,in =-90] (7,0 ); %\draw[densely dotted] (-5.5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8.5,0 ); \draw[densely dotted] (-5.5,-0 ) to [out=90,in=180] (1.5,2.2) to [out=0,in =90] (8.5,0 ); \draw[densely dotted] (-5.5,-0 ) to [out=-90,in=180] (1.5,-2.2) to [out=0,in =-90] (8.5,0 ); \draw[ thick,magenta] (-6.5,-0 ) to [out=90,in=180] (1.5,2.6) to [out=0,in =90] (9.5,0 ); \draw[ thick,magenta] (-6.5,-0 ) to [out=-90,in=180] (1.5,-2.6) to [out=0,in =-90] (9.5,0 ); %\draw (-6.5,-0 )--++(-90:2); %\draw (9.5,0 )--++(-90:2);; \draw[thick,magenta](9.5,0.09) node {$\scriptstyle\vee$}; \draw[thick,magenta](-6.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.5) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=90,in=180] (2+0.5,0.5) to [out=0,in=90] (2+1,0) to [out=-90,in=0] (2+0.5,-0.5) to [out=180,in=-90] (2+0,0) ; \end{tikzpicture} \end{minipage}$$ which is equal to $D_{m-2}$. Replacing all terms into the right-hand side of ([\[selfdual-rectangle\]](#selfdual-rectangle){reference-type="ref" reference="selfdual-rectangle"}) we obtain $$(\underline{\lambda} \lambda\overline{\mu}\cdot \underline{\mu}\lambda\overline{\lambda} +2 D_1 ) + 2 \sum_{x=2}^{m-2} (-1)^{x+1} (D_{x-1}+D_x) + 2 (-1)^{m} D_{m-2} = \underline{\lambda} \lambda\overline{\mu}\cdot \underline{\mu}\lambda\overline{\lambda}$$ as required. We now consider the degenerate cases in which $r$ or $c$ is equal to 1. 
The $r=1=c$ case simply follows as $$% (-1)^{\sgnlamu} \underline{\lambda} \lambda\overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda}= \begin{minipage}{1.2 cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw(-0.25+1,0)--++(0:1.5) coordinate (X); \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw(1,1)-- (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0)--++(90:1); \draw(1,-2.5) -- (1,-1.5) to [out=90,in=180] (1.5, -1) to [out=0,in= 90] (2,-1.5)--++(-90:1); \draw(-0.25+1,-1.5)--++(0:1.5) coordinate (X); \draw(2,0.08-1.5) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \end{tikzpicture} \end{minipage}=0,$$ as required. It remains to consider the cases $\mu = (c), \lambda= (c-1)$ and $\mu = (1^r), \lambda= (1^{r-1})$. We deal with the first one; the second one is similar. Here the weight $\mu$ and $\lambda$ are of the form $$\mu= \left(\begin{minipage}{3.5cm} \begin{tikzpicture} \draw(-0.25,0)--++(0:2); \draw[densely dotted]( 1.75,0)--++(0:0.5); \draw ( 2.25,0)--++(0:1); % \draw [densely dotted ] ( 4.25,0)--++(0:0.5); % \draw ( 4.75,0)--++(0:1.5); \draw(0,0.09) node {$\scriptstyle\vee$}; \draw[thick](0.5,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1.5,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2.5 ,-0.09) node {$\scriptstyle\wedge$}; % \draw(3.5,0.09) node {$\scriptstyle\down$}; % \draw(4,0.09) node {$\scriptstyle\down$}; % \draw(5,0.09) node {$\scriptstyle\down$}; \draw(5.5,0.09) node {$\scriptstyle\down$}; % \draw(6,0.09) node {$\scriptstyle\down$}; \end{tikzpicture}\end{minipage}\right) \qquad \lambda= \left(\begin{minipage}{3.5cm} \begin{tikzpicture} \draw(-0.25,0)--++(0:2); \draw[densely dotted]( 1.75,0)--++(0:0.5); \draw ( 2.25,0)--++(0:1); % \draw [densely dotted ] ( 4.25,0)--++(0:0.5); % \draw ( 4.75,0)--++(0:1.5); \draw(0.5,0.09) node 
{$\scriptstyle\vee$}; \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1.5,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2.5 ,-0.09) node {$\scriptstyle\wedge$}; % \draw(3.5,0.09) node {$\scriptstyle\down$}; % \draw(4,0.09) node {$\scriptstyle\down$}; % \draw(5,0.09) node {$\scriptstyle\down$}; \draw(5.5,0.09) node {$\scriptstyle\down$}; % \draw(6,0.09) node {$\scriptstyle\down$}; \end{tikzpicture}\end{minipage}\right)$$ So we have that $$\underline{\mu} \lambda\overline{\lambda} = \begin{minipage}{3cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw(-0.25,0)--++(0:4.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw(1,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1); \draw (3,-1)--++( 90:2); \draw (4,-1)--++( 90:2); \draw (6,-1)--++( 90:2); \end{tikzpicture} \end{minipage}\;\;\;\;\;.$$ Note that here $\lambda$ only has one removable Dyck path, $Q^1$, which is adjacent to $P$, and using the same notation as in the generic case, we have $$\underline{\nu_1} \nu_1 \overline{\lambda} = \begin{minipage}{3.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw(-0.25,0)--++(0:4.75) coordinate (X); \draw[densely dotted](X)--++(0:1 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw(2,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node 
{$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0) to [out=-90,in=180] (2.5,-0.5) to [out=0,in=-90] (3,0) --++(90:1); \draw (1,0)--++(-90:1); \draw (0,-1)--++( 90:2); % \draw (3,-1)--++( 90:2); \draw (4,-1)--++( 90:2); \draw (6,-1)--++( 90:2); \end{tikzpicture} \end{minipage} \qquad\quad \text{or} \qquad\quad \underline{\nu_1} \nu_1 \overline{\lambda} = \begin{minipage}{1.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw(-0.25,0)--++(0:2.5) coordinate (X); % \draw[densely dotted](X)--++(0:1 ) coordinate (X); % \draw (X)--++(0:1 ) coordinate (X); \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; % \draw(1,0.09) node {$\scriptstyle\down$}; \draw[thick](2,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; % \draw[thick](1 ,-0.09) node {$\scriptstyle\up$}; % \draw[thick](4 ,-0.09) node {$\scriptstyle\up$}; % \draw[thick](6 ,-0.09) node {$\scriptstyle\up$}; % \draw (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1); \draw (1,0)--++(-90:1); \draw (0,0)--++(-90:1) (0,0)--++(90:1); %\draw (0,-1)--++( 90:2); % \draw (3,-1)--++( 90:2); \draw (4,-1)--++( 90:2); %\draw (6,-1)--++( 90:2); \end{tikzpicture} \end{minipage}$$ for $c\geqslant 3$ or $c=2$ respectively. 
Thus, for $c\geqslant 3$ we have $\underline{\lambda} \lambda\overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda}$ is equal to $$\begin{minipage}{3.2cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[thick](3 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-0.25,0)--++(0:4) coordinate (X); \draw[densely dotted] (X)--++(0:1) coordinate (X); \draw(X)--++(0:0.5) coordinate (X)--++(0:0.5); \draw(X)--++(90:1)--++(-90:3.5); \path(X)--++(-90:0.09) node {$\scriptstyle\wedge$}; \draw(-0.25,0-1.5)--++(0:4) coordinate (X); \draw[densely dotted] (X)--++(0:1) coordinate (X); \draw(X)--++(0:0.5) coordinate (X)--++(0:0.5); \path(X)--++(-90:0.09) node {$\scriptstyle\wedge$}; \draw(3 ,1)--(3,-2.5); \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw(1,0.09) node {$\scriptstyle\vee$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1); \draw (0,-2.5)--(0,-1.5) to [out= 90,in=180] (0.5,-1) to [out=0,in= 90] (1,-1.5) to [out=-90,in=180] (1.5, -2) to [out=0,in= -90] (2,-1.5)--++(90:1); \draw(-0.25,-1.5)--++(0:2.75) coordinate (X); \draw[thick](0 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw(1,0.08-1.5) node {$\scriptstyle\vee$}; \draw[thick](2 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \end{tikzpicture} \end{minipage} = \begin{minipage}{3.2cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[thick](3 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-0.25,0)--++(0:4) coordinate (X); \draw[densely dotted] (X)--++(0:1) coordinate (X); \draw(X)--++(0:0.5) coordinate (X)--++(0:0.5); \draw(X)--++(90:1)--++(-90:3.5); \path(X)--++(-90:0.09) node {$\scriptstyle\wedge$}; \draw(-0.25,0-1.5)--++(0:4) coordinate (X); \draw[densely dotted] (X)--++(0:1) coordinate (X); \draw(X)--++(0:0.5) coordinate (X)--++(0:0.5); \path(X)--++(-90:0.09) node 
{$\scriptstyle\wedge$}; \draw(3 ,1)--(3,-2.5); \draw[thick](3 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-0.25,0)--++(0:3.4) coordinate (X); \draw[densely dotted] (X)--++(0:0.5); \draw(-0.25,0-1.5)--++(0:3.4) coordinate (X); \draw[densely dotted] (X)--++(0:0.5); \draw(3 ,1)--(3,-2.5); \draw(-0.25,0)--++(0:2.75) coordinate (X); \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw(2,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,1)--(0,-1.5); \draw [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1); \draw (0,-2.5)--(0,-1.5); \draw(1,-0) -- (1,-1.5) to [out=-90,in=180] (1.5, -2) to [out=0,in= -90] (2,-1.5)--++(90:1); \draw(-0.25,-1.5)--++(0:2.75) coordinate (X); \draw[thick](0 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw(2,0.08-1.5) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](1 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \end{tikzpicture} \end{minipage} = \begin{minipage}{3.2cm} \begin{tikzpicture} [xscale=-0.44,yscale=0.7] \draw[thick](3 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw(-0.25,0)--++(0:3.4); % % \draw(-0.25,0)--++(180:0.15) coordinate (X); % \draw[densely dotted] (X)--++(180:0.5); % % % \draw(-0.25,-1.5)--++(180:0.15) coordinate (X); % \draw[densely dotted] (X)--++(180:0.5); % % \draw(-0.25,0)--++(180:0.15) coordinate (X); % \draw[densely dotted] (X)--++(180:0.5); % % % \draw(-0.25,-1.5)--++(180:0.15) coordinate (X); % \draw[densely dotted] (X)--++(180:0.5); \draw(0.25,0 ) coordinate (X); \draw[densely dotted ] (X)--++(180:1) coordinate (X); \draw(X)--++(180:0.5) coordinate (X)--++(180:0.5); \draw(X)--++(90:1)--++(-90:3.5); \path(X)--++(-90:0.09) node 
{$\scriptstyle\wedge$}; \draw(0.25,-1.5 ) coordinate (X); \draw[densely dotted ] (X)--++(180:1) coordinate (X); \draw(X)--++(180:0.5) coordinate (X)--++(180:0.5); % \draw(X)--++(90:1)--++(-90:3.5); \path(X)--++(-90:0.09) node {$\scriptstyle\wedge$}; % % % \draw(-0.25,0-1.5)--++(0:4) coordinate (X); % \draw[densely dotted] (X)--++(0:1) coordinate (X); % \draw(X)--++(0:0.5) coordinate (X)--++(0:0.5); % \path(X)--++(-90:0.09) node {$\scriptstyle\up$}; % % % \draw(3 ,1)--(3,-2.5); % \draw(-0.25,0-1.5)--++(0:3.4) coordinate (X); \draw(3 ,1)--(3,-2.5); \draw(-0.25,0)--++(0:2.75) coordinate (X); \draw[thick](0 ,-0.09) node {$\scriptstyle\wedge$}; \draw(1,0.09) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,1)--(0,0) to [out=-90,in=180] (0.5,-0.5) to [out=0,in=-90] (1,0) to [out=90,in=180] (1.5, 0.5) to [out=0,in= 90] (2,0)--++(-90:1); \draw (0,-2.5)--(0,-1.5) to [out= 90,in=180] (0.5,-1) to [out=0,in= 90] (1,-1.5) to [out=-90,in=180] (1.5, -2) to [out=0,in= -90] (2,-1.5)--++(90:1); \draw(-0.25,-1.5)--++(0:2.75) coordinate (X); \draw[thick](0 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \draw(1,0.08-1.5) node {$\scriptstyle\vee$}; % \draw[thick](1,-0.09) node {$\scriptstyle\up$}; % \draw[thick](1.5,-0.09) node {$\scriptstyle\up$}; \draw[thick](2 ,-0.09-1.5) node {$\scriptstyle\wedge$}; \end{tikzpicture} \end{minipage}$$ which is equal to $\underline{\lambda} \nu_1 \overline{\nu_1} \cdot \underline{\nu_1} \nu_1 \overline{\lambda}$ as required. The $c=2$ case can be handled in an identical fashion (except that we stop after the first equality and note that there are no up-strands on the right of the diagrams). Finally, take $(\lambda, \mu)$ with $\lambda= \mu -P$ to be any Dyck pair of degree 1 which is contractible at $k$, for some $k$. 
Then there exists $(\lambda', \mu')$ with $\lambda' = \mu' - P'$ such that $(\lambda, \mu) = (\varphi_k(\lambda'), \varphi_k(\mu'))$. For each $Q'\in {\rm DRem}(\lambda')$, $\nu' = \lambda' - Q'$ we write $\nu = \varphi_k(\nu') = \lambda- Q$. We can assume by induction that the self-dual relation holds for $(\lambda', \mu')$ and applying the dilation homomorphism we obtain $$\begin{array}{lll} (-1)^{{\sf sgn}(\lambda',\mu')} \underline{\lambda} \lambda\overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda} & = & 2 \displaystyle\sum_{ \begin{subarray}{c} \nu= \lambda- Q \\ P\prec Q \end{subarray}} (-1) ^{b(Q')+b(P')-1+ { {\sf sgn}(\nu' , \lambda')} } \underline{\lambda} \nu \overline{\nu} \cdot \underline{\nu} \nu \overline{\lambda}\\ && \ + \displaystyle \sum_{ \begin{subarray}{c} \nu= \lambda- Q \\ P \text { adj. } Q \end{subarray}} (-1) ^{b(Q')+b(P')-1+ { {\sf sgn}(\nu' , \lambda') } } \underline{\lambda} \nu \overline{\nu} \cdot \underline{\nu} \nu \overline{\lambda} \end{array}$$[\[primed\]]{#primed label="primed"} First consider the case where $k\in \underline{\sf cont}(P')$. Then we have ${\sf sgn}(\lambda', \mu' ) = {\sf sgn}(\lambda, \mu)$ and $b(P') = b(P)-1$. For each $Q$ such that $P\prec Q$, we have $b(Q') = b(Q)-1$ and ${\sf sgn}(\nu', \lambda') = {\sf sgn}(\nu , \lambda)$ so we have $$\begin{aligned} (-1)^{b(Q')+b(P')-1+{\sf sgn}(\nu', \lambda')} &=& (-1)^{b(Q)-1+b(P)-1-1+{\sf sgn}(\nu, \lambda)}\\ &=& (-1)^{b(Q)+b(P)-1+{\sf sgn}(\nu, \lambda)}. \end{aligned}$$ On the other hand, for each $Q$ adjacent to $P$, we have $b(Q') = b(Q)$ and ${\sf sgn}(\nu', \lambda') = {\sf sgn}(\nu , \lambda)\pm 1$ and so we have $$\begin{aligned} (-1)^{b(Q')+b(P')-1+{\sf sgn}(\nu', \lambda')} &=& (-1)^{b(Q)+b(P)-1-1+{\sf sgn}(\nu, \lambda) \pm 1}\\ &=& (-1)^{b(Q)+b(P)-1+{\sf sgn}(\nu, \lambda)}. \end{aligned}$$ So we obtain the required relation for $(\lambda, \mu)$. It remains to consider the case where $k\notin \underline{\sf cont}(P')$. 
In this case we have ${\sf sgn}(\lambda', \mu' ) = {\sf sgn}(\lambda, \mu)\pm 1$ and $b(P') = b(P)$. Let $Q'\in {\rm DRem}(\lambda')$. If $k\notin \underline{\sf cont}(Q')$ then we have $b(Q') = b(Q)$ and ${\sf sgn}(\nu', \lambda' ) = {\sf sgn}(\nu , \lambda)\pm 1$ and so we have $$(-1)^{b(Q')+b(P')-1+{\sf sgn}(\nu', \lambda')} = (-1)^{b(Q)+b(P)-1+{\sf sgn}(\nu, \lambda)\pm 1}.$$ If $k\in \underline{\sf cont}(Q')$ then we have $b(Q') = b(Q)-1$ and ${\sf sgn}(\nu', \lambda' ) = {\sf sgn}(\nu , \lambda)$ and so we have $$(-1)^{b(Q')+b(P')-1+{\sf sgn}(\nu', \lambda')} = (-1)^{b(Q)-1+b(P)-1+{\sf sgn}(\nu, \lambda)}.$$ Thus multiplying equation ([\[primed\]](#primed){reference-type="ref" reference="primed"}) by $-1$ gives the required equation for $(\lambda, \mu)$. ◻ **Proposition 52** (The commuting relations). *Let $P,Q\in {\rm DRem}(\mu)$ with $P\prec Q$ be such that $P$ and $Q$ commute, and let $\lambda=\mu-P$, $\nu=\mu-Q$, and $\alpha=\mu-P-Q$. We have that $$\begin{aligned} i^{{\sf sgn}(\alpha,\lambda)+{\sf sgn}(\lambda,\mu)} \underline{\alpha} \alpha \overline{\lambda} \cdot \underline{\lambda} \lambda\overline{\mu} & = i^{{\sf sgn}(\alpha,\nu)+{\sf sgn}(\nu,\mu)} \underline{\alpha} \alpha \overline{\nu} \cdot \underline{\nu} \nu \overline{\mu} \label{distcomm1} \\ i^{{\sf sgn}(\lambda,\mu)+{\sf sgn}(\nu,\mu)} \underline{\nu} \nu \overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda} & = i^{{\sf sgn}(\alpha,\lambda)+{\sf sgn}(\alpha,\nu)} \underline{\nu} \alpha \overline{\alpha} \cdot \underline{\alpha} \alpha \overline{\lambda}. \label{distcomm2}\end{aligned}$$* *Proof.* First note that ${\sf sgn}(\lambda, \mu) = {\sf sgn}(\alpha , \nu)$ as $\lambda= \mu - P$ and $\alpha = \nu - P$. Similarly, ${\sf sgn}(\alpha, \lambda) = {\sf sgn}(\nu , \mu)$. Thus we can cancel the signs on both sides of each equation. Now, if $P$ and $Q$ are distant then the result follows directly using [Lemma 49](#local-surg){reference-type="ref" reference="local-surg"}.
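As an aside, the parity bookkeeping used in the proof of the self-dual relation above reduces to the observation that shifting the exponent of $(-1)^e$ by an even integer does not change its value. The following is a minimal illustrative check of the two displayed exponent comparisons (the sample ranges below are arbitrary):

```python
# Illustrative parity check for the sign bookkeeping in the induction step
# above: shifting the exponent of (-1)^e by an even integer leaves the
# value unchanged, which is all that the two displayed computations use.

def sign(e: int) -> int:
    """(-1)**e for an integer exponent e."""
    return 1 if e % 2 == 0 else -1

# Case k in cont(P') with P < Q: b(Q') = b(Q) - 1, b(P') = b(P) - 1,
# and sgn(nu', la') = sgn(nu, la); the two exponents differ by 2.
for bP in range(1, 5):
    for bQ in range(1, 5):
        for s in range(-3, 4):
            assert sign((bQ - 1) + (bP - 1) - 1 + s) == sign(bQ + bP - 1 + s)

# Case k in cont(P') with P adjacent to Q: b(Q') = b(Q) and
# sgn(nu', la') = sgn(nu, la) +- 1; the exponents differ by 0 or 2.
for bP in range(1, 5):
    for bQ in range(1, 5):
        for s in range(-3, 4):
            for eps in (+1, -1):
                assert sign(bQ + (bP - 1) - 1 + s + eps) == sign(bQ + bP - 1 + s)

print("parity checks pass")
```

Both loops confirm that the exponents compared in the two displayed identities differ by an even integer, so the corresponding powers of $-1$ agree.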
It remains to consider the case where $P\prec Q$ and they commute. We first focus on the incontractible case in which $\mu=(c^r)$ is a rectangle for $r,c>2$ and $b(P)=1$ and $3 \leqslant b(Q) \leqslant\min\{r,c\}$ (note that we must have $b(Q)>2$ in order to commute with $P$). Set $m = \min \{r,c\}$ and assume that $b(Q)=m$. We have that $\lambda=(c^{r-1},c-1)$, $\nu=((c-1)^{r-1}, r-m)$, and $\alpha=((c-1)^{r-2},c-2, r-m)$. Thus we have $\underline{\nu}\nu \overline{\mu} \cdot \underline{\mu}\lambda\overline{\lambda}$ is equal to $$\label{distcomm} % \underline{\la} \la \overline{\mu}\cdot % \underline{\mu} \nu \overline{\nu} %\;\; =\;\; \begin{minipage}{5.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw(-3,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in=0] (1.5, -0.75) to [out=180,in= -90] (0,0) ; \draw[densely dotted] (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw[densely dotted] (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.5) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.5) to 
[out=180,in =-90] (-2,-0 ); \draw (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-3,0 ); \draw (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-3,-0 ); \draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09-4) node {$\scriptstyle\vee$}; \draw(1,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(6,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](2 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,-0.09-4) node {$\scriptstyle\wedge$}; % \draw % (0,-4) % to [out=-90,in=180] (0.5,-4-0.5) to [out=0,in=-90] (1,-4-0) % (1,-4-0) % to [out=90,in=180] (1.5, -4+0.5) to [out=0,in= 90] (2,-4-0) % to [out=-90,in=180] (2.5,-4-0.5) to [out=0,in=-90] (3,-4-0) % to [out=90,in=0] (1.5, -4+0.75) to [out=180,in= 90] (0,-4-0) % ; \draw (2,-4-0) to [out=90,in=0](1.5,0.5-4 ) to [out=180,in=90](1,-4-0); %--++( 90:2); \draw (2,-4-0) to [out=-90,in=0](1.5,-0.5-4 ) to [out=180,in=-90](1,-4-0); \draw (3,-4-0) to [out=90,in=0](1.5,0.75-4 ) to [out=180,in=90](0,-4-0); %--++( 90:2); \draw (3,-4-0) to [out=-90,in=0](1.5,-0.75-4 ) to [out=180,in=-90](0,-4-0); \draw[densely dotted] (4,-4-0) to [out=90,in=0](1.5,1-4 ) to [out=180,in=90](-1,-4-0); %--++( 90:2); \draw[densely dotted] (4,-4-0) to [out=-90,in=0](1.5,-1-4 ) to [out=180,in=-90](-1,-4-0); \draw[densely dotted] (5,-4-0 ) to [out=90,in=0] (1.5,-4+1.5) to [out=180,in =90] (-2,-4-0 ); \draw[densely dotted] (5,-4-0 ) to [out=-90,in=0] (1.5,-4-1.5) to [out=180,in =-90] (-2,-4-0 ); \draw (6,0-4 ) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3,0-4 )--++(-90:1.5); \draw 
(6,0-4 ) --++(-90:1.5); \draw[white,fill=white] (1.5,-0.4) circle (5.75pt); \path (1.5,-0.45) node {\scalefont{0.7 }$P$}; \draw[white,fill=white] (1.5,-4+1.75) circle (6.7pt); \path (1.5,-4+1.75) node {\scalefont{0.7 }$Q$}; \end{tikzpicture} \end{minipage} \!\!\!\!\!\! =\;\;\;\;
\begin{minipage}{5.5cm}
\begin{tikzpicture} [xscale=0.5,yscale=0.7]
\draw (0,-4-0) to [out=90,in=180] (0.5,-4+0.5) to [out=0,in=90] (1,-4-0) (1,-4-0) to [out=-90,in=180] (1.5, -4-0.5) to [out=0,in= -90] (2,-4-0) to [out=90,in=180] (2.5,-4+0.5) to [out=0,in=90] (3,-4+0) to [out=-90,in=-4+0] (1.5, -4+-0.75) to [out=180,in= -90] (0,-4+0) ;
\draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X);
\draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09-4) node {$\scriptstyle\vee$}; \draw(2,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(6,0.09-4) node {$\scriptstyle\vee$};
\draw[thick](1 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,-0.09-4) node {$\scriptstyle\wedge$};
\draw[densely dotted] (4,-4-0) to [out=90,in=0](1.5,1-4 ) to [out=180,in=90](-1,-4-0);
\draw[densely dotted] (4,-4-0) to [out=-90,in=0](1.5,-1-4 ) to [out=180,in=-90](-1,-4-0);
\draw[densely dotted] (5,-4-0 ) to [out=90,in=0] (1.5,-4+1.5) to [out=180,in =90] (-2,-4-0 );
\draw[densely dotted] (5,-4-0 ) to [out=-90,in=0] (1.5,-4-1.5) to [out=180,in =-90] (-2,-4-0 );
\draw (6,0-4 ) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3,0-4 )--++(-90:1.5);
\draw (6,0-4 ) --++(-90:1.5);
\draw[white,fill=white] (1.5,-4.4) circle (5.75pt); \path (1.5,-4.45) node {\scalefont{0.7 }$P$};
\draw[white,fill=white] (1.5,-4+1.75) circle (6.7pt); \path (1.5,-4+1.75) node {\scalefont{0.7 }$Q$};
\end{tikzpicture}
\end{minipage}$$ while $\underline{\nu}\alpha \overline{\alpha} \cdot \underline{\alpha}\alpha \overline{\lambda}$ is equal to $$\begin{minipage}{5.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7]
\draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09-4) node {$\scriptstyle\vee$}; \draw(2,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(6,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw (0,0-4) to [out=90,in=180] (0.5,0.5-4) to [out=0,in=90] (1,0-4) (1,0-4) to [out=-90,in=180] (1.5, -0.5-4) to [out=0,in= -90] (2,0-4) to [out=90,in=180] (2.5,0.5-4) to [out=0,in=90] (3,0-4) to [out=-90,in=0] (1.5, -0.75-4) to [out=180,in= -90] (0,0-4) ; \draw[densely dotted] (4,0-4) to [out=90,in=0](1.5,1-4 ) to [out=180,in=90](-1,0-4); %--++( 90:2); \draw[densely dotted] (4,0-4) to [out=-90,in=0](1.5,-1-4 ) to [out=180,in=-90](-1,0-4); \draw[densely dotted] (5,0-4 ) to [out=90,in=0] (1.5,1.5-4) to [out=180,in =90] (-2,0-4 ); \draw[densely dotted] (5,-0-4 ) to [out=-90,in=0] (1.5,-1.5-4) to [out=180,in =-90] (-2,-0-4 ); \draw (6,0-4 ) -- +(-90:1.5) -- +(90:1.5); \draw (-3,-0-4 ) -- +(-90:1.5) -- +(90:1.5); \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw(6,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) 
node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,-0.09) node {$\scriptstyle\wedge$}; % \draw % (0,-4) % to [out=-90,in=180] (0.5,-4-0.5) to [out=0,in=-90] (1,-4-0) % (1,-4-0) % to [out=90,in=180] (1.5, -4+0.5) to [out=0,in= 90] (2,-4-0) % to [out=-90,in=180] (2.5,-4-0.5) to [out=0,in=-90] (3,-4-0) % to [out=90,in=0] (1.5, -4+0.75) to [out=180,in= 90] (0,-4-0) % ; \draw (3,-0) to [out=90,in=0](2.5,0.5 ) to [out=180,in=90](2,-0); %--++( 90:2); \draw (3,-0) to [out=-90,in=0](2.5,-0.5 ) to [out=180,in=-90](2,-0); \draw (1,-0) to [out=90,in=0](0.5,0.5 ) to [out=180,in=90](0,-0); %--++( 90:2); \draw (1,-0) to [out=-90,in=0](0.5,-0.5 ) to [out=180,in=-90](0,-0); \draw[densely dotted] (4,-0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,-0); %--++( 90:2); \draw[densely dotted] (4,-0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,-0); \draw[densely dotted] (5,-0 ) to [out=90,in=0] (1.5,+1.5) to [out=180,in =90] (-2,-0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.5) to [out=180,in =-90] (-2,-0 ); \draw (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-3,0 )--++(-90:1.5); \draw (6,0 ) --++(-90:1.5); %\draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-3,-0 ); \draw[white,fill=white] (1.5,-4.4) circle (5.75pt); \path (1.5,-4.45) node {\scalefont{0.7 }$P$}; \draw[white,fill=white] (1.5,-0+1.75) circle (6.7pt); \path (1.5,-0+1.75) node {\scalefont{0.7 }$Q$}; \end{tikzpicture} \end{minipage} \!\!\!\!\!\! 
=\;\;\;\; \begin{minipage}{5.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-4-0) to [out=90,in=180] (0.5,-4+0.5) to [out=0,in=90] (1,-4-0) (1,-4-0) to [out=-90,in=180] (1.5, -4-0.5) to [out=0,in= -90] (2,-4-0) to [out=90,in=180] (2.5,-4+0.5) to [out=0,in=90] (3,-4+0) to [out=-90,in=-4+0] (1.5, -4+-0.75) to [out=180,in= -90] (0,-4+0) ; \draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09-4) node {$\scriptstyle\vee$}; \draw(2,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(6,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](-3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[densely dotted] (4,-4-0) to [out=90,in=0](1.5,1-4 ) to [out=180,in=90](-1,-4-0); %--++( 90:2); \draw[densely dotted] (4,-4-0) to [out=-90,in=0](1.5,-1-4 ) to [out=180,in=-90](-1,-4-0); \draw[densely dotted] (5,-4-0 ) to [out=90,in=0] (1.5,-4+1.5) to [out=180,in =90] (-2,-4-0 ); \draw[densely dotted] (5,-4-0 ) to [out=-90,in=0] (1.5,-4-1.5) to [out=180,in =-90] (-2,-4-0 ); \draw (6,0-4 ) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3,0-4 )--++(-90:1.5); \draw (6,0-4 ) --++(-90:1.5); \draw[white,fill=white] (1.5,-4.4) circle (5.75pt); \path (1.5,-4.45) node {\scalefont{0.7 }$P$}; \draw[white,fill=white] (1.5,-4+1.75) circle (6.7pt); \path (1.5,-4+1.75) node {\scalefont{0.7 }$Q$}; \end{tikzpicture} \end{minipage}$$ In both equations there are $r-3$ dotted anti-clockwise circles in each diagram; and equality follows from applying the $1\otimes 1 \mapsto 1$ rule a total of $r$ times as 
required. A very similar calculation proves [\[distcomm1\]](#distcomm1){reference-type="ref" reference="distcomm1"}. The other cases, where $3\leqslant b(Q) \leqslant m-1$, are completely analogous, except that the large arc in $\underline{\mu} \nu \overline{\nu}$ and $\underline{\lambda} \alpha \overline{\alpha}$ forms part of a zigzag or a brace. Finally, the general case follows directly by applying the dilation homomorphism. ◻ **Proposition 53** (The non-commuting relation). *Let $P,Q\in {\rm DRem}(\mu)$ with $P\prec Q$ be such that $P$ and $Q$ do not commute, and let $\lambda=\mu-P$ and $\nu=\mu-Q$. Then $Q\setminus P = Q^1 \sqcup Q^2$, where $Q^1, Q^2 \in {\rm DRem}(\mu - P)$, and we set $\alpha=\lambda-Q^1$ and $\beta=\lambda-Q^2$. We have that $$\label{thisoney} i^{{\sf sgn}(\lambda,\mu)+{\sf sgn}(\nu,\mu)} \underline{\nu} \nu \overline{\mu} \cdot \underline{\mu} \lambda\overline{\lambda} = i^{{\sf sgn}(\alpha,\lambda)+{\sf sgn}(\nu,\alpha)} \underline{\nu} \nu \overline{\alpha} \cdot \underline{\alpha} \alpha \overline{\lambda} = i^{{\sf sgn}(\beta,\lambda)+{\sf sgn}(\nu,\beta)} \underline{\nu} \nu \overline{\beta} \cdot \underline{\beta} \beta \overline{\lambda}.$$* *Proof.* We start by proving that all the signs in the equation are equal. Assume that $P$ corresponds to a cup connecting $i_P -\tfrac{1}{2}$ and $j_P +\tfrac{1}{2}$ and that $Q$ corresponds to a cup connecting $i_Q -\tfrac{1}{2}$ and $j_Q +\tfrac{1}{2}$. Then $Q^1$ corresponds to a cup connecting $i_Q - \tfrac{1}{2}$ and $(i_P +1 ) -\tfrac{1}{2}$ and $Q^2$ corresponds to a cup connecting $(j_P +1)-\tfrac{1}{2}$ and $j_Q + \tfrac{1}{2}$.
This implies that $$\begin{aligned} {\sf sgn}(\alpha, \lambda) + {\sf sgn}(\nu , \alpha) &=& {\sf sgn}(\beta , \lambda) + {\sf sgn}(\nu , \beta) \\ &=& \tfrac{1}{2}({i_Q + i_P - 1}) + \tfrac{1}{2}({j_P + 1 + j_Q}) = \tfrac{1}{2}({i_P + j_P}) + \tfrac{1}{2}({i_Q + j_Q})\\ &=& {\sf sgn}(\lambda, \mu) + {\sf sgn}(\nu , \mu). \end{aligned}$$ Thus we can restrict our attention to the incontractible case as the general case will follow directly by applying the dilation homomorphism. So we can assume that $\mu=(c^r)$ is a rectangle for $r,c>1$ and $b(P)=1$ and $b(Q)=2$. For the $r=2=c$ case, $\lambda=(2,1)$, $\mu=(2^2)$, $\nu=(1)$ and we can choose $\alpha=(2)$ and $\beta=(1^2)$. Here we have that $\underline{\nu}\nu \overline{\mu} \cdot \underline{\mu}\lambda\overline{\lambda}$ is given by $$\begin{minipage}{2.4cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7,scale=1.1] \draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[white] (0,0)--++(90:1); \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in=0] (1.5, -0.75) to [out=180,in= -90] (0,0) ; \draw(-0.25,-2)--++(0:3.75) coordinate (X); \draw[thick](1 , 0.09-2) node {$\scriptstyle\vee$}; \draw(3,0.09-2) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09-2) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09-2) node {$\scriptstyle\wedge$}; \draw (1,-2-0) to [out=90,in=180] (1.5, -2+0.5) to [out=0,in= 90] (2,-2-0) to [out=-90,in= 0] (1.5, -2-0.5) to [out=180,in= -90] (1,-2-0) (3,-3) -- (3,-2-0) to [out=90,in=0] (1.5, -2+0.75) to [out=180,in= 90] (0,-2-0)--++(-90:1) ; \draw[white,fill=white] (1.5,-0.4) circle (5.75pt); \path (1.5,-0.45) node {\scalefont{0.7 }$P$}; \draw[white,fill=white] (1.5,-2+0.78) 
circle (7.5pt); \path (1.5,-2+0.75) node {\scalefont{0.7 }$Q$}; \end{tikzpicture} \end{minipage} =\; \; \begin{minipage}{2.3cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7,scale=1.1] \draw[white] (0,-1)--(0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) to [out=-90,in=180] (1.5, -0.5) to [out=0,in= -90] (2,0)--++(90:1); \draw[white](0,0) --++(90:1); \draw(-0.25,0)--++(0:3.5); \draw(1,0.09) node {$\scriptstyle\vee$}; \draw(3,0.09) node {$\scriptstyle\vee$}; \draw[thick](0,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.6) to [out=0,in=90] (1,0) to [out=-90,in=180] (1.5, -0.6) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.6) to [out=0,in=90] (3,0) --++(-90:1); \draw(0,0) --++(-90:1); \draw[white,fill=white] (0.5,0.6) circle (7.75pt); \path (0.5, 0.6) node {\scalefont{0.7 }$\phantom{_{1}}Q^1$}; \draw[white,fill=white] (2.5,0.6) circle (7.75pt); \path (2.5, 0.6) node {\scalefont{0.7 }$\phantom{_{1}}Q^2$}; % % \draw[white,fill=white] (1.5,-0.4) circle (5.75pt); % \path (1.5,-0.45) node {\scalefont{0.7 }$P$}; % % % \draw[white,fill=white] (1.5,-2+0.78) circle (7.5pt); % \path (1.5,-2+0.75) node {\scalefont{0.7 }$Q$}; \end{tikzpicture} \end{minipage} \; =\;\; \begin{minipage}{2.4cm} \begin{tikzpicture} [xscale=-0.5,yscale=-0.7,scale=1.1] \draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](1 ,0.09) node {$\scriptstyle\wedge$}; \draw[thick](0 , -0.09) node {$\scriptstyle\vee$}; \draw[thick](3 ,0.09) node {$\scriptstyle\wedge$}; \draw(2,-0.09) node {$\scriptstyle\vee$}; \draw[thick](1 , -0.09-2) node {$\scriptstyle\vee$}; \draw(2,-0.09-2) node {$\scriptstyle\vee$}; \draw[thick](0 ,0.09-2) node {$\scriptstyle\wedge$}; \draw[thick](3 ,0.09-2) node {$\scriptstyle\wedge$}; \draw (0,1) --(0,0) to [out=-90,in=180] (0.5,-0.6) to [out=0,in=-90] (1,0) to [out= 90,in=180] (1.5, 0.6) to [out=0,in= 90] (2,0)--++(-90:2); \path(0,-3)--++(90:1); \draw(-0.25,-2)--++(0:3.75) coordinate (X); 
\draw(0,-2) to [out=90,in=180] (0.5,-1.5) to [out=0,in=90](1,-2) ; \draw(0,-2) to [out=-90,in=180] (0.5,-2.5) to [out=0,in=-90](1,-2) ; \draw(2,-2) to [out=-90,in=180] (2.5,-2.5) to [out=0,in=-90](3,-2)--++(90:3); \draw[white,fill=white] (0.5,-0.6) circle (7.75pt); \path (0.5, -0.6) node {\scalefont{0.7 }$\phantom{_{1}}Q^2$}; \draw[white,fill=white] (2.5,-0.6-2) circle (7.75pt); \path (2.5, -0.6-2) node {\scalefont{0.7 }$\phantom{_{1}}Q^1$}; \end{tikzpicture} \end{minipage} \; =\;\; \begin{minipage}{2.4cm} \begin{tikzpicture} [xscale=0.5,yscale=-0.7,scale=1.1] \draw(-0.25,0)--++(0:3.75) coordinate (X); \draw[thick](0 ,0.09) node {$\scriptstyle\wedge$}; \draw[thick](1 , -0.09) node {$\scriptstyle\vee$}; \draw[thick](2 ,0.09) node {$\scriptstyle\wedge$}; \draw(3,-0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , -0.09-2) node {$\scriptstyle\vee$}; \draw(3,-0.09-2) node {$\scriptstyle\vee$}; \draw[thick](1 ,0.09-2) node {$\scriptstyle\wedge$}; \draw[thick](2 ,0.09-2) node {$\scriptstyle\wedge$}; \draw (0,1) --(0,0) to [out=-90,in=180] (0.5,-0.6) to [out=0,in=-90] (1,0) to [out= 90,in=180] (1.5, 0.6) to [out=0,in= 90] (2,0)--++(-90:2); \path(0,-3)--++(90:1); \draw(-0.25,-2)--++(0:3.75) coordinate (X); \draw(0,-2) to [out=90,in=180] (0.5,-1.5) to [out=0,in=90](1,-2) ; \draw(0,-2) to [out=-90,in=180] (0.5,-2.5) to [out=0,in=-90](1,-2) ; \draw(2,-2) to [out=-90,in=180] (2.5,-2.5) to [out=0,in=-90](3,-2)--++(90:3); \draw[white,fill=white] (2.5,-0.6-2) circle (7.75pt); \path (2.5, -0.6-2) node {\scalefont{0.7 }$\phantom{_{1}}Q^2$}; \draw[white,fill=white] (0.5,-0.6 ) circle (7.75pt); \path (0.5, -0.6 ) node {\scalefont{0.7 }$\phantom{_{1}}Q^1$}; \end{tikzpicture} \end{minipage}$$ where the last two terms are equal to $\underline{\nu} \nu \overline{\alpha}\cdot \underline{\alpha}\alpha\overline{\lambda}$ and $\underline{\nu}\nu \overline{\beta}\cdot \underline{\beta} \beta \overline{\lambda}$ as required. Now suppose $r,c>2$. 
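As a quick numerical sanity check of the sign computation at the start of this proof, the following illustrative Python sketch re-evaluates both sides of the displayed identity for a few arbitrary choices of the cup endpoints $i_P, j_P, i_Q, j_Q$ (the sample values are not taken from the text):

```python
from fractions import Fraction

# Restates the displayed computation of the sign sums numerically:
#   1/2 (i_Q + i_P - 1) + 1/2 (j_P + 1 + j_Q)
#     = 1/2 (i_P + j_P) + 1/2 (i_Q + j_Q),
# i.e. sgn(alpha,la) + sgn(nu,alpha) = sgn(la,mu) + sgn(nu,mu).

def signs_agree(iP, jP, iQ, jQ):
    """Check the displayed identity for one choice of cup endpoints."""
    lhs = Fraction(iQ + iP - 1, 2) + Fraction(jP + 1 + jQ, 2)
    rhs = Fraction(iP + jP, 2) + Fraction(iQ + jQ, 2)
    return lhs == rhs

# sample endpoints (arbitrary, chosen with i_Q <= i_P <= j_P <= j_Q
# as for nested cups)
assert all(signs_agree(*t) for t in [(2, 3, 1, 4), (3, 5, 0, 8), (2, 6, 2, 6)])
print("sign identity checks pass")
```

Since the two sides are equal as polynomials in the endpoints, the check succeeds for every choice of values.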
We can safely ignore vertical strands on the left or right as they will not affect the multiplication. In other words, we may assume in our pictures that $r=c$. We have that $\lambda=(r^{r-1},r-1)$ and $\nu=(r^{r-2},r-1,r-2)$. We set $\alpha=(r^{r-1},r-2)$ and $\beta= (r^{r-2},(r-1)^2)$. Here we have that $\underline{\nu} \nu \overline{\mu}\cdot \underline{\mu}\lambda\overline{\lambda}$ is given by $$% \underline{\la} \la \overline{\mu}\cdot % \underline{\mu} \nu \overline{\nu} %\;\; =\;\; \begin{minipage}{5.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw(-3,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw (0,0) to [out=90,in=180] (0.5,0.5) to [out=0,in=90] (1,0) (1,0) to [out=-90,in=180] (1.5, -0.6) to [out=0,in= -90] (2,0) to [out=90,in=180] (2.5,0.5) to [out=0,in=90] (3,0) to [out=-90,in=0] (1.5, -0.85) to [out=180,in= -90] (0,0) ; \draw (4,0) to [out=90,in=0](1.5,1 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw (4,0) to [out=-90,in=0](1.5,-1 ) to [out=180,in=-90](-1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.5) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.5) to [out=180,in =-90] (-2,-0 ); \draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-3,0 ); \draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to 
[out=180,in =-90] (-3,-0 ); \draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](3 , 0.09-4) node {$\scriptstyle\vee$}; \draw(1,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(-3,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](2 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](0 ,-0.09-4) node {$\scriptstyle\wedge$}; % \draw % (0,-4) % to [out=-90,in=180] (0.5,-4-0.5) to [out=0,in=-90] (1,-4-0) % (1,-4-0) % to [out=90,in=180] (1.5, -4+0.5) to [out=0,in= 90] (2,-4-0) % to [out=-90,in=180] (2.5,-4-0.5) to [out=0,in=-90] (3,-4-0) % to [out=90,in=0] (1.5, -4+0.75) to [out=180,in= 90] (0,-4-0) % ; \draw (2,-4-0) to [out=90,in=0](1.5,0.5-4 ) to [out=180,in=90](1,-4-0); %--++( 90:2); \draw (2,-4-0) to [out=-90,in=0](1.5,-0.5-4 ) to [out=180,in=-90](1,-4-0); \draw (3,-4-0) to [out=90,in=0](1.5,0.75-4 ) to [out=180,in=90](0,-4-0); %--++( 90:2); % \draw (3,-4-0) to [out=-90,in=0](1.5,-0.75-4 ) to [out=180,in=-90](0,-4-0); \draw (4,-4-0) to [out=90,in=0](1.5,1.2-4 ) to [out=180,in=90](-1,-4-0); %--++( 90:2); \draw (4,-4-0) to [out=-90,in=0](3.5,-0.5-4 ) to [out=180,in=-90](3,-4-0); \draw (0,-4-0) to [out=-90,in=0](-0.5,-0.5-4 ) to [out=180,in=-90](-1,-4-0); \draw[densely dotted] (5,-4-0 ) to [out=90,in=0] (1.5,-4+1.5) to [out=180,in =90] (-2,-4-0 ); \draw[densely dotted] (5,-4-0 ) to [out=-90,in=0] (1.5,-4-1.5) to [out=180,in =-90] (-2,-4-0 ); \draw[densely dotted] (6,0-4 ) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3,0-4 ); \draw[densely dotted] (6,0-4 ) to [out=-90,in=0] (1.5,-1.75-4) to [out=180,in =-90] (-3,0-4 ); 
\draw[white,fill=white] (1.5,-0.55) circle (5.85pt); \path (1.5,-0.525) node {\scalefont{0.7 }$P$}; \draw[white,fill=white] (1.5,-4+0.78) circle (7.5pt); \path (1.5,-4+0.75) node {\scalefont{0.7 }$Q$}; \end{tikzpicture} \end{minipage} \!\!\!\!\!\!\!\!\!\!\!\!\! = \begin{minipage}{5.5cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-4-0) to [out=90,in=180] (0.5,-4+0.6) to [out=0,in=90] (1,-4-0) (1,-4-0) to [out=-90,in=180] (1.5, -4-0.6) to [out=0,in= -90] (2,-4-0) to [out=90,in=180] (2.5,-4+0.6) to [out=0,in=90] (3,-4+0) ; \draw[white,fill=white] (0.5,0.6-4) circle (7.75pt); \path (0.5, 0.6-4) node {\scalefont{0.7 }$\phantom{_{1}}Q^1$}; \draw[white,fill=white] (2.5,0.6-4) circle (7.75pt); \path (2.5, 0.6-4) node {\scalefont{0.7 }$\phantom{_{1}}Q^2$}; \draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](1 , 0.09-4) node {$\scriptstyle\vee$}; \draw(3,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(-3,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw (4,-4-0) to [out=90,in=0](1.5,1.2-4 ) to [out=180,in=90](-1,-4-0); %--++( 90:2); \draw (4,-4-0) to [out=-90,in=0](3.5,-0.5-4 ) to [out=180,in=-90](3,-4-0); \draw (0,-4-0) to [out=-90,in=0](-0.5,-0.5-4 ) to [out=180,in=-90](-1,-4-0); \draw[densely dotted] (5,-4-0 ) to [out=90,in=0] (1.5,-4+1.5) to [out=180,in =90] (-2,-4-0 ); \draw[densely dotted] (5,-4-0 ) to [out=-90,in=0] (1.5,-4-1.5) to [out=180,in =-90] (-2,-4-0 ); \draw[densely dotted] (6,0-4 ) to [out=90,in=0] 
(1.5,1.75-4) to [out=180,in =90] (-3,0-4 ); \draw[densely dotted] (6,0-4 ) to [out=-90,in=0] (1.5,-1.75-4) to [out=180,in =-90] (-3,0-4 ); \end{tikzpicture} \end{minipage} \!\!\!\!\!\!\!\! = \begin{minipage}{5.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw[densely dotted](-1.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,0)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , -0.09) node {$\scriptstyle\wedge$}; \draw(2,0.09) node {$\scriptstyle\vee$}; \draw(-2,0.09) node {$\scriptstyle\vee$}; \draw(-3,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](3 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09) node {$\scriptstyle\wedge$}; \draw(2,0) to [out=-90,in=180] (2.5, -0.6) to [out=0,in= -90] (3,0) (2,0) to [out= 90,in=180] (2.5, 0.6) to [out=0,in= 90] (3,0) ; \draw (0,0) to [out=90,in=180] (0.5,0.6) to [out=0,in=90] (1,0) (1,0) (-1,0) to [out=-90,in=180] (-0.5, -0.6) to [out=0,in= -90] (0,0) ; \draw (4,0) to [out=90,in=0](1.5,1.2 ) to [out=180,in=90](-1,0); %--++( 90:2); \draw (4,0) to [out=-90,in=0](2.5,-1.2 ) to [out=180,in=-90]( 1,0); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1.5) to [out=180,in =90] (-2,0 ); \draw[densely dotted] (5,-0 ) to [out=-90,in=0] (1.5,-1.5) to [out=180,in =-90] (-2,-0 ); \draw[densely dotted] (6,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-3,0 ); \draw[densely dotted] (6,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-3,-0 ); \draw[densely dotted](-1.25,0-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1.5 ) coordinate (X); \draw(-1.25,-4)--++(0:5.5) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1.5 ) coordinate (X); 
\draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](3 , 0.09-4) node {$\scriptstyle\vee$}; \draw(1,0.09-4) node {$\scriptstyle\vee$}; \draw(-2,0.09-4) node {$\scriptstyle\vee$}; \draw(-3,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](2 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](6 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](0 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw (2-2,-4-0) to [out=90,in=0](1.5-2,0.6-4 ) to [out=180,in=90](1-2,-4-0); \draw (2-2,-4-0) to [out=-90,in=0](1.5-2,-0.6-4 ) to [out=180,in=-90](1-2,-4-0); \draw (2+1,-4-0) to [out=90,in=0](1.5+1,0.6-4 ) to [out=180,in=90](1+1,-4-0); \draw (2,-4-0) to [out=-90,in=0](1.5,-0.6-4 ) to [out=180,in=-90](1,-4-0); \draw (2+2,-4-0) to [out=-90,in=0](1.5+2,-0.6-4 ) to [out=180,in=-90](1+2,-4-0); % \draw (3,-4-0) to [out=90,in=0](1.5,0.75-4 ) to [out=180,in=90](0,-4-0); % % % \draw (4,-4-0) to [out=90,in=0](2.5,1.2-4 ) to [out=180,in=90](1,-4-0); % \draw (4,-4-0) to [out=-90,in=0](3.5,-0.5-4 ) to [out=180,in=-90](3,-4-0); % \draw (0,-4-0) to [out=-90,in=0](-0.5,-0.5-4 ) to [out=180,in=-90](-1,-4-0); \draw[densely dotted] (5,-4-0 ) to [out=90,in=0] (1.5,-4+1.5) to [out=180,in =90] (-2,-4-0 ); \draw[densely dotted] (5,-4-0 ) to [out=-90,in=0] (1.5,-4-1.5) to [out=180,in =-90] (-2,-4-0 ); \draw[densely dotted] (6,0-4 ) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3,0-4 ); \draw[densely dotted] (6,0-4 ) to [out=-90,in=0] (1.5,-1.75-4) to [out=180,in =-90] (-3,0-4 ); \draw[white,fill=white] (0.5,0.6 ) circle (7.75pt); \path (0.5, 0.6 ) node {\scalefont{0.7 }$\phantom{_{1}}Q^1$}; \draw[white,fill=white] (2.5,0.6-4 ) circle (7.75pt); \path (2.5, 0.6-4 ) node {\scalefont{0.7 }$\phantom{_{1}}Q^2$}; \end{tikzpicture} \end{minipage}$$ with $r-3$ dotted anti-clockwise circles in each diagram; the equality follows from applying the $1\otimes 1 \mapsto 1$ rule a total of $r$ times 
as required. Note that the last product is $\underline{\nu} \nu \overline{\alpha}\cdot \underline{\alpha}\alpha\overline{\lambda}$ as required. It can be shown similarly that it is also equal to $\underline{\nu} \nu \overline{\beta}\cdot \underline{\beta}\beta\overline{\lambda}$. The remaining cases (where $r=2$ and $c>2$ or vice versa) are analogous, except that we make use of the oriented strand rules in the surgery procedure. ◻ **Proposition 54** (The adjacent relation). *Let $P\in {\rm DRem}(\mu)$ and $Q\in {\rm DRem}(\mu - P)$ be adjacent. Let $\langle P\cup Q \rangle_\mu$ denote the smallest removable Dyck path of $\mu$ containing $P\cup Q$ (if it exists), and set $\lambda=\mu-P$, $\nu=\lambda-Q$, and $\alpha=\mu-\langle P\cup Q \rangle_\mu$. Then we have $$\label{thisoney2} i^{{\sf sgn}(\lambda,\mu)+{\sf sgn}(\nu,\lambda)} \underline{\mu} \lambda\overline{\lambda} \cdot \underline{\lambda} \nu \overline{\nu} = \begin{cases} i^{2b(\langle P\cup Q\rangle_\mu) - 2b(Q) +{\sf sgn}(\alpha , \mu)+{\sf sgn}(\alpha , \nu) } \underline{\mu} \alpha \overline{ \alpha} \cdot \underline{ \alpha} \alpha \overline{\nu} & \text{if $\langle P\cup Q \rangle_\mu$ exists,} \\ 0 & \text{otherwise.} \end{cases}$$* *Proof.* We start by proving that the signs on both sides of the equation are equal. Write $\langle P\cup Q\rangle_\mu = Q'\sqcup P \sqcup Q$ and assume that $Q'$ is to the left of $Q$. Then we have that the cup corresponding to $P$ connects $i_P - \tfrac{1}{2}$ and $j_P + \tfrac{1}{2}$, the cup corresponding to $Q$ connects $(j_P +1)-\tfrac{1}{2}$ and $j_Q +\tfrac{1}{2}$, the cup corresponding to $Q'$ connects $i_{Q'}-\tfrac{1}{2}$ and $(i_P-1) + \tfrac{1}{2}$ and finally the cup corresponding to $\langle P\cup Q\rangle_\mu$ connects $i_{Q'} - \tfrac{1}{2}$ and $j_Q + \tfrac{1}{2}$ for some $i_{Q'} < i_P \leqslant j_P < j_Q$. 
Note that $$b(\langle P\cup Q\rangle_\mu) = \tfrac{1}{2}( {j_Q - i_{Q'}})+1 \quad \text{and} \quad b(Q) = \tfrac{1}{2}({j_Q - (j_P +1)}) +1.$$ Thus we have $$\begin{aligned} && 2b(\langle P\cup Q\rangle_\mu) - 2b(Q) +{\sf sgn}(\alpha , \mu)+{\sf sgn}(\alpha , \nu) \\ && = 2 \left( \frac{j_Q - i_{Q'}}{2} +1 \right) -2 \left(\frac{j_Q - (j_P +1)}{2} +1 \right) + \frac{i_{Q'} + j_Q}{2} + \frac{i_{Q'} + (i_P -1)}{2} \\ && = \frac{j_P+i_P}{2} + \frac{j_Q + (j_P+1)}{2} \\ && = {\sf sgn}(\lambda, \mu) + {\sf sgn}(\nu, \lambda) \end{aligned}$$ as required. Thus we can restrict our attention to the incontractible case, as the general case will follow directly by applying the dilation homomorphism. So we can assume that $\mu = (c^r), \lambda= (c^{r-1}, c-1)$ and $\nu = (c^{r-1}, c-2)$ (the case $\nu = (c^{r-2}, (c-1)^2)$ is similar). Note that $\langle P\cup Q\rangle_\mu$ exists precisely when $r\geqslant 2$. When $r=1$ it is easy to check that $\underline{\mu}\lambda\overline{\lambda} \cdot \underline{\lambda} \nu \overline{\nu} =0$ using $y\otimes y \mapsto 0$ as we are applying surgery to two $\wedge$-oriented propagating strands. We now assume that $r\geqslant 2$. Then we have $\alpha = (c^{r-2}, c-1, c-2)$. 
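The equality of exponents above is elementary half-integer arithmetic and can be double-checked numerically. The following sketch reads the four summands ${\sf sgn}(\alpha,\mu)$, ${\sf sgn}(\alpha,\nu)$, ${\sf sgn}(\lambda,\mu)$, ${\sf sgn}(\nu,\lambda)$ off the displayed computation (an assumption inferred from that display rather than a definition quoted from elsewhere) and verifies the identity over a range of endpoint configurations $i_{Q'} < i_P \leqslant j_P < j_Q$.

```python
from fractions import Fraction
from itertools import product

def half(x):
    # exact arithmetic, since the sgn exponents are half-integers
    return Fraction(x, 2)

# Endpoint data: i_Q' < i_P <= j_P < j_Q, with cups as in the proof.
for iQp, iP, jP, jQ in product(range(6), repeat=4):
    if not (iQp < iP <= jP < jQ):
        continue
    b_PQ = half(jQ - iQp) + 1          # b(<P u Q>_mu)
    b_Q = half(jQ - (jP + 1)) + 1      # b(Q)
    # left-hand side: 2b(<P u Q>_mu) - 2b(Q) + sgn(alpha,mu) + sgn(alpha,nu)
    lhs = 2 * b_PQ - 2 * b_Q + half(iQp + jQ) + half(iQp + (iP - 1))
    # right-hand side: sgn(lambda,mu) + sgn(nu,lambda)
    rhs = half(jP + iP) + half(jQ + (jP + 1))
    assert lhs == rhs, (iQp, iP, jP, jQ)
print("sign identity verified on all sampled endpoint configurations")
```

Expanding symbolically, both sides equal $j_P + \tfrac{1}{2}(i_P + j_Q + 1)$, which is what the loop confirms instance by instance.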
In this case, for $r\geqslant 3$, the equation $\underline{\mu}\lambda\overline{\lambda} \cdot \underline{\lambda} \nu \overline{\nu} = \underline{\mu}\alpha \overline{\alpha} \cdot \underline{\alpha}\alpha \overline{\nu}$ becomes $$\label{eqna21} % \underline{\la} \mu \overline{\mu} % \cdot % \underline{\mu} \mu \overline{\nu} %\;\;= \; %i\times \begin{minipage}{5.5cm} \begin{tikzpicture} % [xscale=0.76,yscale= 0.76] [xscale=0.5,yscale=0.7] \draw[densely dotted](-2.25,-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-2.25,-4)--++(0:7.75) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-2 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09-4) node {$\scriptstyle\vee$}; \draw(2,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](3 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw (0,0-4) to [out=90,in=180] (0.5,0.6-4) to [out=0,in=90] (1,0-4) (1,0-4) to [out=-90,in=180] (1.5, -0.6-4) to [out=0,in= -90] (2,0-4) to [out=90,in=180] (2.5,0.6-4) to [out=0,in=90] (3,0-4) to [out=-90,in=0] (1.5, -1-4) to [out=180,in= -90] (0,0-4) ; \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw(-3.25 ,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](6.5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[densely dotted] (6.5,0-4) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3.25,0-4); \draw[densely dotted] (6.5,-0-4) to [out=-90,in=0] (1.5,-1.75-4) to [out=180,in =-90] (-3.25,-0-4); \draw[densely dotted] (5,0-4) to [out=90,in=0] (1.5,1. 5-4) to [out=180,in =90] (-2,0-4); \draw[densely dotted] ( 5,-0-4) to [out=-90,in=0] (1.5,-1. 5-4) to [out=180,in =-90] (-2,-0-4); \draw (4,0-4) to [out=90,in=0] (1.5,1. 25-4) to [out=180,in =90] (-1,0-4); \draw ( 4,-0-4) to [out=-90,in=0] (1.5,-1. 
25-4) to [out=180,in =-90] (-1,-0-4); \draw[densely dotted](-2.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-2.25,0)--++(0:7.75) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-2 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , 0.09) node {$\scriptstyle\vee$}; \draw(3,0.09) node {$\scriptstyle\vee$}; \draw[thick](1 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-3.25 ,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (6.5,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-3.25,0 ); \draw[densely dotted] (6.5,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-3.25,-0 ); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1. 5) to [out=180,in =90] (-2,0 ); \draw[densely dotted] ( 5,-0 ) to [out=-90,in=0] (1.5,-1. 
5) to [out=180,in =-90] (-2,-0 ); \draw (4,0) to [out=-90,in=0](1.5,-1.2 ) to [out=180,in=-90](-1,0); %--++( 90:2); % \draw (2,0) to [out= 90,in=0](0.5, 1.2) to [out=180,in= 90](-1,0); \draw (0,0) to [out=90,in=180] (0.5,0.6) to [out=0,in=90] (1,0) to [out=-90,in=0] (0.5,-0.6) to [out=180,in=-90] (0,0) ; \draw (2+0,0) to [out=-90,in=180] (2+0.5,-0.6) to [out=0,in=-90] (2+1,0) to [out= 90,in=180] (3+0.5, 0.6) to [out=0,in= 90] (4+0,0) ; \draw[white,fill=white] (1.5, -0.6-4) circle (7pt); \path (1.5, -0.6-4) node {\scalefont{0.67 }$ P$}; \draw[white,fill=white] (2.5, - 0.6) circle (7pt); \path (2.5, - 0.6) node {\scalefont{0.67 }$ Q$}; \end{tikzpicture} \end{minipage} = \begin{minipage}{5.5cm} \begin{tikzpicture} % [xscale=0.76,yscale= 0.76] [xscale=0.5,yscale=0.7] \draw[densely dotted](-2.25,-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-2.25,-4)--++(0:7.75) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-2 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](01 , 0.09-4) node {$\scriptstyle\vee$}; \draw(3,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw (1,0-4) to [out=-90,in=180] (1.5, -0.5-4) to [out=0,in= -90] (2,0-4) (1-1,0-4) to [out=-90,in=180] (1.5, -1-4) to [out=0,in= -90] (2+1,0-4) ; \draw ( 4,-0-4) to [out=-90,in=0] (1.5,-1. 
4-4) to [out=180,in =-90] (-1,-0-4); \draw ( 4,-4) to [out=90,in=0] (3.5,-4+0.5) to [out=180,in =90] (3,-0-4); \draw ( 4-2,-4) to [out=90,in=0] (3.5-2,-4+0.5) to [out=180,in =90] (3-2,-0-4); \draw ( 4-2-2,-4) to [out=90,in=0] (3.5-2-2,-4+0.5) to [out=180,in =90] (3-2-2,-0-4); \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw(-3.25 ,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](6.5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[densely dotted] (6.5,0-4) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3.25,0-4); \draw[densely dotted] (6.5,-0-4) to [out=-90,in=0] (1.5,-1.75-4) to [out=180,in =-90] (-3.25,-0-4); \draw[densely dotted] (5,0-4) to [out=90,in=0] (1.5,1. 5-4) to [out=180,in =90] (-2,0-4); \draw[densely dotted] ( 5,-0-4) to [out=-90,in=0] (1.5,-1. 5-4) to [out=180,in =-90] (-2,-0-4); \draw[densely dotted](-2.25,0)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-2.25,0)--++(0:7.75) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw[thick](-2 , 0.09) node {$\scriptstyle\vee$}; \draw[thick](0 , -0.09) node {$\scriptstyle\wedge$}; \draw(2,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](1 ,0.09) node {$\scriptstyle\vee$}; \draw[thick](3 ,0.09) node {$\scriptstyle\vee$}; \draw[thick](-1 , 0.09) node {$\scriptstyle\vee$}; \draw(-3.25 ,0.09) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](6.5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09) node {$\scriptstyle\wedge$}; \draw[densely dotted] (6.5,0 ) to [out=90,in=0] (1.5,1.75) to [out=180,in =90] (-3.25,0 ); \draw[densely dotted] (6.5,-0 ) to [out=-90,in=0] (1.5,-1.75) to [out=180,in =-90] (-3.25,-0 ); \draw[densely dotted] (5,0 ) to [out=90,in=0] (1.5,1. 5) to [out=180,in =90] (-2,0 ); \draw[densely dotted] ( 5,-0 ) to [out=-90,in=0] (1.5,-1. 
5) to [out=180,in =-90] (-2,-0 ); \draw (2,0) to [out= 90,in=0](0.5, 1 ) to [out=180,in= 90](-1,0); \draw (1,0) to [out= 90,in=0](0.5, 0.5 ) to [out=180,in= 90](0,0); \draw (1+3,0) to [out= 90,in=0](0.5+3, 0.5 ) to [out=180,in= 90](0+3,0); \draw (2,0) to [out=-90,in=0] (1.5,-0.6) to [out=180,in=-90] (1,0); \draw (2-2,0) to [out=-90,in=0] (1.5-2,-0.6) to [out=180,in=-90] (1-2,0); \draw (2+2,0) to [out=-90,in=0] (1.5+2,-0.6) to [out=180,in=-90] (1+2,0); \draw[white,fill=white] (1.5,-1-4) circle (7.75pt); \draw[white,fill=white] (0.9 ,-1-4) circle (7.75pt); \draw[white,fill=white] (2,-1-4) circle (7.75pt); \path (1.6, -1-4) node {\scalefont{0.65 }$ \langle P \!\cup\! Q \rangle _\mu$}; \draw[white,fill=white] (0.5, 0.6) circle (7.75pt); \path (0.5, 0.6 ) node {\scalefont{0.67 }$\phantom{_{'}}Q'$}; \end{tikzpicture} \end{minipage}$$ where there are $r-3$ concentric dotted outer circles in the top and bottom diagrams on each side of the product. We can rewrite both sides of this equation using the $1\otimes 1 \mapsto 1$ surgery procedure $r-3$ times on the dotted strands (trivially turning each pair of dotted circles into a large dotted circle) followed by 3 times on the solid strands in order to obtain in both cases the diagram $$\label{eqna22} %i \times \underline{\la} \alpha \overline{\alpha} \cdot \underline{\alpha} \alpha \overline{\nu} % \;\;= \; \begin{minipage}{6.5cm} \begin{tikzpicture}% [xscale=0.76,yscale= 0.76] [xscale=0.5,yscale=0.7] \draw[densely dotted](-2.25,-4)--++(180:0.5 ) coordinate (X); \draw (X)--++(180:1 ) coordinate (X); \draw(-2.25,-4)--++(0:7.75) coordinate (X); \draw[densely dotted](X)--++(0:0.5 ) coordinate (X); \draw (X)--++(0:1 ) coordinate (X); \draw (1,0-4) to [out=-90,in=180] (1.5, -0.6-4) to [out=0,in= -90] (2,0-4) (3,0-4) to [out=-90,in=0] (1.5, -1.1-4) to [out=180,in= -90] (0,0-4) ; \draw[white,fill=white] (1.5,-1-4) circle (7.75pt); \draw[white,fill=white] (0.9 ,-1-4) circle (7.75pt); \draw[white,fill=white] (2,-1-4) circle (7.75pt); 
\path (1.6, -1-4) node {\scalefont{0.65 }$ \langle P \!\cup\! Q \rangle _\mu$}; \draw[thick](-2 , 0.09-4) node {$\scriptstyle\vee$}; \draw[thick](1 , 0.09-4) node {$\scriptstyle\vee$}; \draw(3,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](0 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](2 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](-1 , 0.09-4) node {$\scriptstyle\vee$}; \draw(-3.25 ,0.09-4) node {$\scriptstyle\vee$}; \draw[thick](4 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](6.5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[thick](5 ,-0.09-4) node {$\scriptstyle\wedge$}; \draw[densely dotted] (6.5,-0-4) to [out=-90,in=0] (1.5,-1.9-4) to [out=180,in =-90] (-3.25,-0-4); \draw[densely dotted] ( 5,-0-4) to [out=-90,in=0] (1.5,-1. 6-4) to [out=180,in =-90] (-2,-0-4); \draw ( 4,-0-4) to [out=-90,in=0] (1.5,-1. 45-4) to [out=180,in =-90] (-1,-0-4); \draw[densely dotted] (6.5,0-4 ) to [out=90,in=0] (1.5,1.75-4) to [out=180,in =90] (-3.25,0-4 ); \draw[densely dotted] (5,0 -4) to [out=90,in=0] (1.5,1.5-4) to [out=180,in =90] (-2,0 -4); % \draw (2,0-4) to [out= 90,in=0](0.5, 1.2-4 ) to [out=180,in= 90](-1,0-4); \draw (0,0-4) to [out=90,in=180] (0.5,0.6-4) to [out=0,in=90] (1,0-4) ; \draw (2+1,0-4) to [out= 90,in=180] (3+0.5, 0.6-4) to [out=0,in= 90] (4+0,0-4) ; \draw[white,fill=white] (0.5, 0.6-4) circle (7.75pt); \path (0.5, 0.6-4) node {\scalefont{0.67 }$\phantom{_{'}}Q'$}; \end{tikzpicture} \end{minipage}$$as required. The other cases (where $r=2$ and $c>2$ or vice versa, and where $r=c=2$) are similar. ◻ We thus conclude that the map $\Psi$ is indeed a ${\mathbb Z}$-graded homomorphism of $\Bbbk$-algebras (for $\Bbbk$ an integral domain containing a square root of $-1$). 
It remains to check that this map is a bijection; we now verify this by showing that the image of the light leaves basis of $\mathcal{H}_{m,n}$ is a basis of $\mathcal{K}_{m,n}$ (we do this by showing that every basis element of $\mathcal{K}_{m,n}$ can be written as a product of degree 1 elements). Over a field, we could deduce the following result from the known Koszulity of $\mathcal{K}_{m,n}$ (this is the main result of [@MR2600694]). However, we wish to work over more general rings (where Koszulity does not hold). From an aesthetic point of view, we also prefer to deduce directly that the algebra is generated in degree 1 (as appealing to the strong cohomological property of Koszulity is somewhat "using a sledgehammer to crack a nut\"). **Proposition 55**. *The map $\Psi$ is an isomorphism of graded $\Bbbk$-algebras.* *Proof.* We have already seen above that the map is a $\Bbbk$-algebra homomorphism. We need to show that the map is bijective. Clearly the map is bijective when one restricts attention to the degree 0 elements (indexed by weights/partitions) and degree 1 elements (indexed by Dyck pairs of degree 1) of both algebras. We now consider elements of higher degree: we will do this in two stages. We first show that every element of the form $\underline{\mu}\lambda\overline{\lambda}$ is in the image, by constructing it as a product of the degree 1 elements. We will then show that $$\begin{aligned} \label{profoftheform} (\underline{\mu}\lambda\overline{\lambda}) (\underline{\lambda}\lambda\overline{\nu}) = \underline{\mu}\lambda\overline{\nu}+\sum_{\zeta < \lambda} \underline{\mu}\zeta \overline{\nu}\end{aligned}$$ and so deduce that the map is bijective by unitriangularity. 
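The unitriangularity argument is the standard fact that a transition matrix with ones on the diagonal and all remaining nonzero entries strictly below it (here, recording the lower-order terms $\zeta < \lambda$) has determinant 1, hence is invertible over any commutative ring. A minimal numerical illustration, with a hypothetical $3\times 3$ transition matrix standing in for the change of basis:

```python
import numpy as np

# Unitriangular integer matrix: ones on the diagonal; the off-diagonal
# entries play the role of hypothetical lower-order correction terms.
T = np.array([[1, 0, 0],
              [2, 1, 0],
              [5, 3, 1]])

# det(T) = 1, so T is invertible over the integers (indeed over any
# commutative ring): the change of basis is bijective.
assert round(np.linalg.det(T)) == 1
T_inv = np.round(np.linalg.inv(T)).astype(int)
assert (T @ T_inv == np.eye(3, dtype=int)).all()
```

The same reasoning applies to the (possibly much larger) transition matrix between the light leaves basis and the diagram basis, ordered compatibly with $<$.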
**Step 1:** We fix a sequence $\mu =\lambda^{(0)}\supset \lambda^{(1)}\supset \dots\supset \lambda^{(m)} =\lambda$ such that $\lambda^{(k)} = \lambda^{(k-1)} - P^k$, where $P^k$ is a removable Dyck path of $\lambda^{(k-1)}$ of breadth $p_k$ with $p_1\geqslant p_2 \geqslant \dots \geqslant p_m$ (this decreasing size condition is not strictly necessary, but is helpful for visualisation). We have that $$(\underline{\mu}\lambda\overline{\lambda})= (\underline{\mu}\mu \overline{\lambda^{(1)}}) (\underline{\lambda^{(1)}}\lambda^{(1)} \overline{\lambda^{(2)}}) \dots (\underline{\lambda^{(m-1)}}\lambda^{(m-1)} \overline{\lambda} )$$ by repeated application of [Lemma 49](#local-surg){reference-type="ref" reference="local-surg"}. **Step 2:** We first observe that the subdiagram $\lambda\overline{\lambda} \underline{\lambda}\lambda$ within the wider diagram $(\underline{\mu}\lambda\overline{\lambda}) (\underline{\lambda}\lambda\overline{\nu})$ is an oriented Temperley--Lieb diagram whose arcs are all anti-clockwise oriented and which is invariant under the flip through the horizontal axis. We will work through the cases of the surgery rules of [3.2](#Khovanov arc algebras){reference-type="ref" reference="Khovanov arc algebras"} and how they can potentially be applied to the anti-clockwise arcs in $\lambda\overline{\lambda} \underline{\lambda}\lambda$ (within the wider diagram $(\underline{\mu}\lambda\overline{\lambda}) (\underline{\lambda}\lambda\overline{\nu})$) in turn: we will show that none of $x \otimes x \mapsto 0$, $x \otimes y \mapsto 0$, or $y \otimes y \mapsto 0$ can occur (thus $(\underline{\mu}\lambda\overline{\lambda}) (\underline{\lambda}\lambda\overline{\nu})\neq0$) and that in all the other cases there is a single term with weight $\lambda$ (as required). We first require some notation. 
Let $\lambda\in {\mathscr{P}_{m,n}}$ and fix $i<j\in {\mathbb Z}+\tfrac{1}{2}$ connected by a cup in $\underline{\lambda}$, we denote this cup by $\underline{c} \in \underline{\lambda}$ and similarly we denote the reflected cap by $\overline{c} \in \overline{\lambda}$. We will also need to speak of the interval $\underline{I}$ on the top weight line (respectively $\overline{I}$ on the bottom weight line) lying strictly between the points $i<j\in {\mathbb Z}+\tfrac{1}{2}$. The surgery procedure is not concerned with the local orientation of the arcs $\underline{c}$ and $\overline{c}$ (which are always anti-clockwise in our proof), but rather the global orientation of the circle/strand to which the arcs $\underline{c}$ and $\overline{c}$ belong. Determining a global orientation of clockwise/anti-clockwise is equivalent to assigning an "inside\"/"outside\" label to the regions $\underline{I}$ and $\overline{I}$ in a manner we shall make more precise (case-by-case) and this will allow us to show that certain cases cannot occur (for topological reasons). We will assume that all our diagrams in this proof have the minimum number of propagating lines --- this does not affect the surgery procedure, which is defined topologically, but allows us to speak of the 'left' and 'right' propagating lines in a given circle. This is illustrated in [\[cup-cap\]](#cup-cap){reference-type="ref" reference="cup-cap"}. 
$$\quad \begin{minipage}{4.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-3) coordinate (X); \draw (X)--++(180:1.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{outside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw[densely dotted] (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) ; \path (3.5,-0.985-3) node {\scalefont{0.8}$\underline{c}$}; \draw% [densely dashed] (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw %[densely dashed] (T2) to [out=90,in=180] (6,0.75-3) to [out=0,in=90] (7,0-3) ; \draw (0,0-3) to [out=90,in=0] (-0.5,0.5-3) to [out=180,in=90] (-1,0-3) to [out=-90,in=180] (3,-2-3) to [out=0,in=-90] (7,0-3) ; % \path(T1)--++(90:1) coordinate (top); % \draw [densely dotted](top)--++(90:-3.5) coordinate (top); % \draw (top) node [below] {$i$}; % % % \path(T2)--++(90:1) coordinate (top); % \draw [densely dotted](top)--++(90:-3.5) coordinate (top); % \draw (top) node [below] {$j$}; \path(T1)--++(90:2.5) coordinate (top); \draw [densely dotted](top)--++(90:-5) coordinate (top); \draw (top) node [below] {\scalefont{0.8}$i$}; \path(T2)--++(90:2.5) coordinate (top); \draw [densely dotted](top)--++(90:-5) coordinate (top); \draw (top) node [below] {\scalefont{0.8}$j$}; \end{tikzpicture} \end{minipage} \qquad 
\begin{minipage}{4.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-3) coordinate (X); \draw (X)--++(0:-0.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw[densely dotted] (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:2 )coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++( 0:8) --++(90:-0.1) node {$\scriptstyle\wedge$}; \draw[ thick] (T1) to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) ; \path (3.5,-0.985-3) node {\scalefont{0.8}$\underline{c}$}; \draw% [densely dashed] (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw %[densely dashed] (T2) to [out=90,in=180] (6,0.75-3) to [out=0,in=90] (7,0-3) ; \draw (0,0-3) to [out= 90,in=180] (4, 2-3) to [out=0,in= 90] (8,0-3) to [out=-90,in=0] (7.5,-0.5-3) to [out=180,in=-90] (7,0-3) ; \path(T1)--++(90:2.5) coordinate (top); \draw [densely dotted](top)--++(90:-5) coordinate (top); \draw (top) node [below] {\scalefont{0.8}$i$}; \path(T2)--++(90:2.5) coordinate (top); \draw [densely dotted](top)--++(90:-5) coordinate (top); \draw (top) node [below] {\scalefont{0.8}$j$}; \end{tikzpicture} \end{minipage}$$ We first consider the rules in which our pair of arcs belong to the same connected component (either a circle or a strand). If the arcs $\underline{c}$ and $\overline{c}$ both belong to the same clockwise oriented circle, then the regions $\underline{I}$ and $\overline{I}$ both lie *outside* this circle. 
If the arcs $\underline{c}$ and $\overline{c}$ both belong to the same anti-clockwise oriented circle, then the regions $\underline{I}$ and $\overline{I}$ both lie *inside* this circle. This is depicted in [\[clocker\]](#clocker){reference-type="ref" reference="clocker"}. Applying the surgery procedure in the former case, we obtain two non-nested clockwise oriented circles. The rightmost propagating strand of one circle goes through the point $i$ (which was $\vee$-oriented in $\lambda$) and the leftmost propagating strand of the other circle goes through the point $j$ (which was $\wedge$-oriented in $\lambda$) and so $\lambda$ is preserved. Applying the surgery procedure in the latter case, we obtain two nested circles (and sum over the $1\otimes x$ and $x\otimes 1$ orientations). In the rightmost case in [\[clocker\]](#clocker){reference-type="ref" reference="clocker"}, the $\lambda$ weight corresponds to orienting the inner circle clockwise, see [\[clocker2\]](#clocker2){reference-type="ref" reference="clocker2"} (the leftmost case in [\[clocker\]](#clocker){reference-type="ref" reference="clocker"} corresponds to orienting the inner circle anti-clockwise). 
$$\quad \begin{minipage}{4.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-6) coordinate (X); \path (X)--++(0:2.5)coordinate (Y);; \draw (Y)--++(180:1) coordinate (Y1); \draw[densely dotted] (Y1)--++(180:2) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y)--++(180:1); \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{outside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:2)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\up$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; % % %\draw[ thick] % (T1) % to [out= 90,in=180] (3.5,0.75-6) to [out=0,in= 90] (T2) % ; % % \path (3.5,0.4-6) node {\scalefont{0.8}$\overline{c}$}; \draw[ thick] (T1) to [out= 90,in=180] (3.5,0.75-6) to [out=0,in= 90] (T2) ; \draw[white,fill=white] (3.5,0.75-6) circle (8pt); \path (3.5,0.75-6) node {\scalefont{0.8}$\overline{c}$}; \draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (5,-6) to [out= -90,in=180] (5.5,-0. 5-6) to [out=0,in= -90] (6,-6) (7,-6) to [out= -90,in=180] (7.5,-0. 5-6) to [out=0,in= -90] (8,-6)--++(90:3) ; \draw (2,-6) to [out= -90,in=0] (0.5,-0. 
75-6) to [out=180,in= -90] (-1,-6)--++(90:3) ; \draw (0,-3) coordinate (X); \draw (X)--++(180:1.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{outside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\vee$}; %\draw[ thick] % (T1) % to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) % ; % % \path (3.5,-0.4-3) node {\scalefont{0.8}$\underline{c}$}; \draw[ thick] (T1) to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) ; \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw %[densely dashed] (T2) to [out=90,in=180] (6.5,0.75-3) to [out=0,in=90] (8,0-3) ; \draw (0,0-3) to [out=90,in=0] (-0.5,0.5-3) to [out=180,in=90] (-1,0-3) % to [out=-90,in=180] (3,-2-3) % to [out=0,in=-90] (7,0-3) ; % \path(T1)--++(90:2.5) coordinate (top); % \draw [densely dotted](top)--++(90:-5) coordinate (top); % \draw (top) node [below] {\scalefont{0.8}$i$}; % % % \path(T2)--++(90:2.5) coordinate (top); % \draw [densely dotted](top)--++(90:-5) coordinate (top); % \draw (top) node [below] {\scalefont{0.8}$j$}; \end{tikzpicture} \end{minipage} \quad \qquad % % % % % % % % % \begin{minipage}{4.75cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-6) coordinate 
(X); \path (X)--++(0:2.5)coordinate (Y);; \draw (Y)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y)--++(180:1); \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\up$}; \path (X)--++(180:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(180:-1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \draw[ thick] (T1) to [out= 90,in=180] (3.5,0.75-6) to [out=0,in= 90] (T2) ; \draw[white,fill=white] (3.5,0.75-6) circle (8pt); \path (3.5,0.75-6) node {\scalefont{0.8}$\overline{c}$}; \draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (6,-6) to [out= -90,in=180] (6.5,-0. 5-6) to [out=0,in= -90] (7,-6) ; \draw (2,-6) to [out= -90,in=0] (1.5,-0. 
5-6) to [out=180,in= -90] ( 1,-6)--++(90:3) (5,-6) to [out= -90,in=0] (2.5,-1.5-6) to [out=180,in= -90] ( -0,-6)--++(90:3) ; \draw (0,-3) coordinate (X); \path (X)--++(0:2.5)coordinate (Y);; \draw (Y)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y)--++(180:1); \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\up$}; \path (X)--++(180:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(180:-1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; % % \draw (0,-3) coordinate (X); % \draw (X)--++(180:1.5); % \draw (X)--++(0:2.5)coordinate (Y);; % \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; % \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; %% \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{outside}}; % \draw (Z)--++(0:1)coordinate (Y);; % \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; % \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\up$}; % \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\up$}; % % \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\up$}; % % \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\down$}; % \draw[ thick] (T1) to [out=-90,in=180] 
(3.5,-0.75-3) to [out=0,in=-90] (T2) ; \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw (T2) to [out=90,in=180] (5.5,0.5-3) to [out=0,in=90] (6,0-3) (6,0-3) to [out=-90,in=180] (6.5,-0.5-3) to [out=0,in=-90] (7,0-3) ; \draw (7,-3) to [out= 90,in=0] (3.5, 1.5-3) to [out=180,in= 90] ( -0,-3) ; \end{tikzpicture} \end{minipage} \ % % % % % % % % % \begin{minipage}{4.25cm} \begin{tikzpicture} [xscale=-0.5,yscale=0.7] \draw (0,-6) coordinate (X); \path (X)--++(0:2.5)coordinate (Y);; \draw (Y)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y)--++(180:1); \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\up$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\down$}; \path (X)--++(180:0) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(180:-1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:2) coordinate (T1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:5)coordinate (T2) --++(90: 0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:6) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) to [out= 90,in=180] (3.5,0.75-6) to [out=0,in= 90] (T2) ; \draw[white,fill=white] (3.5,0.75-6) circle (8pt); \path (3.5,0.75-6) node {\scalefont{0.8}$\overline{c}$}; %\draw % (6,-6) % to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) % % (5,-6) % to [out= -90,in=180] (5.5,-0. 5-6) to [out=0,in= -90] (6,-6) % % ; % % % % % %\draw % (2,-6) % to [out= -90,in=0] (1.5,-0. 
5-6) to [out=180,in= -90] ( 1,-6)--++(90:3) % % % (7,-6) % to [out= -90,in=0] (3.5,-1.5-6) to [out=180,in= -90] ( -0,-6)--++(90:3) % %; % \draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (6,-6) to [out= -90,in=180] (6.5,-0. 5-6) to [out=0,in= -90] (7,-6) ; \draw (2,-6) to [out= -90,in=0] (1.5,-0. 5-6) to [out=180,in= -90] ( 1,-6)--++(90:3) (5,-6) to [out= -90,in=0] (2.5,-1.5-6) to [out=180,in= -90] ( -0,-6)--++(90:3) ; \draw (0,-3) coordinate (X); \path (X)--++(0:2.5)coordinate (Y);; \draw (Y)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y1)--++(180:1) coordinate (Y1); \draw (Y)--++(180:1); \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\up$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\down$}; \path (X)--++(180:0) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(180:-1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:2) coordinate (T1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:5)coordinate (T2) --++(90: 0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:6) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:0.1) node {$\scriptstyle\vee$}; % % \draw (0,-3) coordinate (X); % \draw (X)--++(180:1.5); % \draw (X)--++(0:2.5)coordinate (Y);; % \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; % \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; %% \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{outside}}; % \draw (Z)--++(0:1)coordinate (Y);; % \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; % \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\up$}; % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\down$}; % 
\path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\down$}; % % \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\up$}; % \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\down$}; % % \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\up$}; % \draw[ thick] (T1) to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) ; \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw (T2) to [out=90,in=180] (5.5,0.5-3) to [out=0,in=90] (6,0-3) (6,0-3) to [out=-90,in=180] (6.5,-0.5-3) to [out=0,in=-90] (7,0-3) ; \draw (7,-3) to [out= 90,in=0] (3.5, 1.5-3) to [out=180,in= 90] ( -0,-3) ; \end{tikzpicture} \end{minipage}$$ \+ We now suppose that the arcs $\underline{c}$ and $\overline{c}$ both belong to the same strand. In which case, the terminating points of this strand are either both less than or equal to $i$ or both greater than or equal to $j$. We claim that the surgery $y \mapsto x\otimes y$ does not change the weight $\lambda$. Applying the surgery in the former (respectively latter) case, we obtain a circle whose leftmost intersection with the weight lines is at the point $j$ and a strand whose rightmost intersection with the weight lines is at the point $i$ (respectively a circle whose rightmost intersection with the weight lines is at the point $i$ and a strand whose leftmost intersection with the weight lines is at the point $j$). A circle whose leftmost point is $\wedge$-oriented (respectively rightmost point is $\vee$-oriented) is necessarily clockwise oriented, and so the claim follows. Next we consider the rules in which our pair of arcs belong to the distinct connected components (each of which is either a circle or a strand). The $1\otimes 1 \mapsto 1$, $x\otimes 1 \mapsto x$, and $1\otimes x \mapsto x$ rules can be checked in a similar fashion to above. 
In the case of $1\otimes 1 \mapsto 1$, the original diagram consists of two non-nested circles (see [\[11\>1\]](#11>1){reference-type="ref" reference="11>1"}). There are two propagating strands in the circle produced by surgery; the orientation of the circle can be determined via the left/right propagating strand and checked to match $\lambda$, using the fact that this left/right propagating strand passes through $i$ or $j$ with label $\vee$ or $\wedge$, respectively. In the case of $x\otimes 1 \mapsto x$ or $1\otimes x \mapsto x$, the original diagram consists of two nested circles (see [\[1x\>x\]](#1x>x){reference-type="ref" reference="1x>x"}). There are four propagating strands in the circle produced by surgery; the orientation of the circle can be determined via the leftmost/rightmost propagating strand and checked to match $\lambda$, using the fact that this leftmost (respectively rightmost) propagating strand has the sign $\wedge$ (respectively $\vee$) given by the *opposite direction* to that encountered at $i$ (respectively at $j$).
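The orientation bookkeeping in the cases above is mechanical, and can be sanity-checked with a toy encoding. The following Python sketch is purely illustrative (the data model, the function name, and the `"^"`/`"v"` encoding of $\wedge$/$\vee$ are our own assumptions, not notation from the text); it records only the rule that a circle whose leftmost point is $\wedge$-oriented, equivalently whose rightmost point is $\vee$-oriented, is clockwise:

```python
# Toy check of the orientation rule used in the proof above.
# Encoding assumption (ours): "^" stands for a wedge-oriented point and
# "v" for a vee-oriented point on the weight line.

def is_clockwise(leftmost: str, rightmost: str) -> bool:
    """A circle is clockwise iff its leftmost point is wedge-oriented
    (equivalently, its rightmost point is vee-oriented)."""
    assert {leftmost, rightmost} <= {"^", "v"}
    # On a consistently oriented circle the leftmost and rightmost points
    # carry opposite arrows, so the two characterisations agree.
    assert leftmost != rightmost
    return leftmost == "^"

# The situation analysed in the text: the circle produced by surgery has
# its leftmost intersection labelled ^, hence it is clockwise and the
# weight is unchanged.
print(is_clockwise("^", "v"))
print(is_clockwise("v", "^"))
```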
$$\begin{minipage}{4.1cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-6) coordinate (X); \draw (X)--++(180:0.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) to [out= 90,in=180] (3.5,0.75-6) to [out=0,in= 90] (T2) ; \draw[white,fill=white] (3.5,0.75-6) circle (8pt); \path (3.5,0.75-6) node {\scalefont{0.8}$\overline{c}$}; \draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (5,-6) to [out= -90,in=180] (5.5,-0. 5-6) to [out=0,in= -90] (6,-6) ; \draw (2,-6) to [out= -90,in=0] (1.5,-0. 5-6) to [out=180,in= -90] ( 1,-6) (0,-6) to [out= 90,in=180] (0.5,0. 
5-6) to [out=0,in= 90] (1,-6) (7,-6) to [out= -90,in=0] (3.5,-1.5-6) to [out=180,in= -90] ( 0,-6) ; \draw (0,-3) coordinate (X); \draw (X)--++(180:0.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) ; \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw %[densely dashed] (T2) to [out=90,in=180] (5.5,0.5-3) to [out=0,in=90] (6,0-3) to [out=-90,in=180] (6.5,-0.5-3) to [out= 0,in=-90] (7,0-3) ; \draw %[densely dashed] (0,-3) to [out=90,in=180] (3.5,1.5-3) to [out=0,in=90] (7,0-3); \end{tikzpicture} \end{minipage} \mapsto \;\; \begin{minipage}{4.1cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-6) coordinate (X); \draw (X)--++(180:0.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path 
(X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T2) --++(90:3) ; % % \draw[white,fill=white] (3.5,0.75-6) circle (8pt); % \path (3.5,0.75-6) node {\scalefont{0.8}$\overline{c}$}; \draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (5,-6) to [out= -90,in=180] (5.5,-0. 5-6) to [out=0,in= -90] (6,-6) ; \draw (2,-6) to [out= -90,in=0] (1.5,-0. 5-6) to [out=180,in= -90] ( 1,-6) (0,-6) to [out= 90,in=180] (0.5,0. 5-6) to [out=0,in= 90] (1,-6) (7,-6) to [out= -90,in=0] (3.5,-1.5-6) to [out=180,in= -90] ( 0,-6) ; \draw (0,-3) coordinate (X); \draw (X)--++(180:0.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:1)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) --++(-90:3) ; % % \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); % \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw %[densely dashed] (T2) to [out=90,in=180] (5.5,0.5-3) to [out=0,in=90] (6,0-3) to [out=-90,in=180] 
(6.5,-0.5-3) to [out= 0,in=-90] (7,0-3) ; \draw %[densely dashed] (0,-3) to [out=90,in=180] (3.5,1.5-3) to [out=0,in=90] (7,0-3); \end{tikzpicture} \end{minipage} % % % % % % % % %$$ $$\begin{minipage}{5.2cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-6) coordinate (X); % \draw (X)--++(180:0.5); % \draw (X)--++(0:2.5)coordinate (Y);; % \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; % \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; %% \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{inside}}; % \draw (Z)--++(0:1)coordinate (Y);; % \draw (Y)--++(0:1)coordinate (Z);; % \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\down$}; % % % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\up$}; % % \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\up$}; % % % \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\up$}; % \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\down$}; % \draw (X)--++(180:1.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:2)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) to [out= 90,in=180] (3.5,0.75-6) to [out=0,in= 90] (T2) 
; \draw[white,fill=white] (3.5,0.75-6) circle (8pt); \path (3.5,0.75-6) node {\scalefont{0.8}$\overline{c}$}; \draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (5,-6) to [out= -90,in=180] (5.5,-0. 5-6) to [out=0,in= -90] (6,-6) ; \draw (2,-6) to [out= -90,in=0] (1.5,-0. 5-6) to [out=180,in= -90] ( 1,-6) (0,-6) to [out= 90,in=180] (0.5,0. 5-6) to [out=0,in= 90] (1,-6) (7,-6) to [out= -90,in=0] (3.5,-1.5-6) to [out=180,in= -90] ( 0,-6) ; \draw (8,-6) to [out= -90,in=0] (3.5,-1. 85-6) to [out=180,in= -90] ( -1,-6) ; \draw (0,-3) coordinate (X); \draw (X)--++(180:1.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); \draw (Y) node {\scalefont{0.75}\text{outside}}; \draw (Z)--++(0:2)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) to [out=-90,in=180] (3.5,-0.75-3) to [out=0,in=-90] (T2) ; \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw (8,-3) --++(-90:3) (8,-3) to [out=90,in=0] (7.5,0.5-3) to [out=180,in=90] (7,0-3) ; \draw (0,-3) to [out=90,in=0] (-0.5,0.5-3) to [out=180,in=90] (-1,0-3) --++(-90:3); \draw %[densely dashed] (T2) to [out=90,in=180] (5.5,0.5-3) 
to [out=0,in=90] (6,0-3) to [out=-90,in=180] (6.5,-0.5-3) to [out= 0,in=-90] (7,0-3) ; % \draw %[densely dashed] % (0,-3) % to [out=90,in=180] (3.5,1.5-3) to [out=0,in=90] (7,0-3); \end{tikzpicture} \end{minipage} \mapsto\;\; \begin{minipage}{5.1cm} \begin{tikzpicture} [xscale=0.5,yscale=0.7] \draw (0,-6) coordinate (X); % \draw (X)--++(180:0.5); % \draw (X)--++(0:2.5)coordinate (Y);; % \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; % \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; %% \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{inside}}; % \draw (Z)--++(0:1)coordinate (Y);; % \draw (Y)--++(0:1)coordinate (Z);; % \draw (Z)--++(0:1)coordinate (Y);; % % \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\down$}; % % % \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\up$}; % % \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\down$}; % \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\up$}; % % % \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\up$}; % \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\down$}; % \draw (X)--++(180:1.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:-0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{inside}}; \draw (Z)--++(0:2)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T1) --++(90:3) ; 
\draw (6,-6) to [out= 90,in=180] (6.5,0. 5-6) to [out=0,in= 90] (7,-6) (5,-6) to [out= -90,in=180] (5.5,-0. 5-6) to [out=0,in= -90] (6,-6) ; \draw (2,-6) to [out= -90,in=0] (1.5,-0. 5-6) to [out=180,in= -90] ( 1,-6) (0,-6) to [out= 90,in=180] (0.5,0. 5-6) to [out=0,in= 90] (1,-6) (7,-6) to [out= -90,in=0] (3.5,-1.5-6) to [out=180,in= -90] ( 0,-6) ; \draw (8,-6) to [out= -90,in=0] (3.5,-1. 85-6) to [out=180,in= -90] ( -1,-6) ; \draw (0,-3) coordinate (X); \draw (X)--++(180:1.5); \draw (X)--++(0:2.5)coordinate (Y);; \draw[densely dotted] (Y)--++(0:2)coordinate (Z);; \path (Y)--++(0:1)--++(90:0.3)coordinate (Y);; % \fill[white] (Y) circle (12pt); % \draw (Y) node {\scalefont{0.75}\text{outside}}; \draw (Z)--++(0:2)coordinate (Y);; \draw (Y)--++(0:1)coordinate (Z);; \draw (Z)--++(0:1)coordinate (Y);; \path (X)--++(0:0) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(180:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:1) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:2) coordinate (T1) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:5)coordinate (T2) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:7) --++(90:-0.1) node {$\scriptstyle\wedge$}; \path (X)--++(0:6) --++(90:0.1) node {$\scriptstyle\vee$}; \path (X)--++(0:8) --++(90:0.1) node {$\scriptstyle\vee$}; \draw[ thick] (T2) --++(-90:3) ; % % % \draw[white,fill=white] (3.5,-0.75-3) circle (8pt); % \path (3.5,-0.75-3) node {\scalefont{0.8}$\underline{c}$}; \draw (1,0-3) to [out=-90,in=0] (0.5,-0.5-3) to [out=180,in=-90] (0,0-3) ; \draw (T1) to [out=90,in=0] (1.5,0.5-3) to [out=180,in=90] (1,0-3) ; \draw (8,-3) --++(-90:3) (8,-3) to [out=90,in=0] (7.5,0.5-3) to [out=180,in=90] (7,0-3) ; \draw (0,-3) to [out=90,in=0] (-0.5,0.5-3) to [out=180,in=90] (-1,0-3) --++(-90:3); \draw %[densely dashed] (T2) to [out=90,in=180] (5.5,0.5-3) to [out=0,in=90] (6,0-3) to [out=-90,in=180] (6.5,-0.5-3) to [out= 0,in=-90] (7,0-3) ; % \draw %[densely dashed] % (0,-3) % to [out=90,in=180] 
(3.5,1.5-3) to [out=0,in=90] (7,0-3); \end{tikzpicture} \end{minipage}$$ The $x \otimes x \mapsto 0$ case is of a different flavour entirely. We must show that we *never* have to apply this rule in the simplification of a product of the form above (in [\[profoftheform\]](#profoftheform){reference-type="ref" reference="profoftheform"}). To see this, note that the intervals $\underline{I}$ and $\overline{I}$ must both lie outside their circles; however, this implies that both circles contain propagating lines and (by the planarity condition on arc diagrams) each circle must nest inside the other (as they cannot intersect), which is a contradiction. Similarly, the $x\otimes y \mapsto 0$ case (and likewise the $y \otimes y \mapsto 0$ case) gives rise to an intersecting circle and strand, again a contradiction. The merging rules involving a strand can be checked in a similar fashion. ◻
{ "id": "2309.13695", "title": "Quiver presentations and isomorphisms of Hecke categories and Khovanov\n arc algebras", "authors": "Chris Bowman and Maud De Visscher and Amit Hazi and Catharina Stroppel", "categories": "math.RT math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The $\mathcal{L}_2$-gain characterizes a dynamical system's input-output properties and is used for important control methods, like $\mathcal{H}_{\infty}$ control. However, gain can be difficult to determine for nonlinear systems. Previous work designed a nonconvex optimization problem to simultaneously search for a [cpa]{acronym-label="cpa" acronym-form="singular+short"} storage function and an upper bound on the small-signal $\mathcal{L}_2$-gain of a dynamical system over a triangulated region about the origin. This work improves upon those results to establish a tighter upper bound on a system's gain through a convex optimization problem. By reformulating the relationship between the Hamilton-Jacobi equations and gain as an [lmi]{acronym-label="lmi" acronym-form="singular+short"} and then developing novel [lmi]{acronym-label="lmi" acronym-form="singular+short"} error bounds for a triangulation, tighter gain bounds are derived and computed more efficiently. Numerical results demonstrate the less conservative upper bound on a dynamical system's gain. author: - "Amy K. Strong$^{1}$, Reza Lavaei$^{1}$, and Leila J. Bridgeman$^1$ [^1] [^2]" bibliography: - IEEEabrv.bib - biblio.bib title: " **Improved Small-Signal $\\mathcal{L}_2$ Gain Analysis for Nonlinear Systems** " --- # INTRODUCTION [io]{acronym-label="io" acronym-form="singular+short"} stability theory views a dynamical system as a mapping between inputs and outputs. One of the most widely used [io]{acronym-label="io" acronym-form="singular+short"} descriptors is the $\mathcal{L}_2$-gain of a system, which bounds the norm of the output with respect to that of the input. The gain provides an understanding of the energy dissipation of the system, particularly in the presence of norm-bounded disturbances. Moreover, it is leveraged in control design through the Small Gain Theorem [@zames1966input], the basis of $\mathcal{H}_{\infty}$ control.
As such, the $\mathcal{L}_2$-gain of the system is an essential tool both for analysis of and control synthesis for dynamical systems. While the gain of a linear dynamical system can be determined easily [@zhou1998essentials], this is a more difficult task for nonlinear systems [@van19922; @zhang2012performance]. The nonlinear Kalman-Yacubovich-Popov lemma [@hill1976stability; @brogliato2007dissipative] or relationships between gain and the [hji]{acronym-label="hji" acronym-form="singular+short"} [@van19922] can provide conditions to determine gain. However, these methods require knowledge of a positive, semi-definite storage function for the system. Unfortunately, determining the storage function for a nonlinear system is a non-trivial task. In fact, it is an active area of research to find a Lyapunov function for a nonlinear system, which is one such storage function [@ratschan2010providing; @jarvis2003some; @dai2021lyapunov; @chang2019neural; @julian1999parametrization; @giesl2012construction; @giesl2014revised]. Recent work leveraged [cpa]{acronym-label="cpa" acronym-form="singular+short"} Lyapunov functions over a triangulated region [@giesl2012construction; @giesl2014revised] and the [hji]{acronym-label="hji" acronym-form="singular+short"} to formulate a nonconvex optimization problem that simultaneously searches for a nonlinear system's storage function and an upper bound on its gain [@lavaei2023L2]. The optimization problem depends on a specific error term for the [hji]{acronym-label="hji" acronym-form="singular+short"} constraint that, when imposed with the [hji]{acronym-label="hji" acronym-form="singular+short"} on each vertex of the triangulation, ensures the inequality holds for the entire region.
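For the linear case referenced above, the $\mathcal{L}_2$-gain coincides with the $\mathcal{H}_\infty$ norm of the transfer matrix, so it can be estimated directly by a frequency sweep of the largest singular value of $G(j\omega) = \mathbf{C}(j\omega \mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}$. The following is a minimal, illustrative NumPy sketch (the function name, the example system, and the grid resolution are our own choices, not from this paper):

```python
import numpy as np

def linf_gain_estimate(A, B, C, D, wmax=1e3, npts=2000):
    """Estimate the L2-gain (H-infinity norm) of the stable LTI system
    x' = Ax + Bu, y = Cx + Du by gridding sigma_max(G(jw)) over frequency."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    # Include w = 0 plus a logarithmic grid up to wmax.
    ws = np.concatenate(([0.0], np.logspace(-3, np.log10(wmax), npts)))
    gain = 0.0
    for w in ws:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        gain = max(gain, np.linalg.svd(G, compute_uv=False)[0])
    return gain

# First-order lag G(s) = 1/(s+1): sigma_max(G(jw)) = 1/sqrt(1 + w^2),
# so the gain is 1, attained at w = 0.
print(linf_gain_estimate([[-1.0]], [[1.0]], [[1.0]], [[0.0]]))
```

A grid-based sweep only lower-bounds the true $\mathcal{H}_\infty$ norm between grid points; for nonlinear systems no such closed-form frequency characterization exists, which is what motivates the storage-function approach of the paper.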
However, the [hji]{acronym-label="hji" acronym-form="singular+short"} and its error term are polynomial in the design variables -- leading to an overly conservative, nonconvex optimization problem that requires methods like [ico]{acronym-label="ico" acronym-form="singular+short"} to solve. Moreover, specific requirements of the error bound meant that the optimization problem could only be applied to a limited class of nonlinear systems -- notably precluding linear control affine terms. [lmis]{acronym-label="lmi" acronym-form="plural+short"} are needed to convexify the previous optimization problem. This paper develops a new [lmi]{acronym-label="lmi" acronym-form="singular+short"} error upper bound for [cpa]{acronym-label="cpa" acronym-form="singular+short"} function approximations on a triangulation. With this, the [hji]{acronym-label="hji" acronym-form="singular+short"} can be reformulated as an [lmi]{acronym-label="lmi" acronym-form="singular+short"}, creating a convex optimization problem that finds the global optimum of the gain bound. Further, a combined quadratic and [cpa]{acronym-label="cpa" acronym-form="singular+short"} Lyapunov function is considered, expanding the class of dynamical systems to which this work applies. The end result is a tighter upper bound on the small-signal $\mathcal{L}_2$-gain of nonlinear dynamical systems than that of [@lavaei2023L2]. # PRELIMINARIES ## Notation The interior, boundary, and closure of the set $\Omega \subset \mathbb{R}^{n}$ are denoted as $\Omega^o,$ $\partial \Omega,$ and $\bar{\Omega},$ respectively.
The set of all compact subsets $\Omega \subset \mathbb{R}^n$ satisfying i) $\Omega^o$ is connected and contains the origin and ii) $\Omega = \bar{\Omega^o},$ is signified by $\mathfrak{R}^n.$ The $p$-norm of a vector $\mathbf{x}\in \mathbb{R}^{n}$ is denoted $||\cdot||_p,$ where $p \in [1, \infty].$ Let $\mathcal{L}_p$ represent the normed function spaces with the norm $||x||_{\mathcal{L}_p} = \left(\int_0^\infty ||x(t)||^p\,dt\right)^{\frac{1}{p}}$ for $p \in [1, \infty)$ and $||x||_{\mathcal{L}_\infty} = \sup_{t\geq 0}||x(t)||.$ There also exists an extended space, $\mathcal{L}_{pe},$ where if $x \in \mathcal{L}_{pe}$, then the truncation of $x$ for $t \in [0, T]$ is in $\mathcal{L}_p$ for all $T \geq 0.$ Let $f(\cdot) \in \mathcal{C}^k$ indicate that the real-valued function $f(\cdot)$ is $k$-times continuously differentiable over its domain. A positive definite matrix is denoted as $\mathbf{P}\succ 0$, while positive semi-definite, negative definite, and negative semi-definite matrices are denoted similarly. Identity and zero matrices are represented as $\mathbf{I}$ and $\mathbf{0}.$ Let $1_n$ denote a vector of ones in $\mathbb{R}^n.$ ## CPA Functions The construction of [cpa]{acronym-label="cpa" acronym-form="singular+short"} Lyapunov functions for nonlinear dynamical systems [@giesl2012construction; @giesl2014revised] is particularly relevant to this work. A compact region of a system's domain, $\mathbb{R}^n$, is triangulated. Then, a constrained linear optimization problem is formulated to solve for a [cpa]{acronym-label="cpa" acronym-form="singular+short"} Lyapunov function -- affine on each simplex. A similar process can be used to find a positive, semi-definite [cpa]{acronym-label="cpa" acronym-form="singular+short"} storage function. The necessary definitions and tools for triangulation and problem formulation are listed below. **Definition 1**.
*(*Affine independence* [@giesl2014revised]): A collection of vectors $\{\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_n\} \subset \mathbb{R}^{n}$ is affinely independent if $\mathbf{x}_1-\mathbf{x}_0, \ldots, \mathbf{x}_n - \mathbf{x}_0$ are linearly independent.* **Definition 2**. *(*$n$-simplex* [@giesl2014revised]): A simplex, $\sigma$, is defined as the convex hull of $n+1$ affinely independent vectors, $\text{co}\{\mathbf{x}_0, \mathbf{x}_1, \ldots,\mathbf{x}_n\}$, where each vector, $\mathbf{x}_i \in \mathbb{R}^n$, is a vertex.* **Definition 3**. *(*Triangulation* [@giesl2014revised]): A finite collection of $m_{\mathcal{T}}$ simplexes, where the intersection of any two simplexes is either a face or an empty set, is represented as $\mathcal{T}= \{\sigma_i\}_{i=1}^{m_{\mathcal{T}}} \in \mathfrak{R}^n.$* Let $\mathcal{T}= \{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}.$ Further, let $\{\mathbf{x}_{i,j}\}_{j=0}^n$ be $\sigma_i$'s vertices, making $\sigma_i = \text{co}(\{\mathbf{x}_{i,j}\}_{j=0}^n).$ The choice of $\mathbf{x}_{i,0}$ in $\sigma_i$ is arbitrary unless $0 \in \sigma_i,$ in which case $\mathbf{x}_{i,0} = 0.$ The set of vertices of the triangulation $\mathcal{T}$ that lie in $\Omega \subseteq \mathcal{T}$ is denoted by $\mathbb{E}_{\Omega}.$ Let $\Sigma_0$ denote the collection of simplexes in $\mathcal{T}$ that contain the origin. **Lemma 1**. *(Remark 9 [@giesl2014revised]) Consider the triangulation $\mathcal{T}= \{\sigma_i\}_{i=1}^{m_{\mathcal{T}}},$ where $\sigma_i = \text{co}(\{\mathbf{x}_{i,j}\}_{j=0}^n)$ and a set $\mathbf{W}= \{W_x\}_{x \in \mathbb{E_{\mathcal{T}}}}\subset\mathbb{R}.$ Let $\mathbf{X}_i \in \mathbb{R}^{n \times n}$ be a matrix that has $\mathbf{x}_{i,j} - \mathbf{x}_{i,0}$ as its $j$-th row and $\bar{W}_i\in \mathbb{R}^{n}$ be a vector that has $W_{x_{i,j}} - W_{x_{i,0}}$ as its $j$-th element.
The function $W(\mathbf{x}) = \mathbf{x}^\top \mathbf{X}_i^{-1}\bar{W}_i + \omega_i$ on $\sigma_i$ is the unique [cpa]{acronym-label="cpa" acronym-form="singular+short"} interpolation of $\mathbf{W}$ on $\mathcal{T}$ satisfying $W(\mathbf{x}) = W_{\mathbf{x}}, \forall \mathbf{x}\in \mathbb{E}_{\mathcal{T}}.$* The [cpa]{acronym-label="cpa" acronym-form="singular+short"} Lyapunov linear programming problem enforces inequality constraints on each vertex of a triangulation, $\mathbf{x}_0, ..., \mathbf{x}_n \in \mathcal{T}$. These constraints use a specific error bound that ensures all $\mathbf{x}\in \mathcal{T}$ obey the constraints without needing to enforce infinitely many constraints [@giesl2012construction]. The following lemma uses Taylor's Theorem [@fitzpatrick2009advanced] to develop that error term for a function $g \in \mathcal{C}^2$. **Lemma 2**. *(*Proposition 2.2 and Lemma 2.3* [@giesl2012construction]) Consider $\hat{\Omega} \in \mathfrak{R}^n$ and let $g: \hat{\Omega} \rightarrow \mathbb{R}^n$ satisfy $g \in \mathcal{C}^2$ for some triangulation, $\mathcal{T}= \{\sigma_i\}_{i = 1}^{m_{\mathcal{T}}},$ of $\hat{\Omega}.$ Then, for any $\mathbf{x}\in \sigma_i = \text{co}(\{\mathbf{x}_{i,j}\}_{j=0}^n) \in \mathcal{T}$, $$\left\lVert g(\mathbf{x}) - \sum_{j=0}^n\lambda_jg(\mathbf{x}_{i,j})\right\rVert_{\infty} \leq \beta_i\sum_{j=0}^n\lambda_jc_{i,j},$$ where $\{\lambda_j\}_{j=0}^n \in \mathbb{R}$ is the set of unique coefficients satisfying $\mathbf{x}= \sum_{j=0}^n\lambda_j\mathbf{x}_{i,j}$ with $\sum_{j=0}^n\lambda_j = 1$ and $\lambda_j \in [0,1],$ $$\label{eq:oldC} c_{i,j} = \frac{n}{2}\left\lVert\mathbf{x}_{i,j} - \mathbf{x}_{i,0}\right\rVert\Big(\max_{k \in \mathbb{Z}_1^n}\left\lVert\mathbf{x}_{i,k} - \mathbf{x}_{i,0}\right\rVert + \left\lVert\mathbf{x}_{i,j} - \mathbf{x}_{i,0}\right\rVert\Big),$$ and $$\label{eq:oldB} \beta_i \geq \max_{p,q,r \in\mathbb{Z}_1^n} \max_{\xi \in \sigma_i} \left\lvert\frac{\partial^2g(\mathbf{x})^{(p)}}{\partial \mathbf{x}^{(q)}\partial \mathbf{x}^{(r)}}\Bigr|_{\mathbf{x}= \xi}\right\rvert.$$* Previous iterations of [cpa]{acronym-label="cpa" acronym-form="singular+short"} function construction have focused purely on inequality constraints. In contrast, this paper focuses on [lmis]{acronym-label="lmi" acronym-form="plural+short"}. Therefore, a definition of matrix semi-definiteness is required. **Definition 4**. *A matrix, $\mathbf{M}= \mathbf{M}^\top \in \mathbb{R}^{n \times n}$, is positive (negative) semi-definite if $\mathbf{w}^\top\mathbf{M}\mathbf{w}\geq 0$ ($\mathbf{w}^\top\mathbf{M}\mathbf{w}\leq 0$) for all $\mathbf{w}\in \mathbb{R}^{n}.$* ## $\mathcal{L}_2$ Stability Analysis The $\mathcal{L}_2$ gain is a general [io]{acronym-label="io" acronym-form="singular+short"} descriptor of a mapping between two Hilbert spaces. **Definition 5**. *(*$\mathcal{L}$ stability* [@zames1966input]*) A mapping $\mathcal{G}: \mathcal{L}_e^m \rightarrow \mathcal{L}_e^q$ is $\mathcal{L}$ finite-gain stable if there exist $\gamma_1, \gamma_2 \geq 0$ such that $$\label{eq:l2} \left\lVert(\mathcal{G}\mathbf{u})_{\tau}\right\rVert_{\mathcal{L}} \leq \gamma_1 \left\lVert\mathbf{u}_{\tau}\right\rVert_{\mathcal{L}} + \gamma_2,$$ where $\mathbf{u}\in \mathcal{L}_e^m$ and $\tau \in [0, \infty).$* The definition of $\mathcal{L}_2$-gain is modified when considering a constrained input signal. **Definition 6**.
*(*Small-signal $\mathcal{L}$ stability* [@khalil2002]*) The mapping $\mathcal{G}: \mathcal{L}_e^m \rightarrow \mathcal{L}_e^q$ is $\mathcal{L}$ small-signal finite-gain stable if there exists $r_u>0$ such that ([\[eq:l2\]](#eq:l2){reference-type="ref" reference="eq:l2"}) is satisfied for all $\mathbf{u}\in \mathcal{L}_e^m$ with $\sup_{0 \leq t \leq \tau}\left\lVert\mathbf{u}(t)\right\rVert \leq r_u.$* The [hji]{acronym-label="hji" acronym-form="singular+short"} establishes a relationship between the $\mathcal{L}_2$-gain and the Hamilton-Jacobi equations [@van19922] and is the key to relating gain and [cpa]{acronym-label="cpa" acronym-form="singular+short"} storage functions in [@lavaei2023L2]. **Theorem 1**. *([@van19922]) Consider the smooth system $\dot{\mathbf{x}} = f(\mathbf{x}) + g(\mathbf{x})\mathbf{u}, \quad \mathbf{y}= h(\mathbf{x}),$ where $\mathbf{x}\in \mathbb{R}^{n}$, $\mathbf{y}\in \mathbb{R}^{p}$ and $\mathbf{u}\in \mathbb{R}^{m}.$ The function $f(\mathbf{x})$ is locally Lipschitz, $g(\mathbf{x}), h(\mathbf{x})$ are continuous over $\mathbb{R}^n,$ and $f(0) = h(0) = 0.$ Let $\gamma$ be a positive number and suppose there is a smooth positive semi-definite function $V(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}$ that satisfies the [hji]{acronym-label="hji" acronym-form="singular+short"}, $$\begin{aligned} \label{eq:gain_inequal} \begin{split} H = \nabla V ^\top f(\mathbf{x}) + \frac{1}{2\gamma^2}\nabla V^\top g(\mathbf{x})g^\top (\mathbf{x})\nabla V\\ + \frac{1}{2}h^\top (\mathbf{x})h(\mathbf{x}) \leq 0, \end{split} \end{aligned}$$ for all $\mathbf{x}\in \mathbb{R}^{n}.$ Then, for all $\mathbf{x}_0 \in \mathbb{R}^{n},$ the system is $\mathcal{L}_2$ stable with gain less than or equal to $\gamma.$* As previously noted in [@lavaei2023L2], because the [cpa]{acronym-label="cpa" acronym-form="singular+short"} storage function used to verify ([\[eq:gain_inequal\]](#eq:gain_inequal){reference-type="ref" reference="eq:gain_inequal"}) is only defined on the bounded set $\Omega
\in \mathfrak{R}^n,$ the $\mathcal{L}_2$ gain can only be found on a subset of $\Omega.$ Therefore, the small-signal properties of the system are used to ensure the system is unable to leave that subset. This is accomplished by using a non-convex optimization problem to synthesize a modified barrier function modelled as a [cpa]{acronym-label="cpa" acronym-form="singular+short"} function (Theorem 2, [@lavaei2023L2]). # MAIN RESULTS [lmis]{acronym-label="lmi" acronym-form="plural+short"} are a valuable optimization tool that often make complex control analysis or synthesis problems more straightforward to solve. To leverage this tool for [cpa]{acronym-label="cpa" acronym-form="singular+short"}-analysis of nonlinear systems, this paper develops novel error bounds for an [lmi]{acronym-label="lmi" acronym-form="singular+short"} on an $n$-simplex. These bounds are used to design a convex optimization problem that bounds the $\mathcal{L}_2$-gain of a nonlinear dynamical system over a given region. ## LMI Error Bounds This section develops a positive semi-definite error-bound matrix for an [lmi]{acronym-label="lmi" acronym-form="singular+short"} constraint. It will be shown that enforcing the [lmi]{acronym-label="lmi" acronym-form="singular+short"} constraint plus its error bound at the vertex points of an $n$-simplex ($\mathbf{x}_0,...,\mathbf{x}_n \in \sigma$) implies that the [lmi]{acronym-label="lmi" acronym-form="singular+short"} holds for all values within that simplex ($\mathbf{x}\in \sigma$). While [@giesl2012construction] established a general error bound for $\mathcal{C}^2$ vector-valued functions (Lemma 2), it does not translate automatically to an [lmi]{acronym-label="lmi" acronym-form="singular+short"} containing $\mathcal{C}^2$ vector-valued functions. The structure of an [lmi]{acronym-label="lmi" acronym-form="singular+short"} affects its definiteness and is considered in the following theorem. **Theorem 2**.
*Consider $$\label{eq:genericMat} \mathbf{M}(\mathbf{x}) = \begin{bmatrix} \phi(\mathbf{x}) & \zeta^\top (\mathbf{x}) \\ \zeta(\mathbf{x}) & -\mathbf{I} \end{bmatrix},$$ where $\mathbf{x}\in \mathbb{R}^{n}$, $\phi: \mathbb{R}^n \rightarrow \mathbb{R},$ $\zeta: \mathbb{R}^n \rightarrow \mathbb{R}^m$, $\phi,\zeta\in \mathcal{C}^2$, and $\zeta_k(\mathbf{x})$ is the $k^{th}$ element of $\zeta$. Let $\sigma := \text{co}\{\mathbf{x}_i\}_{i=0}^n$ be an $n$-simplex in $\mathbb{R}^n$. If $\mathbf{x}= \sum_{i=0}^{n}\lambda_i\mathbf{x}_i \in \sigma,$ then $$\begin{aligned} \label{eq:lmi_bound} \mathbf{M}(\mathbf{x}) - \sum_{i=0}^{n}\lambda_i\mathbf{M}(\mathbf{x}_i) \preceq & \sum_{i = 0}^n\lambda_i\begin{bmatrix} \frac{1}{2}\big(\beta c_{i} + \sum\limits_{k=1}^m\mu_k^2 c_{i}^2\big) & 0 \\ * & \frac{1}{2}\mathbf{I} \end{bmatrix} \nonumber\\ =& \sum_{i = 0}^n\lambda_i\mathbf{E}(\mathbf{x}_i) \vcentcolon=\mathbf{E}(\mathbf{x}), \end{aligned}$$ where $$\label{eq:c} c_{i} = n\left\lVert\mathbf{x}_{i} - \mathbf{x}_{0}\right\rVert\Big(\max_{j \in \mathbb{Z}_1^n}\left\lVert\mathbf{x}_{j} - \mathbf{x}_{0}\right\rVert + \left\lVert\mathbf{x}_{i} - \mathbf{x}_{0}\right\rVert\Big),$$ $$\beta \geq \max_{q,r \in\mathbb{Z}_1^n} \max_{\xi \in \sigma} \left\lvert\frac{\partial^2\phi(\mathbf{x})}{\partial \mathbf{x}^{(q)}\partial \mathbf{x}^{(r)}}\Bigr|_{\mathbf{x}= \xi}\right\rvert, \text{ and }$$ $$\mu_{k} \geq \max_{q,r \in\mathbb{Z}_1^n} \max_{\xi \in \sigma} \left\lvert\frac{\partial^2\zeta_{k}(\mathbf{x})}{\partial \mathbf{x}^{(q)}\partial \mathbf{x}^{(r)}}\Bigr|_{\mathbf{x}= \xi}\right\rvert.$$ Moreover, if $\mathbf{M}(\mathbf{x}_i) +\mathbf{E}(\mathbf{x}_i) \preceq 0$ is enforced for each vertex point of $\sigma,$ then $\mathbf{M}(\mathbf{x}) \preceq 0$ for all $\mathbf{x}\in \sigma.$* Inequality ([\[eq:lmi_bound\]](#eq:lmi_bound){reference-type="ref" reference="eq:lmi_bound"}) bounds the difference between $\mathbf{M}(\mathbf{x})$ at any convex combination of vertex points, $\mathbf{M}(\sum_{i=0}^n\lambda_i\mathbf{x}_i)$, versus a convex combination of $\mathbf{M}(\mathbf{x})$
at each vertex point, $\sum_{i=0}^n\lambda_i\mathbf{M}(\mathbf{x}_i)$. The proof parallels Proposition 2.2 in [@giesl2012construction], developing initial remainder terms using Taylor's theorem, but exploits the unique structure of $\mathbf{M}(\mathbf{x})$ to establish an [lmi]{acronym-label="lmi" acronym-form="singular+short"} error bound. The identity matrix in the bottom-right corner is essential when enforcing negative semi-definiteness. *Proof.* By definition, any point $\mathbf{x}\in \sigma$ can be written as a convex combination of the vertices; therefore, $\mathbf{x}= \sum_{i=0}^n\lambda_i\mathbf{x}_i$. Applying Taylor's Theorem as given in [@fitzpatrick2009advanced Theorem 14.20], the functions $\phi$ and $\zeta$ can be written as $$\begin{aligned} \phi(\mathbf{x}) =\ & \phi(\mathbf{x}_0) + \Big\langle \nabla\phi(\mathbf{x}_0),\sum_{i=0}^n\lambda_i\Delta\mathbf{x}_i\Big\rangle\\ &+ \frac{1}{2}\Big\langle \mathbf{H}_{\phi}(\mathbf{z})\sum_{i=0}^n\lambda_i\Delta\mathbf{x}_i,\sum_{j=0}^n\lambda_j\Delta\mathbf{x}_j\Big\rangle\end{aligned}$$ and $$\begin{aligned} \zeta_k(\mathbf{x}) =\ & \zeta_k(\mathbf{x}_0) + \Big\langle \nabla\zeta_k(\mathbf{x}_0),\sum_{i=0}^n\lambda_i\Delta\mathbf{x}_i\Big\rangle\\ &+ \frac{1}{2}\Big\langle \mathbf{H}_{\zeta_k}(\mathbf{z})\sum_{i=0}^n\lambda_i\Delta\mathbf{x}_i,\sum_{j=0}^n\lambda_j\Delta\mathbf{x}_j\Big\rangle,\end{aligned}$$ respectively, where $\Delta\mathbf{x}_i=\mathbf{x}_i - \mathbf{x}_0$, $\mathbf{H}_{\phi}$ is the Hessian of the function $\phi$, and $\mathbf{z}$ is some convex combination of $\mathbf{x}_0$ and $\mathbf{x}$. Because $\zeta$ is a vector-valued function, each dimension of $\zeta(\mathbf{x}) \in \mathbb{R}^{m}$ is separately expanded -- represented as element $\zeta_k(\mathbf{x})$ with a corresponding Hessian $\mathbf{H}_{\zeta_{k}}$ for $k =1,\ldots,m$. When $\mathbf{M}(\mathbf{x}_i)$ is evaluated at each vertex point, its elements can likewise be expanded. For brevity, only $\phi(\mathbf{x}_i)$ is shown, $$\phi(\mathbf{x}_i) = \phi(\mathbf{x}_0) + \langle \nabla \phi(\mathbf{x}_0),\Delta\mathbf{x}_i\rangle + \frac{1}{2}\langle \mathbf{H}_{\phi}(\mathbf{z}_i)\Delta\mathbf{x}_i,\Delta\mathbf{x}_i \rangle,$$ where $\mathbf{z}_i$ is some convex combination of $\mathbf{x}_0$ and $\mathbf{x}_i$. The error is then defined as $$\begin{aligned} \tilde{\mathbf{E}}(\mathbf{x}) &= \mathbf{M}\Big(\sum_{i=0}^n\lambda_i\mathbf{x}_i\Big) - \sum_{i=0}^n\lambda_i\mathbf{M}(\mathbf{x}_i)\\ &= \sum_{i=0}^n\lambda_i \begin{bmatrix} \frac{1}{2}\big\langle \mathbf{H}_{\phi}(\mathbf{z})\sum\limits_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\phi}(\mathbf{z}_i)\Delta\mathbf{x}_i,\, \Delta\mathbf{x}_i\big\rangle & * & \cdots & *\\ \frac{1}{2}\big\langle \mathbf{H}_{\zeta_1}(\mathbf{z})\sum\limits_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\zeta_1}(\mathbf{z}_i)\Delta\mathbf{x}_i,\, \Delta\mathbf{x}_i\big\rangle & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ \frac{1}{2}\big\langle \mathbf{H}_{\zeta_m}(\mathbf{z})\sum\limits_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\zeta_m}(\mathbf{z}_i)\Delta\mathbf{x}_i,\, \Delta\mathbf{x}_i\big\rangle & 0 & \cdots & 0 \end{bmatrix}.\end{aligned}$$
Consider Definition [Definition 4](#def:negativeDef){reference-type="ref" reference="def:negativeDef"} and let $\mathbf{w}= \begin{bmatrix} \mathbf{w}_1 & \mathbf{w}_2 \end{bmatrix}^\top$, where $\mathbf{w}_1 \in \mathbb{R}^{1}$ and $\mathbf{w}_2 \in \mathbb{R}^{m}.$ The expansion of $\mathbf{w}^\top \tilde{\mathbf{E}}(\mathbf{x})\mathbf{w}$ is [\[eq:bound2\]]{#eq:bound2 label="eq:bound2"} $$\begin{aligned} \mathbf{w}^\top \tilde{\mathbf{E}}(\mathbf{x})\mathbf{w}= \sum_{i=0}^n\lambda_i\Big[&\frac{1}{2}\big\langle \mathbf{H}_{\phi}(\mathbf{z})\sum_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\phi}(\mathbf{z}_i)\Delta\mathbf{x}_i, \Delta\mathbf{x}_i\big\rangle \mathbf{w}_1^2\\ &+ \sum_{k=1}^m\big\langle \mathbf{H}_{\zeta_k}(\mathbf{z})\sum_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\zeta_k}(\mathbf{z}_i)\Delta\mathbf{x}_i, \Delta\mathbf{x}_i\big\rangle \mathbf{w}_1\mathbf{w}_2^{(k)}\Big],\end{aligned}$$ where $\mathbf{z}_i$ denotes the intermediate point from the expansion about $\mathbf{x}_i$. From the cross terms, it becomes clear that the definiteness of $\tilde{\mathbf{E}}$ depends on both $\mathbf{x}$ and $\mathbf{w}.$ Completing the square on the cross terms removes this dependency and bounds $\tilde{\mathbf{E}}(\mathbf{x})$: $$\begin{aligned} \mathbf{w}^\top \tilde{\mathbf{E}}(\mathbf{x})\mathbf{w}\leq\ & \mathbf{w}_1^\top\sum_{i=0}^n\lambda_i\Big[\frac{1}{2}\Big|\big\langle \mathbf{H}_{\phi}(\mathbf{z})\sum_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\phi}(\mathbf{z}_i)\Delta\mathbf{x}_i, \Delta\mathbf{x}_i\big\rangle\Big|\\ &+ \frac{1}{2}\sum_{k=1}^m\big\langle \mathbf{H}_{\zeta_k}(\mathbf{z})\sum_{j=0}^n\lambda_j\Delta\mathbf{x}_j - \mathbf{H}_{\zeta_k}(\mathbf{z}_i)\Delta\mathbf{x}_i, \Delta\mathbf{x}_i\big\rangle^2\Big]\mathbf{w}_1\\ &+ \mathbf{w}_2^\top\frac{1}{2}\mathbf{I}\mathbf{w}_2.\end{aligned}$$ This expression holds for all $\mathbf{w}\in \mathbb{R}^{m+1}$ and can be simplified by applying Lemma 2.3 of [@giesl2012construction], as well as the triangle inequality and the Cauchy-Schwarz inequality, to produce the final upper bound on $\mathbf{w}^\top \tilde{\mathbf{E}}(\mathbf{x})\mathbf{w}$, $$\mathbf{w}_1^\top \frac{1}{2}\sum_{i=0}^n\lambda_i\Bigl( \beta c_{i} + \sum_{k=1}^m(\mu_k^2c_{i}^2)\Bigr)\mathbf{w}_1 + \mathbf{w}_2^\top \frac{1}{2}\mathbf{I}\mathbf{w}_2.$$ Hence, $\mathbf{w}^\top \tilde{\mathbf{E}}(\mathbf{x})\mathbf{w}\leq \mathbf{w}^\top \mathbf{E}(\mathbf{x})\mathbf{w}$ for all $\mathbf{w}\in \mathbb{R}^{m+1}$, implying ([\[eq:lmi_bound\]](#eq:lmi_bound){reference-type="ref" reference="eq:lmi_bound"}). Now suppose that $\mathbf{M}(\mathbf{x}) \preceq 0$ must be imposed on all $\mathbf{x}\in \sigma.$ By assumption, $\mathbf{M}(\mathbf{x}_i) + \mathbf{E}(\mathbf{x}_i) \preceq 0$ holds for each vertex of $\sigma$ $(\mathbf{x}_0, \hdots, \mathbf{x}_n).$ The set of negative semi-definite matrices is a convex cone [@boyd2004convex].
By enforcing the constraint $\mathbf{M}(\mathbf{x}_i) + \mathbf{E}(\mathbf{x}_i) \preceq 0$ at each vertex, the expression $\sum_{i=0}^n\lambda_i(\mathbf{M}(\mathbf{x}_i) + \mathbf{E}(\mathbf{x}_i)) \preceq 0$ also holds. Because $\mathbf{E}(\mathbf{x})\succeq 0$, it follows that $\mathbf{M}(\mathbf{x}) \preceq \mathbf{M}(\mathbf{x}) + \mathbf{E}(\mathbf{x}) \preceq 0.$ Therefore, $\mathbf{M}(\mathbf{x}) \preceq 0$ for all $\mathbf{x}\in \sigma.$ ◻ ## $\mathcal{L}_2$ Gain Analysis In this section, the [hji]{acronym-label="hji" acronym-form="singular+short"} is equivalently expressed as an [lmi]{acronym-label="lmi" acronym-form="singular+short"}. Then, the [lmi]{acronym-label="lmi" acronym-form="singular+short"} error upper bound is used to create a convex optimization problem that bounds a dynamical system's $\mathcal{L}_2$-gain. **Lemma 3**. *The [hji]{acronym-label="hji" acronym-form="singular+short"} ([\[eq:gain_inequal\]](#eq:gain_inequal){reference-type="ref" reference="eq:gain_inequal"}) is equivalent to $$\label{eq:gain_lmi} \begin{bmatrix} \nabla V^\top f(\mathbf{x}) & \nabla V^\top g(\mathbf{x}) & h^\top (\mathbf{x}) \\ * & -2\gamma^2\mathbf{I}& 0 \\ * & * & -2\mathbf{I} \end{bmatrix} \preceq 0.$$* *Proof.* Perform a Schur complement [@caverly2019lmi] about the terms $-\nabla V^\top g(\mathbf{x})(-\frac{1}{2\gamma^2}\mathbf{I})g(\mathbf{x})^\top\nabla V$ and $-h^\top (\mathbf{x})(-\frac{1}{2}\mathbf{I})h(\mathbf{x}).$ ◻ The following theorem uses Lemma 3, together with the error bound of Theorem 2, to develop an $\mathcal{L}_2$ gain analysis optimization problem. Two important design choices of the following problem are: i) separating the linear and nonlinear terms in the dynamical system and ii) modelling the storage function, $V(\mathbf{x})\succeq 0$, as the sum of a quadratic function and a [cpa]{acronym-label="cpa" acronym-form="singular+short"} function. In any simplex with a vertex at the origin, $V(\mathbf{x})$ is purely quadratic, which ensures that all error bounds go to zero at the origin.
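The equivalence in Lemma 3 can be spot-checked numerically. The sketch below is illustrative only (it assumes NumPy is available, and the values chosen for $\nabla V$, $f(\mathbf{x})$, $g(\mathbf{x})$, $h(\mathbf{x})$, and $\gamma$ are arbitrary toy data, not taken from this paper): it assembles the block matrix of ([\[eq:gain_lmi\]](#eq:gain_lmi){reference-type="ref" reference="eq:gain_lmi"}) and compares its definiteness with the sign of the [hji]{acronym-label="hji" acronym-form="singular+short"} expression.

```python
import numpy as np

def hji_value(dV, f, g, h, gamma):
    # H = dV^T f + 1/(2 gamma^2) dV^T g g^T dV + 1/2 h^T h
    return float(dV @ f + (dV @ g) @ (g.T @ dV) / (2 * gamma**2) + 0.5 * h @ h)

def gain_lmi(dV, f, g, h, gamma):
    # Block matrix of Lemma 3; by the Schur complement it is negative
    # semi-definite exactly when the HJI expression is non-positive.
    m, p = g.shape[1], h.shape[0]
    top = np.hstack([[dV @ f], dV @ g, h])
    M = np.zeros((1 + m + p, 1 + m + p))
    M[0, :], M[:, 0] = top, top
    M[1:1 + m, 1:1 + m] = -2 * gamma**2 * np.eye(m)
    M[1 + m:, 1 + m:] = -2 * np.eye(p)
    return M

dV = np.array([1.0, 2.0])       # toy storage-function gradient at some x
f = np.array([-3.0, -1.0])      # toy drift value, dV @ f = -5
g = np.array([[1.0], [0.0]])    # toy input-matrix value (m = 1)
h = np.array([0.5])             # toy output value (p = 1)
H = hji_value(dV, f, g, h, gamma=1.0)    # -5 + 0.5 + 0.125 = -4.375
lam_max = np.linalg.eigvalsh(gain_lmi(dV, f, g, h, gamma=1.0)).max()
H_bad = hji_value(dV, -f, g, h, gamma=1.0)   # drift flipped: HJI violated
lam_bad = np.linalg.eigvalsh(gain_lmi(dV, -f, g, h, gamma=1.0)).max()
```

In this toy instance the HJI value is negative and the block matrix is negative definite, while flipping the sign of the drift violates both at once, matching the claimed equivalence.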
These changes, along with the new [lmi]{acronym-label="lmi" acronym-form="singular+short"} error bound, enable the $\mathcal{L}_2$-analysis to be applied to a wider range of dynamical systems. Moreover, the $\mathcal{L}_2$-analysis can now be formulated as a convex optimization problem, resulting in a less conservative bound on the gain of these systems. **Theorem 3**. *Consider the constrained mapping $\mathcal{G}: \mathcal{L}_e^m \rightarrow \mathcal{L}_e^p$ defined by $\mathbf{y}= \mathcal{G}\mathbf{u},$ where $$\label{eq:dynSys} \mathcal{G}: \begin{cases} \dot{\mathbf{x}} = \tilde{F}(\mathbf{x}) + \tilde{G}(\mathbf{x})\mathbf{u}& \mathbf{x}\in \mathcal{X}\in \mathfrak{R}^n, \mathbf{u}\in \mathcal{U} \in \mathfrak{R}^m \\ \mathbf{y}= \tilde{H}(\mathbf{x}), \end{cases}$$ where $\tilde{F}(\mathbf{x}) = \mathbf{A}\mathbf{x}+ f(\mathbf{x}),$ $\tilde{G}(\mathbf{x}) = \mathbf{B}+ g(\mathbf{x}),$ $\tilde{H}(\mathbf{x}) = \mathbf{C}\mathbf{x}+ h(\mathbf{x}),$ $\mathbf{A}\in \mathbb{R}^{n \times n}, \mathbf{B}\in \mathbb{R}^{n \times m}, \mathbf{C}\in \mathbb{R}^{p \times n}$, and $f(0) = g(0) = h(0) = 0.$ Let $\mathcal{U} = \{\mathbf{u}\in \mathbb{R}^{m} \,|\, \sup_{0 \leq t \leq \tau}\left\lVert\mathbf{u}(t)\right\rVert \leq r_u\}$ for some $r_u >0$ that ensures $\mathbf{x}$ remains in a subset of $\Omega$. Suppose that $f, g, h \in \mathcal{C}^2$ for a triangulation, $\mathcal{T}=\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}},$ of the set $\Omega \in \mathfrak{R}^n.$ Define a candidate storage function, $$\label{eq:newLyap} V(\mathbf{x}) = \bar{\mathbf{V}}(\mathbf{x}) + \mathbf{x}^\top \mathbf{P}\mathbf{x},$$ where $\bar{\mathbf{V}}$ is a [cpa]{acronym-label="cpa" acronym-form="singular+short"} function defined by the vertex values $\bar{\mathbf{V}} = \{\bar{V}_{\mathbf{x}}\}_{\mathbf{x}\in \mathbb{E}_{\mathcal{T}}}$ and $\mathbf{P}\in \mathbb{R}^{n \times n}$ is a symmetric matrix.
Consider the optimization problem $$\min_{\bar{\mathbf{V}}, \mathbf{P}, \alpha, \mathbf{L}} \alpha$$* *subject to [\[eq:optGamma\]]{#eq:optGamma label="eq:optGamma"} $$\begin{aligned} \mathbf{P}\succ 0, \\ \alpha > 0, \\ V(\mathbf{x}) \geq 0 \quad \forall \mathbf{x}\in \mathbb{E}_{\mathcal{T}}, \label{eq:constr_posDef}\\ \left\lvert\nabla V_i\right\rvert \leq l_i, \quad \forall i \in \mathbb{Z}_1^{m_{\mathcal{T}}}, \\ \bar{V}_i = 0 \quad \forall i \in \Sigma_0, \\ \mathbf{M}_{i,j} \preceq 0, \quad \forall i \in \mathbb{Z}_1^{m_{\mathcal{T}}}, \forall j \in \mathbb{Z}_0^n, \mathbf{x}_{i,j} \neq 0, \end{aligned}$$* *where $\alpha = \gamma^2$, $\mathbf{L} = \{l_i\}_{i=1}^{m_{\mathcal{T}}},$* *[\[eq:gainUB\]]{#eq:gainUB label="eq:gainUB"} $$\begin{aligned} \mathbf{M}_{i,j} =& \begin{bmatrix} \nabla V^\top \tilde{F}(\mathbf{x}_{i,j}) & \nabla V^\top \tilde{G}(\mathbf{x}_{i,j}) & \tilde{H}^\top(\mathbf{x}_{i,j}) & 0 \\ * & -2\alpha\mathbf{I}& 0 & 0 \\ * & * & -2\mathbf{I}& 0 \\ * & * & * & 0 \end{bmatrix}\\ &+ \begin{bmatrix} \frac{1}{2}\big(\beta_i c_{i,j}(1_n^\top l_i) + \sum\limits_{a=0}^p\rho_{i,a}^2c_{i,j}^2\big) & 0 & 0 & \begin{bmatrix} \mu_{i,0} & \cdots & \mu_{i,m} \end{bmatrix}c_{i,j}(1_n^\top l_i) \\ * & \frac{1}{2}\mathbf{I}& 0 & 0 \\ * & * & \frac{1}{2}\mathbf{I}& 0 \\ * & * & * & -2\mathbf{I} \end{bmatrix}.\end{aligned}$$* *Further, $c_{i,j}$ is given in ([\[eq:oldC\]](#eq:oldC){reference-type="ref" reference="eq:oldC"}), $$\label{eq:beta} \beta_i \geq \max_{p,q,r \in\mathbb{Z}_1^n} \max_{\xi \in \sigma_i}\left\lvert\frac{\partial^2f(\mathbf{x})^{(p)}}{\partial \mathbf{x}^{(q)}\partial \mathbf{x}^{(r)}}\Bigr|_{\mathbf{x}= \xi}\right\rvert,$$ $$\label{eq:rho} \rho_{i,a} \geq \max_{q,r \in\mathbb{Z}_1^n} \max_{\xi \in \sigma_i}\left\lvert\frac{\partial^2h_{a}(\mathbf{x})}{\partial \mathbf{x}^{(q)}\partial \mathbf{x}^{(r)}}\Bigr|_{\mathbf{x}= \xi}\right\rvert,$$ $$\label{eq:mu} \mu_{i,k} \geq \max_{p,q,r \in\mathbb{Z}_1^n} \max_{\xi \in \sigma_i}\left\lvert\frac{\partial^2g_{k}(\mathbf{x})^{(p)}}{\partial \mathbf{x}^{(q)}\partial \mathbf{x}^{(r)}}\Bigr|_{\mathbf{x}= \xi}\right\rvert,$$ and $a$ and $k$ index the components of the vector-valued function $h(\mathbf{x}) \in \mathbb{R}^{p}$ and the columns of $g(\mathbf{x}) \in \mathbb{R}^{n \times m}$, respectively.* *If the optimization problem ([\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"}) is feasible, then ([\[eq:gain_inequal\]](#eq:gain_inequal){reference-type="ref" reference="eq:gain_inequal"}) holds for all points in $\Omega^o$ and $\gamma^{*} = \sqrt{\alpha^*}$ is an upper bound on the $\mathcal{L}_2$ gain of $\mathcal{G}$ in $\Omega^o.$* *Proof.* The gradient of the [cpa]{acronym-label="cpa" acronym-form="singular+short"} function, $\nabla V_i$, can be computed using Lemma [Lemma 1](#lemma_gradW){reference-type="ref" reference="lemma_gradW"}. Note that $c_{i,j}$ is finite for all $n$-simplexes in $\mathcal{T},$ because $\Omega \in \mathfrak{R}^n.$ Furthermore, $\beta_i,$ $\rho_{i, a}$, and $\mu_{i, k}$ exist for all simplexes because $f, g, h \in \mathcal{C}^2$. The upper bound developed in Theorem 2 can be applied to the gain [lmi]{acronym-label="lmi" acronym-form="singular+short"} ([\[eq:gain_lmi\]](#eq:gain_lmi){reference-type="ref" reference="eq:gain_lmi"}), along with Hölder's inequality on the elements that interact with $\nabla \bar{V}$, to get the upper bound $$\begin{bmatrix} \nabla V^\top \tilde{F}(\mathbf{x}_{i,j})+e_{i,j} & \nabla V^\top \tilde{G}(\mathbf{x}_{i,j}) & \tilde{H}^\top(\mathbf{x}_{i,j}) \\ * & -2\gamma^2\mathbf{I}+ \frac{1}{2}\mathbf{I}& 0 \\ * & * & -2\mathbf{I}+ \frac{1}{2}\mathbf{I}\end{bmatrix} \preceq 0,$$ where $$e_{i,j} = \frac{1}{2}\Big(\beta_i c_{i,j}(1_n^\top l_i) + \sum_{a=0}^p \rho_{i,a}^2c_{i,j}^2 + \sum_{k=0}^m\mu_{i, k}^2c_{i,j}^2(1_n^\top l_i)^2\Big).$$ Because $\bar{V}(\mathbf{x})$ is zero on all simplexes containing the origin, the error bound at the origin is zero. Perform a Schur complement [@caverly2019lmi] about $\frac{1}{2}\sum\limits_{k=0}^m\mu_{i, k}^2c_{i,j}^2(1_n^\top l_i)^2$ to get the equivalent [lmi]{acronym-label="lmi" acronym-form="singular+short"} ([\[eq:gainUB\]](#eq:gainUB){reference-type="ref" reference="eq:gainUB"}). By enforcing Constraint [\[eq:constr_posDef\]](#eq:constr_posDef){reference-type="ref" reference="eq:constr_posDef"} and using the convexity of both the [cpa]{acronym-label="cpa" acronym-form="singular+short"} and the quadratic function on each $n$-simplex, $V(\mathbf{x})$ is a viable storage function for $\mathcal{G}$ in the region $\Omega$. From Theorem 2, ([\[eq:gain_inequal\]](#eq:gain_inequal){reference-type="ref" reference="eq:gain_inequal"}) then holds for all $\mathbf{x}\in \Omega^o$ with $\gamma = \gamma^{*}.$ ◻ After finding an upper bound on the dynamical system's gain, Theorem 5 in [@lavaei2023L2] can be used to determine the small-signal properties of the dynamical system ([\[eq:dynSys\]](#eq:dynSys){reference-type="ref" reference="eq:dynSys"}). This determines the largest value $r_u >0$ that ensures the system remains within a subset of $\Omega^o$ -- where the gain bound is valid.
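Before turning to the numerical examples, the vertex-to-simplex implication of Theorem 2 can be sanity-checked numerically. The sketch below is a toy illustration (assuming NumPy; the scalar choices $\phi(x)=x^2$ and $\zeta(x)=x^2$ on the one-dimensional simplex $\text{co}\{0,1\}$ are assumptions for illustration, not from the paper): it verifies that $\mathbf{E}(\mathbf{x}) - \big(\mathbf{M}(\mathbf{x}) - \sum_i \lambda_i \mathbf{M}(\mathbf{x}_i)\big)$ stays positive semi-definite across the simplex.

```python
import numpy as np

phi = lambda x: x**2        # toy phi with second derivative 2, so beta = 2
zeta = lambda x: x**2       # toy scalar zeta (m = 1), so mu_1 = 2
beta, mu = 2.0, 2.0

def M(x):
    # M(x) from (eq:genericMat) for scalar phi and one-dimensional zeta
    return np.array([[phi(x), zeta(x)], [zeta(x), -1.0]])

# n = 1 simplex with vertices x0 = 0 and x1 = 1; constants c_i from (eq:c)
x0, x1 = 0.0, 1.0
c = [0.0, 1 * abs(x1 - x0) * (abs(x1 - x0) + abs(x1 - x0))]   # [0.0, 2.0]

def E_vertex(ci):
    # vertex error matrix diag(0.5 * (beta c_i + mu^2 c_i^2), 0.5)
    return np.diag([0.5 * (beta * ci + mu**2 * ci**2), 0.5])

# smallest eigenvalue of E(x) - (M(x) - sum_i lambda_i M(x_i)) over the simplex
min_gap = min(
    np.linalg.eigvalsh(
        (1 - lam) * E_vertex(c[0]) + lam * E_vertex(c[1])      # E(x)
        - (M((1 - lam) * x0 + lam * x1)                        # M(x)
           - ((1 - lam) * M(x0) + lam * M(x1)))                # - convex comb.
    ).min()
    for lam in np.linspace(0.0, 1.0, 101)
)
```

A non-negative `min_gap` (up to numerical tolerance) is consistent with the matrix inequality ([\[eq:lmi_bound\]](#eq:lmi_bound){reference-type="ref" reference="eq:lmi_bound"}) for this toy instance.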
# NUMERICAL EXAMPLE Consider the dynamical system $$\label{eq:numExSys} \mathcal{G}: \begin{cases} \dot{x}_{1}= x_2 \\ \dot{x}_2 = -\sin x_1 -x_2 + k(\mathbf{x})u \\ y = x_2, \end{cases}$$ where $\mathbf{x}\in \mathbb{R}^{2},$ $u \in \mathbb{R}$, and $y \in \mathbb{R}.$ Let $\Omega$ be the triangulated region about the origin; an example is shown in Figure [1](#fig:region){reference-type="ref" reference="fig:region"}. In open loop, ([\[eq:numExSys\]](#eq:numExSys){reference-type="ref" reference="eq:numExSys"}) does have a Lyapunov function, $$V(\mathbf{x}) = (1 - \cos x_1) + \frac{1}{2}x_2^2.$$ Substituting $V(\mathbf{x})$ into the [hji]{acronym-label="hji" acronym-form="singular+short"} as a storage function results in the inequality, $$\label{eq:hji_numEx} 0.5x_2^2\left(\gamma^{-1}k(\mathbf{x})^2 - 1\right) \leq 0.$$ Given a $k(\mathbf{x}),$ the $\mathcal{L}_2$-gain of $\mathcal{G}$ can therefore be bounded above. The numerical examples will show how close the general-purpose, algorithmic search proposed here can come to the tight bounds achieved through this $V(\mathbf{x})$, which represents ad hoc ingenuity that cannot necessarily be replicated for all systems. In this section, we consider two variations of $\mathcal{G}$ -- comparing our methodology to analytical techniques for determining the $\mathcal{L}_2$-gain and to previous work that established a system's $\mathcal{L}_2$ gain iteratively. In each numerical example, the gain resulting from solving Problem [\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"} is shown for increasingly fine, uniform triangulations of the region, $\Omega,$ using the triangulation refinement process in [@lavaei2023synth]. Figure [1](#fig:region){reference-type="ref" reference="fig:region"} shows triangulations with 266 and 7266 simplexes -- the fewest and the most used to characterize the $\mathcal{L}_2$-gain of $\mathcal{G}$ in this paper. ![Two different uniform triangulations of the region, $\Omega$, about the origin for the dynamical system, $\mathcal{G}$.](Figures/compare_tep.jpg){#fig:region width="0.95\\columnwidth"} ## Pendulum Consider $\mathcal{G}$ with $k(\mathbf{x}) = 1$; this is a classic pendulum.
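As a rough, simulation-based consistency check (not part of the paper's method; the forward-Euler integration scheme, step size, horizon, and small sinusoidal test input are all arbitrary choices), the pendulum with $k(\mathbf{x}) = 1$ can be simulated from the origin and its empirical input-output energy ratio computed:

```python
import math

dt, T = 1e-3, 40.0
x1 = x2 = 0.0                   # start at the origin
num = den = 0.0                 # running L2 energies of y and u
t = 0.0
while t < T:
    u = 0.05 * math.sin(t)      # small input near the resonant frequency
    y = x2                      # output of the pendulum system
    num += y * y * dt
    den += u * u * dt
    # forward-Euler step of x1' = x2, x2' = -sin(x1) - x2 + u
    x1, x2 = x1 + dt * x2, x2 + dt * (-math.sin(x1) - x2 + u)
    t += dt
ratio = math.sqrt(num / den)    # empirical lower bound on the L2 gain
```

Any single input only furnishes a lower bound on the gain, so the ratio sitting below one is consistent with, but does not by itself prove, the analytic bound $\gamma \leq 1$ that follows from ([\[eq:hji_numEx\]](#eq:hji_numEx){reference-type="ref" reference="eq:hji_numEx"}).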
Previous work, [@lavaei2023L2], could not be used to bound the $\mathcal{L}_2$-gain of a pendulum because $k(\mathbf{x})$ produces a linear input term and therefore does not have the property $\left\lVert g(\mathbf{x})^\top g(\mathbf{x})\right\rVert_{\infty} = 0$ at $\mathbf{x}= 0$. However, the new convex optimization removes this requirement, so Problem [\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"} can now be used on the wide range of systems that have linear control terms. From ([\[eq:hji_numEx\]](#eq:hji_numEx){reference-type="ref" reference="eq:hji_numEx"}), the $\mathcal{L}_2$-gain of $\mathcal{G}$ satisfies $\gamma\leq 1$. Figure [2](#fig:invPend_res){reference-type="ref" reference="fig:invPend_res"} shows the result of optimizing Problem [\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"} for an increasing number of $n$-simplexes, as well as the time taken to complete this convex optimization. The $\mathcal{L}_2$-gain bound decreases as the triangulation becomes more refined, and reaches its lowest value of $\gamma \leq 1.26$ when optimizing Problem [\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"} with 7266 $n$-simplexes. Applying Theorem 5 with Initialization 4 from [@lavaei2023L2], the small-signal property for this system to remain in $\Omega$ was found to be $||u(t)||_{\infty} \leq 0.077.$ ![The $\mathcal{L}_2$-gain of a classic pendulum was solved with an increasing number of simplexes -- resulting in the best estimate of $\gamma \leq 1.26.$](Figures/Pend_results_corrected_9_13.jpg){#fig:invPend_res width="0.95\\columnwidth"} ## Pendulum with Control Affine Input The variation $k(\mathbf{x}) = x_2$ is considered here to compare with previous work [@lavaei2023L2]. From ([\[eq:hji_numEx\]](#eq:hji_numEx){reference-type="ref" reference="eq:hji_numEx"}), $\gamma \leq \max\{x_2^2 \,|\, x_2\in\Omega\}$.
For the given $\Omega,$ $\gamma \leq 0.64.$ Previous work developed a non-convex optimization problem and solved it using [ico]{acronym-label="ico" acronym-form="singular+short"}, which only guarantees convergence to a local minimum, creating more conservative bounds than Problem [\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"}. In [@lavaei2023L2], the gain bound was found to be $\gamma \leq 3.85.$ In comparison, the best $\mathcal{L}_2$ gain bound found using the techniques developed in this paper was $\gamma \leq 0.76$, with the small-signal property $||u(t)||_{\infty} \leq 0.2554.$ Figure [3](#fig:rezEx_res){reference-type="ref" reference="fig:rezEx_res"} shows the optimal gain bound found when solving Problem [\[eq:optGamma\]](#eq:optGamma){reference-type="ref" reference="eq:optGamma"} over different numbers of $n$-simplexes. Even at the lowest number of $n$-simplexes, the gain bound still outperforms [@lavaei2023L2] by finding a bound of $\gamma \leq 1.11.$ ![A pendulum with a linear control affine input was analyzed for increasing numbers of simplexes over $\Omega.$ The best $\mathcal{L}_2$ gain bound was found to be $\gamma \leq 0.76.$](Figures/rezaExample_results_corrected_9_13.jpg){#fig:rezEx_res width="0.95\\columnwidth"} # DISCUSSION This work developed a convex optimization problem to determine the $\mathcal{L}_2$-gain of a dynamical system over a triangulated region about the origin. By reformulating the [hji]{acronym-label="hji" acronym-form="singular+short"} as an [lmi]{acronym-label="lmi" acronym-form="singular+short"} and developing novel [lmi]{acronym-label="lmi" acronym-form="singular+short"} error bounds for a triangulation, the system's gain can be bounded more tightly than with a previous method, [@lavaei2023L2]. A limitation of the current method is that it can only be applied to bounded regions of the system's state space, so future work should consider methods to extend this technique to the entire state space. # ACKNOWLEDGMENT {#acknowledgment .unnumbered} Thank you to Dr.
Miroslav Krstic for his helpful suggestions on the numerical examples. [^1]: This work was supported by NSF GRFP Grant No. 1644868, the Alfred P. Sloan Foundation, ONR YIP Grant No. N00014-23-1-2043, and NSF Grant No. 2303158. [^2]: $^{1}$Amy K. Strong, Reza Lavaei, and Leila J. Bridgeman are with the Department of Mechanical Engineering and Materials Science at Duke University, Durham, NC, 27708, USA. (email: `amy.k.strong@duke.edu, reza.lavaei@duke.edu, leila.bridgeman@duke.edu`), phone: (919) 660-1260
--- abstract: | For post-critically finite rational maps, we give a sufficient condition for the existence of disjoint type renormalizations by using multicurves. We also give examples satisfying the condition. address: - Guizhen Cui, School of Mathematical Sciences, Shenzhen University, Shenzhen, 518052, P. R. China, & Academy of Mathematics and Systems Science Chinese Academy of Sciences, Beijing, 100190, P. R. China - Fei Yang, Department of Mathematics, Nanjing University, Nanjing, 210093, P. R. China - Luxian Yang, School of Mathematical Sciences, Shenzhen University, Shenzhen, 518052, P. R. China author: - Guizhen Cui - Fei Yang - Luxian Yang bibliography: - renorm.bib title: | Renormalizations of rational maps\ and stable multicurves --- [^1] # Introduction A rational map of one complex variable induces a dynamical system on the Riemann sphere by iteration. The Riemann sphere is decomposed into the disjoint union of the Fatou set and the Julia set, defined by whether or not the iterated sequence forms a normal family in a neighborhood of the point (see e.g. [@beardon1991iteration; @Milnor2006Dynamics] for their definitions and basic properties). Renormalization is an important tool in complex dynamics. In [@douady1985dynamics], Douady and Hubbard used renormalizations to explain why the Mandelbrot set occurs over and over again in the parameter space of complex polynomials. Infinite renormalization of quadratic polynomials has been thoroughly studied, for example in [@mcmullen1994complex; @lyubich1997dynamics; @kahn2006priori; @kahn2008priori; @kahn2009priori]. The renormalization of polynomials is usually detected by Yoccoz puzzles, which are constructed from external rays and equipotential curves. For rational maps, there is a similar way to define renormalizations and rational-like maps (refer to [@buff2003virtually; @cui2016renormalizations]). However, there is no effective approach to detect renormalizations in general.
The concept of multicurve was first introduced by Thurston to characterize the combinatorics of rational maps [@douady1993proof]. Later it was found that multicurves can also be used to characterize the disconnected Julia sets of hyperbolic rational maps [@pilgrim2000rational]. In fact, a stable multicurve always gives a combinatorial decomposition of rational maps (see Section [3](#sec:combinatorial){reference-type="ref" reference="sec:combinatorial"}). However, a combinatorial decomposition does not give a renormalization in general. For example, for rational maps which are matings of polynomials, the equator forms a stable multicurve, but it does not induce a renormalization. On the other hand, the boundary components of the renormalization domains of a disjoint type renormalization usually form a stable multicurve. A natural question is under what conditions a stable multicurve induces a renormalization. A multicurve is a Cantor multicurve if its consecutive pullbacks generate a strictly increasing number of curves in each isotopy class. It was proved that if a post-critically finite rational map has a Cantor multicurve, then there exist disjoint type renormalizations such that each boundary component of the renormalization domains is isotopic to a curve in the Cantor multicurve [@cui2016renormalizations]. In this paper, we give a sufficient condition for the existence of disjoint type renormalizations by using stable multicurves which need not be Cantor multicurves. The main theorem is the following (refer to §2.1 for the definition of coiling curves). **Theorem 1**. *Let $f: \widehat{\mathbb{C}}\to\widehat{\mathbb{C}}$ be a post-critically finite rational map. Suppose that $\Gamma$ is a completely stable multicurve of $f$ containing coiling curves.
Then there are disjoint type renormalizations of $f$ such that each boundary component of the renormalization domains is isotopic to a coiling curve in $\Gamma$.* To prove this theorem, we first extract a sub-multicurve such that its complement contains a periodic component $U$ in the sense of isotopy, whose boundary consists of coiling curves (refer to §4.1). Then we apply the argument in [@cui2016renormalizations]: Deform $f$ to a branched covering $F$ in its Thurston equivalence class, such that the dynamics of $F$ on $U$ is well understood. By the Rees-Shishikura theorem, we obtain a semi-conjugacy from $F$ to $f$. The fact that the boundary of $U$ consists of coiling curves ensures that the fibers of the semi-conjugacy do not connect distinct components of $F^{-1}(U)$. This implies that the dynamics of $F$ on $U$ projects to a renormalization of $f$, and the theorem follows. In the last part of this paper, we construct post-critically finite rational maps which have stable multicurves containing coiling curves but no Cantor multicurves. By tuning and disc-annulus surgery, we first construct rational maps which have coiled Fatou domains (see Section [6.2](#sec:coiled){reference-type="ref" reference="sec:coiled"}), and then prove that a coiled Fatou domain is tunable with any post-critically finite polynomial (Theorem [Theorem 6](#thm:tunable){reference-type="ref" reference="thm:tunable"}). The resulting rational maps have stable multicurves containing coiling curves but no Cantor multicurves. **Convention**. In this paper, an isotopy in $\widehat{\mathbb{C}}$ usually means an isotopy in $\widehat{\mathbb{C}}$ relative to a subset of $\widehat{\mathbb{C}}$. For simplicity, we sometimes omit the relative subset when it is clear from the context. # Preliminaries Let $F$ be a branched covering of the Riemann sphere $\widehat{\mathbb{C}}$. We always assume $\deg F\ge 2$ in this paper. Denote by $\Omega_F$ the set of critical points of $F$.
The **post-critical set** of $F$ is $$\mathcal{P}_F=\overline{\bigcup_{n \geq 1}F^n(\Omega_F)}.$$ The map $F$ is **post-critically finite (PCF)** if $\mathcal{P}_F$ is a finite set. By a **marked** branched covering $(F,\mathcal{P})$ we mean a PCF branched covering $F$ with a finite marked set $\mathcal{P}\subset\widehat{\mathbb{C}}$ such that $(\mathcal{P}_F\cup F(\mathcal{P}))\subset\mathcal{P}$. In the case $\mathcal{P}=\mathcal{P}_F$, we write $(F,\mathcal{P})=F$ for simplicity. ## Multicurves {#sec:curve} Let $(F,\mathcal{P})$ be a marked branched covering of $\widehat{\mathbb{C}}$. A Jordan curve $\gamma\subset\widehat{\mathbb{C}}\setminus\mathcal{P}$ is **essential** if each of the two components of $\widehat{\mathbb{C}}\setminus\gamma$ contains at least two points in $\mathcal{P}$. A **multicurve** $\Gamma$ is a non-empty and finite collection of mutually disjoint Jordan curves, each essential and no two isotopic rel $\mathcal{P}$. A multicurve $\Gamma$ is **stable** if each essential curve in $F^{-1}(\Gamma)$ is isotopic to a curve in $\Gamma$; it is **pre-stable** if each curve in $\Gamma$ is isotopic to a curve in $F^{-1}(\Gamma)$; and it is **completely stable** if it is both stable and pre-stable. Any pre-stable multicurve $\Gamma$ generates a completely stable multicurve as follows: Let $\Gamma_n$ be a multicurve representing the isotopy classes of curves in $F^{-n}(\Gamma)$ for $n\ge 0$. Then each curve in $\Gamma_n$ is isotopic to a curve in $\Gamma_{n+1}$. Thus $\#\Gamma_n\le\#\Gamma_{n+1}$. Noticing that any multicurve contains at most $\#\mathcal{P}-3$ curves, there is an integer $n_0\ge 0$ such that $\#\Gamma_{n_0}=\#\Gamma_{n_0+1}$. This shows that $\Gamma_{n_0}$ is completely stable. Let $\Gamma$ be a stable multicurve of $(F,\mathcal{P})$. 
For any $\gamma\in\Gamma$ and any $n\ge 1$, denote by $\mathscr{C}_n(\gamma,\Gamma)$ the collection of curves in $F^{-n}(\Gamma)$ isotopic to $\gamma$ and denote $$\kappa_n(\gamma,\Gamma)=\#\mathscr{C}_n(\gamma,\Gamma).$$ Then $\kappa_n(\gamma,\Gamma)$ is non-decreasing in $n$. We call $\gamma$ a **coiling curve** of $\Gamma$ if $\kappa_n(\gamma,\Gamma)\to\infty$ as $n\to\infty$. A stable multicurve $\Gamma$ is a **Cantor multicurve** if every curve $\gamma\in\Gamma$ is a coiling curve of $\Gamma$. Thus a Cantor multicurve is always completely stable. ## Thurston Theorem A marked branched covering $(F,\mathcal{P})$ is called **Thurston equivalent** to a marked rational map $(f,\mathcal{Q})$ if there exist orientation preserving homeomorphisms $(\phi_0,\phi_1)$ of $\widehat{\mathbb{C}}$ such that $\phi_0$ is isotopic to $\phi_1$ rel $\mathcal{P}$ and $\phi_0\circ F=f\circ\phi_1$. Let $\Gamma$ be a multicurve of $(F,\mathcal{P})$. Its transition matrix $M_{\Gamma}=(a_{\gamma\beta})$ is defined by $$a_{\gamma\beta}=\sum_{\delta}\frac{1}{\deg(F:\,\delta\to\beta)},$$ where the summation is taken over all components $\delta$ of $F^{-1}(\beta)$ isotopic to $\gamma$ rel $\mathcal{P}$. Denote by $\lambda(M_{\Gamma})$ the leading eigenvalue of $M_{\Gamma}$. The multicurve $\Gamma$ is a **Thurston obstruction** of $(F,\mathcal{P})$ if $\lambda(M_{\Gamma})\ge 1$. **Theorem 2** (**Thurston**). *Let $(F,\mathcal{P})$ be a marked branched covering of $\widehat{\mathbb{C}}$ with hyperbolic orbifold. Then $(F,\mathcal{P})$ is Thurston equivalent to a marked rational map if and only if $(F,\mathcal{P})$ has no Thurston obstructions.* Refer to [@douady1993proof; @buff2014teichmuller] for the definition of hyperbolic orbifold and the proof of the theorem. The next lemma is useful to check whether a multicurve is a Thurston obstruction (refer to [@mcmullen1994complex Theorem B.6]). **Lemma 1**. 
*For any multicurve $\Gamma$ with $\lambda(M_{\Gamma})>0$, there is an irreducible multicurve $\Gamma_0\subset\Gamma$ such that $\lambda(M_{\Gamma_0})=\lambda(M_{\Gamma})$.* A multicurve $\Gamma$ of $(F,\mathcal{P})$ is **irreducible** if for each pair $(\gamma,\beta)\in\Gamma\times\Gamma$, there exists a sequence of curves $$\{\gamma=\alpha_0,\, \alpha_1,\cdots,\alpha_n=\beta\}$$ in $\Gamma$ such that for $1\le k\le n$, $F^{-1}(\alpha_k)$ has a component isotopic to $\alpha_{k-1}$ rel $\mathcal{P}$. ## Semi-conjugacy Suppose that $F$ is a PCF branched covering of $\widehat{\mathbb{C}}$ which is Thurston equivalent to a rational map $f$ through the pair $(\psi_0, \psi_1)$. Suppose that $F$ is holomorphic in a neighborhood of the critical cycles of $F$. The next theorem shows that there is a semi-conjugacy from $F$ to $f$ (refer to Appendix A in [@cui2016renormalizations]). Denote by $\mathcal{J}_f$ the Julia set and by $\mathcal{F}_f$ the Fatou set of $f$. **Theorem 3** (**Rees-Shishikura**). *There exist a neighborhood $U$ of the critical cycles of $F$ and a sequence of homeomorphisms $\{\phi_n\}\,(n\geq 0)$ of $\widehat{\mathbb{C}}$ isotopic to $\psi_0$ rel $\mathcal{P}_{F}$, such that* - *$\phi_0|_{U}$ is holomorphic and $\phi_n|_{U}=\phi_0|_{U}$;* - *$f\circ\phi_{n+1}=\phi_n\circ F$ on $\widehat{\mathbb{C}}$;* - *$\{\phi_n\}$ converges uniformly to a map $h$ and $h\circ F=f\circ h$ on $\widehat{\mathbb{C}}$;* - *$h^{-1}(z)$ is a single point if $z\in\mathcal{F}_f$.* ## Renormalizations {#sec:rational-like} Let $f$ be a PCF rational map and $\mathcal{K}\subsetneq\widehat{\mathbb{C}}$ be a non-empty and finite union of non-degenerate continua. The set $\mathcal{K}$ is called a **stable set** of $f$ if $f(\mathcal{K})\subset\mathcal{K}$ and each component of $f^{-1}(\mathcal{K})$ is either a component of $\mathcal{K}$ or disjoint from $\mathcal{K}$. By definition, each component of $\mathcal{K}$ is eventually periodic. 
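To illustrate the definition, here is a minimal example (added for illustration; it is not drawn from the literature cited here): for the PCF map $f(z)=z^2$, take $\mathcal{K}=S^1=\{z:\,|z|=1\}$, the Julia set of $f$. Then $$f(\mathcal{K})=\mathcal{K}\subsetneq\widehat{\mathbb{C}}\quad\text{ and }\quad f^{-1}(\mathcal{K})=\mathcal{K},$$ so the only component of $f^{-1}(\mathcal{K})$ is a component of $\mathcal{K}$; hence $\mathcal{K}$ is a stable set of $f$ consisting of a single periodic component of period $p=1$.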
Refer to [@cui2023decomposition] for the next result, which actually gives a disjoint type renormalization. **Theorem 4**. *Let $f$ be a PCF rational map and $\mathcal{K}\subsetneq\widehat{\mathbb{C}}$ be a stable set of $f$. For each periodic component $K$ of $\mathcal{K}$ with period $p\ge 1$, there is a PCF rational map $g$ and a quasiconformal map $\phi$ of $\widehat{\mathbb{C}}$ such that* - *$\phi\circ f^p=g\circ\phi$ on $K$,* - *$\partial\phi(K)=\mathcal{J}_{g}$, and* - *$\phi$ is holomorphic in the interior of $K$.* *Moreover, the rational map $g$ is unique up to holomorphic conjugacy; it is called a **renormalization** of $f$ on $K$.* # Combinatorial Decompositions {#sec:combinatorial} In this section, we show that a completely stable multicurve induces a combinatorial decomposition for PCF rational maps. Moreover, the small Julia sets are immersed in the Julia set of the original map. Let $f$ be a PCF rational map. A domain or a continuum $E\subset\widehat{\mathbb{C}}$ is **simple type** if there is a simply connected domain $D\subset\widehat{\mathbb{C}}$ such that $E\subset D$ and $\#(D\cap\mathcal{P}_f)\le 1$; it is **annular type** if $E$ is not simple type and there is an annulus $A\subset\widehat{\mathbb{C}}\setminus\mathcal{P}_f$ such that $E\subset A$; and it is **complex type** otherwise. For any complex type domain $U$, we denote by $\widehat{U}$ the union of $U$ with all the simple type components of $\widehat{\mathbb{C}}\setminus U$. Let $\Gamma$ be a completely stable multicurve of $f$. Then each component of $\widehat{\mathbb{C}}\setminus\Gamma$ is complex type. Denote by $\mathscr{U}$ the collection of components of $\widehat{\mathbb{C}}\setminus\Gamma$, and by $\mathscr{U}^n$ the collection of all the complex type components of $\widehat{\mathbb{C}}\setminus f^{-n}(\Gamma)$. 
Since $\Gamma$ is completely stable, for each $n\ge 1$ and any component $U\in\mathscr{U}$, there exists a unique component $U^n\in\mathscr{U}^n$ such that $\widehat{U^n}$ is isotopic to $U$ rel $\mathcal{P}_f$. Conversely, for each component $U^n\in\mathscr{U}^n$, there exists a unique component $U\in\mathscr{U}$ such that $\widehat{U^n}$ is isotopic to $U$ rel $\mathcal{P}_f$. Define a map $f_*$ on $\mathscr{U}$ by $f_*(U)=V$ if $f(U^1)=V$, where $U^1\in\mathscr{U}^1$ is the unique element such that $\widehat{U^1}$ is isotopic to $U$. Since $\mathscr{U}$ is a finite collection, every $U\in\mathscr{U}$ is eventually $f_*$-periodic. For each $f_*$-periodic $U\in\mathscr{U}$ with period $p\ge 1$, there is a unique complex type component $U^p\in\mathscr{U}^p$ such that $\widehat{U^p}$ is isotopic to $U$ rel $\mathcal{P}_f$. Moreover, $f^p(U^p)=U$. **Theorem 5**. *Suppose that $U\in\mathscr{U}$ is periodic with period $p\ge 1$. Let $U^p$ be the unique component of $f^{-p}(U)$ such that $\widehat{U^p}$ is isotopic to $U$. Then there exist a marked rational map $(g,\mathcal{Q})$ and orientation preserving homeomorphisms $(\varphi_0,\varphi_1)$ of $\widehat{\mathbb{C}}$ such that* - *$\varphi_1(U^p)$ is compactly contained in $\varphi_0(U)$,* - *$\mathcal{Q}\cap\varphi_0(U)=\varphi_0(\mathcal{P}_f\cap U)$ and every component of $\widehat{\mathbb{C}}\setminus\varphi_0(U)$ contains exactly one point of $\mathcal{Q}$,* - *$\varphi_1$ is isotopic to $\varphi_0$ rel $\varphi_0^{-1}(\mathcal{Q})$,* - *$\varphi_0\circ f^p=g\circ\varphi_1$ on $U^p$,* - *there is a continuous map $\pi:\,\mathcal{J}_g\to\mathcal{J}_f$ such that $f^p\circ\pi=\pi\circ g$ on $\mathcal{J}_g$.* *Moreover, the rational map $g$ is unique up to holomorphic conjugacy. 
We call $g$ a **combinatorial renormalization** of $f$ on $U$.* *Proof.* Since $\widehat{U^p}$ is isotopic to $U$, there exists a homeomorphism $\theta_0$ isotopic to the identity rel $\mathcal{P}_f$ such that $\theta_0$ is holomorphic in a neighborhood of $\mathcal{P}_f$ and $U':=\theta_0(U^p)$ is compactly contained in $U$. Denote $F_1=f^p\circ\theta_0^{-1}$. Then $U'$ is a component of $F_1^{-1}(U)$. For each component $\gamma$ of $\partial U$, denote by $D(\gamma)$ the component of $\widehat{\mathbb{C}}\setminus\gamma$ disjoint from $U$. For each component $\alpha$ of $\partial U'$, denote by $D(\alpha)$ the component of $\widehat{\mathbb{C}}\setminus\alpha$ disjoint from $U'$. Then for each component $\gamma$ of $\partial U$, there is a unique component $\alpha$ of $\partial U'$ isotopic to $\gamma$ rel $\mathcal{P}_f$. Note that $D(\gamma)\subset D(\alpha)$. Mark one point $a_{\gamma}\in D(\gamma)$ and denote by $\widetilde\mathcal{Q}_0$ the set of these marked points. For each component $\beta$ of $\partial U'$, $\gamma:=F_1(\beta)$ is a component of $\partial U$. There is a branched covering $H_{\beta}:\,D(\beta)\to D(\gamma)$ such that - $H_{\beta}$ can be extended continuously to the boundary and $H_{\beta}=F_1$ on $\beta$, - $H_{\beta}$ has exactly one critical value at $a_{\gamma}$ if $\deg(F_1:\,\beta\to\gamma)>1$, - if $\beta$ is isotopic to a curve $\alpha\subset\partial U$ rel $\mathcal{P}_f$, then $H_{\beta}(a_{\alpha})=a_{\gamma}$. Define a branched covering $G:\,\widehat{\mathbb{C}}\to\widehat{\mathbb{C}}$ by $$G= \begin{cases} F_1 & \text{on } \overline{U'}, \\ H_{\beta} & \text{on } D(\beta) \text{ for each component }\beta \text{ of } \partial U'. \end{cases}$$ Then $G$ is a PCF branched covering. Denote $\widetilde\mathcal{Q}=\widetilde\mathcal{Q}_0\cup(\mathcal{P}_f\cap U)$. Then $\mathcal{P}_G\cup G(\widetilde\mathcal{Q})\subset\widetilde\mathcal{Q}$ and hence $(G,\widetilde\mathcal{Q})$ is a marked branched covering. 
For any stable multicurve $\Gamma_1$ of $(G,\widetilde\mathcal{Q})$, since each component of $\widehat{\mathbb{C}}\setminus U$ contains at most one point of $\widetilde\mathcal{Q}$, one may assume that each curve in $\Gamma_1$ is contained in $U$. Let $M_1$ and $M_2$ be the transition matrices of $\Gamma_1$ under $F_1$ and $(G,\widetilde\mathcal{Q})$, respectively. Then $M_1\ge M_2$. Thus $\lambda(M_2)\le\lambda(M_1)<1$ since $F_1$ has no Thurston obstructions. Hence $(G,\widetilde\mathcal{Q})$ has no Thurston obstructions. Since $G(\widetilde\mathcal{Q}_0)\subset\widetilde\mathcal{Q}_0$, each point in $\widetilde\mathcal{Q}_0$ is eventually periodic. Moreover, each cycle in $\widetilde\mathcal{Q}_0$ contains critical points of $G$ since $\lambda(M_{\Gamma})<1$. By Theorem [Theorem 2](#thm:thurston){reference-type="ref" reference="thm:thurston"}, the marked branched covering $(G,\widetilde\mathcal{Q})$ is Thurston equivalent to a marked rational map $(g,\mathcal{Q})$, i.e., there exist orientation preserving homeomorphisms $(\phi_0,\phi_1)$ of $\widehat{\mathbb{C}}$ such that $\phi_0$ is isotopic to $\phi_1$ rel $\widetilde\mathcal{Q}$ and $\phi_0\circ G=g\circ\phi_1$ on $\widehat{\mathbb{C}}$. Since $F_1$ is holomorphic in a neighborhood of $\mathcal{P}_f$, one may choose $\phi_0$ such that both $\phi_0$ and $\phi_1$ are holomorphic in a neighborhood of $\mathcal{P}_f\cap U$. Denote $\mathcal{Q}_0=\phi_0(\widetilde\mathcal{Q}_0)$. Then $g(\mathcal{Q}_0)\subset\mathcal{Q}_0$ and each cycle in $\mathcal{Q}_0$ is super-attracting. Since each component of $\widehat{\mathbb{C}}\setminus\phi_0(\overline{U})$ contains exactly one point of $\mathcal{Q}_0$ and is disjoint from $\mathcal{Q}\setminus\mathcal{Q}_0$, $\phi_0$ can be chosen such that $\widehat{\mathbb{C}}\setminus\phi_0(\overline{U})\subset\mathcal{F}_{g}$, and each component of $\widehat{\mathbb{C}}\setminus\phi_0(\overline{U})$ is a disk in Böttcher coordinates. 
Thus $\phi_1(U')=g^{-1}(\phi_0(U))$ is compactly contained in $\phi_0(U)$. Note that $G=F_1=f^p\circ\theta_0^{-1}$ on $U'=\theta_0(U^p)$. Thus $$\phi_0\circ G=\phi_0\circ f^p\circ\theta_0^{-1}=g\circ\phi_1\,\text{ on }\theta_0(U^p).$$ Set $\varphi_0=\phi_0$ and $\varphi_1=\phi_1\circ\theta_0$. Then $\varphi_1(U^p)$ is compactly contained in $\varphi_0(U)$, and $$\varphi_0\circ f^p=g\circ\varphi_1\,\text{ on }U^p.$$ Now we have completed the proof of statements (1)-(4). $$\diagram U'\drto_{\phi_1} & U^p\lto_{\theta_0}\dto^{\varphi_1}\rto^{f^p} & U \dto^{\phi_0} \\ & \varphi_1(U^p)\rto^{g} & \phi_0(U) \enddiagram$$ For the proof of statement (5), we need to recover a PCF branched covering from $g$ such that it is Thurston equivalent to $f$. Denote $V=\phi_0(U)$ and $V'=\phi_1(U')$. Then $V'$ is compactly contained in $V$. Since $\phi_1$ is isotopic to $\phi_0$ rel $\widetilde\mathcal{Q}$, there is a homeomorphism $\psi_0:\, U\to V$ such that - $\psi_0$ is isotopic to $\phi_0|_U$ rel $\partial U\cup(U\cap\mathcal{P}_f)$, - $\psi_0=\phi_1$ on $U'$, Recall that $G=\phi_0^{-1}\circ g\circ\phi_1=F_1$ on $U'$. Define $$F_2= \begin{cases} F_1 & \text{on }\widehat{\mathbb{C}}\setminus U' \\ \psi_0^{-1}\circ g\circ\phi_1 & \text{on } U'. \end{cases}$$ Then $F_2=F_1$ on $\partial U'$ and hence is a branched covering of $\widehat{\mathbb{C}}$. It is easy to check that $\mathcal{P}_{F_2}=\mathcal{P}_{F_1}=\mathcal{P}_f$. Define $$\theta_1= \begin{cases} \psi_0^{-1}\circ\phi_0 & \text{on } U \\ \text{id} & \text{on }\widehat{\mathbb{C}}\setminus U. \end{cases}$$ Then $\theta_1$ is isotopic to the identity rel $\mathcal{P}_f$ and $F_2=\theta_1\circ F_1$ on $\widehat{\mathbb{C}}$. Consequently, $F_2=\theta_1\circ f^p\circ\theta_0^{-1}$. Both $\theta_0$ and $\theta_1$ are isotopic to the identity rel $\mathcal{P}_f$ and holomorphic in a neighborhood of $\mathcal{P}_f$. Thus $F_2$ is holomorphic in a neighborhood of $\mathcal{P}_f$. 
By Theorem [Theorem 3](#thm:semiconjugacy){reference-type="ref" reference="thm:semiconjugacy"}, there is a semi-conjugacy $h$ from $F_2$ to $f^p$. Since $\psi_0=\phi_1$ on $U'$, we have $F_2=\phi_1^{-1}\circ g\circ\phi_1$ in a neighborhood of $\phi_1^{-1}(\mathcal{J}_g)$. For any point $z\in h^{-1}(\mathcal{F}_f)$, its forward orbit $\{F_2^n(z)\}$ converges to a super-attracting cycle in $\mathcal{P}_f$. On the other hand, for any point $z\in\phi_1^{-1}(\mathcal{J}_g)$, its forward orbit $\{F_2^n(z)\}$ is always contained in $\phi_1^{-1}(\mathcal{J}_g)$, which is disjoint from the super-attracting cycles in $\mathcal{P}_f$. It follows that $\phi_1^{-1}(\mathcal{J}_g)\subset h^{-1}(\mathcal{J}_f)$. Set $\pi=h\circ\phi_1^{-1}$; then $\pi(\mathcal{J}_g)\subset\mathcal{J}_f$ and $f^p\circ\pi=\pi\circ g$ on $\mathcal{J}_g$. By (2) and (4), the Thurston equivalence class of $g$ is uniquely determined. So the rational map $g$ is unique up to holomorphic conjugacy by Theorem [Theorem 2](#thm:thurston){reference-type="ref" reference="thm:thurston"}. ◻ A natural question is when the immersion $\pi$ in Theorem [Theorem 5](#thm:CD){reference-type="ref" reference="thm:CD"} is an embedding. Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} gives a sufficient condition. # Decomposition by coiling curves {#sec:domain} Any completely stable multicurve $\Gamma$ gives a combinatorial decomposition. When $\Gamma$ contains coiling curves, we expect the decomposition to have periodic pieces bounded by coiling curves. However, this is not true in general. Instead, we want to find a completely stable sub-multicurve in $\Gamma$ satisfying the above property. When $\Gamma$ contains a Cantor multicurve, the decomposition by the Cantor multicurve satisfies the above property. 
In this section, under the condition that $\Gamma$ contains no Cantor multicurves, we extract a completely stable sub-multicurve such that the decomposition by this sub-multicurve has periodic pieces bounded by coiling curves. ## Periodic coiling curves Let $f$ be a PCF rational map and $\Gamma$ be a completely stable multicurve of $f$. A curve $\gamma\in\Gamma$ is **periodic** if there exists an integer $p\ge 1$ such that $f^{-p}(\gamma)$ contains a curve isotopic to $\gamma$ rel $\mathcal{P}_f$. Its period is defined as the minimal integer $p\ge 1$ satisfying the above property. Any pre-stable multicurve $\Gamma$ contains periodic curves. In fact, beginning with any $\gamma_0\in\Gamma$, there is a sequence $\{\gamma_i\}$ in $\Gamma$ such that $f^{-1}(\gamma_{i+1})$ contains a curve isotopic to $\gamma_i$ for $i\ge 0$. Thus there are integers $0\le i_1<i_2\le\#\Gamma$ such that $\gamma_{i_1}$ is isotopic to $\gamma_{i_2}$, i.e., $\gamma_{i_1}$ is periodic. **Lemma 2**. *Let $\Gamma$ be a completely stable multicurve with coiling curves. Then there exist periodic coiling curves in $\Gamma$.* *Proof.* Let $\Gamma_c$ be the collection of coiling curves in $\Gamma$. By the above discussion, we only need to prove that $\Gamma_c$ is pre-stable. For each $\gamma\in\Gamma_c$, let $\alpha_i$ ($1\le i\le m$) be all the curves in $\Gamma$ such that $f^{-1}(\alpha_i)$ contains exactly $k_i\ge 1$ curves isotopic to $\gamma$. Then $$\sum_{i=1}^m k_i\cdot\kappa_n(\alpha_i,\Gamma)=\kappa_{n+1}(\gamma,\Gamma).$$ Thus there is an integer $1\le i\le m$ such that $\kappa_n(\alpha_i,\Gamma)\to\infty$ as $n\to\infty$. So $\alpha_i\in\Gamma_c$ and hence $\Gamma_c$ is pre-stable. ◻ Each periodic curve $\gamma\in\Gamma$ generates a completely stable multicurve $\Lambda_{\gamma}\subset\Gamma$ as follows: $\beta\in\Lambda_{\gamma}$ if there is an integer $k\ge 1$ such that $f^{-k}(\gamma)$ contains a curve isotopic to $\beta$ rel $\mathcal{P}_f$. **Lemma 3**. 
*Let $\Lambda_{\gamma}\subset\Gamma$ be the multicurve generated by a periodic curve $\gamma\in\Gamma$. If $\Gamma$ contains no Cantor multicurves, then $\kappa_n(\gamma,\Lambda_{\gamma})=1$ for all $n\ge 1$.* *Proof.* By definition, each curve $\beta\in\Lambda_{\gamma}$ is isotopic to a curve in $f^{-k}(\gamma)$ for some integer $k\ge 1$. Thus $$\kappa_{n+k}(\beta,\Lambda_{\gamma})\ge\kappa_n(\gamma,\Lambda_{\gamma})$$ for all $n\ge 1$. If $\sup_{n\ge 1}\kappa_n(\gamma,\Lambda_{\gamma})=\infty$, then $\sup_{n\ge 1}\kappa_n(\beta,\Lambda_{\gamma})=\infty$ for all $\beta\in\Lambda_{\gamma}$. This shows that $\Lambda_{\gamma}$ is a Cantor multicurve. This contradicts the condition of the lemma. Thus $\kappa_n(\gamma,\Lambda_{\gamma})$ is bounded. Since $\gamma$ is periodic, there are curves $\gamma_i\in\Lambda_{\gamma}$ ($1\le i<p$) such that $f^{-1}(\gamma_{i})$ contains a curve isotopic to $\gamma_{i-1}$ for $1\le i\le p$ (set $\gamma_p=\gamma_0=\gamma$). The above discussion shows that there is an integer $n_0\ge 1$ such that for $n\ge n_0$ and $1\le i<p$, $\kappa_n(\gamma_i,\Lambda_{\gamma})=\kappa_n(\gamma,\Lambda_{\gamma})=k$. Now $f^{-n_0}(\Lambda_{\gamma})$ contains $k$ curves $\{\delta_1,\cdots,\delta_k\}$ isotopic to $\gamma_1$. If $f^{-1}(\gamma_1)$ contains $m\ge 1$ curves isotopic to $\gamma$, then for each $1\le j\le k$, $f^{-1}(\delta_j)$ contains $m\ge 1$ curves isotopic to $\gamma$. Thus $f^{-n_0-1}(\Lambda_{\gamma})$ would contain $km$ curves isotopic to $\gamma$. So $m=1$, i.e., $f^{-1}(\gamma_1)$ contains exactly one curve isotopic to $\gamma$. Moreover, for any curve $\delta$ in $f^{-n_0}(\Lambda_{\gamma})$ with $\delta\neq\delta_j$ ($1\le j\le k$), $f^{-1}(\delta)$ contains no curves isotopic to $\gamma$. Since $\Lambda_{\gamma}$ is completely stable, any curve $\beta\in\Lambda_{\gamma}$ with $\beta\neq\gamma_1$ is isotopic to a curve $\delta$ in $f^{-n_0}(\Lambda_{\gamma})$. 
Thus $f^{-1}(\beta)$ contains no curves isotopic to $\gamma$ since $f^{-1}(\delta)$ contains no curves isotopic to $\gamma$. Therefore $\kappa_n(\gamma,\Lambda_{\gamma})=k=1$ for all $n\ge 1$. ◻ ## Periodic pieces bounded by coiling curves Let $f$ be a PCF rational map and $\Gamma$ be a completely stable multicurve of $f$ containing no Cantor multicurves. Let $\gamma\in\Gamma$ be a periodic coiling curve with period $p\ge 1$. Then for any $n\ge 1$, there is a unique curve $\gamma^n$ in $f^{-n}(\Lambda_{\gamma})$ isotopic to $\gamma$ by Lemma [Lemma 3](#lem:unique){reference-type="ref" reference="lem:unique"}. Recall that $\mathscr{C}_n(\gamma,\Gamma)$ is the collection of curves in $f^{-n}(\Gamma)$ isotopic to $\gamma$ for $n\ge 1$, and $\kappa_n(\gamma,\Gamma)=\#\mathscr{C}_n(\gamma,\Gamma)\to\infty$ as $n\to\infty$. We call $\gamma$ a **one-side** coiling periodic curve of $\Gamma$ if $\widehat{\mathbb{C}}\setminus\gamma$ has a component $D$ with the property that for each $n\geq 1$, there exists an isotopy $\theta_n$ of $\widehat{\mathbb{C}}$ such that $\theta_n(\gamma^n)=\gamma$ and $\theta_n(\delta)\subset\overline{D}$ for any $\delta\in\mathscr{C}_n(\gamma,\Gamma)$. **Lemma 4**. *Either $\gamma$ is a one-side coiling periodic curve of $\Gamma$, or $\gamma^n$ separates curves in $\mathscr{C}_n(\gamma,\Gamma)\setminus\{\gamma^n\}$ for all $n>p$.* *Proof.* Let $\{\gamma_0=\gamma, \gamma_1,\cdots,\gamma_{p-1}\}\subset\Lambda_{\gamma}$ be the unique collection such that $f^{-1}(\gamma_{i})$ contains a unique curve $\gamma^1_{i-1}$ isotopic to $\gamma_{i-1}$ for $1\le i\le p$ (set $\gamma_p=\gamma_0=\gamma$). Then there is a homeomorphism $\theta_0$ of $\widehat{\mathbb{C}}$ isotopic to the identity, such that $\theta_0(\gamma_i)=\gamma_i^1$. Set $F=f\circ\theta_0$. Then $F(\gamma_i)=\gamma_{i+1}$ for $0\le i<p$. Since $F$ is Thurston equivalent to $f$, we only need to prove the lemma for $F$. Note that for $F$, $\gamma^n=\gamma$ for all $n\ge 1$. 
Assume that $\gamma$ does not separate curves in $\mathscr{C}_{n_0}(\gamma,\Gamma)\setminus\{\gamma\}$ for some integer $n_0>p$. We want to prove $\gamma$ is a one-side coiling periodic curve. By the assumption, there exists a component $D$ of $\widehat{\mathbb{C}}\setminus\gamma$ such that each curve in $\mathscr{C}_{n_0}(\gamma,\Gamma)$ is contained in $\overline{D}$. For each $0<i<p$, let $D_i$ be the component of $\widehat{\mathbb{C}}\setminus\gamma_i$ such that if a curve $\delta\subset D_i\setminus\mathcal{P}_f$ is isotopic to $\gamma_i$, then the unique curve in $F^{-i}(\delta)$ isotopic to $\gamma$ is contained in $D$. Consequently, if a curve $\delta$ isotopic to $\gamma_i$ is disjoint from $D_i$, then the unique curve in $F^{-i}(\delta)$ isotopic to $\gamma$ is disjoint from $D$. We claim that for $0\le i<p$, each curve $\beta\in\mathscr{C}_1(\gamma_i,\Gamma)$ is contained in $\overline{D_i}$ (set $\gamma_0=\gamma$ and $D_0=D$). Assume by contradiction that $\beta$ is disjoint from $\overline{D_i}$. Then $F(\beta)\in\Gamma\setminus\Lambda_{\gamma}$. Since $\Gamma$ is pre-stable, $F^{-n_0+i+1}(\Gamma)$ contains a curve $\alpha$ isotopic to $F(\beta)$. Thus there is a homeomorphism $\theta$ of $\widehat{\mathbb{C}}$ isotopic to the identity, such that $\theta(\alpha)=F(\beta)$ and $\theta(\gamma_i)=\gamma_i$ for $0\le i<p$. By lifting, we obtain a curve $\beta'$ in $F^{-n_0+i}(\Gamma)$ isotopic to $\beta$. Thus $\beta'$ is disjoint from $\overline{D_i}$ since $\beta$ is disjoint from $\overline{D_i}$. By definition of $D_i$, the unique curve $\delta$ in $F^{-i}(\beta')$ isotopic to $\gamma$ is disjoint from $\overline{D}$. This is a contradiction since $\delta\in\mathscr{C}_{n_0}(\gamma,\Gamma)$. 
$$\diagram \delta\rto^{F^i} & \beta'\dto\rto^{F} & \alpha\dto^{\theta} \\ & \beta\rto^{F} & F(\beta) \enddiagram$$ For each curve $\delta\in\mathscr{C}_2(\gamma_i,\Gamma)$, either $F(\delta)\in\mathscr{C}_1(\gamma_{i+1},\Gamma)$, or $F(\delta)$ is isotopic to a curve $\alpha\in\Gamma\setminus\Lambda_{\gamma}$. In the former case, $\delta\subset\overline{D_i}$ by the claim and the definition of $D_i$. In the latter case, as above, there is a homeomorphism $\theta$ of $\widehat{\mathbb{C}}$ isotopic to the identity, such that $\theta(\alpha)=F(\delta)$ and $\theta(\gamma_i)=\gamma_i$ for $0\le i<p$. By lifting, we obtain a curve $\delta'\in\mathscr{C}_1(\gamma_{i},\Gamma)$ isotopic to $\delta$. Thus $\delta\subset\overline{D_i}$ since $\delta'\subset\overline{D_i}$. In summary, each curve in $\mathscr{C}_2(\gamma_i,\Gamma)$ is contained in $\overline{D_i}$. Inductively, each curve in $\mathscr{C}_n(\gamma_i,\Gamma)$ is contained in $\overline{D_i}$ for all $n\ge 1$. In particular, $\gamma$ is a one-side coiling periodic curve. ◻ Now let $\gamma\in\Gamma$ be a one-side coiling curve with period $p\ge 1$. Then there is a unique component $U_{\gamma}$ of $\widehat{\mathbb{C}}\setminus\Lambda_{\gamma}$ whose boundary contains $\gamma$, such that for all $n\ge 1$ each curve in $\mathscr{C}_n(\gamma,\Gamma)$ is disjoint from $U^n_{\gamma}$. Here $U^n_{\gamma}$ is the unique component of $\widehat{\mathbb{C}}\setminus f^{-n}(\Lambda_{\gamma})$ with the property that $\widehat{U_{\gamma}^n}$ is isotopic to $U_{\gamma}$. Consequently, $f^p(U^p_{\gamma})=U_{\gamma}$, i.e., $U_{\gamma}$ is $f_*$-periodic. Denote by $\mathcal{U}_{\gamma}$ the union of all the components of $\widehat{\mathbb{C}}\setminus\Lambda_{\gamma}$ in the grand orbit of $U_{\gamma}$ under $f_*$. Denote $$\Gamma_{\gamma}=\{\beta\in\Gamma:\, \beta\text{ is disjoint from }\mathcal{U}_{\gamma}\}.$$ Then $\Lambda_{\gamma}\subset\Gamma_{\gamma}$. **Lemma 5**. 
*Let $\Gamma$ be a completely stable multicurve containing no Cantor multicurves. Suppose that $\gamma\in\Gamma$ is a one-side coiling periodic curve. Then the following statements hold.* - *$\Gamma_{\gamma}$ is a completely stable multicurve.* - *$U_{\gamma}$ is an $f_*$-periodic component of $\widehat{\mathbb{C}}\setminus\Gamma_{\gamma}$.* - *Each curve in $\Lambda_{\gamma}$ is a coiling curve of $\Gamma_{\gamma}$.* *Proof.* (1) For each $n\ge 0$, let $\mathcal{U}_{\gamma}^n$ be the union of all the complex type components $U^n$ of $\widehat{\mathbb{C}}\setminus f^{-n}(\Lambda_{\gamma})$ such that $\widehat{U^n}$ is isotopic to a component of $\mathcal{U}_{\gamma}$. Then for each component $U^{n+1}$ of $\mathcal{U}_{\gamma}^{n+1}$, $f(U^{n+1})$ is a component of $\mathcal{U}^n_{\gamma}$. Conversely, for each component $U^n$ of $\mathcal{U}^n_{\gamma}$ and each complex type component $U^{n+1}$ of $f^{-1}(U^n)$, $U^{n+1}$ is a component of $\mathcal{U}^{n+1}_{\gamma}$. For any $\beta\in\Gamma_{\gamma}$, $\beta$ is disjoint from $\mathcal{U}_{\gamma}$. Thus $f^{-1}(\beta)$ is disjoint from $\mathcal{U}_{\gamma}^1$. Therefore each essential curve in $f^{-1}(\beta)$ is isotopic to a curve in $\Gamma_{\gamma}$. So $\Gamma_{\gamma}$ is a stable multicurve. Conversely, since $\Gamma$ is pre-stable, there is a curve $\alpha\in\Gamma$ such that $f^{-1}(\alpha)$ contains a curve $\beta_1$ isotopic to $\beta\in\Gamma_{\gamma}$. Either $\alpha\subset\mathcal{U}_{\gamma}$ or $\alpha$ is disjoint from $\mathcal{U}_{\gamma}$. In the latter case, $\alpha\in\Gamma_{\gamma}$. In the former case, $\beta_1\subset\mathcal{U}_{\gamma}^1$. Thus $\beta\in\Lambda_{\gamma}$ since $\beta$ is disjoint from $\mathcal{U}_{\gamma}$. Noticing that $\Lambda_{\gamma}$ is pre-stable, there is a curve $\alpha'\in\Lambda_{\gamma}\subset\Gamma_{\gamma}$ such that $f^{-1}(\alpha')$ contains a curve isotopic to $\beta$. Therefore $\Gamma_{\gamma}$ is pre-stable. 
\(2\) By the definition of $\Gamma_{\gamma}$, $U_{\gamma}$ is a component of $\widehat{\mathbb{C}}\setminus\Gamma_{\gamma}$. Recall that $U_{\gamma}$ is a periodic component of $\widehat{\mathbb{C}}\setminus\Lambda_{\gamma}$, i.e., there is an integer $p\ge 1$ such that $f^p(U_{\gamma}^p)=U_{\gamma}$, where $U_{\gamma}^{p}$ is the component of $\widehat{\mathbb{C}}\setminus f^{-p}(\Lambda_{\gamma})$ such that $\widehat{U^p_{\gamma}}$ is isotopic to $U_{\gamma}$. As above, for any $\beta\in\Gamma_{\gamma}$, $f^{-p}(\beta)$ is disjoint from $\mathcal{U}_{\gamma}^p$. Thus $U^p_{\gamma}$ is a component of $\widehat{\mathbb{C}}\setminus f^{-p}(\Gamma_{\gamma})$. So $U_{\gamma}$ is an $f_*$-periodic component of $\widehat{\mathbb{C}}\setminus\Gamma_{\gamma}$. ![$\gamma$ is a coiling curve of $\Gamma_{\gamma}$.](Lambdacoloring.pdf){#fig:Lambda width="12cm"} \(3\) Since each curve in $\Lambda_{\gamma}$ is isotopic to a curve in $f^{-k}(\gamma)$ for some integer $k\ge 1$, we only need to show that $\gamma$ is a coiling curve of $\Gamma_{\gamma}$. For any curve $\alpha\in\Gamma\cap\mathcal{U}_{\gamma}$, each essential curve $\delta$ in $f^{-n}(\alpha)$ is contained either in $\mathcal{U}_{\gamma}^{n}$ or in an annular type component $A$ of $\widehat{\mathbb{C}}\setminus f^{-n}(\Lambda_{\gamma})$. In the former case, $\delta$ is not isotopic to $\gamma$ since $\mathcal{U}^n_{\gamma}$ is disjoint from $\mathscr{C}_n(\gamma,\Gamma)$. In the latter case, let $\delta_0$ and $\delta_1$ be the two components of $\partial A$. Then both of them are isotopic to $\delta\subset A$ and contained in $f^{-n}(\Lambda_{\gamma})$. If $\delta$ were isotopic to $\gamma$, then $f^{-n}(\Lambda_{\gamma})$ would contain two distinct curves isotopic to $\gamma$. This is impossible by Lemma [Lemma 3](#lem:unique){reference-type="ref" reference="lem:unique"}. In summary, for each curve $\alpha\in\Gamma\setminus\Gamma_{\gamma}$, $f^{-n}(\alpha)$ contains no curves isotopic to $\gamma$. 
So $\kappa_n(\gamma,\Gamma_{\gamma})=\kappa_n(\gamma,\Gamma)\to\infty$ as $n\to\infty$. ◻ # Stable sets {#sec: disjoint} We prove Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} in this section, following the strategy in [@cui2016renormalizations]: First we deform the rational map $f$ to a branched covering $F$ in its Thurston equivalence class, such that $F$ has a topological stable set corresponding to the pieces bounded by coiling curves defined in §4. Applying the Rees-Shishikura Theorem, we obtain a semi-conjugacy from $F$ to $f$. The key point of the proof is that distinct components of the topological stable set have disjoint images under the semi-conjugacy. This implies that the projection of the topological stable set is a stable set of $f$. Let $f$ be a PCF rational map and $\Gamma$ be a completely stable multicurve containing coiling curves. Assume that there is a one-side coiling periodic curve $\gamma\in\Gamma$ with period $p\ge 1$. Let $\Lambda_{\gamma}$ be the multicurve generated by $\gamma$. By Lemma [Lemma 5](#lem:one-side){reference-type="ref" reference="lem:one-side"}, we obtain a completely stable multicurve $\Gamma_{\gamma}\subset\Gamma$ such that each curve in $\Lambda_{\gamma}$ is a coiling curve of $\Gamma_{\gamma}$, and an $f_*$-periodic component $U_{\gamma}$ of $\widehat{\mathbb{C}}\setminus\Gamma_{\gamma}$ whose boundary consists of curves in $\Lambda_{\gamma}$. Pick an annulus neighborhood $A_{\beta}\supset\beta$ for each $\beta\in\Gamma_{\gamma}$, such that $\partial A_{\beta}$ consists of two curves isotopic to $\beta$ and all these annuli $A_{\beta}$ have pairwise disjoint closures. Denote $$\mathcal{A}=\bigcup_{\beta\in\Gamma_{\gamma}} A_{\beta}\,\text{ and }\,\mathcal{A}_0=\bigcup_{\beta\in\Lambda_{\gamma}} A_{\beta}.$$ Let $A^1_{\beta}$ be the minimal annulus containing all the components of $f^{-1}(\mathcal{A})$ whose boundary components are isotopic to $\beta$. 
Then there is a homeomorphism $\theta$ of $\widehat{\mathbb{C}}$ isotopic to the identity such that $\theta$ is holomorphic in a neighborhood of $\mathcal{P}_f$ and $\theta(A_{\beta})=A^1_{\beta}$ for all $\beta\in\Gamma_{\gamma}$. Denote $F=f\circ\theta$. For any $n\ge 0$, let $E^n_{\gamma}$ be the complex type component of $\widehat{\mathbb{C}}\setminus F^{-n}(\mathcal{A})$ contained in $U_{\gamma}$. Then each component of $E^n_{\gamma}\setminus E^{n+1}_{\gamma}$ is a simple type Jordan domain. Moreover, $F^p(E^{n+p}_{\gamma})=E^n_{\gamma}$. Set $$K=\bigcap_{n\ge 0}E^n_{\gamma}\,\text{ and }\,\mathcal{K}=\bigcup_{i=1}^p F^i(K).$$ Then $F^p(K)=K$ and $\mathcal{K}$ is a topological stable set of $F$, i.e., each component of $F^{-1}(\mathcal{K})$ either is a component of $\mathcal{K}$ or is disjoint from $\mathcal{K}$. By Theorem [Theorem 3](#thm:semiconjugacy){reference-type="ref" reference="thm:semiconjugacy"}, there is a semi-conjugacy $h:\widehat{\mathbb{C}}\to\widehat{\mathbb{C}}$ from $F$ to $f$. **Lemma 6**. *For any two distinct components $E_1$ and $E_2$ of $\widehat{\mathbb{C}}\setminus F^{-n}(\mathcal{A}_0)$, $h(E_1)\cap h(E_2)=\emptyset$.* The next lemma comes from [@cui2016renormalizations]. We reproduce their proof here for completeness. **Lemma 7**. *Let $\Gamma\subset\widehat{\mathbb{C}}$ be a finite disjoint union of Jordan curves. Suppose that $D\subset\widehat{\mathbb{C}}$ is a Jordan domain and $E\subset D$ is a continuum. Then there is an integer $N<\infty$ such that for any two distinct points $z_1,z_2\in E$, there exists a Jordan arc $\delta\subset D$ connecting $z_1$ with $z_2$ such that $\#(\delta\cap\Gamma)\le N$.* *Proof.* Denote $$\begin{aligned} \mathscr{L} &=& \{\sigma: \sigma\text{ is a component of }\Gamma\cap D\}\text{ and} \\ \mathscr{L}_0 &=& \{\sigma\in\mathscr{L}:\,\sigma\cap E\neq\emptyset\}.\end{aligned}$$ Let $\tau: S^1\times\{1,2,\cdots,n\}\to\Gamma$ be a homeomorphism. 
Then $\tau^{-1}(\Gamma\cap E)$ is a compact subset, which is covered by the open sets $\{\tau^{-1}(\sigma)\}$ ($\sigma\in\mathscr{L}$) in $S^1\times\{1,2,\cdots,n\}$. Thus finitely many such open sets cover $\tau^{-1}(\Gamma\cap E)$. So $N:=\#\mathscr{L}_0<\infty$. For any two distinct points $z_1,z_2\in E$, denote by $\mathscr{L}(z_1,z_2)$ the collection of arcs $\sigma\in\mathscr{L}$ such that either $\sigma\cap\{z_1,z_2\}\neq\emptyset$ or $\sigma\cup\partial D$ separates $z_1$ from $z_2$. Then for each $\sigma\in\mathscr{L}(z_1,z_2)$, $\sigma\cap E\neq\emptyset$ since $E$ is connected. Thus $\mathscr{L}(z_1,z_2)\subset\mathscr{L}_0$ and hence $\#\mathscr{L}(z_1,z_2)\le\#\mathscr{L}_0=N<\infty$. By the definition of $\mathscr{L}(z_1,z_2)$, there exists a Jordan arc $\delta\subset D$ connecting $z_1$ with $z_2$ such that $\delta$ intersects each $\sigma\in\mathscr{L}(z_1,z_2)$ at a single point and is disjoint from all arcs in $\mathscr{L}\setminus\mathscr{L}(z_1,z_2)$. So $\#(\delta\cap\Gamma)\le\#\mathscr{L}(z_1,z_2)\le N$. ◻ We say a continuum $E$ crosses an annulus $A$ if $E$ intersects both boundary components of $A$. Note that $h^{-1}(z)$ is connected for any $z\in\widehat{\mathbb{C}}$ since $h$ is a limit of a sequence of homeomorphisms. Define $$T=\{z\in\widehat{\mathbb{C}}: h^{-1}(z)\text{ crosses a component of }\mathcal{A}_0\}.$$ Then $T\subset\mathcal{J}_f$ by Theorem [Theorem 3](#thm:semiconjugacy){reference-type="ref" reference="thm:semiconjugacy"} (3). Moreover, $T$ is a closed set and $f(T)\subset T$. Indeed, let $\{z_n\}_{n\ge 1}$ be a sequence of points in $T$ converging to a point $z_{\infty}\in\widehat{\mathbb{C}}$. Passing to a subsequence, we may assume that for all $n\ge 1$, there exist two points $\zeta_n,\zeta'_n\in h^{-1}(z_n)$, such that they are contained in the two distinct boundary components of the same annulus $A\in\mathcal{A}_0$. 
By taking a further subsequence, we may assume that $\{\zeta_n\}_{n\ge 1}$ and $\{\zeta'_n\}_{n\ge 1}$ converge to points $\zeta_{\infty}$ and $\zeta'_{\infty}$ respectively. The continuity of $h$ implies that $h(\zeta_{\infty})=z_{\infty}=h(\zeta'_{\infty})$. Thus $z_{\infty}\in T$ and $T$ is a closed set. For each point $z\in T$, $h^{-1}(z)$ crosses some component $A$ of $\mathcal{A}_0$. Since $\Lambda_{\gamma}$ is a completely stable multicurve, $F^{-1}(\mathcal{A}_0)$ has a component $A^1$ contained essentially in $A$. Thus $h^{-1}(f(z))$ crosses $F(A^1)$, which is a component of $\mathcal{A}_0$, since $F(h^{-1}(z))=h^{-1}(f(z))$. Hence $f(T)\subset T$. **Lemma 8**. *The set $T$ is empty.* *Proof.* Assume that $T$ is non-empty. Set $T_{\infty}=\bigcap_{n\geq 1}f^{n}(T)$. Since $T$ is a closed set and $f(T)\subset T$, $T_{\infty}$ is a non-empty closed set and $f(T_{\infty})=T_{\infty}$. Hence there is a point $z_0 \in T_{\infty}$ and a sequence of points $\{z_n\}_{n\ge 0}$ contained in $T_{\infty}$ such that $f(z_{n+1})=z_n$ for any $n\geq 0$; this sequence is a backward orbit of $z_0$ under $f$. If there is an integer $n_0$ such that $z_{n_0}$ is not periodic, then $z_n$ is not periodic for any $n \geq n_0$, and hence we may assume that no $z_n$ is a critical point of $f$, since $f$ has only finitely many critical points. If every $z_n$ is periodic, then $\{z_n\}$ is a finite set which forms a repelling cycle in the Julia set of $f$, and such a cycle contains no critical points of $f$. So in either case we may assume that no $z_n$ is a critical point of $f$. Set $L_n = h^{-1}(z_n)$. Then by Theorem [Theorem 3](#thm:semiconjugacy){reference-type="ref" reference="thm:semiconjugacy"} (3) and (4), $L_n$ is a component of $F^{-1}(L_{n-1})$ and there is a Jordan domain $D_0$ containing $L_0$ such that $F^{n}: D_n\to D_0$ is a homeomorphism for any $n\ge 1$, where $D_n$ is the component of $F^{-n}(D_0)$ containing $L_n$. 
By Lemma [Lemma 7](#lem:top){reference-type="ref" reference="lem:top"}, there exists an integer $N\ge 0$ such that for any two distinct points $\zeta_0,\zeta'_0\in L_0$, there is a Jordan arc $\delta\subset D_0$ connecting $\zeta_0$ with $\zeta'_0$ and $\#(\delta\cap\Gamma_{\gamma})\le N$. Recall that for any curve $\beta\in\Lambda_{\gamma}$, $\kappa_n(\beta,\Gamma_{\gamma})\to\infty$. So there exists an integer $m>0$ such that for any component $A$ of $\mathcal{A}_0$, there are at least $N+1$ components of $F^{-m}(\mathcal{A})$ contained essentially in $A$. Since $L_m$ crosses a component of $\mathcal{A}_0$, there exist two distinct points $\zeta_m,\zeta'_m\in L_m$ such that $F^{-m}(\Gamma_{\gamma})$ has at least $N+1$ components separating $\zeta_m$ from $\zeta'_m$. Now the two points $F^{m}(\zeta_m)$ and $F^{m}(\zeta'_m)$ are contained in $L_0$. We have shown that there is a Jordan arc $\delta\subset D_0$ connecting $F^{m}(\zeta_m)$ with $F^{m}(\zeta'_m)$ such that $\#(\delta\cap\Gamma_{\gamma})\le N$. Let $\delta_m$ be the component of $F^{-m}(\delta)$ contained in $D_m$. Then $\delta_m$ connects $\zeta_m$ and $\zeta'_m$. Since $F^{m}:\delta_m\to\delta$ is a homeomorphism, we have $\#(\delta_m\cap F^{-m}(\Gamma_{\gamma}))\le N$. This contradicts the fact that $F^{-m}(\Gamma_{\gamma})$ has at least $N+1$ essential components separating $\zeta_m$ from $\zeta'_m$. Hence $T$ is empty. ◻ *Proof of Lemma [Lemma 6](#lem:key){reference-type="ref" reference="lem:key"}.* For any two distinct components $E_1$ and $E_2$ of $\widehat{\mathbb{C}}\setminus F^{-n}(\mathcal{A}_0)$, if $h(E_1)\cap h(E_2)\neq\emptyset$, then there is a point $z\in h(E_1)\cap h(E_2)$. Thus $h^{-1}(z)$ crosses a component $A$ of $F^{-n}(\mathcal{A}_0)$. By Theorem [Theorem 3](#thm:semiconjugacy){reference-type="ref" reference="thm:semiconjugacy"} (4), $F^n(h^{-1}(z))=h^{-1}(f^n(z))$ crosses $F^n(A)$, which is a component of $\mathcal{A}_0$. 
This contradicts Lemma [Lemma 8](#lem:empty){reference-type="ref" reference="lem:empty"}. ◻ *Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}.* If $\Gamma$ contains Cantor multicurves, then there is a disjoint type renormalization by Theorem 1.3 in [@cui2016renormalizations]. Now we assume that $\Gamma$ contains coiling curves but no Cantor multicurves. *Case 1. $\Gamma$ contains a periodic one-side coiling curve.* By Lemma [Lemma 6](#lem:key){reference-type="ref" reference="lem:key"}, for any two distinct components $K_1, K_2$ of $F^{-1}(\mathcal{K})$, $h(K_1)\cap h(K_2)=\emptyset$. Thus $h(\mathcal{K})$ is a stable set of $f$. Applying Theorem [Theorem 4](#thm:renorm){reference-type="ref" reference="thm:renorm"}, we obtain a disjoint type renormalization of $f$. *Case 2. $\Gamma$ contains no one-side coiling curves.* Let $\alpha\in\Gamma$ be a periodic coiling curve with period $p\ge 1$. Let $\alpha^n$ be the unique curve in $\mathscr{C}_n(\alpha,\Lambda_{\alpha})$. By Lemma [Lemma 5](#lem:one-side){reference-type="ref" reference="lem:one-side"}, $\alpha^n$ separates the curves in $\mathscr{C}_n(\alpha,\Gamma)\setminus\{\alpha^n\}$ for $n>p$. Pick an annulus neighborhood $A_{\beta}\supset\beta$ for each $\beta\in\Gamma$, such that $\partial A_{\beta}$ consists of two curves isotopic to $\beta$ and all these annuli $A_{\beta}$ have pairwise disjoint closures. Denote $$\mathcal{A}=\bigcup_{\beta\in\Gamma}A_{\beta}\,\text{ and }\,\mathcal{A}_0=\bigcup_{\beta\in\Lambda_{\alpha}}A_{\beta}.$$ Let $A^1_{\beta}$ be the minimal annulus containing all the components of $f^{-1}(\mathcal{A})$ whose boundary components are isotopic to $\beta$. 
Then there is a homeomorphism $\theta$ of $\widehat{\mathbb{C}}$ isotopic to the identity such that $\theta$ is holomorphic in a neighborhood of $\mathcal{P}_f$ and $\theta(A_{\beta})=A^1_{\beta}$ for all $\beta\in\Gamma$. Denote $F=f\circ\theta$. For each $\beta\in\Lambda_{\alpha}$ and any $n\ge 0$, let $A^n_{\beta}$ be the unique essential annulus in $A_{\beta}$ such that $F^n(A^n_{\beta})$ is a component of $\mathcal{A}_0$. Since $\alpha^n$ separates the curves in $\mathscr{C}_n(\alpha,\Gamma)\setminus\{\alpha^n\}$, we know that $$\mathcal{K}=\bigcap_{n\ge 1}\bigcup_{\beta\in\Lambda_{\alpha}}A^n_{\beta}$$ is a topological stable set of $F$. The remainder of the proof is the same as in Case 1. Now the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is complete. ◻ # Rational Maps with coiling curves {#example} In this section, by tuning and disk-annulus surgery, we will construct rational maps with completely stable multicurves which contain coiling curves but no Cantor multicurves. ## Tuning We introduce tuning on a fixed Fatou domain for simplicity. Let $R$ be a PCF rational map with a fixed Fatou domain $D$. Let $\phi$ be a Böttcher coordinate of $D$, i.e., $\phi$ is a conformal map from $D$ onto the unit disk $\mathbb{D}$ such that $\phi\circ R\circ\phi^{-1}(z)=z^d$, where $d:=\deg R|_D$. Then $\phi^{-1}$ extends continuously to $\partial\mathbb{D}$ since $\partial D$ is locally connected. Let $P$ be a monic PCF polynomial with $\deg P=d$ such that $\#\mathcal{P}_P\ge 3$ and $\mathcal{P}_P$ contains the origin. Let $\chi(z)=|z|z/(1-|z|)$. Then $\chi:\,\mathbb{D}\to\mathbb{C}$ is a homeomorphism. Thus $\chi^{-1}\circ P\circ\chi$ is a branched covering of $\mathbb{D}$, which can be continuously extended to the boundary with boundary value $z\mapsto z^d$. Thus $\phi^{-1}\circ\chi^{-1}\circ P\circ\chi\circ\phi$ is a branched covering of $D$ whose boundary value coincides with $R$. 
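As a quick sanity check, the claim that $\chi(z)=|z|z/(1-|z|)$ is a homeomorphism from $\mathbb{D}$ onto $\mathbb{C}$ can be verified numerically: $\chi$ preserves arguments, and $s=|\chi(z)|$ satisfies $s=r^2/(1-r)$ with $r=|z|$, so solving the quadratic gives an explicit inverse. The sketch below (including the inverse formula) is an illustration worked out here, not taken from the paper.

```python
import math

def chi(z):
    """chi(z) = |z| z / (1 - |z|), mapping the open unit disk onto C."""
    return abs(z) * z / (1 - abs(z))

def chi_inv(w):
    """Inverse of chi: r = |z| is the positive root of r^2 + s*r - s = 0, s = |w|."""
    if w == 0:
        return 0j
    s = abs(w)
    r = (math.sqrt(s * s + 4 * s) - s) / 2
    return r * (w / s)  # chi preserves the argument, so z and w point the same way

# chi_inv inverts chi on a few sample points of the disk
for z in [0.3 + 0.4j, -0.9j, 0.99, -0.5 + 0.1j]:
    assert abs(chi_inv(chi(z)) - z) < 1e-9
```

Since the radius map $r\mapsto r^2/(1-r)$ increases from $0$ to $\infty$ on $[0,1)$, this check confirms that $\chi$ is a radius-by-radius bijection.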
The **formal tuning** of $R$ with $P$ in $D$ is defined by $$G= \begin{cases} R & \text{on }\widehat{\mathbb{C}}\setminus D, \\ \phi^{-1}\circ\chi^{-1}\circ P\circ\chi\circ\phi & \text{on }D. \end{cases}$$ Since $\mathcal{P}_P$ contains the origin, we have $\mathcal{P}_G=\phi^{-1}\circ\chi^{-1}(\mathcal{P}_P)\cup\mathcal{P}_R$. Thus $G$ is a PCF branched covering. If $G$ has no Thurston obstructions, then it is Thurston equivalent to a rational map $g$, which is called the **tuning** of $R$ with $P$ in $D$. Here we remark that the Böttcher coordinate $\phi$ is unique up to rotations with angles $2k\pi/(d-1)$ ($1\le k<d-1$). Thus the definition of formal tuning has $d-1$ choices when $d>2$. ## Coiled Fatou domains {#sec:coiled} Let $g$ be a PCF rational map with a fixed Fatou domain $D$. We say $D$ is **coiled** by a PCF polynomial $Q$ if there is an essential curve $\alpha\subset\widehat{\mathbb{C}}\setminus\mathcal{P}_g$ such that - $g^{-1}(\alpha)$ contains a unique essential curve, which is isotopic to $\alpha$. - $g^{-1}(\alpha)$ contains a curve peripheral around the fixed point $a\in D$. - $Q$ is a combinatorial renormalization of $g$ on $U$, where $U$ is the component of $\widehat{\mathbb{C}}\setminus\alpha$ disjoint from the point $a$. A Jordan curve $\beta\subset\widehat{\mathbb{C}}\setminus\mathcal{P}_g$ is **peripheral** around a point $a\in\mathcal{P}_g$ if one component of $\widehat{\mathbb{C}}\setminus\beta$ intersects $\mathcal{P}_g$ exactly at the point $a$. See Theorem [Theorem 5](#thm:CD){reference-type="ref" reference="thm:CD"} for the definition of combinatorial renormalization. ![A coiled Fatou domain.](curves.pdf){#fig: curves width="9cm"} **Theorem 6**. *Let $g$ be a PCF rational map with a fixed Fatou domain $D$ coiled by a PCF polynomial $Q$. 
If $Q$ is injective on its Hubbard tree, then the formal tuning of $g$ with any PCF polynomial $P$ in $D$ is Thurston equivalent to a rational map.* Refer to [@orsay Definition 4.1] for the definition of Hubbard trees. *Proof.* For any PCF polynomial $P$, let $F$ be the formal tuning of $g$ with $P$. Then $\mathcal{P}_F=\phi^{-1}\circ\chi^{-1}(\mathcal{P}_P)\cup\mathcal{P}_g$. Let $\beta\subset D\setminus\mathcal{P}_F$ be a Jordan curve such that the annular component of $D\setminus\beta$ is disjoint from $\mathcal{P}_F$. Then $\beta$ is an essential curve and $F^{-1}(\beta)$ contains a unique essential curve, which is isotopic to $\beta$. By definition, there is an essential curve $\alpha\subset\widehat{\mathbb{C}}\setminus\mathcal{P}_F$ disjoint from $\beta$, such that $F^{-1}(\alpha)$ contains at least two essential curves, one isotopic to $\alpha$ and another isotopic to $\beta$. Denote by $U$ the component of $\widehat{\mathbb{C}}\setminus\alpha$ disjoint from $\beta$. Since $Q$ is injective on its Hubbard tree, there is a tree $T\subset U$ containing $\mathcal{P}_F\cap U$ and a homeomorphism $\theta$ of $\widehat{\mathbb{C}}$ isotopic to the identity, such that $F$ is injective on $\theta(T)$ and $F(\theta(T))\subset T$. By Lemma [Lemma 1](#lem:irreducible){reference-type="ref" reference="lem:irreducible"}, we only need to prove that for any irreducible multicurve $\Gamma$ of $F$, its leading eigenvalue satisfies $\lambda(M_{\Gamma})<1$. For each curve $\gamma\subset\widehat{\mathbb{C}}\setminus\mathcal{P}_F$, denote $$k(\alpha, \gamma)=\min_{\gamma'\sim\gamma}\#(\alpha\cap\gamma'),\,\text{ and }\,k(\beta,\gamma)=\min_{\gamma'\sim\gamma}\#(\beta\cap\gamma'),$$ where the minimum is taken over all the curves $\gamma'$ isotopic to $\gamma$. If there exists a curve $\gamma\in\Gamma$ such that $k(\beta,\gamma)=0$, then for any curve $\delta$ in $F^{-1}(\gamma)$, $k(\beta,\delta)=0$ since $F^{-1}(\beta)$ contains a curve isotopic to $\beta$. 
By the irreducibility, $k(\beta,\gamma)=0$ for all $\gamma\in\Gamma$. Thus the transition matrix of $\Gamma$ under $F$ is equal to the transition matrix of $\Gamma$ under $g$. Hence $\lambda(M_{\Gamma})<1$. If $k(\alpha,\gamma)=0$ for some curve $\gamma\in\Gamma$, then $k(\beta,\delta)=0$ for any curve $\delta$ contained in $F^{-1}(\gamma)$ since $F^{-1}(\alpha)$ contains a curve isotopic to $\beta$. So we also have $\lambda(M_{\Gamma})<1$ from the above discussion. Now assume that for each curve $\gamma\in\Gamma$, both $k(\beta,\gamma)$ and $k(\alpha,\gamma)$ are non-zero. Denote $$k(T,\gamma)=\min_{\gamma'\sim\gamma}\#(T\cap\gamma'),$$ where the minimum is taken over all the curves $\gamma'$ isotopic to $\gamma$. Then $k(T,\gamma)\neq 0$ for all $\gamma\in\Gamma$. For each $\gamma\in\Gamma$, we may assume that $\#(T\cap\gamma)=k(T,\gamma)$. Let $\delta_i$ ($1\le i\le n$) be all the curves in $F^{-1}(\gamma)$. Since $F$ is injective on $\theta(T)$ and $F(\theta(T))\subset T$, we obtain $$\sum_{i=1}^n k(T,\delta_i)\le\sum_{i=1}^n\#(\theta(T)\cap\delta_i)\le\#(T\cap\gamma)=k(T,\gamma).$$ Combining this inequality with the irreducibility of $\Gamma$, we obtain that $k(T,\gamma)$ is a constant for $\gamma\in\Gamma$, and for any $\gamma\in\Gamma$, $F^{-1}(\gamma)$ contains exactly one curve isotopic to a curve in $\Gamma$. If $\lambda(M_{\Gamma})\ge 1$, then $\Gamma$ contains a Levy cycle $\{\gamma_i\}$ ($1\le i\le p$), i.e., $F^{-1}(\gamma_{i+1})$ contains a curve $\gamma'_i$ isotopic to $\gamma_i$ for $1\le i\le p$ (set $\gamma_{p+1}=\gamma_1$), and $\deg F|_{\gamma_i'}=1$. Now we may assume that $\#(\alpha\cap\gamma_{i+1})=k(\alpha,\gamma_{i+1})$. Since $\deg F|_{\gamma_i'}=1$, we have $$k(\beta,\gamma_i)+k(\alpha,\gamma_i)\le\#(F^{-1}(\alpha)\cap\gamma'_i)\le\#(\alpha\cap\gamma_{i+1})=k(\alpha,\gamma_{i+1}).$$ This is impossible since $\{\gamma_i\}$ is a cycle and both $k(\beta,\gamma_i)$ and $k(\alpha,\gamma_i)$ are non-zero for all $1\le i\le p$. Thus $\lambda(M_{\Gamma})<1$. 
By Theorem [Theorem 2](#thm:thurston){reference-type="ref" reference="thm:thurston"}, $F$ is Thurston equivalent to a rational map. ◻ **Corollary 1**. *The rational map derived in Theorem [Theorem 6](#thm:tunable){reference-type="ref" reference="thm:tunable"} has a completely stable multicurve containing coiling curves but no Cantor multicurves.* *Proof.* We only need to show that the formal tuning $F$ of $g$ with any PCF polynomial $P$ has a multicurve satisfying the above property. Let $\alpha'\subset\widehat{\mathbb{C}}\setminus\mathcal{P}_g$ be a curve isotopic to $\alpha$ and disjoint from $D$. Then $g^{-1}(\alpha')$ is also disjoint from $D$. Let $\alpha_0$ be the unique essential curve in $g^{-1}(\alpha')$ and $\beta_i$ ($0\le i<k$) be all the curves in $g^{-1}(\alpha')$ which are peripheral around the fixed point $a\in D$. After tuning, $\alpha_0$ is still essential in $\widehat{\mathbb{C}}\setminus\mathcal{P}_F$, and $\beta_i$ are essential and isotopic to each other in $\widehat{\mathbb{C}}\setminus\mathcal{P}_F$. All the other curves in $g^{-1}(\alpha')$ are still non-essential in $\widehat{\mathbb{C}}\setminus\mathcal{P}_F$. Thus $\Gamma=\{\alpha_0,\beta_0\}$ is a completely stable multicurve of $F$. Moreover, $F^{-1}(\beta_0)$ contains exactly one essential curve, which is isotopic to $\beta_0$, and $F^{-1}(\alpha_0)$ contains one curve isotopic to $\alpha_0$ and $k\ge 1$ curves isotopic to $\beta_0$. Thus $\kappa_n(\beta_0,\Gamma)=kn+1\to\infty$ as $n\to\infty$ but $\kappa_n(\alpha_0,\Gamma)=1$ for all $n\ge 1$. Thus $\Gamma$ is a completely stable multicurve of $F$ containing coiling curves but no Cantor multicurves. ◻ ## Tuning and disk-annulus surgery Now we want to construct a PCF rational map satisfying the conditions of Theorem [Theorem 6](#thm:tunable){reference-type="ref" reference="thm:tunable"}. We begin with a PCF rational map $R$ with a fixed Fatou domain $U$ such that $R^{-1}(U)$ has another component $V\neq U$. 
Let $Q$ be a monic PCF polynomial with $\deg Q=\deg R|_U$ such that $\#\mathcal{P}_Q\ge 3$, $\mathcal{P}_Q$ contains the origin and $Q$ is injective on its Hubbard tree $T$. Recall that the formal tuning of $R$ with $Q$ in $U$ is defined as $$G_0= \begin{cases} R & \text{on }\widehat{\mathbb{C}}\setminus U, \\ \phi^{-1}\circ\chi^{-1}\circ Q\circ\chi\circ\phi & \text{on }U, \end{cases}$$ where $\phi$ is a Böttcher coordinate of $U$ and $\chi(z)=|z|z/(1-|z|)$. Pick a Jordan curve $\alpha\subset U$ separating $\phi^{-1}\circ\chi^{-1}(T)$ from $\partial U$. Then $R^{-1}(\alpha)$ contains a unique curve $\alpha_0$ in $U$, which also separates $\phi^{-1}\circ\chi^{-1}(T)$ from $\partial U$, and a unique curve $\beta_0$ in $V$, which separates the unique pre-periodic point $c\in V$ from $\partial V$. Pick a Jordan curve $\beta_1\subset V$ disjoint from $\beta_0$ and separating the point $c$ from $\beta_0$. Denote by - $U(\alpha)\subset U$: the domain bounded by $\alpha$, - $V(\beta_0)\subset V$: the domain bounded by $\beta_0$, - $V(\beta_1)\subset V(\beta_0)$: the domain bounded by $\beta_1$, and - $A(\beta_0,\beta_1)\subset V(\beta_0)$: the annulus bounded by $\beta_0$ and $\beta_1$. Define a branched covering $G:\,\widehat{\mathbb{C}}\to\widehat{\mathbb{C}}$ such that - (tuning) $G=G_0$ on $\widehat{\mathbb{C}}\setminus V(\beta_0)$, - (disk-annulus surgery) $G:\, A(\beta_0,\beta_1)\to U(\alpha)$ is a branched covering whose critical values are contained in $\phi^{-1}\circ\chi^{-1}(\mathcal{P}_Q)$, - (new fixed critical point) $G:\, V(\beta_1)\to\widehat{\mathbb{C}}\setminus\overline{U(\alpha)}$ is a branched covering with exactly one critical point at $c\in V$ and $G(c)=c$. Then $G$ is a PCF branched covering with $\mathcal{P}_G=\phi^{-1}\circ\chi^{-1}(\mathcal{P}_Q)\cup\mathcal{P}_{R}\cup\{c\}$. Note that $\alpha$ is an essential curve in $\widehat{\mathbb{C}}\setminus\mathcal{P}_G$. 
$G^{-1}(\alpha)$ contains exactly one essential curve $\alpha_0$ isotopic to $\alpha$. From $G^{-1}(\alpha)=R^{-1}(\alpha)\cup\beta_1$, we know that $\deg G=\deg R+\deg G|_{\beta_1}$. ![Definition of $G$. ](Gn.pdf){#fig: G width="10cm"} **Lemma 9**. *Assume that $G_0$ has no Thurston obstructions. Then $G$ is Thurston equivalent to a rational map $g$, which satisfies the conditions in Theorem [Theorem 6](#thm:tunable){reference-type="ref" reference="thm:tunable"}.* *Proof.* By Lemma [Lemma 1](#lem:irreducible){reference-type="ref" reference="lem:irreducible"}, we only need to prove that for any irreducible multicurve $\Gamma$ of $G$, its leading eigenvalue satisfies $\lambda(M_{\Gamma})<1$. We may assume that $k(\alpha,\gamma)=\#(\alpha\cap\gamma)$ for every $\gamma\in\Gamma$. If there is a curve $\gamma\in\Gamma$ such that $k(\alpha,\gamma)=0$, then $G^{-1}(\gamma)$ is disjoint from $\beta_0\cup\alpha_0$. Thus $k(\alpha,\gamma)=0$ for each $\gamma\in\Gamma$ since $\Gamma$ is irreducible. Therefore either each curve in $\Gamma$ is contained in $U(\alpha)$, or each curve in $\Gamma$ is disjoint from $U(\alpha)$. In the former case, the transition matrix of $\Gamma$ under $G$ is equal to the transition matrix of $\Gamma$ under $G_0$. Hence $\lambda(M_{\Gamma})<1$. In the latter case, for each $\gamma\in\Gamma$, $G^{-1}(\gamma)$ is disjoint from $A(\beta_0,\beta_1)$. Thus the transition matrix of $\Gamma$ under $G$ is also equal to the transition matrix of $\Gamma$ under $G_0$. Hence $\lambda(M_{\Gamma})<1$. Now we assume $k(\alpha,\gamma)\neq 0$ for each $\gamma\in\Gamma$. Using the same argument as in the proof of Theorem [Theorem 6](#thm:tunable){reference-type="ref" reference="thm:tunable"}, we know that if $\lambda(M_{\Gamma})\ge 1$, then $\Gamma$ contains a Levy cycle $\{\gamma_i\}$ since $Q$ is injective on its Hubbard tree. 
Moreover, $$\#[(\beta_0\cup\beta_1)\cap\gamma'_i]+k(\alpha,\gamma_i)\le\#(G^{-1}(\alpha)\cap\gamma'_i)\le\#(\alpha\cap\gamma_{i+1})=k(\alpha,\gamma_{i+1}).$$ Thus $\#[(\beta_0\cup\beta_1)\cap\gamma'_i]=0$ and hence $G|_{\gamma'_i}=G_0|_{\gamma'_i}$. This is a contradiction since $G_0$ has no Thurston obstructions. In summary, we have $\lambda(M_{\Gamma})<1$. By Theorem [Theorem 2](#thm:thurston){reference-type="ref" reference="thm:thurston"}, $G$ is Thurston equivalent to a PCF rational map $g$ by a pair of orientation-preserving homeomorphisms $(\phi_0,\phi_1)$ of $\widehat{\mathbb{C}}$. Let $D$ be the fixed Fatou domain of $g$ containing the fixed critical point $\phi_0(c)$. Then $D$ is coiled by the polynomial $Q$. Thus the rational map $g$ satisfies the conditions in Theorem [Theorem 6](#thm:tunable){reference-type="ref" reference="thm:tunable"}. ◻ ## Examples of coiled Fatou domains Here we provide some explicit examples of rational maps with coiled Fatou domains by using Lemma [Lemma 9](#lem:existence){reference-type="ref" reference="lem:existence"}. **Example 1**. We consider $$R(z)=-\frac{27}{4}z^2(z-1) \text{\quad and\quad} Q(z)=z^2-1.$$ The cubic polynomial $R$ has $2$ critical points $0$, $2/3$ and a critical orbit $2/3\mapsto 1\mapsto 0\mapsto 0$. Then $R$ has a fixed Fatou domain $U$ containing $0$ and $R^{-1}(U)=U\cup V$, where $V$ is the Fatou domain containing $1$. Since $R$ is a real polynomial and has a Fatou domain containing $2/3$ (which is different from $U$ and $V$), we have $\overline{U}\cap \overline{V}=\emptyset$ and hence all bounded Fatou domains of $R$ have pairwise disjoint closures. 
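The critical points and the critical orbit of $R$ claimed above can be confirmed in exact rational arithmetic (a sanity-check sketch; the factorization of $R'$ is computed here, not quoted from the paper):

```python
from fractions import Fraction as F

def R(z):
    # R(z) = -(27/4) z^2 (z - 1)
    return F(-27, 4) * z * z * (z - 1)

def dR(z):
    # R'(z) = -(27/4)(3z^2 - 2z) = -(27/4) z (3z - 2)
    return F(-27, 4) * z * (3 * z - 2)

# the finite critical points are 0 and 2/3
assert dR(F(0)) == 0 and dR(F(2, 3)) == 0
# the critical orbit is 2/3 -> 1 -> 0 -> 0, as stated in Example 1
assert R(F(2, 3)) == 1 and R(F(1)) == 0 and R(F(0)) == 0
```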
By [@shen2021primitive], we obtain a cubic polynomial $g_0$ which is the tuning of $R$ with $Q$ and $$g_0(z)=\frac{1+\nu}{\nu}z^2\left(z-\mu\right)+\nu \text{\quad with } \mu=\frac{1+\nu+\nu^2}{1+\nu},$$ where $\nu=-0.1219358974084060...$ is chosen such that $g_0$ has $2$ finite critical points $0$, $c_0=2\mu/3$ and a critical orbit $c_0\mapsto 1\mapsto 0\mapsto \nu\mapsto 0$. See Figure [5](#fig:exam1){reference-type="ref" reference="fig:exam1"} for the Julia sets of $R$ and $g_0$. ![Julia sets $\mathcal{J}_R$, $\mathcal{J}_{g_0}$ in Example 1. The critical orbits are marked. The ranges of these pictures are all $[-0.5,1.3]\times [-0.25,0.25]$.](Julia-cubic-R.png "fig:"){#fig:exam1 width="13cm"} ![Julia sets $\mathcal{J}_R$, $\mathcal{J}_{g_0}$ in Example 1. The critical orbits are marked. The ranges of these pictures are all $[-0.5,1.3]\times [-0.25,0.25]$.](Julia-cubic-g0.png "fig:"){#fig:exam1 width="13cm"} By performing a disk-annulus surgery and applying Lemma [Lemma 9](#lem:existence){reference-type="ref" reference="lem:existence"}, we obtain a rational map $g$ with a coiled Fatou domain. The map $g$ has the form $$g(z)=-\frac{d(z-a)(z-b)(z-c)^3}{bc^3(z^2-q z+d)} \text{\quad with } q=\frac{d(3ab+ac+bc)}{abc},$$ where $$\begin{split} a&=- 0.1203582660251960..., \quad b=0.1369645575161714..., \\ c&=\hskip0.33cm 0.9975907956140505..., \quad d=1.0001239392656081... \end{split}$$ are chosen such that $g$ has $8$ critical points: $\infty$ and $c$ are double, $0$, $1$, $c_1$ and $c_2$ are simple; and $4$ critical orbits: $\infty\mapsto \infty$, $c_2\mapsto 1\mapsto 1$, $c\mapsto 0\mapsto a\mapsto 0$ and $c_1\mapsto a\mapsto 0\mapsto a$. See Figure [6](#fig:exam1-g){reference-type="ref" reference="fig:exam1-g"} for the Julia set of $g$, where the Fatou domain of $g$ containing $1$ (colored green) is a coiled Fatou domain of $g$. ![The Julia set $\mathcal{J}_g$ and successive zoom-ins near the fixed point $1$ of $g$ in Example 1. 
The widths of these pictures are $10^{-k}$, where $k=1,2,\cdots, 6$ respectively.](Julia-cubic-g.png "fig:"){#fig:exam1-g width="13cm"}\ **Example 2**. We consider $$R(z)=\frac{2z^2}{z^4+1}\text{\quad and\quad} Q(z)=z^2-1.$$ The rational map $R$ has $6$ simple critical points $\infty$, $0$, $\pm 1$, $\pm i$ and two critical orbits $\infty\mapsto 0$ and $\pm i\mapsto -1\mapsto 1\mapsto 1$. Then $R$ is PCF, with a fixed Fatou domain $U$ containing $0$ and $R^{-1}(U)=U\cup V$, where $V$ is the Fatou domain containing $\infty$. After tuning $R$ with $Q$ (such a tuning exists by [@cui2018hyperbolic]), we obtain a quartic rational map $$g_0(z)=\frac{z^2-\nu^2}{\nu(\nu-1)z^4-(2\nu^2-2\nu-1)z^2-\nu},$$ where $\nu=-0.4287815744562657...$ is chosen such that the even map $g_0$ has $6$ critical points $\infty$, $0$, $\pm 1$, $\pm c_0=\pm i\sqrt{1-2\nu^2}$ and two critical orbits $\infty\mapsto 0\mapsto \nu\mapsto 0$ and $\pm c_0\mapsto -1\mapsto 1\mapsto 1$. See Figure [8](#fig:exam2){reference-type="ref" reference="fig:exam2"} for the Julia sets of $R$ and $g_0$. ![Julia sets $\mathcal{J}_R$ and $\mathcal{J}_{g_0}$ in Example 2. The critical orbits are marked.](f0.png "fig:"){#fig:exam2 width="6.5cm"} ![Julia sets $\mathcal{J}_R$ and $\mathcal{J}_{g_0}$ in Example 2. The critical orbits are marked.](f1.png "fig:"){#fig:exam2 width="6.5cm"} ![The Julia set $\mathcal{J}_g$ in Example 2 and its zoom-in near the origin. The widths of these two pictures are $64$ and $4$ respectively.](g0.png "fig:"){#fig:g0 width="6.5cm"} ![The Julia set $\mathcal{J}_g$ in Example 2 and its zoom-in near the origin. The widths of these two pictures are $64$ and $4$ respectively.](g0b.png "fig:"){#fig:g0 width="6.5cm"} By performing a disk-annulus surgery and applying Lemma [Lemma 9](#lem:existence){reference-type="ref" reference="lem:existence"}, we obtain a rational map $g$ with a coiled Fatou domain. 
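The critical data of $R(z)=2z^2/(z^4+1)$ stated above can be checked numerically (a sanity-check sketch; the factorization of the numerator of $R'$ as $4z(1-z^4)$ is computed here, not quoted from the paper):

```python
def R(z):
    # R(z) = 2 z^2 / (z^4 + 1)
    return 2 * z * z / (z ** 4 + 1)

# numerator of R'(z) is 4z(z^4 + 1) - 2z^2 * 4z^3 = 4z(1 - z^4),
# so the finite critical points are 0, +-1, +-i (and infinity is critical too)
for z in [0, 1, -1, 1j, -1j]:
    assert abs(4 * z * (1 - z ** 4)) < 1e-12

# critical orbits: +-i -> -1 -> 1 -> 1, and (infinity ->) 0 -> 0
assert abs(R(1j) + 1) < 1e-12 and abs(R(-1j) + 1) < 1e-12
assert abs(R(-1) - 1) < 1e-12 and abs(R(1) - 1) < 1e-12
assert R(0) == 0
```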
It has the form $$g(z)=\frac{c(z^2-1)^2(z^2-a^2)}{z^4+bz^2-ac},$$ where $$\begin{split} a&=\hskip0.33cm 0.1266022073620638..., \quad b=-0.0469758128977771..., \\ c&=-0.0327926126839635... \end{split}$$ are chosen such that $g$ has $10$ simple critical points $\infty$, $0$, $\pm 1$, $\pm c$, $\pm c_1$, $\pm c_2$ and $4$ critical orbits $\infty\mapsto \infty$, $\pm c\mapsto -c_1\mapsto c_1\mapsto c_1$, $\pm 1\mapsto 0\mapsto a\mapsto 0$ and $\pm c_2\mapsto a\mapsto 0\mapsto a$. See Figure [10](#fig:g0){reference-type="ref" reference="fig:g0"} for the Julia set of $g$, where the unbounded domain is a coiled Fatou domain of $g$. [^1]: The first author is supported by National Key R&D Program of China No.2021YFA1003203, and NSFC grants No.12131016 and No.12071303. The second author is supported by NSFC grants No.12222107 and No.12071210. The third author is supported by Basic and Applied Basic Research Foundation of Guangdong Province Grants No.2023A1515010058.
--- abstract: | In this paper, we give a characterization of Coxeter group representations of Lusztig's $\boldsymbol{a}$-function value $1$, and determine all the irreducible such representations for certain simply laced Coxeter groups. address: | Beijing International Center for Mathematical Research, Peking University,\ 5 Yi He Yuan Road, Haidian District, Beijing, 100871, China author: - Hongsheng Hu bibliography: - a-fun-one.bib date: September 2, 2023 title: Representations of Coxeter groups of Lusztig's $\boldsymbol{a}$-function value 1 --- Coxeter groups ,Lusztig's $\boldsymbol{a}$-function ,second-highest two-sided cell ,cell representations ,simply laced Coxeter groups 20C15 ,20C08 ,20F55 # Introduction {#sec-intro} In the seminal paper [@KL79], D. Kazhdan and G. Lusztig introduced the notion of (left, right, two-sided) cells for an arbitrary Coxeter group $(W,S)$. Each cell gives a representation of the corresponding Hecke algebra (and the Coxeter group, after specialization), called a cell representation. Cell representations have proved to be of great significance in representation theory. If $(W,S)$ is irreducible, there is a two-sided cell, formulated in [@Lusztig83-intrep], consisting of the non-identity elements of $W$ whose reduced expression is unique. This two-sided cell, which we denote by $\mathcal{C}_1$, is the second highest with respect to the cell order. In another remarkable paper [@Lusztig85-cell-i], G. Lusztig defined a function $\boldsymbol{a}$ on $W$, which is a very useful tool for studying cells and representations. The function $\boldsymbol{a}$ takes a constant value on each two-sided cell. In particular, the $\boldsymbol{a}$-function value is $1$ on $\mathcal{C}_1$. In fact, $\mathcal{C}_1$ can be characterized as the set of elements whose $\boldsymbol{a}$-function value is $1$ (see Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"}). 
It is a long-standing conjecture that the function $\boldsymbol{a}$ is uniformly bounded on $W$ whenever $(W,S)$ is of finite rank (and an explicit bound has been proposed, see [@Xi94 Question 1.13], [@Lusztig14-hecke-unequal Conjecture 13.4]). If the function $\boldsymbol{a}$ is bounded on $W$, it can be proved that any irreducible representation of $W$ (or of the corresponding Hecke algebra) is a quotient of some cell representation provided by a unique two-sided cell. It turns out that the converse is true, by considering the sign representation of $W$ (see Remark [\[rmk-sign-1\]](#rmk-sign-1){reference-type="ref" reference="rmk-sign-1"}). As a byproduct, the existence of the lowest two-sided cell in a Coxeter group with bounded $\boldsymbol{a}$-function is proved (see Corollary [Corollary 7](#cor-cell-low){reference-type="ref" reference="cor-cell-low"}, and also [@Xi12 Theorem 1.5]). In general, without the boundedness assumption on the function $\boldsymbol{a}$, if an irreducible representation is a quotient of a cell representation provided by some two-sided cell, we may define the $\boldsymbol{a}$-function value of the representation to be the $\boldsymbol{a}$-function value on that cell. We may further extend this definition to arbitrary representations, not necessarily irreducible (see Definition [\[def-a\]](#def-a){reference-type="ref" reference="def-a"}). As an example, the geometric representation of $W$ defined in [@Bourbaki-Lie456 Ch. V, §4] is of $\boldsymbol{a}$-function value $1$. The main aim of this paper is to study the representations of $W$ of $\boldsymbol{a}$-function value $1$. In general, the cell representation provided by the two-sided cell $\mathcal{C}_1$ is too "big". 
It possibly admits infinitely many non-isomorphic simple quotients (see Remark [\[rmk-a=1\]](#rmk-a=1){reference-type="ref" reference="rmk-a=1"} [\[rmk-a=1-2\]](#rmk-a=1-2){reference-type="ref" reference="rmk-a=1-2"} and Remark [\[rmk-simply-laced\]](#rmk-simply-laced){reference-type="ref" reference="rmk-simply-laced"} [\[rmk-simply-laced-1\]](#rmk-simply-laced-1){reference-type="ref" reference="rmk-simply-laced-1"}). Moreover, it possibly admits simple quotients of infinite dimension as well (see Remark [\[rmk-a=1\]](#rmk-a=1){reference-type="ref" reference="rmk-a=1"} [\[rmk-a=1-3\]](#rmk-a=1-3){reference-type="ref" reference="rmk-a=1-3"}). The first main result (Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}) is a characterization of representations of $W$ of $\boldsymbol{a}$-function value $1$. It states that a nonzero and nontrivial representation of $W$ is of $\boldsymbol{a}$-function value $1$ if and only if there is no common eigenvector of eigenvalue $-1$ for any two defining generators $r,t \in S$ whose product $rt$ has finite order $m_{rt}$. We feel that the function $\boldsymbol{a}$ might be related in some implicit manner to the common eigenvectors of eigenvalue $-1$ shared by the defining generators. The second main result (Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"}) is a determination (and hence a classification) of irreducible representations of $\boldsymbol{a}$-function value $1$ for irreducible simply laced Coxeter groups whose Coxeter graph is a tree or admits only one cycle. In other words, for these Coxeter groups we are able to determine all the simple quotients of the cell representation provided by $\mathcal{C}_1$. It turns out that such representations are some kind of reflection representations, i.e., on the representation space each generator in $S$ acts by an abstract reflection. 
These representations are called R-representations in another paper [@Hu23] by the author, where their isomorphism classes are classified. For convenience, we briefly present the concrete construction and the classification of R-representations in our specified setting in Subsection [4.3](#subsec-R){reference-type="ref" reference="subsec-R"}. In particular, all of these representations are finite dimensional. However, for other irreducible simply laced Coxeter groups, there exist infinite dimensional irreducible representations of $\boldsymbol{a}$-function value $1$, whose classification seems difficult (see Remark [\[rmk-simply-laced\]](#rmk-simply-laced){reference-type="ref" reference="rmk-simply-laced"} [\[rmk-simply-laced-2\]](#rmk-simply-laced-2){reference-type="ref" reference="rmk-simply-laced-2"}). The rest of this paper is organized as follows. In Section [2](#sec-pre){reference-type="ref" reference="sec-pre"}, we briefly recall some preliminary facts on dihedral groups and Kazhdan-Lusztig theory. In Section [3](#sec-main-1){reference-type="ref" reference="sec-main-1"}, we give the characterization of Coxeter group representations of $\boldsymbol{a}$-function value $1$. In Section [4](#sec-main-2){reference-type="ref" reference="sec-main-2"}, we focus on certain simply laced Coxeter groups and determine all of their irreducible representations of $\boldsymbol{a}$-function value $1$. # Preliminaries {#sec-pre} We use $e$ to denote the neutral element in a group. The base field of vector spaces is assumed to be $\mathbb{C}$ unless otherwise specified. ## Representations of dihedral groups {#subsec-dih} Let $D_m := \langle r,t \mid r^2 = t^2 = (rt)^m = e \rangle$ be a finite dihedral group ($m \in \mathbb{N}_{\ge 2}$), and $V_{r,t} := \mathbb{C}\beta_r \oplus \mathbb{C}\beta_t$ be a vector space with formal basis $\{\beta_r, \beta_t\}$. 
For natural numbers $k$ satisfying $1 \leq k \leq \frac{m}{2}$, we define the action $\rho_k : D_m \to GL(V_{r,t})$ by $$\refstepcounter{theorem} \label{eq-Dm-rho-k} \begin{aligned} r \cdot \beta_r & := - \beta_r, \quad & r \cdot \beta_t & := \beta_t + 2\cos\frac{k\uppi}{m} \beta_r,\\ t\cdot \beta_t & := - \beta_t, & t \cdot \beta_r & := \beta_r + 2\cos\frac{k\uppi}{m} \beta_t. \end{aligned}$$ Intuitively, $\rho_k$ has a real form spanned by $\beta_r$ and $\beta_t$, and $r$ and $t$ act on the (real) plane as two reflections with respect to two lines at an angle of $\frac{k \uppi}{m}$, see Figure [\[rho-k\]](#rho-k){reference-type="ref" reference="rho-k"}. If $k < \frac{m}{2}$, then $\rho_k$ is irreducible. If $m$ is even and $k = \frac{m}{2}$, then $\rho_\frac{m}{2} \simeq \varepsilon_r \oplus \varepsilon_t$ splits into a direct sum of two representations of dimension $1$, where $$\refstepcounter{theorem} \label{eq-epsilon-rt} \varepsilon_r: r \mapsto -1, t \mapsto 1; \quad \quad \varepsilon_t: r \mapsto 1, t \mapsto -1.$$ We denote by $\mathds{1}$ the trivial representation, and by $\varepsilon$ the sign representation, i.e., $$\refstepcounter{theorem} \label{eq-epsilon} \mathds{1}: r,t \mapsto 1; \quad \quad \varepsilon: r,t \mapsto -1.$$ **Lemma 1**. *([@Serre77 §5.3]) The following are all the irreducible representations of $D_m$, $$\begin{gathered} \{\mathds{1}, \varepsilon\} \cup \{\rho_1, \dots, \rho_{\frac{m-1}{2}}\}, \quad \text{if $m$ is odd}; \\ \{\mathds{1}, \varepsilon, \varepsilon_r, \varepsilon_t\} \cup \{\rho_1, \dots, \rho_{\frac{m}{2}-1}\}, \quad \text{if $m$ is even}.\end{gathered}$$ These representations are pairwise non-isomorphic.* [\[rmk-Dm-rep\]]{#rmk-Dm-rep label="rmk-Dm-rep"} The $(-1)$-eigenspace of either $r$ or $t$ in the representation space $V_{r,t}$ of $\rho_k$ $(1 \le k \le \frac{m}{2})$ is one-dimensional. 
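The matrices of $r$ and $t$ in $\rho_k$ on the basis $(\beta_r, \beta_t)$, read off from [\[eq-Dm-rho-k\]](#eq-Dm-rho-k){reference-type="eqref" reference="eq-Dm-rho-k"}, can be checked numerically against the defining relations of $D_m$ and against the remark above. The following is a small sketch (the helper name `rho` is ours, not from the paper):

```python
import numpy as np

def rho(m, k):
    # Matrices of r and t in rho_k on the basis (beta_r, beta_t):
    # r.beta_r = -beta_r, r.beta_t = beta_t + c*beta_r, and symmetrically for t,
    # where c = 2 cos(k*pi/m).
    c = 2 * np.cos(k * np.pi / m)
    r = np.array([[-1.0, c], [0.0, 1.0]])
    t = np.array([[1.0, 0.0], [c, -1.0]])
    return r, t

for m in range(2, 9):
    for k in range(1, m // 2 + 1):
        r, t = rho(m, k)
        # defining relations of D_m: r^2 = t^2 = (rt)^m = e
        assert np.allclose(r @ r, np.eye(2)) and np.allclose(t @ t, np.eye(2))
        assert np.allclose(np.linalg.matrix_power(r @ t, m), np.eye(2))
        # the (-1)-eigenspace of r (and of t) is one-dimensional
        assert sum(np.isclose(np.linalg.eigvals(r), -1)) == 1
        assert sum(np.isclose(np.linalg.eigvals(t), -1)) == 1
```

Here $rt$ acts as a rotation by $\frac{2k\uppi}{m}$ on the real plane, which is why $(rt)^m$ is the identity for every $k$.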
## The Kazhdan-Lusztig cells and Lusztig's function $\boldsymbol{a}$ Let $(W,S)$ be a Coxeter group with Coxeter matrix $(m_{st})_{s,t \in S}$ and length function $\ell$. Following the seminal papers [@KL79; @Lusztig85-cell-i], we define the *Hecke algebra* $\mathcal{H}$ of $(W,S)$ as follows. Let $q^{ \frac{1}{2}}$ be an indeterminate and $\mathbb{Z}[q^{\pm \frac{1}{2}}]$ be the ring of Laurent polynomials in $q^{ \frac{1}{2}}$. Let $\mathcal{H}$ be a free $\mathbb{Z}[q^{\pm \frac{1}{2}}]$-module with formal basis $\{\widetilde{T}_w\}_{w \in W}$. The multiplication on $\mathcal{H}$ is defined by $$\widetilde{T}_s \widetilde{T}_w := \begin{cases} \widetilde{T}_{sw} , & \mbox{if } \ell(sw) > \ell(w) \\ (q^{ \frac{1}{2}}- q^{- \frac{1}{2}}) \widetilde{T}_w + \widetilde{T}_{sw} , & \mbox{if } \ell(sw) < \ell(w) \end{cases}, \quad \forall s \in S, w \in W.$$ Then $\mathcal{H}$ is an associative $\mathbb{Z}[q^{\pm \frac{1}{2}}]$-algebra with unity $\widetilde{T}_e$. Here $\widetilde{T}_w = q^{- \frac{\ell(w)}{2}} T_w$ where $T_w$ is the standard basis element in [@KL79]. Besides the normalized standard basis $\{\widetilde{T}_w \mid w \in W\}$, we have the basis $\{C_w \mid w \in W\}$ in [@KL79], such that $$\begin{aligned} C_w & = \sum_{y \in W} (-1)^{\ell(w)+\ell(y)} q^{\frac{1}{2} (\ell(w) - \ell(y))} P_{y,w}(q^{-1}) \widetilde{T}_y \refstepcounter{theorem} \label{eq-Cw} \\ & = \sum_{y \in W} (-1)^{\ell(w)+\ell(y)} q^{-\frac{1}{2} (\ell(w) - \ell(y))} P_{y,w}(q) \widetilde{T}_{y^{-1}}^{-1}, \notag %\refstepcounter{theorem}\end{aligned}$$ where $P_{y,w} \in \mathbb{Z}[q]$, $P_{y,w} = 0$ unless $y \leq w$, $P_{w,w} = 1$, and $\deg_q P_{y,w} \leq \frac{1}{2} (\ell(w) - \ell(y) - 1)$ if $y < w$. Here $\le$ is the Bruhat order on $W$, and $y < w$ indicates $y \le w$ and $y \ne w$. [\[eg-C\]]{#eg-C label="eg-C"} 1. [\[eg-C-1\]]{#eg-C-1 label="eg-C-1"} $C_e = \widetilde{T}_e$; $C_s = \widetilde{T}_s - q^{ \frac{1}{2}}$ for $s \in S$. 2. 
[\[eg-C-2\]]{#eg-C-2 label="eg-C-2"} If $r, t \in S$ with $r \ne t$ and $m_{rt} = m < \infty$, we write for simplicity $r_{k} := rtr \cdots$ (product of $k$ factors), $t_k = trt \cdots$ ($k$ factors), and $w_{rt} := r_{m} = t_{m}$. Then $P_{y, w_{rt}} = 1$ for any $y = r_k$ or $t_k$ ($0 \le k \le m$). Thus $$\refstepcounter{theorem} \label{Cwrt} C_{w_{rt}} = \widetilde{T}_{w_{rt}} - q^{ \frac{1}{2}}(\widetilde{T}_{r_{m-1}} + \widetilde{T}_{t_{m-1}}) + \dots + (-1)^i q^{ \frac{i}{2}} (\widetilde{T}_{r_{m-i}} + \widetilde{T}_{t_{m-i}}) + \dots + (-1)^m q^{ \frac{m}{2}}.$$ We keep the notations $r_k, t_k$ and $w_{rt}$ in Example [\[eg-C\]](#eg-C){reference-type="ref" reference="eg-C"} [\[eg-C-2\]](#eg-C-2){reference-type="ref" reference="eg-C-2"} throughout the paper. For any $x, y, w \in W$, let $h_{x, y, w} \in \mathbb{Z}[q^{\pm \frac{1}{2}}]$ be such that $C_x C_y = \sum_{w \in W} h_{x,y,w} C_w$. Following G. Lusztig [@Lusztig85-cell-i], for each $w \in W$ we define $\boldsymbol{a}(w) := \min \{i \in \mathbb{N} \mid q^{ \frac{i}{2}} h_{x,y,w} \in \mathbb{Z}[q^{ \frac{1}{2}}], \forall x, y \in W\}$. Then $\boldsymbol{a}(w)$ is a well-defined natural number. We say the function $\boldsymbol{a}$ is *bounded* if there is $N \in \mathbb{N}$ such that $\boldsymbol{a}(w) \le N$ for any $w \in W$. It is a long-standing conjecture that Lusztig's function $\boldsymbol{a}$ is bounded on any Coxeter group $(W,S)$ of finite rank (i.e., $\lvert S \rvert < \infty$). Moreover, it is also conjectured that it is bounded by the maximal length of longest elements of all finite parabolic subgroups of $W$. This conjecture has been proved in some cases (see for example [@Belolipetsky04; @LS19; @Lusztig85-cell-i; @Xi12; @Zhou13]). For $x, y \in W$, we say $y \underset {\mathrm{LR}} {\leqslant}x$ if there exist $H_1, H_2 \in \mathcal{H}$, such that $C_y$ has nonzero coefficient in the expression of $H_1 C_x H_2$ with respect to the basis $\{C_w\}_{w \in W}$. 
We say $x \underset{\mathrm{LR}}{\sim}y$ if $x \underset {\mathrm{LR}} {\leqslant}y$ and $y \underset {\mathrm{LR}} {\leqslant}x$. It turns out that $\underset {\mathrm{LR}} {\leqslant}$ is a pre-order on $W$, and $\underset{\mathrm{LR}}{\sim}$ is an equivalence relation on $W$. The equivalence classes are called *two-sided cells*. The set of two-sided cells forms a partially ordered set with respect to $\underset {\mathrm{LR}} {\leqslant}$. We have the following standard properties of the Kazhdan-Lusztig basis $\{C_w\}_{w \in W}$, two-sided cells, and Lusztig's function $\boldsymbol{a}$. **Proposition 2** ([@Lusztig85-cell-i; @Lusztig87-cell-ii; @Lusztig14-hecke-unequal]). *Let $s \in S$ and $w, x, y \in W$.* 1. *[\[eg-C-3\]]{#eg-C-3 label="eg-C-3"} $P_{y,w}(0) = 1$ for any $y \leq w$.* 2. *[\[prop-cell-a-1\]]{#prop-cell-a-1 label="prop-cell-a-1"} Suppose $sw > w$. Then $C_sC_w = C_{sw} + \sum \mu_{y,w} C_y$ where the sum runs over all $y < w$ such that $\deg_q P_{y,w} = \frac{1}{2} (\ell(w) - \ell(y) - 1)$ and $s y < y$, and $\mu_{y,w}$ is the coefficient of the top-degree term in $P_{y,w}$. The formula is similar for the case $ws > w$. In particular, $sw \underset {\mathrm{LR}} {\leqslant}w$ if $sw > w$, $ws \underset {\mathrm{LR}} {\leqslant}w$ if $ws > w$.* 3. *[\[prop-cell-a-2\]]{#prop-cell-a-2 label="prop-cell-a-2"} $\boldsymbol{a}(w) = 0$ if and only if $w = e$. We have $\boldsymbol{a}(s) = 1$ for any $s \in S$.* 4. *[\[prop-cell-a-3\]]{#prop-cell-a-3 label="prop-cell-a-3"} If $m_{rt} < \infty$ for some $r, t \in S$ with $r \ne t$, then $\boldsymbol{a}(w_{rt}) = m_{rt}$.* 5. *[\[prop-cell-a-4\]]{#prop-cell-a-4 label="prop-cell-a-4"} If $y \underset {\mathrm{LR}} {\leqslant}x$, then $\boldsymbol{a}(y) \geq \boldsymbol{a}(x)$. If $y \underset{\mathrm{LR}}{\sim}x$, then $\boldsymbol{a}(y) = \boldsymbol{a}(x)$.* 6. *[\[prop-cell-a-5\]]{#prop-cell-a-5 label="prop-cell-a-5"} Suppose $\boldsymbol{a}$ is bounded on $W$. 
If $\boldsymbol{a}(y) = \boldsymbol{a}(x)$ and $y \underset {\mathrm{LR}} {\leqslant}x$, then $y \underset{\mathrm{LR}}{\sim}x$.* By Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-4\]](#prop-cell-a-4){reference-type="ref" reference="prop-cell-a-4"}, we may write $\boldsymbol{a}(\mathcal{C})$ for a two-sided cell $\mathcal{C}$ to indicate the number $\boldsymbol{a}(x)$ where $x$ is an arbitrary element in $\mathcal{C}$. We also write $\mathcal{C} \underset {\mathrm{LR}} {\leqslant}w$ to indicate that $x \underset {\mathrm{LR}} {\leqslant}w$ for some $x \in \mathcal{C}$. Similarly for $w \underset {\mathrm{LR}} {\leqslant}\mathcal{C}$ and $\mathcal{C}^\prime \underset {\mathrm{LR}} {\leqslant}\mathcal{C}$. We shall need the following lemma due to H. Matsumoto and J. Tits. See also [@Lusztig14-hecke-unequal Theorem 1.9]. **Lemma 3** ([@Matsumoto64; @Tits69]). *Let $s_1 \cdots s_n$ and $s_1^\prime \cdots s_n^\prime$ be two reduced expressions of some element $w \in W$. Then we can obtain one of the expressions from the other by finitely many replacements of the form $$\underbrace{rtr\cdots} _{m_{rt} \text{ factors}} = \underbrace{trt\cdots} _{m_{rt} \text{ factors}} \quad (r,t \in S, r \ne t, m_{rt} < \infty).$$* If the Coxeter group $(W,S)$ is irreducible (i.e., the Coxeter graph is connected), there is a two-sided cell with a simple description, formulated by G. Lusztig [@Lusztig83-intrep] as follows. **Lemma 4** (See also [@Lusztig83-intrep Proposition 3.8]). *[\[lem-C1\]]{#lem-C1 label="lem-C1"}* 1. *[\[lem-C1-1\]]{#lem-C1-1 label="lem-C1-1"} For $w \in W \setminus \{e\}$, $\boldsymbol{a}(w) = 1$ if and only if $w$ has a unique reduced expression.* 2. *[\[lem-C1-2\]]{#lem-C1-2 label="lem-C1-2"} Let $\mathcal{C}_1 := \{w \in W \mid \boldsymbol{a}(w) = 1\}$. If $(W,S)$ is irreducible, then $\mathcal{C}_1$ is a two-sided cell. 
Moreover, for any $x \in W \setminus \{e\}$, we have $x \underset {\mathrm{LR}} {\leqslant}\mathcal{C}_1$.* A proof of Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"} [\[lem-C1-1\]](#lem-C1-1){reference-type="ref" reference="lem-C1-1"} is also provided in [@Xu19 Corollary 3.1] with the boundedness assumption on the function $\boldsymbol{a}$ (the proof there uses the result stated in Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-5\]](#prop-cell-a-5){reference-type="ref" reference="prop-cell-a-5"}). But as we shall see, this assumption can be removed. *Proof of Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"}.* If $w$ has two different reduced expressions, then by Lemma [Lemma 3](#lemma-two-expr){reference-type="ref" reference="lemma-two-expr"} $w$ has a reduced expression of the form $$w = \cdots \underbrace{(rtr\cdots)} _{m_{rt} \text{ factors}} \cdots \quad (r,t \in S, r \ne t, m_{rt} < \infty).$$ Then $w \underset {\mathrm{LR}} {\leqslant}w_{rt}$ by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-1\]](#prop-cell-a-1){reference-type="ref" reference="prop-cell-a-1"}. Thus $\boldsymbol{a}(w) \ge \boldsymbol{a}(w_{rt}) = m_{rt} \ge 2$ by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-3\]](#prop-cell-a-3){reference-type="ref" reference="prop-cell-a-3"} [\[prop-cell-a-4\]](#prop-cell-a-4){reference-type="ref" reference="prop-cell-a-4"}. Conversely, suppose that the reduced expression of $w$ is unique and $sw < w$, $s\in S$. We have $w \underset{\mathrm{LR}}{\sim}s$ by [@Lusztig83-intrep Proposition 3.8]. 
Therefore $\boldsymbol{a}(w) = 1$ by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-2\]](#prop-cell-a-2){reference-type="ref" reference="prop-cell-a-2"} [\[prop-cell-a-4\]](#prop-cell-a-4){reference-type="ref" reference="prop-cell-a-4"}. The point [\[lem-C1-1\]](#lem-C1-1){reference-type="ref" reference="lem-C1-1"} of this lemma is proved. The first assertion in the point [\[lem-C1-2\]](#lem-C1-2){reference-type="ref" reference="lem-C1-2"} has been proved in [@Lusztig83-intrep Proposition 3.8]. The second one is deduced from Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-1\]](#prop-cell-a-1){reference-type="ref" reference="prop-cell-a-1"} and the fact that $s \in \mathcal{C}_1$. ◻ In general, if we decompose the Coxeter graph into connected components, say $S = \sqcup_i S_i$, then $\mathcal{C}_1 = \sqcup_i \mathcal{C}_{1,i}$ is a union of two-sided cells, where $\mathcal{C}_{1,i} = \{w \in W \mid \boldsymbol{a}(w) = 1, \text{ and $w$ is a product of elements in $S_i$}\}$. The following result is useful but the proof (by B. Elias and G. Williamson [@EW14]) is highly nontrivial. **Lemma 5**. *([@EW14 Corollary 1.2]) For any $x,y,w \in W$, we have $P_{y,w} \in \mathbb{N}[q]$. If we write $h_{x,y,w} = \sum_{i \in \mathbb{Z}} c_i q^{ \frac{i}{2}}$ where each $c_i \in \mathbb{Z}$, then $(-1)^i c_i \in \mathbb{N}$.* ## The cell representations We will consider complex representations of Coxeter groups. For convenience, we define $\mathcal{H}_\mathbb{C}:= \mathcal{H}\otimes_\mathbb{Z}\mathbb{C}$ to be the Hecke algebra with coefficient ring ${\mathbb{C}[q^{\pm \frac{1}{2}}]}$. By convention, a *representation* (over $\mathbb{C}$) of $\mathcal{H}_\mathbb{C}$ is defined to be a $\mathbb{C}$-vector space endowed with an $\mathcal{H}_\mathbb{C}$-module structure, where $q^{ \frac{1}{2}}$ acts by a $\mathbb{C}^\times$-scalar multiplication. 
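As a concrete instance of the structure constants $h_{x,y,w}$, in the rank-one case $W = \{e, s\}$ one computes $C_s C_s = -(q^{ \frac{1}{2}} + q^{- \frac{1}{2}}) C_s$, so that $h_{s,s,s} = -(q^{ \frac{1}{2}} + q^{- \frac{1}{2}})$; this is consistent with $\boldsymbol{a}(s) = 1$ (multiplying by $q^{ \frac{1}{2}}$, but not by $q^0$, lands in $\mathbb{Z}[q^{ \frac{1}{2}}]$) and with the sign pattern of Lemma [Lemma 5](#lemma-pos){reference-type="ref" reference="lemma-pos"}. The identity can be checked symbolically; the following is a minimal sketch (the dictionary encoding of the algebra is ours, not from the paper):

```python
import sympy as sp

v = sp.symbols('v')  # v stands for q^(1/2)

# Rank-one Hecke algebra: elements are dicts {word: coefficient} on the words
# 'e', 's', with the relation T_s * T_s = (v - 1/v) * T_s + T_e.
def mult(a, b):
    out = {'e': sp.Integer(0), 's': sp.Integer(0)}
    for wa, ca in a.items():
        for wb, cb in b.items():
            c = ca * cb
            if wa == 'e' or wb == 'e':
                out['s' if 's' in (wa, wb) else 'e'] += c
            else:  # both factors are T_s
                out['s'] += c * (v - 1 / v)
                out['e'] += c
    return {key: sp.expand(val) for key, val in out.items()}

Cs = {'e': -v, 's': sp.Integer(1)}   # C_s = T_s - q^(1/2)
square = mult(Cs, Cs)

# C_s C_s = -(v + 1/v) C_s, i.e. h_{s,s,s} = -(q^(1/2) + q^(-1/2))
h = -(v + 1 / v)
assert all(sp.simplify(square[key] - sp.expand(h * Cs[key])) == 0 for key in ('e', 's'))
```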
Let $\chi: {\mathbb{C}[q^{\pm \frac{1}{2}}]}\to \mathbb{C}$ be a homomorphism of $\mathbb{C}$-algebras (called a *specialization*) and $\mathcal{C}$ be a two-sided cell. We define $\mathcal{J}_\mathcal{C}$ to be a $\mathbb{C}$-vector space with formal basis $\{J_w \mid w \in \mathcal{C}\}$. The following formula defines an $\mathcal{H}_\mathbb{C}$-module structure on $\mathcal{J}_\mathcal{C}$, $$C_x \cdot J_y := \sum_{w \in \mathcal{C}} \chi(h_{x,y,w}) J_w, \quad \forall x \in W, y \in \mathcal{C}.$$ This representation of $\mathcal{H}_\mathbb{C}$ is called the *cell representation provided by* $\mathcal{C}$. We denote it also by $\mathcal{J}_\mathcal{C}^\chi$ if the specialization $\chi$ needs to be emphasized. [\[rmk-cell-rep\]]{#rmk-cell-rep label="rmk-cell-rep"} The cell representation $\mathcal{J}_\mathcal{C}$ has the following property. If $C_x \cdot \mathcal{J}_\mathcal{C}\ne 0$, then there exists $y \in \mathcal{C}$ such that $h_{x, y, w} \ne 0$ for some $w \in \mathcal{C}$, and consequently $\mathcal{C}\underset {\mathrm{LR}} {\leqslant}x$. In particular, $\boldsymbol{a}(x) \le \boldsymbol{a}(\mathcal{C})$. The following proposition is due to G. Lusztig. But a sketched proof is presented here because the author did not find an explicit statement in the literature. **Proposition 6** (G. Lusztig, 1980's). *Suppose $V$ is a nonzero irreducible representation of $\mathcal{H}_\mathbb{C}$ on which $q^{ \frac{1}{2}}$ acts by a scalar $\chi(q^{ \frac{1}{2}})$.* 1. *[\[prop-cell-rep-m1\]]{#prop-cell-rep-m1 label="prop-cell-rep-m1"} Suppose there exists a two-sided cell $\mathcal{C}$ such that* 1. *[\[prop-cell-rep-i\]]{#prop-cell-rep-i label="prop-cell-rep-i"} for any $x \in W$, if $C_x \cdot V \neq 0$, then we have $\mathcal{C} \underset {\mathrm{LR}} {\leqslant}x$,* 2. *[\[prop-cell-rep-ii\]]{#prop-cell-rep-ii label="prop-cell-rep-ii"} there is an element $w \in \mathcal{C}$ such that $C_w \cdot V \neq 0$.* *Then such a two-sided cell is unique. 
Moreover, $V$ is a simple quotient of $\mathcal{J}_\mathcal{C}^\chi$.* 2. *[\[prop-cell-rep-m3\]]{#prop-cell-rep-m3 label="prop-cell-rep-m3"} Suppose the function $\boldsymbol{a}$ is bounded on $W$. Then there exists a two-sided cell $\mathcal{C}$ satisfying the conditions [\[prop-cell-rep-i\]](#prop-cell-rep-i){reference-type="ref" reference="prop-cell-rep-i"} [\[prop-cell-rep-ii\]](#prop-cell-rep-ii){reference-type="ref" reference="prop-cell-rep-ii"} in [\[prop-cell-rep-m1\]](#prop-cell-rep-m1){reference-type="ref" reference="prop-cell-rep-m1"}.* *Proof.* For [\[prop-cell-rep-m1\]](#prop-cell-rep-m1){reference-type="ref" reference="prop-cell-rep-m1"}, suppose there exists another two-sided cell $\mathcal{C}^\prime$ satisfying the conditions [\[prop-cell-rep-i\]](#prop-cell-rep-i){reference-type="ref" reference="prop-cell-rep-i"} [\[prop-cell-rep-ii\]](#prop-cell-rep-ii){reference-type="ref" reference="prop-cell-rep-ii"}. Then by the condition [\[prop-cell-rep-i\]](#prop-cell-rep-i){reference-type="ref" reference="prop-cell-rep-i"} for $\mathcal{C}$ and the condition [\[prop-cell-rep-ii\]](#prop-cell-rep-ii){reference-type="ref" reference="prop-cell-rep-ii"} for $\mathcal{C}^\prime$, we deduce that $\mathcal{C}\underset {\mathrm{LR}} {\leqslant}\mathcal{C}^\prime$, and vice versa. Thus $\mathcal{C}= \mathcal{C}^\prime$ and the two-sided cell $\mathcal{C}$ is unique. Next we take $v \in V$ such that $C_w \cdot v \neq 0$ for some $w \in \mathcal{C}$. Then it can be verified that the map $\varphi: \mathcal{J}_\mathcal{C}^\chi \to V$, $J_y \mapsto C_y \cdot v$ is a well-defined surjective homomorphism of $\mathcal{H}_\mathbb{C}$-modules. Thus $V \simeq \mathcal{J}_\mathcal{C}^\chi / \ker \varphi$ is a quotient of $\mathcal{J}_\mathcal{C}^\chi$. 
For [\[prop-cell-rep-m3\]](#prop-cell-rep-m3){reference-type="ref" reference="prop-cell-rep-m3"}, let $\mathcal{C}$ be a two-sided cell satisfying the condition [\[prop-cell-rep-ii\]](#prop-cell-rep-ii){reference-type="ref" reference="prop-cell-rep-ii"} such that $\boldsymbol{a}(\mathcal{C})$ is maximal (such a two-sided cell exists since $\boldsymbol{a}$ is assumed to be bounded and $C_e = \widetilde{T}_e$ acts by the identity). Then by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-5\]](#prop-cell-a-5){reference-type="ref" reference="prop-cell-a-5"}, we have $$\text{for any $y \in W$, if $y \underset {\mathrm{LR}} {\leqslant}\mathcal{C}$ and $y \notin \mathcal{C}$, then $C_y \cdot V = 0$.}$$ Then $V$ is a quotient of $\mathcal{J}_\mathcal{C}^\chi$ by the same arguments as in the proof of [\[prop-cell-rep-m1\]](#prop-cell-rep-m1){reference-type="ref" reference="prop-cell-rep-m1"}. Suppose now $x \in W$ and $C_x \cdot V \neq 0$. Then $C_x \cdot \mathcal{J}_\mathcal{C}^\chi \neq 0$ and hence there exists $y \in \mathcal{C}$ such that $C_x \cdot J_y \neq 0$. In other words, there exist $y, z \in \mathcal{C}$ such that $h_{x,y,z} \neq 0$. Thus $z \underset {\mathrm{LR}} {\leqslant}x$ and the statement [\[prop-cell-rep-i\]](#prop-cell-rep-i){reference-type="ref" reference="prop-cell-rep-i"} is proved. ◻ Immediately we have the following corollary on the lowest two-sided cell, which was first studied for affine Weyl groups in [@Lusztig85-cell-i; @Shi87]. This two-sided cell was later studied for general Coxeter groups in [@Xi12] with the assumption that $\boldsymbol{a}$ is bounded by the maximal length of longest elements of all finite parabolic subgroups of $W$. Here we deduce the existence of this two-sided cell with the weaker assumption that $\boldsymbol{a}$ is bounded. **Corollary 7**. *Suppose $N = \max \{\boldsymbol{a}(w) \mid w \in W\} <\infty$. 
Then $\mathcal{C}_{\text{\upshape low}} := \{w \in W \mid \boldsymbol{a}(w) = N\}$ is a two-sided cell. Moreover, for any $w \in W$ we have $\mathcal{C}_{\text{\upshape low}} \underset {\mathrm{LR}} {\leqslant}w$.* *Proof.* Let $\varepsilon: s \mapsto -1, \forall s \in S$ be the sign representation of $W$. We regard $\varepsilon$ as a representation of $\mathcal{H}_\mathbb{C}$ on which $q^{ \frac{1}{2}}$ acts by 1. Then by the formula [\[eq-Cw\]](#eq-Cw){reference-type="eqref" reference="eq-Cw"}, we see that for any $w \in W$ the element $C_w$ acts under $\varepsilon$ by the scalar $\varepsilon(C_w) = (-1)^{\ell(w)} \sum_{y} P_{y,w}(1)$. By Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[eg-C-3\]](#eg-C-3){reference-type="ref" reference="eg-C-3"} and Lemma [Lemma 5](#lemma-pos){reference-type="ref" reference="lemma-pos"}, this number is nonzero. Let $\mathcal{C}$ be the two-sided cell attached to $\varepsilon$ by Proposition [Proposition 6](#prop-cell-rep-m){reference-type="ref" reference="prop-cell-rep-m"}. Then Proposition [Proposition 6](#prop-cell-rep-m){reference-type="ref" reference="prop-cell-rep-m"} implies that $\mathcal{C} \underset {\mathrm{LR}} {\leqslant}w$ for any $w$. By Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-4\]](#prop-cell-a-4){reference-type="ref" reference="prop-cell-a-4"}, we have $\boldsymbol{a}(\mathcal{C}) \ge \boldsymbol{a}(w)$ for any $w \in W$. Now we choose $w \in W$ such that $\boldsymbol{a}(w) = N$. Then $\boldsymbol{a}(\mathcal{C}) = N$. By definition, we have $\mathcal{C}\subseteq \mathcal{C}_\text{low}$. Conversely, for any $w \in \mathcal{C}_\text{low}$, i.e., $\boldsymbol{a}(w) = N$, we have seen that $\mathcal{C}\underset {\mathrm{LR}} {\leqslant}w$. 
Thus, $w \in \mathcal{C}$ by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-5\]](#prop-cell-a-5){reference-type="ref" reference="prop-cell-a-5"}. Therefore, $\mathcal{C}= \mathcal{C}_\text{low}$ and $\mathcal{C}_\text{low}$ is a two-sided cell. ◻ [\[rmk-sign-1\]]{#rmk-sign-1 label="rmk-sign-1"} From the proof, we see that the boundedness conjecture for the function $\boldsymbol{a}$ is equivalent to saying that there exists a two-sided cell attached to the sign representation $\varepsilon$ in the sense of Proposition [Proposition 6](#prop-cell-rep-m){reference-type="ref" reference="prop-cell-rep-m"} [\[prop-cell-rep-m1\]](#prop-cell-rep-m1){reference-type="ref" reference="prop-cell-rep-m1"}. See also Remark [\[rmk-sign-2\]](#rmk-sign-2){reference-type="ref" reference="rmk-sign-2"}. # Coxeter group representations of $\boldsymbol{a}$-function value 1 {#sec-main-1} ## Overview Let $(W,S)$, $\mathcal{H}_\mathbb{C}$, $C_w$, etc. be as in the last section. The representations we consider are all representations over the field $\mathbb{C}$. [\[def-a\]]{#def-a label="def-a"} Suppose $V$ is a nonzero representation of $\mathcal{H}_\mathbb{C}$, not necessarily irreducible. If there exists a natural number $N$ such that 1. for each $x \in W$, if $\boldsymbol{a}(x) > N$, then we have $C_x \cdot V = 0$, and 2. there exists an element $w \in W$ such that $\boldsymbol{a}(w) = N$ and $C_w \cdot V \neq 0$, then we say $N$ is the *$\boldsymbol{a}$-function value* of $V$. (If such a number $N$ does not exist, then we may define the $\boldsymbol{a}$-function value of $V$ to be $\infty$.) 
In particular, if $V$ is irreducible and there exists a two-sided cell $\mathcal{C}$ attached to $V$ in the sense of Proposition [Proposition 6](#prop-cell-rep-m){reference-type="ref" reference="prop-cell-rep-m"} [\[prop-cell-rep-m1\]](#prop-cell-rep-m1){reference-type="ref" reference="prop-cell-rep-m1"}, then $\boldsymbol{a}(\mathcal{C})$ is the $\boldsymbol{a}$-function value of $V$. [\[rmk-sign-2\]]{#rmk-sign-2 label="rmk-sign-2"} The function $\boldsymbol{a}$ is bounded on $W$ if and only if the sign representation $\varepsilon$ is of finite $\boldsymbol{a}$-function value. See also Remark [\[rmk-sign-1\]](#rmk-sign-1){reference-type="ref" reference="rmk-sign-1"}. [\[conv-chiq=1\]]{#conv-chiq=1 label="conv-chiq=1"} 1. In the rest of this paper, we choose the specialization $\chi: {\mathbb{C}[q^{\pm \frac{1}{2}}]}\to \mathbb{C}$ so that $\chi(q^{ \frac{1}{2}}) = 1$. Then a representation of $\mathcal{H}_\mathbb{C}$ on which $q^{ \frac{1}{2}}$ acts by $\chi(q^{ \frac{1}{2}}) = 1$ is the same thing as a representation of $W$. 2. For an arbitrary element $h = \sum_{w \in W} a_w \widetilde{T}_w \in \mathcal{H}_\mathbb{C}$ ($a_w \in {\mathbb{C}[q^{\pm \frac{1}{2}}]}$), we denote again by $h$ the element $\sum_{w \in W} \chi(a_w) w$ in the group algebra $\mathbb{C}[W]$. In particular, in $\mathbb{C}[W]$ we denote by $C_w$ the element $\sum_{y \in W} (-1)^{\ell(w)+\ell(y)} P_{y,w}(1) y$. [\[eg-a=0\]]{#eg-a=0 label="eg-a=0"} Recall that $C_e = \widetilde{T}_e$, $C_s = \widetilde{T}_s - q^{ \frac{1}{2}}\in \mathcal{H}_\mathbb{C}$ (see Example [\[eg-C\]](#eg-C){reference-type="ref" reference="eg-C"} [\[eg-C-1\]](#eg-C-1){reference-type="ref" reference="eg-C-1"}). They are specialized to $C_e = e$, $C_s = s - e \in \mathbb{C}[W]$. 
Therefore, a nonzero representation $V$ of $W$ is of $\boldsymbol{a}$-function value $0$ if and only if the action of $W$ on $V$ is trivial (note that $\boldsymbol{a}(w) = 0$ if and only if $w = e$, see Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-2\]](#prop-cell-a-2){reference-type="ref" reference="prop-cell-a-2"}). The main result of this section is the following characterization of representations of $\boldsymbol{a}$-function value $1$. **Theorem 8**. *Let $V$ be a nonzero representation of $W$. Then the following two conditions are equivalent:* 1. *[\[thm-A1-1\]]{#thm-A1-1 label="thm-A1-1"} for any $w \in W$ with $\boldsymbol{a}(w) > 1$, we have $C_w \cdot V = 0$;* 2. *[\[thm-A1-2\]]{#thm-A1-2 label="thm-A1-2"} there is no $v \in V \setminus \{0\}$ such that $r \cdot v = t \cdot v = -v$ for some $r, t \in S$ with $r \ne t$ and $m_{rt} < \infty$.* [\[rmk-a=1\]]{#rmk-a=1 label="rmk-a=1"} 1. This theorem says that a nonzero nontrivial representation $V$ of $W$ is of $\boldsymbol{a}$-function value $1$ if and only if for any $r,t \in S$ with $r \ne t$ and $m_{rt} < \infty$ there is no common eigenvector of $r,t$ of eigenvalue $-1$. In particular, the geometric representation of $W$ (see [@Bourbaki-Lie456 Ch. V, §4]) is of $\boldsymbol{a}$-function value $1$. Some generalizations of this representation (see for example [@BB05; @BCNS15; @Donnelly11; @Hee91; @Krammer09; @Vinberg71]) are also of $\boldsymbol{a}$-function value $1$. 2. [\[rmk-a=1-2\]]{#rmk-a=1-2 label="rmk-a=1-2"} In another paper [@Hu23], the author defined and classified two classes of representations, which are called generalized geometric representations and R-representations. These representations are all of $\boldsymbol{a}$-function value $1$. 
In Section [4](#sec-main-2){reference-type="ref" reference="sec-main-2"}, we will see that for certain simply laced Coxeter groups all the irreducible representations of $\boldsymbol{a}$-function value $1$ are those so-called R-representations. 3. [\[rmk-a=1-3\]]{#rmk-a=1-3 label="rmk-a=1-3"} In a subsequent paper [@Hu22], the author constructs some irreducible representations of infinite dimension for certain Coxeter groups. These representations are of $\boldsymbol{a}$-function value $1$ as well (except the one constructed in [@Hu22 §5]). This indicates that the representations of $\boldsymbol{a}$-function value $1$ might be complicated in general. See also Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"} and Remark [\[rmk-simply-laced\]](#rmk-simply-laced){reference-type="ref" reference="rmk-simply-laced"} [\[rmk-simply-laced-2\]](#rmk-simply-laced-2){reference-type="ref" reference="rmk-simply-laced-2"}. 4. Recall from Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"} that we have the two-sided cell $\mathcal{C}_1$ when $(W,S)$ is irreducible. By Proposition [Proposition 6](#prop-cell-rep-m){reference-type="ref" reference="prop-cell-rep-m"} [\[prop-cell-rep-m1\]](#prop-cell-rep-m1){reference-type="ref" reference="prop-cell-rep-m1"}, an irreducible representation of $\boldsymbol{a}$-function value $1$ is a quotient of the cell representation $\mathcal{J}_{\mathcal{C}_1}^\chi$. 5. Suppose $I \subseteq S$ is a subset and the parabolic subgroup $W_I := \langle I \rangle$ generated by $I$ is finite. Suppose moreover there exists a nonzero vector $v$ in $V$ such that $s \cdot v = - v$ for any $s \in I$. Then it can be shown that the $\boldsymbol{a}$-function value of $V$ is not less than $\ell(w_I)$, where $w_I$ is the longest element in $W_I$. 
In view of this fact and Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}, as well as the proof of Corollary [Corollary 7](#cor-cell-low){reference-type="ref" reference="cor-cell-low"}, we feel that the $\boldsymbol{a}$-function value of a representation of $W$ might be related to the pattern in which elements in $S$ share common eigenvectors of eigenvalue $-1$. ## Proof of Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"} {#proof-of-theorem-thm-a1} To prove Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}, we need the following lemma. **Lemma 9**. *Let $D_m := \langle r,t \mid r^2 = t^2 = (rt)^m = e \rangle$ be a finite dihedral group ($m \in \mathbb{N}_{\ge 2}$). Let $\rho_k, \mathds{1}, \varepsilon$ and $\varepsilon_r, \varepsilon_t$ (if $m$ is even) denote the representations defined in Subsection [2.1](#subsec-dih){reference-type="ref" reference="subsec-dih"}. We also regard these representations as algebra homomorphisms from the group algebra $\mathbb{C}[D_m]$ to the matrix algebra $M_1(\mathbb{C})$ or $M_2(\mathbb{C})$.* 1. *[\[lem-Cwrt-1\]]{#lem-Cwrt-1 label="lem-Cwrt-1"} If $1 \le k < \frac{m}{2}$, then $\rho_k(C_{w_{rt}}) = 0$.* 2. *[\[lem-Cwrt-2\]]{#lem-Cwrt-2 label="lem-Cwrt-2"} $\mathds{1}(C_{w_{rt}}) = 0$, $\varepsilon (C_{w_{rt}}) \ne 0$.* 3. *[\[lem-Cwrt-3\]]{#lem-Cwrt-3 label="lem-Cwrt-3"} If $m$ is even, then $\varepsilon_r (C_{w_{rt}}) = \varepsilon_t (C_{w_{rt}}) = 0$.* *Proof.* Recall from [\[Cwrt\]](#Cwrt){reference-type="eqref" reference="Cwrt"} that $C_{w_{rt}} = \widetilde{T}_{w_{rt}} + (-1)^m q^{ \frac{m}{2}} + \sum_{i = 1}^{m-1} (-1)^i q^{ \frac{i}{2}} (\widetilde{T}_{r_{m-i}} + \widetilde{T}_{t_{m-i}})$. 
After our specialization, we have $$\begin{aligned} \mathbb{C}[D_m] \ni C_{w_{rt}} & = w_{rt} + (-1)^m + \sum_{i = 1}^{m-1} (-1)^i (r_{m-i} + t_{m-i}) \\ & = (-1)^m \bigl[ (e + r_2 + t_2 + r_4 + t_4 + \cdots + w_{rt}) - (r + t + r_3 + t_3 + \cdots + r_{m-1} + t_{m-1}) \bigr] \\ & \mathrel {\phantom{=}} \text{(if $m$ is even)} \\ \mathrel{\text{or}} & = (-1)^m \bigl[ (e + r_2 + t_2 + r_4 + t_4 + \cdots + r_{m-1} + t_{m-1}) - (r + t + r_3 + t_3 + \cdots + w_{rt} ) \bigr] \\ & \mathrel {\phantom{=}} \text{(if $m$ is odd).} \end{aligned}$$ Recall from Subsection [2.1](#subsec-dih){reference-type="ref" reference="subsec-dih"} that for any integer $k$ satisfying $1 \le k \le \frac{m}{2}$, $\rho_k$ is a representation of dimension 2 with a real form. On this real plane, $r_i$ is a rotation (of angle $\frac{ik\uppi}{m}$) if $i$ is even (we regard $e$ as the rotation of angle 0), and is a reflection if $i$ is odd. Similarly for $t_i$. Therefore, $\rho_k(C_{w_{rt}})$ is, up to a sign depending on the parity of $m$, the difference between the sum of the $m$ rotations and the sum of the $m$ reflections. But notice that the sum of the $m$ rotations is zero, and so is the sum of the $m$ reflections. Thus $\rho_k(C_{w_{rt}}) = 0$. In particular, if $m$ is even and $k = \frac{m}{2}$, then $\rho_\frac{m}{2} \simeq \varepsilon_r \oplus \varepsilon_t$. Hence $\varepsilon_r (C_{w_{rt}}) = \varepsilon_t (C_{w_{rt}}) = 0$. The equality $\mathds{1}(C_{w_{rt}}) = 0$ is straightforward. Finally, note that $\varepsilon(r_i) = \varepsilon(t_i) = (-1)^i$. Thus $\varepsilon(C_{w_{rt}}) = (-1)^m (m - (-m)) \ne 0$. ◻ **Lemma 10**. *Suppose $V$ is a nonzero representation of $W$ satisfying the condition [\[thm-A1-2\]](#thm-A1-2){reference-type="ref" reference="thm-A1-2"} of Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}. Suppose $r,t \in S$ are such that $r \ne t$ and $m_{rt} < \infty$. 
Then $C_{w_{rt}} \cdot V = 0$.* *Proof.* The subgroup $\langle r,t \rangle$ of $W$ generated by $r$ and $t$ is a finite dihedral group $D_{m_{rt}}$. We consider the restriction of the representation $V$ to the subgroup $\langle r,t \rangle$. Then $V$ (either finite or infinite dimensional) is decomposed into a direct sum of irreducible representations of $D_{m_{rt}}$ since the group algebra $\mathbb{C}[D_{m_{rt}}]$ is semisimple. By the assumption, the sign representation $\varepsilon$ does not appear in this direct sum. By Lemma [Lemma 1](#lem-Dm-rep){reference-type="ref" reference="lem-Dm-rep"} and Lemma [Lemma 9](#lem-Cwrt){reference-type="ref" reference="lem-Cwrt"}, we see that $C_{w_{rt}}$ acts by zero on every irreducible representation of $D_{m_{rt}}$ except $\varepsilon$. Thus $C_{w_{rt}} \cdot V = 0$. ◻ *Proof of Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}.* Suppose there exist $r,t \in S$ with $r \ne t$ and $m_{rt} < \infty$, and a nonzero vector $v \in V$ such that $r \cdot v = t\cdot v = -v$. Then $v$ spans a subrepresentation in $V$ for the dihedral subgroup $\langle r,t \rangle \simeq D_{m_{rt}}$. This one-dimensional subrepresentation is isomorphic to the sign representation $\varepsilon$ of $\langle r,t \rangle$. By Lemma [Lemma 9](#lem-Cwrt){reference-type="ref" reference="lem-Cwrt"} [\[lem-Cwrt-2\]](#lem-Cwrt-2){reference-type="ref" reference="lem-Cwrt-2"}, we have $C_{w_{rt}} \cdot v \ne 0$. On the other hand, by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-3\]](#prop-cell-a-3){reference-type="ref" reference="prop-cell-a-3"} we have $\boldsymbol{a}(w_{rt}) = m_{rt} \ge 2$. Thus, the statement [\[thm-A1-1\]](#thm-A1-1){reference-type="ref" reference="thm-A1-1"} implies the statement [\[thm-A1-2\]](#thm-A1-2){reference-type="ref" reference="thm-A1-2"}. 
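(The dihedral computations of Lemma 9, on which this direction of the proof rests, also admit a quick numerical check. The following Python sketch is our addition and not part of the argument: it realizes each $\rho_k$ by explicit $2 \times 2$ rotation and reflection matrices at the specialization $q^{\frac{1}{2}} = 1$ and verifies that $C_{w_{rt}}$ acts by zero on $\rho_k$ for $1 \le k \le \frac{m}{2}$, while $\varepsilon(C_{w_{rt}}) = 2m(-1)^m \ne 0$.)

```python
# Numerical check (ours, not part of the paper's argument) of Lemma 9 at the
# specialization q^{1/2} = 1, where
#   C_{w_rt} = w_rt + (-1)^m e + sum_{i=1}^{m-1} (-1)^i (r_{m-i} + t_{m-i}).
from math import cos, sin, pi

I2 = [[1.0, 0.0], [0.0, 1.0]]

def mul(a, b):
    return [[sum(a[i][l] * b[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def axpy(a, c, b):  # entrywise a + c * b
    return [[a[i][j] + c * b[i][j] for j in range(2)] for i in range(2)]

def reflection(theta):
    # reflection of the real plane across the line of angle theta
    return [[cos(2 * theta), sin(2 * theta)],
            [sin(2 * theta), -cos(2 * theta)]]

def alternating(g, h, j):
    # the alternating product g h g h ... with j factors (the elements r_j, t_j)
    out = I2
    for i in range(j):
        out = mul(out, g if i % 2 == 0 else h)
    return out

def C_wrt(m, k):
    # the image rho_k(C_{w_rt}) for the dihedral group D_m
    r, t = reflection(0.0), reflection(k * pi / m)
    C = axpy(alternating(r, t, m), (-1.0) ** m, I2)       # w_rt + (-1)^m e
    for i in range(1, m):
        C = axpy(C, (-1.0) ** i,
                 axpy(alternating(r, t, m - i), 1.0, alternating(t, r, m - i)))
    return C

for m in range(2, 9):
    for k in range(1, m // 2 + 1):
        assert all(abs(x) < 1e-9 for row in C_wrt(m, k) for x in row)
    # sign representation: eps(r_i) = eps(t_i) = (-1)^i
    eps = 2 * (-1) ** m + sum(2 * (-1) ** i * (-1) ** (m - i) for i in range(1, m))
    assert eps == 2 * m * (-1) ** m != 0
```

(The boundary case $k = \frac{m}{2}$ for even $m$ covers $\rho_{\frac{m}{2}} \simeq \varepsilon_r \oplus \varepsilon_t$, in accordance with part (3) of the lemma.)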
Conversely, suppose $V$ satisfies the condition [\[thm-A1-2\]](#thm-A1-2){reference-type="ref" reference="thm-A1-2"}. Suppose $w \in W$ and $\boldsymbol{a}(w) > 1$. We proceed by induction on $\ell(w)$ to show that $C_w \cdot V = 0$. First of all, we must have $\ell(w) \ge 2$. Otherwise, if $\ell(w) = 0$ or $1$, then $\boldsymbol{a}(w) = \ell(w) \le 1$ by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-2\]](#prop-cell-a-2){reference-type="ref" reference="prop-cell-a-2"}, contradicting $\boldsymbol{a}(w) > 1$. If $\ell(w) = 2$, then by Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"} [\[lem-C1-1\]](#lem-C1-1){reference-type="ref" reference="lem-C1-1"} we have $w = w_{rt}$ for some $r,t \in S$ with $m_{rt} = 2$. In this case $C_w \cdot V = 0$ by Lemma [Lemma 10](#lem-Cwrt=0){reference-type="ref" reference="lem-Cwrt=0"}. Suppose now $\ell(w) > 2$. By Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"} [\[lem-C1-1\]](#lem-C1-1){reference-type="ref" reference="lem-C1-1"}, Lemma [Lemma 3](#lemma-two-expr){reference-type="ref" reference="lemma-two-expr"} and the assumption that $\boldsymbol{a}(w) > 1$, $w$ has a reduced expression of the form $$w = \cdots \underbrace{(rtr\cdots)} _{m_{rt} \text{ factors}} \cdots \quad (r,t \in S, r \ne t, m_{rt} < \infty).$$ That is, we can write $w = x w_{rt} y$ where $x, y \in W$ and $\ell(w) = \ell(x) + m_{rt} + \ell(y)$. If $x = y = e$, then $w = w_{rt}$ and $C_w \cdot V = 0$ by Lemma [Lemma 10](#lem-Cwrt=0){reference-type="ref" reference="lem-Cwrt=0"}. If $x \ne e$, then there exists $s \in S$ such that $\ell(s x) < \ell(x)$. It follows that $\ell(sw) < \ell(w)$. 
Then by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-1\]](#prop-cell-a-1){reference-type="ref" reference="prop-cell-a-1"} we have $$C_s C_{sw} = C_w + \sum_{\substack{ z < sw, sz < z \\ \deg_q P_{z,sw} = \frac{1}{2} (\ell(sw) - \ell(z) - 1)}} \mu_{z,sw} C_z.$$ In particular, those $z$ which occur in the summation satisfy $\ell(z) < \ell(sw)$, $z\underset {\mathrm{LR}} {\leqslant}sw$, and then $\boldsymbol{a}(z) \ge \boldsymbol{a}(sw)$ by Proposition [Proposition 2](#prop-cell-a){reference-type="ref" reference="prop-cell-a"} [\[prop-cell-a-4\]](#prop-cell-a-4){reference-type="ref" reference="prop-cell-a-4"}. Notice that $sw$ also has a reduced expression containing a consecutive product $rtr \cdots$ ($m_{rt}$ factors) since $sw = (sx)w_{rt}y$ and $\ell(sw) = \ell(sx) + m_{rt} + \ell(y)$. Then we have $\boldsymbol{a}(sw) > 1$ by Lemma [\[lem-C1\]](#lem-C1){reference-type="ref" reference="lem-C1"} [\[lem-C1-1\]](#lem-C1-1){reference-type="ref" reference="lem-C1-1"}. By induction hypothesis, we have $C_{sw} \cdot V = 0$ and $C_z \cdot V = 0$ for those $z$ which occur in the summation. Therefore, $C_w \cdot V = (C_s C_{sw} - \sum_{z} \mu_{z,sw} C_z) \cdot V = 0$. The case where $y \ne e$ is similar: if $\ell(ys) < \ell(y)$ for some $s \in S$, then $C_w = C_{ws} C_s - \sum_z \mu_{z, ws} C_z$ and $C_{ws} \cdot V = C_z \cdot V = 0$ for those $z$ which occur. ◻ # Simply laced Coxeter groups {#sec-main-2} ## Overview A Coxeter group $(W,S)$ is called *simply laced* if $m_{st} = 2$ or $3$ for any $s,t \in S$. The main result (Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"}) of this section states that for certain simply laced Coxeter groups we can determine all the irreducible representations of $W$ of $\boldsymbol{a}$-function value $1$. 
In other words, for such Coxeter groups we are able to determine all the simple quotients of the cell representation $\mathcal{J}_{\mathcal{C}_1}^\chi$, where $\chi(q^{ \frac{1}{2}}) = 1$ as we set in Convention [\[conv-chiq=1\]](#conv-chiq=1){reference-type="ref" reference="conv-chiq=1"}. It turns out that these irreducible representations of $\boldsymbol{a}$-function value $1$ are the so-called R-representations defined and classified in [@Hu23]. Before presenting the precise statement, we shall introduce some related notation and terminology. For a representation $(V, \rho)$ of $W$ and for $s \in S$, we denote by $V_s^+$ and $V_s^-$ the eigenspaces of $\rho(s)$ in $V$ for the eigenvalues $+1$ and $-1$, respectively. Clearly, we have $V = V_s^+ \oplus V_s^-$ as a vector space for any $s \in S$. The R-representations defined in [@Hu23] are a class of representations on which each $s \in S$ acts by an abstract reflection. For a simply laced Coxeter group $(W,S)$, the definition of R-representations reduces to the following. [\[def-R\]]{#def-R label="def-R"} A representation $(V,\rho)$ of a simply laced Coxeter group $(W,S)$ is called an *R-representation* if 1. [\[def-R-1\]]{#def-R-1 label="def-R-1"} $\sum_{s \in S} V_s^- = V$, 2. [\[def-R-2\]]{#def-R-2 label="def-R-2"} for any $r,t \in S$ with $r \ne t$, we have $V_r^- \cap V_t^- = 0$, 3. [\[def-R-3\]]{#def-R-3 label="def-R-3"} for any $s \in S$, we have $\dim V_s^- = 1$. Clearly, an R-representation is of $\boldsymbol{a}$-function value $1$ by Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}. Now we can state the main theorem of this section. Recall that a Coxeter group $(W,S)$ is said to be irreducible if its Coxeter graph $G$ is connected. **Theorem 11**. *Let $(W,S)$ be an irreducible simply laced Coxeter group. Suppose there is at most one cycle in the Coxeter graph $G$. Then any irreducible representation of $W$ of $\boldsymbol{a}$-function value $1$ is an R-representation. 
In particular, such representations are finite dimensional (of dimension not greater than $\lvert S \rvert$).* [\[rmk-simply-laced\]]{#rmk-simply-laced label="rmk-simply-laced"} 1. [\[rmk-simply-laced-1\]]{#rmk-simply-laced-1 label="rmk-simply-laced-1"} The irreducible R-representations are classified in [@Hu23]. In particular, if $G$ is a tree, then such a representation is unique (in fact, it is the simple quotient of the geometric representation defined in [@Bourbaki-Lie456 Ch. V, §4]). On the other hand, if $G$ has a cycle, then such representations are parameterized by $\mathbb{C}^\times$. For the reader's convenience, we recall the construction of such representations in Subsection [4.3](#subsec-R){reference-type="ref" reference="subsec-R"}. 2. [\[rmk-simply-laced-2\]]{#rmk-simply-laced-2 label="rmk-simply-laced-2"} In contrast, if there are at least two cycles in a Coxeter graph, then for the corresponding Coxeter group we can construct an infinite dimensional irreducible representation of $\boldsymbol{a}$-function value $1$. The construction is motivated by the proof of Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"}. See [@Hu22] for more details. ## Proof of Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"} {#proof-of-theorem-thm-a1r} This subsection is devoted to proving Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"}. Let $(W,S)$ be an irreducible simply laced Coxeter group, and $(V,\rho)$ be an irreducible representation of $W$ of $\boldsymbol{a}$-function value $1$. We first collect some elementary results which we will use later. ### Basic results **Lemma 12**. 1. *[\[lem-R-1\]]{#lem-R-1 label="lem-R-1"} For any $v \in V$ and $s \in S$, we have $s \cdot v - v \in V_s^-$.* 2. *[\[lem-R-2\]]{#lem-R-2 label="lem-R-2"} We have $\sum_{s \in S} V_s^- = V$.* 3. 
*[\[lem-R-3\]]{#lem-R-3 label="lem-R-3"} For any distinct elements $r, t \in S$, we have $V_r^- \cap V_t^- = 0$.* 4. *[\[lem-R-4\]]{#lem-R-4 label="lem-R-4"} For any $r, t \in S$ with $m_{rt} = 2$, and for any $v \in V_t^-$, we have $r \cdot v = v$.* 5. *[\[lem-R-5\]]{#lem-R-5 label="lem-R-5"} For any distinct elements $r, t \in S$, we have $\dim V_r^- = \dim V_t^-$ (possibly infinite).* 6. *[\[lem-R-6\]]{#lem-R-6 label="lem-R-6"} For any $s \in S$, we have $V_s^- \ne 0$.* *Proof.* For [\[lem-R-1\]](#lem-R-1){reference-type="ref" reference="lem-R-1"}, we have $s \cdot (s \cdot v - v) = v - s \cdot v$. Thus $s \cdot v - v \in V_s^-$. Let $U := \sum_{s \in S} V_s^-$. For any vector $u \in U$ and any $s \in S$, we have $s \cdot u - u \in V_s^-$ by [\[lem-R-1\]](#lem-R-1){reference-type="ref" reference="lem-R-1"}. Thus $s \cdot u \in U$. Hence $U$ is invariant under the action of $W$. But $V$ is an irreducible representation of $W$. As a result, we must have $U = V$. The point [\[lem-R-2\]](#lem-R-2){reference-type="ref" reference="lem-R-2"} is proved. The assertion $V_r^- \cap V_t^- = 0$ in [\[lem-R-3\]](#lem-R-3){reference-type="ref" reference="lem-R-3"} follows from Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"} and the fact that $V$ is of $\boldsymbol{a}$-function value $1$. For [\[lem-R-4\]](#lem-R-4){reference-type="ref" reference="lem-R-4"}, suppose $m_{rt} = 2$ and $v \in V_t^-$. We have $$\begin{aligned} t \cdot (r \cdot v - v) & = tr \cdot v - t \cdot v \\ & = rt \cdot v + v \\ & = - r \cdot v + v. \end{aligned}$$ Thus $r \cdot v - v \in V_t^-$. But it also holds that $r \cdot v - v \in V_r^-$ by [\[lem-R-1\]](#lem-R-1){reference-type="ref" reference="lem-R-1"}. This forces $r \cdot v - v = 0$ because $V_r^- \cap V_t^- = 0$ as we have seen in [\[lem-R-3\]](#lem-R-3){reference-type="ref" reference="lem-R-3"}. Now we prove [\[lem-R-5\]](#lem-R-5){reference-type="ref" reference="lem-R-5"}. Suppose first that $m_{rt} = 3$. 
The subgroup $\langle r,t \rangle$ generated by $r$ and $t$ is isomorphic to the finite dihedral group $D_3$. Note that the group algebra $\mathbb{C}[D_3]$ is semisimple. Therefore, $V$ decomposes as a representation of $\langle r,t \rangle$ into a direct sum of copies of irreducible representations of $\langle r,t \rangle$, which are $\mathds{1}, \varepsilon$ and $\rho_1$ (see Lemma [Lemma 1](#lem-Dm-rep){reference-type="ref" reference="lem-Dm-rep"}). But $\varepsilon$ cannot occur in $V$ by Theorem [Theorem 8](#thm-A1){reference-type="ref" reference="thm-A1"}. Thus, we may write $$(V, \rho) = \mathds{1}^{\oplus I} \bigoplus \rho_1^{\oplus J}$$ as a representation of $\langle r,t \rangle$, where $I$ and $J$ are index sets. In view of Remark [\[rmk-Dm-rep\]](#rmk-Dm-rep){reference-type="ref" reference="rmk-Dm-rep"}, we clearly have $\dim V_r^- = \dim V_t^- = \lvert J \rvert$. Suppose instead that $m_{rt} = 2$. Since the Coxeter graph $G$ is assumed to be connected, there is a path in $G$ connecting $r$ and $t$, say, $$(r = s_0, s_1, \dots, s_{k-1}, s_k = t)$$ where $s_i \in S$ and $m_{s_i s_{i+1}} = 3$ for any $i$. Then we have $\dim V_r^- = \dim V_{s_1}^- = \dots = \dim V_t^-$ as desired in [\[lem-R-5\]](#lem-R-5){reference-type="ref" reference="lem-R-5"}. As a result of [\[lem-R-5\]](#lem-R-5){reference-type="ref" reference="lem-R-5"}, if $V_s^- = 0$ for some $s \in S$, then $V_s^- = 0$ for every $s \in S$, and hence every $s \in S$ acts trivially on $V$. But then the $\boldsymbol{a}$-function value of $V$ is zero (see Example [\[eg-a=0\]](#eg-a=0){reference-type="ref" reference="eg-a=0"}), which contradicts our assumption on $V$. This proves [\[lem-R-6\]](#lem-R-6){reference-type="ref" reference="lem-R-6"}. 
◻ For two arbitrary elements $r, t \in S$ such that $m_{rt} = 3$, we define a linear map $f_{tr}$ as follows, $$\begin{aligned} f_{tr} : V_r^- & \to V_t^-, \\ v & \mapsto t \cdot v - v.\end{aligned}$$ Note that $t \cdot v - v \in V_t^-$ by Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-1\]](#lem-R-1){reference-type="ref" reference="lem-R-1"}. Thus $f_{tr}: V_r^- \to V_t^-$ is a well-defined linear map. Intuitively, the vector $v$ generates in $V$ a two-dimensional subrepresentation of the dihedral subgroup $\langle r,t \rangle$, and the subrepresentation is isomorphic to $\rho_1$. In this subrepresentation the vector $v$ and $f_{tr} (v)$ play the same roles as $\beta_r$ and $\beta_t$ in Subsection [2.1](#subsec-dih){reference-type="ref" reference="subsec-dih"}, as illustrated in Figure [\[fig-ftr\]](#fig-ftr){reference-type="ref" reference="fig-ftr"}. **Lemma 13**. *For $r,t \in S$ with $m_{rt} = 3$, the map $f_{tr}$ is a linear isomorphism with inverse $f_{rt}$.* *Proof.* For any $v \in V_r^-$, we have $rt \cdot v + v - t\cdot v \in V_r^- \cap V_t^-$ because $$\begin{aligned} r \cdot (rt \cdot v + v - t\cdot v) & = t \cdot v + r \cdot v - rt \cdot v \\ & = t \cdot v - v - rt \cdot v \\ & = - (rt \cdot v + v - t\cdot v), \text{ and} \\ t \cdot (rt \cdot v + v - t\cdot v) & = trt \cdot v + t \cdot v - v \\ & = rtr \cdot v + t \cdot v - v \\ & = -rt \cdot v + t \cdot v - v \\ & = - (rt \cdot v + v - t\cdot v). \end{aligned}$$ But $V_r^- \cap V_t^- = 0$ by Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-3\]](#lem-R-3){reference-type="ref" reference="lem-R-3"}. Thus $$\refstepcounter{theorem} \label{eq-4-3} rt \cdot v + v - t\cdot v = 0.$$ We have then $$\begin{aligned} f_{rt} f_{tr} (v) & = f_{rt} (t \cdot v - v) \\ & = r \cdot (t\cdot v - v) - (t \cdot v - v) \\ & = r t \cdot v + v - t \cdot v + v \\ & = v \text{ (by Equation \eqref{eq-4-3})}. 
\end{aligned}$$ Interchanging the letters $r$ and $t$, we obtain $f_{tr} f_{rt} (v) = v$ for any $v \in V_t^-$. ◻ We have seen from Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-2\]](#lem-R-2){reference-type="ref" reference="lem-R-2"} and [\[lem-R-3\]](#lem-R-3){reference-type="ref" reference="lem-R-3"} that the representation $(V,\rho)$ satisfies the conditions [\[def-R-1\]](#def-R-1){reference-type="ref" reference="def-R-1"} and [\[def-R-2\]](#def-R-2){reference-type="ref" reference="def-R-2"} of Definition [\[def-R\]](#def-R){reference-type="ref" reference="def-R"}. It remains to prove the condition [\[def-R-3\]](#def-R-3){reference-type="ref" reference="def-R-3"} of Definition [\[def-R\]](#def-R){reference-type="ref" reference="def-R"}. We split the proof into two cases depending on the shape of the Coxeter graph $G$. ### Case one: $G$ is a tree In this case, we fix arbitrarily an element $s_0 \in S$ and a nonzero vector $\alpha_{s_0} \in V_{s_0}^-$. Such $\alpha_{s_0}$ exists because $V_{s_0}^- \ne 0$ by Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-6\]](#lem-R-6){reference-type="ref" reference="lem-R-6"}. For any vertex in $G$, say, $s \in S$, there is a unique path (without repetition of vertices) connecting $s_0$ and $s$ since $G$ is a tree. Suppose this path is $$(s_0, s_1, \dots, s_{k-1}, s_k = s)$$ where $s_i \in S$ and $m_{s_i s_{i+1}} = 3$ for any $i$. We define $$\alpha_s := f_{s_k s_{k-1}} f_{s_{k-1} s_{k-2}} \cdots f_{s_1 s_0} (\alpha_{s_0}).$$ Then $\alpha_s \in V_s^-$, and $\alpha_s$ is nonzero by Lemma [Lemma 13](#lem-fst-isom){reference-type="ref" reference="lem-fst-isom"}. In particular, if $s = s_0$, then this vector is the $\alpha_{s_0}$ chosen above. **Lemma 14**. 1. *[\[lem-tree-subrep-1\]]{#lem-tree-subrep-1 label="lem-tree-subrep-1"} Suppose $s, t \in S$ are such that $m_{st} = 3$. Then we have $t \cdot \alpha_s = \alpha_s + \alpha_t$.* 2. 
*[\[lem-tree-subrep-2\]]{#lem-tree-subrep-2 label="lem-tree-subrep-2"} The subspace $U := \sum_{s \in S} \mathbb{C}\alpha_s$ spanned by $\{\alpha_s \mid s \in S\}$ is a subrepresentation of $W$ in $V$.* *Proof.* Suppose $m_{st} = 3$, and suppose the path in $G$ connecting $s_0$ and $s$ is $$(s_0, s_1, \dots, s_{k-1}, s).$$ If $s_{k-1} = t$, then we have $$\begin{aligned} t \cdot \alpha_s & = (t \cdot \alpha_s - \alpha_s) + \alpha_s \\ & = f_{ts} (\alpha_s) + \alpha_s \\ & = f_{ts} (f_{st} \alpha_{t}) + \alpha_s \text{ (by definition of $\alpha_s$ and $\alpha_t$)} \\ & = \alpha_t + \alpha_s \text{ (by Lemma \ref{lem-fst-isom})}. \end{aligned}$$ If $s_{k-1} \ne t$, then the path in $G$ connecting $s_0$ and $t$ is exactly $$(s_0, s_1, \dots, s_{k-1}, s, t).$$ Then we have $$\begin{aligned} t \cdot \alpha_s & = (t \cdot \alpha_s - \alpha_s) + \alpha_s \\ & = f_{ts} (\alpha_s) + \alpha_s \\ & = \alpha_t + \alpha_s \text{ (by the definition of $\alpha_t$)}. \end{aligned}$$ Therefore, we always have $t \cdot \alpha_s = \alpha_t + \alpha_s$. The point [\[lem-tree-subrep-1\]](#lem-tree-subrep-1){reference-type="ref" reference="lem-tree-subrep-1"} is proved. For [\[lem-tree-subrep-2\]](#lem-tree-subrep-2){reference-type="ref" reference="lem-tree-subrep-2"}, it suffices to show that $t \cdot \alpha_s \in U$ for any $s,t \in S$. If $t = s$ then this is clear since $s \cdot \alpha_s = - \alpha_s \in U$. If $m_{st} = 2$ then $t \cdot \alpha_s = \alpha_s \in U$ by Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-4\]](#lem-R-4){reference-type="ref" reference="lem-R-4"}. If $m_{st} = 3$ then $t \cdot \alpha_s = \alpha_t + \alpha_s \in U$ by [\[lem-tree-subrep-1\]](#lem-tree-subrep-1){reference-type="ref" reference="lem-tree-subrep-1"}. Thus, $U$ is a subrepresentation. ◻ **Corollary 15**. 
*$\dim V_s^- = 1$ for any $s \in S$.* *Proof.* Since $V$ is irreducible, we have $V = \sum_{s \in S} \mathbb{C}\alpha_s$ by Lemma [Lemma 14](#lem-tree-subrep){reference-type="ref" reference="lem-tree-subrep"} [\[lem-tree-subrep-2\]](#lem-tree-subrep-2){reference-type="ref" reference="lem-tree-subrep-2"}. Notice that $\alpha_s \in V_s^-$ is a nonzero vector. So $\dim V_s^- \ge 1$. For arbitrarily fixed $s \in S$, we may suppose $s_1, \dots, s_l$ are elements in $S$ such that $\{\alpha_s, \alpha_{s_1}, \dots, \alpha_{s_l}\}$ is a basis of $V$. Suppose $v \in V_s^-$. Then we can write $$\refstepcounter{theorem} \label{eq-cor-tree-1} v = c_s \alpha_s + \sum_{1 \le i \le l} c_i \alpha_{s_i},$$ where $c_s, c_{i} \in \mathbb{C}$. By Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-4\]](#lem-R-4){reference-type="ref" reference="lem-R-4"} and Lemma [Lemma 14](#lem-tree-subrep){reference-type="ref" reference="lem-tree-subrep"} [\[lem-tree-subrep-1\]](#lem-tree-subrep-1){reference-type="ref" reference="lem-tree-subrep-1"}, we have $$\refstepcounter{theorem} \label{eq-cor-tree-2} \begin{split} -v & = s \cdot v \\ & = - c_s \alpha_s + \sum_{\substack{1 \le i \le l \\ m_{ss_i} = 2}} c_i \alpha_{s_i} + \sum_{\substack{1 \le i \le l \\ m_{ss_i} = 3}} c_i (\alpha_{s_i} + \alpha_s) \\ & = \Bigl(- c_s + \sum_{\substack{1 \le i \le l \\ m_{ss_i} = 3}} c_i \Bigr) \alpha_s + \sum_{1 \le i \le l} c_i \alpha_{s_i}. \end{split}$$ Compare the equations [\[eq-cor-tree-1\]](#eq-cor-tree-1){reference-type="eqref" reference="eq-cor-tree-1"} and [\[eq-cor-tree-2\]](#eq-cor-tree-2){reference-type="eqref" reference="eq-cor-tree-2"}, and notice that $\{\alpha_s, \alpha_{s_1}, \dots, \alpha_{s_l}\}$ is a basis of $V$. Then we must have $c_1 = \dots = c_l = 0$. Thus, $v \in \mathbb{C}\alpha_s$, and then $V_s^- = \mathbb{C}\alpha_s$ is of dimension $1$. ◻ In conclusion, in the case where $G$ is a tree, the representation $V$ is an R-representation. 
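To make the tree case concrete, the following Python sketch (our addition, not part of the proof) works out the smallest interesting example: the geometric representation of the simply laced Coxeter group of type $\mathsf{A}_3$, whose Coxeter graph is the path $s_0 - s_1 - s_2$. It computes the vectors $\alpha_s$ from a chosen $\alpha_{s_0}$ and verifies Lemma 13 ($f_{rt}$ inverts $f_{tr}$), Lemma 14 (1), and the conclusion $\dim V_s^- = 1$.

```python
# Illustration (ours) of the tree case for type A_3 via its geometric
# representation: s_i . alpha_j = alpha_j + alpha_i if m_ij = 3,
# alpha_j if m_ij = 2, and -alpha_i if i = j.
from fractions import Fraction

def mul(a, b):
    n = len(a)
    return [[sum(a[i][l] * b[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def act(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def power(a, k):
    out = [[int(i == j) for j in range(len(a))] for i in range(len(a))]
    for _ in range(k):
        out = mul(out, a)
    return out

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = [
    [[-1, 1, 0], [0, 1, 0], [0, 0, 1]],   # s0 (columns = images of the basis)
    [[1, 0, 0], [1, -1, 1], [0, 0, 1]],   # s1
    [[1, 0, 0], [0, 1, 0], [0, 1, -1]],   # s2
]

# Coxeter relations of type A_3
assert all(power(m, 2) == I3 for m in M)
assert power(mul(M[0], M[1]), 3) == I3
assert power(mul(M[1], M[2]), 3) == I3
assert power(mul(M[0], M[2]), 2) == I3

def f(t, v):  # f_{ts}: v in V_s^-  ->  t.v - v in V_t^-
    return [x - y for x, y in zip(act(M[t], v), v)]

alpha0 = [1, 0, 0]           # a nonzero vector of V_{s0}^-
alpha1 = f(1, alpha0)        # alpha_{s1}, along the path (s0, s1)
alpha2 = f(2, alpha1)        # alpha_{s2}, along the path (s0, s1, s2)
assert alpha1 == [0, 1, 0] and alpha2 == [0, 0, 1]
assert f(0, alpha1) == alpha0                 # Lemma 13: f_{s0 s1} f_{s1 s0} = id
assert act(M[1], alpha0) == [a + b for a, b in zip(alpha0, alpha1)]  # Lemma 14(1)

def rank(mat):  # row reduction over the rationals
    a = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(a[0])):
        piv = next((i for i in range(r, len(a)) if a[i][col] != 0), None)
        if piv is None:
            continue
        a[r], a[piv] = a[piv], a[r]
        for i in range(len(a)):
            if i != r and a[i][col] != 0:
                c = a[i][col] / a[r][col]
                a[i] = [x - c * y for x, y in zip(a[i], a[r])]
        r += 1
    return r

# dim V_{s_i}^- = 3 - rank(M_i + I) = 1, as in Corollary 15
for m in M:
    assert 3 - rank([[m[i][j] + I3[i][j] for j in range(3)] for i in range(3)]) == 1
```

Here the geometric representation of type $\mathsf{A}_3$ is irreducible, so it is itself the unique irreducible R-representation of that group.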
### Case two: $G$ has one cycle {#subsubsec-cycle} In this case, $G$ has exactly one cycle. We assume $\{s_{i} \in S \mid 0 \le i \le n\}$ is the set of vertices on that cycle such that $m_{s_{i} s_{i+1}} = 3$ for any $i \le n-1$ and $m_{s_0s_n} = 3$, as illustrated in Figure [\[fig-cycle\]](#fig-cycle){reference-type="ref" reference="fig-cycle"}. For any vector $v \in V_{s_0}^-$, we define $$X (v) := f_{s_0 s_n} f_{s_n s_{n-1}} \cdots f_{s_2 s_1} f_{s_1 s_0} (v).$$ Then by Lemma [Lemma 13](#lem-fst-isom){reference-type="ref" reference="lem-fst-isom"}, $X : V_{s_0}^- \to V_{s_0}^-$ is a linear isomorphism of the vector space $V_{s_0}^-$. Thus, we may regard $V_{s_0}^-$ as a module over the Laurent polynomial ring $\mathbb{C}[X^{\pm 1}]$. Suppose $U_{s_0} \subseteq V_{s_0}^-$ is a $\mathbb{C}[X^{\pm 1}]$-submodule. For any $t \in S$, we define a subspace $U_t \subseteq V_t^-$ as follows. We choose a path (without repetition of vertices) in $G$ connecting $s_0$ and $t$, say, $$(s_0 = t_0, t_1, t_2, \dots,t_{k-1}, t_k = t),$$ where $t_1, \dots, t_{k-1} \in S$ and $m_{t_i t_{i+1}} = 3$ for any $i$. We define $$U_t := f_{t_k t_{k-1}} f_{t_{k-1} t_{k-2}} \dots f_{t_1 t_0} (U_{s_0}).$$ In particular, if $t = s_0$, then the subspace defined is just $U_{s_0}$. Note that the path connecting $s_0$ and $t$ is not unique in general. But the point [\[lem-cycle-wd-1\]](#lem-cycle-wd-1){reference-type="ref" reference="lem-cycle-wd-1"} of the following lemma ensures that $U_t$ is well-defined. **Lemma 16**. 1. *[\[lem-cycle-wd-1\]]{#lem-cycle-wd-1 label="lem-cycle-wd-1"} The definition of $U_t$ is independent of the choice of the path connecting $s_0$ and $t$.* 2. *[\[lem-cycle-wd-2\]]{#lem-cycle-wd-2 label="lem-cycle-wd-2"} If $r,t \in S$ are such that $m_{rt} = 3$, then $f_{rt} (U_t) = U_r$.* *Proof.* We prove [\[lem-cycle-wd-1\]](#lem-cycle-wd-1){reference-type="ref" reference="lem-cycle-wd-1"} first. 
If there is only one path connecting $s_0$ and $t$, then there is nothing to prove. If $t = s_i$ for some $i \le n$, then there are two paths connecting $s_0$ and $s_i$, namely, $$\refstepcounter{theorem} {\label{eq-lem-wd-1}} (s_0, s_1, s_2, \dots, s_{i-1}, s_i) \text{ and } (s_0, s_n, s_{n-1}, \dots, s_{i+1}, s_i).$$ Recall that $U_{s_0} \subseteq V_{s_0}^-$ is a submodule of $\mathbb{C}[X^{\pm 1}]$. Thus $X(U_{s_0}) = U_{s_0}$, i.e., $$f_{s_0 s_n} f_{s_n s_{n-1}} \cdots f_{s_2 s_1} f_{s_1 s_0} (U_{s_0}) = U_{s_0}.$$ By Lemma [Lemma 13](#lem-fst-isom){reference-type="ref" reference="lem-fst-isom"}, we have then $$\refstepcounter{theorem} \label{eq-lem-wd-2} f_{s_i s_{i-1}} \cdots f_{s_2 s_1} f_{s_1 s_0} (U_{s_0}) = f_{s_i s_{i+1}} \cdots f_{s_{n-1} s_n} f_{s_n s_0} (U_{s_0})$$ Notice that the two paths in [\[eq-lem-wd-1\]](#eq-lem-wd-1){reference-type="eqref" reference="eq-lem-wd-1"} are the only paths connecting $s_0$ and $s_i$ because there is only one cycle in $G$. Thus the equality [\[eq-lem-wd-2\]](#eq-lem-wd-2){reference-type="eqref" reference="eq-lem-wd-2"} implies that the subspace $U_{s_i}$ is independent of the choice of the path. Suppose otherwise $t$ is not on the cycle and there are two (and only two) paths connecting $s_0$ and $t$. Then there exists a unique index $i$ with $1 \le i \le n$ such that there is a unique path $$(s_i = r_0, r_1, \dots, r_l = t)$$ connecting $s_i$ and $t$, and $r_1, \dots, r_l$ do not lie on the cycle in $G$, as illustrated in Figure [\[fig-path\]](#fig-path){reference-type="ref" reference="fig-path"}. Then by Equation [\[eq-lem-wd-2\]](#eq-lem-wd-2){reference-type="eqref" reference="eq-lem-wd-2"} we have $$f_{r_l r_{l-1}} \cdots f_{r_2 r_1} f_{r_1 s_i} f_{s_i s_{i-1}} \cdots f_{s_2 s_1} f_{s_1 s_0} (U_{s_0}) = f_{r_l r_{l-1}} \cdots f_{r_2 r_1} f_{r_1 s_i} f_{s_i s_{i+1}} \cdots f_{s_{n-1} s_n} f_{s_n s_0} (U_{s_0}).$$ This indicates that the definition of $U_t$ is independent of the choice of the path. 
The point [\[lem-cycle-wd-1\]](#lem-cycle-wd-1){reference-type="ref" reference="lem-cycle-wd-1"} is proved. For [\[lem-cycle-wd-2\]](#lem-cycle-wd-2){reference-type="ref" reference="lem-cycle-wd-2"}, suppose first that there is a path connecting $s_0$ and $t$ such that $r$ does not lie on this path. Then by definition we have $f_{rt} (U_t) = U_r$. If instead a path connecting $s_0$ and $t$ passes through $r$, then by definition again we have $f_{tr} (U_r) = U_t$. By applying $f_{rt}$ on both sides, and using Lemma [Lemma 13](#lem-fst-isom){reference-type="ref" reference="lem-fst-isom"}, we obtain $U_r = f_{rt} (U_t)$. ◻ **Lemma 17**. *Suppose $U_{s_0} \subseteq V_{s_0}^-$ is a $\mathbb{C}[X^{\pm 1}]$-submodule. Let $U := \sum_{t \in S} U_t \subseteq V$ where $U_t$ is defined as above. Then we have:* 1. *[\[lem-cycle-1\]]{#lem-cycle-1 label="lem-cycle-1"} $U$ is a subrepresentation of $W$ in $V$.* 2. *[\[lem-cycle-2\]]{#lem-cycle-2 label="lem-cycle-2"} $U \cap V_{s_0}^- = U_{s_0}$.* *Proof.* The proof is similar to that of Lemma [Lemma 14](#lem-tree-subrep){reference-type="ref" reference="lem-tree-subrep"} [\[lem-tree-subrep-2\]](#lem-tree-subrep-2){reference-type="ref" reference="lem-tree-subrep-2"} and Corollary [Corollary 15](#cor-tree){reference-type="ref" reference="cor-tree"}. To prove [\[lem-cycle-1\]](#lem-cycle-1){reference-type="ref" reference="lem-cycle-1"}, it suffices to show that $s \cdot v \in U$ for any $s,t \in S$ and any $v \in U_t$. Note that $U_t \subseteq V_t^-$. Thus if $s = t$ or if $m_{st} = 2$, then the assertion is clear by the definition of $V_t^-$ and Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-4\]](#lem-R-4){reference-type="ref" reference="lem-R-4"}. Now suppose $m_{st} = 3$. 
Then we have $$s \cdot v = (s \cdot v - v) + v = f_{st} (v) + v.$$ By Lemma [Lemma 16](#lem-cycle-wd){reference-type="ref" reference="lem-cycle-wd"} [\[lem-cycle-wd-2\]](#lem-cycle-wd-2){reference-type="ref" reference="lem-cycle-wd-2"}, we have $f_{st} (v) + v \in U_s + U_t$. Thus $s \cdot v \in U$. For [\[lem-cycle-2\]](#lem-cycle-2){reference-type="ref" reference="lem-cycle-2"}, we choose a basis $\{\alpha_1, \dots, \alpha_k\}$ of $U_{s_0}$, and suppose $\{\alpha_1, \dots, \alpha_k, \alpha_{k+1}, \dots, \alpha_l\}$ is a basis of $U$ such that for any index $i \ge k+1$, the vector $\alpha_i$ belongs to $U_t$ for some $t \in S\setminus \{s_0\}$. For such index $i$ and such element $t$, if $m_{ts_0} = 2$, then we have $s_0 \cdot \alpha_i = \alpha_i$ by Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-4\]](#lem-R-4){reference-type="ref" reference="lem-R-4"}; if $m_{ts_0} = 3$, then we have $$s_0 \cdot \alpha_i = (s_0 \cdot \alpha_i -\alpha_i) + \alpha_i = f_{s_0 t} (\alpha_i) + \alpha_i.$$ Note that $f_{s_0 t} (\alpha_i) \in U_{s_0}$ by Lemma [Lemma 16](#lem-cycle-wd){reference-type="ref" reference="lem-cycle-wd"} [\[lem-cycle-wd-2\]](#lem-cycle-wd-2){reference-type="ref" reference="lem-cycle-wd-2"}. Thus $f_{s_0 t} (\alpha_i)$ can be written as a linear combination of $\alpha_1, \dots, \alpha_k$. Suppose now $v \in U \cap V_{s_0}^-$. We write $$v = \sum_{1 \le i \le l} c_i \alpha_i, \quad c_i \in \mathbb{C}.$$ Then $$\begin{aligned} -v & = s_0 \cdot v \\ & = -\sum_{1 \le i \le k} c_i \alpha_i + \sum_{k+1 \le i \le l} c_i \bigl(f_{s_0 t} (\alpha_i) + \alpha_i\bigr) \\ & = \text{(some linear combination of $\alpha_1, \dots, \alpha_k$) } + \sum_{k+1 \le i \le l} c_i \alpha_i. \end{aligned}$$ Thus we must have $c_{k+1} = \dots = c_l = 0$, and $v \in U_{s_0}$. This proves $U \cap V_{s_0}^- \subseteq U_{s_0}$. The inclusion $U \cap V_{s_0}^- \supseteq U_{s_0}$ is obvious. ◻ **Corollary 18**. 
*$\dim V_{s_0}^- = 1$.* *Proof.* By Hilbert's Nullstellensatz, any simple module over $\mathbb{C}[X^{\pm 1}]$ is one-dimensional. Therefore, if $\dim V_{s_0}^- > 1$, then there exists a nonzero proper $\mathbb{C}[X^{\pm 1}]$-submodule of $V_{s_0}^-$, say, $U_{s_0} \subsetneq V_{s_0}^-$. Let $U \subseteq V$ be the subrepresentation of $W$ defined as in Lemma [Lemma 17](#lem-cycle){reference-type="ref" reference="lem-cycle"}. Then Lemma [Lemma 17](#lem-cycle){reference-type="ref" reference="lem-cycle"} implies that $U \subsetneq V$ is a nonzero proper subrepresentation of $W$, contradicting the irreducibility of $V$. Thus, $\dim V_{s_0}^- = 1$. ◻ By Corollary [Corollary 18](#cor-cycle){reference-type="ref" reference="cor-cycle"} and Lemma [Lemma 12](#lem-R){reference-type="ref" reference="lem-R"} [\[lem-R-5\]](#lem-R-5){reference-type="ref" reference="lem-R-5"}, we have $\dim V_s^- = 1$ for any $s \in S$. The proof of Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"} is complete. ## Appendix: The construction of irreducible R-representations {#subsec-R} If the Coxeter graph $G$ is a simply laced tree, then the only irreducible representation of $W$ of $\boldsymbol{a}$-function value $1$ is the geometric representation $V_\text{geom}$ (if $V_\text{geom}$ is irreducible) or the simple quotient of $V_\text{geom}$ (if $V_\text{geom}$ is reducible), as mentioned in Remark [\[rmk-simply-laced\]](#rmk-simply-laced){reference-type="ref" reference="rmk-simply-laced"} [\[rmk-simply-laced-1\]](#rmk-simply-laced-1){reference-type="ref" reference="rmk-simply-laced-1"}. For the sake of completeness and for the reader's convenience, in this subsection we briefly present the concrete construction in [@Hu23] of irreducible R-representations of an irreducible simply laced Coxeter group $(W,S)$ whose Coxeter graph $G$ contains exactly one cycle. 
As promised in Remark [\[rmk-simply-laced\]](#rmk-simply-laced){reference-type="ref" reference="rmk-simply-laced"} [\[rmk-simply-laced-1\]](#rmk-simply-laced-1){reference-type="ref" reference="rmk-simply-laced-1"}, these representations will be parameterized by $\mathbb{C}^\times$. Suppose $s_0, s_1, \dots, s_n$ are the vertices on the cycle in $G$ as in Subsubsection [4.2.3](#subsubsec-cycle){reference-type="ref" reference="subsubsec-cycle"}. For any number $x \in \mathbb{C}^\times$, we define a representation $(\widetilde{V}_x, \widetilde{\rho}_x)$ of $W$ as follows. As a vector space, $\widetilde{V}_x := \bigoplus_{s \in S} \mathbb{C}\alpha_s$ is spanned by a formal basis $\{\alpha_s \mid s \in S\}$. The action $\widetilde{\rho}_x$ of $W$ on $\widetilde{V}_x$ is defined by: 1. $s \cdot \alpha_s = - \alpha_s$, for any $s \in S$; 2. $s_0 \cdot \alpha_{s_n} := \alpha_{s_n} + x \alpha_{s_0}$, and $s_n \cdot \alpha_{s_0} := \alpha_{s_0} + \frac{1}{x} \alpha_{s_n}$; 3. for any other pair $s,t \in S$ with $s \ne t$, i.e., $\{s,t\} \ne \{s_0, s_n\}$, we define $s \cdot \alpha_t := \alpha_t$ if $m_{st} = 2$, and $s \cdot \alpha_t := \alpha_t + \alpha_s$ if $m_{st} = 3$. It can be verified that $(\widetilde{V}_x, \widetilde{\rho}_x)$ is a well-defined representation of $W$. Let $U_x := \{v \in \widetilde{V}_x \mid w \cdot v = v, \forall w \in W\}$ be the subrepresentation with trivial group action ($U_x$ is usually zero, but it may be nonzero, e.g., when $W$ is of affine type $\widetilde{\mathsf{A}}_n$ and $x = 1$). We define $(V_x, \rho_x) := (\widetilde{V}_x, \widetilde{\rho}_x) / U_x$ to be the quotient representation. It turns out that the representations $\{(V_x, \rho_x) \mid x \in \mathbb{C}^\times\}$ are pairwise non-isomorphic, and they are all the irreducible R-representations of $W$, i.e., all the irreducible representations of $W$ of $\boldsymbol{a}$-function value $1$ (up to isomorphism). See [@Hu23] for more details.
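To make the construction concrete, consider the smallest case in which a cycle occurs (this worked example is ours, not taken from [@Hu23]): affine type $\widetilde{\mathsf{A}}_2$, where $S=\{s_0,s_1,s_2\}$, every edge of the triangle is labelled $3$, and $s_n=s_2$. In the ordered basis $(\alpha_{s_0},\alpha_{s_1},\alpha_{s_2})$ the rules above give:

```latex
% Matrices of the action \widetilde{\rho}_x for affine type A~_2,
% in the ordered basis (alpha_{s_0}, alpha_{s_1}, alpha_{s_2}).
\[
  \widetilde{\rho}_x(s_0) =
    \begin{pmatrix} -1 & 1 & x \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
  \qquad
  \widetilde{\rho}_x(s_1) =
    \begin{pmatrix} 1 & 0 & 0 \\ 1 & -1 & 1 \\ 0 & 0 & 1 \end{pmatrix},
  \qquad
  \widetilde{\rho}_x(s_2) =
    \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ x^{-1} & 1 & -1 \end{pmatrix}.
\]
% Each matrix squares to the identity, as required for an involution.
```

For $x=1$ one checks directly that $\alpha_{s_0}+\alpha_{s_1}+\alpha_{s_2}$ is fixed by all three generators, so $U_1\neq 0$ in this case, in line with the remark about affine type $\widetilde{\mathsf{A}}_n$.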
# Acknowledgement {#acknowledgement .unnumbered} The author would like to thank Professor Nanhua Xi for his patient guidance and insightful discussions. The author is also deeply grateful to Professor Si'an Nie for pointing out an error in a previous version of the proof of Theorem [Theorem 11](#thm-A1=R){reference-type="ref" reference="thm-A1=R"}.
--- abstract: | Using the notions of topological dynamics, H. Furstenberg defined central sets and proved the Central Sets Theorem. Later, V. Bergelson and N. Hindman characterized central sets in terms of the algebra of the Stone-Čech compactification of a discrete semigroup. They found that central sets are the members of the minimal idempotents of $\beta S$, the Stone-Čech compactification of a semigroup $(S,\cdot)$. Hindman and Leader introduced the notion of central sets near zero algebraically; dynamical and combinatorial characterizations were later established. For any given filter $\mathcal{F}$ on $S$, a set $A$ is said to be an $\mathcal{F}$-central set if it is a member of a minimal idempotent of the closed subsemigroup of $\beta S$ generated by the filter $\mathcal{F}$. In a recent article, Bergelson, Hindman and Strauss introduced strongly central and very strongly central sets [@key-2], and they characterized these sets dynamically in the same paper. In the present article we characterize strongly $\mathcal{F}$-central sets dynamically and combinatorially. We introduce the filter versions of strongly central and very strongly central sets, and provide dynamical and combinatorial characterizations of such sets. address: - Department of Mathematics, University of Kalyani, Kalyani, Nadia-741235, West Bengal, India - Department of Mathematics, University of Kalyani, Kalyani, Nadia-741235, West Bengal, India - Department of Mathematics, University of Kalyani, Kalyani, Nadia-741235, West Bengal, India author: - Dibyendu De, Sujan Pal, Jyotirmoy Poddar title: A study on Filter Version of Strongly Central Sets --- # Introduction H. Furstenberg introduced the notion of central sets [@key-6] using topological dynamics. It turns out that central sets are powerful Ramsey theoretic objects. In fact, when $\mathbb{N}$ is partitioned into finitely many cells, one cell must be central.
He also proved that central sets are rich in combinatorial properties. Furstenberg's definition used the action of $\mathbb{N},$ but later in [@key-14] the authors established that the definition of central sets can also be extended to arbitrary semigroups. ** 1**. A dynamical system is a pair $\left(X,\langle T_{s}\rangle_{s\in S}\right)$ such that 1. $X$ is a compact topological space; 2. $S$ is a semigroup; 3. for each $s\in S$, $T_{s}$ is a continuous function from $X$ to $X$; and 4. for all $s,t\in S$, $T_{s}\circ T_{t}=T_{st}$. To state the dynamical characterization of central sets, we need the following notions. ** 2**. Let $S$ be a discrete semigroup and $\left(X,\langle T_{s}\rangle_{s\in S}\right)$ be a dynamical system. 1. Let $A$ be a subset of $S$. Then $A$ is said to be syndetic if and only if there exists a finite subset $F$ of $S$ such that $S=\cup_{t\in F}t^{-1}A$. 2. A point $x\in X$ is a uniformly recurrent point if and only if for each neighbourhood $U$ of $x$, $\left\{ s\in S:T_{s}\left(x\right)\in U\right\}$ is syndetic. 3. $x,y\in X$ are called proximal if and only if there is a net $\langle s_{\iota}\rangle_{\iota\in I}$ in $S$ such that $\langle T_{s_{\iota}}\left(x\right)\rangle_{\iota\in I}$ and $\langle T_{s_{\iota}}\left(y\right)\rangle_{\iota\in I}$ converge to the same point. Now we can state the following definition. ** 3**. Let $S$ be a semigroup and let $B\subseteq S$. Then $B$ is central if and only if there exists a dynamical system $\left(X,\langle T_{s}\rangle_{s\in S}\right)$, two points $x,y\in X$, and a neighbourhood $U$ of $y$ such that $x$ and $y$ are proximal, $y$ is uniformly recurrent and $B=\left\{ s\in S:T_{s}\left(x\right)\in U\right\}$. Later, V. Bergelson and N. Hindman in [@key-3] established an algebraic characterization of central sets. This characterization is powerful in that, for instance, it shows at once that any superset of a central set is central.
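A standard concrete instance of a dynamical system in the sense above (a well-known example which we include for the reader's orientation; it is not part of the paper's exposition) is the shift system on $X=\{0,1\}^{S}$:

```latex
% The shift system: X = {0,1}^S with the product topology (compact by
% Tychonoff's theorem).  For s in S define T_s : X -> X by
\[
  (T_{s}x)(t) = x(ts) \qquad (x \in X,\; t \in S).
\]
% Each T_s is continuous, and for all s, t in S and u in S:
\[
  \bigl(T_{s}(T_{t}x)\bigr)(u) = (T_{t}x)(us) = x(ust) = (T_{st}x)(u),
\]
% so T_s composed with T_t equals T_{st}, as Definition 1 requires.
```

If $S$ is a monoid with identity $e$, $A\subseteq S$, $x=\chi_{A}$, and $U=\{y\in X:y(e)=1\}$, then $\{s\in S:T_{s}(x)\in U\}=A$, so every subset of $S$ occurs as a return set of the shift system.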
To state the characterization of central sets from [@key-3] we first need some preliminaries on the algebraic structure of $\beta S$ for a discrete semigroup $S$. Let $\left(S,\cdot\right)$ be any discrete semigroup, and $\beta S$ be the set of all ultrafilters on $S$, where the points of $S$ are identified with the principal ultrafilters. Then $\left\{ \overline{A}:A\subseteq S\right\}$, where $\overline{A}=\left\{ p\in\beta S:A\in p\right\}$, forms a basis of clopen sets for a topology on $\beta S$. With this topology $\beta S$ becomes a compact Hausdorff space in which $S$ is dense, called the Stone-Čech compactification of $S$. The operation of $S$ can be extended to $\beta S$ making $\left(\beta S,\cdot\right)$ a compact right topological semigroup with $S$ contained in its topological center. That is, for all $p\in\beta S$ the function $\rho_{p}:\beta S\to\beta S$ is continuous, where $\rho_{p}(q)=q\cdot p$, and for all $x\in S$, the function $\lambda_{x}:\beta S\to\beta S$ is continuous, where $\lambda_{x}(q)=x\cdot q$. For $p,q\in\beta S$ and $A\subseteq S$, $A\in p\cdot q$ if and only if $\left\{ x\in S:x^{-1}A\in q\right\} \in p$, where $x^{-1}A=\left\{ y\in S:x\cdot y\in A\right\}$. There is a famous theorem due to Ellis [@key-2-1] that if $S$ is a compact right topological semigroup then the set of idempotents $E\left(S\right)$ is non-empty. A non-empty subset $I$ of a semigroup $T$ is called a left ideal of $T$ if $TI\subset I$, a right ideal if $IT\subset I$, and a two sided ideal (or simply an ideal) if it is both a left and a right ideal. A minimal left ideal is a left ideal that does not contain any proper left ideal. Similarly, we can define a minimal right ideal.
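For concreteness (a routine verification which we include; it is not displayed in the paper), the membership formula for $A\in p\cdot q$ recovers the operation of $S$ on principal ultrafilters. Writing $e(x)=\left\{ A\subseteq S:x\in A\right\}$ for the principal ultrafilter generated by $x\in S$:

```latex
% Product of two principal ultrafilters e(x) and e(y), unwinding the
% definition A \in p . q  iff  {z : z^{-1}A \in q} \in p.
\[
  A \in e(x)\cdot e(y)
  \iff \left\{ z\in S : z^{-1}A \in e(y)\right\} \in e(x)
  \iff x^{-1}A \in e(y)
  \iff y \in x^{-1}A
  \iff x\cdot y \in A,
\]
% hence e(x) . e(y) = e(x . y): the extended operation is compatible
% with the identification of S with the principal ultrafilters.
```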
Any compact Hausdorff right topological semigroup $T$ has a smallest two-sided ideal, $\begin{array}{ccc} K(T) & = & \bigcup\left\{ L:L\text{ is a minimal left ideal of }T\right\} \\ & = & \bigcup\left\{ R:R\text{ is a minimal right ideal of }T\right\} \end{array}.$ Given a minimal left ideal $L$ and a minimal right ideal $R$, $L\cap R$ is a group, and in particular contains an idempotent. If $p$ and $q$ are idempotents in $T$ we write $p\leq q$ if and only if $pq=qp=p$. An idempotent is minimal with respect to this relation if and only if it is a member of the smallest ideal $K(T)$ of $T$. See [@key-9] for an elementary introduction to the algebra of $\beta S$ and for any unfamiliar details. ** 4**. Let $S$ be a discrete semigroup and let $C$ be a subset of $S$. Then $C$ is *central* if and only if there is an idempotent $p$ in $K\left(\beta S\right)$ such that $C\in p$. Since $K\left(\beta S\right)$ can be expressed as a union of minimal left (or right) ideals, it is natural to ask whether there exist central sets which do not meet every minimal left ideal in an idempotent. The answer turns out to be yes, and this motivated a new notion of largeness for semigroups. ** 5**. Let $S$ be a discrete semigroup and let $C$ be a subset of $S$. Then $C$ is said to be *strongly central* if for every minimal left ideal $L$ of $\beta S$, $\overline{C}\cap L$ contains an idempotent. This definition first appeared in [@key-2], where the authors also provided a dynamical characterization of *strongly central sets.* **We have a dynamical characterization of strongly central sets in terms of the following notion.** ** 6**. **Let $\mathcal{A}$ be a family of subsets of a semigroup $\left(S,+\right)$. Then $\mathcal{A}$ has the** *thick finite intersection property* **if and only if any intersection of finitely many members of $\mathcal{A}$ is thick.** Now we have the following theorem. ** 7**. **Let $\left(S,+\right)$ be a semigroup and let $B\subset S$.
Then $B$ is strongly central if and only if whenever $\mathcal{A}$ is a family of subsets of $S$ with the thick finite intersection property and $\left(X,\langle T_{s}\rangle_{s\in S}\right)$ is a dynamical system, there exists a point $y\in X$ such that for each $A\in\mathcal{A}$ and each neighborhood $U$ of $y$, $\left\{ s\in A\cap B:T_{s}\left(y\right)\in U\right\} \neq\emptyset$.** *Proof.* [@key-2 Theorem 2.8]. ◻ **"Filtered" notions of syndetic and piecewise syndetic sets were previously defined and considered by Shuungula, Zelenyuk and Zelenyuk [@key-15]. "Filtered" notions of thick sets have also appeared implicitly in much of the literature related to the algebraic structure of the** Stone-Čech compactification**. Protasov and Slobodianiuk [@key-12] first introduced the relative notions to prove some relative topological results. We also note that Zucker [@key-17] considers some related ideas in the context of a different generalization of syndetic, thick, and piecewise syndetic sets.** **In recent years, several papers have faced the problem of localizing known dynamical notions and results; for example, dynamical and combinatorial results near zero have been obtained in [@key-8],[@key-4],[@key-11],[@key-1],[@key-10] and similar studies near an idempotent have been done in [@key-13], [@key-16]. The interplay between algebra and dynamics has been studied near zero in [@key-11] and it has been extended to idempotents in [@key-16]. Motivated by [@key-8], the most general notion of largeness along a filter was introduced in [@key-15].** # Basic Results and Definitions In this section we want to extend some prior results to filters. For this we first need to define filters and introduce those filters whose closures are closed subsemigroups of $\beta S$. ** 8**. Let $S$ be any set and let $\mathcal{F}$ be a collection of subsets of $S$. Then $\mathcal{F}$ is called a *filter* on $S$ if it satisfies the three properties mentioned below: 1.
If $A,\,B\in\mathcal{F}$, then $A\cap B\in\mathcal{F}$; 2. If $A\in\mathcal{F}$ and $A\subseteq B\subseteq S$, then $B\in\mathcal{F}$; 3. $\emptyset\notin\mathcal{F}$. Let $\left(S,\cdot\right)$ be an arbitrary discrete semigroup and let $\mathcal{F}$ be any filter on $S$. Define $\bar{\mathcal{F}}\subseteq\beta S$ by $$\bar{\mathcal{F}}=\bigcap_{V\in\mathcal{F}}\bar{V}.$$ Being an intersection of closed sets, $\bar{\mathcal{F}}$ is a closed subset of $\beta S$, and it consists of the ultrafilters which contain $\mathcal{F}$. Given two filters $\mathcal{F}$ and $\mathcal{G}$ on $S$, we can define the product operation $\mathcal{F}\cdot\mathcal{G}$ by $$A\in\mathcal{F}\cdot\mathcal{G}\,\,\text{iff}\,\,\left\{ x\in S:x^{-1}A\in\mathcal{G}\right\} \in\mathcal{F}.$$ If $\mathcal{F}$ is an idempotent filter, i.e., $\mathcal{F}\subseteq\mathcal{F}\cdot\mathcal{F}$, then $\bar{\mathcal{F}}$ is a closed subsemigroup of $\beta S$; equality holds when $\mathcal{F}$ is an ultrafilter. For instance, the filter of cofinite subsets of $\left(\mathbb{N},+\right)$ is idempotent, and its closure is the closed subsemigroup $\mathbb{N}^{*}=\beta\mathbb{N}\setminus\mathbb{N}$. Throughout our work, we will consider only those filters $\mathcal{F}$ for which $\bar{\mathcal{F}}$ is a closed subsemigroup of $\beta S$. Christopherson and Johnson [@key-5] defined the notion of the *mesh* of a collection of subsets of a set $X$. ** 9**. **Let $X$ be a nonempty set and let $\mathcal{F}\subseteq\mathcal{P}\left(X\right)$. The mesh of $\mathcal{F}$ is $\mathcal{F}^{*}=\left\{ A\subseteq X:X\setminus A\notin\mathcal{F}\right\}$.** In fact $\mathcal{F}^{*}$ is a dual of $\mathcal{F}$, i.e., members of $\mathcal{F}^{*}$ intersect every member of $\mathcal{F}$. ** 10**.
**If $\mathcal{F}$ is a filter, then $\mathcal{F}^{*}=\left\{ A\subseteq X:A\cap B\neq\emptyset,\forall B\in\mathcal{F}\right\}$.** *Proof.* **Let $\mathcal{F}$ be a filter and denote $\mathcal{G}=\left\{ A\subseteq X:A\cap B\neq\emptyset,\forall B\in\mathcal{F}\right\}$.** **Let $A\in\mathcal{F}^{*}$; then by the definition of the mesh, $X\setminus A\notin\mathcal{F}$.** **If $B\in\mathcal{F}$ with $A\cap B=\emptyset$, then $B\subseteq X\setminus A$, which implies $X\setminus A\in\mathcal{F}$ (since $B\in\mathcal{F}$ and $\mathcal{F}$ is a filter), a contradiction. Then $A\in\mathcal{G}$ and hence $\mathcal{F}^{*}\subseteq\mathcal{G}$.** **For the other direction, let $A\in\mathcal{G}$, so that $A\cap B\neq\emptyset$ for all $B\in\mathcal{F}$. If $X\setminus A\in\mathcal{F}$, then $A\cap\left(X\setminus A\right)\neq\emptyset$ since $A\in\mathcal{G}$, a contradiction. So $X\setminus A\notin\mathcal{F}$ and hence $A\in\mathcal{F}^{*}$.** So we have $\mathcal{G}\subseteq\mathcal{F}^{*}$ and the result follows. ◻ ** 11**. Let $\left(S,\cdot\right)$ be a semigroup and $A\subseteq S$. 1. Then $A$ is *thick* if and only if for any $F\in\mathcal{P}_{f}\left(S\right)$, there exists an element $x\in S$ such that $F\cdot x\subseteq A$. That is, a thick set contains a translate of every finite subset. 2. Then $A$ is *syndetic* if and only if there exists $G\in\mathcal{P}_{f}\left(S\right)$ such that $\bigcup_{t\in G}t^{-1}A=S$. That is, a set is syndetic if finitely many translates of it cover the entire semigroup. Now we can state the filter analogues of these large sets, motivated by Christopherson and Johnson [@key-5]. ** 12**. **[\[Fthicksyndetic\]]{#Fthicksyndetic label="Fthicksyndetic"}Let $S$ be a semigroup and let $A\subseteq S$.** 1. **$A$ is $\mathcal{F}$-thick if and only if there exists $V\in\mathcal{F}$ such that for all $$H\in\mathcal{P}_{f}\left(V\right),\bigcap_{t\in H}t^{-1}A\in\mathcal{F}^{*}.$$** 2.
**$A$ is $\mathcal{F}$-syndetic if and only if for every $V\in\mathcal{F}$, there exists $H\in\mathcal{P}_{f}\left(V\right)$ such that $\bigcup_{t\in H}t^{-1}A\in\mathcal{F}.$** **In particular, if we take $\mathcal{F}=\left\{ S\right\}$, then $\mathcal{F}$-thick is the same as thick and $\mathcal{F}$-syndetic is the same as syndetic in $S$.** In the following proposition we give an alternative version of $\mathcal{F}$-thickness to the one stated in Definition [\[Fthicksyndetic\]](#Fthicksyndetic){reference-type="ref" reference="Fthicksyndetic"}. ** 13**. **Let $S$ be a semigroup and let $A\subseteq S$. Then $A$ is $\mathcal{F}$-thick if and only if there exists $V\in\mathcal{F}$ such that for all $H\in\mathcal{P}_{f}\left(V\right)$ and all $W\in\mathcal{F},$ there exists $x\in W$ such that $Hx\subseteq A.$** *Proof.* **We observe that if $\bigcap_{t\in H}t^{-1}A\in\mathcal{F}^{*}$, then for all $W\in\mathcal{F}$ there exists $x_{W}\in W$ such that $x_{W}\in\bigcap_{t\in H}t^{-1}A$. Hence $x_{W}\in t^{-1}A$ for all $t\in H$. This gives us that $tx_{W}\in A$ for all $t\in H$. So we have that for all $W\in\mathcal{F}$ there exists $x_{W}\in W$ such that $Hx_{W}\subseteq A.$ The converse follows by reversing this argument, so we omit it.** ◻ **We now give an example showing that being $\mathcal{F}$-thick for a filter $\mathcal{F}$ on $S$ is not the same as being thick.** ** 14**. **Let $A=\left\{ 2n:n\in\mathbb{N}\right\} \subset\mathbb{N}$. This is clearly not a thick set in $\mathbb{N}$ (no translate of the set $\left\{ 1,2\right\}$ lies in $A$). But take the filter $\mathcal{F}=\left\{ X\subseteq\mathbb{N}:2\in X\right\}$ and take $V=\left\{ 2\right\}$.
For all $H\in\mathcal{P}_{f}\left(V\right)$ (here the only possibility is $H=\left\{ 2\right\}$) and for all $W\in\mathcal{F},$ we have $x=2\in W$ such that $H+x=\left\{ 2\right\} +2=\left\{ 4\right\} \subseteq A.$ So, $A$ is $\mathcal{F}$-thick.** **We now give an example showing that being $\mathcal{F}$-syndetic for a filter $\mathcal{F}$ on $S$ is not the same as being syndetic. In particular, we give an example of a set which is syndetic but, for a suitable filter $\mathcal{F}$ on $S$, is not $\mathcal{F}$-syndetic.** ** 15**. **Pick $A=\left\{ 2n+1:n\in\mathbb{N}\right\} \subset\mathbb{N}$. This is clearly a syndetic set in $\mathbb{N}$ (as $A\cup\left(-1+A\right)\cup\left(-2+A\right)=\mathbb{N}$). But if we take the filter $\mathcal{F}=\left\{ X\subseteq\mathbb{N}:2\in X\right\}$ and $V=\left\{ 2\right\}$, then for all $H\in\mathcal{P}_{f}\left(V\right)$ (here the only possibility is $H=\left\{ 2\right\}$), $\bigcup_{t\in H}\left(-t+A\right)\notin\mathcal{F}$, since $2\notin\bigcup_{t\in H}(-t+A)$. So, $A$ is not $\mathcal{F}$-syndetic.** It is an easy observation that $A\subseteq S$ is syndetic if and only if $S\setminus A$ is not thick. Motivated by this we have the following for our filter notions. ** 16**. *Let $S$ be a semigroup and let $A\subseteq S$.* 1. *$A$ is $\mathcal{F}$-thick if and only if $S\setminus A$ is not $\mathcal{F}$-syndetic.* 2. *$A$ is $\mathcal{F}$-syndetic if and only if $S\setminus A$ is not $\mathcal{F}$-thick.* *Proof.* (1) **Let $A$ be $\mathcal{F}$-thick, and let $V\in\mathcal{F}$ witness the $\mathcal{F}$-thickness of $A$. Suppose, to the contrary, that $S\setminus A$ is $\mathcal{F}$-syndetic.
Then there exists $H_{V}\in\mathcal{P}_{f}\left(V\right)$ such that $$\bigcup_{t\in H_{V}}t^{-1}\left(S\setminus A\right)=W,$$ for some $W\in\mathcal{F}$.** **Since $A$ is $\mathcal{F}$-thick and $H_{V}\in\mathcal{P}_{f}\left(V\right)$, we have $\bigcap_{t\in H_{V}}t^{-1}A\in\mathcal{F}^{*},$ and therefore $$\bigcap_{t\in H_{V}}t^{-1}A\cap W\not=\emptyset.$$ We choose $y\in\bigcap_{t\in H_{V}}t^{-1}A\cap W$. Then $ty\in A$ for all $t\in H_{V}$, while $y\in t^{-1}\left(S\setminus A\right)$ for some $t\in H_{V}$, so that $ty\in S\setminus A$ for that $t$, a contradiction.** **Conversely, let $S\setminus A$ be not $\mathcal{F}$-syndetic. Then there exists $V\in\mathcal{F}$ such that for any $F_{V}\in\mathcal{P}_{f}\left(V\right)$ we have $$\bigcup_{t\in F_{V}}t^{-1}\left(S\setminus A\right)\not\in\mathcal{F}.$$ Since** $S\setminus\bigcup_{t\in F_{V}}t^{-1}\left(S\setminus A\right)=\bigcap_{t\in F_{V}}t^{-1}A$, **we have that $\bigcap_{t\in F_{V}}t^{-1}A\in\mathcal{F}^{*}$. Therefore, $A$ is $\mathcal{F}$-thick.** \(2\) The proof of this part is similar to the previous one, so we omit it. ◻ **Bergelson, Hindman and Strauss introduced the notion of the thick finite intersection property in [@key-2]. Following the same path, we next introduce the** *$\mathcal{F}$-thick finite intersection property*, **which will be useful for several characterizations.** ** 17**. **Let $\mathcal{A}$ be a family of subsets of the semigroup $\left(S,+\right)$. Then $\mathcal{A}$ has the** *$\mathcal{F}$-thick finite intersection property* **if and only if the intersection of any finitely many members of $\mathcal{A}$ is $\mathcal{F}$-thick.** We now quickly recall some results which shall be used later in our paper. The first one is the following. ** 18**. *Let $S$ be a semigroup which contains a minimal left ideal with an idempotent. Let $T$ be a subsemigroup of $S$ which also contains a minimal left ideal with an idempotent and assume that $T\cap K\left(S\right)\neq\emptyset$.
Then $K\left(T\right)=T\cap K\left(S\right)$.* *Proof.* **[@key-9 Theorem 1.65].** ◻ The next two theorems are also important for our work. ** 19**. *Let $D$ be a set and let $\mathcal{A}$ be a subset of $\mathcal{P}\left(D\right)$ which has the finite intersection property. Then there is an ultrafilter $\mathcal{U}$ on $D$ such that every member of $\mathcal{A}$ is in $\mathcal{U}$.* *Proof.* **[@key-9 Theorem 3.8].** ◻ ** 20**. *Let $\left(S,\cdot\right)$ be a semigroup and let $\mathcal{A}$ be a subset of $\mathcal{P}\left(S\right)$ having the finite intersection property. If for each $A\in\mathcal{A}$ and each $x\in A$, there exists $B\in\mathcal{A}$ such that $xB\subseteq A$, then $\bigcap_{A\in\mathcal{A}}\overline{A}$ is a subsemigroup of $\beta S$.* *Proof.* **[@key-9 Theorem 4.20].** ◻ Next we recall the notion of a *tree.* We denote by $\omega=\left\{ 0,1,2,\ldots\right\}$ the first infinite ordinal. ** 21**. $\mathcal{T}$ is called a tree in a set $A$ if and only if $\mathcal{T}$ is a set of functions such that for each $f\in\mathcal{T}$, $\text{domain}\,\left(f\right)\in\omega$ and $\text{range}\,\left(f\right)\subseteq A$, and if $\text{domain}\,\left(f\right)=n>0$, then $f\mid_{n-1}\in\mathcal{T}$. We will now fix some notation. ** 22**. Here $S$ is an arbitrary semigroup. \(a\) Let $f$ be a function with $\text{domain}\,\left(f\right)=n\in\omega$ and let $x$ be given. Then $f\frown x=f\cup\left\{ \left(n,x\right)\right\}$. \(b\) Given a tree $\mathcal{T}$ and $f\in\mathcal{T}$, $B_{f}=B_{f}\left(\mathcal{T}\right)=\left\{ x:f\frown x\in\mathcal{T}\right\}$. \(c\) Let $A\subseteq S$. Then $\mathcal{T}$ is a $*$ tree in $A$ if and only if $\mathcal{T}$ is a tree in $A$ and for all $f\in\mathcal{T}$ and all $x\in B_{f}$, $B_{f\frown x}\subseteq x^{-1}B_{f}$. \(d\) Let $A\subseteq S$.
Then $\mathcal{T}$ is a *FS*-tree in $A$ if and only if $\mathcal{T}$ is a tree in $A$ and for all $f\in\mathcal{T},$ $$B_{f}=\left\{ \prod_{t\in F}g\left(t\right):g\in\mathcal{T},f\subseteq g,\,\text{and\,\ensuremath{\emptyset\neq F\subseteq\text{domain}\,\left(g\right)\setminus\text{domain}\,\left(f\right)}}\right\} .$$ # Algebraic Characterisation of Some Large Sets Along Filter In [@key-1-1] the authors characterized thick sets combinatorially, as follows. ** 23**. *Let $\left(S,\cdot\right)$ be a semigroup and $A\subseteq S$. Then the following are equivalent.* 1. *A is right thick.* 2. *$\left(\forall F\in\mathcal{P}_{f}\left(S\right)\right)\left(\exists x\in S\right)\left(Fx\subseteq A\right)$.* 3. *There is a minimal left ideal $L$ of $\beta S$ such that $L\subseteq\bar{A}$.* *Proof.* [@key-1-1 Theorem 2.6]. ◻ Motivated by this, we now state our theorem which characterises $\mathcal{F}$-thick sets combinatorially. ** 24**. *Let $\left(S,\cdot\right)$ be a semigroup and let $\mathcal{F}$ be any filter on $S$ such that $\bar{\mathcal{F}}$ is a closed subsemigroup of $\beta S$. Then the following are equivalent.* 1. *$A$ is $\mathcal{F}$-thick.* 2. *There is a minimal left ideal $L$ of $\bar{\mathcal{F}}$ such that $L\subseteq\bar{A}$.* *Proof.* (1) $\implies$(2). **Let $A$ be $\mathcal{F}$-thick. Then by the definition, there exists $V\in\mathcal{F}$ such that for every $F_{V}\in\mathcal{P}_{f}(V),$ we have $\bigcap_{t\in F_{V}}t^{-1}A\in\mathcal{F}^{*}$. Hence for every $F_{V}\in\mathcal{P}_{f}(V)$ and every $F\in\mathcal{F}$ we have $\bigcap_{t\in F_{V}}t^{-1}A\cap F\neq\emptyset$, so $\left\{ x^{-1}A:x\in V\right\} \cup\mathcal{F}$ has the finite intersection property. Let us choose $q\in\beta S$ such that $\left\{ x^{-1}A:x\in V\right\} \cup\mathcal{F}\subseteq q$. In particular we have $\mathcal{F}\subset q$ and so $q\in\bar{\mathcal{F}}$.
If $p\in\bar{\mathcal{F}}$, then $V\in p$ as $\mathcal{F}\subseteq p$. Again $V\subseteq\left\{ x\in S:x^{-1}A\in q\right\}$, which implies that $\left\{ x\in S:x^{-1}A\in q\right\} \in p$, so that $A\in p\cdot q$. So we have that $p\cdot q\in\bar{A}$. Since this holds for every $p\in\bar{\mathcal{F}}$, we have that $\mathcal{\bar{F}}\cdot q\subseteq\bar{A}$. Let us denote $L=\mathcal{\bar{F}}\cdot q$. We just need to show that $L$ is a left ideal of $\bar{\mathcal{F}}$. Indeed, once this is shown, by the fact that any left ideal contains a minimal left ideal, we are done. Since $q\in\bar{\mathcal{F}}$ and $\bar{\mathcal{F}}$ is a closed subsemigroup, we have that $\bar{\mathcal{F}}\cdot q\subseteq\bar{\mathcal{F}}$, and so we have the required result.** \(2\) $\implies$(1). **For this direction, let $L$ be a minimal left ideal of $\bar{\mathcal{F}}$ with $L\subseteq\bar{A}$, and pick $q\in L$; by minimality, $L=\mathcal{\bar{F}}\cdot q$. Then $A\in p\cdot q$ for every $p\in\mathcal{\overline{F}}$, that is, $\left\{ x\in S:x^{-1}A\in q\right\} \in p$ for every $p\in\mathcal{\overline{F}}$. Therefore $\left\{ x\in S:x^{-1}A\in q\right\} \in\mathcal{F}$, and if we put $V=\left\{ x\in S:x^{-1}A\in q\right\}$ and take $H\in\mathcal{P}_{f}\left(V\right)$, we have $\bigcap_{t\in H}t^{-1}A\in\mathcal{F}^{*}$ and so $A$ is $\mathcal{F}$-thick.** ◻ Now we give the algebraic characterization of $\mathcal{F}$-syndetic sets. ** 25**. *Let $S$ be a semigroup and $\mathcal{F}$ be an idempotent filter on $S$. Then the following are equivalent.* 1. *$A$ is $\mathcal{F}$-syndetic.* 2. *For any minimal left ideal $L$ of $\bar{\mathcal{F}}$, $L\cap\bar{A}\neq\emptyset$.* *Proof.* This follows from Proposition 16 and Theorem 24. ◻ The following proposition will be helpful for us. ** 26**. *Let $S$ be a semigroup and $\mathcal{F}$ be an idempotent filter on $S$. Let $A$ be strongly $\mathcal{F}$-central and let $B$ be $\mathcal{F}$-thick.
Then $A\cap B$ is $\mathcal{F}$-central.* *Proof.* **Since $B$ is $\mathcal{F}$-thick, by Theorem [ 24](#F-thick){reference-type="ref" reference="F-thick"} there is a minimal left ideal $L$ of $\bar{\mathcal{F}}$ such that $L\subseteq\bar{B}$. Since $A$ is strongly $\mathcal{F}$-central, there is an idempotent $p\in L\cap\bar{A}$. Then $p\in K\left(\bar{\mathcal{F}}\right)$ and $A\cap B\in p$. This proves that $A\cap B$ is $\mathcal{F}$-central.** ◻ # Combinatorial and Dynamical Characterization of Strongly $\mathcal{F}$ Central Sets We first define the filter analogue of the strongly central sets defined in [@key-2]. ** 27**. **Let $S$ be a semigroup, $\mathcal{F}$ be an idempotent filter on $S$ and $A\subseteq S$. Then $A$ is** *said to be strongly $\mathcal{F}$-central* **if and only if for every minimal left ideal $L$ of $\mathcal{\overline{F}}$, there is an idempotent $p\in L\cap\overline{A}$.** The authors of [@key-2] provided a dynamical characterization of strongly central sets in terms of the thick finite intersection property: a family $\mathcal{A}$ of subsets of the semigroup $\left(S,+\right)$ is said to have the thick finite intersection property if and only if any intersection of finitely many members of $\mathcal{A}$ is thick. The following is an equivalent algebraic formulation of the thick finite intersection property, from [@key-2]. ** 28**. *Let $S$ be a semigroup and $\mathcal{A}$ be a collection of subsets of $S$. Then $\mathcal{A}$ has the thick finite intersection property if and only if there is a left ideal $L$ of $\beta S$ such that $L\subseteq\bigcap_{A\in\mathcal{A}}\bar{A}$.* *Proof.* [@key-2 Lemma 2.7]. ◻ Following [@key-2] we define the $\mathcal{F}$-thick finite intersection property as follows. ** 29**. A family $\mathcal{A}$ of subsets of the semigroup $(S,+)$ is said to have the $\mathcal{F}$-thick finite intersection property if and only if the intersection of any finitely many members of $\mathcal{A}$ is $\mathcal{F}$-thick.
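As an aside, the finite computations behind Examples 14 and 15 above can be checked mechanically. The sketch below is ours (the helper names are hypothetical, not taken from any of the cited sources); it works with the principal filter $\mathcal{F}=\left\{ X\subseteq\mathbb{N}:2\in X\right\}$, for which $\mathcal{F}^{*}=\left\{ A\subseteq\mathbb{N}:2\in A\right\}$ and membership in $\mathcal{F}$ just means containing $2$.

```python
# Sketch (our helper names): Examples 14 and 15 for the principal filter
# F = {X subset of N : 2 in X} on (N, +).  For this F, the mesh is
# F* = {A : 2 in A} and "A in F" means 2 in A, so both conditions
# reduce to checks at the single point 2.
WINDOW = range(1, 101)                     # a finite window of N

evens = {n for n in WINDOW if n % 2 == 0}  # the set of Example 14
odds = {n for n in WINDOW if n % 2 == 1}   # the set of Example 15

def preimage(t, A):
    """t^{-1}A = {y : t + y in A}, written additively."""
    return {y for y in WINDOW if t + y in A}

# Example 14: with V = {2}, the only H in P_f(V) is {2}; the evens are
# F-thick because 2^{-1}(evens) is in F*, i.e. 2 lies in 2^{-1}(evens).
assert 2 in preimage(2, evens)             # 2 + 2 = 4 is even

# Example 15: with V = {2} and H = {2}, the union of translates
# 2^{-1}(odds) is not in F, since 2 is not in it; so the odds, although
# syndetic, are not F-syndetic.
assert 2 not in preimage(2, odds)          # 2 + 2 = 4 is not odd
print("Examples 14 and 15 check out on a finite window of N")
```

The window of $100$ is arbitrary; the two assertions only inspect the point $2+2=4$, matching the hand computation in the examples.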
** 30**. *Let $\mathcal{F}$ be a filter on a semigroup $S$ and $\mathcal{A}$ be a collection of subsets of $S$. Then $\mathcal{A}$ has the $\mathcal{F}$-thick finite intersection property if and only if there is a left ideal $L$ of $\bar{\mathcal{F}}$ such that $L\subseteq\bigcap_{A\in\mathcal{A}}\bar{A}$.* *Proof.* The sufficiency is trivial, as by Theorem [ 24](#F-thick){reference-type="ref" reference="F-thick"} a set is $\mathcal{F}$-thick if and only if its closure contains a left ideal of $\bar{\mathcal{F}}$. For the necessity, let $\mathcal{A}$ be a collection of subsets of $S$ with the $\mathcal{F}$-thick finite intersection property, and let $V\in\mathcal{F}$ be as provided by the $\mathcal{F}$-thickness of the finite intersections. We set $\mathcal{D}=\mathcal{P}_{f}\left(V\right)\times\mathcal{P}_{f}\left(\mathcal{A}\right)\times\mathcal{F}$ and define the ordering $\left(G,\mathcal{G},W\right)\geq\left(H,\mathcal{H},W'\right)$ if and only if $G\supseteq H$, $\mathcal{G}\supseteq\mathcal{H}$ and $W\subseteq W'$. Then $\mathcal{D}$ is a directed set. Now for each $\left(G,\mathcal{G},W\right)\in\mathcal{D}$ we have that $\bigcap\mathcal{G}$ is $\mathcal{F}$-thick, and hence there exists $x_{\left(G,\mathcal{G},W\right)}\in W\cap V$ such that $$G+x_{\left(G,\mathcal{G},W\right)}\subseteq\bigcap\mathcal{G}.$$ Since $\beta S$ is compact, the net $\langle x_{\left(G,\mathcal{G},W\right)}\rangle_{\left(G,\mathcal{G},W\right)\in\mathcal{D}}$ has a cluster point $q\in\beta S$. For each $F\in\mathcal{F}$ the net eventually lies in $F$, so $q\in\bar{F}$; hence $q\in\bar{\mathcal{F}}$. We claim that $s+q\in\bigcap_{A\in\mathcal{A}}\bar{A}$ for every $s\in V$. Indeed, suppose not; then there exist $s\in V$ and $A\in\mathcal{A}$ such that $s+q\notin\bar{A}$. Let $B=S\setminus A$, which gives $-s+B\in q$. Since $q$ is a cluster point of the net and $\overline{-s+B}$ is a neighbourhood of $q$, there is some $\left(G,\mathcal{G},W\right)\geq\left(\left\{ s\right\} ,\left\{ A\right\} ,S\right)$ such that $x_{\left(G,\mathcal{G},W\right)}\in\overline{-s+B}$. But as $x_{\left(G,\mathcal{G},W\right)}$ is in $S$, we have that $x_{\left(G,\mathcal{G},W\right)}\in-s+B$, so $s+x_{\left(G,\mathcal{G},W\right)}\in B$. On the other hand $s\in G$, so by the choice of $x_{\left(G,\mathcal{G},W\right)}$ we get $s+x_{\left(G,\mathcal{G},W\right)}\in\bigcap\mathcal{G}\subseteq A$, a contradiction. Hence $V+q\subseteq\bigcap_{A\in\mathcal{A}}\bar{A}$, and therefore $$\bar{\mathcal{F}}+q\subseteq\overline{V}+q\subseteq\bigcap_{A\in\mathcal{A}}\bar{A},$$ so $\bar{\mathcal{F}}+q$ is our required left ideal. ◻ In [@key-1-1] the authors gave a combinatorial characterization of strongly central sets. We now give an analogous characterization of strongly $\mathcal{F}$-central sets. ** 31**. *Let $S$ be a semigroup, $\mathcal{F}$ be an idempotent filter on $S$, and $A\subseteq S$. Then $A$ is strongly $\mathcal{F}$-central if and only if, given any family $\mathcal{A}$ of subsets of $S$ with the $\mathcal{F}$-thick finite intersection property, there exists a downward directed family $\langle C_{F}\rangle_{F\in I}$ of subsets of $A$ such that,* 1. *for each $F\in I$ and each $x\in C_{F}$, there exists $G\in I$ with $C_{G}\subseteq x^{-1}C_{F}$ and* 2. *$\mathcal{A}\cup\left\{ C_{F}:F\in I\right\}$ has the finite intersection property.* *Proof.* Let $L$ be a minimal left ideal of $\bar{\mathcal{F}}$ and let $\mathcal{A}=\left\{ B\subseteq S:L\subseteq\overline{B}\right\}$. Then $L=\bigcap_{B\in\mathcal{A}}\bar{B}$.
If $A_{1},A_{2},\ldots,A_{k}\in\mathcal{A}$ with $L\subseteq\overline{\bigcap_{i=1}^{k}A_{i}}$, then by Theorem [ 24](#F-thick){reference-type="ref" reference="F-thick"}, $\bigcap_{i=1}^{k}A_{i}$ is $\mathcal{F}$-thick. Assume now that there is a downward directed family $\langle C_{F}\rangle_{F\in I}$ of subsets of $A$ satisfying conditions 1 and 2, and set $M=\bigcap_{F\in I}\bar{C}_{F}$. Then by Theorem [ 20](#subsemi){reference-type="ref" reference="subsemi"} we have that $M$ is a subsemigroup of $\left(\beta S,\cdot\right)$, and by condition 2, $L\cap M\neq\emptyset$. So $L\cap M$ is a compact right topological semigroup and therefore contains an idempotent $p$. Since $M\subseteq\bar{A}$, we have $p\in L\cap M\subseteq L\cap\bar{A}$. This proves that $A$ is strongly $\mathcal{F}$-central.

Conversely, let $\mathcal{A}$ be a family of subsets of $S$ satisfying the $\mathcal{F}$-thick finite intersection property. By Lemma [ 30](#FThick_algebra){reference-type="ref" reference="FThick_algebra"}, pick a minimal left ideal $L$ of $\bar{\mathcal{F}}$ with $L\subseteq\bigcap_{B\in\mathcal{A}}\bar{B}$ and pick an idempotent $p\in L\cap\bar{A}$. By Lemmas 14.23.1 and 14.24 of [@key-9], pick a tree $\mathcal{T}$ in $A$ such that for each $f\in\mathcal{T}$, $B_{f}=\left\{ x\in A:f\frown x\in\mathcal{T}\right\} \in p$, and, by Definition 14.22 of [@key-9], for each $f\in\mathcal{T}$ and each $x\in B_{f}$, $B_{f\frown x}\subseteq x^{-1}B_{f}$. For each $F\in\mathcal{P}_{f}\left(\mathcal{T}\right)$, let $C_{F}=\bigcap_{f\in F}B_{f}$. If we direct $I=\mathcal{P}_{f}\left(\mathcal{T}\right)$ by inclusion, then given $F,G\in\mathcal{P}_{f}\left(\mathcal{T}\right)$ we have $C_{F\cup G}\subseteq C_{F}\cap C_{G}$, so $\langle C_{F}\rangle_{F\in I}$ is downward directed. To see that 1 holds, let $F\in\mathcal{P}_{f}\left(\mathcal{T}\right)$ and $x\in C_{F}$. Let $G=\left\{ f\frown x:f\in F\right\}$. Then $C_{G}\subseteq x^{-1}C_{F}$. Since $\mathcal{A}\cup\left\{ C_{F}:F\in\mathcal{P}_{f}\left(\mathcal{T}\right)\right\} \subseteq p$, condition 2 is satisfied. ◻

A dynamical characterization of strongly central sets was given in [@key-2], as mentioned in Theorem 1.7. In the following theorem we obtain an analogous dynamical characterization of strongly $\mathcal{F}$-central sets.

** 32**. *Let $\left(S,+\right)$ be a semigroup, $B\subseteq S$, and $\mathcal{F}$ be a filter on $S$. Then $B$ is strongly $\mathcal{F}$-central if and only if whenever $\mathcal{A}$ is a family of subsets of $S$ with the $\mathcal{F}$-thick finite intersection property and $\left(X,\langle T_{s}\rangle_{s\in S}\right)$ is a dynamical system, there exists a point $y\in X$ such that for each $A\in\mathcal{A}$ and each neighborhood $U$ of $y$, $\left\{ s\in A\cap B:T_{s}\left(y\right)\in U\right\} \neq\emptyset$.*

*Proof.* Let $\left(X,\langle T_{s}\rangle_{s\in S}\right)$ be a dynamical system and $\mathcal{A}$ be a family of subsets of $S$ with the $\mathcal{F}$-thick finite intersection property. By Lemma [ 30](#FThick_algebra){reference-type="ref" reference="FThick_algebra"} we can pick a left ideal $L$ of $\bar{\mathcal{F}}$ such that $L\subseteq\bigcap_{A\in\mathcal{A}}\bar{A}$. Then by the definition of $B$, we have an idempotent $p\in L\cap\bar{B}$. Since $p\in L$, we have $p\in\bar{A}$, that is, $A\in p$, for every $A\in\mathcal{A}$. Now pick any $x\in X$ and let $$y=p-\mathcal{\lim}_{s\in S}T_{s}\left(x\right).$$ Then by Theorem 19.11 of [@key-9], we have $$y=p-\mathcal{\lim}_{s\in S}T_{s}\left(y\right).$$ Let $U$ be a neighbourhood of $y$; then $\left\{ s\in S:T_{s}\left(y\right)\in U\right\} \in p$. Hence $\left\{ s\in S:T_{s}\left(y\right)\in U\right\} \cap A\cap B\in p$ and therefore $\left\{ s\in A\cap B:T_{s}\left(y\right)\in U\right\} \neq\emptyset$.

Conversely, let $L$ be a minimal left ideal of $\bar{\mathcal{F}}$; we will show that there is an idempotent $p\in L\cap\bar{B}$.
Let $\mathcal{A}=\left\{ A\subseteq S:L\subseteq\overline{A}\right\}$; by Lemma [ 24](#F-thick){reference-type="ref" reference="F-thick"} this family has the $\mathcal{F}$-thick finite intersection property. By Theorem 19.8 of [@key-9], $\left(L,\left\langle \lambda_{s}\mid_{L}\right\rangle _{s\in S}\right)$ is a dynamical system, where $\lambda_{s}\mid_{L}$ is the restriction of $\lambda_{s}$ to $L$ (here $\lambda_{s}$ is left multiplication by $s$). Pick a point $r\in L$ such that for all $A\in\mathcal{A}$ and every neighbourhood $\bar{U}$ of $r$ we have $$\left\{ s\in A\cap B:\lambda_{s}\mid_{L}\left(r\right)\in\bar{U}\right\} \neq\emptyset.$$ Let $\mathcal{D}=\mathcal{A}\times\mathcal{U}_{r}$, where $\mathcal{U}_{r}$ denotes the family of neighbourhoods of $r$, and direct $\mathcal{D}$ by agreeing that $\left(A,U\right)\leq\left(A',U'\right)$ if and only if $A\subseteq A'$ and $U\subseteq U'$. For $\left(A,U\right)\in\mathcal{D}$, pick $s_{\left(A,U\right)}\in A\cap B$ such that $s_{\left(A,U\right)}+r\in\bar{U}$. Let $p$ be a limit point of the net $\left\langle s_{(A,U)}\right\rangle _{(A,U)\in\mathcal{D}}$. Since each $s_{(A,U)}\in B$, we get $p\in\bar{B}$. Also, for each $A\in\mathcal{A}$ we have $s_{(A,U)}\in A$ and so $p\in\bar{A}$, where we know that $L\subseteq\bar{A}$. Therefore $p$ is in every clopen neighbourhood of $L$. If $p\notin L$, then by normality there is a clopen neighbourhood of $L$ which does not contain $p$, a contradiction. So we get that $p\in L$, where $L$ is a left ideal of $\bar{\mathcal{F}}$. Since $p+r$ is in every clopen neighbourhood of $r$, $p+r=r$. Since $p\in L\subseteq K\left(\bar{\mathcal{F}}\right)$, pick a minimal right ideal $R$ of $\bar{\mathcal{F}}$ such that $p\in R$. Then $r=p+r\in p+\bar{\mathcal{F}}=R$, so we have that $p,r\in R\cap L$, which is a group. Since $p+r=r$ and $R\cap L$ is a group, $p$ is the identity of this group and hence an idempotent. ◻

Next we give a list of statements and show that all of them are equivalent. This was done in [@key-2] for $\beta S$; here we give the filter analogues.

** 33**.
*Let $S$ be a semigroup, $\mathcal{F}$ be a filter on $S$, and $A\subseteq S$. Then the following are equivalent.*

1. *There exists a minimal dynamical system $\left(X,\langle T_{s}\rangle_{s\in S}\right)$, an open subset $U\subseteq X$ and a point $y\in\bar{U}$ such that $\left\{ s\in S:T_{s}\left(y\right)\in U\right\} =A.$*

2. *For every minimal left ideal $L$ of $\bar{\mathcal{F}}$, there exists an open subset $V$ of $L$ and a point $p\in\bar{V}$ such that $\left\{ s\in S:s+p\in V\right\} =A$.*

3. *There exists a minimal left ideal $L$ of $\bar{\mathcal{F}}$, an open subset $V$ of $L$ and a point $p\in\bar{V}$ such that $\left\{ s\in S:s+p\in V\right\} =A$.*

4. *For every minimal left ideal $L$ of $\bar{\mathcal{F}}$, there exists an open subset $V$ of $L$ and an idempotent $q\in\bar{V}$ such that $\left\{ s\in S:s+q\in V\right\} =A\in q$.*

5. *There exists a minimal left ideal $L$ of $\bar{\mathcal{F}}$, an open subset $V$ of $L$ and an idempotent $q\in\bar{V}$ such that $\left\{ s\in S:s+q\in V\right\} =A\in q$.*

6. *There is a minimal idempotent $q$ in $\bar{\mathcal{F}}$ such that $A\in q$ and, for all $a\in A$ and all $r\in\bar{\mathcal{F}}$, if $a+q=r+q$, then $A\in r$.*

*Proof.* The proof is almost the same as that of [@key-2 Theorem 2.9]; one only considers $\bar{\mathcal{F}}$ in place of $\beta S$, so we omit it here. ◻

** 34**. Let $S$ be a semigroup, $\mathcal{F}$ be a filter on $S$, and $C\subseteq S$. Then $C$ is called very strongly $\mathcal{F}$-central if and only if there is a set $A\subseteq S$ which satisfies any of the conditions in Proposition [ 33](#equiv){reference-type="ref" reference="equiv"} with $A\subseteq C$.

Immediately we have the following theorem.

** 35**. *Let $S$ be a semigroup and $\mathcal{F}$ be a filter on $S$. Let $C$ be a very strongly $\mathcal{F}$-central set. Then there is a minimal right ideal $R$ of $\bar{\mathcal{F}}$ such that $C$ is a member of every idempotent in $R$.*

*Proof.* Pick $A\subseteq C$ which satisfies condition 6 of Proposition [ 33](#equiv){reference-type="ref" reference="equiv"}. So pick a minimal idempotent $q\in\bar{\mathcal{F}}$ such that $A\in q$ and, for all $a\in A$ and all $r\in\bar{\mathcal{F}}$, if $a+q=r+q$, then $A\in r$. Let $R=q+\bar{\mathcal{F}}$. Then $R$ is a minimal right ideal of $\bar{\mathcal{F}}$; let $p$ be an idempotent in $R$. So $R=p+\bar{\mathcal{F}}$. Thus $q+p=p$ and $p+q=q.$ Therefore, for every $a\in A$, $a+p+q=a+q,$ so for every $a\in A,$ $A\in a+p.$ Therefore $A\subseteq\left\{ s\in S:-s+A\in p\right\}$, so $A\in q+p=p.$ ◻

Following [@key-1-1], in contrast to strongly $\mathcal{F}$-central sets we introduce the following definition.

** 36**. A set $A$ is thickly $\mathcal{F}$-central if there is some minimal left ideal $L$ of $\bar{\mathcal{F}}$ such that all idempotents of $L$ are contained in $\overline{A}$.

We obtain a combinatorial characterization of thickly $\mathcal{F}$-central sets similar to [@key-1-1].

** 37**. *Let $S$ be a semigroup and $\mathcal{F}$ be a filter on $S$. A set $A$ is thickly $\mathcal{F}$-central if and only if there is a family $\mathcal{A}$ of subsets of $S$ with the $\mathcal{F}$-thick finite intersection property and a downward directed family $\left\langle C_{F}\right\rangle _{F\in I}$ of subsets of $S\setminus A$ such that*

1. *for each $F\in I$ and $x\in C_{F}$ there exists $G\in I$ such that $C_{G}\subseteq x^{-1}C_{F}$, and*

2. *$\mathcal{A}\cup\left\{ C_{F}:F\in I\right\}$ does not have the finite intersection property.*

*Proof.* $A$ is thickly $\mathcal{F}$-central if and only if $S\setminus A$ is not strongly $\mathcal{F}$-central. The claim then follows from Theorem 31.
◻

# Product of $\mathcal{F}$ Strongly Central Sets

In [@key-19] Hindman and Strauss showed that the product of two central sets is again central, and in [@key-3-1] the authors proved that the product of certain large sets along a filter is again filter-large. Here we prove that the product of two strongly central sets is again strongly central. The proof uses the algebra of the Stone-Čech compactification and is as follows.

** 38**. *Let $S$ and $T$ be two semigroups. Let $A$ be a strongly central set in $S$ and $B$ be a strongly central set in $T$. Then the cartesian product $A\times B$ is a strongly central set in $S\times T$.*

*Proof.* Let $R=\tilde{\iota}^{-1}\left(K\left(\beta S\right)\times K\left(\beta T\right)\right)$, where $\tilde{\iota}$ is the continuous extension of the identity function. Then $R$ is a closed subsemigroup of $\beta\left(S\times T\right)$ and $K\left(R\right)=K\left(\beta\left(S\times T\right)\right)\cap R$ by Theorem [ 18](#minimal ideal){reference-type="ref" reference="minimal ideal"}. Let $L$ be a minimal left ideal of $\beta\left(S\times T\right)$; then $N=L\cap R$ is a minimal left ideal of $R$. So $\tilde{\iota}\left(N\right)$ is a minimal left ideal of $\beta S\times\beta T$. Let $L_{1}=\pi_{1}\left(\tilde{\iota}\left(N\right)\right)$ and $L_{2}=\pi_{2}\left(\tilde{\iota}\left(N\right)\right)$ be the coordinatewise projections. Then $L_{1}$ is a minimal left ideal of $\beta S$ and $L_{2}$ is a minimal left ideal of $\beta T$. Since $A$ is strongly central in $S$ and $B$ is strongly central in $T$, there exist $p\in L_{1}\cap\bar{A}$ and $q\in L_{2}\cap\bar{B}$. Denote $M=\tilde{\iota}^{-1}\left[\left\{ \left(p,q\right)\right\} \right]$ and by Theorem 4.43.1 of [@key-9] pick an idempotent $r\in K\left(M\right)$ such that $\tilde{\iota}\left(r\right)=\left(p,q\right)$. Then $A\times B\in r$ since $A\in p$ and $B\in q$. Also $r\in N\subseteq L$.
Therefore $r\in L\cap\overline{\left(A\times B\right)}$, which concludes the proof. ◻

There is an alternative method of proving the previous theorem, with the help of the following combinatorial characterization of right strongly central sets, shown in [@key-1-1].

** 39**. *Let $S$ be a semigroup and $A\subset S$. Then $A$ is right strongly central if and only if for every family $\mathcal{A}$ of subsets of $S$ with the right thick finite intersection property, there exists a downward directed family $\left\langle C_{F}\right\rangle _{F\in I}$ of subsets of $A$ such that*

*(i) for each $F\in I$ and each $x\in C_{F}$ there exists $G\in I$ with $C_{G}\subseteq x^{-1}C_{F}$, and*

*(ii) $\mathcal{A}\cup\left\{ C_{F}:F\in I\right\}$ has the finite intersection property.*

*Proof.* See [@key-1-1 Theorem 2.6]. ◻

We quickly prove another small lemma.

** 40**. *Let $S$ and $T$ be semigroups and let $C\subseteq S\times T$ be a thick set. Then $A=\pi_{1}\left(C\right)$ and $B=\pi_{2}\left(C\right)$ are thick sets in $S$ and $T$ respectively.*

*Proof.* Without loss of generality, suppose that $A$ is not thick in $S$. Then there exists a finite subset $H_{1}$ of $S$ such that for any $x\in S$, $H_{1}\cdot x$ is not contained in $A$. Now, for any finite subset $H_{2}$ of $T$, $$H_{1}\times H_{2}\in P_{f}\left(S\times T\right)$$ and for any $\left(x,y\right)\in S\times T$, $\left(H_{1}\times H_{2}\right)\left(x,y\right)$ is not contained in $C$, a contradiction to the fact that $C$ is thick. So $A=\pi_{1}\left(C\right)$ and $B=\pi_{2}\left(C\right)$ are thick sets in $S$ and $T$ respectively. ◻

Now we prove the following.

** 41**. *Let $S$ and $T$ be semigroups and let $A\subset S$ and $B\subset T$ be right strongly central sets in $S$ and $T$ respectively. Then $A\times B$ is a right strongly central set in $S\times T$.*

*Proof.* Let $\mathcal{C}$ be a family of subsets of $S\times T$ with the thick finite intersection property.
Let $\mathcal{A}=\left\{ \pi_{1}\left(C\right):C\in\mathcal{C}\right\}$ and $\mathcal{B}=\left\{ \pi_{2}\left(C\right):C\in\mathcal{C}\right\}$ be the two families of subsets of $S$ and $T$ respectively. Both of them have the thick finite intersection property by the previous lemma and the fact that a finite intersection of thick sets is thick. Now $A$ is a right strongly central set in $S$, so for the family $\mathcal{A}$ of subsets of $S$ with the right thick finite intersection property, there exists a downward directed family $\left\langle C_{F}\right\rangle _{F\in I}$ of subsets of $A$ such that

\(i\) for each $F\in I$ and each $x\in C_{F}$ there exists $G\in I$ with $C_{G}\subseteq x^{-1}C_{F}$ and

\(ii\) $\mathcal{A}\cup\left\{ C_{F}:F\in I\right\}$ has the finite intersection property.

Also, $B$ is a right strongly central set in $T$, so for the family $\mathcal{B}$ of subsets of $T$ with the right thick finite intersection property, there exists a downward directed family $\left\langle D_{F}\right\rangle _{F\in I}$ of subsets of $B$ such that

\(i\) for each $F\in I$ and each $y\in D_{F}$ there exists $G\in I$ with $D_{G}\subseteq y^{-1}D_{F}$ and

\(ii\) $\mathcal{B}\cup\left\{ D_{F}:F\in I\right\}$ has the finite intersection property.

Setting $E_{F}=C_{F}\times D_{F}$, the family $\left\langle E_{F}\right\rangle _{F\in I}$ is a downward directed family of subsets of $A\times B$ and

\(i\) for each $F\in I$ and each $\left(x,y\right)\in E_{F}$ there exists $G\in I$ with $E_{G}\subseteq\left(x^{-1},y^{-1}\right)E_{F}=\left(x,y\right)^{-1}E_{F}$ and

\(ii\) $\left(\mathcal{A}\times\mathcal{B}\right)\cup\left\{ E_{F}:F\in I\right\}$ has the finite intersection property.

So $\mathcal{C}\cup\left\{ E_{F}:F\in I\right\}$ has the finite intersection property. ◻

Another notion, called right thickly central, was also defined in [@key-1-1], where the authors characterised such sets combinatorially.
Here we show that the product of two such sets is again a set of this kind, using the following theorem proved by the authors in their paper.

** 42**. *Let $S$ be a semigroup and $A\subset S$. Then $A$ is right thickly central if and only if there is a family $\mathcal{A}$ of subsets of $S$ with the right thick finite intersection property such that for every downward directed family $\left\langle C_{F}\right\rangle _{F\in I}$ of subsets of $S\setminus A$ with the property that*

*(i) for each $F\in I$ and each $x\in C_{F}$ there exists $G\in I$ with $C_{G}\subseteq x^{-1}C_{F}$,*

*the family*

*(ii) $\mathcal{A}\cup\left\{ C_{F}:F\in I\right\}$ does not have the finite intersection property.*

*Proof.* See [@key-1-1 Theorem 2.3]. ◻

Now we have the following.

** 43**. *Let $S$ and $T$ be two semigroups. Let $A$ be a right thickly central set in $S$ and $B$ be a right thickly central set in $T$. Then the cartesian product $A\times B$ is a right thickly central set in $S\times T$.*

*Proof.* Let $\left\langle E_{F}\right\rangle _{F\in I}$ be a downward directed family of subsets of $\left(S\times T\right)\setminus\left(A\times B\right)$ and, for any $F\in I$, let $C_{F}=\pi_{1}\left(E_{F}\right)$ and $D_{F}=\pi_{2}\left(E_{F}\right)$, which are subsets of $S\setminus A$ and $T\setminus B$ respectively. $A$ is a right thickly central set in $S$, so there is a family $\mathcal{A}$ of subsets of $S$ with the right thick finite intersection property such that, for the downward directed family $\left\langle C_{F}\right\rangle _{F\in I}$ of subsets of $S\setminus A$ with

\(i\) for each $F\in I$ and each $x\in C_{F}$ there exists $G_{1}\in I$ with $C_{G_{1}}\subseteq x^{-1}C_{F}$,

we have

\(ii\) $\mathcal{A}\cup\left\{ C_{F}:F\in I\right\}$ does not have the finite intersection property.
Also, $B$ is a right thickly central set in $T$, so there is a family $\mathcal{B}$ of subsets of $T$ with the right thick finite intersection property such that, for the downward directed family $\left\langle D_{F}\right\rangle _{F\in I}$ of subsets of $T\setminus B$ with

\(i\) for each $F\in I$ and each $y\in D_{F}$ there exists $G_{2}\in I$ with $D_{G_{2}}\subseteq y^{-1}D_{F}$,

we have

\(ii\) $\mathcal{B}\cup\left\{ D_{F}:F\in I\right\}$ does not have the finite intersection property.

We claim that $\mathcal{A}\times\mathcal{B}$ is the required family of subsets of $S\times T$. Indeed,

\(i\) for each $F\in I$ and each $\left(x,y\right)\in E_{F}$ there exists $G\in I$, where $G$ is an upper bound of $G_{1}$ and $G_{2}$ in $I$, with $E_{G}\subseteq\left(x^{-1},y^{-1}\right)E_{F}=\left(x,y\right)^{-1}E_{F}$, and

\(ii\) $\left(\mathcal{A}\times\mathcal{B}\right)\cup\left\{ E_{F}:F\in I\right\}$ does not have the finite intersection property.

Hence we have the required result. ◻

There is another way to prove the last fact, which we mention in passing. Since $A\subset S$ and $B\subset T$ are right thickly central sets in $S$ and $T$ respectively, $A^{c}\subset S$ and $B^{c}\subset T$ are not right strongly central sets. Hence $\left(A\times B\right)^{c}$, which contains $A^{c}\times B^{c}$, is not a right strongly central set in $S\times T$, and therefore $A\times B$ is a right thickly central set in $S\times T$.

** 1**. What about the filter versions of these theorems?

$\vspace{0.3in}$

**Acknowledgment:** The first author acknowledges the support of NBHM grant no. 02011/6/2021/NBHM(RP)R&DII/9678 and the second author acknowledges the CSIR-UGC NET fellowship with file No. 09/106(0199)/2019-EMR-I.

$\vspace{0.3in}$

E. Bayatmanesh, M. Akbari Tootkaboni, A. Bagheri Sales, *Central sets theorem near zero*, Topology and its Applications 210 (2016), 70-80.

V. Bergelson, N. Hindman, and D. Strauss, *Strongly central sets and sets of polynomial returns mod 1*, Proc. Amer. Math. Soc.
140 (2012), 2671-2686.

V. Bergelson and N. Hindman, *Nonmetrizable topological dynamics and Ramsey theory*, Trans. Amer. Math. Soc. 320 (1990), 293-320.

T. Bhattacharya, S. Chakraborty, S. K. Patra, *Combined algebraic properties of C$^{*}$-sets near zero*, Semigroup Forum, 2019.

C. Christopherson and J. H. Johnson Jr., *Algebraic characterizations of some relative notions of size*, arXiv:2105.09723.

R. Ellis, *Lectures on topological dynamics*, W. A. Benjamin, New York, 1969.

H. Furstenberg, *Recurrence in ergodic theory and combinatorial number theory*, Princeton University Press, Princeton, New Jersey, 1981.

S. Goswami, J. Poddar, *Central Sets Theorem along filters and some combinatorial consequences*, Indagationes Mathematicae 33 (2022), 1312-1325.

N. Hindman, L. L. Jones, D. Strauss, *The relationships among many notions of largeness for subsets of a semigroup*, Semigroup Forum 99 (2019), 9-31.

N. Hindman, I. Leader, *The semigroup of ultrafilters near $0$*, Semigroup Forum 59 (1999), 33-55.

N. Hindman, D. Strauss, *Cartesian products of sets satisfying the Central Sets Theorem*, Topology Proceedings 35 (2010), 203-223.

N. Hindman, D. Strauss, *Algebra in the Stone-Čech Compactification: Theory and Applications*, second edition, de Gruyter, Berlin, 2012.

L. Luperi Baglini, *Partition regularity of polynomial systems near 0*, Semigroup Forum 103 (2021), 191-208.

S. Pal, J. Poddar, *A combinatorial study on product of filter large sets*, arXiv:2303.09958.

S. K. Patra, *Dynamical characterizations of combinatorially rich sets near zero*, Topology and its Applications 240 (2018), 173-182.

I. Protasov and S. Slobodianiuk, *Relative size of subsets of a semigroup*, arXiv:1506.00112.

Md. M. Shaikh, S. K. Patra, and M. K. Ram, *Dynamics near an idempotent*, Topology and its Applications 282 (2020), 107328.

H. Shi and H. Yang, *Nonmetrizable topological dynamical characterization of central sets*, Fund. Math. 150 (1996), 1-9.

O. Shuungula, Y. Zelenyuk and Y. Zelenyuk, *The closure of the smallest ideal of an ultrafilter semigroup*, Semigroup Forum 79 (2009), 531-539.

M. A. Tootkaboni, T. Vahed, *The semigroup of ultrafilters near an idempotent of a semitopological semigroup*, Topology and its Applications 159 (2012), 3494-3503.

A. Zucker, *Thick, syndetic, and piecewise syndetic subsets of Fraïssé structures*, Topology Appl. 223 (2017), 1-12.
{ "id": "2310.02020", "title": "A Study on Filter Version of Strongly Central Sets", "authors": "Dibyendu De, Sujan Pal, Jyotirmoy Poddar", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Inverse problems are inherently ill-posed and therefore require regularization techniques to achieve a stable solution. While traditional variational methods have well-established theoretical foundations, recent advances in machine learning based approaches have shown remarkable practical performance. However, the theoretical foundations of learning-based methods in the context of regularization are still underexplored. In this paper, we propose a general framework that addresses the current gap between learning-based methods and regularization strategies. In particular, our approach emphasizes the crucial role of data consistency in the solution of inverse problems and introduces the concept of data-proximal null-space networks as a key component for their solution. We provide a complete convergence analysis by extending the concept of regularizing null-space networks with data proximity in the visible part. We present numerical results for limited-view computed tomography to illustrate the validity of our framework. **Keywords:** Regularization, null-space network, data-proximal network, convergence analysis, data consistency author: - Simon Göppel - Jürgen Frikel - Markus Haltmeier date: September 12, 2023 title: Data-proximal null-space networks for inverse problems --- # Introduction Inverse problems arise in all kinds of practical applications, such as medical imaging, signal processing, astronomy, computer vision, and more. In this paper, we combine learning-based methods with established regularization concepts for solving inverse problems.
Mathematically, an inverse problem can be expressed as the problem of recovering the unknown $x\in \mathbb{X}$ from noisy data $$\label{eq:ip} y^\delta = \mathbf{A}x+ \eta \,,$$ where $\mathbf{A}\colon \mathbb{X}\to \mathbb{Y}$ is a linear operator between separable Hilbert spaces $\mathbb{X}$ and $\mathbb{Y}$, $\eta \in \mathbb{Y}$ with $\lVert\eta\rVert \leq \delta$ is the unknown data error, and $\delta \geq 0$ is the known noise bound [@engl1996regularization]. ## Regularization methods A major feature of inverse problems is their ill-posedness, meaning that exact solutions of $\mathbf{A}x= y$ are either not unique or unstable with respect to data perturbations. In particular, non-uniqueness may cause $\mathbf{A}^+\mathbf{A}x^\star$ to be different from the sought solution $x^\star$. To obtain reliable reconstructions, one must use regularization techniques that provide a stable approach to solving [\[eq:ip\]](#eq:ip){reference-type="eqref" reference="eq:ip"} and account for instability and non-uniqueness. Regularization methods consist of a family of continuous mappings $\mathbf{R}_\gamma \colon \mathbb{Y}\to \mathbb{X}$ for $\gamma \in \Gamma$ which, together with a suitable parameter choice strategy $\gamma^\star(\delta, y^\delta)$, are convergent in the sense specified in Definition [Definition 1](#def:regularization){reference-type="ref" reference="def:regularization"}. Note that, for simplicity, we work with single-valued regularization methods. For the definition of set-valued regularization methods, see [@benning2018modern]. In classical regularization methods, $\Gamma = (0, \infty)$ is a directed interval, for which we denote the regularization parameter by $\alpha$; see [@engl1996regularization]. A regularization method is said to be linear if $\mathbf{R}_\alpha$ is linear. Prominent examples of classical linear regularization methods are Tikhonov regularization and more general spectral filtering methods [@engl1996regularization].
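As a concrete illustration (not taken from the paper), the following minimal Python sketch uses a hypothetical diagonal operator with one small singular value as a stand-in for a compact operator: naive pseudoinversion amplifies a data error of size $\delta$ by $1/\sigma_{\min}$, while classical Tikhonov regularization with a quadratic penalty keeps the error bounded.

```python
import numpy as np

# Hypothetical toy operator: sigma_min = 1e-6 mimics the decaying singular
# values of a compact operator, the source of ill-posedness.
A = np.array([[1.0, 0.0],
              [0.0, 1e-6]])
x_star = np.array([1.0, 1.0])
delta = 1e-3
y_delta = A @ x_star + np.array([0.0, delta])   # worst-case noise, ||eta|| = delta

# Naive pseudoinversion: the error is amplified by 1/sigma_min.
err_naive = np.linalg.norm(np.linalg.pinv(A) @ y_delta - x_star)

# Classical Tikhonov regularization: the minimizer of
# 0.5*||y - A x||^2 + 0.5*alpha*||x||^2 solves (A^T A + alpha I) x = A^T y.
alpha = delta
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ y_delta)
err_tik = np.linalg.norm(x_alpha - x_star)

print(err_naive)    # 1000.0: the noise is amplified by delta / sigma_min
print(err_tik < 2)  # True: the regularized error stays of order one
```

In the diagonal case the normal-equations solve replaces the unbounded factor $1/\sigma$ by the filtered factor $\sigma/(\sigma^2+\alpha)$, which is the prototype of the spectral filtering methods mentioned above.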
A class of non-linear regularization methods is variational regularization, where $\mathbf{R}_\alpha y^\delta$ is a minimizer of the generalized Tikhonov functional $$\label{eq:tik} \mathcal T_\alpha^\delta (x) = \frac{1}{2} \, \lVert y^\delta - \mathbf{A}x\rVert^2 + \alpha \mathcal P(x) \,.$$ Here $\lVert y^\delta - \mathbf{A}x\rVert^2 /2$ is the data fidelity term that enforces proximity between $\mathbf{A}x$ and $y^\delta$, while the functional $\mathcal P$ incorporates prior information about the underlying signal class. The regularization parameter $\alpha >0$ acts as a trade-off between proximity to the data and regularity. The regularization approach [\[eq:tik\]](#eq:tik){reference-type="eqref" reference="eq:tik"} offers great flexibility because it is easily tailored to the forward operator, the underlying signal, and the given perturbations. Common selections for $\mathcal P$ include the TV penalty, Sobolev norms, or sparsity priors [@scherzer2009variational]. Additionally, variational regularization has a solid theoretical foundation. In particular, under certain weak additional assumptions, one obtains convergence $\mathbf{R}_\alpha y^\delta \to x^\star$ and data proximity $\mathbf{A}\mathbf{R}_\alpha y^\delta \to \mathbf{A}x^\star$ as $\delta \to 0$. ## Learned reconstructions Major drawbacks of variational regularization are the challenging design of penalties $\mathcal P$ well tuned to the signals of interest, and the time-consuming minimization of [\[eq:tik\]](#eq:tik){reference-type="eqref" reference="eq:tik"}. To overcome these issues, data-driven methods for solving inverse problems have been developed recently [@adler2017; @altekrueger2023; @arridge2019; @jin2017deep; @li2020NETT; @ongie2020deep; @Wang2020].
In these methods, a class $(\mathbf{R}_\theta)_{\theta \in \Theta}$ of reconstruction operators $\mathbf{R}_\theta \colon \mathbb{Y}\to \mathbb{X}$ is designed to perform well on a class of training data $(x_1, y_1^\delta), \dots, (x_N, y_N^\delta) \in \mathbb{X}\times \mathbb{Y}$ consisting of pairs of desired reconstructions and noisy data. The class of reconstruction operators should then be both large enough to include reasonable reconstruction methods, and sufficiently constrained to account for the limited amount of training data and computational resources. Popular architectures for inverse problems are two-step residual networks, $$\label{eq:res-net} \mathbf{R}_\theta (y^\delta) \coloneqq (\mathrm{Id}_\mathbb{X}+ \mathbf{W}_\theta) \circ \mathbf{B}_\alpha(y^\delta)\,,$$ where $\mathbf{B}_\alpha\colon \mathbb{Y}\to \mathbb{X}$ is an initial classical reconstruction method and $(\mathbf{W}_\theta)_{\theta \in \Theta}$ is an image-to-image architecture such as the U-net [@ronneberger2015]. Given the initial reconstructions $z_n^\delta = \mathbf{B}_\alpha(y_n^\delta)$, the network $\mathbf{W}_\theta$ is trained independently of the forward operator by minimizing the empirical risk $\mathcal{L}_N(\theta) = (1/N) \cdot \sum_{n=1}^N \lVert x_n - (\mathrm{Id}_\mathbb{X}+ \mathbf{W}_\theta) (z_n^\delta)\rVert^2$. Empirically, such approaches have proven to provide excellent results [@antholzer2019deep; @he2016residual; @jin2017deep; @kang2017deep; @lee2017deep; @majumdar2015realtime; @rivenson2017deep]. However, from a regularization point of view, [\[eq:res-net\]](#eq:res-net){reference-type="eqref" reference="eq:res-net"} lacks theoretical justification. Even if $\mathbf{B}_\alpha$ together with the parameter choice $\alpha= \alpha(\delta, y^\delta)$ is a regularization method, convergence of neither $\mathbf{R}_{\alpha, \theta} (y^\delta)$ nor $\mathbf{A}\circ \mathbf{R}_{\alpha, \theta} (y^\delta)$ is guaranteed as $\delta \to 0$.
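The two-step scheme and its training can be sketched in a few lines of Python; everything below (operator, data, the linear stand-in for the learned network) is an illustrative toy, not the paper's setup. For a purely linear stand-in for $\mathbf{W}_\theta$, minimizing the empirical risk $\mathcal{L}_N$ is a least-squares problem and is solvable in closed form; a U-net trained by stochastic gradient descent would take its place in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def B_alpha(A, y, alpha=1e-2):
    """Initial classical reconstruction B_alpha: Tikhonov with quadratic penalty."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

# Toy training pairs (x_n, y_n^delta) for an assumed random forward operator.
A = rng.standard_normal((3, 3))
xs = rng.standard_normal((20, 3))                    # desired reconstructions x_n
ys = xs @ A.T + 1e-2 * rng.standard_normal((20, 3))  # noisy data y_n^delta
zs = np.stack([B_alpha(A, y) for y in ys])           # z_n = B_alpha(y_n^delta)

def risk(W):
    """Empirical risk L_N = (1/N) sum_n ||x_n - (Id + W) z_n||^2."""
    return np.mean(np.sum((xs - (zs + zs @ W.T)) ** 2, axis=1))

# "Training" the linear stand-in: fit x ~ z @ M by least squares and set
# Id + W = M^T; since M = Id is feasible, the optimum cannot be worse than W = 0.
M, *_ = np.linalg.lstsq(zs, xs, rcond=None)
W_opt = M.T - np.eye(3)
print(risk(W_opt) <= risk(np.zeros((3, 3))))  # True
```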
In particular, they even lack data consistency in the sense that there is no control over the proximity between the reconstruction $\mathbf{R}_{\alpha, \theta} (y^\delta)$ and the data $y^\delta$, which limits applicability in safety-critical applications such as medical imaging. To enforce data consistency, several approaches integrating the forward operator into the network architecture have been proposed, including variational networks, iterative networks, and network cascades [@genzel2023; @hammernik2018learning; @kofler2018; @schlemper2018deep; @yiasemis2022recurrent]. However, such architectures still do not automatically provide theoretical reconstruction guarantees. A strategy to overcome this issue has been presented in [@schwab2019deepnullspacelearning; @Schwab2020], where the use of so-called null-space networks has been proposed. Null-space networks are a special form of [\[eq:res-net\]](#eq:res-net){reference-type="eqref" reference="eq:res-net"} where $\mathbf{W}_\theta$ is restricted to have values only in the kernel of $\mathbf{A}$. It can be taken as $\mathbf{W}_\theta = P_{\mathcal{N}(\mathbf{A})} \mathbf{U}_\theta$, where $(\mathbf{U}_\theta)_{\theta \in \Theta}$ is any architecture. In [@schwab2019deepnullspacelearning] it has been shown that null-space networks yield a provably convergent regularization method for [\[eq:ip\]](#eq:ip){reference-type="eqref" reference="eq:ip"} adapted to the training data. As a drawback, they only modify the initial reconstruction $\mathbf{B}_\alpha(y^\delta)$ on the kernel $\mathcal{N}(\mathbf{A})$ and keep the part in the complement unchanged. Moreover, only linear $\mathbf{B}_\alpha$ have been included in the analysis in [@schwab2019deepnullspacelearning]. The regularizing networks of [@Schwab2020] relax the null-space assumption, but the design of suitable architectures is less obvious.
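The defining property of a null-space network can be checked directly: since $\mathbf{A}P_{\mathcal{N}(\mathbf{A})} = 0$, the learned update never changes the data part of the initial reconstruction. A minimal Python sketch (a random toy operator, with $\mathbf{U}_\theta$ replaced by an arbitrary nonlinear map, since the property holds for any choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# A with a nontrivial kernel: 2x4, so dim N(A) = 2.
A = rng.standard_normal((2, 4))
A_pinv = np.linalg.pinv(A)
P_null = np.eye(4) - A_pinv @ A        # orthogonal projector onto N(A)

def U_theta(z):
    """Hypothetical stand-in for the learned network U_theta."""
    return np.tanh(z) + 0.5 * z

def nullspace_net(z):
    """Null-space network (Id + P_null U_theta): only the kernel part changes."""
    return z + P_null @ U_theta(z)

z = rng.standard_normal(4)             # e.g. an initial reconstruction B_alpha(y)
# A applied before and after the learned update agrees, because A P_null = 0:
# the data part of the initial reconstruction is preserved exactly.
print(np.allclose(A @ nullspace_net(z), A @ z))  # True
```

This exact invariance of the data part is what lets null-space networks inherit the convergence properties of the initial method $\mathbf{B}_\alpha$.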
## Main contributions In this paper we propose and analyze an architecture which allows an update of the component in the complement of the null-space, but whose effect in the data domain is bounded by the noise level. The general architecture for which we develop a rigorous analysis takes the form of the residual network [\[eq:res-net\]](#eq:res-net){reference-type="eqref" reference="eq:res-net"} with $$\label{eq:data-prox-net} \mathbf{W}_{\theta, \beta} = P_{\mathcal{N}(\mathbf{A})} \mathbf{U}_\theta + \mathbf{A}^+\Phi_\beta \mathbf{A}\mathbf{V}_\theta \,,$$ where $[\mathbf{U}_\theta, \mathbf{V}_\theta]_{\theta \in \Theta}$ is an image-to-image architecture with two output channels and $\Phi_\beta$ is a function with $\lVert\Phi_\beta z\rVert \leq \beta$. We will show that data-proximal networks [\[eq:data-prox-net\]](#eq:data-prox-net){reference-type="eqref" reference="eq:data-prox-net"} together with [\[eq:res-net\]](#eq:res-net){reference-type="eqref" reference="eq:res-net"} yield a data-proximal regularization method together with convergence rates. In particular, we show rate-$r$ data proximity, which we refer to as data-error estimates, of the form $\lVert\mathbf{A}x_\alpha^\delta - y^\delta\rVert = \mathcal{O}(\delta^r)$. Note that the architecture [\[eq:data-prox-net\]](#eq:data-prox-net){reference-type="eqref" reference="eq:data-prox-net"} in particular uses an explicit decomposition into the null-space $\mathcal{N}(\mathbf{A})$ and its complement $\mathcal{N}(\mathbf{A})^\bot = \mathcal{R}(\mathbf{A}^+)$. This paper generalizes the regularization results for null-space networks of [@schwab2019deepnullspacelearning] (which are of the form [\[eq:data-prox-net\]](#eq:data-prox-net){reference-type="eqref" reference="eq:data-prox-net"} with $\mathbf{V}_\theta =0$) to data-proximal networks.
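The data-proximity built into the architecture above can be verified numerically: since $\mathbf{A}\mathbf{A}^+$ is the orthogonal projection onto the closure of $\mathcal{R}(\mathbf{A})$ and $\lVert\Phi_\beta z\rVert \leq \beta$, the correction moves the data part by at most $\beta$. In the Python sketch below, $\Phi_\beta$ is chosen as radial clipping (one admissible choice, not prescribed by the paper), and $\mathbf{U}_\theta$, $\mathbf{V}_\theta$ are arbitrary stand-ins for the trained two-channel network:

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((2, 4))        # toy operator with a nontrivial kernel
A_pinv = np.linalg.pinv(A)
P_null = np.eye(4) - A_pinv @ A        # orthogonal projector onto N(A)

beta = 0.05

def Phi_beta(z):
    """Radial clipping: one admissible choice with ||Phi_beta(z)|| <= beta."""
    nz = np.linalg.norm(z)
    return z if nz <= beta else (beta / nz) * z

def U_theta(z):                        # arbitrary stand-ins for the two
    return np.tanh(z)                  # output channels of the network

def V_theta(z):
    return 0.3 * z

def W_theta_beta(z):
    """Data-proximal correction P_null U_theta + A^+ Phi_beta A V_theta."""
    return P_null @ U_theta(z) + A_pinv @ Phi_beta(A @ V_theta(z))

z = rng.standard_normal(4)             # initial reconstruction B_alpha(y^delta)
x_rec = z + W_theta_beta(z)            # (Id + W_{theta,beta})(z)
# The data part moves by at most beta: ||A x_rec - A z|| <= beta.
print(np.linalg.norm(A @ x_rec - A @ z) <= beta + 1e-12)  # True
```

The null-space channel contributes nothing to $\mathbf{A}x$, and the second channel contributes $\mathbf{A}\mathbf{A}^+\Phi_\beta(\cdot)$, whose norm is bounded by $\beta$; choosing $\beta$ in the order of the noise level thus keeps the learned update within the data uncertainty.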
Furthermore, as opposed to [@schwab2019deepnullspacelearning; @Schwab2020], in our analysis $(\mathbf{B}_\alpha)_{\alpha>0}$ does not have to be linear and for example can be of variational type [\[eq:tik\]](#eq:tik){reference-type="eqref" reference="eq:tik"}. The idea to only learn the null-space components has also been used in [@mardani2018deep; @sonderby2016amortised; @Bubba_2019]. In the finite dimensional setting, learning the null-space and its complement has been proposed in [@chen2020deep]. A regularization approach using approximate null-space networks has been proposed in [@Schwab2020] without explicitly splitting into null-space component and complement. In contrast, our architecture also allows learning updates in the orthogonal complement of the kernel of $\mathbf{A}$ and has a specific form, which makes it easy to include data consistency. ## Outline The remainder of this paper is organized as follows. In Section [2](#sec:theory){reference-type="ref" reference="sec:theory"} we present the theoretical analysis. In particular, we introduce the background and the concept of data-proximal networks, followed by a rigorous convergence analysis. In Section [3](#sec:application){reference-type="ref" reference="sec:application"} we apply the framework to the limited-view problem in computed tomography. We test the method with FBP and TV regularization as initial reconstruction and compare it with plain null-space learning. The paper ends with Section [4](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}, where we give a short summary and discuss some generalizations and lines of potential future research. # Theory {#sec:theory} Throughout this paper let $\mathbf{A}\colon \mathbb{X}\to \mathbb{Y}$ be a bounded linear operator between separable Hilbert spaces $\mathbb{X}$ and $\mathbb{Y}$. We use $\mathcal{N}(\mathbf{A})$ and $\mathcal{R}(\mathbf{A})$ to denote the null-space and range of $\mathbf{A}$, respectively. 
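In the finite-dimensional case, these subspaces (and the projectors used throughout the paper) can be computed explicitly from the SVD. A small numpy sketch of the orthogonal splitting $\mathbb{X}= \mathcal{N}(\mathbf{A}) \oplus \mathcal{N}(\mathbf{A})^\bot$, with a made-up matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))      # wide matrix: N(A) is 2-dimensional

# SVD A = U diag(S) V^T; the rows of V^T span N(A)^perp (first r) and N(A).
_, S, Vt = np.linalg.svd(A)
r = int(np.sum(S > 1e-12))           # numerical rank
basis_perp = Vt[:r].T                # orthonormal basis of N(A)^perp = R(A^+)
basis_null = Vt[r:].T                # orthonormal basis of N(A)

x = rng.standard_normal(5)
x_null = basis_null @ (basis_null.T @ x)   # P_{N(A)} x
x_perp = basis_perp @ (basis_perp.T @ x)   # P_{N(A)^perp} x

assert np.allclose(x, x_null + x_perp)     # orthogonal decomposition
assert np.allclose(A @ x_null, 0)          # null-space part is invisible to A
```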
The inversion of [\[eq:ip\]](#eq:ip){reference-type="eqref" reference="eq:ip"} is unstable if $\mathcal{R}(\mathbf{A})$ is non-closed and non-unique if $\mathcal{N}(\mathbf{A}) \neq \{0\}$. Our goal is the stable solution of [\[eq:ip\]](#eq:ip){reference-type="eqref" reference="eq:ip"} in such situations, by combining regularization methods with learned networks. ## Data-proximal regularization In order to solve [\[eq:ip\]](#eq:ip){reference-type="eqref" reference="eq:ip"} we use regularization methods with general parameter sets, including the classical setting as well as learned reconstruction as special cases. **Definition 1** (Regularization method). *Let $\Gamma$ be an index set, $\mathbb{M}\subseteq \mathbb{X}$, $(\mathbf{R}_\gamma)_{\gamma \in \Gamma}$ a family of continuous mappings $\mathbf{R}_\gamma \colon \mathbb{Y}\to \mathbb{X}$, and $\gamma^\star \colon (0,\infty) \times \mathbb{Y}\to \Gamma$. The pair $((\mathbf{R}_\gamma)_{\gamma \in \Gamma}, \gamma^\star)$ is said to be a (convergent) regularization method for $\mathbf{A}x= y$ over $\mathbb{M}$, if $$\label{eq:reg-limit} \forall x\in \mathbb{M}\colon \quad \lim_{\delta \to 0} \Bigl( \sup \{ \lVert x- \mathbf{R}_{\gamma^\star (\delta, y^\delta)} (y^\delta)\rVert \mid y^\delta \in \mathbb{Y}\wedge \lVert y^\delta - \mathbf{A}x\rVert \leq \delta \} \Bigr) = 0 \,.$$* Classical regularization methods use $\Gamma = (0, \infty)$, in which case we denote its elements by $\alpha$ and the parameter choice by $\alpha^\star$. In this situation one usually additionally assumes $\alpha^\star (\delta, y^\delta) \to 0$ uniformly in $y^\delta$ as $\delta \to 0$. Many classical methods are further based on the pseudoinverse $\mathbf{A}^+$, where the set of limiting solutions is given by $\mathbb{M}= \mathcal{N}(\mathbf{A})^\bot = \mathcal{R}(\mathbf{A}^+)$; see for example [@engl1996regularization]. However, different limiting solutions are also frequently used, in particular in variational regularization. 
**Remark 2** (Variational regularization). *The prime example with different limiting solutions is variational regularization [\[eq:tik\]](#eq:tik){reference-type="eqref" reference="eq:tik"}, which together with $\alpha, \delta^2/\alpha\to 0$ gives a regularization method over the set of $\mathcal P$-minimizing solutions defined by $\mathop{\mathrm{arg\,min}}_{x} \{ \mathcal P(x) \mid \mathbf{A}x= y\}$ with $y\in \mathcal{R}(\mathbf{A})$. This implicitly requires unique minimizers of [\[eq:tik\]](#eq:tik){reference-type="eqref" reference="eq:tik"}. For some regularization methods, $\mathbf{B}_\alpha$ should be taken set-valued and [\[eq:reg-limit\]](#eq:reg-limit){reference-type="eqref" reference="eq:reg-limit"} adjusted accordingly (see [@benning2018modern]). For the sake of simplicity, in the presented theory we restrict ourselves to the single-valued case.* We are in particular interested in regularization methods $((\mathbf{R}_\gamma)_{\gamma \in \Gamma}, \gamma^\star)$ that are rate-$r$ data proximal in the following sense. **Definition 3** (Data-proximal regularization method). *Let $r\in (0,1]$. A regularization method $((\mathbf{R}_\gamma)_{\gamma \in \Gamma}, \gamma^\star)$ for $\mathbf{A}x= y$ over $\mathbb{M}$ is called rate-$r$ data proximal, if for some $\tau >0$, $$\label{eq:rate-r} \forall x\in \mathbb{M}\; \forall y^\delta \in \mathbb{Y}\colon \quad \lVert y^\delta - \mathbf{A}x\rVert \leq \delta \Rightarrow \lVert y^\delta - \mathbf{A}\mathbf{R}_{\gamma^\star (\delta, y^\delta)} (y^\delta)\rVert \leq \tau \delta^r \,.$$* Data proximity of a regularization method seems to be a reasonable condition, as the true solution is known to satisfy the data proximity condition $\lVert y^\delta - \mathbf{A}x\rVert \leq \delta$. Thus any potential reconstruction without data proximity lacks the only information provided by the noisy data $y^\delta$. Even though this is an important property, we are not aware of an explicit definition in the literature. 
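As an illustration of the definition, rate-$1/2$ data proximity of Tikhonov regularization $\mathbf{B}_\alpha= (\mathbf{A}^*\mathbf{A}+ \alpha \,\mathrm{Id})^{-1}\mathbf{A}^*$ with the a-priori choice $\alpha \sim \delta$ can be checked numerically: minimality of the Tikhonov functional gives $\lVert\mathbf{A}\mathbf{B}_\alpha y^\delta - y^\delta\rVert^2 \leq \delta^2 + \alpha \lVert x\rVert^2$, so $\tau = 1 + \lVert x\rVert$ works for $\delta \leq 1$. A toy sketch (the ill-conditioned operator and the noise below are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
# Ill-conditioned toy operator with singular values decaying to 1e-4.
A = np.linalg.qr(rng.standard_normal((n, n)))[0] @ np.diag(np.geomspace(1, 1e-4, n))
x_true = rng.standard_normal(n)
y = A @ x_true

def tikhonov(y_delta, alpha):
    """B_alpha y = (A^T A + alpha I)^{-1} A^T y (normal equations)."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

tau = 1.0 + np.linalg.norm(x_true)
for delta in [1e-1, 1e-2, 1e-3]:
    e = rng.standard_normal(n)
    y_delta = y + delta * e / np.linalg.norm(e)   # ||y_delta - A x_true|| = delta
    x_rec = tikhonov(y_delta, alpha=delta)        # a-priori choice alpha ~ delta
    residual = np.linalg.norm(A @ x_rec - y_delta)
    # rate-1/2 data proximity: residual <= tau * delta^(1/2)
    assert residual <= tau * np.sqrt(delta)
```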
This may partially be due to the fact that it is automatically satisfied by common regularization methods. For example, rate-$1$ data proximity is satisfied by filter based methods under the source condition $x^+\in \mathcal{R}((\mathbf{A}^* \mathbf{A})^\mu)$ for any $\mu > 0$, as well as for variational regularization under the source condition $\partial \mathcal P(x^+) \in \mathcal{R}(\mathbf{A}^*)$. The following example shows that for filter based methods rate-$r$ data proximity for all $r < 1$ even holds without a source condition. **Example 4** (Data proximity without source condition). *Consider a filter based regularization method $((\mathbf{B}_\alpha)_{\alpha>0}, \alpha^\star)$ where $\mathbf{B}_\alpha= g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^*$ for filter functions $g_\alpha\colon \mathbb{R}\to \mathbb{R}$ and $\alpha^\star, \delta^2/\alpha^\star \to 0$; see [@engl1996regularization] for precise definitions. Then $$\begin{aligned} \lVert\mathbf{A}\mathbf{B}_\alpha y^\delta - y^\delta\rVert &= \lVert(\mathbf{A}g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^* - \mathrm{Id}_\mathbb{Y}) (y^\delta)\rVert \\ &\leq \lVert(\mathbf{A}g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^* - \mathrm{Id}_\mathbb{Y})( \mathbf{A}x- y^\delta) \rVert + \lVert(\mathbf{A}g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^* - \mathrm{Id}_\mathbb{Y}) \mathbf{A}(x)\rVert \\ &\leq \lVert\mathbf{A}g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^* - \mathrm{Id}_\mathbb{Y}\rVert \, \delta + \lVert\mathbf{A}( g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^*\mathbf{A}- \mathrm{Id}_\mathbb{X})\rVert \, \lVert x\rVert \\ &\leq \lVert\mathbf{A}g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^* - \mathrm{Id}_\mathbb{Y}\rVert \, \delta + \alpha^{1/2} \, \lVert x\rVert \,,\end{aligned}$$ where the latter inequality uses that the filter has at least qualification $1/2$. 
Noting that $\lVert\mathbf{A}g_\alpha(\mathbf{A}^*\mathbf{A}) \mathbf{A}^* - \mathrm{Id}_\mathbb{Y}\rVert$ is bounded for any filter, this shows that for all $r < 1$ and $R >0$, with $\alpha^\star \asymp \delta^{2r}$ and $\lVert x\rVert \leq R$ we get $\lVert\mathbf{A}\mathbf{B}_{\alpha^\star} y^\delta - y^\delta\rVert \leq \tau \delta^r$.* Variational regularization approximates solutions of $\mathbf{A}x= y$ with minimal value of $\mathcal P$. In particular, for $\mathcal P= \lVert\cdot\rVert^2/2$ this minimal norm solution is given by the Moore-Penrose inverse $\mathbf{A}^+(y) \in \ker(\mathbf{A})^\perp$. The same holds true for other spectral filtering methods. The concept of null-space networks [@schwab2019deepnullspacelearning] addresses potentially suboptimal solution selection by approximating elements in a general set parameterized by $\mathcal{R}(\mathbf{A}^+)$. **Example 5** (Null-space networks). *The regularizing null-space networks analyzed in [@schwab2019deepnullspacelearning] take the form $$\label{eq:null-space} \forall \alpha>0 \colon \qquad \mathbf{R}_\alpha\coloneqq (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U})\circ \mathbf{B}_\alpha\,,$$ where $((\mathbf{B}_\alpha)_{\alpha>0},\alpha^\star)$ is a regularization method with admissible set $\mathcal{R}(\mathbf{A}^+)$, and $\mathbf{U}$ is a Lipschitz function. The function $\mathbf{U}$ can be selected from any network architecture $(\mathbf{U}_\theta)_{\theta \in \Theta}$ based on training data $x_1, \dots, x_N$. Apart from being Lipschitz, no other assumptions are required from a theoretical point of view. In [@schwab2019deepnullspacelearning] it is shown that $((\mathbf{R}_\alpha)_{\alpha>0},\alpha^\star)$ is a regularization method with $\mathbb{M}\coloneqq ( \mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (\mathcal{R}(\mathbf{A}^+) )$. 
Further, $\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}$ preserves data proximity of $\mathbf{B}_\alpha$ in the sense that $\lVert\mathbf{A}\mathbf{B}_\alpha(y^\delta) - y^\delta\rVert \leq \tau \delta^r \Rightarrow \lVert\mathbf{A}\mathbf{R}_\alpha(y^\delta) - y^\delta\rVert \leq \tau \delta^r$.* Data consistency of null-space networks comes at the cost that the component of $\mathbf{B}_\alpha(y^\delta)$ in the range $\mathcal{R}(\mathbf{A}^+)$ remains unchanged by $( \mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U})$. Allowing a network to also act in $\mathcal{R}(\mathbf{A}^+)$ is however beneficial, if the forward operator $\mathbf{A}$ contains many small singular values. How to obtain data-consistent regularizations for networks that also act in $\mathcal{R}(\mathbf{A}^+)$ is studied in the present paper. ## Data-proximal networks Throughout this section assume that we are given a data-proximal, potentially non-linear, regularization method $((\mathbf{B}_\alpha)_{\alpha>0}, \alpha^\star)$. **Definition 6** (Data-proximal null-space networks). *Let $[\mathbf{U}_\theta, \mathbf{V}_\theta]_{\theta \in \Theta}$ be a family of Lipschitz mappings $\mathbf{U}_\theta, \mathbf{V}_\theta \colon \mathbb{X}\to \mathbb{X}$ and $(\Phi_\beta)_{\beta >0}$ a family of mappings $\Phi_\beta \colon \mathcal{R}(\mathbf{A}) \to \mathcal{R}(\mathbf{A})$ such that $\forall \beta >0$ $\forall z \in \mathcal{R}(\mathbf{A}) \colon \lVert\Phi_\beta z\rVert \leq \beta$. 
We call the family of mappings $$\label{eq:data-prox} \mathbf{D}_{\theta, \beta} \coloneqq \mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \circ \mathbf{U}_\theta + \mathbf{A}^+\circ \Phi_\beta \circ \mathbf{A}\circ \mathbf{V}_\theta$$ the data-proximal null-space network defined by $\mathbf{U}_\theta, \mathbf{V}_\theta, \Phi_\beta, \mathbf{A}$.* Note that for the special case $\mathbf{V}_\theta =0$ we obtain a null-space network $\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \circ \mathbf{U}_\theta$. The latter obeys strict data consistency in the sense that $\mathbf{A}\circ ( \mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \circ \mathbf{U}_\theta) = \mathbf{A}$. Data-proximal networks relax the strict data consistency to the data proximity condition $$\lVert\mathbf{A}\circ (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \circ \mathbf{U}_\theta + \mathbf{A}^+\circ \Phi_\beta \circ \mathbf{A}\circ \mathbf{V}_\theta) (x) - \mathbf{A}x\rVert = \lVert( \Phi_\beta \circ \mathbf{A}\circ \mathbf{V}_\theta)( x)\rVert \leq \beta \,.$$ In particular, if $\lVert\mathbf{A}x- y^\delta\rVert \leq \delta$, then $\lVert\mathbf{A}\mathbf{D}_{\theta, \beta} x- y^\delta\rVert \leq \delta + \beta$ independent of the selected $\theta$. Any reconstruction method without such an estimate seems unreasonable, as $\lVert\mathbf{A}x- y^\delta\rVert \leq \delta$ is the information provided by the noisy data and therefore should be respected. **Remark 7** (Special cases). *The proposed architecture includes many image reconstruction methods as special cases:* - **Classical regularization:* With $\mathbf{U}_\theta = \mathbf{V}_\theta=0$ we have classical regularization $\mathbf{R}_\alpha= \mathbf{B}_\alpha$. For example, in convex variational regularization the elements $\mathbf{B}_\alpha(y^\delta)$ converge to $\mathcal P$-minimizing solutions [@scherzer2009variational]. 
No trained network can be included (except of course for learning the regularizer and the regularization parameter).* - **Standard residual networks:* With $\mathbf{U}_\theta = \mathbf{V}_\theta$ and $\Phi_\beta = \mathrm{Id}_\mathbb{Y}$ (thus formally using $\beta = \infty$) and $\mathbf{B}_\alpha= \mathbf{A}^+$ we obtain the residual network $\mathbf{R}_\theta = (\mathrm{Id}_\mathbb{X}+ \mathbf{U}_\theta ) \circ \mathbf{A}^+$ of [@jin2017deep; @lee2017deep]. However, it lacks data consistency, for which we need $\beta < \infty$.* - **Regularized null-space networks:* With $\mathbf{V}_\theta=0$, $\mathbf{U}_\theta = \mathbf{U}$ and $\mathbf{B}_\alpha$ a regularization for $\mathbf{A}^+$ we obtain the regularized null-space networks $\mathbf{R}_\alpha= (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) \circ \mathbf{B}_\alpha$ of [@schwab2019deepnullspacelearning]. This network architecture does not allow learning anything orthogonal to the null-space.* - **Range-nullspace decomposition:* With $\Phi_\beta = \mathop{\mathrm{id}}$ and $\mathbf{B}_\alpha= \mathbf{A}^+$ we obtain the range-nullspace decomposition $\mathbf{R}_\theta = (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})}\mathbf{U}_\theta + P_{\mathcal{R}(\mathbf{A}^+)} \mathbf{V}_\theta) \circ \mathbf{A}^+$ considered in [@chen2020deep]. This architecture does not include data consistency and moreover $\mathbf{A}^+$ might be unstable.* - **Regularizing networks:* With $\mathbf{V}_\theta = \mathbf{U}_\theta$, a proper choice $\theta = \theta(\alpha)$, taking $\mathbf{B}_\alpha$ as filter based regularization and under certain convergence properties, we get the regularizing networks $\mathbf{R}_\alpha= (\mathrm{Id}_\mathbb{X}+ \mathbf{U}_{\theta(\alpha)} ) \circ \mathbf{B}_\alpha$ of [@Schwab2020]. This architecture does not explicitly include data consistency.* *Thus our architecture might be seen as a generalization of the ones in [@Schwab2020; @chen2020deep]. 
Extending [@chen2020deep] we allow $\mathbf{A}^+$ to be replaced by a regularization $\mathbf{B}_\alpha$, include the data proximity function in the range network $P_{\mathcal{R}(\mathbf{A}^+)} \mathbf{V}_\theta$ and allow the ill-posed case. We extend [@Schwab2020] by treating the range and null-space components separately, including the data proximity function in the range network and treating $\theta$ as an independent parameter in the architecture.* Extending the concept of regularizing null-space networks, our aim is to show that $\mathbf{R}_{\alpha, \beta, \theta} = \mathbf{D}_{\beta, \theta } \circ \mathbf{B}_\alpha$ yields a convergent data-proximal regularization method in the sense of Definitions [Definition 1](#def:regularization){reference-type="ref" reference="def:regularization"} and [Definition 3](#def:regularization-DP){reference-type="ref" reference="def:regularization-DP"} with parameter selections $\alpha^\star, \beta^\star, \theta^\star$. Our strategy for ensuring this is simple. Starting with a rate-$r$ data-proximal regularization method $((\mathbf{B}_\alpha)_{\alpha>0}, \alpha^\star)$, we select $\theta^\star$ and $\beta^\star$ such that convergence is preserved, however possibly to an element different from the $x^\star$ selected by a limiting null-space network. The network $\mathbf{A}^+\circ \Phi_\beta \circ \mathbf{A}\circ \mathbf{V}_\theta$ is especially relevant in the noisy case in order to obtain improved denoising properties on specific sets, and $\Phi_\beta$ is used to preserve data proximity. Note that the parameter $\beta$ in the data-proximal null-space network directly allows to control the data proximity between any $x$ and $\mathbf{D}_{\theta, \beta} (x)$. Opposed to $\theta$, it is not intended to be subject to the training process. 
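The central mechanism of the convergence analysis, namely that composing any initial reconstruction with $\mathbf{D}_{\theta,\beta}$ worsens the data residual by at most $\beta$, can be sanity-checked on a toy problem. All ingredients below (random operator, placeholder networks, Tikhonov as $\mathbf{B}_\alpha$) are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 12
A = rng.standard_normal((m, n))     # wide operator: nontrivial null-space
A_pinv = np.linalg.pinv(A)
P_null = np.eye(n) - A_pinv @ A

def phi_beta(z, beta):
    """Radial clipping with ||phi_beta(z)|| <= beta."""
    nz = np.linalg.norm(z)
    return z if nz <= beta else beta * z / nz

U = np.tanh                         # stand-ins for trained networks
V = np.sin

def D(x, beta):
    """Data-proximal null-space network D_{theta,beta}."""
    return x + P_null @ U(x) + A_pinv @ phi_beta(A @ V(x), beta)

def B(y_delta, alpha):
    """Tikhonov as the initial reconstruction B_alpha."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

x_true = rng.standard_normal(n)
delta, beta, alpha = 1e-2, 1e-2, 1e-2
e = rng.standard_normal(m)
y_delta = A @ x_true + delta * e / np.linalg.norm(e)

z = B(y_delta, alpha)
res_B = np.linalg.norm(A @ z - y_delta)
res_R = np.linalg.norm(A @ D(z, beta) - y_delta)
assert res_R <= res_B + beta + 1e-9   # data proximity is inherited up to beta
```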
## Convergence analysis Throughout this section let $((\mathbf{B}_\alpha)_{\alpha>0}, \alpha^\star)$ be a regularization method over $\mathbb{M}$ and $\mathbf{D}_{\theta, \beta}$ be a data-proximal null-space network defined by $\mathbf{U}_\theta, \mathbf{V}_\theta, \Phi_\beta, \mathbf{A}$. The goal is to show that $\mathbf{D}_{\theta, \beta} \circ \mathbf{B}_\alpha$ gives a convergent (data-proximal) regularization method with rates. **Theorem 8** (Convergence). *Suppose there exist a Lipschitz function $\mathbf{U}\colon \mathbb{X}\to \mathbb{X}$ and parameter choices $\beta^\star = \beta^\star(\delta, y^\delta)$, $\theta^\star = \theta^\star(\delta, y^\delta)$ such that $(\mathbf{D}_{\theta, \beta})_{\theta, \beta}$ are uniformly Lipschitz on bounded sets and $$\label{eq:conv-cond} \forall z \in \mathbb{M}\colon \mathbf{D}_{\theta^\star, \beta^\star} (z) \to (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (z) \quad \text{ as } \delta \to 0 \,.$$ Then with $(\mathbf{R}_\gamma)_\gamma \coloneqq (\mathbf{D}_{\theta, \beta} \circ \mathbf{B}_\alpha)_{\alpha, \beta, \theta}$ and $\gamma^\star \coloneqq (\alpha^\star, \beta^\star, \theta^\star)$, the following hold:* 1. *$( (\mathbf{R}_\gamma)_{\gamma}, \gamma^\star)$ is a convergent regularization method on $(\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (\mathbb{M})$.* 2. *If $\beta^\star = \mathcal{O} (\delta^r)$, then $( (\mathbf{R}_\gamma)_{\gamma}, \gamma^\star)$ is rate-$r$ data proximal, provided $((\mathbf{B}_\alpha)_\alpha, \alpha^\star)$ is.* *Proof.* Let $x^\star = (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U})(z^\star)$ with $z^\star \in \mathbb{M}$. 
Then, with $L$ denoting a uniform Lipschitz constant of $(\mathbf{D}_{\beta, \theta})$, $$\begin{aligned} \lVert x^\star - \mathbf{R}_{\gamma} (y^\delta)\rVert & = \lVert(\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U})(z^\star) - \mathbf{D}_{\beta, \theta} (\mathbf{B}_\alpha y^\delta)\rVert \\ & \leq \lVert \mathbf{D}_{\beta, \theta} (\mathbf{B}_\alpha(y^\delta)) - \mathbf{D}_{\beta, \theta} (z^\star)\rVert + \lVert (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (z^\star) - \mathbf{D}_{\beta, \theta} ( z^\star)\rVert \\& \leq L \lVert\mathbf{B}_\alpha(y^\delta) - z^\star\rVert + \lVert (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (z^\star) - \mathbf{D}_{\beta, \theta} ( z^\star)\rVert \,. \end{aligned}$$ With the convergence of $((\mathbf{B}_\alpha)_{\alpha>0}, \alpha^\star)$ and [\[eq:conv-cond\]](#eq:conv-cond){reference-type="eqref" reference="eq:conv-cond"} this shows the convergence of $( (\mathbf{R}_\gamma)_{\gamma}, \gamma^\star)$. Now let $((\mathbf{B}_\alpha)_\alpha, \alpha^\star)$ be rate-$r$ data proximal and $\beta^\star = \mathcal{O} (\delta^r)$. By the definition of $\mathbf{R}_{\gamma}$ we have $\lVert y^\delta - \mathbf{A}\mathbf{R}_{\gamma} (y^\delta)\rVert \leq \lVert y^\delta - \mathbf{A}\mathbf{B}_{\alpha} (y^\delta)\rVert + \beta$, which gives the rate-$r$ data proximity of $((\mathbf{R}_\gamma)_\gamma, \gamma^\star)$. ◻ Under additional assumptions we also obtain convergence rates. **Theorem 9** (Convergence Rates). *In the situation of Theorem [Theorem 8](#thm:convergence){reference-type="ref" reference="thm:convergence"}, let $((\mathbf{B}_\alpha)_{\alpha>0}, \alpha^\star)$ be a rate-$r$ data-proximal regularization method over $\mathbb{M}_s \subseteq \mathbb{M}$ that is convergent of rate $s$. 
Then under the approximation assumption $\lVert (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (z^\star) - \mathbf{D}_{\beta, \theta} ( z^\star)\rVert = \mathcal{O} (\delta^r)$ on $\mathbb{M}_s$, for $\beta^\star = \mathcal{O} (\delta^r)$ and $x^\star \in (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U} ) (\mathbb{M}_s)$ with $\lVert y^\delta - \mathbf{A}x^\star\rVert \leq \delta$, we have $$\begin{aligned} \label{eq:rate1} \lVert x^\star - \mathbf{R}_{\gamma} (y^\delta)\rVert &= \mathcal{O} (\delta^s) \quad \text{as $\delta \to 0$} \,,\\ \label{eq:rate2} \lVert y^\delta - \mathbf{A}\mathbf{R}_{\gamma} (y^\delta)\rVert &= \mathcal{O} (\delta^r) \quad \text{as $\delta \to 0$} \,.\end{aligned}$$ That is, the regularization method $( (\mathbf{R}_\gamma)_{\gamma}, \gamma^\star)$ is rate-$r$ data proximal and rate-$s$ convergent on $(\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (\mathbb{M}_s)$.* *Proof.* Condition [\[eq:rate2\]](#eq:rate2){reference-type="eqref" reference="eq:rate2"} follows from Theorem [Theorem 8](#thm:convergence){reference-type="ref" reference="thm:convergence"}. Moreover, according to the proof of that theorem we have $\lVert x^\star - \mathbf{R}_{\gamma} (y^\delta)\rVert \leq L \lVert\mathbf{B}_\alpha(y^\delta) - z^\star\rVert + \lVert (\mathrm{Id}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}) (z^\star) - \mathbf{D}_{\beta, \theta} ( z^\star)\rVert$. This gives the claim by the parameter choice and the approximation assumption made above. ◻ # Application {#sec:application} In this section we present a numerical example for our proposed data-proximal regularization approach. We consider limited angle computed tomography (CT) modeled by the Radon transform as forward problem. 
## The Radon transform The Radon transform of a compactly supported smooth function $u \colon \mathbb{R}^2 \to \mathbb{R}$ is defined by $\mathbf{K}u(\theta, s) \coloneqq \int_{L(\theta, s)} u(x) \mathop{}\!\mathrm{d}L(x)$ for $(\theta, s) \in [-\pi/2, \pi/2) \times \mathbb{R}$. Here $L(\theta, s) \coloneqq \{(x_1, x_2)\in \mathbb{R}^2 \mid x_1 \cos (\theta) + x_2 \sin (\theta) = s\}$ denotes the line in $\mathbb{R}^2$ with signed distance $s\in\mathbb{R}$ from the origin and normal direction $(\cos (\theta), \sin(\theta))^T$ with $\theta \in [-\pi/2, \pi/2)$. In limited angle CT, the data is only known within a limited subset $\Omega \subseteq [-\pi/2,\pi/2)$ of the full angular range. The limited angle Radon transform is then defined as $$\begin{aligned} \mathbf{K}_\Omega \colon \mathcal{D}(\mathbf{K}_\Omega ) \subseteq L^2(\mathbb{R}^2) \to L^2(S^1\times \mathbb{R}) \colon u \mapsto \chi_{\Omega \times \mathbb{R}} \mathbf{K}u.\end{aligned}$$ The well known filtered back-projection (FBP) inversion formula for the full data Radon transform reads $u = \mathbf{K}^* \mathbf{I}(\mathbf{K}u)$, where $\mathbf{I}$ is the so-called Riesz potential, defined in the Fourier domain by $\mathbf{F}_2 (\mathbf{I}u) \coloneqq \lVert\cdot\rVert (\mathbf{F}_2 u) / (4\pi)$, where $\mathbf{F}_2$ is the Fourier transform in the second component; see [@natterer2001mathematics]. The application of the FBP formula to limited angle data is known to cause prominent streak artifacts which can obscure important information [@Quinto93; @quinto2017artifacts]. While these artifacts have been characterized by methods from microlocal analysis [@frikel2013characterization; @frikelquinto2016; @Borg2018], finding suitable reconstruction strategies is still an ongoing challenge. Thus, we will employ our proposed data-proximal null-space network to obtain a reliable and data-proximal reconstruction. 
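For intuition, the line integrals defining $\mathbf{K}$ can be approximated directly by sampling along $L(\theta, s)$. The following sketch (grid size and quadrature are arbitrary choices, not the discretization used in the experiments) verifies that the Radon transform of the indicator of a centred disk of radius $1/2$ equals the chord length $2\sqrt{1/4 - s^2}$, independently of the angle:

```python
import numpy as np

def radon_line(img, theta, s, n_quad=801):
    """Approximate the integral of img over L(theta, s) by sampling."""
    n = img.shape[0]
    t = np.linspace(-1.0, 1.0, n_quad)            # parameter along the line
    x = s * np.cos(theta) - t * np.sin(theta)     # points on L(theta, s)
    y = s * np.sin(theta) + t * np.cos(theta)
    # nearest-neighbour lookup on the pixel grid over [-1, 1]^2
    i = np.clip(((x + 1) / 2 * (n - 1)).round().astype(int), 0, n - 1)
    j = np.clip(((y + 1) / 2 * (n - 1)).round().astype(int), 0, n - 1)
    dl = t[1] - t[0]
    return img[i, j].sum() * dl

# Indicator of a centred disk of radius 1/2: K u(theta, s) = 2 sqrt(1/4 - s^2).
n = 401
xx, yy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
disk = (xx**2 + yy**2 <= 0.25).astype(float)

angles = np.linspace(-np.pi / 2, np.pi / 2, 5, endpoint=False)
vals = [radon_line(disk, th, 0.2) for th in angles]
exact = 2 * np.sqrt(0.25 - 0.2**2)
assert max(abs(v - exact) for v in vals) < 0.05   # angle-independent chord length
```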
In our simulations we use synthetic Shepp-Logan type phantoms supported within the ball of radius one, where $u$ is represented by a discrete image $x\in \mathbb{R}^{N \times N}$ with $N = 128$. To obtain a discretized version of the forward operator, we evaluate the limited angle Radon transform at $N_s = 128$ equidistant distances in $[-1, 1]$ and $N_\Omega = 120$ equidistant angles in $[-\pi/3, \pi/3)$. More details on the implementation of the discretized version of the Radon transform which we used in our experiments can be found in the repository <https://github.com/drgHannah/Radon-Transformation>. The discretized limited angle Radon transform and FBP formula are denoted by $\mathbf{A}$ and $\mathbf{A}^\sharp$, respectively. ## Network design and training Throughout all numerical calculations, the network architectures for $\mathbf{U}_\theta$ and $\mathbf{V}_\theta$ are taken as the basic U-net [@ronneberger2015], which is still considered a state-of-the-art model due to its ability to reliably learn image features. Based on the U-net we then consider the architectures $$\begin{aligned} \mathbf{M}_\theta^{(1)} &= \mathop{\mathrm{id}}_\mathbb{X}+ \mathbf{U}_\theta \,, \\ \mathbf{M}_\theta^{(2)} &= \mathop{\mathrm{id}}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}_\theta \,, \\ \mathbf{M}_\theta^{(3)} &= \mathop{\mathrm{id}}_\mathbb{X}+ P_{\mathcal{N}(\mathbf{A})} \mathbf{U}_\theta + \mathbf{A}^\sharp \Phi_\beta \mathbf{A}\mathbf{V}_\theta \,, \end{aligned}$$ where $\mathbf{M}_\theta^{(1)}$ is the plain residual U-net, $\mathbf{M}_\theta^{(2)}$ the null-space architecture and $\mathbf{M}_\theta^{(3)}$ the proposed data-proximal null-space network. The data-proximal network uses shared weights and divides the output into two streams via projections onto the kernel and the orthogonal complement, respectively. 
The data-proximity function is taken as the radial function $$\label{eq:DPexample} \Phi_\beta (x) \coloneqq \begin{cases} x, & \lVert x\rVert_2 \leq \beta, \\ \beta \cdot x/ \lVert x\rVert_2, & \text{else} \,. \end{cases}$$ In the numerical simulations we use $\beta \coloneqq \delta \sum_{i=1}^N \lVert\eta_i\rVert_2/N$. This way, we obtain an estimate of the magnitude of the perturbations present in the data domain. As initial reconstruction method we use the FBP operator as well as total-variation (TV) regularization, which is known to be a good prior for the missing data setup [@wang2017; @velikina2007; @sidky2008; @Persson2001]. The family $(\mathbf{B}_\alpha)_{\alpha>0}$ is then given by $\mathbf{B}_\alpha y^\delta \coloneqq \mathop{\mathrm{arg\,min}}_{x} \lVert\mathbf{A}x- y^\delta\rVert_2^2/2 + \alpha\lVert\nabla x\rVert_1$ and numerically solved with the Chambolle-Pock algorithm [@Chambolle2011]. For training the networks we generate data pairs $(x_i, y_i^\delta)_{i=1}^{600}$ with $y_i^\delta =\mathbf{A}x_i + \delta \eta_i$, where $\delta = 0.05$ and $\eta_i \sim \lVert\mathbf{A}x_i\rVert_\infty \cdot \mathcal{N}(0,1)$. All networks $\mathbf{M}^{(i)}_\theta$ are trained by minimizing $$\label{eq:loss} \mathcal{L}_N (\theta) \coloneqq \frac{1}{N} \sum_{i=1}^N \lVert \mathbf{M}_\theta \mathbf{B}_\alpha (y_i^\delta) - x_i\rVert_2^2$$ with the Adam optimizer with a learning rate of $0.001$. We trained each network for a total of $50$ epochs, and chose the learned network parameters with minimal validation error during training as our final network weights. We split our dataset into $500$ training and $100$ test samples. For further implementation details regarding our experiments we refer to our github repository <https://github.com/sgoep/data_proximal_networks>. 
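The clipping function [\[eq:DPexample\]](#eq:DPexample){reference-type="eqref" reference="eq:DPexample"} is simply the projection onto the closed $\beta$-ball in the data domain; together with the noise-based choice of $\beta$ it can be sketched as follows (the noise realizations below are made up for illustration, not the training data of the experiments):

```python
import numpy as np

rng = np.random.default_rng(3)

def phi_beta(x, beta):
    """Radial clipping of eq. (DPexample): identity inside the beta-ball,
    projection onto its boundary outside."""
    nx = np.linalg.norm(x)
    return x if nx <= beta else beta * x / nx

# beta estimated from the magnitude of noise realizations,
# beta = delta * sum_i ||eta_i|| / N (all values here are made up).
delta = 0.05
etas = [rng.standard_normal(100) for _ in range(20)]
beta = delta * np.mean([np.linalg.norm(e) for e in etas])

z = rng.standard_normal(100)                   # a large data-space perturbation
assert np.linalg.norm(phi_beta(z, beta)) <= beta + 1e-12
small = 0.5 * beta * z / np.linalg.norm(z)
assert np.allclose(phi_beta(small, beta), small)   # unchanged inside the ball
```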
## Results For the presented results we write $x^\star$ for the ground truth image, $x_{\rm FBP}$ for the FBP reconstruction, $x_{\rm TV}$ for the TV-regularized solution, and add the superscripts ${\rm RES}$, ${\rm NSN}$, ${\rm DP}$ for a subsequent residual network, null-space network and data-proximal network, respectively. All reconstructions are compared to the ground truth via the mean squared error (MSE), the peak-signal-to-noise-ratio (PSNR) and the structural similarity index measure (SSIM). Reconstruction results are shown in Figure [\[fig:recon\]](#fig:recon){reference-type="ref" reference="fig:recon"}. We see that all data-driven modalities overall yield rather good results. Looking closely at the fine grid-like features in the magnified section, we can observe that the reconstructions shown in Figures [\[subfig:FBP-res\]](#subfig:FBP-res){reference-type="ref" reference="subfig:FBP-res"}-[\[subfig:tv-nsn\]](#subfig:tv-nsn){reference-type="ref" reference="subfig:tv-nsn"} tend to differ in the extent of their expression. However, these details appear to be recovered more accurately by our proposed data proximity approach, as shown in Figure [\[subfig:TV-DP\]](#subfig:TV-DP){reference-type="ref" reference="subfig:TV-DP"}. Here, all dots are of similar intensity and shape. Furthermore, the intersecting part of the upper and left bigger ellipse-like features inside the phantom is recovered more precisely. We attribute these improvements to our data-proximal architecture, with which the output of the residual network is constrained to the correct energy level and able to converge faster to a suitable solution. A quantitative error comparison is shown in Table [1](#tab:errors){reference-type="ref" reference="tab:errors"}. We see that our proposed data proximity reconstruction performs best in the chosen metrics. This is in line with our visual inspection above. 
  Method                       MSE          PSNR          SSIM
  ---------------------------- ------------ ------------- ------------
  $x_{\text{FBP}}$             0.0137       24.6556       0.2867
  $x_{\text{TV}}$              0.0020       33.0772       0.6089
  $x_{\text{FBP}}^{\rm RES}$   0.0013       34.9414       0.8455
  $x_{\text{TV}}^{\rm RES}$    0.0010       35.7354       0.9032
  $x_{\text{FBP}}^{\rm NSN}$   0.0012       35.2030       0.8437
  $x_{\text{TV}}^{\rm NSN}$    0.0009       36.6717       0.9184
  $x_{\text{TV}}^\text{DP}$    **0.0008**   **37.1900**   **0.9265**

  : Reconstruction errors for CT reconstruction with limited angular range. The best values in each column are highlighted in bold.

# Conclusion {#sec:conclusion}

In this paper, we have introduced a provably convergent data-driven regularization strategy in terms of data-proximal networks. We have demonstrated improved reconstruction properties in our numerical experiments. These experiments were performed on synthetic phantoms and for the parallel beam geometry of the Radon transform. In particular, the data were synthetically generated and the noise model is explicitly known. Future work could focus on real world applications. It is possible to combine our approach with appropriate noise estimation techniques and different data proximity functions. More precise adaptation can be achieved by designing more problem-specific data proximity functions of a certain regularity. Analysis under random noise also appears to be an interesting line of research.

# Acknowledgement {#acknowledgement .unnumbered}

The contribution by S.G. is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847476. The views and opinions expressed herein do not necessarily reflect those of the European Commission.

J. Adler and O. Öktem. Solving ill-posed inverse problems using iterative deep neural networks. , 33(12):124007, 2017.
F. Altekrüger, A. Denker, P. Hagemann, J. Hertrich, P. Maass, and G. Steidl. : learning from very few images by patch normalizing flow regularization.
, 39(6):064006, 2023.
S. Antholzer, M. Haltmeier, and J. Schwab. Deep learning for photoacoustic tomography from sparse data. , 27(7):987--1005, 2019. PMID: 31057659.
S. Arridge, P. Maass, O. Öktem, and C.-B. Schönlieb. Solving inverse problems using data-driven models. , 28:1--174, 2019.
M. Benning and M. Burger. Modern regularization methods for inverse problems. , 27:1--111, 2018.
L. Borg, J. S. Jørgensen, J. Frikel, and E. T. Quinto. Analyzing reconstruction artifacts from arbitrary incomplete x-ray ct data. , 11(4):2786--2814, 2018.
T. A. Bubba, G. Kutyniok, M. Lassas, M. März, W. Samek, S. Siltanen, and V. Srinivasan. Learning the invisible: a hybrid deep learning-shearlet framework for limited angle computed tomography. , 35(6):064002, 2019.
A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. , 40(1):120--145, 2011.
D. Chen and M. E. Davies. Deep decomposition learning for inverse imaging problems. In *Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part XXVIII 16*, pages 510--526. Springer, 2020.
H. W. Engl, M. Hanke, and A. Neubauer. , volume 375. Springer Science & Business Media, 1996.
J. Frikel and E. T. Quinto. Characterization and reduction of artifacts in limited angle tomography. , 29(12):125007, 2013.
J. Frikel and E. T. Quinto. Limited data problems for the generalized radon transform in ${\mathbb{R}^n}$. , 48(4):2301--2318, 2016.
M. Genzel, J. Macdonald, and M. März. Solving inverse problems with deep neural networks -- robustness included? , 45(1):1119--1134, 2023.
K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll. Learning a variational network for reconstruction of accelerated MRI data. , 79(6):3055--3071, 2018.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 770--778, 2016.
K. H. Jin, M. T. McCann, E. Froustey, and M. Unser. Deep convolutional neural network for inverse problems in imaging. , 26(9):4509--4522, 2017.
E. Kang, J. Min, and J. Ye. A deep convolutional neural network using directional wavelets for low-dose x-ray ct reconstruction. , 44(10):e360--e375, 2017.
A. Kofler, M. Haltmeier, C. Kolbitsch, M. Kachelrieß, and M. Dewey. A u-nets cascade for sparse view computed tomography. In F. Knoll, A. Maier, and D. Rueckert, editors, *Machine Learning for Medical Image Reconstruction*, pages 91--99, Cham, 2018. Springer International Publishing.
D. Lee, J. Yoo, and J. C. Ye. Deep residual learning for compressed sensing mri. In *2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)*, pages 15--18, 2017.
H. Li, J. Schwab, S. Antholzer, and M. Haltmeier. solving inverse problems with deep neural networks. , 36(6):065005, 2020.
A. Majumdar. Real-time dynamic mri reconstruction using stacked denoising autoencoder, 2015.
M. Mardani, E. Gong, J. Y. Cheng, S. S. Vasanawala, G. Zaharchuk, L. Xing, and J. M. Pauly. Deep generative adversarial neural networks for compressive sensing mri. , 38(1):167--179, 2018.
F. Natterer. . SIAM, 2001.
G. Ongie, A. Jalal, C. A. Metzler, R. G. Baraniuk, A. G. Dimakis, and R. Willett. Deep learning techniques for inverse problems in imaging. , 1(1):39--56, 2020.
M. Persson, D. Bone, and H. Elmqvist. Total variation norm for three-dimensional iterative reconstruction in limited view angle tomography. , 46(3):853, 2001.
E. T. Quinto. Singularities of the X-ray transform and limited data tomography in $\mathbb{R}^2$ and $\mathbb{R}^3$. , 24(5):1215--1225, 1993.
E. T. Quinto. Artifacts and visible singularities in limited data x-ray tomography. , 18(1):1--14, 2017.
Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan. Deep learning microscopy. , 4(11):1437--1443, 2017.
O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, editors, *Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015*, pages 234--241, Cham, 2015. Springer International Publishing.
O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, and F. Lenzen. . Springer, New York, 2009.
J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. , 37(2):491--503, 2018.
J. Schwab, S. Antholzer, and M. Haltmeier. Deep null space learning for inverse problems: convergence analysis and rates. , 35(2):025008, 2019.
J. Schwab, S. Antholzer, and M. Haltmeier. Big in Japan: Regularizing networks for solving inverse problems. , 62(3):445--455, 2020.
E. Y. Sidky and X. Pan. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. , 53(17):4777--4807, 2008.
C. Sønderby, J. Caballero, L. Theis, W. Shi, and F. Huszár. Amortised map inference for image super-resolution. In *International Conference on Learning Representations*, 2017.
J. Velikina, S. Leng, and G.-H. Chen. Limited view angle tomographic image reconstruction via total variation minimization. , 6510:709--720, 2007.
G. Wang, J. C. Ye, and B. De Man. Deep learning for tomographic image reconstruction. , 2(12):737--748, 2020.
T. Wang, K. Nakamoto, H. Zhang, and H. Liu. Reweighted anisotropic total variation minimization for limited-angle ct reconstruction. , PP:2742--2760, 2017.
G. Yiasemis, J.-J. Sonke, C. Sánchez, and J. Teuwen. Recurrent variational network: A deep learning inverse problem solver applied to the task of accelerated mri reconstruction, 2022.